Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'drm-next-2019-12-06' of git://anongit.freedesktop.org/drm/drm

Pull more drm updates from Dave Airlie:
"Rob pointed out I missed his pull request for msm-next, it's been in
next for a while outside of my tree so shouldn't cause any unexpected
issues, it has some OCMEM support in drivers/soc that is acked by
other maintainers as it's outside my tree.

Otherwise it's the usual fixes pull: i915 and amdgpu are the main
ones, with some tegra, omap and mgag200 fixes plus one core fix.

Summary:

msm-next:
- OCMEM support for a3xx and a4xx GPUs.
- a510 support + display support

core:
- mst payload deletion fix

i915:
- uapi alignment fix
- fix for power usage regression due to security fixes
- change default preemption timeout to 640ms from 100ms
- EHL voltage level display fixes
- TGL DKL PHY fix
- gvt - MI_ATOMIC cmd parser fix, CFL non-priv warning
- CI spotted deadlock fix
- EHL port D programming fix

amdgpu:
- VRAM lost fixes on BACO for CI/VI
- navi14 DC fixes
- misc SR-IOV, gfx10 fixes
- XGMI fixes for arcturus
- SRIOV fixes

amdkfd:
- KFD on ppc64le enabled
- page table optimisations

radeon:
- fix for r1xx/2xx register checker.

tegra:
- displayport regression fixes
- DMA API regression fixes

mgag200:
- fix devices that can't scanout except at 0 addr

omap:
- fix dma_addr refcounting"

* tag 'drm-next-2019-12-06' of git://anongit.freedesktop.org/drm/drm: (100 commits)
drm/dp_mst: Correct the bug in drm_dp_update_payload_part1()
drm/omap: fix dma_addr refcounting
drm/tegra: Run hub cleanup on ->remove()
drm/tegra: sor: Make the +5V HDMI supply optional
drm/tegra: Silence expected errors on IOMMU attach
drm/tegra: vic: Export module device table
drm/tegra: sor: Implement system suspend/resume
drm/tegra: Use proper IOVA address for cursor image
drm/tegra: gem: Remove premature import restrictions
drm/tegra: gem: Properly pin imported buffers
drm/tegra: hub: Remove bogus connection mutex check
ia64: agp: Replace empty define with do while
agp: Add bridge parameter documentation
agp: remove unused variable num_segments
agp: move AGPGART_MINOR to include/linux/miscdevice.h
agp: remove unused variable size in agp_generic_create_gatt_table
drm/dp_mst: Fix build on systems with STACKTRACE_SUPPORT=n
drm/radeon: fix r1xx/r2xx register checker for POT textures
drm/amdgpu: fix GFX10 missing CSIB set(v3)
drm/amdgpu: should stop GFX ring in hw_fini
...

+2016 -766
+51
Documentation/devicetree/bindings/display/msm/gmu.txt
··· 31 31 - iommus: phandle to the adreno iommu 32 32 - operating-points-v2: phandle to the OPP operating points 33 33 34 + Optional properties: 35 + - sram: phandle to the On Chip Memory (OCMEM) that's present on some Snapdragon 36 + SoCs. See Documentation/devicetree/bindings/sram/qcom,ocmem.yaml. 37 + 34 38 Example: 35 39 36 40 / { ··· 65 61 iommus = <&adreno_smmu 5>; 66 62 67 63 operating-points-v2 = <&gmu_opp_table>; 64 + }; 65 + }; 66 + 67 + a3xx example with OCMEM support: 68 + 69 + / { 70 + ... 71 + 72 + gpu: adreno@fdb00000 { 73 + compatible = "qcom,adreno-330.2", 74 + "qcom,adreno"; 75 + reg = <0xfdb00000 0x10000>; 76 + reg-names = "kgsl_3d0_reg_memory"; 77 + interrupts = <GIC_SPI 33 IRQ_TYPE_LEVEL_HIGH>; 78 + interrupt-names = "kgsl_3d0_irq"; 79 + clock-names = "core", 80 + "iface", 81 + "mem_iface"; 82 + clocks = <&mmcc OXILI_GFX3D_CLK>, 83 + <&mmcc OXILICX_AHB_CLK>, 84 + <&mmcc OXILICX_AXI_CLK>; 85 + sram = <&gmu_sram>; 86 + power-domains = <&mmcc OXILICX_GDSC>; 87 + operating-points-v2 = <&gpu_opp_table>; 88 + iommus = <&gpu_iommu 0>; 89 + }; 90 + 91 + ocmem@fdd00000 { 92 + compatible = "qcom,msm8974-ocmem"; 93 + 94 + reg = <0xfdd00000 0x2000>, 95 + <0xfec00000 0x180000>; 96 + reg-names = "ctrl", 97 + "mem"; 98 + 99 + clocks = <&rpmcc RPM_SMD_OCMEMGX_CLK>, 100 + <&mmcc OCMEMCX_OCMEMNOC_CLK>; 101 + clock-names = "core", 102 + "iface"; 103 + 104 + #address-cells = <1>; 105 + #size-cells = <1>; 106 + 107 + gmu_sram: gmu-sram@0 { 108 + reg = <0x0 0x100000>; 109 + ranges = <0 0 0xfec00000 0x100000>; 110 + }; 68 111 }; 69 112 };
+2
Documentation/devicetree/bindings/display/msm/mdp5.txt
··· 76 76 Optional properties: 77 77 - clock-names: the following clocks are optional: 78 78 * "lut" 79 + * "tbu" 80 + * "tbu_rt" 79 81 80 82 Example: 81 83
+96
Documentation/devicetree/bindings/sram/qcom,ocmem.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/sram/qcom,ocmem.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: On Chip Memory (OCMEM) that is present on some Qualcomm Snapdragon SoCs. 8 + 9 + maintainers: 10 + - Brian Masney <masneyb@onstation.org> 11 + 12 + description: | 13 + The On Chip Memory (OCMEM) is typically used by the GPU, camera/video, and 14 + audio components on some Snapdragon SoCs. 15 + 16 + properties: 17 + compatible: 18 + const: qcom,msm8974-ocmem 19 + 20 + reg: 21 + items: 22 + - description: Control registers 23 + - description: OCMEM address range 24 + 25 + reg-names: 26 + items: 27 + - const: ctrl 28 + - const: mem 29 + 30 + clocks: 31 + items: 32 + - description: Core clock 33 + - description: Interface clock 34 + 35 + clock-names: 36 + items: 37 + - const: core 38 + - const: iface 39 + 40 + '#address-cells': 41 + const: 1 42 + 43 + '#size-cells': 44 + const: 1 45 + 46 + required: 47 + - compatible 48 + - reg 49 + - reg-names 50 + - clocks 51 + - clock-names 52 + - '#address-cells' 53 + - '#size-cells' 54 + 55 + patternProperties: 56 + "^.+-sram$": 57 + type: object 58 + description: A region of reserved memory. 59 + 60 + properties: 61 + reg: 62 + maxItems: 1 63 + 64 + ranges: 65 + maxItems: 1 66 + 67 + required: 68 + - reg 69 + - ranges 70 + 71 + examples: 72 + - | 73 + #include <dt-bindings/clock/qcom,rpmcc.h> 74 + #include <dt-bindings/clock/qcom,mmcc-msm8974.h> 75 + 76 + ocmem: ocmem@fdd00000 { 77 + compatible = "qcom,msm8974-ocmem"; 78 + 79 + reg = <0xfdd00000 0x2000>, 80 + <0xfec00000 0x180000>; 81 + reg-names = "ctrl", 82 + "mem"; 83 + 84 + clocks = <&rpmcc RPM_SMD_OCMEMGX_CLK>, 85 + <&mmcc OCMEMCX_OCMEMNOC_CLK>; 86 + clock-names = "core", 87 + "iface"; 88 + 89 + #address-cells = <1>; 90 + #size-cells = <1>; 91 + 92 + gmu-sram@0 { 93 + reg = <0x0 0x100000>; 94 + ranges = <0 0 0xfec00000 0x100000>; 95 + }; 96 + };
-1
MAINTAINERS
··· 862 862 F: drivers/i2c/busses/i2c-amd-mp2* 863 863 864 864 AMD POWERPLAY 865 - M: Rex Zhu <rex.zhu@amd.com> 866 865 M: Evan Quan <evan.quan@amd.com> 867 866 L: amd-gfx@lists.freedesktop.org 868 867 S: Supported
+2 -2
arch/ia64/include/asm/agp.h
··· 14 14 * in coherent mode, which lets us map the AGP memory as normal (write-back) memory 15 15 * (unlike x86, where it gets mapped "write-coalescing"). 16 16 */ 17 - #define map_page_into_agp(page) /* nothing */ 18 - #define unmap_page_from_agp(page) /* nothing */ 17 + #define map_page_into_agp(page) do { } while (0) 18 + #define unmap_page_from_agp(page) do { } while (0) 19 19 #define flush_agp_cache() mb() 20 20 21 21 /* GATT allocation. Returns/accepts GATT kernel virtual address. */
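The ia64 hunk above swaps empty macro bodies for the standard do { } while (0) idiom. A minimal standalone sketch of why that idiom is preferred for no-op statement macros (hypothetical names, not kernel code):

#define noop_empty(page)	/* nothing */
#define noop_dowhile(page)	do { } while (0)

static void demo(int cond, void *page)
{
	/*
	 * The empty form expands to a bare ";", which compilers can flag
	 * with empty-body warnings; the do/while form expands to a single
	 * well-formed statement that still requires a trailing semicolon,
	 * so call sites parse identically either way.
	 */
	if (cond)
		noop_dowhile(page);
	else
		noop_empty(page);
}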
+1 -2
drivers/char/agp/frontend.c
··· 102 102 int size, pgprot_t page_prot) 103 103 { 104 104 struct agp_segment_priv *seg; 105 - int num_segments, i; 105 + int i; 106 106 off_t pg_start; 107 107 size_t pg_count; 108 108 109 109 pg_start = offset / 4096; 110 110 pg_count = size / 4096; 111 111 seg = *(client->segments); 112 - num_segments = client->num_segments; 113 112 114 113 for (i = 0; i < client->num_segments; i++) { 115 114 if ((seg[i].pg_start == pg_start) &&
+5 -7
drivers/char/agp/generic.c
··· 207 207 /** 208 208 * agp_allocate_memory - allocate a group of pages of a certain type. 209 209 * 210 + * @bridge: an agp_bridge_data struct allocated for the AGP host bridge. 210 211 * @page_count: size_t argument of the number of pages 211 212 * @type: u32 argument of the type of memory to be allocated. 212 213 * ··· 356 355 /** 357 356 * agp_copy_info - copy bridge state information 358 357 * 358 + * @bridge: an agp_bridge_data struct allocated for the AGP host bridge. 359 359 * @info: agp_kern_info pointer. The caller should insure that this pointer is valid. 360 360 * 361 361 * This function copies information about the agp bridge device and the state of ··· 852 850 { 853 851 char *table; 854 852 char *table_end; 855 - int size; 856 853 int page_order; 857 854 int num_entries; 858 855 int i; ··· 865 864 table = NULL; 866 865 i = bridge->aperture_size_idx; 867 866 temp = bridge->current_size; 868 - size = page_order = num_entries = 0; 867 + page_order = num_entries = 0; 869 868 870 869 if (bridge->driver->size_type != FIXED_APER_SIZE) { 871 870 do { 872 871 switch (bridge->driver->size_type) { 873 872 case U8_APER_SIZE: 874 - size = A_SIZE_8(temp)->size; 875 873 page_order = 876 874 A_SIZE_8(temp)->page_order; 877 875 num_entries = 878 876 A_SIZE_8(temp)->num_entries; 879 877 break; 880 878 case U16_APER_SIZE: 881 - size = A_SIZE_16(temp)->size; 882 879 page_order = A_SIZE_16(temp)->page_order; 883 880 num_entries = A_SIZE_16(temp)->num_entries; 884 881 break; 885 882 case U32_APER_SIZE: 886 - size = A_SIZE_32(temp)->size; 887 883 page_order = A_SIZE_32(temp)->page_order; 888 884 num_entries = A_SIZE_32(temp)->num_entries; 889 885 break; ··· 888 890 case FIXED_APER_SIZE: 889 891 case LVL2_APER_SIZE: 890 892 default: 891 - size = page_order = num_entries = 0; 893 + page_order = num_entries = 0; 892 894 break; 893 895 } 894 896 ··· 918 920 } 919 921 } while (!table && (i < bridge->driver->num_aperture_sizes)); 920 922 } else { 921 - size = ((struct aper_size_info_fixed *) temp)->size; 922 923 page_order = ((struct aper_size_info_fixed *) temp)->page_order; 923 924 num_entries = ((struct aper_size_info_fixed *) temp)->num_entries; 924 925 table = alloc_gatt_pages(page_order); ··· 1279 1282 /** 1280 1283 * agp_enable - initialise the agp point-to-point connection. 1281 1284 * 1285 + * @bridge: an agp_bridge_data struct allocated for the AGP host bridge. 1282 1286 * @mode: agp mode register value to configure with. 1283 1287 */ 1284 1288 void agp_enable(struct agp_bridge_data *bridge, u32 mode)
+51 -1
drivers/firmware/qcom_scm-32.c
··· 442 442 req, req_cnt * sizeof(*req), resp, sizeof(*resp)); 443 443 } 444 444 445 + int __qcom_scm_ocmem_lock(struct device *dev, u32 id, u32 offset, u32 size, 446 + u32 mode) 447 + { 448 + struct ocmem_tz_lock { 449 + __le32 id; 450 + __le32 offset; 451 + __le32 size; 452 + __le32 mode; 453 + } request; 454 + 455 + request.id = cpu_to_le32(id); 456 + request.offset = cpu_to_le32(offset); 457 + request.size = cpu_to_le32(size); 458 + request.mode = cpu_to_le32(mode); 459 + 460 + return qcom_scm_call(dev, QCOM_SCM_OCMEM_SVC, QCOM_SCM_OCMEM_LOCK_CMD, 461 + &request, sizeof(request), NULL, 0); 462 + } 463 + 464 + int __qcom_scm_ocmem_unlock(struct device *dev, u32 id, u32 offset, u32 size) 465 + { 466 + struct ocmem_tz_unlock { 467 + __le32 id; 468 + __le32 offset; 469 + __le32 size; 470 + } request; 471 + 472 + request.id = cpu_to_le32(id); 473 + request.offset = cpu_to_le32(offset); 474 + request.size = cpu_to_le32(size); 475 + 476 + return qcom_scm_call(dev, QCOM_SCM_OCMEM_SVC, QCOM_SCM_OCMEM_UNLOCK_CMD, 477 + &request, sizeof(request), NULL, 0); 478 + } 479 + 445 480 void __qcom_scm_init(void) 446 481 { 447 482 } ··· 617 582 int __qcom_scm_restore_sec_cfg(struct device *dev, u32 device_id, 618 583 u32 spare) 619 584 { 620 - return -ENODEV; 585 + struct msm_scm_sec_cfg { 586 + __le32 id; 587 + __le32 ctx_bank_num; 588 + } cfg; 589 + int ret, scm_ret = 0; 590 + 591 + cfg.id = cpu_to_le32(device_id); 592 + cfg.ctx_bank_num = cpu_to_le32(spare); 593 + 594 + ret = qcom_scm_call(dev, QCOM_SCM_SVC_MP, QCOM_SCM_RESTORE_SEC_CFG, 595 + &cfg, sizeof(cfg), &scm_ret, sizeof(scm_ret)); 596 + 597 + if (ret || scm_ret) 598 + return ret ? ret : -EINVAL; 599 + 600 + return 0; 621 601 } 622 602 623 603 int __qcom_scm_iommu_secure_ptbl_size(struct device *dev, u32 spare,
+12
drivers/firmware/qcom_scm-64.c
··· 291 291 return ret; 292 292 } 293 293 294 + int __qcom_scm_ocmem_lock(struct device *dev, uint32_t id, uint32_t offset, 295 + uint32_t size, uint32_t mode) 296 + { 297 + return -ENOTSUPP; 298 + } 299 + 300 + int __qcom_scm_ocmem_unlock(struct device *dev, uint32_t id, uint32_t offset, 301 + uint32_t size) 302 + { 303 + return -ENOTSUPP; 304 + } 305 + 294 306 void __qcom_scm_init(void) 295 307 { 296 308 u64 cmd;
+53
drivers/firmware/qcom_scm.c
··· 192 192 EXPORT_SYMBOL(qcom_scm_pas_supported); 193 193 194 194 /** 195 + * qcom_scm_ocmem_lock_available() - is OCMEM lock/unlock interface available 196 + */ 197 + bool qcom_scm_ocmem_lock_available(void) 198 + { 199 + return __qcom_scm_is_call_available(__scm->dev, QCOM_SCM_OCMEM_SVC, 200 + QCOM_SCM_OCMEM_LOCK_CMD); 201 + } 202 + EXPORT_SYMBOL(qcom_scm_ocmem_lock_available); 203 + 204 + /** 205 + * qcom_scm_ocmem_lock() - call OCMEM lock interface to assign an OCMEM 206 + * region to the specified initiator 207 + * 208 + * @id: tz initiator id 209 + * @offset: OCMEM offset 210 + * @size: OCMEM size 211 + * @mode: access mode (WIDE/NARROW) 212 + */ 213 + int qcom_scm_ocmem_lock(enum qcom_scm_ocmem_client id, u32 offset, u32 size, 214 + u32 mode) 215 + { 216 + return __qcom_scm_ocmem_lock(__scm->dev, id, offset, size, mode); 217 + } 218 + EXPORT_SYMBOL(qcom_scm_ocmem_lock); 219 + 220 + /** 221 + * qcom_scm_ocmem_unlock() - call OCMEM unlock interface to release an OCMEM 222 + * region from the specified initiator 223 + * 224 + * @id: tz initiator id 225 + * @offset: OCMEM offset 226 + * @size: OCMEM size 227 + */ 228 + int qcom_scm_ocmem_unlock(enum qcom_scm_ocmem_client id, u32 offset, u32 size) 229 + { 230 + return __qcom_scm_ocmem_unlock(__scm->dev, id, offset, size); 231 + } 232 + EXPORT_SYMBOL(qcom_scm_ocmem_unlock); 233 + 234 + /** 195 235 * qcom_scm_pas_init_image() - Initialize peripheral authentication service 196 236 * state machine for a given peripheral, using the 197 237 * metadata ··· 366 326 .assert = qcom_scm_pas_reset_assert, 367 327 .deassert = qcom_scm_pas_reset_deassert, 368 328 }; 329 + 330 + /** 331 + * qcom_scm_restore_sec_cfg_available() - Check if secure environment 332 + * supports restore security config interface. 333 + * 334 + * Return true if restore-cfg interface is supported, false if not. 335 + */ 336 + bool qcom_scm_restore_sec_cfg_available(void) 337 + { 338 + return __qcom_scm_is_call_available(__scm->dev, QCOM_SCM_SVC_MP, 339 + QCOM_SCM_RESTORE_SEC_CFG); 340 + } 341 + EXPORT_SYMBOL(qcom_scm_restore_sec_cfg_available); 369 342 370 343 int qcom_scm_restore_sec_cfg(u32 device_id, u32 spare) 371 344 {
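A hedged usage sketch of the new OCMEM SCM interface from a hypothetical client driver; QCOM_SCM_OCMEM_GRAPHICS_ID and the mode value 0 are illustrative assumptions, not taken from this diff:

#include <linux/qcom_scm.h>

static int example_assign_ocmem(u32 offset, u32 size)
{
	int ret;

	/* bail out early if the firmware does not implement the OCMEM service */
	if (!qcom_scm_ocmem_lock_available())
		return -ENODEV;

	/* assign [offset, offset + size) to the (assumed) graphics initiator */
	ret = qcom_scm_ocmem_lock(QCOM_SCM_OCMEM_GRAPHICS_ID, offset, size, 0);
	if (ret)
		return ret;

	/* ... hand the region to the GPU ... */

	return qcom_scm_ocmem_unlock(QCOM_SCM_OCMEM_GRAPHICS_ID, offset, size);
}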
+9
drivers/firmware/qcom_scm.h
··· 42 42 43 43 extern void __qcom_scm_init(void); 44 44 45 + #define QCOM_SCM_OCMEM_SVC 0xf 46 + #define QCOM_SCM_OCMEM_LOCK_CMD 0x1 47 + #define QCOM_SCM_OCMEM_UNLOCK_CMD 0x2 48 + 49 + extern int __qcom_scm_ocmem_lock(struct device *dev, u32 id, u32 offset, 50 + u32 size, u32 mode); 51 + extern int __qcom_scm_ocmem_unlock(struct device *dev, u32 id, u32 offset, 52 + u32 size); 53 + 45 54 #define QCOM_SCM_SVC_PIL 0x2 46 55 #define QCOM_SCM_PAS_INIT_IMAGE_CMD 0x1 47 56 #define QCOM_SCM_PAS_MEM_SETUP_CMD 0x2
+1
drivers/gpu/drm/Kconfig
··· 95 95 96 96 config DRM_DEBUG_DP_MST_TOPOLOGY_REFS 97 97 bool "Enable refcount backtrace history in the DP MST helpers" 98 + depends on STACKTRACE_SUPPORT 98 99 select STACKDEPOT 99 100 depends on DRM_KMS_HELPER 100 101 depends on DEBUG_KERNEL
+14 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
··· 105 105 (kfd_mem_limit.max_ttm_mem_limit >> 20)); 106 106 } 107 107 108 + /* Estimate page table size needed to represent a given memory size 109 + * 110 + * With 4KB pages, we need one 8 byte PTE for each 4KB of memory 111 + * (factor 512, >> 9). With 2MB pages, we need one 8 byte PTE for 2MB 112 + * of memory (factor 256K, >> 18). ROCm user mode tries to optimize 113 + * for 2MB pages for TLB efficiency. However, small allocations and 114 + * fragmented system memory still need some 4KB pages. We choose a 115 + * compromise that should work in most cases without reserving too 116 + * much memory for page tables unnecessarily (factor 16K, >> 14). 117 + */ 118 + #define ESTIMATE_PT_SIZE(mem_size) ((mem_size) >> 14) 119 + 108 120 static int amdgpu_amdkfd_reserve_mem_limit(struct amdgpu_device *adev, 109 121 uint64_t size, u32 domain, bool sg) 110 122 { 123 + uint64_t reserved_for_pt = 124 + ESTIMATE_PT_SIZE(amdgpu_amdkfd_total_mem_size); 111 125 size_t acc_size, system_mem_needed, ttm_mem_needed, vram_needed; 112 - uint64_t reserved_for_pt = amdgpu_amdkfd_total_mem_size >> 9; 113 126 int ret = 0; 114 127 115 128 acc_size = ttm_bo_dma_acc_size(&adev->mman.bdev, size,
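A worked example of the new estimate: for 64 GB of managed memory, the old 4 KB-page assumption (>> 9) reserved 128 MB for page tables, a pure 2 MB-page assumption (>> 18) would reserve only 256 KB, and the >> 14 compromise reserves 4 MB.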
+2 -2
drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
··· 1487 1487 return ret; 1488 1488 1489 1489 /* Start rlc autoload after psp recieved all the gfx firmware */ 1490 - if (psp->autoload_supported && ucode->ucode_id == 1491 - AMDGPU_UCODE_ID_RLC_RESTORE_LIST_SRM_MEM) { 1490 + if (psp->autoload_supported && ucode->ucode_id == (amdgpu_sriov_vf(adev) ? 1491 + AMDGPU_UCODE_ID_CP_MEC2 : AMDGPU_UCODE_ID_RLC_RESTORE_LIST_SRM_MEM)) { 1492 1492 ret = psp_rlc_autoload(psp); 1493 1493 if (ret) { 1494 1494 DRM_ERROR("Failed to start rlc autoload\n");
+12 -5
drivers/gpu/drm/amd/amdgpu/amdgpu_ras_eeprom.c
··· 27 27 #include <linux/bits.h> 28 28 #include "smu_v11_0_i2c.h" 29 29 30 - #define EEPROM_I2C_TARGET_ADDR 0xA0 30 + #define EEPROM_I2C_TARGET_ADDR_ARCTURUS 0xA8 31 + #define EEPROM_I2C_TARGET_ADDR_VEGA20 0xA0 31 32 32 33 /* 33 34 * The 2 macros bellow represent the actual size in bytes that ··· 84 83 { 85 84 int ret = 0; 86 85 struct i2c_msg msg = { 87 - .addr = EEPROM_I2C_TARGET_ADDR, 86 + .addr = 0, 88 87 .flags = 0, 89 88 .len = EEPROM_ADDRESS_SIZE + EEPROM_TABLE_HEADER_SIZE, 90 89 .buf = buff, ··· 93 92 94 93 *(uint16_t *)buff = EEPROM_HDR_START; 95 94 __encode_table_header_to_buff(&control->tbl_hdr, buff + EEPROM_ADDRESS_SIZE); 95 + 96 + msg.addr = control->i2c_address; 96 97 97 98 ret = i2c_transfer(&control->eeprom_accessor, &msg, 1); 98 99 if (ret < 1) ··· 206 203 unsigned char buff[EEPROM_ADDRESS_SIZE + EEPROM_TABLE_HEADER_SIZE] = { 0 }; 207 204 struct amdgpu_ras_eeprom_table_header *hdr = &control->tbl_hdr; 208 205 struct i2c_msg msg = { 209 - .addr = EEPROM_I2C_TARGET_ADDR, 206 + .addr = 0, 210 207 .flags = I2C_M_RD, 211 208 .len = EEPROM_ADDRESS_SIZE + EEPROM_TABLE_HEADER_SIZE, 212 209 .buf = buff, ··· 216 213 217 214 switch (adev->asic_type) { 218 215 case CHIP_VEGA20: 216 + control->i2c_address = EEPROM_I2C_TARGET_ADDR_VEGA20; 219 217 ret = smu_v11_0_i2c_eeprom_control_init(&control->eeprom_accessor); 220 218 break; 221 219 222 220 case CHIP_ARCTURUS: 221 + control->i2c_address = EEPROM_I2C_TARGET_ADDR_ARCTURUS; 223 222 ret = smu_i2c_eeprom_init(&adev->smu, &control->eeprom_accessor); 224 223 break; 225 224 ··· 233 228 DRM_ERROR("Failed to init I2C controller, ret:%d", ret); 234 229 return ret; 235 230 } 231 + 232 + msg.addr = control->i2c_address; 236 233 237 234 /* Read/Create table header from EEPROM address 0 */ 238 235 ret = i2c_transfer(&control->eeprom_accessor, &msg, 1); ··· 415 408 * Update bits 16,17 of EEPROM address in I2C address by setting them 416 409 * to bits 1,2 of Device address byte 417 410 */ 418 - msg->addr = EEPROM_I2C_TARGET_ADDR | 419 - ((control->next_addr & EEPROM_ADDR_MSB_MASK) >> 15); 411 + msg->addr = control->i2c_address | 412 + ((control->next_addr & EEPROM_ADDR_MSB_MASK) >> 15); 420 413 msg->flags = write ? 0 : I2C_M_RD; 421 414 msg->len = EEPROM_ADDRESS_SIZE + EEPROM_TABLE_RECORD_SIZE; 422 415 msg->buf = buff;
+1
drivers/gpu/drm/amd/amdgpu/amdgpu_ras_eeprom.h
··· 50 50 struct mutex tbl_mutex; 51 51 bool bus_locked; 52 52 uint32_t tbl_byte_sum; 53 + uint16_t i2c_address; // 8-bit represented address 53 54 }; 54 55 55 56 /*
+1 -9
drivers/gpu/drm/amd/amdgpu/amdgpu_rlc.c
··· 124 124 */ 125 125 int amdgpu_gfx_rlc_init_csb(struct amdgpu_device *adev) 126 126 { 127 - volatile u32 *dst_ptr; 128 127 u32 dws; 129 128 int r; 130 129 131 130 /* allocate clear state block */ 132 131 adev->gfx.rlc.clear_state_size = dws = adev->gfx.rlc.funcs->get_csb_size(adev); 133 - r = amdgpu_bo_create_reserved(adev, dws * 4, PAGE_SIZE, 132 + r = amdgpu_bo_create_kernel(adev, dws * 4, PAGE_SIZE, 134 133 AMDGPU_GEM_DOMAIN_VRAM, 135 134 &adev->gfx.rlc.clear_state_obj, 136 135 &adev->gfx.rlc.clear_state_gpu_addr, ··· 139 140 amdgpu_gfx_rlc_fini(adev); 140 141 return r; 141 142 } 142 - 143 - /* set up the cs buffer */ 144 - dst_ptr = adev->gfx.rlc.cs_ptr; 145 - adev->gfx.rlc.funcs->get_csb_buffer(adev, dst_ptr); 146 - amdgpu_bo_kunmap(adev->gfx.rlc.clear_state_obj); 147 - amdgpu_bo_unpin(adev->gfx.rlc.clear_state_obj); 148 - amdgpu_bo_unreserve(adev->gfx.rlc.clear_state_obj); 149 143 150 144 return 0; 151 145 }
+5 -2
drivers/gpu/drm/amd/amdgpu/cik.c
··· 1346 1346 { 1347 1347 int r; 1348 1348 1349 - if (cik_asic_reset_method(adev) == AMD_RESET_METHOD_BACO) 1349 + if (cik_asic_reset_method(adev) == AMD_RESET_METHOD_BACO) { 1350 + if (!adev->in_suspend) 1351 + amdgpu_inc_vram_lost(adev); 1350 1352 r = smu7_asic_baco_reset(adev); 1351 - else 1353 + } else { 1352 1354 r = cik_asic_pci_config_reset(adev); 1355 + } 1353 1356 1354 1357 return r; 1355 1358 }
+53 -129
drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
··· 690 690 adev->gfx.ce_fw_version = le32_to_cpu(cp_hdr->header.ucode_version); 691 691 adev->gfx.ce_feature_version = le32_to_cpu(cp_hdr->ucode_feature_version); 692 692 693 - snprintf(fw_name, sizeof(fw_name), "amdgpu/%s_rlc.bin", chip_name); 694 - err = request_firmware(&adev->gfx.rlc_fw, fw_name, adev->dev); 695 - if (err) 696 - goto out; 697 - err = amdgpu_ucode_validate(adev->gfx.rlc_fw); 698 - rlc_hdr = (const struct rlc_firmware_header_v2_0 *)adev->gfx.rlc_fw->data; 699 - version_major = le16_to_cpu(rlc_hdr->header.header_version_major); 700 - version_minor = le16_to_cpu(rlc_hdr->header.header_version_minor); 701 - if (version_major == 2 && version_minor == 1) 702 - adev->gfx.rlc.is_rlc_v2_1 = true; 693 + if (!amdgpu_sriov_vf(adev)) { 694 + snprintf(fw_name, sizeof(fw_name), "amdgpu/%s_rlc.bin", chip_name); 695 + err = request_firmware(&adev->gfx.rlc_fw, fw_name, adev->dev); 696 + if (err) 697 + goto out; 698 + err = amdgpu_ucode_validate(adev->gfx.rlc_fw); 699 + rlc_hdr = (const struct rlc_firmware_header_v2_0 *)adev->gfx.rlc_fw->data; 700 + version_major = le16_to_cpu(rlc_hdr->header.header_version_major); 701 + version_minor = le16_to_cpu(rlc_hdr->header.header_version_minor); 702 + if (version_major == 2 && version_minor == 1) 703 + adev->gfx.rlc.is_rlc_v2_1 = true; 703 704 704 - adev->gfx.rlc_fw_version = le32_to_cpu(rlc_hdr->header.ucode_version); 705 - adev->gfx.rlc_feature_version = le32_to_cpu(rlc_hdr->ucode_feature_version); 706 - adev->gfx.rlc.save_and_restore_offset = 705 + adev->gfx.rlc_fw_version = le32_to_cpu(rlc_hdr->header.ucode_version); 706 + adev->gfx.rlc_feature_version = le32_to_cpu(rlc_hdr->ucode_feature_version); 707 + adev->gfx.rlc.save_and_restore_offset = 707 708 le32_to_cpu(rlc_hdr->save_and_restore_offset); 708 - adev->gfx.rlc.clear_state_descriptor_offset = 709 + adev->gfx.rlc.clear_state_descriptor_offset = 709 710 le32_to_cpu(rlc_hdr->clear_state_descriptor_offset); 710 - adev->gfx.rlc.avail_scratch_ram_locations = 711 + adev->gfx.rlc.avail_scratch_ram_locations = 711 712 le32_to_cpu(rlc_hdr->avail_scratch_ram_locations); 712 - adev->gfx.rlc.reg_restore_list_size = 713 + adev->gfx.rlc.reg_restore_list_size = 713 714 le32_to_cpu(rlc_hdr->reg_restore_list_size); 714 - adev->gfx.rlc.reg_list_format_start = 715 + adev->gfx.rlc.reg_list_format_start = 715 716 le32_to_cpu(rlc_hdr->reg_list_format_start); 716 - adev->gfx.rlc.reg_list_format_separate_start = 717 + adev->gfx.rlc.reg_list_format_separate_start = 717 718 le32_to_cpu(rlc_hdr->reg_list_format_separate_start); 718 - adev->gfx.rlc.starting_offsets_start = 719 + adev->gfx.rlc.starting_offsets_start = 719 720 le32_to_cpu(rlc_hdr->starting_offsets_start); 720 - adev->gfx.rlc.reg_list_format_size_bytes = 721 + adev->gfx.rlc.reg_list_format_size_bytes = 721 722 le32_to_cpu(rlc_hdr->reg_list_format_size_bytes); 722 - adev->gfx.rlc.reg_list_size_bytes = 723 + adev->gfx.rlc.reg_list_size_bytes = 723 724 le32_to_cpu(rlc_hdr->reg_list_size_bytes); 724 - adev->gfx.rlc.register_list_format = 725 + adev->gfx.rlc.register_list_format = 725 726 kmalloc(adev->gfx.rlc.reg_list_format_size_bytes + 726 - adev->gfx.rlc.reg_list_size_bytes, GFP_KERNEL); 727 - if (!adev->gfx.rlc.register_list_format) { 728 - err = -ENOMEM; 729 - goto out; 727 + adev->gfx.rlc.reg_list_size_bytes, GFP_KERNEL); 728 + if (!adev->gfx.rlc.register_list_format) { 729 + err = -ENOMEM; 730 + goto out; 731 + } 732 + 733 + tmp = (unsigned int *)((uintptr_t)rlc_hdr + 734 + le32_to_cpu(rlc_hdr->reg_list_format_array_offset_bytes)); 735 + for (i = 0 
; i < (rlc_hdr->reg_list_format_size_bytes >> 2); i++) 736 + adev->gfx.rlc.register_list_format[i] = le32_to_cpu(tmp[i]); 737 + 738 + adev->gfx.rlc.register_restore = adev->gfx.rlc.register_list_format + i; 739 + 740 + tmp = (unsigned int *)((uintptr_t)rlc_hdr + 741 + le32_to_cpu(rlc_hdr->reg_list_array_offset_bytes)); 742 + for (i = 0 ; i < (rlc_hdr->reg_list_size_bytes >> 2); i++) 743 + adev->gfx.rlc.register_restore[i] = le32_to_cpu(tmp[i]); 744 + 745 + if (adev->gfx.rlc.is_rlc_v2_1) 746 + gfx_v10_0_init_rlc_ext_microcode(adev); 730 747 } 731 - 732 - tmp = (unsigned int *)((uintptr_t)rlc_hdr + 733 - le32_to_cpu(rlc_hdr->reg_list_format_array_offset_bytes)); 734 - for (i = 0 ; i < (rlc_hdr->reg_list_format_size_bytes >> 2); i++) 735 - adev->gfx.rlc.register_list_format[i] = le32_to_cpu(tmp[i]); 736 - 737 - adev->gfx.rlc.register_restore = adev->gfx.rlc.register_list_format + i; 738 - 739 - tmp = (unsigned int *)((uintptr_t)rlc_hdr + 740 - le32_to_cpu(rlc_hdr->reg_list_array_offset_bytes)); 741 - for (i = 0 ; i < (rlc_hdr->reg_list_size_bytes >> 2); i++) 742 - adev->gfx.rlc.register_restore[i] = le32_to_cpu(tmp[i]); 743 - 744 - if (adev->gfx.rlc.is_rlc_v2_1) 745 - gfx_v10_0_init_rlc_ext_microcode(adev); 746 748 747 749 snprintf(fw_name, sizeof(fw_name), "amdgpu/%s_mec%s.bin", chip_name, wks); 748 750 err = request_firmware(&adev->gfx.mec_fw, fw_name, adev->dev); ··· 993 991 } 994 992 995 993 return 0; 996 - } 997 - 998 - static int gfx_v10_0_csb_vram_pin(struct amdgpu_device *adev) 999 - { 1000 - int r; 1001 - 1002 - r = amdgpu_bo_reserve(adev->gfx.rlc.clear_state_obj, false); 1003 - if (unlikely(r != 0)) 1004 - return r; 1005 - 1006 - r = amdgpu_bo_pin(adev->gfx.rlc.clear_state_obj, 1007 - AMDGPU_GEM_DOMAIN_VRAM); 1008 - if (!r) 1009 - adev->gfx.rlc.clear_state_gpu_addr = 1010 - amdgpu_bo_gpu_offset(adev->gfx.rlc.clear_state_obj); 1011 - 1012 - amdgpu_bo_unreserve(adev->gfx.rlc.clear_state_obj); 1013 - 1014 - return r; 1015 - } 1016 - 1017 - static void gfx_v10_0_csb_vram_unpin(struct amdgpu_device *adev) 1018 - { 1019 - int r; 1020 - 1021 - if (!adev->gfx.rlc.clear_state_obj) 1022 - return; 1023 - 1024 - r = amdgpu_bo_reserve(adev->gfx.rlc.clear_state_obj, true); 1025 - if (likely(r == 0)) { 1026 - amdgpu_bo_unpin(adev->gfx.rlc.clear_state_obj); 1027 - amdgpu_bo_unreserve(adev->gfx.rlc.clear_state_obj); 1028 - } 1029 994 } 1030 995 1031 996 static void gfx_v10_0_mec_fini(struct amdgpu_device *adev) ··· 1756 1787 1757 1788 static int gfx_v10_0_init_csb(struct amdgpu_device *adev) 1758 1789 { 1759 - int r; 1760 - 1761 - if (adev->in_gpu_reset) { 1762 - r = amdgpu_bo_reserve(adev->gfx.rlc.clear_state_obj, false); 1763 - if (r) 1764 - return r; 1765 - 1766 - r = amdgpu_bo_kmap(adev->gfx.rlc.clear_state_obj, 1767 - (void **)&adev->gfx.rlc.cs_ptr); 1768 - if (!r) { 1769 - adev->gfx.rlc.funcs->get_csb_buffer(adev, 1770 - adev->gfx.rlc.cs_ptr); 1771 - amdgpu_bo_kunmap(adev->gfx.rlc.clear_state_obj); 1772 - } 1773 - 1774 - amdgpu_bo_unreserve(adev->gfx.rlc.clear_state_obj); 1775 - if (r) 1776 - return r; 1777 - } 1790 + adev->gfx.rlc.funcs->get_csb_buffer(adev, adev->gfx.rlc.cs_ptr); 1778 1791 1779 1792 /* csib */ 1780 1793 WREG32_SOC15(GC, 0, mmRLC_CSIB_ADDR_HI, ··· 1765 1814 adev->gfx.rlc.clear_state_gpu_addr & 0xfffffffc); 1766 1815 WREG32_SOC15(GC, 0, mmRLC_CSIB_LENGTH, adev->gfx.rlc.clear_state_size); 1767 1816 1768 - return 0; 1769 - } 1770 - 1771 - static int gfx_v10_0_init_pg(struct amdgpu_device *adev) 1772 - { 1773 - int i; 1774 - int r; 1775 - 1776 - r = gfx_v10_0_init_csb(adev); 1777 
- if (r) 1778 - return r; 1779 - 1780 - for (i = 0; i < adev->num_vmhubs; i++) 1781 - amdgpu_gmc_flush_gpu_tlb(adev, 0, i, 0); 1782 - 1783 - /* TODO: init power gating */ 1784 1817 return 0; 1785 1818 } 1786 1819 ··· 1860 1925 { 1861 1926 int r; 1862 1927 1863 - if (amdgpu_sriov_vf(adev)) 1864 - return 0; 1865 - 1866 1928 if (adev->firmware.load_type == AMDGPU_FW_LOAD_PSP) { 1929 + 1867 1930 r = gfx_v10_0_wait_for_rlc_autoload_complete(adev); 1868 1931 if (r) 1869 1932 return r; 1870 1933 1871 - r = gfx_v10_0_init_pg(adev); 1872 - if (r) 1873 - return r; 1934 + gfx_v10_0_init_csb(adev); 1874 1935 1875 - /* enable RLC SRM */ 1876 - gfx_v10_0_rlc_enable_srm(adev); 1877 - 1936 + if (!amdgpu_sriov_vf(adev)) /* enable RLC SRM */ 1937 + gfx_v10_0_rlc_enable_srm(adev); 1878 1938 } else { 1879 1939 adev->gfx.rlc.funcs->stop(adev); 1880 1940 ··· 1891 1961 return r; 1892 1962 } 1893 1963 1894 - r = gfx_v10_0_init_pg(adev); 1895 - if (r) 1896 - return r; 1964 + gfx_v10_0_init_csb(adev); 1897 1965 1898 1966 adev->gfx.rlc.funcs->start(adev); 1899 1967 ··· 2753 2825 /* Init gfx ring 0 for pipe 0 */ 2754 2826 mutex_lock(&adev->srbm_mutex); 2755 2827 gfx_v10_0_cp_gfx_switch_pipe(adev, PIPE_ID0); 2756 - mutex_unlock(&adev->srbm_mutex); 2828 + 2757 2829 /* Set ring buffer size */ 2758 2830 ring = &adev->gfx.gfx_ring[0]; 2759 2831 rb_bufsz = order_base_2(ring->ring_size / 8); ··· 2791 2863 WREG32_SOC15(GC, 0, mmCP_RB_ACTIVE, 1); 2792 2864 2793 2865 gfx_v10_0_cp_gfx_set_doorbell(adev, ring); 2866 + mutex_unlock(&adev->srbm_mutex); 2794 2867 2795 2868 /* Init gfx ring 1 for pipe 1 */ 2796 2869 mutex_lock(&adev->srbm_mutex); 2797 2870 gfx_v10_0_cp_gfx_switch_pipe(adev, PIPE_ID1); 2798 - mutex_unlock(&adev->srbm_mutex); 2799 2871 ring = &adev->gfx.gfx_ring[1]; 2800 2872 rb_bufsz = order_base_2(ring->ring_size / 8); 2801 2873 tmp = REG_SET_FIELD(0, CP_RB1_CNTL, RB_BUFSZ, rb_bufsz); ··· 2825 2897 WREG32_SOC15(GC, 0, mmCP_RB1_ACTIVE, 1); 2826 2898 2827 2899 gfx_v10_0_cp_gfx_set_doorbell(adev, ring); 2900 + mutex_unlock(&adev->srbm_mutex); 2828 2901 2829 2902 /* Switch to pipe 0 */ 2830 2903 mutex_lock(&adev->srbm_mutex); ··· 3704 3775 int r; 3705 3776 struct amdgpu_device *adev = (struct amdgpu_device *)handle; 3706 3777 3707 - r = gfx_v10_0_csb_vram_pin(adev); 3708 - if (r) 3709 - return r; 3710 - 3711 3778 if (!amdgpu_emu_mode) 3712 3779 gfx_v10_0_init_golden_registers(adev); 3713 3780 ··· 3786 3861 if (amdgpu_gfx_disable_kcq(adev)) 3787 3862 DRM_ERROR("KCQ disable failed\n"); 3788 3863 if (amdgpu_sriov_vf(adev)) { 3789 - pr_debug("For SRIOV client, shouldn't do anything.\n"); 3864 + gfx_v10_0_cp_gfx_enable(adev, false); 3790 3865 return 0; 3791 3866 } 3792 3867 gfx_v10_0_cp_enable(adev, false); 3793 3868 gfx_v10_0_enable_gui_idle_interrupt(adev, false); 3794 - gfx_v10_0_csb_vram_unpin(adev); 3795 3869 3796 3870 return 0; 3797 3871 }
+2
drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c
··· 4554 4554 4555 4555 gfx_v7_0_constants_init(adev); 4556 4556 4557 + /* init CSB */ 4558 + adev->gfx.rlc.funcs->get_csb_buffer(adev, adev->gfx.rlc.cs_ptr); 4557 4559 /* init rlc */ 4558 4560 r = adev->gfx.rlc.funcs->resume(adev); 4559 4561 if (r)
+1 -39
drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
··· 1321 1321 return 0; 1322 1322 } 1323 1323 1324 - static int gfx_v8_0_csb_vram_pin(struct amdgpu_device *adev) 1325 - { 1326 - int r; 1327 - 1328 - r = amdgpu_bo_reserve(adev->gfx.rlc.clear_state_obj, false); 1329 - if (unlikely(r != 0)) 1330 - return r; 1331 - 1332 - r = amdgpu_bo_pin(adev->gfx.rlc.clear_state_obj, 1333 - AMDGPU_GEM_DOMAIN_VRAM); 1334 - if (!r) 1335 - adev->gfx.rlc.clear_state_gpu_addr = 1336 - amdgpu_bo_gpu_offset(adev->gfx.rlc.clear_state_obj); 1337 - 1338 - amdgpu_bo_unreserve(adev->gfx.rlc.clear_state_obj); 1339 - 1340 - return r; 1341 - } 1342 - 1343 - static void gfx_v8_0_csb_vram_unpin(struct amdgpu_device *adev) 1344 - { 1345 - int r; 1346 - 1347 - if (!adev->gfx.rlc.clear_state_obj) 1348 - return; 1349 - 1350 - r = amdgpu_bo_reserve(adev->gfx.rlc.clear_state_obj, true); 1351 - if (likely(r == 0)) { 1352 - amdgpu_bo_unpin(adev->gfx.rlc.clear_state_obj); 1353 - amdgpu_bo_unreserve(adev->gfx.rlc.clear_state_obj); 1354 - } 1355 - } 1356 - 1357 1324 static void gfx_v8_0_mec_fini(struct amdgpu_device *adev) 1358 1325 { 1359 1326 amdgpu_bo_free_kernel(&adev->gfx.mec.hpd_eop_obj, NULL, NULL); ··· 3884 3917 3885 3918 static void gfx_v8_0_init_csb(struct amdgpu_device *adev) 3886 3919 { 3920 + adev->gfx.rlc.funcs->get_csb_buffer(adev, adev->gfx.rlc.cs_ptr); 3887 3921 /* csib */ 3888 3922 WREG32(mmRLC_CSIB_ADDR_HI, 3889 3923 adev->gfx.rlc.clear_state_gpu_addr >> 32); ··· 4805 4837 gfx_v8_0_init_golden_registers(adev); 4806 4838 gfx_v8_0_constants_init(adev); 4807 4839 4808 - r = gfx_v8_0_csb_vram_pin(adev); 4809 - if (r) 4810 - return r; 4811 - 4812 4840 r = adev->gfx.rlc.funcs->resume(adev); 4813 4841 if (r) 4814 4842 return r; ··· 4921 4957 else 4922 4958 pr_err("rlc is busy, skip halt rlc\n"); 4923 4959 amdgpu_gfx_rlc_exit_safe_mode(adev); 4924 - 4925 - gfx_v8_0_csb_vram_unpin(adev); 4926 4960 4927 4961 return 0; 4928 4962 }
+1 -39
drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
··· 1695 1695 return 0; 1696 1696 } 1697 1697 1698 - static int gfx_v9_0_csb_vram_pin(struct amdgpu_device *adev) 1699 - { 1700 - int r; 1701 - 1702 - r = amdgpu_bo_reserve(adev->gfx.rlc.clear_state_obj, false); 1703 - if (unlikely(r != 0)) 1704 - return r; 1705 - 1706 - r = amdgpu_bo_pin(adev->gfx.rlc.clear_state_obj, 1707 - AMDGPU_GEM_DOMAIN_VRAM); 1708 - if (!r) 1709 - adev->gfx.rlc.clear_state_gpu_addr = 1710 - amdgpu_bo_gpu_offset(adev->gfx.rlc.clear_state_obj); 1711 - 1712 - amdgpu_bo_unreserve(adev->gfx.rlc.clear_state_obj); 1713 - 1714 - return r; 1715 - } 1716 - 1717 - static void gfx_v9_0_csb_vram_unpin(struct amdgpu_device *adev) 1718 - { 1719 - int r; 1720 - 1721 - if (!adev->gfx.rlc.clear_state_obj) 1722 - return; 1723 - 1724 - r = amdgpu_bo_reserve(adev->gfx.rlc.clear_state_obj, true); 1725 - if (likely(r == 0)) { 1726 - amdgpu_bo_unpin(adev->gfx.rlc.clear_state_obj); 1727 - amdgpu_bo_unreserve(adev->gfx.rlc.clear_state_obj); 1728 - } 1729 - } 1730 - 1731 1698 static void gfx_v9_0_mec_fini(struct amdgpu_device *adev) 1732 1699 { 1733 1700 amdgpu_bo_free_kernel(&adev->gfx.mec.hpd_eop_obj, NULL, NULL); ··· 2382 2415 2383 2416 static void gfx_v9_0_init_csb(struct amdgpu_device *adev) 2384 2417 { 2418 + adev->gfx.rlc.funcs->get_csb_buffer(adev, adev->gfx.rlc.cs_ptr); 2385 2419 /* csib */ 2386 2420 WREG32_RLC(SOC15_REG_OFFSET(GC, 0, mmRLC_CSIB_ADDR_HI), 2387 2421 adev->gfx.rlc.clear_state_gpu_addr >> 32); ··· 3674 3706 3675 3707 gfx_v9_0_constants_init(adev); 3676 3708 3677 - r = gfx_v9_0_csb_vram_pin(adev); 3678 - if (r) 3679 - return r; 3680 - 3681 3709 r = adev->gfx.rlc.funcs->resume(adev); 3682 3710 if (r) 3683 3711 return r; ··· 3754 3790 3755 3791 gfx_v9_0_cp_enable(adev, false); 3756 3792 adev->gfx.rlc.funcs->stop(adev); 3757 - 3758 - gfx_v9_0_csb_vram_unpin(adev); 3759 3793 3760 3794 return 0; 3761 3795 }
+17 -2
drivers/gpu/drm/amd/amdgpu/gfxhub_v1_1.c
··· 33 33 u32 xgmi_lfb_cntl = RREG32_SOC15(GC, 0, mmMC_VM_XGMI_LFB_CNTL); 34 34 u32 max_region = 35 35 REG_GET_FIELD(xgmi_lfb_cntl, MC_VM_XGMI_LFB_CNTL, PF_MAX_REGION); 36 + u32 max_num_physical_nodes = 0; 37 + u32 max_physical_node_id = 0; 38 + 39 + switch (adev->asic_type) { 40 + case CHIP_VEGA20: 41 + max_num_physical_nodes = 4; 42 + max_physical_node_id = 3; 43 + break; 44 + case CHIP_ARCTURUS: 45 + max_num_physical_nodes = 8; 46 + max_physical_node_id = 7; 47 + break; 48 + default: 49 + return -EINVAL; 50 + } 36 51 37 52 /* PF_MAX_REGION=0 means xgmi is disabled */ 38 53 if (max_region) { 39 54 adev->gmc.xgmi.num_physical_nodes = max_region + 1; 40 - if (adev->gmc.xgmi.num_physical_nodes > 4) 55 + if (adev->gmc.xgmi.num_physical_nodes > max_num_physical_nodes) 41 56 return -EINVAL; 42 57 43 58 adev->gmc.xgmi.physical_node_id = 44 59 REG_GET_FIELD(xgmi_lfb_cntl, MC_VM_XGMI_LFB_CNTL, PF_LFB_REGION); 45 - if (adev->gmc.xgmi.physical_node_id > 3) 60 + if (adev->gmc.xgmi.physical_node_id > max_physical_node_id) 46 61 return -EINVAL; 47 62 adev->gmc.xgmi.node_segment_size = REG_GET_FIELD( 48 63 RREG32_SOC15(GC, 0, mmMC_VM_XGMI_LFB_SIZE),
+2 -1
drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c
··· 326 326 327 327 if (!adev->mman.buffer_funcs_enabled || 328 328 !adev->ib_pool_ready || 329 - adev->in_gpu_reset) { 329 + adev->in_gpu_reset || 330 + ring->sched.ready == false) { 330 331 gmc_v10_0_flush_vm_hub(adev, vmid, AMDGPU_GFXHUB_0, 0); 331 332 mutex_unlock(&adev->mman.gtt_window_lock); 332 333 return;
+5 -2
drivers/gpu/drm/amd/amdgpu/vi.c
··· 783 783 { 784 784 int r; 785 785 786 - if (vi_asic_reset_method(adev) == AMD_RESET_METHOD_BACO) 786 + if (vi_asic_reset_method(adev) == AMD_RESET_METHOD_BACO) { 787 + if (!adev->in_suspend) 788 + amdgpu_inc_vram_lost(adev); 787 789 r = smu7_asic_baco_reset(adev); 788 - else 790 + } else { 789 791 r = vi_asic_pci_config_reset(adev); 792 + } 790 793 791 794 return r; 792 795 }
+1 -1
drivers/gpu/drm/amd/amdkfd/Kconfig
··· 5 5 6 6 config HSA_AMD 7 7 bool "HSA kernel driver for AMD GPU devices" 8 - depends on DRM_AMDGPU && (X86_64 || ARM64) 8 + depends on DRM_AMDGPU && (X86_64 || ARM64 || PPC64) 9 9 imply AMD_IOMMU_V2 if X86_64 10 10 select MMU_NOTIFIER 11 11 help
+2 -1
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_pp_smu.c
··· 342 342 if (adev->powerplay.pp_funcs && adev->powerplay.pp_funcs->get_clock_by_type) { 343 343 if (adev->powerplay.pp_funcs->get_clock_by_type(pp_handle, 344 344 dc_to_pp_clock_type(clk_type), &pp_clks)) { 345 - /* Error in pplib. Provide default values. */ 345 + /* Error in pplib. Provide default values. */ 346 + get_default_clock_levels(clk_type, dc_clks); 346 347 return true; 347 348 } 348 349 } else if (adev->smu.ppt_funcs && adev->smu.ppt_funcs->get_clock_by_type) {
+19
drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
··· 1037 1037 if (pipe->plane_state != NULL) 1038 1038 flip_immediate = pipe->plane_state->flip_immediate; 1039 1039 1040 + if (flip_immediate && lock) { 1041 + const int TIMEOUT_FOR_FLIP_PENDING = 100000; 1042 + int i; 1043 + 1044 + for (i = 0; i < TIMEOUT_FOR_FLIP_PENDING; ++i) { 1045 + if (!pipe->plane_res.hubp->funcs->hubp_is_flip_pending(pipe->plane_res.hubp)) 1046 + break; 1047 + udelay(1); 1048 + } 1049 + 1050 + if (pipe->bottom_pipe != NULL) { 1051 + for (i = 0; i < TIMEOUT_FOR_FLIP_PENDING; ++i) { 1052 + if (!pipe->bottom_pipe->plane_res.hubp->funcs->hubp_is_flip_pending(pipe->bottom_pipe->plane_res.hubp)) 1053 + break; 1054 + udelay(1); 1055 + } 1056 + } 1057 + } 1058 + 1040 1059 /* In flip immediate and pipe splitting case, we need to use GSL 1041 1060 * for synchronization. Only do setup on locking and on flip type change. 1042 1061 */
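The added block busy-waits, for up to 100000 iterations of udelay(1) (roughly 100 ms), for any pending immediate flip on the pipe, and on its bottom pipe when the pipe is split, to drain before the lock is taken.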
+74
drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
··· 157 157 .xfc_fill_constant_bytes = 0, 158 158 }; 159 159 160 + struct _vcs_dpi_ip_params_st dcn2_0_nv14_ip = { 161 + .odm_capable = 1, 162 + .gpuvm_enable = 0, 163 + .hostvm_enable = 0, 164 + .gpuvm_max_page_table_levels = 4, 165 + .hostvm_max_page_table_levels = 4, 166 + .hostvm_cached_page_table_levels = 0, 167 + .num_dsc = 5, 168 + .rob_buffer_size_kbytes = 168, 169 + .det_buffer_size_kbytes = 164, 170 + .dpte_buffer_size_in_pte_reqs_luma = 84, 171 + .dpte_buffer_size_in_pte_reqs_chroma = 42,//todo 172 + .dpp_output_buffer_pixels = 2560, 173 + .opp_output_buffer_lines = 1, 174 + .pixel_chunk_size_kbytes = 8, 175 + .pte_enable = 1, 176 + .max_page_table_levels = 4, 177 + .pte_chunk_size_kbytes = 2, 178 + .meta_chunk_size_kbytes = 2, 179 + .writeback_chunk_size_kbytes = 2, 180 + .line_buffer_size_bits = 789504, 181 + .is_line_buffer_bpp_fixed = 0, 182 + .line_buffer_fixed_bpp = 0, 183 + .dcc_supported = true, 184 + .max_line_buffer_lines = 12, 185 + .writeback_luma_buffer_size_kbytes = 12, 186 + .writeback_chroma_buffer_size_kbytes = 8, 187 + .writeback_chroma_line_buffer_width_pixels = 4, 188 + .writeback_max_hscl_ratio = 1, 189 + .writeback_max_vscl_ratio = 1, 190 + .writeback_min_hscl_ratio = 1, 191 + .writeback_min_vscl_ratio = 1, 192 + .writeback_max_hscl_taps = 12, 193 + .writeback_max_vscl_taps = 12, 194 + .writeback_line_buffer_luma_buffer_size = 0, 195 + .writeback_line_buffer_chroma_buffer_size = 14643, 196 + .cursor_buffer_size = 8, 197 + .cursor_chunk_size = 2, 198 + .max_num_otg = 5, 199 + .max_num_dpp = 5, 200 + .max_num_wb = 1, 201 + .max_dchub_pscl_bw_pix_per_clk = 4, 202 + .max_pscl_lb_bw_pix_per_clk = 2, 203 + .max_lb_vscl_bw_pix_per_clk = 4, 204 + .max_vscl_hscl_bw_pix_per_clk = 4, 205 + .max_hscl_ratio = 8, 206 + .max_vscl_ratio = 8, 207 + .hscl_mults = 4, 208 + .vscl_mults = 4, 209 + .max_hscl_taps = 8, 210 + .max_vscl_taps = 8, 211 + .dispclk_ramp_margin_percent = 1, 212 + .underscan_factor = 1.10, 213 + .min_vblank_lines = 32, // 214 + .dppclk_delay_subtotal = 77, // 215 + .dppclk_delay_scl_lb_only = 16, 216 + .dppclk_delay_scl = 50, 217 + .dppclk_delay_cnvc_formatter = 8, 218 + .dppclk_delay_cnvc_cursor = 6, 219 + .dispclk_delay_subtotal = 87, // 220 + .dcfclk_cstate_latency = 10, // SRExitTime 221 + .max_inter_dcn_tile_repeaters = 8, 222 + .xfc_supported = true, 223 + .xfc_fill_bw_overhead_percent = 10.0, 224 + .xfc_fill_constant_bytes = 0, 225 + .ptoi_supported = 0 226 + }; 227 + 160 228 struct _vcs_dpi_soc_bounding_box_st dcn2_0_soc = { 161 229 /* Defaults that get patched on driver load from firmware. */ 162 230 .clock_limits = { ··· 922 854 .num_pll = 5, 923 855 .num_dwb = 1, 924 856 .num_ddc = 5, 857 + .num_vmid = 16, 858 + .num_dsc = 5, 925 859 }; 926 860 927 861 static const struct dc_debug_options debug_defaults_drv = { ··· 3282 3212 static struct _vcs_dpi_ip_params_st *get_asic_rev_ip_params( 3283 3213 uint32_t hw_internal_rev) 3284 3214 { 3215 + /* NV14 */ 3216 + if (ASICREV_IS_NAVI14_M(hw_internal_rev)) 3217 + return &dcn2_0_nv14_ip; 3218 + 3285 3219 /* NV12 and NV10 */ 3286 3220 return &dcn2_0_ip; 3287 3221 }
+9
drivers/gpu/drm/amd/powerplay/amdgpu_smu.c
··· 2548 2548 2549 2549 return ret; 2550 2550 } 2551 + 2552 + int smu_send_smc_msg(struct smu_context *smu, 2553 + enum smu_message_type msg) 2554 + { 2555 + int ret; 2556 + 2557 + ret = smu_send_smc_msg_with_param(smu, msg, 0); 2558 + return ret; 2559 + }
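This common wrapper replaces the per-ASIC send_smc_msg callbacks removed in the arcturus, navi10, renoir and vega20 ppt tables and the smu_v11_0/smu_v12_0 implementations below: a plain message send is now just send_smc_msg_with_param(smu, msg, 0), and the message argument is typed as enum smu_message_type throughout.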
-1
drivers/gpu/drm/amd/powerplay/arcturus_ppt.c
··· 2130 2130 .set_tool_table_location = smu_v11_0_set_tool_table_location, 2131 2131 .notify_memory_pool_location = smu_v11_0_notify_memory_pool_location, 2132 2132 .system_features_control = smu_v11_0_system_features_control, 2133 - .send_smc_msg = smu_v11_0_send_msg, 2134 2133 .send_smc_msg_with_param = smu_v11_0_send_msg_with_param, 2135 2134 .read_smc_arg = smu_v11_0_read_arg, 2136 2135 .init_display_count = smu_v11_0_init_display_count,
+2 -2
drivers/gpu/drm/amd/powerplay/inc/amdgpu_smu.h
··· 497 497 int (*notify_memory_pool_location)(struct smu_context *smu); 498 498 int (*set_last_dcef_min_deep_sleep_clk)(struct smu_context *smu); 499 499 int (*system_features_control)(struct smu_context *smu, bool en); 500 - int (*send_smc_msg)(struct smu_context *smu, uint16_t msg); 501 - int (*send_smc_msg_with_param)(struct smu_context *smu, uint16_t msg, uint32_t param); 500 + int (*send_smc_msg_with_param)(struct smu_context *smu, 501 + enum smu_message_type msg, uint32_t param); 502 502 int (*read_smc_arg)(struct smu_context *smu, uint32_t *arg); 503 503 int (*init_display_count)(struct smu_context *smu, uint32_t count); 504 504 int (*set_allowed_mask)(struct smu_context *smu);
+2 -3
drivers/gpu/drm/amd/powerplay/inc/smu_v11_0.h
··· 177 177 int smu_v11_0_system_features_control(struct smu_context *smu, 178 178 bool en); 179 179 180 - int smu_v11_0_send_msg(struct smu_context *smu, uint16_t msg); 181 - 182 180 int 183 - smu_v11_0_send_msg_with_param(struct smu_context *smu, uint16_t msg, 181 + smu_v11_0_send_msg_with_param(struct smu_context *smu, 182 + enum smu_message_type msg, 184 183 uint32_t param); 185 184 186 185 int smu_v11_0_read_arg(struct smu_context *smu, uint32_t *arg);
+2 -3
drivers/gpu/drm/amd/powerplay/inc/smu_v12_0.h
··· 44 44 45 45 int smu_v12_0_wait_for_response(struct smu_context *smu); 46 46 47 - int smu_v12_0_send_msg(struct smu_context *smu, uint16_t msg); 48 - 49 47 int 50 - smu_v12_0_send_msg_with_param(struct smu_context *smu, uint16_t msg, 48 + smu_v12_0_send_msg_with_param(struct smu_context *smu, 49 + enum smu_message_type msg, 51 50 uint32_t param); 52 51 53 52 int smu_v12_0_check_fw_status(struct smu_context *smu);
-1
drivers/gpu/drm/amd/powerplay/navi10_ppt.c
··· 2055 2055 .set_tool_table_location = smu_v11_0_set_tool_table_location, 2056 2056 .notify_memory_pool_location = smu_v11_0_notify_memory_pool_location, 2057 2057 .system_features_control = smu_v11_0_system_features_control, 2058 - .send_smc_msg = smu_v11_0_send_msg, 2059 2058 .send_smc_msg_with_param = smu_v11_0_send_msg_with_param, 2060 2059 .read_smc_arg = smu_v11_0_read_arg, 2061 2060 .init_display_count = smu_v11_0_init_display_count,
-1
drivers/gpu/drm/amd/powerplay/renoir_ppt.c
··· 697 697 .check_fw_version = smu_v12_0_check_fw_version, 698 698 .powergate_sdma = smu_v12_0_powergate_sdma, 699 699 .powergate_vcn = smu_v12_0_powergate_vcn, 700 - .send_smc_msg = smu_v12_0_send_msg, 701 700 .send_smc_msg_with_param = smu_v12_0_send_msg_with_param, 702 701 .read_smc_arg = smu_v12_0_read_arg, 703 702 .set_gfx_cgpg = smu_v12_0_set_gfx_cgpg,
+2 -2
drivers/gpu/drm/amd/powerplay/smu_internal.h
··· 75 75 #define smu_set_default_od_settings(smu, initialize) \ 76 76 ((smu)->ppt_funcs->set_default_od_settings ? (smu)->ppt_funcs->set_default_od_settings((smu), (initialize)) : 0) 77 77 78 - #define smu_send_smc_msg(smu, msg) \ 79 - ((smu)->ppt_funcs->send_smc_msg? (smu)->ppt_funcs->send_smc_msg((smu), (msg)) : 0) 78 + int smu_send_smc_msg(struct smu_context *smu, enum smu_message_type msg); 79 + 80 80 #define smu_send_smc_msg_with_param(smu, msg, param) \ 81 81 ((smu)->ppt_funcs->send_smc_msg_with_param? (smu)->ppt_funcs->send_smc_msg_with_param((smu), (msg), (param)) : 0) 82 82 #define smu_read_smc_arg(smu, arg) \
+2 -27
drivers/gpu/drm/amd/powerplay/smu_v11_0.c
··· 90 90 return RREG32_SOC15(MP1, 0, mmMP1_SMN_C2PMSG_90) == 0x1 ? 0 : -EIO; 91 91 } 92 92 93 - int smu_v11_0_send_msg(struct smu_context *smu, uint16_t msg) 94 - { 95 - struct amdgpu_device *adev = smu->adev; 96 - int ret = 0, index = 0; 97 - 98 - index = smu_msg_get_index(smu, msg); 99 - if (index < 0) 100 - return index; 101 - 102 - smu_v11_0_wait_for_response(smu); 103 - 104 - WREG32_SOC15(MP1, 0, mmMP1_SMN_C2PMSG_90, 0); 105 - 106 - smu_v11_0_send_msg_without_waiting(smu, (uint16_t)index); 107 - 108 - ret = smu_v11_0_wait_for_response(smu); 109 - 110 - if (ret) 111 - pr_err("failed send message: %10s (%d) response %#x\n", 112 - smu_get_message_name(smu, msg), index, ret); 113 - 114 - return ret; 115 - 116 - } 117 - 118 93 int 119 - smu_v11_0_send_msg_with_param(struct smu_context *smu, uint16_t msg, 94 + smu_v11_0_send_msg_with_param(struct smu_context *smu, 95 + enum smu_message_type msg, 120 96 uint32_t param) 121 97 { 122 - 123 98 struct amdgpu_device *adev = smu->adev; 124 99 int ret = 0, index = 0; 125 100
+2 -26
drivers/gpu/drm/amd/powerplay/smu_v12_0.c
··· 77 77 return RREG32_SOC15(MP1, 0, mmMP1_SMN_C2PMSG_90) == 0x1 ? 0 : -EIO; 78 78 } 79 79 80 - int smu_v12_0_send_msg(struct smu_context *smu, uint16_t msg) 81 - { 82 - struct amdgpu_device *adev = smu->adev; 83 - int ret = 0, index = 0; 84 - 85 - index = smu_msg_get_index(smu, msg); 86 - if (index < 0) 87 - return index; 88 - 89 - smu_v12_0_wait_for_response(smu); 90 - 91 - WREG32_SOC15(MP1, 0, mmMP1_SMN_C2PMSG_90, 0); 92 - 93 - smu_v12_0_send_msg_without_waiting(smu, (uint16_t)index); 94 - 95 - ret = smu_v12_0_wait_for_response(smu); 96 - 97 - if (ret) 98 - pr_err("Failed to send message 0x%x, response 0x%x\n", index, 99 - ret); 100 - 101 - return ret; 102 - 103 - } 104 - 105 80 int 106 - smu_v12_0_send_msg_with_param(struct smu_context *smu, uint16_t msg, 81 + smu_v12_0_send_msg_with_param(struct smu_context *smu, 82 + enum smu_message_type msg, 107 83 uint32_t param) 108 84 { 109 85 struct amdgpu_device *adev = smu->adev;
-1
drivers/gpu/drm/amd/powerplay/vega20_ppt.c
··· 3231 3231 .set_tool_table_location = smu_v11_0_set_tool_table_location, 3232 3232 .notify_memory_pool_location = smu_v11_0_notify_memory_pool_location, 3233 3233 .system_features_control = smu_v11_0_system_features_control, 3234 - .send_smc_msg = smu_v11_0_send_msg, 3235 3234 .send_smc_msg_with_param = smu_v11_0_send_msg_with_param, 3236 3235 .read_smc_arg = smu_v11_0_read_arg, 3237 3236 .init_display_count = smu_v11_0_init_display_count,
+4 -2
drivers/gpu/drm/drm_dp_mst_topology.c
··· 3176 3176 drm_dp_mst_topology_put_port(port); 3177 3177 } 3178 3178 3179 - for (i = 0; i < mgr->max_payloads; i++) { 3180 - if (mgr->payloads[i].payload_state != DP_PAYLOAD_DELETE_LOCAL) 3179 + for (i = 0; i < mgr->max_payloads; /* do nothing */) { 3180 + if (mgr->payloads[i].payload_state != DP_PAYLOAD_DELETE_LOCAL) { 3181 + i++; 3181 3182 continue; 3183 + } 3182 3184 3183 3185 DRM_DEBUG_KMS("removing payload %d\n", i); 3184 3186 for (j = i; j < mgr->max_payloads - 1; j++) {
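The fix only advances the loop index when the current payload slot is kept, since deleting a DP_PAYLOAD_DELETE_LOCAL entry shifts the next candidate into the same slot; the old loop incremented unconditionally and could skip adjacent entries marked for deletion. A simplified standalone sketch of that compaction pattern (illustrative only, not the DRM code):

#include <string.h>

static void compact(int *vals, int *count, int remove_val)
{
	int i = 0;

	while (i < *count) {
		if (vals[i] != remove_val) {
			i++;	/* slot kept, examine the next one */
			continue;
		}
		/* shift the tail down over the removed slot and re-examine i */
		memmove(&vals[i], &vals[i + 1],
			(*count - i - 1) * sizeof(*vals));
		(*count)--;
	}
}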
+1 -1
drivers/gpu/drm/i915/Kconfig.profile
··· 25 25 26 26 config DRM_I915_PREEMPT_TIMEOUT 27 27 int "Preempt timeout (ms, jiffy granularity)" 28 - default 100 # milliseconds 28 + default 640 # milliseconds 29 29 help 30 30 How long to wait (in milliseconds) for a preemption event to occur 31 31 when submitting a new context via execlists. If the current context
+3 -1
drivers/gpu/drm/i915/display/intel_cdclk.c
··· 1273 1273 1274 1274 static u8 ehl_calc_voltage_level(int cdclk) 1275 1275 { 1276 - if (cdclk > 312000) 1276 + if (cdclk > 326400) 1277 + return 3; 1278 + else if (cdclk > 312000) 1277 1279 return 2; 1278 1280 else if (cdclk > 180000) 1279 1281 return 1;
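With the added threshold, cdclk above 326400 kHz now maps to voltage level 3, 312000-326400 kHz to level 2 and 180000-312000 kHz to level 1, matching the "EHL voltage level display fixes" entry in the summary above.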
+24 -5
drivers/gpu/drm/i915/display/intel_ddi.c
··· 593 593 u32 dkl_de_emphasis_control; 594 594 }; 595 595 596 - static const struct tgl_dkl_phy_ddi_buf_trans tgl_dkl_phy_ddi_translations[] = { 596 + static const struct tgl_dkl_phy_ddi_buf_trans tgl_dkl_phy_dp_ddi_trans[] = { 597 597 /* VS pre-emp Non-trans mV Pre-emph dB */ 598 598 { 0x7, 0x0, 0x00 }, /* 0 0 400mV 0 dB */ 599 599 { 0x5, 0x0, 0x03 }, /* 0 1 400mV 3.5 dB */ ··· 605 605 { 0x2, 0x0, 0x00 }, /* 2 0 800mV 0 dB */ 606 606 { 0x0, 0x0, 0x0B }, /* 2 1 800mV 3.5 dB */ 607 607 { 0x0, 0x0, 0x00 }, /* 3 0 1200mV 0 dB HDMI default */ 608 + }; 609 + 610 + static const struct tgl_dkl_phy_ddi_buf_trans tgl_dkl_phy_hdmi_ddi_trans[] = { 611 + /* HDMI Preset VS Pre-emph */ 612 + { 0x7, 0x0, 0x0 }, /* 1 400mV 0dB */ 613 + { 0x6, 0x0, 0x0 }, /* 2 500mV 0dB */ 614 + { 0x4, 0x0, 0x0 }, /* 3 650mV 0dB */ 615 + { 0x2, 0x0, 0x0 }, /* 4 800mV 0dB */ 616 + { 0x0, 0x0, 0x0 }, /* 5 1000mV 0dB */ 617 + { 0x0, 0x0, 0x5 }, /* 6 Full -1.5 dB */ 618 + { 0x0, 0x0, 0x6 }, /* 7 Full -1.8 dB */ 619 + { 0x0, 0x0, 0x7 }, /* 8 Full -2 dB */ 620 + { 0x0, 0x0, 0x8 }, /* 9 Full -2.5 dB */ 621 + { 0x0, 0x0, 0xA }, /* 10 Full -3 dB */ 608 622 }; 609 623 610 624 static const struct ddi_buf_trans * ··· 912 898 icl_get_combo_buf_trans(dev_priv, INTEL_OUTPUT_HDMI, 913 899 0, &n_entries); 914 900 else 915 - n_entries = ARRAY_SIZE(tgl_dkl_phy_ddi_translations); 901 + n_entries = ARRAY_SIZE(tgl_dkl_phy_hdmi_ddi_trans); 916 902 default_entry = n_entries - 1; 917 903 } else if (INTEL_GEN(dev_priv) == 11) { 918 904 if (intel_phy_is_combo(dev_priv, phy)) ··· 2385 2371 icl_get_combo_buf_trans(dev_priv, encoder->type, 2386 2372 intel_dp->link_rate, &n_entries); 2387 2373 else 2388 - n_entries = ARRAY_SIZE(tgl_dkl_phy_ddi_translations); 2374 + n_entries = ARRAY_SIZE(tgl_dkl_phy_dp_ddi_trans); 2389 2375 } else if (INTEL_GEN(dev_priv) == 11) { 2390 2376 if (intel_phy_is_combo(dev_priv, phy)) 2391 2377 icl_get_combo_buf_trans(dev_priv, encoder->type, ··· 2837 2823 const struct tgl_dkl_phy_ddi_buf_trans *ddi_translations; 2838 2824 u32 n_entries, val, ln, dpcnt_mask, dpcnt_val; 2839 2825 2840 - n_entries = ARRAY_SIZE(tgl_dkl_phy_ddi_translations); 2841 - ddi_translations = tgl_dkl_phy_ddi_translations; 2826 + if (encoder->type == INTEL_OUTPUT_HDMI) { 2827 + n_entries = ARRAY_SIZE(tgl_dkl_phy_hdmi_ddi_trans); 2828 + ddi_translations = tgl_dkl_phy_hdmi_ddi_trans; 2829 + } else { 2830 + n_entries = ARRAY_SIZE(tgl_dkl_phy_dp_ddi_trans); 2831 + ddi_translations = tgl_dkl_phy_dp_ddi_trans; 2832 + } 2842 2833 2843 2834 if (level >= n_entries) 2844 2835 level = n_entries - 1;
+5 -7
drivers/gpu/drm/i915/display/intel_dp.c
··· 5476 5476 return I915_READ(GEN8_DE_PORT_ISR) & bit; 5477 5477 } 5478 5478 5479 - static bool icl_combo_port_connected(struct drm_i915_private *dev_priv, 5480 - struct intel_digital_port *intel_dig_port) 5479 + static bool intel_combo_phy_connected(struct drm_i915_private *dev_priv, 5480 + enum phy phy) 5481 5481 { 5482 - enum port port = intel_dig_port->base.port; 5483 - 5484 - if (HAS_PCH_MCC(dev_priv) && port == PORT_C) 5482 + if (HAS_PCH_MCC(dev_priv) && phy == PHY_C) 5485 5483 return I915_READ(SDEISR) & SDE_TC_HOTPLUG_ICP(PORT_TC1); 5486 5484 5487 - return I915_READ(SDEISR) & SDE_DDI_HOTPLUG_ICP(port); 5485 + return I915_READ(SDEISR) & SDE_DDI_HOTPLUG_ICP(phy); 5488 5486 } 5489 5487 5490 5488 static bool icl_digital_port_connected(struct intel_encoder *encoder) ··· 5492 5494 enum phy phy = intel_port_to_phy(dev_priv, encoder->port); 5493 5495 5494 5496 if (intel_phy_is_combo(dev_priv, phy)) 5495 - return icl_combo_port_connected(dev_priv, dig_port); 5497 + return intel_combo_phy_connected(dev_priv, phy); 5496 5498 else if (intel_phy_is_tc(dev_priv, phy)) 5497 5499 return intel_tc_port_connected(dig_port); 5498 5500 else
+2 -2
drivers/gpu/drm/i915/gem/i915_gem_context.c
··· 368 368 if (!ce->timeline) 369 369 return NULL; 370 370 371 - rcu_read_lock(); 371 + mutex_lock(&ce->timeline->mutex); 372 372 list_for_each_entry_reverse(rq, &ce->timeline->requests, link) { 373 373 if (i915_request_completed(rq)) 374 374 break; ··· 378 378 if (engine) 379 379 break; 380 380 } 381 - rcu_read_unlock(); 381 + mutex_unlock(&ce->timeline->mutex); 382 382 383 383 return engine; 384 384 }
+17 -4
drivers/gpu/drm/i915/gt/intel_context.c
··· 310 310 GEM_BUG_ON(rq->hw_context == ce); 311 311 312 312 if (rcu_access_pointer(rq->timeline) != tl) { /* timeline sharing! */ 313 - err = mutex_lock_interruptible_nested(&tl->mutex, 314 - SINGLE_DEPTH_NESTING); 315 - if (err) 316 - return err; 313 + /* 314 + * Ideally, we just want to insert our foreign fence as 315 + * a barrier into the remove context, such that this operation 316 + * occurs after all current operations in that context, and 317 + * all future operations must occur after this. 318 + * 319 + * Currently, the timeline->last_request tracking is guarded 320 + * by its mutex and so we must obtain that to atomically 321 + * insert our barrier. However, since we already hold our 322 + * timeline->mutex, we must be careful against potential 323 + * inversion if we are the kernel_context as the remote context 324 + * will itself poke at the kernel_context when it needs to 325 + * unpin. Ergo, if already locked, we drop both locks and 326 + * try again (through the magic of userspace repeating EAGAIN). 327 + */ 328 + if (!mutex_trylock(&tl->mutex)) 329 + return -EAGAIN; 317 330 318 331 /* Queue this switch after current activity by this context. */ 319 332 err = i915_active_fence_set(&tl->last_request, rq);
+1 -3
drivers/gpu/drm/i915/gt/intel_engine.h
··· 100 100 static inline struct i915_request * 101 101 execlists_active(const struct intel_engine_execlists *execlists) 102 102 { 103 - GEM_BUG_ON(execlists->active - execlists->inflight > 104 - execlists_num_ports(execlists)); 105 - return READ_ONCE(*execlists->active); 103 + return *READ_ONCE(execlists->active); 106 104 } 107 105 108 106 static inline void
+5 -3
drivers/gpu/drm/i915/gt/intel_engine_cs.c
··· 28 28 29 29 #include "i915_drv.h" 30 30 31 - #include "gt/intel_gt.h" 32 - 31 + #include "intel_context.h" 33 32 #include "intel_engine.h" 34 33 #include "intel_engine_pm.h" 35 34 #include "intel_engine_pool.h" 36 35 #include "intel_engine_user.h" 37 - #include "intel_context.h" 36 + #include "intel_gt.h" 37 + #include "intel_gt_requests.h" 38 38 #include "intel_lrc.h" 39 39 #include "intel_reset.h" 40 40 #include "intel_ring.h" ··· 616 616 intel_engine_init_execlists(engine); 617 617 intel_engine_init_cmd_parser(engine); 618 618 intel_engine_init__pm(engine); 619 + intel_engine_init_retire(engine); 619 620 620 621 intel_engine_pool_init(&engine->pool); 621 622 ··· 839 838 840 839 cleanup_status_page(engine); 841 840 841 + intel_engine_fini_retire(engine); 842 842 intel_engine_pool_fini(&engine->pool); 843 843 intel_engine_fini_breadcrumbs(engine); 844 844 intel_engine_cleanup_cmd_parser(engine);
+58 -9
drivers/gpu/drm/i915/gt/intel_engine_pm.c
··· 73 73 74 74 #endif /* !IS_ENABLED(CONFIG_LOCKDEP) */ 75 75 76 + static void 77 + __queue_and_release_pm(struct i915_request *rq, 78 + struct intel_timeline *tl, 79 + struct intel_engine_cs *engine) 80 + { 81 + struct intel_gt_timelines *timelines = &engine->gt->timelines; 82 + 83 + GEM_TRACE("%s\n", engine->name); 84 + 85 + /* 86 + * We have to serialise all potential retirement paths with our 87 + * submission, as we don't want to underflow either the 88 + * engine->wakeref.counter or our timeline->active_count. 89 + * 90 + * Equally, we cannot allow a new submission to start until 91 + * after we finish queueing, nor could we allow that submitter 92 + * to retire us before we are ready! 93 + */ 94 + spin_lock(&timelines->lock); 95 + 96 + /* Let intel_gt_retire_requests() retire us (acquired under lock) */ 97 + if (!atomic_fetch_inc(&tl->active_count)) 98 + list_add_tail(&tl->link, &timelines->active_list); 99 + 100 + /* Hand the request over to HW and so engine_retire() */ 101 + __i915_request_queue(rq, NULL); 102 + 103 + /* Let new submissions commence (and maybe retire this timeline) */ 104 + __intel_wakeref_defer_park(&engine->wakeref); 105 + 106 + spin_unlock(&timelines->lock); 107 + } 108 + 76 109 static bool switch_to_kernel_context(struct intel_engine_cs *engine) 77 110 { 111 + struct intel_context *ce = engine->kernel_context; 78 112 struct i915_request *rq; 79 113 unsigned long flags; 80 114 bool result = true; ··· 132 98 * This should hold true as we can only park the engine after 133 99 * retiring the last request, thus all rings should be empty and 134 100 * all timelines idle. 101 + * 102 + * For unlocking, there are 2 other parties and the GPU who have a 103 + * stake here. 104 + * 105 + * A new gpu user will be waiting on the engine-pm to start their 106 + * engine_unpark. New waiters are predicated on engine->wakeref.count 107 + * and so intel_wakeref_defer_park() acts like a mutex_unlock of the 108 + * engine->wakeref. 109 + * 110 + * The other party is intel_gt_retire_requests(), which is walking the 111 + * list of active timelines looking for completions. Meanwhile as soon 112 + * as we call __i915_request_queue(), the GPU may complete our request. 113 + * Ergo, if we put ourselves on the timelines.active_list 114 + * (se intel_timeline_enter()) before we increment the 115 + * engine->wakeref.count, we may see the request completion and retire 116 + * it causing an undeflow of the engine->wakeref. 135 117 */ 136 - flags = __timeline_mark_lock(engine->kernel_context); 118 + flags = __timeline_mark_lock(ce); 119 + GEM_BUG_ON(atomic_read(&ce->timeline->active_count) < 0); 137 120 138 - rq = __i915_request_create(engine->kernel_context, GFP_NOWAIT); 121 + rq = __i915_request_create(ce, GFP_NOWAIT); 139 122 if (IS_ERR(rq)) 140 123 /* Context switch failed, hope for the best! Maybe reset? */ 141 124 goto out_unlock; 142 - 143 - intel_timeline_enter(i915_request_timeline(rq)); 144 125 145 126 /* Check again on the next retirement. 
*/ 146 127 engine->wakeref_serial = engine->serial + 1; ··· 165 116 rq->sched.attr.priority = I915_PRIORITY_BARRIER; 166 117 __i915_request_commit(rq); 167 118 168 - /* Release our exclusive hold on the engine */ 169 - __intel_wakeref_defer_park(&engine->wakeref); 170 - __i915_request_queue(rq, NULL); 119 + /* Expose ourselves to the world */ 120 + __queue_and_release_pm(rq, ce->timeline, engine); 171 121 172 122 result = false; 173 123 out_unlock: 174 - __timeline_mark_unlock(engine->kernel_context, flags); 124 + __timeline_mark_unlock(ce, flags); 175 125 return result; 176 126 } 177 127 ··· 225 177 226 178 engine->execlists.no_priolist = false; 227 179 228 - intel_gt_pm_put(engine->gt); 180 + /* While gt calls i915_vma_parked(), we have to break the lock cycle */ 181 + intel_gt_pm_put_async(engine->gt); 229 182 return 0; 230 183 } 231 184
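__queue_and_release_pm() spells out the ordering that keeps the two counters from underflowing: while the timelines lock is held, first pin the kernel context's timeline on the active list, then queue the final request, and only then let new users at the engine wakeref. A much-simplified userspace sketch of that "publish and release under one lock" ordering (pthreads and C11 atomics, invented names; the last step stands in for __intel_wakeref_defer_park()):

#include <pthread.h>
#include <stdatomic.h>

struct engine {
        pthread_mutex_t timelines_lock; /* stands in for gt->timelines.lock */
        atomic_int timeline_active;     /* pins on the active list */
        atomic_int wakeref;             /* engine power reference */
};

static void queue_and_release(struct engine *e)
{
        pthread_mutex_lock(&e->timelines_lock);

        /* 1: make the timeline visible to (and pinned against) retirers */
        atomic_fetch_add(&e->timeline_active, 1);

        /* 2: hand the final request over to the hardware here */

        /* 3: only now may new submitters and retirers run against us */
        atomic_store(&e->wakeref, 1);

        pthread_mutex_unlock(&e->timelines_lock);
}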
+10
drivers/gpu/drm/i915/gt/intel_engine_pm.h
··· 31 31 intel_wakeref_put(&engine->wakeref); 32 32 } 33 33 34 + static inline void intel_engine_pm_put_async(struct intel_engine_cs *engine) 35 + { 36 + intel_wakeref_put_async(&engine->wakeref); 37 + } 38 + 39 + static inline void intel_engine_pm_flush(struct intel_engine_cs *engine) 40 + { 41 + intel_wakeref_unlock_wait(&engine->wakeref); 42 + } 43 + 34 44 void intel_engine_init__pm(struct intel_engine_cs *engine); 35 45 36 46 #endif /* INTEL_ENGINE_PM_H */
+8
drivers/gpu/drm/i915/gt/intel_engine_types.h
··· 451 451 452 452 struct intel_engine_execlists execlists; 453 453 454 + /* 455 + * Keep track of completed timelines on this engine for early 456 + * retirement with the goal of quickly enabling powersaving as 457 + * soon as the engine is idle. 458 + */ 459 + struct intel_timeline *retire; 460 + struct work_struct retire_work; 461 + 454 462 /* status_notifier: list of callbacks for context-switch changes */ 455 463 struct atomic_notifier_head context_status_notifier; 456 464
+1 -2
drivers/gpu/drm/i915/gt/intel_gt_pm.c
··· 105 105 static const struct intel_wakeref_ops wf_ops = { 106 106 .get = __gt_unpark, 107 107 .put = __gt_park, 108 - .flags = INTEL_WAKEREF_PUT_ASYNC, 109 108 }; 110 109 111 110 void intel_gt_pm_init_early(struct intel_gt *gt) ··· 271 272 272 273 static suspend_state_t pm_suspend_target(void) 273 274 { 274 - #if IS_ENABLED(CONFIG_PM_SLEEP) 275 + #if IS_ENABLED(CONFIG_SUSPEND) && IS_ENABLED(CONFIG_PM_SLEEP) 275 276 return pm_suspend_target_state; 276 277 #else 277 278 return PM_SUSPEND_TO_IDLE;
+5
drivers/gpu/drm/i915/gt/intel_gt_pm.h
··· 32 32 intel_wakeref_put(&gt->wakeref); 33 33 } 34 34 35 + static inline void intel_gt_pm_put_async(struct intel_gt *gt) 36 + { 37 + intel_wakeref_put_async(&gt->wakeref); 38 + } 39 + 35 40 static inline int intel_gt_pm_wait_for_idle(struct intel_gt *gt) 36 41 { 37 42 return intel_wakeref_wait_for_idle(&gt->wakeref);
+79 -4
drivers/gpu/drm/i915/gt/intel_gt_requests.c
··· 4 4 * Copyright © 2019 Intel Corporation 5 5 */ 6 6 7 + #include <linux/workqueue.h> 8 + 7 9 #include "i915_drv.h" /* for_each_engine() */ 8 10 #include "i915_request.h" 9 11 #include "intel_gt.h" ··· 31 29 intel_engine_flush_submission(engine); 32 30 } 33 31 32 + static void engine_retire(struct work_struct *work) 33 + { 34 + struct intel_engine_cs *engine = 35 + container_of(work, typeof(*engine), retire_work); 36 + struct intel_timeline *tl = xchg(&engine->retire, NULL); 37 + 38 + do { 39 + struct intel_timeline *next = xchg(&tl->retire, NULL); 40 + 41 + /* 42 + * Our goal here is to retire _idle_ timelines as soon as 43 + * possible (as they are idle, we do not expect userspace 44 + * to be cleaning up anytime soon). 45 + * 46 + * If the timeline is currently locked, either it is being 47 + * retired elsewhere or about to be! 48 + */ 49 + if (mutex_trylock(&tl->mutex)) { 50 + retire_requests(tl); 51 + mutex_unlock(&tl->mutex); 52 + } 53 + intel_timeline_put(tl); 54 + 55 + GEM_BUG_ON(!next); 56 + tl = ptr_mask_bits(next, 1); 57 + } while (tl); 58 + } 59 + 60 + static bool add_retire(struct intel_engine_cs *engine, 61 + struct intel_timeline *tl) 62 + { 63 + struct intel_timeline *first; 64 + 65 + /* 66 + * We open-code a llist here to include the additional tag [BIT(0)] 67 + * so that we know when the timeline is already on a 68 + * retirement queue: either this engine or another. 69 + * 70 + * However, we rely on that a timeline can only be active on a single 71 + * engine at any one time and that add_retire() is called before the 72 + * engine releases the timeline and transferred to another to retire. 73 + */ 74 + 75 + if (READ_ONCE(tl->retire)) /* already queued */ 76 + return false; 77 + 78 + intel_timeline_get(tl); 79 + first = READ_ONCE(engine->retire); 80 + do 81 + tl->retire = ptr_pack_bits(first, 1, 1); 82 + while (!try_cmpxchg(&engine->retire, &first, tl)); 83 + 84 + return !first; 85 + } 86 + 87 + void intel_engine_add_retire(struct intel_engine_cs *engine, 88 + struct intel_timeline *tl) 89 + { 90 + if (add_retire(engine, tl)) 91 + schedule_work(&engine->retire_work); 92 + } 93 + 94 + void intel_engine_init_retire(struct intel_engine_cs *engine) 95 + { 96 + INIT_WORK(&engine->retire_work, engine_retire); 97 + } 98 + 99 + void intel_engine_fini_retire(struct intel_engine_cs *engine) 100 + { 101 + flush_work(&engine->retire_work); 102 + GEM_BUG_ON(engine->retire); 103 + } 104 + 34 105 long intel_gt_retire_requests_timeout(struct intel_gt *gt, long timeout) 35 106 { 36 107 struct intel_gt_timelines *timelines = &gt->timelines; ··· 127 52 } 128 53 129 54 intel_timeline_get(tl); 130 - GEM_BUG_ON(!tl->active_count); 131 - tl->active_count++; /* pin the list element */ 55 + GEM_BUG_ON(!atomic_read(&tl->active_count)); 56 + atomic_inc(&tl->active_count); /* pin the list element */ 132 57 spin_unlock_irqrestore(&timelines->lock, flags); 133 58 134 59 if (timeout > 0) { ··· 149 74 150 75 /* Resume iteration after dropping lock */ 151 76 list_safe_reset_next(tl, tn, link); 152 - if (!--tl->active_count) 77 + if (atomic_dec_and_test(&tl->active_count)) 153 78 list_del(&tl->link); 154 79 else 155 80 active_count += !!rcu_access_pointer(tl->last_request.fence); ··· 158 83 159 84 /* Defer the final release to after the spinlock */ 160 85 if (refcount_dec_and_test(&tl->kref.refcount)) { 161 - GEM_BUG_ON(tl->active_count); 86 + GEM_BUG_ON(atomic_read(&tl->active_count)); 162 87 list_add(&tl->link, &free); 163 88 } 164 89 }
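add_retire() open-codes a lock-free push list: the head (engine->retire) holds a plain node pointer, while each timeline's retire field stores the previous head with bit 0 set, so any non-zero value doubles as an "already queued" marker. The same tagged-pointer list, reduced to standalone C11 with invented names (and, as in the driver, relying on each node being pushed from only one place at a time):

#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

struct node {
        _Atomic uintptr_t next; /* (next-in-list | 1) while queued, 0 when idle */
        int payload;
};

static _Atomic(struct node *) list_head;        /* stand-in for engine->retire */

/* Returns true when this push made the list non-empty, i.e. the caller
 * should kick the worker (cf. intel_engine_add_retire()). */
static bool push(struct node *n)
{
        struct node *first;

        if (atomic_load(&n->next))              /* already on a queue */
                return false;

        first = atomic_load(&list_head);
        do
                atomic_store(&n->next, (uintptr_t)first | 1);
        while (!atomic_compare_exchange_weak(&list_head, &first, n));

        return !first;
}

/* Worker side: take the whole list in one go and walk it. */
static void drain(void (*process)(struct node *))
{
        struct node *n = atomic_exchange(&list_head, NULL);

        while (n) {
                uintptr_t next = atomic_exchange(&n->next, 0);

                process(n);
                n = (struct node *)(next & ~(uintptr_t)1);
        }
}

Because zero is the only "not queued" state, the teardown path in the diff can simply flush the worker and assert that engine->retire is NULL.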
+7
drivers/gpu/drm/i915/gt/intel_gt_requests.h
··· 7 7 #ifndef INTEL_GT_REQUESTS_H 8 8 #define INTEL_GT_REQUESTS_H 9 9 10 + struct intel_engine_cs; 10 11 struct intel_gt; 12 + struct intel_timeline; 11 13 12 14 long intel_gt_retire_requests_timeout(struct intel_gt *gt, long timeout); 13 15 static inline void intel_gt_retire_requests(struct intel_gt *gt) 14 16 { 15 17 intel_gt_retire_requests_timeout(gt, 0); 16 18 } 19 + 20 + void intel_engine_init_retire(struct intel_engine_cs *engine); 21 + void intel_engine_add_retire(struct intel_engine_cs *engine, 22 + struct intel_timeline *tl); 23 + void intel_engine_fini_retire(struct intel_engine_cs *engine); 17 24 18 25 int intel_gt_wait_for_idle(struct intel_gt *gt, long timeout); 19 26
+32 -18
drivers/gpu/drm/i915/gt/intel_lrc.c
··· 142 142 #include "intel_engine_pm.h" 143 143 #include "intel_gt.h" 144 144 #include "intel_gt_pm.h" 145 + #include "intel_gt_requests.h" 145 146 #include "intel_lrc_reg.h" 146 147 #include "intel_mocs.h" 147 148 #include "intel_reset.h" ··· 1116 1115 * refrain from doing non-trivial work here. 1117 1116 */ 1118 1117 1118 + /* 1119 + * If we have just completed this context, the engine may now be 1120 + * idle and we want to re-enter powersaving. 1121 + */ 1122 + if (list_is_last(&rq->link, &ce->timeline->requests) && 1123 + i915_request_completed(rq)) 1124 + intel_engine_add_retire(engine, ce->timeline); 1125 + 1119 1126 intel_engine_context_out(engine); 1120 1127 execlists_context_status_change(rq, INTEL_CONTEXT_SCHEDULE_OUT); 1121 - intel_gt_pm_put(engine->gt); 1128 + intel_gt_pm_put_async(engine->gt); 1122 1129 1123 1130 /* 1124 1131 * If this is part of a virtual engine, its next request may ··· 1946 1937 static void 1947 1938 cancel_port_requests(struct intel_engine_execlists * const execlists) 1948 1939 { 1949 - struct i915_request * const *port, *rq; 1940 + struct i915_request * const *port; 1950 1941 1951 - for (port = execlists->pending; (rq = *port); port++) 1952 - execlists_schedule_out(rq); 1942 + for (port = execlists->pending; *port; port++) 1943 + execlists_schedule_out(*port); 1953 1944 memset(execlists->pending, 0, sizeof(execlists->pending)); 1954 1945 1955 - for (port = execlists->active; (rq = *port); port++) 1956 - execlists_schedule_out(rq); 1957 - execlists->active = 1958 - memset(execlists->inflight, 0, sizeof(execlists->inflight)); 1946 + /* Mark the end of active before we overwrite *active */ 1947 + for (port = xchg(&execlists->active, execlists->pending); *port; port++) 1948 + execlists_schedule_out(*port); 1949 + WRITE_ONCE(execlists->active, 1950 + memset(execlists->inflight, 0, sizeof(execlists->inflight))); 1959 1951 } 1960 1952 1961 1953 static inline void ··· 2109 2099 else 2110 2100 promote = gen8_csb_parse(execlists, buf + 2 * head); 2111 2101 if (promote) { 2102 + struct i915_request * const *old = execlists->active; 2103 + 2104 + /* Point active to the new ELSP; prevent overwriting */ 2105 + WRITE_ONCE(execlists->active, execlists->pending); 2106 + set_timeslice(engine); 2107 + 2112 2108 if (!inject_preempt_hang(execlists)) 2113 2109 ring_set_paused(engine, 0); 2114 2110 2115 2111 /* cancel old inflight, prepare for switch */ 2116 - trace_ports(execlists, "preempted", execlists->active); 2117 - while (*execlists->active) 2118 - execlists_schedule_out(*execlists->active++); 2112 + trace_ports(execlists, "preempted", old); 2113 + while (*old) 2114 + execlists_schedule_out(*old++); 2119 2115 2120 2116 /* switch pending to inflight */ 2121 2117 GEM_BUG_ON(!assert_pending_valid(execlists, "promote")); 2122 - execlists->active = 2123 - memcpy(execlists->inflight, 2124 - execlists->pending, 2125 - execlists_num_ports(execlists) * 2126 - sizeof(*execlists->pending)); 2127 - 2128 - set_timeslice(engine); 2118 + WRITE_ONCE(execlists->active, 2119 + memcpy(execlists->inflight, 2120 + execlists->pending, 2121 + execlists_num_ports(execlists) * 2122 + sizeof(*execlists->pending))); 2129 2123 2130 2124 WRITE_ONCE(execlists->pending[0], NULL); 2131 2125 } else {
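Both cancel_port_requests() and the promotion path now update execlists->active with WRITE_ONCE()/xchg() before the arrays behind it are retired or overwritten, so a lockless execlists_active() reader never peeks into a buffer that is in the middle of a memcpy(). The ordering, boiled down to a standalone C11 sketch (invented names; 'active' is the analogue of execlists->active):

#include <stdatomic.h>
#include <string.h>

#define PORTS 2

struct req;                                     /* opaque in this sketch */

static struct req *inflight[PORTS + 1];         /* NULL-terminated */
static struct req *pending[PORTS + 1];
static _Atomic(struct req **) active = inflight;

static void promote(void)
{
        struct req **old = atomic_load(&active);

        /* 1: flip readers over to the freshly written pending[] */
        atomic_store(&active, pending);

        /* 2: the old entries can now be retired; readers no longer see them */
        (void)old;

        /* 3: recycle inflight[] and point readers back at it */
        memcpy(inflight, pending, sizeof(inflight));
        atomic_store(&active, inflight);
}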
+1 -1
drivers/gpu/drm/i915/gt/intel_reset.c
··· 1114 1114 out: 1115 1115 intel_engine_cancel_stop_cs(engine); 1116 1116 reset_finish_engine(engine); 1117 - intel_engine_pm_put(engine); 1117 + intel_engine_pm_put_async(engine); 1118 1118 return ret; 1119 1119 } 1120 1120
+4 -9
drivers/gpu/drm/i915/gt/intel_ring.c
··· 57 57 58 58 i915_vma_make_unshrinkable(vma); 59 59 60 - GEM_BUG_ON(ring->vaddr); 61 - ring->vaddr = addr; 60 + /* Discard any unused bytes beyond that submitted to hw. */ 61 + intel_ring_reset(ring, ring->emit); 62 62 63 + ring->vaddr = addr; 63 64 return 0; 64 65 65 66 err_ring: ··· 86 85 if (!atomic_dec_and_test(&ring->pin_count)) 87 86 return; 88 87 89 - /* Discard any unused bytes beyond that submitted to hw. */ 90 - intel_ring_reset(ring, ring->emit); 91 - 92 88 i915_vma_unset_ggtt_write(vma); 93 89 if (i915_vma_is_map_and_fenceable(vma)) 94 90 i915_vma_unpin_iomap(vma); 95 91 else 96 92 i915_gem_object_unpin_map(vma->obj); 97 93 98 - GEM_BUG_ON(!ring->vaddr); 99 - ring->vaddr = NULL; 100 - 101 - i915_vma_unpin(vma); 102 94 i915_vma_make_purgeable(vma); 95 + i915_vma_unpin(vma); 103 96 } 104 97 105 98 static struct i915_vma *create_ring_vma(struct i915_ggtt *ggtt, int size)
+28 -7
drivers/gpu/drm/i915/gt/intel_timeline.c
··· 282 282 { 283 283 GEM_BUG_ON(atomic_read(&timeline->pin_count)); 284 284 GEM_BUG_ON(!list_empty(&timeline->requests)); 285 + GEM_BUG_ON(timeline->retire); 285 286 286 287 if (timeline->hwsp_cacheline) 287 288 cacheline_free(timeline->hwsp_cacheline); ··· 340 339 struct intel_gt_timelines *timelines = &tl->gt->timelines; 341 340 unsigned long flags; 342 341 342 + /* 343 + * Pretend we are serialised by the timeline->mutex. 344 + * 345 + * While generally true, there are a few exceptions to the rule 346 + * for the engine->kernel_context being used to manage power 347 + * transitions. As the engine_park may be called from under any 348 + * timeline, it uses the power mutex as a global serialisation 349 + * lock to prevent any other request entering its timeline. 350 + * 351 + * The rule is generally tl->mutex, otherwise engine->wakeref.mutex. 352 + * 353 + * However, intel_gt_retire_request() does not know which engine 354 + * it is retiring along and so cannot partake in the engine-pm 355 + * barrier, and there we use the tl->active_count as a means to 356 + * pin the timeline in the active_list while the locks are dropped. 357 + * Ergo, as that is outside of the engine-pm barrier, we need to 358 + * use atomic to manipulate tl->active_count. 359 + */ 343 360 lockdep_assert_held(&tl->mutex); 344 - 345 361 GEM_BUG_ON(!atomic_read(&tl->pin_count)); 346 - if (tl->active_count++) 362 + 363 + if (atomic_add_unless(&tl->active_count, 1, 0)) 347 364 return; 348 - GEM_BUG_ON(!tl->active_count); /* overflow? */ 349 365 350 366 spin_lock_irqsave(&timelines->lock, flags); 351 - list_add(&tl->link, &timelines->active_list); 367 + if (!atomic_fetch_inc(&tl->active_count)) 368 + list_add_tail(&tl->link, &timelines->active_list); 352 369 spin_unlock_irqrestore(&timelines->lock, flags); 353 370 } 354 371 ··· 375 356 struct intel_gt_timelines *timelines = &tl->gt->timelines; 376 357 unsigned long flags; 377 358 359 + /* See intel_timeline_enter() */ 378 360 lockdep_assert_held(&tl->mutex); 379 361 380 - GEM_BUG_ON(!tl->active_count); 381 - if (--tl->active_count) 362 + GEM_BUG_ON(!atomic_read(&tl->active_count)); 363 + if (atomic_add_unless(&tl->active_count, -1, 1)) 382 364 return; 383 365 384 366 spin_lock_irqsave(&timelines->lock, flags); 385 - list_del(&tl->link); 367 + if (atomic_dec_and_test(&tl->active_count)) 368 + list_del(&tl->link); 386 369 spin_unlock_irqrestore(&timelines->lock, flags); 387 370 388 371 /*
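intel_timeline_enter()/exit() now manage active_count with atomics so that only the 0 <-> 1 transitions ever take the timelines spinlock: the common case is a bare "increment if already non-zero" or "decrement unless last". The shape of that fast-path/slow-path split, as a standalone pthreads + C11 sketch with invented names (the bool stands in for list_add()/list_del() of tl->link):

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>

static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;

struct timeline {
        atomic_int active_count;
        bool on_list;
};

/* Emulates atomic_add_unless(&v, 1, 0): add 1 unless the value is 0. */
static bool inc_not_zero(atomic_int *v)
{
        int old = atomic_load(v);

        while (old != 0)
                if (atomic_compare_exchange_weak(v, &old, old + 1))
                        return true;
        return false;
}

static void timeline_enter(struct timeline *tl)
{
        if (inc_not_zero(&tl->active_count))    /* fast path: already active */
                return;

        pthread_mutex_lock(&list_lock);
        if (atomic_fetch_add(&tl->active_count, 1) == 0)
                tl->on_list = true;             /* first user: publish */
        pthread_mutex_unlock(&list_lock);
}

static void timeline_exit(struct timeline *tl)
{
        int old = atomic_load(&tl->active_count);

        /* fast path: decrement as long as we are not the last user */
        while (old > 1)
                if (atomic_compare_exchange_weak(&tl->active_count, &old, old - 1))
                        return;

        pthread_mutex_lock(&list_lock);
        if (atomic_fetch_sub(&tl->active_count, 1) == 1)
                tl->on_list = false;            /* last user: drop off */
        pthread_mutex_unlock(&list_lock);
}

This also tolerates the extra pin taken in intel_gt_retire_requests_timeout(), which bumps active_count with a bare atomic_inc() while holding only the spinlock.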
+4 -1
drivers/gpu/drm/i915/gt/intel_timeline_types.h
··· 42 42 * from the intel_context caller plus internal atomicity. 43 43 */ 44 44 atomic_t pin_count; 45 - unsigned int active_count; 45 + atomic_t active_count; 46 46 47 47 const u32 *hwsp_seqno; 48 48 struct i915_vma *hwsp_ggtt; ··· 65 65 * protection themselves (cf the i915_active_fence API). 66 66 */ 67 67 struct i915_active_fence last_request; 68 + 69 + /** A chain of completed timelines ready for early retirement. */ 70 + struct intel_timeline *retire; 68 71 69 72 /** 70 73 * We track the most recent seqno that we wait on in every context so
+4 -3
drivers/gpu/drm/i915/gt/selftest_engine_pm.c
··· 51 51 pr_err("intel_engine_pm_get_if_awake(%s) failed under %s\n", 52 52 engine->name, p->name); 53 53 else 54 - intel_engine_pm_put(engine); 55 - intel_engine_pm_put(engine); 54 + intel_engine_pm_put_async(engine); 55 + intel_engine_pm_put_async(engine); 56 56 p->critical_section_end(); 57 57 58 - /* engine wakeref is sync (instant) */ 58 + intel_engine_pm_flush(engine); 59 + 59 60 if (intel_engine_pm_is_awake(engine)) { 60 61 pr_err("%s is still awake after flushing pm\n", 61 62 engine->name);
+3 -3
drivers/gpu/drm/i915/gvt/cmd_parser.c
··· 1599 1599 if (!(cmd_val(s, 0) & (1 << 22))) 1600 1600 return ret; 1601 1601 1602 - /* check if QWORD */ 1603 - if (DWORD_FIELD(0, 20, 19) == 1) 1604 - valid_len += 8; 1602 + /* check inline data */ 1603 + if (cmd_val(s, 0) & BIT(18)) 1604 + valid_len = CMD_LEN(9); 1605 1605 ret = gvt_check_valid_cmd_length(cmd_length(s), 1606 1606 valid_len); 1607 1607 if (ret)
+3 -2
drivers/gpu/drm/i915/gvt/handlers.c
··· 460 460 static i915_reg_t force_nonpriv_white_list[] = { 461 461 GEN9_CS_DEBUG_MODE1, //_MMIO(0x20ec) 462 462 GEN9_CTX_PREEMPT_REG,//_MMIO(0x2248) 463 + PS_INVOCATION_COUNT,//_MMIO(0x2348) 463 464 GEN8_CS_CHICKEN1,//_MMIO(0x2580) 464 465 _MMIO(0x2690), 465 466 _MMIO(0x2694), ··· 509 508 static int force_nonpriv_write(struct intel_vgpu *vgpu, 510 509 unsigned int offset, void *p_data, unsigned int bytes) 511 510 { 512 - u32 reg_nonpriv = *(u32 *)p_data; 511 + u32 reg_nonpriv = (*(u32 *)p_data) & REG_GENMASK(25, 2); 513 512 int ring_id = intel_gvt_render_mmio_to_ring_id(vgpu->gvt, offset); 514 513 u32 ring_base; 515 514 struct drm_i915_private *dev_priv = vgpu->gvt->dev_priv; ··· 529 528 bytes); 530 529 } else 531 530 gvt_err("vgpu(%d) Invalid FORCE_NONPRIV write %x at offset %x\n", 532 - vgpu->id, reg_nonpriv, offset); 531 + vgpu->id, *(u32 *)p_data, offset); 533 532 534 533 return 0; 535 534 }
+3 -2
drivers/gpu/drm/i915/i915_active.c
··· 672 672 * populated by i915_request_add_active_barriers() to point to the 673 673 * request that will eventually release them. 674 674 */ 675 - spin_lock_irqsave_nested(&ref->tree_lock, flags, SINGLE_DEPTH_NESTING); 676 675 llist_for_each_safe(pos, next, take_preallocated_barriers(ref)) { 677 676 struct active_node *node = barrier_from_ll(pos); 678 677 struct intel_engine_cs *engine = barrier_to_engine(node); 679 678 struct rb_node **p, *parent; 680 679 680 + spin_lock_irqsave_nested(&ref->tree_lock, flags, 681 + SINGLE_DEPTH_NESTING); 681 682 parent = NULL; 682 683 p = &ref->tree.rb_node; 683 684 while (*p) { ··· 694 693 } 695 694 rb_link_node(&node->node, parent, p); 696 695 rb_insert_color(&node->node, &ref->tree); 696 + spin_unlock_irqrestore(&ref->tree_lock, flags); 697 697 698 698 GEM_BUG_ON(!intel_engine_pm_is_awake(engine)); 699 699 llist_add(barrier_to_ll(node), &engine->barrier_tasks); 700 700 intel_engine_pm_put(engine); 701 701 } 702 - spin_unlock_irqrestore(&ref->tree_lock, flags); 703 702 } 704 703 705 704 void i915_request_add_active_barriers(struct i915_request *rq)
+3 -3
drivers/gpu/drm/i915/i915_pmu.c
··· 190 190 val = 0; 191 191 if (intel_gt_pm_get_if_awake(gt)) { 192 192 val = __get_rc6(gt); 193 - intel_gt_pm_put(gt); 193 + intel_gt_pm_put_async(gt); 194 194 } 195 195 196 196 spin_lock_irqsave(&pmu->lock, flags); ··· 343 343 344 344 skip: 345 345 spin_unlock_irqrestore(&engine->uncore->lock, flags); 346 - intel_engine_pm_put(engine); 346 + intel_engine_pm_put_async(engine); 347 347 } 348 348 } 349 349 ··· 368 368 if (intel_gt_pm_get_if_awake(gt)) { 369 369 val = intel_uncore_read_notrace(uncore, GEN6_RPSTAT1); 370 370 val = intel_get_cagf(rps, val); 371 - intel_gt_pm_put(gt); 371 + intel_gt_pm_put_async(gt); 372 372 } 373 373 374 374 add_sample_mult(&pmu->sample[__I915_SAMPLE_FREQ_ACT],
+5 -2
drivers/gpu/drm/i915/i915_query.c
··· 103 103 struct drm_i915_engine_info __user *info_ptr; 104 104 struct drm_i915_query_engine_info query; 105 105 struct drm_i915_engine_info info = { }; 106 + unsigned int num_uabi_engines = 0; 106 107 struct intel_engine_cs *engine; 107 108 int len, ret; 108 109 109 110 if (query_item->flags) 110 111 return -EINVAL; 111 112 113 + for_each_uabi_engine(engine, i915) 114 + num_uabi_engines++; 115 + 112 116 len = sizeof(struct drm_i915_query_engine_info) + 113 - RUNTIME_INFO(i915)->num_engines * 114 - sizeof(struct drm_i915_engine_info); 117 + num_uabi_engines * sizeof(struct drm_i915_engine_info); 115 118 116 119 ret = copy_query_item(&query, sizeof(query), len, query_item); 117 120 if (ret != 0)
+15 -6
drivers/gpu/drm/i915/intel_wakeref.c
··· 54 54 55 55 static void ____intel_wakeref_put_last(struct intel_wakeref *wf) 56 56 { 57 - if (!atomic_dec_and_test(&wf->count)) 57 + INTEL_WAKEREF_BUG_ON(atomic_read(&wf->count) <= 0); 58 + if (unlikely(!atomic_dec_and_test(&wf->count))) 58 59 goto unlock; 59 60 60 61 /* ops->put() must reschedule its own release on error/deferral */ ··· 68 67 mutex_unlock(&wf->mutex); 69 68 } 70 69 71 - void __intel_wakeref_put_last(struct intel_wakeref *wf) 70 + void __intel_wakeref_put_last(struct intel_wakeref *wf, unsigned long flags) 72 71 { 73 72 INTEL_WAKEREF_BUG_ON(work_pending(&wf->work)); 74 73 75 74 /* Assume we are not in process context and so cannot sleep. */ 76 - if (wf->ops->flags & INTEL_WAKEREF_PUT_ASYNC || 77 - !mutex_trylock(&wf->mutex)) { 75 + if (flags & INTEL_WAKEREF_PUT_ASYNC || !mutex_trylock(&wf->mutex)) { 78 76 schedule_work(&wf->work); 79 77 return; 80 78 } ··· 109 109 110 110 int intel_wakeref_wait_for_idle(struct intel_wakeref *wf) 111 111 { 112 - return wait_var_event_killable(&wf->wakeref, 113 - !intel_wakeref_is_active(wf)); 112 + int err; 113 + 114 + might_sleep(); 115 + 116 + err = wait_var_event_killable(&wf->wakeref, 117 + !intel_wakeref_is_active(wf)); 118 + if (err) 119 + return err; 120 + 121 + intel_wakeref_unlock_wait(wf); 122 + return 0; 114 123 } 115 124 116 125 static void wakeref_auto_timeout(struct timer_list *t)
+36 -9
drivers/gpu/drm/i915/intel_wakeref.h
··· 9 9 10 10 #include <linux/atomic.h> 11 11 #include <linux/bits.h> 12 + #include <linux/lockdep.h> 12 13 #include <linux/mutex.h> 13 14 #include <linux/refcount.h> 14 15 #include <linux/stackdepot.h> ··· 30 29 struct intel_wakeref_ops { 31 30 int (*get)(struct intel_wakeref *wf); 32 31 int (*put)(struct intel_wakeref *wf); 33 - 34 - unsigned long flags; 35 - #define INTEL_WAKEREF_PUT_ASYNC BIT(0) 36 32 }; 37 33 38 34 struct intel_wakeref { ··· 55 57 } while (0) 56 58 57 59 int __intel_wakeref_get_first(struct intel_wakeref *wf); 58 - void __intel_wakeref_put_last(struct intel_wakeref *wf); 60 + void __intel_wakeref_put_last(struct intel_wakeref *wf, unsigned long flags); 59 61 60 62 /** 61 63 * intel_wakeref_get: Acquire the wakeref ··· 98 100 } 99 101 100 102 /** 101 - * intel_wakeref_put: Release the wakeref 102 - * @i915: the drm_i915_private device 103 + * intel_wakeref_put_flags: Release the wakeref 103 104 * @wf: the wakeref 104 - * @fn: callback for releasing the wakeref, called only on final release. 105 + * @flags: control flags 105 106 * 106 107 * Release our hold on the wakeref. When there are no more users, 107 108 * the runtime pm wakeref will be released after the @fn callback is called ··· 113 116 * code otherwise. 114 117 */ 115 118 static inline void 116 - intel_wakeref_put(struct intel_wakeref *wf) 119 + __intel_wakeref_put(struct intel_wakeref *wf, unsigned long flags) 120 + #define INTEL_WAKEREF_PUT_ASYNC BIT(0) 117 121 { 118 122 INTEL_WAKEREF_BUG_ON(atomic_read(&wf->count) <= 0); 119 123 if (unlikely(!atomic_add_unless(&wf->count, -1, 1))) 120 - __intel_wakeref_put_last(wf); 124 + __intel_wakeref_put_last(wf, flags); 125 + } 126 + 127 + static inline void 128 + intel_wakeref_put(struct intel_wakeref *wf) 129 + { 130 + might_sleep(); 131 + __intel_wakeref_put(wf, 0); 132 + } 133 + 134 + static inline void 135 + intel_wakeref_put_async(struct intel_wakeref *wf) 136 + { 137 + __intel_wakeref_put(wf, INTEL_WAKEREF_PUT_ASYNC); 121 138 } 122 139 123 140 /** ··· 163 152 } 164 153 165 154 /** 155 + * intel_wakeref_unlock_wait: Wait until the active callback is complete 156 + * @wf: the wakeref 157 + * 158 + * Waits for the active callback (under the @wf->mutex or another CPU) is 159 + * complete. 160 + */ 161 + static inline void 162 + intel_wakeref_unlock_wait(struct intel_wakeref *wf) 163 + { 164 + mutex_lock(&wf->mutex); 165 + mutex_unlock(&wf->mutex); 166 + flush_work(&wf->work); 167 + } 168 + 169 + /** 166 170 * intel_wakeref_is_active: Query whether the wakeref is currently held 167 171 * @wf: the wakeref 168 172 * ··· 196 170 static inline void 197 171 __intel_wakeref_defer_park(struct intel_wakeref *wf) 198 172 { 173 + lockdep_assert_held(&wf->mutex); 199 174 INTEL_WAKEREF_BUG_ON(atomic_read(&wf->count)); 200 175 atomic_set_release(&wf->count, 1); 201 176 }
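intel_wakeref_unlock_wait() is deliberately just "lock, unlock": acquiring and immediately releasing wf->mutex acts as a barrier that waits out whichever CPU is currently running the release callback under that mutex, and the flush_work() catches a release that was punted to the workqueue. The lock-as-barrier half of that, as a tiny pthreads sketch with invented names:

#include <pthread.h>

struct wakeref {
        pthread_mutex_t mutex;  /* held while the park/put callback runs */
};

/*
 * Returns only once any callback currently running under wf->mutex has
 * finished; we do not keep the lock, we only wait for it to be free once.
 */
static void wakeref_unlock_wait(struct wakeref *wf)
{
        pthread_mutex_lock(&wf->mutex);
        pthread_mutex_unlock(&wf->mutex);
}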
+35 -1
drivers/gpu/drm/mgag200/mgag200_drv.c
··· 30 30 static struct drm_driver driver; 31 31 32 32 static const struct pci_device_id pciidlist[] = { 33 + { PCI_VENDOR_ID_MATROX, 0x522, PCI_VENDOR_ID_SUN, 0x4852, 0, 0, 34 + G200_SE_A | MGAG200_FLAG_HW_BUG_NO_STARTADD}, 33 35 { PCI_VENDOR_ID_MATROX, 0x522, PCI_ANY_ID, PCI_ANY_ID, 0, 0, G200_SE_A }, 34 36 { PCI_VENDOR_ID_MATROX, 0x524, PCI_ANY_ID, PCI_ANY_ID, 0, 0, G200_SE_B }, 35 37 { PCI_VENDOR_ID_MATROX, 0x530, PCI_ANY_ID, PCI_ANY_ID, 0, 0, G200_EV }, ··· 62 60 63 61 DEFINE_DRM_GEM_FOPS(mgag200_driver_fops); 64 62 63 + static bool mgag200_pin_bo_at_0(const struct mga_device *mdev) 64 + { 65 + return mdev->flags & MGAG200_FLAG_HW_BUG_NO_STARTADD; 66 + } 67 + 68 + int mgag200_driver_dumb_create(struct drm_file *file, 69 + struct drm_device *dev, 70 + struct drm_mode_create_dumb *args) 71 + { 72 + struct mga_device *mdev = dev->dev_private; 73 + unsigned long pg_align; 74 + 75 + if (WARN_ONCE(!dev->vram_mm, "VRAM MM not initialized")) 76 + return -EINVAL; 77 + 78 + pg_align = 0ul; 79 + 80 + /* 81 + * Aligning scanout buffers to the size of the video ram forces 82 + * placement at offset 0. Works around a bug where HW does not 83 + * respect 'startadd' field. 84 + */ 85 + if (mgag200_pin_bo_at_0(mdev)) 86 + pg_align = PFN_UP(mdev->mc.vram_size); 87 + 88 + return drm_gem_vram_fill_create_dumb(file, dev, &dev->vram_mm->bdev, 89 + pg_align, false, args); 90 + } 91 + 65 92 static struct drm_driver driver = { 66 93 .driver_features = DRIVER_GEM | DRIVER_MODESET, 67 94 .load = mgag200_driver_load, ··· 102 71 .major = DRIVER_MAJOR, 103 72 .minor = DRIVER_MINOR, 104 73 .patchlevel = DRIVER_PATCHLEVEL, 105 - DRM_GEM_VRAM_DRIVER 74 + .debugfs_init = drm_vram_mm_debugfs_init, 75 + .dumb_create = mgag200_driver_dumb_create, 76 + .dumb_map_offset = drm_gem_vram_driver_dumb_mmap_offset, 77 + .gem_prime_mmap = drm_gem_prime_mmap, 106 78 }; 107 79 108 80 static struct pci_driver mgag200_pci_driver = {
+18
drivers/gpu/drm/mgag200/mgag200_drv.h
··· 150 150 G200_EW3, 151 151 }; 152 152 153 + /* HW does not handle 'startadd' field correct. */ 154 + #define MGAG200_FLAG_HW_BUG_NO_STARTADD (1ul << 8) 155 + 156 + #define MGAG200_TYPE_MASK (0x000000ff) 157 + #define MGAG200_FLAG_MASK (0x00ffff00) 158 + 153 159 #define IS_G200_SE(mdev) (mdev->type == G200_SE_A || mdev->type == G200_SE_B) 154 160 155 161 struct mga_device { ··· 186 180 /* SE model number stored in reg 0x1e24 */ 187 181 u32 unique_rev_id; 188 182 }; 183 + 184 + static inline enum mga_type 185 + mgag200_type_from_driver_data(kernel_ulong_t driver_data) 186 + { 187 + return (enum mga_type)(driver_data & MGAG200_TYPE_MASK); 188 + } 189 + 190 + static inline unsigned long 191 + mgag200_flags_from_driver_data(kernel_ulong_t driver_data) 192 + { 193 + return driver_data & MGAG200_FLAG_MASK; 194 + } 189 195 190 196 /* mgag200_mode.c */ 191 197 int mgag200_modeset_init(struct mga_device *mdev);
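The PCI table can now carry both the chip type (low byte) and behaviour flags such as MGAG200_FLAG_HW_BUG_NO_STARTADD in the single driver_data word, and the two new inline helpers mask them back apart. The encode/decode itself is easy to check in isolation; a throwaway sketch with locally defined constants (mirroring, not reusing, the driver's masks):

#include <assert.h>

#define TYPE_MASK        0x000000fful
#define FLAG_MASK        0x00ffff00ul
#define FLAG_NO_STARTADD (1ul << 8)

enum chip_type { G200_SE_A, G200_SE_B, G200_EV };

int main(void)
{
        unsigned long driver_data = G200_EV | FLAG_NO_STARTADD;

        assert((enum chip_type)(driver_data & TYPE_MASK) == G200_EV);
        assert((driver_data & FLAG_MASK) == FLAG_NO_STARTADD);
        return 0;
}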
+2 -1
drivers/gpu/drm/mgag200/mgag200_main.c
··· 94 94 struct mga_device *mdev = dev->dev_private; 95 95 int ret, option; 96 96 97 - mdev->type = flags; 97 + mdev->flags = mgag200_flags_from_driver_data(flags); 98 + mdev->type = mgag200_type_from_driver_data(flags); 98 99 99 100 /* Hardcode the number of CRTCs to 1 */ 100 101 mdev->num_crtc = 1;
+1
drivers/gpu/drm/msm/Kconfig
··· 7 7 depends on OF && COMMON_CLK 8 8 depends on MMU 9 9 depends on INTERCONNECT || !INTERCONNECT 10 + depends on QCOM_OCMEM || QCOM_OCMEM=n 10 11 select QCOM_MDT_LOADER if ARCH_QCOM 11 12 select REGULATOR 12 13 select DRM_KMS_HELPER
+7 -21
drivers/gpu/drm/msm/adreno/a3xx_gpu.c
··· 6 6 * Copyright (c) 2014 The Linux Foundation. All rights reserved. 7 7 */ 8 8 9 - #ifdef CONFIG_MSM_OCMEM 10 - # include <mach/ocmem.h> 11 - #endif 12 - 13 9 #include "a3xx_gpu.h" 14 10 15 11 #define A3XX_INT0_MASK \ ··· 191 195 gpu_write(gpu, REG_A3XX_RBBM_GPR0_CTL, 0x00000000); 192 196 193 197 /* Set the OCMEM base address for A330, etc */ 194 - if (a3xx_gpu->ocmem_hdl) { 198 + if (a3xx_gpu->ocmem.hdl) { 195 199 gpu_write(gpu, REG_A3XX_RB_GMEM_BASE_ADDR, 196 - (unsigned int)(a3xx_gpu->ocmem_base >> 14)); 200 + (unsigned int)(a3xx_gpu->ocmem.base >> 14)); 197 201 } 198 202 199 203 /* Turn on performance counters: */ ··· 314 318 315 319 adreno_gpu_cleanup(adreno_gpu); 316 320 317 - #ifdef CONFIG_MSM_OCMEM 318 - if (a3xx_gpu->ocmem_base) 319 - ocmem_free(OCMEM_GRAPHICS, a3xx_gpu->ocmem_hdl); 320 - #endif 321 + adreno_gpu_ocmem_cleanup(&a3xx_gpu->ocmem); 321 322 322 323 kfree(a3xx_gpu); 323 324 } ··· 487 494 488 495 /* if needed, allocate gmem: */ 489 496 if (adreno_is_a330(adreno_gpu)) { 490 - #ifdef CONFIG_MSM_OCMEM 491 - /* TODO this is different/missing upstream: */ 492 - struct ocmem_buf *ocmem_hdl = 493 - ocmem_allocate(OCMEM_GRAPHICS, adreno_gpu->gmem); 494 - 495 - a3xx_gpu->ocmem_hdl = ocmem_hdl; 496 - a3xx_gpu->ocmem_base = ocmem_hdl->addr; 497 - adreno_gpu->gmem = ocmem_hdl->len; 498 - DBG("using %dK of OCMEM at 0x%08x", adreno_gpu->gmem / 1024, 499 - a3xx_gpu->ocmem_base); 500 - #endif 497 + ret = adreno_gpu_ocmem_init(&adreno_gpu->base.pdev->dev, 498 + adreno_gpu, &a3xx_gpu->ocmem); 499 + if (ret) 500 + goto fail; 501 501 } 502 502 503 503 if (!gpu->aspace) {
+1 -2
drivers/gpu/drm/msm/adreno/a3xx_gpu.h
··· 19 19 struct adreno_gpu base; 20 20 21 21 /* if OCMEM is used for GMEM: */ 22 - uint32_t ocmem_base; 23 - void *ocmem_hdl; 22 + struct adreno_ocmem ocmem; 24 23 }; 25 24 #define to_a3xx_gpu(x) container_of(x, struct a3xx_gpu, base) 26 25
+6 -19
drivers/gpu/drm/msm/adreno/a4xx_gpu.c
··· 2 2 /* Copyright (c) 2014 The Linux Foundation. All rights reserved. 3 3 */ 4 4 #include "a4xx_gpu.h" 5 - #ifdef CONFIG_MSM_OCMEM 6 - # include <soc/qcom/ocmem.h> 7 - #endif 8 5 9 6 #define A4XX_INT0_MASK \ 10 7 (A4XX_INT0_RBBM_AHB_ERROR | \ ··· 185 188 (1 << 30) | 0xFFFF); 186 189 187 190 gpu_write(gpu, REG_A4XX_RB_GMEM_BASE_ADDR, 188 - (unsigned int)(a4xx_gpu->ocmem_base >> 14)); 191 + (unsigned int)(a4xx_gpu->ocmem.base >> 14)); 189 192 190 193 /* Turn on performance counters: */ 191 194 gpu_write(gpu, REG_A4XX_RBBM_PERFCTR_CTL, 0x01); ··· 315 318 316 319 adreno_gpu_cleanup(adreno_gpu); 317 320 318 - #ifdef CONFIG_MSM_OCMEM 319 - if (a4xx_gpu->ocmem_base) 320 - ocmem_free(OCMEM_GRAPHICS, a4xx_gpu->ocmem_hdl); 321 - #endif 321 + adreno_gpu_ocmem_cleanup(&a4xx_gpu->ocmem); 322 322 323 323 kfree(a4xx_gpu); 324 324 } ··· 572 578 573 579 /* if needed, allocate gmem: */ 574 580 if (adreno_is_a4xx(adreno_gpu)) { 575 - #ifdef CONFIG_MSM_OCMEM 576 - /* TODO this is different/missing upstream: */ 577 - struct ocmem_buf *ocmem_hdl = 578 - ocmem_allocate(OCMEM_GRAPHICS, adreno_gpu->gmem); 579 - 580 - a4xx_gpu->ocmem_hdl = ocmem_hdl; 581 - a4xx_gpu->ocmem_base = ocmem_hdl->addr; 582 - adreno_gpu->gmem = ocmem_hdl->len; 583 - DBG("using %dK of OCMEM at 0x%08x", adreno_gpu->gmem / 1024, 584 - a4xx_gpu->ocmem_base); 585 - #endif 581 + ret = adreno_gpu_ocmem_init(dev->dev, adreno_gpu, 582 + &a4xx_gpu->ocmem); 583 + if (ret) 584 + goto fail; 586 585 } 587 586 588 587 if (!gpu->aspace) {
+1 -2
drivers/gpu/drm/msm/adreno/a4xx_gpu.h
··· 16 16 struct adreno_gpu base; 17 17 18 18 /* if OCMEM is used for GMEM: */ 19 - uint32_t ocmem_base; 20 - void *ocmem_hdl; 19 + struct adreno_ocmem ocmem; 21 20 }; 22 21 #define to_a4xx_gpu(x) container_of(x, struct a4xx_gpu, base) 23 22
+62 -17
drivers/gpu/drm/msm/adreno/a5xx_gpu.c
··· 353 353 * 2D mode 3 draw 354 354 */ 355 355 OUT_RING(ring, 0x0000000B); 356 + } else if (adreno_is_a510(adreno_gpu)) { 357 + /* Workaround for token and syncs */ 358 + OUT_RING(ring, 0x00000001); 356 359 } else { 357 360 /* No workarounds enabled */ 358 361 OUT_RING(ring, 0x00000000); ··· 571 568 0x00100000 + adreno_gpu->gmem - 1); 572 569 gpu_write(gpu, REG_A5XX_UCHE_GMEM_RANGE_MAX_HI, 0x00000000); 573 570 574 - gpu_write(gpu, REG_A5XX_CP_MEQ_THRESHOLDS, 0x40); 575 - if (adreno_is_a530(adreno_gpu)) 576 - gpu_write(gpu, REG_A5XX_CP_MERCIU_SIZE, 0x40); 577 - if (adreno_is_a540(adreno_gpu)) 578 - gpu_write(gpu, REG_A5XX_CP_MERCIU_SIZE, 0x400); 579 - gpu_write(gpu, REG_A5XX_CP_ROQ_THRESHOLDS_2, 0x80000060); 580 - gpu_write(gpu, REG_A5XX_CP_ROQ_THRESHOLDS_1, 0x40201B16); 581 - 582 - gpu_write(gpu, REG_A5XX_PC_DBG_ECO_CNTL, (0x400 << 11 | 0x300 << 22)); 571 + if (adreno_is_a510(adreno_gpu)) { 572 + gpu_write(gpu, REG_A5XX_CP_MEQ_THRESHOLDS, 0x20); 573 + gpu_write(gpu, REG_A5XX_CP_MERCIU_SIZE, 0x20); 574 + gpu_write(gpu, REG_A5XX_CP_ROQ_THRESHOLDS_2, 0x40000030); 575 + gpu_write(gpu, REG_A5XX_CP_ROQ_THRESHOLDS_1, 0x20100D0A); 576 + gpu_write(gpu, REG_A5XX_PC_DBG_ECO_CNTL, 577 + (0x200 << 11 | 0x200 << 22)); 578 + } else { 579 + gpu_write(gpu, REG_A5XX_CP_MEQ_THRESHOLDS, 0x40); 580 + if (adreno_is_a530(adreno_gpu)) 581 + gpu_write(gpu, REG_A5XX_CP_MERCIU_SIZE, 0x40); 582 + if (adreno_is_a540(adreno_gpu)) 583 + gpu_write(gpu, REG_A5XX_CP_MERCIU_SIZE, 0x400); 584 + gpu_write(gpu, REG_A5XX_CP_ROQ_THRESHOLDS_2, 0x80000060); 585 + gpu_write(gpu, REG_A5XX_CP_ROQ_THRESHOLDS_1, 0x40201B16); 586 + gpu_write(gpu, REG_A5XX_PC_DBG_ECO_CNTL, 587 + (0x400 << 11 | 0x300 << 22)); 588 + } 583 589 584 590 if (adreno_gpu->info->quirks & ADRENO_QUIRK_TWO_PASS_USE_WFI) 585 591 gpu_rmw(gpu, REG_A5XX_PC_DBG_ECO_CNTL, 0, (1 << 8)); ··· 600 588 601 589 /* Enable ME/PFP split notification */ 602 590 gpu_write(gpu, REG_A5XX_RBBM_AHB_CNTL1, 0xA6FFFFFF); 591 + 592 + /* 593 + * In A5x, CCU can send context_done event of a particular context to 594 + * UCHE which ultimately reaches CP even when there is valid 595 + * transaction of that context inside CCU. This can let CP to program 596 + * config registers, which will make the "valid transaction" inside 597 + * CCU to be interpreted differently. This can cause gpu fault. This 598 + * bug is fixed in latest A510 revision. To enable this bug fix - 599 + * bit[11] of RB_DBG_ECO_CNTL need to be set to 0, default is 1 600 + * (disable). For older A510 version this bit is unused. 601 + */ 602 + if (adreno_is_a510(adreno_gpu)) 603 + gpu_rmw(gpu, REG_A5XX_RB_DBG_ECO_CNTL, (1 << 11), 0); 603 604 604 605 /* Enable HWCG */ 605 606 a5xx_set_hwcg(gpu, true); ··· 660 635 /* UCHE */ 661 636 gpu_write(gpu, REG_A5XX_CP_PROTECT(16), ADRENO_PROTECT_RW(0xE80, 16)); 662 637 663 - if (adreno_is_a530(adreno_gpu)) 638 + if (adreno_is_a530(adreno_gpu) || adreno_is_a510(adreno_gpu)) 664 639 gpu_write(gpu, REG_A5XX_CP_PROTECT(17), 665 640 ADRENO_PROTECT_RW(0x10000, 0x8000)); 666 641 ··· 704 679 705 680 a5xx_preempt_hw_init(gpu); 706 681 707 - a5xx_gpmu_ucode_init(gpu); 682 + if (!adreno_is_a510(adreno_gpu)) 683 + a5xx_gpmu_ucode_init(gpu); 708 684 709 685 ret = a5xx_ucode_init(gpu); 710 686 if (ret) ··· 738 712 } 739 713 740 714 /* 741 - * Try to load a zap shader into the secure world. If successful 715 + * If the chip that we are using does support loading one, then 716 + * try to load a zap shader into the secure world. If successful 742 717 * we can use the CP to switch out of secure mode. 
If not then we 743 718 * have no resource but to try to switch ourselves out manually. If we 744 719 * guessed wrong then access to the RBBM_SECVID_TRUST_CNTL register will ··· 1093 1066 1094 1067 static int a5xx_pm_resume(struct msm_gpu *gpu) 1095 1068 { 1069 + struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu); 1096 1070 int ret; 1097 1071 1098 1072 /* Turn on the core power */ 1099 1073 ret = msm_gpu_pm_resume(gpu); 1100 1074 if (ret) 1101 1075 return ret; 1076 + 1077 + if (adreno_is_a510(adreno_gpu)) { 1078 + /* Halt the sp_input_clk at HM level */ 1079 + gpu_write(gpu, REG_A5XX_RBBM_CLOCK_CNTL, 0x00000055); 1080 + a5xx_set_hwcg(gpu, true); 1081 + /* Turn on sp_input_clk at HM level */ 1082 + gpu_rmw(gpu, REG_A5XX_RBBM_CLOCK_CNTL, 0xff, 0); 1083 + return 0; 1084 + } 1102 1085 1103 1086 /* Turn the RBCCU domain first to limit the chances of voltage droop */ 1104 1087 gpu_write(gpu, REG_A5XX_GPMU_RBCCU_POWER_CNTL, 0x778000); ··· 1138 1101 1139 1102 static int a5xx_pm_suspend(struct msm_gpu *gpu) 1140 1103 { 1104 + struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu); 1105 + u32 mask = 0xf; 1106 + 1107 + /* A510 has 3 XIN ports in VBIF */ 1108 + if (adreno_is_a510(adreno_gpu)) 1109 + mask = 0x7; 1110 + 1141 1111 /* Clear the VBIF pipe before shutting down */ 1142 - gpu_write(gpu, REG_A5XX_VBIF_XIN_HALT_CTRL0, 0xF); 1143 - spin_until((gpu_read(gpu, REG_A5XX_VBIF_XIN_HALT_CTRL1) & 0xF) == 0xF); 1112 + gpu_write(gpu, REG_A5XX_VBIF_XIN_HALT_CTRL0, mask); 1113 + spin_until((gpu_read(gpu, REG_A5XX_VBIF_XIN_HALT_CTRL1) & 1114 + mask) == mask); 1144 1115 1145 1116 gpu_write(gpu, REG_A5XX_VBIF_XIN_HALT_CTRL0, 0); 1146 1117 ··· 1334 1289 kfree(a5xx_state); 1335 1290 } 1336 1291 1337 - int a5xx_gpu_state_put(struct msm_gpu_state *state) 1292 + static int a5xx_gpu_state_put(struct msm_gpu_state *state) 1338 1293 { 1339 1294 if (IS_ERR_OR_NULL(state)) 1340 1295 return 1; ··· 1344 1299 1345 1300 1346 1301 #if defined(CONFIG_DEBUG_FS) || defined(CONFIG_DEV_COREDUMP) 1347 - void a5xx_show(struct msm_gpu *gpu, struct msm_gpu_state *state, 1348 - struct drm_printer *p) 1302 + static void a5xx_show(struct msm_gpu *gpu, struct msm_gpu_state *state, 1303 + struct drm_printer *p) 1349 1304 { 1350 1305 int i, j; 1351 1306 u32 pos = 0;
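Several of the A510 adjustments above go through gpu_rmw(), the driver's read-modify-write register helper: the bits in 'mask' are cleared and the bits in 'or' are set, which is why gpu_rmw(gpu, REG_A5XX_RB_DBG_ECO_CNTL, (1 << 11), 0) simply clears bit 11 to enable the CCU fix described in the comment. A plain-C sketch of that helper shape over a fake register file (names invented for this example):

#include <assert.h>
#include <stdint.h>

static uint32_t regs[16];               /* stand-in for MMIO space */

/* Read-modify-write: clear the 'mask' bits, then set the 'or' bits. */
static void reg_rmw(unsigned int r, uint32_t mask, uint32_t or)
{
        regs[r] = (regs[r] & ~mask) | or;
}

int main(void)
{
        regs[3] = 0xffffffff;
        reg_rmw(3, 1u << 11, 0);        /* clear bit 11, cf. RB_DBG_ECO_CNTL */
        assert((regs[3] & (1u << 11)) == 0);
        return 0;
}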
+7
drivers/gpu/drm/msm/adreno/a5xx_power.c
··· 297 297 struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu); 298 298 int ret; 299 299 300 + /* Not all A5xx chips have a GPMU */ 301 + if (adreno_is_a510(adreno_gpu)) 302 + return 0; 303 + 300 304 /* Set up the limits management */ 301 305 if (adreno_is_a530(adreno_gpu)) 302 306 a530_lm_setup(gpu); ··· 329 325 uint32_t dwords = 0, offset = 0, bosize; 330 326 unsigned int *data, *ptr, *cmds; 331 327 unsigned int cmds_size; 328 + 329 + if (adreno_is_a510(adreno_gpu)) 330 + return; 332 331 333 332 if (a5xx_gpu->gpmu_bo) 334 333 return;
+15
drivers/gpu/drm/msm/adreno/adreno_device.c
··· 115 115 .inactive_period = DRM_MSM_INACTIVE_PERIOD, 116 116 .init = a4xx_gpu_init, 117 117 }, { 118 + .rev = ADRENO_REV(5, 1, 0, ANY_ID), 119 + .revn = 510, 120 + .name = "A510", 121 + .fw = { 122 + [ADRENO_FW_PM4] = "a530_pm4.fw", 123 + [ADRENO_FW_PFP] = "a530_pfp.fw", 124 + }, 125 + .gmem = SZ_256K, 126 + /* 127 + * Increase inactive period to 250 to avoid bouncing 128 + * the GDSC which appears to make it grumpy 129 + */ 130 + .inactive_period = 250, 131 + .init = a5xx_gpu_init, 132 + }, { 118 133 .rev = ADRENO_REV(5, 3, 0, 2), 119 134 .revn = 530, 120 135 .name = "A530",
+40
drivers/gpu/drm/msm/adreno/adreno_gpu.c
··· 14 14 #include <linux/pm_opp.h> 15 15 #include <linux/slab.h> 16 16 #include <linux/soc/qcom/mdt_loader.h> 17 + #include <soc/qcom/ocmem.h> 17 18 #include "adreno_gpu.h" 18 19 #include "msm_gem.h" 19 20 #include "msm_mmu.h" ··· 892 891 gpu->icc_path = NULL; 893 892 894 893 return 0; 894 + } 895 + 896 + int adreno_gpu_ocmem_init(struct device *dev, struct adreno_gpu *adreno_gpu, 897 + struct adreno_ocmem *adreno_ocmem) 898 + { 899 + struct ocmem_buf *ocmem_hdl; 900 + struct ocmem *ocmem; 901 + 902 + ocmem = of_get_ocmem(dev); 903 + if (IS_ERR(ocmem)) { 904 + if (PTR_ERR(ocmem) == -ENODEV) { 905 + /* 906 + * Return success since either the ocmem property was 907 + * not specified in device tree, or ocmem support is 908 + * not compiled into the kernel. 909 + */ 910 + return 0; 911 + } 912 + 913 + return PTR_ERR(ocmem); 914 + } 915 + 916 + ocmem_hdl = ocmem_allocate(ocmem, OCMEM_GRAPHICS, adreno_gpu->gmem); 917 + if (IS_ERR(ocmem_hdl)) 918 + return PTR_ERR(ocmem_hdl); 919 + 920 + adreno_ocmem->ocmem = ocmem; 921 + adreno_ocmem->base = ocmem_hdl->addr; 922 + adreno_ocmem->hdl = ocmem_hdl; 923 + adreno_gpu->gmem = ocmem_hdl->len; 924 + 925 + return 0; 926 + } 927 + 928 + void adreno_gpu_ocmem_cleanup(struct adreno_ocmem *adreno_ocmem) 929 + { 930 + if (adreno_ocmem && adreno_ocmem->base) 931 + ocmem_free(adreno_ocmem->ocmem, OCMEM_GRAPHICS, 932 + adreno_ocmem->hdl); 895 933 } 896 934 897 935 int adreno_gpu_init(struct drm_device *drm, struct platform_device *pdev,
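adreno_gpu_ocmem_init() treats a missing OCMEM provider as a non-event: -ENODEV (no sram phandle in DT, or OCMEM support not built in) is turned into success, any other error is propagated, and adreno_gpu->gmem is only updated when an allocation is actually made. The optional-resource shape of that helper, reduced to standalone C with errno standing in for error pointers (all names invented):

#include <errno.h>
#include <stddef.h>

struct provider { int dummy; };

/* Stub lookup: pretend the optional node is simply not described. */
static struct provider *lookup_provider(void)
{
        errno = ENODEV;
        return NULL;
}

static int init_optional(struct provider **out)
{
        struct provider *p = lookup_provider();

        if (!p) {
                if (errno == ENODEV)
                        return 0;       /* absent: carry on without it */
                return -errno;          /* real failure: report it */
        }

        *out = p;
        return 0;
}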
+15
drivers/gpu/drm/msm/adreno/adreno_gpu.h
··· 126 126 }; 127 127 #define to_adreno_gpu(x) container_of(x, struct adreno_gpu, base) 128 128 129 + struct adreno_ocmem { 130 + struct ocmem *ocmem; 131 + unsigned long base; 132 + void *hdl; 133 + }; 134 + 129 135 /* platform config data (ie. from DT, or pdata) */ 130 136 struct adreno_platform_config { 131 137 struct adreno_rev rev; ··· 212 206 return gpu->revn == 430; 213 207 } 214 208 209 + static inline int adreno_is_a510(struct adreno_gpu *gpu) 210 + { 211 + return gpu->revn == 510; 212 + } 213 + 215 214 static inline int adreno_is_a530(struct adreno_gpu *gpu) 216 215 { 217 216 return gpu->revn == 530; ··· 246 235 void adreno_dump(struct msm_gpu *gpu); 247 236 void adreno_wait_ring(struct msm_ringbuffer *ring, uint32_t ndwords); 248 237 struct msm_ringbuffer *adreno_active_ring(struct msm_gpu *gpu); 238 + 239 + int adreno_gpu_ocmem_init(struct device *dev, struct adreno_gpu *adreno_gpu, 240 + struct adreno_ocmem *ocmem); 241 + void adreno_gpu_ocmem_cleanup(struct adreno_ocmem *ocmem); 249 242 250 243 int adreno_gpu_init(struct drm_device *drm, struct platform_device *pdev, 251 244 struct adreno_gpu *gpu, const struct adreno_gpu_funcs *funcs,
+10 -33
drivers/gpu/drm/msm/disp/dpu1/dpu_core_irq.c
··· 55 55 int dpu_core_irq_idx_lookup(struct dpu_kms *dpu_kms, 56 56 enum dpu_intr_type intr_type, u32 instance_idx) 57 57 { 58 - if (!dpu_kms || !dpu_kms->hw_intr || 59 - !dpu_kms->hw_intr->ops.irq_idx_lookup) 58 + if (!dpu_kms->hw_intr || !dpu_kms->hw_intr->ops.irq_idx_lookup) 60 59 return -EINVAL; 61 60 62 61 return dpu_kms->hw_intr->ops.irq_idx_lookup(intr_type, ··· 72 73 unsigned long irq_flags; 73 74 int ret = 0, enable_count; 74 75 75 - if (!dpu_kms || !dpu_kms->hw_intr || 76 + if (!dpu_kms->hw_intr || 76 77 !dpu_kms->irq_obj.enable_counts || 77 78 !dpu_kms->irq_obj.irq_counts) { 78 79 DPU_ERROR("invalid params\n"); ··· 113 114 { 114 115 int i, ret = 0, counts; 115 116 116 - if (!dpu_kms || !irq_idxs || !irq_count) { 117 + if (!irq_idxs || !irq_count) { 117 118 DPU_ERROR("invalid params\n"); 118 119 return -EINVAL; 119 120 } ··· 137 138 { 138 139 int ret = 0, enable_count; 139 140 140 - if (!dpu_kms || !dpu_kms->hw_intr || !dpu_kms->irq_obj.enable_counts) { 141 + if (!dpu_kms->hw_intr || !dpu_kms->irq_obj.enable_counts) { 141 142 DPU_ERROR("invalid params\n"); 142 143 return -EINVAL; 143 144 } ··· 168 169 { 169 170 int i, ret = 0, counts; 170 171 171 - if (!dpu_kms || !irq_idxs || !irq_count) { 172 + if (!irq_idxs || !irq_count) { 172 173 DPU_ERROR("invalid params\n"); 173 174 return -EINVAL; 174 175 } ··· 185 186 186 187 u32 dpu_core_irq_read(struct dpu_kms *dpu_kms, int irq_idx, bool clear) 187 188 { 188 - if (!dpu_kms || !dpu_kms->hw_intr || 189 + if (!dpu_kms->hw_intr || 189 190 !dpu_kms->hw_intr->ops.get_interrupt_status) 190 191 return 0; 191 192 ··· 204 205 { 205 206 unsigned long irq_flags; 206 207 207 - if (!dpu_kms || !dpu_kms->irq_obj.irq_cb_tbl) { 208 + if (!dpu_kms->irq_obj.irq_cb_tbl) { 208 209 DPU_ERROR("invalid params\n"); 209 210 return -EINVAL; 210 211 } ··· 239 240 { 240 241 unsigned long irq_flags; 241 242 242 - if (!dpu_kms || !dpu_kms->irq_obj.irq_cb_tbl) { 243 + if (!dpu_kms->irq_obj.irq_cb_tbl) { 243 244 DPU_ERROR("invalid params\n"); 244 245 return -EINVAL; 245 246 } ··· 273 274 274 275 static void dpu_clear_all_irqs(struct dpu_kms *dpu_kms) 275 276 { 276 - if (!dpu_kms || !dpu_kms->hw_intr || 277 - !dpu_kms->hw_intr->ops.clear_all_irqs) 277 + if (!dpu_kms->hw_intr || !dpu_kms->hw_intr->ops.clear_all_irqs) 278 278 return; 279 279 280 280 dpu_kms->hw_intr->ops.clear_all_irqs(dpu_kms->hw_intr); ··· 281 283 282 284 static void dpu_disable_all_irqs(struct dpu_kms *dpu_kms) 283 285 { 284 - if (!dpu_kms || !dpu_kms->hw_intr || 285 - !dpu_kms->hw_intr->ops.disable_all_irqs) 286 + if (!dpu_kms->hw_intr || !dpu_kms->hw_intr->ops.disable_all_irqs) 286 287 return; 287 288 288 289 dpu_kms->hw_intr->ops.disable_all_irqs(dpu_kms->hw_intr); ··· 340 343 341 344 void dpu_core_irq_preinstall(struct dpu_kms *dpu_kms) 342 345 { 343 - struct msm_drm_private *priv; 344 346 int i; 345 - 346 - if (!dpu_kms->dev) { 347 - DPU_ERROR("invalid drm device\n"); 348 - return; 349 - } else if (!dpu_kms->dev->dev_private) { 350 - DPU_ERROR("invalid device private\n"); 351 - return; 352 - } 353 - priv = dpu_kms->dev->dev_private; 354 347 355 348 pm_runtime_get_sync(&dpu_kms->pdev->dev); 356 349 dpu_clear_all_irqs(dpu_kms); ··· 366 379 367 380 void dpu_core_irq_uninstall(struct dpu_kms *dpu_kms) 368 381 { 369 - struct msm_drm_private *priv; 370 382 int i; 371 - 372 - if (!dpu_kms->dev) { 373 - DPU_ERROR("invalid drm device\n"); 374 - return; 375 - } else if (!dpu_kms->dev->dev_private) { 376 - DPU_ERROR("invalid device private\n"); 377 - return; 378 - } 379 - priv = dpu_kms->dev->dev_private; 
380 383 381 384 pm_runtime_get_sync(&dpu_kms->pdev->dev); 382 385 for (i = 0; i < dpu_kms->irq_obj.total_irqs; i++)
+3 -18
drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c
··· 32 32 static struct dpu_kms *_dpu_crtc_get_kms(struct drm_crtc *crtc) 33 33 { 34 34 struct msm_drm_private *priv; 35 - 36 - if (!crtc->dev || !crtc->dev->dev_private) { 37 - DPU_ERROR("invalid device\n"); 38 - return NULL; 39 - } 40 - 41 35 priv = crtc->dev->dev_private; 42 - if (!priv || !priv->kms) { 43 - DPU_ERROR("invalid kms\n"); 44 - return NULL; 45 - } 46 - 47 36 return to_dpu_kms(priv->kms); 48 37 } 49 38 ··· 105 116 } 106 117 107 118 kms = _dpu_crtc_get_kms(crtc); 108 - if (!kms || !kms->catalog) { 119 + if (!kms->catalog) { 109 120 DPU_ERROR("invalid parameters\n"); 110 121 return 0; 111 122 } ··· 204 215 void dpu_core_perf_crtc_release_bw(struct drm_crtc *crtc) 205 216 { 206 217 struct dpu_crtc *dpu_crtc; 207 - struct dpu_crtc_state *dpu_cstate; 208 218 struct dpu_kms *kms; 209 219 210 220 if (!crtc) { ··· 212 224 } 213 225 214 226 kms = _dpu_crtc_get_kms(crtc); 215 - if (!kms || !kms->catalog) { 227 + if (!kms->catalog) { 216 228 DPU_ERROR("invalid kms\n"); 217 229 return; 218 230 } 219 231 220 232 dpu_crtc = to_dpu_crtc(crtc); 221 - dpu_cstate = to_dpu_crtc_state(crtc->state); 222 233 223 234 if (atomic_dec_return(&kms->bandwidth_ref) > 0) 224 235 return; ··· 274 287 u64 clk_rate = 0; 275 288 struct dpu_crtc *dpu_crtc; 276 289 struct dpu_crtc_state *dpu_cstate; 277 - struct msm_drm_private *priv; 278 290 struct dpu_kms *kms; 279 291 int ret; 280 292 ··· 283 297 } 284 298 285 299 kms = _dpu_crtc_get_kms(crtc); 286 - if (!kms || !kms->catalog) { 300 + if (!kms->catalog) { 287 301 DPU_ERROR("invalid kms\n"); 288 302 return -EINVAL; 289 303 } 290 - priv = kms->dev->dev_private; 291 304 292 305 dpu_crtc = to_dpu_crtc(crtc); 293 306 dpu_cstate = to_dpu_crtc_state(crtc->state);
+12 -8
drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
··· 266 266 { 267 267 struct drm_encoder *encoder; 268 268 269 - if (!crtc || !crtc->dev) { 269 + if (!crtc) { 270 270 DPU_ERROR("invalid crtc\n"); 271 271 return INTF_MODE_NONE; 272 272 } 273 273 274 + /* 275 + * TODO: This function is called from dpu debugfs and as part of atomic 276 + * check. When called from debugfs, the crtc->mutex must be held to 277 + * read crtc->state. However reading crtc->state from atomic check isn't 278 + * allowed (unless you have a good reason, a big comment, and a deep 279 + * understanding of how the atomic/modeset locks work (<- and this is 280 + * probably not possible)). So we'll keep the WARN_ON here for now, but 281 + * really we need to figure out a better way to track our operating mode 282 + */ 274 283 WARN_ON(!drm_modeset_is_locked(&crtc->mutex)); 275 284 276 285 /* TODO: Returns the first INTF_MODE, could there be multiple values? */ ··· 703 694 unsigned long flags; 704 695 bool release_bandwidth = false; 705 696 706 - if (!crtc || !crtc->dev || !crtc->dev->dev_private || !crtc->state) { 697 + if (!crtc || !crtc->state) { 707 698 DPU_ERROR("invalid crtc\n"); 708 699 return; 709 700 } ··· 775 766 struct msm_drm_private *priv; 776 767 bool request_bandwidth; 777 768 778 - if (!crtc || !crtc->dev || !crtc->dev->dev_private) { 769 + if (!crtc) { 779 770 DPU_ERROR("invalid crtc\n"); 780 771 return; 781 772 } ··· 1297 1288 { 1298 1289 struct drm_crtc *crtc = NULL; 1299 1290 struct dpu_crtc *dpu_crtc = NULL; 1300 - struct msm_drm_private *priv = NULL; 1301 - struct dpu_kms *kms = NULL; 1302 1291 int i; 1303 - 1304 - priv = dev->dev_private; 1305 - kms = to_dpu_kms(priv->kms); 1306 1292 1307 1293 dpu_crtc = kzalloc(sizeof(*dpu_crtc), GFP_KERNEL); 1308 1294 if (!dpu_crtc)
+7 -32
drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
··· 645 645 priv = drm_enc->dev->dev_private; 646 646 647 647 dpu_kms = to_dpu_kms(priv->kms); 648 - if (!dpu_kms) { 649 - DPU_ERROR("invalid dpu_kms\n"); 650 - return; 651 - } 652 - 653 648 hw_mdptop = dpu_kms->hw_mdp; 654 649 if (!hw_mdptop) { 655 650 DPU_ERROR("invalid mdptop\n"); ··· 730 735 struct msm_drm_private *priv; 731 736 bool is_vid_mode = false; 732 737 733 - if (!drm_enc || !drm_enc->dev || !drm_enc->dev->dev_private || 734 - !drm_enc->crtc) { 738 + if (!drm_enc || !drm_enc->dev || !drm_enc->crtc) { 735 739 DPU_ERROR("invalid parameters\n"); 736 740 return -EINVAL; 737 741 } ··· 1086 1092 struct msm_drm_private *priv; 1087 1093 struct dpu_kms *dpu_kms; 1088 1094 1089 - if (!drm_enc || !drm_enc->dev || !drm_enc->dev->dev_private) { 1095 + if (!drm_enc || !drm_enc->dev) { 1090 1096 DPU_ERROR("invalid parameters\n"); 1091 1097 return; 1092 1098 } 1093 1099 1094 1100 priv = drm_enc->dev->dev_private; 1095 1101 dpu_kms = to_dpu_kms(priv->kms); 1096 - if (!dpu_kms) { 1097 - DPU_ERROR("invalid dpu_kms\n"); 1098 - return; 1099 - } 1100 1102 1101 1103 dpu_enc = to_dpu_encoder_virt(drm_enc); 1102 1104 if (!dpu_enc || !dpu_enc->cur_master) { ··· 1174 1184 struct dpu_encoder_virt *dpu_enc = NULL; 1175 1185 struct msm_drm_private *priv; 1176 1186 struct dpu_kms *dpu_kms; 1177 - struct drm_display_mode *mode; 1178 1187 int i = 0; 1179 1188 1180 1189 if (!drm_enc) { ··· 1182 1193 } else if (!drm_enc->dev) { 1183 1194 DPU_ERROR("invalid dev\n"); 1184 1195 return; 1185 - } else if (!drm_enc->dev->dev_private) { 1186 - DPU_ERROR("invalid dev_private\n"); 1187 - return; 1188 1196 } 1189 1197 1190 1198 dpu_enc = to_dpu_encoder_virt(drm_enc); ··· 1189 1203 1190 1204 mutex_lock(&dpu_enc->enc_lock); 1191 1205 dpu_enc->enabled = false; 1192 - 1193 - mode = &drm_enc->crtc->state->adjusted_mode; 1194 1206 1195 1207 priv = drm_enc->dev->dev_private; 1196 1208 dpu_kms = to_dpu_kms(priv->kms); ··· 1718 1734 struct msm_drm_private *priv; 1719 1735 struct msm_drm_thread *event_thread; 1720 1736 1721 - if (!drm_enc->dev || !drm_enc->dev->dev_private || 1722 - !drm_enc->crtc) { 1737 + if (!drm_enc->dev || !drm_enc->crtc) { 1723 1738 DPU_ERROR("invalid parameters\n"); 1724 1739 return; 1725 1740 } ··· 1897 1914 static int _dpu_encoder_init_debugfs(struct drm_encoder *drm_enc) 1898 1915 { 1899 1916 struct dpu_encoder_virt *dpu_enc = to_dpu_encoder_virt(drm_enc); 1900 - struct msm_drm_private *priv; 1901 - struct dpu_kms *dpu_kms; 1902 1917 int i; 1903 1918 1904 1919 static const struct file_operations debugfs_status_fops = { ··· 1908 1927 1909 1928 char name[DPU_NAME_SIZE]; 1910 1929 1911 - if (!drm_enc->dev || !drm_enc->dev->dev_private) { 1930 + if (!drm_enc->dev) { 1912 1931 DPU_ERROR("invalid encoder or kms\n"); 1913 1932 return -EINVAL; 1914 1933 } 1915 - 1916 - priv = drm_enc->dev->dev_private; 1917 - dpu_kms = to_dpu_kms(priv->kms); 1918 1934 1919 1935 snprintf(name, DPU_NAME_SIZE, "encoder%u", drm_enc->base.id); 1920 1936 ··· 2020 2042 enum dpu_intf_type intf_type; 2021 2043 struct dpu_enc_phys_init_params phys_params; 2022 2044 2023 - if (!dpu_enc || !dpu_kms) { 2024 - DPU_ERROR("invalid arg(s), enc %d kms %d\n", 2025 - dpu_enc != 0, dpu_kms != 0); 2045 + if (!dpu_enc) { 2046 + DPU_ERROR("invalid arg(s), enc %d\n", dpu_enc != 0); 2026 2047 return -EINVAL; 2027 2048 } 2028 2049 ··· 2110 2133 struct dpu_encoder_virt *dpu_enc = from_timer(dpu_enc, t, 2111 2134 frame_done_timer); 2112 2135 struct drm_encoder *drm_enc = &dpu_enc->base; 2113 - struct msm_drm_private *priv; 2114 2136 u32 event; 2115 2137 
2116 - if (!drm_enc->dev || !drm_enc->dev->dev_private) { 2138 + if (!drm_enc->dev) { 2117 2139 DPU_ERROR("invalid parameters\n"); 2118 2140 return; 2119 2141 } 2120 - priv = drm_enc->dev->dev_private; 2121 2142 2122 2143 if (!dpu_enc->frame_busy_mask[0] || !dpu_enc->crtc_frame_event_cb) { 2123 2144 DRM_DEBUG_KMS("id:%u invalid timeout frame_busy_mask=%lu\n",
-15
drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_cmd.c
··· 124 124 static void dpu_encoder_phys_cmd_ctl_start_irq(void *arg, int irq_idx) 125 125 { 126 126 struct dpu_encoder_phys *phys_enc = arg; 127 - struct dpu_encoder_phys_cmd *cmd_enc; 128 127 129 128 if (!phys_enc || !phys_enc->hw_ctl) 130 129 return; 131 130 132 131 DPU_ATRACE_BEGIN("ctl_start_irq"); 133 - cmd_enc = to_dpu_encoder_phys_cmd(phys_enc); 134 132 135 133 atomic_add_unless(&phys_enc->pending_ctlstart_cnt, -1, 0); 136 134 ··· 314 316 static void dpu_encoder_phys_cmd_irq_control(struct dpu_encoder_phys *phys_enc, 315 317 bool enable) 316 318 { 317 - struct dpu_encoder_phys_cmd *cmd_enc; 318 - 319 319 if (!phys_enc) 320 320 return; 321 - 322 - cmd_enc = to_dpu_encoder_phys_cmd(phys_enc); 323 321 324 322 trace_dpu_enc_phys_cmd_irq_ctrl(DRMID(phys_enc->parent), 325 323 phys_enc->hw_pp->idx - PINGPONG_0, ··· 349 355 struct drm_display_mode *mode; 350 356 bool tc_enable = true; 351 357 u32 vsync_hz; 352 - struct msm_drm_private *priv; 353 358 struct dpu_kms *dpu_kms; 354 359 355 360 if (!phys_enc || !phys_enc->hw_pp) { ··· 366 373 } 367 374 368 375 dpu_kms = phys_enc->dpu_kms; 369 - if (!dpu_kms || !dpu_kms->dev || !dpu_kms->dev->dev_private) { 370 - DPU_ERROR("invalid device\n"); 371 - return; 372 - } 373 - priv = dpu_kms->dev->dev_private; 374 376 375 377 /* 376 378 * TE default: dsi byte clock calculated base on 70 fps; ··· 638 650 struct dpu_encoder_phys *phys_enc) 639 651 { 640 652 int rc; 641 - struct dpu_encoder_phys_cmd *cmd_enc; 642 653 643 654 if (!phys_enc) 644 655 return -EINVAL; 645 - 646 - cmd_enc = to_dpu_encoder_phys_cmd(phys_enc); 647 656 648 657 rc = _dpu_encoder_phys_cmd_wait_for_idle(phys_enc); 649 658 if (rc) {
+2 -5
drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_vid.c
··· 374 374 struct drm_display_mode *mode, 375 375 struct drm_display_mode *adj_mode) 376 376 { 377 - if (!phys_enc || !phys_enc->dpu_kms) { 377 + if (!phys_enc) { 378 378 DPU_ERROR("invalid encoder/kms\n"); 379 379 return; 380 380 } ··· 566 566 567 567 static void dpu_encoder_phys_vid_disable(struct dpu_encoder_phys *phys_enc) 568 568 { 569 - struct msm_drm_private *priv; 570 569 unsigned long lock_flags; 571 570 int ret; 572 571 573 - if (!phys_enc || !phys_enc->parent || !phys_enc->parent->dev || 574 - !phys_enc->parent->dev->dev_private) { 572 + if (!phys_enc || !phys_enc->parent || !phys_enc->parent->dev) { 575 573 DPU_ERROR("invalid encoder/device\n"); 576 574 return; 577 575 } 578 - priv = phys_enc->parent->dev->dev_private; 579 576 580 577 if (!phys_enc->hw_intf || !phys_enc->hw_ctl) { 581 578 DPU_ERROR("invalid hw_intf %d hw_ctl %d\n",
+4 -56
drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
··· 30 30 #define CREATE_TRACE_POINTS 31 31 #include "dpu_trace.h" 32 32 33 - static const char * const iommu_ports[] = { 34 - "mdp_0", 35 - }; 36 - 37 33 /* 38 34 * To enable overall DRM driver logging 39 35 * # echo 0x2 > /sys/module/drm/parameters/debug ··· 64 68 bool danger_status) 65 69 { 66 70 struct dpu_kms *kms = (struct dpu_kms *)s->private; 67 - struct msm_drm_private *priv; 68 71 struct dpu_danger_safe_status status; 69 72 int i; 70 73 71 - if (!kms->dev || !kms->dev->dev_private || !kms->hw_mdp) { 74 + if (!kms->hw_mdp) { 72 75 DPU_ERROR("invalid arg(s)\n"); 73 76 return 0; 74 77 } 75 78 76 - priv = kms->dev->dev_private; 77 79 memset(&status, 0, sizeof(struct dpu_danger_safe_status)); 78 80 79 81 pm_runtime_get_sync(&kms->pdev->dev); ··· 147 153 return 0; 148 154 149 155 dev = dpu_kms->dev; 150 - if (!dev) 151 - return 0; 152 - 153 156 priv = dev->dev_private; 154 - if (!priv) 155 - return 0; 156 - 157 157 base = dpu_kms->mmio + regset->offset; 158 158 159 159 /* insert padding spaces, if needed */ ··· 268 280 struct drm_atomic_state *state) 269 281 { 270 282 struct dpu_kms *dpu_kms; 271 - struct msm_drm_private *priv; 272 283 struct drm_device *dev; 273 284 struct drm_crtc *crtc; 274 285 struct drm_crtc_state *crtc_state; ··· 278 291 return; 279 292 dpu_kms = to_dpu_kms(kms); 280 293 dev = dpu_kms->dev; 281 - 282 - if (!dev || !dev->dev_private) 283 - return; 284 - priv = dev->dev_private; 285 294 286 295 /* Call prepare_commit for all affected encoders */ 287 296 for_each_new_crtc_in_state(state, crtc, crtc_state, i) { ··· 316 333 if (funcs && funcs->commit) 317 334 funcs->commit(encoder); 318 335 319 - WARN_ON(!drm_modeset_is_locked(&dev->mode_config.connection_mutex)); 320 336 drm_for_each_crtc(crtc, dev) { 321 337 if (!(crtc->state->encoder_mask & drm_encoder_mask(encoder))) 322 338 continue; ··· 446 464 struct msm_drm_private *priv; 447 465 int i; 448 466 449 - if (!dpu_kms) { 450 - DPU_ERROR("invalid dpu_kms\n"); 451 - return; 452 - } else if (!dpu_kms->dev) { 453 - DPU_ERROR("invalid dev\n"); 454 - return; 455 - } else if (!dpu_kms->dev->dev_private) { 456 - DPU_ERROR("invalid dev_private\n"); 457 - return; 458 - } 459 467 priv = dpu_kms->dev->dev_private; 460 468 461 469 for (i = 0; i < priv->num_crtcs; i++) ··· 477 505 478 506 int primary_planes_idx = 0, cursor_planes_idx = 0, i, ret; 479 507 int max_crtc_count; 480 - 481 508 dev = dpu_kms->dev; 482 509 priv = dev->dev_private; 483 510 catalog = dpu_kms->catalog; ··· 556 585 int i; 557 586 558 587 dev = dpu_kms->dev; 559 - if (!dev) 560 - return; 561 588 562 589 if (dpu_kms->hw_intr) 563 590 dpu_hw_intr_destroy(dpu_kms->hw_intr); ··· 694 725 695 726 mmu = dpu_kms->base.aspace->mmu; 696 727 697 - mmu->funcs->detach(mmu, (const char **)iommu_ports, 698 - ARRAY_SIZE(iommu_ports)); 728 + mmu->funcs->detach(mmu); 699 729 msm_gem_address_space_put(dpu_kms->base.aspace); 700 730 701 731 dpu_kms->base.aspace = NULL; ··· 720 752 return PTR_ERR(aspace); 721 753 } 722 754 723 - ret = aspace->mmu->funcs->attach(aspace->mmu, iommu_ports, 724 - ARRAY_SIZE(iommu_ports)); 755 + ret = aspace->mmu->funcs->attach(aspace->mmu); 725 756 if (ret) { 726 757 DPU_ERROR("failed to attach iommu %d\n", ret); 727 758 msm_gem_address_space_put(aspace); ··· 770 803 771 804 dpu_kms = to_dpu_kms(kms); 772 805 dev = dpu_kms->dev; 773 - if (!dev) { 774 - DPU_ERROR("invalid device\n"); 775 - return rc; 776 - } 777 - 778 806 priv = dev->dev_private; 779 - if (!priv) { 780 - DPU_ERROR("invalid private data\n"); 781 - return rc; 782 - } 783 807 784 808 
atomic_set(&dpu_kms->bandwidth_ref, 0); 785 809 ··· 932 974 struct dpu_kms *dpu_kms; 933 975 int irq; 934 976 935 - if (!dev || !dev->dev_private) { 977 + if (!dev) { 936 978 DPU_ERROR("drm device node invalid\n"); 937 979 return ERR_PTR(-EINVAL); 938 980 } ··· 1022 1064 struct dss_module_power *mp = &dpu_kms->mp; 1023 1065 1024 1066 ddev = dpu_kms->dev; 1025 - if (!ddev) { 1026 - DPU_ERROR("invalid drm_device\n"); 1027 - return rc; 1028 - } 1029 - 1030 1067 rc = msm_dss_enable_clk(mp->clk_config, mp->num_clk, false); 1031 1068 if (rc) 1032 1069 DPU_ERROR("clock disable failed rc:%d\n", rc); ··· 1039 1086 struct dss_module_power *mp = &dpu_kms->mp; 1040 1087 1041 1088 ddev = dpu_kms->dev; 1042 - if (!ddev) { 1043 - DPU_ERROR("invalid drm_device\n"); 1044 - return rc; 1045 - } 1046 - 1047 1089 rc = msm_dss_enable_clk(mp->clk_config, mp->num_clk, true); 1048 1090 if (rc) { 1049 1091 DPU_ERROR("clock enable failed rc:%d\n", rc);
-4
drivers/gpu/drm/msm/disp/dpu1/dpu_kms.h
··· 139 139 140 140 #define to_dpu_kms(x) container_of(x, struct dpu_kms, base) 141 141 142 - /* get struct msm_kms * from drm_device * */ 143 - #define ddev_to_msm_kms(D) ((D) && (D)->dev_private ? \ 144 - ((struct msm_drm_private *)((D)->dev_private))->kms : NULL) 145 - 146 142 /** 147 143 * Debugfs functions - extra helper functions for debugfs support 148 144 *
+1 -5
drivers/gpu/drm/msm/disp/dpu1/dpu_vbif.c
··· 154 154 u32 ot_lim; 155 155 int ret, i; 156 156 157 - if (!dpu_kms) { 158 - DPU_ERROR("invalid arguments\n"); 159 - return; 160 - } 161 157 mdp = dpu_kms->hw_mdp; 162 158 163 159 for (i = 0; i < ARRAY_SIZE(dpu_kms->hw_vbif); i++) { ··· 210 214 const struct dpu_vbif_qos_tbl *qos_tbl; 211 215 int i; 212 216 213 - if (!dpu_kms || !params || !dpu_kms->hw_mdp) { 217 + if (!params || !dpu_kms->hw_mdp) { 214 218 DPU_ERROR("invalid arguments\n"); 215 219 return; 216 220 }
+2 -8
drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c
··· 157 157 } 158 158 } 159 159 160 - static const char * const iommu_ports[] = { 161 - "mdp_port0_cb0", "mdp_port1_cb0", 162 - }; 163 - 164 160 static void mdp4_destroy(struct msm_kms *kms) 165 161 { 166 162 struct mdp4_kms *mdp4_kms = to_mdp4_kms(to_mdp_kms(kms)); ··· 168 172 drm_gem_object_put_unlocked(mdp4_kms->blank_cursor_bo); 169 173 170 174 if (aspace) { 171 - aspace->mmu->funcs->detach(aspace->mmu, 172 - iommu_ports, ARRAY_SIZE(iommu_ports)); 175 + aspace->mmu->funcs->detach(aspace->mmu); 173 176 msm_gem_address_space_put(aspace); 174 177 } 175 178 ··· 519 524 520 525 kms->aspace = aspace; 521 526 522 - ret = aspace->mmu->funcs->attach(aspace->mmu, iommu_ports, 523 - ARRAY_SIZE(iommu_ports)); 527 + ret = aspace->mmu->funcs->attach(aspace->mmu); 524 528 if (ret) 525 529 goto fail; 526 530 } else {
+106 -8
drivers/gpu/drm/msm/disp/mdp5/mdp5_cfg.c
··· 14 14 /* mdp5_cfg must be exposed (used in mdp5.xml.h) */ 15 15 const struct mdp5_cfg_hw *mdp5_cfg = NULL; 16 16 17 - const struct mdp5_cfg_hw msm8x74v1_config = { 17 + static const struct mdp5_cfg_hw msm8x74v1_config = { 18 18 .name = "msm8x74v1", 19 19 .mdp = { 20 20 .count = 1, ··· 98 98 .max_clk = 200000000, 99 99 }; 100 100 101 - const struct mdp5_cfg_hw msm8x74v2_config = { 101 + static const struct mdp5_cfg_hw msm8x74v2_config = { 102 102 .name = "msm8x74", 103 103 .mdp = { 104 104 .count = 1, ··· 180 180 .max_clk = 200000000, 181 181 }; 182 182 183 - const struct mdp5_cfg_hw apq8084_config = { 183 + static const struct mdp5_cfg_hw apq8084_config = { 184 184 .name = "apq8084", 185 185 .mdp = { 186 186 .count = 1, ··· 275 275 .max_clk = 320000000, 276 276 }; 277 277 278 - const struct mdp5_cfg_hw msm8x16_config = { 278 + static const struct mdp5_cfg_hw msm8x16_config = { 279 279 .name = "msm8x16", 280 280 .mdp = { 281 281 .count = 1, ··· 342 342 .max_clk = 320000000, 343 343 }; 344 344 345 - const struct mdp5_cfg_hw msm8x94_config = { 345 + static const struct mdp5_cfg_hw msm8x94_config = { 346 346 .name = "msm8x94", 347 347 .mdp = { 348 348 .count = 1, ··· 437 437 .max_clk = 400000000, 438 438 }; 439 439 440 - const struct mdp5_cfg_hw msm8x96_config = { 440 + static const struct mdp5_cfg_hw msm8x96_config = { 441 441 .name = "msm8x96", 442 442 .mdp = { 443 443 .count = 1, ··· 545 545 .max_clk = 412500000, 546 546 }; 547 547 548 - const struct mdp5_cfg_hw msm8917_config = { 548 + const struct mdp5_cfg_hw msm8x76_config = { 549 + .name = "msm8x76", 550 + .mdp = { 551 + .count = 1, 552 + .caps = MDP_CAP_SMP | 553 + MDP_CAP_DSC | 554 + MDP_CAP_SRC_SPLIT | 555 + 0, 556 + }, 557 + .ctl = { 558 + .count = 3, 559 + .base = { 0x01000, 0x01200, 0x01400 }, 560 + .flush_hw_mask = 0xffffffff, 561 + }, 562 + .smp = { 563 + .mmb_count = 10, 564 + .mmb_size = 10240, 565 + .clients = { 566 + [SSPP_VIG0] = 1, [SSPP_VIG1] = 9, 567 + [SSPP_DMA0] = 4, 568 + [SSPP_RGB0] = 7, [SSPP_RGB1] = 8, 569 + }, 570 + }, 571 + .pipe_vig = { 572 + .count = 2, 573 + .base = { 0x04000, 0x06000 }, 574 + .caps = MDP_PIPE_CAP_HFLIP | 575 + MDP_PIPE_CAP_VFLIP | 576 + MDP_PIPE_CAP_SCALE | 577 + MDP_PIPE_CAP_CSC | 578 + MDP_PIPE_CAP_DECIMATION | 579 + MDP_PIPE_CAP_SW_PIX_EXT | 580 + 0, 581 + }, 582 + .pipe_rgb = { 583 + .count = 2, 584 + .base = { 0x14000, 0x16000 }, 585 + .caps = MDP_PIPE_CAP_HFLIP | 586 + MDP_PIPE_CAP_VFLIP | 587 + MDP_PIPE_CAP_DECIMATION | 588 + MDP_PIPE_CAP_SW_PIX_EXT | 589 + 0, 590 + }, 591 + .pipe_dma = { 592 + .count = 1, 593 + .base = { 0x24000 }, 594 + .caps = MDP_PIPE_CAP_HFLIP | 595 + MDP_PIPE_CAP_VFLIP | 596 + MDP_PIPE_CAP_SW_PIX_EXT | 597 + 0, 598 + }, 599 + .pipe_cursor = { 600 + .count = 1, 601 + .base = { 0x440DC }, 602 + .caps = MDP_PIPE_CAP_HFLIP | 603 + MDP_PIPE_CAP_VFLIP | 604 + MDP_PIPE_CAP_SW_PIX_EXT | 605 + MDP_PIPE_CAP_CURSOR | 606 + 0, 607 + }, 608 + .lm = { 609 + .count = 2, 610 + .base = { 0x44000, 0x45000 }, 611 + .instances = { 612 + { .id = 0, .pp = 0, .dspp = 0, 613 + .caps = MDP_LM_CAP_DISPLAY, }, 614 + { .id = 1, .pp = -1, .dspp = -1, 615 + .caps = MDP_LM_CAP_WB }, 616 + }, 617 + .nb_stages = 8, 618 + .max_width = 2560, 619 + .max_height = 0xFFFF, 620 + }, 621 + .dspp = { 622 + .count = 1, 623 + .base = { 0x54000 }, 624 + 625 + }, 626 + .pp = { 627 + .count = 3, 628 + .base = { 0x70000, 0x70800, 0x72000 }, 629 + }, 630 + .dsc = { 631 + .count = 2, 632 + .base = { 0x80000, 0x80400 }, 633 + }, 634 + .intf = { 635 + .base = { 0x6a000, 0x6a800, 0x6b000 }, 636 + .connect = { 
637 + [0] = INTF_DISABLED, 638 + [1] = INTF_DSI, 639 + [2] = INTF_DSI, 640 + }, 641 + }, 642 + .max_clk = 360000000, 643 + }; 644 + 645 + static const struct mdp5_cfg_hw msm8917_config = { 549 646 .name = "msm8917", 550 647 .mdp = { 551 648 .count = 1, ··· 727 630 .max_clk = 320000000, 728 631 }; 729 632 730 - const struct mdp5_cfg_hw msm8998_config = { 633 + static const struct mdp5_cfg_hw msm8998_config = { 731 634 .name = "msm8998", 732 635 .mdp = { 733 636 .count = 1, ··· 842 745 { .revision = 6, .config = { .hw = &msm8x16_config } }, 843 746 { .revision = 9, .config = { .hw = &msm8x94_config } }, 844 747 { .revision = 7, .config = { .hw = &msm8x96_config } }, 748 + { .revision = 11, .config = { .hw = &msm8x76_config } }, 845 749 { .revision = 15, .config = { .hw = &msm8917_config } }, 846 750 }; 847 751
-3
drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c
··· 214 214 struct mdp5_pipeline *pipeline = &mdp5_cstate->pipeline; 215 215 struct mdp5_kms *mdp5_kms = get_kms(crtc); 216 216 struct drm_plane *plane; 217 - const struct mdp5_cfg_hw *hw_cfg; 218 217 struct mdp5_plane_state *pstate, *pstates[STAGE_MAX + 1] = {NULL}; 219 218 const struct mdp_format *format; 220 219 struct mdp5_hw_mixer *mixer = pipeline->mixer; ··· 230 231 u32 mixer_op_mode = 0; 231 232 u32 val; 232 233 #define blender(stage) ((stage) - STAGE0) 233 - 234 - hw_cfg = mdp5_cfg_get_hw_config(mdp5_kms->cfg); 235 234 236 235 spin_lock_irqsave(&mdp5_crtc->lm_lock, flags); 237 236
+12 -11
drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c
··· 19 19 #include "msm_mmu.h" 20 20 #include "mdp5_kms.h" 21 21 22 - static const char *iommu_ports[] = { 23 - "mdp_0", 24 - }; 25 - 26 22 static int mdp5_hw_init(struct msm_kms *kms) 27 23 { 28 24 struct mdp5_kms *mdp5_kms = to_mdp5_kms(to_mdp_kms(kms)); ··· 229 233 mdp5_pipe_destroy(mdp5_kms->hwpipes[i]); 230 234 231 235 if (aspace) { 232 - aspace->mmu->funcs->detach(aspace->mmu, 233 - iommu_ports, ARRAY_SIZE(iommu_ports)); 236 + aspace->mmu->funcs->detach(aspace->mmu); 234 237 msm_gem_address_space_put(aspace); 235 238 } 236 239 } ··· 309 314 mdp5_kms->enable_count--; 310 315 WARN_ON(mdp5_kms->enable_count < 0); 311 316 317 + if (mdp5_kms->tbu_rt_clk) 318 + clk_disable_unprepare(mdp5_kms->tbu_rt_clk); 319 + if (mdp5_kms->tbu_clk) 320 + clk_disable_unprepare(mdp5_kms->tbu_clk); 312 321 clk_disable_unprepare(mdp5_kms->ahb_clk); 313 322 clk_disable_unprepare(mdp5_kms->axi_clk); 314 323 clk_disable_unprepare(mdp5_kms->core_clk); ··· 333 334 clk_prepare_enable(mdp5_kms->core_clk); 334 335 if (mdp5_kms->lut_clk) 335 336 clk_prepare_enable(mdp5_kms->lut_clk); 337 + if (mdp5_kms->tbu_clk) 338 + clk_prepare_enable(mdp5_kms->tbu_clk); 339 + if (mdp5_kms->tbu_rt_clk) 340 + clk_prepare_enable(mdp5_kms->tbu_rt_clk); 336 341 337 342 return 0; 338 343 } ··· 469 466 { 470 467 struct drm_device *dev = mdp5_kms->dev; 471 468 struct msm_drm_private *priv = dev->dev_private; 472 - const struct mdp5_cfg_hw *hw_cfg; 473 469 unsigned int num_crtcs; 474 470 int i, ret, pi = 0, ci = 0; 475 471 struct drm_plane *primary[MAX_BASES] = { NULL }; 476 472 struct drm_plane *cursor[MAX_BASES] = { NULL }; 477 - 478 - hw_cfg = mdp5_cfg_get_hw_config(mdp5_kms->cfg); 479 473 480 474 /* 481 475 * Construct encoders and modeset initialize connector devices ··· 737 737 738 738 kms->aspace = aspace; 739 739 740 - ret = aspace->mmu->funcs->attach(aspace->mmu, iommu_ports, 741 - ARRAY_SIZE(iommu_ports)); 740 + ret = aspace->mmu->funcs->attach(aspace->mmu); 742 741 if (ret) { 743 742 DRM_DEV_ERROR(&pdev->dev, "failed to attach iommu: %d\n", 744 743 ret); ··· 973 974 974 975 /* optional clocks: */ 975 976 get_clk(pdev, &mdp5_kms->lut_clk, "lut", false); 977 + get_clk(pdev, &mdp5_kms->tbu_clk, "tbu", false); 978 + get_clk(pdev, &mdp5_kms->tbu_rt_clk, "tbu_rt", false); 976 979 977 980 /* we need to set a default rate before enabling. Set a safe 978 981 * rate first, then figure out hw revision, and then set a
+2
drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.h
··· 53 53 struct clk *ahb_clk; 54 54 struct clk *core_clk; 55 55 struct clk *lut_clk; 56 + struct clk *tbu_clk; 57 + struct clk *tbu_rt_clk; 56 58 struct clk *vsync_clk; 57 59 58 60 /*
-2
drivers/gpu/drm/msm/disp/mdp5/mdp5_smp.c
··· 121 121 struct mdp5_kms *mdp5_kms = get_kms(smp); 122 122 int rev = mdp5_cfg_get_hw_rev(mdp5_kms->cfg); 123 123 int i, hsub, nplanes, nlines; 124 - u32 fmt = format->base.pixel_format; 125 124 uint32_t blkcfg = 0; 126 125 127 126 nplanes = info->num_planes; ··· 134 135 * them together, writes to SMP using a single client. 135 136 */ 136 137 if ((rev > 0) && (format->chroma_sample > CHROMA_FULL)) { 137 - fmt = DRM_FORMAT_NV24; 138 138 nplanes = 2; 139 139 140 140 /* if decimation is enabled, HW decimates less on the
+25 -3
drivers/gpu/drm/msm/dsi/dsi_cfg.c
··· 66 66 .num_dsi = 1, 67 67 }; 68 68 69 + static const char * const dsi_8976_bus_clk_names[] = { 70 + "mdp_core", "iface", "bus", 71 + }; 72 + 73 + static const struct msm_dsi_config msm8976_dsi_cfg = { 74 + .io_offset = DSI_6G_REG_SHIFT, 75 + .reg_cfg = { 76 + .num = 3, 77 + .regs = { 78 + {"gdsc", -1, -1}, 79 + {"vdda", 100000, 100}, /* 1.2 V */ 80 + {"vddio", 100000, 100}, /* 1.8 V */ 81 + }, 82 + }, 83 + .bus_clk_names = dsi_8976_bus_clk_names, 84 + .num_bus_clks = ARRAY_SIZE(dsi_8976_bus_clk_names), 85 + .io_start = { 0x1a94000, 0x1a96000 }, 86 + .num_dsi = 2, 87 + }; 88 + 69 89 static const struct msm_dsi_config msm8994_dsi_cfg = { 70 90 .io_offset = DSI_6G_REG_SHIFT, 71 91 .reg_cfg = { ··· 167 147 .num_dsi = 2, 168 148 }; 169 149 170 - const static struct msm_dsi_host_cfg_ops msm_dsi_v2_host_ops = { 150 + static const struct msm_dsi_host_cfg_ops msm_dsi_v2_host_ops = { 171 151 .link_clk_enable = dsi_link_clk_enable_v2, 172 152 .link_clk_disable = dsi_link_clk_disable_v2, 173 153 .clk_init_ver = dsi_clk_init_v2, ··· 178 158 .calc_clk_rate = dsi_calc_clk_rate_v2, 179 159 }; 180 160 181 - const static struct msm_dsi_host_cfg_ops msm_dsi_6g_host_ops = { 161 + static const struct msm_dsi_host_cfg_ops msm_dsi_6g_host_ops = { 182 162 .link_clk_enable = dsi_link_clk_enable_6g, 183 163 .link_clk_disable = dsi_link_clk_disable_6g, 184 164 .clk_init_ver = NULL, ··· 189 169 .calc_clk_rate = dsi_calc_clk_rate_6g, 190 170 }; 191 171 192 - const static struct msm_dsi_host_cfg_ops msm_dsi_6g_v2_host_ops = { 172 + static const struct msm_dsi_host_cfg_ops msm_dsi_6g_v2_host_ops = { 193 173 .link_clk_enable = dsi_link_clk_enable_6g, 194 174 .link_clk_disable = dsi_link_clk_disable_6g, 195 175 .clk_init_ver = dsi_clk_init_6g_v2, ··· 217 197 &msm8916_dsi_cfg, &msm_dsi_6g_host_ops}, 218 198 {MSM_DSI_VER_MAJOR_6G, MSM_DSI_6G_VER_MINOR_V1_4_1, 219 199 &msm8996_dsi_cfg, &msm_dsi_6g_host_ops}, 200 + {MSM_DSI_VER_MAJOR_6G, MSM_DSI_6G_VER_MINOR_V1_4_2, 201 + &msm8976_dsi_cfg, &msm_dsi_6g_host_ops}, 220 202 {MSM_DSI_VER_MAJOR_6G, MSM_DSI_6G_VER_MINOR_V2_2_0, 221 203 &msm8998_dsi_cfg, &msm_dsi_6g_v2_host_ops}, 222 204 {MSM_DSI_VER_MAJOR_6G, MSM_DSI_6G_VER_MINOR_V2_2_1,
+1
drivers/gpu/drm/msm/dsi/dsi_cfg.h
··· 17 17 #define MSM_DSI_6G_VER_MINOR_V1_3 0x10030000 18 18 #define MSM_DSI_6G_VER_MINOR_V1_3_1 0x10030001 19 19 #define MSM_DSI_6G_VER_MINOR_V1_4_1 0x10040001 20 + #define MSM_DSI_6G_VER_MINOR_V1_4_2 0x10040002 20 21 #define MSM_DSI_6G_VER_MINOR_V2_2_0 0x20000000 21 22 #define MSM_DSI_6G_VER_MINOR_V2_2_1 0x20020001 22 23
+1 -2
drivers/gpu/drm/msm/dsi/dsi_host.c
··· 1293 1293 static int dsi_cmd_dma_rx(struct msm_dsi_host *msm_host, 1294 1294 u8 *buf, int rx_byte, int pkt_size) 1295 1295 { 1296 - u32 *lp, *temp, data; 1296 + u32 *temp, data; 1297 1297 int i, j = 0, cnt; 1298 1298 u32 read_cnt; 1299 1299 u8 reg[16]; 1300 1300 int repeated_bytes = 0; 1301 1301 int buf_offset = buf - msm_host->rx_buf; 1302 1302 1303 - lp = (u32 *)buf; 1304 1303 temp = (u32 *)reg; 1305 1304 cnt = (rx_byte + 3) >> 2; 1306 1305 if (cnt > 4)
+4 -4
drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
··· 145 145 { 146 146 const unsigned long bit_rate = clk_req->bitclk_rate; 147 147 const unsigned long esc_rate = clk_req->escclk_rate; 148 - s32 ui, ui_x8, lpx; 148 + s32 ui, ui_x8; 149 149 s32 tmax, tmin; 150 150 s32 pcnt0 = 50; 151 151 s32 pcnt1 = 50; ··· 175 175 176 176 ui = mult_frac(NSEC_PER_MSEC, coeff, bit_rate / 1000); 177 177 ui_x8 = ui << 3; 178 - lpx = mult_frac(NSEC_PER_MSEC, coeff, esc_rate / 1000); 179 178 180 179 temp = S_DIV_ROUND_UP(38 * coeff - val_ckln * ui, ui_x8); 181 180 tmin = max_t(s32, temp, 0); ··· 261 262 { 262 263 const unsigned long bit_rate = clk_req->bitclk_rate; 263 264 const unsigned long esc_rate = clk_req->escclk_rate; 264 - s32 ui, ui_x8, lpx; 265 + s32 ui, ui_x8; 265 266 s32 tmax, tmin; 266 267 s32 pcnt0 = 50; 267 268 s32 pcnt1 = 50; ··· 283 284 284 285 ui = mult_frac(NSEC_PER_MSEC, coeff, bit_rate / 1000); 285 286 ui_x8 = ui << 3; 286 - lpx = mult_frac(NSEC_PER_MSEC, coeff, esc_rate / 1000); 287 287 288 288 temp = S_DIV_ROUND_UP(38 * coeff, ui_x8); 289 289 tmin = max_t(s32, temp, 0); ··· 483 485 #ifdef CONFIG_DRM_MSM_DSI_28NM_PHY 484 486 { .compatible = "qcom,dsi-phy-28nm-hpm", 485 487 .data = &dsi_phy_28nm_hpm_cfgs }, 488 + { .compatible = "qcom,dsi-phy-28nm-hpm-fam-b", 489 + .data = &dsi_phy_28nm_hpm_famb_cfgs }, 486 490 { .compatible = "qcom,dsi-phy-28nm-lp", 487 491 .data = &dsi_phy_28nm_lp_cfgs }, 488 492 #endif
+1
drivers/gpu/drm/msm/dsi/phy/dsi_phy.h
··· 40 40 }; 41 41 42 42 extern const struct msm_dsi_phy_cfg dsi_phy_28nm_hpm_cfgs; 43 + extern const struct msm_dsi_phy_cfg dsi_phy_28nm_hpm_famb_cfgs; 43 44 extern const struct msm_dsi_phy_cfg dsi_phy_28nm_lp_cfgs; 44 45 extern const struct msm_dsi_phy_cfg dsi_phy_20nm_cfgs; 45 46 extern const struct msm_dsi_phy_cfg dsi_phy_28nm_8960_cfgs;
+52 -8
drivers/gpu/drm/msm/dsi/phy/dsi_phy_28nm.c
··· 39 39 DSI_28nm_PHY_TIMING_CTRL_11_TRIG3_CMD(0)); 40 40 } 41 41 42 - static void dsi_28nm_phy_regulator_ctrl(struct msm_dsi_phy *phy, bool enable) 42 + static void dsi_28nm_phy_regulator_enable_dcdc(struct msm_dsi_phy *phy) 43 43 { 44 44 void __iomem *base = phy->reg_base; 45 - 46 - if (!enable) { 47 - dsi_phy_write(base + REG_DSI_28nm_PHY_REGULATOR_CAL_PWR_CFG, 0); 48 - return; 49 - } 50 45 51 46 dsi_phy_write(base + REG_DSI_28nm_PHY_REGULATOR_CTRL_0, 0x0); 52 47 dsi_phy_write(base + REG_DSI_28nm_PHY_REGULATOR_CAL_PWR_CFG, 1); ··· 51 56 dsi_phy_write(base + REG_DSI_28nm_PHY_REGULATOR_CTRL_1, 0x9); 52 57 dsi_phy_write(base + REG_DSI_28nm_PHY_REGULATOR_CTRL_0, 0x7); 53 58 dsi_phy_write(base + REG_DSI_28nm_PHY_REGULATOR_CTRL_4, 0x20); 59 + dsi_phy_write(phy->base + REG_DSI_28nm_PHY_LDO_CNTRL, 0x00); 60 + } 61 + 62 + static void dsi_28nm_phy_regulator_enable_ldo(struct msm_dsi_phy *phy) 63 + { 64 + void __iomem *base = phy->reg_base; 65 + 66 + dsi_phy_write(base + REG_DSI_28nm_PHY_REGULATOR_CTRL_0, 0x0); 67 + dsi_phy_write(base + REG_DSI_28nm_PHY_REGULATOR_CAL_PWR_CFG, 0); 68 + dsi_phy_write(base + REG_DSI_28nm_PHY_REGULATOR_CTRL_5, 0x7); 69 + dsi_phy_write(base + REG_DSI_28nm_PHY_REGULATOR_CTRL_3, 0); 70 + dsi_phy_write(base + REG_DSI_28nm_PHY_REGULATOR_CTRL_2, 0x1); 71 + dsi_phy_write(base + REG_DSI_28nm_PHY_REGULATOR_CTRL_1, 0x1); 72 + dsi_phy_write(base + REG_DSI_28nm_PHY_REGULATOR_CTRL_4, 0x20); 73 + 74 + if (phy->cfg->type == MSM_DSI_PHY_28NM_LP) 75 + dsi_phy_write(phy->base + REG_DSI_28nm_PHY_LDO_CNTRL, 0x05); 76 + else 77 + dsi_phy_write(phy->base + REG_DSI_28nm_PHY_LDO_CNTRL, 0x0d); 78 + } 79 + 80 + static void dsi_28nm_phy_regulator_ctrl(struct msm_dsi_phy *phy, bool enable) 81 + { 82 + if (!enable) { 83 + dsi_phy_write(phy->reg_base + 84 + REG_DSI_28nm_PHY_REGULATOR_CAL_PWR_CFG, 0); 85 + return; 86 + } 87 + 88 + if (phy->regulator_ldo_mode) 89 + dsi_28nm_phy_regulator_enable_ldo(phy); 90 + else 91 + dsi_28nm_phy_regulator_enable_dcdc(phy); 54 92 } 55 93 56 94 static int dsi_28nm_phy_enable(struct msm_dsi_phy *phy, int src_pll_id, ··· 104 76 dsi_phy_write(base + REG_DSI_28nm_PHY_STRENGTH_0, 0xff); 105 77 106 78 dsi_28nm_phy_regulator_ctrl(phy, true); 107 - 108 - dsi_phy_write(base + REG_DSI_28nm_PHY_LDO_CNTRL, 0x00); 109 79 110 80 dsi_28nm_dphy_set_timing(phy, timing); 111 81 ··· 165 139 .init = msm_dsi_phy_init_common, 166 140 }, 167 141 .io_start = { 0xfd922b00, 0xfd923100 }, 142 + .num_dsi_phy = 2, 143 + }; 144 + 145 + const struct msm_dsi_phy_cfg dsi_phy_28nm_hpm_famb_cfgs = { 146 + .type = MSM_DSI_PHY_28NM_HPM, 147 + .src_pll_truthtable = { {true, true}, {false, true} }, 148 + .reg_cfg = { 149 + .num = 1, 150 + .regs = { 151 + {"vddio", 100000, 100}, 152 + }, 153 + }, 154 + .ops = { 155 + .enable = dsi_28nm_phy_enable, 156 + .disable = dsi_28nm_phy_disable, 157 + .init = msm_dsi_phy_init_common, 158 + }, 159 + .io_start = { 0x1a94400, 0x1a96400 }, 168 160 .num_dsi_phy = 2, 169 161 }; 170 162
+6 -2
drivers/gpu/drm/msm/hdmi/hdmi_phy.c
··· 29 29 reg = devm_regulator_get(dev, cfg->reg_names[i]); 30 30 if (IS_ERR(reg)) { 31 31 ret = PTR_ERR(reg); 32 - DRM_DEV_ERROR(dev, "failed to get phy regulator: %s (%d)\n", 33 - cfg->reg_names[i], ret); 32 + if (ret != -EPROBE_DEFER) { 33 + DRM_DEV_ERROR(dev, 34 + "failed to get phy regulator: %s (%d)\n", 35 + cfg->reg_names[i], ret); 36 + } 37 + 34 38 return ret; 35 39 } 36 40
+3 -3
drivers/gpu/drm/msm/msm_gpu.c
··· 16 16 #include <linux/pm_opp.h> 17 17 #include <linux/devfreq.h> 18 18 #include <linux/devcoredump.h> 19 + #include <linux/sched/task.h> 19 20 20 21 /* 21 22 * Power Management: ··· 839 838 return ERR_CAST(aspace); 840 839 } 841 840 842 - ret = aspace->mmu->funcs->attach(aspace->mmu, NULL, 0); 841 + ret = aspace->mmu->funcs->attach(aspace->mmu); 843 842 if (ret) { 844 843 msm_gem_address_space_put(aspace); 845 844 return ERR_PTR(ret); ··· 996 995 msm_gem_kernel_put(gpu->memptrs_bo, gpu->aspace, false); 997 996 998 997 if (!IS_ERR_OR_NULL(gpu->aspace)) { 999 - gpu->aspace->mmu->funcs->detach(gpu->aspace->mmu, 1000 - NULL, 0); 998 + gpu->aspace->mmu->funcs->detach(gpu->aspace->mmu); 1001 999 msm_gem_address_space_put(gpu->aspace); 1002 1000 } 1003 1001 }
+2 -4
drivers/gpu/drm/msm/msm_gpummu.c
··· 21 21 #define GPUMMU_PAGE_SIZE SZ_4K 22 22 #define TABLE_SIZE (sizeof(uint32_t) * GPUMMU_VA_RANGE / GPUMMU_PAGE_SIZE) 23 23 24 - static int msm_gpummu_attach(struct msm_mmu *mmu, const char * const *names, 25 - int cnt) 24 + static int msm_gpummu_attach(struct msm_mmu *mmu) 26 25 { 27 26 return 0; 28 27 } 29 28 30 - static void msm_gpummu_detach(struct msm_mmu *mmu, const char * const *names, 31 - int cnt) 29 + static void msm_gpummu_detach(struct msm_mmu *mmu) 32 30 { 33 31 } 34 32
+2 -4
drivers/gpu/drm/msm/msm_iommu.c
··· 23 23 return 0; 24 24 } 25 25 26 - static int msm_iommu_attach(struct msm_mmu *mmu, const char * const *names, 27 - int cnt) 26 + static int msm_iommu_attach(struct msm_mmu *mmu) 28 27 { 29 28 struct msm_iommu *iommu = to_msm_iommu(mmu); 30 29 31 30 return iommu_attach_device(iommu->domain, mmu->dev); 32 31 } 33 32 34 - static void msm_iommu_detach(struct msm_mmu *mmu, const char * const *names, 35 - int cnt) 33 + static void msm_iommu_detach(struct msm_mmu *mmu) 36 34 { 37 35 struct msm_iommu *iommu = to_msm_iommu(mmu); 38 36
+2 -2
drivers/gpu/drm/msm/msm_mmu.h
··· 10 10 #include <linux/iommu.h> 11 11 12 12 struct msm_mmu_funcs { 13 - int (*attach)(struct msm_mmu *mmu, const char * const *names, int cnt); 14 - void (*detach)(struct msm_mmu *mmu, const char * const *names, int cnt); 13 + int (*attach)(struct msm_mmu *mmu); 14 + void (*detach)(struct msm_mmu *mmu); 15 15 int (*map)(struct msm_mmu *mmu, uint64_t iova, struct sg_table *sgt, 16 16 unsigned len, int prot); 17 17 int (*unmap)(struct msm_mmu *mmu, uint64_t iova, unsigned len);
+11 -5
drivers/gpu/drm/msm/msm_rd.c
··· 298 298 299 299 static void snapshot_buf(struct msm_rd_state *rd, 300 300 struct msm_gem_submit *submit, int idx, 301 - uint64_t iova, uint32_t size) 301 + uint64_t iova, uint32_t size, bool full) 302 302 { 303 303 struct msm_gem_object *obj = submit->bos[idx].obj; 304 304 unsigned offset = 0; ··· 317 317 */ 318 318 rd_write_section(rd, RD_GPUADDR, 319 319 (uint32_t[3]){ iova, size, iova >> 32 }, 12); 320 + 321 + if (!full) 322 + return; 320 323 321 324 /* But only dump the contents of buffers marked READ */ 322 325 if (!(submit->bos[idx].flags & MSM_SUBMIT_BO_READ)) ··· 384 381 rd_write_section(rd, RD_CMD, msg, ALIGN(n, 4)); 385 382 386 383 for (i = 0; i < submit->nr_bos; i++) 387 - if (should_dump(submit, i)) 388 - snapshot_buf(rd, submit, i, 0, 0); 384 + snapshot_buf(rd, submit, i, 0, 0, should_dump(submit, i)); 389 385 390 386 for (i = 0; i < submit->nr_cmds; i++) { 391 - uint64_t iova = submit->cmd[i].iova; 392 387 uint32_t szd = submit->cmd[i].size; /* in dwords */ 393 388 394 389 /* snapshot cmdstream bo's (if we haven't already): */ 395 390 if (!should_dump(submit, i)) { 396 391 snapshot_buf(rd, submit, submit->cmd[i].idx, 397 - submit->cmd[i].iova, szd * 4); 392 + submit->cmd[i].iova, szd * 4, true); 398 393 } 394 + } 395 + 396 + for (i = 0; i < submit->nr_cmds; i++) { 397 + uint64_t iova = submit->cmd[i].iova; 398 + uint32_t szd = submit->cmd[i].size; /* in dwords */ 399 399 400 400 switch (submit->cmd[i].type) { 401 401 case MSM_SUBMIT_CMD_IB_TARGET_BUF:
+4
drivers/gpu/drm/omapdrm/omap_gem.c
··· 843 843 */ 844 844 static void omap_gem_unpin_locked(struct drm_gem_object *obj) 845 845 { 846 + struct omap_drm_private *priv = obj->dev->dev_private; 846 847 struct omap_gem_object *omap_obj = to_omap_bo(obj); 847 848 int ret; 849 + 850 + if (omap_gem_is_contiguous(omap_obj) || !priv->has_dmm) 851 + return; 848 852 849 853 if (refcount_dec_and_test(&omap_obj->dma_addr_cnt)) { 850 854 ret = tiler_unpin(omap_obj->block);
+2 -2
drivers/gpu/drm/radeon/r100.c
··· 1826 1826 track->textures[i].use_pitch = 1; 1827 1827 } else { 1828 1828 track->textures[i].use_pitch = 0; 1829 - track->textures[i].width = 1 << ((idx_value >> RADEON_TXFORMAT_WIDTH_SHIFT) & RADEON_TXFORMAT_WIDTH_MASK); 1830 - track->textures[i].height = 1 << ((idx_value >> RADEON_TXFORMAT_HEIGHT_SHIFT) & RADEON_TXFORMAT_HEIGHT_MASK); 1829 + track->textures[i].width = 1 << ((idx_value & RADEON_TXFORMAT_WIDTH_MASK) >> RADEON_TXFORMAT_WIDTH_SHIFT); 1830 + track->textures[i].height = 1 << ((idx_value & RADEON_TXFORMAT_HEIGHT_MASK) >> RADEON_TXFORMAT_HEIGHT_SHIFT); 1831 1831 } 1832 1832 if (idx_value & RADEON_TXFORMAT_CUBIC_MAP_ENABLE) 1833 1833 track->textures[i].tex_coord_type = 2;
+2 -2
drivers/gpu/drm/radeon/r200.c
··· 476 476 track->textures[i].use_pitch = 1; 477 477 } else { 478 478 track->textures[i].use_pitch = 0; 479 - track->textures[i].width = 1 << ((idx_value >> RADEON_TXFORMAT_WIDTH_SHIFT) & RADEON_TXFORMAT_WIDTH_MASK); 480 - track->textures[i].height = 1 << ((idx_value >> RADEON_TXFORMAT_HEIGHT_SHIFT) & RADEON_TXFORMAT_HEIGHT_MASK); 479 + track->textures[i].width = 1 << ((idx_value & RADEON_TXFORMAT_WIDTH_MASK) >> RADEON_TXFORMAT_WIDTH_SHIFT); 480 + track->textures[i].height = 1 << ((idx_value & RADEON_TXFORMAT_HEIGHT_MASK) >> RADEON_TXFORMAT_HEIGHT_SHIFT); 481 481 } 482 482 if (idx_value & R200_TXFORMAT_LOOKUP_DISABLE) 483 483 track->textures[i].lookup_disable = true;
+9 -9
drivers/gpu/drm/tegra/dc.c
··· 837 837 static void tegra_cursor_atomic_update(struct drm_plane *plane, 838 838 struct drm_plane_state *old_state) 839 839 { 840 - struct tegra_bo *bo = tegra_fb_get_plane(plane->state->fb, 0); 840 + struct tegra_plane_state *state = to_tegra_plane_state(plane->state); 841 841 struct tegra_dc *dc = to_tegra_dc(plane->state->crtc); 842 - struct drm_plane_state *state = plane->state; 843 842 u32 value = CURSOR_CLIP_DISPLAY; 844 843 845 844 /* rien ne va plus */ 846 845 if (!plane->state->crtc || !plane->state->fb) 847 846 return; 848 847 849 - switch (state->crtc_w) { 848 + switch (plane->state->crtc_w) { 850 849 case 32: 851 850 value |= CURSOR_SIZE_32x32; 852 851 break; ··· 863 864 break; 864 865 865 866 default: 866 - WARN(1, "cursor size %ux%u not supported\n", state->crtc_w, 867 - state->crtc_h); 867 + WARN(1, "cursor size %ux%u not supported\n", 868 + plane->state->crtc_w, plane->state->crtc_h); 868 869 return; 869 870 } 870 871 871 - value |= (bo->iova >> 10) & 0x3fffff; 872 + value |= (state->iova[0] >> 10) & 0x3fffff; 872 873 tegra_dc_writel(dc, value, DC_DISP_CURSOR_START_ADDR); 873 874 874 875 #ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT 875 - value = (bo->iova >> 32) & 0x3; 876 + value = (state->iova[0] >> 32) & 0x3; 876 877 tegra_dc_writel(dc, value, DC_DISP_CURSOR_START_ADDR_HI); 877 878 #endif 878 879 ··· 891 892 tegra_dc_writel(dc, value, DC_DISP_BLEND_CURSOR_CONTROL); 892 893 893 894 /* position the cursor */ 894 - value = (state->crtc_y & 0x3fff) << 16 | (state->crtc_x & 0x3fff); 895 + value = (plane->state->crtc_y & 0x3fff) << 16 | 896 + (plane->state->crtc_x & 0x3fff); 895 897 tegra_dc_writel(dc, value, DC_DISP_CURSOR_POSITION); 896 898 } 897 899 ··· 2017 2017 dev_warn(dc->dev, "failed to allocate syncpoint\n"); 2018 2018 2019 2019 err = host1x_client_iommu_attach(client); 2020 - if (err < 0) { 2020 + if (err < 0 && err != -ENODEV) { 2021 2021 dev_err(client->dev, "failed to attach to domain: %d\n", err); 2022 2022 return err; 2023 2023 }
+4 -3
drivers/gpu/drm/tegra/drm.c
··· 920 920 921 921 if (tegra->domain) { 922 922 group = iommu_group_get(client->dev); 923 - if (!group) { 924 - dev_err(client->dev, "failed to get IOMMU group\n"); 923 + if (!group) 925 924 return -ENODEV; 926 - } 927 925 928 926 if (domain != tegra->domain) { 929 927 err = iommu_attach_group(tegra->domain, group); ··· 1240 1242 tegra_drm_fb_exit(drm); 1241 1243 drm_atomic_helper_shutdown(drm); 1242 1244 drm_mode_config_cleanup(drm); 1245 + 1246 + if (tegra->hub) 1247 + tegra_display_hub_cleanup(tegra->hub); 1243 1248 1244 1249 err = host1x_device_exit(dev); 1245 1250 if (err < 0)
+43 -7
drivers/gpu/drm/tegra/gem.c
··· 27 27 drm_gem_object_put_unlocked(&obj->gem); 28 28 } 29 29 30 + /* XXX move this into lib/scatterlist.c? */ 31 + static int sg_alloc_table_from_sg(struct sg_table *sgt, struct scatterlist *sg, 32 + unsigned int nents, gfp_t gfp_mask) 33 + { 34 + struct scatterlist *dst; 35 + unsigned int i; 36 + int err; 37 + 38 + err = sg_alloc_table(sgt, nents, gfp_mask); 39 + if (err < 0) 40 + return err; 41 + 42 + dst = sgt->sgl; 43 + 44 + for (i = 0; i < nents; i++) { 45 + sg_set_page(dst, sg_page(sg), sg->length, 0); 46 + dst = sg_next(dst); 47 + sg = sg_next(sg); 48 + } 49 + 50 + return 0; 51 + } 52 + 30 53 static struct sg_table *tegra_bo_pin(struct device *dev, struct host1x_bo *bo, 31 54 dma_addr_t *phys) 32 55 { ··· 75 52 return ERR_PTR(-ENOMEM); 76 53 77 54 if (obj->pages) { 55 + /* 56 + * If the buffer object was allocated from the explicit IOMMU 57 + * API code paths, construct an SG table from the pages. 58 + */ 78 59 err = sg_alloc_table_from_pages(sgt, obj->pages, obj->num_pages, 79 60 0, obj->gem.size, GFP_KERNEL); 80 61 if (err < 0) 81 62 goto free; 63 + } else if (obj->sgt) { 64 + /* 65 + * If the buffer object already has an SG table but no pages 66 + * were allocated for it, it means the buffer was imported and 67 + * the SG table needs to be copied to avoid overwriting any 68 + * other potential users of the original SG table. 69 + */ 70 + err = sg_alloc_table_from_sg(sgt, obj->sgt->sgl, obj->sgt->nents, 71 + GFP_KERNEL); 72 + if (err < 0) 73 + goto free; 82 74 } else { 75 + /* 76 + * If the buffer object had no pages allocated and if it was 77 + * not imported, it had to be allocated with the DMA API, so 78 + * the DMA API helper can be used. 79 + */ 83 80 err = dma_get_sgtable(dev, sgt, obj->vaddr, obj->iova, 84 81 obj->gem.size); 85 82 if (err < 0) ··· 440 397 err = tegra_bo_iommu_map(tegra, bo); 441 398 if (err < 0) 442 399 goto detach; 443 - } else { 444 - if (bo->sgt->nents > 1) { 445 - err = -EINVAL; 446 - goto detach; 447 - } 448 - 449 - bo->iova = sg_dma_address(bo->sgt->sgl); 450 400 } 451 401 452 402 bo->gem.import_attach = attach;
-3
drivers/gpu/drm/tegra/hub.c
··· 605 605 tegra_display_hub_get_state(struct tegra_display_hub *hub, 606 606 struct drm_atomic_state *state) 607 607 { 608 - struct drm_device *drm = dev_get_drvdata(hub->client.parent); 609 608 struct drm_private_state *priv; 610 - 611 - WARN_ON(!drm_modeset_is_locked(&drm->mode_config.connection_mutex)); 612 609 613 610 priv = drm_atomic_get_private_obj_state(state, &hub->base); 614 611 if (IS_ERR(priv))
+11
drivers/gpu/drm/tegra/plane.c
··· 129 129 goto unpin; 130 130 } 131 131 132 + /* 133 + * The display controller needs contiguous memory, so 134 + * fail if the buffer is discontiguous and we fail to 135 + * map its SG table to a single contiguous chunk of 136 + * I/O virtual memory. 137 + */ 138 + if (err > 1) { 139 + err = -EINVAL; 140 + goto unpin; 141 + } 142 + 132 143 state->iova[i] = sg_dma_address(sgt->sgl); 133 144 state->sgt[i] = sgt; 134 145 } else {
+33 -5
drivers/gpu/drm/tegra/sor.c
··· 3912 3912 return 0; 3913 3913 } 3914 3914 3915 - #ifdef CONFIG_PM 3916 - static int tegra_sor_suspend(struct device *dev) 3915 + static int tegra_sor_runtime_suspend(struct device *dev) 3917 3916 { 3918 3917 struct tegra_sor *sor = dev_get_drvdata(dev); 3919 3918 int err; ··· 3934 3935 return 0; 3935 3936 } 3936 3937 3937 - static int tegra_sor_resume(struct device *dev) 3938 + static int tegra_sor_runtime_resume(struct device *dev) 3938 3939 { 3939 3940 struct tegra_sor *sor = dev_get_drvdata(dev); 3940 3941 int err; ··· 3966 3967 3967 3968 return 0; 3968 3969 } 3969 - #endif 3970 + 3971 + static int tegra_sor_suspend(struct device *dev) 3972 + { 3973 + struct tegra_sor *sor = dev_get_drvdata(dev); 3974 + int err; 3975 + 3976 + if (sor->hdmi_supply) { 3977 + err = regulator_disable(sor->hdmi_supply); 3978 + if (err < 0) 3979 + return err; 3980 + } 3981 + 3982 + return 0; 3983 + } 3984 + 3985 + static int tegra_sor_resume(struct device *dev) 3986 + { 3987 + struct tegra_sor *sor = dev_get_drvdata(dev); 3988 + int err; 3989 + 3990 + if (sor->hdmi_supply) { 3991 + err = regulator_enable(sor->hdmi_supply); 3992 + if (err < 0) 3993 + return err; 3994 + } 3995 + 3996 + return 0; 3997 + } 3970 3998 3971 3999 static const struct dev_pm_ops tegra_sor_pm_ops = { 3972 - SET_RUNTIME_PM_OPS(tegra_sor_suspend, tegra_sor_resume, NULL) 4000 + SET_RUNTIME_PM_OPS(tegra_sor_runtime_suspend, tegra_sor_runtime_resume, 4001 + NULL) 4002 + SET_SYSTEM_SLEEP_PM_OPS(tegra_sor_suspend, tegra_sor_resume) 3973 4003 }; 3974 4004 3975 4005 struct platform_driver tegra_sor_driver = {
+4 -3
drivers/gpu/drm/tegra/vic.c
··· 167 167 int err; 168 168 169 169 err = host1x_client_iommu_attach(client); 170 - if (err < 0) { 170 + if (err < 0 && err != -ENODEV) { 171 171 dev_err(vic->dev, "failed to attach to domain: %d\n", err); 172 172 return err; 173 173 } ··· 386 386 .supports_sid = true, 387 387 }; 388 388 389 - static const struct of_device_id vic_match[] = { 389 + static const struct of_device_id tegra_vic_of_match[] = { 390 390 { .compatible = "nvidia,tegra124-vic", .data = &vic_t124_config }, 391 391 { .compatible = "nvidia,tegra210-vic", .data = &vic_t210_config }, 392 392 { .compatible = "nvidia,tegra186-vic", .data = &vic_t186_config }, 393 393 { .compatible = "nvidia,tegra194-vic", .data = &vic_t194_config }, 394 394 { }, 395 395 }; 396 + MODULE_DEVICE_TABLE(of, tegra_vic_of_match); 396 397 397 398 static int vic_probe(struct platform_device *pdev) 398 399 { ··· 517 516 struct platform_driver tegra_vic_driver = { 518 517 .driver = { 519 518 .name = "tegra-vic", 520 - .of_match_table = vic_match, 519 + .of_match_table = tegra_vic_of_match, 521 520 .pm = &vic_pm_ops 522 521 }, 523 522 .probe = vic_probe,
+10
drivers/soc/qcom/Kconfig
··· 66 66 tristate 67 67 select QCOM_SCM 68 68 69 + config QCOM_OCMEM 70 + tristate "Qualcomm On Chip Memory (OCMEM) driver" 71 + depends on ARCH_QCOM 72 + select QCOM_SCM 73 + help 74 + The On Chip Memory (OCMEM) allocator allows various clients to 75 + allocate memory from OCMEM based on performance, latency and power 76 + requirements. This is typically used by the GPU, camera/video, and 77 + audio components on some Snapdragon SoCs. 78 + 69 79 config QCOM_PM 70 80 bool "Qualcomm Power Management" 71 81 depends on ARCH_QCOM && !ARM64
+1
drivers/soc/qcom/Makefile
··· 6 6 obj-$(CONFIG_QCOM_GLINK_SSR) += glink_ssr.o 7 7 obj-$(CONFIG_QCOM_GSBI) += qcom_gsbi.o 8 8 obj-$(CONFIG_QCOM_MDT_LOADER) += mdt_loader.o 9 + obj-$(CONFIG_QCOM_OCMEM) += ocmem.o 9 10 obj-$(CONFIG_QCOM_PM) += spm.o 10 11 obj-$(CONFIG_QCOM_QMI_HELPERS) += qmi_helpers.o 11 12 qmi_helpers-y += qmi_encdec.o qmi_interface.o
+433
drivers/soc/qcom/ocmem.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + /* 3 + * The On Chip Memory (OCMEM) allocator allows various clients to allocate 4 + * memory from OCMEM based on performance, latency and power requirements. 5 + * This is typically used by the GPU, camera/video, and audio components on 6 + * some Snapdragon SoCs. 7 + * 8 + * Copyright (C) 2019 Brian Masney <masneyb@onstation.org> 9 + * Copyright (C) 2015 Red Hat. Author: Rob Clark <robdclark@gmail.com> 10 + */ 11 + 12 + #include <linux/bitfield.h> 13 + #include <linux/clk.h> 14 + #include <linux/io.h> 15 + #include <linux/kernel.h> 16 + #include <linux/module.h> 17 + #include <linux/of_device.h> 18 + #include <linux/platform_device.h> 19 + #include <linux/qcom_scm.h> 20 + #include <linux/sizes.h> 21 + #include <linux/slab.h> 22 + #include <linux/types.h> 23 + #include <soc/qcom/ocmem.h> 24 + 25 + enum region_mode { 26 + WIDE_MODE = 0x0, 27 + THIN_MODE, 28 + MODE_DEFAULT = WIDE_MODE, 29 + }; 30 + 31 + enum ocmem_macro_state { 32 + PASSTHROUGH = 0, 33 + PERI_ON = 1, 34 + CORE_ON = 2, 35 + CLK_OFF = 4, 36 + }; 37 + 38 + struct ocmem_region { 39 + bool interleaved; 40 + enum region_mode mode; 41 + unsigned int num_macros; 42 + enum ocmem_macro_state macro_state[4]; 43 + unsigned long macro_size; 44 + unsigned long region_size; 45 + }; 46 + 47 + struct ocmem_config { 48 + uint8_t num_regions; 49 + unsigned long macro_size; 50 + }; 51 + 52 + struct ocmem { 53 + struct device *dev; 54 + const struct ocmem_config *config; 55 + struct resource *memory; 56 + void __iomem *mmio; 57 + unsigned int num_ports; 58 + unsigned int num_macros; 59 + bool interleaved; 60 + struct ocmem_region *regions; 61 + unsigned long active_allocations; 62 + }; 63 + 64 + #define OCMEM_MIN_ALIGN SZ_64K 65 + #define OCMEM_MIN_ALLOC SZ_64K 66 + 67 + #define OCMEM_REG_HW_VERSION 0x00000000 68 + #define OCMEM_REG_HW_PROFILE 0x00000004 69 + 70 + #define OCMEM_REG_REGION_MODE_CTL 0x00001000 71 + #define OCMEM_REGION_MODE_CTL_REG0_THIN 0x00000001 72 + #define OCMEM_REGION_MODE_CTL_REG1_THIN 0x00000002 73 + #define OCMEM_REGION_MODE_CTL_REG2_THIN 0x00000004 74 + #define OCMEM_REGION_MODE_CTL_REG3_THIN 0x00000008 75 + 76 + #define OCMEM_REG_GFX_MPU_START 0x00001004 77 + #define OCMEM_REG_GFX_MPU_END 0x00001008 78 + 79 + #define OCMEM_HW_PROFILE_NUM_PORTS(val) FIELD_PREP(0x0000000f, (val)) 80 + #define OCMEM_HW_PROFILE_NUM_MACROS(val) FIELD_PREP(0x00003f00, (val)) 81 + 82 + #define OCMEM_HW_PROFILE_LAST_REGN_HALFSIZE 0x00010000 83 + #define OCMEM_HW_PROFILE_INTERLEAVING 0x00020000 84 + #define OCMEM_REG_GEN_STATUS 0x0000000c 85 + 86 + #define OCMEM_REG_PSGSC_STATUS 0x00000038 87 + #define OCMEM_REG_PSGSC_CTL(i0) (0x0000003c + 0x1*(i0)) 88 + 89 + #define OCMEM_PSGSC_CTL_MACRO0_MODE(val) FIELD_PREP(0x00000007, (val)) 90 + #define OCMEM_PSGSC_CTL_MACRO1_MODE(val) FIELD_PREP(0x00000070, (val)) 91 + #define OCMEM_PSGSC_CTL_MACRO2_MODE(val) FIELD_PREP(0x00000700, (val)) 92 + #define OCMEM_PSGSC_CTL_MACRO3_MODE(val) FIELD_PREP(0x00007000, (val)) 93 + 94 + #define OCMEM_CLK_CORE_IDX 0 95 + static struct clk_bulk_data ocmem_clks[] = { 96 + { 97 + .id = "core", 98 + }, 99 + { 100 + .id = "iface", 101 + }, 102 + }; 103 + 104 + static inline void ocmem_write(struct ocmem *ocmem, u32 reg, u32 data) 105 + { 106 + writel(data, ocmem->mmio + reg); 107 + } 108 + 109 + static inline u32 ocmem_read(struct ocmem *ocmem, u32 reg) 110 + { 111 + return readl(ocmem->mmio + reg); 112 + } 113 + 114 + static void update_ocmem(struct ocmem *ocmem) 115 + { 116 + uint32_t region_mode_ctrl = 0x0; 117 + int i; 
118 + 119 + if (!qcom_scm_ocmem_lock_available()) { 120 + for (i = 0; i < ocmem->config->num_regions; i++) { 121 + struct ocmem_region *region = &ocmem->regions[i]; 122 + 123 + if (region->mode == THIN_MODE) 124 + region_mode_ctrl |= BIT(i); 125 + } 126 + 127 + dev_dbg(ocmem->dev, "ocmem_region_mode_control %x\n", 128 + region_mode_ctrl); 129 + ocmem_write(ocmem, OCMEM_REG_REGION_MODE_CTL, region_mode_ctrl); 130 + } 131 + 132 + for (i = 0; i < ocmem->config->num_regions; i++) { 133 + struct ocmem_region *region = &ocmem->regions[i]; 134 + u32 data; 135 + 136 + data = OCMEM_PSGSC_CTL_MACRO0_MODE(region->macro_state[0]) | 137 + OCMEM_PSGSC_CTL_MACRO1_MODE(region->macro_state[1]) | 138 + OCMEM_PSGSC_CTL_MACRO2_MODE(region->macro_state[2]) | 139 + OCMEM_PSGSC_CTL_MACRO3_MODE(region->macro_state[3]); 140 + 141 + ocmem_write(ocmem, OCMEM_REG_PSGSC_CTL(i), data); 142 + } 143 + } 144 + 145 + static unsigned long phys_to_offset(struct ocmem *ocmem, 146 + unsigned long addr) 147 + { 148 + if (addr < ocmem->memory->start || addr >= ocmem->memory->end) 149 + return 0; 150 + 151 + return addr - ocmem->memory->start; 152 + } 153 + 154 + static unsigned long device_address(struct ocmem *ocmem, 155 + enum ocmem_client client, 156 + unsigned long addr) 157 + { 158 + WARN_ON(client != OCMEM_GRAPHICS); 159 + 160 + /* TODO: gpu uses phys_to_offset, but others do not.. */ 161 + return phys_to_offset(ocmem, addr); 162 + } 163 + 164 + static void update_range(struct ocmem *ocmem, struct ocmem_buf *buf, 165 + enum ocmem_macro_state mstate, enum region_mode rmode) 166 + { 167 + unsigned long offset = 0; 168 + int i, j; 169 + 170 + for (i = 0; i < ocmem->config->num_regions; i++) { 171 + struct ocmem_region *region = &ocmem->regions[i]; 172 + 173 + if (buf->offset <= offset && offset < buf->offset + buf->len) 174 + region->mode = rmode; 175 + 176 + for (j = 0; j < region->num_macros; j++) { 177 + if (buf->offset <= offset && 178 + offset < buf->offset + buf->len) 179 + region->macro_state[j] = mstate; 180 + 181 + offset += region->macro_size; 182 + } 183 + } 184 + 185 + update_ocmem(ocmem); 186 + } 187 + 188 + struct ocmem *of_get_ocmem(struct device *dev) 189 + { 190 + struct platform_device *pdev; 191 + struct device_node *devnode; 192 + 193 + devnode = of_parse_phandle(dev->of_node, "sram", 0); 194 + if (!devnode || !devnode->parent) { 195 + dev_err(dev, "Cannot look up sram phandle\n"); 196 + return ERR_PTR(-ENODEV); 197 + } 198 + 199 + pdev = of_find_device_by_node(devnode->parent); 200 + if (!pdev) { 201 + dev_err(dev, "Cannot find device node %s\n", devnode->name); 202 + return ERR_PTR(-EPROBE_DEFER); 203 + } 204 + 205 + return platform_get_drvdata(pdev); 206 + } 207 + EXPORT_SYMBOL(of_get_ocmem); 208 + 209 + struct ocmem_buf *ocmem_allocate(struct ocmem *ocmem, enum ocmem_client client, 210 + unsigned long size) 211 + { 212 + struct ocmem_buf *buf; 213 + int ret; 214 + 215 + /* TODO: add support for other clients... 
*/ 216 + if (WARN_ON(client != OCMEM_GRAPHICS)) 217 + return ERR_PTR(-ENODEV); 218 + 219 + if (size < OCMEM_MIN_ALLOC || !IS_ALIGNED(size, OCMEM_MIN_ALIGN)) 220 + return ERR_PTR(-EINVAL); 221 + 222 + if (test_and_set_bit_lock(BIT(client), &ocmem->active_allocations)) 223 + return ERR_PTR(-EBUSY); 224 + 225 + buf = kzalloc(sizeof(*buf), GFP_KERNEL); 226 + if (!buf) { 227 + ret = -ENOMEM; 228 + goto err_unlock; 229 + } 230 + 231 + buf->offset = 0; 232 + buf->addr = device_address(ocmem, client, buf->offset); 233 + buf->len = size; 234 + 235 + update_range(ocmem, buf, CORE_ON, WIDE_MODE); 236 + 237 + if (qcom_scm_ocmem_lock_available()) { 238 + ret = qcom_scm_ocmem_lock(QCOM_SCM_OCMEM_GRAPHICS_ID, 239 + buf->offset, buf->len, WIDE_MODE); 240 + if (ret) { 241 + dev_err(ocmem->dev, "could not lock: %d\n", ret); 242 + ret = -EINVAL; 243 + goto err_kfree; 244 + } 245 + } else { 246 + ocmem_write(ocmem, OCMEM_REG_GFX_MPU_START, buf->offset); 247 + ocmem_write(ocmem, OCMEM_REG_GFX_MPU_END, 248 + buf->offset + buf->len); 249 + } 250 + 251 + dev_dbg(ocmem->dev, "using %ldK of OCMEM at 0x%08lx for client %d\n", 252 + size / 1024, buf->addr, client); 253 + 254 + return buf; 255 + 256 + err_kfree: 257 + kfree(buf); 258 + err_unlock: 259 + clear_bit_unlock(BIT(client), &ocmem->active_allocations); 260 + 261 + return ERR_PTR(ret); 262 + } 263 + EXPORT_SYMBOL(ocmem_allocate); 264 + 265 + void ocmem_free(struct ocmem *ocmem, enum ocmem_client client, 266 + struct ocmem_buf *buf) 267 + { 268 + /* TODO: add support for other clients... */ 269 + if (WARN_ON(client != OCMEM_GRAPHICS)) 270 + return; 271 + 272 + update_range(ocmem, buf, CLK_OFF, MODE_DEFAULT); 273 + 274 + if (qcom_scm_ocmem_lock_available()) { 275 + int ret; 276 + 277 + ret = qcom_scm_ocmem_unlock(QCOM_SCM_OCMEM_GRAPHICS_ID, 278 + buf->offset, buf->len); 279 + if (ret) 280 + dev_err(ocmem->dev, "could not unlock: %d\n", ret); 281 + } else { 282 + ocmem_write(ocmem, OCMEM_REG_GFX_MPU_START, 0x0); 283 + ocmem_write(ocmem, OCMEM_REG_GFX_MPU_END, 0x0); 284 + } 285 + 286 + kfree(buf); 287 + 288 + clear_bit_unlock(BIT(client), &ocmem->active_allocations); 289 + } 290 + EXPORT_SYMBOL(ocmem_free); 291 + 292 + static int ocmem_dev_probe(struct platform_device *pdev) 293 + { 294 + struct device *dev = &pdev->dev; 295 + unsigned long reg, region_size; 296 + int i, j, ret, num_banks; 297 + struct resource *res; 298 + struct ocmem *ocmem; 299 + 300 + if (!qcom_scm_is_available()) 301 + return -EPROBE_DEFER; 302 + 303 + ocmem = devm_kzalloc(dev, sizeof(*ocmem), GFP_KERNEL); 304 + if (!ocmem) 305 + return -ENOMEM; 306 + 307 + ocmem->dev = dev; 308 + ocmem->config = device_get_match_data(dev); 309 + 310 + ret = devm_clk_bulk_get(dev, ARRAY_SIZE(ocmem_clks), ocmem_clks); 311 + if (ret) { 312 + if (ret != -EPROBE_DEFER) 313 + dev_err(dev, "Unable to get clocks\n"); 314 + 315 + return ret; 316 + } 317 + 318 + res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "ctrl"); 319 + ocmem->mmio = devm_ioremap_resource(&pdev->dev, res); 320 + if (IS_ERR(ocmem->mmio)) { 321 + dev_err(&pdev->dev, "Failed to ioremap ocmem_ctrl resource\n"); 322 + return PTR_ERR(ocmem->mmio); 323 + } 324 + 325 + ocmem->memory = platform_get_resource_byname(pdev, IORESOURCE_MEM, 326 + "mem"); 327 + if (!ocmem->memory) { 328 + dev_err(dev, "Could not get mem region\n"); 329 + return -ENXIO; 330 + } 331 + 332 + /* The core clock is synchronous with graphics */ 333 + WARN_ON(clk_set_rate(ocmem_clks[OCMEM_CLK_CORE_IDX].clk, 1000) < 0); 334 + 335 + ret = 
clk_bulk_prepare_enable(ARRAY_SIZE(ocmem_clks), ocmem_clks); 336 + if (ret) { 337 + dev_info(ocmem->dev, "Failed to enable clocks\n"); 338 + return ret; 339 + } 340 + 341 + if (qcom_scm_restore_sec_cfg_available()) { 342 + dev_dbg(dev, "configuring scm\n"); 343 + ret = qcom_scm_restore_sec_cfg(QCOM_SCM_OCMEM_DEV_ID, 0); 344 + if (ret) { 345 + dev_err(dev, "Could not enable secure configuration\n"); 346 + goto err_clk_disable; 347 + } 348 + } 349 + 350 + reg = ocmem_read(ocmem, OCMEM_REG_HW_PROFILE); 351 + ocmem->num_ports = OCMEM_HW_PROFILE_NUM_PORTS(reg); 352 + ocmem->num_macros = OCMEM_HW_PROFILE_NUM_MACROS(reg); 353 + ocmem->interleaved = !!(reg & OCMEM_HW_PROFILE_INTERLEAVING); 354 + 355 + num_banks = ocmem->num_ports / 2; 356 + region_size = ocmem->config->macro_size * num_banks; 357 + 358 + dev_info(dev, "%u ports, %u regions, %u macros, %sinterleaved\n", 359 + ocmem->num_ports, ocmem->config->num_regions, 360 + ocmem->num_macros, ocmem->interleaved ? "" : "not "); 361 + 362 + ocmem->regions = devm_kcalloc(dev, ocmem->config->num_regions, 363 + sizeof(struct ocmem_region), GFP_KERNEL); 364 + if (!ocmem->regions) { 365 + ret = -ENOMEM; 366 + goto err_clk_disable; 367 + } 368 + 369 + for (i = 0; i < ocmem->config->num_regions; i++) { 370 + struct ocmem_region *region = &ocmem->regions[i]; 371 + 372 + if (WARN_ON(num_banks > ARRAY_SIZE(region->macro_state))) { 373 + ret = -EINVAL; 374 + goto err_clk_disable; 375 + } 376 + 377 + region->mode = MODE_DEFAULT; 378 + region->num_macros = num_banks; 379 + 380 + if (i == (ocmem->config->num_regions - 1) && 381 + reg & OCMEM_HW_PROFILE_LAST_REGN_HALFSIZE) { 382 + region->macro_size = ocmem->config->macro_size / 2; 383 + region->region_size = region_size / 2; 384 + } else { 385 + region->macro_size = ocmem->config->macro_size; 386 + region->region_size = region_size; 387 + } 388 + 389 + for (j = 0; j < ARRAY_SIZE(region->macro_state); j++) 390 + region->macro_state[j] = CLK_OFF; 391 + } 392 + 393 + platform_set_drvdata(pdev, ocmem); 394 + 395 + return 0; 396 + 397 + err_clk_disable: 398 + clk_bulk_disable_unprepare(ARRAY_SIZE(ocmem_clks), ocmem_clks); 399 + return ret; 400 + } 401 + 402 + static int ocmem_dev_remove(struct platform_device *pdev) 403 + { 404 + clk_bulk_disable_unprepare(ARRAY_SIZE(ocmem_clks), ocmem_clks); 405 + 406 + return 0; 407 + } 408 + 409 + static const struct ocmem_config ocmem_8974_config = { 410 + .num_regions = 3, 411 + .macro_size = SZ_128K, 412 + }; 413 + 414 + static const struct of_device_id ocmem_of_match[] = { 415 + { .compatible = "qcom,msm8974-ocmem", .data = &ocmem_8974_config }, 416 + { } 417 + }; 418 + 419 + MODULE_DEVICE_TABLE(of, ocmem_of_match); 420 + 421 + static struct platform_driver ocmem_driver = { 422 + .probe = ocmem_dev_probe, 423 + .remove = ocmem_dev_remove, 424 + .driver = { 425 + .name = "ocmem", 426 + .of_match_table = ocmem_of_match, 427 + }, 428 + }; 429 + 430 + module_platform_driver(ocmem_driver); 431 + 432 + MODULE_DESCRIPTION("On Chip Memory (OCMEM) allocator for some Snapdragon SoCs"); 433 + MODULE_LICENSE("GPL v2");
-2
include/linux/agpgart.h
··· 30 30 #include <linux/agp_backend.h> 31 31 #include <uapi/linux/agpgart.h> 32 32 33 - #define AGPGART_MINOR 175 34 - 35 33 struct agp_info { 36 34 struct agp_version version; /* version of the driver */ 37 35 u32 bridge_id; /* bridge vendor/device */
+1
include/linux/miscdevice.h
··· 33 33 #define SGI_MMTIMER 153 34 34 #define STORE_QUEUE_MINOR 155 /* unused */ 35 35 #define I2O_MINOR 166 36 + #define AGPGART_MINOR 175 36 37 #define HWRNG_MINOR 183 37 38 #define MICROCODE_MINOR 184 38 39 #define IRNET_MINOR 187
+26
include/linux/qcom_scm.h
··· 24 24 int perm; 25 25 }; 26 26 27 + enum qcom_scm_ocmem_client { 28 + QCOM_SCM_OCMEM_UNUSED_ID = 0x0, 29 + QCOM_SCM_OCMEM_GRAPHICS_ID, 30 + QCOM_SCM_OCMEM_VIDEO_ID, 31 + QCOM_SCM_OCMEM_LP_AUDIO_ID, 32 + QCOM_SCM_OCMEM_SENSORS_ID, 33 + QCOM_SCM_OCMEM_OTHER_OS_ID, 34 + QCOM_SCM_OCMEM_DEBUG_ID, 35 + }; 36 + 37 + enum qcom_scm_sec_dev_id { 38 + QCOM_SCM_MDSS_DEV_ID = 1, 39 + QCOM_SCM_OCMEM_DEV_ID = 5, 40 + QCOM_SCM_PCIE0_DEV_ID = 11, 41 + QCOM_SCM_PCIE1_DEV_ID = 12, 42 + QCOM_SCM_GFX_DEV_ID = 18, 43 + QCOM_SCM_UFS_DEV_ID = 19, 44 + QCOM_SCM_ICE_DEV_ID = 20, 45 + }; 46 + 27 47 #define QCOM_SCM_VMID_HLOS 0x3 28 48 #define QCOM_SCM_VMID_MSS_MSA 0xF 29 49 #define QCOM_SCM_VMID_WLAN 0x18 ··· 61 41 extern bool qcom_scm_hdcp_available(void); 62 42 extern int qcom_scm_hdcp_req(struct qcom_scm_hdcp_req *req, u32 req_cnt, 63 43 u32 *resp); 44 + extern bool qcom_scm_ocmem_lock_available(void); 45 + extern int qcom_scm_ocmem_lock(enum qcom_scm_ocmem_client id, u32 offset, 46 + u32 size, u32 mode); 47 + extern int qcom_scm_ocmem_unlock(enum qcom_scm_ocmem_client id, u32 offset, 48 + u32 size); 64 49 extern bool qcom_scm_pas_supported(u32 peripheral); 65 50 extern int qcom_scm_pas_init_image(u32 peripheral, const void *metadata, 66 51 size_t size); ··· 80 55 extern void qcom_scm_cpu_power_down(u32 flags); 81 56 extern u32 qcom_scm_get_version(void); 82 57 extern int qcom_scm_set_remote_state(u32 state, u32 id); 58 + extern bool qcom_scm_restore_sec_cfg_available(void); 83 59 extern int qcom_scm_restore_sec_cfg(u32 device_id, u32 spare); 84 60 extern int qcom_scm_iommu_secure_ptbl_size(u32 spare, size_t *size); 85 61 extern int qcom_scm_iommu_secure_ptbl_init(u64 addr, u32 size, u32 spare);
+65
include/soc/qcom/ocmem.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0-only */ 2 + /* 3 + * The On Chip Memory (OCMEM) allocator allows various clients to allocate 4 + * memory from OCMEM based on performance, latency and power requirements. 5 + * This is typically used by the GPU, camera/video, and audio components on 6 + * some Snapdragon SoCs. 7 + * 8 + * Copyright (C) 2019 Brian Masney <masneyb@onstation.org> 9 + * Copyright (C) 2015 Red Hat. Author: Rob Clark <robdclark@gmail.com> 10 + */ 11 + 12 + #include <linux/device.h> 13 + #include <linux/err.h> 14 + 15 + #ifndef __OCMEM_H__ 16 + #define __OCMEM_H__ 17 + 18 + enum ocmem_client { 19 + /* GMEM clients */ 20 + OCMEM_GRAPHICS = 0x0, 21 + /* 22 + * TODO add more once ocmem_allocate() is clever enough to 23 + * deal with multiple clients. 24 + */ 25 + OCMEM_CLIENT_MAX, 26 + }; 27 + 28 + struct ocmem; 29 + 30 + struct ocmem_buf { 31 + unsigned long offset; 32 + unsigned long addr; 33 + unsigned long len; 34 + }; 35 + 36 + #if IS_ENABLED(CONFIG_QCOM_OCMEM) 37 + 38 + struct ocmem *of_get_ocmem(struct device *dev); 39 + struct ocmem_buf *ocmem_allocate(struct ocmem *ocmem, enum ocmem_client client, 40 + unsigned long size); 41 + void ocmem_free(struct ocmem *ocmem, enum ocmem_client client, 42 + struct ocmem_buf *buf); 43 + 44 + #else /* IS_ENABLED(CONFIG_QCOM_OCMEM) */ 45 + 46 + static inline struct ocmem *of_get_ocmem(struct device *dev) 47 + { 48 + return ERR_PTR(-ENODEV); 49 + } 50 + 51 + static inline struct ocmem_buf *ocmem_allocate(struct ocmem *ocmem, 52 + enum ocmem_client client, 53 + unsigned long size) 54 + { 55 + return ERR_PTR(-ENODEV); 56 + } 57 + 58 + static inline void ocmem_free(struct ocmem *ocmem, enum ocmem_client client, 59 + struct ocmem_buf *buf) 60 + { 61 + } 62 + 63 + #endif /* IS_ENABLED(CONFIG_QCOM_OCMEM) */ 64 + 65 + #endif /* __OCMEM_H__ */
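The header above is the entire client-facing surface of the new allocator. As a rough illustration (not part of the series itself), the sketch below shows how a hypothetical platform driver might claim the graphics region, assuming its device node carries the "sram" phandle from the binding. The demo_gpu names are invented; only of_get_ocmem(), ocmem_allocate(), ocmem_free() and OCMEM_GRAPHICS come from the header, and the 64K minimum size and alignment are enforced by ocmem_allocate() in the driver above. Driver registration boilerplate is omitted.

/* Hypothetical OCMEM client sketch; names are illustrative, not from the series. */
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/sizes.h>
#include <linux/slab.h>
#include <soc/qcom/ocmem.h>

struct demo_gpu {
	struct ocmem *ocmem;
	struct ocmem_buf *gmem;
};

static int demo_gpu_probe(struct platform_device *pdev)
{
	struct demo_gpu *gpu;

	gpu = devm_kzalloc(&pdev->dev, sizeof(*gpu), GFP_KERNEL);
	if (!gpu)
		return -ENOMEM;

	/* Follows the "sram" phandle; may return -EPROBE_DEFER. */
	gpu->ocmem = of_get_ocmem(&pdev->dev);
	if (IS_ERR(gpu->ocmem))
		return PTR_ERR(gpu->ocmem);

	/* ocmem_allocate() rejects sizes below 64K or not 64K-aligned. */
	gpu->gmem = ocmem_allocate(gpu->ocmem, OCMEM_GRAPHICS, SZ_1M);
	if (IS_ERR(gpu->gmem))
		return PTR_ERR(gpu->gmem);

	dev_info(&pdev->dev, "OCMEM buffer: offset 0x%lx, %lu bytes\n",
		 gpu->gmem->offset, gpu->gmem->len);

	platform_set_drvdata(pdev, gpu);

	return 0;
}

static int demo_gpu_remove(struct platform_device *pdev)
{
	struct demo_gpu *gpu = platform_get_drvdata(pdev);

	ocmem_free(gpu->ocmem, OCMEM_GRAPHICS, gpu->gmem);

	return 0;
}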