Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'drm-misc-next-2024-08-01' of https://gitlab.freedesktop.org/drm/misc/kernel into drm-next

drm-misc-next for v6.12:

UAPI Changes:

virtio:
- Define DRM capset

Cross-subsystem Changes:

dma-buf:
- heaps: Clean up documentation

printk:
- Pass description to kmsg_dump()
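
  The kmsg_dump() change (see the powerpc and um hunks below) replaces the
  bare `enum kmsg_dump_reason` callback argument with a `struct
  kmsg_dump_detail *`. As a rough user-space sketch only — the real types
  live in include/linux/kmsg_dump.h and carry more fields than modeled
  here — a dumper callback migrates like this:

  ```c
  #include <assert.h>
  #include <stdio.h>

  /* Minimal stand-in types; NOT the kernel definitions. */
  enum kmsg_dump_reason { KMSG_DUMP_PANIC = 1, KMSG_DUMP_SHUTDOWN };

  /* New parameter object: the reason (plus, per this series, a
   * description) arrives wrapped in a detail struct. */
  struct kmsg_dump_detail {
          enum kmsg_dump_reason reason;
          const char *description;
  };

  struct kmsg_dumper {
          void (*dump)(struct kmsg_dumper *dumper,
                       struct kmsg_dump_detail *detail);
  };

  static enum kmsg_dump_reason last_reason;

  /* Old style:  void dump(struct kmsg_dumper *, enum kmsg_dump_reason);
   * New style:  the reason is read through detail->reason instead. */
  static void example_dump(struct kmsg_dumper *dumper,
                           struct kmsg_dump_detail *detail)
  {
          (void)dumper;
          /* Skip orderly shutdowns, as e.g. opal-kmsg does. */
          if (detail->reason != KMSG_DUMP_PANIC)
                  return;
          last_reason = detail->reason;
          printf("panic dump: %s\n",
                 detail->description ? detail->description : "(none)");
  }

  int main(void)
  {
          struct kmsg_dumper dumper = { .dump = example_dump };
          struct kmsg_dump_detail detail = {
                  .reason = KMSG_DUMP_PANIC,
                  .description = "example",
          };

          dumper.dump(&dumper, &detail);
          assert(last_reason == KMSG_DUMP_PANIC);
          return 0;
  }
  ```

  Existing dumpers only need to take the struct pointer and dereference
  `detail->reason` where they previously used the enum directly.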

Core Changes:

CI:
- Update IGT tests
- Point upstream repo to GitLab instance

modesetting:
- Introduce Power Saving Policy property for connectors
- Add might_fault() to drm_modeset_lock priming
- Add dynamic per-crtc vblank configuration support

panic:
- Avoid build-time interference with framebuffer console

docs:
- Document Colorspace property

scheduler:
- Remove full_recover from drm_sched_start

TTM:
- Make LRU walk restartable after dropping locks
- Allow direct reclaim to allocate local memory

Driver Changes:

amdgpu:
- Support Power Saving Policy connector property

ast:
- astdp: Support AST2600 with VGA; Clean up HPD

bridge:
- Silence error message on -EPROBE_DEFER
- analogix: Clean up
- bridge-connector: Fix double free
- it6505: Disable interrupt when powered off
- tc358767: Make default DP port preemphasis configurable

gma500:
- Update i2c terminology

ivpu:
- Add MODULE_FIRMWARE()

lcdif:
- Fix pixel clock

loongson:
- Use GEM refcount over TTM's

mgag200:
- Improve BMC handling
- Support VBLANK interrupts

nouveau:
- Refactor and clean up internals
- Use GEM refcount over TTM's

panel:
- Shutdown fixes plus documentation
- Refactor several drivers for better code sharing
- boe-th101mb31ig002: Support for starry-er88577 MIPI-DSI panel plus
DT; Fix porch parameter
- edp: Support AUO B116XTN02.3, AUO B116XAN06.1, AUO B116XAT04.1,
BOE NV140WUM-N41, BOE NV133WUM-N63, BOE NV116WHM-A4D, CMN N116BCA-EA2,
CMN N116BCP-EA2, CSW MNB601LS1-4
- himax-hx8394: Support Microchip AC40T08A MIPI Display panel plus DT
- ilitek-ili9806e: Support Densitron DMT028VGHMCMI-1D TFT plus DT
- jd9365da: Support Melfas lmfbx101117480 MIPI-DSI panel plus DT; Refactor
for code sharing

sti:
- Fix module owner

stm:
- Avoid UAF with managed plane and CRTC helpers
- Fix module owner
- Fix error handling in probe
- Depend on COMMON_CLK
- ltdc: Fix transparency after disabling plane; Remove unused interrupt

tegra:
- Call drm_atomic_helper_shutdown()

v3d:
- Clean up perfmon

vkms:
- Clean up

Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
From: Thomas Zimmermann <tzimmermann@suse.de>
Link: https://patchwork.freedesktop.org/patch/msgid/20240801121406.GA102996@linux.fritz.box

+4717 -6127
-6
Documentation/accel/qaic/qaic.rst
··· 147 147 recent execution of a BO. This allows userspace to construct an end to end 148 148 timeline of the BO processing for a performance analysis. 149 149 150 - DRM_IOCTL_QAIC_PART_DEV 151 - This IOCTL allows userspace to request a duplicate "shadow device". This extra 152 - accelN device is associated with a specific partition of resources on the 153 - AIC100 device and can be used for limiting a process to some subset of 154 - resources. 155 - 156 150 DRM_IOCTL_QAIC_DETACH_SLICE_BO 157 151 This IOCTL allows userspace to remove the slicing information from a BO that 158 152 was originally provided by a call to DRM_IOCTL_QAIC_ATTACH_SLICE_BO. This
+20 -1
Documentation/devicetree/bindings/display/bridge/toshiba,tc358767.yaml
··· 92 92 reference to a valid DPI output or input endpoint node. 93 93 94 94 port@2: 95 - $ref: /schemas/graph.yaml#/properties/port 95 + $ref: /schemas/graph.yaml#/$defs/port-base 96 + unevaluatedProperties: false 96 97 description: | 97 98 eDP/DP output port. The remote endpoint phandle should be a 98 99 reference to a valid eDP panel input endpoint node. This port is 99 100 optional, treated as DP panel if not defined 101 + 102 + properties: 103 + endpoint: 104 + $ref: /schemas/media/video-interfaces.yaml# 105 + unevaluatedProperties: false 106 + 107 + properties: 108 + toshiba,pre-emphasis: 109 + description: 110 + Display port output Pre-Emphasis settings for both DP lanes. 111 + $ref: /schemas/types.yaml#/definitions/uint8-array 112 + minItems: 2 113 + maxItems: 2 114 + items: 115 + enum: 116 + - 0 # No pre-emphasis 117 + - 1 # 3.5dB pre-emphasis 118 + - 2 # 6dB pre-emphasis 100 119 101 120 oneOf: 102 121 - required:
+18 -3
Documentation/devicetree/bindings/display/panel/boe,th101mb31ig002-28a.yaml
··· 9 9 maintainers: 10 10 - Manuel Traut <manut@mecka.net> 11 11 12 - allOf: 13 - - $ref: panel-common.yaml# 14 - 15 12 properties: 16 13 compatible: 17 14 enum: 18 15 # BOE TH101MB31IG002-28A 10.1" WXGA TFT LCD panel 19 16 - boe,th101mb31ig002-28a 17 + # The Starry-er88577 is a 10.1" WXGA TFT-LCD panel 18 + - starry,er88577 20 19 21 20 reg: 22 21 maxItems: 1 23 22 24 23 backlight: true 25 24 enable-gpios: true 25 + reset-gpios: true 26 26 power-supply: true 27 27 port: true 28 28 rotation: true ··· 32 32 - reg 33 33 - enable-gpios 34 34 - power-supply 35 + 36 + allOf: 37 + - $ref: panel-common.yaml# 38 + - if: 39 + properties: 40 + compatible: 41 + # The Starry-er88577 is a 10.1" WXGA TFT-LCD panel 42 + const: starry,er88577 43 + then: 44 + properties: 45 + reset-gpios: false 46 + else: 47 + required: 48 + - reset-gpios 35 49 36 50 additionalProperties: false 37 51 ··· 61 47 reg = <0>; 62 48 backlight = <&backlight_lcd0>; 63 49 enable-gpios = <&gpio 45 GPIO_ACTIVE_HIGH>; 50 + reset-gpios = <&gpio 55 GPIO_ACTIVE_LOW>; 64 51 rotation = <90>; 65 52 power-supply = <&vcc_3v3>; 66 53 port {
+13 -4
Documentation/devicetree/bindings/display/panel/himax,hx8394.yaml
··· 15 15 such as the HannStar HSD060BHW4 720x1440 TFT LCD panel connected with 16 16 a MIPI-DSI video interface. 17 17 18 - allOf: 19 - - $ref: panel-common.yaml# 20 - 21 18 properties: 22 19 compatible: 23 20 items: 24 21 - enum: 25 22 - hannstar,hsd060bhw4 23 + - microchip,ac40t08a-mipi-panel 26 24 - powkiddy,x55-panel 27 25 - const: himax,hx8394 28 26 ··· 44 46 required: 45 47 - compatible 46 48 - reg 47 - - reset-gpios 48 49 - backlight 49 50 - port 50 51 - vcc-supply 51 52 - iovcc-supply 52 53 53 54 additionalProperties: false 55 + 56 + allOf: 57 + - $ref: panel-common.yaml# 58 + - if: 59 + not: 60 + properties: 61 + compatible: 62 + enum: 63 + - microchip,ac40t08a-mipi-panel 64 + then: 65 + required: 66 + - reset-gpios 54 67 55 68 examples: 56 69 - |
+1
Documentation/devicetree/bindings/display/panel/ilitek,ili9806e.yaml
··· 16 16 compatible: 17 17 items: 18 18 - enum: 19 + - densitron,dmt028vghmcmi-1d 19 20 - ortustech,com35h3p70ulc 20 21 - const: ilitek,ili9806e 21 22
+1
Documentation/devicetree/bindings/display/panel/jadard,jd9365da-h3.yaml
··· 18 18 - enum: 19 19 - chongzhou,cz101b4001 20 20 - kingdisplay,kd101ne3-40ti 21 + - melfas,lmfbx101117480 21 22 - radxa,display-10hd-ad001 22 23 - radxa,display-8hd-ad002 23 24 - const: jadard,jd9365da-h3
+14 -17
Documentation/gpu/todo.rst
··· 475 475 As of commit d2aacaf07395 ("drm/panel: Check for already prepared/enabled in 476 476 drm_panel"), we have a check in the drm_panel core to make sure nobody 477 477 double-calls prepare/enable/disable/unprepare. Eventually that should probably 478 - be turned into a WARN_ON() or somehow made louder, but right now we actually 479 - expect it to trigger and so we don't want it to be too loud. 478 + be turned into a WARN_ON() or somehow made louder. 480 479 481 - Specifically, that warning will trigger for panel-edp and panel-simple at 482 - shutdown time because those panels hardcode a call to drm_panel_disable() 483 - and drm_panel_unprepare() at shutdown and remove time that they call regardless 484 - of panel state. On systems with a properly coded DRM modeset driver that 485 - calls drm_atomic_helper_shutdown() this is pretty much guaranteed to cause 486 - the warning to fire. 480 + At the moment, we expect that we may still encounter the warnings in the 481 + drm_panel core when using panel-simple and panel-edp. Since those panel 482 + drivers are used with a lot of different DRM modeset drivers they still 483 + make an extra effort to disable/unprepare the panel themsevles at shutdown 484 + time. Specifically we could still encounter those warnings if the panel 485 + driver gets shutdown() _before_ the DRM modeset driver and the DRM modeset 486 + driver properly calls drm_atomic_helper_shutdown() in its own shutdown() 487 + callback. Warnings could be avoided in such a case by using something like 488 + device links to ensure that the panel gets shutdown() after the DRM modeset 489 + driver. 487 490 488 - Unfortunately we can't safely remove the calls in panel-edp and panel-simple 489 - until we're sure that all DRM modeset drivers that are used with those panels 490 - properly call drm_atomic_helper_shutdown(). 
This TODO item is to validate 491 - that all DRM modeset drivers used with panel-edp and panel-simple properly 492 - call drm_atomic_helper_shutdown() and then remove the calls to 493 - disable/unprepare from those panels. Alternatively, this TODO item could be 494 - removed by convincing stakeholders that those calls are fine and downgrading 495 - the error message in drm_panel_disable() / drm_panel_unprepare() to a 496 - debug-level message. 491 + Once all DRM modeset drivers are known to shutdown properly, the extra 492 + calls to disable/unprepare in remove/shutdown in panel-simple and panel-edp 493 + should be removed and this TODO item marked complete. 497 494 498 495 Contact: Douglas Anderson <dianders@chromium.org> 499 496
+8
MAINTAINERS
··· 1013 1013 T: git https://gitlab.freedesktop.org/agd5f/linux.git 1014 1014 F: drivers/gpu/drm/amd/display/ 1015 1015 1016 + AMD DISPLAY CORE - DML 1017 + M: Chaitanya Dhere <chaitanya.dhere@amd.com> 1018 + M: Jun Lei <jun.lei@amd.com> 1019 + S: Supported 1020 + F: drivers/gpu/drm/amd/display/dc/dml/ 1021 + F: drivers/gpu/drm/amd/display/dc/dml2/ 1022 + 1016 1023 AMD FAM15H PROCESSOR POWER MONITORING DRIVER 1017 1024 M: Huang Rui <ray.huang@amd.com> 1018 1025 L: linux-hwmon@vger.kernel.org ··· 6667 6660 F: drivers/dma-buf/heaps/* 6668 6661 F: include/linux/dma-heap.h 6669 6662 F: include/uapi/linux/dma-heap.h 6663 + F: tools/testing/selftests/dmabuf-heaps/ 6670 6664 6671 6665 DMC FREQUENCY DRIVER FOR SAMSUNG EXYNOS5422 6672 6666 M: Lukasz Luba <lukasz.luba@arm.com>
+4 -4
arch/powerpc/kernel/nvram_64.c
··· 73 73 }; 74 74 75 75 static void oops_to_nvram(struct kmsg_dumper *dumper, 76 - enum kmsg_dump_reason reason); 76 + struct kmsg_dump_detail *detail); 77 77 78 78 static struct kmsg_dumper nvram_kmsg_dumper = { 79 79 .dump = oops_to_nvram ··· 643 643 * partition. If that's too much, go back and capture uncompressed text. 644 644 */ 645 645 static void oops_to_nvram(struct kmsg_dumper *dumper, 646 - enum kmsg_dump_reason reason) 646 + struct kmsg_dump_detail *detail) 647 647 { 648 648 struct oops_log_info *oops_hdr = (struct oops_log_info *)oops_buf; 649 649 static unsigned int oops_count = 0; ··· 655 655 unsigned int err_type = ERR_TYPE_KERNEL_PANIC_GZ; 656 656 int rc = -1; 657 657 658 - switch (reason) { 658 + switch (detail->reason) { 659 659 case KMSG_DUMP_SHUTDOWN: 660 660 /* These are almost always orderly shutdowns. */ 661 661 return; ··· 671 671 break; 672 672 default: 673 673 pr_err("%s: ignoring unrecognized KMSG_DUMP_* reason %d\n", 674 - __func__, (int) reason); 674 + __func__, (int) detail->reason); 675 675 return; 676 676 } 677 677
+2 -2
arch/powerpc/platforms/powernv/opal-kmsg.c
··· 20 20 * message, it just ensures that OPAL completely flushes the console buffer. 21 21 */ 22 22 static void kmsg_dump_opal_console_flush(struct kmsg_dumper *dumper, 23 - enum kmsg_dump_reason reason) 23 + struct kmsg_dump_detail *detail) 24 24 { 25 25 /* 26 26 * Outside of a panic context the pollers will continue to run, 27 27 * so we don't need to do any special flushing. 28 28 */ 29 - if (reason != KMSG_DUMP_PANIC) 29 + if (detail->reason != KMSG_DUMP_PANIC) 30 30 return; 31 31 32 32 opal_flush_console(0);
+1 -1
arch/um/kernel/kmsg_dump.c
··· 8 8 #include <os.h> 9 9 10 10 static void kmsg_dumper_stdout(struct kmsg_dumper *dumper, 11 - enum kmsg_dump_reason reason) 11 + struct kmsg_dump_detail *detail) 12 12 { 13 13 static struct kmsg_dump_iter iter; 14 14 static DEFINE_SPINLOCK(lock);
+4
drivers/accel/ivpu/ivpu_fw.c
··· 60 60 { IVPU_HW_IP_40XX, "intel/vpu/vpu_40xx_v0.0.bin" }, 61 61 }; 62 62 63 + /* Production fw_names from the table above */ 64 + MODULE_FIRMWARE("intel/vpu/vpu_37xx_v0.0.bin"); 65 + MODULE_FIRMWARE("intel/vpu/vpu_40xx_v0.0.bin"); 66 + 63 67 static int ivpu_fw_request(struct ivpu_device *vdev) 64 68 { 65 69 int ret = -ENOENT;
+18 -15
drivers/dma-buf/dma-heap.c
··· 7 7 */ 8 8 9 9 #include <linux/cdev.h> 10 - #include <linux/debugfs.h> 11 10 #include <linux/device.h> 12 11 #include <linux/dma-buf.h> 13 - #include <linux/err.h> 14 - #include <linux/xarray.h> 15 - #include <linux/list.h> 16 - #include <linux/slab.h> 17 - #include <linux/nospec.h> 18 - #include <linux/uaccess.h> 19 - #include <linux/syscalls.h> 20 12 #include <linux/dma-heap.h> 13 + #include <linux/err.h> 14 + #include <linux/list.h> 15 + #include <linux/nospec.h> 16 + #include <linux/syscalls.h> 17 + #include <linux/uaccess.h> 18 + #include <linux/xarray.h> 21 19 #include <uapi/linux/dma-heap.h> 22 20 23 21 #define DEVNAME "dma_heap" ··· 26 28 * struct dma_heap - represents a dmabuf heap in the system 27 29 * @name: used for debugging/device-node name 28 30 * @ops: ops struct for this heap 29 - * @heap_devt heap device node 30 - * @list list head connecting to list of heaps 31 - * @heap_cdev heap char device 31 + * @priv: private data for this heap 32 + * @heap_devt: heap device node 33 + * @list: list head connecting to list of heaps 34 + * @heap_cdev: heap char device 32 35 * 33 36 * Represents a heap of memory from which buffers can be made. 34 37 */ ··· 192 193 }; 193 194 194 195 /** 195 - * dma_heap_get_drvdata() - get per-subdriver data for the heap 196 + * dma_heap_get_drvdata - get per-heap driver data 196 197 * @heap: DMA-Heap to retrieve private data for 197 198 * 198 199 * Returns: 199 - * The per-subdriver data for the heap. 200 + * The per-heap data for the heap. 200 201 */ 201 202 void *dma_heap_get_drvdata(struct dma_heap *heap) 202 203 { ··· 204 205 } 205 206 206 207 /** 207 - * dma_heap_get_name() - get heap name 208 - * @heap: DMA-Heap to retrieve private data for 208 + * dma_heap_get_name - get heap name 209 + * @heap: DMA-Heap to retrieve the name of 209 210 * 210 211 * Returns: 211 212 * The char* for the heap name. 
··· 215 216 return heap->name; 216 217 } 217 218 219 + /** 220 + * dma_heap_add - adds a heap to dmabuf heaps 221 + * @exp_info: information needed to register this heap 222 + */ 218 223 struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info) 219 224 { 220 225 struct dma_heap *heap, *h, *err_ret;
+1 -1
drivers/gpu/drm/Kconfig
··· 107 107 108 108 config DRM_PANIC 109 109 bool "Display a user-friendly message when a kernel panic occurs" 110 - depends on DRM && !(FRAMEBUFFER_CONSOLE && VT_CONSOLE) 110 + depends on DRM 111 111 select FONT_SUPPORT 112 112 help 113 113 Enable a drm panic handler, which will display a user-friendly message
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_arcturus.c
··· 300 300 if (r) 301 301 goto out; 302 302 } else { 303 - drm_sched_start(&ring->sched, false); 303 + drm_sched_start(&ring->sched); 304 304 } 305 305 } 306 306
+2 -2
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
··· 5879 5879 if (!amdgpu_ring_sched_ready(ring)) 5880 5880 continue; 5881 5881 5882 - drm_sched_start(&ring->sched, true); 5882 + drm_sched_start(&ring->sched); 5883 5883 } 5884 5884 5885 5885 if (!drm_drv_uses_atomic_modeset(adev_to_drm(tmp_adev)) && !job_signaled) ··· 6374 6374 if (!amdgpu_ring_sched_ready(ring)) 6375 6375 continue; 6376 6376 6377 - drm_sched_start(&ring->sched, true); 6377 + drm_sched_start(&ring->sched); 6378 6378 } 6379 6379 6380 6380 amdgpu_device_unset_mp1_state(adev);
+4
drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
··· 1407 1407 "dither", 1408 1408 amdgpu_dither_enum_list, sz); 1409 1409 1410 + if (adev->dc_enabled) 1411 + drm_mode_create_power_saving_policy_property(adev_to_drm(adev), 1412 + DRM_MODE_POWER_SAVING_POLICY_ALL); 1413 + 1410 1414 return 0; 1411 1415 } 1412 1416
+4
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
··· 2421 2421 if (r) 2422 2422 return r; 2423 2423 2424 + ttm_lru_bulk_move_init(&vm->lru_bulk_move); 2425 + 2424 2426 vm->is_compute_context = false; 2425 2427 2426 2428 vm->use_cpu_for_update = !!(adev->vm_manager.vm_update_mode & ··· 2487 2485 error_free_delayed: 2488 2486 dma_fence_put(vm->last_tlb_flush); 2489 2487 dma_fence_put(vm->last_unlocked); 2488 + ttm_lru_bulk_move_fini(&adev->mman.bdev, &vm->lru_bulk_move); 2490 2489 amdgpu_vm_fini_entities(vm); 2491 2490 2492 2491 return r; ··· 2644 2641 } 2645 2642 } 2646 2643 2644 + ttm_lru_bulk_move_fini(&adev->mman.bdev, &vm->lru_bulk_move); 2647 2645 } 2648 2646 2649 2647 /**
+47 -5
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
··· 6725 6725 } else if (property == adev->mode_info.underscan_property) { 6726 6726 dm_new_state->underscan_enable = val; 6727 6727 ret = 0; 6728 + } else if (property == dev->mode_config.power_saving_policy) { 6729 + dm_new_state->abm_forbidden = val & DRM_MODE_REQUIRE_COLOR_ACCURACY; 6730 + dm_new_state->abm_level = (dm_new_state->abm_forbidden || 6731 + !dm_old_state->abm_level) ? 6732 + ABM_LEVEL_IMMEDIATE_DISABLE : 6733 + dm_old_state->abm_level; 6734 + dm_new_state->psr_forbidden = val & DRM_MODE_REQUIRE_LOW_LATENCY; 6735 + ret = 0; 6728 6736 } 6729 6737 6730 6738 return ret; ··· 6775 6767 } else if (property == adev->mode_info.underscan_property) { 6776 6768 *val = dm_state->underscan_enable; 6777 6769 ret = 0; 6770 + } else if (property == dev->mode_config.power_saving_policy) { 6771 + *val = 0; 6772 + if (dm_state->psr_forbidden) 6773 + *val |= DRM_MODE_REQUIRE_LOW_LATENCY; 6774 + if (dm_state->abm_forbidden) 6775 + *val |= DRM_MODE_REQUIRE_COLOR_ACCURACY; 6776 + ret = 0; 6778 6777 } 6779 6778 6780 6779 return ret; ··· 6808 6793 u8 val; 6809 6794 6810 6795 drm_modeset_lock(&dev->mode_config.connection_mutex, NULL); 6811 - val = to_dm_connector_state(connector->state)->abm_level == 6812 - ABM_LEVEL_IMMEDIATE_DISABLE ? 0 : 6813 - to_dm_connector_state(connector->state)->abm_level; 6796 + if (to_dm_connector_state(connector->state)->abm_forbidden) 6797 + val = 0; 6798 + else 6799 + val = to_dm_connector_state(connector->state)->abm_level == 6800 + ABM_LEVEL_IMMEDIATE_DISABLE ? 
0 : 6801 + to_dm_connector_state(connector->state)->abm_level; 6814 6802 drm_modeset_unlock(&dev->mode_config.connection_mutex); 6815 6803 6816 6804 return sysfs_emit(buf, "%u\n", val); ··· 6837 6819 return -EINVAL; 6838 6820 6839 6821 drm_modeset_lock(&dev->mode_config.connection_mutex, NULL); 6840 - to_dm_connector_state(connector->state)->abm_level = val ?: 6841 - ABM_LEVEL_IMMEDIATE_DISABLE; 6822 + if (to_dm_connector_state(connector->state)->abm_forbidden) 6823 + ret = -EBUSY; 6824 + else 6825 + to_dm_connector_state(connector->state)->abm_level = val ?: 6826 + ABM_LEVEL_IMMEDIATE_DISABLE; 6842 6827 drm_modeset_unlock(&dev->mode_config.connection_mutex); 6828 + 6829 + if (ret) 6830 + return ret; 6843 6831 6844 6832 drm_kms_helper_hotplug_event(dev); 6845 6833 ··· 8039 8015 8040 8016 aconnector->base.state->max_bpc = 16; 8041 8017 aconnector->base.state->max_requested_bpc = aconnector->base.state->max_bpc; 8018 + 8019 + if (connector_type == DRM_MODE_CONNECTOR_eDP && 8020 + (dc_is_dmcu_initialized(adev->dm.dc) || 8021 + adev->dm.dc->ctx->dmub_srv)) { 8022 + drm_object_attach_property(&aconnector->base.base, 8023 + dm->ddev->mode_config.power_saving_policy, 8024 + 0); 8025 + } 8042 8026 8043 8027 if (connector_type == DRM_MODE_CONNECTOR_HDMIA) { 8044 8028 /* Content Type is currently only implemented for HDMI. 
*/ ··· 9748 9716 for_each_oldnew_connector_in_state(state, connector, old_con_state, new_con_state, i) { 9749 9717 struct dm_connector_state *dm_new_con_state = to_dm_connector_state(new_con_state); 9750 9718 struct dm_connector_state *dm_old_con_state = to_dm_connector_state(old_con_state); 9719 + struct amdgpu_dm_connector *aconnector = to_amdgpu_dm_connector(connector); 9751 9720 struct amdgpu_crtc *acrtc = to_amdgpu_crtc(dm_new_con_state->base.crtc); 9752 9721 struct dc_surface_update *dummy_updates; 9753 9722 struct dc_stream_update stream_update; ··· 9800 9767 if (hdr_changed) { 9801 9768 fill_hdr_info_packet(new_con_state, &hdr_packet); 9802 9769 stream_update.hdr_static_metadata = &hdr_packet; 9770 + } 9771 + 9772 + aconnector->disallow_edp_enter_psr = dm_new_con_state->psr_forbidden; 9773 + 9774 + /* immediately disable PSR if disallowed */ 9775 + if (aconnector->disallow_edp_enter_psr) { 9776 + mutex_lock(&dm->dc_lock); 9777 + amdgpu_dm_psr_disable(dm_new_crtc_state->stream); 9778 + mutex_unlock(&dm->dc_lock); 9803 9779 } 9804 9780 9805 9781 status = dc_stream_get_status(dm_new_crtc_state->stream);
+2
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
··· 915 915 bool underscan_enable; 916 916 bool freesync_capable; 917 917 bool update_hdcp; 918 + bool abm_forbidden; 919 + bool psr_forbidden; 918 920 uint8_t abm_level; 919 921 int vcpi_slots; 920 922 uint64_t pbn;
+75 -102
drivers/gpu/drm/ast/ast_dp.c
··· 9 9 10 10 bool ast_astdp_is_connected(struct ast_device *ast) 11 11 { 12 - if (!ast_get_index_reg_mask(ast, AST_IO_VGACRI, 0xD1, ASTDP_MCU_FW_EXECUTING)) 13 - return false; 14 - if (!ast_get_index_reg_mask(ast, AST_IO_VGACRI, 0xDF, ASTDP_HPD)) 15 - return false; 16 - if (!ast_get_index_reg_mask(ast, AST_IO_VGACRI, 0xDC, ASTDP_LINK_SUCCESS)) 12 + if (!ast_get_index_reg_mask(ast, AST_IO_VGACRI, 0xDF, AST_IO_VGACRDF_HPD)) 17 13 return false; 18 14 return true; 19 15 } ··· 17 21 int ast_astdp_read_edid(struct drm_device *dev, u8 *ediddata) 18 22 { 19 23 struct ast_device *ast = to_ast_device(dev); 20 - u8 i = 0, j = 0; 24 + int ret = 0; 25 + u8 i; 21 26 22 - /* 23 - * CRD1[b5]: DP MCU FW is executing 24 - * CRDC[b0]: DP link success 25 - * CRDF[b0]: DP HPD 26 - * CRE5[b0]: Host reading EDID process is done 27 - */ 28 - if (!(ast_get_index_reg_mask(ast, AST_IO_VGACRI, 0xD1, ASTDP_MCU_FW_EXECUTING) && 29 - ast_get_index_reg_mask(ast, AST_IO_VGACRI, 0xDC, ASTDP_LINK_SUCCESS) && 30 - ast_get_index_reg_mask(ast, AST_IO_VGACRI, 0xDF, ASTDP_HPD) && 31 - ast_get_index_reg_mask(ast, AST_IO_VGACRI, 0xE5, 32 - ASTDP_HOST_EDID_READ_DONE_MASK))) { 33 - goto err_astdp_edid_not_ready; 34 - } 35 - 36 - ast_set_index_reg_mask(ast, AST_IO_VGACRI, 0xE5, (u8) ~ASTDP_HOST_EDID_READ_DONE_MASK, 37 - 0x00); 27 + /* Start reading EDID data */ 28 + ast_set_index_reg_mask(ast, AST_IO_VGACRI, 0xe5, (u8)~AST_IO_VGACRE5_EDID_READ_DONE, 0x00); 38 29 39 30 for (i = 0; i < 32; i++) { 31 + unsigned int j; 32 + 40 33 /* 41 34 * CRE4[7:0]: Read-Pointer for EDID (Unit: 4bytes); valid range: 0~64 42 35 */ 43 - ast_set_index_reg_mask(ast, AST_IO_VGACRI, 0xE4, 44 - ASTDP_AND_CLEAR_MASK, (u8)i); 45 - j = 0; 36 + ast_set_index_reg(ast, AST_IO_VGACRI, 0xe4, i); 46 37 47 38 /* 48 39 * CRD7[b0]: valid flag for EDID 49 40 * CRD6[b0]: mirror read pointer for EDID 50 41 */ 51 - while ((ast_get_index_reg_mask(ast, AST_IO_VGACRI, 0xD7, 52 - ASTDP_EDID_VALID_FLAG_MASK) != 0x01) || 53 - (ast_get_index_reg_mask(ast, 
AST_IO_VGACRI, 0xD6, 54 - ASTDP_EDID_READ_POINTER_MASK) != i)) { 42 + for (j = 0; j < 200; ++j) { 43 + u8 vgacrd7, vgacrd6; 44 + 55 45 /* 56 46 * Delay are getting longer with each retry. 57 - * 1. The Delays are often 2 loops when users request "Display Settings" 47 + * 48 + * 1. No delay on first try 49 + * 2. The Delays are often 2 loops when users request "Display Settings" 58 50 * of right-click of mouse. 59 - * 2. The Delays are often longer a lot when system resume from S3/S4. 51 + * 3. The Delays are often longer a lot when system resume from S3/S4. 60 52 */ 61 - mdelay(j+1); 53 + if (j) 54 + mdelay(j + 1); 62 55 63 - if (!(ast_get_index_reg_mask(ast, AST_IO_VGACRI, 0xD1, 64 - ASTDP_MCU_FW_EXECUTING) && 65 - ast_get_index_reg_mask(ast, AST_IO_VGACRI, 0xDC, 66 - ASTDP_LINK_SUCCESS) && 67 - ast_get_index_reg_mask(ast, AST_IO_VGACRI, 0xDF, ASTDP_HPD))) { 68 - goto err_astdp_jump_out_loop_of_edid; 56 + /* Wait for EDID offset to show up in mirror register */ 57 + vgacrd7 = ast_get_index_reg(ast, AST_IO_VGACRI, 0xd7); 58 + if (vgacrd7 & AST_IO_VGACRD7_EDID_VALID_FLAG) { 59 + vgacrd6 = ast_get_index_reg(ast, AST_IO_VGACRI, 0xd6); 60 + if (vgacrd6 == i) 61 + break; 69 62 } 70 - 71 - j++; 72 - if (j > 200) 73 - goto err_astdp_jump_out_loop_of_edid; 63 + } 64 + if (j == 200) { 65 + ret = -EBUSY; 66 + goto out; 74 67 } 75 68 76 - *(ediddata) = ast_get_index_reg_mask(ast, AST_IO_VGACRI, 77 - 0xD8, ASTDP_EDID_READ_DATA_MASK); 78 - *(ediddata + 1) = ast_get_index_reg_mask(ast, AST_IO_VGACRI, 0xD9, 79 - ASTDP_EDID_READ_DATA_MASK); 80 - *(ediddata + 2) = ast_get_index_reg_mask(ast, AST_IO_VGACRI, 0xDA, 81 - ASTDP_EDID_READ_DATA_MASK); 82 - *(ediddata + 3) = ast_get_index_reg_mask(ast, AST_IO_VGACRI, 0xDB, 83 - ASTDP_EDID_READ_DATA_MASK); 69 + ediddata[0] = ast_get_index_reg(ast, AST_IO_VGACRI, 0xd8); 70 + ediddata[1] = ast_get_index_reg(ast, AST_IO_VGACRI, 0xd9); 71 + ediddata[2] = ast_get_index_reg(ast, AST_IO_VGACRI, 0xda); 72 + ediddata[3] = ast_get_index_reg(ast, 
AST_IO_VGACRI, 0xdb); 84 73 85 74 if (i == 31) { 86 75 /* ··· 77 96 * The Bytes-126 indicates the Number of extensions to 78 97 * follow. 0 represents noextensions. 79 98 */ 80 - *(ediddata + 3) = *(ediddata + 3) + *(ediddata + 2); 81 - *(ediddata + 2) = 0; 99 + ediddata[3] = ediddata[3] + ediddata[2]; 100 + ediddata[2] = 0; 82 101 } 83 102 84 103 ediddata += 4; 85 104 } 86 105 87 - ast_set_index_reg_mask(ast, AST_IO_VGACRI, 0xE5, (u8) ~ASTDP_HOST_EDID_READ_DONE_MASK, 88 - ASTDP_HOST_EDID_READ_DONE); 106 + out: 107 + /* Signal end of reading */ 108 + ast_set_index_reg_mask(ast, AST_IO_VGACRI, 0xe5, (u8)~AST_IO_VGACRE5_EDID_READ_DONE, 109 + AST_IO_VGACRE5_EDID_READ_DONE); 89 110 90 - return 0; 91 - 92 - err_astdp_jump_out_loop_of_edid: 93 - ast_set_index_reg_mask(ast, AST_IO_VGACRI, 0xE5, 94 - (u8) ~ASTDP_HOST_EDID_READ_DONE_MASK, 95 - ASTDP_HOST_EDID_READ_DONE); 96 - return (~(j+256) + 1); 97 - 98 - err_astdp_edid_not_ready: 99 - if (!(ast_get_index_reg_mask(ast, AST_IO_VGACRI, 0xD1, ASTDP_MCU_FW_EXECUTING))) 100 - return (~0xD1 + 1); 101 - if (!(ast_get_index_reg_mask(ast, AST_IO_VGACRI, 0xDC, ASTDP_LINK_SUCCESS))) 102 - return (~0xDC + 1); 103 - if (!(ast_get_index_reg_mask(ast, AST_IO_VGACRI, 0xDF, ASTDP_HPD))) 104 - return (~0xDF + 1); 105 - if (!(ast_get_index_reg_mask(ast, AST_IO_VGACRI, 0xE5, ASTDP_HOST_EDID_READ_DONE_MASK))) 106 - return (~0xE5 + 1); 107 - 108 - return 0; 111 + return ret; 109 112 } 110 113 111 114 /* 112 115 * Launch Aspeed DP 113 116 */ 114 - void ast_dp_launch(struct drm_device *dev) 117 + int ast_dp_launch(struct ast_device *ast) 115 118 { 116 - u32 i = 0; 117 - u8 bDPExecute = 1; 118 - struct ast_device *ast = to_ast_device(dev); 119 + struct drm_device *dev = &ast->base; 120 + unsigned int i = 10; 119 121 120 - // Wait one second then timeout. 
121 - while (ast_get_index_reg_mask(ast, AST_IO_VGACRI, 0xD1, ASTDP_MCU_FW_EXECUTING) != 122 - ASTDP_MCU_FW_EXECUTING) { 123 - i++; 124 - // wait 100 ms 125 - msleep(100); 122 + while (i) { 123 + u8 vgacrd1 = ast_get_index_reg(ast, AST_IO_VGACRI, 0xd1); 126 124 127 - if (i >= 10) { 128 - // DP would not be ready. 129 - bDPExecute = 0; 125 + if (vgacrd1 & AST_IO_VGACRD1_MCU_FW_EXECUTING) 130 126 break; 131 - } 127 + --i; 128 + msleep(100); 129 + } 130 + if (!i) { 131 + drm_err(dev, "Wait DPMCU executing timeout\n"); 132 + return -ENODEV; 132 133 } 133 134 134 - if (!bDPExecute) 135 - drm_err(dev, "Wait DPMCU executing timeout\n"); 135 + ast_set_index_reg_mask(ast, AST_IO_VGACRI, 0xe5, 136 + (u8) ~AST_IO_VGACRE5_EDID_READ_DONE, 137 + AST_IO_VGACRE5_EDID_READ_DONE); 136 138 137 - ast_set_index_reg_mask(ast, AST_IO_VGACRI, 0xE5, 138 - (u8) ~ASTDP_HOST_EDID_READ_DONE_MASK, 139 - ASTDP_HOST_EDID_READ_DONE); 139 + return 0; 140 140 } 141 141 142 142 bool ast_dp_power_is_on(struct ast_device *ast) ··· 143 181 ast_set_index_reg_mask(ast, AST_IO_VGACRI, 0xE3, (u8) ~AST_DP_PHY_SLEEP, bE3); 144 182 } 145 183 184 + void ast_dp_link_training(struct ast_device *ast) 185 + { 186 + struct drm_device *dev = &ast->base; 187 + unsigned int i = 10; 146 188 189 + while (i--) { 190 + u8 vgacrdc = ast_get_index_reg(ast, AST_IO_VGACRI, 0xdc); 191 + 192 + if (vgacrdc & AST_IO_VGACRDC_LINK_SUCCESS) 193 + break; 194 + if (i) 195 + msleep(100); 196 + } 197 + if (!i) 198 + drm_err(dev, "Link training failed\n"); 199 + } 147 200 148 201 void ast_dp_set_on_off(struct drm_device *dev, bool on) 149 202 { ··· 169 192 // Video On/Off 170 193 ast_set_index_reg_mask(ast, AST_IO_VGACRI, 0xE3, (u8) ~AST_DP_VIDEO_ENABLE, on); 171 194 172 - // If DP plug in and link successful then check video on / off status 173 - if (ast_get_index_reg_mask(ast, AST_IO_VGACRI, 0xDC, ASTDP_LINK_SUCCESS) && 174 - ast_get_index_reg_mask(ast, AST_IO_VGACRI, 0xDF, ASTDP_HPD)) { 175 - video_on_off <<= 4; 176 - while 
(ast_get_index_reg_mask(ast, AST_IO_VGACRI, 0xDF, 195 + video_on_off <<= 4; 196 + while (ast_get_index_reg_mask(ast, AST_IO_VGACRI, 0xDF, 177 197 ASTDP_MIRROR_VIDEO_ENABLE) != video_on_off) { 178 - // wait 1 ms 179 - mdelay(1); 180 - if (++i > 200) 181 - break; 182 - } 198 + // wait 1 ms 199 + mdelay(1); 200 + if (++i > 200) 201 + break; 183 202 } 184 203 } 185 204
+2 -1
drivers/gpu/drm/ast/ast_drv.h
··· 471 471 /* aspeed DP */ 472 472 bool ast_astdp_is_connected(struct ast_device *ast); 473 473 int ast_astdp_read_edid(struct drm_device *dev, u8 *ediddata); 474 - void ast_dp_launch(struct drm_device *dev); 474 + int ast_dp_launch(struct ast_device *ast); 475 475 bool ast_dp_power_is_on(struct ast_device *ast); 476 476 void ast_dp_power_on_off(struct drm_device *dev, bool no); 477 + void ast_dp_link_training(struct ast_device *ast); 477 478 void ast_dp_set_on_off(struct drm_device *dev, bool no); 478 479 void ast_dp_set_mode(struct drm_crtc *crtc, struct ast_vbios_mode_info *vbios_mode); 479 480
+4 -2
drivers/gpu/drm/ast/ast_main.c
··· 115 115 } else if (IS_AST_GEN7(ast)) { 116 116 if (ast_get_index_reg_mask(ast, AST_IO_VGACRI, 0xD1, TX_TYPE_MASK) == 117 117 ASTDP_DPMCU_TX) { 118 - ast->tx_chip_types = AST_TX_ASTDP_BIT; 119 - ast_dp_launch(&ast->base); 118 + int ret = ast_dp_launch(ast); 119 + 120 + if (!ret) 121 + ast->tx_chip_types = AST_TX_ASTDP_BIT; 120 122 } 121 123 } 122 124
+2
drivers/gpu/drm/ast/ast_mode.c
··· 1622 1622 struct ast_device *ast = to_ast_device(dev); 1623 1623 1624 1624 ast_dp_power_on_off(dev, AST_DP_POWER_ON); 1625 + ast_dp_link_training(ast); 1626 + 1625 1627 ast_wait_for_vretrace(ast); 1626 1628 ast_dp_set_on_off(dev, 1); 1627 1629 }
+1 -1
drivers/gpu/drm/ast/ast_post.c
··· 351 351 352 352 if (IS_AST_GEN7(ast)) { 353 353 if (ast->tx_chip_types & AST_TX_ASTDP_BIT) 354 - ast_dp_launch(dev); 354 + ast_dp_launch(ast); 355 355 } else if (ast->config_mode == ast_use_p2a) { 356 356 if (IS_AST_GEN6(ast)) 357 357 ast_post_chip_2500(dev);
+6 -16
drivers/gpu/drm/ast/ast_reg.h
··· 37 37 #define AST_IO_VGACRCB_HWC_16BPP BIT(0) /* set: ARGB4444, cleared: 2bpp palette */ 38 38 #define AST_IO_VGACRCB_HWC_ENABLED BIT(1) 39 39 40 + #define AST_IO_VGACRD1_MCU_FW_EXECUTING BIT(5) 41 + #define AST_IO_VGACRD7_EDID_VALID_FLAG BIT(0) 42 + #define AST_IO_VGACRDC_LINK_SUCCESS BIT(0) 43 + #define AST_IO_VGACRDF_HPD BIT(0) 44 + #define AST_IO_VGACRE5_EDID_READ_DONE BIT(0) 45 + 40 46 #define AST_IO_VGAIR1_R (0x5A) 41 47 #define AST_IO_VGAIR1_VREFRESH BIT(3) 42 48 ··· 73 67 #define AST_DP_VIDEO_ENABLE BIT(0) 74 68 75 69 /* 76 - * CRD1[b5]: DP MCU FW is executing 77 - * CRDC[b0]: DP link success 78 - * CRDF[b0]: DP HPD 79 - * CRE5[b0]: Host reading EDID process is done 80 - */ 81 - #define ASTDP_MCU_FW_EXECUTING BIT(5) 82 - #define ASTDP_LINK_SUCCESS BIT(0) 83 - #define ASTDP_HPD BIT(0) 84 - #define ASTDP_HOST_EDID_READ_DONE BIT(0) 85 - #define ASTDP_HOST_EDID_READ_DONE_MASK GENMASK(0, 0) 86 - 87 - /* 88 70 * CRDF[b4]: Mirror of AST_DP_VIDEO_ENABLE 89 71 * Precondition: A. ~AST_DP_PHY_SLEEP && 90 72 * B. DP_HPD && 91 73 * C. DP_LINK_SUCCESS 92 74 */ 93 75 #define ASTDP_MIRROR_VIDEO_ENABLE BIT(4) 94 - 95 - #define ASTDP_EDID_READ_POINTER_MASK GENMASK(7, 0) 96 - #define ASTDP_EDID_VALID_FLAG_MASK GENMASK(0, 0) 97 - #define ASTDP_EDID_READ_DATA_MASK GENMASK(7, 0) 98 76 99 77 /* 100 78 * ASTDP setmode registers:
-5
drivers/gpu/drm/bridge/analogix/analogix_dp_core.c
··· 36 36 37 37 static const bool verify_fast_training; 38 38 39 - struct bridge_init { 40 - struct i2c_client *client; 41 - struct device_node *node; 42 - }; 43 - 44 39 static void analogix_dp_init_dp(struct analogix_dp_device *dp) 45 40 { 46 41 analogix_dp_reset(dp);
+12 -5
drivers/gpu/drm/bridge/ite-it6505.c
··· 460 460 bool enable_drv_hold; 461 461 462 462 const struct drm_edid *cached_edid; 463 + 464 + int irq; 463 465 }; 464 466 465 467 struct it6505_step_train_para { ··· 2626 2624 it6505_init(it6505); 2627 2625 it6505_lane_off(it6505); 2628 2626 2627 + enable_irq(it6505->irq); 2628 + 2629 2629 return 0; 2630 2630 } 2631 2631 ··· 2643 2639 DRM_DEV_DEBUG_DRIVER(dev, "power had been already off"); 2644 2640 return 0; 2645 2641 } 2642 + 2643 + disable_irq_nosync(it6505->irq); 2646 2644 2647 2645 if (pdata->gpiod_reset) 2648 2646 gpiod_set_value_cansleep(pdata->gpiod_reset, 0); ··· 3395 3389 struct it6505 *it6505; 3396 3390 struct device *dev = &client->dev; 3397 3391 struct extcon_dev *extcon; 3398 - int err, intp_irq; 3392 + int err; 3399 3393 3400 3394 it6505 = devm_kzalloc(&client->dev, sizeof(*it6505), GFP_KERNEL); 3401 3395 if (!it6505) ··· 3436 3430 3437 3431 it6505_parse_dt(it6505); 3438 3432 3439 - intp_irq = client->irq; 3433 + it6505->irq = client->irq; 3440 3434 3441 - if (!intp_irq) { 3435 + if (!it6505->irq) { 3442 3436 dev_err(dev, "Failed to get INTP IRQ"); 3443 3437 err = -ENODEV; 3444 3438 return err; 3445 3439 } 3446 3440 3447 - err = devm_request_threaded_irq(&client->dev, intp_irq, NULL, 3441 + err = devm_request_threaded_irq(&client->dev, it6505->irq, NULL, 3448 3442 it6505_int_threaded_handler, 3449 - IRQF_TRIGGER_LOW | IRQF_ONESHOT, 3443 + IRQF_TRIGGER_LOW | IRQF_ONESHOT | 3444 + IRQF_NO_AUTOEN, 3450 3445 "it6505-intp", it6505); 3451 3446 if (err) { 3452 3447 dev_err(dev, "Failed to request INTP threaded IRQ: %d", err);
+6 -1
drivers/gpu/drm/bridge/synopsys/dw-mipi-dsi.c
··· 722 722 723 723 static void dw_mipi_dsi_packet_handler_config(struct dw_mipi_dsi *dsi) 724 724 { 725 - dsi_write(dsi, DSI_PCKHDL_CFG, CRC_RX_EN | ECC_RX_EN | BTA_EN); 725 + u32 val = CRC_RX_EN | ECC_RX_EN | BTA_EN | EOTP_TX_EN; 726 + 727 + if (dsi->mode_flags & MIPI_DSI_MODE_NO_EOT_PACKET) 728 + val &= ~EOTP_TX_EN; 729 + 730 + dsi_write(dsi, DSI_PCKHDL_CFG, val); 726 731 } 727 732 728 733 static void dw_mipi_dsi_video_packet_config(struct dw_mipi_dsi *dsi,
+38 -7
drivers/gpu/drm/bridge/tc358767.c
··· 241 241 242 242 /* Link Training */ 243 243 #define DP0_SRCCTRL 0x06a0 244 + #define DP0_SRCCTRL_PRE1 GENMASK(29, 28) 245 + #define DP0_SRCCTRL_SWG1 GENMASK(25, 24) 246 + #define DP0_SRCCTRL_PRE0 GENMASK(21, 20) 247 + #define DP0_SRCCTRL_SWG0 GENMASK(17, 16) 244 248 #define DP0_SRCCTRL_SCRMBLDIS BIT(13) 245 249 #define DP0_SRCCTRL_EN810B BIT(12) 246 250 #define DP0_SRCCTRL_NOTP (0 << 8) ··· 282 278 #define AUDIFDATA6 0x0720 /* DP0 Audio Info Frame Bytes 27 to 24 */ 283 279 284 280 #define DP1_SRCCTRL 0x07a0 /* DP1 Control Register */ 281 + #define DP1_SRCCTRL_PRE GENMASK(21, 20) 282 + #define DP1_SRCCTRL_SWG GENMASK(17, 16) 285 283 286 284 /* PHY */ 287 285 #define DP_PHY_CTRL 0x0800 ··· 375 369 376 370 u32 rev; 377 371 u8 assr; 372 + u8 pre_emphasis[2]; 378 373 379 374 struct gpio_desc *sd_gpio; 380 375 struct gpio_desc *reset_gpio; ··· 1097 1090 return ret; 1098 1091 } 1099 1092 1100 - ret = regmap_write(tc->regmap, DP0_SRCCTRL, tc_srcctrl(tc)); 1093 + ret = regmap_write(tc->regmap, DP0_SRCCTRL, 1094 + tc_srcctrl(tc) | 1095 + FIELD_PREP(DP0_SRCCTRL_PRE0, tc->pre_emphasis[0]) | 1096 + FIELD_PREP(DP0_SRCCTRL_PRE1, tc->pre_emphasis[1])); 1101 1097 if (ret) 1102 1098 return ret; 1103 1099 /* SSCG and BW27 on DP1 must be set to the same as on DP0 */ 1104 1100 ret = regmap_write(tc->regmap, DP1_SRCCTRL, 1105 1101 (tc->link.spread ? DP0_SRCCTRL_SSCG : 0) | 1106 - ((tc->link.rate != 162000) ? DP0_SRCCTRL_BW27 : 0)); 1102 + ((tc->link.rate != 162000) ? DP0_SRCCTRL_BW27 : 0) | 1103 + FIELD_PREP(DP1_SRCCTRL_PRE, tc->pre_emphasis[1])); 1107 1104 if (ret) 1108 1105 return ret; 1109 1106 ··· 1199 1188 goto err_dpcd_write; 1200 1189 1201 1190 /* Reset voltage-swing & pre-emphasis */ 1202 - tmp[0] = tmp[1] = DP_TRAIN_VOLTAGE_SWING_LEVEL_0 | 1203 - DP_TRAIN_PRE_EMPH_LEVEL_0; 1191 + tmp[0] = DP_TRAIN_VOLTAGE_SWING_LEVEL_0 | 1192 + FIELD_PREP(DP_TRAIN_PRE_EMPHASIS_MASK, tc->pre_emphasis[0]); 1193 + tmp[1] = DP_TRAIN_VOLTAGE_SWING_LEVEL_0 | 1194 + FIELD_PREP(DP_TRAIN_PRE_EMPHASIS_MASK, tc->pre_emphasis[1]); 1204 1195 ret = drm_dp_dpcd_write(aux, DP_TRAINING_LANE0_SET, tmp, 2); 1205 1196 if (ret < 0) 1206 1197 goto err_dpcd_write; ··· 1226 1213 ret = regmap_write(tc->regmap, DP0_SRCCTRL, 1227 1214 tc_srcctrl(tc) | DP0_SRCCTRL_SCRMBLDIS | 1228 1215 DP0_SRCCTRL_AUTOCORRECT | 1229 - DP0_SRCCTRL_TP1); 1216 + DP0_SRCCTRL_TP1 | 1217 + FIELD_PREP(DP0_SRCCTRL_PRE0, tc->pre_emphasis[0]) | 1218 + FIELD_PREP(DP0_SRCCTRL_PRE1, tc->pre_emphasis[1])); 1230 1219 if (ret) 1231 1220 return ret; 1232 1221 ··· 1263 1248 ret = regmap_write(tc->regmap, DP0_SRCCTRL, 1264 1249 tc_srcctrl(tc) | DP0_SRCCTRL_SCRMBLDIS | 1265 1250 DP0_SRCCTRL_AUTOCORRECT | 1266 - DP0_SRCCTRL_TP2); 1251 + DP0_SRCCTRL_TP2 | 1252 + FIELD_PREP(DP0_SRCCTRL_PRE0, tc->pre_emphasis[0]) | 1253 + FIELD_PREP(DP0_SRCCTRL_PRE1, tc->pre_emphasis[1])); 1267 1254 if (ret) 1268 1255 return ret; 1269 1256 ··· 1291 1274 1292 1275 /* Clear Training Pattern, set AutoCorrect Mode = 1 */ 1293 1276 ret = regmap_write(tc->regmap, DP0_SRCCTRL, tc_srcctrl(tc) | 1294 - DP0_SRCCTRL_AUTOCORRECT); 1277 + DP0_SRCCTRL_AUTOCORRECT | 1278 + FIELD_PREP(DP0_SRCCTRL_PRE0, tc->pre_emphasis[0]) | 1279 + FIELD_PREP(DP0_SRCCTRL_PRE1, tc->pre_emphasis[1])); 1295 1280 if (ret) 1296 1281 return ret; 1297 1282 ··· 2382 2363 return -EINVAL; 2383 2364 } 2384 2365 mode |= BIT(endpoint.port); 2366 + 2367 + if (endpoint.port == 2) { 2368 + of_property_read_u8_array(node, "toshiba,pre-emphasis", 2369 + tc->pre_emphasis, 2370 + ARRAY_SIZE(tc->pre_emphasis)); 2371 + 2372 + if (tc->pre_emphasis[0] < 0 || tc->pre_emphasis[0] > 2 || 2373 + tc->pre_emphasis[1] < 0 || tc->pre_emphasis[1] > 2) { 2374 + dev_err(dev, "Incorrect Pre-Emphasis setting, use either 0=0dB 1=3.5dB 2=6dB\n"); 2375 + return -EINVAL; 2376 + } 2377 + } 2385 2378 } 2386 2379 2387 2380 if (mode == mode_dpi_to_edp || mode == mode_dpi_to_dp) {
+2 -2
drivers/gpu/drm/ci/gitlab-ci.yml
··· 2 2 DRM_CI_PROJECT_PATH: &drm-ci-project-path mesa/mesa 3 3 DRM_CI_COMMIT_SHA: &drm-ci-commit-sha e2b9c5a9e3e4f9b532067af8022eaef8d6fc6c00 4 4 5 - UPSTREAM_REPO: git://anongit.freedesktop.org/drm/drm 5 + UPSTREAM_REPO: https://gitlab.freedesktop.org/drm/kernel.git 6 6 TARGET_BRANCH: drm-next 7 7 8 - IGT_VERSION: 0df7b9b97f9da0e364f5ee30fe331004b8c86b56 8 + IGT_VERSION: f13702b8e4e847c56da3ef6f0969065d686049c5 9 9 10 10 DEQP_RUNNER_GIT_URL: https://gitlab.freedesktop.org/anholt/deqp-runner.git 11 11 DEQP_RUNNER_GIT_TAG: v0.15.0
+1
drivers/gpu/drm/ci/igt_runner.sh
··· 80 80 --igt-folder /igt/libexec/igt-gpu-tools \ 81 81 --caselist $TESTLIST \ 82 82 --output /results \ 83 + -vvvv \ 83 84 $IGT_SKIPS \ 84 85 $IGT_FLAKES \ 85 86 $IGT_FAILS \
+1
drivers/gpu/drm/ci/xfails/amdgpu-stoney-fails.txt
··· 30 30 kms_cursor_crc@cursor-size-change,Fail 31 31 kms_cursor_crc@cursor-sliding-64x21,Fail 32 32 kms_cursor_crc@cursor-sliding-64x64,Fail 33 + kms_cursor_edge_walk@64x64-left-edge,Fail 33 34 kms_flip@flip-vs-modeset-vs-hang,Fail 34 35 kms_flip@flip-vs-panning-vs-hang,Fail 35 36 kms_lease@lease-uevent,Fail
+13 -1
drivers/gpu/drm/ci/xfails/amdgpu-stoney-flakes.txt
··· 1 1 # Board Name: hp-11A-G6-EE-grunt 2 2 # Bug Report: https://lore.kernel.org/amd-gfx/3542730f-b8d7-404d-a947-b7a5e95d661c@collabora.com/T/#u 3 + # Failure Rate: 50 3 4 # IGT Version: 1.28-g0df7b9b97 4 5 # Linux Version: 6.9.0-rc7 5 - # Failure Rate: 50 6 6 kms_async_flips@async-flip-with-page-flip-events 7 + 8 + # Board Name: hp-11A-G6-EE-grunt 9 + # Bug Report: https://lore.kernel.org/amd-gfx/3542730f-b8d7-404d-a947-b7a5e95d661c@collabora.com/T/#u 10 + # Failure Rate: 50 11 + # IGT Version: 1.28-g0df7b9b97 12 + # Linux Version: 6.9.0-rc7 7 13 kms_async_flips@crc 14 + 15 + # Board Name: hp-11A-G6-EE-grunt 16 + # Bug Report: https://lore.kernel.org/amd-gfx/3542730f-b8d7-404d-a947-b7a5e95d661c@collabora.com/T/#u 17 + # Failure Rate: 50 18 + # IGT Version: 1.28-g0df7b9b97 19 + # Linux Version: 6.9.0-rc7 8 20 kms_plane@pixel-format-source-clamping
+2 -2
drivers/gpu/drm/ci/xfails/amdgpu-stoney-skips.txt
··· 2 2 .*suspend.* 3 3 4 4 # Skip driver specific tests 5 - msm_.* 5 + ^msm.* 6 6 nouveau_.* 7 - panfrost_.* 7 + ^panfrost.* 8 8 ^v3d.* 9 9 ^vc4.* 10 10 ^vmwgfx*
+8 -4
drivers/gpu/drm/ci/xfails/i915-amly-fails.txt
··· 6 6 i915_module_load@resize-bar,Fail 7 7 i915_pm_rpm@gem-execbuf-stress,Timeout 8 8 i915_pm_rpm@module-reload,Fail 9 - kms_async_flips@invalid-async-flip,Timeout 10 - kms_atomic_transition@modeset-transition-fencing,Timeout 11 9 kms_ccs@crc-primary-rotation-180-yf-tiled-ccs,Timeout 10 + kms_cursor_legacy@short-flip-before-cursor-atomic-transitions,Timeout 12 11 kms_fb_coherency@memset-crc,Crash 13 - kms_flip@flip-vs-dpms-off-vs-modeset,Timeout 12 + kms_flip@busy-flip,Timeout 13 + kms_flip@single-buffer-flip-vs-dpms-off-vs-modeset-interruptible,Fail 14 14 kms_flip_scaled_crc@flip-32bpp-linear-to-64bpp-linear-downscaling,Fail 15 15 kms_flip_scaled_crc@flip-32bpp-linear-to-64bpp-linear-upscaling,Fail 16 16 kms_flip_scaled_crc@flip-32bpp-xtile-to-64bpp-xtile-downscaling,Fail ··· 33 33 kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytile-upscaling,Fail 34 34 kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytilegen12rcccs-upscaling,Fail 35 35 kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytilercccs-downscaling,Fail 36 + kms_frontbuffer_tracking@fbc-rgb565-draw-mmap-cpu,Timeout 36 37 kms_lease@lease-uevent,Fail 37 38 kms_plane_alpha_blend@alpha-basic,Fail 38 39 kms_plane_alpha_blend@alpha-opaque-fb,Fail 39 40 kms_plane_alpha_blend@alpha-transparent-fb,Fail 40 41 kms_plane_alpha_blend@constant-alpha-max,Fail 41 42 kms_plane_scaling@plane-scaler-with-clipping-clamping-rotation,Timeout 42 - kms_pm_rpm@modeset-lpsp-stress,Timeout 43 + kms_plane_scaling@planes-upscale-factor-0-25-downscale-factor-0-5,Timeout 43 44 kms_pm_rpm@modeset-stress-extra-wait,Timeout 44 45 kms_pm_rpm@universal-planes,Timeout 45 46 kms_pm_rpm@universal-planes-dpms,Timeout 47 + kms_prop_blob@invalid-set-prop,Fail 48 + kms_rotation_crc@primary-rotation-180,Timeout 49 + kms_vblank@query-forked-hang,Timeout 46 50 perf@i915-ref-count,Fail 47 51 perf_pmu@module-unload,Fail 48 52 perf_pmu@rc6,Crash
+40 -1
drivers/gpu/drm/ci/xfails/i915-amly-flakes.txt
··· 1 1 # Board Name: asus-C433TA-AJ0005-rammus 2 2 # Bug Report: https://lore.kernel.org/intel-gfx/af4ca4df-a3ef-4943-bdbf-4c3af2c333af@collabora.com/T/#u 3 + # Failure Rate: 50 3 4 # IGT Version: 1.28-g0df7b9b97 4 5 # Linux Version: 6.9.0-rc7 5 - # Failure Rate: 50 6 6 i915_hangman@engine-engine-error 7 + 8 + # Board Name: asus-C433TA-AJ0005-rammus 9 + # Bug Report: https://lore.kernel.org/intel-gfx/af4ca4df-a3ef-4943-bdbf-4c3af2c333af@collabora.com/T/#u 10 + # Failure Rate: 50 11 + # IGT Version: 1.28-g0df7b9b97 12 + # Linux Version: 6.9.0-rc7 7 13 i915_hangman@gt-engine-hang 14 + 15 + # Board Name: asus-C433TA-AJ0005-rammus 16 + # Bug Report: https://lore.kernel.org/intel-gfx/af4ca4df-a3ef-4943-bdbf-4c3af2c333af@collabora.com/T/#u 17 + # Failure Rate: 50 18 + # IGT Version: 1.28-g0df7b9b97 19 + # Linux Version: 6.9.0-rc7 8 20 kms_async_flips@crc 21 + 22 + # Board Name: asus-C433TA-AJ0005-rammus 23 + # Bug Report: https://lore.kernel.org/intel-gfx/af4ca4df-a3ef-4943-bdbf-4c3af2c333af@collabora.com/T/#u 24 + # Failure Rate: 50 25 + # IGT Version: 1.28-g0df7b9b97 26 + # Linux Version: 6.9.0-rc7 9 27 kms_universal_plane@cursor-fb-leak 28 + 29 + # Board Name: asus-C433TA-AJ0005-rammus 30 + # Bug Report: https://lore.kernel.org/intel-gfx/af4ca4df-a3ef-4943-bdbf-4c3af2c333af@collabora.com/T/#u 31 + # Failure Rate: 50 32 + # IGT Version: 1.28-gf13702b8e 33 + # Linux Version: 6.10.0-rc5 34 + kms_sysfs_edid_timing 35 + 36 + # Board Name: asus-C433TA-AJ0005-rammus 37 + # Bug Report: https://lore.kernel.org/intel-gfx/af4ca4df-a3ef-4943-bdbf-4c3af2c333af@collabora.com/T/#u 38 + # Failure Rate: 50 39 + # IGT Version: 1.28-gf13702b8e 40 + # Linux Version: 6.10.0-rc5 41 + i915_hangman@engine-engine-hang 42 + 43 + # Board Name: asus-C433TA-AJ0005-rammus 44 + # Bug Report: https://lore.kernel.org/intel-gfx/af4ca4df-a3ef-4943-bdbf-4c3af2c333af@collabora.com/T/#u 45 + # Failure Rate: 50 46 + # IGT Version: 1.28-gf13702b8e 47 + # Linux Version: 6.10.0-rc5 48 + 
kms_pm_rpm@modeset-lpsp-stress
+3 -2
drivers/gpu/drm/ci/xfails/i915-amly-skips.txt
··· 5 5 6 6 # Skip driver specific tests 7 7 ^amdgpu.* 8 - msm_.* 8 + ^msm.* 9 9 nouveau_.* 10 - panfrost_.* 10 + ^panfrost.* 11 11 ^v3d.* 12 12 ^vc4.* 13 13 ^vmwgfx* ··· 19 19 i915_pm_rc6_residency.* 20 20 i915_suspend.* 21 21 kms_scaling_modes.* 22 + i915_pm_rpm.* 22 23 23 24 # Kernel panic 24 25 drm_fdinfo.*
+1 -1
drivers/gpu/drm/ci/xfails/i915-apl-flakes.txt
··· 1 1 # Board Name: asus-C523NA-A20057-coral 2 2 # Bug Report: https://lore.kernel.org/intel-gfx/af4ca4df-a3ef-4943-bdbf-4c3af2c333af@collabora.com/T/#u 3 + # Failure Rate: 50 3 4 # IGT Version: 1.28-g0df7b9b97 4 5 # Linux Version: 6.9.0-rc7 5 - # Failure Rate: 50 6 6 kms_fb_coherency@memset-crc
+2 -2
drivers/gpu/drm/ci/xfails/i915-apl-skips.txt
··· 7 7 8 8 # Skip driver specific tests 9 9 ^amdgpu.* 10 - msm_.* 10 + ^msm.* 11 11 nouveau_.* 12 - panfrost_.* 12 + ^panfrost.* 13 13 ^v3d.* 14 14 ^vc4.* 15 15 ^vmwgfx*
+9 -5
drivers/gpu/drm/ci/xfails/i915-cml-fails.txt
··· 9 9 i915_pm_rpm@gem-execbuf-stress,Timeout 10 10 i915_pm_rpm@module-reload,Fail 11 11 i915_pm_rpm@system-suspend-execbuf,Timeout 12 - kms_async_flips@invalid-async-flip,Timeout 13 - kms_atomic_transition@modeset-transition-fencing,Timeout 14 12 kms_ccs@crc-primary-rotation-180-yf-tiled-ccs,Timeout 15 13 kms_fb_coherency@memset-crc,Crash 16 - kms_flip@flip-vs-dpms-off-vs-modeset,Timeout 14 + kms_flip@busy-flip,Timeout 15 + kms_flip@single-buffer-flip-vs-dpms-off-vs-modeset-interruptible,Fail 17 16 kms_flip_scaled_crc@flip-32bpp-linear-to-64bpp-linear-downscaling,Fail 18 17 kms_flip_scaled_crc@flip-32bpp-linear-to-64bpp-linear-upscaling,Fail 19 18 kms_flip_scaled_crc@flip-32bpp-xtile-to-64bpp-xtile-downscaling,Fail ··· 40 41 kms_plane_alpha_blend@alpha-opaque-fb,Fail 41 42 kms_plane_alpha_blend@alpha-transparent-fb,Fail 42 43 kms_plane_alpha_blend@constant-alpha-max,Fail 43 - kms_plane_alpha_blend@constant-alpha-min,Fail 44 44 kms_plane_scaling@plane-scaler-with-clipping-clamping-rotation,Timeout 45 + kms_plane_scaling@planes-upscale-factor-0-25-downscale-factor-0-5,Timeout 45 46 kms_pm_rpm@modeset-stress-extra-wait,Timeout 46 47 kms_pm_rpm@universal-planes,Timeout 47 48 kms_pm_rpm@universal-planes-dpms,Timeout 49 + kms_prop_blob@invalid-set-prop,Fail 50 + kms_psr2_sf@cursor-plane-update-sf,Fail 48 51 kms_psr2_sf@fbc-plane-move-sf-dmg-area,Timeout 49 52 kms_psr2_sf@overlay-plane-update-continuous-sf,Fail 50 53 kms_psr2_sf@overlay-plane-update-sf-dmg-area,Fail 54 + kms_psr2_sf@overlay-primary-update-sf-dmg-area,Fail 55 + kms_psr2_sf@plane-move-sf-dmg-area,Fail 51 56 kms_psr2_sf@primary-plane-update-sf-dmg-area,Fail 52 57 kms_psr2_sf@primary-plane-update-sf-dmg-area-big-fb,Fail 53 58 kms_psr2_su@page_flip-NV12,Fail 54 59 kms_psr2_su@page_flip-P010,Fail 55 - kms_psr@psr-sprite-render,Timeout 60 + kms_rotation_crc@primary-rotation-180,Timeout 56 61 kms_setmode@basic,Fail 62 + kms_vblank@query-forked-hang,Timeout 57 63 perf@i915-ref-count,Fail 58 64 perf_pmu@module-unload,Fail 59 65 perf_pmu@rc6,Crash
+8 -1
drivers/gpu/drm/ci/xfails/i915-cml-flakes.txt
··· 1 1 # Board Name: asus-C436FA-Flip-hatch 2 2 # Bug Report: https://lore.kernel.org/intel-gfx/af4ca4df-a3ef-4943-bdbf-4c3af2c333af@collabora.com/T/#u 3 + # Failure Rate: 50 3 4 # IGT Version: 1.28-g0df7b9b97 4 5 # Linux Version: 6.9.0-rc7 5 - # Failure Rate: 50 6 6 kms_plane_alpha_blend@constant-alpha-min 7 + 8 + # Board Name: asus-C436FA-Flip-hatch 9 + # Bug Report: https://lore.kernel.org/intel-gfx/af4ca4df-a3ef-4943-bdbf-4c3af2c333af@collabora.com/T/#u 10 + # Failure Rate: 50 11 + # IGT Version: 1.28-gf13702b8e 12 + # Linux Version: 6.10.0-rc5 13 + kms_atomic_transition@plane-all-modeset-transition-internal-panels
+3 -2
drivers/gpu/drm/ci/xfails/i915-cml-skips.txt
··· 3 3 4 4 # Skip driver specific tests 5 5 ^amdgpu.* 6 - msm_.* 6 + ^msm.* 7 7 nouveau_.* 8 - panfrost_.* 8 + ^panfrost.* 9 9 ^v3d.* 10 10 ^vc4.* 11 11 ^vmwgfx* ··· 19 19 xe_module_load.* 20 20 api_intel_allocator.* 21 21 kms_cursor_legacy.* 22 + i915_pm_rpm.* 22 23 23 24 # Kernel panic 24 25 drm_fdinfo.*
+13 -11
drivers/gpu/drm/ci/xfails/i915-glk-fails.txt
··· 1 1 core_setmaster@master-drop-set-user,Fail 2 + core_setmaster_vs_auth,Fail 2 3 i915_module_load@load,Fail 3 4 i915_module_load@reload,Fail 4 5 i915_module_load@reload-no-display,Fail 5 6 i915_module_load@resize-bar,Fail 6 - kms_async_flips@invalid-async-flip,Timeout 7 - kms_atomic_transition@modeset-transition-fencing,Timeout 8 - kms_big_fb@linear-16bpp-rotate-0,Fail 9 - kms_big_fb@linear-16bpp-rotate-180,Fail 10 - kms_big_fb@linear-32bpp-rotate-0,Fail 11 - kms_big_fb@linear-32bpp-rotate-180,Fail 12 - kms_big_fb@linear-8bpp-rotate-0,Fail 13 - kms_big_fb@linear-8bpp-rotate-180,Fail 14 - kms_big_fb@linear-max-hw-stride-32bpp-rotate-0,Fail 7 + kms_cursor_legacy@short-flip-before-cursor-atomic-transitions,Timeout 15 8 kms_dirtyfb@default-dirtyfb-ioctl,Fail 16 - kms_draw_crc@draw-method-render,Fail 17 - kms_flip@flip-vs-dpms-off-vs-modeset,Timeout 9 + kms_dirtyfb@drrs-dirtyfb-ioctl,Fail 10 + kms_dirtyfb@fbc-dirtyfb-ioctl,Fail 11 + kms_flip@blocking-wf_vblank,Fail 12 + kms_flip@busy-flip,Timeout 13 + kms_flip@single-buffer-flip-vs-dpms-off-vs-modeset-interruptible,Fail 18 14 kms_flip@wf_vblank-ts-check,Fail 19 15 kms_flip@wf_vblank-ts-check-interruptible,Fail 20 16 kms_flip_scaled_crc@flip-32bpp-linear-to-64bpp-linear-downscaling,Fail ··· 22 26 kms_flip_scaled_crc@flip-32bpp-ytileccs-to-64bpp-ytile-downscaling,Fail 23 27 kms_flip_scaled_crc@flip-32bpp-ytileccs-to-64bpp-ytile-upscaling,Fail 24 28 kms_flip_scaled_crc@flip-64bpp-linear-to-16bpp-linear-downscaling,Fail 29 + kms_flip_scaled_crc@flip-64bpp-linear-to-16bpp-linear-upscaling,Fail 25 30 kms_flip_scaled_crc@flip-64bpp-linear-to-32bpp-linear-downscaling,Fail 26 31 kms_flip_scaled_crc@flip-64bpp-linear-to-32bpp-linear-upscaling,Fail 27 32 kms_flip_scaled_crc@flip-64bpp-xtile-to-16bpp-xtile-downscaling,Fail ··· 35 38 kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytile-upscaling,Fail 36 39 kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytilegen12rcccs-upscaling,Fail 37 40 
kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytilercccs-downscaling,Fail 41 + kms_frontbuffer_tracking@fbc-rgb565-draw-mmap-cpu,Timeout 38 42 kms_frontbuffer_tracking@fbc-tiling-linear,Fail 39 43 kms_frontbuffer_tracking@fbcdrrs-tiling-linear,Fail 40 44 kms_lease@lease-uevent,Fail 41 45 kms_plane_alpha_blend@alpha-opaque-fb,Fail 42 46 kms_plane_scaling@plane-scaler-with-clipping-clamping-rotation,Timeout 47 + kms_plane_scaling@planes-upscale-factor-0-25-downscale-factor-0-5,Timeout 43 48 kms_pm_rpm@legacy-planes,Timeout 44 49 kms_pm_rpm@legacy-planes-dpms,Timeout 45 50 kms_pm_rpm@modeset-stress-extra-wait,Timeout 46 51 kms_pm_rpm@universal-planes,Timeout 47 52 kms_pm_rpm@universal-planes-dpms,Timeout 53 + kms_prop_blob@invalid-set-prop,Fail 48 54 kms_rotation_crc@multiplane-rotation,Fail 49 55 kms_rotation_crc@multiplane-rotation-cropping-bottom,Fail 50 56 kms_rotation_crc@multiplane-rotation-cropping-top,Fail 57 + kms_rotation_crc@primary-rotation-180,Timeout 58 + kms_vblank@query-forked-hang,Timeout 51 59 perf@non-zero-reason,Timeout 52 60 sysfs_heartbeat_interval@long,Timeout 53 61 sysfs_heartbeat_interval@off,Timeout
+7 -1
drivers/gpu/drm/ci/xfails/i915-glk-flakes.txt
··· 1 1 # Board Name: hp-x360-12b-ca0010nr-n4020-octopus 2 2 # Bug Report: https://lore.kernel.org/intel-gfx/af4ca4df-a3ef-4943-bdbf-4c3af2c333af@collabora.com/T/#u 3 + # Failure Rate: 50 3 4 # IGT Version: 1.28-g0df7b9b97 4 5 # Linux Version: 6.9.0-rc7 5 - # Failure Rate: 50 6 6 core_hotunplug@unplug-rescan 7 + 8 + # Board Name: hp-x360-12b-ca0010nr-n4020-octopus 9 + # Bug Report: https://lore.kernel.org/intel-gfx/af4ca4df-a3ef-4943-bdbf-4c3af2c333af@collabora.com/T/#u 10 + # Failure Rate: 50 11 + # IGT Version: 1.28-g0df7b9b97 12 + # Linux Version: 6.9.0-rc7 7 13 kms_fb_coherency@memset-crc
+2 -2
drivers/gpu/drm/ci/xfails/i915-glk-skips.txt
··· 6 6 7 7 # Skip driver specific tests 8 8 ^amdgpu.* 9 - msm_.* 9 + ^msm.* 10 10 nouveau_.* 11 - panfrost_.* 11 + ^panfrost.* 12 12 ^v3d.* 13 13 ^vc4.* 14 14 ^vmwgfx*
+2
drivers/gpu/drm/ci/xfails/i915-kbl-fails.txt
··· 17 17 perf_pmu@busy-accuracy-50,Fail 18 18 perf_pmu@module-unload,Fail 19 19 perf_pmu@rc6,Crash 20 + prime_busy@after,Fail 20 21 sysfs_heartbeat_interval@long,Timeout 21 22 sysfs_heartbeat_interval@off,Timeout 22 23 sysfs_preempt_timeout@off,Timeout 23 24 sysfs_timeslice_duration@off,Timeout 25 + testdisplay,Timeout 24 26 xe_module_load@force-load,Fail 25 27 xe_module_load@load,Fail 26 28 xe_module_load@many-reload,Fail
+1 -1
drivers/gpu/drm/ci/xfails/i915-kbl-flakes.txt
··· 1 1 # Board Name: hp-x360-14-G1-sona 2 2 # Bug Report: https://lore.kernel.org/intel-gfx/af4ca4df-a3ef-4943-bdbf-4c3af2c333af@collabora.com/T/#u 3 + # Failure Rate: 50 3 4 # IGT Version: 1.28-g0df7b9b97 4 5 # Linux Version: 6.9.0-rc7 5 - # Failure Rate: 50 6 6 prime_busy@hang
+2 -2
drivers/gpu/drm/ci/xfails/i915-kbl-skips.txt
··· 6 6 7 7 # Skip driver specific tests 8 8 ^amdgpu.* 9 - msm_.* 9 + ^msm.* 10 10 nouveau_.* 11 - panfrost_.* 11 + ^panfrost.* 12 12 ^v3d.* 13 13 ^vc4.* 14 14 ^vmwgfx*
+16 -9
drivers/gpu/drm/ci/xfails/i915-tgl-fails.txt
··· 1 - api_intel_bb@blit-noreloc-keep-cache,Timeout 1 + api_intel_allocator@simple-allocator,Timeout 2 + api_intel_bb@object-reloc-keep-cache,Timeout 2 3 api_intel_bb@offset-control,Timeout 3 - api_intel_bb@render-ccs,Timeout 4 - core_getclient,Timeout 5 - core_hotunplug@hotreplug-lateclose,Timeout 6 - drm_read@short-buffer-block,Timeout 4 + core_auth@getclient-simple,Timeout 5 + core_hotunplug@hotunbind-rebind,Timeout 6 + debugfs_test@read_all_entries_display_on,Timeout 7 + drm_read@invalid-buffer,Timeout 7 8 drm_read@short-buffer-nonblock,Timeout 8 - dumb_buffer@map-uaf,Timeout 9 9 gen3_render_tiledx_blits,Timeout 10 10 gen7_exec_parse@basic-allocation,Timeout 11 11 gen7_exec_parse@batch-without-end,Timeout 12 12 gen9_exec_parse@batch-invalid-length,Timeout 13 13 gen9_exec_parse@bb-secure,Timeout 14 + gen9_exec_parse@secure-batches,Timeout 15 + gen9_exec_parse@shadow-peek,Timeout 16 + gen9_exec_parse@unaligned-jump,Timeout 14 17 i915_module_load@load,Fail 15 18 i915_module_load@reload,Fail 16 19 i915_module_load@reload-no-display,Fail 17 20 i915_module_load@resize-bar,Fail 18 - i915_pciid,Timeout 19 21 i915_query@engine-info,Timeout 22 + i915_query@query-topology-kernel-writes,Timeout 23 + i915_query@test-query-geometry-subslices,Timeout 20 24 kms_lease@lease-uevent,Fail 21 25 kms_rotation_crc@multiplane-rotation,Fail 22 26 perf@i915-ref-count,Fail 23 - perf_pmu@busy,Timeout 24 27 perf_pmu@enable-race,Timeout 25 28 perf_pmu@event-wait,Timeout 26 29 perf_pmu@gt-awake,Timeout 30 + perf_pmu@interrupts,Timeout 27 31 perf_pmu@module-unload,Fail 28 32 perf_pmu@rc6,Crash 29 33 prime_mmap@test_map_unmap,Timeout 34 + prime_mmap@test_refcounting,Timeout 30 35 prime_self_import@basic-with_one_bo,Timeout 31 - syncobj_basic@bad-destroy,Timeout 36 + syncobj_basic@bad-flags-fd-to-handle,Timeout 32 37 syncobj_eventfd@invalid-bad-pad,Timeout 33 38 syncobj_wait@invalid-multi-wait-unsubmitted-signaled,Timeout 34 39 syncobj_wait@invalid-signal-illegal-handle,Timeout ··· 42 37 
syncobj_wait@multi-wait-for-submit-submitted-signaled,Timeout 43 38 syncobj_wait@wait-any-complex,Timeout 44 39 syncobj_wait@wait-delayed-signal,Timeout 40 + template@A,Timeout 45 41 xe_module_load@force-load,Fail 46 42 xe_module_load@load,Fail 43 + xe_module_load@many-reload,Fail 47 44 xe_module_load@reload,Fail 48 45 xe_module_load@reload-no-display,Fail
+2 -2
drivers/gpu/drm/ci/xfails/i915-tgl-skips.txt
··· 12 12 13 13 # Skip driver specific tests 14 14 ^amdgpu.* 15 - msm_.* 15 + ^msm.* 16 16 nouveau_.* 17 - panfrost_.* 17 + ^panfrost.* 18 18 ^v3d.* 19 19 ^vc4.* 20 20 ^vmwgfx*
+7 -10
drivers/gpu/drm/ci/xfails/i915-whl-fails.txt
··· 7 7 i915_pm_rpm@gem-execbuf-stress,Timeout 8 8 i915_pm_rpm@module-reload,Fail 9 9 i915_pm_rpm@system-suspend-execbuf,Timeout 10 - kms_async_flips@invalid-async-flip,Timeout 11 - kms_atomic_transition@modeset-transition-fencing,Timeout 12 - kms_big_fb@linear-16bpp-rotate-0,Fail 13 - kms_big_fb@linear-16bpp-rotate-180,Fail 14 - kms_big_fb@linear-32bpp-rotate-0,Fail 15 - kms_big_fb@linear-32bpp-rotate-180,Fail 16 - kms_big_fb@linear-8bpp-rotate-0,Fail 17 - kms_big_fb@linear-8bpp-rotate-180,Fail 18 - kms_big_fb@linear-max-hw-stride-32bpp-rotate-0,Fail 19 10 kms_ccs@crc-primary-rotation-180-yf-tiled-ccs,Timeout 11 + kms_cursor_legacy@short-flip-before-cursor-atomic-transitions,Timeout 20 12 kms_dirtyfb@default-dirtyfb-ioctl,Fail 21 - kms_draw_crc@draw-method-render,Fail 13 + kms_dirtyfb@fbc-dirtyfb-ioctl,Fail 22 14 kms_fb_coherency@memset-crc,Crash 23 15 kms_flip_scaled_crc@flip-32bpp-linear-to-64bpp-linear-downscaling,Fail 24 16 kms_flip_scaled_crc@flip-32bpp-linear-to-64bpp-linear-upscaling,Fail ··· 32 40 kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytile-upscaling,Fail 33 41 kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytilegen12rcccs-upscaling,Fail 34 42 kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytilercccs-downscaling,Fail 43 + kms_frontbuffer_tracking@fbc-rgb565-draw-mmap-cpu,Timeout 35 44 kms_frontbuffer_tracking@fbc-tiling-linear,Fail 36 45 kms_lease@lease-uevent,Fail 37 46 kms_plane_alpha_blend@alpha-basic,Fail ··· 40 47 kms_plane_alpha_blend@alpha-transparent-fb,Fail 41 48 kms_plane_alpha_blend@constant-alpha-max,Fail 42 49 kms_plane_scaling@plane-scaler-with-clipping-clamping-rotation,Timeout 50 + kms_plane_scaling@planes-upscale-factor-0-25-downscale-factor-0-5,Timeout 43 51 kms_pm_rpm@modeset-stress-extra-wait,Timeout 44 52 kms_pm_rpm@universal-planes,Timeout 45 53 kms_pm_rpm@universal-planes-dpms,Timeout 54 + kms_prop_blob@invalid-set-prop,Fail 55 + kms_rotation_crc@primary-rotation-180,Timeout 56 + kms_vblank@query-forked-hang,Timeout 46 57 
perf@i915-ref-count,Fail 47 58 perf_pmu@module-unload,Fail 48 59 perf_pmu@rc6,Crash
+1 -1
drivers/gpu/drm/ci/xfails/i915-whl-flakes.txt
··· 1 1 # Board Name: dell-latitude-5400-8665U-sarien 2 2 # Bug Report: https://lore.kernel.org/intel-gfx/af4ca4df-a3ef-4943-bdbf-4c3af2c333af@collabora.com/T/#u 3 + # Failure Rate: 50 3 4 # IGT Version: 1.28-g0df7b9b97 4 5 # Linux Version: 6.9.0-rc7 5 - # Failure Rate: 50 6 6 kms_pm_rpm@modeset-lpsp-stress
+3 -2
drivers/gpu/drm/ci/xfails/i915-whl-skips.txt
··· 3 3 4 4 # Skip driver specific tests 5 5 ^amdgpu.* 6 - msm_.* 6 + ^msm.* 7 7 nouveau_.* 8 - panfrost_.* 8 + ^panfrost.* 9 9 ^v3d.* 10 10 ^vc4.* 11 11 ^vmwgfx* ··· 17 17 i915_pm_rc6_residency.* 18 18 i915_suspend.* 19 19 kms_flip.* 20 + i915_pm_rpm.* 20 21 21 22 # Kernel panic 22 23 drm_fdinfo.*
+8 -1
drivers/gpu/drm/ci/xfails/mediatek-mt8173-fails.txt
··· 5 5 dumb_buffer@invalid-bpp,Fail 6 6 fbdev@eof,Fail 7 7 fbdev@read,Fail 8 - fbdev@unaligned-write,Fail 9 8 kms_3d,Fail 9 + kms_bw@connected-linear-tiling-1-displays-1920x1080p,Fail 10 + kms_bw@connected-linear-tiling-1-displays-2160x1440p,Fail 11 + kms_bw@connected-linear-tiling-1-displays-2560x1440p,Fail 12 + kms_bw@connected-linear-tiling-1-displays-3840x2160p,Fail 13 + kms_bw@connected-linear-tiling-2-displays-1920x1080p,Fail 14 + kms_bw@connected-linear-tiling-2-displays-2160x1440p,Fail 15 + kms_bw@connected-linear-tiling-2-displays-2560x1440p,Fail 16 + kms_bw@connected-linear-tiling-2-displays-3840x2160p,Fail 10 17 kms_bw@linear-tiling-1-displays-1920x1080p,Fail 11 18 kms_bw@linear-tiling-1-displays-2160x1440p,Fail 12 19 kms_bw@linear-tiling-1-displays-2560x1440p,Fail
+31 -1
drivers/gpu/drm/ci/xfails/mediatek-mt8173-flakes.txt
··· 1 1 # Board Name: mt8173-elm-hana 2 2 # Bug Report: https://lore.kernel.org/linux-mediatek/0b2a1899-15dd-42fa-8f63-ea0ca28dbb17@collabora.com/T/#u 3 + # Failure Rate: 50 3 4 # IGT Version: 1.28-g0df7b9b97 4 5 # Linux Version: 6.9.0-rc7 5 - # Failure Rate: 50 6 6 core_setmaster_vs_auth 7 + 8 + # Board Name: mt8173-elm-hana 9 + # Bug Report: https://lore.kernel.org/linux-mediatek/0b2a1899-15dd-42fa-8f63-ea0ca28dbb17@collabora.com/T/#u 10 + # Failure Rate: 50 11 + # IGT Version: 1.28-g0df7b9b97 12 + # Linux Version: 6.9.0-rc7 7 13 dumb_buffer@create-clear 14 + 15 + # Board Name: mt8173-elm-hana 16 + # Bug Report: https://lore.kernel.org/linux-mediatek/0b2a1899-15dd-42fa-8f63-ea0ca28dbb17@collabora.com/T/#u 17 + # Failure Rate: 50 18 + # IGT Version: 1.28-g0df7b9b97 19 + # Linux Version: 6.9.0-rc7 8 20 fbdev@unaligned-write 21 + 22 + # Board Name: mt8173-elm-hana 23 + # Bug Report: https://lore.kernel.org/linux-mediatek/0b2a1899-15dd-42fa-8f63-ea0ca28dbb17@collabora.com/T/#u 24 + # Failure Rate: 50 25 + # IGT Version: 1.28-g0df7b9b97 26 + # Linux Version: 6.9.0-rc7 9 27 fbdev@write 28 + 29 + # Board Name: mt8173-elm-hana 30 + # Bug Report: https://lore.kernel.org/linux-mediatek/0b2a1899-15dd-42fa-8f63-ea0ca28dbb17@collabora.com/T/#u 31 + # Failure Rate: 50 32 + # IGT Version: 1.28-g0df7b9b97 33 + # Linux Version: 6.9.0-rc7 10 34 kms_cursor_legacy@cursor-vs-flip-atomic-transitions 35 + 36 + # Board Name: mt8173-elm-hana 37 + # Bug Report: https://lore.kernel.org/linux-mediatek/0b2a1899-15dd-42fa-8f63-ea0ca28dbb17@collabora.com/T/#u 38 + # Failure Rate: 50 39 + # IGT Version: 1.28-g0df7b9b97 40 + # Linux Version: 6.9.0-rc7 11 41 kms_prop_blob@invalid-set-prop
+2 -2
drivers/gpu/drm/ci/xfails/mediatek-mt8173-skips.txt
··· 1 1 # Skip driver specific tests 2 2 ^amdgpu.* 3 - msm_.* 3 + ^msm.* 4 4 nouveau_.* 5 - panfrost_.* 5 + ^panfrost.* 6 6 ^v3d.* 7 7 ^vc4.* 8 8 ^vmwgfx*
+1 -1
drivers/gpu/drm/ci/xfails/mediatek-mt8183-fails.txt
··· 4 4 dumb_buffer@map-invalid-size,Fail 5 5 dumb_buffer@map-uaf,Fail 6 6 dumb_buffer@map-valid,Fail 7 - panfrost_prime@gem-prime-import,Fail 7 + panfrost/panfrost_prime@gem-prime-import,Fail 8 8 tools_test@tools_test,Fail
+1 -1
drivers/gpu/drm/ci/xfails/mediatek-mt8183-skips.txt
··· 1 1 # Skip driver specific tests 2 2 ^amdgpu.* 3 - msm_.* 3 + ^msm.* 4 4 nouveau_.* 5 5 ^v3d.* 6 6 ^vc4.*
+1 -1
drivers/gpu/drm/ci/xfails/meson-g12b-fails.txt
··· 4 4 dumb_buffer@map-invalid-size,Fail 5 5 dumb_buffer@map-uaf,Fail 6 6 dumb_buffer@map-valid,Fail 7 - panfrost_prime@gem-prime-import,Fail 7 + panfrost/panfrost_prime@gem-prime-import,Fail 8 8 tools_test@tools_test,Fail
+1 -1
drivers/gpu/drm/ci/xfails/meson-g12b-skips.txt
··· 1 1 # Skip driver specific tests 2 2 ^amdgpu.* 3 - msm_.* 3 + ^msm.* 4 4 nouveau_.* 5 5 ^v3d.* 6 6 ^vc4.*
+1 -4
drivers/gpu/drm/ci/xfails/msm-apq8016-fails.txt
··· 4 4 device_reset@unbind-reset-rebind,Fail 5 5 dumb_buffer@invalid-bpp,Fail 6 6 kms_3d,Fail 7 - kms_cursor_legacy@forked-move,Fail 8 - kms_cursor_legacy@single-bo,Fail 9 7 kms_cursor_legacy@torture-bo,Fail 10 - kms_cursor_legacy@torture-move,Fail 11 8 kms_force_connector_basic@force-edid,Fail 12 9 kms_hdmi_inject@inject-4k,Fail 13 10 kms_lease@lease-uevent,Fail 14 - msm_mapping@ring,Fail 11 + msm/msm_mapping@ring,Fail 15 12 tools_test@tools_test,Fail
+1 -1
drivers/gpu/drm/ci/xfails/msm-apq8016-skips.txt
··· 1 1 # Skip driver specific tests 2 2 ^amdgpu.* 3 3 nouveau_.* 4 - panfrost_.* 4 + ^panfrost.* 5 5 ^v3d.* 6 6 ^vc4.* 7 7 ^vmwgfx*
+1 -1
drivers/gpu/drm/ci/xfails/msm-apq8096-flakes.txt
··· 1 1 # Board Name: apq8096-db820c 2 2 # Bug Report: https://lore.kernel.org/linux-arm-msm/661483c8-ad82-400d-bcd8-e94986d20d7d@collabora.com/T/#u 3 + # Failure Rate: 50 3 4 # IGT Version: 1.28-g0df7b9b97 4 5 # Linux Version: 6.9.0-rc7 5 - # Failure Rate: 50 6 6 dumb_buffer@create-clear
+2 -2
drivers/gpu/drm/ci/xfails/msm-apq8096-skips.txt
···
 # Skip driver specific tests
 ^amdgpu.*
 nouveau_.*
-panfrost_.*
+^panfrost.*
 ^v3d.*
 ^vc4.*
 ^vmwgfx*
···
 # *** gpu fault: ttbr0=00000001030ea000 iova=0000000001074000 dir=WRITE type=PERMISSION source=1f030000 (0,0,0,0)
 # msm_mdp 901000.display-controller: RBBM | ME master split | status=0x701000B0
 # watchdog: BUG: soft lockup - CPU#0 stuck for 26s! [kworker/u16:3:46]
-msm_mapping@shadow
+msm/msm_mapping@shadow
-145
drivers/gpu/drm/ci/xfails/msm-sc7180-trogdor-kingoftown-fails.txt
···
 device_reset@unbind-cold-reset-rebind,Fail
 device_reset@unbind-reset-rebind,Fail
 dumb_buffer@invalid-bpp,Fail
-kms_atomic_transition@plane-primary-toggle-with-vblank-wait,Fail
 kms_color@ctm-0-25,Fail
 kms_color@ctm-0-50,Fail
 kms_color@ctm-0-75,Fail
 kms_color@ctm-blue-to-red,Fail
 kms_color@ctm-green-to-red,Fail
-kms_color@ctm-max,Fail
 kms_color@ctm-negative,Fail
 kms_color@ctm-red-to-blue,Fail
 kms_color@ctm-signed,Fail
···
 kms_content_protection@srm,Crash
 kms_content_protection@type1,Crash
 kms_content_protection@uevent,Crash
-kms_cursor_crc@cursor-alpha-opaque,Fail
-kms_cursor_crc@cursor-alpha-transparent,Fail
-kms_cursor_crc@cursor-dpms,Fail
-kms_cursor_crc@cursor-offscreen-128x128,Fail
-kms_cursor_crc@cursor-offscreen-128x42,Fail
-kms_cursor_crc@cursor-offscreen-256x256,Fail
-kms_cursor_crc@cursor-offscreen-256x85,Fail
-kms_cursor_crc@cursor-offscreen-32x10,Fail
-kms_cursor_crc@cursor-offscreen-32x32,Fail
-kms_cursor_crc@cursor-offscreen-512x170,Fail
-kms_cursor_crc@cursor-offscreen-512x512,Fail
-kms_cursor_crc@cursor-offscreen-64x21,Fail
-kms_cursor_crc@cursor-offscreen-64x64,Fail
-kms_cursor_crc@cursor-onscreen-128x128,Fail
-kms_cursor_crc@cursor-onscreen-128x42,Fail
-kms_cursor_crc@cursor-onscreen-256x256,Fail
-kms_cursor_crc@cursor-onscreen-256x85,Fail
-kms_cursor_crc@cursor-onscreen-32x10,Fail
-kms_cursor_crc@cursor-onscreen-32x32,Fail
-kms_cursor_crc@cursor-onscreen-512x170,Fail
-kms_cursor_crc@cursor-onscreen-512x512,Fail
-kms_cursor_crc@cursor-onscreen-64x21,Fail
-kms_cursor_crc@cursor-onscreen-64x64,Fail
-kms_cursor_crc@cursor-random-128x128,Fail
-kms_cursor_crc@cursor-random-128x42,Fail
-kms_cursor_crc@cursor-random-256x256,Fail
-kms_cursor_crc@cursor-random-256x85,Fail
-kms_cursor_crc@cursor-random-32x10,Fail
-kms_cursor_crc@cursor-random-32x32,Fail
-kms_cursor_crc@cursor-random-512x170,Fail
-kms_cursor_crc@cursor-random-512x512,Fail
-kms_cursor_crc@cursor-random-64x21,Fail
-kms_cursor_crc@cursor-random-64x64,Fail
-kms_cursor_crc@cursor-rapid-movement-128x128,Fail
-kms_cursor_crc@cursor-rapid-movement-128x42,Fail
-kms_cursor_crc@cursor-rapid-movement-256x256,Fail
-kms_cursor_crc@cursor-rapid-movement-256x85,Fail
-kms_cursor_crc@cursor-rapid-movement-32x10,Fail
-kms_cursor_crc@cursor-rapid-movement-32x32,Fail
-kms_cursor_crc@cursor-rapid-movement-512x170,Fail
-kms_cursor_crc@cursor-rapid-movement-512x512,Fail
-kms_cursor_crc@cursor-rapid-movement-64x21,Fail
-kms_cursor_crc@cursor-rapid-movement-64x64,Fail
-kms_cursor_crc@cursor-size-change,Fail
-kms_cursor_crc@cursor-sliding-128x128,Fail
-kms_cursor_crc@cursor-sliding-128x42,Fail
-kms_cursor_crc@cursor-sliding-256x256,Fail
-kms_cursor_crc@cursor-sliding-256x85,Fail
-kms_cursor_crc@cursor-sliding-32x10,Fail
-kms_cursor_crc@cursor-sliding-32x32,Fail
-kms_cursor_crc@cursor-sliding-512x170,Fail
-kms_cursor_crc@cursor-sliding-512x512,Fail
-kms_cursor_crc@cursor-sliding-64x21,Fail
-kms_cursor_crc@cursor-sliding-64x64,Fail
-kms_cursor_edge_walk@128x128-left-edge,Fail
-kms_cursor_edge_walk@128x128-right-edge,Fail
-kms_cursor_edge_walk@128x128-top-bottom,Fail
-kms_cursor_edge_walk@128x128-top-edge,Fail
-kms_cursor_edge_walk@256x256-left-edge,Fail
-kms_cursor_edge_walk@256x256-right-edge,Fail
-kms_cursor_edge_walk@256x256-top-bottom,Fail
-kms_cursor_edge_walk@256x256-top-edge,Fail
-kms_cursor_edge_walk@64x64-left-edge,Fail
-kms_cursor_edge_walk@64x64-right-edge,Fail
-kms_cursor_edge_walk@64x64-top-bottom,Fail
-kms_cursor_edge_walk@64x64-top-edge,Fail
 kms_cursor_legacy@2x-cursor-vs-flip-atomic,Fail
 kms_cursor_legacy@2x-cursor-vs-flip-legacy,Fail
 kms_cursor_legacy@2x-flip-vs-cursor-atomic,Fail
···
 kms_display_modes@extended-mode-basic,Fail
 kms_flip@2x-flip-vs-modeset-vs-hang,Fail
 kms_flip@2x-flip-vs-panning-vs-hang,Fail
-kms_flip@absolute-wf_vblank,Fail
-kms_flip@absolute-wf_vblank-interruptible,Fail
-kms_flip@basic-flip-vs-wf_vblank,Fail
-kms_flip@basic-plain-flip,Fail
-kms_flip@blocking-absolute-wf_vblank,Fail
-kms_flip@blocking-absolute-wf_vblank-interruptible,Fail
-kms_flip@blocking-wf_vblank,Fail
-kms_flip@busy-flip,Fail
-kms_flip@dpms-off-confusion,Fail
-kms_flip@dpms-off-confusion-interruptible,Fail
-kms_flip@dpms-vs-vblank-race,Fail
-kms_flip@dpms-vs-vblank-race-interruptible,Fail
-kms_flip@flip-vs-absolute-wf_vblank,Fail
-kms_flip@flip-vs-absolute-wf_vblank-interruptible,Fail
-kms_flip@flip-vs-blocking-wf-vblank,Fail
-kms_flip@flip-vs-expired-vblank,Fail
-kms_flip@flip-vs-expired-vblank-interruptible,Fail
 kms_flip@flip-vs-modeset-vs-hang,Fail
-kms_flip@flip-vs-panning,Fail
-kms_flip@flip-vs-panning-interruptible,Fail
 kms_flip@flip-vs-panning-vs-hang,Fail
-kms_flip@flip-vs-rmfb,Fail
-kms_flip@flip-vs-rmfb-interruptible,Fail
-kms_flip@flip-vs-wf_vblank-interruptible,Fail
-kms_flip@modeset-vs-vblank-race,Fail
-kms_flip@modeset-vs-vblank-race-interruptible,Fail
-kms_flip@plain-flip-fb-recreate,Fail
-kms_flip@plain-flip-fb-recreate-interruptible,Fail
-kms_flip@plain-flip-interruptible,Fail
-kms_flip@plain-flip-ts-check,Fail
-kms_flip@plain-flip-ts-check-interruptible,Fail
-kms_flip@wf_vblank-ts-check,Fail
-kms_flip@wf_vblank-ts-check-interruptible,Fail
-kms_lease@cursor-implicit-plane,Fail
 kms_lease@lease-uevent,Fail
-kms_lease@page-flip-implicit-plane,Fail
-kms_lease@setcrtc-implicit-plane,Fail
-kms_lease@simple-lease,Fail
 kms_multipipe_modeset@basic-max-pipe-crc-check,Fail
 kms_pipe_crc_basic@compare-crc-sanitycheck-nv12,Fail
-kms_pipe_crc_basic@compare-crc-sanitycheck-xr24,Fail
-kms_pipe_crc_basic@disable-crc-after-crtc,Fail
-kms_pipe_crc_basic@nonblocking-crc,Fail
-kms_pipe_crc_basic@nonblocking-crc-frame-sequence,Fail
-kms_pipe_crc_basic@read-crc,Fail
-kms_pipe_crc_basic@read-crc-frame-sequence,Fail
-kms_plane@pixel-format,Fail
-kms_plane@pixel-format-source-clamping,Fail
-kms_plane@plane-panning-bottom-right,Fail
-kms_plane@plane-panning-top-left,Fail
-kms_plane@plane-position-covered,Fail
-kms_plane@plane-position-hole,Fail
-kms_plane@plane-position-hole-dpms,Fail
 kms_plane_alpha_blend@alpha-7efc,Fail
-kms_plane_alpha_blend@alpha-basic,Fail
-kms_plane_alpha_blend@alpha-opaque-fb,Fail
-kms_plane_alpha_blend@alpha-transparent-fb,Fail
-kms_plane_alpha_blend@constant-alpha-max,Fail
-kms_plane_alpha_blend@constant-alpha-mid,Fail
-kms_plane_alpha_blend@constant-alpha-min,Fail
 kms_plane_alpha_blend@coverage-7efc,Fail
 kms_plane_alpha_blend@coverage-vs-premult-vs-constant,Fail
-kms_plane_cursor@primary,Fail
 kms_plane_lowres@tiling-none,Fail
-kms_plane_multiple@tiling-none,Fail
 kms_rmfb@close-fd,Fail
-kms_rotation_crc@cursor-rotation-180,Fail
-kms_rotation_crc@primary-rotation-180,Fail
-kms_sequence@get-busy,Fail
-kms_sequence@get-forked,Fail
-kms_sequence@get-forked-busy,Fail
-kms_sequence@get-idle,Fail
-kms_sequence@queue-busy,Fail
-kms_sequence@queue-idle,Fail
-kms_vblank@accuracy-idle,Fail
-kms_vblank@crtc-id,Fail
-kms_vblank@query-busy,Fail
-kms_vblank@query-forked,Fail
-kms_vblank@query-forked-busy,Fail
-kms_vblank@query-idle,Fail
 kms_vblank@ts-continuation-dpms-rpm,Fail
-kms_vblank@ts-continuation-idle,Fail
-kms_vblank@ts-continuation-modeset,Fail
-kms_vblank@ts-continuation-modeset-rpm,Fail
-kms_vblank@wait-busy,Fail
-kms_vblank@wait-forked,Fail
-kms_vblank@wait-forked-busy,Fail
-kms_vblank@wait-idle,Fail
 tools_test@tools_test,Fail
+15 -3
drivers/gpu/drm/ci/xfails/msm-sc7180-trogdor-kingoftown-flakes.txt
···
 # Board Name: sc7180-trogdor-kingoftown
 # Bug Report: https://lore.kernel.org/linux-arm-msm/661483c8-ad82-400d-bcd8-e94986d20d7d@collabora.com/T/#u
+# Failure Rate: 50
 # IGT Version: 1.28-g0df7b9b97
 # Linux Version: 6.9.0-rc7
+msm/msm_mapping@shadow
+
+# Board Name: sc7180-trogdor-kingoftown
+# Bug Report: https://lore.kernel.org/linux-arm-msm/661483c8-ad82-400d-bcd8-e94986d20d7d@collabora.com/T/#u
 # Failure Rate: 50
-msm_mapping@shadow
-msm_shrink@copy-gpu-oom-32
-msm_shrink@copy-gpu-oom-8
+# IGT Version: 1.28-g0df7b9b97
+# Linux Version: 6.9.0-rc7
+msm/msm_shrink@copy-gpu-oom-32
+
+# Board Name: sc7180-trogdor-kingoftown
+# Bug Report: https://lore.kernel.org/linux-arm-msm/661483c8-ad82-400d-bcd8-e94986d20d7d@collabora.com/T/#u
+# Failure Rate: 50
+# IGT Version: 1.28-g0df7b9b97
+# Linux Version: 6.9.0-rc7
+msm/msm_shrink@copy-gpu-oom-8
+4 -1
drivers/gpu/drm/ci/xfails/msm-sc7180-trogdor-kingoftown-skips.txt
···
 # Skip driver specific tests
 ^amdgpu.*
 nouveau_.*
-panfrost_.*
+^panfrost.*
 ^v3d.*
 ^vc4.*
 ^vmwgfx*
···
 
 # Timeout occurs
 kms_flip@2x-wf_vblank-ts-check
+
+# Hangs the machine
+kms_cursor_crc@cursor-random-max-size
-145
drivers/gpu/drm/ci/xfails/msm-sc7180-trogdor-lazor-limozeen-fails.txt
···
 device_reset@unbind-cold-reset-rebind,Fail
 device_reset@unbind-reset-rebind,Fail
 dumb_buffer@invalid-bpp,Fail
-kms_atomic_transition@plane-primary-toggle-with-vblank-wait,Fail
 kms_color@ctm-0-25,Fail
 kms_color@ctm-0-50,Fail
 kms_color@ctm-0-75,Fail
 kms_color@ctm-blue-to-red,Fail
 kms_color@ctm-green-to-red,Fail
-kms_color@ctm-max,Fail
 kms_color@ctm-negative,Fail
 kms_color@ctm-red-to-blue,Fail
 kms_color@ctm-signed,Fail
···
 kms_content_protection@srm,Crash
 kms_content_protection@type1,Crash
 kms_content_protection@uevent,Crash
-kms_cursor_crc@cursor-alpha-opaque,Fail
-kms_cursor_crc@cursor-alpha-transparent,Fail
-kms_cursor_crc@cursor-dpms,Fail
-kms_cursor_crc@cursor-offscreen-128x128,Fail
-kms_cursor_crc@cursor-offscreen-128x42,Fail
-kms_cursor_crc@cursor-offscreen-256x256,Fail
-kms_cursor_crc@cursor-offscreen-256x85,Fail
-kms_cursor_crc@cursor-offscreen-32x10,Fail
-kms_cursor_crc@cursor-offscreen-32x32,Fail
-kms_cursor_crc@cursor-offscreen-512x170,Fail
-kms_cursor_crc@cursor-offscreen-512x512,Fail
-kms_cursor_crc@cursor-offscreen-64x21,Fail
-kms_cursor_crc@cursor-offscreen-64x64,Fail
-kms_cursor_crc@cursor-onscreen-128x128,Fail
-kms_cursor_crc@cursor-onscreen-128x42,Fail
-kms_cursor_crc@cursor-onscreen-256x256,Fail
-kms_cursor_crc@cursor-onscreen-256x85,Fail
-kms_cursor_crc@cursor-onscreen-32x10,Fail
-kms_cursor_crc@cursor-onscreen-32x32,Fail
-kms_cursor_crc@cursor-onscreen-512x170,Fail
-kms_cursor_crc@cursor-onscreen-512x512,Fail
-kms_cursor_crc@cursor-onscreen-64x21,Fail
-kms_cursor_crc@cursor-onscreen-64x64,Fail
-kms_cursor_crc@cursor-random-128x128,Fail
-kms_cursor_crc@cursor-random-128x42,Fail
-kms_cursor_crc@cursor-random-256x256,Fail
-kms_cursor_crc@cursor-random-256x85,Fail
-kms_cursor_crc@cursor-random-32x10,Fail
-kms_cursor_crc@cursor-random-32x32,Fail
-kms_cursor_crc@cursor-random-512x170,Fail
-kms_cursor_crc@cursor-random-512x512,Fail
-kms_cursor_crc@cursor-random-64x21,Fail
-kms_cursor_crc@cursor-random-64x64,Fail
-kms_cursor_crc@cursor-rapid-movement-128x128,Fail
-kms_cursor_crc@cursor-rapid-movement-128x42,Fail
-kms_cursor_crc@cursor-rapid-movement-256x256,Fail
-kms_cursor_crc@cursor-rapid-movement-256x85,Fail
-kms_cursor_crc@cursor-rapid-movement-32x10,Fail
-kms_cursor_crc@cursor-rapid-movement-32x32,Fail
-kms_cursor_crc@cursor-rapid-movement-512x170,Fail
-kms_cursor_crc@cursor-rapid-movement-512x512,Fail
-kms_cursor_crc@cursor-rapid-movement-64x21,Fail
-kms_cursor_crc@cursor-rapid-movement-64x64,Fail
-kms_cursor_crc@cursor-size-change,Fail
-kms_cursor_crc@cursor-sliding-128x128,Fail
-kms_cursor_crc@cursor-sliding-128x42,Fail
-kms_cursor_crc@cursor-sliding-256x256,Fail
-kms_cursor_crc@cursor-sliding-256x85,Fail
-kms_cursor_crc@cursor-sliding-32x10,Fail
-kms_cursor_crc@cursor-sliding-32x32,Fail
-kms_cursor_crc@cursor-sliding-512x170,Fail
-kms_cursor_crc@cursor-sliding-512x512,Fail
-kms_cursor_crc@cursor-sliding-64x21,Fail
-kms_cursor_crc@cursor-sliding-64x64,Fail
-kms_cursor_edge_walk@128x128-left-edge,Fail
-kms_cursor_edge_walk@128x128-right-edge,Fail
-kms_cursor_edge_walk@128x128-top-bottom,Fail
-kms_cursor_edge_walk@128x128-top-edge,Fail
-kms_cursor_edge_walk@256x256-left-edge,Fail
-kms_cursor_edge_walk@256x256-right-edge,Fail
-kms_cursor_edge_walk@256x256-top-bottom,Fail
-kms_cursor_edge_walk@256x256-top-edge,Fail
-kms_cursor_edge_walk@64x64-left-edge,Fail
-kms_cursor_edge_walk@64x64-right-edge,Fail
-kms_cursor_edge_walk@64x64-top-bottom,Fail
-kms_cursor_edge_walk@64x64-top-edge,Fail
 kms_cursor_legacy@2x-cursor-vs-flip-atomic,Fail
 kms_cursor_legacy@2x-cursor-vs-flip-legacy,Fail
 kms_cursor_legacy@2x-flip-vs-cursor-atomic,Fail
···
 kms_display_modes@extended-mode-basic,Fail
 kms_flip@2x-flip-vs-modeset-vs-hang,Fail
 kms_flip@2x-flip-vs-panning-vs-hang,Fail
-kms_flip@absolute-wf_vblank,Fail
-kms_flip@absolute-wf_vblank-interruptible,Fail
-kms_flip@basic-flip-vs-wf_vblank,Fail
-kms_flip@basic-plain-flip,Fail
-kms_flip@blocking-absolute-wf_vblank,Fail
-kms_flip@blocking-absolute-wf_vblank-interruptible,Fail
-kms_flip@blocking-wf_vblank,Fail
-kms_flip@busy-flip,Fail
-kms_flip@dpms-off-confusion,Fail
-kms_flip@dpms-off-confusion-interruptible,Fail
-kms_flip@dpms-vs-vblank-race,Fail
-kms_flip@dpms-vs-vblank-race-interruptible,Fail
-kms_flip@flip-vs-absolute-wf_vblank,Fail
-kms_flip@flip-vs-absolute-wf_vblank-interruptible,Fail
-kms_flip@flip-vs-blocking-wf-vblank,Fail
-kms_flip@flip-vs-expired-vblank,Fail
-kms_flip@flip-vs-expired-vblank-interruptible,Fail
 kms_flip@flip-vs-modeset-vs-hang,Fail
-kms_flip@flip-vs-panning,Fail
-kms_flip@flip-vs-panning-interruptible,Fail
 kms_flip@flip-vs-panning-vs-hang,Fail
-kms_flip@flip-vs-rmfb,Fail
-kms_flip@flip-vs-rmfb-interruptible,Fail
-kms_flip@flip-vs-wf_vblank-interruptible,Fail
-kms_flip@modeset-vs-vblank-race,Fail
-kms_flip@modeset-vs-vblank-race-interruptible,Fail
-kms_flip@plain-flip-fb-recreate,Fail
-kms_flip@plain-flip-fb-recreate-interruptible,Fail
-kms_flip@plain-flip-interruptible,Fail
-kms_flip@plain-flip-ts-check,Fail
-kms_flip@plain-flip-ts-check-interruptible,Fail
-kms_flip@wf_vblank-ts-check,Fail
-kms_flip@wf_vblank-ts-check-interruptible,Fail
-kms_lease@cursor-implicit-plane,Fail
 kms_lease@lease-uevent,Fail
-kms_lease@page-flip-implicit-plane,Fail
-kms_lease@setcrtc-implicit-plane,Fail
-kms_lease@simple-lease,Fail
 kms_multipipe_modeset@basic-max-pipe-crc-check,Fail
 kms_pipe_crc_basic@compare-crc-sanitycheck-nv12,Fail
-kms_pipe_crc_basic@compare-crc-sanitycheck-xr24,Fail
-kms_pipe_crc_basic@disable-crc-after-crtc,Fail
-kms_pipe_crc_basic@nonblocking-crc,Fail
-kms_pipe_crc_basic@nonblocking-crc-frame-sequence,Fail
-kms_pipe_crc_basic@read-crc,Fail
-kms_pipe_crc_basic@read-crc-frame-sequence,Fail
-kms_plane@pixel-format,Fail
-kms_plane@pixel-format-source-clamping,Fail
-kms_plane@plane-panning-bottom-right,Fail
-kms_plane@plane-panning-top-left,Fail
-kms_plane@plane-position-covered,Fail
-kms_plane@plane-position-hole,Fail
-kms_plane@plane-position-hole-dpms,Fail
 kms_plane_alpha_blend@alpha-7efc,Fail
-kms_plane_alpha_blend@alpha-basic,Fail
-kms_plane_alpha_blend@alpha-opaque-fb,Fail
-kms_plane_alpha_blend@alpha-transparent-fb,Fail
-kms_plane_alpha_blend@constant-alpha-max,Fail
-kms_plane_alpha_blend@constant-alpha-mid,Fail
-kms_plane_alpha_blend@constant-alpha-min,Fail
 kms_plane_alpha_blend@coverage-7efc,Fail
 kms_plane_alpha_blend@coverage-vs-premult-vs-constant,Fail
-kms_plane_cursor@primary,Fail
 kms_plane_lowres@tiling-none,Fail
-kms_plane_multiple@tiling-none,Fail
 kms_rmfb@close-fd,Fail
-kms_rotation_crc@cursor-rotation-180,Fail
-kms_rotation_crc@primary-rotation-180,Fail
-kms_sequence@get-busy,Fail
-kms_sequence@get-forked,Fail
-kms_sequence@get-forked-busy,Fail
-kms_sequence@get-idle,Fail
-kms_sequence@queue-busy,Fail
-kms_sequence@queue-idle,Fail
-kms_vblank@accuracy-idle,Fail
-kms_vblank@crtc-id,Fail
-kms_vblank@query-busy,Fail
-kms_vblank@query-forked,Fail
-kms_vblank@query-forked-busy,Fail
-kms_vblank@query-idle,Fail
 kms_vblank@ts-continuation-dpms-rpm,Fail
-kms_vblank@ts-continuation-idle,Fail
-kms_vblank@ts-continuation-modeset,Fail
-kms_vblank@ts-continuation-modeset-rpm,Fail
-kms_vblank@wait-busy,Fail
-kms_vblank@wait-forked,Fail
-kms_vblank@wait-forked-busy,Fail
-kms_vblank@wait-idle,Fail
 tools_test@tools_test,Fail
+9 -2
drivers/gpu/drm/ci/xfails/msm-sc7180-trogdor-lazor-limozeen-flakes.txt
···
 # Board Name: sc7180-trogdor-lazor-limozeen-nots-r5
 # Bug Report: https://lore.kernel.org/linux-arm-msm/661483c8-ad82-400d-bcd8-e94986d20d7d@collabora.com/T/#u
+# Failure Rate: 50
 # IGT Version: 1.28-g0df7b9b97
 # Linux Version: 6.9.0-rc7
-# Failure Rate: 50
-msm_mapping@shadow
+msm/msm_mapping@shadow
+
+# Board Name: sc7180-trogdor-lazor-limozeen-nots-r5
+# Bug Report: https://lore.kernel.org/linux-arm-msm/661483c8-ad82-400d-bcd8-e94986d20d7d@collabora.com/T/#u
+# Failure Rate: 100
+# IGT Version: 1.28-gf13702b8e
+# Linux Version: 6.10.0-rc5
+kms_lease@page-flip-implicit-plane
+1 -1
drivers/gpu/drm/ci/xfails/msm-sc7180-trogdor-lazor-limozeen-skips.txt
···
 # Skip driver specific tests
 ^amdgpu.*
 nouveau_.*
-panfrost_.*
+^panfrost.*
 ^v3d.*
 ^vc4.*
 ^vmwgfx*
+102 -3
drivers/gpu/drm/ci/xfails/msm-sdm845-flakes.txt
···
 # Board Name: sdm845-cheza-r3
 # Bug Report: https://lore.kernel.org/linux-arm-msm/661483c8-ad82-400d-bcd8-e94986d20d7d@collabora.com/T/#u
+# Failure Rate: 50
 # IGT Version: 1.28-g0df7b9b97
 # Linux Version: 6.9.0-rc7
-# Failure Rate: 50
 kms_cursor_legacy@basic-flip-after-cursor-atomic
+
+# Board Name: sdm845-cheza-r3
+# Bug Report: https://lore.kernel.org/linux-arm-msm/661483c8-ad82-400d-bcd8-e94986d20d7d@collabora.com/T/#u
+# Failure Rate: 50
+# IGT Version: 1.28-g0df7b9b97
+# Linux Version: 6.9.0-rc7
 kms_cursor_legacy@basic-flip-after-cursor-legacy
+
+# Board Name: sdm845-cheza-r3
+# Bug Report: https://lore.kernel.org/linux-arm-msm/661483c8-ad82-400d-bcd8-e94986d20d7d@collabora.com/T/#u
+# Failure Rate: 50
+# IGT Version: 1.28-g0df7b9b97
+# Linux Version: 6.9.0-rc7
 kms_cursor_legacy@basic-flip-after-cursor-varying-size
+
+# Board Name: sdm845-cheza-r3
+# Bug Report: https://lore.kernel.org/linux-arm-msm/661483c8-ad82-400d-bcd8-e94986d20d7d@collabora.com/T/#u
+# Failure Rate: 50
+# IGT Version: 1.28-g0df7b9b97
+# Linux Version: 6.9.0-rc7
 kms_cursor_legacy@basic-flip-before-cursor-varying-size
+
+# Board Name: sdm845-cheza-r3
+# Bug Report: https://lore.kernel.org/linux-arm-msm/661483c8-ad82-400d-bcd8-e94986d20d7d@collabora.com/T/#u
+# Failure Rate: 50
+# IGT Version: 1.28-g0df7b9b97
+# Linux Version: 6.9.0-rc7
 kms_cursor_legacy@flip-vs-cursor-atomic-transitions
+
+# Board Name: sdm845-cheza-r3
+# Bug Report: https://lore.kernel.org/linux-arm-msm/661483c8-ad82-400d-bcd8-e94986d20d7d@collabora.com/T/#u
+# Failure Rate: 50
+# IGT Version: 1.28-g0df7b9b97
+# Linux Version: 6.9.0-rc7
 kms_cursor_legacy@flip-vs-cursor-atomic-transitions-varying-size
+
+# Board Name: sdm845-cheza-r3
+# Bug Report: https://lore.kernel.org/linux-arm-msm/661483c8-ad82-400d-bcd8-e94986d20d7d@collabora.com/T/#u
+# Failure Rate: 50
+# IGT Version: 1.28-g0df7b9b97
+# Linux Version: 6.9.0-rc7
 kms_cursor_legacy@flip-vs-cursor-varying-size
+
+# Board Name: sdm845-cheza-r3
+# Bug Report: https://lore.kernel.org/linux-arm-msm/661483c8-ad82-400d-bcd8-e94986d20d7d@collabora.com/T/#u
+# Failure Rate: 50
+# IGT Version: 1.28-g0df7b9b97
+# Linux Version: 6.9.0-rc7
 kms_cursor_legacy@short-flip-after-cursor-atomic-transitions
+
+# Board Name: sdm845-cheza-r3
+# Bug Report: https://lore.kernel.org/linux-arm-msm/661483c8-ad82-400d-bcd8-e94986d20d7d@collabora.com/T/#u
+# Failure Rate: 50
+# IGT Version: 1.28-g0df7b9b97
+# Linux Version: 6.9.0-rc7
 kms_cursor_legacy@short-flip-after-cursor-atomic-transitions-varying-size
+
+# Board Name: sdm845-cheza-r3
+# Bug Report: https://lore.kernel.org/linux-arm-msm/661483c8-ad82-400d-bcd8-e94986d20d7d@collabora.com/T/#u
+# Failure Rate: 50
+# IGT Version: 1.28-g0df7b9b97
+# Linux Version: 6.9.0-rc7
 kms_cursor_legacy@short-flip-after-cursor-toggle
+
+# Board Name: sdm845-cheza-r3
+# Bug Report: https://lore.kernel.org/linux-arm-msm/661483c8-ad82-400d-bcd8-e94986d20d7d@collabora.com/T/#u
+# Failure Rate: 50
+# IGT Version: 1.28-g0df7b9b97
+# Linux Version: 6.9.0-rc7
 kms_cursor_legacy@short-flip-before-cursor-atomic-transitions
+
+# Board Name: sdm845-cheza-r3
+# Bug Report: https://lore.kernel.org/linux-arm-msm/661483c8-ad82-400d-bcd8-e94986d20d7d@collabora.com/T/#u
+# Failure Rate: 50
+# IGT Version: 1.28-g0df7b9b97
+# Linux Version: 6.9.0-rc7
 kms_cursor_legacy@short-flip-before-cursor-atomic-transitions-varying-size
-msm_shrink@copy-gpu-32
-msm_shrink@copy-gpu-oom-32
+
+# Board Name: sdm845-cheza-r3
+# Bug Report: https://lore.kernel.org/linux-arm-msm/661483c8-ad82-400d-bcd8-e94986d20d7d@collabora.com/T/#u
+# Failure Rate: 50
+# IGT Version: 1.28-g0df7b9b97
+# Linux Version: 6.9.0-rc7
+msm/msm_shrink@copy-gpu-32
+
+# Board Name: sdm845-cheza-r3
+# Bug Report: https://lore.kernel.org/linux-arm-msm/661483c8-ad82-400d-bcd8-e94986d20d7d@collabora.com/T/#u
+# Failure Rate: 50
+# IGT Version: 1.28-g0df7b9b97
+# Linux Version: 6.9.0-rc7
+msm/msm_shrink@copy-gpu-oom-32
+
+# Board Name: sdm845-cheza-r3
+# Bug Report: https://lore.kernel.org/linux-arm-msm/661483c8-ad82-400d-bcd8-e94986d20d7d@collabora.com/T/#u
+# Failure Rate: 50
+# IGT Version: 1.28-gf13702b8e
+# Linux Version: 6.10.0-rc5
+kms_cursor_legacy@short-flip-before-cursor-toggle
+
+# Board Name: sdm845-cheza-r3
+# Bug Report: https://lore.kernel.org/linux-arm-msm/661483c8-ad82-400d-bcd8-e94986d20d7d@collabora.com/T/#u
+# Failure Rate: 50
+# IGT Version: 1.28-gf13702b8e
+# Linux Version: 6.10.0-rc5
+kms_cursor_legacy@flip-vs-cursor-toggle
+
+# Board Name: sdm845-cheza-r3
+# Bug Report: https://lore.kernel.org/linux-arm-msm/661483c8-ad82-400d-bcd8-e94986d20d7d@collabora.com/T/#u
+# Failure Rate: 50
+# IGT Version: 1.28-gf13702b8e
+# Linux Version: 6.10.0-rc5
+msm/msm_shrink@copy-mmap-oom-8
+2 -2
drivers/gpu/drm/ci/xfails/msm-sdm845-skips.txt
···
 # Failing due to a bootloader/fw issue. The workaround in mesa CI involves these two patches
 # https://gitlab.freedesktop.org/gfx-ci/linux/-/commit/4b49f902ec6f2bb382cbbf489870573f4b43371e
 # https://gitlab.freedesktop.org/gfx-ci/linux/-/commit/38cdf4c5559771e2474ae0fecef8469f65147bc1
-msm_mapping@*
+msm/msm_mapping@*
 
 # Skip driver specific tests
 ^amdgpu.*
 nouveau_.*
-panfrost_.*
+^panfrost.*
 ^v3d.*
 ^vc4.*
 ^vmwgfx*
+1 -1
drivers/gpu/drm/ci/xfails/rockchip-rk3288-fails.txt
···
 dumb_buffer@map-invalid-size,Crash
 dumb_buffer@map-uaf,Crash
 dumb_buffer@map-valid,Crash
-panfrost_prime@gem-prime-import,Crash
+panfrost/panfrost_prime@gem-prime-import,Crash
 tools_test@tools_test,Crash
+1 -1
drivers/gpu/drm/ci/xfails/rockchip-rk3288-skips.txt
···
 
 # Skip driver specific tests
 ^amdgpu.*
-msm_.*
+^msm.*
 nouveau_.*
 ^v3d.*
 ^vc4.*
+1 -1
drivers/gpu/drm/ci/xfails/rockchip-rk3399-fails.txt
···
 dumb_buffer@map-invalid-size,Fail
 dumb_buffer@map-uaf,Fail
 dumb_buffer@map-valid,Fail
-panfrost_prime@gem-prime-import,Fail
+panfrost/panfrost_prime@gem-prime-import,Fail
 tools_test@tools_test,Fail
+2 -2
drivers/gpu/drm/ci/xfails/rockchip-rk3399-flakes.txt
···
 # Board Name: rk3399-gru-kevin
 # Bug Report: https://lore.kernel.org/dri-devel/5cc34a8b-c1fa-4744-9031-2d33ecf41011@collabora.com/T/#u
+# Failure Rate: 50
 # IGT Version: 1.28-g0df7b9b97
 # Linux Version: 6.9.0-rc7
-# Failure Rate: 50
-panfrost_submit@pan-unhandled-pagefault
+panfrost/panfrost_submit@pan-unhandled-pagefault
+1 -1
drivers/gpu/drm/ci/xfails/rockchip-rk3399-skips.txt
···
 
 # Skip driver specific tests
 ^amdgpu.*
-msm_.*
+^msm.*
 nouveau_.*
 ^v3d.*
 ^vc4.*
+64
drivers/gpu/drm/ci/xfails/virtio_gpu-none-fails.txt
···
 kms_addfb_basic@size-max,Fail
 kms_addfb_basic@too-high,Fail
 kms_atomic_transition@plane-primary-toggle-with-vblank-wait,Fail
+kms_bw@connected-linear-tiling-1-displays-1920x1080p,Fail
+kms_bw@connected-linear-tiling-1-displays-2160x1440p,Fail
+kms_bw@connected-linear-tiling-1-displays-2560x1440p,Fail
+kms_bw@connected-linear-tiling-1-displays-3840x2160p,Fail
+kms_bw@connected-linear-tiling-10-displays-1920x1080p,Fail
+kms_bw@connected-linear-tiling-10-displays-2160x1440p,Fail
+kms_bw@connected-linear-tiling-10-displays-2560x1440p,Fail
+kms_bw@connected-linear-tiling-10-displays-3840x2160p,Fail
+kms_bw@connected-linear-tiling-11-displays-1920x1080p,Fail
+kms_bw@connected-linear-tiling-11-displays-2160x1440p,Fail
+kms_bw@connected-linear-tiling-11-displays-2560x1440p,Fail
+kms_bw@connected-linear-tiling-11-displays-3840x2160p,Fail
+kms_bw@connected-linear-tiling-12-displays-1920x1080p,Fail
+kms_bw@connected-linear-tiling-12-displays-2160x1440p,Fail
+kms_bw@connected-linear-tiling-12-displays-2560x1440p,Fail
+kms_bw@connected-linear-tiling-12-displays-3840x2160p,Fail
+kms_bw@connected-linear-tiling-13-displays-1920x1080p,Fail
+kms_bw@connected-linear-tiling-13-displays-2160x1440p,Fail
+kms_bw@connected-linear-tiling-13-displays-2560x1440p,Fail
+kms_bw@connected-linear-tiling-13-displays-3840x2160p,Fail
+kms_bw@connected-linear-tiling-14-displays-1920x1080p,Fail
+kms_bw@connected-linear-tiling-14-displays-2160x1440p,Fail
+kms_bw@connected-linear-tiling-14-displays-2560x1440p,Fail
+kms_bw@connected-linear-tiling-14-displays-3840x2160p,Fail
+kms_bw@connected-linear-tiling-15-displays-1920x1080p,Fail
+kms_bw@connected-linear-tiling-15-displays-2160x1440p,Fail
+kms_bw@connected-linear-tiling-15-displays-2560x1440p,Fail
+kms_bw@connected-linear-tiling-15-displays-3840x2160p,Fail
+kms_bw@connected-linear-tiling-16-displays-1920x1080p,Fail
+kms_bw@connected-linear-tiling-16-displays-2160x1440p,Fail
+kms_bw@connected-linear-tiling-16-displays-2560x1440p,Fail
+kms_bw@connected-linear-tiling-16-displays-3840x2160p,Fail
+kms_bw@connected-linear-tiling-2-displays-1920x1080p,Fail
+kms_bw@connected-linear-tiling-2-displays-2160x1440p,Fail
+kms_bw@connected-linear-tiling-2-displays-2560x1440p,Fail
+kms_bw@connected-linear-tiling-2-displays-3840x2160p,Fail
+kms_bw@connected-linear-tiling-3-displays-1920x1080p,Fail
+kms_bw@connected-linear-tiling-3-displays-2160x1440p,Fail
+kms_bw@connected-linear-tiling-3-displays-2560x1440p,Fail
+kms_bw@connected-linear-tiling-3-displays-3840x2160p,Fail
+kms_bw@connected-linear-tiling-4-displays-1920x1080p,Fail
+kms_bw@connected-linear-tiling-4-displays-2160x1440p,Fail
+kms_bw@connected-linear-tiling-4-displays-2560x1440p,Fail
+kms_bw@connected-linear-tiling-4-displays-3840x2160p,Fail
+kms_bw@connected-linear-tiling-5-displays-1920x1080p,Fail
+kms_bw@connected-linear-tiling-5-displays-2160x1440p,Fail
+kms_bw@connected-linear-tiling-5-displays-2560x1440p,Fail
+kms_bw@connected-linear-tiling-5-displays-3840x2160p,Fail
+kms_bw@connected-linear-tiling-6-displays-1920x1080p,Fail
+kms_bw@connected-linear-tiling-6-displays-2160x1440p,Fail
+kms_bw@connected-linear-tiling-6-displays-2560x1440p,Fail
+kms_bw@connected-linear-tiling-6-displays-3840x2160p,Fail
+kms_bw@connected-linear-tiling-7-displays-1920x1080p,Fail
+kms_bw@connected-linear-tiling-7-displays-2160x1440p,Fail
+kms_bw@connected-linear-tiling-7-displays-2560x1440p,Fail
+kms_bw@connected-linear-tiling-7-displays-3840x2160p,Fail
+kms_bw@connected-linear-tiling-8-displays-1920x1080p,Fail
+kms_bw@connected-linear-tiling-8-displays-2160x1440p,Fail
+kms_bw@connected-linear-tiling-8-displays-2560x1440p,Fail
+kms_bw@connected-linear-tiling-8-displays-3840x2160p,Fail
+kms_bw@connected-linear-tiling-9-displays-1920x1080p,Fail
+kms_bw@connected-linear-tiling-9-displays-2160x1440p,Fail
+kms_bw@connected-linear-tiling-9-displays-2560x1440p,Fail
+kms_bw@connected-linear-tiling-9-displays-3840x2160p,Fail
 kms_bw@linear-tiling-1-displays-1920x1080p,Fail
 kms_bw@linear-tiling-1-displays-2160x1440p,Fail
 kms_bw@linear-tiling-1-displays-2560x1440p,Fail
+2 -2
drivers/gpu/drm/ci/xfails/virtio_gpu-none-skips.txt
···
 
 # Skip driver specific tests
 ^amdgpu.*
-msm_.*
+^msm.*
 nouveau_.*
-panfrost_.*
+^panfrost.*
 ^v3d.*
 ^vc4.*
 ^vmwgfx*
-4
drivers/gpu/drm/ci/xfails/vkms-none-fails.txt
···
 kms_cursor_legacy@flip-vs-cursor-legacy,Fail
 kms_flip@flip-vs-modeset-vs-hang,Fail
 kms_flip@flip-vs-panning-vs-hang,Fail
-kms_flip@flip-vs-suspend,Timeout
-kms_flip@flip-vs-suspend-interruptible,Timeout
-kms_flip@plain-flip-fb-recreate,Fail
 kms_lease@lease-uevent,Fail
 kms_pipe_crc_basic@nonblocking-crc,Fail
-kms_pipe_crc_basic@nonblocking-crc-frame-sequence,Fail
 kms_writeback@writeback-check-output,Fail
 kms_writeback@writeback-check-output-XRGB2101010,Fail
 kms_writeback@writeback-fb-id,Fail
+21
drivers/gpu/drm/ci/xfails/vkms-none-flakes.txt
···
 # IGT Version: 1.28-g0df7b9b97
 # Linux Version: 6.9.0-rc7
 kms_flip@flip-vs-blocking-wf-vblank
+
+# Board Name: vkms
+# Bug Report: https://lore.kernel.org/dri-devel/61ed26af-062c-443c-9df2-d1ee319f3fb0@collabora.com/T/#u
+# Failure Rate: 50
+# IGT Version: 1.28-gf13702b8e
+# Linux Version: 6.10.0-rc5
+kms_cursor_legacy@flip-vs-cursor-varying-size
+
+# Board Name: vkms
+# Bug Report: https://lore.kernel.org/dri-devel/61ed26af-062c-443c-9df2-d1ee319f3fb0@collabora.com/T/#u
+# Failure Rate: 50
+# IGT Version: 1.28-gf13702b8e
+# Linux Version: 6.10.0-rc5
+kms_flip@flip-vs-expired-vblank
+
+# Board Name: vkms
+# Bug Report: https://lore.kernel.org/dri-devel/61ed26af-062c-443c-9df2-d1ee319f3fb0@collabora.com/T/#u
+# Failure Rate: 50
+# IGT Version: 1.28-gf13702b8e
+# Linux Version: 6.10.0-rc5
+kms_pipe_crc_basic@nonblocking-crc-frame-sequence
+103 -2
drivers/gpu/drm/ci/xfails/vkms-none-skips.txt
··· 104 104 # CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 105 105 # CR2: 0000000000000078 CR3: 0000000109b38000 CR4: 0000000000350ef0 106 106 107 + kms_cursor_crc@cursor-onscreen-256x256 108 + # Oops: Oops: 0000 [#1] PREEMPT SMP NOPTI 109 + # CPU: 1 PID: 1913 Comm: kworker/u8:6 Not tainted 6.10.0-rc5-g8a28e73ebead #1 110 + # Hardware name: ChromiumOS crosvm, BIOS 0 111 + # Workqueue: vkms_composer vkms_composer_worker [vkms] 112 + # RIP: 0010:compose_active_planes+0x344/0x4e0 [vkms] 113 + # Code: 6a 34 0f 8e 91 fe ff ff 44 89 ea 48 8d 7c 24 48 e8 71 f0 ff ff 4b 8b 04 fc 48 8b 4c 24 50 48 8b 7c 24 40 48 8b 80 48 01 00 00 <48> 63 70 18 8b 40 20 48 89 f2 48 c1 e6 03 29 d0 48 8b 54 24 48 48 114 + # RSP: 0018:ffffb477409fbd58 EFLAGS: 00010282 115 + # RAX: 0000000000000000 RBX: 0000000000000002 RCX: ffff8b124a242000 116 + # RDX: 00000000000000ff RSI: ffff8b124a243ff8 RDI: ffff8b124a244000 117 + # RBP: 0000000000000002 R08: 0000000000000000 R09: 00000000000003ff 118 + # R10: ffff8b124a244000 R11: 0000000000000000 R12: ffff8b1249282f30 119 + # R13: 0000000000000002 R14: 0000000000000002 R15: 0000000000000000 120 + # FS: 0000000000000000(0000) GS:ffff8b126bd00000(0000) knlGS:0000000000000000 121 + # CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 122 + # CR2: 0000000000000018 CR3: 0000000107a86000 CR4: 0000000000350ef0 123 + # Call Trace: 124 + # <TASK> 125 + # ? __die+0x1e/0x60 126 + # ? page_fault_oops+0x17b/0x4a0 127 + # ? exc_page_fault+0x6d/0x230 128 + # ? asm_exc_page_fault+0x26/0x30 129 + # ? compose_active_planes+0x344/0x4e0 [vkms] 130 + # ? compose_active_planes+0x32f/0x4e0 [vkms] 131 + # ? srso_return_thunk+0x5/0x5f 132 + # vkms_composer_worker+0x205/0x240 [vkms] 133 + # process_one_work+0x201/0x6c0 134 + # ? lock_is_held_type+0x9e/0x110 135 + # worker_thread+0x17e/0x350 136 + # ? __pfx_worker_thread+0x10/0x10 137 + # kthread+0xce/0x100 138 + # ? __pfx_kthread+0x10/0x10 139 + # ret_from_fork+0x2f/0x50 140 + # ? 
__pfx_kthread+0x10/0x10 141 + # ret_from_fork_asm+0x1a/0x30 142 + # </TASK> 143 + # Modules linked in: vkms 144 + # CR2: 0000000000000018 145 + # ---[ end trace 0000000000000000 ]--- 146 + # RIP: 0010:compose_active_planes+0x344/0x4e0 [vkms] 147 + # Code: 6a 34 0f 8e 91 fe ff ff 44 89 ea 48 8d 7c 24 48 e8 71 f0 ff ff 4b 8b 04 fc 48 8b 4c 24 50 48 8b 7c 24 40 48 8b 80 48 01 00 00 <48> 63 70 18 8b 40 20 48 89 f2 48 c1 e6 03 29 d0 48 8b 54 24 48 48 148 + # RSP: 0018:ffffb477409fbd58 EFLAGS: 00010282 149 + # RAX: 0000000000000000 RBX: 0000000000000002 RCX: ffff8b124a242000 150 + # RDX: 00000000000000ff RSI: ffff8b124a243ff8 RDI: ffff8b124a244000 151 + # RBP: 0000000000000002 R08: 0000000000000000 R09: 00000000000003ff 152 + # R10: ffff8b124a244000 R11: 0000000000000000 R12: ffff8b1249282f30 153 + # R13: 0000000000000002 R14: 0000000000000002 R15: 0000000000000000 154 + # FS: 0000000000000000(0000) GS:ffff8b126bd00000(0000) knlGS:0000000000000000 155 + # CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 156 + # CR2: 0000000000000018 CR3: 0000000107a86000 CR4: 0000000000350ef0 157 + 158 + kms_cursor_edge_walk@128x128-right-edge 159 + # Oops: Oops: 0000 [#1] PREEMPT SMP NOPTI 160 + # CPU: 0 PID: 1911 Comm: kworker/u8:3 Not tainted 6.10.0-rc5-g5e7a002eefe5 #1 161 + # Hardware name: ChromiumOS crosvm, BIOS 0 162 + # Workqueue: vkms_composer vkms_composer_worker [vkms] 163 + # RIP: 0010:compose_active_planes+0x344/0x4e0 [vkms] 164 + # Code: 6a 34 0f 8e 91 fe ff ff 44 89 ea 48 8d 7c 24 48 e8 71 f0 ff ff 4b 8b 04 fc 48 8b 4c 24 50 48 8b 7c 24 40 48 8b 80 48 01 00 00 <48> 63 70 18 8b 40 20 48 89 f2 48 c1 e6 03 29 d0 48 8b 54 24 48 48 165 + # RSP: 0018:ffffb2f040a43d58 EFLAGS: 00010282 166 + # RAX: 0000000000000000 RBX: 0000000000000002 RCX: ffffa2c181792000 167 + # RDX: 0000000000000000 RSI: ffffa2c181793ff8 RDI: ffffa2c181790000 168 + # RBP: 0000000000000031 R08: 0000000000000000 R09: 00000000000003ff 169 + # R10: ffffa2c181790000 R11: 0000000000000000 R12: ffffa2c1814fa810 170 
+ # R13: 0000000000000031 R14: 0000000000000031 R15: 0000000000000000 171 + # FS: 0000000000000000(0000) GS:ffffa2c1abc00000(0000) knlGS:0000000000000000 172 + # CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 173 + # CR2: 0000000000000018 CR3: 0000000106768000 CR4: 0000000000350ef0 174 + # Call Trace: 175 + # <TASK> 176 + # ? __die+0x1e/0x60 177 + # ? page_fault_oops+0x17b/0x4a0 178 + # ? srso_return_thunk+0x5/0x5f 179 + # ? mark_held_locks+0x49/0x80 180 + # ? exc_page_fault+0x6d/0x230 181 + # ? asm_exc_page_fault+0x26/0x30 182 + # ? compose_active_planes+0x344/0x4e0 [vkms] 183 + # ? compose_active_planes+0x32f/0x4e0 [vkms] 184 + # ? srso_return_thunk+0x5/0x5f 185 + # vkms_composer_worker+0x205/0x240 [vkms] 186 + # process_one_work+0x201/0x6c0 187 + # ? lock_is_held_type+0x9e/0x110 188 + # worker_thread+0x17e/0x350 189 + # ? __pfx_worker_thread+0x10/0x10 190 + # kthread+0xce/0x100 191 + # ? __pfx_kthread+0x10/0x10 192 + # ret_from_fork+0x2f/0x50 193 + # ? __pfx_kthread+0x10/0x10 194 + # ret_from_fork_asm+0x1a/0x30 195 + # </TASK> 196 + # Modules linked in: vkms 197 + # CR2: 0000000000000018 198 + # ---[ end trace 0000000000000000 ]--- 199 + # RIP: 0010:compose_active_planes+0x344/0x4e0 [vkms] 200 + # Code: 6a 34 0f 8e 91 fe ff ff 44 89 ea 48 8d 7c 24 48 e8 71 f0 ff ff 4b 8b 04 fc 48 8b 4c 24 50 48 8b 7c 24 40 48 8b 80 48 01 00 00 <48> 63 70 18 8b 40 20 48 89 f2 48 c1 e6 03 29 d0 48 8b 54 24 48 48 201 + # RSP: 0018:ffffb2f040a43d58 EFLAGS: 00010282 202 + # RAX: 0000000000000000 RBX: 0000000000000002 RCX: ffffa2c181792000 203 + # RDX: 0000000000000000 RSI: ffffa2c181793ff8 RDI: ffffa2c181790000 204 + # RBP: 0000000000000031 R08: 0000000000000000 R09: 00000000000003ff 205 + # R10: ffffa2c181790000 R11: 0000000000000000 R12: ffffa2c1814fa810 206 + # R13: 0000000000000031 R14: 0000000000000031 R15: 000000000000 207 + 107 208 # Skip driver specific tests 108 209 ^amdgpu.* 109 - msm_.* 210 + ^msm.* 110 211 nouveau_.* 111 - panfrost_.* 212 + ^panfrost.* 112 213 ^v3d.* 
113 214 ^vc4.* 114 215 ^vmwgfx*
+56 -10
drivers/gpu/drm/display/drm_dp_helper.c
··· 2328 2328 #undef DEVICE_ID_ANY 2329 2329 #undef DEVICE_ID 2330 2330 2331 + static int drm_dp_read_ident(struct drm_dp_aux *aux, unsigned int offset, 2332 + struct drm_dp_dpcd_ident *ident) 2333 + { 2334 + int ret; 2335 + 2336 + ret = drm_dp_dpcd_read(aux, offset, ident, sizeof(*ident)); 2337 + 2338 + return ret < 0 ? ret : 0; 2339 + } 2340 + 2341 + static void drm_dp_dump_desc(struct drm_dp_aux *aux, 2342 + const char *device_name, const struct drm_dp_desc *desc) 2343 + { 2344 + const struct drm_dp_dpcd_ident *ident = &desc->ident; 2345 + 2346 + drm_dbg_kms(aux->drm_dev, 2347 + "%s: %s: OUI %*phD dev-ID %*pE HW-rev %d.%d SW-rev %d.%d quirks 0x%04x\n", 2348 + aux->name, device_name, 2349 + (int)sizeof(ident->oui), ident->oui, 2350 + (int)strnlen(ident->device_id, sizeof(ident->device_id)), ident->device_id, 2351 + ident->hw_rev >> 4, ident->hw_rev & 0xf, 2352 + ident->sw_major_rev, ident->sw_minor_rev, 2353 + desc->quirks); 2354 + } 2355 + 2331 2356 /** 2332 2357 * drm_dp_read_desc - read sink/branch descriptor from DPCD 2333 2358 * @aux: DisplayPort AUX channel ··· 2369 2344 { 2370 2345 struct drm_dp_dpcd_ident *ident = &desc->ident; 2371 2346 unsigned int offset = is_branch ? DP_BRANCH_OUI : DP_SINK_OUI; 2372 - int ret, dev_id_len; 2347 + int ret; 2373 2348 2374 - ret = drm_dp_dpcd_read(aux, offset, ident, sizeof(*ident)); 2349 + ret = drm_dp_read_ident(aux, offset, ident); 2375 2350 if (ret < 0) 2376 2351 return ret; 2377 2352 2378 2353 desc->quirks = drm_dp_get_quirks(ident, is_branch); 2379 2354 2380 - dev_id_len = strnlen(ident->device_id, sizeof(ident->device_id)); 2381 - 2382 - drm_dbg_kms(aux->drm_dev, 2383 - "%s: DP %s: OUI %*phD dev-ID %*pE HW-rev %d.%d SW-rev %d.%d quirks 0x%04x\n", 2384 - aux->name, is_branch ? 
"branch" : "sink", 2385 - (int)sizeof(ident->oui), ident->oui, dev_id_len, 2386 - ident->device_id, ident->hw_rev >> 4, ident->hw_rev & 0xf, 2387 - ident->sw_major_rev, ident->sw_minor_rev, desc->quirks); 2355 + drm_dp_dump_desc(aux, is_branch ? "DP branch" : "DP sink", desc); 2388 2356 2389 2357 return 0; 2390 2358 } 2391 2359 EXPORT_SYMBOL(drm_dp_read_desc); 2360 + 2361 + /** 2362 + * drm_dp_dump_lttpr_desc - read and dump the DPCD descriptor for an LTTPR PHY 2363 + * @aux: DisplayPort AUX channel 2364 + * @dp_phy: LTTPR PHY instance 2365 + * 2366 + * Read the DPCD LTTPR PHY descriptor for @dp_phy and print a debug message 2367 + * with its details to dmesg. 2368 + * 2369 + * Returns 0 on success or a negative error code on failure. 2370 + */ 2371 + int drm_dp_dump_lttpr_desc(struct drm_dp_aux *aux, enum drm_dp_phy dp_phy) 2372 + { 2373 + struct drm_dp_desc desc = {}; 2374 + int ret; 2375 + 2376 + if (drm_WARN_ON(aux->drm_dev, dp_phy < DP_PHY_LTTPR1 || dp_phy > DP_MAX_LTTPR_COUNT)) 2377 + return -EINVAL; 2378 + 2379 + ret = drm_dp_read_ident(aux, DP_OUI_PHY_REPEATER(dp_phy), &desc.ident); 2380 + if (ret < 0) 2381 + return ret; 2382 + 2383 + drm_dp_dump_desc(aux, drm_dp_phy_name(dp_phy), &desc); 2384 + 2385 + return 0; 2386 + } 2387 + EXPORT_SYMBOL(drm_dp_dump_lttpr_desc); 2392 2388 2393 2389 /** 2394 2390 * drm_dp_dsc_sink_bpp_incr() - Get bits per pixel increment
+1 -1
drivers/gpu/drm/display/drm_dp_mst_topology.c
···
 		seq_printf(m, "branch oui: %*phN devid: ", 3, buf);
 
 		for (i = 0x3; i < 0x8 && buf[i]; i++)
-			seq_printf(m, "%c", buf[i]);
+			seq_putc(m, buf[i]);
 		seq_printf(m, " revision: hw: %x.%x sw: %x.%x\n",
 			   buf[0x9] >> 4, buf[0x9] & 0xf, buf[0xa], buf[0xb]);
 		if (dump_dp_payload_table(mgr, buf))
+7 -2
drivers/gpu/drm/drm_bridge.c
···
 	bridge->encoder = NULL;
 	list_del(&bridge->chain_node);
 
-	DRM_ERROR("failed to attach bridge %pOF to encoder %s: %d\n",
-		  bridge->of_node, encoder->name, ret);
+	if (ret != -EPROBE_DEFER)
+		DRM_ERROR("failed to attach bridge %pOF to encoder %s: %d\n",
+			  bridge->of_node, encoder->name, ret);
+	else
+		dev_err_probe(encoder->dev->dev, -EPROBE_DEFER,
+			      "failed to attach bridge %pOF to encoder %s\n",
+			      bridge->of_node, encoder->name);
 
 	return ret;
 }
+2 -6
drivers/gpu/drm/drm_bridge_connector.c
···
 		panel_bridge = bridge;
 	}
 
-	if (connector_type == DRM_MODE_CONNECTOR_Unknown) {
-		kfree(bridge_connector);
+	if (connector_type == DRM_MODE_CONNECTOR_Unknown)
 		return ERR_PTR(-EINVAL);
-	}
 
 	if (bridge_connector->bridge_hdmi)
 		ret = drmm_connector_hdmi_init(drm, connector,
···
 		ret = drmm_connector_init(drm, connector,
 					  &drm_bridge_connector_funcs,
 					  connector_type, ddc);
-	if (ret) {
-		kfree(bridge_connector);
+	if (ret)
 		return ERR_PTR(ret);
-	}
 
 	drm_connector_helper_add(connector, &drm_bridge_connector_helper_funcs);
 
+107 -16
drivers/gpu/drm/drm_connector.c
··· 1043 1043 { DRM_MODE_SCALE_ASPECT, "Full aspect" }, 1044 1044 }; 1045 1045 1046 + static const struct drm_prop_enum_list drm_power_saving_policy_enum_list[] = { 1047 + { __builtin_ffs(DRM_MODE_REQUIRE_COLOR_ACCURACY) - 1, "Require color accuracy" }, 1048 + { __builtin_ffs(DRM_MODE_REQUIRE_LOW_LATENCY) - 1, "Require low latency" }, 1049 + }; 1050 + 1046 1051 static const struct drm_prop_enum_list drm_aspect_ratio_enum_list[] = { 1047 1052 { DRM_MODE_PICTURE_ASPECT_NONE, "Automatic" }, 1048 1053 { DRM_MODE_PICTURE_ASPECT_4_3, "4:3" }, ··· 1634 1629 * 1635 1630 * Drivers can set up these properties by calling 1636 1631 * drm_mode_create_tv_margin_properties(). 1632 + * power saving policy: 1633 + * This property is used to set the power saving policy for the connector. 1634 + * This property is populated with a bitmask of optional requirements set 1635 + * by the drm master for the drm driver to respect: 1636 + * - "Require color accuracy": Disable power saving features that will 1637 + * affect color fidelity. 1638 + * For example: Hardware assisted backlight modulation. 1639 + * - "Require low latency": Disable power saving features that will 1640 + * affect latency. 1641 + * For example: Panel self refresh (PSR) 1637 1642 */ 1638 1643 1639 1644 int drm_connector_create_standard_properties(struct drm_device *dev) ··· 2147 2132 EXPORT_SYMBOL(drm_mode_create_scaling_mode_property); 2148 2133 2149 2134 /** 2135 + * drm_mode_create_power_saving_policy_property - create power saving policy property 2136 + * @dev: DRM device 2137 + * @supported_policies: bitmask of supported power saving policies 2138 + * 2139 + * Called by a driver the first time it's needed, must be attached to desired 2140 + * connectors. 
2141 + * 2142 + * Returns: %0 2143 + */ 2144 + int drm_mode_create_power_saving_policy_property(struct drm_device *dev, 2145 + uint64_t supported_policies) 2146 + { 2147 + struct drm_property *power_saving; 2148 + 2149 + if (dev->mode_config.power_saving_policy) 2150 + return 0; 2151 + WARN_ON((supported_policies & DRM_MODE_POWER_SAVING_POLICY_ALL) == 0); 2152 + 2153 + power_saving = 2154 + drm_property_create_bitmask(dev, 0, "power saving policy", 2155 + drm_power_saving_policy_enum_list, 2156 + ARRAY_SIZE(drm_power_saving_policy_enum_list), 2157 + supported_policies); 2158 + if (!power_saving) 2159 + return -ENOMEM; 2160 + 2161 + dev->mode_config.power_saving_policy = power_saving; 2162 + 2163 + return 0; 2164 + } 2165 + EXPORT_SYMBOL(drm_mode_create_power_saving_policy_property); 2166 + 2167 + /** 2150 2168 * DOC: Variable refresh properties 2151 2169 * 2152 2170 * Variable refresh rate capable displays can dynamically adjust their ··· 2363 2315 * DOC: standard connector properties 2364 2316 * 2365 2317 * Colorspace: 2366 - * This property helps select a suitable colorspace based on the sink 2367 - * capability. Modern sink devices support wider gamut like BT2020. 2368 - * This helps switch to BT2020 mode if the BT2020 encoded video stream 2369 - * is being played by the user, same for any other colorspace. Thereby 2370 - * giving a good visual experience to users. 2318 + * This property is used to inform the driver about the color encoding 2319 + * user space configured the pixel operation properties to produce. 2320 + * The variants set the colorimetry, transfer characteristics, and which 2321 + * YCbCr conversion should be used when necessary. 2322 + * The transfer characteristics from HDR_OUTPUT_METADATA takes precedence 2323 + * over this property. 2324 + * User space always configures the pixel operation properties to produce 2325 + * full quantization range data (see the Broadcast RGB property). 
2371 2326 * 2372 - * The expectation from userspace is that it should parse the EDID 2373 - * and get supported colorspaces. Use this property and switch to the 2374 - * one supported. Sink supported colorspaces should be retrieved by 2375 - * userspace from EDID and driver will not explicitly expose them. 2327 + * Drivers inform the sink about what colorimetry, transfer 2328 + * characteristics, YCbCr conversion, and quantization range to expect 2329 + * (this can depend on the output mode, output format and other 2330 + * properties). Drivers also convert the user space provided data to what 2331 + * the sink expects. 2376 2332 * 2377 - * Basically the expectation from userspace is: 2378 - * - Set up CRTC DEGAMMA/CTM/GAMMA to convert to some sink 2379 - * colorspace 2380 - * - Set this new property to let the sink know what it 2381 - * converted the CRTC output to. 2382 - * - This property is just to inform sink what colorspace 2383 - * source is trying to drive. 2333 + * User space has to check if the sink supports all of the possible 2334 + * colorimetries that the driver is allowed to pick by parsing the EDID. 2335 + * 2336 + * For historical reasons this property exposes a number of variants which 2337 + * result in undefined behavior. 2338 + * 2339 + * Default: 2340 + * The behavior is driver-specific. 2341 + * BT2020_RGB: 2342 + * BT2020_YCC: 2343 + * User space configures the pixel operation properties to produce 2344 + * RGB content with Rec. ITU-R BT.2020 colorimetry, Rec. 2345 + * ITU-R BT.2020 (Table 4, RGB) transfer characteristics and full 2346 + * quantization range. 2347 + * User space can use the HDR_OUTPUT_METADATA property to set the 2348 + * transfer characteristics to PQ (Rec. ITU-R BT.2100 Table 4) or 2349 + * HLG (Rec. ITU-R BT.2100 Table 5) in which case, user space 2350 + * configures pixel operation properties to produce content with 2351 + * the respective transfer characteristics. 
2352 + * User space has to make sure the sink supports Rec. 2353 + * ITU-R BT.2020 R'G'B' and Rec. ITU-R BT.2020 Y'C'BC'R 2354 + * colorimetry. 2355 + * Drivers can configure the sink to use an RGB format, tell the 2356 + * sink to expect Rec. ITU-R BT.2020 R'G'B' colorimetry and convert 2357 + * to the appropriate quantization range. 2358 + * Drivers can configure the sink to use a YCbCr format, tell the 2359 + * sink to expect Rec. ITU-R BT.2020 Y'C'BC'R colorimetry, convert 2360 + * to YCbCr using the Rec. ITU-R BT.2020 non-constant luminance 2361 + * conversion matrix and convert to the appropriate quantization 2362 + * range. 2363 + * The variants BT2020_RGB and BT2020_YCC are equivalent and the 2364 + * driver chooses between RGB and YCbCr on its own. 2365 + * SMPTE_170M_YCC: 2366 + * BT709_YCC: 2367 + * XVYCC_601: 2368 + * XVYCC_709: 2369 + * SYCC_601: 2370 + * opYCC_601: 2371 + * opRGB: 2372 + * BT2020_CYCC: 2373 + * DCI-P3_RGB_D65: 2374 + * DCI-P3_RGB_Theater: 2375 + * RGB_WIDE_FIXED: 2376 + * RGB_WIDE_FLOAT: 2377 + * BT601_YCC: 2378 + * The behavior is undefined. 2384 2379 * 2385 2380 * Because between HDMI and DP have different colorspaces, 2386 2381 * drm_mode_create_hdmi_colorspace_property() is used for HDMI connector and
+7
drivers/gpu/drm/drm_crtc_internal.h
···
 }
 #endif
 
+/* drm_panic.c */
+#ifdef CONFIG_DRM_PANIC
+bool drm_panic_is_enabled(struct drm_device *dev);
+#else
+static inline bool drm_panic_is_enabled(struct drm_device *dev) { return false; }
+#endif
+
 #endif /* __DRM_CRTC_INTERNAL_H__ */
+2
drivers/gpu/drm/drm_fb_helper.c
···
 #include <drm/drm_vblank.h>
 
 #include "drm_internal.h"
+#include "drm_crtc_internal.h"
 
 static bool drm_fbdev_emulation = true;
 module_param_named(fbdev_emulation, drm_fbdev_emulation, bool, 0600);
···
 	fb_helper->info = info;
 	info->skip_vt_switch = true;
 
+	info->skip_panic = drm_panic_is_enabled(fb_helper->dev);
 	return info;
 
 err_release:
+2
drivers/gpu/drm/drm_mode_config.c
···
 	if (ret == -EDEADLK)
 		ret = drm_modeset_backoff(&modeset_ctx);
 
+	might_fault();
+
 	ww_acquire_init(&resv_ctx, &reservation_ww_class);
 	ret = dma_resv_lock(&resv, &resv_ctx);
 	if (ret == -EDEADLK)
+18
drivers/gpu/drm/drm_panel.c
···
 	if (!panel)
 		return -EINVAL;
 
+	/*
+	 * If you are seeing the warning below it likely means one of two things:
+	 * - Your panel driver incorrectly calls drm_panel_unprepare() in its
+	 *   shutdown routine. You should delete this.
+	 * - You are using panel-edp or panel-simple and your DRM modeset
+	 *   driver's shutdown() callback happened after the panel's shutdown().
+	 *   In this case the warning is harmless though ideally you should
+	 *   figure out how to reverse the order of the shutdown() callbacks.
+	 */
 	if (!panel->prepared) {
 		dev_warn(panel->dev, "Skipping unprepare of already unprepared panel\n");
 		return 0;
···
 	if (!panel)
 		return -EINVAL;
 
+	/*
+	 * If you are seeing the warning below it likely means one of two things:
+	 * - Your panel driver incorrectly calls drm_panel_disable() in its
+	 *   shutdown routine. You should delete this.
+	 * - You are using panel-edp or panel-simple and your DRM modeset
+	 *   driver's shutdown() callback happened after the panel's shutdown().
+	 *   In this case the warning is harmless though ideally you should
+	 *   figure out how to reverse the order of the shutdown() callbacks.
+	 */
 	if (!panel->enabled) {
 		dev_warn(panel->dev, "Skipping disable of already disabled panel\n");
 		return 0;
+24 -2
drivers/gpu/drm/drm_panic.c
··· 27 27 #include <drm/drm_plane.h> 28 28 #include <drm/drm_print.h> 29 29 30 + #include "drm_crtc_internal.h" 31 + 30 32 MODULE_AUTHOR("Jocelyn Falempe"); 31 33 MODULE_DESCRIPTION("DRM panic handler"); 32 34 MODULE_LICENSE("GPL"); ··· 657 655 return container_of(kd, struct drm_plane, kmsg_panic); 658 656 } 659 657 660 - static void drm_panic(struct kmsg_dumper *dumper, enum kmsg_dump_reason reason) 658 + static void drm_panic(struct kmsg_dumper *dumper, struct kmsg_dump_detail *detail) 661 659 { 662 660 struct drm_plane *plane = to_drm_plane(dumper); 663 661 664 - if (reason == KMSG_DUMP_PANIC) 662 + if (detail->reason == KMSG_DUMP_PANIC) 665 663 draw_panic_plane(plane); 666 664 } 667 665 ··· 704 702 #else 705 703 static void debugfs_register_plane(struct drm_plane *plane, int index) {} 706 704 #endif /* CONFIG_DRM_PANIC_DEBUG */ 705 + 706 + /** 707 + * drm_panic_is_enabled 708 + * @dev: the drm device that may supports drm_panic 709 + * 710 + * returns true if the drm device supports drm_panic 711 + */ 712 + bool drm_panic_is_enabled(struct drm_device *dev) 713 + { 714 + struct drm_plane *plane; 715 + 716 + if (!dev->mode_config.num_total_plane) 717 + return false; 718 + 719 + drm_for_each_plane(plane, dev) 720 + if (plane->helper_private && plane->helper_private->get_scanout_buffer) 721 + return true; 722 + return false; 723 + } 724 + EXPORT_SYMBOL(drm_panic_is_enabled); 707 725 708 726 /** 709 727 * drm_panic_register() - Initialize DRM panic for a device
+1 -1
drivers/gpu/drm/drm_probe_helper.c
···
 * disabled. Polling is re-enabled by calling drm_kms_helper_poll_enable().
 *
 * If however, the polling was never initialized, this call will trigger a
-* warning and return
+* warning and return.
 *
 * Note that calls to enable and disable polling must be strictly ordered, which
 * is automatically the case when they're only call from suspend/resume
+59 -22
drivers/gpu/drm/drm_vblank.c
··· 131 131 * guaranteed to be enabled. 132 132 * 133 133 * On many hardware disabling the vblank interrupt cannot be done in a race-free 134 - * manner, see &drm_driver.vblank_disable_immediate and 134 + * manner, see &drm_vblank_crtc_config.disable_immediate and 135 135 * &drm_driver.max_vblank_count. In that case the vblank core only disables the 136 136 * vblanks after a timer has expired, which can be configured through the 137 137 * ``vblankoffdelay`` module parameter. ··· 1241 1241 void drm_vblank_put(struct drm_device *dev, unsigned int pipe) 1242 1242 { 1243 1243 struct drm_vblank_crtc *vblank = drm_vblank_crtc(dev, pipe); 1244 + int vblank_offdelay = vblank->config.offdelay_ms; 1244 1245 1245 1246 if (drm_WARN_ON(dev, pipe >= dev->num_crtcs)) 1246 1247 return; ··· 1251 1250 1252 1251 /* Last user schedules interrupt disable */ 1253 1252 if (atomic_dec_and_test(&vblank->refcount)) { 1254 - if (drm_vblank_offdelay == 0) 1253 + if (!vblank_offdelay) 1255 1254 return; 1256 - else if (drm_vblank_offdelay < 0) 1255 + else if (vblank_offdelay < 0) 1257 1256 vblank_disable_fn(&vblank->disable_timer); 1258 - else if (!dev->vblank_disable_immediate) 1257 + else if (!vblank->config.disable_immediate) 1259 1258 mod_timer(&vblank->disable_timer, 1260 - jiffies + ((drm_vblank_offdelay * HZ)/1000)); 1259 + jiffies + ((vblank_offdelay * HZ) / 1000)); 1261 1260 } 1262 1261 } 1263 1262 ··· 1266 1265 * @crtc: which counter to give up 1267 1266 * 1268 1267 * Release ownership of a given vblank counter, turning off interrupts 1269 - * if possible. Disable interrupts after drm_vblank_offdelay milliseconds. 1268 + * if possible. Disable interrupts after &drm_vblank_crtc_config.offdelay_ms 1269 + * milliseconds. 
1270 1270 */ 1271 1271 void drm_crtc_vblank_put(struct drm_crtc *crtc) 1272 1272 { ··· 1468 1466 EXPORT_SYMBOL(drm_crtc_set_max_vblank_count); 1469 1467 1470 1468 /** 1471 - * drm_crtc_vblank_on - enable vblank events on a CRTC 1469 + * drm_crtc_vblank_on_config - enable vblank events on a CRTC with custom 1470 + * configuration options 1472 1471 * @crtc: CRTC in question 1472 + * @config: Vblank configuration value 1473 1473 * 1474 - * This functions restores the vblank interrupt state captured with 1475 - * drm_crtc_vblank_off() again and is generally called when enabling @crtc. Note 1476 - * that calls to drm_crtc_vblank_on() and drm_crtc_vblank_off() can be 1477 - * unbalanced and so can also be unconditionally called in driver load code to 1478 - * reflect the current hardware state of the crtc. 1474 + * See drm_crtc_vblank_on(). In addition, this function allows you to provide a 1475 + * custom vblank configuration for a given CRTC. 1476 + * 1477 + * Note that @config is copied, the pointer does not need to stay valid beyond 1478 + * this function call. For details of the parameters see 1479 + * struct drm_vblank_crtc_config. 1479 1480 */ 1480 - void drm_crtc_vblank_on(struct drm_crtc *crtc) 1481 + void drm_crtc_vblank_on_config(struct drm_crtc *crtc, 1482 + const struct drm_vblank_crtc_config *config) 1481 1483 { 1482 1484 struct drm_device *dev = crtc->dev; 1483 1485 unsigned int pipe = drm_crtc_index(crtc); ··· 1493 1487 spin_lock_irq(&dev->vbl_lock); 1494 1488 drm_dbg_vbl(dev, "crtc %d, vblank enabled %d, inmodeset %d\n", 1495 1489 pipe, vblank->enabled, vblank->inmodeset); 1490 + 1491 + vblank->config = *config; 1496 1492 1497 1493 /* Drop our private "prevent drm_vblank_get" refcount */ 1498 1494 if (vblank->inmodeset) { ··· 1508 1500 * re-enable interrupts if there are users left, or the 1509 1501 * user wishes vblank interrupts to be enabled all the time. 
1510 1502 */ 1511 - if (atomic_read(&vblank->refcount) != 0 || drm_vblank_offdelay == 0) 1503 + if (atomic_read(&vblank->refcount) != 0 || !vblank->config.offdelay_ms) 1512 1504 drm_WARN_ON(dev, drm_vblank_enable(dev, pipe)); 1513 1505 spin_unlock_irq(&dev->vbl_lock); 1506 + } 1507 + EXPORT_SYMBOL(drm_crtc_vblank_on_config); 1508 + 1509 + /** 1510 + * drm_crtc_vblank_on - enable vblank events on a CRTC 1511 + * @crtc: CRTC in question 1512 + * 1513 + * This functions restores the vblank interrupt state captured with 1514 + * drm_crtc_vblank_off() again and is generally called when enabling @crtc. Note 1515 + * that calls to drm_crtc_vblank_on() and drm_crtc_vblank_off() can be 1516 + * unbalanced and so can also be unconditionally called in driver load code to 1517 + * reflect the current hardware state of the crtc. 1518 + * 1519 + * Note that unlike in drm_crtc_vblank_on_config(), default values are used. 1520 + */ 1521 + void drm_crtc_vblank_on(struct drm_crtc *crtc) 1522 + { 1523 + const struct drm_vblank_crtc_config config = { 1524 + .offdelay_ms = drm_vblank_offdelay, 1525 + .disable_immediate = crtc->dev->vblank_disable_immediate 1526 + }; 1527 + 1528 + drm_crtc_vblank_on_config(crtc, &config); 1514 1529 } 1515 1530 EXPORT_SYMBOL(drm_crtc_vblank_on); 1516 1531 ··· 1587 1556 * 1588 1557 * Note that drivers must have race-free high-precision timestamping support, 1589 1558 * i.e. &drm_crtc_funcs.get_vblank_timestamp must be hooked up and 1590 - * &drm_driver.vblank_disable_immediate must be set to indicate the 1559 + * &drm_vblank_crtc_config.disable_immediate must be set to indicate the 1591 1560 * time-stamping functions are race-free against vblank hardware counter 1592 1561 * increments. 
1593 1562 */ 1594 1563 void drm_crtc_vblank_restore(struct drm_crtc *crtc) 1595 1564 { 1596 - WARN_ON_ONCE(!crtc->funcs->get_vblank_timestamp); 1597 - WARN_ON_ONCE(!crtc->dev->vblank_disable_immediate); 1565 + struct drm_device *dev = crtc->dev; 1566 + unsigned int pipe = drm_crtc_index(crtc); 1567 + struct drm_vblank_crtc *vblank = drm_vblank_crtc(dev, pipe); 1598 1568 1599 - drm_vblank_restore(crtc->dev, drm_crtc_index(crtc)); 1569 + drm_WARN_ON_ONCE(dev, !crtc->funcs->get_vblank_timestamp); 1570 + drm_WARN_ON_ONCE(dev, vblank->inmodeset); 1571 + drm_WARN_ON_ONCE(dev, !vblank->config.disable_immediate); 1572 + 1573 + drm_vblank_restore(dev, pipe); 1600 1574 } 1601 1575 EXPORT_SYMBOL(drm_crtc_vblank_restore); 1602 1576 ··· 1790 1754 /* If the counter is currently enabled and accurate, short-circuit 1791 1755 * queries to return the cached timestamp of the last vblank. 1792 1756 */ 1793 - if (dev->vblank_disable_immediate && 1757 + if (vblank->config.disable_immediate && 1794 1758 drm_wait_vblank_is_query(vblwait) && 1795 1759 READ_ONCE(vblank->enabled)) { 1796 1760 drm_wait_vblank_reply(dev, pipe, &vblwait->reply); ··· 1954 1918 * been signaled. The disable has to be last (after 1955 1919 * drm_handle_vblank_events) so that the timestamp is always accurate. 1956 1920 */ 1957 - disable_irq = (dev->vblank_disable_immediate && 1958 - drm_vblank_offdelay > 0 && 1921 + disable_irq = (vblank->config.disable_immediate && 1922 + vblank->config.offdelay_ms > 0 && 1959 1923 !atomic_read(&vblank->refcount)); 1960 1924 1961 1925 drm_handle_vblank_events(dev, pipe); ··· 2028 1992 pipe = drm_crtc_index(crtc); 2029 1993 2030 1994 vblank = drm_crtc_vblank_crtc(crtc); 2031 - vblank_enabled = dev->vblank_disable_immediate && READ_ONCE(vblank->enabled); 1995 + vblank_enabled = READ_ONCE(vblank->config.disable_immediate) && 1996 + READ_ONCE(vblank->enabled); 2032 1997 2033 1998 if (!vblank_enabled) { 2034 1999 ret = drm_crtc_vblank_get(crtc);
+1 -1
drivers/gpu/drm/etnaviv/etnaviv_sched.c
···
 
 	drm_sched_resubmit_jobs(&gpu->sched);
 
-	drm_sched_start(&gpu->sched, true);
+	drm_sched_start(&gpu->sched);
 	return DRM_GPU_SCHED_STAT_NOMINAL;
 
 out_no_timeout:
+1 -1
drivers/gpu/drm/gma500/cdv_intel_lvds.c
···
 			dev->dev, "I2C bus registration failed.\n");
 		goto err_encoder_cleanup;
 	}
-	gma_encoder->i2c_bus->slave_addr = 0x2C;
+	gma_encoder->i2c_bus->target_addr = 0x2C;
 	dev_priv->lvds_i2c_bus = gma_encoder->i2c_bus;
 
 	/*
+11 -11
drivers/gpu/drm/gma500/intel_bios.c
··· 14 14 #include "psb_intel_drv.h" 15 15 #include "psb_intel_reg.h" 16 16 17 - #define SLAVE_ADDR1 0x70 18 - #define SLAVE_ADDR2 0x72 17 + #define TARGET_ADDR1 0x70 18 + #define TARGET_ADDR2 0x72 19 19 20 20 static void *find_section(struct bdb_header *bdb, int section_id) 21 21 { ··· 357 357 /* skip the device block if device type is invalid */ 358 358 continue; 359 359 } 360 - if (p_child->slave_addr != SLAVE_ADDR1 && 361 - p_child->slave_addr != SLAVE_ADDR2) { 360 + if (p_child->target_addr != TARGET_ADDR1 && 361 + p_child->target_addr != TARGET_ADDR2) { 362 362 /* 363 - * If the slave address is neither 0x70 nor 0x72, 363 + * If the target address is neither 0x70 nor 0x72, 364 364 * it is not a SDVO device. Skip it. 365 365 */ 366 366 continue; ··· 371 371 DRM_DEBUG_KMS("Incorrect SDVO port. Skip it\n"); 372 372 continue; 373 373 } 374 - DRM_DEBUG_KMS("the SDVO device with slave addr %2x is found on" 374 + DRM_DEBUG_KMS("the SDVO device with target addr %2x is found on" 375 375 " %s port\n", 376 - p_child->slave_addr, 376 + p_child->target_addr, 377 377 (p_child->dvo_port == DEVICE_PORT_DVOB) ? 
378 378 "SDVOB" : "SDVOC"); 379 379 p_mapping = &(dev_priv->sdvo_mappings[p_child->dvo_port - 1]); 380 380 if (!p_mapping->initialized) { 381 381 p_mapping->dvo_port = p_child->dvo_port; 382 - p_mapping->slave_addr = p_child->slave_addr; 382 + p_mapping->target_addr = p_child->target_addr; 383 383 p_mapping->dvo_wiring = p_child->dvo_wiring; 384 384 p_mapping->ddc_pin = p_child->ddc_pin; 385 385 p_mapping->i2c_pin = p_child->i2c_pin; 386 386 p_mapping->initialized = 1; 387 387 DRM_DEBUG_KMS("SDVO device: dvo=%x, addr=%x, wiring=%d, ddc_pin=%d, i2c_pin=%d\n", 388 388 p_mapping->dvo_port, 389 - p_mapping->slave_addr, 389 + p_mapping->target_addr, 390 390 p_mapping->dvo_wiring, 391 391 p_mapping->ddc_pin, 392 392 p_mapping->i2c_pin); ··· 394 394 DRM_DEBUG_KMS("Maybe one SDVO port is shared by " 395 395 "two SDVO device.\n"); 396 396 } 397 - if (p_child->slave2_addr) { 397 + if (p_child->target2_addr) { 398 398 /* Maybe this is a SDVO device with multiple inputs */ 399 399 /* And the mapping info is not added */ 400 - DRM_DEBUG_KMS("there exists the slave2_addr. Maybe this" 400 + DRM_DEBUG_KMS("there exists the target2_addr. Maybe this" 401 401 " is a SDVO device with multiple inputs.\n"); 402 402 } 403 403 count++;
+2 -2
drivers/gpu/drm/gma500/intel_bios.h
··· 186 186 u16 addin_offset; 187 187 u8 dvo_port; /* See Device_PORT_* above */ 188 188 u8 i2c_pin; 189 - u8 slave_addr; 189 + u8 target_addr; 190 190 u8 ddc_pin; 191 191 u16 edid_ptr; 192 192 u8 dvo_cfg; /* See DEVICE_CFG_* above */ 193 193 u8 dvo2_port; 194 194 u8 i2c2_pin; 195 - u8 slave2_addr; 195 + u8 target2_addr; 196 196 u8 ddc2_pin; 197 197 u8 capabilities; 198 198 u8 dvo_wiring;/* See DEVICE_WIRE_* above */
+1 -1
drivers/gpu/drm/gma500/intel_gmbus.c
··· 333 333 clear_err: 334 334 /* Toggle the Software Clear Interrupt bit. This has the effect 335 335 * of resetting the GMBUS controller and so clearing the 336 - * BUS_ERROR raised by the slave's NAK. 336 + * BUS_ERROR raised by the target's NAK. 337 337 */ 338 338 GMBUS_REG_WRITE(GMBUS1 + reg_offset, GMBUS_SW_CLR_INT); 339 339 GMBUS_REG_WRITE(GMBUS1 + reg_offset, 0);
+1 -1
drivers/gpu/drm/gma500/psb_drv.h
··· 202 202 struct sdvo_device_mapping { 203 203 u8 initialized; 204 204 u8 dvo_port; 205 - u8 slave_addr; 205 + u8 target_addr; 206 206 u8 dvo_wiring; 207 207 u8 i2c_pin; 208 208 u8 i2c_speed;
+1 -1
drivers/gpu/drm/gma500/psb_intel_drv.h
··· 80 80 struct gma_i2c_chan { 81 81 struct i2c_adapter base; 82 82 struct i2c_algo_bit_data algo; 83 - u8 slave_addr; 83 + u8 target_addr; 84 84 85 85 /* for getting at dev. private (mmio etc.) */ 86 86 struct drm_device *drm_dev;
+2 -2
drivers/gpu/drm/gma500/psb_intel_lvds.c
··· 97 97 98 98 struct i2c_msg msgs[] = { 99 99 { 100 - .addr = lvds_i2c_bus->slave_addr, 100 + .addr = lvds_i2c_bus->target_addr, 101 101 .flags = 0, 102 102 .len = 2, 103 103 .buf = out_buf, ··· 710 710 dev->dev, "I2C bus registration failed.\n"); 711 711 goto err_encoder_cleanup; 712 712 } 713 - lvds_priv->i2c_bus->slave_addr = 0x2C; 713 + lvds_priv->i2c_bus->target_addr = 0x2C; 714 714 dev_priv->lvds_i2c_bus = lvds_priv->i2c_bus; 715 715 716 716 /*
+13 -13
drivers/gpu/drm/gma500/psb_intel_sdvo.c
··· 70 70 struct gma_encoder base; 71 71 72 72 struct i2c_adapter *i2c; 73 - u8 slave_addr; 73 + u8 target_addr; 74 74 75 75 struct i2c_adapter ddc; 76 76 ··· 259 259 { 260 260 struct i2c_msg msgs[] = { 261 261 { 262 - .addr = psb_intel_sdvo->slave_addr, 262 + .addr = psb_intel_sdvo->target_addr, 263 263 .flags = 0, 264 264 .len = 1, 265 265 .buf = &addr, 266 266 }, 267 267 { 268 - .addr = psb_intel_sdvo->slave_addr, 268 + .addr = psb_intel_sdvo->target_addr, 269 269 .flags = I2C_M_RD, 270 270 .len = 1, 271 271 .buf = ch, ··· 463 463 psb_intel_sdvo_debug_write(psb_intel_sdvo, cmd, args, args_len); 464 464 465 465 for (i = 0; i < args_len; i++) { 466 - msgs[i].addr = psb_intel_sdvo->slave_addr; 466 + msgs[i].addr = psb_intel_sdvo->target_addr; 467 467 msgs[i].flags = 0; 468 468 msgs[i].len = 2; 469 469 msgs[i].buf = buf + 2 *i; 470 470 buf[2*i + 0] = SDVO_I2C_ARG_0 - i; 471 471 buf[2*i + 1] = ((u8*)args)[i]; 472 472 } 473 - msgs[i].addr = psb_intel_sdvo->slave_addr; 473 + msgs[i].addr = psb_intel_sdvo->target_addr; 474 474 msgs[i].flags = 0; 475 475 msgs[i].len = 2; 476 476 msgs[i].buf = buf + 2*i; ··· 479 479 480 480 /* the following two are to read the response */ 481 481 status = SDVO_I2C_CMD_STATUS; 482 - msgs[i+1].addr = psb_intel_sdvo->slave_addr; 482 + msgs[i+1].addr = psb_intel_sdvo->target_addr; 483 483 msgs[i+1].flags = 0; 484 484 msgs[i+1].len = 1; 485 485 msgs[i+1].buf = &status; 486 486 487 - msgs[i+2].addr = psb_intel_sdvo->slave_addr; 487 + msgs[i+2].addr = psb_intel_sdvo->target_addr; 488 488 msgs[i+2].flags = I2C_M_RD; 489 489 msgs[i+2].len = 1; 490 490 msgs[i+2].buf = &status; ··· 1899 1899 } 1900 1900 1901 1901 static u8 1902 - psb_intel_sdvo_get_slave_addr(struct drm_device *dev, int sdvo_reg) 1902 + psb_intel_sdvo_get_target_addr(struct drm_device *dev, int sdvo_reg) 1903 1903 { 1904 1904 struct drm_psb_private *dev_priv = to_drm_psb_private(dev); 1905 1905 struct sdvo_device_mapping *my_mapping, *other_mapping; ··· 1913 1913 } 1914 1914 1915 
1915 /* If the BIOS described our SDVO device, take advantage of it. */ 1916 - if (my_mapping->slave_addr) 1917 - return my_mapping->slave_addr; 1916 + if (my_mapping->target_addr) 1917 + return my_mapping->target_addr; 1918 1918 1919 1919 /* If the BIOS only described a different SDVO device, use the 1920 1920 * address that it isn't using. 1921 1921 */ 1922 - if (other_mapping->slave_addr) { 1923 - if (other_mapping->slave_addr == 0x70) 1922 + if (other_mapping->target_addr) { 1923 + if (other_mapping->target_addr == 0x70) 1924 1924 return 0x72; 1925 1925 else 1926 1926 return 0x70; ··· 2446 2446 return false; 2447 2447 2448 2448 psb_intel_sdvo->sdvo_reg = sdvo_reg; 2449 - psb_intel_sdvo->slave_addr = psb_intel_sdvo_get_slave_addr(dev, sdvo_reg) >> 1; 2449 + psb_intel_sdvo->target_addr = psb_intel_sdvo_get_target_addr(dev, sdvo_reg) >> 1; 2450 2450 psb_intel_sdvo_select_i2c_bus(dev_priv, psb_intel_sdvo, sdvo_reg); 2451 2451 if (!psb_intel_sdvo_init_ddc_proxy(psb_intel_sdvo, dev)) { 2452 2452 kfree(psb_intel_sdvo);
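The gma500 changes above rename `slave_addr` to `target_addr` per current I2C terminology. The address-selection heuristic in `psb_intel_sdvo_get_target_addr()` (prefer the BIOS-described address, otherwise take whichever of 0x70/0x72 the other SDVO port is not using) can be modeled in plain C. This is a userspace sketch with toy struct names, not the kernel code; the final fallback to 0x70 is an assumption, since the diff cuts off before the function's default return.

```c
#include <stdint.h>

#define TARGET_ADDR1 0x70  /* first SDVO I2C target address */
#define TARGET_ADDR2 0x72  /* second SDVO I2C target address */

/* Toy stand-in for the BIOS-provided device mapping. */
struct sdvo_mapping {
	uint8_t target_addr;	/* 0 if the BIOS did not describe this device */
};

/*
 * Mirrors the selection logic in psb_intel_sdvo_get_target_addr():
 * use the BIOS-described address for this port if present; otherwise
 * pick the address the other port is not using. The fallback to
 * TARGET_ADDR1 when neither mapping is set is an assumption.
 */
static uint8_t pick_target_addr(const struct sdvo_mapping *mine,
				const struct sdvo_mapping *other)
{
	if (mine->target_addr)
		return mine->target_addr;

	if (other->target_addr)
		return other->target_addr == TARGET_ADDR1 ? TARGET_ADDR2
							  : TARGET_ADDR1;

	return TARGET_ADDR1;
}
```

Note the caller in `psb_intel_sdvo_init()` shifts the result right by one, converting the 8-bit bus address into the 7-bit form the I2C core expects.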
+2 -2
drivers/gpu/drm/imagination/pvr_queue.c
··· 782 782 } 783 783 } 784 784 785 - drm_sched_start(&queue->scheduler, true); 785 + drm_sched_start(&queue->scheduler); 786 786 } 787 787 788 788 /** ··· 842 842 } 843 843 mutex_unlock(&pvr_dev->queues.lock); 844 844 845 - drm_sched_start(sched, true); 845 + drm_sched_start(sched); 846 846 847 847 return DRM_GPU_SCHED_STAT_NOMINAL; 848 848 }
+1 -1
drivers/gpu/drm/lima/lima_sched.c
··· 463 463 lima_pm_idle(ldev); 464 464 465 465 drm_sched_resubmit_jobs(&pipe->base); 466 - drm_sched_start(&pipe->base, true); 466 + drm_sched_start(&pipe->base); 467 467 468 468 return DRM_GPU_SCHED_STAT_NOMINAL; 469 469 }
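The pvr and lima hunks above are mechanical fallout from the core scheduler change listed in the changelog ("Remove full_recover from drm_sched_start"): `drm_sched_start()` lost its boolean argument and now always performs a full restart. A toy userspace model of the new call-site contract, with hypothetical `toy_*` names rather than the real `struct drm_gpu_scheduler`:

```c
#include <stdbool.h>

/* Toy stand-in for a GPU scheduler; not the kernel type. */
struct toy_sched {
	bool stopped;
	int pending;	/* jobs waiting to be requeued after a reset */
	int requeued;	/* jobs pushed back to the hardware */
};

/*
 * Models the post-change drm_sched_start(): there is no full_recovery
 * flag anymore, so restarting the scheduler unconditionally requeues
 * every pending job and resumes submission.
 */
static void toy_sched_start(struct toy_sched *s)
{
	s->requeued += s->pending;
	s->pending = 0;
	s->stopped = false;
}
```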
+2 -6
drivers/gpu/drm/loongson/lsdc_ttm.c
··· 341 341 342 342 void lsdc_bo_ref(struct lsdc_bo *lbo) 343 343 { 344 - struct ttm_buffer_object *tbo = &lbo->tbo; 345 - 346 - ttm_bo_get(tbo); 344 + drm_gem_object_get(&lbo->tbo.base); 347 345 } 348 346 349 347 void lsdc_bo_unref(struct lsdc_bo *lbo) 350 348 { 351 - struct ttm_buffer_object *tbo = &lbo->tbo; 352 - 353 - ttm_bo_put(tbo); 349 + drm_gem_object_put(&lbo->tbo.base); 354 350 } 355 351 356 352 int lsdc_bo_kmap(struct lsdc_bo *lbo)
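The loongson hunk above (like the nouveau changes further down) switches buffer-object refcounting from TTM's `ttm_bo_get()/ttm_bo_put()` to the embedded GEM object's refcount, so a single counter governs the buffer's lifetime. The delegation pattern can be sketched in userspace C with toy types standing in for the embedded object hierarchy:

```c
/* Toy stand-ins for the embedded object hierarchy; not the kernel types. */
struct toy_gem_object { int refcount; };
struct toy_ttm_bo { struct toy_gem_object base; };
struct toy_lsdc_bo { struct toy_ttm_bo tbo; };

static void toy_gem_get(struct toy_gem_object *obj) { obj->refcount++; }
static void toy_gem_put(struct toy_gem_object *obj) { obj->refcount--; }

/*
 * Mirrors the patched lsdc_bo_ref()/lsdc_bo_unref(): take and drop the
 * reference on the GEM base object (&lbo->tbo.base) directly instead of
 * going through a TTM-level helper, so the GEM refcount is the single
 * source of truth for the buffer's lifetime.
 */
static void toy_bo_ref(struct toy_lsdc_bo *lbo)
{
	toy_gem_get(&lbo->tbo.base);
}

static void toy_bo_unref(struct toy_lsdc_bo *lbo)
{
	toy_gem_put(&lbo->tbo.base);
}
```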
+2 -7
drivers/gpu/drm/mgag200/mgag200_bmc.c
··· 14 14 return container_of(connector, struct mgag200_bmc_connector, base); 15 15 } 16 16 17 - void mgag200_bmc_disable_vidrst(struct mga_device *mdev) 17 + void mgag200_bmc_stop_scanout(struct mga_device *mdev) 18 18 { 19 19 u8 tmp; 20 20 int iter_max; ··· 73 73 } 74 74 } 75 75 76 - void mgag200_bmc_enable_vidrst(struct mga_device *mdev) 76 + void mgag200_bmc_start_scanout(struct mga_device *mdev) 77 77 { 78 78 u8 tmp; 79 - 80 - /* Ensure that the vrsten and hrsten are set */ 81 - WREG8(MGAREG_CRTCEXT_INDEX, 1); 82 - tmp = RREG8(MGAREG_CRTCEXT_DATA); 83 - WREG8(MGAREG_CRTCEXT_DATA, tmp | 0x88); 84 79 85 80 /* Assert rstlvl2 */ 86 81 WREG8(DAC_INDEX, MGA1064_REMHEADCTL2);
+40
drivers/gpu/drm/mgag200/mgag200_drv.c
··· 18 18 #include <drm/drm_managed.h> 19 19 #include <drm/drm_module.h> 20 20 #include <drm/drm_pciids.h> 21 + #include <drm/drm_vblank.h> 21 22 22 23 #include "mgag200_drv.h" 23 24 ··· 83 82 iowrite16(orig, mem); 84 83 85 84 return offset - 65536; 85 + } 86 + 87 + static irqreturn_t mgag200_irq_handler(int irq, void *arg) 88 + { 89 + struct drm_device *dev = arg; 90 + struct mga_device *mdev = to_mga_device(dev); 91 + struct drm_crtc *crtc; 92 + u32 status, ien; 93 + 94 + status = RREG32(MGAREG_STATUS); 95 + 96 + if (status & MGAREG_STATUS_VLINEPEN) { 97 + ien = RREG32(MGAREG_IEN); 98 + if (!(ien & MGAREG_IEN_VLINEIEN)) 99 + goto out; 100 + 101 + crtc = drm_crtc_from_index(dev, 0); 102 + if (WARN_ON_ONCE(!crtc)) 103 + goto out; 104 + drm_crtc_handle_vblank(crtc); 105 + 106 + WREG32(MGAREG_ICLEAR, MGAREG_ICLEAR_VLINEICLR); 107 + 108 + return IRQ_HANDLED; 109 + } 110 + 111 + out: 112 + return IRQ_NONE; 86 113 } 87 114 88 115 /* ··· 196 167 const struct mgag200_device_funcs *funcs) 197 168 { 198 169 struct drm_device *dev = &mdev->base; 170 + struct pci_dev *pdev = to_pci_dev(dev->dev); 199 171 u8 crtcext3, misc; 200 172 int ret; 201 173 ··· 221 191 WREG8(MGA_MISC_OUT, misc); 222 192 223 193 mutex_unlock(&mdev->rmmio_lock); 194 + 195 + WREG32(MGAREG_IEN, 0); 196 + WREG32(MGAREG_ICLEAR, MGAREG_ICLEAR_VLINEICLR); 197 + 198 + ret = devm_request_irq(&pdev->dev, pdev->irq, mgag200_irq_handler, IRQF_SHARED, 199 + dev->driver->name, dev); 200 + if (ret) { 201 + drm_err(dev, "Failed to acquire interrupt, error %d\n", ret); 202 + return ret; 203 + } 224 204 225 205 return 0; 226 206 }
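The new `mgag200_irq_handler()` above follows the standard shared-IRQ claim/acknowledge flow: check the status register for the pending bit, bail out as `IRQ_NONE` unless the interrupt is actually enabled in IEN, handle the vblank, then acknowledge by writing ICLEAR. A userspace model of that flow against a fake register file (toy names, not the real MMIO accessors):

```c
#include <stdbool.h>
#include <stdint.h>

#define STATUS_VLINEPEN	(1u << 5)	/* VLINE interrupt pending */
#define IEN_VLINEIEN	(1u << 5)	/* VLINE interrupt enabled */

/* Fake register file standing in for the MMIO window. */
struct toy_regs {
	uint32_t status;
	uint32_t ien;
	uint32_t iclear;
};

/*
 * Models mgag200_irq_handler(): only claim the interrupt when the
 * pending bit is set AND the interrupt is unmasked, and acknowledge it
 * by writing the clear bit. Returns true for IRQ_HANDLED, false for
 * IRQ_NONE (the line is shared, so unclaimed interrupts pass through).
 */
static bool toy_irq_handler(struct toy_regs *r)
{
	if (!(r->status & STATUS_VLINEPEN))
		return false;			/* not our interrupt */
	if (!(r->ien & IEN_VLINEIEN))
		return false;			/* pending but masked */

	/* ... drm_crtc_handle_vblank() would run here ... */

	r->iclear = STATUS_VLINEPEN;		/* acknowledge the irq */
	return true;
}
```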
+23 -22
drivers/gpu/drm/mgag200/mgag200_drv.h
··· 179 179 const struct drm_format_info *format; 180 180 181 181 struct mgag200_pll_values pixpllc; 182 + 183 + bool set_vidrst; 182 184 }; 183 185 184 186 static inline struct mgag200_crtc_state *to_mgag200_crtc_state(struct drm_crtc_state *base) ··· 216 214 */ 217 215 unsigned long max_mem_bandwidth; 218 216 219 - /* HW has external source (e.g., BMC) to synchronize with */ 220 - bool has_vidrst:1; 217 + /* Synchronize scanout with BMC */ 218 + bool sync_bmc:1; 221 219 222 220 struct { 223 221 unsigned data_bit:3; ··· 232 230 }; 233 231 234 232 #define MGAG200_DEVICE_INFO_INIT(_max_hdisplay, _max_vdisplay, _max_mem_bandwidth, \ 235 - _has_vidrst, _i2c_data_bit, _i2c_clock_bit, \ 233 + _sync_bmc, _i2c_data_bit, _i2c_clock_bit, \ 236 234 _bug_no_startadd) \ 237 235 { \ 238 236 .max_hdisplay = (_max_hdisplay), \ 239 237 .max_vdisplay = (_max_vdisplay), \ 240 238 .max_mem_bandwidth = (_max_mem_bandwidth), \ 241 - .has_vidrst = (_has_vidrst), \ 239 + .sync_bmc = (_sync_bmc), \ 242 240 .i2c = { \ 243 241 .data_bit = (_i2c_data_bit), \ 244 242 .clock_bit = (_i2c_clock_bit), \ ··· 247 245 } 248 246 249 247 struct mgag200_device_funcs { 250 - /* 251 - * Disables an external reset source (i.e., BMC) before programming 252 - * a new display mode. 253 - */ 254 - void (*disable_vidrst)(struct mga_device *mdev); 255 - 256 - /* 257 - * Enables an external reset source (i.e., BMC) after programming 258 - * a new display mode. 259 - */ 260 - void (*enable_vidrst)(struct mga_device *mdev); 261 - 262 248 /* 263 249 * Validate that the given state can be programmed into PIXPLLC. 
On 264 250 * success, the calculated parameters should be stored in the CRTC's ··· 400 410 void mgag200_crtc_helper_atomic_flush(struct drm_crtc *crtc, struct drm_atomic_state *old_state); 401 411 void mgag200_crtc_helper_atomic_enable(struct drm_crtc *crtc, struct drm_atomic_state *old_state); 402 412 void mgag200_crtc_helper_atomic_disable(struct drm_crtc *crtc, struct drm_atomic_state *old_state); 413 + bool mgag200_crtc_helper_get_scanout_position(struct drm_crtc *crtc, bool in_vblank_irq, 414 + int *vpos, int *hpos, 415 + ktime_t *stime, ktime_t *etime, 416 + const struct drm_display_mode *mode); 403 417 404 418 #define MGAG200_CRTC_HELPER_FUNCS \ 405 419 .mode_valid = mgag200_crtc_helper_mode_valid, \ 406 420 .atomic_check = mgag200_crtc_helper_atomic_check, \ 407 421 .atomic_flush = mgag200_crtc_helper_atomic_flush, \ 408 422 .atomic_enable = mgag200_crtc_helper_atomic_enable, \ 409 - .atomic_disable = mgag200_crtc_helper_atomic_disable 423 + .atomic_disable = mgag200_crtc_helper_atomic_disable, \ 424 + .get_scanout_position = mgag200_crtc_helper_get_scanout_position 410 425 411 426 void mgag200_crtc_reset(struct drm_crtc *crtc); 412 427 struct drm_crtc_state *mgag200_crtc_atomic_duplicate_state(struct drm_crtc *crtc); 413 428 void mgag200_crtc_atomic_destroy_state(struct drm_crtc *crtc, struct drm_crtc_state *crtc_state); 429 + int mgag200_crtc_enable_vblank(struct drm_crtc *crtc); 430 + void mgag200_crtc_disable_vblank(struct drm_crtc *crtc); 414 431 415 432 #define MGAG200_CRTC_FUNCS \ 416 433 .reset = mgag200_crtc_reset, \ ··· 425 428 .set_config = drm_atomic_helper_set_config, \ 426 429 .page_flip = drm_atomic_helper_page_flip, \ 427 430 .atomic_duplicate_state = mgag200_crtc_atomic_duplicate_state, \ 428 - .atomic_destroy_state = mgag200_crtc_atomic_destroy_state 431 + .atomic_destroy_state = mgag200_crtc_atomic_destroy_state, \ 432 + .enable_vblank = mgag200_crtc_enable_vblank, \ 433 + .disable_vblank = mgag200_crtc_disable_vblank, \ 434 + 
.get_vblank_timestamp = drm_crtc_vblank_helper_get_vblank_timestamp 429 435 430 - void mgag200_set_mode_regs(struct mga_device *mdev, const struct drm_display_mode *mode); 436 + void mgag200_set_mode_regs(struct mga_device *mdev, const struct drm_display_mode *mode, 437 + bool set_vidrst); 431 438 void mgag200_set_format_regs(struct mga_device *mdev, const struct drm_format_info *format); 432 439 void mgag200_enable_display(struct mga_device *mdev); 433 440 void mgag200_init_registers(struct mga_device *mdev); ··· 440 439 /* mgag200_vga.c */ 441 440 int mgag200_vga_output_init(struct mga_device *mdev); 442 441 443 - /* mgag200_bmc.c */ 444 - void mgag200_bmc_disable_vidrst(struct mga_device *mdev); 445 - void mgag200_bmc_enable_vidrst(struct mga_device *mdev); 442 + /* mgag200_bmc.c */ 443 + void mgag200_bmc_stop_scanout(struct mga_device *mdev); 444 + void mgag200_bmc_start_scanout(struct mga_device *mdev); 446 445 int mgag200_bmc_output_init(struct mga_device *mdev, struct drm_connector *physical_connector); 447 446 448 447 #endif /* __MGAG200_DRV_H__ */
+5
drivers/gpu/drm/mgag200/mgag200_g200.c
··· 8 8 #include <drm/drm_drv.h> 9 9 #include <drm/drm_gem_atomic_helper.h> 10 10 #include <drm/drm_probe_helper.h> 11 + #include <drm/drm_vblank.h> 11 12 12 13 #include "mgag200_drv.h" 13 14 ··· 403 402 404 403 drm_mode_config_reset(dev); 405 404 drm_kms_helper_poll_init(dev); 405 + 406 + ret = drm_vblank_init(dev, 1); 407 + if (ret) 408 + return ERR_PTR(ret); 406 409 407 410 return mdev; 408 411 }
+5
drivers/gpu/drm/mgag200/mgag200_g200eh.c
··· 8 8 #include <drm/drm_drv.h> 9 9 #include <drm/drm_gem_atomic_helper.h> 10 10 #include <drm/drm_probe_helper.h> 11 + #include <drm/drm_vblank.h> 11 12 12 13 #include "mgag200_drv.h" 13 14 ··· 279 278 280 279 drm_mode_config_reset(dev); 281 280 drm_kms_helper_poll_init(dev); 281 + 282 + ret = drm_vblank_init(dev, 1); 283 + if (ret) 284 + return ERR_PTR(ret); 282 285 283 286 return mdev; 284 287 }
+5
drivers/gpu/drm/mgag200/mgag200_g200eh3.c
··· 7 7 #include <drm/drm_drv.h> 8 8 #include <drm/drm_gem_atomic_helper.h> 9 9 #include <drm/drm_probe_helper.h> 10 + #include <drm/drm_vblank.h> 10 11 11 12 #include "mgag200_drv.h" 12 13 ··· 184 183 185 184 drm_mode_config_reset(dev); 186 185 drm_kms_helper_poll_init(dev); 186 + 187 + ret = drm_vblank_init(dev, 1); 188 + if (ret) 189 + return ERR_PTR(ret); 187 190 188 191 return mdev; 189 192 }
+12 -7
drivers/gpu/drm/mgag200/mgag200_g200er.c
··· 8 8 #include <drm/drm_drv.h> 9 9 #include <drm/drm_gem_atomic_helper.h> 10 10 #include <drm/drm_probe_helper.h> 11 + #include <drm/drm_vblank.h> 11 12 12 13 #include "mgag200_drv.h" 13 14 ··· 192 191 struct mgag200_crtc_state *mgag200_crtc_state = to_mgag200_crtc_state(crtc_state); 193 192 const struct drm_format_info *format = mgag200_crtc_state->format; 194 193 195 - if (funcs->disable_vidrst) 196 - funcs->disable_vidrst(mdev); 197 - 198 194 mgag200_set_format_regs(mdev, format); 199 - mgag200_set_mode_regs(mdev, adjusted_mode); 195 + mgag200_set_mode_regs(mdev, adjusted_mode, mgag200_crtc_state->set_vidrst); 200 196 201 197 if (funcs->pixpllc_atomic_update) 202 198 funcs->pixpllc_atomic_update(crtc, old_state); ··· 207 209 208 210 mgag200_enable_display(mdev); 209 211 210 - if (funcs->enable_vidrst) 211 - funcs->enable_vidrst(mdev); 212 + if (mdev->info->sync_bmc) 213 + mgag200_bmc_start_scanout(mdev); 214 + 215 + drm_crtc_vblank_on(crtc); 212 216 } 213 217 214 218 static const struct drm_crtc_helper_funcs mgag200_g200er_crtc_helper_funcs = { ··· 218 218 .atomic_check = mgag200_crtc_helper_atomic_check, 219 219 .atomic_flush = mgag200_crtc_helper_atomic_flush, 220 220 .atomic_enable = mgag200_g200er_crtc_helper_atomic_enable, 221 - .atomic_disable = mgag200_crtc_helper_atomic_disable 221 + .atomic_disable = mgag200_crtc_helper_atomic_disable, 222 + .get_scanout_position = mgag200_crtc_helper_get_scanout_position, 222 223 }; 223 224 224 225 static const struct drm_crtc_funcs mgag200_g200er_crtc_funcs = { ··· 318 317 319 318 drm_mode_config_reset(dev); 320 319 drm_kms_helper_poll_init(dev); 320 + 321 + ret = drm_vblank_init(dev, 1); 322 + if (ret) 323 + return ERR_PTR(ret); 321 324 322 325 return mdev; 323 326 }
+12 -7
drivers/gpu/drm/mgag200/mgag200_g200ev.c
··· 8 8 #include <drm/drm_drv.h> 9 9 #include <drm/drm_gem_atomic_helper.h> 10 10 #include <drm/drm_probe_helper.h> 11 + #include <drm/drm_vblank.h> 11 12 12 13 #include "mgag200_drv.h" 13 14 ··· 193 192 struct mgag200_crtc_state *mgag200_crtc_state = to_mgag200_crtc_state(crtc_state); 194 193 const struct drm_format_info *format = mgag200_crtc_state->format; 195 194 196 - if (funcs->disable_vidrst) 197 - funcs->disable_vidrst(mdev); 198 - 199 195 mgag200_set_format_regs(mdev, format); 200 - mgag200_set_mode_regs(mdev, adjusted_mode); 196 + mgag200_set_mode_regs(mdev, adjusted_mode, mgag200_crtc_state->set_vidrst); 201 197 202 198 if (funcs->pixpllc_atomic_update) 203 199 funcs->pixpllc_atomic_update(crtc, old_state); ··· 208 210 209 211 mgag200_enable_display(mdev); 210 212 211 - if (funcs->enable_vidrst) 212 - funcs->enable_vidrst(mdev); 213 + if (mdev->info->sync_bmc) 214 + mgag200_bmc_start_scanout(mdev); 215 + 216 + drm_crtc_vblank_on(crtc); 213 217 } 214 218 215 219 static const struct drm_crtc_helper_funcs mgag200_g200ev_crtc_helper_funcs = { ··· 219 219 .atomic_check = mgag200_crtc_helper_atomic_check, 220 220 .atomic_flush = mgag200_crtc_helper_atomic_flush, 221 221 .atomic_enable = mgag200_g200ev_crtc_helper_atomic_enable, 222 - .atomic_disable = mgag200_crtc_helper_atomic_disable 222 + .atomic_disable = mgag200_crtc_helper_atomic_disable, 223 + .get_scanout_position = mgag200_crtc_helper_get_scanout_position, 223 224 }; 224 225 225 226 static const struct drm_crtc_funcs mgag200_g200ev_crtc_funcs = { ··· 323 322 324 323 drm_mode_config_reset(dev); 325 324 drm_kms_helper_poll_init(dev); 325 + 326 + ret = drm_vblank_init(dev, 1); 327 + if (ret) 328 + return ERR_PTR(ret); 326 329 327 330 return mdev; 328 331 }
+5 -2
drivers/gpu/drm/mgag200/mgag200_g200ew3.c
··· 7 7 #include <drm/drm_drv.h> 8 8 #include <drm/drm_gem_atomic_helper.h> 9 9 #include <drm/drm_probe_helper.h> 10 + #include <drm/drm_vblank.h> 10 11 11 12 #include "mgag200_drv.h" 12 13 ··· 147 146 MGAG200_DEVICE_INFO_INIT(2048, 2048, 0, true, 0, 1, false); 148 147 149 148 static const struct mgag200_device_funcs mgag200_g200ew3_device_funcs = { 150 - .disable_vidrst = mgag200_bmc_disable_vidrst, 151 - .enable_vidrst = mgag200_bmc_enable_vidrst, 152 149 .pixpllc_atomic_check = mgag200_g200ew3_pixpllc_atomic_check, 153 150 .pixpllc_atomic_update = mgag200_g200wb_pixpllc_atomic_update, // same as G200WB 154 151 }; ··· 202 203 203 204 drm_mode_config_reset(dev); 204 205 drm_kms_helper_poll_init(dev); 206 + 207 + ret = drm_vblank_init(dev, 1); 208 + if (ret) 209 + return ERR_PTR(ret); 205 210 206 211 return mdev; 207 212 }
+12 -7
drivers/gpu/drm/mgag200/mgag200_g200se.c
··· 8 8 #include <drm/drm_drv.h> 9 9 #include <drm/drm_gem_atomic_helper.h> 10 10 #include <drm/drm_probe_helper.h> 11 + #include <drm/drm_vblank.h> 11 12 12 13 #include "mgag200_drv.h" 13 14 ··· 324 323 struct mgag200_crtc_state *mgag200_crtc_state = to_mgag200_crtc_state(crtc_state); 325 324 const struct drm_format_info *format = mgag200_crtc_state->format; 326 325 327 - if (funcs->disable_vidrst) 328 - funcs->disable_vidrst(mdev); 329 - 330 326 mgag200_set_format_regs(mdev, format); 331 - mgag200_set_mode_regs(mdev, adjusted_mode); 327 + mgag200_set_mode_regs(mdev, adjusted_mode, mgag200_crtc_state->set_vidrst); 332 328 333 329 if (funcs->pixpllc_atomic_update) 334 330 funcs->pixpllc_atomic_update(crtc, old_state); ··· 339 341 340 342 mgag200_enable_display(mdev); 341 343 342 - if (funcs->enable_vidrst) 343 - funcs->enable_vidrst(mdev); 344 + if (mdev->info->sync_bmc) 345 + mgag200_bmc_start_scanout(mdev); 346 + 347 + drm_crtc_vblank_on(crtc); 344 348 } 345 349 346 350 static const struct drm_crtc_helper_funcs mgag200_g200se_crtc_helper_funcs = { ··· 350 350 .atomic_check = mgag200_crtc_helper_atomic_check, 351 351 .atomic_flush = mgag200_crtc_helper_atomic_flush, 352 352 .atomic_enable = mgag200_g200se_crtc_helper_atomic_enable, 353 - .atomic_disable = mgag200_crtc_helper_atomic_disable 353 + .atomic_disable = mgag200_crtc_helper_atomic_disable, 354 + .get_scanout_position = mgag200_crtc_helper_get_scanout_position, 354 355 }; 355 356 356 357 static const struct drm_crtc_funcs mgag200_g200se_crtc_funcs = { ··· 523 522 524 523 drm_mode_config_reset(dev); 525 524 drm_kms_helper_poll_init(dev); 525 + 526 + ret = drm_vblank_init(dev, 1); 527 + if (ret) 528 + return ERR_PTR(ret); 526 529 527 530 return mdev; 528 531 }
+5 -2
drivers/gpu/drm/mgag200/mgag200_g200wb.c
··· 8 8 #include <drm/drm_drv.h> 9 9 #include <drm/drm_gem_atomic_helper.h> 10 10 #include <drm/drm_probe_helper.h> 11 + #include <drm/drm_vblank.h> 11 12 12 13 #include "mgag200_drv.h" 13 14 ··· 281 280 MGAG200_DEVICE_INFO_INIT(1280, 1024, 31877, true, 0, 1, false); 282 281 283 282 static const struct mgag200_device_funcs mgag200_g200wb_device_funcs = { 284 - .disable_vidrst = mgag200_bmc_disable_vidrst, 285 - .enable_vidrst = mgag200_bmc_enable_vidrst, 286 283 .pixpllc_atomic_check = mgag200_g200wb_pixpllc_atomic_check, 287 284 .pixpllc_atomic_update = mgag200_g200wb_pixpllc_atomic_update, 288 285 }; ··· 326 327 327 328 drm_mode_config_reset(dev); 328 329 drm_kms_helper_poll_init(dev); 330 + 331 + ret = drm_vblank_init(dev, 1); 332 + if (ret) 333 + return ERR_PTR(ret); 329 334 330 335 return mdev; 331 336 }
+135 -57
drivers/gpu/drm/mgag200/mgag200_mode.c
··· 22 22 #include <drm/drm_gem_framebuffer_helper.h> 23 23 #include <drm/drm_panic.h> 24 24 #include <drm/drm_print.h> 25 + #include <drm/drm_vblank.h> 25 26 26 27 #include "mgag200_ddc.h" 27 28 #include "mgag200_drv.h" ··· 202 201 WREG8(MGA_MISC_OUT, misc); 203 202 } 204 203 205 - void mgag200_set_mode_regs(struct mga_device *mdev, const struct drm_display_mode *mode) 204 + void mgag200_set_mode_regs(struct mga_device *mdev, const struct drm_display_mode *mode, 205 + bool set_vidrst) 206 206 { 207 - const struct mgag200_device_info *info = mdev->info; 208 - unsigned int hdisplay, hsyncstart, hsyncend, htotal; 209 - unsigned int vdisplay, vsyncstart, vsyncend, vtotal; 207 + unsigned int hdispend, hsyncstr, hsyncend, htotal, hblkstr, hblkend; 208 + unsigned int vdispend, vsyncstr, vsyncend, vtotal, vblkstr, vblkend; 209 + unsigned int linecomp; 210 210 u8 misc, crtcext1, crtcext2, crtcext5; 211 211 212 - hdisplay = mode->hdisplay / 8 - 1; 213 - hsyncstart = mode->hsync_start / 8 - 1; 214 - hsyncend = mode->hsync_end / 8 - 1; 215 - htotal = mode->htotal / 8 - 1; 216 - 212 + hdispend = mode->crtc_hdisplay / 8 - 1; 213 + hsyncstr = mode->crtc_hsync_start / 8 - 1; 214 + hsyncend = mode->crtc_hsync_end / 8 - 1; 215 + htotal = mode->crtc_htotal / 8 - 1; 217 216 /* Work around hardware quirk */ 218 217 if ((htotal & 0x07) == 0x06 || (htotal & 0x07) == 0x04) 219 218 htotal++; 219 + hblkstr = mode->crtc_hblank_start / 8 - 1; 220 + hblkend = htotal; 220 221 221 - vdisplay = mode->vdisplay - 1; 222 - vsyncstart = mode->vsync_start - 1; 223 - vsyncend = mode->vsync_end - 1; 224 - vtotal = mode->vtotal - 2; 222 + vdispend = mode->crtc_vdisplay - 1; 223 + vsyncstr = mode->crtc_vsync_start - 1; 224 + vsyncend = mode->crtc_vsync_end - 1; 225 + vtotal = mode->crtc_vtotal - 2; 226 + vblkstr = mode->crtc_vblank_start; 227 + vblkend = vtotal + 1; 228 + 229 + /* 230 + * There's no VBLANK interrupt on Matrox chipsets, so we use 231 + * the VLINE interrupt instead. 
It triggers when the current 232 + * <linecomp> has been reached. For VBLANK, this is the first 233 + * non-visible line at the bottom of the screen. Therefore, 234 + * keep <linecomp> in sync with <vblkstr>. 235 + */ 236 + linecomp = vblkstr; 225 237 226 238 misc = RREG8(MGA_MISC_IN); 227 239 ··· 249 235 misc &= ~MGAREG_MISC_VSYNCPOL; 250 236 251 237 crtcext1 = (((htotal - 4) & 0x100) >> 8) | 252 - ((hdisplay & 0x100) >> 7) | 253 - ((hsyncstart & 0x100) >> 6) | 254 - (htotal & 0x40); 255 - if (info->has_vidrst) 238 + ((hblkstr & 0x100) >> 7) | 239 + ((hsyncstr & 0x100) >> 6) | 240 + (hblkend & 0x40); 241 + if (set_vidrst) 256 242 crtcext1 |= MGAREG_CRTCEXT1_VRSTEN | 257 243 MGAREG_CRTCEXT1_HRSTEN; 258 244 259 245 crtcext2 = ((vtotal & 0xc00) >> 10) | 260 - ((vdisplay & 0x400) >> 8) | 261 - ((vdisplay & 0xc00) >> 7) | 262 - ((vsyncstart & 0xc00) >> 5) | 263 - ((vdisplay & 0x400) >> 3); 246 + ((vdispend & 0x400) >> 8) | 247 + ((vblkstr & 0xc00) >> 7) | 248 + ((vsyncstr & 0xc00) >> 5) | 249 + ((linecomp & 0x400) >> 3); 264 250 crtcext5 = 0x00; 265 251 266 - WREG_CRT(0, htotal - 4); 267 - WREG_CRT(1, hdisplay); 268 - WREG_CRT(2, hdisplay); 269 - WREG_CRT(3, (htotal & 0x1F) | 0x80); 270 - WREG_CRT(4, hsyncstart); 271 - WREG_CRT(5, ((htotal & 0x20) << 2) | (hsyncend & 0x1F)); 272 - WREG_CRT(6, vtotal & 0xFF); 273 - WREG_CRT(7, ((vtotal & 0x100) >> 8) | 274 - ((vdisplay & 0x100) >> 7) | 275 - ((vsyncstart & 0x100) >> 6) | 276 - ((vdisplay & 0x100) >> 5) | 277 - ((vdisplay & 0x100) >> 4) | /* linecomp */ 278 - ((vtotal & 0x200) >> 4) | 279 - ((vdisplay & 0x200) >> 3) | 280 - ((vsyncstart & 0x200) >> 2)); 281 - WREG_CRT(9, ((vdisplay & 0x200) >> 4) | 282 - ((vdisplay & 0x200) >> 3)); 283 - WREG_CRT(16, vsyncstart & 0xFF); 284 - WREG_CRT(17, (vsyncend & 0x0F) | 0x20); 285 - WREG_CRT(18, vdisplay & 0xFF); 286 - WREG_CRT(20, 0); 287 - WREG_CRT(21, vdisplay & 0xFF); 288 - WREG_CRT(22, (vtotal + 1) & 0xFF); 289 - WREG_CRT(23, 0xc3); 290 - WREG_CRT(24, vdisplay & 0xFF); 252 + 
WREG_CRT(0x00, htotal - 4); 253 + WREG_CRT(0x01, hdispend); 254 + WREG_CRT(0x02, hblkstr); 255 + WREG_CRT(0x03, (hblkend & 0x1f) | 0x80); 256 + WREG_CRT(0x04, hsyncstr); 257 + WREG_CRT(0x05, ((hblkend & 0x20) << 2) | (hsyncend & 0x1f)); 258 + WREG_CRT(0x06, vtotal & 0xff); 259 + WREG_CRT(0x07, ((vtotal & 0x100) >> 8) | 260 + ((vdispend & 0x100) >> 7) | 261 + ((vsyncstr & 0x100) >> 6) | 262 + ((vblkstr & 0x100) >> 5) | 263 + ((linecomp & 0x100) >> 4) | 264 + ((vtotal & 0x200) >> 4) | 265 + ((vdispend & 0x200) >> 3) | 266 + ((vsyncstr & 0x200) >> 2)); 267 + WREG_CRT(0x09, ((vblkstr & 0x200) >> 4) | 268 + ((linecomp & 0x200) >> 3)); 269 + WREG_CRT(0x10, vsyncstr & 0xff); 270 + WREG_CRT(0x11, (vsyncend & 0x0f) | 0x20); 271 + WREG_CRT(0x12, vdispend & 0xff); 272 + WREG_CRT(0x14, 0); 273 + WREG_CRT(0x15, vblkstr & 0xff); 274 + WREG_CRT(0x16, vblkend & 0xff); 275 + WREG_CRT(0x17, 0xc3); 276 + WREG_CRT(0x18, linecomp & 0xff); 291 277 292 278 WREG_ECRT(0x01, crtcext1); 293 279 WREG_ECRT(0x02, crtcext2); ··· 611 597 struct mga_device *mdev = to_mga_device(dev); 612 598 const struct mgag200_device_funcs *funcs = mdev->funcs; 613 599 struct drm_crtc_state *new_crtc_state = drm_atomic_get_new_crtc_state(new_state, crtc); 600 + struct mgag200_crtc_state *new_mgag200_crtc_state = to_mgag200_crtc_state(new_crtc_state); 614 601 struct drm_property_blob *new_gamma_lut = new_crtc_state->gamma_lut; 615 602 int ret; 616 603 ··· 621 606 ret = drm_atomic_helper_check_crtc_primary_plane(new_crtc_state); 622 607 if (ret) 623 608 return ret; 609 + 610 + new_mgag200_crtc_state->set_vidrst = mdev->info->sync_bmc; 624 611 625 612 if (new_crtc_state->mode_changed) { 626 613 if (funcs->pixpllc_atomic_check) { ··· 648 631 struct mgag200_crtc_state *mgag200_crtc_state = to_mgag200_crtc_state(crtc_state); 649 632 struct drm_device *dev = crtc->dev; 650 633 struct mga_device *mdev = to_mga_device(dev); 634 + struct drm_pending_vblank_event *event; 635 + unsigned long flags; 651 636 652 637 if 
(crtc_state->enable && crtc_state->color_mgmt_changed) { 653 638 const struct drm_format_info *format = mgag200_crtc_state->format; ··· 658 639 mgag200_crtc_set_gamma(mdev, format, crtc_state->gamma_lut->data); 659 640 else 660 641 mgag200_crtc_set_gamma_linear(mdev, format); 642 + } 643 + 644 + event = crtc->state->event; 645 + if (event) { 646 + crtc->state->event = NULL; 647 + 648 + spin_lock_irqsave(&dev->event_lock, flags); 649 + if (drm_crtc_vblank_get(crtc) != 0) 650 + drm_crtc_send_vblank_event(crtc, event); 651 + else 652 + drm_crtc_arm_vblank_event(crtc, event); 653 + spin_unlock_irqrestore(&dev->event_lock, flags); 661 654 } 662 655 } 663 656 ··· 683 652 struct mgag200_crtc_state *mgag200_crtc_state = to_mgag200_crtc_state(crtc_state); 684 653 const struct drm_format_info *format = mgag200_crtc_state->format; 685 654 686 - if (funcs->disable_vidrst) 687 - funcs->disable_vidrst(mdev); 688 - 689 655 mgag200_set_format_regs(mdev, format); 690 - mgag200_set_mode_regs(mdev, adjusted_mode); 656 + mgag200_set_mode_regs(mdev, adjusted_mode, mgag200_crtc_state->set_vidrst); 691 657 692 658 if (funcs->pixpllc_atomic_update) 693 659 funcs->pixpllc_atomic_update(crtc, old_state); ··· 696 668 697 669 mgag200_enable_display(mdev); 698 670 699 - if (funcs->enable_vidrst) 700 - funcs->enable_vidrst(mdev); 671 + if (mdev->info->sync_bmc) 672 + mgag200_bmc_start_scanout(mdev); 673 + 674 + drm_crtc_vblank_on(crtc); 701 675 } 702 676 703 677 void mgag200_crtc_helper_atomic_disable(struct drm_crtc *crtc, struct drm_atomic_state *old_state) 704 678 { 705 679 struct mga_device *mdev = to_mga_device(crtc->dev); 706 - const struct mgag200_device_funcs *funcs = mdev->funcs; 707 680 708 - if (funcs->disable_vidrst) 709 - funcs->disable_vidrst(mdev); 681 + drm_crtc_vblank_off(crtc); 682 + 683 + if (mdev->info->sync_bmc) 684 + mgag200_bmc_stop_scanout(mdev); 710 685 711 686 mgag200_disable_display(mdev); 687 + } 712 688 713 - if (funcs->enable_vidrst) 714 - 
funcs->enable_vidrst(mdev); 689 + bool mgag200_crtc_helper_get_scanout_position(struct drm_crtc *crtc, bool in_vblank_irq, 690 + int *vpos, int *hpos, 691 + ktime_t *stime, ktime_t *etime, 692 + const struct drm_display_mode *mode) 693 + { 694 + struct mga_device *mdev = to_mga_device(crtc->dev); 695 + u32 vcount; 696 + 697 + if (stime) 698 + *stime = ktime_get(); 699 + 700 + if (vpos) { 701 + vcount = RREG32(MGAREG_VCOUNT); 702 + *vpos = vcount & GENMASK(11, 0); 703 + } 704 + 705 + if (hpos) 706 + *hpos = mode->htotal >> 1; // near middle of scanline on average 707 + 708 + if (etime) 709 + *etime = ktime_get(); 710 + 711 + return true; 715 712 } 716 713 717 714 void mgag200_crtc_reset(struct drm_crtc *crtc) ··· 770 717 new_mgag200_crtc_state->format = mgag200_crtc_state->format; 771 718 memcpy(&new_mgag200_crtc_state->pixpllc, &mgag200_crtc_state->pixpllc, 772 719 sizeof(new_mgag200_crtc_state->pixpllc)); 720 + new_mgag200_crtc_state->set_vidrst = mgag200_crtc_state->set_vidrst; 773 721 774 722 return &new_mgag200_crtc_state->base; 775 723 } ··· 781 727 782 728 __drm_atomic_helper_crtc_destroy_state(&mgag200_crtc_state->base); 783 729 kfree(mgag200_crtc_state); 730 + } 731 + 732 + int mgag200_crtc_enable_vblank(struct drm_crtc *crtc) 733 + { 734 + struct mga_device *mdev = to_mga_device(crtc->dev); 735 + u32 ien; 736 + 737 + WREG32(MGAREG_ICLEAR, MGAREG_ICLEAR_VLINEICLR); 738 + 739 + ien = RREG32(MGAREG_IEN); 740 + ien |= MGAREG_IEN_VLINEIEN; 741 + WREG32(MGAREG_IEN, ien); 742 + 743 + return 0; 744 + } 745 + 746 + void mgag200_crtc_disable_vblank(struct drm_crtc *crtc) 747 + { 748 + struct mga_device *mdev = to_mga_device(crtc->dev); 749 + u32 ien; 750 + 751 + ien = RREG32(MGAREG_IEN); 752 + ien &= ~(MGAREG_IEN_VLINEIEN); 753 + WREG32(MGAREG_IEN, ien); 784 754 } 785 755 786 756 /*
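The reworked `mgag200_set_mode_regs()` above switches to the `crtc_*` timing fields, programs horizontal values in character clocks (pixels / 8, minus one), keeps the long-standing htotal quirk, and ties `linecomp` to vblank start so the VLINE interrupt fires at vblank. The arithmetic for those three values can be checked in isolation; this is a userspace sketch with toy types, not the register-programming code itself:

```c
/* Relevant subset of a DRM display mode's crtc_* timing fields. */
struct toy_mode {
	int crtc_hdisplay, crtc_htotal, crtc_vblank_start;
};

struct toy_timings {
	unsigned int hdispend, htotal, linecomp;
};

/*
 * Mirrors the arithmetic in mgag200_set_mode_regs(): horizontal values
 * are programmed in character clocks (pixels / 8, minus one), htotal is
 * bumped when its low bits land on 4 or 6 (hardware quirk), and
 * <linecomp> tracks vblank start so VLINE triggers at vblank.
 */
static struct toy_timings compute_timings(const struct toy_mode *m)
{
	struct toy_timings t;

	t.hdispend = m->crtc_hdisplay / 8 - 1;

	t.htotal = m->crtc_htotal / 8 - 1;
	if ((t.htotal & 0x07) == 0x06 || (t.htotal & 0x07) == 0x04)
		t.htotal++;

	t.linecomp = m->crtc_vblank_start;

	return t;
}
```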
+7
drivers/gpu/drm/mgag200/mgag200_reg.h
··· 102 102 #define MGAREG_EXEC 0x0100 103 103 104 104 #define MGAREG_FIFOSTATUS 0x1e10 105 + 105 106 #define MGAREG_STATUS 0x1e14 107 + #define MGAREG_STATUS_VLINEPEN BIT(5) 108 + 106 109 #define MGAREG_CACHEFLUSH 0x1fff 110 + 107 111 #define MGAREG_ICLEAR 0x1e18 112 + #define MGAREG_ICLEAR_VLINEICLR BIT(5) 113 + 108 114 #define MGAREG_IEN 0x1e1c 115 + #define MGAREG_IEN_VLINEIEN BIT(5) 109 116 110 117 #define MGAREG_VCOUNT 0x1e20 111 118
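The register header above adds bit 5 of STATUS/ICLEAR/IEN for the VLINE interrupt, and `mgag200_crtc_enable_vblank()`/`mgag200_crtc_disable_vblank()` in mgag200_mode.c use it as a plain read-modify-write mask, clearing any stale pending interrupt before unmasking. A userspace model of those two callbacks (toy register file, matching bit values):

```c
#include <stdint.h>

#define IEN_VLINEIEN		(1u << 5)	/* as MGAREG_IEN_VLINEIEN */
#define ICLEAR_VLINEICLR	(1u << 5)	/* as MGAREG_ICLEAR_VLINEICLR */

/* Fake register file standing in for the MMIO window. */
struct toy_regs {
	uint32_t ien;
	uint32_t iclear;
};

/*
 * Models mgag200_crtc_enable_vblank(): clear any stale pending VLINE
 * interrupt first, then unmask it with a read-modify-write on IEN so
 * other enable bits are preserved.
 */
static void toy_enable_vblank(struct toy_regs *r)
{
	r->iclear = ICLEAR_VLINEICLR;
	r->ien |= IEN_VLINEIEN;
}

/* Models mgag200_crtc_disable_vblank(): mask VLINE, keep other bits. */
static void toy_disable_vblank(struct toy_regs *r)
{
	r->ien &= ~IEN_VLINEIEN;
}
```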
+2 -3
drivers/gpu/drm/mxsfb/lcdif_kms.c
··· 407 407 struct drm_display_mode *m = &crtc_state->adjusted_mode; 408 408 409 409 DRM_DEV_DEBUG_DRIVER(drm->dev, "Pixel clock: %dkHz (actual: %dkHz)\n", 410 - m->crtc_clock, 411 - (int)(clk_get_rate(lcdif->clk) / 1000)); 410 + m->clock, (int)(clk_get_rate(lcdif->clk) / 1000)); 412 411 DRM_DEV_DEBUG_DRIVER(drm->dev, "Bridge bus_flags: 0x%08X\n", 413 412 lcdif_crtc_state->bus_flags); 414 413 DRM_DEV_DEBUG_DRIVER(drm->dev, "Mode flags: 0x%08X\n", m->flags); ··· 537 538 struct drm_device *drm = lcdif->drm; 538 539 dma_addr_t paddr; 539 540 540 - clk_set_rate(lcdif->clk, m->crtc_clock * 1000); 541 + clk_set_rate(lcdif->clk, m->clock * 1000); 541 542 542 543 pm_runtime_get_sync(drm->dev); 543 544
-1
drivers/gpu/drm/nouveau/Kbuild
···
 nouveau-$(CONFIG_LEDS_CLASS) += nouveau_led.o
 nouveau-y += nouveau_nvif.o
 nouveau-$(CONFIG_NOUVEAU_PLATFORM_DRIVER) += nouveau_platform.o
-nouveau-y += nouveau_usif.o # userspace <-> nvif
 nouveau-y += nouveau_vga.o
 
 # DRM - memory management
+38 -19
drivers/gpu/drm/nouveau/dispnv04/crtc.c
···
 {
 	struct drm_device *dev = crtc->dev;
 	struct nouveau_drm *drm = nouveau_drm(dev);
-	struct nvkm_bios *bios = nvxx_bios(&drm->client.device);
-	struct nvkm_clk *clk = nvxx_clk(&drm->client.device);
+	struct nvkm_bios *bios = nvxx_bios(drm);
+	struct nvkm_clk *clk = nvxx_clk(drm);
 	struct nouveau_crtc *nv_crtc = nouveau_crtc(crtc);
 	struct nv04_mode_state *state = &nv04_display(dev)->mode_reg;
 	struct nv04_crtc_reg *regp = &state->crtc_reg[nv_crtc->index];
···
 
 	ret = nouveau_bo_pin(nvbo, NOUVEAU_GEM_DOMAIN_VRAM, false);
 	if (ret == 0) {
-		if (disp->image[nv_crtc->index])
-			nouveau_bo_unpin(disp->image[nv_crtc->index]);
-		nouveau_bo_ref(nvbo, &disp->image[nv_crtc->index]);
+		if (disp->image[nv_crtc->index]) {
+			struct nouveau_bo *bo = disp->image[nv_crtc->index];
+
+			nouveau_bo_unpin(bo);
+			drm_gem_object_put(&bo->bo.base);
+		}
+
+		drm_gem_object_get(&nvbo->bo.base);
+		disp->image[nv_crtc->index] = nvbo;
 	}
 
 	return ret;
···
 
 	drm_crtc_cleanup(crtc);
 
-	if (disp->image[nv_crtc->index])
-		nouveau_bo_unpin(disp->image[nv_crtc->index]);
-	nouveau_bo_ref(NULL, &disp->image[nv_crtc->index]);
+	if (disp->image[nv_crtc->index]) {
+		struct nouveau_bo *bo = disp->image[nv_crtc->index];
+
+		nouveau_bo_unpin(bo);
+		drm_gem_object_put(&bo->bo.base);
+		disp->image[nv_crtc->index] = NULL;
+	}
 
 	nouveau_bo_unmap(nv_crtc->cursor.nvbo);
 	nouveau_bo_unpin(nv_crtc->cursor.nvbo);
-	nouveau_bo_ref(NULL, &nv_crtc->cursor.nvbo);
+	nouveau_bo_fini(nv_crtc->cursor.nvbo);
 	nvif_event_dtor(&nv_crtc->vblank);
 	nvif_head_dtor(&nv_crtc->head);
 	kfree(nv_crtc);
···
 {
 	struct nv04_display *disp = nv04_display(crtc->dev);
 	struct nouveau_crtc *nv_crtc = nouveau_crtc(crtc);
-	if (disp->image[nv_crtc->index])
-		nouveau_bo_unpin(disp->image[nv_crtc->index]);
-	nouveau_bo_ref(NULL, &disp->image[nv_crtc->index]);
+
+	if (disp->image[nv_crtc->index]) {
+		struct nouveau_bo *bo = disp->image[nv_crtc->index];
+
+		nouveau_bo_unpin(bo);
+		drm_gem_object_put(&bo->bo.base);
+		disp->image[nv_crtc->index] = NULL;
+	}
 }
 
 static int
···
 		struct nv04_page_flip_state *ps)
 {
 	struct nouveau_fence_chan *fctx = chan->fence;
-	struct nouveau_drm *drm = chan->drm;
+	struct nouveau_drm *drm = chan->cli->drm;
 	struct drm_device *dev = drm->dev;
 	struct nv04_page_flip_state *s;
 	unsigned long flags;
···
 		   struct nouveau_fence **pfence)
 {
 	struct nouveau_fence_chan *fctx = chan->fence;
-	struct nouveau_drm *drm = chan->drm;
+	struct nouveau_drm *drm = chan->cli->drm;
 	struct drm_device *dev = drm->dev;
-	struct nvif_push *push = chan->chan.push;
+	struct nvif_push *push = &chan->chan.push;
 	unsigned long flags;
 	int ret;
···
 	chan = drm->channel;
 	if (!chan)
 		return -ENODEV;
-	cli = (void *)chan->user.client;
-	push = chan->chan.push;
+	cli = chan->cli;
+	push = &chan->chan.push;
 
 	s = kzalloc(sizeof(*s), GFP_KERNEL);
 	if (!s)
···
 		PUSH_NVSQ(push, NV05F, 0x0130, 0);
 	}
 
-	nouveau_bo_ref(new_bo, &dispnv04->image[head]);
+	if (dispnv04->image[head])
+		drm_gem_object_put(&dispnv04->image[head]->bo.base);
+
+	drm_gem_object_get(&new_bo->bo.base);
+	dispnv04->image[head] = new_bo;
 
 	ret = nv04_page_flip_emit(chan, old_bo, new_bo, s, &fence);
 	if (ret)
···
 		nouveau_bo_unpin(nv_crtc->cursor.nvbo);
 	}
 	if (ret)
-		nouveau_bo_ref(NULL, &nv_crtc->cursor.nvbo);
+		nouveau_bo_fini(nv_crtc->cursor.nvbo);
 }
 
 	nv04_cursor_init(nv_crtc);
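The crtc.c changes above replace nouveau_bo_ref() with explicit drm_gem_object_get()/drm_gem_object_put() pairs around the cached disp->image pointer ("Use GEM refcount over TTM's" in the changelog). The underlying swap-with-refcount pattern can be sketched against a toy reference counter; all names here are illustrative, not nouveau API:

```c
#include <assert.h>
#include <stddef.h>

/* Toy refcounted object standing in for a GEM buffer object. */
struct obj {
	int refs;
};

static void obj_get(struct obj *o) { o->refs++; }
static void obj_put(struct obj *o) { o->refs--; /* real code frees at 0 */ }

/* Replace a cached pointer: take a reference on the new object before
 * dropping the one held on the old, as the hunks above do for
 * disp->image[...]. Ordering matters if old and new could alias. */
static void cache_set(struct obj **slot, struct obj *new)
{
	if (new)
		obj_get(new);
	if (*slot)
		obj_put(*slot);
	*slot = new;
}
```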
+1 -1
drivers/gpu/drm/nouveau/dispnv04/dac.c
···
 	struct drm_device *dev = encoder->dev;
 	struct nouveau_drm *drm = nouveau_drm(dev);
 	struct nvif_object *device = &nouveau_drm(dev)->client.device.object;
-	struct nvkm_gpio *gpio = nvxx_gpio(&drm->client.device);
+	struct nvkm_gpio *gpio = nvxx_gpio(drm);
 	struct dcb_output *dcb = nouveau_encoder(encoder)->dcb;
 	uint32_t sample, testval, regoffset = nv04_dac_output_offset(encoder);
 	uint32_t saved_powerctrl_2 = 0, saved_powerctrl_4 = 0, saved_routput,
+1 -1
drivers/gpu/drm/nouveau/dispnv04/dfp.c
···
 	struct drm_device *dev = encoder->dev;
 	struct dcb_output *dcb = nouveau_encoder(encoder)->dcb;
 	struct nouveau_drm *drm = nouveau_drm(dev);
-	struct nvkm_i2c *i2c = nvxx_i2c(&drm->client.device);
+	struct nvkm_i2c *i2c = nvxx_i2c(drm);
 	struct nvkm_i2c_bus *bus = nvkm_i2c_bus_find(i2c, NVKM_I2C_BUS_PRI);
 	struct nvkm_i2c_bus_probe info[] = {
 		{
+1 -6
drivers/gpu/drm/nouveau/dispnv04/disp.c
···
 nv04_display_destroy(struct drm_device *dev)
 {
 	struct nv04_display *disp = nv04_display(dev);
-	struct nouveau_drm *drm = nouveau_drm(dev);
 	struct nouveau_encoder *encoder;
 	struct nouveau_crtc *nv_crtc;
···
 
 	nouveau_display(dev)->priv = NULL;
 	vfree(disp);
-
-	nvif_object_unmap(&drm->client.device.object);
 }
 
 int
 nv04_display_create(struct drm_device *dev)
 {
 	struct nouveau_drm *drm = nouveau_drm(dev);
-	struct nvkm_i2c *i2c = nvxx_i2c(&drm->client.device);
+	struct nvkm_i2c *i2c = nvxx_i2c(drm);
 	struct dcb_table *dcb = &drm->vbios.dcb;
 	struct drm_connector *connector, *ct;
 	struct drm_encoder *encoder;
···
 		return -ENOMEM;
 
 	disp->drm = drm;
-
-	nvif_object_map(&drm->client.device.object, NULL, 0);
 
 	nouveau_display(dev)->priv = disp;
 	nouveau_display(dev)->dtor = nv04_display_destroy;
+1 -1
drivers/gpu/drm/nouveau/dispnv04/disp.h
···
 nouveau_bios_run_init_table(struct drm_device *dev, u16 table,
 			    struct dcb_output *outp, int crtc)
 {
-	nvbios_init(&nvxx_bios(&nouveau_drm(dev)->client.device)->subdev, table,
+	nvbios_init(&nvxx_bios(nouveau_drm(dev))->subdev, table,
 		init.outp = outp;
 		init.head = crtc;
 	);
+4 -5
drivers/gpu/drm/nouveau/dispnv04/hw.c
···
 {
 	struct nouveau_drm *drm = nouveau_drm(dev);
 	struct nvif_object *device = &drm->client.device.object;
-	struct nvkm_bios *bios = nvxx_bios(&drm->client.device);
+	struct nvkm_bios *bios = nvxx_bios(drm);
 	uint32_t reg1, pll1, pll2 = 0;
 	struct nvbios_pll pll_lim;
 	int ret;
···
 	 */
 
 	struct nouveau_drm *drm = nouveau_drm(dev);
-	struct nvif_device *device = &drm->client.device;
-	struct nvkm_clk *clk = nvxx_clk(device);
-	struct nvkm_bios *bios = nvxx_bios(device);
+	struct nvkm_clk *clk = nvxx_clk(drm);
+	struct nvkm_bios *bios = nvxx_bios(drm);
 	struct nvbios_pll pll_lim;
 	struct nvkm_pll_vals pv;
 	enum nvbios_pll_type pll = head ? PLL_VPLL1 : PLL_VPLL0;
···
 		   struct nv04_mode_state *state)
 {
 	struct nouveau_drm *drm = nouveau_drm(dev);
-	struct nvkm_clk *clk = nvxx_clk(&drm->client.device);
+	struct nvkm_clk *clk = nvxx_clk(drm);
 	struct nv04_crtc_reg *regp = &state->crtc_reg[head];
 	uint32_t pllreg = head ? NV_RAMDAC_VPLL2 : NV_PRAMDAC_VPLL_COEFF;
 	int i;
+2 -2
drivers/gpu/drm/nouveau/dispnv04/tvnv04.c
···
 int nv04_tv_identify(struct drm_device *dev, int i2c_index)
 {
 	struct nouveau_drm *drm = nouveau_drm(dev);
-	struct nvkm_i2c *i2c = nvxx_i2c(&drm->client.device);
+	struct nvkm_i2c *i2c = nvxx_i2c(drm);
 	struct nvkm_i2c_bus *bus = nvkm_i2c_bus_find(i2c, i2c_index);
 	if (bus) {
 		return nvkm_i2c_bus_probe(bus, "TV encoder",
···
 	struct drm_encoder *encoder;
 	struct drm_device *dev = connector->dev;
 	struct nouveau_drm *drm = nouveau_drm(dev);
-	struct nvkm_i2c *i2c = nvxx_i2c(&drm->client.device);
+	struct nvkm_i2c *i2c = nvxx_i2c(drm);
 	struct nvkm_i2c_bus *bus = nvkm_i2c_bus_find(i2c, entry->i2c_index);
 	int type, ret;
 
+3 -3
drivers/gpu/drm/nouveau/dispnv04/tvnv17.c
···
 {
 	struct drm_device *dev = encoder->dev;
 	struct nouveau_drm *drm = nouveau_drm(dev);
-	struct nvkm_gpio *gpio = nvxx_gpio(&drm->client.device);
+	struct nvkm_gpio *gpio = nvxx_gpio(drm);
 	uint32_t testval, regoffset = nv04_dac_output_offset(encoder);
 	uint32_t gpio0, gpio1, fp_htotal, fp_hsync_start, fp_hsync_end,
 		fp_control, test_ctrl, dacclk, ctv_14, ctv_1c, ctv_6c;
···
 get_tv_detect_quirks(struct drm_device *dev, uint32_t *pin_mask)
 {
 	struct nouveau_drm *drm = nouveau_drm(dev);
-	struct nvkm_device *device = nvxx_device(&drm->client.device);
+	struct nvkm_device *device = nvxx_device(drm);
 
 	if (device->quirk && device->quirk->tv_pin_mask) {
 		*pin_mask = device->quirk->tv_pin_mask;
···
 {
 	struct drm_device *dev = encoder->dev;
 	struct nouveau_drm *drm = nouveau_drm(dev);
-	struct nvkm_gpio *gpio = nvxx_gpio(&drm->client.device);
+	struct nvkm_gpio *gpio = nvxx_gpio(drm);
 	struct nv17_tv_state *regs = &to_tv_enc(encoder)->state;
 	struct nv17_tv_norm_params *tv_norm = get_tv_norm(encoder);
 
+10 -11
drivers/gpu/drm/nouveau/dispnv50/base507c.c
···
 int
 base507c_update(struct nv50_wndw *wndw, u32 *interlock)
 {
-	struct nvif_push *push = wndw->wndw.push;
+	struct nvif_push *push = &wndw->wndw.push;
 	int ret;
 
 	if ((ret = PUSH_WAIT(push, 2)))
···
 int
 base507c_image_clr(struct nv50_wndw *wndw)
 {
-	struct nvif_push *push = wndw->wndw.push;
+	struct nvif_push *push = &wndw->wndw.push;
 	int ret;
 
 	if ((ret = PUSH_WAIT(push, 4)))
···
 static int
 base507c_image_set(struct nv50_wndw *wndw, struct nv50_wndw_atom *asyw)
 {
-	struct nvif_push *push = wndw->wndw.push;
+	struct nvif_push *push = &wndw->wndw.push;
 	int ret;
 
 	if ((ret = PUSH_WAIT(push, 13)))
···
 int
 base507c_xlut_clr(struct nv50_wndw *wndw)
 {
-	struct nvif_push *push = wndw->wndw.push;
+	struct nvif_push *push = &wndw->wndw.push;
 	int ret;
 
 	if ((ret = PUSH_WAIT(push, 2)))
···
 int
 base507c_xlut_set(struct nv50_wndw *wndw, struct nv50_wndw_atom *asyw)
 {
-	struct nvif_push *push = wndw->wndw.push;
+	struct nvif_push *push = &wndw->wndw.push;
 	int ret;
 
 	if ((ret = PUSH_WAIT(push, 2)))
···
 int
 base507c_ntfy_clr(struct nv50_wndw *wndw)
 {
-	struct nvif_push *push = wndw->wndw.push;
+	struct nvif_push *push = &wndw->wndw.push;
 	int ret;
 
 	if ((ret = PUSH_WAIT(push, 2)))
···
 int
 base507c_ntfy_set(struct nv50_wndw *wndw, struct nv50_wndw_atom *asyw)
 {
-	struct nvif_push *push = wndw->wndw.push;
+	struct nvif_push *push = &wndw->wndw.push;
 	int ret;
 
 	if ((ret = PUSH_WAIT(push, 3)))
···
 int
 base507c_sema_clr(struct nv50_wndw *wndw)
 {
-	struct nvif_push *push = wndw->wndw.push;
+	struct nvif_push *push = &wndw->wndw.push;
 	int ret;
 
 	if ((ret = PUSH_WAIT(push, 2)))
···
 int
 base507c_sema_set(struct nv50_wndw *wndw, struct nv50_wndw_atom *asyw)
 {
-	struct nvif_push *push = wndw->wndw.push;
+	struct nvif_push *push = &wndw->wndw.push;
 	int ret;
 
 	if ((ret = PUSH_WAIT(push, 5)))
···
 	struct nvif_disp_chan_v0 args = {
 		.id = head,
 	};
-	struct nouveau_display *disp = nouveau_display(drm->dev);
 	struct nv50_disp *disp50 = nv50_disp(drm->dev);
 	struct nv50_wndw *wndw;
 	int ret;
···
 	if (*pwndw = wndw, ret)
 		return ret;
 
-	ret = nv50_dmac_create(&drm->client.device, &disp->disp.object,
+	ret = nv50_dmac_create(drm,
 			       &oclass, head, &args, sizeof(args),
 			       disp50->sync->offset, &wndw->wndw);
 	if (ret) {
+1 -1
drivers/gpu/drm/nouveau/dispnv50/base827c.c
···
 static int
 base827c_image_set(struct nv50_wndw *wndw, struct nv50_wndw_atom *asyw)
 {
-	struct nvif_push *push = wndw->wndw.push;
+	struct nvif_push *push = &wndw->wndw.push;
 	int ret;
 
 	if ((ret = PUSH_WAIT(push, 13)))
+5 -5
drivers/gpu/drm/nouveau/dispnv50/base907c.c
···
 static int
 base907c_image_set(struct nv50_wndw *wndw, struct nv50_wndw_atom *asyw)
 {
-	struct nvif_push *push = wndw->wndw.push;
+	struct nvif_push *push = &wndw->wndw.push;
 	int ret;
 
 	if ((ret = PUSH_WAIT(push, 10)))
···
 static int
 base907c_xlut_clr(struct nv50_wndw *wndw)
 {
-	struct nvif_push *push = wndw->wndw.push;
+	struct nvif_push *push = &wndw->wndw.push;
 	int ret;
 
 	if ((ret = PUSH_WAIT(push, 6)))
···
 static int
 base907c_xlut_set(struct nv50_wndw *wndw, struct nv50_wndw_atom *asyw)
 {
-	struct nvif_push *push = wndw->wndw.push;
+	struct nvif_push *push = &wndw->wndw.push;
 	int ret;
 
 	if ((ret = PUSH_WAIT(push, 6)))
···
 static int
 base907c_csc_clr(struct nv50_wndw *wndw)
 {
-	struct nvif_push *push = wndw->wndw.push;
+	struct nvif_push *push = &wndw->wndw.push;
 	int ret;
 
 	if ((ret = PUSH_WAIT(push, 2)))
···
 static int
 base907c_csc_set(struct nv50_wndw *wndw, struct nv50_wndw_atom *asyw)
 {
-	struct nvif_push *push = wndw->wndw.push;
+	struct nvif_push *push = &wndw->wndw.push;
 	int ret;
 
 	if ((ret = PUSH_WAIT(push, 13)))
+4 -4
drivers/gpu/drm/nouveau/dispnv50/core507d.c
···
 int
 core507d_update(struct nv50_core *core, u32 *interlock, bool ntfy)
 {
-	struct nvif_push *push = core->chan.push;
+	struct nvif_push *push = &core->chan.push;
 	int ret;
 
 	if ((ret = PUSH_WAIT(push, (ntfy ? 2 : 0) + 3)))
···
 int
 core507d_read_caps(struct nv50_disp *disp)
 {
-	struct nvif_push *push = disp->core->chan.push;
+	struct nvif_push *push = &disp->core->chan.push;
 	int ret;
 
 	ret = PUSH_WAIT(push, 6);
···
 int
 core507d_init(struct nv50_core *core)
 {
-	struct nvif_push *push = core->chan.push;
+	struct nvif_push *push = &core->chan.push;
 	int ret;
 
 	if ((ret = PUSH_WAIT(push, 2)))
···
 		return -ENOMEM;
 	core->func = func;
 
-	ret = nv50_dmac_create(&drm->client.device, &disp->disp->object,
+	ret = nv50_dmac_create(drm,
 			       &oclass, 0, &args, sizeof(args),
 			       disp->sync->offset, &core->chan);
 	if (ret) {
+3 -3
drivers/gpu/drm/nouveau/dispnv50/corec37d.c
···
 int
 corec37d_wndw_owner(struct nv50_core *core)
 {
-	struct nvif_push *push = core->chan.push;
+	struct nvif_push *push = &core->chan.push;
 	const u32 windows = 8; /*XXX*/
 	int ret, i;
 
···
 int
 corec37d_update(struct nv50_core *core, u32 *interlock, bool ntfy)
 {
-	struct nvif_push *push = core->chan.push;
+	struct nvif_push *push = &core->chan.push;
 	int ret;
 
 	if ((ret = PUSH_WAIT(push, (ntfy ? 2 * 2 : 0) + 5)))
···
 static int
 corec37d_init(struct nv50_core *core)
 {
-	struct nvif_push *push = core->chan.push;
+	struct nvif_push *push = &core->chan.push;
 	const u32 windows = 8; /*XXX*/
 	int ret, i;
 
+1 -1
drivers/gpu/drm/nouveau/dispnv50/corec57d.c
···
 static int
 corec57d_init(struct nv50_core *core)
 {
-	struct nvif_push *push = core->chan.push;
+	struct nvif_push *push = &core->chan.push;
 	const u32 windows = 8; /*XXX*/
 	int ret, i;
 
+2 -2
drivers/gpu/drm/nouveau/dispnv50/crc907d.c
···
 crc907d_set_src(struct nv50_head *head, int or, enum nv50_crc_source_type source,
 		struct nv50_crc_notifier_ctx *ctx)
 {
-	struct nvif_push *push = nv50_disp(head->base.base.dev)->core->chan.push;
+	struct nvif_push *push = &nv50_disp(head->base.base.dev)->core->chan.push;
 	const int i = head->base.index;
 	u32 crc_args = NVDEF(NV907D, HEAD_SET_CRC_CONTROL, CONTROLLING_CHANNEL, CORE) |
 		       NVDEF(NV907D, HEAD_SET_CRC_CONTROL, EXPECT_BUFFER_COLLAPSE, FALSE) |
···
 static int
 crc907d_set_ctx(struct nv50_head *head, struct nv50_crc_notifier_ctx *ctx)
 {
-	struct nvif_push *push = nv50_disp(head->base.base.dev)->core->chan.push;
+	struct nvif_push *push = &nv50_disp(head->base.base.dev)->core->chan.push;
 	const int i = head->base.index;
 	int ret;
 
+2 -2
drivers/gpu/drm/nouveau/dispnv50/crcc37d.c
···
 crcc37d_set_src(struct nv50_head *head, int or, enum nv50_crc_source_type source,
 		struct nv50_crc_notifier_ctx *ctx)
 {
-	struct nvif_push *push = nv50_disp(head->base.base.dev)->core->chan.push;
+	struct nvif_push *push = &nv50_disp(head->base.base.dev)->core->chan.push;
 	const int i = head->base.index;
 	u32 crc_args = NVVAL(NVC37D, HEAD_SET_CRC_CONTROL, CONTROLLING_CHANNEL, i * 4) |
 		       NVDEF(NVC37D, HEAD_SET_CRC_CONTROL, EXPECT_BUFFER_COLLAPSE, FALSE) |
···
 
 int crcc37d_set_ctx(struct nv50_head *head, struct nv50_crc_notifier_ctx *ctx)
 {
-	struct nvif_push *push = nv50_disp(head->base.base.dev)->core->chan.push;
+	struct nvif_push *push = &nv50_disp(head->base.base.dev)->core->chan.push;
 	const int i = head->base.index;
 	int ret;
 
+1 -1
drivers/gpu/drm/nouveau/dispnv50/crcc57d.c
···
 static int crcc57d_set_src(struct nv50_head *head, int or, enum nv50_crc_source_type source,
 			   struct nv50_crc_notifier_ctx *ctx)
 {
-	struct nvif_push *push = nv50_disp(head->base.base.dev)->core->chan.push;
+	struct nvif_push *push = &nv50_disp(head->base.base.dev)->core->chan.push;
 	const int i = head->base.index;
 	u32 crc_args = NVDEF(NVC57D, HEAD_SET_CRC_CONTROL, CONTROLLING_CHANNEL, CORE) |
 		       NVDEF(NVC57D, HEAD_SET_CRC_CONTROL, EXPECT_BUFFER_COLLAPSE, FALSE) |
+1 -1
drivers/gpu/drm/nouveau/dispnv50/dac507d.c
···
 dac507d_ctrl(struct nv50_core *core, int or, u32 ctrl,
 	     struct nv50_head_atom *asyh)
 {
-	struct nvif_push *push = core->chan.push;
+	struct nvif_push *push = &core->chan.push;
 	u32 sync = 0;
 	int ret;
 
+1 -1
drivers/gpu/drm/nouveau/dispnv50/dac907d.c
···
 dac907d_ctrl(struct nv50_core *core, int or, u32 ctrl,
 	     struct nv50_head_atom *asyh)
 {
-	struct nvif_push *push = core->chan.push;
+	struct nvif_push *push = &core->chan.push;
 	int ret;
 
 	if ((ret = PUSH_WAIT(push, 2)))
+37 -41
drivers/gpu/drm/nouveau/dispnv50/disp.c
···
 	ret = nvif_object_ctor(disp, "kmsChan", 0,
 			       oclass[0], data, size,
 			       &chan->user);
-	if (ret == 0)
-		nvif_object_map(&chan->user, NULL, 0);
+	if (ret == 0) {
+		ret = nvif_object_map(&chan->user, NULL, 0);
+		if (ret)
+			nvif_object_dtor(&chan->user);
+	}
 	nvif_object_sclass_put(&sclass);
 	return ret;
 }
···
 
 	nv50_chan_destroy(&dmac->base);
 
-	nvif_mem_dtor(&dmac->_push.mem);
+	nvif_mem_dtor(&dmac->push.mem);
 }
 
 static void
 nv50_dmac_kick(struct nvif_push *push)
 {
-	struct nv50_dmac *dmac = container_of(push, typeof(*dmac), _push);
+	struct nv50_dmac *dmac = container_of(push, typeof(*dmac), push);
 
-	dmac->cur = push->cur - (u32 __iomem *)dmac->_push.mem.object.map.ptr;
+	dmac->cur = push->cur - (u32 __iomem *)dmac->push.mem.object.map.ptr;
 	if (dmac->put != dmac->cur) {
 		/* Push buffer fetches are not coherent with BAR1, we need to ensure
 		 * writes have been flushed right through to VRAM before writing PUT.
 		 */
-		if (dmac->push->mem.type & NVIF_MEM_VRAM) {
+		if (dmac->push.mem.type & NVIF_MEM_VRAM) {
 			struct nvif_device *device = dmac->base.device;
 			nvif_wr32(&device->object, 0x070000, 0x00000001);
 			nvif_msec(device, 2000,
···
 	if (get == 0) {
 		/* Corner-case, HW idle, but non-committed work pending. */
 		if (dmac->put == 0)
-			nv50_dmac_kick(dmac->push);
+			nv50_dmac_kick(&dmac->push);
 
 		if (nvif_msec(dmac->base.device, 2000,
 			if (NVIF_TV32(&dmac->base.user, NV507C, GET, PTR, >, 0))
···
 		return -ETIMEDOUT;
 	}
 
-	PUSH_RSVD(dmac->push, PUSH_JUMP(dmac->push, 0));
+	PUSH_RSVD(&dmac->push, PUSH_JUMP(&dmac->push, 0));
 	dmac->cur = 0;
 	return 0;
 }
···
 static int
 nv50_dmac_wait(struct nvif_push *push, u32 size)
 {
-	struct nv50_dmac *dmac = container_of(push, typeof(*dmac), _push);
+	struct nv50_dmac *dmac = container_of(push, typeof(*dmac), push);
 	int free;
 
 	if (WARN_ON(size > dmac->max))
 		return -EINVAL;
 
-	dmac->cur = push->cur - (u32 __iomem *)dmac->_push.mem.object.map.ptr;
+	dmac->cur = push->cur - (u32 __iomem *)dmac->push.mem.object.map.ptr;
 	if (dmac->cur + size >= dmac->max) {
 		int ret = nv50_dmac_wind(dmac);
 		if (ret)
 			return ret;
 
-		push->cur = dmac->_push.mem.object.map.ptr;
+		push->cur = dmac->push.mem.object.map.ptr;
 		push->cur = push->cur + dmac->cur;
 		nv50_dmac_kick(push);
 	}
···
 		return -ETIMEDOUT;
 	}
 
-	push->bgn = dmac->_push.mem.object.map.ptr;
+	push->bgn = dmac->push.mem.object.map.ptr;
 	push->bgn = push->bgn + dmac->cur;
 	push->cur = push->bgn;
 	push->end = push->cur + free;
···
 module_param_named(kms_vram_pushbuf, nv50_dmac_vram_pushbuf, int, 0400);
 
 int
-nv50_dmac_create(struct nvif_device *device, struct nvif_object *disp,
+nv50_dmac_create(struct nouveau_drm *drm,
 		 const s32 *oclass, u8 head, void *data, u32 size, s64 syncbuf,
 		 struct nv50_dmac *dmac)
 {
-	struct nouveau_cli *cli = (void *)device->object.client;
+	struct nvif_device *device = &drm->device;
+	struct nvif_object *disp = &drm->display->disp.object;
 	struct nvif_disp_chan_v0 *args = data;
 	u8 type = NVIF_MEM_COHERENT;
 	int ret;
-
-	mutex_init(&dmac->lock);
 
 	/* Pascal added support for 47-bit physical addresses, but some
 	 * parts of EVO still only accept 40-bit PAs.
···
 	    (nv50_dmac_vram_pushbuf < 0 && device->info.family == NV_DEVICE_INFO_V0_PASCAL))
 		type |= NVIF_MEM_VRAM;
 
-	ret = nvif_mem_ctor_map(&cli->mmu, "kmsChanPush", type, 0x1000,
-				&dmac->_push.mem);
+	ret = nvif_mem_ctor_map(&drm->mmu, "kmsChanPush", type, 0x1000, &dmac->push.mem);
 	if (ret)
 		return ret;
 
-	dmac->ptr = dmac->_push.mem.object.map.ptr;
-	dmac->_push.wait = nv50_dmac_wait;
-	dmac->_push.kick = nv50_dmac_kick;
-	dmac->push = &dmac->_push;
-	dmac->push->bgn = dmac->_push.mem.object.map.ptr;
-	dmac->push->cur = dmac->push->bgn;
-	dmac->push->end = dmac->push->bgn;
+	dmac->push.wait = nv50_dmac_wait;
+	dmac->push.kick = nv50_dmac_kick;
+	dmac->push.bgn = dmac->push.mem.object.map.ptr;
+	dmac->push.cur = dmac->push.bgn;
+	dmac->push.end = dmac->push.bgn;
 	dmac->max = 0x1000/4 - 1;
 
 	/* EVO channels are affected by a HW bug where the last 12 DWORDs
···
 	if (disp->oclass < GV100_DISP)
 		dmac->max -= 12;
 
-	args->pushbuf = nvif_handle(&dmac->_push.mem.object);
+	args->pushbuf = nvif_handle(&dmac->push.mem.object);
 
 	ret = nv50_chan_create(device, disp, oclass, head, data, size,
 			       &dmac->base);
···
 {
 	struct drm_connector *connector = &nv_encoder->conn->base;
 	struct nouveau_drm *drm = nouveau_drm(connector->dev);
-	struct nvkm_i2c *i2c = nvxx_i2c(&drm->client.device);
+	struct nvkm_i2c *i2c = nvxx_i2c(drm);
 	struct nvkm_i2c_bus *bus;
 	struct drm_encoder *encoder;
 	struct dcb_output *dcbe = nv_encoder->dcb;
···
 nv50_audio_component_get_eld(struct device *kdev, int port, int dev_id,
 			     bool *enabled, unsigned char *buf, int max_bytes)
 {
-	struct drm_device *drm_dev = dev_get_drvdata(kdev);
-	struct nouveau_drm *drm = nouveau_drm(drm_dev);
+	struct nouveau_drm *drm = dev_get_drvdata(kdev);
 	struct drm_encoder *encoder;
 	struct nouveau_encoder *nv_encoder;
 	struct nouveau_crtc *nv_crtc;
···
 nv50_audio_component_bind(struct device *kdev, struct device *hda_kdev,
 			  void *data)
 {
-	struct drm_device *drm_dev = dev_get_drvdata(kdev);
-	struct nouveau_drm *drm = nouveau_drm(drm_dev);
+	struct nouveau_drm *drm = dev_get_drvdata(kdev);
 	struct drm_audio_component *acomp = data;
 
 	if (WARN_ON(!device_link_add(hda_kdev, kdev, DL_FLAG_STATELESS)))
 		return -ENOMEM;
 
-	drm_modeset_lock_all(drm_dev);
+	drm_modeset_lock_all(drm->dev);
 	acomp->ops = &nv50_audio_component_ops;
 	acomp->dev = kdev;
 	drm->audio.component = acomp;
-	drm_modeset_unlock_all(drm_dev);
+	drm_modeset_unlock_all(drm->dev);
 	return 0;
 }
···
 nv50_audio_component_unbind(struct device *kdev, struct device *hda_kdev,
 			    void *data)
 {
-	struct drm_device *drm_dev = dev_get_drvdata(kdev);
-	struct nouveau_drm *drm = nouveau_drm(drm_dev);
+	struct nouveau_drm *drm = dev_get_drvdata(kdev);
 	struct drm_audio_component *acomp = data;
 
-	drm_modeset_lock_all(drm_dev);
+	drm_modeset_lock_all(drm->dev);
 	drm->audio.component = NULL;
 	acomp->ops = NULL;
 	acomp->dev = NULL;
-	drm_modeset_unlock_all(drm_dev);
+	drm_modeset_unlock_all(drm->dev);
 }
 
 static const struct component_ops nv50_audio_component_bind_ops = {
···
 	struct drm_connector *connector = &nv_encoder->conn->base;
 	struct nouveau_connector *nv_connector = nouveau_connector(connector);
 	struct nouveau_drm *drm = nouveau_drm(connector->dev);
-	struct nvkm_i2c *i2c = nvxx_i2c(&drm->client.device);
+	struct nvkm_i2c *i2c = nvxx_i2c(drm);
 	struct drm_encoder *encoder;
 	struct dcb_output *dcbe = nv_encoder->dcb;
 	struct nv50_disp *disp = nv50_disp(connector->dev);
···
 	struct drm_device *dev = connector->dev;
 	struct nouveau_drm *drm = nouveau_drm(dev);
 	struct nv50_disp *disp = nv50_disp(dev);
-	struct nvkm_i2c *i2c = nvxx_i2c(&drm->client.device);
+	struct nvkm_i2c *i2c = nvxx_i2c(drm);
 	struct nvkm_i2c_bus *bus = NULL;
 	struct nvkm_i2c_aux *aux = NULL;
 	struct i2c_adapter *ddc;
···
 	nouveau_bo_unmap(disp->sync);
 	if (disp->sync)
 		nouveau_bo_unpin(disp->sync);
-	nouveau_bo_ref(NULL, &disp->sync);
+	nouveau_bo_fini(disp->sync);
 
 	nouveau_display(dev)->priv = NULL;
 	kfree(disp);
···
 		nouveau_bo_unpin(disp->sync);
 	}
 	if (ret)
-		nouveau_bo_ref(NULL, &disp->sync);
+		nouveau_bo_fini(disp->sync);
 }
 
 	if (ret)
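Much of this refactor flips nv50_dmac from holding a struct nvif_push pointer to embedding the struct, so nv50_dmac_kick()/nv50_dmac_wait() can recover the channel with container_of(). A self-contained sketch of why embedding makes that possible (container_of re-implemented here from offsetof; the struct names are illustrative, not the nouveau types):

```c
#include <assert.h>
#include <stddef.h>

/* Minimal re-implementation of the kernel's container_of(). */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct push {
	int cur;
};

/* Because the member is embedded (not separately allocated and
 * pointed to), its address sits at a fixed offset inside the parent,
 * so plain pointer arithmetic recovers the parent. */
struct dmac {
	int max;
	struct push push;
};

static struct dmac *dmac_from_push(struct push *p)
{
	return container_of(p, struct dmac, push);
}
```

With a pointer member this would not work: the pointed-to object could live anywhere, so no fixed offset relates it to its owner.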
+2 -12
drivers/gpu/drm/nouveau/dispnv50/disp.h
···
 struct nv50_dmac {
 	struct nv50_chan base;
 
-	struct nvif_push _push;
-	struct nvif_push *push;
-	u32 *ptr;
+	struct nvif_push push;
 
 	struct nvif_object sync;
 	struct nvif_object vram;
-
-	/* Protects against concurrent pushbuf access to this channel, lock is
-	 * grabbed by evo_wait (if the pushbuf reservation is successful) and
-	 * dropped again by evo_kick. */
-	struct mutex lock;
 
 	u32 cur;
 	u32 put;
···
 	} set, clr;
 };
 
-int nv50_dmac_create(struct nvif_device *device, struct nvif_object *disp,
+int nv50_dmac_create(struct nouveau_drm *,
 		     const s32 *oclass, u8 head, void *data, u32 size,
 		     s64 syncbuf, struct nv50_dmac *dmac);
 void nv50_dmac_destroy(struct nv50_dmac *);
···
  * return anyway.
  */
 struct nouveau_encoder *nv50_real_outp(struct drm_encoder *encoder);
-
-u32 *evo_wait(struct nv50_dmac *, int nr);
-void evo_kick(u32 *, struct nv50_dmac *);
 
 extern const u64 disp50xx_modifiers[];
 extern const u64 disp90xx_modifiers[];
+12 -12
drivers/gpu/drm/nouveau/dispnv50/head507d.c
··· 29 29 int 30 30 head507d_procamp(struct nv50_head *head, struct nv50_head_atom *asyh) 31 31 { 32 - struct nvif_push *push = nv50_disp(head->base.base.dev)->core->chan.push; 32 + struct nvif_push *push = &nv50_disp(head->base.base.dev)->core->chan.push; 33 33 const int i = head->base.index; 34 34 int ret; 35 35 ··· 48 48 int 49 49 head507d_dither(struct nv50_head *head, struct nv50_head_atom *asyh) 50 50 { 51 - struct nvif_push *push = nv50_disp(head->base.base.dev)->core->chan.push; 51 + struct nvif_push *push = &nv50_disp(head->base.base.dev)->core->chan.push; 52 52 const int i = head->base.index; 53 53 int ret; 54 54 ··· 66 66 int 67 67 head507d_ovly(struct nv50_head *head, struct nv50_head_atom *asyh) 68 68 { 69 - struct nvif_push *push = nv50_disp(head->base.base.dev)->core->chan.push; 69 + struct nvif_push *push = &nv50_disp(head->base.base.dev)->core->chan.push; 70 70 const int i = head->base.index; 71 71 u32 bounds = 0; 72 72 int ret; ··· 94 94 int 95 95 head507d_base(struct nv50_head *head, struct nv50_head_atom *asyh) 96 96 { 97 - struct nvif_push *push = nv50_disp(head->base.base.dev)->core->chan.push; 97 + struct nvif_push *push = &nv50_disp(head->base.base.dev)->core->chan.push; 98 98 const int i = head->base.index; 99 99 u32 bounds = 0; 100 100 int ret; ··· 122 122 static int 123 123 head507d_curs_clr(struct nv50_head *head) 124 124 { 125 - struct nvif_push *push = nv50_disp(head->base.base.dev)->core->chan.push; 125 + struct nvif_push *push = &nv50_disp(head->base.base.dev)->core->chan.push; 126 126 const int i = head->base.index; 127 127 int ret; 128 128 ··· 139 139 static int 140 140 head507d_curs_set(struct nv50_head *head, struct nv50_head_atom *asyh) 141 141 { 142 - struct nvif_push *push = nv50_disp(head->base.base.dev)->core->chan.push; 142 + struct nvif_push *push = &nv50_disp(head->base.base.dev)->core->chan.push; 143 143 const int i = head->base.index; 144 144 int ret; 145 145 ··· 188 188 int 189 189 head507d_core_clr(struct nv50_head 
*head) 190 190 { 191 - struct nvif_push *push = nv50_disp(head->base.base.dev)->core->chan.push; 191 + struct nvif_push *push = &nv50_disp(head->base.base.dev)->core->chan.push; 192 192 const int i = head->base.index; 193 193 int ret; 194 194 ··· 202 202 static int 203 203 head507d_core_set(struct nv50_head *head, struct nv50_head_atom *asyh) 204 204 { 205 - struct nvif_push *push = nv50_disp(head->base.base.dev)->core->chan.push; 205 + struct nvif_push *push = &nv50_disp(head->base.base.dev)->core->chan.push; 206 206 const int i = head->base.index; 207 207 int ret; 208 208 ··· 278 278 static int 279 279 head507d_olut_clr(struct nv50_head *head) 280 280 { 281 - struct nvif_push *push = nv50_disp(head->base.base.dev)->core->chan.push; 281 + struct nvif_push *push = &nv50_disp(head->base.base.dev)->core->chan.push; 282 282 const int i = head->base.index; 283 283 int ret; 284 284 ··· 293 293 static int 294 294 head507d_olut_set(struct nv50_head *head, struct nv50_head_atom *asyh) 295 295 { 296 - struct nvif_push *push = nv50_disp(head->base.base.dev)->core->chan.push; 296 + struct nvif_push *push = &nv50_disp(head->base.base.dev)->core->chan.push; 297 297 const int i = head->base.index; 298 298 int ret; 299 299 ··· 345 345 int 346 346 head507d_mode(struct nv50_head *head, struct nv50_head_atom *asyh) 347 347 { 348 - struct nvif_push *push = nv50_disp(head->base.base.dev)->core->chan.push; 348 + struct nvif_push *push = &nv50_disp(head->base.base.dev)->core->chan.push; 349 349 struct nv50_head_mode *m = &asyh->mode; 350 350 const int i = head->base.index; 351 351 int ret; ··· 400 400 int 401 401 head507d_view(struct nv50_head *head, struct nv50_head_atom *asyh) 402 402 { 403 - struct nvif_push *push = nv50_disp(head->base.base.dev)->core->chan.push; 403 + struct nvif_push *push = &nv50_disp(head->base.base.dev)->core->chan.push; 404 404 const int i = head->base.index; 405 405 int ret; 406 406
drivers/gpu/drm/nouveau/dispnv50/head827d.c (+5 -5)

@@ head827d_curs_clr() @@
-	struct nvif_push *push = nv50_disp(head->base.base.dev)->core->chan.push;
+	struct nvif_push *push = &nv50_disp(head->base.base.dev)->core->chan.push;

@@ head827d_curs_set() @@
-	struct nvif_push *push = nv50_disp(head->base.base.dev)->core->chan.push;
+	struct nvif_push *push = &nv50_disp(head->base.base.dev)->core->chan.push;

@@ head827d_core_set() @@
-	struct nvif_push *push = nv50_disp(head->base.base.dev)->core->chan.push;
+	struct nvif_push *push = &nv50_disp(head->base.base.dev)->core->chan.push;

@@ head827d_olut_clr() @@
-	struct nvif_push *push = nv50_disp(head->base.base.dev)->core->chan.push;
+	struct nvif_push *push = &nv50_disp(head->base.base.dev)->core->chan.push;

@@ head827d_olut_set() @@
-	struct nvif_push *push = nv50_disp(head->base.base.dev)->core->chan.push;
+	struct nvif_push *push = &nv50_disp(head->base.base.dev)->core->chan.push;
drivers/gpu/drm/nouveau/dispnv50/head907d.c (+13 -13)

@@ head907d_or() @@
-	struct nvif_push *push = nv50_disp(head->base.base.dev)->core->chan.push;
+	struct nvif_push *push = &nv50_disp(head->base.base.dev)->core->chan.push;

@@ head907d_procamp() @@
-	struct nvif_push *push = nv50_disp(head->base.base.dev)->core->chan.push;
+	struct nvif_push *push = &nv50_disp(head->base.base.dev)->core->chan.push;

@@ head907d_dither() @@
-	struct nvif_push *push = nv50_disp(head->base.base.dev)->core->chan.push;
+	struct nvif_push *push = &nv50_disp(head->base.base.dev)->core->chan.push;

@@ head907d_ovly() @@
-	struct nvif_push *push = nv50_disp(head->base.base.dev)->core->chan.push;
+	struct nvif_push *push = &nv50_disp(head->base.base.dev)->core->chan.push;

@@ head907d_base() @@
-	struct nvif_push *push = nv50_disp(head->base.base.dev)->core->chan.push;
+	struct nvif_push *push = &nv50_disp(head->base.base.dev)->core->chan.push;

@@ head907d_curs_clr() @@
-	struct nvif_push *push = nv50_disp(head->base.base.dev)->core->chan.push;
+	struct nvif_push *push = &nv50_disp(head->base.base.dev)->core->chan.push;

@@ head907d_curs_set() @@
-	struct nvif_push *push = nv50_disp(head->base.base.dev)->core->chan.push;
+	struct nvif_push *push = &nv50_disp(head->base.base.dev)->core->chan.push;

@@ head907d_core_clr() @@
-	struct nvif_push *push = nv50_disp(head->base.base.dev)->core->chan.push;
+	struct nvif_push *push = &nv50_disp(head->base.base.dev)->core->chan.push;

@@ head907d_core_set() @@
-	struct nvif_push *push = nv50_disp(head->base.base.dev)->core->chan.push;
+	struct nvif_push *push = &nv50_disp(head->base.base.dev)->core->chan.push;

@@ head907d_olut_clr() @@
-	struct nvif_push *push = nv50_disp(head->base.base.dev)->core->chan.push;
+	struct nvif_push *push = &nv50_disp(head->base.base.dev)->core->chan.push;

@@ head907d_olut_set() @@
-	struct nvif_push *push = nv50_disp(head->base.base.dev)->core->chan.push;
+	struct nvif_push *push = &nv50_disp(head->base.base.dev)->core->chan.push;

@@ head907d_mode() @@
-	struct nvif_push *push = nv50_disp(head->base.base.dev)->core->chan.push;
+	struct nvif_push *push = &nv50_disp(head->base.base.dev)->core->chan.push;

@@ head907d_view() @@
-	struct nvif_push *push = nv50_disp(head->base.base.dev)->core->chan.push;
+	struct nvif_push *push = &nv50_disp(head->base.base.dev)->core->chan.push;
drivers/gpu/drm/nouveau/dispnv50/head917d.c (+3 -3)

@@ head917d_dither() @@
-	struct nvif_push *push = nv50_disp(head->base.base.dev)->core->chan.push;
+	struct nvif_push *push = &nv50_disp(head->base.base.dev)->core->chan.push;

@@ head917d_base() @@
-	struct nvif_push *push = nv50_disp(head->base.base.dev)->core->chan.push;
+	struct nvif_push *push = &nv50_disp(head->base.base.dev)->core->chan.push;

@@ head917d_curs_set() @@
-	struct nvif_push *push = nv50_disp(head->base.base.dev)->core->chan.push;
+	struct nvif_push *push = &nv50_disp(head->base.base.dev)->core->chan.push;
drivers/gpu/drm/nouveau/dispnv50/headc37d.c (+9 -9)

@@ headc37d_or() @@
-	struct nvif_push *push = nv50_disp(head->base.base.dev)->core->chan.push;
+	struct nvif_push *push = &nv50_disp(head->base.base.dev)->core->chan.push;

@@ headc37d_procamp() @@
-	struct nvif_push *push = nv50_disp(head->base.base.dev)->core->chan.push;
+	struct nvif_push *push = &nv50_disp(head->base.base.dev)->core->chan.push;

@@ headc37d_dither() @@
-	struct nvif_push *push = nv50_disp(head->base.base.dev)->core->chan.push;
+	struct nvif_push *push = &nv50_disp(head->base.base.dev)->core->chan.push;

@@ headc37d_curs_clr() @@
-	struct nvif_push *push = nv50_disp(head->base.base.dev)->core->chan.push;
+	struct nvif_push *push = &nv50_disp(head->base.base.dev)->core->chan.push;

@@ headc37d_curs_set() @@
-	struct nvif_push *push = nv50_disp(head->base.base.dev)->core->chan.push;
+	struct nvif_push *push = &nv50_disp(head->base.base.dev)->core->chan.push;

@@ headc37d_olut_clr() @@
-	struct nvif_push *push = nv50_disp(head->base.base.dev)->core->chan.push;
+	struct nvif_push *push = &nv50_disp(head->base.base.dev)->core->chan.push;

@@ headc37d_olut_set() @@
-	struct nvif_push *push = nv50_disp(head->base.base.dev)->core->chan.push;
+	struct nvif_push *push = &nv50_disp(head->base.base.dev)->core->chan.push;

@@ headc37d_mode() @@
-	struct nvif_push *push = nv50_disp(head->base.base.dev)->core->chan.push;
+	struct nvif_push *push = &nv50_disp(head->base.base.dev)->core->chan.push;

@@ headc37d_view() @@
-	struct nvif_push *push = nv50_disp(head->base.base.dev)->core->chan.push;
+	struct nvif_push *push = &nv50_disp(head->base.base.dev)->core->chan.push;
drivers/gpu/drm/nouveau/dispnv50/headc57d.c (+6 -6)

@@ headc57d_display_id() @@
-	struct nvif_push *push = nv50_disp(head->base.base.dev)->core->chan.push;
+	struct nvif_push *push = &nv50_disp(head->base.base.dev)->core->chan.push;

@@ headc57d_or() @@
-	struct nvif_push *push = nv50_disp(head->base.base.dev)->core->chan.push;
+	struct nvif_push *push = &nv50_disp(head->base.base.dev)->core->chan.push;

@@ headc57d_procamp() @@
-	struct nvif_push *push = nv50_disp(head->base.base.dev)->core->chan.push;
+	struct nvif_push *push = &nv50_disp(head->base.base.dev)->core->chan.push;

@@ headc57d_olut_clr() @@
-	struct nvif_push *push = nv50_disp(head->base.base.dev)->core->chan.push;
+	struct nvif_push *push = &nv50_disp(head->base.base.dev)->core->chan.push;

@@ headc57d_olut_set() @@
-	struct nvif_push *push = nv50_disp(head->base.base.dev)->core->chan.push;
+	struct nvif_push *push = &nv50_disp(head->base.base.dev)->core->chan.push;

@@ headc57d_mode() @@
-	struct nvif_push *push = nv50_disp(head->base.base.dev)->core->chan.push;
+	struct nvif_push *push = &nv50_disp(head->base.base.dev)->core->chan.push;
drivers/gpu/drm/nouveau/dispnv50/ovly507e.c (+3 -3)

@@ ovly507e_scale_set() @@
-	struct nvif_push *push = wndw->wndw.push;
+	struct nvif_push *push = &wndw->wndw.push;

@@ ovly507e_image_set() @@
-	struct nvif_push *push = wndw->wndw.push;
+	struct nvif_push *push = &wndw->wndw.push;

@@ @@
 	if (*pwndw = wndw, ret)
 		return ret;

-	ret = nv50_dmac_create(&drm->client.device, &disp->disp->object,
+	ret = nv50_dmac_create(drm,
 			       &oclass, 0, &args, sizeof(args),
 			       disp->sync->offset, &wndw->wndw);
 	if (ret) {
drivers/gpu/drm/nouveau/dispnv50/ovly827e.c (+1 -1)

@@ ovly827e_image_set() @@
-	struct nvif_push *push = wndw->wndw.push;
+	struct nvif_push *push = &wndw->wndw.push;
drivers/gpu/drm/nouveau/dispnv50/ovly907e.c (+1 -1)

@@ ovly907e_image_set() @@
-	struct nvif_push *push = wndw->wndw.push;
+	struct nvif_push *push = &wndw->wndw.push;
drivers/gpu/drm/nouveau/dispnv50/pior507d.c (+1 -1)

@@ pior507d_ctrl() @@
-	struct nvif_push *push = core->chan.push;
+	struct nvif_push *push = &core->chan.push;
drivers/gpu/drm/nouveau/dispnv50/sor507d.c (+1 -1)

@@ sor507d_ctrl() @@
-	struct nvif_push *push = core->chan.push;
+	struct nvif_push *push = &core->chan.push;
drivers/gpu/drm/nouveau/dispnv50/sor907d.c (+1 -1)

@@ sor907d_ctrl() @@
-	struct nvif_push *push = core->chan.push;
+	struct nvif_push *push = &core->chan.push;
drivers/gpu/drm/nouveau/dispnv50/sorc37d.c (+1 -1)

@@ sorc37d_ctrl() @@
-	struct nvif_push *push = core->chan.push;
+	struct nvif_push *push = &core->chan.push;
drivers/gpu/drm/nouveau/dispnv50/wimmc37b.c (+3 -4)

@@ wimmc37b_update() @@
-	struct nvif_push *push = wndw->wimm.push;
+	struct nvif_push *push = &wndw->wimm.push;

@@ wimmc37b_point() @@
-	struct nvif_push *push = wndw->wimm.push;
+	struct nvif_push *push = &wndw->wimm.push;

@@ @@
 	struct nvif_disp_chan_v0 args = {
 		.id = wndw->id,
 	};
-	struct nv50_disp *disp = nv50_disp(drm->dev);
 	int ret;

-	ret = nv50_dmac_create(&drm->client.device, &disp->disp->object,
+	ret = nv50_dmac_create(drm,
 			       &oclass, 0, &args, sizeof(args), -1,
 			       &wndw->wimm);
 	if (ret) {
drivers/gpu/drm/nouveau/dispnv50/wndwc37e.c (+12 -12)

@@ wndwc37e_csc_set() @@
-	struct nvif_push *push = wndw->wndw.push;
+	struct nvif_push *push = &wndw->wndw.push;

@@ wndwc37e_ilut_clr() @@
-	struct nvif_push *push = wndw->wndw.push;
+	struct nvif_push *push = &wndw->wndw.push;

@@ wndwc37e_ilut_set() @@
-	struct nvif_push *push = wndw->wndw.push;
+	struct nvif_push *push = &wndw->wndw.push;

@@ wndwc37e_blend_set() @@
-	struct nvif_push *push = wndw->wndw.push;
+	struct nvif_push *push = &wndw->wndw.push;

@@ wndwc37e_image_clr() @@
-	struct nvif_push *push = wndw->wndw.push;
+	struct nvif_push *push = &wndw->wndw.push;

@@ wndwc37e_image_set() @@
-	struct nvif_push *push = wndw->wndw.push;
+	struct nvif_push *push = &wndw->wndw.push;

@@ wndwc37e_ntfy_clr() @@
-	struct nvif_push *push = wndw->wndw.push;
+	struct nvif_push *push = &wndw->wndw.push;

@@ wndwc37e_ntfy_set() @@
-	struct nvif_push *push = wndw->wndw.push;
+	struct nvif_push *push = &wndw->wndw.push;

@@ wndwc37e_sema_clr() @@
-	struct nvif_push *push = wndw->wndw.push;
+	struct nvif_push *push = &wndw->wndw.push;

@@ wndwc37e_sema_set() @@
-	struct nvif_push *push = wndw->wndw.push;
+	struct nvif_push *push = &wndw->wndw.push;

@@ wndwc37e_update() @@
-	struct nvif_push *push = wndw->wndw.push;
+	struct nvif_push *push = &wndw->wndw.push;

@@ @@
 	if (*pwndw = wndw, ret)
 		return ret;

-	ret = nv50_dmac_create(&drm->client.device, &disp->disp->object,
+	ret = nv50_dmac_create(drm,
 			       &oclass, 0, &args, sizeof(args),
 			       disp->sync->offset, &wndw->wndw);
 	if (ret) {
drivers/gpu/drm/nouveau/dispnv50/wndwc57e.c (+5 -5)

@@ wndwc57e_image_set() @@
-	struct nvif_push *push = wndw->wndw.push;
+	struct nvif_push *push = &wndw->wndw.push;

@@ wndwc57e_csc_clr() @@
-	struct nvif_push *push = wndw->wndw.push;
+	struct nvif_push *push = &wndw->wndw.push;

@@ wndwc57e_csc_set() @@
-	struct nvif_push *push = wndw->wndw.push;
+	struct nvif_push *push = &wndw->wndw.push;

@@ wndwc57e_ilut_clr() @@
-	struct nvif_push *push = wndw->wndw.push;
+	struct nvif_push *push = &wndw->wndw.push;

@@ wndwc57e_ilut_set() @@
-	struct nvif_push *push = wndw->wndw.push;
+	struct nvif_push *push = &wndw->wndw.push;
drivers/gpu/drm/nouveau/dispnv50/wndwc67e.c (+1 -1)

@@ wndwc67e_image_set() @@
-	struct nvif_push *push = wndw->wndw.push;
+	struct nvif_push *push = &wndw->wndw.push;
drivers/gpu/drm/nouveau/include/nvif/cl0080.h (-7)

···
 #ifndef __NVIF_CL0080_H__
 #define __NVIF_CL0080_H__

-struct nv_device_v0 {
-	__u8  version;
-	__u8  priv;
-	__u8  pad02[6];
-	__u64 device;	/* device identifier, ~0 for client default */
-};
-
 #define NV_DEVICE_V0_INFO 0x00
 #define NV_DEVICE_V0_TIME 0x01
drivers/gpu/drm/nouveau/include/nvif/class.h (-3)

···
 #define NVIF_CLASS_CONTROL /* if0001.h */ -0x00000001

-#define NVIF_CLASS_PERFMON /* if0002.h */ -0x00000002
-#define NVIF_CLASS_PERFDOM /* if0003.h */ -0x00000003
-
 #define NVIF_CLASS_SW_NV04 /* if0004.h */ -0x00000004
 #define NVIF_CLASS_SW_NV10 /* if0005.h */ -0x00000005
 #define NVIF_CLASS_SW_NV50 /* if0005.h */ -0x00000006
drivers/gpu/drm/nouveau/include/nvif/client.h (+1 -10)

···
 struct nvif_client {
 	struct nvif_object object;
 	const struct nvif_driver *driver;
-	u64 version;
-	u8 route;
 };

-int nvif_client_ctor(struct nvif_client *parent, const char *name, u64 device,
-		     struct nvif_client *);
+int nvif_client_ctor(struct nvif_client *parent, const char *name, struct nvif_client *);
 void nvif_client_dtor(struct nvif_client *);
-int nvif_client_ioctl(struct nvif_client *, void *, u32);
 int nvif_client_suspend(struct nvif_client *);
 int nvif_client_resume(struct nvif_client *);

 /*XXX*/
-#include <core/client.h>
-#define nvxx_client(a) ({ \
-	struct nvif_client *_client = (a); \
-	(struct nvkm_client *)_client->object.priv; \
-})
 #endif
drivers/gpu/drm/nouveau/include/nvif/device.h (+2 -35)

···
 	struct nvif_user user;
 };

-int nvif_device_ctor(struct nvif_object *, const char *name, u32 handle,
-		     s32 oclass, void *, u32, struct nvif_device *);
+int nvif_device_ctor(struct nvif_client *, const char *name, struct nvif_device *);
 void nvif_device_dtor(struct nvif_device *);
+int nvif_device_map(struct nvif_device *);
 u64 nvif_device_time(struct nvif_device *);
-
-/*XXX*/
-#include <subdev/bios.h>
-#include <subdev/fb.h>
-#include <subdev/bar.h>
-#include <subdev/gpio.h>
-#include <subdev/clk.h>
-#include <subdev/i2c.h>
-#include <subdev/timer.h>
-#include <subdev/therm.h>
-#include <subdev/pci.h>
-
-#define nvxx_device(a) ({ \
-	struct nvif_device *_device = (a); \
-	struct { \
-		struct nvkm_object object; \
-		struct nvkm_device *device; \
-	} *_udevice = _device->object.priv; \
-	_udevice->device; \
-})
-#define nvxx_bios(a) nvxx_device(a)->bios
-#define nvxx_fb(a) nvxx_device(a)->fb
-#define nvxx_gpio(a) nvxx_device(a)->gpio
-#define nvxx_clk(a) nvxx_device(a)->clk
-#define nvxx_i2c(a) nvxx_device(a)->i2c
-#define nvxx_iccsense(a) nvxx_device(a)->iccsense
-#define nvxx_therm(a) nvxx_device(a)->therm
-#define nvxx_volt(a) nvxx_device(a)->volt
-
-#include <engine/fifo.h>
-#include <engine/gr.h>
-
-#define nvxx_gr(a) nvxx_device(a)->gr
 #endif
drivers/gpu/drm/nouveau/include/nvif/driver.h (-5)

···
 	const char *name;
 	int (*init)(const char *name, u64 device, const char *cfg,
 		    const char *dbg, void **priv);
-	void (*fini)(void *priv);
 	int (*suspend)(void *priv);
 	int (*resume)(void *priv);
 	int (*ioctl)(void *priv, void *data, u32 size, void **hack);
 	void __iomem *(*map)(void *priv, u64 handle, u32 size);
 	void (*unmap)(void *priv, void __iomem *ptr, u32 size);
-	bool keep;
 };

 int nvif_driver_init(const char *drv, const char *cfg, const char *dbg,
		      const char *name, u64 device, struct nvif_client *);

 extern const struct nvif_driver nvif_driver_nvkm;
-extern const struct nvif_driver nvif_driver_drm;
-extern const struct nvif_driver nvif_driver_lib;
-extern const struct nvif_driver nvif_driver_null;
 #endif
drivers/gpu/drm/nouveau/include/nvif/if0000.h (-10)

···
 struct nvif_client_v0 {
 	__u8  version;
 	__u8  pad01[7];
-	__u64 device;
 	char  name[32];
 };
-
-#define NVIF_CLIENT_V0_DEVLIST 0x00
-
-struct nvif_client_devlist_v0 {
-	__u8  version;
-	__u8  count;
-	__u8  pad02[6];
-	__u64 device[];
-};
 #endif
drivers/gpu/drm/nouveau/include/nvif/if0002.h (-39, file deleted)

-/* SPDX-License-Identifier: MIT */
-#ifndef __NVIF_IF0002_H__
-#define __NVIF_IF0002_H__
-
-#define NVIF_PERFMON_V0_QUERY_DOMAIN 0x00
-#define NVIF_PERFMON_V0_QUERY_SIGNAL 0x01
-#define NVIF_PERFMON_V0_QUERY_SOURCE 0x02
-
-struct nvif_perfmon_query_domain_v0 {
-	__u8  version;
-	__u8  id;
-	__u8  counter_nr;
-	__u8  iter;
-	__u16 signal_nr;
-	__u8  pad05[2];
-	char  name[64];
-};
-
-struct nvif_perfmon_query_signal_v0 {
-	__u8  version;
-	__u8  domain;
-	__u16 iter;
-	__u8  signal;
-	__u8  source_nr;
-	__u8  pad05[2];
-	char  name[64];
-};
-
-struct nvif_perfmon_query_source_v0 {
-	__u8  version;
-	__u8  domain;
-	__u8  signal;
-	__u8  iter;
-	__u8  pad04[4];
-	__u32 source;
-	__u32 mask;
-	char  name[64];
-};
-#endif
drivers/gpu/drm/nouveau/include/nvif/if0003.h (-34, file deleted)

-/* SPDX-License-Identifier: MIT */
-#ifndef __NVIF_IF0003_H__
-#define __NVIF_IF0003_H__
-
-struct nvif_perfdom_v0 {
-	__u8  version;
-	__u8  domain;
-	__u8  mode;
-	__u8  pad03[1];
-	struct {
-		__u8  signal[4];
-		__u64 source[4][8];
-		__u16 logic_op;
-	} ctr[4];
-};
-
-#define NVIF_PERFDOM_V0_INIT   0x00
-#define NVIF_PERFDOM_V0_SAMPLE 0x01
-#define NVIF_PERFDOM_V0_READ   0x02
-
-struct nvif_perfdom_init {
-};
-
-struct nvif_perfdom_sample {
-};
-
-struct nvif_perfdom_read_v0 {
-	__u8  version;
-	__u8  pad01[7];
-	__u32 ctr[4];
-	__u32 clk;
-	__u8  pad04[4];
-};
-#endif
drivers/gpu/drm/nouveau/include/nvif/ioctl.h (-27)

···
 #ifndef __NVIF_IOCTL_H__
 #define __NVIF_IOCTL_H__

-#define NVIF_VERSION_LATEST 0x0000000000000100ULL
-
 struct nvif_ioctl_v0 {
 	__u8  version;
-#define NVIF_IOCTL_V0_NOP    0x00
 #define NVIF_IOCTL_V0_SCLASS 0x01
 #define NVIF_IOCTL_V0_NEW    0x02
 #define NVIF_IOCTL_V0_DEL    0x03
 #define NVIF_IOCTL_V0_MTHD   0x04
-#define NVIF_IOCTL_V0_RD     0x05
-#define NVIF_IOCTL_V0_WR     0x06
 #define NVIF_IOCTL_V0_MAP    0x07
 #define NVIF_IOCTL_V0_UNMAP  0x08
 	__u8  type;
···
 	__u64 token;
 	__u64 object;
 	__u8  data[];	/* ioctl data (below) */
 };
-
-struct nvif_ioctl_nop_v0 {
-	__u64 version;
-};

 struct nvif_ioctl_sclass_v0 {
···
 	__u8  method;
 	__u8  pad02[6];
 	__u8  data[];	/* method data (class.h) */
 };
-
-struct nvif_ioctl_rd_v0 {
-	/* nvif_ioctl ... */
-	__u8  version;
-	__u8  size;
-	__u8  pad02[2];
-	__u32 data;
-	__u64 addr;
-};
-
-struct nvif_ioctl_wr_v0 {
-	/* nvif_ioctl ... */
-	__u8  version;
-	__u8  size;
-	__u8  pad02[2];
-	__u32 data;
-	__u64 addr;
-};

 struct nvif_ioctl_map_v0 {
drivers/gpu/drm/nouveau/include/nvif/object.h (+3 -21)

···
 int nvif_object_ioctl(struct nvif_object *, void *, u32, void **);
 int nvif_object_sclass_get(struct nvif_object *, struct nvif_sclass **);
 void nvif_object_sclass_put(struct nvif_sclass **);
-u32  nvif_object_rd(struct nvif_object *, int, u64);
-void nvif_object_wr(struct nvif_object *, int, u64, u32);
 int nvif_object_mthd(struct nvif_object *, u32, void *, u32);
 int nvif_object_map_handle(struct nvif_object *, void *, u32,
			    u64 *handle, u64 *length);
···
 #define nvif_object(a) (a)->object

 #define nvif_rd(a,f,b,c) ({ \
-	struct nvif_object *_object = (a); \
-	u32 _data; \
-	if (likely(_object->map.ptr)) \
-		_data = f((u8 __iomem *)_object->map.ptr + (c)); \
-	else \
-		_data = nvif_object_rd(_object, (b), (c)); \
+	u32 _data = f((u8 __iomem *)(a)->map.ptr + (c)); \
 	_data; \
 })
 #define nvif_wr(a,f,b,c,d) ({ \
-	struct nvif_object *_object = (a); \
-	if (likely(_object->map.ptr)) \
-		f((d), (u8 __iomem *)_object->map.ptr + (c)); \
-	else \
-		nvif_object_wr(_object, (b), (c), (d)); \
+	f((d), (u8 __iomem *)(a)->map.ptr + (c)); \
 })
 #define nvif_rd08(a,b) ({ ((u8)nvif_rd((a), ioread8, 1, (b))); })
 #define nvif_rd16(a,b) ({ ((u16)nvif_rd((a), ioread16_native, 2, (b))); })
···
 #define nvif_wr16(a,b,c) nvif_wr((a), iowrite16_native, 2, (b), (u16)(c))
 #define nvif_wr32(a,b,c) nvif_wr((a), iowrite32_native, 4, (b), (u32)(c))
 #define nvif_mask(a,b,c,d) ({ \
-	struct nvif_object *__object = (a); \
+	typeof(a) __object = (a); \
 	u32 _addr = (b), _data = nvif_rd32(__object, _addr); \
 	nvif_wr32(__object, _addr, (_data & ~(c)) | (d)); \
 	_data; \
 })
···
 #define NVIF_MR32(p,A...) DRF_MR(NVIF_RD32_, NVIF_WR32_, u32, (p), 0, ##A)
 #define NVIF_MV32(p,A...) DRF_MV(NVIF_RD32_, NVIF_WR32_, u32, (p), 0, ##A)
 #define NVIF_MD32(p,A...) DRF_MD(NVIF_RD32_, NVIF_WR32_, u32, (p), 0, ##A)
-
-/*XXX*/
-#include <core/object.h>
-#define nvxx_object(a) ({ \
-	struct nvif_object *_object = (a); \
-	(struct nvkm_object *)_object->priv; \
-})
 #endif
drivers/gpu/drm/nouveau/include/nvif/os.h (+19)

···
 #include <soc/tegra/fuse.h>
 #include <soc/tegra/pmc.h>
+
+#ifdef __BIG_ENDIAN
+#define ioread16_native ioread16be
+#define iowrite16_native iowrite16be
+#define ioread32_native ioread32be
+#define iowrite32_native iowrite32be
+#else
+#define ioread16_native ioread16
+#define iowrite16_native iowrite16
+#define ioread32_native ioread32
+#define iowrite32_native iowrite32
+#endif
+
+#define iowrite64_native(v,p) do { \
+	u32 __iomem *_p = (u32 __iomem *)(p); \
+	u64 _v = (v); \
+	iowrite32_native(lower_32_bits(_v), &_p[0]); \
+	iowrite32_native(upper_32_bits(_v), &_p[1]); \
+} while(0)
 #endif
drivers/gpu/drm/nouveau/include/nvkm/core/client.h (-1)

···
 int nvkm_client_new(const char *name, u64 device, const char *cfg, const char *dbg,
		     int (*)(u64, void *, u32), struct nvkm_client **);
-struct nvkm_client *nvkm_client_search(struct nvkm_client *, u64 handle);

 /* logging for client-facing objects */
 #define nvif_printk(o,l,p,f,a...) do { \
drivers/gpu/drm/nouveau/include/nvkm/core/device.h (-1)

···
 };

 struct nvkm_device *nvkm_device_find(u64 name);
-int nvkm_device_list(u64 *name, int size);

 /* privileged register interface accessor macros */
 #define nvkm_rd08(d,a) ioread8((d)->pri + (a))
drivers/gpu/drm/nouveau/include/nvkm/core/layout.h (-1)

···
 NVKM_LAYOUT_INST(NVKM_ENGINE_NVENC   , struct nvkm_nvenc  , nvenc, 3)
 NVKM_LAYOUT_INST(NVKM_ENGINE_NVJPG   , struct nvkm_engine , nvjpg, 8)
 NVKM_LAYOUT_ONCE(NVKM_ENGINE_OFA     , struct nvkm_engine , ofa)
-NVKM_LAYOUT_ONCE(NVKM_ENGINE_PM      , struct nvkm_pm     , pm)
 NVKM_LAYOUT_ONCE(NVKM_ENGINE_SEC     , struct nvkm_engine , sec)
 NVKM_LAYOUT_ONCE(NVKM_ENGINE_SEC2    , struct nvkm_sec2   , sec2)
 NVKM_LAYOUT_ONCE(NVKM_ENGINE_SW      , struct nvkm_sw     , sw)
drivers/gpu/drm/nouveau/include/nvkm/core/object.h (-14)

···
 	struct list_head head;
 	struct list_head tree;
-	u8  route;
-	u64 token;
 	u64 object;
 	struct rb_node node;
 };
···
 	int (*map)(struct nvkm_object *, void *argv, u32 argc,
		    enum nvkm_object_map *, u64 *addr, u64 *size);
 	int (*unmap)(struct nvkm_object *);
-	int (*rd08)(struct nvkm_object *, u64 addr, u8 *data);
-	int (*rd16)(struct nvkm_object *, u64 addr, u16 *data);
-	int (*rd32)(struct nvkm_object *, u64 addr, u32 *data);
-	int (*wr08)(struct nvkm_object *, u64 addr, u8 data);
-	int (*wr16)(struct nvkm_object *, u64 addr, u16 data);
-	int (*wr32)(struct nvkm_object *, u64 addr, u32 data);
 	int (*bind)(struct nvkm_object *, struct nvkm_gpuobj *, int align,
		     struct nvkm_gpuobj **);
 	int (*sclass)(struct nvkm_object *, int index, struct nvkm_oclass *);
···
 int nvkm_object_map(struct nvkm_object *, void *argv, u32 argc,
		     enum nvkm_object_map *, u64 *addr, u64 *size);
 int nvkm_object_unmap(struct nvkm_object *);
-int nvkm_object_rd08(struct nvkm_object *, u64 addr, u8 *data);
-int nvkm_object_rd16(struct nvkm_object *, u64 addr, u16 *data);
-int nvkm_object_rd32(struct nvkm_object *, u64 addr, u32 *data);
-int nvkm_object_wr08(struct nvkm_object *, u64 addr, u8 data);
-int nvkm_object_wr16(struct nvkm_object *, u64 addr, u16 data);
-int nvkm_object_wr32(struct nvkm_object *, u64 addr, u32 data);
 int nvkm_object_bind(struct nvkm_object *, struct nvkm_gpuobj *, int align,
		      struct nvkm_gpuobj **);
-2
drivers/gpu/drm/nouveau/include/nvkm/core/oclass.h
··· 21 21 const void *priv; 22 22 const void *engn; 23 23 u32 handle; 24 - u8 route; 25 - u64 token; 26 24 u64 object; 27 25 struct nvkm_client *client; 28 26 struct nvkm_object *parent;
-19
drivers/gpu/drm/nouveau/include/nvkm/core/os.h
··· 3 3 #define __NVKM_OS_H__ 4 4 #include <nvif/os.h> 5 5 6 - #ifdef __BIG_ENDIAN 7 - #define ioread16_native ioread16be 8 - #define iowrite16_native iowrite16be 9 - #define ioread32_native ioread32be 10 - #define iowrite32_native iowrite32be 11 - #else 12 - #define ioread16_native ioread16 13 - #define iowrite16_native iowrite16 14 - #define ioread32_native ioread32 15 - #define iowrite32_native iowrite32 16 - #endif 17 - 18 - #define iowrite64_native(v,p) do { \ 19 - u32 __iomem *_p = (u32 __iomem *)(p); \ 20 - u64 _v = (v); \ 21 - iowrite32_native(lower_32_bits(_v), &_p[0]); \ 22 - iowrite32_native(upper_32_bits(_v), &_p[1]); \ 23 - } while(0) 24 - 25 6 struct nvkm_blob { 26 7 void *data; 27 8 u32 size;
-1
drivers/gpu/drm/nouveau/include/nvkm/core/pci.h
··· 10 10 }; 11 11 12 12 int nvkm_device_pci_new(struct pci_dev *, const char *cfg, const char *dbg, 13 - bool detect, bool mmio, u64 subdev_mask, 14 13 struct nvkm_device **); 15 14 #endif
-1
drivers/gpu/drm/nouveau/include/nvkm/core/tegra.h
··· 51 51 int nvkm_device_tegra_new(const struct nvkm_device_tegra_func *, 52 52 struct platform_device *, 53 53 const char *cfg, const char *dbg, 54 - bool detect, bool mmio, u64 subdev_mask, 55 54 struct nvkm_device **); 56 55 #endif
-29
drivers/gpu/drm/nouveau/include/nvkm/engine/pm.h
··· 1 - /* SPDX-License-Identifier: MIT */ 2 - #ifndef __NVKM_PM_H__ 3 - #define __NVKM_PM_H__ 4 - #include <core/engine.h> 5 - 6 - struct nvkm_pm { 7 - const struct nvkm_pm_func *func; 8 - struct nvkm_engine engine; 9 - 10 - struct { 11 - spinlock_t lock; 12 - struct nvkm_object *object; 13 - } client; 14 - 15 - struct list_head domains; 16 - struct list_head sources; 17 - u32 sequence; 18 - }; 19 - 20 - int nv40_pm_new(struct nvkm_device *, enum nvkm_subdev_type, int inst, struct nvkm_pm **); 21 - int nv50_pm_new(struct nvkm_device *, enum nvkm_subdev_type, int inst, struct nvkm_pm **); 22 - int g84_pm_new(struct nvkm_device *, enum nvkm_subdev_type, int inst, struct nvkm_pm **); 23 - int gt200_pm_new(struct nvkm_device *, enum nvkm_subdev_type, int inst, struct nvkm_pm **); 24 - int gt215_pm_new(struct nvkm_device *, enum nvkm_subdev_type, int inst, struct nvkm_pm **); 25 - int gf100_pm_new(struct nvkm_device *, enum nvkm_subdev_type, int inst, struct nvkm_pm **); 26 - int gf108_pm_new(struct nvkm_device *, enum nvkm_subdev_type, int inst, struct nvkm_pm **); 27 - int gf117_pm_new(struct nvkm_device *, enum nvkm_subdev_type, int inst, struct nvkm_pm **); 28 - int gk104_pm_new(struct nvkm_device *, enum nvkm_subdev_type, int inst, struct nvkm_pm **); 29 - #endif
+257 -73
drivers/gpu/drm/nouveau/nouveau_abi16.c
··· 46 46 struct nouveau_abi16 *abi16; 47 47 cli->abi16 = abi16 = kzalloc(sizeof(*abi16), GFP_KERNEL); 48 48 if (cli->abi16) { 49 - struct nv_device_v0 args = { 50 - .device = ~0ULL, 51 - }; 52 - 49 + abi16->cli = cli; 53 50 INIT_LIST_HEAD(&abi16->channels); 54 - 55 - /* allocate device object targeting client's default 56 - * device (ie. the one that belongs to the fd it 57 - * opened) 58 - */ 59 - if (nvif_device_ctor(&cli->base.object, "abi16Device", 60 - 0, NV_DEVICE, &args, sizeof(args), 61 - &abi16->device) == 0) 62 - return cli->abi16; 63 - 64 - kfree(cli->abi16); 65 - cli->abi16 = NULL; 51 + INIT_LIST_HEAD(&abi16->objects); 66 52 } 67 53 } 68 54 return cli->abi16; ··· 68 82 int 69 83 nouveau_abi16_put(struct nouveau_abi16 *abi16, int ret) 70 84 { 71 - struct nouveau_cli *cli = (void *)abi16->device.object.client; 85 + struct nouveau_cli *cli = abi16->cli; 72 86 mutex_unlock(&cli->mutex); 73 87 return ret; 88 + } 89 + 90 + /* Tracks objects created via the DRM_NOUVEAU_NVIF ioctl. 91 + * 92 + * The only two types of object that userspace ever allocated via this 93 + * interface are 'device', in order to retrieve basic device info, and 94 + * 'engine objects', which instantiate HW classes on a channel. 95 + * 96 + * The remainder of what used to be available via DRM_NOUVEAU_NVIF has 97 + * been removed, but these object types need to be tracked to maintain 98 + * compatibility with userspace. 
99 + */ 100 + struct nouveau_abi16_obj { 101 + enum nouveau_abi16_obj_type { 102 + DEVICE, 103 + ENGOBJ, 104 + } type; 105 + u64 object; 106 + 107 + struct nvif_object engobj; 108 + 109 + struct list_head head; /* protected by nouveau_abi16.cli.mutex */ 110 + }; 111 + 112 + static struct nouveau_abi16_obj * 113 + nouveau_abi16_obj_find(struct nouveau_abi16 *abi16, u64 object) 114 + { 115 + struct nouveau_abi16_obj *obj; 116 + 117 + list_for_each_entry(obj, &abi16->objects, head) { 118 + if (obj->object == object) 119 + return obj; 120 + } 121 + 122 + return NULL; 123 + } 124 + 125 + static void 126 + nouveau_abi16_obj_del(struct nouveau_abi16_obj *obj) 127 + { 128 + list_del(&obj->head); 129 + kfree(obj); 130 + } 131 + 132 + static struct nouveau_abi16_obj * 133 + nouveau_abi16_obj_new(struct nouveau_abi16 *abi16, enum nouveau_abi16_obj_type type, u64 object) 134 + { 135 + struct nouveau_abi16_obj *obj; 136 + 137 + obj = nouveau_abi16_obj_find(abi16, object); 138 + if (obj) 139 + return ERR_PTR(-EEXIST); 140 + 141 + obj = kzalloc(sizeof(*obj), GFP_KERNEL); 142 + if (!obj) 143 + return ERR_PTR(-ENOMEM); 144 + 145 + obj->type = type; 146 + obj->object = object; 147 + list_add_tail(&obj->head, &abi16->objects); 148 + return obj; 74 149 } 75 150 76 151 s32 ··· 211 164 void 212 165 nouveau_abi16_fini(struct nouveau_abi16 *abi16) 213 166 { 214 - struct nouveau_cli *cli = (void *)abi16->device.object.client; 167 + struct nouveau_cli *cli = abi16->cli; 215 168 struct nouveau_abi16_chan *chan, *temp; 169 + struct nouveau_abi16_obj *obj, *tmp; 170 + 171 + /* cleanup objects */ 172 + list_for_each_entry_safe(obj, tmp, &abi16->objects, head) { 173 + nouveau_abi16_obj_del(obj); 174 + } 216 175 217 176 /* cleanup channels */ 218 177 list_for_each_entry_safe(chan, temp, &abi16->channels, head) { 219 178 nouveau_abi16_chan_fini(abi16, chan); 220 179 } 221 - 222 - /* destroy the device object */ 223 - nvif_device_dtor(&abi16->device); 224 180 225 181 kfree(cli->abi16); 226 182 
cli->abi16 = NULL; ··· 249 199 struct nouveau_cli *cli = nouveau_cli(file_priv); 250 200 struct nouveau_drm *drm = nouveau_drm(dev); 251 201 struct nvif_device *device = &drm->client.device; 252 - struct nvkm_device *nvkm_device = nvxx_device(&drm->client.device); 253 - struct nvkm_gr *gr = nvxx_gr(device); 202 + struct nvkm_device *nvkm_device = nvxx_device(drm); 203 + struct nvkm_gr *gr = nvxx_gr(drm); 254 204 struct drm_nouveau_getparam *getparam = data; 255 205 struct pci_dev *pdev = to_pci_dev(dev->dev); 256 206 ··· 341 291 struct nouveau_drm *drm = nouveau_drm(dev); 342 292 struct nouveau_abi16 *abi16 = nouveau_abi16_get(file_priv); 343 293 struct nouveau_abi16_chan *chan; 344 - struct nvif_device *device; 294 + struct nvif_device *device = &cli->device; 345 295 u64 engine, runm; 346 296 int ret; 347 297 ··· 358 308 */ 359 309 __nouveau_cli_disable_uvmm_noinit(cli); 360 310 361 - device = &abi16->device; 362 311 engine = NV_DEVICE_HOST_RUNLIST_ENGINES_GR; 363 312 364 313 /* hack to allow channel engine type specification on kepler */ ··· 405 356 list_add(&chan->head, &abi16->channels); 406 357 407 358 /* create channel object and initialise dma and fence management */ 408 - ret = nouveau_channel_new(drm, device, false, runm, init->fb_ctxdma_handle, 359 + ret = nouveau_channel_new(cli, false, runm, init->fb_ctxdma_handle, 409 360 init->tt_ctxdma_handle, &chan->chan); 410 361 if (ret) 411 362 goto done; ··· 507 458 } 508 459 509 460 int 510 - nouveau_abi16_usif(struct drm_file *file_priv, void *data, u32 size) 511 - { 512 - union { 513 - struct nvif_ioctl_v0 v0; 514 - } *args = data; 515 - struct nouveau_abi16_chan *chan; 516 - struct nouveau_abi16 *abi16; 517 - int ret = -ENOSYS; 518 - 519 - if (!(ret = nvif_unpack(ret, &data, &size, args->v0, 0, 0, true))) { 520 - switch (args->v0.type) { 521 - case NVIF_IOCTL_V0_NEW: 522 - case NVIF_IOCTL_V0_MTHD: 523 - case NVIF_IOCTL_V0_SCLASS: 524 - break; 525 - default: 526 - return -EACCES; 527 - } 528 - } else 529 - 
return ret; 530 - 531 - if (!(abi16 = nouveau_abi16(file_priv))) 532 - return -ENOMEM; 533 - 534 - if (args->v0.token != ~0ULL) { 535 - if (!(chan = nouveau_abi16_chan(abi16, args->v0.token))) 536 - return -EINVAL; 537 - args->v0.object = nvif_handle(&chan->chan->user); 538 - args->v0.owner = NVIF_IOCTL_V0_OWNER_ANY; 539 - return 0; 540 - } 541 - 542 - args->v0.object = nvif_handle(&abi16->device.object); 543 - args->v0.owner = NVIF_IOCTL_V0_OWNER_ANY; 544 - return 0; 545 - } 546 - 547 - int 548 461 nouveau_abi16_ioctl_channel_free(ABI16_IOCTL_ARGS) 549 462 { 550 463 struct drm_nouveau_channel_free *req = data; ··· 530 519 struct nouveau_abi16 *abi16 = nouveau_abi16_get(file_priv); 531 520 struct nouveau_abi16_chan *chan; 532 521 struct nouveau_abi16_ntfy *ntfy; 533 - struct nvif_client *client; 534 522 struct nvif_sclass *sclass; 535 523 s32 oclass = 0; 536 524 int ret, i; ··· 539 529 540 530 if (init->handle == ~0) 541 531 return nouveau_abi16_put(abi16, -EINVAL); 542 - client = abi16->device.object.client; 543 532 544 533 chan = nouveau_abi16_chan(abi16, init->channel); 545 534 if (!chan) ··· 603 594 604 595 list_add(&ntfy->head, &chan->notifiers); 605 596 606 - client->route = NVDRM_OBJECT_ABI16; 607 597 ret = nvif_object_ctor(&chan->chan->user, "abi16EngObj", init->handle, 608 598 oclass, NULL, 0, &ntfy->object); 609 - client->route = NVDRM_OBJECT_NVIF; 610 599 611 600 if (ret) 612 601 nouveau_abi16_ntfy_fini(chan, ntfy); ··· 619 612 struct nouveau_abi16 *abi16 = nouveau_abi16_get(file_priv); 620 613 struct nouveau_abi16_chan *chan; 621 614 struct nouveau_abi16_ntfy *ntfy; 622 - struct nvif_device *device = &abi16->device; 623 - struct nvif_client *client; 615 + struct nvif_device *device; 624 616 struct nv_dma_v0 args = {}; 625 617 int ret; 626 618 627 619 if (unlikely(!abi16)) 628 620 return -ENOMEM; 621 + device = &abi16->cli->device; 629 622 630 623 /* completely unnecessary for these chipsets... 
*/ 631 624 if (unlikely(device->info.family >= NV_DEVICE_INFO_V0_FERMI)) 632 625 return nouveau_abi16_put(abi16, -EINVAL); 633 - client = abi16->device.object.client; 634 626 635 627 chan = nouveau_abi16_chan(abi16, info->channel); 636 628 if (!chan) ··· 666 660 args.limit += chan->ntfy->offset; 667 661 } 668 662 669 - client->route = NVDRM_OBJECT_ABI16; 670 663 ret = nvif_object_ctor(&chan->chan->user, "abi16Ntfy", info->handle, 671 664 NV_DMA_IN_MEMORY, &args, sizeof(args), 672 665 &ntfy->object); 673 - client->route = NVDRM_OBJECT_NVIF; 674 666 if (ret) 675 667 goto done; 676 668 ··· 707 703 } 708 704 709 705 return nouveau_abi16_put(abi16, ret); 706 + } 707 + 708 + static int 709 + nouveau_abi16_ioctl_mthd(struct nouveau_abi16 *abi16, struct nvif_ioctl_v0 *ioctl, u32 argc) 710 + { 711 + struct nouveau_cli *cli = abi16->cli; 712 + struct nvif_ioctl_mthd_v0 *args; 713 + struct nouveau_abi16_obj *obj; 714 + struct nv_device_info_v0 *info; 715 + 716 + if (ioctl->route || argc < sizeof(*args)) 717 + return -EINVAL; 718 + args = (void *)ioctl->data; 719 + argc -= sizeof(*args); 720 + 721 + obj = nouveau_abi16_obj_find(abi16, ioctl->object); 722 + if (!obj || obj->type != DEVICE) 723 + return -EINVAL; 724 + 725 + if (args->method != NV_DEVICE_V0_INFO || 726 + argc != sizeof(*info)) 727 + return -EINVAL; 728 + 729 + info = (void *)args->data; 730 + if (info->version != 0x00) 731 + return -EINVAL; 732 + 733 + info = &cli->device.info; 734 + memcpy(args->data, info, sizeof(*info)); 735 + return 0; 736 + } 737 + 738 + static int 739 + nouveau_abi16_ioctl_del(struct nouveau_abi16 *abi16, struct nvif_ioctl_v0 *ioctl, u32 argc) 740 + { 741 + struct nouveau_abi16_obj *obj; 742 + 743 + if (ioctl->route || argc) 744 + return -EINVAL; 745 + 746 + obj = nouveau_abi16_obj_find(abi16, ioctl->object); 747 + if (obj) { 748 + if (obj->type == ENGOBJ) 749 + nvif_object_dtor(&obj->engobj); 750 + nouveau_abi16_obj_del(obj); 751 + } 752 + 753 + return 0; 754 + } 755 + 756 + static int 757 
+ nouveau_abi16_ioctl_new(struct nouveau_abi16 *abi16, struct nvif_ioctl_v0 *ioctl, u32 argc) 758 + { 759 + struct nvif_ioctl_new_v0 *args; 760 + struct nouveau_abi16_chan *chan; 761 + struct nouveau_abi16_obj *obj; 762 + int ret; 763 + 764 + if (argc < sizeof(*args)) 765 + return -EINVAL; 766 + args = (void *)ioctl->data; 767 + argc -= sizeof(*args); 768 + 769 + if (args->version != 0) 770 + return -EINVAL; 771 + 772 + if (!ioctl->route) { 773 + if (ioctl->object || args->oclass != NV_DEVICE) 774 + return -EINVAL; 775 + 776 + obj = nouveau_abi16_obj_new(abi16, DEVICE, args->object); 777 + if (IS_ERR(obj)) 778 + return PTR_ERR(obj); 779 + 780 + return 0; 781 + } 782 + 783 + chan = nouveau_abi16_chan(abi16, ioctl->token); 784 + if (!chan) 785 + return -EINVAL; 786 + 787 + obj = nouveau_abi16_obj_new(abi16, ENGOBJ, args->object); 788 + if (IS_ERR(obj)) 789 + return PTR_ERR(obj); 790 + 791 + ret = nvif_object_ctor(&chan->chan->user, "abi16EngObj", args->handle, args->oclass, 792 + NULL, 0, &obj->engobj); 793 + if (ret) 794 + nouveau_abi16_obj_del(obj); 795 + 796 + return ret; 797 + } 798 + 799 + static int 800 + nouveau_abi16_ioctl_sclass(struct nouveau_abi16 *abi16, struct nvif_ioctl_v0 *ioctl, u32 argc) 801 + { 802 + struct nvif_ioctl_sclass_v0 *args; 803 + struct nouveau_abi16_chan *chan; 804 + struct nvif_sclass *sclass; 805 + int ret; 806 + 807 + if (!ioctl->route || argc < sizeof(*args)) 808 + return -EINVAL; 809 + args = (void *)ioctl->data; 810 + argc -= sizeof(*args); 811 + 812 + if (argc != args->count * sizeof(args->oclass[0])) 813 + return -EINVAL; 814 + 815 + chan = nouveau_abi16_chan(abi16, ioctl->token); 816 + if (!chan) 817 + return -EINVAL; 818 + 819 + ret = nvif_object_sclass_get(&chan->chan->user, &sclass); 820 + if (ret < 0) 821 + return ret; 822 + 823 + for (int i = 0; i < min_t(u8, args->count, ret); i++) { 824 + args->oclass[i].oclass = sclass[i].oclass; 825 + args->oclass[i].minver = sclass[i].minver; 826 + args->oclass[i].maxver = 
sclass[i].maxver; 827 + } 828 + args->count = ret; 829 + 830 + nvif_object_sclass_put(&sclass); 831 + return 0; 832 + } 833 + 834 + int 835 + nouveau_abi16_ioctl(struct drm_file *filp, void __user *user, u32 size) 836 + { 837 + struct nvif_ioctl_v0 *ioctl; 838 + struct nouveau_abi16 *abi16; 839 + u32 argc = size; 840 + int ret; 841 + 842 + if (argc < sizeof(*ioctl)) 843 + return -EINVAL; 844 + argc -= sizeof(*ioctl); 845 + 846 + ioctl = kmalloc(size, GFP_KERNEL); 847 + if (!ioctl) 848 + return -ENOMEM; 849 + 850 + ret = -EFAULT; 851 + if (copy_from_user(ioctl, user, size)) 852 + goto done_free; 853 + 854 + if (ioctl->version != 0x00 || 855 + (ioctl->route && ioctl->route != 0xff)) { 856 + ret = -EINVAL; 857 + goto done_free; 858 + } 859 + 860 + abi16 = nouveau_abi16_get(filp); 861 + if (unlikely(!abi16)) { 862 + ret = -ENOMEM; 863 + goto done_free; 864 + } 865 + 866 + switch (ioctl->type) { 867 + case NVIF_IOCTL_V0_SCLASS: ret = nouveau_abi16_ioctl_sclass(abi16, ioctl, argc); break; 868 + case NVIF_IOCTL_V0_NEW : ret = nouveau_abi16_ioctl_new (abi16, ioctl, argc); break; 869 + case NVIF_IOCTL_V0_DEL : ret = nouveau_abi16_ioctl_del (abi16, ioctl, argc); break; 870 + case NVIF_IOCTL_V0_MTHD : ret = nouveau_abi16_ioctl_mthd (abi16, ioctl, argc); break; 871 + default: 872 + ret = -EINVAL; 873 + break; 874 + } 875 + 876 + nouveau_abi16_put(abi16, 0); 877 + 878 + if (ret == 0) { 879 + if (copy_to_user(user, ioctl, size)) 880 + ret = -EFAULT; 881 + } 882 + 883 + done_free: 884 + kfree(ioctl); 885 + return ret; 710 886 }
+3 -3
drivers/gpu/drm/nouveau/nouveau_abi16.h
··· 30 30 }; 31 31 32 32 struct nouveau_abi16 { 33 - struct nvif_device device; 33 + struct nouveau_cli *cli; 34 34 struct list_head channels; 35 - u64 handles; 35 + struct list_head objects; 36 36 }; 37 37 38 38 struct nouveau_abi16 *nouveau_abi16_get(struct drm_file *); 39 39 int nouveau_abi16_put(struct nouveau_abi16 *, int); 40 40 void nouveau_abi16_fini(struct nouveau_abi16 *); 41 41 s32 nouveau_abi16_swclass(struct nouveau_drm *); 42 - int nouveau_abi16_usif(struct drm_file *, void *data, u32 size); 42 + int nouveau_abi16_ioctl(struct drm_file *, void __user *user, u32 size); 43 43 44 44 #define NOUVEAU_GEM_DOMAIN_VRAM (1 << 1) 45 45 #define NOUVEAU_GEM_DOMAIN_GART (1 << 2)
+2 -2
drivers/gpu/drm/nouveau/nouveau_bios.c
··· 2015 2015 static bool NVInitVBIOS(struct drm_device *dev) 2016 2016 { 2017 2017 struct nouveau_drm *drm = nouveau_drm(dev); 2018 - struct nvkm_bios *bios = nvxx_bios(&drm->client.device); 2018 + struct nvkm_bios *bios = nvxx_bios(drm); 2019 2019 struct nvbios *legacy = &drm->vbios; 2020 2020 2021 2021 memset(legacy, 0, sizeof(struct nvbios)); ··· 2086 2086 2087 2087 /* only relevant for PCI devices */ 2088 2088 if (!dev_is_pci(dev->dev) || 2089 - nvkm_gsp_rm(nvxx_device(&drm->client.device)->gsp)) 2089 + nvkm_gsp_rm(nvxx_device(drm)->gsp)) 2090 2090 return 0; 2091 2091 2092 2092 if (!NVInitVBIOS(dev))
+1
drivers/gpu/drm/nouveau/nouveau_bios.h
··· 48 48 49 49 int bit_table(struct drm_device *, u8 id, struct bit_entry *); 50 50 51 + #include <subdev/bios.h> 51 52 #include <subdev/bios/dcb.h> 52 53 #include <subdev/bios/conn.h> 53 54
+5 -5
drivers/gpu/drm/nouveau/nouveau_bo.c
··· 58 58 { 59 59 struct nouveau_drm *drm = nouveau_drm(dev); 60 60 int i = reg - drm->tile.reg; 61 - struct nvkm_fb *fb = nvxx_fb(&drm->client.device); 61 + struct nvkm_fb *fb = nvxx_fb(drm); 62 62 struct nvkm_fb_tile *tile = &fb->tile.region[i]; 63 63 64 64 nouveau_fence_unref(&reg->fence); ··· 109 109 u32 size, u32 pitch, u32 zeta) 110 110 { 111 111 struct nouveau_drm *drm = nouveau_drm(dev); 112 - struct nvkm_fb *fb = nvxx_fb(&drm->client.device); 112 + struct nvkm_fb *fb = nvxx_fb(drm); 113 113 struct nouveau_drm_tile *tile, *found = NULL; 114 114 int i; 115 115 ··· 859 859 { 860 860 struct nouveau_drm *drm = nouveau_bdev(bo->bdev); 861 861 struct nouveau_channel *chan = drm->ttm.chan; 862 - struct nouveau_cli *cli = (void *)chan->user.client; 862 + struct nouveau_cli *cli = chan->cli; 863 863 struct nouveau_fence *fence; 864 864 int ret; 865 865 ··· 1171 1171 nouveau_ttm_io_mem_reserve(struct ttm_device *bdev, struct ttm_resource *reg) 1172 1172 { 1173 1173 struct nouveau_drm *drm = nouveau_bdev(bdev); 1174 - struct nvkm_device *device = nvxx_device(&drm->client.device); 1174 + struct nvkm_device *device = nvxx_device(drm); 1175 1175 struct nouveau_mem *mem = nouveau_mem(reg); 1176 1176 struct nvif_mmu *mmu = &drm->client.mmu; 1177 1177 int ret; ··· 1291 1291 { 1292 1292 struct nouveau_drm *drm = nouveau_bdev(bo->bdev); 1293 1293 struct nouveau_bo *nvbo = nouveau_bo(bo); 1294 - struct nvkm_device *device = nvxx_device(&drm->client.device); 1294 + struct nvkm_device *device = nvxx_device(drm); 1295 1295 u32 mappable = device->func->resource_size(device, 1) >> PAGE_SHIFT; 1296 1296 int i, ret; 1297 1297
+3 -47
drivers/gpu/drm/nouveau/nouveau_bo.h
··· 53 53 return container_of(bo, struct nouveau_bo, bo); 54 54 } 55 55 56 - static inline int 57 - nouveau_bo_ref(struct nouveau_bo *ref, struct nouveau_bo **pnvbo) 56 + static inline void 57 + nouveau_bo_fini(struct nouveau_bo *bo) 58 58 { 59 - struct nouveau_bo *prev; 60 - 61 - if (!pnvbo) 62 - return -EINVAL; 63 - prev = *pnvbo; 64 - 65 - if (ref) { 66 - ttm_bo_get(&ref->bo); 67 - *pnvbo = nouveau_bo(&ref->bo); 68 - } else { 69 - *pnvbo = NULL; 70 - } 71 - if (prev) 72 - ttm_bo_put(&prev->bo); 73 - 74 - return 0; 59 + ttm_bo_put(&bo->bo); 75 60 } 76 61 77 62 extern struct ttm_device_funcs nouveau_bo_driver; ··· 98 113 &nvbo->kmap, &is_iomem); 99 114 WARN_ON_ONCE(ioptr && !is_iomem); 100 115 return ioptr; 101 - } 102 - 103 - static inline void 104 - nouveau_bo_unmap_unpin_unref(struct nouveau_bo **pnvbo) 105 - { 106 - if (*pnvbo) { 107 - nouveau_bo_unmap(*pnvbo); 108 - nouveau_bo_unpin(*pnvbo); 109 - nouveau_bo_ref(NULL, pnvbo); 110 - } 111 - } 112 - 113 - static inline int 114 - nouveau_bo_new_pin_map(struct nouveau_cli *cli, u64 size, int align, u32 domain, 115 - struct nouveau_bo **pnvbo) 116 - { 117 - int ret = nouveau_bo_new(cli, size, align, domain, 118 - 0, 0, NULL, NULL, pnvbo); 119 - if (ret == 0) { 120 - ret = nouveau_bo_pin(*pnvbo, domain, true); 121 - if (ret == 0) { 122 - ret = nouveau_bo_map(*pnvbo); 123 - if (ret == 0) 124 - return ret; 125 - nouveau_bo_unpin(*pnvbo); 126 - } 127 - nouveau_bo_ref(NULL, pnvbo); 128 - } 129 - return ret; 130 116 } 131 117 132 118 int nv04_bo_move_init(struct nouveau_channel *, u32);
+3 -3
drivers/gpu/drm/nouveau/nouveau_bo0039.c
··· 47 47 nv04_bo_move_m2mf(struct nouveau_channel *chan, struct ttm_buffer_object *bo, 48 48 struct ttm_resource *old_reg, struct ttm_resource *new_reg) 49 49 { 50 - struct nvif_push *push = chan->chan.push; 50 + struct nvif_push *push = &chan->chan.push; 51 51 u32 src_ctxdma = nouveau_bo_mem_ctxdma(bo, chan, old_reg); 52 52 u32 src_offset = old_reg->start << PAGE_SHIFT; 53 53 u32 dst_ctxdma = nouveau_bo_mem_ctxdma(bo, chan, new_reg); ··· 96 96 int 97 97 nv04_bo_move_init(struct nouveau_channel *chan, u32 handle) 98 98 { 99 - struct nvif_push *push = chan->chan.push; 99 + struct nvif_push *push = &chan->chan.push; 100 100 int ret; 101 101 102 102 ret = PUSH_WAIT(push, 4); ··· 104 104 return ret; 105 105 106 106 PUSH_MTHD(push, NV039, SET_OBJECT, handle); 107 - PUSH_MTHD(push, NV039, SET_CONTEXT_DMA_NOTIFIES, chan->drm->ntfy.handle); 107 + PUSH_MTHD(push, NV039, SET_CONTEXT_DMA_NOTIFIES, chan->cli->drm->ntfy.handle); 108 108 return 0; 109 109 }
+3 -3
drivers/gpu/drm/nouveau/nouveau_bo5039.c
··· 40 40 struct ttm_resource *old_reg, struct ttm_resource *new_reg) 41 41 { 42 42 struct nouveau_mem *mem = nouveau_mem(old_reg); 43 - struct nvif_push *push = chan->chan.push; 43 + struct nvif_push *push = &chan->chan.push; 44 44 u64 length = new_reg->size; 45 45 u64 src_offset = mem->vma[0].addr; 46 46 u64 dst_offset = mem->vma[1].addr; ··· 136 136 int 137 137 nv50_bo_move_init(struct nouveau_channel *chan, u32 handle) 138 138 { 139 - struct nvif_push *push = chan->chan.push; 139 + struct nvif_push *push = &chan->chan.push; 140 140 int ret; 141 141 142 142 ret = PUSH_WAIT(push, 6); ··· 144 144 return ret; 145 145 146 146 PUSH_MTHD(push, NV5039, SET_OBJECT, handle); 147 - PUSH_MTHD(push, NV5039, SET_CONTEXT_DMA_NOTIFY, chan->drm->ntfy.handle, 147 + PUSH_MTHD(push, NV5039, SET_CONTEXT_DMA_NOTIFY, chan->cli->drm->ntfy.handle, 148 148 SET_CONTEXT_DMA_BUFFER_IN, chan->vram.handle, 149 149 SET_CONTEXT_DMA_BUFFER_OUT, chan->vram.handle); 150 150 return 0;
+1 -1
drivers/gpu/drm/nouveau/nouveau_bo74c1.c
··· 37 37 struct ttm_resource *old_reg, struct ttm_resource *new_reg) 38 38 { 39 39 struct nouveau_mem *mem = nouveau_mem(old_reg); 40 - struct nvif_push *push = chan->chan.push; 40 + struct nvif_push *push = &chan->chan.push; 41 41 int ret; 42 42 43 43 ret = PUSH_WAIT(push, 7);
+1 -1
drivers/gpu/drm/nouveau/nouveau_bo85b5.c
··· 41 41 struct ttm_resource *old_reg, struct ttm_resource *new_reg) 42 42 { 43 43 struct nouveau_mem *mem = nouveau_mem(old_reg); 44 - struct nvif_push *push = chan->chan.push; 44 + struct nvif_push *push = &chan->chan.push; 45 45 u64 src_offset = mem->vma[0].addr; 46 46 u64 dst_offset = mem->vma[1].addr; 47 47 u32 page_count = PFN_UP(new_reg->size);
+2 -2
drivers/gpu/drm/nouveau/nouveau_bo9039.c
··· 38 38 nvc0_bo_move_m2mf(struct nouveau_channel *chan, struct ttm_buffer_object *bo, 39 39 struct ttm_resource *old_reg, struct ttm_resource *new_reg) 40 40 { 41 - struct nvif_push *push = chan->chan.push; 41 + struct nvif_push *push = &chan->chan.push; 42 42 struct nouveau_mem *mem = nouveau_mem(old_reg); 43 43 u64 src_offset = mem->vma[0].addr; 44 44 u64 dst_offset = mem->vma[1].addr; ··· 86 86 int 87 87 nvc0_bo_move_init(struct nouveau_channel *chan, u32 handle) 88 88 { 89 - struct nvif_push *push = chan->chan.push; 89 + struct nvif_push *push = &chan->chan.push; 90 90 int ret; 91 91 92 92 ret = PUSH_WAIT(push, 2);
+1 -1
drivers/gpu/drm/nouveau/nouveau_bo90b5.c
··· 34 34 struct ttm_resource *old_reg, struct ttm_resource *new_reg) 35 35 { 36 36 struct nouveau_mem *mem = nouveau_mem(old_reg); 37 - struct nvif_push *push = chan->chan.push; 37 + struct nvif_push *push = &chan->chan.push; 38 38 u64 src_offset = mem->vma[0].addr; 39 39 u64 dst_offset = mem->vma[1].addr; 40 40 u32 page_count = PFN_UP(new_reg->size);
+2 -2
drivers/gpu/drm/nouveau/nouveau_boa0b5.c
··· 39 39 struct ttm_resource *old_reg, struct ttm_resource *new_reg) 40 40 { 41 41 struct nouveau_mem *mem = nouveau_mem(old_reg); 42 - struct nvif_push *push = chan->chan.push; 42 + struct nvif_push *push = &chan->chan.push; 43 43 int ret; 44 44 45 45 ret = PUSH_WAIT(push, 10); ··· 78 78 int 79 79 nve0_bo_move_init(struct nouveau_channel *chan, u32 handle) 80 80 { 81 - struct nvif_push *push = chan->chan.push; 81 + struct nvif_push *push = &chan->chan.push; 82 82 int ret; 83 83 84 84 ret = PUSH_WAIT(push, 2);
+46 -52
drivers/gpu/drm/nouveau/nouveau_chan.c
··· 52 52 nouveau_channel_killed(struct nvif_event *event, void *repv, u32 repc) 53 53 { 54 54 struct nouveau_channel *chan = container_of(event, typeof(*chan), kill); 55 - struct nouveau_cli *cli = (void *)chan->user.client; 55 + struct nouveau_cli *cli = chan->cli; 56 56 57 57 NV_PRINTK(warn, cli, "channel %d killed!\n", chan->chid); 58 58 ··· 66 66 nouveau_channel_idle(struct nouveau_channel *chan) 67 67 { 68 68 if (likely(chan && chan->fence && !atomic_read(&chan->killed))) { 69 - struct nouveau_cli *cli = (void *)chan->user.client; 69 + struct nouveau_cli *cli = chan->cli; 70 70 struct nouveau_fence *fence = NULL; 71 71 int ret; 72 72 ··· 78 78 79 79 if (ret) { 80 80 NV_PRINTK(err, cli, "failed to idle channel %d [%s]\n", 81 - chan->chid, nvxx_client(&cli->base)->name); 81 + chan->chid, cli->name); 82 82 return ret; 83 83 } 84 84 } ··· 90 90 { 91 91 struct nouveau_channel *chan = *pchan; 92 92 if (chan) { 93 - struct nouveau_cli *cli = (void *)chan->user.client; 94 - 95 93 if (chan->fence) 96 - nouveau_fence(chan->drm)->context_del(chan); 94 + nouveau_fence(chan->cli->drm)->context_del(chan); 97 95 98 - if (cli) 96 + if (nvif_object_constructed(&chan->user)) 99 97 nouveau_svmm_part(chan->vmm->svmm, chan->inst); 100 98 101 99 nvif_object_dtor(&chan->blit); ··· 108 110 nouveau_bo_unmap(chan->push.buffer); 109 111 if (chan->push.buffer && chan->push.buffer->bo.pin_count) 110 112 nouveau_bo_unpin(chan->push.buffer); 111 - nouveau_bo_ref(NULL, &chan->push.buffer); 113 + nouveau_bo_fini(chan->push.buffer); 112 114 kfree(chan); 113 115 } 114 116 *pchan = NULL; ··· 117 119 static void 118 120 nouveau_channel_kick(struct nvif_push *push) 119 121 { 120 - struct nouveau_channel *chan = container_of(push, typeof(*chan), chan._push); 121 - chan->dma.cur = chan->dma.cur + (chan->chan._push.cur - chan->chan._push.bgn); 122 + struct nouveau_channel *chan = container_of(push, typeof(*chan), chan.push); 123 + chan->dma.cur = chan->dma.cur + (chan->chan.push.cur - 
chan->chan.push.bgn); 122 124 FIRE_RING(chan); 123 - chan->chan._push.bgn = chan->chan._push.cur; 125 + chan->chan.push.bgn = chan->chan.push.cur; 124 126 } 125 127 126 128 static int 127 129 nouveau_channel_wait(struct nvif_push *push, u32 size) 128 130 { 129 - struct nouveau_channel *chan = container_of(push, typeof(*chan), chan._push); 131 + struct nouveau_channel *chan = container_of(push, typeof(*chan), chan.push); 130 132 int ret; 131 - chan->dma.cur = chan->dma.cur + (chan->chan._push.cur - chan->chan._push.bgn); 133 + chan->dma.cur = chan->dma.cur + (chan->chan.push.cur - chan->chan.push.bgn); 132 134 ret = RING_SPACE(chan, size); 133 135 if (ret == 0) { 134 - chan->chan._push.bgn = chan->chan._push.mem.object.map.ptr; 135 - chan->chan._push.bgn = chan->chan._push.bgn + chan->dma.cur; 136 - chan->chan._push.cur = chan->chan._push.bgn; 137 - chan->chan._push.end = chan->chan._push.bgn + size; 136 + chan->chan.push.bgn = chan->chan.push.mem.object.map.ptr; 137 + chan->chan.push.bgn = chan->chan.push.bgn + chan->dma.cur; 138 + chan->chan.push.cur = chan->chan.push.bgn; 139 + chan->chan.push.end = chan->chan.push.bgn + size; 138 140 } 139 141 return ret; 140 142 } 141 143 142 144 static int 143 - nouveau_channel_prep(struct nouveau_drm *drm, struct nvif_device *device, 145 + nouveau_channel_prep(struct nouveau_cli *cli, 144 146 u32 size, struct nouveau_channel **pchan) 145 147 { 146 - struct nouveau_cli *cli = (void *)device->object.client; 148 + struct nouveau_drm *drm = cli->drm; 149 + struct nvif_device *device = &cli->device; 147 150 struct nv_dma_v0 args = {}; 148 151 struct nouveau_channel *chan; 149 152 u32 target; ··· 154 155 if (!chan) 155 156 return -ENOMEM; 156 157 157 - chan->device = device; 158 - chan->drm = drm; 158 + chan->cli = cli; 159 159 chan->vmm = nouveau_cli_vmm(cli); 160 160 atomic_set(&chan->killed, 0); 161 161 ··· 176 178 return ret; 177 179 } 178 180 179 - chan->chan._push.mem.object.parent = cli->base.object.parent; 180 - 
-	chan->chan._push.mem.object.client = &cli->base;
-	chan->chan._push.mem.object.name = "chanPush";
-	chan->chan._push.mem.object.map.ptr = chan->push.buffer->kmap.virtual;
-	chan->chan._push.wait = nouveau_channel_wait;
-	chan->chan._push.kick = nouveau_channel_kick;
-	chan->chan.push = &chan->chan._push;
+	chan->chan.push.mem.object.parent = cli->base.object.parent;
+	chan->chan.push.mem.object.client = &cli->base;
+	chan->chan.push.mem.object.name = "chanPush";
+	chan->chan.push.mem.object.map.ptr = chan->push.buffer->kmap.virtual;
+	chan->chan.push.wait = nouveau_channel_wait;
+	chan->chan.push.kick = nouveau_channel_kick;
 
 	/* create dma object covering the *entire* memory space that the
 	 * pushbuf lives in, this is because the GEM code requires that
···
 		 */
 		args.target = NV_DMA_V0_TARGET_PCI;
 		args.access = NV_DMA_V0_ACCESS_RDWR;
-		args.start = nvxx_device(device)->func->
-			resource_addr(nvxx_device(device), 1);
+		args.start = nvxx_device(drm)->func->resource_addr(nvxx_device(drm), 1);
 		args.limit = args.start + device->info.ram_user - 1;
 	} else {
 		args.target = NV_DMA_V0_TARGET_VRAM;
···
 		args.limit = device->info.ram_user - 1;
 	}
 } else {
-	if (chan->drm->agp.bridge) {
+	if (drm->agp.bridge) {
 		args.target = NV_DMA_V0_TARGET_AGP;
 		args.access = NV_DMA_V0_ACCESS_RDWR;
-		args.start = chan->drm->agp.base;
-		args.limit = chan->drm->agp.base +
-			     chan->drm->agp.size - 1;
+		args.start = drm->agp.base;
+		args.limit = drm->agp.base + drm->agp.size - 1;
 	} else {
 		args.target = NV_DMA_V0_TARGET_VM;
 		args.access = NV_DMA_V0_ACCESS_RDWR;
···
 }
 
 static int
-nouveau_channel_ctor(struct nouveau_drm *drm, struct nvif_device *device, bool priv, u64 runm,
+nouveau_channel_ctor(struct nouveau_cli *cli, bool priv, u64 runm,
		     struct nouveau_channel **pchan)
 {
	const struct nvif_mclass hosts[] = {
···
		struct nvif_chan_v0 chan;
		char name[TASK_COMM_LEN+16];
	} args;
-	struct nouveau_cli *cli = (void *)device->object.client;
+	struct nvif_device *device = &cli->device;
	struct nouveau_channel *chan;
	const u64 plength = 0x10000;
	const u64 ioffset = plength;
···
	size = ioffset + ilength;
 
	/* allocate dma push buffer */
-	ret = nouveau_channel_prep(drm, device, size, &chan);
+	ret = nouveau_channel_prep(cli, size, &chan);
	*pchan = chan;
	if (ret)
		return ret;
···
 static int
 nouveau_channel_init(struct nouveau_channel *chan, u32 vram, u32 gart)
 {
-	struct nvif_device *device = chan->device;
-	struct nouveau_drm *drm = chan->drm;
+	struct nouveau_cli *cli = chan->cli;
+	struct nouveau_drm *drm = cli->drm;
+	struct nvif_device *device = &cli->device;
	struct nv_dma_v0 args = {};
	int ret, i;
···
		args.start = 0;
		args.limit = chan->vmm->vmm.limit - 1;
	} else
-	if (chan->drm->agp.bridge) {
+	if (drm->agp.bridge) {
		args.target = NV_DMA_V0_TARGET_AGP;
		args.access = NV_DMA_V0_ACCESS_RDWR;
-		args.start = chan->drm->agp.base;
-		args.limit = chan->drm->agp.base +
-			     chan->drm->agp.size - 1;
+		args.start = drm->agp.base;
+		args.limit = drm->agp.base + drm->agp.size - 1;
	} else {
		args.target = NV_DMA_V0_TARGET_VM;
		args.access = NV_DMA_V0_ACCESS_RDWR;
···
	chan->dma.cur = chan->dma.put;
	chan->dma.free = chan->dma.max - chan->dma.cur;
 
-	ret = PUSH_WAIT(chan->chan.push, NOUVEAU_DMA_SKIPS);
+	ret = PUSH_WAIT(&chan->chan.push, NOUVEAU_DMA_SKIPS);
	if (ret)
		return ret;
 
	for (i = 0; i < NOUVEAU_DMA_SKIPS; i++)
-		PUSH_DATA(chan->chan.push, 0x00000000);
+		PUSH_DATA(&chan->chan.push, 0x00000000);
 
	/* allocate software object class (used for fences on <= nv05) */
	if (device->info.family < NV_DEVICE_INFO_V0_CELSIUS) {
···
		if (ret)
			return ret;
 
-		ret = PUSH_WAIT(chan->chan.push, 2);
+		ret = PUSH_WAIT(&chan->chan.push, 2);
		if (ret)
			return ret;
 
-		PUSH_NVSQ(chan->chan.push, NV_SW, 0x0000, chan->nvsw.handle);
-		PUSH_KICK(chan->chan.push);
+		PUSH_NVSQ(&chan->chan.push, NV_SW, 0x0000, chan->nvsw.handle);
+		PUSH_KICK(&chan->chan.push);
	}
 
	/* initialise synchronisation */
-	return nouveau_fence(chan->drm)->context_new(chan);
+	return nouveau_fence(drm)->context_new(chan);
 }
 
 int
-nouveau_channel_new(struct nouveau_drm *drm, struct nvif_device *device,
+nouveau_channel_new(struct nouveau_cli *cli,
		    bool priv, u64 runm, u32 vram, u32 gart, struct nouveau_channel **pchan)
 {
-	struct nouveau_cli *cli = (void *)device->object.client;
	int ret;
 
-	ret = nouveau_channel_ctor(drm, device, priv, runm, pchan);
+	ret = nouveau_channel_ctor(cli, priv, runm, pchan);
	if (ret) {
		NV_PRINTK(dbg, cli, "channel create, %d\n", ret);
		return ret;
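The constructor changes above all follow one pattern: rather than passing `struct nouveau_drm *` and `struct nvif_device *` as separate parameters, only the owning client is passed and the rest is derived from it. A minimal sketch of that pattern, with hypothetical stand-in types (not the real nouveau structures):

```c
#include <assert.h>

/* Hypothetical stand-ins for nouveau_drm / nvif_device / nouveau_cli. */
struct drm_ctx { int agp_bridge; };
struct dev_ctx { int ram_user; };

struct cli_ctx {
	struct drm_ctx *drm;   /* back-pointer to the driver instance */
	struct dev_ctx device; /* embedded device handle */
};

/* Old shape: channel_ctor(drm, device, ...).  New shape: everything the
 * function needs is reachable from the client it belongs to, so the
 * signature shrinks to just the owning context. */
static int channel_ctor(struct cli_ctx *cli)
{
	struct drm_ctx *drm = cli->drm;
	struct dev_ctx *device = &cli->device;

	return drm->agp_bridge + device->ram_user;
}
```

Callers then only have to thread one pointer through, which is what lets `nouveau_channel_new()` drop two of its parameters in the diff above.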
+3 -5
drivers/gpu/drm/nouveau/nouveau_chan.h
···
 
 struct nouveau_channel {
	struct {
-		struct nvif_push _push;
-		struct nvif_push *push;
+		struct nvif_push push;
	} chan;
 
-	struct nvif_device *device;
-	struct nouveau_drm *drm;
+	struct nouveau_cli *cli;
	struct nouveau_vmm *vmm;
 
	struct nvif_mem mem_userd;
···
 int nouveau_channels_init(struct nouveau_drm *);
 void nouveau_channels_fini(struct nouveau_drm *);
 
-int nouveau_channel_new(struct nouveau_drm *, struct nvif_device *, bool priv, u64 runm,
+int nouveau_channel_new(struct nouveau_cli *, bool priv, u64 runm,
			u32 vram, u32 gart, struct nouveau_channel **);
 void nouveau_channel_del(struct nouveau_channel **);
 int nouveau_channel_idle(struct nouveau_channel *);
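The header change collapses the `_push` storage plus `push` pointer pair into a single embedded member, which is why every call site in the .c diffs switches from `chan->chan.push` to `&chan->chan.push`. A hypothetical miniature of that refactor (names are stand-ins for the real types):

```c
#include <assert.h>

struct push { int cur; };

/* Before: backing storage plus a pointer that always aimed at it. */
struct chan_old {
	struct push _push;
	struct push *push; /* had to be initialized to &_push by hand */
};

/* After: the object is embedded directly, so there is no pointer to
 * set up, no indirection, and no way for the two to disagree. */
struct chan_new {
	struct push push;
};

static int push_wait(struct push *p, int n)
{
	p->cur += n; /* pretend to reserve n dwords */
	return 0;
}
```

With the embedded form, callers simply take the member's address (`push_wait(&chan.push, n)`), which is the mechanical change repeated throughout this series.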
+1 -3
drivers/gpu/drm/nouveau/nouveau_display.c
···
 } while(0)
 
 void
-nouveau_display_hpd_resume(struct drm_device *dev)
+nouveau_display_hpd_resume(struct nouveau_drm *drm)
 {
-	struct nouveau_drm *drm = nouveau_drm(dev);
-
	if (drm->headless)
		return;
 
+1 -1
drivers/gpu/drm/nouveau/nouveau_display.h
···
 int nouveau_display_create(struct drm_device *dev);
 void nouveau_display_destroy(struct drm_device *dev);
 int nouveau_display_init(struct drm_device *dev, bool resume, bool runtime);
-void nouveau_display_hpd_resume(struct drm_device *dev);
+void nouveau_display_hpd_resume(struct nouveau_drm *);
 void nouveau_display_fini(struct drm_device *dev, bool suspend, bool runtime);
 int nouveau_display_suspend(struct drm_device *dev, bool runtime);
 void nouveau_display_resume(struct drm_device *dev, bool runtime);
+1 -1
drivers/gpu/drm/nouveau/nouveau_dma.c
···
 nv50_dma_push(struct nouveau_channel *chan, u64 offset, u32 length,
	      bool no_prefetch)
 {
-	struct nvif_user *user = &chan->drm->client.device.user;
+	struct nvif_user *user = &chan->cli->drm->client.device.user;
	struct nouveau_bo *pb = chan->push.buffer;
	int ip = (chan->dma.ib_put * 2) + chan->dma.ib_base;
 
+4 -4
drivers/gpu/drm/nouveau/nouveau_dmem.c
···
 out_bo_unpin:
	nouveau_bo_unpin(chunk->bo);
 out_bo_free:
-	nouveau_bo_ref(NULL, &chunk->bo);
+	nouveau_bo_fini(chunk->bo);
 out_release:
	release_mem_region(chunk->pagemap.range.start, range_len(&chunk->pagemap.range));
 out_free:
···
	list_for_each_entry_safe(chunk, tmp, &drm->dmem->chunks, list) {
		nouveau_dmem_evict_chunk(chunk);
		nouveau_bo_unpin(chunk->bo);
-		nouveau_bo_ref(NULL, &chunk->bo);
+		nouveau_bo_fini(chunk->bo);
		WARN_ON(chunk->callocated);
		list_del(&chunk->list);
		memunmap_pages(&chunk->pagemap);
···
		  enum nouveau_aper dst_aper, u64 dst_addr,
		  enum nouveau_aper src_aper, u64 src_addr)
 {
-	struct nvif_push *push = drm->dmem->migrate.chan->chan.push;
+	struct nvif_push *push = &drm->dmem->migrate.chan->chan.push;
	u32 launch_dma = 0;
	int ret;
 
···
 nvc0b5_migrate_clear(struct nouveau_drm *drm, u32 length,
		     enum nouveau_aper dst_aper, u64 dst_addr)
 {
-	struct nvif_push *push = drm->dmem->migrate.chan->chan.push;
+	struct nvif_push *push = &drm->dmem->migrate.chan->chan.push;
	u32 launch_dma = 0;
	int ret;
 
+208 -193
drivers/gpu/drm/nouveau/nouveau_drm.c
···
 #include "nouveau_abi16.h"
 #include "nouveau_fence.h"
 #include "nouveau_debugfs.h"
-#include "nouveau_usif.h"
 #include "nouveau_connector.h"
 #include "nouveau_platform.h"
 #include "nouveau_svm.h"
···
	flush_work(&cli->work);
	WARN_ON(!list_empty(&cli->worker));
 
-	usif_client_fini(cli);
	if (cli->sched)
		nouveau_sched_destroy(&cli->sched);
	if (uvmm)
···
	nouveau_vmm_fini(&cli->svm);
	nouveau_vmm_fini(&cli->vmm);
	nvif_mmu_dtor(&cli->mmu);
+	cli->device.object.map.ptr = NULL;
	nvif_device_dtor(&cli->device);
-	mutex_lock(&cli->drm->master.lock);
+	mutex_lock(&cli->drm->client_mutex);
	nvif_client_dtor(&cli->base);
-	mutex_unlock(&cli->drm->master.lock);
+	mutex_unlock(&cli->drm->client_mutex);
 }
 
 static int
···
		{}
	};
	static const struct nvif_mclass
-	mmus[] = {
-		{ NVIF_CLASS_MMU_GF100, -1 },
-		{ NVIF_CLASS_MMU_NV50 , -1 },
-		{ NVIF_CLASS_MMU_NV04 , -1 },
-		{}
-	};
-	static const struct nvif_mclass
	vmms[] = {
		{ NVIF_CLASS_VMM_GP100, -1 },
		{ NVIF_CLASS_VMM_GM200, -1 },
···
		{ NVIF_CLASS_VMM_NV04 , -1 },
		{}
	};
-	u64 device = nouveau_name(drm->dev);
	int ret;
 
	snprintf(cli->name, sizeof(cli->name), "%s", sname);
	cli->drm = drm;
	mutex_init(&cli->mutex);
-	usif_client_init(cli);
 
	INIT_WORK(&cli->work, nouveau_cli_work);
	INIT_LIST_HEAD(&cli->worker);
	mutex_init(&cli->lock);
 
-	if (cli == &drm->master) {
-		ret = nvif_driver_init(NULL, nouveau_config, nouveau_debug,
-				       cli->name, device, &cli->base);
-	} else {
-		mutex_lock(&drm->master.lock);
-		ret = nvif_client_ctor(&drm->master.base, cli->name, device,
-				       &cli->base);
-		mutex_unlock(&drm->master.lock);
-	}
+	mutex_lock(&drm->client_mutex);
+	ret = nvif_client_ctor(&drm->_client, cli->name, &cli->base);
+	mutex_unlock(&drm->client_mutex);
	if (ret) {
		NV_PRINTK(err, cli, "Client allocation failed: %d\n", ret);
		goto done;
	}
 
-	ret = nvif_device_ctor(&cli->base.object, "drmDevice", 0, NV_DEVICE,
-			       &(struct nv_device_v0) {
-					.device = ~0,
-					.priv = true,
-			       }, sizeof(struct nv_device_v0),
-			       &cli->device);
+	ret = nvif_device_ctor(&cli->base, "drmDevice", &cli->device);
	if (ret) {
		NV_PRINTK(err, cli, "Device allocation failed: %d\n", ret);
		goto done;
	}
 
-	ret = nvif_mclass(&cli->device.object, mmus);
-	if (ret < 0) {
-		NV_PRINTK(err, cli, "No supported MMU class\n");
-		goto done;
-	}
+	cli->device.object.map.ptr = drm->device.object.map.ptr;
 
-	ret = nvif_mmu_ctor(&cli->device.object, "drmMmu", mmus[ret].oclass,
+	ret = nvif_mmu_ctor(&cli->device.object, "drmMmu", drm->mmu.object.oclass,
			    &cli->mmu);
	if (ret) {
		NV_PRINTK(err, cli, "MMU allocation failed: %d\n", ret);
···
		return;
	}
 
-	ret = nouveau_channel_new(drm, device, false, runm, NvDmaFB, NvDmaTT, &drm->cechan);
+	ret = nouveau_channel_new(&drm->client, false, runm, NvDmaFB, NvDmaTT, &drm->cechan);
	if (ret)
		NV_ERROR(drm, "failed to create ce channel, %d\n", ret);
 }
···
		return;
	}
 
-	ret = nouveau_channel_new(drm, device, false, runm, NvDmaFB, NvDmaTT, &drm->channel);
+	ret = nouveau_channel_new(&drm->client, false, runm, NvDmaFB, NvDmaTT, &drm->channel);
	if (ret) {
		NV_ERROR(drm, "failed to create kernel channel, %d\n", ret);
		nouveau_accel_gr_fini(drm);
···
	}
 
	if (ret == 0) {
-		struct nvif_push *push = drm->channel->chan.push;
+		struct nvif_push *push = &drm->channel->chan.push;
+
		ret = PUSH_WAIT(push, 8);
		if (ret == 0) {
			if (device->info.chipset >= 0x11) {
···
	 * any GPU where it's possible we'll end up using M2MF for BO moves.
	 */
	if (device->info.family < NV_DEVICE_INFO_V0_FERMI) {
-		ret = nvkm_gpuobj_new(nvxx_device(device), 32, 0, false, NULL,
-				      &drm->notify);
+		ret = nvkm_gpuobj_new(nvxx_device(drm), 32, 0, false, NULL, &drm->notify);
		if (ret) {
			NV_ERROR(drm, "failed to allocate notifier, %d\n", ret);
			nouveau_accel_gr_fini(drm);
···
	.errorf = nouveau_drm_errorf,
 };
 
-static int
-nouveau_drm_device_init(struct drm_device *dev)
+static void
+nouveau_drm_device_fini(struct nouveau_drm *drm)
 {
-	struct nouveau_drm *drm;
+	struct drm_device *dev = drm->dev;
+	struct nouveau_cli *cli, *temp_cli;
+
+	if (nouveau_pmops_runtime()) {
+		pm_runtime_get_sync(dev->dev);
+		pm_runtime_forbid(dev->dev);
+	}
+
+	nouveau_led_fini(dev);
+	nouveau_dmem_fini(drm);
+	nouveau_svm_fini(drm);
+	nouveau_hwmon_fini(dev);
+	nouveau_debugfs_fini(drm);
+
+	if (dev->mode_config.num_crtc)
+		nouveau_display_fini(dev, false, false);
+	nouveau_display_destroy(dev);
+
+	nouveau_accel_fini(drm);
+	nouveau_bios_takedown(dev);
+
+	nouveau_ttm_fini(drm);
+	nouveau_vga_fini(drm);
+
+	/*
+	 * There may be existing clients from as-yet unclosed files. For now,
+	 * clean them up here rather than deferring until the file is closed,
+	 * but this likely not correct if we want to support hot-unplugging
+	 * properly.
+	 */
+	mutex_lock(&drm->clients_lock);
+	list_for_each_entry_safe(cli, temp_cli, &drm->clients, head) {
+		list_del(&cli->head);
+		mutex_lock(&cli->mutex);
+		if (cli->abi16)
+			nouveau_abi16_fini(cli->abi16);
+		mutex_unlock(&cli->mutex);
+		nouveau_cli_fini(cli);
+		kfree(cli);
+	}
+	mutex_unlock(&drm->clients_lock);
+
+	nouveau_cli_fini(&drm->client);
+	destroy_workqueue(drm->sched_wq);
+	mutex_destroy(&drm->clients_lock);
+}
+
+static int
+nouveau_drm_device_init(struct nouveau_drm *drm)
+{
+	struct drm_device *dev = drm->dev;
	int ret;
-
-	if (!(drm = kzalloc(sizeof(*drm), GFP_KERNEL)))
-		return -ENOMEM;
-	dev->dev_private = drm;
-	drm->dev = dev;
-
-	nvif_parent_ctor(&nouveau_parent, &drm->parent);
-	drm->master.base.object.parent = &drm->parent;
 
	drm->sched_wq = alloc_workqueue("nouveau_sched_wq_shared", 0,
					WQ_MAX_ACTIVE);
-	if (!drm->sched_wq) {
-		ret = -ENOMEM;
-		goto fail_alloc;
-	}
-
-	ret = nouveau_cli_init(drm, "DRM-master", &drm->master);
-	if (ret)
-		goto fail_wq;
+	if (!drm->sched_wq)
+		return -ENOMEM;
 
	ret = nouveau_cli_init(drm, "DRM", &drm->client);
	if (ret)
-		goto fail_master;
-
-	nvxx_client(&drm->client.base)->debug =
-		nvkm_dbgopt(nouveau_debug, "DRM");
+		goto fail_wq;
 
	INIT_LIST_HEAD(&drm->clients);
	mutex_init(&drm->clients_lock);
···
		pm_runtime_put(dev->dev);
	}
 
+	ret = drm_dev_register(drm->dev, 0);
+	if (ret) {
+		nouveau_drm_device_fini(drm);
+		return ret;
+	}
+
	return 0;
 fail_dispinit:
	nouveau_display_destroy(dev);
···
 fail_ttm:
	nouveau_vga_fini(drm);
	nouveau_cli_fini(&drm->client);
-fail_master:
-	nouveau_cli_fini(&drm->master);
 fail_wq:
	destroy_workqueue(drm->sched_wq);
-fail_alloc:
-	nvif_parent_dtor(&drm->parent);
-	kfree(drm);
	return ret;
 }
 
 static void
-nouveau_drm_device_fini(struct drm_device *dev)
+nouveau_drm_device_del(struct nouveau_drm *drm)
 {
-	struct nouveau_cli *cli, *temp_cli;
-	struct nouveau_drm *drm = nouveau_drm(dev);
+	if (drm->dev)
+		drm_dev_put(drm->dev);
 
-	if (nouveau_pmops_runtime()) {
-		pm_runtime_get_sync(dev->dev);
-		pm_runtime_forbid(dev->dev);
-	}
-
-	nouveau_led_fini(dev);
-	nouveau_dmem_fini(drm);
-	nouveau_svm_fini(drm);
-	nouveau_hwmon_fini(dev);
-	nouveau_debugfs_fini(drm);
-
-	if (dev->mode_config.num_crtc)
-		nouveau_display_fini(dev, false, false);
-	nouveau_display_destroy(dev);
-
-	nouveau_accel_fini(drm);
-	nouveau_bios_takedown(dev);
-
-	nouveau_ttm_fini(drm);
-	nouveau_vga_fini(drm);
-
-	/*
-	 * There may be existing clients from as-yet unclosed files. For now,
-	 * clean them up here rather than deferring until the file is closed,
-	 * but this likely not correct if we want to support hot-unplugging
-	 * properly.
-	 */
-	mutex_lock(&drm->clients_lock);
-	list_for_each_entry_safe(cli, temp_cli, &drm->clients, head) {
-		list_del(&cli->head);
-		mutex_lock(&cli->mutex);
-		if (cli->abi16)
-			nouveau_abi16_fini(cli->abi16);
-		mutex_unlock(&cli->mutex);
-		nouveau_cli_fini(cli);
-		kfree(cli);
-	}
-	mutex_unlock(&drm->clients_lock);
-
-	nouveau_cli_fini(&drm->client);
-	nouveau_cli_fini(&drm->master);
-	destroy_workqueue(drm->sched_wq);
+	nvif_mmu_dtor(&drm->mmu);
+	nvif_device_dtor(&drm->device);
+	nvif_client_dtor(&drm->_client);
	nvif_parent_dtor(&drm->parent);
-	mutex_destroy(&drm->clients_lock);
+
+	mutex_destroy(&drm->client_mutex);
	kfree(drm);
+}
+
+static struct nouveau_drm *
+nouveau_drm_device_new(const struct drm_driver *drm_driver, struct device *parent,
+		       struct nvkm_device *device)
+{
+	static const struct nvif_mclass
+	mmus[] = {
+		{ NVIF_CLASS_MMU_GF100, -1 },
+		{ NVIF_CLASS_MMU_NV50 , -1 },
+		{ NVIF_CLASS_MMU_NV04 , -1 },
+		{}
+	};
+	struct nouveau_drm *drm;
+	int ret;
+
+	drm = kzalloc(sizeof(*drm), GFP_KERNEL);
+	if (!drm)
+		return ERR_PTR(-ENOMEM);
+
+	drm->nvkm = device;
+
+	drm->dev = drm_dev_alloc(drm_driver, parent);
+	if (IS_ERR(drm->dev)) {
+		ret = PTR_ERR(drm->dev);
+		goto done;
+	}
+
+	drm->dev->dev_private = drm;
+	dev_set_drvdata(parent, drm);
+
+	nvif_parent_ctor(&nouveau_parent, &drm->parent);
+	mutex_init(&drm->client_mutex);
+	drm->_client.object.parent = &drm->parent;
+
+	ret = nvif_driver_init(NULL, nouveau_config, nouveau_debug, "drm",
+			       nouveau_name(drm->dev), &drm->_client);
+	if (ret)
+		goto done;
+
+	ret = nvif_device_ctor(&drm->_client, "drmDevice", &drm->device);
+	if (ret) {
+		NV_ERROR(drm, "Device allocation failed: %d\n", ret);
+		goto done;
+	}
+
+	ret = nvif_device_map(&drm->device);
+	if (ret) {
+		NV_ERROR(drm, "Failed to map PRI: %d\n", ret);
+		goto done;
+	}
+
+	ret = nvif_mclass(&drm->device.object, mmus);
+	if (ret < 0) {
+		NV_ERROR(drm, "No supported MMU class\n");
+		goto done;
+	}
+
+	ret = nvif_mmu_ctor(&drm->device.object, "drmMmu", mmus[ret].oclass, &drm->mmu);
+	if (ret) {
+		NV_ERROR(drm, "MMU allocation failed: %d\n", ret);
+		goto done;
+	}
+
+done:
+	if (ret) {
+		nouveau_drm_device_del(drm);
+		drm = NULL;
+	}
+
+	return ret ? ERR_PTR(ret) : drm;
 }
 
 /*
···
 
 static void quirk_broken_nv_runpm(struct pci_dev *pdev)
 {
-	struct drm_device *dev = pci_get_drvdata(pdev);
-	struct nouveau_drm *drm = nouveau_drm(dev);
+	struct nouveau_drm *drm = pci_get_drvdata(pdev);
	struct pci_dev *bridge = pci_upstream_bridge(pdev);
 
	if (!bridge || bridge->vendor != PCI_VENDOR_ID_INTEL)
···
		      const struct pci_device_id *pent)
 {
	struct nvkm_device *device;
-	struct drm_device *drm_dev;
+	struct nouveau_drm *drm;
	int ret;
 
	if (vga_switcheroo_client_probe_defer(pdev))
···
	/* We need to check that the chipset is supported before booting
	 * fbdev off the hardware, as there's no way to put it back.
	 */
-	ret = nvkm_device_pci_new(pdev, nouveau_config, "error",
-				  true, false, 0, &device);
+	ret = nvkm_device_pci_new(pdev, nouveau_config, nouveau_debug, &device);
	if (ret)
		return ret;
-
-	nvkm_device_del(&device);
 
	/* Remove conflicting drivers (vesafb, efifb etc). */
	ret = drm_aperture_remove_conflicting_pci_framebuffers(pdev, &driver_pci);
-	if (ret)
-		return ret;
-
-	ret = nvkm_device_pci_new(pdev, nouveau_config, nouveau_debug,
-				  true, true, ~0ULL, &device);
	if (ret)
		return ret;
 
···
	if (nouveau_atomic)
		driver_pci.driver_features |= DRIVER_ATOMIC;
 
-	drm_dev = drm_dev_alloc(&driver_pci, &pdev->dev);
-	if (IS_ERR(drm_dev)) {
-		ret = PTR_ERR(drm_dev);
+	drm = nouveau_drm_device_new(&driver_pci, &pdev->dev, device);
+	if (IS_ERR(drm)) {
+		ret = PTR_ERR(drm);
		goto fail_nvkm;
	}
 
···
	if (ret)
		goto fail_drm;
 
-	pci_set_drvdata(pdev, drm_dev);
-
-	ret = nouveau_drm_device_init(drm_dev);
+	ret = nouveau_drm_device_init(drm);
	if (ret)
		goto fail_pci;
 
-	ret = drm_dev_register(drm_dev, pent->driver_data);
-	if (ret)
-		goto fail_drm_dev_init;
-
-	if (nouveau_drm(drm_dev)->client.device.info.ram_size <= 32 * 1024 * 1024)
-		drm_fbdev_ttm_setup(drm_dev, 8);
+	if (drm->client.device.info.ram_size <= 32 * 1024 * 1024)
+		drm_fbdev_ttm_setup(drm->dev, 8);
	else
-		drm_fbdev_ttm_setup(drm_dev, 32);
+		drm_fbdev_ttm_setup(drm->dev, 32);
 
	quirk_broken_nv_runpm(pdev);
	return 0;
 
-fail_drm_dev_init:
-	nouveau_drm_device_fini(drm_dev);
 fail_pci:
	pci_disable_device(pdev);
 fail_drm:
-	drm_dev_put(drm_dev);
+	nouveau_drm_device_del(drm);
 fail_nvkm:
	nvkm_device_del(&device);
	return ret;
 }
 
 void
-nouveau_drm_device_remove(struct drm_device *dev)
+nouveau_drm_device_remove(struct nouveau_drm *drm)
 {
-	struct nouveau_drm *drm = nouveau_drm(dev);
-	struct nvkm_client *client;
-	struct nvkm_device *device;
+	struct nvkm_device *device = drm->nvkm;
 
-	drm_dev_unplug(dev);
+	drm_dev_unplug(drm->dev);
 
-	client = nvxx_client(&drm->client.base);
-	device = nvkm_device_find(client->device);
-
-	nouveau_drm_device_fini(dev);
-	drm_dev_put(dev);
+	nouveau_drm_device_fini(drm);
+	nouveau_drm_device_del(drm);
	nvkm_device_del(&device);
 }
 
 static void
 nouveau_drm_remove(struct pci_dev *pdev)
 {
-	struct drm_device *dev = pci_get_drvdata(pdev);
-	struct nouveau_drm *drm = nouveau_drm(dev);
+	struct nouveau_drm *drm = pci_get_drvdata(pdev);
 
	/* revert our workaround */
	if (drm->old_pm_cap)
		pdev->pm_cap = drm->old_pm_cap;
-	nouveau_drm_device_remove(dev);
+	nouveau_drm_device_remove(drm);
	pci_disable_device(pdev);
 }
 
 static int
-nouveau_do_suspend(struct drm_device *dev, bool runtime)
+nouveau_do_suspend(struct nouveau_drm *drm, bool runtime)
 {
-	struct nouveau_drm *drm = nouveau_drm(dev);
+	struct drm_device *dev = drm->dev;
	struct ttm_resource_manager *man;
	int ret;
 
···
	}
 
	NV_DEBUG(drm, "suspending object tree...\n");
-	ret = nvif_client_suspend(&drm->master.base);
+	ret = nvif_client_suspend(&drm->_client);
	if (ret)
		goto fail_client;
 
···
 }
 
 static int
-nouveau_do_resume(struct drm_device *dev, bool runtime)
+nouveau_do_resume(struct nouveau_drm *drm, bool runtime)
 {
+	struct drm_device *dev = drm->dev;
	int ret = 0;
-	struct nouveau_drm *drm = nouveau_drm(dev);
 
	NV_DEBUG(drm, "resuming object tree...\n");
-	ret = nvif_client_resume(&drm->master.base);
+	ret = nvif_client_resume(&drm->_client);
	if (ret) {
		NV_ERROR(drm, "Client resume failed with error: %d\n", ret);
		return ret;
···
 nouveau_pmops_suspend(struct device *dev)
 {
	struct pci_dev *pdev = to_pci_dev(dev);
-	struct drm_device *drm_dev = pci_get_drvdata(pdev);
+	struct nouveau_drm *drm = pci_get_drvdata(pdev);
	int ret;
 
-	if (drm_dev->switch_power_state == DRM_SWITCH_POWER_OFF ||
-	    drm_dev->switch_power_state == DRM_SWITCH_POWER_DYNAMIC_OFF)
+	if (drm->dev->switch_power_state == DRM_SWITCH_POWER_OFF ||
+	    drm->dev->switch_power_state == DRM_SWITCH_POWER_DYNAMIC_OFF)
		return 0;
 
-	ret = nouveau_do_suspend(drm_dev, false);
+	ret = nouveau_do_suspend(drm, false);
	if (ret)
		return ret;
 
···
 nouveau_pmops_resume(struct device *dev)
 {
	struct pci_dev *pdev = to_pci_dev(dev);
-	struct drm_device *drm_dev = pci_get_drvdata(pdev);
+	struct nouveau_drm *drm = pci_get_drvdata(pdev);
	int ret;
 
-	if (drm_dev->switch_power_state == DRM_SWITCH_POWER_OFF ||
-	    drm_dev->switch_power_state == DRM_SWITCH_POWER_DYNAMIC_OFF)
+	if (drm->dev->switch_power_state == DRM_SWITCH_POWER_OFF ||
+	    drm->dev->switch_power_state == DRM_SWITCH_POWER_DYNAMIC_OFF)
		return 0;
 
	pci_set_power_state(pdev, PCI_D0);
···
		return ret;
	pci_set_master(pdev);
 
-	ret = nouveau_do_resume(drm_dev, false);
+	ret = nouveau_do_resume(drm, false);
 
	/* Monitors may have been connected / disconnected during suspend */
-	nouveau_display_hpd_resume(drm_dev);
+	nouveau_display_hpd_resume(drm);
 
	return ret;
 }
···
 static int
 nouveau_pmops_freeze(struct device *dev)
 {
-	struct pci_dev *pdev = to_pci_dev(dev);
-	struct drm_device *drm_dev = pci_get_drvdata(pdev);
-	return nouveau_do_suspend(drm_dev, false);
+	struct nouveau_drm *drm = dev_get_drvdata(dev);
+
+	return nouveau_do_suspend(drm, false);
 }
 
 static int
 nouveau_pmops_thaw(struct device *dev)
 {
-	struct pci_dev *pdev = to_pci_dev(dev);
-	struct drm_device *drm_dev = pci_get_drvdata(pdev);
-	return nouveau_do_resume(drm_dev, false);
+	struct nouveau_drm *drm = dev_get_drvdata(dev);
+
+	return nouveau_do_resume(drm, false);
 }
 
 bool
···
 nouveau_pmops_runtime_suspend(struct device *dev)
 {
	struct pci_dev *pdev = to_pci_dev(dev);
-	struct drm_device *drm_dev = pci_get_drvdata(pdev);
+	struct nouveau_drm *drm = pci_get_drvdata(pdev);
	int ret;
 
	if (!nouveau_pmops_runtime()) {
···
	}
 
	nouveau_switcheroo_optimus_dsm();
-	ret = nouveau_do_suspend(drm_dev, true);
+	ret = nouveau_do_suspend(drm, true);
	pci_save_state(pdev);
	pci_disable_device(pdev);
	pci_ignore_hotplug(pdev);
	pci_set_power_state(pdev, PCI_D3cold);
-	drm_dev->switch_power_state = DRM_SWITCH_POWER_DYNAMIC_OFF;
+	drm->dev->switch_power_state = DRM_SWITCH_POWER_DYNAMIC_OFF;
	return ret;
 }
 
···
 nouveau_pmops_runtime_resume(struct device *dev)
 {
	struct pci_dev *pdev = to_pci_dev(dev);
-	struct drm_device *drm_dev = pci_get_drvdata(pdev);
-	struct nouveau_drm *drm = nouveau_drm(drm_dev);
-	struct nvif_device *device = &nouveau_drm(drm_dev)->client.device;
+	struct nouveau_drm *drm = pci_get_drvdata(pdev);
+	struct nvif_device *device = &drm->client.device;
	int ret;
 
	if (!nouveau_pmops_runtime()) {
···
		return ret;
	pci_set_master(pdev);
 
-	ret = nouveau_do_resume(drm_dev, true);
+	ret = nouveau_do_resume(drm, true);
	if (ret) {
		NV_ERROR(drm, "resume failed with: %d\n", ret);
		return ret;
···
 
	/* do magic */
	nvif_mask(&device->object, 0x088488, (1 << 25), (1 << 25));
-	drm_dev->switch_power_state = DRM_SWITCH_POWER_ON;
+	drm->dev->switch_power_state = DRM_SWITCH_POWER_ON;
 
	/* Monitors may have been connected / disconnected during suspend */
-	nouveau_display_hpd_resume(drm_dev);
+	nouveau_display_hpd_resume(drm);
 
	return ret;
 }
···
 
	switch (_IOC_NR(cmd) - DRM_COMMAND_BASE) {
	case DRM_NOUVEAU_NVIF:
-		ret = usif_ioctl(filp, (void __user *)arg, _IOC_SIZE(cmd));
+		ret = nouveau_abi16_ioctl(filp, (void __user *)arg, _IOC_SIZE(cmd));
		break;
	default:
		ret = drm_ioctl(file, cmd, arg);
···
			       struct platform_device *pdev,
			       struct nvkm_device **pdevice)
 {
-	struct drm_device *drm;
+	struct nouveau_drm *drm;
	int err;
 
-	err = nvkm_device_tegra_new(func, pdev, nouveau_config, nouveau_debug,
-				    true, true, ~0ULL, pdevice);
+	err = nvkm_device_tegra_new(func, pdev, nouveau_config, nouveau_debug, pdevice);
	if (err)
		goto err_free;
 
-	drm = drm_dev_alloc(&driver_platform, &pdev->dev);
+	drm = nouveau_drm_device_new(&driver_platform, &pdev->dev, *pdevice);
	if (IS_ERR(drm)) {
		err = PTR_ERR(drm);
		goto err_free;
···
	if (err)
		goto err_put;
 
-	platform_set_drvdata(pdev, drm);
-
-	return drm;
+	return drm->dev;
 
 err_put:
-	drm_dev_put(drm);
+	nouveau_drm_device_del(drm);
 err_free:
	nvkm_device_del(pdevice);
 
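The nouveau_drm.c rework above splits the device lifetime into allocation (`nouveau_drm_device_new()`/`_del()`) and bring-up (`nouveau_drm_device_init()`/`_fini()`), with `drm_dev_register()` folded into `_init()`. A minimal sketch of that two-stage lifetime pattern, using hypothetical stand-in names rather than the real nouveau entry points:

```c
#include <assert.h>
#include <stdlib.h>

struct dev {
	int allocated;
	int registered;
};

/* Stage 1: allocate and set up handles that live for the whole
 * device lifetime; undone by dev_del(). */
static struct dev *dev_new(void)
{
	struct dev *d = calloc(1, sizeof(*d));

	if (d)
		d->allocated = 1;
	return d;
}

/* Stage 2: bring the device up and register it; on failure the
 * caller only has to unwind stage 1. */
static int dev_init(struct dev *d)
{
	d->registered = 1;
	return 0;
}

/* Teardown mirrors the stages in reverse order. */
static void dev_fini(struct dev *d)
{
	d->registered = 0;
}

static void dev_del(struct dev *d)
{
	free(d);
}
```

Keeping the pairs symmetric (`new`/`del`, `init`/`fini`) is what lets the error paths in the probe function above collapse to a short goto ladder.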
+53 -8
drivers/gpu/drm/nouveau/nouveau_drv.h
···
 #include <nvif/parent.h>
 
 struct nouveau_drm {
+	struct nvkm_device *nvkm;
	struct nvif_parent parent;
-	struct nouveau_cli master;
+	struct mutex client_mutex;
+	struct nvif_client _client;
+	struct nvif_device device;
+	struct nvif_mmu mmu;
+
	struct nouveau_cli client;
	struct drm_device *dev;
 
···
 struct drm_device *
 nouveau_platform_device_create(const struct nvkm_device_tegra_func *,
			       struct platform_device *, struct nvkm_device **);
-void nouveau_drm_device_remove(struct drm_device *dev);
+void nouveau_drm_device_remove(struct nouveau_drm *);
 
 #define NV_PRINTK(l,c,f,a...) do {                                            \
	struct nouveau_cli *_cli = (c);                                        \
	dev_##l(_cli->drm->dev->dev, "%s: "f, _cli->name, ##a);                \
 } while(0)
 
-#define NV_FATAL(drm,f,a...) NV_PRINTK(crit, &(drm)->client, f, ##a)
-#define NV_ERROR(drm,f,a...) NV_PRINTK(err, &(drm)->client, f, ##a)
-#define NV_WARN(drm,f,a...) NV_PRINTK(warn, &(drm)->client, f, ##a)
-#define NV_INFO(drm,f,a...) NV_PRINTK(info, &(drm)->client, f, ##a)
+#define NV_PRINTK_(l,drm,f,a...) do {                                         \
+	dev_##l((drm)->nvkm->dev, "drm: "f, ##a);                              \
+} while(0)
+#define NV_FATAL(drm,f,a...) NV_PRINTK_(crit, (drm), f, ##a)
+#define NV_ERROR(drm,f,a...) NV_PRINTK_(err, (drm), f, ##a)
+#define NV_WARN(drm,f,a...) NV_PRINTK_(warn, (drm), f, ##a)
+#define NV_INFO(drm,f,a...) NV_PRINTK_(info, (drm), f, ##a)
 
 #define NV_DEBUG(drm,f,a...) do {                                             \
	if (drm_debug_enabled(DRM_UT_DRIVER))                                  \
-		NV_PRINTK(info, &(drm)->client, f, ##a);                       \
+		NV_PRINTK_(info, (drm), f, ##a);                               \
 } while(0)
 #define NV_ATOMIC(drm,f,a...) do {                                            \
	if (drm_debug_enabled(DRM_UT_ATOMIC))                                  \
-		NV_PRINTK(info, &(drm)->client, f, ##a);                       \
+		NV_PRINTK_(info, (drm), f, ##a);                               \
 } while(0)
 
 #define NV_PRINTK_ONCE(l,c,f,a...) NV_PRINTK(l##_once,c,f, ##a)
···
 
 extern int nouveau_modeset;
 
+/*XXX: Don't use these in new code.
+ *
+ * These accessors are used in a few places (mostly older code paths)
+ * to get direct access to NVKM structures, where a more well-defined
+ * interface doesn't exist. Outside of the current use, these should
+ * not be relied on, and instead be implemented as NVIF.
+ *
+ * This is especially important when considering GSP-RM, as a lot the
+ * modules don't exist, or are "stub" implementations that just allow
+ * the GSP-RM paths to be bootstrapped.
+ */
+#include <subdev/bios.h>
+#include <subdev/fb.h>
+#include <subdev/gpio.h>
+#include <subdev/clk.h>
+#include <subdev/i2c.h>
+#include <subdev/timer.h>
+#include <subdev/therm.h>
+
+static inline struct nvkm_device *
+nvxx_device(struct nouveau_drm *drm)
+{
+	return drm->nvkm;
+}
+
+#define nvxx_bios(a) nvxx_device(a)->bios
+#define nvxx_fb(a) nvxx_device(a)->fb
+#define nvxx_gpio(a) nvxx_device(a)->gpio
+#define nvxx_clk(a) nvxx_device(a)->clk
+#define nvxx_i2c(a) nvxx_device(a)->i2c
+#define nvxx_iccsense(a) nvxx_device(a)->iccsense
+#define nvxx_therm(a) nvxx_device(a)->therm
+#define nvxx_volt(a) nvxx_device(a)->volt
+
+#include <engine/gr.h>
+
+#define nvxx_gr(a) nvxx_device(a)->gr
 #endif
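The new `NV_PRINTK_` variant above logs via the NVKM device with a fixed "drm: " prefix, so driver-level messages no longer need a client. A self-contained sketch of that macro shape, formatting into a buffer instead of the kernel log so it can be exercised in isolation (`NV_LOG` and `nv_log` are stand-ins, not the real macro or `dev_err()`):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical stand-in: the real macro expands to dev_err()/dev_info()
 * on (drm)->nvkm->dev; here the message is captured for inspection.
 * Uses the same GNU named-variadic style (a..., ##a) as the header. */
static char nv_log[256];

#define NV_LOG(f, a...) \
	snprintf(nv_log, sizeof(nv_log), "drm: " f, ##a)
```

The `##a` token-paste swallows the trailing comma when no variadic arguments are given, so `NV_LOG("probe done")` compiles just like `NV_LOG("resume failed with: %d", ret)`.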
+9 -8
drivers/gpu/drm/nouveau/nouveau_fence.c
···
 void
 nouveau_fence_context_new(struct nouveau_channel *chan, struct nouveau_fence_chan *fctx)
 {
-	struct nouveau_fence_priv *priv = (void*)chan->drm->fence;
-	struct nouveau_cli *cli = (void *)chan->user.client;
+	struct nouveau_cli *cli = chan->cli;
+	struct nouveau_drm *drm = cli->drm;
+	struct nouveau_fence_priv *priv = (void*)drm->fence;
	struct {
		struct nvif_event_v0 base;
		struct nvif_chan_event_v0 host;
···
	INIT_LIST_HEAD(&fctx->flip);
	INIT_LIST_HEAD(&fctx->pending);
	spin_lock_init(&fctx->lock);
-	fctx->context = chan->drm->runl[chan->runlist].context_base + chan->chid;
+	fctx->context = drm->runl[chan->runlist].context_base + chan->chid;
 
-	if (chan == chan->drm->cechan)
+	if (chan == drm->cechan)
		strcpy(fctx->name, "copy engine channel");
-	else if (chan == chan->drm->channel)
+	else if (chan == drm->channel)
		strcpy(fctx->name, "generic kernel channel");
	else
-		strcpy(fctx->name, nvxx_client(&cli->base)->name);
+		strcpy(fctx->name, cli->name);
 
	kref_init(&fctx->fence_ref);
	if (!priv->uevent)
···
 {
	struct nouveau_channel *chan = unrcu_pointer(fence->channel);
	struct nouveau_fence_chan *fctx = chan->fence;
-	struct nouveau_fence_priv *priv = (void*)chan->drm->fence;
+	struct nouveau_fence_priv *priv = (void*)chan->cli->drm->fence;
	int ret;
 
	fence->timeout = jiffies + (15 * HZ);
···
		if (i == 0 && usage == DMA_RESV_USAGE_WRITE)
			continue;
 
-		f = nouveau_local_fence(fence, chan->drm);
+		f = nouveau_local_fence(fence, chan->cli->drm);
		if (f) {
			struct nouveau_channel *prev;
			bool must_wait = true;
+11 -10
drivers/gpu/drm/nouveau/nouveau_gem.c
··· 567 567 } 568 568 569 569 static int 570 - validate_list(struct nouveau_channel *chan, struct nouveau_cli *cli, 570 + validate_list(struct nouveau_channel *chan, 571 571 struct list_head *list, struct drm_nouveau_gem_pushbuf_bo *pbbo) 572 572 { 573 - struct nouveau_drm *drm = chan->drm; 573 + struct nouveau_cli *cli = chan->cli; 574 + struct nouveau_drm *drm = cli->drm; 574 575 struct nouveau_bo *nvbo; 575 576 int ret, relocs = 0; 576 577 ··· 643 642 return ret; 644 643 } 645 644 646 - ret = validate_list(chan, cli, &op->list, pbbo); 645 + ret = validate_list(chan, &op->list, pbbo); 647 646 if (unlikely(ret < 0)) { 648 647 if (ret != -ERESTARTSYS) 649 648 NV_PRINTK(err, cli, "validating bo list\n"); ··· 871 870 } 872 871 } else 873 872 if (drm->client.device.info.chipset >= 0x25) { 874 - ret = PUSH_WAIT(chan->chan.push, req->nr_push * 2); 873 + ret = PUSH_WAIT(&chan->chan.push, req->nr_push * 2); 875 874 if (ret) { 876 875 NV_PRINTK(err, cli, "cal_space: %d\n", ret); 877 876 goto out; ··· 881 880 struct nouveau_bo *nvbo = (void *)(unsigned long) 882 881 bo[push[i].bo_index].user_priv; 883 882 884 - PUSH_CALL(chan->chan.push, nvbo->offset + push[i].offset); 885 - PUSH_DATA(chan->chan.push, 0); 883 + PUSH_CALL(&chan->chan.push, nvbo->offset + push[i].offset); 884 + PUSH_DATA(&chan->chan.push, 0); 886 885 } 887 886 } else { 888 - ret = PUSH_WAIT(chan->chan.push, req->nr_push * (2 + NOUVEAU_DMA_SKIPS)); 887 + ret = PUSH_WAIT(&chan->chan.push, req->nr_push * (2 + NOUVEAU_DMA_SKIPS)); 889 888 if (ret) { 890 889 NV_PRINTK(err, cli, "jmp_space: %d\n", ret); 891 890 goto out; ··· 914 913 push[i].length - 8) / 4, cmd); 915 914 } 916 915 917 - PUSH_JUMP(chan->chan.push, nvbo->offset + push[i].offset); 918 - PUSH_DATA(chan->chan.push, 0); 916 + PUSH_JUMP(&chan->chan.push, nvbo->offset + push[i].offset); 917 + PUSH_DATA(&chan->chan.push, 0); 919 918 for (j = 0; j < NOUVEAU_DMA_SKIPS; j++) 920 - PUSH_DATA(chan->chan.push, 0); 919 + PUSH_DATA(&chan->chan.push, 0); 921 920 } 
922 921 } 923 922
+23 -23
drivers/gpu/drm/nouveau/nouveau_hwmon.c
··· 52 52 { 53 53 struct drm_device *dev = dev_get_drvdata(d); 54 54 struct nouveau_drm *drm = nouveau_drm(dev); 55 - struct nvkm_therm *therm = nvxx_therm(&drm->client.device); 55 + struct nvkm_therm *therm = nvxx_therm(drm); 56 56 57 57 return sysfs_emit(buf, "%d\n", 58 58 therm->attr_get(therm, NVKM_THERM_ATTR_THRS_FAN_BOOST) * 1000); ··· 64 64 { 65 65 struct drm_device *dev = dev_get_drvdata(d); 66 66 struct nouveau_drm *drm = nouveau_drm(dev); 67 - struct nvkm_therm *therm = nvxx_therm(&drm->client.device); 67 + struct nvkm_therm *therm = nvxx_therm(drm); 68 68 long value; 69 69 70 70 if (kstrtol(buf, 10, &value)) ··· 85 85 { 86 86 struct drm_device *dev = dev_get_drvdata(d); 87 87 struct nouveau_drm *drm = nouveau_drm(dev); 88 - struct nvkm_therm *therm = nvxx_therm(&drm->client.device); 88 + struct nvkm_therm *therm = nvxx_therm(drm); 89 89 90 90 return sysfs_emit(buf, "%d\n", 91 91 therm->attr_get(therm, NVKM_THERM_ATTR_THRS_FAN_BOOST_HYST) * 1000); ··· 97 97 { 98 98 struct drm_device *dev = dev_get_drvdata(d); 99 99 struct nouveau_drm *drm = nouveau_drm(dev); 100 - struct nvkm_therm *therm = nvxx_therm(&drm->client.device); 100 + struct nvkm_therm *therm = nvxx_therm(drm); 101 101 long value; 102 102 103 103 if (kstrtol(buf, 10, &value)) ··· 118 118 { 119 119 struct drm_device *dev = dev_get_drvdata(d); 120 120 struct nouveau_drm *drm = nouveau_drm(dev); 121 - struct nvkm_therm *therm = nvxx_therm(&drm->client.device); 121 + struct nvkm_therm *therm = nvxx_therm(drm); 122 122 int ret; 123 123 124 124 ret = therm->attr_get(therm, NVKM_THERM_ATTR_FAN_MAX_DUTY); ··· 134 134 { 135 135 struct drm_device *dev = dev_get_drvdata(d); 136 136 struct nouveau_drm *drm = nouveau_drm(dev); 137 - struct nvkm_therm *therm = nvxx_therm(&drm->client.device); 137 + struct nvkm_therm *therm = nvxx_therm(drm); 138 138 int ret; 139 139 140 140 ret = therm->attr_get(therm, NVKM_THERM_ATTR_FAN_MIN_DUTY); ··· 150 150 { 151 151 struct drm_device *dev = dev_get_drvdata(d); 152 152 
struct nouveau_drm *drm = nouveau_drm(dev); 153 - struct nvkm_therm *therm = nvxx_therm(&drm->client.device); 153 + struct nvkm_therm *therm = nvxx_therm(drm); 154 154 long value; 155 155 int ret; 156 156 ··· 173 173 { 174 174 struct drm_device *dev = dev_get_drvdata(d); 175 175 struct nouveau_drm *drm = nouveau_drm(dev); 176 - struct nvkm_therm *therm = nvxx_therm(&drm->client.device); 176 + struct nvkm_therm *therm = nvxx_therm(drm); 177 177 long value; 178 178 int ret; 179 179 ··· 247 247 nouveau_power_is_visible(const void *data, u32 attr, int channel) 248 248 { 249 249 struct nouveau_drm *drm = nouveau_drm((struct drm_device *)data); 250 - struct nvkm_iccsense *iccsense = nvxx_iccsense(&drm->client.device); 250 + struct nvkm_iccsense *iccsense = nvxx_iccsense(drm); 251 251 252 252 if (!iccsense || !iccsense->data_valid || list_empty(&iccsense->rails)) 253 253 return 0; ··· 272 272 nouveau_temp_is_visible(const void *data, u32 attr, int channel) 273 273 { 274 274 struct nouveau_drm *drm = nouveau_drm((struct drm_device *)data); 275 - struct nvkm_therm *therm = nvxx_therm(&drm->client.device); 275 + struct nvkm_therm *therm = nvxx_therm(drm); 276 276 277 277 if (!therm || !therm->attr_get || nvkm_therm_temp_get(therm) < 0) 278 278 return 0; ··· 296 296 nouveau_pwm_is_visible(const void *data, u32 attr, int channel) 297 297 { 298 298 struct nouveau_drm *drm = nouveau_drm((struct drm_device *)data); 299 - struct nvkm_therm *therm = nvxx_therm(&drm->client.device); 299 + struct nvkm_therm *therm = nvxx_therm(drm); 300 300 301 301 if (!therm || !therm->attr_get || !therm->fan_get || 302 302 therm->fan_get(therm) < 0) ··· 315 315 nouveau_input_is_visible(const void *data, u32 attr, int channel) 316 316 { 317 317 struct nouveau_drm *drm = nouveau_drm((struct drm_device *)data); 318 - struct nvkm_volt *volt = nvxx_volt(&drm->client.device); 318 + struct nvkm_volt *volt = nvxx_volt(drm); 319 319 320 320 if (!volt || nvkm_volt_get(volt) < 0) 321 321 return 0; ··· 335 335 
nouveau_fan_is_visible(const void *data, u32 attr, int channel) 336 336 { 337 337 struct nouveau_drm *drm = nouveau_drm((struct drm_device *)data); 338 - struct nvkm_therm *therm = nvxx_therm(&drm->client.device); 338 + struct nvkm_therm *therm = nvxx_therm(drm); 339 339 340 340 if (!therm || !therm->attr_get || nvkm_therm_fan_sense(therm) < 0) 341 341 return 0; ··· 367 367 { 368 368 struct drm_device *drm_dev = dev_get_drvdata(dev); 369 369 struct nouveau_drm *drm = nouveau_drm(drm_dev); 370 - struct nvkm_therm *therm = nvxx_therm(&drm->client.device); 370 + struct nvkm_therm *therm = nvxx_therm(drm); 371 371 int ret; 372 372 373 373 if (!therm || !therm->attr_get) ··· 416 416 { 417 417 struct drm_device *drm_dev = dev_get_drvdata(dev); 418 418 struct nouveau_drm *drm = nouveau_drm(drm_dev); 419 - struct nvkm_therm *therm = nvxx_therm(&drm->client.device); 419 + struct nvkm_therm *therm = nvxx_therm(drm); 420 420 421 421 if (!therm) 422 422 return -EOPNOTSUPP; ··· 439 439 { 440 440 struct drm_device *drm_dev = dev_get_drvdata(dev); 441 441 struct nouveau_drm *drm = nouveau_drm(drm_dev); 442 - struct nvkm_volt *volt = nvxx_volt(&drm->client.device); 442 + struct nvkm_volt *volt = nvxx_volt(drm); 443 443 int ret; 444 444 445 445 if (!volt) ··· 470 470 { 471 471 struct drm_device *drm_dev = dev_get_drvdata(dev); 472 472 struct nouveau_drm *drm = nouveau_drm(drm_dev); 473 - struct nvkm_therm *therm = nvxx_therm(&drm->client.device); 473 + struct nvkm_therm *therm = nvxx_therm(drm); 474 474 475 475 if (!therm || !therm->attr_get || !therm->fan_get) 476 476 return -EOPNOTSUPP; ··· 496 496 { 497 497 struct drm_device *drm_dev = dev_get_drvdata(dev); 498 498 struct nouveau_drm *drm = nouveau_drm(drm_dev); 499 - struct nvkm_iccsense *iccsense = nvxx_iccsense(&drm->client.device); 499 + struct nvkm_iccsense *iccsense = nvxx_iccsense(drm); 500 500 501 501 if (!iccsense) 502 502 return -EOPNOTSUPP; ··· 525 525 { 526 526 struct drm_device *drm_dev = dev_get_drvdata(dev); 527 
527 struct nouveau_drm *drm = nouveau_drm(drm_dev); 528 - struct nvkm_therm *therm = nvxx_therm(&drm->client.device); 528 + struct nvkm_therm *therm = nvxx_therm(drm); 529 529 530 530 if (!therm || !therm->attr_set) 531 531 return -EOPNOTSUPP; ··· 559 559 { 560 560 struct drm_device *drm_dev = dev_get_drvdata(dev); 561 561 struct nouveau_drm *drm = nouveau_drm(drm_dev); 562 - struct nvkm_therm *therm = nvxx_therm(&drm->client.device); 562 + struct nvkm_therm *therm = nvxx_therm(drm); 563 563 564 564 if (!therm || !therm->attr_set) 565 565 return -EOPNOTSUPP; ··· 664 664 { 665 665 #if defined(CONFIG_HWMON) || (defined(MODULE) && defined(CONFIG_HWMON_MODULE)) 666 666 struct nouveau_drm *drm = nouveau_drm(dev); 667 - struct nvkm_iccsense *iccsense = nvxx_iccsense(&drm->client.device); 668 - struct nvkm_therm *therm = nvxx_therm(&drm->client.device); 669 - struct nvkm_volt *volt = nvxx_volt(&drm->client.device); 667 + struct nvkm_iccsense *iccsense = nvxx_iccsense(drm); 668 + struct nvkm_therm *therm = nvxx_therm(drm); 669 + struct nvkm_volt *volt = nvxx_volt(drm); 670 670 const struct attribute_group *special_groups[N_ATTR_GROUPS]; 671 671 struct nouveau_hwmon *hwmon; 672 672 struct device *hwmon_dev;
+1 -1
drivers/gpu/drm/nouveau/nouveau_led.c
··· 78 78 nouveau_led_init(struct drm_device *dev) 79 79 { 80 80 struct nouveau_drm *drm = nouveau_drm(dev); 81 - struct nvkm_gpio *gpio = nvxx_gpio(&drm->client.device); 81 + struct nvkm_gpio *gpio = nvxx_gpio(drm); 82 82 struct dcb_gpio_func logo_led; 83 83 int ret; 84 84
+18 -20
drivers/gpu/drm/nouveau/nouveau_mem.c
··· 78 78 void 79 79 nouveau_mem_fini(struct nouveau_mem *mem) 80 80 { 81 - nvif_vmm_put(&mem->cli->drm->client.vmm.vmm, &mem->vma[1]); 82 - nvif_vmm_put(&mem->cli->drm->client.vmm.vmm, &mem->vma[0]); 83 - mutex_lock(&mem->cli->drm->master.lock); 81 + nvif_vmm_put(&mem->drm->client.vmm.vmm, &mem->vma[1]); 82 + nvif_vmm_put(&mem->drm->client.vmm.vmm, &mem->vma[0]); 83 + mutex_lock(&mem->drm->client_mutex); 84 84 nvif_mem_dtor(&mem->mem); 85 - mutex_unlock(&mem->cli->drm->master.lock); 85 + mutex_unlock(&mem->drm->client_mutex); 86 86 } 87 87 88 88 int 89 89 nouveau_mem_host(struct ttm_resource *reg, struct ttm_tt *tt) 90 90 { 91 91 struct nouveau_mem *mem = nouveau_mem(reg); 92 - struct nouveau_cli *cli = mem->cli; 93 - struct nouveau_drm *drm = cli->drm; 94 - struct nvif_mmu *mmu = &cli->mmu; 92 + struct nouveau_drm *drm = mem->drm; 93 + struct nvif_mmu *mmu = &drm->mmu; 95 94 struct nvif_mem_ram_v0 args = {}; 96 95 u8 type; 97 96 int ret; ··· 113 114 else 114 115 args.dma = tt->dma_address; 115 116 116 - mutex_lock(&drm->master.lock); 117 - ret = nvif_mem_ctor_type(mmu, "ttmHostMem", cli->mem->oclass, type, PAGE_SHIFT, 117 + mutex_lock(&drm->client_mutex); 118 + ret = nvif_mem_ctor_type(mmu, "ttmHostMem", mmu->mem, type, PAGE_SHIFT, 118 119 reg->size, 119 120 &args, sizeof(args), &mem->mem); 120 - mutex_unlock(&drm->master.lock); 121 + mutex_unlock(&drm->client_mutex); 121 122 return ret; 122 123 } 123 124 ··· 125 126 nouveau_mem_vram(struct ttm_resource *reg, bool contig, u8 page) 126 127 { 127 128 struct nouveau_mem *mem = nouveau_mem(reg); 128 - struct nouveau_cli *cli = mem->cli; 129 - struct nouveau_drm *drm = cli->drm; 130 - struct nvif_mmu *mmu = &cli->mmu; 129 + struct nouveau_drm *drm = mem->drm; 130 + struct nvif_mmu *mmu = &drm->mmu; 131 131 u64 size = ALIGN(reg->size, 1 << page); 132 132 int ret; 133 133 134 - mutex_lock(&drm->master.lock); 135 - switch (cli->mem->oclass) { 134 + mutex_lock(&drm->client_mutex); 135 + switch (mmu->mem) { 136 136 case 
NVIF_CLASS_MEM_GF100: 137 - ret = nvif_mem_ctor_type(mmu, "ttmVram", cli->mem->oclass, 137 + ret = nvif_mem_ctor_type(mmu, "ttmVram", mmu->mem, 138 138 drm->ttm.type_vram, page, size, 139 139 &(struct gf100_mem_v0) { 140 140 .contig = contig, ··· 141 143 &mem->mem); 142 144 break; 143 145 case NVIF_CLASS_MEM_NV50: 144 - ret = nvif_mem_ctor_type(mmu, "ttmVram", cli->mem->oclass, 146 + ret = nvif_mem_ctor_type(mmu, "ttmVram", mmu->mem, 145 147 drm->ttm.type_vram, page, size, 146 148 &(struct nv50_mem_v0) { 147 149 .bankswz = mmu->kind[mem->kind] == 2, ··· 154 156 WARN_ON(1); 155 157 break; 156 158 } 157 - mutex_unlock(&drm->master.lock); 159 + mutex_unlock(&drm->client_mutex); 158 160 159 161 reg->start = mem->mem.addr >> PAGE_SHIFT; 160 162 return ret; ··· 171 173 } 172 174 173 175 int 174 - nouveau_mem_new(struct nouveau_cli *cli, u8 kind, u8 comp, 176 + nouveau_mem_new(struct nouveau_drm *drm, u8 kind, u8 comp, 175 177 struct ttm_resource **res) 176 178 { 177 179 struct nouveau_mem *mem; ··· 179 181 if (!(mem = kzalloc(sizeof(*mem), GFP_KERNEL))) 180 182 return -ENOMEM; 181 183 182 - mem->cli = cli; 184 + mem->drm = drm; 183 185 mem->kind = kind; 184 186 mem->comp = comp; 185 187
+2 -2
drivers/gpu/drm/nouveau/nouveau_mem.h
··· 8 8 9 9 struct nouveau_mem { 10 10 struct ttm_resource base; 11 - struct nouveau_cli *cli; 11 + struct nouveau_drm *drm; 12 12 u8 kind; 13 13 u8 comp; 14 14 struct nvif_mem mem; ··· 21 21 return container_of(reg, struct nouveau_mem, base); 22 22 } 23 23 24 - int nouveau_mem_new(struct nouveau_cli *, u8 kind, u8 comp, 24 + int nouveau_mem_new(struct nouveau_drm *, u8 kind, u8 comp, 25 25 struct ttm_resource **); 26 26 void nouveau_mem_del(struct ttm_resource_manager *man, 27 27 struct ttm_resource *);
-2
drivers/gpu/drm/nouveau/nouveau_nvif.c
··· 35 35 #include <nvif/ioctl.h> 36 36 37 37 #include "nouveau_drv.h" 38 - #include "nouveau_usif.h" 39 38 40 39 static void 41 40 nvkm_client_unmap(void *priv, void __iomem *ptr, u32 size) ··· 97 98 .ioctl = nvkm_client_ioctl, 98 99 .map = nvkm_client_map, 99 100 .unmap = nvkm_client_unmap, 100 - .keep = false, 101 101 };
+3 -9
drivers/gpu/drm/nouveau/nouveau_platform.c
··· 26 26 const struct nvkm_device_tegra_func *func; 27 27 struct nvkm_device *device = NULL; 28 28 struct drm_device *drm; 29 - int ret; 30 29 31 30 func = of_device_get_match_data(&pdev->dev); 32 31 ··· 33 34 if (IS_ERR(drm)) 34 35 return PTR_ERR(drm); 35 36 36 - ret = drm_dev_register(drm, 0); 37 - if (ret < 0) { 38 - drm_dev_put(drm); 39 - return ret; 40 - } 41 - 42 37 return 0; 43 38 } 44 39 45 40 static void nouveau_platform_remove(struct platform_device *pdev) 46 41 { 47 - struct drm_device *dev = platform_get_drvdata(pdev); 48 - nouveau_drm_device_remove(dev); 42 + struct nouveau_drm *drm = platform_get_drvdata(pdev); 43 + 44 + nouveau_drm_device_remove(drm); 49 45 } 50 46 51 47 #if IS_ENABLED(CONFIG_OF)
+3 -3
drivers/gpu/drm/nouveau/nouveau_sched.c
··· 379 379 else 380 380 NV_PRINTK(warn, job->cli, "Generic job timeout.\n"); 381 381 382 - drm_sched_start(sched, true); 382 + drm_sched_start(sched); 383 383 384 384 return stat; 385 385 } ··· 404 404 { 405 405 struct drm_gpu_scheduler *drm_sched = &sched->base; 406 406 struct drm_sched_entity *entity = &sched->entity; 407 - long job_hang_limit = msecs_to_jiffies(NOUVEAU_SCHED_JOB_TIMEOUT_MS); 407 + const long timeout = msecs_to_jiffies(NOUVEAU_SCHED_JOB_TIMEOUT_MS); 408 408 int ret; 409 409 410 410 if (!wq) { ··· 418 418 419 419 ret = drm_sched_init(drm_sched, &nouveau_sched_ops, wq, 420 420 NOUVEAU_SCHED_PRIORITY_COUNT, 421 - credit_limit, 0, job_hang_limit, 421 + credit_limit, 0, timeout, 422 422 NULL, NULL, "nouveau_sched", drm->dev->dev); 423 423 if (ret) 424 424 goto fail_wq;
+1 -1
drivers/gpu/drm/nouveau/nouveau_sgdma.c
··· 43 43 return ret; 44 44 45 45 if (drm->client.device.info.family < NV_DEVICE_INFO_V0_TESLA) { 46 - ret = nouveau_mem_map(mem, &mem->cli->vmm.vmm, &mem->vma[0]); 46 + ret = nouveau_mem_map(mem, &drm->client.vmm.vmm, &mem->vma[0]); 47 47 if (ret) { 48 48 nouveau_mem_fini(mem); 49 49 return ret;
+6 -6
drivers/gpu/drm/nouveau/nouveau_ttm.c
··· 73 73 if (drm->client.device.info.ram_size == 0) 74 74 return -ENOMEM; 75 75 76 - ret = nouveau_mem_new(&drm->master, nvbo->kind, nvbo->comp, res); 76 + ret = nouveau_mem_new(drm, nvbo->kind, nvbo->comp, res); 77 77 if (ret) 78 78 return ret; 79 79 ··· 105 105 struct nouveau_drm *drm = nouveau_bdev(bo->bdev); 106 106 int ret; 107 107 108 - ret = nouveau_mem_new(&drm->master, nvbo->kind, nvbo->comp, res); 108 + ret = nouveau_mem_new(drm, nvbo->kind, nvbo->comp, res); 109 109 if (ret) 110 110 return ret; 111 111 ··· 132 132 struct nouveau_mem *mem; 133 133 int ret; 134 134 135 - ret = nouveau_mem_new(&drm->master, nvbo->kind, nvbo->comp, res); 135 + ret = nouveau_mem_new(drm, nvbo->kind, nvbo->comp, res); 136 136 if (ret) 137 137 return ret; 138 138 139 139 mem = nouveau_mem(*res); 140 140 ttm_resource_init(bo, place, *res); 141 - ret = nvif_vmm_get(&mem->cli->vmm.vmm, PTES, false, 12, 0, 141 + ret = nvif_vmm_get(&drm->client.vmm.vmm, PTES, false, 12, 0, 142 142 (long)(*res)->size, &mem->vma[0]); 143 143 if (ret) { 144 144 nouveau_mem_del(man, *res); ··· 261 261 int 262 262 nouveau_ttm_init(struct nouveau_drm *drm) 263 263 { 264 - struct nvkm_device *device = nvxx_device(&drm->client.device); 264 + struct nvkm_device *device = nvxx_device(drm); 265 265 struct nvkm_pci *pci = device->pci; 266 266 struct nvif_mmu *mmu = &drm->client.mmu; 267 267 struct drm_device *dev = drm->dev; ··· 348 348 void 349 349 nouveau_ttm_fini(struct nouveau_drm *drm) 350 350 { 351 - struct nvkm_device *device = nvxx_device(&drm->client.device); 351 + struct nvkm_device *device = nvxx_device(drm); 352 352 353 353 nouveau_ttm_fini_vram(drm); 354 354 nouveau_ttm_fini_gtt(drm);
-194
drivers/gpu/drm/nouveau/nouveau_usif.c
··· 1 - /* 2 - * Copyright 2014 Red Hat Inc. 3 - * 4 - * Permission is hereby granted, free of charge, to any person obtaining a 5 - * copy of this software and associated documentation files (the "Software"), 6 - * to deal in the Software without restriction, including without limitation 7 - * the rights to use, copy, modify, merge, publish, distribute, sublicense, 8 - * and/or sell copies of the Software, and to permit persons to whom the 9 - * Software is furnished to do so, subject to the following conditions: 10 - * 11 - * The above copyright notice and this permission notice shall be included in 12 - * all copies or substantial portions of the Software. 13 - * 14 - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 15 - * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 16 - * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL 17 - * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR 18 - * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, 19 - * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR 20 - * OTHER DEALINGS IN THE SOFTWARE. 
21 - * 22 - * Authors: Ben Skeggs <bskeggs@redhat.com> 23 - */ 24 - 25 - #include "nouveau_drv.h" 26 - #include "nouveau_usif.h" 27 - #include "nouveau_abi16.h" 28 - 29 - #include <nvif/unpack.h> 30 - #include <nvif/client.h> 31 - #include <nvif/ioctl.h> 32 - 33 - #include <nvif/class.h> 34 - #include <nvif/cl0080.h> 35 - 36 - struct usif_object { 37 - struct list_head head; 38 - u8 route; 39 - u64 token; 40 - }; 41 - 42 - static void 43 - usif_object_dtor(struct usif_object *object) 44 - { 45 - list_del(&object->head); 46 - kfree(object); 47 - } 48 - 49 - static int 50 - usif_object_new(struct drm_file *f, void *data, u32 size, void *argv, u32 argc, bool parent_abi16) 51 - { 52 - struct nouveau_cli *cli = nouveau_cli(f); 53 - struct nvif_client *client = &cli->base; 54 - union { 55 - struct nvif_ioctl_new_v0 v0; 56 - } *args = data; 57 - struct usif_object *object; 58 - int ret = -ENOSYS; 59 - 60 - if ((ret = nvif_unpack(ret, &data, &size, args->v0, 0, 0, true))) 61 - return ret; 62 - 63 - switch (args->v0.oclass) { 64 - case NV_DMA_FROM_MEMORY: 65 - case NV_DMA_TO_MEMORY: 66 - case NV_DMA_IN_MEMORY: 67 - return -EINVAL; 68 - case NV_DEVICE: { 69 - union { 70 - struct nv_device_v0 v0; 71 - } *args = data; 72 - 73 - if ((ret = nvif_unpack(ret, &data, &size, args->v0, 0, 0, false))) 74 - return ret; 75 - 76 - args->v0.priv = false; 77 - break; 78 - } 79 - default: 80 - if (!parent_abi16) 81 - return -EINVAL; 82 - break; 83 - } 84 - 85 - if (!(object = kmalloc(sizeof(*object), GFP_KERNEL))) 86 - return -ENOMEM; 87 - list_add(&object->head, &cli->objects); 88 - 89 - object->route = args->v0.route; 90 - object->token = args->v0.token; 91 - args->v0.route = NVDRM_OBJECT_USIF; 92 - args->v0.token = (unsigned long)(void *)object; 93 - ret = nvif_client_ioctl(client, argv, argc); 94 - if (ret) { 95 - usif_object_dtor(object); 96 - return ret; 97 - } 98 - 99 - args->v0.token = object->token; 100 - args->v0.route = object->route; 101 - return 0; 102 - } 103 - 104 - int 105 - 
usif_ioctl(struct drm_file *filp, void __user *user, u32 argc) 106 - { 107 - struct nouveau_cli *cli = nouveau_cli(filp); 108 - struct nvif_client *client = &cli->base; 109 - void *data = kmalloc(argc, GFP_KERNEL); 110 - u32 size = argc; 111 - union { 112 - struct nvif_ioctl_v0 v0; 113 - } *argv = data; 114 - struct usif_object *object; 115 - bool abi16 = false; 116 - u8 owner; 117 - int ret; 118 - 119 - if (ret = -ENOMEM, !argv) 120 - goto done; 121 - if (ret = -EFAULT, copy_from_user(argv, user, size)) 122 - goto done; 123 - 124 - if (!(ret = nvif_unpack(-ENOSYS, &data, &size, argv->v0, 0, 0, true))) { 125 - /* block access to objects not created via this interface */ 126 - owner = argv->v0.owner; 127 - if (argv->v0.object == 0ULL && 128 - argv->v0.type != NVIF_IOCTL_V0_DEL) 129 - argv->v0.owner = NVDRM_OBJECT_ANY; /* except client */ 130 - else 131 - argv->v0.owner = NVDRM_OBJECT_USIF; 132 - } else 133 - goto done; 134 - 135 - /* USIF slightly abuses some return-only ioctl members in order 136 - * to provide interoperability with the older ABI16 objects 137 - */ 138 - mutex_lock(&cli->mutex); 139 - if (argv->v0.route) { 140 - if (ret = -EINVAL, argv->v0.route == 0xff) 141 - ret = nouveau_abi16_usif(filp, argv, argc); 142 - if (ret) { 143 - mutex_unlock(&cli->mutex); 144 - goto done; 145 - } 146 - 147 - abi16 = true; 148 - } 149 - 150 - switch (argv->v0.type) { 151 - case NVIF_IOCTL_V0_NEW: 152 - ret = usif_object_new(filp, data, size, argv, argc, abi16); 153 - break; 154 - default: 155 - ret = nvif_client_ioctl(client, argv, argc); 156 - break; 157 - } 158 - if (argv->v0.route == NVDRM_OBJECT_USIF) { 159 - object = (void *)(unsigned long)argv->v0.token; 160 - argv->v0.route = object->route; 161 - argv->v0.token = object->token; 162 - if (ret == 0 && argv->v0.type == NVIF_IOCTL_V0_DEL) { 163 - list_del(&object->head); 164 - kfree(object); 165 - } 166 - } else { 167 - argv->v0.route = NVIF_IOCTL_V0_ROUTE_HIDDEN; 168 - argv->v0.token = 0; 169 - } 170 - 
argv->v0.owner = owner; 171 - mutex_unlock(&cli->mutex); 172 - 173 - if (copy_to_user(user, argv, argc)) 174 - ret = -EFAULT; 175 - done: 176 - kfree(argv); 177 - return ret; 178 - } 179 - 180 - void 181 - usif_client_fini(struct nouveau_cli *cli) 182 - { 183 - struct usif_object *object, *otemp; 184 - 185 - list_for_each_entry_safe(object, otemp, &cli->objects, head) { 186 - usif_object_dtor(object); 187 - } 188 - } 189 - 190 - void 191 - usif_client_init(struct nouveau_cli *cli) 192 - { 193 - INIT_LIST_HEAD(&cli->objects); 194 - }
-10
drivers/gpu/drm/nouveau/nouveau_usif.h
··· 1 - /* SPDX-License-Identifier: MIT */ 2 - #ifndef __NOUVEAU_USIF_H__ 3 - #define __NOUVEAU_USIF_H__ 4 - 5 - void usif_client_init(struct nouveau_cli *); 6 - void usif_client_fini(struct nouveau_cli *); 7 - int usif_ioctl(struct drm_file *, void __user *, u32); 8 - int usif_notify(const void *, u32, const void *, u32); 9 - 10 - #endif
+8 -6
drivers/gpu/drm/nouveau/nouveau_vga.c
··· 11 11 static unsigned int 12 12 nouveau_vga_set_decode(struct pci_dev *pdev, bool state) 13 13 { 14 - struct nouveau_drm *drm = nouveau_drm(pci_get_drvdata(pdev)); 14 + struct nouveau_drm *drm = pci_get_drvdata(pdev); 15 15 struct nvif_object *device = &drm->client.device.object; 16 16 17 17 if (drm->client.device.info.family == NV_DEVICE_INFO_V0_CURIE && ··· 34 34 nouveau_switcheroo_set_state(struct pci_dev *pdev, 35 35 enum vga_switcheroo_state state) 36 36 { 37 - struct drm_device *dev = pci_get_drvdata(pdev); 37 + struct nouveau_drm *drm = pci_get_drvdata(pdev); 38 + struct drm_device *dev = drm->dev; 38 39 39 40 if ((nouveau_is_optimus() || nouveau_is_v1_dsm()) && state == VGA_SWITCHEROO_OFF) 40 41 return; ··· 57 56 static void 58 57 nouveau_switcheroo_reprobe(struct pci_dev *pdev) 59 58 { 60 - struct drm_device *dev = pci_get_drvdata(pdev); 61 - drm_fb_helper_output_poll_changed(dev); 59 + struct nouveau_drm *drm = pci_get_drvdata(pdev); 60 + 61 + drm_fb_helper_output_poll_changed(drm->dev); 62 62 } 63 63 64 64 static bool 65 65 nouveau_switcheroo_can_switch(struct pci_dev *pdev) 66 66 { 67 - struct drm_device *dev = pci_get_drvdata(pdev); 67 + struct nouveau_drm *drm = pci_get_drvdata(pdev); 68 68 69 69 /* 70 70 * FIXME: open_count is protected by drm_global_mutex but that would lead to 71 71 * locking inversion with the driver load path. And the access here is 72 72 * completely racy anyway. So don't bother with locking for now. 73 73 */ 74 - return atomic_read(&dev->open_count) == 0; 74 + return atomic_read(&drm->dev->open_count) == 0; 75 75 } 76 76 77 77 static const struct vga_switcheroo_client_ops
+1 -1
drivers/gpu/drm/nouveau/nv04_fence.c
··· 39 39 static int 40 40 nv04_fence_emit(struct nouveau_fence *fence) 41 41 { 42 - struct nvif_push *push = unrcu_pointer(fence->channel)->chan.push; 42 + struct nvif_push *push = &unrcu_pointer(fence->channel)->chan.push; 43 43 int ret = PUSH_WAIT(push, 2); 44 44 if (ret == 0) { 45 45 PUSH_NVSQ(push, NV_SW, 0x0150, fence->base.seqno);
+2 -2
drivers/gpu/drm/nouveau/nv10_fence.c
··· 32 32 int 33 33 nv10_fence_emit(struct nouveau_fence *fence) 34 34 { 35 - struct nvif_push *push = fence->channel->chan.push; 35 + struct nvif_push *push = &fence->channel->chan.push; 36 36 int ret = PUSH_WAIT(push, 2); 37 37 if (ret == 0) { 38 38 PUSH_MTHD(push, NV06E, SET_REFERENCE, fence->base.seqno); ··· 88 88 nouveau_bo_unmap(priv->bo); 89 89 if (priv->bo) 90 90 nouveau_bo_unpin(priv->bo); 91 - nouveau_bo_ref(NULL, &priv->bo); 91 + nouveau_bo_fini(priv->bo); 92 92 drm->fence = NULL; 93 93 kfree(priv); 94 94 }
+6 -6
drivers/gpu/drm/nouveau/nv17_fence.c
··· 36 36 nv17_fence_sync(struct nouveau_fence *fence, 37 37 struct nouveau_channel *prev, struct nouveau_channel *chan) 38 38 { 39 - struct nouveau_cli *cli = (void *)prev->user.client; 40 - struct nv10_fence_priv *priv = chan->drm->fence; 39 + struct nouveau_cli *cli = prev->cli; 40 + struct nv10_fence_priv *priv = cli->drm->fence; 41 41 struct nv10_fence_chan *fctx = chan->fence; 42 - struct nvif_push *ppush = prev->chan.push; 43 - struct nvif_push *npush = chan->chan.push; 42 + struct nvif_push *ppush = &prev->chan.push; 43 + struct nvif_push *npush = &chan->chan.push; 44 44 u32 value; 45 45 int ret; 46 46 ··· 76 76 static int 77 77 nv17_fence_context_new(struct nouveau_channel *chan) 78 78 { 79 - struct nv10_fence_priv *priv = chan->drm->fence; 79 + struct nv10_fence_priv *priv = chan->cli->drm->fence; 80 80 struct ttm_resource *reg = priv->bo->bo.resource; 81 81 struct nv10_fence_chan *fctx; 82 82 u32 start = reg->start * PAGE_SIZE; ··· 141 141 nouveau_bo_unpin(priv->bo); 142 142 } 143 143 if (ret) 144 - nouveau_bo_ref(NULL, &priv->bo); 144 + nouveau_bo_fini(priv->bo); 145 145 } 146 146 147 147 if (ret) {
+2 -2
drivers/gpu/drm/nouveau/nv50_fence.c
··· 35 35 static int 36 36 nv50_fence_context_new(struct nouveau_channel *chan) 37 37 { 38 - struct nv10_fence_priv *priv = chan->drm->fence; 38 + struct nv10_fence_priv *priv = chan->cli->drm->fence; 39 39 struct nv10_fence_chan *fctx; 40 40 struct ttm_resource *reg = priv->bo->bo.resource; 41 41 u32 start = reg->start * PAGE_SIZE; ··· 92 92 nouveau_bo_unpin(priv->bo); 93 93 } 94 94 if (ret) 95 - nouveau_bo_ref(NULL, &priv->bo); 95 + nouveau_bo_fini(priv->bo); 96 96 } 97 97 98 98 if (ret) {
+8 -8
drivers/gpu/drm/nouveau/nv84_fence.c
··· 35 35 static int 36 36 nv84_fence_emit32(struct nouveau_channel *chan, u64 virtual, u32 sequence) 37 37 { 38 - struct nvif_push *push = chan->chan.push; 38 + struct nvif_push *push = &chan->chan.push; 39 39 int ret = PUSH_WAIT(push, 8); 40 40 if (ret == 0) { 41 41 PUSH_MTHD(push, NV826F, SET_CONTEXT_DMA_SEMAPHORE, chan->vram.handle); ··· 58 58 static int 59 59 nv84_fence_sync32(struct nouveau_channel *chan, u64 virtual, u32 sequence) 60 60 { 61 - struct nvif_push *push = chan->chan.push; 61 + struct nvif_push *push = &chan->chan.push; 62 62 int ret = PUSH_WAIT(push, 7); 63 63 if (ret == 0) { 64 64 PUSH_MTHD(push, NV826F, SET_CONTEXT_DMA_SEMAPHORE, chan->vram.handle); ··· 79 79 static inline u32 80 80 nv84_fence_chid(struct nouveau_channel *chan) 81 81 { 82 - return chan->drm->runl[chan->runlist].chan_id_base + chan->chid; 82 + return chan->cli->drm->runl[chan->runlist].chan_id_base + chan->chid; 83 83 } 84 84 85 85 static int ··· 105 105 static u32 106 106 nv84_fence_read(struct nouveau_channel *chan) 107 107 { 108 - struct nv84_fence_priv *priv = chan->drm->fence; 108 + struct nv84_fence_priv *priv = chan->cli->drm->fence; 109 109 return nouveau_bo_rd32(priv->bo, nv84_fence_chid(chan) * 16/4); 110 110 } 111 111 112 112 static void 113 113 nv84_fence_context_del(struct nouveau_channel *chan) 114 114 { 115 - struct nv84_fence_priv *priv = chan->drm->fence; 115 + struct nv84_fence_priv *priv = chan->cli->drm->fence; 116 116 struct nv84_fence_chan *fctx = chan->fence; 117 117 118 118 nouveau_bo_wr32(priv->bo, nv84_fence_chid(chan) * 16 / 4, fctx->base.sequence); ··· 127 127 int 128 128 nv84_fence_context_new(struct nouveau_channel *chan) 129 129 { 130 - struct nv84_fence_priv *priv = chan->drm->fence; 130 + struct nv84_fence_priv *priv = chan->cli->drm->fence; 131 131 struct nv84_fence_chan *fctx; 132 132 int ret; 133 133 ··· 188 188 nouveau_bo_unmap(priv->bo); 189 189 if (priv->bo) 190 190 nouveau_bo_unpin(priv->bo); 191 - nouveau_bo_ref(NULL, &priv->bo); 191 + 
nouveau_bo_fini(priv->bo); 192 192 drm->fence = NULL; 193 193 kfree(priv); 194 194 } ··· 232 232 nouveau_bo_unpin(priv->bo); 233 233 } 234 234 if (ret) 235 - nouveau_bo_ref(NULL, &priv->bo); 235 + nouveau_bo_fini(priv->bo); 236 236 } 237 237 238 238 if (ret)
+2 -2
drivers/gpu/drm/nouveau/nvc0_fence.c
··· 34 34 static int 35 35 nvc0_fence_emit32(struct nouveau_channel *chan, u64 virtual, u32 sequence) 36 36 { 37 - struct nvif_push *push = chan->chan.push; 37 + struct nvif_push *push = &chan->chan.push; 38 38 int ret = PUSH_WAIT(push, 6); 39 39 if (ret == 0) { 40 40 PUSH_MTHD(push, NV906F, SEMAPHOREA, ··· 57 57 static int 58 58 nvc0_fence_sync32(struct nouveau_channel *chan, u64 virtual, u32 sequence) 59 59 { 60 - struct nvif_push *push = chan->chan.push; 60 + struct nvif_push *push = &chan->chan.push; 61 61 int ret = PUSH_WAIT(push, 5); 62 62 if (ret == 0) { 63 63 PUSH_MTHD(push, NV906F, SEMAPHOREA,
+4 -28
drivers/gpu/drm/nouveau/nvif/client.c
··· 30 30 #include <nvif/if0000.h> 31 31 32 32 int 33 - nvif_client_ioctl(struct nvif_client *client, void *data, u32 size) 34 - { 35 - return client->driver->ioctl(client->object.priv, data, size, NULL); 36 - } 37 - 38 - int 39 33 nvif_client_suspend(struct nvif_client *client) 40 34 { 41 35 return client->driver->suspend(client->object.priv); ··· 45 51 nvif_client_dtor(struct nvif_client *client) 46 52 { 47 53 nvif_object_dtor(&client->object); 48 - if (client->driver) { 49 - if (client->driver->fini) 50 - client->driver->fini(client->object.priv); 51 - client->driver = NULL; 52 - } 54 + client->driver = NULL; 53 55 } 54 56 55 57 int 56 - nvif_client_ctor(struct nvif_client *parent, const char *name, u64 device, 57 - struct nvif_client *client) 58 + nvif_client_ctor(struct nvif_client *parent, const char *name, struct nvif_client *client) 58 59 { 59 - struct nvif_client_v0 args = { .device = device }; 60 - struct { 61 - struct nvif_ioctl_v0 ioctl; 62 - struct nvif_ioctl_nop_v0 nop; 63 - } nop = {}; 60 + struct nvif_client_v0 args = {}; 64 61 int ret; 65 62 66 63 strscpy_pad(args.name, name, sizeof(args.name)); ··· 64 79 65 80 client->object.client = client; 66 81 client->object.handle = ~0; 67 - client->route = NVIF_IOCTL_V0_ROUTE_NVIF; 68 82 client->driver = parent->driver; 69 - 70 - if (ret == 0) { 71 - ret = nvif_client_ioctl(client, &nop, sizeof(nop)); 72 - client->version = nop.nop.version; 73 - } 74 - 75 - if (ret) 76 - nvif_client_dtor(client); 77 - return ret; 83 + return 0; 78 84 }
+10 -5
drivers/gpu/drm/nouveau/nvif/device.c
···
  *
  * Authors: Ben Skeggs <bskeggs@redhat.com>
  */
-
 #include <nvif/device.h>
+#include <nvif/client.h>
 
 u64
 nvif_device_time(struct nvif_device *device)
···
 	return device->user.func->time(&device->user);
 }
 
+int
+nvif_device_map(struct nvif_device *device)
+{
+	return nvif_object_map(&device->object, NULL, 0);
+}
+
 void
 nvif_device_dtor(struct nvif_device *device)
 {
···
 }
 
 int
-nvif_device_ctor(struct nvif_object *parent, const char *name, u32 handle,
-		 s32 oclass, void *data, u32 size, struct nvif_device *device)
+nvif_device_ctor(struct nvif_client *client, const char *name, struct nvif_device *device)
 {
-	int ret = nvif_object_ctor(parent, name ? name : "nvifDevice", handle,
-				   oclass, data, size, &device->object);
+	int ret = nvif_object_ctor(&client->object, name ? name : "nvifDevice", 0,
+				   0x0080, NULL, 0, &device->object);
 	device->runlist = NULL;
 	device->user.func = NULL;
 	if (ret == 0) {
+7 -25
drivers/gpu/drm/nouveau/nvif/driver.c
···
 #include <nvif/driver.h>
 #include <nvif/client.h>
 
-static const struct nvif_driver *
-nvif_driver[] = {
-#ifdef __KERNEL__
-	&nvif_driver_nvkm,
-#else
-	&nvif_driver_drm,
-	&nvif_driver_lib,
-	&nvif_driver_null,
-#endif
-	NULL
-};
-
 int
 nvif_driver_init(const char *drv, const char *cfg, const char *dbg,
 		 const char *name, u64 device, struct nvif_client *client)
 {
-	int ret = -EINVAL, i;
+	int ret;
 
-	for (i = 0; (client->driver = nvif_driver[i]); i++) {
-		if (!drv || !strcmp(client->driver->name, drv)) {
-			ret = client->driver->init(name, device, cfg, dbg,
-						   &client->object.priv);
-			if (ret == 0)
-				break;
-			client->driver->fini(client->object.priv);
-		}
-	}
+	client->driver = &nvif_driver_nvkm;
 
-	if (ret == 0)
-		ret = nvif_client_ctor(client, name, device, client);
-	return ret;
+	ret = client->driver->init(name, device, cfg, dbg, &client->object.priv);
+	if (ret)
+		return ret;
+
+	return nvif_client_ctor(client, name, client);
 }
-40
drivers/gpu/drm/nouveau/nvif/object.c
···
 		args->v0.object = nvif_handle(object);
 	else
 		args->v0.object = 0;
-	args->v0.owner = NVIF_IOCTL_V0_OWNER_ANY;
 } else
 	return -ENOSYS;
···
 
 	kfree(args);
 	return ret;
-}
-
-u32
-nvif_object_rd(struct nvif_object *object, int size, u64 addr)
-{
-	struct {
-		struct nvif_ioctl_v0 ioctl;
-		struct nvif_ioctl_rd_v0 rd;
-	} args = {
-		.ioctl.type = NVIF_IOCTL_V0_RD,
-		.rd.size = size,
-		.rd.addr = addr,
-	};
-	int ret = nvif_object_ioctl(object, &args, sizeof(args), NULL);
-	if (ret) {
-		/*XXX: warn? */
-		return 0;
-	}
-	return args.rd.data;
-}
-
-void
-nvif_object_wr(struct nvif_object *object, int size, u64 addr, u32 data)
-{
-	struct {
-		struct nvif_ioctl_v0 ioctl;
-		struct nvif_ioctl_wr_v0 wr;
-	} args = {
-		.ioctl.type = NVIF_IOCTL_V0_WR,
-		.wr.size = size,
-		.wr.addr = addr,
-		.wr.data = data,
-	};
-	int ret = nvif_object_ioctl(object, &args, sizeof(args), NULL);
-	if (ret) {
-		/*XXX: warn? */
-	}
 }
 
 int
···
 	args->ioctl.version = 0;
 	args->ioctl.type = NVIF_IOCTL_V0_NEW;
 	args->new.version = 0;
-	args->new.route = parent->client->route;
-	args->new.token = nvif_handle(object);
 	args->new.object = nvif_handle(object);
 	args->new.handle = handle;
 	args->new.oclass = oclass;
+1 -63
drivers/gpu/drm/nouveau/nvkm/core/client.c
···
 
 	if (!(ret = nvif_unpack(ret, &argv, &argc, args->v0, 0, 0, false))){
 		args->v0.name[sizeof(args->v0.name) - 1] = 0;
-		ret = nvkm_client_new(args->v0.name, args->v0.device, NULL,
+		ret = nvkm_client_new(args->v0.name, oclass->client->device, NULL,
 				      NULL, oclass->client->event, &client);
 		if (ret)
 			return ret;
···
 
 	client->object.client = oclass->client;
 	client->object.handle = oclass->handle;
-	client->object.route = oclass->route;
-	client->object.token = oclass->token;
 	client->object.object = oclass->object;
 	client->debug = oclass->client->debug;
 	*pobject = &client->object;
···
 	.maxver = 0,
 	.ctor = nvkm_uclient_new,
 };
-
-static const struct nvkm_object_func nvkm_client;
-struct nvkm_client *
-nvkm_client_search(struct nvkm_client *client, u64 handle)
-{
-	struct nvkm_object *object;
-
-	object = nvkm_object_search(client, handle, &nvkm_client);
-	if (IS_ERR(object))
-		return (void *)object;
-
-	return nvkm_client(object);
-}
-
-static int
-nvkm_client_mthd_devlist(struct nvkm_client *client, void *data, u32 size)
-{
-	union {
-		struct nvif_client_devlist_v0 v0;
-	} *args = data;
-	int ret = -ENOSYS;
-
-	nvif_ioctl(&client->object, "client devlist size %d\n", size);
-	if (!(ret = nvif_unpack(ret, &data, &size, args->v0, 0, 0, true))) {
-		nvif_ioctl(&client->object, "client devlist vers %d count %d\n",
-			   args->v0.version, args->v0.count);
-		if (size == sizeof(args->v0.device[0]) * args->v0.count) {
-			ret = nvkm_device_list(args->v0.device, args->v0.count);
-			if (ret >= 0) {
-				args->v0.count = ret;
-				ret = 0;
-			}
-		} else {
-			ret = -EINVAL;
-		}
-	}
-
-	return ret;
-}
-
-static int
-nvkm_client_mthd(struct nvkm_object *object, u32 mthd, void *data, u32 size)
-{
-	struct nvkm_client *client = nvkm_client(object);
-
-	switch (mthd) {
-	case NVIF_CLIENT_V0_DEVLIST:
-		return nvkm_client_mthd_devlist(client, data, size);
-	default:
-		break;
-	}
-	return -EINVAL;
-}
 
 static int
 nvkm_client_child_new(const struct nvkm_oclass *oclass,
···
 	return 0;
 }
 
-static int
-nvkm_client_fini(struct nvkm_object *object, bool suspend)
-{
-	return 0;
-}
-
 static void *
 nvkm_client_dtor(struct nvkm_object *object)
 {
···
 static const struct nvkm_object_func
 nvkm_client = {
 	.dtor = nvkm_client_dtor,
-	.fini = nvkm_client_fini,
-	.mthd = nvkm_client_mthd,
 	.sclass = nvkm_client_child_get,
 };
+7 -84
drivers/gpu/drm/nouveau/nvkm/core/ioctl.c
···
 nvkm_ioctl_nop(struct nvkm_client *client,
 	       struct nvkm_object *object, void *data, u32 size)
 {
-	union {
-		struct nvif_ioctl_nop_v0 v0;
-	} *args = data;
-	int ret = -ENOSYS;
-
-	nvif_ioctl(object, "nop size %d\n", size);
-	if (!(ret = nvif_unpack(ret, &data, &size, args->v0, 0, 0, false))) {
-		nvif_ioctl(object, "nop vers %lld\n", args->v0.version);
-		args->v0.version = NVIF_VERSION_LATEST;
-	}
-
-	return ret;
+	return -ENOSYS;
 }
 
 #include <nvif/class.h>
···
 
 	nvif_ioctl(parent, "new size %d\n", size);
 	if (!(ret = nvif_unpack(ret, &data, &size, args->v0, 0, 0, true))) {
-		nvif_ioctl(parent, "new vers %d handle %08x class %08x "
-				   "route %02x token %llx object %016llx\n",
+		nvif_ioctl(parent, "new vers %d handle %08x class %08x object %016llx\n",
 			   args->v0.version, args->v0.handle, args->v0.oclass,
-			   args->v0.route, args->v0.token, args->v0.object);
+			   args->v0.object);
 	} else
 		return ret;
···
 	do {
 		memset(&oclass, 0x00, sizeof(oclass));
 		oclass.handle = args->v0.handle;
-		oclass.route = args->v0.route;
-		oclass.token = args->v0.token;
 		oclass.object = args->v0.object;
 		oclass.client = client;
 		oclass.parent = parent;
···
 nvkm_ioctl_rd(struct nvkm_client *client,
 	      struct nvkm_object *object, void *data, u32 size)
 {
-	union {
-		struct nvif_ioctl_rd_v0 v0;
-	} *args = data;
-	union {
-		u8  b08;
-		u16 b16;
-		u32 b32;
-	} v;
-	int ret = -ENOSYS;
-
-	nvif_ioctl(object, "rd size %d\n", size);
-	if (!(ret = nvif_unpack(ret, &data, &size, args->v0, 0, 0, false))) {
-		nvif_ioctl(object, "rd vers %d size %d addr %016llx\n",
-			   args->v0.version, args->v0.size, args->v0.addr);
-		switch (args->v0.size) {
-		case 1:
-			ret = nvkm_object_rd08(object, args->v0.addr, &v.b08);
-			args->v0.data = v.b08;
-			break;
-		case 2:
-			ret = nvkm_object_rd16(object, args->v0.addr, &v.b16);
-			args->v0.data = v.b16;
-			break;
-		case 4:
-			ret = nvkm_object_rd32(object, args->v0.addr, &v.b32);
-			args->v0.data = v.b32;
-			break;
-		default:
-			ret = -EINVAL;
-			break;
-		}
-	}
-
-	return ret;
+	return -ENOSYS;
 }
 
 static int
 nvkm_ioctl_wr(struct nvkm_client *client,
 	      struct nvkm_object *object, void *data, u32 size)
 {
-	union {
-		struct nvif_ioctl_wr_v0 v0;
-	} *args = data;
-	int ret = -ENOSYS;
-
-	nvif_ioctl(object, "wr size %d\n", size);
-	if (!(ret = nvif_unpack(ret, &data, &size, args->v0, 0, 0, false))) {
-		nvif_ioctl(object,
-			   "wr vers %d size %d addr %016llx data %08x\n",
-			   args->v0.version, args->v0.size, args->v0.addr,
-			   args->v0.data);
-	} else
-		return ret;
-
-	switch (args->v0.size) {
-	case 1: return nvkm_object_wr08(object, args->v0.addr, args->v0.data);
-	case 2: return nvkm_object_wr16(object, args->v0.addr, args->v0.data);
-	case 4: return nvkm_object_wr32(object, args->v0.addr, args->v0.data);
-	default:
-		break;
-	}
-
-	return -EINVAL;
+	return -ENOSYS;
 }
 
 static int
···
 
 static int
 nvkm_ioctl_path(struct nvkm_client *client, u64 handle, u32 type,
-		void *data, u32 size, u8 owner, u8 *route, u64 *token)
+		void *data, u32 size)
 {
 	struct nvkm_object *object;
 	int ret;
···
 		nvif_ioctl(&client->object, "object not found\n");
 		return PTR_ERR(object);
 	}
-
-	if (owner != NVIF_IOCTL_V0_OWNER_ANY && owner != object->route) {
-		nvif_ioctl(&client->object, "route != owner\n");
-		return -EACCES;
-	}
-	*route = object->route;
-	*token = object->token;
 
 	if (ret = -EINVAL, type < ARRAY_SIZE(nvkm_ioctl_v0)) {
 		if (nvkm_ioctl_v0[type].version == 0)
···
 			   args->v0.version, args->v0.type, args->v0.object,
 			   args->v0.owner);
 		ret = nvkm_ioctl_path(client, args->v0.object, args->v0.type,
-				      data, size, args->v0.owner,
-				      &args->v0.route, &args->v0.token);
+				      data, size);
 	}
 
 	if (ret != 1) {
-50
drivers/gpu/drm/nouveau/nvkm/core/object.c
···
 }
 
 int
-nvkm_object_rd08(struct nvkm_object *object, u64 addr, u8 *data)
-{
-	if (likely(object->func->rd08))
-		return object->func->rd08(object, addr, data);
-	return -ENODEV;
-}
-
-int
-nvkm_object_rd16(struct nvkm_object *object, u64 addr, u16 *data)
-{
-	if (likely(object->func->rd16))
-		return object->func->rd16(object, addr, data);
-	return -ENODEV;
-}
-
-int
-nvkm_object_rd32(struct nvkm_object *object, u64 addr, u32 *data)
-{
-	if (likely(object->func->rd32))
-		return object->func->rd32(object, addr, data);
-	return -ENODEV;
-}
-
-int
-nvkm_object_wr08(struct nvkm_object *object, u64 addr, u8 data)
-{
-	if (likely(object->func->wr08))
-		return object->func->wr08(object, addr, data);
-	return -ENODEV;
-}
-
-int
-nvkm_object_wr16(struct nvkm_object *object, u64 addr, u16 data)
-{
-	if (likely(object->func->wr16))
-		return object->func->wr16(object, addr, data);
-	return -ENODEV;
-}
-
-int
-nvkm_object_wr32(struct nvkm_object *object, u64 addr, u32 data)
-{
-	if (likely(object->func->wr32))
-		return object->func->wr32(object, addr, data);
-	return -ENODEV;
-}
-
-int
 nvkm_object_bind(struct nvkm_object *object, struct nvkm_gpuobj *gpuobj,
 		 int align, struct nvkm_gpuobj **pgpuobj)
 {
···
 	object->engine = nvkm_engine_ref(oclass->engine);
 	object->oclass = oclass->base.oclass;
 	object->handle = oclass->handle;
-	object->route = oclass->route;
-	object->token = oclass->token;
 	object->object = oclass->object;
 	INIT_LIST_HEAD(&object->head);
 	INIT_LIST_HEAD(&object->tree);
-42
drivers/gpu/drm/nouveau/nvkm/core/oproxy.c
···
 }
 
 static int
-nvkm_oproxy_rd08(struct nvkm_object *object, u64 addr, u8 *data)
-{
-	return nvkm_object_rd08(nvkm_oproxy(object)->object, addr, data);
-}
-
-static int
-nvkm_oproxy_rd16(struct nvkm_object *object, u64 addr, u16 *data)
-{
-	return nvkm_object_rd16(nvkm_oproxy(object)->object, addr, data);
-}
-
-static int
-nvkm_oproxy_rd32(struct nvkm_object *object, u64 addr, u32 *data)
-{
-	return nvkm_object_rd32(nvkm_oproxy(object)->object, addr, data);
-}
-
-static int
-nvkm_oproxy_wr08(struct nvkm_object *object, u64 addr, u8 data)
-{
-	return nvkm_object_wr08(nvkm_oproxy(object)->object, addr, data);
-}
-
-static int
-nvkm_oproxy_wr16(struct nvkm_object *object, u64 addr, u16 data)
-{
-	return nvkm_object_wr16(nvkm_oproxy(object)->object, addr, data);
-}
-
-static int
-nvkm_oproxy_wr32(struct nvkm_object *object, u64 addr, u32 data)
-{
-	return nvkm_object_wr32(nvkm_oproxy(object)->object, addr, data);
-}
-
-static int
 nvkm_oproxy_bind(struct nvkm_object *object, struct nvkm_gpuobj *parent,
 		 int align, struct nvkm_gpuobj **pgpuobj)
 {
···
 	.ntfy = nvkm_oproxy_ntfy,
 	.map = nvkm_oproxy_map,
 	.unmap = nvkm_oproxy_unmap,
-	.rd08 = nvkm_oproxy_rd08,
-	.rd16 = nvkm_oproxy_rd16,
-	.rd32 = nvkm_oproxy_rd32,
-	.wr08 = nvkm_oproxy_wr08,
-	.wr16 = nvkm_oproxy_wr16,
-	.wr32 = nvkm_oproxy_wr32,
 	.bind = nvkm_oproxy_bind,
 	.sclass = nvkm_oproxy_sclass,
 	.uevent = nvkm_oproxy_uevent,
+2 -2
drivers/gpu/drm/nouveau/nvkm/core/uevent.c
···
 	struct nvkm_client *client = uevent->object.client;
 
 	if (uevent->func)
-		return uevent->func(uevent->parent, uevent->object.token, bits);
+		return uevent->func(uevent->parent, uevent->object.object, bits);
 
-	return client->event(uevent->object.token, NULL, 0);
+	return client->event(uevent->object.object, NULL, 0);
 }
 
 int
-1
drivers/gpu/drm/nouveau/nvkm/engine/Kbuild
···
 include $(src)/nvkm/engine/nvdec/Kbuild
 include $(src)/nvkm/engine/nvjpg/Kbuild
 include $(src)/nvkm/engine/ofa/Kbuild
-include $(src)/nvkm/engine/pm/Kbuild
 include $(src)/nvkm/engine/sec/Kbuild
 include $(src)/nvkm/engine/sec2/Kbuild
 include $(src)/nvkm/engine/sw/Kbuild
+206 -275
drivers/gpu/drm/nouveau/nvkm/engine/device/base.c
···
 	return device;
 }
 
-int
-nvkm_device_list(u64 *name, int size)
-{
-	struct nvkm_device *device;
-	int nr = 0;
-	mutex_lock(&nv_devices_mutex);
-	list_for_each_entry(device, &nv_devices, head) {
-		if (nr++ < size)
-			name[nr - 1] = device->handle;
-	}
-	mutex_unlock(&nv_devices_mutex);
-	return nr;
-}
-
-static const struct nvkm_device_chip
-null_chipset = {
-	.name = "NULL",
-	.bios = { 0x00000001, nvkm_bios_new },
-};
-
 static const struct nvkm_device_chip
 nv4_chipset = {
 	.name = "NV04",
···
 	.fifo = { 0x00000001, nv40_fifo_new },
 	.gr = { 0x00000001, nv40_gr_new },
 	.mpeg = { 0x00000001, nv40_mpeg_new },
-	.pm = { 0x00000001, nv40_pm_new },
 	.sw = { 0x00000001, nv10_sw_new },
 };
···
 	.fifo = { 0x00000001, nv40_fifo_new },
 	.gr = { 0x00000001, nv40_gr_new },
 	.mpeg = { 0x00000001, nv40_mpeg_new },
-	.pm = { 0x00000001, nv40_pm_new },
 	.sw = { 0x00000001, nv10_sw_new },
 };
···
 	.fifo = { 0x00000001, nv40_fifo_new },
 	.gr = { 0x00000001, nv40_gr_new },
 	.mpeg = { 0x00000001, nv40_mpeg_new },
-	.pm = { 0x00000001, nv40_pm_new },
 	.sw = { 0x00000001, nv10_sw_new },
 };
···
 	.fifo = { 0x00000001, nv40_fifo_new },
 	.gr = { 0x00000001, nv40_gr_new },
 	.mpeg = { 0x00000001, nv40_mpeg_new },
-	.pm = { 0x00000001, nv40_pm_new },
 	.sw = { 0x00000001, nv10_sw_new },
 };
···
 	.fifo = { 0x00000001, nv40_fifo_new },
 	.gr = { 0x00000001, nv44_gr_new },
 	.mpeg = { 0x00000001, nv44_mpeg_new },
-	.pm = { 0x00000001, nv40_pm_new },
 	.sw = { 0x00000001, nv10_sw_new },
 };
···
 	.fifo = { 0x00000001, nv40_fifo_new },
 	.gr = { 0x00000001, nv40_gr_new },
 	.mpeg = { 0x00000001, nv44_mpeg_new },
-	.pm = { 0x00000001, nv40_pm_new },
 	.sw = { 0x00000001, nv10_sw_new },
 };
···
 	.fifo = { 0x00000001, nv40_fifo_new },
 	.gr = { 0x00000001, nv44_gr_new },
 	.mpeg = { 0x00000001, nv44_mpeg_new },
-	.pm = { 0x00000001, nv40_pm_new },
 	.sw = { 0x00000001, nv10_sw_new },
 };
···
 	.fifo = { 0x00000001, nv40_fifo_new },
 	.gr = { 0x00000001, nv40_gr_new },
 	.mpeg = { 0x00000001, nv44_mpeg_new },
-	.pm = { 0x00000001, nv40_pm_new },
 	.sw = { 0x00000001, nv10_sw_new },
 };
···
 	.fifo = { 0x00000001, nv40_fifo_new },
 	.gr = { 0x00000001, nv40_gr_new },
 	.mpeg = { 0x00000001, nv44_mpeg_new },
-	.pm = { 0x00000001, nv40_pm_new },
 	.sw = { 0x00000001, nv10_sw_new },
 };
···
 	.fifo = { 0x00000001, nv40_fifo_new },
 	.gr = { 0x00000001, nv44_gr_new },
 	.mpeg = { 0x00000001, nv44_mpeg_new },
-	.pm = { 0x00000001, nv40_pm_new },
 	.sw = { 0x00000001, nv10_sw_new },
 };
···
 	.fifo = { 0x00000001, nv40_fifo_new },
 	.gr = { 0x00000001, nv40_gr_new },
 	.mpeg = { 0x00000001, nv44_mpeg_new },
-	.pm = { 0x00000001, nv40_pm_new },
 	.sw = { 0x00000001, nv10_sw_new },
 };
···
 	.fifo = { 0x00000001, nv40_fifo_new },
 	.gr = { 0x00000001, nv44_gr_new },
 	.mpeg = { 0x00000001, nv44_mpeg_new },
-	.pm = { 0x00000001, nv40_pm_new },
 	.sw = { 0x00000001, nv10_sw_new },
 };
···
 	.fifo = { 0x00000001, nv40_fifo_new },
 	.gr = { 0x00000001, nv44_gr_new },
 	.mpeg = { 0x00000001, nv44_mpeg_new },
-	.pm = { 0x00000001, nv40_pm_new },
 	.sw = { 0x00000001, nv10_sw_new },
 };
···
 	.fifo = { 0x00000001, nv50_fifo_new },
 	.gr = { 0x00000001, nv50_gr_new },
 	.mpeg = { 0x00000001, nv50_mpeg_new },
-	.pm = { 0x00000001, nv50_pm_new },
 	.sw = { 0x00000001, nv50_sw_new },
 };
···
 	.fifo = { 0x00000001, nv40_fifo_new },
 	.gr = { 0x00000001, nv44_gr_new },
 	.mpeg = { 0x00000001, nv44_mpeg_new },
-	.pm = { 0x00000001, nv40_pm_new },
 	.sw = { 0x00000001, nv10_sw_new },
 };
···
 	.fifo = { 0x00000001, nv40_fifo_new },
 	.gr = { 0x00000001, nv44_gr_new },
 	.mpeg = { 0x00000001, nv44_mpeg_new },
-	.pm = { 0x00000001, nv40_pm_new },
 	.sw = { 0x00000001, nv10_sw_new },
 };
···
 	.fifo = { 0x00000001, nv40_fifo_new },
 	.gr = { 0x00000001, nv44_gr_new },
 	.mpeg = { 0x00000001, nv44_mpeg_new },
-	.pm = { 0x00000001, nv40_pm_new },
 	.sw = { 0x00000001, nv10_sw_new },
 };
···
 	.fifo = { 0x00000001, g84_fifo_new },
 	.gr = { 0x00000001, g84_gr_new },
 	.mpeg = { 0x00000001, g84_mpeg_new },
-	.pm = { 0x00000001, g84_pm_new },
 	.sw = { 0x00000001, nv50_sw_new },
 	.vp = { 0x00000001, g84_vp_new },
 };
···
 	.fifo = { 0x00000001, g84_fifo_new },
 	.gr = { 0x00000001, g84_gr_new },
 	.mpeg = { 0x00000001, g84_mpeg_new },
-	.pm = { 0x00000001, g84_pm_new },
 	.sw = { 0x00000001, nv50_sw_new },
 	.vp = { 0x00000001, g84_vp_new },
 };
···
 	.fifo = { 0x00000001, g84_fifo_new },
 	.gr = { 0x00000001, g84_gr_new },
 	.mpeg = { 0x00000001, g84_mpeg_new },
-	.pm = { 0x00000001, g84_pm_new },
 	.sw = { 0x00000001, nv50_sw_new },
 	.vp = { 0x00000001, g84_vp_new },
 };
···
 	.fifo = { 0x00000001, g84_fifo_new },
 	.gr = { 0x00000001, g84_gr_new },
 	.mpeg = { 0x00000001, g84_mpeg_new },
-	.pm = { 0x00000001, g84_pm_new },
 	.sw = { 0x00000001, nv50_sw_new },
 	.vp = { 0x00000001, g84_vp_new },
 };
···
 	.fifo = { 0x00000001, g84_fifo_new },
 	.gr = { 0x00000001, g84_gr_new },
 	.mpeg = { 0x00000001, g84_mpeg_new },
-	.pm = { 0x00000001, g84_pm_new },
 	.sw = { 0x00000001, nv50_sw_new },
 	.vp = { 0x00000001, g84_vp_new },
 };
···
 	.mspdec = { 0x00000001, g98_mspdec_new },
 	.msppp = { 0x00000001, g98_msppp_new },
 	.msvld = { 0x00000001, g98_msvld_new },
-	.pm = { 0x00000001, g84_pm_new },
 	.sec = { 0x00000001, g98_sec_new },
 	.sw = { 0x00000001, nv50_sw_new },
 };
···
 	.fifo = { 0x00000001, g84_fifo_new },
 	.gr = { 0x00000001, gt200_gr_new },
 	.mpeg = { 0x00000001, g84_mpeg_new },
-	.pm = { 0x00000001, gt200_pm_new },
 	.sw = { 0x00000001, nv50_sw_new },
 	.vp = { 0x00000001, g84_vp_new },
 };
···
 	.mspdec = { 0x00000001, gt215_mspdec_new },
 	.msppp = { 0x00000001, gt215_msppp_new },
 	.msvld = { 0x00000001, gt215_msvld_new },
-	.pm = { 0x00000001, gt215_pm_new },
 	.sw = { 0x00000001, nv50_sw_new },
 };
···
 	.mspdec = { 0x00000001, gt215_mspdec_new },
 	.msppp = { 0x00000001, gt215_msppp_new },
 	.msvld = { 0x00000001, gt215_msvld_new },
-	.pm = { 0x00000001, gt215_pm_new },
 	.sw = { 0x00000001, nv50_sw_new },
 };
···
 	.mspdec = { 0x00000001, gt215_mspdec_new },
 	.msppp = { 0x00000001, gt215_msppp_new },
 	.msvld = { 0x00000001, gt215_msvld_new },
-	.pm = { 0x00000001, gt215_pm_new },
 	.sw = { 0x00000001, nv50_sw_new },
 };
···
 	.mspdec = { 0x00000001, g98_mspdec_new },
 	.msppp = { 0x00000001, g98_msppp_new },
 	.msvld = { 0x00000001, g98_msvld_new },
-	.pm = { 0x00000001, g84_pm_new },
 	.sec = { 0x00000001, g98_sec_new },
 	.sw = { 0x00000001, nv50_sw_new },
 };
···
 	.mspdec = { 0x00000001, g98_mspdec_new },
 	.msppp = { 0x00000001, g98_msppp_new },
 	.msvld = { 0x00000001, g98_msvld_new },
-	.pm = { 0x00000001, g84_pm_new },
 	.sec = { 0x00000001, g98_sec_new },
 	.sw = { 0x00000001, nv50_sw_new },
 };
···
 	.mspdec = { 0x00000001, gt215_mspdec_new },
 	.msppp = { 0x00000001, gt215_msppp_new },
 	.msvld = { 0x00000001, mcp89_msvld_new },
-	.pm = { 0x00000001, gt215_pm_new },
 	.sw = { 0x00000001, nv50_sw_new },
 };
···
 	.mspdec = { 0x00000001, gf100_mspdec_new },
 	.msppp = { 0x00000001, gf100_msppp_new },
 	.msvld = { 0x00000001, gf100_msvld_new },
-	.pm = { 0x00000001, gf100_pm_new },
 	.sw = { 0x00000001, gf100_sw_new },
 };
···
 	.mspdec = { 0x00000001, gf100_mspdec_new },
 	.msppp = { 0x00000001, gf100_msppp_new },
 	.msvld = { 0x00000001, gf100_msvld_new },
-	.pm = { 0x00000001, gf108_pm_new },
 	.sw = { 0x00000001, gf100_sw_new },
 };
···
 	.mspdec = { 0x00000001, gf100_mspdec_new },
 	.msppp = { 0x00000001, gf100_msppp_new },
 	.msvld = { 0x00000001, gf100_msvld_new },
-	.pm = { 0x00000001, gf100_pm_new },
 	.sw = { 0x00000001, gf100_sw_new },
 };
···
 	.mspdec = { 0x00000001, gf100_mspdec_new },
 	.msppp = { 0x00000001, gf100_msppp_new },
 	.msvld = { 0x00000001, gf100_msvld_new },
-	.pm = { 0x00000001, gf100_pm_new },
 	.sw = { 0x00000001, gf100_sw_new },
 };
···
 	.mspdec = { 0x00000001, gf100_mspdec_new },
 	.msppp = { 0x00000001, gf100_msppp_new },
 	.msvld = { 0x00000001, gf100_msvld_new },
-	.pm = { 0x00000001, gf100_pm_new },
 	.sw = { 0x00000001, gf100_sw_new },
 };
···
 	.mspdec = { 0x00000001, gf100_mspdec_new },
 	.msppp = { 0x00000001, gf100_msppp_new },
 	.msvld = { 0x00000001, gf100_msvld_new },
-	.pm = { 0x00000001, gf100_pm_new },
 	.sw = { 0x00000001, gf100_sw_new },
 };
···
 	.mspdec = { 0x00000001, gf100_mspdec_new },
 	.msppp = { 0x00000001, gf100_msppp_new },
 	.msvld = { 0x00000001, gf100_msvld_new },
-	.pm = { 0x00000001, gf100_pm_new },
 	.sw = { 0x00000001, gf100_sw_new },
 };
···
 	.mspdec = { 0x00000001, gf100_mspdec_new },
 	.msppp = { 0x00000001, gf100_msppp_new },
 	.msvld = { 0x00000001, gf100_msvld_new },
-	.pm = { 0x00000001, gf117_pm_new },
 	.sw = { 0x00000001, gf100_sw_new },
 };
···
 	.mspdec = { 0x00000001, gf100_mspdec_new },
 	.msppp = { 0x00000001, gf100_msppp_new },
 	.msvld = { 0x00000001, gf100_msvld_new },
-	.pm = { 0x00000001, gf117_pm_new },
 	.sw = { 0x00000001, gf100_sw_new },
 };
···
 	.mspdec = { 0x00000001, gk104_mspdec_new },
 	.msppp = { 0x00000001, gf100_msppp_new },
 	.msvld = { 0x00000001, gk104_msvld_new },
-	.pm = { 0x00000001, gk104_pm_new },
 	.sw = { 0x00000001, gf100_sw_new },
 };
···
 	.mspdec = { 0x00000001, gk104_mspdec_new },
 	.msppp = { 0x00000001, gf100_msppp_new },
 	.msvld = { 0x00000001, gk104_msvld_new },
-	.pm = { 0x00000001, gk104_pm_new },
 	.sw = { 0x00000001, gf100_sw_new },
 };
···
 	.mspdec = { 0x00000001, gk104_mspdec_new },
 	.msppp = { 0x00000001, gf100_msppp_new },
 	.msvld = { 0x00000001, gk104_msvld_new },
-	.pm = { 0x00000001, gk104_pm_new },
 	.sw = { 0x00000001, gf100_sw_new },
 };
···
 	.dma = { 0x00000001, gf119_dma_new },
 	.fifo = { 0x00000001, gk20a_fifo_new },
 	.gr = { 0x00000001, gk20a_gr_new },
-	.pm = { 0x00000001, gk104_pm_new },
 	.sw = { 0x00000001, gf100_sw_new },
 };
···
 		const struct nvkm_device_quirk *quirk,
 		struct device *dev, enum nvkm_device_type type, u64 handle,
 		const char *name, const char *cfg, const char *dbg,
-		bool detect, bool mmio, u64 subdev_mask,
 		struct nvkm_device *device)
 {
 	struct nvkm_subdev *subdev;
···
 	mmio_base = device->func->resource_addr(device, 0);
 	mmio_size = device->func->resource_size(device, 0);
 
-	if (detect || mmio) {
-		device->pri = ioremap(mmio_base, mmio_size);
-		if (device->pri == NULL) {
-			nvdev_error(device, "unable to map PRI\n");
-			ret = -ENOMEM;
-			goto done;
-		}
+	device->pri = ioremap(mmio_base, mmio_size);
+	if (device->pri == NULL) {
+		nvdev_error(device, "unable to map PRI\n");
+		ret = -ENOMEM;
+		goto done;
 	}
 
 	/* identify the chipset, and determine classes of subdev/engines */
-	if (detect) {
-		/* switch mmio to cpu's native endianness */
-		if (!nvkm_device_endianness(device)) {
-			nvdev_error(device,
-				    "Couldn't switch GPU to CPUs endianness\n");
-			ret = -ENOSYS;
-			goto done;
+
+	/* switch mmio to cpu's native endianness */
+	if (!nvkm_device_endianness(device)) {
+		nvdev_error(device,
+			    "Couldn't switch GPU to CPUs endianness\n");
+		ret = -ENOSYS;
+		goto done;
+	}
+
+	boot0 = nvkm_rd32(device, 0x000000);
+
+	/* chipset can be overridden for devel/testing purposes */
+	chipset = nvkm_longopt(device->cfgopt, "NvChipset", 0);
+	if (chipset) {
+		u32 override_boot0;
+
+		if (chipset >= 0x10) {
+			override_boot0 = ((chipset & 0x1ff) << 20);
+			override_boot0 |= 0x000000a1;
+		} else {
+			if (chipset != 0x04)
+				override_boot0 = 0x20104000;
+			else
+				override_boot0 = 0x20004000;
 		}
 
-		boot0 = nvkm_rd32(device, 0x000000);
+		nvdev_warn(device, "CHIPSET OVERRIDE: %08x -> %08x\n",
+			   boot0, override_boot0);
+		boot0 = override_boot0;
+	}
 
-		/* chipset can be overridden for devel/testing purposes */
-		chipset = nvkm_longopt(device->cfgopt, "NvChipset", 0);
-		if (chipset) {
-			u32 override_boot0;
-
-			if (chipset >= 0x10) {
-				override_boot0 = ((chipset & 0x1ff) << 20);
-				override_boot0 |= 0x000000a1;
-			} else {
-				if (chipset != 0x04)
-					override_boot0 = 0x20104000;
-				else
-					override_boot0 = 0x20004000;
-			}
-
-			nvdev_warn(device, "CHIPSET OVERRIDE: %08x -> %08x\n",
-				   boot0, override_boot0);
-			boot0 = override_boot0;
+	/* determine chipset and derive architecture from it */
+	if ((boot0 & 0x1f000000) > 0) {
+		device->chipset = (boot0 & 0x1ff00000) >> 20;
+		device->chiprev = (boot0 & 0x000000ff);
+		switch (device->chipset & 0x1f0) {
+		case 0x010: {
+			if (0x461 & (1 << (device->chipset & 0xf)))
+				device->card_type = NV_10;
+			else
+				device->card_type = NV_11;
+			device->chiprev = 0x00;
+			break;
 		}
+		case 0x020: device->card_type = NV_20; break;
+		case 0x030: device->card_type = NV_30; break;
+		case 0x040:
+		case 0x060: device->card_type = NV_40; break;
+		case 0x050:
+		case 0x080:
+		case 0x090:
+		case 0x0a0: device->card_type = NV_50; break;
+		case 0x0c0:
+		case 0x0d0: device->card_type = NV_C0; break;
+		case 0x0e0:
+		case 0x0f0:
+		case 0x100: device->card_type = NV_E0; break;
+		case 0x110:
+		case 0x120: device->card_type = GM100; break;
+		case 0x130: device->card_type = GP100; break;
+		case 0x140: device->card_type = GV100; break;
+		case 0x160: device->card_type = TU100; break;
+		case 0x170: device->card_type = GA100; break;
+		case 0x190: device->card_type = AD100; break;
+		default:
+			break;
+		}
+	} else
+	if ((boot0 & 0xff00fff0) == 0x20004000) {
+		if (boot0 & 0x00f00000)
+			device->chipset = 0x05;
+		else
+			device->chipset = 0x04;
+		device->card_type = NV_04;
+	}
 
-		/* determine chipset and derive architecture from it */
-		if ((boot0 & 0x1f000000) > 0) {
-			device->chipset = (boot0 & 0x1ff00000) >> 20;
-			device->chiprev = (boot0 & 0x000000ff);
-			switch (device->chipset & 0x1f0) {
-			case 0x010: {
-				if (0x461 & (1 << (device->chipset & 0xf)))
-					device->card_type = NV_10;
-				else
-					device->card_type = NV_11;
-				device->chiprev = 0x00;
-				break;
-			}
-			case 0x020: device->card_type = NV_20; break;
-			case 0x030: device->card_type = NV_30; break;
-			case 0x040:
-			case 0x060: device->card_type = NV_40; break;
-			case 0x050:
-			case 0x080:
-			case 0x090:
-			case 0x0a0: device->card_type = NV_50; break;
-			case 0x0c0:
-			case 0x0d0: device->card_type = NV_C0; break;
-			case 0x0e0:
-			case 0x0f0:
-			case 0x100: device->card_type = NV_E0; break;
-			case 0x110:
-			case 0x120: device->card_type = GM100; break;
-			case 0x130: device->card_type = GP100; break;
-			case 0x140: device->card_type = GV100; break;
-			case 0x160: device->card_type = TU100; break;
-			case 0x170: device->card_type = GA100; break;
-			case 0x190: device->card_type = AD100;
+	switch (device->chipset) {
+	case 0x004: device->chip = &nv4_chipset; break;
+	case 0x005: device->chip = &nv5_chipset; break;
+	case 0x010: device->chip = &nv10_chipset; break;
+	case 0x011: device->chip = &nv11_chipset; break;
+	case 0x015: device->chip = &nv15_chipset; break;
+	case 0x017: device->chip = &nv17_chipset; break;
+	case 0x018: device->chip = &nv18_chipset; break;
+	case 0x01a: device->chip = &nv1a_chipset; break;
+	case 0x01f: device->chip = &nv1f_chipset; break;
+	case 0x020: device->chip = &nv20_chipset; break;
+	case 0x025: device->chip = &nv25_chipset; break;
+	case 0x028: device->chip = &nv28_chipset; break;
+	case 0x02a: device->chip = &nv2a_chipset; break;
+	case 0x030: device->chip = &nv30_chipset; break;
+	case 0x031: device->chip = &nv31_chipset; break;
+	case 0x034: device->chip = &nv34_chipset; break;
+	case 0x035: device->chip = &nv35_chipset; break;
+	case 0x036: device->chip = &nv36_chipset; break;
+	case 0x040: device->chip = &nv40_chipset; break;
+	case 0x041: device->chip = &nv41_chipset; break;
+	case 0x042: device->chip = &nv42_chipset; break;
+	case 0x043: device->chip = &nv43_chipset; break;
+	case 0x044: device->chip = &nv44_chipset; break;
+	case 0x045: device->chip = &nv45_chipset; break;
+	case 0x046: device->chip = &nv46_chipset; break;
+	case 0x047: device->chip = &nv47_chipset; break;
+	case 0x049: device->chip = &nv49_chipset; break;
+	case 0x04a: device->chip = &nv4a_chipset; break;
+	case 0x04b: device->chip = &nv4b_chipset; break;
+	case 0x04c: device->chip = &nv4c_chipset; break;
+	case 0x04e: device->chip = &nv4e_chipset; break;
+	case 0x050: device->chip = &nv50_chipset; break;
+	case 0x063: device->chip = &nv63_chipset; break;
+	case 0x067: device->chip = &nv67_chipset; break;
+	case 0x068: device->chip = &nv68_chipset; break;
+	case 0x084: device->chip = &nv84_chipset; break;
+	case 0x086: device->chip = &nv86_chipset; break;
+	case 0x092: device->chip = &nv92_chipset; break;
+	case 0x094: device->chip = &nv94_chipset; break;
+	case 0x096: device->chip = &nv96_chipset; break;
+	case 0x098: device->chip = &nv98_chipset; break;
+	case 0x0a0: device->chip = &nva0_chipset; break;
+	case 0x0a3: device->chip = &nva3_chipset; break;
+	case 0x0a5: device->chip = &nva5_chipset; break;
+	case 0x0a8: device->chip = &nva8_chipset; break;
+	case 0x0aa: device->chip = &nvaa_chipset; break;
+	case 0x0ac: device->chip = &nvac_chipset; break;
+	case 0x0af: device->chip = &nvaf_chipset; break;
+	case 0x0c0: device->chip = &nvc0_chipset; break;
+	case 0x0c1: device->chip = &nvc1_chipset; break;
+	case 0x0c3: device->chip = &nvc3_chipset; break;
+	case 0x0c4: device->chip = &nvc4_chipset; break;
+	case 0x0c8: device->chip = &nvc8_chipset; break;
+	case 0x0ce: device->chip = &nvce_chipset; break;
+	case 0x0cf: device->chip = &nvcf_chipset; break;
+	case 0x0d7: device->chip = &nvd7_chipset; break;
+	case 0x0d9: device->chip = &nvd9_chipset; break;
+	case 0x0e4: device->chip = &nve4_chipset; break;
+	case 0x0e6: device->chip = &nve6_chipset; break;
+	case 0x0e7: device->chip = &nve7_chipset; break;
+	case 0x0ea: device->chip = &nvea_chipset; break;
+	case 0x0f0: device->chip = &nvf0_chipset; break;
+	case 0x0f1: device->chip = &nvf1_chipset; break;
+	case 0x106: device->chip = &nv106_chipset; break;
+	case 0x108: device->chip = &nv108_chipset; break;
+	case 0x117: device->chip = &nv117_chipset; break;
+	case 0x118: device->chip = &nv118_chipset; break;
+	case 0x120: device->chip = &nv120_chipset; break;
+	case 0x124: device->chip = &nv124_chipset; break;
+	case 0x126: device->chip = &nv126_chipset; break;
+	case 0x12b: device->chip = &nv12b_chipset; break;
+	case 0x130: device->chip = &nv130_chipset; break;
+	case 0x132: device->chip = &nv132_chipset; break;
+	case 0x134: device->chip = &nv134_chipset; break;
+	case 0x136: device->chip = &nv136_chipset; break;
+	case 0x137: device->chip = &nv137_chipset; break;
+	case 0x138: device->chip = &nv138_chipset; break;
+	case 0x13b: device->chip = &nv13b_chipset; break;
+	case 0x140: device->chip = &nv140_chipset; break;
+	case 0x162: device->chip = &nv162_chipset; break;
+	case 0x164: device->chip =
&nv164_chipset; break; 3301 + case 0x166: device->chip = &nv166_chipset; break; 3302 + case 0x167: device->chip = &nv167_chipset; break; 3303 + case 0x168: device->chip = &nv168_chipset; break; 3304 + case 0x172: device->chip = &nv172_chipset; break; 3305 + case 0x173: device->chip = &nv173_chipset; break; 3306 + case 0x174: device->chip = &nv174_chipset; break; 3307 + case 0x176: device->chip = &nv176_chipset; break; 3308 + case 0x177: device->chip = &nv177_chipset; break; 3309 + case 0x192: device->chip = &nv192_chipset; break; 3310 + case 0x193: device->chip = &nv193_chipset; break; 3311 + case 0x194: device->chip = &nv194_chipset; break; 3312 + case 0x196: device->chip = &nv196_chipset; break; 3313 + case 0x197: device->chip = &nv197_chipset; break; 3314 + default: 3315 + if (nvkm_boolopt(device->cfgopt, "NvEnableUnsupportedChipsets", false)) { 3316 + switch (device->chipset) { 3317 + case 0x170: device->chip = &nv170_chipset; break; 3145 3318 default: 3146 3319 break; 3147 3320 } 3148 - } else 3149 - if ((boot0 & 0xff00fff0) == 0x20004000) { 3150 - if (boot0 & 0x00f00000) 3151 - device->chipset = 0x05; 3152 - else 3153 - device->chipset = 0x04; 3154 - device->card_type = NV_04; 3155 3321 } 3156 3322 3157 - switch (device->chipset) { 3158 - case 0x004: device->chip = &nv4_chipset; break; 3159 - case 0x005: device->chip = &nv5_chipset; break; 3160 - case 0x010: device->chip = &nv10_chipset; break; 3161 - case 0x011: device->chip = &nv11_chipset; break; 3162 - case 0x015: device->chip = &nv15_chipset; break; 3163 - case 0x017: device->chip = &nv17_chipset; break; 3164 - case 0x018: device->chip = &nv18_chipset; break; 3165 - case 0x01a: device->chip = &nv1a_chipset; break; 3166 - case 0x01f: device->chip = &nv1f_chipset; break; 3167 - case 0x020: device->chip = &nv20_chipset; break; 3168 - case 0x025: device->chip = &nv25_chipset; break; 3169 - case 0x028: device->chip = &nv28_chipset; break; 3170 - case 0x02a: device->chip = &nv2a_chipset; break; 3171 - case 
0x030: device->chip = &nv30_chipset; break; 3172 - case 0x031: device->chip = &nv31_chipset; break; 3173 - case 0x034: device->chip = &nv34_chipset; break; 3174 - case 0x035: device->chip = &nv35_chipset; break; 3175 - case 0x036: device->chip = &nv36_chipset; break; 3176 - case 0x040: device->chip = &nv40_chipset; break; 3177 - case 0x041: device->chip = &nv41_chipset; break; 3178 - case 0x042: device->chip = &nv42_chipset; break; 3179 - case 0x043: device->chip = &nv43_chipset; break; 3180 - case 0x044: device->chip = &nv44_chipset; break; 3181 - case 0x045: device->chip = &nv45_chipset; break; 3182 - case 0x046: device->chip = &nv46_chipset; break; 3183 - case 0x047: device->chip = &nv47_chipset; break; 3184 - case 0x049: device->chip = &nv49_chipset; break; 3185 - case 0x04a: device->chip = &nv4a_chipset; break; 3186 - case 0x04b: device->chip = &nv4b_chipset; break; 3187 - case 0x04c: device->chip = &nv4c_chipset; break; 3188 - case 0x04e: device->chip = &nv4e_chipset; break; 3189 - case 0x050: device->chip = &nv50_chipset; break; 3190 - case 0x063: device->chip = &nv63_chipset; break; 3191 - case 0x067: device->chip = &nv67_chipset; break; 3192 - case 0x068: device->chip = &nv68_chipset; break; 3193 - case 0x084: device->chip = &nv84_chipset; break; 3194 - case 0x086: device->chip = &nv86_chipset; break; 3195 - case 0x092: device->chip = &nv92_chipset; break; 3196 - case 0x094: device->chip = &nv94_chipset; break; 3197 - case 0x096: device->chip = &nv96_chipset; break; 3198 - case 0x098: device->chip = &nv98_chipset; break; 3199 - case 0x0a0: device->chip = &nva0_chipset; break; 3200 - case 0x0a3: device->chip = &nva3_chipset; break; 3201 - case 0x0a5: device->chip = &nva5_chipset; break; 3202 - case 0x0a8: device->chip = &nva8_chipset; break; 3203 - case 0x0aa: device->chip = &nvaa_chipset; break; 3204 - case 0x0ac: device->chip = &nvac_chipset; break; 3205 - case 0x0af: device->chip = &nvaf_chipset; break; 3206 - case 0x0c0: device->chip = &nvc0_chipset; 
break; 3207 - case 0x0c1: device->chip = &nvc1_chipset; break; 3208 - case 0x0c3: device->chip = &nvc3_chipset; break; 3209 - case 0x0c4: device->chip = &nvc4_chipset; break; 3210 - case 0x0c8: device->chip = &nvc8_chipset; break; 3211 - case 0x0ce: device->chip = &nvce_chipset; break; 3212 - case 0x0cf: device->chip = &nvcf_chipset; break; 3213 - case 0x0d7: device->chip = &nvd7_chipset; break; 3214 - case 0x0d9: device->chip = &nvd9_chipset; break; 3215 - case 0x0e4: device->chip = &nve4_chipset; break; 3216 - case 0x0e6: device->chip = &nve6_chipset; break; 3217 - case 0x0e7: device->chip = &nve7_chipset; break; 3218 - case 0x0ea: device->chip = &nvea_chipset; break; 3219 - case 0x0f0: device->chip = &nvf0_chipset; break; 3220 - case 0x0f1: device->chip = &nvf1_chipset; break; 3221 - case 0x106: device->chip = &nv106_chipset; break; 3222 - case 0x108: device->chip = &nv108_chipset; break; 3223 - case 0x117: device->chip = &nv117_chipset; break; 3224 - case 0x118: device->chip = &nv118_chipset; break; 3225 - case 0x120: device->chip = &nv120_chipset; break; 3226 - case 0x124: device->chip = &nv124_chipset; break; 3227 - case 0x126: device->chip = &nv126_chipset; break; 3228 - case 0x12b: device->chip = &nv12b_chipset; break; 3229 - case 0x130: device->chip = &nv130_chipset; break; 3230 - case 0x132: device->chip = &nv132_chipset; break; 3231 - case 0x134: device->chip = &nv134_chipset; break; 3232 - case 0x136: device->chip = &nv136_chipset; break; 3233 - case 0x137: device->chip = &nv137_chipset; break; 3234 - case 0x138: device->chip = &nv138_chipset; break; 3235 - case 0x13b: device->chip = &nv13b_chipset; break; 3236 - case 0x140: device->chip = &nv140_chipset; break; 3237 - case 0x162: device->chip = &nv162_chipset; break; 3238 - case 0x164: device->chip = &nv164_chipset; break; 3239 - case 0x166: device->chip = &nv166_chipset; break; 3240 - case 0x167: device->chip = &nv167_chipset; break; 3241 - case 0x168: device->chip = &nv168_chipset; break; 3242 - case 
0x172: device->chip = &nv172_chipset; break; 3243 - case 0x173: device->chip = &nv173_chipset; break; 3244 - case 0x174: device->chip = &nv174_chipset; break; 3245 - case 0x176: device->chip = &nv176_chipset; break; 3246 - case 0x177: device->chip = &nv177_chipset; break; 3247 - case 0x192: device->chip = &nv192_chipset; break; 3248 - case 0x193: device->chip = &nv193_chipset; break; 3249 - case 0x194: device->chip = &nv194_chipset; break; 3250 - case 0x196: device->chip = &nv196_chipset; break; 3251 - case 0x197: device->chip = &nv197_chipset; break; 3252 - default: 3253 - if (nvkm_boolopt(device->cfgopt, "NvEnableUnsupportedChipsets", false)) { 3254 - switch (device->chipset) { 3255 - case 0x170: device->chip = &nv170_chipset; break; 3256 - default: 3257 - break; 3258 - } 3259 - } 3260 - 3261 - if (!device->chip) { 3262 - nvdev_error(device, "unknown chipset (%08x)\n", boot0); 3263 - ret = -ENODEV; 3264 - goto done; 3265 - } 3266 - break; 3267 - } 3268 - 3269 - nvdev_info(device, "NVIDIA %s (%08x)\n", 3270 - device->chip->name, boot0); 3271 - 3272 - /* vGPU detection */ 3273 - boot1 = nvkm_rd32(device, 0x0000004); 3274 - if (device->card_type >= TU100 && (boot1 & 0x00030000)) { 3275 - nvdev_info(device, "vGPUs are not supported\n"); 3323 + if (!device->chip) { 3324 + nvdev_error(device, "unknown chipset (%08x)\n", boot0); 3276 3325 ret = -ENODEV; 3277 3326 goto done; 3278 3327 } 3328 + break; 3329 + } 3279 3330 3280 - /* read strapping information */ 3281 - strap = nvkm_rd32(device, 0x101000); 3331 + nvdev_info(device, "NVIDIA %s (%08x)\n", 3332 + device->chip->name, boot0); 3282 3333 3283 - /* determine frequency of timing crystal */ 3284 - if ( device->card_type <= NV_10 || device->chipset < 0x17 || 3285 - (device->chipset >= 0x20 && device->chipset < 0x25)) 3286 - strap &= 0x00000040; 3287 - else 3288 - strap &= 0x00400040; 3334 + /* vGPU detection */ 3335 + boot1 = nvkm_rd32(device, 0x0000004); 3336 + if (device->card_type >= TU100 && (boot1 & 0x00030000)) { 
3337 + nvdev_info(device, "vGPUs are not supported\n"); 3338 + ret = -ENODEV; 3339 + goto done; 3340 + } 3289 3341 3290 - switch (strap) { 3291 - case 0x00000000: device->crystal = 13500; break; 3292 - case 0x00000040: device->crystal = 14318; break; 3293 - case 0x00400000: device->crystal = 27000; break; 3294 - case 0x00400040: device->crystal = 25000; break; 3295 - } 3296 - } else { 3297 - device->chip = &null_chipset; 3342 + /* read strapping information */ 3343 + strap = nvkm_rd32(device, 0x101000); 3344 + 3345 + /* determine frequency of timing crystal */ 3346 + if ( device->card_type <= NV_10 || device->chipset < 0x17 || 3347 + (device->chipset >= 0x20 && device->chipset < 0x25)) 3348 + strap &= 0x00000040; 3349 + else 3350 + strap &= 0x00400040; 3351 + 3352 + switch (strap) { 3353 + case 0x00000000: device->crystal = 13500; break; 3354 + case 0x00000040: device->crystal = 14318; break; 3355 + case 0x00400000: device->crystal = 27000; break; 3356 + case 0x00400040: device->crystal = 25000; break; 3298 3357 } 3299 3358 3300 3359 if (!device->name) ··· 3299 3368 nvkm_intr_ctor(device); 3300 3369 3301 3370 #define NVKM_LAYOUT_ONCE(type,data,ptr) \ 3302 - if (device->chip->ptr.inst && (subdev_mask & (BIT_ULL(type)))) { \ 3371 + if (device->chip->ptr.inst) { \ 3303 3372 WARN_ON(device->chip->ptr.inst != 0x00000001); \ 3304 3373 ret = device->chip->ptr.ctor(device, (type), -1, &device->ptr); \ 3305 3374 subdev = nvkm_device_subdev(device, (type), 0); \ ··· 3318 3387 #define NVKM_LAYOUT_INST(type,data,ptr,cnt) \ 3319 3388 WARN_ON(device->chip->ptr.inst & ~((1 << ARRAY_SIZE(device->ptr)) - 1)); \ 3320 3389 for (j = 0; device->chip->ptr.inst && j < ARRAY_SIZE(device->ptr); j++) { \ 3321 - if ((device->chip->ptr.inst & BIT(j)) && (subdev_mask & BIT_ULL(type))) { \ 3390 + if (device->chip->ptr.inst & BIT(j)) { \ 3322 3391 ret = device->chip->ptr.ctor(device, (type), (j), &device->ptr[j]); \ 3323 3392 subdev = nvkm_device_subdev(device, (type), (j)); \ 3324 3393 if (ret) 
{ \ ··· 3340 3409 3341 3410 ret = nvkm_intr_install(device); 3342 3411 done: 3343 - if (device->pri && (!mmio || ret)) { 3412 + if (ret && device->pri) { 3344 3413 iounmap(device->pri); 3345 3414 device->pri = NULL; 3346 3415 }
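The boot0 decode that the hunk above relocates can be exercised stand-alone. The sketch below is illustrative only: `chipset_from_boot0()` is a hypothetical helper (the driver writes into `device->chipset` directly), but the masks and patterns are taken verbatim from the diff.

```c
#include <stdint.h>

/* Hypothetical restatement of the decode above: NV04/NV05 are matched
 * by a fixed register pattern, while newer parts carry a 9-bit chipset
 * ID in bits 28..20 of boot0. */
static uint32_t chipset_from_boot0(uint32_t boot0)
{
	if ((boot0 & 0x1f000000) > 0)
		return (boot0 & 0x1ff00000) >> 20;
	if ((boot0 & 0xff00fff0) == 0x20004000)
		return (boot0 & 0x00f00000) ? 0x05 : 0x04;
	return 0; /* unknown */
}
```

Note this mirrors the `NvChipset` override path as well: the override synthesizes a boot0 of `(chipset & 0x1ff) << 20 | 0xa1` for modern parts, which round-trips through the same decode.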
+1 -3
drivers/gpu/drm/nouveau/nvkm/engine/device/pci.c
···

 int
 nvkm_device_pci_new(struct pci_dev *pci_dev, const char *cfg, const char *dbg,
-		    bool detect, bool mmio, u64 subdev_mask,
 		    struct nvkm_device **pdevice)
 {
 	const struct nvkm_device_quirk *quirk = NULL;
···
 			       pci_dev->bus->number << 16 |
 			       PCI_SLOT(pci_dev->devfn) << 8 |
 			       PCI_FUNC(pci_dev->devfn), name,
-			       cfg, dbg, detect, mmio, subdev_mask,
-			       &pdev->device);
+			       cfg, dbg, &pdev->device);

 	if (ret)
 		return ret;
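The handle argument retained by the simplified `nvkm_device_ctor()` call above packs the PCI address as `bus << 16 | slot << 8 | func`. A minimal sketch, assuming the kernel's `PCI_SLOT()`/`PCI_FUNC()` definitions (`devfn` is `slot << 3 | func`); `pci_handle()` itself is a hypothetical helper, not a driver function:

```c
#include <stdint.h>

/* PCI_SLOT()/PCI_FUNC() as defined by <linux/pci.h>. */
#define PCI_SLOT(devfn) (((devfn) >> 3) & 0x1f)
#define PCI_FUNC(devfn) ((devfn) & 0x07)

/* Hypothetical helper mirroring the bus<<16 | slot<<8 | func packing
 * used for the nvkm device handle in the hunk above. */
static uint32_t pci_handle(uint8_t bus, uint8_t devfn)
{
	return (uint32_t)bus << 16 | PCI_SLOT(devfn) << 8 | PCI_FUNC(devfn);
}
```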
-2
drivers/gpu/drm/nouveau/nvkm/engine/device/priv.h
···
 #include <engine/nvdec.h>
 #include <engine/nvjpg.h>
 #include <engine/ofa.h>
-#include <engine/pm.h>
 #include <engine/sec.h>
 #include <engine/sec2.h>
 #include <engine/sw.h>
···
 		     const struct nvkm_device_quirk *,
 		     struct device *, enum nvkm_device_type, u64 handle,
 		     const char *name, const char *cfg, const char *dbg,
-		     bool detect, bool mmio, u64 subdev_mask,
 		     struct nvkm_device *);
 int nvkm_device_init(struct nvkm_device *);
 int nvkm_device_fini(struct nvkm_device *, bool suspend);
+1 -4
drivers/gpu/drm/nouveau/nvkm/engine/device/tegra.c
···
 nvkm_device_tegra_new(const struct nvkm_device_tegra_func *func,
 		      struct platform_device *pdev,
 		      const char *cfg, const char *dbg,
-		      bool detect, bool mmio, u64 subdev_mask,
 		      struct nvkm_device **pdevice)
 {
 	struct nvkm_device_tegra *tdev;
···
 	tdev->gpu_speedo_id = tegra_sku_info.gpu_speedo_id;
 	ret = nvkm_device_ctor(&nvkm_device_tegra_func, NULL, &pdev->dev,
 			       NVKM_DEVICE_TEGRA, pdev->id, NULL,
-			       cfg, dbg, detect, mmio, subdev_mask,
-			       &tdev->device);
+			       cfg, dbg, &tdev->device);
 	if (ret)
 		goto powerdown;
···
 nvkm_device_tegra_new(const struct nvkm_device_tegra_func *func,
 		      struct platform_device *pdev,
 		      const char *cfg, const char *dbg,
-		      bool detect, bool mmio, u64 subdev_mask,
 		      struct nvkm_device **pdevice)
 {
 	return -ENOSYS;
+4 -89
drivers/gpu/drm/nouveau/nvkm/engine/device/user.c
···
 }

 static int
-nvkm_udevice_rd08(struct nvkm_object *object, u64 addr, u8 *data)
-{
-	struct nvkm_udevice *udev = nvkm_udevice(object);
-	*data = nvkm_rd08(udev->device, addr);
-	return 0;
-}
-
-static int
-nvkm_udevice_rd16(struct nvkm_object *object, u64 addr, u16 *data)
-{
-	struct nvkm_udevice *udev = nvkm_udevice(object);
-	*data = nvkm_rd16(udev->device, addr);
-	return 0;
-}
-
-static int
-nvkm_udevice_rd32(struct nvkm_object *object, u64 addr, u32 *data)
-{
-	struct nvkm_udevice *udev = nvkm_udevice(object);
-	*data = nvkm_rd32(udev->device, addr);
-	return 0;
-}
-
-static int
-nvkm_udevice_wr08(struct nvkm_object *object, u64 addr, u8 data)
-{
-	struct nvkm_udevice *udev = nvkm_udevice(object);
-	nvkm_wr08(udev->device, addr, data);
-	return 0;
-}
-
-static int
-nvkm_udevice_wr16(struct nvkm_object *object, u64 addr, u16 data)
-{
-	struct nvkm_udevice *udev = nvkm_udevice(object);
-	nvkm_wr16(udev->device, addr, data);
-	return 0;
-}
-
-static int
-nvkm_udevice_wr32(struct nvkm_object *object, u64 addr, u32 data)
-{
-	struct nvkm_udevice *udev = nvkm_udevice(object);
-	nvkm_wr32(udev->device, addr, data);
-	return 0;
-}
-
-static int
 nvkm_udevice_map(struct nvkm_object *object, void *argv, u32 argc,
 		 enum nvkm_object_map *type, u64 *addr, u64 *size)
 {
···
 	struct nvkm_engine *engine;
 	u64 mask = (1ULL << NVKM_ENGINE_DMAOBJ) |
 		   (1ULL << NVKM_ENGINE_FIFO) |
-		   (1ULL << NVKM_ENGINE_DISP) |
-		   (1ULL << NVKM_ENGINE_PM);
+		   (1ULL << NVKM_ENGINE_DISP);
 	const struct nvkm_device_oclass *sclass = NULL;
 	int i;
···
 }

 static const struct nvkm_object_func
-nvkm_udevice_super = {
-	.init = nvkm_udevice_init,
-	.fini = nvkm_udevice_fini,
-	.mthd = nvkm_udevice_mthd,
-	.map = nvkm_udevice_map,
-	.rd08 = nvkm_udevice_rd08,
-	.rd16 = nvkm_udevice_rd16,
-	.rd32 = nvkm_udevice_rd32,
-	.wr08 = nvkm_udevice_wr08,
-	.wr16 = nvkm_udevice_wr16,
-	.wr32 = nvkm_udevice_wr32,
-	.sclass = nvkm_udevice_child_get,
-};
-
-static const struct nvkm_object_func
 nvkm_udevice = {
 	.init = nvkm_udevice_init,
 	.fini = nvkm_udevice_fini,
 	.mthd = nvkm_udevice_mthd,
+	.map = nvkm_udevice_map,
 	.sclass = nvkm_udevice_child_get,
 };
···
 nvkm_udevice_new(const struct nvkm_oclass *oclass, void *data, u32 size,
 		 struct nvkm_object **pobject)
 {
-	union {
-		struct nv_device_v0 v0;
-	} *args = data;
 	struct nvkm_client *client = oclass->client;
-	struct nvkm_object *parent = &client->object;
-	const struct nvkm_object_func *func;
 	struct nvkm_udevice *udev;
-	int ret = -ENOSYS;
-
-	nvif_ioctl(parent, "create device size %d\n", size);
-	if (!(ret = nvif_unpack(ret, &data, &size, args->v0, 0, 0, false))) {
-		nvif_ioctl(parent, "create device v%d device %016llx\n",
-			   args->v0.version, args->v0.device);
-	} else
-		return ret;
-
-	/* give priviledged clients register access */
-	if (args->v0.priv)
-		func = &nvkm_udevice_super;
-	else
-		func = &nvkm_udevice;

 	if (!(udev = kzalloc(sizeof(*udev), GFP_KERNEL)))
 		return -ENOMEM;
-	nvkm_object_ctor(func, oclass, &udev->object);
+	nvkm_object_ctor(&nvkm_udevice, oclass, &udev->object);
 	*pobject = &udev->object;

 	/* find the device that matches what the client requested */
-	if (args->v0.device != ~0)
-		udev->device = nvkm_device_find(args->v0.device);
-	else
-		udev->device = nvkm_device_find(client->device);
+	udev->device = nvkm_device_find(client->device);
 	if (!udev->device)
 		return -ENODEV;

-24
drivers/gpu/drm/nouveau/nvkm/engine/disp/chan.c
···
 #include <nvif/if0014.h>

 static int
-nvkm_disp_chan_rd32(struct nvkm_object *object, u64 addr, u32 *data)
-{
-	struct nvkm_disp_chan *chan = nvkm_disp_chan(object);
-	struct nvkm_device *device = chan->disp->engine.subdev.device;
-	u64 size, base = chan->func->user(chan, &size);
-
-	*data = nvkm_rd32(device, base + addr);
-	return 0;
-}
-
-static int
-nvkm_disp_chan_wr32(struct nvkm_object *object, u64 addr, u32 data)
-{
-	struct nvkm_disp_chan *chan = nvkm_disp_chan(object);
-	struct nvkm_device *device = chan->disp->engine.subdev.device;
-	u64 size, base = chan->func->user(chan, &size);
-
-	nvkm_wr32(device, base + addr, data);
-	return 0;
-}
-
-static int
 nvkm_disp_chan_ntfy(struct nvkm_object *object, u32 type, struct nvkm_event **pevent)
 {
 	struct nvkm_disp_chan *chan = nvkm_disp_chan(object);
···
 	.dtor = nvkm_disp_chan_dtor,
 	.init = nvkm_disp_chan_init,
 	.fini = nvkm_disp_chan_fini,
-	.rd32 = nvkm_disp_chan_rd32,
-	.wr32 = nvkm_disp_chan_wr32,
 	.ntfy = nvkm_disp_chan_ntfy,
 	.map = nvkm_disp_chan_map,
 	.sclass = nvkm_disp_chan_child_get,
-11
drivers/gpu/drm/nouveau/nvkm/engine/pm/Kbuild
···
-# SPDX-License-Identifier: MIT
-nvkm-y += nvkm/engine/pm/base.o
-nvkm-y += nvkm/engine/pm/nv40.o
-nvkm-y += nvkm/engine/pm/nv50.o
-nvkm-y += nvkm/engine/pm/g84.o
-nvkm-y += nvkm/engine/pm/gt200.o
-nvkm-y += nvkm/engine/pm/gt215.o
-nvkm-y += nvkm/engine/pm/gf100.o
-nvkm-y += nvkm/engine/pm/gf108.o
-nvkm-y += nvkm/engine/pm/gf117.o
-nvkm-y += nvkm/engine/pm/gk104.o
-867
drivers/gpu/drm/nouveau/nvkm/engine/pm/base.c
···
-/*
- * Copyright 2013 Red Hat Inc.
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice shall be included in
- * all copies or substantial portions of the Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
- * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
- * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
- * OTHER DEALINGS IN THE SOFTWARE.
- *
- * Authors: Ben Skeggs
- */
-#include "priv.h"
-
-#include <core/client.h>
-#include <core/option.h>
-
-#include <nvif/class.h>
-#include <nvif/if0002.h>
-#include <nvif/if0003.h>
-#include <nvif/ioctl.h>
-#include <nvif/unpack.h>
-
-static u8
-nvkm_pm_count_perfdom(struct nvkm_pm *pm)
-{
-	struct nvkm_perfdom *dom;
-	u8 domain_nr = 0;
-
-	list_for_each_entry(dom, &pm->domains, head)
-		domain_nr++;
-	return domain_nr;
-}
-
-static u16
-nvkm_perfdom_count_perfsig(struct nvkm_perfdom *dom)
-{
-	u16 signal_nr = 0;
-	int i;
-
-	if (dom) {
-		for (i = 0; i < dom->signal_nr; i++) {
-			if (dom->signal[i].name)
-				signal_nr++;
-		}
-	}
-	return signal_nr;
-}
-
-static struct nvkm_perfdom *
-nvkm_perfdom_find(struct nvkm_pm *pm, int di)
-{
-	struct nvkm_perfdom *dom;
-	int tmp = 0;
-
-	list_for_each_entry(dom, &pm->domains, head) {
-		if (tmp++ == di)
-			return dom;
-	}
-	return NULL;
-}
-
-static struct nvkm_perfsig *
-nvkm_perfsig_find(struct nvkm_pm *pm, u8 di, u8 si, struct nvkm_perfdom **pdom)
-{
-	struct nvkm_perfdom *dom = *pdom;
-
-	if (dom == NULL) {
-		dom = nvkm_perfdom_find(pm, di);
-		if (dom == NULL)
-			return NULL;
-		*pdom = dom;
-	}
-
-	if (!dom->signal[si].name)
-		return NULL;
-	return &dom->signal[si];
-}
-
-static u8
-nvkm_perfsig_count_perfsrc(struct nvkm_perfsig *sig)
-{
-	u8 source_nr = 0, i;
-
-	for (i = 0; i < ARRAY_SIZE(sig->source); i++) {
-		if (sig->source[i])
-			source_nr++;
-	}
-	return source_nr;
-}
-
-static struct nvkm_perfsrc *
-nvkm_perfsrc_find(struct nvkm_pm *pm, struct nvkm_perfsig *sig, int si)
-{
-	struct nvkm_perfsrc *src;
-	bool found = false;
-	int tmp = 1; /* Sources ID start from 1 */
-	u8 i;
-
-	for (i = 0; i < ARRAY_SIZE(sig->source) && sig->source[i]; i++) {
-		if (sig->source[i] == si) {
-			found = true;
-			break;
-		}
-	}
-
-	if (found) {
-		list_for_each_entry(src, &pm->sources, head) {
-			if (tmp++ == si)
-				return src;
-		}
-	}
-
-	return NULL;
-}
-
-static int
-nvkm_perfsrc_enable(struct nvkm_pm *pm, struct nvkm_perfctr *ctr)
-{
-	struct nvkm_subdev *subdev = &pm->engine.subdev;
-	struct nvkm_device *device = subdev->device;
-	struct nvkm_perfdom *dom = NULL;
-	struct nvkm_perfsig *sig;
-	struct nvkm_perfsrc *src;
-	u32 mask, value;
-	int i, j;
-
-	for (i = 0; i < 4; i++) {
-		for (j = 0; j < 8 && ctr->source[i][j]; j++) {
-			sig = nvkm_perfsig_find(pm, ctr->domain,
-						ctr->signal[i], &dom);
-			if (!sig)
-				return -EINVAL;
-
-			src = nvkm_perfsrc_find(pm, sig, ctr->source[i][j]);
-			if (!src)
-				return -EINVAL;
-
-			/* set enable bit if needed */
-			mask = value = 0x00000000;
-			if (src->enable)
-				mask = value = 0x80000000;
-			mask |= (src->mask << src->shift);
-			value |= ((ctr->source[i][j] >> 32) << src->shift);
-
-			/* enable the source */
-			nvkm_mask(device, src->addr, mask, value);
-			nvkm_debug(subdev,
-				   "enabled source %08x %08x %08x\n",
-				   src->addr, mask, value);
-		}
-	}
-	return 0;
-}
-
-static int
-nvkm_perfsrc_disable(struct nvkm_pm *pm, struct nvkm_perfctr *ctr)
-{
-	struct nvkm_subdev *subdev = &pm->engine.subdev;
-	struct nvkm_device *device = subdev->device;
-	struct nvkm_perfdom *dom = NULL;
-	struct nvkm_perfsig *sig;
-	struct nvkm_perfsrc *src;
-	u32 mask;
-	int i, j;
-
-	for (i = 0; i < 4; i++) {
-		for (j = 0; j < 8 && ctr->source[i][j]; j++) {
-			sig = nvkm_perfsig_find(pm, ctr->domain,
-						ctr->signal[i], &dom);
-			if (!sig)
-				return -EINVAL;
-
-			src = nvkm_perfsrc_find(pm, sig, ctr->source[i][j]);
-			if (!src)
-				return -EINVAL;
-
-			/* unset enable bit if needed */
-			mask = 0x00000000;
-			if (src->enable)
-				mask = 0x80000000;
-			mask |= (src->mask << src->shift);
-
-			/* disable the source */
-			nvkm_mask(device, src->addr, mask, 0);
-			nvkm_debug(subdev, "disabled source %08x %08x\n",
-				   src->addr, mask);
-		}
-	}
-	return 0;
-}
-
-/*******************************************************************************
- * Perfdom object classes
- ******************************************************************************/
-static int
-nvkm_perfdom_init(struct nvkm_perfdom *dom, void *data, u32 size)
-{
-	union {
-		struct nvif_perfdom_init none;
-	} *args = data;
-	struct nvkm_object *object = &dom->object;
-	struct nvkm_pm *pm = dom->perfmon->pm;
-	int ret = -ENOSYS, i;
-
-	nvif_ioctl(object, "perfdom init size %d\n", size);
-	if (!(ret = nvif_unvers(ret, &data, &size, args->none))) {
-		nvif_ioctl(object, "perfdom init\n");
-	} else
-		return ret;
-
-	for (i = 0; i < 4; i++) {
-		if (dom->ctr[i]) {
-			dom->func->init(pm, dom, dom->ctr[i]);
-
-			/* enable sources */
-			nvkm_perfsrc_enable(pm, dom->ctr[i]);
-		}
-	}
-
-	/* start next batch of counters for sampling */
-	dom->func->next(pm, dom);
-	return 0;
-}
-
-static int
-nvkm_perfdom_sample(struct nvkm_perfdom *dom, void *data, u32 size)
-{
-	union {
-		struct nvif_perfdom_sample none;
-	} *args = data;
-	struct nvkm_object *object = &dom->object;
-	struct nvkm_pm *pm = dom->perfmon->pm;
-	int ret = -ENOSYS;
-
-	nvif_ioctl(object, "perfdom sample size %d\n", size);
-	if (!(ret = nvif_unvers(ret, &data, &size, args->none))) {
-		nvif_ioctl(object, "perfdom sample\n");
-	} else
-		return ret;
-	pm->sequence++;
-
-	/* sample previous batch of counters */
-	list_for_each_entry(dom, &pm->domains, head)
-		dom->func->next(pm, dom);
-
-	return 0;
-}
-
-static int
-nvkm_perfdom_read(struct nvkm_perfdom *dom, void *data, u32 size)
-{
-	union {
-		struct nvif_perfdom_read_v0 v0;
-	} *args = data;
-	struct nvkm_object *object = &dom->object;
-	struct nvkm_pm *pm = dom->perfmon->pm;
-	int ret = -ENOSYS, i;
-
-	nvif_ioctl(object, "perfdom read size %d\n", size);
-	if (!(ret = nvif_unpack(ret, &data, &size, args->v0, 0, 0, false))) {
-		nvif_ioctl(object, "perfdom read vers %d\n", args->v0.version);
-	} else
-		return ret;
-
-	for (i = 0; i < 4; i++) {
-		if (dom->ctr[i])
-			dom->func->read(pm, dom, dom->ctr[i]);
-	}
-
-	if (!dom->clk)
-		return -EAGAIN;
-
-	for (i = 0; i < 4; i++)
-		if (dom->ctr[i])
-			args->v0.ctr[i] = dom->ctr[i]->ctr;
-	args->v0.clk = dom->clk;
-	return 0;
-}
-
-static int
-nvkm_perfdom_mthd(struct nvkm_object *object, u32 mthd, void *data, u32 size)
-{
-	struct nvkm_perfdom *dom = nvkm_perfdom(object);
-	switch (mthd) {
-	case NVIF_PERFDOM_V0_INIT:
-		return nvkm_perfdom_init(dom, data, size);
-	case NVIF_PERFDOM_V0_SAMPLE:
-		return nvkm_perfdom_sample(dom, data, size);
-	case NVIF_PERFDOM_V0_READ:
-		return nvkm_perfdom_read(dom, data, size);
-	default:
-		break;
-	}
-	return -EINVAL;
-}
-
-static void *
-nvkm_perfdom_dtor(struct nvkm_object *object)
-{
-	struct nvkm_perfdom *dom = nvkm_perfdom(object);
-	struct nvkm_pm *pm = dom->perfmon->pm;
-	int i;
-
-	for (i = 0; i < 4; i++) {
-		struct nvkm_perfctr *ctr = dom->ctr[i];
-		if (ctr) {
-			nvkm_perfsrc_disable(pm, ctr);
-			if (ctr->head.next)
-				list_del(&ctr->head);
-		}
-		kfree(ctr);
-	}
-
-	return dom;
-}
-
-static int
-nvkm_perfctr_new(struct nvkm_perfdom *dom, int slot, u8 domain,
-		 struct nvkm_perfsig *signal[4], u64 source[4][8],
-		 u16 logic_op, struct nvkm_perfctr **pctr)
-{
-	struct nvkm_perfctr *ctr;
-	int i, j;
-
-	if (!dom)
-		return -EINVAL;
-
-	ctr = *pctr = kzalloc(sizeof(*ctr), GFP_KERNEL);
-	if (!ctr)
-		return -ENOMEM;
-
-	ctr->domain   = domain;
-	ctr->logic_op = logic_op;
-	ctr->slot     = slot;
-	for (i = 0; i < 4; i++) {
-		if (signal[i]) {
-			ctr->signal[i] = signal[i] - dom->signal;
-			for (j = 0; j < 8; j++)
-				ctr->source[i][j] = source[i][j];
-		}
-	}
-	list_add_tail(&ctr->head, &dom->list);
-
-	return 0;
-}
-
-static const struct nvkm_object_func
-nvkm_perfdom = {
-	.dtor = nvkm_perfdom_dtor,
-	.mthd = nvkm_perfdom_mthd,
-};
-
-static int
-nvkm_perfdom_new_(struct nvkm_perfmon *perfmon,
-		  const struct nvkm_oclass *oclass, void *data, u32 size,
-		  struct nvkm_object **pobject)
-{
-	union {
-		struct nvif_perfdom_v0 v0;
-	} *args = data;
-	struct nvkm_pm *pm = perfmon->pm;
-	struct nvkm_object *parent = oclass->parent;
-	struct nvkm_perfdom *sdom = NULL;
-	struct nvkm_perfctr *ctr[4] = {};
-	struct nvkm_perfdom *dom;
-	int c, s, m;
-	int ret = -ENOSYS;
-
-	nvif_ioctl(parent, "create perfdom size %d\n", size);
-	if (!(ret = nvif_unpack(ret, &data, &size, args->v0, 0, 0, false))) {
-		nvif_ioctl(parent, "create perfdom vers %d dom %d mode %02x\n",
-			   args->v0.version, args->v0.domain, args->v0.mode);
-	} else
-		return ret;
-
-	for (c = 0; c < ARRAY_SIZE(args->v0.ctr); c++) {
-		struct nvkm_perfsig *sig[4] = {};
-		u64 src[4][8] = {};
-
-		for (s = 0; s < ARRAY_SIZE(args->v0.ctr[c].signal); s++) {
-			sig[s] = nvkm_perfsig_find(pm, args->v0.domain,
-						   args->v0.ctr[c].signal[s],
-						   &sdom);
-			if (args->v0.ctr[c].signal[s] && !sig[s])
-				return -EINVAL;
-
-			for (m = 0; m < 8; m++) {
-				src[s][m] = args->v0.ctr[c].source[s][m];
-				if (src[s][m] && !nvkm_perfsrc_find(pm, sig[s],
-								    src[s][m]))
-					return -EINVAL;
-			}
-		}
-
-		ret = nvkm_perfctr_new(sdom, c, args->v0.domain, sig, src,
-				       args->v0.ctr[c].logic_op, &ctr[c]);
-		if (ret)
-			return ret;
-	}
-
-	if (!sdom)
-		return -EINVAL;
-
-	if (!(dom = kzalloc(sizeof(*dom), GFP_KERNEL)))
-		return -ENOMEM;
-	nvkm_object_ctor(&nvkm_perfdom, oclass, &dom->object);
-	dom->perfmon = perfmon;
-	*pobject = &dom->object;
-
-	dom->func = sdom->func;
-	dom->addr = sdom->addr;
-	dom->mode = args->v0.mode;
-	for (c = 0; c < ARRAY_SIZE(ctr); c++)
-		dom->ctr[c] = ctr[c];
-	return 0;
-}
-
-/*******************************************************************************
- * Perfmon object classes
- ******************************************************************************/
-static int
-nvkm_perfmon_mthd_query_domain(struct nvkm_perfmon *perfmon,
-			       void *data, u32 size)
-{
-	union {
-		struct nvif_perfmon_query_domain_v0 v0;
-	} *args = data;
-	struct nvkm_object *object = &perfmon->object;
-	struct nvkm_pm *pm = perfmon->pm;
-	struct nvkm_perfdom *dom;
-	u8 domain_nr;
-	int di, ret = -ENOSYS;
-
-	nvif_ioctl(object, "perfmon query domain size %d\n", size);
-	if (!(ret = nvif_unpack(ret, &data, &size, args->v0, 0, 0, false))) {
-		nvif_ioctl(object, "perfmon domain vers %d iter %02x\n",
-			   args->v0.version, args->v0.iter);
-		di = (args->v0.iter & 0xff) - 1;
-	} else
-		return ret;
-
-	domain_nr = nvkm_pm_count_perfdom(pm);
-	if (di >= (int)domain_nr)
-		return -EINVAL;
-
-	if (di >= 0) {
-		dom = nvkm_perfdom_find(pm, di);
-		if (dom == NULL)
-			return -EINVAL;
-
-		args->v0.id = di;
-		args->v0.signal_nr = nvkm_perfdom_count_perfsig(dom);
-		strscpy(args->v0.name, dom->name, sizeof(args->v0.name));
-
-		/* Currently only global counters (PCOUNTER) are implemented
-		 * but this will be different for local counters (MP). */
-		args->v0.counter_nr = 4;
-	}
-
-	if (++di < domain_nr) {
-		args->v0.iter = ++di;
-		return 0;
-	}
-
-	args->v0.iter = 0xff;
-	return 0;
-}
-
-static int
-nvkm_perfmon_mthd_query_signal(struct nvkm_perfmon *perfmon,
-			       void *data, u32 size)
-{
-	union {
-		struct nvif_perfmon_query_signal_v0 v0;
-	} *args = data;
-	struct nvkm_object *object = &perfmon->object;
-	struct nvkm_pm *pm = perfmon->pm;
-	struct nvkm_device *device = pm->engine.subdev.device;
-	struct nvkm_perfdom *dom;
-	struct nvkm_perfsig *sig;
-	const bool all = nvkm_boolopt(device->cfgopt, "NvPmShowAll", false);
-	const bool raw = nvkm_boolopt(device->cfgopt, "NvPmUnnamed", all);
-	int ret = -ENOSYS, si;
-
-	nvif_ioctl(object, "perfmon query signal size %d\n", size);
-	if (!(ret = nvif_unpack(ret, &data, &size, args->v0, 0, 0, false))) {
-		nvif_ioctl(object,
-			   "perfmon query signal vers %d dom %d iter %04x\n",
-			   args->v0.version, args->v0.domain, args->v0.iter);
-		si = (args->v0.iter & 0xffff) - 1;
-	} else
-		return ret;
-
-	dom = nvkm_perfdom_find(pm, args->v0.domain);
-	if (dom == NULL || si >= (int)dom->signal_nr)
-		return -EINVAL;
-
-	if (si >= 0) {
-		sig = &dom->signal[si];
-		if (raw || !sig->name) {
-			snprintf(args->v0.name, sizeof(args->v0.name),
-				 "/%s/%02x", dom->name, si);
-		} else {
-			strscpy(args->v0.name, sig->name, sizeof(args->v0.name));
-		}
-
-		args->v0.signal = si;
-		args->v0.source_nr = nvkm_perfsig_count_perfsrc(sig);
-	}
-
-	while (++si < dom->signal_nr) {
-		if (all ||
dom->signal[si].name) { 525 - args->v0.iter = ++si; 526 - return 0; 527 - } 528 - } 529 - 530 - args->v0.iter = 0xffff; 531 - return 0; 532 - } 533 - 534 - static int 535 - nvkm_perfmon_mthd_query_source(struct nvkm_perfmon *perfmon, 536 - void *data, u32 size) 537 - { 538 - union { 539 - struct nvif_perfmon_query_source_v0 v0; 540 - } *args = data; 541 - struct nvkm_object *object = &perfmon->object; 542 - struct nvkm_pm *pm = perfmon->pm; 543 - struct nvkm_perfdom *dom = NULL; 544 - struct nvkm_perfsig *sig; 545 - struct nvkm_perfsrc *src; 546 - u8 source_nr = 0; 547 - int si, ret = -ENOSYS; 548 - 549 - nvif_ioctl(object, "perfmon query source size %d\n", size); 550 - if (!(ret = nvif_unpack(ret, &data, &size, args->v0, 0, 0, false))) { 551 - nvif_ioctl(object, 552 - "perfmon source vers %d dom %d sig %02x iter %02x\n", 553 - args->v0.version, args->v0.domain, args->v0.signal, 554 - args->v0.iter); 555 - si = (args->v0.iter & 0xff) - 1; 556 - } else 557 - return ret; 558 - 559 - sig = nvkm_perfsig_find(pm, args->v0.domain, args->v0.signal, &dom); 560 - if (!sig) 561 - return -EINVAL; 562 - 563 - source_nr = nvkm_perfsig_count_perfsrc(sig); 564 - if (si >= (int)source_nr) 565 - return -EINVAL; 566 - 567 - if (si >= 0) { 568 - src = nvkm_perfsrc_find(pm, sig, sig->source[si]); 569 - if (!src) 570 - return -EINVAL; 571 - 572 - args->v0.source = sig->source[si]; 573 - args->v0.mask = src->mask; 574 - strscpy(args->v0.name, src->name, sizeof(args->v0.name)); 575 - } 576 - 577 - if (++si < source_nr) { 578 - args->v0.iter = ++si; 579 - return 0; 580 - } 581 - 582 - args->v0.iter = 0xff; 583 - return 0; 584 - } 585 - 586 - static int 587 - nvkm_perfmon_mthd(struct nvkm_object *object, u32 mthd, void *data, u32 size) 588 - { 589 - struct nvkm_perfmon *perfmon = nvkm_perfmon(object); 590 - switch (mthd) { 591 - case NVIF_PERFMON_V0_QUERY_DOMAIN: 592 - return nvkm_perfmon_mthd_query_domain(perfmon, data, size); 593 - case NVIF_PERFMON_V0_QUERY_SIGNAL: 594 - return 
nvkm_perfmon_mthd_query_signal(perfmon, data, size); 595 - case NVIF_PERFMON_V0_QUERY_SOURCE: 596 - return nvkm_perfmon_mthd_query_source(perfmon, data, size); 597 - default: 598 - break; 599 - } 600 - return -EINVAL; 601 - } 602 - 603 - static int 604 - nvkm_perfmon_child_new(const struct nvkm_oclass *oclass, void *data, u32 size, 605 - struct nvkm_object **pobject) 606 - { 607 - struct nvkm_perfmon *perfmon = nvkm_perfmon(oclass->parent); 608 - return nvkm_perfdom_new_(perfmon, oclass, data, size, pobject); 609 - } 610 - 611 - static int 612 - nvkm_perfmon_child_get(struct nvkm_object *object, int index, 613 - struct nvkm_oclass *oclass) 614 - { 615 - if (index == 0) { 616 - oclass->base.oclass = NVIF_CLASS_PERFDOM; 617 - oclass->base.minver = 0; 618 - oclass->base.maxver = 0; 619 - oclass->ctor = nvkm_perfmon_child_new; 620 - return 0; 621 - } 622 - return -EINVAL; 623 - } 624 - 625 - static void * 626 - nvkm_perfmon_dtor(struct nvkm_object *object) 627 - { 628 - struct nvkm_perfmon *perfmon = nvkm_perfmon(object); 629 - struct nvkm_pm *pm = perfmon->pm; 630 - spin_lock(&pm->client.lock); 631 - if (pm->client.object == &perfmon->object) 632 - pm->client.object = NULL; 633 - spin_unlock(&pm->client.lock); 634 - return perfmon; 635 - } 636 - 637 - static const struct nvkm_object_func 638 - nvkm_perfmon = { 639 - .dtor = nvkm_perfmon_dtor, 640 - .mthd = nvkm_perfmon_mthd, 641 - .sclass = nvkm_perfmon_child_get, 642 - }; 643 - 644 - static int 645 - nvkm_perfmon_new(struct nvkm_pm *pm, const struct nvkm_oclass *oclass, 646 - void *data, u32 size, struct nvkm_object **pobject) 647 - { 648 - struct nvkm_perfmon *perfmon; 649 - 650 - if (!(perfmon = kzalloc(sizeof(*perfmon), GFP_KERNEL))) 651 - return -ENOMEM; 652 - nvkm_object_ctor(&nvkm_perfmon, oclass, &perfmon->object); 653 - perfmon->pm = pm; 654 - *pobject = &perfmon->object; 655 - return 0; 656 - } 657 - 658 - /******************************************************************************* 659 - * PPM 
engine/subdev functions 660 - ******************************************************************************/ 661 - 662 - static int 663 - nvkm_pm_oclass_new(struct nvkm_device *device, const struct nvkm_oclass *oclass, 664 - void *data, u32 size, struct nvkm_object **pobject) 665 - { 666 - struct nvkm_pm *pm = nvkm_pm(oclass->engine); 667 - int ret; 668 - 669 - ret = nvkm_perfmon_new(pm, oclass, data, size, pobject); 670 - if (ret) 671 - return ret; 672 - 673 - spin_lock(&pm->client.lock); 674 - if (pm->client.object == NULL) 675 - pm->client.object = *pobject; 676 - ret = (pm->client.object == *pobject) ? 0 : -EBUSY; 677 - spin_unlock(&pm->client.lock); 678 - return ret; 679 - } 680 - 681 - static const struct nvkm_device_oclass 682 - nvkm_pm_oclass = { 683 - .base.oclass = NVIF_CLASS_PERFMON, 684 - .base.minver = -1, 685 - .base.maxver = -1, 686 - .ctor = nvkm_pm_oclass_new, 687 - }; 688 - 689 - static int 690 - nvkm_pm_oclass_get(struct nvkm_oclass *oclass, int index, 691 - const struct nvkm_device_oclass **class) 692 - { 693 - if (index == 0) { 694 - oclass->base = nvkm_pm_oclass.base; 695 - *class = &nvkm_pm_oclass; 696 - return index; 697 - } 698 - return 1; 699 - } 700 - 701 - static int 702 - nvkm_perfsrc_new(struct nvkm_pm *pm, struct nvkm_perfsig *sig, 703 - const struct nvkm_specsrc *spec) 704 - { 705 - const struct nvkm_specsrc *ssrc; 706 - const struct nvkm_specmux *smux; 707 - struct nvkm_perfsrc *src; 708 - u8 source_nr = 0; 709 - 710 - if (!spec) { 711 - /* No sources are defined for this signal. 
*/ 712 - return 0; 713 - } 714 - 715 - ssrc = spec; 716 - while (ssrc->name) { 717 - smux = ssrc->mux; 718 - while (smux->name) { 719 - bool found = false; 720 - u8 source_id = 0; 721 - u32 len; 722 - 723 - list_for_each_entry(src, &pm->sources, head) { 724 - if (src->addr == ssrc->addr && 725 - src->shift == smux->shift) { 726 - found = true; 727 - break; 728 - } 729 - source_id++; 730 - } 731 - 732 - if (!found) { 733 - src = kzalloc(sizeof(*src), GFP_KERNEL); 734 - if (!src) 735 - return -ENOMEM; 736 - 737 - src->addr = ssrc->addr; 738 - src->mask = smux->mask; 739 - src->shift = smux->shift; 740 - src->enable = smux->enable; 741 - 742 - len = strlen(ssrc->name) + 743 - strlen(smux->name) + 2; 744 - src->name = kzalloc(len, GFP_KERNEL); 745 - if (!src->name) { 746 - kfree(src); 747 - return -ENOMEM; 748 - } 749 - snprintf(src->name, len, "%s_%s", ssrc->name, 750 - smux->name); 751 - 752 - list_add_tail(&src->head, &pm->sources); 753 - } 754 - 755 - sig->source[source_nr++] = source_id + 1; 756 - smux++; 757 - } 758 - ssrc++; 759 - } 760 - 761 - return 0; 762 - } 763 - 764 - int 765 - nvkm_perfdom_new(struct nvkm_pm *pm, const char *name, u32 mask, 766 - u32 base, u32 size_unit, u32 size_domain, 767 - const struct nvkm_specdom *spec) 768 - { 769 - const struct nvkm_specdom *sdom; 770 - const struct nvkm_specsig *ssig; 771 - struct nvkm_perfdom *dom; 772 - int ret, i; 773 - 774 - for (i = 0; i == 0 || mask; i++) { 775 - u32 addr = base + (i * size_unit); 776 - if (i && !(mask & (1 << i))) 777 - continue; 778 - 779 - sdom = spec; 780 - while (sdom->signal_nr) { 781 - dom = kzalloc(struct_size(dom, signal, sdom->signal_nr), 782 - GFP_KERNEL); 783 - if (!dom) 784 - return -ENOMEM; 785 - 786 - if (mask) { 787 - snprintf(dom->name, sizeof(dom->name), 788 - "%s/%02x/%02x", name, i, 789 - (int)(sdom - spec)); 790 - } else { 791 - snprintf(dom->name, sizeof(dom->name), 792 - "%s/%02x", name, (int)(sdom - spec)); 793 - } 794 - 795 - list_add_tail(&dom->head, &pm->domains); 
796 - INIT_LIST_HEAD(&dom->list); 797 - dom->func = sdom->func; 798 - dom->addr = addr; 799 - dom->signal_nr = sdom->signal_nr; 800 - 801 - ssig = (sdom++)->signal; 802 - while (ssig->name) { 803 - struct nvkm_perfsig *sig = 804 - &dom->signal[ssig->signal]; 805 - sig->name = ssig->name; 806 - ret = nvkm_perfsrc_new(pm, sig, ssig->source); 807 - if (ret) 808 - return ret; 809 - ssig++; 810 - } 811 - 812 - addr += size_domain; 813 - } 814 - 815 - mask &= ~(1 << i); 816 - } 817 - 818 - return 0; 819 - } 820 - 821 - static int 822 - nvkm_pm_fini(struct nvkm_engine *engine, bool suspend) 823 - { 824 - struct nvkm_pm *pm = nvkm_pm(engine); 825 - if (pm->func->fini) 826 - pm->func->fini(pm); 827 - return 0; 828 - } 829 - 830 - static void * 831 - nvkm_pm_dtor(struct nvkm_engine *engine) 832 - { 833 - struct nvkm_pm *pm = nvkm_pm(engine); 834 - struct nvkm_perfdom *dom, *next_dom; 835 - struct nvkm_perfsrc *src, *next_src; 836 - 837 - list_for_each_entry_safe(dom, next_dom, &pm->domains, head) { 838 - list_del(&dom->head); 839 - kfree(dom); 840 - } 841 - 842 - list_for_each_entry_safe(src, next_src, &pm->sources, head) { 843 - list_del(&src->head); 844 - kfree(src->name); 845 - kfree(src); 846 - } 847 - 848 - return pm; 849 - } 850 - 851 - static const struct nvkm_engine_func 852 - nvkm_pm = { 853 - .dtor = nvkm_pm_dtor, 854 - .fini = nvkm_pm_fini, 855 - .base.sclass = nvkm_pm_oclass_get, 856 - }; 857 - 858 - int 859 - nvkm_pm_ctor(const struct nvkm_pm_func *func, struct nvkm_device *device, 860 - enum nvkm_subdev_type type, int inst, struct nvkm_pm *pm) 861 - { 862 - pm->func = func; 863 - INIT_LIST_HEAD(&pm->domains); 864 - INIT_LIST_HEAD(&pm->sources); 865 - spin_lock_init(&pm->client.lock); 866 - return nvkm_engine_ctor(&nvkm_pm, device, type, inst, true, &pm->engine); 867 - }
deleted: drivers/gpu/drm/nouveau/nvkm/engine/pm/g84.c (165 lines)
  [MIT license header (Copyright 2013 Red Hat Inc., Authors: Ben Skeggs); G84 perfmon source muxes g84_vfetch_sources, g84_prop_sources, g84_crop_sources, g84_tex_sources; the g84_pm[] domain/signal tables (pc01 gr/strmout/trast/vattr/vfetch/zcull signals, pc02 crop/prop/tex/zrop signals, plus five empty 0x20 domains); g84_pm_new()]
deleted: drivers/gpu/drm/nouveau/nvkm/engine/pm/gf100.c (243 lines)
  [MIT license header (Copyright 2013 Red Hat Inc., Authors: Ben Skeggs); gf100 perfmon source muxes gf100_pbfb_sources, gf100_pmfb_sources, gf100_l1_sources, gf100_tex_sources, gf100_unk400_sources; the gf100_pm_hub[], gf100_pm_gpc[] and gf100_pm_part[] domain tables; gf100_perfctr_init/read/next and gf100_perfctr_func; gf100_pm_fini(); gf100_pm_new_() (creates the "hub", "gpc" and "part" perfdoms from the device's unit masks) and gf100_pm_new()]
deleted: drivers/gpu/drm/nouveau/nvkm/engine/pm/gf100.h (20 lines)

/* SPDX-License-Identifier: MIT */
#ifndef __NVKM_PM_NVC0_H__
#define __NVKM_PM_NVC0_H__
#include "priv.h"

struct gf100_pm_func {
	const struct nvkm_specdom *doms_hub;
	const struct nvkm_specdom *doms_gpc;
	const struct nvkm_specdom *doms_part;
};

int gf100_pm_new_(const struct gf100_pm_func *, struct nvkm_device *, enum nvkm_subdev_type, int,
		  struct nvkm_pm **);

extern const struct nvkm_funcdom gf100_perfctr_func;
extern const struct nvkm_specdom gf100_pm_gpc[];

extern const struct nvkm_specsrc gf100_pbfb_sources[];
extern const struct nvkm_specsrc gf100_pmfb_sources[];
#endif
deleted: drivers/gpu/drm/nouveau/nvkm/engine/pm/gf108.c (66 lines)
  [MIT license header (Copyright 2015 Samuel Pitoiset); the gf108_pm_hub[] and gf108_pm_part[] domain tables (pbfb/pmfb signals reusing gf100_pbfb_sources and gf100_pmfb_sources); gf108_pm_new()]
deleted: drivers/gpu/drm/nouveau/nvkm/engine/pm/gf117.c (80 lines)
  [MIT license header (Copyright 2015 Samuel Pitoiset); the gf117_pmfb_sources mux table; the gf117_pm_hub[] and gf117_pm_part[] domain tables; gf117_pm_new()]
drivers/gpu/drm/nouveau/nvkm/engine/pm/gk104.c (deleted, -184)

/*
 * Copyright 2013 Red Hat Inc.
 *
 * Permission is hereby granted, free of charge, to any person obtaining a
 * copy of this software and associated documentation files (the "Software"),
 * to deal in the Software without restriction, including without limitation
 * the rights to use, copy, modify, merge, publish, distribute, sublicense,
 * and/or sell copies of the Software, and to permit persons to whom the
 * Software is furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in
 * all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
 * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
 * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
 * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
 * OTHER DEALINGS IN THE SOFTWARE.
 *
 * Authors: Ben Skeggs
 */
#include "gf100.h"

static const struct nvkm_specsrc
gk104_pmfb_sources[] = {
	{ 0x140028, (const struct nvkm_specmux[]) {
		{ 0x3fff, 0, "unk0" },
		{ 0x7, 16, "unk16" },
		{ 0x3, 24, "unk24" },
		{ 0x2, 28, "unk28" },
		{}
	}, "pmfb0_pm_unk28" },
	{ 0x14125c, (const struct nvkm_specmux[]) {
		{ 0x3fff, 0, "unk0" },
		{}
	}, "pmfb0_subp0_pm_unk25c" },
	{ 0x14165c, (const struct nvkm_specmux[]) {
		{ 0x3fff, 0, "unk0" },
		{}
	}, "pmfb0_subp1_pm_unk25c" },
	{ 0x141a5c, (const struct nvkm_specmux[]) {
		{ 0x3fff, 0, "unk0" },
		{}
	}, "pmfb0_subp2_pm_unk25c" },
	{ 0x141e5c, (const struct nvkm_specmux[]) {
		{ 0x3fff, 0, "unk0" },
		{}
	}, "pmfb0_subp3_pm_unk25c" },
	{}
};

static const struct nvkm_specsrc
gk104_tex_sources[] = {
	{ 0x5042c0, (const struct nvkm_specmux[]) {
		{ 0xf, 0, "sel0", true },
		{ 0x7, 8, "sel1", true },
		{}
	}, "pgraph_gpc0_tpc0_tex_pm_mux_c_d" },
	{ 0x5042c8, (const struct nvkm_specmux[]) {
		{ 0x1f, 0, "sel", true },
		{}
	}, "pgraph_gpc0_tpc0_tex_pm_unkc8" },
	{ 0x5042b8, (const struct nvkm_specmux[]) {
		{ 0xff, 0, "sel", true },
		{}
	}, "pgraph_gpc0_tpc0_tex_pm_unkb8" },
	{}
};

static const struct nvkm_specdom
gk104_pm_hub[] = {
	{ 0x60, (const struct nvkm_specsig[]) {
		{ 0x47, "hub00_user_0" },
		{}
	}, &gf100_perfctr_func },
	{ 0x40, (const struct nvkm_specsig[]) {
		{ 0x27, "hub01_user_0" },
		{}
	}, &gf100_perfctr_func },
	{ 0x60, (const struct nvkm_specsig[]) {
		{ 0x47, "hub02_user_0" },
		{}
	}, &gf100_perfctr_func },
	{ 0x60, (const struct nvkm_specsig[]) {
		{ 0x47, "hub03_user_0" },
		{}
	}, &gf100_perfctr_func },
	{ 0x40, (const struct nvkm_specsig[]) {
		{ 0x03, "host_mmio_rd" },
		{ 0x27, "hub04_user_0" },
		{}
	}, &gf100_perfctr_func },
	{ 0x60, (const struct nvkm_specsig[]) {
		{ 0x47, "hub05_user_0" },
		{}
	}, &gf100_perfctr_func },
	{ 0xc0, (const struct nvkm_specsig[]) {
		{ 0x74, "host_fb_rd3x" },
		{ 0x75, "host_fb_rd3x_2" },
		{ 0xa7, "hub06_user_0" },
		{}
	}, &gf100_perfctr_func },
	{ 0x60, (const struct nvkm_specsig[]) {
		{ 0x47, "hub07_user_0" },
		{}
	}, &gf100_perfctr_func },
	{}
};

static const struct nvkm_specdom
gk104_pm_gpc[] = {
	{ 0xe0, (const struct nvkm_specsig[]) {
		{ 0xc7, "gpc00_user_0" },
		{}
	}, &gf100_perfctr_func },
	{ 0x20, (const struct nvkm_specsig[]) {
		{}
	}, &gf100_perfctr_func },
	{ 0x20, (const struct nvkm_specsig[]) {
		{ 0x00, "gpc02_tex_00", gk104_tex_sources },
		{ 0x01, "gpc02_tex_01", gk104_tex_sources },
		{ 0x02, "gpc02_tex_02", gk104_tex_sources },
		{ 0x03, "gpc02_tex_03", gk104_tex_sources },
		{ 0x04, "gpc02_tex_04", gk104_tex_sources },
		{ 0x05, "gpc02_tex_05", gk104_tex_sources },
		{ 0x06, "gpc02_tex_06", gk104_tex_sources },
		{ 0x07, "gpc02_tex_07", gk104_tex_sources },
		{ 0x08, "gpc02_tex_08", gk104_tex_sources },
		{ 0x0a, "gpc02_tex_0a", gk104_tex_sources },
		{ 0x0b, "gpc02_tex_0b", gk104_tex_sources },
		{ 0x0d, "gpc02_tex_0c", gk104_tex_sources },
		{ 0x0c, "gpc02_tex_0d", gk104_tex_sources },
		{ 0x0e, "gpc02_tex_0e", gk104_tex_sources },
		{ 0x0f, "gpc02_tex_0f", gk104_tex_sources },
		{ 0x10, "gpc02_tex_10", gk104_tex_sources },
		{ 0x11, "gpc02_tex_11", gk104_tex_sources },
		{ 0x12, "gpc02_tex_12", gk104_tex_sources },
		{}
	}, &gf100_perfctr_func },
	{}
};

static const struct nvkm_specdom
gk104_pm_part[] = {
	{ 0x60, (const struct nvkm_specsig[]) {
		{ 0x00, "part00_pbfb_00", gf100_pbfb_sources },
		{ 0x01, "part00_pbfb_01", gf100_pbfb_sources },
		{ 0x0c, "part00_pmfb_00", gk104_pmfb_sources },
		{ 0x0d, "part00_pmfb_01", gk104_pmfb_sources },
		{ 0x0e, "part00_pmfb_02", gk104_pmfb_sources },
		{ 0x0f, "part00_pmfb_03", gk104_pmfb_sources },
		{ 0x10, "part00_pmfb_04", gk104_pmfb_sources },
		{ 0x12, "part00_pmfb_05", gk104_pmfb_sources },
		{ 0x15, "part00_pmfb_06", gk104_pmfb_sources },
		{ 0x16, "part00_pmfb_07", gk104_pmfb_sources },
		{ 0x18, "part00_pmfb_08", gk104_pmfb_sources },
		{ 0x21, "part00_pmfb_09", gk104_pmfb_sources },
		{ 0x25, "part00_pmfb_0a", gk104_pmfb_sources },
		{ 0x26, "part00_pmfb_0b", gk104_pmfb_sources },
		{ 0x27, "part00_pmfb_0c", gk104_pmfb_sources },
		{ 0x47, "part00_user_0" },
		{}
	}, &gf100_perfctr_func },
	{ 0x60, (const struct nvkm_specsig[]) {
		{ 0x47, "part01_user_0" },
		{}
	}, &gf100_perfctr_func },
	{}
};

static const struct gf100_pm_func
gk104_pm = {
	.doms_gpc = gk104_pm_gpc,
	.doms_hub = gk104_pm_hub,
	.doms_part = gk104_pm_part,
};

int
gk104_pm_new(struct nvkm_device *device, enum nvkm_subdev_type type, int inst, struct nvkm_pm **ppm)
{
	return gf100_pm_new_(&gk104_pm, device, type, inst, ppm);
}
drivers/gpu/drm/nouveau/nvkm/engine/pm/gt200.c (deleted, -157)

/*
 * Copyright 2015 Nouveau project
 *
 * Permission is hereby granted, free of charge, to any person obtaining a
 * copy of this software and associated documentation files (the "Software"),
 * to deal in the Software without restriction, including without limitation
 * the rights to use, copy, modify, merge, publish, distribute, sublicense,
 * and/or sell copies of the Software, and to permit persons to whom the
 * Software is furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in
 * all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
 * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
 * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
 * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
 * OTHER DEALINGS IN THE SOFTWARE.
 *
 * Authors: Samuel Pitoiset
 */
#include "nv40.h"

const struct nvkm_specsrc
gt200_crop_sources[] = {
	{ 0x407008, (const struct nvkm_specmux[]) {
		{ 0xf, 0, "sel0", true },
		{ 0x1f, 16, "sel1", true },
		{}
	}, "pgraph_rop0_crop_pm_mux" },
	{}
};

const struct nvkm_specsrc
gt200_prop_sources[] = {
	{ 0x408750, (const struct nvkm_specmux[]) {
		{ 0x3f, 0, "sel", true },
		{}
	}, "pgraph_tpc0_prop_pm_mux" },
	{}
};

const struct nvkm_specsrc
gt200_tex_sources[] = {
	{ 0x408508, (const struct nvkm_specmux[]) {
		{ 0xfffff, 0, "unk0" },
		{}
	}, "pgraph_tpc0_tex_unk08" },
	{}
};

static const struct nvkm_specdom
gt200_pm[] = {
	{ 0x20, (const struct nvkm_specsig[]) {
		{}
	}, &nv40_perfctr_func },
	{ 0xf0, (const struct nvkm_specsig[]) {
		{ 0xc9, "pc01_gr_idle" },
		{ 0x84, "pc01_strmout_00" },
		{ 0x85, "pc01_strmout_01" },
		{ 0xde, "pc01_trast_00" },
		{ 0xdf, "pc01_trast_01" },
		{ 0xe0, "pc01_trast_02" },
		{ 0xe1, "pc01_trast_03" },
		{ 0xe4, "pc01_trast_04" },
		{ 0xe5, "pc01_trast_05" },
		{ 0x82, "pc01_vattr_00" },
		{ 0x83, "pc01_vattr_01" },
		{ 0x46, "pc01_vfetch_00", g84_vfetch_sources },
		{ 0x47, "pc01_vfetch_01", g84_vfetch_sources },
		{ 0x48, "pc01_vfetch_02", g84_vfetch_sources },
		{ 0x49, "pc01_vfetch_03", g84_vfetch_sources },
		{ 0x4a, "pc01_vfetch_04", g84_vfetch_sources },
		{ 0x4b, "pc01_vfetch_05", g84_vfetch_sources },
		{ 0x4c, "pc01_vfetch_06", g84_vfetch_sources },
		{ 0x4d, "pc01_vfetch_07", g84_vfetch_sources },
		{ 0x4e, "pc01_vfetch_08", g84_vfetch_sources },
		{ 0x4f, "pc01_vfetch_09", g84_vfetch_sources },
		{ 0x50, "pc01_vfetch_0a", g84_vfetch_sources },
		{ 0x51, "pc01_vfetch_0b", g84_vfetch_sources },
		{ 0x52, "pc01_vfetch_0c", g84_vfetch_sources },
		{ 0x53, "pc01_vfetch_0d", g84_vfetch_sources },
		{ 0x54, "pc01_vfetch_0e", g84_vfetch_sources },
		{ 0x55, "pc01_vfetch_0f", g84_vfetch_sources },
		{ 0x56, "pc01_vfetch_10", g84_vfetch_sources },
		{ 0x57, "pc01_vfetch_11", g84_vfetch_sources },
		{ 0x58, "pc01_vfetch_12", g84_vfetch_sources },
		{ 0x59, "pc01_vfetch_13", g84_vfetch_sources },
		{ 0x5a, "pc01_vfetch_14", g84_vfetch_sources },
		{ 0x5b, "pc01_vfetch_15", g84_vfetch_sources },
		{ 0x5c, "pc01_vfetch_16", g84_vfetch_sources },
		{ 0x5d, "pc01_vfetch_17", g84_vfetch_sources },
		{ 0x5e, "pc01_vfetch_18", g84_vfetch_sources },
		{ 0x5f, "pc01_vfetch_19", g84_vfetch_sources },
		{ 0x07, "pc01_zcull_00", nv50_zcull_sources },
		{ 0x08, "pc01_zcull_01", nv50_zcull_sources },
		{ 0x09, "pc01_zcull_02", nv50_zcull_sources },
		{ 0x0a, "pc01_zcull_03", nv50_zcull_sources },
		{ 0x0b, "pc01_zcull_04", nv50_zcull_sources },
		{ 0x0c, "pc01_zcull_05", nv50_zcull_sources },

		{ 0xb0, "pc01_unk00" },
		{ 0xec, "pc01_trailer" },
		{}
	}, &nv40_perfctr_func },
	{ 0xf0, (const struct nvkm_specsig[]) {
		{ 0x55, "pc02_crop_00", gt200_crop_sources },
		{ 0x56, "pc02_crop_01", gt200_crop_sources },
		{ 0x57, "pc02_crop_02", gt200_crop_sources },
		{ 0x58, "pc02_crop_03", gt200_crop_sources },
		{ 0x00, "pc02_prop_00", gt200_prop_sources },
		{ 0x01, "pc02_prop_01", gt200_prop_sources },
		{ 0x02, "pc02_prop_02", gt200_prop_sources },
		{ 0x03, "pc02_prop_03", gt200_prop_sources },
		{ 0x04, "pc02_prop_04", gt200_prop_sources },
		{ 0x05, "pc02_prop_05", gt200_prop_sources },
		{ 0x06, "pc02_prop_06", gt200_prop_sources },
		{ 0x07, "pc02_prop_07", gt200_prop_sources },
		{ 0x78, "pc02_tex_00", gt200_tex_sources },
		{ 0x79, "pc02_tex_01", gt200_tex_sources },
		{ 0x7a, "pc02_tex_02", gt200_tex_sources },
		{ 0x7b, "pc02_tex_03", gt200_tex_sources },
		{ 0x32, "pc02_tex_04", gt200_tex_sources },
		{ 0x33, "pc02_tex_05", gt200_tex_sources },
		{ 0x34, "pc02_tex_06", gt200_tex_sources },
		{ 0x74, "pc02_zrop_00", nv50_zrop_sources },
		{ 0x75, "pc02_zrop_01", nv50_zrop_sources },
		{ 0x76, "pc02_zrop_02", nv50_zrop_sources },
		{ 0x77, "pc02_zrop_03", nv50_zrop_sources },
		{ 0xec, "pc02_trailer" },
		{}
	}, &nv40_perfctr_func },
	{ 0x20, (const struct nvkm_specsig[]) {
		{}
	}, &nv40_perfctr_func },
	{ 0x20, (const struct nvkm_specsig[]) {
		{}
	}, &nv40_perfctr_func },
	{ 0x20, (const struct nvkm_specsig[]) {
		{}
	}, &nv40_perfctr_func },
	{ 0x20, (const struct nvkm_specsig[]) {
		{}
	}, &nv40_perfctr_func },
	{ 0x20, (const struct nvkm_specsig[]) {
		{}
	}, &nv40_perfctr_func },
	{}
};

int
gt200_pm_new(struct nvkm_device *device, enum nvkm_subdev_type type, int inst, struct nvkm_pm **ppm)
{
	return nv40_pm_new_(gt200_pm, device, type, inst, ppm);
}
drivers/gpu/drm/nouveau/nvkm/engine/pm/gt215.c (deleted, -138)

/*
 * Copyright 2013 Red Hat Inc.
 *
 * Permission is hereby granted, free of charge, to any person obtaining a
 * copy of this software and associated documentation files (the "Software"),
 * to deal in the Software without restriction, including without limitation
 * the rights to use, copy, modify, merge, publish, distribute, sublicense,
 * and/or sell copies of the Software, and to permit persons to whom the
 * Software is furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in
 * all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
 * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
 * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
 * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
 * OTHER DEALINGS IN THE SOFTWARE.
 *
 * Authors: Ben Skeggs
 */
#include "nv40.h"

static const struct nvkm_specsrc
gt215_zcull_sources[] = {
	{ 0x402ca4, (const struct nvkm_specmux[]) {
		{ 0x7fff, 0, "unk0" },
		{ 0xff, 24, "unk24" },
		{}
	}, "pgraph_zcull_pm_unka4" },
	{}
};

static const struct nvkm_specdom
gt215_pm[] = {
	{ 0x20, (const struct nvkm_specsig[]) {
		{}
	}, &nv40_perfctr_func },
	{ 0xf0, (const struct nvkm_specsig[]) {
		{ 0xcb, "pc01_gr_idle" },
		{ 0x86, "pc01_strmout_00" },
		{ 0x87, "pc01_strmout_01" },
		{ 0xe0, "pc01_trast_00" },
		{ 0xe1, "pc01_trast_01" },
		{ 0xe2, "pc01_trast_02" },
		{ 0xe3, "pc01_trast_03" },
		{ 0xe6, "pc01_trast_04" },
		{ 0xe7, "pc01_trast_05" },
		{ 0x84, "pc01_vattr_00" },
		{ 0x85, "pc01_vattr_01" },
		{ 0x46, "pc01_vfetch_00", g84_vfetch_sources },
		{ 0x47, "pc01_vfetch_01", g84_vfetch_sources },
		{ 0x48, "pc01_vfetch_02", g84_vfetch_sources },
		{ 0x49, "pc01_vfetch_03", g84_vfetch_sources },
		{ 0x4a, "pc01_vfetch_04", g84_vfetch_sources },
		{ 0x4b, "pc01_vfetch_05", g84_vfetch_sources },
		{ 0x4c, "pc01_vfetch_06", g84_vfetch_sources },
		{ 0x4d, "pc01_vfetch_07", g84_vfetch_sources },
		{ 0x4e, "pc01_vfetch_08", g84_vfetch_sources },
		{ 0x4f, "pc01_vfetch_09", g84_vfetch_sources },
		{ 0x50, "pc01_vfetch_0a", g84_vfetch_sources },
		{ 0x51, "pc01_vfetch_0b", g84_vfetch_sources },
		{ 0x52, "pc01_vfetch_0c", g84_vfetch_sources },
		{ 0x53, "pc01_vfetch_0d", g84_vfetch_sources },
		{ 0x54, "pc01_vfetch_0e", g84_vfetch_sources },
		{ 0x55, "pc01_vfetch_0f", g84_vfetch_sources },
		{ 0x56, "pc01_vfetch_10", g84_vfetch_sources },
		{ 0x57, "pc01_vfetch_11", g84_vfetch_sources },
		{ 0x58, "pc01_vfetch_12", g84_vfetch_sources },
		{ 0x59, "pc01_vfetch_13", g84_vfetch_sources },
		{ 0x5a, "pc01_vfetch_14", g84_vfetch_sources },
		{ 0x5b, "pc01_vfetch_15", g84_vfetch_sources },
		{ 0x5c, "pc01_vfetch_16", g84_vfetch_sources },
		{ 0x5d, "pc01_vfetch_17", g84_vfetch_sources },
		{ 0x5e, "pc01_vfetch_18", g84_vfetch_sources },
		{ 0x5f, "pc01_vfetch_19", g84_vfetch_sources },
		{ 0x07, "pc01_zcull_00", gt215_zcull_sources },
		{ 0x08, "pc01_zcull_01", gt215_zcull_sources },
		{ 0x09, "pc01_zcull_02", gt215_zcull_sources },
		{ 0x0a, "pc01_zcull_03", gt215_zcull_sources },
		{ 0x0b, "pc01_zcull_04", gt215_zcull_sources },
		{ 0x0c, "pc01_zcull_05", gt215_zcull_sources },
		{ 0xb2, "pc01_unk00" },
		{ 0xec, "pc01_trailer" },
		{}
	}, &nv40_perfctr_func },
	{ 0xe0, (const struct nvkm_specsig[]) {
		{ 0x64, "pc02_crop_00", gt200_crop_sources },
		{ 0x65, "pc02_crop_01", gt200_crop_sources },
		{ 0x66, "pc02_crop_02", gt200_crop_sources },
		{ 0x67, "pc02_crop_03", gt200_crop_sources },
		{ 0x00, "pc02_prop_00", gt200_prop_sources },
		{ 0x01, "pc02_prop_01", gt200_prop_sources },
		{ 0x02, "pc02_prop_02", gt200_prop_sources },
		{ 0x03, "pc02_prop_03", gt200_prop_sources },
		{ 0x04, "pc02_prop_04", gt200_prop_sources },
		{ 0x05, "pc02_prop_05", gt200_prop_sources },
		{ 0x06, "pc02_prop_06", gt200_prop_sources },
		{ 0x07, "pc02_prop_07", gt200_prop_sources },
		{ 0x80, "pc02_tex_00", gt200_tex_sources },
		{ 0x81, "pc02_tex_01", gt200_tex_sources },
		{ 0x82, "pc02_tex_02", gt200_tex_sources },
		{ 0x83, "pc02_tex_03", gt200_tex_sources },
		{ 0x3a, "pc02_tex_04", gt200_tex_sources },
		{ 0x3b, "pc02_tex_05", gt200_tex_sources },
		{ 0x3c, "pc02_tex_06", gt200_tex_sources },
		{ 0x7c, "pc02_zrop_00", nv50_zrop_sources },
		{ 0x7d, "pc02_zrop_01", nv50_zrop_sources },
		{ 0x7e, "pc02_zrop_02", nv50_zrop_sources },
		{ 0x7f, "pc02_zrop_03", nv50_zrop_sources },
		{ 0xcc, "pc02_trailer" },
		{}
	}, &nv40_perfctr_func },
	{ 0x20, (const struct nvkm_specsig[]) {
		{}
	}, &nv40_perfctr_func },
	{ 0x20, (const struct nvkm_specsig[]) {
		{}
	}, &nv40_perfctr_func },
	{ 0x20, (const struct nvkm_specsig[]) {
		{}
	}, &nv40_perfctr_func },
	{ 0x20, (const struct nvkm_specsig[]) {
		{}
	}, &nv40_perfctr_func },
	{ 0x20, (const struct nvkm_specsig[]) {
		{}
	}, &nv40_perfctr_func },
	{}
};

int
gt215_pm_new(struct nvkm_device *device, enum nvkm_subdev_type type, int inst, struct nvkm_pm **ppm)
{
	return nv40_pm_new_(gt215_pm, device, type, inst, ppm);
}
drivers/gpu/drm/nouveau/nvkm/engine/pm/nv40.c (deleted, -123)

/*
 * Copyright 2013 Red Hat Inc.
 *
 * Permission is hereby granted, free of charge, to any person obtaining a
 * copy of this software and associated documentation files (the "Software"),
 * to deal in the Software without restriction, including without limitation
 * the rights to use, copy, modify, merge, publish, distribute, sublicense,
 * and/or sell copies of the Software, and to permit persons to whom the
 * Software is furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in
 * all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
 * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
 * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
 * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
 * OTHER DEALINGS IN THE SOFTWARE.
 *
 * Authors: Ben Skeggs
 */
#include "nv40.h"

static void
nv40_perfctr_init(struct nvkm_pm *pm, struct nvkm_perfdom *dom,
		  struct nvkm_perfctr *ctr)
{
	struct nvkm_device *device = pm->engine.subdev.device;
	u32 log = ctr->logic_op;
	u32 src = 0x00000000;
	int i;

	for (i = 0; i < 4; i++)
		src |= ctr->signal[i] << (i * 8);

	nvkm_wr32(device, 0x00a7c0 + dom->addr, 0x00000001 | (dom->mode << 4));
	nvkm_wr32(device, 0x00a400 + dom->addr + (ctr->slot * 0x40), src);
	nvkm_wr32(device, 0x00a420 + dom->addr + (ctr->slot * 0x40), log);
}

static void
nv40_perfctr_read(struct nvkm_pm *pm, struct nvkm_perfdom *dom,
		  struct nvkm_perfctr *ctr)
{
	struct nvkm_device *device = pm->engine.subdev.device;

	switch (ctr->slot) {
	case 0: ctr->ctr = nvkm_rd32(device, 0x00a700 + dom->addr); break;
	case 1: ctr->ctr = nvkm_rd32(device, 0x00a6c0 + dom->addr); break;
	case 2: ctr->ctr = nvkm_rd32(device, 0x00a680 + dom->addr); break;
	case 3: ctr->ctr = nvkm_rd32(device, 0x00a740 + dom->addr); break;
	}
	dom->clk = nvkm_rd32(device, 0x00a600 + dom->addr);
}

static void
nv40_perfctr_next(struct nvkm_pm *pm, struct nvkm_perfdom *dom)
{
	struct nvkm_device *device = pm->engine.subdev.device;
	struct nv40_pm *nv40pm = container_of(pm, struct nv40_pm, base);

	if (nv40pm->sequence != pm->sequence) {
		nvkm_wr32(device, 0x400084, 0x00000020);
		nv40pm->sequence = pm->sequence;
	}
}

const struct nvkm_funcdom
nv40_perfctr_func = {
	.init = nv40_perfctr_init,
	.read = nv40_perfctr_read,
	.next = nv40_perfctr_next,
};

static const struct nvkm_pm_func
nv40_pm_ = {
};

int
nv40_pm_new_(const struct nvkm_specdom *doms, struct nvkm_device *device,
	     enum nvkm_subdev_type type, int inst, struct nvkm_pm **ppm)
{
	struct nv40_pm *pm;
	int ret;

	if (!(pm = kzalloc(sizeof(*pm), GFP_KERNEL)))
		return -ENOMEM;
	*ppm = &pm->base;

	ret = nvkm_pm_ctor(&nv40_pm_, device, type, inst, &pm->base);
	if (ret)
		return ret;

	return nvkm_perfdom_new(&pm->base, "pc", 0, 0, 0, 4, doms);
}

static const struct nvkm_specdom
nv40_pm[] = {
	{ 0x20, (const struct nvkm_specsig[]) {
		{}
	}, &nv40_perfctr_func },
	{ 0x20, (const struct nvkm_specsig[]) {
		{}
	}, &nv40_perfctr_func },
	{ 0x20, (const struct nvkm_specsig[]) {
		{}
	}, &nv40_perfctr_func },
	{ 0x20, (const struct nvkm_specsig[]) {
		{}
	}, &nv40_perfctr_func },
	{ 0x20, (const struct nvkm_specsig[]) {
		{}
	}, &nv40_perfctr_func },
	{}
};

int
nv40_pm_new(struct nvkm_device *device, enum nvkm_subdev_type type, int inst, struct nvkm_pm **ppm)
{
	return nv40_pm_new_(nv40_pm, device, type, inst, ppm);
}
drivers/gpu/drm/nouveau/nvkm/engine/pm/nv40.h (deleted, -15)

/* SPDX-License-Identifier: MIT */
#ifndef __NVKM_PM_NV40_H__
#define __NVKM_PM_NV40_H__
#define nv40_pm(p) container_of((p), struct nv40_pm, base)
#include "priv.h"

struct nv40_pm {
	struct nvkm_pm base;
	u32 sequence;
};

int nv40_pm_new_(const struct nvkm_specdom *, struct nvkm_device *, enum nvkm_subdev_type, int,
		 struct nvkm_pm **);
extern const struct nvkm_funcdom nv40_perfctr_func;
#endif
drivers/gpu/drm/nouveau/nvkm/engine/pm/nv50.c (deleted, -175)

/*
 * Copyright 2013 Red Hat Inc.
 *
 * Permission is hereby granted, free of charge, to any person obtaining a
 * copy of this software and associated documentation files (the "Software"),
 * to deal in the Software without restriction, including without limitation
 * the rights to use, copy, modify, merge, publish, distribute, sublicense,
 * and/or sell copies of the Software, and to permit persons to whom the
 * Software is furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in
 * all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
 * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
 * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
 * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
 * OTHER DEALINGS IN THE SOFTWARE.
 *
 * Authors: Ben Skeggs
 */
#include "nv40.h"

const struct nvkm_specsrc
nv50_zcull_sources[] = {
	{ 0x402ca4, (const struct nvkm_specmux[]) {
		{ 0x7fff, 0, "unk0" },
		{}
	}, "pgraph_zcull_pm_unka4" },
	{}
};

const struct nvkm_specsrc
nv50_zrop_sources[] = {
	{ 0x40708c, (const struct nvkm_specmux[]) {
		{ 0xf, 0, "sel0", true },
		{ 0xf, 16, "sel1", true },
		{}
	}, "pgraph_rop0_zrop_pm_mux" },
	{}
};

static const struct nvkm_specsrc
nv50_prop_sources[] = {
	{ 0x40be50, (const struct nvkm_specmux[]) {
		{ 0x1f, 0, "sel", true },
		{}
	}, "pgraph_tpc3_prop_pm_mux" },
	{}
};

static const struct nvkm_specsrc
nv50_crop_sources[] = {
	{ 0x407008, (const struct nvkm_specmux[]) {
		{ 0x7, 0, "sel0", true },
		{ 0x7, 16, "sel1", true },
		{}
	}, "pgraph_rop0_crop_pm_mux" },
	{}
};

static const struct nvkm_specsrc
nv50_tex_sources[] = {
	{ 0x40b808, (const struct nvkm_specmux[]) {
		{ 0x3fff, 0, "unk0" },
		{}
	}, "pgraph_tpc3_tex_unk08" },
	{}
};

static const struct nvkm_specsrc
nv50_vfetch_sources[] = {
	{ 0x400c0c, (const struct nvkm_specmux[]) {
		{ 0x1, 0, "unk0" },
		{}
	}, "pgraph_vfetch_unk0c" },
	{}
};

static const struct nvkm_specdom
nv50_pm[] = {
	{ 0x20, (const struct nvkm_specsig[]) {
		{}
	}, &nv40_perfctr_func },
	{ 0xf0, (const struct nvkm_specsig[]) {
		{ 0xc8, "pc01_gr_idle" },
		{ 0x7f, "pc01_strmout_00" },
		{ 0x80, "pc01_strmout_01" },
		{ 0xdc, "pc01_trast_00" },
		{ 0xdd, "pc01_trast_01" },
		{ 0xde, "pc01_trast_02" },
		{ 0xdf, "pc01_trast_03" },
		{ 0xe2, "pc01_trast_04" },
		{ 0xe3, "pc01_trast_05" },
		{ 0x7c, "pc01_vattr_00" },
		{ 0x7d, "pc01_vattr_01" },
		{ 0x26, "pc01_vfetch_00", nv50_vfetch_sources },
		{ 0x27, "pc01_vfetch_01", nv50_vfetch_sources },
		{ 0x28, "pc01_vfetch_02", nv50_vfetch_sources },
		{ 0x29, "pc01_vfetch_03", nv50_vfetch_sources },
		{ 0x2a, "pc01_vfetch_04", nv50_vfetch_sources },
		{ 0x2b, "pc01_vfetch_05", nv50_vfetch_sources },
		{ 0x2c, "pc01_vfetch_06", nv50_vfetch_sources },
		{ 0x2d, "pc01_vfetch_07", nv50_vfetch_sources },
		{ 0x2e, "pc01_vfetch_08", nv50_vfetch_sources },
		{ 0x2f, "pc01_vfetch_09", nv50_vfetch_sources },
		{ 0x30, "pc01_vfetch_0a", nv50_vfetch_sources },
		{ 0x31, "pc01_vfetch_0b", nv50_vfetch_sources },
		{ 0x32, "pc01_vfetch_0c", nv50_vfetch_sources },
		{ 0x33, "pc01_vfetch_0d", nv50_vfetch_sources },
		{ 0x34, "pc01_vfetch_0e", nv50_vfetch_sources },
		{ 0x35, "pc01_vfetch_0f", nv50_vfetch_sources },
		{ 0x36, "pc01_vfetch_10", nv50_vfetch_sources },
		{ 0x37, "pc01_vfetch_11", nv50_vfetch_sources },
		{ 0x38, "pc01_vfetch_12", nv50_vfetch_sources },
		{ 0x39, "pc01_vfetch_13", nv50_vfetch_sources },
		{ 0x3a, "pc01_vfetch_14", nv50_vfetch_sources },
		{ 0x3b, "pc01_vfetch_15", nv50_vfetch_sources },
		{ 0x3c, "pc01_vfetch_16", nv50_vfetch_sources },
		{ 0x3d, "pc01_vfetch_17", nv50_vfetch_sources },
		{ 0x3e, "pc01_vfetch_18", nv50_vfetch_sources },
		{ 0x3f, "pc01_vfetch_19", nv50_vfetch_sources },
		{ 0x20, "pc01_zcull_00", nv50_zcull_sources },
		{ 0x21, "pc01_zcull_01", nv50_zcull_sources },
		{ 0x22, "pc01_zcull_02", nv50_zcull_sources },
		{ 0x23, "pc01_zcull_03", nv50_zcull_sources },
		{ 0x24, "pc01_zcull_04", nv50_zcull_sources },
		{ 0x25, "pc01_zcull_05", nv50_zcull_sources },
		{ 0xae, "pc01_unk00" },
		{ 0xee, "pc01_trailer" },
		{}
	}, &nv40_perfctr_func },
	{ 0xf0, (const struct nvkm_specsig[]) {
		{ 0x52, "pc02_crop_00", nv50_crop_sources },
		{ 0x53, "pc02_crop_01", nv50_crop_sources },
		{ 0x54, "pc02_crop_02", nv50_crop_sources },
		{ 0x55, "pc02_crop_03", nv50_crop_sources },
		{ 0x00, "pc02_prop_00", nv50_prop_sources },
		{ 0x01, "pc02_prop_01", nv50_prop_sources },
		{ 0x02, "pc02_prop_02", nv50_prop_sources },
		{ 0x03, "pc02_prop_03", nv50_prop_sources },
		{ 0x04, "pc02_prop_04", nv50_prop_sources },
		{ 0x05, "pc02_prop_05", nv50_prop_sources },
		{ 0x06, "pc02_prop_06", nv50_prop_sources },
		{ 0x07, "pc02_prop_07", nv50_prop_sources },
		{ 0x70, "pc02_tex_00", nv50_tex_sources },
		{ 0x71, "pc02_tex_01", nv50_tex_sources },
		{ 0x72, "pc02_tex_02", nv50_tex_sources },
		{ 0x73, "pc02_tex_03", nv50_tex_sources },
		{ 0x40, "pc02_tex_04", nv50_tex_sources },
		{ 0x41, "pc02_tex_05", nv50_tex_sources },
		{ 0x42, "pc02_tex_06", nv50_tex_sources },
		{ 0x6c, "pc02_zrop_00", nv50_zrop_sources },
		{ 0x6d, "pc02_zrop_01", nv50_zrop_sources },
		{ 0x6e, "pc02_zrop_02", nv50_zrop_sources },
		{ 0x6f, "pc02_zrop_03", nv50_zrop_sources },
		{ 0xee, "pc02_trailer" },
		{}
	}, &nv40_perfctr_func },
	{ 0x20, (const struct nvkm_specsig[]) {
		{}
	}, &nv40_perfctr_func },
	{ 0x20, (const struct nvkm_specsig[]) {
		{}
	}, &nv40_perfctr_func },
	{}
};

int
nv50_pm_new(struct nvkm_device *device, enum nvkm_subdev_type type, int inst, struct nvkm_pm **ppm)
{
	return nv40_pm_new_(nv50_pm, device, type, inst, ppm);
}
drivers/gpu/drm/nouveau/nvkm/engine/pm/priv.h (deleted, -105)

/* SPDX-License-Identifier: MIT */
#ifndef __NVKM_PM_PRIV_H__
#define __NVKM_PM_PRIV_H__
#define nvkm_pm(p) container_of((p), struct nvkm_pm, engine)
#include <engine/pm.h>

int nvkm_pm_ctor(const struct nvkm_pm_func *, struct nvkm_device *, enum nvkm_subdev_type, int,
		 struct nvkm_pm *);

struct nvkm_pm_func {
	void (*fini)(struct nvkm_pm *);
};

struct nvkm_perfctr {
	struct list_head head;
	u8 domain;
	u8 signal[4];
	u64 source[4][8];
	int slot;
	u32 logic_op;
	u32 ctr;
};

struct nvkm_specmux {
	u32 mask;
	u8 shift;
	const char *name;
	bool enable;
};

struct nvkm_specsrc {
	u32 addr;
	const struct nvkm_specmux *mux;
	const char *name;
};

struct nvkm_perfsrc {
	struct list_head head;
	char *name;
	u32 addr;
	u32 mask;
	u8 shift;
	bool enable;
};

extern const struct nvkm_specsrc nv50_zcull_sources[];
extern const struct nvkm_specsrc nv50_zrop_sources[];
extern const struct nvkm_specsrc g84_vfetch_sources[];
extern const struct nvkm_specsrc gt200_crop_sources[];
extern const struct nvkm_specsrc gt200_prop_sources[];
extern const struct nvkm_specsrc gt200_tex_sources[];

struct nvkm_specsig {
	u8 signal;
	const char *name;
	const struct nvkm_specsrc *source;
};

struct nvkm_perfsig {
	const char *name;
	u8 source[8];
};

struct nvkm_specdom {
	u16 signal_nr;
	const struct nvkm_specsig *signal;
	const struct nvkm_funcdom *func;
};

#define nvkm_perfdom(p) container_of((p), struct nvkm_perfdom, object)
#include <core/object.h>

struct nvkm_perfdom {
	struct nvkm_object object;
	struct nvkm_perfmon *perfmon;
	struct list_head head;
	struct list_head list;
	const struct nvkm_funcdom *func;
	struct nvkm_perfctr *ctr[4];
	char name[32];
	u32 addr;
	u8 mode;
	u32 clk;
	u16 signal_nr;
	struct nvkm_perfsig signal[] __counted_by(signal_nr);
};

struct nvkm_funcdom {
	void (*init)(struct nvkm_pm *, struct nvkm_perfdom *,
		     struct nvkm_perfctr *);
	void (*read)(struct nvkm_pm *, struct nvkm_perfdom *,
		     struct nvkm_perfctr *);
	void (*next)(struct nvkm_pm *, struct nvkm_perfdom *);
};

int nvkm_perfdom_new(struct nvkm_pm *, const char *, u32, u32, u32, u32,
		     const struct nvkm_specdom *);

#define nvkm_perfmon(p) container_of((p), struct nvkm_perfmon, object)

struct nvkm_perfmon {
	struct nvkm_object object;
	struct nvkm_pm *pm;
};
#endif
+2
drivers/gpu/drm/panel/panel-boe-bf060y8m-aj0.c
···
 	drm_panel_init(&boe->panel, dev, &boe_bf060y8m_aj0_panel_funcs,
 		       DRM_MODE_CONNECTOR_DSI);
 
+	boe->panel.prepare_prev_first = true;
+
 	boe->panel.backlight = boe_bf060y8m_aj0_create_backlight(dsi);
 	if (IS_ERR(boe->panel.backlight))
 		return dev_err_probe(dev, PTR_ERR(boe->panel.backlight),
+217 -100
drivers/gpu/drm/panel/panel-boe-th101mb31ig002-28a.c
···
 #include <drm/drm_mipi_dsi.h>
 #include <drm/drm_modes.h>
 #include <drm/drm_panel.h>
+#include <drm/drm_probe_helper.h>
+
+struct boe_th101mb31ig002;
+
+struct panel_desc {
+	const struct drm_display_mode *modes;
+	unsigned long mode_flags;
+	enum mipi_dsi_pixel_format format;
+	int (*init)(struct boe_th101mb31ig002 *ctx);
+	unsigned int lanes;
+	bool lp11_before_reset;
+	unsigned int vcioo_to_lp11_delay_ms;
+	unsigned int lp11_to_reset_delay_ms;
+	unsigned int backlight_off_to_display_off_delay_ms;
+	unsigned int enter_sleep_to_reset_down_delay_ms;
+	unsigned int power_off_delay_ms;
+};
 
 struct boe_th101mb31ig002 {
 	struct drm_panel panel;
 
 	struct mipi_dsi_device *dsi;
+
+	const struct panel_desc *desc;
 
 	struct regulator *power;
 	struct gpio_desc *enable;
···
 	usleep_range(5000, 6000);
 }
 
-static int boe_th101mb31ig002_enable(struct drm_panel *panel)
+static int boe_th101mb31ig002_enable(struct boe_th101mb31ig002 *ctx)
 {
-	struct boe_th101mb31ig002 *ctx = container_of(panel,
-						      struct boe_th101mb31ig002,
-						      panel);
-	struct mipi_dsi_device *dsi = ctx->dsi;
-	struct device *dev = &dsi->dev;
-	int ret;
+	struct mipi_dsi_multi_context dsi_ctx = { .dsi = ctx->dsi };
 
-	mipi_dsi_dcs_write_seq(dsi, 0xE0, 0xAB, 0xBA);
-	mipi_dsi_dcs_write_seq(dsi, 0xE1, 0xBA, 0xAB);
-	mipi_dsi_dcs_write_seq(dsi, 0xB1, 0x10, 0x01, 0x47, 0xFF);
-	mipi_dsi_dcs_write_seq(dsi, 0xB2, 0x0C, 0x14, 0x04, 0x50, 0x50, 0x14);
-	mipi_dsi_dcs_write_seq(dsi, 0xB3, 0x56, 0x53, 0x00);
-	mipi_dsi_dcs_write_seq(dsi, 0xB4, 0x33, 0x30, 0x04);
-	mipi_dsi_dcs_write_seq(dsi, 0xB6, 0xB0, 0x00, 0x00, 0x10, 0x00, 0x10,
-			       0x00);
-	mipi_dsi_dcs_write_seq(dsi, 0xB8, 0x05, 0x12, 0x29, 0x49, 0x48, 0x00,
-			       0x00);
-	mipi_dsi_dcs_write_seq(dsi, 0xB9, 0x7C, 0x65, 0x55, 0x49, 0x46, 0x36,
-			       0x3B, 0x24, 0x3D, 0x3C, 0x3D, 0x5C, 0x4C,
-			       0x55, 0x47, 0x46, 0x39, 0x26, 0x06, 0x7C,
-			       0x65, 0x55, 0x49, 0x46, 0x36, 0x3B, 0x24,
-			       0x3D, 0x3C, 0x3D, 0x5C, 0x4C, 0x55, 0x47,
-			       0x46, 0x39, 0x26, 0x06);
-	mipi_dsi_dcs_write_seq(dsi, 0x00, 0xFF, 0x87, 0x12, 0x34, 0x44, 0x44,
-			       0x44, 0x44, 0x98, 0x04, 0x98, 0x04, 0x0F,
-			       0x00, 0x00, 0xC1);
-	mipi_dsi_dcs_write_seq(dsi, 0xC1, 0x54, 0x94, 0x02, 0x85, 0x9F, 0x00,
-			       0x7F, 0x00, 0x54, 0x00);
-	mipi_dsi_dcs_write_seq(dsi, 0xC2, 0x17, 0x09, 0x08, 0x89, 0x08, 0x11,
-			       0x22, 0x20, 0x44, 0xFF, 0x18, 0x00);
-	mipi_dsi_dcs_write_seq(dsi, 0xC3, 0x86, 0x46, 0x05, 0x05, 0x1C, 0x1C,
-			       0x1D, 0x1D, 0x02, 0x1F, 0x1F, 0x1E, 0x1E,
-			       0x0F, 0x0F, 0x0D, 0x0D, 0x13, 0x13, 0x11,
-			       0x11, 0x00);
-	mipi_dsi_dcs_write_seq(dsi, 0xC4, 0x07, 0x07, 0x04, 0x04, 0x1C, 0x1C,
-			       0x1D, 0x1D, 0x02, 0x1F, 0x1F, 0x1E, 0x1E,
-			       0x0E, 0x0E, 0x0C, 0x0C, 0x12, 0x12, 0x10,
-			       0x10, 0x00);
-	mipi_dsi_dcs_write_seq(dsi, 0xC6, 0x2A, 0x2A);
-	mipi_dsi_dcs_write_seq(dsi, 0xC8, 0x21, 0x00, 0x31, 0x42, 0x34, 0x16);
-	mipi_dsi_dcs_write_seq(dsi, 0xCA, 0xCB, 0x43);
-	mipi_dsi_dcs_write_seq(dsi, 0xCD, 0x0E, 0x4B, 0x4B, 0x20, 0x19, 0x6B,
-			       0x06, 0xB3);
-	mipi_dsi_dcs_write_seq(dsi, 0xD2, 0xE3, 0x2B, 0x38, 0x00);
-	mipi_dsi_dcs_write_seq(dsi, 0xD4, 0x00, 0x01, 0x00, 0x0E, 0x04, 0x44,
-			       0x08, 0x10, 0x00, 0x00, 0x00);
-	mipi_dsi_dcs_write_seq(dsi, 0xE6, 0x80, 0x01, 0xFF, 0xFF, 0xFF, 0xFF,
-			       0xFF, 0xFF);
-	mipi_dsi_dcs_write_seq(dsi, 0xF0, 0x12, 0x03, 0x20, 0x00, 0xFF);
-	mipi_dsi_dcs_write_seq(dsi, 0xF3, 0x00);
+	mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xe0, 0xab, 0xba);
+	mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xe1, 0xba, 0xab);
+	mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xb1, 0x10, 0x01, 0x47, 0xff);
+	mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xb2, 0x0c, 0x14, 0x04, 0x50, 0x50, 0x14);
+	mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xb3, 0x56, 0x53, 0x00);
+	mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xb4, 0x33, 0x30, 0x04);
+	mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xb6, 0xb0, 0x00, 0x00, 0x10, 0x00, 0x10,
+				     0x00);
+	mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xb8, 0x05, 0x12, 0x29, 0x49, 0x48, 0x00,
+				     0x00);
+	mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xb9, 0x7c, 0x65, 0x55, 0x49, 0x46, 0x36,
+				     0x3b, 0x24, 0x3d, 0x3c, 0x3d, 0x5c, 0x4c,
+				     0x55, 0x47, 0x46, 0x39, 0x26, 0x06, 0x7c,
+				     0x65, 0x55, 0x49, 0x46, 0x36, 0x3b, 0x24,
+				     0x3d, 0x3c, 0x3d, 0x5c, 0x4c, 0x55, 0x47,
+				     0x46, 0x39, 0x26, 0x06);
+	mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x00, 0xff, 0x87, 0x12, 0x34, 0x44, 0x44,
+				     0x44, 0x44, 0x98, 0x04, 0x98, 0x04, 0x0f,
+				     0x00, 0x00, 0xc1);
+	mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xc1, 0x54, 0x94, 0x02, 0x85, 0x9f, 0x00,
+				     0x7f, 0x00, 0x54, 0x00);
+	mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xc2, 0x17, 0x09, 0x08, 0x89, 0x08, 0x11,
+				     0x22, 0x20, 0x44, 0xff, 0x18, 0x00);
+	mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xc3, 0x86, 0x46, 0x05, 0x05, 0x1c, 0x1c,
+				     0x1d, 0x1d, 0x02, 0x1f, 0x1f, 0x1e, 0x1e,
+				     0x0f, 0x0f, 0x0d, 0x0d, 0x13, 0x13, 0x11,
+				     0x11, 0x00);
+	mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xc4, 0x07, 0x07, 0x04, 0x04, 0x1c, 0x1c,
+				     0x1d, 0x1d, 0x02, 0x1f, 0x1f, 0x1e, 0x1e,
+				     0x0e, 0x0e, 0x0c, 0x0c, 0x12, 0x12, 0x10,
+				     0x10, 0x00);
+	mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xc6, 0x2a, 0x2a);
+	mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xc8, 0x21, 0x00, 0x31, 0x42, 0x34, 0x16);
+	mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xca, 0xcb, 0x43);
+	mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xcd, 0x0e, 0x4b, 0x4b, 0x20, 0x19, 0x6b,
+				     0x06, 0xb3);
+	mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xd2, 0xe3, 0x2b, 0x38, 0x00);
+	mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xd4, 0x00, 0x01, 0x00, 0x0e, 0x04, 0x44,
+				     0x08, 0x10, 0x00, 0x00, 0x00);
+	mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xe6, 0x80, 0x01, 0xff, 0xff, 0xff, 0xff,
+				     0xff, 0xff);
+	mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xf0, 0x12, 0x03, 0x20, 0x00, 0xff);
+	mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xf3, 0x00);
 
-	ret = mipi_dsi_dcs_exit_sleep_mode(dsi);
-	if (ret < 0) {
-		dev_err(dev, "Failed to exit sleep mode: %d\n", ret);
-		return ret;
-	}
+	mipi_dsi_dcs_exit_sleep_mode_multi(&dsi_ctx);
 
-	msleep(120);
+	mipi_dsi_msleep(&dsi_ctx, 120);
 
-	ret = mipi_dsi_dcs_set_display_on(dsi);
-	if (ret < 0) {
-		dev_err(dev, "Failed to set panel on: %d\n", ret);
-		return ret;
-	}
+	mipi_dsi_dcs_set_display_on_multi(&dsi_ctx);
 
-	return 0;
+	return dsi_ctx.accum_err;
+}
+
+static int starry_er88577_init_cmd(struct boe_th101mb31ig002 *ctx)
+{
+	struct mipi_dsi_multi_context dsi_ctx = { .dsi = ctx->dsi };
+
+	msleep(70);
+
+	mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xe0, 0xab, 0xba);
+	mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xe1, 0xba, 0xab);
+	mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xb1, 0x10, 0x01, 0x47, 0xff);
+	mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xb2, 0x0c, 0x14, 0x04, 0x50, 0x50, 0x14);
+	mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xb3, 0x56, 0x53, 0x00);
+	mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xb4, 0x33, 0x30, 0x04);
+	mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xb6, 0xb0, 0x00, 0x00, 0x10, 0x00, 0x10,
+				     0x00);
+	mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xb8, 0x05, 0x12, 0x29, 0x49, 0x40);
+	mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xb9, 0x7c, 0x61, 0x4f, 0x42, 0x3e, 0x2d,
+				     0x31, 0x1a, 0x33, 0x33, 0x33, 0x52, 0x40,
+				     0x47, 0x38, 0x34, 0x26, 0x0e, 0x06, 0x7c,
+				     0x61, 0x4f, 0x42, 0x3e, 0x2d, 0x31, 0x1a,
+				     0x33, 0x33, 0x33, 0x52, 0x40, 0x47, 0x38,
+				     0x34, 0x26, 0x0e, 0x06);
+	mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xc0, 0xcc, 0x76, 0x12, 0x34, 0x44, 0x44,
+				     0x44, 0x44, 0x98, 0x04, 0x98, 0x04, 0x0f,
+				     0x00, 0x00, 0xc1);
+	mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xc1, 0x54, 0x94, 0x02, 0x85, 0x9f, 0x00,
+				     0x6f, 0x00, 0x54, 0x00);
+	mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xc2, 0x17, 0x09, 0x08, 0x89, 0x08, 0x11,
+				     0x22, 0x20, 0x44, 0xff, 0x18, 0x00);
+	mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xc3, 0x87, 0x47, 0x05, 0x05, 0x1c, 0x1c,
+				     0x1d, 0x1d, 0x02, 0x1e, 0x1e, 0x1f, 0x1f,
+				     0x0f, 0x0f, 0x0d, 0x0d, 0x13, 0x13, 0x11,
+				     0x11, 0x24);
+	mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xc4, 0x06, 0x06, 0x04, 0x04, 0x1c, 0x1c,
+				     0x1d, 0x1d, 0x02, 0x1e, 0x1e, 0x1f, 0x1f,
+				     0x0e, 0x0e, 0x0c, 0x0c, 0x12, 0x12, 0x10,
+				     0x10, 0x24);
+	mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xc8, 0x21, 0x00, 0x31, 0x42, 0x34, 0x16);
+	mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xca, 0xcb, 0x43);
+	mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xcd, 0x0e, 0x4b, 0x4b, 0x20, 0x19, 0x6b,
+				     0x06, 0xb3);
+	mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xd1, 0x40, 0x0d, 0xff, 0x0f);
+	mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xd2, 0xe3, 0x2b, 0x38, 0x08);
+	mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xd3, 0x00, 0x00, 0x00, 0x00,
+				     0x00, 0x33, 0x20, 0x3a, 0xd5, 0x86, 0xf3);
+	mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xd4, 0x00, 0x01, 0x00, 0x0e, 0x04, 0x44,
+				     0x08, 0x10, 0x00, 0x00, 0x00);
+	mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xe6, 0x80, 0x09, 0xff, 0xff, 0xff, 0xff,
+				     0xff, 0xff);
+	mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xf0, 0x12, 0x03, 0x20, 0x00, 0xff);
+	mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xf3, 0x00);
+
+	mipi_dsi_dcs_exit_sleep_mode_multi(&dsi_ctx);
+
+	mipi_dsi_msleep(&dsi_ctx, 120);
+
+	mipi_dsi_dcs_set_display_on_multi(&dsi_ctx);
+
+	mipi_dsi_msleep(&dsi_ctx, 20);
+
+	return dsi_ctx.accum_err;
 }
 
 static int boe_th101mb31ig002_disable(struct drm_panel *panel)
···
 	struct boe_th101mb31ig002 *ctx = container_of(panel,
 						      struct boe_th101mb31ig002,
 						      panel);
-	struct mipi_dsi_device *dsi = ctx->dsi;
-	struct device *dev = &dsi->dev;
-	int ret;
+	struct mipi_dsi_multi_context dsi_ctx = { .dsi = ctx->dsi };
 
-	ret = mipi_dsi_dcs_set_display_off(dsi);
-	if (ret < 0)
-		dev_err(dev, "Failed to set panel off: %d\n", ret);
+	if (ctx->desc->backlight_off_to_display_off_delay_ms)
+		mipi_dsi_msleep(&dsi_ctx, ctx->desc->backlight_off_to_display_off_delay_ms);
 
-	msleep(120);
+	mipi_dsi_dcs_set_display_off_multi(&dsi_ctx);
 
-	ret = mipi_dsi_dcs_enter_sleep_mode(dsi);
-	if (ret < 0)
-		dev_err(dev, "Failed to enter sleep mode: %d\n", ret);
+	mipi_dsi_msleep(&dsi_ctx, 120);
 
-	return 0;
+	mipi_dsi_dcs_enter_sleep_mode_multi(&dsi_ctx);
+
+	if (ctx->desc->enter_sleep_to_reset_down_delay_ms)
+		mipi_dsi_msleep(&dsi_ctx, ctx->desc->enter_sleep_to_reset_down_delay_ms);
+
+	return dsi_ctx.accum_err;
 }
 
 static int boe_th101mb31ig002_unprepare(struct drm_panel *panel)
···
 	gpiod_set_value_cansleep(ctx->reset, 1);
 	gpiod_set_value_cansleep(ctx->enable, 0);
 	regulator_disable(ctx->power);
+
+	if (ctx->desc->power_off_delay_ms)
+		msleep(ctx->desc->power_off_delay_ms);
 
 	return 0;
 }
···
 		return ret;
 	}
 
+	if (ctx->desc->vcioo_to_lp11_delay_ms)
+		msleep(ctx->desc->vcioo_to_lp11_delay_ms);
+
+	if (ctx->desc->lp11_before_reset) {
+		ret = mipi_dsi_dcs_nop(ctx->dsi);
+		if (ret)
+			return ret;
+	}
+
+	if (ctx->desc->lp11_to_reset_delay_ms)
+		msleep(ctx->desc->lp11_to_reset_delay_ms);
+
 	gpiod_set_value_cansleep(ctx->enable, 1);
 	msleep(50);
 	boe_th101mb31ig002_reset(ctx);
-	boe_th101mb31ig002_enable(panel);
+
+	ret = ctx->desc->init(ctx);
+	if (ret)
+		return ret;
 
 	return 0;
 }
···
 	.type = DRM_MODE_TYPE_DRIVER | DRM_MODE_TYPE_PREFERRED,
 };
 
+static const struct panel_desc boe_th101mb31ig002_desc = {
+	.modes = &boe_th101mb31ig002_default_mode,
+	.lanes = 4,
+	.format = MIPI_DSI_FMT_RGB888,
+	.mode_flags = MIPI_DSI_MODE_VIDEO_BURST |
+		      MIPI_DSI_MODE_NO_EOT_PACKET |
+		      MIPI_DSI_MODE_LPM,
+	.init = boe_th101mb31ig002_enable,
+};
+
+static const struct drm_display_mode starry_er88577_default_mode = {
+	.clock = (800 + 25 + 25 + 25) * (1280 + 20 + 4 + 12) * 60 / 1000,
+	.hdisplay = 800,
+	.hsync_start = 800 + 25,
+	.hsync_end = 800 + 25 + 25,
+	.htotal = 800 + 25 + 25 + 25,
+	.vdisplay = 1280,
+	.vsync_start = 1280 + 20,
+	.vsync_end = 1280 + 20 + 4,
+	.vtotal = 1280 + 20 + 4 + 12,
+	.width_mm = 135,
+	.height_mm = 216,
+	.type = DRM_MODE_TYPE_DRIVER | DRM_MODE_TYPE_PREFERRED,
+};
+
+static const struct panel_desc starry_er88577_desc = {
+	.modes = &starry_er88577_default_mode,
+	.lanes = 4,
+	.format = MIPI_DSI_FMT_RGB888,
+	.mode_flags = MIPI_DSI_MODE_VIDEO | MIPI_DSI_MODE_VIDEO_SYNC_PULSE |
+		      MIPI_DSI_MODE_LPM,
+	.init = starry_er88577_init_cmd,
+	.lp11_before_reset = true,
+	.vcioo_to_lp11_delay_ms = 5,
+	.lp11_to_reset_delay_ms = 50,
+	.backlight_off_to_display_off_delay_ms = 100,
+	.enter_sleep_to_reset_down_delay_ms = 100,
+	.power_off_delay_ms = 1000,
+};
+
 static int boe_th101mb31ig002_get_modes(struct drm_panel *panel,
 					struct drm_connector *connector)
 {
 	struct boe_th101mb31ig002 *ctx = container_of(panel,
 						      struct boe_th101mb31ig002,
 						      panel);
-	struct drm_display_mode *mode;
-
-	mode = drm_mode_duplicate(connector->dev,
-				  &boe_th101mb31ig002_default_mode);
-	if (!mode) {
-		dev_err(panel->dev, "Failed to add mode %ux%u@%u\n",
-			boe_th101mb31ig002_default_mode.hdisplay,
-			boe_th101mb31ig002_default_mode.vdisplay,
-			drm_mode_vrefresh(&boe_th101mb31ig002_default_mode));
-
-		return -ENOMEM;
-	}
-
-	drm_mode_set_name(mode);
+	const struct drm_display_mode *desc_mode = ctx->desc->modes;
 
 	connector->display_info.bpc = 8;
-	connector->display_info.width_mm = mode->width_mm;
-	connector->display_info.height_mm = mode->height_mm;
-
 	/*
 	 * TODO: Remove once all drm drivers call
 	 * drm_connector_set_orientation_from_panel()
 	 */
 	drm_connector_set_panel_orientation(connector, ctx->orientation);
 
-	drm_mode_probed_add(connector, mode);
-
-	return 1;
+	return drm_connector_helper_get_modes_fixed(connector, desc_mode);
 }
 
 static enum drm_panel_orientation
···
 static int boe_th101mb31ig002_dsi_probe(struct mipi_dsi_device *dsi)
 {
 	struct boe_th101mb31ig002 *ctx;
+	const struct panel_desc *desc;
 	int ret;
 
 	ctx = devm_kzalloc(&dsi->dev, sizeof(*ctx), GFP_KERNEL);
···
 	mipi_dsi_set_drvdata(dsi, ctx);
 	ctx->dsi = dsi;
 
-	dsi->lanes = 4;
-	dsi->format = MIPI_DSI_FMT_RGB888;
-	dsi->mode_flags = MIPI_DSI_MODE_VIDEO_BURST |
-			  MIPI_DSI_MODE_NO_EOT_PACKET |
-			  MIPI_DSI_MODE_LPM;
+	desc = of_device_get_match_data(&dsi->dev);
+	dsi->lanes = desc->lanes;
+	dsi->format = desc->format;
+	dsi->mode_flags = desc->mode_flags;
+	ctx->desc = desc;
 
 	ctx->power = devm_regulator_get(&dsi->dev, "power");
 	if (IS_ERR(ctx->power))
···
 		return dev_err_probe(&dsi->dev, PTR_ERR(ctx->enable),
 				     "Failed to get enable GPIO\n");
 
-	ctx->reset = devm_gpiod_get(&dsi->dev, "reset", GPIOD_OUT_HIGH);
+	ctx->reset = devm_gpiod_get_optional(&dsi->dev, "reset", GPIOD_OUT_HIGH);
 	if (IS_ERR(ctx->reset))
 		return dev_err_probe(&dsi->dev, PTR_ERR(ctx->reset),
 				     "Failed to get reset GPIO\n");
···
 }
 
 static const struct of_device_id boe_th101mb31ig002_of_match[] = {
-	{ .compatible = "boe,th101mb31ig002-28a", },
+	{
+		.compatible = "boe,th101mb31ig002-28a",
+		.data = &boe_th101mb31ig002_desc
+	},
+	{
+		.compatible = "starry,er88577",
+		.data = &starry_er88577_desc
+	},
 	{ /* sentinel */ }
 };
 MODULE_DEVICE_TABLE(of, boe_th101mb31ig002_of_match);
+63 -127
drivers/gpu/drm/panel/panel-boe-tv101wum-nl6.c
···
 	struct gpio_desc *enable_gpio;
 };
 
+#define NT36523_DCS_SWITCH_PAGE	0xff
+
+#define nt36523_switch_page(ctx, page) \
+	mipi_dsi_dcs_write_seq_multi(ctx, NT36523_DCS_SWITCH_PAGE, (page))
+
+static void nt36523_enable_reload_cmds(struct mipi_dsi_multi_context *ctx)
+{
+	mipi_dsi_dcs_write_seq_multi(ctx, 0xfb, 0x01);
+}
+
 static int boe_tv110c9m_init(struct boe_panel *boe)
 {
 	struct mipi_dsi_multi_context ctx = { .dsi = boe->dsi };
 
-	mipi_dsi_dcs_write_seq_multi(&ctx, 0xff, 0x20);
-	mipi_dsi_dcs_write_seq_multi(&ctx, 0xfb, 0x01);
+	nt36523_switch_page(&ctx, 0x20);
+	nt36523_enable_reload_cmds(&ctx);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x05, 0xd9);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x07, 0x78);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x08, 0x5a);
···
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0xbb, 0x03, 0x8e, 0x03, 0xa2, 0x03, 0xb7, 0x03, 0xe7,
 				     0x03, 0xfd, 0x03, 0xff);
 
-	mipi_dsi_dcs_write_seq_multi(&ctx, 0xff, 0x21);
-	mipi_dsi_dcs_write_seq_multi(&ctx, 0xfb, 0x01);
-
+	nt36523_switch_page(&ctx, 0x21);
+	nt36523_enable_reload_cmds(&ctx);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0xb0, 0x00, 0x00, 0x00, 0x1b, 0x00, 0x45, 0x00, 0x65,
 				     0x00, 0x81, 0x00, 0x99, 0x00, 0xae, 0x00, 0xc1);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0xb1, 0x00, 0xd2, 0x01, 0x0b, 0x01, 0x34, 0x01, 0x76,
 				     0x01, 0xa3, 0x01, 0xef, 0x02, 0x27, 0x02, 0x29);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0xb2, 0x02, 0x5f, 0x02, 0x9e, 0x02, 0xc9, 0x03, 0x00,
 				     0x03, 0x26, 0x03, 0x53, 0x03, 0x63, 0x03, 0x73);
-
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0xb3, 0x03, 0x86, 0x03, 0x9a, 0x03, 0xaf, 0x03, 0xdf,
 				     0x03, 0xf5, 0x03, 0xe0);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0xb4, 0x00, 0x00, 0x00, 0x1b, 0x00, 0x45, 0x00, 0x65,
···
 				     0x03, 0x26, 0x03, 0x53, 0x03, 0x63, 0x03, 0x73);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0xb7, 0x03, 0x86, 0x03, 0x9a, 0x03, 0xaf, 0x03, 0xdf,
 				     0x03, 0xf5, 0x03, 0xe0);
-
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0xb8, 0x00, 0x00, 0x00, 0x1b, 0x00, 0x45, 0x00, 0x65,
 				     0x00, 0x81, 0x00, 0x99, 0x00, 0xae, 0x00, 0xc1);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0xb9, 0x00, 0xd2, 0x01, 0x0b, 0x01, 0x34, 0x01, 0x76,
 				     0x01, 0xa3, 0x01, 0xef, 0x02, 0x27, 0x02, 0x29);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0xba, 0x02, 0x5f, 0x02, 0x9e, 0x02, 0xc9, 0x03, 0x00,
 				     0x03, 0x26, 0x03, 0x53, 0x03, 0x63, 0x03, 0x73);
-
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0xbb, 0x03, 0x86, 0x03, 0x9a, 0x03, 0xaf, 0x03, 0xdf,
 				     0x03, 0xf5, 0x03, 0xe0);
-	mipi_dsi_dcs_write_seq_multi(&ctx, 0xff, 0x24);
-	mipi_dsi_dcs_write_seq_multi(&ctx, 0xfb, 0x01);
 
+	nt36523_switch_page(&ctx, 0x24);
+	nt36523_enable_reload_cmds(&ctx);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x00, 0x00);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x01, 0x00);
-
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x02, 0x1c);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x03, 0x1c);
-
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x04, 0x1d);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x05, 0x1d);
-
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x06, 0x04);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x07, 0x04);
-
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x08, 0x0f);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x09, 0x0f);
-
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x0a, 0x0e);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x0b, 0x0e);
-
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x0c, 0x0d);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x0d, 0x0d);
-
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x0e, 0x0c);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x0f, 0x0c);
-
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x10, 0x08);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x11, 0x08);
-
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x12, 0x00);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x13, 0x00);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x14, 0x00);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x15, 0x00);
-
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x16, 0x00);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x17, 0x00);
-
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x18, 0x1c);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x19, 0x1c);
-
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x1a, 0x1d);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x1b, 0x1d);
-
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x1c, 0x04);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x1d, 0x04);
-
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x1e, 0x0f);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x1f, 0x0f);
-
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x20, 0x0e);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x21, 0x0e);
-
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x22, 0x0d);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x23, 0x0d);
-
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x24, 0x0c);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x25, 0x0c);
-
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x26, 0x08);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x27, 0x08);
-
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x28, 0x00);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x29, 0x00);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x2a, 0x00);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x2b, 0x00);
-
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x2d, 0x20);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x2f, 0x0a);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x30, 0x44);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x33, 0x0c);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x34, 0x32);
-
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x37, 0x44);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x38, 0x40);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x39, 0x00);
···
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0xdb, 0x05);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0xdc, 0xa9);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0xdd, 0x22);
-
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0xdf, 0x05);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0xe0, 0xa9);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0xe1, 0x05);
···
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x8d, 0x00);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x8e, 0x00);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0xb5, 0x90);
-	mipi_dsi_dcs_write_seq_multi(&ctx, 0xff, 0x25);
-	mipi_dsi_dcs_write_seq_multi(&ctx, 0xfb, 0x01);
+
+	nt36523_switch_page(&ctx, 0x25);
+	nt36523_enable_reload_cmds(&ctx);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x05, 0x00);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x19, 0x07);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x1f, 0x60);
···
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x61, 0x60);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x62, 0x50);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0xf1, 0x10);
-	mipi_dsi_dcs_write_seq_multi(&ctx, 0xff, 0x2a);
-	mipi_dsi_dcs_write_seq_multi(&ctx, 0xfb, 0x01);
 
+	nt36523_switch_page(&ctx, 0x2a);
+	nt36523_enable_reload_cmds(&ctx);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x64, 0x16);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x67, 0x16);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x6a, 0x16);
-
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x70, 0x30);
-
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0xa2, 0xf3);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0xa3, 0xff);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0xa4, 0xff);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0xa5, 0xff);
-
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0xd6, 0x08);
 
-	mipi_dsi_dcs_write_seq_multi(&ctx, 0xff, 0x26);
-	mipi_dsi_dcs_write_seq_multi(&ctx, 0xfb, 0x01);
+	nt36523_switch_page(&ctx, 0x26);
+	nt36523_enable_reload_cmds(&ctx);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x00, 0xa1);
-
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x02, 0x31);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x04, 0x28);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x06, 0x30);
···
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x23, 0x00);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x2a, 0x0d);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x2b, 0x7f);
-
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x1d, 0x00);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x1e, 0x65);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x1f, 0x65);
···
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0xc9, 0x9e);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0xca, 0x4e);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0xcb, 0x00);
-
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0xa9, 0x49);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0xaa, 0x4b);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0xab, 0x48);
···
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0xc3, 0x4f);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0xc4, 0x3a);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0xc5, 0x42);
-	mipi_dsi_dcs_write_seq_multi(&ctx, 0xff, 0x27);
-	mipi_dsi_dcs_write_seq_multi(&ctx, 0xfb, 0x01);
 
+	nt36523_switch_page(&ctx, 0x27);
+	nt36523_enable_reload_cmds(&ctx);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x56, 0x06);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x58, 0x80);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x59, 0x75);
···
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x66, 0x00);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x67, 0x01);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x68, 0x44);
-
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x00, 0x00);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x78, 0x00);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0xc3, 0x00);
 
-	mipi_dsi_dcs_write_seq_multi(&ctx, 0xff, 0x2a);
-	mipi_dsi_dcs_write_seq_multi(&ctx, 0xfb, 0x01);
-
+	nt36523_switch_page(&ctx, 0x2a);
+	nt36523_enable_reload_cmds(&ctx);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x22, 0x2f);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x23, 0x08);
-
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x24, 0x00);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x25, 0x65);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x26, 0xf8);
···
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x2b, 0x00);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x2d, 0x1a);
 
-	mipi_dsi_dcs_write_seq_multi(&ctx, 0xff, 0x23);
-	mipi_dsi_dcs_write_seq_multi(&ctx, 0xfb, 0x01);
-
+	nt36523_switch_page(&ctx, 0x23);
+	nt36523_enable_reload_cmds(&ctx);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x00, 0x80);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x07, 0x00);
 
-	mipi_dsi_dcs_write_seq_multi(&ctx, 0xff, 0xe0);
-	mipi_dsi_dcs_write_seq_multi(&ctx, 0xfb, 0x01);
+	nt36523_switch_page(&ctx, 0xe0);
+	nt36523_enable_reload_cmds(&ctx);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x14, 0x60);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x16, 0xc0);
 
-	mipi_dsi_dcs_write_seq_multi(&ctx, 0xff, 0xf0);
-	mipi_dsi_dcs_write_seq_multi(&ctx, 0xfb, 0x01);
+	nt36523_switch_page(&ctx, 0xf0);
+	nt36523_enable_reload_cmds(&ctx);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x3a, 0x08);
 
-	mipi_dsi_dcs_write_seq_multi(&ctx, 0xff, 0x10);
-	mipi_dsi_dcs_write_seq_multi(&ctx, 0xfb, 0x01);
+	nt36523_switch_page(&ctx, 0x10);
+	nt36523_enable_reload_cmds(&ctx);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0xb9, 0x01);
-	mipi_dsi_dcs_write_seq_multi(&ctx, 0xff, 0x20);
-	mipi_dsi_dcs_write_seq_multi(&ctx, 0xfb, 0x01);
+
+	nt36523_switch_page(&ctx, 0x20);
+	nt36523_enable_reload_cmds(&ctx);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x18, 0x40);
 
-	mipi_dsi_dcs_write_seq_multi(&ctx, 0xff, 0x10);
-	mipi_dsi_dcs_write_seq_multi(&ctx, 0xfb, 0x01);
+	nt36523_switch_page(&ctx, 0x10);
+	nt36523_enable_reload_cmds(&ctx);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0xb9, 0x02);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x35, 0x00);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x51, 0x00, 0xff);
···
 {
 	struct mipi_dsi_multi_context ctx = { .dsi = boe->dsi };
 
-	mipi_dsi_dcs_write_seq_multi(&ctx, 0xff, 0x20);
-	mipi_dsi_dcs_write_seq_multi(&ctx, 0xfb, 0x01);
+	nt36523_switch_page(&ctx, 0x20);
+	nt36523_enable_reload_cmds(&ctx);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x05, 0xd1);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x06, 0xc0);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x07, 0x87);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x08, 0x4b);
-
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x0d, 0x63);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x0e, 0x91);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x0f, 0x69);
···
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x69, 0x98);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x75, 0xa2);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x77, 0xb3);
-
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x58, 0x43);
-	mipi_dsi_dcs_write_seq_multi(&ctx, 0xff, 0x24);
-	mipi_dsi_dcs_write_seq_multi(&ctx, 0xfb, 0x01);
+
+	nt36523_switch_page(&ctx, 0x24);
+	nt36523_enable_reload_cmds(&ctx);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x91, 0x44);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x92, 0x4c);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x94, 0x86);
···
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x61, 0xd0);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x63, 0x70);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0xc2, 0xca);
-
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x00, 0x03);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x01, 0x03);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x02, 0x03);
···
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x29, 0x04);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x2a, 0x00);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x2b, 0x03);
-
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x2f, 0x0a);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x30, 0x35);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x37, 0xa7);
···
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x3a, 0x46);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x3b, 0x32);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x3d, 0x12);
-
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x3f, 0x33);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x40, 0x31);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x41, 0x40);
···
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x4a, 0x45);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x4b, 0x45);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x4c, 0x14);
-
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x4d, 0x21);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x4e, 0x43);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x4f, 0x65);
···
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x5c, 0x88);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x5e, 0x00, 0x00);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x5f, 0x00);
-
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x7a, 0xff);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x7b, 0xff);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x7c, 0x00);
···
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x82, 0x08);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x97, 0x02);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0xc5, 0x10);
-
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0xd7, 0x55);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0xd8, 0x55);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0xd9, 0x23);
···
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0xb6, 0x05, 0x00, 0x05, 0x00, 0x00, 0x00, 0x00, 0x00,
 				     0x05, 0x05, 0x00, 0x00);
 
-	mipi_dsi_dcs_write_seq_multi(&ctx, 0xff, 0x25);
-
-	mipi_dsi_dcs_write_seq_multi(&ctx, 0xfb, 0x01);
+	nt36523_switch_page(&ctx, 0x25);
+	nt36523_enable_reload_cmds(&ctx);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x05, 0x00);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0xf1, 0x10);
-
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x1e, 0x00);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x1f, 0x46);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x20, 0x32);
-
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x25, 0x00);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x26, 0x46);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x27, 0x32);
-
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x3f, 0x80);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x40, 0x00);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x43, 0x00);
-
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x44, 0x46);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x45, 0x46);
-
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x48, 0x46);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x49, 0x32);
-
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x5b, 0x80);
-
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x5c, 0x00);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x5d, 0x46);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x5e, 0x32);
-
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x5f, 0x46);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x60, 0x32);
-
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x61, 0x46);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x62, 0x32);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x68, 0x0c);
-
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x6c, 0x0d);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x6e, 0x0d);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x78, 0x00);
···
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x7a, 0x0c);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x7b, 0xb0);
 
-	mipi_dsi_dcs_write_seq_multi(&ctx, 0xff, 0x26);
-	mipi_dsi_dcs_write_seq_multi(&ctx, 0xfb, 0x01);
-
+	nt36523_switch_page(&ctx, 0x26);
+	nt36523_enable_reload_cmds(&ctx);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x00, 0xa1);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x02, 0x31);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x0a, 0xf4);
···
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x18, 0x86);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x22, 0x00);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x23, 0x00);
-
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x19, 0x0e);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x1a, 0x31);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x1b, 0x0d);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x1c, 0x29);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x2a, 0x0e);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x2b, 0x31);
-
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x1d, 0x00);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x1e, 0x62);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x1f, 0x62);
-
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x2f, 0x06);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x30, 0x62);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x31, 0x06);
···
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x33, 0x11);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x34, 0x89);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x35, 0x67);
-
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x39, 0x0b);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x3a, 0x62);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x3b, 0x06);
-
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0xc8, 0x04);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0xc9, 0x89);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0xca, 0x4e);
···
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0xaf, 0x39);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0xb0, 0x38);
 
-	mipi_dsi_dcs_write_seq_multi(&ctx, 0xff, 0x27);
-	mipi_dsi_dcs_write_seq_multi(&ctx, 0xfb, 0x01);
-
+	nt36523_switch_page(&ctx, 0x27);
+	nt36523_enable_reload_cmds(&ctx);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0xd0, 0x11);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0xd1, 0x54);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0xde, 0x43);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0xdf, 0x02);
-
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0xc0, 0x18);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0xc1, 0x00);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0xc2, 0x00);
 	mipi_dsi_dcs_write_seq_multi(&ctx, 0x00, 0x00);
mipi_dsi_dcs_write_seq_multi(&ctx, 0xc3, 0x00); 679 725 mipi_dsi_dcs_write_seq_multi(&ctx, 0x56, 0x06); 680 - 681 726 mipi_dsi_dcs_write_seq_multi(&ctx, 0x58, 0x80); 682 727 mipi_dsi_dcs_write_seq_multi(&ctx, 0x59, 0x78); 683 728 mipi_dsi_dcs_write_seq_multi(&ctx, 0x5a, 0x00); ··· 692 743 mipi_dsi_dcs_write_seq_multi(&ctx, 0x66, 0x00); 693 744 mipi_dsi_dcs_write_seq_multi(&ctx, 0x67, 0x01); 694 745 mipi_dsi_dcs_write_seq_multi(&ctx, 0x68, 0x44); 695 - 696 746 mipi_dsi_dcs_write_seq_multi(&ctx, 0x98, 0x01); 697 747 mipi_dsi_dcs_write_seq_multi(&ctx, 0xb4, 0x03); 698 748 mipi_dsi_dcs_write_seq_multi(&ctx, 0x9b, 0xbe); 699 - 700 749 mipi_dsi_dcs_write_seq_multi(&ctx, 0xab, 0x14); 701 750 mipi_dsi_dcs_write_seq_multi(&ctx, 0xbc, 0x08); 702 751 mipi_dsi_dcs_write_seq_multi(&ctx, 0xbd, 0x28); 703 752 704 - mipi_dsi_dcs_write_seq_multi(&ctx, 0xff, 0x2a); 705 - mipi_dsi_dcs_write_seq_multi(&ctx, 0xfb, 0x01); 753 + nt36523_switch_page(&ctx, 0x2a); 754 + nt36523_enable_reload_cmds(&ctx); 706 755 mipi_dsi_dcs_write_seq_multi(&ctx, 0x22, 0x2f); 707 756 mipi_dsi_dcs_write_seq_multi(&ctx, 0x23, 0x08); 708 - 709 757 mipi_dsi_dcs_write_seq_multi(&ctx, 0x24, 0x00); 710 758 mipi_dsi_dcs_write_seq_multi(&ctx, 0x25, 0x62); 711 759 mipi_dsi_dcs_write_seq_multi(&ctx, 0x26, 0xf8); ··· 712 766 mipi_dsi_dcs_write_seq_multi(&ctx, 0x2a, 0x1a); 713 767 mipi_dsi_dcs_write_seq_multi(&ctx, 0x2b, 0x00); 714 768 mipi_dsi_dcs_write_seq_multi(&ctx, 0x2d, 0x1a); 715 - 716 769 mipi_dsi_dcs_write_seq_multi(&ctx, 0x64, 0x96); 717 770 mipi_dsi_dcs_write_seq_multi(&ctx, 0x65, 0x10); 718 771 mipi_dsi_dcs_write_seq_multi(&ctx, 0x66, 0x00); ··· 728 783 mipi_dsi_dcs_write_seq_multi(&ctx, 0x7a, 0x10); 729 784 mipi_dsi_dcs_write_seq_multi(&ctx, 0x88, 0x96); 730 785 mipi_dsi_dcs_write_seq_multi(&ctx, 0x89, 0x10); 731 - 732 786 mipi_dsi_dcs_write_seq_multi(&ctx, 0xa2, 0x3f); 733 787 mipi_dsi_dcs_write_seq_multi(&ctx, 0xa3, 0x30); 734 788 mipi_dsi_dcs_write_seq_multi(&ctx, 0xa4, 0xc0); 735 789 
mipi_dsi_dcs_write_seq_multi(&ctx, 0xa5, 0x03); 736 - 737 790 mipi_dsi_dcs_write_seq_multi(&ctx, 0xe8, 0x00); 738 - 739 791 mipi_dsi_dcs_write_seq_multi(&ctx, 0x97, 0x3c); 740 792 mipi_dsi_dcs_write_seq_multi(&ctx, 0x98, 0x02); 741 793 mipi_dsi_dcs_write_seq_multi(&ctx, 0x99, 0x95); ··· 742 800 mipi_dsi_dcs_write_seq_multi(&ctx, 0x9d, 0x0a); 743 801 mipi_dsi_dcs_write_seq_multi(&ctx, 0x9e, 0x90); 744 802 745 - mipi_dsi_dcs_write_seq_multi(&ctx, 0xff, 0x25); 803 + nt36523_switch_page(&ctx, 0x25); 746 804 mipi_dsi_dcs_write_seq_multi(&ctx, 0x13, 0x02); 747 805 mipi_dsi_dcs_write_seq_multi(&ctx, 0x14, 0xd7); 748 806 mipi_dsi_dcs_write_seq_multi(&ctx, 0xdb, 0x02); ··· 751 809 mipi_dsi_dcs_write_seq_multi(&ctx, 0x19, 0x0f); 752 810 mipi_dsi_dcs_write_seq_multi(&ctx, 0x1b, 0x5b); 753 811 754 - mipi_dsi_dcs_write_seq_multi(&ctx, 0xff, 0x20); 755 - 812 + nt36523_switch_page(&ctx, 0x20); 756 813 mipi_dsi_dcs_write_seq_multi(&ctx, 0xb0, 0x00, 0x00, 0x00, 0x0c, 0x00, 0x24, 0x00, 0x38, 757 814 0x00, 0x4c, 0x00, 0x5e, 0x00, 0x6f, 0x00, 0x7e); 758 815 mipi_dsi_dcs_write_seq_multi(&ctx, 0xb1, 0x00, 0x8c, 0x00, 0xbe, 0x00, 0xe5, 0x01, 0x27, ··· 760 819 0x03, 0x00, 0x03, 0x31, 0x03, 0x40, 0x03, 0x51); 761 820 mipi_dsi_dcs_write_seq_multi(&ctx, 0xb3, 0x03, 0x62, 0x03, 0x75, 0x03, 0x89, 0x03, 0x9c, 762 821 0x03, 0xaa, 0x03, 0xb2); 763 - 764 822 mipi_dsi_dcs_write_seq_multi(&ctx, 0xb4, 0x00, 0x00, 0x00, 0x0d, 0x00, 0x27, 0x00, 0x3d, 765 823 0x00, 0x52, 0x00, 0x64, 0x00, 0x75, 0x00, 0x84); 766 824 mipi_dsi_dcs_write_seq_multi(&ctx, 0xb5, 0x00, 0x93, 0x00, 0xc5, 0x00, 0xec, 0x01, 0x2c, ··· 768 828 0x03, 0x01, 0x03, 0x31, 0x03, 0x41, 0x03, 0x51); 769 829 mipi_dsi_dcs_write_seq_multi(&ctx, 0xb7, 0x03, 0x63, 0x03, 0x75, 0x03, 0x89, 0x03, 0x9c, 770 830 0x03, 0xaa, 0x03, 0xb2); 771 - 772 831 mipi_dsi_dcs_write_seq_multi(&ctx, 0xb8, 0x00, 0x00, 0x00, 0x0e, 0x00, 0x2a, 0x00, 0x40, 773 832 0x00, 0x56, 0x00, 0x68, 0x00, 0x7a, 0x00, 0x89); 774 833 mipi_dsi_dcs_write_seq_multi(&ctx, 0xb9, 0x00, 
0x98, 0x00, 0xc9, 0x00, 0xf1, 0x01, 0x30, ··· 777 838 mipi_dsi_dcs_write_seq_multi(&ctx, 0xbb, 0x03, 0x66, 0x03, 0x75, 0x03, 0x89, 0x03, 0x9c, 778 839 0x03, 0xaa, 0x03, 0xb2); 779 840 780 - mipi_dsi_dcs_write_seq_multi(&ctx, 0xff, 0x21); 841 + nt36523_switch_page(&ctx, 0x21); 781 842 mipi_dsi_dcs_write_seq_multi(&ctx, 0xb0, 0x00, 0x00, 0x00, 0x0c, 0x00, 0x24, 0x00, 0x38, 782 843 0x00, 0x4c, 0x00, 0x5e, 0x00, 0x6f, 0x00, 0x7e); 783 844 mipi_dsi_dcs_write_seq_multi(&ctx, 0xb1, 0x00, 0x8c, 0x00, 0xbe, 0x00, 0xe5, 0x01, 0x27, ··· 786 847 0x03, 0x00, 0x03, 0x31, 0x03, 0x40, 0x03, 0x51); 787 848 mipi_dsi_dcs_write_seq_multi(&ctx, 0xb3, 0x03, 0x62, 0x03, 0x77, 0x03, 0x90, 0x03, 0xac, 788 849 0x03, 0xca, 0x03, 0xda); 789 - 790 850 mipi_dsi_dcs_write_seq_multi(&ctx, 0xb4, 0x00, 0x00, 0x00, 0x0d, 0x00, 0x27, 0x00, 0x3d, 791 851 0x00, 0x52, 0x00, 0x64, 0x00, 0x75, 0x00, 0x84); 792 852 mipi_dsi_dcs_write_seq_multi(&ctx, 0xb5, 0x00, 0x93, 0x00, 0xc5, 0x00, 0xec, 0x01, 0x2c, ··· 794 856 0x03, 0x01, 0x03, 0x31, 0x03, 0x41, 0x03, 0x51); 795 857 mipi_dsi_dcs_write_seq_multi(&ctx, 0xb7, 0x03, 0x63, 0x03, 0x77, 0x03, 0x90, 0x03, 0xac, 796 858 0x03, 0xca, 0x03, 0xda); 797 - 798 859 mipi_dsi_dcs_write_seq_multi(&ctx, 0xb8, 0x00, 0x00, 0x00, 0x0e, 0x00, 0x2a, 0x00, 0x40, 799 860 0x00, 0x56, 0x00, 0x68, 0x00, 0x7a, 0x00, 0x89); 800 861 mipi_dsi_dcs_write_seq_multi(&ctx, 0xb9, 0x00, 0x98, 0x00, 0xc9, 0x00, 0xf1, 0x01, 0x30, ··· 803 866 mipi_dsi_dcs_write_seq_multi(&ctx, 0xbb, 0x03, 0x66, 0x03, 0x77, 0x03, 0x90, 0x03, 0xac, 804 867 0x03, 0xca, 0x03, 0xda); 805 868 806 - mipi_dsi_dcs_write_seq_multi(&ctx, 0xff, 0xf0); 807 - mipi_dsi_dcs_write_seq_multi(&ctx, 0xfb, 0x01); 869 + nt36523_switch_page(&ctx, 0xf0); 870 + nt36523_enable_reload_cmds(&ctx); 808 871 mipi_dsi_dcs_write_seq_multi(&ctx, 0x3a, 0x08); 809 872 810 - mipi_dsi_dcs_write_seq_multi(&ctx, 0xff, 0x10); 873 + nt36523_switch_page(&ctx, 0x10); 811 874 mipi_dsi_dcs_write_seq_multi(&ctx, 0xb9, 0x01); 812 875 813 - 
mipi_dsi_dcs_write_seq_multi(&ctx, 0xff, 0x20); 814 - 876 + nt36523_switch_page(&ctx, 0x20); 815 877 mipi_dsi_dcs_write_seq_multi(&ctx, 0x18, 0x40); 816 - mipi_dsi_dcs_write_seq_multi(&ctx, 0xff, 0x10); 817 878 879 + nt36523_switch_page(&ctx, 0x10); 818 880 mipi_dsi_dcs_write_seq_multi(&ctx, 0xb9, 0x02); 819 - mipi_dsi_dcs_write_seq_multi(&ctx, 0xff, 0x10); 820 881 821 - mipi_dsi_dcs_write_seq_multi(&ctx, 0xfb, 0x01); 882 + nt36523_switch_page(&ctx, 0x10); 883 + nt36523_enable_reload_cmds(&ctx); 822 884 mipi_dsi_dcs_write_seq_multi(&ctx, 0xb0, 0x01); 823 885 mipi_dsi_dcs_write_seq_multi(&ctx, 0x35, 0x00); 824 886 mipi_dsi_dcs_write_seq_multi(&ctx, 0x3b, 0x03, 0xae, 0x1a, 0x04, 0x04);
+26 -9
drivers/gpu/drm/panel/panel-edp.c
··· 954 954 * drm_atomic_helper_shutdown() at shutdown time and that should 955 955 * cause the panel to be disabled / unprepared if needed. For now, 956 956 * however, we'll keep these calls due to the sheer number of 957 - * different DRM modeset drivers used with panel-edp. The fact that 958 - * we're calling these and _also_ the drm_atomic_helper_shutdown() 959 - * will try to disable/unprepare means that we can get a warning about 960 - * trying to disable/unprepare an already disabled/unprepared panel, 961 - * but that's something we'll have to live with until we've confirmed 962 - * that all DRM modeset drivers are properly calling 963 - * drm_atomic_helper_shutdown(). 957 + * different DRM modeset drivers used with panel-edp. Once we've 958 + * confirmed that all DRM modeset drivers using this panel properly 959 + * call drm_atomic_helper_shutdown() we can simply delete the two 960 + * calls below. 961 + * 962 + * TO BE EXPLICIT: THE CALLS BELOW SHOULDN'T BE COPIED TO ANY NEW 963 + * PANEL DRIVERS. 964 + * 965 + * FIXME: If we're still haven't figured out if all DRM modeset 966 + * drivers properly call drm_atomic_helper_shutdown() but we _have_ 967 + * managed to make sure that DRM modeset drivers get their shutdown() 968 + * callback before the panel's shutdown() callback (perhaps using 969 + * device link), we could add a WARN_ON here to help move forward. 
964 970 */ 965 - drm_panel_disable(&panel->base); 966 - drm_panel_unprepare(&panel->base); 971 + if (panel->base.enabled) 972 + drm_panel_disable(&panel->base); 973 + if (panel->base.prepared) 974 + drm_panel_unprepare(&panel->base); 967 975 } 968 976 969 977 static void panel_edp_remove(struct device *dev) ··· 1853 1845 EDP_PANEL_ENTRY('A', 'U', 'O', 0x635c, &delay_200_500_e50, "B116XAN06.3"), 1854 1846 EDP_PANEL_ENTRY('A', 'U', 'O', 0x639c, &delay_200_500_e50, "B140HAK02.7"), 1855 1847 EDP_PANEL_ENTRY('A', 'U', 'O', 0x723c, &delay_200_500_e50, "B140XTN07.2"), 1848 + EDP_PANEL_ENTRY('A', 'U', 'O', 0x73aa, &delay_200_500_e50, "B116XTN02.3"), 1856 1849 EDP_PANEL_ENTRY('A', 'U', 'O', 0x8594, &delay_200_500_e50, "B133UAN01.0"), 1850 + EDP_PANEL_ENTRY('A', 'U', 'O', 0xa199, &delay_200_500_e50, "B116XAN06.1"), 1851 + EDP_PANEL_ENTRY('A', 'U', 'O', 0xc4b4, &delay_200_500_e50, "B116XAT04.1"), 1857 1852 EDP_PANEL_ENTRY('A', 'U', 'O', 0xd497, &delay_200_500_e50, "B120XAN01.0"), 1858 1853 EDP_PANEL_ENTRY('A', 'U', 'O', 0xf390, &delay_200_500_e50, "B140XTN07.7"), 1859 1854 ··· 1902 1891 EDP_PANEL_ENTRY('B', 'O', 'E', 0x09ad, &delay_200_500_e80, "NV116WHM-N47"), 1903 1892 EDP_PANEL_ENTRY('B', 'O', 'E', 0x09ae, &delay_200_500_e200, "NT140FHM-N45"), 1904 1893 EDP_PANEL_ENTRY('B', 'O', 'E', 0x09dd, &delay_200_500_e50, "NT116WHM-N21"), 1894 + EDP_PANEL_ENTRY('B', 'O', 'E', 0x0a1b, &delay_200_500_e50, "NV133WUM-N63"), 1905 1895 EDP_PANEL_ENTRY('B', 'O', 'E', 0x0a36, &delay_200_500_e200, "Unknown"), 1906 1896 EDP_PANEL_ENTRY('B', 'O', 'E', 0x0a3e, &delay_200_500_e80, "NV116WHM-N49"), 1907 1897 EDP_PANEL_ENTRY('B', 'O', 'E', 0x0a5d, &delay_200_500_e50, "NV116WHM-N45"), 1908 1898 EDP_PANEL_ENTRY('B', 'O', 'E', 0x0ac5, &delay_200_500_e50, "NV116WHM-N4C"), 1899 + EDP_PANEL_ENTRY('B', 'O', 'E', 0x0ae8, &delay_200_500_e50_p2e80, "NV140WUM-N41"), 1909 1900 EDP_PANEL_ENTRY('B', 'O', 'E', 0x0b34, &delay_200_500_e80, "NV122WUM-N41"), 1910 1901 EDP_PANEL_ENTRY('B', 'O', 'E', 0x0b43, 
&delay_200_500_e200, "NV140FHM-T09"), 1911 1902 EDP_PANEL_ENTRY('B', 'O', 'E', 0x0b56, &delay_200_500_e80, "NT140FHM-N47"), 1912 1903 EDP_PANEL_ENTRY('B', 'O', 'E', 0x0c20, &delay_200_500_e80, "NT140FHM-N47"), 1913 1904 EDP_PANEL_ENTRY('B', 'O', 'E', 0x0cb6, &delay_200_500_e200, "NT116WHM-N44"), 1905 + EDP_PANEL_ENTRY('B', 'O', 'E', 0x0cfa, &delay_200_500_e50, "NV116WHM-A4D"), 1914 1906 1915 1907 EDP_PANEL_ENTRY('C', 'M', 'N', 0x1130, &delay_200_500_e50, "N116BGE-EB2"), 1916 1908 EDP_PANEL_ENTRY('C', 'M', 'N', 0x1132, &delay_200_500_e80_d50, "N116BGE-EA2"), ··· 1929 1915 EDP_PANEL_ENTRY('C', 'M', 'N', 0x1156, &delay_200_500_e80_d50, "Unknown"), 1930 1916 EDP_PANEL_ENTRY('C', 'M', 'N', 0x1157, &delay_200_500_e80_d50, "N116BGE-EA2"), 1931 1917 EDP_PANEL_ENTRY('C', 'M', 'N', 0x115b, &delay_200_500_e80_d50, "N116BCN-EB1"), 1918 + EDP_PANEL_ENTRY('C', 'M', 'N', 0x115d, &delay_200_500_e80_d50, "N116BCA-EA2"), 1932 1919 EDP_PANEL_ENTRY('C', 'M', 'N', 0x115e, &delay_200_500_e80_d50, "N116BCA-EA1"), 1933 1920 EDP_PANEL_ENTRY('C', 'M', 'N', 0x1160, &delay_200_500_e80_d50, "N116BCJ-EAK"), 1921 + EDP_PANEL_ENTRY('C', 'M', 'N', 0x1161, &delay_200_500_e80, "N116BCP-EA2"), 1934 1922 EDP_PANEL_ENTRY('C', 'M', 'N', 0x1247, &delay_200_500_e80_d50, "N120ACA-EA1"), 1935 1923 EDP_PANEL_ENTRY('C', 'M', 'N', 0x142b, &delay_200_500_e80_d50, "N140HCA-EAC"), 1936 1924 EDP_PANEL_ENTRY('C', 'M', 'N', 0x142e, &delay_200_500_e80_d50, "N140BGA-EA4"), ··· 1945 1929 EDP_PANEL_ENTRY('C', 'S', 'O', 0x1200, &delay_200_500_e50_p2e200, "MNC207QS1-1"), 1946 1930 1947 1931 EDP_PANEL_ENTRY('C', 'S', 'W', 0x1100, &delay_200_500_e80_d50, "MNB601LS1-1"), 1932 + EDP_PANEL_ENTRY('C', 'S', 'W', 0x1104, &delay_200_500_e50, "MNB601LS1-4"), 1948 1933 1949 1934 EDP_PANEL_ENTRY('H', 'K', 'C', 0x2d51, &delay_200_500_e200, "Unknown"), 1950 1935 EDP_PANEL_ENTRY('H', 'K', 'C', 0x2d5b, &delay_200_500_e200, "Unknown"),
+152 -1
drivers/gpu/drm/panel/panel-himax-hx8394.c
··· 339 339 .init_sequence = powkiddy_x55_init_sequence, 340 340 }; 341 341 342 + static int mchp_ac40t08a_init_sequence(struct hx8394 *ctx) 343 + { 344 + struct mipi_dsi_device *dsi = to_mipi_dsi_device(ctx->dev); 345 + 346 + /* DCS commands do not seem to be sent correclty without this delay */ 347 + msleep(20); 348 + 349 + /* 5.19.8 SETEXTC: Set extension command (B9h) */ 350 + mipi_dsi_dcs_write_seq(dsi, HX8394_CMD_SETEXTC, 351 + 0xff, 0x83, 0x94); 352 + 353 + /* 5.19.9 SETMIPI: Set MIPI control (BAh) */ 354 + mipi_dsi_dcs_write_seq(dsi, HX8394_CMD_SETMIPI, 355 + 0x63, 0x03, 0x68, 0x6b, 0xb2, 0xc0); 356 + 357 + /* 5.19.2 SETPOWER: Set power (B1h) */ 358 + mipi_dsi_dcs_write_seq(dsi, HX8394_CMD_SETPOWER, 359 + 0x48, 0x12, 0x72, 0x09, 0x32, 0x54, 360 + 0x71, 0x71, 0x57, 0x47); 361 + 362 + /* 5.19.3 SETDISP: Set display related register (B2h) */ 363 + mipi_dsi_dcs_write_seq(dsi, HX8394_CMD_SETDISP, 364 + 0x00, 0x80, 0x64, 0x0c, 0x0d, 0x2f); 365 + 366 + /* 5.19.4 SETCYC: Set display waveform cycles (B4h) */ 367 + mipi_dsi_dcs_write_seq(dsi, HX8394_CMD_SETCYC, 368 + 0x73, 0x74, 0x73, 0x74, 0x73, 0x74, 369 + 0x01, 0x0c, 0x86, 0x75, 0x00, 0x3f, 370 + 0x73, 0x74, 0x73, 0x74, 0x73, 0x74, 371 + 0x01, 0x0c, 0x86); 372 + 373 + /* 5.19.5 SETVCOM: Set VCOM voltage (B6h) */ 374 + mipi_dsi_dcs_write_seq(dsi, HX8394_CMD_SETVCOM, 375 + 0x6e, 0x6e); 376 + 377 + /* 5.19.19 SETGIP0: Set GIP Option0 (D3h) */ 378 + mipi_dsi_dcs_write_seq(dsi, HX8394_CMD_SETGIP0, 379 + 0x00, 0x00, 0x07, 0x07, 0x40, 0x07, 380 + 0x0c, 0x00, 0x08, 0x10, 0x08, 0x00, 381 + 0x08, 0x54, 0x15, 0x0a, 0x05, 0x0a, 382 + 0x02, 0x15, 0x06, 0x05, 0x06, 0x47, 383 + 0x44, 0x0a, 0x0a, 0x4b, 0x10, 0x07, 384 + 0x07, 0x0c, 0x40); 385 + 386 + /* 5.19.20 Set GIP Option1 (D5h) */ 387 + mipi_dsi_dcs_write_seq(dsi, HX8394_CMD_SETGIP1, 388 + 0x1c, 0x1c, 0x1d, 0x1d, 0x00, 0x01, 389 + 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 390 + 0x08, 0x09, 0x0a, 0x0b, 0x24, 0x25, 391 + 0x18, 0x18, 0x26, 0x27, 0x18, 0x18, 392 + 0x18, 0x18, 0x18, 
0x18, 0x18, 0x18, 393 + 0x18, 0x18, 0x18, 0x18, 0x18, 0x18, 394 + 0x18, 0x18, 0x20, 0x21, 0x18, 0x18, 395 + 0x18, 0x18); 396 + 397 + /* 5.19.21 Set GIP Option2 (D6h) */ 398 + mipi_dsi_dcs_write_seq(dsi, HX8394_CMD_SETGIP2, 399 + 0x1c, 0x1c, 0x1d, 0x1d, 0x07, 0x06, 400 + 0x05, 0x04, 0x03, 0x02, 0x01, 0x00, 401 + 0x0b, 0x0a, 0x09, 0x08, 0x21, 0x20, 402 + 0x18, 0x18, 0x27, 0x26, 0x18, 0x18, 403 + 0x18, 0x18, 0x18, 0x18, 0x18, 0x18, 404 + 0x18, 0x18, 0x18, 0x18, 0x18, 0x18, 405 + 0x18, 0x18, 0x25, 0x24, 0x18, 0x18, 406 + 0x18, 0x18); 407 + 408 + /* 5.19.25 SETGAMMA: Set gamma curve related setting (E0h) */ 409 + mipi_dsi_dcs_write_seq(dsi, HX8394_CMD_SETGAMMA, 410 + 0x00, 0x0a, 0x15, 0x1b, 0x1e, 0x21, 411 + 0x24, 0x22, 0x47, 0x56, 0x65, 0x66, 412 + 0x6e, 0x82, 0x88, 0x8b, 0x9a, 0x9d, 413 + 0x98, 0xa8, 0xb9, 0x5d, 0x5c, 0x61, 414 + 0x66, 0x6a, 0x6f, 0x7f, 0x7f, 0x00, 415 + 0x0a, 0x15, 0x1b, 0x1e, 0x21, 0x24, 416 + 0x22, 0x47, 0x56, 0x65, 0x65, 0x6e, 417 + 0x81, 0x87, 0x8b, 0x98, 0x9d, 0x99, 418 + 0xa8, 0xba, 0x5d, 0x5d, 0x62, 0x67, 419 + 0x6b, 0x72, 0x7f, 0x7f); 420 + 421 + /* Unknown command, not listed in the HX8394-F datasheet (C0H) */ 422 + mipi_dsi_dcs_write_seq(dsi, HX8394_CMD_UNKNOWN1, 423 + 0x1f, 0x73); 424 + 425 + /* Set CABC control (C9h)*/ 426 + mipi_dsi_dcs_write_seq(dsi, HX8394_CMD_SETCABC, 427 + 0x76, 0x00, 0x30); 428 + 429 + /* 5.19.17 SETPANEL (CCh) */ 430 + mipi_dsi_dcs_write_seq(dsi, HX8394_CMD_SETPANEL, 431 + 0x0b); 432 + 433 + /* Unknown command, not listed in the HX8394-F datasheet (D4h) */ 434 + mipi_dsi_dcs_write_seq(dsi, HX8394_CMD_UNKNOWN3, 435 + 0x02); 436 + 437 + /* 5.19.11 Set register bank (BDh) */ 438 + mipi_dsi_dcs_write_seq(dsi, HX8394_CMD_SETREGBANK, 439 + 0x02); 440 + 441 + /* 5.19.11 Set register bank (D8h) */ 442 + mipi_dsi_dcs_write_seq(dsi, HX8394_CMD_UNKNOWN4, 443 + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 444 + 0xff, 0xff, 0xff, 0xff, 0xff, 0xff); 445 + 446 + /* 5.19.11 Set register bank (BDh) */ 447 + mipi_dsi_dcs_write_seq(dsi, 
HX8394_CMD_SETREGBANK, 448 + 0x00); 449 + 450 + /* 5.19.11 Set register bank (BDh) */ 451 + mipi_dsi_dcs_write_seq(dsi, HX8394_CMD_SETREGBANK, 452 + 0x01); 453 + 454 + /* 5.19.2 SETPOWER: Set power (B1h) */ 455 + mipi_dsi_dcs_write_seq(dsi, HX8394_CMD_SETPOWER, 456 + 0x00); 457 + 458 + /* 5.19.11 Set register bank (BDh) */ 459 + mipi_dsi_dcs_write_seq(dsi, HX8394_CMD_SETREGBANK, 460 + 0x00); 461 + 462 + /* Unknown command, not listed in the HX8394-F datasheet (C6h) */ 463 + mipi_dsi_dcs_write_seq(dsi, HX8394_CMD_UNKNOWN2, 464 + 0xed); 465 + 466 + return 0; 467 + } 468 + 469 + static const struct drm_display_mode mchp_ac40t08a_mode = { 470 + .hdisplay = 720, 471 + .hsync_start = 720 + 12, 472 + .hsync_end = 720 + 12 + 24, 473 + .htotal = 720 + 12 + 12 + 24, 474 + .vdisplay = 1280, 475 + .vsync_start = 1280 + 13, 476 + .vsync_end = 1280 + 14, 477 + .vtotal = 1280 + 14 + 13, 478 + .clock = 60226, 479 + .flags = DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC, 480 + .width_mm = 76, 481 + .height_mm = 132, 482 + }; 483 + 484 + static const struct hx8394_panel_desc mchp_ac40t08a_desc = { 485 + .mode = &mchp_ac40t08a_mode, 486 + .lanes = 4, 487 + .mode_flags = MIPI_DSI_MODE_VIDEO | MIPI_DSI_MODE_VIDEO_BURST, 488 + .format = MIPI_DSI_FMT_RGB888, 489 + .init_sequence = mchp_ac40t08a_init_sequence, 490 + }; 491 + 342 492 static int hx8394_enable(struct drm_panel *panel) 343 493 { 344 494 struct hx8394 *ctx = panel_to_hx8394(panel); ··· 636 486 if (!ctx) 637 487 return -ENOMEM; 638 488 639 - ctx->reset_gpio = devm_gpiod_get(dev, "reset", GPIOD_OUT_HIGH); 489 + ctx->reset_gpio = devm_gpiod_get_optional(dev, "reset", GPIOD_OUT_HIGH); 640 490 if (IS_ERR(ctx->reset_gpio)) 641 491 return dev_err_probe(dev, PTR_ERR(ctx->reset_gpio), 642 492 "Failed to get reset gpio\n"); ··· 705 555 static const struct of_device_id hx8394_of_match[] = { 706 556 { .compatible = "hannstar,hsd060bhw4", .data = &hsd060bhw4_desc }, 707 557 { .compatible = "powkiddy,x55-panel", .data = &powkiddy_x55_desc }, 
558 + { .compatible = "microchip,ac40t08a-mipi-panel", .data = &mchp_ac40t08a_desc }, 708 559 { /* sentinel */ } 709 560 }; 710 561 MODULE_DEVICE_TABLE(of, hx8394_of_match);
+165
drivers/gpu/drm/panel/panel-ilitek-ili9806e.c
··· 380 380 .lanes = 2, 381 381 }; 382 382 383 + static void dmt028vghmcmi_1d_init(struct mipi_dsi_multi_context *ctx) 384 + { 385 + mipi_dsi_dcs_write_seq_multi(ctx, 0xff, 0xff, 0x98, 0x06, 0x04, 0x01); 386 + mipi_dsi_dcs_write_seq_multi(ctx, 0x08, 0x10); 387 + mipi_dsi_dcs_write_seq_multi(ctx, 0x21, 0x01); 388 + mipi_dsi_dcs_write_seq_multi(ctx, 0x30, 0x03); 389 + mipi_dsi_dcs_write_seq_multi(ctx, 0x31, 0x00); 390 + mipi_dsi_dcs_write_seq_multi(ctx, 0x60, 0x06); 391 + mipi_dsi_dcs_write_seq_multi(ctx, 0x61, 0x00); 392 + mipi_dsi_dcs_write_seq_multi(ctx, 0x62, 0x07); 393 + mipi_dsi_dcs_write_seq_multi(ctx, 0x63, 0x00); 394 + mipi_dsi_dcs_write_seq_multi(ctx, 0x40, 0x16); 395 + mipi_dsi_dcs_write_seq_multi(ctx, 0x41, 0x44); 396 + mipi_dsi_dcs_write_seq_multi(ctx, 0x42, 0x00); 397 + mipi_dsi_dcs_write_seq_multi(ctx, 0x43, 0x83); 398 + mipi_dsi_dcs_write_seq_multi(ctx, 0x44, 0x89); 399 + mipi_dsi_dcs_write_seq_multi(ctx, 0x45, 0x8a); 400 + mipi_dsi_dcs_write_seq_multi(ctx, 0x46, 0x44); 401 + mipi_dsi_dcs_write_seq_multi(ctx, 0x47, 0x44); 402 + mipi_dsi_dcs_write_seq_multi(ctx, 0x50, 0x78); 403 + mipi_dsi_dcs_write_seq_multi(ctx, 0x51, 0x78); 404 + mipi_dsi_dcs_write_seq_multi(ctx, 0x52, 0x00); 405 + mipi_dsi_dcs_write_seq_multi(ctx, 0x53, 0x6c); 406 + mipi_dsi_dcs_write_seq_multi(ctx, 0x54, 0x00); 407 + mipi_dsi_dcs_write_seq_multi(ctx, 0x55, 0x6c); 408 + mipi_dsi_dcs_write_seq_multi(ctx, 0x56, 0x00); 409 + /* Gamma settings */ 410 + mipi_dsi_dcs_write_seq_multi(ctx, 0xa0, 0x00); 411 + mipi_dsi_dcs_write_seq_multi(ctx, 0xa1, 0x09); 412 + mipi_dsi_dcs_write_seq_multi(ctx, 0xa2, 0x14); 413 + mipi_dsi_dcs_write_seq_multi(ctx, 0xa3, 0x09); 414 + mipi_dsi_dcs_write_seq_multi(ctx, 0xa4, 0x05); 415 + mipi_dsi_dcs_write_seq_multi(ctx, 0xa5, 0x0a); 416 + mipi_dsi_dcs_write_seq_multi(ctx, 0xa6, 0x07); 417 + mipi_dsi_dcs_write_seq_multi(ctx, 0xa7, 0x07); 418 + mipi_dsi_dcs_write_seq_multi(ctx, 0xa8, 0x08); 419 + mipi_dsi_dcs_write_seq_multi(ctx, 0xa9, 0x0b); 420 + 
mipi_dsi_dcs_write_seq_multi(ctx, 0xaa, 0x0c); 421 + mipi_dsi_dcs_write_seq_multi(ctx, 0xab, 0x05); 422 + mipi_dsi_dcs_write_seq_multi(ctx, 0xac, 0x0a); 423 + mipi_dsi_dcs_write_seq_multi(ctx, 0xad, 0x19); 424 + mipi_dsi_dcs_write_seq_multi(ctx, 0xae, 0x0b); 425 + mipi_dsi_dcs_write_seq_multi(ctx, 0xaf, 0x00); 426 + 427 + mipi_dsi_dcs_write_seq_multi(ctx, 0xc0, 0x00); 428 + mipi_dsi_dcs_write_seq_multi(ctx, 0xc1, 0x0c); 429 + mipi_dsi_dcs_write_seq_multi(ctx, 0xc2, 0x14); 430 + mipi_dsi_dcs_write_seq_multi(ctx, 0xc3, 0x11); 431 + mipi_dsi_dcs_write_seq_multi(ctx, 0xc4, 0x05); 432 + mipi_dsi_dcs_write_seq_multi(ctx, 0xc5, 0x0c); 433 + mipi_dsi_dcs_write_seq_multi(ctx, 0xc6, 0x08); 434 + mipi_dsi_dcs_write_seq_multi(ctx, 0xc7, 0x03); 435 + mipi_dsi_dcs_write_seq_multi(ctx, 0xc8, 0x06); 436 + mipi_dsi_dcs_write_seq_multi(ctx, 0xc9, 0x0a); 437 + mipi_dsi_dcs_write_seq_multi(ctx, 0xca, 0x10); 438 + mipi_dsi_dcs_write_seq_multi(ctx, 0xcb, 0x05); 439 + mipi_dsi_dcs_write_seq_multi(ctx, 0xcc, 0x0d); 440 + mipi_dsi_dcs_write_seq_multi(ctx, 0xcd, 0x15); 441 + mipi_dsi_dcs_write_seq_multi(ctx, 0xce, 0x13); 442 + mipi_dsi_dcs_write_seq_multi(ctx, 0xcf, 0x00); 443 + 444 + mipi_dsi_dcs_write_seq_multi(ctx, 0xff, 0xff, 0x98, 0x06, 0x04, 0x07); 445 + mipi_dsi_dcs_write_seq_multi(ctx, 0x17, 0x22); 446 + mipi_dsi_dcs_write_seq_multi(ctx, 0x18, 0x1d); 447 + mipi_dsi_dcs_write_seq_multi(ctx, 0x02, 0x77); 448 + mipi_dsi_dcs_write_seq_multi(ctx, 0xe1, 0x79); 449 + mipi_dsi_dcs_write_seq_multi(ctx, 0x06, 0x13); 450 + 451 + mipi_dsi_dcs_write_seq_multi(ctx, 0xff, 0xff, 0x98, 0x06, 0x04, 0x06); 452 + /* GIP 0 */ 453 + mipi_dsi_dcs_write_seq_multi(ctx, 0x00, 0x21); 454 + mipi_dsi_dcs_write_seq_multi(ctx, 0x01, 0x0a); 455 + mipi_dsi_dcs_write_seq_multi(ctx, 0x02, 0x00); 456 + mipi_dsi_dcs_write_seq_multi(ctx, 0x03, 0x05); 457 + mipi_dsi_dcs_write_seq_multi(ctx, 0x04, 0x01); 458 + mipi_dsi_dcs_write_seq_multi(ctx, 0x05, 0x01); 459 + mipi_dsi_dcs_write_seq_multi(ctx, 0x06, 0x98); 460 + 
mipi_dsi_dcs_write_seq_multi(ctx, 0x07, 0x06); 461 + mipi_dsi_dcs_write_seq_multi(ctx, 0x08, 0x01); 462 + mipi_dsi_dcs_write_seq_multi(ctx, 0x09, 0x00); 463 + mipi_dsi_dcs_write_seq_multi(ctx, 0x0a, 0x00); 464 + mipi_dsi_dcs_write_seq_multi(ctx, 0x0b, 0x00); 465 + mipi_dsi_dcs_write_seq_multi(ctx, 0x0c, 0x01); 466 + mipi_dsi_dcs_write_seq_multi(ctx, 0x0d, 0x01); 467 + mipi_dsi_dcs_write_seq_multi(ctx, 0x0e, 0x00); 468 + mipi_dsi_dcs_write_seq_multi(ctx, 0x0f, 0x00); 469 + mipi_dsi_dcs_write_seq_multi(ctx, 0x10, 0xf7); 470 + mipi_dsi_dcs_write_seq_multi(ctx, 0x11, 0xf0); 471 + mipi_dsi_dcs_write_seq_multi(ctx, 0x12, 0x00); 472 + mipi_dsi_dcs_write_seq_multi(ctx, 0x13, 0x00); 473 + mipi_dsi_dcs_write_seq_multi(ctx, 0x14, 0x00); 474 + mipi_dsi_dcs_write_seq_multi(ctx, 0x15, 0xc0); 475 + mipi_dsi_dcs_write_seq_multi(ctx, 0x16, 0x08); 476 + mipi_dsi_dcs_write_seq_multi(ctx, 0x17, 0x00); 477 + mipi_dsi_dcs_write_seq_multi(ctx, 0x18, 0x00); 478 + mipi_dsi_dcs_write_seq_multi(ctx, 0x19, 0x00); 479 + mipi_dsi_dcs_write_seq_multi(ctx, 0x1a, 0x00); 480 + mipi_dsi_dcs_write_seq_multi(ctx, 0x1b, 0x00); 481 + mipi_dsi_dcs_write_seq_multi(ctx, 0x1c, 0x00); 482 + mipi_dsi_dcs_write_seq_multi(ctx, 0x1d, 0x00); 483 + /* GIP 1 */ 484 + mipi_dsi_dcs_write_seq_multi(ctx, 0x20, 0x01); 485 + mipi_dsi_dcs_write_seq_multi(ctx, 0x21, 0x23); 486 + mipi_dsi_dcs_write_seq_multi(ctx, 0x22, 0x44); 487 + mipi_dsi_dcs_write_seq_multi(ctx, 0x23, 0x67); 488 + mipi_dsi_dcs_write_seq_multi(ctx, 0x24, 0x01); 489 + mipi_dsi_dcs_write_seq_multi(ctx, 0x25, 0x23); 490 + mipi_dsi_dcs_write_seq_multi(ctx, 0x26, 0x45); 491 + mipi_dsi_dcs_write_seq_multi(ctx, 0x27, 0x67); 492 + /* GIP 2 */ 493 + mipi_dsi_dcs_write_seq_multi(ctx, 0x30, 0x01); 494 + mipi_dsi_dcs_write_seq_multi(ctx, 0x31, 0x22); 495 + mipi_dsi_dcs_write_seq_multi(ctx, 0x32, 0x22); 496 + mipi_dsi_dcs_write_seq_multi(ctx, 0x33, 0xbc); 497 + mipi_dsi_dcs_write_seq_multi(ctx, 0x34, 0xad); 498 + mipi_dsi_dcs_write_seq_multi(ctx, 0x35, 0xda); 499 + 
mipi_dsi_dcs_write_seq_multi(ctx, 0x36, 0xcb); 500 + mipi_dsi_dcs_write_seq_multi(ctx, 0x37, 0x22); 501 + mipi_dsi_dcs_write_seq_multi(ctx, 0x38, 0x55); 502 + mipi_dsi_dcs_write_seq_multi(ctx, 0x39, 0x76); 503 + mipi_dsi_dcs_write_seq_multi(ctx, 0x3a, 0x67); 504 + mipi_dsi_dcs_write_seq_multi(ctx, 0x3b, 0x88); 505 + mipi_dsi_dcs_write_seq_multi(ctx, 0x3c, 0x22); 506 + mipi_dsi_dcs_write_seq_multi(ctx, 0x3d, 0x11); 507 + mipi_dsi_dcs_write_seq_multi(ctx, 0x3e, 0x00); 508 + mipi_dsi_dcs_write_seq_multi(ctx, 0x3f, 0x22); 509 + mipi_dsi_dcs_write_seq_multi(ctx, 0x40, 0x22); 510 + 511 + mipi_dsi_dcs_write_seq_multi(ctx, 0x52, 0x10); 512 + mipi_dsi_dcs_write_seq_multi(ctx, 0x53, 0x10); 513 + mipi_dsi_dcs_write_seq_multi(ctx, 0x54, 0x13); 514 + 515 + mipi_dsi_dcs_write_seq_multi(ctx, 0xff, 0xff, 0x98, 0x06, 0x04, 0x00); 516 + }; 517 + 518 + static const struct drm_display_mode dmt028vghmcmi_1d_default_mode = { 519 + .clock = 22000, 520 + 521 + .hdisplay = 480, 522 + .hsync_start = 480 + 20, 523 + .hsync_end = 480 + 20 + 4, 524 + .htotal = 480 + 20 + 4 + 10, 525 + 526 + .vdisplay = 640, 527 + .vsync_start = 640 + 40, 528 + .vsync_end = 640 + 40 + 4, 529 + .vtotal = 640 + 40 + 4 + 20, 530 + 531 + .width_mm = 53, 532 + .height_mm = 79, 533 + 534 + .flags = DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC, 535 + .type = DRM_MODE_TYPE_DRIVER | DRM_MODE_TYPE_PREFERRED, 536 + }; 537 + 538 + static const struct panel_desc dmt028vghmcmi_1d_desc = { 539 + .init_sequence = dmt028vghmcmi_1d_init, 540 + .display_mode = &dmt028vghmcmi_1d_default_mode, 541 + .mode_flags = MIPI_DSI_MODE_VIDEO | MIPI_DSI_MODE_VIDEO_BURST | 542 + MIPI_DSI_MODE_LPM | MIPI_DSI_CLOCK_NON_CONTINUOUS, 543 + .format = MIPI_DSI_FMT_RGB888, 544 + .lanes = 2, 545 + }; 546 + 383 547 static const struct of_device_id ili9806e_of_match[] = { 548 + { .compatible = "densitron,dmt028vghmcmi-1d", .data = &dmt028vghmcmi_1d_desc }, 384 549 { .compatible = "ortustech,com35h3p70ulc", .data = &com35h3p70ulc_desc }, 385 550 { } 386 
551 };
+290 -27
drivers/gpu/drm/panel/panel-jadard-jd9365da-h3.c
··· 48 48 struct gpio_desc *reset; 49 49 }; 50 50 51 + #define JD9365DA_DCS_SWITCH_PAGE 0xe0 52 + 53 + #define jd9365da_switch_page(dsi_ctx, page) \ 54 + mipi_dsi_dcs_write_seq_multi(dsi_ctx, JD9365DA_DCS_SWITCH_PAGE, (page)) 55 + 56 + static void jadard_enable_standard_cmds(struct mipi_dsi_multi_context *dsi_ctx) 57 + { 58 + mipi_dsi_dcs_write_seq_multi(dsi_ctx, 0xe1, 0x93); 59 + mipi_dsi_dcs_write_seq_multi(dsi_ctx, 0xe2, 0x65); 60 + mipi_dsi_dcs_write_seq_multi(dsi_ctx, 0xe3, 0xf8); 61 + mipi_dsi_dcs_write_seq_multi(dsi_ctx, 0x80, 0x03); 62 + } 63 + 51 64 static inline struct jadard *panel_to_jadard(struct drm_panel *panel) 52 65 { 53 66 return container_of(panel, struct jadard, panel); ··· 211 198 { 212 199 struct mipi_dsi_multi_context dsi_ctx = { .dsi = jadard->dsi }; 213 200 214 - mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xE0, 0x00); 215 - mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xE1, 0x93); 216 - mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xE2, 0x65); 217 - mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xE3, 0xF8); 218 - mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x80, 0x03); 219 - mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xE0, 0x01); 201 + jd9365da_switch_page(&dsi_ctx, 0x00); 202 + jadard_enable_standard_cmds(&dsi_ctx); 203 + 204 + jd9365da_switch_page(&dsi_ctx, 0x01); 220 205 mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x00, 0x00); 221 206 mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x01, 0x7E); 222 207 mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x03, 0x00); ··· 287 276 mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x80, 0x37); 288 277 mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x81, 0x23); 289 278 mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x82, 0x10); 290 - mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xE0, 0x02); 279 + 280 + jd9365da_switch_page(&dsi_ctx, 0x02); 291 281 mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x00, 0x47); 292 282 mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x01, 0x47); 293 283 mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x02, 0x45); ··· 372 360 mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x7C, 0x00); 373 
361 mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x7D, 0x03); 374 362 mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x7E, 0x7B); 375 - mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xE0, 0x04); 363 + 364 + jd9365da_switch_page(&dsi_ctx, 0x04); 376 365 mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x00, 0x0E); 377 366 mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x02, 0xB3); 378 367 mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x09, 0x60); 379 368 mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x0E, 0x2A); 380 369 mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x36, 0x59); 381 - mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xE0, 0x00); 370 + 371 + jd9365da_switch_page(&dsi_ctx, 0x00); 382 372 383 373 return dsi_ctx.accum_err; 384 374 }; ··· 412 398 { 413 399 struct mipi_dsi_multi_context dsi_ctx = { .dsi = jadard->dsi }; 414 400 415 - mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xE0, 0x00); 416 - mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xE1, 0x93); 417 - mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xE2, 0x65); 418 - mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xE3, 0xF8); 419 - mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x80, 0x03); 420 - mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xE0, 0x01); 401 + jd9365da_switch_page(&dsi_ctx, 0x00); 402 + jadard_enable_standard_cmds(&dsi_ctx); 403 + 404 + jd9365da_switch_page(&dsi_ctx, 0x01); 421 405 mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x00, 0x00); 422 406 mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x01, 0x3B); 423 407 mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x0C, 0x74); ··· 483 471 mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x80, 0x20); 484 472 mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x81, 0x0F); 485 473 mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x82, 0x00); 486 - mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xE0, 0x02); 474 + 475 + jd9365da_switch_page(&dsi_ctx, 0x02); 487 476 mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x00, 0x02); 488 477 mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x01, 0x02); 489 478 mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x02, 0x00); ··· 597 584 mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x7A, 0x17); 598 585 
mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x7D, 0x14); 599 586 mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x7E, 0x82); 600 - mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xE0, 0x04); 587 + 588 + jd9365da_switch_page(&dsi_ctx, 0x04); 601 589 mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x00, 0x0E); 602 590 mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x02, 0xB3); 603 591 mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x09, 0x61); 604 592 mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x0E, 0x48); 605 - mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xE0, 0x00); 593 + 594 + jd9365da_switch_page(&dsi_ctx, 0x00); 606 595 mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xE6, 0x02); 607 596 mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xE7, 0x0C); 608 597 ··· 638 623 { 639 624 struct mipi_dsi_multi_context dsi_ctx = { .dsi = jadard->dsi }; 640 625 641 - mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xe0, 0x00); 642 - mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xe1, 0x93); 643 - mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xe2, 0x65); 644 - mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xe3, 0xf8); 645 - mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x80, 0x03); 646 - mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xe0, 0x01); 626 + jd9365da_switch_page(&dsi_ctx, 0x00); 627 + jadard_enable_standard_cmds(&dsi_ctx); 628 + 629 + jd9365da_switch_page(&dsi_ctx, 0x01); 647 630 mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x0c, 0x74); 648 631 mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x17, 0x00); 649 632 mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x18, 0xc7); ··· 707 694 mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x80, 0x26); 708 695 mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x81, 0x14); 709 696 mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x82, 0x02); 710 - mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xe0, 0x02); 697 + 698 + jd9365da_switch_page(&dsi_ctx, 0x02); 711 699 mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x00, 0x52); 712 700 mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x01, 0x5f); 713 701 mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x02, 0x5f); ··· 822 808 mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x76, 0x00); 823 
809 mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x77, 0x05); 824 810 mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x78, 0x2a); 825 - mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xe0, 0x04); 811 + 812 + jd9365da_switch_page(&dsi_ctx, 0x04); 826 813 mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x00, 0x0e); 827 814 mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x02, 0xb3); 828 815 mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x09, 0x61); 829 816 mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x0e, 0x48); 830 - mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xe0, 0x00); 817 + 818 + jd9365da_switch_page(&dsi_ctx, 0x00); 831 819 832 820 return dsi_ctx.accum_err; 833 821 }; ··· 855 839 .lanes = 4, 856 840 .format = MIPI_DSI_FMT_RGB888, 857 841 .init = kingdisplay_kd101ne3_init_cmds, 842 + .lp11_before_reset = true, 843 + .reset_before_power_off_vcioo = true, 844 + .vcioo_to_lp11_delay_ms = 5, 845 + .lp11_to_reset_delay_ms = 10, 846 + .exit_sleep_to_display_on_delay_ms = 120, 847 + .display_on_delay_ms = 20, 848 + .backlight_off_to_display_off_delay_ms = 100, 849 + .display_off_to_enter_sleep_delay_ms = 50, 850 + .enter_sleep_to_reset_down_delay_ms = 100, 851 + }; 852 + 853 + static int melfas_lmfbx101117480_init_cmds(struct jadard *jadard) 854 + { 855 + struct mipi_dsi_multi_context dsi_ctx = { .dsi = jadard->dsi }; 856 + 857 + jd9365da_switch_page(&dsi_ctx, 0x00); 858 + jadard_enable_standard_cmds(&dsi_ctx); 859 + 860 + jd9365da_switch_page(&dsi_ctx, 0x01); 861 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x0c, 0x74); 862 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x17, 0x00); 863 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x18, 0xbf); 864 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x19, 0x00); 865 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x1a, 0x00); 866 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x1b, 0xbf); 867 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x1c, 0x00); 868 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x1f, 0x70); 869 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x20, 0x2d); 870 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x21, 
0x2d); 871 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x22, 0x7e); 872 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x24, 0xfe); 873 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x37, 0x19); 874 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x35, 0x28); 875 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x38, 0x05); 876 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x39, 0x08); 877 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x3a, 0x12); 878 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x3c, 0x78); 879 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x3d, 0xff); 880 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x3e, 0xff); 881 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x3f, 0x7f); 882 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x40, 0x06); 883 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x41, 0xa0); 884 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x43, 0x1e); 885 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x44, 0x0b); 886 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x0c, 0x74); 887 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x55, 0x02); 888 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x56, 0x01); 889 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x57, 0x8e); 890 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x58, 0x09); 891 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x59, 0x0a); 892 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x5a, 0x2e); 893 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x5b, 0x1a); 894 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x5c, 0x15); 895 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x5d, 0x7f); 896 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x5e, 0x69); 897 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x5f, 0x59); 898 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x60, 0x4e); 899 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x61, 0x4c); 900 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x62, 0x40); 901 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x63, 0x45); 902 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x64, 0x30); 903 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x65, 0x4a); 904 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x66, 0x49); 905 + 
mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x67, 0x4a); 906 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x68, 0x68); 907 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x69, 0x57); 908 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x6a, 0x5b); 909 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x6b, 0x4e); 910 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x6c, 0x49); 911 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x6d, 0x24); 912 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x6e, 0x12); 913 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x6f, 0x02); 914 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x70, 0x7f); 915 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x71, 0x69); 916 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x72, 0x59); 917 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x73, 0x4e); 918 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x74, 0x4c); 919 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x75, 0x40); 920 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x76, 0x45); 921 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x77, 0x30); 922 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x78, 0x4a); 923 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x79, 0x49); 924 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x7a, 0x4a); 925 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x7b, 0x68); 926 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x7c, 0x57); 927 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x7d, 0x5b); 928 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x7e, 0x4e); 929 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x7f, 0x49); 930 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x80, 0x24); 931 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x81, 0x12); 932 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x82, 0x02); 933 + 934 + jd9365da_switch_page(&dsi_ctx, 0x02); 935 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x00, 0x52); 936 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x01, 0x55); 937 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x02, 0x55); 938 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x03, 0x50); 939 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x04, 0x77); 940 + 
mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x05, 0x57); 941 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x06, 0x55); 942 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x07, 0x4e); 943 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x08, 0x4c); 944 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x09, 0x5f); 945 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x0a, 0x4a); 946 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x0b, 0x48); 947 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x0c, 0x55); 948 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x0d, 0x46); 949 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x0e, 0x44); 950 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x0f, 0x40); 951 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x10, 0x55); 952 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x11, 0x55); 953 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x12, 0x55); 954 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x13, 0x55); 955 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x14, 0x55); 956 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x15, 0x55); 957 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x16, 0x53); 958 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x17, 0x55); 959 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x18, 0x55); 960 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x19, 0x51); 961 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x1a, 0x77); 962 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x1b, 0x57); 963 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x1c, 0x55); 964 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x1d, 0x4f); 965 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x1e, 0x4d); 966 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x1f, 0x5f); 967 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x20, 0x4b); 968 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x21, 0x49); 969 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x22, 0x55); 970 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x23, 0x47); 971 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x24, 0x45); 972 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x25, 0x41); 973 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x26, 0x55); 974 + 
mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x27, 0x55); 975 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x28, 0x55); 976 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x29, 0x55); 977 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x2a, 0x55); 978 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x2b, 0x55); 979 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x2c, 0x13); 980 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x2d, 0x15); 981 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x2e, 0x15); 982 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x2f, 0x01); 983 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x30, 0x37); 984 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x31, 0x17); 985 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x32, 0x15); 986 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x33, 0x0d); 987 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x34, 0x0f); 988 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x35, 0x15); 989 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x36, 0x05); 990 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x37, 0x07); 991 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x38, 0x15); 992 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x39, 0x09); 993 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x3a, 0x0b); 994 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x3b, 0x11); 995 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x3c, 0x15); 996 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x3d, 0x15); 997 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x3e, 0x15); 998 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x3f, 0x15); 999 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x40, 0x15); 1000 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x41, 0x15); 1001 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x42, 0x12); 1002 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x43, 0x15); 1003 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x44, 0x15); 1004 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x45, 0x00); 1005 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x46, 0x37); 1006 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x47, 0x17); 1007 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x48, 0x15); 1008 + 
mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x49, 0x0c); 1009 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x4a, 0x0e); 1010 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x4b, 0x15); 1011 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x4c, 0x04); 1012 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x4d, 0x06); 1013 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x4e, 0x15); 1014 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x4f, 0x08); 1015 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x50, 0x0a); 1016 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x51, 0x10); 1017 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x52, 0x15); 1018 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x53, 0x15); 1019 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x54, 0x15); 1020 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x55, 0x15); 1021 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x56, 0x15); 1022 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x57, 0x15); 1023 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x58, 0x40); 1024 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x5b, 0x10); 1025 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x5c, 0x06); 1026 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x5d, 0x40); 1027 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x5e, 0x00); 1028 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x5f, 0x00); 1029 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x60, 0x40); 1030 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x61, 0x03); 1031 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x62, 0x04); 1032 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x63, 0x6c); 1033 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x64, 0x6c); 1034 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x65, 0x75); 1035 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x66, 0x08); 1036 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x67, 0xb4); 1037 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x68, 0x08); 1038 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x69, 0x6c); 1039 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x6a, 0x6c); 1040 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x6b, 0x0c); 1041 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x6d, 0x00); 
1042 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x6e, 0x00); 1043 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x6f, 0x88); 1044 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x75, 0xbb); 1045 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x76, 0x00); 1046 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x77, 0x05); 1047 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x78, 0x2a); 1048 + 1049 + jd9365da_switch_page(&dsi_ctx, 0x04); 1050 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x02, 0x23); 1051 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x09, 0x11); 1052 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x0e, 0x48); 1053 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x36, 0x49); 1054 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x2b, 0x08); 1055 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x2e, 0x03); 1056 + 1057 + jd9365da_switch_page(&dsi_ctx, 0x00); 1058 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xe6, 0x02); 1059 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xe7, 0x06); 1060 + 1061 + return dsi_ctx.accum_err; 1062 + }; 1063 + 1064 + static const struct jadard_panel_desc melfas_lmfbx101117480_desc = { 1065 + .mode = { 1066 + .clock = (800 + 24 + 24 + 24) * (1280 + 30 + 4 + 8) * 60 / 1000, 1067 + 1068 + .hdisplay = 800, 1069 + .hsync_start = 800 + 24, 1070 + .hsync_end = 800 + 24 + 24, 1071 + .htotal = 800 + 24 + 24 + 24, 1072 + 1073 + .vdisplay = 1280, 1074 + .vsync_start = 1280 + 30, 1075 + .vsync_end = 1280 + 30 + 4, 1076 + .vtotal = 1280 + 30 + 4 + 8, 1077 + 1078 + .width_mm = 135, 1079 + .height_mm = 216, 1080 + .type = DRM_MODE_TYPE_DRIVER | DRM_MODE_TYPE_PREFERRED, 1081 + }, 1082 + .lanes = 4, 1083 + .format = MIPI_DSI_FMT_RGB888, 1084 + .init = melfas_lmfbx101117480_init_cmds, 858 1085 .lp11_before_reset = true, 859 1086 .reset_before_power_off_vcioo = true, 860 1087 .vcioo_to_lp11_delay_ms = 5, ··· 1184 925 { 1185 926 .compatible = "kingdisplay,kd101ne3-40ti", 1186 927 .data = &kingdisplay_kd101ne3_40ti_desc 928 + }, 929 + { 930 + .compatible = "melfas,lmfbx101117480", 931 + .data = &melfas_lmfbx101117480_desc 
1187 932 }, 1188 933 { 1189 934 .compatible = "radxa,display-10hd-ad001",
+44 -25
drivers/gpu/drm/panel/panel-novatek-nt36672e.c
··· 44 44 const struct panel_desc *desc; 45 45 }; 46 46 47 + #define NT36672E_DCS_SWITCH_PAGE 0xff 48 + 49 + #define nt36672e_switch_page(ctx, page) \ 50 + mipi_dsi_dcs_write_seq_multi(ctx, NT36672E_DCS_SWITCH_PAGE, (page)) 51 + 52 + static void nt36672e_enable_reload_cmds(struct mipi_dsi_multi_context *ctx) 53 + { 54 + mipi_dsi_dcs_write_seq_multi(ctx, 0xfb, 0x01); 55 + } 56 + 47 57 static inline struct nt36672e_panel *to_nt36672e_panel(struct drm_panel *panel) 48 58 { 49 59 return container_of(panel, struct nt36672e_panel, panel); ··· 61 51 62 52 static void nt36672e_1080x2408_60hz_init(struct mipi_dsi_multi_context *ctx) 63 53 { 64 - mipi_dsi_dcs_write_seq_multi(ctx, 0xff, 0x10); 65 - mipi_dsi_dcs_write_seq_multi(ctx, 0xfb, 0x01); 54 + nt36672e_switch_page(ctx, 0x10); 55 + nt36672e_enable_reload_cmds(ctx); 66 56 mipi_dsi_dcs_write_seq_multi(ctx, 0xb0, 0x00); 67 57 mipi_dsi_dcs_write_seq_multi(ctx, 0xc0, 0x00); 68 58 mipi_dsi_dcs_write_seq_multi(ctx, 0xc1, 0x89, 0x28, 0x00, 0x08, 0x00, 0xaa, 0x02, 69 59 0x0e, 0x00, 0x2b, 0x00, 0x07, 0x0d, 0xb7, 0x0c, 0xb7); 70 - 71 60 mipi_dsi_dcs_write_seq_multi(ctx, 0xc2, 0x1b, 0xa0); 72 - mipi_dsi_dcs_write_seq_multi(ctx, 0xff, 0x20); 73 - mipi_dsi_dcs_write_seq_multi(ctx, 0xfb, 0x01); 61 + 62 + nt36672e_switch_page(ctx, 0x20); 63 + nt36672e_enable_reload_cmds(ctx); 74 64 mipi_dsi_dcs_write_seq_multi(ctx, 0x01, 0x66); 75 65 mipi_dsi_dcs_write_seq_multi(ctx, 0x06, 0x40); 76 66 mipi_dsi_dcs_write_seq_multi(ctx, 0x07, 0x38); ··· 86 76 mipi_dsi_dcs_write_seq_multi(ctx, 0xf7, 0x54); 87 77 mipi_dsi_dcs_write_seq_multi(ctx, 0xf8, 0x64); 88 78 mipi_dsi_dcs_write_seq_multi(ctx, 0xf9, 0x54); 89 - mipi_dsi_dcs_write_seq_multi(ctx, 0xff, 0x24); 90 - mipi_dsi_dcs_write_seq_multi(ctx, 0xfb, 0x01); 79 + 80 + nt36672e_switch_page(ctx, 0x24); 81 + nt36672e_enable_reload_cmds(ctx); 91 82 mipi_dsi_dcs_write_seq_multi(ctx, 0x01, 0x0f); 92 83 mipi_dsi_dcs_write_seq_multi(ctx, 0x03, 0x0c); 93 84 mipi_dsi_dcs_write_seq_multi(ctx, 0x05, 0x1d); ··· 
150 139 mipi_dsi_dcs_write_seq_multi(ctx, 0xc9, 0x00); 151 140 mipi_dsi_dcs_write_seq_multi(ctx, 0xd9, 0x80); 152 141 mipi_dsi_dcs_write_seq_multi(ctx, 0xe9, 0x02); 153 - mipi_dsi_dcs_write_seq_multi(ctx, 0xff, 0x25); 154 - mipi_dsi_dcs_write_seq_multi(ctx, 0xfb, 0x01); 142 + 143 + nt36672e_switch_page(ctx, 0x25); 144 + nt36672e_enable_reload_cmds(ctx); 155 145 mipi_dsi_dcs_write_seq_multi(ctx, 0x18, 0x22); 156 146 mipi_dsi_dcs_write_seq_multi(ctx, 0x19, 0xe4); 157 147 mipi_dsi_dcs_write_seq_multi(ctx, 0x21, 0x40); ··· 176 164 mipi_dsi_dcs_write_seq_multi(ctx, 0xd7, 0x80); 177 165 mipi_dsi_dcs_write_seq_multi(ctx, 0xef, 0x20); 178 166 mipi_dsi_dcs_write_seq_multi(ctx, 0xf0, 0x84); 179 - mipi_dsi_dcs_write_seq_multi(ctx, 0xff, 0x26); 180 - mipi_dsi_dcs_write_seq_multi(ctx, 0xfb, 0x01); 167 + 168 + nt36672e_switch_page(ctx, 0x26); 169 + nt36672e_enable_reload_cmds(ctx); 181 170 mipi_dsi_dcs_write_seq_multi(ctx, 0x81, 0x0f); 182 171 mipi_dsi_dcs_write_seq_multi(ctx, 0x83, 0x01); 183 172 mipi_dsi_dcs_write_seq_multi(ctx, 0x84, 0x03); ··· 198 185 mipi_dsi_dcs_write_seq_multi(ctx, 0x9c, 0x00); 199 186 mipi_dsi_dcs_write_seq_multi(ctx, 0x9d, 0x00); 200 187 mipi_dsi_dcs_write_seq_multi(ctx, 0x9e, 0x00); 201 - mipi_dsi_dcs_write_seq_multi(ctx, 0xff, 0x27); 202 - mipi_dsi_dcs_write_seq_multi(ctx, 0xfb, 0x01); 188 + 189 + nt36672e_switch_page(ctx, 0x27); 190 + nt36672e_enable_reload_cmds(ctx); 203 191 mipi_dsi_dcs_write_seq_multi(ctx, 0x01, 0x68); 204 192 mipi_dsi_dcs_write_seq_multi(ctx, 0x20, 0x81); 205 193 mipi_dsi_dcs_write_seq_multi(ctx, 0x21, 0x6a); ··· 229 215 mipi_dsi_dcs_write_seq_multi(ctx, 0xe6, 0xd3); 230 216 mipi_dsi_dcs_write_seq_multi(ctx, 0xeb, 0x03); 231 217 mipi_dsi_dcs_write_seq_multi(ctx, 0xec, 0x28); 232 - mipi_dsi_dcs_write_seq_multi(ctx, 0xff, 0x2a); 233 - mipi_dsi_dcs_write_seq_multi(ctx, 0xfb, 0x01); 218 + 219 + nt36672e_switch_page(ctx, 0x2a); 220 + nt36672e_enable_reload_cmds(ctx); 234 221 mipi_dsi_dcs_write_seq_multi(ctx, 0x00, 0x91); 235 222 
mipi_dsi_dcs_write_seq_multi(ctx, 0x03, 0x20); 236 223 mipi_dsi_dcs_write_seq_multi(ctx, 0x07, 0x50); ··· 275 260 mipi_dsi_dcs_write_seq_multi(ctx, 0x8c, 0x7d); 276 261 mipi_dsi_dcs_write_seq_multi(ctx, 0x8d, 0x7d); 277 262 mipi_dsi_dcs_write_seq_multi(ctx, 0x8e, 0x7d); 278 - mipi_dsi_dcs_write_seq_multi(ctx, 0xff, 0x20); 279 - mipi_dsi_dcs_write_seq_multi(ctx, 0xfb, 0x01); 263 + 264 + nt36672e_switch_page(ctx, 0x20); 265 + nt36672e_enable_reload_cmds(ctx); 280 266 mipi_dsi_dcs_write_seq_multi(ctx, 0xb0, 0x00, 0x00, 0x00, 0x17, 0x00, 0x49, 0x00, 281 267 0x6a, 0x00, 0x89, 0x00, 0x9f, 0x00, 0xb6, 0x00, 0xc8); 282 268 mipi_dsi_dcs_write_seq_multi(ctx, 0xb1, 0x00, 0xd9, 0x01, 0x10, 0x01, 0x3a, 0x01, ··· 302 286 0x01, 0x03, 0x1f, 0x03, 0x4a, 0x03, 0x59, 0x03, 0x6a); 303 287 mipi_dsi_dcs_write_seq_multi(ctx, 0xbb, 0x03, 0x7d, 0x03, 0x93, 0x03, 0xab, 0x03, 304 288 0xc8, 0x03, 0xec, 0x03, 0xfe, 0x00, 0x00); 305 - mipi_dsi_dcs_write_seq_multi(ctx, 0xff, 0x21); 306 - mipi_dsi_dcs_write_seq_multi(ctx, 0xfb, 0x01); 289 + 290 + nt36672e_switch_page(ctx, 0x21); 291 + nt36672e_enable_reload_cmds(ctx); 307 292 mipi_dsi_dcs_write_seq_multi(ctx, 0xb0, 0x00, 0x00, 0x00, 0x17, 0x00, 0x49, 0x00, 308 293 0x6a, 0x00, 0x89, 0x00, 0x9f, 0x00, 0xb6, 0x00, 0xc8); 309 294 mipi_dsi_dcs_write_seq_multi(ctx, 0xb1, 0x00, 0xd9, 0x01, 0x10, 0x01, 0x3a, 0x01, ··· 329 312 0x01, 0x03, 0x1f, 0x03, 0x4a, 0x03, 0x59, 0x03, 0x6a); 330 313 mipi_dsi_dcs_write_seq_multi(ctx, 0xbb, 0x03, 0x7d, 0x03, 0x93, 0x03, 0xab, 0x03, 331 314 0xc8, 0x03, 0xec, 0x03, 0xfe, 0x00, 0x00); 332 - mipi_dsi_dcs_write_seq_multi(ctx, 0xff, 0x2c); 333 - mipi_dsi_dcs_write_seq_multi(ctx, 0xfb, 0x01); 315 + 316 + nt36672e_switch_page(ctx, 0x2c); 317 + nt36672e_enable_reload_cmds(ctx); 334 318 mipi_dsi_dcs_write_seq_multi(ctx, 0x61, 0x1f); 335 319 mipi_dsi_dcs_write_seq_multi(ctx, 0x62, 0x1f); 336 320 mipi_dsi_dcs_write_seq_multi(ctx, 0x7e, 0x03); ··· 345 327 mipi_dsi_dcs_write_seq_multi(ctx, 0x56, 0x0f); 346 328 
mipi_dsi_dcs_write_seq_multi(ctx, 0x58, 0x0f); 347 329 mipi_dsi_dcs_write_seq_multi(ctx, 0x59, 0x0f); 348 - mipi_dsi_dcs_write_seq_multi(ctx, 0xff, 0xf0); 349 - mipi_dsi_dcs_write_seq_multi(ctx, 0xfb, 0x01); 330 + 331 + nt36672e_switch_page(ctx, 0xf0); 332 + nt36672e_enable_reload_cmds(ctx); 350 333 mipi_dsi_dcs_write_seq_multi(ctx, 0x5a, 0x00); 351 334 352 - mipi_dsi_dcs_write_seq_multi(ctx, 0xff, 0x10); 353 - mipi_dsi_dcs_write_seq_multi(ctx, 0xfb, 0x01); 335 + nt36672e_switch_page(ctx, 0x10); 336 + nt36672e_enable_reload_cmds(ctx); 354 337 mipi_dsi_dcs_write_seq_multi(ctx, 0x51, 0xff); 355 338 mipi_dsi_dcs_write_seq_multi(ctx, 0x53, 0x24); 356 339 mipi_dsi_dcs_write_seq_multi(ctx, 0x55, 0x01);
+17 -9
drivers/gpu/drm/panel/panel-simple.c
··· 726 726 * drm_atomic_helper_shutdown() at shutdown time and that should 727 727 * cause the panel to be disabled / unprepared if needed. For now, 728 728 * however, we'll keep these calls due to the sheer number of 729 - * different DRM modeset drivers used with panel-simple. The fact that 730 - * we're calling these and _also_ the drm_atomic_helper_shutdown() 731 - * will try to disable/unprepare means that we can get a warning about 732 - * trying to disable/unprepare an already disabled/unprepared panel, 733 - * but that's something we'll have to live with until we've confirmed 734 - * that all DRM modeset drivers are properly calling 735 - * drm_atomic_helper_shutdown(). 729 + * different DRM modeset drivers used with panel-simple. Once we've 730 + * confirmed that all DRM modeset drivers using this panel properly 731 + * call drm_atomic_helper_shutdown() we can simply delete the two 732 + * calls below. 733 + * 734 + * TO BE EXPLICIT: THE CALLS BELOW SHOULDN'T BE COPIED TO ANY NEW 735 + * PANEL DRIVERS. 736 + * 737 + * FIXME: If we still haven't figured out whether all DRM modeset 738 + * drivers properly call drm_atomic_helper_shutdown() but we _have_ 739 + * managed to make sure that DRM modeset drivers get their shutdown() 740 + * callback before the panel's shutdown() callback (perhaps using 741 + * device links), we could add a WARN_ON here to help move forward. 736 742 */ 737 - drm_panel_disable(&panel->base); 738 - drm_panel_unprepare(&panel->base); 743 + if (panel->base.enabled) 744 + drm_panel_disable(&panel->base); 745 + if (panel->base.prepared) 746 + drm_panel_unprepare(&panel->base); 739 747 } 740 748 741 749 static void panel_simple_remove(struct device *dev)
+21 -8
drivers/gpu/drm/panel/panel-sony-tulip-truly-nt35521.c
··· 25 25 struct gpio_desc *blen_gpio; 26 26 }; 27 27 28 + #define NT35521_DCS_SWITCH_PAGE 0xf0 29 + 30 + #define nt35521_switch_page(dsi_ctx, page) \ 31 + mipi_dsi_dcs_write_seq_multi(dsi_ctx, NT35521_DCS_SWITCH_PAGE, \ 32 + 0x55, 0xaa, 0x52, 0x08, (page)) 33 + 28 34 static inline 29 35 struct truly_nt35521 *to_truly_nt35521(struct drm_panel *panel) 30 36 { ··· 54 48 55 49 dsi->mode_flags |= MIPI_DSI_MODE_LPM; 56 50 57 - mipi_dsi_generic_write_seq_multi(&dsi_ctx, 0xf0, 0x55, 0xaa, 0x52, 0x08, 0x00); 51 + nt35521_switch_page(&dsi_ctx, 0x00); 58 52 mipi_dsi_generic_write_seq_multi(&dsi_ctx, 0xff, 0xaa, 0x55, 0xa5, 0x80); 59 53 mipi_dsi_generic_write_seq_multi(&dsi_ctx, 0x6f, 0x11, 0x00); 60 54 mipi_dsi_generic_write_seq_multi(&dsi_ctx, 0xf7, 0x20, 0x00); ··· 65 59 mipi_dsi_generic_write_seq_multi(&dsi_ctx, 0xbb, 0x11, 0x11); 66 60 mipi_dsi_generic_write_seq_multi(&dsi_ctx, 0xbc, 0x00, 0x00); 67 61 mipi_dsi_generic_write_seq_multi(&dsi_ctx, 0xb6, 0x02); 68 - mipi_dsi_generic_write_seq_multi(&dsi_ctx, 0xf0, 0x55, 0xaa, 0x52, 0x08, 0x01); 62 + 63 + nt35521_switch_page(&dsi_ctx, 0x01); 69 64 mipi_dsi_generic_write_seq_multi(&dsi_ctx, 0xb0, 0x09, 0x09); 70 65 mipi_dsi_generic_write_seq_multi(&dsi_ctx, 0xb1, 0x09, 0x09); 71 66 mipi_dsi_generic_write_seq_multi(&dsi_ctx, 0xbc, 0x8c, 0x00); ··· 78 71 mipi_dsi_generic_write_seq_multi(&dsi_ctx, 0xb4, 0x25, 0x25); 79 72 mipi_dsi_generic_write_seq_multi(&dsi_ctx, 0xb9, 0x43, 0x43); 80 73 mipi_dsi_generic_write_seq_multi(&dsi_ctx, 0xba, 0x24, 0x24); 81 - mipi_dsi_generic_write_seq_multi(&dsi_ctx, 0xf0, 0x55, 0xaa, 0x52, 0x08, 0x02); 74 + 75 + nt35521_switch_page(&dsi_ctx, 0x02); 82 76 mipi_dsi_generic_write_seq_multi(&dsi_ctx, 0xee, 0x03); 83 77 mipi_dsi_generic_write_seq_multi(&dsi_ctx, 0xb0, 84 78 0x00, 0xb2, 0x00, 0xb3, 0x00, 0xb6, 0x00, 0xc3, ··· 111 103 0x02, 0x93, 0x02, 0xcd, 0x02, 0xf6, 0x03, 0x31, 112 104 0x03, 0x6c, 0x03, 0xe9, 0x03, 0xef, 0x03, 0xf4); 113 105 mipi_dsi_generic_write_seq_multi(&dsi_ctx, 0xbb, 0x03, 0xf6, 
0x03, 0xf7); 114 - mipi_dsi_generic_write_seq_multi(&dsi_ctx, 0xf0, 0x55, 0xaa, 0x52, 0x08, 0x03); 106 + 107 + nt35521_switch_page(&dsi_ctx, 0x03); 115 108 mipi_dsi_generic_write_seq_multi(&dsi_ctx, 0xb0, 0x22, 0x00); 116 109 mipi_dsi_generic_write_seq_multi(&dsi_ctx, 0xb1, 0x22, 0x00); 117 110 mipi_dsi_generic_write_seq_multi(&dsi_ctx, 0xb2, 0x05, 0x00, 0x60, 0x00, 0x00); ··· 131 122 mipi_dsi_generic_write_seq_multi(&dsi_ctx, 0xc5, 0xc0); 132 123 mipi_dsi_generic_write_seq_multi(&dsi_ctx, 0xc6, 0x00); 133 124 mipi_dsi_generic_write_seq_multi(&dsi_ctx, 0xc7, 0x00); 134 - mipi_dsi_generic_write_seq_multi(&dsi_ctx, 0xf0, 0x55, 0xaa, 0x52, 0x08, 0x05); 125 + 126 + nt35521_switch_page(&dsi_ctx, 0x05); 135 127 mipi_dsi_generic_write_seq_multi(&dsi_ctx, 0xb0, 0x17, 0x06); 136 128 mipi_dsi_generic_write_seq_multi(&dsi_ctx, 0xb1, 0x17, 0x06); 137 129 mipi_dsi_generic_write_seq_multi(&dsi_ctx, 0xb2, 0x17, 0x06); ··· 188 178 mipi_dsi_generic_write_seq_multi(&dsi_ctx, 0xeb, 0x00); 189 179 mipi_dsi_generic_write_seq_multi(&dsi_ctx, 0xec, 0x00); 190 180 mipi_dsi_generic_write_seq_multi(&dsi_ctx, 0xed, 0x30); 191 - mipi_dsi_generic_write_seq_multi(&dsi_ctx, 0xf0, 0x55, 0xaa, 0x52, 0x08, 0x06); 181 + 182 + nt35521_switch_page(&dsi_ctx, 0x06); 192 183 mipi_dsi_generic_write_seq_multi(&dsi_ctx, 0xb0, 0x31, 0x31); 193 184 mipi_dsi_generic_write_seq_multi(&dsi_ctx, 0xb1, 0x31, 0x31); 194 185 mipi_dsi_generic_write_seq_multi(&dsi_ctx, 0xb2, 0x2d, 0x2e); ··· 246 235 mipi_dsi_generic_write_seq_multi(&dsi_ctx, 0x6f, 0x11); 247 236 mipi_dsi_generic_write_seq_multi(&dsi_ctx, 0xf3, 0x01); 248 237 mipi_dsi_generic_write_seq_multi(&dsi_ctx, 0x35, 0x00); 249 - mipi_dsi_generic_write_seq_multi(&dsi_ctx, 0xf0, 0x55, 0xaa, 0x52, 0x08, 0x00); 238 + 239 + nt35521_switch_page(&dsi_ctx, 0x00); 250 240 mipi_dsi_generic_write_seq_multi(&dsi_ctx, 0xd9, 0x02, 0x03, 0x00); 251 241 mipi_dsi_generic_write_seq_multi(&dsi_ctx, 0xf0, 0x55, 0xaa, 0x52, 0x00, 0x00); 252 - 
mipi_dsi_generic_write_seq_multi(&dsi_ctx, 0xf0, 0x55, 0xaa, 0x52, 0x08, 0x00); 242 + 243 + nt35521_switch_page(&dsi_ctx, 0x00); 253 244 mipi_dsi_generic_write_seq_multi(&dsi_ctx, 0xb1, 0x6c, 0x21); 254 245 mipi_dsi_generic_write_seq_multi(&dsi_ctx, 0xf0, 0x55, 0xaa, 0x52, 0x00, 0x00); 255 246 mipi_dsi_generic_write_seq_multi(&dsi_ctx, 0x35, 0x00);
+1 -1
drivers/gpu/drm/panfrost/panfrost_job.c
··· 727 727 728 728 /* Restart the schedulers */ 729 729 for (i = 0; i < NUM_JOB_SLOTS; i++) 730 - drm_sched_start(&pfdev->js->queue[i].sched, true); 730 + drm_sched_start(&pfdev->js->queue[i].sched); 731 731 732 732 /* Re-enable job interrupts now that everything has been restarted. */ 733 733 job_write(pfdev, JOB_INT_MASK,
+1 -1
drivers/gpu/drm/panthor/panthor_mmu.c
··· 827 827 828 828 static void panthor_vm_start(struct panthor_vm *vm) 829 829 { 830 - drm_sched_start(&vm->sched, true); 830 + drm_sched_start(&vm->sched); 831 831 } 832 832 833 833 /**
+1 -1
drivers/gpu/drm/panthor/panthor_sched.c
··· 2538 2538 list_for_each_entry(job, &queue->scheduler.pending_list, base.list) 2539 2539 job->base.s_fence->parent = dma_fence_get(job->done_fence); 2540 2540 2541 - drm_sched_start(&queue->scheduler, true); 2541 + drm_sched_start(&queue->scheduler); 2542 2542 } 2543 2543 2544 2544 static void panthor_group_stop(struct panthor_group *group)
+9 -18
drivers/gpu/drm/scheduler/sched_main.c
··· 674 674 * drm_sched_start - recover jobs after a reset 675 675 * 676 676 * @sched: scheduler instance 677 - * @full_recovery: proceed with complete sched restart 678 677 * 679 678 */ 680 - void drm_sched_start(struct drm_gpu_scheduler *sched, bool full_recovery) 679 + void drm_sched_start(struct drm_gpu_scheduler *sched) 681 680 { 682 681 struct drm_sched_job *s_job, *tmp; 683 - int r; 684 682 685 683 /* 686 684 * Locking the list is not required here as the sched thread is parked ··· 690 692 691 693 atomic_add(s_job->credits, &sched->credit_count); 692 694 693 - if (!full_recovery) 694 - continue; 695 - 696 - if (fence) { 697 - r = dma_fence_add_callback(fence, &s_job->cb, 698 - drm_sched_job_done_cb); 699 - if (r == -ENOENT) 700 - drm_sched_job_done(s_job, fence->error); 701 - else if (r) 702 - DRM_DEV_ERROR(sched->dev, "fence add callback failed (%d)\n", 703 - r); 704 - } else 695 + if (!fence) { 705 696 drm_sched_job_done(s_job, -ECANCELED); 697 + continue; 698 + } 699 + 700 + if (dma_fence_add_callback(fence, &s_job->cb, 701 + drm_sched_job_done_cb)) 702 + drm_sched_job_done(s_job, fence->error); 706 703 } 707 704 708 - if (full_recovery) 709 - drm_sched_start_timeout_unlocked(sched); 710 - 705 + drm_sched_start_timeout_unlocked(sched); 711 706 drm_sched_wqueue_start(sched); 712 707 } 713 708 EXPORT_SYMBOL(drm_sched_start);
-1
drivers/gpu/drm/sti/sti_dvo.c
···
 struct platform_driver sti_dvo_driver = {
 	.driver = {
 		.name = "sti-dvo",
-		.owner = THIS_MODULE,
 		.of_match_table = dvo_of_match,
 	},
 	.probe = sti_dvo_probe,
-1
drivers/gpu/drm/sti/sti_hda.c
···
 struct platform_driver sti_hda_driver = {
 	.driver = {
 		.name = "sti-hda",
-		.owner = THIS_MODULE,
 		.of_match_table = hda_of_match,
 	},
 	.probe = sti_hda_probe,
-1
drivers/gpu/drm/sti/sti_hdmi.c
···
 struct platform_driver sti_hdmi_driver = {
 	.driver = {
 		.name = "sti-hdmi",
-		.owner = THIS_MODULE,
 		.of_match_table = hdmi_of_match,
 	},
 	.probe = sti_hdmi_probe,
-1
drivers/gpu/drm/sti/sti_hqvdp.c
···
 struct platform_driver sti_hqvdp_driver = {
 	.driver = {
 		.name = "sti-hqvdp",
-		.owner = THIS_MODULE,
 		.of_match_table = hqvdp_of_match,
 	},
 	.probe = sti_hqvdp_probe,
-1
drivers/gpu/drm/sti/sti_tvout.c
···
 struct platform_driver sti_tvout_driver = {
 	.driver = {
 		.name = "sti-tvout",
-		.owner = THIS_MODULE,
 		.of_match_table = tvout_of_match,
 	},
 	.probe = sti_tvout_probe,
-1
drivers/gpu/drm/sti/sti_vtg.c
···
 struct platform_driver sti_vtg_driver = {
 	.driver = {
 		.name = "sti-vtg",
-		.owner = THIS_MODULE,
 		.of_match_table = vtg_of_match,
 	},
 	.probe = vtg_probe,
+1
drivers/gpu/drm/stm/Kconfig
···
 config DRM_STM
 	tristate "DRM Support for STMicroelectronics SoC Series"
 	depends on DRM && (ARCH_STM32 || COMPILE_TEST)
+	depends on COMMON_CLK
 	select DRM_KMS_HELPER
 	select DRM_GEM_DMA_HELPER
 	select DRM_PANEL_BRIDGE
+5 -2
drivers/gpu/drm/stm/drv.c
···
 #include <drm/drm_module.h>
 #include <drm/drm_probe_helper.h>
 #include <drm/drm_vblank.h>
+#include <drm/drm_managed.h>
 
 #include "ltdc.h"
 
···
 
 	DRM_DEBUG("%s\n", __func__);
 
-	ldev = devm_kzalloc(ddev->dev, sizeof(*ldev), GFP_KERNEL);
+	ldev = drmm_kzalloc(ddev, sizeof(*ldev), GFP_KERNEL);
 	if (!ldev)
 		return -ENOMEM;
 
···
 
 	ret = drm_dev_register(ddev, 0);
 	if (ret)
-		goto err_put;
+		goto err_unload;
 
 	drm_fbdev_dma_setup(ddev, 16);
 
 	return 0;
 
+err_unload:
+	drv_unload(ddev);
 err_put:
 	drm_dev_put(ddev);
 
+33 -74
drivers/gpu/drm/stm/ltdc.c
···
 #include <drm/drm_probe_helper.h>
 #include <drm/drm_simple_kms_helper.h>
 #include <drm/drm_vblank.h>
+#include <drm/drm_managed.h>
 
 #include <video/videomode.h>
 
···
 #define IER_RRIE	BIT(3)	/* Register Reload Interrupt Enable */
 #define IER_FUEIE	BIT(6)	/* Fifo Underrun Error Interrupt Enable */
 #define IER_CRCIE	BIT(7)	/* CRC Error Interrupt Enable */
+#define IER_MASK	(IER_LIE | IER_FUWIE | IER_TERRIE | IER_RRIE | IER_FUEIE | IER_CRCIE)
 
 #define CPSR_CYPOS	GENMASK(15, 0)	/* Current Y position */
 
···
 #define LXCR_COLKEN	BIT(1)	/* Color Keying Enable */
 #define LXCR_CLUTEN	BIT(4)	/* Color Look-Up Table ENable */
 #define LXCR_HMEN	BIT(8)	/* Horizontal Mirroring ENable */
+#define LXCR_MASK	(LXCR_LEN | LXCR_COLKEN | LXCR_CLUTEN | LXCR_HMEN)
 
 #define LXWHPCR_WHSTPOS	GENMASK(11, 0)	/* Window Horizontal StarT POSition */
 #define LXWHPCR_WHSPPOS	GENMASK(27, 16)	/* Window Horizontal StoP POSition */
···
 	return (struct ltdc_device *)plane->dev->dev_private;
 }
 
-static inline struct ltdc_device *encoder_to_ltdc(struct drm_encoder *enc)
-{
-	return (struct ltdc_device *)enc->dev->dev_private;
-}
-
 static inline enum ltdc_pix_fmt to_ltdc_pixelformat(u32 drm_fmt)
 {
 	enum ltdc_pix_fmt pf;
···
 	regmap_write(ldev->regmap, LTDC_BCCR, BCCR_BCBLACK);
 
 	/* Enable IRQ */
-	regmap_set_bits(ldev->regmap, LTDC_IER, IER_FUWIE | IER_FUEIE | IER_RRIE | IER_TERRIE);
+	regmap_set_bits(ldev->regmap, LTDC_IER, IER_FUWIE | IER_FUEIE | IER_TERRIE);
 
 	/* Commit shadow registers = update planes at next vblank */
 	if (!ldev->caps.plane_reg_shadow)
···
 
 	/* Disable all layers */
 	for (layer_index = 0; layer_index < ldev->caps.nb_layers; layer_index++)
-		regmap_write_bits(ldev->regmap, LTDC_L1CR + layer_index * LAY_OFS,
-				  LXCR_CLUTEN | LXCR_LEN, 0);
+		regmap_write_bits(ldev->regmap, LTDC_L1CR + layer_index * LAY_OFS, LXCR_MASK, 0);
 
-	/* disable IRQ */
-	regmap_clear_bits(ldev->regmap, LTDC_IER, IER_FUWIE | IER_FUEIE | IER_RRIE | IER_TERRIE);
+	/* Disable IRQ */
+	regmap_clear_bits(ldev->regmap, LTDC_IER, IER_FUWIE | IER_FUEIE | IER_TERRIE);
 
 	/* immediately commit disable of layers before switching off LTDC */
 	if (!ldev->caps.plane_reg_shadow)
···
 }
 
 static const struct drm_crtc_funcs ltdc_crtc_funcs = {
-	.destroy = drm_crtc_cleanup,
 	.set_config = drm_atomic_helper_set_config,
 	.page_flip = drm_atomic_helper_page_flip,
 	.reset = drm_atomic_helper_crtc_reset,
···
 };
 
 static const struct drm_crtc_funcs ltdc_crtc_with_crc_support_funcs = {
-	.destroy = drm_crtc_cleanup,
 	.set_config = drm_atomic_helper_set_config,
 	.page_flip = drm_atomic_helper_page_flip,
 	.reset = drm_atomic_helper_crtc_reset,
···
 	if (newstate->rotation & DRM_MODE_REFLECT_X)
 		val |= LXCR_HMEN;
 
-	regmap_write_bits(ldev->regmap, LTDC_L1CR + lofs, LXCR_LEN | LXCR_CLUTEN | LXCR_HMEN, val);
+	regmap_write_bits(ldev->regmap, LTDC_L1CR + lofs, LXCR_MASK, val);
 
 	/* Commit shadow registers = update plane at next vblank */
 	if (ldev->caps.plane_reg_shadow)
···
 	u32 lofs = plane->index * LAY_OFS;
 
 	/* Disable layer */
-	regmap_write_bits(ldev->regmap, LTDC_L1CR + lofs, LXCR_LEN | LXCR_CLUTEN | LXCR_HMEN, 0);
+	regmap_write_bits(ldev->regmap, LTDC_L1CR + lofs, LXCR_MASK, 0);
+
+	/* Reset the layer transparency to hide any related background color */
+	regmap_write_bits(ldev->regmap, LTDC_L1CACR + lofs, LXCACR_CONSTA, 0x00);
 
 	/* Commit shadow registers = update plane at next vblank */
 	if (ldev->caps.plane_reg_shadow)
···
 static const struct drm_plane_funcs ltdc_plane_funcs = {
 	.update_plane = drm_atomic_helper_update_plane,
 	.disable_plane = drm_atomic_helper_disable_plane,
-	.destroy = drm_plane_cleanup,
 	.reset = drm_atomic_helper_plane_reset,
 	.atomic_duplicate_state = drm_atomic_helper_plane_duplicate_state,
 	.atomic_destroy_state = drm_atomic_helper_plane_destroy_state,
···
 	const u64 *modifiers = ltdc_format_modifiers;
 	u32 lofs = index * LAY_OFS;
 	u32 val;
-	int ret;
 
 	/* Allocate the biggest size according to supported color formats */
 	formats = devm_kzalloc(dev, (ldev->caps.pix_fmt_nb +
···
 			     ARRAY_SIZE(ltdc_drm_fmt_ycbcr_sp) +
 			     ARRAY_SIZE(ltdc_drm_fmt_ycbcr_fp)) *
 			     sizeof(*formats), GFP_KERNEL);
+	if (!formats)
+		return NULL;
 
 	for (i = 0; i < ldev->caps.pix_fmt_nb; i++) {
 		drm_fmt = ldev->caps.pix_fmt_drm[i];
···
 		}
 	}
 
-	plane = devm_kzalloc(dev, sizeof(*plane), GFP_KERNEL);
-	if (!plane)
-		return NULL;
-
-	ret = drm_universal_plane_init(ddev, plane, possible_crtcs,
-				       &ltdc_plane_funcs, formats, nb_fmt,
-				       modifiers, type, NULL);
-	if (ret < 0)
+	plane = drmm_universal_plane_alloc(ddev, struct drm_plane, dev,
+					   possible_crtcs, &ltdc_plane_funcs, formats,
+					   nb_fmt, modifiers, type, NULL);
+	if (IS_ERR(plane))
 		return NULL;
 
 	if (ldev->caps.ycbcr_input) {
···
 	DRM_DEBUG_DRIVER("plane:%d created\n", plane->base.id);
 
 	return plane;
-}
-
-static void ltdc_plane_destroy_all(struct drm_device *ddev)
-{
-	struct drm_plane *plane, *plane_temp;
-
-	list_for_each_entry_safe(plane, plane_temp,
-				 &ddev->mode_config.plane_list, head)
-		drm_plane_cleanup(plane);
 }
 
 static int ltdc_crtc_init(struct drm_device *ddev, struct drm_crtc *crtc)
···
 
 	/* Init CRTC according to its hardware features */
 	if (ldev->caps.crc)
-		ret = drm_crtc_init_with_planes(ddev, crtc, primary, NULL,
-						&ltdc_crtc_with_crc_support_funcs, NULL);
+		ret = drmm_crtc_init_with_planes(ddev, crtc, primary, NULL,
+						 &ltdc_crtc_with_crc_support_funcs, NULL);
 	else
-		ret = drm_crtc_init_with_planes(ddev, crtc, primary, NULL,
-						&ltdc_crtc_funcs, NULL);
+		ret = drmm_crtc_init_with_planes(ddev, crtc, primary, NULL,
+						 &ltdc_crtc_funcs, NULL);
 	if (ret) {
 		DRM_ERROR("Can not initialize CRTC\n");
-		goto cleanup;
+		return ret;
 	}
 
 	drm_crtc_helper_add(crtc, &ltdc_crtc_helper_funcs);
···
 	for (i = 1; i < ldev->caps.nb_layers; i++) {
 		overlay = ltdc_plane_create(ddev, DRM_PLANE_TYPE_OVERLAY, i);
 		if (!overlay) {
-			ret = -ENOMEM;
 			DRM_ERROR("Can not create overlay plane %d\n", i);
-			goto cleanup;
+			return -ENOMEM;
 		}
 		if (ldev->caps.dynamic_zorder)
 			drm_plane_create_zpos_property(overlay, i, 0, ldev->caps.nb_layers - 1);
···
 	}
 
 	return 0;
-
-cleanup:
-	ltdc_plane_destroy_all(ddev);
-	return ret;
 }
 
 static void ltdc_encoder_disable(struct drm_encoder *encoder)
···
 	struct drm_encoder *encoder;
 	int ret;
 
-	encoder = devm_kzalloc(ddev->dev, sizeof(*encoder), GFP_KERNEL);
-	if (!encoder)
-		return -ENOMEM;
+	encoder = drmm_simple_encoder_alloc(ddev, struct drm_encoder, dev,
+					    DRM_MODE_ENCODER_DPI);
+	if (IS_ERR(encoder))
+		return PTR_ERR(encoder);
 
 	encoder->possible_crtcs = CRTC_MASK;
 	encoder->possible_clones = 0;	/* No cloning support */
 
-	drm_simple_encoder_init(ddev, encoder, DRM_MODE_ENCODER_DPI);
-
 	drm_encoder_helper_add(encoder, &ltdc_encoder_helper_funcs);
 
 	ret = drm_bridge_attach(encoder, bridge, NULL, 0);
-	if (ret) {
-		if (ret != -EPROBE_DEFER)
-			drm_encoder_cleanup(encoder);
+	if (ret)
 		return ret;
-	}
 
 	DRM_DEBUG_DRIVER("Bridge encoder:%d created\n", encoder->base.id);
 
···
 		goto err;
 
 	if (panel) {
-		bridge = drm_panel_bridge_add_typed(panel,
-						    DRM_MODE_CONNECTOR_DPI);
+		bridge = drmm_panel_bridge_add(ddev, panel);
 		if (IS_ERR(bridge)) {
 			DRM_ERROR("panel-bridge endpoint %d\n", i);
 			ret = PTR_ERR(bridge);
···
 		goto err;
 	}
 
-	/* Disable interrupts */
-	if (ldev->caps.fifo_threshold)
-		regmap_clear_bits(ldev->regmap, LTDC_IER, IER_LIE | IER_RRIE | IER_FUWIE |
-				  IER_TERRIE);
-	else
-		regmap_clear_bits(ldev->regmap, LTDC_IER, IER_LIE | IER_RRIE | IER_FUWIE |
-				  IER_TERRIE | IER_FUEIE);
+	/* Disable all interrupts */
+	regmap_clear_bits(ldev->regmap, LTDC_IER, IER_MASK);
 
 	DRM_DEBUG_DRIVER("ltdc hw version 0x%08x\n", ldev->caps.hw_version);
 
···
 		}
 	}
 
-	crtc = devm_kzalloc(dev, sizeof(*crtc), GFP_KERNEL);
+	crtc = drmm_kzalloc(ddev, sizeof(*crtc), GFP_KERNEL);
 	if (!crtc) {
 		DRM_ERROR("Failed to allocate crtc\n");
 		ret = -ENOMEM;
···
 
 	return 0;
 err:
-	for (i = 0; i < nb_endpoints; i++)
-		drm_of_panel_bridge_remove(ddev->dev->of_node, 0, i);
-
 	clk_disable_unprepare(ldev->pixel_clk);
 
 	return ret;
···
 
 void ltdc_unload(struct drm_device *ddev)
 {
-	struct device *dev = ddev->dev;
-	int nb_endpoints, i;
-
 	DRM_DEBUG_DRIVER("\n");
-
-	nb_endpoints = of_graph_get_endpoint_count(dev->of_node);
-
-	for (i = 0; i < nb_endpoints; i++)
-		drm_of_panel_bridge_remove(ddev->dev->of_node, 0, i);
 
 	pm_runtime_disable(ddev->dev);
 }
-1
drivers/gpu/drm/stm/lvds.c
···
 	.remove = lvds_remove,
 	.driver = {
 		.name = "stm32-display-lvds",
-		.owner = THIS_MODULE,
 		.of_match_table = lvds_dt_ids,
 	},
 };
+6
drivers/gpu/drm/tegra/drm.c
···
 	return 0;
 }
 
+static void host1x_drm_shutdown(struct host1x_device *dev)
+{
+	drm_atomic_helper_shutdown(dev_get_drvdata(&dev->dev));
+}
+
 #ifdef CONFIG_PM_SLEEP
 static int host1x_drm_suspend(struct device *dev)
 {
···
 	},
 	.probe = host1x_drm_probe,
 	.remove = host1x_drm_remove,
+	.shutdown = host1x_drm_shutdown,
 	.subdevs = host1x_drm_subdevs,
 };
 
+7 -20
drivers/gpu/drm/tests/drm_gem_shmem_test.c
···
 #define TEST_BYTE		0xae
 
 /*
- * Wrappers to avoid an explicit type casting when passing action
- * functions to kunit_add_action().
+ * Wrappers to avoid cast warnings when passing action functions
+ * directly to kunit_add_action().
  */
-static void kfree_wrapper(void *ptr)
-{
-	const void *obj = ptr;
+KUNIT_DEFINE_ACTION_WRAPPER(kfree_wrapper, kfree, const void *);
 
-	kfree(obj);
-}
+KUNIT_DEFINE_ACTION_WRAPPER(sg_free_table_wrapper, sg_free_table,
+			    struct sg_table *);
 
-static void sg_free_table_wrapper(void *ptr)
-{
-	struct sg_table *sgt = ptr;
-
-	sg_free_table(sgt);
-}
-
-static void drm_gem_shmem_free_wrapper(void *ptr)
-{
-	struct drm_gem_shmem_object *shmem = ptr;
-
-	drm_gem_shmem_free(shmem);
-}
+KUNIT_DEFINE_ACTION_WRAPPER(drm_gem_shmem_free_wrapper, drm_gem_shmem_free,
+			    struct drm_gem_shmem_object *);
 
 /*
  * Test creating a shmem GEM object backed by shmem buffer. The test
+3 -3
drivers/gpu/drm/ttm/tests/ttm_bo_test.c
···
 
 	man = ttm_manager_type(priv->ttm_dev, mem_type);
 	KUNIT_ASSERT_EQ(test,
-			list_is_last(&res1->lru, &man->lru[bo->priority]), 1);
+			list_is_last(&res1->lru.link, &man->lru[bo->priority]), 1);
 
 	ttm_resource_free(bo, &res2);
 	ttm_resource_free(bo, &res1);
···
 	err = ttm_resource_alloc(bo, place, &res2);
 	KUNIT_ASSERT_EQ(test, err, 0);
 	KUNIT_ASSERT_EQ(test,
-			list_is_last(&res2->lru, &priv->ttm_dev->pinned), 1);
+			list_is_last(&res2->lru.link, &priv->ttm_dev->pinned), 1);
 
 	ttm_bo_unreserve(bo);
 	KUNIT_ASSERT_EQ(test,
-			list_is_last(&res1->lru, &priv->ttm_dev->pinned), 1);
+			list_is_last(&res1->lru.link, &priv->ttm_dev->pinned), 1);
 
 	ttm_resource_free(bo, &res1);
 	ttm_resource_free(bo, &res2);
+1 -1
drivers/gpu/drm/ttm/tests/ttm_resource_test.c
···
 	ttm_resource_init(bo, place, res);
 	ttm_resource_fini(man, res);
 
-	KUNIT_ASSERT_TRUE(test, list_empty(&res->lru));
+	KUNIT_ASSERT_TRUE(test, list_empty(&res->lru.link));
 	KUNIT_ASSERT_EQ(test, man->usage, 0);
 }
 
+214 -248
drivers/gpu/drm/ttm/ttm_bo.c
···
 	dma_resv_iter_end(&cursor);
 }
 
-/**
- * ttm_bo_cleanup_refs
- * If bo idle, remove from lru lists, and unref.
- * If not idle, block if possible.
- *
- * Must be called with lru_lock and reservation held, this function
- * will drop the lru lock and optionally the reservation lock before returning.
- *
- * @bo: The buffer object to clean-up
- * @interruptible: Any sleeps should occur interruptibly.
- * @no_wait_gpu: Never wait for gpu. Return -EBUSY instead.
- * @unlock_resv: Unlock the reservation lock as well.
- */
-
-static int ttm_bo_cleanup_refs(struct ttm_buffer_object *bo,
-			       bool interruptible, bool no_wait_gpu,
-			       bool unlock_resv)
-{
-	struct dma_resv *resv = &bo->base._resv;
-	int ret;
-
-	if (dma_resv_test_signaled(resv, DMA_RESV_USAGE_BOOKKEEP))
-		ret = 0;
-	else
-		ret = -EBUSY;
-
-	if (ret && !no_wait_gpu) {
-		long lret;
-
-		if (unlock_resv)
-			dma_resv_unlock(bo->base.resv);
-		spin_unlock(&bo->bdev->lru_lock);
-
-		lret = dma_resv_wait_timeout(resv, DMA_RESV_USAGE_BOOKKEEP,
-					     interruptible,
-					     30 * HZ);
-
-		if (lret < 0)
-			return lret;
-		else if (lret == 0)
-			return -EBUSY;
-
-		spin_lock(&bo->bdev->lru_lock);
-		if (unlock_resv && !dma_resv_trylock(bo->base.resv)) {
-			/*
-			 * We raced, and lost, someone else holds the reservation now,
-			 * and is probably busy in ttm_bo_cleanup_memtype_use.
-			 *
-			 * Even if it's not the case, because we finished waiting any
-			 * delayed destruction would succeed, so just return success
-			 * here.
-			 */
-			spin_unlock(&bo->bdev->lru_lock);
-			return 0;
-		}
-		ret = 0;
-	}
-
-	if (ret) {
-		if (unlock_resv)
-			dma_resv_unlock(bo->base.resv);
-		spin_unlock(&bo->bdev->lru_lock);
-		return ret;
-	}
-
-	spin_unlock(&bo->bdev->lru_lock);
-	ttm_bo_cleanup_memtype_use(bo);
-
-	if (unlock_resv)
-		dma_resv_unlock(bo->base.resv);
-
-	return 0;
-}
-
 /*
  * Block for the dma_resv object to become idle, lock the buffer and clean up
  * the resource and tt object.
···
 }
 EXPORT_SYMBOL(ttm_bo_eviction_valuable);
 
-/*
- * Check the target bo is allowable to be evicted or swapout, including cases:
+/**
+ * ttm_bo_evict_first() - Evict the first bo on the manager's LRU list.
+ * @bdev: The ttm device.
+ * @man: The manager whose bo to evict.
+ * @ctx: The TTM operation ctx governing the eviction.
  *
- * a. if share same reservation object with ctx->resv, have assumption
- * reservation objects should already be locked, so not lock again and
- * return true directly when either the opreation allow_reserved_eviction
- * or the target bo already is in delayed free list;
- *
- * b. Otherwise, trylock it.
+ * Return: 0 if successful or the resource disappeared. Negative error code on error.
  */
-static bool ttm_bo_evict_swapout_allowable(struct ttm_buffer_object *bo,
-					   struct ttm_operation_ctx *ctx,
-					   const struct ttm_place *place,
-					   bool *locked, bool *busy)
+int ttm_bo_evict_first(struct ttm_device *bdev, struct ttm_resource_manager *man,
+		       struct ttm_operation_ctx *ctx)
 {
-	bool ret = false;
+	struct ttm_resource_cursor cursor;
+	struct ttm_buffer_object *bo;
+	struct ttm_resource *res;
+	unsigned int mem_type;
+	int ret = 0;
 
-	if (bo->pin_count) {
-		*locked = false;
-		if (busy)
-			*busy = false;
-		return false;
+	spin_lock(&bdev->lru_lock);
+	res = ttm_resource_manager_first(man, &cursor);
+	ttm_resource_cursor_fini(&cursor);
+	if (!res) {
+		ret = -ENOENT;
+		goto out_no_ref;
 	}
+	bo = res->bo;
+	if (!ttm_bo_get_unless_zero(bo))
+		goto out_no_ref;
+	mem_type = res->mem_type;
+	spin_unlock(&bdev->lru_lock);
+	ret = ttm_bo_reserve(bo, ctx->interruptible, ctx->no_wait_gpu, NULL);
+	if (ret)
+		goto out_no_lock;
+	if (!bo->resource || bo->resource->mem_type != mem_type)
+		goto out_bo_moved;
 
-	if (bo->base.resv == ctx->resv) {
-		dma_resv_assert_held(bo->base.resv);
-		if (ctx->allow_res_evict)
-			ret = true;
-		*locked = false;
-		if (busy)
-			*busy = false;
+	if (bo->deleted) {
+		ret = ttm_bo_wait_ctx(bo, ctx);
+		if (!ret)
+			ttm_bo_cleanup_memtype_use(bo);
 	} else {
-		ret = dma_resv_trylock(bo->base.resv);
-		*locked = ret;
-		if (busy)
-			*busy = !ret;
+		ret = ttm_bo_evict(bo, ctx);
 	}
+out_bo_moved:
+	dma_resv_unlock(bo->base.resv);
+out_no_lock:
+	ttm_bo_put(bo);
+	return ret;
 
-	if (ret && place && (bo->resource->mem_type != place->mem_type ||
-		!bo->bdev->funcs->eviction_valuable(bo, place))) {
-		ret = false;
-		if (*locked) {
-			dma_resv_unlock(bo->base.resv);
-			*locked = false;
-		}
-	}
-
+out_no_ref:
+	spin_unlock(&bdev->lru_lock);
 	return ret;
 }
 
 /**
- * ttm_mem_evict_wait_busy - wait for a busy BO to become available
- *
- * @busy_bo: BO which couldn't be locked with trylock
- * @ctx: operation context
- * @ticket: acquire ticket
- *
- * Try to lock a busy buffer object to avoid failing eviction.
+ * struct ttm_bo_evict_walk - Parameters for the evict walk.
  */
-static int ttm_mem_evict_wait_busy(struct ttm_buffer_object *busy_bo,
-				   struct ttm_operation_ctx *ctx,
-				   struct ww_acquire_ctx *ticket)
+struct ttm_bo_evict_walk {
+	/** @walk: The walk base parameters. */
+	struct ttm_lru_walk walk;
+	/** @place: The place passed to the resource allocation. */
+	const struct ttm_place *place;
+	/** @evictor: The buffer object we're trying to make room for. */
+	struct ttm_buffer_object *evictor;
+	/** @res: The allocated resource if any. */
+	struct ttm_resource **res;
+	/** @evicted: Number of successful evictions. */
+	unsigned long evicted;
+};
+
+static s64 ttm_bo_evict_cb(struct ttm_lru_walk *walk, struct ttm_buffer_object *bo)
 {
-	int r;
+	struct ttm_bo_evict_walk *evict_walk =
+		container_of(walk, typeof(*evict_walk), walk);
+	s64 lret;
 
-	if (!busy_bo || !ticket)
-		return -EBUSY;
-
-	if (ctx->interruptible)
-		r = dma_resv_lock_interruptible(busy_bo->base.resv,
-						ticket);
-	else
-		r = dma_resv_lock(busy_bo->base.resv, ticket);
-
-	/*
-	 * TODO: It would be better to keep the BO locked until allocation is at
-	 * least tried one more time, but that would mean a much larger rework
-	 * of TTM.
-	 */
-	if (!r)
-		dma_resv_unlock(busy_bo->base.resv);
-
-	return r == -EDEADLK ? -EBUSY : r;
-}
-
-int ttm_mem_evict_first(struct ttm_device *bdev,
-			struct ttm_resource_manager *man,
-			const struct ttm_place *place,
-			struct ttm_operation_ctx *ctx,
-			struct ww_acquire_ctx *ticket)
-{
-	struct ttm_buffer_object *bo = NULL, *busy_bo = NULL;
-	struct ttm_resource_cursor cursor;
-	struct ttm_resource *res;
-	bool locked = false;
-	int ret;
-
-	spin_lock(&bdev->lru_lock);
-	ttm_resource_manager_for_each_res(man, &cursor, res) {
-		bool busy;
-
-		if (!ttm_bo_evict_swapout_allowable(res->bo, ctx, place,
-						    &locked, &busy)) {
-			if (busy && !busy_bo && ticket !=
-			    dma_resv_locking_ctx(res->bo->base.resv))
-				busy_bo = res->bo;
-			continue;
-		}
-
-		if (ttm_bo_get_unless_zero(res->bo)) {
-			bo = res->bo;
-			break;
-		}
-		if (locked)
-			dma_resv_unlock(res->bo->base.resv);
-	}
-
-	if (!bo) {
-		if (busy_bo && !ttm_bo_get_unless_zero(busy_bo))
-			busy_bo = NULL;
-		spin_unlock(&bdev->lru_lock);
-		ret = ttm_mem_evict_wait_busy(busy_bo, ctx, ticket);
-		if (busy_bo)
-			ttm_bo_put(busy_bo);
-		return ret;
-	}
+	if (bo->pin_count || !bo->bdev->funcs->eviction_valuable(bo, evict_walk->place))
+		return 0;
 
 	if (bo->deleted) {
-		ret = ttm_bo_cleanup_refs(bo, ctx->interruptible,
-					  ctx->no_wait_gpu, locked);
-		ttm_bo_put(bo);
-		return ret;
+		lret = ttm_bo_wait_ctx(bo, walk->ctx);
+		if (!lret)
+			ttm_bo_cleanup_memtype_use(bo);
+	} else {
+		lret = ttm_bo_evict(bo, walk->ctx);
 	}
 
-	spin_unlock(&bdev->lru_lock);
+	if (lret)
+		goto out;
 
-	ret = ttm_bo_evict(bo, ctx);
-	if (locked)
-		ttm_bo_unreserve(bo);
-	else
-		ttm_bo_move_to_lru_tail_unlocked(bo);
+	evict_walk->evicted++;
+	if (evict_walk->res)
+		lret = ttm_resource_alloc(evict_walk->evictor, evict_walk->place,
+					  evict_walk->res);
+	if (lret == 0)
+		return 1;
+out:
+	/* Errors that should terminate the walk. */
+	if (lret == -ENOSPC)
+		return -EBUSY;
 
-	ttm_bo_put(bo);
-	return ret;
+	return lret;
+}
+
+static const struct ttm_lru_walk_ops ttm_evict_walk_ops = {
+	.process_bo = ttm_bo_evict_cb,
+};
+
+static int ttm_bo_evict_alloc(struct ttm_device *bdev,
+			      struct ttm_resource_manager *man,
+			      const struct ttm_place *place,
+			      struct ttm_buffer_object *evictor,
+			      struct ttm_operation_ctx *ctx,
+			      struct ww_acquire_ctx *ticket,
+			      struct ttm_resource **res)
+{
+	struct ttm_bo_evict_walk evict_walk = {
+		.walk = {
+			.ops = &ttm_evict_walk_ops,
+			.ctx = ctx,
+			.ticket = ticket,
+		},
+		.place = place,
+		.evictor = evictor,
+		.res = res,
+	};
+	s64 lret;
+
+	evict_walk.walk.trylock_only = true;
+	lret = ttm_lru_walk_for_evict(&evict_walk.walk, bdev, man, 1);
+	if (lret || !ticket)
+		goto out;
+
+	/* If ticket-locking, repeat while making progress. */
+	evict_walk.walk.trylock_only = false;
+	do {
+		/* The walk may clear the evict_walk.walk.ticket field */
+		evict_walk.walk.ticket = ticket;
+		evict_walk.evicted = 0;
+		lret = ttm_lru_walk_for_evict(&evict_walk.walk, bdev, man, 1);
+	} while (!lret && evict_walk.evicted);
+out:
+	if (lret < 0)
+		return lret;
+	if (lret == 0)
+		return -EBUSY;
+	return 0;
 }
 
 /**
···
 	for (i = 0; i < placement->num_placement; ++i) {
 		const struct ttm_place *place = &placement->placement[i];
 		struct ttm_resource_manager *man;
+		bool may_evict;
 
 		man = ttm_manager_type(bdev, place->mem_type);
 		if (!man || !ttm_resource_manager_used(man))
···
 			    TTM_PL_FLAG_FALLBACK))
 			continue;
 
-		do {
-			ret = ttm_resource_alloc(bo, place, res);
-			if (unlikely(ret && ret != -ENOSPC))
+		may_evict = (force_space && place->mem_type != TTM_PL_SYSTEM);
+		ret = ttm_resource_alloc(bo, place, res);
+		if (ret) {
+			if (ret != -ENOSPC)
 				return ret;
-			if (likely(!ret) || !force_space)
-				break;
+			if (!may_evict)
+				continue;
 
-			ret = ttm_mem_evict_first(bdev, man, place, ctx,
-						  ticket);
-			if (unlikely(ret == -EBUSY))
-				break;
-			if (unlikely(ret))
+			ret = ttm_bo_evict_alloc(bdev, man, place, bo, ctx,
+						 ticket, res);
+			if (ret == -EBUSY)
+				continue;
+			if (ret)
 				return ret;
-		} while (1);
-		if (ret)
-			continue;
+		}
 
 		ret = ttm_bo_add_move_fence(bo, man, ctx->no_wait_gpu);
 		if (unlikely(ret)) {
···
 }
 EXPORT_SYMBOL(ttm_bo_wait_ctx);
 
-int ttm_bo_swapout(struct ttm_buffer_object *bo, struct ttm_operation_ctx *ctx,
-		   gfp_t gfp_flags)
+/**
+ * struct ttm_bo_swapout_walk - Parameters for the swapout walk
+ */
+struct ttm_bo_swapout_walk {
+	/** @walk: The walk base parameters. */
+	struct ttm_lru_walk walk;
+	/** @gfp_flags: The gfp flags to use for ttm_tt_swapout() */
+	gfp_t gfp_flags;
+};
+
+static s64
+ttm_bo_swapout_cb(struct ttm_lru_walk *walk, struct ttm_buffer_object *bo)
 {
-	struct ttm_place place;
-	bool locked;
-	long ret;
+	struct ttm_place place = {.mem_type = bo->resource->mem_type};
+	struct ttm_bo_swapout_walk *swapout_walk =
+		container_of(walk, typeof(*swapout_walk), walk);
+	struct ttm_operation_ctx *ctx = walk->ctx;
+	s64 ret;
 
 	/*
 	 * While the bo may already reside in SYSTEM placement, set
···
 	 * The driver may use the fact that we're moving from SYSTEM
 	 * as an indication that we're about to swap out.
 	 */
-	memset(&place, 0, sizeof(place));
-	place.mem_type = bo->resource->mem_type;
-	if (!ttm_bo_evict_swapout_allowable(bo, ctx, &place, &locked, NULL))
-		return -EBUSY;
+	if (bo->pin_count || !bo->bdev->funcs->eviction_valuable(bo, &place)) {
+		ret = -EBUSY;
+		goto out;
+	}
 
 	if (!bo->ttm || !ttm_tt_is_populated(bo->ttm) ||
 	    bo->ttm->page_flags & TTM_TT_FLAG_EXTERNAL ||
-	    bo->ttm->page_flags & TTM_TT_FLAG_SWAPPED ||
-	    !ttm_bo_get_unless_zero(bo)) {
-		if (locked)
-			dma_resv_unlock(bo->base.resv);
-		return -EBUSY;
+	    bo->ttm->page_flags & TTM_TT_FLAG_SWAPPED) {
+		ret = -EBUSY;
+		goto out;
 	}
 
 	if (bo->deleted) {
-		ret = ttm_bo_cleanup_refs(bo, false, false, locked);
-		ttm_bo_put(bo);
-		return ret == -EBUSY ? -ENOSPC : ret;
-	}
+		pgoff_t num_pages = bo->ttm->num_pages;
 
-	/* TODO: Cleanup the locking */
-	spin_unlock(&bo->bdev->lru_lock);
+		ret = ttm_bo_wait_ctx(bo, ctx);
+		if (ret)
+			goto out;
+
+		ttm_bo_cleanup_memtype_use(bo);
+		ret = num_pages;
+		goto out;
+	}
 
 	/*
 	 * Move to system cached
···
 	memset(&hop, 0, sizeof(hop));
 	place.mem_type = TTM_PL_SYSTEM;
 	ret = ttm_resource_alloc(bo, &place, &evict_mem);
-	if (unlikely(ret))
+	if (ret)
 		goto out;
 
 	ret = ttm_bo_handle_move_mem(bo, evict_mem, true, ctx, &hop);
-	if (unlikely(ret != 0)) {
-		WARN(ret == -EMULTIHOP, "Unexpected multihop in swaput - likely driver bug.\n");
+	if (ret) {
+		WARN(ret == -EMULTIHOP,
+		     "Unexpected multihop in swapout - likely driver bug.\n");
 		ttm_resource_free(bo, &evict_mem);
 		goto out;
 	}
···
 	 * Make sure BO is idle.
 	 */
 	ret = ttm_bo_wait_ctx(bo, ctx);
-	if (unlikely(ret != 0))
+	if (ret)
 		goto out;
 
 	ttm_bo_unmap_virtual(bo);
-
-	/*
-	 * Swap out. Buffer will be swapped in again as soon as
-	 * anyone tries to access a ttm page.
-	 */
 	if (bo->bdev->funcs->swap_notify)
 		bo->bdev->funcs->swap_notify(bo);
 
 	if (ttm_tt_is_populated(bo->ttm))
-		ret = ttm_tt_swapout(bo->bdev, bo->ttm, gfp_flags);
-out:
+		ret = ttm_tt_swapout(bo->bdev, bo->ttm, swapout_walk->gfp_flags);
 
-	/*
-	 * Unreserve without putting on LRU to avoid swapping out an
-	 * already swapped buffer.
-	 */
-	if (locked)
-		dma_resv_unlock(bo->base.resv);
-	ttm_bo_put(bo);
-	return ret == -EBUSY ? -ENOSPC : ret;
+out:
+	/* Consider -ENOMEM and -ENOSPC non-fatal. */
+	if (ret == -ENOMEM || ret == -ENOSPC)
+		ret = -EBUSY;
+
+	return ret;
+}
+
+const struct ttm_lru_walk_ops ttm_swap_ops = {
+	.process_bo = ttm_bo_swapout_cb,
+};
+
+/**
+ * ttm_bo_swapout() - Swap out buffer objects on the LRU list to shmem.
+ * @bdev: The ttm device.
+ * @ctx: The ttm_operation_ctx governing the swapout operation.
+ * @man: The resource manager whose resources / buffer objects are
+ * going to be swapped out.
+ * @gfp_flags: The gfp flags used for shmem page allocations.
+ * @target: The desired number of bytes to swap out.
+ *
+ * Return: The number of bytes actually swapped out, or negative error code
+ * on error.
+ */
+s64 ttm_bo_swapout(struct ttm_device *bdev, struct ttm_operation_ctx *ctx,
+		   struct ttm_resource_manager *man, gfp_t gfp_flags,
+		   s64 target)
+{
+	struct ttm_bo_swapout_walk swapout_walk = {
+		.walk = {
+			.ops = &ttm_swap_ops,
+			.ctx = ctx,
+			.trylock_only = true,
+		},
+		.gfp_flags = gfp_flags,
+	};
+
+	return ttm_lru_walk_for_evict(&swapout_walk.walk, bdev, man, target);
 }
 
 void ttm_bo_tt_destroy(struct ttm_buffer_object *bo)
+151
drivers/gpu/drm/ttm/ttm_bo_util.c
··· 768 768 ttm_tt_destroy(bo->bdev, ttm); 769 769 return ret; 770 770 } 771 + 772 + static bool ttm_lru_walk_trylock(struct ttm_lru_walk *walk, 773 + struct ttm_buffer_object *bo, 774 + bool *needs_unlock) 775 + { 776 + struct ttm_operation_ctx *ctx = walk->ctx; 777 + 778 + *needs_unlock = false; 779 + 780 + if (dma_resv_trylock(bo->base.resv)) { 781 + *needs_unlock = true; 782 + return true; 783 + } 784 + 785 + if (bo->base.resv == ctx->resv && ctx->allow_res_evict) { 786 + dma_resv_assert_held(bo->base.resv); 787 + return true; 788 + } 789 + 790 + return false; 791 + } 792 + 793 + static int ttm_lru_walk_ticketlock(struct ttm_lru_walk *walk, 794 + struct ttm_buffer_object *bo, 795 + bool *needs_unlock) 796 + { 797 + struct dma_resv *resv = bo->base.resv; 798 + int ret; 799 + 800 + if (walk->ctx->interruptible) 801 + ret = dma_resv_lock_interruptible(resv, walk->ticket); 802 + else 803 + ret = dma_resv_lock(resv, walk->ticket); 804 + 805 + if (!ret) { 806 + *needs_unlock = true; 807 + /* 808 + * Only a single ticketlock per loop. Ticketlocks are prone 809 + * to return -EDEADLK causing the eviction to fail, so 810 + * after waiting for the ticketlock, revert back to 811 + * trylocking for this walk. 812 + */ 813 + walk->ticket = NULL; 814 + } else if (ret == -EDEADLK) { 815 + /* Caller needs to exit the ww transaction. */ 816 + ret = -ENOSPC; 817 + } 818 + 819 + return ret; 820 + } 821 + 822 + static void ttm_lru_walk_unlock(struct ttm_buffer_object *bo, bool locked) 823 + { 824 + if (locked) 825 + dma_resv_unlock(bo->base.resv); 826 + } 827 + 828 + /** 829 + * ttm_lru_walk_for_evict() - Perform a LRU list walk, with actions taken on 830 + * valid items. 831 + * @walk: describe the walks and actions taken 832 + * @bdev: The TTM device. 833 + * @man: The struct ttm_resource manager whose LRU lists we're walking. 834 + * @target: The end condition for the walk. 
835 + * 836 + * The LRU lists of @man are walked, and for each struct ttm_resource encountered, 837 + * the corresponding ttm_buffer_object is locked and taken a reference on, and 838 + * the LRU lock is dropped. The LRU lock may be dropped before locking and, in 839 + * that case, it's verified that the item actually remains on the LRU list after 840 + * the lock, and that the buffer object didn't switch resource in between. 841 + * 842 + * With a locked object, the actions indicated by @walk->process_bo are 843 + * performed, and after that, the bo is unlocked, the refcount dropped and the 844 + * next struct ttm_resource is processed. Here, the walker relies on 845 + * TTM's restartable LRU list implementation. 846 + * 847 + * Typically @walk->process_bo() would return the number of pages evicted, 848 + * swapped or shrunken, so that when the total exceeds @target, or when the 849 + * LRU list has been walked in full, iteration is terminated. It's also terminated 850 + * on error. Note that the definition of @target is done by the caller; it 851 + * could have a different meaning than the number of pages. 852 + * 853 + * Note that the way dma_resv individualization is done, locking needs to be done 854 + * either with the LRU lock held (trylocking only) or with a reference on the 855 + * object. 856 + * 857 + * Return: The progress made towards target or negative error code on error. 
858 + */ 859 + s64 ttm_lru_walk_for_evict(struct ttm_lru_walk *walk, struct ttm_device *bdev, 860 + struct ttm_resource_manager *man, s64 target) 861 + { 862 + struct ttm_resource_cursor cursor; 863 + struct ttm_resource *res; 864 + s64 progress = 0; 865 + s64 lret; 866 + 867 + spin_lock(&bdev->lru_lock); 868 + ttm_resource_manager_for_each_res(man, &cursor, res) { 869 + struct ttm_buffer_object *bo = res->bo; 870 + bool bo_needs_unlock = false; 871 + bool bo_locked = false; 872 + int mem_type; 873 + 874 + /* 875 + * Attempt a trylock before taking a reference on the bo, 876 + * since if we do it the other way around, and the trylock fails, 877 + * we need to drop the lru lock to put the bo. 878 + */ 879 + if (ttm_lru_walk_trylock(walk, bo, &bo_needs_unlock)) 880 + bo_locked = true; 881 + else if (!walk->ticket || walk->ctx->no_wait_gpu || 882 + walk->trylock_only) 883 + continue; 884 + 885 + if (!ttm_bo_get_unless_zero(bo)) { 886 + ttm_lru_walk_unlock(bo, bo_needs_unlock); 887 + continue; 888 + } 889 + 890 + mem_type = res->mem_type; 891 + spin_unlock(&bdev->lru_lock); 892 + 893 + lret = 0; 894 + if (!bo_locked) 895 + lret = ttm_lru_walk_ticketlock(walk, bo, &bo_needs_unlock); 896 + 897 + /* 898 + * Note that in between the release of the lru lock and the 899 + * ticketlock, the bo may have switched resource, 900 + * and also memory type, since the resource may have been 901 + * freed and allocated again with a different memory type. 902 + * In that case, just skip it. 903 + */ 904 + if (!lret && bo->resource && bo->resource->mem_type == mem_type) 905 + lret = walk->ops->process_bo(walk, bo); 906 + 907 + ttm_lru_walk_unlock(bo, bo_needs_unlock); 908 + ttm_bo_put(bo); 909 + if (lret == -EBUSY || lret == -EALREADY) 910 + lret = 0; 911 + progress = (lret < 0) ? 
lret : progress + lret; 912 + 913 + spin_lock(&bdev->lru_lock); 914 + if (progress < 0 || progress >= target) 915 + break; 916 + } 917 + ttm_resource_cursor_fini(&cursor); 918 + spin_unlock(&bdev->lru_lock); 919 + 920 + return progress; 921 + }
+7 -22
drivers/gpu/drm/ttm/ttm_device.c
··· 148 148 int ttm_device_swapout(struct ttm_device *bdev, struct ttm_operation_ctx *ctx, 149 149 gfp_t gfp_flags) 150 150 { 151 - struct ttm_resource_cursor cursor; 152 151 struct ttm_resource_manager *man; 153 - struct ttm_resource *res; 154 152 unsigned i; 155 - int ret; 153 + s64 lret; 156 154 157 - spin_lock(&bdev->lru_lock); 158 155 for (i = TTM_PL_SYSTEM; i < TTM_NUM_MEM_TYPES; ++i) { 159 156 man = ttm_manager_type(bdev, i); 160 157 if (!man || !man->use_tt) 161 158 continue; 162 159 163 - ttm_resource_manager_for_each_res(man, &cursor, res) { 164 - struct ttm_buffer_object *bo = res->bo; 165 - uint32_t num_pages; 166 - 167 - if (!bo || bo->resource != res) 168 - continue; 169 - 170 - num_pages = PFN_UP(bo->base.size); 171 - ret = ttm_bo_swapout(bo, ctx, gfp_flags); 172 - /* ttm_bo_swapout has dropped the lru_lock */ 173 - if (!ret) 174 - return num_pages; 175 - if (ret != -EBUSY) 176 - return ret; 177 - } 160 + lret = ttm_bo_swapout(bdev, ctx, man, gfp_flags, 1); 161 + /* Can be both positive (num_pages) and negative (error) */ 162 + if (lret) 163 + return lret; 178 164 } 179 - spin_unlock(&bdev->lru_lock); 180 165 return 0; 181 166 } 182 167 EXPORT_SYMBOL(ttm_device_swapout); ··· 259 274 struct ttm_resource *res; 260 275 261 276 spin_lock(&bdev->lru_lock); 262 - while ((res = list_first_entry_or_null(list, typeof(*res), lru))) { 277 + while ((res = ttm_lru_first_res_or_null(list))) { 263 278 struct ttm_buffer_object *bo = res->bo; 264 279 265 280 /* Take ref against racing releases once lru_lock is unlocked */ 266 281 if (!ttm_bo_get_unless_zero(bo)) 267 282 continue; 268 283 269 - list_del_init(&res->lru); 284 + list_del_init(&bo->resource->lru.link); 270 285 spin_unlock(&bdev->lru_lock); 271 286 272 287 if (bo->ttm)
+1 -1
drivers/gpu/drm/ttm/ttm_pool.c
··· 91 91 */ 92 92 if (order) 93 93 gfp_flags |= __GFP_NOMEMALLOC | __GFP_NORETRY | __GFP_NOWARN | 94 - __GFP_KSWAPD_RECLAIM; 94 + __GFP_THISNODE; 95 95 96 96 if (!pool->use_dma_alloc) { 97 97 p = alloc_pages_node(pool->nid, gfp_flags, order);
+200 -55
drivers/gpu/drm/ttm/ttm_resource.c
··· 33 33 34 34 #include <drm/drm_util.h> 35 35 36 + /* Detach the cursor from the bulk move list */ 37 + static void 38 + ttm_resource_cursor_clear_bulk(struct ttm_resource_cursor *cursor) 39 + { 40 + lockdep_assert_held(&cursor->man->bdev->lru_lock); 41 + 42 + cursor->bulk = NULL; 43 + list_del_init(&cursor->bulk_link); 44 + } 45 + 46 + /* Move the cursor to the end of the bulk move list it's in */ 47 + static void ttm_resource_cursor_move_bulk_tail(struct ttm_lru_bulk_move *bulk, 48 + struct ttm_resource_cursor *cursor) 49 + { 50 + struct ttm_lru_bulk_move_pos *pos; 51 + 52 + lockdep_assert_held(&cursor->man->bdev->lru_lock); 53 + 54 + if (WARN_ON_ONCE(bulk != cursor->bulk)) { 55 + list_del_init(&cursor->bulk_link); 56 + return; 57 + } 58 + 59 + pos = &bulk->pos[cursor->mem_type][cursor->priority]; 60 + if (pos->last) 61 + list_move(&cursor->hitch.link, &pos->last->lru.link); 62 + ttm_resource_cursor_clear_bulk(cursor); 63 + } 64 + 65 + /* Move all cursors attached to a bulk move to its end */ 66 + static void ttm_bulk_move_adjust_cursors(struct ttm_lru_bulk_move *bulk) 67 + { 68 + struct ttm_resource_cursor *cursor, *next; 69 + 70 + list_for_each_entry_safe(cursor, next, &bulk->cursor_list, bulk_link) 71 + ttm_resource_cursor_move_bulk_tail(bulk, cursor); 72 + } 73 + 74 + /* Remove a cursor from an empty bulk move list */ 75 + static void ttm_bulk_move_drop_cursors(struct ttm_lru_bulk_move *bulk) 76 + { 77 + struct ttm_resource_cursor *cursor, *next; 78 + 79 + list_for_each_entry_safe(cursor, next, &bulk->cursor_list, bulk_link) 80 + ttm_resource_cursor_clear_bulk(cursor); 81 + } 82 + 83 + /** 84 + * ttm_resource_cursor_fini() - Finalize the LRU list cursor usage 85 + * @cursor: The struct ttm_resource_cursor to finalize. 86 + * 87 + * The function pulls the LRU list cursor off any lists it was previously 88 + * attached to. Needs to be called with the LRU lock held. The function 89 + * can be called multiple times after each other. 
90 + */ 91 + void ttm_resource_cursor_fini(struct ttm_resource_cursor *cursor) 92 + { 93 + lockdep_assert_held(&cursor->man->bdev->lru_lock); 94 + list_del_init(&cursor->hitch.link); 95 + ttm_resource_cursor_clear_bulk(cursor); 96 + } 97 + 36 98 /** 37 99 * ttm_lru_bulk_move_init - initialize a bulk move structure 38 100 * @bulk: the structure to init ··· 104 42 void ttm_lru_bulk_move_init(struct ttm_lru_bulk_move *bulk) 105 43 { 106 44 memset(bulk, 0, sizeof(*bulk)); 45 + INIT_LIST_HEAD(&bulk->cursor_list); 107 46 } 108 47 EXPORT_SYMBOL(ttm_lru_bulk_move_init); 48 + 49 + /** 50 + * ttm_lru_bulk_move_fini - finalize a bulk move structure 51 + * @bdev: The struct ttm_device 52 + * @bulk: the structure to finalize 53 + * 54 + * Sanity checks that bulk moves don't have any 55 + * resources left and hence no cursors attached. 56 + */ 57 + void ttm_lru_bulk_move_fini(struct ttm_device *bdev, 58 + struct ttm_lru_bulk_move *bulk) 59 + { 60 + spin_lock(&bdev->lru_lock); 61 + ttm_bulk_move_drop_cursors(bulk); 62 + spin_unlock(&bdev->lru_lock); 63 + } 64 + EXPORT_SYMBOL(ttm_lru_bulk_move_fini); 109 65 110 66 /** 111 67 * ttm_lru_bulk_move_tail - bulk move range of resources to the LRU tail. 
··· 137 57 { 138 58 unsigned i, j; 139 59 60 + ttm_bulk_move_adjust_cursors(bulk); 140 61 for (i = 0; i < TTM_NUM_MEM_TYPES; ++i) { 141 62 for (j = 0; j < TTM_MAX_BO_PRIORITY; ++j) { 142 63 struct ttm_lru_bulk_move_pos *pos = &bulk->pos[i][j]; ··· 151 70 dma_resv_assert_held(pos->last->bo->base.resv); 152 71 153 72 man = ttm_manager_type(pos->first->bo->bdev, i); 154 - list_bulk_move_tail(&man->lru[j], &pos->first->lru, 155 - &pos->last->lru); 73 + list_bulk_move_tail(&man->lru[j], &pos->first->lru.link, 74 + &pos->last->lru.link); 156 75 } 157 76 } 158 77 } ··· 165 84 return &bulk->pos[res->mem_type][res->bo->priority]; 166 85 } 167 86 87 + /* Return the previous resource on the list (skip over non-resource list items) */ 88 + static struct ttm_resource *ttm_lru_prev_res(struct ttm_resource *cur) 89 + { 90 + struct ttm_lru_item *lru = &cur->lru; 91 + 92 + do { 93 + lru = list_prev_entry(lru, link); 94 + } while (!ttm_lru_item_is_res(lru)); 95 + 96 + return ttm_lru_item_to_res(lru); 97 + } 98 + 99 + /* Return the next resource on the list (skip over non-resource list items) */ 100 + static struct ttm_resource *ttm_lru_next_res(struct ttm_resource *cur) 101 + { 102 + struct ttm_lru_item *lru = &cur->lru; 103 + 104 + do { 105 + lru = list_next_entry(lru, link); 106 + } while (!ttm_lru_item_is_res(lru)); 107 + 108 + return ttm_lru_item_to_res(lru); 109 + } 110 + 168 111 /* Move the resource to the tail of the bulk move range */ 169 112 static void ttm_lru_bulk_move_pos_tail(struct ttm_lru_bulk_move_pos *pos, 170 113 struct ttm_resource *res) 171 114 { 172 115 if (pos->last != res) { 173 116 if (pos->first == res) 174 - pos->first = list_next_entry(res, lru); 175 - list_move(&res->lru, &pos->last->lru); 117 + pos->first = ttm_lru_next_res(res); 118 + list_move(&res->lru.link, &pos->last->lru.link); 176 119 pos->last = res; 177 120 } 178 121 } ··· 227 122 pos->first = NULL; 228 123 pos->last = NULL; 229 124 } else if (pos->first == res) { 230 - pos->first = 
list_next_entry(res, lru); 125 + pos->first = ttm_lru_next_res(res); 231 126 } else if (pos->last == res) { 232 - pos->last = list_prev_entry(res, lru); 127 + pos->last = ttm_lru_prev_res(res); 233 128 } else { 234 - list_move(&res->lru, &pos->last->lru); 129 + list_move(&res->lru.link, &pos->last->lru.link); 235 130 } 236 131 } 237 132 ··· 260 155 lockdep_assert_held(&bo->bdev->lru_lock); 261 156 262 157 if (bo->pin_count) { 263 - list_move_tail(&res->lru, &bdev->pinned); 158 + list_move_tail(&res->lru.link, &bdev->pinned); 264 159 265 160 } else if (bo->bulk_move) { 266 161 struct ttm_lru_bulk_move_pos *pos = ··· 271 166 struct ttm_resource_manager *man; 272 167 273 168 man = ttm_manager_type(bdev, res->mem_type); 274 - list_move_tail(&res->lru, &man->lru[bo->priority]); 169 + list_move_tail(&res->lru.link, &man->lru[bo->priority]); 275 170 } 276 171 } 277 172 ··· 302 197 man = ttm_manager_type(bo->bdev, place->mem_type); 303 198 spin_lock(&bo->bdev->lru_lock); 304 199 if (bo->pin_count) 305 - list_add_tail(&res->lru, &bo->bdev->pinned); 200 + list_add_tail(&res->lru.link, &bo->bdev->pinned); 306 201 else 307 - list_add_tail(&res->lru, &man->lru[bo->priority]); 202 + list_add_tail(&res->lru.link, &man->lru[bo->priority]); 308 203 man->usage += res->size; 309 204 spin_unlock(&bo->bdev->lru_lock); 310 205 } ··· 326 221 struct ttm_device *bdev = man->bdev; 327 222 328 223 spin_lock(&bdev->lru_lock); 329 - list_del_init(&res->lru); 224 + list_del_init(&res->lru.link); 330 225 man->usage -= res->size; 331 226 spin_unlock(&bdev->lru_lock); 332 227 } ··· 495 390 }; 496 391 struct dma_fence *fence; 497 392 int ret; 498 - unsigned i; 499 393 500 - /* 501 - * Can't use standard list traversal since we're unlocking. 
502 - */ 503 - 504 - spin_lock(&bdev->lru_lock); 505 - for (i = 0; i < TTM_MAX_BO_PRIORITY; ++i) { 506 - while (!list_empty(&man->lru[i])) { 507 - spin_unlock(&bdev->lru_lock); 508 - ret = ttm_mem_evict_first(bdev, man, NULL, &ctx, 509 - NULL); 510 - if (ret) 511 - return ret; 512 - spin_lock(&bdev->lru_lock); 513 - } 514 - } 515 - spin_unlock(&bdev->lru_lock); 394 + do { 395 + ret = ttm_bo_evict_first(bdev, man, &ctx); 396 + cond_resched(); 397 + } while (!ret); 516 398 517 399 spin_lock(&man->move_lock); 518 400 fence = dma_fence_get(man->move); ··· 552 460 } 553 461 EXPORT_SYMBOL(ttm_resource_manager_debug); 554 462 463 + static void 464 + ttm_resource_cursor_check_bulk(struct ttm_resource_cursor *cursor, 465 + struct ttm_lru_item *next_lru) 466 + { 467 + struct ttm_resource *next = ttm_lru_item_to_res(next_lru); 468 + struct ttm_lru_bulk_move *bulk = NULL; 469 + struct ttm_buffer_object *bo = next->bo; 470 + 471 + lockdep_assert_held(&cursor->man->bdev->lru_lock); 472 + bulk = bo->bulk_move; 473 + 474 + if (cursor->bulk != bulk) { 475 + if (bulk) { 476 + list_move_tail(&cursor->bulk_link, &bulk->cursor_list); 477 + cursor->mem_type = next->mem_type; 478 + } else { 479 + list_del_init(&cursor->bulk_link); 480 + } 481 + cursor->bulk = bulk; 482 + } 483 + } 484 + 555 485 /** 556 - * ttm_resource_manager_first 557 - * 486 + * ttm_resource_manager_first() - Start iterating over the resources 487 + * of a resource manager 558 488 * @man: resource manager to iterate over 559 489 * @cursor: cursor to record the position 560 490 * 561 - * Returns the first resource from the resource manager. 491 + * Initializes the cursor and starts iterating. When done iterating, 492 + * the caller must explicitly call ttm_resource_cursor_fini(). 493 + * 494 + * Return: The first resource from the resource manager. 
562 495 */ 563 496 struct ttm_resource * 564 497 ttm_resource_manager_first(struct ttm_resource_manager *man, 565 498 struct ttm_resource_cursor *cursor) 566 499 { 567 - struct ttm_resource *res; 500 + lockdep_assert_held(&man->bdev->lru_lock); 501 + 502 + cursor->priority = 0; 503 + cursor->man = man; 504 + ttm_lru_item_init(&cursor->hitch, TTM_LRU_HITCH); 505 + INIT_LIST_HEAD(&cursor->bulk_link); 506 + list_add(&cursor->hitch.link, &man->lru[cursor->priority]); 507 + 508 + return ttm_resource_manager_next(cursor); 509 + } 510 + 511 + /** 512 + * ttm_resource_manager_next() - Continue iterating over the resource manager 513 + * resources 514 + * @cursor: cursor to record the position 515 + * 516 + * Return: the next resource from the resource manager. 517 + */ 518 + struct ttm_resource * 519 + ttm_resource_manager_next(struct ttm_resource_cursor *cursor) 520 + { 521 + struct ttm_resource_manager *man = cursor->man; 522 + struct ttm_lru_item *lru; 568 523 569 524 lockdep_assert_held(&man->bdev->lru_lock); 570 525 571 - for (cursor->priority = 0; cursor->priority < TTM_MAX_BO_PRIORITY; 572 - ++cursor->priority) 573 - list_for_each_entry(res, &man->lru[cursor->priority], lru) 574 - return res; 526 + for (;;) { 527 + lru = &cursor->hitch; 528 + list_for_each_entry_continue(lru, &man->lru[cursor->priority], link) { 529 + if (ttm_lru_item_is_res(lru)) { 530 + ttm_resource_cursor_check_bulk(cursor, lru); 531 + list_move(&cursor->hitch.link, &lru->link); 532 + return ttm_lru_item_to_res(lru); 533 + } 534 + } 535 + 536 + if (++cursor->priority >= TTM_MAX_BO_PRIORITY) 537 + break; 538 + 539 + list_move(&cursor->hitch.link, &man->lru[cursor->priority]); 540 + ttm_resource_cursor_clear_bulk(cursor); 541 + } 542 + 543 + ttm_resource_cursor_fini(cursor); 575 544 576 545 return NULL; 577 546 } 578 547 579 548 /** 580 - * ttm_resource_manager_next 549 + * ttm_lru_first_res_or_null() - Return the first resource on an lru list 550 + * @head: The list head of the lru list. 
581 551 * 582 - * @man: resource manager to iterate over 583 - * @cursor: cursor to record the position 584 - * @res: the current resource pointer 585 - * 586 - * Returns the next resource from the resource manager. 552 + * Return: Pointer to the first resource on the lru list or NULL if 553 + * there is none. 587 554 */ 588 - struct ttm_resource * 589 - ttm_resource_manager_next(struct ttm_resource_manager *man, 590 - struct ttm_resource_cursor *cursor, 591 - struct ttm_resource *res) 555 + struct ttm_resource *ttm_lru_first_res_or_null(struct list_head *head) 592 556 { 593 - lockdep_assert_held(&man->bdev->lru_lock); 557 + struct ttm_lru_item *lru; 594 558 595 - list_for_each_entry_continue(res, &man->lru[cursor->priority], lru) 596 - return res; 597 - 598 - for (++cursor->priority; cursor->priority < TTM_MAX_BO_PRIORITY; 599 - ++cursor->priority) 600 - list_for_each_entry(res, &man->lru[cursor->priority], lru) 601 - return res; 559 + list_for_each_entry(lru, head, link) { 560 + if (ttm_lru_item_is_res(lru)) 561 + return ttm_lru_item_to_res(lru); 562 + } 602 563 603 564 return NULL; 604 565 }
+12
drivers/gpu/drm/v3d/v3d_bo.c
··· 26 26 #include "v3d_drv.h" 27 27 #include "uapi/drm/v3d_drm.h" 28 28 29 + static enum drm_gem_object_status v3d_gem_status(struct drm_gem_object *obj) 30 + { 31 + struct v3d_bo *bo = to_v3d_bo(obj); 32 + enum drm_gem_object_status res = 0; 33 + 34 + if (bo->base.pages) 35 + res |= DRM_GEM_OBJECT_RESIDENT; 36 + 37 + return res; 38 + } 39 + 29 40 /* Called DRM core on the last userspace/kernel unreference of the 30 41 * BO. 31 42 */ ··· 74 63 .vmap = drm_gem_shmem_object_vmap, 75 64 .vunmap = drm_gem_shmem_object_vunmap, 76 65 .mmap = drm_gem_shmem_object_mmap, 66 + .status = v3d_gem_status, 77 67 .vm_ops = &drm_gem_shmem_vm_ops, 78 68 }; 79 69
+4 -7
drivers/gpu/drm/v3d/v3d_drv.c
··· 95 95 args->value = 1; 96 96 return 0; 97 97 case DRM_V3D_PARAM_MAX_PERF_COUNTERS: 98 - args->value = v3d->max_counters; 98 + args->value = v3d->perfmon_info.max_counters; 99 99 return 0; 100 100 default: 101 101 DRM_DEBUG("Unknown parameter %d\n", args->param); ··· 184 184 drm_printf(p, "v3d-jobs-%s: \t%llu jobs\n", 185 185 v3d_queue_to_string(queue), jobs_completed); 186 186 } 187 + 188 + drm_show_memory_stats(p, file); 187 189 } 188 190 189 191 static const struct file_operations v3d_drm_fops = { ··· 303 301 ident3 = V3D_READ(V3D_HUB_IDENT3); 304 302 v3d->rev = V3D_GET_FIELD(ident3, V3D_HUB_IDENT3_IPREV); 305 303 306 - if (v3d->ver >= 71) 307 - v3d->max_counters = V3D_V71_NUM_PERFCOUNTERS; 308 - else if (v3d->ver >= 42) 309 - v3d->max_counters = V3D_V42_NUM_PERFCOUNTERS; 310 - else 311 - v3d->max_counters = 0; 304 + v3d_perfmon_init(v3d); 312 305 313 306 v3d->reset = devm_reset_control_get_exclusive(dev, NULL); 314 307 if (IS_ERR(v3d->reset)) {
+3 -9
drivers/gpu/drm/v3d/v3d_drv.h
··· 106 106 107 107 bool single_irq_line; 108 108 109 - /* Different revisions of V3D have different total number of performance 110 - * counters 111 - */ 112 - unsigned int max_counters; 109 + struct v3d_perfmon_info perfmon_info; 113 110 114 111 void __iomem *hub_regs; 115 112 void __iomem *core_regs[3]; ··· 350 353 struct drm_syncobj *syncobj; 351 354 }; 352 355 353 - /* Number of perfmons required to handle all supported performance counters */ 354 - #define V3D_MAX_PERFMONS DIV_ROUND_UP(V3D_MAX_COUNTERS, \ 355 - DRM_V3D_MAX_PERF_COUNTERS) 356 - 357 356 struct v3d_performance_query { 358 357 /* Performance monitor IDs for this query */ 359 - u32 kperfmon_ids[V3D_MAX_PERFMONS]; 358 + u32 *kperfmon_ids; 360 359 361 360 /* Syncobj that indicates the query availability */ 362 361 struct drm_syncobj *syncobj; ··· 567 574 void v3d_sched_fini(struct v3d_dev *v3d); 568 575 569 576 /* v3d_perfmon.c */ 577 + void v3d_perfmon_init(struct v3d_dev *v3d); 570 578 void v3d_perfmon_get(struct v3d_perfmon *perfmon); 571 579 void v3d_perfmon_put(struct v3d_perfmon *perfmon); 572 580 void v3d_perfmon_start(struct v3d_dev *v3d, struct v3d_perfmon *perfmon);
+23 -17
drivers/gpu/drm/v3d/v3d_perfmon.c
··· 195 195 {"QPU", "QPU-stalls-other", "[QPU] Stalled qcycles waiting for any other reason (vary/W/Z)"}, 196 196 }; 197 197 198 + void v3d_perfmon_init(struct v3d_dev *v3d) 199 + { 200 + const struct v3d_perf_counter_desc *counters = NULL; 201 + unsigned int max = 0; 202 + 203 + if (v3d->ver >= 71) { 204 + counters = v3d_v71_performance_counters; 205 + max = ARRAY_SIZE(v3d_v71_performance_counters); 206 + } else if (v3d->ver >= 42) { 207 + counters = v3d_v42_performance_counters; 208 + max = ARRAY_SIZE(v3d_v42_performance_counters); 209 + } 210 + 211 + v3d->perfmon_info.max_counters = max; 212 + v3d->perfmon_info.counters = counters; 213 + } 214 + 198 215 void v3d_perfmon_get(struct v3d_perfmon *perfmon) 199 216 { 200 217 if (perfmon) ··· 338 321 339 322 /* Make sure all counters are valid. */ 340 323 for (i = 0; i < req->ncounters; i++) { 341 - if (req->counters[i] >= v3d->max_counters) 324 + if (req->counters[i] >= v3d->perfmon_info.max_counters) 342 325 return -EINVAL; 343 326 } 344 327 ··· 433 416 return -EINVAL; 434 417 } 435 418 419 + if (!v3d->perfmon_info.max_counters) 420 + return -EOPNOTSUPP; 421 + 436 422 /* Make sure that the counter ID is valid */ 437 - if (req->counter >= v3d->max_counters) 423 + if (req->counter >= v3d->perfmon_info.max_counters) 438 424 return -EINVAL; 439 425 440 - BUILD_BUG_ON(ARRAY_SIZE(v3d_v42_performance_counters) != 441 - V3D_V42_NUM_PERFCOUNTERS); 442 - BUILD_BUG_ON(ARRAY_SIZE(v3d_v71_performance_counters) != 443 - V3D_V71_NUM_PERFCOUNTERS); 444 - BUILD_BUG_ON(V3D_MAX_COUNTERS < V3D_V42_NUM_PERFCOUNTERS); 445 - BUILD_BUG_ON(V3D_MAX_COUNTERS < V3D_V71_NUM_PERFCOUNTERS); 446 - BUILD_BUG_ON((V3D_MAX_COUNTERS != V3D_V42_NUM_PERFCOUNTERS) && 447 - (V3D_MAX_COUNTERS != V3D_V71_NUM_PERFCOUNTERS)); 448 - 449 - if (v3d->ver >= 71) 450 - counter = &v3d_v71_performance_counters[req->counter]; 451 - else if (v3d->ver >= 42) 452 - counter = &v3d_v42_performance_counters[req->counter]; 453 - else 454 - return -EOPNOTSUPP; 426 + counter = 
&v3d->perfmon_info.counters[req->counter]; 455 427 456 428 strscpy(req->name, counter->name, sizeof(req->name)); 457 429 strscpy(req->category, counter->category, sizeof(req->category));
+11 -5
drivers/gpu/drm/v3d/v3d_performance_counters.h
··· 19 19 char description[256]; 20 20 }; 21 21 22 + struct v3d_perfmon_info { 23 + /* 24 + * Different revisions of V3D have different total number of 25 + * performance counters. 26 + */ 27 + unsigned int max_counters; 22 28 23 - #define V3D_V42_NUM_PERFCOUNTERS (87) 24 - #define V3D_V71_NUM_PERFCOUNTERS (93) 25 - 26 - /* Maximum number of performance counters supported by any version of V3D */ 27 - #define V3D_MAX_COUNTERS (93) 29 + /* 30 + * Array of counters valid for the platform. 31 + */ 32 + const struct v3d_perf_counter_desc *counters; 33 + }; 28 34 29 35 #endif
+43 -36
drivers/gpu/drm/v3d/v3d_sched.c
··· 94 94 if (query_info->queries) { 95 95 unsigned int i; 96 96 97 - for (i = 0; i < count; i++) 97 + for (i = 0; i < count; i++) { 98 98 drm_syncobj_put(query_info->queries[i].syncobj); 99 + kvfree(query_info->queries[i].kperfmon_ids); 100 + } 99 101 100 102 kvfree(query_info->queries); 101 103 } ··· 353 351 struct v3d_bo *bo = to_v3d_bo(job->base.bo[0]); 354 352 struct v3d_bo *indirect = to_v3d_bo(indirect_csd->indirect); 355 353 struct drm_v3d_submit_csd *args = &indirect_csd->job->args; 356 - struct v3d_dev *v3d = job->base.v3d; 357 - u32 num_batches, *wg_counts; 354 + u32 *wg_counts; 358 355 359 356 v3d_get_bo_vaddr(bo); 360 357 v3d_get_bo_vaddr(indirect); ··· 366 365 args->cfg[0] = wg_counts[0] << V3D_CSD_CFG012_WG_COUNT_SHIFT; 367 366 args->cfg[1] = wg_counts[1] << V3D_CSD_CFG012_WG_COUNT_SHIFT; 368 367 args->cfg[2] = wg_counts[2] << V3D_CSD_CFG012_WG_COUNT_SHIFT; 369 - 370 - num_batches = DIV_ROUND_UP(indirect_csd->wg_size, 16) * 371 - (wg_counts[0] * wg_counts[1] * wg_counts[2]); 372 - 373 - /* V3D 7.1.6 and later don't subtract 1 from the number of batches */ 374 - if (v3d->ver < 71 || (v3d->ver == 71 && v3d->rev < 6)) 375 - args->cfg[4] = num_batches - 1; 376 - else 377 - args->cfg[4] = num_batches; 378 - 379 - WARN_ON(args->cfg[4] == ~0); 368 + args->cfg[4] = DIV_ROUND_UP(indirect_csd->wg_size, 16) * 369 + (wg_counts[0] * wg_counts[1] * wg_counts[2]) - 1; 380 370 381 371 for (int i = 0; i < 3; i++) { 382 372 /* 0xffffffff indicates that the uniform rewrite is not needed */ ··· 421 429 v3d_put_bo_vaddr(bo); 422 430 } 423 431 424 - static void 425 - write_to_buffer(void *dst, u32 idx, bool do_64bit, u64 value) 432 + static void write_to_buffer_32(u32 *dst, unsigned int idx, u32 value) 426 433 { 427 - if (do_64bit) { 428 - u64 *dst64 = (u64 *)dst; 434 + dst[idx] = value; 435 + } 429 436 430 - dst64[idx] = value; 431 - } else { 432 - u32 *dst32 = (u32 *)dst; 437 + static void write_to_buffer_64(u64 *dst, unsigned int idx, u64 value) 438 + { 439 + dst[idx] 
= value; 440 + } 433 441 434 - dst32[idx] = (u32)value; 435 - } 442 + static void 443 + write_to_buffer(void *dst, unsigned int idx, bool do_64bit, u64 value) 444 + { 445 + if (do_64bit) 446 + write_to_buffer_64(dst, idx, value); 447 + else 448 + write_to_buffer_32(dst, idx, value); 436 449 } 437 450 438 451 static void ··· 510 513 } 511 514 512 515 static void 513 - v3d_write_performance_query_result(struct v3d_cpu_job *job, void *data, u32 query) 516 + v3d_write_performance_query_result(struct v3d_cpu_job *job, void *data, 517 + unsigned int query) 514 518 { 515 - struct v3d_performance_query_info *performance_query = &job->performance_query; 516 - struct v3d_copy_query_results_info *copy = &job->copy; 519 + struct v3d_performance_query_info *performance_query = 520 + &job->performance_query; 517 521 struct v3d_file_priv *v3d_priv = job->base.file->driver_priv; 522 + struct v3d_performance_query *perf_query = 523 + &performance_query->queries[query]; 518 524 struct v3d_dev *v3d = job->base.v3d; 519 - struct v3d_perfmon *perfmon; 520 - u64 counter_values[V3D_MAX_COUNTERS]; 525 + unsigned int i, j, offset; 521 526 522 - for (int i = 0; i < performance_query->nperfmons; i++) { 527 + for (i = 0, offset = 0; 528 + i < performance_query->nperfmons; 529 + i++, offset += DRM_V3D_MAX_PERF_COUNTERS) { 530 + struct v3d_perfmon *perfmon; 531 + 523 532 perfmon = v3d_perfmon_find(v3d_priv, 524 - performance_query->queries[query].kperfmon_ids[i]); 533 + perf_query->kperfmon_ids[i]); 525 534 if (!perfmon) { 526 535 DRM_DEBUG("Failed to find perfmon."); 527 536 continue; ··· 535 532 536 533 v3d_perfmon_stop(v3d, perfmon, true); 537 534 538 - memcpy(&counter_values[i * DRM_V3D_MAX_PERF_COUNTERS], perfmon->values, 539 - perfmon->ncounters * sizeof(u64)); 535 + if (job->copy.do_64bit) { 536 + for (j = 0; j < perfmon->ncounters; j++) 537 + write_to_buffer_64(data, offset + j, 538 + perfmon->values[j]); 539 + } else { 540 + for (j = 0; j < perfmon->ncounters; j++) 541 + 
write_to_buffer_32(data, offset + j, 542 + perfmon->values[j]); 543 + } 540 544 541 545 v3d_perfmon_put(perfmon); 542 546 } 543 - 544 - for (int i = 0; i < performance_query->ncounters; i++) 545 - write_to_buffer(data, i, copy->do_64bit, counter_values[i]); 546 547 } 547 548 548 549 static void ··· 653 646 654 647 /* Unblock schedulers and restart their jobs. */ 655 648 for (q = 0; q < V3D_MAX_QUEUES; q++) { 656 - drm_sched_start(&v3d->queue[q].sched, true); 649 + drm_sched_start(&v3d->queue[q].sched); 657 650 } 658 651 659 652 mutex_unlock(&v3d->reset_lock);
+131 -132
drivers/gpu/drm/v3d/v3d_submit.c
··· 452 452 { 453 453 u32 __user *offsets, *syncs; 454 454 struct drm_v3d_timestamp_query timestamp; 455 + struct v3d_timestamp_query_info *query_info = &job->timestamp_query; 455 456 unsigned int i; 456 457 int err; 457 458 ··· 474 473 475 474 job->job_type = V3D_CPU_JOB_TYPE_TIMESTAMP_QUERY; 476 475 477 - job->timestamp_query.queries = kvmalloc_array(timestamp.count, 478 - sizeof(struct v3d_timestamp_query), 479 - GFP_KERNEL); 480 - if (!job->timestamp_query.queries) 476 + query_info->queries = kvmalloc_array(timestamp.count, 477 + sizeof(struct v3d_timestamp_query), 478 + GFP_KERNEL); 479 + if (!query_info->queries) 481 480 return -ENOMEM; 482 481 483 482 offsets = u64_to_user_ptr(timestamp.offsets); ··· 486 485 for (i = 0; i < timestamp.count; i++) { 487 486 u32 offset, sync; 488 487 489 - if (copy_from_user(&offset, offsets++, sizeof(offset))) { 488 + if (get_user(offset, offsets++)) { 490 489 err = -EFAULT; 491 490 goto error; 492 491 } 493 492 494 - job->timestamp_query.queries[i].offset = offset; 493 + query_info->queries[i].offset = offset; 495 494 496 - if (copy_from_user(&sync, syncs++, sizeof(sync))) { 495 + if (get_user(sync, syncs++)) { 497 496 err = -EFAULT; 498 497 goto error; 499 498 } 500 499 501 - job->timestamp_query.queries[i].syncobj = drm_syncobj_find(file_priv, sync); 502 - if (!job->timestamp_query.queries[i].syncobj) { 500 + query_info->queries[i].syncobj = drm_syncobj_find(file_priv, 501 + sync); 502 + if (!query_info->queries[i].syncobj) { 503 503 err = -ENOENT; 504 504 goto error; 505 505 } 506 506 } 507 - job->timestamp_query.count = timestamp.count; 507 + query_info->count = timestamp.count; 508 508 509 509 return 0; 510 510 ··· 521 519 { 522 520 u32 __user *syncs; 523 521 struct drm_v3d_reset_timestamp_query reset; 522 + struct v3d_timestamp_query_info *query_info = &job->timestamp_query; 524 523 unsigned int i; 525 524 int err; 526 525 ··· 540 537 541 538 job->job_type = V3D_CPU_JOB_TYPE_RESET_TIMESTAMP_QUERY; 542 539 543 - job->timestamp_query.queries = kvmalloc_array(reset.count, 544 - sizeof(struct v3d_timestamp_query), 545 - GFP_KERNEL); 546 - if (!job->timestamp_query.queries) 540 + query_info->queries = kvmalloc_array(reset.count, 541 + sizeof(struct v3d_timestamp_query), 542 + GFP_KERNEL); 543 + if (!query_info->queries) 547 544 return -ENOMEM; 548 545 549 546 syncs = u64_to_user_ptr(reset.syncs); ··· 551 548 for (i = 0; i < reset.count; i++) { 552 549 u32 sync; 553 550 554 - job->timestamp_query.queries[i].offset = reset.offset + 8 * i; 551 + query_info->queries[i].offset = reset.offset + 8 * i; 555 552 556 - if (copy_from_user(&sync, syncs++, sizeof(sync))) { 553 + if (get_user(sync, syncs++)) { 557 554 err = -EFAULT; 558 555 goto error; 559 556 } 560 557 561 - job->timestamp_query.queries[i].syncobj = drm_syncobj_find(file_priv, sync); 562 - if (!job->timestamp_query.queries[i].syncobj) { 558 + query_info->queries[i].syncobj = drm_syncobj_find(file_priv, 559 + sync); 560 + if (!query_info->queries[i].syncobj) { 563 561 err = -ENOENT; 564 562 goto error; 565 563 } 566 564 } 567 - job->timestamp_query.count = reset.count; 565 + query_info->count = reset.count; 568 566 569 567 return 0; 570 568 ··· 582 578 { 583 579 u32 __user *offsets, *syncs; 584 580 struct drm_v3d_copy_timestamp_query copy; 581 + struct v3d_timestamp_query_info *query_info = &job->timestamp_query; 585 582 unsigned int i; 586 583 int err; 587 584 ··· 604 599 605 600 job->job_type = V3D_CPU_JOB_TYPE_COPY_TIMESTAMP_QUERY; 606 601 607 - job->timestamp_query.queries = kvmalloc_array(copy.count, 608 - sizeof(struct v3d_timestamp_query), 609 - GFP_KERNEL); 610 - if (!job->timestamp_query.queries) 602 + query_info->queries = kvmalloc_array(copy.count, 603 + sizeof(struct v3d_timestamp_query), 604 + GFP_KERNEL); 605 + if (!query_info->queries) 611 606 return -ENOMEM; 612 607 613 608 offsets = u64_to_user_ptr(copy.offsets); ··· 616 611 for (i = 0; i < copy.count; i++) { 617 612 u32 offset, sync; 618 613 619 - if (copy_from_user(&offset, offsets++, sizeof(offset))) { 614 + if (get_user(offset, offsets++)) { 620 615 err = -EFAULT; 621 616 goto error; 622 617 } 623 618 624 - job->timestamp_query.queries[i].offset = offset; 619 + query_info->queries[i].offset = offset; 625 620 626 - if (copy_from_user(&sync, syncs++, sizeof(sync))) { 621 + if (get_user(sync, syncs++)) { 627 622 err = -EFAULT; 628 623 goto error; 629 624 } 630 625 631 - job->timestamp_query.queries[i].syncobj = drm_syncobj_find(file_priv, sync); 632 - if (!job->timestamp_query.queries[i].syncobj) { 626 + query_info->queries[i].syncobj = drm_syncobj_find(file_priv, 627 + sync); 628 + if (!query_info->queries[i].syncobj) { 633 629 err = -ENOENT; 634 630 goto error; 635 631 } 636 632 } 637 - job->timestamp_query.count = copy.count; 633 + query_info->count = copy.count; 638 634 639 635 job->copy.do_64bit = copy.do_64bit; 640 636 job->copy.do_partial = copy.do_partial; ··· 651 645 } 652 646 653 647 static int 648 + v3d_copy_query_info(struct v3d_performance_query_info *query_info, 649 + unsigned int count, 650 + unsigned int nperfmons, 651 + u32 __user *syncs, 652 + u64 __user *kperfmon_ids, 653 + struct drm_file *file_priv) 654 + { 655 + unsigned int i, j; 656 + int err; 657 + 658 + for (i = 0; i < count; i++) { 659 + struct v3d_performance_query *query = &query_info->queries[i]; 660 + u32 __user *ids_pointer; 661 + u32 sync, id; 662 + u64 ids; 663 + 664 + if (get_user(sync, syncs++)) { 665 + err = -EFAULT; 666 + goto error; 667 + } 668 + 669 + if (get_user(ids, kperfmon_ids++)) { 670 + err = -EFAULT; 671 + goto error; 672 + } 673 + 674 + query->kperfmon_ids = 675 + kvmalloc_array(nperfmons, 676 + sizeof(struct v3d_performance_query *), 677 + GFP_KERNEL); 678 + if (!query->kperfmon_ids) { 679 + err = -ENOMEM; 680 + goto error; 681 + } 682 + 683 + ids_pointer = u64_to_user_ptr(ids); 684 + 685 + for (j = 0; j < nperfmons; j++) { 686 + if (get_user(id, ids_pointer++)) { 687 + kvfree(query->kperfmon_ids); 688 + err = -EFAULT; 689 + goto error; 690 + } 691 + 692 + query->kperfmon_ids[j] = id; 693 + } 694 + 695 + query->syncobj = drm_syncobj_find(file_priv, sync); 696 + if (!query->syncobj) { 697 + kvfree(query->kperfmon_ids); 698 + err = -ENOENT; 699 + goto error; 700 + } 701 + } 702 + 703 + return 0; 704 + 705 + error: 706 + v3d_performance_query_info_free(query_info, i); 707 + return err; 708 + } 709 + 710 + static int 654 711 v3d_get_cpu_reset_performance_params(struct drm_file *file_priv, 655 712 struct drm_v3d_extension __user *ext, 656 713 struct v3d_cpu_job *job) 657 714 { 658 - u32 __user *syncs; 659 - u64 __user *kperfmon_ids; 715 + struct v3d_performance_query_info *query_info = &job->performance_query; 660 716 struct drm_v3d_reset_performance_query reset; 661 - unsigned int i, j; 662 717 int err; 663 718 664 719 if (!job) { ··· 735 668 if (copy_from_user(&reset, ext, sizeof(reset))) 736 669 return -EFAULT; 737 670 738 - if (reset.nperfmons > V3D_MAX_PERFMONS) 739 - return -EINVAL; 740 - 741 671 job->job_type = V3D_CPU_JOB_TYPE_RESET_PERFORMANCE_QUERY; 742 672 743 - job->performance_query.queries = kvmalloc_array(reset.count, 744 - sizeof(struct v3d_performance_query), 745 - GFP_KERNEL); 746 - if (!job->performance_query.queries) 673 + query_info->queries = 674 + kvmalloc_array(reset.count, 675 + sizeof(struct v3d_performance_query), 676 + GFP_KERNEL); 677 + if (!query_info->queries) 747 678 return -ENOMEM; 748 679 749 - syncs = u64_to_user_ptr(reset.syncs); 750 - kperfmon_ids = u64_to_user_ptr(reset.kperfmon_ids); 680 + err = v3d_copy_query_info(query_info, 681 + reset.count, 682 + reset.nperfmons, 683 + u64_to_user_ptr(reset.syncs), 684 + u64_to_user_ptr(reset.kperfmon_ids), 685 + file_priv); 686 + if (err) 687 + return err; 751 688 752 - for (i = 0; i < reset.count; i++) { 753 - u32 sync; 754 - u64 ids; 755 - u32 __user *ids_pointer; 756 - u32 id; 757 - 758 - if (copy_from_user(&sync, syncs++, sizeof(sync))) { 759 - err = -EFAULT; 760 - goto error; 761 - } 762 - 763 - if (copy_from_user(&ids, kperfmon_ids++, sizeof(ids))) { 764 - err = -EFAULT; 765 - goto error; 766 - } 767 - 768 - ids_pointer = u64_to_user_ptr(ids); 769 - 770 - for (j = 0; j < reset.nperfmons; j++) { 771 - if (copy_from_user(&id, ids_pointer++, sizeof(id))) { 772 - err = -EFAULT; 773 - goto error; 774 - } 775 - 776 - job->performance_query.queries[i].kperfmon_ids[j] = id; 777 - } 778 - 779 - job->performance_query.queries[i].syncobj = drm_syncobj_find(file_priv, sync); 780 - if (!job->performance_query.queries[i].syncobj) { 781 - err = -ENOENT; 782 - goto error; 783 - } 784 - } 785 - job->performance_query.count = reset.count; 786 - job->performance_query.nperfmons = reset.nperfmons; 689 + query_info->count = reset.count; 690 + query_info->nperfmons = reset.nperfmons; 787 691 788 692 return 0; 789 - 790 - error: 791 - v3d_performance_query_info_free(&job->performance_query, i); 792 - return err; 793 693 } 794 694 795 695 static int ··· 764 730 struct drm_v3d_extension __user *ext, 765 731 struct v3d_cpu_job *job) 766 732 { 767 - u32 __user *syncs; 768 - u64 __user *kperfmon_ids; 733 + struct v3d_performance_query_info *query_info = &job->performance_query; 769 734 struct drm_v3d_copy_performance_query copy; 770 - unsigned int i, j; 771 735 int err; 772 736 773 737 if (!job) { ··· 784 752 if (copy.pad) 785 753 return -EINVAL; 786 754 787 - if (copy.nperfmons > V3D_MAX_PERFMONS) 788 - return -EINVAL; 789 - 790 755 job->job_type = V3D_CPU_JOB_TYPE_COPY_PERFORMANCE_QUERY; 791 756 792 - job->performance_query.queries = kvmalloc_array(copy.count, 793 - sizeof(struct v3d_performance_query), 794 - GFP_KERNEL); 795 - if (!job->performance_query.queries) 757 + query_info->queries = 758 + kvmalloc_array(copy.count, 759 + sizeof(struct v3d_performance_query), 760 + GFP_KERNEL); 761 + if (!query_info->queries) 796 762 return -ENOMEM; 797 763 798 - syncs = u64_to_user_ptr(copy.syncs); 799 - kperfmon_ids = u64_to_user_ptr(copy.kperfmon_ids); 764 + err = v3d_copy_query_info(query_info, 765 + copy.count, 766 + copy.nperfmons, 767 + u64_to_user_ptr(copy.syncs), 768 + u64_to_user_ptr(copy.kperfmon_ids), 769 + file_priv); 770 + if (err) 771 + return err; 800 772 801 - for (i = 0; i < copy.count; i++) { 802 - u32 sync; 803 - u64 ids; 804 - u32 __user *ids_pointer; 805 - u32 id; 806 - 807 - if (copy_from_user(&sync, syncs++, sizeof(sync))) { 808 - err = -EFAULT; 809 - goto error; 810 - } 811 - 812 - if (copy_from_user(&ids, kperfmon_ids++, sizeof(ids))) { 813 - err = -EFAULT; 814 - goto error; 815 - } 816 - 817 - ids_pointer = u64_to_user_ptr(ids); 818 - 819 - for (j = 0; j < copy.nperfmons; j++) { 820 - if (copy_from_user(&id, ids_pointer++, sizeof(id))) { 821 - err = -EFAULT; 822 - goto error; 823 - } 824 - 825 - job->performance_query.queries[i].kperfmon_ids[j] = id; 826 - } 827 - 828 - job->performance_query.queries[i].syncobj = drm_syncobj_find(file_priv, sync); 829 - if (!job->performance_query.queries[i].syncobj) { 830 - err = -ENOENT; 831 - goto error; 832 - } 833 - } 834 - job->performance_query.count = copy.count; 835 - job->performance_query.nperfmons = copy.nperfmons; 836 - job->performance_query.ncounters = copy.ncounters; 773 + query_info->count = copy.count; 774 + query_info->nperfmons = copy.nperfmons; 775 + query_info->ncounters = copy.ncounters; 837 776 838 777 job->copy.do_64bit = copy.do_64bit; 839 778 job->copy.do_partial = copy.do_partial; ··· 813 810 job->copy.stride = copy.stride; 814 811 815 812 return 0; 816 - 817 - error: 818 - v3d_performance_query_info_free(&job->performance_query, i); 819 - return err; 820 813 } 821 814 822 815 /* Whenever userspace sets ioctl extensions, v3d_get_extensions parses data
-1
drivers/gpu/drm/vkms/vkms_drv.h
··· 103 103 struct drm_writeback_connector wb_connector; 104 104 struct hrtimer vblank_hrtimer; 105 105 ktime_t period_ns; 106 - struct drm_pending_vblank_event *event; 107 106 /* ordered wq for composer_work */ 108 107 struct workqueue_struct *composer_workq; 109 108 /* protects concurrent access to composer */
+4
drivers/gpu/drm/xe/xe_vm.c
··· 1402 1402 init_rwsem(&vm->userptr.notifier_lock); 1403 1403 spin_lock_init(&vm->userptr.invalidated_lock); 1404 1404 1405 + ttm_lru_bulk_move_init(&vm->lru_bulk_move); 1406 + 1405 1407 INIT_WORK(&vm->destroy_work, vm_destroy_work_func); 1406 1408 1407 1409 INIT_LIST_HEAD(&vm->preempt.exec_queues); ··· 1529 1527 mutex_destroy(&vm->snap_mutex); 1530 1528 for_each_tile(tile, xe, id) 1531 1529 xe_range_fence_tree_fini(&vm->rftree[id]); 1530 + ttm_lru_bulk_move_fini(&xe->ttm, &vm->lru_bulk_move); 1532 1531 kfree(vm); 1533 1532 if (flags & XE_VM_FLAG_LR_MODE) 1534 1533 xe_pm_runtime_put(xe); ··· 1673 1670 XE_WARN_ON(vm->pt_root[id]); 1674 1671 1675 1672 trace_xe_vm_free(vm); 1673 + ttm_lru_bulk_move_fini(&xe->ttm, &vm->lru_bulk_move); 1676 1674 1677 1675 if (vm->xef) 1678 1676 xe_file_put(vm->xef);
+2 -2
drivers/hv/hv_common.c
··· 207 207 * buffer and call into Hyper-V to transfer the data. 208 208 */ 209 209 static void hv_kmsg_dump(struct kmsg_dumper *dumper, 210 - enum kmsg_dump_reason reason) 210 + struct kmsg_dump_detail *detail) 211 211 { 212 212 struct kmsg_dump_iter iter; 213 213 size_t bytes_written; 214 214 215 215 /* We are only interested in panics. */ 216 - if (reason != KMSG_DUMP_PANIC || !sysctl_record_panic_msg) 216 + if (detail->reason != KMSG_DUMP_PANIC || !sysctl_record_panic_msg) 217 217 return; 218 218 219 219 /*
+3 -3
drivers/mtd/mtdoops.c
··· 298 298 } 299 299 300 300 static void mtdoops_do_dump(struct kmsg_dumper *dumper, 301 - enum kmsg_dump_reason reason) 301 + struct kmsg_dump_detail *detail) 302 302 { 303 303 struct mtdoops_context *cxt = container_of(dumper, 304 304 struct mtdoops_context, dump); 305 305 struct kmsg_dump_iter iter; 306 306 307 307 /* Only dump oopses if dump_oops is set */ 308 - if (reason == KMSG_DUMP_OOPS && !dump_oops) 308 + if (detail->reason == KMSG_DUMP_OOPS && !dump_oops) 309 309 return; 310 310 311 311 kmsg_dump_rewind(&iter); ··· 317 317 record_size - sizeof(struct mtdoops_hdr), NULL); 318 318 clear_bit(0, &cxt->oops_buf_busy); 319 319 320 - if (reason != KMSG_DUMP_OOPS) { 320 + if (detail->reason != KMSG_DUMP_OOPS) { 321 321 /* Panics must be written immediately */ 322 322 mtdoops_write(cxt, 1); 323 323 } else {
+15 -1
drivers/video/fbdev/core/fbcon.c
··· 64 64 #include <linux/console.h> 65 65 #include <linux/string.h> 66 66 #include <linux/kd.h> 67 + #include <linux/panic.h> 68 + #include <linux/printk.h> 67 69 #include <linux/slab.h> 68 70 #include <linux/fb.h> 69 71 #include <linux/fbcon.h> ··· 272 270 return (ops) ? ops->rotate : 0; 273 271 } 274 272 273 + static bool fbcon_skip_panic(struct fb_info *info) 274 + { 275 + /* panic_cpu is not exported, and can't be used if built as module. Use 276 + * oops_in_progress instead, but non-fatal oops won't be printed. 277 + */ 278 + #if defined(MODULE) 279 + return (info->skip_panic && unlikely(oops_in_progress)); 280 + #else 281 + return (info->skip_panic && unlikely(atomic_read(&panic_cpu) != PANIC_CPU_INVALID)); 282 + #endif 283 + } 284 + 275 285 static inline int fbcon_is_inactive(struct vc_data *vc, struct fb_info *info) 276 286 { 277 287 struct fbcon_ops *ops = info->fbcon_par; 278 288 279 289 return (info->state != FBINFO_STATE_RUNNING || 280 - vc->vc_mode != KD_TEXT || ops->graphics); 290 + vc->vc_mode != KD_TEXT || ops->graphics || fbcon_skip_panic(info)); 281 291 } 282 292 283 293 static int get_color(struct vc_data *vc, struct fb_info *info,
+5 -5
fs/pstore/platform.c
··· 275 275 * end of the buffer. 276 276 */ 277 277 static void pstore_dump(struct kmsg_dumper *dumper, 278 - enum kmsg_dump_reason reason) 278 + struct kmsg_dump_detail *detail) 279 279 { 280 280 struct kmsg_dump_iter iter; 281 281 unsigned long total = 0; ··· 285 285 int saved_ret = 0; 286 286 int ret; 287 287 288 - why = kmsg_dump_reason_str(reason); 288 + why = kmsg_dump_reason_str(detail->reason); 289 289 290 - if (pstore_cannot_block_path(reason)) { 290 + if (pstore_cannot_block_path(detail->reason)) { 291 291 if (!spin_trylock_irqsave(&psinfo->buf_lock, flags)) { 292 292 pr_err("dump skipped in %s path because of concurrent dump\n", 293 293 in_nmi() ? "NMI" : why); ··· 311 311 pstore_record_init(&record, psinfo); 312 312 record.type = PSTORE_TYPE_DMESG; 313 313 record.count = oopscount; 314 - record.reason = reason; 314 + record.reason = detail->reason; 315 315 record.part = part; 316 316 record.buf = psinfo->buf; 317 317 ··· 352 352 } 353 353 354 354 ret = psinfo->write(&record); 355 - if (ret == 0 && reason == KMSG_DUMP_OOPS) { 355 + if (ret == 0 && detail->reason == KMSG_DUMP_OOPS) { 356 356 pstore_new_entry = 1; 357 357 pstore_timer_kick(); 358 358 } else {
+4
include/drm/display/drm_dp.h
··· 1543 1543 #define DP_SYMBOL_ERROR_COUNT_LANE2_PHY_REPEATER1 0xf0039 /* 1.3 */ 1544 1544 #define DP_SYMBOL_ERROR_COUNT_LANE3_PHY_REPEATER1 0xf003b /* 1.3 */ 1545 1545 1546 + #define DP_OUI_PHY_REPEATER1 0xf003d /* 1.3 */ 1547 + #define DP_OUI_PHY_REPEATER(dp_phy) \ 1548 + DP_LTTPR_REG(dp_phy, DP_OUI_PHY_REPEATER1) 1549 + 1546 1550 #define __DP_FEC1_BASE 0xf0290 /* 1.4 */ 1547 1551 #define __DP_FEC2_BASE 0xf0298 /* 1.4 */ 1548 1552 #define DP_FEC_BASE(dp_phy) \
+3
include/drm/display/drm_dp_helper.h
··· 112 112 * @target_rr: Target Refresh 113 113 * @duration_incr_ms: Successive frame duration increase 114 114 * @duration_decr_ms: Successive frame duration decrease 115 + * @target_rr_divider: Target refresh rate divider 115 116 * @mode: Adaptive Sync Operation Mode 116 117 */ 117 118 struct drm_dp_as_sdp { ··· 657 656 658 657 int drm_dp_read_desc(struct drm_dp_aux *aux, struct drm_dp_desc *desc, 659 658 bool is_branch); 659 + 660 + int drm_dp_dump_lttpr_desc(struct drm_dp_aux *aux, enum drm_dp_phy dp_phy); 660 661 661 662 /** 662 663 * enum drm_dp_quirk - Display Port sink/branch device specific quirks
+2 -8
include/drm/drm_connector.h
··· 471 471 * 472 472 * DP definitions come from the DP v2.0 spec 473 473 * HDMI definitions come from the CTA-861-H spec 474 - * 475 - * A note on YCC and RGB variants: 476 - * 477 - * Since userspace is not aware of the encoding on the wire 478 - * (RGB or YCbCr), drivers are free to pick the appropriate 479 - * variant, regardless of what userspace selects. E.g., if 480 - * BT2020_RGB is selected by userspace a driver will pick 481 - * BT2020_YCC if the encoding on the wire is YUV444 or YUV420. 482 474 * 483 475 * @DRM_MODE_COLORIMETRY_DEFAULT: 484 476 * Driver specific behavior. ··· 2267 2275 u32 supported_colorspaces); 2268 2276 int drm_mode_create_content_type_property(struct drm_device *dev); 2269 2277 int drm_mode_create_suggested_offset_properties(struct drm_device *dev); 2278 + int drm_mode_create_power_saving_policy_property(struct drm_device *dev, 2279 + uint64_t supported_policies); 2270 2280 2271 2281 int drm_connector_set_path_property(struct drm_connector *connector, 2272 2282 const char *path);
+3 -2
include/drm/drm_device.h
··· 213 213 * This can be set to true it the hardware has a working vblank counter 214 214 * with high-precision timestamping (otherwise there are races) and the 215 215 * driver uses drm_crtc_vblank_on() and drm_crtc_vblank_off() 216 - * appropriately. See also @max_vblank_count and 217 - * &drm_crtc_funcs.get_vblank_counter. 216 + * appropriately. Also, see @max_vblank_count, 217 + * &drm_crtc_funcs.get_vblank_counter and 218 + * &drm_vblank_crtc_config.disable_immediate. 218 219 */ 219 220 bool vblank_disable_immediate; 220 221
+5
include/drm/drm_mode_config.h
··· 969 969 */ 970 970 struct drm_atomic_state *suspend_state; 971 971 972 + /** 973 + * @power_saving_policy: bitmask for power saving policy requests. 974 + */ 975 + struct drm_property *power_saving_policy; 976 + 972 977 const struct drm_mode_config_helper_funcs *helper_private; 973 978 }; 974 979
+35 -2
include/drm/drm_vblank.h
··· 79 79 }; 80 80 81 81 /** 82 + * struct drm_vblank_crtc_config - vblank configuration for a CRTC 83 + */ 84 + struct drm_vblank_crtc_config { 85 + /** 86 + * @offdelay_ms: Vblank off delay in ms, used to determine how long 87 + * &drm_vblank_crtc.disable_timer waits before disabling. 88 + * 89 + * Defaults to the value of drm_vblank_offdelay in drm_crtc_vblank_on(). 90 + */ 91 + int offdelay_ms; 92 + 93 + /** 94 + * @disable_immediate: See &drm_device.vblank_disable_immediate 95 + * for the exact semantics of immediate vblank disabling. 96 + * 97 + * Additionally, this tracks the disable immediate value per crtc, just 98 + * in case it needs to differ from the default value for a given device. 99 + * 100 + * Defaults to the value of &drm_device.vblank_disable_immediate in 101 + * drm_crtc_vblank_on(). 102 + */ 103 + bool disable_immediate; 104 + }; 105 + 106 + /** 82 107 * struct drm_vblank_crtc - vblank tracking for a CRTC 83 108 * 84 109 * This structure tracks the vblank state for one CRTC. ··· 124 99 wait_queue_head_t queue; 125 100 /** 126 101 * @disable_timer: Disable timer for the delayed vblank disabling 127 - * hysteresis logic. Vblank disabling is controlled through the 128 - * drm_vblank_offdelay module option and the setting of the 102 + * hysteresis logic. Vblank disabling is controlled through 103 + * &drm_vblank_crtc_config.offdelay_ms and the setting of the 129 104 * &drm_device.max_vblank_count value. 130 105 */ 131 106 struct timer_list disable_timer; ··· 224 199 struct drm_display_mode hwmode; 225 200 226 201 /** 202 + * @config: Stores vblank configuration values for a given CRTC. 203 + * Also, see drm_crtc_vblank_on_config(). 204 + */ 205 + struct drm_vblank_crtc_config config; 206 + 207 + /** 227 208 * @enabled: Tracks the enabling state of the corresponding &drm_crtc to 228 209 * avoid double-disabling and hence corrupting saved state. Needed by 229 210 * drivers not using atomic KMS, since those might go through their CRTC ··· 278 247 void drm_crtc_wait_one_vblank(struct drm_crtc *crtc); 279 248 void drm_crtc_vblank_off(struct drm_crtc *crtc); 280 249 void drm_crtc_vblank_reset(struct drm_crtc *crtc); 250 + void drm_crtc_vblank_on_config(struct drm_crtc *crtc, 251 + const struct drm_vblank_crtc_config *config); 281 252 void drm_crtc_vblank_on(struct drm_crtc *crtc); 282 253 u64 drm_crtc_accurate_vblank_count(struct drm_crtc *crtc); 283 254 void drm_crtc_vblank_restore(struct drm_crtc *crtc);
+1 -1
include/drm/gpu_scheduler.h
··· 579 579 void drm_sched_wqueue_stop(struct drm_gpu_scheduler *sched); 580 580 void drm_sched_wqueue_start(struct drm_gpu_scheduler *sched); 581 581 void drm_sched_stop(struct drm_gpu_scheduler *sched, struct drm_sched_job *bad); 582 - void drm_sched_start(struct drm_gpu_scheduler *sched, bool full_recovery); 582 + void drm_sched_start(struct drm_gpu_scheduler *sched); 583 583 void drm_sched_resubmit_jobs(struct drm_gpu_scheduler *sched); 584 584 void drm_sched_increase_karma(struct drm_sched_job *bad); 585 585 void drm_sched_reset_karma(struct drm_sched_job *bad);
+41 -7
include/drm/ttm/ttm_bo.h
··· 194 194 uint64_t bytes_moved; 195 195 }; 196 196 197 + struct ttm_lru_walk; 198 + 199 + /** struct ttm_lru_walk_ops - Operations for a LRU walk. */ 200 + struct ttm_lru_walk_ops { 201 + /** 202 + * process_bo - Process this bo. 203 + * @walk: struct ttm_lru_walk describing the walk. 204 + * @bo: A locked and referenced buffer object. 205 + * 206 + * Return: Negative error code on error, User-defined positive value 207 + * (typically, but not always, size of the processed bo) on success. 208 + * On success, the returned values are summed by the walk and the 209 + * walk exits when its target is met. 210 + * 0 also indicates success, -EBUSY means this bo was skipped. 211 + */ 212 + s64 (*process_bo)(struct ttm_lru_walk *walk, struct ttm_buffer_object *bo); 213 + }; 214 + 215 + /** 216 + * struct ttm_lru_walk - Structure describing a LRU walk. 217 + */ 218 + struct ttm_lru_walk { 219 + /** @ops: Pointer to the ops structure. */ 220 + const struct ttm_lru_walk_ops *ops; 221 + /** @ctx: Pointer to the struct ttm_operation_ctx. */ 222 + struct ttm_operation_ctx *ctx; 223 + /** @ticket: The struct ww_acquire_ctx if any. */ 224 + struct ww_acquire_ctx *ticket; 225 + /** @trylock_only: Only use trylock for locking. */ 226 + bool trylock_only; 227 + }; 228 + 229 + s64 ttm_lru_walk_for_evict(struct ttm_lru_walk *walk, struct ttm_device *bdev, 230 + struct ttm_resource_manager *man, s64 target); 231 + 197 232 /** 198 233 * ttm_bo_get - reference a struct ttm_buffer_object 199 234 * ··· 417 382 int ttm_bo_vmap(struct ttm_buffer_object *bo, struct iosys_map *map); 418 383 void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct iosys_map *map); 419 384 int ttm_bo_mmap_obj(struct vm_area_struct *vma, struct ttm_buffer_object *bo); 420 - int ttm_bo_swapout(struct ttm_buffer_object *bo, struct ttm_operation_ctx *ctx, 421 - gfp_t gfp_flags); 385 + s64 ttm_bo_swapout(struct ttm_device *bdev, struct ttm_operation_ctx *ctx, 386 + struct ttm_resource_manager *man, gfp_t gfp_flags, 387 + s64 target); 422 388 void ttm_bo_pin(struct ttm_buffer_object *bo); 423 389 void ttm_bo_unpin(struct ttm_buffer_object *bo); 424 - int ttm_mem_evict_first(struct ttm_device *bdev, 425 - struct ttm_resource_manager *man, 426 - const struct ttm_place *place, 427 - struct ttm_operation_ctx *ctx, 428 - struct ww_acquire_ctx *ticket); 390 + int ttm_bo_evict_first(struct ttm_device *bdev, 391 + struct ttm_resource_manager *man, 392 + struct ttm_operation_ctx *ctx); 429 393 vm_fault_t ttm_bo_vm_reserve(struct ttm_buffer_object *bo, 430 394 struct vm_fault *vmf); 431 395 vm_fault_t ttm_bo_vm_fault_reserved(struct vm_fault *vmf,
+84 -13
include/drm/ttm/ttm_resource.h
··· 49 49 struct sg_table; 50 50 struct scatterlist; 51 51 52 + /** 53 + * enum ttm_lru_item_type - enumerate ttm_lru_item subclasses 54 + */ 55 + enum ttm_lru_item_type { 56 + /** @TTM_LRU_RESOURCE: The resource subclass */ 57 + TTM_LRU_RESOURCE, 58 + /** @TTM_LRU_HITCH: The iterator hitch subclass */ 59 + TTM_LRU_HITCH 60 + }; 61 + 62 + /** 63 + * struct ttm_lru_item - The TTM lru list node base class 64 + * @link: The list link 65 + * @type: The subclass type 66 + */ 67 + struct ttm_lru_item { 68 + struct list_head link; 69 + enum ttm_lru_item_type type; 70 + }; 71 + 72 + /** 73 + * ttm_lru_item_init() - initialize a struct ttm_lru_item 74 + * @item: The item to initialize 75 + * @type: The subclass type 76 + */ 77 + static inline void ttm_lru_item_init(struct ttm_lru_item *item, 78 + enum ttm_lru_item_type type) 79 + { 80 + item->type = type; 81 + INIT_LIST_HEAD(&item->link); 82 + } 83 + 84 + static inline bool ttm_lru_item_is_res(const struct ttm_lru_item *item) 85 + { 86 + return item->type == TTM_LRU_RESOURCE; 87 + } 88 + 52 89 struct ttm_resource_manager_func { 53 90 /** 54 91 * struct ttm_resource_manager_func member alloc ··· 254 217 /** 255 218 * @lru: Least recently used list, see &ttm_resource_manager.lru 256 219 */ 257 - struct list_head lru; 220 + struct ttm_lru_item lru; 258 221 }; 259 222 260 223 /** 261 - * struct ttm_resource_cursor 224 + * ttm_lru_item_to_res() - Downcast a struct ttm_lru_item to a struct ttm_resource 225 + * @item: The struct ttm_lru_item to downcast 262 226 * 263 - * @priority: the current priority 264 - * 265 - * Cursor to iterate over the resources in a manager. 227 + * Return: Pointer to the embedding struct ttm_resource 266 228 */ 267 - struct ttm_resource_cursor { 268 - unsigned int priority; 269 - }; 229 + static inline struct ttm_resource * 230 + ttm_lru_item_to_res(struct ttm_lru_item *item) 231 + { 232 + return container_of(item, struct ttm_resource, lru); 233 + } 270 234 271 235 /** 272 236 * struct ttm_lru_bulk_move_pos ··· 284 246 285 247 /** 286 248 * struct ttm_lru_bulk_move 287 - * 288 249 * @pos: first/last lru entry for resources in the each domain/priority 250 + * @cursor_list: The list of cursors currently traversing any of 251 + * the sublists of @pos. Protected by the ttm device's lru_lock. 289 252 * 290 253 * Container for the current bulk move state. Should be used with 291 254 * ttm_lru_bulk_move_init() and ttm_bo_set_bulk_move(). ··· 296 257 */ 297 258 struct ttm_lru_bulk_move { 298 259 struct ttm_lru_bulk_move_pos pos[TTM_NUM_MEM_TYPES][TTM_MAX_BO_PRIORITY]; 260 + struct list_head cursor_list; 299 261 }; 262 + 263 + /** 264 + * struct ttm_resource_cursor 265 + * @man: The resource manager currently being iterated over 266 + * @hitch: A hitch list node inserted before the next resource 267 + * to iterate over. 268 + * @bulk_link: A list link for the list of cursors traversing the 269 + * bulk sublist of @bulk. Protected by the ttm device's lru_lock. 270 + * @bulk: Pointer to struct ttm_lru_bulk_move whose subrange @hitch is 271 + * inserted to. NULL if none. Never dereference this pointer since 272 + * the struct ttm_lru_bulk_move object pointed to might have been 273 + * freed. The pointer is only for comparison. 274 + * @mem_type: The memory type of the LRU list being traversed. 275 + * This field is valid iff @bulk != NULL. 276 + * @priority: the current priority 277 + * 278 + * Cursor to iterate over the resources in a manager. 279 + */ 280 + struct ttm_resource_cursor { 281 + struct ttm_resource_manager *man; 282 + struct ttm_lru_item hitch; 283 + struct list_head bulk_link; 284 + struct ttm_lru_bulk_move *bulk; 285 + unsigned int mem_type; 286 + unsigned int priority; 287 + }; 288 + 289 + void ttm_resource_cursor_fini(struct ttm_resource_cursor *cursor); 300 290 301 291 /** 302 292 * struct ttm_kmap_iter_iomap - Specialization for a struct io_mapping + ··· 415 347 416 348 void ttm_lru_bulk_move_init(struct ttm_lru_bulk_move *bulk); 417 349 void ttm_lru_bulk_move_tail(struct ttm_lru_bulk_move *bulk); 350 + void ttm_lru_bulk_move_fini(struct ttm_device *bdev, 351 + struct ttm_lru_bulk_move *bulk); 418 352 419 353 void ttm_resource_add_bulk_move(struct ttm_resource *res, 420 354 struct ttm_buffer_object *bo); ··· 459 389 ttm_resource_manager_first(struct ttm_resource_manager *man, 460 390 struct ttm_resource_cursor *cursor); 461 391 struct ttm_resource * 462 - ttm_resource_manager_next(struct ttm_resource_manager *man, 463 - struct ttm_resource_cursor *cursor, 464 - struct ttm_resource *res); 392 + ttm_resource_manager_next(struct ttm_resource_cursor *cursor); 393 + 394 + struct ttm_resource * 395 + ttm_lru_first_res_or_null(struct list_head *head); 465 396 466 397 /** 467 398 * ttm_resource_manager_for_each_res - iterate over all resources ··· 474 403 */ 475 404 #define ttm_resource_manager_for_each_res(man, cursor, res) \ 476 405 for (res = ttm_resource_manager_first(man, cursor); res; \ 477 - res = ttm_resource_manager_next(man, cursor, res)) 406 + res = ttm_resource_manager_next(cursor)) 478 407 479 408 struct ttm_kmap_iter * 480 409 ttm_kmap_iter_iomap_init(struct ttm_kmap_iter_iomap *iter_io,
+1 -20
include/linux/dma-heap.h
··· 9 9 #ifndef _DMA_HEAPS_H 10 10 #define _DMA_HEAPS_H 11 11 12 - #include <linux/cdev.h> 13 12 #include <linux/types.h> 14 13 15 14 struct dma_heap; 16 15 17 16 /** 18 17 * struct dma_heap_ops - ops to operate on a given heap 19 - * @allocate: allocate dmabuf and return struct dma_buf ptr 18 + * @allocate: allocate dmabuf and return struct dma_buf ptr 20 19 * 21 20 * allocate returns dmabuf on success, ERR_PTR(-errno) on error. 22 21 */ ··· 40 41 void *priv; 41 42 }; 42 43 43 - /** 44 - * dma_heap_get_drvdata() - get per-heap driver data 45 - * @heap: DMA-Heap to retrieve private data for 46 - * 47 - * Returns: 48 - * The per-heap data for the heap. 49 - */ 50 44 void *dma_heap_get_drvdata(struct dma_heap *heap); 51 45 52 - /** 53 - * dma_heap_get_name() - get heap name 54 - * @heap: DMA-Heap to retrieve private data for 55 - * 56 - * Returns: 57 - * The char* for the heap name. 58 - */ 59 46 const char *dma_heap_get_name(struct dma_heap *heap); 60 47 61 - /** 62 - * dma_heap_add - adds a heap to dmabuf heaps 63 - * @exp_info: information needed to register this heap 64 - */ 65 48 struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info); 66 49 67 50 #endif /* _DMA_HEAPS_H */
+1
include/linux/fb.h
··· 510 510 void *par; 511 511 512 512 bool skip_vt_switch; /* no VT switch on suspend/resume required */ 513 + bool skip_panic; /* Do not write to the fb after a panic */ 513 514 }; 514 515 515 516 /* This will go away
+19 -3
include/linux/kmsg_dump.h
··· 40 40 }; 41 41 42 42 /** 43 + * struct kmsg_dump_detail - kernel crash detail 44 + * @reason: reason for the crash, see kmsg_dump_reason. 45 + * @description: optional short string, to provide additional information. 46 + */ 47 + 48 + struct kmsg_dump_detail { 49 + enum kmsg_dump_reason reason; 50 + const char *description; 51 + }; 52 + 53 + /** 43 54 * struct kmsg_dumper - kernel crash message dumper structure 44 55 * @list: Entry in the dumper list (private) 45 56 * @dump: Call into dumping code which will retrieve the data with ··· 60 49 */ 61 50 struct kmsg_dumper { 62 51 struct list_head list; 63 - void (*dump)(struct kmsg_dumper *dumper, enum kmsg_dump_reason reason); 52 + void (*dump)(struct kmsg_dumper *dumper, struct kmsg_dump_detail *detail); 64 53 enum kmsg_dump_reason max_reason; 65 54 bool registered; 66 55 }; 67 56 68 57 #ifdef CONFIG_PRINTK 69 - void kmsg_dump(enum kmsg_dump_reason reason); 58 + void kmsg_dump_desc(enum kmsg_dump_reason reason, const char *desc); 70 59 71 60 bool kmsg_dump_get_line(struct kmsg_dump_iter *iter, bool syslog, 72 61 char *line, size_t size, size_t *len); ··· 82 71 83 72 const char *kmsg_dump_reason_str(enum kmsg_dump_reason reason); 84 73 #else 85 - static inline void kmsg_dump(enum kmsg_dump_reason reason) 74 + static inline void kmsg_dump_desc(enum kmsg_dump_reason reason, const char *desc) 86 75 { 87 76 } 88 77 ··· 117 106 return "Disabled"; 118 107 } 119 108 #endif 109 + 110 + static inline void kmsg_dump(enum kmsg_dump_reason reason) 111 + { 112 + kmsg_dump_desc(reason, NULL); 113 + } 120 114 121 115 #endif /* _LINUX_KMSG_DUMP_H */
+7
include/uapi/drm/drm_mode.h
··· 152 152 #define DRM_MODE_SCALE_CENTER 2 /* Centered, no scaling */ 153 153 #define DRM_MODE_SCALE_ASPECT 3 /* Full screen, preserve aspect */ 154 154 155 + /* power saving policy options */ 156 + #define DRM_MODE_REQUIRE_COLOR_ACCURACY BIT(0) /* Compositor requires color accuracy */ 157 + #define DRM_MODE_REQUIRE_LOW_LATENCY BIT(1) /* Compositor requires low latency */ 158 + 159 + #define DRM_MODE_POWER_SAVING_POLICY_ALL (DRM_MODE_REQUIRE_COLOR_ACCURACY |\ 160 + DRM_MODE_REQUIRE_LOW_LATENCY) 161 + 155 162 /* Dithering mode options */ 156 163 #define DRM_MODE_DITHERING_OFF 0 157 164 #define DRM_MODE_DITHERING_ON 1
+1
include/uapi/linux/virtio_gpu.h
··· 311 311 #define VIRTIO_GPU_CAPSET_VIRGL2 2 312 312 /* 3 is reserved for gfxstream */ 313 313 #define VIRTIO_GPU_CAPSET_VENUS 4 314 + #define VIRTIO_GPU_CAPSET_DRM 6 314 315 315 316 /* VIRTIO_GPU_CMD_GET_CAPSET_INFO */ 316 317 struct virtio_gpu_get_capset_info {
+1 -1
kernel/panic.c
··· 376 376 377 377 panic_print_sys_info(false); 378 378 379 - kmsg_dump(KMSG_DUMP_PANIC); 379 + kmsg_dump_desc(KMSG_DUMP_PANIC, buf); 380 380 381 381 /* 382 382 * If you doubt kdump always works fine in any situation,
+8 -3
kernel/printk/printk.c
··· 4184 4184 EXPORT_SYMBOL_GPL(kmsg_dump_reason_str); 4185 4185 4186 4186 /** 4187 - * kmsg_dump - dump kernel log to kernel message dumpers. 4187 + * kmsg_dump_desc - dump kernel log to kernel message dumpers. 4188 4188 * @reason: the reason (oops, panic etc) for dumping 4189 + * @desc: a short string to describe what caused the panic or oops. Can be NULL 4190 + * if no additional description is available. 4189 4191 * 4190 4192 * Call each of the registered dumper's dump() callback, which can 4191 4193 * retrieve the kmsg records with kmsg_dump_get_line() or 4192 4194 * kmsg_dump_get_buffer(). 4193 4195 */ 4194 - void kmsg_dump(enum kmsg_dump_reason reason) 4196 + void kmsg_dump_desc(enum kmsg_dump_reason reason, const char *desc) 4195 4197 { 4196 4198 struct kmsg_dumper *dumper; 4199 + struct kmsg_dump_detail detail = { 4200 + .reason = reason, 4201 + .description = desc}; 4197 4202 4198 4203 rcu_read_lock(); 4199 4204 list_for_each_entry_rcu(dumper, &dump_list, list) { ··· 4216 4211 continue; 4217 4212 4218 4213 /* invoke dumper which will iterate over records */ 4219 - dumper->dump(dumper, reason); 4214 + dumper->dump(dumper, &detail); 4220 4215 } 4221 4216 rcu_read_unlock(); 4222 4217 }