Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

drm/v3d: Fix miscellaneous documentation errors

This commit fixes several miscellaneous documentation errors: mostly,
delete or update comments that are outdated or are leftovers from past
code changes. Apart from that, remove double spaces in several comments.

Signed-off-by: Maíra Canal <mcanal@igalia.com>
Acked-by: Iago Toral Quiroga <itoral@igalia.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20241206153908.62429-1-mcanal@igalia.com

+20 -25
-4
drivers/gpu/drm/v3d/v3d_bo.c
@@ -13,10 +13,6 @@
  * Display engines requiring physically contiguous allocations should
  * look into Mesa's "renderonly" support (as used by the Mesa pl111
  * driver) for an example of how to integrate with V3D.
- *
- * Long term, we should support evicting pages from the MMU when under
- * memory pressure (thus the v3d_bo_get_pages() refcounting), but
- * that's not a high priority since our systems tend to not have swap.
  */
 
 #include <linux/dma-buf.h>
+4 -4
drivers/gpu/drm/v3d/v3d_mmu.c
@@ -4,7 +4,7 @@
 /**
  * DOC: Broadcom V3D MMU
  *
- * The V3D 3.x hardware (compared to VC4) now includes an MMU.  It has
+ * The V3D 3.x hardware (compared to VC4) now includes an MMU. It has
  * a single level of page tables for the V3D's 4GB address space to
  * map to AXI bus addresses, thus it could need up to 4MB of
  * physically contiguous memory to store the PTEs.
@@ -15,14 +15,14 @@
  *
  * To protect clients from each other, we should use the GMP to
  * quickly mask out (at 128kb granularity) what pages are available to
- * each client.  This is not yet implemented.
+ * each client. This is not yet implemented.
  */
 
 #include "v3d_drv.h"
 #include "v3d_regs.h"
 
-/* Note: All PTEs for the 1MB superpage must be filled with the
- * superpage bit set.
+/* Note: All PTEs for the 64KB bigpage or 1MB superpage must be filled
+ * with the bigpage/superpage bit set.
  */
 #define V3D_PTE_SUPERPAGE BIT(31)
 #define V3D_PTE_BIGPAGE BIT(30)
+5 -7
drivers/gpu/drm/v3d/v3d_performance_counters.h
@@ -2,11 +2,12 @@
 /*
  * Copyright (C) 2024 Raspberry Pi
  */
+
 #ifndef V3D_PERFORMANCE_COUNTERS_H
 #define V3D_PERFORMANCE_COUNTERS_H
 
-/* Holds a description of a given performance counter. The index of performance
- * counter is given by the array on v3d_performance_counter.h
+/* Holds a description of a given performance counter. The index of
+ * performance counter is given by the array on `v3d_performance_counter.c`.
  */
 struct v3d_perf_counter_desc {
 	/* Category of the counter */
@@ -21,15 +20,12 @@
 };
 
 struct v3d_perfmon_info {
-	/*
-	 * Different revisions of V3D have different total number of
+	/* Different revisions of V3D have different total number of
 	 * performance counters.
 	 */
 	unsigned int max_counters;
 
-	/*
-	 * Array of counters valid for the platform.
-	 */
+	/* Array of counters valid for the platform. */
 	const struct v3d_perf_counter_desc *counters;
 };
 
+6 -6
drivers/gpu/drm/v3d/v3d_sched.c
@@ -5,16 +5,16 @@
  * DOC: Broadcom V3D scheduling
  *
  * The shared DRM GPU scheduler is used to coordinate submitting jobs
- * to the hardware.  Each DRM fd (roughly a client process) gets its
- * own scheduler entity, which will process jobs in order.  The GPU
- * scheduler will round-robin between clients to submit the next job.
+ * to the hardware. Each DRM fd (roughly a client process) gets its
+ * own scheduler entity, which will process jobs in order. The GPU
+ * scheduler will schedule the clients with a FIFO scheduling algorithm.
  *
  * For simplicity, and in order to keep latency low for interactive
  * jobs when bulk background jobs are queued up, we submit a new job
  * to the HW only when it has completed the last one, instead of
- * filling up the CT[01]Q FIFOs with jobs.  Similarly, we use
- * drm_sched_job_add_dependency() to manage the dependency between bin and
- * render, instead of having the clients submit jobs using the HW's
+ * filling up the CT[01]Q FIFOs with jobs. Similarly, we use
+ * `drm_sched_job_add_dependency()` to manage the dependency between bin
+ * and render, instead of having the clients submit jobs using the HW's
  * semaphores to interlock between them.
  */
+5 -4
drivers/gpu/drm/v3d/v3d_submit.c
@@ -11,10 +11,11 @@
 #include "v3d_trace.h"
 
 /* Takes the reservation lock on all the BOs being referenced, so that
- * at queue submit time we can update the reservations.
+ * we can attach fences and update the reservations after pushing the job
+ * to the queue.
  *
  * We don't lock the RCL the tile alloc/state BOs, or overflow memory
- * (all of which are on exec->unref_list).  They're entirely private
+ * (all of which are on render->unref_list). They're entirely private
  * to v3d, so we don't attach dma-buf fences to them.
  */
 static int
@@ -56,11 +55,11 @@
  * @bo_count: Number of GEM handles passed in
  *
  * The command validator needs to reference BOs by their index within
- * the submitted job's BO list.  This does the validation of the job's
+ * the submitted job's BO list. This does the validation of the job's
  * BO list and reference counting for the lifetime of the job.
  *
  * Note that this function doesn't need to unreference the BOs on
- * failure, because that will happen at v3d_exec_cleanup() time.
+ * failure, because that will happen at `v3d_job_free()`.
  */
 static int
 v3d_lookup_bos(struct drm_device *dev,