
drm/xe/uapi: loosen used tracking restriction

Currently this is hidden behind perfmon_capable() since it is
technically an info leak, given that this is a system-wide metric.
However, the granularity reported here is always PAGE_SIZE aligned,
which matches what the core kernel is already willing to expose to
userspace when querying how many free RAM pages there are on the
system, and that doesn't need any special privileges. In addition,
other DRM drivers seem happy to expose this.

The motivation here is with oneAPI, where they want to use the system
wide 'used' reporting, not the per-client fdinfo stats. This has
also come up with some perf overlay applications wanting this
information.

Fixes: 1105ac15d2a1 ("drm/xe/uapi: restrict system wide accounting")
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Cc: Joshua Santosh <joshua.santosh.ranjan@intel.com>
Cc: José Roberto de Souza <jose.souza@intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: <stable@vger.kernel.org> # v6.8+
Acked-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>
Link: https://lore.kernel.org/r/20250919122052.420979-2-matthew.auld@intel.com
(cherry picked from commit 4d0b035fd6dae8ee48e9c928b10f14877e595356)
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>

drivers/gpu/drm/xe/xe_query.c
--- a/drivers/gpu/drm/xe/xe_query.c
+++ b/drivers/gpu/drm/xe/xe_query.c
@@ -276,8 +276,7 @@
 	mem_regions->mem_regions[0].instance = 0;
 	mem_regions->mem_regions[0].min_page_size = PAGE_SIZE;
 	mem_regions->mem_regions[0].total_size = man->size << PAGE_SHIFT;
-	if (perfmon_capable())
-		mem_regions->mem_regions[0].used = ttm_resource_manager_usage(man);
+	mem_regions->mem_regions[0].used = ttm_resource_manager_usage(man);
 	mem_regions->num_mem_regions = 1;
 
 	for (i = XE_PL_VRAM0; i <= XE_PL_VRAM1; ++i) {
@@ -292,13 +293,11 @@
 		mem_regions->mem_regions[mem_regions->num_mem_regions].total_size =
 			man->size;
 
-		if (perfmon_capable()) {
-			xe_ttm_vram_get_used(man,
-					     &mem_regions->mem_regions
-					     [mem_regions->num_mem_regions].used,
-					     &mem_regions->mem_regions
-					     [mem_regions->num_mem_regions].cpu_visible_used);
-		}
+		xe_ttm_vram_get_used(man,
+				     &mem_regions->mem_regions
+				     [mem_regions->num_mem_regions].used,
+				     &mem_regions->mem_regions
+				     [mem_regions->num_mem_regions].cpu_visible_used);
 
 		mem_regions->mem_regions[mem_regions->num_mem_regions].cpu_visible_size =
 			xe_ttm_vram_get_cpu_visible_size(man);