
mm/damon: add trace event for effective size quota

Aim-oriented DAMOS quota auto-tuning is an important and recommended
feature for DAMOS users.  Add a trace event so that the tuned quota, and
the tuning activity itself, can be observed.

[sj@kernel.org: initialize sidx in damos_trace_esz()]
Link: https://lkml.kernel.org/r/20250705172003.52324-1-sj@kernel.org
[sj@kernel.org: make damos_esz unconditional trace event]
Link: https://lkml.kernel.org/r/20250709182843.35812-1-sj@kernel.org
Link: https://lkml.kernel.org/r/20250704221408.38510-3-sj@kernel.org
Signed-off-by: SeongJae Park <sj@kernel.org>
Cc: "Masami Hiramatsu (Google)" <mhiramat@kernel.org>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: kernel test robot <lkp@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

Authored by SeongJae Park and committed by Andrew Morton
(a86d6951, 214db702).  2 files changed, +43 -1.
include/trace/events/damon.h (+24)

 #include <linux/types.h>
 #include <linux/tracepoint.h>
 
+TRACE_EVENT(damos_esz,
+
+	TP_PROTO(unsigned int context_idx, unsigned int scheme_idx,
+			unsigned long esz),
+
+	TP_ARGS(context_idx, scheme_idx, esz),
+
+	TP_STRUCT__entry(
+		__field(unsigned int, context_idx)
+		__field(unsigned int, scheme_idx)
+		__field(unsigned long, esz)
+	),
+
+	TP_fast_assign(
+		__entry->context_idx = context_idx;
+		__entry->scheme_idx = scheme_idx;
+		__entry->esz = esz;
+	),
+
+	TP_printk("ctx_idx=%u scheme_idx=%u esz=%lu",
+			__entry->context_idx, __entry->scheme_idx,
+			__entry->esz)
+);
+
 TRACE_EVENT_CONDITION(damos_before_apply,
 
 	TP_PROTO(unsigned int context_idx, unsigned int scheme_idx,
mm/damon/core.c (+19 -1)

 	quota->esz = esz;
 }
 
+static void damos_trace_esz(struct damon_ctx *c, struct damos *s,
+		struct damos_quota *quota)
+{
+	unsigned int cidx = 0, sidx = 0;
+	struct damos *siter;
+
+	damon_for_each_scheme(siter, c) {
+		if (siter == s)
+			break;
+		sidx++;
+	}
+	trace_damos_esz(cidx, sidx, quota->esz);
+}
+
 static void damos_adjust_quota(struct damon_ctx *c, struct damos *s)
 {
 	struct damos_quota *quota = &s->quota;
 	struct damon_target *t;
 	struct damon_region *r;
-	unsigned long cumulated_sz;
+	unsigned long cumulated_sz, cached_esz;
 	unsigned int score, max_score = 0;
 
 	if (!quota->ms && !quota->sz && list_empty(&quota->goals))
...
 		quota->total_charged_sz += quota->charged_sz;
 		quota->charged_from = jiffies;
 		quota->charged_sz = 0;
+		if (trace_damos_esz_enabled())
+			cached_esz = quota->esz;
 		damos_set_effective_quota(quota);
+		if (trace_damos_esz_enabled() && quota->esz != cached_esz)
+			damos_trace_esz(c, s, quota);
 	}
 
 	if (!c->ops.get_scheme_score)