tracing: Avoid possible signed 64-bit truncation

Truncating a 64-bit value to 32 bits can change the sign of the
truncated value. cmp_mod_entry() is used by bsearch(), so the
truncation could produce an invalid search order. This would only
happen if the addresses were more than 2GB apart, and so is unlikely,
but fix the potentially broken compare anyway.
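For illustration only (not part of the patch), here is a minimal
user-space sketch of the failure mode; the addresses are made-up
values chosen to be more than 2GB apart:

#include <stdio.h>

int main(void)
{
	unsigned long addr     = 0x1a0000000UL; /* made-up search key */
	unsigned long mod_addr = 0x100000000UL; /* made-up module start */

	/* addr is above mod_addr, so a compare should report "greater"... */
	long diff = (long)(addr - mod_addr);    /* 2684354560 */

	/* ...but converting the unsigned long difference to the
	 * comparator's int return type flips the sign on a 64-bit build. */
	int truncated = addr - mod_addr;        /* -1610612736 */

	printf("diff=%ld truncated=%d\n", diff, truncated);
	return 0;
}

bsearch() only looks at the sign of the comparator's return value, so
a flipped sign steers the binary search in the wrong direction.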

Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Link: https://patch.msgid.link/20260108002625.333331-1-irogers@google.com
Signed-off-by: Ian Rogers <irogers@google.com>
Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>


+4 -4
kernel/trace/trace.c
@@ -6115,10 +6115,10 @@
 	unsigned long addr = (unsigned long)key;
 	const struct trace_mod_entry *ent = pivot;
 
-	if (addr >= ent[0].mod_addr && addr < ent[1].mod_addr)
-		return 0;
-	else
-		return addr - ent->mod_addr;
+	if (addr < ent[0].mod_addr)
+		return -1;
+
+	return addr >= ent[1].mod_addr;
 }
 
 /**
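For reference, a minimal user-space sketch of how a comparator with
this shape drives bsearch(); the struct layout, table contents and
addresses below are made up, only the compare logic mirrors the
patched kernel code:

#include <stdio.h>
#include <stdlib.h>

struct mod_entry {
	unsigned long mod_addr;
};

/* Mirrors the fixed logic: -1 below ent[0].mod_addr, 0 inside
 * [ent[0].mod_addr, ent[1].mod_addr), 1 at or above ent[1].mod_addr. */
static int cmp_entry(const void *key, const void *pivot)
{
	unsigned long addr = (unsigned long)key;
	const struct mod_entry *ent = pivot;

	if (addr < ent[0].mod_addr)
		return -1;

	return addr >= ent[1].mod_addr;
}

int main(void)
{
	/* Sorted start addresses; the last entry only bounds the one before it. */
	struct mod_entry table[] = {
		{ 0x100000000UL }, { 0x1a0000000UL }, { 0x240000000UL },
	};
	unsigned long addr = 0x1b0000000UL;

	/* Search the first two entries, so ent[1] is always a valid read. */
	struct mod_entry *found = bsearch((void *)addr, table, 2,
					  sizeof(table[0]), cmp_entry);
	if (found)
		printf("addr %#lx maps to entry at %#lx\n", addr, found->mod_addr);
	return 0;
}

Returning the boolean "addr >= ent[1].mod_addr" keeps the result
within {-1, 0, 1}, so no pointer difference can overflow or be
truncated.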