Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

perf: arm_pmuv3: Don't use PMCCNTR_EL0 on SMT cores

CPU_CYCLES is expected to count the logical CPU (PE) clock. Currently
PMCCNTR_EL0 is the preferred counter for CPU_CYCLES, but on a
multi-threaded implementation it counts the processor clock rather than
the PE clock (ARM DDI0487 L.b D13.1.3), so it keeps counting on an idle
PE whenever one of its SMT siblings is not idle. So don't use it on SMT
cores.

Introduce topology_core_has_smt() to detect the SMT implementation and
cache the result in arm_pmu::has_smt during allocation.

When counting cycles on SMT CPUs 2-3 while CPU 3 is idle, without this
patch we get:
[root@client1 tmp]# perf stat -e cycles -A -C 2-3 -- stress-ng -c 1 --taskset 2 --timeout 1
[...]
Performance counter stats for 'CPU(s) 2-3':

CPU2 2880457316 cycles
CPU3 2880459810 cycles
1.254688470 seconds time elapsed

With this patch the idle state of CPU3 is observed as expected:
[root@client1 ~]# perf stat -e cycles -A -C 2-3 -- stress-ng -c 1 --taskset 2 --timeout 1
[...]
Performance counter stats for 'CPU(s) 2-3':

CPU2 2558580492 cycles
CPU3 305749 cycles
1.113626410 seconds time elapsed

Signed-off-by: Yicong Yang <yangyicong@hisilicon.com>
Signed-off-by: Will Deacon <will@kernel.org>

Authored by Yicong Yang, committed by Will Deacon · c3d78c34 3a866087

28 insertions(+) across 4 files:

drivers/perf/arm_pmu.c (+6)
···
 	if (ret)
 		return ret;
 
+	/*
+	 * By this stage we know our supported CPUs on either DT or ACPI
+	 * platforms, so detect the SMT implementation.
+	 */
+	pmu->has_smt = topology_core_has_smt(cpumask_first(&pmu->supported_cpus));
+
 	if (!pmu->set_event_filter)
 		pmu->pmu.capabilities |= PERF_PMU_CAP_NO_EXCLUDE;
drivers/perf/arm_pmuv3.c (+10)
···
 static bool armv8pmu_can_use_pmccntr(struct pmu_hw_events *cpuc,
 				     struct perf_event *event)
 {
+	struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu);
 	struct hw_perf_event *hwc = &event->hw;
 	unsigned long evtype = hwc->config_base & ARMV8_PMU_EVTYPE_EVENT;
···
 	 * So don't use it for branch events.
 	 */
 	if (has_branch_stack(event))
+		return false;
+
+	/*
+	 * The PMCCNTR_EL0 increments from the processor clock rather than
+	 * the PE clock (ARM DDI0487 L.b D13.1.3) which means it'll continue
+	 * counting on a WFI PE if one of its SMT siblings is not idle on a
+	 * multi-threaded implementation. So don't use it on SMT cores.
+	 */
+	if (cpu_pmu->has_smt)
 		return false;
 
 	return true;
include/linux/arch_topology.h (+11)
···
 void reset_cpu_topology(void);
 int parse_acpi_topology(void);
 void freq_inv_set_max_ratio(int cpu, u64 max_rate);
+
+/*
+ * Architectures like ARM64 don't have a reliable architectural way to get
+ * SMT information and depend on the firmware (ACPI/OF) report. Non-SMT
+ * cores won't initialize thread_id, so we can use this to detect the SMT
+ * implementation.
+ */
+static inline bool topology_core_has_smt(int cpu)
+{
+	return cpu_topology[cpu].thread_id != -1;
+}
+
 #endif
 
 #endif /* _LINUX_ARCH_TOPOLOGY_H_ */
include/linux/perf/arm_pmu.h (+1)
···
 
 	/* PMUv3 only */
 	int pmuver;
+	bool has_smt;
 	u64 reg_pmmir;
 	u64 reg_brbidr;
 #define ARMV8_PMUV3_MAX_COMMON_EVENTS	0x40