KVM: arm64: Fix the issues when guest PMCCFILTR is configured

KVM calls kvm_pmu_set_counter_event_type() when PMCCFILTR is configured.
But this function can't deal with PMCCFILTR correctly because the evtCount
bits of PMCCFILTR, which are reserved as 0, conflict with the SW_INCR event
type of other PMXEVTYPER<n> registers. To fix it, when eventsel == 0, this
function shouldn't return immediately; instead it needs to check further
whether select_idx is ARMV8_PMU_CYCLE_IDX.

Another issue is that KVM shouldn't blindly copy the eventsel bits of
PMCCFILTR to attr.config. Instead it ought to convert the request to the
"cpu cycle" event type (i.e. 0x11).
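The two fixes above can be sketched as a minimal user-space model of the
decision logic (this is illustrative C, not the kernel code itself; the
constant values mirror the patch, and ARMV8_PMU_CYCLE_IDX is assumed to be
31 here):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Constants mirroring the kernel definitions (values as in the patch) */
#define ARMV8_PMU_EVTYPE_EVENT          0xffff  /* Mask for EVENT bits */
#define ARMV8_PMUV3_PERFCTR_SW_INCR     0x00
#define ARMV8_PMUV3_PERFCTR_CPU_CYCLES  0x11
#define ARMV8_PMU_CYCLE_IDX             31      /* assumed cycle counter index */

/*
 * Fix 1: eventsel == 0 means SW_INCR for the regular event counters, but
 * for the cycle counter (PMCCFILTR) the evtCount field is reserved 0, so
 * it must NOT be treated as a software increment event. Only skip creating
 * a perf event when the counter really is a SW_INCR counter.
 */
bool skip_perf_event(uint64_t data, int select_idx)
{
	uint64_t eventsel = data & ARMV8_PMU_EVTYPE_EVENT;

	return eventsel == ARMV8_PMUV3_PERFCTR_SW_INCR &&
	       select_idx != ARMV8_PMU_CYCLE_IDX;
}

/*
 * Fix 2: the cycle counter is always backed by the "cpu cycle" perf event
 * (0x11) rather than by whatever the reserved evtCount bits happen to hold.
 */
uint64_t perf_config(uint64_t data, int select_idx)
{
	uint64_t eventsel = data & ARMV8_PMU_EVTYPE_EVENT;

	return select_idx == ARMV8_PMU_CYCLE_IDX ?
	       ARMV8_PMUV3_PERFCTR_CPU_CYCLES : eventsel;
}
```

With this shape, a write of 0 to PMCCFILTR still creates a perf event, and
that event counts cycles instead of software increments.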

To support this patch and to prevent duplicated definitions, a limited
set of ARMv8 perf event types was relocated from perf_event.c to
asm/perf_event.h.

Cc: stable@vger.kernel.org # 4.6+
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Wei Huang <wei@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>


 arch/arm64/include/asm/perf_event.h | 10 +++++++++-
 arch/arm64/kernel/perf_event.c      | 10 +---------
 virt/kvm/arm/pmu.c                  |  8 +++++---
 3 files changed, 15 insertions(+), 13 deletions(-)

--- a/arch/arm64/include/asm/perf_event.h
+++ b/arch/arm64/include/asm/perf_event.h
@@ -46,7 +46,15 @@
 #define ARMV8_PMU_EVTYPE_MASK	0xc800ffff	/* Mask for writable bits */
 #define ARMV8_PMU_EVTYPE_EVENT	0xffff		/* Mask for EVENT bits */
 
-#define ARMV8_PMU_EVTYPE_EVENT_SW_INCR	0	/* Software increment event */
+/*
+ * PMUv3 event types: required events
+ */
+#define ARMV8_PMUV3_PERFCTR_SW_INCR			0x00
+#define ARMV8_PMUV3_PERFCTR_L1D_CACHE_REFILL		0x03
+#define ARMV8_PMUV3_PERFCTR_L1D_CACHE			0x04
+#define ARMV8_PMUV3_PERFCTR_BR_MIS_PRED			0x10
+#define ARMV8_PMUV3_PERFCTR_CPU_CYCLES			0x11
+#define ARMV8_PMUV3_PERFCTR_BR_PRED			0x12
 
 /*
  * Event filters for PMUv3
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -31,16 +31,8 @@
 
 /*
  * ARMv8 PMUv3 Performance Events handling code.
- * Common event types.
+ * Common event types (some are defined in asm/perf_event.h).
  */
-
-/* Required events. */
-#define ARMV8_PMUV3_PERFCTR_SW_INCR			0x00
-#define ARMV8_PMUV3_PERFCTR_L1D_CACHE_REFILL		0x03
-#define ARMV8_PMUV3_PERFCTR_L1D_CACHE			0x04
-#define ARMV8_PMUV3_PERFCTR_BR_MIS_PRED			0x10
-#define ARMV8_PMUV3_PERFCTR_CPU_CYCLES			0x11
-#define ARMV8_PMUV3_PERFCTR_BR_PRED			0x12
 
 /* At least one of the following is required. */
 #define ARMV8_PMUV3_PERFCTR_INST_RETIRED		0x08
--- a/virt/kvm/arm/pmu.c
+++ b/virt/kvm/arm/pmu.c
@@ -305,7 +305,7 @@
 			continue;
 		type = vcpu_sys_reg(vcpu, PMEVTYPER0_EL0 + i)
 		       & ARMV8_PMU_EVTYPE_EVENT;
-		if ((type == ARMV8_PMU_EVTYPE_EVENT_SW_INCR)
+		if ((type == ARMV8_PMUV3_PERFCTR_SW_INCR)
 		    && (enable & BIT(i))) {
 			reg = vcpu_sys_reg(vcpu, PMEVCNTR0_EL0 + i) + 1;
 			reg = lower_32_bits(reg);
@@ -379,7 +379,8 @@
 	eventsel = data & ARMV8_PMU_EVTYPE_EVENT;
 
 	/* Software increment event does't need to be backed by a perf event */
-	if (eventsel == ARMV8_PMU_EVTYPE_EVENT_SW_INCR)
+	if (eventsel == ARMV8_PMUV3_PERFCTR_SW_INCR &&
+	    select_idx != ARMV8_PMU_CYCLE_IDX)
 		return;
 
 	memset(&attr, 0, sizeof(struct perf_event_attr));
@@ -392,7 +393,8 @@
 	attr.exclude_kernel = data & ARMV8_PMU_EXCLUDE_EL1 ? 1 : 0;
 	attr.exclude_hv = 1; /* Don't count EL2 events */
 	attr.exclude_host = 1; /* Don't count host events */
-	attr.config = eventsel;
+	attr.config = (select_idx == ARMV8_PMU_CYCLE_IDX) ?
+		      ARMV8_PMUV3_PERFCTR_CPU_CYCLES : eventsel;
 
 	counter = kvm_pmu_get_counter_value(vcpu, select_idx);
 	/* The initial sample period (overflow count) of an event. */