
Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm

Pull kvm updates from Paolo Bonzini:
"ARM:

- Generalized infrastructure for 'writable' ID registers, effectively
allowing userspace to opt out of certain vCPU features for its
guest

- Optimization for vSGI injection, opportunistically compressing
MPIDR to vCPU mapping into a table

- Improvements to KVM's PMU emulation, allowing userspace to select
the number of PMCs available to a VM

- Guest support for memory operation instructions (FEAT_MOPS)

- Cleanups to handling feature flags in KVM_ARM_VCPU_INIT, squashing
bugs and getting rid of useless code

- Changes to the way the SMCCC filter is constructed, avoiding wasted
memory allocations when not in use

- Load the stage-2 MMU context at vcpu_load() for VHE systems,
reducing the overhead of errata mitigations

- Miscellaneous kernel and selftest fixes

LoongArch:

- New architecture for KVM.

The hardware uses the same model as x86, s390 and RISC-V, where
guest/host mode is orthogonal to supervisor/user mode. The
virtualization extensions are very similar to MIPS's, so the code
also has some similarities, but it has been cleaned up to avoid
some of the historical bogosities that are found in arch/mips. The
kernel emulates MMU, timer and CSR accesses, while interrupt
controllers are only emulated in userspace, at least for now.

RISC-V:

- Support for the Smstateen and Zicond extensions

- Support for virtualizing senvcfg

- Support for virtualized SBI debug console (DBCN)

S390:

- Nested page table management can be monitored through tracepoints
and statistics

x86:

- Fix incorrect handling of VMX posted interrupt descriptor in
KVM_SET_LAPIC, which could result in a dropped timer IRQ

- Avoid WARN on systems with Intel IPI virtualization

- Add CONFIG_KVM_MAX_NR_VCPUS, to allow supporting up to 4096 vCPUs
without forcing more common use cases to eat the extra memory
overhead.

- Add virtualization support for AMD SRSO mitigation (IBPB_BRTYPE and
SBPB, aka Selective Branch Predictor Barrier).

- Fix a bug where restoring a vCPU snapshot that was taken within 1
second of creating the original vCPU would cause KVM to try to
synchronize the vCPU's TSC and thus clobber the correct TSC being
set by userspace.

- Compute guest wall clock using a single TSC read to avoid
generating an inaccurate time, e.g. if the vCPU is preempted
between multiple TSC reads.

- "Virtualize" HWCR.TscFreqSel to make Linux guests happy, which
complain about a "Firmware Bug" if the bit isn't set for select
F/M/S combos. Likewise "virtualize" (ignore) MSR_AMD64_TW_CFG to
appease Windows Server 2022.

- Don't apply side effects to Hyper-V's synthetic timer on writes
from userspace to fix an issue where the auto-enable behavior can
trigger spurious interrupts, i.e. do auto-enabling only for guest
writes.

- Remove an unnecessary kick of all vCPUs when synchronizing the
dirty log without PML enabled.

- Advertise "support" for non-serializing FS/GS base MSR writes as
appropriate.

- Harden the fast page fault path to guard against encountering an
invalid root when walking SPTEs.

- Omit "struct kvm_vcpu_xen" entirely when CONFIG_KVM_XEN=n.

- Use the fast path directly from the timer callback when delivering
Xen timer events, instead of waiting for the next iteration of the
run loop. This was not done previously because earlier proposed code
had races; now care is taken to stop the hrtimer at critical
points such as restarting the timer or saving the timer information
for userspace.

- Follow the lead of upstream Xen and ignore the VCPU_SSHOTTMR_future
flag.

- Optimize injection of PMU interrupts that are simultaneous with
NMIs.

- Usual handful of fixes for typos and other warts.

x86 - MTRR/PAT fixes and optimizations:

- Clean up code that deals with honoring guest MTRRs when the VM has
non-coherent DMA and host MTRRs are ignored, i.e. EPT is enabled.

- Zap EPT entries when non-coherent DMA assignment stops/starts to
prevent using stale entries with the wrong memtype.

- Don't ignore guest PAT for CR0.CD=1 && KVM_X86_QUIRK_CD_NW_CLEARED=y

This was done as a workaround for virtual machine BIOSes that did
not bother to clear CR0.CD (because ancient KVM/QEMU did not bother
to set it, in turn), and there's zero reason to extend the quirk to
also ignore guest PAT.

x86 - SEV fixes:

- Report KVM_EXIT_SHUTDOWN instead of EINVAL if KVM intercepts
SHUTDOWN while running an SEV-ES guest.

- Clean up the recognition of emulation failures on SEV guests, when
KVM would like to "skip" the instruction but it had already been
partially emulated. This makes it possible to drop a hack that
second guessed the (insufficient) information provided by the
emulator, and just do the right thing.

Documentation:

- Various updates and fixes, mostly for x86

- MTRR and PAT fixes and optimizations"

* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (164 commits)
KVM: selftests: Avoid using forced target for generating arm64 headers
tools headers arm64: Fix references to top srcdir in Makefile
KVM: arm64: Add tracepoint for MMIO accesses where ISV==0
KVM: arm64: selftest: Perform ISB before reading PAR_EL1
KVM: arm64: selftest: Add the missing .guest_prepare()
KVM: arm64: Always invalidate TLB for stage-2 permission faults
KVM: x86: Service NMI requests after PMI requests in VM-Enter path
KVM: arm64: Handle AArch32 SPSR_{irq,abt,und,fiq} as RAZ/WI
KVM: arm64: Do not let a L1 hypervisor access the *32_EL2 sysregs
KVM: arm64: Refine _EL2 system register list that require trap reinjection
arm64: Add missing _EL2 encodings
arm64: Add missing _EL12 encodings
KVM: selftests: aarch64: vPMU test for validating user accesses
KVM: selftests: aarch64: vPMU register test for unimplemented counters
KVM: selftests: aarch64: vPMU register test for implemented counters
KVM: selftests: aarch64: Introduce vpmu_counter_access test
tools: Import arm_pmuv3.h
KVM: arm64: PMU: Allow userspace to limit PMCR_EL0.N for the guest
KVM: arm64: Sanitize PM{C,I}NTEN{SET,CLR}, PMOVS{SET,CLR} before first run
KVM: arm64: Add {get,set}_user for PM{C,I}NTEN{SET,CLR}, PMOVS{SET,CLR}
...

+8890 -1508
+12
Documentation/devicetree/bindings/riscv/extensions.yaml
··· 128 128 changes to interrupts as frozen at commit ccbddab ("Merge pull 129 129 request #42 from riscv/jhauser-2023-RC4") of riscv-aia. 130 130 131 + - const: smstateen 132 + description: | 133 + The standard Smstateen extension for controlling access to CSRs 134 + added by other RISC-V extensions in H/S/VS/U/VU modes and as 135 + ratified at commit a28bfae (Ratified (#7)) of riscv-state-enable. 136 + 131 137 - const: ssaia 132 138 description: | 133 139 The standard Ssaia supervisor-level extension for the advanced ··· 217 211 The standard Zicntr extension for base counters and timers, as 218 212 ratified in the 20191213 version of the unprivileged ISA 219 213 specification. 214 + 215 + - const: zicond 216 + description: 217 + The standard Zicond extension for conditional arithmetic and 218 + conditional-select/move operations as ratified in commit 95cf1f9 219 + ("Add changes requested by Ved during signoff") of riscv-zicond. 220 220 221 221 - const: zicsr 222 222 description: |
+140 -18
Documentation/virt/kvm/api.rst
··· 416 416 __u64 pc; 417 417 }; 418 418 419 + /* LoongArch */ 420 + struct kvm_regs { 421 + /* out (KVM_GET_REGS) / in (KVM_SET_REGS) */ 422 + unsigned long gpr[32]; 423 + unsigned long pc; 424 + }; 425 + 419 426 420 427 4.12 KVM_SET_REGS 421 428 ----------------- ··· 513 506 ------------------ 514 507 515 508 :Capability: basic 516 - :Architectures: x86, ppc, mips, riscv 509 + :Architectures: x86, ppc, mips, riscv, loongarch 517 510 :Type: vcpu ioctl 518 511 :Parameters: struct kvm_interrupt (in) 519 512 :Returns: 0 on success, negative on failure. ··· 547 540 PPC: 548 541 ^^^^ 549 542 550 - Queues an external interrupt to be injected. This ioctl is overleaded 543 + Queues an external interrupt to be injected. This ioctl is overloaded 551 544 with 3 different irq values: 552 545 553 546 a) KVM_INTERRUPT_SET ··· 596 589 b) KVM_INTERRUPT_UNSET 597 590 598 591 This clears pending external interrupt for a virtual CPU. 592 + 593 + This is an asynchronous vcpu ioctl and can be invoked from any thread. 594 + 595 + LOONGARCH: 596 + ^^^^^^^^^^ 597 + 598 + Queues an external interrupt to be injected into the virtual CPU. A negative 599 + interrupt number dequeues the interrupt. 599 600 600 601 This is an asynchronous vcpu ioctl and can be invoked from any thread. 601 602 ··· 752 737 ---------------- 753 738 754 739 :Capability: basic 755 - :Architectures: x86 740 + :Architectures: x86, loongarch 756 741 :Type: vcpu ioctl 757 742 :Parameters: struct kvm_fpu (out) 758 743 :Returns: 0 on success, -1 on error ··· 761 746 762 747 :: 763 748 764 - /* for KVM_GET_FPU and KVM_SET_FPU */ 749 + /* x86: for KVM_GET_FPU and KVM_SET_FPU */ 765 750 struct kvm_fpu { 766 751 __u8 fpr[8][16]; 767 752 __u16 fcw; ··· 776 761 __u32 pad2; 777 762 }; 778 763 764 + /* LoongArch: for KVM_GET_FPU and KVM_SET_FPU */ 765 + struct kvm_fpu { 766 + __u32 fcsr; 767 + __u64 fcc; 768 + struct kvm_fpureg { 769 + __u64 val64[4]; 770 + }fpr[32]; 771 + }; 772 + 779 773 780 774 4.23 KVM_SET_FPU 781 775 ---------------- 782 776 783 777 :Capability: basic 784 - :Architectures: x86 778 + :Architectures: x86, loongarch 785 779 :Type: vcpu ioctl 786 780 :Parameters: struct kvm_fpu (in) 787 781 :Returns: 0 on success, -1 on error ··· 799 775 800 776 :: 801 777 802 - /* for KVM_GET_FPU and KVM_SET_FPU */ 778 + /* x86: for KVM_GET_FPU and KVM_SET_FPU */ 803 779 struct kvm_fpu { 804 780 __u8 fpr[8][16]; 805 781 __u16 fcw; ··· 812 788 __u8 xmm[16][16]; 813 789 __u32 mxcsr; 814 790 __u32 pad2; 791 + }; 792 + 793 + /* LoongArch: for KVM_GET_FPU and KVM_SET_FPU */ 794 + struct kvm_fpu { 795 + __u32 fcsr; 796 + __u64 fcc; 797 + struct kvm_fpureg { 798 + __u64 val64[4]; 799 + }fpr[32]; 815 800 }; 816 801 817 802 ··· 998 965 The KVM_XEN_HVM_CONFIG_INTERCEPT_HCALL flag requests KVM to generate 999 966 the contents of the hypercall page automatically; hypercalls will be 1000 967 intercepted and passed to userspace through KVM_EXIT_XEN. In this 1001 - ase, all of the blob size and address fields must be zero. 968 + case, all of the blob size and address fields must be zero. 
1002 969 1003 970 The KVM_XEN_HVM_CONFIG_EVTCHN_SEND flag indicates to KVM that userspace 1004 971 will always use the KVM_XEN_HVM_EVTCHN_SEND ioctl to deliver event ··· 1103 1070 :Extended by: KVM_CAP_INTR_SHADOW 1104 1071 :Architectures: x86, arm64 1105 1072 :Type: vcpu ioctl 1106 - :Parameters: struct kvm_vcpu_event (out) 1073 + :Parameters: struct kvm_vcpu_events (out) 1107 1074 :Returns: 0 on success, -1 on error 1108 1075 1109 1076 X86: ··· 1226 1193 :Extended by: KVM_CAP_INTR_SHADOW 1227 1194 :Architectures: x86, arm64 1228 1195 :Type: vcpu ioctl 1229 - :Parameters: struct kvm_vcpu_event (in) 1196 + :Parameters: struct kvm_vcpu_events (in) 1230 1197 :Returns: 0 on success, -1 on error 1231 1198 1232 1199 X86: ··· 1420 1387 ------------------- 1421 1388 1422 1389 :Capability: KVM_CAP_ENABLE_CAP 1423 - :Architectures: mips, ppc, s390, x86 1390 + :Architectures: mips, ppc, s390, x86, loongarch 1424 1391 :Type: vcpu ioctl 1425 1392 :Parameters: struct kvm_enable_cap (in) 1426 1393 :Returns: 0 on success; -1 on error ··· 1475 1442 --------------------- 1476 1443 1477 1444 :Capability: KVM_CAP_MP_STATE 1478 - :Architectures: x86, s390, arm64, riscv 1445 + :Architectures: x86, s390, arm64, riscv, loongarch 1479 1446 :Type: vcpu ioctl 1480 1447 :Parameters: struct kvm_mp_state (out) 1481 1448 :Returns: 0 on success; -1 on error ··· 1493 1460 1494 1461 ========================== =============================================== 1495 1462 KVM_MP_STATE_RUNNABLE the vcpu is currently running 1496 - [x86,arm64,riscv] 1463 + [x86,arm64,riscv,loongarch] 1497 1464 KVM_MP_STATE_UNINITIALIZED the vcpu is an application processor (AP) 1498 1465 which has not yet received an INIT signal [x86] 1499 1466 KVM_MP_STATE_INIT_RECEIVED the vcpu has received an INIT signal, and is ··· 1549 1516 The only states that are valid are KVM_MP_STATE_STOPPED and 1550 1517 KVM_MP_STATE_RUNNABLE which reflect if the vcpu is paused or not. 1551 1518 1519 + On LoongArch, only the KVM_MP_STATE_RUNNABLE state is used to reflect 1520 + whether the vcpu is runnable. 1521 + 1552 1522 4.39 KVM_SET_MP_STATE 1553 1523 --------------------- 1554 1524 1555 1525 :Capability: KVM_CAP_MP_STATE 1556 - :Architectures: x86, s390, arm64, riscv 1526 + :Architectures: x86, s390, arm64, riscv, loongarch 1557 1527 :Type: vcpu ioctl 1558 1528 :Parameters: struct kvm_mp_state (in) 1559 1529 :Returns: 0 on success; -1 on error ··· 1573 1537 1574 1538 The only states that are valid are KVM_MP_STATE_STOPPED and 1575 1539 KVM_MP_STATE_RUNNABLE which reflect if the vcpu should be paused or not. 1540 + 1541 + On LoongArch, only the KVM_MP_STATE_RUNNABLE state is used to reflect 1542 + whether the vcpu is runnable. 1576 1543 1577 1544 4.40 KVM_SET_IDENTITY_MAP_ADDR 1578 1545 ------------------------------ ··· 2880 2841 0x8020 0000 0600 0020 fcsr Floating point control and status register 2881 2842 ======================= ========= ============================================= 2882 2843 2844 + LoongArch registers are mapped using the lower 32 bits. The upper 16 bits of 2845 + that is the register group type. 
2846 + 2847 + LoongArch csr registers are used to control guest cpu or get status of guest 2848 + cpu, and they have the following id bit patterns:: 2849 + 2850 + 0x9030 0000 0001 00 <reg:5> <sel:3> (64-bit) 2851 + 2852 + LoongArch KVM control registers are used to implement some new defined functions 2853 + such as set vcpu counter or reset vcpu, and they have the following id bit patterns:: 2854 + 2855 + 0x9030 0000 0002 <reg:16> 2856 + 2883 2857 2884 2858 4.69 KVM_GET_ONE_REG 2885 2859 -------------------- ··· 3115 3063 }; 3116 3064 3117 3065 An entry with a "page_shift" of 0 is unused. Because the array is 3118 - organized in increasing order, a lookup can stop when encoutering 3066 + organized in increasing order, a lookup can stop when encountering 3119 3067 such an entry. 3120 3068 3121 3069 The "slb_enc" field provides the encoding to use in the SLB for the ··· 3422 3370 indicate that the attribute can be read or written in the device's 3423 3371 current state. "addr" is ignored. 3424 3372 3373 + .. _KVM_ARM_VCPU_INIT: 3374 + 3425 3375 4.82 KVM_ARM_VCPU_INIT 3426 3376 ---------------------- 3427 3377 ··· 3509 3455 - KVM_RUN and KVM_GET_REG_LIST are not available; 3510 3456 3511 3457 - KVM_GET_ONE_REG and KVM_SET_ONE_REG cannot be used to access 3512 - the scalable archietctural SVE registers 3458 + the scalable architectural SVE registers 3513 3459 KVM_REG_ARM64_SVE_ZREG(), KVM_REG_ARM64_SVE_PREG() or 3514 3460 KVM_REG_ARM64_SVE_FFR; 3515 3461 ··· 4455 4401 placed itself in a quiescent state where no vcpu will make MMU enabled 4456 4402 memory accesses. 4457 4403 4458 - On succsful completion, the pending HPT will become the guest's active 4404 + On successful completion, the pending HPT will become the guest's active 4459 4405 HPT and the previous HPT will be discarded. 4460 4406 4461 4407 On failure, the guest will still be operating on its previous HPT. ··· 5070 5016 5071 5017 Between KVM_ARM_VCPU_INIT and KVM_ARM_VCPU_FINALIZE, the feature may be 5072 5018 configured by use of ioctls such as KVM_SET_ONE_REG. The exact configuration 5073 - that should be performaned and how to do it are feature-dependent. 5019 + that should be performed and how to do it are feature-dependent. 5074 5020 5075 5021 Other calls that depend on a particular feature being finalized, such as 5076 5022 KVM_RUN, KVM_GET_REG_LIST, KVM_GET_ONE_REG and KVM_SET_ONE_REG, will fail with ··· 5177 5123 5178 5124 #define KVM_PMU_EVENT_ALLOW 0 5179 5125 #define KVM_PMU_EVENT_DENY 1 5126 + 5127 + Via this API, KVM userspace can also control the behavior of the VM's fixed 5128 + counters (if any) by configuring the "action" and "fixed_counter_bitmap" fields. 5129 + 5130 + Specifically, KVM follows the following pseudo-code when determining whether to 5131 + allow the guest FixCtr[i] to count its pre-defined fixed event:: 5132 + 5133 + FixCtr[i]_is_allowed = (action == ALLOW) && (bitmap & BIT(i)) || 5134 + (action == DENY) && !(bitmap & BIT(i)); 5135 + FixCtr[i]_is_denied = !FixCtr[i]_is_allowed; 5136 + 5137 + KVM always consumes fixed_counter_bitmap, it's userspace's responsibility to 5138 + ensure fixed_counter_bitmap is set correctly, e.g. if userspace wants to define 5139 + a filter that only affects general purpose counters. 5140 + 5141 + Note, the "events" field also applies to fixed counters' hardcoded event_select 5142 + and unit_mask values. "fixed_counter_bitmap" has higher priority than "events" 5143 + if there is a contradiction between the two. 
5180 5144 5181 5145 4.121 KVM_PPC_SVM_OFF 5182 5146 --------------------- ··· 5547 5475 from the guest. A given sending port number may be directed back to 5548 5476 a specified vCPU (by APIC ID) / port / priority on the guest, or to 5549 5477 trigger events on an eventfd. The vCPU and priority can be changed 5550 - by setting KVM_XEN_EVTCHN_UPDATE in a subsequent call, but but other 5478 + by setting KVM_XEN_EVTCHN_UPDATE in a subsequent call, but other 5551 5479 fields cannot change for a given sending port. A port mapping is 5552 5480 removed by using KVM_XEN_EVTCHN_DEASSIGN in the flags field. Passing 5553 5481 KVM_XEN_EVTCHN_RESET in the flags field removes all interception of ··· 6141 6069 writes to the CNTVCT_EL0 and CNTPCT_EL0 registers using the SET_ONE_REG 6142 6070 interface. No error will be returned, but the resulting offset will not be 6143 6071 applied. 6072 + 6073 + .. _KVM_ARM_GET_REG_WRITABLE_MASKS: 6074 + 6075 + 4.139 KVM_ARM_GET_REG_WRITABLE_MASKS 6076 + ------------------------------------------- 6077 + 6078 + :Capability: KVM_CAP_ARM_SUPPORTED_REG_MASK_RANGES 6079 + :Architectures: arm64 6080 + :Type: vm ioctl 6081 + :Parameters: struct reg_mask_range (in/out) 6082 + :Returns: 0 on success, < 0 on error 6083 + 6084 + 6085 + :: 6086 + 6087 + #define KVM_ARM_FEATURE_ID_RANGE 0 6088 + #define KVM_ARM_FEATURE_ID_RANGE_SIZE (3 * 8 * 8) 6089 + 6090 + struct reg_mask_range { 6091 + __u64 addr; /* Pointer to mask array */ 6092 + __u32 range; /* Requested range */ 6093 + __u32 reserved[13]; 6094 + }; 6095 + 6096 + This ioctl copies the writable masks for a selected range of registers to 6097 + userspace. 6098 + 6099 + The ``addr`` field is a pointer to the destination array where KVM copies 6100 + the writable masks. 6101 + 6102 + The ``range`` field indicates the requested range of registers. 6103 + ``KVM_CHECK_EXTENSION`` for the ``KVM_CAP_ARM_SUPPORTED_REG_MASK_RANGES`` 6104 + capability returns the supported ranges, expressed as a set of flags. Each 6105 + flag's bit index represents a possible value for the ``range`` field. 6106 + All other values are reserved for future use and KVM may return an error. 6107 + 6108 + The ``reserved[13]`` array is reserved for future use and should be 0, or 6109 + KVM may return an error. 6110 + 6111 + KVM_ARM_FEATURE_ID_RANGE (0) 6112 + ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 6113 + 6114 + The Feature ID range is defined as the AArch64 System register space with 6115 + op0==3, op1=={0, 1, 3}, CRn==0, CRm=={0-7}, op2=={0-7}. 6116 + 6117 + The mask returned array pointed to by ``addr`` is indexed by the macro 6118 + ``ARM64_FEATURE_ID_RANGE_IDX(op0, op1, crn, crm, op2)``, allowing userspace 6119 + to know what fields can be changed for the system register described by 6120 + ``op0, op1, crn, crm, op2``. KVM rejects ID register values that describe a 6121 + superset of the features supported by the system. 6144 6122 6145 6123 5. The kvm_run structure 6146 6124 ========================
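
As a quick illustration of the KVM_ARM_GET_REG_WRITABLE_MASKS documentation above, the following is a minimal userspace sketch (not part of this pull): it queries the Feature ID writable masks for an existing VM fd and looks up the mask of one ID register. It assumes <linux/kvm.h> from a kernel carrying this series; error handling is reduced to a bare return.

    #include <linux/kvm.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>

    static int dump_id_reg_mask(int vm_fd)
    {
        /* One 64-bit mask per register in the Feature ID space. */
        uint64_t masks[KVM_ARM_FEATURE_ID_RANGE_SIZE];
        struct reg_mask_range range;
        int idx;

        memset(&range, 0, sizeof(range));      /* reserved[] must be zero */
        range.range = KVM_ARM_FEATURE_ID_RANGE;
        range.addr = (uint64_t)(unsigned long)masks;

        if (ioctl(vm_fd, KVM_ARM_GET_REG_WRITABLE_MASKS, &range) < 0)
            return -1;

        /* ID_AA64PFR0_EL1 is op0=3, op1=0, CRn=0, CRm=4, op2=0. */
        idx = KVM_ARM_FEATURE_ID_RANGE_IDX(3, 0, 0, 4, 0);
        printf("ID_AA64PFR0_EL1 writable mask: 0x%016llx\n",
               (unsigned long long)masks[idx]);
        return 0;
    }

Bits set in the returned mask are the ID register fields userspace may later change with KVM_SET_ONE_REG; everything else must keep the value KVM computed for the system.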
+1
Documentation/virt/kvm/arm/index.rst
··· 11 11 hypercalls 12 12 pvtime 13 13 ptp_kvm 14 + vcpu-features
+48
Documentation/virt/kvm/arm/vcpu-features.rst
··· 1 + .. SPDX-License-Identifier: GPL-2.0 2 + 3 + =============================== 4 + vCPU feature selection on arm64 5 + =============================== 6 + 7 + KVM/arm64 provides two mechanisms that allow userspace to configure 8 + the CPU features presented to the guest. 9 + 10 + KVM_ARM_VCPU_INIT 11 + ================= 12 + 13 + The ``KVM_ARM_VCPU_INIT`` ioctl accepts a bitmap of feature flags 14 + (``struct kvm_vcpu_init::features``). Features enabled by this interface are 15 + *opt-in* and may change/extend UAPI. See :ref:`KVM_ARM_VCPU_INIT` for complete 16 + documentation of the features controlled by the ioctl. 17 + 18 + Otherwise, all CPU features supported by KVM are described by the architected 19 + ID registers. 20 + 21 + The ID Registers 22 + ================ 23 + 24 + The Arm architecture specifies a range of *ID Registers* that describe the set 25 + of architectural features supported by the CPU implementation. KVM initializes 26 + the guest's ID registers to the maximum set of CPU features supported by the 27 + system. The ID register values may be VM-scoped in KVM, meaning that the 28 + values could be shared for all vCPUs in a VM. 29 + 30 + KVM allows userspace to *opt-out* of certain CPU features described by the ID 31 + registers by writing values to them via the ``KVM_SET_ONE_REG`` ioctl. The ID 32 + registers are mutable until the VM has started, i.e. userspace has called 33 + ``KVM_RUN`` on at least one vCPU in the VM. Userspace can discover what fields 34 + are mutable in the ID registers using the ``KVM_ARM_GET_REG_WRITABLE_MASKS``. 35 + See the :ref:`ioctl documentation <KVM_ARM_GET_REG_WRITABLE_MASKS>` for more 36 + details. 37 + 38 + Userspace is allowed to *limit* or *mask* CPU features according to the rules 39 + outlined by the architecture in DDI0487J.a D19.1.3 'Principles of the ID 40 + scheme for fields in ID register'. KVM does not allow ID register values that 41 + exceed the capabilities of the system. 42 + 43 + .. warning:: 44 + It is **strongly recommended** that userspace modify the ID register values 45 + before accessing the rest of the vCPU's CPU register state. KVM may use the 46 + ID register values to control feature emulation. Interleaving ID register 47 + modification with other system register accesses may lead to unpredictable 48 + behavior.
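
For concreteness, the two mechanisms described in the new vcpu-features document could be driven from userspace roughly as below (a hedged sketch, not from this series): opt in to a feature with KVM_ARM_VCPU_INIT, then opt out of an ID-register-described feature with KVM_SET_ONE_REG before the first KVM_RUN. The target selection, the register picked (ID_AA64ISAR0_EL1) and the field cleared are purely illustrative, and error handling is omitted.

    #include <linux/kvm.h>
    #include <stdint.h>
    #include <string.h>
    #include <sys/ioctl.h>

    /* ID_AA64ISAR0_EL1: op0=3, op1=0, CRn=0, CRm=6, op2=0 (illustrative pick). */
    #define ID_AA64ISAR0_EL1_ID    ARM64_SYS_REG(3, 0, 0, 6, 0)

    static int shrink_guest_features(int vcpu_fd)
    {
        struct kvm_vcpu_init init;
        struct kvm_one_reg reg;
        uint64_t val;

        /* Opt-in path: request PMUv3 emulation for this vCPU. */
        memset(&init, 0, sizeof(init));
        init.target = KVM_ARM_TARGET_GENERIC_V8;
        init.features[0] = 1U << KVM_ARM_VCPU_PMU_V3;
        if (ioctl(vcpu_fd, KVM_ARM_VCPU_INIT, &init) < 0)
            return -1;

        /*
         * Opt-out path: read an ID register, clear a feature field, write
         * it back. This must happen before any vCPU in the VM has run.
         */
        reg.id = ID_AA64ISAR0_EL1_ID;
        reg.addr = (uint64_t)(unsigned long)&val;
        if (ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg) < 0)
            return -1;

        val &= ~(UINT64_C(0xf) << 20);  /* hide the Atomic (FEAT_LSE) field */
        if (ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg) < 0)
            return -1;

        return 0;
    }

Per the warning in the document, the ID register writes are best done before touching the rest of the vCPU's register state.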
+7
Documentation/virt/kvm/devices/arm-vgic-v3.rst
··· 59 59 It is invalid to mix calls with KVM_VGIC_V3_ADDR_TYPE_REDIST and 60 60 KVM_VGIC_V3_ADDR_TYPE_REDIST_REGION attributes. 61 61 62 + Note that to obtain reproducible results (the same VCPU being associated 63 + with the same redistributor across a save/restore operation), VCPU creation 64 + order, redistributor region creation order as well as the respective 65 + interleaves of VCPU and region creation MUST be preserved. Any change in 66 + either ordering may result in a different vcpu_id/redistributor association, 67 + resulting in a VM that will fail to run at restore time. 68 + 62 69 Errors: 63 70 64 71 ======= =============================================================
+34 -9
Documentation/virt/kvm/x86/mmu.rst
··· 202 202 Is 1 if the MMU instance cannot use A/D bits. EPT did not have A/D 203 203 bits before Haswell; shadow EPT page tables also cannot use A/D bits 204 204 if the L1 hypervisor does not enable them. 205 + role.guest_mode: 206 + Indicates the shadow page is created for a nested guest. 205 207 role.passthrough: 206 208 The page is not backed by a guest page table, but its first entry 207 209 points to one. This is set if NPT uses 5-level page tables (host 208 210 CR4.LA57=1) and is shadowing L1's 4-level NPT (L1 CR4.LA57=0). 211 + mmu_valid_gen: 212 + The MMU generation of this page, used to fast zap of all MMU pages within a 213 + VM without blocking vCPUs too long. Specifically, KVM updates the per-VM 214 + valid MMU generation which causes the mismatch of mmu_valid_gen for each mmu 215 + page. This makes all existing MMU pages obsolete. Obsolete pages can't be 216 + used. Therefore, vCPUs must load a new, valid root before re-entering the 217 + guest. The MMU generation is only ever '0' or '1'. Note, the TDP MMU doesn't 218 + use this field as non-root TDP MMU pages are reachable only from their 219 + owning root. Thus it suffices for TDP MMU to use role.invalid in root pages 220 + to invalidate all MMU pages. 209 221 gfn: 210 222 Either the guest page table containing the translations shadowed by this 211 223 page, or the base page frame for linear translations. See role.direct. ··· 231 219 at __pa(sp2->spt). sp2 will point back at sp1 through parent_pte. 232 220 The spt array forms a DAG structure with the shadow page as a node, and 233 221 guest pages as leaves. 234 - gfns: 235 - An array of 512 guest frame numbers, one for each present pte. Used to 236 - perform a reverse map from a pte to a gfn. When role.direct is set, any 237 - element of this array can be calculated from the gfn field when used, in 238 - this case, the array of gfns is not allocated. See role.direct and gfn. 239 - root_count: 240 - A counter keeping track of how many hardware registers (guest cr3 or 241 - pdptrs) are now pointing at the page. While this counter is nonzero, the 242 - page cannot be destroyed. See role.invalid. 222 + shadowed_translation: 223 + An array of 512 shadow translation entries, one for each present pte. Used 224 + to perform a reverse map from a pte to a gfn as well as its access 225 + permission. When role.direct is set, the shadow_translation array is not 226 + allocated. This is because the gfn contained in any element of this array 227 + can be calculated from the gfn field when used. In addition, when 228 + role.direct is set, KVM does not track access permission for each of the 229 + gfn. See role.direct and gfn. 230 + root_count / tdp_mmu_root_count: 231 + root_count is a reference counter for root shadow pages in Shadow MMU. 232 + vCPUs elevate the refcount when getting a shadow page that will be used as 233 + a root page, i.e. page that will be loaded into hardware directly (CR3, 234 + PDPTRs, nCR3 EPTP). Root pages cannot be destroyed while their refcount is 235 + non-zero. See role.invalid. tdp_mmu_root_count is similar but exclusively 236 + used in TDP MMU as an atomic refcount. 243 237 parent_ptes: 244 238 The reverse mapping for the pte/ptes pointing at this page's spt. If 245 239 parent_ptes bit 0 is zero, only one spte points at this page and 246 240 parent_ptes points at this single spte, otherwise, there exists multiple 247 241 sptes pointing at this page and (parent_ptes & ~0x1) points at a data 248 242 structure with a list of parent sptes. 
243 + ptep: 244 + The kernel virtual address of the SPTE that points at this shadow page. 245 + Used exclusively by the TDP MMU, this field is a union with parent_ptes. 249 246 unsync: 250 247 If true, then the translations in this page may not match the guest's 251 248 translation. This is equivalent to the state of the tlb when a pte is ··· 282 261 since the last time the page table was actually used; if emulation 283 262 is triggered too frequently on this page, KVM will unmap the page 284 263 to avoid emulation in the future. 264 + tdp_mmu_page: 265 + Is 1 if the shadow page is a TDP MMU page. This variable is used to 266 + bifurcate the control flows for KVM when walking any data structure that 267 + may contain pages from both TDP MMU and shadow MMU. 285 268 286 269 Reverse map 287 270 ===========
+13
MAINTAINERS
··· 11604 11604 F: tools/testing/selftests/kvm/*/aarch64/ 11605 11605 F: tools/testing/selftests/kvm/aarch64/ 11606 11606 11607 + KERNEL VIRTUAL MACHINE FOR LOONGARCH (KVM/LoongArch) 11608 + M: Tianrui Zhao <zhaotianrui@loongson.cn> 11609 + M: Bibo Mao <maobibo@loongson.cn> 11610 + M: Huacai Chen <chenhuacai@kernel.org> 11611 + L: kvm@vger.kernel.org 11612 + L: loongarch@lists.linux.dev 11613 + S: Maintained 11614 + T: git git://git.kernel.org/pub/scm/virt/kvm/kvm.git 11615 + F: arch/loongarch/include/asm/kvm* 11616 + F: arch/loongarch/include/uapi/asm/kvm* 11617 + F: arch/loongarch/kvm/ 11618 + 11607 11619 KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips) 11608 11620 M: Huacai Chen <chenhuacai@kernel.org> 11609 11621 L: linux-mips@vger.kernel.org ··· 11652 11640 F: arch/riscv/include/uapi/asm/kvm* 11653 11641 F: arch/riscv/kvm/ 11654 11642 F: tools/testing/selftests/kvm/*/riscv/ 11643 + F: tools/testing/selftests/kvm/riscv/ 11655 11644 11656 11645 KERNEL VIRTUAL MACHINE for s390 (KVM/s390) 11657 11646 M: Christian Borntraeger <borntraeger@linux.ibm.com>
+3 -1
arch/arm64/include/asm/kvm_arm.h
··· 102 102 #define HCR_HOST_NVHE_PROTECTED_FLAGS (HCR_HOST_NVHE_FLAGS | HCR_TSC) 103 103 #define HCR_HOST_VHE_FLAGS (HCR_RW | HCR_TGE | HCR_E2H) 104 104 105 - #define HCRX_GUEST_FLAGS (HCRX_EL2_SMPME | HCRX_EL2_TCR2En) 105 + #define HCRX_GUEST_FLAGS \ 106 + (HCRX_EL2_SMPME | HCRX_EL2_TCR2En | \ 107 + (cpus_have_final_cap(ARM64_HAS_MOPS) ? (HCRX_EL2_MSCEn | HCRX_EL2_MCE2) : 0)) 106 108 #define HCRX_HOST_FLAGS (HCRX_EL2_MSCEn | HCRX_EL2_TCR2En) 107 109 108 110 /* TCR_EL2 Registers bits */
+7 -8
arch/arm64/include/asm/kvm_emulate.h
··· 54 54 int kvm_inject_nested_sync(struct kvm_vcpu *vcpu, u64 esr_el2); 55 55 int kvm_inject_nested_irq(struct kvm_vcpu *vcpu); 56 56 57 + static inline bool vcpu_has_feature(const struct kvm_vcpu *vcpu, int feature) 58 + { 59 + return test_bit(feature, vcpu->kvm->arch.vcpu_features); 60 + } 61 + 57 62 #if defined(__KVM_VHE_HYPERVISOR__) || defined(__KVM_NVHE_HYPERVISOR__) 58 63 static __always_inline bool vcpu_el1_is_32bit(struct kvm_vcpu *vcpu) 59 64 { ··· 67 62 #else 68 63 static __always_inline bool vcpu_el1_is_32bit(struct kvm_vcpu *vcpu) 69 64 { 70 - return test_bit(KVM_ARM_VCPU_EL1_32BIT, vcpu->arch.features); 65 + return vcpu_has_feature(vcpu, KVM_ARM_VCPU_EL1_32BIT); 71 66 } 72 67 #endif 73 68 ··· 470 465 471 466 static inline unsigned long kvm_vcpu_get_mpidr_aff(struct kvm_vcpu *vcpu) 472 467 { 473 - return vcpu_read_sys_reg(vcpu, MPIDR_EL1) & MPIDR_HWID_BITMASK; 468 + return __vcpu_sys_reg(vcpu, MPIDR_EL1) & MPIDR_HWID_BITMASK; 474 469 } 475 470 476 471 static inline void kvm_vcpu_set_be(struct kvm_vcpu *vcpu) ··· 569 564 vcpu_set_flag((v), PENDING_EXCEPTION); \ 570 565 vcpu_set_flag((v), e); \ 571 566 } while (0) 572 - 573 - 574 - static inline bool vcpu_has_feature(struct kvm_vcpu *vcpu, int feature) 575 - { 576 - return test_bit(feature, vcpu->arch.features); 577 - } 578 567 579 568 static __always_inline void kvm_write_cptr_el2(u64 val) 580 569 {
+48 -13
arch/arm64/include/asm/kvm_host.h
··· 78 78 int __init kvm_arm_init_sve(void); 79 79 80 80 u32 __attribute_const__ kvm_target_cpu(void); 81 - int kvm_reset_vcpu(struct kvm_vcpu *vcpu); 81 + void kvm_reset_vcpu(struct kvm_vcpu *vcpu); 82 82 void kvm_arm_vcpu_destroy(struct kvm_vcpu *vcpu); 83 83 84 84 struct kvm_hyp_memcache { ··· 158 158 phys_addr_t pgd_phys; 159 159 struct kvm_pgtable *pgt; 160 160 161 + /* 162 + * VTCR value used on the host. For a non-NV guest (or a NV 163 + * guest that runs in a context where its own S2 doesn't 164 + * apply), its T0SZ value reflects that of the IPA size. 165 + * 166 + * For a shadow S2 MMU, T0SZ reflects the PARange exposed to 167 + * the guest. 168 + */ 169 + u64 vtcr; 170 + 161 171 /* The last vcpu id that ran on each physical CPU */ 162 172 int __percpu *last_vcpu_ran; 163 173 ··· 212 202 struct kvm_hyp_memcache teardown_mc; 213 203 }; 214 204 205 + struct kvm_mpidr_data { 206 + u64 mpidr_mask; 207 + DECLARE_FLEX_ARRAY(u16, cmpidr_to_idx); 208 + }; 209 + 210 + static inline u16 kvm_mpidr_index(struct kvm_mpidr_data *data, u64 mpidr) 211 + { 212 + unsigned long mask = data->mpidr_mask; 213 + u64 aff = mpidr & MPIDR_HWID_BITMASK; 214 + int nbits, bit, bit_idx = 0; 215 + u16 index = 0; 216 + 217 + /* 218 + * If this looks like RISC-V's BEXT or x86's PEXT 219 + * instructions, it isn't by accident. 220 + */ 221 + nbits = fls(mask); 222 + for_each_set_bit(bit, &mask, nbits) { 223 + index |= (aff & BIT(bit)) >> (bit - bit_idx); 224 + bit_idx++; 225 + } 226 + 227 + return index; 228 + } 229 + 215 230 struct kvm_arch { 216 231 struct kvm_s2_mmu mmu; 217 - 218 - /* VTCR_EL2 value for this VM */ 219 - u64 vtcr; 220 232 221 233 /* Interrupt controller */ 222 234 struct vgic_dist vgic; ··· 271 239 #define KVM_ARCH_FLAG_VM_COUNTER_OFFSET 5 272 240 /* Timer PPIs made immutable */ 273 241 #define KVM_ARCH_FLAG_TIMER_PPIS_IMMUTABLE 6 274 - /* SMCCC filter initialized for the VM */ 275 - #define KVM_ARCH_FLAG_SMCCC_FILTER_CONFIGURED 7 276 242 /* Initial ID reg values loaded */ 277 - #define KVM_ARCH_FLAG_ID_REGS_INITIALIZED 8 243 + #define KVM_ARCH_FLAG_ID_REGS_INITIALIZED 7 278 244 unsigned long flags; 279 245 280 246 /* VM-wide vCPU feature set */ 281 247 DECLARE_BITMAP(vcpu_features, KVM_VCPU_MAX_FEATURES); 248 + 249 + /* MPIDR to vcpu index mapping, optional */ 250 + struct kvm_mpidr_data *mpidr_data; 282 251 283 252 /* 284 253 * VM-wide PMU filter, implemented as a bitmap and big enough for ··· 289 256 struct arm_pmu *arm_pmu; 290 257 291 258 cpumask_var_t supported_cpus; 259 + 260 + /* PMCR_EL0.N value for the guest */ 261 + u8 pmcr_n; 292 262 293 263 /* Hypercall features firmware registers' descriptor */ 294 264 struct kvm_smccc_features smccc_feat; ··· 609 573 610 574 /* Cache some mmu pages needed inside spinlock regions */ 611 575 struct kvm_mmu_memory_cache mmu_page_cache; 612 - 613 - /* feature flags */ 614 - DECLARE_BITMAP(features, KVM_VCPU_MAX_FEATURES); 615 576 616 577 /* Virtual SError ESR to restore when HCR_EL2.VSE is set */ 617 578 u64 vsesr_el2; ··· 1058 1025 extern unsigned int __ro_after_init kvm_arm_vmid_bits; 1059 1026 int __init kvm_arm_vmid_alloc_init(void); 1060 1027 void __init kvm_arm_vmid_alloc_free(void); 1061 - void kvm_arm_vmid_update(struct kvm_vmid *kvm_vmid); 1028 + bool kvm_arm_vmid_update(struct kvm_vmid *kvm_vmid); 1062 1029 void kvm_arm_vmid_clear_active(void); 1063 1030 1064 1031 static inline void kvm_arm_pvtime_vcpu_init(struct kvm_vcpu_arch *vcpu_arch) ··· 1111 1078 struct kvm_arm_copy_mte_tags *copy_tags); 1112 1079 int 
kvm_vm_ioctl_set_counter_offset(struct kvm *kvm, 1113 1080 struct kvm_arm_counter_offset *offset); 1081 + int kvm_vm_ioctl_get_reg_writable_masks(struct kvm *kvm, 1082 + struct reg_mask_range *range); 1114 1083 1115 1084 /* Guest/host FPSIMD coordination helpers */ 1116 1085 int kvm_arch_vcpu_run_map_fp(struct kvm_vcpu *vcpu); ··· 1144 1109 } 1145 1110 #endif 1146 1111 1147 - void kvm_vcpu_load_sysregs_vhe(struct kvm_vcpu *vcpu); 1148 - void kvm_vcpu_put_sysregs_vhe(struct kvm_vcpu *vcpu); 1112 + void kvm_vcpu_load_vhe(struct kvm_vcpu *vcpu); 1113 + void kvm_vcpu_put_vhe(struct kvm_vcpu *vcpu); 1149 1114 1150 1115 int __init kvm_set_ipa_limit(void); 1151 1116
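
The compressed MPIDR-to-vCPU table introduced here keys its entries with kvm_mpidr_index(), which packs only the affinity bits that actually differ between vCPUs. Below is a standalone re-statement of that packing (an illustration, not the kernel code) using a made-up two-cluster layout:

    #include <stdint.h>
    #include <stdio.h>

    static uint16_t mpidr_index(uint64_t mask, uint64_t aff)
    {
        uint16_t index = 0;
        int bit_idx = 0;

        for (int bit = 0; bit < 64; bit++) {
            if (!(mask & (1ULL << bit)))
                continue;
            /* Same PEXT/BEXT-style packing as the kernel loop above. */
            index |= ((aff >> bit) & 1) << bit_idx;
            bit_idx++;
        }
        return index;
    }

    int main(void)
    {
        /*
         * Eight vCPUs with Aff1 = {0,1} and Aff0 = {0..3}: only bits [1:0]
         * and bit 8 vary across the set, so mask = 0x103 and the table
         * needs 2^3 = 8 entries rather than one slot per raw MPIDR value.
         */
        uint64_t mask = 0x103;

        printf("MPIDR 0x000 -> index %u\n", mpidr_index(mask, 0x000));  /* 0 */
        printf("MPIDR 0x102 -> index %u\n", mpidr_index(mask, 0x102));  /* 6 */
        printf("MPIDR 0x103 -> index %u\n", mpidr_index(mask, 0x103));  /* 7 */
        return 0;
    }

With this table in place, the MPIDR-to-vCPU lookup in arch/arm64/kvm/arm.c (further down) becomes a single table read plus a verification compare instead of a walk over every vCPU.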
+2 -5
arch/arm64/include/asm/kvm_hyp.h
··· 93 93 void __sysreg_save_state_nvhe(struct kvm_cpu_context *ctxt); 94 94 void __sysreg_restore_state_nvhe(struct kvm_cpu_context *ctxt); 95 95 #else 96 + void __vcpu_load_switch_sysregs(struct kvm_vcpu *vcpu); 97 + void __vcpu_put_switch_sysregs(struct kvm_vcpu *vcpu); 96 98 void sysreg_save_host_state_vhe(struct kvm_cpu_context *ctxt); 97 99 void sysreg_restore_host_state_vhe(struct kvm_cpu_context *ctxt); 98 100 void sysreg_save_guest_state_vhe(struct kvm_cpu_context *ctxt); ··· 112 110 void __fpsimd_save_state(struct user_fpsimd_state *fp_regs); 113 111 void __fpsimd_restore_state(struct user_fpsimd_state *fp_regs); 114 112 void __sve_restore_state(void *sve_pffr, u32 *fpsr); 115 - 116 - #ifndef __KVM_NVHE_HYPERVISOR__ 117 - void activate_traps_vhe_load(struct kvm_vcpu *vcpu); 118 - void deactivate_traps_vhe_put(struct kvm_vcpu *vcpu); 119 - #endif 120 113 121 114 u64 __guest_enter(struct kvm_vcpu *vcpu); 122 115
+35 -10
arch/arm64/include/asm/kvm_mmu.h
··· 150 150 */ 151 151 #define KVM_PHYS_SHIFT (40) 152 152 153 - #define kvm_phys_shift(kvm) VTCR_EL2_IPA(kvm->arch.vtcr) 154 - #define kvm_phys_size(kvm) (_AC(1, ULL) << kvm_phys_shift(kvm)) 155 - #define kvm_phys_mask(kvm) (kvm_phys_size(kvm) - _AC(1, ULL)) 153 + #define kvm_phys_shift(mmu) VTCR_EL2_IPA((mmu)->vtcr) 154 + #define kvm_phys_size(mmu) (_AC(1, ULL) << kvm_phys_shift(mmu)) 155 + #define kvm_phys_mask(mmu) (kvm_phys_size(mmu) - _AC(1, ULL)) 156 156 157 157 #include <asm/kvm_pgtable.h> 158 158 #include <asm/stage2_pgtable.h> ··· 224 224 kvm_flush_dcache_to_poc(va, size); 225 225 } 226 226 227 + static inline size_t __invalidate_icache_max_range(void) 228 + { 229 + u8 iminline; 230 + u64 ctr; 231 + 232 + asm volatile(ALTERNATIVE_CB("movz %0, #0\n" 233 + "movk %0, #0, lsl #16\n" 234 + "movk %0, #0, lsl #32\n" 235 + "movk %0, #0, lsl #48\n", 236 + ARM64_ALWAYS_SYSTEM, 237 + kvm_compute_final_ctr_el0) 238 + : "=r" (ctr)); 239 + 240 + iminline = SYS_FIELD_GET(CTR_EL0, IminLine, ctr) + 2; 241 + return MAX_DVM_OPS << iminline; 242 + } 243 + 227 244 static inline void __invalidate_icache_guest_page(void *va, size_t size) 228 245 { 229 - if (icache_is_aliasing()) { 230 - /* any kind of VIPT cache */ 246 + /* 247 + * VPIPT I-cache maintenance must be done from EL2. See comment in the 248 + * nVHE flavor of __kvm_tlb_flush_vmid_ipa(). 249 + */ 250 + if (icache_is_vpipt() && read_sysreg(CurrentEL) != CurrentEL_EL2) 251 + return; 252 + 253 + /* 254 + * Blow the whole I-cache if it is aliasing (i.e. VIPT) or the 255 + * invalidation range exceeds our arbitrary limit on invadations by 256 + * cache line. 257 + */ 258 + if (icache_is_aliasing() || size > __invalidate_icache_max_range()) 231 259 icache_inval_all_pou(); 232 - } else if (read_sysreg(CurrentEL) != CurrentEL_EL1 || 233 - !icache_is_vpipt()) { 234 - /* PIPT or VPIPT at EL2 (see comment in __kvm_tlb_flush_vmid_ipa) */ 260 + else 235 261 icache_inval_pou((unsigned long)va, (unsigned long)va + size); 236 - } 237 262 } 238 263 239 264 void kvm_set_way_flush(struct kvm_vcpu *vcpu); ··· 324 299 static __always_inline void __load_stage2(struct kvm_s2_mmu *mmu, 325 300 struct kvm_arch *arch) 326 301 { 327 - write_sysreg(arch->vtcr, vtcr_el2); 302 + write_sysreg(mmu->vtcr, vtcr_el2); 328 303 write_sysreg(kvm_get_vttbr(mmu), vttbr_el2); 329 304 330 305 /*
+2 -1
arch/arm64/include/asm/kvm_nested.h
··· 2 2 #ifndef __ARM64_KVM_NESTED_H 3 3 #define __ARM64_KVM_NESTED_H 4 4 5 + #include <asm/kvm_emulate.h> 5 6 #include <linux/kvm_host.h> 6 7 7 8 static inline bool vcpu_has_nv(const struct kvm_vcpu *vcpu) 8 9 { 9 10 return (!__is_defined(__KVM_NVHE_HYPERVISOR__) && 10 11 cpus_have_final_cap(ARM64_HAS_NESTED_VIRT) && 11 - test_bit(KVM_ARM_VCPU_HAS_EL2, vcpu->arch.features)); 12 + vcpu_has_feature(vcpu, KVM_ARM_VCPU_HAS_EL2)); 12 13 } 13 14 14 15 extern bool __check_nv_sr_forward(struct kvm_vcpu *vcpu);
+2 -2
arch/arm64/include/asm/stage2_pgtable.h
··· 21 21 * (IPA_SHIFT - 4). 22 22 */ 23 23 #define stage2_pgtable_levels(ipa) ARM64_HW_PGTABLE_LEVELS((ipa) - 4) 24 - #define kvm_stage2_levels(kvm) VTCR_EL2_LVLS(kvm->arch.vtcr) 24 + #define kvm_stage2_levels(mmu) VTCR_EL2_LVLS((mmu)->vtcr) 25 25 26 26 /* 27 27 * kvm_mmmu_cache_min_pages() is the number of pages required to install 28 28 * a stage-2 translation. We pre-allocate the entry level page table at 29 29 * the VM creation. 30 30 */ 31 - #define kvm_mmu_cache_min_pages(kvm) (kvm_stage2_levels(kvm) - 1) 31 + #define kvm_mmu_cache_min_pages(mmu) (kvm_stage2_levels(mmu) - 1) 32 32 33 33 #endif /* __ARM64_S2_PGTABLE_H_ */
+45
arch/arm64/include/asm/sysreg.h
··· 270 270 /* ETM */ 271 271 #define SYS_TRCOSLAR sys_reg(2, 1, 1, 0, 4) 272 272 273 + #define SYS_BRBCR_EL2 sys_reg(2, 4, 9, 0, 0) 274 + 273 275 #define SYS_MIDR_EL1 sys_reg(3, 0, 0, 0, 0) 274 276 #define SYS_MPIDR_EL1 sys_reg(3, 0, 0, 0, 5) 275 277 #define SYS_REVIDR_EL1 sys_reg(3, 0, 0, 0, 6) ··· 486 484 487 485 #define SYS_SCTLR_EL2 sys_reg(3, 4, 1, 0, 0) 488 486 #define SYS_ACTLR_EL2 sys_reg(3, 4, 1, 0, 1) 487 + #define SYS_SCTLR2_EL2 sys_reg(3, 4, 1, 0, 3) 489 488 #define SYS_HCR_EL2 sys_reg(3, 4, 1, 1, 0) 490 489 #define SYS_MDCR_EL2 sys_reg(3, 4, 1, 1, 1) 491 490 #define SYS_CPTR_EL2 sys_reg(3, 4, 1, 1, 2) ··· 500 497 #define SYS_VTCR_EL2 sys_reg(3, 4, 2, 1, 2) 501 498 502 499 #define SYS_TRFCR_EL2 sys_reg(3, 4, 1, 2, 1) 500 + #define SYS_VNCR_EL2 sys_reg(3, 4, 2, 2, 0) 503 501 #define SYS_HAFGRTR_EL2 sys_reg(3, 4, 3, 1, 6) 504 502 #define SYS_SPSR_EL2 sys_reg(3, 4, 4, 0, 0) 505 503 #define SYS_ELR_EL2 sys_reg(3, 4, 4, 0, 1) 506 504 #define SYS_SP_EL1 sys_reg(3, 4, 4, 1, 0) 505 + #define SYS_SPSR_irq sys_reg(3, 4, 4, 3, 0) 506 + #define SYS_SPSR_abt sys_reg(3, 4, 4, 3, 1) 507 + #define SYS_SPSR_und sys_reg(3, 4, 4, 3, 2) 508 + #define SYS_SPSR_fiq sys_reg(3, 4, 4, 3, 3) 507 509 #define SYS_IFSR32_EL2 sys_reg(3, 4, 5, 0, 1) 508 510 #define SYS_AFSR0_EL2 sys_reg(3, 4, 5, 1, 0) 509 511 #define SYS_AFSR1_EL2 sys_reg(3, 4, 5, 1, 1) ··· 522 514 523 515 #define SYS_MAIR_EL2 sys_reg(3, 4, 10, 2, 0) 524 516 #define SYS_AMAIR_EL2 sys_reg(3, 4, 10, 3, 0) 517 + #define SYS_MPAMHCR_EL2 sys_reg(3, 4, 10, 4, 0) 518 + #define SYS_MPAMVPMV_EL2 sys_reg(3, 4, 10, 4, 1) 519 + #define SYS_MPAM2_EL2 sys_reg(3, 4, 10, 5, 0) 520 + #define __SYS__MPAMVPMx_EL2(x) sys_reg(3, 4, 10, 6, x) 521 + #define SYS_MPAMVPM0_EL2 __SYS__MPAMVPMx_EL2(0) 522 + #define SYS_MPAMVPM1_EL2 __SYS__MPAMVPMx_EL2(1) 523 + #define SYS_MPAMVPM2_EL2 __SYS__MPAMVPMx_EL2(2) 524 + #define SYS_MPAMVPM3_EL2 __SYS__MPAMVPMx_EL2(3) 525 + #define SYS_MPAMVPM4_EL2 __SYS__MPAMVPMx_EL2(4) 526 + #define SYS_MPAMVPM5_EL2 __SYS__MPAMVPMx_EL2(5) 527 + #define SYS_MPAMVPM6_EL2 __SYS__MPAMVPMx_EL2(6) 528 + #define SYS_MPAMVPM7_EL2 __SYS__MPAMVPMx_EL2(7) 525 529 526 530 #define SYS_VBAR_EL2 sys_reg(3, 4, 12, 0, 0) 527 531 #define SYS_RVBAR_EL2 sys_reg(3, 4, 12, 0, 1) ··· 582 562 583 563 #define SYS_CONTEXTIDR_EL2 sys_reg(3, 4, 13, 0, 1) 584 564 #define SYS_TPIDR_EL2 sys_reg(3, 4, 13, 0, 2) 565 + #define SYS_SCXTNUM_EL2 sys_reg(3, 4, 13, 0, 7) 566 + 567 + #define __AMEV_op2(m) (m & 0x7) 568 + #define __AMEV_CRm(n, m) (n | ((m & 0x8) >> 3)) 569 + #define __SYS__AMEVCNTVOFF0n_EL2(m) sys_reg(3, 4, 13, __AMEV_CRm(0x8, m), __AMEV_op2(m)) 570 + #define SYS_AMEVCNTVOFF0n_EL2(m) __SYS__AMEVCNTVOFF0n_EL2(m) 571 + #define __SYS__AMEVCNTVOFF1n_EL2(m) sys_reg(3, 4, 13, __AMEV_CRm(0xA, m), __AMEV_op2(m)) 572 + #define SYS_AMEVCNTVOFF1n_EL2(m) __SYS__AMEVCNTVOFF1n_EL2(m) 585 573 586 574 #define SYS_CNTVOFF_EL2 sys_reg(3, 4, 14, 0, 3) 587 575 #define SYS_CNTHCTL_EL2 sys_reg(3, 4, 14, 1, 0) 576 + #define SYS_CNTHP_TVAL_EL2 sys_reg(3, 4, 14, 2, 0) 577 + #define SYS_CNTHP_CTL_EL2 sys_reg(3, 4, 14, 2, 1) 578 + #define SYS_CNTHP_CVAL_EL2 sys_reg(3, 4, 14, 2, 2) 579 + #define SYS_CNTHV_TVAL_EL2 sys_reg(3, 4, 14, 3, 0) 580 + #define SYS_CNTHV_CTL_EL2 sys_reg(3, 4, 14, 3, 1) 581 + #define SYS_CNTHV_CVAL_EL2 sys_reg(3, 4, 14, 3, 2) 588 582 589 583 /* VHE encodings for architectural EL0/1 system registers */ 584 + #define SYS_BRBCR_EL12 sys_reg(2, 5, 9, 0, 0) 590 585 #define SYS_SCTLR_EL12 sys_reg(3, 5, 1, 0, 0) 586 + #define SYS_CPACR_EL12 sys_reg(3, 5, 1, 0, 2) 587 + 
#define SYS_SCTLR2_EL12 sys_reg(3, 5, 1, 0, 3) 588 + #define SYS_ZCR_EL12 sys_reg(3, 5, 1, 2, 0) 589 + #define SYS_TRFCR_EL12 sys_reg(3, 5, 1, 2, 1) 590 + #define SYS_SMCR_EL12 sys_reg(3, 5, 1, 2, 6) 591 591 #define SYS_TTBR0_EL12 sys_reg(3, 5, 2, 0, 0) 592 592 #define SYS_TTBR1_EL12 sys_reg(3, 5, 2, 0, 1) 593 593 #define SYS_TCR_EL12 sys_reg(3, 5, 2, 0, 2) 594 + #define SYS_TCR2_EL12 sys_reg(3, 5, 2, 0, 3) 594 595 #define SYS_SPSR_EL12 sys_reg(3, 5, 4, 0, 0) 595 596 #define SYS_ELR_EL12 sys_reg(3, 5, 4, 0, 1) 596 597 #define SYS_AFSR0_EL12 sys_reg(3, 5, 5, 1, 0) 597 598 #define SYS_AFSR1_EL12 sys_reg(3, 5, 5, 1, 1) 598 599 #define SYS_ESR_EL12 sys_reg(3, 5, 5, 2, 0) 599 600 #define SYS_TFSR_EL12 sys_reg(3, 5, 5, 6, 0) 601 + #define SYS_FAR_EL12 sys_reg(3, 5, 6, 0, 0) 602 + #define SYS_PMSCR_EL12 sys_reg(3, 5, 9, 9, 0) 600 603 #define SYS_MAIR_EL12 sys_reg(3, 5, 10, 2, 0) 601 604 #define SYS_AMAIR_EL12 sys_reg(3, 5, 10, 3, 0) 602 605 #define SYS_VBAR_EL12 sys_reg(3, 5, 12, 0, 0) 606 + #define SYS_CONTEXTIDR_EL12 sys_reg(3, 5, 13, 0, 1) 607 + #define SYS_SCXTNUM_EL12 sys_reg(3, 5, 13, 0, 7) 603 608 #define SYS_CNTKCTL_EL12 sys_reg(3, 5, 14, 1, 0) 604 609 #define SYS_CNTP_TVAL_EL02 sys_reg(3, 5, 14, 2, 0) 605 610 #define SYS_CNTP_CTL_EL02 sys_reg(3, 5, 14, 2, 1)
+4 -4
arch/arm64/include/asm/tlbflush.h
··· 332 332 * This is meant to avoid soft lock-ups on large TLB flushing ranges and not 333 333 * necessarily a performance improvement. 334 334 */ 335 - #define MAX_TLBI_OPS PTRS_PER_PTE 335 + #define MAX_DVM_OPS PTRS_PER_PTE 336 336 337 337 /* 338 338 * __flush_tlb_range_op - Perform TLBI operation upon a range ··· 412 412 413 413 /* 414 414 * When not uses TLB range ops, we can handle up to 415 - * (MAX_TLBI_OPS - 1) pages; 415 + * (MAX_DVM_OPS - 1) pages; 416 416 * When uses TLB range ops, we can handle up to 417 417 * (MAX_TLBI_RANGE_PAGES - 1) pages. 418 418 */ 419 419 if ((!system_supports_tlb_range() && 420 - (end - start) >= (MAX_TLBI_OPS * stride)) || 420 + (end - start) >= (MAX_DVM_OPS * stride)) || 421 421 pages >= MAX_TLBI_RANGE_PAGES) { 422 422 flush_tlb_mm(vma->vm_mm); 423 423 return; ··· 450 450 { 451 451 unsigned long addr; 452 452 453 - if ((end - start) > (MAX_TLBI_OPS * PAGE_SIZE)) { 453 + if ((end - start) > (MAX_DVM_OPS * PAGE_SIZE)) { 454 454 flush_tlb_all(); 455 455 return; 456 456 }
+52 -2
arch/arm64/include/asm/traps.h
··· 9 9 10 10 #include <linux/list.h> 11 11 #include <asm/esr.h> 12 + #include <asm/ptrace.h> 12 13 #include <asm/sections.h> 13 - 14 - struct pt_regs; 15 14 16 15 #ifdef CONFIG_ARMV8_DEPRECATED 17 16 bool try_emulate_armv8_deprecated(struct pt_regs *regs, u32 insn); ··· 100 101 101 102 bool arm64_is_fatal_ras_serror(struct pt_regs *regs, unsigned long esr); 102 103 void __noreturn arm64_serror_panic(struct pt_regs *regs, unsigned long esr); 104 + 105 + static inline void arm64_mops_reset_regs(struct user_pt_regs *regs, unsigned long esr) 106 + { 107 + bool wrong_option = esr & ESR_ELx_MOPS_ISS_WRONG_OPTION; 108 + bool option_a = esr & ESR_ELx_MOPS_ISS_OPTION_A; 109 + int dstreg = ESR_ELx_MOPS_ISS_DESTREG(esr); 110 + int srcreg = ESR_ELx_MOPS_ISS_SRCREG(esr); 111 + int sizereg = ESR_ELx_MOPS_ISS_SIZEREG(esr); 112 + unsigned long dst, src, size; 113 + 114 + dst = regs->regs[dstreg]; 115 + src = regs->regs[srcreg]; 116 + size = regs->regs[sizereg]; 117 + 118 + /* 119 + * Put the registers back in the original format suitable for a 120 + * prologue instruction, using the generic return routine from the 121 + * Arm ARM (DDI 0487I.a) rules CNTMJ and MWFQH. 122 + */ 123 + if (esr & ESR_ELx_MOPS_ISS_MEM_INST) { 124 + /* SET* instruction */ 125 + if (option_a ^ wrong_option) { 126 + /* Format is from Option A; forward set */ 127 + regs->regs[dstreg] = dst + size; 128 + regs->regs[sizereg] = -size; 129 + } 130 + } else { 131 + /* CPY* instruction */ 132 + if (!(option_a ^ wrong_option)) { 133 + /* Format is from Option B */ 134 + if (regs->pstate & PSR_N_BIT) { 135 + /* Backward copy */ 136 + regs->regs[dstreg] = dst - size; 137 + regs->regs[srcreg] = src - size; 138 + } 139 + } else { 140 + /* Format is from Option A */ 141 + if (size & BIT(63)) { 142 + /* Forward copy */ 143 + regs->regs[dstreg] = dst + size; 144 + regs->regs[srcreg] = src + size; 145 + regs->regs[sizereg] = -size; 146 + } 147 + } 148 + } 149 + 150 + if (esr & ESR_ELx_MOPS_ISS_FROM_EPILOGUE) 151 + regs->pc -= 8; 152 + else 153 + regs->pc -= 4; 154 + } 103 155 #endif
+32
arch/arm64/include/uapi/asm/kvm.h
··· 505 505 #define KVM_HYPERCALL_EXIT_SMC (1U << 0) 506 506 #define KVM_HYPERCALL_EXIT_16BIT (1U << 1) 507 507 508 + /* 509 + * Get feature ID registers userspace writable mask. 510 + * 511 + * From DDI0487J.a, D19.2.66 ("ID_AA64MMFR2_EL1, AArch64 Memory Model 512 + * Feature Register 2"): 513 + * 514 + * "The Feature ID space is defined as the System register space in 515 + * AArch64 with op0==3, op1=={0, 1, 3}, CRn==0, CRm=={0-7}, 516 + * op2=={0-7}." 517 + * 518 + * This covers all currently known R/O registers that indicate 519 + * anything useful feature wise, including the ID registers. 520 + * 521 + * If we ever need to introduce a new range, it will be described as 522 + * such in the range field. 523 + */ 524 + #define KVM_ARM_FEATURE_ID_RANGE_IDX(op0, op1, crn, crm, op2) \ 525 + ({ \ 526 + __u64 __op1 = (op1) & 3; \ 527 + __op1 -= (__op1 == 3); \ 528 + (__op1 << 6 | ((crm) & 7) << 3 | (op2)); \ 529 + }) 530 + 531 + #define KVM_ARM_FEATURE_ID_RANGE 0 532 + #define KVM_ARM_FEATURE_ID_RANGE_SIZE (3 * 8 * 8) 533 + 534 + struct reg_mask_range { 535 + __u64 addr; /* Pointer to mask array */ 536 + __u32 range; /* Requested range */ 537 + __u32 reserved[13]; 538 + }; 539 + 508 540 #endif 509 541 510 542 #endif /* __ARM_KVM_H__ */
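
A worked example of the index packing defined just above (values computed by hand, not taken from the series): op1 is folded from {0, 1, 3} to {0, 1, 2}, while op0 and CRn drop out because they are constant across the Feature ID space.

    #include <linux/kvm.h>   /* assumes headers that already carry this series */
    #include <stdio.h>

    int main(void)
    {
        /* ID_AA64PFR0_EL1:  op0=3, op1=0, CRn=0, CRm=4, op2=0 */
        unsigned long long pfr0  = KVM_ARM_FEATURE_ID_RANGE_IDX(3, 0, 0, 4, 0);
        /* ID_AA64SMFR0_EL1: op0=3, op1=0, CRn=0, CRm=4, op2=5 */
        unsigned long long smfr0 = KVM_ARM_FEATURE_ID_RANGE_IDX(3, 0, 0, 4, 5);
        /* ID_AA64MMFR2_EL1: op0=3, op1=0, CRn=0, CRm=7, op2=2 */
        unsigned long long mmfr2 = KVM_ARM_FEATURE_ID_RANGE_IDX(3, 0, 0, 7, 2);

        printf("%llu %llu %llu\n", pfr0, smfr0, mmfr2);   /* prints: 32 37 58 */
        return 0;
    }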
+1 -47
arch/arm64/kernel/traps.c
··· 516 516 517 517 void do_el0_mops(struct pt_regs *regs, unsigned long esr) 518 518 { 519 - bool wrong_option = esr & ESR_ELx_MOPS_ISS_WRONG_OPTION; 520 - bool option_a = esr & ESR_ELx_MOPS_ISS_OPTION_A; 521 - int dstreg = ESR_ELx_MOPS_ISS_DESTREG(esr); 522 - int srcreg = ESR_ELx_MOPS_ISS_SRCREG(esr); 523 - int sizereg = ESR_ELx_MOPS_ISS_SIZEREG(esr); 524 - unsigned long dst, src, size; 525 - 526 - dst = pt_regs_read_reg(regs, dstreg); 527 - src = pt_regs_read_reg(regs, srcreg); 528 - size = pt_regs_read_reg(regs, sizereg); 529 - 530 - /* 531 - * Put the registers back in the original format suitable for a 532 - * prologue instruction, using the generic return routine from the 533 - * Arm ARM (DDI 0487I.a) rules CNTMJ and MWFQH. 534 - */ 535 - if (esr & ESR_ELx_MOPS_ISS_MEM_INST) { 536 - /* SET* instruction */ 537 - if (option_a ^ wrong_option) { 538 - /* Format is from Option A; forward set */ 539 - pt_regs_write_reg(regs, dstreg, dst + size); 540 - pt_regs_write_reg(regs, sizereg, -size); 541 - } 542 - } else { 543 - /* CPY* instruction */ 544 - if (!(option_a ^ wrong_option)) { 545 - /* Format is from Option B */ 546 - if (regs->pstate & PSR_N_BIT) { 547 - /* Backward copy */ 548 - pt_regs_write_reg(regs, dstreg, dst - size); 549 - pt_regs_write_reg(regs, srcreg, src - size); 550 - } 551 - } else { 552 - /* Format is from Option A */ 553 - if (size & BIT(63)) { 554 - /* Forward copy */ 555 - pt_regs_write_reg(regs, dstreg, dst + size); 556 - pt_regs_write_reg(regs, srcreg, src + size); 557 - pt_regs_write_reg(regs, sizereg, -size); 558 - } 559 - } 560 - } 561 - 562 - if (esr & ESR_ELx_MOPS_ISS_FROM_EPILOGUE) 563 - regs->pc -= 8; 564 - else 565 - regs->pc -= 4; 519 + arm64_mops_reset_regs(&regs->user_regs, esr); 566 520 567 521 /* 568 522 * If single stepping then finish the step before executing the
+2 -4
arch/arm64/kvm/arch_timer.c
··· 453 453 timer_ctx->irq.level); 454 454 455 455 if (!userspace_irqchip(vcpu->kvm)) { 456 - ret = kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id, 456 + ret = kvm_vgic_inject_irq(vcpu->kvm, vcpu, 457 457 timer_irq(timer_ctx), 458 458 timer_ctx->irq.level, 459 459 timer_ctx); ··· 936 936 unmask_vtimer_irq_user(vcpu); 937 937 } 938 938 939 - int kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu) 939 + void kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu) 940 940 { 941 941 struct arch_timer_cpu *timer = vcpu_timer(vcpu); 942 942 struct timer_map map; ··· 980 980 soft_timer_cancel(&map.emul_vtimer->hrtimer); 981 981 if (map.emul_ptimer) 982 982 soft_timer_cancel(&map.emul_ptimer->hrtimer); 983 - 984 - return 0; 985 983 } 986 984 987 985 static void timer_context_init(struct kvm_vcpu *vcpu, int timerid)
+162 -38
arch/arm64/kvm/arm.c
··· 205 205 if (is_protected_kvm_enabled()) 206 206 pkvm_destroy_hyp_vm(kvm); 207 207 208 + kfree(kvm->arch.mpidr_data); 208 209 kvm_destroy_vcpus(kvm); 209 210 210 211 kvm_unshare_hyp(kvm, kvm + 1); ··· 318 317 case KVM_CAP_ARM_SUPPORTED_BLOCK_SIZES: 319 318 r = kvm_supported_block_sizes(); 320 319 break; 320 + case KVM_CAP_ARM_SUPPORTED_REG_MASK_RANGES: 321 + r = BIT(0); 322 + break; 321 323 default: 322 324 r = 0; 323 325 } ··· 371 367 372 368 /* Force users to call KVM_ARM_VCPU_INIT */ 373 369 vcpu_clear_flag(vcpu, VCPU_INITIALIZED); 374 - bitmap_zero(vcpu->arch.features, KVM_VCPU_MAX_FEATURES); 375 370 376 371 vcpu->arch.mmu_page_cache.gfp_zero = __GFP_ZERO; 377 372 ··· 441 438 * We might get preempted before the vCPU actually runs, but 442 439 * over-invalidation doesn't affect correctness. 443 440 */ 444 - if (*last_ran != vcpu->vcpu_id) { 441 + if (*last_ran != vcpu->vcpu_idx) { 445 442 kvm_call_hyp(__kvm_flush_cpu_context, mmu); 446 - *last_ran = vcpu->vcpu_id; 443 + *last_ran = vcpu->vcpu_idx; 447 444 } 448 445 449 446 vcpu->cpu = cpu; ··· 451 448 kvm_vgic_load(vcpu); 452 449 kvm_timer_vcpu_load(vcpu); 453 450 if (has_vhe()) 454 - kvm_vcpu_load_sysregs_vhe(vcpu); 451 + kvm_vcpu_load_vhe(vcpu); 455 452 kvm_arch_vcpu_load_fp(vcpu); 456 453 kvm_vcpu_pmu_restore_guest(vcpu); 457 454 if (kvm_arm_is_pvtime_enabled(&vcpu->arch)) ··· 475 472 kvm_arch_vcpu_put_debug_state_flags(vcpu); 476 473 kvm_arch_vcpu_put_fp(vcpu); 477 474 if (has_vhe()) 478 - kvm_vcpu_put_sysregs_vhe(vcpu); 475 + kvm_vcpu_put_vhe(vcpu); 479 476 kvm_timer_vcpu_put(vcpu); 480 477 kvm_vgic_put(vcpu); 481 478 kvm_vcpu_pmu_restore_host(vcpu); ··· 581 578 return vcpu_get_flag(vcpu, VCPU_INITIALIZED); 582 579 } 583 580 581 + static void kvm_init_mpidr_data(struct kvm *kvm) 582 + { 583 + struct kvm_mpidr_data *data = NULL; 584 + unsigned long c, mask, nr_entries; 585 + u64 aff_set = 0, aff_clr = ~0UL; 586 + struct kvm_vcpu *vcpu; 587 + 588 + mutex_lock(&kvm->arch.config_lock); 589 + 590 + if (kvm->arch.mpidr_data || atomic_read(&kvm->online_vcpus) == 1) 591 + goto out; 592 + 593 + kvm_for_each_vcpu(c, vcpu, kvm) { 594 + u64 aff = kvm_vcpu_get_mpidr_aff(vcpu); 595 + aff_set |= aff; 596 + aff_clr &= aff; 597 + } 598 + 599 + /* 600 + * A significant bit can be either 0 or 1, and will only appear in 601 + * aff_set. Use aff_clr to weed out the useless stuff. 602 + */ 603 + mask = aff_set ^ aff_clr; 604 + nr_entries = BIT_ULL(hweight_long(mask)); 605 + 606 + /* 607 + * Don't let userspace fool us. If we need more than a single page 608 + * to describe the compressed MPIDR array, just fall back to the 609 + * iterative method. Single vcpu VMs do not need this either. 
610 + */ 611 + if (struct_size(data, cmpidr_to_idx, nr_entries) <= PAGE_SIZE) 612 + data = kzalloc(struct_size(data, cmpidr_to_idx, nr_entries), 613 + GFP_KERNEL_ACCOUNT); 614 + 615 + if (!data) 616 + goto out; 617 + 618 + data->mpidr_mask = mask; 619 + 620 + kvm_for_each_vcpu(c, vcpu, kvm) { 621 + u64 aff = kvm_vcpu_get_mpidr_aff(vcpu); 622 + u16 index = kvm_mpidr_index(data, aff); 623 + 624 + data->cmpidr_to_idx[index] = c; 625 + } 626 + 627 + kvm->arch.mpidr_data = data; 628 + out: 629 + mutex_unlock(&kvm->arch.config_lock); 630 + } 631 + 584 632 /* 585 633 * Handle both the initialisation that is being done when the vcpu is 586 634 * run for the first time, as well as the updates that must be ··· 654 600 655 601 if (likely(vcpu_has_run_once(vcpu))) 656 602 return 0; 603 + 604 + kvm_init_mpidr_data(kvm); 657 605 658 606 kvm_arm_vcpu_init_debug(vcpu); 659 607 ··· 857 801 } 858 802 859 803 if (kvm_check_request(KVM_REQ_RELOAD_PMU, vcpu)) 860 - kvm_pmu_handle_pmcr(vcpu, 861 - __vcpu_sys_reg(vcpu, PMCR_EL0)); 804 + kvm_vcpu_reload_pmu(vcpu); 862 805 863 806 if (kvm_check_request(KVM_REQ_RESYNC_PMU_EL0, vcpu)) 864 807 kvm_vcpu_pmu_restore_guest(vcpu); ··· 1005 950 * making a thread's VMID inactive. So we need to call 1006 951 * kvm_arm_vmid_update() in non-premptible context. 1007 952 */ 1008 - kvm_arm_vmid_update(&vcpu->arch.hw_mmu->vmid); 953 + if (kvm_arm_vmid_update(&vcpu->arch.hw_mmu->vmid) && 954 + has_vhe()) 955 + __load_stage2(vcpu->arch.hw_mmu, 956 + vcpu->arch.hw_mmu->arch); 1009 957 1010 958 kvm_pmu_flush_hwstate(vcpu); 1011 959 ··· 1192 1134 bool line_status) 1193 1135 { 1194 1136 u32 irq = irq_level->irq; 1195 - unsigned int irq_type, vcpu_idx, irq_num; 1196 - int nrcpus = atomic_read(&kvm->online_vcpus); 1137 + unsigned int irq_type, vcpu_id, irq_num; 1197 1138 struct kvm_vcpu *vcpu = NULL; 1198 1139 bool level = irq_level->level; 1199 1140 1200 1141 irq_type = (irq >> KVM_ARM_IRQ_TYPE_SHIFT) & KVM_ARM_IRQ_TYPE_MASK; 1201 - vcpu_idx = (irq >> KVM_ARM_IRQ_VCPU_SHIFT) & KVM_ARM_IRQ_VCPU_MASK; 1202 - vcpu_idx += ((irq >> KVM_ARM_IRQ_VCPU2_SHIFT) & KVM_ARM_IRQ_VCPU2_MASK) * (KVM_ARM_IRQ_VCPU_MASK + 1); 1142 + vcpu_id = (irq >> KVM_ARM_IRQ_VCPU_SHIFT) & KVM_ARM_IRQ_VCPU_MASK; 1143 + vcpu_id += ((irq >> KVM_ARM_IRQ_VCPU2_SHIFT) & KVM_ARM_IRQ_VCPU2_MASK) * (KVM_ARM_IRQ_VCPU_MASK + 1); 1203 1144 irq_num = (irq >> KVM_ARM_IRQ_NUM_SHIFT) & KVM_ARM_IRQ_NUM_MASK; 1204 1145 1205 - trace_kvm_irq_line(irq_type, vcpu_idx, irq_num, irq_level->level); 1146 + trace_kvm_irq_line(irq_type, vcpu_id, irq_num, irq_level->level); 1206 1147 1207 1148 switch (irq_type) { 1208 1149 case KVM_ARM_IRQ_TYPE_CPU: 1209 1150 if (irqchip_in_kernel(kvm)) 1210 1151 return -ENXIO; 1211 1152 1212 - if (vcpu_idx >= nrcpus) 1213 - return -EINVAL; 1214 - 1215 - vcpu = kvm_get_vcpu(kvm, vcpu_idx); 1153 + vcpu = kvm_get_vcpu_by_id(kvm, vcpu_id); 1216 1154 if (!vcpu) 1217 1155 return -EINVAL; 1218 1156 ··· 1220 1166 if (!irqchip_in_kernel(kvm)) 1221 1167 return -ENXIO; 1222 1168 1223 - if (vcpu_idx >= nrcpus) 1224 - return -EINVAL; 1225 - 1226 - vcpu = kvm_get_vcpu(kvm, vcpu_idx); 1169 + vcpu = kvm_get_vcpu_by_id(kvm, vcpu_id); 1227 1170 if (!vcpu) 1228 1171 return -EINVAL; 1229 1172 1230 1173 if (irq_num < VGIC_NR_SGIS || irq_num >= VGIC_NR_PRIVATE_IRQS) 1231 1174 return -EINVAL; 1232 1175 1233 - return kvm_vgic_inject_irq(kvm, vcpu->vcpu_id, irq_num, level, NULL); 1176 + return kvm_vgic_inject_irq(kvm, vcpu, irq_num, level, NULL); 1234 1177 case KVM_ARM_IRQ_TYPE_SPI: 1235 1178 if (!irqchip_in_kernel(kvm)) 1236 1179 
return -ENXIO; ··· 1235 1184 if (irq_num < VGIC_NR_PRIVATE_IRQS) 1236 1185 return -EINVAL; 1237 1186 1238 - return kvm_vgic_inject_irq(kvm, 0, irq_num, level, NULL); 1187 + return kvm_vgic_inject_irq(kvm, NULL, irq_num, level, NULL); 1239 1188 } 1240 1189 1241 1190 return -EINVAL; 1191 + } 1192 + 1193 + static unsigned long system_supported_vcpu_features(void) 1194 + { 1195 + unsigned long features = KVM_VCPU_VALID_FEATURES; 1196 + 1197 + if (!cpus_have_final_cap(ARM64_HAS_32BIT_EL1)) 1198 + clear_bit(KVM_ARM_VCPU_EL1_32BIT, &features); 1199 + 1200 + if (!kvm_arm_support_pmu_v3()) 1201 + clear_bit(KVM_ARM_VCPU_PMU_V3, &features); 1202 + 1203 + if (!system_supports_sve()) 1204 + clear_bit(KVM_ARM_VCPU_SVE, &features); 1205 + 1206 + if (!system_has_full_ptr_auth()) { 1207 + clear_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, &features); 1208 + clear_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, &features); 1209 + } 1210 + 1211 + if (!cpus_have_final_cap(ARM64_HAS_NESTED_VIRT)) 1212 + clear_bit(KVM_ARM_VCPU_HAS_EL2, &features); 1213 + 1214 + return features; 1242 1215 } 1243 1216 1244 1217 static int kvm_vcpu_init_check_features(struct kvm_vcpu *vcpu, ··· 1279 1204 return -ENOENT; 1280 1205 } 1281 1206 1207 + if (features & ~system_supported_vcpu_features()) 1208 + return -EINVAL; 1209 + 1210 + /* 1211 + * For now make sure that both address/generic pointer authentication 1212 + * features are requested by the userspace together. 1213 + */ 1214 + if (test_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, &features) != 1215 + test_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, &features)) 1216 + return -EINVAL; 1217 + 1218 + /* Disallow NV+SVE for the time being */ 1219 + if (test_bit(KVM_ARM_VCPU_HAS_EL2, &features) && 1220 + test_bit(KVM_ARM_VCPU_SVE, &features)) 1221 + return -EINVAL; 1222 + 1282 1223 if (!test_bit(KVM_ARM_VCPU_EL1_32BIT, &features)) 1283 1224 return 0; 1284 - 1285 - if (!cpus_have_final_cap(ARM64_HAS_32BIT_EL1)) 1286 - return -EINVAL; 1287 1225 1288 1226 /* MTE is incompatible with AArch32 */ 1289 1227 if (kvm_has_mte(vcpu->kvm)) ··· 1314 1226 { 1315 1227 unsigned long features = init->features[0]; 1316 1228 1317 - return !bitmap_equal(vcpu->arch.features, &features, KVM_VCPU_MAX_FEATURES); 1229 + return !bitmap_equal(vcpu->kvm->arch.vcpu_features, &features, 1230 + KVM_VCPU_MAX_FEATURES); 1231 + } 1232 + 1233 + static int kvm_setup_vcpu(struct kvm_vcpu *vcpu) 1234 + { 1235 + struct kvm *kvm = vcpu->kvm; 1236 + int ret = 0; 1237 + 1238 + /* 1239 + * When the vCPU has a PMU, but no PMU is set for the guest 1240 + * yet, set the default one. 1241 + */ 1242 + if (kvm_vcpu_has_pmu(vcpu) && !kvm->arch.arm_pmu) 1243 + ret = kvm_arm_set_default_pmu(kvm); 1244 + 1245 + return ret; 1318 1246 } 1319 1247 1320 1248 static int __kvm_vcpu_set_target(struct kvm_vcpu *vcpu, ··· 1343 1239 mutex_lock(&kvm->arch.config_lock); 1344 1240 1345 1241 if (test_bit(KVM_ARCH_FLAG_VCPU_FEATURES_CONFIGURED, &kvm->arch.flags) && 1346 - !bitmap_equal(kvm->arch.vcpu_features, &features, KVM_VCPU_MAX_FEATURES)) 1242 + kvm_vcpu_init_changed(vcpu, init)) 1347 1243 goto out_unlock; 1348 - 1349 - bitmap_copy(vcpu->arch.features, &features, KVM_VCPU_MAX_FEATURES); 1350 - 1351 - /* Now we know what it is, we can reset it. 
*/ 1352 - ret = kvm_reset_vcpu(vcpu); 1353 - if (ret) { 1354 - bitmap_zero(vcpu->arch.features, KVM_VCPU_MAX_FEATURES); 1355 - goto out_unlock; 1356 - } 1357 1244 1358 1245 bitmap_copy(kvm->arch.vcpu_features, &features, KVM_VCPU_MAX_FEATURES); 1246 + 1247 + ret = kvm_setup_vcpu(vcpu); 1248 + if (ret) 1249 + goto out_unlock; 1250 + 1251 + /* Now we know what it is, we can reset it. */ 1252 + kvm_reset_vcpu(vcpu); 1253 + 1359 1254 set_bit(KVM_ARCH_FLAG_VCPU_FEATURES_CONFIGURED, &kvm->arch.flags); 1360 1255 vcpu_set_flag(vcpu, VCPU_INITIALIZED); 1256 + ret = 0; 1361 1257 out_unlock: 1362 1258 mutex_unlock(&kvm->arch.config_lock); 1363 1259 return ret; ··· 1382 1278 if (kvm_vcpu_init_changed(vcpu, init)) 1383 1279 return -EINVAL; 1384 1280 1385 - return kvm_reset_vcpu(vcpu); 1281 + kvm_reset_vcpu(vcpu); 1282 + return 0; 1386 1283 } 1387 1284 1388 1285 static int kvm_arch_vcpu_ioctl_vcpu_init(struct kvm_vcpu *vcpu, ··· 1733 1628 return -EFAULT; 1734 1629 1735 1630 return kvm_vm_set_attr(kvm, &attr); 1631 + } 1632 + case KVM_ARM_GET_REG_WRITABLE_MASKS: { 1633 + struct reg_mask_range range; 1634 + 1635 + if (copy_from_user(&range, argp, sizeof(range))) 1636 + return -EFAULT; 1637 + return kvm_vm_ioctl_get_reg_writable_masks(kvm, &range); 1736 1638 } 1737 1639 default: 1738 1640 return -EINVAL; ··· 2453 2341 unsigned long i; 2454 2342 2455 2343 mpidr &= MPIDR_HWID_BITMASK; 2344 + 2345 + if (kvm->arch.mpidr_data) { 2346 + u16 idx = kvm_mpidr_index(kvm->arch.mpidr_data, mpidr); 2347 + 2348 + vcpu = kvm_get_vcpu(kvm, 2349 + kvm->arch.mpidr_data->cmpidr_to_idx[idx]); 2350 + if (mpidr != kvm_vcpu_get_mpidr_aff(vcpu)) 2351 + vcpu = NULL; 2352 + 2353 + return vcpu; 2354 + } 2355 + 2456 2356 kvm_for_each_vcpu(i, vcpu, kvm) { 2457 2357 if (mpidr == kvm_vcpu_get_mpidr_aff(vcpu)) 2458 2358 return vcpu;
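Note on the arm.c hunks: kvm_init_mpidr_data() derives mask = aff_set ^ aff_clr (the affinity bits that actually differ between vCPUs) and sizes the table as 2^popcount(mask), so the kvm_mpidr_to_vcpu() fast path further up can resolve an MPIDR with a single table lookup instead of walking every vCPU. A self-contained sketch of the bit-packing idea follows; the in-tree helper is kvm_mpidr_index(), and the loop below is only an illustration of the scheme, not a copy of that helper.

#include <stdint.h>
#include <stdio.h>

/* Pack the bits of @aff selected by @mask into a dense index. */
static unsigned int mpidr_index(uint64_t mask, uint64_t aff)
{
	unsigned int index = 0;
	int out = 0;

	for (int bit = 0; bit < 64; bit++) {
		if (!(mask & (1ULL << bit)))
			continue;
		if (aff & (1ULL << bit))
			index |= 1U << out;
		out++;
	}
	return index;
}

int main(void)
{
	/* Two clusters of two CPUs: Aff1 = {0,1}, Aff0 = {0,1}. */
	uint64_t mpidrs[] = { 0x000, 0x001, 0x100, 0x101 };
	uint64_t aff_set = 0, aff_clr = ~0ULL, mask;

	for (int i = 0; i < 4; i++) {
		aff_set |= mpidrs[i];
		aff_clr &= mpidrs[i];
	}
	mask = aff_set ^ aff_clr;	/* bits that actually vary between vCPUs */

	/* Four distinct MPIDRs map to the four slots of a 2^2-entry table. */
	for (int i = 0; i < 4; i++)
		printf("mpidr %#llx -> index %u\n",
		       (unsigned long long)mpidrs[i],
		       mpidr_index(mask, mpidrs[i]));
	return 0;
}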
+71 -6
arch/arm64/kvm/emulate-nested.c
··· 648 648 SR_TRAP(SYS_APGAKEYLO_EL1, CGT_HCR_APK), 649 649 SR_TRAP(SYS_APGAKEYHI_EL1, CGT_HCR_APK), 650 650 /* All _EL2 registers */ 651 - SR_RANGE_TRAP(sys_reg(3, 4, 0, 0, 0), 652 - sys_reg(3, 4, 3, 15, 7), CGT_HCR_NV), 651 + SR_TRAP(SYS_BRBCR_EL2, CGT_HCR_NV), 652 + SR_TRAP(SYS_VPIDR_EL2, CGT_HCR_NV), 653 + SR_TRAP(SYS_VMPIDR_EL2, CGT_HCR_NV), 654 + SR_TRAP(SYS_SCTLR_EL2, CGT_HCR_NV), 655 + SR_TRAP(SYS_ACTLR_EL2, CGT_HCR_NV), 656 + SR_TRAP(SYS_SCTLR2_EL2, CGT_HCR_NV), 657 + SR_RANGE_TRAP(SYS_HCR_EL2, 658 + SYS_HCRX_EL2, CGT_HCR_NV), 659 + SR_TRAP(SYS_SMPRIMAP_EL2, CGT_HCR_NV), 660 + SR_TRAP(SYS_SMCR_EL2, CGT_HCR_NV), 661 + SR_RANGE_TRAP(SYS_TTBR0_EL2, 662 + SYS_TCR2_EL2, CGT_HCR_NV), 663 + SR_TRAP(SYS_VTTBR_EL2, CGT_HCR_NV), 664 + SR_TRAP(SYS_VTCR_EL2, CGT_HCR_NV), 665 + SR_TRAP(SYS_VNCR_EL2, CGT_HCR_NV), 666 + SR_RANGE_TRAP(SYS_HDFGRTR_EL2, 667 + SYS_HAFGRTR_EL2, CGT_HCR_NV), 653 668 /* Skip the SP_EL1 encoding... */ 654 669 SR_TRAP(SYS_SPSR_EL2, CGT_HCR_NV), 655 670 SR_TRAP(SYS_ELR_EL2, CGT_HCR_NV), 656 - SR_RANGE_TRAP(sys_reg(3, 4, 4, 1, 1), 657 - sys_reg(3, 4, 10, 15, 7), CGT_HCR_NV), 658 - SR_RANGE_TRAP(sys_reg(3, 4, 12, 0, 0), 659 - sys_reg(3, 4, 14, 15, 7), CGT_HCR_NV), 671 + /* Skip SPSR_irq, SPSR_abt, SPSR_und, SPSR_fiq */ 672 + SR_TRAP(SYS_AFSR0_EL2, CGT_HCR_NV), 673 + SR_TRAP(SYS_AFSR1_EL2, CGT_HCR_NV), 674 + SR_TRAP(SYS_ESR_EL2, CGT_HCR_NV), 675 + SR_TRAP(SYS_VSESR_EL2, CGT_HCR_NV), 676 + SR_TRAP(SYS_TFSR_EL2, CGT_HCR_NV), 677 + SR_TRAP(SYS_FAR_EL2, CGT_HCR_NV), 678 + SR_TRAP(SYS_HPFAR_EL2, CGT_HCR_NV), 679 + SR_TRAP(SYS_PMSCR_EL2, CGT_HCR_NV), 680 + SR_TRAP(SYS_MAIR_EL2, CGT_HCR_NV), 681 + SR_TRAP(SYS_AMAIR_EL2, CGT_HCR_NV), 682 + SR_TRAP(SYS_MPAMHCR_EL2, CGT_HCR_NV), 683 + SR_TRAP(SYS_MPAMVPMV_EL2, CGT_HCR_NV), 684 + SR_TRAP(SYS_MPAM2_EL2, CGT_HCR_NV), 685 + SR_RANGE_TRAP(SYS_MPAMVPM0_EL2, 686 + SYS_MPAMVPM7_EL2, CGT_HCR_NV), 687 + /* 688 + * Note that the spec. describes a group of MEC registers 689 + * whose access should not trap, therefore skip the following: 690 + * MECID_A0_EL2, MECID_A1_EL2, MECID_P0_EL2, 691 + * MECID_P1_EL2, MECIDR_EL2, VMECID_A_EL2, 692 + * VMECID_P_EL2. 693 + */ 694 + SR_RANGE_TRAP(SYS_VBAR_EL2, 695 + SYS_RMR_EL2, CGT_HCR_NV), 696 + SR_TRAP(SYS_VDISR_EL2, CGT_HCR_NV), 697 + /* ICH_AP0R<m>_EL2 */ 698 + SR_RANGE_TRAP(SYS_ICH_AP0R0_EL2, 699 + SYS_ICH_AP0R3_EL2, CGT_HCR_NV), 700 + /* ICH_AP1R<m>_EL2 */ 701 + SR_RANGE_TRAP(SYS_ICH_AP1R0_EL2, 702 + SYS_ICH_AP1R3_EL2, CGT_HCR_NV), 703 + SR_TRAP(SYS_ICC_SRE_EL2, CGT_HCR_NV), 704 + SR_RANGE_TRAP(SYS_ICH_HCR_EL2, 705 + SYS_ICH_EISR_EL2, CGT_HCR_NV), 706 + SR_TRAP(SYS_ICH_ELRSR_EL2, CGT_HCR_NV), 707 + SR_TRAP(SYS_ICH_VMCR_EL2, CGT_HCR_NV), 708 + /* ICH_LR<m>_EL2 */ 709 + SR_RANGE_TRAP(SYS_ICH_LR0_EL2, 710 + SYS_ICH_LR15_EL2, CGT_HCR_NV), 711 + SR_TRAP(SYS_CONTEXTIDR_EL2, CGT_HCR_NV), 712 + SR_TRAP(SYS_TPIDR_EL2, CGT_HCR_NV), 713 + SR_TRAP(SYS_SCXTNUM_EL2, CGT_HCR_NV), 714 + /* AMEVCNTVOFF0<n>_EL2, AMEVCNTVOFF1<n>_EL2 */ 715 + SR_RANGE_TRAP(SYS_AMEVCNTVOFF0n_EL2(0), 716 + SYS_AMEVCNTVOFF1n_EL2(15), CGT_HCR_NV), 717 + /* CNT*_EL2 */ 718 + SR_TRAP(SYS_CNTVOFF_EL2, CGT_HCR_NV), 719 + SR_TRAP(SYS_CNTPOFF_EL2, CGT_HCR_NV), 720 + SR_TRAP(SYS_CNTHCTL_EL2, CGT_HCR_NV), 721 + SR_RANGE_TRAP(SYS_CNTHP_TVAL_EL2, 722 + SYS_CNTHP_CVAL_EL2, CGT_HCR_NV), 723 + SR_RANGE_TRAP(SYS_CNTHV_TVAL_EL2, 724 + SYS_CNTHV_CVAL_EL2, CGT_HCR_NV), 660 725 /* All _EL02, _EL12 registers */ 661 726 SR_RANGE_TRAP(sys_reg(3, 5, 0, 0, 0), 662 727 sys_reg(3, 5, 10, 15, 7), CGT_HCR_NV),
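The emulate-nested.c hunk trades the old blanket sys_reg(3, 4, ...) ranges for explicitly named SR_TRAP()/SR_RANGE_TRAP() entries, which lets the table skip encodings that must not trap (the MEC registers called out in the comment). For readers decoding the raw sys_reg() values in the removed lines, here is a small sketch of the usual arm64 packing; the shift values mirror my reading of the kernel's sysreg.h and should be treated as an assumption.

#include <stdint.h>
#include <stdio.h>

/* Same field layout as the kernel's sys_reg() macro (Op0/Op1/CRn/CRm/Op2). */
static uint32_t sys_reg(int op0, int op1, int crn, int crm, int op2)
{
	return (op0 << 19) | (op1 << 16) | (crn << 12) | (crm << 8) | (op2 << 5);
}

int main(void)
{
	/* Op1 == 4 selects the _EL2 encodings trapped by HCR_EL2.NV above. */
	printf("old range start: %#x\n", sys_reg(3, 4, 0, 0, 0));
	printf("old range end:   %#x\n", sys_reg(3, 4, 3, 15, 7));
	return 0;
}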
+17
arch/arm64/kvm/hyp/include/hyp/switch.h
··· 30 30 #include <asm/fpsimd.h> 31 31 #include <asm/debug-monitors.h> 32 32 #include <asm/processor.h> 33 + #include <asm/traps.h> 33 34 34 35 struct kvm_exception_table_entry { 35 36 int insn, fixup; ··· 264 263 static inline bool __populate_fault_info(struct kvm_vcpu *vcpu) 265 264 { 266 265 return __get_fault_info(vcpu->arch.fault.esr_el2, &vcpu->arch.fault); 266 + } 267 + 268 + static bool kvm_hyp_handle_mops(struct kvm_vcpu *vcpu, u64 *exit_code) 269 + { 270 + *vcpu_pc(vcpu) = read_sysreg_el2(SYS_ELR); 271 + arm64_mops_reset_regs(vcpu_gp_regs(vcpu), vcpu->arch.fault.esr_el2); 272 + write_sysreg_el2(*vcpu_pc(vcpu), SYS_ELR); 273 + 274 + /* 275 + * Finish potential single step before executing the prologue 276 + * instruction. 277 + */ 278 + *vcpu_cpsr(vcpu) &= ~DBG_SPSR_SS; 279 + write_sysreg_el2(*vcpu_cpsr(vcpu), SYS_SPSR); 280 + 281 + return true; 267 282 } 268 283 269 284 static inline void __hyp_sve_restore_guest(struct kvm_vcpu *vcpu)
+2 -1
arch/arm64/kvm/hyp/include/nvhe/fixed_config.h
··· 197 197 198 198 #define PVM_ID_AA64ISAR2_ALLOW (\ 199 199 ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_GPA3) | \ 200 - ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_APA3) \ 200 + ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_APA3) | \ 201 + ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_MOPS) \ 201 202 ) 202 203 203 204 u64 pvm_read_id_reg(const struct kvm_vcpu *vcpu, u32 id);
+4 -4
arch/arm64/kvm/hyp/nvhe/mem_protect.c
··· 129 129 parange = kvm_get_parange(id_aa64mmfr0_el1_sys_val); 130 130 phys_shift = id_aa64mmfr0_parange_to_phys_shift(parange); 131 131 132 - host_mmu.arch.vtcr = kvm_get_vtcr(id_aa64mmfr0_el1_sys_val, 133 - id_aa64mmfr1_el1_sys_val, phys_shift); 132 + host_mmu.arch.mmu.vtcr = kvm_get_vtcr(id_aa64mmfr0_el1_sys_val, 133 + id_aa64mmfr1_el1_sys_val, phys_shift); 134 134 } 135 135 136 136 static bool host_stage2_force_pte_cb(u64 addr, u64 end, enum kvm_pgtable_prot prot); ··· 235 235 unsigned long nr_pages; 236 236 int ret; 237 237 238 - nr_pages = kvm_pgtable_stage2_pgd_size(vm->kvm.arch.vtcr) >> PAGE_SHIFT; 238 + nr_pages = kvm_pgtable_stage2_pgd_size(mmu->vtcr) >> PAGE_SHIFT; 239 239 ret = hyp_pool_init(&vm->pool, hyp_virt_to_pfn(pgd), nr_pages, 0); 240 240 if (ret) 241 241 return ret; ··· 295 295 return -EPERM; 296 296 297 297 params->vttbr = kvm_get_vttbr(mmu); 298 - params->vtcr = host_mmu.arch.vtcr; 298 + params->vtcr = mmu->vtcr; 299 299 params->hcr_el2 |= HCR_VM; 300 300 301 301 /*
+2 -2
arch/arm64/kvm/hyp/nvhe/pkvm.c
··· 303 303 { 304 304 hyp_vm->host_kvm = host_kvm; 305 305 hyp_vm->kvm.created_vcpus = nr_vcpus; 306 - hyp_vm->kvm.arch.vtcr = host_mmu.arch.vtcr; 306 + hyp_vm->kvm.arch.mmu.vtcr = host_mmu.arch.mmu.vtcr; 307 307 } 308 308 309 309 static int init_pkvm_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu, ··· 483 483 } 484 484 485 485 vm_size = pkvm_get_hyp_vm_size(nr_vcpus); 486 - pgd_size = kvm_pgtable_stage2_pgd_size(host_mmu.arch.vtcr); 486 + pgd_size = kvm_pgtable_stage2_pgd_size(host_mmu.arch.mmu.vtcr); 487 487 488 488 ret = -ENOMEM; 489 489
+2
arch/arm64/kvm/hyp/nvhe/switch.c
··· 192 192 [ESR_ELx_EC_DABT_LOW] = kvm_hyp_handle_dabt_low, 193 193 [ESR_ELx_EC_WATCHPT_LOW] = kvm_hyp_handle_watchpt_low, 194 194 [ESR_ELx_EC_PAC] = kvm_hyp_handle_ptrauth, 195 + [ESR_ELx_EC_MOPS] = kvm_hyp_handle_mops, 195 196 }; 196 197 197 198 static const exit_handler_fn pvm_exit_handlers[] = { ··· 204 203 [ESR_ELx_EC_DABT_LOW] = kvm_hyp_handle_dabt_low, 205 204 [ESR_ELx_EC_WATCHPT_LOW] = kvm_hyp_handle_watchpt_low, 206 205 [ESR_ELx_EC_PAC] = kvm_hyp_handle_ptrauth, 206 + [ESR_ELx_EC_MOPS] = kvm_hyp_handle_mops, 207 207 }; 208 208 209 209 static const exit_handler_fn *kvm_get_exit_handler_array(struct kvm_vcpu *vcpu)
+2 -2
arch/arm64/kvm/hyp/pgtable.c
··· 1314 1314 ret = stage2_update_leaf_attrs(pgt, addr, 1, set, clr, NULL, &level, 1315 1315 KVM_PGTABLE_WALK_HANDLE_FAULT | 1316 1316 KVM_PGTABLE_WALK_SHARED); 1317 - if (!ret) 1317 + if (!ret || ret == -EAGAIN) 1318 1318 kvm_call_hyp(__kvm_tlb_flush_vmid_ipa_nsh, pgt->mmu, addr, level); 1319 1319 return ret; 1320 1320 } ··· 1511 1511 kvm_pgtable_force_pte_cb_t force_pte_cb) 1512 1512 { 1513 1513 size_t pgd_sz; 1514 - u64 vtcr = mmu->arch->vtcr; 1514 + u64 vtcr = mmu->vtcr; 1515 1515 u32 ia_bits = VTCR_EL2_IPA(vtcr); 1516 1516 u32 sl0 = FIELD_GET(VTCR_EL2_SL0_MASK, vtcr); 1517 1517 u32 start_level = VTCR_EL2_TGRAN_SL0_BASE - sl0;
+21 -13
arch/arm64/kvm/hyp/vhe/switch.c
··· 137 137 NOKPROBE_SYMBOL(__deactivate_traps); 138 138 139 139 /* 140 - * Disable IRQs in {activate,deactivate}_traps_vhe_{load,put}() to 140 + * Disable IRQs in __vcpu_{load,put}_{activate,deactivate}_traps() to 141 141 * prevent a race condition between context switching of PMUSERENR_EL0 142 142 * in __{activate,deactivate}_traps_common() and IPIs that attempts to 143 143 * update PMUSERENR_EL0. See also kvm_set_pmuserenr(). 144 144 */ 145 - void activate_traps_vhe_load(struct kvm_vcpu *vcpu) 145 + static void __vcpu_load_activate_traps(struct kvm_vcpu *vcpu) 146 146 { 147 147 unsigned long flags; 148 148 ··· 151 151 local_irq_restore(flags); 152 152 } 153 153 154 - void deactivate_traps_vhe_put(struct kvm_vcpu *vcpu) 154 + static void __vcpu_put_deactivate_traps(struct kvm_vcpu *vcpu) 155 155 { 156 156 unsigned long flags; 157 157 158 158 local_irq_save(flags); 159 159 __deactivate_traps_common(vcpu); 160 160 local_irq_restore(flags); 161 + } 162 + 163 + void kvm_vcpu_load_vhe(struct kvm_vcpu *vcpu) 164 + { 165 + __vcpu_load_switch_sysregs(vcpu); 166 + __vcpu_load_activate_traps(vcpu); 167 + __load_stage2(vcpu->arch.hw_mmu, vcpu->arch.hw_mmu->arch); 168 + } 169 + 170 + void kvm_vcpu_put_vhe(struct kvm_vcpu *vcpu) 171 + { 172 + __vcpu_put_deactivate_traps(vcpu); 173 + __vcpu_put_switch_sysregs(vcpu); 161 174 } 162 175 163 176 static const exit_handler_fn hyp_exit_handlers[] = { ··· 183 170 [ESR_ELx_EC_DABT_LOW] = kvm_hyp_handle_dabt_low, 184 171 [ESR_ELx_EC_WATCHPT_LOW] = kvm_hyp_handle_watchpt_low, 185 172 [ESR_ELx_EC_PAC] = kvm_hyp_handle_ptrauth, 173 + [ESR_ELx_EC_MOPS] = kvm_hyp_handle_mops, 186 174 }; 187 175 188 176 static const exit_handler_fn *kvm_get_exit_handler_array(struct kvm_vcpu *vcpu) ··· 228 214 sysreg_save_host_state_vhe(host_ctxt); 229 215 230 216 /* 231 - * ARM erratum 1165522 requires us to configure both stage 1 and 232 - * stage 2 translation for the guest context before we clear 233 - * HCR_EL2.TGE. 234 - * 235 - * We have already configured the guest's stage 1 translation in 236 - * kvm_vcpu_load_sysregs_vhe above. We must now call 237 - * __load_stage2 before __activate_traps, because 238 - * __load_stage2 configures stage 2 translation, and 239 - * __activate_traps clear HCR_EL2.TGE (among other things). 217 + * Note that ARM erratum 1165522 requires us to configure both stage 1 218 + * and stage 2 translation for the guest context before we clear 219 + * HCR_EL2.TGE. The stage 1 and stage 2 guest context has already been 220 + * loaded on the CPU in kvm_vcpu_load_vhe(). 240 221 */ 241 - __load_stage2(vcpu->arch.hw_mmu, vcpu->arch.hw_mmu->arch); 242 222 __activate_traps(vcpu); 243 223 244 224 __kvm_adjust_pc(vcpu);
+4 -7
arch/arm64/kvm/hyp/vhe/sysreg-sr.c
··· 52 52 NOKPROBE_SYMBOL(sysreg_restore_guest_state_vhe); 53 53 54 54 /** 55 - * kvm_vcpu_load_sysregs_vhe - Load guest system registers to the physical CPU 55 + * __vcpu_load_switch_sysregs - Load guest system registers to the physical CPU 56 56 * 57 57 * @vcpu: The VCPU pointer 58 58 * ··· 62 62 * and loading system register state early avoids having to load them on 63 63 * every entry to the VM. 64 64 */ 65 - void kvm_vcpu_load_sysregs_vhe(struct kvm_vcpu *vcpu) 65 + void __vcpu_load_switch_sysregs(struct kvm_vcpu *vcpu) 66 66 { 67 67 struct kvm_cpu_context *guest_ctxt = &vcpu->arch.ctxt; 68 68 struct kvm_cpu_context *host_ctxt; ··· 92 92 __sysreg_restore_el1_state(guest_ctxt); 93 93 94 94 vcpu_set_flag(vcpu, SYSREGS_ON_CPU); 95 - 96 - activate_traps_vhe_load(vcpu); 97 95 } 98 96 99 97 /** 100 - * kvm_vcpu_put_sysregs_vhe - Restore host system registers to the physical CPU 98 + * __vcpu_put_switch_syregs - Restore host system registers to the physical CPU 101 99 * 102 100 * @vcpu: The VCPU pointer 103 101 * ··· 105 107 * and deferring saving system register state until we're no longer running the 106 108 * VCPU avoids having to save them on every exit from the VM. 107 109 */ 108 - void kvm_vcpu_put_sysregs_vhe(struct kvm_vcpu *vcpu) 110 + void __vcpu_put_switch_sysregs(struct kvm_vcpu *vcpu) 109 111 { 110 112 struct kvm_cpu_context *guest_ctxt = &vcpu->arch.ctxt; 111 113 struct kvm_cpu_context *host_ctxt; 112 114 113 115 host_ctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt; 114 - deactivate_traps_vhe_put(vcpu); 115 116 116 117 __sysreg_save_el1_state(guest_ctxt); 117 118 __sysreg_save_user_state(guest_ctxt);
+14 -4
arch/arm64/kvm/hyp/vhe/tlb.c
··· 11 11 #include <asm/tlbflush.h> 12 12 13 13 struct tlb_inv_context { 14 - unsigned long flags; 15 - u64 tcr; 16 - u64 sctlr; 14 + struct kvm_s2_mmu *mmu; 15 + unsigned long flags; 16 + u64 tcr; 17 + u64 sctlr; 17 18 }; 18 19 19 20 static void __tlb_switch_to_guest(struct kvm_s2_mmu *mmu, 20 21 struct tlb_inv_context *cxt) 21 22 { 23 + struct kvm_vcpu *vcpu = kvm_get_running_vcpu(); 22 24 u64 val; 23 25 24 26 local_irq_save(cxt->flags); 27 + 28 + if (vcpu && mmu != vcpu->arch.hw_mmu) 29 + cxt->mmu = vcpu->arch.hw_mmu; 30 + else 31 + cxt->mmu = NULL; 25 32 26 33 if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) { 27 34 /* ··· 73 66 * We're done with the TLB operation, let's restore the host's 74 67 * view of HCR_EL2. 75 68 */ 76 - write_sysreg(0, vttbr_el2); 77 69 write_sysreg(HCR_HOST_VHE_FLAGS, hcr_el2); 78 70 isb(); 71 + 72 + /* ... and the stage-2 MMU context that we switched away from */ 73 + if (cxt->mmu) 74 + __load_stage2(cxt->mmu, cxt->mmu->arch); 79 75 80 76 if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) { 81 77 /* Restore the registers to what they were */
+23 -13
arch/arm64/kvm/hypercalls.c
··· 133 133 ARM_SMCCC_SMC_64, \ 134 134 0, ARM_SMCCC_FUNC_MASK) 135 135 136 - static void init_smccc_filter(struct kvm *kvm) 136 + static int kvm_smccc_filter_insert_reserved(struct kvm *kvm) 137 137 { 138 138 int r; 139 - 140 - mt_init(&kvm->arch.smccc_filter); 141 139 142 140 /* 143 141 * Prevent userspace from handling any SMCCC calls in the architecture ··· 146 148 SMC32_ARCH_RANGE_BEGIN, SMC32_ARCH_RANGE_END, 147 149 xa_mk_value(KVM_SMCCC_FILTER_HANDLE), 148 150 GFP_KERNEL_ACCOUNT); 149 - WARN_ON_ONCE(r); 151 + if (r) 152 + goto out_destroy; 150 153 151 154 r = mtree_insert_range(&kvm->arch.smccc_filter, 152 155 SMC64_ARCH_RANGE_BEGIN, SMC64_ARCH_RANGE_END, 153 156 xa_mk_value(KVM_SMCCC_FILTER_HANDLE), 154 157 GFP_KERNEL_ACCOUNT); 155 - WARN_ON_ONCE(r); 158 + if (r) 159 + goto out_destroy; 156 160 161 + return 0; 162 + out_destroy: 163 + mtree_destroy(&kvm->arch.smccc_filter); 164 + return r; 165 + } 166 + 167 + static bool kvm_smccc_filter_configured(struct kvm *kvm) 168 + { 169 + return !mtree_empty(&kvm->arch.smccc_filter); 157 170 } 158 171 159 172 static int kvm_smccc_set_filter(struct kvm *kvm, struct kvm_smccc_filter __user *uaddr) ··· 193 184 goto out_unlock; 194 185 } 195 186 187 + if (!kvm_smccc_filter_configured(kvm)) { 188 + r = kvm_smccc_filter_insert_reserved(kvm); 189 + if (WARN_ON_ONCE(r)) 190 + goto out_unlock; 191 + } 192 + 196 193 r = mtree_insert_range(&kvm->arch.smccc_filter, start, end, 197 194 xa_mk_value(filter.action), GFP_KERNEL_ACCOUNT); 198 - if (r) 199 - goto out_unlock; 200 - 201 - set_bit(KVM_ARCH_FLAG_SMCCC_FILTER_CONFIGURED, &kvm->arch.flags); 202 - 203 195 out_unlock: 204 196 mutex_unlock(&kvm->arch.config_lock); 205 197 return r; ··· 211 201 unsigned long idx = func_id; 212 202 void *val; 213 203 214 - if (!test_bit(KVM_ARCH_FLAG_SMCCC_FILTER_CONFIGURED, &kvm->arch.flags)) 204 + if (!kvm_smccc_filter_configured(kvm)) 215 205 return KVM_SMCCC_FILTER_HANDLE; 216 206 217 207 /* ··· 397 387 smccc_feat->std_hyp_bmap = KVM_ARM_SMCCC_STD_HYP_FEATURES; 398 388 smccc_feat->vendor_hyp_bmap = KVM_ARM_SMCCC_VENDOR_HYP_FEATURES; 399 389 400 - init_smccc_filter(kvm); 390 + mt_init(&kvm->arch.smccc_filter); 401 391 } 402 392 403 393 void kvm_arm_teardown_hypercalls(struct kvm *kvm) ··· 564 554 { 565 555 bool wants_02; 566 556 567 - wants_02 = test_bit(KVM_ARM_VCPU_PSCI_0_2, vcpu->arch.features); 557 + wants_02 = vcpu_has_feature(vcpu, KVM_ARM_VCPU_PSCI_0_2); 568 558 569 559 switch (val) { 570 560 case KVM_ARM_PSCI_0_1:
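With the hypercalls.c change above, the reserved Arm-architecture ranges are only inserted into the maple tree the first time userspace installs a filter rule, and "filter configured" simply means the tree is non-empty, so VMs that never touch the filter no longer pay for the allocations. A hypothetical userspace sketch of installing one rule follows; the struct and constant names reflect my understanding of the pre-existing SMCCC-filter UAPI in <linux/kvm.h> and are assumptions, not something introduced by this series.

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Forward a hypothetical vendor-specific hypercall range to userspace. */
static int forward_vendor_call(int vm_fd)
{
	struct kvm_smccc_filter filter = {
		.base         = 0xc6000100,	/* hypothetical function ID */
		.nr_functions = 1,
		.action       = KVM_SMCCC_FILTER_FWD_TO_USER,
	};
	struct kvm_device_attr attr = {
		.group = KVM_ARM_VM_SMCCC_CTRL,
		.attr  = KVM_ARM_VM_SMCCC_FILTER,
		.addr  = (uint64_t)(unsigned long)&filter,
	};

	/*
	 * With this series, the reserved architecture ranges are inserted
	 * lazily on the first such call rather than at VM creation time.
	 */
	return ioctl(vm_fd, KVM_SET_DEVICE_ATTR, &attr);
}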
+3 -1
arch/arm64/kvm/mmio.c
··· 135 135 * volunteered to do so, and bail out otherwise. 136 136 */ 137 137 if (!kvm_vcpu_dabt_isvalid(vcpu)) { 138 + trace_kvm_mmio_nisv(*vcpu_pc(vcpu), kvm_vcpu_get_esr(vcpu), 139 + kvm_vcpu_get_hfar(vcpu), fault_ipa); 140 + 138 141 if (test_bit(KVM_ARCH_FLAG_RETURN_NISV_IO_ABORT_TO_USER, 139 142 &vcpu->kvm->arch.flags)) { 140 143 run->exit_reason = KVM_EXIT_ARM_NISV; ··· 146 143 return 0; 147 144 } 148 145 149 - kvm_pr_unimpl("Data abort outside memslots with no valid syndrome info\n"); 150 146 return -ENOSYS; 151 147 } 152 148
+7 -26
arch/arm64/kvm/mmu.c
··· 892 892 893 893 mmfr0 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1); 894 894 mmfr1 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1); 895 - kvm->arch.vtcr = kvm_get_vtcr(mmfr0, mmfr1, phys_shift); 895 + mmu->vtcr = kvm_get_vtcr(mmfr0, mmfr1, phys_shift); 896 896 897 897 if (mmu->pgt != NULL) { 898 898 kvm_err("kvm_arch already initialized?\n"); ··· 1067 1067 phys_addr_t addr; 1068 1068 int ret = 0; 1069 1069 struct kvm_mmu_memory_cache cache = { .gfp_zero = __GFP_ZERO }; 1070 - struct kvm_pgtable *pgt = kvm->arch.mmu.pgt; 1070 + struct kvm_s2_mmu *mmu = &kvm->arch.mmu; 1071 + struct kvm_pgtable *pgt = mmu->pgt; 1071 1072 enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_DEVICE | 1072 1073 KVM_PGTABLE_PROT_R | 1073 1074 (writable ? KVM_PGTABLE_PROT_W : 0); ··· 1081 1080 1082 1081 for (addr = guest_ipa; addr < guest_ipa + size; addr += PAGE_SIZE) { 1083 1082 ret = kvm_mmu_topup_memory_cache(&cache, 1084 - kvm_mmu_cache_min_pages(kvm)); 1083 + kvm_mmu_cache_min_pages(mmu)); 1085 1084 if (ret) 1086 1085 break; 1087 1086 ··· 1299 1298 if (sz < PMD_SIZE) 1300 1299 return PAGE_SIZE; 1301 1300 1302 - /* 1303 - * The address we faulted on is backed by a transparent huge 1304 - * page. However, because we map the compound huge page and 1305 - * not the individual tail page, we need to transfer the 1306 - * refcount to the head page. We have to be careful that the 1307 - * THP doesn't start to split while we are adjusting the 1308 - * refcounts. 1309 - * 1310 - * We are sure this doesn't happen, because mmu_invalidate_retry 1311 - * was successful and we are holding the mmu_lock, so if this 1312 - * THP is trying to split, it will be blocked in the mmu 1313 - * notifier before touching any of the pages, specifically 1314 - * before being able to call __split_huge_page_refcount(). 1315 - * 1316 - * We can therefore safely transfer the refcount from PG_tail 1317 - * to PG_head and switch the pfn from a tail page to the head 1318 - * page accordingly. 1319 - */ 1320 1301 *ipap &= PMD_MASK; 1321 - kvm_release_pfn_clean(pfn); 1322 1302 pfn &= ~(PTRS_PER_PMD - 1); 1323 - get_page(pfn_to_page(pfn)); 1324 1303 *pfnp = pfn; 1325 1304 1326 1305 return PMD_SIZE; ··· 1412 1431 if (fault_status != ESR_ELx_FSC_PERM || 1413 1432 (logging_active && write_fault)) { 1414 1433 ret = kvm_mmu_topup_memory_cache(memcache, 1415 - kvm_mmu_cache_min_pages(kvm)); 1434 + kvm_mmu_cache_min_pages(vcpu->arch.hw_mmu)); 1416 1435 if (ret) 1417 1436 return ret; 1418 1437 } ··· 1728 1747 } 1729 1748 1730 1749 /* Userspace should not be able to register out-of-bounds IPAs */ 1731 - VM_BUG_ON(fault_ipa >= kvm_phys_size(vcpu->kvm)); 1750 + VM_BUG_ON(fault_ipa >= kvm_phys_size(vcpu->arch.hw_mmu)); 1732 1751 1733 1752 if (fault_status == ESR_ELx_FSC_ACCESS) { 1734 1753 handle_access_fault(vcpu, fault_ipa); ··· 2002 2021 * Prevent userspace from creating a memory region outside of the IPA 2003 2022 * space addressable by the KVM guest IPA space. 2004 2023 */ 2005 - if ((new->base_gfn + new->npages) > (kvm_phys_size(kvm) >> PAGE_SHIFT)) 2024 + if ((new->base_gfn + new->npages) > (kvm_phys_size(&kvm->arch.mmu) >> PAGE_SHIFT)) 2006 2025 return -EFAULT; 2007 2026 2008 2027 hva = new->userspace_addr;
+1 -1
arch/arm64/kvm/pkvm.c
··· 123 123 if (host_kvm->created_vcpus < 1) 124 124 return -EINVAL; 125 125 126 - pgd_sz = kvm_pgtable_stage2_pgd_size(host_kvm->arch.vtcr); 126 + pgd_sz = kvm_pgtable_stage2_pgd_size(host_kvm->arch.mmu.vtcr); 127 127 128 128 /* 129 129 * The PGD pages will be reclaimed using a hyp_memcache which implies
+107 -38
arch/arm64/kvm/pmu-emul.c
··· 60 60 return __kvm_pmu_event_mask(pmuver); 61 61 } 62 62 63 + u64 kvm_pmu_evtyper_mask(struct kvm *kvm) 64 + { 65 + u64 mask = ARMV8_PMU_EXCLUDE_EL1 | ARMV8_PMU_EXCLUDE_EL0 | 66 + kvm_pmu_event_mask(kvm); 67 + u64 pfr0 = IDREG(kvm, SYS_ID_AA64PFR0_EL1); 68 + 69 + if (SYS_FIELD_GET(ID_AA64PFR0_EL1, EL2, pfr0)) 70 + mask |= ARMV8_PMU_INCLUDE_EL2; 71 + 72 + if (SYS_FIELD_GET(ID_AA64PFR0_EL1, EL3, pfr0)) 73 + mask |= ARMV8_PMU_EXCLUDE_NS_EL0 | 74 + ARMV8_PMU_EXCLUDE_NS_EL1 | 75 + ARMV8_PMU_EXCLUDE_EL3; 76 + 77 + return mask; 78 + } 79 + 63 80 /** 64 81 * kvm_pmc_is_64bit - determine if counter is 64bit 65 82 * @pmc: counter context ··· 89 72 90 73 static bool kvm_pmc_has_64bit_overflow(struct kvm_pmc *pmc) 91 74 { 92 - u64 val = __vcpu_sys_reg(kvm_pmc_to_vcpu(pmc), PMCR_EL0); 75 + u64 val = kvm_vcpu_read_pmcr(kvm_pmc_to_vcpu(pmc)); 93 76 94 77 return (pmc->idx < ARMV8_PMU_CYCLE_IDX && (val & ARMV8_PMU_PMCR_LP)) || 95 78 (pmc->idx == ARMV8_PMU_CYCLE_IDX && (val & ARMV8_PMU_PMCR_LC)); ··· 267 250 268 251 u64 kvm_pmu_valid_counter_mask(struct kvm_vcpu *vcpu) 269 252 { 270 - u64 val = __vcpu_sys_reg(vcpu, PMCR_EL0) >> ARMV8_PMU_PMCR_N_SHIFT; 253 + u64 val = kvm_vcpu_read_pmcr(vcpu) >> ARMV8_PMU_PMCR_N_SHIFT; 271 254 272 255 val &= ARMV8_PMU_PMCR_N_MASK; 273 256 if (val == 0) ··· 289 272 if (!kvm_vcpu_has_pmu(vcpu)) 290 273 return; 291 274 292 - if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) || !val) 275 + if (!(kvm_vcpu_read_pmcr(vcpu) & ARMV8_PMU_PMCR_E) || !val) 293 276 return; 294 277 295 278 for (i = 0; i < ARMV8_PMU_MAX_COUNTERS; i++) { ··· 341 324 { 342 325 u64 reg = 0; 343 326 344 - if ((__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E)) { 327 + if ((kvm_vcpu_read_pmcr(vcpu) & ARMV8_PMU_PMCR_E)) { 345 328 reg = __vcpu_sys_reg(vcpu, PMOVSSET_EL0); 346 329 reg &= __vcpu_sys_reg(vcpu, PMCNTENSET_EL0); 347 330 reg &= __vcpu_sys_reg(vcpu, PMINTENSET_EL1); ··· 365 348 pmu->irq_level = overflow; 366 349 367 350 if (likely(irqchip_in_kernel(vcpu->kvm))) { 368 - int ret = kvm_vgic_inject_irq(vcpu->kvm, vcpu->vcpu_id, 351 + int ret = kvm_vgic_inject_irq(vcpu->kvm, vcpu, 369 352 pmu->irq_num, overflow, pmu); 370 353 WARN_ON(ret); 371 354 } ··· 443 426 { 444 427 int i; 445 428 446 - if (!(__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E)) 429 + if (!(kvm_vcpu_read_pmcr(vcpu) & ARMV8_PMU_PMCR_E)) 447 430 return; 448 431 449 432 /* Weed out disabled counters */ ··· 586 569 static bool kvm_pmu_counter_is_enabled(struct kvm_pmc *pmc) 587 570 { 588 571 struct kvm_vcpu *vcpu = kvm_pmc_to_vcpu(pmc); 589 - return (__vcpu_sys_reg(vcpu, PMCR_EL0) & ARMV8_PMU_PMCR_E) && 572 + return (kvm_vcpu_read_pmcr(vcpu) & ARMV8_PMU_PMCR_E) && 590 573 (__vcpu_sys_reg(vcpu, PMCNTENSET_EL0) & BIT(pmc->idx)); 591 574 } 592 575 ··· 601 584 struct perf_event *event; 602 585 struct perf_event_attr attr; 603 586 u64 eventsel, reg, data; 587 + bool p, u, nsk, nsu; 604 588 605 589 reg = counter_index_to_evtreg(pmc->idx); 606 590 data = __vcpu_sys_reg(vcpu, reg); ··· 628 610 !test_bit(eventsel, vcpu->kvm->arch.pmu_filter)) 629 611 return; 630 612 613 + p = data & ARMV8_PMU_EXCLUDE_EL1; 614 + u = data & ARMV8_PMU_EXCLUDE_EL0; 615 + nsk = data & ARMV8_PMU_EXCLUDE_NS_EL1; 616 + nsu = data & ARMV8_PMU_EXCLUDE_NS_EL0; 617 + 631 618 memset(&attr, 0, sizeof(struct perf_event_attr)); 632 619 attr.type = arm_pmu->pmu.type; 633 620 attr.size = sizeof(attr); 634 621 attr.pinned = 1; 635 622 attr.disabled = !kvm_pmu_counter_is_enabled(pmc); 636 - attr.exclude_user = data & ARMV8_PMU_EXCLUDE_EL0 ? 
1 : 0; 637 - attr.exclude_kernel = data & ARMV8_PMU_EXCLUDE_EL1 ? 1 : 0; 623 + attr.exclude_user = (u != nsu); 624 + attr.exclude_kernel = (p != nsk); 638 625 attr.exclude_hv = 1; /* Don't count EL2 events */ 639 626 attr.exclude_host = 1; /* Don't count host events */ 640 627 attr.config = eventsel; ··· 680 657 u64 select_idx) 681 658 { 682 659 struct kvm_pmc *pmc = kvm_vcpu_idx_to_pmc(vcpu, select_idx); 683 - u64 reg, mask; 660 + u64 reg; 684 661 685 662 if (!kvm_vcpu_has_pmu(vcpu)) 686 663 return; 687 664 688 - mask = ARMV8_PMU_EVTYPE_MASK; 689 - mask &= ~ARMV8_PMU_EVTYPE_EVENT; 690 - mask |= kvm_pmu_event_mask(vcpu->kvm); 691 - 692 665 reg = counter_index_to_evtreg(pmc->idx); 693 - 694 - __vcpu_sys_reg(vcpu, reg) = data & mask; 666 + __vcpu_sys_reg(vcpu, reg) = data & kvm_pmu_evtyper_mask(vcpu->kvm); 695 667 696 668 kvm_pmu_create_perf_event(pmc); 697 669 } ··· 735 717 * It is still necessary to get a valid cpu, though, to probe for the 736 718 * default PMU instance as userspace is not required to specify a PMU 737 719 * type. In order to uphold the preexisting behavior KVM selects the 738 - * PMU instance for the core where the first call to the 739 - * KVM_ARM_VCPU_PMU_V3_CTRL attribute group occurs. A dependent use case 740 - * would be a user with disdain of all things big.LITTLE that affines 741 - * the VMM to a particular cluster of cores. 720 + * PMU instance for the core during vcpu init. A dependent use 721 + * case would be a user with disdain of all things big.LITTLE that 722 + * affines the VMM to a particular cluster of cores. 742 723 * 743 724 * In any case, userspace should just do the sane thing and use the UAPI 744 725 * to select a PMU type directly. But, be wary of the baggage being ··· 801 784 } 802 785 803 786 return val & mask; 787 + } 788 + 789 + void kvm_vcpu_reload_pmu(struct kvm_vcpu *vcpu) 790 + { 791 + u64 mask = kvm_pmu_valid_counter_mask(vcpu); 792 + 793 + kvm_pmu_handle_pmcr(vcpu, kvm_vcpu_read_pmcr(vcpu)); 794 + 795 + __vcpu_sys_reg(vcpu, PMOVSSET_EL0) &= mask; 796 + __vcpu_sys_reg(vcpu, PMINTENSET_EL1) &= mask; 797 + __vcpu_sys_reg(vcpu, PMCNTENSET_EL0) &= mask; 804 798 } 805 799 806 800 int kvm_arm_pmu_v3_enable(struct kvm_vcpu *vcpu) ··· 902 874 return true; 903 875 } 904 876 877 + /** 878 + * kvm_arm_pmu_get_max_counters - Return the max number of PMU counters. 879 + * @kvm: The kvm pointer 880 + */ 881 + u8 kvm_arm_pmu_get_max_counters(struct kvm *kvm) 882 + { 883 + struct arm_pmu *arm_pmu = kvm->arch.arm_pmu; 884 + 885 + /* 886 + * The arm_pmu->num_events considers the cycle counter as well. 887 + * Ignore that and return only the general-purpose counters. 888 + */ 889 + return arm_pmu->num_events - 1; 890 + } 891 + 892 + static void kvm_arm_set_pmu(struct kvm *kvm, struct arm_pmu *arm_pmu) 893 + { 894 + lockdep_assert_held(&kvm->arch.config_lock); 895 + 896 + kvm->arch.arm_pmu = arm_pmu; 897 + kvm->arch.pmcr_n = kvm_arm_pmu_get_max_counters(kvm); 898 + } 899 + 900 + /** 901 + * kvm_arm_set_default_pmu - No PMU set, get the default one. 902 + * @kvm: The kvm pointer 903 + * 904 + * The observant among you will notice that the supported_cpus 905 + * mask does not get updated for the default PMU even though it 906 + * is quite possible the selected instance supports only a 907 + * subset of cores in the system. This is intentional, and 908 + * upholds the preexisting behavior on heterogeneous systems 909 + * where vCPUs can be scheduled on any core but the guest 910 + * counters could stop working. 
911 + */ 912 + int kvm_arm_set_default_pmu(struct kvm *kvm) 913 + { 914 + struct arm_pmu *arm_pmu = kvm_pmu_probe_armpmu(); 915 + 916 + if (!arm_pmu) 917 + return -ENODEV; 918 + 919 + kvm_arm_set_pmu(kvm, arm_pmu); 920 + return 0; 921 + } 922 + 905 923 static int kvm_arm_pmu_v3_set_pmu(struct kvm_vcpu *vcpu, int pmu_id) 906 924 { 907 925 struct kvm *kvm = vcpu->kvm; ··· 967 893 break; 968 894 } 969 895 970 - kvm->arch.arm_pmu = arm_pmu; 896 + kvm_arm_set_pmu(kvm, arm_pmu); 971 897 cpumask_copy(kvm->arch.supported_cpus, &arm_pmu->supported_cpus); 972 898 ret = 0; 973 899 break; ··· 989 915 990 916 if (vcpu->arch.pmu.created) 991 917 return -EBUSY; 992 - 993 - if (!kvm->arch.arm_pmu) { 994 - /* 995 - * No PMU set, get the default one. 996 - * 997 - * The observant among you will notice that the supported_cpus 998 - * mask does not get updated for the default PMU even though it 999 - * is quite possible the selected instance supports only a 1000 - * subset of cores in the system. This is intentional, and 1001 - * upholds the preexisting behavior on heterogeneous systems 1002 - * where vCPUs can be scheduled on any core but the guest 1003 - * counters could stop working. 1004 - */ 1005 - kvm->arch.arm_pmu = kvm_pmu_probe_armpmu(); 1006 - if (!kvm->arch.arm_pmu) 1007 - return -ENODEV; 1008 - } 1009 918 1010 919 switch (attr->attr) { 1011 920 case KVM_ARM_VCPU_PMU_V3_IRQ: { ··· 1128 1071 ID_AA64DFR0_EL1_PMUVer_SHIFT, 1129 1072 ID_AA64DFR0_EL1_PMUVer_V3P5); 1130 1073 return FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), tmp); 1074 + } 1075 + 1076 + /** 1077 + * kvm_vcpu_read_pmcr - Read PMCR_EL0 register for the vCPU 1078 + * @vcpu: The vcpu pointer 1079 + */ 1080 + u64 kvm_vcpu_read_pmcr(struct kvm_vcpu *vcpu) 1081 + { 1082 + u64 pmcr = __vcpu_sys_reg(vcpu, PMCR_EL0) & 1083 + ~(ARMV8_PMU_PMCR_N_MASK << ARMV8_PMU_PMCR_N_SHIFT); 1084 + 1085 + return pmcr | ((u64)vcpu->kvm->arch.pmcr_n << ARMV8_PMU_PMCR_N_SHIFT); 1131 1086 }
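The pmu-emul.c changes above split PMCR_EL0.N out into kvm->arch.pmcr_n and recombine it in kvm_vcpu_read_pmcr(); together with the set_pmcr() userspace accessor added in sys_regs.c further down, this is what lets a VMM choose how many general-purpose counters the guest sees. A hypothetical userspace sketch follows; PMCR_EL0's ONE_REG index and the N field position are taken from the architected sysreg encoding and are assumptions here, not quoted from this series.

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

#define PMCR_EL0_ID	ARM64_SYS_REG(3, 3, 9, 12, 0)	/* PMCR_EL0 */
#define PMCR_N_SHIFT	11				/* PMCR_EL0.N is [15:11] */

/* Expose only @nr_counters general-purpose PMCs; call before the first KVM_RUN. */
static int limit_guest_pmcs(int vcpu_fd, uint64_t nr_counters)
{
	uint64_t pmcr;
	struct kvm_one_reg reg = {
		.id   = PMCR_EL0_ID,
		.addr = (uint64_t)(unsigned long)&pmcr,
	};
	int ret;

	ret = ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg);
	if (ret)
		return ret;

	pmcr &= ~(0x1fULL << PMCR_N_SHIFT);
	/* Ignored by KVM if it exceeds the host PMU or the VM has already run. */
	pmcr |= nr_counters << PMCR_N_SHIFT;

	return ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);
}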
+10 -46
arch/arm64/kvm/reset.c
··· 73 73 return 0; 74 74 } 75 75 76 - static int kvm_vcpu_enable_sve(struct kvm_vcpu *vcpu) 76 + static void kvm_vcpu_enable_sve(struct kvm_vcpu *vcpu) 77 77 { 78 - if (!system_supports_sve()) 79 - return -EINVAL; 80 - 81 78 vcpu->arch.sve_max_vl = kvm_sve_max_vl; 82 79 83 80 /* ··· 83 86 * kvm_arm_vcpu_finalize(), which freezes the configuration. 84 87 */ 85 88 vcpu_set_flag(vcpu, GUEST_HAS_SVE); 86 - 87 - return 0; 88 89 } 89 90 90 91 /* ··· 165 170 memset(vcpu->arch.sve_state, 0, vcpu_sve_state_size(vcpu)); 166 171 } 167 172 168 - static int kvm_vcpu_enable_ptrauth(struct kvm_vcpu *vcpu) 173 + static void kvm_vcpu_enable_ptrauth(struct kvm_vcpu *vcpu) 169 174 { 170 - /* 171 - * For now make sure that both address/generic pointer authentication 172 - * features are requested by the userspace together and the system 173 - * supports these capabilities. 174 - */ 175 - if (!test_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, vcpu->arch.features) || 176 - !test_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, vcpu->arch.features) || 177 - !system_has_full_ptr_auth()) 178 - return -EINVAL; 179 - 180 175 vcpu_set_flag(vcpu, GUEST_HAS_PTRAUTH); 181 - return 0; 182 176 } 183 177 184 178 /** ··· 188 204 * disable preemption around the vcpu reset as we would otherwise race with 189 205 * preempt notifiers which also call put/load. 190 206 */ 191 - int kvm_reset_vcpu(struct kvm_vcpu *vcpu) 207 + void kvm_reset_vcpu(struct kvm_vcpu *vcpu) 192 208 { 193 209 struct vcpu_reset_state reset_state; 194 - int ret; 195 210 bool loaded; 196 211 u32 pstate; 197 212 ··· 207 224 if (loaded) 208 225 kvm_arch_vcpu_put(vcpu); 209 226 210 - /* Disallow NV+SVE for the time being */ 211 - if (vcpu_has_nv(vcpu) && vcpu_has_feature(vcpu, KVM_ARM_VCPU_SVE)) { 212 - ret = -EINVAL; 213 - goto out; 214 - } 215 - 216 227 if (!kvm_arm_vcpu_sve_finalized(vcpu)) { 217 - if (test_bit(KVM_ARM_VCPU_SVE, vcpu->arch.features)) { 218 - ret = kvm_vcpu_enable_sve(vcpu); 219 - if (ret) 220 - goto out; 221 - } 228 + if (vcpu_has_feature(vcpu, KVM_ARM_VCPU_SVE)) 229 + kvm_vcpu_enable_sve(vcpu); 222 230 } else { 223 231 kvm_vcpu_reset_sve(vcpu); 224 232 } 225 233 226 - if (test_bit(KVM_ARM_VCPU_PTRAUTH_ADDRESS, vcpu->arch.features) || 227 - test_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, vcpu->arch.features)) { 228 - if (kvm_vcpu_enable_ptrauth(vcpu)) { 229 - ret = -EINVAL; 230 - goto out; 231 - } 232 - } 234 + if (vcpu_has_feature(vcpu, KVM_ARM_VCPU_PTRAUTH_ADDRESS) || 235 + vcpu_has_feature(vcpu, KVM_ARM_VCPU_PTRAUTH_GENERIC)) 236 + kvm_vcpu_enable_ptrauth(vcpu); 233 237 234 238 if (vcpu_el1_is_32bit(vcpu)) 235 239 pstate = VCPU_RESET_PSTATE_SVC; ··· 224 254 pstate = VCPU_RESET_PSTATE_EL2; 225 255 else 226 256 pstate = VCPU_RESET_PSTATE_EL1; 227 - 228 - if (kvm_vcpu_has_pmu(vcpu) && !kvm_arm_support_pmu_v3()) { 229 - ret = -EINVAL; 230 - goto out; 231 - } 232 257 233 258 /* Reset core registers */ 234 259 memset(vcpu_gp_regs(vcpu), 0, sizeof(*vcpu_gp_regs(vcpu))); ··· 259 294 } 260 295 261 296 /* Reset timer */ 262 - ret = kvm_timer_vcpu_reset(vcpu); 263 - out: 297 + kvm_timer_vcpu_reset(vcpu); 298 + 264 299 if (loaded) 265 300 kvm_arch_vcpu_load(vcpu, smp_processor_id()); 266 301 preempt_enable(); 267 - return ret; 268 302 } 269 303 270 304 u32 get_kvm_ipa_limit(void)
+286 -67
arch/arm64/kvm/sys_regs.c
··· 379 379 struct sys_reg_params *p, 380 380 const struct sys_reg_desc *r) 381 381 { 382 - u64 val = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1); 382 + u64 val = IDREG(vcpu->kvm, SYS_ID_AA64MMFR1_EL1); 383 383 u32 sr = reg_to_encoding(r); 384 384 385 385 if (!(val & (0xfUL << ID_AA64MMFR1_EL1_LO_SHIFT))) { ··· 719 719 720 720 static u64 reset_pmu_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r) 721 721 { 722 - u64 n, mask = BIT(ARMV8_PMU_CYCLE_IDX); 722 + u64 mask = BIT(ARMV8_PMU_CYCLE_IDX); 723 + u8 n = vcpu->kvm->arch.pmcr_n; 723 724 724 - /* No PMU available, any PMU reg may UNDEF... */ 725 - if (!kvm_arm_support_pmu_v3()) 726 - return 0; 727 - 728 - n = read_sysreg(pmcr_el0) >> ARMV8_PMU_PMCR_N_SHIFT; 729 - n &= ARMV8_PMU_PMCR_N_MASK; 730 725 if (n) 731 726 mask |= GENMASK(n - 1, 0); 732 727 ··· 741 746 742 747 static u64 reset_pmevtyper(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r) 743 748 { 749 + /* This thing will UNDEF, who cares about the reset value? */ 750 + if (!kvm_vcpu_has_pmu(vcpu)) 751 + return 0; 752 + 744 753 reset_unknown(vcpu, r); 745 - __vcpu_sys_reg(vcpu, r->reg) &= ARMV8_PMU_EVTYPE_MASK; 754 + __vcpu_sys_reg(vcpu, r->reg) &= kvm_pmu_evtyper_mask(vcpu->kvm); 746 755 747 756 return __vcpu_sys_reg(vcpu, r->reg); 748 757 } ··· 761 762 762 763 static u64 reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r) 763 764 { 764 - u64 pmcr; 765 + u64 pmcr = 0; 765 766 766 - /* No PMU available, PMCR_EL0 may UNDEF... */ 767 - if (!kvm_arm_support_pmu_v3()) 768 - return 0; 769 - 770 - /* Only preserve PMCR_EL0.N, and reset the rest to 0 */ 771 - pmcr = read_sysreg(pmcr_el0) & (ARMV8_PMU_PMCR_N_MASK << ARMV8_PMU_PMCR_N_SHIFT); 772 767 if (!kvm_supports_32bit_el0()) 773 768 pmcr |= ARMV8_PMU_PMCR_LC; 774 769 770 + /* 771 + * The value of PMCR.N field is included when the 772 + * vCPU register is read via kvm_vcpu_read_pmcr(). 
773 + */ 775 774 __vcpu_sys_reg(vcpu, r->reg) = pmcr; 776 775 777 776 return __vcpu_sys_reg(vcpu, r->reg); ··· 819 822 * Only update writeable bits of PMCR (continuing into 820 823 * kvm_pmu_handle_pmcr() as well) 821 824 */ 822 - val = __vcpu_sys_reg(vcpu, PMCR_EL0); 825 + val = kvm_vcpu_read_pmcr(vcpu); 823 826 val &= ~ARMV8_PMU_PMCR_MASK; 824 827 val |= p->regval & ARMV8_PMU_PMCR_MASK; 825 828 if (!kvm_supports_32bit_el0()) ··· 827 830 kvm_pmu_handle_pmcr(vcpu, val); 828 831 } else { 829 832 /* PMCR.P & PMCR.C are RAZ */ 830 - val = __vcpu_sys_reg(vcpu, PMCR_EL0) 833 + val = kvm_vcpu_read_pmcr(vcpu) 831 834 & ~(ARMV8_PMU_PMCR_P | ARMV8_PMU_PMCR_C); 832 835 p->regval = val; 833 836 } ··· 876 879 { 877 880 u64 pmcr, val; 878 881 879 - pmcr = __vcpu_sys_reg(vcpu, PMCR_EL0); 882 + pmcr = kvm_vcpu_read_pmcr(vcpu); 880 883 val = (pmcr >> ARMV8_PMU_PMCR_N_SHIFT) & ARMV8_PMU_PMCR_N_MASK; 881 884 if (idx >= val && idx != ARMV8_PMU_CYCLE_IDX) { 882 885 kvm_inject_undefined(vcpu); ··· 985 988 kvm_pmu_set_counter_event_type(vcpu, p->regval, idx); 986 989 kvm_vcpu_pmu_restore_guest(vcpu); 987 990 } else { 988 - p->regval = __vcpu_sys_reg(vcpu, reg) & ARMV8_PMU_EVTYPE_MASK; 991 + p->regval = __vcpu_sys_reg(vcpu, reg); 989 992 } 990 993 991 994 return true; 995 + } 996 + 997 + static int set_pmreg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r, u64 val) 998 + { 999 + bool set; 1000 + 1001 + val &= kvm_pmu_valid_counter_mask(vcpu); 1002 + 1003 + switch (r->reg) { 1004 + case PMOVSSET_EL0: 1005 + /* CRm[1] being set indicates a SET register, and CLR otherwise */ 1006 + set = r->CRm & 2; 1007 + break; 1008 + default: 1009 + /* Op2[0] being set indicates a SET register, and CLR otherwise */ 1010 + set = r->Op2 & 1; 1011 + break; 1012 + } 1013 + 1014 + if (set) 1015 + __vcpu_sys_reg(vcpu, r->reg) |= val; 1016 + else 1017 + __vcpu_sys_reg(vcpu, r->reg) &= ~val; 1018 + 1019 + return 0; 1020 + } 1021 + 1022 + static int get_pmreg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r, u64 *val) 1023 + { 1024 + u64 mask = kvm_pmu_valid_counter_mask(vcpu); 1025 + 1026 + *val = __vcpu_sys_reg(vcpu, r->reg) & mask; 1027 + return 0; 992 1028 } 993 1029 994 1030 static bool access_pmcnten(struct kvm_vcpu *vcpu, struct sys_reg_params *p, ··· 1131 1101 } 1132 1102 1133 1103 return true; 1104 + } 1105 + 1106 + static int get_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r, 1107 + u64 *val) 1108 + { 1109 + *val = kvm_vcpu_read_pmcr(vcpu); 1110 + return 0; 1111 + } 1112 + 1113 + static int set_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r, 1114 + u64 val) 1115 + { 1116 + u8 new_n = (val >> ARMV8_PMU_PMCR_N_SHIFT) & ARMV8_PMU_PMCR_N_MASK; 1117 + struct kvm *kvm = vcpu->kvm; 1118 + 1119 + mutex_lock(&kvm->arch.config_lock); 1120 + 1121 + /* 1122 + * The vCPU can't have more counters than the PMU hardware 1123 + * implements. Ignore this error to maintain compatibility 1124 + * with the existing KVM behavior. 1125 + */ 1126 + if (!kvm_vm_has_ran_once(kvm) && 1127 + new_n <= kvm_arm_pmu_get_max_counters(kvm)) 1128 + kvm->arch.pmcr_n = new_n; 1129 + 1130 + mutex_unlock(&kvm->arch.config_lock); 1131 + 1132 + /* 1133 + * Ignore writes to RES0 bits, read only bits that are cleared on 1134 + * vCPU reset, and writable bits that KVM doesn't support yet. 1135 + * (i.e. only PMCR.N and bits [7:0] are mutable from userspace) 1136 + * The LP bit is RES0 when FEAT_PMUv3p5 is not supported on the vCPU. 
1137 + * But, we leave the bit as it is here, as the vCPU's PMUver might 1138 + * be changed later (NOTE: the bit will be cleared on first vCPU run 1139 + * if necessary). 1140 + */ 1141 + val &= ARMV8_PMU_PMCR_MASK; 1142 + 1143 + /* The LC bit is RES1 when AArch32 is not supported */ 1144 + if (!kvm_supports_32bit_el0()) 1145 + val |= ARMV8_PMU_PMCR_LC; 1146 + 1147 + __vcpu_sys_reg(vcpu, r->reg) = val; 1148 + return 0; 1134 1149 } 1135 1150 1136 1151 /* Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */ ··· 1291 1216 /* Some features have different safe value type in KVM than host features */ 1292 1217 switch (id) { 1293 1218 case SYS_ID_AA64DFR0_EL1: 1294 - if (kvm_ftr.shift == ID_AA64DFR0_EL1_PMUVer_SHIFT) 1219 + switch (kvm_ftr.shift) { 1220 + case ID_AA64DFR0_EL1_PMUVer_SHIFT: 1295 1221 kvm_ftr.type = FTR_LOWER_SAFE; 1222 + break; 1223 + case ID_AA64DFR0_EL1_DebugVer_SHIFT: 1224 + kvm_ftr.type = FTR_LOWER_SAFE; 1225 + break; 1226 + } 1296 1227 break; 1297 1228 case SYS_ID_DFR0_EL1: 1298 1229 if (kvm_ftr.shift == ID_DFR0_EL1_PerfMon_SHIFT) ··· 1309 1228 return arm64_ftr_safe_value(&kvm_ftr, new, cur); 1310 1229 } 1311 1230 1312 - /** 1231 + /* 1313 1232 * arm64_check_features() - Check if a feature register value constitutes 1314 1233 * a subset of features indicated by the idreg's KVM sanitised limit. 1315 1234 * ··· 1419 1338 ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_GPA3)); 1420 1339 if (!cpus_have_final_cap(ARM64_HAS_WFXT)) 1421 1340 val &= ~ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_WFxT); 1422 - val &= ~ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_MOPS); 1423 1341 break; 1424 1342 case SYS_ID_AA64MMFR2_EL1: 1425 1343 val &= ~ID_AA64MMFR2_EL1_CCIDX_MASK; ··· 1451 1371 return (sys_reg_Op0(id) == 3 && sys_reg_Op1(id) == 0 && 1452 1372 sys_reg_CRn(id) == 0 && sys_reg_CRm(id) >= 1 && 1453 1373 sys_reg_CRm(id) < 8); 1374 + } 1375 + 1376 + static inline bool is_aa32_id_reg(u32 id) 1377 + { 1378 + return (sys_reg_Op0(id) == 3 && sys_reg_Op1(id) == 0 && 1379 + sys_reg_CRn(id) == 0 && sys_reg_CRm(id) >= 1 && 1380 + sys_reg_CRm(id) <= 3); 1454 1381 } 1455 1382 1456 1383 static unsigned int id_visibility(const struct kvm_vcpu *vcpu, ··· 1556 1469 return val; 1557 1470 } 1558 1471 1472 + #define ID_REG_LIMIT_FIELD_ENUM(val, reg, field, limit) \ 1473 + ({ \ 1474 + u64 __f_val = FIELD_GET(reg##_##field##_MASK, val); \ 1475 + (val) &= ~reg##_##field##_MASK; \ 1476 + (val) |= FIELD_PREP(reg##_##field##_MASK, \ 1477 + min(__f_val, (u64)reg##_##field##_##limit)); \ 1478 + (val); \ 1479 + }) 1480 + 1559 1481 static u64 read_sanitised_id_aa64dfr0_el1(struct kvm_vcpu *vcpu, 1560 1482 const struct sys_reg_desc *rd) 1561 1483 { 1562 1484 u64 val = read_sanitised_ftr_reg(SYS_ID_AA64DFR0_EL1); 1563 1485 1564 - /* Limit debug to ARMv8.0 */ 1565 - val &= ~ID_AA64DFR0_EL1_DebugVer_MASK; 1566 - val |= SYS_FIELD_PREP_ENUM(ID_AA64DFR0_EL1, DebugVer, IMP); 1486 + val = ID_REG_LIMIT_FIELD_ENUM(val, ID_AA64DFR0_EL1, DebugVer, V8P8); 1567 1487 1568 1488 /* 1569 1489 * Only initialize the PMU version if the vCPU was configured with one. ··· 1590 1496 const struct sys_reg_desc *rd, 1591 1497 u64 val) 1592 1498 { 1499 + u8 debugver = SYS_FIELD_GET(ID_AA64DFR0_EL1, DebugVer, val); 1593 1500 u8 pmuver = SYS_FIELD_GET(ID_AA64DFR0_EL1, PMUVer, val); 1594 1501 1595 1502 /* ··· 1610 1515 if (pmuver == ID_AA64DFR0_EL1_PMUVer_IMP_DEF) 1611 1516 val &= ~ID_AA64DFR0_EL1_PMUVer_MASK; 1612 1517 1518 + /* 1519 + * ID_AA64DFR0_EL1.DebugVer is one of those awkward fields with a 1520 + * nonzero minimum safe value. 
1521 + */ 1522 + if (debugver < ID_AA64DFR0_EL1_DebugVer_IMP) 1523 + return -EINVAL; 1524 + 1613 1525 return set_id_reg(vcpu, rd, val); 1614 1526 } 1615 1527 ··· 1630 1528 if (kvm_vcpu_has_pmu(vcpu)) 1631 1529 val |= SYS_FIELD_PREP(ID_DFR0_EL1, PerfMon, perfmon); 1632 1530 1531 + val = ID_REG_LIMIT_FIELD_ENUM(val, ID_DFR0_EL1, CopDbg, Debugv8p8); 1532 + 1633 1533 return val; 1634 1534 } 1635 1535 ··· 1640 1536 u64 val) 1641 1537 { 1642 1538 u8 perfmon = SYS_FIELD_GET(ID_DFR0_EL1, PerfMon, val); 1539 + u8 copdbg = SYS_FIELD_GET(ID_DFR0_EL1, CopDbg, val); 1643 1540 1644 1541 if (perfmon == ID_DFR0_EL1_PerfMon_IMPDEF) { 1645 1542 val &= ~ID_DFR0_EL1_PerfMon_MASK; ··· 1654 1549 * that this is a PMUv3. 1655 1550 */ 1656 1551 if (perfmon != 0 && perfmon < ID_DFR0_EL1_PerfMon_PMUv3) 1552 + return -EINVAL; 1553 + 1554 + if (copdbg < ID_DFR0_EL1_CopDbg_Armv8) 1657 1555 return -EINVAL; 1658 1556 1659 1557 return set_id_reg(vcpu, rd, val); ··· 1899 1791 * HCR_EL2.E2H==1, and only in the sysreg table for convenience of 1900 1792 * handling traps. Given that, they are always hidden from userspace. 1901 1793 */ 1902 - static unsigned int elx2_visibility(const struct kvm_vcpu *vcpu, 1903 - const struct sys_reg_desc *rd) 1794 + static unsigned int hidden_user_visibility(const struct kvm_vcpu *vcpu, 1795 + const struct sys_reg_desc *rd) 1904 1796 { 1905 1797 return REG_HIDDEN_USER; 1906 1798 } ··· 1911 1803 .reset = rst, \ 1912 1804 .reg = name##_EL1, \ 1913 1805 .val = v, \ 1914 - .visibility = elx2_visibility, \ 1806 + .visibility = hidden_user_visibility, \ 1915 1807 } 1916 1808 1917 1809 /* ··· 1925 1817 * from userspace. 1926 1818 */ 1927 1819 1928 - /* sys_reg_desc initialiser for known cpufeature ID registers */ 1929 - #define ID_SANITISED(name) { \ 1820 + #define ID_DESC(name) \ 1930 1821 SYS_DESC(SYS_##name), \ 1931 1822 .access = access_id_reg, \ 1932 - .get_user = get_id_reg, \ 1823 + .get_user = get_id_reg \ 1824 + 1825 + /* sys_reg_desc initialiser for known cpufeature ID registers */ 1826 + #define ID_SANITISED(name) { \ 1827 + ID_DESC(name), \ 1933 1828 .set_user = set_id_reg, \ 1934 1829 .visibility = id_visibility, \ 1935 1830 .reset = kvm_read_sanitised_id_reg, \ ··· 1941 1830 1942 1831 /* sys_reg_desc initialiser for known cpufeature ID registers */ 1943 1832 #define AA32_ID_SANITISED(name) { \ 1944 - SYS_DESC(SYS_##name), \ 1945 - .access = access_id_reg, \ 1946 - .get_user = get_id_reg, \ 1833 + ID_DESC(name), \ 1947 1834 .set_user = set_id_reg, \ 1948 1835 .visibility = aa32_id_visibility, \ 1949 1836 .reset = kvm_read_sanitised_id_reg, \ 1950 1837 .val = 0, \ 1838 + } 1839 + 1840 + /* sys_reg_desc initialiser for writable ID registers */ 1841 + #define ID_WRITABLE(name, mask) { \ 1842 + ID_DESC(name), \ 1843 + .set_user = set_id_reg, \ 1844 + .visibility = id_visibility, \ 1845 + .reset = kvm_read_sanitised_id_reg, \ 1846 + .val = mask, \ 1951 1847 } 1952 1848 1953 1849 /* ··· 1978 1860 * RAZ for the guest. 
1979 1861 */ 1980 1862 #define ID_HIDDEN(name) { \ 1981 - SYS_DESC(SYS_##name), \ 1982 - .access = access_id_reg, \ 1983 - .get_user = get_id_reg, \ 1863 + ID_DESC(name), \ 1984 1864 .set_user = set_id_reg, \ 1985 1865 .visibility = raz_visibility, \ 1986 1866 .reset = kvm_read_sanitised_id_reg, \ ··· 2077 1961 // DBGDTR[TR]X_EL0 share the same encoding 2078 1962 { SYS_DESC(SYS_DBGDTRTX_EL0), trap_raz_wi }, 2079 1963 2080 - { SYS_DESC(SYS_DBGVCR32_EL2), NULL, reset_val, DBGVCR32_EL2, 0 }, 1964 + { SYS_DESC(SYS_DBGVCR32_EL2), trap_undef, reset_val, DBGVCR32_EL2, 0 }, 2081 1965 2082 1966 { SYS_DESC(SYS_MPIDR_EL1), NULL, reset_mpidr, MPIDR_EL1 }, 2083 1967 ··· 2096 1980 .set_user = set_id_dfr0_el1, 2097 1981 .visibility = aa32_id_visibility, 2098 1982 .reset = read_sanitised_id_dfr0_el1, 2099 - .val = ID_DFR0_EL1_PerfMon_MASK, }, 1983 + .val = ID_DFR0_EL1_PerfMon_MASK | 1984 + ID_DFR0_EL1_CopDbg_MASK, }, 2100 1985 ID_HIDDEN(ID_AFR0_EL1), 2101 1986 AA32_ID_SANITISED(ID_MMFR0_EL1), 2102 1987 AA32_ID_SANITISED(ID_MMFR1_EL1), ··· 2131 2014 .get_user = get_id_reg, 2132 2015 .set_user = set_id_reg, 2133 2016 .reset = read_sanitised_id_aa64pfr0_el1, 2134 - .val = ID_AA64PFR0_EL1_CSV2_MASK | ID_AA64PFR0_EL1_CSV3_MASK, }, 2017 + .val = ~(ID_AA64PFR0_EL1_AMU | 2018 + ID_AA64PFR0_EL1_MPAM | 2019 + ID_AA64PFR0_EL1_SVE | 2020 + ID_AA64PFR0_EL1_RAS | 2021 + ID_AA64PFR0_EL1_GIC | 2022 + ID_AA64PFR0_EL1_AdvSIMD | 2023 + ID_AA64PFR0_EL1_FP), }, 2135 2024 ID_SANITISED(ID_AA64PFR1_EL1), 2136 2025 ID_UNALLOCATED(4,2), 2137 2026 ID_UNALLOCATED(4,3), 2138 - ID_SANITISED(ID_AA64ZFR0_EL1), 2027 + ID_WRITABLE(ID_AA64ZFR0_EL1, ~ID_AA64ZFR0_EL1_RES0), 2139 2028 ID_HIDDEN(ID_AA64SMFR0_EL1), 2140 2029 ID_UNALLOCATED(4,6), 2141 2030 ID_UNALLOCATED(4,7), ··· 2152 2029 .get_user = get_id_reg, 2153 2030 .set_user = set_id_aa64dfr0_el1, 2154 2031 .reset = read_sanitised_id_aa64dfr0_el1, 2155 - .val = ID_AA64DFR0_EL1_PMUVer_MASK, }, 2032 + .val = ID_AA64DFR0_EL1_PMUVer_MASK | 2033 + ID_AA64DFR0_EL1_DebugVer_MASK, }, 2156 2034 ID_SANITISED(ID_AA64DFR1_EL1), 2157 2035 ID_UNALLOCATED(5,2), 2158 2036 ID_UNALLOCATED(5,3), ··· 2163 2039 ID_UNALLOCATED(5,7), 2164 2040 2165 2041 /* CRm=6 */ 2166 - ID_SANITISED(ID_AA64ISAR0_EL1), 2167 - ID_SANITISED(ID_AA64ISAR1_EL1), 2168 - ID_SANITISED(ID_AA64ISAR2_EL1), 2042 + ID_WRITABLE(ID_AA64ISAR0_EL1, ~ID_AA64ISAR0_EL1_RES0), 2043 + ID_WRITABLE(ID_AA64ISAR1_EL1, ~(ID_AA64ISAR1_EL1_GPI | 2044 + ID_AA64ISAR1_EL1_GPA | 2045 + ID_AA64ISAR1_EL1_API | 2046 + ID_AA64ISAR1_EL1_APA)), 2047 + ID_WRITABLE(ID_AA64ISAR2_EL1, ~(ID_AA64ISAR2_EL1_RES0 | 2048 + ID_AA64ISAR2_EL1_APA3 | 2049 + ID_AA64ISAR2_EL1_GPA3)), 2169 2050 ID_UNALLOCATED(6,3), 2170 2051 ID_UNALLOCATED(6,4), 2171 2052 ID_UNALLOCATED(6,5), ··· 2178 2049 ID_UNALLOCATED(6,7), 2179 2050 2180 2051 /* CRm=7 */ 2181 - ID_SANITISED(ID_AA64MMFR0_EL1), 2182 - ID_SANITISED(ID_AA64MMFR1_EL1), 2183 - ID_SANITISED(ID_AA64MMFR2_EL1), 2052 + ID_WRITABLE(ID_AA64MMFR0_EL1, ~(ID_AA64MMFR0_EL1_RES0 | 2053 + ID_AA64MMFR0_EL1_TGRAN4_2 | 2054 + ID_AA64MMFR0_EL1_TGRAN64_2 | 2055 + ID_AA64MMFR0_EL1_TGRAN16_2)), 2056 + ID_WRITABLE(ID_AA64MMFR1_EL1, ~(ID_AA64MMFR1_EL1_RES0 | 2057 + ID_AA64MMFR1_EL1_HCX | 2058 + ID_AA64MMFR1_EL1_XNX | 2059 + ID_AA64MMFR1_EL1_TWED | 2060 + ID_AA64MMFR1_EL1_XNX | 2061 + ID_AA64MMFR1_EL1_VH | 2062 + ID_AA64MMFR1_EL1_VMIDBits)), 2063 + ID_WRITABLE(ID_AA64MMFR2_EL1, ~(ID_AA64MMFR2_EL1_RES0 | 2064 + ID_AA64MMFR2_EL1_EVT | 2065 + ID_AA64MMFR2_EL1_FWB | 2066 + ID_AA64MMFR2_EL1_IDS | 2067 + ID_AA64MMFR2_EL1_NV | 2068 + ID_AA64MMFR2_EL1_CCIDX)), 
2184 2069 ID_SANITISED(ID_AA64MMFR3_EL1), 2185 2070 ID_UNALLOCATED(7,4), 2186 2071 ID_UNALLOCATED(7,5), ··· 2259 2116 /* PMBIDR_EL1 is not trapped */ 2260 2117 2261 2118 { PMU_SYS_REG(PMINTENSET_EL1), 2262 - .access = access_pminten, .reg = PMINTENSET_EL1 }, 2119 + .access = access_pminten, .reg = PMINTENSET_EL1, 2120 + .get_user = get_pmreg, .set_user = set_pmreg }, 2263 2121 { PMU_SYS_REG(PMINTENCLR_EL1), 2264 - .access = access_pminten, .reg = PMINTENSET_EL1 }, 2122 + .access = access_pminten, .reg = PMINTENSET_EL1, 2123 + .get_user = get_pmreg, .set_user = set_pmreg }, 2265 2124 { SYS_DESC(SYS_PMMIR_EL1), trap_raz_wi }, 2266 2125 2267 2126 { SYS_DESC(SYS_MAIR_EL1), access_vm_reg, reset_unknown, MAIR_EL1 }, ··· 2311 2166 { SYS_DESC(SYS_CTR_EL0), access_ctr }, 2312 2167 { SYS_DESC(SYS_SVCR), undef_access }, 2313 2168 2314 - { PMU_SYS_REG(PMCR_EL0), .access = access_pmcr, 2315 - .reset = reset_pmcr, .reg = PMCR_EL0 }, 2169 + { PMU_SYS_REG(PMCR_EL0), .access = access_pmcr, .reset = reset_pmcr, 2170 + .reg = PMCR_EL0, .get_user = get_pmcr, .set_user = set_pmcr }, 2316 2171 { PMU_SYS_REG(PMCNTENSET_EL0), 2317 - .access = access_pmcnten, .reg = PMCNTENSET_EL0 }, 2172 + .access = access_pmcnten, .reg = PMCNTENSET_EL0, 2173 + .get_user = get_pmreg, .set_user = set_pmreg }, 2318 2174 { PMU_SYS_REG(PMCNTENCLR_EL0), 2319 - .access = access_pmcnten, .reg = PMCNTENSET_EL0 }, 2175 + .access = access_pmcnten, .reg = PMCNTENSET_EL0, 2176 + .get_user = get_pmreg, .set_user = set_pmreg }, 2320 2177 { PMU_SYS_REG(PMOVSCLR_EL0), 2321 - .access = access_pmovs, .reg = PMOVSSET_EL0 }, 2178 + .access = access_pmovs, .reg = PMOVSSET_EL0, 2179 + .get_user = get_pmreg, .set_user = set_pmreg }, 2322 2180 /* 2323 2181 * PM_SWINC_EL0 is exposed to userspace as RAZ/WI, as it was 2324 2182 * previously (and pointlessly) advertised in the past... 
··· 2349 2201 { PMU_SYS_REG(PMUSERENR_EL0), .access = access_pmuserenr, 2350 2202 .reset = reset_val, .reg = PMUSERENR_EL0, .val = 0 }, 2351 2203 { PMU_SYS_REG(PMOVSSET_EL0), 2352 - .access = access_pmovs, .reg = PMOVSSET_EL0 }, 2204 + .access = access_pmovs, .reg = PMOVSSET_EL0, 2205 + .get_user = get_pmreg, .set_user = set_pmreg }, 2353 2206 2354 2207 { SYS_DESC(SYS_TPIDR_EL0), NULL, reset_unknown, TPIDR_EL0 }, 2355 2208 { SYS_DESC(SYS_TPIDRRO_EL0), NULL, reset_unknown, TPIDRRO_EL0 }, ··· 2529 2380 EL2_REG(VTTBR_EL2, access_rw, reset_val, 0), 2530 2381 EL2_REG(VTCR_EL2, access_rw, reset_val, 0), 2531 2382 2532 - { SYS_DESC(SYS_DACR32_EL2), NULL, reset_unknown, DACR32_EL2 }, 2383 + { SYS_DESC(SYS_DACR32_EL2), trap_undef, reset_unknown, DACR32_EL2 }, 2533 2384 EL2_REG(HDFGRTR_EL2, access_rw, reset_val, 0), 2534 2385 EL2_REG(HDFGWTR_EL2, access_rw, reset_val, 0), 2535 2386 EL2_REG(SPSR_EL2, access_rw, reset_val, 0), 2536 2387 EL2_REG(ELR_EL2, access_rw, reset_val, 0), 2537 2388 { SYS_DESC(SYS_SP_EL1), access_sp_el1}, 2538 2389 2539 - { SYS_DESC(SYS_IFSR32_EL2), NULL, reset_unknown, IFSR32_EL2 }, 2390 + /* AArch32 SPSR_* are RES0 if trapped from a NV guest */ 2391 + { SYS_DESC(SYS_SPSR_irq), .access = trap_raz_wi, 2392 + .visibility = hidden_user_visibility }, 2393 + { SYS_DESC(SYS_SPSR_abt), .access = trap_raz_wi, 2394 + .visibility = hidden_user_visibility }, 2395 + { SYS_DESC(SYS_SPSR_und), .access = trap_raz_wi, 2396 + .visibility = hidden_user_visibility }, 2397 + { SYS_DESC(SYS_SPSR_fiq), .access = trap_raz_wi, 2398 + .visibility = hidden_user_visibility }, 2399 + 2400 + { SYS_DESC(SYS_IFSR32_EL2), trap_undef, reset_unknown, IFSR32_EL2 }, 2540 2401 EL2_REG(AFSR0_EL2, access_rw, reset_val, 0), 2541 2402 EL2_REG(AFSR1_EL2, access_rw, reset_val, 0), 2542 2403 EL2_REG(ESR_EL2, access_rw, reset_val, 0), 2543 - { SYS_DESC(SYS_FPEXC32_EL2), NULL, reset_val, FPEXC32_EL2, 0x700 }, 2404 + { SYS_DESC(SYS_FPEXC32_EL2), trap_undef, reset_val, FPEXC32_EL2, 0x700 }, 2544 2405 2545 2406 EL2_REG(FAR_EL2, access_rw, reset_val, 0), 2546 2407 EL2_REG(HPFAR_EL2, access_rw, reset_val, 0), ··· 2597 2438 if (p->is_write) { 2598 2439 return ignore_write(vcpu, p); 2599 2440 } else { 2600 - u64 dfr = read_sanitised_ftr_reg(SYS_ID_AA64DFR0_EL1); 2601 - u64 pfr = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1); 2602 - u32 el3 = !!cpuid_feature_extract_unsigned_field(pfr, ID_AA64PFR0_EL1_EL3_SHIFT); 2441 + u64 dfr = IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1); 2442 + u64 pfr = IDREG(vcpu->kvm, SYS_ID_AA64PFR0_EL1); 2443 + u32 el3 = !!SYS_FIELD_GET(ID_AA64PFR0_EL1, EL3, pfr); 2603 2444 2604 - p->regval = ((((dfr >> ID_AA64DFR0_EL1_WRPs_SHIFT) & 0xf) << 28) | 2605 - (((dfr >> ID_AA64DFR0_EL1_BRPs_SHIFT) & 0xf) << 24) | 2606 - (((dfr >> ID_AA64DFR0_EL1_CTX_CMPs_SHIFT) & 0xf) << 20) 2607 - | (6 << 16) | (1 << 15) | (el3 << 14) | (el3 << 12)); 2445 + p->regval = ((SYS_FIELD_GET(ID_AA64DFR0_EL1, WRPs, dfr) << 28) | 2446 + (SYS_FIELD_GET(ID_AA64DFR0_EL1, BRPs, dfr) << 24) | 2447 + (SYS_FIELD_GET(ID_AA64DFR0_EL1, CTX_CMPs, dfr) << 20) | 2448 + (SYS_FIELD_GET(ID_AA64DFR0_EL1, DebugVer, dfr) << 16) | 2449 + (1 << 15) | (el3 << 14) | (el3 << 12)); 2608 2450 return true; 2609 2451 } 2610 2452 } ··· 3730 3570 uindices += err; 3731 3571 3732 3572 return write_demux_regids(uindices); 3573 + } 3574 + 3575 + #define KVM_ARM_FEATURE_ID_RANGE_INDEX(r) \ 3576 + KVM_ARM_FEATURE_ID_RANGE_IDX(sys_reg_Op0(r), \ 3577 + sys_reg_Op1(r), \ 3578 + sys_reg_CRn(r), \ 3579 + sys_reg_CRm(r), \ 3580 + sys_reg_Op2(r)) 3581 + 3582 + static bool 
is_feature_id_reg(u32 encoding) 3583 + { 3584 + return (sys_reg_Op0(encoding) == 3 && 3585 + (sys_reg_Op1(encoding) < 2 || sys_reg_Op1(encoding) == 3) && 3586 + sys_reg_CRn(encoding) == 0 && 3587 + sys_reg_CRm(encoding) <= 7); 3588 + } 3589 + 3590 + int kvm_vm_ioctl_get_reg_writable_masks(struct kvm *kvm, struct reg_mask_range *range) 3591 + { 3592 + const void *zero_page = page_to_virt(ZERO_PAGE(0)); 3593 + u64 __user *masks = (u64 __user *)range->addr; 3594 + 3595 + /* Only feature id range is supported, reserved[13] must be zero. */ 3596 + if (range->range || 3597 + memcmp(range->reserved, zero_page, sizeof(range->reserved))) 3598 + return -EINVAL; 3599 + 3600 + /* Wipe the whole thing first */ 3601 + if (clear_user(masks, KVM_ARM_FEATURE_ID_RANGE_SIZE * sizeof(__u64))) 3602 + return -EFAULT; 3603 + 3604 + for (int i = 0; i < ARRAY_SIZE(sys_reg_descs); i++) { 3605 + const struct sys_reg_desc *reg = &sys_reg_descs[i]; 3606 + u32 encoding = reg_to_encoding(reg); 3607 + u64 val; 3608 + 3609 + if (!is_feature_id_reg(encoding) || !reg->set_user) 3610 + continue; 3611 + 3612 + /* 3613 + * For ID registers, we return the writable mask. Other feature 3614 + * registers return a full 64bit mask. That's not necessary 3615 + * compliant with a given revision of the architecture, but the 3616 + * RES0/RES1 definitions allow us to do that. 3617 + */ 3618 + if (is_id_reg(encoding)) { 3619 + if (!reg->val || 3620 + (is_aa32_id_reg(encoding) && !kvm_supports_32bit_el0())) 3621 + continue; 3622 + val = reg->val; 3623 + } else { 3624 + val = ~0UL; 3625 + } 3626 + 3627 + if (put_user(val, (masks + KVM_ARM_FEATURE_ID_RANGE_INDEX(encoding)))) 3628 + return -EFAULT; 3629 + } 3630 + 3631 + return 0; 3733 3632 } 3734 3633 3735 3634 int __init kvm_sys_reg_table_init(void)
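A minimal userspace sketch of how the new kvm_vm_ioctl_get_reg_writable_masks() handler is driven, for illustration only. It assumes the KVM_ARM_GET_REG_WRITABLE_MASKS ioctl, struct reg_mask_range and KVM_ARM_FEATURE_ID_RANGE_SIZE exported by the uapi headers elsewhere in this merge; none of those appear in the hunk above.

#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

static int dump_writable_masks(int vm_fd)
{
	uint64_t masks[KVM_ARM_FEATURE_ID_RANGE_SIZE] = {};
	struct reg_mask_range range;

	/* range and reserved[] must be zero, exactly as the handler checks */
	memset(&range, 0, sizeof(range));
	range.addr = (uint64_t)(uintptr_t)masks;

	if (ioctl(vm_fd, KVM_ARM_GET_REG_WRITABLE_MASKS, &range) < 0)
		return -1;

	for (int i = 0; i < KVM_ARM_FEATURE_ID_RANGE_SIZE; i++)
		if (masks[i])
			printf("idx %d: writable bits %#llx\n", i,
			       (unsigned long long)masks[i]);

	return 0;
}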
+25
arch/arm64/kvm/trace_arm.h
··· 136 136 __entry->vcpu_pc, __entry->instr, __entry->cpsr) 137 137 ); 138 138 139 + TRACE_EVENT(kvm_mmio_nisv, 140 + TP_PROTO(unsigned long vcpu_pc, unsigned long esr, 141 + unsigned long far, unsigned long ipa), 142 + TP_ARGS(vcpu_pc, esr, far, ipa), 143 + 144 + TP_STRUCT__entry( 145 + __field( unsigned long, vcpu_pc ) 146 + __field( unsigned long, esr ) 147 + __field( unsigned long, far ) 148 + __field( unsigned long, ipa ) 149 + ), 150 + 151 + TP_fast_assign( 152 + __entry->vcpu_pc = vcpu_pc; 153 + __entry->esr = esr; 154 + __entry->far = far; 155 + __entry->ipa = ipa; 156 + ), 157 + 158 + TP_printk("ipa %#016lx, esr %#016lx, far %#016lx, pc %#016lx", 159 + __entry->ipa, __entry->esr, 160 + __entry->far, __entry->vcpu_pc) 161 + ); 162 + 163 + 139 164 TRACE_EVENT(kvm_set_way_flush, 140 165 TP_PROTO(unsigned long vcpu_pc, bool cache), 141 166 TP_ARGS(vcpu_pc, cache),
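The new kvm_mmio_nisv tracepoint is fired from the MMIO abort path when the fault carries no usable syndrome. The call site lives in arch/arm64/kvm/mmio.c, not in this hunk; a plausible sketch of it, assuming the standard fault accessors from asm/kvm_emulate.h:

	if (!kvm_vcpu_dabt_isvalid(vcpu))
		trace_kvm_mmio_nisv(*vcpu_pc(vcpu), kvm_vcpu_get_esr(vcpu),
				    kvm_vcpu_get_hfar(vcpu), fault_ipa);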
+3 -3
arch/arm64/kvm/vgic/vgic-debug.c
··· 166 166 167 167 if (vcpu) { 168 168 hdr = "VCPU"; 169 - id = vcpu->vcpu_id; 169 + id = vcpu->vcpu_idx; 170 170 } 171 171 172 172 seq_printf(s, "\n"); ··· 212 212 " %2d " 213 213 "\n", 214 214 type, irq->intid, 215 - (irq->target_vcpu) ? irq->target_vcpu->vcpu_id : -1, 215 + (irq->target_vcpu) ? irq->target_vcpu->vcpu_idx : -1, 216 216 pending, 217 217 irq->line_level, 218 218 irq->active, ··· 224 224 irq->mpidr, 225 225 irq->source, 226 226 irq->priority, 227 - (irq->vcpu) ? irq->vcpu->vcpu_id : -1); 227 + (irq->vcpu) ? irq->vcpu->vcpu_idx : -1); 228 228 } 229 229 230 230 static int vgic_debug_show(struct seq_file *s, void *v)
+1 -1
arch/arm64/kvm/vgic/vgic-irqfd.c
··· 23 23 24 24 if (!vgic_valid_spi(kvm, spi_id)) 25 25 return -EINVAL; 26 - return kvm_vgic_inject_irq(kvm, 0, spi_id, level, NULL); 26 + return kvm_vgic_inject_irq(kvm, NULL, spi_id, level, NULL); 27 27 } 28 28 29 29 /**
+27 -22
arch/arm64/kvm/vgic/vgic-its.c
··· 378 378 return ret; 379 379 } 380 380 381 + static struct kvm_vcpu *collection_to_vcpu(struct kvm *kvm, 382 + struct its_collection *col) 383 + { 384 + return kvm_get_vcpu_by_id(kvm, col->target_addr); 385 + } 386 + 381 387 /* 382 388 * Promotes the ITS view of affinity of an ITTE (which redistributor this LPI 383 389 * is targeting) to the VGIC's view, which deals with target VCPUs. ··· 397 391 if (!its_is_collection_mapped(ite->collection)) 398 392 return; 399 393 400 - vcpu = kvm_get_vcpu(kvm, ite->collection->target_addr); 394 + vcpu = collection_to_vcpu(kvm, ite->collection); 401 395 update_affinity(ite->irq, vcpu); 402 396 } 403 397 ··· 685 679 if (!ite || !its_is_collection_mapped(ite->collection)) 686 680 return E_ITS_INT_UNMAPPED_INTERRUPT; 687 681 688 - vcpu = kvm_get_vcpu(kvm, ite->collection->target_addr); 682 + vcpu = collection_to_vcpu(kvm, ite->collection); 689 683 if (!vcpu) 690 684 return E_ITS_INT_UNMAPPED_INTERRUPT; 691 685 ··· 893 887 return E_ITS_MOVI_UNMAPPED_COLLECTION; 894 888 895 889 ite->collection = collection; 896 - vcpu = kvm_get_vcpu(kvm, collection->target_addr); 890 + vcpu = collection_to_vcpu(kvm, collection); 897 891 898 892 vgic_its_invalidate_cache(kvm); 899 893 ··· 1127 1121 } 1128 1122 1129 1123 if (its_is_collection_mapped(collection)) 1130 - vcpu = kvm_get_vcpu(kvm, collection->target_addr); 1124 + vcpu = collection_to_vcpu(kvm, collection); 1131 1125 1132 1126 irq = vgic_add_lpi(kvm, lpi_nr, vcpu); 1133 1127 if (IS_ERR(irq)) { ··· 1248 1242 u64 *its_cmd) 1249 1243 { 1250 1244 u16 coll_id; 1251 - u32 target_addr; 1252 1245 struct its_collection *collection; 1253 1246 bool valid; 1254 1247 1255 1248 valid = its_cmd_get_validbit(its_cmd); 1256 1249 coll_id = its_cmd_get_collection(its_cmd); 1257 - target_addr = its_cmd_get_target_addr(its_cmd); 1258 - 1259 - if (target_addr >= atomic_read(&kvm->online_vcpus)) 1260 - return E_ITS_MAPC_PROCNUM_OOR; 1261 1250 1262 1251 if (!valid) { 1263 1252 vgic_its_free_collection(its, coll_id); 1264 1253 vgic_its_invalidate_cache(kvm); 1265 1254 } else { 1255 + struct kvm_vcpu *vcpu; 1256 + 1257 + vcpu = kvm_get_vcpu_by_id(kvm, its_cmd_get_target_addr(its_cmd)); 1258 + if (!vcpu) 1259 + return E_ITS_MAPC_PROCNUM_OOR; 1260 + 1266 1261 collection = find_collection(its, coll_id); 1267 1262 1268 1263 if (!collection) { ··· 1277 1270 coll_id); 1278 1271 if (ret) 1279 1272 return ret; 1280 - collection->target_addr = target_addr; 1273 + collection->target_addr = vcpu->vcpu_id; 1281 1274 } else { 1282 - collection->target_addr = target_addr; 1275 + collection->target_addr = vcpu->vcpu_id; 1283 1276 update_affinity_collection(kvm, its, collection); 1284 1277 } 1285 1278 } ··· 1389 1382 if (!its_is_collection_mapped(collection)) 1390 1383 return E_ITS_INVALL_UNMAPPED_COLLECTION; 1391 1384 1392 - vcpu = kvm_get_vcpu(kvm, collection->target_addr); 1385 + vcpu = collection_to_vcpu(kvm, collection); 1393 1386 vgic_its_invall(vcpu); 1394 1387 1395 1388 return 0; ··· 1406 1399 static int vgic_its_cmd_handle_movall(struct kvm *kvm, struct vgic_its *its, 1407 1400 u64 *its_cmd) 1408 1401 { 1409 - u32 target1_addr = its_cmd_get_target_addr(its_cmd); 1410 - u32 target2_addr = its_cmd_mask_field(its_cmd, 3, 16, 32); 1411 1402 struct kvm_vcpu *vcpu1, *vcpu2; 1412 1403 struct vgic_irq *irq; 1413 1404 u32 *intids; 1414 1405 int irq_count, i; 1415 1406 1416 - if (target1_addr >= atomic_read(&kvm->online_vcpus) || 1417 - target2_addr >= atomic_read(&kvm->online_vcpus)) 1407 + /* We advertise GITS_TYPER.PTA==0, making the address the vcpu 
ID */ 1408 + vcpu1 = kvm_get_vcpu_by_id(kvm, its_cmd_get_target_addr(its_cmd)); 1409 + vcpu2 = kvm_get_vcpu_by_id(kvm, its_cmd_mask_field(its_cmd, 3, 16, 32)); 1410 + 1411 + if (!vcpu1 || !vcpu2) 1418 1412 return E_ITS_MOVALL_PROCNUM_OOR; 1419 1413 1420 - if (target1_addr == target2_addr) 1414 + if (vcpu1 == vcpu2) 1421 1415 return 0; 1422 - 1423 - vcpu1 = kvm_get_vcpu(kvm, target1_addr); 1424 - vcpu2 = kvm_get_vcpu(kvm, target2_addr); 1425 1416 1426 1417 irq_count = vgic_copy_lpi_list(kvm, vcpu1, &intids); 1427 1418 if (irq_count < 0) ··· 2263 2258 return PTR_ERR(ite); 2264 2259 2265 2260 if (its_is_collection_mapped(collection)) 2266 - vcpu = kvm_get_vcpu(kvm, collection->target_addr); 2261 + vcpu = kvm_get_vcpu_by_id(kvm, collection->target_addr); 2267 2262 2268 2263 irq = vgic_add_lpi(kvm, lpi_id, vcpu); 2269 2264 if (IS_ERR(irq)) { ··· 2578 2573 coll_id = val & KVM_ITS_CTE_ICID_MASK; 2579 2574 2580 2575 if (target_addr != COLLECTION_NOT_MAPPED && 2581 - target_addr >= atomic_read(&kvm->online_vcpus)) 2576 + !kvm_get_vcpu_by_id(kvm, target_addr)) 2582 2577 return -EINVAL; 2583 2578 2584 2579 collection = find_collection(its, coll_id);
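The recurring change in this file replaces kvm_get_vcpu(), which indexes vCPUs by creation order (vcpu_idx), with kvm_get_vcpu_by_id(), which matches the userspace-assigned vcpu_id that GITS_TYPER.PTA==0 target addresses actually name. A small sketch of the distinction, illustration only:

#include <linux/kvm_host.h>

/* ITS target addresses are vcpu IDs, not positions in the vcpu array */
static struct kvm_vcpu *its_target_to_vcpu(struct kvm *kvm, u32 target_addr)
{
	/* kvm_get_vcpu(kvm, target_addr) would break once IDs are sparse */
	return kvm_get_vcpu_by_id(kvm, target_addr);
}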
+4 -7
arch/arm64/kvm/vgic/vgic-kvm-device.c
··· 27 27 if (addr + size < addr) 28 28 return -EINVAL; 29 29 30 - if (addr & ~kvm_phys_mask(kvm) || addr + size > kvm_phys_size(kvm)) 30 + if (addr & ~kvm_phys_mask(&kvm->arch.mmu) || 31 + (addr + size) > kvm_phys_size(&kvm->arch.mmu)) 31 32 return -E2BIG; 32 33 33 34 return 0; ··· 340 339 { 341 340 int cpuid; 342 341 343 - cpuid = (attr->attr & KVM_DEV_ARM_VGIC_CPUID_MASK) >> 344 - KVM_DEV_ARM_VGIC_CPUID_SHIFT; 342 + cpuid = FIELD_GET(KVM_DEV_ARM_VGIC_CPUID_MASK, attr->attr); 345 343 346 - if (cpuid >= atomic_read(&dev->kvm->online_vcpus)) 347 - return -EINVAL; 348 - 349 - reg_attr->vcpu = kvm_get_vcpu(dev->kvm, cpuid); 344 + reg_attr->vcpu = kvm_get_vcpu_by_id(dev->kvm, cpuid); 350 345 reg_attr->addr = attr->attr & KVM_DEV_ARM_VGIC_OFFSET_MASK; 351 346 352 347 return 0;
+61 -95
arch/arm64/kvm/vgic/vgic-mmio-v3.c
··· 1013 1013 1014 1014 return 0; 1015 1015 } 1016 - /* 1017 - * Compare a given affinity (level 1-3 and a level 0 mask, from the SGI 1018 - * generation register ICC_SGI1R_EL1) with a given VCPU. 1019 - * If the VCPU's MPIDR matches, return the level0 affinity, otherwise 1020 - * return -1. 1021 - */ 1022 - static int match_mpidr(u64 sgi_aff, u16 sgi_cpu_mask, struct kvm_vcpu *vcpu) 1023 - { 1024 - unsigned long affinity; 1025 - int level0; 1026 - 1027 - /* 1028 - * Split the current VCPU's MPIDR into affinity level 0 and the 1029 - * rest as this is what we have to compare against. 1030 - */ 1031 - affinity = kvm_vcpu_get_mpidr_aff(vcpu); 1032 - level0 = MPIDR_AFFINITY_LEVEL(affinity, 0); 1033 - affinity &= ~MPIDR_LEVEL_MASK; 1034 - 1035 - /* bail out if the upper three levels don't match */ 1036 - if (sgi_aff != affinity) 1037 - return -1; 1038 - 1039 - /* Is this VCPU's bit set in the mask ? */ 1040 - if (!(sgi_cpu_mask & BIT(level0))) 1041 - return -1; 1042 - 1043 - return level0; 1044 - } 1045 1016 1046 1017 /* 1047 1018 * The ICC_SGI* registers encode the affinity differently from the MPIDR, ··· 1022 1051 #define SGI_AFFINITY_LEVEL(reg, level) \ 1023 1052 ((((reg) & ICC_SGI1R_AFFINITY_## level ##_MASK) \ 1024 1053 >> ICC_SGI1R_AFFINITY_## level ##_SHIFT) << MPIDR_LEVEL_SHIFT(level)) 1054 + 1055 + static void vgic_v3_queue_sgi(struct kvm_vcpu *vcpu, u32 sgi, bool allow_group1) 1056 + { 1057 + struct vgic_irq *irq = vgic_get_irq(vcpu->kvm, vcpu, sgi); 1058 + unsigned long flags; 1059 + 1060 + raw_spin_lock_irqsave(&irq->irq_lock, flags); 1061 + 1062 + /* 1063 + * An access targeting Group0 SGIs can only generate 1064 + * those, while an access targeting Group1 SGIs can 1065 + * generate interrupts of either group. 1066 + */ 1067 + if (!irq->group || allow_group1) { 1068 + if (!irq->hw) { 1069 + irq->pending_latch = true; 1070 + vgic_queue_irq_unlock(vcpu->kvm, irq, flags); 1071 + } else { 1072 + /* HW SGI? Ask the GIC to inject it */ 1073 + int err; 1074 + err = irq_set_irqchip_state(irq->host_irq, 1075 + IRQCHIP_STATE_PENDING, 1076 + true); 1077 + WARN_RATELIMIT(err, "IRQ %d", irq->host_irq); 1078 + raw_spin_unlock_irqrestore(&irq->irq_lock, flags); 1079 + } 1080 + } else { 1081 + raw_spin_unlock_irqrestore(&irq->irq_lock, flags); 1082 + } 1083 + 1084 + vgic_put_irq(vcpu->kvm, irq); 1085 + } 1025 1086 1026 1087 /** 1027 1088 * vgic_v3_dispatch_sgi - handle SGI requests from VCPUs ··· 1065 1062 * This will trap in sys_regs.c and call this function. 1066 1063 * This ICC_SGI1R_EL1 register contains the upper three affinity levels of the 1067 1064 * target processors as well as a bitmask of 16 Aff0 CPUs. 1068 - * If the interrupt routing mode bit is not set, we iterate over all VCPUs to 1069 - * check for matching ones. If this bit is set, we signal all, but not the 1070 - * calling VCPU. 1065 + * 1066 + * If the interrupt routing mode bit is not set, we iterate over the Aff0 1067 + * bits and signal the VCPUs matching the provided Aff{3,2,1}. 1068 + * 1069 + * If this bit is set, we signal all, but not the calling VCPU. 
1071 1070 */ 1072 1071 void vgic_v3_dispatch_sgi(struct kvm_vcpu *vcpu, u64 reg, bool allow_group1) 1073 1072 { 1074 1073 struct kvm *kvm = vcpu->kvm; 1075 1074 struct kvm_vcpu *c_vcpu; 1076 - u16 target_cpus; 1075 + unsigned long target_cpus; 1077 1076 u64 mpidr; 1078 - int sgi; 1079 - int vcpu_id = vcpu->vcpu_id; 1080 - bool broadcast; 1081 - unsigned long c, flags; 1077 + u32 sgi, aff0; 1078 + unsigned long c; 1082 1079 1083 - sgi = (reg & ICC_SGI1R_SGI_ID_MASK) >> ICC_SGI1R_SGI_ID_SHIFT; 1084 - broadcast = reg & BIT_ULL(ICC_SGI1R_IRQ_ROUTING_MODE_BIT); 1085 - target_cpus = (reg & ICC_SGI1R_TARGET_LIST_MASK) >> ICC_SGI1R_TARGET_LIST_SHIFT; 1080 + sgi = FIELD_GET(ICC_SGI1R_SGI_ID_MASK, reg); 1081 + 1082 + /* Broadcast */ 1083 + if (unlikely(reg & BIT_ULL(ICC_SGI1R_IRQ_ROUTING_MODE_BIT))) { 1084 + kvm_for_each_vcpu(c, c_vcpu, kvm) { 1085 + /* Don't signal the calling VCPU */ 1086 + if (c_vcpu == vcpu) 1087 + continue; 1088 + 1089 + vgic_v3_queue_sgi(c_vcpu, sgi, allow_group1); 1090 + } 1091 + 1092 + return; 1093 + } 1094 + 1095 + /* We iterate over affinities to find the corresponding vcpus */ 1086 1096 mpidr = SGI_AFFINITY_LEVEL(reg, 3); 1087 1097 mpidr |= SGI_AFFINITY_LEVEL(reg, 2); 1088 1098 mpidr |= SGI_AFFINITY_LEVEL(reg, 1); 1099 + target_cpus = FIELD_GET(ICC_SGI1R_TARGET_LIST_MASK, reg); 1089 1100 1090 - /* 1091 - * We iterate over all VCPUs to find the MPIDRs matching the request. 1092 - * If we have handled one CPU, we clear its bit to detect early 1093 - * if we are already finished. This avoids iterating through all 1094 - * VCPUs when most of the times we just signal a single VCPU. 1095 - */ 1096 - kvm_for_each_vcpu(c, c_vcpu, kvm) { 1097 - struct vgic_irq *irq; 1098 - 1099 - /* Exit early if we have dealt with all requested CPUs */ 1100 - if (!broadcast && target_cpus == 0) 1101 - break; 1102 - 1103 - /* Don't signal the calling VCPU */ 1104 - if (broadcast && c == vcpu_id) 1105 - continue; 1106 - 1107 - if (!broadcast) { 1108 - int level0; 1109 - 1110 - level0 = match_mpidr(mpidr, target_cpus, c_vcpu); 1111 - if (level0 == -1) 1112 - continue; 1113 - 1114 - /* remove this matching VCPU from the mask */ 1115 - target_cpus &= ~BIT(level0); 1116 - } 1117 - 1118 - irq = vgic_get_irq(vcpu->kvm, c_vcpu, sgi); 1119 - 1120 - raw_spin_lock_irqsave(&irq->irq_lock, flags); 1121 - 1122 - /* 1123 - * An access targeting Group0 SGIs can only generate 1124 - * those, while an access targeting Group1 SGIs can 1125 - * generate interrupts of either group. 1126 - */ 1127 - if (!irq->group || allow_group1) { 1128 - if (!irq->hw) { 1129 - irq->pending_latch = true; 1130 - vgic_queue_irq_unlock(vcpu->kvm, irq, flags); 1131 - } else { 1132 - /* HW SGI? Ask the GIC to inject it */ 1133 - int err; 1134 - err = irq_set_irqchip_state(irq->host_irq, 1135 - IRQCHIP_STATE_PENDING, 1136 - true); 1137 - WARN_RATELIMIT(err, "IRQ %d", irq->host_irq); 1138 - raw_spin_unlock_irqrestore(&irq->irq_lock, flags); 1139 - } 1140 - } else { 1141 - raw_spin_unlock_irqrestore(&irq->irq_lock, flags); 1142 - } 1143 - 1144 - vgic_put_irq(vcpu->kvm, irq); 1101 + for_each_set_bit(aff0, &target_cpus, hweight_long(ICC_SGI1R_TARGET_LIST_MASK)) { 1102 + c_vcpu = kvm_mpidr_to_vcpu(kvm, mpidr | aff0); 1103 + if (c_vcpu) 1104 + vgic_v3_queue_sgi(c_vcpu, sgi, allow_group1); 1145 1105 } 1146 1106 } 1147 1107
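A condensed sketch of the reworked dispatch above, illustration only: the Aff3.Aff2.Aff1 value is combined with each set bit of the 16-bit Aff0 target list and resolved directly to a vCPU with kvm_mpidr_to_vcpu(), instead of scanning every vCPU and matching MPIDRs by hand.

#include <linux/bitops.h>
#include <linux/kvm_host.h>

static void sgi_for_each_target(struct kvm *kvm, u64 aff321,
				unsigned long target_list,
				void (*queue)(struct kvm_vcpu *vcpu))
{
	unsigned long aff0;

	/* One lookup per set Aff0 bit; the vSGI MPIDR table keeps this cheap */
	for_each_set_bit(aff0, &target_list, 16) {
		struct kvm_vcpu *vcpu = kvm_mpidr_to_vcpu(kvm, aff321 | aff0);

		if (vcpu)
			queue(vcpu);
	}
}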
+5 -7
arch/arm64/kvm/vgic/vgic.c
··· 422 422 /** 423 423 * kvm_vgic_inject_irq - Inject an IRQ from a device to the vgic 424 424 * @kvm: The VM structure pointer 425 - * @cpuid: The CPU for PPIs 425 + * @vcpu: The CPU for PPIs or NULL for global interrupts 426 426 * @intid: The INTID to inject a new state to. 427 427 * @level: Edge-triggered: true: to trigger the interrupt 428 428 * false: to ignore the call ··· 436 436 * level-sensitive interrupts. You can think of the level parameter as 1 437 437 * being HIGH and 0 being LOW and all devices being active-HIGH. 438 438 */ 439 - int kvm_vgic_inject_irq(struct kvm *kvm, int cpuid, unsigned int intid, 440 - bool level, void *owner) 439 + int kvm_vgic_inject_irq(struct kvm *kvm, struct kvm_vcpu *vcpu, 440 + unsigned int intid, bool level, void *owner) 441 441 { 442 - struct kvm_vcpu *vcpu; 443 442 struct vgic_irq *irq; 444 443 unsigned long flags; 445 444 int ret; 446 - 447 - trace_vgic_update_irq_pending(cpuid, intid, level); 448 445 449 446 ret = vgic_lazy_init(kvm); 450 447 if (ret) 451 448 return ret; 452 449 453 - vcpu = kvm_get_vcpu(kvm, cpuid); 454 450 if (!vcpu && intid < VGIC_NR_PRIVATE_IRQS) 455 451 return -EINVAL; 452 + 453 + trace_vgic_update_irq_pending(vcpu ? vcpu->vcpu_idx : 0, intid, level); 456 454 457 455 irq = vgic_get_irq(kvm, vcpu, intid); 458 456 if (!irq)
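Caller-side sketch of the new kvm_vgic_inject_irq() prototype, with hypothetical INTIDs: private interrupts name their target vCPU explicitly, shared ones pass NULL, matching the vgic-irqfd change earlier in this diff.

#include <linux/kvm_host.h>
#include <kvm/arm_vgic.h>

static int inject_demo(struct kvm *kvm, struct kvm_vcpu *vcpu)
{
	int ret;

	/* PPI (INTID 27 here, hypothetical): the target vCPU must be given */
	ret = kvm_vgic_inject_irq(kvm, vcpu, 27, true, NULL);
	if (ret)
		return ret;

	/* SPI (INTID 40 here, hypothetical): global, so NULL is fine */
	return kvm_vgic_inject_irq(kvm, NULL, 40, true, NULL);
}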
+8 -3
arch/arm64/kvm/vmid.c
··· 135 135 atomic64_set(this_cpu_ptr(&active_vmids), VMID_ACTIVE_INVALID); 136 136 } 137 137 138 - void kvm_arm_vmid_update(struct kvm_vmid *kvm_vmid) 138 + bool kvm_arm_vmid_update(struct kvm_vmid *kvm_vmid) 139 139 { 140 140 unsigned long flags; 141 141 u64 vmid, old_active_vmid; 142 + bool updated = false; 142 143 143 144 vmid = atomic64_read(&kvm_vmid->id); 144 145 ··· 157 156 if (old_active_vmid != 0 && vmid_gen_match(vmid) && 158 157 0 != atomic64_cmpxchg_relaxed(this_cpu_ptr(&active_vmids), 159 158 old_active_vmid, vmid)) 160 - return; 159 + return false; 161 160 162 161 raw_spin_lock_irqsave(&cpu_vmid_lock, flags); 163 162 164 163 /* Check that our VMID belongs to the current generation. */ 165 164 vmid = atomic64_read(&kvm_vmid->id); 166 - if (!vmid_gen_match(vmid)) 165 + if (!vmid_gen_match(vmid)) { 167 166 vmid = new_vmid(kvm_vmid); 167 + updated = true; 168 + } 168 169 169 170 atomic64_set(this_cpu_ptr(&active_vmids), vmid); 170 171 raw_spin_unlock_irqrestore(&cpu_vmid_lock, flags); 172 + 173 + return updated; 171 174 } 172 175 173 176 /*
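The new bool return lets the caller in arch/arm64/kvm/arm.c (outside this hunk) notice a VMID roll-over; a sketch of the intended use under that assumption, since VHE now loads the stage-2 context at vcpu_load() and must reload it if the VMID changed afterwards:

	if (kvm_arm_vmid_update(&vcpu->arch.hw_mmu->vmid) && has_vhe())
		__load_stage2(vcpu->arch.hw_mmu, vcpu->arch.hw_mmu->arch);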
+2
arch/loongarch/Kbuild
··· 3 3 obj-y += net/ 4 4 obj-y += vdso/ 5 5 6 + obj-$(CONFIG_KVM) += kvm/ 7 + 6 8 # for cleaning 7 9 subdir- += boot
+6
arch/loongarch/Kconfig
··· 129 129 select HAVE_KPROBES 130 130 select HAVE_KPROBES_ON_FTRACE 131 131 select HAVE_KRETPROBES 132 + select HAVE_KVM 132 133 select HAVE_MOD_ARCH_SPECIFIC 133 134 select HAVE_NMI 134 135 select HAVE_PCI ··· 263 262 264 263 config AS_HAS_LBT_EXTENSION 265 264 def_bool $(as-instr,movscr2gr \$a0$(comma)\$scr0) 265 + 266 + config AS_HAS_LVZ_EXTENSION 267 + def_bool $(as-instr,hvcl 0) 266 268 267 269 menu "Kernel type and options" 268 270 ··· 680 676 source "drivers/acpi/Kconfig" 681 677 682 678 endmenu 679 + 680 + source "arch/loongarch/kvm/Kconfig"
+2
arch/loongarch/configs/loongson3_defconfig
··· 66 66 CONFIG_EFI_GENERIC_STUB_INITRD_CMDLINE_LOADER=y 67 67 CONFIG_EFI_CAPSULE_LOADER=m 68 68 CONFIG_EFI_TEST=m 69 + CONFIG_VIRTUALIZATION=y 70 + CONFIG_KVM=m 69 71 CONFIG_JUMP_LABEL=y 70 72 CONFIG_MODULES=y 71 73 CONFIG_MODULE_FORCE_LOAD=y
+16
arch/loongarch/include/asm/inst.h
··· 65 65 revbd_op = 0x0f, 66 66 revh2w_op = 0x10, 67 67 revhd_op = 0x11, 68 + iocsrrdb_op = 0x19200, 69 + iocsrrdh_op = 0x19201, 70 + iocsrrdw_op = 0x19202, 71 + iocsrrdd_op = 0x19203, 72 + iocsrwrb_op = 0x19204, 73 + iocsrwrh_op = 0x19205, 74 + iocsrwrw_op = 0x19206, 75 + iocsrwrd_op = 0x19207, 68 76 }; 69 77 70 78 enum reg2i5_op { ··· 326 318 unsigned int opcode : 10; 327 319 }; 328 320 321 + struct reg2csr_format { 322 + unsigned int rd : 5; 323 + unsigned int rj : 5; 324 + unsigned int csr : 14; 325 + unsigned int opcode : 8; 326 + }; 327 + 329 328 struct reg3_format { 330 329 unsigned int rd : 5; 331 330 unsigned int rj : 5; ··· 361 346 struct reg2i14_format reg2i14_format; 362 347 struct reg2i16_format reg2i16_format; 363 348 struct reg2bstrd_format reg2bstrd_format; 349 + struct reg2csr_format reg2csr_format; 364 350 struct reg3_format reg3_format; 365 351 struct reg3sa2_format reg3sa2_format; 366 352 };
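The new reg2csr_format mirrors how the LoongArch CSR instructions encode their operands; the rj field doubles as a sub-opcode, which the KVM exit handler later in this diff relies on. A decoding sketch, illustration only:

#include <linux/printk.h>
#include <asm/inst.h>

static void decode_csr_op(union loongarch_instruction inst)
{
	unsigned int rd = inst.reg2csr_format.rd;
	unsigned int rj = inst.reg2csr_format.rj;
	unsigned int csr = inst.reg2csr_format.csr;

	if (rj == 0)			/* csrrd rd, csr */
		pr_debug("csrrd $r%u, %#x\n", rd, csr);
	else if (rj == 1)		/* csrwr rd, csr */
		pr_debug("csrwr $r%u, %#x\n", rd, csr);
	else				/* csrxchg rd, rj, csr */
		pr_debug("csrxchg $r%u, $r%u, %#x\n", rd, rj, csr);
}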
+211
arch/loongarch/include/asm/kvm_csr.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + /* 3 + * Copyright (C) 2020-2023 Loongson Technology Corporation Limited 4 + */ 5 + 6 + #ifndef __ASM_LOONGARCH_KVM_CSR_H__ 7 + #define __ASM_LOONGARCH_KVM_CSR_H__ 8 + 9 + #include <linux/uaccess.h> 10 + #include <linux/kvm_host.h> 11 + #include <asm/loongarch.h> 12 + #include <asm/kvm_vcpu.h> 13 + 14 + #define gcsr_read(csr) \ 15 + ({ \ 16 + register unsigned long __v; \ 17 + __asm__ __volatile__( \ 18 + " gcsrrd %[val], %[reg]\n\t" \ 19 + : [val] "=r" (__v) \ 20 + : [reg] "i" (csr) \ 21 + : "memory"); \ 22 + __v; \ 23 + }) 24 + 25 + #define gcsr_write(v, csr) \ 26 + ({ \ 27 + register unsigned long __v = v; \ 28 + __asm__ __volatile__ ( \ 29 + " gcsrwr %[val], %[reg]\n\t" \ 30 + : [val] "+r" (__v) \ 31 + : [reg] "i" (csr) \ 32 + : "memory"); \ 33 + }) 34 + 35 + #define gcsr_xchg(v, m, csr) \ 36 + ({ \ 37 + register unsigned long __v = v; \ 38 + __asm__ __volatile__( \ 39 + " gcsrxchg %[val], %[mask], %[reg]\n\t" \ 40 + : [val] "+r" (__v) \ 41 + : [mask] "r" (m), [reg] "i" (csr) \ 42 + : "memory"); \ 43 + __v; \ 44 + }) 45 + 46 + /* Guest CSRS read and write */ 47 + #define read_gcsr_crmd() gcsr_read(LOONGARCH_CSR_CRMD) 48 + #define write_gcsr_crmd(val) gcsr_write(val, LOONGARCH_CSR_CRMD) 49 + #define read_gcsr_prmd() gcsr_read(LOONGARCH_CSR_PRMD) 50 + #define write_gcsr_prmd(val) gcsr_write(val, LOONGARCH_CSR_PRMD) 51 + #define read_gcsr_euen() gcsr_read(LOONGARCH_CSR_EUEN) 52 + #define write_gcsr_euen(val) gcsr_write(val, LOONGARCH_CSR_EUEN) 53 + #define read_gcsr_misc() gcsr_read(LOONGARCH_CSR_MISC) 54 + #define write_gcsr_misc(val) gcsr_write(val, LOONGARCH_CSR_MISC) 55 + #define read_gcsr_ecfg() gcsr_read(LOONGARCH_CSR_ECFG) 56 + #define write_gcsr_ecfg(val) gcsr_write(val, LOONGARCH_CSR_ECFG) 57 + #define read_gcsr_estat() gcsr_read(LOONGARCH_CSR_ESTAT) 58 + #define write_gcsr_estat(val) gcsr_write(val, LOONGARCH_CSR_ESTAT) 59 + #define read_gcsr_era() gcsr_read(LOONGARCH_CSR_ERA) 60 + #define write_gcsr_era(val) gcsr_write(val, LOONGARCH_CSR_ERA) 61 + #define read_gcsr_badv() gcsr_read(LOONGARCH_CSR_BADV) 62 + #define write_gcsr_badv(val) gcsr_write(val, LOONGARCH_CSR_BADV) 63 + #define read_gcsr_badi() gcsr_read(LOONGARCH_CSR_BADI) 64 + #define write_gcsr_badi(val) gcsr_write(val, LOONGARCH_CSR_BADI) 65 + #define read_gcsr_eentry() gcsr_read(LOONGARCH_CSR_EENTRY) 66 + #define write_gcsr_eentry(val) gcsr_write(val, LOONGARCH_CSR_EENTRY) 67 + 68 + #define read_gcsr_asid() gcsr_read(LOONGARCH_CSR_ASID) 69 + #define write_gcsr_asid(val) gcsr_write(val, LOONGARCH_CSR_ASID) 70 + #define read_gcsr_pgdl() gcsr_read(LOONGARCH_CSR_PGDL) 71 + #define write_gcsr_pgdl(val) gcsr_write(val, LOONGARCH_CSR_PGDL) 72 + #define read_gcsr_pgdh() gcsr_read(LOONGARCH_CSR_PGDH) 73 + #define write_gcsr_pgdh(val) gcsr_write(val, LOONGARCH_CSR_PGDH) 74 + #define write_gcsr_pgd(val) gcsr_write(val, LOONGARCH_CSR_PGD) 75 + #define read_gcsr_pgd() gcsr_read(LOONGARCH_CSR_PGD) 76 + #define read_gcsr_pwctl0() gcsr_read(LOONGARCH_CSR_PWCTL0) 77 + #define write_gcsr_pwctl0(val) gcsr_write(val, LOONGARCH_CSR_PWCTL0) 78 + #define read_gcsr_pwctl1() gcsr_read(LOONGARCH_CSR_PWCTL1) 79 + #define write_gcsr_pwctl1(val) gcsr_write(val, LOONGARCH_CSR_PWCTL1) 80 + #define read_gcsr_stlbpgsize() gcsr_read(LOONGARCH_CSR_STLBPGSIZE) 81 + #define write_gcsr_stlbpgsize(val) gcsr_write(val, LOONGARCH_CSR_STLBPGSIZE) 82 + #define read_gcsr_rvacfg() gcsr_read(LOONGARCH_CSR_RVACFG) 83 + #define write_gcsr_rvacfg(val) gcsr_write(val, LOONGARCH_CSR_RVACFG) 84 + 85 + #define 
read_gcsr_cpuid() gcsr_read(LOONGARCH_CSR_CPUID) 86 + #define write_gcsr_cpuid(val) gcsr_write(val, LOONGARCH_CSR_CPUID) 87 + #define read_gcsr_prcfg1() gcsr_read(LOONGARCH_CSR_PRCFG1) 88 + #define write_gcsr_prcfg1(val) gcsr_write(val, LOONGARCH_CSR_PRCFG1) 89 + #define read_gcsr_prcfg2() gcsr_read(LOONGARCH_CSR_PRCFG2) 90 + #define write_gcsr_prcfg2(val) gcsr_write(val, LOONGARCH_CSR_PRCFG2) 91 + #define read_gcsr_prcfg3() gcsr_read(LOONGARCH_CSR_PRCFG3) 92 + #define write_gcsr_prcfg3(val) gcsr_write(val, LOONGARCH_CSR_PRCFG3) 93 + 94 + #define read_gcsr_kscratch0() gcsr_read(LOONGARCH_CSR_KS0) 95 + #define write_gcsr_kscratch0(val) gcsr_write(val, LOONGARCH_CSR_KS0) 96 + #define read_gcsr_kscratch1() gcsr_read(LOONGARCH_CSR_KS1) 97 + #define write_gcsr_kscratch1(val) gcsr_write(val, LOONGARCH_CSR_KS1) 98 + #define read_gcsr_kscratch2() gcsr_read(LOONGARCH_CSR_KS2) 99 + #define write_gcsr_kscratch2(val) gcsr_write(val, LOONGARCH_CSR_KS2) 100 + #define read_gcsr_kscratch3() gcsr_read(LOONGARCH_CSR_KS3) 101 + #define write_gcsr_kscratch3(val) gcsr_write(val, LOONGARCH_CSR_KS3) 102 + #define read_gcsr_kscratch4() gcsr_read(LOONGARCH_CSR_KS4) 103 + #define write_gcsr_kscratch4(val) gcsr_write(val, LOONGARCH_CSR_KS4) 104 + #define read_gcsr_kscratch5() gcsr_read(LOONGARCH_CSR_KS5) 105 + #define write_gcsr_kscratch5(val) gcsr_write(val, LOONGARCH_CSR_KS5) 106 + #define read_gcsr_kscratch6() gcsr_read(LOONGARCH_CSR_KS6) 107 + #define write_gcsr_kscratch6(val) gcsr_write(val, LOONGARCH_CSR_KS6) 108 + #define read_gcsr_kscratch7() gcsr_read(LOONGARCH_CSR_KS7) 109 + #define write_gcsr_kscratch7(val) gcsr_write(val, LOONGARCH_CSR_KS7) 110 + 111 + #define read_gcsr_timerid() gcsr_read(LOONGARCH_CSR_TMID) 112 + #define write_gcsr_timerid(val) gcsr_write(val, LOONGARCH_CSR_TMID) 113 + #define read_gcsr_timercfg() gcsr_read(LOONGARCH_CSR_TCFG) 114 + #define write_gcsr_timercfg(val) gcsr_write(val, LOONGARCH_CSR_TCFG) 115 + #define read_gcsr_timertick() gcsr_read(LOONGARCH_CSR_TVAL) 116 + #define write_gcsr_timertick(val) gcsr_write(val, LOONGARCH_CSR_TVAL) 117 + #define read_gcsr_timeroffset() gcsr_read(LOONGARCH_CSR_CNTC) 118 + #define write_gcsr_timeroffset(val) gcsr_write(val, LOONGARCH_CSR_CNTC) 119 + 120 + #define read_gcsr_llbctl() gcsr_read(LOONGARCH_CSR_LLBCTL) 121 + #define write_gcsr_llbctl(val) gcsr_write(val, LOONGARCH_CSR_LLBCTL) 122 + 123 + #define read_gcsr_tlbidx() gcsr_read(LOONGARCH_CSR_TLBIDX) 124 + #define write_gcsr_tlbidx(val) gcsr_write(val, LOONGARCH_CSR_TLBIDX) 125 + #define read_gcsr_tlbrentry() gcsr_read(LOONGARCH_CSR_TLBRENTRY) 126 + #define write_gcsr_tlbrentry(val) gcsr_write(val, LOONGARCH_CSR_TLBRENTRY) 127 + #define read_gcsr_tlbrbadv() gcsr_read(LOONGARCH_CSR_TLBRBADV) 128 + #define write_gcsr_tlbrbadv(val) gcsr_write(val, LOONGARCH_CSR_TLBRBADV) 129 + #define read_gcsr_tlbrera() gcsr_read(LOONGARCH_CSR_TLBRERA) 130 + #define write_gcsr_tlbrera(val) gcsr_write(val, LOONGARCH_CSR_TLBRERA) 131 + #define read_gcsr_tlbrsave() gcsr_read(LOONGARCH_CSR_TLBRSAVE) 132 + #define write_gcsr_tlbrsave(val) gcsr_write(val, LOONGARCH_CSR_TLBRSAVE) 133 + #define read_gcsr_tlbrelo0() gcsr_read(LOONGARCH_CSR_TLBRELO0) 134 + #define write_gcsr_tlbrelo0(val) gcsr_write(val, LOONGARCH_CSR_TLBRELO0) 135 + #define read_gcsr_tlbrelo1() gcsr_read(LOONGARCH_CSR_TLBRELO1) 136 + #define write_gcsr_tlbrelo1(val) gcsr_write(val, LOONGARCH_CSR_TLBRELO1) 137 + #define read_gcsr_tlbrehi() gcsr_read(LOONGARCH_CSR_TLBREHI) 138 + #define write_gcsr_tlbrehi(val) gcsr_write(val, LOONGARCH_CSR_TLBREHI) 139 
+ #define read_gcsr_tlbrprmd() gcsr_read(LOONGARCH_CSR_TLBRPRMD) 140 + #define write_gcsr_tlbrprmd(val) gcsr_write(val, LOONGARCH_CSR_TLBRPRMD) 141 + 142 + #define read_gcsr_directwin0() gcsr_read(LOONGARCH_CSR_DMWIN0) 143 + #define write_gcsr_directwin0(val) gcsr_write(val, LOONGARCH_CSR_DMWIN0) 144 + #define read_gcsr_directwin1() gcsr_read(LOONGARCH_CSR_DMWIN1) 145 + #define write_gcsr_directwin1(val) gcsr_write(val, LOONGARCH_CSR_DMWIN1) 146 + #define read_gcsr_directwin2() gcsr_read(LOONGARCH_CSR_DMWIN2) 147 + #define write_gcsr_directwin2(val) gcsr_write(val, LOONGARCH_CSR_DMWIN2) 148 + #define read_gcsr_directwin3() gcsr_read(LOONGARCH_CSR_DMWIN3) 149 + #define write_gcsr_directwin3(val) gcsr_write(val, LOONGARCH_CSR_DMWIN3) 150 + 151 + /* Guest related CSRs */ 152 + #define read_csr_gtlbc() csr_read64(LOONGARCH_CSR_GTLBC) 153 + #define write_csr_gtlbc(val) csr_write64(val, LOONGARCH_CSR_GTLBC) 154 + #define read_csr_trgp() csr_read64(LOONGARCH_CSR_TRGP) 155 + #define read_csr_gcfg() csr_read64(LOONGARCH_CSR_GCFG) 156 + #define write_csr_gcfg(val) csr_write64(val, LOONGARCH_CSR_GCFG) 157 + #define read_csr_gstat() csr_read64(LOONGARCH_CSR_GSTAT) 158 + #define write_csr_gstat(val) csr_write64(val, LOONGARCH_CSR_GSTAT) 159 + #define read_csr_gintc() csr_read64(LOONGARCH_CSR_GINTC) 160 + #define write_csr_gintc(val) csr_write64(val, LOONGARCH_CSR_GINTC) 161 + #define read_csr_gcntc() csr_read64(LOONGARCH_CSR_GCNTC) 162 + #define write_csr_gcntc(val) csr_write64(val, LOONGARCH_CSR_GCNTC) 163 + 164 + #define __BUILD_GCSR_OP(name) __BUILD_CSR_COMMON(gcsr_##name) 165 + 166 + __BUILD_CSR_OP(gcfg) 167 + __BUILD_CSR_OP(gstat) 168 + __BUILD_CSR_OP(gtlbc) 169 + __BUILD_CSR_OP(gintc) 170 + __BUILD_GCSR_OP(llbctl) 171 + __BUILD_GCSR_OP(tlbidx) 172 + 173 + #define set_gcsr_estat(val) \ 174 + gcsr_xchg(val, val, LOONGARCH_CSR_ESTAT) 175 + #define clear_gcsr_estat(val) \ 176 + gcsr_xchg(~(val), val, LOONGARCH_CSR_ESTAT) 177 + 178 + #define kvm_read_hw_gcsr(id) gcsr_read(id) 179 + #define kvm_write_hw_gcsr(id, val) gcsr_write(val, id) 180 + 181 + #define kvm_save_hw_gcsr(csr, gid) (csr->csrs[gid] = gcsr_read(gid)) 182 + #define kvm_restore_hw_gcsr(csr, gid) (gcsr_write(csr->csrs[gid], gid)) 183 + 184 + int kvm_emu_iocsr(larch_inst inst, struct kvm_run *run, struct kvm_vcpu *vcpu); 185 + 186 + static __always_inline unsigned long kvm_read_sw_gcsr(struct loongarch_csrs *csr, int gid) 187 + { 188 + return csr->csrs[gid]; 189 + } 190 + 191 + static __always_inline void kvm_write_sw_gcsr(struct loongarch_csrs *csr, int gid, unsigned long val) 192 + { 193 + csr->csrs[gid] = val; 194 + } 195 + 196 + static __always_inline void kvm_set_sw_gcsr(struct loongarch_csrs *csr, 197 + int gid, unsigned long val) 198 + { 199 + csr->csrs[gid] |= val; 200 + } 201 + 202 + static __always_inline void kvm_change_sw_gcsr(struct loongarch_csrs *csr, 203 + int gid, unsigned long mask, unsigned long val) 204 + { 205 + unsigned long _mask = mask; 206 + 207 + csr->csrs[gid] &= ~_mask; 208 + csr->csrs[gid] |= val & _mask; 209 + } 210 + 211 + #endif /* __ASM_LOONGARCH_KVM_CSR_H__ */
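A usage sketch of the accessors above, illustration only: hardware-backed guest CSRs are mirrored into the loongarch_csrs array on save and written back on restore, e.g. around a vCPU context switch. LOONGARCH_CSR_ESTAT and LOONGARCH_CSR_ERA come from asm/loongarch.h.

#include <asm/kvm_csr.h>

static void demo_gcsr_roundtrip(struct loongarch_csrs *csr)
{
	/* Hardware guest CSR -> software copy, e.g. on vcpu_put() */
	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_ESTAT);
	kvm_save_hw_gcsr(csr, LOONGARCH_CSR_ERA);

	/* Software copy -> hardware guest CSR, e.g. on vcpu_load() */
	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_ESTAT);
	kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_ERA);
}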
+237
arch/loongarch/include/asm/kvm_host.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + /* 3 + * Copyright (C) 2020-2023 Loongson Technology Corporation Limited 4 + */ 5 + 6 + #ifndef __ASM_LOONGARCH_KVM_HOST_H__ 7 + #define __ASM_LOONGARCH_KVM_HOST_H__ 8 + 9 + #include <linux/cpumask.h> 10 + #include <linux/hrtimer.h> 11 + #include <linux/interrupt.h> 12 + #include <linux/kvm.h> 13 + #include <linux/kvm_types.h> 14 + #include <linux/mutex.h> 15 + #include <linux/spinlock.h> 16 + #include <linux/threads.h> 17 + #include <linux/types.h> 18 + 19 + #include <asm/inst.h> 20 + #include <asm/kvm_mmu.h> 21 + #include <asm/loongarch.h> 22 + 23 + /* Loongarch KVM register ids */ 24 + #define KVM_GET_IOC_CSR_IDX(id) ((id & KVM_CSR_IDX_MASK) >> LOONGARCH_REG_SHIFT) 25 + #define KVM_GET_IOC_CPUCFG_IDX(id) ((id & KVM_CPUCFG_IDX_MASK) >> LOONGARCH_REG_SHIFT) 26 + 27 + #define KVM_MAX_VCPUS 256 28 + #define KVM_MAX_CPUCFG_REGS 21 29 + /* memory slots that does not exposed to userspace */ 30 + #define KVM_PRIVATE_MEM_SLOTS 0 31 + 32 + #define KVM_HALT_POLL_NS_DEFAULT 500000 33 + 34 + struct kvm_vm_stat { 35 + struct kvm_vm_stat_generic generic; 36 + u64 pages; 37 + u64 hugepages; 38 + }; 39 + 40 + struct kvm_vcpu_stat { 41 + struct kvm_vcpu_stat_generic generic; 42 + u64 int_exits; 43 + u64 idle_exits; 44 + u64 cpucfg_exits; 45 + u64 signal_exits; 46 + }; 47 + 48 + struct kvm_arch_memory_slot { 49 + }; 50 + 51 + struct kvm_context { 52 + unsigned long vpid_cache; 53 + struct kvm_vcpu *last_vcpu; 54 + }; 55 + 56 + struct kvm_world_switch { 57 + int (*exc_entry)(void); 58 + int (*enter_guest)(struct kvm_run *run, struct kvm_vcpu *vcpu); 59 + unsigned long page_order; 60 + }; 61 + 62 + #define MAX_PGTABLE_LEVELS 4 63 + 64 + struct kvm_arch { 65 + /* Guest physical mm */ 66 + kvm_pte_t *pgd; 67 + unsigned long gpa_size; 68 + unsigned long invalid_ptes[MAX_PGTABLE_LEVELS]; 69 + unsigned int pte_shifts[MAX_PGTABLE_LEVELS]; 70 + unsigned int root_level; 71 + 72 + s64 time_offset; 73 + struct kvm_context __percpu *vmcs; 74 + }; 75 + 76 + #define CSR_MAX_NUMS 0x800 77 + 78 + struct loongarch_csrs { 79 + unsigned long csrs[CSR_MAX_NUMS]; 80 + }; 81 + 82 + /* Resume Flags */ 83 + #define RESUME_HOST 0 84 + #define RESUME_GUEST 1 85 + 86 + enum emulation_result { 87 + EMULATE_DONE, /* no further processing */ 88 + EMULATE_DO_MMIO, /* kvm_run filled with MMIO request */ 89 + EMULATE_DO_IOCSR, /* handle IOCSR request */ 90 + EMULATE_FAIL, /* can't emulate this instruction */ 91 + EMULATE_EXCEPT, /* A guest exception has been generated */ 92 + }; 93 + 94 + #define KVM_LARCH_FPU (0x1 << 0) 95 + #define KVM_LARCH_SWCSR_LATEST (0x1 << 1) 96 + #define KVM_LARCH_HWCSR_USABLE (0x1 << 2) 97 + 98 + struct kvm_vcpu_arch { 99 + /* 100 + * Switch pointer-to-function type to unsigned long 101 + * for loading the value into register directly. 
102 + */ 103 + unsigned long host_eentry; 104 + unsigned long guest_eentry; 105 + 106 + /* Pointers stored here for easy accessing from assembly code */ 107 + int (*handle_exit)(struct kvm_run *run, struct kvm_vcpu *vcpu); 108 + 109 + /* Host registers preserved across guest mode execution */ 110 + unsigned long host_sp; 111 + unsigned long host_tp; 112 + unsigned long host_pgd; 113 + 114 + /* Host CSRs are used when handling exits from guest */ 115 + unsigned long badi; 116 + unsigned long badv; 117 + unsigned long host_ecfg; 118 + unsigned long host_estat; 119 + unsigned long host_percpu; 120 + 121 + /* GPRs */ 122 + unsigned long gprs[32]; 123 + unsigned long pc; 124 + 125 + /* Which auxiliary state is loaded (KVM_LARCH_*) */ 126 + unsigned int aux_inuse; 127 + 128 + /* FPU state */ 129 + struct loongarch_fpu fpu FPU_ALIGN; 130 + 131 + /* CSR state */ 132 + struct loongarch_csrs *csr; 133 + 134 + /* GPR used as IO source/target */ 135 + u32 io_gpr; 136 + 137 + /* KVM register to control count timer */ 138 + u32 count_ctl; 139 + struct hrtimer swtimer; 140 + 141 + /* Bitmask of intr that are pending */ 142 + unsigned long irq_pending; 143 + /* Bitmask of pending intr to be cleared */ 144 + unsigned long irq_clear; 145 + 146 + /* Bitmask of exceptions that are pending */ 147 + unsigned long exception_pending; 148 + unsigned int esubcode; 149 + 150 + /* Cache for pages needed inside spinlock regions */ 151 + struct kvm_mmu_memory_cache mmu_page_cache; 152 + 153 + /* vcpu's vpid */ 154 + u64 vpid; 155 + 156 + /* Frequency of stable timer in Hz */ 157 + u64 timer_mhz; 158 + ktime_t expire; 159 + 160 + /* Last CPU the vCPU state was loaded on */ 161 + int last_sched_cpu; 162 + /* mp state */ 163 + struct kvm_mp_state mp_state; 164 + /* cpucfg */ 165 + u32 cpucfg[KVM_MAX_CPUCFG_REGS]; 166 + }; 167 + 168 + static inline unsigned long readl_sw_gcsr(struct loongarch_csrs *csr, int reg) 169 + { 170 + return csr->csrs[reg]; 171 + } 172 + 173 + static inline void writel_sw_gcsr(struct loongarch_csrs *csr, int reg, unsigned long val) 174 + { 175 + csr->csrs[reg] = val; 176 + } 177 + 178 + /* Debug: dump vcpu state */ 179 + int kvm_arch_vcpu_dump_regs(struct kvm_vcpu *vcpu); 180 + 181 + /* MMU handling */ 182 + void kvm_flush_tlb_all(void); 183 + void kvm_flush_tlb_gpa(struct kvm_vcpu *vcpu, unsigned long gpa); 184 + int kvm_handle_mm_fault(struct kvm_vcpu *vcpu, unsigned long badv, bool write); 185 + 186 + #define KVM_ARCH_WANT_MMU_NOTIFIER 187 + void kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte); 188 + int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start, unsigned long end, bool blockable); 189 + int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end); 190 + int kvm_test_age_hva(struct kvm *kvm, unsigned long hva); 191 + 192 + static inline void update_pc(struct kvm_vcpu_arch *arch) 193 + { 194 + arch->pc += 4; 195 + } 196 + 197 + /* 198 + * kvm_is_ifetch_fault() - Find whether a TLBL exception is due to ifetch fault. 199 + * @vcpu: Virtual CPU. 200 + * 201 + * Returns: Whether the TLBL exception was likely due to an instruction 202 + * fetch fault rather than a data load fault. 
203 + */ 204 + static inline bool kvm_is_ifetch_fault(struct kvm_vcpu_arch *arch) 205 + { 206 + return arch->pc == arch->badv; 207 + } 208 + 209 + /* Misc */ 210 + static inline void kvm_arch_hardware_unsetup(void) {} 211 + static inline void kvm_arch_sync_events(struct kvm *kvm) {} 212 + static inline void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen) {} 213 + static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {} 214 + static inline void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu) {} 215 + static inline void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu) {} 216 + static inline void kvm_arch_vcpu_block_finish(struct kvm_vcpu *vcpu) {} 217 + static inline void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *slot) {} 218 + void kvm_check_vpid(struct kvm_vcpu *vcpu); 219 + enum hrtimer_restart kvm_swtimer_wakeup(struct hrtimer *timer); 220 + void kvm_arch_flush_remote_tlbs_memslot(struct kvm *kvm, const struct kvm_memory_slot *memslot); 221 + void kvm_init_vmcs(struct kvm *kvm); 222 + void kvm_exc_entry(void); 223 + int kvm_enter_guest(struct kvm_run *run, struct kvm_vcpu *vcpu); 224 + 225 + extern unsigned long vpid_mask; 226 + extern const unsigned long kvm_exception_size; 227 + extern const unsigned long kvm_enter_guest_size; 228 + extern struct kvm_world_switch *kvm_loongarch_ops; 229 + 230 + #define SW_GCSR (1 << 0) 231 + #define HW_GCSR (1 << 1) 232 + #define INVALID_GCSR (1 << 2) 233 + 234 + int get_gcsr_flag(int csr); 235 + void set_hw_gcsr(int csr_id, unsigned long val); 236 + 237 + #endif /* __ASM_LOONGARCH_KVM_HOST_H__ */
+139
arch/loongarch/include/asm/kvm_mmu.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + /* 3 + * Copyright (C) 2020-2023 Loongson Technology Corporation Limited 4 + */ 5 + 6 + #ifndef __ASM_LOONGARCH_KVM_MMU_H__ 7 + #define __ASM_LOONGARCH_KVM_MMU_H__ 8 + 9 + #include <linux/kvm_host.h> 10 + #include <asm/pgalloc.h> 11 + #include <asm/tlb.h> 12 + 13 + /* 14 + * KVM_MMU_CACHE_MIN_PAGES is the number of GPA page table translation levels 15 + * for which pages need to be cached. 16 + */ 17 + #define KVM_MMU_CACHE_MIN_PAGES (CONFIG_PGTABLE_LEVELS - 1) 18 + 19 + #define _KVM_FLUSH_PGTABLE 0x1 20 + #define _KVM_HAS_PGMASK 0x2 21 + #define kvm_pfn_pte(pfn, prot) (((pfn) << PFN_PTE_SHIFT) | pgprot_val(prot)) 22 + #define kvm_pte_pfn(x) ((phys_addr_t)((x & _PFN_MASK) >> PFN_PTE_SHIFT)) 23 + 24 + typedef unsigned long kvm_pte_t; 25 + typedef struct kvm_ptw_ctx kvm_ptw_ctx; 26 + typedef int (*kvm_pte_ops)(kvm_pte_t *pte, phys_addr_t addr, kvm_ptw_ctx *ctx); 27 + 28 + struct kvm_ptw_ctx { 29 + kvm_pte_ops ops; 30 + unsigned long flag; 31 + 32 + /* for kvm_arch_mmu_enable_log_dirty_pt_masked use */ 33 + unsigned long mask; 34 + unsigned long gfn; 35 + 36 + /* page walk mmu info */ 37 + unsigned int level; 38 + unsigned long pgtable_shift; 39 + unsigned long invalid_entry; 40 + unsigned long *invalid_ptes; 41 + unsigned int *pte_shifts; 42 + void *opaque; 43 + 44 + /* free pte table page list */ 45 + struct list_head list; 46 + }; 47 + 48 + kvm_pte_t *kvm_pgd_alloc(void); 49 + 50 + static inline void kvm_set_pte(kvm_pte_t *ptep, kvm_pte_t val) 51 + { 52 + WRITE_ONCE(*ptep, val); 53 + } 54 + 55 + static inline int kvm_pte_write(kvm_pte_t pte) { return pte & _PAGE_WRITE; } 56 + static inline int kvm_pte_dirty(kvm_pte_t pte) { return pte & _PAGE_DIRTY; } 57 + static inline int kvm_pte_young(kvm_pte_t pte) { return pte & _PAGE_ACCESSED; } 58 + static inline int kvm_pte_huge(kvm_pte_t pte) { return pte & _PAGE_HUGE; } 59 + 60 + static inline kvm_pte_t kvm_pte_mkyoung(kvm_pte_t pte) 61 + { 62 + return pte | _PAGE_ACCESSED; 63 + } 64 + 65 + static inline kvm_pte_t kvm_pte_mkold(kvm_pte_t pte) 66 + { 67 + return pte & ~_PAGE_ACCESSED; 68 + } 69 + 70 + static inline kvm_pte_t kvm_pte_mkdirty(kvm_pte_t pte) 71 + { 72 + return pte | _PAGE_DIRTY; 73 + } 74 + 75 + static inline kvm_pte_t kvm_pte_mkclean(kvm_pte_t pte) 76 + { 77 + return pte & ~_PAGE_DIRTY; 78 + } 79 + 80 + static inline kvm_pte_t kvm_pte_mkhuge(kvm_pte_t pte) 81 + { 82 + return pte | _PAGE_HUGE; 83 + } 84 + 85 + static inline kvm_pte_t kvm_pte_mksmall(kvm_pte_t pte) 86 + { 87 + return pte & ~_PAGE_HUGE; 88 + } 89 + 90 + static inline int kvm_need_flush(kvm_ptw_ctx *ctx) 91 + { 92 + return ctx->flag & _KVM_FLUSH_PGTABLE; 93 + } 94 + 95 + static inline kvm_pte_t *kvm_pgtable_offset(kvm_ptw_ctx *ctx, kvm_pte_t *table, 96 + phys_addr_t addr) 97 + { 98 + 99 + return table + ((addr >> ctx->pgtable_shift) & (PTRS_PER_PTE - 1)); 100 + } 101 + 102 + static inline phys_addr_t kvm_pgtable_addr_end(kvm_ptw_ctx *ctx, 103 + phys_addr_t addr, phys_addr_t end) 104 + { 105 + phys_addr_t boundary, size; 106 + 107 + size = 0x1UL << ctx->pgtable_shift; 108 + boundary = (addr + size) & ~(size - 1); 109 + return (boundary - 1 < end - 1) ? 
boundary : end; 110 + } 111 + 112 + static inline int kvm_pte_present(kvm_ptw_ctx *ctx, kvm_pte_t *entry) 113 + { 114 + if (!ctx || ctx->level == 0) 115 + return !!(*entry & _PAGE_PRESENT); 116 + 117 + return *entry != ctx->invalid_entry; 118 + } 119 + 120 + static inline int kvm_pte_none(kvm_ptw_ctx *ctx, kvm_pte_t *entry) 121 + { 122 + return *entry == ctx->invalid_entry; 123 + } 124 + 125 + static inline void kvm_ptw_enter(kvm_ptw_ctx *ctx) 126 + { 127 + ctx->level--; 128 + ctx->pgtable_shift = ctx->pte_shifts[ctx->level]; 129 + ctx->invalid_entry = ctx->invalid_ptes[ctx->level]; 130 + } 131 + 132 + static inline void kvm_ptw_exit(kvm_ptw_ctx *ctx) 133 + { 134 + ctx->level++; 135 + ctx->pgtable_shift = ctx->pte_shifts[ctx->level]; 136 + ctx->invalid_entry = ctx->invalid_ptes[ctx->level]; 137 + } 138 + 139 + #endif /* __ASM_LOONGARCH_KVM_MMU_H__ */
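The kvm_ptw_ctx helpers above are the building blocks of a recursive guest page-table walker; the real walker lives in arch/loongarch/kvm/mmu.c. A simplified sketch of the walking pattern, illustration only:

#include <asm/kvm_mmu.h>

static int walk_one_level(kvm_pte_t *table, phys_addr_t addr, phys_addr_t end,
			  kvm_ptw_ctx *ctx)
{
	phys_addr_t next;
	kvm_pte_t *entry;

	kvm_ptw_enter(ctx);	/* descend: shift and invalid entry track the level */
	do {
		next = kvm_pgtable_addr_end(ctx, addr, end);
		entry = kvm_pgtable_offset(ctx, table, addr);
		if (kvm_pte_present(ctx, entry))
			ctx->ops(entry, addr, ctx);	/* or recurse if it is a table */
	} while (addr = next, addr < end);
	kvm_ptw_exit(ctx);	/* back up to the caller's level */

	return 0;
}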
+11
arch/loongarch/include/asm/kvm_types.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + /* 3 + * Copyright (C) 2020-2023 Loongson Technology Corporation Limited 4 + */ 5 + 6 + #ifndef _ASM_LOONGARCH_KVM_TYPES_H 7 + #define _ASM_LOONGARCH_KVM_TYPES_H 8 + 9 + #define KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE 40 10 + 11 + #endif /* _ASM_LOONGARCH_KVM_TYPES_H */
+93
arch/loongarch/include/asm/kvm_vcpu.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + /* 3 + * Copyright (C) 2020-2023 Loongson Technology Corporation Limited 4 + */ 5 + 6 + #ifndef __ASM_LOONGARCH_KVM_VCPU_H__ 7 + #define __ASM_LOONGARCH_KVM_VCPU_H__ 8 + 9 + #include <linux/kvm_host.h> 10 + #include <asm/loongarch.h> 11 + 12 + /* Controlled by 0x5 guest estat */ 13 + #define CPU_SIP0 (_ULCAST_(1)) 14 + #define CPU_SIP1 (_ULCAST_(1) << 1) 15 + #define CPU_PMU (_ULCAST_(1) << 10) 16 + #define CPU_TIMER (_ULCAST_(1) << 11) 17 + #define CPU_IPI (_ULCAST_(1) << 12) 18 + 19 + /* Controlled by 0x52 guest exception VIP aligned to estat bit 5~12 */ 20 + #define CPU_IP0 (_ULCAST_(1)) 21 + #define CPU_IP1 (_ULCAST_(1) << 1) 22 + #define CPU_IP2 (_ULCAST_(1) << 2) 23 + #define CPU_IP3 (_ULCAST_(1) << 3) 24 + #define CPU_IP4 (_ULCAST_(1) << 4) 25 + #define CPU_IP5 (_ULCAST_(1) << 5) 26 + #define CPU_IP6 (_ULCAST_(1) << 6) 27 + #define CPU_IP7 (_ULCAST_(1) << 7) 28 + 29 + #define MNSEC_PER_SEC (NSEC_PER_SEC >> 20) 30 + 31 + /* KVM_IRQ_LINE irq field index values */ 32 + #define KVM_LOONGSON_IRQ_TYPE_SHIFT 24 33 + #define KVM_LOONGSON_IRQ_TYPE_MASK 0xff 34 + #define KVM_LOONGSON_IRQ_VCPU_SHIFT 16 35 + #define KVM_LOONGSON_IRQ_VCPU_MASK 0xff 36 + #define KVM_LOONGSON_IRQ_NUM_SHIFT 0 37 + #define KVM_LOONGSON_IRQ_NUM_MASK 0xffff 38 + 39 + typedef union loongarch_instruction larch_inst; 40 + typedef int (*exit_handle_fn)(struct kvm_vcpu *); 41 + 42 + int kvm_emu_mmio_read(struct kvm_vcpu *vcpu, larch_inst inst); 43 + int kvm_emu_mmio_write(struct kvm_vcpu *vcpu, larch_inst inst); 44 + int kvm_complete_mmio_read(struct kvm_vcpu *vcpu, struct kvm_run *run); 45 + int kvm_complete_iocsr_read(struct kvm_vcpu *vcpu, struct kvm_run *run); 46 + int kvm_emu_idle(struct kvm_vcpu *vcpu); 47 + int kvm_pending_timer(struct kvm_vcpu *vcpu); 48 + int kvm_handle_fault(struct kvm_vcpu *vcpu, int fault); 49 + void kvm_deliver_intr(struct kvm_vcpu *vcpu); 50 + void kvm_deliver_exception(struct kvm_vcpu *vcpu); 51 + 52 + void kvm_own_fpu(struct kvm_vcpu *vcpu); 53 + void kvm_lose_fpu(struct kvm_vcpu *vcpu); 54 + void kvm_save_fpu(struct loongarch_fpu *fpu); 55 + void kvm_restore_fpu(struct loongarch_fpu *fpu); 56 + void kvm_restore_fcsr(struct loongarch_fpu *fpu); 57 + 58 + void kvm_acquire_timer(struct kvm_vcpu *vcpu); 59 + void kvm_init_timer(struct kvm_vcpu *vcpu, unsigned long hz); 60 + void kvm_reset_timer(struct kvm_vcpu *vcpu); 61 + void kvm_save_timer(struct kvm_vcpu *vcpu); 62 + void kvm_restore_timer(struct kvm_vcpu *vcpu); 63 + 64 + int kvm_vcpu_ioctl_interrupt(struct kvm_vcpu *vcpu, struct kvm_interrupt *irq); 65 + 66 + /* 67 + * Loongarch KVM guest interrupt handling 68 + */ 69 + static inline void kvm_queue_irq(struct kvm_vcpu *vcpu, unsigned int irq) 70 + { 71 + set_bit(irq, &vcpu->arch.irq_pending); 72 + clear_bit(irq, &vcpu->arch.irq_clear); 73 + } 74 + 75 + static inline void kvm_dequeue_irq(struct kvm_vcpu *vcpu, unsigned int irq) 76 + { 77 + clear_bit(irq, &vcpu->arch.irq_pending); 78 + set_bit(irq, &vcpu->arch.irq_clear); 79 + } 80 + 81 + static inline int kvm_queue_exception(struct kvm_vcpu *vcpu, 82 + unsigned int code, unsigned int subcode) 83 + { 84 + /* only one exception can be injected */ 85 + if (!vcpu->arch.exception_pending) { 86 + set_bit(code, &vcpu->arch.exception_pending); 87 + vcpu->arch.esubcode = subcode; 88 + return 0; 89 + } else 90 + return -1; 91 + } 92 + 93 + #endif /* __ASM_LOONGARCH_KVM_VCPU_H__ */
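A small usage sketch of the queueing helpers above, illustration only: at most one exception can be pending, and pending interrupts and exceptions are delivered by kvm_deliver_intr()/kvm_deliver_exception() before the next guest entry. INT_TI and EXCCODE_INE are assumed from asm/loongarch.h.

#include <asm/kvm_vcpu.h>

static int demo_pend_events(struct kvm_vcpu *vcpu)
{
	/* Pend the guest stable-timer interrupt (INT_TI assumed) */
	kvm_queue_irq(vcpu, INT_TI);

	/* Queue an "instruction non-existent" exception; -1 if one is already pending */
	return kvm_queue_exception(vcpu, EXCCODE_INE, 0);
}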
+14 -5
arch/loongarch/include/asm/loongarch.h
··· 226 226 #define LOONGARCH_CSR_ECFG 0x4 /* Exception config */ 227 227 #define CSR_ECFG_VS_SHIFT 16 228 228 #define CSR_ECFG_VS_WIDTH 3 229 + #define CSR_ECFG_VS_SHIFT_END (CSR_ECFG_VS_SHIFT + CSR_ECFG_VS_WIDTH - 1) 229 230 #define CSR_ECFG_VS (_ULCAST_(0x7) << CSR_ECFG_VS_SHIFT) 230 231 #define CSR_ECFG_IM_SHIFT 0 231 232 #define CSR_ECFG_IM_WIDTH 14 ··· 315 314 #define CSR_TLBLO1_V (_ULCAST_(0x1) << CSR_TLBLO1_V_SHIFT) 316 315 317 316 #define LOONGARCH_CSR_GTLBC 0x15 /* Guest TLB control */ 318 - #define CSR_GTLBC_RID_SHIFT 16 319 - #define CSR_GTLBC_RID_WIDTH 8 320 - #define CSR_GTLBC_RID (_ULCAST_(0xff) << CSR_GTLBC_RID_SHIFT) 317 + #define CSR_GTLBC_TGID_SHIFT 16 318 + #define CSR_GTLBC_TGID_WIDTH 8 319 + #define CSR_GTLBC_TGID_SHIFT_END (CSR_GTLBC_TGID_SHIFT + CSR_GTLBC_TGID_WIDTH - 1) 320 + #define CSR_GTLBC_TGID (_ULCAST_(0xff) << CSR_GTLBC_TGID_SHIFT) 321 321 #define CSR_GTLBC_TOTI_SHIFT 13 322 322 #define CSR_GTLBC_TOTI (_ULCAST_(0x1) << CSR_GTLBC_TOTI_SHIFT) 323 - #define CSR_GTLBC_USERID_SHIFT 12 324 - #define CSR_GTLBC_USERID (_ULCAST_(0x1) << CSR_GTLBC_USERID_SHIFT) 323 + #define CSR_GTLBC_USETGID_SHIFT 12 324 + #define CSR_GTLBC_USETGID (_ULCAST_(0x1) << CSR_GTLBC_USETGID_SHIFT) 325 325 #define CSR_GTLBC_GMTLBSZ_SHIFT 0 326 326 #define CSR_GTLBC_GMTLBSZ_WIDTH 6 327 327 #define CSR_GTLBC_GMTLBSZ (_ULCAST_(0x3f) << CSR_GTLBC_GMTLBSZ_SHIFT) ··· 477 475 #define LOONGARCH_CSR_GSTAT 0x50 /* Guest status */ 478 476 #define CSR_GSTAT_GID_SHIFT 16 479 477 #define CSR_GSTAT_GID_WIDTH 8 478 + #define CSR_GSTAT_GID_SHIFT_END (CSR_GSTAT_GID_SHIFT + CSR_GSTAT_GID_WIDTH - 1) 480 479 #define CSR_GSTAT_GID (_ULCAST_(0xff) << CSR_GSTAT_GID_SHIFT) 481 480 #define CSR_GSTAT_GIDBIT_SHIFT 4 482 481 #define CSR_GSTAT_GIDBIT_WIDTH 6 ··· 528 525 #define CSR_GCFG_MATC_GUEST (_ULCAST_(0x0) << CSR_GCFG_MATC_SHITF) 529 526 #define CSR_GCFG_MATC_ROOT (_ULCAST_(0x1) << CSR_GCFG_MATC_SHITF) 530 527 #define CSR_GCFG_MATC_NEST (_ULCAST_(0x2) << CSR_GCFG_MATC_SHITF) 528 + #define CSR_GCFG_MATP_NEST_SHIFT 2 529 + #define CSR_GCFG_MATP_NEST (_ULCAST_(0x1) << CSR_GCFG_MATP_NEST_SHIFT) 530 + #define CSR_GCFG_MATP_ROOT_SHIFT 1 531 + #define CSR_GCFG_MATP_ROOT (_ULCAST_(0x1) << CSR_GCFG_MATP_ROOT_SHIFT) 532 + #define CSR_GCFG_MATP_GUEST_SHIFT 0 533 + #define CSR_GCFG_MATP_GUEST (_ULCAST_(0x1) << CSR_GCFG_MATP_GUEST_SHIFT) 531 534 532 535 #define LOONGARCH_CSR_GINTC 0x52 /* Guest interrupt control */ 533 536 #define CSR_GINTC_HC_SHIFT 16
+108
arch/loongarch/include/uapi/asm/kvm.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */ 2 + /* 3 + * Copyright (C) 2020-2023 Loongson Technology Corporation Limited 4 + */ 5 + 6 + #ifndef __UAPI_ASM_LOONGARCH_KVM_H 7 + #define __UAPI_ASM_LOONGARCH_KVM_H 8 + 9 + #include <linux/types.h> 10 + 11 + /* 12 + * KVM LoongArch specific structures and definitions. 13 + * 14 + * Some parts derived from the x86 version of this file. 15 + */ 16 + 17 + #define __KVM_HAVE_READONLY_MEM 18 + 19 + #define KVM_COALESCED_MMIO_PAGE_OFFSET 1 20 + #define KVM_DIRTY_LOG_PAGE_OFFSET 64 21 + 22 + /* 23 + * for KVM_GET_REGS and KVM_SET_REGS 24 + */ 25 + struct kvm_regs { 26 + /* out (KVM_GET_REGS) / in (KVM_SET_REGS) */ 27 + __u64 gpr[32]; 28 + __u64 pc; 29 + }; 30 + 31 + /* 32 + * for KVM_GET_FPU and KVM_SET_FPU 33 + */ 34 + struct kvm_fpu { 35 + __u32 fcsr; 36 + __u64 fcc; /* 8x8 */ 37 + struct kvm_fpureg { 38 + __u64 val64[4]; 39 + } fpr[32]; 40 + }; 41 + 42 + /* 43 + * For LoongArch, we use KVM_SET_ONE_REG and KVM_GET_ONE_REG to access various 44 + * registers. The id field is broken down as follows: 45 + * 46 + * bits[63..52] - As per linux/kvm.h 47 + * bits[51..32] - Must be zero. 48 + * bits[31..16] - Register set. 49 + * 50 + * Register set = 0: GP registers from kvm_regs (see definitions below). 51 + * 52 + * Register set = 1: CSR registers. 53 + * 54 + * Register set = 2: KVM specific registers (see definitions below). 55 + * 56 + * Register set = 3: FPU / SIMD registers (see definitions below). 57 + * 58 + * Other sets registers may be added in the future. Each set would 59 + * have its own identifier in bits[31..16]. 60 + */ 61 + 62 + #define KVM_REG_LOONGARCH_GPR (KVM_REG_LOONGARCH | 0x00000ULL) 63 + #define KVM_REG_LOONGARCH_CSR (KVM_REG_LOONGARCH | 0x10000ULL) 64 + #define KVM_REG_LOONGARCH_KVM (KVM_REG_LOONGARCH | 0x20000ULL) 65 + #define KVM_REG_LOONGARCH_FPSIMD (KVM_REG_LOONGARCH | 0x30000ULL) 66 + #define KVM_REG_LOONGARCH_CPUCFG (KVM_REG_LOONGARCH | 0x40000ULL) 67 + #define KVM_REG_LOONGARCH_MASK (KVM_REG_LOONGARCH | 0x70000ULL) 68 + #define KVM_CSR_IDX_MASK 0x7fff 69 + #define KVM_CPUCFG_IDX_MASK 0x7fff 70 + 71 + /* 72 + * KVM_REG_LOONGARCH_KVM - KVM specific control registers. 73 + */ 74 + 75 + #define KVM_REG_LOONGARCH_COUNTER (KVM_REG_LOONGARCH_KVM | KVM_REG_SIZE_U64 | 1) 76 + #define KVM_REG_LOONGARCH_VCPU_RESET (KVM_REG_LOONGARCH_KVM | KVM_REG_SIZE_U64 | 2) 77 + 78 + #define LOONGARCH_REG_SHIFT 3 79 + #define LOONGARCH_REG_64(TYPE, REG) (TYPE | KVM_REG_SIZE_U64 | (REG << LOONGARCH_REG_SHIFT)) 80 + #define KVM_IOC_CSRID(REG) LOONGARCH_REG_64(KVM_REG_LOONGARCH_CSR, REG) 81 + #define KVM_IOC_CPUCFG(REG) LOONGARCH_REG_64(KVM_REG_LOONGARCH_CPUCFG, REG) 82 + 83 + struct kvm_debug_exit_arch { 84 + }; 85 + 86 + /* for KVM_SET_GUEST_DEBUG */ 87 + struct kvm_guest_debug_arch { 88 + }; 89 + 90 + /* definition of registers in kvm_run */ 91 + struct kvm_sync_regs { 92 + }; 93 + 94 + /* dummy definition */ 95 + struct kvm_sregs { 96 + }; 97 + 98 + struct kvm_iocsr_entry { 99 + __u32 addr; 100 + __u32 pad; 101 + __u64 data; 102 + }; 103 + 104 + #define KVM_NR_IRQCHIPS 1 105 + #define KVM_IRQCHIP_NUM_PINS 64 106 + #define KVM_MAX_CORES 256 107 + 108 + #endif /* __UAPI_ASM_LOONGARCH_KVM_H */
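A userspace sketch of the register id layout above, illustration only: build a CSR id with KVM_IOC_CSRID() and read it through the generic ONE_REG interface. The CSR number 0x0 (CRMD) is assumed from asm/loongarch.h; struct kvm_one_reg and KVM_GET_ONE_REG come from <linux/kvm.h>.

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

#define DEMO_LOONGARCH_CSR_CRMD 0x0	/* assumed CSR number, see asm/loongarch.h */

static int read_guest_crmd(int vcpu_fd, uint64_t *val)
{
	struct kvm_one_reg reg = {
		.id = KVM_IOC_CSRID(DEMO_LOONGARCH_CSR_CRMD),
		.addr = (uint64_t)(uintptr_t)val,
	};

	return ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg);
}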
+32
arch/loongarch/kernel/asm-offsets.c
··· 9 9 #include <linux/mm.h> 10 10 #include <linux/kbuild.h> 11 11 #include <linux/suspend.h> 12 + #include <linux/kvm_host.h> 12 13 #include <asm/cpu-info.h> 13 14 #include <asm/ptrace.h> 14 15 #include <asm/processor.h> ··· 290 289 BLANK(); 291 290 } 292 291 #endif 292 + 293 + void output_kvm_defines(void) 294 + { 295 + COMMENT("KVM/LoongArch Specific offsets."); 296 + 297 + OFFSET(VCPU_FCC, kvm_vcpu_arch, fpu.fcc); 298 + OFFSET(VCPU_FCSR0, kvm_vcpu_arch, fpu.fcsr); 299 + BLANK(); 300 + 301 + OFFSET(KVM_VCPU_ARCH, kvm_vcpu, arch); 302 + OFFSET(KVM_VCPU_KVM, kvm_vcpu, kvm); 303 + OFFSET(KVM_VCPU_RUN, kvm_vcpu, run); 304 + BLANK(); 305 + 306 + OFFSET(KVM_ARCH_HSP, kvm_vcpu_arch, host_sp); 307 + OFFSET(KVM_ARCH_HTP, kvm_vcpu_arch, host_tp); 308 + OFFSET(KVM_ARCH_HPGD, kvm_vcpu_arch, host_pgd); 309 + OFFSET(KVM_ARCH_HANDLE_EXIT, kvm_vcpu_arch, handle_exit); 310 + OFFSET(KVM_ARCH_HEENTRY, kvm_vcpu_arch, host_eentry); 311 + OFFSET(KVM_ARCH_GEENTRY, kvm_vcpu_arch, guest_eentry); 312 + OFFSET(KVM_ARCH_GPC, kvm_vcpu_arch, pc); 313 + OFFSET(KVM_ARCH_GGPR, kvm_vcpu_arch, gprs); 314 + OFFSET(KVM_ARCH_HBADI, kvm_vcpu_arch, badi); 315 + OFFSET(KVM_ARCH_HBADV, kvm_vcpu_arch, badv); 316 + OFFSET(KVM_ARCH_HECFG, kvm_vcpu_arch, host_ecfg); 317 + OFFSET(KVM_ARCH_HESTAT, kvm_vcpu_arch, host_estat); 318 + OFFSET(KVM_ARCH_HPERCPU, kvm_vcpu_arch, host_percpu); 319 + 320 + OFFSET(KVM_GPGD, kvm, arch.pgd); 321 + BLANK(); 322 + }
+40
arch/loongarch/kvm/Kconfig
··· 1 + # SPDX-License-Identifier: GPL-2.0 2 + # 3 + # KVM configuration 4 + # 5 + 6 + source "virt/kvm/Kconfig" 7 + 8 + menuconfig VIRTUALIZATION 9 + bool "Virtualization" 10 + help 11 + Say Y here to get to see options for using your Linux host to run 12 + other operating systems inside virtual machines (guests). 13 + This option alone does not add any kernel code. 14 + 15 + If you say N, all options in this submenu will be skipped and 16 + disabled. 17 + 18 + if VIRTUALIZATION 19 + 20 + config KVM 21 + tristate "Kernel-based Virtual Machine (KVM) support" 22 + depends on AS_HAS_LVZ_EXTENSION 23 + depends on HAVE_KVM 24 + select HAVE_KVM_DIRTY_RING_ACQ_REL 25 + select HAVE_KVM_EVENTFD 26 + select HAVE_KVM_VCPU_ASYNC_IOCTL 27 + select KVM_GENERIC_DIRTYLOG_READ_PROTECT 28 + select KVM_GENERIC_HARDWARE_ENABLING 29 + select KVM_MMIO 30 + select KVM_XFER_TO_GUEST_WORK 31 + select MMU_NOTIFIER 32 + select PREEMPT_NOTIFIERS 33 + help 34 + Support hosting virtualized guest machines using 35 + hardware virtualization extensions. You will need 36 + a processor equipped with virtualization extensions. 37 + 38 + If unsure, say N. 39 + 40 + endif # VIRTUALIZATION
+22
arch/loongarch/kvm/Makefile
··· 1 + # SPDX-License-Identifier: GPL-2.0 2 + # 3 + # Makefile for LoongArch KVM support 4 + # 5 + 6 + ccflags-y += -I $(srctree)/$(src) 7 + 8 + include $(srctree)/virt/kvm/Makefile.kvm 9 + 10 + obj-$(CONFIG_KVM) += kvm.o 11 + 12 + kvm-y += exit.o 13 + kvm-y += interrupt.o 14 + kvm-y += main.o 15 + kvm-y += mmu.o 16 + kvm-y += switch.o 17 + kvm-y += timer.o 18 + kvm-y += tlb.o 19 + kvm-y += vcpu.o 20 + kvm-y += vm.o 21 + 22 + CFLAGS_exit.o += $(call cc-option,-Wno-override-init,)
+696
arch/loongarch/kvm/exit.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Copyright (C) 2020-2023 Loongson Technology Corporation Limited 4 + */ 5 + 6 + #include <linux/err.h> 7 + #include <linux/errno.h> 8 + #include <linux/kvm_host.h> 9 + #include <linux/module.h> 10 + #include <linux/preempt.h> 11 + #include <linux/vmalloc.h> 12 + #include <asm/fpu.h> 13 + #include <asm/inst.h> 14 + #include <asm/loongarch.h> 15 + #include <asm/mmzone.h> 16 + #include <asm/numa.h> 17 + #include <asm/time.h> 18 + #include <asm/tlb.h> 19 + #include <asm/kvm_csr.h> 20 + #include <asm/kvm_vcpu.h> 21 + #include "trace.h" 22 + 23 + static unsigned long kvm_emu_read_csr(struct kvm_vcpu *vcpu, int csrid) 24 + { 25 + unsigned long val = 0; 26 + struct loongarch_csrs *csr = vcpu->arch.csr; 27 + 28 + /* 29 + * From LoongArch Reference Manual Volume 1 Chapter 4.2.1 30 + * For undefined CSR id, return value is 0 31 + */ 32 + if (get_gcsr_flag(csrid) & SW_GCSR) 33 + val = kvm_read_sw_gcsr(csr, csrid); 34 + else 35 + pr_warn_once("Unsupported csrrd 0x%x with pc %lx\n", csrid, vcpu->arch.pc); 36 + 37 + return val; 38 + } 39 + 40 + static unsigned long kvm_emu_write_csr(struct kvm_vcpu *vcpu, int csrid, unsigned long val) 41 + { 42 + unsigned long old = 0; 43 + struct loongarch_csrs *csr = vcpu->arch.csr; 44 + 45 + if (get_gcsr_flag(csrid) & SW_GCSR) { 46 + old = kvm_read_sw_gcsr(csr, csrid); 47 + kvm_write_sw_gcsr(csr, csrid, val); 48 + } else 49 + pr_warn_once("Unsupported csrwr 0x%x with pc %lx\n", csrid, vcpu->arch.pc); 50 + 51 + return old; 52 + } 53 + 54 + static unsigned long kvm_emu_xchg_csr(struct kvm_vcpu *vcpu, int csrid, 55 + unsigned long csr_mask, unsigned long val) 56 + { 57 + unsigned long old = 0; 58 + struct loongarch_csrs *csr = vcpu->arch.csr; 59 + 60 + if (get_gcsr_flag(csrid) & SW_GCSR) { 61 + old = kvm_read_sw_gcsr(csr, csrid); 62 + val = (old & ~csr_mask) | (val & csr_mask); 63 + kvm_write_sw_gcsr(csr, csrid, val); 64 + old = old & csr_mask; 65 + } else 66 + pr_warn_once("Unsupported csrxchg 0x%x with pc %lx\n", csrid, vcpu->arch.pc); 67 + 68 + return old; 69 + } 70 + 71 + static int kvm_handle_csr(struct kvm_vcpu *vcpu, larch_inst inst) 72 + { 73 + unsigned int rd, rj, csrid; 74 + unsigned long csr_mask, val = 0; 75 + 76 + /* 77 + * CSR value mask imm 78 + * rj = 0 means csrrd 79 + * rj = 1 means csrwr 80 + * rj != 0,1 means csrxchg 81 + */ 82 + rd = inst.reg2csr_format.rd; 83 + rj = inst.reg2csr_format.rj; 84 + csrid = inst.reg2csr_format.csr; 85 + 86 + /* Process CSR ops */ 87 + switch (rj) { 88 + case 0: /* process csrrd */ 89 + val = kvm_emu_read_csr(vcpu, csrid); 90 + vcpu->arch.gprs[rd] = val; 91 + break; 92 + case 1: /* process csrwr */ 93 + val = vcpu->arch.gprs[rd]; 94 + val = kvm_emu_write_csr(vcpu, csrid, val); 95 + vcpu->arch.gprs[rd] = val; 96 + break; 97 + default: /* process csrxchg */ 98 + val = vcpu->arch.gprs[rd]; 99 + csr_mask = vcpu->arch.gprs[rj]; 100 + val = kvm_emu_xchg_csr(vcpu, csrid, csr_mask, val); 101 + vcpu->arch.gprs[rd] = val; 102 + } 103 + 104 + return EMULATE_DONE; 105 + } 106 + 107 + int kvm_emu_iocsr(larch_inst inst, struct kvm_run *run, struct kvm_vcpu *vcpu) 108 + { 109 + int ret; 110 + unsigned long val; 111 + u32 addr, rd, rj, opcode; 112 + 113 + /* 114 + * Each IOCSR with different opcode 115 + */ 116 + rd = inst.reg2_format.rd; 117 + rj = inst.reg2_format.rj; 118 + opcode = inst.reg2_format.opcode; 119 + addr = vcpu->arch.gprs[rj]; 120 + ret = EMULATE_DO_IOCSR; 121 + run->iocsr_io.phys_addr = addr; 122 + run->iocsr_io.is_write = 0; 123 + 124 + /* LoongArch is Little 
endian */ 125 + switch (opcode) { 126 + case iocsrrdb_op: 127 + run->iocsr_io.len = 1; 128 + break; 129 + case iocsrrdh_op: 130 + run->iocsr_io.len = 2; 131 + break; 132 + case iocsrrdw_op: 133 + run->iocsr_io.len = 4; 134 + break; 135 + case iocsrrdd_op: 136 + run->iocsr_io.len = 8; 137 + break; 138 + case iocsrwrb_op: 139 + run->iocsr_io.len = 1; 140 + run->iocsr_io.is_write = 1; 141 + break; 142 + case iocsrwrh_op: 143 + run->iocsr_io.len = 2; 144 + run->iocsr_io.is_write = 1; 145 + break; 146 + case iocsrwrw_op: 147 + run->iocsr_io.len = 4; 148 + run->iocsr_io.is_write = 1; 149 + break; 150 + case iocsrwrd_op: 151 + run->iocsr_io.len = 8; 152 + run->iocsr_io.is_write = 1; 153 + break; 154 + default: 155 + ret = EMULATE_FAIL; 156 + break; 157 + } 158 + 159 + if (ret == EMULATE_DO_IOCSR) { 160 + if (run->iocsr_io.is_write) { 161 + val = vcpu->arch.gprs[rd]; 162 + memcpy(run->iocsr_io.data, &val, run->iocsr_io.len); 163 + } 164 + vcpu->arch.io_gpr = rd; 165 + } 166 + 167 + return ret; 168 + } 169 + 170 + int kvm_complete_iocsr_read(struct kvm_vcpu *vcpu, struct kvm_run *run) 171 + { 172 + enum emulation_result er = EMULATE_DONE; 173 + unsigned long *gpr = &vcpu->arch.gprs[vcpu->arch.io_gpr]; 174 + 175 + switch (run->iocsr_io.len) { 176 + case 1: 177 + *gpr = *(s8 *)run->iocsr_io.data; 178 + break; 179 + case 2: 180 + *gpr = *(s16 *)run->iocsr_io.data; 181 + break; 182 + case 4: 183 + *gpr = *(s32 *)run->iocsr_io.data; 184 + break; 185 + case 8: 186 + *gpr = *(s64 *)run->iocsr_io.data; 187 + break; 188 + default: 189 + kvm_err("Bad IOCSR length: %d, addr is 0x%lx\n", 190 + run->iocsr_io.len, vcpu->arch.badv); 191 + er = EMULATE_FAIL; 192 + break; 193 + } 194 + 195 + return er; 196 + } 197 + 198 + int kvm_emu_idle(struct kvm_vcpu *vcpu) 199 + { 200 + ++vcpu->stat.idle_exits; 201 + trace_kvm_exit_idle(vcpu, KVM_TRACE_EXIT_IDLE); 202 + 203 + if (!kvm_arch_vcpu_runnable(vcpu)) { 204 + /* 205 + * Switch to the software timer before halt-polling/blocking as 206 + * the guest's timer may be a break event for the vCPU, and the 207 + * hypervisor timer runs only when the CPU is in guest mode. 208 + * Switch before halt-polling so that KVM recognizes an expired 209 + * timer before blocking. 
210 + */ 211 + kvm_save_timer(vcpu); 212 + kvm_vcpu_block(vcpu); 213 + } 214 + 215 + return EMULATE_DONE; 216 + } 217 + 218 + static int kvm_trap_handle_gspr(struct kvm_vcpu *vcpu) 219 + { 220 + int rd, rj; 221 + unsigned int index; 222 + unsigned long curr_pc; 223 + larch_inst inst; 224 + enum emulation_result er = EMULATE_DONE; 225 + struct kvm_run *run = vcpu->run; 226 + 227 + /* Fetch the instruction */ 228 + inst.word = vcpu->arch.badi; 229 + curr_pc = vcpu->arch.pc; 230 + update_pc(&vcpu->arch); 231 + 232 + trace_kvm_exit_gspr(vcpu, inst.word); 233 + er = EMULATE_FAIL; 234 + switch (((inst.word >> 24) & 0xff)) { 235 + case 0x0: /* CPUCFG GSPR */ 236 + if (inst.reg2_format.opcode == 0x1B) { 237 + rd = inst.reg2_format.rd; 238 + rj = inst.reg2_format.rj; 239 + ++vcpu->stat.cpucfg_exits; 240 + index = vcpu->arch.gprs[rj]; 241 + er = EMULATE_DONE; 242 + /* 243 + * By LoongArch Reference Manual 2.2.10.5 244 + * return value is 0 for undefined cpucfg index 245 + */ 246 + if (index < KVM_MAX_CPUCFG_REGS) 247 + vcpu->arch.gprs[rd] = vcpu->arch.cpucfg[index]; 248 + else 249 + vcpu->arch.gprs[rd] = 0; 250 + } 251 + break; 252 + case 0x4: /* CSR{RD,WR,XCHG} GSPR */ 253 + er = kvm_handle_csr(vcpu, inst); 254 + break; 255 + case 0x6: /* Cache, Idle and IOCSR GSPR */ 256 + switch (((inst.word >> 22) & 0x3ff)) { 257 + case 0x18: /* Cache GSPR */ 258 + er = EMULATE_DONE; 259 + trace_kvm_exit_cache(vcpu, KVM_TRACE_EXIT_CACHE); 260 + break; 261 + case 0x19: /* Idle/IOCSR GSPR */ 262 + switch (((inst.word >> 15) & 0x1ffff)) { 263 + case 0xc90: /* IOCSR GSPR */ 264 + er = kvm_emu_iocsr(inst, run, vcpu); 265 + break; 266 + case 0xc91: /* Idle GSPR */ 267 + er = kvm_emu_idle(vcpu); 268 + break; 269 + default: 270 + er = EMULATE_FAIL; 271 + break; 272 + } 273 + break; 274 + default: 275 + er = EMULATE_FAIL; 276 + break; 277 + } 278 + break; 279 + default: 280 + er = EMULATE_FAIL; 281 + break; 282 + } 283 + 284 + /* Rollback PC only if emulation was unsuccessful */ 285 + if (er == EMULATE_FAIL) { 286 + kvm_err("[%#lx]%s: unsupported gspr instruction 0x%08x\n", 287 + curr_pc, __func__, inst.word); 288 + 289 + kvm_arch_vcpu_dump_regs(vcpu); 290 + vcpu->arch.pc = curr_pc; 291 + } 292 + 293 + return er; 294 + } 295 + 296 + /* 297 + * Trigger GSPR: 298 + * 1) Execute CPUCFG instruction; 299 + * 2) Execute CACOP/IDLE instructions; 300 + * 3) Access to unimplemented CSRs/IOCSRs. 301 + */ 302 + static int kvm_handle_gspr(struct kvm_vcpu *vcpu) 303 + { 304 + int ret = RESUME_GUEST; 305 + enum emulation_result er = EMULATE_DONE; 306 + 307 + er = kvm_trap_handle_gspr(vcpu); 308 + 309 + if (er == EMULATE_DONE) { 310 + ret = RESUME_GUEST; 311 + } else if (er == EMULATE_DO_MMIO) { 312 + vcpu->run->exit_reason = KVM_EXIT_MMIO; 313 + ret = RESUME_HOST; 314 + } else if (er == EMULATE_DO_IOCSR) { 315 + vcpu->run->exit_reason = KVM_EXIT_LOONGARCH_IOCSR; 316 + ret = RESUME_HOST; 317 + } else { 318 + kvm_queue_exception(vcpu, EXCCODE_INE, 0); 319 + ret = RESUME_GUEST; 320 + } 321 + 322 + return ret; 323 + } 324 + 325 + int kvm_emu_mmio_read(struct kvm_vcpu *vcpu, larch_inst inst) 326 + { 327 + int ret; 328 + unsigned int op8, opcode, rd; 329 + struct kvm_run *run = vcpu->run; 330 + 331 + run->mmio.phys_addr = vcpu->arch.badv; 332 + vcpu->mmio_needed = 2; /* signed */ 333 + op8 = (inst.word >> 24) & 0xff; 334 + ret = EMULATE_DO_MMIO; 335 + 336 + switch (op8) { 337 + case 0x24 ... 
0x27: /* ldptr.w/d process */ 338 + rd = inst.reg2i14_format.rd; 339 + opcode = inst.reg2i14_format.opcode; 340 + 341 + switch (opcode) { 342 + case ldptrw_op: 343 + run->mmio.len = 4; 344 + break; 345 + case ldptrd_op: 346 + run->mmio.len = 8; 347 + break; 348 + default: 349 + break; 350 + } 351 + break; 352 + case 0x28 ... 0x2e: /* ld.b/h/w/d, ld.bu/hu/wu process */ 353 + rd = inst.reg2i12_format.rd; 354 + opcode = inst.reg2i12_format.opcode; 355 + 356 + switch (opcode) { 357 + case ldb_op: 358 + run->mmio.len = 1; 359 + break; 360 + case ldbu_op: 361 + vcpu->mmio_needed = 1; /* unsigned */ 362 + run->mmio.len = 1; 363 + break; 364 + case ldh_op: 365 + run->mmio.len = 2; 366 + break; 367 + case ldhu_op: 368 + vcpu->mmio_needed = 1; /* unsigned */ 369 + run->mmio.len = 2; 370 + break; 371 + case ldw_op: 372 + run->mmio.len = 4; 373 + break; 374 + case ldwu_op: 375 + vcpu->mmio_needed = 1; /* unsigned */ 376 + run->mmio.len = 4; 377 + break; 378 + case ldd_op: 379 + run->mmio.len = 8; 380 + break; 381 + default: 382 + ret = EMULATE_FAIL; 383 + break; 384 + } 385 + break; 386 + case 0x38: /* ldx.b/h/w/d, ldx.bu/hu/wu process */ 387 + rd = inst.reg3_format.rd; 388 + opcode = inst.reg3_format.opcode; 389 + 390 + switch (opcode) { 391 + case ldxb_op: 392 + run->mmio.len = 1; 393 + break; 394 + case ldxbu_op: 395 + run->mmio.len = 1; 396 + vcpu->mmio_needed = 1; /* unsigned */ 397 + break; 398 + case ldxh_op: 399 + run->mmio.len = 2; 400 + break; 401 + case ldxhu_op: 402 + run->mmio.len = 2; 403 + vcpu->mmio_needed = 1; /* unsigned */ 404 + break; 405 + case ldxw_op: 406 + run->mmio.len = 4; 407 + break; 408 + case ldxwu_op: 409 + run->mmio.len = 4; 410 + vcpu->mmio_needed = 1; /* unsigned */ 411 + break; 412 + case ldxd_op: 413 + run->mmio.len = 8; 414 + break; 415 + default: 416 + ret = EMULATE_FAIL; 417 + break; 418 + } 419 + break; 420 + default: 421 + ret = EMULATE_FAIL; 422 + } 423 + 424 + if (ret == EMULATE_DO_MMIO) { 425 + /* Set for kvm_complete_mmio_read() use */ 426 + vcpu->arch.io_gpr = rd; 427 + run->mmio.is_write = 0; 428 + vcpu->mmio_is_write = 0; 429 + } else { 430 + kvm_err("Read not supported Inst=0x%08x @%lx BadVaddr:%#lx\n", 431 + inst.word, vcpu->arch.pc, vcpu->arch.badv); 432 + kvm_arch_vcpu_dump_regs(vcpu); 433 + vcpu->mmio_needed = 0; 434 + } 435 + 436 + return ret; 437 + } 438 + 439 + int kvm_complete_mmio_read(struct kvm_vcpu *vcpu, struct kvm_run *run) 440 + { 441 + enum emulation_result er = EMULATE_DONE; 442 + unsigned long *gpr = &vcpu->arch.gprs[vcpu->arch.io_gpr]; 443 + 444 + /* Update with new PC */ 445 + update_pc(&vcpu->arch); 446 + switch (run->mmio.len) { 447 + case 1: 448 + if (vcpu->mmio_needed == 2) 449 + *gpr = *(s8 *)run->mmio.data; 450 + else 451 + *gpr = *(u8 *)run->mmio.data; 452 + break; 453 + case 2: 454 + if (vcpu->mmio_needed == 2) 455 + *gpr = *(s16 *)run->mmio.data; 456 + else 457 + *gpr = *(u16 *)run->mmio.data; 458 + break; 459 + case 4: 460 + if (vcpu->mmio_needed == 2) 461 + *gpr = *(s32 *)run->mmio.data; 462 + else 463 + *gpr = *(u32 *)run->mmio.data; 464 + break; 465 + case 8: 466 + *gpr = *(s64 *)run->mmio.data; 467 + break; 468 + default: 469 + kvm_err("Bad MMIO length: %d, addr is 0x%lx\n", 470 + run->mmio.len, vcpu->arch.badv); 471 + er = EMULATE_FAIL; 472 + break; 473 + } 474 + 475 + return er; 476 + } 477 + 478 + int kvm_emu_mmio_write(struct kvm_vcpu *vcpu, larch_inst inst) 479 + { 480 + int ret; 481 + unsigned int rd, op8, opcode; 482 + unsigned long curr_pc, rd_val = 0; 483 + struct kvm_run *run = vcpu->run; 484 + void *data = 
run->mmio.data; 485 + 486 + /* 487 + * Update PC and hold onto current PC in case there is 488 + * an error and we want to rollback the PC 489 + */ 490 + curr_pc = vcpu->arch.pc; 491 + update_pc(&vcpu->arch); 492 + 493 + op8 = (inst.word >> 24) & 0xff; 494 + run->mmio.phys_addr = vcpu->arch.badv; 495 + ret = EMULATE_DO_MMIO; 496 + switch (op8) { 497 + case 0x24 ... 0x27: /* stptr.w/d process */ 498 + rd = inst.reg2i14_format.rd; 499 + opcode = inst.reg2i14_format.opcode; 500 + 501 + switch (opcode) { 502 + case stptrw_op: 503 + run->mmio.len = 4; 504 + *(unsigned int *)data = vcpu->arch.gprs[rd]; 505 + break; 506 + case stptrd_op: 507 + run->mmio.len = 8; 508 + *(unsigned long *)data = vcpu->arch.gprs[rd]; 509 + break; 510 + default: 511 + ret = EMULATE_FAIL; 512 + break; 513 + } 514 + break; 515 + case 0x28 ... 0x2e: /* st.b/h/w/d process */ 516 + rd = inst.reg2i12_format.rd; 517 + opcode = inst.reg2i12_format.opcode; 518 + rd_val = vcpu->arch.gprs[rd]; 519 + 520 + switch (opcode) { 521 + case stb_op: 522 + run->mmio.len = 1; 523 + *(unsigned char *)data = rd_val; 524 + break; 525 + case sth_op: 526 + run->mmio.len = 2; 527 + *(unsigned short *)data = rd_val; 528 + break; 529 + case stw_op: 530 + run->mmio.len = 4; 531 + *(unsigned int *)data = rd_val; 532 + break; 533 + case std_op: 534 + run->mmio.len = 8; 535 + *(unsigned long *)data = rd_val; 536 + break; 537 + default: 538 + ret = EMULATE_FAIL; 539 + break; 540 + } 541 + break; 542 + case 0x38: /* stx.b/h/w/d process */ 543 + rd = inst.reg3_format.rd; 544 + opcode = inst.reg3_format.opcode; 545 + 546 + switch (opcode) { 547 + case stxb_op: 548 + run->mmio.len = 1; 549 + *(unsigned char *)data = vcpu->arch.gprs[rd]; 550 + break; 551 + case stxh_op: 552 + run->mmio.len = 2; 553 + *(unsigned short *)data = vcpu->arch.gprs[rd]; 554 + break; 555 + case stxw_op: 556 + run->mmio.len = 4; 557 + *(unsigned int *)data = vcpu->arch.gprs[rd]; 558 + break; 559 + case stxd_op: 560 + run->mmio.len = 8; 561 + *(unsigned long *)data = vcpu->arch.gprs[rd]; 562 + break; 563 + default: 564 + ret = EMULATE_FAIL; 565 + break; 566 + } 567 + break; 568 + default: 569 + ret = EMULATE_FAIL; 570 + } 571 + 572 + if (ret == EMULATE_DO_MMIO) { 573 + run->mmio.is_write = 1; 574 + vcpu->mmio_needed = 1; 575 + vcpu->mmio_is_write = 1; 576 + } else { 577 + vcpu->arch.pc = curr_pc; 578 + kvm_err("Write not supported Inst=0x%08x @%lx BadVaddr:%#lx\n", 579 + inst.word, vcpu->arch.pc, vcpu->arch.badv); 580 + kvm_arch_vcpu_dump_regs(vcpu); 581 + /* Rollback PC if emulation was unsuccessful */ 582 + } 583 + 584 + return ret; 585 + } 586 + 587 + static int kvm_handle_rdwr_fault(struct kvm_vcpu *vcpu, bool write) 588 + { 589 + int ret; 590 + larch_inst inst; 591 + enum emulation_result er = EMULATE_DONE; 592 + struct kvm_run *run = vcpu->run; 593 + unsigned long badv = vcpu->arch.badv; 594 + 595 + ret = kvm_handle_mm_fault(vcpu, badv, write); 596 + if (ret) { 597 + /* Treat as MMIO */ 598 + inst.word = vcpu->arch.badi; 599 + if (write) { 600 + er = kvm_emu_mmio_write(vcpu, inst); 601 + } else { 602 + /* A code fetch fault doesn't count as an MMIO */ 603 + if (kvm_is_ifetch_fault(&vcpu->arch)) { 604 + kvm_queue_exception(vcpu, EXCCODE_ADE, EXSUBCODE_ADEF); 605 + return RESUME_GUEST; 606 + } 607 + 608 + er = kvm_emu_mmio_read(vcpu, inst); 609 + } 610 + } 611 + 612 + if (er == EMULATE_DONE) { 613 + ret = RESUME_GUEST; 614 + } else if (er == EMULATE_DO_MMIO) { 615 + run->exit_reason = KVM_EXIT_MMIO; 616 + ret = RESUME_HOST; 617 + } else { 618 + kvm_queue_exception(vcpu, 
EXCCODE_ADE, EXSUBCODE_ADEM); 619 + ret = RESUME_GUEST; 620 + } 621 + 622 + return ret; 623 + } 624 + 625 + static int kvm_handle_read_fault(struct kvm_vcpu *vcpu) 626 + { 627 + return kvm_handle_rdwr_fault(vcpu, false); 628 + } 629 + 630 + static int kvm_handle_write_fault(struct kvm_vcpu *vcpu) 631 + { 632 + return kvm_handle_rdwr_fault(vcpu, true); 633 + } 634 + 635 + /** 636 + * kvm_handle_fpu_disabled() - Guest used fpu however it is disabled at host 637 + * @vcpu: Virtual CPU context. 638 + * 639 + * Handle when the guest attempts to use fpu which hasn't been allowed 640 + * by the root context. 641 + */ 642 + static int kvm_handle_fpu_disabled(struct kvm_vcpu *vcpu) 643 + { 644 + struct kvm_run *run = vcpu->run; 645 + 646 + /* 647 + * If guest FPU not present, the FPU operation should have been 648 + * treated as a reserved instruction! 649 + * If FPU already in use, we shouldn't get this at all. 650 + */ 651 + if (WARN_ON(vcpu->arch.aux_inuse & KVM_LARCH_FPU)) { 652 + kvm_err("%s internal error\n", __func__); 653 + run->exit_reason = KVM_EXIT_INTERNAL_ERROR; 654 + return RESUME_HOST; 655 + } 656 + 657 + kvm_own_fpu(vcpu); 658 + 659 + return RESUME_GUEST; 660 + } 661 + 662 + /* 663 + * LoongArch KVM callback handling for unimplemented guest exiting 664 + */ 665 + static int kvm_fault_ni(struct kvm_vcpu *vcpu) 666 + { 667 + unsigned int ecode, inst; 668 + unsigned long estat, badv; 669 + 670 + /* Fetch the instruction */ 671 + inst = vcpu->arch.badi; 672 + badv = vcpu->arch.badv; 673 + estat = vcpu->arch.host_estat; 674 + ecode = (estat & CSR_ESTAT_EXC) >> CSR_ESTAT_EXC_SHIFT; 675 + kvm_err("ECode: %d PC=%#lx Inst=0x%08x BadVaddr=%#lx ESTAT=%#lx\n", 676 + ecode, vcpu->arch.pc, inst, badv, read_gcsr_estat()); 677 + kvm_arch_vcpu_dump_regs(vcpu); 678 + kvm_queue_exception(vcpu, EXCCODE_INE, 0); 679 + 680 + return RESUME_GUEST; 681 + } 682 + 683 + static exit_handle_fn kvm_fault_tables[EXCCODE_INT_START] = { 684 + [0 ... EXCCODE_INT_START - 1] = kvm_fault_ni, 685 + [EXCCODE_TLBI] = kvm_handle_read_fault, 686 + [EXCCODE_TLBL] = kvm_handle_read_fault, 687 + [EXCCODE_TLBS] = kvm_handle_write_fault, 688 + [EXCCODE_TLBM] = kvm_handle_write_fault, 689 + [EXCCODE_FPDIS] = kvm_handle_fpu_disabled, 690 + [EXCCODE_GSPR] = kvm_handle_gspr, 691 + }; 692 + 693 + int kvm_handle_fault(struct kvm_vcpu *vcpu, int fault) 694 + { 695 + return kvm_fault_tables[fault](vcpu); 696 + }
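The csrxchg emulation above reduces to a masked read-modify-write on the software CSR image: bits selected by the mask come from the guest's new value, all other bits are preserved, and the old masked bits are returned in rd. Below is a minimal user-space sketch (plain C, not kernel code) of that merge; the sw_csr array and emu_xchg_csr name are illustrative stand-ins for the kvm_read_sw_gcsr/kvm_write_sw_gcsr pair, an approximation of kvm_emu_xchg_csr rather than the in-tree implementation.

/* Stand-alone sketch of the csrxchg merge done by kvm_emu_xchg_csr():
 * mask-selected bits take the new value, the rest keep their old
 * contents, and the old masked bits are handed back to the guest.
 */
#include <stdio.h>

static unsigned long sw_csr[16];                /* toy software CSR file */

static unsigned long emu_xchg_csr(int csrid, unsigned long mask,
				  unsigned long val)
{
	unsigned long old = sw_csr[csrid];

	sw_csr[csrid] = (old & ~mask) | (val & mask);
	return old & mask;                      /* value written back to rd */
}

int main(void)
{
	sw_csr[3] = 0xffff0000UL;
	unsigned long old = emu_xchg_csr(3, 0x00ff00ffUL, 0x12345678UL);

	printf("returned %#lx, csr now %#lx\n", old, sw_csr[3]);
	/* prints: returned 0xff0000, csr now 0xff340078 */
	return 0;
}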
+183
arch/loongarch/kvm/interrupt.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Copyright (C) 2020-2023 Loongson Technology Corporation Limited 4 + */ 5 + 6 + #include <linux/err.h> 7 + #include <linux/errno.h> 8 + #include <asm/kvm_csr.h> 9 + #include <asm/kvm_vcpu.h> 10 + 11 + static unsigned int priority_to_irq[EXCCODE_INT_NUM] = { 12 + [INT_TI] = CPU_TIMER, 13 + [INT_IPI] = CPU_IPI, 14 + [INT_SWI0] = CPU_SIP0, 15 + [INT_SWI1] = CPU_SIP1, 16 + [INT_HWI0] = CPU_IP0, 17 + [INT_HWI1] = CPU_IP1, 18 + [INT_HWI2] = CPU_IP2, 19 + [INT_HWI3] = CPU_IP3, 20 + [INT_HWI4] = CPU_IP4, 21 + [INT_HWI5] = CPU_IP5, 22 + [INT_HWI6] = CPU_IP6, 23 + [INT_HWI7] = CPU_IP7, 24 + }; 25 + 26 + static int kvm_irq_deliver(struct kvm_vcpu *vcpu, unsigned int priority) 27 + { 28 + unsigned int irq = 0; 29 + 30 + clear_bit(priority, &vcpu->arch.irq_pending); 31 + if (priority < EXCCODE_INT_NUM) 32 + irq = priority_to_irq[priority]; 33 + 34 + switch (priority) { 35 + case INT_TI: 36 + case INT_IPI: 37 + case INT_SWI0: 38 + case INT_SWI1: 39 + set_gcsr_estat(irq); 40 + break; 41 + 42 + case INT_HWI0 ... INT_HWI7: 43 + set_csr_gintc(irq); 44 + break; 45 + 46 + default: 47 + break; 48 + } 49 + 50 + return 1; 51 + } 52 + 53 + static int kvm_irq_clear(struct kvm_vcpu *vcpu, unsigned int priority) 54 + { 55 + unsigned int irq = 0; 56 + 57 + clear_bit(priority, &vcpu->arch.irq_clear); 58 + if (priority < EXCCODE_INT_NUM) 59 + irq = priority_to_irq[priority]; 60 + 61 + switch (priority) { 62 + case INT_TI: 63 + case INT_IPI: 64 + case INT_SWI0: 65 + case INT_SWI1: 66 + clear_gcsr_estat(irq); 67 + break; 68 + 69 + case INT_HWI0 ... INT_HWI7: 70 + clear_csr_gintc(irq); 71 + break; 72 + 73 + default: 74 + break; 75 + } 76 + 77 + return 1; 78 + } 79 + 80 + void kvm_deliver_intr(struct kvm_vcpu *vcpu) 81 + { 82 + unsigned int priority; 83 + unsigned long *pending = &vcpu->arch.irq_pending; 84 + unsigned long *pending_clr = &vcpu->arch.irq_clear; 85 + 86 + if (!(*pending) && !(*pending_clr)) 87 + return; 88 + 89 + if (*pending_clr) { 90 + priority = __ffs(*pending_clr); 91 + while (priority <= INT_IPI) { 92 + kvm_irq_clear(vcpu, priority); 93 + priority = find_next_bit(pending_clr, 94 + BITS_PER_BYTE * sizeof(*pending_clr), 95 + priority + 1); 96 + } 97 + } 98 + 99 + if (*pending) { 100 + priority = __ffs(*pending); 101 + while (priority <= INT_IPI) { 102 + kvm_irq_deliver(vcpu, priority); 103 + priority = find_next_bit(pending, 104 + BITS_PER_BYTE * sizeof(*pending), 105 + priority + 1); 106 + } 107 + } 108 + } 109 + 110 + int kvm_pending_timer(struct kvm_vcpu *vcpu) 111 + { 112 + return test_bit(INT_TI, &vcpu->arch.irq_pending); 113 + } 114 + 115 + /* 116 + * Only support illegal instruction or illegal Address Error exception, 117 + * Other exceptions are injected by hardware in kvm mode 118 + */ 119 + static void _kvm_deliver_exception(struct kvm_vcpu *vcpu, 120 + unsigned int code, unsigned int subcode) 121 + { 122 + unsigned long val, vec_size; 123 + 124 + /* 125 + * BADV is added for EXCCODE_ADE exception 126 + * Use PC register (GVA address) if it is instruction exeception 127 + * Else use BADV from host side (GPA address) for data exeception 128 + */ 129 + if (code == EXCCODE_ADE) { 130 + if (subcode == EXSUBCODE_ADEF) 131 + val = vcpu->arch.pc; 132 + else 133 + val = vcpu->arch.badv; 134 + kvm_write_hw_gcsr(LOONGARCH_CSR_BADV, val); 135 + } 136 + 137 + /* Set exception instruction */ 138 + kvm_write_hw_gcsr(LOONGARCH_CSR_BADI, vcpu->arch.badi); 139 + 140 + /* 141 + * Save CRMD in PRMD 142 + * Set IRQ disabled and PLV0 with CRMD 143 + */ 144 + 
val = kvm_read_hw_gcsr(LOONGARCH_CSR_CRMD); 145 + kvm_write_hw_gcsr(LOONGARCH_CSR_PRMD, val); 146 + val = val & ~(CSR_CRMD_PLV | CSR_CRMD_IE); 147 + kvm_write_hw_gcsr(LOONGARCH_CSR_CRMD, val); 148 + 149 + /* Set exception PC address */ 150 + kvm_write_hw_gcsr(LOONGARCH_CSR_ERA, vcpu->arch.pc); 151 + 152 + /* 153 + * Set exception code 154 + * Exception and interrupt can be inject at the same time 155 + * Hardware will handle exception first and then extern interrupt 156 + * Exception code is Ecode in ESTAT[16:21] 157 + * Interrupt code in ESTAT[0:12] 158 + */ 159 + val = kvm_read_hw_gcsr(LOONGARCH_CSR_ESTAT); 160 + val = (val & ~CSR_ESTAT_EXC) | code; 161 + kvm_write_hw_gcsr(LOONGARCH_CSR_ESTAT, val); 162 + 163 + /* Calculate expcetion entry address */ 164 + val = kvm_read_hw_gcsr(LOONGARCH_CSR_ECFG); 165 + vec_size = (val & CSR_ECFG_VS) >> CSR_ECFG_VS_SHIFT; 166 + if (vec_size) 167 + vec_size = (1 << vec_size) * 4; 168 + val = kvm_read_hw_gcsr(LOONGARCH_CSR_EENTRY); 169 + vcpu->arch.pc = val + code * vec_size; 170 + } 171 + 172 + void kvm_deliver_exception(struct kvm_vcpu *vcpu) 173 + { 174 + unsigned int code; 175 + unsigned long *pending = &vcpu->arch.exception_pending; 176 + 177 + if (*pending) { 178 + code = __ffs(*pending); 179 + _kvm_deliver_exception(vcpu, code, vcpu->arch.esubcode); 180 + *pending = 0; 181 + vcpu->arch.esubcode = 0; 182 + } 183 + }
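kvm_deliver_intr above consumes the pending and clear bitmaps starting from the lowest set bit and stops once the bit index passes INT_IPI. The following stand-alone sketch (user-space C, not kernel code) mirrors that walk under those assumptions; __builtin_ctzl stands in for __ffs, the explicit bit clear stands in for clear_bit/find_next_bit, and INT_IPI_SKETCH and deliver are invented names used only for illustration.

/* Stand-alone sketch of the pending-interrupt walk in kvm_deliver_intr():
 * bits are consumed from the lowest set bit upward and delivery stops
 * once the bit index exceeds the highest deliverable priority.
 */
#include <stdio.h>

#define INT_IPI_SKETCH 12                   /* highest deliverable priority */

static void deliver(unsigned int priority)
{
	printf("deliver priority %u\n", priority);
}

static void deliver_pending(unsigned long *pending)
{
	while (*pending) {
		unsigned int priority = __builtin_ctzl(*pending); /* ~__ffs() */

		if (priority > INT_IPI_SKETCH)
			break;
		*pending &= ~(1UL << priority);  /* ~clear_bit() */
		deliver(priority);
	}
}

int main(void)
{
	unsigned long pending = (1UL << 1) | (1UL << 4) | (1UL << 11);

	deliver_pending(&pending);          /* delivers 1, 4, 11 in order */
	return 0;
}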
+420
arch/loongarch/kvm/main.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Copyright (C) 2020-2023 Loongson Technology Corporation Limited 4 + */ 5 + 6 + #include <linux/err.h> 7 + #include <linux/module.h> 8 + #include <linux/kvm_host.h> 9 + #include <asm/cacheflush.h> 10 + #include <asm/cpufeature.h> 11 + #include <asm/kvm_csr.h> 12 + #include "trace.h" 13 + 14 + unsigned long vpid_mask; 15 + struct kvm_world_switch *kvm_loongarch_ops; 16 + static int gcsr_flag[CSR_MAX_NUMS]; 17 + static struct kvm_context __percpu *vmcs; 18 + 19 + int get_gcsr_flag(int csr) 20 + { 21 + if (csr < CSR_MAX_NUMS) 22 + return gcsr_flag[csr]; 23 + 24 + return INVALID_GCSR; 25 + } 26 + 27 + static inline void set_gcsr_sw_flag(int csr) 28 + { 29 + if (csr < CSR_MAX_NUMS) 30 + gcsr_flag[csr] |= SW_GCSR; 31 + } 32 + 33 + static inline void set_gcsr_hw_flag(int csr) 34 + { 35 + if (csr < CSR_MAX_NUMS) 36 + gcsr_flag[csr] |= HW_GCSR; 37 + } 38 + 39 + /* 40 + * The default value of gcsr_flag[CSR] is 0, and we use this 41 + * function to set the flag to 1 (SW_GCSR) or 2 (HW_GCSR) if the 42 + * gcsr is software or hardware. It will be used by get/set_gcsr, 43 + * if gcsr_flag is HW we should use gcsrrd/gcsrwr to access it, 44 + * else use software csr to emulate it. 45 + */ 46 + static void kvm_init_gcsr_flag(void) 47 + { 48 + set_gcsr_hw_flag(LOONGARCH_CSR_CRMD); 49 + set_gcsr_hw_flag(LOONGARCH_CSR_PRMD); 50 + set_gcsr_hw_flag(LOONGARCH_CSR_EUEN); 51 + set_gcsr_hw_flag(LOONGARCH_CSR_MISC); 52 + set_gcsr_hw_flag(LOONGARCH_CSR_ECFG); 53 + set_gcsr_hw_flag(LOONGARCH_CSR_ESTAT); 54 + set_gcsr_hw_flag(LOONGARCH_CSR_ERA); 55 + set_gcsr_hw_flag(LOONGARCH_CSR_BADV); 56 + set_gcsr_hw_flag(LOONGARCH_CSR_BADI); 57 + set_gcsr_hw_flag(LOONGARCH_CSR_EENTRY); 58 + set_gcsr_hw_flag(LOONGARCH_CSR_TLBIDX); 59 + set_gcsr_hw_flag(LOONGARCH_CSR_TLBEHI); 60 + set_gcsr_hw_flag(LOONGARCH_CSR_TLBELO0); 61 + set_gcsr_hw_flag(LOONGARCH_CSR_TLBELO1); 62 + set_gcsr_hw_flag(LOONGARCH_CSR_ASID); 63 + set_gcsr_hw_flag(LOONGARCH_CSR_PGDL); 64 + set_gcsr_hw_flag(LOONGARCH_CSR_PGDH); 65 + set_gcsr_hw_flag(LOONGARCH_CSR_PGD); 66 + set_gcsr_hw_flag(LOONGARCH_CSR_PWCTL0); 67 + set_gcsr_hw_flag(LOONGARCH_CSR_PWCTL1); 68 + set_gcsr_hw_flag(LOONGARCH_CSR_STLBPGSIZE); 69 + set_gcsr_hw_flag(LOONGARCH_CSR_RVACFG); 70 + set_gcsr_hw_flag(LOONGARCH_CSR_CPUID); 71 + set_gcsr_hw_flag(LOONGARCH_CSR_PRCFG1); 72 + set_gcsr_hw_flag(LOONGARCH_CSR_PRCFG2); 73 + set_gcsr_hw_flag(LOONGARCH_CSR_PRCFG3); 74 + set_gcsr_hw_flag(LOONGARCH_CSR_KS0); 75 + set_gcsr_hw_flag(LOONGARCH_CSR_KS1); 76 + set_gcsr_hw_flag(LOONGARCH_CSR_KS2); 77 + set_gcsr_hw_flag(LOONGARCH_CSR_KS3); 78 + set_gcsr_hw_flag(LOONGARCH_CSR_KS4); 79 + set_gcsr_hw_flag(LOONGARCH_CSR_KS5); 80 + set_gcsr_hw_flag(LOONGARCH_CSR_KS6); 81 + set_gcsr_hw_flag(LOONGARCH_CSR_KS7); 82 + set_gcsr_hw_flag(LOONGARCH_CSR_TMID); 83 + set_gcsr_hw_flag(LOONGARCH_CSR_TCFG); 84 + set_gcsr_hw_flag(LOONGARCH_CSR_TVAL); 85 + set_gcsr_hw_flag(LOONGARCH_CSR_TINTCLR); 86 + set_gcsr_hw_flag(LOONGARCH_CSR_CNTC); 87 + set_gcsr_hw_flag(LOONGARCH_CSR_LLBCTL); 88 + set_gcsr_hw_flag(LOONGARCH_CSR_TLBRENTRY); 89 + set_gcsr_hw_flag(LOONGARCH_CSR_TLBRBADV); 90 + set_gcsr_hw_flag(LOONGARCH_CSR_TLBRERA); 91 + set_gcsr_hw_flag(LOONGARCH_CSR_TLBRSAVE); 92 + set_gcsr_hw_flag(LOONGARCH_CSR_TLBRELO0); 93 + set_gcsr_hw_flag(LOONGARCH_CSR_TLBRELO1); 94 + set_gcsr_hw_flag(LOONGARCH_CSR_TLBREHI); 95 + set_gcsr_hw_flag(LOONGARCH_CSR_TLBRPRMD); 96 + set_gcsr_hw_flag(LOONGARCH_CSR_DMWIN0); 97 + set_gcsr_hw_flag(LOONGARCH_CSR_DMWIN1); 98 + set_gcsr_hw_flag(LOONGARCH_CSR_DMWIN2); 
99 + set_gcsr_hw_flag(LOONGARCH_CSR_DMWIN3); 100 + 101 + set_gcsr_sw_flag(LOONGARCH_CSR_IMPCTL1); 102 + set_gcsr_sw_flag(LOONGARCH_CSR_IMPCTL2); 103 + set_gcsr_sw_flag(LOONGARCH_CSR_MERRCTL); 104 + set_gcsr_sw_flag(LOONGARCH_CSR_MERRINFO1); 105 + set_gcsr_sw_flag(LOONGARCH_CSR_MERRINFO2); 106 + set_gcsr_sw_flag(LOONGARCH_CSR_MERRENTRY); 107 + set_gcsr_sw_flag(LOONGARCH_CSR_MERRERA); 108 + set_gcsr_sw_flag(LOONGARCH_CSR_MERRSAVE); 109 + set_gcsr_sw_flag(LOONGARCH_CSR_CTAG); 110 + set_gcsr_sw_flag(LOONGARCH_CSR_DEBUG); 111 + set_gcsr_sw_flag(LOONGARCH_CSR_DERA); 112 + set_gcsr_sw_flag(LOONGARCH_CSR_DESAVE); 113 + 114 + set_gcsr_sw_flag(LOONGARCH_CSR_FWPC); 115 + set_gcsr_sw_flag(LOONGARCH_CSR_FWPS); 116 + set_gcsr_sw_flag(LOONGARCH_CSR_MWPC); 117 + set_gcsr_sw_flag(LOONGARCH_CSR_MWPS); 118 + 119 + set_gcsr_sw_flag(LOONGARCH_CSR_DB0ADDR); 120 + set_gcsr_sw_flag(LOONGARCH_CSR_DB0MASK); 121 + set_gcsr_sw_flag(LOONGARCH_CSR_DB0CTRL); 122 + set_gcsr_sw_flag(LOONGARCH_CSR_DB0ASID); 123 + set_gcsr_sw_flag(LOONGARCH_CSR_DB1ADDR); 124 + set_gcsr_sw_flag(LOONGARCH_CSR_DB1MASK); 125 + set_gcsr_sw_flag(LOONGARCH_CSR_DB1CTRL); 126 + set_gcsr_sw_flag(LOONGARCH_CSR_DB1ASID); 127 + set_gcsr_sw_flag(LOONGARCH_CSR_DB2ADDR); 128 + set_gcsr_sw_flag(LOONGARCH_CSR_DB2MASK); 129 + set_gcsr_sw_flag(LOONGARCH_CSR_DB2CTRL); 130 + set_gcsr_sw_flag(LOONGARCH_CSR_DB2ASID); 131 + set_gcsr_sw_flag(LOONGARCH_CSR_DB3ADDR); 132 + set_gcsr_sw_flag(LOONGARCH_CSR_DB3MASK); 133 + set_gcsr_sw_flag(LOONGARCH_CSR_DB3CTRL); 134 + set_gcsr_sw_flag(LOONGARCH_CSR_DB3ASID); 135 + set_gcsr_sw_flag(LOONGARCH_CSR_DB4ADDR); 136 + set_gcsr_sw_flag(LOONGARCH_CSR_DB4MASK); 137 + set_gcsr_sw_flag(LOONGARCH_CSR_DB4CTRL); 138 + set_gcsr_sw_flag(LOONGARCH_CSR_DB4ASID); 139 + set_gcsr_sw_flag(LOONGARCH_CSR_DB5ADDR); 140 + set_gcsr_sw_flag(LOONGARCH_CSR_DB5MASK); 141 + set_gcsr_sw_flag(LOONGARCH_CSR_DB5CTRL); 142 + set_gcsr_sw_flag(LOONGARCH_CSR_DB5ASID); 143 + set_gcsr_sw_flag(LOONGARCH_CSR_DB6ADDR); 144 + set_gcsr_sw_flag(LOONGARCH_CSR_DB6MASK); 145 + set_gcsr_sw_flag(LOONGARCH_CSR_DB6CTRL); 146 + set_gcsr_sw_flag(LOONGARCH_CSR_DB6ASID); 147 + set_gcsr_sw_flag(LOONGARCH_CSR_DB7ADDR); 148 + set_gcsr_sw_flag(LOONGARCH_CSR_DB7MASK); 149 + set_gcsr_sw_flag(LOONGARCH_CSR_DB7CTRL); 150 + set_gcsr_sw_flag(LOONGARCH_CSR_DB7ASID); 151 + 152 + set_gcsr_sw_flag(LOONGARCH_CSR_IB0ADDR); 153 + set_gcsr_sw_flag(LOONGARCH_CSR_IB0MASK); 154 + set_gcsr_sw_flag(LOONGARCH_CSR_IB0CTRL); 155 + set_gcsr_sw_flag(LOONGARCH_CSR_IB0ASID); 156 + set_gcsr_sw_flag(LOONGARCH_CSR_IB1ADDR); 157 + set_gcsr_sw_flag(LOONGARCH_CSR_IB1MASK); 158 + set_gcsr_sw_flag(LOONGARCH_CSR_IB1CTRL); 159 + set_gcsr_sw_flag(LOONGARCH_CSR_IB1ASID); 160 + set_gcsr_sw_flag(LOONGARCH_CSR_IB2ADDR); 161 + set_gcsr_sw_flag(LOONGARCH_CSR_IB2MASK); 162 + set_gcsr_sw_flag(LOONGARCH_CSR_IB2CTRL); 163 + set_gcsr_sw_flag(LOONGARCH_CSR_IB2ASID); 164 + set_gcsr_sw_flag(LOONGARCH_CSR_IB3ADDR); 165 + set_gcsr_sw_flag(LOONGARCH_CSR_IB3MASK); 166 + set_gcsr_sw_flag(LOONGARCH_CSR_IB3CTRL); 167 + set_gcsr_sw_flag(LOONGARCH_CSR_IB3ASID); 168 + set_gcsr_sw_flag(LOONGARCH_CSR_IB4ADDR); 169 + set_gcsr_sw_flag(LOONGARCH_CSR_IB4MASK); 170 + set_gcsr_sw_flag(LOONGARCH_CSR_IB4CTRL); 171 + set_gcsr_sw_flag(LOONGARCH_CSR_IB4ASID); 172 + set_gcsr_sw_flag(LOONGARCH_CSR_IB5ADDR); 173 + set_gcsr_sw_flag(LOONGARCH_CSR_IB5MASK); 174 + set_gcsr_sw_flag(LOONGARCH_CSR_IB5CTRL); 175 + set_gcsr_sw_flag(LOONGARCH_CSR_IB5ASID); 176 + set_gcsr_sw_flag(LOONGARCH_CSR_IB6ADDR); 177 + set_gcsr_sw_flag(LOONGARCH_CSR_IB6MASK); 178 + 
set_gcsr_sw_flag(LOONGARCH_CSR_IB6CTRL); 179 + set_gcsr_sw_flag(LOONGARCH_CSR_IB6ASID); 180 + set_gcsr_sw_flag(LOONGARCH_CSR_IB7ADDR); 181 + set_gcsr_sw_flag(LOONGARCH_CSR_IB7MASK); 182 + set_gcsr_sw_flag(LOONGARCH_CSR_IB7CTRL); 183 + set_gcsr_sw_flag(LOONGARCH_CSR_IB7ASID); 184 + 185 + set_gcsr_sw_flag(LOONGARCH_CSR_PERFCTRL0); 186 + set_gcsr_sw_flag(LOONGARCH_CSR_PERFCNTR0); 187 + set_gcsr_sw_flag(LOONGARCH_CSR_PERFCTRL1); 188 + set_gcsr_sw_flag(LOONGARCH_CSR_PERFCNTR1); 189 + set_gcsr_sw_flag(LOONGARCH_CSR_PERFCTRL2); 190 + set_gcsr_sw_flag(LOONGARCH_CSR_PERFCNTR2); 191 + set_gcsr_sw_flag(LOONGARCH_CSR_PERFCTRL3); 192 + set_gcsr_sw_flag(LOONGARCH_CSR_PERFCNTR3); 193 + } 194 + 195 + static void kvm_update_vpid(struct kvm_vcpu *vcpu, int cpu) 196 + { 197 + unsigned long vpid; 198 + struct kvm_context *context; 199 + 200 + context = per_cpu_ptr(vcpu->kvm->arch.vmcs, cpu); 201 + vpid = context->vpid_cache + 1; 202 + if (!(vpid & vpid_mask)) { 203 + /* finish round of vpid loop */ 204 + if (unlikely(!vpid)) 205 + vpid = vpid_mask + 1; 206 + 207 + ++vpid; /* vpid 0 reserved for root */ 208 + 209 + /* start new vpid cycle */ 210 + kvm_flush_tlb_all(); 211 + } 212 + 213 + context->vpid_cache = vpid; 214 + vcpu->arch.vpid = vpid; 215 + } 216 + 217 + void kvm_check_vpid(struct kvm_vcpu *vcpu) 218 + { 219 + int cpu; 220 + bool migrated; 221 + unsigned long ver, old, vpid; 222 + struct kvm_context *context; 223 + 224 + cpu = smp_processor_id(); 225 + /* 226 + * Are we entering guest context on a different CPU to last time? 227 + * If so, the vCPU's guest TLB state on this CPU may be stale. 228 + */ 229 + context = per_cpu_ptr(vcpu->kvm->arch.vmcs, cpu); 230 + migrated = (vcpu->cpu != cpu); 231 + 232 + /* 233 + * Check if our vpid is of an older version 234 + * 235 + * We also discard the stored vpid if we've executed on 236 + * another CPU, as the guest mappings may have changed without 237 + * hypervisor knowledge. 238 + */ 239 + ver = vcpu->arch.vpid & ~vpid_mask; 240 + old = context->vpid_cache & ~vpid_mask; 241 + if (migrated || (ver != old)) { 242 + kvm_update_vpid(vcpu, cpu); 243 + trace_kvm_vpid_change(vcpu, vcpu->arch.vpid); 244 + vcpu->cpu = cpu; 245 + } 246 + 247 + /* Restore GSTAT(0x50).vpid */ 248 + vpid = (vcpu->arch.vpid & vpid_mask) << CSR_GSTAT_GID_SHIFT; 249 + change_csr_gstat(vpid_mask << CSR_GSTAT_GID_SHIFT, vpid); 250 + } 251 + 252 + void kvm_init_vmcs(struct kvm *kvm) 253 + { 254 + kvm->arch.vmcs = vmcs; 255 + } 256 + 257 + long kvm_arch_dev_ioctl(struct file *filp, 258 + unsigned int ioctl, unsigned long arg) 259 + { 260 + return -ENOIOCTLCMD; 261 + } 262 + 263 + int kvm_arch_hardware_enable(void) 264 + { 265 + unsigned long env, gcfg = 0; 266 + 267 + env = read_csr_gcfg(); 268 + 269 + /* First init gcfg, gstat, gintc, gtlbc. All guest use the same config */ 270 + write_csr_gcfg(0); 271 + write_csr_gstat(0); 272 + write_csr_gintc(0); 273 + clear_csr_gtlbc(CSR_GTLBC_USETGID | CSR_GTLBC_TOTI); 274 + 275 + /* 276 + * Enable virtualization features granting guest direct control of 277 + * certain features: 278 + * GCI=2: Trap on init or unimplement cache instruction. 279 + * TORU=0: Trap on Root Unimplement. 280 + * CACTRL=1: Root control cache. 281 + * TOP=0: Trap on Previlege. 282 + * TOE=0: Trap on Exception. 283 + * TIT=0: Trap on Timer. 
284 + */ 285 + if (env & CSR_GCFG_GCIP_ALL) 286 + gcfg |= CSR_GCFG_GCI_SECURE; 287 + if (env & CSR_GCFG_MATC_ROOT) 288 + gcfg |= CSR_GCFG_MATC_ROOT; 289 + 290 + gcfg |= CSR_GCFG_TIT; 291 + write_csr_gcfg(gcfg); 292 + 293 + kvm_flush_tlb_all(); 294 + 295 + /* Enable using TGID */ 296 + set_csr_gtlbc(CSR_GTLBC_USETGID); 297 + kvm_debug("GCFG:%lx GSTAT:%lx GINTC:%lx GTLBC:%lx", 298 + read_csr_gcfg(), read_csr_gstat(), read_csr_gintc(), read_csr_gtlbc()); 299 + 300 + return 0; 301 + } 302 + 303 + void kvm_arch_hardware_disable(void) 304 + { 305 + write_csr_gcfg(0); 306 + write_csr_gstat(0); 307 + write_csr_gintc(0); 308 + clear_csr_gtlbc(CSR_GTLBC_USETGID | CSR_GTLBC_TOTI); 309 + 310 + /* Flush any remaining guest TLB entries */ 311 + kvm_flush_tlb_all(); 312 + } 313 + 314 + static int kvm_loongarch_env_init(void) 315 + { 316 + int cpu, order; 317 + void *addr; 318 + struct kvm_context *context; 319 + 320 + vmcs = alloc_percpu(struct kvm_context); 321 + if (!vmcs) { 322 + pr_err("kvm: failed to allocate percpu kvm_context\n"); 323 + return -ENOMEM; 324 + } 325 + 326 + kvm_loongarch_ops = kzalloc(sizeof(*kvm_loongarch_ops), GFP_KERNEL); 327 + if (!kvm_loongarch_ops) { 328 + free_percpu(vmcs); 329 + vmcs = NULL; 330 + return -ENOMEM; 331 + } 332 + 333 + /* 334 + * PGD register is shared between root kernel and kvm hypervisor. 335 + * So world switch entry should be in DMW area rather than TLB area 336 + * to avoid page fault reenter. 337 + * 338 + * In future if hardware pagetable walking is supported, we won't 339 + * need to copy world switch code to DMW area. 340 + */ 341 + order = get_order(kvm_exception_size + kvm_enter_guest_size); 342 + addr = (void *)__get_free_pages(GFP_KERNEL, order); 343 + if (!addr) { 344 + free_percpu(vmcs); 345 + vmcs = NULL; 346 + kfree(kvm_loongarch_ops); 347 + kvm_loongarch_ops = NULL; 348 + return -ENOMEM; 349 + } 350 + 351 + memcpy(addr, kvm_exc_entry, kvm_exception_size); 352 + memcpy(addr + kvm_exception_size, kvm_enter_guest, kvm_enter_guest_size); 353 + flush_icache_range((unsigned long)addr, (unsigned long)addr + kvm_exception_size + kvm_enter_guest_size); 354 + kvm_loongarch_ops->exc_entry = addr; 355 + kvm_loongarch_ops->enter_guest = addr + kvm_exception_size; 356 + kvm_loongarch_ops->page_order = order; 357 + 358 + vpid_mask = read_csr_gstat(); 359 + vpid_mask = (vpid_mask & CSR_GSTAT_GIDBIT) >> CSR_GSTAT_GIDBIT_SHIFT; 360 + if (vpid_mask) 361 + vpid_mask = GENMASK(vpid_mask - 1, 0); 362 + 363 + for_each_possible_cpu(cpu) { 364 + context = per_cpu_ptr(vmcs, cpu); 365 + context->vpid_cache = vpid_mask + 1; 366 + context->last_vcpu = NULL; 367 + } 368 + 369 + kvm_init_gcsr_flag(); 370 + 371 + return 0; 372 + } 373 + 374 + static void kvm_loongarch_env_exit(void) 375 + { 376 + unsigned long addr; 377 + 378 + if (vmcs) 379 + free_percpu(vmcs); 380 + 381 + if (kvm_loongarch_ops) { 382 + if (kvm_loongarch_ops->exc_entry) { 383 + addr = (unsigned long)kvm_loongarch_ops->exc_entry; 384 + free_pages(addr, kvm_loongarch_ops->page_order); 385 + } 386 + kfree(kvm_loongarch_ops); 387 + } 388 + } 389 + 390 + static int kvm_loongarch_init(void) 391 + { 392 + int r; 393 + 394 + if (!cpu_has_lvz) { 395 + kvm_info("Hardware virtualization not available\n"); 396 + return -ENODEV; 397 + } 398 + r = kvm_loongarch_env_init(); 399 + if (r) 400 + return r; 401 + 402 + return kvm_init(sizeof(struct kvm_vcpu), 0, THIS_MODULE); 403 + } 404 + 405 + static void kvm_loongarch_exit(void) 406 + { 407 + kvm_exit(); 408 + kvm_loongarch_env_exit(); 409 + } 410 + 411 + 
module_init(kvm_loongarch_init); 412 + module_exit(kvm_loongarch_exit); 413 + 414 + #ifdef MODULE 415 + static const struct cpu_feature kvm_feature[] = { 416 + { .feature = cpu_feature(LOONGARCH_LVZ) }, 417 + {}, 418 + }; 419 + MODULE_DEVICE_TABLE(cpu, kvm_feature); 420 + #endif
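kvm_update_vpid/kvm_check_vpid above treat the bits below vpid_mask as the hardware VPID and the bits above it as a generation counter: when the low bits wrap, the whole TLB is flushed and a new generation begins, with VPID 0 kept for the root. The sketch below (user-space C, not kernel code) models that allocation loop under those assumptions; VPID_MASK_SKETCH, flush_all_tlb and alloc_vpid are illustrative names only, not the in-tree interfaces.

/* Stand-alone sketch of the VPID scheme in kvm_update_vpid(): low bits
 * are the hardware VPID, high bits act as a generation counter, and a
 * wrap of the low bits forces a TLB flush plus a new generation.
 */
#include <stdio.h>

#define VPID_MASK_SKETCH 0xffUL             /* pretend 8 hardware VPID bits */

static unsigned long vpid_cache = VPID_MASK_SKETCH + 1; /* per-CPU in KVM */

static void flush_all_tlb(void)
{
	printf("  -> TLB flush, new VPID generation\n");
}

static unsigned long alloc_vpid(void)
{
	unsigned long vpid = vpid_cache + 1;

	if (!(vpid & VPID_MASK_SKETCH)) {   /* low bits wrapped */
		if (!vpid)                  /* generation counter wrapped too */
			vpid = VPID_MASK_SKETCH + 1;
		++vpid;                     /* VPID 0 stays reserved for root */
		flush_all_tlb();
	}

	vpid_cache = vpid;
	return vpid;
}

int main(void)
{
	for (int i = 0; i < 260; i++) {
		unsigned long v = alloc_vpid();

		if (i < 2 || i > 252)
			printf("alloc %3d: vpid %#lx (hw id %#lx)\n",
			       i, v, v & VPID_MASK_SKETCH);
	}
	return 0;
}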
+914
arch/loongarch/kvm/mmu.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Copyright (C) 2020-2023 Loongson Technology Corporation Limited 4 + */ 5 + 6 + #include <linux/highmem.h> 7 + #include <linux/hugetlb.h> 8 + #include <linux/kvm_host.h> 9 + #include <linux/page-flags.h> 10 + #include <linux/uaccess.h> 11 + #include <asm/mmu_context.h> 12 + #include <asm/pgalloc.h> 13 + #include <asm/tlb.h> 14 + #include <asm/kvm_mmu.h> 15 + 16 + static inline void kvm_ptw_prepare(struct kvm *kvm, kvm_ptw_ctx *ctx) 17 + { 18 + ctx->level = kvm->arch.root_level; 19 + /* pte table */ 20 + ctx->invalid_ptes = kvm->arch.invalid_ptes; 21 + ctx->pte_shifts = kvm->arch.pte_shifts; 22 + ctx->pgtable_shift = ctx->pte_shifts[ctx->level]; 23 + ctx->invalid_entry = ctx->invalid_ptes[ctx->level]; 24 + ctx->opaque = kvm; 25 + } 26 + 27 + /* 28 + * Mark a range of guest physical address space old (all accesses fault) in the 29 + * VM's GPA page table to allow detection of commonly used pages. 30 + */ 31 + static int kvm_mkold_pte(kvm_pte_t *pte, phys_addr_t addr, kvm_ptw_ctx *ctx) 32 + { 33 + if (kvm_pte_young(*pte)) { 34 + *pte = kvm_pte_mkold(*pte); 35 + return 1; 36 + } 37 + 38 + return 0; 39 + } 40 + 41 + /* 42 + * Mark a range of guest physical address space clean (writes fault) in the VM's 43 + * GPA page table to allow dirty page tracking. 44 + */ 45 + static int kvm_mkclean_pte(kvm_pte_t *pte, phys_addr_t addr, kvm_ptw_ctx *ctx) 46 + { 47 + gfn_t offset; 48 + kvm_pte_t val; 49 + 50 + val = *pte; 51 + /* 52 + * For kvm_arch_mmu_enable_log_dirty_pt_masked with mask, start and end 53 + * may cross hugepage, for first huge page parameter addr is equal to 54 + * start, however for the second huge page addr is base address of 55 + * this huge page, rather than start or end address 56 + */ 57 + if ((ctx->flag & _KVM_HAS_PGMASK) && !kvm_pte_huge(val)) { 58 + offset = (addr >> PAGE_SHIFT) - ctx->gfn; 59 + if (!(BIT(offset) & ctx->mask)) 60 + return 0; 61 + } 62 + 63 + /* 64 + * Need not split huge page now, just set write-proect pte bit 65 + * Split huge page until next write fault 66 + */ 67 + if (kvm_pte_dirty(val)) { 68 + *pte = kvm_pte_mkclean(val); 69 + return 1; 70 + } 71 + 72 + return 0; 73 + } 74 + 75 + /* 76 + * Clear pte entry 77 + */ 78 + static int kvm_flush_pte(kvm_pte_t *pte, phys_addr_t addr, kvm_ptw_ctx *ctx) 79 + { 80 + struct kvm *kvm; 81 + 82 + kvm = ctx->opaque; 83 + if (ctx->level) 84 + kvm->stat.hugepages--; 85 + else 86 + kvm->stat.pages--; 87 + 88 + *pte = ctx->invalid_entry; 89 + 90 + return 1; 91 + } 92 + 93 + /* 94 + * kvm_pgd_alloc() - Allocate and initialise a KVM GPA page directory. 95 + * 96 + * Allocate a blank KVM GPA page directory (PGD) for representing guest physical 97 + * to host physical page mappings. 98 + * 99 + * Returns: Pointer to new KVM GPA page directory. 100 + * NULL on allocation failure. 
101 + */ 102 + kvm_pte_t *kvm_pgd_alloc(void) 103 + { 104 + kvm_pte_t *pgd; 105 + 106 + pgd = (kvm_pte_t *)__get_free_pages(GFP_KERNEL, 0); 107 + if (pgd) 108 + pgd_init((void *)pgd); 109 + 110 + return pgd; 111 + } 112 + 113 + static void _kvm_pte_init(void *addr, unsigned long val) 114 + { 115 + unsigned long *p, *end; 116 + 117 + p = (unsigned long *)addr; 118 + end = p + PTRS_PER_PTE; 119 + do { 120 + p[0] = val; 121 + p[1] = val; 122 + p[2] = val; 123 + p[3] = val; 124 + p[4] = val; 125 + p += 8; 126 + p[-3] = val; 127 + p[-2] = val; 128 + p[-1] = val; 129 + } while (p != end); 130 + } 131 + 132 + /* 133 + * Caller must hold kvm->mm_lock 134 + * 135 + * Walk the page tables of kvm to find the PTE corresponding to the 136 + * address @addr. If page tables don't exist for @addr, they will be created 137 + * from the MMU cache if @cache is not NULL. 138 + */ 139 + static kvm_pte_t *kvm_populate_gpa(struct kvm *kvm, 140 + struct kvm_mmu_memory_cache *cache, 141 + unsigned long addr, int level) 142 + { 143 + kvm_ptw_ctx ctx; 144 + kvm_pte_t *entry, *child; 145 + 146 + kvm_ptw_prepare(kvm, &ctx); 147 + child = kvm->arch.pgd; 148 + while (ctx.level > level) { 149 + entry = kvm_pgtable_offset(&ctx, child, addr); 150 + if (kvm_pte_none(&ctx, entry)) { 151 + if (!cache) 152 + return NULL; 153 + 154 + child = kvm_mmu_memory_cache_alloc(cache); 155 + _kvm_pte_init(child, ctx.invalid_ptes[ctx.level - 1]); 156 + kvm_set_pte(entry, __pa(child)); 157 + } else if (kvm_pte_huge(*entry)) { 158 + return entry; 159 + } else 160 + child = (kvm_pte_t *)__va(PHYSADDR(*entry)); 161 + kvm_ptw_enter(&ctx); 162 + } 163 + 164 + entry = kvm_pgtable_offset(&ctx, child, addr); 165 + 166 + return entry; 167 + } 168 + 169 + /* 170 + * Page walker for VM shadow mmu at last level 171 + * The last level is small pte page or huge pmd page 172 + */ 173 + static int kvm_ptw_leaf(kvm_pte_t *dir, phys_addr_t addr, phys_addr_t end, kvm_ptw_ctx *ctx) 174 + { 175 + int ret; 176 + phys_addr_t next, start, size; 177 + struct list_head *list; 178 + kvm_pte_t *entry, *child; 179 + 180 + ret = 0; 181 + start = addr; 182 + child = (kvm_pte_t *)__va(PHYSADDR(*dir)); 183 + entry = kvm_pgtable_offset(ctx, child, addr); 184 + do { 185 + next = addr + (0x1UL << ctx->pgtable_shift); 186 + if (!kvm_pte_present(ctx, entry)) 187 + continue; 188 + 189 + ret |= ctx->ops(entry, addr, ctx); 190 + } while (entry++, addr = next, addr < end); 191 + 192 + if (kvm_need_flush(ctx)) { 193 + size = 0x1UL << (ctx->pgtable_shift + PAGE_SHIFT - 3); 194 + if (start + size == end) { 195 + list = (struct list_head *)child; 196 + list_add_tail(list, &ctx->list); 197 + *dir = ctx->invalid_ptes[ctx->level + 1]; 198 + } 199 + } 200 + 201 + return ret; 202 + } 203 + 204 + /* 205 + * Page walker for VM shadow mmu at page table dir level 206 + */ 207 + static int kvm_ptw_dir(kvm_pte_t *dir, phys_addr_t addr, phys_addr_t end, kvm_ptw_ctx *ctx) 208 + { 209 + int ret; 210 + phys_addr_t next, start, size; 211 + struct list_head *list; 212 + kvm_pte_t *entry, *child; 213 + 214 + ret = 0; 215 + start = addr; 216 + child = (kvm_pte_t *)__va(PHYSADDR(*dir)); 217 + entry = kvm_pgtable_offset(ctx, child, addr); 218 + do { 219 + next = kvm_pgtable_addr_end(ctx, addr, end); 220 + if (!kvm_pte_present(ctx, entry)) 221 + continue; 222 + 223 + if (kvm_pte_huge(*entry)) { 224 + ret |= ctx->ops(entry, addr, ctx); 225 + continue; 226 + } 227 + 228 + kvm_ptw_enter(ctx); 229 + if (ctx->level == 0) 230 + ret |= kvm_ptw_leaf(entry, addr, next, ctx); 231 + else 232 + ret |= 
kvm_ptw_dir(entry, addr, next, ctx); 233 + kvm_ptw_exit(ctx); 234 + } while (entry++, addr = next, addr < end); 235 + 236 + if (kvm_need_flush(ctx)) { 237 + size = 0x1UL << (ctx->pgtable_shift + PAGE_SHIFT - 3); 238 + if (start + size == end) { 239 + list = (struct list_head *)child; 240 + list_add_tail(list, &ctx->list); 241 + *dir = ctx->invalid_ptes[ctx->level + 1]; 242 + } 243 + } 244 + 245 + return ret; 246 + } 247 + 248 + /* 249 + * Page walker for VM shadow mmu at page root table 250 + */ 251 + static int kvm_ptw_top(kvm_pte_t *dir, phys_addr_t addr, phys_addr_t end, kvm_ptw_ctx *ctx) 252 + { 253 + int ret; 254 + phys_addr_t next; 255 + kvm_pte_t *entry; 256 + 257 + ret = 0; 258 + entry = kvm_pgtable_offset(ctx, dir, addr); 259 + do { 260 + next = kvm_pgtable_addr_end(ctx, addr, end); 261 + if (!kvm_pte_present(ctx, entry)) 262 + continue; 263 + 264 + kvm_ptw_enter(ctx); 265 + ret |= kvm_ptw_dir(entry, addr, next, ctx); 266 + kvm_ptw_exit(ctx); 267 + } while (entry++, addr = next, addr < end); 268 + 269 + return ret; 270 + } 271 + 272 + /* 273 + * kvm_flush_range() - Flush a range of guest physical addresses. 274 + * @kvm: KVM pointer. 275 + * @start_gfn: Guest frame number of first page in GPA range to flush. 276 + * @end_gfn: Guest frame number of last page in GPA range to flush. 277 + * @lock: Whether to hold mmu_lock or not 278 + * 279 + * Flushes a range of GPA mappings from the GPA page tables. 280 + */ 281 + static void kvm_flush_range(struct kvm *kvm, gfn_t start_gfn, gfn_t end_gfn, int lock) 282 + { 283 + int ret; 284 + kvm_ptw_ctx ctx; 285 + struct list_head *pos, *temp; 286 + 287 + ctx.ops = kvm_flush_pte; 288 + ctx.flag = _KVM_FLUSH_PGTABLE; 289 + kvm_ptw_prepare(kvm, &ctx); 290 + INIT_LIST_HEAD(&ctx.list); 291 + 292 + if (lock) { 293 + spin_lock(&kvm->mmu_lock); 294 + ret = kvm_ptw_top(kvm->arch.pgd, start_gfn << PAGE_SHIFT, 295 + end_gfn << PAGE_SHIFT, &ctx); 296 + spin_unlock(&kvm->mmu_lock); 297 + } else 298 + ret = kvm_ptw_top(kvm->arch.pgd, start_gfn << PAGE_SHIFT, 299 + end_gfn << PAGE_SHIFT, &ctx); 300 + 301 + /* Flush vpid for each vCPU individually */ 302 + if (ret) 303 + kvm_flush_remote_tlbs(kvm); 304 + 305 + /* 306 + * free pte table page after mmu_lock 307 + * the pte table page is linked together with ctx.list 308 + */ 309 + list_for_each_safe(pos, temp, &ctx.list) { 310 + list_del(pos); 311 + free_page((unsigned long)pos); 312 + } 313 + } 314 + 315 + /* 316 + * kvm_mkclean_gpa_pt() - Make a range of guest physical addresses clean. 317 + * @kvm: KVM pointer. 318 + * @start_gfn: Guest frame number of first page in GPA range to flush. 319 + * @end_gfn: Guest frame number of last page in GPA range to flush. 320 + * 321 + * Make a range of GPA mappings clean so that guest writes will fault and 322 + * trigger dirty page logging. 323 + * 324 + * The caller must hold the @kvm->mmu_lock spinlock. 325 + * 326 + * Returns: Whether any GPA mappings were modified, which would require 327 + * derived mappings (GVA page tables & TLB enties) to be 328 + * invalidated. 
329 + */ 330 + static int kvm_mkclean_gpa_pt(struct kvm *kvm, gfn_t start_gfn, gfn_t end_gfn) 331 + { 332 + kvm_ptw_ctx ctx; 333 + 334 + ctx.ops = kvm_mkclean_pte; 335 + ctx.flag = 0; 336 + kvm_ptw_prepare(kvm, &ctx); 337 + return kvm_ptw_top(kvm->arch.pgd, start_gfn << PAGE_SHIFT, end_gfn << PAGE_SHIFT, &ctx); 338 + } 339 + 340 + /* 341 + * kvm_arch_mmu_enable_log_dirty_pt_masked() - write protect dirty pages 342 + * @kvm: The KVM pointer 343 + * @slot: The memory slot associated with mask 344 + * @gfn_offset: The gfn offset in memory slot 345 + * @mask: The mask of dirty pages at offset 'gfn_offset' in this memory 346 + * slot to be write protected 347 + * 348 + * Walks bits set in mask write protects the associated pte's. Caller must 349 + * acquire @kvm->mmu_lock. 350 + */ 351 + void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm, 352 + struct kvm_memory_slot *slot, gfn_t gfn_offset, unsigned long mask) 353 + { 354 + kvm_ptw_ctx ctx; 355 + gfn_t base_gfn = slot->base_gfn + gfn_offset; 356 + gfn_t start = base_gfn + __ffs(mask); 357 + gfn_t end = base_gfn + __fls(mask) + 1; 358 + 359 + ctx.ops = kvm_mkclean_pte; 360 + ctx.flag = _KVM_HAS_PGMASK; 361 + ctx.mask = mask; 362 + ctx.gfn = base_gfn; 363 + kvm_ptw_prepare(kvm, &ctx); 364 + 365 + kvm_ptw_top(kvm->arch.pgd, start << PAGE_SHIFT, end << PAGE_SHIFT, &ctx); 366 + } 367 + 368 + void kvm_arch_commit_memory_region(struct kvm *kvm, 369 + struct kvm_memory_slot *old, 370 + const struct kvm_memory_slot *new, 371 + enum kvm_mr_change change) 372 + { 373 + int needs_flush; 374 + 375 + /* 376 + * If dirty page logging is enabled, write protect all pages in the slot 377 + * ready for dirty logging. 378 + * 379 + * There is no need to do this in any of the following cases: 380 + * CREATE: No dirty mappings will already exist. 381 + * MOVE/DELETE: The old mappings will already have been cleaned up by 382 + * kvm_arch_flush_shadow_memslot() 383 + */ 384 + if (change == KVM_MR_FLAGS_ONLY && 385 + (!(old->flags & KVM_MEM_LOG_DIRTY_PAGES) && 386 + new->flags & KVM_MEM_LOG_DIRTY_PAGES)) { 387 + spin_lock(&kvm->mmu_lock); 388 + /* Write protect GPA page table entries */ 389 + needs_flush = kvm_mkclean_gpa_pt(kvm, new->base_gfn, 390 + new->base_gfn + new->npages); 391 + spin_unlock(&kvm->mmu_lock); 392 + if (needs_flush) 393 + kvm_flush_remote_tlbs(kvm); 394 + } 395 + } 396 + 397 + void kvm_arch_flush_shadow_all(struct kvm *kvm) 398 + { 399 + kvm_flush_range(kvm, 0, kvm->arch.gpa_size >> PAGE_SHIFT, 0); 400 + } 401 + 402 + void kvm_arch_flush_shadow_memslot(struct kvm *kvm, struct kvm_memory_slot *slot) 403 + { 404 + /* 405 + * The slot has been made invalid (ready for moving or deletion), so we 406 + * need to ensure that it can no longer be accessed by any guest vCPUs. 
407 + */ 408 + kvm_flush_range(kvm, slot->base_gfn, slot->base_gfn + slot->npages, 1); 409 + } 410 + 411 + bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range) 412 + { 413 + kvm_ptw_ctx ctx; 414 + 415 + ctx.flag = 0; 416 + ctx.ops = kvm_flush_pte; 417 + kvm_ptw_prepare(kvm, &ctx); 418 + INIT_LIST_HEAD(&ctx.list); 419 + 420 + return kvm_ptw_top(kvm->arch.pgd, range->start << PAGE_SHIFT, 421 + range->end << PAGE_SHIFT, &ctx); 422 + } 423 + 424 + bool kvm_set_spte_gfn(struct kvm *kvm, struct kvm_gfn_range *range) 425 + { 426 + unsigned long prot_bits; 427 + kvm_pte_t *ptep; 428 + kvm_pfn_t pfn = pte_pfn(range->arg.pte); 429 + gpa_t gpa = range->start << PAGE_SHIFT; 430 + 431 + ptep = kvm_populate_gpa(kvm, NULL, gpa, 0); 432 + if (!ptep) 433 + return false; 434 + 435 + /* Replacing an absent or old page doesn't need flushes */ 436 + if (!kvm_pte_present(NULL, ptep) || !kvm_pte_young(*ptep)) { 437 + kvm_set_pte(ptep, 0); 438 + return false; 439 + } 440 + 441 + /* Fill new pte if write protected or page migrated */ 442 + prot_bits = _PAGE_PRESENT | __READABLE; 443 + prot_bits |= _CACHE_MASK & pte_val(range->arg.pte); 444 + 445 + /* 446 + * Set _PAGE_WRITE or _PAGE_DIRTY iff old and new pte both support 447 + * _PAGE_WRITE for map_page_fast if next page write fault 448 + * _PAGE_DIRTY since gpa has already recorded as dirty page 449 + */ 450 + prot_bits |= __WRITEABLE & *ptep & pte_val(range->arg.pte); 451 + kvm_set_pte(ptep, kvm_pfn_pte(pfn, __pgprot(prot_bits))); 452 + 453 + return true; 454 + } 455 + 456 + bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range) 457 + { 458 + kvm_ptw_ctx ctx; 459 + 460 + ctx.flag = 0; 461 + ctx.ops = kvm_mkold_pte; 462 + kvm_ptw_prepare(kvm, &ctx); 463 + 464 + return kvm_ptw_top(kvm->arch.pgd, range->start << PAGE_SHIFT, 465 + range->end << PAGE_SHIFT, &ctx); 466 + } 467 + 468 + bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range) 469 + { 470 + gpa_t gpa = range->start << PAGE_SHIFT; 471 + kvm_pte_t *ptep = kvm_populate_gpa(kvm, NULL, gpa, 0); 472 + 473 + if (ptep && kvm_pte_present(NULL, ptep) && kvm_pte_young(*ptep)) 474 + return true; 475 + 476 + return false; 477 + } 478 + 479 + /* 480 + * kvm_map_page_fast() - Fast path GPA fault handler. 481 + * @vcpu: vCPU pointer. 482 + * @gpa: Guest physical address of fault. 483 + * @write: Whether the fault was due to a write. 484 + * 485 + * Perform fast path GPA fault handling, doing all that can be done without 486 + * calling into KVM. This handles marking old pages young (for idle page 487 + * tracking), and dirtying of clean pages (for dirty page logging). 488 + * 489 + * Returns: 0 on success, in which case we can update derived mappings and 490 + * resume guest execution. 491 + * -EFAULT on failure due to absent GPA mapping or write to 492 + * read-only page, in which case KVM must be consulted. 
493 + */ 494 + static int kvm_map_page_fast(struct kvm_vcpu *vcpu, unsigned long gpa, bool write) 495 + { 496 + int ret = 0; 497 + kvm_pfn_t pfn = 0; 498 + kvm_pte_t *ptep, changed, new; 499 + gfn_t gfn = gpa >> PAGE_SHIFT; 500 + struct kvm *kvm = vcpu->kvm; 501 + struct kvm_memory_slot *slot; 502 + 503 + spin_lock(&kvm->mmu_lock); 504 + 505 + /* Fast path - just check GPA page table for an existing entry */ 506 + ptep = kvm_populate_gpa(kvm, NULL, gpa, 0); 507 + if (!ptep || !kvm_pte_present(NULL, ptep)) { 508 + ret = -EFAULT; 509 + goto out; 510 + } 511 + 512 + /* Track access to pages marked old */ 513 + new = *ptep; 514 + if (!kvm_pte_young(new)) 515 + new = kvm_pte_mkyoung(new); 516 + /* call kvm_set_pfn_accessed() after unlock */ 517 + 518 + if (write && !kvm_pte_dirty(new)) { 519 + if (!kvm_pte_write(new)) { 520 + ret = -EFAULT; 521 + goto out; 522 + } 523 + 524 + if (kvm_pte_huge(new)) { 525 + /* 526 + * Do not set write permission when dirty logging is 527 + * enabled for HugePages 528 + */ 529 + slot = gfn_to_memslot(kvm, gfn); 530 + if (kvm_slot_dirty_track_enabled(slot)) { 531 + ret = -EFAULT; 532 + goto out; 533 + } 534 + } 535 + 536 + /* Track dirtying of writeable pages */ 537 + new = kvm_pte_mkdirty(new); 538 + } 539 + 540 + changed = new ^ (*ptep); 541 + if (changed) { 542 + kvm_set_pte(ptep, new); 543 + pfn = kvm_pte_pfn(new); 544 + } 545 + spin_unlock(&kvm->mmu_lock); 546 + 547 + /* 548 + * Fixme: pfn may be freed after mmu_lock 549 + * kvm_try_get_pfn(pfn)/kvm_release_pfn pair to prevent this? 550 + */ 551 + if (kvm_pte_young(changed)) 552 + kvm_set_pfn_accessed(pfn); 553 + 554 + if (kvm_pte_dirty(changed)) { 555 + mark_page_dirty(kvm, gfn); 556 + kvm_set_pfn_dirty(pfn); 557 + } 558 + return ret; 559 + out: 560 + spin_unlock(&kvm->mmu_lock); 561 + return ret; 562 + } 563 + 564 + static bool fault_supports_huge_mapping(struct kvm_memory_slot *memslot, 565 + unsigned long hva, unsigned long map_size, bool write) 566 + { 567 + size_t size; 568 + gpa_t gpa_start; 569 + hva_t uaddr_start, uaddr_end; 570 + 571 + /* Disable dirty logging on HugePages */ 572 + if (kvm_slot_dirty_track_enabled(memslot) && write) 573 + return false; 574 + 575 + size = memslot->npages * PAGE_SIZE; 576 + gpa_start = memslot->base_gfn << PAGE_SHIFT; 577 + uaddr_start = memslot->userspace_addr; 578 + uaddr_end = uaddr_start + size; 579 + 580 + /* 581 + * Pages belonging to memslots that don't have the same alignment 582 + * within a PMD for userspace and GPA cannot be mapped with stage-2 583 + * PMD entries, because we'll end up mapping the wrong pages. 584 + * 585 + * Consider a layout like the following: 586 + * 587 + * memslot->userspace_addr: 588 + * +-----+--------------------+--------------------+---+ 589 + * |abcde|fgh Stage-1 block | Stage-1 block tv|xyz| 590 + * +-----+--------------------+--------------------+---+ 591 + * 592 + * memslot->base_gfn << PAGE_SIZE: 593 + * +---+--------------------+--------------------+-----+ 594 + * |abc|def Stage-2 block | Stage-2 block |tvxyz| 595 + * +---+--------------------+--------------------+-----+ 596 + * 597 + * If we create those stage-2 blocks, we'll end up with this incorrect 598 + * mapping: 599 + * d -> f 600 + * e -> g 601 + * f -> h 602 + */ 603 + if ((gpa_start & (map_size - 1)) != (uaddr_start & (map_size - 1))) 604 + return false; 605 + 606 + /* 607 + * Next, let's make sure we're not trying to map anything not covered 608 + * by the memslot. 
This means we have to prohibit block size mappings 609 + * for the beginning and end of a non-block aligned and non-block sized 610 + * memory slot (illustrated by the head and tail parts of the 611 + * userspace view above containing pages 'abcde' and 'xyz', 612 + * respectively). 613 + * 614 + * Note that it doesn't matter if we do the check using the 615 + * userspace_addr or the base_gfn, as both are equally aligned (per 616 + * the check above) and equally sized. 617 + */ 618 + return (hva & ~(map_size - 1)) >= uaddr_start && 619 + (hva & ~(map_size - 1)) + map_size <= uaddr_end; 620 + } 621 + 622 + /* 623 + * Lookup the mapping level for @gfn in the current mm. 624 + * 625 + * WARNING! Use of host_pfn_mapping_level() requires the caller and the end 626 + * consumer to be tied into KVM's handlers for MMU notifier events! 627 + * 628 + * There are several ways to safely use this helper: 629 + * 630 + * - Check mmu_invalidate_retry_hva() after grabbing the mapping level, before 631 + * consuming it. In this case, mmu_lock doesn't need to be held during the 632 + * lookup, but it does need to be held while checking the MMU notifier. 633 + * 634 + * - Hold mmu_lock AND ensure there is no in-progress MMU notifier invalidation 635 + * event for the hva. This can be done by explicit checking the MMU notifier 636 + * or by ensuring that KVM already has a valid mapping that covers the hva. 637 + * 638 + * - Do not use the result to install new mappings, e.g. use the host mapping 639 + * level only to decide whether or not to zap an entry. In this case, it's 640 + * not required to hold mmu_lock (though it's highly likely the caller will 641 + * want to hold mmu_lock anyways, e.g. to modify SPTEs). 642 + * 643 + * Note! The lookup can still race with modifications to host page tables, but 644 + * the above "rules" ensure KVM will not _consume_ the result of the walk if a 645 + * race with the primary MMU occurs. 646 + */ 647 + static int host_pfn_mapping_level(struct kvm *kvm, gfn_t gfn, 648 + const struct kvm_memory_slot *slot) 649 + { 650 + int level = 0; 651 + unsigned long hva; 652 + unsigned long flags; 653 + pgd_t pgd; 654 + p4d_t p4d; 655 + pud_t pud; 656 + pmd_t pmd; 657 + 658 + /* 659 + * Note, using the already-retrieved memslot and __gfn_to_hva_memslot() 660 + * is not solely for performance, it's also necessary to avoid the 661 + * "writable" check in __gfn_to_hva_many(), which will always fail on 662 + * read-only memslots due to gfn_to_hva() assuming writes. Earlier 663 + * page fault steps have already verified the guest isn't writing a 664 + * read-only memslot. 665 + */ 666 + hva = __gfn_to_hva_memslot(slot, gfn); 667 + 668 + /* 669 + * Disable IRQs to prevent concurrent tear down of host page tables, 670 + * e.g. if the primary MMU promotes a P*D to a huge page and then frees 671 + * the original page table. 672 + */ 673 + local_irq_save(flags); 674 + 675 + /* 676 + * Read each entry once. As above, a non-leaf entry can be promoted to 677 + * a huge page _during_ this walk. Re-reading the entry could send the 678 + * walk into the weeks, e.g. p*d_large() returns false (sees the old 679 + * value) and then p*d_offset() walks into the target huge page instead 680 + * of the old page table (sees the new value). 
681 + */ 682 + pgd = READ_ONCE(*pgd_offset(kvm->mm, hva)); 683 + if (pgd_none(pgd)) 684 + goto out; 685 + 686 + p4d = READ_ONCE(*p4d_offset(&pgd, hva)); 687 + if (p4d_none(p4d) || !p4d_present(p4d)) 688 + goto out; 689 + 690 + pud = READ_ONCE(*pud_offset(&p4d, hva)); 691 + if (pud_none(pud) || !pud_present(pud)) 692 + goto out; 693 + 694 + pmd = READ_ONCE(*pmd_offset(&pud, hva)); 695 + if (pmd_none(pmd) || !pmd_present(pmd)) 696 + goto out; 697 + 698 + if (kvm_pte_huge(pmd_val(pmd))) 699 + level = 1; 700 + 701 + out: 702 + local_irq_restore(flags); 703 + return level; 704 + } 705 + 706 + /* 707 + * Split huge page 708 + */ 709 + static kvm_pte_t *kvm_split_huge(struct kvm_vcpu *vcpu, kvm_pte_t *ptep, gfn_t gfn) 710 + { 711 + int i; 712 + kvm_pte_t val, *child; 713 + struct kvm *kvm = vcpu->kvm; 714 + struct kvm_mmu_memory_cache *memcache; 715 + 716 + memcache = &vcpu->arch.mmu_page_cache; 717 + child = kvm_mmu_memory_cache_alloc(memcache); 718 + val = kvm_pte_mksmall(*ptep); 719 + for (i = 0; i < PTRS_PER_PTE; i++) { 720 + kvm_set_pte(child + i, val); 721 + val += PAGE_SIZE; 722 + } 723 + 724 + /* The later kvm_flush_tlb_gpa() will flush hugepage tlb */ 725 + kvm_set_pte(ptep, __pa(child)); 726 + 727 + kvm->stat.hugepages--; 728 + kvm->stat.pages += PTRS_PER_PTE; 729 + 730 + return child + (gfn & (PTRS_PER_PTE - 1)); 731 + } 732 + 733 + /* 734 + * kvm_map_page() - Map a guest physical page. 735 + * @vcpu: vCPU pointer. 736 + * @gpa: Guest physical address of fault. 737 + * @write: Whether the fault was due to a write. 738 + * 739 + * Handle GPA faults by creating a new GPA mapping (or updating an existing 740 + * one). 741 + * 742 + * This takes care of marking pages young or dirty (idle/dirty page tracking), 743 + * asking KVM for the corresponding PFN, and creating a mapping in the GPA page 744 + * tables. Derived mappings (GVA page tables and TLBs) must be handled by the 745 + * caller. 746 + * 747 + * Returns: 0 on success 748 + * -EFAULT if there is no memory region at @gpa or a write was 749 + * attempted to a read-only memory region. This is usually handled 750 + * as an MMIO access. 751 + */ 752 + static int kvm_map_page(struct kvm_vcpu *vcpu, unsigned long gpa, bool write) 753 + { 754 + bool writeable; 755 + int srcu_idx, err, retry_no = 0, level; 756 + unsigned long hva, mmu_seq, prot_bits; 757 + kvm_pfn_t pfn; 758 + kvm_pte_t *ptep, new_pte; 759 + gfn_t gfn = gpa >> PAGE_SHIFT; 760 + struct kvm *kvm = vcpu->kvm; 761 + struct kvm_memory_slot *memslot; 762 + struct kvm_mmu_memory_cache *memcache = &vcpu->arch.mmu_page_cache; 763 + 764 + /* Try the fast path to handle old / clean pages */ 765 + srcu_idx = srcu_read_lock(&kvm->srcu); 766 + err = kvm_map_page_fast(vcpu, gpa, write); 767 + if (!err) 768 + goto out; 769 + 770 + memslot = gfn_to_memslot(kvm, gfn); 771 + hva = gfn_to_hva_memslot_prot(memslot, gfn, &writeable); 772 + if (kvm_is_error_hva(hva) || (write && !writeable)) { 773 + err = -EFAULT; 774 + goto out; 775 + } 776 + 777 + /* We need a minimum of cached pages ready for page table creation */ 778 + err = kvm_mmu_topup_memory_cache(memcache, KVM_MMU_CACHE_MIN_PAGES); 779 + if (err) 780 + goto out; 781 + 782 + retry: 783 + /* 784 + * Used to check for invalidations in progress, of the pfn that is 785 + * returned by pfn_to_pfn_prot below. 
786 + */ 787 + mmu_seq = kvm->mmu_invalidate_seq; 788 + /* 789 + * Ensure the read of mmu_invalidate_seq isn't reordered with PTE reads in 790 + * gfn_to_pfn_prot() (which calls get_user_pages()), so that we don't 791 + * risk the page we get a reference to getting unmapped before we have a 792 + * chance to grab the mmu_lock without mmu_invalidate_retry() noticing. 793 + * 794 + * This smp_rmb() pairs with the effective smp_wmb() of the combination 795 + * of the pte_unmap_unlock() after the PTE is zapped, and the 796 + * spin_lock() in kvm_mmu_invalidate_invalidate_<page|range_end>() before 797 + * mmu_invalidate_seq is incremented. 798 + */ 799 + smp_rmb(); 800 + 801 + /* Slow path - ask KVM core whether we can access this GPA */ 802 + pfn = gfn_to_pfn_prot(kvm, gfn, write, &writeable); 803 + if (is_error_noslot_pfn(pfn)) { 804 + err = -EFAULT; 805 + goto out; 806 + } 807 + 808 + /* Check if an invalidation has taken place since we got pfn */ 809 + spin_lock(&kvm->mmu_lock); 810 + if (mmu_invalidate_retry_hva(kvm, mmu_seq, hva)) { 811 + /* 812 + * This can happen when mappings are changed asynchronously, but 813 + * also synchronously if a COW is triggered by 814 + * gfn_to_pfn_prot(). 815 + */ 816 + spin_unlock(&kvm->mmu_lock); 817 + kvm_release_pfn_clean(pfn); 818 + if (retry_no > 100) { 819 + retry_no = 0; 820 + schedule(); 821 + } 822 + retry_no++; 823 + goto retry; 824 + } 825 + 826 + /* 827 + * For emulated devices such virtio device, actual cache attribute is 828 + * determined by physical machine. 829 + * For pass through physical device, it should be uncachable 830 + */ 831 + prot_bits = _PAGE_PRESENT | __READABLE; 832 + if (pfn_valid(pfn)) 833 + prot_bits |= _CACHE_CC; 834 + else 835 + prot_bits |= _CACHE_SUC; 836 + 837 + if (writeable) { 838 + prot_bits |= _PAGE_WRITE; 839 + if (write) 840 + prot_bits |= __WRITEABLE; 841 + } 842 + 843 + /* Disable dirty logging on HugePages */ 844 + level = 0; 845 + if (!fault_supports_huge_mapping(memslot, hva, PMD_SIZE, write)) { 846 + level = 0; 847 + } else { 848 + level = host_pfn_mapping_level(kvm, gfn, memslot); 849 + if (level == 1) { 850 + gfn = gfn & ~(PTRS_PER_PTE - 1); 851 + pfn = pfn & ~(PTRS_PER_PTE - 1); 852 + } 853 + } 854 + 855 + /* Ensure page tables are allocated */ 856 + ptep = kvm_populate_gpa(kvm, memcache, gpa, level); 857 + new_pte = kvm_pfn_pte(pfn, __pgprot(prot_bits)); 858 + if (level == 1) { 859 + new_pte = kvm_pte_mkhuge(new_pte); 860 + /* 861 + * previous pmd entry is invalid_pte_table 862 + * there is invalid tlb with small page 863 + * need flush these invalid tlbs for current vcpu 864 + */ 865 + kvm_make_request(KVM_REQ_TLB_FLUSH, vcpu); 866 + ++kvm->stat.hugepages; 867 + } else if (kvm_pte_huge(*ptep) && write) 868 + ptep = kvm_split_huge(vcpu, ptep, gfn); 869 + else 870 + ++kvm->stat.pages; 871 + kvm_set_pte(ptep, new_pte); 872 + spin_unlock(&kvm->mmu_lock); 873 + 874 + if (prot_bits & _PAGE_DIRTY) { 875 + mark_page_dirty_in_slot(kvm, memslot, gfn); 876 + kvm_set_pfn_dirty(pfn); 877 + } 878 + 879 + kvm_set_pfn_accessed(pfn); 880 + kvm_release_pfn_clean(pfn); 881 + out: 882 + srcu_read_unlock(&kvm->srcu, srcu_idx); 883 + return err; 884 + } 885 + 886 + int kvm_handle_mm_fault(struct kvm_vcpu *vcpu, unsigned long gpa, bool write) 887 + { 888 + int ret; 889 + 890 + ret = kvm_map_page(vcpu, gpa, write); 891 + if (ret) 892 + return ret; 893 + 894 + /* Invalidate this entry in the TLB */ 895 + kvm_flush_tlb_gpa(vcpu, gpa); 896 + 897 + return 0; 898 + } 899 + 900 + void kvm_arch_sync_dirty_log(struct kvm *kvm, 
struct kvm_memory_slot *memslot) 901 + { 902 + } 903 + 904 + int kvm_arch_prepare_memory_region(struct kvm *kvm, const struct kvm_memory_slot *old, 905 + struct kvm_memory_slot *new, enum kvm_mr_change change) 906 + { 907 + return 0; 908 + } 909 + 910 + void kvm_arch_flush_remote_tlbs_memslot(struct kvm *kvm, 911 + const struct kvm_memory_slot *memslot) 912 + { 913 + kvm_flush_remote_tlbs(kvm); 914 + }
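The huge-page decision above comes down to the final range test in fault_supports_huge_mapping(): a PMD-sized block is allowed only when the block containing the faulting hva, rounded down to the block size, lies entirely inside the memslot's userspace range. Below is a minimal standalone sketch of just that test, assuming 4 KiB pages and 2 MiB blocks; the helper name hva_block_fits() and the sample addresses are illustrative only, and the earlier congruence check between userspace_addr and base_gfn that the kernel also performs is not modelled here.

/* Illustrative sketch, not kernel code: the block-fits-in-slot test. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define MAP_SIZE 0x200000ULL                    /* PMD-sized block (assumed) */

static bool hva_block_fits(uint64_t uaddr_start, uint64_t uaddr_end,
                           uint64_t hva)
{
    uint64_t block = hva & ~(MAP_SIZE - 1);     /* round hva down to a block */

    /* The whole block must sit inside the memslot's hva range. */
    return block >= uaddr_start && block + MAP_SIZE <= uaddr_end;
}

int main(void)
{
    uint64_t start = 0x40001000ULL;             /* slot hva not block aligned */
    uint64_t end   = start + 4 * MAP_SIZE;

    printf("%d\n", hva_block_fits(start, end, start));            /* 0: head */
    printf("%d\n", hva_block_fits(start, end, start + MAP_SIZE)); /* 1: interior */
    return 0;
}

A slot whose hva starts one page past a block boundary therefore has its head (and tail) mapped with small pages, while fully covered interior blocks remain eligible for huge mappings.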
+250
arch/loongarch/kvm/switch.S
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + /* 3 + * Copyright (C) 2020-2023 Loongson Technology Corporation Limited 4 + */ 5 + 6 + #include <linux/linkage.h> 7 + #include <asm/asm.h> 8 + #include <asm/asmmacro.h> 9 + #include <asm/loongarch.h> 10 + #include <asm/regdef.h> 11 + #include <asm/stackframe.h> 12 + 13 + #define HGPR_OFFSET(x) (PT_R0 + 8*x) 14 + #define GGPR_OFFSET(x) (KVM_ARCH_GGPR + 8*x) 15 + 16 + .macro kvm_save_host_gpr base 17 + .irp n,1,2,3,22,23,24,25,26,27,28,29,30,31 18 + st.d $r\n, \base, HGPR_OFFSET(\n) 19 + .endr 20 + .endm 21 + 22 + .macro kvm_restore_host_gpr base 23 + .irp n,1,2,3,22,23,24,25,26,27,28,29,30,31 24 + ld.d $r\n, \base, HGPR_OFFSET(\n) 25 + .endr 26 + .endm 27 + 28 + /* 29 + * Save and restore all GPRs except base register, 30 + * and default value of base register is a2. 31 + */ 32 + .macro kvm_save_guest_gprs base 33 + .irp n,1,2,3,4,5,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31 34 + st.d $r\n, \base, GGPR_OFFSET(\n) 35 + .endr 36 + .endm 37 + 38 + .macro kvm_restore_guest_gprs base 39 + .irp n,1,2,3,4,5,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31 40 + ld.d $r\n, \base, GGPR_OFFSET(\n) 41 + .endr 42 + .endm 43 + 44 + /* 45 + * Prepare switch to guest, save host regs and restore guest regs. 46 + * a2: kvm_vcpu_arch, don't touch it until 'ertn' 47 + * t0, t1: temp register 48 + */ 49 + .macro kvm_switch_to_guest 50 + /* Set host ECFG.VS=0, all exceptions share one exception entry */ 51 + csrrd t0, LOONGARCH_CSR_ECFG 52 + bstrins.w t0, zero, CSR_ECFG_VS_SHIFT_END, CSR_ECFG_VS_SHIFT 53 + csrwr t0, LOONGARCH_CSR_ECFG 54 + 55 + /* Load up the new EENTRY */ 56 + ld.d t0, a2, KVM_ARCH_GEENTRY 57 + csrwr t0, LOONGARCH_CSR_EENTRY 58 + 59 + /* Set Guest ERA */ 60 + ld.d t0, a2, KVM_ARCH_GPC 61 + csrwr t0, LOONGARCH_CSR_ERA 62 + 63 + /* Save host PGDL */ 64 + csrrd t0, LOONGARCH_CSR_PGDL 65 + st.d t0, a2, KVM_ARCH_HPGD 66 + 67 + /* Switch to kvm */ 68 + ld.d t1, a2, KVM_VCPU_KVM - KVM_VCPU_ARCH 69 + 70 + /* Load guest PGDL */ 71 + li.w t0, KVM_GPGD 72 + ldx.d t0, t1, t0 73 + csrwr t0, LOONGARCH_CSR_PGDL 74 + 75 + /* Mix GID and RID */ 76 + csrrd t1, LOONGARCH_CSR_GSTAT 77 + bstrpick.w t1, t1, CSR_GSTAT_GID_SHIFT_END, CSR_GSTAT_GID_SHIFT 78 + csrrd t0, LOONGARCH_CSR_GTLBC 79 + bstrins.w t0, t1, CSR_GTLBC_TGID_SHIFT_END, CSR_GTLBC_TGID_SHIFT 80 + csrwr t0, LOONGARCH_CSR_GTLBC 81 + 82 + /* 83 + * Enable intr in root mode with future ertn so that host interrupt 84 + * can be responsed during VM runs 85 + * Guest CRMD comes from separate GCSR_CRMD register 86 + */ 87 + ori t0, zero, CSR_PRMD_PIE 88 + csrxchg t0, t0, LOONGARCH_CSR_PRMD 89 + 90 + /* Set PVM bit to setup ertn to guest context */ 91 + ori t0, zero, CSR_GSTAT_PVM 92 + csrxchg t0, t0, LOONGARCH_CSR_GSTAT 93 + 94 + /* Load Guest GPRs */ 95 + kvm_restore_guest_gprs a2 96 + /* Load KVM_ARCH register */ 97 + ld.d a2, a2, (KVM_ARCH_GGPR + 8 * REG_A2) 98 + 99 + ertn /* Switch to guest: GSTAT.PGM = 1, ERRCTL.ISERR = 0, TLBRPRMD.ISTLBR = 0 */ 100 + .endm 101 + 102 + /* 103 + * Exception entry for general exception from guest mode 104 + * - IRQ is disabled 105 + * - kernel privilege in root mode 106 + * - page mode keep unchanged from previous PRMD in root mode 107 + * - Fixme: tlb exception cannot happen since registers relative with TLB 108 + * - is still in guest mode, such as pgd table/vmid registers etc, 109 + * - will fix with hw page walk enabled in future 110 + * load kvm_vcpu from reserved CSR KVM_VCPU_KS, and save a2 to KVM_TEMP_KS 111 + */ 
112 + .text 113 + .cfi_sections .debug_frame 114 + SYM_CODE_START(kvm_exc_entry) 115 + csrwr a2, KVM_TEMP_KS 116 + csrrd a2, KVM_VCPU_KS 117 + addi.d a2, a2, KVM_VCPU_ARCH 118 + 119 + /* After save GPRs, free to use any GPR */ 120 + kvm_save_guest_gprs a2 121 + /* Save guest A2 */ 122 + csrrd t0, KVM_TEMP_KS 123 + st.d t0, a2, (KVM_ARCH_GGPR + 8 * REG_A2) 124 + 125 + /* A2 is kvm_vcpu_arch, A1 is free to use */ 126 + csrrd s1, KVM_VCPU_KS 127 + ld.d s0, s1, KVM_VCPU_RUN 128 + 129 + csrrd t0, LOONGARCH_CSR_ESTAT 130 + st.d t0, a2, KVM_ARCH_HESTAT 131 + csrrd t0, LOONGARCH_CSR_ERA 132 + st.d t0, a2, KVM_ARCH_GPC 133 + csrrd t0, LOONGARCH_CSR_BADV 134 + st.d t0, a2, KVM_ARCH_HBADV 135 + csrrd t0, LOONGARCH_CSR_BADI 136 + st.d t0, a2, KVM_ARCH_HBADI 137 + 138 + /* Restore host ECFG.VS */ 139 + csrrd t0, LOONGARCH_CSR_ECFG 140 + ld.d t1, a2, KVM_ARCH_HECFG 141 + or t0, t0, t1 142 + csrwr t0, LOONGARCH_CSR_ECFG 143 + 144 + /* Restore host EENTRY */ 145 + ld.d t0, a2, KVM_ARCH_HEENTRY 146 + csrwr t0, LOONGARCH_CSR_EENTRY 147 + 148 + /* Restore host pgd table */ 149 + ld.d t0, a2, KVM_ARCH_HPGD 150 + csrwr t0, LOONGARCH_CSR_PGDL 151 + 152 + /* 153 + * Disable PGM bit to enter root mode by default with next ertn 154 + */ 155 + ori t0, zero, CSR_GSTAT_PVM 156 + csrxchg zero, t0, LOONGARCH_CSR_GSTAT 157 + 158 + /* 159 + * Clear GTLBC.TGID field 160 + * 0: for root tlb update in future tlb instr 161 + * others: for guest tlb update like gpa to hpa in future tlb instr 162 + */ 163 + csrrd t0, LOONGARCH_CSR_GTLBC 164 + bstrins.w t0, zero, CSR_GTLBC_TGID_SHIFT_END, CSR_GTLBC_TGID_SHIFT 165 + csrwr t0, LOONGARCH_CSR_GTLBC 166 + ld.d tp, a2, KVM_ARCH_HTP 167 + ld.d sp, a2, KVM_ARCH_HSP 168 + /* restore per cpu register */ 169 + ld.d u0, a2, KVM_ARCH_HPERCPU 170 + addi.d sp, sp, -PT_SIZE 171 + 172 + /* Prepare handle exception */ 173 + or a0, s0, zero 174 + or a1, s1, zero 175 + ld.d t8, a2, KVM_ARCH_HANDLE_EXIT 176 + jirl ra, t8, 0 177 + 178 + or a2, s1, zero 179 + addi.d a2, a2, KVM_VCPU_ARCH 180 + 181 + /* Resume host when ret <= 0 */ 182 + blez a0, ret_to_host 183 + 184 + /* 185 + * Return to guest 186 + * Save per cpu register again, maybe switched to another cpu 187 + */ 188 + st.d u0, a2, KVM_ARCH_HPERCPU 189 + 190 + /* Save kvm_vcpu to kscratch */ 191 + csrwr s1, KVM_VCPU_KS 192 + kvm_switch_to_guest 193 + 194 + ret_to_host: 195 + ld.d a2, a2, KVM_ARCH_HSP 196 + addi.d a2, a2, -PT_SIZE 197 + kvm_restore_host_gpr a2 198 + jr ra 199 + 200 + SYM_INNER_LABEL(kvm_exc_entry_end, SYM_L_LOCAL) 201 + SYM_CODE_END(kvm_exc_entry) 202 + 203 + /* 204 + * int kvm_enter_guest(struct kvm_run *run, struct kvm_vcpu *vcpu) 205 + * 206 + * @register_param: 207 + * a0: kvm_run* run 208 + * a1: kvm_vcpu* vcpu 209 + */ 210 + SYM_FUNC_START(kvm_enter_guest) 211 + /* Allocate space in stack bottom */ 212 + addi.d a2, sp, -PT_SIZE 213 + /* Save host GPRs */ 214 + kvm_save_host_gpr a2 215 + 216 + /* Save host CRMD, PRMD to stack */ 217 + csrrd a3, LOONGARCH_CSR_CRMD 218 + st.d a3, a2, PT_CRMD 219 + csrrd a3, LOONGARCH_CSR_PRMD 220 + st.d a3, a2, PT_PRMD 221 + 222 + addi.d a2, a1, KVM_VCPU_ARCH 223 + st.d sp, a2, KVM_ARCH_HSP 224 + st.d tp, a2, KVM_ARCH_HTP 225 + /* Save per cpu register */ 226 + st.d u0, a2, KVM_ARCH_HPERCPU 227 + 228 + /* Save kvm_vcpu to kscratch */ 229 + csrwr a1, KVM_VCPU_KS 230 + kvm_switch_to_guest 231 + SYM_INNER_LABEL(kvm_enter_guest_end, SYM_L_LOCAL) 232 + SYM_FUNC_END(kvm_enter_guest) 233 + 234 + SYM_FUNC_START(kvm_save_fpu) 235 + fpu_save_csr a0 t1 236 + fpu_save_double a0 t1 237 + fpu_save_cc a0 
t1 t2 238 + jr ra 239 + SYM_FUNC_END(kvm_save_fpu) 240 + 241 + SYM_FUNC_START(kvm_restore_fpu) 242 + fpu_restore_double a0 t1 243 + fpu_restore_csr a0 t1 t2 244 + fpu_restore_cc a0 t1 t2 245 + jr ra 246 + SYM_FUNC_END(kvm_restore_fpu) 247 + 248 + .section ".rodata" 249 + SYM_DATA(kvm_exception_size, .quad kvm_exc_entry_end - kvm_exc_entry) 250 + SYM_DATA(kvm_enter_guest_size, .quad kvm_enter_guest_end - kvm_enter_guest)
+197
arch/loongarch/kvm/timer.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Copyright (C) 2020-2023 Loongson Technology Corporation Limited 4 + */ 5 + 6 + #include <linux/kvm_host.h> 7 + #include <asm/kvm_csr.h> 8 + #include <asm/kvm_vcpu.h> 9 + 10 + /* 11 + * ktime_to_tick() - Scale ktime_t to timer tick value. 12 + */ 13 + static inline u64 ktime_to_tick(struct kvm_vcpu *vcpu, ktime_t now) 14 + { 15 + u64 delta; 16 + 17 + delta = ktime_to_ns(now); 18 + return div_u64(delta * vcpu->arch.timer_mhz, MNSEC_PER_SEC); 19 + } 20 + 21 + static inline u64 tick_to_ns(struct kvm_vcpu *vcpu, u64 tick) 22 + { 23 + return div_u64(tick * MNSEC_PER_SEC, vcpu->arch.timer_mhz); 24 + } 25 + 26 + /* 27 + * Push timer forward on timeout. 28 + * Handle an hrtimer event by push the hrtimer forward a period. 29 + */ 30 + static enum hrtimer_restart kvm_count_timeout(struct kvm_vcpu *vcpu) 31 + { 32 + unsigned long cfg, period; 33 + 34 + /* Add periodic tick to current expire time */ 35 + cfg = kvm_read_sw_gcsr(vcpu->arch.csr, LOONGARCH_CSR_TCFG); 36 + if (cfg & CSR_TCFG_PERIOD) { 37 + period = tick_to_ns(vcpu, cfg & CSR_TCFG_VAL); 38 + hrtimer_add_expires_ns(&vcpu->arch.swtimer, period); 39 + return HRTIMER_RESTART; 40 + } else 41 + return HRTIMER_NORESTART; 42 + } 43 + 44 + /* Low level hrtimer wake routine */ 45 + enum hrtimer_restart kvm_swtimer_wakeup(struct hrtimer *timer) 46 + { 47 + struct kvm_vcpu *vcpu; 48 + 49 + vcpu = container_of(timer, struct kvm_vcpu, arch.swtimer); 50 + kvm_queue_irq(vcpu, INT_TI); 51 + rcuwait_wake_up(&vcpu->wait); 52 + 53 + return kvm_count_timeout(vcpu); 54 + } 55 + 56 + /* 57 + * Initialise the timer to the specified frequency, zero it 58 + */ 59 + void kvm_init_timer(struct kvm_vcpu *vcpu, unsigned long timer_hz) 60 + { 61 + vcpu->arch.timer_mhz = timer_hz >> 20; 62 + 63 + /* Starting at 0 */ 64 + kvm_write_sw_gcsr(vcpu->arch.csr, LOONGARCH_CSR_TVAL, 0); 65 + } 66 + 67 + /* 68 + * Restore hard timer state and enable guest to access timer registers 69 + * without trap, should be called with irq disabled 70 + */ 71 + void kvm_acquire_timer(struct kvm_vcpu *vcpu) 72 + { 73 + unsigned long cfg; 74 + 75 + cfg = read_csr_gcfg(); 76 + if (!(cfg & CSR_GCFG_TIT)) 77 + return; 78 + 79 + /* Enable guest access to hard timer */ 80 + write_csr_gcfg(cfg & ~CSR_GCFG_TIT); 81 + 82 + /* 83 + * Freeze the soft-timer and sync the guest stable timer with it. We do 84 + * this with interrupts disabled to avoid latency. 85 + */ 86 + hrtimer_cancel(&vcpu->arch.swtimer); 87 + } 88 + 89 + /* 90 + * Restore soft timer state from saved context. 
91 + */ 92 + void kvm_restore_timer(struct kvm_vcpu *vcpu) 93 + { 94 + unsigned long cfg, delta, period; 95 + ktime_t expire, now; 96 + struct loongarch_csrs *csr = vcpu->arch.csr; 97 + 98 + /* 99 + * Set guest stable timer cfg csr 100 + */ 101 + cfg = kvm_read_sw_gcsr(csr, LOONGARCH_CSR_TCFG); 102 + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_ESTAT); 103 + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TCFG); 104 + if (!(cfg & CSR_TCFG_EN)) { 105 + /* Guest timer is disabled, just restore timer registers */ 106 + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TVAL); 107 + return; 108 + } 109 + 110 + /* 111 + * Set remainder tick value if not expired 112 + */ 113 + now = ktime_get(); 114 + expire = vcpu->arch.expire; 115 + if (ktime_before(now, expire)) 116 + delta = ktime_to_tick(vcpu, ktime_sub(expire, now)); 117 + else { 118 + if (cfg & CSR_TCFG_PERIOD) { 119 + period = cfg & CSR_TCFG_VAL; 120 + delta = ktime_to_tick(vcpu, ktime_sub(now, expire)); 121 + delta = period - (delta % period); 122 + } else 123 + delta = 0; 124 + /* 125 + * Inject the timer interrupt here even though the sw timer should 126 + * have injected it asynchronously already, since the sw timer may 127 + * be cancelled during async injection in kvm_acquire_timer() 128 + */ 129 + kvm_queue_irq(vcpu, INT_TI); 130 + } 131 + 132 + write_gcsr_timertick(delta); 133 + } 134 + 135 + /* 136 + * Save guest timer state and switch to software emulation of guest 137 + * timer. The hard timer must already be in use, so preemption should be 138 + * disabled. 139 + */ 140 + static void _kvm_save_timer(struct kvm_vcpu *vcpu) 141 + { 142 + unsigned long ticks, delta; 143 + ktime_t expire; 144 + struct loongarch_csrs *csr = vcpu->arch.csr; 145 + 146 + ticks = kvm_read_sw_gcsr(csr, LOONGARCH_CSR_TVAL); 147 + delta = tick_to_ns(vcpu, ticks); 148 + expire = ktime_add_ns(ktime_get(), delta); 149 + vcpu->arch.expire = expire; 150 + if (ticks) { 151 + /* 152 + * Update hrtimer to use new timeout 153 + * HRTIMER_MODE_PINNED is suggested since the vcpu may run on 154 + * the same physical cpu next time 155 + */ 156 + hrtimer_cancel(&vcpu->arch.swtimer); 157 + hrtimer_start(&vcpu->arch.swtimer, expire, HRTIMER_MODE_ABS_PINNED); 158 + } else 159 + /* 160 + * Inject timer interrupt so that halt polling can detect it and exit 161 + */ 162 + kvm_queue_irq(vcpu, INT_TI); 163 + } 164 + 165 + /* 166 + * Save guest timer state and switch to soft guest timer if hard timer was in 167 + * use. 168 + */ 169 + void kvm_save_timer(struct kvm_vcpu *vcpu) 170 + { 171 + unsigned long cfg; 172 + struct loongarch_csrs *csr = vcpu->arch.csr; 173 + 174 + preempt_disable(); 175 + cfg = read_csr_gcfg(); 176 + if (!(cfg & CSR_GCFG_TIT)) { 177 + /* Disable guest use of hard timer */ 178 + write_csr_gcfg(cfg | CSR_GCFG_TIT); 179 + 180 + /* Save hard timer state */ 181 + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TCFG); 182 + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TVAL); 183 + if (kvm_read_sw_gcsr(csr, LOONGARCH_CSR_TCFG) & CSR_TCFG_EN) 184 + _kvm_save_timer(vcpu); 185 + } 186 + 187 + /* Save timer-related state to vCPU context */ 188 + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_ESTAT); 189 + preempt_enable(); 190 + } 191 + 192 + void kvm_reset_timer(struct kvm_vcpu *vcpu) 193 + { 194 + write_gcsr_timercfg(0); 195 + kvm_write_sw_gcsr(vcpu->arch.csr, LOONGARCH_CSR_TCFG, 0); 196 + hrtimer_cancel(&vcpu->arch.swtimer); 197 + }
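The tick/ns conversions in this file scale by vcpu->arch.timer_mhz, which kvm_init_timer() derives as timer_hz >> 20. The rough standalone model below assumes MNSEC_PER_SEC is NSEC_PER_SEC scaled by the same 2^20 factor (that definition is not part of this diff), and the 100 MHz constant-counter frequency is only an example.

/* Rough standalone model of ktime_to_tick()/tick_to_ns(); assumptions noted above. */
#include <stdint.h>
#include <stdio.h>

#define NSEC_PER_SEC   1000000000ULL
#define MNSEC_PER_SEC  (NSEC_PER_SEC >> 20)     /* assumed definition */

static uint64_t ktime_to_tick(uint64_t ns, uint64_t timer_mhz)
{
    return ns * timer_mhz / MNSEC_PER_SEC;
}

static uint64_t tick_to_ns(uint64_t tick, uint64_t timer_mhz)
{
    return tick * MNSEC_PER_SEC / timer_mhz;
}

int main(void)
{
    uint64_t hz  = 100000000;                   /* example 100 MHz counter */
    uint64_t mhz = hz >> 20;                    /* as in kvm_init_timer() */

    /* 1 ms of nanoseconds is roughly 100000 ticks (truncation from >> 20). */
    printf("%llu ticks\n", (unsigned long long)ktime_to_tick(1000000, mhz));
    printf("%llu ns\n", (unsigned long long)tick_to_ns(100000, mhz));
    return 0;
}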
+32
arch/loongarch/kvm/tlb.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Copyright (C) 2020-2023 Loongson Technology Corporation Limited 4 + */ 5 + 6 + #include <linux/kvm_host.h> 7 + #include <asm/tlb.h> 8 + #include <asm/kvm_csr.h> 9 + 10 + /* 11 + * kvm_flush_tlb_all() - Flush all root TLB entries for guests. 12 + * 13 + * Invalidate all entries including GVA-->GPA and GPA-->HPA mappings. 14 + */ 15 + void kvm_flush_tlb_all(void) 16 + { 17 + unsigned long flags; 18 + 19 + local_irq_save(flags); 20 + invtlb_all(INVTLB_ALLGID, 0, 0); 21 + local_irq_restore(flags); 22 + } 23 + 24 + void kvm_flush_tlb_gpa(struct kvm_vcpu *vcpu, unsigned long gpa) 25 + { 26 + unsigned long flags; 27 + 28 + local_irq_save(flags); 29 + gpa &= (PAGE_MASK << 1); 30 + invtlb(INVTLB_GID_ADDR, read_csr_gstat() & CSR_GSTAT_GID, gpa); 31 + local_irq_restore(flags); 32 + }
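The one non-obvious step in kvm_flush_tlb_gpa() is gpa &= (PAGE_MASK << 1), which rounds the guest physical address down to a two-page boundary before the invtlb, presumably because a TLB entry maps an even/odd page pair (stated here as an assumption, in the MIPS tradition). A tiny sketch of the masking, assuming 16 KiB pages (PAGE_SHIFT = 14 is an assumption, not something this diff states):

/* Illustrative only: round a GPA down to a page-pair boundary. */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 14
#define PAGE_SIZE  (1UL << PAGE_SHIFT)
#define PAGE_MASK  (~(PAGE_SIZE - 1))

int main(void)
{
    uint64_t gpa = 0x9000c123ULL;

    /* 0x9000c123 -> 0x90008000 with 16 KiB pages */
    printf("0x%llx\n", (unsigned long long)(gpa & (PAGE_MASK << 1)));
    return 0;
}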
+162
arch/loongarch/kvm/trace.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + /* 3 + * Copyright (C) 2020-2023 Loongson Technology Corporation Limited 4 + */ 5 + 6 + #if !defined(_TRACE_KVM_H) || defined(TRACE_HEADER_MULTI_READ) 7 + #define _TRACE_KVM_H 8 + 9 + #include <linux/tracepoint.h> 10 + #include <asm/kvm_csr.h> 11 + 12 + #undef TRACE_SYSTEM 13 + #define TRACE_SYSTEM kvm 14 + 15 + /* 16 + * Tracepoints for VM enters 17 + */ 18 + DECLARE_EVENT_CLASS(kvm_transition, 19 + TP_PROTO(struct kvm_vcpu *vcpu), 20 + TP_ARGS(vcpu), 21 + TP_STRUCT__entry( 22 + __field(unsigned long, pc) 23 + ), 24 + 25 + TP_fast_assign( 26 + __entry->pc = vcpu->arch.pc; 27 + ), 28 + 29 + TP_printk("PC: 0x%08lx", __entry->pc) 30 + ); 31 + 32 + DEFINE_EVENT(kvm_transition, kvm_enter, 33 + TP_PROTO(struct kvm_vcpu *vcpu), 34 + TP_ARGS(vcpu)); 35 + 36 + DEFINE_EVENT(kvm_transition, kvm_reenter, 37 + TP_PROTO(struct kvm_vcpu *vcpu), 38 + TP_ARGS(vcpu)); 39 + 40 + DEFINE_EVENT(kvm_transition, kvm_out, 41 + TP_PROTO(struct kvm_vcpu *vcpu), 42 + TP_ARGS(vcpu)); 43 + 44 + /* Further exit reasons */ 45 + #define KVM_TRACE_EXIT_IDLE 64 46 + #define KVM_TRACE_EXIT_CACHE 65 47 + 48 + /* Tracepoints for VM exits */ 49 + #define kvm_trace_symbol_exit_types \ 50 + { KVM_TRACE_EXIT_IDLE, "IDLE" }, \ 51 + { KVM_TRACE_EXIT_CACHE, "CACHE" } 52 + 53 + DECLARE_EVENT_CLASS(kvm_exit, 54 + TP_PROTO(struct kvm_vcpu *vcpu, unsigned int reason), 55 + TP_ARGS(vcpu, reason), 56 + TP_STRUCT__entry( 57 + __field(unsigned long, pc) 58 + __field(unsigned int, reason) 59 + ), 60 + 61 + TP_fast_assign( 62 + __entry->pc = vcpu->arch.pc; 63 + __entry->reason = reason; 64 + ), 65 + 66 + TP_printk("[%s]PC: 0x%08lx", 67 + __print_symbolic(__entry->reason, 68 + kvm_trace_symbol_exit_types), 69 + __entry->pc) 70 + ); 71 + 72 + DEFINE_EVENT(kvm_exit, kvm_exit_idle, 73 + TP_PROTO(struct kvm_vcpu *vcpu, unsigned int reason), 74 + TP_ARGS(vcpu, reason)); 75 + 76 + DEFINE_EVENT(kvm_exit, kvm_exit_cache, 77 + TP_PROTO(struct kvm_vcpu *vcpu, unsigned int reason), 78 + TP_ARGS(vcpu, reason)); 79 + 80 + DEFINE_EVENT(kvm_exit, kvm_exit, 81 + TP_PROTO(struct kvm_vcpu *vcpu, unsigned int reason), 82 + TP_ARGS(vcpu, reason)); 83 + 84 + TRACE_EVENT(kvm_exit_gspr, 85 + TP_PROTO(struct kvm_vcpu *vcpu, unsigned int inst_word), 86 + TP_ARGS(vcpu, inst_word), 87 + TP_STRUCT__entry( 88 + __field(unsigned int, inst_word) 89 + ), 90 + 91 + TP_fast_assign( 92 + __entry->inst_word = inst_word; 93 + ), 94 + 95 + TP_printk("Inst word: 0x%08x", __entry->inst_word) 96 + ); 97 + 98 + #define KVM_TRACE_AUX_SAVE 0 99 + #define KVM_TRACE_AUX_RESTORE 1 100 + #define KVM_TRACE_AUX_ENABLE 2 101 + #define KVM_TRACE_AUX_DISABLE 3 102 + #define KVM_TRACE_AUX_DISCARD 4 103 + 104 + #define KVM_TRACE_AUX_FPU 1 105 + 106 + #define kvm_trace_symbol_aux_op \ 107 + { KVM_TRACE_AUX_SAVE, "save" }, \ 108 + { KVM_TRACE_AUX_RESTORE, "restore" }, \ 109 + { KVM_TRACE_AUX_ENABLE, "enable" }, \ 110 + { KVM_TRACE_AUX_DISABLE, "disable" }, \ 111 + { KVM_TRACE_AUX_DISCARD, "discard" } 112 + 113 + #define kvm_trace_symbol_aux_state \ 114 + { KVM_TRACE_AUX_FPU, "FPU" } 115 + 116 + TRACE_EVENT(kvm_aux, 117 + TP_PROTO(struct kvm_vcpu *vcpu, unsigned int op, 118 + unsigned int state), 119 + TP_ARGS(vcpu, op, state), 120 + TP_STRUCT__entry( 121 + __field(unsigned long, pc) 122 + __field(u8, op) 123 + __field(u8, state) 124 + ), 125 + 126 + TP_fast_assign( 127 + __entry->pc = vcpu->arch.pc; 128 + __entry->op = op; 129 + __entry->state = state; 130 + ), 131 + 132 + TP_printk("%s %s PC: 0x%08lx", 133 + __print_symbolic(__entry->op, 134 + 
kvm_trace_symbol_aux_op), 135 + __print_symbolic(__entry->state, 136 + kvm_trace_symbol_aux_state), 137 + __entry->pc) 138 + ); 139 + 140 + TRACE_EVENT(kvm_vpid_change, 141 + TP_PROTO(struct kvm_vcpu *vcpu, unsigned long vpid), 142 + TP_ARGS(vcpu, vpid), 143 + TP_STRUCT__entry( 144 + __field(unsigned long, vpid) 145 + ), 146 + 147 + TP_fast_assign( 148 + __entry->vpid = vpid; 149 + ), 150 + 151 + TP_printk("VPID: 0x%08lx", __entry->vpid) 152 + ); 153 + 154 + #endif /* _TRACE_KVM_H */ 155 + 156 + #undef TRACE_INCLUDE_PATH 157 + #define TRACE_INCLUDE_PATH ../../arch/loongarch/kvm 158 + #undef TRACE_INCLUDE_FILE 159 + #define TRACE_INCLUDE_FILE trace 160 + 161 + /* This part must be outside protection */ 162 + #include <trace/define_trace.h>
+939
arch/loongarch/kvm/vcpu.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Copyright (C) 2020-2023 Loongson Technology Corporation Limited 4 + */ 5 + 6 + #include <linux/kvm_host.h> 7 + #include <linux/entry-kvm.h> 8 + #include <asm/fpu.h> 9 + #include <asm/loongarch.h> 10 + #include <asm/setup.h> 11 + #include <asm/time.h> 12 + 13 + #define CREATE_TRACE_POINTS 14 + #include "trace.h" 15 + 16 + const struct _kvm_stats_desc kvm_vcpu_stats_desc[] = { 17 + KVM_GENERIC_VCPU_STATS(), 18 + STATS_DESC_COUNTER(VCPU, int_exits), 19 + STATS_DESC_COUNTER(VCPU, idle_exits), 20 + STATS_DESC_COUNTER(VCPU, cpucfg_exits), 21 + STATS_DESC_COUNTER(VCPU, signal_exits), 22 + }; 23 + 24 + const struct kvm_stats_header kvm_vcpu_stats_header = { 25 + .name_size = KVM_STATS_NAME_SIZE, 26 + .num_desc = ARRAY_SIZE(kvm_vcpu_stats_desc), 27 + .id_offset = sizeof(struct kvm_stats_header), 28 + .desc_offset = sizeof(struct kvm_stats_header) + KVM_STATS_NAME_SIZE, 29 + .data_offset = sizeof(struct kvm_stats_header) + KVM_STATS_NAME_SIZE + 30 + sizeof(kvm_vcpu_stats_desc), 31 + }; 32 + 33 + /* 34 + * kvm_check_requests - check and handle pending vCPU requests 35 + * 36 + * Return: RESUME_GUEST if we should enter the guest 37 + * RESUME_HOST if we should exit to userspace 38 + */ 39 + static int kvm_check_requests(struct kvm_vcpu *vcpu) 40 + { 41 + if (!kvm_request_pending(vcpu)) 42 + return RESUME_GUEST; 43 + 44 + if (kvm_check_request(KVM_REQ_TLB_FLUSH, vcpu)) 45 + vcpu->arch.vpid = 0; /* Drop vpid for this vCPU */ 46 + 47 + if (kvm_dirty_ring_check_request(vcpu)) 48 + return RESUME_HOST; 49 + 50 + return RESUME_GUEST; 51 + } 52 + 53 + /* 54 + * Check and handle pending signal and vCPU requests etc 55 + * Run with irq enabled and preempt enabled 56 + * 57 + * Return: RESUME_GUEST if we should enter the guest 58 + * RESUME_HOST if we should exit to userspace 59 + * < 0 if we should exit to userspace, where the return value 60 + * indicates an error 61 + */ 62 + static int kvm_enter_guest_check(struct kvm_vcpu *vcpu) 63 + { 64 + int ret; 65 + 66 + /* 67 + * Check conditions before entering the guest 68 + */ 69 + ret = xfer_to_guest_mode_handle_work(vcpu); 70 + if (ret < 0) 71 + return ret; 72 + 73 + ret = kvm_check_requests(vcpu); 74 + 75 + return ret; 76 + } 77 + 78 + /* 79 + * Called with irq enabled 80 + * 81 + * Return: RESUME_GUEST if we should enter the guest, and irq disabled 82 + * Others if we should exit to userspace 83 + */ 84 + static int kvm_pre_enter_guest(struct kvm_vcpu *vcpu) 85 + { 86 + int ret; 87 + 88 + do { 89 + ret = kvm_enter_guest_check(vcpu); 90 + if (ret != RESUME_GUEST) 91 + break; 92 + 93 + /* 94 + * Handle vcpu timer, interrupts, check requests and 95 + * check vmid before vcpu enter guest 96 + */ 97 + local_irq_disable(); 98 + kvm_acquire_timer(vcpu); 99 + kvm_deliver_intr(vcpu); 100 + kvm_deliver_exception(vcpu); 101 + /* Make sure the vcpu mode has been written */ 102 + smp_store_mb(vcpu->mode, IN_GUEST_MODE); 103 + kvm_check_vpid(vcpu); 104 + vcpu->arch.host_eentry = csr_read64(LOONGARCH_CSR_EENTRY); 105 + /* Clear KVM_LARCH_SWCSR_LATEST as CSR will change when enter guest */ 106 + vcpu->arch.aux_inuse &= ~KVM_LARCH_SWCSR_LATEST; 107 + 108 + if (kvm_request_pending(vcpu) || xfer_to_guest_mode_work_pending()) { 109 + /* make sure the vcpu mode has been written */ 110 + smp_store_mb(vcpu->mode, OUTSIDE_GUEST_MODE); 111 + local_irq_enable(); 112 + ret = -EAGAIN; 113 + } 114 + } while (ret != RESUME_GUEST); 115 + 116 + return ret; 117 + } 118 + 119 + /* 120 + * Return 1 for resume guest and "<= 0" for resume 
host. 121 + */ 122 + static int kvm_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu) 123 + { 124 + int ret = RESUME_GUEST; 125 + unsigned long estat = vcpu->arch.host_estat; 126 + u32 intr = estat & 0x1fff; /* Ignore NMI */ 127 + u32 ecode = (estat & CSR_ESTAT_EXC) >> CSR_ESTAT_EXC_SHIFT; 128 + 129 + vcpu->mode = OUTSIDE_GUEST_MODE; 130 + 131 + /* Set a default exit reason */ 132 + run->exit_reason = KVM_EXIT_UNKNOWN; 133 + 134 + guest_timing_exit_irqoff(); 135 + guest_state_exit_irqoff(); 136 + local_irq_enable(); 137 + 138 + trace_kvm_exit(vcpu, ecode); 139 + if (ecode) { 140 + ret = kvm_handle_fault(vcpu, ecode); 141 + } else { 142 + WARN(!intr, "vm exiting with suspicious irq\n"); 143 + ++vcpu->stat.int_exits; 144 + } 145 + 146 + if (ret == RESUME_GUEST) 147 + ret = kvm_pre_enter_guest(vcpu); 148 + 149 + if (ret != RESUME_GUEST) { 150 + local_irq_disable(); 151 + return ret; 152 + } 153 + 154 + guest_timing_enter_irqoff(); 155 + guest_state_enter_irqoff(); 156 + trace_kvm_reenter(vcpu); 157 + 158 + return RESUME_GUEST; 159 + } 160 + 161 + int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu) 162 + { 163 + return !!(vcpu->arch.irq_pending) && 164 + vcpu->arch.mp_state.mp_state == KVM_MP_STATE_RUNNABLE; 165 + } 166 + 167 + int kvm_arch_vcpu_should_kick(struct kvm_vcpu *vcpu) 168 + { 169 + return kvm_vcpu_exiting_guest_mode(vcpu) == IN_GUEST_MODE; 170 + } 171 + 172 + bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu) 173 + { 174 + return false; 175 + } 176 + 177 + vm_fault_t kvm_arch_vcpu_fault(struct kvm_vcpu *vcpu, struct vm_fault *vmf) 178 + { 179 + return VM_FAULT_SIGBUS; 180 + } 181 + 182 + int kvm_arch_vcpu_ioctl_translate(struct kvm_vcpu *vcpu, 183 + struct kvm_translation *tr) 184 + { 185 + return -EINVAL; 186 + } 187 + 188 + int kvm_cpu_has_pending_timer(struct kvm_vcpu *vcpu) 189 + { 190 + return kvm_pending_timer(vcpu) || 191 + kvm_read_hw_gcsr(LOONGARCH_CSR_ESTAT) & (1 << INT_TI); 192 + } 193 + 194 + int kvm_arch_vcpu_dump_regs(struct kvm_vcpu *vcpu) 195 + { 196 + int i; 197 + 198 + kvm_debug("vCPU Register Dump:\n"); 199 + kvm_debug("\tPC = 0x%08lx\n", vcpu->arch.pc); 200 + kvm_debug("\tExceptions: %08lx\n", vcpu->arch.irq_pending); 201 + 202 + for (i = 0; i < 32; i += 4) { 203 + kvm_debug("\tGPR%02d: %08lx %08lx %08lx %08lx\n", i, 204 + vcpu->arch.gprs[i], vcpu->arch.gprs[i + 1], 205 + vcpu->arch.gprs[i + 2], vcpu->arch.gprs[i + 3]); 206 + } 207 + 208 + kvm_debug("\tCRMD: 0x%08lx, ESTAT: 0x%08lx\n", 209 + kvm_read_hw_gcsr(LOONGARCH_CSR_CRMD), 210 + kvm_read_hw_gcsr(LOONGARCH_CSR_ESTAT)); 211 + 212 + kvm_debug("\tERA: 0x%08lx\n", kvm_read_hw_gcsr(LOONGARCH_CSR_ERA)); 213 + 214 + return 0; 215 + } 216 + 217 + int kvm_arch_vcpu_ioctl_get_mpstate(struct kvm_vcpu *vcpu, 218 + struct kvm_mp_state *mp_state) 219 + { 220 + *mp_state = vcpu->arch.mp_state; 221 + 222 + return 0; 223 + } 224 + 225 + int kvm_arch_vcpu_ioctl_set_mpstate(struct kvm_vcpu *vcpu, 226 + struct kvm_mp_state *mp_state) 227 + { 228 + int ret = 0; 229 + 230 + switch (mp_state->mp_state) { 231 + case KVM_MP_STATE_RUNNABLE: 232 + vcpu->arch.mp_state = *mp_state; 233 + break; 234 + default: 235 + ret = -EINVAL; 236 + } 237 + 238 + return ret; 239 + } 240 + 241 + int kvm_arch_vcpu_ioctl_set_guest_debug(struct kvm_vcpu *vcpu, 242 + struct kvm_guest_debug *dbg) 243 + { 244 + return -EINVAL; 245 + } 246 + 247 + /** 248 + * kvm_migrate_count() - Migrate timer. 249 + * @vcpu: Virtual CPU. 250 + * 251 + * Migrate hrtimer to the current CPU by cancelling and restarting it 252 + * if the hrtimer is active. 
253 + * 254 + * Must be called when the vCPU is migrated to a different CPU, so that 255 + * the timer can interrupt the guest at the new CPU, and the timer irq can 256 + * be delivered to the vCPU. 257 + */ 258 + static void kvm_migrate_count(struct kvm_vcpu *vcpu) 259 + { 260 + if (hrtimer_cancel(&vcpu->arch.swtimer)) 261 + hrtimer_restart(&vcpu->arch.swtimer); 262 + } 263 + 264 + static int _kvm_getcsr(struct kvm_vcpu *vcpu, unsigned int id, u64 *val) 265 + { 266 + unsigned long gintc; 267 + struct loongarch_csrs *csr = vcpu->arch.csr; 268 + 269 + if (get_gcsr_flag(id) & INVALID_GCSR) 270 + return -EINVAL; 271 + 272 + if (id == LOONGARCH_CSR_ESTAT) { 273 + /* ESTAT IP0~IP7 get from GINTC */ 274 + gintc = kvm_read_sw_gcsr(csr, LOONGARCH_CSR_GINTC) & 0xff; 275 + *val = kvm_read_sw_gcsr(csr, LOONGARCH_CSR_ESTAT) | (gintc << 2); 276 + return 0; 277 + } 278 + 279 + /* 280 + * Get software CSR state since software state is consistent 281 + * with hardware for synchronous ioctl 282 + */ 283 + *val = kvm_read_sw_gcsr(csr, id); 284 + 285 + return 0; 286 + } 287 + 288 + static int _kvm_setcsr(struct kvm_vcpu *vcpu, unsigned int id, u64 val) 289 + { 290 + int ret = 0, gintc; 291 + struct loongarch_csrs *csr = vcpu->arch.csr; 292 + 293 + if (get_gcsr_flag(id) & INVALID_GCSR) 294 + return -EINVAL; 295 + 296 + if (id == LOONGARCH_CSR_ESTAT) { 297 + /* ESTAT IP0~IP7 inject through GINTC */ 298 + gintc = (val >> 2) & 0xff; 299 + kvm_set_sw_gcsr(csr, LOONGARCH_CSR_GINTC, gintc); 300 + 301 + gintc = val & ~(0xffUL << 2); 302 + kvm_set_sw_gcsr(csr, LOONGARCH_CSR_ESTAT, gintc); 303 + 304 + return ret; 305 + } 306 + 307 + kvm_write_sw_gcsr(csr, id, val); 308 + 309 + return ret; 310 + } 311 + 312 + static int kvm_get_one_reg(struct kvm_vcpu *vcpu, 313 + const struct kvm_one_reg *reg, u64 *v) 314 + { 315 + int id, ret = 0; 316 + u64 type = reg->id & KVM_REG_LOONGARCH_MASK; 317 + 318 + switch (type) { 319 + case KVM_REG_LOONGARCH_CSR: 320 + id = KVM_GET_IOC_CSR_IDX(reg->id); 321 + ret = _kvm_getcsr(vcpu, id, v); 322 + break; 323 + case KVM_REG_LOONGARCH_CPUCFG: 324 + id = KVM_GET_IOC_CPUCFG_IDX(reg->id); 325 + if (id >= 0 && id < KVM_MAX_CPUCFG_REGS) 326 + *v = vcpu->arch.cpucfg[id]; 327 + else 328 + ret = -EINVAL; 329 + break; 330 + case KVM_REG_LOONGARCH_KVM: 331 + switch (reg->id) { 332 + case KVM_REG_LOONGARCH_COUNTER: 333 + *v = drdtime() + vcpu->kvm->arch.time_offset; 334 + break; 335 + default: 336 + ret = -EINVAL; 337 + break; 338 + } 339 + break; 340 + default: 341 + ret = -EINVAL; 342 + break; 343 + } 344 + 345 + return ret; 346 + } 347 + 348 + static int kvm_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) 349 + { 350 + int ret = 0; 351 + u64 v, size = reg->id & KVM_REG_SIZE_MASK; 352 + 353 + switch (size) { 354 + case KVM_REG_SIZE_U64: 355 + ret = kvm_get_one_reg(vcpu, reg, &v); 356 + if (ret) 357 + return ret; 358 + ret = put_user(v, (u64 __user *)(long)reg->addr); 359 + break; 360 + default: 361 + ret = -EINVAL; 362 + break; 363 + } 364 + 365 + return ret; 366 + } 367 + 368 + static int kvm_set_one_reg(struct kvm_vcpu *vcpu, 369 + const struct kvm_one_reg *reg, u64 v) 370 + { 371 + int id, ret = 0; 372 + u64 type = reg->id & KVM_REG_LOONGARCH_MASK; 373 + 374 + switch (type) { 375 + case KVM_REG_LOONGARCH_CSR: 376 + id = KVM_GET_IOC_CSR_IDX(reg->id); 377 + ret = _kvm_setcsr(vcpu, id, v); 378 + break; 379 + case KVM_REG_LOONGARCH_CPUCFG: 380 + id = KVM_GET_IOC_CPUCFG_IDX(reg->id); 381 + if (id >= 0 && id < KVM_MAX_CPUCFG_REGS) 382 + vcpu->arch.cpucfg[id] = (u32)v; 383 + else 384 + ret 
= -EINVAL; 385 + break; 386 + case KVM_REG_LOONGARCH_KVM: 387 + switch (reg->id) { 388 + case KVM_REG_LOONGARCH_COUNTER: 389 + /* 390 + * gftoffset is relative with board, not vcpu 391 + * only set for the first time for smp system 392 + */ 393 + if (vcpu->vcpu_id == 0) 394 + vcpu->kvm->arch.time_offset = (signed long)(v - drdtime()); 395 + break; 396 + case KVM_REG_LOONGARCH_VCPU_RESET: 397 + kvm_reset_timer(vcpu); 398 + memset(&vcpu->arch.irq_pending, 0, sizeof(vcpu->arch.irq_pending)); 399 + memset(&vcpu->arch.irq_clear, 0, sizeof(vcpu->arch.irq_clear)); 400 + break; 401 + default: 402 + ret = -EINVAL; 403 + break; 404 + } 405 + break; 406 + default: 407 + ret = -EINVAL; 408 + break; 409 + } 410 + 411 + return ret; 412 + } 413 + 414 + static int kvm_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg) 415 + { 416 + int ret = 0; 417 + u64 v, size = reg->id & KVM_REG_SIZE_MASK; 418 + 419 + switch (size) { 420 + case KVM_REG_SIZE_U64: 421 + ret = get_user(v, (u64 __user *)(long)reg->addr); 422 + if (ret) 423 + return ret; 424 + break; 425 + default: 426 + return -EINVAL; 427 + } 428 + 429 + return kvm_set_one_reg(vcpu, reg, v); 430 + } 431 + 432 + int kvm_arch_vcpu_ioctl_get_sregs(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs) 433 + { 434 + return -ENOIOCTLCMD; 435 + } 436 + 437 + int kvm_arch_vcpu_ioctl_set_sregs(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs) 438 + { 439 + return -ENOIOCTLCMD; 440 + } 441 + 442 + int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs) 443 + { 444 + int i; 445 + 446 + for (i = 0; i < ARRAY_SIZE(vcpu->arch.gprs); i++) 447 + regs->gpr[i] = vcpu->arch.gprs[i]; 448 + 449 + regs->pc = vcpu->arch.pc; 450 + 451 + return 0; 452 + } 453 + 454 + int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs) 455 + { 456 + int i; 457 + 458 + for (i = 1; i < ARRAY_SIZE(vcpu->arch.gprs); i++) 459 + vcpu->arch.gprs[i] = regs->gpr[i]; 460 + 461 + vcpu->arch.gprs[0] = 0; /* zero is special, and cannot be set. */ 462 + vcpu->arch.pc = regs->pc; 463 + 464 + return 0; 465 + } 466 + 467 + static int kvm_vcpu_ioctl_enable_cap(struct kvm_vcpu *vcpu, 468 + struct kvm_enable_cap *cap) 469 + { 470 + /* FPU is enabled by default, will support LSX/LASX later. */ 471 + return -EINVAL; 472 + } 473 + 474 + long kvm_arch_vcpu_ioctl(struct file *filp, 475 + unsigned int ioctl, unsigned long arg) 476 + { 477 + long r; 478 + void __user *argp = (void __user *)arg; 479 + struct kvm_vcpu *vcpu = filp->private_data; 480 + 481 + /* 482 + * Only software CSR should be modified 483 + * 484 + * If any hardware CSR register is modified, vcpu_load/vcpu_put pair 485 + * should be used. Since CSR registers owns by this vcpu, if switch 486 + * to other vcpus, other vcpus need reload CSR registers. 487 + * 488 + * If software CSR is modified, bit KVM_LARCH_HWCSR_USABLE should 489 + * be clear in vcpu->arch.aux_inuse, and vcpu_load will check 490 + * aux_inuse flag and reload CSR registers form software. 
491 + */ 492 + 493 + switch (ioctl) { 494 + case KVM_SET_ONE_REG: 495 + case KVM_GET_ONE_REG: { 496 + struct kvm_one_reg reg; 497 + 498 + r = -EFAULT; 499 + if (copy_from_user(&reg, argp, sizeof(reg))) 500 + break; 501 + if (ioctl == KVM_SET_ONE_REG) { 502 + r = kvm_set_reg(vcpu, &reg); 503 + vcpu->arch.aux_inuse &= ~KVM_LARCH_HWCSR_USABLE; 504 + } else 505 + r = kvm_get_reg(vcpu, &reg); 506 + break; 507 + } 508 + case KVM_ENABLE_CAP: { 509 + struct kvm_enable_cap cap; 510 + 511 + r = -EFAULT; 512 + if (copy_from_user(&cap, argp, sizeof(cap))) 513 + break; 514 + r = kvm_vcpu_ioctl_enable_cap(vcpu, &cap); 515 + break; 516 + } 517 + default: 518 + r = -ENOIOCTLCMD; 519 + break; 520 + } 521 + 522 + return r; 523 + } 524 + 525 + int kvm_arch_vcpu_ioctl_get_fpu(struct kvm_vcpu *vcpu, struct kvm_fpu *fpu) 526 + { 527 + int i = 0; 528 + 529 + fpu->fcc = vcpu->arch.fpu.fcc; 530 + fpu->fcsr = vcpu->arch.fpu.fcsr; 531 + for (i = 0; i < NUM_FPU_REGS; i++) 532 + memcpy(&fpu->fpr[i], &vcpu->arch.fpu.fpr[i], FPU_REG_WIDTH / 64); 533 + 534 + return 0; 535 + } 536 + 537 + int kvm_arch_vcpu_ioctl_set_fpu(struct kvm_vcpu *vcpu, struct kvm_fpu *fpu) 538 + { 539 + int i = 0; 540 + 541 + vcpu->arch.fpu.fcc = fpu->fcc; 542 + vcpu->arch.fpu.fcsr = fpu->fcsr; 543 + for (i = 0; i < NUM_FPU_REGS; i++) 544 + memcpy(&vcpu->arch.fpu.fpr[i], &fpu->fpr[i], FPU_REG_WIDTH / 64); 545 + 546 + return 0; 547 + } 548 + 549 + /* Enable FPU and restore context */ 550 + void kvm_own_fpu(struct kvm_vcpu *vcpu) 551 + { 552 + preempt_disable(); 553 + 554 + /* Enable FPU */ 555 + set_csr_euen(CSR_EUEN_FPEN); 556 + 557 + kvm_restore_fpu(&vcpu->arch.fpu); 558 + vcpu->arch.aux_inuse |= KVM_LARCH_FPU; 559 + trace_kvm_aux(vcpu, KVM_TRACE_AUX_RESTORE, KVM_TRACE_AUX_FPU); 560 + 561 + preempt_enable(); 562 + } 563 + 564 + /* Save context and disable FPU */ 565 + void kvm_lose_fpu(struct kvm_vcpu *vcpu) 566 + { 567 + preempt_disable(); 568 + 569 + if (vcpu->arch.aux_inuse & KVM_LARCH_FPU) { 570 + kvm_save_fpu(&vcpu->arch.fpu); 571 + vcpu->arch.aux_inuse &= ~KVM_LARCH_FPU; 572 + trace_kvm_aux(vcpu, KVM_TRACE_AUX_SAVE, KVM_TRACE_AUX_FPU); 573 + 574 + /* Disable FPU */ 575 + clear_csr_euen(CSR_EUEN_FPEN); 576 + } 577 + 578 + preempt_enable(); 579 + } 580 + 581 + int kvm_vcpu_ioctl_interrupt(struct kvm_vcpu *vcpu, struct kvm_interrupt *irq) 582 + { 583 + int intr = (int)irq->irq; 584 + 585 + if (intr > 0) 586 + kvm_queue_irq(vcpu, intr); 587 + else if (intr < 0) 588 + kvm_dequeue_irq(vcpu, -intr); 589 + else { 590 + kvm_err("%s: invalid interrupt ioctl %d\n", __func__, irq->irq); 591 + return -EINVAL; 592 + } 593 + 594 + kvm_vcpu_kick(vcpu); 595 + 596 + return 0; 597 + } 598 + 599 + long kvm_arch_vcpu_async_ioctl(struct file *filp, 600 + unsigned int ioctl, unsigned long arg) 601 + { 602 + void __user *argp = (void __user *)arg; 603 + struct kvm_vcpu *vcpu = filp->private_data; 604 + 605 + if (ioctl == KVM_INTERRUPT) { 606 + struct kvm_interrupt irq; 607 + 608 + if (copy_from_user(&irq, argp, sizeof(irq))) 609 + return -EFAULT; 610 + 611 + kvm_debug("[%d] %s: irq: %d\n", vcpu->vcpu_id, __func__, irq.irq); 612 + 613 + return kvm_vcpu_ioctl_interrupt(vcpu, &irq); 614 + } 615 + 616 + return -ENOIOCTLCMD; 617 + } 618 + 619 + int kvm_arch_vcpu_precreate(struct kvm *kvm, unsigned int id) 620 + { 621 + return 0; 622 + } 623 + 624 + int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu) 625 + { 626 + unsigned long timer_hz; 627 + struct loongarch_csrs *csr; 628 + 629 + vcpu->arch.vpid = 0; 630 + 631 + hrtimer_init(&vcpu->arch.swtimer, CLOCK_MONOTONIC, 
HRTIMER_MODE_ABS_PINNED); 632 + vcpu->arch.swtimer.function = kvm_swtimer_wakeup; 633 + 634 + vcpu->arch.handle_exit = kvm_handle_exit; 635 + vcpu->arch.guest_eentry = (unsigned long)kvm_loongarch_ops->exc_entry; 636 + vcpu->arch.csr = kzalloc(sizeof(struct loongarch_csrs), GFP_KERNEL); 637 + if (!vcpu->arch.csr) 638 + return -ENOMEM; 639 + 640 + /* 641 + * All kvm exceptions share one exception entry, and host <-> guest 642 + * switch also switch ECFG.VS field, keep host ECFG.VS info here. 643 + */ 644 + vcpu->arch.host_ecfg = (read_csr_ecfg() & CSR_ECFG_VS); 645 + 646 + /* Init */ 647 + vcpu->arch.last_sched_cpu = -1; 648 + 649 + /* 650 + * Initialize guest register state to valid architectural reset state. 651 + */ 652 + timer_hz = calc_const_freq(); 653 + kvm_init_timer(vcpu, timer_hz); 654 + 655 + /* Set Initialize mode for guest */ 656 + csr = vcpu->arch.csr; 657 + kvm_write_sw_gcsr(csr, LOONGARCH_CSR_CRMD, CSR_CRMD_DA); 658 + 659 + /* Set cpuid */ 660 + kvm_write_sw_gcsr(csr, LOONGARCH_CSR_TMID, vcpu->vcpu_id); 661 + 662 + /* Start with no pending virtual guest interrupts */ 663 + csr->csrs[LOONGARCH_CSR_GINTC] = 0; 664 + 665 + return 0; 666 + } 667 + 668 + void kvm_arch_vcpu_postcreate(struct kvm_vcpu *vcpu) 669 + { 670 + } 671 + 672 + void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu) 673 + { 674 + int cpu; 675 + struct kvm_context *context; 676 + 677 + hrtimer_cancel(&vcpu->arch.swtimer); 678 + kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_cache); 679 + kfree(vcpu->arch.csr); 680 + 681 + /* 682 + * If the vCPU is freed and reused as another vCPU, we don't want the 683 + * matching pointer wrongly hanging around in last_vcpu. 684 + */ 685 + for_each_possible_cpu(cpu) { 686 + context = per_cpu_ptr(vcpu->kvm->arch.vmcs, cpu); 687 + if (context->last_vcpu == vcpu) 688 + context->last_vcpu = NULL; 689 + } 690 + } 691 + 692 + static int _kvm_vcpu_load(struct kvm_vcpu *vcpu, int cpu) 693 + { 694 + bool migrated; 695 + struct kvm_context *context; 696 + struct loongarch_csrs *csr = vcpu->arch.csr; 697 + 698 + /* 699 + * Have we migrated to a different CPU? 700 + * If so, any old guest TLB state may be stale. 701 + */ 702 + migrated = (vcpu->arch.last_sched_cpu != cpu); 703 + 704 + /* 705 + * Was this the last vCPU to run on this CPU? 706 + * If not, any old guest state from this vCPU will have been clobbered. 
707 + */ 708 + context = per_cpu_ptr(vcpu->kvm->arch.vmcs, cpu); 709 + if (migrated || (context->last_vcpu != vcpu)) 710 + vcpu->arch.aux_inuse &= ~KVM_LARCH_HWCSR_USABLE; 711 + context->last_vcpu = vcpu; 712 + 713 + /* Restore timer state regardless */ 714 + kvm_restore_timer(vcpu); 715 + 716 + /* Control guest page CCA attribute */ 717 + change_csr_gcfg(CSR_GCFG_MATC_MASK, CSR_GCFG_MATC_ROOT); 718 + 719 + /* Don't bother restoring registers multiple times unless necessary */ 720 + if (vcpu->arch.aux_inuse & KVM_LARCH_HWCSR_USABLE) 721 + return 0; 722 + 723 + write_csr_gcntc((ulong)vcpu->kvm->arch.time_offset); 724 + 725 + /* Restore guest CSR registers */ 726 + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_CRMD); 727 + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_PRMD); 728 + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_EUEN); 729 + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_MISC); 730 + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_ECFG); 731 + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_ERA); 732 + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_BADV); 733 + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_BADI); 734 + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_EENTRY); 735 + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TLBIDX); 736 + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TLBEHI); 737 + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TLBELO0); 738 + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TLBELO1); 739 + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_ASID); 740 + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_PGDL); 741 + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_PGDH); 742 + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_PWCTL0); 743 + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_PWCTL1); 744 + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_STLBPGSIZE); 745 + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_RVACFG); 746 + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_CPUID); 747 + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_KS0); 748 + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_KS1); 749 + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_KS2); 750 + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_KS3); 751 + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_KS4); 752 + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_KS5); 753 + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_KS6); 754 + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_KS7); 755 + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TMID); 756 + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_CNTC); 757 + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TLBRENTRY); 758 + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TLBRBADV); 759 + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TLBRERA); 760 + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TLBRSAVE); 761 + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TLBRELO0); 762 + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TLBRELO1); 763 + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TLBREHI); 764 + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_TLBRPRMD); 765 + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_DMWIN0); 766 + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_DMWIN1); 767 + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_DMWIN2); 768 + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_DMWIN3); 769 + kvm_restore_hw_gcsr(csr, LOONGARCH_CSR_LLBCTL); 770 + 771 + /* Restore Root.GINTC from unused Guest.GINTC register */ 772 + write_csr_gintc(csr->csrs[LOONGARCH_CSR_GINTC]); 773 + 774 + /* 775 + * We should clear linked load bit to break interrupted atomics. This 776 + * prevents a SC on the next vCPU from succeeding by matching a LL on 777 + * the previous vCPU. 
778 + */ 779 + if (vcpu->kvm->created_vcpus > 1) 780 + set_gcsr_llbctl(CSR_LLBCTL_WCLLB); 781 + 782 + vcpu->arch.aux_inuse |= KVM_LARCH_HWCSR_USABLE; 783 + 784 + return 0; 785 + } 786 + 787 + void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu) 788 + { 789 + unsigned long flags; 790 + 791 + local_irq_save(flags); 792 + if (vcpu->arch.last_sched_cpu != cpu) { 793 + kvm_debug("[%d->%d]KVM vCPU[%d] switch\n", 794 + vcpu->arch.last_sched_cpu, cpu, vcpu->vcpu_id); 795 + /* 796 + * Migrate the timer interrupt to the current CPU so that it 797 + * always interrupts the guest and synchronously triggers a 798 + * guest timer interrupt. 799 + */ 800 + kvm_migrate_count(vcpu); 801 + } 802 + 803 + /* Restore guest state to registers */ 804 + _kvm_vcpu_load(vcpu, cpu); 805 + local_irq_restore(flags); 806 + } 807 + 808 + static int _kvm_vcpu_put(struct kvm_vcpu *vcpu, int cpu) 809 + { 810 + struct loongarch_csrs *csr = vcpu->arch.csr; 811 + 812 + kvm_lose_fpu(vcpu); 813 + 814 + /* 815 + * Update CSR state from hardware if software CSR state is stale, 816 + * most CSR registers are kept unchanged during process context 817 + * switch except CSR registers like remaining timer tick value and 818 + * injected interrupt state. 819 + */ 820 + if (vcpu->arch.aux_inuse & KVM_LARCH_SWCSR_LATEST) 821 + goto out; 822 + 823 + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_CRMD); 824 + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_PRMD); 825 + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_EUEN); 826 + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_MISC); 827 + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_ECFG); 828 + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_ERA); 829 + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_BADV); 830 + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_BADI); 831 + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_EENTRY); 832 + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TLBIDX); 833 + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TLBEHI); 834 + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TLBELO0); 835 + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TLBELO1); 836 + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_ASID); 837 + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_PGDL); 838 + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_PGDH); 839 + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_PWCTL0); 840 + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_PWCTL1); 841 + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_STLBPGSIZE); 842 + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_RVACFG); 843 + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_CPUID); 844 + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_PRCFG1); 845 + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_PRCFG2); 846 + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_PRCFG3); 847 + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_KS0); 848 + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_KS1); 849 + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_KS2); 850 + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_KS3); 851 + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_KS4); 852 + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_KS5); 853 + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_KS6); 854 + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_KS7); 855 + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TMID); 856 + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_CNTC); 857 + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_LLBCTL); 858 + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TLBRENTRY); 859 + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TLBRBADV); 860 + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TLBRERA); 861 + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TLBRSAVE); 862 + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TLBRELO0); 863 + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TLBRELO1); 864 + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TLBREHI); 865 + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_TLBRPRMD); 866 + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_DMWIN0); 867 + 
kvm_save_hw_gcsr(csr, LOONGARCH_CSR_DMWIN1); 868 + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_DMWIN2); 869 + kvm_save_hw_gcsr(csr, LOONGARCH_CSR_DMWIN3); 870 + 871 + vcpu->arch.aux_inuse |= KVM_LARCH_SWCSR_LATEST; 872 + 873 + out: 874 + kvm_save_timer(vcpu); 875 + /* Save Root.GINTC into unused Guest.GINTC register */ 876 + csr->csrs[LOONGARCH_CSR_GINTC] = read_csr_gintc(); 877 + 878 + return 0; 879 + } 880 + 881 + void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu) 882 + { 883 + int cpu; 884 + unsigned long flags; 885 + 886 + local_irq_save(flags); 887 + cpu = smp_processor_id(); 888 + vcpu->arch.last_sched_cpu = cpu; 889 + 890 + /* Save guest state in registers */ 891 + _kvm_vcpu_put(vcpu, cpu); 892 + local_irq_restore(flags); 893 + } 894 + 895 + int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu) 896 + { 897 + int r = -EINTR; 898 + struct kvm_run *run = vcpu->run; 899 + 900 + if (vcpu->mmio_needed) { 901 + if (!vcpu->mmio_is_write) 902 + kvm_complete_mmio_read(vcpu, run); 903 + vcpu->mmio_needed = 0; 904 + } 905 + 906 + if (run->exit_reason == KVM_EXIT_LOONGARCH_IOCSR) { 907 + if (!run->iocsr_io.is_write) 908 + kvm_complete_iocsr_read(vcpu, run); 909 + } 910 + 911 + if (run->immediate_exit) 912 + return r; 913 + 914 + /* Clear exit_reason */ 915 + run->exit_reason = KVM_EXIT_UNKNOWN; 916 + lose_fpu(1); 917 + vcpu_load(vcpu); 918 + kvm_sigset_activate(vcpu); 919 + r = kvm_pre_enter_guest(vcpu); 920 + if (r != RESUME_GUEST) 921 + goto out; 922 + 923 + guest_timing_enter_irqoff(); 924 + guest_state_enter_irqoff(); 925 + trace_kvm_enter(vcpu); 926 + r = kvm_loongarch_ops->enter_guest(run, vcpu); 927 + 928 + trace_kvm_out(vcpu); 929 + /* 930 + * Guest exit is already recorded at kvm_handle_exit() 931 + * return value must not be RESUME_GUEST 932 + */ 933 + local_irq_enable(); 934 + out: 935 + kvm_sigset_deactivate(vcpu); 936 + vcpu_put(vcpu); 937 + 938 + return r; 939 + }
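In _kvm_getcsr()/_kvm_setcsr() above, the guest's ESTAT interrupt-pending bits IP0-IP7 (bits 2..9) are never kept in the soft ESTAT copy: a userspace write routes them into GINTC, and a read folds them back in. The standalone model below uses only the shifts and masks visible in that code; the struct and function names are illustrative.

/* Illustrative model of the ESTAT <-> GINTC split (not kernel code). */
#include <stdint.h>
#include <stdio.h>

struct sw_csr { uint64_t estat, gintc; };

static void set_estat(struct sw_csr *c, uint64_t val)
{
    c->gintc = (val >> 2) & 0xff;       /* IP0-IP7 go to GINTC */
    c->estat = val & ~(0xffULL << 2);   /* everything else stays in ESTAT */
}

static uint64_t get_estat(const struct sw_csr *c)
{
    return c->estat | ((c->gintc & 0xff) << 2);
}

int main(void)
{
    struct sw_csr c = { 0, 0 };

    set_estat(&c, 0x3f4);               /* some IP bits set */
    printf("gintc=0x%llx estat=0x%llx back=0x%llx\n",
           (unsigned long long)c.gintc, (unsigned long long)c.estat,
           (unsigned long long)get_estat(&c));      /* round-trips to 0x3f4 */
    return 0;
}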
+94
arch/loongarch/kvm/vm.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Copyright (C) 2020-2023 Loongson Technology Corporation Limited 4 + */ 5 + 6 + #include <linux/kvm_host.h> 7 + #include <asm/kvm_mmu.h> 8 + 9 + const struct _kvm_stats_desc kvm_vm_stats_desc[] = { 10 + KVM_GENERIC_VM_STATS(), 11 + STATS_DESC_ICOUNTER(VM, pages), 12 + STATS_DESC_ICOUNTER(VM, hugepages), 13 + }; 14 + 15 + const struct kvm_stats_header kvm_vm_stats_header = { 16 + .name_size = KVM_STATS_NAME_SIZE, 17 + .num_desc = ARRAY_SIZE(kvm_vm_stats_desc), 18 + .id_offset = sizeof(struct kvm_stats_header), 19 + .desc_offset = sizeof(struct kvm_stats_header) + KVM_STATS_NAME_SIZE, 20 + .data_offset = sizeof(struct kvm_stats_header) + KVM_STATS_NAME_SIZE + 21 + sizeof(kvm_vm_stats_desc), 22 + }; 23 + 24 + int kvm_arch_init_vm(struct kvm *kvm, unsigned long type) 25 + { 26 + int i; 27 + 28 + /* Allocate page table to map GPA -> RPA */ 29 + kvm->arch.pgd = kvm_pgd_alloc(); 30 + if (!kvm->arch.pgd) 31 + return -ENOMEM; 32 + 33 + kvm_init_vmcs(kvm); 34 + kvm->arch.gpa_size = BIT(cpu_vabits - 1); 35 + kvm->arch.root_level = CONFIG_PGTABLE_LEVELS - 1; 36 + kvm->arch.invalid_ptes[0] = 0; 37 + kvm->arch.invalid_ptes[1] = (unsigned long)invalid_pte_table; 38 + #if CONFIG_PGTABLE_LEVELS > 2 39 + kvm->arch.invalid_ptes[2] = (unsigned long)invalid_pmd_table; 40 + #endif 41 + #if CONFIG_PGTABLE_LEVELS > 3 42 + kvm->arch.invalid_ptes[3] = (unsigned long)invalid_pud_table; 43 + #endif 44 + for (i = 0; i <= kvm->arch.root_level; i++) 45 + kvm->arch.pte_shifts[i] = PAGE_SHIFT + i * (PAGE_SHIFT - 3); 46 + 47 + return 0; 48 + } 49 + 50 + void kvm_arch_destroy_vm(struct kvm *kvm) 51 + { 52 + kvm_destroy_vcpus(kvm); 53 + free_page((unsigned long)kvm->arch.pgd); 54 + kvm->arch.pgd = NULL; 55 + } 56 + 57 + int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext) 58 + { 59 + int r; 60 + 61 + switch (ext) { 62 + case KVM_CAP_ONE_REG: 63 + case KVM_CAP_ENABLE_CAP: 64 + case KVM_CAP_READONLY_MEM: 65 + case KVM_CAP_SYNC_MMU: 66 + case KVM_CAP_IMMEDIATE_EXIT: 67 + case KVM_CAP_IOEVENTFD: 68 + case KVM_CAP_MP_STATE: 69 + r = 1; 70 + break; 71 + case KVM_CAP_NR_VCPUS: 72 + r = num_online_cpus(); 73 + break; 74 + case KVM_CAP_MAX_VCPUS: 75 + r = KVM_MAX_VCPUS; 76 + break; 77 + case KVM_CAP_MAX_VCPU_ID: 78 + r = KVM_MAX_VCPU_IDS; 79 + break; 80 + case KVM_CAP_NR_MEMSLOTS: 81 + r = KVM_USER_MEM_SLOTS; 82 + break; 83 + default: 84 + r = 0; 85 + break; 86 + } 87 + 88 + return r; 89 + } 90 + 91 + int kvm_arch_vm_ioctl(struct file *filp, unsigned int ioctl, unsigned long arg) 92 + { 93 + return -ENOIOCTLCMD; 94 + }
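kvm_arch_init_vm() above precomputes pte_shifts[i] = PAGE_SHIFT + i * (PAGE_SHIFT - 3): with 8-byte page-table entries, each additional level indexes PAGE_SHIFT - 3 more address bits. A worked example, assuming 16 KiB pages (PAGE_SHIFT = 14 is an assumption for this sketch, not part of the diff):

/* Worked example of the pte_shifts[] setup. */
#include <stdio.h>

#define PAGE_SHIFT 14

int main(void)
{
    for (int i = 0; i <= 3; i++)
        printf("level %d: shift %d\n", i, PAGE_SHIFT + i * (PAGE_SHIFT - 3));
    /* prints 14, 25, 36, 47 -- i.e. 11 index bits per level above the page */
    return 0;
}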
+18
arch/riscv/include/asm/csr.h
··· 203 203 #define ENVCFG_CBIE_INV _AC(0x3, UL) 204 204 #define ENVCFG_FIOM _AC(0x1, UL) 205 205 206 + /* Smstateen bits */ 207 + #define SMSTATEEN0_AIA_IMSIC_SHIFT 58 208 + #define SMSTATEEN0_AIA_IMSIC (_ULL(1) << SMSTATEEN0_AIA_IMSIC_SHIFT) 209 + #define SMSTATEEN0_AIA_SHIFT 59 210 + #define SMSTATEEN0_AIA (_ULL(1) << SMSTATEEN0_AIA_SHIFT) 211 + #define SMSTATEEN0_AIA_ISEL_SHIFT 60 212 + #define SMSTATEEN0_AIA_ISEL (_ULL(1) << SMSTATEEN0_AIA_ISEL_SHIFT) 213 + #define SMSTATEEN0_HSENVCFG_SHIFT 62 214 + #define SMSTATEEN0_HSENVCFG (_ULL(1) << SMSTATEEN0_HSENVCFG_SHIFT) 215 + #define SMSTATEEN0_SSTATEEN0_SHIFT 63 216 + #define SMSTATEEN0_SSTATEEN0 (_ULL(1) << SMSTATEEN0_SSTATEEN0_SHIFT) 217 + 206 218 /* symbolic CSR names: */ 207 219 #define CSR_CYCLE 0xc00 208 220 #define CSR_TIME 0xc01 ··· 287 275 #define CSR_SIE 0x104 288 276 #define CSR_STVEC 0x105 289 277 #define CSR_SCOUNTEREN 0x106 278 + #define CSR_SENVCFG 0x10a 279 + #define CSR_SSTATEEN0 0x10c 290 280 #define CSR_SSCRATCH 0x140 291 281 #define CSR_SEPC 0x141 292 282 #define CSR_SCAUSE 0x142 ··· 362 348 #define CSR_HVIPRIO2H 0x657 363 349 #define CSR_VSIEH 0x214 364 350 #define CSR_VSIPH 0x254 351 + 352 + /* Hypervisor stateen CSRs */ 353 + #define CSR_HSTATEEN0 0x60c 354 + #define CSR_HSTATEEN0H 0x61c 365 355 366 356 #define CSR_MSTATUS 0x300 367 357 #define CSR_MISA 0x301
+2
arch/riscv/include/asm/hwcap.h
··· 58 58 #define RISCV_ISA_EXT_ZICSR 40 59 59 #define RISCV_ISA_EXT_ZIFENCEI 41 60 60 #define RISCV_ISA_EXT_ZIHPM 42 61 + #define RISCV_ISA_EXT_SMSTATEEN 43 62 + #define RISCV_ISA_EXT_ZICOND 44 61 63 62 64 #define RISCV_ISA_EXT_MAX 64 63 65
+18
arch/riscv/include/asm/kvm_host.h
··· 162 162 unsigned long hvip; 163 163 unsigned long vsatp; 164 164 unsigned long scounteren; 165 + unsigned long senvcfg; 166 + }; 167 + 168 + struct kvm_vcpu_config { 169 + u64 henvcfg; 170 + u64 hstateen0; 171 + }; 172 + 173 + struct kvm_vcpu_smstateen_csr { 174 + unsigned long sstateen0; 165 175 }; 166 176 167 177 struct kvm_vcpu_arch { ··· 193 183 unsigned long host_sscratch; 194 184 unsigned long host_stvec; 195 185 unsigned long host_scounteren; 186 + unsigned long host_senvcfg; 187 + unsigned long host_sstateen0; 196 188 197 189 /* CPU context of Host */ 198 190 struct kvm_cpu_context host_context; ··· 204 192 205 193 /* CPU CSR context of Guest VCPU */ 206 194 struct kvm_vcpu_csr guest_csr; 195 + 196 + /* CPU Smstateen CSR context of Guest VCPU */ 197 + struct kvm_vcpu_smstateen_csr smstateen_csr; 207 198 208 199 /* CPU context upon Guest VCPU reset */ 209 200 struct kvm_cpu_context guest_reset_context; ··· 259 244 260 245 /* Performance monitoring context */ 261 246 struct kvm_pmu pmu_context; 247 + 248 + /* 'static' configurations which are set only once */ 249 + struct kvm_vcpu_config cfg; 262 250 }; 263 251 264 252 static inline void kvm_arch_sync_events(struct kvm *kvm) {}
+6 -1
arch/riscv/include/asm/kvm_vcpu_sbi.h
··· 11 11 12 12 #define KVM_SBI_IMPID 3 13 13 14 - #define KVM_SBI_VERSION_MAJOR 1 14 + #define KVM_SBI_VERSION_MAJOR 2 15 15 #define KVM_SBI_VERSION_MINOR 0 16 16 17 17 enum kvm_riscv_sbi_ext_status { ··· 35 35 struct kvm_vcpu_sbi_extension { 36 36 unsigned long extid_start; 37 37 unsigned long extid_end; 38 + 39 + bool default_unavail; 40 + 38 41 /** 39 42 * SBI extension handler. It can be defined for a given extension or group of 40 43 * extension. But it should always return linux error codes rather than SBI ··· 62 59 const struct kvm_vcpu_sbi_extension *kvm_vcpu_sbi_find_ext( 63 60 struct kvm_vcpu *vcpu, unsigned long extid); 64 61 int kvm_riscv_vcpu_sbi_ecall(struct kvm_vcpu *vcpu, struct kvm_run *run); 62 + void kvm_riscv_vcpu_sbi_init(struct kvm_vcpu *vcpu); 65 63 66 64 #ifdef CONFIG_RISCV_SBI_V01 67 65 extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_v01; ··· 73 69 extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_rfence; 74 70 extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_srst; 75 71 extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_hsm; 72 + extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_dbcn; 76 73 extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_experimental; 77 74 extern const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_vendor; 78 75
+7
arch/riscv/include/asm/sbi.h
··· 30 30 SBI_EXT_HSM = 0x48534D, 31 31 SBI_EXT_SRST = 0x53525354, 32 32 SBI_EXT_PMU = 0x504D55, 33 + SBI_EXT_DBCN = 0x4442434E, 33 34 34 35 /* Experimentals extensions must lie within this range */ 35 36 SBI_EXT_EXPERIMENTAL_START = 0x08000000, ··· 236 235 237 236 /* Flags defined for counter stop function */ 238 237 #define SBI_PMU_STOP_FLAG_RESET (1 << 0) 238 + 239 + enum sbi_ext_dbcn_fid { 240 + SBI_EXT_DBCN_CONSOLE_WRITE = 0, 241 + SBI_EXT_DBCN_CONSOLE_READ = 1, 242 + SBI_EXT_DBCN_CONSOLE_WRITE_BYTE = 2, 243 + }; 239 244 240 245 #define SBI_SPEC_VERSION_DEFAULT 0x1 241 246 #define SBI_SPEC_VERSION_MAJOR_SHIFT 24
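For context, the new DBCN function IDs are invoked from the guest via the standard SBI calling convention (extension ID in a7, function ID in a6, arguments and return values in a0/a1). A minimal guest-side sketch, not part of this patch and relying only on the IDs defined above, that writes one byte to the debug console:

    /* Hypothetical guest-side helper using the standard SBI ecall convention. */
    static long sbi_dbcn_write_byte(unsigned char c)
    {
            register unsigned long a0 asm("a0") = c;
            register unsigned long a1 asm("a1") = 0;
            register unsigned long a6 asm("a6") = SBI_EXT_DBCN_CONSOLE_WRITE_BYTE;
            register unsigned long a7 asm("a7") = SBI_EXT_DBCN;

            asm volatile("ecall"
                         : "+r" (a0), "+r" (a1)
                         : "r" (a6), "r" (a7)
                         : "memory");

            return (long)a0;        /* SBI error code, 0 on success */
    }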
+12
arch/riscv/include/uapi/asm/kvm.h
··· 80 80 unsigned long sip; 81 81 unsigned long satp; 82 82 unsigned long scounteren; 83 + unsigned long senvcfg; 83 84 }; 84 85 85 86 /* AIA CSR registers for KVM_GET_ONE_REG and KVM_SET_ONE_REG */ ··· 92 91 unsigned long siph; 93 92 unsigned long iprio1h; 94 93 unsigned long iprio2h; 94 + }; 95 + 96 + /* Smstateen CSR for KVM_GET_ONE_REG and KVM_SET_ONE_REG */ 97 + struct kvm_riscv_smstateen_csr { 98 + unsigned long sstateen0; 95 99 }; 96 100 97 101 /* TIMER registers for KVM_GET_ONE_REG and KVM_SET_ONE_REG */ ··· 137 131 KVM_RISCV_ISA_EXT_ZICSR, 138 132 KVM_RISCV_ISA_EXT_ZIFENCEI, 139 133 KVM_RISCV_ISA_EXT_ZIHPM, 134 + KVM_RISCV_ISA_EXT_SMSTATEEN, 135 + KVM_RISCV_ISA_EXT_ZICOND, 140 136 KVM_RISCV_ISA_EXT_MAX, 141 137 }; 142 138 ··· 156 148 KVM_RISCV_SBI_EXT_PMU, 157 149 KVM_RISCV_SBI_EXT_EXPERIMENTAL, 158 150 KVM_RISCV_SBI_EXT_VENDOR, 151 + KVM_RISCV_SBI_EXT_DBCN, 159 152 KVM_RISCV_SBI_EXT_MAX, 160 153 }; 161 154 ··· 187 178 #define KVM_REG_RISCV_CSR (0x03 << KVM_REG_RISCV_TYPE_SHIFT) 188 179 #define KVM_REG_RISCV_CSR_GENERAL (0x0 << KVM_REG_RISCV_SUBTYPE_SHIFT) 189 180 #define KVM_REG_RISCV_CSR_AIA (0x1 << KVM_REG_RISCV_SUBTYPE_SHIFT) 181 + #define KVM_REG_RISCV_CSR_SMSTATEEN (0x2 << KVM_REG_RISCV_SUBTYPE_SHIFT) 190 182 #define KVM_REG_RISCV_CSR_REG(name) \ 191 183 (offsetof(struct kvm_riscv_csr, name) / sizeof(unsigned long)) 192 184 #define KVM_REG_RISCV_CSR_AIA_REG(name) \ 193 185 (offsetof(struct kvm_riscv_aia_csr, name) / sizeof(unsigned long)) 186 + #define KVM_REG_RISCV_CSR_SMSTATEEN_REG(name) \ 187 + (offsetof(struct kvm_riscv_smstateen_csr, name) / sizeof(unsigned long)) 194 188 195 189 /* Timer registers are mapped as type 4 */ 196 190 #define KVM_REG_RISCV_TIMER (0x04 << KVM_REG_RISCV_TYPE_SHIFT)
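To illustrate the new subtype, a userspace sketch (assuming riscv64 and an already-created vCPU fd; KVM_GET_ONE_REG is the existing KVM ioctl) that reads the guest's sstateen0 using the ID macros defined above:

    #include <stddef.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /* Hypothetical helper: returns 0 on success, -1 with errno set otherwise. */
    static int get_guest_sstateen0(int vcpu_fd, unsigned long *val)
    {
            struct kvm_one_reg reg = {
                    .id = KVM_REG_RISCV | KVM_REG_SIZE_U64 |
                          KVM_REG_RISCV_CSR | KVM_REG_RISCV_CSR_SMSTATEEN |
                          KVM_REG_RISCV_CSR_SMSTATEEN_REG(sstateen0),
                    .addr = (unsigned long)val,
            };

            return ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg);
    }

Per the vcpu_onereg.c hunk further down, KVM rejects this access unless the host itself implements Smstateen.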
+2
arch/riscv/kernel/cpufeature.c
··· 167 167 __RISCV_ISA_EXT_DATA(zicbom, RISCV_ISA_EXT_ZICBOM), 168 168 __RISCV_ISA_EXT_DATA(zicboz, RISCV_ISA_EXT_ZICBOZ), 169 169 __RISCV_ISA_EXT_DATA(zicntr, RISCV_ISA_EXT_ZICNTR), 170 + __RISCV_ISA_EXT_DATA(zicond, RISCV_ISA_EXT_ZICOND), 170 171 __RISCV_ISA_EXT_DATA(zicsr, RISCV_ISA_EXT_ZICSR), 171 172 __RISCV_ISA_EXT_DATA(zifencei, RISCV_ISA_EXT_ZIFENCEI), 172 173 __RISCV_ISA_EXT_DATA(zihintpause, RISCV_ISA_EXT_ZIHINTPAUSE), ··· 176 175 __RISCV_ISA_EXT_DATA(zbb, RISCV_ISA_EXT_ZBB), 177 176 __RISCV_ISA_EXT_DATA(zbs, RISCV_ISA_EXT_ZBS), 178 177 __RISCV_ISA_EXT_DATA(smaia, RISCV_ISA_EXT_SMAIA), 178 + __RISCV_ISA_EXT_DATA(smstateen, RISCV_ISA_EXT_SMSTATEEN), 179 179 __RISCV_ISA_EXT_DATA(ssaia, RISCV_ISA_EXT_SSAIA), 180 180 __RISCV_ISA_EXT_DATA(sscofpmf, RISCV_ISA_EXT_SSCOFPMF), 181 181 __RISCV_ISA_EXT_DATA(sstc, RISCV_ISA_EXT_SSTC),
+62 -12
arch/riscv/kvm/vcpu.c
··· 141 141 if (rc) 142 142 return rc; 143 143 144 + /* 145 + * Setup SBI extensions 146 + * NOTE: This must be the last thing to be initialized. 147 + */ 148 + kvm_riscv_vcpu_sbi_init(vcpu); 149 + 144 150 /* Reset VCPU */ 145 151 kvm_riscv_reset_vcpu(vcpu); 146 152 ··· 477 471 return -EINVAL; 478 472 } 479 473 480 - static void kvm_riscv_vcpu_update_config(const unsigned long *isa) 474 + static void kvm_riscv_vcpu_setup_config(struct kvm_vcpu *vcpu) 481 475 { 482 - u64 henvcfg = 0; 476 + const unsigned long *isa = vcpu->arch.isa; 477 + struct kvm_vcpu_config *cfg = &vcpu->arch.cfg; 483 478 484 479 if (riscv_isa_extension_available(isa, SVPBMT)) 485 - henvcfg |= ENVCFG_PBMTE; 480 + cfg->henvcfg |= ENVCFG_PBMTE; 486 481 487 482 if (riscv_isa_extension_available(isa, SSTC)) 488 - henvcfg |= ENVCFG_STCE; 483 + cfg->henvcfg |= ENVCFG_STCE; 489 484 490 485 if (riscv_isa_extension_available(isa, ZICBOM)) 491 - henvcfg |= (ENVCFG_CBIE | ENVCFG_CBCFE); 486 + cfg->henvcfg |= (ENVCFG_CBIE | ENVCFG_CBCFE); 492 487 493 488 if (riscv_isa_extension_available(isa, ZICBOZ)) 494 - henvcfg |= ENVCFG_CBZE; 489 + cfg->henvcfg |= ENVCFG_CBZE; 495 490 496 - csr_write(CSR_HENVCFG, henvcfg); 497 - #ifdef CONFIG_32BIT 498 - csr_write(CSR_HENVCFGH, henvcfg >> 32); 499 - #endif 491 + if (riscv_has_extension_unlikely(RISCV_ISA_EXT_SMSTATEEN)) { 492 + cfg->hstateen0 |= SMSTATEEN0_HSENVCFG; 493 + if (riscv_isa_extension_available(isa, SSAIA)) 494 + cfg->hstateen0 |= SMSTATEEN0_AIA_IMSIC | 495 + SMSTATEEN0_AIA | 496 + SMSTATEEN0_AIA_ISEL; 497 + if (riscv_isa_extension_available(isa, SMSTATEEN)) 498 + cfg->hstateen0 |= SMSTATEEN0_SSTATEEN0; 499 + } 500 500 } 501 501 502 502 void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu) 503 503 { 504 504 struct kvm_vcpu_csr *csr = &vcpu->arch.guest_csr; 505 + struct kvm_vcpu_config *cfg = &vcpu->arch.cfg; 505 506 506 507 csr_write(CSR_VSSTATUS, csr->vsstatus); 507 508 csr_write(CSR_VSIE, csr->vsie); ··· 519 506 csr_write(CSR_VSTVAL, csr->vstval); 520 507 csr_write(CSR_HVIP, csr->hvip); 521 508 csr_write(CSR_VSATP, csr->vsatp); 522 - 523 - kvm_riscv_vcpu_update_config(vcpu->arch.isa); 509 + csr_write(CSR_HENVCFG, cfg->henvcfg); 510 + if (IS_ENABLED(CONFIG_32BIT)) 511 + csr_write(CSR_HENVCFGH, cfg->henvcfg >> 32); 512 + if (riscv_has_extension_unlikely(RISCV_ISA_EXT_SMSTATEEN)) { 513 + csr_write(CSR_HSTATEEN0, cfg->hstateen0); 514 + if (IS_ENABLED(CONFIG_32BIT)) 515 + csr_write(CSR_HSTATEEN0H, cfg->hstateen0 >> 32); 516 + } 524 517 525 518 kvm_riscv_gstage_update_hgatp(vcpu); 526 519 ··· 625 606 kvm_riscv_vcpu_aia_update_hvip(vcpu); 626 607 } 627 608 609 + static __always_inline void kvm_riscv_vcpu_swap_in_guest_state(struct kvm_vcpu *vcpu) 610 + { 611 + struct kvm_vcpu_smstateen_csr *smcsr = &vcpu->arch.smstateen_csr; 612 + struct kvm_vcpu_csr *csr = &vcpu->arch.guest_csr; 613 + struct kvm_vcpu_config *cfg = &vcpu->arch.cfg; 614 + 615 + vcpu->arch.host_senvcfg = csr_swap(CSR_SENVCFG, csr->senvcfg); 616 + if (riscv_has_extension_unlikely(RISCV_ISA_EXT_SMSTATEEN) && 617 + (cfg->hstateen0 & SMSTATEEN0_SSTATEEN0)) 618 + vcpu->arch.host_sstateen0 = csr_swap(CSR_SSTATEEN0, 619 + smcsr->sstateen0); 620 + } 621 + 622 + static __always_inline void kvm_riscv_vcpu_swap_in_host_state(struct kvm_vcpu *vcpu) 623 + { 624 + struct kvm_vcpu_smstateen_csr *smcsr = &vcpu->arch.smstateen_csr; 625 + struct kvm_vcpu_csr *csr = &vcpu->arch.guest_csr; 626 + struct kvm_vcpu_config *cfg = &vcpu->arch.cfg; 627 + 628 + csr->senvcfg = csr_swap(CSR_SENVCFG, vcpu->arch.host_senvcfg); 629 + if 
(riscv_has_extension_unlikely(RISCV_ISA_EXT_SMSTATEEN) && 630 + (cfg->hstateen0 & SMSTATEEN0_SSTATEEN0)) 631 + smcsr->sstateen0 = csr_swap(CSR_SSTATEEN0, 632 + vcpu->arch.host_sstateen0); 633 + } 634 + 628 635 /* 629 636 * Actually run the vCPU, entering an RCU extended quiescent state (EQS) while 630 637 * the vCPU is running. ··· 660 615 */ 661 616 static void noinstr kvm_riscv_vcpu_enter_exit(struct kvm_vcpu *vcpu) 662 617 { 618 + kvm_riscv_vcpu_swap_in_guest_state(vcpu); 663 619 guest_state_enter_irqoff(); 664 620 __kvm_riscv_switch_to(&vcpu->arch); 665 621 vcpu->arch.last_exit_cpu = vcpu->cpu; 666 622 guest_state_exit_irqoff(); 623 + kvm_riscv_vcpu_swap_in_host_state(vcpu); 667 624 } 668 625 669 626 int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu) ··· 673 626 int ret; 674 627 struct kvm_cpu_trap trap; 675 628 struct kvm_run *run = vcpu->run; 629 + 630 + if (!vcpu->arch.ran_atleast_once) 631 + kvm_riscv_vcpu_setup_config(vcpu); 676 632 677 633 /* Mark this VCPU ran at least once */ 678 634 vcpu->arch.ran_atleast_once = true;
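The csr_swap() used by the new swap_in_guest_state()/swap_in_host_state() helpers is the existing asm/csr.h accessor built on csrrw, so installing the guest's senvcfg while capturing the host's previous value is a single instruction; schematically:

    old = csr_swap(CSR_SENVCFG, new);   /* csrrw old, senvcfg, new */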
+69 -3
arch/riscv/kvm/vcpu_onereg.c
··· 34 34 [KVM_RISCV_ISA_EXT_M] = RISCV_ISA_EXT_m, 35 35 [KVM_RISCV_ISA_EXT_V] = RISCV_ISA_EXT_v, 36 36 /* Multi letter extensions (alphabetically sorted) */ 37 + KVM_ISA_EXT_ARR(SMSTATEEN), 37 38 KVM_ISA_EXT_ARR(SSAIA), 38 39 KVM_ISA_EXT_ARR(SSTC), 39 40 KVM_ISA_EXT_ARR(SVINVAL), ··· 46 45 KVM_ISA_EXT_ARR(ZICBOM), 47 46 KVM_ISA_EXT_ARR(ZICBOZ), 48 47 KVM_ISA_EXT_ARR(ZICNTR), 48 + KVM_ISA_EXT_ARR(ZICOND), 49 49 KVM_ISA_EXT_ARR(ZICSR), 50 50 KVM_ISA_EXT_ARR(ZIFENCEI), 51 51 KVM_ISA_EXT_ARR(ZIHINTPAUSE), ··· 82 80 static bool kvm_riscv_vcpu_isa_disable_allowed(unsigned long ext) 83 81 { 84 82 switch (ext) { 83 + /* Extensions which don't have any mechanism to disable */ 85 84 case KVM_RISCV_ISA_EXT_A: 86 85 case KVM_RISCV_ISA_EXT_C: 87 86 case KVM_RISCV_ISA_EXT_I: 88 87 case KVM_RISCV_ISA_EXT_M: 89 - case KVM_RISCV_ISA_EXT_SSAIA: 90 88 case KVM_RISCV_ISA_EXT_SSTC: 91 89 case KVM_RISCV_ISA_EXT_SVINVAL: 92 90 case KVM_RISCV_ISA_EXT_SVNAPOT: ··· 94 92 case KVM_RISCV_ISA_EXT_ZBB: 95 93 case KVM_RISCV_ISA_EXT_ZBS: 96 94 case KVM_RISCV_ISA_EXT_ZICNTR: 95 + case KVM_RISCV_ISA_EXT_ZICOND: 97 96 case KVM_RISCV_ISA_EXT_ZICSR: 98 97 case KVM_RISCV_ISA_EXT_ZIFENCEI: 99 98 case KVM_RISCV_ISA_EXT_ZIHINTPAUSE: 100 99 case KVM_RISCV_ISA_EXT_ZIHPM: 101 100 return false; 101 + /* Extensions which can be disabled using Smstateen */ 102 + case KVM_RISCV_ISA_EXT_SSAIA: 103 + return riscv_has_extension_unlikely(RISCV_ISA_EXT_SMSTATEEN); 102 104 default: 103 105 break; 104 106 } ··· 384 378 return 0; 385 379 } 386 380 381 + static inline int kvm_riscv_vcpu_smstateen_set_csr(struct kvm_vcpu *vcpu, 382 + unsigned long reg_num, 383 + unsigned long reg_val) 384 + { 385 + struct kvm_vcpu_smstateen_csr *csr = &vcpu->arch.smstateen_csr; 386 + 387 + if (reg_num >= sizeof(struct kvm_riscv_smstateen_csr) / 388 + sizeof(unsigned long)) 389 + return -EINVAL; 390 + 391 + ((unsigned long *)csr)[reg_num] = reg_val; 392 + return 0; 393 + } 394 + 395 + static int kvm_riscv_vcpu_smstateen_get_csr(struct kvm_vcpu *vcpu, 396 + unsigned long reg_num, 397 + unsigned long *out_val) 398 + { 399 + struct kvm_vcpu_smstateen_csr *csr = &vcpu->arch.smstateen_csr; 400 + 401 + if (reg_num >= sizeof(struct kvm_riscv_smstateen_csr) / 402 + sizeof(unsigned long)) 403 + return -EINVAL; 404 + 405 + *out_val = ((unsigned long *)csr)[reg_num]; 406 + return 0; 407 + } 408 + 387 409 static int kvm_riscv_vcpu_get_reg_csr(struct kvm_vcpu *vcpu, 388 410 const struct kvm_one_reg *reg) 389 411 { ··· 434 400 break; 435 401 case KVM_REG_RISCV_CSR_AIA: 436 402 rc = kvm_riscv_vcpu_aia_get_csr(vcpu, reg_num, &reg_val); 403 + break; 404 + case KVM_REG_RISCV_CSR_SMSTATEEN: 405 + rc = -EINVAL; 406 + if (riscv_has_extension_unlikely(RISCV_ISA_EXT_SMSTATEEN)) 407 + rc = kvm_riscv_vcpu_smstateen_get_csr(vcpu, reg_num, 408 + &reg_val); 437 409 break; 438 410 default: 439 411 rc = -ENOENT; ··· 480 440 case KVM_REG_RISCV_CSR_AIA: 481 441 rc = kvm_riscv_vcpu_aia_set_csr(vcpu, reg_num, reg_val); 482 442 break; 443 + case KVM_REG_RISCV_CSR_SMSTATEEN: 444 + rc = -EINVAL; 445 + if (riscv_has_extension_unlikely(RISCV_ISA_EXT_SMSTATEEN)) 446 + rc = kvm_riscv_vcpu_smstateen_set_csr(vcpu, reg_num, 447 + reg_val); 448 + break; 483 449 default: 484 450 rc = -ENOENT; 485 451 break; ··· 742 696 743 697 if (riscv_isa_extension_available(vcpu->arch.isa, SSAIA)) 744 698 n += sizeof(struct kvm_riscv_aia_csr) / sizeof(unsigned long); 699 + if (riscv_isa_extension_available(vcpu->arch.isa, SMSTATEEN)) 700 + n += sizeof(struct kvm_riscv_smstateen_csr) / sizeof(unsigned long); 745 701 746 
702 return n; 747 703 } ··· 752 704 u64 __user *uindices) 753 705 { 754 706 int n1 = sizeof(struct kvm_riscv_csr) / sizeof(unsigned long); 755 - int n2 = 0; 707 + int n2 = 0, n3 = 0; 756 708 757 709 /* copy general csr regs */ 758 710 for (int i = 0; i < n1; i++) { ··· 786 738 } 787 739 } 788 740 789 - return n1 + n2; 741 + /* copy Smstateen csr regs */ 742 + if (riscv_isa_extension_available(vcpu->arch.isa, SMSTATEEN)) { 743 + n3 = sizeof(struct kvm_riscv_smstateen_csr) / sizeof(unsigned long); 744 + 745 + for (int i = 0; i < n3; i++) { 746 + u64 size = IS_ENABLED(CONFIG_32BIT) ? 747 + KVM_REG_SIZE_U32 : KVM_REG_SIZE_U64; 748 + u64 reg = KVM_REG_RISCV | size | KVM_REG_RISCV_CSR | 749 + KVM_REG_RISCV_CSR_SMSTATEEN | i; 750 + 751 + if (uindices) { 752 + if (put_user(reg, uindices)) 753 + return -EFAULT; 754 + uindices++; 755 + } 756 + } 757 + } 758 + 759 + return n1 + n2 + n3; 790 760 } 791 761 792 762 static inline unsigned long num_timer_regs(void)
+32 -29
arch/riscv/kvm/vcpu_sbi.c
··· 67 67 .ext_ptr = &vcpu_sbi_ext_pmu, 68 68 }, 69 69 { 70 + .ext_idx = KVM_RISCV_SBI_EXT_DBCN, 71 + .ext_ptr = &vcpu_sbi_ext_dbcn, 72 + }, 73 + { 70 74 .ext_idx = KVM_RISCV_SBI_EXT_EXPERIMENTAL, 71 75 .ext_ptr = &vcpu_sbi_ext_experimental, 72 76 }, ··· 159 155 if (!sext) 160 156 return -ENOENT; 161 157 162 - /* 163 - * We can't set the extension status to available here, since it may 164 - * have a probe() function which needs to confirm availability first, 165 - * but it may be too early to call that here. We can set the status to 166 - * unavailable, though. 167 - */ 168 - if (!reg_val) 169 - scontext->ext_status[sext->ext_idx] = 158 + scontext->ext_status[sext->ext_idx] = (reg_val) ? 159 + KVM_RISCV_SBI_EXT_AVAILABLE : 170 160 KVM_RISCV_SBI_EXT_UNAVAILABLE; 171 161 172 162 return 0; ··· 186 188 if (!sext) 187 189 return -ENOENT; 188 190 189 - /* 190 - * If the extension status is still uninitialized, then we should probe 191 - * to determine if it's available, but it may be too early to do that 192 - * here. The best we can do is report that the extension has not been 193 - * disabled, i.e. we return 1 when the extension is available and also 194 - * when it only may be available. 195 - */ 196 - *reg_val = scontext->ext_status[sext->ext_idx] != 197 - KVM_RISCV_SBI_EXT_UNAVAILABLE; 198 - 191 + *reg_val = scontext->ext_status[sext->ext_idx] == 192 + KVM_RISCV_SBI_EXT_AVAILABLE; 199 193 return 0; 200 194 } 201 195 ··· 327 337 scontext->ext_status[entry->ext_idx] == 328 338 KVM_RISCV_SBI_EXT_AVAILABLE) 329 339 return ext; 330 - if (scontext->ext_status[entry->ext_idx] == 331 - KVM_RISCV_SBI_EXT_UNAVAILABLE) 332 - return NULL; 333 - if (ext->probe && !ext->probe(vcpu)) { 334 - scontext->ext_status[entry->ext_idx] = 335 - KVM_RISCV_SBI_EXT_UNAVAILABLE; 336 - return NULL; 337 - } 338 340 339 - scontext->ext_status[entry->ext_idx] = 340 - KVM_RISCV_SBI_EXT_AVAILABLE; 341 - return ext; 341 + return NULL; 342 342 } 343 343 } 344 344 ··· 398 418 cp->a1 = sbi_ret.out_val; 399 419 400 420 return ret; 421 + } 422 + 423 + void kvm_riscv_vcpu_sbi_init(struct kvm_vcpu *vcpu) 424 + { 425 + struct kvm_vcpu_sbi_context *scontext = &vcpu->arch.sbi_context; 426 + const struct kvm_riscv_sbi_extension_entry *entry; 427 + const struct kvm_vcpu_sbi_extension *ext; 428 + int i; 429 + 430 + for (i = 0; i < ARRAY_SIZE(sbi_ext); i++) { 431 + entry = &sbi_ext[i]; 432 + ext = entry->ext_ptr; 433 + 434 + if (ext->probe && !ext->probe(vcpu)) { 435 + scontext->ext_status[entry->ext_idx] = 436 + KVM_RISCV_SBI_EXT_UNAVAILABLE; 437 + continue; 438 + } 439 + 440 + scontext->ext_status[entry->ext_idx] = ext->default_unavail ? 441 + KVM_RISCV_SBI_EXT_UNAVAILABLE : 442 + KVM_RISCV_SBI_EXT_AVAILABLE; 443 + } 401 444 }
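Because vcpu_sbi_ext_dbcn (below) is registered with default_unavail = true, the new init path leaves DBCN disabled until userspace opts in. A hypothetical VMM-side sketch using the existing single-extension ONE_REG interface (assuming riscv64 and an existing vCPU fd):

    /* Hypothetical: enable the SBI DBCN extension for a vCPU from userspace. */
    static int enable_sbi_dbcn(int vcpu_fd)
    {
            unsigned long enable = 1;
            struct kvm_one_reg reg = {
                    .id = KVM_REG_RISCV | KVM_REG_SIZE_U64 |
                          KVM_REG_RISCV_SBI_EXT | KVM_REG_RISCV_SBI_SINGLE |
                          KVM_RISCV_SBI_EXT_DBCN,
                    .addr = (unsigned long)&enable,
            };

            return ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);
    }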
+32
arch/riscv/kvm/vcpu_sbi_replace.c
··· 175 175 .extid_end = SBI_EXT_SRST, 176 176 .handler = kvm_sbi_ext_srst_handler, 177 177 }; 178 + 179 + static int kvm_sbi_ext_dbcn_handler(struct kvm_vcpu *vcpu, 180 + struct kvm_run *run, 181 + struct kvm_vcpu_sbi_return *retdata) 182 + { 183 + struct kvm_cpu_context *cp = &vcpu->arch.guest_context; 184 + unsigned long funcid = cp->a6; 185 + 186 + switch (funcid) { 187 + case SBI_EXT_DBCN_CONSOLE_WRITE: 188 + case SBI_EXT_DBCN_CONSOLE_READ: 189 + case SBI_EXT_DBCN_CONSOLE_WRITE_BYTE: 190 + /* 191 + * The SBI debug console functions are unconditionally 192 + * forwarded to the userspace. 193 + */ 194 + kvm_riscv_vcpu_sbi_forward(vcpu, run); 195 + retdata->uexit = true; 196 + break; 197 + default: 198 + retdata->err_val = SBI_ERR_NOT_SUPPORTED; 199 + } 200 + 201 + return 0; 202 + } 203 + 204 + const struct kvm_vcpu_sbi_extension vcpu_sbi_ext_dbcn = { 205 + .extid_start = SBI_EXT_DBCN, 206 + .extid_end = SBI_EXT_DBCN, 207 + .default_unavail = true, 208 + .handler = kvm_sbi_ext_dbcn_handler, 209 + };
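Since the handler forwards all three DBCN calls, the VMM sees them as KVM_EXIT_RISCV_SBI exits and is responsible for filling in the SBI return values. A hypothetical userspace sketch for the write-byte case (the extension and function ID values mirror asm/sbi.h above; they are not uapi, so a real VMM would carry its own copies):

    #include <stdio.h>
    #include <linux/kvm.h>

    static void handle_riscv_sbi_exit(struct kvm_run *run)
    {
            if (run->riscv_sbi.extension_id != 0x4442434E /* DBCN */)
                    return;

            switch (run->riscv_sbi.function_id) {
            case 2: /* SBI_EXT_DBCN_CONSOLE_WRITE_BYTE */
                    putchar((int)(run->riscv_sbi.args[0] & 0xff));
                    run->riscv_sbi.ret[0] = 0;      /* SBI_SUCCESS */
                    break;
            default:
                    run->riscv_sbi.ret[0] = -2;     /* SBI_ERR_NOT_SUPPORTED */
                    break;
            }
    }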
+7
arch/s390/include/asm/kvm_host.h
··· 777 777 u64 inject_service_signal; 778 778 u64 inject_virtio; 779 779 u64 aen_forward; 780 + u64 gmap_shadow_create; 781 + u64 gmap_shadow_reuse; 782 + u64 gmap_shadow_r1_entry; 783 + u64 gmap_shadow_r2_entry; 784 + u64 gmap_shadow_r3_entry; 785 + u64 gmap_shadow_sg_entry; 786 + u64 gmap_shadow_pg_entry; 780 787 }; 781 788 782 789 struct kvm_arch_memory_slot {
+7
arch/s390/kvm/gaccess.c
··· 1382 1382 unsigned long *pgt, int *dat_protection, 1383 1383 int *fake) 1384 1384 { 1385 + struct kvm *kvm; 1385 1386 struct gmap *parent; 1386 1387 union asce asce; 1387 1388 union vaddress vaddr; ··· 1391 1390 1392 1391 *fake = 0; 1393 1392 *dat_protection = 0; 1393 + kvm = sg->private; 1394 1394 parent = sg->parent; 1395 1395 vaddr.addr = saddr; 1396 1396 asce.val = sg->orig_asce; ··· 1452 1450 rc = gmap_shadow_r2t(sg, saddr, rfte.val, *fake); 1453 1451 if (rc) 1454 1452 return rc; 1453 + kvm->stat.gmap_shadow_r1_entry++; 1455 1454 } 1456 1455 fallthrough; 1457 1456 case ASCE_TYPE_REGION2: { ··· 1481 1478 rc = gmap_shadow_r3t(sg, saddr, rste.val, *fake); 1482 1479 if (rc) 1483 1480 return rc; 1481 + kvm->stat.gmap_shadow_r2_entry++; 1484 1482 } 1485 1483 fallthrough; 1486 1484 case ASCE_TYPE_REGION3: { ··· 1519 1515 rc = gmap_shadow_sgt(sg, saddr, rtte.val, *fake); 1520 1516 if (rc) 1521 1517 return rc; 1518 + kvm->stat.gmap_shadow_r3_entry++; 1522 1519 } 1523 1520 fallthrough; 1524 1521 case ASCE_TYPE_SEGMENT: { ··· 1553 1548 rc = gmap_shadow_pgt(sg, saddr, ste.val, *fake); 1554 1549 if (rc) 1555 1550 return rc; 1551 + kvm->stat.gmap_shadow_sg_entry++; 1556 1552 } 1557 1553 } 1558 1554 /* Return the parent address of the page table */ ··· 1624 1618 pte.p |= dat_protection; 1625 1619 if (!rc) 1626 1620 rc = gmap_shadow_page(sg, saddr, __pte(pte.val)); 1621 + vcpu->kvm->stat.gmap_shadow_pg_entry++; 1627 1622 ipte_unlock(vcpu->kvm); 1628 1623 mmap_read_unlock(sg->mm); 1629 1624 return rc;
+10 -1
arch/s390/kvm/kvm-s390.c
··· 66 66 STATS_DESC_COUNTER(VM, inject_pfault_done), 67 67 STATS_DESC_COUNTER(VM, inject_service_signal), 68 68 STATS_DESC_COUNTER(VM, inject_virtio), 69 - STATS_DESC_COUNTER(VM, aen_forward) 69 + STATS_DESC_COUNTER(VM, aen_forward), 70 + STATS_DESC_COUNTER(VM, gmap_shadow_reuse), 71 + STATS_DESC_COUNTER(VM, gmap_shadow_create), 72 + STATS_DESC_COUNTER(VM, gmap_shadow_r1_entry), 73 + STATS_DESC_COUNTER(VM, gmap_shadow_r2_entry), 74 + STATS_DESC_COUNTER(VM, gmap_shadow_r3_entry), 75 + STATS_DESC_COUNTER(VM, gmap_shadow_sg_entry), 76 + STATS_DESC_COUNTER(VM, gmap_shadow_pg_entry), 70 77 }; 71 78 72 79 const struct kvm_stats_header kvm_vm_stats_header = { ··· 4059 4052 struct kvm_vcpu *vcpu; 4060 4053 unsigned long prefix; 4061 4054 unsigned long i; 4055 + 4056 + trace_kvm_s390_gmap_notifier(start, end, gmap_is_shadow(gmap)); 4062 4057 4063 4058 if (gmap_is_shadow(gmap)) 4064 4059 return;
+23
arch/s390/kvm/trace-s390.h
··· 333 333 __entry->id, __entry->isc) 334 334 ); 335 335 336 + /* 337 + * Trace point for gmap notifier calls. 338 + */ 339 + TRACE_EVENT(kvm_s390_gmap_notifier, 340 + TP_PROTO(unsigned long start, unsigned long end, unsigned int shadow), 341 + TP_ARGS(start, end, shadow), 342 + 343 + TP_STRUCT__entry( 344 + __field(unsigned long, start) 345 + __field(unsigned long, end) 346 + __field(unsigned int, shadow) 347 + ), 348 + 349 + TP_fast_assign( 350 + __entry->start = start; 351 + __entry->end = end; 352 + __entry->shadow = shadow; 353 + ), 354 + 355 + TP_printk("gmap notified (start:0x%lx end:0x%lx shadow:%d)", 356 + __entry->start, __entry->end, __entry->shadow) 357 + ); 358 + 336 359 337 360 #endif /* _TRACE_KVMS390_H */ 338 361
+4 -1
arch/s390/kvm/vsie.c
··· 1214 1214 * we're holding has been unshadowed. If the gmap is still valid, 1215 1215 * we can safely reuse it. 1216 1216 */ 1217 - if (vsie_page->gmap && gmap_shadow_valid(vsie_page->gmap, asce, edat)) 1217 + if (vsie_page->gmap && gmap_shadow_valid(vsie_page->gmap, asce, edat)) { 1218 + vcpu->kvm->stat.gmap_shadow_reuse++; 1218 1219 return 0; 1220 + } 1219 1221 1220 1222 /* release the old shadow - if any, and mark the prefix as unmapped */ 1221 1223 release_gmap_shadow(vsie_page); ··· 1225 1223 if (IS_ERR(gmap)) 1226 1224 return PTR_ERR(gmap); 1227 1225 gmap->private = vcpu->kvm; 1226 + vcpu->kvm->stat.gmap_shadow_create++; 1228 1227 WRITE_ONCE(vsie_page->gmap, gmap); 1229 1228 return 0; 1230 1229 }
+1
arch/x86/include/asm/cpufeatures.h
··· 443 443 444 444 /* AMD-defined Extended Feature 2 EAX, CPUID level 0x80000021 (EAX), word 20 */ 445 445 #define X86_FEATURE_NO_NESTED_DATA_BP (20*32+ 0) /* "" No Nested Data Breakpoints */ 446 + #define X86_FEATURE_WRMSR_XX_BASE_NS (20*32+ 1) /* "" WRMSR to {FS,GS,KERNEL_GS}_BASE is non-serializing */ 446 447 #define X86_FEATURE_LFENCE_RDTSC (20*32+ 2) /* "" LFENCE always serializing / synchronizes RDTSC */ 447 448 #define X86_FEATURE_NULL_SEL_CLR_BASE (20*32+ 6) /* "" Null Selector Clears Base */ 448 449 #define X86_FEATURE_AUTOIBRS (20*32+ 8) /* "" Automatic IBRS */
+2 -1
arch/x86/include/asm/kvm-x86-ops.h
··· 108 108 KVM_X86_OP_OPTIONAL(vcpu_unblocking) 109 109 KVM_X86_OP_OPTIONAL(pi_update_irte) 110 110 KVM_X86_OP_OPTIONAL(pi_start_assignment) 111 + KVM_X86_OP_OPTIONAL(apicv_pre_state_restore) 111 112 KVM_X86_OP_OPTIONAL(apicv_post_state_restore) 112 113 KVM_X86_OP_OPTIONAL_RET0(dy_apicv_has_pending_interrupt) 113 114 KVM_X86_OP_OPTIONAL(set_hv_timer) ··· 127 126 KVM_X86_OP_OPTIONAL(vm_move_enc_context_from) 128 127 KVM_X86_OP_OPTIONAL(guest_memory_reclaimed) 129 128 KVM_X86_OP(get_msr_feature) 130 - KVM_X86_OP(can_emulate_instruction) 129 + KVM_X86_OP(check_emulate_instruction) 131 130 KVM_X86_OP(apic_init_signal_blocked) 132 131 KVM_X86_OP_OPTIONAL(enable_l2_tlb_flush) 133 132 KVM_X86_OP_OPTIONAL(migrate_timers)
+17 -5
arch/x86/include/asm/kvm_host.h
··· 39 39 40 40 #define __KVM_HAVE_ARCH_VCPU_DEBUGFS 41 41 42 + /* 43 + * CONFIG_KVM_MAX_NR_VCPUS is defined iff CONFIG_KVM!=n, provide a dummy max if 44 + * KVM is disabled (arbitrarily use the default from CONFIG_KVM_MAX_NR_VCPUS). 45 + */ 46 + #ifdef CONFIG_KVM_MAX_NR_VCPUS 47 + #define KVM_MAX_VCPUS CONFIG_KVM_MAX_NR_VCPUS 48 + #else 42 49 #define KVM_MAX_VCPUS 1024 50 + #endif 43 51 44 52 /* 45 53 * In x86, the VCPU ID corresponds to the APIC ID, and APIC IDs ··· 687 679 u32 limit; 688 680 }; 689 681 682 + #ifdef CONFIG_KVM_XEN 690 683 /* Xen HVM per vcpu emulation context */ 691 684 struct kvm_vcpu_xen { 692 685 u64 hypercall_rip; ··· 710 701 struct timer_list poll_timer; 711 702 struct kvm_hypervisor_cpuid cpuid; 712 703 }; 704 + #endif 713 705 714 706 struct kvm_queued_exception { 715 707 bool pending; ··· 939 929 940 930 bool hyperv_enabled; 941 931 struct kvm_vcpu_hv *hyperv; 932 + #ifdef CONFIG_KVM_XEN 942 933 struct kvm_vcpu_xen xen; 943 - 934 + #endif 944 935 cpumask_var_t wbinvd_dirty_mask; 945 936 946 937 unsigned long last_retry_eip; ··· 1286 1275 */ 1287 1276 spinlock_t mmu_unsync_pages_lock; 1288 1277 1289 - struct list_head assigned_dev_head; 1290 1278 struct iommu_domain *iommu_domain; 1291 1279 bool iommu_noncoherent; 1292 1280 #define __KVM_HAVE_ARCH_NONCOHERENT_DMA ··· 1333 1323 int nr_vcpus_matched_tsc; 1334 1324 1335 1325 u32 default_tsc_khz; 1326 + bool user_set_tsc; 1336 1327 1337 1328 seqcount_raw_spinlock_t pvclock_sc; 1338 1329 bool use_master_clock; ··· 1702 1691 1703 1692 void (*request_immediate_exit)(struct kvm_vcpu *vcpu); 1704 1693 1705 - void (*sched_in)(struct kvm_vcpu *kvm, int cpu); 1694 + void (*sched_in)(struct kvm_vcpu *vcpu, int cpu); 1706 1695 1707 1696 /* 1708 1697 * Size of the CPU's dirty log buffer, i.e. VMX's PML buffer. A zero ··· 1719 1708 int (*pi_update_irte)(struct kvm *kvm, unsigned int host_irq, 1720 1709 uint32_t guest_irq, bool set); 1721 1710 void (*pi_start_assignment)(struct kvm *kvm); 1711 + void (*apicv_pre_state_restore)(struct kvm_vcpu *vcpu); 1722 1712 void (*apicv_post_state_restore)(struct kvm_vcpu *vcpu); 1723 1713 bool (*dy_apicv_has_pending_interrupt)(struct kvm_vcpu *vcpu); 1724 1714 ··· 1745 1733 1746 1734 int (*get_msr_feature)(struct kvm_msr_entry *entry); 1747 1735 1748 - bool (*can_emulate_instruction)(struct kvm_vcpu *vcpu, int emul_type, 1749 - void *insn, int insn_len); 1736 + int (*check_emulate_instruction)(struct kvm_vcpu *vcpu, int emul_type, 1737 + void *insn, int insn_len); 1750 1738 1751 1739 bool (*apic_init_signal_blocked)(struct kvm_vcpu *vcpu); 1752 1740 int (*enable_l2_tlb_flush)(struct kvm_vcpu *vcpu);
+1
arch/x86/include/asm/msr-index.h
··· 554 554 #define MSR_AMD64_CPUID_FN_1 0xc0011004 555 555 #define MSR_AMD64_LS_CFG 0xc0011020 556 556 #define MSR_AMD64_DC_CFG 0xc0011022 557 + #define MSR_AMD64_TW_CFG 0xc0011023 557 558 558 559 #define MSR_AMD64_DE_CFG 0xc0011029 559 560 #define MSR_AMD64_DE_CFG_LFENCE_SERIALIZE_BIT 1
+11
arch/x86/kvm/Kconfig
··· 154 154 config KVM_EXTERNAL_WRITE_TRACKING 155 155 bool 156 156 157 + config KVM_MAX_NR_VCPUS 158 + int "Maximum number of vCPUs per KVM guest" 159 + depends on KVM 160 + range 1024 4096 161 + default 4096 if MAXSMP 162 + default 1024 163 + help 164 + Set the maximum number of vCPUs per KVM guest. Larger values will increase 165 + the memory footprint of each KVM guest, regardless of how many vCPUs are 166 + created for a given VM. 167 + 157 168 endif # VIRTUALIZATION
+7 -3
arch/x86/kvm/cpuid.c
··· 448 448 vcpu->arch.cpuid_nent = nent; 449 449 450 450 vcpu->arch.kvm_cpuid = kvm_get_hypervisor_cpuid(vcpu, KVM_SIGNATURE); 451 + #ifdef CONFIG_KVM_XEN 451 452 vcpu->arch.xen.cpuid = kvm_get_hypervisor_cpuid(vcpu, XEN_SIGNATURE); 453 + #endif 452 454 kvm_vcpu_after_set_cpuid(vcpu); 453 455 454 456 return 0; ··· 755 753 756 754 kvm_cpu_cap_mask(CPUID_8000_0021_EAX, 757 755 F(NO_NESTED_DATA_BP) | F(LFENCE_RDTSC) | 0 /* SmmPgCfgLock */ | 758 - F(NULL_SEL_CLR_BASE) | F(AUTOIBRS) | 0 /* PrefetchCtlMsr */ 756 + F(NULL_SEL_CLR_BASE) | F(AUTOIBRS) | 0 /* PrefetchCtlMsr */ | 757 + F(WRMSR_XX_BASE_NS) 759 758 ); 760 759 761 - if (cpu_feature_enabled(X86_FEATURE_SRSO_NO)) 762 - kvm_cpu_cap_set(X86_FEATURE_SRSO_NO); 760 + kvm_cpu_cap_check_and_set(X86_FEATURE_SBPB); 761 + kvm_cpu_cap_check_and_set(X86_FEATURE_IBPB_BRTYPE); 762 + kvm_cpu_cap_check_and_set(X86_FEATURE_SRSO_NO); 763 763 764 764 kvm_cpu_cap_init_kvm_defined(CPUID_8000_0022_EAX, 765 765 F(PERFMON_V2)
+2 -1
arch/x86/kvm/cpuid.h
··· 174 174 static inline bool guest_has_pred_cmd_msr(struct kvm_vcpu *vcpu) 175 175 { 176 176 return (guest_cpuid_has(vcpu, X86_FEATURE_SPEC_CTRL) || 177 - guest_cpuid_has(vcpu, X86_FEATURE_AMD_IBPB)); 177 + guest_cpuid_has(vcpu, X86_FEATURE_AMD_IBPB) || 178 + guest_cpuid_has(vcpu, X86_FEATURE_SBPB)); 178 179 } 179 180 180 181 static inline bool supports_cpuid_fault(struct kvm_vcpu *vcpu)
+6 -4
arch/x86/kvm/hyperv.c
··· 727 727 728 728 stimer_cleanup(stimer); 729 729 stimer->count = count; 730 - if (stimer->count == 0) 731 - stimer->config.enable = 0; 732 - else if (stimer->config.auto_enable) 733 - stimer->config.enable = 1; 730 + if (!host) { 731 + if (stimer->count == 0) 732 + stimer->config.enable = 0; 733 + else if (stimer->config.auto_enable) 734 + stimer->config.enable = 1; 735 + } 734 736 735 737 if (stimer->config.enable) 736 738 stimer_mark_pending(stimer, false);
+17 -13
arch/x86/kvm/lapic.c
··· 2444 2444 void kvm_apic_write_nodecode(struct kvm_vcpu *vcpu, u32 offset) 2445 2445 { 2446 2446 struct kvm_lapic *apic = vcpu->arch.apic; 2447 - u64 val; 2448 2447 2449 2448 /* 2450 - * ICR is a single 64-bit register when x2APIC is enabled. For legacy 2451 - * xAPIC, ICR writes need to go down the common (slightly slower) path 2452 - * to get the upper half from ICR2. 2449 + * ICR is a single 64-bit register when x2APIC is enabled, all others 2450 + * registers hold 32-bit values. For legacy xAPIC, ICR writes need to 2451 + * go down the common path to get the upper half from ICR2. 2452 + * 2453 + * Note, using the write helpers may incur an unnecessary write to the 2454 + * virtual APIC state, but KVM needs to conditionally modify the value 2455 + * in certain cases, e.g. to clear the ICR busy bit. The cost of extra 2456 + * conditional branches is likely a wash relative to the cost of the 2457 + * maybe-unecessary write, and both are in the noise anyways. 2453 2458 */ 2454 - if (apic_x2apic_mode(apic) && offset == APIC_ICR) { 2455 - val = kvm_lapic_get_reg64(apic, APIC_ICR); 2456 - kvm_apic_send_ipi(apic, (u32)val, (u32)(val >> 32)); 2457 - trace_kvm_apic_write(APIC_ICR, val); 2458 - } else { 2459 - /* TODO: optimize to just emulate side effect w/o one more write */ 2460 - val = kvm_lapic_get_reg(apic, offset); 2461 - kvm_lapic_reg_write(apic, offset, (u32)val); 2462 - } 2459 + if (apic_x2apic_mode(apic) && offset == APIC_ICR) 2460 + kvm_x2apic_icr_write(apic, kvm_lapic_get_reg64(apic, APIC_ICR)); 2461 + else 2462 + kvm_lapic_reg_write(apic, offset, kvm_lapic_get_reg(apic, offset)); 2463 2463 } 2464 2464 EXPORT_SYMBOL_GPL(kvm_apic_write_nodecode); 2465 2465 ··· 2669 2669 struct kvm_lapic *apic = vcpu->arch.apic; 2670 2670 u64 msr_val; 2671 2671 int i; 2672 + 2673 + static_call_cond(kvm_x86_apicv_pre_state_restore)(vcpu); 2672 2674 2673 2675 if (!init_event) { 2674 2676 msr_val = APIC_DEFAULT_PHYS_BASE | MSR_IA32_APICBASE_ENABLE; ··· 2982 2980 { 2983 2981 struct kvm_lapic *apic = vcpu->arch.apic; 2984 2982 int r; 2983 + 2984 + static_call_cond(kvm_x86_apicv_pre_state_restore)(vcpu); 2985 2985 2986 2986 kvm_lapic_set_base(vcpu, vcpu->arch.apic_base); 2987 2987 /* set SPIV separately to get count of SW disabled APICs right */
+7
arch/x86/kvm/mmu.h
··· 237 237 return -(u32)fault & errcode; 238 238 } 239 239 240 + bool __kvm_mmu_honors_guest_mtrrs(bool vm_has_noncoherent_dma); 241 + 242 + static inline bool kvm_mmu_honors_guest_mtrrs(struct kvm *kvm) 243 + { 244 + return __kvm_mmu_honors_guest_mtrrs(kvm_arch_has_noncoherent_dma(kvm)); 245 + } 246 + 240 247 void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end); 241 248 242 249 int kvm_arch_write_log_dirty(struct kvm_vcpu *vcpu);
+26 -11
arch/x86/kvm/mmu/mmu.c
··· 3425 3425 { 3426 3426 struct kvm_mmu_page *sp; 3427 3427 int ret = RET_PF_INVALID; 3428 - u64 spte = 0ull; 3429 - u64 *sptep = NULL; 3428 + u64 spte; 3429 + u64 *sptep; 3430 3430 uint retry_count = 0; 3431 3431 3432 3432 if (!page_fault_can_be_fast(fault)) ··· 3441 3441 sptep = kvm_tdp_mmu_fast_pf_get_last_sptep(vcpu, fault->addr, &spte); 3442 3442 else 3443 3443 sptep = fast_pf_get_last_sptep(vcpu, fault->addr, &spte); 3444 + 3445 + /* 3446 + * It's entirely possible for the mapping to have been zapped 3447 + * by a different task, but the root page should always be 3448 + * available as the vCPU holds a reference to its root(s). 3449 + */ 3450 + if (WARN_ON_ONCE(!sptep)) 3451 + spte = REMOVED_SPTE; 3444 3452 3445 3453 if (!is_shadow_present_pte(spte)) 3446 3454 break; ··· 4487 4479 } 4488 4480 #endif 4489 4481 4482 + bool __kvm_mmu_honors_guest_mtrrs(bool vm_has_noncoherent_dma) 4483 + { 4484 + /* 4485 + * If host MTRRs are ignored (shadow_memtype_mask is non-zero), and the 4486 + * VM has non-coherent DMA (DMA doesn't snoop CPU caches), KVM's ABI is 4487 + * to honor the memtype from the guest's MTRRs so that guest accesses 4488 + * to memory that is DMA'd aren't cached against the guest's wishes. 4489 + * 4490 + * Note, KVM may still ultimately ignore guest MTRRs for certain PFNs, 4491 + * e.g. KVM will force UC memtype for host MMIO. 4492 + */ 4493 + return vm_has_noncoherent_dma && shadow_memtype_mask; 4494 + } 4495 + 4490 4496 int kvm_tdp_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) 4491 4497 { 4492 4498 /* 4493 4499 * If the guest's MTRRs may be used to compute the "real" memtype, 4494 4500 * restrict the mapping level to ensure KVM uses a consistent memtype 4495 - * across the entire mapping. If the host MTRRs are ignored by TDP 4496 - * (shadow_memtype_mask is non-zero), and the VM has non-coherent DMA 4497 - * (DMA doesn't snoop CPU caches), KVM's ABI is to honor the memtype 4498 - * from the guest's MTRRs so that guest accesses to memory that is 4499 - * DMA'd aren't cached against the guest's wishes. 4500 - * 4501 - * Note, KVM may still ultimately ignore guest MTRRs for certain PFNs, 4502 - * e.g. KVM will force UC memtype for host MMIO. 4501 + * across the entire mapping. 4503 4502 */ 4504 - if (shadow_memtype_mask && kvm_arch_has_noncoherent_dma(vcpu->kvm)) { 4503 + if (kvm_mmu_honors_guest_mtrrs(vcpu->kvm)) { 4505 4504 for ( ; fault->max_level > PG_LEVEL_4K; --fault->max_level) { 4506 4505 int page_num = KVM_PAGES_PER_HPAGE(fault->max_level); 4507 4506 gfn_t base = gfn_round_for_level(fault->gfn,
+1 -1
arch/x86/kvm/mtrr.c
··· 320 320 struct kvm_mtrr *mtrr_state = &vcpu->arch.mtrr_state; 321 321 gfn_t start, end; 322 322 323 - if (!tdp_enabled || !kvm_arch_has_noncoherent_dma(vcpu->kvm)) 323 + if (!kvm_mmu_honors_guest_mtrrs(vcpu->kvm)) 324 324 return; 325 325 326 326 if (!mtrr_is_enabled(mtrr_state) && msr != MSR_MTRRdefType)
-1
arch/x86/kvm/smm.c
··· 324 324 325 325 cr0 = vcpu->arch.cr0 & ~(X86_CR0_PE | X86_CR0_EM | X86_CR0_TS | X86_CR0_PG); 326 326 static_call(kvm_x86_set_cr0)(vcpu, cr0); 327 - vcpu->arch.cr0 = cr0; 328 327 329 328 static_call(kvm_x86_set_cr4)(vcpu, 0); 330 329
+22 -30
arch/x86/kvm/svm/svm.c
··· 199 199 200 200 /* allow nested virtualization in KVM/SVM */ 201 201 static int nested = true; 202 - module_param(nested, int, S_IRUGO); 202 + module_param(nested, int, 0444); 203 203 204 204 /* enable/disable Next RIP Save */ 205 205 int nrips = true; ··· 364 364 svm->vmcb->control.int_state |= SVM_INTERRUPT_SHADOW_MASK; 365 365 366 366 } 367 - static bool svm_can_emulate_instruction(struct kvm_vcpu *vcpu, int emul_type, 368 - void *insn, int insn_len); 369 367 370 368 static int __svm_skip_emulated_instruction(struct kvm_vcpu *vcpu, 371 369 bool commit_side_effects) ··· 384 386 } 385 387 386 388 if (!svm->next_rip) { 387 - /* 388 - * FIXME: Drop this when kvm_emulate_instruction() does the 389 - * right thing and treats "can't emulate" as outright failure 390 - * for EMULTYPE_SKIP. 391 - */ 392 - if (!svm_can_emulate_instruction(vcpu, EMULTYPE_SKIP, NULL, 0)) 393 - return 0; 394 - 395 389 if (unlikely(!commit_side_effects)) 396 390 old_rflags = svm->vmcb->save.rflags; 397 391 ··· 2184 2194 struct kvm_run *kvm_run = vcpu->run; 2185 2195 struct vcpu_svm *svm = to_svm(vcpu); 2186 2196 2187 - /* 2188 - * The VM save area has already been encrypted so it 2189 - * cannot be reinitialized - just terminate. 2190 - */ 2191 - if (sev_es_guest(vcpu->kvm)) 2192 - return -EINVAL; 2193 2197 2194 2198 /* 2195 2199 * VMCB is undefined after a SHUTDOWN intercept. INIT the vCPU to put ··· 2192 2208 * userspace. At a platform view, INIT is acceptable behavior as 2193 2209 * there exist bare metal platforms that automatically INIT the CPU 2194 2210 * in response to shutdown. 2211 + * 2212 + * The VM save area for SEV-ES guests has already been encrypted so it 2213 + * cannot be reinitialized, i.e. synthesizing INIT is futile. 2195 2214 */ 2196 - clear_page(svm->vmcb); 2197 - kvm_vcpu_reset(vcpu, true); 2215 + if (!sev_es_guest(vcpu->kvm)) { 2216 + clear_page(svm->vmcb); 2217 + kvm_vcpu_reset(vcpu, true); 2218 + } 2198 2219 2199 2220 kvm_run->exit_reason = KVM_EXIT_SHUTDOWN; 2200 2221 return 0; ··· 4708 4719 } 4709 4720 #endif 4710 4721 4711 - static bool svm_can_emulate_instruction(struct kvm_vcpu *vcpu, int emul_type, 4712 - void *insn, int insn_len) 4722 + static int svm_check_emulate_instruction(struct kvm_vcpu *vcpu, int emul_type, 4723 + void *insn, int insn_len) 4713 4724 { 4714 4725 bool smep, smap, is_user; 4715 4726 u64 error_code; 4716 4727 4717 4728 /* Emulation is always possible when KVM has access to all guest state. */ 4718 4729 if (!sev_guest(vcpu->kvm)) 4719 - return true; 4730 + return X86EMUL_CONTINUE; 4720 4731 4721 4732 /* #UD and #GP should never be intercepted for SEV guests. */ 4722 4733 WARN_ON_ONCE(emul_type & (EMULTYPE_TRAP_UD | ··· 4728 4739 * to guest register state. 4729 4740 */ 4730 4741 if (sev_es_guest(vcpu->kvm)) 4731 - return false; 4742 + return X86EMUL_RETRY_INSTR; 4732 4743 4733 4744 /* 4734 4745 * Emulation is possible if the instruction is already decoded, e.g. 4735 4746 * when completing I/O after returning from userspace. 4736 4747 */ 4737 4748 if (emul_type & EMULTYPE_NO_DECODE) 4738 - return true; 4749 + return X86EMUL_CONTINUE; 4739 4750 4740 4751 /* 4741 4752 * Emulation is possible for SEV guests if and only if a prefilled ··· 4761 4772 * success (and in practice it will work the vast majority of the time). 
4762 4773 */ 4763 4774 if (unlikely(!insn)) { 4764 - if (!(emul_type & EMULTYPE_SKIP)) 4765 - kvm_queue_exception(vcpu, UD_VECTOR); 4766 - return false; 4775 + if (emul_type & EMULTYPE_SKIP) 4776 + return X86EMUL_UNHANDLEABLE; 4777 + 4778 + kvm_queue_exception(vcpu, UD_VECTOR); 4779 + return X86EMUL_PROPAGATE_FAULT; 4767 4780 } 4768 4781 4769 4782 /* ··· 4776 4785 * table used to translate CS:RIP resides in emulated MMIO. 4777 4786 */ 4778 4787 if (likely(insn_len)) 4779 - return true; 4788 + return X86EMUL_CONTINUE; 4780 4789 4781 4790 /* 4782 4791 * Detect and workaround Errata 1096 Fam_17h_00_0Fh. ··· 4834 4843 kvm_inject_gp(vcpu, 0); 4835 4844 else 4836 4845 kvm_make_request(KVM_REQ_TRIPLE_FAULT, vcpu); 4846 + return X86EMUL_PROPAGATE_FAULT; 4837 4847 } 4838 4848 4839 4849 resume_guest: ··· 4852 4860 * doesn't explicitly define "ignored", i.e. doing nothing and letting 4853 4861 * the guest spin is technically "ignoring" the access. 4854 4862 */ 4855 - return false; 4863 + return X86EMUL_RETRY_INSTR; 4856 4864 } 4857 4865 4858 4866 static bool svm_apic_init_signal_blocked(struct kvm_vcpu *vcpu) ··· 5012 5020 .vm_copy_enc_context_from = sev_vm_copy_enc_context_from, 5013 5021 .vm_move_enc_context_from = sev_vm_move_enc_context_from, 5014 5022 5015 - .can_emulate_instruction = svm_can_emulate_instruction, 5023 + .check_emulate_instruction = svm_check_emulate_instruction, 5016 5024 5017 5025 .apic_init_signal_blocked = svm_apic_init_signal_blocked, 5018 5026
+21 -24
arch/x86/kvm/vmx/vmx.c
··· 82 82 module_param_named(vpid, enable_vpid, bool, 0444); 83 83 84 84 static bool __read_mostly enable_vnmi = 1; 85 - module_param_named(vnmi, enable_vnmi, bool, S_IRUGO); 85 + module_param_named(vnmi, enable_vnmi, bool, 0444); 86 86 87 87 bool __read_mostly flexpriority_enabled = 1; 88 - module_param_named(flexpriority, flexpriority_enabled, bool, S_IRUGO); 88 + module_param_named(flexpriority, flexpriority_enabled, bool, 0444); 89 89 90 90 bool __read_mostly enable_ept = 1; 91 - module_param_named(ept, enable_ept, bool, S_IRUGO); 91 + module_param_named(ept, enable_ept, bool, 0444); 92 92 93 93 bool __read_mostly enable_unrestricted_guest = 1; 94 94 module_param_named(unrestricted_guest, 95 - enable_unrestricted_guest, bool, S_IRUGO); 95 + enable_unrestricted_guest, bool, 0444); 96 96 97 97 bool __read_mostly enable_ept_ad_bits = 1; 98 - module_param_named(eptad, enable_ept_ad_bits, bool, S_IRUGO); 98 + module_param_named(eptad, enable_ept_ad_bits, bool, 0444); 99 99 100 100 static bool __read_mostly emulate_invalid_guest_state = true; 101 - module_param(emulate_invalid_guest_state, bool, S_IRUGO); 101 + module_param(emulate_invalid_guest_state, bool, 0444); 102 102 103 103 static bool __read_mostly fasteoi = 1; 104 - module_param(fasteoi, bool, S_IRUGO); 104 + module_param(fasteoi, bool, 0444); 105 105 106 - module_param(enable_apicv, bool, S_IRUGO); 106 + module_param(enable_apicv, bool, 0444); 107 107 108 108 bool __read_mostly enable_ipiv = true; 109 109 module_param(enable_ipiv, bool, 0444); ··· 114 114 * use VMX instructions. 115 115 */ 116 116 static bool __read_mostly nested = 1; 117 - module_param(nested, bool, S_IRUGO); 117 + module_param(nested, bool, 0444); 118 118 119 119 bool __read_mostly enable_pml = 1; 120 - module_param_named(pml, enable_pml, bool, S_IRUGO); 120 + module_param_named(pml, enable_pml, bool, 0444); 121 121 122 122 static bool __read_mostly error_on_inconsistent_vmcs_config = true; 123 123 module_param(error_on_inconsistent_vmcs_config, bool, 0444); ··· 1657 1657 return 0; 1658 1658 } 1659 1659 1660 - static bool vmx_can_emulate_instruction(struct kvm_vcpu *vcpu, int emul_type, 1661 - void *insn, int insn_len) 1660 + static int vmx_check_emulate_instruction(struct kvm_vcpu *vcpu, int emul_type, 1661 + void *insn, int insn_len) 1662 1662 { 1663 1663 /* 1664 1664 * Emulation of instructions in SGX enclaves is impossible as RIP does ··· 1669 1669 */ 1670 1670 if (to_vmx(vcpu)->exit_reason.enclave_mode) { 1671 1671 kvm_queue_exception(vcpu, UD_VECTOR); 1672 - return false; 1672 + return X86EMUL_PROPAGATE_FAULT; 1673 1673 } 1674 - return true; 1674 + return X86EMUL_CONTINUE; 1675 1675 } 1676 1676 1677 1677 static int skip_emulated_instruction(struct kvm_vcpu *vcpu) ··· 5792 5792 { 5793 5793 gpa_t gpa; 5794 5794 5795 - if (!vmx_can_emulate_instruction(vcpu, EMULTYPE_PF, NULL, 0)) 5795 + if (vmx_check_emulate_instruction(vcpu, EMULTYPE_PF, NULL, 0)) 5796 5796 return 1; 5797 5797 5798 5798 /* ··· 6912 6912 vmcs_write64(EOI_EXIT_BITMAP3, eoi_exit_bitmap[3]); 6913 6913 } 6914 6914 6915 - static void vmx_apicv_post_state_restore(struct kvm_vcpu *vcpu) 6915 + static void vmx_apicv_pre_state_restore(struct kvm_vcpu *vcpu) 6916 6916 { 6917 6917 struct vcpu_vmx *vmx = to_vmx(vcpu); 6918 6918 ··· 7579 7579 7580 7580 static u8 vmx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio) 7581 7581 { 7582 - u8 cache; 7583 - 7584 7582 /* We wanted to honor guest CD/MTRR/PAT, but doing so could result in 7585 7583 * memory aliases with conflicting memory types and 
sometimes MCEs. 7586 7584 * We have to be careful as to what are honored and when. ··· 7605 7607 7606 7608 if (kvm_read_cr0_bits(vcpu, X86_CR0_CD)) { 7607 7609 if (kvm_check_has_quirk(vcpu->kvm, KVM_X86_QUIRK_CD_NW_CLEARED)) 7608 - cache = MTRR_TYPE_WRBACK; 7610 + return MTRR_TYPE_WRBACK << VMX_EPT_MT_EPTE_SHIFT; 7609 7611 else 7610 - cache = MTRR_TYPE_UNCACHABLE; 7611 - 7612 - return (cache << VMX_EPT_MT_EPTE_SHIFT) | VMX_EPT_IPAT_BIT; 7612 + return (MTRR_TYPE_UNCACHABLE << VMX_EPT_MT_EPTE_SHIFT) | 7613 + VMX_EPT_IPAT_BIT; 7613 7614 } 7614 7615 7615 7616 return kvm_mtrr_get_guest_memory_type(vcpu, gfn) << VMX_EPT_MT_EPTE_SHIFT; ··· 8283 8286 .set_apic_access_page_addr = vmx_set_apic_access_page_addr, 8284 8287 .refresh_apicv_exec_ctrl = vmx_refresh_apicv_exec_ctrl, 8285 8288 .load_eoi_exitmap = vmx_load_eoi_exitmap, 8286 - .apicv_post_state_restore = vmx_apicv_post_state_restore, 8289 + .apicv_pre_state_restore = vmx_apicv_pre_state_restore, 8287 8290 .required_apicv_inhibits = VMX_REQUIRED_APICV_INHIBITS, 8288 8291 .hwapic_irr_update = vmx_hwapic_irr_update, 8289 8292 .hwapic_isr_update = vmx_hwapic_isr_update, ··· 8338 8341 .enable_smi_window = vmx_enable_smi_window, 8339 8342 #endif 8340 8343 8341 - .can_emulate_instruction = vmx_can_emulate_instruction, 8344 + .check_emulate_instruction = vmx_check_emulate_instruction, 8342 8345 .apic_init_signal_blocked = vmx_apic_init_signal_blocked, 8343 8346 .migrate_timers = vmx_migrate_timers, 8344 8347
+192 -56
arch/x86/kvm/x86.c
··· 145 145 EXPORT_STATIC_CALL_GPL(kvm_x86_cache_reg); 146 146 147 147 static bool __read_mostly ignore_msrs = 0; 148 - module_param(ignore_msrs, bool, S_IRUGO | S_IWUSR); 148 + module_param(ignore_msrs, bool, 0644); 149 149 150 150 bool __read_mostly report_ignored_msrs = true; 151 - module_param(report_ignored_msrs, bool, S_IRUGO | S_IWUSR); 151 + module_param(report_ignored_msrs, bool, 0644); 152 152 EXPORT_SYMBOL_GPL(report_ignored_msrs); 153 153 154 154 unsigned int min_timer_period_us = 200; 155 - module_param(min_timer_period_us, uint, S_IRUGO | S_IWUSR); 155 + module_param(min_timer_period_us, uint, 0644); 156 156 157 157 static bool __read_mostly kvmclock_periodic_sync = true; 158 - module_param(kvmclock_periodic_sync, bool, S_IRUGO); 158 + module_param(kvmclock_periodic_sync, bool, 0444); 159 159 160 160 /* tsc tolerance in parts per million - default to 1/2 of the NTP threshold */ 161 161 static u32 __read_mostly tsc_tolerance_ppm = 250; 162 - module_param(tsc_tolerance_ppm, uint, S_IRUGO | S_IWUSR); 162 + module_param(tsc_tolerance_ppm, uint, 0644); 163 163 164 164 /* 165 165 * lapic timer advance (tscdeadline mode only) in nanoseconds. '-1' enables ··· 168 168 * tuning, i.e. allows privileged userspace to set an exact advancement time. 169 169 */ 170 170 static int __read_mostly lapic_timer_advance_ns = -1; 171 - module_param(lapic_timer_advance_ns, int, S_IRUGO | S_IWUSR); 171 + module_param(lapic_timer_advance_ns, int, 0644); 172 172 173 173 static bool __read_mostly vector_hashing = true; 174 - module_param(vector_hashing, bool, S_IRUGO); 174 + module_param(vector_hashing, bool, 0444); 175 175 176 176 bool __read_mostly enable_vmware_backdoor = false; 177 - module_param(enable_vmware_backdoor, bool, S_IRUGO); 177 + module_param(enable_vmware_backdoor, bool, 0444); 178 178 EXPORT_SYMBOL_GPL(enable_vmware_backdoor); 179 179 180 180 /* ··· 186 186 module_param(force_emulation_prefix, int, 0644); 187 187 188 188 int __read_mostly pi_inject_timer = -1; 189 - module_param(pi_inject_timer, bint, S_IRUGO | S_IWUSR); 189 + module_param(pi_inject_timer, bint, 0644); 190 190 191 191 /* Enable/disable PMU virtualization */ 192 192 bool __read_mostly enable_pmu = true; ··· 962 962 kvm_mmu_reset_context(vcpu); 963 963 964 964 if (((cr0 ^ old_cr0) & X86_CR0_CD) && 965 - kvm_arch_has_noncoherent_dma(vcpu->kvm) && 965 + kvm_mmu_honors_guest_mtrrs(vcpu->kvm) && 966 966 !kvm_check_has_quirk(vcpu->kvm, KVM_X86_QUIRK_CD_NW_CLEARED)) 967 967 kvm_zap_gfn_range(vcpu->kvm, 0, ~0ULL); 968 968 } ··· 2331 2331 if (kvm_write_guest(kvm, wall_clock, &version, sizeof(version))) 2332 2332 return; 2333 2333 2334 - /* 2335 - * The guest calculates current wall clock time by adding 2336 - * system time (updated by kvm_guest_time_update below) to the 2337 - * wall clock specified here. We do the reverse here. 2338 - */ 2339 - wall_nsec = ktime_get_real_ns() - get_kvmclock_ns(kvm); 2334 + wall_nsec = kvm_get_wall_clock_epoch(kvm); 2340 2335 2341 - wc.nsec = do_div(wall_nsec, 1000000000); 2336 + wc.nsec = do_div(wall_nsec, NSEC_PER_SEC); 2342 2337 wc.sec = (u32)wall_nsec; /* overflow in 2106 guest time */ 2343 2338 wc.version = version; 2344 2339 ··· 2709 2714 kvm_track_tsc_matching(vcpu); 2710 2715 } 2711 2716 2712 - static void kvm_synchronize_tsc(struct kvm_vcpu *vcpu, u64 data) 2717 + static void kvm_synchronize_tsc(struct kvm_vcpu *vcpu, u64 *user_value) 2713 2718 { 2719 + u64 data = user_value ? 
*user_value : 0; 2714 2720 struct kvm *kvm = vcpu->kvm; 2715 2721 u64 offset, ns, elapsed; 2716 2722 unsigned long flags; ··· 2726 2730 if (vcpu->arch.virtual_tsc_khz) { 2727 2731 if (data == 0) { 2728 2732 /* 2729 - * detection of vcpu initialization -- need to sync 2730 - * with other vCPUs. This particularly helps to keep 2731 - * kvm_clock stable after CPU hotplug 2733 + * Force synchronization when creating a vCPU, or when 2734 + * userspace explicitly writes a zero value. 2732 2735 */ 2733 2736 synchronizing = true; 2734 - } else { 2737 + } else if (kvm->arch.user_set_tsc) { 2735 2738 u64 tsc_exp = kvm->arch.last_tsc_write + 2736 2739 nsec_to_cycles(vcpu, elapsed); 2737 2740 u64 tsc_hz = vcpu->arch.virtual_tsc_khz * 1000LL; 2738 2741 /* 2739 - * Special case: TSC write with a small delta (1 second) 2740 - * of virtual cycle time against real time is 2741 - * interpreted as an attempt to synchronize the CPU. 2742 + * Here lies UAPI baggage: when a user-initiated TSC write has 2743 + * a small delta (1 second) of virtual cycle time against the 2744 + * previously set vCPU, we assume that they were intended to be 2745 + * in sync and the delta was only due to the racy nature of the 2746 + * legacy API. 2747 + * 2748 + * This trick falls down when restoring a guest which genuinely 2749 + * has been running for less time than the 1 second of imprecision 2750 + * which we allow for in the legacy API. In this case, the first 2751 + * value written by userspace (on any vCPU) should not be subject 2752 + * to this 'correction' to make it sync up with values that only 2753 + * come from the kernel's default vCPU creation. Make the 1-second 2754 + * slop hack only trigger if the user_set_tsc flag is already set. 2742 2755 */ 2743 2756 synchronizing = data < tsc_exp + tsc_hz && 2744 2757 data + tsc_hz > tsc_exp; 2745 2758 } 2746 2759 } 2760 + 2761 + if (user_value) 2762 + kvm->arch.user_set_tsc = true; 2747 2763 2748 2764 /* 2749 2765 * For a reliable TSC, we can match TSC offsets, and for an unstable ··· 3240 3232 3241 3233 if (vcpu->pv_time.active) 3242 3234 kvm_setup_guest_pvclock(v, &vcpu->pv_time, 0); 3235 + #ifdef CONFIG_KVM_XEN 3243 3236 if (vcpu->xen.vcpu_info_cache.active) 3244 3237 kvm_setup_guest_pvclock(v, &vcpu->xen.vcpu_info_cache, 3245 3238 offsetof(struct compat_vcpu_info, time)); 3246 3239 if (vcpu->xen.vcpu_time_info_cache.active) 3247 3240 kvm_setup_guest_pvclock(v, &vcpu->xen.vcpu_time_info_cache, 0); 3241 + #endif 3248 3242 kvm_hv_setup_tsc_page(v->kvm, &vcpu->hv_clock); 3249 3243 return 0; 3244 + } 3245 + 3246 + /* 3247 + * The pvclock_wall_clock ABI tells the guest the wall clock time at 3248 + * which it started (i.e. its epoch, when its kvmclock was zero). 3249 + * 3250 + * In fact those clocks are subtly different; wall clock frequency is 3251 + * adjusted by NTP and has leap seconds, while the kvmclock is a 3252 + * simple function of the TSC without any such adjustment. 3253 + * 3254 + * Perhaps the ABI should have exposed CLOCK_TAI and a ratio between 3255 + * that and kvmclock, but even that would be subject to change over 3256 + * time. 3257 + * 3258 + * Attempt to calculate the epoch at a given moment using the *same* 3259 + * TSC reading via kvm_get_walltime_and_clockread() to obtain both 3260 + * wallclock and kvmclock times, and subtracting one from the other. 3261 + * 3262 + * Fall back to using their values at slightly different moments by 3263 + * calling ktime_get_real_ns() and get_kvmclock_ns() separately. 
3264 + */ 3265 + uint64_t kvm_get_wall_clock_epoch(struct kvm *kvm) 3266 + { 3267 + #ifdef CONFIG_X86_64 3268 + struct pvclock_vcpu_time_info hv_clock; 3269 + struct kvm_arch *ka = &kvm->arch; 3270 + unsigned long seq, local_tsc_khz; 3271 + struct timespec64 ts; 3272 + uint64_t host_tsc; 3273 + 3274 + do { 3275 + seq = read_seqcount_begin(&ka->pvclock_sc); 3276 + 3277 + local_tsc_khz = 0; 3278 + if (!ka->use_master_clock) 3279 + break; 3280 + 3281 + /* 3282 + * The TSC read and the call to get_cpu_tsc_khz() must happen 3283 + * on the same CPU. 3284 + */ 3285 + get_cpu(); 3286 + 3287 + local_tsc_khz = get_cpu_tsc_khz(); 3288 + 3289 + if (local_tsc_khz && 3290 + !kvm_get_walltime_and_clockread(&ts, &host_tsc)) 3291 + local_tsc_khz = 0; /* Fall back to old method */ 3292 + 3293 + put_cpu(); 3294 + 3295 + /* 3296 + * These values must be snapshotted within the seqcount loop. 3297 + * After that, it's just mathematics which can happen on any 3298 + * CPU at any time. 3299 + */ 3300 + hv_clock.tsc_timestamp = ka->master_cycle_now; 3301 + hv_clock.system_time = ka->master_kernel_ns + ka->kvmclock_offset; 3302 + 3303 + } while (read_seqcount_retry(&ka->pvclock_sc, seq)); 3304 + 3305 + /* 3306 + * If the conditions were right, and obtaining the wallclock+TSC was 3307 + * successful, calculate the KVM clock at the corresponding time and 3308 + * subtract one from the other to get the guest's epoch in nanoseconds 3309 + * since 1970-01-01. 3310 + */ 3311 + if (local_tsc_khz) { 3312 + kvm_get_time_scale(NSEC_PER_SEC, local_tsc_khz * NSEC_PER_USEC, 3313 + &hv_clock.tsc_shift, 3314 + &hv_clock.tsc_to_system_mul); 3315 + return ts.tv_nsec + NSEC_PER_SEC * ts.tv_sec - 3316 + __pvclock_read_cycles(&hv_clock, host_tsc); 3317 + } 3318 + #endif 3319 + return ktime_get_real_ns() - get_kvmclock_ns(kvm); 3250 3320 } 3251 3321 3252 3322 /* ··· 3375 3289 struct kvm_arch *ka = container_of(dwork, struct kvm_arch, 3376 3290 kvmclock_sync_work); 3377 3291 struct kvm *kvm = container_of(ka, struct kvm, arch); 3378 - 3379 - if (!kvmclock_periodic_sync) 3380 - return; 3381 3292 3382 3293 schedule_delayed_work(&kvm->arch.kvmclock_update_work, 0); 3383 3294 schedule_delayed_work(&kvm->arch.kvmclock_sync_work, ··· 3724 3641 case MSR_AMD64_PATCH_LOADER: 3725 3642 case MSR_AMD64_BU_CFG2: 3726 3643 case MSR_AMD64_DC_CFG: 3644 + case MSR_AMD64_TW_CFG: 3727 3645 case MSR_F15H_EX_CFG: 3728 3646 break; 3729 3647 ··· 3754 3670 vcpu->arch.perf_capabilities = data; 3755 3671 kvm_pmu_refresh(vcpu); 3756 3672 break; 3757 - case MSR_IA32_PRED_CMD: 3758 - if (!msr_info->host_initiated && !guest_has_pred_cmd_msr(vcpu)) 3673 + case MSR_IA32_PRED_CMD: { 3674 + u64 reserved_bits = ~(PRED_CMD_IBPB | PRED_CMD_SBPB); 3675 + 3676 + if (!msr_info->host_initiated) { 3677 + if ((!guest_has_pred_cmd_msr(vcpu))) 3678 + return 1; 3679 + 3680 + if (!guest_cpuid_has(vcpu, X86_FEATURE_SPEC_CTRL) && 3681 + !guest_cpuid_has(vcpu, X86_FEATURE_AMD_IBPB)) 3682 + reserved_bits |= PRED_CMD_IBPB; 3683 + 3684 + if (!guest_cpuid_has(vcpu, X86_FEATURE_SBPB)) 3685 + reserved_bits |= PRED_CMD_SBPB; 3686 + } 3687 + 3688 + if (!boot_cpu_has(X86_FEATURE_IBPB)) 3689 + reserved_bits |= PRED_CMD_IBPB; 3690 + 3691 + if (!boot_cpu_has(X86_FEATURE_SBPB)) 3692 + reserved_bits |= PRED_CMD_SBPB; 3693 + 3694 + if (data & reserved_bits) 3759 3695 return 1; 3760 3696 3761 - if (!boot_cpu_has(X86_FEATURE_IBPB) || (data & ~PRED_CMD_IBPB)) 3762 - return 1; 3763 3697 if (!data) 3764 3698 break; 3765 3699 3766 - wrmsrl(MSR_IA32_PRED_CMD, PRED_CMD_IBPB); 3700 + 
wrmsrl(MSR_IA32_PRED_CMD, data); 3767 3701 break; 3702 + } 3768 3703 case MSR_IA32_FLUSH_CMD: 3769 3704 if (!msr_info->host_initiated && 3770 3705 !guest_cpuid_has(vcpu, X86_FEATURE_FLUSH_L1D)) ··· 3803 3700 data &= ~(u64)0x100; /* ignore ignne emulation enable */ 3804 3701 data &= ~(u64)0x8; /* ignore TLB cache disable */ 3805 3702 3806 - /* Handle McStatusWrEn */ 3807 - if (data == BIT_ULL(18)) { 3808 - vcpu->arch.msr_hwcr = data; 3809 - } else if (data != 0) { 3703 + /* 3704 + * Allow McStatusWrEn and TscFreqSel. (Linux guests from v3.2 3705 + * through at least v6.6 whine if TscFreqSel is clear, 3706 + * depending on F/M/S. 3707 + */ 3708 + if (data & ~(BIT_ULL(18) | BIT_ULL(24))) { 3810 3709 kvm_pr_unimpl_wrmsr(vcpu, msr, data); 3811 3710 return 1; 3812 3711 } 3712 + vcpu->arch.msr_hwcr = data; 3813 3713 break; 3814 3714 case MSR_FAM10H_MMIO_CONF_BASE: 3815 3715 if (data != 0) { ··· 3883 3777 break; 3884 3778 case MSR_IA32_TSC: 3885 3779 if (msr_info->host_initiated) { 3886 - kvm_synchronize_tsc(vcpu, data); 3780 + kvm_synchronize_tsc(vcpu, &data); 3887 3781 } else { 3888 3782 u64 adj = kvm_compute_l1_tsc_offset(vcpu, data) - vcpu->arch.l1_tsc_offset; 3889 3783 adjust_tsc_offset_guest(vcpu, adj); ··· 4171 4065 case MSR_AMD64_BU_CFG2: 4172 4066 case MSR_IA32_PERF_CTL: 4173 4067 case MSR_AMD64_DC_CFG: 4068 + case MSR_AMD64_TW_CFG: 4174 4069 case MSR_F15H_EX_CFG: 4175 4070 /* 4176 4071 * Intel Sandy Bridge CPUs must support the RAPL (running average power ··· 5654 5547 tsc = kvm_scale_tsc(rdtsc(), vcpu->arch.l1_tsc_scaling_ratio) + offset; 5655 5548 ns = get_kvmclock_base_ns(); 5656 5549 5550 + kvm->arch.user_set_tsc = true; 5657 5551 __kvm_synchronize_tsc(vcpu, offset, tsc, ns, matched); 5658 5552 raw_spin_unlock_irqrestore(&kvm->arch.tsc_write_lock, flags); 5659 5553 ··· 6366 6258 */ 6367 6259 struct kvm_vcpu *vcpu; 6368 6260 unsigned long i; 6261 + 6262 + if (!kvm_x86_ops.cpu_dirty_log_size) 6263 + return; 6369 6264 6370 6265 kvm_for_each_vcpu(i, vcpu, kvm) 6371 6266 kvm_vcpu_kick(vcpu); ··· 7596 7485 } 7597 7486 EXPORT_SYMBOL_GPL(kvm_write_guest_virt_system); 7598 7487 7599 - static int kvm_can_emulate_insn(struct kvm_vcpu *vcpu, int emul_type, 7600 - void *insn, int insn_len) 7488 + static int kvm_check_emulate_insn(struct kvm_vcpu *vcpu, int emul_type, 7489 + void *insn, int insn_len) 7601 7490 { 7602 - return static_call(kvm_x86_can_emulate_instruction)(vcpu, emul_type, 7603 - insn, insn_len); 7491 + return static_call(kvm_x86_check_emulate_instruction)(vcpu, emul_type, 7492 + insn, insn_len); 7604 7493 } 7605 7494 7606 7495 int handle_ud(struct kvm_vcpu *vcpu) ··· 7610 7499 int emul_type = EMULTYPE_TRAP_UD; 7611 7500 char sig[5]; /* ud2; .ascii "kvm" */ 7612 7501 struct x86_exception e; 7502 + int r; 7613 7503 7614 - if (unlikely(!kvm_can_emulate_insn(vcpu, emul_type, NULL, 0))) 7504 + r = kvm_check_emulate_insn(vcpu, emul_type, NULL, 0); 7505 + if (r != X86EMUL_CONTINUE) 7615 7506 return 1; 7616 7507 7617 7508 if (fep_flags && ··· 8995 8882 struct x86_emulate_ctxt *ctxt = vcpu->arch.emulate_ctxt; 8996 8883 bool writeback = true; 8997 8884 8998 - if (unlikely(!kvm_can_emulate_insn(vcpu, emulation_type, insn, insn_len))) 8999 - return 1; 8885 + r = kvm_check_emulate_insn(vcpu, emulation_type, insn, insn_len); 8886 + if (r != X86EMUL_CONTINUE) { 8887 + if (r == X86EMUL_RETRY_INSTR || r == X86EMUL_PROPAGATE_FAULT) 8888 + return 1; 8889 + 8890 + WARN_ON_ONCE(r != X86EMUL_UNHANDLEABLE); 8891 + return handle_emulation_failure(vcpu, emulation_type); 8892 + } 9000 8893 9001 8894 
vcpu->arch.l1tf_flush_l1d = true; 9002 8895 ··· 10706 10587 } 10707 10588 if (kvm_check_request(KVM_REQ_STEAL_UPDATE, vcpu)) 10708 10589 record_steal_time(vcpu); 10590 + if (kvm_check_request(KVM_REQ_PMU, vcpu)) 10591 + kvm_pmu_handle_event(vcpu); 10592 + if (kvm_check_request(KVM_REQ_PMI, vcpu)) 10593 + kvm_pmu_deliver_pmi(vcpu); 10709 10594 #ifdef CONFIG_KVM_SMM 10710 10595 if (kvm_check_request(KVM_REQ_SMI, vcpu)) 10711 10596 process_smi(vcpu); 10712 10597 #endif 10713 10598 if (kvm_check_request(KVM_REQ_NMI, vcpu)) 10714 10599 process_nmi(vcpu); 10715 - if (kvm_check_request(KVM_REQ_PMU, vcpu)) 10716 - kvm_pmu_handle_event(vcpu); 10717 - if (kvm_check_request(KVM_REQ_PMI, vcpu)) 10718 - kvm_pmu_deliver_pmi(vcpu); 10719 10600 if (kvm_check_request(KVM_REQ_IOAPIC_EOI_EXIT, vcpu)) { 10720 10601 BUG_ON(vcpu->arch.pending_ioapic_eoi > 255); 10721 10602 if (test_bit(vcpu->arch.pending_ioapic_eoi, ··· 11651 11532 11652 11533 *mmu_reset_needed |= kvm_read_cr0(vcpu) != sregs->cr0; 11653 11534 static_call(kvm_x86_set_cr0)(vcpu, sregs->cr0); 11654 - vcpu->arch.cr0 = sregs->cr0; 11655 11535 11656 11536 *mmu_reset_needed |= kvm_read_cr4(vcpu) != sregs->cr4; 11657 11537 static_call(kvm_x86_set_cr4)(vcpu, sregs->cr4); ··· 11694 11576 if (ret) 11695 11577 return ret; 11696 11578 11697 - if (mmu_reset_needed) 11579 + if (mmu_reset_needed) { 11698 11580 kvm_mmu_reset_context(vcpu); 11581 + kvm_make_request(KVM_REQ_TLB_FLUSH_GUEST, vcpu); 11582 + } 11699 11583 11700 11584 max_bits = KVM_NR_INTERRUPTS; 11701 11585 pending_vec = find_first_bit( ··· 11738 11618 mmu_reset_needed = 1; 11739 11619 vcpu->arch.pdptrs_from_userspace = true; 11740 11620 } 11741 - if (mmu_reset_needed) 11621 + if (mmu_reset_needed) { 11742 11622 kvm_mmu_reset_context(vcpu); 11623 + kvm_make_request(KVM_REQ_TLB_FLUSH_GUEST, vcpu); 11624 + } 11743 11625 return 0; 11744 11626 } 11745 11627 ··· 12092 11970 if (mutex_lock_killable(&vcpu->mutex)) 12093 11971 return; 12094 11972 vcpu_load(vcpu); 12095 - kvm_synchronize_tsc(vcpu, 0); 11973 + kvm_synchronize_tsc(vcpu, NULL); 12096 11974 vcpu_put(vcpu); 12097 11975 12098 11976 /* poll control enabled by default */ ··· 12448 12326 goto out_uninit_mmu; 12449 12327 12450 12328 INIT_HLIST_HEAD(&kvm->arch.mask_notifier_list); 12451 - INIT_LIST_HEAD(&kvm->arch.assigned_dev_head); 12452 12329 atomic_set(&kvm->arch.noncoherent_dma_count, 0); 12453 12330 12454 12331 /* Reserve bit 0 of irq_sources_bitmap for userspace irq source */ ··· 13323 13202 } 13324 13203 EXPORT_SYMBOL_GPL(kvm_arch_has_assigned_device); 13325 13204 13205 + static void kvm_noncoherent_dma_assignment_start_or_stop(struct kvm *kvm) 13206 + { 13207 + /* 13208 + * Non-coherent DMA assignment and de-assignment will affect 13209 + * whether KVM honors guest MTRRs and cause changes in memtypes 13210 + * in TDP. 13211 + * So, pass %true unconditionally to indicate non-coherent DMA was, 13212 + * or will be involved, and that zapping SPTEs might be necessary. 
13213 + */ 13214 + if (__kvm_mmu_honors_guest_mtrrs(true)) 13215 + kvm_zap_gfn_range(kvm, gpa_to_gfn(0), gpa_to_gfn(~0ULL)); 13216 + } 13217 + 13326 13218 void kvm_arch_register_noncoherent_dma(struct kvm *kvm) 13327 13219 { 13328 - atomic_inc(&kvm->arch.noncoherent_dma_count); 13220 + if (atomic_inc_return(&kvm->arch.noncoherent_dma_count) == 1) 13221 + kvm_noncoherent_dma_assignment_start_or_stop(kvm); 13329 13222 } 13330 13223 EXPORT_SYMBOL_GPL(kvm_arch_register_noncoherent_dma); 13331 13224 13332 13225 void kvm_arch_unregister_noncoherent_dma(struct kvm *kvm) 13333 13226 { 13334 - atomic_dec(&kvm->arch.noncoherent_dma_count); 13227 + if (!atomic_dec_return(&kvm->arch.noncoherent_dma_count)) 13228 + kvm_noncoherent_dma_assignment_start_or_stop(kvm); 13335 13229 } 13336 13230 EXPORT_SYMBOL_GPL(kvm_arch_unregister_noncoherent_dma); 13337 13231
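The kvm_get_wall_clock_epoch() change earlier in this x86.c diff computes the guest's epoch from one coherent snapshot: the host wall clock and the KVM clock are both derived from the same TSC read, and the epoch is simply their difference in nanoseconds since 1970-01-01. A minimal user-space sketch of that arithmetic, using an illustrative struct and made-up values rather than the kernel's types:

/*
 * Illustrative sketch (not kernel code): the guest's wall-clock epoch is
 * "host wall clock" minus "KVM clock", with both values derived from the
 * same TSC sample so preemption between reads cannot skew the result.
 */
#include <stdint.h>
#include <stdio.h>

#define NSEC_PER_SEC 1000000000ULL

struct clock_sample {
	uint64_t wallclock_ns;	/* host CLOCK_REALTIME at the sample point */
	uint64_t kvmclock_ns;	/* guest kvmclock computed from the same TSC */
};

/* Hypothetical helper: epoch in ns since 1970-01-01 as the guest sees it. */
static uint64_t wall_clock_epoch(const struct clock_sample *s)
{
	return s->wallclock_ns - s->kvmclock_ns;
}

int main(void)
{
	/* Made-up numbers, purely to show the subtraction. */
	struct clock_sample s = {
		.wallclock_ns = 1700000000ULL * NSEC_PER_SEC,
		.kvmclock_ns  = 42ULL * NSEC_PER_SEC,
	};

	printf("epoch: %llu ns\n", (unsigned long long)wall_clock_epoch(&s));
	return 0;
}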
+1
arch/x86/kvm/x86.h
··· 293 293 void kvm_inject_realmode_interrupt(struct kvm_vcpu *vcpu, int irq, int inc_eip); 294 294 295 295 u64 get_kvmclock_ns(struct kvm *kvm); 296 + uint64_t kvm_get_wall_clock_epoch(struct kvm *kvm); 296 297 297 298 int kvm_read_guest_virt(struct kvm_vcpu *vcpu, 298 299 gva_t addr, void *val, unsigned int bytes,
+52 -7
arch/x86/kvm/xen.c
··· 59 59 * This code mirrors kvm_write_wall_clock() except that it writes 60 60 * directly through the pfn cache and doesn't mark the page dirty. 61 61 */ 62 - wall_nsec = ktime_get_real_ns() - get_kvmclock_ns(kvm); 62 + wall_nsec = kvm_get_wall_clock_epoch(kvm); 63 63 64 64 /* It could be invalid again already, so we need to check */ 65 65 read_lock_irq(&gpc->lock); ··· 98 98 wc_version = wc->version = (wc->version + 1) | 1; 99 99 smp_wmb(); 100 100 101 - wc->nsec = do_div(wall_nsec, 1000000000); 101 + wc->nsec = do_div(wall_nsec, NSEC_PER_SEC); 102 102 wc->sec = (u32)wall_nsec; 103 103 *wc_sec_hi = wall_nsec >> 32; 104 104 smp_wmb(); ··· 134 134 { 135 135 struct kvm_vcpu *vcpu = container_of(timer, struct kvm_vcpu, 136 136 arch.xen.timer); 137 + struct kvm_xen_evtchn e; 138 + int rc; 139 + 137 140 if (atomic_read(&vcpu->arch.xen.timer_pending)) 138 141 return HRTIMER_NORESTART; 142 + 143 + e.vcpu_id = vcpu->vcpu_id; 144 + e.vcpu_idx = vcpu->vcpu_idx; 145 + e.port = vcpu->arch.xen.timer_virq; 146 + e.priority = KVM_IRQ_ROUTING_XEN_EVTCHN_PRIO_2LEVEL; 147 + 148 + rc = kvm_xen_set_evtchn_fast(&e, vcpu->kvm); 149 + if (rc != -EWOULDBLOCK) { 150 + vcpu->arch.xen.timer_expires = 0; 151 + return HRTIMER_NORESTART; 152 + } 139 153 140 154 atomic_inc(&vcpu->arch.xen.timer_pending); 141 155 kvm_make_request(KVM_REQ_UNBLOCK, vcpu); ··· 160 146 161 147 static void kvm_xen_start_timer(struct kvm_vcpu *vcpu, u64 guest_abs, s64 delta_ns) 162 148 { 149 + /* 150 + * Avoid races with the old timer firing. Checking timer_expires 151 + * to avoid calling hrtimer_cancel() will only have false positives 152 + * so is fine. 153 + */ 154 + if (vcpu->arch.xen.timer_expires) 155 + hrtimer_cancel(&vcpu->arch.xen.timer); 156 + 163 157 atomic_set(&vcpu->arch.xen.timer_pending, 0); 164 158 vcpu->arch.xen.timer_expires = guest_abs; 165 159 ··· 1041 1019 break; 1042 1020 1043 1021 case KVM_XEN_VCPU_ATTR_TYPE_TIMER: 1022 + /* 1023 + * Ensure a consistent snapshot of state is captured, with a 1024 + * timer either being pending, or the event channel delivered 1025 + * to the corresponding bit in the shared_info. Not still 1026 + * lurking in the timer_pending flag for deferred delivery. 1027 + * Purely as an optimisation, if the timer_expires field is 1028 + * zero, that means the timer isn't active (or even in the 1029 + * timer_pending flag) and there is no need to cancel it. 1030 + */ 1031 + if (vcpu->arch.xen.timer_expires) { 1032 + hrtimer_cancel(&vcpu->arch.xen.timer); 1033 + kvm_xen_inject_timer_irqs(vcpu); 1034 + } 1035 + 1044 1036 data->u.timer.port = vcpu->arch.xen.timer_virq; 1045 1037 data->u.timer.priority = KVM_IRQ_ROUTING_XEN_EVTCHN_PRIO_2LEVEL; 1046 1038 data->u.timer.expires_ns = vcpu->arch.xen.timer_expires; 1039 + 1040 + /* 1041 + * The hrtimer may trigger and raise the IRQ immediately, 1042 + * while the returned state causes it to be set up and 1043 + * raised again on the destination system after migration. 1044 + * That's fine, as the guest won't even have had a chance 1045 + * to run and handle the interrupt. Asserting an already 1046 + * pending event channel is idempotent. 
1047 + */ 1048 + if (vcpu->arch.xen.timer_expires) 1049 + hrtimer_start_expires(&vcpu->arch.xen.timer, 1050 + HRTIMER_MODE_ABS_HARD); 1051 + 1047 1052 r = 0; 1048 1053 break; 1049 1054 ··· 1423 1374 return true; 1424 1375 } 1425 1376 1377 + /* A delta <= 0 results in an immediate callback, which is what we want */ 1426 1378 delta = oneshot.timeout_abs_ns - get_kvmclock_ns(vcpu->kvm); 1427 - if ((oneshot.flags & VCPU_SSHOTTMR_future) && delta < 0) { 1428 - *r = -ETIME; 1429 - return true; 1430 - } 1431 - 1432 1379 kvm_xen_start_timer(vcpu, oneshot.timeout_abs_ns, delta); 1433 1380 *r = 0; 1434 1381 return true;
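In the xen.c wall-clock hunk above, kvm_get_wall_clock_epoch() now feeds the same sec/nsec split as before: do_div() stores the quotient back into its argument and returns the remainder, and the seconds are spread across wc->sec and *wc_sec_hi. A stand-alone sketch of that split in plain C (the example epoch value is arbitrary):

#include <stdint.h>
#include <stdio.h>

#define NSEC_PER_SEC 1000000000ULL

int main(void)
{
	uint64_t wall_nsec = 1700000000123456789ULL;	/* example epoch in ns */
	uint32_t nsec, sec_lo, sec_hi;

	/* Equivalent of: nsec = do_div(wall_nsec, NSEC_PER_SEC); */
	nsec = (uint32_t)(wall_nsec % NSEC_PER_SEC);
	wall_nsec /= NSEC_PER_SEC;		/* now holds whole seconds */

	sec_lo = (uint32_t)wall_nsec;		/* wc->sec */
	sec_hi = (uint32_t)(wall_nsec >> 32);	/* *wc_sec_hi, for post-2106 dates */

	printf("sec_hi=%u sec=%u nsec=%u\n", sec_hi, sec_lo, nsec);
	return 0;
}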
+1 -1
include/kvm/arm_arch_timer.h
··· 96 96 97 97 int __init kvm_timer_hyp_init(bool has_gic); 98 98 int kvm_timer_enable(struct kvm_vcpu *vcpu); 99 - int kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu); 99 + void kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu); 100 100 void kvm_timer_vcpu_init(struct kvm_vcpu *vcpu); 101 101 void kvm_timer_sync_user(struct kvm_vcpu *vcpu); 102 102 bool kvm_timer_should_notify_user(struct kvm_vcpu *vcpu);
+26 -2
include/kvm/arm_pmu.h
··· 13 13 #define ARMV8_PMU_CYCLE_IDX (ARMV8_PMU_MAX_COUNTERS - 1) 14 14 15 15 #if IS_ENABLED(CONFIG_HW_PERF_EVENTS) && IS_ENABLED(CONFIG_KVM) 16 - 17 16 struct kvm_pmc { 18 17 u8 idx; /* index into the pmu->pmc array */ 19 18 struct perf_event *perf_event; ··· 62 63 void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val); 63 64 void kvm_pmu_set_counter_event_type(struct kvm_vcpu *vcpu, u64 data, 64 65 u64 select_idx); 66 + void kvm_vcpu_reload_pmu(struct kvm_vcpu *vcpu); 65 67 int kvm_arm_pmu_v3_set_attr(struct kvm_vcpu *vcpu, 66 68 struct kvm_device_attr *attr); 67 69 int kvm_arm_pmu_v3_get_attr(struct kvm_vcpu *vcpu, ··· 77 77 void kvm_vcpu_pmu_resync_el0(void); 78 78 79 79 #define kvm_vcpu_has_pmu(vcpu) \ 80 - (test_bit(KVM_ARM_VCPU_PMU_V3, (vcpu)->arch.features)) 80 + (vcpu_has_feature(vcpu, KVM_ARM_VCPU_PMU_V3)) 81 81 82 82 /* 83 83 * Updates the vcpu's view of the pmu events for this cpu. ··· 101 101 }) 102 102 103 103 u8 kvm_arm_pmu_get_pmuver_limit(void); 104 + u64 kvm_pmu_evtyper_mask(struct kvm *kvm); 105 + int kvm_arm_set_default_pmu(struct kvm *kvm); 106 + u8 kvm_arm_pmu_get_max_counters(struct kvm *kvm); 104 107 108 + u64 kvm_vcpu_read_pmcr(struct kvm_vcpu *vcpu); 105 109 #else 106 110 struct kvm_pmu { 107 111 }; ··· 172 168 static inline void kvm_pmu_update_vcpu_events(struct kvm_vcpu *vcpu) {} 173 169 static inline void kvm_vcpu_pmu_restore_guest(struct kvm_vcpu *vcpu) {} 174 170 static inline void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu) {} 171 + static inline void kvm_vcpu_reload_pmu(struct kvm_vcpu *vcpu) {} 175 172 static inline u8 kvm_arm_pmu_get_pmuver_limit(void) 176 173 { 177 174 return 0; 178 175 } 176 + static inline u64 kvm_pmu_evtyper_mask(struct kvm *kvm) 177 + { 178 + return 0; 179 + } 179 180 static inline void kvm_vcpu_pmu_resync_el0(void) {} 181 + 182 + static inline int kvm_arm_set_default_pmu(struct kvm *kvm) 183 + { 184 + return -ENODEV; 185 + } 186 + 187 + static inline u8 kvm_arm_pmu_get_max_counters(struct kvm *kvm) 188 + { 189 + return 0; 190 + } 191 + 192 + static inline u64 kvm_vcpu_read_pmcr(struct kvm_vcpu *vcpu) 193 + { 194 + return 0; 195 + } 180 196 181 197 #endif 182 198
+1 -1
include/kvm/arm_psci.h
··· 26 26 * revisions. It is thus safe to return the latest, unless 27 27 * userspace has instructed us otherwise. 28 28 */ 29 - if (test_bit(KVM_ARM_VCPU_PSCI_0_2, vcpu->arch.features)) { 29 + if (vcpu_has_feature(vcpu, KVM_ARM_VCPU_PSCI_0_2)) { 30 30 if (vcpu->kvm->arch.psci_version) 31 31 return vcpu->kvm->arch.psci_version; 32 32
+2 -2
include/kvm/arm_vgic.h
··· 375 375 int kvm_vgic_hyp_init(void); 376 376 void kvm_vgic_init_cpu_hardware(void); 377 377 378 - int kvm_vgic_inject_irq(struct kvm *kvm, int cpuid, unsigned int intid, 379 - bool level, void *owner); 378 + int kvm_vgic_inject_irq(struct kvm *kvm, struct kvm_vcpu *vcpu, 379 + unsigned int intid, bool level, void *owner); 380 380 int kvm_vgic_map_phys_irq(struct kvm_vcpu *vcpu, unsigned int host_irq, 381 381 u32 vintid, struct irq_ops *ops); 382 382 int kvm_vgic_unmap_phys_irq(struct kvm_vcpu *vcpu, unsigned int vintid);
+6 -3
include/linux/perf/arm_pmuv3.h
··· 234 234 /* 235 235 * Event filters for PMUv3 236 236 */ 237 - #define ARMV8_PMU_EXCLUDE_EL1 (1U << 31) 238 - #define ARMV8_PMU_EXCLUDE_EL0 (1U << 30) 239 - #define ARMV8_PMU_INCLUDE_EL2 (1U << 27) 237 + #define ARMV8_PMU_EXCLUDE_EL1 (1U << 31) 238 + #define ARMV8_PMU_EXCLUDE_EL0 (1U << 30) 239 + #define ARMV8_PMU_EXCLUDE_NS_EL1 (1U << 29) 240 + #define ARMV8_PMU_EXCLUDE_NS_EL0 (1U << 28) 241 + #define ARMV8_PMU_INCLUDE_EL2 (1U << 27) 242 + #define ARMV8_PMU_EXCLUDE_EL3 (1U << 26) 240 243 241 244 /* 242 245 * PMUSERENR: user enable reg
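The new ARMV8_PMU_EXCLUDE_NS_EL1/EL0 and ARMV8_PMU_EXCLUDE_EL3 bits above extend the existing PMUv3 event filter set. As a hedged illustration only (the event number and the particular filter choice are made up, not taken from the patch), a filter/event word could be composed like this:

#include <stdint.h>
#include <stdio.h>

/* Filter bits exactly as defined in the hunk above. */
#define ARMV8_PMU_EXCLUDE_EL1		(1U << 31)
#define ARMV8_PMU_EXCLUDE_EL0		(1U << 30)
#define ARMV8_PMU_EXCLUDE_NS_EL1	(1U << 29)
#define ARMV8_PMU_EXCLUDE_NS_EL0	(1U << 28)
#define ARMV8_PMU_INCLUDE_EL2		(1U << 27)
#define ARMV8_PMU_EXCLUDE_EL3		(1U << 26)

int main(void)
{
	/*
	 * Illustrative only: event 0x11 (CPU cycles) counted at EL0,
	 * filtered out at EL1 and EL3.
	 */
	uint32_t evtyper = 0x11 | ARMV8_PMU_EXCLUDE_EL1 | ARMV8_PMU_EXCLUDE_EL3;

	printf("PMEVTYPER filter/event word: 0x%08x\n", evtyper);
	return 0;
}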
+11
include/uapi/linux/kvm.h
··· 264 264 #define KVM_EXIT_RISCV_SBI 35 265 265 #define KVM_EXIT_RISCV_CSR 36 266 266 #define KVM_EXIT_NOTIFY 37 267 + #define KVM_EXIT_LOONGARCH_IOCSR 38 267 268 268 269 /* For KVM_EXIT_INTERNAL_ERROR */ 269 270 /* Emulate instruction failed. */ ··· 337 336 __u32 len; 338 337 __u8 is_write; 339 338 } mmio; 339 + /* KVM_EXIT_LOONGARCH_IOCSR */ 340 + struct { 341 + __u64 phys_addr; 342 + __u8 data[8]; 343 + __u32 len; 344 + __u8 is_write; 345 + } iocsr_io; 340 346 /* KVM_EXIT_HYPERCALL */ 341 347 struct { 342 348 __u64 nr; ··· 1200 1192 #define KVM_CAP_COUNTER_OFFSET 227 1201 1193 #define KVM_CAP_ARM_EAGER_SPLIT_CHUNK_SIZE 228 1202 1194 #define KVM_CAP_ARM_SUPPORTED_BLOCK_SIZES 229 1195 + #define KVM_CAP_ARM_SUPPORTED_REG_MASK_RANGES 230 1203 1196 1204 1197 #ifdef KVM_CAP_IRQ_ROUTING 1205 1198 ··· 1371 1362 #define KVM_REG_ARM64 0x6000000000000000ULL 1372 1363 #define KVM_REG_MIPS 0x7000000000000000ULL 1373 1364 #define KVM_REG_RISCV 0x8000000000000000ULL 1365 + #define KVM_REG_LOONGARCH 0x9000000000000000ULL 1374 1366 1375 1367 #define KVM_REG_SIZE_SHIFT 52 1376 1368 #define KVM_REG_SIZE_MASK 0x00f0000000000000ULL ··· 1572 1562 #define KVM_ARM_MTE_COPY_TAGS _IOR(KVMIO, 0xb4, struct kvm_arm_copy_mte_tags) 1573 1563 /* Available with KVM_CAP_COUNTER_OFFSET */ 1574 1564 #define KVM_ARM_SET_COUNTER_OFFSET _IOW(KVMIO, 0xb5, struct kvm_arm_counter_offset) 1565 + #define KVM_ARM_GET_REG_WRITABLE_MASKS _IOR(KVMIO, 0xb6, struct reg_mask_range) 1575 1566 1576 1567 /* ioctl for vm fd */ 1577 1568 #define KVM_CREATE_DEVICE _IOWR(KVMIO, 0xe0, struct kvm_create_device)
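The uapi additions above define exit reason 38 (KVM_EXIT_LOONGARCH_IOCSR) and the matching kvm_run::iocsr_io layout. A hedged sketch of how a VMM run loop might consume it, assuming a <linux/kvm.h> that already carries these definitions; handle_iocsr_read()/handle_iocsr_write() are hypothetical device-model hooks, not part of the patch:

#include <stdint.h>
#include <string.h>
#include <stdio.h>
#include <linux/kvm.h>

static uint64_t handle_iocsr_read(uint64_t addr, uint32_t len)
{
	(void)addr; (void)len;
	return 0;	/* device-model read would go here */
}

static void handle_iocsr_write(uint64_t addr, const void *data, uint32_t len)
{
	(void)addr; (void)data; (void)len;	/* device-model write would go here */
}

/* Fragment of a VMM's KVM_RUN exit dispatch loop. */
void handle_exit(struct kvm_run *run)
{
	switch (run->exit_reason) {
	case KVM_EXIT_LOONGARCH_IOCSR:
		if (run->iocsr_io.is_write) {
			handle_iocsr_write(run->iocsr_io.phys_addr,
					   run->iocsr_io.data,
					   run->iocsr_io.len);
		} else {
			uint64_t val = handle_iocsr_read(run->iocsr_io.phys_addr,
							 run->iocsr_io.len);
			memcpy(run->iocsr_io.data, &val, run->iocsr_io.len);
		}
		break;
	default:
		fprintf(stderr, "unhandled exit %u\n", run->exit_reason);
	}
}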
+1
tools/arch/arm64/include/.gitignore
··· 1 + generated/
+26
tools/arch/arm64/include/asm/gpr-num.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0-only */ 2 + #ifndef __ASM_GPR_NUM_H 3 + #define __ASM_GPR_NUM_H 4 + 5 + #ifdef __ASSEMBLY__ 6 + 7 + .irp num,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30 8 + .equ .L__gpr_num_x\num, \num 9 + .equ .L__gpr_num_w\num, \num 10 + .endr 11 + .equ .L__gpr_num_xzr, 31 12 + .equ .L__gpr_num_wzr, 31 13 + 14 + #else /* __ASSEMBLY__ */ 15 + 16 + #define __DEFINE_ASM_GPR_NUMS \ 17 + " .irp num,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30\n" \ 18 + " .equ .L__gpr_num_x\\num, \\num\n" \ 19 + " .equ .L__gpr_num_w\\num, \\num\n" \ 20 + " .endr\n" \ 21 + " .equ .L__gpr_num_xzr, 31\n" \ 22 + " .equ .L__gpr_num_wzr, 31\n" 23 + 24 + #endif /* __ASSEMBLY__ */ 25 + 26 + #endif /* __ASM_GPR_NUM_H */
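The .L__gpr_num_* symbols in this new header exist so that mrs_s/msr_s (reworked in the sysreg.h diff below) can OR a general-purpose register number directly into a hand-encoded MRS/MSR instruction. A small sketch of that encoding; the sys_reg() shift layout is restated here only for illustration and follows the usual arm64 convention:

#include <stdint.h>
#include <stdio.h>

/*
 * Hedged sketch: mrs_s builds an MRS encoding by OR-ing the system
 * register encoding and the GPR number into the 0xd5200000 template.
 * Shift layout assumed: Op0<<19 | Op1<<16 | CRn<<12 | CRm<<8 | Op2<<5.
 */
#define SYS_REG(op0, op1, crn, crm, op2) \
	(((op0) << 19) | ((op1) << 16) | ((crn) << 12) | ((crm) << 8) | ((op2) << 5))

int main(void)
{
	uint32_t sreg = SYS_REG(3, 0, 0, 0, 5);	/* MPIDR_EL1, for example */
	uint32_t rt   = 0;			/* x0, i.e. .L__gpr_num_x0 */
	uint32_t insn = 0xd5200000 | sreg | rt;	/* mrs x0, MPIDR_EL1 */

	printf("encoding: 0x%08x\n", insn);
	return 0;
}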
+193 -648
tools/arch/arm64/include/asm/sysreg.h
··· 12 12 #include <linux/bits.h> 13 13 #include <linux/stringify.h> 14 14 15 + #include <asm/gpr-num.h> 16 + 15 17 /* 16 18 * ARMv8 ARM reserves the following encoding for system registers: 17 19 * (Ref: ARMv8 ARM, Section: "System instruction class encoding overview", ··· 89 87 */ 90 88 #define pstate_field(op1, op2) ((op1) << Op1_shift | (op2) << Op2_shift) 91 89 #define PSTATE_Imm_shift CRm_shift 90 + #define SET_PSTATE(x, r) __emit_inst(0xd500401f | PSTATE_ ## r | ((!!x) << PSTATE_Imm_shift)) 92 91 93 92 #define PSTATE_PAN pstate_field(0, 4) 94 93 #define PSTATE_UAO pstate_field(0, 3) 95 94 #define PSTATE_SSBS pstate_field(3, 1) 95 + #define PSTATE_DIT pstate_field(3, 2) 96 96 #define PSTATE_TCO pstate_field(3, 4) 97 97 98 - #define SET_PSTATE_PAN(x) __emit_inst(0xd500401f | PSTATE_PAN | ((!!x) << PSTATE_Imm_shift)) 99 - #define SET_PSTATE_UAO(x) __emit_inst(0xd500401f | PSTATE_UAO | ((!!x) << PSTATE_Imm_shift)) 100 - #define SET_PSTATE_SSBS(x) __emit_inst(0xd500401f | PSTATE_SSBS | ((!!x) << PSTATE_Imm_shift)) 101 - #define SET_PSTATE_TCO(x) __emit_inst(0xd500401f | PSTATE_TCO | ((!!x) << PSTATE_Imm_shift)) 98 + #define SET_PSTATE_PAN(x) SET_PSTATE((x), PAN) 99 + #define SET_PSTATE_UAO(x) SET_PSTATE((x), UAO) 100 + #define SET_PSTATE_SSBS(x) SET_PSTATE((x), SSBS) 101 + #define SET_PSTATE_DIT(x) SET_PSTATE((x), DIT) 102 + #define SET_PSTATE_TCO(x) SET_PSTATE((x), TCO) 102 103 103 104 #define set_pstate_pan(x) asm volatile(SET_PSTATE_PAN(x)) 104 105 #define set_pstate_uao(x) asm volatile(SET_PSTATE_UAO(x)) 105 106 #define set_pstate_ssbs(x) asm volatile(SET_PSTATE_SSBS(x)) 107 + #define set_pstate_dit(x) asm volatile(SET_PSTATE_DIT(x)) 106 108 107 109 #define __SYS_BARRIER_INSN(CRm, op2, Rt) \ 108 110 __emit_inst(0xd5000000 | sys_insn(0, 3, 3, (CRm), (op2)) | ((Rt) & 0x1f)) ··· 114 108 #define SB_BARRIER_INSN __SYS_BARRIER_INSN(0, 7, 31) 115 109 116 110 #define SYS_DC_ISW sys_insn(1, 0, 7, 6, 2) 111 + #define SYS_DC_IGSW sys_insn(1, 0, 7, 6, 4) 112 + #define SYS_DC_IGDSW sys_insn(1, 0, 7, 6, 6) 117 113 #define SYS_DC_CSW sys_insn(1, 0, 7, 10, 2) 114 + #define SYS_DC_CGSW sys_insn(1, 0, 7, 10, 4) 115 + #define SYS_DC_CGDSW sys_insn(1, 0, 7, 10, 6) 118 116 #define SYS_DC_CISW sys_insn(1, 0, 7, 14, 2) 117 + #define SYS_DC_CIGSW sys_insn(1, 0, 7, 14, 4) 118 + #define SYS_DC_CIGDSW sys_insn(1, 0, 7, 14, 6) 119 + 120 + /* 121 + * Automatically generated definitions for system registers, the 122 + * manual encodings below are in the process of being converted to 123 + * come from here. The header relies on the definition of sys_reg() 124 + * earlier in this file. 125 + */ 126 + #include "asm/sysreg-defs.h" 119 127 120 128 /* 121 129 * System registers, organised loosely by encoding but grouped together 122 130 * where the architected name contains an index. e.g. ID_MMFR<n>_EL1. 
123 131 */ 124 - #define SYS_OSDTRRX_EL1 sys_reg(2, 0, 0, 0, 2) 125 - #define SYS_MDCCINT_EL1 sys_reg(2, 0, 0, 2, 0) 126 - #define SYS_MDSCR_EL1 sys_reg(2, 0, 0, 2, 2) 127 - #define SYS_OSDTRTX_EL1 sys_reg(2, 0, 0, 3, 2) 128 - #define SYS_OSECCR_EL1 sys_reg(2, 0, 0, 6, 2) 132 + #define SYS_SVCR_SMSTOP_SM_EL0 sys_reg(0, 3, 4, 2, 3) 133 + #define SYS_SVCR_SMSTART_SM_EL0 sys_reg(0, 3, 4, 3, 3) 134 + #define SYS_SVCR_SMSTOP_SMZA_EL0 sys_reg(0, 3, 4, 6, 3) 135 + 129 136 #define SYS_DBGBVRn_EL1(n) sys_reg(2, 0, 0, n, 4) 130 137 #define SYS_DBGBCRn_EL1(n) sys_reg(2, 0, 0, n, 5) 131 138 #define SYS_DBGWVRn_EL1(n) sys_reg(2, 0, 0, n, 6) 132 139 #define SYS_DBGWCRn_EL1(n) sys_reg(2, 0, 0, n, 7) 133 140 #define SYS_MDRAR_EL1 sys_reg(2, 0, 1, 0, 0) 134 - #define SYS_OSLAR_EL1 sys_reg(2, 0, 1, 0, 4) 141 + 135 142 #define SYS_OSLSR_EL1 sys_reg(2, 0, 1, 1, 4) 143 + #define OSLSR_EL1_OSLM_MASK (BIT(3) | BIT(0)) 144 + #define OSLSR_EL1_OSLM_NI 0 145 + #define OSLSR_EL1_OSLM_IMPLEMENTED BIT(3) 146 + #define OSLSR_EL1_OSLK BIT(1) 147 + 136 148 #define SYS_OSDLR_EL1 sys_reg(2, 0, 1, 3, 4) 137 149 #define SYS_DBGPRCR_EL1 sys_reg(2, 0, 1, 4, 4) 138 150 #define SYS_DBGCLAIMSET_EL1 sys_reg(2, 0, 7, 8, 6) ··· 166 142 #define SYS_MPIDR_EL1 sys_reg(3, 0, 0, 0, 5) 167 143 #define SYS_REVIDR_EL1 sys_reg(3, 0, 0, 0, 6) 168 144 169 - #define SYS_ID_PFR0_EL1 sys_reg(3, 0, 0, 1, 0) 170 - #define SYS_ID_PFR1_EL1 sys_reg(3, 0, 0, 1, 1) 171 - #define SYS_ID_PFR2_EL1 sys_reg(3, 0, 0, 3, 4) 172 - #define SYS_ID_DFR0_EL1 sys_reg(3, 0, 0, 1, 2) 173 - #define SYS_ID_DFR1_EL1 sys_reg(3, 0, 0, 3, 5) 174 - #define SYS_ID_AFR0_EL1 sys_reg(3, 0, 0, 1, 3) 175 - #define SYS_ID_MMFR0_EL1 sys_reg(3, 0, 0, 1, 4) 176 - #define SYS_ID_MMFR1_EL1 sys_reg(3, 0, 0, 1, 5) 177 - #define SYS_ID_MMFR2_EL1 sys_reg(3, 0, 0, 1, 6) 178 - #define SYS_ID_MMFR3_EL1 sys_reg(3, 0, 0, 1, 7) 179 - #define SYS_ID_MMFR4_EL1 sys_reg(3, 0, 0, 2, 6) 180 - #define SYS_ID_MMFR5_EL1 sys_reg(3, 0, 0, 3, 6) 181 - 182 - #define SYS_ID_ISAR0_EL1 sys_reg(3, 0, 0, 2, 0) 183 - #define SYS_ID_ISAR1_EL1 sys_reg(3, 0, 0, 2, 1) 184 - #define SYS_ID_ISAR2_EL1 sys_reg(3, 0, 0, 2, 2) 185 - #define SYS_ID_ISAR3_EL1 sys_reg(3, 0, 0, 2, 3) 186 - #define SYS_ID_ISAR4_EL1 sys_reg(3, 0, 0, 2, 4) 187 - #define SYS_ID_ISAR5_EL1 sys_reg(3, 0, 0, 2, 5) 188 - #define SYS_ID_ISAR6_EL1 sys_reg(3, 0, 0, 2, 7) 189 - 190 - #define SYS_MVFR0_EL1 sys_reg(3, 0, 0, 3, 0) 191 - #define SYS_MVFR1_EL1 sys_reg(3, 0, 0, 3, 1) 192 - #define SYS_MVFR2_EL1 sys_reg(3, 0, 0, 3, 2) 193 - 194 - #define SYS_ID_AA64PFR0_EL1 sys_reg(3, 0, 0, 4, 0) 195 - #define SYS_ID_AA64PFR1_EL1 sys_reg(3, 0, 0, 4, 1) 196 - #define SYS_ID_AA64ZFR0_EL1 sys_reg(3, 0, 0, 4, 4) 197 - 198 - #define SYS_ID_AA64DFR0_EL1 sys_reg(3, 0, 0, 5, 0) 199 - #define SYS_ID_AA64DFR1_EL1 sys_reg(3, 0, 0, 5, 1) 200 - 201 - #define SYS_ID_AA64AFR0_EL1 sys_reg(3, 0, 0, 5, 4) 202 - #define SYS_ID_AA64AFR1_EL1 sys_reg(3, 0, 0, 5, 5) 203 - 204 - #define SYS_ID_AA64ISAR0_EL1 sys_reg(3, 0, 0, 6, 0) 205 - #define SYS_ID_AA64ISAR1_EL1 sys_reg(3, 0, 0, 6, 1) 206 - 207 - #define SYS_ID_AA64MMFR0_EL1 sys_reg(3, 0, 0, 7, 0) 208 - #define SYS_ID_AA64MMFR1_EL1 sys_reg(3, 0, 0, 7, 1) 209 - #define SYS_ID_AA64MMFR2_EL1 sys_reg(3, 0, 0, 7, 2) 210 - 211 - #define SYS_SCTLR_EL1 sys_reg(3, 0, 1, 0, 0) 212 145 #define SYS_ACTLR_EL1 sys_reg(3, 0, 1, 0, 1) 213 - #define SYS_CPACR_EL1 sys_reg(3, 0, 1, 0, 2) 214 146 #define SYS_RGSR_EL1 sys_reg(3, 0, 1, 0, 5) 215 147 #define SYS_GCR_EL1 sys_reg(3, 0, 1, 0, 6) 216 148 217 - #define SYS_ZCR_EL1 sys_reg(3, 0, 1, 2, 0) 218 149 
#define SYS_TRFCR_EL1 sys_reg(3, 0, 1, 2, 1) 219 150 220 - #define SYS_TTBR0_EL1 sys_reg(3, 0, 2, 0, 0) 221 - #define SYS_TTBR1_EL1 sys_reg(3, 0, 2, 0, 1) 222 151 #define SYS_TCR_EL1 sys_reg(3, 0, 2, 0, 2) 223 152 224 153 #define SYS_APIAKEYLO_EL1 sys_reg(3, 0, 2, 1, 0) ··· 207 230 #define SYS_TFSR_EL1 sys_reg(3, 0, 5, 6, 0) 208 231 #define SYS_TFSRE0_EL1 sys_reg(3, 0, 5, 6, 1) 209 232 210 - #define SYS_FAR_EL1 sys_reg(3, 0, 6, 0, 0) 211 233 #define SYS_PAR_EL1 sys_reg(3, 0, 7, 4, 0) 212 234 213 235 #define SYS_PAR_EL1_F BIT(0) 214 236 #define SYS_PAR_EL1_FST GENMASK(6, 1) 215 237 216 238 /*** Statistical Profiling Extension ***/ 217 - /* ID registers */ 218 - #define SYS_PMSIDR_EL1 sys_reg(3, 0, 9, 9, 7) 219 - #define SYS_PMSIDR_EL1_FE_SHIFT 0 220 - #define SYS_PMSIDR_EL1_FT_SHIFT 1 221 - #define SYS_PMSIDR_EL1_FL_SHIFT 2 222 - #define SYS_PMSIDR_EL1_ARCHINST_SHIFT 3 223 - #define SYS_PMSIDR_EL1_LDS_SHIFT 4 224 - #define SYS_PMSIDR_EL1_ERND_SHIFT 5 225 - #define SYS_PMSIDR_EL1_INTERVAL_SHIFT 8 226 - #define SYS_PMSIDR_EL1_INTERVAL_MASK 0xfUL 227 - #define SYS_PMSIDR_EL1_MAXSIZE_SHIFT 12 228 - #define SYS_PMSIDR_EL1_MAXSIZE_MASK 0xfUL 229 - #define SYS_PMSIDR_EL1_COUNTSIZE_SHIFT 16 230 - #define SYS_PMSIDR_EL1_COUNTSIZE_MASK 0xfUL 231 - 232 - #define SYS_PMBIDR_EL1 sys_reg(3, 0, 9, 10, 7) 233 - #define SYS_PMBIDR_EL1_ALIGN_SHIFT 0 234 - #define SYS_PMBIDR_EL1_ALIGN_MASK 0xfU 235 - #define SYS_PMBIDR_EL1_P_SHIFT 4 236 - #define SYS_PMBIDR_EL1_F_SHIFT 5 237 - 238 - /* Sampling controls */ 239 - #define SYS_PMSCR_EL1 sys_reg(3, 0, 9, 9, 0) 240 - #define SYS_PMSCR_EL1_E0SPE_SHIFT 0 241 - #define SYS_PMSCR_EL1_E1SPE_SHIFT 1 242 - #define SYS_PMSCR_EL1_CX_SHIFT 3 243 - #define SYS_PMSCR_EL1_PA_SHIFT 4 244 - #define SYS_PMSCR_EL1_TS_SHIFT 5 245 - #define SYS_PMSCR_EL1_PCT_SHIFT 6 246 - 247 - #define SYS_PMSCR_EL2 sys_reg(3, 4, 9, 9, 0) 248 - #define SYS_PMSCR_EL2_E0HSPE_SHIFT 0 249 - #define SYS_PMSCR_EL2_E2SPE_SHIFT 1 250 - #define SYS_PMSCR_EL2_CX_SHIFT 3 251 - #define SYS_PMSCR_EL2_PA_SHIFT 4 252 - #define SYS_PMSCR_EL2_TS_SHIFT 5 253 - #define SYS_PMSCR_EL2_PCT_SHIFT 6 254 - 255 - #define SYS_PMSICR_EL1 sys_reg(3, 0, 9, 9, 2) 256 - 257 - #define SYS_PMSIRR_EL1 sys_reg(3, 0, 9, 9, 3) 258 - #define SYS_PMSIRR_EL1_RND_SHIFT 0 259 - #define SYS_PMSIRR_EL1_INTERVAL_SHIFT 8 260 - #define SYS_PMSIRR_EL1_INTERVAL_MASK 0xffffffUL 261 - 262 - /* Filtering controls */ 263 - #define SYS_PMSNEVFR_EL1 sys_reg(3, 0, 9, 9, 1) 264 - 265 - #define SYS_PMSFCR_EL1 sys_reg(3, 0, 9, 9, 4) 266 - #define SYS_PMSFCR_EL1_FE_SHIFT 0 267 - #define SYS_PMSFCR_EL1_FT_SHIFT 1 268 - #define SYS_PMSFCR_EL1_FL_SHIFT 2 269 - #define SYS_PMSFCR_EL1_B_SHIFT 16 270 - #define SYS_PMSFCR_EL1_LD_SHIFT 17 271 - #define SYS_PMSFCR_EL1_ST_SHIFT 18 272 - 273 - #define SYS_PMSEVFR_EL1 sys_reg(3, 0, 9, 9, 5) 274 - #define SYS_PMSEVFR_EL1_RES0_8_2 \ 239 + #define PMSEVFR_EL1_RES0_IMP \ 275 240 (GENMASK_ULL(47, 32) | GENMASK_ULL(23, 16) | GENMASK_ULL(11, 8) |\ 276 241 BIT_ULL(6) | BIT_ULL(4) | BIT_ULL(2) | BIT_ULL(0)) 277 - #define SYS_PMSEVFR_EL1_RES0_8_3 \ 278 - (SYS_PMSEVFR_EL1_RES0_8_2 & ~(BIT_ULL(18) | BIT_ULL(17) | BIT_ULL(11))) 279 - 280 - #define SYS_PMSLATFR_EL1 sys_reg(3, 0, 9, 9, 6) 281 - #define SYS_PMSLATFR_EL1_MINLAT_SHIFT 0 282 - 283 - /* Buffer controls */ 284 - #define SYS_PMBLIMITR_EL1 sys_reg(3, 0, 9, 10, 0) 285 - #define SYS_PMBLIMITR_EL1_E_SHIFT 0 286 - #define SYS_PMBLIMITR_EL1_FM_SHIFT 1 287 - #define SYS_PMBLIMITR_EL1_FM_MASK 0x3UL 288 - #define SYS_PMBLIMITR_EL1_FM_STOP_IRQ (0 << SYS_PMBLIMITR_EL1_FM_SHIFT) 289 - 
290 - #define SYS_PMBPTR_EL1 sys_reg(3, 0, 9, 10, 1) 242 + #define PMSEVFR_EL1_RES0_V1P1 \ 243 + (PMSEVFR_EL1_RES0_IMP & ~(BIT_ULL(18) | BIT_ULL(17) | BIT_ULL(11))) 244 + #define PMSEVFR_EL1_RES0_V1P2 \ 245 + (PMSEVFR_EL1_RES0_V1P1 & ~BIT_ULL(6)) 291 246 292 247 /* Buffer error reporting */ 293 - #define SYS_PMBSR_EL1 sys_reg(3, 0, 9, 10, 3) 294 - #define SYS_PMBSR_EL1_COLL_SHIFT 16 295 - #define SYS_PMBSR_EL1_S_SHIFT 17 296 - #define SYS_PMBSR_EL1_EA_SHIFT 18 297 - #define SYS_PMBSR_EL1_DL_SHIFT 19 298 - #define SYS_PMBSR_EL1_EC_SHIFT 26 299 - #define SYS_PMBSR_EL1_EC_MASK 0x3fUL 248 + #define PMBSR_EL1_FAULT_FSC_SHIFT PMBSR_EL1_MSS_SHIFT 249 + #define PMBSR_EL1_FAULT_FSC_MASK PMBSR_EL1_MSS_MASK 300 250 301 - #define SYS_PMBSR_EL1_EC_BUF (0x0UL << SYS_PMBSR_EL1_EC_SHIFT) 302 - #define SYS_PMBSR_EL1_EC_FAULT_S1 (0x24UL << SYS_PMBSR_EL1_EC_SHIFT) 303 - #define SYS_PMBSR_EL1_EC_FAULT_S2 (0x25UL << SYS_PMBSR_EL1_EC_SHIFT) 251 + #define PMBSR_EL1_BUF_BSC_SHIFT PMBSR_EL1_MSS_SHIFT 252 + #define PMBSR_EL1_BUF_BSC_MASK PMBSR_EL1_MSS_MASK 304 253 305 - #define SYS_PMBSR_EL1_FAULT_FSC_SHIFT 0 306 - #define SYS_PMBSR_EL1_FAULT_FSC_MASK 0x3fUL 307 - 308 - #define SYS_PMBSR_EL1_BUF_BSC_SHIFT 0 309 - #define SYS_PMBSR_EL1_BUF_BSC_MASK 0x3fUL 310 - 311 - #define SYS_PMBSR_EL1_BUF_BSC_FULL (0x1UL << SYS_PMBSR_EL1_BUF_BSC_SHIFT) 254 + #define PMBSR_EL1_BUF_BSC_FULL 0x1UL 312 255 313 256 /*** End of Statistical Profiling Extension ***/ 314 257 315 - /* 316 - * TRBE Registers 317 - */ 318 - #define SYS_TRBLIMITR_EL1 sys_reg(3, 0, 9, 11, 0) 319 - #define SYS_TRBPTR_EL1 sys_reg(3, 0, 9, 11, 1) 320 - #define SYS_TRBBASER_EL1 sys_reg(3, 0, 9, 11, 2) 321 - #define SYS_TRBSR_EL1 sys_reg(3, 0, 9, 11, 3) 322 - #define SYS_TRBMAR_EL1 sys_reg(3, 0, 9, 11, 4) 323 - #define SYS_TRBTRG_EL1 sys_reg(3, 0, 9, 11, 6) 324 - #define SYS_TRBIDR_EL1 sys_reg(3, 0, 9, 11, 7) 325 - 326 - #define TRBLIMITR_LIMIT_MASK GENMASK_ULL(51, 0) 327 - #define TRBLIMITR_LIMIT_SHIFT 12 328 - #define TRBLIMITR_NVM BIT(5) 329 - #define TRBLIMITR_TRIG_MODE_MASK GENMASK(1, 0) 330 - #define TRBLIMITR_TRIG_MODE_SHIFT 3 331 - #define TRBLIMITR_FILL_MODE_MASK GENMASK(1, 0) 332 - #define TRBLIMITR_FILL_MODE_SHIFT 1 333 - #define TRBLIMITR_ENABLE BIT(0) 334 - #define TRBPTR_PTR_MASK GENMASK_ULL(63, 0) 335 - #define TRBPTR_PTR_SHIFT 0 336 - #define TRBBASER_BASE_MASK GENMASK_ULL(51, 0) 337 - #define TRBBASER_BASE_SHIFT 12 338 - #define TRBSR_EC_MASK GENMASK(5, 0) 339 - #define TRBSR_EC_SHIFT 26 340 - #define TRBSR_IRQ BIT(22) 341 - #define TRBSR_TRG BIT(21) 342 - #define TRBSR_WRAP BIT(20) 343 - #define TRBSR_ABORT BIT(18) 344 - #define TRBSR_STOP BIT(17) 345 - #define TRBSR_MSS_MASK GENMASK(15, 0) 346 - #define TRBSR_MSS_SHIFT 0 347 - #define TRBSR_BSC_MASK GENMASK(5, 0) 348 - #define TRBSR_BSC_SHIFT 0 349 - #define TRBSR_FSC_MASK GENMASK(5, 0) 350 - #define TRBSR_FSC_SHIFT 0 351 - #define TRBMAR_SHARE_MASK GENMASK(1, 0) 352 - #define TRBMAR_SHARE_SHIFT 8 353 - #define TRBMAR_OUTER_MASK GENMASK(3, 0) 354 - #define TRBMAR_OUTER_SHIFT 4 355 - #define TRBMAR_INNER_MASK GENMASK(3, 0) 356 - #define TRBMAR_INNER_SHIFT 0 357 - #define TRBTRG_TRG_MASK GENMASK(31, 0) 358 - #define TRBTRG_TRG_SHIFT 0 359 - #define TRBIDR_FLAG BIT(5) 360 - #define TRBIDR_PROG BIT(4) 361 - #define TRBIDR_ALIGN_MASK GENMASK(3, 0) 362 - #define TRBIDR_ALIGN_SHIFT 0 258 + #define TRBSR_EL1_BSC_MASK GENMASK(5, 0) 259 + #define TRBSR_EL1_BSC_SHIFT 0 363 260 364 261 #define SYS_PMINTENSET_EL1 sys_reg(3, 0, 9, 14, 1) 365 262 #define SYS_PMINTENCLR_EL1 sys_reg(3, 0, 9, 14, 2) ··· 242 391 
243 392 #define SYS_MAIR_EL1 sys_reg(3, 0, 10, 2, 0) 244 393 #define SYS_AMAIR_EL1 sys_reg(3, 0, 10, 3, 0) 245 - 246 - #define SYS_LORSA_EL1 sys_reg(3, 0, 10, 4, 0) 247 - #define SYS_LOREA_EL1 sys_reg(3, 0, 10, 4, 1) 248 - #define SYS_LORN_EL1 sys_reg(3, 0, 10, 4, 2) 249 - #define SYS_LORC_EL1 sys_reg(3, 0, 10, 4, 3) 250 - #define SYS_LORID_EL1 sys_reg(3, 0, 10, 4, 7) 251 394 252 395 #define SYS_VBAR_EL1 sys_reg(3, 0, 12, 0, 0) 253 396 #define SYS_DISR_EL1 sys_reg(3, 0, 12, 1, 1) ··· 274 429 #define SYS_ICC_IGRPEN0_EL1 sys_reg(3, 0, 12, 12, 6) 275 430 #define SYS_ICC_IGRPEN1_EL1 sys_reg(3, 0, 12, 12, 7) 276 431 277 - #define SYS_CONTEXTIDR_EL1 sys_reg(3, 0, 13, 0, 1) 278 - #define SYS_TPIDR_EL1 sys_reg(3, 0, 13, 0, 4) 279 - 280 - #define SYS_SCXTNUM_EL1 sys_reg(3, 0, 13, 0, 7) 281 - 282 432 #define SYS_CNTKCTL_EL1 sys_reg(3, 0, 14, 1, 0) 283 433 284 - #define SYS_CCSIDR_EL1 sys_reg(3, 1, 0, 0, 0) 285 - #define SYS_CLIDR_EL1 sys_reg(3, 1, 0, 0, 1) 286 - #define SYS_GMID_EL1 sys_reg(3, 1, 0, 0, 4) 287 434 #define SYS_AIDR_EL1 sys_reg(3, 1, 0, 0, 7) 288 - 289 - #define SYS_CSSELR_EL1 sys_reg(3, 2, 0, 0, 0) 290 - 291 - #define SYS_CTR_EL0 sys_reg(3, 3, 0, 0, 1) 292 - #define SYS_DCZID_EL0 sys_reg(3, 3, 0, 0, 7) 293 435 294 436 #define SYS_RNDR_EL0 sys_reg(3, 3, 2, 4, 0) 295 437 #define SYS_RNDRRS_EL0 sys_reg(3, 3, 2, 4, 1) ··· 297 465 298 466 #define SYS_TPIDR_EL0 sys_reg(3, 3, 13, 0, 2) 299 467 #define SYS_TPIDRRO_EL0 sys_reg(3, 3, 13, 0, 3) 468 + #define SYS_TPIDR2_EL0 sys_reg(3, 3, 13, 0, 5) 300 469 301 470 #define SYS_SCXTNUM_EL0 sys_reg(3, 3, 13, 0, 7) 302 471 ··· 339 506 340 507 #define SYS_CNTFRQ_EL0 sys_reg(3, 3, 14, 0, 0) 341 508 509 + #define SYS_CNTPCT_EL0 sys_reg(3, 3, 14, 0, 1) 510 + #define SYS_CNTPCTSS_EL0 sys_reg(3, 3, 14, 0, 5) 511 + #define SYS_CNTVCTSS_EL0 sys_reg(3, 3, 14, 0, 6) 512 + 342 513 #define SYS_CNTP_TVAL_EL0 sys_reg(3, 3, 14, 2, 0) 343 514 #define SYS_CNTP_CTL_EL0 sys_reg(3, 3, 14, 2, 1) 344 515 #define SYS_CNTP_CVAL_EL0 sys_reg(3, 3, 14, 2, 2) ··· 352 515 353 516 #define SYS_AARCH32_CNTP_TVAL sys_reg(0, 0, 14, 2, 0) 354 517 #define SYS_AARCH32_CNTP_CTL sys_reg(0, 0, 14, 2, 1) 518 + #define SYS_AARCH32_CNTPCT sys_reg(0, 0, 0, 14, 0) 355 519 #define SYS_AARCH32_CNTP_CVAL sys_reg(0, 2, 0, 14, 0) 520 + #define SYS_AARCH32_CNTPCTSS sys_reg(0, 8, 0, 14, 0) 356 521 357 522 #define __PMEV_op2(n) ((n) & 0x7) 358 523 #define __CNTR_CRm(n) (0x8 | (((n) >> 3) & 0x3)) ··· 364 525 365 526 #define SYS_PMCCFILTR_EL0 sys_reg(3, 3, 14, 15, 7) 366 527 528 + #define SYS_VPIDR_EL2 sys_reg(3, 4, 0, 0, 0) 529 + #define SYS_VMPIDR_EL2 sys_reg(3, 4, 0, 0, 5) 530 + 367 531 #define SYS_SCTLR_EL2 sys_reg(3, 4, 1, 0, 0) 368 - #define SYS_HFGRTR_EL2 sys_reg(3, 4, 1, 1, 4) 369 - #define SYS_HFGWTR_EL2 sys_reg(3, 4, 1, 1, 5) 370 - #define SYS_HFGITR_EL2 sys_reg(3, 4, 1, 1, 6) 371 - #define SYS_ZCR_EL2 sys_reg(3, 4, 1, 2, 0) 532 + #define SYS_ACTLR_EL2 sys_reg(3, 4, 1, 0, 1) 533 + #define SYS_HCR_EL2 sys_reg(3, 4, 1, 1, 0) 534 + #define SYS_MDCR_EL2 sys_reg(3, 4, 1, 1, 1) 535 + #define SYS_CPTR_EL2 sys_reg(3, 4, 1, 1, 2) 536 + #define SYS_HSTR_EL2 sys_reg(3, 4, 1, 1, 3) 537 + #define SYS_HACR_EL2 sys_reg(3, 4, 1, 1, 7) 538 + 539 + #define SYS_TTBR0_EL2 sys_reg(3, 4, 2, 0, 0) 540 + #define SYS_TTBR1_EL2 sys_reg(3, 4, 2, 0, 1) 541 + #define SYS_TCR_EL2 sys_reg(3, 4, 2, 0, 2) 542 + #define SYS_VTTBR_EL2 sys_reg(3, 4, 2, 1, 0) 543 + #define SYS_VTCR_EL2 sys_reg(3, 4, 2, 1, 2) 544 + 372 545 #define SYS_TRFCR_EL2 sys_reg(3, 4, 1, 2, 1) 373 - #define SYS_DACR32_EL2 sys_reg(3, 4, 3, 0, 0) 374 546 
#define SYS_HDFGRTR_EL2 sys_reg(3, 4, 3, 1, 4) 375 547 #define SYS_HDFGWTR_EL2 sys_reg(3, 4, 3, 1, 5) 376 548 #define SYS_HAFGRTR_EL2 sys_reg(3, 4, 3, 1, 6) 377 549 #define SYS_SPSR_EL2 sys_reg(3, 4, 4, 0, 0) 378 550 #define SYS_ELR_EL2 sys_reg(3, 4, 4, 0, 1) 551 + #define SYS_SP_EL1 sys_reg(3, 4, 4, 1, 0) 379 552 #define SYS_IFSR32_EL2 sys_reg(3, 4, 5, 0, 1) 553 + #define SYS_AFSR0_EL2 sys_reg(3, 4, 5, 1, 0) 554 + #define SYS_AFSR1_EL2 sys_reg(3, 4, 5, 1, 1) 380 555 #define SYS_ESR_EL2 sys_reg(3, 4, 5, 2, 0) 381 556 #define SYS_VSESR_EL2 sys_reg(3, 4, 5, 2, 3) 382 557 #define SYS_FPEXC32_EL2 sys_reg(3, 4, 5, 3, 0) 383 558 #define SYS_TFSR_EL2 sys_reg(3, 4, 5, 6, 0) 384 - #define SYS_FAR_EL2 sys_reg(3, 4, 6, 0, 0) 385 559 386 - #define SYS_VDISR_EL2 sys_reg(3, 4, 12, 1, 1) 560 + #define SYS_FAR_EL2 sys_reg(3, 4, 6, 0, 0) 561 + #define SYS_HPFAR_EL2 sys_reg(3, 4, 6, 0, 4) 562 + 563 + #define SYS_MAIR_EL2 sys_reg(3, 4, 10, 2, 0) 564 + #define SYS_AMAIR_EL2 sys_reg(3, 4, 10, 3, 0) 565 + 566 + #define SYS_VBAR_EL2 sys_reg(3, 4, 12, 0, 0) 567 + #define SYS_RVBAR_EL2 sys_reg(3, 4, 12, 0, 1) 568 + #define SYS_RMR_EL2 sys_reg(3, 4, 12, 0, 2) 569 + #define SYS_VDISR_EL2 sys_reg(3, 4, 12, 1, 1) 387 570 #define __SYS__AP0Rx_EL2(x) sys_reg(3, 4, 12, 8, x) 388 571 #define SYS_ICH_AP0R0_EL2 __SYS__AP0Rx_EL2(0) 389 572 #define SYS_ICH_AP0R1_EL2 __SYS__AP0Rx_EL2(1) ··· 447 586 #define SYS_ICH_LR14_EL2 __SYS__LR8_EL2(6) 448 587 #define SYS_ICH_LR15_EL2 __SYS__LR8_EL2(7) 449 588 589 + #define SYS_CONTEXTIDR_EL2 sys_reg(3, 4, 13, 0, 1) 590 + #define SYS_TPIDR_EL2 sys_reg(3, 4, 13, 0, 2) 591 + 592 + #define SYS_CNTVOFF_EL2 sys_reg(3, 4, 14, 0, 3) 593 + #define SYS_CNTHCTL_EL2 sys_reg(3, 4, 14, 1, 0) 594 + 450 595 /* VHE encodings for architectural EL0/1 system registers */ 451 596 #define SYS_SCTLR_EL12 sys_reg(3, 5, 1, 0, 0) 452 - #define SYS_CPACR_EL12 sys_reg(3, 5, 1, 0, 2) 453 - #define SYS_ZCR_EL12 sys_reg(3, 5, 1, 2, 0) 454 597 #define SYS_TTBR0_EL12 sys_reg(3, 5, 2, 0, 0) 455 598 #define SYS_TTBR1_EL12 sys_reg(3, 5, 2, 0, 1) 456 599 #define SYS_TCR_EL12 sys_reg(3, 5, 2, 0, 2) ··· 464 599 #define SYS_AFSR1_EL12 sys_reg(3, 5, 5, 1, 1) 465 600 #define SYS_ESR_EL12 sys_reg(3, 5, 5, 2, 0) 466 601 #define SYS_TFSR_EL12 sys_reg(3, 5, 5, 6, 0) 467 - #define SYS_FAR_EL12 sys_reg(3, 5, 6, 0, 0) 468 602 #define SYS_MAIR_EL12 sys_reg(3, 5, 10, 2, 0) 469 603 #define SYS_AMAIR_EL12 sys_reg(3, 5, 10, 3, 0) 470 604 #define SYS_VBAR_EL12 sys_reg(3, 5, 12, 0, 0) 471 - #define SYS_CONTEXTIDR_EL12 sys_reg(3, 5, 13, 0, 1) 472 605 #define SYS_CNTKCTL_EL12 sys_reg(3, 5, 14, 1, 0) 473 606 #define SYS_CNTP_TVAL_EL02 sys_reg(3, 5, 14, 2, 0) 474 607 #define SYS_CNTP_CTL_EL02 sys_reg(3, 5, 14, 2, 1) ··· 475 612 #define SYS_CNTV_CTL_EL02 sys_reg(3, 5, 14, 3, 1) 476 613 #define SYS_CNTV_CVAL_EL02 sys_reg(3, 5, 14, 3, 2) 477 614 615 + #define SYS_SP_EL2 sys_reg(3, 6, 4, 1, 0) 616 + 478 617 /* Common SCTLR_ELx flags. 
*/ 618 + #define SCTLR_ELx_ENTP2 (BIT(60)) 479 619 #define SCTLR_ELx_DSSBS (BIT(44)) 480 620 #define SCTLR_ELx_ATA (BIT(43)) 481 621 482 - #define SCTLR_ELx_TCF_SHIFT 40 483 - #define SCTLR_ELx_TCF_NONE (UL(0x0) << SCTLR_ELx_TCF_SHIFT) 484 - #define SCTLR_ELx_TCF_SYNC (UL(0x1) << SCTLR_ELx_TCF_SHIFT) 485 - #define SCTLR_ELx_TCF_ASYNC (UL(0x2) << SCTLR_ELx_TCF_SHIFT) 486 - #define SCTLR_ELx_TCF_MASK (UL(0x3) << SCTLR_ELx_TCF_SHIFT) 487 - 622 + #define SCTLR_ELx_EE_SHIFT 25 488 623 #define SCTLR_ELx_ENIA_SHIFT 31 489 624 490 - #define SCTLR_ELx_ITFSB (BIT(37)) 491 - #define SCTLR_ELx_ENIA (BIT(SCTLR_ELx_ENIA_SHIFT)) 492 - #define SCTLR_ELx_ENIB (BIT(30)) 493 - #define SCTLR_ELx_ENDA (BIT(27)) 494 - #define SCTLR_ELx_EE (BIT(25)) 495 - #define SCTLR_ELx_IESB (BIT(21)) 496 - #define SCTLR_ELx_WXN (BIT(19)) 497 - #define SCTLR_ELx_ENDB (BIT(13)) 498 - #define SCTLR_ELx_I (BIT(12)) 499 - #define SCTLR_ELx_SA (BIT(3)) 500 - #define SCTLR_ELx_C (BIT(2)) 501 - #define SCTLR_ELx_A (BIT(1)) 502 - #define SCTLR_ELx_M (BIT(0)) 625 + #define SCTLR_ELx_ITFSB (BIT(37)) 626 + #define SCTLR_ELx_ENIA (BIT(SCTLR_ELx_ENIA_SHIFT)) 627 + #define SCTLR_ELx_ENIB (BIT(30)) 628 + #define SCTLR_ELx_LSMAOE (BIT(29)) 629 + #define SCTLR_ELx_nTLSMD (BIT(28)) 630 + #define SCTLR_ELx_ENDA (BIT(27)) 631 + #define SCTLR_ELx_EE (BIT(SCTLR_ELx_EE_SHIFT)) 632 + #define SCTLR_ELx_EIS (BIT(22)) 633 + #define SCTLR_ELx_IESB (BIT(21)) 634 + #define SCTLR_ELx_TSCXT (BIT(20)) 635 + #define SCTLR_ELx_WXN (BIT(19)) 636 + #define SCTLR_ELx_ENDB (BIT(13)) 637 + #define SCTLR_ELx_I (BIT(12)) 638 + #define SCTLR_ELx_EOS (BIT(11)) 639 + #define SCTLR_ELx_SA (BIT(3)) 640 + #define SCTLR_ELx_C (BIT(2)) 641 + #define SCTLR_ELx_A (BIT(1)) 642 + #define SCTLR_ELx_M (BIT(0)) 503 643 504 644 /* SCTLR_EL2 specific flags. */ 505 645 #define SCTLR_EL2_RES1 ((BIT(4)) | (BIT(5)) | (BIT(11)) | (BIT(16)) | \ 506 646 (BIT(18)) | (BIT(22)) | (BIT(23)) | (BIT(28)) | \ 507 647 (BIT(29))) 508 648 649 + #define SCTLR_EL2_BT (BIT(36)) 509 650 #ifdef CONFIG_CPU_BIG_ENDIAN 510 651 #define ENDIAN_SET_EL2 SCTLR_ELx_EE 511 652 #else ··· 525 658 (SCTLR_EL2_RES1 | ENDIAN_SET_EL2) 526 659 527 660 /* SCTLR_EL1 specific flags. 
*/ 528 - #define SCTLR_EL1_EPAN (BIT(57)) 529 - #define SCTLR_EL1_ATA0 (BIT(42)) 530 - 531 - #define SCTLR_EL1_TCF0_SHIFT 38 532 - #define SCTLR_EL1_TCF0_NONE (UL(0x0) << SCTLR_EL1_TCF0_SHIFT) 533 - #define SCTLR_EL1_TCF0_SYNC (UL(0x1) << SCTLR_EL1_TCF0_SHIFT) 534 - #define SCTLR_EL1_TCF0_ASYNC (UL(0x2) << SCTLR_EL1_TCF0_SHIFT) 535 - #define SCTLR_EL1_TCF0_MASK (UL(0x3) << SCTLR_EL1_TCF0_SHIFT) 536 - 537 - #define SCTLR_EL1_BT1 (BIT(36)) 538 - #define SCTLR_EL1_BT0 (BIT(35)) 539 - #define SCTLR_EL1_UCI (BIT(26)) 540 - #define SCTLR_EL1_E0E (BIT(24)) 541 - #define SCTLR_EL1_SPAN (BIT(23)) 542 - #define SCTLR_EL1_NTWE (BIT(18)) 543 - #define SCTLR_EL1_NTWI (BIT(16)) 544 - #define SCTLR_EL1_UCT (BIT(15)) 545 - #define SCTLR_EL1_DZE (BIT(14)) 546 - #define SCTLR_EL1_UMA (BIT(9)) 547 - #define SCTLR_EL1_SED (BIT(8)) 548 - #define SCTLR_EL1_ITD (BIT(7)) 549 - #define SCTLR_EL1_CP15BEN (BIT(5)) 550 - #define SCTLR_EL1_SA0 (BIT(4)) 551 - 552 - #define SCTLR_EL1_RES1 ((BIT(11)) | (BIT(20)) | (BIT(22)) | (BIT(28)) | \ 553 - (BIT(29))) 554 - 555 661 #ifdef CONFIG_CPU_BIG_ENDIAN 556 662 #define ENDIAN_SET_EL1 (SCTLR_EL1_E0E | SCTLR_ELx_EE) 557 663 #else ··· 532 692 #endif 533 693 534 694 #define INIT_SCTLR_EL1_MMU_OFF \ 535 - (ENDIAN_SET_EL1 | SCTLR_EL1_RES1) 695 + (ENDIAN_SET_EL1 | SCTLR_EL1_LSMAOE | SCTLR_EL1_nTLSMD | \ 696 + SCTLR_EL1_EIS | SCTLR_EL1_TSCXT | SCTLR_EL1_EOS) 536 697 537 698 #define INIT_SCTLR_EL1_MMU_ON \ 538 - (SCTLR_ELx_M | SCTLR_ELx_C | SCTLR_ELx_SA | SCTLR_EL1_SA0 | \ 539 - SCTLR_EL1_SED | SCTLR_ELx_I | SCTLR_EL1_DZE | SCTLR_EL1_UCT | \ 540 - SCTLR_EL1_NTWE | SCTLR_ELx_IESB | SCTLR_EL1_SPAN | SCTLR_ELx_ITFSB | \ 541 - SCTLR_ELx_ATA | SCTLR_EL1_ATA0 | ENDIAN_SET_EL1 | SCTLR_EL1_UCI | \ 542 - SCTLR_EL1_EPAN | SCTLR_EL1_RES1) 699 + (SCTLR_ELx_M | SCTLR_ELx_C | SCTLR_ELx_SA | \ 700 + SCTLR_EL1_SA0 | SCTLR_EL1_SED | SCTLR_ELx_I | \ 701 + SCTLR_EL1_DZE | SCTLR_EL1_UCT | SCTLR_EL1_nTWE | \ 702 + SCTLR_ELx_IESB | SCTLR_EL1_SPAN | SCTLR_ELx_ITFSB | \ 703 + ENDIAN_SET_EL1 | SCTLR_EL1_UCI | SCTLR_EL1_EPAN | \ 704 + SCTLR_EL1_LSMAOE | SCTLR_EL1_nTLSMD | SCTLR_EL1_EIS | \ 705 + SCTLR_EL1_TSCXT | SCTLR_EL1_EOS) 543 706 544 707 /* MAIR_ELx memory attributes (used by Linux) */ 545 708 #define MAIR_ATTR_DEVICE_nGnRnE UL(0x00) ··· 555 712 /* Position the attr at the correct index */ 556 713 #define MAIR_ATTRIDX(attr, idx) ((attr) << ((idx) * 8)) 557 714 558 - /* id_aa64isar0 */ 559 - #define ID_AA64ISAR0_RNDR_SHIFT 60 560 - #define ID_AA64ISAR0_TLB_SHIFT 56 561 - #define ID_AA64ISAR0_TS_SHIFT 52 562 - #define ID_AA64ISAR0_FHM_SHIFT 48 563 - #define ID_AA64ISAR0_DP_SHIFT 44 564 - #define ID_AA64ISAR0_SM4_SHIFT 40 565 - #define ID_AA64ISAR0_SM3_SHIFT 36 566 - #define ID_AA64ISAR0_SHA3_SHIFT 32 567 - #define ID_AA64ISAR0_RDM_SHIFT 28 568 - #define ID_AA64ISAR0_ATOMICS_SHIFT 20 569 - #define ID_AA64ISAR0_CRC32_SHIFT 16 570 - #define ID_AA64ISAR0_SHA2_SHIFT 12 571 - #define ID_AA64ISAR0_SHA1_SHIFT 8 572 - #define ID_AA64ISAR0_AES_SHIFT 4 573 - 574 - #define ID_AA64ISAR0_TLB_RANGE_NI 0x0 575 - #define ID_AA64ISAR0_TLB_RANGE 0x2 576 - 577 - /* id_aa64isar1 */ 578 - #define ID_AA64ISAR1_I8MM_SHIFT 52 579 - #define ID_AA64ISAR1_DGH_SHIFT 48 580 - #define ID_AA64ISAR1_BF16_SHIFT 44 581 - #define ID_AA64ISAR1_SPECRES_SHIFT 40 582 - #define ID_AA64ISAR1_SB_SHIFT 36 583 - #define ID_AA64ISAR1_FRINTTS_SHIFT 32 584 - #define ID_AA64ISAR1_GPI_SHIFT 28 585 - #define ID_AA64ISAR1_GPA_SHIFT 24 586 - #define ID_AA64ISAR1_LRCPC_SHIFT 20 587 - #define ID_AA64ISAR1_FCMA_SHIFT 16 588 - #define ID_AA64ISAR1_JSCVT_SHIFT 12 
589 - #define ID_AA64ISAR1_API_SHIFT 8 590 - #define ID_AA64ISAR1_APA_SHIFT 4 591 - #define ID_AA64ISAR1_DPB_SHIFT 0 592 - 593 - #define ID_AA64ISAR1_APA_NI 0x0 594 - #define ID_AA64ISAR1_APA_ARCHITECTED 0x1 595 - #define ID_AA64ISAR1_APA_ARCH_EPAC 0x2 596 - #define ID_AA64ISAR1_APA_ARCH_EPAC2 0x3 597 - #define ID_AA64ISAR1_APA_ARCH_EPAC2_FPAC 0x4 598 - #define ID_AA64ISAR1_APA_ARCH_EPAC2_FPAC_CMB 0x5 599 - #define ID_AA64ISAR1_API_NI 0x0 600 - #define ID_AA64ISAR1_API_IMP_DEF 0x1 601 - #define ID_AA64ISAR1_API_IMP_DEF_EPAC 0x2 602 - #define ID_AA64ISAR1_API_IMP_DEF_EPAC2 0x3 603 - #define ID_AA64ISAR1_API_IMP_DEF_EPAC2_FPAC 0x4 604 - #define ID_AA64ISAR1_API_IMP_DEF_EPAC2_FPAC_CMB 0x5 605 - #define ID_AA64ISAR1_GPA_NI 0x0 606 - #define ID_AA64ISAR1_GPA_ARCHITECTED 0x1 607 - #define ID_AA64ISAR1_GPI_NI 0x0 608 - #define ID_AA64ISAR1_GPI_IMP_DEF 0x1 609 - 610 715 /* id_aa64pfr0 */ 611 - #define ID_AA64PFR0_CSV3_SHIFT 60 612 - #define ID_AA64PFR0_CSV2_SHIFT 56 613 - #define ID_AA64PFR0_DIT_SHIFT 48 614 - #define ID_AA64PFR0_AMU_SHIFT 44 615 - #define ID_AA64PFR0_MPAM_SHIFT 40 616 - #define ID_AA64PFR0_SEL2_SHIFT 36 617 - #define ID_AA64PFR0_SVE_SHIFT 32 618 - #define ID_AA64PFR0_RAS_SHIFT 28 619 - #define ID_AA64PFR0_GIC_SHIFT 24 620 - #define ID_AA64PFR0_ASIMD_SHIFT 20 621 - #define ID_AA64PFR0_FP_SHIFT 16 622 - #define ID_AA64PFR0_EL3_SHIFT 12 623 - #define ID_AA64PFR0_EL2_SHIFT 8 624 - #define ID_AA64PFR0_EL1_SHIFT 4 625 - #define ID_AA64PFR0_EL0_SHIFT 0 626 - 627 - #define ID_AA64PFR0_AMU 0x1 628 - #define ID_AA64PFR0_SVE 0x1 629 - #define ID_AA64PFR0_RAS_V1 0x1 630 - #define ID_AA64PFR0_RAS_V1P1 0x2 631 - #define ID_AA64PFR0_FP_NI 0xf 632 - #define ID_AA64PFR0_FP_SUPPORTED 0x0 633 - #define ID_AA64PFR0_ASIMD_NI 0xf 634 - #define ID_AA64PFR0_ASIMD_SUPPORTED 0x0 635 - #define ID_AA64PFR0_ELx_64BIT_ONLY 0x1 636 - #define ID_AA64PFR0_ELx_32BIT_64BIT 0x2 637 - 638 - /* id_aa64pfr1 */ 639 - #define ID_AA64PFR1_MPAMFRAC_SHIFT 16 640 - #define ID_AA64PFR1_RASFRAC_SHIFT 12 641 - #define ID_AA64PFR1_MTE_SHIFT 8 642 - #define ID_AA64PFR1_SSBS_SHIFT 4 643 - #define ID_AA64PFR1_BT_SHIFT 0 644 - 645 - #define ID_AA64PFR1_SSBS_PSTATE_NI 0 646 - #define ID_AA64PFR1_SSBS_PSTATE_ONLY 1 647 - #define ID_AA64PFR1_SSBS_PSTATE_INSNS 2 648 - #define ID_AA64PFR1_BT_BTI 0x1 649 - 650 - #define ID_AA64PFR1_MTE_NI 0x0 651 - #define ID_AA64PFR1_MTE_EL0 0x1 652 - #define ID_AA64PFR1_MTE 0x2 653 - 654 - /* id_aa64zfr0 */ 655 - #define ID_AA64ZFR0_F64MM_SHIFT 56 656 - #define ID_AA64ZFR0_F32MM_SHIFT 52 657 - #define ID_AA64ZFR0_I8MM_SHIFT 44 658 - #define ID_AA64ZFR0_SM4_SHIFT 40 659 - #define ID_AA64ZFR0_SHA3_SHIFT 32 660 - #define ID_AA64ZFR0_BF16_SHIFT 20 661 - #define ID_AA64ZFR0_BITPERM_SHIFT 16 662 - #define ID_AA64ZFR0_AES_SHIFT 4 663 - #define ID_AA64ZFR0_SVEVER_SHIFT 0 664 - 665 - #define ID_AA64ZFR0_F64MM 0x1 666 - #define ID_AA64ZFR0_F32MM 0x1 667 - #define ID_AA64ZFR0_I8MM 0x1 668 - #define ID_AA64ZFR0_BF16 0x1 669 - #define ID_AA64ZFR0_SM4 0x1 670 - #define ID_AA64ZFR0_SHA3 0x1 671 - #define ID_AA64ZFR0_BITPERM 0x1 672 - #define ID_AA64ZFR0_AES 0x1 673 - #define ID_AA64ZFR0_AES_PMULL 0x2 674 - #define ID_AA64ZFR0_SVEVER_SVE2 0x1 716 + #define ID_AA64PFR0_EL1_ELx_64BIT_ONLY 0x1 717 + #define ID_AA64PFR0_EL1_ELx_32BIT_64BIT 0x2 675 718 676 719 /* id_aa64mmfr0 */ 677 - #define ID_AA64MMFR0_ECV_SHIFT 60 678 - #define ID_AA64MMFR0_FGT_SHIFT 56 679 - #define ID_AA64MMFR0_EXS_SHIFT 44 680 - #define ID_AA64MMFR0_TGRAN4_2_SHIFT 40 681 - #define ID_AA64MMFR0_TGRAN64_2_SHIFT 36 682 - #define 
ID_AA64MMFR0_TGRAN16_2_SHIFT 32 683 - #define ID_AA64MMFR0_TGRAN4_SHIFT 28 684 - #define ID_AA64MMFR0_TGRAN64_SHIFT 24 685 - #define ID_AA64MMFR0_TGRAN16_SHIFT 20 686 - #define ID_AA64MMFR0_BIGENDEL0_SHIFT 16 687 - #define ID_AA64MMFR0_SNSMEM_SHIFT 12 688 - #define ID_AA64MMFR0_BIGENDEL_SHIFT 8 689 - #define ID_AA64MMFR0_ASID_SHIFT 4 690 - #define ID_AA64MMFR0_PARANGE_SHIFT 0 691 - 692 - #define ID_AA64MMFR0_ASID_8 0x0 693 - #define ID_AA64MMFR0_ASID_16 0x2 694 - 695 - #define ID_AA64MMFR0_TGRAN4_NI 0xf 696 - #define ID_AA64MMFR0_TGRAN4_SUPPORTED_MIN 0x0 697 - #define ID_AA64MMFR0_TGRAN4_SUPPORTED_MAX 0x7 698 - #define ID_AA64MMFR0_TGRAN64_NI 0xf 699 - #define ID_AA64MMFR0_TGRAN64_SUPPORTED_MIN 0x0 700 - #define ID_AA64MMFR0_TGRAN64_SUPPORTED_MAX 0x7 701 - #define ID_AA64MMFR0_TGRAN16_NI 0x0 702 - #define ID_AA64MMFR0_TGRAN16_SUPPORTED_MIN 0x1 703 - #define ID_AA64MMFR0_TGRAN16_SUPPORTED_MAX 0xf 704 - 705 - #define ID_AA64MMFR0_PARANGE_32 0x0 706 - #define ID_AA64MMFR0_PARANGE_36 0x1 707 - #define ID_AA64MMFR0_PARANGE_40 0x2 708 - #define ID_AA64MMFR0_PARANGE_42 0x3 709 - #define ID_AA64MMFR0_PARANGE_44 0x4 710 - #define ID_AA64MMFR0_PARANGE_48 0x5 711 - #define ID_AA64MMFR0_PARANGE_52 0x6 720 + #define ID_AA64MMFR0_EL1_TGRAN4_SUPPORTED_MIN 0x0 721 + #define ID_AA64MMFR0_EL1_TGRAN4_SUPPORTED_MAX 0x7 722 + #define ID_AA64MMFR0_EL1_TGRAN64_SUPPORTED_MIN 0x0 723 + #define ID_AA64MMFR0_EL1_TGRAN64_SUPPORTED_MAX 0x7 724 + #define ID_AA64MMFR0_EL1_TGRAN16_SUPPORTED_MIN 0x1 725 + #define ID_AA64MMFR0_EL1_TGRAN16_SUPPORTED_MAX 0xf 712 726 713 727 #define ARM64_MIN_PARANGE_BITS 32 714 728 715 - #define ID_AA64MMFR0_TGRAN_2_SUPPORTED_DEFAULT 0x0 716 - #define ID_AA64MMFR0_TGRAN_2_SUPPORTED_NONE 0x1 717 - #define ID_AA64MMFR0_TGRAN_2_SUPPORTED_MIN 0x2 718 - #define ID_AA64MMFR0_TGRAN_2_SUPPORTED_MAX 0x7 729 + #define ID_AA64MMFR0_EL1_TGRAN_2_SUPPORTED_DEFAULT 0x0 730 + #define ID_AA64MMFR0_EL1_TGRAN_2_SUPPORTED_NONE 0x1 731 + #define ID_AA64MMFR0_EL1_TGRAN_2_SUPPORTED_MIN 0x2 732 + #define ID_AA64MMFR0_EL1_TGRAN_2_SUPPORTED_MAX 0x7 719 733 720 734 #ifdef CONFIG_ARM64_PA_BITS_52 721 - #define ID_AA64MMFR0_PARANGE_MAX ID_AA64MMFR0_PARANGE_52 735 + #define ID_AA64MMFR0_EL1_PARANGE_MAX ID_AA64MMFR0_EL1_PARANGE_52 722 736 #else 723 - #define ID_AA64MMFR0_PARANGE_MAX ID_AA64MMFR0_PARANGE_48 737 + #define ID_AA64MMFR0_EL1_PARANGE_MAX ID_AA64MMFR0_EL1_PARANGE_48 724 738 #endif 725 - 726 - /* id_aa64mmfr1 */ 727 - #define ID_AA64MMFR1_ETS_SHIFT 36 728 - #define ID_AA64MMFR1_TWED_SHIFT 32 729 - #define ID_AA64MMFR1_XNX_SHIFT 28 730 - #define ID_AA64MMFR1_SPECSEI_SHIFT 24 731 - #define ID_AA64MMFR1_PAN_SHIFT 20 732 - #define ID_AA64MMFR1_LOR_SHIFT 16 733 - #define ID_AA64MMFR1_HPD_SHIFT 12 734 - #define ID_AA64MMFR1_VHE_SHIFT 8 735 - #define ID_AA64MMFR1_VMIDBITS_SHIFT 4 736 - #define ID_AA64MMFR1_HADBS_SHIFT 0 737 - 738 - #define ID_AA64MMFR1_VMIDBITS_8 0 739 - #define ID_AA64MMFR1_VMIDBITS_16 2 740 - 741 - /* id_aa64mmfr2 */ 742 - #define ID_AA64MMFR2_E0PD_SHIFT 60 743 - #define ID_AA64MMFR2_EVT_SHIFT 56 744 - #define ID_AA64MMFR2_BBM_SHIFT 52 745 - #define ID_AA64MMFR2_TTL_SHIFT 48 746 - #define ID_AA64MMFR2_FWB_SHIFT 40 747 - #define ID_AA64MMFR2_IDS_SHIFT 36 748 - #define ID_AA64MMFR2_AT_SHIFT 32 749 - #define ID_AA64MMFR2_ST_SHIFT 28 750 - #define ID_AA64MMFR2_NV_SHIFT 24 751 - #define ID_AA64MMFR2_CCIDX_SHIFT 20 752 - #define ID_AA64MMFR2_LVA_SHIFT 16 753 - #define ID_AA64MMFR2_IESB_SHIFT 12 754 - #define ID_AA64MMFR2_LSM_SHIFT 8 755 - #define ID_AA64MMFR2_UAO_SHIFT 4 756 - #define ID_AA64MMFR2_CNP_SHIFT 
0 757 - 758 - /* id_aa64dfr0 */ 759 - #define ID_AA64DFR0_MTPMU_SHIFT 48 760 - #define ID_AA64DFR0_TRBE_SHIFT 44 761 - #define ID_AA64DFR0_TRACE_FILT_SHIFT 40 762 - #define ID_AA64DFR0_DOUBLELOCK_SHIFT 36 763 - #define ID_AA64DFR0_PMSVER_SHIFT 32 764 - #define ID_AA64DFR0_CTX_CMPS_SHIFT 28 765 - #define ID_AA64DFR0_WRPS_SHIFT 20 766 - #define ID_AA64DFR0_BRPS_SHIFT 12 767 - #define ID_AA64DFR0_PMUVER_SHIFT 8 768 - #define ID_AA64DFR0_TRACEVER_SHIFT 4 769 - #define ID_AA64DFR0_DEBUGVER_SHIFT 0 770 - 771 - #define ID_AA64DFR0_PMUVER_8_0 0x1 772 - #define ID_AA64DFR0_PMUVER_8_1 0x4 773 - #define ID_AA64DFR0_PMUVER_8_4 0x5 774 - #define ID_AA64DFR0_PMUVER_8_5 0x6 775 - #define ID_AA64DFR0_PMUVER_IMP_DEF 0xf 776 - 777 - #define ID_AA64DFR0_PMSVER_8_2 0x1 778 - #define ID_AA64DFR0_PMSVER_8_3 0x2 779 - 780 - #define ID_DFR0_PERFMON_SHIFT 24 781 - 782 - #define ID_DFR0_PERFMON_8_0 0x3 783 - #define ID_DFR0_PERFMON_8_1 0x4 784 - #define ID_DFR0_PERFMON_8_4 0x5 785 - #define ID_DFR0_PERFMON_8_5 0x6 786 - 787 - #define ID_ISAR4_SWP_FRAC_SHIFT 28 788 - #define ID_ISAR4_PSR_M_SHIFT 24 789 - #define ID_ISAR4_SYNCH_PRIM_FRAC_SHIFT 20 790 - #define ID_ISAR4_BARRIER_SHIFT 16 791 - #define ID_ISAR4_SMC_SHIFT 12 792 - #define ID_ISAR4_WRITEBACK_SHIFT 8 793 - #define ID_ISAR4_WITHSHIFTS_SHIFT 4 794 - #define ID_ISAR4_UNPRIV_SHIFT 0 795 - 796 - #define ID_DFR1_MTPMU_SHIFT 0 797 - 798 - #define ID_ISAR0_DIVIDE_SHIFT 24 799 - #define ID_ISAR0_DEBUG_SHIFT 20 800 - #define ID_ISAR0_COPROC_SHIFT 16 801 - #define ID_ISAR0_CMPBRANCH_SHIFT 12 802 - #define ID_ISAR0_BITFIELD_SHIFT 8 803 - #define ID_ISAR0_BITCOUNT_SHIFT 4 804 - #define ID_ISAR0_SWAP_SHIFT 0 805 - 806 - #define ID_ISAR5_RDM_SHIFT 24 807 - #define ID_ISAR5_CRC32_SHIFT 16 808 - #define ID_ISAR5_SHA2_SHIFT 12 809 - #define ID_ISAR5_SHA1_SHIFT 8 810 - #define ID_ISAR5_AES_SHIFT 4 811 - #define ID_ISAR5_SEVL_SHIFT 0 812 - 813 - #define ID_ISAR6_I8MM_SHIFT 24 814 - #define ID_ISAR6_BF16_SHIFT 20 815 - #define ID_ISAR6_SPECRES_SHIFT 16 816 - #define ID_ISAR6_SB_SHIFT 12 817 - #define ID_ISAR6_FHM_SHIFT 8 818 - #define ID_ISAR6_DP_SHIFT 4 819 - #define ID_ISAR6_JSCVT_SHIFT 0 820 - 821 - #define ID_MMFR0_INNERSHR_SHIFT 28 822 - #define ID_MMFR0_FCSE_SHIFT 24 823 - #define ID_MMFR0_AUXREG_SHIFT 20 824 - #define ID_MMFR0_TCM_SHIFT 16 825 - #define ID_MMFR0_SHARELVL_SHIFT 12 826 - #define ID_MMFR0_OUTERSHR_SHIFT 8 827 - #define ID_MMFR0_PMSA_SHIFT 4 828 - #define ID_MMFR0_VMSA_SHIFT 0 829 - 830 - #define ID_MMFR4_EVT_SHIFT 28 831 - #define ID_MMFR4_CCIDX_SHIFT 24 832 - #define ID_MMFR4_LSM_SHIFT 20 833 - #define ID_MMFR4_HPDS_SHIFT 16 834 - #define ID_MMFR4_CNP_SHIFT 12 835 - #define ID_MMFR4_XNX_SHIFT 8 836 - #define ID_MMFR4_AC2_SHIFT 4 837 - #define ID_MMFR4_SPECSEI_SHIFT 0 838 - 839 - #define ID_MMFR5_ETS_SHIFT 0 840 - 841 - #define ID_PFR0_DIT_SHIFT 24 842 - #define ID_PFR0_CSV2_SHIFT 16 843 - #define ID_PFR0_STATE3_SHIFT 12 844 - #define ID_PFR0_STATE2_SHIFT 8 845 - #define ID_PFR0_STATE1_SHIFT 4 846 - #define ID_PFR0_STATE0_SHIFT 0 847 - 848 - #define ID_DFR0_PERFMON_SHIFT 24 849 - #define ID_DFR0_MPROFDBG_SHIFT 20 850 - #define ID_DFR0_MMAPTRC_SHIFT 16 851 - #define ID_DFR0_COPTRC_SHIFT 12 852 - #define ID_DFR0_MMAPDBG_SHIFT 8 853 - #define ID_DFR0_COPSDBG_SHIFT 4 854 - #define ID_DFR0_COPDBG_SHIFT 0 855 - 856 - #define ID_PFR2_SSBS_SHIFT 4 857 - #define ID_PFR2_CSV3_SHIFT 0 858 - 859 - #define MVFR0_FPROUND_SHIFT 28 860 - #define MVFR0_FPSHVEC_SHIFT 24 861 - #define MVFR0_FPSQRT_SHIFT 20 862 - #define MVFR0_FPDIVIDE_SHIFT 16 863 - #define 
MVFR0_FPTRAP_SHIFT 12 864 - #define MVFR0_FPDP_SHIFT 8 865 - #define MVFR0_FPSP_SHIFT 4 866 - #define MVFR0_SIMD_SHIFT 0 867 - 868 - #define MVFR1_SIMDFMAC_SHIFT 28 869 - #define MVFR1_FPHP_SHIFT 24 870 - #define MVFR1_SIMDHP_SHIFT 20 871 - #define MVFR1_SIMDSP_SHIFT 16 872 - #define MVFR1_SIMDINT_SHIFT 12 873 - #define MVFR1_SIMDLS_SHIFT 8 874 - #define MVFR1_FPDNAN_SHIFT 4 875 - #define MVFR1_FPFTZ_SHIFT 0 876 - 877 - #define ID_PFR1_GIC_SHIFT 28 878 - #define ID_PFR1_VIRT_FRAC_SHIFT 24 879 - #define ID_PFR1_SEC_FRAC_SHIFT 20 880 - #define ID_PFR1_GENTIMER_SHIFT 16 881 - #define ID_PFR1_VIRTUALIZATION_SHIFT 12 882 - #define ID_PFR1_MPROGMOD_SHIFT 8 883 - #define ID_PFR1_SECURITY_SHIFT 4 884 - #define ID_PFR1_PROGMOD_SHIFT 0 885 739 886 740 #if defined(CONFIG_ARM64_4K_PAGES) 887 - #define ID_AA64MMFR0_TGRAN_SHIFT ID_AA64MMFR0_TGRAN4_SHIFT 888 - #define ID_AA64MMFR0_TGRAN_SUPPORTED_MIN ID_AA64MMFR0_TGRAN4_SUPPORTED_MIN 889 - #define ID_AA64MMFR0_TGRAN_SUPPORTED_MAX ID_AA64MMFR0_TGRAN4_SUPPORTED_MAX 890 - #define ID_AA64MMFR0_TGRAN_2_SHIFT ID_AA64MMFR0_TGRAN4_2_SHIFT 741 + #define ID_AA64MMFR0_EL1_TGRAN_SHIFT ID_AA64MMFR0_EL1_TGRAN4_SHIFT 742 + #define ID_AA64MMFR0_EL1_TGRAN_SUPPORTED_MIN ID_AA64MMFR0_EL1_TGRAN4_SUPPORTED_MIN 743 + #define ID_AA64MMFR0_EL1_TGRAN_SUPPORTED_MAX ID_AA64MMFR0_EL1_TGRAN4_SUPPORTED_MAX 744 + #define ID_AA64MMFR0_EL1_TGRAN_2_SHIFT ID_AA64MMFR0_EL1_TGRAN4_2_SHIFT 891 745 #elif defined(CONFIG_ARM64_16K_PAGES) 892 - #define ID_AA64MMFR0_TGRAN_SHIFT ID_AA64MMFR0_TGRAN16_SHIFT 893 - #define ID_AA64MMFR0_TGRAN_SUPPORTED_MIN ID_AA64MMFR0_TGRAN16_SUPPORTED_MIN 894 - #define ID_AA64MMFR0_TGRAN_SUPPORTED_MAX ID_AA64MMFR0_TGRAN16_SUPPORTED_MAX 895 - #define ID_AA64MMFR0_TGRAN_2_SHIFT ID_AA64MMFR0_TGRAN16_2_SHIFT 746 + #define ID_AA64MMFR0_EL1_TGRAN_SHIFT ID_AA64MMFR0_EL1_TGRAN16_SHIFT 747 + #define ID_AA64MMFR0_EL1_TGRAN_SUPPORTED_MIN ID_AA64MMFR0_EL1_TGRAN16_SUPPORTED_MIN 748 + #define ID_AA64MMFR0_EL1_TGRAN_SUPPORTED_MAX ID_AA64MMFR0_EL1_TGRAN16_SUPPORTED_MAX 749 + #define ID_AA64MMFR0_EL1_TGRAN_2_SHIFT ID_AA64MMFR0_EL1_TGRAN16_2_SHIFT 896 750 #elif defined(CONFIG_ARM64_64K_PAGES) 897 - #define ID_AA64MMFR0_TGRAN_SHIFT ID_AA64MMFR0_TGRAN64_SHIFT 898 - #define ID_AA64MMFR0_TGRAN_SUPPORTED_MIN ID_AA64MMFR0_TGRAN64_SUPPORTED_MIN 899 - #define ID_AA64MMFR0_TGRAN_SUPPORTED_MAX ID_AA64MMFR0_TGRAN64_SUPPORTED_MAX 900 - #define ID_AA64MMFR0_TGRAN_2_SHIFT ID_AA64MMFR0_TGRAN64_2_SHIFT 751 + #define ID_AA64MMFR0_EL1_TGRAN_SHIFT ID_AA64MMFR0_EL1_TGRAN64_SHIFT 752 + #define ID_AA64MMFR0_EL1_TGRAN_SUPPORTED_MIN ID_AA64MMFR0_EL1_TGRAN64_SUPPORTED_MIN 753 + #define ID_AA64MMFR0_EL1_TGRAN_SUPPORTED_MAX ID_AA64MMFR0_EL1_TGRAN64_SUPPORTED_MAX 754 + #define ID_AA64MMFR0_EL1_TGRAN_2_SHIFT ID_AA64MMFR0_EL1_TGRAN64_2_SHIFT 901 755 #endif 902 756 903 - #define MVFR2_FPMISC_SHIFT 4 904 - #define MVFR2_SIMDMISC_SHIFT 0 757 + #define CPACR_EL1_FPEN_EL1EN (BIT(20)) /* enable EL1 access */ 758 + #define CPACR_EL1_FPEN_EL0EN (BIT(21)) /* enable EL0 access, if EL1EN set */ 905 759 906 - #define DCZID_DZP_SHIFT 4 907 - #define DCZID_BS_SHIFT 0 908 - 909 - /* 910 - * The ZCR_ELx_LEN_* definitions intentionally include bits [8:4] which 911 - * are reserved by the SVE architecture for future expansion of the LEN 912 - * field, with compatible semantics. 
913 - */ 914 - #define ZCR_ELx_LEN_SHIFT 0 915 - #define ZCR_ELx_LEN_SIZE 9 916 - #define ZCR_ELx_LEN_MASK 0x1ff 760 + #define CPACR_EL1_SMEN_EL1EN (BIT(24)) /* enable EL1 access */ 761 + #define CPACR_EL1_SMEN_EL0EN (BIT(25)) /* enable EL0 access, if EL1EN set */ 917 762 918 763 #define CPACR_EL1_ZEN_EL1EN (BIT(16)) /* enable EL1 access */ 919 764 #define CPACR_EL1_ZEN_EL0EN (BIT(17)) /* enable EL0 access, if EL1EN set */ 920 - #define CPACR_EL1_ZEN (CPACR_EL1_ZEN_EL1EN | CPACR_EL1_ZEN_EL0EN) 921 - 922 - /* TCR EL1 Bit Definitions */ 923 - #define SYS_TCR_EL1_TCMA1 (BIT(58)) 924 - #define SYS_TCR_EL1_TCMA0 (BIT(57)) 925 765 926 766 /* GCR_EL1 Definitions */ 927 767 #define SYS_GCR_EL1_RRND (BIT(16)) 928 768 #define SYS_GCR_EL1_EXCL_MASK 0xffffUL 929 769 770 + #define KERNEL_GCR_EL1 (SYS_GCR_EL1_RRND | KERNEL_GCR_EL1_EXCL) 771 + 930 772 /* RGSR_EL1 Definitions */ 931 773 #define SYS_RGSR_EL1_TAG_MASK 0xfUL 932 774 #define SYS_RGSR_EL1_SEED_SHIFT 8 933 775 #define SYS_RGSR_EL1_SEED_MASK 0xffffUL 934 - 935 - /* GMID_EL1 field definitions */ 936 - #define SYS_GMID_EL1_BS_SHIFT 0 937 - #define SYS_GMID_EL1_BS_SIZE 4 938 776 939 777 /* TFSR{,E0}_EL1 bit definitions */ 940 778 #define SYS_TFSR_EL1_TF0_SHIFT 0 ··· 627 1103 #define SYS_MPIDR_SAFE_VAL (BIT(31)) 628 1104 629 1105 #define TRFCR_ELx_TS_SHIFT 5 1106 + #define TRFCR_ELx_TS_MASK ((0x3UL) << TRFCR_ELx_TS_SHIFT) 630 1107 #define TRFCR_ELx_TS_VIRTUAL ((0x1UL) << TRFCR_ELx_TS_SHIFT) 631 1108 #define TRFCR_ELx_TS_GUEST_PHYSICAL ((0x2UL) << TRFCR_ELx_TS_SHIFT) 632 1109 #define TRFCR_ELx_TS_PHYSICAL ((0x3UL) << TRFCR_ELx_TS_SHIFT) 633 1110 #define TRFCR_EL2_CX BIT(3) 634 1111 #define TRFCR_ELx_ExTRE BIT(1) 635 1112 #define TRFCR_ELx_E0TRE BIT(0) 636 - 637 1113 638 1114 /* GIC Hypervisor interface registers */ 639 1115 /* ICH_MISR_EL2 bit definitions */ ··· 661 1137 #define ICH_HCR_TC (1 << 10) 662 1138 #define ICH_HCR_TALL0 (1 << 11) 663 1139 #define ICH_HCR_TALL1 (1 << 12) 1140 + #define ICH_HCR_TDIR (1 << 14) 664 1141 #define ICH_HCR_EOIcount_SHIFT 27 665 1142 #define ICH_HCR_EOIcount_MASK (0x1f << ICH_HCR_EOIcount_SHIFT) 666 1143 ··· 694 1169 #define ICH_VTR_SEIS_MASK (1 << ICH_VTR_SEIS_SHIFT) 695 1170 #define ICH_VTR_A3V_SHIFT 21 696 1171 #define ICH_VTR_A3V_MASK (1 << ICH_VTR_A3V_SHIFT) 1172 + #define ICH_VTR_TDS_SHIFT 19 1173 + #define ICH_VTR_TDS_MASK (1 << ICH_VTR_TDS_SHIFT) 1174 + 1175 + /* 1176 + * Permission Indirection Extension (PIE) permission encodings. 1177 + * Encodings with the _O suffix, have overlays applied (Permission Overlay Extension). 1178 + */ 1179 + #define PIE_NONE_O 0x0 1180 + #define PIE_R_O 0x1 1181 + #define PIE_X_O 0x2 1182 + #define PIE_RX_O 0x3 1183 + #define PIE_RW_O 0x5 1184 + #define PIE_RWnX_O 0x6 1185 + #define PIE_RWX_O 0x7 1186 + #define PIE_R 0x8 1187 + #define PIE_GCS 0x9 1188 + #define PIE_RX 0xa 1189 + #define PIE_RW 0xc 1190 + #define PIE_RWX 0xe 1191 + 1192 + #define PIRx_ELx_PERM(idx, perm) ((perm) << ((idx) * 4)) 697 1193 698 1194 #define ARM64_FEATURE_FIELD_BITS 4 699 1195 700 - /* Create a mask for the feature bits of the specified feature. */ 701 - #define ARM64_FEATURE_MASK(x) (GENMASK_ULL(x##_SHIFT + ARM64_FEATURE_FIELD_BITS - 1, x##_SHIFT)) 1196 + /* Defined for compatibility only, do not add new users. 
*/ 1197 + #define ARM64_FEATURE_MASK(x) (x##_MASK) 702 1198 703 1199 #ifdef __ASSEMBLY__ 704 1200 705 - .irp num,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30 706 - .equ .L__reg_num_x\num, \num 707 - .endr 708 - .equ .L__reg_num_xzr, 31 709 - 710 1201 .macro mrs_s, rt, sreg 711 - __emit_inst(0xd5200000|(\sreg)|(.L__reg_num_\rt)) 1202 + __emit_inst(0xd5200000|(\sreg)|(.L__gpr_num_\rt)) 712 1203 .endm 713 1204 714 1205 .macro msr_s, sreg, rt 715 - __emit_inst(0xd5000000|(\sreg)|(.L__reg_num_\rt)) 1206 + __emit_inst(0xd5000000|(\sreg)|(.L__gpr_num_\rt)) 716 1207 .endm 717 1208 718 1209 #else 719 1210 1211 + #include <linux/bitfield.h> 720 1212 #include <linux/build_bug.h> 721 1213 #include <linux/types.h> 722 1214 #include <asm/alternative.h> 723 1215 724 - #define __DEFINE_MRS_MSR_S_REGNUM \ 725 - " .irp num,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30\n" \ 726 - " .equ .L__reg_num_x\\num, \\num\n" \ 727 - " .endr\n" \ 728 - " .equ .L__reg_num_xzr, 31\n" 729 - 730 1216 #define DEFINE_MRS_S \ 731 - __DEFINE_MRS_MSR_S_REGNUM \ 1217 + __DEFINE_ASM_GPR_NUMS \ 732 1218 " .macro mrs_s, rt, sreg\n" \ 733 - __emit_inst(0xd5200000|(\\sreg)|(.L__reg_num_\\rt)) \ 1219 + __emit_inst(0xd5200000|(\\sreg)|(.L__gpr_num_\\rt)) \ 734 1220 " .endm\n" 735 1221 736 1222 #define DEFINE_MSR_S \ 737 - __DEFINE_MRS_MSR_S_REGNUM \ 1223 + __DEFINE_ASM_GPR_NUMS \ 738 1224 " .macro msr_s, sreg, rt\n" \ 739 - __emit_inst(0xd5000000|(\\sreg)|(.L__reg_num_\\rt)) \ 1225 + __emit_inst(0xd5000000|(\\sreg)|(.L__gpr_num_\\rt)) \ 740 1226 " .endm\n" 741 1227 742 1228 #define UNDEFINE_MRS_S \ ··· 826 1290 asm(ALTERNATIVE("nop", "dmb sy", ARM64_WORKAROUND_1508412)); \ 827 1291 par; \ 828 1292 }) 1293 + 1294 + #define SYS_FIELD_GET(reg, field, val) \ 1295 + FIELD_GET(reg##_##field##_MASK, val) 1296 + 1297 + #define SYS_FIELD_PREP(reg, field, val) \ 1298 + FIELD_PREP(reg##_##field##_MASK, val) 1299 + 1300 + #define SYS_FIELD_PREP_ENUM(reg, field, val) \ 1301 + FIELD_PREP(reg##_##field##_MASK, reg##_##field##_##val) 829 1302 830 1303 #endif 831 1304
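A note on the sysreg.h hunks above: the new SYS_FIELD_GET()/SYS_FIELD_PREP()/SYS_FIELD_PREP_ENUM() helpers simply paste the register and field names together and hand the resulting <reg>_<field>_MASK to FIELD_GET()/FIELD_PREP() from <linux/bitfield.h>, which is also why ARM64_FEATURE_MASK() can collapse into a compatibility alias for the generated *_MASK macros. A minimal sketch of how they compose, using the ID_AA64DFR0_EL1.PMUVer field that the selftests further down also rely on (illustrative only, not part of the commit):

    /* Sketch: read PMUVer and, if absent, advertise baseline PMUv3. */
    static u64 bump_pmuver(u64 dfr0)
    {
        /* Expands to FIELD_GET(ID_AA64DFR0_EL1_PMUVer_MASK, dfr0). */
        u64 pmuver = SYS_FIELD_GET(ID_AA64DFR0_EL1, PMUVer, dfr0);

        if (pmuver == ID_AA64DFR0_EL1_PMUVer_NI) {
            dfr0 &= ~ID_AA64DFR0_EL1_PMUVer_MASK;
            /* Expands to FIELD_PREP(ID_AA64DFR0_EL1_PMUVer_MASK,
             *                       ID_AA64DFR0_EL1_PMUVer_IMP). */
            dfr0 |= SYS_FIELD_PREP_ENUM(ID_AA64DFR0_EL1, PMUVer, IMP);
        }
        return dfr0;
    }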
+38
tools/arch/arm64/tools/Makefile
··· 1 + # SPDX-License-Identifier: GPL-2.0 2 + 3 + ifeq ($(top_srcdir),) 4 + top_srcdir := $(patsubst %/,%,$(dir $(CURDIR))) 5 + top_srcdir := $(patsubst %/,%,$(dir $(top_srcdir))) 6 + top_srcdir := $(patsubst %/,%,$(dir $(top_srcdir))) 7 + top_srcdir := $(patsubst %/,%,$(dir $(top_srcdir))) 8 + endif 9 + 10 + include $(top_srcdir)/tools/scripts/Makefile.include 11 + 12 + AWK ?= awk 13 + MKDIR ?= mkdir 14 + RM ?= rm 15 + 16 + ifeq ($(V),1) 17 + Q = 18 + else 19 + Q = @ 20 + endif 21 + 22 + arm64_tools_dir = $(top_srcdir)/arch/arm64/tools 23 + arm64_sysreg_tbl = $(arm64_tools_dir)/sysreg 24 + arm64_gen_sysreg = $(arm64_tools_dir)/gen-sysreg.awk 25 + arm64_generated_dir = $(top_srcdir)/tools/arch/arm64/include/generated 26 + arm64_sysreg_defs = $(arm64_generated_dir)/asm/sysreg-defs.h 27 + 28 + all: $(arm64_sysreg_defs) 29 + @: 30 + 31 + $(arm64_sysreg_defs): $(arm64_gen_sysreg) $(arm64_sysreg_tbl) 32 + $(Q)$(MKDIR) -p $(dir $@) 33 + $(QUIET_GEN)$(AWK) -f $^ > $@ 34 + 35 + clean: 36 + $(Q)$(RM) -rf $(arm64_generated_dir) 37 + 38 + .PHONY: all clean
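The small Makefile above is load-bearing: it reuses the kernel's own arch/arm64/tools/gen-sysreg.awk and sysreg table to emit tools/arch/arm64/include/generated/asm/sysreg-defs.h, so that perf and the KVM selftests consume the same generated register and field names as the kernel proper (the perf and selftest Makefile hunks below wire this up with a make -C into this directory). The generated header itself is not part of the diff; as a rough, illustrative sketch of its shape for a single field (field positions per the Arm ARM; the generated file is authoritative):

    /* Approximate shape of one field's entries in sysreg-defs.h. */
    #define ID_AA64DFR0_EL1_PMUVer_SHIFT      8
    #define ID_AA64DFR0_EL1_PMUVer_MASK       GENMASK_ULL(11, 8)
    #define ID_AA64DFR0_EL1_PMUVer_NI         0x0
    #define ID_AA64DFR0_EL1_PMUVer_IMP        0x1
    #define ID_AA64DFR0_EL1_PMUVer_IMP_DEF    0xf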
+308
tools/include/perf/arm_pmuv3.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + /* 3 + * Copyright (C) 2012 ARM Ltd. 4 + */ 5 + 6 + #ifndef __PERF_ARM_PMUV3_H 7 + #define __PERF_ARM_PMUV3_H 8 + 9 + #include <assert.h> 10 + #include <asm/bug.h> 11 + 12 + #define ARMV8_PMU_MAX_COUNTERS 32 13 + #define ARMV8_PMU_COUNTER_MASK (ARMV8_PMU_MAX_COUNTERS - 1) 14 + 15 + /* 16 + * Common architectural and microarchitectural event numbers. 17 + */ 18 + #define ARMV8_PMUV3_PERFCTR_SW_INCR 0x0000 19 + #define ARMV8_PMUV3_PERFCTR_L1I_CACHE_REFILL 0x0001 20 + #define ARMV8_PMUV3_PERFCTR_L1I_TLB_REFILL 0x0002 21 + #define ARMV8_PMUV3_PERFCTR_L1D_CACHE_REFILL 0x0003 22 + #define ARMV8_PMUV3_PERFCTR_L1D_CACHE 0x0004 23 + #define ARMV8_PMUV3_PERFCTR_L1D_TLB_REFILL 0x0005 24 + #define ARMV8_PMUV3_PERFCTR_LD_RETIRED 0x0006 25 + #define ARMV8_PMUV3_PERFCTR_ST_RETIRED 0x0007 26 + #define ARMV8_PMUV3_PERFCTR_INST_RETIRED 0x0008 27 + #define ARMV8_PMUV3_PERFCTR_EXC_TAKEN 0x0009 28 + #define ARMV8_PMUV3_PERFCTR_EXC_RETURN 0x000A 29 + #define ARMV8_PMUV3_PERFCTR_CID_WRITE_RETIRED 0x000B 30 + #define ARMV8_PMUV3_PERFCTR_PC_WRITE_RETIRED 0x000C 31 + #define ARMV8_PMUV3_PERFCTR_BR_IMMED_RETIRED 0x000D 32 + #define ARMV8_PMUV3_PERFCTR_BR_RETURN_RETIRED 0x000E 33 + #define ARMV8_PMUV3_PERFCTR_UNALIGNED_LDST_RETIRED 0x000F 34 + #define ARMV8_PMUV3_PERFCTR_BR_MIS_PRED 0x0010 35 + #define ARMV8_PMUV3_PERFCTR_CPU_CYCLES 0x0011 36 + #define ARMV8_PMUV3_PERFCTR_BR_PRED 0x0012 37 + #define ARMV8_PMUV3_PERFCTR_MEM_ACCESS 0x0013 38 + #define ARMV8_PMUV3_PERFCTR_L1I_CACHE 0x0014 39 + #define ARMV8_PMUV3_PERFCTR_L1D_CACHE_WB 0x0015 40 + #define ARMV8_PMUV3_PERFCTR_L2D_CACHE 0x0016 41 + #define ARMV8_PMUV3_PERFCTR_L2D_CACHE_REFILL 0x0017 42 + #define ARMV8_PMUV3_PERFCTR_L2D_CACHE_WB 0x0018 43 + #define ARMV8_PMUV3_PERFCTR_BUS_ACCESS 0x0019 44 + #define ARMV8_PMUV3_PERFCTR_MEMORY_ERROR 0x001A 45 + #define ARMV8_PMUV3_PERFCTR_INST_SPEC 0x001B 46 + #define ARMV8_PMUV3_PERFCTR_TTBR_WRITE_RETIRED 0x001C 47 + #define ARMV8_PMUV3_PERFCTR_BUS_CYCLES 0x001D 48 + #define ARMV8_PMUV3_PERFCTR_CHAIN 0x001E 49 + #define ARMV8_PMUV3_PERFCTR_L1D_CACHE_ALLOCATE 0x001F 50 + #define ARMV8_PMUV3_PERFCTR_L2D_CACHE_ALLOCATE 0x0020 51 + #define ARMV8_PMUV3_PERFCTR_BR_RETIRED 0x0021 52 + #define ARMV8_PMUV3_PERFCTR_BR_MIS_PRED_RETIRED 0x0022 53 + #define ARMV8_PMUV3_PERFCTR_STALL_FRONTEND 0x0023 54 + #define ARMV8_PMUV3_PERFCTR_STALL_BACKEND 0x0024 55 + #define ARMV8_PMUV3_PERFCTR_L1D_TLB 0x0025 56 + #define ARMV8_PMUV3_PERFCTR_L1I_TLB 0x0026 57 + #define ARMV8_PMUV3_PERFCTR_L2I_CACHE 0x0027 58 + #define ARMV8_PMUV3_PERFCTR_L2I_CACHE_REFILL 0x0028 59 + #define ARMV8_PMUV3_PERFCTR_L3D_CACHE_ALLOCATE 0x0029 60 + #define ARMV8_PMUV3_PERFCTR_L3D_CACHE_REFILL 0x002A 61 + #define ARMV8_PMUV3_PERFCTR_L3D_CACHE 0x002B 62 + #define ARMV8_PMUV3_PERFCTR_L3D_CACHE_WB 0x002C 63 + #define ARMV8_PMUV3_PERFCTR_L2D_TLB_REFILL 0x002D 64 + #define ARMV8_PMUV3_PERFCTR_L2I_TLB_REFILL 0x002E 65 + #define ARMV8_PMUV3_PERFCTR_L2D_TLB 0x002F 66 + #define ARMV8_PMUV3_PERFCTR_L2I_TLB 0x0030 67 + #define ARMV8_PMUV3_PERFCTR_REMOTE_ACCESS 0x0031 68 + #define ARMV8_PMUV3_PERFCTR_LL_CACHE 0x0032 69 + #define ARMV8_PMUV3_PERFCTR_LL_CACHE_MISS 0x0033 70 + #define ARMV8_PMUV3_PERFCTR_DTLB_WALK 0x0034 71 + #define ARMV8_PMUV3_PERFCTR_ITLB_WALK 0x0035 72 + #define ARMV8_PMUV3_PERFCTR_LL_CACHE_RD 0x0036 73 + #define ARMV8_PMUV3_PERFCTR_LL_CACHE_MISS_RD 0x0037 74 + #define ARMV8_PMUV3_PERFCTR_REMOTE_ACCESS_RD 0x0038 75 + #define ARMV8_PMUV3_PERFCTR_L1D_CACHE_LMISS_RD 0x0039 76 + #define ARMV8_PMUV3_PERFCTR_OP_RETIRED 
0x003A 77 + #define ARMV8_PMUV3_PERFCTR_OP_SPEC 0x003B 78 + #define ARMV8_PMUV3_PERFCTR_STALL 0x003C 79 + #define ARMV8_PMUV3_PERFCTR_STALL_SLOT_BACKEND 0x003D 80 + #define ARMV8_PMUV3_PERFCTR_STALL_SLOT_FRONTEND 0x003E 81 + #define ARMV8_PMUV3_PERFCTR_STALL_SLOT 0x003F 82 + 83 + /* Statistical profiling extension microarchitectural events */ 84 + #define ARMV8_SPE_PERFCTR_SAMPLE_POP 0x4000 85 + #define ARMV8_SPE_PERFCTR_SAMPLE_FEED 0x4001 86 + #define ARMV8_SPE_PERFCTR_SAMPLE_FILTRATE 0x4002 87 + #define ARMV8_SPE_PERFCTR_SAMPLE_COLLISION 0x4003 88 + 89 + /* AMUv1 architecture events */ 90 + #define ARMV8_AMU_PERFCTR_CNT_CYCLES 0x4004 91 + #define ARMV8_AMU_PERFCTR_STALL_BACKEND_MEM 0x4005 92 + 93 + /* long-latency read miss events */ 94 + #define ARMV8_PMUV3_PERFCTR_L1I_CACHE_LMISS 0x4006 95 + #define ARMV8_PMUV3_PERFCTR_L2D_CACHE_LMISS_RD 0x4009 96 + #define ARMV8_PMUV3_PERFCTR_L2I_CACHE_LMISS 0x400A 97 + #define ARMV8_PMUV3_PERFCTR_L3D_CACHE_LMISS_RD 0x400B 98 + 99 + /* Trace buffer events */ 100 + #define ARMV8_PMUV3_PERFCTR_TRB_WRAP 0x400C 101 + #define ARMV8_PMUV3_PERFCTR_TRB_TRIG 0x400E 102 + 103 + /* Trace unit events */ 104 + #define ARMV8_PMUV3_PERFCTR_TRCEXTOUT0 0x4010 105 + #define ARMV8_PMUV3_PERFCTR_TRCEXTOUT1 0x4011 106 + #define ARMV8_PMUV3_PERFCTR_TRCEXTOUT2 0x4012 107 + #define ARMV8_PMUV3_PERFCTR_TRCEXTOUT3 0x4013 108 + #define ARMV8_PMUV3_PERFCTR_CTI_TRIGOUT4 0x4018 109 + #define ARMV8_PMUV3_PERFCTR_CTI_TRIGOUT5 0x4019 110 + #define ARMV8_PMUV3_PERFCTR_CTI_TRIGOUT6 0x401A 111 + #define ARMV8_PMUV3_PERFCTR_CTI_TRIGOUT7 0x401B 112 + 113 + /* additional latency from alignment events */ 114 + #define ARMV8_PMUV3_PERFCTR_LDST_ALIGN_LAT 0x4020 115 + #define ARMV8_PMUV3_PERFCTR_LD_ALIGN_LAT 0x4021 116 + #define ARMV8_PMUV3_PERFCTR_ST_ALIGN_LAT 0x4022 117 + 118 + /* Armv8.5 Memory Tagging Extension events */ 119 + #define ARMV8_MTE_PERFCTR_MEM_ACCESS_CHECKED 0x4024 120 + #define ARMV8_MTE_PERFCTR_MEM_ACCESS_CHECKED_RD 0x4025 121 + #define ARMV8_MTE_PERFCTR_MEM_ACCESS_CHECKED_WR 0x4026 122 + 123 + /* ARMv8 recommended implementation defined event types */ 124 + #define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_RD 0x0040 125 + #define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_WR 0x0041 126 + #define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_RD 0x0042 127 + #define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_WR 0x0043 128 + #define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_INNER 0x0044 129 + #define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_OUTER 0x0045 130 + #define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_WB_VICTIM 0x0046 131 + #define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_WB_CLEAN 0x0047 132 + #define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_INVAL 0x0048 133 + 134 + #define ARMV8_IMPDEF_PERFCTR_L1D_TLB_REFILL_RD 0x004C 135 + #define ARMV8_IMPDEF_PERFCTR_L1D_TLB_REFILL_WR 0x004D 136 + #define ARMV8_IMPDEF_PERFCTR_L1D_TLB_RD 0x004E 137 + #define ARMV8_IMPDEF_PERFCTR_L1D_TLB_WR 0x004F 138 + #define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_RD 0x0050 139 + #define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_WR 0x0051 140 + #define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_REFILL_RD 0x0052 141 + #define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_REFILL_WR 0x0053 142 + 143 + #define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_WB_VICTIM 0x0056 144 + #define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_WB_CLEAN 0x0057 145 + #define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_INVAL 0x0058 146 + 147 + #define ARMV8_IMPDEF_PERFCTR_L2D_TLB_REFILL_RD 0x005C 148 + #define ARMV8_IMPDEF_PERFCTR_L2D_TLB_REFILL_WR 0x005D 149 + #define ARMV8_IMPDEF_PERFCTR_L2D_TLB_RD 0x005E 150 + #define ARMV8_IMPDEF_PERFCTR_L2D_TLB_WR 0x005F 151 + #define 
ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_RD 0x0060 152 + #define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_WR 0x0061 153 + #define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_SHARED 0x0062 154 + #define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_NOT_SHARED 0x0063 155 + #define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_NORMAL 0x0064 156 + #define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_PERIPH 0x0065 157 + #define ARMV8_IMPDEF_PERFCTR_MEM_ACCESS_RD 0x0066 158 + #define ARMV8_IMPDEF_PERFCTR_MEM_ACCESS_WR 0x0067 159 + #define ARMV8_IMPDEF_PERFCTR_UNALIGNED_LD_SPEC 0x0068 160 + #define ARMV8_IMPDEF_PERFCTR_UNALIGNED_ST_SPEC 0x0069 161 + #define ARMV8_IMPDEF_PERFCTR_UNALIGNED_LDST_SPEC 0x006A 162 + 163 + #define ARMV8_IMPDEF_PERFCTR_LDREX_SPEC 0x006C 164 + #define ARMV8_IMPDEF_PERFCTR_STREX_PASS_SPEC 0x006D 165 + #define ARMV8_IMPDEF_PERFCTR_STREX_FAIL_SPEC 0x006E 166 + #define ARMV8_IMPDEF_PERFCTR_STREX_SPEC 0x006F 167 + #define ARMV8_IMPDEF_PERFCTR_LD_SPEC 0x0070 168 + #define ARMV8_IMPDEF_PERFCTR_ST_SPEC 0x0071 169 + #define ARMV8_IMPDEF_PERFCTR_LDST_SPEC 0x0072 170 + #define ARMV8_IMPDEF_PERFCTR_DP_SPEC 0x0073 171 + #define ARMV8_IMPDEF_PERFCTR_ASE_SPEC 0x0074 172 + #define ARMV8_IMPDEF_PERFCTR_VFP_SPEC 0x0075 173 + #define ARMV8_IMPDEF_PERFCTR_PC_WRITE_SPEC 0x0076 174 + #define ARMV8_IMPDEF_PERFCTR_CRYPTO_SPEC 0x0077 175 + #define ARMV8_IMPDEF_PERFCTR_BR_IMMED_SPEC 0x0078 176 + #define ARMV8_IMPDEF_PERFCTR_BR_RETURN_SPEC 0x0079 177 + #define ARMV8_IMPDEF_PERFCTR_BR_INDIRECT_SPEC 0x007A 178 + 179 + #define ARMV8_IMPDEF_PERFCTR_ISB_SPEC 0x007C 180 + #define ARMV8_IMPDEF_PERFCTR_DSB_SPEC 0x007D 181 + #define ARMV8_IMPDEF_PERFCTR_DMB_SPEC 0x007E 182 + 183 + #define ARMV8_IMPDEF_PERFCTR_EXC_UNDEF 0x0081 184 + #define ARMV8_IMPDEF_PERFCTR_EXC_SVC 0x0082 185 + #define ARMV8_IMPDEF_PERFCTR_EXC_PABORT 0x0083 186 + #define ARMV8_IMPDEF_PERFCTR_EXC_DABORT 0x0084 187 + 188 + #define ARMV8_IMPDEF_PERFCTR_EXC_IRQ 0x0086 189 + #define ARMV8_IMPDEF_PERFCTR_EXC_FIQ 0x0087 190 + #define ARMV8_IMPDEF_PERFCTR_EXC_SMC 0x0088 191 + 192 + #define ARMV8_IMPDEF_PERFCTR_EXC_HVC 0x008A 193 + #define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_PABORT 0x008B 194 + #define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_DABORT 0x008C 195 + #define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_OTHER 0x008D 196 + #define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_IRQ 0x008E 197 + #define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_FIQ 0x008F 198 + #define ARMV8_IMPDEF_PERFCTR_RC_LD_SPEC 0x0090 199 + #define ARMV8_IMPDEF_PERFCTR_RC_ST_SPEC 0x0091 200 + 201 + #define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_RD 0x00A0 202 + #define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_WR 0x00A1 203 + #define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_REFILL_RD 0x00A2 204 + #define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_REFILL_WR 0x00A3 205 + 206 + #define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_WB_VICTIM 0x00A6 207 + #define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_WB_CLEAN 0x00A7 208 + #define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_INVAL 0x00A8 209 + 210 + /* 211 + * Per-CPU PMCR: config reg 212 + */ 213 + #define ARMV8_PMU_PMCR_E (1 << 0) /* Enable all counters */ 214 + #define ARMV8_PMU_PMCR_P (1 << 1) /* Reset all counters */ 215 + #define ARMV8_PMU_PMCR_C (1 << 2) /* Cycle counter reset */ 216 + #define ARMV8_PMU_PMCR_D (1 << 3) /* CCNT counts every 64th cpu cycle */ 217 + #define ARMV8_PMU_PMCR_X (1 << 4) /* Export to ETM */ 218 + #define ARMV8_PMU_PMCR_DP (1 << 5) /* Disable CCNT if non-invasive debug*/ 219 + #define ARMV8_PMU_PMCR_LC (1 << 6) /* Overflow on 64 bit cycle counter */ 220 + #define ARMV8_PMU_PMCR_LP (1 << 7) /* Long event counter enable */ 221 + #define ARMV8_PMU_PMCR_N_SHIFT 11 /* Number of counters supported */ 
222 + #define ARMV8_PMU_PMCR_N_MASK 0x1f 223 + #define ARMV8_PMU_PMCR_MASK 0xff /* Mask for writable bits */ 224 + 225 + /* 226 + * PMOVSR: counters overflow flag status reg 227 + */ 228 + #define ARMV8_PMU_OVSR_MASK 0xffffffff /* Mask for writable bits */ 229 + #define ARMV8_PMU_OVERFLOWED_MASK ARMV8_PMU_OVSR_MASK 230 + 231 + /* 232 + * PMXEVTYPER: Event selection reg 233 + */ 234 + #define ARMV8_PMU_EVTYPE_MASK 0xc800ffff /* Mask for writable bits */ 235 + #define ARMV8_PMU_EVTYPE_EVENT 0xffff /* Mask for EVENT bits */ 236 + 237 + /* 238 + * Event filters for PMUv3 239 + */ 240 + #define ARMV8_PMU_EXCLUDE_EL1 (1U << 31) 241 + #define ARMV8_PMU_EXCLUDE_EL0 (1U << 30) 242 + #define ARMV8_PMU_INCLUDE_EL2 (1U << 27) 243 + 244 + /* 245 + * PMUSERENR: user enable reg 246 + */ 247 + #define ARMV8_PMU_USERENR_MASK 0xf /* Mask for writable bits */ 248 + #define ARMV8_PMU_USERENR_EN (1 << 0) /* PMU regs can be accessed at EL0 */ 249 + #define ARMV8_PMU_USERENR_SW (1 << 1) /* PMSWINC can be written at EL0 */ 250 + #define ARMV8_PMU_USERENR_CR (1 << 2) /* Cycle counter can be read at EL0 */ 251 + #define ARMV8_PMU_USERENR_ER (1 << 3) /* Event counter can be read at EL0 */ 252 + 253 + /* PMMIR_EL1.SLOTS mask */ 254 + #define ARMV8_PMU_SLOTS_MASK 0xff 255 + 256 + #define ARMV8_PMU_BUS_SLOTS_SHIFT 8 257 + #define ARMV8_PMU_BUS_SLOTS_MASK 0xff 258 + #define ARMV8_PMU_BUS_WIDTH_SHIFT 16 259 + #define ARMV8_PMU_BUS_WIDTH_MASK 0xf 260 + 261 + /* 262 + * This code is really good 263 + */ 264 + 265 + #define PMEVN_CASE(n, case_macro) \ 266 + case n: case_macro(n); break 267 + 268 + #define PMEVN_SWITCH(x, case_macro) \ 269 + do { \ 270 + switch (x) { \ 271 + PMEVN_CASE(0, case_macro); \ 272 + PMEVN_CASE(1, case_macro); \ 273 + PMEVN_CASE(2, case_macro); \ 274 + PMEVN_CASE(3, case_macro); \ 275 + PMEVN_CASE(4, case_macro); \ 276 + PMEVN_CASE(5, case_macro); \ 277 + PMEVN_CASE(6, case_macro); \ 278 + PMEVN_CASE(7, case_macro); \ 279 + PMEVN_CASE(8, case_macro); \ 280 + PMEVN_CASE(9, case_macro); \ 281 + PMEVN_CASE(10, case_macro); \ 282 + PMEVN_CASE(11, case_macro); \ 283 + PMEVN_CASE(12, case_macro); \ 284 + PMEVN_CASE(13, case_macro); \ 285 + PMEVN_CASE(14, case_macro); \ 286 + PMEVN_CASE(15, case_macro); \ 287 + PMEVN_CASE(16, case_macro); \ 288 + PMEVN_CASE(17, case_macro); \ 289 + PMEVN_CASE(18, case_macro); \ 290 + PMEVN_CASE(19, case_macro); \ 291 + PMEVN_CASE(20, case_macro); \ 292 + PMEVN_CASE(21, case_macro); \ 293 + PMEVN_CASE(22, case_macro); \ 294 + PMEVN_CASE(23, case_macro); \ 295 + PMEVN_CASE(24, case_macro); \ 296 + PMEVN_CASE(25, case_macro); \ 297 + PMEVN_CASE(26, case_macro); \ 298 + PMEVN_CASE(27, case_macro); \ 299 + PMEVN_CASE(28, case_macro); \ 300 + PMEVN_CASE(29, case_macro); \ 301 + PMEVN_CASE(30, case_macro); \ 302 + default: \ 303 + WARN(1, "Invalid PMEV* index\n"); \ 304 + assert(0); \ 305 + } \ 306 + } while (0) 307 + 308 + #endif
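tools/include/perf/arm_pmuv3.h mirrors the kernel's PMUv3 register layout for userspace consumers; note the <assert.h> include and the assert(0) backing the WARN() in PMEVN_SWITCH()'s default case. PMEVN_SWITCH() exists because the PMEVCNTR<n>_EL0/PMEVTYPER<n>_EL0 system-register names must be fixed at compile time, so a runtime counter index has to be dispatched through a switch; the vpmu_counter_access test below builds its direct accessors exactly this way. The event-filter bits are OR'ed with an event number when programming an event type register, e.g. (illustrative, mirroring what that test writes):

    /* Count retired instructions, but not while running at EL1. */
    uint64_t evtyper = ARMV8_PMUV3_PERFCTR_INST_RETIRED | ARMV8_PMU_EXCLUDE_EL1;

    /* Recover just the event number from a previously written value. */
    uint64_t event = evtyper & ARMV8_PMU_EVTYPE_EVENT;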
+13 -2
tools/perf/Makefile.perf
··· 443 443 # Create output directory if not already present 444 444 _dummy := $(shell [ -d '$(beauty_ioctl_outdir)' ] || mkdir -p '$(beauty_ioctl_outdir)') 445 445 446 + arm64_gen_sysreg_dir := $(srctree)/tools/arch/arm64/tools 447 + 448 + arm64-sysreg-defs: FORCE 449 + $(Q)$(MAKE) -C $(arm64_gen_sysreg_dir) 450 + 451 + arm64-sysreg-defs-clean: 452 + $(call QUIET_CLEAN,arm64-sysreg-defs) 453 + $(Q)$(MAKE) -C $(arm64_gen_sysreg_dir) clean > /dev/null 454 + 446 455 $(drm_ioctl_array): $(drm_hdr_dir)/drm.h $(drm_hdr_dir)/i915_drm.h $(drm_ioctl_tbl) 447 456 $(Q)$(SHELL) '$(drm_ioctl_tbl)' $(drm_hdr_dir) > $@ 448 457 ··· 725 716 __build-dir = $(subst $(OUTPUT),,$(dir $@)) 726 717 build-dir = $(or $(__build-dir),.) 727 718 728 - prepare: $(OUTPUT)PERF-VERSION-FILE $(OUTPUT)common-cmds.h archheaders $(drm_ioctl_array) \ 719 + prepare: $(OUTPUT)PERF-VERSION-FILE $(OUTPUT)common-cmds.h archheaders \ 720 + arm64-sysreg-defs \ 721 + $(drm_ioctl_array) \ 729 722 $(fadvise_advice_array) \ 730 723 $(fsconfig_arrays) \ 731 724 $(fsmount_arrays) \ ··· 1136 1125 bpf-skel-clean: 1137 1126 $(call QUIET_CLEAN, bpf-skel) $(RM) -r $(SKEL_TMP_OUT) $(SKELETONS) 1138 1127 1139 - clean:: $(LIBAPI)-clean $(LIBBPF)-clean $(LIBSUBCMD)-clean $(LIBSYMBOL)-clean $(LIBPERF)-clean fixdep-clean python-clean bpf-skel-clean tests-coresight-targets-clean 1128 + clean:: $(LIBAPI)-clean $(LIBBPF)-clean $(LIBSUBCMD)-clean $(LIBSYMBOL)-clean $(LIBPERF)-clean arm64-sysreg-defs-clean fixdep-clean python-clean bpf-skel-clean tests-coresight-targets-clean 1140 1129 $(call QUIET_CLEAN, core-objs) $(RM) $(LIBPERF_A) $(OUTPUT)perf-archive $(OUTPUT)perf-iostat $(LANG_BINDINGS) 1141 1130 $(Q)find $(or $(OUTPUT),.) -name '*.o' -delete -o -name '\.*.cmd' -delete -o -name '\.*.d' -delete 1142 1131 $(Q)$(RM) $(OUTPUT).config-detected
+1 -1
tools/perf/util/Build
··· 345 345 CFLAGS_libstring.o += -Wno-unused-parameter -DETC_PERFCONFIG="BUILD_STR($(ETC_PERFCONFIG_SQ))" 346 346 CFLAGS_hweight.o += -Wno-unused-parameter -DETC_PERFCONFIG="BUILD_STR($(ETC_PERFCONFIG_SQ))" 347 347 CFLAGS_header.o += -include $(OUTPUT)PERF-VERSION-FILE 348 - CFLAGS_arm-spe.o += -I$(srctree)/tools/arch/arm64/include/ 348 + CFLAGS_arm-spe.o += -I$(srctree)/tools/arch/arm64/include/ -I$(srctree)/tools/arch/arm64/include/generated/ 349 349 350 350 $(OUTPUT)util/argv_split.o: ../lib/argv_split.c FORCE 351 351 $(call rule_mkdir)
+22 -3
tools/testing/selftests/kvm/Makefile
··· 17 17 ARCH_DIR := $(ARCH) 18 18 endif 19 19 20 + ifeq ($(ARCH),arm64) 21 + arm64_tools_dir := $(top_srcdir)/tools/arch/arm64/tools/ 22 + GEN_HDRS := $(top_srcdir)/tools/arch/arm64/include/generated/ 23 + CFLAGS += -I$(GEN_HDRS) 24 + 25 + $(GEN_HDRS): $(wildcard $(arm64_tools_dir)/*) 26 + $(MAKE) -C $(arm64_tools_dir) 27 + endif 28 + 20 29 LIBKVM += lib/assert.c 21 30 LIBKVM += lib/elf.c 22 31 LIBKVM += lib/guest_modes.c ··· 75 66 TEST_GEN_PROGS_x86_64 += x86_64/get_msr_index_features 76 67 TEST_GEN_PROGS_x86_64 += x86_64/exit_on_emulation_failure_test 77 68 TEST_GEN_PROGS_x86_64 += x86_64/fix_hypercall_test 69 + TEST_GEN_PROGS_x86_64 += x86_64/hwcr_msr_test 78 70 TEST_GEN_PROGS_x86_64 += x86_64/hyperv_clock 79 71 TEST_GEN_PROGS_x86_64 += x86_64/hyperv_cpuid 80 72 TEST_GEN_PROGS_x86_64 += x86_64/hyperv_evmcs ··· 155 145 TEST_GEN_PROGS_aarch64 += aarch64/hypercalls 156 146 TEST_GEN_PROGS_aarch64 += aarch64/page_fault_test 157 147 TEST_GEN_PROGS_aarch64 += aarch64/psci_test 148 + TEST_GEN_PROGS_aarch64 += aarch64/set_id_regs 158 149 TEST_GEN_PROGS_aarch64 += aarch64/smccc_filter 159 150 TEST_GEN_PROGS_aarch64 += aarch64/vcpu_width_config 160 151 TEST_GEN_PROGS_aarch64 += aarch64/vgic_init 161 152 TEST_GEN_PROGS_aarch64 += aarch64/vgic_irq 153 + TEST_GEN_PROGS_aarch64 += aarch64/vpmu_counter_access 162 154 TEST_GEN_PROGS_aarch64 += access_tracking_perf_test 163 155 TEST_GEN_PROGS_aarch64 += demand_paging_test 164 156 TEST_GEN_PROGS_aarch64 += dirty_log_test ··· 268 256 $(SPLIT_TESTS_TARGETS): %: %.o $(SPLIT_TESTS_OBJS) 269 257 $(CC) $(CFLAGS) $(CPPFLAGS) $(LDFLAGS) $(TARGET_ARCH) $^ $(LDLIBS) -o $@ 270 258 271 - EXTRA_CLEAN += $(LIBKVM_OBJS) $(TEST_DEP_FILES) $(TEST_GEN_OBJ) $(SPLIT_TESTS_OBJS) cscope.* 259 + EXTRA_CLEAN += $(GEN_HDRS) \ 260 + $(LIBKVM_OBJS) \ 261 + $(SPLIT_TESTS_OBJS) \ 262 + $(TEST_DEP_FILES) \ 263 + $(TEST_GEN_OBJ) \ 264 + cscope.* 272 265 273 266 x := $(shell mkdir -p $(sort $(dir $(LIBKVM_C_OBJ) $(LIBKVM_S_OBJ)))) 274 - $(LIBKVM_C_OBJ): $(OUTPUT)/%.o: %.c 267 + $(LIBKVM_C_OBJ): $(OUTPUT)/%.o: %.c $(GEN_HDRS) 275 268 $(CC) $(CFLAGS) $(CPPFLAGS) $(TARGET_ARCH) -c $< -o $@ 276 269 277 - $(LIBKVM_S_OBJ): $(OUTPUT)/%.o: %.S 270 + $(LIBKVM_S_OBJ): $(OUTPUT)/%.o: %.S $(GEN_HDRS) 278 271 $(CC) $(CFLAGS) $(CPPFLAGS) $(TARGET_ARCH) -c $< -o $@ 279 272 280 273 # Compile the string overrides as freestanding to prevent the compiler from ··· 289 272 $(CC) $(CFLAGS) $(CPPFLAGS) $(TARGET_ARCH) -c -ffreestanding $< -o $@ 290 273 291 274 x := $(shell mkdir -p $(sort $(dir $(TEST_GEN_PROGS)))) 275 + $(SPLIT_TESTS_OBJS): $(GEN_HDRS) 292 276 $(TEST_GEN_PROGS): $(LIBKVM_OBJS) 293 277 $(TEST_GEN_PROGS_EXTENDED): $(LIBKVM_OBJS) 278 + $(TEST_GEN_OBJ): $(GEN_HDRS) 294 279 295 280 cscope: include_paths = $(LINUX_TOOL_INCLUDE) $(LINUX_HDR_PATH) include lib .. 296 281 cscope:
+2 -2
tools/testing/selftests/kvm/aarch64/aarch32_id_regs.c
··· 146 146 147 147 vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64PFR0_EL1), &val); 148 148 149 - el0 = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL0), val); 150 - return el0 == ID_AA64PFR0_ELx_64BIT_ONLY; 149 + el0 = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_EL0), val); 150 + return el0 == ID_AA64PFR0_EL1_ELx_64BIT_ONLY; 151 151 } 152 152 153 153 int main(void)
+6 -6
tools/testing/selftests/kvm/aarch64/debug-exceptions.c
··· 116 116 117 117 /* Reset all bcr/bvr/wcr/wvr registers */ 118 118 dfr0 = read_sysreg(id_aa64dfr0_el1); 119 - brps = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_BRPS), dfr0); 119 + brps = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_BRPs), dfr0); 120 120 for (i = 0; i <= brps; i++) { 121 121 write_dbgbcr(i, 0); 122 122 write_dbgbvr(i, 0); 123 123 } 124 - wrps = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_WRPS), dfr0); 124 + wrps = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_WRPs), dfr0); 125 125 for (i = 0; i <= wrps; i++) { 126 126 write_dbgwcr(i, 0); 127 127 write_dbgwvr(i, 0); ··· 418 418 419 419 static int debug_version(uint64_t id_aa64dfr0) 420 420 { 421 - return FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_DEBUGVER), id_aa64dfr0); 421 + return FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_DebugVer), id_aa64dfr0); 422 422 } 423 423 424 424 static void test_guest_debug_exceptions(uint8_t bpn, uint8_t wpn, uint8_t ctx_bpn) ··· 539 539 int b, w, c; 540 540 541 541 /* Number of breakpoints */ 542 - brp_num = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_BRPS), aa64dfr0) + 1; 542 + brp_num = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_BRPs), aa64dfr0) + 1; 543 543 __TEST_REQUIRE(brp_num >= 2, "At least two breakpoints are required"); 544 544 545 545 /* Number of watchpoints */ 546 - wrp_num = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_WRPS), aa64dfr0) + 1; 546 + wrp_num = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_WRPs), aa64dfr0) + 1; 547 547 548 548 /* Number of context aware breakpoints */ 549 - ctx_brp_num = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_CTX_CMPS), aa64dfr0) + 1; 549 + ctx_brp_num = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_CTX_CMPs), aa64dfr0) + 1; 550 550 551 551 pr_debug("%s brp_num:%d, wrp_num:%d, ctx_brp_num:%d\n", __func__, 552 552 brp_num, wrp_num, ctx_brp_num);
+7 -4
tools/testing/selftests/kvm/aarch64/page_fault_test.c
··· 96 96 uint64_t isar0 = read_sysreg(id_aa64isar0_el1); 97 97 uint64_t atomic; 98 98 99 - atomic = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64ISAR0_ATOMICS), isar0); 99 + atomic = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64ISAR0_EL1_ATOMIC), isar0); 100 100 return atomic >= 2; 101 101 } 102 102 103 103 static bool guest_check_dc_zva(void) 104 104 { 105 105 uint64_t dczid = read_sysreg(dczid_el0); 106 - uint64_t dzp = FIELD_GET(ARM64_FEATURE_MASK(DCZID_DZP), dczid); 106 + uint64_t dzp = FIELD_GET(ARM64_FEATURE_MASK(DCZID_EL0_DZP), dczid); 107 107 108 108 return dzp == 0; 109 109 } ··· 135 135 uint64_t par; 136 136 137 137 asm volatile("at s1e1r, %0" :: "r" (guest_test_memory)); 138 - par = read_sysreg(par_el1); 139 138 isb(); 139 + par = read_sysreg(par_el1); 140 140 141 141 /* Bit 1 indicates whether the AT was successful */ 142 142 GUEST_ASSERT_EQ(par & 1, 0); ··· 196 196 uint64_t hadbs, tcr; 197 197 198 198 /* Skip if HA is not supported. */ 199 - hadbs = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64MMFR1_HADBS), mmfr1); 199 + hadbs = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64MMFR1_EL1_HAFDBS), mmfr1); 200 200 if (hadbs == 0) 201 201 return false; 202 202 ··· 842 842 .name = SCAT2(ro_memslot_no_syndrome, _access), \ 843 843 .data_memslot_flags = KVM_MEM_READONLY, \ 844 844 .pt_memslot_flags = KVM_MEM_READONLY, \ 845 + .guest_prepare = { _PREPARE(_access) }, \ 845 846 .guest_test = _access, \ 846 847 .fail_vcpu_run_handler = fail_vcpu_run_mmio_no_syndrome_handler, \ 847 848 .expected_events = { .fail_vcpu_runs = 1 }, \ ··· 866 865 .name = SCAT2(ro_memslot_no_syn_and_dlog, _access), \ 867 866 .data_memslot_flags = KVM_MEM_READONLY | KVM_MEM_LOG_DIRTY_PAGES, \ 868 867 .pt_memslot_flags = KVM_MEM_READONLY | KVM_MEM_LOG_DIRTY_PAGES, \ 868 + .guest_prepare = { _PREPARE(_access) }, \ 869 869 .guest_test = _access, \ 870 870 .guest_test_check = { _test_check }, \ 871 871 .fail_vcpu_run_handler = fail_vcpu_run_mmio_no_syndrome_handler, \ ··· 896 894 .data_memslot_flags = KVM_MEM_READONLY, \ 897 895 .pt_memslot_flags = KVM_MEM_READONLY, \ 898 896 .mem_mark_cmd = CMD_HOLE_DATA | CMD_HOLE_PT, \ 897 + .guest_prepare = { _PREPARE(_access) }, \ 899 898 .guest_test = _access, \ 900 899 .uffd_data_handler = _uffd_data_handler, \ 901 900 .uffd_pt_handler = uffd_pt_handler, \
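One fix in page_fault_test.c above is easy to miss: the PAR_EL1 read moves to after the isb(). The result of an AT instruction is only guaranteed to be observable in PAR_EL1 after a subsequent context synchronization event, so reading the register before the barrier could in principle see a stale value. The corrected sequence, with the reasoning spelled out (comments added here, not present in the commit):

    asm volatile("at s1e1r, %0" :: "r" (guest_test_memory));
    isb();                       /* context synchronization: AT result now visible */
    par = read_sysreg(par_el1);

    /* PAR_EL1.F (bit 0) is set on a failed translation; expect success. */
    GUEST_ASSERT_EQ(par & 1, 0);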
+481
tools/testing/selftests/kvm/aarch64/set_id_regs.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + /* 3 + * set_id_regs - Test for setting ID register from usersapce. 4 + * 5 + * Copyright (c) 2023 Google LLC. 6 + * 7 + * 8 + * Test that KVM supports setting ID registers from userspace and handles the 9 + * feature set correctly. 10 + */ 11 + 12 + #include <stdint.h> 13 + #include "kvm_util.h" 14 + #include "processor.h" 15 + #include "test_util.h" 16 + #include <linux/bitfield.h> 17 + 18 + enum ftr_type { 19 + FTR_EXACT, /* Use a predefined safe value */ 20 + FTR_LOWER_SAFE, /* Smaller value is safe */ 21 + FTR_HIGHER_SAFE, /* Bigger value is safe */ 22 + FTR_HIGHER_OR_ZERO_SAFE, /* Bigger value is safe, but 0 is biggest */ 23 + FTR_END, /* Mark the last ftr bits */ 24 + }; 25 + 26 + #define FTR_SIGNED true /* Value should be treated as signed */ 27 + #define FTR_UNSIGNED false /* Value should be treated as unsigned */ 28 + 29 + struct reg_ftr_bits { 30 + char *name; 31 + bool sign; 32 + enum ftr_type type; 33 + uint8_t shift; 34 + uint64_t mask; 35 + int64_t safe_val; 36 + }; 37 + 38 + struct test_feature_reg { 39 + uint32_t reg; 40 + const struct reg_ftr_bits *ftr_bits; 41 + }; 42 + 43 + #define __REG_FTR_BITS(NAME, SIGNED, TYPE, SHIFT, MASK, SAFE_VAL) \ 44 + { \ 45 + .name = #NAME, \ 46 + .sign = SIGNED, \ 47 + .type = TYPE, \ 48 + .shift = SHIFT, \ 49 + .mask = MASK, \ 50 + .safe_val = SAFE_VAL, \ 51 + } 52 + 53 + #define REG_FTR_BITS(type, reg, field, safe_val) \ 54 + __REG_FTR_BITS(reg##_##field, FTR_UNSIGNED, type, reg##_##field##_SHIFT, \ 55 + reg##_##field##_MASK, safe_val) 56 + 57 + #define S_REG_FTR_BITS(type, reg, field, safe_val) \ 58 + __REG_FTR_BITS(reg##_##field, FTR_SIGNED, type, reg##_##field##_SHIFT, \ 59 + reg##_##field##_MASK, safe_val) 60 + 61 + #define REG_FTR_END \ 62 + { \ 63 + .type = FTR_END, \ 64 + } 65 + 66 + static const struct reg_ftr_bits ftr_id_aa64dfr0_el1[] = { 67 + S_REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64DFR0_EL1, PMUVer, 0), 68 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64DFR0_EL1, DebugVer, 0), 69 + REG_FTR_END, 70 + }; 71 + 72 + static const struct reg_ftr_bits ftr_id_dfr0_el1[] = { 73 + S_REG_FTR_BITS(FTR_LOWER_SAFE, ID_DFR0_EL1, PerfMon, 0), 74 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_DFR0_EL1, CopDbg, 0), 75 + REG_FTR_END, 76 + }; 77 + 78 + static const struct reg_ftr_bits ftr_id_aa64isar0_el1[] = { 79 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR0_EL1, RNDR, 0), 80 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR0_EL1, TLB, 0), 81 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR0_EL1, TS, 0), 82 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR0_EL1, FHM, 0), 83 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR0_EL1, DP, 0), 84 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR0_EL1, SM4, 0), 85 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR0_EL1, SM3, 0), 86 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR0_EL1, SHA3, 0), 87 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR0_EL1, RDM, 0), 88 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR0_EL1, TME, 0), 89 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR0_EL1, ATOMIC, 0), 90 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR0_EL1, CRC32, 0), 91 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR0_EL1, SHA2, 0), 92 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR0_EL1, SHA1, 0), 93 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR0_EL1, AES, 0), 94 + REG_FTR_END, 95 + }; 96 + 97 + static const struct reg_ftr_bits ftr_id_aa64isar1_el1[] = { 98 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR1_EL1, LS64, 0), 99 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR1_EL1, XS, 0), 100 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR1_EL1, I8MM, 0), 101 + 
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR1_EL1, DGH, 0), 102 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR1_EL1, BF16, 0), 103 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR1_EL1, SPECRES, 0), 104 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR1_EL1, SB, 0), 105 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR1_EL1, FRINTTS, 0), 106 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR1_EL1, LRCPC, 0), 107 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR1_EL1, FCMA, 0), 108 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR1_EL1, JSCVT, 0), 109 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR1_EL1, DPB, 0), 110 + REG_FTR_END, 111 + }; 112 + 113 + static const struct reg_ftr_bits ftr_id_aa64isar2_el1[] = { 114 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR2_EL1, BC, 0), 115 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR2_EL1, RPRES, 0), 116 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR2_EL1, WFxT, 0), 117 + REG_FTR_END, 118 + }; 119 + 120 + static const struct reg_ftr_bits ftr_id_aa64pfr0_el1[] = { 121 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR0_EL1, CSV3, 0), 122 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR0_EL1, CSV2, 0), 123 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR0_EL1, DIT, 0), 124 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR0_EL1, SEL2, 0), 125 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR0_EL1, EL3, 0), 126 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR0_EL1, EL2, 0), 127 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR0_EL1, EL1, 0), 128 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR0_EL1, EL0, 0), 129 + REG_FTR_END, 130 + }; 131 + 132 + static const struct reg_ftr_bits ftr_id_aa64mmfr0_el1[] = { 133 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR0_EL1, ECV, 0), 134 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR0_EL1, EXS, 0), 135 + S_REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR0_EL1, TGRAN4, 0), 136 + S_REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR0_EL1, TGRAN64, 0), 137 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR0_EL1, TGRAN16, 0), 138 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR0_EL1, BIGENDEL0, 0), 139 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR0_EL1, SNSMEM, 0), 140 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR0_EL1, BIGEND, 0), 141 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR0_EL1, ASIDBITS, 0), 142 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR0_EL1, PARANGE, 0), 143 + REG_FTR_END, 144 + }; 145 + 146 + static const struct reg_ftr_bits ftr_id_aa64mmfr1_el1[] = { 147 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR1_EL1, TIDCP1, 0), 148 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR1_EL1, AFP, 0), 149 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR1_EL1, ETS, 0), 150 + REG_FTR_BITS(FTR_HIGHER_SAFE, ID_AA64MMFR1_EL1, SpecSEI, 0), 151 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR1_EL1, PAN, 0), 152 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR1_EL1, LO, 0), 153 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR1_EL1, HPDS, 0), 154 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR1_EL1, HAFDBS, 0), 155 + REG_FTR_END, 156 + }; 157 + 158 + static const struct reg_ftr_bits ftr_id_aa64mmfr2_el1[] = { 159 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR2_EL1, E0PD, 0), 160 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR2_EL1, BBM, 0), 161 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR2_EL1, TTL, 0), 162 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR2_EL1, AT, 0), 163 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR2_EL1, ST, 0), 164 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR2_EL1, VARange, 0), 165 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR2_EL1, IESB, 0), 166 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR2_EL1, LSM, 0), 167 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR2_EL1, UAO, 0), 168 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64MMFR2_EL1, CnP, 0), 169 + REG_FTR_END, 170 + 
}; 171 + 172 + static const struct reg_ftr_bits ftr_id_aa64zfr0_el1[] = { 173 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ZFR0_EL1, F64MM, 0), 174 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ZFR0_EL1, F32MM, 0), 175 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ZFR0_EL1, I8MM, 0), 176 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ZFR0_EL1, SM4, 0), 177 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ZFR0_EL1, SHA3, 0), 178 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ZFR0_EL1, BF16, 0), 179 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ZFR0_EL1, BitPerm, 0), 180 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ZFR0_EL1, AES, 0), 181 + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ZFR0_EL1, SVEver, 0), 182 + REG_FTR_END, 183 + }; 184 + 185 + #define TEST_REG(id, table) \ 186 + { \ 187 + .reg = id, \ 188 + .ftr_bits = &((table)[0]), \ 189 + } 190 + 191 + static struct test_feature_reg test_regs[] = { 192 + TEST_REG(SYS_ID_AA64DFR0_EL1, ftr_id_aa64dfr0_el1), 193 + TEST_REG(SYS_ID_DFR0_EL1, ftr_id_dfr0_el1), 194 + TEST_REG(SYS_ID_AA64ISAR0_EL1, ftr_id_aa64isar0_el1), 195 + TEST_REG(SYS_ID_AA64ISAR1_EL1, ftr_id_aa64isar1_el1), 196 + TEST_REG(SYS_ID_AA64ISAR2_EL1, ftr_id_aa64isar2_el1), 197 + TEST_REG(SYS_ID_AA64PFR0_EL1, ftr_id_aa64pfr0_el1), 198 + TEST_REG(SYS_ID_AA64MMFR0_EL1, ftr_id_aa64mmfr0_el1), 199 + TEST_REG(SYS_ID_AA64MMFR1_EL1, ftr_id_aa64mmfr1_el1), 200 + TEST_REG(SYS_ID_AA64MMFR2_EL1, ftr_id_aa64mmfr2_el1), 201 + TEST_REG(SYS_ID_AA64ZFR0_EL1, ftr_id_aa64zfr0_el1), 202 + }; 203 + 204 + #define GUEST_REG_SYNC(id) GUEST_SYNC_ARGS(0, id, read_sysreg_s(id), 0, 0); 205 + 206 + static void guest_code(void) 207 + { 208 + GUEST_REG_SYNC(SYS_ID_AA64DFR0_EL1); 209 + GUEST_REG_SYNC(SYS_ID_DFR0_EL1); 210 + GUEST_REG_SYNC(SYS_ID_AA64ISAR0_EL1); 211 + GUEST_REG_SYNC(SYS_ID_AA64ISAR1_EL1); 212 + GUEST_REG_SYNC(SYS_ID_AA64ISAR2_EL1); 213 + GUEST_REG_SYNC(SYS_ID_AA64PFR0_EL1); 214 + GUEST_REG_SYNC(SYS_ID_AA64MMFR0_EL1); 215 + GUEST_REG_SYNC(SYS_ID_AA64MMFR1_EL1); 216 + GUEST_REG_SYNC(SYS_ID_AA64MMFR2_EL1); 217 + GUEST_REG_SYNC(SYS_ID_AA64ZFR0_EL1); 218 + 219 + GUEST_DONE(); 220 + } 221 + 222 + /* Return a safe value to a given ftr_bits an ftr value */ 223 + uint64_t get_safe_value(const struct reg_ftr_bits *ftr_bits, uint64_t ftr) 224 + { 225 + uint64_t ftr_max = GENMASK_ULL(ARM64_FEATURE_FIELD_BITS - 1, 0); 226 + 227 + if (ftr_bits->type == FTR_UNSIGNED) { 228 + switch (ftr_bits->type) { 229 + case FTR_EXACT: 230 + ftr = ftr_bits->safe_val; 231 + break; 232 + case FTR_LOWER_SAFE: 233 + if (ftr > 0) 234 + ftr--; 235 + break; 236 + case FTR_HIGHER_SAFE: 237 + if (ftr < ftr_max) 238 + ftr++; 239 + break; 240 + case FTR_HIGHER_OR_ZERO_SAFE: 241 + if (ftr == ftr_max) 242 + ftr = 0; 243 + else if (ftr != 0) 244 + ftr++; 245 + break; 246 + default: 247 + break; 248 + } 249 + } else if (ftr != ftr_max) { 250 + switch (ftr_bits->type) { 251 + case FTR_EXACT: 252 + ftr = ftr_bits->safe_val; 253 + break; 254 + case FTR_LOWER_SAFE: 255 + if (ftr > 0) 256 + ftr--; 257 + break; 258 + case FTR_HIGHER_SAFE: 259 + if (ftr < ftr_max - 1) 260 + ftr++; 261 + break; 262 + case FTR_HIGHER_OR_ZERO_SAFE: 263 + if (ftr != 0 && ftr != ftr_max - 1) 264 + ftr++; 265 + break; 266 + default: 267 + break; 268 + } 269 + } 270 + 271 + return ftr; 272 + } 273 + 274 + /* Return an invalid value to a given ftr_bits an ftr value */ 275 + uint64_t get_invalid_value(const struct reg_ftr_bits *ftr_bits, uint64_t ftr) 276 + { 277 + uint64_t ftr_max = GENMASK_ULL(ARM64_FEATURE_FIELD_BITS - 1, 0); 278 + 279 + if (ftr_bits->type == FTR_UNSIGNED) { 280 + switch (ftr_bits->type) { 281 + case FTR_EXACT: 282 + ftr = 
max((uint64_t)ftr_bits->safe_val + 1, ftr + 1); 283 + break; 284 + case FTR_LOWER_SAFE: 285 + ftr++; 286 + break; 287 + case FTR_HIGHER_SAFE: 288 + ftr--; 289 + break; 290 + case FTR_HIGHER_OR_ZERO_SAFE: 291 + if (ftr == 0) 292 + ftr = ftr_max; 293 + else 294 + ftr--; 295 + break; 296 + default: 297 + break; 298 + } 299 + } else if (ftr != ftr_max) { 300 + switch (ftr_bits->type) { 301 + case FTR_EXACT: 302 + ftr = max((uint64_t)ftr_bits->safe_val + 1, ftr + 1); 303 + break; 304 + case FTR_LOWER_SAFE: 305 + ftr++; 306 + break; 307 + case FTR_HIGHER_SAFE: 308 + ftr--; 309 + break; 310 + case FTR_HIGHER_OR_ZERO_SAFE: 311 + if (ftr == 0) 312 + ftr = ftr_max - 1; 313 + else 314 + ftr--; 315 + break; 316 + default: 317 + break; 318 + } 319 + } else { 320 + ftr = 0; 321 + } 322 + 323 + return ftr; 324 + } 325 + 326 + static void test_reg_set_success(struct kvm_vcpu *vcpu, uint64_t reg, 327 + const struct reg_ftr_bits *ftr_bits) 328 + { 329 + uint8_t shift = ftr_bits->shift; 330 + uint64_t mask = ftr_bits->mask; 331 + uint64_t val, new_val, ftr; 332 + 333 + vcpu_get_reg(vcpu, reg, &val); 334 + ftr = (val & mask) >> shift; 335 + 336 + ftr = get_safe_value(ftr_bits, ftr); 337 + 338 + ftr <<= shift; 339 + val &= ~mask; 340 + val |= ftr; 341 + 342 + vcpu_set_reg(vcpu, reg, val); 343 + vcpu_get_reg(vcpu, reg, &new_val); 344 + TEST_ASSERT_EQ(new_val, val); 345 + } 346 + 347 + static void test_reg_set_fail(struct kvm_vcpu *vcpu, uint64_t reg, 348 + const struct reg_ftr_bits *ftr_bits) 349 + { 350 + uint8_t shift = ftr_bits->shift; 351 + uint64_t mask = ftr_bits->mask; 352 + uint64_t val, old_val, ftr; 353 + int r; 354 + 355 + vcpu_get_reg(vcpu, reg, &val); 356 + ftr = (val & mask) >> shift; 357 + 358 + ftr = get_invalid_value(ftr_bits, ftr); 359 + 360 + old_val = val; 361 + ftr <<= shift; 362 + val &= ~mask; 363 + val |= ftr; 364 + 365 + r = __vcpu_set_reg(vcpu, reg, val); 366 + TEST_ASSERT(r < 0 && errno == EINVAL, 367 + "Unexpected KVM_SET_ONE_REG error: r=%d, errno=%d", r, errno); 368 + 369 + vcpu_get_reg(vcpu, reg, &val); 370 + TEST_ASSERT_EQ(val, old_val); 371 + } 372 + 373 + static void test_user_set_reg(struct kvm_vcpu *vcpu, bool aarch64_only) 374 + { 375 + uint64_t masks[KVM_ARM_FEATURE_ID_RANGE_SIZE]; 376 + struct reg_mask_range range = { 377 + .addr = (__u64)masks, 378 + }; 379 + int ret; 380 + 381 + /* KVM should return error when reserved field is not zero */ 382 + range.reserved[0] = 1; 383 + ret = __vm_ioctl(vcpu->vm, KVM_ARM_GET_REG_WRITABLE_MASKS, &range); 384 + TEST_ASSERT(ret, "KVM doesn't check invalid parameters."); 385 + 386 + /* Get writable masks for feature ID registers */ 387 + memset(range.reserved, 0, sizeof(range.reserved)); 388 + vm_ioctl(vcpu->vm, KVM_ARM_GET_REG_WRITABLE_MASKS, &range); 389 + 390 + for (int i = 0; i < ARRAY_SIZE(test_regs); i++) { 391 + const struct reg_ftr_bits *ftr_bits = test_regs[i].ftr_bits; 392 + uint32_t reg_id = test_regs[i].reg; 393 + uint64_t reg = KVM_ARM64_SYS_REG(reg_id); 394 + int idx; 395 + 396 + /* Get the index to masks array for the idreg */ 397 + idx = KVM_ARM_FEATURE_ID_RANGE_IDX(sys_reg_Op0(reg_id), sys_reg_Op1(reg_id), 398 + sys_reg_CRn(reg_id), sys_reg_CRm(reg_id), 399 + sys_reg_Op2(reg_id)); 400 + 401 + for (int j = 0; ftr_bits[j].type != FTR_END; j++) { 402 + /* Skip aarch32 reg on aarch64 only system, since they are RAZ/WI. 
*/ 403 + if (aarch64_only && sys_reg_CRm(reg_id) < 4) { 404 + ksft_test_result_skip("%s on AARCH64 only system\n", 405 + ftr_bits[j].name); 406 + continue; 407 + } 408 + 409 + /* Make sure the feature field is writable */ 410 + TEST_ASSERT_EQ(masks[idx] & ftr_bits[j].mask, ftr_bits[j].mask); 411 + 412 + test_reg_set_fail(vcpu, reg, &ftr_bits[j]); 413 + test_reg_set_success(vcpu, reg, &ftr_bits[j]); 414 + 415 + ksft_test_result_pass("%s\n", ftr_bits[j].name); 416 + } 417 + } 418 + } 419 + 420 + static void test_guest_reg_read(struct kvm_vcpu *vcpu) 421 + { 422 + bool done = false; 423 + struct ucall uc; 424 + uint64_t val; 425 + 426 + while (!done) { 427 + vcpu_run(vcpu); 428 + 429 + switch (get_ucall(vcpu, &uc)) { 430 + case UCALL_ABORT: 431 + REPORT_GUEST_ASSERT(uc); 432 + break; 433 + case UCALL_SYNC: 434 + /* Make sure the written values are seen by guest */ 435 + vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(uc.args[2]), &val); 436 + TEST_ASSERT_EQ(val, uc.args[3]); 437 + break; 438 + case UCALL_DONE: 439 + done = true; 440 + break; 441 + default: 442 + TEST_FAIL("Unexpected ucall: %lu", uc.cmd); 443 + } 444 + } 445 + } 446 + 447 + int main(void) 448 + { 449 + struct kvm_vcpu *vcpu; 450 + struct kvm_vm *vm; 451 + bool aarch64_only; 452 + uint64_t val, el0; 453 + int ftr_cnt; 454 + 455 + TEST_REQUIRE(kvm_has_cap(KVM_CAP_ARM_SUPPORTED_REG_MASK_RANGES)); 456 + 457 + vm = vm_create_with_one_vcpu(&vcpu, guest_code); 458 + 459 + /* Check for AARCH64 only system */ 460 + vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64PFR0_EL1), &val); 461 + el0 = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_EL0), val); 462 + aarch64_only = (el0 == ID_AA64PFR0_EL1_ELx_64BIT_ONLY); 463 + 464 + ksft_print_header(); 465 + 466 + ftr_cnt = ARRAY_SIZE(ftr_id_aa64dfr0_el1) + ARRAY_SIZE(ftr_id_dfr0_el1) + 467 + ARRAY_SIZE(ftr_id_aa64isar0_el1) + ARRAY_SIZE(ftr_id_aa64isar1_el1) + 468 + ARRAY_SIZE(ftr_id_aa64isar2_el1) + ARRAY_SIZE(ftr_id_aa64pfr0_el1) + 469 + ARRAY_SIZE(ftr_id_aa64mmfr0_el1) + ARRAY_SIZE(ftr_id_aa64mmfr1_el1) + 470 + ARRAY_SIZE(ftr_id_aa64mmfr2_el1) + ARRAY_SIZE(ftr_id_aa64zfr0_el1) - 471 + ARRAY_SIZE(test_regs); 472 + 473 + ksft_set_plan(ftr_cnt); 474 + 475 + test_user_set_reg(vcpu, aarch64_only); 476 + test_guest_reg_read(vcpu); 477 + 478 + kvm_vm_free(vm); 479 + 480 + ksft_finished(); 481 + }
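set_id_regs.c exercises the new writable ID register interface end to end: KVM_ARM_GET_REG_WRITABLE_MASKS reports, per feature ID register, which fields userspace may override, and KVM_SET_ONE_REG then accepts a narrowing of each field (for FTR_LOWER_SAFE fields, roughly anything at or below the value KVM exposes by default) while rejecting a more permissive value with EINVAL. Condensed into a hypothetical call sequence using the selftest helpers seen above (error handling omitted; the write has to land before the vCPU first runs):

    uint64_t val;

    /* Read the vCPU's default view of ID_AA64ISAR0_EL1 ... */
    vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64ISAR0_EL1), &val);

    /* ... hide the AES instructions from the guest (a lower, safe value) ... */
    val &= ~ID_AA64ISAR0_EL1_AES_MASK;

    /* ... and install the narrowed value. */
    vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64ISAR0_EL1), val);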
+670
tools/testing/selftests/kvm/aarch64/vpmu_counter_access.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + /* 3 + * vpmu_counter_access - Test vPMU event counter access 4 + * 5 + * Copyright (c) 2023 Google LLC. 6 + * 7 + * This test checks if the guest can see the same number of the PMU event 8 + * counters (PMCR_EL0.N) that userspace sets, if the guest can access 9 + * those counters, and if the guest is prevented from accessing any 10 + * other counters. 11 + * It also checks if the userspace accesses to the PMU regsisters honor the 12 + * PMCR.N value that's set for the guest. 13 + * This test runs only when KVM_CAP_ARM_PMU_V3 is supported on the host. 14 + */ 15 + #include <kvm_util.h> 16 + #include <processor.h> 17 + #include <test_util.h> 18 + #include <vgic.h> 19 + #include <perf/arm_pmuv3.h> 20 + #include <linux/bitfield.h> 21 + 22 + /* The max number of the PMU event counters (excluding the cycle counter) */ 23 + #define ARMV8_PMU_MAX_GENERAL_COUNTERS (ARMV8_PMU_MAX_COUNTERS - 1) 24 + 25 + /* The cycle counter bit position that's common among the PMU registers */ 26 + #define ARMV8_PMU_CYCLE_IDX 31 27 + 28 + struct vpmu_vm { 29 + struct kvm_vm *vm; 30 + struct kvm_vcpu *vcpu; 31 + int gic_fd; 32 + }; 33 + 34 + static struct vpmu_vm vpmu_vm; 35 + 36 + struct pmreg_sets { 37 + uint64_t set_reg_id; 38 + uint64_t clr_reg_id; 39 + }; 40 + 41 + #define PMREG_SET(set, clr) {.set_reg_id = set, .clr_reg_id = clr} 42 + 43 + static uint64_t get_pmcr_n(uint64_t pmcr) 44 + { 45 + return (pmcr >> ARMV8_PMU_PMCR_N_SHIFT) & ARMV8_PMU_PMCR_N_MASK; 46 + } 47 + 48 + static void set_pmcr_n(uint64_t *pmcr, uint64_t pmcr_n) 49 + { 50 + *pmcr = *pmcr & ~(ARMV8_PMU_PMCR_N_MASK << ARMV8_PMU_PMCR_N_SHIFT); 51 + *pmcr |= (pmcr_n << ARMV8_PMU_PMCR_N_SHIFT); 52 + } 53 + 54 + static uint64_t get_counters_mask(uint64_t n) 55 + { 56 + uint64_t mask = BIT(ARMV8_PMU_CYCLE_IDX); 57 + 58 + if (n) 59 + mask |= GENMASK(n - 1, 0); 60 + return mask; 61 + } 62 + 63 + /* Read PMEVTCNTR<n>_EL0 through PMXEVCNTR_EL0 */ 64 + static inline unsigned long read_sel_evcntr(int sel) 65 + { 66 + write_sysreg(sel, pmselr_el0); 67 + isb(); 68 + return read_sysreg(pmxevcntr_el0); 69 + } 70 + 71 + /* Write PMEVTCNTR<n>_EL0 through PMXEVCNTR_EL0 */ 72 + static inline void write_sel_evcntr(int sel, unsigned long val) 73 + { 74 + write_sysreg(sel, pmselr_el0); 75 + isb(); 76 + write_sysreg(val, pmxevcntr_el0); 77 + isb(); 78 + } 79 + 80 + /* Read PMEVTYPER<n>_EL0 through PMXEVTYPER_EL0 */ 81 + static inline unsigned long read_sel_evtyper(int sel) 82 + { 83 + write_sysreg(sel, pmselr_el0); 84 + isb(); 85 + return read_sysreg(pmxevtyper_el0); 86 + } 87 + 88 + /* Write PMEVTYPER<n>_EL0 through PMXEVTYPER_EL0 */ 89 + static inline void write_sel_evtyper(int sel, unsigned long val) 90 + { 91 + write_sysreg(sel, pmselr_el0); 92 + isb(); 93 + write_sysreg(val, pmxevtyper_el0); 94 + isb(); 95 + } 96 + 97 + static inline void enable_counter(int idx) 98 + { 99 + uint64_t v = read_sysreg(pmcntenset_el0); 100 + 101 + write_sysreg(BIT(idx) | v, pmcntenset_el0); 102 + isb(); 103 + } 104 + 105 + static inline void disable_counter(int idx) 106 + { 107 + uint64_t v = read_sysreg(pmcntenset_el0); 108 + 109 + write_sysreg(BIT(idx) | v, pmcntenclr_el0); 110 + isb(); 111 + } 112 + 113 + static void pmu_disable_reset(void) 114 + { 115 + uint64_t pmcr = read_sysreg(pmcr_el0); 116 + 117 + /* Reset all counters, disabling them */ 118 + pmcr &= ~ARMV8_PMU_PMCR_E; 119 + write_sysreg(pmcr | ARMV8_PMU_PMCR_P, pmcr_el0); 120 + isb(); 121 + } 122 + 123 + #define RETURN_READ_PMEVCNTRN(n) \ 124 + return 
read_sysreg(pmevcntr##n##_el0) 125 + static unsigned long read_pmevcntrn(int n) 126 + { 127 + PMEVN_SWITCH(n, RETURN_READ_PMEVCNTRN); 128 + return 0; 129 + } 130 + 131 + #define WRITE_PMEVCNTRN(n) \ 132 + write_sysreg(val, pmevcntr##n##_el0) 133 + static void write_pmevcntrn(int n, unsigned long val) 134 + { 135 + PMEVN_SWITCH(n, WRITE_PMEVCNTRN); 136 + isb(); 137 + } 138 + 139 + #define READ_PMEVTYPERN(n) \ 140 + return read_sysreg(pmevtyper##n##_el0) 141 + static unsigned long read_pmevtypern(int n) 142 + { 143 + PMEVN_SWITCH(n, READ_PMEVTYPERN); 144 + return 0; 145 + } 146 + 147 + #define WRITE_PMEVTYPERN(n) \ 148 + write_sysreg(val, pmevtyper##n##_el0) 149 + static void write_pmevtypern(int n, unsigned long val) 150 + { 151 + PMEVN_SWITCH(n, WRITE_PMEVTYPERN); 152 + isb(); 153 + } 154 + 155 + /* 156 + * The pmc_accessor structure has pointers to PMEV{CNTR,TYPER}<n>_EL0 157 + * accessors that test cases will use. Each of the accessors will 158 + * either directly reads/writes PMEV{CNTR,TYPER}<n>_EL0 159 + * (i.e. {read,write}_pmev{cnt,type}rn()), or reads/writes them through 160 + * PMXEV{CNTR,TYPER}_EL0 (i.e. {read,write}_sel_ev{cnt,type}r()). 161 + * 162 + * This is used to test that combinations of those accessors provide 163 + * the consistent behavior. 164 + */ 165 + struct pmc_accessor { 166 + /* A function to be used to read PMEVTCNTR<n>_EL0 */ 167 + unsigned long (*read_cntr)(int idx); 168 + /* A function to be used to write PMEVTCNTR<n>_EL0 */ 169 + void (*write_cntr)(int idx, unsigned long val); 170 + /* A function to be used to read PMEVTYPER<n>_EL0 */ 171 + unsigned long (*read_typer)(int idx); 172 + /* A function to be used to write PMEVTYPER<n>_EL0 */ 173 + void (*write_typer)(int idx, unsigned long val); 174 + }; 175 + 176 + struct pmc_accessor pmc_accessors[] = { 177 + /* test with all direct accesses */ 178 + { read_pmevcntrn, write_pmevcntrn, read_pmevtypern, write_pmevtypern }, 179 + /* test with all indirect accesses */ 180 + { read_sel_evcntr, write_sel_evcntr, read_sel_evtyper, write_sel_evtyper }, 181 + /* read with direct accesses, and write with indirect accesses */ 182 + { read_pmevcntrn, write_sel_evcntr, read_pmevtypern, write_sel_evtyper }, 183 + /* read with indirect accesses, and write with direct accesses */ 184 + { read_sel_evcntr, write_pmevcntrn, read_sel_evtyper, write_pmevtypern }, 185 + }; 186 + 187 + /* 188 + * Convert a pointer of pmc_accessor to an index in pmc_accessors[], 189 + * assuming that the pointer is one of the entries in pmc_accessors[]. 190 + */ 191 + #define PMC_ACC_TO_IDX(acc) (acc - &pmc_accessors[0]) 192 + 193 + #define GUEST_ASSERT_BITMAP_REG(regname, mask, set_expected) \ 194 + { \ 195 + uint64_t _tval = read_sysreg(regname); \ 196 + \ 197 + if (set_expected) \ 198 + __GUEST_ASSERT((_tval & mask), \ 199 + "tval: 0x%lx; mask: 0x%lx; set_expected: 0x%lx", \ 200 + _tval, mask, set_expected); \ 201 + else \ 202 + __GUEST_ASSERT(!(_tval & mask), \ 203 + "tval: 0x%lx; mask: 0x%lx; set_expected: 0x%lx", \ 204 + _tval, mask, set_expected); \ 205 + } 206 + 207 + /* 208 + * Check if @mask bits in {PMCNTEN,PMINTEN,PMOVS}{SET,CLR} registers 209 + * are set or cleared as specified in @set_expected. 
210 + */ 211 + static void check_bitmap_pmu_regs(uint64_t mask, bool set_expected) 212 + { 213 + GUEST_ASSERT_BITMAP_REG(pmcntenset_el0, mask, set_expected); 214 + GUEST_ASSERT_BITMAP_REG(pmcntenclr_el0, mask, set_expected); 215 + GUEST_ASSERT_BITMAP_REG(pmintenset_el1, mask, set_expected); 216 + GUEST_ASSERT_BITMAP_REG(pmintenclr_el1, mask, set_expected); 217 + GUEST_ASSERT_BITMAP_REG(pmovsset_el0, mask, set_expected); 218 + GUEST_ASSERT_BITMAP_REG(pmovsclr_el0, mask, set_expected); 219 + } 220 + 221 + /* 222 + * Check if the bit in {PMCNTEN,PMINTEN,PMOVS}{SET,CLR} registers corresponding 223 + * to the specified counter (@pmc_idx) can be read/written as expected. 224 + * When @set_op is true, it tries to set the bit for the counter in 225 + * those registers by writing the SET registers (the bit won't be set 226 + * if the counter is not implemented though). 227 + * Otherwise, it tries to clear the bits in the registers by writing 228 + * the CLR registers. 229 + * Then, it checks if the values indicated in the registers are as expected. 230 + */ 231 + static void test_bitmap_pmu_regs(int pmc_idx, bool set_op) 232 + { 233 + uint64_t pmcr_n, test_bit = BIT(pmc_idx); 234 + bool set_expected = false; 235 + 236 + if (set_op) { 237 + write_sysreg(test_bit, pmcntenset_el0); 238 + write_sysreg(test_bit, pmintenset_el1); 239 + write_sysreg(test_bit, pmovsset_el0); 240 + 241 + /* The bit will be set only if the counter is implemented */ 242 + pmcr_n = get_pmcr_n(read_sysreg(pmcr_el0)); 243 + set_expected = (pmc_idx < pmcr_n) ? true : false; 244 + } else { 245 + write_sysreg(test_bit, pmcntenclr_el0); 246 + write_sysreg(test_bit, pmintenclr_el1); 247 + write_sysreg(test_bit, pmovsclr_el0); 248 + } 249 + check_bitmap_pmu_regs(test_bit, set_expected); 250 + } 251 + 252 + /* 253 + * Tests for reading/writing registers for the (implemented) event counter 254 + * specified by @pmc_idx. 255 + */ 256 + static void test_access_pmc_regs(struct pmc_accessor *acc, int pmc_idx) 257 + { 258 + uint64_t write_data, read_data; 259 + 260 + /* Disable all PMCs and reset all PMCs to zero. */ 261 + pmu_disable_reset(); 262 + 263 + /* 264 + * Tests for reading/writing {PMCNTEN,PMINTEN,PMOVS}{SET,CLR}_EL1. 265 + */ 266 + 267 + /* Make sure that the bit in those registers are set to 0 */ 268 + test_bitmap_pmu_regs(pmc_idx, false); 269 + /* Test if setting the bit in those registers works */ 270 + test_bitmap_pmu_regs(pmc_idx, true); 271 + /* Test if clearing the bit in those registers works */ 272 + test_bitmap_pmu_regs(pmc_idx, false); 273 + 274 + /* 275 + * Tests for reading/writing the event type register. 276 + */ 277 + 278 + /* 279 + * Set the event type register to an arbitrary value just for testing 280 + * of reading/writing the register. 281 + * Arm ARM says that for the event from 0x0000 to 0x003F, 282 + * the value indicated in the PMEVTYPER<n>_EL0.evtCount field is 283 + * the value written to the field even when the specified event 284 + * is not supported. 285 + */ 286 + write_data = (ARMV8_PMU_EXCLUDE_EL1 | ARMV8_PMUV3_PERFCTR_INST_RETIRED); 287 + acc->write_typer(pmc_idx, write_data); 288 + read_data = acc->read_typer(pmc_idx); 289 + __GUEST_ASSERT(read_data == write_data, 290 + "pmc_idx: 0x%lx; acc_idx: 0x%lx; read_data: 0x%lx; write_data: 0x%lx", 291 + pmc_idx, PMC_ACC_TO_IDX(acc), read_data, write_data); 292 + 293 + /* 294 + * Tests for reading/writing the event count register. 
295 +  */
296 + 
297 +     read_data = acc->read_cntr(pmc_idx);
298 + 
299 +     /* The count value must be 0, as it is disabled and reset */
300 +     __GUEST_ASSERT(read_data == 0,
301 +             "pmc_idx: 0x%lx; acc_idx: 0x%lx; read_data: 0x%lx",
302 +             pmc_idx, PMC_ACC_TO_IDX(acc), read_data);
303 + 
304 +     write_data = read_data + pmc_idx + 0x12345;
305 +     acc->write_cntr(pmc_idx, write_data);
306 +     read_data = acc->read_cntr(pmc_idx);
307 +     __GUEST_ASSERT(read_data == write_data,
308 +             "pmc_idx: 0x%lx; acc_idx: 0x%lx; read_data: 0x%lx; write_data: 0x%lx",
309 +             pmc_idx, PMC_ACC_TO_IDX(acc), read_data, write_data);
310 + }
311 + 
312 + #define INVALID_EC (-1ul)
313 + uint64_t expected_ec = INVALID_EC;
314 + 
315 + static void guest_sync_handler(struct ex_regs *regs)
316 + {
317 +     uint64_t esr, ec;
318 + 
319 +     esr = read_sysreg(esr_el1);
320 +     ec = (esr >> ESR_EC_SHIFT) & ESR_EC_MASK;
321 + 
322 +     __GUEST_ASSERT(expected_ec == ec,
323 +             "PC: 0x%lx; ESR: 0x%lx; EC: 0x%lx; EC expected: 0x%lx",
324 +             regs->pc, esr, ec, expected_ec);
325 + 
326 +     /* skip the trapping instruction */
327 +     regs->pc += 4;
328 + 
329 +     /* Reset expected_ec to indicate that the expected exception was taken */
330 +     expected_ec = INVALID_EC;
331 + }
332 + 
333 + /*
334 +  * Run the given operation that should trigger an exception with the
335 +  * given exception class. The exception handler (guest_sync_handler)
336 +  * will reset expected_ec to INVALID_EC and skip the instruction that
337 +  * trapped.
338 +  */
339 + #define TEST_EXCEPTION(ec, ops)                 \
340 + ({                                              \
341 +     GUEST_ASSERT(ec != INVALID_EC);             \
342 +     WRITE_ONCE(expected_ec, ec);                \
343 +     dsb(ish);                                   \
344 +     ops;                                        \
345 +     GUEST_ASSERT(expected_ec == INVALID_EC);    \
346 + })
347 + 
348 + /*
349 +  * Tests for reading/writing registers for the unimplemented event counter
350 +  * specified by @pmc_idx (>= PMCR_EL0.N).
351 +  */
352 + static void test_access_invalid_pmc_regs(struct pmc_accessor *acc, int pmc_idx)
353 + {
354 +     /*
355 +      * Reading/writing the event count/type registers should cause
356 +      * an UNDEFINED exception.
357 +      */
358 +     TEST_EXCEPTION(ESR_EC_UNKNOWN, acc->read_cntr(pmc_idx));
359 +     TEST_EXCEPTION(ESR_EC_UNKNOWN, acc->write_cntr(pmc_idx, 0));
360 +     TEST_EXCEPTION(ESR_EC_UNKNOWN, acc->read_typer(pmc_idx));
361 +     TEST_EXCEPTION(ESR_EC_UNKNOWN, acc->write_typer(pmc_idx, 0));
362 +     /*
363 +      * The bit corresponding to the (unimplemented) counter in
364 +      * {PMCNTEN,PMINTEN,PMOVS}{SET,CLR} registers should be RAZ.
365 +      */
366 +     test_bitmap_pmu_regs(pmc_idx, 1);
367 +     test_bitmap_pmu_regs(pmc_idx, 0);
368 + }
369 + 
370 + /*
371 +  * The guest is configured with PMUv3 with @expected_pmcr_n number of
372 +  * event counters.
373 +  * Check if @expected_pmcr_n is consistent with PMCR_EL0.N, and
374 +  * if reading/writing PMU registers for implemented or unimplemented
375 +  * counters works as expected.
376 +  */
377 + static void guest_code(uint64_t expected_pmcr_n)
378 + {
379 +     uint64_t pmcr, pmcr_n, unimp_mask;
380 +     int i, pmc;
381 + 
382 +     __GUEST_ASSERT(expected_pmcr_n <= ARMV8_PMU_MAX_GENERAL_COUNTERS,
383 +             "Expected PMCR.N: 0x%lx; ARMv8 general counters: 0x%lx",
384 +             expected_pmcr_n, ARMV8_PMU_MAX_GENERAL_COUNTERS);
385 + 
386 +     pmcr = read_sysreg(pmcr_el0);
387 +     pmcr_n = get_pmcr_n(pmcr);
388 + 
389 +     /* Make sure that PMCR_EL0.N indicates the value userspace set */
390 +     __GUEST_ASSERT(pmcr_n == expected_pmcr_n,
391 +             "Expected PMCR.N: 0x%lx, PMCR.N: 0x%lx",
392 +             expected_pmcr_n, pmcr_n);
393 + 
394 +     /*
395 +      * Make sure that (RAZ) bits corresponding to unimplemented event
396 +      * counters in {PMCNTEN,PMINTEN,PMOVS}{SET,CLR} registers are reset
397 +      * to zero.
398 +      * (NOTE: bits for implemented event counters are reset to UNKNOWN)
399 +      */
400 +     unimp_mask = GENMASK_ULL(ARMV8_PMU_MAX_GENERAL_COUNTERS - 1, pmcr_n);
401 +     check_bitmap_pmu_regs(unimp_mask, false);
402 + 
403 +     /*
404 +      * Tests for reading/writing PMU registers for implemented counters.
405 +      * Use each combination of PMEV{CNTR,TYPER}<n>_EL0 accessor functions.
406 +      */
407 +     for (i = 0; i < ARRAY_SIZE(pmc_accessors); i++) {
408 +         for (pmc = 0; pmc < pmcr_n; pmc++)
409 +             test_access_pmc_regs(&pmc_accessors[i], pmc);
410 +     }
411 + 
412 +     /*
413 +      * Tests for reading/writing PMU registers for unimplemented counters.
414 +      * Use each combination of PMEV{CNTR,TYPER}<n>_EL0 accessor functions.
415 +      */
416 +     for (i = 0; i < ARRAY_SIZE(pmc_accessors); i++) {
417 +         for (pmc = pmcr_n; pmc < ARMV8_PMU_MAX_GENERAL_COUNTERS; pmc++)
418 +             test_access_invalid_pmc_regs(&pmc_accessors[i], pmc);
419 +     }
420 + 
421 +     GUEST_DONE();
422 + }
423 + 
424 + #define GICD_BASE_GPA	0x8000000ULL
425 + #define GICR_BASE_GPA	0x80A0000ULL
426 + 
427 + /* Create a VM that has one vCPU with PMUv3 configured. */
428 + static void create_vpmu_vm(void *guest_code)
429 + {
430 +     struct kvm_vcpu_init init;
431 +     uint8_t pmuver, ec;
432 +     uint64_t dfr0, irq = 23;
433 +     struct kvm_device_attr irq_attr = {
434 +         .group = KVM_ARM_VCPU_PMU_V3_CTRL,
435 +         .attr = KVM_ARM_VCPU_PMU_V3_IRQ,
436 +         .addr = (uint64_t)&irq,
437 +     };
438 +     struct kvm_device_attr init_attr = {
439 +         .group = KVM_ARM_VCPU_PMU_V3_CTRL,
440 +         .attr = KVM_ARM_VCPU_PMU_V3_INIT,
441 +     };
442 + 
443 +     /* The test creates the vpmu_vm multiple times. Ensure a clean state */
444 +     memset(&vpmu_vm, 0, sizeof(vpmu_vm));
445 + 
446 +     vpmu_vm.vm = vm_create(1);
447 +     vm_init_descriptor_tables(vpmu_vm.vm);
448 +     for (ec = 0; ec < ESR_EC_NUM; ec++) {
449 +         vm_install_sync_handler(vpmu_vm.vm, VECTOR_SYNC_CURRENT, ec,
450 +                 guest_sync_handler);
451 +     }
452 + 
453 +     /* Create vCPU with PMUv3 */
454 +     vm_ioctl(vpmu_vm.vm, KVM_ARM_PREFERRED_TARGET, &init);
455 +     init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
456 +     vpmu_vm.vcpu = aarch64_vcpu_add(vpmu_vm.vm, 0, &init, guest_code);
457 +     vcpu_init_descriptor_tables(vpmu_vm.vcpu);
458 +     vpmu_vm.gic_fd = vgic_v3_setup(vpmu_vm.vm, 1, 64,
459 +             GICD_BASE_GPA, GICR_BASE_GPA);
460 +     __TEST_REQUIRE(vpmu_vm.gic_fd >= 0,
461 +             "Failed to create vgic-v3, skipping");
462 + 
463 +     /* Make sure that PMUv3 support is indicated in the ID register */
464 +     vcpu_get_reg(vpmu_vm.vcpu,
465 +             KVM_ARM64_SYS_REG(SYS_ID_AA64DFR0_EL1), &dfr0);
466 +     pmuver = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), dfr0);
467 +     TEST_ASSERT(pmuver != ID_AA64DFR0_EL1_PMUVer_IMP_DEF &&
468 +             pmuver >= ID_AA64DFR0_EL1_PMUVer_IMP,
469 +             "Unexpected PMUVER (0x%x) on the vCPU with PMUv3", pmuver);
470 + 
471 +     /* Initialize vPMU */
472 +     vcpu_ioctl(vpmu_vm.vcpu, KVM_SET_DEVICE_ATTR, &irq_attr);
473 +     vcpu_ioctl(vpmu_vm.vcpu, KVM_SET_DEVICE_ATTR, &init_attr);
474 + }
475 + 
476 + static void destroy_vpmu_vm(void)
477 + {
478 +     close(vpmu_vm.gic_fd);
479 +     kvm_vm_free(vpmu_vm.vm);
480 + }
481 + 
482 + static void run_vcpu(struct kvm_vcpu *vcpu, uint64_t pmcr_n)
483 + {
484 +     struct ucall uc;
485 + 
486 +     vcpu_args_set(vcpu, 1, pmcr_n);
487 +     vcpu_run(vcpu);
488 +     switch (get_ucall(vcpu, &uc)) {
489 +     case UCALL_ABORT:
490 +         REPORT_GUEST_ASSERT(uc);
491 +         break;
492 +     case UCALL_DONE:
493 +         break;
494 +     default:
495 +         TEST_FAIL("Unknown ucall %lu", uc.cmd);
496 +         break;
497 +     }
498 + }
499 + 
500 + static void test_create_vpmu_vm_with_pmcr_n(uint64_t pmcr_n, bool expect_fail)
501 + {
502 +     struct kvm_vcpu *vcpu;
503 +     uint64_t pmcr, pmcr_orig;
504 + 
505 +     create_vpmu_vm(guest_code);
506 +     vcpu = vpmu_vm.vcpu;
507 + 
508 +     vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr_orig);
509 +     pmcr = pmcr_orig;
510 + 
511 +     /*
512 +      * Setting PMCR.N to a value larger than what the host supports should
513 +      * leave the field unmodified, while the write itself still succeeds.
514 +      */
515 +     set_pmcr_n(&pmcr, pmcr_n);
516 +     vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), pmcr);
517 +     vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr);
518 + 
519 +     if (expect_fail)
520 +         TEST_ASSERT(pmcr_orig == pmcr,
521 +                 "PMCR.N modified by KVM to a larger value (PMCR: 0x%lx) for pmcr_n: 0x%lx\n",
522 +                 pmcr, pmcr_n);
523 +     else
524 +         TEST_ASSERT(pmcr_n == get_pmcr_n(pmcr),
525 +                 "Failed to update PMCR.N to %lu (received: %lu)\n",
526 +                 pmcr_n, get_pmcr_n(pmcr));
527 + }
528 + 
529 + /*
530 +  * Create a guest with one vCPU, set the PMCR_EL0.N for the vCPU to @pmcr_n,
531 +  * and run the test.
532 +  */
533 + static void run_access_test(uint64_t pmcr_n)
534 + {
535 +     uint64_t sp;
536 +     struct kvm_vcpu *vcpu;
537 +     struct kvm_vcpu_init init;
538 + 
539 +     pr_debug("Test with pmcr_n %lu\n", pmcr_n);
540 + 
541 +     test_create_vpmu_vm_with_pmcr_n(pmcr_n, false);
542 +     vcpu = vpmu_vm.vcpu;
543 + 
544 +     /* Save the initial sp so it can be restored before running the guest again */
545 +     vcpu_get_reg(vcpu, ARM64_CORE_REG(sp_el1), &sp);
546 + 
547 +     run_vcpu(vcpu, pmcr_n);
548 + 
549 +     /*
550 +      * Reset and re-initialize the vCPU, and run the guest code again to
551 +      * check if PMCR_EL0.N is preserved.
552 +      */
553 +     vm_ioctl(vpmu_vm.vm, KVM_ARM_PREFERRED_TARGET, &init);
554 +     init.features[0] |= (1 << KVM_ARM_VCPU_PMU_V3);
555 +     aarch64_vcpu_setup(vcpu, &init);
556 +     vcpu_init_descriptor_tables(vcpu);
557 +     vcpu_set_reg(vcpu, ARM64_CORE_REG(sp_el1), sp);
558 +     vcpu_set_reg(vcpu, ARM64_CORE_REG(regs.pc), (uint64_t)guest_code);
559 + 
560 +     run_vcpu(vcpu, pmcr_n);
561 + 
562 +     destroy_vpmu_vm();
563 + }
564 + 
565 + static struct pmreg_sets validity_check_reg_sets[] = {
566 +     PMREG_SET(SYS_PMCNTENSET_EL0, SYS_PMCNTENCLR_EL0),
567 +     PMREG_SET(SYS_PMINTENSET_EL1, SYS_PMINTENCLR_EL1),
568 +     PMREG_SET(SYS_PMOVSSET_EL0, SYS_PMOVSCLR_EL0),
569 + };
570 + 
571 + /*
572 +  * Create a VM, and check if KVM handles the userspace accesses of
573 +  * the PMU register sets in @validity_check_reg_sets[] correctly.
574 +  */
575 + static void run_pmregs_validity_test(uint64_t pmcr_n)
576 + {
577 +     int i;
578 +     struct kvm_vcpu *vcpu;
579 +     uint64_t set_reg_id, clr_reg_id, reg_val;
580 +     uint64_t valid_counters_mask, max_counters_mask;
581 + 
582 +     test_create_vpmu_vm_with_pmcr_n(pmcr_n, false);
583 +     vcpu = vpmu_vm.vcpu;
584 + 
585 +     valid_counters_mask = get_counters_mask(pmcr_n);
586 +     max_counters_mask = get_counters_mask(ARMV8_PMU_MAX_COUNTERS);
587 + 
588 +     for (i = 0; i < ARRAY_SIZE(validity_check_reg_sets); i++) {
589 +         set_reg_id = validity_check_reg_sets[i].set_reg_id;
590 +         clr_reg_id = validity_check_reg_sets[i].clr_reg_id;
591 + 
592 +         /*
593 +          * Test if the 'set' and 'clr' variants of the registers
594 +          * are initialized based on the number of valid counters.
595 +          */
596 +         vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(set_reg_id), &reg_val);
597 +         TEST_ASSERT((reg_val & (~valid_counters_mask)) == 0,
598 +                 "Initial read of set_reg: 0x%llx has unimplemented counters enabled: 0x%lx\n",
599 +                 KVM_ARM64_SYS_REG(set_reg_id), reg_val);
600 + 
601 +         vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(clr_reg_id), &reg_val);
602 +         TEST_ASSERT((reg_val & (~valid_counters_mask)) == 0,
603 +                 "Initial read of clr_reg: 0x%llx has unimplemented counters enabled: 0x%lx\n",
604 +                 KVM_ARM64_SYS_REG(clr_reg_id), reg_val);
605 + 
606 +         /*
607 +          * Using the 'set' variant, force-set the register to the
608 +          * max number of possible counters and test if KVM discards
609 +          * the bits for unimplemented counters as it should.
610 +          */
611 +         vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(set_reg_id), max_counters_mask);
612 + 
613 +         vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(set_reg_id), &reg_val);
614 +         TEST_ASSERT((reg_val & (~valid_counters_mask)) == 0,
615 +                 "Read of set_reg: 0x%llx has unimplemented counters enabled: 0x%lx\n",
616 +                 KVM_ARM64_SYS_REG(set_reg_id), reg_val);
617 + 
618 +         vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(clr_reg_id), &reg_val);
619 +         TEST_ASSERT((reg_val & (~valid_counters_mask)) == 0,
620 +                 "Read of clr_reg: 0x%llx has unimplemented counters enabled: 0x%lx\n",
621 +                 KVM_ARM64_SYS_REG(clr_reg_id), reg_val);
622 +     }
623 + 
624 +     destroy_vpmu_vm();
625 + }
626 + 
627 + /*
628 +  * Create a guest with one vCPU, and attempt to set the PMCR_EL0.N for
629 +  * the vCPU to @pmcr_n, which is larger than the host value.
630 +  * The attempt should fail as @pmcr_n is too big to set for the vCPU.
631 +  */
632 + static void run_error_test(uint64_t pmcr_n)
633 + {
634 +     pr_debug("Error test with pmcr_n %lu (larger than the host)\n", pmcr_n);
635 + 
636 +     test_create_vpmu_vm_with_pmcr_n(pmcr_n, true);
637 +     destroy_vpmu_vm();
638 + }
639 + 
640 + /*
641 +  * Return the default number of implemented PMU event counters excluding
642 +  * the cycle counter (i.e. the PMCR_EL0.N value) for the guest.
643 +  */
644 + static uint64_t get_pmcr_n_limit(void)
645 + {
646 +     uint64_t pmcr;
647 + 
648 +     create_vpmu_vm(guest_code);
649 +     vcpu_get_reg(vpmu_vm.vcpu, KVM_ARM64_SYS_REG(SYS_PMCR_EL0), &pmcr);
650 +     destroy_vpmu_vm();
651 +     return get_pmcr_n(pmcr);
652 + }
653 + 
654 + int main(void)
655 + {
656 +     uint64_t i, pmcr_n;
657 + 
658 +     TEST_REQUIRE(kvm_has_cap(KVM_CAP_ARM_PMU_V3));
659 + 
660 +     pmcr_n = get_pmcr_n_limit();
661 +     for (i = 0; i <= pmcr_n; i++) {
662 +         run_access_test(i);
663 +         run_pmregs_validity_test(i);
664 +     }
665 + 
666 +     for (i = pmcr_n + 1; i < ARMV8_PMU_MAX_COUNTERS; i++)
667 +         run_error_test(i);
668 + 
669 +     return 0;
670 + }
+1
tools/testing/selftests/kvm/include/aarch64/processor.h
···
104 104 #define ESR_EC_SHIFT		26
105 105 #define ESR_EC_MASK		(ESR_EC_NUM - 1)
106 106 
107     + #define ESR_EC_UNKNOWN		0x0
107 108 #define ESR_EC_SVC64		0x15
108 109 #define ESR_EC_IABT		0x21
109 110 #define ESR_EC_DABT		0x25
+3 -3
tools/testing/selftests/kvm/lib/aarch64/processor.c
···
518 518     err = ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg);
519 519     TEST_ASSERT(err == 0, KVM_IOCTL_ERROR(KVM_GET_ONE_REG, vcpu_fd));
520 520 
521     -   *ps4k = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64MMFR0_TGRAN4), val) != 0xf;
522     -   *ps64k = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64MMFR0_TGRAN64), val) == 0;
523     -   *ps16k = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64MMFR0_TGRAN16), val) != 0;
    521 +   *ps4k = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64MMFR0_EL1_TGRAN4), val) != 0xf;
    522 +   *ps64k = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64MMFR0_EL1_TGRAN64), val) == 0;
    523 +   *ps16k = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64MMFR0_EL1_TGRAN16), val) != 0;
524 524 
525 525     close(vcpu_fd);
526 526     close(vm_fd);
+145 -90
tools/testing/selftests/kvm/riscv/get-reg-list.c
··· 25 25 * the visibility of the ISA_EXT register itself. 26 26 * 27 27 * Based on above, we should filter-out all ISA_EXT registers. 28 + * 29 + * Note: The below list is alphabetically sorted. 28 30 */ 29 31 case KVM_REG_RISCV_ISA_EXT | KVM_RISCV_ISA_EXT_A: 30 32 case KVM_REG_RISCV_ISA_EXT | KVM_RISCV_ISA_EXT_C: ··· 35 33 case KVM_REG_RISCV_ISA_EXT | KVM_RISCV_ISA_EXT_H: 36 34 case KVM_REG_RISCV_ISA_EXT | KVM_RISCV_ISA_EXT_I: 37 35 case KVM_REG_RISCV_ISA_EXT | KVM_RISCV_ISA_EXT_M: 38 - case KVM_REG_RISCV_ISA_EXT | KVM_RISCV_ISA_EXT_SVPBMT: 36 + case KVM_REG_RISCV_ISA_EXT | KVM_RISCV_ISA_EXT_V: 37 + case KVM_REG_RISCV_ISA_EXT | KVM_RISCV_ISA_EXT_SMSTATEEN: 38 + case KVM_REG_RISCV_ISA_EXT | KVM_RISCV_ISA_EXT_SSAIA: 39 39 case KVM_REG_RISCV_ISA_EXT | KVM_RISCV_ISA_EXT_SSTC: 40 40 case KVM_REG_RISCV_ISA_EXT | KVM_RISCV_ISA_EXT_SVINVAL: 41 - case KVM_REG_RISCV_ISA_EXT | KVM_RISCV_ISA_EXT_ZIHINTPAUSE: 41 + case KVM_REG_RISCV_ISA_EXT | KVM_RISCV_ISA_EXT_SVNAPOT: 42 + case KVM_REG_RISCV_ISA_EXT | KVM_RISCV_ISA_EXT_SVPBMT: 43 + case KVM_REG_RISCV_ISA_EXT | KVM_RISCV_ISA_EXT_ZBA: 44 + case KVM_REG_RISCV_ISA_EXT | KVM_RISCV_ISA_EXT_ZBB: 45 + case KVM_REG_RISCV_ISA_EXT | KVM_RISCV_ISA_EXT_ZBS: 42 46 case KVM_REG_RISCV_ISA_EXT | KVM_RISCV_ISA_EXT_ZICBOM: 43 47 case KVM_REG_RISCV_ISA_EXT | KVM_RISCV_ISA_EXT_ZICBOZ: 44 - case KVM_REG_RISCV_ISA_EXT | KVM_RISCV_ISA_EXT_ZBB: 45 - case KVM_REG_RISCV_ISA_EXT | KVM_RISCV_ISA_EXT_SSAIA: 46 - case KVM_REG_RISCV_ISA_EXT | KVM_RISCV_ISA_EXT_V: 47 - case KVM_REG_RISCV_ISA_EXT | KVM_RISCV_ISA_EXT_SVNAPOT: 48 - case KVM_REG_RISCV_ISA_EXT | KVM_RISCV_ISA_EXT_ZBA: 49 - case KVM_REG_RISCV_ISA_EXT | KVM_RISCV_ISA_EXT_ZBS: 50 48 case KVM_REG_RISCV_ISA_EXT | KVM_RISCV_ISA_EXT_ZICNTR: 49 + case KVM_REG_RISCV_ISA_EXT | KVM_RISCV_ISA_EXT_ZICOND: 51 50 case KVM_REG_RISCV_ISA_EXT | KVM_RISCV_ISA_EXT_ZICSR: 52 51 case KVM_REG_RISCV_ISA_EXT | KVM_RISCV_ISA_EXT_ZIFENCEI: 52 + case KVM_REG_RISCV_ISA_EXT | KVM_RISCV_ISA_EXT_ZIHINTPAUSE: 53 53 case KVM_REG_RISCV_ISA_EXT | KVM_RISCV_ISA_EXT_ZIHPM: 54 54 return true; 55 55 /* AIA registers are always available when Ssaia can't be disabled */ ··· 116 112 } 117 113 } 118 114 119 - static const char *config_id_to_str(__u64 id) 115 + static const char *config_id_to_str(const char *prefix, __u64 id) 120 116 { 121 117 /* reg_off is the offset into struct kvm_riscv_config */ 122 118 __u64 reg_off = id & ~(REG_MASK | KVM_REG_RISCV_CONFIG); 119 + 120 + assert((id & KVM_REG_RISCV_TYPE_MASK) == KVM_REG_RISCV_CONFIG); 123 121 124 122 switch (reg_off) { 125 123 case KVM_REG_RISCV_CONFIG_REG(isa): ··· 140 134 return "KVM_REG_RISCV_CONFIG_REG(satp_mode)"; 141 135 } 142 136 143 - /* 144 - * Config regs would grow regularly with new pseudo reg added, so 145 - * just show raw id to indicate a new pseudo config reg. 
146 - */ 147 - return strdup_printf("KVM_REG_RISCV_CONFIG_REG(%lld) /* UNKNOWN */", reg_off); 137 + return strdup_printf("%lld /* UNKNOWN */", reg_off); 148 138 } 149 139 150 140 static const char *core_id_to_str(const char *prefix, __u64 id) 151 141 { 152 142 /* reg_off is the offset into struct kvm_riscv_core */ 153 143 __u64 reg_off = id & ~(REG_MASK | KVM_REG_RISCV_CORE); 144 + 145 + assert((id & KVM_REG_RISCV_TYPE_MASK) == KVM_REG_RISCV_CORE); 154 146 155 147 switch (reg_off) { 156 148 case KVM_REG_RISCV_CORE_REG(regs.pc): ··· 180 176 return "KVM_REG_RISCV_CORE_REG(mode)"; 181 177 } 182 178 183 - TEST_FAIL("%s: Unknown core reg id: 0x%llx", prefix, id); 184 - return NULL; 179 + return strdup_printf("%lld /* UNKNOWN */", reg_off); 185 180 } 186 181 187 182 #define RISCV_CSR_GENERAL(csr) \ 188 183 "KVM_REG_RISCV_CSR_GENERAL | KVM_REG_RISCV_CSR_REG(" #csr ")" 189 184 #define RISCV_CSR_AIA(csr) \ 190 185 "KVM_REG_RISCV_CSR_AIA | KVM_REG_RISCV_CSR_REG(" #csr ")" 186 + #define RISCV_CSR_SMSTATEEN(csr) \ 187 + "KVM_REG_RISCV_CSR_SMSTATEEN | KVM_REG_RISCV_CSR_REG(" #csr ")" 191 188 192 189 static const char *general_csr_id_to_str(__u64 reg_off) 193 190 { ··· 214 209 return RISCV_CSR_GENERAL(satp); 215 210 case KVM_REG_RISCV_CSR_REG(scounteren): 216 211 return RISCV_CSR_GENERAL(scounteren); 212 + case KVM_REG_RISCV_CSR_REG(senvcfg): 213 + return RISCV_CSR_GENERAL(senvcfg); 217 214 } 218 215 219 - TEST_FAIL("Unknown general csr reg: 0x%llx", reg_off); 220 - return NULL; 216 + return strdup_printf("KVM_REG_RISCV_CSR_GENERAL | %lld /* UNKNOWN */", reg_off); 221 217 } 222 218 223 219 static const char *aia_csr_id_to_str(__u64 reg_off) ··· 241 235 return RISCV_CSR_AIA(iprio2h); 242 236 } 243 237 244 - TEST_FAIL("Unknown aia csr reg: 0x%llx", reg_off); 238 + return strdup_printf("KVM_REG_RISCV_CSR_AIA | %lld /* UNKNOWN */", reg_off); 239 + } 240 + 241 + static const char *smstateen_csr_id_to_str(__u64 reg_off) 242 + { 243 + /* reg_off is the offset into struct kvm_riscv_smstateen_csr */ 244 + switch (reg_off) { 245 + case KVM_REG_RISCV_CSR_SMSTATEEN_REG(sstateen0): 246 + return RISCV_CSR_SMSTATEEN(sstateen0); 247 + } 248 + 249 + TEST_FAIL("Unknown smstateen csr reg: 0x%llx", reg_off); 245 250 return NULL; 246 251 } 247 252 ··· 261 244 __u64 reg_off = id & ~(REG_MASK | KVM_REG_RISCV_CSR); 262 245 __u64 reg_subtype = reg_off & KVM_REG_RISCV_SUBTYPE_MASK; 263 246 247 + assert((id & KVM_REG_RISCV_TYPE_MASK) == KVM_REG_RISCV_CSR); 248 + 264 249 reg_off &= ~KVM_REG_RISCV_SUBTYPE_MASK; 265 250 266 251 switch (reg_subtype) { ··· 270 251 return general_csr_id_to_str(reg_off); 271 252 case KVM_REG_RISCV_CSR_AIA: 272 253 return aia_csr_id_to_str(reg_off); 254 + case KVM_REG_RISCV_CSR_SMSTATEEN: 255 + return smstateen_csr_id_to_str(reg_off); 273 256 } 274 257 275 - TEST_FAIL("%s: Unknown csr subtype: 0x%llx", prefix, reg_subtype); 276 - return NULL; 258 + return strdup_printf("%lld | %lld /* UNKNOWN */", reg_subtype, reg_off); 277 259 } 278 260 279 261 static const char *timer_id_to_str(const char *prefix, __u64 id) 280 262 { 281 263 /* reg_off is the offset into struct kvm_riscv_timer */ 282 264 __u64 reg_off = id & ~(REG_MASK | KVM_REG_RISCV_TIMER); 265 + 266 + assert((id & KVM_REG_RISCV_TYPE_MASK) == KVM_REG_RISCV_TIMER); 283 267 284 268 switch (reg_off) { 285 269 case KVM_REG_RISCV_TIMER_REG(frequency): ··· 295 273 return "KVM_REG_RISCV_TIMER_REG(state)"; 296 274 } 297 275 298 - TEST_FAIL("%s: Unknown timer reg id: 0x%llx", prefix, id); 299 - return NULL; 276 + return strdup_printf("%lld /* UNKNOWN */", 
reg_off); 300 277 } 301 278 302 279 static const char *fp_f_id_to_str(const char *prefix, __u64 id) 303 280 { 304 281 /* reg_off is the offset into struct __riscv_f_ext_state */ 305 282 __u64 reg_off = id & ~(REG_MASK | KVM_REG_RISCV_FP_F); 283 + 284 + assert((id & KVM_REG_RISCV_TYPE_MASK) == KVM_REG_RISCV_FP_F); 306 285 307 286 switch (reg_off) { 308 287 case KVM_REG_RISCV_FP_F_REG(f[0]) ... ··· 313 290 return "KVM_REG_RISCV_FP_F_REG(fcsr)"; 314 291 } 315 292 316 - TEST_FAIL("%s: Unknown fp_f reg id: 0x%llx", prefix, id); 317 - return NULL; 293 + return strdup_printf("%lld /* UNKNOWN */", reg_off); 318 294 } 319 295 320 296 static const char *fp_d_id_to_str(const char *prefix, __u64 id) 321 297 { 322 298 /* reg_off is the offset into struct __riscv_d_ext_state */ 323 299 __u64 reg_off = id & ~(REG_MASK | KVM_REG_RISCV_FP_D); 300 + 301 + assert((id & KVM_REG_RISCV_TYPE_MASK) == KVM_REG_RISCV_FP_D); 324 302 325 303 switch (reg_off) { 326 304 case KVM_REG_RISCV_FP_D_REG(f[0]) ... ··· 331 307 return "KVM_REG_RISCV_FP_D_REG(fcsr)"; 332 308 } 333 309 334 - TEST_FAIL("%s: Unknown fp_d reg id: 0x%llx", prefix, id); 335 - return NULL; 310 + return strdup_printf("%lld /* UNKNOWN */", reg_off); 336 311 } 337 312 338 - static const char *isa_ext_id_to_str(__u64 id) 313 + #define KVM_ISA_EXT_ARR(ext) \ 314 + [KVM_RISCV_ISA_EXT_##ext] = "KVM_RISCV_ISA_EXT_" #ext 315 + 316 + static const char *isa_ext_id_to_str(const char *prefix, __u64 id) 339 317 { 340 318 /* reg_off is the offset into unsigned long kvm_isa_ext_arr[] */ 341 319 __u64 reg_off = id & ~(REG_MASK | KVM_REG_RISCV_ISA_EXT); 342 320 321 + assert((id & KVM_REG_RISCV_TYPE_MASK) == KVM_REG_RISCV_ISA_EXT); 322 + 343 323 static const char * const kvm_isa_ext_reg_name[] = { 344 - "KVM_RISCV_ISA_EXT_A", 345 - "KVM_RISCV_ISA_EXT_C", 346 - "KVM_RISCV_ISA_EXT_D", 347 - "KVM_RISCV_ISA_EXT_F", 348 - "KVM_RISCV_ISA_EXT_H", 349 - "KVM_RISCV_ISA_EXT_I", 350 - "KVM_RISCV_ISA_EXT_M", 351 - "KVM_RISCV_ISA_EXT_SVPBMT", 352 - "KVM_RISCV_ISA_EXT_SSTC", 353 - "KVM_RISCV_ISA_EXT_SVINVAL", 354 - "KVM_RISCV_ISA_EXT_ZIHINTPAUSE", 355 - "KVM_RISCV_ISA_EXT_ZICBOM", 356 - "KVM_RISCV_ISA_EXT_ZICBOZ", 357 - "KVM_RISCV_ISA_EXT_ZBB", 358 - "KVM_RISCV_ISA_EXT_SSAIA", 359 - "KVM_RISCV_ISA_EXT_V", 360 - "KVM_RISCV_ISA_EXT_SVNAPOT", 361 - "KVM_RISCV_ISA_EXT_ZBA", 362 - "KVM_RISCV_ISA_EXT_ZBS", 363 - "KVM_RISCV_ISA_EXT_ZICNTR", 364 - "KVM_RISCV_ISA_EXT_ZICSR", 365 - "KVM_RISCV_ISA_EXT_ZIFENCEI", 366 - "KVM_RISCV_ISA_EXT_ZIHPM", 324 + KVM_ISA_EXT_ARR(A), 325 + KVM_ISA_EXT_ARR(C), 326 + KVM_ISA_EXT_ARR(D), 327 + KVM_ISA_EXT_ARR(F), 328 + KVM_ISA_EXT_ARR(H), 329 + KVM_ISA_EXT_ARR(I), 330 + KVM_ISA_EXT_ARR(M), 331 + KVM_ISA_EXT_ARR(V), 332 + KVM_ISA_EXT_ARR(SMSTATEEN), 333 + KVM_ISA_EXT_ARR(SSAIA), 334 + KVM_ISA_EXT_ARR(SSTC), 335 + KVM_ISA_EXT_ARR(SVINVAL), 336 + KVM_ISA_EXT_ARR(SVNAPOT), 337 + KVM_ISA_EXT_ARR(SVPBMT), 338 + KVM_ISA_EXT_ARR(ZBA), 339 + KVM_ISA_EXT_ARR(ZBB), 340 + KVM_ISA_EXT_ARR(ZBS), 341 + KVM_ISA_EXT_ARR(ZICBOM), 342 + KVM_ISA_EXT_ARR(ZICBOZ), 343 + KVM_ISA_EXT_ARR(ZICNTR), 344 + KVM_ISA_EXT_ARR(ZICOND), 345 + KVM_ISA_EXT_ARR(ZICSR), 346 + KVM_ISA_EXT_ARR(ZIFENCEI), 347 + KVM_ISA_EXT_ARR(ZIHINTPAUSE), 348 + KVM_ISA_EXT_ARR(ZIHPM), 367 349 }; 368 350 369 - if (reg_off >= ARRAY_SIZE(kvm_isa_ext_reg_name)) { 370 - /* 371 - * isa_ext regs would grow regularly with new isa extension added, so 372 - * just show "reg" to indicate a new extension. 
373 - */ 351 + if (reg_off >= ARRAY_SIZE(kvm_isa_ext_reg_name)) 374 352 return strdup_printf("%lld /* UNKNOWN */", reg_off); 375 - } 376 353 377 354 return kvm_isa_ext_reg_name[reg_off]; 378 355 } 356 + 357 + #define KVM_SBI_EXT_ARR(ext) \ 358 + [ext] = "KVM_REG_RISCV_SBI_SINGLE | " #ext 379 359 380 360 static const char *sbi_ext_single_id_to_str(__u64 reg_off) 381 361 { 382 362 /* reg_off is KVM_RISCV_SBI_EXT_ID */ 383 363 static const char * const kvm_sbi_ext_reg_name[] = { 384 - "KVM_REG_RISCV_SBI_SINGLE | KVM_RISCV_SBI_EXT_V01", 385 - "KVM_REG_RISCV_SBI_SINGLE | KVM_RISCV_SBI_EXT_TIME", 386 - "KVM_REG_RISCV_SBI_SINGLE | KVM_RISCV_SBI_EXT_IPI", 387 - "KVM_REG_RISCV_SBI_SINGLE | KVM_RISCV_SBI_EXT_RFENCE", 388 - "KVM_REG_RISCV_SBI_SINGLE | KVM_RISCV_SBI_EXT_SRST", 389 - "KVM_REG_RISCV_SBI_SINGLE | KVM_RISCV_SBI_EXT_HSM", 390 - "KVM_REG_RISCV_SBI_SINGLE | KVM_RISCV_SBI_EXT_PMU", 391 - "KVM_REG_RISCV_SBI_SINGLE | KVM_RISCV_SBI_EXT_EXPERIMENTAL", 392 - "KVM_REG_RISCV_SBI_SINGLE | KVM_RISCV_SBI_EXT_VENDOR", 364 + KVM_SBI_EXT_ARR(KVM_RISCV_SBI_EXT_V01), 365 + KVM_SBI_EXT_ARR(KVM_RISCV_SBI_EXT_TIME), 366 + KVM_SBI_EXT_ARR(KVM_RISCV_SBI_EXT_IPI), 367 + KVM_SBI_EXT_ARR(KVM_RISCV_SBI_EXT_RFENCE), 368 + KVM_SBI_EXT_ARR(KVM_RISCV_SBI_EXT_SRST), 369 + KVM_SBI_EXT_ARR(KVM_RISCV_SBI_EXT_HSM), 370 + KVM_SBI_EXT_ARR(KVM_RISCV_SBI_EXT_PMU), 371 + KVM_SBI_EXT_ARR(KVM_RISCV_SBI_EXT_EXPERIMENTAL), 372 + KVM_SBI_EXT_ARR(KVM_RISCV_SBI_EXT_VENDOR), 373 + KVM_SBI_EXT_ARR(KVM_RISCV_SBI_EXT_DBCN), 393 374 }; 394 375 395 - if (reg_off >= ARRAY_SIZE(kvm_sbi_ext_reg_name)) { 396 - /* 397 - * sbi_ext regs would grow regularly with new sbi extension added, so 398 - * just show "reg" to indicate a new extension. 399 - */ 376 + if (reg_off >= ARRAY_SIZE(kvm_sbi_ext_reg_name)) 400 377 return strdup_printf("KVM_REG_RISCV_SBI_SINGLE | %lld /* UNKNOWN */", reg_off); 401 - } 402 378 403 379 return kvm_sbi_ext_reg_name[reg_off]; 404 380 } 405 381 406 382 static const char *sbi_ext_multi_id_to_str(__u64 reg_subtype, __u64 reg_off) 407 383 { 408 - if (reg_off > KVM_REG_RISCV_SBI_MULTI_REG_LAST) { 409 - /* 410 - * sbi_ext regs would grow regularly with new sbi extension added, so 411 - * just show "reg" to indicate a new extension. 
412 - */ 413 - return strdup_printf("%lld /* UNKNOWN */", reg_off); 414 - } 384 + const char *unknown = ""; 385 + 386 + if (reg_off > KVM_REG_RISCV_SBI_MULTI_REG_LAST) 387 + unknown = " /* UNKNOWN */"; 415 388 416 389 switch (reg_subtype) { 417 390 case KVM_REG_RISCV_SBI_MULTI_EN: 418 - return strdup_printf("KVM_REG_RISCV_SBI_MULTI_EN | %lld", reg_off); 391 + return strdup_printf("KVM_REG_RISCV_SBI_MULTI_EN | %lld%s", reg_off, unknown); 419 392 case KVM_REG_RISCV_SBI_MULTI_DIS: 420 - return strdup_printf("KVM_REG_RISCV_SBI_MULTI_DIS | %lld", reg_off); 393 + return strdup_printf("KVM_REG_RISCV_SBI_MULTI_DIS | %lld%s", reg_off, unknown); 421 394 } 422 395 423 - return NULL; 396 + return strdup_printf("%lld | %lld /* UNKNOWN */", reg_subtype, reg_off); 424 397 } 425 398 426 399 static const char *sbi_ext_id_to_str(const char *prefix, __u64 id) 427 400 { 428 401 __u64 reg_off = id & ~(REG_MASK | KVM_REG_RISCV_SBI_EXT); 429 402 __u64 reg_subtype = reg_off & KVM_REG_RISCV_SUBTYPE_MASK; 403 + 404 + assert((id & KVM_REG_RISCV_TYPE_MASK) == KVM_REG_RISCV_SBI_EXT); 430 405 431 406 reg_off &= ~KVM_REG_RISCV_SUBTYPE_MASK; 432 407 ··· 437 414 return sbi_ext_multi_id_to_str(reg_subtype, reg_off); 438 415 } 439 416 440 - TEST_FAIL("%s: Unknown sbi ext subtype: 0x%llx", prefix, reg_subtype); 441 - return NULL; 417 + return strdup_printf("%lld | %lld /* UNKNOWN */", reg_subtype, reg_off); 442 418 } 443 419 444 420 void print_reg(const char *prefix, __u64 id) ··· 458 436 reg_size = "KVM_REG_SIZE_U128"; 459 437 break; 460 438 default: 461 - TEST_FAIL("%s: Unexpected reg size: 0x%llx in reg id: 0x%llx", 462 - prefix, (id & KVM_REG_SIZE_MASK) >> KVM_REG_SIZE_SHIFT, id); 439 + printf("\tKVM_REG_RISCV | (%lld << KVM_REG_SIZE_SHIFT) | 0x%llx /* UNKNOWN */,", 440 + (id & KVM_REG_SIZE_MASK) >> KVM_REG_SIZE_SHIFT, id & REG_MASK); 463 441 } 464 442 465 443 switch (id & KVM_REG_RISCV_TYPE_MASK) { 466 444 case KVM_REG_RISCV_CONFIG: 467 445 printf("\tKVM_REG_RISCV | %s | KVM_REG_RISCV_CONFIG | %s,\n", 468 - reg_size, config_id_to_str(id)); 446 + reg_size, config_id_to_str(prefix, id)); 469 447 break; 470 448 case KVM_REG_RISCV_CORE: 471 449 printf("\tKVM_REG_RISCV | %s | KVM_REG_RISCV_CORE | %s,\n", ··· 489 467 break; 490 468 case KVM_REG_RISCV_ISA_EXT: 491 469 printf("\tKVM_REG_RISCV | %s | KVM_REG_RISCV_ISA_EXT | %s,\n", 492 - reg_size, isa_ext_id_to_str(id)); 470 + reg_size, isa_ext_id_to_str(prefix, id)); 493 471 break; 494 472 case KVM_REG_RISCV_SBI_EXT: 495 473 printf("\tKVM_REG_RISCV | %s | KVM_REG_RISCV_SBI_EXT | %s,\n", 496 474 reg_size, sbi_ext_id_to_str(prefix, id)); 497 475 break; 498 476 default: 499 - TEST_FAIL("%s: Unexpected reg type: 0x%llx in reg id: 0x%llx", prefix, 500 - (id & KVM_REG_RISCV_TYPE_MASK) >> KVM_REG_RISCV_TYPE_SHIFT, id); 477 + printf("\tKVM_REG_RISCV | %s | 0x%llx /* UNKNOWN */,", 478 + reg_size, id & REG_MASK); 501 479 } 502 480 } 503 481 ··· 554 532 KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CSR | KVM_REG_RISCV_CSR_GENERAL | KVM_REG_RISCV_CSR_REG(sip), 555 533 KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CSR | KVM_REG_RISCV_CSR_GENERAL | KVM_REG_RISCV_CSR_REG(satp), 556 534 KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CSR | KVM_REG_RISCV_CSR_GENERAL | KVM_REG_RISCV_CSR_REG(scounteren), 535 + KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CSR | KVM_REG_RISCV_CSR_GENERAL | KVM_REG_RISCV_CSR_REG(senvcfg), 557 536 KVM_REG_RISCV | KVM_REG_SIZE_U64 | KVM_REG_RISCV_TIMER | KVM_REG_RISCV_TIMER_REG(frequency), 558 537 KVM_REG_RISCV | KVM_REG_SIZE_U64 | 
KVM_REG_RISCV_TIMER | KVM_REG_RISCV_TIMER_REG(time), 559 538 KVM_REG_RISCV | KVM_REG_SIZE_U64 | KVM_REG_RISCV_TIMER | KVM_REG_RISCV_TIMER_REG(compare), ··· 568 545 KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_SBI_EXT | KVM_REG_RISCV_SBI_SINGLE | KVM_RISCV_SBI_EXT_PMU, 569 546 KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_SBI_EXT | KVM_REG_RISCV_SBI_SINGLE | KVM_RISCV_SBI_EXT_EXPERIMENTAL, 570 547 KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_SBI_EXT | KVM_REG_RISCV_SBI_SINGLE | KVM_RISCV_SBI_EXT_VENDOR, 548 + KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_SBI_EXT | KVM_REG_RISCV_SBI_SINGLE | KVM_RISCV_SBI_EXT_DBCN, 571 549 KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_SBI_EXT | KVM_REG_RISCV_SBI_MULTI_EN | 0, 572 550 KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_SBI_EXT | KVM_REG_RISCV_SBI_MULTI_DIS | 0, 573 551 }; ··· 627 603 KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_ISA_EXT | KVM_RISCV_ISA_EXT_ZICNTR, 628 604 }; 629 605 606 + static __u64 zicond_regs[] = { 607 + KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_ISA_EXT | KVM_RISCV_ISA_EXT_ZICOND, 608 + }; 609 + 630 610 static __u64 zicsr_regs[] = { 631 611 KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_ISA_EXT | KVM_RISCV_ISA_EXT_ZICSR, 632 612 }; ··· 652 624 KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CSR | KVM_REG_RISCV_CSR_AIA | KVM_REG_RISCV_CSR_AIA_REG(iprio1h), 653 625 KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CSR | KVM_REG_RISCV_CSR_AIA | KVM_REG_RISCV_CSR_AIA_REG(iprio2h), 654 626 KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_ISA_EXT | KVM_RISCV_ISA_EXT_SSAIA, 627 + }; 628 + 629 + static __u64 smstateen_regs[] = { 630 + KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_CSR | KVM_REG_RISCV_CSR_SMSTATEEN | KVM_REG_RISCV_CSR_SMSTATEEN_REG(sstateen0), 631 + KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_ISA_EXT | KVM_RISCV_ISA_EXT_SMSTATEEN, 655 632 }; 656 633 657 634 static __u64 fp_f_regs[] = { ··· 758 725 {"zbs", .feature = KVM_RISCV_ISA_EXT_ZBS, .regs = zbs_regs, .regs_n = ARRAY_SIZE(zbs_regs),} 759 726 #define ZICNTR_REGS_SUBLIST \ 760 727 {"zicntr", .feature = KVM_RISCV_ISA_EXT_ZICNTR, .regs = zicntr_regs, .regs_n = ARRAY_SIZE(zicntr_regs),} 728 + #define ZICOND_REGS_SUBLIST \ 729 + {"zicond", .feature = KVM_RISCV_ISA_EXT_ZICOND, .regs = zicond_regs, .regs_n = ARRAY_SIZE(zicond_regs),} 761 730 #define ZICSR_REGS_SUBLIST \ 762 731 {"zicsr", .feature = KVM_RISCV_ISA_EXT_ZICSR, .regs = zicsr_regs, .regs_n = ARRAY_SIZE(zicsr_regs),} 763 732 #define ZIFENCEI_REGS_SUBLIST \ ··· 768 733 {"zihpm", .feature = KVM_RISCV_ISA_EXT_ZIHPM, .regs = zihpm_regs, .regs_n = ARRAY_SIZE(zihpm_regs),} 769 734 #define AIA_REGS_SUBLIST \ 770 735 {"aia", .feature = KVM_RISCV_ISA_EXT_SSAIA, .regs = aia_regs, .regs_n = ARRAY_SIZE(aia_regs),} 736 + #define SMSTATEEN_REGS_SUBLIST \ 737 + {"smstateen", .feature = KVM_RISCV_ISA_EXT_SMSTATEEN, .regs = smstateen_regs, .regs_n = ARRAY_SIZE(smstateen_regs),} 771 738 #define FP_F_REGS_SUBLIST \ 772 739 {"fp_f", .feature = KVM_RISCV_ISA_EXT_F, .regs = fp_f_regs, \ 773 740 .regs_n = ARRAY_SIZE(fp_f_regs),} ··· 865 828 }, 866 829 }; 867 830 831 + static struct vcpu_reg_list zicond_config = { 832 + .sublists = { 833 + BASE_SUBLIST, 834 + ZICOND_REGS_SUBLIST, 835 + {0}, 836 + }, 837 + }; 838 + 868 839 static struct vcpu_reg_list zicsr_config = { 869 840 .sublists = { 870 841 BASE_SUBLIST, ··· 905 860 }, 906 861 }; 907 862 863 + static struct vcpu_reg_list smstateen_config = { 864 + .sublists = { 865 + BASE_SUBLIST, 866 + SMSTATEEN_REGS_SUBLIST, 867 + 
{0}, 868 + }, 869 + }; 870 + 908 871 static struct vcpu_reg_list fp_f_config = { 909 872 .sublists = { 910 873 BASE_SUBLIST, ··· 941 888 &zbb_config, 942 889 &zbs_config, 943 890 &zicntr_config, 891 + &zicond_config, 944 892 &zicsr_config, 945 893 &zifencei_config, 946 894 &zihpm_config, 947 895 &aia_config, 896 + &smstateen_config, 948 897 &fp_f_config, 949 898 &fp_d_config, 950 899 };
+47
tools/testing/selftests/kvm/x86_64/hwcr_msr_test.c
···
 1 + // SPDX-License-Identifier: GPL-2.0
 2 + /*
 3 +  * Copyright (C) 2023, Google LLC.
 4 +  */
 5 + 
 6 + #define _GNU_SOURCE /* for program_invocation_short_name */
 7 + #include <sys/ioctl.h>
 8 + 
 9 + #include "test_util.h"
10 + #include "kvm_util.h"
11 + #include "vmx.h"
12 + 
13 + void test_hwcr_bit(struct kvm_vcpu *vcpu, unsigned int bit)
14 + {
15 +     const uint64_t ignored = BIT_ULL(3) | BIT_ULL(6) | BIT_ULL(8);
16 +     const uint64_t valid = BIT_ULL(18) | BIT_ULL(24);
17 +     const uint64_t legal = ignored | valid;
18 +     uint64_t val = BIT_ULL(bit);
19 +     uint64_t actual;
20 +     int r;
21 + 
22 +     r = _vcpu_set_msr(vcpu, MSR_K7_HWCR, val);
23 +     TEST_ASSERT(val & ~legal ? !r : r == 1,
24 +             "Expected KVM_SET_MSRS(MSR_K7_HWCR) = 0x%lx to %s",
25 +             val, val & ~legal ? "fail" : "succeed");
26 + 
27 +     actual = vcpu_get_msr(vcpu, MSR_K7_HWCR);
28 +     TEST_ASSERT(actual == (val & valid),
29 +             "Bit %u: unexpected HWCR 0x%lx; expected 0x%lx",
30 +             bit, actual, (val & valid));
31 + 
32 +     vcpu_set_msr(vcpu, MSR_K7_HWCR, 0);
33 + }
34 + 
35 + int main(int argc, char *argv[])
36 + {
37 +     struct kvm_vm *vm;
38 +     struct kvm_vcpu *vcpu;
39 +     unsigned int bit;
40 + 
41 +     vm = vm_create_with_one_vcpu(&vcpu, NULL);
42 + 
43 +     for (bit = 0; bit < BITS_PER_LONG; bit++)
44 +         test_hwcr_bit(vcpu, bit);
45 + 
46 +     kvm_vm_free(vm);
47 + }