Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

KVM: PPC: Add support for nestedv2 guests

A series of hcalls has been added to the PAPR which allows a regular
guest partition to create and manage guest partitions of its own. KVM
already had an interface that allowed this on powernv platforms. This
existing interface will now be called "nestedv1". The newly added PAPR
interface will be called "nestedv2". PHYP will support the nestedv2
interface. At this time the host side of the nestedv2 interface has not
been implemented on powernv but there is no technical reason why it
could not be added.

The nestedv1 interface is still supported.

Add support to KVM to utilize these hcalls to enable running nested
guests as a pseries guest on PHYP.

Overview of the new hcall usage:

- L1 and L0 negotiate capabilities with
H_GUEST_{G,S}ET_CAPABILITIES()

- L1 requests the L0 create an L2 with
H_GUEST_CREATE() and receives a handle to use in future hcalls

- L1 requests the L0 create an L2 vCPU with
H_GUEST_CREATE_VCPU()

- L1 sets up the L2 using H_GUEST_SET_STATE and the
H_GUEST_VCPU_RUN input buffer

- L1 requests that the L0 run the L2 vCPU using H_GUEST_VCPU_RUN()

- L2 returns to L1 with an exit reason and L1 reads the
H_GUEST_VCPU_RUN output buffer populated by the L0

- L1 handles the exit using H_GUEST_GET_STATE if necessary

- L1 reruns the L2 vCPU with H_GUEST_VCPU_RUN

- L1 frees the L2 in the L0 with H_GUEST_DELETE()

Support for the new API is determined by trying
H_GUEST_GET_CAPABILITIES. On a successful return, use the nestedv2
interface.

Use the vcpu register state setters for tracking modified guest state
elements and copy the thread-wide values into the H_GUEST_VCPU_RUN input
buffer immediately before running an L2. The guest-wide elements cannot
be added to the input buffer, so send them with a separate
H_GUEST_SET_STATE call if necessary.

Make the vcpu register getter load the corresponding value from the real
host with H_GUEST_GET. To avoid unnecessarily calling H_GUEST_GET, track
which values have already been loaded between H_GUEST_VCPU_RUN calls. If
an element is present in the H_GUEST_VCPU_RUN output buffer it also does
not need to be loaded again.

Tested-by: Sachin Sant <sachinp@linux.ibm.com>
Signed-off-by: Vaibhav Jain <vaibhav@linux.ibm.com>
Signed-off-by: Gautam Menghani <gautam@linux.ibm.com>
Signed-off-by: Kautuk Consul <kconsul@linux.vnet.ibm.com>
Signed-off-by: Amit Machhiwal <amachhiw@linux.vnet.ibm.com>
Signed-off-by: Jordan Niethe <jniethe5@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://msgid.link/20230914030600.16993-11-jniethe5@gmail.com

Authored by Jordan Niethe, committed by Michael Ellerman
19d31c5f dfcaacc8

+1840 -94
+91
arch/powerpc/include/asm/guest-state-buffer.h

```diff
 #ifndef _ASM_POWERPC_GUEST_STATE_BUFFER_H
 #define _ASM_POWERPC_GUEST_STATE_BUFFER_H
 
+#include <asm/hvcall.h>
 #include <linux/gfp.h>
 #include <linux/bitmap.h>
 #include <asm/plpar_wrappers.h>
@@
 				 unsigned long vcpu_id, gfp_t flags);
 void kvmppc_gsb_free(struct kvmppc_gs_buff *gsb);
 void *kvmppc_gsb_put(struct kvmppc_gs_buff *gsb, size_t size);
+int kvmppc_gsb_send(struct kvmppc_gs_buff *gsb, unsigned long flags);
+int kvmppc_gsb_recv(struct kvmppc_gs_buff *gsb, unsigned long flags);
 
 /**
  * kvmppc_gsb_header() - the header of a guest state buffer
@@
 static inline void kvmppc_gsm_reset(struct kvmppc_gs_msg *gsm)
 {
 	kvmppc_gsbm_zero(&gsm->bitmap);
+}
+
+/**
+ * kvmppc_gsb_receive_data - flexibly update values from a guest state buffer
+ * @gsb: guest state buffer
+ * @gsm: guest state message
+ *
+ * Requests updated values for the guest state values included in the guest
+ * state message. The guest state message will then deserialize the guest state
+ * buffer.
+ */
+static inline int kvmppc_gsb_receive_data(struct kvmppc_gs_buff *gsb,
+					  struct kvmppc_gs_msg *gsm)
+{
+	int rc;
+
+	kvmppc_gsb_reset(gsb);
+	rc = kvmppc_gsm_fill_info(gsm, gsb);
+	if (rc < 0)
+		return rc;
+
+	rc = kvmppc_gsb_recv(gsb, gsm->flags);
+	if (rc < 0)
+		return rc;
+
+	rc = kvmppc_gsm_refresh_info(gsm, gsb);
+	if (rc < 0)
+		return rc;
+	return 0;
+}
+
+/**
+ * kvmppc_gsb_receive_datum - receive a single guest state ID
+ * @gsb: guest state buffer
+ * @gsm: guest state message
+ * @iden: guest state identity
+ */
+static inline int kvmppc_gsb_receive_datum(struct kvmppc_gs_buff *gsb,
+					   struct kvmppc_gs_msg *gsm, u16 iden)
+{
+	int rc;
+
+	kvmppc_gsm_include(gsm, iden);
+	rc = kvmppc_gsb_receive_data(gsb, gsm);
+	if (rc < 0)
+		return rc;
+	kvmppc_gsm_reset(gsm);
+	return 0;
+}
+
+/**
+ * kvmppc_gsb_send_data - flexibly send values from a guest state buffer
+ * @gsb: guest state buffer
+ * @gsm: guest state message
+ *
+ * Sends the guest state values included in the guest state message.
+ */
+static inline int kvmppc_gsb_send_data(struct kvmppc_gs_buff *gsb,
+				       struct kvmppc_gs_msg *gsm)
+{
+	int rc;
+
+	kvmppc_gsb_reset(gsb);
+	rc = kvmppc_gsm_fill_info(gsm, gsb);
+	if (rc < 0)
+		return rc;
+	rc = kvmppc_gsb_send(gsb, gsm->flags);
+
+	return rc;
+}
+
+/**
+ * kvmppc_gsb_send_datum - send a single guest state ID
+ * @gsb: guest state buffer
+ * @gsm: guest state message
+ * @iden: guest state identity
+ */
+static inline int kvmppc_gsb_send_datum(struct kvmppc_gs_buff *gsb,
+					struct kvmppc_gs_msg *gsm, u16 iden)
+{
+	int rc;
+
+	kvmppc_gsm_include(gsm, iden);
+	rc = kvmppc_gsb_send_data(gsb, gsm);
+	if (rc < 0)
+		return rc;
+	kvmppc_gsm_reset(gsm);
+	return 0;
 }
 
 #endif /* _ASM_POWERPC_GUEST_STATE_BUFFER_H */
```
+30
arch/powerpc/include/asm/hvcall.h

```diff
 #define H_COP_HW	-74
 #define H_STATE		-75
 #define H_IN_USE	-77
+
+#define H_INVALID_ELEMENT_ID			-79
+#define H_INVALID_ELEMENT_SIZE			-80
+#define H_INVALID_ELEMENT_VALUE			-81
+#define H_INPUT_BUFFER_NOT_DEFINED		-82
+#define H_INPUT_BUFFER_TOO_SMALL		-83
+#define H_OUTPUT_BUFFER_NOT_DEFINED		-84
+#define H_OUTPUT_BUFFER_TOO_SMALL		-85
+#define H_PARTITION_PAGE_TABLE_NOT_DEFINED	-86
+#define H_GUEST_VCPU_STATE_NOT_HV_OWNED		-87
+
+
 #define H_UNSUPPORTED_FLAG_START	-256
 #define H_UNSUPPORTED_FLAG_END		-511
 #define H_MULTI_THREADS_ACTIVE		-9005
@@
 #define H_ENTER_NESTED		0xF804
 #define H_TLB_INVALIDATE	0xF808
 #define H_COPY_TOFROM_GUEST	0xF80C
+#define H_GUEST_GET_CAPABILITIES 0x460
+#define H_GUEST_SET_CAPABILITIES 0x464
+#define H_GUEST_CREATE		0x470
+#define H_GUEST_CREATE_VCPU	0x474
+#define H_GUEST_GET_STATE	0x478
+#define H_GUEST_SET_STATE	0x47C
+#define H_GUEST_RUN_VCPU	0x480
+#define H_GUEST_COPY_MEMORY	0x484
+#define H_GUEST_DELETE		0x488
 
 /* Flags for H_SVM_PAGE_IN */
 #define H_PAGE_IN_SHARED	0x1
@@
 #define H_RPTI_PAGE_2M	0x04
 #define H_RPTI_PAGE_1G	0x08
 #define H_RPTI_PAGE_ALL (-1UL)
+
+/* Flags for H_GUEST_{S,G}ET_STATE */
+#define H_GUEST_FLAGS_WIDE	(1UL << (63 - 0))
+
+/* Flag values used for H_GUEST_{S,G}ET_CAPABILITIES */
+#define H_GUEST_CAP_COPY_MEM	(1UL << (63 - 0))
+#define H_GUEST_CAP_POWER9	(1UL << (63 - 1))
+#define H_GUEST_CAP_POWER10	(1UL << (63 - 2))
+#define H_GUEST_CAP_BITMAP2	(1UL << (63 - 63))
 
 #ifndef __ASSEMBLY__
 #include <linux/types.h>
```
+116 -21
arch/powerpc/include/asm/kvm_book3s.h

```diff
 #include <linux/types.h>
 #include <linux/kvm_host.h>
 #include <asm/kvm_book3s_asm.h>
+#include <asm/guest-state-buffer.h>
 
 struct kvmppc_bat {
 	u64 raw;
@@
 static inline void kvmppc_restore_tm_sprs(struct kvm_vcpu *vcpu) {}
 #endif
 
+extern unsigned long nested_capabilities;
 long kvmhv_nested_init(void);
 void kvmhv_nested_exit(void);
 void kvmhv_vm_nested_init(struct kvm *kvm);
@@
 void kvmppc_giveup_fac(struct kvm_vcpu *vcpu, ulong fac);
 
+
+#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
+
+extern struct static_key_false __kvmhv_is_nestedv2;
+
+static inline bool kvmhv_is_nestedv2(void)
+{
+	return static_branch_unlikely(&__kvmhv_is_nestedv2);
+}
+
+static inline bool kvmhv_is_nestedv1(void)
+{
+	return !static_branch_likely(&__kvmhv_is_nestedv2);
+}
+
+#else
+
+static inline bool kvmhv_is_nestedv2(void)
+{
+	return false;
+}
+
+static inline bool kvmhv_is_nestedv1(void)
+{
+	return false;
+}
+
+#endif
+
+int __kvmhv_nestedv2_reload_ptregs(struct kvm_vcpu *vcpu, struct pt_regs *regs);
+int __kvmhv_nestedv2_mark_dirty_ptregs(struct kvm_vcpu *vcpu, struct pt_regs *regs);
+int __kvmhv_nestedv2_mark_dirty(struct kvm_vcpu *vcpu, u16 iden);
+int __kvmhv_nestedv2_cached_reload(struct kvm_vcpu *vcpu, u16 iden);
+
+static inline int kvmhv_nestedv2_reload_ptregs(struct kvm_vcpu *vcpu,
+					       struct pt_regs *regs)
+{
+	if (kvmhv_is_nestedv2())
+		return __kvmhv_nestedv2_reload_ptregs(vcpu, regs);
+	return 0;
+}
+
+static inline int kvmhv_nestedv2_mark_dirty_ptregs(struct kvm_vcpu *vcpu,
+						   struct pt_regs *regs)
+{
+	if (kvmhv_is_nestedv2())
+		return __kvmhv_nestedv2_mark_dirty_ptregs(vcpu, regs);
+	return 0;
+}
+
+static inline int kvmhv_nestedv2_mark_dirty(struct kvm_vcpu *vcpu, u16 iden)
+{
+	if (kvmhv_is_nestedv2())
+		return __kvmhv_nestedv2_mark_dirty(vcpu, iden);
+	return 0;
+}
+
+static inline int kvmhv_nestedv2_cached_reload(struct kvm_vcpu *vcpu, u16 iden)
+{
+	if (kvmhv_is_nestedv2())
+		return __kvmhv_nestedv2_cached_reload(vcpu, iden);
+	return 0;
+}
+
 extern int kvm_irq_bypass;
 
 static inline struct kvmppc_vcpu_book3s *to_book3s(struct kvm_vcpu *vcpu)
@@
 static inline void kvmppc_set_gpr(struct kvm_vcpu *vcpu, int num, ulong val)
 {
 	vcpu->arch.regs.gpr[num] = val;
+	kvmhv_nestedv2_mark_dirty(vcpu, KVMPPC_GSID_GPR(num));
 }
 
 static inline ulong kvmppc_get_gpr(struct kvm_vcpu *vcpu, int num)
 {
+	WARN_ON(kvmhv_nestedv2_cached_reload(vcpu, KVMPPC_GSID_GPR(num)) < 0);
 	return vcpu->arch.regs.gpr[num];
 }
 
 static inline void kvmppc_set_cr(struct kvm_vcpu *vcpu, u32 val)
 {
 	vcpu->arch.regs.ccr = val;
+	kvmhv_nestedv2_mark_dirty(vcpu, KVMPPC_GSID_CR);
 }
 
 static inline u32 kvmppc_get_cr(struct kvm_vcpu *vcpu)
 {
+	WARN_ON(kvmhv_nestedv2_cached_reload(vcpu, KVMPPC_GSID_CR) < 0);
 	return vcpu->arch.regs.ccr;
 }
 
 static inline void kvmppc_set_xer(struct kvm_vcpu *vcpu, ulong val)
 {
 	vcpu->arch.regs.xer = val;
+	kvmhv_nestedv2_mark_dirty(vcpu, KVMPPC_GSID_XER);
 }
 
 static inline ulong kvmppc_get_xer(struct kvm_vcpu *vcpu)
 {
+	WARN_ON(kvmhv_nestedv2_cached_reload(vcpu, KVMPPC_GSID_XER) < 0);
 	return vcpu->arch.regs.xer;
 }
 
 static inline void kvmppc_set_ctr(struct kvm_vcpu *vcpu, ulong val)
 {
 	vcpu->arch.regs.ctr = val;
+	kvmhv_nestedv2_mark_dirty(vcpu, KVMPPC_GSID_CTR);
 }
 
 static inline ulong kvmppc_get_ctr(struct kvm_vcpu *vcpu)
 {
+	WARN_ON(kvmhv_nestedv2_cached_reload(vcpu, KVMPPC_GSID_CTR) < 0);
 	return vcpu->arch.regs.ctr;
 }
 
 static inline void kvmppc_set_lr(struct kvm_vcpu *vcpu, ulong val)
 {
 	vcpu->arch.regs.link = val;
+	kvmhv_nestedv2_mark_dirty(vcpu, KVMPPC_GSID_LR);
 }
 
 static inline ulong kvmppc_get_lr(struct kvm_vcpu *vcpu)
 {
+	WARN_ON(kvmhv_nestedv2_cached_reload(vcpu, KVMPPC_GSID_LR) < 0);
 	return vcpu->arch.regs.link;
 }
 
 static inline void kvmppc_set_pc(struct kvm_vcpu *vcpu, ulong val)
 {
 	vcpu->arch.regs.nip = val;
+	kvmhv_nestedv2_mark_dirty(vcpu, KVMPPC_GSID_NIA);
 }
 
 static inline ulong kvmppc_get_pc(struct kvm_vcpu *vcpu)
 {
+	WARN_ON(kvmhv_nestedv2_cached_reload(vcpu, KVMPPC_GSID_NIA) < 0);
 	return vcpu->arch.regs.nip;
 }
@@
 static inline u64 kvmppc_get_fpr(struct kvm_vcpu *vcpu, int i)
 {
+	WARN_ON(kvmhv_nestedv2_cached_reload(vcpu, KVMPPC_GSID_VSRS(i)) < 0);
 	return vcpu->arch.fp.fpr[i][TS_FPROFFSET];
 }
 
 static inline void kvmppc_set_fpr(struct kvm_vcpu *vcpu, int i, u64 val)
 {
 	vcpu->arch.fp.fpr[i][TS_FPROFFSET] = val;
+	kvmhv_nestedv2_mark_dirty(vcpu, KVMPPC_GSID_VSRS(i));
 }
 
 static inline u64 kvmppc_get_fpscr(struct kvm_vcpu *vcpu)
 {
+	WARN_ON(kvmhv_nestedv2_cached_reload(vcpu, KVMPPC_GSID_FPSCR) < 0);
 	return vcpu->arch.fp.fpscr;
 }
 
 static inline void kvmppc_set_fpscr(struct kvm_vcpu *vcpu, u64 val)
 {
 	vcpu->arch.fp.fpscr = val;
+	kvmhv_nestedv2_mark_dirty(vcpu, KVMPPC_GSID_FPSCR);
 }
 
 
 static inline u64 kvmppc_get_vsx_fpr(struct kvm_vcpu *vcpu, int i, int j)
 {
+	WARN_ON(kvmhv_nestedv2_cached_reload(vcpu, KVMPPC_GSID_VSRS(i)) < 0);
 	return vcpu->arch.fp.fpr[i][j];
 }
@@
 				      u64 val)
 {
 	vcpu->arch.fp.fpr[i][j] = val;
+	kvmhv_nestedv2_mark_dirty(vcpu, KVMPPC_GSID_VSRS(i));
 }
 
 #ifdef CONFIG_ALTIVEC
 static inline void kvmppc_get_vsx_vr(struct kvm_vcpu *vcpu, int i, vector128 *v)
 {
+	WARN_ON(kvmhv_nestedv2_cached_reload(vcpu, KVMPPC_GSID_VSRS(32 + i)) < 0);
 	*v = vcpu->arch.vr.vr[i];
 }
@@
 				     vector128 *val)
 {
 	vcpu->arch.vr.vr[i] = *val;
+	kvmhv_nestedv2_mark_dirty(vcpu, KVMPPC_GSID_VSRS(32 + i));
 }
 
 static inline u32 kvmppc_get_vscr(struct kvm_vcpu *vcpu)
 {
+	WARN_ON(kvmhv_nestedv2_cached_reload(vcpu, KVMPPC_GSID_VSCR) < 0);
 	return vcpu->arch.vr.vscr.u[3];
 }
 
 static inline void kvmppc_set_vscr(struct kvm_vcpu *vcpu, u32 val)
 {
 	vcpu->arch.vr.vscr.u[3] = val;
+	kvmhv_nestedv2_mark_dirty(vcpu, KVMPPC_GSID_VSCR);
 }
 #endif
 
-#define KVMPPC_BOOK3S_VCPU_ACCESSOR_SET(reg, size)			\
+#define KVMPPC_BOOK3S_VCPU_ACCESSOR_SET(reg, size, iden)		\
 static inline void kvmppc_set_##reg(struct kvm_vcpu *vcpu, u##size val)	\
 {									\
 									\
 	vcpu->arch.reg = val;						\
+	kvmhv_nestedv2_mark_dirty(vcpu, iden);				\
 }
 
-#define KVMPPC_BOOK3S_VCPU_ACCESSOR_GET(reg, size)			\
+#define KVMPPC_BOOK3S_VCPU_ACCESSOR_GET(reg, size, iden)		\
 static inline u##size kvmppc_get_##reg(struct kvm_vcpu *vcpu)		\
 {									\
+	WARN_ON(kvmhv_nestedv2_cached_reload(vcpu, iden) < 0);		\
 	return vcpu->arch.reg;						\
 }
 
-#define KVMPPC_BOOK3S_VCPU_ACCESSOR(reg, size)				\
-	KVMPPC_BOOK3S_VCPU_ACCESSOR_SET(reg, size)			\
-	KVMPPC_BOOK3S_VCPU_ACCESSOR_GET(reg, size)			\
+#define KVMPPC_BOOK3S_VCPU_ACCESSOR(reg, size, iden)			\
+	KVMPPC_BOOK3S_VCPU_ACCESSOR_SET(reg, size, iden)		\
+	KVMPPC_BOOK3S_VCPU_ACCESSOR_GET(reg, size, iden)		\
 
-KVMPPC_BOOK3S_VCPU_ACCESSOR(pid, 32)
-KVMPPC_BOOK3S_VCPU_ACCESSOR(tar, 64)
-KVMPPC_BOOK3S_VCPU_ACCESSOR(ebbhr, 64)
-KVMPPC_BOOK3S_VCPU_ACCESSOR(ebbrr, 64)
-KVMPPC_BOOK3S_VCPU_ACCESSOR(bescr, 64)
-KVMPPC_BOOK3S_VCPU_ACCESSOR(ic, 64)
-KVMPPC_BOOK3S_VCPU_ACCESSOR(vrsave, 64)
+KVMPPC_BOOK3S_VCPU_ACCESSOR(pid, 32, KVMPPC_GSID_PIDR)
+KVMPPC_BOOK3S_VCPU_ACCESSOR(tar, 64, KVMPPC_GSID_TAR)
+KVMPPC_BOOK3S_VCPU_ACCESSOR(ebbhr, 64, KVMPPC_GSID_EBBHR)
+KVMPPC_BOOK3S_VCPU_ACCESSOR(ebbrr, 64, KVMPPC_GSID_EBBRR)
+KVMPPC_BOOK3S_VCPU_ACCESSOR(bescr, 64, KVMPPC_GSID_BESCR)
+KVMPPC_BOOK3S_VCPU_ACCESSOR(ic, 64, KVMPPC_GSID_IC)
+KVMPPC_BOOK3S_VCPU_ACCESSOR(vrsave, 64, KVMPPC_GSID_VRSAVE)
 
 
-#define KVMPPC_BOOK3S_VCORE_ACCESSOR_SET(reg, size)			\
+#define KVMPPC_BOOK3S_VCORE_ACCESSOR_SET(reg, size, iden)		\
 static inline void kvmppc_set_##reg(struct kvm_vcpu *vcpu, u##size val)	\
 {									\
 	vcpu->arch.vcore->reg = val;					\
+	kvmhv_nestedv2_mark_dirty(vcpu, iden);				\
 }
 
-#define KVMPPC_BOOK3S_VCORE_ACCESSOR_GET(reg, size)			\
+#define KVMPPC_BOOK3S_VCORE_ACCESSOR_GET(reg, size, iden)		\
 static inline u##size kvmppc_get_##reg(struct kvm_vcpu *vcpu)		\
 {									\
+	WARN_ON(kvmhv_nestedv2_cached_reload(vcpu, iden) < 0);		\
 	return vcpu->arch.vcore->reg;					\
 }
 
-#define KVMPPC_BOOK3S_VCORE_ACCESSOR(reg, size)				\
-	KVMPPC_BOOK3S_VCORE_ACCESSOR_SET(reg, size)			\
-	KVMPPC_BOOK3S_VCORE_ACCESSOR_GET(reg, size)			\
+#define KVMPPC_BOOK3S_VCORE_ACCESSOR(reg, size, iden)			\
+	KVMPPC_BOOK3S_VCORE_ACCESSOR_SET(reg, size, iden)		\
+	KVMPPC_BOOK3S_VCORE_ACCESSOR_GET(reg, size, iden)		\
 
 
-KVMPPC_BOOK3S_VCORE_ACCESSOR(vtb, 64)
-KVMPPC_BOOK3S_VCORE_ACCESSOR(tb_offset, 64)
-KVMPPC_BOOK3S_VCORE_ACCESSOR_GET(arch_compat, 32)
-KVMPPC_BOOK3S_VCORE_ACCESSOR_GET(lpcr, 64)
+KVMPPC_BOOK3S_VCORE_ACCESSOR(vtb, 64, KVMPPC_GSID_VTB)
+KVMPPC_BOOK3S_VCORE_ACCESSOR(tb_offset, 64, KVMPPC_GSID_TB_OFFSET)
+KVMPPC_BOOK3S_VCORE_ACCESSOR_GET(arch_compat, 32, KVMPPC_GSID_LOGICAL_PVR)
+KVMPPC_BOOK3S_VCORE_ACCESSOR_GET(lpcr, 64, KVMPPC_GSID_LPCR)
 
 static inline u64 kvmppc_get_dec_expires(struct kvm_vcpu *vcpu)
 {
+	WARN_ON(kvmhv_nestedv2_cached_reload(vcpu, KVMPPC_GSID_TB_OFFSET) < 0);
+	WARN_ON(kvmhv_nestedv2_cached_reload(vcpu, KVMPPC_GSID_DEC_EXPIRY_TB) < 0);
 	return vcpu->arch.dec_expires;
 }
 
 static inline void kvmppc_set_dec_expires(struct kvm_vcpu *vcpu, u64 val)
 {
 	vcpu->arch.dec_expires = val;
+	WARN_ON(kvmhv_nestedv2_cached_reload(vcpu, KVMPPC_GSID_TB_OFFSET) < 0);
+	kvmhv_nestedv2_mark_dirty(vcpu, KVMPPC_GSID_DEC_EXPIRY_TB);
 }
 
 /* Expiry time of vcpu DEC relative to host TB */
```
+6
arch/powerpc/include/asm/kvm_book3s_64.h

```diff
 extern pte_t *find_kvm_nested_guest_pte(struct kvm *kvm, unsigned long lpid,
 					unsigned long ea, unsigned *hshift);
 
+int kvmhv_nestedv2_vcpu_create(struct kvm_vcpu *vcpu, struct kvmhv_nestedv2_io *io);
+void kvmhv_nestedv2_vcpu_free(struct kvm_vcpu *vcpu, struct kvmhv_nestedv2_io *io);
+int kvmhv_nestedv2_flush_vcpu(struct kvm_vcpu *vcpu, u64 time_limit);
+int kvmhv_nestedv2_set_ptbl_entry(unsigned long lpid, u64 dw0, u64 dw1);
+int kvmhv_nestedv2_parse_output(struct kvm_vcpu *vcpu);
+
 #endif /* CONFIG_KVM_BOOK3S_HV_POSSIBLE */
 
 #endif /* __ASM_KVM_BOOK3S_64_H__ */
```
+20
arch/powerpc/include/asm/kvm_host.h

```diff
 #include <asm/cacheflush.h>
 #include <asm/hvcall.h>
 #include <asm/mce.h>
+#include <asm/guest-state-buffer.h>
 
 #define __KVM_HAVE_ARCH_VCPU_DEBUGFS
@@
 	__be64 w01;
 };
 
+/* Nestedv2 H_GUEST_RUN_VCPU configuration */
+struct kvmhv_nestedv2_config {
+	struct kvmppc_gs_buff_info vcpu_run_output_cfg;
+	struct kvmppc_gs_buff_info vcpu_run_input_cfg;
+	u64 vcpu_run_output_size;
+};
+
+/* Nestedv2 L1<->L0 communication state */
+struct kvmhv_nestedv2_io {
+	struct kvmhv_nestedv2_config cfg;
+	struct kvmppc_gs_buff *vcpu_run_output;
+	struct kvmppc_gs_buff *vcpu_run_input;
+	struct kvmppc_gs_msg *vcpu_message;
+	struct kvmppc_gs_msg *vcore_message;
+	struct kvmppc_gs_bitmap valids;
+};
+
 struct kvm_vcpu_arch {
 	ulong host_stack;
 	u32 host_pid;
@@
 	u64 nested_hfscr;	/* HFSCR that the L1 requested for the nested guest */
 	u32 nested_vcpu_id;
 	gpa_t nested_io_gpr;
+	/* For nested APIv2 guests */
+	struct kvmhv_nestedv2_io nestedv2_io;
 #endif
 
 #ifdef CONFIG_KVM_BOOK3S_HV_EXIT_TIMING
```
+66 -24
arch/powerpc/include/asm/kvm_ppc.h

```diff
 {
 	return false;
 }
+
+#endif
+
+#ifndef CONFIG_PPC_BOOK3S
+
+static inline bool kvmhv_is_nestedv2(void)
+{
+	return false;
+}
+
+static inline bool kvmhv_is_nestedv1(void)
+{
+	return false;
+}
+
+static inline int kvmhv_nestedv2_reload_ptregs(struct kvm_vcpu *vcpu,
+					       struct pt_regs *regs)
+{
+	return 0;
+}
+
+static inline int kvmhv_nestedv2_mark_dirty_ptregs(struct kvm_vcpu *vcpu,
+						   struct pt_regs *regs)
+{
+	return 0;
+}
+
+static inline int kvmhv_nestedv2_mark_dirty(struct kvm_vcpu *vcpu, u16 iden)
+{
+	return 0;
+}
+
+static inline int kvmhv_nestedv2_cached_reload(struct kvm_vcpu *vcpu, u16 iden)
+{
+	return 0;
+}
+
 #endif
 
 #ifdef CONFIG_KVM_XICS
@@
 	mtspr(bookehv_spr, val);					\
 }									\
 
-#define KVMPPC_VCPU_SHARED_REGS_ACCESSOR_GET(reg, size)			\
+#define KVMPPC_VCPU_SHARED_REGS_ACCESSOR_GET(reg, size, iden)		\
 static inline u##size kvmppc_get_##reg(struct kvm_vcpu *vcpu)		\
 {									\
+	if (iden)							\
+		WARN_ON(kvmhv_nestedv2_cached_reload(vcpu, iden) < 0);	\
 	if (kvmppc_shared_big_endian(vcpu))				\
 		return be##size##_to_cpu(vcpu->arch.shared->reg);	\
 	else								\
 		return le##size##_to_cpu(vcpu->arch.shared->reg);	\
 }									\
 
-#define KVMPPC_VCPU_SHARED_REGS_ACCESSOR_SET(reg, size)			\
+#define KVMPPC_VCPU_SHARED_REGS_ACCESSOR_SET(reg, size, iden)		\
 static inline void kvmppc_set_##reg(struct kvm_vcpu *vcpu, u##size val)	\
 {									\
 	if (kvmppc_shared_big_endian(vcpu))				\
 		vcpu->arch.shared->reg = cpu_to_be##size(val);		\
 	else								\
 		vcpu->arch.shared->reg = cpu_to_le##size(val);		\
+									\
+	if (iden)							\
+		kvmhv_nestedv2_mark_dirty(vcpu, iden);			\
 }									\
 
-#define KVMPPC_VCPU_SHARED_REGS_ACCESSOR(reg, size)			\
-	KVMPPC_VCPU_SHARED_REGS_ACCESSOR_GET(reg, size)			\
-	KVMPPC_VCPU_SHARED_REGS_ACCESSOR_SET(reg, size)			\
+#define KVMPPC_VCPU_SHARED_REGS_ACCESSOR(reg, size, iden)		\
+	KVMPPC_VCPU_SHARED_REGS_ACCESSOR_GET(reg, size, iden)		\
+	KVMPPC_VCPU_SHARED_REGS_ACCESSOR_SET(reg, size, iden)		\
 
 #define KVMPPC_BOOKE_HV_SPRNG_ACCESSOR(reg, bookehv_spr)		\
 	KVMPPC_BOOKE_HV_SPRNG_ACCESSOR_GET(reg, bookehv_spr)		\
@@
 #ifdef CONFIG_KVM_BOOKE_HV
 
-#define KVMPPC_BOOKE_HV_SPRNG_OR_VCPU_SHARED_REGS_ACCESSOR(reg, size, bookehv_spr) \
+#define KVMPPC_BOOKE_HV_SPRNG_OR_VCPU_SHARED_REGS_ACCESSOR(reg, size, bookehv_spr, iden) \
 	KVMPPC_BOOKE_HV_SPRNG_ACCESSOR(reg, bookehv_spr)		\
 
 #else
 
-#define KVMPPC_BOOKE_HV_SPRNG_OR_VCPU_SHARED_REGS_ACCESSOR(reg, size, bookehv_spr) \
-	KVMPPC_VCPU_SHARED_REGS_ACCESSOR(reg, size)			\
+#define KVMPPC_BOOKE_HV_SPRNG_OR_VCPU_SHARED_REGS_ACCESSOR(reg, size, bookehv_spr, iden) \
+	KVMPPC_VCPU_SHARED_REGS_ACCESSOR(reg, size, iden)		\
 
 #endif
 
-KVMPPC_VCPU_SHARED_REGS_ACCESSOR(critical, 64)
-KVMPPC_BOOKE_HV_SPRNG_OR_VCPU_SHARED_REGS_ACCESSOR(sprg0, 64, SPRN_GSPRG0)
-KVMPPC_BOOKE_HV_SPRNG_OR_VCPU_SHARED_REGS_ACCESSOR(sprg1, 64, SPRN_GSPRG1)
-KVMPPC_BOOKE_HV_SPRNG_OR_VCPU_SHARED_REGS_ACCESSOR(sprg2, 64, SPRN_GSPRG2)
-KVMPPC_BOOKE_HV_SPRNG_OR_VCPU_SHARED_REGS_ACCESSOR(sprg3, 64, SPRN_GSPRG3)
-KVMPPC_BOOKE_HV_SPRNG_OR_VCPU_SHARED_REGS_ACCESSOR(srr0, 64, SPRN_GSRR0)
-KVMPPC_BOOKE_HV_SPRNG_OR_VCPU_SHARED_REGS_ACCESSOR(srr1, 64, SPRN_GSRR1)
-KVMPPC_BOOKE_HV_SPRNG_OR_VCPU_SHARED_REGS_ACCESSOR(dar, 64, SPRN_GDEAR)
-KVMPPC_BOOKE_HV_SPRNG_OR_VCPU_SHARED_REGS_ACCESSOR(esr, 64, SPRN_GESR)
-KVMPPC_VCPU_SHARED_REGS_ACCESSOR_GET(msr, 64)
+KVMPPC_VCPU_SHARED_REGS_ACCESSOR(critical, 64, 0)
+KVMPPC_BOOKE_HV_SPRNG_OR_VCPU_SHARED_REGS_ACCESSOR(sprg0, 64, SPRN_GSPRG0, KVMPPC_GSID_SPRG0)
+KVMPPC_BOOKE_HV_SPRNG_OR_VCPU_SHARED_REGS_ACCESSOR(sprg1, 64, SPRN_GSPRG1, KVMPPC_GSID_SPRG1)
+KVMPPC_BOOKE_HV_SPRNG_OR_VCPU_SHARED_REGS_ACCESSOR(sprg2, 64, SPRN_GSPRG2, KVMPPC_GSID_SPRG2)
+KVMPPC_BOOKE_HV_SPRNG_OR_VCPU_SHARED_REGS_ACCESSOR(sprg3, 64, SPRN_GSPRG3, KVMPPC_GSID_SPRG3)
+KVMPPC_BOOKE_HV_SPRNG_OR_VCPU_SHARED_REGS_ACCESSOR(srr0, 64, SPRN_GSRR0, KVMPPC_GSID_SRR0)
+KVMPPC_BOOKE_HV_SPRNG_OR_VCPU_SHARED_REGS_ACCESSOR(srr1, 64, SPRN_GSRR1, KVMPPC_GSID_SRR1)
+KVMPPC_BOOKE_HV_SPRNG_OR_VCPU_SHARED_REGS_ACCESSOR(dar, 64, SPRN_GDEAR, KVMPPC_GSID_DAR)
+KVMPPC_BOOKE_HV_SPRNG_OR_VCPU_SHARED_REGS_ACCESSOR(esr, 64, SPRN_GESR, 0)
+KVMPPC_VCPU_SHARED_REGS_ACCESSOR_GET(msr, 64, KVMPPC_GSID_MSR)
 static inline void kvmppc_set_msr_fast(struct kvm_vcpu *vcpu, u64 val)
 {
 	if (kvmppc_shared_big_endian(vcpu))
 		vcpu->arch.shared->msr = cpu_to_be64(val);
 	else
 		vcpu->arch.shared->msr = cpu_to_le64(val);
+	kvmhv_nestedv2_mark_dirty(vcpu, KVMPPC_GSID_MSR);
 }
-KVMPPC_VCPU_SHARED_REGS_ACCESSOR(dsisr, 32)
-KVMPPC_VCPU_SHARED_REGS_ACCESSOR(int_pending, 32)
-KVMPPC_VCPU_SHARED_REGS_ACCESSOR(sprg4, 64)
-KVMPPC_VCPU_SHARED_REGS_ACCESSOR(sprg5, 64)
-KVMPPC_VCPU_SHARED_REGS_ACCESSOR(sprg6, 64)
-KVMPPC_VCPU_SHARED_REGS_ACCESSOR(sprg7, 64)
+KVMPPC_VCPU_SHARED_REGS_ACCESSOR(dsisr, 32, KVMPPC_GSID_DSISR)
+KVMPPC_VCPU_SHARED_REGS_ACCESSOR(int_pending, 32, 0)
+KVMPPC_VCPU_SHARED_REGS_ACCESSOR(sprg4, 64, 0)
+KVMPPC_VCPU_SHARED_REGS_ACCESSOR(sprg5, 64, 0)
+KVMPPC_VCPU_SHARED_REGS_ACCESSOR(sprg6, 64, 0)
+KVMPPC_VCPU_SHARED_REGS_ACCESSOR(sprg7, 64, 0)
 
 static inline u32 kvmppc_get_sr(struct kvm_vcpu *vcpu, int nr)
 {
```
+263
arch/powerpc/include/asm/plpar_wrappers.h
··· 6 6 7 7 #include <linux/string.h> 8 8 #include <linux/irqflags.h> 9 + #include <linux/delay.h> 9 10 10 11 #include <asm/hvcall.h> 11 12 #include <asm/paca.h> ··· 344 343 return rc; 345 344 } 346 345 346 + static inline long plpar_guest_create(unsigned long flags, unsigned long *guest_id) 347 + { 348 + unsigned long retbuf[PLPAR_HCALL_BUFSIZE]; 349 + unsigned long token; 350 + long rc; 351 + 352 + token = -1UL; 353 + do { 354 + rc = plpar_hcall(H_GUEST_CREATE, retbuf, flags, token); 355 + if (rc == H_SUCCESS) 356 + *guest_id = retbuf[0]; 357 + 358 + if (rc == H_BUSY) { 359 + token = retbuf[0]; 360 + cond_resched(); 361 + } 362 + 363 + if (H_IS_LONG_BUSY(rc)) { 364 + token = retbuf[0]; 365 + msleep(get_longbusy_msecs(rc)); 366 + rc = H_BUSY; 367 + } 368 + 369 + } while (rc == H_BUSY); 370 + 371 + return rc; 372 + } 373 + 374 + static inline long plpar_guest_create_vcpu(unsigned long flags, 375 + unsigned long guest_id, 376 + unsigned long vcpu_id) 377 + { 378 + long rc; 379 + 380 + do { 381 + rc = plpar_hcall_norets(H_GUEST_CREATE_VCPU, 0, guest_id, vcpu_id); 382 + 383 + if (rc == H_BUSY) 384 + cond_resched(); 385 + 386 + if (H_IS_LONG_BUSY(rc)) { 387 + msleep(get_longbusy_msecs(rc)); 388 + rc = H_BUSY; 389 + } 390 + 391 + } while (rc == H_BUSY); 392 + 393 + return rc; 394 + } 395 + 396 + static inline long plpar_guest_set_state(unsigned long flags, 397 + unsigned long guest_id, 398 + unsigned long vcpu_id, 399 + unsigned long data_buffer, 400 + unsigned long data_size, 401 + unsigned long *failed_index) 402 + { 403 + unsigned long retbuf[PLPAR_HCALL_BUFSIZE]; 404 + long rc; 405 + 406 + while (true) { 407 + rc = plpar_hcall(H_GUEST_SET_STATE, retbuf, flags, guest_id, 408 + vcpu_id, data_buffer, data_size); 409 + 410 + if (rc == H_BUSY) { 411 + cpu_relax(); 412 + continue; 413 + } 414 + 415 + if (H_IS_LONG_BUSY(rc)) { 416 + mdelay(get_longbusy_msecs(rc)); 417 + continue; 418 + } 419 + 420 + if (rc == H_INVALID_ELEMENT_ID) 421 + *failed_index = retbuf[0]; 422 + 
else if (rc == H_INVALID_ELEMENT_SIZE) 423 + *failed_index = retbuf[0]; 424 + else if (rc == H_INVALID_ELEMENT_VALUE) 425 + *failed_index = retbuf[0]; 426 + 427 + break; 428 + } 429 + 430 + return rc; 431 + } 432 + 433 + static inline long plpar_guest_get_state(unsigned long flags, 434 + unsigned long guest_id, 435 + unsigned long vcpu_id, 436 + unsigned long data_buffer, 437 + unsigned long data_size, 438 + unsigned long *failed_index) 439 + { 440 + unsigned long retbuf[PLPAR_HCALL_BUFSIZE]; 441 + long rc; 442 + 443 + while (true) { 444 + rc = plpar_hcall(H_GUEST_GET_STATE, retbuf, flags, guest_id, 445 + vcpu_id, data_buffer, data_size); 446 + 447 + if (rc == H_BUSY) { 448 + cpu_relax(); 449 + continue; 450 + } 451 + 452 + if (H_IS_LONG_BUSY(rc)) { 453 + mdelay(get_longbusy_msecs(rc)); 454 + continue; 455 + } 456 + 457 + if (rc == H_INVALID_ELEMENT_ID) 458 + *failed_index = retbuf[0]; 459 + else if (rc == H_INVALID_ELEMENT_SIZE) 460 + *failed_index = retbuf[0]; 461 + else if (rc == H_INVALID_ELEMENT_VALUE) 462 + *failed_index = retbuf[0]; 463 + 464 + break; 465 + } 466 + 467 + return rc; 468 + } 469 + 470 + static inline long plpar_guest_run_vcpu(unsigned long flags, unsigned long guest_id, 471 + unsigned long vcpu_id, int *trap, 472 + unsigned long *failed_index) 473 + { 474 + unsigned long retbuf[PLPAR_HCALL_BUFSIZE]; 475 + long rc; 476 + 477 + rc = plpar_hcall(H_GUEST_RUN_VCPU, retbuf, flags, guest_id, vcpu_id); 478 + if (rc == H_SUCCESS) 479 + *trap = retbuf[0]; 480 + else if (rc == H_INVALID_ELEMENT_ID) 481 + *failed_index = retbuf[0]; 482 + else if (rc == H_INVALID_ELEMENT_SIZE) 483 + *failed_index = retbuf[0]; 484 + else if (rc == H_INVALID_ELEMENT_VALUE) 485 + *failed_index = retbuf[0]; 486 + 487 + return rc; 488 + } 489 + 490 + static inline long plpar_guest_delete(unsigned long flags, u64 guest_id) 491 + { 492 + long rc; 493 + 494 + do { 495 + rc = plpar_hcall_norets(H_GUEST_DELETE, flags, guest_id); 496 + if (rc == H_BUSY) 497 + cond_resched(); 498 + 
+		if (H_IS_LONG_BUSY(rc)) {
+			msleep(get_longbusy_msecs(rc));
+			rc = H_BUSY;
+		}
+
+	} while (rc == H_BUSY);
+
+	return rc;
+}
+
+static inline long plpar_guest_set_capabilities(unsigned long flags,
+						unsigned long capabilities)
+{
+	unsigned long retbuf[PLPAR_HCALL_BUFSIZE];
+	long rc;
+
+	do {
+		rc = plpar_hcall(H_GUEST_SET_CAPABILITIES, retbuf, flags, capabilities);
+		if (rc == H_BUSY)
+			cond_resched();
+
+		if (H_IS_LONG_BUSY(rc)) {
+			msleep(get_longbusy_msecs(rc));
+			rc = H_BUSY;
+		}
+	} while (rc == H_BUSY);
+
+	return rc;
+}
+
+static inline long plpar_guest_get_capabilities(unsigned long flags,
+						unsigned long *capabilities)
+{
+	unsigned long retbuf[PLPAR_HCALL_BUFSIZE];
+	long rc;
+
+	do {
+		rc = plpar_hcall(H_GUEST_GET_CAPABILITIES, retbuf, flags);
+		if (rc == H_BUSY)
+			cond_resched();
+
+		if (H_IS_LONG_BUSY(rc)) {
+			msleep(get_longbusy_msecs(rc));
+			rc = H_BUSY;
+		}
+	} while (rc == H_BUSY);
+
+	if (rc == H_SUCCESS)
+		*capabilities = retbuf[0];
+
+	return rc;
+}
+
 /*
  * Wrapper to H_RPT_INVALIDATE hcall that handles return values appropriately
  *
···
 
 static inline long pseries_rpt_invalidate(u64 pid, u64 target, u64 type,
 					  u64 page_sizes, u64 start, u64 end)
+{
+	return 0;
+}
+
+static inline long plpar_guest_create_vcpu(unsigned long flags,
+					   unsigned long guest_id,
+					   unsigned long vcpu_id)
+{
+	return 0;
+}
+
+static inline long plpar_guest_get_state(unsigned long flags,
+					 unsigned long guest_id,
+					 unsigned long vcpu_id,
+					 unsigned long data_buffer,
+					 unsigned long data_size,
+					 unsigned long *failed_index)
+{
+	return 0;
+}
+
+static inline long plpar_guest_set_state(unsigned long flags,
+					 unsigned long guest_id,
+					 unsigned long vcpu_id,
+					 unsigned long data_buffer,
+					 unsigned long data_size,
+					 unsigned long *failed_index)
+{
+	return 0;
+}
+
+static inline long plpar_guest_run_vcpu(unsigned long flags, unsigned long guest_id,
+					unsigned long vcpu_id, int *trap,
+					unsigned long *failed_index)
+{
+	return 0;
+}
+
+static inline long plpar_guest_create(unsigned long flags, unsigned long *guest_id)
+{
+	return 0;
+}
+
+static inline long plpar_guest_delete(unsigned long flags, u64 guest_id)
+{
+	return 0;
+}
+
+static inline long plpar_guest_get_capabilities(unsigned long flags,
+						unsigned long *capabilities)
+{
+	return 0;
+}
+
+static inline long plpar_guest_set_capabilities(unsigned long flags,
+						unsigned long capabilities)
 {
 	return 0;
 }
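Several of these wrappers share one retry discipline: spin with cpu_relax() on H_BUSY, sleep for the hint carried by a "long busy" status, and only report any other code to the caller. A minimal user-space sketch of that loop, with the hcall and its status codes mocked (`fake_hcall`, `busy_count` and the numeric values are invented for illustration and are not the real PAPR values):

```c
#include <assert.h>

/* Mocked hypervisor status codes; illustrative values only. */
#define H_SUCCESS 0
#define H_BUSY    1
/* Long-busy statuses occupy a small range in PAPR; mocked here. */
#define H_IS_LONG_BUSY(rc) ((rc) >= 9900 && (rc) <= 9905)

static int busy_count;	/* how many H_BUSY replies remain before success */

/* Stand-in for plpar_hcall(): reply busy a few times, then succeed. */
static long fake_hcall(void)
{
	if (busy_count > 0) {
		busy_count--;
		return H_BUSY;
	}
	return H_SUCCESS;
}

/* Same shape as the wrappers above: retry until a terminal status.
 * The kernel versions cond_resched()/cpu_relax() on H_BUSY and
 * msleep(get_longbusy_msecs(rc)) before folding long-busy into H_BUSY. */
static long call_with_retry(void)
{
	long rc;

	do {
		rc = fake_hcall();
		if (H_IS_LONG_BUSY(rc))
			rc = H_BUSY;
	} while (rc == H_BUSY);

	return rc;
}
```

The terminal statuses (H_INVALID_ELEMENT_* and friends) are deliberately left to the caller, which is why the real wrappers copy retbuf[0] into *failed_index before breaking out.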
+1 arch/powerpc/kvm/Makefile
···
 	book3s_hv_ras.o \
 	book3s_hv_builtin.o \
 	book3s_hv_p9_perf.o \
+	book3s_hv_nestedv2.o \
 	guest-state-buffer.o \
 	$(kvm-book3s_64-builtin-tm-objs-y) \
 	$(kvm-book3s_64-builtin-xics-objs-y)
+124 -10 arch/powerpc/kvm/book3s_hv.c
···
 
 static int kvmppc_set_arch_compat(struct kvm_vcpu *vcpu, u32 arch_compat)
 {
-	unsigned long host_pcr_bit = 0, guest_pcr_bit = 0;
+	unsigned long host_pcr_bit = 0, guest_pcr_bit = 0, cap = 0;
 	struct kvmppc_vcore *vc = vcpu->arch.vcore;
 
 	/* We can (emulate) our own architecture version and anything older */
···
 		break;
 	case PVR_ARCH_300:
 		guest_pcr_bit = PCR_ARCH_300;
+		cap = H_GUEST_CAP_POWER9;
 		break;
 	case PVR_ARCH_31:
 		guest_pcr_bit = PCR_ARCH_31;
+		cap = H_GUEST_CAP_POWER10;
 		break;
 	default:
 		return -EINVAL;
···
 	if (guest_pcr_bit > host_pcr_bit)
 		return -EINVAL;
 
+	if (kvmhv_on_pseries() && kvmhv_is_nestedv2()) {
+		if (!(cap & nested_capabilities))
+			return -EINVAL;
+	}
+
 	spin_lock(&vc->lock);
 	vc->arch_compat = arch_compat;
+	kvmhv_nestedv2_mark_dirty(vcpu, KVMPPC_GSID_LOGICAL_PVR);
 	/*
 	 * Set all PCR bits for which guest_pcr_bit <= bit < host_pcr_bit
 	 * Also set all reserved PCR bits
···
 	}
 
 	vc->lpcr = new_lpcr;
+	kvmhv_nestedv2_mark_dirty(vcpu, KVMPPC_GSID_LPCR);
 
 	spin_unlock(&vc->lock);
 }
···
 	vcpu->arch.shared_big_endian = false;
 #endif
 #endif
-	kvmppc_set_mmcr_hv(vcpu, 0, MMCR0_FC);
 
+	if (kvmhv_is_nestedv2()) {
+		err = kvmhv_nestedv2_vcpu_create(vcpu, &vcpu->arch.nestedv2_io);
+		if (err < 0)
+			return err;
+	}
+
+	kvmppc_set_mmcr_hv(vcpu, 0, MMCR0_FC);
 	if (cpu_has_feature(CPU_FTR_ARCH_31)) {
 		kvmppc_set_mmcr_hv(vcpu, 0, kvmppc_get_mmcr_hv(vcpu, 0) | MMCR0_PMCCEXT);
 		kvmppc_set_mmcra_hv(vcpu, MMCRA_BHRB_DISABLE);
···
 	unpin_vpa(vcpu->kvm, &vcpu->arch.slb_shadow);
 	unpin_vpa(vcpu->kvm, &vcpu->arch.vpa);
 	spin_unlock(&vcpu->arch.vpa_update_lock);
+	if (kvmhv_is_nestedv2())
+		kvmhv_nestedv2_vcpu_free(vcpu, &vcpu->arch.nestedv2_io);
 }
 
 static int kvmppc_core_check_requests_hv(struct kvm_vcpu *vcpu)
···
 	}
 }
 
+static int kvmhv_vcpu_entry_nestedv2(struct kvm_vcpu *vcpu, u64 time_limit,
+				     unsigned long lpcr, u64 *tb)
+{
+	struct kvmhv_nestedv2_io *io;
+	unsigned long msr, i;
+	int trap;
+	long rc;
+
+	io = &vcpu->arch.nestedv2_io;
+
+	msr = mfmsr();
+	kvmppc_msr_hard_disable_set_facilities(vcpu, msr);
+	if (lazy_irq_pending())
+		return 0;
+
+	rc = kvmhv_nestedv2_flush_vcpu(vcpu, time_limit);
+	if (rc < 0)
+		return -EINVAL;
+
+	accumulate_time(vcpu, &vcpu->arch.in_guest);
+	rc = plpar_guest_run_vcpu(0, vcpu->kvm->arch.lpid, vcpu->vcpu_id,
+				  &trap, &i);
+
+	if (rc != H_SUCCESS) {
+		pr_err("KVM Guest Run VCPU hcall failed\n");
+		if (rc == H_INVALID_ELEMENT_ID)
+			pr_err("KVM: Guest Run VCPU invalid element id at %ld\n", i);
+		else if (rc == H_INVALID_ELEMENT_SIZE)
+			pr_err("KVM: Guest Run VCPU invalid element size at %ld\n", i);
+		else if (rc == H_INVALID_ELEMENT_VALUE)
+			pr_err("KVM: Guest Run VCPU invalid element value at %ld\n", i);
+		return -EINVAL;
+	}
+	accumulate_time(vcpu, &vcpu->arch.guest_exit);
+
+	*tb = mftb();
+	kvmppc_gsm_reset(io->vcpu_message);
+	kvmppc_gsm_reset(io->vcore_message);
+	kvmppc_gsbm_zero(&io->valids);
+
+	rc = kvmhv_nestedv2_parse_output(vcpu);
+	if (rc < 0)
+		return -EINVAL;
+
+	timer_rearm_host_dec(*tb);
+
+	return trap;
+}
+
 /* call our hypervisor to load up HV regs and go */
 static int kvmhv_vcpu_entry_p9_nested(struct kvm_vcpu *vcpu, u64 time_limit, unsigned long lpcr, u64 *tb)
 {
···
 	vcpu_vpa_increment_dispatch(vcpu);
 
 	if (kvmhv_on_pseries()) {
-		trap = kvmhv_vcpu_entry_p9_nested(vcpu, time_limit, lpcr, tb);
+		if (kvmhv_is_nestedv1())
+			trap = kvmhv_vcpu_entry_p9_nested(vcpu, time_limit, lpcr, tb);
+		else
+			trap = kvmhv_vcpu_entry_nestedv2(vcpu, time_limit, lpcr, tb);
 
 		/* H_CEDE has to be handled now, not later */
 		if (trap == BOOK3S_INTERRUPT_SYSCALL && !nested &&
···
 		if (++cores_done >= kvm->arch.online_vcores)
 			break;
 	}
+
+	if (kvmhv_is_nestedv2()) {
+		struct kvm_vcpu *vcpu;
+
+		kvm_for_each_vcpu(i, vcpu, kvm) {
+			kvmhv_nestedv2_mark_dirty(vcpu, KVMPPC_GSID_LPCR);
+		}
+	}
 }
 
 void kvmppc_setup_partition_table(struct kvm *kvm)
···
 
 	/* Allocate the guest's logical partition ID */
 
-	lpid = kvmppc_alloc_lpid();
-	if ((long)lpid < 0)
-		return -ENOMEM;
-	kvm->arch.lpid = lpid;
+	if (!kvmhv_is_nestedv2()) {
+		lpid = kvmppc_alloc_lpid();
+		if ((long)lpid < 0)
+			return -ENOMEM;
+		kvm->arch.lpid = lpid;
+	}
 
 	kvmppc_alloc_host_rm_ops();
 
 	kvmhv_vm_nested_init(kvm);
+
+	if (kvmhv_is_nestedv2()) {
+		long rc;
+		unsigned long guest_id;
+
+		rc = plpar_guest_create(0, &guest_id);
+
+		if (rc != H_SUCCESS)
+			pr_err("KVM: Create Guest hcall failed, rc=%ld\n", rc);
+
+		switch (rc) {
+		case H_PARAMETER:
+		case H_FUNCTION:
+		case H_STATE:
+			return -EINVAL;
+		case H_NOT_ENOUGH_RESOURCES:
+		case H_ABORTED:
+			return -ENOMEM;
+		case H_AUTHORITY:
+			return -EPERM;
+		case H_NOT_AVAILABLE:
+			return -EBUSY;
+		}
+		kvm->arch.lpid = guest_id;
+	}
+
 
 	/*
 	 * Since we don't flush the TLB when tearing down a VM,
···
 			lpcr |= LPCR_HAIL;
 		ret = kvmppc_init_vm_radix(kvm);
 		if (ret) {
-			kvmppc_free_lpid(kvm->arch.lpid);
+			if (kvmhv_is_nestedv2())
+				plpar_guest_delete(0, kvm->arch.lpid);
+			else
+				kvmppc_free_lpid(kvm->arch.lpid);
 			return ret;
 		}
 		kvmppc_setup_partition_table(kvm);
···
 		kvm->arch.process_table = 0;
 		if (kvm->arch.secure_guest)
 			uv_svm_terminate(kvm->arch.lpid);
-		kvmhv_set_ptbl_entry(kvm->arch.lpid, 0, 0);
+		if (!kvmhv_is_nestedv2())
+			kvmhv_set_ptbl_entry(kvm->arch.lpid, 0, 0);
 	}
 
-	kvmppc_free_lpid(kvm->arch.lpid);
+	if (kvmhv_is_nestedv2())
+		plpar_guest_delete(0, kvm->arch.lpid);
+	else
+		kvmppc_free_lpid(kvm->arch.lpid);
 
 	kvmppc_free_pimap(kvm);
 }
···
 	if (!cpu_has_feature(CPU_FTR_ARCH_300))
 		return -ENODEV;
 	if (!radix_enabled())
+		return -ENODEV;
+	if (kvmhv_is_nestedv2())
 		return -ENODEV;
 
 	/* kvm == NULL means the caller is testing if the capability exists */
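The switch added to VM creation above collapses the H_GUEST_CREATE status codes into a small set of errnos. That translation can be lifted out into a user-space sketch (the H_* constants are mocked with enum values here; the real ones live in asm/hvcall.h and differ):

```c
#include <assert.h>
#include <errno.h>

/* Mocked PAPR status codes; real values are defined in asm/hvcall.h. */
enum { H_SUCCESS, H_PARAMETER, H_FUNCTION, H_STATE,
       H_NOT_ENOUGH_RESOURCES, H_ABORTED, H_AUTHORITY, H_NOT_AVAILABLE };

/* Translate an H_GUEST_CREATE status into a kernel-style negative errno,
 * mirroring the switch in the patch above. */
static int guest_create_rc_to_errno(long rc)
{
	switch (rc) {
	case H_PARAMETER:
	case H_FUNCTION:
	case H_STATE:
		return -EINVAL;
	case H_NOT_ENOUGH_RESOURCES:
	case H_ABORTED:
		return -ENOMEM;
	case H_AUTHORITY:
		return -EPERM;
	case H_NOT_AVAILABLE:
		return -EBUSY;
	}
	return 0;	/* H_SUCCESS falls through the switch in the patch too */
}
```

Note that in the patch the success path simply falls out of the switch and records the returned handle in kvm->arch.lpid, which from then on names the L2 in every other nestedv2 hcall.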
+41 -33 arch/powerpc/kvm/book3s_hv.h
···
 
 /*
  * Privileged (non-hypervisor) host registers to save.
  */
+#include "asm/guest-state-buffer.h"
+
 struct p9_host_os_sprs {
 	unsigned long iamr;
 	unsigned long amr;
···
 static inline void __kvmppc_set_msr_hv(struct kvm_vcpu *vcpu, u64 val)
 {
 	vcpu->arch.shregs.msr = val;
+	kvmhv_nestedv2_mark_dirty(vcpu, KVMPPC_GSID_MSR);
 }
 
 static inline u64 __kvmppc_get_msr_hv(struct kvm_vcpu *vcpu)
 {
+	WARN_ON(kvmhv_nestedv2_cached_reload(vcpu, KVMPPC_GSID_MSR) < 0);
 	return vcpu->arch.shregs.msr;
 }
 
-#define KVMPPC_BOOK3S_HV_VCPU_ACCESSOR_SET(reg, size)			\
+#define KVMPPC_BOOK3S_HV_VCPU_ACCESSOR_SET(reg, size, iden)		\
 static inline void kvmppc_set_##reg ##_hv(struct kvm_vcpu *vcpu, u##size val)	\
 {									\
 	vcpu->arch.reg = val;						\
+	kvmhv_nestedv2_mark_dirty(vcpu, iden);				\
 }
 
-#define KVMPPC_BOOK3S_HV_VCPU_ACCESSOR_GET(reg, size)			\
+#define KVMPPC_BOOK3S_HV_VCPU_ACCESSOR_GET(reg, size, iden)		\
 static inline u##size kvmppc_get_##reg ##_hv(struct kvm_vcpu *vcpu)	\
 {									\
+	kvmhv_nestedv2_cached_reload(vcpu, iden);			\
 	return vcpu->arch.reg;						\
 }
 
-#define KVMPPC_BOOK3S_HV_VCPU_ACCESSOR(reg, size)			\
-	KVMPPC_BOOK3S_HV_VCPU_ACCESSOR_SET(reg, size)			\
-	KVMPPC_BOOK3S_HV_VCPU_ACCESSOR_GET(reg, size)			\
+#define KVMPPC_BOOK3S_HV_VCPU_ACCESSOR(reg, size, iden)			\
+	KVMPPC_BOOK3S_HV_VCPU_ACCESSOR_SET(reg, size, iden)		\
+	KVMPPC_BOOK3S_HV_VCPU_ACCESSOR_GET(reg, size, iden)		\
 
-#define KVMPPC_BOOK3S_HV_VCPU_ARRAY_ACCESSOR_SET(reg, size)		\
+#define KVMPPC_BOOK3S_HV_VCPU_ARRAY_ACCESSOR_SET(reg, size, iden)	\
 static inline void kvmppc_set_##reg ##_hv(struct kvm_vcpu *vcpu, int i, u##size val)	\
 {									\
 	vcpu->arch.reg[i] = val;					\
+	kvmhv_nestedv2_mark_dirty(vcpu, iden(i));			\
 }
 
-#define KVMPPC_BOOK3S_HV_VCPU_ARRAY_ACCESSOR_GET(reg, size)		\
+#define KVMPPC_BOOK3S_HV_VCPU_ARRAY_ACCESSOR_GET(reg, size, iden)	\
 static inline u##size kvmppc_get_##reg ##_hv(struct kvm_vcpu *vcpu, int i)	\
 {									\
+	WARN_ON(kvmhv_nestedv2_cached_reload(vcpu, iden(i)) < 0);	\
 	return vcpu->arch.reg[i];					\
 }
 
-#define KVMPPC_BOOK3S_HV_VCPU_ARRAY_ACCESSOR(reg, size)			\
-	KVMPPC_BOOK3S_HV_VCPU_ARRAY_ACCESSOR_SET(reg, size)		\
-	KVMPPC_BOOK3S_HV_VCPU_ARRAY_ACCESSOR_GET(reg, size)		\
+#define KVMPPC_BOOK3S_HV_VCPU_ARRAY_ACCESSOR(reg, size, iden)		\
+	KVMPPC_BOOK3S_HV_VCPU_ARRAY_ACCESSOR_SET(reg, size, iden)	\
+	KVMPPC_BOOK3S_HV_VCPU_ARRAY_ACCESSOR_GET(reg, size, iden)	\
 
-KVMPPC_BOOK3S_HV_VCPU_ACCESSOR(mmcra, 64)
-KVMPPC_BOOK3S_HV_VCPU_ACCESSOR(hfscr, 64)
-KVMPPC_BOOK3S_HV_VCPU_ACCESSOR(fscr, 64)
-KVMPPC_BOOK3S_HV_VCPU_ACCESSOR(dscr, 64)
-KVMPPC_BOOK3S_HV_VCPU_ACCESSOR(purr, 64)
-KVMPPC_BOOK3S_HV_VCPU_ACCESSOR(spurr, 64)
-KVMPPC_BOOK3S_HV_VCPU_ACCESSOR(amr, 64)
-KVMPPC_BOOK3S_HV_VCPU_ACCESSOR(uamor, 64)
-KVMPPC_BOOK3S_HV_VCPU_ACCESSOR(siar, 64)
-KVMPPC_BOOK3S_HV_VCPU_ACCESSOR(sdar, 64)
-KVMPPC_BOOK3S_HV_VCPU_ACCESSOR(iamr, 64)
-KVMPPC_BOOK3S_HV_VCPU_ACCESSOR(dawr0, 64)
-KVMPPC_BOOK3S_HV_VCPU_ACCESSOR(dawr1, 64)
-KVMPPC_BOOK3S_HV_VCPU_ACCESSOR(dawrx0, 64)
-KVMPPC_BOOK3S_HV_VCPU_ACCESSOR(dawrx1, 64)
-KVMPPC_BOOK3S_HV_VCPU_ACCESSOR(ciabr, 64)
-KVMPPC_BOOK3S_HV_VCPU_ACCESSOR(wort, 64)
-KVMPPC_BOOK3S_HV_VCPU_ACCESSOR(ppr, 64)
-KVMPPC_BOOK3S_HV_VCPU_ACCESSOR(ctrl, 64)
+KVMPPC_BOOK3S_HV_VCPU_ACCESSOR(mmcra, 64, KVMPPC_GSID_MMCRA)
+KVMPPC_BOOK3S_HV_VCPU_ACCESSOR(hfscr, 64, KVMPPC_GSID_HFSCR)
+KVMPPC_BOOK3S_HV_VCPU_ACCESSOR(fscr, 64, KVMPPC_GSID_FSCR)
+KVMPPC_BOOK3S_HV_VCPU_ACCESSOR(dscr, 64, KVMPPC_GSID_DSCR)
+KVMPPC_BOOK3S_HV_VCPU_ACCESSOR(purr, 64, KVMPPC_GSID_PURR)
+KVMPPC_BOOK3S_HV_VCPU_ACCESSOR(spurr, 64, KVMPPC_GSID_SPURR)
+KVMPPC_BOOK3S_HV_VCPU_ACCESSOR(amr, 64, KVMPPC_GSID_AMR)
+KVMPPC_BOOK3S_HV_VCPU_ACCESSOR(uamor, 64, KVMPPC_GSID_UAMOR)
+KVMPPC_BOOK3S_HV_VCPU_ACCESSOR(siar, 64, KVMPPC_GSID_SIAR)
+KVMPPC_BOOK3S_HV_VCPU_ACCESSOR(sdar, 64, KVMPPC_GSID_SDAR)
+KVMPPC_BOOK3S_HV_VCPU_ACCESSOR(iamr, 64, KVMPPC_GSID_IAMR)
+KVMPPC_BOOK3S_HV_VCPU_ACCESSOR(dawr0, 64, KVMPPC_GSID_DAWR0)
+KVMPPC_BOOK3S_HV_VCPU_ACCESSOR(dawr1, 64, KVMPPC_GSID_DAWR1)
+KVMPPC_BOOK3S_HV_VCPU_ACCESSOR(dawrx0, 64, KVMPPC_GSID_DAWRX0)
+KVMPPC_BOOK3S_HV_VCPU_ACCESSOR(dawrx1, 64, KVMPPC_GSID_DAWRX1)
+KVMPPC_BOOK3S_HV_VCPU_ACCESSOR(ciabr, 64, KVMPPC_GSID_CIABR)
+KVMPPC_BOOK3S_HV_VCPU_ACCESSOR(wort, 64, KVMPPC_GSID_WORT)
+KVMPPC_BOOK3S_HV_VCPU_ACCESSOR(ppr, 64, KVMPPC_GSID_PPR)
+KVMPPC_BOOK3S_HV_VCPU_ACCESSOR(ctrl, 64, KVMPPC_GSID_CTRL);
 
-KVMPPC_BOOK3S_HV_VCPU_ARRAY_ACCESSOR(mmcr, 64)
-KVMPPC_BOOK3S_HV_VCPU_ARRAY_ACCESSOR(sier, 64)
-KVMPPC_BOOK3S_HV_VCPU_ARRAY_ACCESSOR(pmc, 32)
+KVMPPC_BOOK3S_HV_VCPU_ARRAY_ACCESSOR(mmcr, 64, KVMPPC_GSID_MMCR)
+KVMPPC_BOOK3S_HV_VCPU_ARRAY_ACCESSOR(sier, 64, KVMPPC_GSID_SIER)
+KVMPPC_BOOK3S_HV_VCPU_ARRAY_ACCESSOR(pmc, 32, KVMPPC_GSID_PMC)
 
-KVMPPC_BOOK3S_HV_VCPU_ACCESSOR(pspb, 32)
+KVMPPC_BOOK3S_HV_VCPU_ACCESSOR(pspb, 32, KVMPPC_GSID_PSPB)
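The reworked accessor macros pair every setter with a mark-dirty call and every getter with a cached reload, which is how the nestedv2 layer learns which guest-state IDs must be pushed to (or fetched from) the L0. A reduced user-space model of the same macro trick (the toy vcpu struct, GSID enum and dirty bitmap are invented stand-ins, not the kernel types):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Toy guest-state IDs and a toy vcpu carrying a dirty bitmap. */
enum { GSID_DSCR, GSID_FSCR, GSID_MAX };

struct toy_vcpu {
	uint64_t dscr, fscr;
	bool dirty[GSID_MAX];
};

static void mark_dirty(struct toy_vcpu *v, int iden)
{
	v->dirty[iden] = true;
}

/* One macro stamps out a set/get pair that tracks modified state,
 * mirroring KVMPPC_BOOK3S_HV_VCPU_ACCESSOR(reg, size, iden). */
#define TOY_VCPU_ACCESSOR(reg, iden)				\
static void set_##reg(struct toy_vcpu *v, uint64_t val)		\
{								\
	v->reg = val;						\
	mark_dirty(v, iden);					\
}								\
static uint64_t get_##reg(struct toy_vcpu *v)			\
{								\
	/* kernel getter reloads the cached value from L0 here */ \
	return v->reg;						\
}

TOY_VCPU_ACCESSOR(dscr, GSID_DSCR)
TOY_VCPU_ACCESSOR(fscr, GSID_FSCR)
```

The dirty bits are what later let the flush path copy only modified elements into the H_GUEST_VCPU_RUN input buffer rather than the entire register file.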
+35 -5 arch/powerpc/kvm/book3s_hv_nested.c
···
 	return vcpu->arch.trap;
 }
 
+unsigned long nested_capabilities;
+
 long kvmhv_nested_init(void)
 {
 	long int ptb_order;
-	unsigned long ptcr;
+	unsigned long ptcr, host_capabilities;
 	long rc;
 
 	if (!kvmhv_on_pseries())
···
 	if (!radix_enabled())
 		return -ENODEV;
 
+	rc = plpar_guest_get_capabilities(0, &host_capabilities);
+	if (rc == H_SUCCESS) {
+		unsigned long capabilities = 0;
+
+		if (cpu_has_feature(CPU_FTR_ARCH_31))
+			capabilities |= H_GUEST_CAP_POWER10;
+		if (cpu_has_feature(CPU_FTR_ARCH_300))
+			capabilities |= H_GUEST_CAP_POWER9;
+
+		nested_capabilities = capabilities & host_capabilities;
+		rc = plpar_guest_set_capabilities(0, nested_capabilities);
+		if (rc != H_SUCCESS) {
+			pr_err("kvm-hv: Could not configure parent hypervisor capabilities (rc=%ld)",
+			       rc);
+			return -ENODEV;
+		}
+
+		static_branch_enable(&__kvmhv_is_nestedv2);
+		return 0;
+	}
+
+	pr_info("kvm-hv: nestedv2 get capabilities hcall failed, falling back to nestedv1 (rc=%ld)\n",
+		rc);
 	/* Partition table entry is 1<<4 bytes in size, hence the 4. */
 	ptb_order = KVM_MAX_NESTED_GUESTS_SHIFT + 4;
 	/* Minimum partition table size is 1<<12 bytes */
···
 		return;
 	}
 
-	pseries_partition_tb[lpid].patb0 = cpu_to_be64(dw0);
-	pseries_partition_tb[lpid].patb1 = cpu_to_be64(dw1);
-	/* L0 will do the necessary barriers */
-	kvmhv_flush_lpid(lpid);
+	if (kvmhv_is_nestedv1()) {
+		pseries_partition_tb[lpid].patb0 = cpu_to_be64(dw0);
+		pseries_partition_tb[lpid].patb1 = cpu_to_be64(dw1);
+		/* L0 will do the necessary barriers */
+		kvmhv_flush_lpid(lpid);
+	}
+
+	if (kvmhv_is_nestedv2())
+		kvmhv_nestedv2_set_ptbl_entry(lpid, dw0, dw1);
 }
 
 static void kvmhv_set_nested_ptbl(struct kvm_nested_guest *gp)
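kvmhv_nested_init() above negotiates capabilities by intersecting what this L1 can use (gated on CPU features) with what the L0 advertises, then writing the result back with H_GUEST_SET_CAPABILITIES. The negotiation itself is a bitwise AND, sketched here with mocked capability bits (the real H_GUEST_CAP_* values come from asm/hvcall.h):

```c
#include <assert.h>

/* Mocked capability bits; illustrative only. */
#define CAP_POWER9  0x1UL
#define CAP_POWER10 0x2UL

/* What we request is gated by CPU features; what we end up with is the
 * intersection with what the parent hypervisor reported. */
static unsigned long negotiate_caps(unsigned long host_caps,
				    int have_p9, int have_p10)
{
	unsigned long wanted = 0;

	if (have_p10)
		wanted |= CAP_POWER10;
	if (have_p9)
		wanted |= CAP_POWER9;

	return wanted & host_caps;
}
```

The negotiated mask is kept in nested_capabilities, which kvmppc_set_arch_compat() later consults so a guest cannot request a compatibility mode the L0 never agreed to run.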
+994 arch/powerpc/kvm/book3s_hv_nestedv2.c
···
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright 2023 Jordan Niethe, IBM Corp. <jniethe5@gmail.com>
+ *
+ * Authors:
+ *    Jordan Niethe <jniethe5@gmail.com>
+ *
+ * Description: KVM functions specific to running on Book 3S
+ * processors as a NESTEDv2 guest.
+ *
+ */
+
+#include "linux/blk-mq.h"
+#include "linux/console.h"
+#include "linux/gfp_types.h"
+#include "linux/signal.h"
+#include <linux/kernel.h>
+#include <linux/kvm_host.h>
+#include <linux/pgtable.h>
+
+#include <asm/kvm_ppc.h>
+#include <asm/kvm_book3s.h>
+#include <asm/hvcall.h>
+#include <asm/pgalloc.h>
+#include <asm/reg.h>
+#include <asm/plpar_wrappers.h>
+#include <asm/guest-state-buffer.h>
+#include "trace_hv.h"
+
+struct static_key_false __kvmhv_is_nestedv2 __read_mostly;
+EXPORT_SYMBOL_GPL(__kvmhv_is_nestedv2);
+
+
+static size_t
+gs_msg_ops_kvmhv_nestedv2_config_get_size(struct kvmppc_gs_msg *gsm)
+{
+	u16 ids[] = {
+		KVMPPC_GSID_RUN_OUTPUT_MIN_SIZE,
+		KVMPPC_GSID_RUN_INPUT,
+		KVMPPC_GSID_RUN_OUTPUT,
+	};
+	size_t size = 0;
+
+	for (int i = 0; i < ARRAY_SIZE(ids); i++)
+		size += kvmppc_gse_total_size(kvmppc_gsid_size(ids[i]));
+	return size;
+}
+
+static int
+gs_msg_ops_kvmhv_nestedv2_config_fill_info(struct kvmppc_gs_buff *gsb,
+					   struct kvmppc_gs_msg *gsm)
+{
+	struct kvmhv_nestedv2_config *cfg;
+	int rc;
+
+	cfg = gsm->data;
+
+	if (kvmppc_gsm_includes(gsm, KVMPPC_GSID_RUN_OUTPUT_MIN_SIZE)) {
+		rc = kvmppc_gse_put_u64(gsb, KVMPPC_GSID_RUN_OUTPUT_MIN_SIZE,
+					cfg->vcpu_run_output_size);
+		if (rc < 0)
+			return rc;
+	}
+
+	if (kvmppc_gsm_includes(gsm, KVMPPC_GSID_RUN_INPUT)) {
+		rc = kvmppc_gse_put_buff_info(gsb, KVMPPC_GSID_RUN_INPUT,
+					      cfg->vcpu_run_input_cfg);
+		if (rc < 0)
+			return rc;
+	}
+
+	if (kvmppc_gsm_includes(gsm, KVMPPC_GSID_RUN_OUTPUT)) {
+		rc = kvmppc_gse_put_buff_info(gsb, KVMPPC_GSID_RUN_OUTPUT,
+					      cfg->vcpu_run_output_cfg);
+		if (rc < 0)
+			return rc;
+	}
+
+	return 0;
+}
+
+static int
+gs_msg_ops_kvmhv_nestedv2_config_refresh_info(struct kvmppc_gs_msg *gsm,
+					      struct kvmppc_gs_buff *gsb)
+{
+	struct kvmhv_nestedv2_config *cfg;
+	struct kvmppc_gs_parser gsp = { 0 };
+	struct kvmppc_gs_elem *gse;
+	int rc;
+
+	cfg = gsm->data;
+
+	rc = kvmppc_gse_parse(&gsp, gsb);
+	if (rc < 0)
+		return rc;
+
+	gse = kvmppc_gsp_lookup(&gsp, KVMPPC_GSID_RUN_OUTPUT_MIN_SIZE);
+	if (gse)
+		cfg->vcpu_run_output_size = kvmppc_gse_get_u64(gse);
+	return 0;
+}
+
+static struct kvmppc_gs_msg_ops config_msg_ops = {
+	.get_size = gs_msg_ops_kvmhv_nestedv2_config_get_size,
+	.fill_info = gs_msg_ops_kvmhv_nestedv2_config_fill_info,
+	.refresh_info = gs_msg_ops_kvmhv_nestedv2_config_refresh_info,
+};
+
+static size_t gs_msg_ops_vcpu_get_size(struct kvmppc_gs_msg *gsm)
+{
+	struct kvmppc_gs_bitmap gsbm = { 0 };
+	size_t size = 0;
+	u16 iden;
+
+	kvmppc_gsbm_fill(&gsbm);
+	kvmppc_gsbm_for_each(&gsbm, iden)
+	{
+		switch (iden) {
+		case KVMPPC_GSID_HOST_STATE_SIZE:
+		case KVMPPC_GSID_RUN_OUTPUT_MIN_SIZE:
+		case KVMPPC_GSID_PARTITION_TABLE:
+		case KVMPPC_GSID_PROCESS_TABLE:
+		case KVMPPC_GSID_RUN_INPUT:
+		case KVMPPC_GSID_RUN_OUTPUT:
+			break;
+		default:
+			size += kvmppc_gse_total_size(kvmppc_gsid_size(iden));
+		}
+	}
+	return size;
+}
+
+static int gs_msg_ops_vcpu_fill_info(struct kvmppc_gs_buff *gsb,
+				     struct kvmppc_gs_msg *gsm)
+{
+	struct kvm_vcpu *vcpu;
+	vector128 v;
+	int rc, i;
+	u16 iden;
+
+	vcpu = gsm->data;
+
+	kvmppc_gsm_for_each(gsm, iden)
+	{
+		rc = 0;
+
+		if ((gsm->flags & KVMPPC_GS_FLAGS_WIDE) !=
+		    (kvmppc_gsid_flags(iden) & KVMPPC_GS_FLAGS_WIDE))
+			continue;
+
+		switch (iden) {
+		case KVMPPC_GSID_DSCR:
+			rc = kvmppc_gse_put_u64(gsb, iden, vcpu->arch.dscr);
+			break;
+		case KVMPPC_GSID_MMCRA:
+			rc = kvmppc_gse_put_u64(gsb, iden, vcpu->arch.mmcra);
+			break;
+		case KVMPPC_GSID_HFSCR:
+			rc = kvmppc_gse_put_u64(gsb, iden, vcpu->arch.hfscr);
+			break;
+		case KVMPPC_GSID_PURR:
+			rc = kvmppc_gse_put_u64(gsb, iden, vcpu->arch.purr);
+			break;
+		case KVMPPC_GSID_SPURR:
+			rc = kvmppc_gse_put_u64(gsb, iden, vcpu->arch.spurr);
+			break;
+		case KVMPPC_GSID_AMR:
+			rc = kvmppc_gse_put_u64(gsb, iden, vcpu->arch.amr);
+			break;
+		case KVMPPC_GSID_UAMOR:
+			rc = kvmppc_gse_put_u64(gsb, iden, vcpu->arch.uamor);
+			break;
+		case KVMPPC_GSID_SIAR:
+			rc = kvmppc_gse_put_u64(gsb, iden, vcpu->arch.siar);
+			break;
+		case KVMPPC_GSID_SDAR:
+			rc = kvmppc_gse_put_u64(gsb, iden, vcpu->arch.sdar);
+			break;
+		case KVMPPC_GSID_IAMR:
+			rc = kvmppc_gse_put_u64(gsb, iden, vcpu->arch.iamr);
+			break;
+		case KVMPPC_GSID_DAWR0:
+			rc = kvmppc_gse_put_u64(gsb, iden, vcpu->arch.dawr0);
+			break;
+		case KVMPPC_GSID_DAWR1:
+			rc = kvmppc_gse_put_u64(gsb, iden, vcpu->arch.dawr1);
+			break;
+		case KVMPPC_GSID_DAWRX0:
+			rc = kvmppc_gse_put_u32(gsb, iden, vcpu->arch.dawrx0);
+			break;
+		case KVMPPC_GSID_DAWRX1:
+			rc = kvmppc_gse_put_u32(gsb, iden, vcpu->arch.dawrx1);
+			break;
+		case KVMPPC_GSID_CIABR:
+			rc = kvmppc_gse_put_u64(gsb, iden, vcpu->arch.ciabr);
+			break;
+		case KVMPPC_GSID_WORT:
+			rc = kvmppc_gse_put_u32(gsb, iden, vcpu->arch.wort);
+			break;
+		case KVMPPC_GSID_PPR:
+			rc = kvmppc_gse_put_u64(gsb, iden, vcpu->arch.ppr);
+			break;
+		case KVMPPC_GSID_PSPB:
+			rc = kvmppc_gse_put_u32(gsb, iden, vcpu->arch.pspb);
+			break;
+		case KVMPPC_GSID_TAR:
+			rc = kvmppc_gse_put_u64(gsb, iden, vcpu->arch.tar);
+			break;
+		case KVMPPC_GSID_FSCR:
+			rc = kvmppc_gse_put_u64(gsb, iden, vcpu->arch.fscr);
+			break;
+		case KVMPPC_GSID_EBBHR:
+			rc = kvmppc_gse_put_u64(gsb, iden, vcpu->arch.ebbhr);
+			break;
+		case KVMPPC_GSID_EBBRR:
+			rc = kvmppc_gse_put_u64(gsb, iden, vcpu->arch.ebbrr);
+			break;
+		case KVMPPC_GSID_BESCR:
+			rc = kvmppc_gse_put_u64(gsb, iden, vcpu->arch.bescr);
+			break;
+		case KVMPPC_GSID_IC:
+			rc = kvmppc_gse_put_u64(gsb, iden, vcpu->arch.ic);
+			break;
+		case KVMPPC_GSID_CTRL:
+			rc = kvmppc_gse_put_u64(gsb, iden, vcpu->arch.ctrl);
+			break;
+		case KVMPPC_GSID_PIDR:
+			rc = kvmppc_gse_put_u32(gsb, iden, vcpu->arch.pid);
+			break;
+		case KVMPPC_GSID_AMOR: {
+			u64 amor = ~0;
+
+			rc = kvmppc_gse_put_u64(gsb, iden, amor);
+			break;
+		}
+		case KVMPPC_GSID_VRSAVE:
+			rc = kvmppc_gse_put_u32(gsb, iden, vcpu->arch.vrsave);
+			break;
+		case KVMPPC_GSID_MMCR(0)... KVMPPC_GSID_MMCR(3):
+			i = iden - KVMPPC_GSID_MMCR(0);
+			rc = kvmppc_gse_put_u64(gsb, iden, vcpu->arch.mmcr[i]);
+			break;
+		case KVMPPC_GSID_SIER(0)... KVMPPC_GSID_SIER(2):
+			i = iden - KVMPPC_GSID_SIER(0);
+			rc = kvmppc_gse_put_u64(gsb, iden, vcpu->arch.sier[i]);
+			break;
+		case KVMPPC_GSID_PMC(0)... KVMPPC_GSID_PMC(5):
+			i = iden - KVMPPC_GSID_PMC(0);
+			rc = kvmppc_gse_put_u32(gsb, iden, vcpu->arch.pmc[i]);
+			break;
+		case KVMPPC_GSID_GPR(0)... KVMPPC_GSID_GPR(31):
+			i = iden - KVMPPC_GSID_GPR(0);
+			rc = kvmppc_gse_put_u64(gsb, iden,
+						vcpu->arch.regs.gpr[i]);
+			break;
+		case KVMPPC_GSID_CR:
+			rc = kvmppc_gse_put_u32(gsb, iden, vcpu->arch.regs.ccr);
+			break;
+		case KVMPPC_GSID_XER:
+			rc = kvmppc_gse_put_u64(gsb, iden, vcpu->arch.regs.xer);
+			break;
+		case KVMPPC_GSID_CTR:
+			rc = kvmppc_gse_put_u64(gsb, iden, vcpu->arch.regs.ctr);
+			break;
+		case KVMPPC_GSID_LR:
+			rc = kvmppc_gse_put_u64(gsb, iden,
+						vcpu->arch.regs.link);
+			break;
+		case KVMPPC_GSID_NIA:
+			rc = kvmppc_gse_put_u64(gsb, iden, vcpu->arch.regs.nip);
+			break;
+		case KVMPPC_GSID_SRR0:
+			rc = kvmppc_gse_put_u64(gsb, iden,
+						vcpu->arch.shregs.srr0);
+			break;
+		case KVMPPC_GSID_SRR1:
+			rc = kvmppc_gse_put_u64(gsb, iden,
+						vcpu->arch.shregs.srr1);
+			break;
+		case KVMPPC_GSID_SPRG0:
+			rc = kvmppc_gse_put_u64(gsb, iden,
+						vcpu->arch.shregs.sprg0);
+			break;
+		case KVMPPC_GSID_SPRG1:
+			rc = kvmppc_gse_put_u64(gsb, iden,
+						vcpu->arch.shregs.sprg1);
+			break;
+		case KVMPPC_GSID_SPRG2:
+			rc = kvmppc_gse_put_u64(gsb, iden,
+						vcpu->arch.shregs.sprg2);
+			break;
+		case KVMPPC_GSID_SPRG3:
+			rc = kvmppc_gse_put_u64(gsb, iden,
+						vcpu->arch.shregs.sprg3);
+			break;
+		case KVMPPC_GSID_DAR:
+			rc = kvmppc_gse_put_u64(gsb, iden,
+						vcpu->arch.shregs.dar);
+			break;
+		case KVMPPC_GSID_DSISR:
+			rc = kvmppc_gse_put_u32(gsb, iden,
+						vcpu->arch.shregs.dsisr);
+			break;
+		case KVMPPC_GSID_MSR:
+			rc = kvmppc_gse_put_u64(gsb, iden,
+						vcpu->arch.shregs.msr);
+			break;
+		case KVMPPC_GSID_VTB:
+			rc = kvmppc_gse_put_u64(gsb, iden,
+						vcpu->arch.vcore->vtb);
+			break;
+		case KVMPPC_GSID_LPCR:
+			rc = kvmppc_gse_put_u64(gsb, iden,
+						vcpu->arch.vcore->lpcr);
+			break;
+		case KVMPPC_GSID_TB_OFFSET:
+			rc = kvmppc_gse_put_u64(gsb, iden,
+						vcpu->arch.vcore->tb_offset);
+			break;
+		case KVMPPC_GSID_FPSCR:
+			rc = kvmppc_gse_put_u64(gsb, iden, vcpu->arch.fp.fpscr);
+			break;
+		case KVMPPC_GSID_VSRS(0)... KVMPPC_GSID_VSRS(31):
+			i = iden - KVMPPC_GSID_VSRS(0);
+			memcpy(&v, &vcpu->arch.fp.fpr[i],
+			       sizeof(vcpu->arch.fp.fpr[i]));
+			rc = kvmppc_gse_put_vector128(gsb, iden, &v);
+			break;
+#ifdef CONFIG_VSX
+		case KVMPPC_GSID_VSCR:
+			rc = kvmppc_gse_put_u32(gsb, iden,
+						vcpu->arch.vr.vscr.u[3]);
+			break;
+		case KVMPPC_GSID_VSRS(32)... KVMPPC_GSID_VSRS(63):
+			i = iden - KVMPPC_GSID_VSRS(32);
+			rc = kvmppc_gse_put_vector128(gsb, iden,
+						      &vcpu->arch.vr.vr[i]);
+			break;
+#endif
+		case KVMPPC_GSID_DEC_EXPIRY_TB: {
+			u64 dw;
+
+			dw = vcpu->arch.dec_expires -
+			     vcpu->arch.vcore->tb_offset;
+			rc = kvmppc_gse_put_u64(gsb, iden, dw);
+			break;
+		}
+		case KVMPPC_GSID_LOGICAL_PVR:
+			rc = kvmppc_gse_put_u32(gsb, iden,
+						vcpu->arch.vcore->arch_compat);
+			break;
+		}
+
+		if (rc < 0)
+			return rc;
+	}
+
+	return 0;
+}
+
+static int gs_msg_ops_vcpu_refresh_info(struct kvmppc_gs_msg *gsm,
+					struct kvmppc_gs_buff *gsb)
+{
+	struct kvmppc_gs_parser gsp = { 0 };
+	struct kvmhv_nestedv2_io *io;
+	struct kvmppc_gs_bitmap *valids;
+	struct kvm_vcpu *vcpu;
+	struct kvmppc_gs_elem *gse;
+	vector128 v;
+	int rc, i;
+	u16 iden;
+
+	vcpu = gsm->data;
+
+	rc = kvmppc_gse_parse(&gsp, gsb);
+	if (rc < 0)
+		return rc;
+
+	io = &vcpu->arch.nestedv2_io;
+	valids = &io->valids;
+
+	kvmppc_gsp_for_each(&gsp, iden, gse)
+	{
+		switch (iden) {
+		case KVMPPC_GSID_DSCR:
+			vcpu->arch.dscr = kvmppc_gse_get_u64(gse);
+			break;
+		case KVMPPC_GSID_MMCRA:
+			vcpu->arch.mmcra = kvmppc_gse_get_u64(gse);
+			break;
+		case KVMPPC_GSID_HFSCR:
+			vcpu->arch.hfscr = kvmppc_gse_get_u64(gse);
+			break;
+		case KVMPPC_GSID_PURR:
+			vcpu->arch.purr = kvmppc_gse_get_u64(gse);
+			break;
+		case KVMPPC_GSID_SPURR:
+			vcpu->arch.spurr = kvmppc_gse_get_u64(gse);
+			break;
+		case KVMPPC_GSID_AMR:
+			vcpu->arch.amr = kvmppc_gse_get_u64(gse);
+			break;
+		case KVMPPC_GSID_UAMOR:
+			vcpu->arch.uamor = kvmppc_gse_get_u64(gse);
+			break;
+		case KVMPPC_GSID_SIAR:
+			vcpu->arch.siar = kvmppc_gse_get_u64(gse);
+			break;
+		case KVMPPC_GSID_SDAR:
+			vcpu->arch.sdar = kvmppc_gse_get_u64(gse);
+			break;
+		case KVMPPC_GSID_IAMR:
+			vcpu->arch.iamr = kvmppc_gse_get_u64(gse);
+			break;
+		case KVMPPC_GSID_DAWR0:
+			vcpu->arch.dawr0 = kvmppc_gse_get_u64(gse);
+			break;
+		case KVMPPC_GSID_DAWR1:
+			vcpu->arch.dawr1 = kvmppc_gse_get_u64(gse);
+			break;
+		case KVMPPC_GSID_DAWRX0:
+			vcpu->arch.dawrx0 = kvmppc_gse_get_u32(gse);
+			break;
+		case KVMPPC_GSID_DAWRX1:
+			vcpu->arch.dawrx1 = kvmppc_gse_get_u32(gse);
+			break;
+		case KVMPPC_GSID_CIABR:
+			vcpu->arch.ciabr = kvmppc_gse_get_u64(gse);
+			break;
+		case KVMPPC_GSID_WORT:
+			vcpu->arch.wort = kvmppc_gse_get_u32(gse);
+			break;
+		case KVMPPC_GSID_PPR:
+			vcpu->arch.ppr = kvmppc_gse_get_u64(gse);
+			break;
+		case KVMPPC_GSID_PSPB:
+			vcpu->arch.pspb = kvmppc_gse_get_u32(gse);
+			break;
+		case KVMPPC_GSID_TAR:
+			vcpu->arch.tar = kvmppc_gse_get_u64(gse);
+			break;
+		case KVMPPC_GSID_FSCR:
+			vcpu->arch.fscr = kvmppc_gse_get_u64(gse);
+			break;
+		case KVMPPC_GSID_EBBHR:
+			vcpu->arch.ebbhr = kvmppc_gse_get_u64(gse);
+			break;
+		case KVMPPC_GSID_EBBRR:
+			vcpu->arch.ebbrr = kvmppc_gse_get_u64(gse);
+			break;
+		case KVMPPC_GSID_BESCR:
+			vcpu->arch.bescr = kvmppc_gse_get_u64(gse);
+			break;
+		case KVMPPC_GSID_IC:
+			vcpu->arch.ic = kvmppc_gse_get_u64(gse);
+			break;
+		case KVMPPC_GSID_CTRL:
+			vcpu->arch.ctrl = kvmppc_gse_get_u64(gse);
+			break;
+		case KVMPPC_GSID_PIDR:
+			vcpu->arch.pid = kvmppc_gse_get_u32(gse);
+			break;
+		case KVMPPC_GSID_AMOR:
+			break;
+		case KVMPPC_GSID_VRSAVE:
+			vcpu->arch.vrsave = kvmppc_gse_get_u32(gse);
+			break;
+		case KVMPPC_GSID_MMCR(0)... KVMPPC_GSID_MMCR(3):
+			i = iden - KVMPPC_GSID_MMCR(0);
+			vcpu->arch.mmcr[i] = kvmppc_gse_get_u64(gse);
+			break;
+		case KVMPPC_GSID_SIER(0)... KVMPPC_GSID_SIER(2):
+			i = iden - KVMPPC_GSID_SIER(0);
+			vcpu->arch.sier[i] = kvmppc_gse_get_u64(gse);
+			break;
+		case KVMPPC_GSID_PMC(0)... KVMPPC_GSID_PMC(5):
+			i = iden - KVMPPC_GSID_PMC(0);
+			vcpu->arch.pmc[i] = kvmppc_gse_get_u32(gse);
+			break;
+		case KVMPPC_GSID_GPR(0)... KVMPPC_GSID_GPR(31):
+			i = iden - KVMPPC_GSID_GPR(0);
+			vcpu->arch.regs.gpr[i] = kvmppc_gse_get_u64(gse);
+			break;
+		case KVMPPC_GSID_CR:
+			vcpu->arch.regs.ccr = kvmppc_gse_get_u32(gse);
+			break;
+		case KVMPPC_GSID_XER:
+			vcpu->arch.regs.xer = kvmppc_gse_get_u64(gse);
+			break;
+		case KVMPPC_GSID_CTR:
+			vcpu->arch.regs.ctr = kvmppc_gse_get_u64(gse);
+			break;
+		case KVMPPC_GSID_LR:
+			vcpu->arch.regs.link = kvmppc_gse_get_u64(gse);
+			break;
+		case KVMPPC_GSID_NIA:
+			vcpu->arch.regs.nip = kvmppc_gse_get_u64(gse);
+			break;
+		case KVMPPC_GSID_SRR0:
+			vcpu->arch.shregs.srr0 = kvmppc_gse_get_u64(gse);
+			break;
+		case KVMPPC_GSID_SRR1:
+			vcpu->arch.shregs.srr1 = kvmppc_gse_get_u64(gse);
+			break;
+		case KVMPPC_GSID_SPRG0:
+			vcpu->arch.shregs.sprg0 = kvmppc_gse_get_u64(gse);
+			break;
+		case KVMPPC_GSID_SPRG1:
+			vcpu->arch.shregs.sprg1 = kvmppc_gse_get_u64(gse);
+			break;
+		case KVMPPC_GSID_SPRG2:
+			vcpu->arch.shregs.sprg2 = kvmppc_gse_get_u64(gse);
+			break;
+		case KVMPPC_GSID_SPRG3:
+			vcpu->arch.shregs.sprg3 = kvmppc_gse_get_u64(gse);
+			break;
+		case KVMPPC_GSID_DAR:
+			vcpu->arch.shregs.dar = kvmppc_gse_get_u64(gse);
+			break;
+		case KVMPPC_GSID_DSISR:
+			vcpu->arch.shregs.dsisr = kvmppc_gse_get_u32(gse);
+			break;
+		case KVMPPC_GSID_MSR:
+			vcpu->arch.shregs.msr = kvmppc_gse_get_u64(gse);
+			break;
+		case KVMPPC_GSID_VTB:
+			vcpu->arch.vcore->vtb = kvmppc_gse_get_u64(gse);
+			break;
+		case KVMPPC_GSID_LPCR:
+			vcpu->arch.vcore->lpcr = kvmppc_gse_get_u64(gse);
+			break;
+		case KVMPPC_GSID_TB_OFFSET:
+			vcpu->arch.vcore->tb_offset = kvmppc_gse_get_u64(gse);
+			break;
+		case KVMPPC_GSID_FPSCR:
+			vcpu->arch.fp.fpscr = kvmppc_gse_get_u64(gse);
+			break;
+		case KVMPPC_GSID_VSRS(0)... KVMPPC_GSID_VSRS(31):
+			kvmppc_gse_get_vector128(gse, &v);
+			i = iden - KVMPPC_GSID_VSRS(0);
+			memcpy(&vcpu->arch.fp.fpr[i], &v,
+			       sizeof(vcpu->arch.fp.fpr[i]));
+			break;
+#ifdef CONFIG_VSX
+		case KVMPPC_GSID_VSCR:
+			vcpu->arch.vr.vscr.u[3] = kvmppc_gse_get_u32(gse);
+			break;
+		case KVMPPC_GSID_VSRS(32)... KVMPPC_GSID_VSRS(63):
+			i = iden - KVMPPC_GSID_VSRS(32);
+			kvmppc_gse_get_vector128(gse, &vcpu->arch.vr.vr[i]);
+			break;
+#endif
+		case KVMPPC_GSID_HDAR:
+			vcpu->arch.fault_dar = kvmppc_gse_get_u64(gse);
+			break;
+		case KVMPPC_GSID_HDSISR:
+			vcpu->arch.fault_dsisr = kvmppc_gse_get_u32(gse);
+			break;
+		case KVMPPC_GSID_ASDR:
+			vcpu->arch.fault_gpa = kvmppc_gse_get_u64(gse);
+			break;
+		case KVMPPC_GSID_HEIR:
+			vcpu->arch.emul_inst = kvmppc_gse_get_u64(gse);
+			break;
+		case KVMPPC_GSID_DEC_EXPIRY_TB: {
+			u64 dw;
+
+			dw = kvmppc_gse_get_u64(gse);
+			vcpu->arch.dec_expires =
+				dw + vcpu->arch.vcore->tb_offset;
+			break;
+		}
+		case KVMPPC_GSID_LOGICAL_PVR:
+			vcpu->arch.vcore->arch_compat = kvmppc_gse_get_u32(gse);
+			break;
+		default:
+			continue;
+		}
+		kvmppc_gsbm_set(valids, iden);
+	}
+
+	return 0;
+}
+
+static struct kvmppc_gs_msg_ops vcpu_message_ops = {
+	.get_size = gs_msg_ops_vcpu_get_size,
+	.fill_info = gs_msg_ops_vcpu_fill_info,
+	.refresh_info = gs_msg_ops_vcpu_refresh_info,
+};
+
+static int kvmhv_nestedv2_host_create(struct kvm_vcpu *vcpu,
+				      struct kvmhv_nestedv2_io *io)
+{
+	struct kvmhv_nestedv2_config *cfg;
+	struct kvmppc_gs_buff *gsb, *vcpu_run_output, *vcpu_run_input;
+	unsigned long guest_id, vcpu_id;
+	struct kvmppc_gs_msg *gsm, *vcpu_message, *vcore_message;
+	int rc;
+
+	cfg = &io->cfg;
+	guest_id = vcpu->kvm->arch.lpid;
+	vcpu_id = vcpu->vcpu_id;
+
+	gsm = kvmppc_gsm_new(&config_msg_ops, cfg, KVMPPC_GS_FLAGS_WIDE,
+			     GFP_KERNEL);
+	if (!gsm) {
+		rc = -ENOMEM;
+		goto err;
+	}
+
+	gsb = kvmppc_gsb_new(kvmppc_gsm_size(gsm), guest_id, vcpu_id,
+			     GFP_KERNEL);
+	if (!gsb) {
+		rc = -ENOMEM;
+		goto free_gsm;
+	}
+
+	rc = kvmppc_gsb_receive_datum(gsb, gsm,
+				      KVMPPC_GSID_RUN_OUTPUT_MIN_SIZE);
+	if (rc < 0) {
+		pr_err("KVM-NESTEDv2: couldn't get vcpu run output buffer minimum size\n");
+		goto free_gsb;
+	}
+
+	vcpu_run_output = kvmppc_gsb_new(cfg->vcpu_run_output_size, guest_id,
+					 vcpu_id, GFP_KERNEL);
+	if (!vcpu_run_output) {
+		rc = -ENOMEM;
+		goto free_gsb;
+	}
+
+	cfg->vcpu_run_output_cfg.address = kvmppc_gsb_paddress(vcpu_run_output);
+	cfg->vcpu_run_output_cfg.size = kvmppc_gsb_capacity(vcpu_run_output);
+	io->vcpu_run_output = vcpu_run_output;
+
+	gsm->flags = 0;
+	rc = kvmppc_gsb_send_datum(gsb, gsm, KVMPPC_GSID_RUN_OUTPUT);
+	if (rc < 0) {
+		pr_err("KVM-NESTEDv2: couldn't set vcpu run output buffer\n");
+		goto free_gs_out;
+	}
+
+	vcpu_message = kvmppc_gsm_new(&vcpu_message_ops, vcpu, 0, GFP_KERNEL);
+	if (!vcpu_message) {
+		rc = -ENOMEM;
+		goto free_gs_out;
+	}
+	kvmppc_gsm_include_all(vcpu_message);
+
+	io->vcpu_message = vcpu_message;
+
+	vcpu_run_input = kvmppc_gsb_new(kvmppc_gsm_size(vcpu_message), guest_id,
+					vcpu_id, GFP_KERNEL);
+	if (!vcpu_run_input) {
+		rc = -ENOMEM;
+		goto free_vcpu_message;
+	}
+
+	io->vcpu_run_input = vcpu_run_input;
+	cfg->vcpu_run_input_cfg.address = kvmppc_gsb_paddress(vcpu_run_input);
+	cfg->vcpu_run_input_cfg.size = kvmppc_gsb_capacity(vcpu_run_input);
+	rc = kvmppc_gsb_send_datum(gsb, gsm, KVMPPC_GSID_RUN_INPUT);
+	if (rc < 0) {
+		pr_err("KVM-NESTEDv2: couldn't set vcpu run input buffer\n");
+		goto free_vcpu_run_input;
+	}
+
+	vcore_message = kvmppc_gsm_new(&vcpu_message_ops, vcpu,
+				       KVMPPC_GS_FLAGS_WIDE, GFP_KERNEL);
+	if (!vcore_message) {
+		rc = -ENOMEM;
+		goto free_vcpu_run_input;
+	}
+
+	kvmppc_gsm_include_all(vcore_message);
+	kvmppc_gsbm_clear(&vcore_message->bitmap, KVMPPC_GSID_LOGICAL_PVR);
+	io->vcore_message = vcore_message;
+
kvmppc_gsbm_fill(&io->valids); 681 + kvmppc_gsm_free(gsm); 682 + kvmppc_gsb_free(gsb); 683 + return 0; 684 + 685 + free_vcpu_run_input: 686 + kvmppc_gsb_free(vcpu_run_input); 687 + free_vcpu_message: 688 + kvmppc_gsm_free(vcpu_message); 689 + free_gs_out: 690 + kvmppc_gsb_free(vcpu_run_output); 691 + free_gsb: 692 + kvmppc_gsb_free(gsb); 693 + free_gsm: 694 + kvmppc_gsm_free(gsm); 695 + err: 696 + return rc; 697 + } 698 + 699 + /** 700 + * __kvmhv_nestedv2_mark_dirty() - mark a Guest State ID to be sent to the host 701 + * @vcpu: vcpu 702 + * @iden: guest state ID 703 + * 704 + * Mark a guest state ID as having been changed by the L1 host and thus 705 + * the new value must be sent to the L0 hypervisor. See kvmhv_nestedv2_flush_vcpu() 706 + */ 707 + int __kvmhv_nestedv2_mark_dirty(struct kvm_vcpu *vcpu, u16 iden) 708 + { 709 + struct kvmhv_nestedv2_io *io; 710 + struct kvmppc_gs_bitmap *valids; 711 + struct kvmppc_gs_msg *gsm; 712 + 713 + if (!iden) 714 + return 0; 715 + 716 + io = &vcpu->arch.nestedv2_io; 717 + valids = &io->valids; 718 + gsm = io->vcpu_message; 719 + kvmppc_gsm_include(gsm, iden); 720 + gsm = io->vcore_message; 721 + kvmppc_gsm_include(gsm, iden); 722 + kvmppc_gsbm_set(valids, iden); 723 + return 0; 724 + } 725 + EXPORT_SYMBOL_GPL(__kvmhv_nestedv2_mark_dirty); 726 + 727 + /** 728 + * __kvmhv_nestedv2_cached_reload() - reload a Guest State ID from the host 729 + * @vcpu: vcpu 730 + * @iden: guest state ID 731 + * 732 + * Reload the value for the guest state ID from the L0 host into the L1 host. 733 + * This is cached so that going out to the L0 host only happens if necessary. 
734 + */ 735 + int __kvmhv_nestedv2_cached_reload(struct kvm_vcpu *vcpu, u16 iden) 736 + { 737 + struct kvmhv_nestedv2_io *io; 738 + struct kvmppc_gs_bitmap *valids; 739 + struct kvmppc_gs_buff *gsb; 740 + struct kvmppc_gs_msg gsm; 741 + int rc; 742 + 743 + if (!iden) 744 + return 0; 745 + 746 + io = &vcpu->arch.nestedv2_io; 747 + valids = &io->valids; 748 + if (kvmppc_gsbm_test(valids, iden)) 749 + return 0; 750 + 751 + gsb = io->vcpu_run_input; 752 + kvmppc_gsm_init(&gsm, &vcpu_message_ops, vcpu, kvmppc_gsid_flags(iden)); 753 + rc = kvmppc_gsb_receive_datum(gsb, &gsm, iden); 754 + if (rc < 0) { 755 + pr_err("KVM-NESTEDv2: couldn't get GSID: 0x%x\n", iden); 756 + return rc; 757 + } 758 + return 0; 759 + } 760 + EXPORT_SYMBOL_GPL(__kvmhv_nestedv2_cached_reload); 761 + 762 + /** 763 + * kvmhv_nestedv2_flush_vcpu() - send modified Guest State IDs to the host 764 + * @vcpu: vcpu 765 + * @time_limit: hdec expiry tb 766 + * 767 + * Send the values marked by __kvmhv_nestedv2_mark_dirty() to the L0 host. 768 + * Thread wide values are copied to the H_GUEST_RUN_VCPU input buffer. Guest 769 + * wide values need to be sent with H_GUEST_SET first. 770 + * 771 + * The hdec tb offset is always sent to L0 host. 
772 + */ 773 + int kvmhv_nestedv2_flush_vcpu(struct kvm_vcpu *vcpu, u64 time_limit) 774 + { 775 + struct kvmhv_nestedv2_io *io; 776 + struct kvmppc_gs_buff *gsb; 777 + struct kvmppc_gs_msg *gsm; 778 + int rc; 779 + 780 + io = &vcpu->arch.nestedv2_io; 781 + gsb = io->vcpu_run_input; 782 + gsm = io->vcore_message; 783 + rc = kvmppc_gsb_send_data(gsb, gsm); 784 + if (rc < 0) { 785 + pr_err("KVM-NESTEDv2: couldn't set guest wide elements\n"); 786 + return rc; 787 + } 788 + 789 + gsm = io->vcpu_message; 790 + kvmppc_gsb_reset(gsb); 791 + rc = kvmppc_gsm_fill_info(gsm, gsb); 792 + if (rc < 0) { 793 + pr_err("KVM-NESTEDv2: couldn't fill vcpu run input buffer\n"); 794 + return rc; 795 + } 796 + 797 + rc = kvmppc_gse_put_u64(gsb, KVMPPC_GSID_HDEC_EXPIRY_TB, time_limit); 798 + if (rc < 0) 799 + return rc; 800 + return 0; 801 + } 802 + EXPORT_SYMBOL_GPL(kvmhv_nestedv2_flush_vcpu); 803 + 804 + /** 805 + * kvmhv_nestedv2_set_ptbl_entry() - send partition and process table state to 806 + * L0 host 807 + * @lpid: guest id 808 + * @dw0: partition table double word 809 + * @dw1: process table double word 810 + */ 811 + int kvmhv_nestedv2_set_ptbl_entry(unsigned long lpid, u64 dw0, u64 dw1) 812 + { 813 + struct kvmppc_gs_part_table patbl; 814 + struct kvmppc_gs_proc_table prtbl; 815 + struct kvmppc_gs_buff *gsb; 816 + size_t size; 817 + int rc; 818 + 819 + size = kvmppc_gse_total_size( 820 + kvmppc_gsid_size(KVMPPC_GSID_PARTITION_TABLE)) + 821 + kvmppc_gse_total_size( 822 + kvmppc_gsid_size(KVMPPC_GSID_PROCESS_TABLE)) + 823 + sizeof(struct kvmppc_gs_header); 824 + gsb = kvmppc_gsb_new(size, lpid, 0, GFP_KERNEL); 825 + if (!gsb) 826 + return -ENOMEM; 827 + 828 + patbl.address = dw0 & RPDB_MASK; 829 + patbl.ea_bits = ((((dw0 & RTS1_MASK) >> (RTS1_SHIFT - 3)) | 830 + ((dw0 & RTS2_MASK) >> RTS2_SHIFT)) + 831 + 31); 832 + patbl.gpd_size = 1ul << ((dw0 & RPDS_MASK) + 3); 833 + rc = kvmppc_gse_put_part_table(gsb, KVMPPC_GSID_PARTITION_TABLE, patbl); 834 + if (rc < 0) 835 + goto free_gsb; 
836 + 837 + prtbl.address = dw1 & PRTB_MASK; 838 + prtbl.gpd_size = 1ul << ((dw1 & PRTS_MASK) + 12); 839 + rc = kvmppc_gse_put_proc_table(gsb, KVMPPC_GSID_PROCESS_TABLE, prtbl); 840 + if (rc < 0) 841 + goto free_gsb; 842 + 843 + rc = kvmppc_gsb_send(gsb, KVMPPC_GS_FLAGS_WIDE); 844 + if (rc < 0) { 845 + pr_err("KVM-NESTEDv2: couldn't set the PATE\n"); 846 + goto free_gsb; 847 + } 848 + 849 + kvmppc_gsb_free(gsb); 850 + return 0; 851 + 852 + free_gsb: 853 + kvmppc_gsb_free(gsb); 854 + return rc; 855 + } 856 + EXPORT_SYMBOL_GPL(kvmhv_nestedv2_set_ptbl_entry); 857 + 858 + /** 859 + * kvmhv_nestedv2_parse_output() - receive values from H_GUEST_RUN_VCPU output 860 + * @vcpu: vcpu 861 + * 862 + * Parse the output buffer from H_GUEST_RUN_VCPU to update vcpu. 863 + */ 864 + int kvmhv_nestedv2_parse_output(struct kvm_vcpu *vcpu) 865 + { 866 + struct kvmhv_nestedv2_io *io; 867 + struct kvmppc_gs_buff *gsb; 868 + struct kvmppc_gs_msg gsm; 869 + 870 + io = &vcpu->arch.nestedv2_io; 871 + gsb = io->vcpu_run_output; 872 + 873 + vcpu->arch.fault_dar = 0; 874 + vcpu->arch.fault_dsisr = 0; 875 + vcpu->arch.fault_gpa = 0; 876 + vcpu->arch.emul_inst = KVM_INST_FETCH_FAILED; 877 + 878 + kvmppc_gsm_init(&gsm, &vcpu_message_ops, vcpu, 0); 879 + return kvmppc_gsm_refresh_info(&gsm, gsb); 880 + } 881 + EXPORT_SYMBOL_GPL(kvmhv_nestedv2_parse_output); 882 + 883 + static void kvmhv_nestedv2_host_free(struct kvm_vcpu *vcpu, 884 + struct kvmhv_nestedv2_io *io) 885 + { 886 + kvmppc_gsm_free(io->vcpu_message); 887 + kvmppc_gsm_free(io->vcore_message); 888 + kvmppc_gsb_free(io->vcpu_run_input); 889 + kvmppc_gsb_free(io->vcpu_run_output); 890 + } 891 + 892 + int __kvmhv_nestedv2_reload_ptregs(struct kvm_vcpu *vcpu, struct pt_regs *regs) 893 + { 894 + struct kvmhv_nestedv2_io *io; 895 + struct kvmppc_gs_bitmap *valids; 896 + struct kvmppc_gs_buff *gsb; 897 + struct kvmppc_gs_msg gsm; 898 + int rc = 0; 899 + 900 + 901 + io = &vcpu->arch.nestedv2_io; 902 + valids = &io->valids; 903 + 904 + gsb = 
io->vcpu_run_input; 905 + kvmppc_gsm_init(&gsm, &vcpu_message_ops, vcpu, 0); 906 + 907 + for (int i = 0; i < 32; i++) { 908 + if (!kvmppc_gsbm_test(valids, KVMPPC_GSID_GPR(i))) 909 + kvmppc_gsm_include(&gsm, KVMPPC_GSID_GPR(i)); 910 + } 911 + 912 + if (!kvmppc_gsbm_test(valids, KVMPPC_GSID_CR)) 913 + kvmppc_gsm_include(&gsm, KVMPPC_GSID_CR); 914 + 915 + if (!kvmppc_gsbm_test(valids, KVMPPC_GSID_XER)) 916 + kvmppc_gsm_include(&gsm, KVMPPC_GSID_XER); 917 + 918 + if (!kvmppc_gsbm_test(valids, KVMPPC_GSID_CTR)) 919 + kvmppc_gsm_include(&gsm, KVMPPC_GSID_CTR); 920 + 921 + if (!kvmppc_gsbm_test(valids, KVMPPC_GSID_LR)) 922 + kvmppc_gsm_include(&gsm, KVMPPC_GSID_LR); 923 + 924 + if (!kvmppc_gsbm_test(valids, KVMPPC_GSID_NIA)) 925 + kvmppc_gsm_include(&gsm, KVMPPC_GSID_NIA); 926 + 927 + rc = kvmppc_gsb_receive_data(gsb, &gsm); 928 + if (rc < 0) 929 + pr_err("KVM-NESTEDv2: couldn't reload ptregs\n"); 930 + 931 + return rc; 932 + } 933 + EXPORT_SYMBOL_GPL(__kvmhv_nestedv2_reload_ptregs); 934 + 935 + int __kvmhv_nestedv2_mark_dirty_ptregs(struct kvm_vcpu *vcpu, 936 + struct pt_regs *regs) 937 + { 938 + for (int i = 0; i < 32; i++) 939 + kvmhv_nestedv2_mark_dirty(vcpu, KVMPPC_GSID_GPR(i)); 940 + 941 + kvmhv_nestedv2_mark_dirty(vcpu, KVMPPC_GSID_CR); 942 + kvmhv_nestedv2_mark_dirty(vcpu, KVMPPC_GSID_XER); 943 + kvmhv_nestedv2_mark_dirty(vcpu, KVMPPC_GSID_CTR); 944 + kvmhv_nestedv2_mark_dirty(vcpu, KVMPPC_GSID_LR); 945 + kvmhv_nestedv2_mark_dirty(vcpu, KVMPPC_GSID_NIA); 946 + 947 + return 0; 948 + } 949 + EXPORT_SYMBOL_GPL(__kvmhv_nestedv2_mark_dirty_ptregs); 950 + 951 + /** 952 + * kvmhv_nestedv2_vcpu_create() - create nested vcpu for the NESTEDv2 API 953 + * @vcpu: vcpu 954 + * @io: NESTEDv2 nested io state 955 + * 956 + * Parse the output buffer from H_GUEST_RUN_VCPU to update vcpu. 
957 + */ 958 + int kvmhv_nestedv2_vcpu_create(struct kvm_vcpu *vcpu, 959 + struct kvmhv_nestedv2_io *io) 960 + { 961 + long rc; 962 + 963 + rc = plpar_guest_create_vcpu(0, vcpu->kvm->arch.lpid, vcpu->vcpu_id); 964 + 965 + if (rc != H_SUCCESS) { 966 + pr_err("KVM: Create Guest vcpu hcall failed, rc=%ld\n", rc); 967 + switch (rc) { 968 + case H_NOT_ENOUGH_RESOURCES: 969 + case H_ABORTED: 970 + return -ENOMEM; 971 + case H_AUTHORITY: 972 + return -EPERM; 973 + default: 974 + return -EINVAL; 975 + } 976 + } 977 + 978 + rc = kvmhv_nestedv2_host_create(vcpu, io); 979 + 980 + return rc; 981 + } 982 + EXPORT_SYMBOL_GPL(kvmhv_nestedv2_vcpu_create); 983 + 984 + /** 985 + * kvmhv_nestedv2_vcpu_free() - free the NESTEDv2 state 986 + * @vcpu: vcpu 987 + * @io: NESTEDv2 nested io state 988 + */ 989 + void kvmhv_nestedv2_vcpu_free(struct kvm_vcpu *vcpu, 990 + struct kvmhv_nestedv2_io *io) 991 + { 992 + kvmhv_nestedv2_host_free(vcpu, io); 993 + } 994 + EXPORT_SYMBOL_GPL(kvmhv_nestedv2_vcpu_free);
+3 -1
arch/powerpc/kvm/emulate_loadstore.c
···
 	vcpu->arch.mmio_host_swabbed = 0;

 	emulated = EMULATE_FAIL;
-	vcpu->arch.regs.msr = vcpu->arch.shared->msr;
+	vcpu->arch.regs.msr = kvmppc_get_msr(vcpu);
+	kvmhv_nestedv2_reload_ptregs(vcpu, &vcpu->arch.regs);
 	if (analyse_instr(&op, &vcpu->arch.regs, inst) == 0) {
 		int type = op.type & INSTR_TYPE_MASK;
 		int size = GETSIZE(op.type);
···
 	}

 	trace_kvm_ppc_instr(ppc_inst_val(inst), kvmppc_get_pc(vcpu), emulated);
+	kvmhv_nestedv2_mark_dirty_ptregs(vcpu, &vcpu->arch.regs);

 	/* Advance past emulated instruction. */
 	if (emulated != EMULATE_FAIL)
+50
arch/powerpc/kvm/guest-state-buffer.c
···
 	return gsm->ops->refresh_info(gsm, gsb);
 }
 EXPORT_SYMBOL_GPL(kvmppc_gsm_refresh_info);
+
+/**
+ * kvmppc_gsb_send - send all elements in the buffer to the hypervisor.
+ * @gsb: guest state buffer
+ * @flags: guest wide or thread wide
+ *
+ * Performs the H_GUEST_SET_STATE hcall for the guest state buffer.
+ */
+int kvmppc_gsb_send(struct kvmppc_gs_buff *gsb, unsigned long flags)
+{
+	unsigned long hflags = 0;
+	unsigned long i;
+	int rc;
+
+	if (kvmppc_gsb_nelems(gsb) == 0)
+		return 0;
+
+	if (flags & KVMPPC_GS_FLAGS_WIDE)
+		hflags |= H_GUEST_FLAGS_WIDE;
+
+	rc = plpar_guest_set_state(hflags, gsb->guest_id, gsb->vcpu_id,
+				   __pa(gsb->hdr), gsb->capacity, &i);
+	return rc;
+}
+EXPORT_SYMBOL_GPL(kvmppc_gsb_send);
+
+/**
+ * kvmppc_gsb_recv - request that all elements in the buffer have their
+ * values updated.
+ * @gsb: guest state buffer
+ * @flags: guest wide or thread wide
+ *
+ * Performs the H_GUEST_GET_STATE hcall for the guest state buffer.
+ * After the hcall returns, the guest state elements that were present
+ * in the buffer will have updated values from the hypervisor.
+ */
+int kvmppc_gsb_recv(struct kvmppc_gs_buff *gsb, unsigned long flags)
+{
+	unsigned long hflags = 0;
+	unsigned long i;
+	int rc;
+
+	if (flags & KVMPPC_GS_FLAGS_WIDE)
+		hflags |= H_GUEST_FLAGS_WIDE;
+
+	rc = plpar_guest_get_state(hflags, gsb->guest_id, gsb->vcpu_id,
+				   __pa(gsb->hdr), gsb->capacity, &i);
+	return rc;
+}
+EXPORT_SYMBOL_GPL(kvmppc_gsb_recv);