Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'powerpc-6.16-1' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux

Pull powerpc updates from Madhavan Srinivasan:

- Support for dynamic preemption

- Migration of powerpc board GPIO drivers to the new setter API

- A new PMU for KVM host-wide measurements

- Enhancements to the htmdump driver to support more functions

- New character devices for a couple of RTAS-supported APIs

- Minor fixes and cleanups

Thanks to Amit Machhiwal, Athira Rajeev, Bagas Sanjaya, Bartosz
Golaszewski, Christophe Leroy, Eddie James, Gaurav Batra, Gautam
Menghani, Geert Uytterhoeven, Haren Myneni, Hari Bathini, Jiri Slaby
(SUSE), Linus Walleij, Michal Suchanek, Naveen N Rao (AMD), Nilay
Shroff, Ricardo B. Marlière, Ritesh Harjani (IBM), Sathvika Vasireddy,
Shrikanth Hegde, Stephen Rothwell, Sourabh Jain, Thorsten Blum, Vaibhav
Jain, Venkat Rao Bagalkote, and Viktor Malik.

* tag 'powerpc-6.16-1' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux: (52 commits)
MAINTAINERS: powerpc: Remove myself as a reviewer
powerpc/iommu: Use str_disabled_enabled() helper
powerpc/powermac: Use str_enabled_disabled() and str_on_off() helpers
powerpc/mm/fault: Use str_write_read() helper function
powerpc: Replace strcpy() with strscpy() in proc_ppc64_init()
powerpc/pseries/iommu: Fix kmemleak in TCE table userspace view
powerpc/kernel: Fix ppc_save_regs inclusion in build
powerpc: Transliterate author name and remove FIXME
powerpc/pseries/htmdump: Include header file to get is_kvm_guest() definition
KVM: PPC: Book3S HV: Fix IRQ map warnings with XICS on pSeries KVM Guest
powerpc/8xx: Reduce alignment constraint for kernel memory
powerpc/boot: Fix build with gcc 15
powerpc/pseries/htmdump: Add documentation for H_HTM debugfs interface
powerpc/pseries/htmdump: Add htm capabilities support to htmdump module
powerpc/pseries/htmdump: Add htm flags support to htmdump module
powerpc/pseries/htmdump: Add htm setup support to htmdump module
powerpc/pseries/htmdump: Add htm info support to htmdump module
powerpc/pseries/htmdump: Add htm status support to htmdump module
powerpc/pseries/htmdump: Add htm start support to htmdump module
powerpc/pseries/htmdump: Add htm configure support to htmdump module
...

+3146 -471
+104
Documentation/arch/powerpc/htm.rst
···
.. SPDX-License-Identifier: GPL-2.0
.. _htm:

==========================
HTM (Hardware Trace Macro)
==========================

Athira Rajeev, 2 Mar 2025

.. contents::
      :depth: 3


Basic overview
==============

H_HTM is the hcall interface for executing Hardware Trace Macro (HTM)
functions, including setup, configuration, control and dumping of the HTM
data. To use HTM, the HTM buffers must first be set up; HTM operations can
then be controlled using the H_HTM hcall, which can be invoked for any
core/chip of the system from within a partition. The interface is exposed
through a debugfs folder called "htmdump" under /sys/kernel/debug/powerpc.


HTM debugfs example usage
=========================

.. code-block:: sh

  # ls /sys/kernel/debug/powerpc/htmdump/
  coreindexonchip  htmcaps  htmconfigure  htmflags  htminfo  htmsetup
  htmstart  htmstatus  htmtype  nodalchipindex  nodeindex  trace

Details on each file:

* nodeindex, nodalchipindex, coreindexonchip: specify which partition to
  configure the HTM for.
* htmtype: specifies the type of HTM. The supported target is hardwareTarget.
* trace: reads the HTM data.
* htmconfigure: configures/deconfigures the HTM. Writing 1 to the file
  configures the trace; writing 0 deconfigures it.
* htmstart: starts/stops the HTM. Writing 1 to the file starts the tracing;
  writing 0 stops it.
* htmstatus: gets the status of the HTM. This is needed to understand the
  HTM state after each operation.
* htmsetup: sets the HTM buffer size. The size of the HTM buffer is given
  as a power of 2.
* htminfo: provides the system processor configuration details. This is
  needed to choose appropriate values for nodeindex, nodalchipindex and
  coreindexonchip.
* htmcaps: provides the HTM capabilities, such as the minimum/maximum
  buffer size and what kind of tracing the HTM supports.
* htmflags: passes flags to the hcall. Currently supports controlling the
  wrapping of the HTM buffer.

To see the system processor configuration details:

.. code-block:: sh

  # cat /sys/kernel/debug/powerpc/htmdump/htminfo > htminfo_file

The result can be interpreted using hexdump.

To collect HTM traces for a partition represented by nodeindex 0,
nodalchipindex 1 and coreindexonchip 12:

.. code-block:: sh

  # cd /sys/kernel/debug/powerpc/htmdump/
  # echo 2 > htmtype
  # echo 33 > htmsetup   # sets 8GB for the HTM buffer; the number is the size as a power of 2

This requires a CEC reboot to get the HTM buffers allocated.

.. code-block:: sh

  # cd /sys/kernel/debug/powerpc/htmdump/
  # echo 2 > htmtype
  # echo 0 > nodeindex
  # echo 1 > nodalchipindex
  # echo 12 > coreindexonchip
  # echo 1 > htmflags     # to set noWrap for HTM buffers
  # echo 1 > htmconfigure # Configure the HTM
  # echo 1 > htmstart     # Start the HTM
  # echo 0 > htmstart     # Stop the HTM
  # echo 0 > htmconfigure # Deconfigure the HTM
  # cat htmstatus         # Dump the status of HTM entries as data

The above sets the htmtype and core details, then executes the respective
HTM operations.

Read the HTM trace data
=======================

After starting the trace collection, run the workload of interest. Stop
the trace collection after the required period of time, and read the
trace file:

.. code-block:: sh

  # cat /sys/kernel/debug/powerpc/htmdump/trace > trace_file

This trace file contains the instruction traces collected during the
workload execution, and can be used as input for trace decoders to
interpret the data.

Benefits of using the HTM debugfs interface
===========================================

It is now possible to collect traces for a particular core/chip from
within any partition of the system and decode them. Through this
enablement, a small partition can be dedicated to collecting the trace
data and analyzing it to provide important information for performance
analysis, software tuning, or hardware debug.
+30 -10
Documentation/arch/powerpc/kvm-nested.rst
···
      flags:
           Bit 0: getGuestWideState: Request state of the Guest instead
             of an individual VCPU.
-          Bit 1: takeOwnershipOfVcpuState Indicate the L1 is taking
-            over ownership of the VCPU state and that the L0 can free
-            the storage holding the state. The VCPU state will need to
-            be returned to the Hypervisor via H_GUEST_SET_STATE prior
-            to H_GUEST_RUN_VCPU being called for this VCPU. The data
-            returned in the dataBuffer is in a Hypervisor internal
-            format.
+          Bit 1: getHostWideState: Request stats of the Host. This causes
+            the guestId and vcpuId parameters to be ignored and attempting
+            to get the VCPU/Guest state will cause an error.
           Bits 2-63: Reserved
      guestId: ID obtained from H_GUEST_CREATE
      vcpuId: ID of the vCPU pass to H_GUEST_CREATE_VCPU
···
  table information.

  +--------+-------+----+--------+----------------------------------+
- | ID     | Size  | RW | Thread | Details                          |
- |        | Bytes |    | Guest  |                                  |
- |        |       |    | Scope  |                                  |
+ | ID     | Size  | RW |(H)ost  | Details                          |
+ |        | Bytes |    |(G)uest |                                  |
+ |        |       |    |(T)hread|                                  |
+ |        |       |    |Scope   |                                  |
  +========+=======+====+========+==================================+
  | 0x0000 |       | RW | TG     | NOP element                      |
  +--------+-------+----+--------+----------------------------------+
···
  |        |       |    |        |- 0x8 Table size.                 |
  +--------+-------+----+--------+----------------------------------+
  | 0x0007-|       |    |        | Reserved                         |
+ | 0x07FF |       |    |        |                                  |
+ +--------+-------+----+--------+----------------------------------+
+ | 0x0800 | 0x08  | R  | H      | Current usage in bytes of the    |
+ |        |       |    |        | L0's Guest Management Space      |
+ |        |       |    |        | for an L1-Lpar.                  |
+ +--------+-------+----+--------+----------------------------------+
+ | 0x0801 | 0x08  | R  | H      | Max bytes available in the       |
+ |        |       |    |        | L0's Guest Management Space for  |
+ |        |       |    |        | an L1-Lpar                       |
+ +--------+-------+----+--------+----------------------------------+
+ | 0x0802 | 0x08  | R  | H      | Current usage in bytes of the    |
+ |        |       |    |        | L0's Guest Page Table Management |
+ |        |       |    |        | Space for an L1-Lpar             |
+ +--------+-------+----+--------+----------------------------------+
+ | 0x0803 | 0x08  | R  | H      | Max bytes available in the L0's  |
+ |        |       |    |        | Guest Page Table Management      |
+ |        |       |    |        | Space for an L1-Lpar             |
+ +--------+-------+----+--------+----------------------------------+
+ | 0x0804 | 0x08  | R  | H      | Cumulative Reclaimed bytes from  |
+ |        |       |    |        | L0 Guest's Page Table Management |
+ |        |       |    |        | Space due to overcommit          |
+ +--------+-------+----+--------+----------------------------------+
+ | 0x0805-|       |    |        | Reserved                         |
  | 0x0BFF |       |    |        |                                  |
  +--------+-------+----+--------+----------------------------------+
  | 0x0C00 | 0x10  | RW | T      |Run vCPU Input Buffer:            |
+6
Documentation/userspace-api/ioctl/ioctl-number.rst
···
                                                       <mailto:linuxppc-dev>
  0xB2  01-02  arch/powerpc/include/uapi/asm/papr-sysparm.h          powerpc/pseries system parameter API
                                                       <mailto:linuxppc-dev>
+ 0xB2  03-05  arch/powerpc/include/uapi/asm/papr-indices.h          powerpc/pseries indices API
+                                                      <mailto:linuxppc-dev>
+ 0xB2  06-07  arch/powerpc/include/uapi/asm/papr-platform-dump.h    powerpc/pseries Platform Dump API
+                                                      <mailto:linuxppc-dev>
+ 0xB2  08     powerpc/include/uapi/asm/papr-physical-attestation.h  powerpc/pseries Physical Attestation API
+                                                      <mailto:linuxppc-dev>
  0xB3  00     linux/mmc/ioctl.h
  0xB4  00-0F  linux/gpio.h                                          <mailto:linux-gpio@vger.kernel.org>
  0xB5  00-0F  uapi/linux/rpmsg.h                                    <mailto:linux-remoteproc@vger.kernel.org>
-1
MAINTAINERS
···
  M:	Michael Ellerman <mpe@ellerman.id.au>
  R:	Nicholas Piggin <npiggin@gmail.com>
  R:	Christophe Leroy <christophe.leroy@csgroup.eu>
- R:	Naveen N Rao <naveen@kernel.org>
  L:	linuxppc-dev@lists.ozlabs.org
  S:	Supported
  W:	https://github.com/linuxppc/wiki/wiki
+6 -5
arch/powerpc/Kconfig
···
  	select HAVE_PERF_EVENTS_NMI		if PPC64
  	select HAVE_PERF_REGS
  	select HAVE_PERF_USER_STACK_DUMP
+ 	select HAVE_PREEMPT_DYNAMIC_KEY
  	select HAVE_RETHOOK			if KPROBES
  	select HAVE_REGS_AND_STACK_ACCESS_API
  	select HAVE_RELIABLE_STACKTRACE
···
  	int "Data shift" if DATA_SHIFT_BOOL
  	default 24 if STRICT_KERNEL_RWX && PPC64
  	range 17 28 if (STRICT_KERNEL_RWX || DEBUG_PAGEALLOC || KFENCE) && PPC_BOOK3S_32
- 	range 19 23 if (STRICT_KERNEL_RWX || DEBUG_PAGEALLOC || KFENCE) && PPC_8xx
+ 	range 14 23 if (STRICT_KERNEL_RWX || DEBUG_PAGEALLOC || KFENCE) && PPC_8xx
  	range 20 24 if (STRICT_KERNEL_RWX || DEBUG_PAGEALLOC || KFENCE) && PPC_85xx
  	default 22 if STRICT_KERNEL_RWX && PPC_BOOK3S_32
  	default 18 if (DEBUG_PAGEALLOC || KFENCE) && PPC_BOOK3S_32
···
  	  On Book3S 32 (603+), DBATs are used to map kernel text and rodata RO.
  	  Smaller is the alignment, greater is the number of necessary DBATs.

- 	  On 8xx, large pages (512kb or 8M) are used to map kernel linear
- 	  memory. Aligning to 8M reduces TLB misses as only 8M pages are used
- 	  in that case. If PIN_TLB is selected, it must be aligned to 8M as
- 	  8M pages will be pinned.
+ 	  On 8xx, large pages (16kb or 512kb or 8M) are used to map kernel
+ 	  linear memory. Aligning to 8M reduces TLB misses as only 8M pages
+ 	  are used in that case. If PIN_TLB is selected, it must be aligned
+ 	  to 8M as 8M pages will be pinned.

  config ARCH_FORCE_MAX_ORDER
  	int "Order of maximal physically contiguous allocations"
+1
arch/powerpc/boot/Makefile
···
  BOOTCPPFLAGS += -isystem $(shell $(BOOTCC) -print-file-name=include)

  BOOTCFLAGS	:= $(BOOTTARGETFLAGS) \
+ 		   -std=gnu11 \
  		   -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs \
  		   -fno-strict-aliasing -O2 \
  		   -msoft-float -mno-altivec -mno-vsx \
+1 -5
arch/powerpc/boot/rs6000.h
···
  /* SPDX-License-Identifier: GPL-2.0 */
  /* IBM RS/6000 "XCOFF" file definitions for BFD.
     Copyright (C) 1990, 1991 Free Software Foundation, Inc.
-    FIXME: Can someone provide a transliteration of this name into ASCII?
-    Using the following chars caused a compiler warning on HIUX (so I replaced
-    them with octal escapes), and isn't useful without an understanding of what
-    character set it is.
-    Written by Mimi Ph\373\364ng-Th\345o V\365 of IBM
+    Written by Mimi Phuong-Thao Vo of IBM
     and John Gilmore of Cygnus Support. */

  /********************** FILE HEADER **********************/
+29 -6
arch/powerpc/include/asm/guest-state-buffer.h
···
  /* Process Table Info */
  #define KVMPPC_GSID_PROCESS_TABLE		0x0006

+ /* Guest Management Heap Size */
+ #define KVMPPC_GSID_L0_GUEST_HEAP		0x0800
+
+ /* Guest Management Heap Max Size */
+ #define KVMPPC_GSID_L0_GUEST_HEAP_MAX		0x0801
+
+ /* Guest Pagetable Size */
+ #define KVMPPC_GSID_L0_GUEST_PGTABLE_SIZE	0x0802
+
+ /* Guest Pagetable Max Size */
+ #define KVMPPC_GSID_L0_GUEST_PGTABLE_SIZE_MAX	0x0803
+
+ /* Guest Pagetable Reclaim in bytes */
+ #define KVMPPC_GSID_L0_GUEST_PGTABLE_RECLAIM	0x0804
+
  /* H_GUEST_RUN_VCPU input buffer Info */
  #define KVMPPC_GSID_RUN_INPUT			0x0C00
  /* H_GUEST_RUN_VCPU output buffer Info */
···
  #define KVMPPC_GSE_GUESTWIDE_COUNT \
  	(KVMPPC_GSE_GUESTWIDE_END - KVMPPC_GSE_GUESTWIDE_START + 1)

+ #define KVMPPC_GSE_HOSTWIDE_START KVMPPC_GSID_L0_GUEST_HEAP
+ #define KVMPPC_GSE_HOSTWIDE_END KVMPPC_GSID_L0_GUEST_PGTABLE_RECLAIM
+ #define KVMPPC_GSE_HOSTWIDE_COUNT \
+ 	(KVMPPC_GSE_HOSTWIDE_END - KVMPPC_GSE_HOSTWIDE_START + 1)
+
  #define KVMPPC_GSE_META_START KVMPPC_GSID_RUN_INPUT
  #define KVMPPC_GSE_META_END KVMPPC_GSID_VPA
  #define KVMPPC_GSE_META_COUNT (KVMPPC_GSE_META_END - KVMPPC_GSE_META_START + 1)
···
  	(KVMPPC_GSE_INTR_REGS_END - KVMPPC_GSE_INTR_REGS_START + 1)

  #define KVMPPC_GSE_IDEN_COUNT \
- 	(KVMPPC_GSE_GUESTWIDE_COUNT + KVMPPC_GSE_META_COUNT + \
+ 	(KVMPPC_GSE_HOSTWIDE_COUNT + \
+ 	 KVMPPC_GSE_GUESTWIDE_COUNT + KVMPPC_GSE_META_COUNT + \
  	 KVMPPC_GSE_DW_REGS_COUNT + KVMPPC_GSE_W_REGS_COUNT + \
  	 KVMPPC_GSE_VSRS_COUNT + KVMPPC_GSE_INTR_REGS_COUNT)
···
   */
  enum {
  	KVMPPC_GS_CLASS_GUESTWIDE = 0x01,
- 	KVMPPC_GS_CLASS_META = 0x02,
- 	KVMPPC_GS_CLASS_DWORD_REG = 0x04,
- 	KVMPPC_GS_CLASS_WORD_REG = 0x08,
- 	KVMPPC_GS_CLASS_VECTOR = 0x10,
+ 	KVMPPC_GS_CLASS_HOSTWIDE = 0x02,
+ 	KVMPPC_GS_CLASS_META = 0x04,
+ 	KVMPPC_GS_CLASS_DWORD_REG = 0x08,
+ 	KVMPPC_GS_CLASS_WORD_REG = 0x10,
+ 	KVMPPC_GS_CLASS_VECTOR = 0x18,
  	KVMPPC_GS_CLASS_INTR = 0x20,
  };
···
   */
  enum {
  	KVMPPC_GS_FLAGS_WIDE = 0x01,
+ 	KVMPPC_GS_FLAGS_HOST_WIDE = 0x02,
  };

  /**
···
   * struct kvmppc_gs_msg - a guest state message
   * @bitmap: the guest state ids that should be included
   * @ops: modify message behavior for reading and writing to buffers
-  * @flags: guest wide or thread wide
+  * @flags: host wide, guest wide or thread wide
   * @data: location where buffer data will be written to or from.
   *
   * A guest state message is allows flexibility in sending in receiving data
+7 -6
arch/powerpc/include/asm/hvcall.h
···
  #define H_RPTI_PAGE_ALL (-1UL)

  /* Flags for H_GUEST_{S,G}_STATE */
- #define H_GUEST_FLAGS_WIDE      (1UL<<(63-0))
+ #define H_GUEST_FLAGS_WIDE      (1UL << (63 - 0))
+ #define H_GUEST_FLAGS_HOST_WIDE (1UL << (63 - 1))

  /* Flag values used for H_{S,G}SET_GUEST_CAPABILITIES */
- #define H_GUEST_CAP_COPY_MEM    (1UL<<(63-0))
- #define H_GUEST_CAP_POWER9      (1UL<<(63-1))
- #define H_GUEST_CAP_POWER10     (1UL<<(63-2))
- #define H_GUEST_CAP_POWER11     (1UL<<(63-3))
- #define H_GUEST_CAP_BITMAP2     (1UL<<(63-63))
+ #define H_GUEST_CAP_COPY_MEM    (1UL << (63 - 0))
+ #define H_GUEST_CAP_POWER9      (1UL << (63 - 1))
+ #define H_GUEST_CAP_POWER10     (1UL << (63 - 2))
+ #define H_GUEST_CAP_POWER11     (1UL << (63 - 3))
+ #define H_GUEST_CAP_BITMAP2     (1UL << (63 - 63))

  /*
   * Defines for H_HTM - Macros for hardware trace macro (HTM) function.
+14 -6
arch/powerpc/include/asm/plpar_wrappers.h
···
  	return vpa_call(H_VPA_REG_DTL, cpu, vpa);
  }

+ /*
+  * Invokes H_HTM hcall with parameters passed from htm_hcall_wrapper.
+  * flags: Set to hardwareTarget.
+  * target: Specifies target using node index, nodal chip index and core index.
+  * operation: action to perform, i.e. configure, start, stop, deconfigure,
+  * trace, based on the HTM type.
+  * param1, param2, param3: parameters for each action.
+  */
  static inline long htm_call(unsigned long flags, unsigned long target,
  			    unsigned long operation, unsigned long param1,
  			    unsigned long param2, unsigned long param3)
···
  			param1, param2, param3);
  }

- static inline long htm_get_dump_hardware(unsigned long nodeindex,
+ static inline long htm_hcall_wrapper(unsigned long flags, unsigned long nodeindex,
  		unsigned long nodalchipindex, unsigned long coreindexonchip,
- 		unsigned long type, unsigned long addr, unsigned long size,
- 		unsigned long offset)
+ 		unsigned long type, unsigned long htm_op, unsigned long param1,
+ 		unsigned long param2, unsigned long param3)
  {
- 	return htm_call(H_HTM_FLAGS_HARDWARE_TARGET,
+ 	return htm_call(H_HTM_FLAGS_HARDWARE_TARGET | flags,
  			H_HTM_TARGET_NODE_INDEX(nodeindex) |
  			H_HTM_TARGET_NODAL_CHIP_INDEX(nodalchipindex) |
  			H_HTM_TARGET_CORE_INDEX_ON_CHIP(coreindexonchip),
- 			H_HTM_OP(H_HTM_OP_DUMP_DATA) | H_HTM_TYPE(type),
- 			addr, size, offset);
+ 			H_HTM_OP(htm_op) | H_HTM_TYPE(type),
+ 			param1, param2, param3);
  }

  extern void vpa_init(int cpu);
+16
arch/powerpc/include/asm/preempt.h
···
+ /* SPDX-License-Identifier: GPL-2.0 */
+ #ifndef __ASM_POWERPC_PREEMPT_H
+ #define __ASM_POWERPC_PREEMPT_H
+
+ #include <asm-generic/preempt.h>
+
+ #if defined(CONFIG_PREEMPT_DYNAMIC)
+ #include <linux/jump_label.h>
+ DECLARE_STATIC_KEY_TRUE(sk_dynamic_irqentry_exit_cond_resched);
+ #define need_irq_preemption() \
+ 	(static_branch_unlikely(&sk_dynamic_irqentry_exit_cond_resched))
+ #else
+ #define need_irq_preemption()	(IS_ENABLED(CONFIG_PREEMPTION))
+ #endif
+
+ #endif /* __ASM_POWERPC_PREEMPT_H */
+4
arch/powerpc/include/asm/rtas.h
···
  extern unsigned long rtas_rmo_buf;

  extern struct mutex rtas_ibm_get_vpd_lock;
+ extern struct mutex rtas_ibm_get_indices_lock;
+ extern struct mutex rtas_ibm_set_dynamic_indicator_lock;
+ extern struct mutex rtas_ibm_get_dynamic_sensor_state_lock;
+ extern struct mutex rtas_ibm_physical_attestation_lock;

  #define GLOBAL_INTERRUPT_QUEUE 9005
+41
arch/powerpc/include/uapi/asm/papr-indices.h
···
+ /* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+ #ifndef _UAPI_PAPR_INDICES_H_
+ #define _UAPI_PAPR_INDICES_H_
+
+ #include <linux/types.h>
+ #include <asm/ioctl.h>
+ #include <asm/papr-miscdev.h>
+
+ #define LOC_CODE_SIZE			80
+ #define RTAS_GET_INDICES_BUF_SIZE	SZ_4K
+
+ struct papr_indices_io_block {
+ 	union {
+ 		struct {
+ 			__u8 is_sensor; /* 0 for indicator and 1 for sensor */
+ 			__u32 indice_type;
+ 		} indices;
+ 		struct {
+ 			__u32 token; /* Sensor or indicator token */
+ 			__u32 state; /* get / set state */
+ 			/*
+ 			 * PAPR+ 12.3.2.4 Converged Location Code Rules - Length
+ 			 * Restrictions. 79 characters plus null.
+ 			 */
+ 			char location_code_str[LOC_CODE_SIZE]; /* location code */
+ 		} dynamic_param;
+ 	};
+ };
+
+ /*
+  * ioctls for /dev/papr-indices.
+  * PAPR_INDICES_IOC_GET: Returns a get-indices handle fd to read data
+  * PAPR_DYNAMIC_SENSOR_IOC_GET: Gets the state of the input sensor
+  * PAPR_DYNAMIC_INDICATOR_IOC_SET: Sets the new state for the input indicator
+  */
+ #define PAPR_INDICES_IOC_GET		_IOW(PAPR_MISCDEV_IOC_ID, 3, struct papr_indices_io_block)
+ #define PAPR_DYNAMIC_SENSOR_IOC_GET	_IOWR(PAPR_MISCDEV_IOC_ID, 4, struct papr_indices_io_block)
+ #define PAPR_DYNAMIC_INDICATOR_IOC_SET	_IOW(PAPR_MISCDEV_IOC_ID, 5, struct papr_indices_io_block)
+
+ #endif /* _UAPI_PAPR_INDICES_H_ */
+31
arch/powerpc/include/uapi/asm/papr-physical-attestation.h
···
+ /* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+ #ifndef _UAPI_PAPR_PHYSICAL_ATTESTATION_H_
+ #define _UAPI_PAPR_PHYSICAL_ATTESTATION_H_
+
+ #include <linux/types.h>
+ #include <asm/ioctl.h>
+ #include <asm/papr-miscdev.h>
+
+ #define PAPR_PHYATTEST_MAX_INPUT 4084 /* Max 4K buffer: 4K-12 */
+
+ /*
+  * Defined in PAPR 2.13+ 21.6 Attestation Command Structures.
+  * User space passes this struct; the max size should be 4K.
+  */
+ struct papr_phy_attest_io_block {
+ 	__u8 version;
+ 	__u8 command;
+ 	__u8 TCG_major_ver;
+ 	__u8 TCG_minor_ver;
+ 	__be32 length;
+ 	__be32 correlator;
+ 	__u8 payload[PAPR_PHYATTEST_MAX_INPUT];
+ };
+
+ /*
+  * ioctl for /dev/papr-physical-attestation. Returns an attestation
+  * command fd handle.
+  */
+ #define PAPR_PHY_ATTEST_IOC_HANDLE _IOW(PAPR_MISCDEV_IOC_ID, 8, struct papr_phy_attest_io_block)
+
+ #endif /* _UAPI_PAPR_PHYSICAL_ATTESTATION_H_ */
+16
arch/powerpc/include/uapi/asm/papr-platform-dump.h
···
+ /* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+ #ifndef _UAPI_PAPR_PLATFORM_DUMP_H_
+ #define _UAPI_PAPR_PLATFORM_DUMP_H_
+
+ #include <linux/types.h>
+ #include <asm/ioctl.h>
+ #include <asm/papr-miscdev.h>
+
+ /*
+  * ioctl for /dev/papr-platform-dump. Returns a platform-dump handle fd
+  * corresponding to dump tag.
+  */
+ #define PAPR_PLATFORM_DUMP_IOC_CREATE_HANDLE	_IOW(PAPR_MISCDEV_IOC_ID, 6, __u64)
+ #define PAPR_PLATFORM_DUMP_IOC_INVALIDATE	_IOW(PAPR_MISCDEV_IOC_ID, 7, __u64)
+
+ #endif /* _UAPI_PAPR_PLATFORM_DUMP_H_ */
-2
arch/powerpc/kernel/Makefile
···
  obj64-$(CONFIG_PPC_TRANSACTIONAL_MEM)	+= tm.o

- ifneq ($(CONFIG_XMON)$(CONFIG_KEXEC_CORE)(CONFIG_PPC_BOOK3S),)
  obj-y				+= ppc_save_regs.o
- endif

  obj-$(CONFIG_EPAPR_PARAVIRT)	+= epapr_paravirt.o epapr_hcalls.o
  obj-$(CONFIG_KVM_GUEST)		+= kvm.o kvm_emul.o
+2 -4
arch/powerpc/kernel/fadump.c
···
  	if (!fw_dump.fadump_supported)
  		return;

- 	pr_debug("Fadump enabled    : %s\n",
- 		 (fw_dump.fadump_enabled ? "yes" : "no"));
- 	pr_debug("Dump Active       : %s\n",
- 		 (fw_dump.dump_active ? "yes" : "no"));
+ 	pr_debug("Fadump enabled    : %s\n", str_yes_no(fw_dump.fadump_enabled));
+ 	pr_debug("Dump Active       : %s\n", str_yes_no(fw_dump.dump_active));
  	pr_debug("Dump section sizes:\n");
  	pr_debug("    CPU state data size: %lx\n", fw_dump.cpu_state_data_size);
  	pr_debug("    HPTE region size   : %lx\n", fw_dump.hpte_region_size);
+5 -1
arch/powerpc/kernel/interrupt.c
···
  unsigned long global_dbcr0[NR_CPUS];
  #endif

+ #if defined(CONFIG_PREEMPT_DYNAMIC)
+ DEFINE_STATIC_KEY_TRUE(sk_dynamic_irqentry_exit_cond_resched);
+ #endif
+
  #ifdef CONFIG_PPC_BOOK3S_64
  DEFINE_STATIC_KEY_FALSE(interrupt_exit_not_reentrant);
  static inline bool exit_must_hard_disable(void)
···
  	/* Returning to a kernel context with local irqs enabled. */
  	WARN_ON_ONCE(!(regs->msr & MSR_EE));
  again:
- 	if (IS_ENABLED(CONFIG_PREEMPTION)) {
+ 	if (need_irq_preemption()) {
  		/* Return to preemptible kernel context */
  		if (unlikely(read_thread_flags() & _TIF_NEED_RESCHED)) {
  			if (preempt_count() == 0)
+3 -2
arch/powerpc/kernel/iommu.c
···
  #include <linux/mm.h>
  #include <linux/spinlock.h>
  #include <linux/string.h>
+ #include <linux/string_choices.h>
  #include <linux/dma-mapping.h>
  #include <linux/bitmap.h>
  #include <linux/iommu-helper.h>
···
  	iommu_table_clear(tbl);

  	if (!welcomed) {
- 		printk(KERN_INFO "IOMMU table initialized, virtual merging %s\n",
- 		       novmerge ? "disabled" : "enabled");
+ 		pr_info("IOMMU table initialized, virtual merging %s\n",
+ 			str_disabled_enabled(novmerge));
  		welcomed = 1;
  	}
+2 -1
arch/powerpc/kernel/proc_powerpc.c
···
  #include <linux/proc_fs.h>
  #include <linux/kernel.h>
  #include <linux/of.h>
+ #include <linux/string.h>

  #include <asm/machdep.h>
  #include <asm/vdso_datapage.h>
···
  {
  	struct proc_dir_entry *pde;

- 	strcpy((char *)systemcfg->eye_catcher, "SYSTEMCFG:PPC64");
+ 	strscpy(systemcfg->eye_catcher, "SYSTEMCFG:PPC64");
  	systemcfg->version.major = SYSTEMCFG_MAJOR;
  	systemcfg->version.minor = SYSTEMCFG_MINOR;
  	systemcfg->processor = mfspr(SPRN_PVR);
+4 -4
arch/powerpc/kernel/process.c
···
  	WARN_ON(tm_suspend_disabled);

- 	TM_DEBUG("--- tm_reclaim on pid %d (NIP=%lx, "
+ 	TM_DEBUG("---- tm_reclaim on pid %d (NIP=%lx, "
  		 "ccr=%lx, msr=%lx, trap=%lx)\n",
  		 tsk->pid, thr->regs->nip,
  		 thr->regs->ccr, thr->regs->msr,
···
  	tm_reclaim_thread(thr, TM_CAUSE_RESCHED);

- 	TM_DEBUG("--- tm_reclaim on pid %d complete\n",
+ 	TM_DEBUG("---- tm_reclaim on pid %d complete\n",
  		 tsk->pid);

  out_and_saveregs:
···
  			(sp + STACK_INT_FRAME_REGS);

  		lr = regs->link;
- 		printk("%s--- interrupt: %lx at %pS\n",
+ 		printk("%s---- interrupt: %lx at %pS\n",
  		       loglvl, regs->trap, (void *)regs->nip);

  		// Detect the case of an empty pt_regs at the very base
  		// of the stack and suppress showing it in full.
  		if (!empty_user_regs(regs, tsk)) {
  			__show_regs(regs);
- 			printk("%s--- interrupt: %lx\n", loglvl, regs->trap);
+ 			printk("%s---- interrupt: %lx\n", loglvl, regs->trap);
  		}

  		firstframe = 1;
+4 -4
arch/powerpc/kernel/rtas.c
···
   * Per-function locks for sequence-based RTAS functions.
   */
  static DEFINE_MUTEX(rtas_ibm_activate_firmware_lock);
- static DEFINE_MUTEX(rtas_ibm_get_dynamic_sensor_state_lock);
- static DEFINE_MUTEX(rtas_ibm_get_indices_lock);
  static DEFINE_MUTEX(rtas_ibm_lpar_perftools_lock);
- static DEFINE_MUTEX(rtas_ibm_physical_attestation_lock);
- static DEFINE_MUTEX(rtas_ibm_set_dynamic_indicator_lock);
+ DEFINE_MUTEX(rtas_ibm_physical_attestation_lock);
  DEFINE_MUTEX(rtas_ibm_get_vpd_lock);
+ DEFINE_MUTEX(rtas_ibm_get_indices_lock);
+ DEFINE_MUTEX(rtas_ibm_set_dynamic_indicator_lock);
+ DEFINE_MUTEX(rtas_ibm_get_dynamic_sensor_state_lock);

  static struct rtas_function rtas_function_table[] __ro_after_init = {
  	[RTAS_FNIDX__CHECK_EXCEPTION] = {
+1 -1
arch/powerpc/kernel/trace/ftrace_entry.S
···
  	bne-	1f

  	mr	r3, r15
+ 1:	mtlr	r3
  .if \allregs == 0
  	REST_GPR(15, r1)
  .endif
- 1:	mtlr	r3
  #endif

  	/* Restore gprs */
+4 -1
arch/powerpc/kexec/crash.c
···
  	if (TRAP(regs) == INTERRUPT_SYSTEM_RESET)
  		is_via_system_reset = 1;

- 	crash_smp_send_stop();
+ 	if (IS_ENABLED(CONFIG_SMP))
+ 		crash_smp_send_stop();
+ 	else
+ 		crash_kexec_prepare();

  	crash_save_cpu(regs, crashing_cpu);
+13
arch/powerpc/kvm/Kconfig
···
  	depends on KVM_BOOK3S_64 && PPC_POWERNV
  	select KVM_BOOK3S_HV_POSSIBLE
  	select KVM_GENERIC_MMU_NOTIFIER
+ 	select KVM_BOOK3S_HV_PMU
  	select CMA
  	help
  	  Support running unmodified book3s_64 guest kernels in
···
  	  Selecting this option for the L0 host implements a workaround for
  	  those buggy L1s which saves the L2 state, at the cost of performance
  	  in all nested-capable guest entry/exit.
+
+ config KVM_BOOK3S_HV_PMU
+ 	tristate "Hypervisor Perf events for KVM Book3s-HV"
+ 	depends on KVM_BOOK3S_64_HV
+ 	help
+ 	  Enable Book3s-HV Hypervisor Perf events PMU named 'kvm-hv'. These
+ 	  Perf events give an overview of hypervisor performance overall
+ 	  instead of a specific guest. Currently the PMU reports
+ 	  L0-Hypervisor stats on a kvm-hv enabled PSeries LPAR like:
+ 	  * Total/Used Guest-Heap
+ 	  * Total/Used Guest Page-table Memory
+ 	  * Total amount of Guest Page-table Memory reclaimed

  config KVM_BOOKE_HV
  	bool
+16 -4
arch/powerpc/kvm/book3s_hv.c
···
  	.fast_vcpu_kick = kvmppc_fast_vcpu_kick_hv,
  	.arch_vm_ioctl  = kvm_arch_vm_ioctl_hv,
  	.hcall_implemented = kvmppc_hcall_impl_hv,
- #ifdef CONFIG_KVM_XICS
- 	.irq_bypass_add_producer = kvmppc_irq_bypass_add_producer_hv,
- 	.irq_bypass_del_producer = kvmppc_irq_bypass_del_producer_hv,
- #endif
  	.configure_mmu = kvmhv_configure_mmu,
  	.get_rmmu_info = kvmhv_get_rmmu_info,
  	.set_smt_mode = kvmhv_set_smt_mode,
···
  		pr_err("KVM-HV: kvmppc_uvmem_init failed %d\n", r);
  		return r;
  	}
+
+ #if defined(CONFIG_KVM_XICS)
+ 	/*
+ 	 * IRQ bypass is supported only for interrupts whose EOI operations are
+ 	 * handled via OPAL calls. Therefore, register IRQ bypass handlers
+ 	 * exclusively for PowerNV KVM when booted with 'xive=off', indicating
+ 	 * the use of the emulated XICS interrupt controller.
+ 	 */
+ 	if (!kvmhv_on_pseries()) {
+ 		pr_info("KVM-HV: Enabling IRQ bypass\n");
+ 		kvm_ops_hv.irq_bypass_add_producer =
+ 			kvmppc_irq_bypass_add_producer_hv;
+ 		kvm_ops_hv.irq_bypass_del_producer =
+ 			kvmppc_irq_bypass_del_producer_hv;
+ 	}
+ #endif

  	kvm_ops_hv.owner = THIS_MODULE;
  	kvmppc_hv_ops = &kvm_ops_hv;
+6
arch/powerpc/kvm/book3s_hv_nestedv2.c
···
  	case KVMPPC_GSID_PROCESS_TABLE:
  	case KVMPPC_GSID_RUN_INPUT:
  	case KVMPPC_GSID_RUN_OUTPUT:
+ 	/* Host wide counters */
+ 	case KVMPPC_GSID_L0_GUEST_HEAP:
+ 	case KVMPPC_GSID_L0_GUEST_HEAP_MAX:
+ 	case KVMPPC_GSID_L0_GUEST_PGTABLE_SIZE:
+ 	case KVMPPC_GSID_L0_GUEST_PGTABLE_SIZE_MAX:
+ 	case KVMPPC_GSID_L0_GUEST_PGTABLE_RECLAIM:
  		break;
  	default:
  		size += kvmppc_gse_total_size(kvmppc_gsid_size(iden));
+39
arch/powerpc/kvm/guest-state-buffer.c
···
  	    (iden <= KVMPPC_GSE_GUESTWIDE_END))
  		return KVMPPC_GS_CLASS_GUESTWIDE;

+ 	if ((iden >= KVMPPC_GSE_HOSTWIDE_START) &&
+ 	    (iden <= KVMPPC_GSE_HOSTWIDE_END))
+ 		return KVMPPC_GS_CLASS_HOSTWIDE;
+
  	if ((iden >= KVMPPC_GSE_META_START) && (iden <= KVMPPC_GSE_META_END))
  		return KVMPPC_GS_CLASS_META;
···
  	int type = -1;

  	switch (kvmppc_gsid_class(iden)) {
+ 	case KVMPPC_GS_CLASS_HOSTWIDE:
+ 		switch (iden) {
+ 		case KVMPPC_GSID_L0_GUEST_HEAP:
+ 			fallthrough;
+ 		case KVMPPC_GSID_L0_GUEST_HEAP_MAX:
+ 			fallthrough;
+ 		case KVMPPC_GSID_L0_GUEST_PGTABLE_SIZE:
+ 			fallthrough;
+ 		case KVMPPC_GSID_L0_GUEST_PGTABLE_SIZE_MAX:
+ 			fallthrough;
+ 		case KVMPPC_GSID_L0_GUEST_PGTABLE_RECLAIM:
+ 			type = KVMPPC_GSE_BE64;
+ 			break;
+ 		}
+ 		break;
  	case KVMPPC_GS_CLASS_GUESTWIDE:
  		switch (iden) {
  		case KVMPPC_GSID_HOST_STATE_SIZE:
···
  	switch (kvmppc_gsid_class(iden)) {
  	case KVMPPC_GS_CLASS_GUESTWIDE:
  		flags = KVMPPC_GS_FLAGS_WIDE;
+ 		break;
+ 	case KVMPPC_GS_CLASS_HOSTWIDE:
+ 		flags = KVMPPC_GS_FLAGS_HOST_WIDE;
  		break;
  	case KVMPPC_GS_CLASS_META:
  	case KVMPPC_GS_CLASS_DWORD_REG:
···
  	bit += KVMPPC_GSE_GUESTWIDE_COUNT;

+ 	if (class == KVMPPC_GS_CLASS_HOSTWIDE) {
+ 		bit += iden - KVMPPC_GSE_HOSTWIDE_START;
+ 		return bit;
+ 	}
+
+ 	bit += KVMPPC_GSE_HOSTWIDE_COUNT;
+
  	if (class == KVMPPC_GS_CLASS_META) {
  		bit += iden - KVMPPC_GSE_META_START;
  		return bit;
···
  		return iden;
  	}
  	bit -= KVMPPC_GSE_GUESTWIDE_COUNT;
+
+ 	if (bit < KVMPPC_GSE_HOSTWIDE_COUNT) {
+ 		iden = KVMPPC_GSE_HOSTWIDE_START + bit;
+ 		return iden;
+ 	}
+ 	bit -= KVMPPC_GSE_HOSTWIDE_COUNT;

  	if (bit < KVMPPC_GSE_META_COUNT) {
  		iden = KVMPPC_GSE_META_START + bit;
···
  	if (flags & KVMPPC_GS_FLAGS_WIDE)
  		hflags |= H_GUEST_FLAGS_WIDE;
+ 	if (flags & KVMPPC_GS_FLAGS_HOST_WIDE)
+ 		hflags |= H_GUEST_FLAGS_HOST_WIDE;

  	rc = plpar_guest_set_state(hflags, gsb->guest_id, gsb->vcpu_id,
  				   __pa(gsb->hdr), gsb->capacity, &i);
···
  	if (flags & KVMPPC_GS_FLAGS_WIDE)
  		hflags |= H_GUEST_FLAGS_WIDE;
+ 	if (flags & KVMPPC_GS_FLAGS_HOST_WIDE)
+ 		hflags |= H_GUEST_FLAGS_HOST_WIDE;

  	rc = plpar_guest_get_state(hflags, gsb->guest_id, gsb->vcpu_id,
  				   __pa(gsb->hdr), gsb->capacity, &i);
+214
arch/powerpc/kvm/test-guest-state-buffer.c
··· 5 5 #include <kunit/test.h> 6 6 7 7 #include <asm/guest-state-buffer.h> 8 + #include <asm/kvm_ppc.h> 8 9 9 10 static void test_creating_buffer(struct kunit *test) 10 11 { ··· 134 133 i = 0; 135 134 for (u16 iden = KVMPPC_GSID_HOST_STATE_SIZE; 136 135 iden <= KVMPPC_GSID_PROCESS_TABLE; iden++) { 136 + kvmppc_gsbm_set(&gsbm, iden); 137 + kvmppc_gsbm_set(&gsbm1, iden); 138 + KUNIT_EXPECT_TRUE(test, kvmppc_gsbm_test(&gsbm, iden)); 139 + kvmppc_gsbm_clear(&gsbm, iden); 140 + KUNIT_EXPECT_FALSE(test, kvmppc_gsbm_test(&gsbm, iden)); 141 + i++; 142 + } 143 + 144 + for (u16 iden = KVMPPC_GSID_L0_GUEST_HEAP; 145 + iden <= KVMPPC_GSID_L0_GUEST_PGTABLE_RECLAIM; iden++) { 137 146 kvmppc_gsbm_set(&gsbm, iden); 138 147 kvmppc_gsbm_set(&gsbm1, iden); 139 148 KUNIT_EXPECT_TRUE(test, kvmppc_gsbm_test(&gsbm, iden)); ··· 320 309 kvmppc_gsm_free(gsm); 321 310 } 322 311 312 + /* Test data struct for hostwide/L0 counters */ 313 + struct kvmppc_gs_msg_test_hostwide_data { 314 + u64 guest_heap; 315 + u64 guest_heap_max; 316 + u64 guest_pgtable_size; 317 + u64 guest_pgtable_size_max; 318 + u64 guest_pgtable_reclaim; 319 + }; 320 + 321 + static size_t test_hostwide_get_size(struct kvmppc_gs_msg *gsm) 322 + 323 + { 324 + size_t size = 0; 325 + u16 ids[] = { 326 + KVMPPC_GSID_L0_GUEST_HEAP, 327 + KVMPPC_GSID_L0_GUEST_HEAP_MAX, 328 + KVMPPC_GSID_L0_GUEST_PGTABLE_SIZE, 329 + KVMPPC_GSID_L0_GUEST_PGTABLE_SIZE_MAX, 330 + KVMPPC_GSID_L0_GUEST_PGTABLE_RECLAIM 331 + }; 332 + 333 + for (int i = 0; i < ARRAY_SIZE(ids); i++) 334 + size += kvmppc_gse_total_size(kvmppc_gsid_size(ids[i])); 335 + return size; 336 + } 337 + 338 + static int test_hostwide_fill_info(struct kvmppc_gs_buff *gsb, 339 + struct kvmppc_gs_msg *gsm) 340 + { 341 + struct kvmppc_gs_msg_test_hostwide_data *data = gsm->data; 342 + 343 + if (kvmppc_gsm_includes(gsm, KVMPPC_GSID_L0_GUEST_HEAP)) 344 + kvmppc_gse_put_u64(gsb, KVMPPC_GSID_L0_GUEST_HEAP, 345 + data->guest_heap); 346 + if (kvmppc_gsm_includes(gsm, 
KVMPPC_GSID_L0_GUEST_HEAP_MAX)) 347 + kvmppc_gse_put_u64(gsb, KVMPPC_GSID_L0_GUEST_HEAP_MAX, 348 + data->guest_heap_max); 349 + if (kvmppc_gsm_includes(gsm, KVMPPC_GSID_L0_GUEST_PGTABLE_SIZE)) 350 + kvmppc_gse_put_u64(gsb, KVMPPC_GSID_L0_GUEST_PGTABLE_SIZE, 351 + data->guest_pgtable_size); 352 + if (kvmppc_gsm_includes(gsm, KVMPPC_GSID_L0_GUEST_PGTABLE_SIZE_MAX)) 353 + kvmppc_gse_put_u64(gsb, KVMPPC_GSID_L0_GUEST_PGTABLE_SIZE_MAX, 354 + data->guest_pgtable_size_max); 355 + if (kvmppc_gsm_includes(gsm, KVMPPC_GSID_L0_GUEST_PGTABLE_RECLAIM)) 356 + kvmppc_gse_put_u64(gsb, KVMPPC_GSID_L0_GUEST_PGTABLE_RECLAIM, 357 + data->guest_pgtable_reclaim); 358 + 359 + return 0; 360 + } 361 + 362 + static int test_hostwide_refresh_info(struct kvmppc_gs_msg *gsm, 363 + struct kvmppc_gs_buff *gsb) 364 + { 365 + struct kvmppc_gs_parser gsp = { 0 }; 366 + struct kvmppc_gs_msg_test_hostwide_data *data = gsm->data; 367 + struct kvmppc_gs_elem *gse; 368 + int rc; 369 + 370 + rc = kvmppc_gse_parse(&gsp, gsb); 371 + if (rc < 0) 372 + return rc; 373 + 374 + gse = kvmppc_gsp_lookup(&gsp, KVMPPC_GSID_L0_GUEST_HEAP); 375 + if (gse) 376 + data->guest_heap = kvmppc_gse_get_u64(gse); 377 + 378 + gse = kvmppc_gsp_lookup(&gsp, KVMPPC_GSID_L0_GUEST_HEAP_MAX); 379 + if (gse) 380 + data->guest_heap_max = kvmppc_gse_get_u64(gse); 381 + 382 + gse = kvmppc_gsp_lookup(&gsp, KVMPPC_GSID_L0_GUEST_PGTABLE_SIZE); 383 + if (gse) 384 + data->guest_pgtable_size = kvmppc_gse_get_u64(gse); 385 + 386 + gse = kvmppc_gsp_lookup(&gsp, KVMPPC_GSID_L0_GUEST_PGTABLE_SIZE_MAX); 387 + if (gse) 388 + data->guest_pgtable_size_max = kvmppc_gse_get_u64(gse); 389 + 390 + gse = kvmppc_gsp_lookup(&gsp, KVMPPC_GSID_L0_GUEST_PGTABLE_RECLAIM); 391 + if (gse) 392 + data->guest_pgtable_reclaim = kvmppc_gse_get_u64(gse); 393 + 394 + return 0; 395 + } 396 + 397 + static struct kvmppc_gs_msg_ops gs_msg_test_hostwide_ops = { 398 + .get_size = test_hostwide_get_size, 399 + .fill_info = test_hostwide_fill_info, 400 + .refresh_info = 
test_hostwide_refresh_info, 401 + }; 402 + 403 + static void test_gs_hostwide_msg(struct kunit *test) 404 + { 405 + struct kvmppc_gs_msg_test_hostwide_data test_data = { 406 + .guest_heap = 0xdeadbeef, 407 + .guest_heap_max = ~0ULL, 408 + .guest_pgtable_size = 0xff, 409 + .guest_pgtable_size_max = 0xffffff, 410 + .guest_pgtable_reclaim = 0xdeadbeef, 411 + }; 412 + struct kvmppc_gs_msg *gsm; 413 + struct kvmppc_gs_buff *gsb; 414 + 415 + gsm = kvmppc_gsm_new(&gs_msg_test_hostwide_ops, &test_data, GSM_SEND, 416 + GFP_KERNEL); 417 + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, gsm); 418 + 419 + gsb = kvmppc_gsb_new(kvmppc_gsm_size(gsm), 0, 0, GFP_KERNEL); 420 + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, gsb); 421 + 422 + kvmppc_gsm_include(gsm, KVMPPC_GSID_L0_GUEST_HEAP); 423 + kvmppc_gsm_include(gsm, KVMPPC_GSID_L0_GUEST_HEAP_MAX); 424 + kvmppc_gsm_include(gsm, KVMPPC_GSID_L0_GUEST_PGTABLE_SIZE); 425 + kvmppc_gsm_include(gsm, KVMPPC_GSID_L0_GUEST_PGTABLE_SIZE_MAX); 426 + kvmppc_gsm_include(gsm, KVMPPC_GSID_L0_GUEST_PGTABLE_RECLAIM); 427 + 428 + kvmppc_gsm_fill_info(gsm, gsb); 429 + 430 + memset(&test_data, 0, sizeof(test_data)); 431 + 432 + kvmppc_gsm_refresh_info(gsm, gsb); 433 + KUNIT_EXPECT_EQ(test, test_data.guest_heap, 0xdeadbeef); 434 + KUNIT_EXPECT_EQ(test, test_data.guest_heap_max, ~0ULL); 435 + KUNIT_EXPECT_EQ(test, test_data.guest_pgtable_size, 0xff); 436 + KUNIT_EXPECT_EQ(test, test_data.guest_pgtable_size_max, 0xffffff); 437 + KUNIT_EXPECT_EQ(test, test_data.guest_pgtable_reclaim, 0xdeadbeef); 438 + 439 + kvmppc_gsm_free(gsm); 440 + } 441 + 442 + /* Test if the H_GUEST_GET_STATE for hostwide counters works */ 443 + static void test_gs_hostwide_counters(struct kunit *test) 444 + { 445 + struct kvmppc_gs_msg_test_hostwide_data test_data; 446 + struct kvmppc_gs_parser gsp = { 0 }; 447 + 448 + struct kvmppc_gs_msg *gsm; 449 + struct kvmppc_gs_buff *gsb; 450 + struct kvmppc_gs_elem *gse; 451 + int rc; 452 + 453 + if (!kvmhv_on_pseries()) 454 + kunit_skip(test, "This test need a 
kvm-hv guest"); 455 + 456 + gsm = kvmppc_gsm_new(&gs_msg_test_hostwide_ops, &test_data, GSM_SEND, 457 + GFP_KERNEL); 458 + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, gsm); 459 + 460 + gsb = kvmppc_gsb_new(kvmppc_gsm_size(gsm), 0, 0, GFP_KERNEL); 461 + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, gsb); 462 + 463 + kvmppc_gsm_include(gsm, KVMPPC_GSID_L0_GUEST_HEAP); 464 + 465 + kvmppc_gsm_include(gsm, KVMPPC_GSID_L0_GUEST_HEAP_MAX); 466 + 467 + kvmppc_gsm_include(gsm, KVMPPC_GSID_L0_GUEST_PGTABLE_SIZE); 468 + 469 + kvmppc_gsm_include(gsm, KVMPPC_GSID_L0_GUEST_PGTABLE_SIZE_MAX); 470 + 471 + kvmppc_gsm_include(gsm, KVMPPC_GSID_L0_GUEST_PGTABLE_RECLAIM); 472 + 473 + kvmppc_gsm_fill_info(gsm, gsb); 474 + 475 + /* With HOST_WIDE flags guestid and vcpuid will be ignored */ 476 + rc = kvmppc_gsb_recv(gsb, KVMPPC_GS_FLAGS_HOST_WIDE); 477 + KUNIT_ASSERT_EQ(test, rc, 0); 478 + 479 + /* Check that parsing the guest state buffer succeeds */ 480 + rc = kvmppc_gse_parse(&gsp, gsb); 481 + KUNIT_ASSERT_EQ(test, rc, 0); 482 + 483 + /* Parse the GSB and get the counters */ 484 + gse = kvmppc_gsp_lookup(&gsp, KVMPPC_GSID_L0_GUEST_HEAP); 485 + KUNIT_ASSERT_NOT_NULL_MSG(test, gse, "L0 Heap counter missing"); 486 + kunit_info(test, "Guest Heap Size=%llu bytes", 487 + kvmppc_gse_get_u64(gse)); 488 + 489 + gse = kvmppc_gsp_lookup(&gsp, KVMPPC_GSID_L0_GUEST_HEAP_MAX); 490 + KUNIT_ASSERT_NOT_NULL_MSG(test, gse, "L0 Heap counter max missing"); 491 + kunit_info(test, "Guest Heap Size Max=%llu bytes", 492 + kvmppc_gse_get_u64(gse)); 493 + 494 + gse = kvmppc_gsp_lookup(&gsp, KVMPPC_GSID_L0_GUEST_PGTABLE_SIZE); 495 + KUNIT_ASSERT_NOT_NULL_MSG(test, gse, "L0 page-table size missing"); 496 + kunit_info(test, "Guest Page-table Size=%llu bytes", 497 + kvmppc_gse_get_u64(gse)); 498 + 499 + gse = kvmppc_gsp_lookup(&gsp, KVMPPC_GSID_L0_GUEST_PGTABLE_SIZE_MAX); 500 + KUNIT_ASSERT_NOT_NULL_MSG(test, gse, "L0 page-table size-max missing"); 501 + kunit_info(test, "Guest Page-table Size Max=%llu bytes", 502 + kvmppc_gse_get_u64(gse)); 
503 + 504 + gse = kvmppc_gsp_lookup(&gsp, KVMPPC_GSID_L0_GUEST_PGTABLE_RECLAIM); 505 + KUNIT_ASSERT_NOT_NULL_MSG(test, gse, "L0 page-table reclaim size missing"); 506 + kunit_info(test, "Guest Page-table Reclaim Size=%llu bytes", 507 + kvmppc_gse_get_u64(gse)); 508 + 509 + kvmppc_gsm_free(gsm); 510 + kvmppc_gsb_free(gsb); 511 + } 512 + 323 513 static struct kunit_case guest_state_buffer_testcases[] = { 324 514 KUNIT_CASE(test_creating_buffer), 325 515 KUNIT_CASE(test_adding_element), 326 516 KUNIT_CASE(test_gs_bitmap), 327 517 KUNIT_CASE(test_gs_parsing), 328 518 KUNIT_CASE(test_gs_msg), 519 + KUNIT_CASE(test_gs_hostwide_msg), 520 + KUNIT_CASE(test_gs_hostwide_counters), 329 521 {} 330 522 }; 331 523
-4
arch/powerpc/kvm/timing.h
··· 38 38 static inline void kvmppc_account_exit_stat(struct kvm_vcpu *vcpu, int type) 39 39 { 40 40 /* type has to be known at build time for optimization */ 41 - 42 - /* The BUILD_BUG_ON below breaks in funny ways, commented out 43 - * for now ... -BenH 44 41 BUILD_BUG_ON(!__builtin_constant_p(type)); 45 - */ 46 42 switch (type) { 47 43 case EXT_INTR_EXITS: 48 44 vcpu->stat.ext_intr_exits++;
+1 -1
arch/powerpc/lib/vmx-helper.c
··· 45 45 * set and we are preemptible. The hack here is to schedule a 46 46 * decrementer to fire here and reschedule for us if necessary. 47 47 */ 48 - if (IS_ENABLED(CONFIG_PREEMPTION) && need_resched()) 48 + if (need_irq_preemption() && need_resched()) 49 49 set_dec(1); 50 50 return 0; 51 51 }
+3 -2
arch/powerpc/mm/fault.c
··· 17 17 #include <linux/kernel.h> 18 18 #include <linux/errno.h> 19 19 #include <linux/string.h> 20 + #include <linux/string_choices.h> 20 21 #include <linux/types.h> 21 22 #include <linux/pagemap.h> 22 23 #include <linux/ptrace.h> ··· 219 218 // Read/write fault blocked by KUAP is bad, it can never succeed. 220 219 if (bad_kuap_fault(regs, address, is_write)) { 221 220 pr_crit_ratelimited("Kernel attempted to %s user page (%lx) - exploit attempt? (uid: %d)\n", 222 - is_write ? "write" : "read", address, 221 + str_write_read(is_write), address, 223 222 from_kuid(&init_user_ns, current_uid())); 224 223 225 224 // Fault on user outside of certain regions (eg. copy_tofrom_user()) is bad ··· 626 625 case INTERRUPT_DATA_STORAGE: 627 626 case INTERRUPT_H_DATA_STORAGE: 628 627 pr_alert("BUG: %s on %s at 0x%08lx\n", msg, 629 - is_write ? "write" : "read", regs->dar); 628 + str_write_read(is_write), regs->dar); 630 629 break; 631 630 case INTERRUPT_DATA_SEGMENT: 632 631 pr_alert("BUG: %s at 0x%08lx\n", msg, regs->dar);
+17 -15
arch/powerpc/mm/nohash/8xx.c
··· 54 54 { 55 55 pmd_t *pmdp = pmd_off_k(va); 56 56 pte_t *ptep; 57 - 58 - if (WARN_ON(psize != MMU_PAGE_512K && psize != MMU_PAGE_8M)) 59 - return -EINVAL; 57 + unsigned int shift = mmu_psize_to_shift(psize); 60 58 61 59 if (new) { 62 60 if (WARN_ON(slab_is_available())) 63 61 return -EINVAL; 64 62 65 - if (psize == MMU_PAGE_512K) { 66 - ptep = early_pte_alloc_kernel(pmdp, va); 67 - /* The PTE should never be already present */ 68 - if (WARN_ON(pte_present(*ptep) && pgprot_val(prot))) 69 - return -EINVAL; 70 - } else { 63 + if (psize == MMU_PAGE_8M) { 71 64 if (WARN_ON(!pmd_none(*pmdp) || !pmd_none(*(pmdp + 1)))) 72 65 return -EINVAL; 73 66 ··· 71 78 pmd_populate_kernel(&init_mm, pmdp + 1, ptep); 72 79 73 80 ptep = (pte_t *)pmdp; 81 + } else { 82 + ptep = early_pte_alloc_kernel(pmdp, va); 83 + /* The PTE should never be already present */ 84 + if (WARN_ON(pte_present(*ptep) && pgprot_val(prot))) 85 + return -EINVAL; 74 86 } 75 87 } else { 76 - if (psize == MMU_PAGE_512K) 77 - ptep = pte_offset_kernel(pmdp, va); 78 - else 88 + if (psize == MMU_PAGE_8M) 79 89 ptep = (pte_t *)pmdp; 90 + else 91 + ptep = pte_offset_kernel(pmdp, va); 80 92 } 81 93 82 94 if (WARN_ON(!ptep)) 83 95 return -ENOMEM; 84 96 85 97 set_huge_pte_at(&init_mm, va, ptep, 86 - pte_mkhuge(pfn_pte(pa >> PAGE_SHIFT, prot)), 87 - 1UL << mmu_psize_to_shift(psize)); 98 + arch_make_huge_pte(pfn_pte(pa >> PAGE_SHIFT, prot), shift, 0), 99 + 1UL << shift); 88 100 89 101 return 0; 90 102 } ··· 121 123 unsigned long p = offset; 122 124 int err = 0; 123 125 124 - WARN_ON(!IS_ALIGNED(offset, SZ_512K) || !IS_ALIGNED(top, SZ_512K)); 126 + WARN_ON(!IS_ALIGNED(offset, SZ_16K) || !IS_ALIGNED(top, SZ_16K)); 125 127 128 + for (; p < ALIGN(p, SZ_512K) && p < top && !err; p += SZ_16K, v += SZ_16K) 129 + err = __early_map_kernel_hugepage(v, p, prot, MMU_PAGE_16K, new); 126 130 for (; p < ALIGN(p, SZ_8M) && p < top && !err; p += SZ_512K, v += SZ_512K) 127 131 err = __early_map_kernel_hugepage(v, p, prot, MMU_PAGE_512K, 
new); 128 132 for (; p < ALIGN_DOWN(top, SZ_8M) && p < top && !err; p += SZ_8M, v += SZ_8M) 129 133 err = __early_map_kernel_hugepage(v, p, prot, MMU_PAGE_8M, new); 130 134 for (; p < ALIGN_DOWN(top, SZ_512K) && p < top && !err; p += SZ_512K, v += SZ_512K) 131 135 err = __early_map_kernel_hugepage(v, p, prot, MMU_PAGE_512K, new); 136 + for (; p < ALIGN_DOWN(top, SZ_16K) && p < top && !err; p += SZ_16K, v += SZ_16K) 137 + err = __early_map_kernel_hugepage(v, p, prot, MMU_PAGE_16K, new); 132 138 133 139 if (!new) 134 140 flush_tlb_kernel_range(PAGE_OFFSET + v, PAGE_OFFSET + top);
+17 -3
arch/powerpc/net/bpf_jit.h
··· 51 51 EMIT(PPC_INST_BRANCH_COND | (((cond) & 0x3ff) << 16) | (offset & 0xfffc)); \ 52 52 } while (0) 53 53 54 - /* Sign-extended 32-bit immediate load */ 54 + /* 55 + * Sign-extended 32-bit immediate load 56 + * 57 + * If this is a dummy pass (!image), account for 58 + * maximum possible instructions. 59 + */ 55 60 #define PPC_LI32(d, i) do { \ 61 + if (!image) \ 62 + ctx->idx += 2; \ 63 + else { \ 56 64 if ((int)(uintptr_t)(i) >= -32768 && \ 57 65 (int)(uintptr_t)(i) < 32768) \ 58 66 EMIT(PPC_RAW_LI(d, i)); \ ··· 68 60 EMIT(PPC_RAW_LIS(d, IMM_H(i))); \ 69 61 if (IMM_L(i)) \ 70 62 EMIT(PPC_RAW_ORI(d, d, IMM_L(i))); \ 71 - } } while(0) 63 + } \ 64 + } } while (0) 72 65 73 66 #ifdef CONFIG_PPC64 67 + /* If dummy pass (!image), account for maximum possible instructions */ 74 68 #define PPC_LI64(d, i) do { \ 69 + if (!image) \ 70 + ctx->idx += 5; \ 71 + else { \ 75 72 if ((long)(i) >= -2147483648 && \ 76 73 (long)(i) < 2147483648) \ 77 74 PPC_LI32(d, i); \ ··· 97 84 if ((uintptr_t)(i) & 0x000000000000ffffULL) \ 98 85 EMIT(PPC_RAW_ORI(d, d, (uintptr_t)(i) & \ 99 86 0xffff)); \ 100 - } } while (0) 87 + } \ 88 + } } while (0) 101 89 #define PPC_LI_ADDR PPC_LI64 102 90 103 91 #ifndef CONFIG_PPC_KERNEL_PCREL
+10 -23
arch/powerpc/net/bpf_jit_comp.c
··· 504 504 EMIT(PPC_RAW_ADDI(_R3, _R1, regs_off)); 505 505 if (!p->jited) 506 506 PPC_LI_ADDR(_R4, (unsigned long)p->insnsi); 507 - if (!create_branch(&branch_insn, (u32 *)&ro_image[ctx->idx], (unsigned long)p->bpf_func, 508 - BRANCH_SET_LINK)) { 509 - if (image) 510 - image[ctx->idx] = ppc_inst_val(branch_insn); 507 + /* Account for max possible instructions during dummy pass for size calculation */ 508 + if (image && !create_branch(&branch_insn, (u32 *)&ro_image[ctx->idx], 509 + (unsigned long)p->bpf_func, 510 + BRANCH_SET_LINK)) { 511 + image[ctx->idx] = ppc_inst_val(branch_insn); 511 512 ctx->idx++; 512 513 } else { 513 514 EMIT(PPC_RAW_LL(_R12, _R25, offsetof(struct bpf_prog, bpf_func))); ··· 890 889 bpf_trampoline_restore_tail_call_cnt(image, ctx, func_frame_offset, r4_off); 891 890 892 891 /* Reserve space to patch branch instruction to skip fexit progs */ 893 - im->ip_after_call = &((u32 *)ro_image)[ctx->idx]; 892 + if (ro_image) /* image is NULL for dummy pass */ 893 + im->ip_after_call = &((u32 *)ro_image)[ctx->idx]; 894 894 EMIT(PPC_RAW_NOP()); 895 895 } 896 896 ··· 914 912 } 915 913 916 914 if (flags & BPF_TRAMP_F_CALL_ORIG) { 917 - im->ip_epilogue = &((u32 *)ro_image)[ctx->idx]; 915 + if (ro_image) /* image is NULL for dummy pass */ 916 + im->ip_epilogue = &((u32 *)ro_image)[ctx->idx]; 918 917 PPC_LI_ADDR(_R3, im); 919 918 ret = bpf_jit_emit_func_call_rel(image, ro_image, ctx, 920 919 (unsigned long)__bpf_tramp_exit); ··· 976 973 struct bpf_tramp_links *tlinks, void *func_addr) 977 974 { 978 975 struct bpf_tramp_image im; 979 - void *image; 980 976 int ret; 981 977 982 - /* 983 - * Allocate a temporary buffer for __arch_prepare_bpf_trampoline(). 984 - * This will NOT cause fragmentation in direct map, as we do not 985 - * call set_memory_*() on this buffer. 986 - * 987 - * We cannot use kvmalloc here, because we need image to be in 988 - * module memory range. 
989 - */ 990 - image = bpf_jit_alloc_exec(PAGE_SIZE); 991 - if (!image) 992 - return -ENOMEM; 993 - 994 - ret = __arch_prepare_bpf_trampoline(&im, image, image + PAGE_SIZE, image, 995 - m, flags, tlinks, func_addr); 996 - bpf_jit_free_exec(image); 997 - 978 + ret = __arch_prepare_bpf_trampoline(&im, NULL, NULL, NULL, m, flags, tlinks, func_addr); 998 979 return ret; 999 980 } 1000 981
-6
arch/powerpc/net/bpf_jit_comp32.c
··· 313 313 u64 func_addr; 314 314 u32 true_cond; 315 315 u32 tmp_idx; 316 - int j; 317 316 318 317 if (i && (BPF_CLASS(code) == BPF_ALU64 || BPF_CLASS(code) == BPF_ALU) && 319 318 (BPF_CLASS(prevcode) == BPF_ALU64 || BPF_CLASS(prevcode) == BPF_ALU) && ··· 1098 1099 * 16 byte instruction that uses two 'struct bpf_insn' 1099 1100 */ 1100 1101 case BPF_LD | BPF_IMM | BPF_DW: /* dst = (u64) imm */ 1101 - tmp_idx = ctx->idx; 1102 1102 PPC_LI32(dst_reg_h, (u32)insn[i + 1].imm); 1103 1103 PPC_LI32(dst_reg, (u32)insn[i].imm); 1104 - /* padding to allow full 4 instructions for later patching */ 1105 - if (!image) 1106 - for (j = ctx->idx - tmp_idx; j < 4; j++) 1107 - EMIT(PPC_RAW_NOP()); 1108 1104 /* Adjust for two bpf instructions */ 1109 1105 addrs[++i] = ctx->idx * 4; 1110 1106 break;
+8 -7
arch/powerpc/net/bpf_jit_comp64.c
··· 227 227 #ifdef CONFIG_PPC_KERNEL_PCREL 228 228 reladdr = func_addr - local_paca->kernelbase; 229 229 230 - if (reladdr < (long)SZ_8G && reladdr >= -(long)SZ_8G) { 230 + /* 231 + * If fimage is NULL (the initial pass to find image size), 232 + * account for the maximum no. of instructions possible. 233 + */ 234 + if (!fimage) { 235 + ctx->idx += 7; 236 + return 0; 237 + } else if (reladdr < (long)SZ_8G && reladdr >= -(long)SZ_8G) { 231 238 EMIT(PPC_RAW_LD(_R12, _R13, offsetof(struct paca_struct, kernelbase))); 232 239 /* Align for subsequent prefix instruction */ 233 240 if (!IS_ALIGNED((unsigned long)fimage + CTX_NIA(ctx), 8)) ··· 419 412 u64 imm64; 420 413 u32 true_cond; 421 414 u32 tmp_idx; 422 - int j; 423 415 424 416 /* 425 417 * addrs[] maps a BPF bytecode address into a real offset from ··· 1052 1046 case BPF_LD | BPF_IMM | BPF_DW: /* dst = (u64) imm */ 1053 1047 imm64 = ((u64)(u32) insn[i].imm) | 1054 1048 (((u64)(u32) insn[i+1].imm) << 32); 1055 - tmp_idx = ctx->idx; 1056 1049 PPC_LI64(dst_reg, imm64); 1057 - /* padding to allow full 5 instructions for later patching */ 1058 - if (!image) 1059 - for (j = ctx->idx - tmp_idx; j < 5; j++) 1060 - EMIT(PPC_RAW_NOP()); 1061 1050 /* Adjust for two bpf instructions */ 1062 1051 addrs[++i] = ctx->idx * 4; 1063 1052 break;
+2
arch/powerpc/perf/Makefile
··· 18 18 19 19 obj-$(CONFIG_VPA_PMU) += vpa-pmu.o 20 20 21 + obj-$(CONFIG_KVM_BOOK3S_HV_PMU) += kvm-hv-pmu.o 22 + 21 23 obj-$(CONFIG_PPC_8xx) += 8xx-pmu.o 22 24 23 25 obj-$(CONFIG_PPC64) += $(obj64-y)
+435
arch/powerpc/perf/kvm-hv-pmu.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Description: PMUs specific to running nested KVM-HV guests 4 + * on Book3S processors (specifically POWER9 and later). 5 + */ 6 + 7 + #define pr_fmt(fmt) "kvmppc-pmu: " fmt 8 + 9 + #include "asm-generic/local64.h" 10 + #include <linux/kernel.h> 11 + #include <linux/errno.h> 12 + #include <linux/ratelimit.h> 13 + #include <linux/kvm_host.h> 14 + #include <linux/gfp_types.h> 15 + #include <linux/pgtable.h> 16 + #include <linux/perf_event.h> 17 + #include <linux/spinlock_types.h> 18 + #include <linux/spinlock.h> 19 + 20 + #include <asm/types.h> 21 + #include <asm/kvm_ppc.h> 22 + #include <asm/kvm_book3s.h> 23 + #include <asm/mmu.h> 24 + #include <asm/pgalloc.h> 25 + #include <asm/pte-walk.h> 26 + #include <asm/reg.h> 27 + #include <asm/plpar_wrappers.h> 28 + #include <asm/firmware.h> 29 + 30 + #include "asm/guest-state-buffer.h" 31 + 32 + enum kvmppc_pmu_eventid { 33 + KVMPPC_EVENT_HOST_HEAP, 34 + KVMPPC_EVENT_HOST_HEAP_MAX, 35 + KVMPPC_EVENT_HOST_PGTABLE, 36 + KVMPPC_EVENT_HOST_PGTABLE_MAX, 37 + KVMPPC_EVENT_HOST_PGTABLE_RECLAIM, 38 + KVMPPC_EVENT_MAX, 39 + }; 40 + 41 + #define KVMPPC_PMU_EVENT_ATTR(_name, _id) \ 42 + PMU_EVENT_ATTR_ID(_name, kvmppc_events_sysfs_show, _id) 43 + 44 + static ssize_t kvmppc_events_sysfs_show(struct device *dev, 45 + struct device_attribute *attr, 46 + char *page) 47 + { 48 + struct perf_pmu_events_attr *pmu_attr; 49 + 50 + pmu_attr = container_of(attr, struct perf_pmu_events_attr, attr); 51 + return sprintf(page, "event=0x%02llx\n", pmu_attr->id); 52 + } 53 + 54 + /* Holds the hostwide stats */ 55 + static struct kvmppc_hostwide_stats { 56 + u64 guest_heap; 57 + u64 guest_heap_max; 58 + u64 guest_pgtable_size; 59 + u64 guest_pgtable_size_max; 60 + u64 guest_pgtable_reclaim; 61 + } l0_stats; 62 + 63 + /* Protect access to l0_stats */ 64 + static DEFINE_SPINLOCK(lock_l0_stats); 65 + 66 + /* GSB related structs needed to talk to L0 */ 67 + static struct kvmppc_gs_msg *gsm_l0_stats; 
68 + static struct kvmppc_gs_buff *gsb_l0_stats; 69 + static struct kvmppc_gs_parser gsp_l0_stats; 70 + 71 + static struct attribute *kvmppc_pmu_events_attr[] = { 72 + KVMPPC_PMU_EVENT_ATTR(host_heap, KVMPPC_EVENT_HOST_HEAP), 73 + KVMPPC_PMU_EVENT_ATTR(host_heap_max, KVMPPC_EVENT_HOST_HEAP_MAX), 74 + KVMPPC_PMU_EVENT_ATTR(host_pagetable, KVMPPC_EVENT_HOST_PGTABLE), 75 + KVMPPC_PMU_EVENT_ATTR(host_pagetable_max, KVMPPC_EVENT_HOST_PGTABLE_MAX), 76 + KVMPPC_PMU_EVENT_ATTR(host_pagetable_reclaim, KVMPPC_EVENT_HOST_PGTABLE_RECLAIM), 77 + NULL, 78 + }; 79 + 80 + static const struct attribute_group kvmppc_pmu_events_group = { 81 + .name = "events", 82 + .attrs = kvmppc_pmu_events_attr, 83 + }; 84 + 85 + PMU_FORMAT_ATTR(event, "config:0-5"); 86 + static struct attribute *kvmppc_pmu_format_attr[] = { 87 + &format_attr_event.attr, 88 + NULL, 89 + }; 90 + 91 + static struct attribute_group kvmppc_pmu_format_group = { 92 + .name = "format", 93 + .attrs = kvmppc_pmu_format_attr, 94 + }; 95 + 96 + static const struct attribute_group *kvmppc_pmu_attr_groups[] = { 97 + &kvmppc_pmu_events_group, 98 + &kvmppc_pmu_format_group, 99 + NULL, 100 + }; 101 + 102 + /* 103 + * Issue the hcall to get the L0-host stats. 
104 + * Should be called with l0-stat lock held 105 + */ 106 + static int kvmppc_update_l0_stats(void) 107 + { 108 + int rc; 109 + 110 + /* With HOST_WIDE flags guestid and vcpuid will be ignored */ 111 + rc = kvmppc_gsb_recv(gsb_l0_stats, KVMPPC_GS_FLAGS_HOST_WIDE); 112 + if (rc) 113 + goto out; 114 + 115 + /* Parse the guest state buffer is successful */ 116 + rc = kvmppc_gse_parse(&gsp_l0_stats, gsb_l0_stats); 117 + if (rc) 118 + goto out; 119 + 120 + /* Update the l0 returned stats*/ 121 + memset(&l0_stats, 0, sizeof(l0_stats)); 122 + rc = kvmppc_gsm_refresh_info(gsm_l0_stats, gsb_l0_stats); 123 + 124 + out: 125 + return rc; 126 + } 127 + 128 + /* Update the value of the given perf_event */ 129 + static int kvmppc_pmu_event_update(struct perf_event *event) 130 + { 131 + int rc; 132 + u64 curr_val, prev_val; 133 + unsigned long flags; 134 + unsigned int config = event->attr.config; 135 + 136 + /* Ensure no one else is modifying the l0_stats */ 137 + spin_lock_irqsave(&lock_l0_stats, flags); 138 + 139 + rc = kvmppc_update_l0_stats(); 140 + if (!rc) { 141 + switch (config) { 142 + case KVMPPC_EVENT_HOST_HEAP: 143 + curr_val = l0_stats.guest_heap; 144 + break; 145 + case KVMPPC_EVENT_HOST_HEAP_MAX: 146 + curr_val = l0_stats.guest_heap_max; 147 + break; 148 + case KVMPPC_EVENT_HOST_PGTABLE: 149 + curr_val = l0_stats.guest_pgtable_size; 150 + break; 151 + case KVMPPC_EVENT_HOST_PGTABLE_MAX: 152 + curr_val = l0_stats.guest_pgtable_size_max; 153 + break; 154 + case KVMPPC_EVENT_HOST_PGTABLE_RECLAIM: 155 + curr_val = l0_stats.guest_pgtable_reclaim; 156 + break; 157 + default: 158 + rc = -ENOENT; 159 + break; 160 + } 161 + } 162 + 163 + spin_unlock_irqrestore(&lock_l0_stats, flags); 164 + 165 + /* If no error than update the perf event */ 166 + if (!rc) { 167 + prev_val = local64_xchg(&event->hw.prev_count, curr_val); 168 + if (curr_val > prev_val) 169 + local64_add(curr_val - prev_val, &event->count); 170 + } 171 + 172 + return rc; 173 + } 174 + 175 + static int 
kvmppc_pmu_event_init(struct perf_event *event) 176 + { 177 + unsigned int config = event->attr.config; 178 + 179 + pr_debug("%s: Event(%p) id=%llu cpu=%x on_cpu=%x config=%u", 180 + __func__, event, event->id, event->cpu, 181 + event->oncpu, config); 182 + 183 + if (event->attr.type != event->pmu->type) 184 + return -ENOENT; 185 + 186 + if (config >= KVMPPC_EVENT_MAX) 187 + return -EINVAL; 188 + 189 + local64_set(&event->hw.prev_count, 0); 190 + local64_set(&event->count, 0); 191 + 192 + return 0; 193 + } 194 + 195 + static void kvmppc_pmu_del(struct perf_event *event, int flags) 196 + { 197 + kvmppc_pmu_event_update(event); 198 + } 199 + 200 + static int kvmppc_pmu_add(struct perf_event *event, int flags) 201 + { 202 + if (flags & PERF_EF_START) 203 + return kvmppc_pmu_event_update(event); 204 + return 0; 205 + } 206 + 207 + static void kvmppc_pmu_read(struct perf_event *event) 208 + { 209 + kvmppc_pmu_event_update(event); 210 + } 211 + 212 + /* Return the size of the needed guest state buffer */ 213 + static size_t hostwide_get_size(struct kvmppc_gs_msg *gsm) 214 + 215 + { 216 + size_t size = 0; 217 + const u16 ids[] = { 218 + KVMPPC_GSID_L0_GUEST_HEAP, 219 + KVMPPC_GSID_L0_GUEST_HEAP_MAX, 220 + KVMPPC_GSID_L0_GUEST_PGTABLE_SIZE, 221 + KVMPPC_GSID_L0_GUEST_PGTABLE_SIZE_MAX, 222 + KVMPPC_GSID_L0_GUEST_PGTABLE_RECLAIM 223 + }; 224 + 225 + for (int i = 0; i < ARRAY_SIZE(ids); i++) 226 + size += kvmppc_gse_total_size(kvmppc_gsid_size(ids[i])); 227 + return size; 228 + } 229 + 230 + /* Populate the request guest state buffer */ 231 + static int hostwide_fill_info(struct kvmppc_gs_buff *gsb, 232 + struct kvmppc_gs_msg *gsm) 233 + { 234 + int rc = 0; 235 + struct kvmppc_hostwide_stats *stats = gsm->data; 236 + 237 + /* 238 + * It doesn't matter what values are put into request buffer as 239 + * they are going to be overwritten anyways. 
But for the sake of 240 + * testcode and symmetry contents of existing stats are put 241 + * populated into the request guest state buffer. 242 + */ 243 + if (kvmppc_gsm_includes(gsm, KVMPPC_GSID_L0_GUEST_HEAP)) 244 + rc = kvmppc_gse_put_u64(gsb, 245 + KVMPPC_GSID_L0_GUEST_HEAP, 246 + stats->guest_heap); 247 + 248 + if (!rc && kvmppc_gsm_includes(gsm, KVMPPC_GSID_L0_GUEST_HEAP_MAX)) 249 + rc = kvmppc_gse_put_u64(gsb, 250 + KVMPPC_GSID_L0_GUEST_HEAP_MAX, 251 + stats->guest_heap_max); 252 + 253 + if (!rc && kvmppc_gsm_includes(gsm, KVMPPC_GSID_L0_GUEST_PGTABLE_SIZE)) 254 + rc = kvmppc_gse_put_u64(gsb, 255 + KVMPPC_GSID_L0_GUEST_PGTABLE_SIZE, 256 + stats->guest_pgtable_size); 257 + if (!rc && 258 + kvmppc_gsm_includes(gsm, KVMPPC_GSID_L0_GUEST_PGTABLE_SIZE_MAX)) 259 + rc = kvmppc_gse_put_u64(gsb, 260 + KVMPPC_GSID_L0_GUEST_PGTABLE_SIZE_MAX, 261 + stats->guest_pgtable_size_max); 262 + if (!rc && 263 + kvmppc_gsm_includes(gsm, KVMPPC_GSID_L0_GUEST_PGTABLE_RECLAIM)) 264 + rc = kvmppc_gse_put_u64(gsb, 265 + KVMPPC_GSID_L0_GUEST_PGTABLE_RECLAIM, 266 + stats->guest_pgtable_reclaim); 267 + 268 + return rc; 269 + } 270 + 271 + /* Parse and update the host wide stats from returned gsb */ 272 + static int hostwide_refresh_info(struct kvmppc_gs_msg *gsm, 273 + struct kvmppc_gs_buff *gsb) 274 + { 275 + struct kvmppc_gs_parser gsp = { 0 }; 276 + struct kvmppc_hostwide_stats *stats = gsm->data; 277 + struct kvmppc_gs_elem *gse; 278 + int rc; 279 + 280 + rc = kvmppc_gse_parse(&gsp, gsb); 281 + if (rc < 0) 282 + return rc; 283 + 284 + gse = kvmppc_gsp_lookup(&gsp, KVMPPC_GSID_L0_GUEST_HEAP); 285 + if (gse) 286 + stats->guest_heap = kvmppc_gse_get_u64(gse); 287 + 288 + gse = kvmppc_gsp_lookup(&gsp, KVMPPC_GSID_L0_GUEST_HEAP_MAX); 289 + if (gse) 290 + stats->guest_heap_max = kvmppc_gse_get_u64(gse); 291 + 292 + gse = kvmppc_gsp_lookup(&gsp, KVMPPC_GSID_L0_GUEST_PGTABLE_SIZE); 293 + if (gse) 294 + stats->guest_pgtable_size = kvmppc_gse_get_u64(gse); 295 + 296 + gse = 
kvmppc_gsp_lookup(&gsp, KVMPPC_GSID_L0_GUEST_PGTABLE_SIZE_MAX); 297 + if (gse) 298 + stats->guest_pgtable_size_max = kvmppc_gse_get_u64(gse); 299 + 300 + gse = kvmppc_gsp_lookup(&gsp, KVMPPC_GSID_L0_GUEST_PGTABLE_RECLAIM); 301 + if (gse) 302 + stats->guest_pgtable_reclaim = kvmppc_gse_get_u64(gse); 303 + 304 + return 0; 305 + } 306 + 307 + /* gsb-message ops for setting up/parsing */ 308 + static struct kvmppc_gs_msg_ops gsb_ops_l0_stats = { 309 + .get_size = hostwide_get_size, 310 + .fill_info = hostwide_fill_info, 311 + .refresh_info = hostwide_refresh_info, 312 + }; 313 + 314 + static int kvmppc_init_hostwide(void) 315 + { 316 + int rc = 0; 317 + unsigned long flags; 318 + 319 + spin_lock_irqsave(&lock_l0_stats, flags); 320 + 321 + /* already registered ? */ 322 + if (gsm_l0_stats) { 323 + rc = 0; 324 + goto out; 325 + } 326 + 327 + /* setup the Guest state message/buffer to talk to L0 */ 328 + gsm_l0_stats = kvmppc_gsm_new(&gsb_ops_l0_stats, &l0_stats, 329 + GSM_SEND, GFP_KERNEL); 330 + if (!gsm_l0_stats) { 331 + rc = -ENOMEM; 332 + goto out; 333 + } 334 + 335 + /* Populate the Idents */ 336 + kvmppc_gsm_include(gsm_l0_stats, KVMPPC_GSID_L0_GUEST_HEAP); 337 + kvmppc_gsm_include(gsm_l0_stats, KVMPPC_GSID_L0_GUEST_HEAP_MAX); 338 + kvmppc_gsm_include(gsm_l0_stats, KVMPPC_GSID_L0_GUEST_PGTABLE_SIZE); 339 + kvmppc_gsm_include(gsm_l0_stats, KVMPPC_GSID_L0_GUEST_PGTABLE_SIZE_MAX); 340 + kvmppc_gsm_include(gsm_l0_stats, KVMPPC_GSID_L0_GUEST_PGTABLE_RECLAIM); 341 + 342 + /* allocate GSB. 
Guest/Vcpu Id is ignored */ 343 + gsb_l0_stats = kvmppc_gsb_new(kvmppc_gsm_size(gsm_l0_stats), 0, 0, 344 + GFP_KERNEL); 345 + if (!gsb_l0_stats) { 346 + rc = -ENOMEM; 347 + goto out; 348 + } 349 + 350 + /* ask the ops to fill in the info */ 351 + rc = kvmppc_gsm_fill_info(gsm_l0_stats, gsb_l0_stats); 352 + 353 + out: 354 + if (rc) { 355 + if (gsm_l0_stats) 356 + kvmppc_gsm_free(gsm_l0_stats); 357 + if (gsb_l0_stats) 358 + kvmppc_gsb_free(gsb_l0_stats); 359 + gsm_l0_stats = NULL; 360 + gsb_l0_stats = NULL; 361 + } 362 + spin_unlock_irqrestore(&lock_l0_stats, flags); 363 + return rc; 364 + } 365 + 366 + static void kvmppc_cleanup_hostwide(void) 367 + { 368 + unsigned long flags; 369 + 370 + spin_lock_irqsave(&lock_l0_stats, flags); 371 + 372 + if (gsm_l0_stats) 373 + kvmppc_gsm_free(gsm_l0_stats); 374 + if (gsb_l0_stats) 375 + kvmppc_gsb_free(gsb_l0_stats); 376 + gsm_l0_stats = NULL; 377 + gsb_l0_stats = NULL; 378 + 379 + spin_unlock_irqrestore(&lock_l0_stats, flags); 380 + } 381 + 382 + /* L1 wide counters PMU */ 383 + static struct pmu kvmppc_pmu = { 384 + .module = THIS_MODULE, 385 + .task_ctx_nr = perf_sw_context, 386 + .name = "kvm-hv", 387 + .event_init = kvmppc_pmu_event_init, 388 + .add = kvmppc_pmu_add, 389 + .del = kvmppc_pmu_del, 390 + .read = kvmppc_pmu_read, 391 + .attr_groups = kvmppc_pmu_attr_groups, 392 + .type = -1, 393 + .scope = PERF_PMU_SCOPE_SYS_WIDE, 394 + .capabilities = PERF_PMU_CAP_NO_EXCLUDE | PERF_PMU_CAP_NO_INTERRUPT, 395 + }; 396 + 397 + static int __init kvmppc_register_pmu(void) 398 + { 399 + int rc = -EOPNOTSUPP; 400 + 401 + /* only support events for nestedv2 right now */ 402 + if (kvmhv_is_nestedv2()) { 403 + rc = kvmppc_init_hostwide(); 404 + if (rc) 405 + goto out; 406 + 407 + /* Register the pmu */ 408 + rc = perf_pmu_register(&kvmppc_pmu, kvmppc_pmu.name, -1); 409 + if (rc) 410 + goto out; 411 + 412 + pr_info("Registered kvm-hv pmu"); 413 + } 414 + 415 + out: 416 + return rc; 417 + } 418 + 419 + static void __exit 
kvmppc_unregister_pmu(void) 420 + { 421 + if (kvmhv_is_nestedv2()) { 422 + kvmppc_cleanup_hostwide(); 423 + 424 + if (kvmppc_pmu.type != -1) 425 + perf_pmu_unregister(&kvmppc_pmu); 426 + 427 + pr_info("kvmhv_pmu unregistered.\n"); 428 + } 429 + } 430 + 431 + module_init(kvmppc_register_pmu); 432 + module_exit(kvmppc_unregister_pmu); 433 + MODULE_DESCRIPTION("KVM PPC Book3s-hv PMU"); 434 + MODULE_AUTHOR("Vaibhav Jain <vaibhav@linux.ibm.com>"); 435 + MODULE_LICENSE("GPL");
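The kvmppc_init_hostwide() function above uses a single out: label to unwind partial allocations and reset both global pointers, so a later retry starts from a clean state. A minimal userspace sketch of that unwind pattern, with malloc standing in for the kvmppc_gsm_new()/kvmppc_gsb_new() allocators (all names here are illustrative, not kernel APIs):

```c
#include <assert.h>
#include <stdlib.h>

/* Stand-ins for the gsm_l0_stats / gsb_l0_stats globals. */
static void *gsm;
static void *gsb;

/* Allocate two related objects; on any failure, free whatever was
 * allocated and NULL both pointers so the function can be retried. */
static int init_hostwide(size_t gsm_sz, size_t gsb_sz)
{
    int rc = 0;

    if (gsm)            /* already registered? nothing to do */
        return 0;

    gsm = malloc(gsm_sz);
    if (!gsm) {
        rc = -1;        /* -ENOMEM in the kernel version */
        goto out;
    }

    gsb = malloc(gsb_sz);
    if (!gsb) {
        rc = -1;
        goto out;
    }

out:
    if (rc) {
        free(gsm);
        free(gsb);
        gsm = NULL;     /* leave the globals consistent on failure */
        gsb = NULL;
    }
    return rc;
}
```

The kernel version additionally holds lock_l0_stats across the whole sequence so concurrent initializers cannot race on the shared pointers.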
+4 -3
arch/powerpc/platforms/44x/gpio.c
··· 75 75 clrbits32(&regs->or, GPIO_MASK(gpio)); 76 76 } 77 77 78 - static void 79 - ppc4xx_gpio_set(struct gpio_chip *gc, unsigned int gpio, int val) 78 + static int ppc4xx_gpio_set(struct gpio_chip *gc, unsigned int gpio, int val) 80 79 { 81 80 struct ppc4xx_gpio_chip *chip = gpiochip_get_data(gc); 82 81 unsigned long flags; ··· 87 88 spin_unlock_irqrestore(&chip->lock, flags); 88 89 89 90 pr_debug("%s: gpio: %d val: %d\n", __func__, gpio, val); 91 + 92 + return 0; 90 93 } 91 94 92 95 static int ppc4xx_gpio_dir_in(struct gpio_chip *gc, unsigned int gpio) ··· 180 179 gc->direction_input = ppc4xx_gpio_dir_in; 181 180 gc->direction_output = ppc4xx_gpio_dir_out; 182 181 gc->get = ppc4xx_gpio_get; 183 - gc->set = ppc4xx_gpio_set; 182 + gc->set_rv = ppc4xx_gpio_set; 184 183 185 184 ret = of_mm_gpiochip_add_data(np, mm_gc, ppc4xx_gc); 186 185 if (ret)
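The change above is the recurring conversion in this series: the legacy `->set` GPIO callback returned void, while the new `->set_rv` callback returns an int so hardware errors (for example a failed I2C write, as in the mcu_mpc8349emitx patch below) can propagate to the GPIO core. A userspace sketch of the shape of that conversion, with illustrative types rather than the real `<linux/gpio/driver.h>` definitions:

```c
#include <assert.h>

/* Illustrative stand-in for struct gpio_chip with the new-style
 * int-returning setter. */
struct fake_chip {
    unsigned long out;  /* output latch */
    int (*set_rv)(struct fake_chip *gc, unsigned int gpio, int val);
};

static int fake_gpio_set(struct fake_chip *gc, unsigned int gpio, int val)
{
    if (gpio >= 32)
        return -1;      /* the error now reaches the caller */
    if (val)
        gc->out |= 1UL << gpio;
    else
        gc->out &= ~(1UL << gpio);
    return 0;
}

/* direction_output can simply forward the setter's status, as
 * mcu_gpio_dir_out() does after the conversion. */
static int fake_gpio_dir_out(struct fake_chip *gc, unsigned int gpio, int val)
{
    return fake_gpio_set(gc, gpio, val);
}

static struct fake_chip chip = { .out = 0, .set_rv = fake_gpio_set };
```

Drivers whose set path cannot fail, like ppc4xx_gpio_set() here, simply `return 0;` after the register write.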
+4 -2
arch/powerpc/platforms/52xx/mpc52xx_gpt.c
··· 280 280 return (in_be32(&gpt->regs->status) >> 8) & 1; 281 281 } 282 282 283 - static void 283 + static int 284 284 mpc52xx_gpt_gpio_set(struct gpio_chip *gc, unsigned int gpio, int v) 285 285 { 286 286 struct mpc52xx_gpt_priv *gpt = gpiochip_get_data(gc); ··· 293 293 raw_spin_lock_irqsave(&gpt->lock, flags); 294 294 clrsetbits_be32(&gpt->regs->mode, MPC52xx_GPT_MODE_GPIO_MASK, r); 295 295 raw_spin_unlock_irqrestore(&gpt->lock, flags); 296 + 297 + return 0; 296 298 } 297 299 298 300 static int mpc52xx_gpt_gpio_dir_in(struct gpio_chip *gc, unsigned int gpio) ··· 336 334 gpt->gc.direction_input = mpc52xx_gpt_gpio_dir_in; 337 335 gpt->gc.direction_output = mpc52xx_gpt_gpio_dir_out; 338 336 gpt->gc.get = mpc52xx_gpt_gpio_get; 339 - gpt->gc.set = mpc52xx_gpt_gpio_set; 337 + gpt->gc.set_rv = mpc52xx_gpt_gpio_set; 340 338 gpt->gc.base = -1; 341 339 gpt->gc.parent = gpt->dev; 342 340
+8 -5
arch/powerpc/platforms/83xx/mcu_mpc8349emitx.c
··· 92 92 mutex_unlock(&mcu->lock); 93 93 } 94 94 95 - static void mcu_gpio_set(struct gpio_chip *gc, unsigned int gpio, int val) 95 + static int mcu_gpio_set(struct gpio_chip *gc, unsigned int gpio, int val) 96 96 { 97 97 struct mcu *mcu = gpiochip_get_data(gc); 98 98 u8 bit = 1 << (4 + gpio); 99 + int ret; 99 100 100 101 mutex_lock(&mcu->lock); 101 102 if (val) ··· 104 103 else 105 104 mcu->reg_ctrl |= bit; 106 105 107 - i2c_smbus_write_byte_data(mcu->client, MCU_REG_CTRL, mcu->reg_ctrl); 106 + ret = i2c_smbus_write_byte_data(mcu->client, MCU_REG_CTRL, 107 + mcu->reg_ctrl); 108 108 mutex_unlock(&mcu->lock); 109 + 110 + return ret; 109 111 } 110 112 111 113 static int mcu_gpio_dir_out(struct gpio_chip *gc, unsigned int gpio, int val) 112 114 { 113 - mcu_gpio_set(gc, gpio, val); 114 - return 0; 115 + return mcu_gpio_set(gc, gpio, val); 115 116 } 116 117 117 118 static int mcu_gpiochip_add(struct mcu *mcu) ··· 126 123 gc->can_sleep = 1; 127 124 gc->ngpio = MCU_NUM_GPIO; 128 125 gc->base = -1; 129 - gc->set = mcu_gpio_set; 126 + gc->set_rv = mcu_gpio_set; 130 127 gc->direction_output = mcu_gpio_dir_out; 131 128 gc->parent = dev; 132 129
+8 -4
arch/powerpc/platforms/8xx/cpm1.c
··· 417 417 out_be16(&iop->dat, cpm1_gc->cpdata); 418 418 } 419 419 420 - static void cpm1_gpio16_set(struct gpio_chip *gc, unsigned int gpio, int value) 420 + static int cpm1_gpio16_set(struct gpio_chip *gc, unsigned int gpio, int value) 421 421 { 422 422 struct cpm1_gpio16_chip *cpm1_gc = gpiochip_get_data(gc); 423 423 unsigned long flags; ··· 428 428 __cpm1_gpio16_set(cpm1_gc, pin_mask, value); 429 429 430 430 spin_unlock_irqrestore(&cpm1_gc->lock, flags); 431 + 432 + return 0; 431 433 } 432 434 433 435 static int cpm1_gpio16_to_irq(struct gpio_chip *gc, unsigned int gpio) ··· 499 497 gc->direction_input = cpm1_gpio16_dir_in; 500 498 gc->direction_output = cpm1_gpio16_dir_out; 501 499 gc->get = cpm1_gpio16_get; 502 - gc->set = cpm1_gpio16_set; 500 + gc->set_rv = cpm1_gpio16_set; 503 501 gc->to_irq = cpm1_gpio16_to_irq; 504 502 gc->parent = dev; 505 503 gc->owner = THIS_MODULE; ··· 556 554 out_be32(&iop->dat, cpm1_gc->cpdata); 557 555 } 558 556 559 - static void cpm1_gpio32_set(struct gpio_chip *gc, unsigned int gpio, int value) 557 + static int cpm1_gpio32_set(struct gpio_chip *gc, unsigned int gpio, int value) 560 558 { 561 559 struct cpm1_gpio32_chip *cpm1_gc = gpiochip_get_data(gc); 562 560 unsigned long flags; ··· 567 565 __cpm1_gpio32_set(cpm1_gc, pin_mask, value); 568 566 569 567 spin_unlock_irqrestore(&cpm1_gc->lock, flags); 568 + 569 + return 0; 570 570 } 571 571 572 572 static int cpm1_gpio32_dir_out(struct gpio_chip *gc, unsigned int gpio, int val) ··· 622 618 gc->direction_input = cpm1_gpio32_dir_in; 623 619 gc->direction_output = cpm1_gpio32_dir_out; 624 620 gc->get = cpm1_gpio32_get; 625 - gc->set = cpm1_gpio32_set; 621 + gc->set_rv = cpm1_gpio32_set; 626 622 gc->parent = dev; 627 623 gc->owner = THIS_MODULE; 628 624
+2 -2
arch/powerpc/platforms/powermac/setup.c
··· 45 45 #include <linux/root_dev.h> 46 46 #include <linux/bitops.h> 47 47 #include <linux/suspend.h> 48 + #include <linux/string_choices.h> 48 49 #include <linux/of.h> 49 50 #include <linux/of_platform.h> 50 51 ··· 239 238 _set_L2CR(0); 240 239 _set_L2CR(*l2cr); 241 240 pr_info("L2CR overridden (0x%x), backside cache is %s\n", 242 - *l2cr, ((*l2cr) & 0x80000000) ? 243 - "enabled" : "disabled"); 241 + *l2cr, str_enabled_disabled((*l2cr) & 0x80000000)); 244 242 } 245 243 of_node_put(np); 246 244 break;
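This patch (and the powermac/time.c and ps3 patches that follow) replaces open-coded ternaries with the `<linux/string_choices.h>` helpers. A minimal userspace sketch of the helper semantics, assuming the kernel versions behave the same way (any non-zero/true value selects the first string):

```c
#include <assert.h>
#include <string.h>

/* Sketches of the string_choices helpers used in this series. */
static inline const char *str_enabled_disabled(int v)
{
    return v ? "enabled" : "disabled";
}

static inline const char *str_on_off(int v)
{
    return v ? "on" : "off";
}

static inline const char *str_write_read(int v)
{
    return v ? "write" : "read";
}
```

The point of the conversion is purely readability and string deduplication; the printed output is unchanged.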
+2 -1
arch/powerpc/platforms/powermac/time.c
··· 15 15 #include <linux/kernel.h> 16 16 #include <linux/param.h> 17 17 #include <linux/string.h> 18 + #include <linux/string_choices.h> 18 19 #include <linux/mm.h> 19 20 #include <linux/init.h> 20 21 #include <linux/time.h> ··· 78 77 delta |= 0xFF000000UL; 79 78 dst = ((pmac_xpram_read(PMAC_XPRAM_MACHINE_LOC + 0x8) & 0x80) != 0); 80 79 printk("GMT Delta read from XPRAM: %d minutes, DST: %s\n", delta/60, 81 - dst ? "on" : "off"); 80 + str_on_off(dst)); 82 81 #endif 83 82 return delta; 84 83 }
+2 -1
arch/powerpc/platforms/ps3/device-init.c
··· 14 14 #include <linux/slab.h> 15 15 #include <linux/reboot.h> 16 16 #include <linux/rcuwait.h> 17 + #include <linux/string_choices.h> 17 18 18 19 #include <asm/firmware.h> 19 20 #include <asm/lv1call.h> ··· 725 724 static int ps3_notification_read_write(struct ps3_notification_device *dev, 726 725 u64 lpar, int write) 727 726 { 728 - const char *op = write ? "write" : "read"; 727 + const char *op = str_write_read(write); 729 728 unsigned long flags; 730 729 int res; 731 730
+2 -1
arch/powerpc/platforms/pseries/Makefile
··· 3 3 4 4 obj-y := lpar.o hvCall.o nvram.o reconfig.o \ 5 5 of_helpers.o rtas-work-area.o papr-sysparm.o \ 6 - papr-vpd.o \ 6 + papr-rtas-common.o papr-vpd.o papr-indices.o \ 7 + papr-platform-dump.o papr-phy-attest.o \ 7 8 setup.o iommu.o event_sources.o ras.o \ 8 9 firmware.o power.o dlpar.o mobility.o rng.o \ 9 10 pci.o pci_dlpar.o eeh_pseries.o msi.o \
+382 -13
arch/powerpc/platforms/pseries/htmdump.c
··· 10 10 #include <asm/io.h> 11 11 #include <asm/machdep.h> 12 12 #include <asm/plpar_wrappers.h> 13 + #include <asm/kvm_guest.h> 13 14 14 15 static void *htm_buf; 16 + static void *htm_status_buf; 17 + static void *htm_info_buf; 18 + static void *htm_caps_buf; 15 19 static u32 nodeindex; 16 20 static u32 nodalchipindex; 17 21 static u32 coreindexonchip; 18 22 static u32 htmtype; 23 + static u32 htmconfigure; 24 + static u32 htmstart; 25 + static u32 htmsetup; 26 + static u64 htmflags; 27 + 19 28 static struct dentry *htmdump_debugfs_dir; 29 + #define HTM_ENABLE 1 30 + #define HTM_DISABLE 0 31 + #define HTM_NOWRAP 1 32 + #define HTM_WRAP 0 20 33 21 - static ssize_t htmdump_read(struct file *filp, char __user *ubuf, 22 - size_t count, loff_t *ppos) 34 + /* 35 + * Check the return code for H_HTM hcall. 36 + * Return non-zero value (1) if either H_PARTIAL or H_SUCCESS 37 + * is returned. For other return codes: 38 + * Return zero if H_NOT_AVAILABLE. 39 + * Return -EBUSY if hcall return busy. 40 + * Return -EINVAL if any parameter or operation is not valid. 41 + * Return -EPERM if HTM Virtualization Engine Technology code 42 + * is not applied. 43 + * Return -EIO if the HTM state is not valid. 
44 + */ 45 + static ssize_t htm_return_check(long rc) 23 46 { 24 - void *htm_buf = filp->private_data; 25 - unsigned long page, read_size, available; 26 - loff_t offset; 27 - long rc; 28 - 29 - page = ALIGN_DOWN(*ppos, PAGE_SIZE); 30 - offset = (*ppos) % PAGE_SIZE; 31 - 32 - rc = htm_get_dump_hardware(nodeindex, nodalchipindex, coreindexonchip, 33 - htmtype, virt_to_phys(htm_buf), PAGE_SIZE, page); 34 - 35 47 switch (rc) { 36 48 case H_SUCCESS: 37 49 /* H_PARTIAL for the case where all available data can't be ··· 77 65 return -EPERM; 78 66 } 79 67 68 + /* 69 + * Return 1 for H_SUCCESS/H_PARTIAL 70 + */ 71 + return 1; 72 + } 73 + 74 + static ssize_t htmdump_read(struct file *filp, char __user *ubuf, 75 + size_t count, loff_t *ppos) 76 + { 77 + void *htm_buf = filp->private_data; 78 + unsigned long page, read_size, available; 79 + loff_t offset; 80 + long rc, ret; 81 + 82 + page = ALIGN_DOWN(*ppos, PAGE_SIZE); 83 + offset = (*ppos) % PAGE_SIZE; 84 + 85 + /* 86 + * Invoke H_HTM call with: 87 + * - operation as htm dump (H_HTM_OP_DUMP_DATA) 88 + * - last three values are address, size and offset 89 + */ 90 + rc = htm_hcall_wrapper(htmflags, nodeindex, nodalchipindex, coreindexonchip, 91 + htmtype, H_HTM_OP_DUMP_DATA, virt_to_phys(htm_buf), 92 + PAGE_SIZE, page); 93 + 94 + ret = htm_return_check(rc); 95 + if (ret <= 0) { 96 + pr_debug("H_HTM hcall failed for op: H_HTM_OP_DUMP_DATA, returning %ld\n", ret); 97 + return ret; 98 + } 99 + 80 100 available = PAGE_SIZE; 81 101 read_size = min(count, available); 82 102 *ppos += read_size; ··· 120 76 .read = htmdump_read, 121 77 .open = simple_open, 122 78 }; 79 + 80 + static int htmconfigure_set(void *data, u64 val) 81 + { 82 + long rc, ret; 83 + unsigned long param1 = -1, param2 = -1; 84 + 85 + /* 86 + * value as 1 : configure HTM. 87 + * value as 0 : deconfigure HTM. Return -EINVAL for 88 + * other values. 
89 + */ 90 + if (val == HTM_ENABLE) { 91 + /* 92 + * Invoke H_HTM call with: 93 + * - operation as htm configure (H_HTM_OP_CONFIGURE) 94 + * - If htmflags is set, param1 and param2 will be -1 95 + * which is an indicator to use default htm mode reg mask 96 + * and htm mode reg value. 97 + * - last three values are unused, hence set to zero 98 + */ 99 + if (!htmflags) { 100 + param1 = 0; 101 + param2 = 0; 102 + } 103 + 104 + rc = htm_hcall_wrapper(htmflags, nodeindex, nodalchipindex, coreindexonchip, 105 + htmtype, H_HTM_OP_CONFIGURE, param1, param2, 0); 106 + } else if (val == HTM_DISABLE) { 107 + /* 108 + * Invoke H_HTM call with: 109 + * - operation as htm deconfigure (H_HTM_OP_DECONFIGURE) 110 + * - last three values are unused, hence set to zero 111 + */ 112 + rc = htm_hcall_wrapper(htmflags, nodeindex, nodalchipindex, coreindexonchip, 113 + htmtype, H_HTM_OP_DECONFIGURE, 0, 0, 0); 114 + } else 115 + return -EINVAL; 116 + 117 + ret = htm_return_check(rc); 118 + if (ret <= 0) { 119 + pr_debug("H_HTM hcall failed, returning %ld\n", ret); 120 + return ret; 121 + } 122 + 123 + /* Set htmconfigure if operation succeeds */ 124 + htmconfigure = val; 125 + 126 + return 0; 127 + } 128 + 129 + static int htmconfigure_get(void *data, u64 *val) 130 + { 131 + *val = htmconfigure; 132 + return 0; 133 + } 134 + 135 + static int htmstart_set(void *data, u64 val) 136 + { 137 + long rc, ret; 138 + 139 + /* 140 + * value as 1: start HTM 141 + * value as 0: stop HTM 142 + * Return -EINVAL for other values. 
143 + */ 144 + if (val == HTM_ENABLE) { 145 + /* 146 + * Invoke H_HTM call with: 147 + * - operation as htm start (H_HTM_OP_START) 148 + * - last three values are unused, hence set to zero 149 + */ 150 + rc = htm_hcall_wrapper(htmflags, nodeindex, nodalchipindex, coreindexonchip, 151 + htmtype, H_HTM_OP_START, 0, 0, 0); 152 + 153 + } else if (val == HTM_DISABLE) { 154 + /* 155 + * Invoke H_HTM call with: 156 + * - operation as htm stop (H_HTM_OP_STOP) 157 + * - last three values are unused, hence set to zero 158 + */ 159 + rc = htm_hcall_wrapper(htmflags, nodeindex, nodalchipindex, coreindexonchip, 160 + htmtype, H_HTM_OP_STOP, 0, 0, 0); 161 + } else 162 + return -EINVAL; 163 + 164 + ret = htm_return_check(rc); 165 + if (ret <= 0) { 166 + pr_debug("H_HTM hcall failed, returning %ld\n", ret); 167 + return ret; 168 + } 169 + 170 + /* Set htmstart if H_HTM_OP_START/H_HTM_OP_STOP operation succeeds */ 171 + htmstart = val; 172 + 173 + return 0; 174 + } 175 + 176 + static int htmstart_get(void *data, u64 *val) 177 + { 178 + *val = htmstart; 179 + return 0; 180 + } 181 + 182 + static ssize_t htmstatus_read(struct file *filp, char __user *ubuf, 183 + size_t count, loff_t *ppos) 184 + { 185 + void *htm_status_buf = filp->private_data; 186 + long rc, ret; 187 + u64 *num_entries; 188 + u64 to_copy; 189 + int htmstatus_flag; 190 + 191 + /* 192 + * Invoke H_HTM call with: 193 + * - operation as htm status (H_HTM_OP_STATUS) 194 + * - last three values as addr, size and offset 195 + */ 196 + rc = htm_hcall_wrapper(htmflags, nodeindex, nodalchipindex, coreindexonchip, 197 + htmtype, H_HTM_OP_STATUS, virt_to_phys(htm_status_buf), 198 + PAGE_SIZE, 0); 199 + 200 + ret = htm_return_check(rc); 201 + if (ret <= 0) { 202 + pr_debug("H_HTM hcall failed for op: H_HTM_OP_STATUS, returning %ld\n", ret); 203 + return ret; 204 + } 205 + 206 + /* 207 + * HTM status buffer, start of buffer + 0x10 gives the 208 + * number of HTM entries in the buffer. 
Each nest htm status 209 + * entry is 0x6 bytes where each core htm status entry is 210 + * 0x8 bytes. 211 + * So total count to copy is: 212 + * 32 bytes (for first 7 fields) + (number of HTM entries * entry size) 213 + */ 214 + num_entries = htm_status_buf + 0x10; 215 + if (htmtype == 0x2) 216 + htmstatus_flag = 0x8; 217 + else 218 + htmstatus_flag = 0x6; 219 + to_copy = 32 + (be64_to_cpu(*num_entries) * htmstatus_flag); 220 + return simple_read_from_buffer(ubuf, count, ppos, htm_status_buf, to_copy); 221 + } 222 + 223 + static const struct file_operations htmstatus_fops = { 224 + .llseek = NULL, 225 + .read = htmstatus_read, 226 + .open = simple_open, 227 + }; 228 + 229 + static ssize_t htminfo_read(struct file *filp, char __user *ubuf, 230 + size_t count, loff_t *ppos) 231 + { 232 + void *htm_info_buf = filp->private_data; 233 + long rc, ret; 234 + u64 *num_entries; 235 + u64 to_copy; 236 + 237 + /* 238 + * Invoke H_HTM call with: 239 + * - operation as htm status (H_HTM_OP_STATUS) 240 + * - last three values as addr, size and offset 241 + */ 242 + rc = htm_hcall_wrapper(htmflags, nodeindex, nodalchipindex, coreindexonchip, 243 + htmtype, H_HTM_OP_DUMP_SYSPROC_CONF, virt_to_phys(htm_info_buf), 244 + PAGE_SIZE, 0); 245 + 246 + ret = htm_return_check(rc); 247 + if (ret <= 0) { 248 + pr_debug("H_HTM hcall failed for op: H_HTM_OP_DUMP_SYSPROC_CONF, returning %ld\n", ret); 249 + return ret; 250 + } 251 + 252 + /* 253 + * HTM status buffer, start of buffer + 0x10 gives the 254 + * number of HTM entries in the buffer. Each entry of processor 255 + * is 16 bytes. 
256 + * 257 + * So total count to copy is: 258 + * 32 bytes (for first 5 fields) + (number of HTM entries * entry size) 259 + */ 260 + num_entries = htm_info_buf + 0x10; 261 + to_copy = 32 + (be64_to_cpu(*num_entries) * 16); 262 + return simple_read_from_buffer(ubuf, count, ppos, htm_info_buf, to_copy); 263 + } 264 + 265 + static ssize_t htmcaps_read(struct file *filp, char __user *ubuf, 266 + size_t count, loff_t *ppos) 267 + { 268 + void *htm_caps_buf = filp->private_data; 269 + long rc, ret; 270 + 271 + /* 272 + * Invoke H_HTM call with: 273 + * - operation as htm capabilities (H_HTM_OP_CAPABILITIES) 274 + * - last three values as addr, size (0x80 for Capabilities Output Buffer 275 + * and zero 276 + */ 277 + rc = htm_hcall_wrapper(htmflags, nodeindex, nodalchipindex, coreindexonchip, 278 + htmtype, H_HTM_OP_CAPABILITIES, virt_to_phys(htm_caps_buf), 279 + 0x80, 0); 280 + 281 + ret = htm_return_check(rc); 282 + if (ret <= 0) { 283 + pr_debug("H_HTM hcall failed for op: H_HTM_OP_CAPABILITIES, returning %ld\n", ret); 284 + return ret; 285 + } 286 + 287 + return simple_read_from_buffer(ubuf, count, ppos, htm_caps_buf, 0x80); 288 + } 289 + 290 + static const struct file_operations htminfo_fops = { 291 + .llseek = NULL, 292 + .read = htminfo_read, 293 + .open = simple_open, 294 + }; 295 + 296 + static const struct file_operations htmcaps_fops = { 297 + .llseek = NULL, 298 + .read = htmcaps_read, 299 + .open = simple_open, 300 + }; 301 + 302 + static int htmsetup_set(void *data, u64 val) 303 + { 304 + long rc, ret; 305 + 306 + /* 307 + * Input value: HTM buffer size in the power of 2 308 + * example: hex value 0x21 ( decimal: 33 ) is for 309 + * 8GB 310 + * Invoke H_HTM call with: 311 + * - operation as htm start (H_HTM_OP_SETUP) 312 + * - parameter 1 set to input value. 
313 + * - last two values are unused, hence set to zero 314 + */ 315 + rc = htm_hcall_wrapper(htmflags, nodeindex, nodalchipindex, coreindexonchip, 316 + htmtype, H_HTM_OP_SETUP, val, 0, 0); 317 + 318 + ret = htm_return_check(rc); 319 + if (ret <= 0) { 320 + pr_debug("H_HTM hcall failed for op: H_HTM_OP_SETUP, returning %ld\n", ret); 321 + return ret; 322 + } 323 + 324 + /* Set htmsetup if H_HTM_OP_SETUP operation succeeds */ 325 + htmsetup = val; 326 + 327 + return 0; 328 + } 329 + 330 + static int htmsetup_get(void *data, u64 *val) 331 + { 332 + *val = htmsetup; 333 + return 0; 334 + } 335 + 336 + static int htmflags_set(void *data, u64 val) 337 + { 338 + /* 339 + * Input value: 340 + * Currently supported flag value is to enable/disable 341 + * HTM buffer wrap. wrap is used along with "configure" 342 + * to prevent HTM buffer from wrapping. 343 + * Writing 1 will set noWrap while configuring HTM 344 + */ 345 + if (val == HTM_NOWRAP) 346 + htmflags = H_HTM_FLAGS_NOWRAP; 347 + else if (val == HTM_WRAP) 348 + htmflags = 0; 349 + else 350 + return -EINVAL; 351 + 352 + return 0; 353 + } 354 + 355 + static int htmflags_get(void *data, u64 *val) 356 + { 357 + *val = htmflags; 358 + return 0; 359 + } 360 + 361 + DEFINE_SIMPLE_ATTRIBUTE(htmconfigure_fops, htmconfigure_get, htmconfigure_set, "%llu\n"); 362 + DEFINE_SIMPLE_ATTRIBUTE(htmstart_fops, htmstart_get, htmstart_set, "%llu\n"); 363 + DEFINE_SIMPLE_ATTRIBUTE(htmsetup_fops, htmsetup_get, htmsetup_set, "%llu\n"); 364 + DEFINE_SIMPLE_ATTRIBUTE(htmflags_fops, htmflags_get, htmflags_set, "%llu\n"); 123 365 124 366 static int htmdump_init_debugfs(void) 125 367 { ··· 428 98 htmdump_debugfs_dir, &htmtype); 429 99 debugfs_create_file("trace", 0400, htmdump_debugfs_dir, htm_buf, &htmdump_fops); 430 100 101 + /* 102 + * Debugfs interface files to control HTM operations: 103 + */ 104 + debugfs_create_file("htmconfigure", 0600, htmdump_debugfs_dir, NULL, &htmconfigure_fops); 105 + debugfs_create_file("htmstart", 0600, 
htmdump_debugfs_dir, NULL, &htmstart_fops); 106 + debugfs_create_file("htmsetup", 0600, htmdump_debugfs_dir, NULL, &htmsetup_fops); 107 + debugfs_create_file("htmflags", 0600, htmdump_debugfs_dir, NULL, &htmflags_fops); 108 + 109 + /* Debugfs interface file to present status of HTM */ 110 + htm_status_buf = kmalloc(PAGE_SIZE, GFP_KERNEL); 111 + if (!htm_status_buf) { 112 + pr_err("Failed to allocate htmstatus buf\n"); 113 + return -ENOMEM; 114 + } 115 + 116 + /* Debugfs interface file to present System Processor Configuration */ 117 + htm_info_buf = kmalloc(PAGE_SIZE, GFP_KERNEL); 118 + if (!htm_info_buf) { 119 + pr_err("Failed to allocate htm info buf\n"); 120 + return -ENOMEM; 121 + } 122 + 123 + /* Debugfs interface file to present HTM capabilities */ 124 + htm_caps_buf = kmalloc(PAGE_SIZE, GFP_KERNEL); 125 + if (!htm_caps_buf) { 126 + pr_err("Failed to allocate htm caps buf\n"); 127 + return -ENOMEM; 128 + } 129 + 130 + debugfs_create_file("htmstatus", 0400, htmdump_debugfs_dir, htm_status_buf, &htmstatus_fops); 131 + debugfs_create_file("htminfo", 0400, htmdump_debugfs_dir, htm_info_buf, &htminfo_fops); 132 + debugfs_create_file("htmcaps", 0400, htmdump_debugfs_dir, htm_caps_buf, &htmcaps_fops); 133 + 431 134 return 0; 432 135 } 433 136 434 137 static int __init htmdump_init(void) 435 138 { 139 + /* Disable on kvm guest */ 140 + if (is_kvm_guest()) { 141 + pr_info("htmdump not supported inside KVM guest\n"); 142 + return -EOPNOTSUPP; 143 + } 144 + 436 145 if (htmdump_init_debugfs()) 437 146 return -ENOMEM; 438 147
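Every htmdump operation above funnels its hcall status through htm_return_check(), which collapses the H_HTM return codes into three caller-visible cases: positive (data available), zero (nothing to read), or a negative errno. A userspace sketch of that convention; the numeric H_* codes below are placeholders, not the real hvcall.h values:

```c
#include <assert.h>

/* Placeholder status codes; the real values live in asm/hvcall.h. */
enum { H_SUCCESS = 0, H_PARTIAL = 1, H_NOT_AVAILABLE = 2,
       H_BUSY = 3, H_PARAMETER = 4 };

/* Map an H_HTM hcall status to what the read()/write() handlers
 * return: 1 = buffer may be copied out, 0 = no data, <0 = errno. */
static long htm_check(long rc)
{
    switch (rc) {
    case H_SUCCESS:
    case H_PARTIAL:     /* some, but not all, data was returned */
        return 1;
    case H_NOT_AVAILABLE:
        return 0;       /* no trace data: EOF-like result */
    case H_BUSY:
        return -16;     /* -EBUSY */
    case H_PARAMETER:
        return -22;     /* -EINVAL */
    default:
        return -5;      /* -EIO for unexpected states */
    }
}
```

This is why each handler checks `if (ret <= 0) return ret;` — both the "no data" and the error cases can be handed straight back to user space.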
+1 -1
arch/powerpc/platforms/pseries/iommu.c
··· 197 197 198 198 static void tce_free_pSeries(struct iommu_table *tbl) 199 199 { 200 - if (!tbl->it_userspace) 200 + if (tbl->it_userspace) 201 201 tce_iommu_userspace_view_free(tbl); 202 202 } 203 203
+6 -1
arch/powerpc/platforms/pseries/msi.c
··· 525 525 526 526 static void pseries_msi_compose_msg(struct irq_data *data, struct msi_msg *msg) 527 527 { 528 - __pci_read_msi_msg(irq_data_get_msi_desc(data), msg); 528 + struct pci_dev *dev = msi_desc_to_pci_dev(irq_data_get_msi_desc(data)); 529 + 530 + if (dev->current_state == PCI_D0) 531 + __pci_read_msi_msg(irq_data_get_msi_desc(data), msg); 532 + else 533 + get_cached_msi_msg(data->irq, msg); 529 534 } 530 535 531 536 static struct irq_chip pseries_msi_irq_chip = {
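The msi.c fix above avoids reading config space of a device that is not in the D0 power state; when the device is suspended, the MSI message cached by the IRQ core is returned instead. A userspace sketch of that fallback decision, with illustrative stand-ins for the PCI core types:

```c
#include <assert.h>

enum pci_power { PCI_D0 = 0, PCI_D3hot = 3 };

struct fake_msi_msg { unsigned int data; };

struct fake_dev {
    enum pci_power current_state;
    struct fake_msi_msg live;    /* what config space would return */
    struct fake_msi_msg cached;  /* snapshot kept by the IRQ core */
};

/* Only touch "config space" (the live copy) when the device is in D0;
 * otherwise fall back to the cached message, mirroring the
 * __pci_read_msi_msg() vs get_cached_msi_msg() split above. */
static void compose_msg(struct fake_dev *dev, struct fake_msi_msg *msg)
{
    if (dev->current_state == PCI_D0)
        *msg = dev->live;
    else
        *msg = dev->cached;
}

static unsigned int composed_data(struct fake_dev *dev)
{
    struct fake_msi_msg m;

    compose_msg(dev, &m);
    return m.data;
}

static struct fake_dev suspended = {
    .current_state = PCI_D3hot, .live = {1}, .cached = {2} };
static struct fake_dev running = {
    .current_state = PCI_D0, .live = {1}, .cached = {2} };
```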
+488
arch/powerpc/platforms/pseries/papr-indices.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + 3 + #define pr_fmt(fmt) "papr-indices: " fmt 4 + 5 + #include <linux/build_bug.h> 6 + #include <linux/file.h> 7 + #include <linux/fs.h> 8 + #include <linux/init.h> 9 + #include <linux/lockdep.h> 10 + #include <linux/kernel.h> 11 + #include <linux/miscdevice.h> 12 + #include <linux/signal.h> 13 + #include <linux/slab.h> 14 + #include <linux/string.h> 15 + #include <linux/string_helpers.h> 16 + #include <linux/uaccess.h> 17 + #include <asm/machdep.h> 18 + #include <asm/rtas-work-area.h> 19 + #include <asm/rtas.h> 20 + #include <uapi/asm/papr-indices.h> 21 + #include "papr-rtas-common.h" 22 + 23 + /* 24 + * Function-specific return values for ibm,set-dynamic-indicator and 25 + * ibm,get-dynamic-sensor-state RTAS calls. 26 + * PAPR+ v2.13 7.3.18 and 7.3.19. 27 + */ 28 + #define RTAS_IBM_DYNAMIC_INDICE_NO_INDICATOR -3 29 + 30 + /** 31 + * struct rtas_get_indices_params - Parameters (in and out) for 32 + * ibm,get-indices. 33 + * @is_sensor: In: Caller-provided whether sensor or indicator. 34 + * @indice_type:In: Caller-provided indice (sensor or indicator) token 35 + * @work_area: In: Caller-provided work area buffer for results. 36 + * @next: In: Sequence number. Out: Next sequence number. 37 + * @status: Out: RTAS call status. 38 + */ 39 + struct rtas_get_indices_params { 40 + u8 is_sensor; 41 + u32 indice_type; 42 + struct rtas_work_area *work_area; 43 + u32 next; 44 + s32 status; 45 + }; 46 + 47 + /* 48 + * rtas_ibm_get_indices() - Call ibm,get-indices to fill a work area buffer. 49 + * @params: See &struct rtas_ibm_get_indices_params. 50 + * 51 + * Calls ibm,get-indices until it errors or successfully deposits data 52 + * into the supplied work area. Handles RTAS retry statuses. Maps RTAS 53 + * error statuses to reasonable errno values. 54 + * 55 + * The caller is expected to invoke rtas_ibm_get_indices() multiple times 56 + * to retrieve all indices data for the provided indice type. 
Only one 57 + * sequence should be in progress at any time; starting a new sequence 58 + * will disrupt any sequence already in progress. Serialization of 59 + * indices retrieval sequences is the responsibility of the caller. 60 + * 61 + * The caller should inspect @params.status to determine whether more 62 + * calls are needed to complete the sequence. 63 + * 64 + * Context: May sleep. 65 + * Return: -ve on error, 0 otherwise. 66 + */ 67 + static int rtas_ibm_get_indices(struct rtas_get_indices_params *params) 68 + { 69 + struct rtas_work_area *work_area = params->work_area; 70 + const s32 token = rtas_function_token(RTAS_FN_IBM_GET_INDICES); 71 + u32 rets; 72 + s32 fwrc; 73 + int ret; 74 + 75 + if (token == RTAS_UNKNOWN_SERVICE) 76 + return -ENOENT; 77 + 78 + lockdep_assert_held(&rtas_ibm_get_indices_lock); 79 + 80 + do { 81 + fwrc = rtas_call(token, 5, 2, &rets, params->is_sensor, 82 + params->indice_type, 83 + rtas_work_area_phys(work_area), 84 + rtas_work_area_size(work_area), 85 + params->next); 86 + } while (rtas_busy_delay(fwrc)); 87 + 88 + switch (fwrc) { 89 + case RTAS_HARDWARE_ERROR: 90 + ret = -EIO; 91 + break; 92 + case RTAS_INVALID_PARAMETER: /* Indicator type is not supported */ 93 + ret = -EINVAL; 94 + break; 95 + case RTAS_SEQ_START_OVER: 96 + ret = -EAGAIN; 97 + pr_info_ratelimited("Indices changed during retrieval, retrying\n"); 98 + params->next = 1; 99 + break; 100 + case RTAS_SEQ_MORE_DATA: 101 + params->next = rets; 102 + ret = 0; 103 + break; 104 + case RTAS_SEQ_COMPLETE: 105 + params->next = 0; 106 + ret = 0; 107 + break; 108 + default: 109 + ret = -EIO; 110 + pr_err_ratelimited("unexpected ibm,get-indices status %d\n", fwrc); 111 + break; 112 + } 113 + 114 + params->status = fwrc; 115 + return ret; 116 + } 117 + 118 + /* 119 + * Internal indices sequence APIs. A sequence is a series of calls to 120 + * ibm,get-indices for a given location code. 
The sequence ends when 121 + * an error is encountered or all indices for the input have been 122 + * returned. 123 + */ 124 + 125 + /* 126 + * indices_sequence_begin() - Begin an indices retrieval sequence. 127 + * 128 + * Context: May sleep. 129 + */ 130 + static void indices_sequence_begin(struct papr_rtas_sequence *seq) 131 + { 132 + struct rtas_get_indices_params *param; 133 + 134 + param = (struct rtas_get_indices_params *)seq->params; 135 + /* 136 + * We could allocate the work area before acquiring the 137 + * function lock, but that would allow concurrent requests to 138 + * exhaust the limited work area pool for no benefit. So 139 + * allocate the work area under the lock. 140 + */ 141 + mutex_lock(&rtas_ibm_get_indices_lock); 142 + param->work_area = rtas_work_area_alloc(RTAS_GET_INDICES_BUF_SIZE); 143 + param->next = 1; 144 + param->status = 0; 145 + } 146 + 147 + /* 148 + * indices_sequence_end() - Finalize an indices retrieval sequence. 149 + * 150 + * Releases resources obtained by indices_sequence_begin(). 151 + */ 152 + static void indices_sequence_end(struct papr_rtas_sequence *seq) 153 + { 154 + struct rtas_get_indices_params *param; 155 + 156 + param = (struct rtas_get_indices_params *)seq->params; 157 + rtas_work_area_free(param->work_area); 158 + mutex_unlock(&rtas_ibm_get_indices_lock); 159 + } 160 + 161 + /* 162 + * Work function to be passed to papr_rtas_blob_generate(). 163 + * 164 + * The ibm,get-indices RTAS call fills the work area in a fixed 165 + * format but does not return the number of bytes written. So 166 + * instead of the kernel parsing the work area to determine the 167 + * buffer length, copy the complete work area (RTAS_GET_INDICES_BUF_SIZE) 168 + * to the blob and let user space obtain the data. 169 + * This means RTAS_GET_INDICES_BUF_SIZE bytes are returned for each 170 + * read().
171 + */ 172 + 173 + static const char *indices_sequence_fill_work_area(struct papr_rtas_sequence *seq, 174 + size_t *len) 175 + { 176 + struct rtas_get_indices_params *p; 177 + bool init_state; 178 + 179 + p = (struct rtas_get_indices_params *)seq->params; 180 + init_state = (p->next == 1) ? true : false; 181 + 182 + if (papr_rtas_sequence_should_stop(seq, p->status, init_state)) 183 + return NULL; 184 + if (papr_rtas_sequence_set_err(seq, rtas_ibm_get_indices(p))) 185 + return NULL; 186 + 187 + *len = RTAS_GET_INDICES_BUF_SIZE; 188 + return rtas_work_area_raw_buf(p->work_area); 189 + } 190 + 191 + /* 192 + * papr_indices_handle_read - returns indices blob data to user space 193 + * 194 + * The ibm,get-indices RTAS call fills the work area in a certain 195 + * format but does not return the number of bytes written, and 196 + * RTAS_GET_INDICES_BUF_SIZE bytes were copied to the blob for each 197 + * RTAS call. So send an RTAS_GET_INDICES_BUF_SIZE buffer to user 198 + * space for each read(). 199 + */ 200 + static ssize_t papr_indices_handle_read(struct file *file, 201 + char __user *buf, size_t size, loff_t *off) 202 + { 203 + const struct papr_rtas_blob *blob = file->private_data; 204 + 205 + /* we should not instantiate a handle without any data attached.
 */
	if (!papr_rtas_blob_has_data(blob)) {
		pr_err_once("handle without data\n");
		return -EIO;
	}

	if (size < RTAS_GET_INDICES_BUF_SIZE) {
		pr_err_once("Invalid buffer length %ld, expect %d\n",
				size, RTAS_GET_INDICES_BUF_SIZE);
		return -EINVAL;
	} else if (size > RTAS_GET_INDICES_BUF_SIZE)
		size = RTAS_GET_INDICES_BUF_SIZE;

	return simple_read_from_buffer(buf, size, off, blob->data, blob->len);
}

static const struct file_operations papr_indices_handle_ops = {
	.read = papr_indices_handle_read,
	.llseek = papr_rtas_common_handle_seek,
	.release = papr_rtas_common_handle_release,
};

/*
 * papr_indices_create_handle() - Create a fd-based handle for reading
 * indices data
 * @ubuf: Input parameters to the RTAS call, such as whether sensor or
 * indicator and the indice type, in user memory
 *
 * Handler for the PAPR_INDICES_IOC_GET ioctl command. Validates @ubuf
 * and instantiates an immutable indices "blob" for it. The blob is
 * attached to a file descriptor for reading by user space. The memory
 * backing the blob is freed when the file is released.
 *
 * All of the requested indices data is retrieved by this call and all
 * necessary RTAS interactions are performed before returning the fd
 * to user space. This keeps the read handler simple and ensures that
 * the kernel can prevent interleaving of ibm,get-indices call sequences.
 *
 * Return: The installed fd number if successful, -ve errno otherwise.
 */
static long papr_indices_create_handle(struct papr_indices_io_block __user *ubuf)
{
	struct papr_rtas_sequence seq = {};
	struct rtas_get_indices_params params = {};
	int fd;

	if (get_user(params.is_sensor, &ubuf->indices.is_sensor))
		return -EFAULT;

	if (get_user(params.indice_type, &ubuf->indices.indice_type))
		return -EFAULT;

	seq = (struct papr_rtas_sequence) {
		.begin = indices_sequence_begin,
		.end = indices_sequence_end,
		.work = indices_sequence_fill_work_area,
	};

	seq.params = &params;
	fd = papr_rtas_setup_file_interface(&seq,
			&papr_indices_handle_ops, "[papr-indices]");

	return fd;
}

/*
 * Create a work area with the input parameters. This function is used
 * for both the ibm,set-dynamic-indicator and ibm,get-dynamic-sensor-state
 * RTAS calls.
 */
static struct rtas_work_area *
papr_dynamic_indice_buf_from_user(struct papr_indices_io_block __user *ubuf,
				struct papr_indices_io_block *kbuf)
{
	struct rtas_work_area *work_area;
	u32 length;
	__be32 len_be;

	if (copy_from_user(kbuf, ubuf, sizeof(*kbuf)))
		return ERR_PTR(-EFAULT);

	if (!string_is_terminated(kbuf->dynamic_param.location_code_str,
			ARRAY_SIZE(kbuf->dynamic_param.location_code_str)))
		return ERR_PTR(-EINVAL);

	/*
	 * The input data in the work area should be as follows:
	 * - 32-bit integer length of the location code string,
	 *   including NULL.
	 * - Location code string, NULL terminated, identifying the
	 *   token (sensor or indicator).
	 * PAPR 2.13 - R1–7.3.18–5 ibm,set-dynamic-indicator
	 *           - R1–7.3.19–5 ibm,get-dynamic-sensor-state
	 */
	/*
	 * The length should also include the NULL terminator.
	 */
	length = strlen(kbuf->dynamic_param.location_code_str) + 1;
	if (length > LOC_CODE_SIZE)
		return ERR_PTR(-EINVAL);

	len_be = cpu_to_be32(length);

	work_area = rtas_work_area_alloc(LOC_CODE_SIZE + sizeof(u32));
	memcpy(rtas_work_area_raw_buf(work_area), &len_be, sizeof(u32));
	memcpy((rtas_work_area_raw_buf(work_area) + sizeof(u32)),
			&kbuf->dynamic_param.location_code_str, length);

	return work_area;
}

/**
 * papr_dynamic_indicator_ioc_set - ibm,set-dynamic-indicator RTAS call
 * PAPR 2.13 7.3.18
 *
 * @ubuf: Input parameters to the RTAS call, such as the indicator
 * token and the new state.
 *
 * Returns success or -errno.
 */
static long papr_dynamic_indicator_ioc_set(struct papr_indices_io_block __user *ubuf)
{
	struct papr_indices_io_block kbuf;
	struct rtas_work_area *work_area;
	s32 fwrc, token, ret;

	token = rtas_function_token(RTAS_FN_IBM_SET_DYNAMIC_INDICATOR);
	if (token == RTAS_UNKNOWN_SERVICE)
		return -ENOENT;

	mutex_lock(&rtas_ibm_set_dynamic_indicator_lock);
	work_area = papr_dynamic_indice_buf_from_user(ubuf, &kbuf);
	if (IS_ERR(work_area)) {
		ret = PTR_ERR(work_area);
		goto out;
	}

	do {
		fwrc = rtas_call(token, 3, 1, NULL,
				kbuf.dynamic_param.token,
				kbuf.dynamic_param.state,
				rtas_work_area_phys(work_area));
	} while (rtas_busy_delay(fwrc));

	rtas_work_area_free(work_area);

	switch (fwrc) {
	case RTAS_SUCCESS:
		ret = 0;
		break;
	case RTAS_IBM_DYNAMIC_INDICE_NO_INDICATOR:	/* No such indicator */
		ret = -EOPNOTSUPP;
		break;
	default:
		pr_err("unexpected ibm,set-dynamic-indicator result %d\n",
			fwrc);
		fallthrough;
	case RTAS_HARDWARE_ERROR:	/* Hardware/platform error */
		ret = -EIO;
		break;
	}

out:
	mutex_unlock(&rtas_ibm_set_dynamic_indicator_lock);
	return ret;
}

/**
 * papr_dynamic_sensor_ioc_get - ibm,get-dynamic-sensor-state RTAS call
 * PAPR 2.13 7.3.19
 *
 * @ubuf: Input parameters to the RTAS call, such as the sensor token.
 * Copies the state to the user space buffer.
 *
 * Returns success or -errno.
 */
static long papr_dynamic_sensor_ioc_get(struct papr_indices_io_block __user *ubuf)
{
	struct papr_indices_io_block kbuf;
	struct rtas_work_area *work_area;
	s32 fwrc, token, ret;
	u32 rets;

	token = rtas_function_token(RTAS_FN_IBM_GET_DYNAMIC_SENSOR_STATE);
	if (token == RTAS_UNKNOWN_SERVICE)
		return -ENOENT;

	mutex_lock(&rtas_ibm_get_dynamic_sensor_state_lock);
	work_area = papr_dynamic_indice_buf_from_user(ubuf, &kbuf);
	if (IS_ERR(work_area)) {
		ret = PTR_ERR(work_area);
		goto out;
	}

	do {
		fwrc = rtas_call(token, 2, 2, &rets,
				kbuf.dynamic_param.token,
				rtas_work_area_phys(work_area));
	} while (rtas_busy_delay(fwrc));

	rtas_work_area_free(work_area);

	switch (fwrc) {
	case RTAS_SUCCESS:
		if (put_user(rets, &ubuf->dynamic_param.state))
			ret = -EFAULT;
		else
			ret = 0;
		break;
	case RTAS_IBM_DYNAMIC_INDICE_NO_INDICATOR:	/* No such sensor */
		ret = -EOPNOTSUPP;
		break;
	default:
		pr_err("unexpected ibm,get-dynamic-sensor result %d\n",
			fwrc);
		fallthrough;
	case RTAS_HARDWARE_ERROR:	/* Hardware/platform error */
		ret = -EIO;
		break;
	}

out:
	mutex_unlock(&rtas_ibm_get_dynamic_sensor_state_lock);
	return ret;
}

/*
 * Top-level ioctl handler for /dev/papr-indices.
 */
static long papr_indices_dev_ioctl(struct file *filp, unsigned int ioctl,
				unsigned long arg)
{
	void __user *argp = (__force void __user *)arg;
	long ret;

	switch (ioctl) {
	case PAPR_INDICES_IOC_GET:
		ret = papr_indices_create_handle(argp);
		break;
	case PAPR_DYNAMIC_SENSOR_IOC_GET:
		ret = papr_dynamic_sensor_ioc_get(argp);
		break;
	case PAPR_DYNAMIC_INDICATOR_IOC_SET:
		if (filp->f_mode & FMODE_WRITE)
			ret = papr_dynamic_indicator_ioc_set(argp);
		else
			ret = -EBADF;
		break;
	default:
		ret = -ENOIOCTLCMD;
		break;
	}

	return ret;
}

static const struct file_operations papr_indices_ops = {
	.unlocked_ioctl = papr_indices_dev_ioctl,
};

static struct miscdevice papr_indices_dev = {
	.minor = MISC_DYNAMIC_MINOR,
	.name = "papr-indices",
	.fops = &papr_indices_ops,
};

static __init int papr_indices_init(void)
{
	if (!rtas_function_implemented(RTAS_FN_IBM_GET_INDICES))
		return -ENODEV;

	if (!rtas_function_implemented(RTAS_FN_IBM_SET_DYNAMIC_INDICATOR))
		return -ENODEV;

	if (!rtas_function_implemented(RTAS_FN_IBM_GET_DYNAMIC_SENSOR_STATE))
		return -ENODEV;

	return misc_register(&papr_indices_dev);
}
machine_device_initcall(pseries, papr_indices_init);
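The work-area layout that papr_dynamic_indice_buf_from_user() builds above (a 32-bit big-endian length that includes the NUL terminator, followed by the NUL-terminated location code string) can be sketched as a stand-alone user-space model. This is not part of the patch; the LOC_CODE_SIZE value here is an assumption for illustration only.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define LOC_CODE_SIZE 80	/* assumed limit, for illustration */

/*
 * Pack the work-area prefix: 32-bit big-endian length (including the
 * terminating NUL) followed by the NUL-terminated location code.
 * Returns the total bytes written, or -1 if the string is too long.
 */
static int pack_loc_code(uint8_t *dst, const char *loc)
{
	uint32_t len = (uint32_t)strlen(loc) + 1;	/* include NUL */

	if (len > LOC_CODE_SIZE)
		return -1;

	dst[0] = (uint8_t)(len >> 24);
	dst[1] = (uint8_t)(len >> 16);
	dst[2] = (uint8_t)(len >> 8);
	dst[3] = (uint8_t)len;
	memcpy(dst + 4, loc, len);
	return (int)(4 + len);
}
```

A hypothetical location code such as "U78D.001.ABC" (12 characters) packs into 4 length bytes plus 13 string bytes.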
+288
arch/powerpc/platforms/pseries/papr-phy-attest.c
// SPDX-License-Identifier: GPL-2.0-only

#define pr_fmt(fmt) "papr-phy-attest: " fmt

#include <linux/build_bug.h>
#include <linux/file.h>
#include <linux/fs.h>
#include <linux/init.h>
#include <linux/lockdep.h>
#include <linux/kernel.h>
#include <linux/miscdevice.h>
#include <linux/signal.h>
#include <linux/slab.h>
#include <linux/string.h>
#include <linux/string_helpers.h>
#include <linux/uaccess.h>
#include <asm/machdep.h>
#include <asm/rtas-work-area.h>
#include <asm/rtas.h>
#include <uapi/asm/papr-physical-attestation.h>
#include "papr-rtas-common.h"

/**
 * struct rtas_phy_attest_params - Parameters (in and out) for
 * ibm,physical-attestation.
 *
 * @cmd: In: Caller-provided attestation command buffer. Must be
 *       RTAS-addressable.
 * @work_area: In: Caller-provided work area buffer for the attestation
 *             command structure.
 *             Out: Caller-provided work area buffer for the response.
 * @cmd_len: In: Caller-provided attestation command structure length.
 * @sequence: In: Sequence number. Out: Next sequence number.
 * @written: Out: Bytes written by ibm,physical-attestation to
 *           @work_area.
 * @status: Out: RTAS call status.
 */
struct rtas_phy_attest_params {
	struct papr_phy_attest_io_block cmd;
	struct rtas_work_area *work_area;
	u32 cmd_len;
	u32 sequence;
	u32 written;
	s32 status;
};

/**
 * rtas_physical_attestation() - Call ibm,physical-attestation to
 * fill a work area buffer.
 * @params: See &struct rtas_phy_attest_params.
 *
 * Calls ibm,physical-attestation until it errors or successfully
 * deposits data into the supplied work area. Handles RTAS retry
 * statuses. Maps RTAS error statuses to reasonable errno values.
 *
 * The caller is expected to invoke rtas_physical_attestation()
 * multiple times to retrieve all the data for the provided
 * attestation command. Only one sequence should be in progress at
 * any time; starting a new sequence will disrupt any sequence
 * already in progress. Serialization of attestation retrieval
 * sequences is the responsibility of the caller.
 *
 * The caller should inspect @params.status to determine whether more
 * calls are needed to complete the sequence.
 *
 * Context: May sleep.
 * Return: -ve on error, 0 otherwise.
 */
static int rtas_physical_attestation(struct rtas_phy_attest_params *params)
{
	struct rtas_work_area *work_area;
	s32 fwrc, token;
	u32 rets[2];
	int ret;

	work_area = params->work_area;
	token = rtas_function_token(RTAS_FN_IBM_PHYSICAL_ATTESTATION);
	if (token == RTAS_UNKNOWN_SERVICE)
		return -ENOENT;

	lockdep_assert_held(&rtas_ibm_physical_attestation_lock);

	do {
		fwrc = rtas_call(token, 3, 3, rets,
				rtas_work_area_phys(work_area),
				params->cmd_len,
				params->sequence);
	} while (rtas_busy_delay(fwrc));

	switch (fwrc) {
	case RTAS_HARDWARE_ERROR:
		ret = -EIO;
		break;
	case RTAS_INVALID_PARAMETER:
		ret = -EINVAL;
		break;
	case RTAS_SEQ_MORE_DATA:
		params->sequence = rets[0];
		fallthrough;
	case RTAS_SEQ_COMPLETE:
		params->written = rets[1];
		/*
		 * Kernel or firmware bug, do not continue.
		 */
		if (WARN(params->written > rtas_work_area_size(work_area),
				"possible write beyond end of work area"))
			ret = -EFAULT;
		else
			ret = 0;
		break;
	default:
		ret = -EIO;
		pr_err_ratelimited("unexpected ibm,physical-attestation status %d\n", fwrc);
		break;
	}

	params->status = fwrc;
	return ret;
}

/*
 * Internal physical-attestation sequence APIs. A physical-attestation
 * sequence is a series of calls to ibm,physical-attestation for a
 * given attestation command. The sequence ends when an error is
 * encountered or all data for the attestation command has been
 * returned.
 */

/**
 * phy_attest_sequence_begin() - Begin a retrieval sequence for the
 * response data of an attestation command.
 * @seq: Sequence state with the caller-specified RTAS call parameters.
 *
 * Context: May sleep.
 */
static void phy_attest_sequence_begin(struct papr_rtas_sequence *seq)
{
	struct rtas_phy_attest_params *param;

	/*
	 * We could allocate the work area before acquiring the
	 * function lock, but that would allow concurrent requests to
	 * exhaust the limited work area pool for no benefit. So
	 * allocate the work area under the lock.
	 */
	mutex_lock(&rtas_ibm_physical_attestation_lock);
	param = (struct rtas_phy_attest_params *)seq->params;
	param->work_area = rtas_work_area_alloc(SZ_4K);
	memcpy(rtas_work_area_raw_buf(param->work_area), &param->cmd,
		param->cmd_len);
	param->sequence = 1;
	param->status = 0;
}

/**
 * phy_attest_sequence_end() - Finalize an attestation command
 * response retrieval sequence.
 * @seq: Sequence state.
 *
 * Releases resources obtained by phy_attest_sequence_begin().
 */
static void phy_attest_sequence_end(struct papr_rtas_sequence *seq)
{
	struct rtas_phy_attest_params *param;

	param = (struct rtas_phy_attest_params *)seq->params;
	rtas_work_area_free(param->work_area);
	mutex_unlock(&rtas_ibm_physical_attestation_lock);
	kfree(param);
}

/*
 * Generator function to be passed to papr_rtas_blob_generate().
 */
static const char *phy_attest_sequence_fill_work_area(struct papr_rtas_sequence *seq,
						size_t *len)
{
	struct rtas_phy_attest_params *p;
	bool init_state;

	p = (struct rtas_phy_attest_params *)seq->params;
	init_state = p->written == 0;

	if (papr_rtas_sequence_should_stop(seq, p->status, init_state))
		return NULL;
	if (papr_rtas_sequence_set_err(seq, rtas_physical_attestation(p)))
		return NULL;
	*len = p->written;
	return rtas_work_area_raw_buf(p->work_area);
}

static const struct file_operations papr_phy_attest_handle_ops = {
	.read = papr_rtas_common_handle_read,
	.llseek = papr_rtas_common_handle_seek,
	.release = papr_rtas_common_handle_release,
};

/**
 * papr_phy_attest_create_handle() - Create a fd-based handle for
 * reading the response for the given attestation command.
 * @ulc: Attestation command in user memory; defines the scope of
 * data for the attestation command to retrieve.
 *
 * Handler for the PAPR_PHYSICAL_ATTESTATION_IOC_CREATE_HANDLE ioctl
 * command. Validates @ulc and instantiates an immutable response
 * "blob" for the attestation command. The blob is attached to a file
 * descriptor for reading by user space. The memory backing the blob
 * is freed when the file is released.
 *
 * The entire response buffer for the attestation command is
 * retrieved by this call and all necessary RTAS interactions are
 * performed before returning the fd to user space. This keeps the
 * read handler simple and ensures that the kernel can prevent
 * interleaving of ibm,physical-attestation call sequences.
 *
 * Return: The installed fd number if successful, -ve errno otherwise.
 */
static long papr_phy_attest_create_handle(struct papr_phy_attest_io_block __user *ulc)
{
	struct rtas_phy_attest_params *params;
	struct papr_rtas_sequence seq = {};
	int fd;

	/*
	 * Freed in phy_attest_sequence_end().
	 */
	params = kzalloc(sizeof(*params), GFP_KERNEL_ACCOUNT);
	if (!params)
		return -ENOMEM;

	if (copy_from_user(&params->cmd, ulc,
			sizeof(struct papr_phy_attest_io_block))) {
		kfree(params);
		return -EFAULT;
	}

	params->cmd_len = be32_to_cpu(params->cmd.length);
	seq = (struct papr_rtas_sequence) {
		.begin = phy_attest_sequence_begin,
		.end = phy_attest_sequence_end,
		.work = phy_attest_sequence_fill_work_area,
	};

	seq.params = (void *)params;

	fd = papr_rtas_setup_file_interface(&seq,
			&papr_phy_attest_handle_ops,
			"[papr-physical-attestation]");

	return fd;
}

/*
 * Top-level ioctl handler for /dev/papr-physical-attestation.
 */
static long papr_phy_attest_dev_ioctl(struct file *filp, unsigned int ioctl, unsigned long arg)
{
	void __user *argp = (__force void __user *)arg;
	long ret;

	switch (ioctl) {
	case PAPR_PHY_ATTEST_IOC_HANDLE:
		ret = papr_phy_attest_create_handle(argp);
		break;
	default:
		ret = -ENOIOCTLCMD;
		break;
	}
	return ret;
}

static const struct file_operations papr_phy_attest_ops = {
	.unlocked_ioctl = papr_phy_attest_dev_ioctl,
};

static struct miscdevice papr_phy_attest_dev = {
	.minor = MISC_DYNAMIC_MINOR,
	.name = "papr-physical-attestation",
	.fops = &papr_phy_attest_ops,
};

static __init int papr_phy_attest_init(void)
{
	if (!rtas_function_implemented(RTAS_FN_IBM_PHYSICAL_ATTESTATION))
		return -ENODEV;

	return misc_register(&papr_phy_attest_dev);
}
machine_device_initcall(pseries, papr_phy_attest_init);
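The sequence protocol that rtas_physical_attestation() implements above (call repeatedly, feed the returned next-sequence number back in, stop on a "complete" status) can be modeled in plain user-space C against a fake firmware. This sketch is not part of the patch; the chunk size and status constants are illustrative stand-ins, not the PAPR values.

```c
#include <assert.h>
#include <string.h>

#define SEQ_COMPLETE	0	/* illustrative status values */
#define SEQ_MORE_DATA	1
#define CHUNK		4	/* illustrative work-area size */

/*
 * Fake ibm,physical-attestation: copies the next chunk of `src` into
 * `work`, advances *seq, and reports the bytes written plus a
 * more-data/complete status.
 */
static int fake_attest_call(const char *src, int src_len, int *seq,
			    char *work, int *written)
{
	int off = (*seq - 1) * CHUNK;	/* sequence numbers start at 1 */
	int n = src_len - off;

	if (n > CHUNK)
		n = CHUNK;
	memcpy(work, src + off, n);
	*written = n;
	*seq += 1;
	return (off + n < src_len) ? SEQ_MORE_DATA : SEQ_COMPLETE;
}

/* Drive the sequence to completion, accumulating into `out`. */
static int retrieve_all(const char *src, int src_len, char *out)
{
	char work[CHUNK];
	int seq = 1, total = 0, written, status;

	do {
		status = fake_attest_call(src, src_len, &seq, work, &written);
		memcpy(out + total, work, written);
		total += written;
	} while (status == SEQ_MORE_DATA);
	return total;
}
```

The driver's blob-generate loop plays the role of retrieve_all() here, with the real RTAS call in place of fake_attest_call().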
+411
arch/powerpc/platforms/pseries/papr-platform-dump.c
// SPDX-License-Identifier: GPL-2.0-only

#define pr_fmt(fmt) "papr-platform-dump: " fmt

#include <linux/anon_inodes.h>
#include <linux/file.h>
#include <linux/fs.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/miscdevice.h>
#include <asm/machdep.h>
#include <asm/rtas-work-area.h>
#include <asm/rtas.h>
#include <uapi/asm/papr-platform-dump.h>

/*
 * Function-specific return values for ibm,platform-dump, derived from
 * PAPR+ v2.13 7.3.3.4.1 "ibm,platform-dump RTAS Call".
 */
#define RTAS_IBM_PLATFORM_DUMP_COMPLETE	0	/* Complete dump retrieved. */
#define RTAS_IBM_PLATFORM_DUMP_CONTINUE	1	/* Continue dump */
#define RTAS_NOT_AUTHORIZED		-9002	/* Not Authorized */

#define RTAS_IBM_PLATFORM_DUMP_START	2	/* Linux status to start dump */

/**
 * struct ibm_platform_dump_params - Parameters (in and out) for
 * ibm,platform-dump.
 * @work_area: In: work area buffer for results.
 * @buf_length: In: work area buffer length in bytes.
 * @dump_tag_hi: In: Most-significant 32 bits of a Dump_Tag representing
 *               an ID of the dump being processed.
 * @dump_tag_lo: In: Least-significant 32 bits of a Dump_Tag representing
 *               an ID of the dump being processed.
 * @sequence_hi: In: Sequence number in most-significant 32 bits.
 *               Out: Next sequence number in most-significant 32 bits.
 * @sequence_lo: In: Sequence number in least-significant 32 bits.
 *               Out: Next sequence number in least-significant 32 bits.
 * @bytes_ret_hi: Out: Bytes written in most-significant 32 bits.
 * @bytes_ret_lo: Out: Bytes written in least-significant 32 bits.
 * @status: Out: RTAS call status.
 * @list: Maintains the list of dumps that are in progress. Multiple
 *        dumps with different dump IDs can be retrieved at the same
 *        time, but not with the same dump ID. This list is used to
 *        determine whether a dump for the same ID is in progress.
 */
struct ibm_platform_dump_params {
	struct rtas_work_area *work_area;
	u32 buf_length;
	u32 dump_tag_hi;
	u32 dump_tag_lo;
	u32 sequence_hi;
	u32 sequence_lo;
	u32 bytes_ret_hi;
	u32 bytes_ret_lo;
	s32 status;
	struct list_head list;
};

/*
 * Multiple dumps with different dump IDs can be retrieved at the same
 * time, but not with the same dump ID. platform_dump_list_mutex and
 * platform_dump_list are used to prevent this behavior.
 */
static DEFINE_MUTEX(platform_dump_list_mutex);
static LIST_HEAD(platform_dump_list);

/**
 * rtas_ibm_platform_dump() - Call ibm,platform-dump to fill a work area
 * buffer.
 * @params: See &struct ibm_platform_dump_params.
 * @buf_addr: Address of the dump buffer (work_area).
 * @buf_length: Length of the buffer in bytes (min. 1024).
 *
 * Calls ibm,platform-dump until it errors or successfully deposits data
 * into the supplied work area. Handles RTAS retry statuses. Maps RTAS
 * error statuses to reasonable errno values.
 *
 * Multiple dumps with different dump IDs can be requested at the same
 * time, but not with the same dump ID, which is prevented by the check
 * in the ioctl code (papr_platform_dump_create_handle()).
 *
 * The caller should inspect @params.status to determine whether more
 * calls are needed to complete the sequence.
 *
 * Context: May sleep.
 * Return: -ve on error, 0 otherwise; @params.status holds the dump
 * complete/continue status.
 */
static int rtas_ibm_platform_dump(struct ibm_platform_dump_params *params,
				phys_addr_t buf_addr, u32 buf_length)
{
	u32 rets[4];
	s32 fwrc;
	int ret = 0;

	do {
		fwrc = rtas_call(rtas_function_token(RTAS_FN_IBM_PLATFORM_DUMP),
				6, 5,
				rets,
				params->dump_tag_hi,
				params->dump_tag_lo,
				params->sequence_hi,
				params->sequence_lo,
				buf_addr,
				buf_length);
	} while (rtas_busy_delay(fwrc));

	switch (fwrc) {
	case RTAS_HARDWARE_ERROR:
		ret = -EIO;
		break;
	case RTAS_NOT_AUTHORIZED:
		ret = -EPERM;
		break;
	case RTAS_IBM_PLATFORM_DUMP_CONTINUE:
	case RTAS_IBM_PLATFORM_DUMP_COMPLETE:
		params->sequence_hi = rets[0];
		params->sequence_lo = rets[1];
		params->bytes_ret_hi = rets[2];
		params->bytes_ret_lo = rets[3];
		break;
	default:
		ret = -EIO;
		pr_err_ratelimited("unexpected ibm,platform-dump status %d\n",
				fwrc);
		break;
	}

	params->status = fwrc;
	return ret;
}

/*
 * Platform dump retrieval uses multiple RTAS calls to obtain the
 * complete dump for the provided dump ID. Once the complete dump is
 * retrieved, the hypervisor returns dump complete status (0) for the
 * last RTAS call and expects the caller to issue one more call with a
 * NULL buffer to invalidate the dump so that the hypervisor can
 * remove it.
 *
 * After the specific dump is invalidated in the hypervisor, expect the
 * dump complete status for the new sequence when user space initiates
 * a new request for the same dump ID.
 */
static ssize_t papr_platform_dump_handle_read(struct file *file,
		char __user *buf, size_t size, loff_t *off)
{
	struct ibm_platform_dump_params *params = file->private_data;
	u64 total_bytes;
	s32 fwrc;

	/*
	 * The dump already completed with the previous read calls.
	 * If user space issues further reads, return -EINVAL.
	 */
	if (!params->buf_length) {
		pr_warn_once("Platform dump completed for dump ID %llu\n",
			(u64) (((u64)params->dump_tag_hi << 32) |
				params->dump_tag_lo));
		return -EINVAL;
	}

	/*
	 * The hypervisor returns status 0 if no more data is available
	 * to download. The dump will be invalidated with an ioctl (see
	 * below).
	 */
	if (params->status == RTAS_IBM_PLATFORM_DUMP_COMPLETE) {
		params->buf_length = 0;
		/*
		 * Return 0 to user space so that the user space
		 * read stops.
		 */
		return 0;
	}

	if (size < SZ_1K) {
		pr_err_once("Buffer length should be minimum 1024 bytes\n");
		return -EINVAL;
	} else if (size > params->buf_length) {
		/*
		 * A 4K work area is allocated, so if the user requests
		 * more, cap the size at the buffer length.
		 */
		size = params->buf_length;
	}

	fwrc = rtas_ibm_platform_dump(params,
			rtas_work_area_phys(params->work_area),
			size);
	if (fwrc < 0)
		return fwrc;

	total_bytes = (u64) (((u64)params->bytes_ret_hi << 32) |
			params->bytes_ret_lo);

	/*
	 * Kernel or firmware bug, do not continue.
	 */
	if (WARN(total_bytes > size, "possible write beyond end of work area"))
		return -EFAULT;

	if (copy_to_user(buf, rtas_work_area_raw_buf(params->work_area),
			total_bytes))
		return -EFAULT;

	return total_bytes;
}

static int papr_platform_dump_handle_release(struct inode *inode,
					struct file *file)
{
	struct ibm_platform_dump_params *params = file->private_data;

	if (params->work_area)
		rtas_work_area_free(params->work_area);

	mutex_lock(&platform_dump_list_mutex);
	list_del(&params->list);
	mutex_unlock(&platform_dump_list_mutex);

	kfree(params);
	file->private_data = NULL;
	return 0;
}

/*
 * This ioctl is used to invalidate the dump, assuming user space
 * issues it after obtaining the complete dump.
 * Issue the last RTAS call with a NULL buffer to invalidate the dump,
 * which means the dump will be freed in the hypervisor.
 */
static long papr_platform_dump_invalidate_ioctl(struct file *file,
		unsigned int ioctl, unsigned long arg)
{
	struct ibm_platform_dump_params *params;
	u64 __user *argp = (void __user *)arg;
	u64 param_dump_tag, dump_tag;

	if (ioctl != PAPR_PLATFORM_DUMP_IOC_INVALIDATE)
		return -ENOIOCTLCMD;

	if (get_user(dump_tag, argp))
		return -EFAULT;

	/*
	 * private_data is freed during release(), so this should
	 * not happen.
	 */
	if (!file->private_data) {
		pr_err("No valid FD to invalidate dump for the ID(%llu)\n",
			dump_tag);
		return -EINVAL;
	}

	params = file->private_data;
	param_dump_tag = (u64) (((u64)params->dump_tag_hi << 32) |
				params->dump_tag_lo);
	if (dump_tag != param_dump_tag) {
		pr_err("Invalid dump ID(%llu) to invalidate dump\n",
			dump_tag);
		return -EINVAL;
	}

	if (params->status != RTAS_IBM_PLATFORM_DUMP_COMPLETE) {
		pr_err("Platform dump is not complete, but requested to invalidate dump for ID(%llu)\n",
			dump_tag);
		return -EINPROGRESS;
	}

	return rtas_ibm_platform_dump(params, 0, 0);
}

static const struct file_operations papr_platform_dump_handle_ops = {
	.read = papr_platform_dump_handle_read,
	.release = papr_platform_dump_handle_release,
	.unlocked_ioctl = papr_platform_dump_invalidate_ioctl,
};

/**
 * papr_platform_dump_create_handle() - Create a fd-based handle for
 * reading a platform dump.
 * @dump_tag: Dump ID for the dump requested to be retrieved from the
 *            hypervisor.
 *
 * Handler for the PAPR_PLATFORM_DUMP_IOC_CREATE_HANDLE ioctl command.
 * Allocates the RTAS parameter struct and work area, and attaches them
 * to the file descriptor for reading by user space with multiple RTAS
 * calls until the dump is completed. These allocations are freed when
 * the file is released.
 *
 * Multiple dump requests with different IDs are allowed at the same
 * time, but not with the same dump ID. So if user space has already
 * opened a file descriptor for the specific dump ID, return -EALREADY
 * for the next request.
 *
 * Return: The installed fd number if successful, -ve errno otherwise.
 */
static long papr_platform_dump_create_handle(u64 dump_tag)
{
	struct ibm_platform_dump_params *params;
	u64 param_dump_tag;
	struct file *file;
	long err;
	int fd;

	/*
	 * Return failure if user space has already opened a FD for
	 * the specific dump ID. This check prevents multiple dump
	 * requests for the same dump ID at the same time. Generally
	 * this should not happen, but guard against it.
	 */
	list_for_each_entry(params, &platform_dump_list, list) {
		param_dump_tag = (u64) (((u64)params->dump_tag_hi << 32) |
					params->dump_tag_lo);
		if (dump_tag == param_dump_tag) {
			pr_err("Platform dump for ID(%llu) is already in progress\n",
				dump_tag);
			return -EALREADY;
		}
	}

	params = kzalloc(sizeof(struct ibm_platform_dump_params),
			GFP_KERNEL_ACCOUNT);
	if (!params)
		return -ENOMEM;

	params->work_area = rtas_work_area_alloc(SZ_4K);
	params->buf_length = SZ_4K;
	params->dump_tag_hi = (u32)(dump_tag >> 32);
	params->dump_tag_lo = (u32)(dump_tag & 0x00000000ffffffffULL);
	params->status = RTAS_IBM_PLATFORM_DUMP_START;

	fd = get_unused_fd_flags(O_RDONLY | O_CLOEXEC);
	if (fd < 0) {
		err = fd;
		goto free_area;
	}

	file = anon_inode_getfile_fmode("[papr-platform-dump]",
			&papr_platform_dump_handle_ops,
			(void *)params, O_RDONLY,
			FMODE_LSEEK | FMODE_PREAD);
	if (IS_ERR(file)) {
		err = PTR_ERR(file);
		goto put_fd;
	}

	fd_install(fd, file);

	list_add(&params->list, &platform_dump_list);

	pr_info("%s (%d) initiated platform dump for dump tag %llu\n",
		current->comm, current->pid, dump_tag);
	return fd;
put_fd:
	put_unused_fd(fd);
free_area:
	rtas_work_area_free(params->work_area);
	kfree(params);
	return err;
}

/*
 * Top-level ioctl handler for /dev/papr-platform-dump.
 */
static long papr_platform_dump_dev_ioctl(struct file *filp,
		unsigned int ioctl,
		unsigned long arg)
{
	u64 __user *argp = (void __user *)arg;
	u64 dump_tag;
	long ret;

	if (get_user(dump_tag, argp))
		return -EFAULT;

	switch (ioctl) {
	case PAPR_PLATFORM_DUMP_IOC_CREATE_HANDLE:
		mutex_lock(&platform_dump_list_mutex);
		ret = papr_platform_dump_create_handle(dump_tag);
		mutex_unlock(&platform_dump_list_mutex);
		break;
	default:
		ret = -ENOIOCTLCMD;
		break;
	}
	return ret;
}

static const struct file_operations papr_platform_dump_ops = {
	.unlocked_ioctl = papr_platform_dump_dev_ioctl,
};

static struct miscdevice papr_platform_dump_dev = {
	.minor = MISC_DYNAMIC_MINOR,
	.name = "papr-platform-dump",
	.fops = &papr_platform_dump_ops,
};

static __init int papr_platform_dump_init(void)
{
	if (!rtas_function_implemented(RTAS_FN_IBM_PLATFORM_DUMP))
		return -ENODEV;

	return misc_register(&papr_platform_dump_dev);
}
machine_device_initcall(pseries, papr_platform_dump_init);
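papr_platform_dump_create_handle() splits the 64-bit Dump_Tag into the hi/lo 32-bit halves that ibm,platform-dump takes, and the read/invalidate paths recombine them for logging and comparison. The round trip is plain shift-and-mask arithmetic, sketched here as a stand-alone model (not part of the patch):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Split a 64-bit Dump_Tag into the hi/lo 32-bit halves passed to
 * ibm,platform-dump, mirroring the driver's dump_tag_hi/dump_tag_lo
 * assignments.
 */
static void dump_tag_split(uint64_t tag, uint32_t *hi, uint32_t *lo)
{
	*hi = (uint32_t)(tag >> 32);
	*lo = (uint32_t)(tag & 0xffffffffULL);
}

/* Recombine the halves as the driver does for comparison/logging. */
static uint64_t dump_tag_join(uint32_t hi, uint32_t lo)
{
	return ((uint64_t)hi << 32) | lo;
}
```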
+311
arch/powerpc/platforms/pseries/papr-rtas-common.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + 3 + #define pr_fmt(fmt) "papr-common: " fmt 4 + 5 + #include <linux/types.h> 6 + #include <linux/kernel.h> 7 + #include <linux/signal.h> 8 + #include <linux/slab.h> 9 + #include <linux/file.h> 10 + #include <linux/fs.h> 11 + #include <linux/anon_inodes.h> 12 + #include <linux/sched/signal.h> 13 + #include "papr-rtas-common.h" 14 + 15 + /* 16 + * Sequence based RTAS HCALL has to issue multiple times to retrieve 17 + * complete data from the hypervisor. For some of these RTAS calls, 18 + * the OS should not interleave calls with different input until the 19 + * sequence is completed. So data is collected for these calls during 20 + * ioctl handle and export to user space with read() handle. 21 + * This file provides common functions needed for such sequence based 22 + * RTAS calls Ex: ibm,get-vpd and ibm,get-indices. 23 + */ 24 + 25 + bool papr_rtas_blob_has_data(const struct papr_rtas_blob *blob) 26 + { 27 + return blob->data && blob->len; 28 + } 29 + 30 + void papr_rtas_blob_free(const struct papr_rtas_blob *blob) 31 + { 32 + if (blob) { 33 + kvfree(blob->data); 34 + kfree(blob); 35 + } 36 + } 37 + 38 + /** 39 + * papr_rtas_blob_extend() - Append data to a &struct papr_rtas_blob. 40 + * @blob: The blob to extend. 41 + * @data: The new data to append to @blob. 42 + * @len: The length of @data. 43 + * 44 + * Context: May sleep. 45 + * Return: -ENOMEM on allocation failure, 0 otherwise. 
46 + */ 47 + static int papr_rtas_blob_extend(struct papr_rtas_blob *blob, 48 + const char *data, size_t len) 49 + { 50 + const size_t new_len = blob->len + len; 51 + const size_t old_len = blob->len; 52 + const char *old_ptr = blob->data; 53 + char *new_ptr; 54 + 55 + new_ptr = kvrealloc(old_ptr, new_len, GFP_KERNEL_ACCOUNT); 56 + if (!new_ptr) 57 + return -ENOMEM; 58 + 59 + memcpy(&new_ptr[old_len], data, len); 60 + blob->data = new_ptr; 61 + blob->len = new_len; 62 + return 0; 63 + } 64 + 65 + /** 66 + * papr_rtas_blob_generate() - Construct a new &struct papr_rtas_blob. 67 + * @seq: work function of the caller that is called to obtain 68 + * data with the caller RTAS call. 69 + * 70 + * The @work callback is invoked until it returns NULL. @seq is 71 + * passed to @work in its first argument on each call. When 72 + * @work returns data, it should store the data length in its 73 + * second argument. 74 + * 75 + * Context: May sleep. 76 + * Return: A completely populated &struct papr_rtas_blob, or NULL on error. 77 + */ 78 + static const struct papr_rtas_blob * 79 + papr_rtas_blob_generate(struct papr_rtas_sequence *seq) 80 + { 81 + struct papr_rtas_blob *blob; 82 + const char *buf; 83 + size_t len; 84 + int err = 0; 85 + 86 + blob = kzalloc(sizeof(*blob), GFP_KERNEL_ACCOUNT); 87 + if (!blob) 88 + return NULL; 89 + 90 + if (!seq->work) 91 + return ERR_PTR(-EINVAL); 92 + 93 + 94 + while (err == 0 && (buf = seq->work(seq, &len))) 95 + err = papr_rtas_blob_extend(blob, buf, len); 96 + 97 + if (err != 0 || !papr_rtas_blob_has_data(blob)) 98 + goto free_blob; 99 + 100 + return blob; 101 + free_blob: 102 + papr_rtas_blob_free(blob); 103 + return NULL; 104 + } 105 + 106 + int papr_rtas_sequence_set_err(struct papr_rtas_sequence *seq, int err) 107 + { 108 + /* Preserve the first error recorded. */ 109 + if (seq->error == 0) 110 + seq->error = err; 111 + 112 + return seq->error; 113 + } 114 + 115 + /* 116 + * Higher-level retrieval code below. 
These functions use the 117 + * papr_rtas_blob_* and sequence_* APIs defined above to create fd-based 118 + * handles for consumption by user space. 119 + */ 120 + 121 + /** 122 + * papr_rtas_run_sequence() - Run a single retrieval sequence. 123 + * @seq: The caller's sequence callbacks and state. 124 + * 125 + * Context: May sleep. Holds a mutex and an RTAS work area for its 126 + * duration. Typically performs multiple sleepable slab 127 + * allocations. 128 + * 129 + * Return: A populated &struct papr_rtas_blob on success. Encoded error 130 + * pointer otherwise. 131 + */ 132 + static const struct papr_rtas_blob *papr_rtas_run_sequence(struct papr_rtas_sequence *seq) 133 + { 134 + const struct papr_rtas_blob *blob; 135 + 136 + if (seq->begin) 137 + seq->begin(seq); 138 + 139 + blob = papr_rtas_blob_generate(seq); 140 + if (!blob) 141 + papr_rtas_sequence_set_err(seq, -ENOMEM); 142 + 143 + if (seq->end) 144 + seq->end(seq); 145 + 146 + 147 + if (seq->error) { 148 + papr_rtas_blob_free(blob); 149 + return ERR_PTR(seq->error); 150 + } 151 + 152 + return blob; 153 + } 154 + 155 + /** 156 + * papr_rtas_retrieve() - Return the data blob that is exposed to 157 + * user space. 158 + * @seq: RTAS call specific functions to be invoked until the 159 + * sequence is completed. 160 + * 161 + * Run sequences against @seq until a blob is successfully 162 + * instantiated, or a hard error is encountered, or a fatal signal is 163 + * pending. 164 + * 165 + * Context: May sleep. 166 + * Return: A fully populated data blob when successful. Encoded error 167 + * pointer otherwise. 168 + */ 169 + const struct papr_rtas_blob *papr_rtas_retrieve(struct papr_rtas_sequence *seq) 170 + { 171 + const struct papr_rtas_blob *blob; 172 + 173 + /* 174 + * EAGAIN means the sequence ended with a -4 (data changed, 175 + * restart the sequence) status from the RTAS call, and we 176 + * should attempt a new sequence.
PAPR+ (v2.13 R1–7.3.20–5 177 + * - ibm,get-vpd, R1–7.3.17–6 - ibm,get-indices) indicates that 178 + * this should be a transient condition, not something that 179 + * happens continuously. But we'll stop trying on a fatal signal. 180 + */ 181 + do { 182 + blob = papr_rtas_run_sequence(seq); 183 + if (!IS_ERR(blob)) /* Success. */ 184 + break; 185 + if (PTR_ERR(blob) != -EAGAIN) /* Hard error. */ 186 + break; 187 + cond_resched(); 188 + } while (!fatal_signal_pending(current)); 189 + 190 + return blob; 191 + } 192 + 193 + /** 194 + * papr_rtas_setup_file_interface() - Complete the sequence, retrieve the 195 + * data, and export it to user space through an fd-based handle. User 196 + * space then reads the data with the read() handler. 197 + * @seq: RTAS call specific functions to get the data. 198 + * @fops: RTAS call specific file operations such as read(). 199 + * @name: RTAS call specific name for the fd's anon inode. 200 + * 201 + * Return: FD for consumption by user space on success, -ve errno otherwise. 202 + */ 203 + long papr_rtas_setup_file_interface(struct papr_rtas_sequence *seq, 204 + const struct file_operations *fops, 205 + char *name) 206 + { 207 + const struct papr_rtas_blob *blob; 208 + struct file *file; 209 + long ret; 210 + int fd; 211 + 212 + blob = papr_rtas_retrieve(seq); 213 + if (IS_ERR(blob)) 214 + return PTR_ERR(blob); 215 + 216 + fd = get_unused_fd_flags(O_RDONLY | O_CLOEXEC); 217 + if (fd < 0) { 218 + ret = fd; 219 + goto free_blob; 220 + } 221 + 222 + file = anon_inode_getfile_fmode(name, fops, (void *)blob, 223 + O_RDONLY, FMODE_LSEEK | FMODE_PREAD); 224 + if (IS_ERR(file)) { 225 + ret = PTR_ERR(file); 226 + goto put_fd; 227 + } 228 + 229 + fd_install(fd, file); 230 + return fd; 231 + 232 + put_fd: 233 + put_unused_fd(fd); 234 + free_blob: 235 + papr_rtas_blob_free(blob); 236 + return ret; 237 + } 238 + 239 + /** 240 + * papr_rtas_sequence_should_stop() - Determine whether the RTAS 241 + * retrieval sequence should stop.
242 + 243 + * Examines the sequence error state and the outputs of the last 244 + * invocation of the RTAS call to determine whether the sequence in 245 + * progress should continue or stop. 246 + * 247 + * Return: True if the sequence has encountered an error or if all data 248 + * for this sequence has been retrieved. False otherwise. 249 + */ 250 + bool papr_rtas_sequence_should_stop(const struct papr_rtas_sequence *seq, 251 + s32 status, bool init_state) 252 + { 253 + bool done; 254 + 255 + if (seq->error) 256 + return true; 257 + 258 + switch (status) { 259 + case RTAS_SEQ_COMPLETE: 260 + if (init_state) 261 + done = false; /* Initial state. */ 262 + else 263 + done = true; /* All data consumed. */ 264 + break; 265 + case RTAS_SEQ_MORE_DATA: 266 + done = false; /* More data available. */ 267 + break; 268 + default: 269 + done = true; /* Error encountered. */ 270 + break; 271 + } 272 + 273 + return done; 274 + } 275 + 276 + /* 277 + * User space read handler to retrieve data for the corresponding RTAS 278 + * call. papr_rtas_blob is filled with the data using the corresponding 279 + * RTAS call sequence API. 280 + */ 281 + ssize_t papr_rtas_common_handle_read(struct file *file, 282 + char __user *buf, size_t size, loff_t *off) 283 + { 284 + const struct papr_rtas_blob *blob = file->private_data; 285 + 286 + /* We should not instantiate a handle without any data attached.
*/ 287 + if (!papr_rtas_blob_has_data(blob)) { 288 + pr_err_once("handle without data\n"); 289 + return -EIO; 290 + } 291 + 292 + return simple_read_from_buffer(buf, size, off, blob->data, blob->len); 293 + } 294 + 295 + int papr_rtas_common_handle_release(struct inode *inode, 296 + struct file *file) 297 + { 298 + const struct papr_rtas_blob *blob = file->private_data; 299 + 300 + papr_rtas_blob_free(blob); 301 + 302 + return 0; 303 + } 304 + 305 + loff_t papr_rtas_common_handle_seek(struct file *file, loff_t off, 306 + int whence) 307 + { 308 + const struct papr_rtas_blob *blob = file->private_data; 309 + 310 + return fixed_size_llseek(file, off, whence, blob->len); 311 + }
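The stop/continue decision implemented by papr_rtas_sequence_should_stop() is a small state machine over the RTAS status and the "initial state" flag. As a rough userspace sketch (plain C; `should_stop` here is an illustrative model, not the kernel function):

```c
#include <assert.h>
#include <stdbool.h>

#define RTAS_SEQ_COMPLETE   0   /* All data has been retrieved. */
#define RTAS_SEQ_MORE_DATA  1   /* More data is available. */

/* Model of the decision: stop on a recorded error, on an RTAS error
 * status, or once COMPLETE is seen after the initial state. */
bool should_stop(int error, int status, bool init_state)
{
	if (error)
		return true;

	switch (status) {
	case RTAS_SEQ_COMPLETE:
		return !init_state;	/* initial state: keep going */
	case RTAS_SEQ_MORE_DATA:
		return false;		/* more data available */
	default:
		return true;		/* any other status is an error */
	}
}
```

Note that a COMPLETE status in the initial state (nothing written yet) does not stop the sequence, which is why the callers pass `written == 0` as `init_state`.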
+61
arch/powerpc/platforms/pseries/papr-rtas-common.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0-only */ 2 + #ifndef _ASM_POWERPC_PAPR_RTAS_COMMON_H 3 + #define _ASM_POWERPC_PAPR_RTAS_COMMON_H 4 + 5 + #include <linux/types.h> 6 + 7 + /* 8 + * Return codes for sequence-based RTAS calls. These are not listed 9 + * under PAPR+ v2.13 7.2.8 ("Return Codes") but are defined in the 10 + * specific section of each RTAS call. 11 + */ 12 + #define RTAS_SEQ_COMPLETE 0 /* All data has been retrieved. */ 13 + #define RTAS_SEQ_MORE_DATA 1 /* More data is available. */ 14 + #define RTAS_SEQ_START_OVER -4 /* Data changed, restart call sequence. */ 15 + 16 + /* 17 + * Internal "blob" APIs for accumulating RTAS call results into 18 + * an immutable buffer to be attached to a file descriptor. 19 + */ 20 + struct papr_rtas_blob { 21 + const char *data; 22 + size_t len; 23 + }; 24 + 25 + /** 26 + * struct papr_rtas_sequence - State for managing a sequence of RTAS calls. 27 + * @error: Shall be zero as long as the sequence has not encountered an error, 28 + * -ve errno otherwise. Use papr_rtas_sequence_set_err() to update. 29 + * @params: Parameter block to pass to rtas_*() calls. 30 + * @begin: Allocate the work area and initialize the parameter values 31 + * passed to the RTAS call. 32 + * @end: Free the allocated work area. 33 + * @work: Obtain data with the RTAS call; invoked repeatedly until the 34 + * sequence is completed.
35 + * 36 + */ 37 + struct papr_rtas_sequence { 38 + int error; 39 + void *params; 40 + void (*begin)(struct papr_rtas_sequence *seq); 41 + void (*end)(struct papr_rtas_sequence *seq); 42 + const char *(*work)(struct papr_rtas_sequence *seq, size_t *len); 43 + }; 44 + 45 + extern bool papr_rtas_blob_has_data(const struct papr_rtas_blob *blob); 46 + extern void papr_rtas_blob_free(const struct papr_rtas_blob *blob); 47 + extern int papr_rtas_sequence_set_err(struct papr_rtas_sequence *seq, 48 + int err); 49 + extern const struct papr_rtas_blob *papr_rtas_retrieve(struct papr_rtas_sequence *seq); 50 + extern long papr_rtas_setup_file_interface(struct papr_rtas_sequence *seq, 51 + const struct file_operations *fops, char *name); 52 + extern bool papr_rtas_sequence_should_stop(const struct papr_rtas_sequence *seq, 53 + s32 status, bool init_state); 54 + extern ssize_t papr_rtas_common_handle_read(struct file *file, 55 + char __user *buf, size_t size, loff_t *off); 56 + extern int papr_rtas_common_handle_release(struct inode *inode, 57 + struct file *file); 58 + extern loff_t papr_rtas_common_handle_seek(struct file *file, loff_t off, 59 + int whence); 60 + #endif /* _ASM_POWERPC_PAPR_RTAS_COMMON_H */ 61 +
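The begin/work/end contract that struct papr_rtas_sequence defines can be exercised in a small userspace sketch. Everything below (`struct sequence`, `run_sequence`, the toy worker) is an illustrative stand-in for the kernel types, modeling how papr_rtas_run_sequence() drives the hooks and accumulates the chunks the @work callback yields:

```c
#include <stddef.h>
#include <string.h>

/* Userspace model of struct papr_rtas_sequence: optional begin/end
 * hooks around a work callback that yields data chunks until NULL. */
struct sequence {
	int error;
	void *params;
	void (*begin)(struct sequence *seq);
	void (*end)(struct sequence *seq);
	const char *(*work)(struct sequence *seq, size_t *len);
};

/* Drive the hooks the way papr_rtas_run_sequence() does, appending
 * each chunk to @out (a caller-sized buffer); returns total length. */
size_t run_sequence(struct sequence *seq, char *out)
{
	const char *buf;
	size_t len = 0, total = 0;

	if (seq->begin)
		seq->begin(seq);
	while (!seq->error && (buf = seq->work(seq, &len))) {
		memcpy(out + total, buf, len);
		total += len;
	}
	if (seq->end)
		seq->end(seq);
	return total;
}

/* Toy worker: yields "ab" then "cd", then signals completion. */
const char *two_chunks(struct sequence *seq, size_t *len)
{
	int *step = seq->params;
	static const char *const chunks[] = { "ab", "cd" };

	if (*step >= 2)
		return NULL;	/* sequence complete */
	*len = 2;
	return chunks[(*step)++];
}
```

In the kernel the accumulation goes through kvrealloc() into a papr_rtas_blob rather than a fixed buffer, but the control flow is the same.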
+45 -307
arch/powerpc/platforms/pseries/papr-vpd.c
··· 2 2 3 3 #define pr_fmt(fmt) "papr-vpd: " fmt 4 4 5 - #include <linux/anon_inodes.h> 6 5 #include <linux/build_bug.h> 7 6 #include <linux/file.h> 8 7 #include <linux/fs.h> ··· 19 20 #include <asm/rtas-work-area.h> 20 21 #include <asm/rtas.h> 21 22 #include <uapi/asm/papr-vpd.h> 22 - 23 - /* 24 - * Function-specific return values for ibm,get-vpd, derived from PAPR+ 25 - * v2.13 7.3.20 "ibm,get-vpd RTAS Call". 26 - */ 27 - #define RTAS_IBM_GET_VPD_COMPLETE 0 /* All VPD has been retrieved. */ 28 - #define RTAS_IBM_GET_VPD_MORE_DATA 1 /* More VPD is available. */ 29 - #define RTAS_IBM_GET_VPD_START_OVER -4 /* VPD changed, restart call sequence. */ 23 + #include "papr-rtas-common.h" 30 24 31 25 /** 32 26 * struct rtas_ibm_get_vpd_params - Parameters (in and out) for ibm,get-vpd. ··· 83 91 case RTAS_INVALID_PARAMETER: 84 92 ret = -EINVAL; 85 93 break; 86 - case RTAS_IBM_GET_VPD_START_OVER: 94 + case RTAS_SEQ_START_OVER: 87 95 ret = -EAGAIN; 96 + pr_info_ratelimited("VPD changed during retrieval, retrying\n"); 88 97 break; 89 - case RTAS_IBM_GET_VPD_MORE_DATA: 98 + case RTAS_SEQ_MORE_DATA: 90 99 params->sequence = rets[0]; 91 100 fallthrough; 92 - case RTAS_IBM_GET_VPD_COMPLETE: 101 + case RTAS_SEQ_COMPLETE: 93 102 params->written = rets[1]; 94 103 /* 95 104 * Kernel or firmware bug, do not continue. ··· 112 119 } 113 120 114 121 /* 115 - * Internal VPD "blob" APIs for accumulating ibm,get-vpd results into 116 - * an immutable buffer to be attached to a file descriptor. 117 - */ 118 - struct vpd_blob { 119 - const char *data; 120 - size_t len; 121 - }; 122 - 123 - static bool vpd_blob_has_data(const struct vpd_blob *blob) 124 - { 125 - return blob->data && blob->len; 126 - } 127 - 128 - static void vpd_blob_free(const struct vpd_blob *blob) 129 - { 130 - if (blob) { 131 - kvfree(blob->data); 132 - kfree(blob); 133 - } 134 - } 135 - 136 - /** 137 - * vpd_blob_extend() - Append data to a &struct vpd_blob. 138 - * @blob: The blob to extend. 
139 - * @data: The new data to append to @blob. 140 - * @len: The length of @data. 141 - * 142 - * Context: May sleep. 143 - * Return: -ENOMEM on allocation failure, 0 otherwise. 144 - */ 145 - static int vpd_blob_extend(struct vpd_blob *blob, const char *data, size_t len) 146 - { 147 - const size_t new_len = blob->len + len; 148 - const size_t old_len = blob->len; 149 - const char *old_ptr = blob->data; 150 - char *new_ptr; 151 - 152 - new_ptr = kvrealloc(old_ptr, new_len, GFP_KERNEL_ACCOUNT); 153 - if (!new_ptr) 154 - return -ENOMEM; 155 - 156 - memcpy(&new_ptr[old_len], data, len); 157 - blob->data = new_ptr; 158 - blob->len = new_len; 159 - return 0; 160 - } 161 - 162 - /** 163 - * vpd_blob_generate() - Construct a new &struct vpd_blob. 164 - * @generator: Function that supplies the blob data. 165 - * @arg: Context pointer supplied by caller, passed to @generator. 166 - * 167 - * The @generator callback is invoked until it returns NULL. @arg is 168 - * passed to @generator in its first argument on each call. When 169 - * @generator returns data, it should store the data length in its 170 - * second argument. 171 - * 172 - * Context: May sleep. 173 - * Return: A completely populated &struct vpd_blob, or NULL on error. 174 - */ 175 - static const struct vpd_blob * 176 - vpd_blob_generate(const char * (*generator)(void *, size_t *), void *arg) 177 - { 178 - struct vpd_blob *blob; 179 - const char *buf; 180 - size_t len; 181 - int err = 0; 182 - 183 - blob = kzalloc(sizeof(*blob), GFP_KERNEL_ACCOUNT); 184 - if (!blob) 185 - return NULL; 186 - 187 - while (err == 0 && (buf = generator(arg, &len))) 188 - err = vpd_blob_extend(blob, buf, len); 189 - 190 - if (err != 0 || !vpd_blob_has_data(blob)) 191 - goto free_blob; 192 - 193 - return blob; 194 - free_blob: 195 - vpd_blob_free(blob); 196 - return NULL; 197 - } 198 - 199 - /* 200 122 * Internal VPD sequence APIs. A VPD sequence is a series of calls to 201 123 * ibm,get-vpd for a given location code. 
The sequence ends when an 202 124 * error is encountered or all VPD for the location code has been ··· 119 211 */ 120 212 121 213 /** 122 - * struct vpd_sequence - State for managing a VPD sequence. 123 - * @error: Shall be zero as long as the sequence has not encountered an error, 124 - * -ve errno otherwise. Use vpd_sequence_set_err() to update this. 125 - * @params: Parameter block to pass to rtas_ibm_get_vpd(). 126 - */ 127 - struct vpd_sequence { 128 - int error; 129 - struct rtas_ibm_get_vpd_params params; 130 - }; 131 - 132 - /** 133 214 * vpd_sequence_begin() - Begin a VPD retrieval sequence. 134 - * @seq: Uninitialized sequence state. 135 - * @loc_code: Location code that defines the scope of the VPD to return. 136 - * 137 - * Initializes @seq with the resources necessary to carry out a VPD 138 - * sequence. Callers must pass @seq to vpd_sequence_end() regardless 139 - * of whether the sequence succeeds. 215 + * @seq: vpd call parameters from sequence struct 140 216 * 141 217 * Context: May sleep. 142 218 */ 143 - static void vpd_sequence_begin(struct vpd_sequence *seq, 144 - const struct papr_location_code *loc_code) 219 + static void vpd_sequence_begin(struct papr_rtas_sequence *seq) 145 220 { 221 + struct rtas_ibm_get_vpd_params *vpd_params; 146 222 /* 147 223 * Use a static data structure for the location code passed to 148 224 * RTAS to ensure it's in the RMA and avoid a separate work ··· 134 242 */ 135 243 static struct papr_location_code static_loc_code; 136 244 245 + vpd_params = (struct rtas_ibm_get_vpd_params *)seq->params; 137 246 /* 138 247 * We could allocate the work area before acquiring the 139 248 * function lock, but that would allow concurrent requests to ··· 142 249 * allocate the work area under the lock. 
143 250 */ 144 251 mutex_lock(&rtas_ibm_get_vpd_lock); 145 - static_loc_code = *loc_code; 146 - *seq = (struct vpd_sequence) { 147 - .params = { 148 - .work_area = rtas_work_area_alloc(SZ_4K), 149 - .loc_code = &static_loc_code, 150 - .sequence = 1, 151 - }, 152 - }; 252 + static_loc_code = *(struct papr_location_code *)vpd_params->loc_code; 253 + vpd_params = (struct rtas_ibm_get_vpd_params *)seq->params; 254 + vpd_params->work_area = rtas_work_area_alloc(SZ_4K); 255 + vpd_params->loc_code = &static_loc_code; 256 + vpd_params->sequence = 1; 257 + vpd_params->status = 0; 153 258 } 154 259 155 260 /** ··· 156 265 * 157 266 * Releases resources obtained by vpd_sequence_begin(). 158 267 */ 159 - static void vpd_sequence_end(struct vpd_sequence *seq) 268 + static void vpd_sequence_end(struct papr_rtas_sequence *seq) 160 269 { 161 - rtas_work_area_free(seq->params.work_area); 270 + struct rtas_ibm_get_vpd_params *vpd_params; 271 + 272 + vpd_params = (struct rtas_ibm_get_vpd_params *)seq->params; 273 + rtas_work_area_free(vpd_params->work_area); 162 274 mutex_unlock(&rtas_ibm_get_vpd_lock); 163 275 } 164 276 165 - /** 166 - * vpd_sequence_should_stop() - Determine whether a VPD retrieval sequence 167 - * should continue. 168 - * @seq: VPD sequence state. 169 - * 170 - * Examines the sequence error state and outputs of the last call to 171 - * ibm,get-vpd to determine whether the sequence in progress should 172 - * continue or stop. 173 - * 174 - * Return: True if the sequence has encountered an error or if all VPD for 175 - * this sequence has been retrieved. False otherwise. 176 - */ 177 - static bool vpd_sequence_should_stop(const struct vpd_sequence *seq) 178 - { 179 - bool done; 180 - 181 - if (seq->error) 182 - return true; 183 - 184 - switch (seq->params.status) { 185 - case 0: 186 - if (seq->params.written == 0) 187 - done = false; /* Initial state. */ 188 - else 189 - done = true; /* All data consumed. 
*/ 190 - break; 191 - case 1: 192 - done = false; /* More data available. */ 193 - break; 194 - default: 195 - done = true; /* Error encountered. */ 196 - break; 197 - } 198 - 199 - return done; 200 - } 201 - 202 - static int vpd_sequence_set_err(struct vpd_sequence *seq, int err) 203 - { 204 - /* Preserve the first error recorded. */ 205 - if (seq->error == 0) 206 - seq->error = err; 207 - 208 - return seq->error; 209 - } 210 - 211 277 /* 212 - * Generator function to be passed to vpd_blob_generate(). 278 + * Generator function to be passed to papr_rtas_blob_generate(). 213 279 */ 214 - static const char *vpd_sequence_fill_work_area(void *arg, size_t *len) 280 + static const char *vpd_sequence_fill_work_area(struct papr_rtas_sequence *seq, 281 + size_t *len) 215 282 { 216 - struct vpd_sequence *seq = arg; 217 - struct rtas_ibm_get_vpd_params *p = &seq->params; 283 + struct rtas_ibm_get_vpd_params *p; 284 + bool init_state; 218 285 219 - if (vpd_sequence_should_stop(seq)) 286 + p = (struct rtas_ibm_get_vpd_params *)seq->params; 287 + init_state = (p->written == 0) ? true : false; 288 + 289 + if (papr_rtas_sequence_should_stop(seq, p->status, init_state)) 220 290 return NULL; 221 - if (vpd_sequence_set_err(seq, rtas_ibm_get_vpd(p))) 291 + if (papr_rtas_sequence_set_err(seq, rtas_ibm_get_vpd(p))) 222 292 return NULL; 223 293 *len = p->written; 224 294 return rtas_work_area_raw_buf(p->work_area); 225 295 } 226 296 227 - /* 228 - * Higher-level VPD retrieval code below. These functions use the 229 - * vpd_blob_* and vpd_sequence_* APIs defined above to create fd-based 230 - * VPD handles for consumption by user space. 231 - */ 232 - 233 - /** 234 - * papr_vpd_run_sequence() - Run a single VPD retrieval sequence. 235 - * @loc_code: Location code that defines the scope of VPD to return. 236 - * 237 - * Context: May sleep. Holds a mutex and an RTAS work area for its 238 - * duration. Typically performs multiple sleepable slab 239 - * allocations. 
240 - * 241 - * Return: A populated &struct vpd_blob on success. Encoded error 242 - * pointer otherwise. 243 - */ 244 - static const struct vpd_blob *papr_vpd_run_sequence(const struct papr_location_code *loc_code) 245 - { 246 - const struct vpd_blob *blob; 247 - struct vpd_sequence seq; 248 - 249 - vpd_sequence_begin(&seq, loc_code); 250 - blob = vpd_blob_generate(vpd_sequence_fill_work_area, &seq); 251 - if (!blob) 252 - vpd_sequence_set_err(&seq, -ENOMEM); 253 - vpd_sequence_end(&seq); 254 - 255 - if (seq.error) { 256 - vpd_blob_free(blob); 257 - return ERR_PTR(seq.error); 258 - } 259 - 260 - return blob; 261 - } 262 - 263 - /** 264 - * papr_vpd_retrieve() - Return the VPD for a location code. 265 - * @loc_code: Location code that defines the scope of VPD to return. 266 - * 267 - * Run VPD sequences against @loc_code until a blob is successfully 268 - * instantiated, or a hard error is encountered, or a fatal signal is 269 - * pending. 270 - * 271 - * Context: May sleep. 272 - * Return: A fully populated VPD blob when successful. Encoded error 273 - * pointer otherwise. 274 - */ 275 - static const struct vpd_blob *papr_vpd_retrieve(const struct papr_location_code *loc_code) 276 - { 277 - const struct vpd_blob *blob; 278 - 279 - /* 280 - * EAGAIN means the sequence errored with a -4 (VPD changed) 281 - * status from ibm,get-vpd, and we should attempt a new 282 - * sequence. PAPR+ v2.13 R1–7.3.20–5 indicates that this 283 - * should be a transient condition, not something that happens 284 - * continuously. But we'll stop trying on a fatal signal. 285 - */ 286 - do { 287 - blob = papr_vpd_run_sequence(loc_code); 288 - if (!IS_ERR(blob)) /* Success. */ 289 - break; 290 - if (PTR_ERR(blob) != -EAGAIN) /* Hard error. 
*/ 291 - break; 292 - pr_info_ratelimited("VPD changed during retrieval, retrying\n"); 293 - cond_resched(); 294 - } while (!fatal_signal_pending(current)); 295 - 296 - return blob; 297 - } 298 - 299 - static ssize_t papr_vpd_handle_read(struct file *file, char __user *buf, size_t size, loff_t *off) 300 - { 301 - const struct vpd_blob *blob = file->private_data; 302 - 303 - /* bug: we should not instantiate a handle without any data attached. */ 304 - if (!vpd_blob_has_data(blob)) { 305 - pr_err_once("handle without data\n"); 306 - return -EIO; 307 - } 308 - 309 - return simple_read_from_buffer(buf, size, off, blob->data, blob->len); 310 - } 311 - 312 - static int papr_vpd_handle_release(struct inode *inode, struct file *file) 313 - { 314 - const struct vpd_blob *blob = file->private_data; 315 - 316 - vpd_blob_free(blob); 317 - 318 - return 0; 319 - } 320 - 321 - static loff_t papr_vpd_handle_seek(struct file *file, loff_t off, int whence) 322 - { 323 - const struct vpd_blob *blob = file->private_data; 324 - 325 - return fixed_size_llseek(file, off, whence, blob->len); 326 - } 327 - 328 - 329 297 static const struct file_operations papr_vpd_handle_ops = { 330 - .read = papr_vpd_handle_read, 331 - .llseek = papr_vpd_handle_seek, 332 - .release = papr_vpd_handle_release, 298 + .read = papr_rtas_common_handle_read, 299 + .llseek = papr_rtas_common_handle_seek, 300 + .release = papr_rtas_common_handle_release, 333 301 }; 334 302 335 303 /** ··· 210 460 */ 211 461 static long papr_vpd_create_handle(struct papr_location_code __user *ulc) 212 462 { 463 + struct rtas_ibm_get_vpd_params vpd_params = {}; 464 + struct papr_rtas_sequence seq = {}; 213 465 struct papr_location_code klc; 214 - const struct vpd_blob *blob; 215 - struct file *file; 216 - long err; 217 466 int fd; 218 467 219 468 if (copy_from_user(&klc, ulc, sizeof(klc))) ··· 221 472 if (!string_is_terminated(klc.str, ARRAY_SIZE(klc.str))) 222 473 return -EINVAL; 223 474 224 - blob = papr_vpd_retrieve(&klc); 225 - 
if (IS_ERR(blob)) 226 - return PTR_ERR(blob); 475 + seq = (struct papr_rtas_sequence) { 476 + .begin = vpd_sequence_begin, 477 + .end = vpd_sequence_end, 478 + .work = vpd_sequence_fill_work_area, 479 + }; 227 480 228 - fd = get_unused_fd_flags(O_RDONLY | O_CLOEXEC); 229 - if (fd < 0) { 230 - err = fd; 231 - goto free_blob; 232 - } 481 + vpd_params.loc_code = &klc; 482 + seq.params = (void *)&vpd_params; 233 483 234 - file = anon_inode_getfile_fmode("[papr-vpd]", &papr_vpd_handle_ops, 235 - (void *)blob, O_RDONLY, 236 - FMODE_LSEEK | FMODE_PREAD); 237 - if (IS_ERR(file)) { 238 - err = PTR_ERR(file); 239 - goto put_fd; 240 - } 241 - fd_install(fd, file); 484 + fd = papr_rtas_setup_file_interface(&seq, &papr_vpd_handle_ops, 485 + "[papr-vpd]"); 486 + 242 487 return fd; 243 - put_fd: 244 - put_unused_fd(fd); 245 - free_blob: 246 - vpd_blob_free(blob); 247 - return err; 248 488 } 249 489 250 490 /*
+4 -2
arch/powerpc/sysdev/cpm_common.c
··· 138 138 out_be32(&iop->dat, cpm2_gc->cpdata); 139 139 } 140 140 141 - static void cpm2_gpio32_set(struct gpio_chip *gc, unsigned int gpio, int value) 141 + static int cpm2_gpio32_set(struct gpio_chip *gc, unsigned int gpio, int value) 142 142 { 143 143 struct of_mm_gpio_chip *mm_gc = to_of_mm_gpio_chip(gc); 144 144 struct cpm2_gpio32_chip *cpm2_gc = gpiochip_get_data(gc); ··· 150 150 __cpm2_gpio32_set(mm_gc, pin_mask, value); 151 151 152 152 spin_unlock_irqrestore(&cpm2_gc->lock, flags); 153 + 154 + return 0; 153 155 } 154 156 155 157 static int cpm2_gpio32_dir_out(struct gpio_chip *gc, unsigned int gpio, int val) ··· 210 208 gc->direction_input = cpm2_gpio32_dir_in; 211 209 gc->direction_output = cpm2_gpio32_dir_out; 212 210 gc->get = cpm2_gpio32_get; 213 - gc->set = cpm2_gpio32_set; 211 + gc->set_rv = cpm2_gpio32_set; 214 212 gc->parent = dev; 215 213 gc->owner = THIS_MODULE; 216 214
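The cpm2 hunk above is one instance of the GPIO setter migration this merge mentions: the legacy void-returning `.set` callback becomes `.set_rv`, which returns an int so the driver can report failure. A minimal userspace model of the new contract (the struct and field names here are illustrative, not the real gpiolib types):

```c
#include <errno.h>

/* Model of the gpiochip setter migration: the new callback returns
 * int so errors can propagate to the caller. */
struct chip {
	unsigned int nlines;
	unsigned int state;	/* one bit per GPIO line */
	int (*set_rv)(struct chip *gc, unsigned int gpio, int value);
};

int chip_set(struct chip *gc, unsigned int gpio, int value)
{
	if (gpio >= gc->nlines)
		return -EINVAL;	/* the int return lets errors propagate */
	if (value)
		gc->state |= 1u << gpio;
	else
		gc->state &= ~(1u << gpio);
	return 0;
}
```

Under the legacy void-returning callback, an out-of-range or failed write could only be silently ignored; the `set_rv` shape is what lets the diff above end each path with `return 0;`.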
+4 -3
arch/powerpc/sysdev/mpic.c
··· 27 27 #include <linux/spinlock.h> 28 28 #include <linux/pci.h> 29 29 #include <linux/slab.h> 30 + #include <linux/string_choices.h> 30 31 #include <linux/syscore_ops.h> 31 32 #include <linux/ratelimit.h> 32 33 #include <linux/pgtable.h> ··· 475 474 addr = addr | ((u64)readl(base + HT_MSI_ADDR_HI) << 32); 476 475 } 477 476 478 - printk(KERN_DEBUG "mpic: - HT:%02x.%x %s MSI mapping found @ 0x%llx\n", 479 - PCI_SLOT(devfn), PCI_FUNC(devfn), 480 - flags & HT_MSI_FLAGS_ENABLE ? "enabled" : "disabled", addr); 477 + pr_debug("mpic: - HT:%02x.%x %s MSI mapping found @ 0x%llx\n", 478 + PCI_SLOT(devfn), PCI_FUNC(devfn), 479 + str_enabled_disabled(flags & HT_MSI_FLAGS_ENABLE), addr); 481 480 482 481 if (!(flags & HT_MSI_FLAGS_ENABLE)) 483 482 writeb(flags | HT_MSI_FLAGS_ENABLE, base + HT_MSI_FLAGS);
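The mpic.c hunk swaps an open-coded ternary for the `<linux/string_choices.h>` helper; these helpers exist to deduplicate the string literals across drivers. Userspace equivalents of the two helpers used by this series are trivial:

```c
#include <stdbool.h>

/* Userspace equivalents of the <linux/string_choices.h> helpers used
 * by the mpic.c and powermac cleanups in this series. */
const char *str_enabled_disabled(bool v)
{
	return v ? "enabled" : "disabled";
}

const char *str_on_off(bool v)
{
	return v ? "on" : "off";
}
```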
+1 -1
arch/powerpc/xmon/xmon.c
··· 1770 1770 sp + STACK_INT_FRAME_REGS); 1771 1771 break; 1772 1772 } 1773 - printf("--- Exception: %lx %s at ", regs.trap, 1773 + printf("---- Exception: %lx %s at ", regs.trap, 1774 1774 getvecname(TRAP(&regs))); 1775 1775 pc = regs.nip; 1776 1776 lr = regs.link;