Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'powerpc-6.13-1' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux

Pull powerpc updates from Michael Ellerman:

- Rework kfence support for the HPT MMU to work on systems with >= 16TB
of RAM.

- Remove the powerpc "maple" platform, used by the "Yellow Dog
Powerstation".

- Add support for DYNAMIC_FTRACE_WITH_CALL_OPS,
DYNAMIC_FTRACE_WITH_DIRECT_CALLS & BPF Trampolines.

- Add support for running KVM nested guests on Power11.

- Other small features, cleanups and fixes.

Thanks to Amit Machhiwal, Arnd Bergmann, Christophe Leroy, Costa
Shulyupin, David Hunter, David Wang, Disha Goel, Gautam Menghani, Geert
Uytterhoeven, Hari Bathini, Julia Lawall, Kajol Jain, Keith Packard,
Lukas Bulwahn, Madhavan Srinivasan, Markus Elfring, Michal Suchanek,
Ming Lei, Mukesh Kumar Chaurasiya, Nathan Chancellor, Naveen N Rao,
Nicholas Piggin, Nysal Jan K.A, Paulo Miguel Almeida, Pavithra Prakash,
Ritesh Harjani (IBM), Rob Herring (Arm), Sachin P Bappalige, Shen
Lichuan, Simon Horman, Sourabh Jain, Thomas Weißschuh, Thorsten Blum,
Thorsten Leemhuis, Venkat Rao Bagalkote, Zhang Zekun, and zhang jiao.

* tag 'powerpc-6.13-1' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux: (89 commits)
EDAC/powerpc: Remove PPC_MAPLE drivers
powerpc/perf: Add per-task/process monitoring to vpa_pmu driver
powerpc/kvm: Add vpa latency counters to kvm_vcpu_arch
docs: ABI: sysfs-bus-event_source-devices-vpa-pmu: Document sysfs event format entries for vpa_pmu
powerpc/perf: Add perf interface to expose vpa counters
MAINTAINERS: powerpc: Mark Maddy as "M"
powerpc/Makefile: Allow overriding CPP
powerpc-km82xx.c: replace of_node_put() with __free
ps3: Correct some typos in comments
powerpc/kexec: Fix return of uninitialized variable
macintosh: Use common error handling code in via_pmu_led_init()
powerpc/powermac: Use of_property_match_string() in pmac_has_backlight_type()
powerpc: remove dead config options for MPC85xx platform support
powerpc/xive: Use cpumask_intersects()
selftests/powerpc: Remove the path after initialization.
powerpc/xmon: symbol lookup length fixed
powerpc/ep8248e: Use %pa to format resource_size_t
powerpc/ps3: Reorganize kerneldoc parameter names
KVM: PPC: Book3S HV: Fix kmv -> kvm typo
powerpc/sstep: make emulate_vsx_load and emulate_vsx_store static
...

+3059 -3586
+24
Documentation/ABI/testing/sysfs-bus-event_source-devices-vpa-pmu
···
+ What:		/sys/bus/event_source/devices/vpa_pmu/format
+ Date:		November 2024
+ Contact:	Linux on PowerPC Developer List <linuxppc-dev@lists.ozlabs.org>
+ Description:	Read-only. Attribute group to describe the magic bits
+ 		that go into perf_event_attr.config for a particular pmu.
+ 		(See ABI/testing/sysfs-bus-event_source-devices-format).
+
+ 		Each attribute under this group defines a bit range of the
+ 		perf_event_attr.config. Supported attribute are listed
+ 		below::
+ 			event = "config:0-31" - event ID
+
+ 		For example::
+
+ 			l1_to_l2_lat = "event=0x1"
+
+ What:		/sys/bus/event_source/devices/vpa_pmu/events
+ Date:		November 2024
+ Contact:	Linux on PowerPC Developer List <linuxppc-dev@lists.ozlabs.org>
+ Description:	Read-only. Attribute group to describe performance monitoring
+ 		events for the Virtual Processor Area events. Each attribute
+ 		in this group describes a single performance monitoring event
+ 		supported by vpa_pmu. The name of the file is the name of
+ 		the event (See ABI/testing/sysfs-bus-event_source-devices-events).
+2 -2
Documentation/arch/powerpc/booting.rst
···
  should:

  a) add your platform support as a _boolean_ option in
-    arch/powerpc/Kconfig, following the example of PPC_PSERIES,
-    PPC_PMAC and PPC_MAPLE. The latter is probably a good
+    arch/powerpc/Kconfig, following the example of PPC_PSERIES
+    and PPC_PMAC. The latter is probably a good
     example of a board support to start from.

  b) create your main platform file as
+1 -1
MAINTAINERS
···
  R:	Nicholas Piggin <npiggin@gmail.com>
  R:	Christophe Leroy <christophe.leroy@csgroup.eu>
  R:	Naveen N Rao <naveen@kernel.org>
- R:	Madhavan Srinivasan <maddy@linux.ibm.com>
+ M:	Madhavan Srinivasan <maddy@linux.ibm.com>
  L:	linuxppc-dev@lists.ozlabs.org
  S:	Supported
  W:	https://github.com/linuxppc/wiki/wiki
+6
arch/Kconfig
···
  config ARCH_NEED_CMPXCHG_1_EMU
  	bool

+ config ARCH_WANTS_PRE_LINK_VMLINUX
+ 	bool
+ 	help
+ 	  An architecture can select this if it provides arch/<arch>/tools/Makefile
+ 	  with .arch.vmlinux.o target to be linked into vmlinux.
+
  endmenu
+1 -1
arch/powerpc/Kbuild
···
  obj-$(CONFIG_KEXEC_FILE) += purgatory/

  # for cleaning
- subdir- += boot
+ subdir- += boot tools
+23 -3
arch/powerpc/Kconfig
···
  	select HAVE_DEBUG_STACKOVERFLOW
  	select HAVE_DYNAMIC_FTRACE
  	select HAVE_DYNAMIC_FTRACE_WITH_ARGS if ARCH_USING_PATCHABLE_FUNCTION_ENTRY || MPROFILE_KERNEL || PPC32
+ 	select HAVE_DYNAMIC_FTRACE_WITH_CALL_OPS if PPC_FTRACE_OUT_OF_LINE || (PPC32 && ARCH_USING_PATCHABLE_FUNCTION_ENTRY)
+ 	select HAVE_DYNAMIC_FTRACE_WITH_DIRECT_CALLS if HAVE_DYNAMIC_FTRACE_WITH_CALL_OPS
  	select HAVE_DYNAMIC_FTRACE_WITH_REGS if ARCH_USING_PATCHABLE_FUNCTION_ENTRY || MPROFILE_KERNEL || PPC32
  	select HAVE_EBPF_JIT
  	select HAVE_EFFICIENT_UNALIGNED_ACCESS
···
  	select HAVE_FUNCTION_DESCRIPTORS if PPC64_ELF_ABI_V1
  	select HAVE_FUNCTION_ERROR_INJECTION
  	select HAVE_FUNCTION_GRAPH_TRACER
- 	select HAVE_FUNCTION_TRACER if PPC64 || (PPC32 && CC_IS_GCC)
+ 	select HAVE_FUNCTION_TRACER if !COMPILE_TEST && (PPC64 || (PPC32 && CC_IS_GCC))
  	select HAVE_GCC_PLUGINS if GCC_VERSION >= 50200 # plugin support on gcc <= 5.1 is buggy on PPC
  	select HAVE_GENERIC_VDSO
  	select HAVE_HARDLOCKUP_DETECTOR_ARCH if PPC_BOOK3S_64 && SMP
···
  	select HAVE_REGS_AND_STACK_ACCESS_API
  	select HAVE_RELIABLE_STACKTRACE
  	select HAVE_RSEQ
+ 	select HAVE_SAMPLE_FTRACE_DIRECT if HAVE_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
+ 	select HAVE_SAMPLE_FTRACE_DIRECT_MULTI if HAVE_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
  	select HAVE_SETUP_PER_CPU_AREA if PPC64
  	select HAVE_SOFTIRQ_ON_OWN_STACK
- 	select HAVE_STACKPROTECTOR if PPC32 && $(cc-option,-mstack-protector-guard=tls -mstack-protector-guard-reg=r2)
- 	select HAVE_STACKPROTECTOR if PPC64 && $(cc-option,-mstack-protector-guard=tls -mstack-protector-guard-reg=r13)
+ 	select HAVE_STACKPROTECTOR if PPC32 && $(cc-option,$(m32-flag) -mstack-protector-guard=tls -mstack-protector-guard-reg=r2 -mstack-protector-guard-offset=0)
+ 	select HAVE_STACKPROTECTOR if PPC64 && $(cc-option,$(m64-flag) -mstack-protector-guard=tls -mstack-protector-guard-reg=r13 -mstack-protector-guard-offset=0)
  	select HAVE_STATIC_CALL if PPC32
  	select HAVE_SYSCALL_TRACEPOINTS
  	select HAVE_VIRT_CPU_ACCOUNTING
···
  	def_bool y if PPC32
  	def_bool $(success,$(srctree)/arch/powerpc/tools/gcc-check-fpatchable-function-entry.sh $(CC) -mlittle-endian) if PPC64 && CPU_LITTLE_ENDIAN
  	def_bool $(success,$(srctree)/arch/powerpc/tools/gcc-check-fpatchable-function-entry.sh $(CC) -mbig-endian) if PPC64 && CPU_BIG_ENDIAN
+
+ config PPC_FTRACE_OUT_OF_LINE
+ 	def_bool PPC64 && ARCH_USING_PATCHABLE_FUNCTION_ENTRY
+ 	select ARCH_WANTS_PRE_LINK_VMLINUX
+
+ config PPC_FTRACE_OUT_OF_LINE_NUM_RESERVE
+ 	int "Number of ftrace out-of-line stubs to reserve within .text"
+ 	depends on PPC_FTRACE_OUT_OF_LINE
+ 	default 32768
+ 	help
+ 	  Number of stubs to reserve for use by ftrace. This space is
+ 	  reserved within .text, and is distinct from any additional space
+ 	  added at the end of .text before the final vmlinux link. Set to
+ 	  zero to have stubs only be generated at the end of vmlinux (only
+ 	  if the size of vmlinux is less than 32MB). Set to a higher value
+ 	  if building vmlinux larger than 48MB.

  config HOTPLUG_CPU
  	bool "Support for enabling/disabling CPUs"
-6
arch/powerpc/Kconfig.debug
···
  	help
  	  Select this to enable early debugging via the RTAS console.

- config PPC_EARLY_DEBUG_MAPLE
- 	bool "Maple real mode"
- 	depends on PPC_MAPLE
- 	help
- 	  Select this to enable early debugging for Maple.
-
  config PPC_EARLY_DEBUG_PAS_REALMODE
  	bool "PA Semi real mode"
  	depends on PPC_PASEMI
+16 -14
arch/powerpc/Makefile
···
  endif

  ifdef CONFIG_CPU_LITTLE_ENDIAN
- KBUILD_CFLAGS += -mlittle-endian
+ KBUILD_CPPFLAGS += -mlittle-endian
  KBUILD_LDFLAGS += -EL
  LDEMULATION := lppc
  GNUTARGET := powerpcle
  MULTIPLEWORD := -mno-multiple
  KBUILD_CFLAGS_MODULE += $(call cc-option,-mno-save-toc-indirect)
  else
- KBUILD_CFLAGS += $(call cc-option,-mbig-endian)
+ KBUILD_CPPFLAGS += $(call cc-option,-mbig-endian)
  KBUILD_LDFLAGS += -EB
  LDEMULATION := ppc
  GNUTARGET := powerpc
···
  aflags-$(CONFIG_CPU_LITTLE_ENDIAN) += -mlittle-endian

  ifeq ($(HAS_BIARCH),y)
- KBUILD_CFLAGS += -m$(BITS)
+ KBUILD_CPPFLAGS += -m$(BITS)
  KBUILD_AFLAGS += -m$(BITS)
  KBUILD_LDFLAGS += -m elf$(BITS)$(LDEMULATION)
- endif
-
- cflags-$(CONFIG_STACKPROTECTOR) += -mstack-protector-guard=tls
- ifdef CONFIG_PPC64
- cflags-$(CONFIG_STACKPROTECTOR) += -mstack-protector-guard-reg=r13
- else
- cflags-$(CONFIG_STACKPROTECTOR) += -mstack-protector-guard-reg=r2
  endif

  LDFLAGS_vmlinux-y := -Bstatic
···
  ifdef CONFIG_FUNCTION_TRACER
  ifdef CONFIG_ARCH_USING_PATCHABLE_FUNCTION_ENTRY
  KBUILD_CPPFLAGS += -DCC_USING_PATCHABLE_FUNCTION_ENTRY
+ ifdef CONFIG_PPC_FTRACE_OUT_OF_LINE
+ CC_FLAGS_FTRACE := -fpatchable-function-entry=1
+ else
+ ifdef CONFIG_DYNAMIC_FTRACE_WITH_CALL_OPS # PPC32 only
+ CC_FLAGS_FTRACE := -fpatchable-function-entry=3,1
+ else
  CC_FLAGS_FTRACE := -fpatchable-function-entry=2
+ endif
+ endif
  else
  CC_FLAGS_FTRACE := -pg
  ifdef CONFIG_MPROFILE_KERNEL
···
  KBUILD_AFLAGS += $(AFLAGS-y)
  KBUILD_CFLAGS += $(CC_FLAGS_NO_FPU)
  KBUILD_CFLAGS += $(CFLAGS-y)
- CPP = $(CC) -E $(KBUILD_CFLAGS)

  CHECKFLAGS += -m$(BITS) -D__powerpc__ -D__powerpc$(BITS)__
  ifdef CONFIG_CPU_BIG_ENDIAN
···
  	echo '  install         - Install kernel using'
  	echo '                    (your) ~/bin/$(INSTALLKERNEL) or'
  	echo '                    (distribution) /sbin/$(INSTALLKERNEL) or'
- 	echo '                    install to $$(INSTALL_PATH) and run lilo'
+ 	echo '                    install to $$(INSTALL_PATH)'
  	echo '  *_defconfig     - Select default config from arch/powerpc/configs'
  	echo ''
  	echo '  Targets with <dt> embed a device tree blob inside the image'
···
  PHONY += stack_protector_prepare
  stack_protector_prepare: prepare0
  ifdef CONFIG_PPC64
- 	$(eval KBUILD_CFLAGS += -mstack-protector-guard-offset=$(shell awk '{if ($$2 == "PACA_CANARY") print $$3;}' include/generated/asm-offsets.h))
+ 	$(eval KBUILD_CFLAGS += -mstack-protector-guard=tls -mstack-protector-guard-reg=r13 \
+ 		-mstack-protector-guard-offset=$(shell awk '{if ($$2 == "PACA_CANARY") print $$3;}' include/generated/asm-offsets.h))
  else
- 	$(eval KBUILD_CFLAGS += -mstack-protector-guard-offset=$(shell awk '{if ($$2 == "TASK_CANARY") print $$3;}' include/generated/asm-offsets.h))
+ 	$(eval KBUILD_CFLAGS += -mstack-protector-guard=tls -mstack-protector-guard-reg=r2 \
+ 		-mstack-protector-guard-offset=$(shell awk '{if ($$2 == "TASK_CANARY") print $$3;}' include/generated/asm-offsets.h))
  endif
  endif
-1
arch/powerpc/boot/.gitignore
···
  zImage.epapr
  zImage.holly
  zImage.*lds
- zImage.maple
  zImage.miboot
  zImage.pmac
  zImage.pseries
+1 -2
arch/powerpc/boot/Makefile
···

  image-$(CONFIG_PPC_PSERIES) += zImage.pseries
  image-$(CONFIG_PPC_POWERNV) += zImage.pseries
- image-$(CONFIG_PPC_MAPLE) += zImage.maple
  image-$(CONFIG_PPC_IBM_CELL_BLADE) += zImage.pseries
  image-$(CONFIG_PPC_PS3) += dtbImage.ps3
  image-$(CONFIG_PPC_CHRP) += zImage.chrp
···
  clean-files += $(image-) $(initrd-) cuImage.* dtbImage.* treeImage.* \
  	zImage zImage.initrd zImage.chrp zImage.coff zImage.holly \
  	zImage.miboot zImage.pmac zImage.pseries \
- 	zImage.maple simpleImage.* otheros.bld
+ 	simpleImage.* otheros.bld

  # clean up files cached by wrapper
  clean-kernel-base := vmlinux.strip vmlinux.bin
+1 -6
arch/powerpc/boot/wrapper
···
  	fi
  	make_space=n
  	;;
- maple)
- 	platformo="$object/of.o $object/epapr.o"
- 	link_address='0x400000'
- 	make_space=n
- 	;;
  pmac|chrp)
  	platformo="$object/of.o $object/epapr.o"
  	make_space=n
···

  # post-processing needed for some platforms
  case "$platform" in
- pseries|chrp|maple)
+ pseries|chrp)
  	$objbin/addnote "$ofile"
  	;;
  coff)
-111
arch/powerpc/configs/maple_defconfig
···
- CONFIG_PPC64=y
- CONFIG_SMP=y
- CONFIG_NR_CPUS=4
- CONFIG_SYSVIPC=y
- CONFIG_POSIX_MQUEUE=y
- CONFIG_NO_HZ=y
- CONFIG_HIGH_RES_TIMERS=y
- CONFIG_IKCONFIG=y
- CONFIG_IKCONFIG_PROC=y
- # CONFIG_COMPAT_BRK is not set
- CONFIG_PROFILING=y
- CONFIG_KPROBES=y
- CONFIG_MODULES=y
- CONFIG_MODULE_UNLOAD=y
- CONFIG_MODVERSIONS=y
- CONFIG_MODULE_SRCVERSION_ALL=y
- # CONFIG_BLK_DEV_BSG is not set
- CONFIG_PARTITION_ADVANCED=y
- CONFIG_MAC_PARTITION=y
- # CONFIG_PPC_POWERNV is not set
- # CONFIG_PPC_PSERIES is not set
- # CONFIG_PPC_PMAC is not set
- CONFIG_PPC_MAPLE=y
- CONFIG_UDBG_RTAS_CONSOLE=y
- CONFIG_GEN_RTC=y
- CONFIG_KEXEC=y
- CONFIG_IRQ_ALL_CPUS=y
- CONFIG_PPC_4K_PAGES=y
- CONFIG_PCI_MSI=y
- CONFIG_NET=y
- CONFIG_PACKET=y
- CONFIG_UNIX=y
- CONFIG_XFRM_USER=m
- CONFIG_INET=y
- CONFIG_IP_MULTICAST=y
- CONFIG_IP_PNP=y
- CONFIG_IP_PNP_DHCP=y
- # CONFIG_IPV6 is not set
- CONFIG_BLK_DEV_RAM=y
- CONFIG_BLK_DEV_RAM_SIZE=8192
- # CONFIG_SCSI_PROC_FS is not set
- CONFIG_BLK_DEV_SD=y
- CONFIG_BLK_DEV_SR=y
- CONFIG_CHR_DEV_SG=y
- CONFIG_SCSI_IPR=y
- CONFIG_ATA=y
- CONFIG_PATA_AMD=y
- CONFIG_ATA_GENERIC=y
- CONFIG_NETDEVICES=y
- CONFIG_AMD8111_ETH=y
- CONFIG_TIGON3=y
- CONFIG_E1000=y
- CONFIG_USB_PEGASUS=y
- # CONFIG_INPUT_KEYBOARD is not set
- # CONFIG_INPUT_MOUSE is not set
- # CONFIG_SERIO is not set
- CONFIG_SERIAL_8250=y
- CONFIG_SERIAL_8250_CONSOLE=y
- CONFIG_HVC_RTAS=y
- # CONFIG_HW_RANDOM is not set
- CONFIG_I2C=y
- CONFIG_I2C_CHARDEV=y
- CONFIG_I2C_AMD8111=y
- # CONFIG_VGA_CONSOLE is not set
- CONFIG_HID_GYRATION=y
- CONFIG_HID_PANTHERLORD=y
- CONFIG_HID_PETALYNX=y
- CONFIG_HID_SAMSUNG=y
- CONFIG_HID_SUNPLUS=y
- CONFIG_USB=y
- CONFIG_USB_MON=y
- CONFIG_USB_EHCI_HCD=y
- CONFIG_USB_EHCI_ROOT_HUB_TT=y
- # CONFIG_USB_EHCI_HCD_PPC_OF is not set
- CONFIG_USB_OHCI_HCD=y
- CONFIG_USB_UHCI_HCD=y
- CONFIG_USB_SERIAL=y
- CONFIG_USB_SERIAL_GENERIC=y
- CONFIG_USB_SERIAL_CYPRESS_M8=m
- CONFIG_USB_SERIAL_GARMIN=m
- CONFIG_USB_SERIAL_IPW=m
- CONFIG_USB_SERIAL_KEYSPAN=y
- CONFIG_USB_SERIAL_TI=m
- CONFIG_EXT2_FS=y
- CONFIG_EXT4_FS=y
- CONFIG_FS_DAX=y
- CONFIG_MSDOS_FS=y
- CONFIG_VFAT_FS=y
- CONFIG_PROC_KCORE=y
- CONFIG_TMPFS=y
- CONFIG_HUGETLBFS=y
- CONFIG_CRAMFS=y
- CONFIG_NFS_FS=y
- CONFIG_NFS_V3_ACL=y
- CONFIG_NFS_V4=y
- CONFIG_ROOT_NFS=y
- CONFIG_NLS_DEFAULT="utf-8"
- CONFIG_NLS_UTF8=y
- CONFIG_CRC_CCITT=y
- CONFIG_CRC_T10DIF=y
- CONFIG_MAGIC_SYSRQ=y
- CONFIG_DEBUG_KERNEL=y
- CONFIG_DEBUG_STACK_USAGE=y
- CONFIG_DEBUG_STACKOVERFLOW=y
- CONFIG_XMON=y
- CONFIG_XMON_DEFAULT=y
- CONFIG_BOOTX_TEXT=y
- CONFIG_CRYPTO_ECB=m
- CONFIG_CRYPTO_PCBC=m
- # CONFIG_CRYPTO_HW is not set
- CONFIG_PRINTK_TIME=y
-1
arch/powerpc/configs/ppc64_defconfig
···
  CONFIG_IBMEBUS=y
  CONFIG_PAPR_SCM=m
  CONFIG_PPC_SVM=y
- CONFIG_PPC_MAPLE=y
  CONFIG_PPC_PASEMI=y
  CONFIG_PPC_PASEMI_IOMMU=y
  CONFIG_PPC_PS3=y
+6 -5
arch/powerpc/include/asm/cputable.h
···
  #define CPU_FTR_ARCH_31			LONG_ASM_CONST(0x0004000000000000)
  #define CPU_FTR_DAWR1			LONG_ASM_CONST(0x0008000000000000)
  #define CPU_FTR_DEXCR_NPHIE		LONG_ASM_CONST(0x0010000000000000)
+ #define CPU_FTR_P11_PVR			LONG_ASM_CONST(0x0020000000000000)

  #ifndef __ASSEMBLY__
···
  	    CPU_FTR_DAWR | CPU_FTR_DAWR1 | \
  	    CPU_FTR_DEXCR_NPHIE)

- #define CPU_FTRS_POWER11	CPU_FTRS_POWER10
+ #define CPU_FTRS_POWER11	(CPU_FTRS_POWER10 | CPU_FTR_P11_PVR)

  #define CPU_FTRS_CELL	(CPU_FTR_LWSYNC | \
  	    CPU_FTR_PPCAS_ARCH_V2 | CPU_FTR_CTRL | \
···
  	    (CPU_FTRS_POWER7 | CPU_FTRS_POWER8E | CPU_FTRS_POWER8 | \
  	     CPU_FTR_ALTIVEC_COMP | CPU_FTR_VSX_COMP | CPU_FTRS_POWER9 | \
  	     CPU_FTRS_POWER9_DD2_1 | CPU_FTRS_POWER9_DD2_2 | \
- 	     CPU_FTRS_POWER9_DD2_3 | CPU_FTRS_POWER10)
+ 	     CPU_FTRS_POWER9_DD2_3 | CPU_FTRS_POWER10 | CPU_FTRS_POWER11)
  #else
  #define CPU_FTRS_POSSIBLE	\
  	    (CPU_FTRS_PPC970 | CPU_FTRS_POWER5 | \
···
  	     CPU_FTRS_POWER8 | CPU_FTRS_CELL | CPU_FTRS_PA6T | \
  	     CPU_FTR_VSX_COMP | CPU_FTR_ALTIVEC_COMP | CPU_FTRS_POWER9 | \
  	     CPU_FTRS_POWER9_DD2_1 | CPU_FTRS_POWER9_DD2_2 | \
- 	     CPU_FTRS_POWER9_DD2_3 | CPU_FTRS_POWER10)
+ 	     CPU_FTRS_POWER9_DD2_3 | CPU_FTRS_POWER10 | CPU_FTRS_POWER11)
  #endif /* CONFIG_CPU_LITTLE_ENDIAN */
  #endif
  #else
···
  	    (CPU_FTRS_POSSIBLE & ~CPU_FTR_HVMODE & ~CPU_FTR_DBELL & \
  	     CPU_FTRS_POWER7 & CPU_FTRS_POWER8E & CPU_FTRS_POWER8 & \
  	     CPU_FTRS_POWER9 & CPU_FTRS_POWER9_DD2_1 & CPU_FTRS_POWER9_DD2_2 & \
- 	     CPU_FTRS_POWER10 & CPU_FTRS_DT_CPU_BASE)
+ 	     CPU_FTRS_POWER10 & CPU_FTRS_POWER11 & CPU_FTRS_DT_CPU_BASE)
  #else
  #define CPU_FTRS_ALWAYS		\
  	    (CPU_FTRS_PPC970 & CPU_FTRS_POWER5 & \
···
  	     CPU_FTRS_PA6T & CPU_FTRS_POWER8 & CPU_FTRS_POWER8E & \
  	     ~CPU_FTR_HVMODE & ~CPU_FTR_DBELL & CPU_FTRS_POSSIBLE & \
  	     CPU_FTRS_POWER9 & CPU_FTRS_POWER9_DD2_1 & CPU_FTRS_POWER9_DD2_2 & \
- 	     CPU_FTRS_POWER10 & CPU_FTRS_DT_CPU_BASE)
+ 	     CPU_FTRS_POWER10 & CPU_FTRS_POWER11 & CPU_FTRS_DT_CPU_BASE)
  #endif /* CONFIG_CPU_LITTLE_ENDIAN */
  #endif
  #else
+2 -2
arch/powerpc/include/asm/dtl.h
···
  #ifndef _ASM_POWERPC_DTL_H
  #define _ASM_POWERPC_DTL_H

+ #include <linux/rwsem.h>
  #include <asm/lppaca.h>
- #include <linux/spinlock_types.h>

  /*
   * Layout of entries in the hypervisor's dispatch trace log buffer.
···
  #define DTL_LOG_ALL		(DTL_LOG_CEDE | DTL_LOG_PREEMPT | DTL_LOG_FAULT)

  extern struct kmem_cache *dtl_cache;
- extern rwlock_t dtl_access_lock;
+ extern struct rw_semaphore dtl_access_lock;

  extern void register_dtl_buffer(int cpu);
  extern void alloc_dtl_buffers(unsigned long *time_limit);
+9
arch/powerpc/include/asm/fadump.h
···
  extern int should_fadump_crash(void);
  extern void crash_fadump(struct pt_regs *, const char *);
  extern void fadump_cleanup(void);
+ void fadump_setup_param_area(void);
  extern void fadump_append_bootargs(void);

  #else /* CONFIG_FA_DUMP */
···
  static inline int should_fadump_crash(void) { return 0; }
  static inline void crash_fadump(struct pt_regs *regs, const char *str) { }
  static inline void fadump_cleanup(void) { }
+ static inline void fadump_setup_param_area(void) { }
  static inline void fadump_append_bootargs(void) { }
  #endif /* !CONFIG_FA_DUMP */

···
  					  int depth, void *data);
  extern int fadump_reserve_mem(void);
  #endif
+
+ #if defined(CONFIG_FA_DUMP) && defined(CONFIG_CMA)
+ void fadump_cma_init(void);
+ #else
+ static inline void fadump_cma_init(void) { }
+ #endif
+
  #endif /* _ASM_POWERPC_FADUMP_H */
+32 -1
arch/powerpc/include/asm/ftrace.h
···
  struct module;
  struct dyn_ftrace;
  struct dyn_arch_ftrace {
- 	struct module *mod;
+ #ifdef CONFIG_PPC_FTRACE_OUT_OF_LINE
+ 	/* pointer to the associated out-of-line stub */
+ 	unsigned long ool_stub;
+ #endif
  };

  #ifdef CONFIG_DYNAMIC_FTRACE_WITH_ARGS
···

  #ifdef CONFIG_FUNCTION_TRACER
  extern unsigned int ftrace_tramp_text[], ftrace_tramp_init[];
+ #ifdef CONFIG_PPC_FTRACE_OUT_OF_LINE
+ struct ftrace_ool_stub {
+ #ifdef CONFIG_DYNAMIC_FTRACE_WITH_CALL_OPS
+ 	struct ftrace_ops *ftrace_op;
+ #endif
+ 	u32 insn[4];
+ } __aligned(sizeof(unsigned long));
+ extern struct ftrace_ool_stub ftrace_ool_stub_text_end[], ftrace_ool_stub_text[],
+ 			      ftrace_ool_stub_inittext[];
+ extern unsigned int ftrace_ool_stub_text_end_count, ftrace_ool_stub_text_count,
+ 		    ftrace_ool_stub_inittext_count;
+ #endif
  void ftrace_free_init_tramp(void);
  unsigned long ftrace_call_adjust(unsigned long addr);
+
+ #ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
+ /*
+  * When an ftrace registered caller is tracing a function that is also set by a
+  * register_ftrace_direct() call, it needs to be differentiated in the
+  * ftrace_caller trampoline so that the direct call can be invoked after the
+  * other ftrace ops. To do this, place the direct caller in the orig_gpr3 field
+  * of pt_regs. This tells ftrace_caller that there's a direct caller.
+  */
+ static inline void arch_ftrace_set_direct_caller(struct ftrace_regs *fregs, unsigned long addr)
+ {
+ 	struct pt_regs *regs = &arch_ftrace_regs(fregs)->regs;
+
+ 	regs->orig_gpr3 = addr;
+ }
+ #endif /* CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS */
  #else
  static inline void ftrace_free_init_tramp(void) { }
  static inline unsigned long ftrace_call_adjust(unsigned long addr) { return addr; }
+1
arch/powerpc/include/asm/hvcall.h
···
  #define H_GUEST_CAP_COPY_MEM	(1UL<<(63-0))
  #define H_GUEST_CAP_POWER9	(1UL<<(63-1))
  #define H_GUEST_CAP_POWER10	(1UL<<(63-2))
+ #define H_GUEST_CAP_POWER11	(1UL<<(63-3))
  #define H_GUEST_CAP_BITMAP2	(1UL<<(63-63))

  #ifndef __ASSEMBLY__
+6 -2
arch/powerpc/include/asm/kfence.h
···
  #define ARCH_FUNC_PREFIX "."
  #endif

- #ifdef CONFIG_KFENCE
+ extern bool kfence_early_init;
  extern bool kfence_disabled;

  static inline void disable_kfence(void)
···
  {
  	return !kfence_disabled;
  }
- #endif
+
+ static inline bool kfence_early_init_enabled(void)
+ {
+ 	return IS_ENABLED(CONFIG_KFENCE) && kfence_early_init;
+ }

  #ifdef CONFIG_PPC64
  static inline bool kfence_protect_page(unsigned long addr, bool protect)
+8 -2
arch/powerpc/include/asm/kvm_book3s_64.h
···
  int kvmhv_nestedv2_parse_output(struct kvm_vcpu *vcpu);
  int kvmhv_nestedv2_set_vpa(struct kvm_vcpu *vcpu, unsigned long vpa);

- int kmvhv_counters_tracepoint_regfunc(void);
- void kmvhv_counters_tracepoint_unregfunc(void);
+ int kvmhv_counters_tracepoint_regfunc(void);
+ void kvmhv_counters_tracepoint_unregfunc(void);
  int kvmhv_get_l2_counters_status(void);
  void kvmhv_set_l2_counters_status(int cpu, bool status);
+ u64 kvmhv_get_l1_to_l2_cs_time(void);
+ u64 kvmhv_get_l2_to_l1_cs_time(void);
+ u64 kvmhv_get_l2_runtime_agg(void);
+ u64 kvmhv_get_l1_to_l2_cs_time_vcpu(void);
+ u64 kvmhv_get_l2_to_l1_cs_time_vcpu(void);
+ u64 kvmhv_get_l2_runtime_agg_vcpu(void);

  #endif /* CONFIG_KVM_BOOK3S_HV_POSSIBLE */
+5
arch/powerpc/include/asm/kvm_host.h
···
  	struct kvmhv_tb_accumulator cede_time;	/* time napping inside guest */
  #endif
  #endif /* CONFIG_KVM_BOOK3S_HV_EXIT_TIMING */
+ #ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
+ 	u64 l1_to_l2_cs;
+ 	u64 l2_to_l1_cs;
+ 	u64 l2_runtime_agg;
+ #endif
  };

  #define VCPU_FPR(vcpu, i)	(vcpu)->arch.fp.fpr[i][TS_FPROFFSET]
+6 -2
arch/powerpc/include/asm/machdep.h
···
  #ifdef __KERNEL__

  #include <linux/compiler.h>
- #include <linux/seq_file.h>
  #include <linux/init.h>
- #include <linux/dma-mapping.h>
  #include <linux/export.h>
+ #include <linux/time64.h>
+
+ #include <asm/page.h>

  struct pt_regs;
  struct pci_bus;
+ struct device;
  struct device_node;
  struct iommu_table;
  struct rtc_time;
  struct file;
+ struct pci_dev;
  struct pci_controller;
  struct kimage;
  struct pci_host_bridge;
+ struct seq_file;

  struct machdep_calls {
  	const char	*name;
+7
arch/powerpc/include/asm/module.h
···
  	bool toc_fixed;			/* Have we fixed up .TOC.? */
  #endif

+ #ifdef CONFIG_PPC64_ELF_ABI_V1
  	/* For module function descriptor dereference */
  	unsigned long start_opd;
  	unsigned long end_opd;
+ #endif
  #else /* powerpc64 */
  	/* Indices of PLT sections within module. */
  	unsigned int core_plt_section;
···
  #ifdef CONFIG_DYNAMIC_FTRACE
  	unsigned long tramp;
  	unsigned long tramp_regs;
+ #ifdef CONFIG_PPC_FTRACE_OUT_OF_LINE
+ 	struct ftrace_ool_stub *ool_stubs;
+ 	unsigned int ool_stub_count;
+ 	unsigned int ool_stub_index;
+ #endif
  #endif
  };
+14
arch/powerpc/include/asm/ppc-opcode.h
···
  #define PPC_RAW_MTSPR(spr, d)		(0x7c0003a6 | ___PPC_RS(d) | __PPC_SPR(spr))
  #define PPC_RAW_EIEIO()			(0x7c0006ac)

+ /* bcl 20,31,$+4 */
+ #define PPC_RAW_BCL4()			(0x429f0005)
  #define PPC_RAW_BRANCH(offset)		(0x48000000 | PPC_LI(offset))
  #define PPC_RAW_BL(offset)		(0x48000001 | PPC_LI(offset))
  #define PPC_RAW_TW(t0, a, b)		(0x7c000008 | ___PPC_RS(t0) | ___PPC_RA(a) | ___PPC_RB(b))
  #define PPC_RAW_TRAP()			PPC_RAW_TW(31, 0, 0)
  #define PPC_RAW_SETB(t, bfa)		(0x7c000100 | ___PPC_RT(t) | ___PPC_RA((bfa) << 2))
+
+ #ifdef CONFIG_PPC32
+ #define PPC_RAW_STL			PPC_RAW_STW
+ #define PPC_RAW_STLU			PPC_RAW_STWU
+ #define PPC_RAW_LL			PPC_RAW_LWZ
+ #define PPC_RAW_CMPLI			PPC_RAW_CMPWI
+ #else
+ #define PPC_RAW_STL			PPC_RAW_STD
+ #define PPC_RAW_STLU			PPC_RAW_STDU
+ #define PPC_RAW_LL			PPC_RAW_LD
+ #define PPC_RAW_CMPLI			PPC_RAW_CMPDI
+ #endif

  /* Deal with instructions that older assemblers aren't aware of */
  #define PPC_BCCTR_FLUSH		stringify_in_c(.long PPC_INST_BCCTR_FLUSH)
+7 -7
arch/powerpc/include/asm/set_memory.h
···

  int change_memory_attr(unsigned long addr, int numpages, long action);

- static inline int set_memory_ro(unsigned long addr, int numpages)
+ static inline int __must_check set_memory_ro(unsigned long addr, int numpages)
  {
  	return change_memory_attr(addr, numpages, SET_MEMORY_RO);
  }

- static inline int set_memory_rw(unsigned long addr, int numpages)
+ static inline int __must_check set_memory_rw(unsigned long addr, int numpages)
  {
  	return change_memory_attr(addr, numpages, SET_MEMORY_RW);
  }

- static inline int set_memory_nx(unsigned long addr, int numpages)
+ static inline int __must_check set_memory_nx(unsigned long addr, int numpages)
  {
  	return change_memory_attr(addr, numpages, SET_MEMORY_NX);
  }

- static inline int set_memory_x(unsigned long addr, int numpages)
+ static inline int __must_check set_memory_x(unsigned long addr, int numpages)
  {
  	return change_memory_attr(addr, numpages, SET_MEMORY_X);
  }

- static inline int set_memory_np(unsigned long addr, int numpages)
+ static inline int __must_check set_memory_np(unsigned long addr, int numpages)
  {
  	return change_memory_attr(addr, numpages, SET_MEMORY_NP);
  }

- static inline int set_memory_p(unsigned long addr, int numpages)
+ static inline int __must_check set_memory_p(unsigned long addr, int numpages)
  {
  	return change_memory_attr(addr, numpages, SET_MEMORY_P);
  }

- static inline int set_memory_rox(unsigned long addr, int numpages)
+ static inline int __must_check set_memory_rox(unsigned long addr, int numpages)
  {
  	return change_memory_attr(addr, numpages, SET_MEMORY_ROX);
  }
-1
arch/powerpc/include/asm/spu_priv1.h
···
   */

  extern const struct spu_priv1_ops spu_priv1_mmio_ops;
- extern const struct spu_priv1_ops spu_priv1_beat_ops;

  extern const struct spu_management_ops spu_management_of_ops;
-5
arch/powerpc/include/asm/sstep.h
···
   */
  extern int emulate_loadstore(struct pt_regs *regs, struct instruction_op *op);

- extern void emulate_vsx_load(struct instruction_op *op, union vsx_reg *reg,
- 			     const void *mem, bool cross_endian);
- extern void emulate_vsx_store(struct instruction_op *op,
- 			      const union vsx_reg *reg, void *mem,
- 			      bool cross_endian);
  extern int emulate_dcbz(unsigned long ea, struct pt_regs *regs);
-1
arch/powerpc/include/asm/udbg.h
···
  void __init udbg_init_debug_lpar(void);
  void __init udbg_init_debug_lpar_hvsi(void);
  void __init udbg_init_pmac_realmode(void);
- void __init udbg_init_maple_realmode(void);
  void __init udbg_init_pas_realmode(void);
  void __init udbg_init_rtas_panel(void);
  void __init udbg_init_rtas_console(void);
+1
arch/powerpc/include/asm/vdso.h
···
  #ifdef __VDSO64__
  #define V_FUNCTION_BEGIN(name)		\
  	.globl name;			\
+ 	.type name,@function;		\
  	name:				\

  #define V_FUNCTION_END(name)		\
+14 -2
arch/powerpc/include/asm/vdso/getrandom.h
···

  #ifndef __ASSEMBLY__

+ #include <asm/vdso_datapage.h>
+
  static __always_inline int do_syscall_3(const unsigned long _r0, const unsigned long _r3,
  					const unsigned long _r4, const unsigned long _r5)
  {
···

  static __always_inline struct vdso_rng_data *__arch_get_vdso_rng_data(void)
  {
- 	return NULL;
+ 	struct vdso_arch_data *data;
+
+ 	asm (
+ 		"	bcl	20, 31, .+4		;"
+ 		"0:	mflr	%0			;"
+ 		"	addis	%0, %0, (_vdso_datapage - 0b)@ha	;"
+ 		"	addi	%0, %0, (_vdso_datapage - 0b)@l	;"
+ 		: "=r" (data) : : "lr"
+ 	);
+
+ 	return &data->rng_data;
  }

  ssize_t __c_kernel_getrandom(void *buffer, size_t len, unsigned int flags, void *opaque_state,
- 			     size_t opaque_len, const struct vdso_rng_data *vd);
+ 			     size_t opaque_len);

  #endif /* !__ASSEMBLY__ */
+7 -17
arch/powerpc/include/asm/vdso_datapage.h
···
  	__u32 syscall_map[SYSCALL_MAP_SIZE];		/* Map of syscalls */
  	__u32 compat_syscall_map[SYSCALL_MAP_SIZE];	/* Map of compat syscalls */

- 	struct vdso_data data[CS_BASES];
  	struct vdso_rng_data rng_data;
+
+ 	struct vdso_data data[CS_BASES] __aligned(1 << CONFIG_PAGE_SHIFT);
  };

  #else /* CONFIG_PPC64 */
···
  	__u64 tb_ticks_per_sec;			/* Timebase tics / sec */
  	__u32 syscall_map[SYSCALL_MAP_SIZE];	/* Map of syscalls */
  	__u32 compat_syscall_map[0];		/* No compat syscalls on PPC32 */
- 	struct vdso_data data[CS_BASES];
  	struct vdso_rng_data rng_data;
+
+ 	struct vdso_data data[CS_BASES] __aligned(1 << CONFIG_PAGE_SHIFT);
  };

  #endif /* CONFIG_PPC64 */
···

  #else /* __ASSEMBLY__ */

- .macro get_datapage ptr
+ .macro get_datapage ptr offset=0
  	bcl	20, 31, .+4
  999:
  	mflr	\ptr
- 	addis	\ptr, \ptr, (_vdso_datapage - 999b)@ha
- 	addi	\ptr, \ptr, (_vdso_datapage - 999b)@l
+ 	addis	\ptr, \ptr, (_vdso_datapage - 999b + \offset)@ha
+ 	addi	\ptr, \ptr, (_vdso_datapage - 999b + \offset)@l
  .endm

  #include <asm/asm-offsets.h>
  #include <asm/page.h>
-
- .macro get_realdatapage ptr scratch
- 	get_datapage \ptr
- #ifdef CONFIG_TIME_NS
- 	lwz	\scratch, VDSO_CLOCKMODE_OFFSET(\ptr)
- 	xoris	\scratch, \scratch, VDSO_CLOCKMODE_TIMENS@h
- 	xori	\scratch, \scratch, VDSO_CLOCKMODE_TIMENS@l
- 	cntlzw	\scratch, \scratch
- 	rlwinm	\scratch, \scratch, PAGE_SHIFT - 5, 1 << PAGE_SHIFT
- 	add	\ptr, \ptr, \scratch
- #endif
- .endm

  #endif /* __ASSEMBLY__ */
+11 -4
arch/powerpc/kernel/asm-offsets.c
··· 335 335 336 336 /* datapage offsets for use by vdso */ 337 337 OFFSET(VDSO_DATA_OFFSET, vdso_arch_data, data); 338 - OFFSET(VDSO_RNG_DATA_OFFSET, vdso_arch_data, rng_data); 339 338 OFFSET(CFG_TB_TICKS_PER_SEC, vdso_arch_data, tb_ticks_per_sec); 340 339 #ifdef CONFIG_PPC64 341 340 OFFSET(CFG_ICACHE_BLOCKSZ, vdso_arch_data, icache_block_size); ··· 346 347 #else 347 348 OFFSET(CFG_SYSCALL_MAP32, vdso_arch_data, syscall_map); 348 349 #endif 349 - OFFSET(VDSO_CLOCKMODE_OFFSET, vdso_arch_data, data[0].clock_mode); 350 - DEFINE(VDSO_CLOCKMODE_TIMENS, VDSO_CLOCKMODE_TIMENS); 351 350 352 351 #ifdef CONFIG_BUG 353 352 DEFINE(BUG_ENTRY_SIZE, sizeof(struct bug_entry)); ··· 594 597 HSTATE_FIELD(HSTATE_DABR, dabr); 595 598 HSTATE_FIELD(HSTATE_DECEXP, dec_expires); 596 599 HSTATE_FIELD(HSTATE_SPLIT_MODE, kvm_split_mode); 597 - DEFINE(IPI_PRIORITY, IPI_PRIORITY); 598 600 OFFSET(KVM_SPLIT_RPR, kvm_split_mode, rpr); 599 601 OFFSET(KVM_SPLIT_PMMAR, kvm_split_mode, pmmar); 600 602 OFFSET(KVM_SPLIT_LDBAR, kvm_split_mode, ldbar); ··· 671 675 672 676 #ifdef CONFIG_XMON 673 677 DEFINE(BPT_SIZE, BPT_SIZE); 678 + #endif 679 + 680 + #ifdef CONFIG_PPC_FTRACE_OUT_OF_LINE 681 + DEFINE(FTRACE_OOL_STUB_SIZE, sizeof(struct ftrace_ool_stub)); 682 + #endif 683 + 684 + #ifdef CONFIG_DYNAMIC_FTRACE_WITH_CALL_OPS 685 + OFFSET(FTRACE_OPS_FUNC, ftrace_ops, func); 686 + #ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS 687 + OFFSET(FTRACE_OPS_DIRECT_CALL, ftrace_ops, direct_call); 688 + #endif 674 689 #endif 675 690 676 691 return 0;
+39 -33
arch/powerpc/kernel/fadump.c
··· 78 78 * But for some reason even if it fails we still have the memory reservation 79 79 * with us and we can still continue doing fadump. 80 80 */ 81 - static int __init fadump_cma_init(void) 81 + void __init fadump_cma_init(void) 82 82 { 83 - unsigned long long base, size; 83 + unsigned long long base, size, end; 84 84 int rc; 85 85 86 - if (!fw_dump.fadump_enabled) 87 - return 0; 88 - 86 + if (!fw_dump.fadump_supported || !fw_dump.fadump_enabled || 87 + fw_dump.dump_active) 88 + return; 89 89 /* 90 90 * Do not use CMA if user has provided fadump=nocma kernel parameter. 91 - * Return 1 to continue with fadump old behaviour. 92 91 */ 93 - if (fw_dump.nocma) 94 - return 1; 92 + if (fw_dump.nocma || !fw_dump.boot_memory_size) 93 + return; 95 94 95 + /* 96 + * [base, end) should be reserved during early init in 97 + * fadump_reserve_mem(). No need to check this here as 98 + * cma_init_reserved_mem() already checks for overlap. 99 + * Here we give the aligned chunk of this reserved memory to CMA. 100 + */ 96 101 base = fw_dump.reserve_dump_area_start; 97 102 size = fw_dump.boot_memory_size; 103 + end = base + size; 98 104 99 - if (!size) 100 - return 0; 105 + base = ALIGN(base, CMA_MIN_ALIGNMENT_BYTES); 106 + end = ALIGN_DOWN(end, CMA_MIN_ALIGNMENT_BYTES); 107 + size = end - base; 108 + 109 + if (end <= base) { 110 + pr_warn("%s: Too less memory to give to CMA\n", __func__); 111 + return; 112 + } 101 113 102 114 rc = cma_init_reserved_mem(base, size, 0, "fadump_cma", &fadump_cma); 103 115 if (rc) { ··· 120 108 * blocked from production system usage. Hence return 1, 121 109 * so that we can continue with fadump. 122 110 */ 123 - return 1; 111 + return; 124 112 } 125 113 126 114 /* ··· 132 120 /* 133 121 * So we now have successfully initialized cma area for fadump. 
134 122 */ 135 - pr_info("Initialized 0x%lx bytes cma area at %ldMB from 0x%lx " 123 + pr_info("Initialized [0x%llx, %luMB] cma area from [0x%lx, %luMB] " 136 124 "bytes of memory reserved for firmware-assisted dump\n", 137 - cma_get_size(fadump_cma), 138 - (unsigned long)cma_get_base(fadump_cma) >> 20, 139 - fw_dump.reserve_dump_area_size); 140 - return 1; 125 + cma_get_base(fadump_cma), cma_get_size(fadump_cma) >> 20, 126 + fw_dump.reserve_dump_area_start, 127 + fw_dump.boot_memory_size >> 20); 128 + return; 141 129 } 142 - #else 143 - static int __init fadump_cma_init(void) { return 1; } 144 130 #endif /* CONFIG_CMA */ 145 131 146 132 /* ··· 153 143 if (!fw_dump.dump_active || !fw_dump.param_area_supported || !fw_dump.param_area) 154 144 return; 155 145 156 - if (fw_dump.param_area >= fw_dump.boot_mem_top) { 146 + if (fw_dump.param_area < fw_dump.boot_mem_top) { 157 147 if (memblock_reserve(fw_dump.param_area, COMMAND_LINE_SIZE)) { 158 148 pr_warn("WARNING: Can't use additional parameters area!\n"); 159 149 fw_dump.param_area = 0; ··· 568 558 if (!fw_dump.dump_active) { 569 559 fw_dump.boot_memory_size = 570 560 PAGE_ALIGN(fadump_calculate_reserve_size()); 571 - #ifdef CONFIG_CMA 572 - if (!fw_dump.nocma) { 573 - fw_dump.boot_memory_size = 574 - ALIGN(fw_dump.boot_memory_size, 575 - CMA_MIN_ALIGNMENT_BYTES); 576 - } 577 - #endif 578 561 579 562 bootmem_min = fw_dump.ops->fadump_get_bootmem_min(); 580 563 if (fw_dump.boot_memory_size < bootmem_min) { ··· 640 637 641 638 pr_info("Reserved %lldMB of memory at %#016llx (System RAM: %lldMB)\n", 642 639 (size >> 20), base, (memblock_phys_mem_size() >> 20)); 643 - 644 - ret = fadump_cma_init(); 645 640 } 646 641 647 642 return ret; ··· 1587 1586 return; 1588 1587 } 1589 1588 1589 + if (fw_dump.param_area) { 1590 + rc = sysfs_create_file(fadump_kobj, &bootargs_append_attr.attr); 1591 + if (rc) 1592 + pr_err("unable to create bootargs_append sysfs file (%d)\n", rc); 1593 + } 1594 + 1590 1595 
debugfs_create_file("fadump_region", 0444, arch_debugfs_dir, NULL, 1591 1596 &fadump_region_fops); 1592 1597 ··· 1747 1740 * Reserve memory to store additional parameters to be passed 1748 1741 * for fadump/capture kernel. 1749 1742 */ 1750 - static void __init fadump_setup_param_area(void) 1743 + void __init fadump_setup_param_area(void) 1751 1744 { 1752 1745 phys_addr_t range_start, range_end; 1753 1746 ··· 1755 1748 return; 1756 1749 1757 1750 /* This memory can't be used by PFW or bootloader as it is shared across kernels */ 1758 - if (radix_enabled()) { 1751 + if (early_radix_enabled()) { 1759 1752 /* 1760 1753 * Anywhere in the upper half should be good enough as all memory 1761 1754 * is accessible in real mode. ··· 1783 1776 COMMAND_LINE_SIZE, 1784 1777 range_start, 1785 1778 range_end); 1786 - if (!fw_dump.param_area || sysfs_create_file(fadump_kobj, &bootargs_append_attr.attr)) { 1779 + if (!fw_dump.param_area) { 1787 1780 pr_warn("WARNING: Could not setup area to pass additional parameters!\n"); 1788 1781 return; 1789 1782 } 1790 1783 1791 - memset(phys_to_virt(fw_dump.param_area), 0, COMMAND_LINE_SIZE); 1784 + memset((void *)fw_dump.param_area, 0, COMMAND_LINE_SIZE); 1792 1785 } 1793 1786 1794 1787 /* ··· 1814 1807 } 1815 1808 /* Initialize the kernel dump memory structure and register with f/w */ 1816 1809 else if (fw_dump.reserve_dump_area_size) { 1817 - fadump_setup_param_area(); 1818 1810 fw_dump.ops->fadump_init_mem_struct(&fw_dump); 1819 1811 register_fadump(); 1820 1812 }
+22 -22
arch/powerpc/kernel/irq.c
··· 89 89 90 90 #if defined(CONFIG_PPC32) && defined(CONFIG_TAU_INT) 91 91 if (tau_initialized) { 92 - seq_printf(p, "%*s: ", prec, "TAU"); 92 + seq_printf(p, "%*s:", prec, "TAU"); 93 93 for_each_online_cpu(j) 94 - seq_printf(p, "%10u ", tau_interrupts(j)); 94 + seq_put_decimal_ull_width(p, " ", tau_interrupts(j), 10); 95 95 seq_puts(p, " PowerPC Thermal Assist (cpu temp)\n"); 96 96 } 97 97 #endif /* CONFIG_PPC32 && CONFIG_TAU_INT */ 98 98 99 - seq_printf(p, "%*s: ", prec, "LOC"); 99 + seq_printf(p, "%*s:", prec, "LOC"); 100 100 for_each_online_cpu(j) 101 - seq_printf(p, "%10u ", per_cpu(irq_stat, j).timer_irqs_event); 101 + seq_put_decimal_ull_width(p, " ", per_cpu(irq_stat, j).timer_irqs_event, 10); 102 102 seq_printf(p, " Local timer interrupts for timer event device\n"); 103 103 104 - seq_printf(p, "%*s: ", prec, "BCT"); 104 + seq_printf(p, "%*s:", prec, "BCT"); 105 105 for_each_online_cpu(j) 106 - seq_printf(p, "%10u ", per_cpu(irq_stat, j).broadcast_irqs_event); 106 + seq_put_decimal_ull_width(p, " ", per_cpu(irq_stat, j).broadcast_irqs_event, 10); 107 107 seq_printf(p, " Broadcast timer interrupts for timer event device\n"); 108 108 109 - seq_printf(p, "%*s: ", prec, "LOC"); 109 + seq_printf(p, "%*s:", prec, "LOC"); 110 110 for_each_online_cpu(j) 111 - seq_printf(p, "%10u ", per_cpu(irq_stat, j).timer_irqs_others); 111 + seq_put_decimal_ull_width(p, " ", per_cpu(irq_stat, j).timer_irqs_others, 10); 112 112 seq_printf(p, " Local timer interrupts for others\n"); 113 113 114 - seq_printf(p, "%*s: ", prec, "SPU"); 114 + seq_printf(p, "%*s:", prec, "SPU"); 115 115 for_each_online_cpu(j) 116 - seq_printf(p, "%10u ", per_cpu(irq_stat, j).spurious_irqs); 116 + seq_put_decimal_ull_width(p, " ", per_cpu(irq_stat, j).spurious_irqs, 10); 117 117 seq_printf(p, " Spurious interrupts\n"); 118 118 119 - seq_printf(p, "%*s: ", prec, "PMI"); 119 + seq_printf(p, "%*s:", prec, "PMI"); 120 120 for_each_online_cpu(j) 121 - seq_printf(p, "%10u ", per_cpu(irq_stat, j).pmu_irqs); 
121 + seq_put_decimal_ull_width(p, " ", per_cpu(irq_stat, j).pmu_irqs, 10); 122 122 seq_printf(p, " Performance monitoring interrupts\n"); 123 123 124 - seq_printf(p, "%*s: ", prec, "MCE"); 124 + seq_printf(p, "%*s:", prec, "MCE"); 125 125 for_each_online_cpu(j) 126 - seq_printf(p, "%10u ", per_cpu(irq_stat, j).mce_exceptions); 126 + seq_put_decimal_ull_width(p, " ", per_cpu(irq_stat, j).mce_exceptions, 10); 127 127 seq_printf(p, " Machine check exceptions\n"); 128 128 129 129 #ifdef CONFIG_PPC_BOOK3S_64 130 130 if (cpu_has_feature(CPU_FTR_HVMODE)) { 131 - seq_printf(p, "%*s: ", prec, "HMI"); 131 + seq_printf(p, "%*s:", prec, "HMI"); 132 132 for_each_online_cpu(j) 133 - seq_printf(p, "%10u ", paca_ptrs[j]->hmi_irqs); 133 + seq_put_decimal_ull_width(p, " ", paca_ptrs[j]->hmi_irqs, 10); 134 134 seq_printf(p, " Hypervisor Maintenance Interrupts\n"); 135 135 } 136 136 #endif 137 137 138 - seq_printf(p, "%*s: ", prec, "NMI"); 138 + seq_printf(p, "%*s:", prec, "NMI"); 139 139 for_each_online_cpu(j) 140 - seq_printf(p, "%10u ", per_cpu(irq_stat, j).sreset_irqs); 140 + seq_put_decimal_ull_width(p, " ", per_cpu(irq_stat, j).sreset_irqs, 10); 141 141 seq_printf(p, " System Reset interrupts\n"); 142 142 143 143 #ifdef CONFIG_PPC_WATCHDOG 144 - seq_printf(p, "%*s: ", prec, "WDG"); 144 + seq_printf(p, "%*s:", prec, "WDG"); 145 145 for_each_online_cpu(j) 146 - seq_printf(p, "%10u ", per_cpu(irq_stat, j).soft_nmi_irqs); 146 + seq_put_decimal_ull_width(p, " ", per_cpu(irq_stat, j).soft_nmi_irqs, 10); 147 147 seq_printf(p, " Watchdog soft-NMI interrupts\n"); 148 148 #endif 149 149 150 150 #ifdef CONFIG_PPC_DOORBELL 151 151 if (cpu_has_feature(CPU_FTR_DBELL)) { 152 - seq_printf(p, "%*s: ", prec, "DBL"); 152 + seq_printf(p, "%*s:", prec, "DBL"); 153 153 for_each_online_cpu(j) 154 - seq_printf(p, "%10u ", per_cpu(irq_stat, j).doorbell_irqs); 154 + seq_put_decimal_ull_width(p, " ", per_cpu(irq_stat, j).doorbell_irqs, 10); 155 155 seq_printf(p, " Doorbell interrupts\n"); 156 156 } 157 
157 #endif
+8 -10
arch/powerpc/kernel/kprobes.c
··· 105 105 return addr; 106 106 } 107 107 108 - static bool arch_kprobe_on_func_entry(unsigned long offset) 108 + static bool arch_kprobe_on_func_entry(unsigned long addr, unsigned long offset) 109 109 { 110 - #ifdef CONFIG_PPC64_ELF_ABI_V2 111 - #ifdef CONFIG_KPROBES_ON_FTRACE 112 - return offset <= 16; 113 - #else 114 - return offset <= 8; 115 - #endif 116 - #else 110 + unsigned long ip = ftrace_location(addr); 111 + 112 + if (ip) 113 + return offset <= (ip - addr); 114 + if (IS_ENABLED(CONFIG_PPC64_ELF_ABI_V2) && !IS_ENABLED(CONFIG_PPC_KERNEL_PCREL)) 115 + return offset <= 8; 117 116 return !offset; 118 - #endif 119 117 } 120 118 121 119 /* XXX try and fold the magic of kprobe_lookup_name() in this */ 122 120 kprobe_opcode_t *arch_adjust_kprobe_addr(unsigned long addr, unsigned long offset, 123 121 bool *on_func_entry) 124 122 { 125 - *on_func_entry = arch_kprobe_on_func_entry(offset); 123 + *on_func_entry = arch_kprobe_on_func_entry(addr, offset); 126 124 return (kprobe_opcode_t *)(addr + offset); 127 125 } 128 126
+4 -4
arch/powerpc/kernel/misc_64.S
··· 74 74 blr 75 75 #endif /* CONFIG_PPC_EARLY_DEBUG_BOOTX */ 76 76 77 - #if defined(CONFIG_PPC_PMAC) || defined(CONFIG_PPC_MAPLE) 77 + #ifdef CONFIG_PPC_PMAC 78 78 79 79 /* 80 80 * Do an IO access in real mode ··· 137 137 sync 138 138 isync 139 139 blr 140 - #endif /* defined(CONFIG_PPC_PMAC) || defined(CONFIG_PPC_MAPLE) */ 140 + #endif // CONFIG_PPC_PMAC 141 141 142 142 #ifdef CONFIG_PPC_PASEMI 143 143 ··· 174 174 #endif /* CONFIG_PPC_PASEMI */ 175 175 176 176 177 - #if defined(CONFIG_CPU_FREQ_PMAC64) || defined(CONFIG_CPU_FREQ_MAPLE) 177 + #ifdef CONFIG_CPU_FREQ_PMAC64 178 178 /* 179 179 * SCOM access functions for 970 (FX only for now) 180 180 * ··· 243 243 /* restore interrupts */ 244 244 mtmsrd r5,1 245 245 blr 246 - #endif /* CONFIG_CPU_FREQ_PMAC64 || CONFIG_CPU_FREQ_MAPLE */ 246 + #endif // CONFIG_CPU_FREQ_PMAC64 247 247 248 248 /* kexec_wait(phys_cpu) 249 249 *
+57 -9
arch/powerpc/kernel/module_64.c
··· 205 205 206 206 /* Get size of potential trampolines required. */ 207 207 static unsigned long get_stubs_size(const Elf64_Ehdr *hdr, 208 - const Elf64_Shdr *sechdrs) 208 + const Elf64_Shdr *sechdrs, 209 + char *secstrings, 210 + struct module *me) 209 211 { 210 212 /* One extra reloc so it's always 0-addr terminated */ 211 213 unsigned long relocs = 1; ··· 243 241 } 244 242 } 245 243 246 - #ifdef CONFIG_DYNAMIC_FTRACE 247 - /* make the trampoline to the ftrace_caller */ 248 - relocs++; 249 - #ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS 250 - /* an additional one for ftrace_regs_caller */ 251 - relocs++; 252 - #endif 244 + /* stubs for ftrace_caller and ftrace_regs_caller */ 245 + relocs += IS_ENABLED(CONFIG_DYNAMIC_FTRACE) + IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_REGS); 246 + 247 + #ifdef CONFIG_PPC_FTRACE_OUT_OF_LINE 248 + /* stubs for the function tracer */ 249 + for (i = 1; i < hdr->e_shnum; i++) { 250 + if (!strcmp(secstrings + sechdrs[i].sh_name, "__patchable_function_entries")) { 251 + me->arch.ool_stub_count = sechdrs[i].sh_size / sizeof(unsigned long); 252 + me->arch.ool_stub_index = 0; 253 + relocs += roundup(me->arch.ool_stub_count * sizeof(struct ftrace_ool_stub), 254 + sizeof(struct ppc64_stub_entry)) / 255 + sizeof(struct ppc64_stub_entry); 256 + break; 257 + } 258 + } 259 + if (i == hdr->e_shnum) { 260 + pr_err("%s: doesn't contain __patchable_function_entries.\n", me->name); 261 + return -ENOEXEC; 262 + } 253 263 #endif 254 264 255 265 pr_debug("Looks like a total of %lu stubs, max\n", relocs); ··· 474 460 #endif 475 461 476 462 /* Override the stubs size */ 477 - sechdrs[me->arch.stubs_section].sh_size = get_stubs_size(hdr, sechdrs); 463 + sechdrs[me->arch.stubs_section].sh_size = get_stubs_size(hdr, sechdrs, secstrings, me); 478 464 479 465 return 0; 480 466 } ··· 1099 1085 return 0; 1100 1086 } 1101 1087 1088 + static int setup_ftrace_ool_stubs(const Elf64_Shdr *sechdrs, unsigned long addr, struct module *me) 1089 + { 1090 + #ifdef 
CONFIG_PPC_FTRACE_OUT_OF_LINE 1091 + unsigned int i, total_stubs, num_stubs; 1092 + struct ppc64_stub_entry *stub; 1093 + 1094 + total_stubs = sechdrs[me->arch.stubs_section].sh_size / sizeof(*stub); 1095 + num_stubs = roundup(me->arch.ool_stub_count * sizeof(struct ftrace_ool_stub), 1096 + sizeof(struct ppc64_stub_entry)) / sizeof(struct ppc64_stub_entry); 1097 + 1098 + /* Find the next available entry */ 1099 + stub = (void *)sechdrs[me->arch.stubs_section].sh_addr; 1100 + for (i = 0; stub_func_addr(stub[i].funcdata); i++) 1101 + if (WARN_ON(i >= total_stubs)) 1102 + return -1; 1103 + 1104 + if (WARN_ON(i + num_stubs > total_stubs)) 1105 + return -1; 1106 + 1107 + stub += i; 1108 + me->arch.ool_stubs = (struct ftrace_ool_stub *)stub; 1109 + 1110 + /* reserve stubs */ 1111 + for (i = 0; i < num_stubs; i++) 1112 + if (patch_u32((void *)&stub->funcdata, PPC_RAW_NOP())) 1113 + return -1; 1114 + #endif 1115 + 1116 + return 0; 1117 + } 1118 + 1102 1119 int module_finalize_ftrace(struct module *mod, const Elf_Shdr *sechdrs) 1103 1120 { 1104 1121 mod->arch.tramp = stub_for_addr(sechdrs, ··· 1146 1101 #endif 1147 1102 1148 1103 if (!mod->arch.tramp) 1104 + return -ENOENT; 1105 + 1106 + if (setup_ftrace_ool_stubs(sechdrs, mod->arch.tramp, mod)) 1149 1107 return -ENOENT; 1150 1108 1151 1109 return 0;
+3
arch/powerpc/kernel/prom.c
··· 908 908 909 909 mmu_early_init_devtree(); 910 910 911 + /* Setup param area for passing additional parameters to fadump capture kernel. */ 912 + fadump_setup_param_area(); 913 + 911 914 #ifdef CONFIG_PPC_POWERNV 912 915 /* Scan and build the list of machine check recoverable ranges */ 913 916 of_scan_flat_dt(early_init_dt_scan_recoverable_ranges, NULL);
-86
arch/powerpc/kernel/prom_init.c
··· 2792 2792 dt_struct_start, dt_struct_end); 2793 2793 } 2794 2794 2795 - #ifdef CONFIG_PPC_MAPLE 2796 - /* PIBS Version 1.05.0000 04/26/2005 has an incorrect /ht/isa/ranges property. 2797 - * The values are bad, and it doesn't even have the right number of cells. */ 2798 - static void __init fixup_device_tree_maple(void) 2799 - { 2800 - phandle isa; 2801 - u32 rloc = 0x01002000; /* IO space; PCI device = 4 */ 2802 - u32 isa_ranges[6]; 2803 - char *name; 2804 - 2805 - name = "/ht@0/isa@4"; 2806 - isa = call_prom("finddevice", 1, 1, ADDR(name)); 2807 - if (!PHANDLE_VALID(isa)) { 2808 - name = "/ht@0/isa@6"; 2809 - isa = call_prom("finddevice", 1, 1, ADDR(name)); 2810 - rloc = 0x01003000; /* IO space; PCI device = 6 */ 2811 - } 2812 - if (!PHANDLE_VALID(isa)) 2813 - return; 2814 - 2815 - if (prom_getproplen(isa, "ranges") != 12) 2816 - return; 2817 - if (prom_getprop(isa, "ranges", isa_ranges, sizeof(isa_ranges)) 2818 - == PROM_ERROR) 2819 - return; 2820 - 2821 - if (isa_ranges[0] != 0x1 || 2822 - isa_ranges[1] != 0xf4000000 || 2823 - isa_ranges[2] != 0x00010000) 2824 - return; 2825 - 2826 - prom_printf("Fixing up bogus ISA range on Maple/Apache...\n"); 2827 - 2828 - isa_ranges[0] = 0x1; 2829 - isa_ranges[1] = 0x0; 2830 - isa_ranges[2] = rloc; 2831 - isa_ranges[3] = 0x0; 2832 - isa_ranges[4] = 0x0; 2833 - isa_ranges[5] = 0x00010000; 2834 - prom_setprop(isa, name, "ranges", 2835 - isa_ranges, sizeof(isa_ranges)); 2836 - } 2837 - 2838 - #define CPC925_MC_START 0xf8000000 2839 - #define CPC925_MC_LENGTH 0x1000000 2840 - /* The values for memory-controller don't have right number of cells */ 2841 - static void __init fixup_device_tree_maple_memory_controller(void) 2842 - { 2843 - phandle mc; 2844 - u32 mc_reg[4]; 2845 - char *name = "/hostbridge@f8000000"; 2846 - u32 ac, sc; 2847 - 2848 - mc = call_prom("finddevice", 1, 1, ADDR(name)); 2849 - if (!PHANDLE_VALID(mc)) 2850 - return; 2851 - 2852 - if (prom_getproplen(mc, "reg") != 8) 2853 - return; 2854 - 2855 - 
prom_getprop(prom.root, "#address-cells", &ac, sizeof(ac)); 2856 - prom_getprop(prom.root, "#size-cells", &sc, sizeof(sc)); 2857 - if ((ac != 2) || (sc != 2)) 2858 - return; 2859 - 2860 - if (prom_getprop(mc, "reg", mc_reg, sizeof(mc_reg)) == PROM_ERROR) 2861 - return; 2862 - 2863 - if (mc_reg[0] != CPC925_MC_START || mc_reg[1] != CPC925_MC_LENGTH) 2864 - return; 2865 - 2866 - prom_printf("Fixing up bogus hostbridge on Maple...\n"); 2867 - 2868 - mc_reg[0] = 0x0; 2869 - mc_reg[1] = CPC925_MC_START; 2870 - mc_reg[2] = 0x0; 2871 - mc_reg[3] = CPC925_MC_LENGTH; 2872 - prom_setprop(mc, name, "reg", mc_reg, sizeof(mc_reg)); 2873 - } 2874 - #else 2875 - #define fixup_device_tree_maple() 2876 - #define fixup_device_tree_maple_memory_controller() 2877 - #endif 2878 - 2879 2795 #ifdef CONFIG_PPC_CHRP 2880 2796 /* 2881 2797 * Pegasos and BriQ lacks the "ranges" property in the isa node ··· 3109 3193 3110 3194 static void __init fixup_device_tree(void) 3111 3195 { 3112 - fixup_device_tree_maple(); 3113 - fixup_device_tree_maple_memory_controller(); 3114 3196 fixup_device_tree_chrp(); 3115 3197 fixup_device_tree_pmac(); 3116 3198 fixup_device_tree_efika();
+3 -2
arch/powerpc/kernel/secure_boot.c
··· 5 5 */ 6 6 #include <linux/types.h> 7 7 #include <linux/of.h> 8 + #include <linux/string_choices.h> 8 9 #include <asm/secure_boot.h> 9 10 10 11 static struct device_node *get_ppc_fw_sb_node(void) ··· 39 38 of_node_put(node); 40 39 41 40 out: 42 - pr_info("Secure boot mode %s\n", enabled ? "enabled" : "disabled"); 41 + pr_info("Secure boot mode %s\n", str_enabled_disabled(enabled)); 43 42 44 43 return enabled; 45 44 } ··· 63 62 of_node_put(node); 64 63 65 64 out: 66 - pr_info("Trusted boot mode %s\n", enabled ? "enabled" : "disabled"); 65 + pr_info("Trusted boot mode %s\n", str_enabled_disabled(enabled)); 67 66 68 67 return enabled; 69 68 }
+4 -2
arch/powerpc/kernel/setup-common.c
··· 1000 1000 initmem_init(); 1001 1001 1002 1002 /* 1003 - * Reserve large chunks of memory for use by CMA for KVM and hugetlb. These must 1004 - * be called after initmem_init(), so that pageblock_order is initialised. 1003 + * Reserve large chunks of memory for use by CMA for fadump, KVM and 1004 + * hugetlb. These must be called after initmem_init(), so that 1005 + * pageblock_order is initialised. 1005 1006 */ 1007 + fadump_cma_init(); 1006 1008 kvm_cma_reserve(); 1007 1009 gigantic_hugetlb_cma_reserve(); 1008 1010
+1
arch/powerpc/kernel/setup_64.c
··· 920 920 hardlockup_detector_disable(); 921 921 #else 922 922 if (firmware_has_feature(FW_FEATURE_LPAR)) { 923 + check_kvm_guest(); 923 924 if (is_kvm_guest()) 924 925 hardlockup_detector_disable(); 925 926 }
+1
arch/powerpc/kernel/sysfs.c
··· 17 17 #include <asm/hvcall.h> 18 18 #include <asm/machdep.h> 19 19 #include <asm/smp.h> 20 + #include <asm/time.h> 20 21 #include <asm/pmc.h> 21 22 #include <asm/firmware.h> 22 23 #include <asm/idle.h>
+7 -4
arch/powerpc/kernel/trace/Makefile
··· 9 9 CFLAGS_REMOVE_ftrace_64_pg.o = $(CC_FLAGS_FTRACE) 10 10 endif 11 11 12 - obj32-$(CONFIG_FUNCTION_TRACER) += ftrace.o ftrace_entry.o 13 - ifdef CONFIG_MPROFILE_KERNEL 14 - obj64-$(CONFIG_FUNCTION_TRACER) += ftrace.o ftrace_entry.o 12 + ifdef CONFIG_FUNCTION_TRACER 13 + obj32-y += ftrace.o ftrace_entry.o 14 + ifeq ($(CONFIG_MPROFILE_KERNEL)$(CONFIG_ARCH_USING_PATCHABLE_FUNCTION_ENTRY),) 15 + obj64-y += ftrace_64_pg.o ftrace_64_pg_entry.o 15 16 else 16 - obj64-$(CONFIG_FUNCTION_TRACER) += ftrace_64_pg.o ftrace_64_pg_entry.o 17 + obj64-y += ftrace.o ftrace_entry.o 17 18 endif 19 + endif 20 + 18 21 obj-$(CONFIG_TRACING) += trace_clock.o 19 22 20 23 obj-$(CONFIG_PPC64) += $(obj64-y)
+269 -33
arch/powerpc/kernel/trace/ftrace.c
··· 37 37 if (addr >= (unsigned long)__exittext_begin && addr < (unsigned long)__exittext_end) 38 38 return 0; 39 39 40 - if (IS_ENABLED(CONFIG_ARCH_USING_PATCHABLE_FUNCTION_ENTRY)) 40 + if (IS_ENABLED(CONFIG_ARCH_USING_PATCHABLE_FUNCTION_ENTRY) && 41 + !IS_ENABLED(CONFIG_PPC_FTRACE_OUT_OF_LINE)) { 41 42 addr += MCOUNT_INSN_SIZE; 43 + if (IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_CALL_OPS)) 44 + addr += MCOUNT_INSN_SIZE; 45 + } 42 46 43 47 return addr; 44 48 } ··· 86 82 { 87 83 int ret = ftrace_validate_inst(ip, old); 88 84 89 - if (!ret) 85 + if (!ret && !ppc_inst_equal(old, new)) 90 86 ret = patch_instruction((u32 *)ip, new); 91 87 92 88 return ret; ··· 110 106 return 0; 111 107 } 112 108 109 + #ifdef CONFIG_MODULES 110 + static unsigned long ftrace_lookup_module_stub(unsigned long ip, unsigned long addr) 111 + { 112 + struct module *mod = NULL; 113 + 114 + preempt_disable(); 115 + mod = __module_text_address(ip); 116 + preempt_enable(); 117 + 118 + if (!mod) 119 + pr_err("No module loaded at addr=%lx\n", ip); 120 + 121 + return (addr == (unsigned long)ftrace_caller ? mod->arch.tramp : mod->arch.tramp_regs); 122 + } 123 + #else 124 + static unsigned long ftrace_lookup_module_stub(unsigned long ip, unsigned long addr) 125 + { 126 + return 0; 127 + } 128 + #endif 129 + 130 + static unsigned long ftrace_get_ool_stub(struct dyn_ftrace *rec) 131 + { 132 + #ifdef CONFIG_PPC_FTRACE_OUT_OF_LINE 133 + return rec->arch.ool_stub; 134 + #else 135 + BUILD_BUG(); 136 + #endif 137 + } 138 + 113 139 static int ftrace_get_call_inst(struct dyn_ftrace *rec, unsigned long addr, ppc_inst_t *call_inst) 114 140 { 115 - unsigned long ip = rec->ip; 141 + unsigned long ip; 116 142 unsigned long stub; 117 143 118 - if (is_offset_in_branch_range(addr - ip)) { 119 - /* Within range */ 120 - stub = addr; 121 - #ifdef CONFIG_MODULES 122 - } else if (rec->arch.mod) { 123 - /* Module code would be going to one of the module stubs */ 124 - stub = (addr == (unsigned long)ftrace_caller ? 
rec->arch.mod->arch.tramp : 125 - rec->arch.mod->arch.tramp_regs); 126 - #endif 127 - } else if (core_kernel_text(ip)) { 128 - /* We would be branching to one of our ftrace stubs */ 129 - stub = find_ftrace_tramp(ip); 130 - if (!stub) { 131 - pr_err("0x%lx: No ftrace stubs reachable\n", ip); 144 + if (IS_ENABLED(CONFIG_PPC_FTRACE_OUT_OF_LINE)) 145 + ip = ftrace_get_ool_stub(rec) + MCOUNT_INSN_SIZE; /* second instruction in stub */ 146 + else 147 + ip = rec->ip; 148 + 149 + if (!is_offset_in_branch_range(addr - ip) && addr != FTRACE_ADDR && 150 + addr != FTRACE_REGS_ADDR) { 151 + /* This can only happen with ftrace direct */ 152 + if (!IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS)) { 153 + pr_err("0x%lx (0x%lx): Unexpected target address 0x%lx\n", 154 + ip, rec->ip, addr); 132 155 return -EINVAL; 133 156 } 134 - } else { 157 + addr = FTRACE_ADDR; 158 + } 159 + 160 + if (is_offset_in_branch_range(addr - ip)) 161 + /* Within range */ 162 + stub = addr; 163 + else if (core_kernel_text(ip)) 164 + /* We would be branching to one of our ftrace stubs */ 165 + stub = find_ftrace_tramp(ip); 166 + else 167 + stub = ftrace_lookup_module_stub(ip, addr); 168 + 169 + if (!stub) { 170 + pr_err("0x%lx (0x%lx): No ftrace stubs reachable\n", ip, rec->ip); 135 171 return -EINVAL; 136 172 } 137 173 138 174 *call_inst = ftrace_create_branch_inst(ip, stub, 1); 139 175 return 0; 140 176 } 177 + 178 + static int ftrace_init_ool_stub(struct module *mod, struct dyn_ftrace *rec) 179 + { 180 + #ifdef CONFIG_PPC_FTRACE_OUT_OF_LINE 181 + static int ool_stub_text_index, ool_stub_text_end_index, ool_stub_inittext_index; 182 + int ret = 0, ool_stub_count, *ool_stub_index; 183 + ppc_inst_t inst; 184 + /* 185 + * See ftrace_entry.S if changing the below instruction sequence, as we rely on 186 + * decoding the last branch instruction here to recover the correct function ip. 
187 + */ 188 + struct ftrace_ool_stub *ool_stub, ool_stub_template = { 189 + .insn = { 190 + PPC_RAW_MFLR(_R0), 191 + PPC_RAW_NOP(), /* bl ftrace_caller */ 192 + PPC_RAW_MTLR(_R0), 193 + PPC_RAW_NOP() /* b rec->ip + 4 */ 194 + } 195 + }; 196 + 197 + WARN_ON(rec->arch.ool_stub); 198 + 199 + if (is_kernel_inittext(rec->ip)) { 200 + ool_stub = ftrace_ool_stub_inittext; 201 + ool_stub_index = &ool_stub_inittext_index; 202 + ool_stub_count = ftrace_ool_stub_inittext_count; 203 + } else if (is_kernel_text(rec->ip)) { 204 + /* 205 + * ftrace records are sorted, so we first use up the stub area within .text 206 + * (ftrace_ool_stub_text) before using the area at the end of .text 207 + * (ftrace_ool_stub_text_end), unless the stub is out of range of the record. 208 + */ 209 + if (ool_stub_text_index >= ftrace_ool_stub_text_count || 210 + !is_offset_in_branch_range((long)rec->ip - 211 + (long)&ftrace_ool_stub_text[ool_stub_text_index])) { 212 + ool_stub = ftrace_ool_stub_text_end; 213 + ool_stub_index = &ool_stub_text_end_index; 214 + ool_stub_count = ftrace_ool_stub_text_end_count; 215 + } else { 216 + ool_stub = ftrace_ool_stub_text; 217 + ool_stub_index = &ool_stub_text_index; 218 + ool_stub_count = ftrace_ool_stub_text_count; 219 + } 220 + #ifdef CONFIG_MODULES 221 + } else if (mod) { 222 + ool_stub = mod->arch.ool_stubs; 223 + ool_stub_index = &mod->arch.ool_stub_index; 224 + ool_stub_count = mod->arch.ool_stub_count; 225 + #endif 226 + } else { 227 + return -EINVAL; 228 + } 229 + 230 + ool_stub += (*ool_stub_index)++; 231 + 232 + if (WARN_ON(*ool_stub_index > ool_stub_count)) 233 + return -EINVAL; 234 + 235 + if (!is_offset_in_branch_range((long)rec->ip - (long)&ool_stub->insn[0]) || 236 + !is_offset_in_branch_range((long)(rec->ip + MCOUNT_INSN_SIZE) - 237 + (long)&ool_stub->insn[3])) { 238 + pr_err("%s: ftrace ool stub out of range (%p -> %p).\n", 239 + __func__, (void *)rec->ip, (void *)&ool_stub->insn[0]); 240 + return -EINVAL; 241 + } 242 + 243 + rec->arch.ool_stub 
= (unsigned long)&ool_stub->insn[0]; 244 + 245 + /* bl ftrace_caller */ 246 + if (!mod) 247 + ret = ftrace_get_call_inst(rec, (unsigned long)ftrace_caller, &inst); 248 + #ifdef CONFIG_MODULES 249 + else 250 + /* 251 + * We can't use ftrace_get_call_inst() since that uses 252 + * __module_text_address(rec->ip) to look up the module. 253 + * But, since the module is not fully formed at this stage, 254 + * the lookup fails. We know the target though, so generate 255 + * the branch inst directly. 256 + */ 257 + inst = ftrace_create_branch_inst(ftrace_get_ool_stub(rec) + MCOUNT_INSN_SIZE, 258 + mod->arch.tramp, 1); 259 + #endif 260 + ool_stub_template.insn[1] = ppc_inst_val(inst); 261 + 262 + /* b rec->ip + 4 */ 263 + if (!ret && create_branch(&inst, &ool_stub->insn[3], rec->ip + MCOUNT_INSN_SIZE, 0)) 264 + return -EINVAL; 265 + ool_stub_template.insn[3] = ppc_inst_val(inst); 266 + 267 + if (!ret) 268 + ret = patch_instructions((u32 *)ool_stub, (u32 *)&ool_stub_template, 269 + sizeof(ool_stub_template), false); 270 + 271 + return ret; 272 + #else /* !CONFIG_PPC_FTRACE_OUT_OF_LINE */ 273 + BUILD_BUG(); 274 + #endif 275 + } 276 + 277 + #ifdef CONFIG_DYNAMIC_FTRACE_WITH_CALL_OPS 278 + static const struct ftrace_ops *powerpc_rec_get_ops(struct dyn_ftrace *rec) 279 + { 280 + const struct ftrace_ops *ops = NULL; 281 + 282 + if (rec->flags & FTRACE_FL_CALL_OPS_EN) { 283 + ops = ftrace_find_unique_ops(rec); 284 + WARN_ON_ONCE(!ops); 285 + } 286 + 287 + if (!ops) 288 + ops = &ftrace_list_ops; 289 + 290 + return ops; 291 + } 292 + 293 + static int ftrace_rec_set_ops(struct dyn_ftrace *rec, const struct ftrace_ops *ops) 294 + { 295 + if (IS_ENABLED(CONFIG_PPC_FTRACE_OUT_OF_LINE)) 296 + return patch_ulong((void *)(ftrace_get_ool_stub(rec) - sizeof(unsigned long)), 297 + (unsigned long)ops); 298 + else 299 + return patch_ulong((void *)(rec->ip - MCOUNT_INSN_SIZE - sizeof(unsigned long)), 300 + (unsigned long)ops); 301 + } 302 + 303 + static int ftrace_rec_set_nop_ops(struct 
dyn_ftrace *rec) 304 + { 305 + return ftrace_rec_set_ops(rec, &ftrace_nop_ops); 306 + } 307 + 308 + static int ftrace_rec_update_ops(struct dyn_ftrace *rec) 309 + { 310 + return ftrace_rec_set_ops(rec, powerpc_rec_get_ops(rec)); 311 + } 312 + #else 313 + static int ftrace_rec_set_nop_ops(struct dyn_ftrace *rec) { return 0; } 314 + static int ftrace_rec_update_ops(struct dyn_ftrace *rec) { return 0; } 315 + #endif 141 316 142 317 #ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS 143 318 int ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr, unsigned long addr) ··· 330 147 int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr) 331 148 { 332 149 ppc_inst_t old, new; 333 - int ret; 150 + unsigned long ip = rec->ip; 151 + int ret = 0; 334 152 335 153 /* This can only ever be called during module load */ 336 - if (WARN_ON(!IS_ENABLED(CONFIG_MODULES) || core_kernel_text(rec->ip))) 154 + if (WARN_ON(!IS_ENABLED(CONFIG_MODULES) || core_kernel_text(ip))) 337 155 return -EINVAL; 338 156 339 157 old = ppc_inst(PPC_RAW_NOP()); 340 - ret = ftrace_get_call_inst(rec, addr, &new); 158 + if (IS_ENABLED(CONFIG_PPC_FTRACE_OUT_OF_LINE)) { 159 + ip = ftrace_get_ool_stub(rec) + MCOUNT_INSN_SIZE; /* second instruction in stub */ 160 + ret = ftrace_get_call_inst(rec, (unsigned long)ftrace_caller, &old); 161 + } 162 + 163 + ret |= ftrace_get_call_inst(rec, addr, &new); 164 + 165 + if (!ret) 166 + ret = ftrace_modify_code(ip, old, new); 167 + 168 + ret = ftrace_rec_update_ops(rec); 341 169 if (ret) 342 170 return ret; 343 171 344 - return ftrace_modify_code(rec->ip, old, new); 172 + if (!ret && IS_ENABLED(CONFIG_PPC_FTRACE_OUT_OF_LINE)) 173 + ret = ftrace_modify_code(rec->ip, ppc_inst(PPC_RAW_NOP()), 174 + ppc_inst(PPC_RAW_BRANCH((long)ftrace_get_ool_stub(rec) - (long)rec->ip))); 175 + 176 + return ret; 345 177 } 346 178 347 179 int ftrace_make_nop(struct module *mod, struct dyn_ftrace *rec, unsigned long addr) ··· 389 191 new_addr = ftrace_get_addr_new(rec); 390 192 update = 
ftrace_update_record(rec, enable); 391 193 194 + if (IS_ENABLED(CONFIG_PPC_FTRACE_OUT_OF_LINE) && update != FTRACE_UPDATE_IGNORE) { 195 + ip = ftrace_get_ool_stub(rec) + MCOUNT_INSN_SIZE; 196 + ret = ftrace_get_call_inst(rec, (unsigned long)ftrace_caller, &nop_inst); 197 + if (ret) 198 + goto out; 199 + } 200 + 392 201 switch (update) { 393 202 case FTRACE_UPDATE_IGNORE: 394 203 default: ··· 403 198 case FTRACE_UPDATE_MODIFY_CALL: 404 199 ret = ftrace_get_call_inst(rec, new_addr, &new_call_inst); 405 200 ret |= ftrace_get_call_inst(rec, addr, &call_inst); 201 + ret |= ftrace_rec_update_ops(rec); 406 202 old = call_inst; 407 203 new = new_call_inst; 408 204 break; 409 205 case FTRACE_UPDATE_MAKE_NOP: 410 206 ret = ftrace_get_call_inst(rec, addr, &call_inst); 207 + ret |= ftrace_rec_set_nop_ops(rec); 411 208 old = call_inst; 412 209 new = nop_inst; 413 210 break; 414 211 case FTRACE_UPDATE_MAKE_CALL: 415 212 ret = ftrace_get_call_inst(rec, new_addr, &call_inst); 213 + ret |= ftrace_rec_update_ops(rec); 416 214 old = nop_inst; 417 215 new = call_inst; 418 216 break; ··· 423 215 424 216 if (!ret) 425 217 ret = ftrace_modify_code(ip, old, new); 218 + 219 + if (!ret && IS_ENABLED(CONFIG_PPC_FTRACE_OUT_OF_LINE) && 220 + (update == FTRACE_UPDATE_MAKE_NOP || update == FTRACE_UPDATE_MAKE_CALL)) { 221 + /* Update the actual ftrace location */ 222 + call_inst = ppc_inst(PPC_RAW_BRANCH((long)ftrace_get_ool_stub(rec) - 223 + (long)rec->ip)); 224 + nop_inst = ppc_inst(PPC_RAW_NOP()); 225 + ip = rec->ip; 226 + 227 + if (update == FTRACE_UPDATE_MAKE_NOP) 228 + ret = ftrace_modify_code(ip, call_inst, nop_inst); 229 + else 230 + ret = ftrace_modify_code(ip, nop_inst, call_inst); 231 + 232 + if (ret) 233 + goto out; 234 + } 235 + 426 236 if (ret) 427 237 goto out; 428 238 } ··· 460 234 /* Verify instructions surrounding the ftrace location */ 461 235 if (IS_ENABLED(CONFIG_ARCH_USING_PATCHABLE_FUNCTION_ENTRY)) { 462 236 /* Expect nops */ 463 - ret = ftrace_validate_inst(ip - 4, 
ppc_inst(PPC_RAW_NOP())); 237 + if (!IS_ENABLED(CONFIG_PPC_FTRACE_OUT_OF_LINE)) 238 + ret = ftrace_validate_inst(ip - 4, ppc_inst(PPC_RAW_NOP())); 464 239 if (!ret) 465 240 ret = ftrace_validate_inst(ip, ppc_inst(PPC_RAW_NOP())); 466 241 } else if (IS_ENABLED(CONFIG_PPC32)) { 467 242 /* Expected sequence: 'mflr r0', 'stw r0,4(r1)', 'bl _mcount' */ 468 243 ret = ftrace_validate_inst(ip - 8, ppc_inst(PPC_RAW_MFLR(_R0))); 469 - if (!ret) 470 - ret = ftrace_validate_inst(ip - 4, ppc_inst(PPC_RAW_STW(_R0, _R1, 4))); 244 + if (ret) 245 + return ret; 246 + ret = ftrace_modify_code(ip - 4, ppc_inst(PPC_RAW_STW(_R0, _R1, 4)), 247 + ppc_inst(PPC_RAW_NOP())); 471 248 } else if (IS_ENABLED(CONFIG_MPROFILE_KERNEL)) { 472 249 /* Expected sequence: 'mflr r0', ['std r0,16(r1)'], 'bl _mcount' */ 473 250 ret = ftrace_read_inst(ip - 4, &old); 474 251 if (!ret && !ppc_inst_equal(old, ppc_inst(PPC_RAW_MFLR(_R0)))) { 252 + /* Gcc v5.x emit the additional 'std' instruction, gcc v6.x don't */ 475 253 ret = ftrace_validate_inst(ip - 8, ppc_inst(PPC_RAW_MFLR(_R0))); 476 - ret |= ftrace_validate_inst(ip - 4, ppc_inst(PPC_RAW_STD(_R0, _R1, 16))); 254 + if (ret) 255 + return ret; 256 + ret = ftrace_modify_code(ip - 4, ppc_inst(PPC_RAW_STD(_R0, _R1, 16)), 257 + ppc_inst(PPC_RAW_NOP())); 477 258 } 478 259 } else { 479 260 return -EINVAL; ··· 489 256 if (ret) 490 257 return ret; 491 258 492 - if (!core_kernel_text(ip)) { 493 - if (!mod) { 494 - pr_err("0x%lx: No module provided for non-kernel address\n", ip); 495 - return -EFAULT; 496 - } 497 - rec->arch.mod = mod; 498 - } 259 + /* Set up out-of-line stub */ 260 + if (IS_ENABLED(CONFIG_PPC_FTRACE_OUT_OF_LINE)) 261 + return ftrace_init_ool_stub(mod, rec); 499 262 500 263 /* Nop-out the ftrace location */ 501 264 new = ppc_inst(PPC_RAW_NOP()); ··· 530 301 unsigned long ip = (unsigned long)(&ftrace_call); 531 302 ppc_inst_t old, new; 532 303 int ret; 304 + 305 + /* 306 + * When using CALL_OPS, the function to call is associated with the 307 + * call 
site, and we don't have a global function pointer to update. 308 + */ 309 + if (IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_CALL_OPS)) 310 + return 0; 533 311 534 312 old = ppc_inst_read((u32 *)&ftrace_call); 535 313 new = ftrace_create_branch_inst(ip, ppc_function_entry(func), 1);
+28 -41
arch/powerpc/kernel/trace/ftrace_64_pg.c
··· 116 116 } 117 117 118 118 #ifdef CONFIG_MODULES 119 + static struct module *ftrace_lookup_module(struct dyn_ftrace *rec) 120 + { 121 + struct module *mod; 122 + 123 + preempt_disable(); 124 + mod = __module_text_address(rec->ip); 125 + preempt_enable(); 126 + 127 + if (!mod) 128 + pr_err("No module loaded at addr=%lx\n", rec->ip); 129 + 130 + return mod; 131 + } 132 + 119 133 static int 120 134 __ftrace_make_nop(struct module *mod, 121 135 struct dyn_ftrace *rec, unsigned long addr) ··· 137 123 unsigned long entry, ptr, tramp; 138 124 unsigned long ip = rec->ip; 139 125 ppc_inst_t op, pop; 126 + 127 + if (!mod) { 128 + mod = ftrace_lookup_module(rec); 129 + if (!mod) 130 + return -EINVAL; 131 + } 140 132 141 133 /* read where this goes */ 142 134 if (copy_inst_from_kernel_nofault(&op, (void *)ip)) { ··· 386 366 return -EINVAL; 387 367 } 388 368 389 - /* 390 - * Out of range jumps are called from modules. 391 - * We should either already have a pointer to the module 392 - * or it has been passed in. 393 - */ 394 - if (!rec->arch.mod) { 395 - if (!mod) { 396 - pr_err("No module loaded addr=%lx\n", addr); 397 - return -EFAULT; 398 - } 399 - rec->arch.mod = mod; 400 - } else if (mod) { 401 - if (mod != rec->arch.mod) { 402 - pr_err("Record mod %p not equal to passed in mod %p\n", 403 - rec->arch.mod, mod); 404 - return -EINVAL; 405 - } 406 - /* nothing to do if mod == rec->arch.mod */ 407 - } else 408 - mod = rec->arch.mod; 409 - 410 369 return __ftrace_make_nop(mod, rec, addr); 411 370 } 412 371 ··· 410 411 ppc_inst_t op[2]; 411 412 void *ip = (void *)rec->ip; 412 413 unsigned long entry, ptr, tramp; 413 - struct module *mod = rec->arch.mod; 414 + struct module *mod = ftrace_lookup_module(rec); 415 + 416 + if (!mod) 417 + return -EINVAL; 414 418 415 419 /* read where this goes */ 416 420 if (copy_inst_from_kernel_nofault(op, ip)) ··· 535 533 return -EINVAL; 536 534 } 537 535 538 - /* 539 - * Out of range jumps are called from modules. 
540 - * Being that we are converting from nop, it had better 541 - * already have a module defined. 542 - */ 543 - if (!rec->arch.mod) { 544 - pr_err("No module loaded\n"); 545 - return -EINVAL; 546 - } 547 - 548 536 return __ftrace_make_call(rec, addr); 549 537 } 550 538 ··· 547 555 ppc_inst_t op; 548 556 unsigned long ip = rec->ip; 549 557 unsigned long entry, ptr, tramp; 550 - struct module *mod = rec->arch.mod; 558 + struct module *mod = ftrace_lookup_module(rec); 559 + 560 + if (!mod) 561 + return -EINVAL; 551 562 552 563 /* If we never set up ftrace trampolines, then bail */ 553 564 if (!mod->arch.tramp || !mod->arch.tramp_regs) { ··· 660 665 return 0; 661 666 } else if (!IS_ENABLED(CONFIG_MODULES)) { 662 667 /* We should not get here without modules */ 663 - return -EINVAL; 664 - } 665 - 666 - /* 667 - * Out of range jumps are called from modules. 668 - */ 669 - if (!rec->arch.mod) { 670 - pr_err("No module loaded\n"); 671 668 return -EINVAL; 672 669 } 673 670
+198 -48
arch/powerpc/kernel/trace/ftrace_entry.S
··· 39 39 /* Create our stack frame + pt_regs */ 40 40 PPC_STLU r1,-SWITCH_FRAME_SIZE(r1) 41 41 42 + .if \allregs == 1 43 + SAVE_GPRS(11, 12, r1) 44 + .endif 45 + 46 + /* Get the _mcount() call site out of LR */ 47 + mflr r11 48 + 49 + #ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS 50 + /* Load the ftrace_op */ 51 + PPC_LL r12, -(MCOUNT_INSN_SIZE*2 + SZL)(r11) 52 + 53 + /* Load direct_call from the ftrace_op */ 54 + PPC_LL r12, FTRACE_OPS_DIRECT_CALL(r12) 55 + PPC_LCMPI r12, 0 56 + .if \allregs == 1 57 + bne .Lftrace_direct_call_regs 58 + .else 59 + bne .Lftrace_direct_call 60 + .endif 61 + #endif 62 + 63 + /* Save the previous LR in pt_regs->link */ 64 + PPC_STL r0, _LINK(r1) 65 + /* Also save it in A's stack frame */ 66 + PPC_STL r0, SWITCH_FRAME_SIZE+STACK_FRAME_MIN_SIZE+LRSAVE(r1) 67 + 42 68 /* Save all gprs to pt_regs */ 43 69 SAVE_GPR(0, r1) 44 70 SAVE_GPRS(3, 10, r1) 45 71 46 72 #ifdef CONFIG_PPC64 47 - /* Save the original return address in A's stack frame */ 48 - std r0, LRSAVE+SWITCH_FRAME_SIZE+STACK_FRAME_MIN_SIZE(r1) 49 73 /* Ok to continue? 
*/ 50 74 lbz r3, PACA_FTRACE_ENABLED(r13) 51 75 cmpdi r3, 0 ··· 78 54 79 55 .if \allregs == 1 80 56 SAVE_GPR(2, r1) 81 - SAVE_GPRS(11, 31, r1) 57 + SAVE_GPRS(13, 31, r1) 82 58 .else 83 - #ifdef CONFIG_LIVEPATCH_64 59 + #if defined(CONFIG_LIVEPATCH_64) || defined(CONFIG_PPC_FTRACE_OUT_OF_LINE) 84 60 SAVE_GPR(14, r1) 85 61 #endif 86 62 .endif ··· 91 67 92 68 .if \allregs == 1 93 69 /* Load special regs for save below */ 70 + mfcr r7 94 71 mfmsr r8 95 72 mfctr r9 96 73 mfxer r10 97 - mfcr r11 98 74 .else 99 75 /* Clear MSR to flag as ftrace_caller versus frace_regs_caller */ 100 76 li r8, 0 101 77 .endif 102 78 103 - /* Get the _mcount() call site out of LR */ 104 - mflr r7 105 - /* Save it as pt_regs->nip */ 106 - PPC_STL r7, _NIP(r1) 107 - /* Also save it in B's stackframe header for proper unwind */ 108 - PPC_STL r7, LRSAVE+SWITCH_FRAME_SIZE(r1) 109 - /* Save the read LR in pt_regs->link */ 110 - PPC_STL r0, _LINK(r1) 111 - 112 79 #ifdef CONFIG_PPC64 113 80 /* Save callee's TOC in the ABI compliant location */ 114 81 std r2, STK_GOT(r1) 115 82 LOAD_PACA_TOC() /* get kernel TOC in r2 */ 83 + #endif 84 + 85 + #ifdef CONFIG_DYNAMIC_FTRACE_WITH_CALL_OPS 86 + /* r11 points to the instruction following the call to ftrace */ 87 + PPC_LL r5, -(MCOUNT_INSN_SIZE*2 + SZL)(r11) 88 + PPC_LL r12, FTRACE_OPS_FUNC(r5) 89 + mtctr r12 90 + #else /* !CONFIG_DYNAMIC_FTRACE_WITH_CALL_OPS */ 91 + #ifdef CONFIG_PPC64 116 92 LOAD_REG_ADDR(r3, function_trace_op) 117 93 ld r5,0(r3) 118 94 #else 119 95 lis r3,function_trace_op@ha 120 96 lwz r5,function_trace_op@l(r3) 121 97 #endif 122 - 123 - #ifdef CONFIG_LIVEPATCH_64 124 - mr r14, r7 /* remember old NIP */ 125 98 #endif 126 - 127 - /* Calculate ip from nip-4 into r3 for call below */ 128 - subi r3, r7, MCOUNT_INSN_SIZE 129 - 130 - /* Put the original return address in r4 as parent_ip */ 131 - mr r4, r0 132 99 133 100 /* Save special regs */ 134 101 PPC_STL r8, _MSR(r1) 135 102 .if \allregs == 1 103 + PPC_STL r7, _CCR(r1) 136 104 PPC_STL 
r9, _CTR(r1) 137 105 PPC_STL r10, _XER(r1) 138 - PPC_STL r11, _CCR(r1) 139 106 .endif 107 + 108 + #ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS 109 + /* Clear orig_gpr3 to later detect ftrace_direct call */ 110 + li r7, 0 111 + PPC_STL r7, ORIG_GPR3(r1) 112 + #endif 113 + 114 + #ifdef CONFIG_PPC_FTRACE_OUT_OF_LINE 115 + /* Save our real return address in nvr for return */ 116 + .if \allregs == 0 117 + SAVE_GPR(15, r1) 118 + .endif 119 + mr r15, r11 120 + /* 121 + * We want the ftrace location in the function, but our lr (in r11) 122 + * points at the 'mtlr r0' instruction in the out of line stub. To 123 + * recover the ftrace location, we read the branch instruction in the 124 + * stub, and adjust our lr by the branch offset. 125 + * 126 + * See ftrace_init_ool_stub() for the profile sequence. 127 + */ 128 + lwz r8, MCOUNT_INSN_SIZE(r11) 129 + slwi r8, r8, 6 130 + srawi r8, r8, 6 131 + add r3, r11, r8 132 + /* 133 + * Override our nip to point past the branch in the original function. 134 + * This allows reliable stack trace and the ftrace stack tracer to work as-is. 
135 + */ 136 + addi r11, r3, MCOUNT_INSN_SIZE 137 + #else 138 + /* Calculate ip from nip-4 into r3 for call below */ 139 + subi r3, r11, MCOUNT_INSN_SIZE 140 + #endif 141 + 142 + /* Save NIP as pt_regs->nip */ 143 + PPC_STL r11, _NIP(r1) 144 + /* Also save it in B's stackframe header for proper unwind */ 145 + PPC_STL r11, LRSAVE+SWITCH_FRAME_SIZE(r1) 146 + #if defined(CONFIG_LIVEPATCH_64) || defined(CONFIG_PPC_FTRACE_OUT_OF_LINE) 147 + mr r14, r11 /* remember old NIP */ 148 + #endif 149 + 150 + /* Put the original return address in r4 as parent_ip */ 151 + mr r4, r0 140 152 141 153 /* Load &pt_regs in r6 for call below */ 142 154 addi r6, r1, STACK_INT_FRAME_REGS 143 155 .endm 144 156 145 157 .macro ftrace_regs_exit allregs 158 + #ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS 159 + /* Check orig_gpr3 to detect ftrace_direct call */ 160 + PPC_LL r3, ORIG_GPR3(r1) 161 + PPC_LCMPI cr1, r3, 0 162 + mtctr r3 163 + #endif 164 + 165 + /* Restore possibly modified LR */ 166 + PPC_LL r0, _LINK(r1) 167 + 168 + #ifndef CONFIG_PPC_FTRACE_OUT_OF_LINE 146 169 /* Load ctr with the possibly modified NIP */ 147 170 PPC_LL r3, _NIP(r1) 148 - mtctr r3 149 - 150 171 #ifdef CONFIG_LIVEPATCH_64 151 172 cmpd r14, r3 /* has NIP been altered? */ 173 + #endif 174 + 175 + #ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS 176 + beq cr1,2f 177 + mtlr r3 178 + b 3f 179 + #endif 180 + 2: mtctr r3 181 + mtlr r0 182 + 3: 183 + 184 + #else /* !CONFIG_PPC_FTRACE_OUT_OF_LINE */ 185 + /* Load LR with the possibly modified NIP */ 186 + PPC_LL r3, _NIP(r1) 187 + cmpd r14, r3 /* has NIP been altered? 
*/ 188 + bne- 1f 189 + 190 + mr r3, r15 191 + .if \allregs == 0 192 + REST_GPR(15, r1) 193 + .endif 194 + 1: mtlr r3 152 195 #endif 153 196 154 197 /* Restore gprs */ ··· 223 132 REST_GPRS(2, 31, r1) 224 133 .else 225 134 REST_GPRS(3, 10, r1) 226 - #ifdef CONFIG_LIVEPATCH_64 135 + #if defined(CONFIG_LIVEPATCH_64) || defined(CONFIG_PPC_FTRACE_OUT_OF_LINE) 227 136 REST_GPR(14, r1) 228 137 #endif 229 138 .endif 230 - 231 - /* Restore possibly modified LR */ 232 - PPC_LL r0, _LINK(r1) 233 - mtlr r0 234 139 235 140 #ifdef CONFIG_PPC64 236 141 /* Restore callee's TOC */ ··· 240 153 /* Based on the cmpd above, if the NIP was altered handle livepatch */ 241 154 bne- livepatch_handler 242 155 #endif 243 - bctr /* jump after _mcount site */ 156 + 157 + /* jump after _mcount site */ 158 + #ifdef CONFIG_PPC_FTRACE_OUT_OF_LINE 159 + #ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS 160 + bnectr cr1 161 + #endif 162 + /* 163 + * Return with blr to keep the link stack balanced. The function profiling sequence 164 + * uses 'mtlr r0' to restore LR. 
165 + */ 166 + blr 167 + #else 168 + bctr 169 + #endif 170 + .endm 171 + 172 + .macro ftrace_regs_func allregs 173 + #ifdef CONFIG_DYNAMIC_FTRACE_WITH_CALL_OPS 174 + bctrl 175 + #else 176 + .if \allregs == 1 177 + .globl ftrace_regs_call 178 + ftrace_regs_call: 179 + .else 180 + .globl ftrace_call 181 + ftrace_call: 182 + .endif 183 + /* ftrace_call(r3, r4, r5, r6) */ 184 + bl ftrace_stub 185 + #endif 244 186 .endm 245 187 246 188 _GLOBAL(ftrace_regs_caller) 247 189 ftrace_regs_entry 1 248 - /* ftrace_call(r3, r4, r5, r6) */ 249 - .globl ftrace_regs_call 250 - ftrace_regs_call: 251 - bl ftrace_stub 190 + ftrace_regs_func 1 252 191 ftrace_regs_exit 1 253 192 254 193 _GLOBAL(ftrace_caller) 255 194 ftrace_regs_entry 0 256 - /* ftrace_call(r3, r4, r5, r6) */ 257 - .globl ftrace_call 258 - ftrace_call: 259 - bl ftrace_stub 195 + ftrace_regs_func 0 260 196 ftrace_regs_exit 0 261 197 262 198 _GLOBAL(ftrace_stub) ··· 287 177 288 178 #ifdef CONFIG_PPC64 289 179 ftrace_no_trace: 180 + #ifdef CONFIG_PPC_FTRACE_OUT_OF_LINE 181 + REST_GPR(3, r1) 182 + addi r1, r1, SWITCH_FRAME_SIZE+STACK_FRAME_MIN_SIZE 183 + blr 184 + #else 290 185 mflr r3 291 186 mtctr r3 292 187 REST_GPR(3, r1) 293 188 addi r1, r1, SWITCH_FRAME_SIZE+STACK_FRAME_MIN_SIZE 294 189 mtlr r0 295 190 bctr 191 + #endif 192 + #endif 193 + 194 + #ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS 195 + .Lftrace_direct_call_regs: 196 + mtctr r12 197 + REST_GPRS(11, 12, r1) 198 + addi r1, r1, SWITCH_FRAME_SIZE+STACK_FRAME_MIN_SIZE 199 + bctr 200 + .Lftrace_direct_call: 201 + mtctr r12 202 + addi r1, r1, SWITCH_FRAME_SIZE+STACK_FRAME_MIN_SIZE 203 + bctr 204 + SYM_FUNC_START(ftrace_stub_direct_tramp) 205 + blr 206 + SYM_FUNC_END(ftrace_stub_direct_tramp) 296 207 #endif 297 208 298 209 #ifdef CONFIG_LIVEPATCH_64 ··· 325 194 * We get here when a function A, calls another function B, but B has 326 195 * been live patched with a new function C. 
327 196 * 328 - * On entry: 329 - * - we have no stack frame and can not allocate one 197 + * On entry, we have no stack frame and can not allocate one. 198 + * 199 + * With PPC_FTRACE_OUT_OF_LINE=n, on entry: 330 200 * - LR points back to the original caller (in A) 331 201 * - CTR holds the new NIP in C 332 202 * - r0, r11 & r12 are free 203 + * 204 + * With PPC_FTRACE_OUT_OF_LINE=y, on entry: 205 + * - r0 points back to the original caller (in A) 206 + * - LR holds the new NIP in C 207 + * - r11 & r12 are free 333 208 */ 334 209 livepatch_handler: 335 210 ld r12, PACA_THREAD_INFO(r13) ··· 345 208 addi r11, r11, 24 346 209 std r11, TI_livepatch_sp(r12) 347 210 348 - /* Save toc & real LR on livepatch stack */ 349 - std r2, -24(r11) 350 - mflr r12 351 - std r12, -16(r11) 352 - 353 211 /* Store stack end marker */ 354 212 lis r12, STACK_END_MAGIC@h 355 213 ori r12, r12, STACK_END_MAGIC@l 356 214 std r12, -8(r11) 357 215 358 - /* Put ctr in r12 for global entry and branch there */ 216 + /* Save toc & real LR on livepatch stack */ 217 + std r2, -24(r11) 218 + #ifndef CONFIG_PPC_FTRACE_OUT_OF_LINE 219 + mflr r12 220 + std r12, -16(r11) 359 221 mfctr r12 222 + #else 223 + std r0, -16(r11) 224 + mflr r12 225 + /* Put ctr in r12 for global entry and branch there */ 226 + mtctr r12 227 + #endif 360 228 bctrl 361 229 362 230 /* ··· 449 307 /* Jump back to real return address */ 450 308 blr 451 309 #endif /* CONFIG_FUNCTION_GRAPH_TRACER */ 310 + 311 + #ifdef CONFIG_PPC_FTRACE_OUT_OF_LINE 312 + SYM_DATA(ftrace_ool_stub_text_count, .long CONFIG_PPC_FTRACE_OUT_OF_LINE_NUM_RESERVE) 313 + 314 + SYM_START(ftrace_ool_stub_text, SYM_L_GLOBAL, .balign SZL) 315 + .space CONFIG_PPC_FTRACE_OUT_OF_LINE_NUM_RESERVE * FTRACE_OOL_STUB_SIZE 316 + SYM_CODE_END(ftrace_ool_stub_text) 317 + #endif 452 318 453 319 .pushsection ".tramp.ftrace.text","aw",@progbits; 454 320 .globl ftrace_tramp_text
-3
arch/powerpc/kernel/udbg.c
··· 39 39 #elif defined(CONFIG_PPC_EARLY_DEBUG_RTAS_CONSOLE) 40 40 /* RTAS console debug */ 41 41 udbg_init_rtas_console(); 42 - #elif defined(CONFIG_PPC_EARLY_DEBUG_MAPLE) 43 - /* Maple real mode debug */ 44 - udbg_init_maple_realmode(); 45 42 #elif defined(CONFIG_PPC_EARLY_DEBUG_PAS_REALMODE) 46 43 udbg_init_pas_realmode(); 47 44 #elif defined(CONFIG_PPC_EARLY_DEBUG_BOOTX)
-23
arch/powerpc/kernel/udbg_16550.c
··· 205 205 udbg_use_uart(); 206 206 } 207 207 208 - #ifdef CONFIG_PPC_MAPLE 209 - 210 - #define UDBG_UART_MAPLE_ADDR ((void __iomem *)0xf40003f8) 211 - 212 - static u8 udbg_uart_in_maple(unsigned int reg) 213 - { 214 - return real_readb(UDBG_UART_MAPLE_ADDR + reg); 215 - } 216 - 217 - static void udbg_uart_out_maple(unsigned int reg, u8 val) 218 - { 219 - real_writeb(val, UDBG_UART_MAPLE_ADDR + reg); 220 - } 221 - 222 - void __init udbg_init_maple_realmode(void) 223 - { 224 - udbg_uart_in = udbg_uart_in_maple; 225 - udbg_uart_out = udbg_uart_out_maple; 226 - udbg_use_uart(); 227 - } 228 - 229 - #endif /* CONFIG_PPC_MAPLE */ 230 - 231 208 #ifdef CONFIG_PPC_PASEMI 232 209 233 210 #define UDBG_UART_PAS_ADDR ((void __iomem *)0xfcff03f8UL)
+10 -6
arch/powerpc/kernel/vdso.c
··· 47 47 */ 48 48 static union { 49 49 struct vdso_arch_data data; 50 - u8 page[PAGE_SIZE]; 50 + u8 page[2 * PAGE_SIZE]; 51 51 } vdso_data_store __page_aligned_data; 52 52 struct vdso_arch_data *vdso_data = &vdso_data_store.data; 53 53 54 54 enum vvar_pages { 55 - VVAR_DATA_PAGE_OFFSET, 55 + VVAR_BASE_PAGE_OFFSET, 56 + VVAR_TIME_PAGE_OFFSET, 56 57 VVAR_TIMENS_PAGE_OFFSET, 57 58 VVAR_NR_PAGES, 58 59 }; ··· 119 118 #ifdef CONFIG_TIME_NS 120 119 struct vdso_data *arch_get_vdso_data(void *vvar_page) 121 120 { 122 - return ((struct vdso_arch_data *)vvar_page)->data; 121 + return vvar_page; 123 122 } 124 123 125 124 /* ··· 153 152 unsigned long pfn; 154 153 155 154 switch (vmf->pgoff) { 156 - case VVAR_DATA_PAGE_OFFSET: 155 + case VVAR_BASE_PAGE_OFFSET: 156 + pfn = virt_to_pfn(vdso_data); 157 + break; 158 + case VVAR_TIME_PAGE_OFFSET: 157 159 if (timens_page) 158 160 pfn = page_to_pfn(timens_page); 159 161 else 160 - pfn = virt_to_pfn(vdso_data); 162 + pfn = virt_to_pfn(vdso_data->data); 161 163 break; 162 164 #ifdef CONFIG_TIME_NS 163 165 case VVAR_TIMENS_PAGE_OFFSET: ··· 173 169 */ 174 170 if (!timens_page) 175 171 return VM_FAULT_SIGBUS; 176 - pfn = virt_to_pfn(vdso_data); 172 + pfn = virt_to_pfn(vdso_data->data); 177 173 break; 178 174 #endif /* CONFIG_TIME_NS */ 179 175 default:
+7 -3
arch/powerpc/kernel/vdso/Makefile
··· 50 50 ldflags-$(CONFIG_LD_ORPHAN_WARN) += -Wl,--orphan-handling=$(CONFIG_LD_ORPHAN_WARN_LEVEL) 51 51 52 52 # Filter flags that clang will warn are unused for linking 53 - ldflags-y += $(filter-out $(CC_AUTO_VAR_INIT_ZERO_ENABLER) $(CC_FLAGS_FTRACE) -Wa$(comma)%, $(KBUILD_CFLAGS)) 53 + ldflags-y += $(filter-out $(CC_AUTO_VAR_INIT_ZERO_ENABLER) $(CC_FLAGS_FTRACE) -Wa$(comma)%, $(KBUILD_CPPFLAGS) $(KBUILD_CFLAGS)) 54 54 55 55 CC32FLAGS := -m32 56 56 CC32FLAGSREMOVE := -mcmodel=medium -mabi=elfv1 -mabi=elfv2 -mcall-aixdesc 57 - # This flag is supported by clang for 64-bit but not 32-bit so it will cause 58 - # an unused command line flag warning for this file. 59 57 ifdef CONFIG_CC_IS_CLANG 58 + # This flag is supported by clang for 64-bit but not 32-bit so it will cause 59 + # an unused command line flag warning for this file. 60 60 CC32FLAGSREMOVE += -fno-stack-clash-protection 61 + # -mstack-protector-guard values from the 64-bit build are not valid for the 62 + # 32-bit one. clang validates the values passed to these arguments during 63 + # parsing, even when -fno-stack-protector is passed afterwards. 64 + CC32FLAGSREMOVE += -mstack-protector-guard% 61 65 endif 62 66 LD32FLAGS := -Wl,-soname=linux-vdso32.so.1 63 67 AS32FLAGS := -D__VDSO32__
+1 -1
arch/powerpc/kernel/vdso/cacheflush.S
··· 30 30 #ifdef CONFIG_PPC64 31 31 mflr r12 32 32 .cfi_register lr,r12 33 - get_realdatapage r10, r11 33 + get_datapage r10 34 34 mtlr r12 35 35 .cfi_restore lr 36 36 #endif
+2 -2
arch/powerpc/kernel/vdso/datapage.S
··· 28 28 mflr r12 29 29 .cfi_register lr,r12 30 30 mr. r4,r3 31 - get_realdatapage r3, r11 31 + get_datapage r3 32 32 mtlr r12 33 33 #ifdef __powerpc64__ 34 34 addi r3,r3,CFG_SYSCALL_MAP64 ··· 52 52 .cfi_startproc 53 53 mflr r12 54 54 .cfi_register lr,r12 55 - get_realdatapage r3, r11 55 + get_datapage r3 56 56 #ifndef __powerpc64__ 57 57 lwz r4,(CFG_TB_TICKS_PER_SEC + 4)(r3) 58 58 #endif
-2
arch/powerpc/kernel/vdso/getrandom.S
··· 31 31 PPC_STL r2, PPC_MIN_STKFRM + STK_GOT(r1) 32 32 .cfi_rel_offset r2, PPC_MIN_STKFRM + STK_GOT 33 33 #endif 34 - get_realdatapage r8, r11 35 - addi r8, r8, VDSO_RNG_DATA_OFFSET 36 34 bl CFUNC(DOTSYM(\funct)) 37 35 PPC_LL r0, PPC_MIN_STKFRM + PPC_LR_STKOFF(r1) 38 36 #ifdef __powerpc64__
+2 -3
arch/powerpc/kernel/vdso/gettimeofday.S
··· 32 32 PPC_STL r2, PPC_MIN_STKFRM + STK_GOT(r1) 33 33 .cfi_rel_offset r2, PPC_MIN_STKFRM + STK_GOT 34 34 #endif 35 - get_datapage r5 36 35 .ifeq \call_time 37 - addi r5, r5, VDSO_DATA_OFFSET 36 + get_datapage r5 VDSO_DATA_OFFSET 38 37 .else 39 - addi r4, r5, VDSO_DATA_OFFSET 38 + get_datapage r4 VDSO_DATA_OFFSET 40 39 .endif 41 40 bl CFUNC(DOTSYM(\funct)) 42 41 PPC_LL r0, PPC_MIN_STKFRM + PPC_LR_STKOFF(r1)
+1 -1
arch/powerpc/kernel/vdso/vdso32.lds.S
··· 16 16 17 17 SECTIONS 18 18 { 19 - PROVIDE(_vdso_datapage = . - 2 * PAGE_SIZE); 19 + PROVIDE(_vdso_datapage = . - 3 * PAGE_SIZE); 20 20 . = SIZEOF_HEADERS; 21 21 22 22 .hash : { *(.hash) } :text
+1 -1
arch/powerpc/kernel/vdso/vdso64.lds.S
··· 16 16 17 17 SECTIONS 18 18 { 19 - PROVIDE(_vdso_datapage = . - 2 * PAGE_SIZE); 19 + PROVIDE(_vdso_datapage = . - 3 * PAGE_SIZE); 20 20 . = SIZEOF_HEADERS; 21 21 22 22 .hash : { *(.hash) } :text
+2 -2
arch/powerpc/kernel/vdso/vgetrandom.c
··· 8 8 #include <linux/types.h> 9 9 10 10 ssize_t __c_kernel_getrandom(void *buffer, size_t len, unsigned int flags, void *opaque_state, 11 - size_t opaque_len, const struct vdso_rng_data *vd) 11 + size_t opaque_len) 12 12 { 13 - return __cvdso_getrandom_data(vd, buffer, len, flags, opaque_state, opaque_len); 13 + return __cvdso_getrandom(buffer, len, flags, opaque_state, opaque_len); 14 14 }
+1 -2
arch/powerpc/kernel/vmlinux.lds.S
··· 265 265 .init.text : AT(ADDR(.init.text) - LOAD_OFFSET) { 266 266 _sinittext = .; 267 267 INIT_TEXT 268 - 268 + *(.tramp.ftrace.init); 269 269 /* 270 270 *.init.text might be RO so we must ensure this section ends on 271 271 * a page boundary. 272 272 */ 273 273 . = ALIGN(PAGE_SIZE); 274 274 _einittext = .; 275 - *(.tramp.ftrace.init); 276 275 } :text 277 276 278 277 /* .exit.text is discarded at runtime, not link time,
+7 -2
arch/powerpc/kexec/file_load_64.c
··· 736 736 if (dn) { 737 737 u64 val; 738 738 739 - of_property_read_u64(dn, "opal-base-address", &val); 739 + ret = of_property_read_u64(dn, "opal-base-address", &val); 740 + if (ret) 741 + goto out; 742 + 740 743 ret = kexec_purgatory_get_set_symbol(image, "opal_base", &val, 741 744 sizeof(val), false); 742 745 if (ret) 743 746 goto out; 744 747 745 - of_property_read_u64(dn, "opal-entry-address", &val); 748 + ret = of_property_read_u64(dn, "opal-entry-address", &val); 749 + if (ret) 750 + goto out; 746 751 ret = kexec_purgatory_get_set_symbol(image, "opal_entry", &val, 747 752 sizeof(val), false); 748 753 }
+85 -34
arch/powerpc/kvm/book3s_hv.c
··· 400 400 cap = H_GUEST_CAP_POWER9; 401 401 break; 402 402 case PCR_ARCH_31: 403 - cap = H_GUEST_CAP_POWER10; 403 + if (cpu_has_feature(CPU_FTR_P11_PVR)) 404 + cap = H_GUEST_CAP_POWER11; 405 + else 406 + cap = H_GUEST_CAP_POWER10; 404 407 break; 405 408 default: 406 409 break; ··· 418 415 struct kvmppc_vcore *vc = vcpu->arch.vcore; 419 416 420 417 /* We can (emulate) our own architecture version and anything older */ 421 - if (cpu_has_feature(CPU_FTR_ARCH_31)) 418 + if (cpu_has_feature(CPU_FTR_P11_PVR) || cpu_has_feature(CPU_FTR_ARCH_31)) 422 419 host_pcr_bit = PCR_ARCH_31; 423 420 else if (cpu_has_feature(CPU_FTR_ARCH_300)) 424 421 host_pcr_bit = PCR_ARCH_300; ··· 2063 2060 fallthrough; /* go to facility unavailable handler */ 2064 2061 #endif 2065 2062 2066 - case BOOK3S_INTERRUPT_H_FAC_UNAVAIL: { 2067 - u64 cause = vcpu->arch.hfscr >> 56; 2068 - 2069 - /* 2070 - * Only pass HFU interrupts to the L1 if the facility is 2071 - * permitted but disabled by the L1's HFSCR, otherwise 2072 - * the interrupt does not make sense to the L1 so turn 2073 - * it into a HEAI. 2074 - */ 2075 - if (!(vcpu->arch.hfscr_permitted & (1UL << cause)) || 2076 - (vcpu->arch.nested_hfscr & (1UL << cause))) { 2077 - ppc_inst_t pinst; 2078 - vcpu->arch.trap = BOOK3S_INTERRUPT_H_EMUL_ASSIST; 2079 - 2080 - /* 2081 - * If the fetch failed, return to guest and 2082 - * try executing it again. 
2083 - */ 2084 - r = kvmppc_get_last_inst(vcpu, INST_GENERIC, &pinst); 2085 - vcpu->arch.emul_inst = ppc_inst_val(pinst); 2086 - if (r != EMULATE_DONE) 2087 - r = RESUME_GUEST; 2088 - else 2089 - r = RESUME_HOST; 2090 - } else { 2091 - r = RESUME_HOST; 2092 - } 2093 - 2063 + case BOOK3S_INTERRUPT_H_FAC_UNAVAIL: 2064 + r = RESUME_HOST; 2094 2065 break; 2095 - } 2096 2066 2097 2067 case BOOK3S_INTERRUPT_HV_RM_HARD: 2098 2068 vcpu->arch.trap = 0; ··· 4129 4153 else 4130 4154 lppaca_of(cpu).l2_counters_enable = 0; 4131 4155 } 4156 + EXPORT_SYMBOL(kvmhv_set_l2_counters_status); 4132 4157 4133 - int kmvhv_counters_tracepoint_regfunc(void) 4158 + int kvmhv_counters_tracepoint_regfunc(void) 4134 4159 { 4135 4160 int cpu; 4136 4161 ··· 4141 4164 return 0; 4142 4165 } 4143 4166 4144 - void kmvhv_counters_tracepoint_unregfunc(void) 4167 + void kvmhv_counters_tracepoint_unregfunc(void) 4145 4168 { 4146 4169 int cpu; 4147 4170 ··· 4167 4190 *l1_to_l2_cs_ptr = l1_to_l2_ns; 4168 4191 *l2_to_l1_cs_ptr = l2_to_l1_ns; 4169 4192 *l2_runtime_agg_ptr = l2_runtime_ns; 4193 + vcpu->arch.l1_to_l2_cs = l1_to_l2_ns; 4194 + vcpu->arch.l2_to_l1_cs = l2_to_l1_ns; 4195 + vcpu->arch.l2_runtime_agg = l2_runtime_ns; 4170 4196 } 4197 + 4198 + u64 kvmhv_get_l1_to_l2_cs_time(void) 4199 + { 4200 + return tb_to_ns(be64_to_cpu(get_lppaca()->l1_to_l2_cs_tb)); 4201 + } 4202 + EXPORT_SYMBOL(kvmhv_get_l1_to_l2_cs_time); 4203 + 4204 + u64 kvmhv_get_l2_to_l1_cs_time(void) 4205 + { 4206 + return tb_to_ns(be64_to_cpu(get_lppaca()->l2_to_l1_cs_tb)); 4207 + } 4208 + EXPORT_SYMBOL(kvmhv_get_l2_to_l1_cs_time); 4209 + 4210 + u64 kvmhv_get_l2_runtime_agg(void) 4211 + { 4212 + return tb_to_ns(be64_to_cpu(get_lppaca()->l2_runtime_tb)); 4213 + } 4214 + EXPORT_SYMBOL(kvmhv_get_l2_runtime_agg); 4215 + 4216 + u64 kvmhv_get_l1_to_l2_cs_time_vcpu(void) 4217 + { 4218 + struct kvm_vcpu *vcpu; 4219 + struct kvm_vcpu_arch *arch; 4220 + 4221 + vcpu = local_paca->kvm_hstate.kvm_vcpu; 4222 + if (vcpu) { 4223 + arch = &vcpu->arch; 
4224 + return arch->l1_to_l2_cs; 4225 + } else { 4226 + return 0; 4227 + } 4228 + } 4229 + EXPORT_SYMBOL(kvmhv_get_l1_to_l2_cs_time_vcpu); 4230 + 4231 + u64 kvmhv_get_l2_to_l1_cs_time_vcpu(void) 4232 + { 4233 + struct kvm_vcpu *vcpu; 4234 + struct kvm_vcpu_arch *arch; 4235 + 4236 + vcpu = local_paca->kvm_hstate.kvm_vcpu; 4237 + if (vcpu) { 4238 + arch = &vcpu->arch; 4239 + return arch->l2_to_l1_cs; 4240 + } else { 4241 + return 0; 4242 + } 4243 + } 4244 + EXPORT_SYMBOL(kvmhv_get_l2_to_l1_cs_time_vcpu); 4245 + 4246 + u64 kvmhv_get_l2_runtime_agg_vcpu(void) 4247 + { 4248 + struct kvm_vcpu *vcpu; 4249 + struct kvm_vcpu_arch *arch; 4250 + 4251 + vcpu = local_paca->kvm_hstate.kvm_vcpu; 4252 + if (vcpu) { 4253 + arch = &vcpu->arch; 4254 + return arch->l2_runtime_agg; 4255 + } else { 4256 + return 0; 4257 + } 4258 + } 4259 + EXPORT_SYMBOL(kvmhv_get_l2_runtime_agg_vcpu); 4171 4260 4172 4261 #else 4173 4262 int kvmhv_get_l2_counters_status(void) ··· 4351 4308 hvregs.vcpu_token = vcpu->vcpu_id; 4352 4309 } 4353 4310 hvregs.hdec_expiry = time_limit; 4311 + 4312 + /* 4313 + * hvregs has the doorbell status, so zero it here which 4314 + * enables us to receive doorbells when H_ENTER_NESTED is 4315 + * in progress for this vCPU 4316 + */ 4317 + 4318 + if (vcpu->arch.doorbell_request) 4319 + vcpu->arch.doorbell_request = 0; 4354 4320 4355 4321 /* 4356 4322 * When setting DEC, we must always deal with irq_work_raise ··· 4964 4912 lpcr &= ~LPCR_MER; 4965 4913 } 4966 4914 } else if (vcpu->arch.pending_exceptions || 4967 - vcpu->arch.doorbell_request || 4968 4915 xive_interrupt_pending(vcpu)) { 4969 4916 vcpu->arch.ret = RESUME_HOST; 4970 4917 goto out;
+12 -4
arch/powerpc/kvm/book3s_hv_nested.c
··· 32 32 struct kvmppc_vcore *vc = vcpu->arch.vcore; 33 33 34 34 hr->pcr = vc->pcr | PCR_MASK; 35 - hr->dpdes = vc->dpdes; 35 + hr->dpdes = vcpu->arch.doorbell_request; 36 36 hr->hfscr = vcpu->arch.hfscr; 37 37 hr->tb_offset = vc->tb_offset; 38 38 hr->dawr0 = vcpu->arch.dawr0; ··· 105 105 { 106 106 struct kvmppc_vcore *vc = vcpu->arch.vcore; 107 107 108 - hr->dpdes = vc->dpdes; 108 + hr->dpdes = vcpu->arch.doorbell_request; 109 109 hr->purr = vcpu->arch.purr; 110 110 hr->spurr = vcpu->arch.spurr; 111 111 hr->ic = vcpu->arch.ic; ··· 143 143 struct kvmppc_vcore *vc = vcpu->arch.vcore; 144 144 145 145 vc->pcr = hr->pcr | PCR_MASK; 146 - vc->dpdes = hr->dpdes; 146 + vcpu->arch.doorbell_request = hr->dpdes; 147 147 vcpu->arch.hfscr = hr->hfscr; 148 148 vcpu->arch.dawr0 = hr->dawr0; 149 149 vcpu->arch.dawrx0 = hr->dawrx0; ··· 170 170 { 171 171 struct kvmppc_vcore *vc = vcpu->arch.vcore; 172 172 173 - vc->dpdes = hr->dpdes; 173 + /* 174 + * This L2 vCPU might have received a doorbell while H_ENTER_NESTED was being handled. 175 + * Make sure we preserve the doorbell if it was either: 176 + * a) Sent after H_ENTER_NESTED was called on this vCPU (arch.doorbell_request would be 1) 177 + * b) Doorbell was not handled and L2 exited for some other reason (hr->dpdes would be 1) 178 + */ 179 + vcpu->arch.doorbell_request = vcpu->arch.doorbell_request | hr->dpdes; 174 180 vcpu->arch.hfscr = hr->hfscr; 175 181 vcpu->arch.purr = hr->purr; 176 182 vcpu->arch.spurr = hr->spurr; ··· 451 445 if (rc == H_SUCCESS) { 452 446 unsigned long capabilities = 0; 453 447 448 + if (cpu_has_feature(CPU_FTR_P11_PVR)) 449 + capabilities |= H_GUEST_CAP_POWER11; 454 450 if (cpu_has_feature(CPU_FTR_ARCH_31)) 455 451 capabilities |= H_GUEST_CAP_POWER10; 456 452 if (cpu_has_feature(CPU_FTR_ARCH_300))
+3 -1
arch/powerpc/kvm/book3s_hv_nestedv2.c
···
 	 * default to L1's PVR.
 	 */
 	if (!vcpu->arch.vcore->arch_compat) {
-		if (cpu_has_feature(CPU_FTR_ARCH_31))
+		if (cpu_has_feature(CPU_FTR_P11_PVR))
+			arch_compat = PVR_ARCH_31_P11;
+		else if (cpu_has_feature(CPU_FTR_ARCH_31))
 			arch_compat = PVR_ARCH_31;
 		else if (cpu_has_feature(CPU_FTR_ARCH_300))
 			arch_compat = PVR_ARCH_300;
+1 -7
arch/powerpc/kvm/book3s_mmu_hpte.c
···
 	spin_unlock(&vcpu3s->mmu_lock);
 }
 
-static void free_pte_rcu(struct rcu_head *head)
-{
-	struct hpte_cache *pte = container_of(head, struct hpte_cache, rcu_head);
-	kmem_cache_free(hpte_cache, pte);
-}
-
 static void invalidate_pte(struct kvm_vcpu *vcpu, struct hpte_cache *pte)
 {
 	struct kvmppc_vcpu_book3s *vcpu3s = to_book3s(vcpu);
···
 
 	spin_unlock(&vcpu3s->mmu_lock);
 
-	call_rcu(&pte->rcu_head, free_pte_rcu);
+	kfree_rcu(pte, rcu_head);
 }
 
 static void kvmppc_mmu_pte_flush_all(struct kvm_vcpu *vcpu)
+1 -1
arch/powerpc/kvm/trace_hv.h
···
 	TP_printk("VCPU %d: l1_to_l2_cs_time=%llu ns l2_to_l1_cs_time=%llu ns l2_runtime=%llu ns",
 		__entry->vcpu_id, __entry->l1_to_l2_cs,
 		__entry->l2_to_l1_cs, __entry->l2_runtime),
-	kmvhv_counters_tracepoint_regfunc, kmvhv_counters_tracepoint_unregfunc
+	kvmhv_counters_tracepoint_regfunc, kvmhv_counters_tracepoint_unregfunc
 );
 #endif
 #endif /* _TRACE_KVM_HV_H */
+4 -8
arch/powerpc/lib/sstep.c
···
 #endif /* __powerpc64 */
 
 #ifdef CONFIG_VSX
-void emulate_vsx_load(struct instruction_op *op, union vsx_reg *reg,
-		      const void *mem, bool rev)
+static nokprobe_inline void emulate_vsx_load(struct instruction_op *op, union vsx_reg *reg,
+					     const void *mem, bool rev)
 {
 	int size, read_size;
 	int i, j;
···
 		break;
 	}
 }
-EXPORT_SYMBOL_GPL(emulate_vsx_load);
-NOKPROBE_SYMBOL(emulate_vsx_load);
 
-void emulate_vsx_store(struct instruction_op *op, const union vsx_reg *reg,
-		       void *mem, bool rev)
+static nokprobe_inline void emulate_vsx_store(struct instruction_op *op, const union vsx_reg *reg,
+					      void *mem, bool rev)
 {
 	int size, write_size;
 	int i, j;
···
 		break;
 	}
 }
-EXPORT_SYMBOL_GPL(emulate_vsx_store);
-NOKPROBE_SYMBOL(emulate_vsx_store);
 
 static nokprobe_inline int do_vsx_load(struct instruction_op *op,
 				       unsigned long ea, struct pt_regs *regs,
+272 -92
arch/powerpc/mm/book3s64/hash_utils.c
···
 #include <linux/random.h>
 #include <linux/elf-randomize.h>
 #include <linux/of_fdt.h>
+#include <linux/kfence.h>
 
 #include <asm/interrupt.h>
 #include <asm/processor.h>
···
 #include <asm/pte-walk.h>
 #include <asm/asm-prototypes.h>
 #include <asm/ultravisor.h>
+#include <asm/kfence.h>
 
 #include <mm/mmu_decl.h>
 
···
 #ifdef CONFIG_PPC_64K_PAGES
 int mmu_ci_restrictions;
 #endif
-static u8 *linear_map_hash_slots;
-static unsigned long linear_map_hash_count;
 struct mmu_hash_ops mmu_hash_ops __ro_after_init;
 EXPORT_SYMBOL(mmu_hash_ops);
···
 	else
 		WARN(1, "%s called on pre-POWER7 CPU\n", __func__);
 }
+
+#if defined(CONFIG_DEBUG_PAGEALLOC) || defined(CONFIG_KFENCE)
+static void kernel_map_linear_page(unsigned long vaddr, unsigned long idx,
+				   u8 *slots, raw_spinlock_t *lock)
+{
+	unsigned long hash;
+	unsigned long vsid = get_kernel_vsid(vaddr, mmu_kernel_ssize);
+	unsigned long vpn = hpt_vpn(vaddr, vsid, mmu_kernel_ssize);
+	unsigned long mode = htab_convert_pte_flags(pgprot_val(PAGE_KERNEL), HPTE_USE_KERNEL_KEY);
+	long ret;
+
+	hash = hpt_hash(vpn, PAGE_SHIFT, mmu_kernel_ssize);
+
+	/* Don't create HPTE entries for bad address */
+	if (!vsid)
+		return;
+
+	if (slots[idx] & 0x80)
+		return;
+
+	ret = hpte_insert_repeating(hash, vpn, __pa(vaddr), mode,
+				    HPTE_V_BOLTED,
+				    mmu_linear_psize, mmu_kernel_ssize);
+
+	BUG_ON (ret < 0);
+	raw_spin_lock(lock);
+	BUG_ON(slots[idx] & 0x80);
+	slots[idx] = ret | 0x80;
+	raw_spin_unlock(lock);
+}
+
+static void kernel_unmap_linear_page(unsigned long vaddr, unsigned long idx,
+				     u8 *slots, raw_spinlock_t *lock)
+{
+	unsigned long hash, hslot, slot;
+	unsigned long vsid = get_kernel_vsid(vaddr, mmu_kernel_ssize);
+	unsigned long vpn = hpt_vpn(vaddr, vsid, mmu_kernel_ssize);
+
+	hash = hpt_hash(vpn, PAGE_SHIFT, mmu_kernel_ssize);
+	raw_spin_lock(lock);
+	if (!(slots[idx] & 0x80)) {
+		raw_spin_unlock(lock);
+		return;
+	}
+	hslot = slots[idx] & 0x7f;
+	slots[idx] = 0;
+	raw_spin_unlock(lock);
+	if (hslot & _PTEIDX_SECONDARY)
+		hash = ~hash;
+	slot = (hash & htab_hash_mask) * HPTES_PER_GROUP;
+	slot += hslot & _PTEIDX_GROUP_IX;
+	mmu_hash_ops.hpte_invalidate(slot, vpn, mmu_linear_psize,
+				     mmu_linear_psize,
+				     mmu_kernel_ssize, 0);
+}
+#endif
+
+static inline bool hash_supports_debug_pagealloc(void)
+{
+	unsigned long max_hash_count = ppc64_rma_size / 4;
+	unsigned long linear_map_count = memblock_end_of_DRAM() >> PAGE_SHIFT;
+
+	if (!debug_pagealloc_enabled() || linear_map_count > max_hash_count)
+		return false;
+	return true;
+}
+
+#ifdef CONFIG_DEBUG_PAGEALLOC
+static u8 *linear_map_hash_slots;
+static unsigned long linear_map_hash_count;
+static DEFINE_RAW_SPINLOCK(linear_map_hash_lock);
+static void hash_debug_pagealloc_alloc_slots(void)
+{
+	if (!hash_supports_debug_pagealloc())
+		return;
+
+	linear_map_hash_count = memblock_end_of_DRAM() >> PAGE_SHIFT;
+	linear_map_hash_slots = memblock_alloc_try_nid(
+			linear_map_hash_count, 1, MEMBLOCK_LOW_LIMIT,
+			ppc64_rma_size, NUMA_NO_NODE);
+	if (!linear_map_hash_slots)
+		panic("%s: Failed to allocate %lu bytes max_addr=%pa\n",
+		      __func__, linear_map_hash_count, &ppc64_rma_size);
+}
+
+static inline void hash_debug_pagealloc_add_slot(phys_addr_t paddr, int slot)
+{
+	if (!debug_pagealloc_enabled() || !linear_map_hash_count)
+		return;
+	if ((paddr >> PAGE_SHIFT) < linear_map_hash_count)
+		linear_map_hash_slots[paddr >> PAGE_SHIFT] = slot | 0x80;
+}
+
+static int hash_debug_pagealloc_map_pages(struct page *page, int numpages,
+					  int enable)
+{
+	unsigned long flags, vaddr, lmi;
+	int i;
+
+	if (!debug_pagealloc_enabled() || !linear_map_hash_count)
+		return 0;
+
+	local_irq_save(flags);
+	for (i = 0; i < numpages; i++, page++) {
+		vaddr = (unsigned long)page_address(page);
+		lmi = __pa(vaddr) >> PAGE_SHIFT;
+		if (lmi >= linear_map_hash_count)
+			continue;
+		if (enable)
+			kernel_map_linear_page(vaddr, lmi,
+				linear_map_hash_slots, &linear_map_hash_lock);
+		else
+			kernel_unmap_linear_page(vaddr, lmi,
+				linear_map_hash_slots, &linear_map_hash_lock);
+	}
+	local_irq_restore(flags);
+	return 0;
+}
+
+#else /* CONFIG_DEBUG_PAGEALLOC */
+static inline void hash_debug_pagealloc_alloc_slots(void) {}
+static inline void hash_debug_pagealloc_add_slot(phys_addr_t paddr, int slot) {}
+static int __maybe_unused
+hash_debug_pagealloc_map_pages(struct page *page, int numpages, int enable)
+{
+	return 0;
+}
+#endif /* CONFIG_DEBUG_PAGEALLOC */
+
+#ifdef CONFIG_KFENCE
+static u8 *linear_map_kf_hash_slots;
+static unsigned long linear_map_kf_hash_count;
+static DEFINE_RAW_SPINLOCK(linear_map_kf_hash_lock);
+
+static phys_addr_t kfence_pool;
+
+static inline void hash_kfence_alloc_pool(void)
+{
+	if (!kfence_early_init_enabled())
+		goto err;
+
+	/* allocate linear map for kfence within RMA region */
+	linear_map_kf_hash_count = KFENCE_POOL_SIZE >> PAGE_SHIFT;
+	linear_map_kf_hash_slots = memblock_alloc_try_nid(
+			linear_map_kf_hash_count, 1,
+			MEMBLOCK_LOW_LIMIT, ppc64_rma_size,
+			NUMA_NO_NODE);
+	if (!linear_map_kf_hash_slots) {
+		pr_err("%s: memblock for linear map (%lu) failed\n", __func__,
+		       linear_map_kf_hash_count);
+		goto err;
+	}
+
+	/* allocate kfence pool early */
+	kfence_pool = memblock_phys_alloc_range(KFENCE_POOL_SIZE, PAGE_SIZE,
+			MEMBLOCK_LOW_LIMIT, MEMBLOCK_ALLOC_ANYWHERE);
+	if (!kfence_pool) {
+		pr_err("%s: memblock for kfence pool (%lu) failed\n", __func__,
+		       KFENCE_POOL_SIZE);
+		memblock_free(linear_map_kf_hash_slots,
+			      linear_map_kf_hash_count);
+		linear_map_kf_hash_count = 0;
+		goto err;
+	}
+	memblock_mark_nomap(kfence_pool, KFENCE_POOL_SIZE);
+
+	return;
+err:
+	pr_info("Disabling kfence\n");
+	disable_kfence();
+}
+
+static inline void hash_kfence_map_pool(void)
+{
+	unsigned long kfence_pool_start, kfence_pool_end;
+	unsigned long prot = pgprot_val(PAGE_KERNEL);
+
+	if (!kfence_pool)
+		return;
+
+	kfence_pool_start = (unsigned long) __va(kfence_pool);
+	kfence_pool_end = kfence_pool_start + KFENCE_POOL_SIZE;
+	__kfence_pool = (char *) kfence_pool_start;
+	BUG_ON(htab_bolt_mapping(kfence_pool_start, kfence_pool_end,
+				 kfence_pool, prot, mmu_linear_psize,
+				 mmu_kernel_ssize));
+	memblock_clear_nomap(kfence_pool, KFENCE_POOL_SIZE);
+}
+
+static inline void hash_kfence_add_slot(phys_addr_t paddr, int slot)
+{
+	unsigned long vaddr = (unsigned long) __va(paddr);
+	unsigned long lmi = (vaddr - (unsigned long)__kfence_pool)
+					>> PAGE_SHIFT;
+
+	if (!kfence_pool)
+		return;
+	BUG_ON(!is_kfence_address((void *)vaddr));
+	BUG_ON(lmi >= linear_map_kf_hash_count);
+	linear_map_kf_hash_slots[lmi] = slot | 0x80;
+}
+
+static int hash_kfence_map_pages(struct page *page, int numpages, int enable)
+{
+	unsigned long flags, vaddr, lmi;
+	int i;
+
+	WARN_ON_ONCE(!linear_map_kf_hash_count);
+	local_irq_save(flags);
+	for (i = 0; i < numpages; i++, page++) {
+		vaddr = (unsigned long)page_address(page);
+		lmi = (vaddr - (unsigned long)__kfence_pool) >> PAGE_SHIFT;
+
+		/* Ideally this should never happen */
+		if (lmi >= linear_map_kf_hash_count) {
+			WARN_ON_ONCE(1);
+			continue;
+		}
+
+		if (enable)
+			kernel_map_linear_page(vaddr, lmi,
+					       linear_map_kf_hash_slots,
+					       &linear_map_kf_hash_lock);
+		else
+			kernel_unmap_linear_page(vaddr, lmi,
+						 linear_map_kf_hash_slots,
+						 &linear_map_kf_hash_lock);
+	}
+	local_irq_restore(flags);
+	return 0;
+}
+#else
+static inline void hash_kfence_alloc_pool(void) {}
+static inline void hash_kfence_map_pool(void) {}
+static inline void hash_kfence_add_slot(phys_addr_t paddr, int slot) {}
+static int __maybe_unused
+hash_kfence_map_pages(struct page *page, int numpages, int enable)
+{
+	return 0;
+}
+#endif
+
+#if defined(CONFIG_DEBUG_PAGEALLOC) || defined(CONFIG_KFENCE)
+int hash__kernel_map_pages(struct page *page, int numpages, int enable)
+{
+	void *vaddr = page_address(page);
+
+	if (is_kfence_address(vaddr))
+		return hash_kfence_map_pages(page, numpages, enable);
+	else
+		return hash_debug_pagealloc_map_pages(page, numpages, enable);
+}
+
+static void hash_linear_map_add_slot(phys_addr_t paddr, int slot)
+{
+	if (is_kfence_address(__va(paddr)))
+		hash_kfence_add_slot(paddr, slot);
+	else
+		hash_debug_pagealloc_add_slot(paddr, slot);
+}
+#else
+static void hash_linear_map_add_slot(phys_addr_t paddr, int slot) {}
+#endif
 
 /*
  * 'R' and 'C' update notes:
···
 			break;
 
 		cond_resched();
-		if (debug_pagealloc_enabled_or_kfence() &&
-		    (paddr >> PAGE_SHIFT) < linear_map_hash_count)
-			linear_map_hash_slots[paddr >> PAGE_SHIFT] = ret | 0x80;
+		/* add slot info in debug_pagealloc / kfence linear map */
+		hash_linear_map_add_slot(paddr, ret);
 	}
 	return ret < 0 ? ret : 0;
 }
···
 	bool aligned = true;
 	init_hpte_page_sizes();
 
-	if (!debug_pagealloc_enabled_or_kfence()) {
+	if (!hash_supports_debug_pagealloc() && !kfence_early_init_enabled()) {
 		/*
 		 * Pick a size for the linear mapping. Currently, we only
 		 * support 16M, 1M and 4K which is the default
···
 
 	prot = pgprot_val(PAGE_KERNEL);
 
-	if (debug_pagealloc_enabled_or_kfence()) {
-		linear_map_hash_count = memblock_end_of_DRAM() >> PAGE_SHIFT;
-		linear_map_hash_slots = memblock_alloc_try_nid(
-				linear_map_hash_count, 1, MEMBLOCK_LOW_LIMIT,
-				ppc64_rma_size, NUMA_NO_NODE);
-		if (!linear_map_hash_slots)
-			panic("%s: Failed to allocate %lu bytes max_addr=%pa\n",
-			      __func__, linear_map_hash_count, &ppc64_rma_size);
-	}
-
+	hash_debug_pagealloc_alloc_slots();
+	hash_kfence_alloc_pool();
 	/* create bolted the linear mapping in the hash table */
 	for_each_mem_range(i, &base, &end) {
 		size = end - base;
···
 		BUG_ON(htab_bolt_mapping(base, base + size, __pa(base),
 				prot, mmu_linear_psize, mmu_kernel_ssize));
 	}
+	hash_kfence_map_pool();
 	memblock_set_current_limit(MEMBLOCK_ALLOC_ANYWHERE);
 
 	/*
···
 			stress_hpt_struct[cpu].last_group[0] = hpte_group;
 	}
 }
-
-#if defined(CONFIG_DEBUG_PAGEALLOC) || defined(CONFIG_KFENCE)
-static DEFINE_RAW_SPINLOCK(linear_map_hash_lock);
-
-static void kernel_map_linear_page(unsigned long vaddr, unsigned long lmi)
-{
-	unsigned long hash;
-	unsigned long vsid = get_kernel_vsid(vaddr, mmu_kernel_ssize);
-	unsigned long vpn = hpt_vpn(vaddr, vsid, mmu_kernel_ssize);
-	unsigned long mode = htab_convert_pte_flags(pgprot_val(PAGE_KERNEL), HPTE_USE_KERNEL_KEY);
-	long ret;
-
-	hash = hpt_hash(vpn, PAGE_SHIFT, mmu_kernel_ssize);
-
-	/* Don't create HPTE entries for bad address */
-	if (!vsid)
-		return;
-
-	if (linear_map_hash_slots[lmi] & 0x80)
-		return;
-
-	ret = hpte_insert_repeating(hash, vpn, __pa(vaddr), mode,
-				    HPTE_V_BOLTED,
-				    mmu_linear_psize, mmu_kernel_ssize);
-
-	BUG_ON (ret < 0);
-	raw_spin_lock(&linear_map_hash_lock);
-	BUG_ON(linear_map_hash_slots[lmi] & 0x80);
-	linear_map_hash_slots[lmi] = ret | 0x80;
-	raw_spin_unlock(&linear_map_hash_lock);
-}
-
-static void kernel_unmap_linear_page(unsigned long vaddr, unsigned long lmi)
-{
-	unsigned long hash, hidx, slot;
-	unsigned long vsid = get_kernel_vsid(vaddr, mmu_kernel_ssize);
-	unsigned long vpn = hpt_vpn(vaddr, vsid, mmu_kernel_ssize);
-
-	hash = hpt_hash(vpn, PAGE_SHIFT, mmu_kernel_ssize);
-	raw_spin_lock(&linear_map_hash_lock);
-	if (!(linear_map_hash_slots[lmi] & 0x80)) {
-		raw_spin_unlock(&linear_map_hash_lock);
-		return;
-	}
-	hidx = linear_map_hash_slots[lmi] & 0x7f;
-	linear_map_hash_slots[lmi] = 0;
-	raw_spin_unlock(&linear_map_hash_lock);
-	if (hidx & _PTEIDX_SECONDARY)
-		hash = ~hash;
-	slot = (hash & htab_hash_mask) * HPTES_PER_GROUP;
-	slot += hidx & _PTEIDX_GROUP_IX;
-	mmu_hash_ops.hpte_invalidate(slot, vpn, mmu_linear_psize,
-				     mmu_linear_psize,
-				     mmu_kernel_ssize, 0);
-}
-
-int hash__kernel_map_pages(struct page *page, int numpages, int enable)
-{
-	unsigned long flags, vaddr, lmi;
-	int i;
-
-	local_irq_save(flags);
-	for (i = 0; i < numpages; i++, page++) {
-		vaddr = (unsigned long)page_address(page);
-		lmi = __pa(vaddr) >> PAGE_SHIFT;
-		if (lmi >= linear_map_hash_count)
-			continue;
-		if (enable)
-			kernel_map_linear_page(vaddr, lmi);
-		else
-			kernel_unmap_linear_page(vaddr, lmi);
-	}
-	local_irq_restore(flags);
-	return 0;
-}
-#endif /* CONFIG_DEBUG_PAGEALLOC || CONFIG_KFENCE */
 
 void hash__setup_initial_memory_limit(phys_addr_t first_memblock_base,
 				      phys_addr_t first_memblock_size)
+13
arch/powerpc/mm/book3s64/pgtable.c
···
 unsigned long __pmd_frag_size_shift;
 EXPORT_SYMBOL(__pmd_frag_size_shift);
 
+#ifdef CONFIG_KFENCE
+extern bool kfence_early_init;
+static int __init parse_kfence_early_init(char *arg)
+{
+	int val;
+
+	if (get_option(&arg, &val))
+		kfence_early_init = !!val;
+	return 0;
+}
+early_param("kfence.sample_interval", parse_kfence_early_init);
+#endif
+
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 /*
  * This is called when relaxing access to a hugepage. It's also called in the page
-12
arch/powerpc/mm/book3s64/radix_pgtable.c
···
 }
 
 #ifdef CONFIG_KFENCE
-static bool __ro_after_init kfence_early_init = !!CONFIG_KFENCE_SAMPLE_INTERVAL;
-
-static int __init parse_kfence_early_init(char *arg)
-{
-	int val;
-
-	if (get_option(&arg, &val))
-		kfence_early_init = !!val;
-	return 0;
-}
-early_param("kfence.sample_interval", parse_kfence_early_init);
-
 static inline phys_addr_t alloc_kfence_pool(void)
 {
 	phys_addr_t kfence_pool;
+8 -2
arch/powerpc/mm/fault.c
···
 	/*
 	 * The kernel should never take an execute fault nor should it
 	 * take a page fault to a kernel address or a page fault to a user
-	 * address outside of dedicated places
+	 * address outside of dedicated places.
+	 *
+	 * Rather than kfence directly reporting false negatives, search whether
+	 * the NIP belongs to the fixup table for cases where fault could come
+	 * from functions like copy_from_kernel_nofault().
 	 */
 	if (unlikely(!is_user && bad_kernel_fault(regs, error_code, address, is_write))) {
-		if (kfence_handle_page_fault(address, is_write, regs))
+		if (is_kfence_address((void *)address) &&
+		    !search_exception_tables(instruction_pointer(regs)) &&
+		    kfence_handle_page_fault(address, is_write, regs))
 			return 0;
 
 		return SIGSEGV;
+1
arch/powerpc/mm/init-common.c
···
 bool disable_kuap = !IS_ENABLED(CONFIG_PPC_KUAP);
 #ifdef CONFIG_KFENCE
 bool __ro_after_init kfence_disabled;
+bool __ro_after_init kfence_early_init = !!CONFIG_KFENCE_SAMPLE_INTERVAL;
 #endif
 
 static int __init parse_nosmep(char *p)
+17
arch/powerpc/net/bpf_jit.h
···
 
 #include <asm/types.h>
 #include <asm/ppc-opcode.h>
+#include <linux/build_bug.h>
 
 #ifdef CONFIG_PPC64_ELF_ABI_V1
 #define FUNCTION_DESCR_SIZE	24
···
 #endif
 
 #define CTX_NIA(ctx) ((unsigned long)ctx->idx * 4)
+
+#define SZL			sizeof(unsigned long)
+#define BPF_INSN_SAFETY		64
 
 #define PLANT_INSTR(d, idx, instr)					      \
 	do { if (d) { (d)[idx] = instr; } idx++; } while (0)
···
 		EMIT(PPC_RAW_ORI(d, d, (uintptr_t)(i) &		      \
 						0xffff));	      \
 		} } while (0)
+#define PPC_LI_ADDR	PPC_LI64
+
+#ifndef CONFIG_PPC_KERNEL_PCREL
+#define PPC64_LOAD_PACA()						      \
+	EMIT(PPC_RAW_LD(_R2, _R13, offsetof(struct paca_struct, kernel_toc)))
+#else
+#define PPC64_LOAD_PACA() do {} while (0)
+#endif
+#else
+#define PPC_LI64(d, i)	BUILD_BUG()
+#define PPC_LI_ADDR	PPC_LI32
+#define PPC64_LOAD_PACA() BUILD_BUG()
 #endif
 
 /*
···
 		       u32 *addrs, int pass, bool extra_pass);
 void bpf_jit_build_prologue(u32 *image, struct codegen_context *ctx);
 void bpf_jit_build_epilogue(u32 *image, struct codegen_context *ctx);
+void bpf_jit_build_fentry_stubs(u32 *image, struct codegen_context *ctx);
 void bpf_jit_realloc_regs(struct codegen_context *ctx);
 int bpf_jit_emit_exit_insn(u32 *image, struct codegen_context *ctx, int tmp_reg, long exit_addr);
 
+846 -1
arch/powerpc/net/bpf_jit_comp.c
···
 
 #include "bpf_jit.h"
 
+/* These offsets are from bpf prog end and stay the same across progs */
+static int bpf_jit_ool_stub, bpf_jit_long_branch_stub;
+
 static void bpf_jit_fill_ill_insns(void *area, unsigned int size)
 {
 	memset32(area, BREAKPOINT_INSTRUCTION, size / 4);
+}
+
+void dummy_tramp(void);
+
+asm (
+"	.pushsection .text, \"ax\", @progbits	;"
+"	.global dummy_tramp			;"
+"	.type dummy_tramp, @function		;"
+"dummy_tramp:					;"
+#ifdef CONFIG_PPC_FTRACE_OUT_OF_LINE
+"	blr					;"
+#else
+/* LR is always in r11, so we don't need a 'mflr r11' here */
+"	mtctr	11				;"
+"	mtlr	0				;"
+"	bctr					;"
+#endif
+"	.size dummy_tramp, .-dummy_tramp	;"
+"	.popsection				;"
+);
+
+void bpf_jit_build_fentry_stubs(u32 *image, struct codegen_context *ctx)
+{
+	int ool_stub_idx, long_branch_stub_idx;
+
+	/*
+	 * Out-of-line stub:
+	 *	mflr	r0
+	 *	[b|bl]	tramp
+	 *	mtlr	r0 // only with CONFIG_PPC_FTRACE_OUT_OF_LINE
+	 *	b	bpf_func + 4
+	 */
+	ool_stub_idx = ctx->idx;
+	EMIT(PPC_RAW_MFLR(_R0));
+	EMIT(PPC_RAW_NOP());
+	if (IS_ENABLED(CONFIG_PPC_FTRACE_OUT_OF_LINE))
+		EMIT(PPC_RAW_MTLR(_R0));
+	WARN_ON_ONCE(!is_offset_in_branch_range(4 - (long)ctx->idx * 4));
+	EMIT(PPC_RAW_BRANCH(4 - (long)ctx->idx * 4));
+
+	/*
+	 * Long branch stub:
+	 *	.long <dummy_tramp_addr>
+	 *	mflr	r11
+	 *	bcl	20,31,$+4
+	 *	mflr	r12
+	 *	ld	r12, -8-SZL(r12)
+	 *	mtctr	r12
+	 *	mtlr	r11 // needed to retain ftrace ABI
+	 *	bctr
+	 */
+	if (image)
+		*((unsigned long *)&image[ctx->idx]) = (unsigned long)dummy_tramp;
+	ctx->idx += SZL / 4;
+	long_branch_stub_idx = ctx->idx;
+	EMIT(PPC_RAW_MFLR(_R11));
+	EMIT(PPC_RAW_BCL4());
+	EMIT(PPC_RAW_MFLR(_R12));
+	EMIT(PPC_RAW_LL(_R12, _R12, -8-SZL));
+	EMIT(PPC_RAW_MTCTR(_R12));
+	EMIT(PPC_RAW_MTLR(_R11));
+	EMIT(PPC_RAW_BCTR());
+
+	if (!bpf_jit_ool_stub) {
+		bpf_jit_ool_stub = (ctx->idx - ool_stub_idx) * 4;
+		bpf_jit_long_branch_stub = (ctx->idx - long_branch_stub_idx) * 4;
+	}
 }
 
 int bpf_jit_emit_exit_insn(u32 *image, struct codegen_context *ctx, int tmp_reg, long exit_addr)
···
 
 	fp->bpf_func = (void *)fimage;
 	fp->jited = 1;
-	fp->jited_len = proglen + FUNCTION_DESCR_SIZE;
+	fp->jited_len = cgctx.idx * 4 + FUNCTION_DESCR_SIZE;
 
 	if (!fp->is_func || extra_pass) {
 		if (bpf_jit_binary_pack_finalize(fhdr, hdr)) {
···
 bool bpf_jit_supports_far_kfunc_call(void)
 {
 	return IS_ENABLED(CONFIG_PPC64);
+}
+
+void *arch_alloc_bpf_trampoline(unsigned int size)
+{
+	return bpf_prog_pack_alloc(size, bpf_jit_fill_ill_insns);
+}
+
+void arch_free_bpf_trampoline(void *image, unsigned int size)
+{
+	bpf_prog_pack_free(image, size);
+}
+
+int arch_protect_bpf_trampoline(void *image, unsigned int size)
+{
+	return 0;
+}
+
+static int invoke_bpf_prog(u32 *image, u32 *ro_image, struct codegen_context *ctx,
+			   struct bpf_tramp_link *l, int regs_off, int retval_off,
+			   int run_ctx_off, bool save_ret)
+{
+	struct bpf_prog *p = l->link.prog;
+	ppc_inst_t branch_insn;
+	u32 jmp_idx;
+	int ret = 0;
+
+	/* Save cookie */
+	if (IS_ENABLED(CONFIG_PPC64)) {
+		PPC_LI64(_R3, l->cookie);
+		EMIT(PPC_RAW_STD(_R3, _R1, run_ctx_off + offsetof(struct bpf_tramp_run_ctx,
+				 bpf_cookie)));
+	} else {
+		PPC_LI32(_R3, l->cookie >> 32);
+		PPC_LI32(_R4, l->cookie);
+		EMIT(PPC_RAW_STW(_R3, _R1,
+				 run_ctx_off + offsetof(struct bpf_tramp_run_ctx, bpf_cookie)));
+		EMIT(PPC_RAW_STW(_R4, _R1,
+				 run_ctx_off + offsetof(struct bpf_tramp_run_ctx, bpf_cookie) + 4));
+	}
+
+	/* __bpf_prog_enter(p, &bpf_tramp_run_ctx) */
+	PPC_LI_ADDR(_R3, p);
+	EMIT(PPC_RAW_MR(_R25, _R3));
+	EMIT(PPC_RAW_ADDI(_R4, _R1, run_ctx_off));
+	ret = bpf_jit_emit_func_call_rel(image, ro_image, ctx,
+					 (unsigned long)bpf_trampoline_enter(p));
+	if (ret)
+		return ret;
+
+	/* Remember prog start time returned by __bpf_prog_enter */
+	EMIT(PPC_RAW_MR(_R26, _R3));
+
+	/*
+	 * if (__bpf_prog_enter(p) == 0)
+	 *	goto skip_exec_of_prog;
+	 *
+	 * Emit a nop to be later patched with conditional branch, once offset is known
+	 */
+	EMIT(PPC_RAW_CMPLI(_R3, 0));
+	jmp_idx = ctx->idx;
+	EMIT(PPC_RAW_NOP());
+
+	/* p->bpf_func(ctx) */
+	EMIT(PPC_RAW_ADDI(_R3, _R1, regs_off));
+	if (!p->jited)
+		PPC_LI_ADDR(_R4, (unsigned long)p->insnsi);
+	if (!create_branch(&branch_insn, (u32 *)&ro_image[ctx->idx], (unsigned long)p->bpf_func,
+			   BRANCH_SET_LINK)) {
+		if (image)
+			image[ctx->idx] = ppc_inst_val(branch_insn);
+		ctx->idx++;
+	} else {
+		EMIT(PPC_RAW_LL(_R12, _R25, offsetof(struct bpf_prog, bpf_func)));
+		EMIT(PPC_RAW_MTCTR(_R12));
+		EMIT(PPC_RAW_BCTRL());
+	}
+
+	if (save_ret)
+		EMIT(PPC_RAW_STL(_R3, _R1, retval_off));
+
+	/* Fix up branch */
+	if (image) {
+		if (create_cond_branch(&branch_insn, &image[jmp_idx],
+				       (unsigned long)&image[ctx->idx], COND_EQ << 16))
+			return -EINVAL;
+		image[jmp_idx] = ppc_inst_val(branch_insn);
+	}
+
+	/* __bpf_prog_exit(p, start_time, &bpf_tramp_run_ctx) */
+	EMIT(PPC_RAW_MR(_R3, _R25));
+	EMIT(PPC_RAW_MR(_R4, _R26));
+	EMIT(PPC_RAW_ADDI(_R5, _R1, run_ctx_off));
+	ret = bpf_jit_emit_func_call_rel(image, ro_image, ctx,
+					 (unsigned long)bpf_trampoline_exit(p));
+
+	return ret;
+}
+
+static int invoke_bpf_mod_ret(u32 *image, u32 *ro_image, struct codegen_context *ctx,
+			      struct bpf_tramp_links *tl, int regs_off, int retval_off,
+			      int run_ctx_off, u32 *branches)
+{
+	int i;
+
+	/*
+	 * The first fmod_ret program will receive a garbage return value.
+	 * Set this to 0 to avoid confusing the program.
+	 */
+	EMIT(PPC_RAW_LI(_R3, 0));
+	EMIT(PPC_RAW_STL(_R3, _R1, retval_off));
+	for (i = 0; i < tl->nr_links; i++) {
+		if (invoke_bpf_prog(image, ro_image, ctx, tl->links[i], regs_off, retval_off,
+				    run_ctx_off, true))
+			return -EINVAL;
+
+		/*
+		 * mod_ret prog stored return value after prog ctx. Emit:
+		 * if (*(u64 *)(ret_val) != 0)
+		 *	goto do_fexit;
+		 */
+		EMIT(PPC_RAW_LL(_R3, _R1, retval_off));
+		EMIT(PPC_RAW_CMPLI(_R3, 0));
+
+		/*
+		 * Save the location of the branch and generate a nop, which is
+		 * replaced with a conditional jump once do_fexit (i.e. the
+		 * start of the fexit invocation) is finalized.
+		 */
+		branches[i] = ctx->idx;
+		EMIT(PPC_RAW_NOP());
+	}
+
+	return 0;
+}
+
+static void bpf_trampoline_setup_tail_call_cnt(u32 *image, struct codegen_context *ctx,
+					       int func_frame_offset, int r4_off)
+{
+	if (IS_ENABLED(CONFIG_PPC64)) {
+		/* See bpf_jit_stack_tailcallcnt() */
+		int tailcallcnt_offset = 6 * 8;
+
+		EMIT(PPC_RAW_LL(_R3, _R1, func_frame_offset - tailcallcnt_offset));
+		EMIT(PPC_RAW_STL(_R3, _R1, -tailcallcnt_offset));
+	} else {
+		/* See bpf_jit_stack_offsetof() and BPF_PPC_TC */
+		EMIT(PPC_RAW_LL(_R4, _R1, r4_off));
+	}
+}
+
+static void bpf_trampoline_restore_tail_call_cnt(u32 *image, struct codegen_context *ctx,
+						 int func_frame_offset, int r4_off)
+{
+	if (IS_ENABLED(CONFIG_PPC64)) {
+		/* See bpf_jit_stack_tailcallcnt() */
+		int tailcallcnt_offset = 6 * 8;
+
+		EMIT(PPC_RAW_LL(_R3, _R1, -tailcallcnt_offset));
+		EMIT(PPC_RAW_STL(_R3, _R1, func_frame_offset - tailcallcnt_offset));
+	} else {
+		/* See bpf_jit_stack_offsetof() and BPF_PPC_TC */
+		EMIT(PPC_RAW_STL(_R4, _R1, r4_off));
+	}
+}
+
+static void bpf_trampoline_save_args(u32 *image, struct codegen_context *ctx, int func_frame_offset,
+				     int nr_regs, int regs_off)
+{
+	int param_save_area_offset;
+
+	param_save_area_offset = func_frame_offset; /* the two frames we alloted */
+	param_save_area_offset += STACK_FRAME_MIN_SIZE; /* param save area is past frame header */
+
+	for (int i = 0; i < nr_regs; i++) {
+		if (i < 8) {
+			EMIT(PPC_RAW_STL(_R3 + i, _R1, regs_off + i * SZL));
+		} else {
+			EMIT(PPC_RAW_LL(_R3, _R1, param_save_area_offset + i * SZL));
+			EMIT(PPC_RAW_STL(_R3, _R1, regs_off + i * SZL));
+		}
+	}
+}
+
+/* Used when restoring just the register parameters when returning back */
+static void bpf_trampoline_restore_args_regs(u32 *image, struct codegen_context *ctx,
+					     int nr_regs, int regs_off)
+{
+	for (int i = 0; i < nr_regs && i < 8; i++)
+		EMIT(PPC_RAW_LL(_R3 + i, _R1, regs_off + i * SZL));
+}
+
+/* Used when we call into the traced function. Replicate parameter save area */
+static void bpf_trampoline_restore_args_stack(u32 *image, struct codegen_context *ctx,
+					      int func_frame_offset, int nr_regs, int regs_off)
+{
+	int param_save_area_offset;
+
+	param_save_area_offset = func_frame_offset; /* the two frames we alloted */
+	param_save_area_offset += STACK_FRAME_MIN_SIZE; /* param save area is past frame header */
+
+	for (int i = 8; i < nr_regs; i++) {
+		EMIT(PPC_RAW_LL(_R3, _R1, param_save_area_offset + i * SZL));
+		EMIT(PPC_RAW_STL(_R3, _R1, STACK_FRAME_MIN_SIZE + i * SZL));
+	}
+	bpf_trampoline_restore_args_regs(image, ctx, nr_regs, regs_off);
+}
+
+static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_image,
+					 void *rw_image_end, void *ro_image,
+					 const struct btf_func_model *m, u32 flags,
+					 struct bpf_tramp_links *tlinks,
+					 void *func_addr)
+{
+	int regs_off, nregs_off, ip_off, run_ctx_off, retval_off, nvr_off, alt_lr_off, r4_off = 0;
+	int i, ret, nr_regs, bpf_frame_size = 0, bpf_dummy_frame_size = 0, func_frame_offset;
+	struct bpf_tramp_links *fmod_ret = &tlinks[BPF_TRAMP_MODIFY_RETURN];
+	struct bpf_tramp_links *fentry = &tlinks[BPF_TRAMP_FENTRY];
+	struct bpf_tramp_links *fexit = &tlinks[BPF_TRAMP_FEXIT];
+	struct codegen_context codegen_ctx, *ctx;
+	u32 *image = (u32 *)rw_image;
+	ppc_inst_t branch_insn;
+	u32 *branches = NULL;
+	bool save_ret;
+
+	if (IS_ENABLED(CONFIG_PPC32))
+		return -EOPNOTSUPP;
+
+	nr_regs = m->nr_args;
+	/* Extra registers for struct arguments */
+	for (i = 0; i < m->nr_args; i++)
+		if (m->arg_size[i] > SZL)
+			nr_regs += round_up(m->arg_size[i], SZL) / SZL - 1;
+
+	if (nr_regs > MAX_BPF_FUNC_ARGS)
+		return -EOPNOTSUPP;
+
+	ctx = &codegen_ctx;
+	memset(ctx, 0, sizeof(*ctx));
+
+	/*
+	 * Generated stack layout:
+	 *
+	 * func prev back chain         [ back chain        ]
+	 *                              [                   ]
+	 * bpf prog redzone/tailcallcnt [ ...               ] 64 bytes (64-bit powerpc)
+	 *                              [                   ] --
+	 * LR save area                 [ r0 save (64-bit)  ]   | header
+	 *                              [ r0 save (32-bit)  ]   |
+	 * dummy frame for unwind       [ back chain 1      ] --
+	 *                              [ padding           ] align stack frame
+	 *       r4_off                 [ r4 (tailcallcnt)  ] optional - 32-bit powerpc
+	 *       alt_lr_off             [ real lr (ool stub)] optional - actual lr
+	 *                              [ r26               ]
+	 *       nvr_off                [ r25               ] nvr save area
+	 *       retval_off             [ return value      ]
+	 *                              [ reg argN          ]
+	 *                              [ ...               ]
+	 *       regs_off               [ reg_arg1          ] prog ctx context
+	 *       nregs_off              [ args count        ]
+	 *       ip_off                 [ traced function   ]
+	 *                              [ ...               ]
+	 *       run_ctx_off            [ bpf_tramp_run_ctx ]
+	 *                              [ reg argN          ]
+	 *                              [ ...               ]
+	 *       param_save_area        [ reg_arg1          ] min 8 doublewords, per ABI
+	 *                              [ TOC save (64-bit) ] --
+	 *                              [ LR save (64-bit)  ]   | header
+	 *                              [ LR save (32-bit)  ]   |
+	 * bpf trampoline frame         [ back chain 2      ] --
+	 *
+	 */
+
+	/* Minimum stack frame header */
+	bpf_frame_size = STACK_FRAME_MIN_SIZE;
+
+	/*
+	 * Room for parameter save area.
+	 *
+	 * As per the ABI, this is required if we call into the traced
+	 * function (BPF_TRAMP_F_CALL_ORIG):
+	 * - if the function takes more than 8 arguments for the rest to spill onto the stack
+	 * - or, if the function has variadic arguments
+	 * - or, if this functions's prototype was not available to the caller
+	 *
+	 * Reserve space for at least 8 registers for now. This can be optimized later.
+	 */
+	bpf_frame_size += (nr_regs > 8 ? nr_regs : 8) * SZL;
+
+	/* Room for struct bpf_tramp_run_ctx */
+	run_ctx_off = bpf_frame_size;
+	bpf_frame_size += round_up(sizeof(struct bpf_tramp_run_ctx), SZL);
+
+	/* Room for IP address argument */
+	ip_off = bpf_frame_size;
+	if (flags & BPF_TRAMP_F_IP_ARG)
+		bpf_frame_size += SZL;
+
+	/* Room for args count */
+	nregs_off = bpf_frame_size;
+	bpf_frame_size += SZL;
+
+	/* Room for args */
+	regs_off = bpf_frame_size;
+	bpf_frame_size += nr_regs * SZL;
+
+	/* Room for return value of func_addr or fentry prog */
+	retval_off = bpf_frame_size;
+	save_ret = flags & (BPF_TRAMP_F_CALL_ORIG | BPF_TRAMP_F_RET_FENTRY_RET);
+	if (save_ret)
+		bpf_frame_size += SZL;
+
+	/* Room for nvr save area */
+	nvr_off = bpf_frame_size;
+	bpf_frame_size += 2 * SZL;
+
+	/* Optional save area for actual LR in case of ool ftrace */
+	if (IS_ENABLED(CONFIG_PPC_FTRACE_OUT_OF_LINE)) {
+		alt_lr_off = bpf_frame_size;
+		bpf_frame_size += SZL;
+	}
+
+	if (IS_ENABLED(CONFIG_PPC32)) {
+		if (nr_regs < 2) {
+			r4_off = bpf_frame_size;
+			bpf_frame_size += SZL;
+		} else {
+			r4_off = regs_off + SZL;
+		}
+	}
+
+	/* Padding to align stack frame, if any */
+	bpf_frame_size = round_up(bpf_frame_size, SZL * 2);
+
+	/* Dummy frame size for proper unwind - includes 64-bytes red zone for 64-bit powerpc */
+	bpf_dummy_frame_size = STACK_FRAME_MIN_SIZE + 64;
+
+	/* Offset to the traced function's stack frame */
+	func_frame_offset = bpf_dummy_frame_size + bpf_frame_size;
+
+	/* Create dummy frame for unwind, store original return value */
+	EMIT(PPC_RAW_STL(_R0, _R1, PPC_LR_STKOFF));
+	/* Protect red zone where tail call count goes */
+	EMIT(PPC_RAW_STLU(_R1, _R1, -bpf_dummy_frame_size));
+
+	/* Create our stack frame */
+	EMIT(PPC_RAW_STLU(_R1, _R1, -bpf_frame_size));
+
+	/* 64-bit: Save TOC and load kernel TOC */
+	if (IS_ENABLED(CONFIG_PPC64_ELF_ABI_V2) && !IS_ENABLED(CONFIG_PPC_KERNEL_PCREL)) {
+		EMIT(PPC_RAW_STD(_R2, _R1, 24));
+		PPC64_LOAD_PACA();
+	}
+
+	/* 32-bit: save tail call count in r4 */
+	if (IS_ENABLED(CONFIG_PPC32) && nr_regs < 2)
+		EMIT(PPC_RAW_STL(_R4, _R1, r4_off));
+
+	bpf_trampoline_save_args(image, ctx, func_frame_offset, nr_regs, regs_off);
+
+	/* Save our return address */
+	EMIT(PPC_RAW_MFLR(_R3));
+	if (IS_ENABLED(CONFIG_PPC_FTRACE_OUT_OF_LINE))
+		EMIT(PPC_RAW_STL(_R3, _R1, alt_lr_off));
+	else
+		EMIT(PPC_RAW_STL(_R3, _R1, bpf_frame_size + PPC_LR_STKOFF));
+
+	/*
+	 * Save ip address of the traced function.
+	 * We could recover this from LR, but we will need to address for OOL trampoline,
+	 * and optional GEP area.
+	 */
+	if (IS_ENABLED(CONFIG_PPC_FTRACE_OUT_OF_LINE) || flags & BPF_TRAMP_F_IP_ARG) {
+		EMIT(PPC_RAW_LWZ(_R4, _R3, 4));
+		EMIT(PPC_RAW_SLWI(_R4, _R4, 6));
+		EMIT(PPC_RAW_SRAWI(_R4, _R4, 6));
+		EMIT(PPC_RAW_ADD(_R3, _R3, _R4));
+		EMIT(PPC_RAW_ADDI(_R3, _R3, 4));
+	}
+
+	if (flags & BPF_TRAMP_F_IP_ARG)
+		EMIT(PPC_RAW_STL(_R3, _R1, ip_off));
+
+	if (IS_ENABLED(CONFIG_PPC_FTRACE_OUT_OF_LINE))
+		/* Fake our LR for unwind */
+		EMIT(PPC_RAW_STL(_R3, _R1, bpf_frame_size + PPC_LR_STKOFF));
+
+	/* Save function arg count -- see bpf_get_func_arg_cnt() */
+	EMIT(PPC_RAW_LI(_R3, nr_regs));
+	EMIT(PPC_RAW_STL(_R3, _R1, nregs_off));
+
+	/* Save nv regs */
+	EMIT(PPC_RAW_STL(_R25, _R1, nvr_off));
+	EMIT(PPC_RAW_STL(_R26, _R1, nvr_off + SZL));
+
+	if (flags & BPF_TRAMP_F_CALL_ORIG) {
+		PPC_LI_ADDR(_R3, (unsigned long)im);
+		ret = bpf_jit_emit_func_call_rel(image, ro_image, ctx,
+						 (unsigned long)__bpf_tramp_enter);
+		if (ret)
+			return ret;
+	}
+
+	for (i = 0; i < fentry->nr_links; i++)
if (invoke_bpf_prog(image, ro_image, ctx, fentry->links[i], regs_off, retval_off, 775 + run_ctx_off, flags & BPF_TRAMP_F_RET_FENTRY_RET)) 776 + return -EINVAL; 777 + 778 + if (fmod_ret->nr_links) { 779 + branches = kcalloc(fmod_ret->nr_links, sizeof(u32), GFP_KERNEL); 780 + if (!branches) 781 + return -ENOMEM; 782 + 783 + if (invoke_bpf_mod_ret(image, ro_image, ctx, fmod_ret, regs_off, retval_off, 784 + run_ctx_off, branches)) { 785 + ret = -EINVAL; 786 + goto cleanup; 787 + } 788 + } 789 + 790 + /* Call the traced function */ 791 + if (flags & BPF_TRAMP_F_CALL_ORIG) { 792 + /* 793 + * The address in LR save area points to the correct point in the original function 794 + * with both PPC_FTRACE_OUT_OF_LINE as well as with traditional ftrace instruction 795 + * sequence 796 + */ 797 + EMIT(PPC_RAW_LL(_R3, _R1, bpf_frame_size + PPC_LR_STKOFF)); 798 + EMIT(PPC_RAW_MTCTR(_R3)); 799 + 800 + /* Replicate tail_call_cnt before calling the original BPF prog */ 801 + if (flags & BPF_TRAMP_F_TAIL_CALL_CTX) 802 + bpf_trampoline_setup_tail_call_cnt(image, ctx, func_frame_offset, r4_off); 803 + 804 + /* Restore args */ 805 + bpf_trampoline_restore_args_stack(image, ctx, func_frame_offset, nr_regs, regs_off); 806 + 807 + /* Restore TOC for 64-bit */ 808 + if (IS_ENABLED(CONFIG_PPC64_ELF_ABI_V2) && !IS_ENABLED(CONFIG_PPC_KERNEL_PCREL)) 809 + EMIT(PPC_RAW_LD(_R2, _R1, 24)); 810 + EMIT(PPC_RAW_BCTRL()); 811 + if (IS_ENABLED(CONFIG_PPC64_ELF_ABI_V2) && !IS_ENABLED(CONFIG_PPC_KERNEL_PCREL)) 812 + PPC64_LOAD_PACA(); 813 + 814 + /* Store return value for bpf prog to access */ 815 + EMIT(PPC_RAW_STL(_R3, _R1, retval_off)); 816 + 817 + /* Restore updated tail_call_cnt */ 818 + if (flags & BPF_TRAMP_F_TAIL_CALL_CTX) 819 + bpf_trampoline_restore_tail_call_cnt(image, ctx, func_frame_offset, r4_off); 820 + 821 + /* Reserve space to patch branch instruction to skip fexit progs */ 822 + im->ip_after_call = &((u32 *)ro_image)[ctx->idx]; 823 + EMIT(PPC_RAW_NOP()); 824 + } 825 + 826 + /* Update 
branches saved in invoke_bpf_mod_ret with address of do_fexit */ 827 + for (i = 0; i < fmod_ret->nr_links && image; i++) { 828 + if (create_cond_branch(&branch_insn, &image[branches[i]], 829 + (unsigned long)&image[ctx->idx], COND_NE << 16)) { 830 + ret = -EINVAL; 831 + goto cleanup; 832 + } 833 + 834 + image[branches[i]] = ppc_inst_val(branch_insn); 835 + } 836 + 837 + for (i = 0; i < fexit->nr_links; i++) 838 + if (invoke_bpf_prog(image, ro_image, ctx, fexit->links[i], regs_off, retval_off, 839 + run_ctx_off, false)) { 840 + ret = -EINVAL; 841 + goto cleanup; 842 + } 843 + 844 + if (flags & BPF_TRAMP_F_CALL_ORIG) { 845 + im->ip_epilogue = &((u32 *)ro_image)[ctx->idx]; 846 + PPC_LI_ADDR(_R3, im); 847 + ret = bpf_jit_emit_func_call_rel(image, ro_image, ctx, 848 + (unsigned long)__bpf_tramp_exit); 849 + if (ret) 850 + goto cleanup; 851 + } 852 + 853 + if (flags & BPF_TRAMP_F_RESTORE_REGS) 854 + bpf_trampoline_restore_args_regs(image, ctx, nr_regs, regs_off); 855 + 856 + /* Restore return value of func_addr or fentry prog */ 857 + if (save_ret) 858 + EMIT(PPC_RAW_LL(_R3, _R1, retval_off)); 859 + 860 + /* Restore nv regs */ 861 + EMIT(PPC_RAW_LL(_R26, _R1, nvr_off + SZL)); 862 + EMIT(PPC_RAW_LL(_R25, _R1, nvr_off)); 863 + 864 + /* Epilogue */ 865 + if (IS_ENABLED(CONFIG_PPC64_ELF_ABI_V2) && !IS_ENABLED(CONFIG_PPC_KERNEL_PCREL)) 866 + EMIT(PPC_RAW_LD(_R2, _R1, 24)); 867 + if (flags & BPF_TRAMP_F_SKIP_FRAME) { 868 + /* Skip the traced function and return to parent */ 869 + EMIT(PPC_RAW_ADDI(_R1, _R1, func_frame_offset)); 870 + EMIT(PPC_RAW_LL(_R0, _R1, PPC_LR_STKOFF)); 871 + EMIT(PPC_RAW_MTLR(_R0)); 872 + EMIT(PPC_RAW_BLR()); 873 + } else { 874 + if (IS_ENABLED(CONFIG_PPC_FTRACE_OUT_OF_LINE)) { 875 + EMIT(PPC_RAW_LL(_R0, _R1, alt_lr_off)); 876 + EMIT(PPC_RAW_MTLR(_R0)); 877 + EMIT(PPC_RAW_ADDI(_R1, _R1, func_frame_offset)); 878 + EMIT(PPC_RAW_LL(_R0, _R1, PPC_LR_STKOFF)); 879 + EMIT(PPC_RAW_BLR()); 880 + } else { 881 + EMIT(PPC_RAW_LL(_R0, _R1, bpf_frame_size + 
PPC_LR_STKOFF)); 882 + EMIT(PPC_RAW_MTCTR(_R0)); 883 + EMIT(PPC_RAW_ADDI(_R1, _R1, func_frame_offset)); 884 + EMIT(PPC_RAW_LL(_R0, _R1, PPC_LR_STKOFF)); 885 + EMIT(PPC_RAW_MTLR(_R0)); 886 + EMIT(PPC_RAW_BCTR()); 887 + } 888 + } 889 + 890 + /* Make sure the trampoline generation logic doesn't overflow */ 891 + if (image && WARN_ON_ONCE(&image[ctx->idx] > (u32 *)rw_image_end - BPF_INSN_SAFETY)) { 892 + ret = -EFAULT; 893 + goto cleanup; 894 + } 895 + ret = ctx->idx * 4 + BPF_INSN_SAFETY * 4; 896 + 897 + cleanup: 898 + kfree(branches); 899 + return ret; 900 + } 901 + 902 + int arch_bpf_trampoline_size(const struct btf_func_model *m, u32 flags, 903 + struct bpf_tramp_links *tlinks, void *func_addr) 904 + { 905 + struct bpf_tramp_image im; 906 + void *image; 907 + int ret; 908 + 909 + /* 910 + * Allocate a temporary buffer for __arch_prepare_bpf_trampoline(). 911 + * This will NOT cause fragmentation in direct map, as we do not 912 + * call set_memory_*() on this buffer. 913 + * 914 + * We cannot use kvmalloc here, because we need image to be in 915 + * module memory range. 916 + */ 917 + image = bpf_jit_alloc_exec(PAGE_SIZE); 918 + if (!image) 919 + return -ENOMEM; 920 + 921 + ret = __arch_prepare_bpf_trampoline(&im, image, image + PAGE_SIZE, image, 922 + m, flags, tlinks, func_addr); 923 + bpf_jit_free_exec(image); 924 + 925 + return ret; 926 + } 927 + 928 + int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *image_end, 929 + const struct btf_func_model *m, u32 flags, 930 + struct bpf_tramp_links *tlinks, 931 + void *func_addr) 932 + { 933 + u32 size = image_end - image; 934 + void *rw_image, *tmp; 935 + int ret; 936 + 937 + /* 938 + * rw_image doesn't need to be in module memory range, so we can 939 + * use kvmalloc. 
940 + */ 941 + rw_image = kvmalloc(size, GFP_KERNEL); 942 + if (!rw_image) 943 + return -ENOMEM; 944 + 945 + ret = __arch_prepare_bpf_trampoline(im, rw_image, rw_image + size, image, m, 946 + flags, tlinks, func_addr); 947 + if (ret < 0) 948 + goto out; 949 + 950 + if (bpf_jit_enable > 1) 951 + bpf_jit_dump(1, ret - BPF_INSN_SAFETY * 4, 1, rw_image); 952 + 953 + tmp = bpf_arch_text_copy(image, rw_image, size); 954 + if (IS_ERR(tmp)) 955 + ret = PTR_ERR(tmp); 956 + 957 + out: 958 + kvfree(rw_image); 959 + return ret; 960 + } 961 + 962 + static int bpf_modify_inst(void *ip, ppc_inst_t old_inst, ppc_inst_t new_inst) 963 + { 964 + ppc_inst_t org_inst; 965 + 966 + if (copy_inst_from_kernel_nofault(&org_inst, ip)) { 967 + pr_err("0x%lx: fetching instruction failed\n", (unsigned long)ip); 968 + return -EFAULT; 969 + } 970 + 971 + if (!ppc_inst_equal(org_inst, old_inst)) { 972 + pr_err("0x%lx: expected (%08lx) != found (%08lx)\n", 973 + (unsigned long)ip, ppc_inst_as_ulong(old_inst), ppc_inst_as_ulong(org_inst)); 974 + return -EINVAL; 975 + } 976 + 977 + if (ppc_inst_equal(old_inst, new_inst)) 978 + return 0; 979 + 980 + return patch_instruction(ip, new_inst); 981 + } 982 + 983 + static void do_isync(void *info __maybe_unused) 984 + { 985 + isync(); 986 + } 987 + 988 + /* 989 + * A 3-step process for bpf prog entry: 990 + * 1. At bpf prog entry, a single nop/b: 991 + * bpf_func: 992 + * [nop|b] ool_stub 993 + * 2. Out-of-line stub: 994 + * ool_stub: 995 + * mflr r0 996 + * [b|bl] <bpf_prog>/<long_branch_stub> 997 + * mtlr r0 // CONFIG_PPC_FTRACE_OUT_OF_LINE only 998 + * b bpf_func + 4 999 + * 3. Long branch stub: 1000 + * long_branch_stub: 1001 + * .long <branch_addr>/<dummy_tramp> 1002 + * mflr r11 1003 + * bcl 20,31,$+4 1004 + * mflr r12 1005 + * ld r12, -16(r12) 1006 + * mtctr r12 1007 + * mtlr r11 // needed to retain ftrace ABI 1008 + * bctr 1009 + * 1010 + * dummy_tramp is used to reduce synchronization requirements. 
1011 + * 1012 + * When attaching a bpf trampoline to a bpf prog, we do not need any 1013 + * synchronization here since we always have a valid branch target regardless 1014 + * of the order in which the above stores are seen. dummy_tramp ensures that 1015 + * the long_branch stub goes to a valid destination on other cpus, even when 1016 + * the branch to the long_branch stub is seen before the updated trampoline 1017 + * address. 1018 + * 1019 + * However, when detaching a bpf trampoline from a bpf prog, or if changing 1020 + * the bpf trampoline address, we need synchronization to ensure that other 1021 + * cpus can no longer branch into the older trampoline so that it can be 1022 + * safely freed. bpf_tramp_image_put() uses rcu_tasks to ensure all cpus 1023 + * make forward progress, but we still need to ensure that other cpus 1024 + * execute isync (or some CSI) so that they don't go back into the 1025 + * trampoline again. 1026 + */ 1027 + int bpf_arch_text_poke(void *ip, enum bpf_text_poke_type poke_type, 1028 + void *old_addr, void *new_addr) 1029 + { 1030 + unsigned long bpf_func, bpf_func_end, size, offset; 1031 + ppc_inst_t old_inst, new_inst; 1032 + int ret = 0, branch_flags; 1033 + char name[KSYM_NAME_LEN]; 1034 + 1035 + if (IS_ENABLED(CONFIG_PPC32)) 1036 + return -EOPNOTSUPP; 1037 + 1038 + bpf_func = (unsigned long)ip; 1039 + branch_flags = poke_type == BPF_MOD_CALL ? 
BRANCH_SET_LINK : 0; 1040 + 1041 + /* We currently only support poking bpf programs */ 1042 + if (!__bpf_address_lookup(bpf_func, &size, &offset, name)) { 1043 + pr_err("%s (0x%lx): kernel/modules are not supported\n", __func__, bpf_func); 1044 + return -EOPNOTSUPP; 1045 + } 1046 + 1047 + /* 1048 + * If we are not poking at bpf prog entry, then we are simply patching in/out 1049 + * an unconditional branch instruction at im->ip_after_call 1050 + */ 1051 + if (offset) { 1052 + if (poke_type != BPF_MOD_JUMP) { 1053 + pr_err("%s (0x%lx): calls are not supported in bpf prog body\n", __func__, 1054 + bpf_func); 1055 + return -EOPNOTSUPP; 1056 + } 1057 + old_inst = ppc_inst(PPC_RAW_NOP()); 1058 + if (old_addr) 1059 + if (create_branch(&old_inst, ip, (unsigned long)old_addr, 0)) 1060 + return -ERANGE; 1061 + new_inst = ppc_inst(PPC_RAW_NOP()); 1062 + if (new_addr) 1063 + if (create_branch(&new_inst, ip, (unsigned long)new_addr, 0)) 1064 + return -ERANGE; 1065 + mutex_lock(&text_mutex); 1066 + ret = bpf_modify_inst(ip, old_inst, new_inst); 1067 + mutex_unlock(&text_mutex); 1068 + 1069 + /* Make sure all cpus see the new instruction */ 1070 + smp_call_function(do_isync, NULL, 1); 1071 + return ret; 1072 + } 1073 + 1074 + bpf_func_end = bpf_func + size; 1075 + 1076 + /* Address of the jmp/call instruction in the out-of-line stub */ 1077 + ip = (void *)(bpf_func_end - bpf_jit_ool_stub + 4); 1078 + 1079 + if (!is_offset_in_branch_range((long)ip - 4 - bpf_func)) { 1080 + pr_err("%s (0x%lx): bpf prog too large, ool stub out of branch range\n", __func__, 1081 + bpf_func); 1082 + return -ERANGE; 1083 + } 1084 + 1085 + old_inst = ppc_inst(PPC_RAW_NOP()); 1086 + if (old_addr) { 1087 + if (is_offset_in_branch_range(ip - old_addr)) 1088 + create_branch(&old_inst, ip, (unsigned long)old_addr, branch_flags); 1089 + else 1090 + create_branch(&old_inst, ip, bpf_func_end - bpf_jit_long_branch_stub, 1091 + branch_flags); 1092 + } 1093 + new_inst = ppc_inst(PPC_RAW_NOP()); 1094 + if 
(new_addr) { 1095 + if (is_offset_in_branch_range(ip - new_addr)) 1096 + create_branch(&new_inst, ip, (unsigned long)new_addr, branch_flags); 1097 + else 1098 + create_branch(&new_inst, ip, bpf_func_end - bpf_jit_long_branch_stub, 1099 + branch_flags); 1100 + } 1101 + 1102 + mutex_lock(&text_mutex); 1103 + 1104 + /* 1105 + * 1. Update the address in the long branch stub: 1106 + * If new_addr is out of range, we will have to use the long branch stub, so patch new_addr 1107 + * here. Otherwise, revert to dummy_tramp, but only if we had patched old_addr here. 1108 + */ 1109 + if ((new_addr && !is_offset_in_branch_range(new_addr - ip)) || 1110 + (old_addr && !is_offset_in_branch_range(old_addr - ip))) 1111 + ret = patch_ulong((void *)(bpf_func_end - bpf_jit_long_branch_stub - SZL), 1112 + (new_addr && !is_offset_in_branch_range(new_addr - ip)) ? 1113 + (unsigned long)new_addr : (unsigned long)dummy_tramp); 1114 + if (ret) 1115 + goto out; 1116 + 1117 + /* 2. Update the branch/call in the out-of-line stub */ 1118 + ret = bpf_modify_inst(ip, old_inst, new_inst); 1119 + if (ret) 1120 + goto out; 1121 + 1122 + /* 3. Update instruction at bpf prog entry */ 1123 + ip = (void *)bpf_func; 1124 + if (!old_addr || !new_addr) { 1125 + if (!old_addr) { 1126 + old_inst = ppc_inst(PPC_RAW_NOP()); 1127 + create_branch(&new_inst, ip, bpf_func_end - bpf_jit_ool_stub, 0); 1128 + } else { 1129 + new_inst = ppc_inst(PPC_RAW_NOP()); 1130 + create_branch(&old_inst, ip, bpf_func_end - bpf_jit_ool_stub, 0); 1131 + } 1132 + ret = bpf_modify_inst(ip, old_inst, new_inst); 1133 + } 1134 + 1135 + out: 1136 + mutex_unlock(&text_mutex); 1137 + 1138 + /* 1139 + * Sync only if we are not attaching a trampoline to a bpf prog so the older 1140 + * trampoline can be freed safely. 1141 + */ 1142 + if (old_addr) 1143 + smp_call_function(do_isync, NULL, 1); 1144 + 1145 + return ret; 441 1146 }
+6 -1
arch/powerpc/net/bpf_jit_comp32.c
···
 {
 	int i;
 
+	/* Instruction for trampoline attach */
+	EMIT(PPC_RAW_NOP());
+
 	/* Initialize tail_call_cnt, to be skipped if we do tail calls. */
 	if (ctx->seen & SEEN_TAILCALL)
 		EMIT(PPC_RAW_LI(_R4, 0));
 	else
 		EMIT(PPC_RAW_NOP());
 
-#define BPF_TAILCALL_PROLOGUE_SIZE	4
+#define BPF_TAILCALL_PROLOGUE_SIZE	8
 
 	if (bpf_has_stack_frame(ctx))
 		EMIT(PPC_RAW_STWU(_R1, _R1, -BPF_PPC_STACKFRAME(ctx)));
···
 	bpf_jit_emit_common_epilogue(image, ctx);
 
 	EMIT(PPC_RAW_BLR());
+
+	bpf_jit_build_fentry_stubs(image, ctx);
 }
 
 /* Relative offset needs to be calculated based on final image location */
+25 -47
arch/powerpc/net/bpf_jit_comp64.c
···
 }
 
 /*
- * When not setting up our own stackframe, the redzone usage is:
+ * When not setting up our own stackframe, the redzone (288 bytes) usage is:
 *
 *		[	prev sp		] <-------------
 *		[	  ...		]		|
···
 *		[   nv gpr save area	] 5*8
 *		[    tail_call_cnt	] 8
 *		[    local_tmp_var	] 16
- *		[   unused red zone	] 208 bytes protected
+ *		[   unused red zone	] 224
 */
 static int bpf_jit_stack_local(struct codegen_context *ctx)
 {
···
 void bpf_jit_build_prologue(u32 *image, struct codegen_context *ctx)
 {
 	int i;
+
+	/* Instruction for trampoline attach */
+	EMIT(PPC_RAW_NOP());
 
 #ifndef CONFIG_PPC_KERNEL_PCREL
 	if (IS_ENABLED(CONFIG_PPC64_ELF_ABI_V2))
···
 	EMIT(PPC_RAW_MR(_R3, bpf_to_ppc(BPF_REG_0)));
 
 	EMIT(PPC_RAW_BLR());
+
+	bpf_jit_build_fentry_stubs(image, ctx);
 }
 
-static int
-bpf_jit_emit_func_call_hlp(u32 *image, u32 *fimage, struct codegen_context *ctx, u64 func)
+int bpf_jit_emit_func_call_rel(u32 *image, u32 *fimage, struct codegen_context *ctx, u64 func)
 {
 	unsigned long func_addr = func ? ppc_function_entry((void *)func) : 0;
 	long reladdr;
 
-	if (WARN_ON_ONCE(!kernel_text_address(func_addr)))
-		return -EINVAL;
+	/* bpf to bpf call, func is not known in the initial pass. Emit 5 nops as a placeholder */
+	if (!func) {
+		for (int i = 0; i < 5; i++)
+			EMIT(PPC_RAW_NOP());
+		/* elfv1 needs an additional instruction to load addr from descriptor */
+		if (IS_ENABLED(CONFIG_PPC64_ELF_ABI_V1))
+			EMIT(PPC_RAW_NOP());
+		EMIT(PPC_RAW_MTCTR(_R12));
+		EMIT(PPC_RAW_BCTRL());
+		return 0;
+	}
 
 #ifdef CONFIG_PPC_KERNEL_PCREL
 	reladdr = func_addr - local_paca->kernelbase;
···
 		 * We can clobber r2 since we get called through a
 		 * function pointer (so caller will save/restore r2).
 		 */
-		EMIT(PPC_RAW_LD(_R2, bpf_to_ppc(TMP_REG_2), 8));
+		if (is_module_text_address(func_addr))
+			EMIT(PPC_RAW_LD(_R2, bpf_to_ppc(TMP_REG_2), 8));
 	} else {
 		PPC_LI64(_R12, func);
 		EMIT(PPC_RAW_MTCTR(_R12));
···
 		 * Load r2 with kernel TOC as kernel TOC is used if function address falls
 		 * within core kernel text.
 		 */
-		EMIT(PPC_RAW_LD(_R2, _R13, offsetof(struct paca_struct, kernel_toc)));
+		if (is_module_text_address(func_addr))
+			EMIT(PPC_RAW_LD(_R2, _R13, offsetof(struct paca_struct, kernel_toc)));
 	}
 #endif
-
-	return 0;
-}
-
-int bpf_jit_emit_func_call_rel(u32 *image, u32 *fimage, struct codegen_context *ctx, u64 func)
-{
-	unsigned int i, ctx_idx = ctx->idx;
-
-	if (WARN_ON_ONCE(func && is_module_text_address(func)))
-		return -EINVAL;
-
-	/* skip past descriptor if elf v1 */
-	func += FUNCTION_DESCR_SIZE;
-
-	/* Load function address into r12 */
-	PPC_LI64(_R12, func);
-
-	/* For bpf-to-bpf function calls, the callee's address is unknown
-	 * until the last extra pass. As seen above, we use PPC_LI64() to
-	 * load the callee's address, but this may optimize the number of
-	 * instructions required based on the nature of the address.
-	 *
-	 * Since we don't want the number of instructions emitted to increase,
-	 * we pad the optimized PPC_LI64() call with NOPs to guarantee that
-	 * we always have a five-instruction sequence, which is the maximum
-	 * that PPC_LI64() can emit.
-	 */
-	if (!image)
-		for (i = ctx->idx - ctx_idx; i < 5; i++)
-			EMIT(PPC_RAW_NOP());
-
-	EMIT(PPC_RAW_MTCTR(_R12));
-	EMIT(PPC_RAW_BCTRL());
 
 	return 0;
 }
···
 	 */
 	int b2p_bpf_array = bpf_to_ppc(BPF_REG_2);
 	int b2p_index = bpf_to_ppc(BPF_REG_3);
-	int bpf_tailcall_prologue_size = 8;
+	int bpf_tailcall_prologue_size = 12;
 
 	if (!IS_ENABLED(CONFIG_PPC_KERNEL_PCREL) && IS_ENABLED(CONFIG_PPC64_ELF_ABI_V2))
 		bpf_tailcall_prologue_size += 4; /* skip past the toc load */
···
 	if (ret < 0)
 		return ret;
 
-	if (func_addr_fixed)
-		ret = bpf_jit_emit_func_call_hlp(image, fimage, ctx, func_addr);
-	else
-		ret = bpf_jit_emit_func_call_rel(image, fimage, ctx, func_addr);
-
+	ret = bpf_jit_emit_func_call_rel(image, fimage, ctx, func_addr);
 	if (ret)
 		return ret;
+2
arch/powerpc/perf/Makefile
···
 obj-$(CONFIG_HV_PERF_CTRS)	+= hv-24x7.o hv-gpci.o hv-common.o
 
+obj-$(CONFIG_VPA_PMU)		+= vpa-pmu.o
+
 obj-$(CONFIG_PPC_8xx)		+= 8xx-pmu.o
 
 obj-$(CONFIG_PPC64)		+= $(obj64-y)
+203
arch/powerpc/perf/vpa-pmu.c
···
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * Performance monitoring support for Virtual Processor Area (VPA) based counters
+ *
+ * Copyright (C) 2024 IBM Corporation
+ */
+#define pr_fmt(fmt) "vpa_pmu: " fmt
+
+#include <linux/module.h>
+#include <linux/perf_event.h>
+#include <asm/kvm_ppc.h>
+#include <asm/kvm_book3s_64.h>
+
+#define MODULE_VERS "1.0"
+#define MODULE_NAME "pseries_vpa_pmu"
+
+#define EVENT(_name, _code)	enum{_name = _code}
+
+#define VPA_PMU_EVENT_VAR(_id)	event_attr_##_id
+#define VPA_PMU_EVENT_PTR(_id)	(&event_attr_##_id.attr.attr)
+
+static ssize_t vpa_pmu_events_sysfs_show(struct device *dev,
+					 struct device_attribute *attr, char *page)
+{
+	struct perf_pmu_events_attr *pmu_attr;
+
+	pmu_attr = container_of(attr, struct perf_pmu_events_attr, attr);
+
+	return sprintf(page, "event=0x%02llx\n", pmu_attr->id);
+}
+
+#define VPA_PMU_EVENT_ATTR(_name, _id)				\
+	PMU_EVENT_ATTR(_name, VPA_PMU_EVENT_VAR(_id), _id,	\
+		       vpa_pmu_events_sysfs_show)
+
+EVENT(L1_TO_L2_CS_LAT, 0x1);
+EVENT(L2_TO_L1_CS_LAT, 0x2);
+EVENT(L2_RUNTIME_AGG, 0x3);
+
+VPA_PMU_EVENT_ATTR(l1_to_l2_lat, L1_TO_L2_CS_LAT);
+VPA_PMU_EVENT_ATTR(l2_to_l1_lat, L2_TO_L1_CS_LAT);
+VPA_PMU_EVENT_ATTR(l2_runtime_agg, L2_RUNTIME_AGG);
+
+static struct attribute *vpa_pmu_events_attr[] = {
+	VPA_PMU_EVENT_PTR(L1_TO_L2_CS_LAT),
+	VPA_PMU_EVENT_PTR(L2_TO_L1_CS_LAT),
+	VPA_PMU_EVENT_PTR(L2_RUNTIME_AGG),
+	NULL
+};
+
+static const struct attribute_group vpa_pmu_events_group = {
+	.name = "events",
+	.attrs = vpa_pmu_events_attr,
+};
+
+PMU_FORMAT_ATTR(event, "config:0-31");
+static struct attribute *vpa_pmu_format_attr[] = {
+	&format_attr_event.attr,
+	NULL,
+};
+
+static struct attribute_group vpa_pmu_format_group = {
+	.name = "format",
+	.attrs = vpa_pmu_format_attr,
+};
+
+static const struct attribute_group *vpa_pmu_attr_groups[] = {
+	&vpa_pmu_events_group,
+	&vpa_pmu_format_group,
+	NULL
+};
+
+static int vpa_pmu_event_init(struct perf_event *event)
+{
+	if (event->attr.type != event->pmu->type)
+		return -ENOENT;
+
+	/* event sampling mode is not supported */
+	if (is_sampling_event(event))
+		return -EOPNOTSUPP;
+
+	/* no branch sampling */
+	if (has_branch_stack(event))
+		return -EOPNOTSUPP;
+
+	/* Invalid event code */
+	if ((event->attr.config <= 0) || (event->attr.config > 3))
+		return -EINVAL;
+
+	return 0;
+}
+
+static unsigned long get_counter_data(struct perf_event *event)
+{
+	unsigned int config = event->attr.config;
+	u64 data;
+
+	switch (config) {
+	case L1_TO_L2_CS_LAT:
+		if (event->attach_state & PERF_ATTACH_TASK)
+			data = kvmhv_get_l1_to_l2_cs_time_vcpu();
+		else
+			data = kvmhv_get_l1_to_l2_cs_time();
+		break;
+	case L2_TO_L1_CS_LAT:
+		if (event->attach_state & PERF_ATTACH_TASK)
+			data = kvmhv_get_l2_to_l1_cs_time_vcpu();
+		else
+			data = kvmhv_get_l2_to_l1_cs_time();
+		break;
+	case L2_RUNTIME_AGG:
+		if (event->attach_state & PERF_ATTACH_TASK)
+			data = kvmhv_get_l2_runtime_agg_vcpu();
+		else
+			data = kvmhv_get_l2_runtime_agg();
+		break;
+	default:
+		data = 0;
+		break;
+	}
+
+	return data;
+}
+
+static int vpa_pmu_add(struct perf_event *event, int flags)
+{
+	u64 data;
+
+	kvmhv_set_l2_counters_status(smp_processor_id(), true);
+
+	data = get_counter_data(event);
+	local64_set(&event->hw.prev_count, data);
+
+	return 0;
+}
+
+static void vpa_pmu_read(struct perf_event *event)
+{
+	u64 prev_data, new_data, final_data;
+
+	prev_data = local64_read(&event->hw.prev_count);
+	new_data = get_counter_data(event);
+	final_data = new_data - prev_data;
+
+	local64_add(final_data, &event->count);
+}
+
+static void vpa_pmu_del(struct perf_event *event, int flags)
+{
+	vpa_pmu_read(event);
+
+	/*
+	 * Disable vpa counter accumulation
+	 */
+	kvmhv_set_l2_counters_status(smp_processor_id(), false);
+}
+
+static struct pmu vpa_pmu = {
+	.task_ctx_nr = perf_sw_context,
+	.name = "vpa_pmu",
+	.event_init = vpa_pmu_event_init,
+	.add = vpa_pmu_add,
+	.del = vpa_pmu_del,
+	.read = vpa_pmu_read,
+	.attr_groups = vpa_pmu_attr_groups,
+	.capabilities = PERF_PMU_CAP_NO_EXCLUDE | PERF_PMU_CAP_NO_INTERRUPT,
+};
+
+static int __init pseries_vpa_pmu_init(void)
+{
+	/*
+	 * Of the current Linux on Power platforms, this driver is
+	 * supported only on the PowerVM LPAR (L1) platform.
+	 *
+	 * Enabled Linux on Power Platforms
+	 * ----------------------------------------
+	 * [X] PowerVM LPAR (L1)
+	 * [ ] KVM Guest On PowerVM KoP (L2)
+	 * [ ] Baremetal (PowerNV)
+	 * [ ] KVM Guest On PowerNV
+	 */
+	if (!firmware_has_feature(FW_FEATURE_LPAR) || is_kvm_guest())
+		return -ENODEV;
+
+	perf_pmu_register(&vpa_pmu, vpa_pmu.name, -1);
+	pr_info("Virtual Processor Area PMU registered.\n");
+
+	return 0;
+}
+
+static void __exit pseries_vpa_pmu_cleanup(void)
+{
+	perf_pmu_unregister(&vpa_pmu);
+	pr_info("Virtual Processor Area PMU unregistered.\n");
+}
+
+module_init(pseries_vpa_pmu_init);
+module_exit(pseries_vpa_pmu_cleanup);
+MODULE_DESCRIPTION("Perf Driver for pSeries VPA pmu counter");
+MODULE_AUTHOR("Kajol Jain <kjain@linux.ibm.com>");
+MODULE_AUTHOR("Madhavan Srinivasan <maddy@linux.ibm.com>");
+MODULE_LICENSE("GPL");
+9 -14
arch/powerpc/platforms/44x/pci.c
···
 				struct resource *res)
 {
 	u64 size;
-	const u32 *ranges;
-	int rlen;
-	int pna = of_n_addr_cells(hose->dn);
-	int np = pna + 5;
+	struct of_range_parser parser;
+	struct of_range range;
 
 	/* Default */
 	res->start = 0;
···
 	res->end = size - 1;
 	res->flags = IORESOURCE_MEM | IORESOURCE_PREFETCH;
 
-	/* Get dma-ranges property */
-	ranges = of_get_property(hose->dn, "dma-ranges", &rlen);
-	if (ranges == NULL)
+	if (of_pci_dma_range_parser_init(&parser, hose->dn))
 		goto out;
 
-	/* Walk it */
-	while ((rlen -= np * 4) >= 0) {
-		u32 pci_space = ranges[0];
-		u64 pci_addr = of_read_number(ranges + 1, 2);
-		u64 cpu_addr = of_translate_dma_address(hose->dn, ranges + 3);
-		size = of_read_number(ranges + pna + 3, 2);
-		ranges += np;
+	for_each_of_range(&parser, &range) {
+		u32 pci_space = range.flags;
+		u64 pci_addr = range.bus_addr;
+		u64 cpu_addr = range.cpu_addr;
+		size = range.size;
+
 		if (cpu_addr == OF_BAD_ADDR || size == 0)
 			continue;
+1
arch/powerpc/platforms/52xx/efika.c
···
 #include <generated/utsrelease.h>
 #include <linux/pci.h>
 #include <linux/of.h>
+#include <linux/seq_file.h>
 #include <asm/dma.h>
 #include <asm/time.h>
 #include <asm/machdep.h>
+1 -1
arch/powerpc/platforms/82xx/ep8248e.c
···
 	bus->name = "ep8248e-mdio-bitbang";
 	bus->parent = &ofdev->dev;
-	snprintf(bus->id, MII_BUS_ID_SIZE, "%x", res.start);
+	snprintf(bus->id, MII_BUS_ID_SIZE, "%pa", &res.start);
 
 	ret = of_mdiobus_register(bus, ofdev->dev.of_node);
 	if (ret)
+3 -3
arch/powerpc/platforms/82xx/km82xx.c
···
 
 static void __init km82xx_pic_init(void)
 {
-	struct device_node *np = of_find_compatible_node(NULL, NULL,
-							 "fsl,pq2-pic");
+	struct device_node *np __free(device_node);
+	np = of_find_compatible_node(NULL, NULL, "fsl,pq2-pic");
+
 	if (!np) {
 		pr_err("PIC init: can not find cpm-pic node\n");
 		return;
 	}
 
 	cpm2_pic_init(np);
-	of_node_put(np);
 }
 
 struct cpm_pin {
-21
arch/powerpc/platforms/85xx/Kconfig
···
 	  and dual StarCore SC3850 DSP cores.
 	  Manufacturer : Freescale Semiconductor, Inc
 
-config MPC8540_ADS
-	bool "Freescale MPC8540 ADS"
-	select DEFAULT_UIMAGE
-	help
-	  This option enables support for the MPC 8540 ADS board
-
-config MPC8560_ADS
-	bool "Freescale MPC8560 ADS"
-	select DEFAULT_UIMAGE
-	select CPM2
-	help
-	  This option enables support for the MPC 8560 ADS board
-
-config MPC85xx_CDS
-	bool "Freescale MPC85xx CDS"
-	select DEFAULT_UIMAGE
-	select PPC_I8259
-	select HAVE_RAPIDIO
-	help
-	  This option enables support for the MPC85xx CDS board
-
 config MPC85xx_MDS
 	bool "Freescale MPC8568 MDS / MPC8569 MDS / P1021 MDS"
 	select DEFAULT_UIMAGE
-1
arch/powerpc/platforms/Kconfig
···
 source "arch/powerpc/platforms/512x/Kconfig"
 source "arch/powerpc/platforms/52xx/Kconfig"
 source "arch/powerpc/platforms/powermac/Kconfig"
-source "arch/powerpc/platforms/maple/Kconfig"
 source "arch/powerpc/platforms/pasemi/Kconfig"
 source "arch/powerpc/platforms/ps3/Kconfig"
 source "arch/powerpc/platforms/cell/Kconfig"
-1
arch/powerpc/platforms/Makefile
···
 obj-$(CONFIG_PPC_86xx)		+= 86xx/
 obj-$(CONFIG_PPC_POWERNV)	+= powernv/
 obj-$(CONFIG_PPC_PSERIES)	+= pseries/
-obj-$(CONFIG_PPC_MAPLE)		+= maple/
 obj-$(CONFIG_PPC_PASEMI)	+= pasemi/
 obj-$(CONFIG_PPC_CELL)		+= cell/
 obj-$(CONFIG_PPC_PS3)		+= ps3/
+16 -33
arch/powerpc/platforms/cell/iommu.c
···
779 779
780 780	static u64 cell_iommu_get_fixed_address(struct device *dev)
781 781	{
782 -		u64 cpu_addr, size, best_size, dev_addr = OF_BAD_ADDR;
782 +		u64 best_size, dev_addr = OF_BAD_ADDR;
783 783		struct device_node *np;
784 -		const u32 *ranges = NULL;
785 -		int i, len, best, naddr, nsize, pna, range_size;
784 +		struct of_range_parser parser;
785 +		struct of_range range;
786 786
787 787		/* We can be called for platform devices that have no of_node */
788 788		np = of_node_get(dev->of_node);
789 789		if (!np)
790 790			goto out;
791 791
792 -		while (1) {
793 -			naddr = of_n_addr_cells(np);
794 -			nsize = of_n_size_cells(np);
795 -			np = of_get_next_parent(np);
796 -			if (!np)
797 -				break;
792 +		while ((np = of_get_next_parent(np))) {
793 +			if (of_pci_dma_range_parser_init(&parser, np))
794 +				continue;
798 795
799 -			ranges = of_get_property(np, "dma-ranges", &len);
800 -
801 -			/* Ignore empty ranges, they imply no translation required */
802 -			if (ranges && len > 0)
796 +			if (of_range_count(&parser))
803 797				break;
804 798		}
805 799
806 -		if (!ranges) {
800 +		if (!np) {
807 801			dev_dbg(dev, "iommu: no dma-ranges found\n");
808 802			goto out;
809 803		}
810 804
811 -		len /= sizeof(u32);
805 +		best_size = 0;
806 +		for_each_of_range(&parser, &range) {
807 +			if (!range.cpu_addr)
808 +				continue;
812 809
813 -		pna = of_n_addr_cells(np);
814 -		range_size = naddr + nsize + pna;
815 -
816 -		/* dma-ranges format:
817 -		 * child addr	: naddr cells
818 -		 * parent addr	: pna cells
819 -		 * size		: nsize cells
820 -		 */
821 -		for (i = 0, best = -1, best_size = 0; i < len; i += range_size) {
822 -			cpu_addr = of_translate_dma_address(np, ranges + i + naddr);
823 -			size = of_read_number(ranges + i + naddr + pna, nsize);
824 -
825 -			if (cpu_addr == 0 && size > best_size) {
826 -				best = i;
827 -				best_size = size;
810 +			if (range.size > best_size) {
811 +				best_size = range.size;
812 +				dev_addr = range.bus_addr;
828 813			}
829 814		}
830 815
831 -		if (best >= 0) {
832 -			dev_addr = of_read_number(ranges + best, naddr);
833 -		} else
816 +		if (!best_size)
834 817			dev_dbg(dev, "iommu: no suitable range found!\n");
835 818
836 819	out:
+1
arch/powerpc/platforms/embedded6xx/linkstation.c
···
13 13	#include <linux/kernel.h>
14 14	#include <linux/initrd.h>
15 15	#include <linux/of_platform.h>
16 +	#include <linux/seq_file.h>
16 17
17 18	#include <asm/time.h>
18 19	#include <asm/mpic.h>
+1
arch/powerpc/platforms/embedded6xx/mvme5100.c
···
14 14
15 15	#include <linux/of_irq.h>
16 16	#include <linux/of_platform.h>
17 +	#include <linux/seq_file.h>
17 18
18 19	#include <asm/i8259.h>
19 20	#include <asm/pci-bridge.h>
-19
arch/powerpc/platforms/maple/Kconfig
···
1 -	# SPDX-License-Identifier: GPL-2.0
2 -	config PPC_MAPLE
3 -		depends on PPC64 && PPC_BOOK3S && CPU_BIG_ENDIAN
4 -		bool "Maple 970FX Evaluation Board"
5 -		select FORCE_PCI
6 -		select MPIC
7 -		select U3_DART
8 -		select MPIC_U3_HT_IRQS
9 -		select GENERIC_TBSYNC
10 -		select PPC_UDBG_16550
11 -		select PPC_970_NAP
12 -		select PPC_64S_HASH_MMU
13 -		select PPC_HASH_MMU_NATIVE
14 -		select PPC_RTAS
15 -		select MMIO_NVRAM
16 -		select ATA_NONSTANDARD if ATA
17 -		help
18 -		  This option enables support for the Maple 970FX Evaluation Board.
19 -		  For more information, refer to <http://www.970eval.com>
+1 -1
arch/powerpc/platforms/maple/Makefile arch/powerpc/tools/.gitignore
···
1 1	# SPDX-License-Identifier: GPL-2.0-only
2 -	obj-y	+= setup.o pci.o time.o
2 +	/vmlinux.arch.S
-14
arch/powerpc/platforms/maple/maple.h
···
1 -	/* SPDX-License-Identifier: GPL-2.0 */
2 -	/*
3 -	 * Declarations for maple-specific code.
4 -	 *
5 -	 * Maple is the name of a PPC970 evaluation board.
6 -	 */
7 -	extern int maple_set_rtc_time(struct rtc_time *tm);
8 -	extern void maple_get_rtc_time(struct rtc_time *tm);
9 -	extern time64_t maple_get_boot_time(void);
10 -	extern void maple_pci_init(void);
11 -	extern void maple_pci_irq_fixup(struct pci_dev *dev);
12 -	extern int maple_pci_get_legacy_ide_irq(struct pci_dev *dev, int channel);
13 -
14 -	extern struct pci_controller_ops maple_pci_controller_ops;
-672
arch/powerpc/platforms/maple/pci.c
···
1 -	// SPDX-License-Identifier: GPL-2.0-or-later
2 -	/*
3 -	 * Copyright (C) 2004 Benjamin Herrenschmuidt (benh@kernel.crashing.org),
4 -	 *		      IBM Corp.
5 -	 */
6 -
7 -	#undef DEBUG
8 -
9 -	#include <linux/kernel.h>
10 -	#include <linux/pci.h>
11 -	#include <linux/delay.h>
12 -	#include <linux/string.h>
13 -	#include <linux/init.h>
14 -	#include <linux/irq.h>
15 -	#include <linux/of_irq.h>
16 -
17 -	#include <asm/sections.h>
18 -	#include <asm/io.h>
19 -	#include <asm/pci-bridge.h>
20 -	#include <asm/machdep.h>
21 -	#include <asm/iommu.h>
22 -	#include <asm/ppc-pci.h>
23 -	#include <asm/isa-bridge.h>
24 -
25 -	#include "maple.h"
26 -
27 -	#ifdef DEBUG
28 -	#define DBG(x...) printk(x)
29 -	#else
30 -	#define DBG(x...)
31 -	#endif
32 -
33 -	static struct pci_controller *u3_agp, *u3_ht, *u4_pcie;
34 -
35 -	static int __init fixup_one_level_bus_range(struct device_node *node, int higher)
36 -	{
37 -		for (; node; node = node->sibling) {
38 -			const int *bus_range;
39 -			const unsigned int *class_code;
40 -			int len;
41 -
42 -			/* For PCI<->PCI bridges or CardBus bridges, we go down */
43 -			class_code = of_get_property(node, "class-code", NULL);
44 -			if (!class_code || ((*class_code >> 8) != PCI_CLASS_BRIDGE_PCI &&
45 -				(*class_code >> 8) != PCI_CLASS_BRIDGE_CARDBUS))
46 -				continue;
47 -			bus_range = of_get_property(node, "bus-range", &len);
48 -			if (bus_range != NULL && len > 2 * sizeof(int)) {
49 -				if (bus_range[1] > higher)
50 -					higher = bus_range[1];
51 -			}
52 -			higher = fixup_one_level_bus_range(node->child, higher);
53 -		}
54 -		return higher;
55 -	}
56 -
57 -	/* This routine fixes the "bus-range" property of all bridges in the
58 -	 * system since they tend to have their "last" member wrong on macs
59 -	 *
60 -	 * Note that the bus numbers manipulated here are OF bus numbers, they
61 -	 * are not Linux bus numbers.
62 -	 */
63 -	static void __init fixup_bus_range(struct device_node *bridge)
64 -	{
65 -		int *bus_range;
66 -		struct property *prop;
67 -		int len;
68 -
69 -		/* Lookup the "bus-range" property for the hose */
70 -		prop = of_find_property(bridge, "bus-range", &len);
71 -		if (prop == NULL || prop->value == NULL || len < 2 * sizeof(int)) {
72 -			printk(KERN_WARNING "Can't get bus-range for %pOF\n",
73 -			       bridge);
74 -			return;
75 -		}
76 -		bus_range = prop->value;
77 -		bus_range[1] = fixup_one_level_bus_range(bridge->child, bus_range[1]);
78 -	}
79 -
80 -
81 -	static unsigned long u3_agp_cfa0(u8 devfn, u8 off)
82 -	{
83 -		return (1 << (unsigned long)PCI_SLOT(devfn)) |
84 -			((unsigned long)PCI_FUNC(devfn) << 8) |
85 -			((unsigned long)off & 0xFCUL);
86 -	}
87 -
88 -	static unsigned long u3_agp_cfa1(u8 bus, u8 devfn, u8 off)
89 -	{
90 -		return ((unsigned long)bus << 16) |
91 -			((unsigned long)devfn << 8) |
92 -			((unsigned long)off & 0xFCUL) |
93 -			1UL;
94 -	}
95 -
96 -	static volatile void __iomem *u3_agp_cfg_access(struct pci_controller* hose,
97 -					       u8 bus, u8 dev_fn, u8 offset)
98 -	{
99 -		unsigned int caddr;
100 -
101 -		if (bus == hose->first_busno) {
102 -			if (dev_fn < (11 << 3))
103 -				return NULL;
104 -			caddr = u3_agp_cfa0(dev_fn, offset);
105 -		} else
106 -			caddr = u3_agp_cfa1(bus, dev_fn, offset);
107 -
108 -		/* Uninorth will return garbage if we don't read back the value ! */
109 -		do {
110 -			out_le32(hose->cfg_addr, caddr);
111 -		} while (in_le32(hose->cfg_addr) != caddr);
112 -
113 -		offset &= 0x07;
114 -		return hose->cfg_data + offset;
115 -	}
116 -
117 -	static int u3_agp_read_config(struct pci_bus *bus, unsigned int devfn,
118 -				      int offset, int len, u32 *val)
119 -	{
120 -		struct pci_controller *hose;
121 -		volatile void __iomem *addr;
122 -
123 -		hose = pci_bus_to_host(bus);
124 -		if (hose == NULL)
125 -			return PCIBIOS_DEVICE_NOT_FOUND;
126 -
127 -		addr = u3_agp_cfg_access(hose, bus->number, devfn, offset);
128 -		if (!addr)
129 -			return PCIBIOS_DEVICE_NOT_FOUND;
130 -		/*
131 -		 * Note: the caller has already checked that offset is
132 -		 * suitably aligned and that len is 1, 2 or 4.
133 -		 */
134 -		switch (len) {
135 -		case 1:
136 -			*val = in_8(addr);
137 -			break;
138 -		case 2:
139 -			*val = in_le16(addr);
140 -			break;
141 -		default:
142 -			*val = in_le32(addr);
143 -			break;
144 -		}
145 -		return PCIBIOS_SUCCESSFUL;
146 -	}
147 -
148 -	static int u3_agp_write_config(struct pci_bus *bus, unsigned int devfn,
149 -				       int offset, int len, u32 val)
150 -	{
151 -		struct pci_controller *hose;
152 -		volatile void __iomem *addr;
153 -
154 -		hose = pci_bus_to_host(bus);
155 -		if (hose == NULL)
156 -			return PCIBIOS_DEVICE_NOT_FOUND;
157 -
158 -		addr = u3_agp_cfg_access(hose, bus->number, devfn, offset);
159 -		if (!addr)
160 -			return PCIBIOS_DEVICE_NOT_FOUND;
161 -		/*
162 -		 * Note: the caller has already checked that offset is
163 -		 * suitably aligned and that len is 1, 2 or 4.
164 -		 */
165 -		switch (len) {
166 -		case 1:
167 -			out_8(addr, val);
168 -			break;
169 -		case 2:
170 -			out_le16(addr, val);
171 -			break;
172 -		default:
173 -			out_le32(addr, val);
174 -			break;
175 -		}
176 -		return PCIBIOS_SUCCESSFUL;
177 -	}
178 -
179 -	static struct pci_ops u3_agp_pci_ops =
180 -	{
181 -		.read = u3_agp_read_config,
182 -		.write = u3_agp_write_config,
183 -	};
184 -
185 -	static unsigned long u3_ht_cfa0(u8 devfn, u8 off)
186 -	{
187 -		return (devfn << 8) | off;
188 -	}
189 -
190 -	static unsigned long u3_ht_cfa1(u8 bus, u8 devfn, u8 off)
191 -	{
192 -		return u3_ht_cfa0(devfn, off) + (bus << 16) + 0x01000000UL;
193 -	}
194 -
195 -	static volatile void __iomem *u3_ht_cfg_access(struct pci_controller* hose,
196 -					      u8 bus, u8 devfn, u8 offset)
197 -	{
198 -		if (bus == hose->first_busno) {
199 -			if (PCI_SLOT(devfn) == 0)
200 -				return NULL;
201 -			return hose->cfg_data + u3_ht_cfa0(devfn, offset);
202 -		} else
203 -			return hose->cfg_data + u3_ht_cfa1(bus, devfn, offset);
204 -	}
205 -
206 -	static int u3_ht_root_read_config(struct pci_controller *hose, u8 offset,
207 -					  int len, u32 *val)
208 -	{
209 -		volatile void __iomem *addr;
210 -
211 -		addr = hose->cfg_addr;
212 -		addr += ((offset & ~3) << 2) + (4 - len - (offset & 3));
213 -
214 -		switch (len) {
215 -		case 1:
216 -			*val = in_8(addr);
217 -			break;
218 -		case 2:
219 -			*val = in_be16(addr);
220 -			break;
221 -		default:
222 -			*val = in_be32(addr);
223 -			break;
224 -		}
225 -
226 -		return PCIBIOS_SUCCESSFUL;
227 -	}
228 -
229 -	static int u3_ht_root_write_config(struct pci_controller *hose, u8 offset,
230 -					   int len, u32 val)
231 -	{
232 -		volatile void __iomem *addr;
233 -
234 -		addr = hose->cfg_addr + ((offset & ~3) << 2) + (4 - len - (offset & 3));
235 -
236 -		if (offset >= PCI_BASE_ADDRESS_0 && offset < PCI_CAPABILITY_LIST)
237 -			return PCIBIOS_SUCCESSFUL;
238 -
239 -		switch (len) {
240 -		case 1:
241 -			out_8(addr, val);
242 -			break;
243 -		case 2:
244 -			out_be16(addr, val);
245 -			break;
246 -		default:
247 -			out_be32(addr, val);
248 -			break;
249 -		}
250 -
251 -		return PCIBIOS_SUCCESSFUL;
252 -	}
253 -
254 -	static int u3_ht_read_config(struct pci_bus *bus, unsigned int devfn,
255 -				     int offset, int len, u32 *val)
256 -	{
257 -		struct pci_controller *hose;
258 -		volatile void __iomem *addr;
259 -
260 -		hose = pci_bus_to_host(bus);
261 -		if (hose == NULL)
262 -			return PCIBIOS_DEVICE_NOT_FOUND;
263 -
264 -		if (bus->number == hose->first_busno && devfn == PCI_DEVFN(0, 0))
265 -			return u3_ht_root_read_config(hose, offset, len, val);
266 -
267 -		if (offset > 0xff)
268 -			return PCIBIOS_BAD_REGISTER_NUMBER;
269 -
270 -		addr = u3_ht_cfg_access(hose, bus->number, devfn, offset);
271 -		if (!addr)
272 -			return PCIBIOS_DEVICE_NOT_FOUND;
273 -
274 -		/*
275 -		 * Note: the caller has already checked that offset is
276 -		 * suitably aligned and that len is 1, 2 or 4.
277 -		 */
278 -		switch (len) {
279 -		case 1:
280 -			*val = in_8(addr);
281 -			break;
282 -		case 2:
283 -			*val = in_le16(addr);
284 -			break;
285 -		default:
286 -			*val = in_le32(addr);
287 -			break;
288 -		}
289 -		return PCIBIOS_SUCCESSFUL;
290 -	}
291 -
292 -	static int u3_ht_write_config(struct pci_bus *bus, unsigned int devfn,
293 -				      int offset, int len, u32 val)
294 -	{
295 -		struct pci_controller *hose;
296 -		volatile void __iomem *addr;
297 -
298 -		hose = pci_bus_to_host(bus);
299 -		if (hose == NULL)
300 -			return PCIBIOS_DEVICE_NOT_FOUND;
301 -
302 -		if (bus->number == hose->first_busno && devfn == PCI_DEVFN(0, 0))
303 -			return u3_ht_root_write_config(hose, offset, len, val);
304 -
305 -		if (offset > 0xff)
306 -			return PCIBIOS_BAD_REGISTER_NUMBER;
307 -
308 -		addr = u3_ht_cfg_access(hose, bus->number, devfn, offset);
309 -		if (!addr)
310 -			return PCIBIOS_DEVICE_NOT_FOUND;
311 -		/*
312 -		 * Note: the caller has already checked that offset is
313 -		 * suitably aligned and that len is 1, 2 or 4.
314 -		 */
315 -		switch (len) {
316 -		case 1:
317 -			out_8(addr, val);
318 -			break;
319 -		case 2:
320 -			out_le16(addr, val);
321 -			break;
322 -		default:
323 -			out_le32(addr, val);
324 -			break;
325 -		}
326 -		return PCIBIOS_SUCCESSFUL;
327 -	}
328 -
329 -	static struct pci_ops u3_ht_pci_ops =
330 -	{
331 -		.read = u3_ht_read_config,
332 -		.write = u3_ht_write_config,
333 -	};
334 -
335 -	static unsigned int u4_pcie_cfa0(unsigned int devfn, unsigned int off)
336 -	{
337 -		return (1 << PCI_SLOT(devfn))	|
338 -			(PCI_FUNC(devfn) << 8)	|
339 -			((off >> 8) << 28)	|
340 -			(off & 0xfcu);
341 -	}
342 -
343 -	static unsigned int u4_pcie_cfa1(unsigned int bus, unsigned int devfn,
344 -					 unsigned int off)
345 -	{
346 -		return (bus << 16)		|
347 -			(devfn << 8)		|
348 -			((off >> 8) << 28)	|
349 -			(off & 0xfcu)	| 1u;
350 -	}
351 -
352 -	static volatile void __iomem *u4_pcie_cfg_access(struct pci_controller* hose,
353 -						u8 bus, u8 dev_fn, int offset)
354 -	{
355 -		unsigned int caddr;
356 -
357 -		if (bus == hose->first_busno)
358 -			caddr = u4_pcie_cfa0(dev_fn, offset);
359 -		else
360 -			caddr = u4_pcie_cfa1(bus, dev_fn, offset);
361 -
362 -		/* Uninorth will return garbage if we don't read back the value ! */
363 -		do {
364 -			out_le32(hose->cfg_addr, caddr);
365 -		} while (in_le32(hose->cfg_addr) != caddr);
366 -
367 -		offset &= 0x03;
368 -		return hose->cfg_data + offset;
369 -	}
370 -
371 -	static int u4_pcie_read_config(struct pci_bus *bus, unsigned int devfn,
372 -				       int offset, int len, u32 *val)
373 -	{
374 -		struct pci_controller *hose;
375 -		volatile void __iomem *addr;
376 -
377 -		hose = pci_bus_to_host(bus);
378 -		if (hose == NULL)
379 -			return PCIBIOS_DEVICE_NOT_FOUND;
380 -		if (offset >= 0x1000)
381 -			return PCIBIOS_BAD_REGISTER_NUMBER;
382 -		addr = u4_pcie_cfg_access(hose, bus->number, devfn, offset);
383 -		if (!addr)
384 -			return PCIBIOS_DEVICE_NOT_FOUND;
385 -		/*
386 -		 * Note: the caller has already checked that offset is
387 -		 * suitably aligned and that len is 1, 2 or 4.
388 -		 */
389 -		switch (len) {
390 -		case 1:
391 -			*val = in_8(addr);
392 -			break;
393 -		case 2:
394 -			*val = in_le16(addr);
395 -			break;
396 -		default:
397 -			*val = in_le32(addr);
398 -			break;
399 -		}
400 -		return PCIBIOS_SUCCESSFUL;
401 -	}
402 -	static int u4_pcie_write_config(struct pci_bus *bus, unsigned int devfn,
403 -					int offset, int len, u32 val)
404 -	{
405 -		struct pci_controller *hose;
406 -		volatile void __iomem *addr;
407 -
408 -		hose = pci_bus_to_host(bus);
409 -		if (hose == NULL)
410 -			return PCIBIOS_DEVICE_NOT_FOUND;
411 -		if (offset >= 0x1000)
412 -			return PCIBIOS_BAD_REGISTER_NUMBER;
413 -		addr = u4_pcie_cfg_access(hose, bus->number, devfn, offset);
414 -		if (!addr)
415 -			return PCIBIOS_DEVICE_NOT_FOUND;
416 -		/*
417 -		 * Note: the caller has already checked that offset is
418 -		 * suitably aligned and that len is 1, 2 or 4.
419 -		 */
420 -		switch (len) {
421 -		case 1:
422 -			out_8(addr, val);
423 -			break;
424 -		case 2:
425 -			out_le16(addr, val);
426 -			break;
427 -		default:
428 -			out_le32(addr, val);
429 -			break;
430 -		}
431 -		return PCIBIOS_SUCCESSFUL;
432 -	}
433 -
434 -	static struct pci_ops u4_pcie_pci_ops =
435 -	{
436 -		.read = u4_pcie_read_config,
437 -		.write = u4_pcie_write_config,
438 -	};
439 -
440 -	static void __init setup_u3_agp(struct pci_controller* hose)
441 -	{
442 -		/* On G5, we move AGP up to high bus number so we don't need
443 -		 * to reassign bus numbers for HT. If we ever have P2P bridges
444 -		 * on AGP, we'll have to move pci_assign_all_buses to the
445 -		 * pci_controller structure so we enable it for AGP and not for
446 -		 * HT childs.
447 -		 * We hard code the address because of the different size of
448 -		 * the reg address cell, we shall fix that by killing struct
449 -		 * reg_property and using some accessor functions instead
450 -		 */
451 -		hose->first_busno = 0xf0;
452 -		hose->last_busno = 0xff;
453 -		hose->ops = &u3_agp_pci_ops;
454 -		hose->cfg_addr = ioremap(0xf0000000 + 0x800000, 0x1000);
455 -		hose->cfg_data = ioremap(0xf0000000 + 0xc00000, 0x1000);
456 -
457 -		u3_agp = hose;
458 -	}
459 -
460 -	static void __init setup_u4_pcie(struct pci_controller* hose)
461 -	{
462 -		/* We currently only implement the "non-atomic" config space, to
463 -		 * be optimised later.
464 -		 */
465 -		hose->ops = &u4_pcie_pci_ops;
466 -		hose->cfg_addr = ioremap(0xf0000000 + 0x800000, 0x1000);
467 -		hose->cfg_data = ioremap(0xf0000000 + 0xc00000, 0x1000);
468 -
469 -		u4_pcie = hose;
470 -	}
471 -
472 -	static void __init setup_u3_ht(struct pci_controller* hose)
473 -	{
474 -		hose->ops = &u3_ht_pci_ops;
475 -
476 -		/* We hard code the address because of the different size of
477 -		 * the reg address cell, we shall fix that by killing struct
478 -		 * reg_property and using some accessor functions instead
479 -		 */
480 -		hose->cfg_data = ioremap(0xf2000000, 0x02000000);
481 -		hose->cfg_addr = ioremap(0xf8070000, 0x1000);
482 -
483 -		hose->first_busno = 0;
484 -		hose->last_busno = 0xef;
485 -
486 -		u3_ht = hose;
487 -	}
488 -
489 -	static int __init maple_add_bridge(struct device_node *dev)
490 -	{
491 -		int len;
492 -		struct pci_controller *hose;
493 -		char* disp_name;
494 -		const int *bus_range;
495 -		int primary = 1;
496 -
497 -		DBG("Adding PCI host bridge %pOF\n", dev);
498 -
499 -		bus_range = of_get_property(dev, "bus-range", &len);
500 -		if (bus_range == NULL || len < 2 * sizeof(int)) {
501 -			printk(KERN_WARNING "Can't get bus-range for %pOF, assume bus 0\n",
502 -			       dev);
503 -		}
504 -
505 -		hose = pcibios_alloc_controller(dev);
506 -		if (hose == NULL)
507 -			return -ENOMEM;
508 -		hose->first_busno = bus_range ? bus_range[0] : 0;
509 -		hose->last_busno = bus_range ? bus_range[1] : 0xff;
510 -		hose->controller_ops = maple_pci_controller_ops;
511 -
512 -		disp_name = NULL;
513 -		if (of_device_is_compatible(dev, "u3-agp")) {
514 -			setup_u3_agp(hose);
515 -			disp_name = "U3-AGP";
516 -			primary = 0;
517 -		} else if (of_device_is_compatible(dev, "u3-ht")) {
518 -			setup_u3_ht(hose);
519 -			disp_name = "U3-HT";
520 -			primary = 1;
521 -		} else if (of_device_is_compatible(dev, "u4-pcie")) {
522 -			setup_u4_pcie(hose);
523 -			disp_name = "U4-PCIE";
524 -			primary = 0;
525 -		}
526 -		printk(KERN_INFO "Found %s PCI host bridge. Firmware bus number: %d->%d\n",
527 -			disp_name, hose->first_busno, hose->last_busno);
528 -
529 -		/* Interpret the "ranges" property */
530 -		/* This also maps the I/O region and sets isa_io/mem_base */
531 -		pci_process_bridge_OF_ranges(hose, dev, primary);
532 -
533 -		/* Fixup "bus-range" OF property */
534 -		fixup_bus_range(dev);
535 -
536 -		/* Check for legacy IOs */
537 -		isa_bridge_find_early(hose);
538 -
539 -		/* create pci_dn's for DT nodes under this PHB */
540 -		pci_devs_phb_init_dynamic(hose);
541 -
542 -		return 0;
543 -	}
544 -
545 -
546 -	void maple_pci_irq_fixup(struct pci_dev *dev)
547 -	{
548 -		DBG(" -> maple_pci_irq_fixup\n");
549 -
550 -		/* Fixup IRQ for PCIe host */
551 -		if (u4_pcie != NULL && dev->bus->number == 0 &&
552 -		    pci_bus_to_host(dev->bus) == u4_pcie) {
553 -			printk(KERN_DEBUG "Fixup U4 PCIe IRQ\n");
554 -			dev->irq = irq_create_mapping(NULL, 1);
555 -			if (dev->irq)
556 -				irq_set_irq_type(dev->irq, IRQ_TYPE_LEVEL_LOW);
557 -		}
558 -
559 -		/* Hide AMD8111 IDE interrupt when in legacy mode so
560 -		 * the driver calls pci_get_legacy_ide_irq()
561 -		 */
562 -		if (dev->vendor == PCI_VENDOR_ID_AMD &&
563 -		    dev->device == PCI_DEVICE_ID_AMD_8111_IDE &&
564 -		    (dev->class & 5) != 5) {
565 -			dev->irq = 0;
566 -		}
567 -
568 -		DBG(" <- maple_pci_irq_fixup\n");
569 -	}
570 -
571 -	static int maple_pci_root_bridge_prepare(struct pci_host_bridge *bridge)
572 -	{
573 -		struct pci_controller *hose = pci_bus_to_host(bridge->bus);
574 -		struct device_node *np, *child;
575 -
576 -		if (hose != u3_agp)
577 -			return 0;
578 -
579 -		/* Fixup the PCI<->OF mapping for U3 AGP due to bus renumbering. We
580 -		 * assume there is no P2P bridge on the AGP bus, which should be a
581 -		 * safe assumptions hopefully.
582 -		 */
583 -		np = hose->dn;
584 -		PCI_DN(np)->busno = 0xf0;
585 -		for_each_child_of_node(np, child)
586 -			PCI_DN(child)->busno = 0xf0;
587 -
588 -		return 0;
589 -	}
590 -
591 -	void __init maple_pci_init(void)
592 -	{
593 -		struct device_node *np, *root;
594 -		struct device_node *ht = NULL;
595 -
596 -		/* Probe root PCI hosts, that is on U3 the AGP host and the
597 -		 * HyperTransport host. That one is actually "kept" around
598 -		 * and actually added last as its resource management relies
599 -		 * on the AGP resources to have been setup first
600 -		 */
601 -		root = of_find_node_by_path("/");
602 -		if (root == NULL) {
603 -			printk(KERN_CRIT "maple_find_bridges: can't find root of device tree\n");
604 -			return;
605 -		}
606 -		for_each_child_of_node(root, np) {
607 -			if (!of_node_is_type(np, "pci") && !of_node_is_type(np, "ht"))
608 -				continue;
609 -			if ((of_device_is_compatible(np, "u4-pcie") ||
610 -			     of_device_is_compatible(np, "u3-agp")) &&
611 -			    maple_add_bridge(np) == 0)
612 -				of_node_get(np);
613 -
614 -			if (of_device_is_compatible(np, "u3-ht")) {
615 -				of_node_get(np);
616 -				ht = np;
617 -			}
618 -		}
619 -		of_node_put(root);
620 -
621 -		/* Now setup the HyperTransport host if we found any
622 -		 */
623 -		if (ht && maple_add_bridge(ht) != 0)
624 -			of_node_put(ht);
625 -
626 -		ppc_md.pcibios_root_bridge_prepare = maple_pci_root_bridge_prepare;
627 -
628 -		/* Tell pci.c to not change any resource allocations. */
629 -		pci_add_flags(PCI_PROBE_ONLY);
630 -	}
631 -
632 -	int maple_pci_get_legacy_ide_irq(struct pci_dev *pdev, int channel)
633 -	{
634 -		struct device_node *np;
635 -		unsigned int defirq = channel ? 15 : 14;
636 -		unsigned int irq;
637 -
638 -		if (pdev->vendor != PCI_VENDOR_ID_AMD ||
639 -		    pdev->device != PCI_DEVICE_ID_AMD_8111_IDE)
640 -			return defirq;
641 -
642 -		np = pci_device_to_OF_node(pdev);
643 -		if (np == NULL) {
644 -			printk("Failed to locate OF node for IDE %s\n",
645 -			       pci_name(pdev));
646 -			return defirq;
647 -		}
648 -		irq = irq_of_parse_and_map(np, channel & 0x1);
649 -		if (!irq) {
650 -			printk("Failed to map onboard IDE interrupt for channel %d\n",
651 -			       channel);
652 -			return defirq;
653 -		}
654 -		return irq;
655 -	}
656 -
657 -	static void quirk_ipr_msi(struct pci_dev *dev)
658 -	{
659 -		/* Something prevents MSIs from the IPR from working on Bimini,
660 -		 * and the driver has no smarts to recover. So disable MSI
661 -		 * on it for now. */
662 -
663 -		if (machine_is(maple)) {
664 -			dev->no_msi = 1;
665 -			dev_info(&dev->dev, "Quirk disabled MSI\n");
666 -		}
667 -	}
668 -	DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_IBM, PCI_DEVICE_ID_IBM_OBSIDIAN,
669 -				quirk_ipr_msi);
670 -
671 -	struct pci_controller_ops maple_pci_controller_ops = {
672 -	};
-363
arch/powerpc/platforms/maple/setup.c
···
1 -	// SPDX-License-Identifier: GPL-2.0-or-later
2 -	/*
3 -	 *  Maple (970 eval board) setup code
4 -	 *
5 -	 *  (c) Copyright 2004 Benjamin Herrenschmidt (benh@kernel.crashing.org),
6 -	 *                     IBM Corp.
7 -	 */
8 -
9 -	#undef DEBUG
10 -
11 -	#include <linux/init.h>
12 -	#include <linux/errno.h>
13 -	#include <linux/sched.h>
14 -	#include <linux/kernel.h>
15 -	#include <linux/export.h>
16 -	#include <linux/mm.h>
17 -	#include <linux/stddef.h>
18 -	#include <linux/unistd.h>
19 -	#include <linux/ptrace.h>
20 -	#include <linux/user.h>
21 -	#include <linux/tty.h>
22 -	#include <linux/string.h>
23 -	#include <linux/delay.h>
24 -	#include <linux/ioport.h>
25 -	#include <linux/major.h>
26 -	#include <linux/initrd.h>
27 -	#include <linux/vt_kern.h>
28 -	#include <linux/console.h>
29 -	#include <linux/pci.h>
30 -	#include <linux/adb.h>
31 -	#include <linux/cuda.h>
32 -	#include <linux/pmu.h>
33 -	#include <linux/irq.h>
34 -	#include <linux/seq_file.h>
35 -	#include <linux/root_dev.h>
36 -	#include <linux/serial.h>
37 -	#include <linux/smp.h>
38 -	#include <linux/bitops.h>
39 -	#include <linux/of.h>
40 -	#include <linux/of_address.h>
41 -	#include <linux/platform_device.h>
42 -	#include <linux/memblock.h>
43 -
44 -	#include <asm/processor.h>
45 -	#include <asm/sections.h>
46 -	#include <asm/io.h>
47 -	#include <asm/pci-bridge.h>
48 -	#include <asm/iommu.h>
49 -	#include <asm/machdep.h>
50 -	#include <asm/dma.h>
51 -	#include <asm/cputable.h>
52 -	#include <asm/time.h>
53 -	#include <asm/mpic.h>
54 -	#include <asm/rtas.h>
55 -	#include <asm/udbg.h>
56 -	#include <asm/nvram.h>
57 -
58 -	#include "maple.h"
59 -
60 -	#ifdef DEBUG
61 -	#define DBG(fmt...) udbg_printf(fmt)
62 -	#else
63 -	#define DBG(fmt...)
64 -	#endif
65 -
66 -	static unsigned long maple_find_nvram_base(void)
67 -	{
68 -		struct device_node *rtcs;
69 -		unsigned long result = 0;
70 -
71 -		/* find NVRAM device */
72 -		rtcs = of_find_compatible_node(NULL, "nvram", "AMD8111");
73 -		if (rtcs) {
74 -			struct resource r;
75 -			if (of_address_to_resource(rtcs, 0, &r)) {
76 -				printk(KERN_EMERG "Maple: Unable to translate NVRAM"
77 -				       " address\n");
78 -				goto bail;
79 -			}
80 -			if (!(r.flags & IORESOURCE_IO)) {
81 -				printk(KERN_EMERG "Maple: NVRAM address isn't PIO!\n");
82 -				goto bail;
83 -			}
84 -			result = r.start;
85 -		} else
86 -			printk(KERN_EMERG "Maple: Unable to find NVRAM\n");
87 -	 bail:
88 -		of_node_put(rtcs);
89 -		return result;
90 -	}
91 -
92 -	static void __noreturn maple_restart(char *cmd)
93 -	{
94 -		unsigned int maple_nvram_base;
95 -		const unsigned int *maple_nvram_offset, *maple_nvram_command;
96 -		struct device_node *sp;
97 -
98 -		maple_nvram_base = maple_find_nvram_base();
99 -		if (maple_nvram_base == 0)
100 -			goto fail;
101 -
102 -		/* find service processor device */
103 -		sp = of_find_node_by_name(NULL, "service-processor");
104 -		if (!sp) {
105 -			printk(KERN_EMERG "Maple: Unable to find Service Processor\n");
106 -			goto fail;
107 -		}
108 -		maple_nvram_offset = of_get_property(sp, "restart-addr", NULL);
109 -		maple_nvram_command = of_get_property(sp, "restart-value", NULL);
110 -		of_node_put(sp);
111 -
112 -		/* send command */
113 -		outb_p(*maple_nvram_command, maple_nvram_base + *maple_nvram_offset);
114 -		for (;;) ;
115 -	 fail:
116 -		printk(KERN_EMERG "Maple: Manual Restart Required\n");
117 -		for (;;) ;
118 -	}
119 -
120 -	static void __noreturn maple_power_off(void)
121 -	{
122 -		unsigned int maple_nvram_base;
123 -		const unsigned int *maple_nvram_offset, *maple_nvram_command;
124 -		struct device_node *sp;
125 -
126 -		maple_nvram_base = maple_find_nvram_base();
127 -		if (maple_nvram_base == 0)
128 -			goto fail;
129 -
130 -		/* find service processor device */
131 -		sp = of_find_node_by_name(NULL, "service-processor");
132 -		if (!sp) {
133 -			printk(KERN_EMERG "Maple: Unable to find Service Processor\n");
134 -			goto fail;
135 -		}
136 -		maple_nvram_offset = of_get_property(sp, "power-off-addr", NULL);
137 -		maple_nvram_command = of_get_property(sp, "power-off-value", NULL);
138 -		of_node_put(sp);
139 -
140 -		/* send command */
141 -		outb_p(*maple_nvram_command, maple_nvram_base + *maple_nvram_offset);
142 -		for (;;) ;
143 -	 fail:
144 -		printk(KERN_EMERG "Maple: Manual Power-Down Required\n");
145 -		for (;;) ;
146 -	}
147 -
148 -	static void __noreturn maple_halt(void)
149 -	{
150 -		maple_power_off();
151 -	}
152 -
153 -	#ifdef CONFIG_SMP
154 -	static struct smp_ops_t maple_smp_ops = {
155 -		.probe		= smp_mpic_probe,
156 -		.message_pass	= smp_mpic_message_pass,
157 -		.kick_cpu	= smp_generic_kick_cpu,
158 -		.setup_cpu	= smp_mpic_setup_cpu,
159 -		.give_timebase	= smp_generic_give_timebase,
160 -		.take_timebase	= smp_generic_take_timebase,
161 -	};
162 -	#endif /* CONFIG_SMP */
163 -
164 -	static void __init maple_use_rtas_reboot_and_halt_if_present(void)
165 -	{
166 -		if (rtas_function_implemented(RTAS_FN_SYSTEM_REBOOT) &&
167 -		    rtas_function_implemented(RTAS_FN_POWER_OFF)) {
168 -			ppc_md.restart = rtas_restart;
169 -			pm_power_off = rtas_power_off;
170 -			ppc_md.halt = rtas_halt;
171 -		}
172 -	}
173 -
174 -	static void __init maple_setup_arch(void)
175 -	{
176 -		/* init to some ~sane value until calibrate_delay() runs */
177 -		loops_per_jiffy = 50000000;
178 -
179 -		/* Setup SMP callback */
180 -	#ifdef CONFIG_SMP
181 -		smp_ops = &maple_smp_ops;
182 -	#endif
183 -		maple_use_rtas_reboot_and_halt_if_present();
184 -
185 -		printk(KERN_DEBUG "Using native/NAP idle loop\n");
186 -
187 -		mmio_nvram_init();
188 -	}
189 -
190 -	/*
191 -	 * This is almost identical to pSeries and CHRP. We need to make that
192 -	 * code generic at one point, with appropriate bits in the device-tree to
193 -	 * identify the presence of an HT APIC
194 -	 */
195 -	static void __init maple_init_IRQ(void)
196 -	{
197 -		struct device_node *root, *np, *mpic_node = NULL;
198 -		const unsigned int *opprop;
199 -		unsigned long openpic_addr = 0;
200 -		int naddr, n, i, opplen, has_isus = 0;
201 -		struct mpic *mpic;
202 -		unsigned int flags = 0;
203 -
204 -		/* Locate MPIC in the device-tree. Note that there is a bug
205 -		 * in Maple device-tree where the type of the controller is
206 -		 * open-pic and not interrupt-controller
207 -		 */
208 -
209 -		for_each_node_by_type(np, "interrupt-controller")
210 -			if (of_device_is_compatible(np, "open-pic")) {
211 -				mpic_node = np;
212 -				break;
213 -			}
214 -		if (mpic_node == NULL)
215 -			for_each_node_by_type(np, "open-pic") {
216 -				mpic_node = np;
217 -				break;
218 -			}
219 -		if (mpic_node == NULL) {
220 -			printk(KERN_ERR
221 -			       "Failed to locate the MPIC interrupt controller\n");
222 -			return;
223 -		}
224 -
225 -		/* Find address list in /platform-open-pic */
226 -		root = of_find_node_by_path("/");
227 -		naddr = of_n_addr_cells(root);
228 -		opprop = of_get_property(root, "platform-open-pic", &opplen);
229 -		if (opprop) {
230 -			openpic_addr = of_read_number(opprop, naddr);
231 -			has_isus = (opplen > naddr);
232 -			printk(KERN_DEBUG "OpenPIC addr: %lx, has ISUs: %d\n",
233 -			       openpic_addr, has_isus);
234 -		}
235 -
236 -		BUG_ON(openpic_addr == 0);
237 -
238 -		/* Check for a big endian MPIC */
239 -		if (of_property_read_bool(np, "big-endian"))
240 -			flags |= MPIC_BIG_ENDIAN;
241 -
242 -		/* XXX Maple specific bits */
243 -		flags |= MPIC_U3_HT_IRQS;
244 -		/* All U3/U4 are big-endian, older SLOF firmware doesn't encode this */
245 -		flags |= MPIC_BIG_ENDIAN;
246 -
247 -		/* Setup the openpic driver. More device-tree junks, we hard code no
248 -		 * ISUs for now. I'll have to revisit some stuffs with the folks doing
249 -		 * the firmware for those
250 -		 */
251 -		mpic = mpic_alloc(mpic_node, openpic_addr, flags,
252 -				  /*has_isus ? 16 :*/ 0, 0, " MPIC     ");
253 -		BUG_ON(mpic == NULL);
254 -
255 -		/* Add ISUs */
256 -		opplen /= sizeof(u32);
257 -		for (n = 0, i = naddr; i < opplen; i += naddr, n++) {
258 -			unsigned long isuaddr = of_read_number(opprop + i, naddr);
259 -			mpic_assign_isu(mpic, n, isuaddr);
260 -		}
261 -
262 -		/* All ISUs are setup, complete initialization */
263 -		mpic_init(mpic);
264 -		ppc_md.get_irq = mpic_get_irq;
265 -		of_node_put(mpic_node);
266 -		of_node_put(root);
267 -	}
268 -
269 -	static void __init maple_progress(char *s, unsigned short hex)
270 -	{
271 -		printk("*** %04x : %s\n", hex, s ? s : "");
272 -	}
273 -
274 -
275 -	/*
276 -	 * Called very early, MMU is off, device-tree isn't unflattened
277 -	 */
278 -	static int __init maple_probe(void)
279 -	{
280 -		if (!of_machine_is_compatible("Momentum,Maple") &&
281 -		    !of_machine_is_compatible("Momentum,Apache"))
282 -			return 0;
283 -
284 -		pm_power_off = maple_power_off;
285 -
286 -		iommu_init_early_dart(&maple_pci_controller_ops);
287 -
288 -		return 1;
289 -	}
290 -
291 -	#ifdef CONFIG_EDAC
292 -	/*
293 -	 * Register a platform device for CPC925 memory controller on
294 -	 * all boards with U3H (CPC925) bridge.
295 -	 */
296 -	static int __init maple_cpc925_edac_setup(void)
297 -	{
298 -		struct platform_device *pdev;
299 -		struct device_node *np = NULL;
300 -		struct resource r;
301 -		int ret;
302 -		volatile void __iomem *mem;
303 -		u32 rev;
304 -
305 -		np = of_find_node_by_type(NULL, "memory-controller");
306 -		if (!np) {
307 -			printk(KERN_ERR "%s: Unable to find memory-controller node\n",
308 -			       __func__);
309 -			return -ENODEV;
310 -		}
311 -
312 -		ret = of_address_to_resource(np, 0, &r);
313 -		of_node_put(np);
314 -
315 -		if (ret < 0) {
316 -			printk(KERN_ERR "%s: Unable to get memory-controller reg\n",
317 -			       __func__);
318 -			return -ENODEV;
319 -		}
320 -
321 -		mem = ioremap(r.start, resource_size(&r));
322 -		if (!mem) {
323 -			printk(KERN_ERR "%s: Unable to map memory-controller memory\n",
324 -			       __func__);
325 -			return -ENOMEM;
326 -		}
327 -
328 -		rev = __raw_readl(mem);
329 -		iounmap(mem);
330 -
331 -		if (rev < 0x34 || rev > 0x3f) { /* U3H */
332 -			printk(KERN_ERR "%s: Non-CPC925(U3H) bridge revision: %02x\n",
333 -			       __func__, rev);
334 -			return 0;
335 -		}
336 -
337 -		pdev = platform_device_register_simple("cpc925_edac", 0, &r, 1);
338 -		if (IS_ERR(pdev))
339 -			return PTR_ERR(pdev);
340 -
341 -		printk(KERN_INFO "%s: CPC925 platform device created\n", __func__);
342 -
343 -		return 0;
344 -	}
345 -	machine_device_initcall(maple, maple_cpc925_edac_setup);
346 -	#endif
347 -
348 -	define_machine(maple) {
349 -		.name			= "Maple",
350 -		.probe			= maple_probe,
351 -		.setup_arch		= maple_setup_arch,
352 -		.discover_phbs		= maple_pci_init,
353 -		.init_IRQ		= maple_init_IRQ,
354 -		.pci_irq_fixup		= maple_pci_irq_fixup,
355 -		.pci_get_legacy_ide_irq	= maple_pci_get_legacy_ide_irq,
356 -		.restart		= maple_restart,
357 -		.halt			= maple_halt,
358 -		.get_boot_time		= maple_get_boot_time,
359 -		.set_rtc_time		= maple_set_rtc_time,
360 -		.get_rtc_time		= maple_get_rtc_time,
361 -		.progress		= maple_progress,
362 -		.power_save		= power4_idle,
363 -	};
-170
arch/powerpc/platforms/maple/time.c
···
-// SPDX-License-Identifier: GPL-2.0-or-later
-/*
- * (c) Copyright 2004 Benjamin Herrenschmidt (benh@kernel.crashing.org),
- * IBM Corp.
- */
-
-#undef DEBUG
-
-#include <linux/errno.h>
-#include <linux/sched.h>
-#include <linux/kernel.h>
-#include <linux/param.h>
-#include <linux/string.h>
-#include <linux/mm.h>
-#include <linux/init.h>
-#include <linux/time.h>
-#include <linux/adb.h>
-#include <linux/pmu.h>
-#include <linux/interrupt.h>
-#include <linux/mc146818rtc.h>
-#include <linux/bcd.h>
-#include <linux/of_address.h>
-
-#include <asm/sections.h>
-#include <asm/io.h>
-#include <asm/machdep.h>
-#include <asm/time.h>
-
-#include "maple.h"
-
-#ifdef DEBUG
-#define DBG(x...) printk(x)
-#else
-#define DBG(x...)
-#endif
-
-static int maple_rtc_addr;
-
-static int maple_clock_read(int addr)
-{
-    outb_p(addr, maple_rtc_addr);
-    return inb_p(maple_rtc_addr+1);
-}
-
-static void maple_clock_write(unsigned long val, int addr)
-{
-    outb_p(addr, maple_rtc_addr);
-    outb_p(val, maple_rtc_addr+1);
-}
-
-void maple_get_rtc_time(struct rtc_time *tm)
-{
-    do {
-        tm->tm_sec = maple_clock_read(RTC_SECONDS);
-        tm->tm_min = maple_clock_read(RTC_MINUTES);
-        tm->tm_hour = maple_clock_read(RTC_HOURS);
-        tm->tm_mday = maple_clock_read(RTC_DAY_OF_MONTH);
-        tm->tm_mon = maple_clock_read(RTC_MONTH);
-        tm->tm_year = maple_clock_read(RTC_YEAR);
-    } while (tm->tm_sec != maple_clock_read(RTC_SECONDS));
-
-    if (!(maple_clock_read(RTC_CONTROL) & RTC_DM_BINARY)
-        || RTC_ALWAYS_BCD) {
-        tm->tm_sec = bcd2bin(tm->tm_sec);
-        tm->tm_min = bcd2bin(tm->tm_min);
-        tm->tm_hour = bcd2bin(tm->tm_hour);
-        tm->tm_mday = bcd2bin(tm->tm_mday);
-        tm->tm_mon = bcd2bin(tm->tm_mon);
-        tm->tm_year = bcd2bin(tm->tm_year);
-    }
-    if ((tm->tm_year + 1900) < 1970)
-        tm->tm_year += 100;
-
-    tm->tm_wday = -1;
-}
-
-int maple_set_rtc_time(struct rtc_time *tm)
-{
-    unsigned char save_control, save_freq_select;
-    int sec, min, hour, mon, mday, year;
-
-    spin_lock(&rtc_lock);
-
-    save_control = maple_clock_read(RTC_CONTROL); /* tell the clock it's being set */
-
-    maple_clock_write((save_control|RTC_SET), RTC_CONTROL);
-
-    save_freq_select = maple_clock_read(RTC_FREQ_SELECT); /* stop and reset prescaler */
-
-    maple_clock_write((save_freq_select|RTC_DIV_RESET2), RTC_FREQ_SELECT);
-
-    sec = tm->tm_sec;
-    min = tm->tm_min;
-    hour = tm->tm_hour;
-    mon = tm->tm_mon;
-    mday = tm->tm_mday;
-    year = tm->tm_year;
-
-    if (!(save_control & RTC_DM_BINARY) || RTC_ALWAYS_BCD) {
-        sec = bin2bcd(sec);
-        min = bin2bcd(min);
-        hour = bin2bcd(hour);
-        mon = bin2bcd(mon);
-        mday = bin2bcd(mday);
-        year = bin2bcd(year);
-    }
-    maple_clock_write(sec, RTC_SECONDS);
-    maple_clock_write(min, RTC_MINUTES);
-    maple_clock_write(hour, RTC_HOURS);
-    maple_clock_write(mon, RTC_MONTH);
-    maple_clock_write(mday, RTC_DAY_OF_MONTH);
-    maple_clock_write(year, RTC_YEAR);
-
-    /* The following flags have to be released exactly in this order,
-     * otherwise the DS12887 (popular MC146818A clone with integrated
-     * battery and quartz) will not reset the oscillator and will not
-     * update precisely 500 ms later. You won't find this mentioned in
-     * the Dallas Semiconductor data sheets, but who believes data
-     * sheets anyway ...                           -- Markus Kuhn
-     */
-    maple_clock_write(save_control, RTC_CONTROL);
-    maple_clock_write(save_freq_select, RTC_FREQ_SELECT);
-
-    spin_unlock(&rtc_lock);
-
-    return 0;
-}
-
-static struct resource rtc_iores = {
-    .name = "rtc",
-    .flags = IORESOURCE_IO | IORESOURCE_BUSY,
-};
-
-time64_t __init maple_get_boot_time(void)
-{
-    struct rtc_time tm;
-    struct device_node *rtcs;
-
-    rtcs = of_find_compatible_node(NULL, "rtc", "pnpPNP,b00");
-    if (rtcs) {
-        struct resource r;
-        if (of_address_to_resource(rtcs, 0, &r)) {
-            printk(KERN_EMERG "Maple: Unable to translate RTC"
-                   " address\n");
-            goto bail;
-        }
-        if (!(r.flags & IORESOURCE_IO)) {
-            printk(KERN_EMERG "Maple: RTC address isn't PIO!\n");
-            goto bail;
-        }
-        maple_rtc_addr = r.start;
-        printk(KERN_INFO "Maple: Found RTC at IO 0x%x\n",
-               maple_rtc_addr);
-    }
- bail:
-    of_node_put(rtcs);
-    if (maple_rtc_addr == 0) {
-        maple_rtc_addr = RTC_PORT(0); /* legacy address */
-        printk(KERN_INFO "Maple: No device node for RTC, assuming "
-               "legacy address (0x%x)\n", maple_rtc_addr);
-    }
-
-    rtc_iores.start = maple_rtc_addr;
-    rtc_iores.end = maple_rtc_addr + 7;
-    request_resource(&ioport_resource, &rtc_iores);
-
-    maple_get_rtc_time(&tm);
-    return rtc_tm_to_time64(&tm);
-}
+3 -11
arch/powerpc/platforms/powermac/backlight.c
···
 int pmac_has_backlight_type(const char *type)
 {
     struct device_node* bk_node = of_find_node_by_name(NULL, "backlight");
+    int i = of_property_match_string(bk_node, "backlight-control", type);

-    if (bk_node) {
-        const char *prop = of_get_property(bk_node,
-                "backlight-control", NULL);
-        if (prop && strncmp(prop, type, strlen(type)) == 0) {
-            of_node_put(bk_node);
-            return 1;
-        }
-        of_node_put(bk_node);
-    }
-
-    return 0;
+    of_node_put(bk_node);
+    return i >= 0;
 }

 static void pmac_backlight_key_worker(struct work_struct *work)
+1 -1
arch/powerpc/platforms/ps3/device-init.c
···
     return result;
 }

-static int __ref ps3_setup_uhc_device(
+static int __init ps3_setup_uhc_device(
     const struct ps3_repository_device *repo, enum ps3_match_id match_id,
     enum ps3_interrupt_type interrupt_type, enum ps3_reg_type reg_type)
 {
+1 -1
arch/powerpc/platforms/ps3/interrupt.c
···

 /**
  * ps3_sb_event_receive_port_setup - Setup a system bus event receive port.
+ * @dev: The system bus device instance.
  * @cpu: enum ps3_cpu_binding indicating the cpu the interrupt should be
  * serviced on.
- * @dev: The system bus device instance.
  * @virq: The assigned Linux virq.
  *
  * An event irq represents a virtual device interrupt. The interrupt_id
+1 -1
arch/powerpc/platforms/ps3/repository.c
···

 /**
  * ps3_repository_read_boot_dat_info - Get address and size of cell_ext_os_area.
- * address: lpar address of cell_ext_os_area
+ * @lpar_addr: lpar address of cell_ext_os_area
  * @size: size of cell_ext_os_area
  */

+2 -3
arch/powerpc/platforms/ps3/system-bus.c
···
                char *buf)
 {
     struct ps3_system_bus_device *dev = ps3_dev_to_system_bus_dev(_dev);
-    int len = snprintf(buf, PAGE_SIZE, "ps3:%d:%d\n", dev->match_id,
-               dev->match_sub_id);

-    return (len >= PAGE_SIZE) ? (PAGE_SIZE - 1) : len;
+    return sysfs_emit(buf, "ps3:%d:%d\n", dev->match_id,
+              dev->match_sub_id);
 }
 static DEVICE_ATTR_RO(modalias);

+14
arch/powerpc/platforms/pseries/Kconfig
···

       If unsure, select Y.

+config VPA_PMU
+    tristate "VPA PMU events"
+    depends on KVM_BOOK3S_64_HV && HV_PERF_CTRS
+    help
+      Enable access to the VPA PMU counters via perf. This enables
+      code that support measurement for KVM on PowerVM(KoP) feature.
+      PAPR hypervisor has introduced three new counters in the VPA area
+      of LPAR CPUs for KVM L2 guest observability. Two for context switches
+      from host to guest and vice versa, and one counter for getting
+      the total time spent inside the KVM guest. This config enables code
+      that access these software counters via perf.
+
+      If unsure, Select N.
+
 config IBMVIO
     depends on PPC_PSERIES
     bool
+4 -4
arch/powerpc/platforms/pseries/dtl.c
···
         return -EBUSY;

     /* ensure there are no other conflicting dtl users */
-    if (!read_trylock(&dtl_access_lock))
+    if (!down_read_trylock(&dtl_access_lock))
         return -EBUSY;

     n_entries = dtl_buf_entries;
···
     if (!buf) {
         printk(KERN_WARNING "%s: buffer alloc failed for cpu %d\n",
             __func__, dtl->cpu);
-        read_unlock(&dtl_access_lock);
+        up_read(&dtl_access_lock);
         return -ENOMEM;
     }

···
     spin_unlock(&dtl->lock);

     if (rc) {
-        read_unlock(&dtl_access_lock);
+        up_read(&dtl_access_lock);
         kmem_cache_free(dtl_cache, buf);
     }

···
     dtl->buf = NULL;
     dtl->buf_entries = 0;
     spin_unlock(&dtl->lock);
-    read_unlock(&dtl_access_lock);
+    up_read(&dtl_access_lock);
 }

 /* file interface */
+5 -4
arch/powerpc/platforms/pseries/lpar.c
···
 #include <linux/export.h>
 #include <linux/jump_label.h>
 #include <linux/delay.h>
+#include <linux/seq_file.h>
 #include <linux/stop_machine.h>
 #include <linux/spinlock.h>
 #include <linux/cpuhotplug.h>
···
  */
 #define NR_CPUS_H    NR_CPUS

-DEFINE_RWLOCK(dtl_access_lock);
+DECLARE_RWSEM(dtl_access_lock);
 static DEFINE_PER_CPU(struct vcpu_dispatch_data, vcpu_disp_data);
 static DEFINE_PER_CPU(u64, dtl_entry_ridx);
 static DEFINE_PER_CPU(struct dtl_worker, dtl_workers);
···
 {
     int rc = 0, state;

-    if (!write_trylock(&dtl_access_lock)) {
+    if (!down_write_trylock(&dtl_access_lock)) {
         rc = -EBUSY;
         goto out;
     }
···
         pr_err("vcpudispatch_stats: unable to setup workqueue for DTL processing\n");
         free_dtl_buffers(time_limit);
         reset_global_dtl_mask();
-        write_unlock(&dtl_access_lock);
+        up_write(&dtl_access_lock);
         rc = -EINVAL;
         goto out;
     }
···
     cpuhp_remove_state(dtl_worker_state);
     free_dtl_buffers(time_limit);
     reset_global_dtl_mask();
-    write_unlock(&dtl_access_lock);
+    up_write(&dtl_access_lock);
 }

 static ssize_t vcpudispatch_stats_write(struct file *file, const char __user *p,
+1
arch/powerpc/platforms/pseries/msi.c
···
 #include <linux/irq.h>
 #include <linux/irqdomain.h>
 #include <linux/msi.h>
+#include <linux/seq_file.h>

 #include <asm/rtas.h>
 #include <asm/hw_irq.h>
+1
arch/powerpc/platforms/pseries/papr_scm.c
···
 #include <linux/kernel.h>
 #include <linux/module.h>
 #include <linux/ioport.h>
+#include <linux/seq_file.h>
 #include <linux/slab.h>
 #include <linux/ndctl.h>
 #include <linux/sched.h>
+1
arch/powerpc/platforms/pseries/svm.c
···
 #include <linux/memblock.h>
 #include <linux/mem_encrypt.h>
 #include <linux/cc_platform.h>
+#include <linux/mem_encrypt.h>
 #include <asm/machdep.h>
 #include <asm/svm.h>
 #include <asm/swiotlb.h>
+1 -1
arch/powerpc/sysdev/xive/common.c
···
     pr_debug("%s: irq %d/0x%x\n", __func__, d->irq, hw_irq);

     /* Is this valid ? */
-    if (cpumask_any_and(cpumask, cpu_online_mask) >= nr_cpu_ids)
+    if (!cpumask_intersects(cpumask, cpu_online_mask))
         return -EINVAL;

     /*
+1
arch/powerpc/sysdev/xive/spapr.c
···

 #include <linux/types.h>
 #include <linux/irq.h>
+#include <linux/seq_file.h>
 #include <linux/smp.h>
 #include <linux/interrupt.h>
 #include <linux/init.h>
+10
arch/powerpc/tools/Makefile
···
+# SPDX-License-Identifier: GPL-2.0-or-later
+
+quiet_cmd_gen_ftrace_ool_stubs = GEN     $@
+      cmd_gen_ftrace_ool_stubs = $< "$(CONFIG_PPC_FTRACE_OUT_OF_LINE_NUM_RESERVE)" "$(CONFIG_64BIT)" \
+                                 "$(OBJDUMP)" vmlinux.o $@
+
+$(obj)/vmlinux.arch.S: $(src)/ftrace-gen-ool-stubs.sh vmlinux.o FORCE
+	$(call if_changed,gen_ftrace_ool_stubs)
+
+targets += vmlinux.arch.S
+52
arch/powerpc/tools/ftrace-gen-ool-stubs.sh
···
+#!/bin/sh
+# SPDX-License-Identifier: GPL-2.0-or-later
+
+# Error out on error
+set -e
+
+num_ool_stubs_text_builtin="$1"
+is_64bit="$2"
+objdump="$3"
+vmlinux_o="$4"
+arch_vmlinux_S="$5"
+
+RELOCATION=R_PPC64_ADDR64
+if [ -z "$is_64bit" ]; then
+    RELOCATION=R_PPC_ADDR32
+fi
+
+num_ool_stubs_total=$($objdump -r -j __patchable_function_entries "$vmlinux_o" |
+                      grep -c "$RELOCATION")
+num_ool_stubs_inittext=$($objdump -r -j __patchable_function_entries "$vmlinux_o" |
+                         grep -e ".init.text" -e ".text.startup" | grep -c "$RELOCATION")
+num_ool_stubs_text=$((num_ool_stubs_total - num_ool_stubs_inittext))
+
+if [ "$num_ool_stubs_text" -gt "$num_ool_stubs_text_builtin" ]; then
+    num_ool_stubs_text_end=$((num_ool_stubs_text - num_ool_stubs_text_builtin))
+else
+    num_ool_stubs_text_end=0
+fi
+
+cat > "$arch_vmlinux_S" <<EOF
+#include <asm/asm-offsets.h>
+#include <asm/ppc_asm.h>
+#include <linux/linkage.h>
+
+.pushsection .tramp.ftrace.text,"aw"
+SYM_DATA(ftrace_ool_stub_text_end_count, .long $num_ool_stubs_text_end)
+
+SYM_START(ftrace_ool_stub_text_end, SYM_L_GLOBAL, .balign SZL)
+#if $num_ool_stubs_text_end
+    .space $num_ool_stubs_text_end * FTRACE_OOL_STUB_SIZE
+#endif
+SYM_CODE_END(ftrace_ool_stub_text_end)
+.popsection
+
+.pushsection .tramp.ftrace.init,"aw"
+SYM_DATA(ftrace_ool_stub_inittext_count, .long $num_ool_stubs_inittext)
+
+SYM_START(ftrace_ool_stub_inittext, SYM_L_GLOBAL, .balign SZL)
+    .space $num_ool_stubs_inittext * FTRACE_OOL_STUB_SIZE
+SYM_CODE_END(ftrace_ool_stub_inittext)
+.popsection
+EOF
+50
arch/powerpc/tools/ftrace_check.sh
···
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0-or-later
+#
+# This script checks vmlinux to ensure that all functions can call ftrace_caller() either directly,
+# or through the stub, ftrace_tramp_text, at the end of kernel text.
+
+# Error out if any command fails
+set -e
+
+# Allow for verbose output
+if [ "$V" = "1" ]; then
+    set -x
+fi
+
+if [ $# -lt 2 ]; then
+    echo "$0 [path to nm] [path to vmlinux]" 1>&2
+    exit 1
+fi
+
+# Have Kbuild supply the path to nm so we handle cross compilation.
+nm="$1"
+vmlinux="$2"
+
+stext_addr=$($nm "$vmlinux" | grep -e " [TA] _stext$" | \
+    cut -d' ' -f1 | tr '[:lower:]' '[:upper:]')
+ftrace_caller_addr=$($nm "$vmlinux" | grep -e " T ftrace_caller$" | \
+    cut -d' ' -f1 | tr '[:lower:]' '[:upper:]')
+ftrace_tramp_addr=$($nm "$vmlinux" | grep -e " T ftrace_tramp_text$" | \
+    cut -d' ' -f1 | tr '[:lower:]' '[:upper:]')
+
+ftrace_caller_offset=$(echo "ibase=16;$ftrace_caller_addr - $stext_addr" | bc)
+ftrace_tramp_offset=$(echo "ibase=16;$ftrace_tramp_addr - $ftrace_caller_addr" | bc)
+sz_32m=$(printf "%d" 0x2000000)
+sz_64m=$(printf "%d" 0x4000000)
+
+# ftrace_caller - _stext < 32M
+if [ "$ftrace_caller_offset" -ge "$sz_32m" ]; then
+    echo "ERROR: ftrace_caller (0x$ftrace_caller_addr) is beyond 32MiB of _stext" 1>&2
+    echo "ERROR: consider disabling CONFIG_FUNCTION_TRACER, or reducing the size \
+        of kernel text" 1>&2
+    exit 1
+fi
+
+# ftrace_tramp_text - ftrace_caller < 64M
+if [ "$ftrace_tramp_offset" -ge "$sz_64m" ]; then
+    echo "ERROR: kernel text extends beyond 64MiB from ftrace_caller" 1>&2
+    echo "ERROR: consider disabling CONFIG_FUNCTION_TRACER, or reducing the size \
+        of kernel text" 1>&2
+    exit 1
+fi
+3 -3
arch/powerpc/xmon/xmon.c
···
     int type = inchar();
     unsigned long addr, cpu;
     void __percpu *ptr = NULL;
-    static char tmp[64];
+    static char tmp[KSYM_NAME_LEN];

     switch (type) {
     case 'a':
···
         termch = 0;
         break;
     case 's':
-        getstring(tmp, 64);
+        getstring(tmp, KSYM_NAME_LEN);
         if (setjmp(bus_error_jmp) == 0) {
             catch_memory_errors = 1;
             sync();
···
         termch = 0;
         break;
     case 'p':
-        getstring(tmp, 64);
+        getstring(tmp, KSYM_NAME_LEN);
         if (setjmp(bus_error_jmp) == 0) {
             catch_memory_errors = 1;
             sync();
-7
drivers/cpufreq/Kconfig.powerpc
···
       frequencies. Using PMI, the processor will not only be able to run at
       lower speed, but also at lower core voltage.

-config CPU_FREQ_MAPLE
-    bool "Support for Maple 970FX Evaluation Board"
-    depends on PPC_MAPLE
-    help
-      This adds support for frequency switching on Maple 970FX
-      Evaluation Board and compatible boards (IBM JS2x blades).
-
 config CPU_FREQ_PMAC
     bool "Support for Apple PowerBooks"
     depends on ADB_PMU && PPC32
-1
drivers/cpufreq/Makefile
···
 obj-$(CONFIG_CPU_FREQ_CBE)      += ppc-cbe-cpufreq.o
 ppc-cbe-cpufreq-y               += ppc_cbe_cpufreq_pervasive.o ppc_cbe_cpufreq.o
 obj-$(CONFIG_CPU_FREQ_CBE_PMI)  += ppc_cbe_cpufreq_pmi.o
-obj-$(CONFIG_CPU_FREQ_MAPLE)    += maple-cpufreq.o
 obj-$(CONFIG_QORIQ_CPUFREQ)     += qoriq-cpufreq.o
 obj-$(CONFIG_CPU_FREQ_PMAC)     += pmac32-cpufreq.o
 obj-$(CONFIG_CPU_FREQ_PMAC64)   += pmac64-cpufreq.o
-242
drivers/cpufreq/maple-cpufreq.c
···
-// SPDX-License-Identifier: GPL-2.0-only
-/*
- * Copyright (C) 2011 Dmitry Eremin-Solenikov
- * Copyright (C) 2002 - 2005 Benjamin Herrenschmidt <benh@kernel.crashing.org>
- * and Markus Demleitner <msdemlei@cl.uni-heidelberg.de>
- *
- * This driver adds basic cpufreq support for SMU & 970FX based G5 Macs,
- * that is iMac G5 and latest single CPU desktop.
- */
-
-#undef DEBUG
-
-#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
-
-#include <linux/module.h>
-#include <linux/types.h>
-#include <linux/errno.h>
-#include <linux/kernel.h>
-#include <linux/delay.h>
-#include <linux/sched.h>
-#include <linux/cpufreq.h>
-#include <linux/init.h>
-#include <linux/completion.h>
-#include <linux/mutex.h>
-#include <linux/time.h>
-#include <linux/of.h>
-
-#define DBG(fmt...) pr_debug(fmt)
-
-/* see 970FX user manual */
-
-#define SCOM_PCR 0x0aa001               /* PCR scom addr */
-
-#define PCR_HILO_SELECT     0x80000000U /* 1 = PCR, 0 = PCRH */
-#define PCR_SPEED_FULL      0x00000000U /* 1:1 speed value */
-#define PCR_SPEED_HALF      0x00020000U /* 1:2 speed value */
-#define PCR_SPEED_QUARTER   0x00040000U /* 1:4 speed value */
-#define PCR_SPEED_MASK      0x000e0000U /* speed mask */
-#define PCR_SPEED_SHIFT     17
-#define PCR_FREQ_REQ_VALID  0x00010000U /* freq request valid */
-#define PCR_VOLT_REQ_VALID  0x00008000U /* volt request valid */
-#define PCR_TARGET_TIME_MASK 0x00006000U /* target time */
-#define PCR_STATLAT_MASK    0x00001f00U /* STATLAT value */
-#define PCR_SNOOPLAT_MASK   0x000000f0U /* SNOOPLAT value */
-#define PCR_SNOOPACC_MASK   0x0000000fU /* SNOOPACC value */
-
-#define SCOM_PSR 0x408001               /* PSR scom addr */
-/* warning: PSR is a 64 bits register */
-#define PSR_CMD_RECEIVED    0x2000000000000000U /* command received */
-#define PSR_CMD_COMPLETED   0x1000000000000000U /* command completed */
-#define PSR_CUR_SPEED_MASK  0x0300000000000000U /* current speed */
-#define PSR_CUR_SPEED_SHIFT (56)
-
-/*
- * The G5 only supports two frequencies (Quarter speed is not supported)
- */
-#define CPUFREQ_HIGH        0
-#define CPUFREQ_LOW         1
-
-static struct cpufreq_frequency_table maple_cpu_freqs[] = {
-    {0, CPUFREQ_HIGH,       0},
-    {0, CPUFREQ_LOW,        0},
-    {0, 0,                  CPUFREQ_TABLE_END},
-};
-
-/* Power mode data is an array of the 32 bits PCR values to use for
- * the various frequencies, retrieved from the device-tree
- */
-static int maple_pmode_cur;
-
-static const u32 *maple_pmode_data;
-static int maple_pmode_max;
-
-/*
- * SCOM based frequency switching for 970FX rev3
- */
-static int maple_scom_switch_freq(int speed_mode)
-{
-    unsigned long flags;
-    int to;
-
-    local_irq_save(flags);
-
-    /* Clear PCR high */
-    scom970_write(SCOM_PCR, 0);
-    /* Clear PCR low */
-    scom970_write(SCOM_PCR, PCR_HILO_SELECT | 0);
-    /* Set PCR low */
-    scom970_write(SCOM_PCR, PCR_HILO_SELECT |
-                  maple_pmode_data[speed_mode]);
-
-    /* Wait for completion */
-    for (to = 0; to < 10; to++) {
-        unsigned long psr = scom970_read(SCOM_PSR);
-
-        if ((psr & PSR_CMD_RECEIVED) == 0 &&
-            (((psr >> PSR_CUR_SPEED_SHIFT) ^
-              (maple_pmode_data[speed_mode] >> PCR_SPEED_SHIFT)) & 0x3)
-            == 0)
-            break;
-        if (psr & PSR_CMD_COMPLETED)
-            break;
-        udelay(100);
-    }
-
-    local_irq_restore(flags);
-
-    maple_pmode_cur = speed_mode;
-    ppc_proc_freq = maple_cpu_freqs[speed_mode].frequency * 1000ul;
-
-    return 0;
-}
-
-static int maple_scom_query_freq(void)
-{
-    unsigned long psr = scom970_read(SCOM_PSR);
-    int i;
-
-    for (i = 0; i <= maple_pmode_max; i++)
-        if ((((psr >> PSR_CUR_SPEED_SHIFT) ^
-              (maple_pmode_data[i] >> PCR_SPEED_SHIFT)) & 0x3) == 0)
-            break;
-    return i;
-}
-
-/*
- * Common interface to the cpufreq core
- */
-
-static int maple_cpufreq_target(struct cpufreq_policy *policy,
-    unsigned int index)
-{
-    return maple_scom_switch_freq(index);
-}
-
-static unsigned int maple_cpufreq_get_speed(unsigned int cpu)
-{
-    return maple_cpu_freqs[maple_pmode_cur].frequency;
-}
-
-static int maple_cpufreq_cpu_init(struct cpufreq_policy *policy)
-{
-    cpufreq_generic_init(policy, maple_cpu_freqs, 12000);
-    return 0;
-}
-
-static struct cpufreq_driver maple_cpufreq_driver = {
-    .name           = "maple",
-    .flags          = CPUFREQ_CONST_LOOPS,
-    .init           = maple_cpufreq_cpu_init,
-    .verify         = cpufreq_generic_frequency_table_verify,
-    .target_index   = maple_cpufreq_target,
-    .get            = maple_cpufreq_get_speed,
-    .attr           = cpufreq_generic_attr,
-};
-
-static int __init maple_cpufreq_init(void)
-{
-    struct device_node *cpunode;
-    unsigned int psize;
-    unsigned long max_freq;
-    const u32 *valp;
-    u32 pvr_hi;
-    int rc = -ENODEV;
-
-    /*
-     * Behave here like powermac driver which checks machine compatibility
-     * to ease merging of two drivers in future.
-     */
-    if (!of_machine_is_compatible("Momentum,Maple") &&
-        !of_machine_is_compatible("Momentum,Apache"))
-        return 0;
-
-    /* Get first CPU node */
-    cpunode = of_cpu_device_node_get(0);
-    if (cpunode == NULL) {
-        pr_err("Can't find any CPU 0 node\n");
-        goto bail_noprops;
-    }
-
-    /* Check 970FX for now */
-    /* we actually don't care on which CPU to access PVR */
-    pvr_hi = PVR_VER(mfspr(SPRN_PVR));
-    if (pvr_hi != 0x3c && pvr_hi != 0x44) {
-        pr_err("Unsupported CPU version (%x)\n", pvr_hi);
-        goto bail_noprops;
-    }
-
-    /* Look for the powertune data in the device-tree */
-    /*
-     * On Maple this property is provided by PIBS in dual-processor config,
-     * not provided by PIBS in CPU0 config and also not provided by SLOF,
-     * so YMMV
-     */
-    maple_pmode_data = of_get_property(cpunode, "power-mode-data", &psize);
-    if (!maple_pmode_data) {
-        DBG("No power-mode-data !\n");
-        goto bail_noprops;
-    }
-    maple_pmode_max = psize / sizeof(u32) - 1;
-
-    /*
-     * From what I see, clock-frequency is always the maximal frequency.
-     * The current driver can not slew sysclk yet, so we really only deal
-     * with powertune steps for now. We also only implement full freq and
-     * half freq in this version. So far, I haven't yet seen a machine
-     * supporting anything else.
-     */
-    valp = of_get_property(cpunode, "clock-frequency", NULL);
-    if (!valp)
-        goto bail_noprops;
-    max_freq = (*valp)/1000;
-    maple_cpu_freqs[0].frequency = max_freq;
-    maple_cpu_freqs[1].frequency = max_freq/2;
-
-    /* Force apply current frequency to make sure everything is in
-     * sync (voltage is right for example). Firmware may leave us with
-     * a strange setting ...
-     */
-    msleep(10);
-    maple_pmode_cur = -1;
-    maple_scom_switch_freq(maple_scom_query_freq());
-
-    pr_info("Registering Maple CPU frequency driver\n");
-    pr_info("Low: %d Mhz, High: %d Mhz, Cur: %d MHz\n",
-        maple_cpu_freqs[1].frequency/1000,
-        maple_cpu_freqs[0].frequency/1000,
-        maple_cpu_freqs[maple_pmode_cur].frequency/1000);
-
-    rc = cpufreq_register_driver(&maple_cpufreq_driver);
-
-bail_noprops:
-    of_node_put(cpunode);
-
-    return rc;
-}
-
-module_init(maple_cpufreq_init);
-
-
-MODULE_DESCRIPTION("cpufreq driver for Maple 970FX/970MP boards");
-MODULE_LICENSE("GPL");
+1
drivers/cpuidle/cpuidle-pseries.c
···
 #include <asm/idle.h>
 #include <asm/plpar_wrappers.h>
 #include <asm/rtas.h>
+#include <asm/time.h>

 static struct cpuidle_driver pseries_idle_driver = {
     .name             = "pseries_idle",
-18
drivers/edac/Kconfig
···
       Cell Broadband Engine internal memory controller
       on platform without a hypervisor

-config EDAC_AMD8131
-    tristate "AMD8131 HyperTransport PCI-X Tunnel"
-    depends on PCI && PPC_MAPLE
-    help
-      Support for error detection and correction on the
-      AMD8131 HyperTransport PCI-X Tunnel chip.
-      Note, add more Kconfig dependency if it's adopted
-      on some machine other than Maple.
-
-config EDAC_AMD8111
-    tristate "AMD8111 HyperTransport I/O Hub"
-    depends on PCI && PPC_MAPLE
-    help
-      Support for error detection and correction on the
-      AMD8111 HyperTransport I/O Hub chip.
-      Note, add more Kconfig dependency if it's adopted
-      on some machine other than Maple.
-
 config EDAC_CPC925
     tristate "IBM CPC925 Memory Controller (PPC970FX)"
     depends on PPC64
-2
drivers/edac/Makefile
···
 obj-$(CONFIG_EDAC_I10NM)        += i10nm_edac.o skx_edac_common.o

 obj-$(CONFIG_EDAC_CELL)         += cell_edac.o
-obj-$(CONFIG_EDAC_AMD8111)      += amd8111_edac.o
-obj-$(CONFIG_EDAC_AMD8131)      += amd8131_edac.o

 obj-$(CONFIG_EDAC_HIGHBANK_MC)  += highbank_mc_edac.o
 obj-$(CONFIG_EDAC_HIGHBANK_L2)  += highbank_l2_edac.o
-596
drivers/edac/amd8111_edac.c
···
-// SPDX-License-Identifier: GPL-2.0-only
-/*
- * amd8111_edac.c, AMD8111 Hyper Transport chip EDAC kernel module
- *
- * Copyright (c) 2008 Wind River Systems, Inc.
- *
- * Authors: Cao Qingtao <qingtao.cao@windriver.com>
- *          Benjamin Walsh <benjamin.walsh@windriver.com>
- *          Hu Yongqi <yongqi.hu@windriver.com>
- */
-
-#include <linux/module.h>
-#include <linux/init.h>
-#include <linux/interrupt.h>
-#include <linux/bitops.h>
-#include <linux/edac.h>
-#include <linux/pci_ids.h>
-#include <asm/io.h>
-
-#include "edac_module.h"
-#include "amd8111_edac.h"
-
-#define AMD8111_EDAC_REVISION   " Ver: 1.0.0"
-#define AMD8111_EDAC_MOD_STR    "amd8111_edac"
-
-#define PCI_DEVICE_ID_AMD_8111_PCI  0x7460
-
-enum amd8111_edac_devs {
-    LPC_BRIDGE = 0,
-};
-
-enum amd8111_edac_pcis {
-    PCI_BRIDGE = 0,
-};
-
-/* Wrapper functions for accessing PCI configuration space */
-static int edac_pci_read_dword(struct pci_dev *dev, int reg, u32 *val32)
-{
-    int ret;
-
-    ret = pci_read_config_dword(dev, reg, val32);
-    if (ret != 0)
-        printk(KERN_ERR AMD8111_EDAC_MOD_STR
-            " PCI Access Read Error at 0x%x\n", reg);
-
-    return ret;
-}
-
-static void edac_pci_read_byte(struct pci_dev *dev, int reg, u8 *val8)
-{
-    int ret;
-
-    ret = pci_read_config_byte(dev, reg, val8);
-    if (ret != 0)
-        printk(KERN_ERR AMD8111_EDAC_MOD_STR
-            " PCI Access Read Error at 0x%x\n", reg);
-}
-
-static void edac_pci_write_dword(struct pci_dev *dev, int reg, u32 val32)
-{
-    int ret;
-
-    ret = pci_write_config_dword(dev, reg, val32);
-    if (ret != 0)
-        printk(KERN_ERR AMD8111_EDAC_MOD_STR
-            " PCI Access Write Error at 0x%x\n", reg);
-}
-
-static void edac_pci_write_byte(struct pci_dev *dev, int reg, u8 val8)
-{
-    int ret;
-
-    ret = pci_write_config_byte(dev, reg, val8);
-    if (ret != 0)
-        printk(KERN_ERR AMD8111_EDAC_MOD_STR
-            " PCI Access Write Error at 0x%x\n", reg);
-}
-
-/*
- * device-specific methods for amd8111 PCI Bridge Controller
- *
- * Error Reporting and Handling for amd8111 chipset could be found
- * in its datasheet 3.1.2 section, P37
- */
-static void amd8111_pci_bridge_init(struct amd8111_pci_info *pci_info)
-{
-    u32 val32;
-    struct pci_dev *dev = pci_info->dev;
-
-    /* First clear error detection flags on the host interface */
-
-    /* Clear SSE/SMA/STA flags in the global status register*/
-    edac_pci_read_dword(dev, REG_PCI_STSCMD, &val32);
-    if (val32 & PCI_STSCMD_CLEAR_MASK)
-        edac_pci_write_dword(dev, REG_PCI_STSCMD, val32);
-
-    /* Clear CRC and Link Fail flags in HT Link Control reg */
-    edac_pci_read_dword(dev, REG_HT_LINK, &val32);
-    if (val32 & HT_LINK_CLEAR_MASK)
-        edac_pci_write_dword(dev, REG_HT_LINK, val32);
-
-    /* Second clear all fault on the secondary interface */
-
-    /* Clear error flags in the memory-base limit reg. */
-    edac_pci_read_dword(dev, REG_MEM_LIM, &val32);
-    if (val32 & MEM_LIMIT_CLEAR_MASK)
-        edac_pci_write_dword(dev, REG_MEM_LIM, val32);
-
-    /* Clear Discard Timer Expired flag in Interrupt/Bridge Control reg */
-    edac_pci_read_dword(dev, REG_PCI_INTBRG_CTRL, &val32);
-    if (val32 & PCI_INTBRG_CTRL_CLEAR_MASK)
-        edac_pci_write_dword(dev, REG_PCI_INTBRG_CTRL, val32);
-
-    /* Last enable error detections */
-    if (edac_op_state == EDAC_OPSTATE_POLL) {
-        /* Enable System Error reporting in global status register */
-        edac_pci_read_dword(dev, REG_PCI_STSCMD, &val32);
-        val32 |= PCI_STSCMD_SERREN;
-        edac_pci_write_dword(dev, REG_PCI_STSCMD, val32);
-
-        /* Enable CRC Sync flood packets to HyperTransport Link */
-        edac_pci_read_dword(dev, REG_HT_LINK, &val32);
-        val32 |= HT_LINK_CRCFEN;
-        edac_pci_write_dword(dev, REG_HT_LINK, val32);
-
-        /* Enable SSE reporting etc in Interrupt control reg */
-        edac_pci_read_dword(dev, REG_PCI_INTBRG_CTRL, &val32);
-        val32 |= PCI_INTBRG_CTRL_POLL_MASK;
-        edac_pci_write_dword(dev, REG_PCI_INTBRG_CTRL, val32);
-    }
-}
-
-static void amd8111_pci_bridge_exit(struct amd8111_pci_info *pci_info)
-{
-    u32 val32;
-    struct pci_dev *dev = pci_info->dev;
-
-    if (edac_op_state == EDAC_OPSTATE_POLL) {
-        /* Disable System Error reporting */
-        edac_pci_read_dword(dev, REG_PCI_STSCMD, &val32);
-        val32 &= ~PCI_STSCMD_SERREN;
-        edac_pci_write_dword(dev, REG_PCI_STSCMD, val32);
-
-        /* Disable CRC flood packets */
-        edac_pci_read_dword(dev, REG_HT_LINK, &val32);
-        val32 &= ~HT_LINK_CRCFEN;
-        edac_pci_write_dword(dev, REG_HT_LINK, val32);
-
-        /* Disable DTSERREN/MARSP/SERREN in Interrupt Control reg */
-        edac_pci_read_dword(dev, REG_PCI_INTBRG_CTRL, &val32);
-        val32 &= ~PCI_INTBRG_CTRL_POLL_MASK;
-        edac_pci_write_dword(dev, REG_PCI_INTBRG_CTRL, val32);
-    }
-}
-
-static void amd8111_pci_bridge_check(struct edac_pci_ctl_info *edac_dev)
-{
-    struct amd8111_pci_info *pci_info = edac_dev->pvt_info;
-    struct pci_dev *dev = pci_info->dev;
-    u32 val32;
-
-    /* Check out PCI Bridge Status and Command Register */
-    edac_pci_read_dword(dev, REG_PCI_STSCMD, &val32);
-    if (val32 & PCI_STSCMD_CLEAR_MASK) {
-        printk(KERN_INFO "Error(s) in PCI bridge status and command"
-            "register on device %s\n", pci_info->ctl_name);
-        printk(KERN_INFO "SSE: %d, RMA: %d, RTA: %d\n",
-            (val32 & PCI_STSCMD_SSE) != 0,
-            (val32 & PCI_STSCMD_RMA) != 0,
-            (val32 & PCI_STSCMD_RTA) != 0);
-
-        val32 |= PCI_STSCMD_CLEAR_MASK;
-        edac_pci_write_dword(dev, REG_PCI_STSCMD, val32);
-
-        edac_pci_handle_npe(edac_dev, edac_dev->ctl_name);
-    }
-
-    /* Check out HyperTransport Link Control Register */
-    edac_pci_read_dword(dev, REG_HT_LINK, &val32);
-    if (val32 & HT_LINK_LKFAIL) {
-        printk(KERN_INFO "Error(s) in hypertransport link control"
-            "register on device %s\n", pci_info->ctl_name);
-        printk(KERN_INFO "LKFAIL: %d\n",
-            (val32 & HT_LINK_LKFAIL) != 0);
-
-        val32 |= HT_LINK_LKFAIL;
-        edac_pci_write_dword(dev, REG_HT_LINK, val32);
-
-        edac_pci_handle_npe(edac_dev, edac_dev->ctl_name);
-    }
-
-    /* Check out PCI Interrupt and Bridge Control Register */
-    edac_pci_read_dword(dev, REG_PCI_INTBRG_CTRL, &val32);
-    if (val32 & PCI_INTBRG_CTRL_DTSTAT) {
-        printk(KERN_INFO "Error(s) in PCI interrupt and bridge control"
-            "register on device %s\n", pci_info->ctl_name);
-        printk(KERN_INFO "DTSTAT: %d\n",
-            (val32 & PCI_INTBRG_CTRL_DTSTAT) != 0);
-
-        val32 |= PCI_INTBRG_CTRL_DTSTAT;
-        edac_pci_write_dword(dev, REG_PCI_INTBRG_CTRL, val32);
-
-        edac_pci_handle_npe(edac_dev, edac_dev->ctl_name);
-    }
-
-    /* Check out PCI Bridge Memory Base-Limit Register */
-    edac_pci_read_dword(dev, REG_MEM_LIM, &val32);
-    if (val32 & MEM_LIMIT_CLEAR_MASK) {
-        printk(KERN_INFO
-            "Error(s) in mem limit register on %s device\n",
-            pci_info->ctl_name);
-        printk(KERN_INFO "DPE: %d, RSE: %d, RMA: %d\n"
-            "RTA: %d, STA: %d, MDPE: %d\n",
-            (val32 & MEM_LIMIT_DPE) != 0,
-            (val32 & MEM_LIMIT_RSE) != 0,
-            (val32 & MEM_LIMIT_RMA) != 0,
-            (val32 & MEM_LIMIT_RTA) != 0,
-            (val32 & MEM_LIMIT_STA) != 0,
-            (val32 & MEM_LIMIT_MDPE) != 0);
-
-        val32 |= MEM_LIMIT_CLEAR_MASK;
-        edac_pci_write_dword(dev, REG_MEM_LIM, val32);
-
-        edac_pci_handle_npe(edac_dev, edac_dev->ctl_name);
-    }
-}
-
-static struct resource *legacy_io_res;
-static int at_compat_reg_broken;
-#define LEGACY_NR_PORTS 1
-
-/* device-specific methods for amd8111 LPC Bridge device */
-static void amd8111_lpc_bridge_init(struct amd8111_dev_info *dev_info)
-{
-    u8 val8;
-    struct pci_dev *dev = dev_info->dev;
-
-    /* First clear REG_AT_COMPAT[SERR, IOCHK] if necessary */
-    legacy_io_res = request_region(REG_AT_COMPAT, LEGACY_NR_PORTS,
-                    AMD8111_EDAC_MOD_STR);
-    if (!legacy_io_res)
-        printk(KERN_INFO "%s: failed to request legacy I/O region "
-            "start %d, len %d\n", __func__,
-            REG_AT_COMPAT, LEGACY_NR_PORTS);
-    else {
-        val8 = __do_inb(REG_AT_COMPAT);
-        if (val8 == 0xff) { /* buggy port */
-            printk(KERN_INFO "%s: port %d is buggy, not supported"
-                " by hardware?\n", __func__, REG_AT_COMPAT);
-            at_compat_reg_broken = 1;
-            release_region(REG_AT_COMPAT, LEGACY_NR_PORTS);
-            legacy_io_res = NULL;
-        } else {
-            u8 out8 = 0;
-            if (val8 & AT_COMPAT_SERR)
-                out8 = AT_COMPAT_CLRSERR;
-            if (val8 & AT_COMPAT_IOCHK)
-                out8 |= AT_COMPAT_CLRIOCHK;
-            if (out8 > 0)
-                __do_outb(out8, REG_AT_COMPAT);
-        }
-    }
-
-    /* Second clear error flags on LPC bridge */
-    edac_pci_read_byte(dev,
REG_IO_CTRL_1, &val8); 266 - if (val8 & IO_CTRL_1_CLEAR_MASK) 267 - edac_pci_write_byte(dev, REG_IO_CTRL_1, val8); 268 - } 269 - 270 - static void amd8111_lpc_bridge_exit(struct amd8111_dev_info *dev_info) 271 - { 272 - if (legacy_io_res) 273 - release_region(REG_AT_COMPAT, LEGACY_NR_PORTS); 274 - } 275 - 276 - static void amd8111_lpc_bridge_check(struct edac_device_ctl_info *edac_dev) 277 - { 278 - struct amd8111_dev_info *dev_info = edac_dev->pvt_info; 279 - struct pci_dev *dev = dev_info->dev; 280 - u8 val8; 281 - 282 - edac_pci_read_byte(dev, REG_IO_CTRL_1, &val8); 283 - if (val8 & IO_CTRL_1_CLEAR_MASK) { 284 - printk(KERN_INFO 285 - "Error(s) in IO control register on %s device\n", 286 - dev_info->ctl_name); 287 - printk(KERN_INFO "LPC ERR: %d, PW2LPC: %d\n", 288 - (val8 & IO_CTRL_1_LPC_ERR) != 0, 289 - (val8 & IO_CTRL_1_PW2LPC) != 0); 290 - 291 - val8 |= IO_CTRL_1_CLEAR_MASK; 292 - edac_pci_write_byte(dev, REG_IO_CTRL_1, val8); 293 - 294 - edac_device_handle_ue(edac_dev, 0, 0, edac_dev->ctl_name); 295 - } 296 - 297 - if (at_compat_reg_broken == 0) { 298 - u8 out8 = 0; 299 - val8 = __do_inb(REG_AT_COMPAT); 300 - if (val8 & AT_COMPAT_SERR) 301 - out8 = AT_COMPAT_CLRSERR; 302 - if (val8 & AT_COMPAT_IOCHK) 303 - out8 |= AT_COMPAT_CLRIOCHK; 304 - if (out8 > 0) { 305 - __do_outb(out8, REG_AT_COMPAT); 306 - edac_device_handle_ue(edac_dev, 0, 0, 307 - edac_dev->ctl_name); 308 - } 309 - } 310 - } 311 - 312 - /* General devices represented by edac_device_ctl_info */ 313 - static struct amd8111_dev_info amd8111_devices[] = { 314 - [LPC_BRIDGE] = { 315 - .err_dev = PCI_DEVICE_ID_AMD_8111_LPC, 316 - .ctl_name = "lpc", 317 - .init = amd8111_lpc_bridge_init, 318 - .exit = amd8111_lpc_bridge_exit, 319 - .check = amd8111_lpc_bridge_check, 320 - }, 321 - {0}, 322 - }; 323 - 324 - /* PCI controllers represented by edac_pci_ctl_info */ 325 - static struct amd8111_pci_info amd8111_pcis[] = { 326 - [PCI_BRIDGE] = { 327 - .err_dev = PCI_DEVICE_ID_AMD_8111_PCI, 328 - .ctl_name = 
"AMD8111_PCI_Controller", 329 - .init = amd8111_pci_bridge_init, 330 - .exit = amd8111_pci_bridge_exit, 331 - .check = amd8111_pci_bridge_check, 332 - }, 333 - {0}, 334 - }; 335 - 336 - static int amd8111_dev_probe(struct pci_dev *dev, 337 - const struct pci_device_id *id) 338 - { 339 - struct amd8111_dev_info *dev_info = &amd8111_devices[id->driver_data]; 340 - int ret = -ENODEV; 341 - 342 - dev_info->dev = pci_get_device(PCI_VENDOR_ID_AMD, 343 - dev_info->err_dev, NULL); 344 - 345 - if (!dev_info->dev) { 346 - printk(KERN_ERR "EDAC device not found:" 347 - "vendor %x, device %x, name %s\n", 348 - PCI_VENDOR_ID_AMD, dev_info->err_dev, 349 - dev_info->ctl_name); 350 - goto err; 351 - } 352 - 353 - if (pci_enable_device(dev_info->dev)) { 354 - printk(KERN_ERR "failed to enable:" 355 - "vendor %x, device %x, name %s\n", 356 - PCI_VENDOR_ID_AMD, dev_info->err_dev, 357 - dev_info->ctl_name); 358 - goto err_dev_put; 359 - } 360 - 361 - /* 362 - * we do not allocate extra private structure for 363 - * edac_device_ctl_info, but make use of existing 364 - * one instead. 
365 - */ 366 - dev_info->edac_idx = edac_device_alloc_index(); 367 - dev_info->edac_dev = 368 - edac_device_alloc_ctl_info(0, dev_info->ctl_name, 1, 369 - NULL, 0, 0, dev_info->edac_idx); 370 - if (!dev_info->edac_dev) { 371 - ret = -ENOMEM; 372 - goto err_dev_put; 373 - } 374 - 375 - dev_info->edac_dev->pvt_info = dev_info; 376 - dev_info->edac_dev->dev = &dev_info->dev->dev; 377 - dev_info->edac_dev->mod_name = AMD8111_EDAC_MOD_STR; 378 - dev_info->edac_dev->ctl_name = dev_info->ctl_name; 379 - dev_info->edac_dev->dev_name = dev_name(&dev_info->dev->dev); 380 - 381 - if (edac_op_state == EDAC_OPSTATE_POLL) 382 - dev_info->edac_dev->edac_check = dev_info->check; 383 - 384 - if (dev_info->init) 385 - dev_info->init(dev_info); 386 - 387 - if (edac_device_add_device(dev_info->edac_dev) > 0) { 388 - printk(KERN_ERR "failed to add edac_dev for %s\n", 389 - dev_info->ctl_name); 390 - goto err_edac_free_ctl; 391 - } 392 - 393 - printk(KERN_INFO "added one edac_dev on AMD8111 " 394 - "vendor %x, device %x, name %s\n", 395 - PCI_VENDOR_ID_AMD, dev_info->err_dev, 396 - dev_info->ctl_name); 397 - 398 - return 0; 399 - 400 - err_edac_free_ctl: 401 - edac_device_free_ctl_info(dev_info->edac_dev); 402 - err_dev_put: 403 - pci_dev_put(dev_info->dev); 404 - err: 405 - return ret; 406 - } 407 - 408 - static void amd8111_dev_remove(struct pci_dev *dev) 409 - { 410 - struct amd8111_dev_info *dev_info; 411 - 412 - for (dev_info = amd8111_devices; dev_info->err_dev; dev_info++) 413 - if (dev_info->dev->device == dev->device) 414 - break; 415 - 416 - if (!dev_info->err_dev) /* should never happen */ 417 - return; 418 - 419 - if (dev_info->edac_dev) { 420 - edac_device_del_device(dev_info->edac_dev->dev); 421 - edac_device_free_ctl_info(dev_info->edac_dev); 422 - } 423 - 424 - if (dev_info->exit) 425 - dev_info->exit(dev_info); 426 - 427 - pci_dev_put(dev_info->dev); 428 - } 429 - 430 - static int amd8111_pci_probe(struct pci_dev *dev, 431 - const struct pci_device_id *id) 432 - { 433 - 
struct amd8111_pci_info *pci_info = &amd8111_pcis[id->driver_data]; 434 - int ret = -ENODEV; 435 - 436 - pci_info->dev = pci_get_device(PCI_VENDOR_ID_AMD, 437 - pci_info->err_dev, NULL); 438 - 439 - if (!pci_info->dev) { 440 - printk(KERN_ERR "EDAC device not found:" 441 - "vendor %x, device %x, name %s\n", 442 - PCI_VENDOR_ID_AMD, pci_info->err_dev, 443 - pci_info->ctl_name); 444 - goto err; 445 - } 446 - 447 - if (pci_enable_device(pci_info->dev)) { 448 - printk(KERN_ERR "failed to enable:" 449 - "vendor %x, device %x, name %s\n", 450 - PCI_VENDOR_ID_AMD, pci_info->err_dev, 451 - pci_info->ctl_name); 452 - goto err_dev_put; 453 - } 454 - 455 - /* 456 - * we do not allocate extra private structure for 457 - * edac_pci_ctl_info, but make use of existing 458 - * one instead. 459 - */ 460 - pci_info->edac_idx = edac_pci_alloc_index(); 461 - pci_info->edac_dev = edac_pci_alloc_ctl_info(0, pci_info->ctl_name); 462 - if (!pci_info->edac_dev) { 463 - ret = -ENOMEM; 464 - goto err_dev_put; 465 - } 466 - 467 - pci_info->edac_dev->pvt_info = pci_info; 468 - pci_info->edac_dev->dev = &pci_info->dev->dev; 469 - pci_info->edac_dev->mod_name = AMD8111_EDAC_MOD_STR; 470 - pci_info->edac_dev->ctl_name = pci_info->ctl_name; 471 - pci_info->edac_dev->dev_name = dev_name(&pci_info->dev->dev); 472 - 473 - if (edac_op_state == EDAC_OPSTATE_POLL) 474 - pci_info->edac_dev->edac_check = pci_info->check; 475 - 476 - if (pci_info->init) 477 - pci_info->init(pci_info); 478 - 479 - if (edac_pci_add_device(pci_info->edac_dev, pci_info->edac_idx) > 0) { 480 - printk(KERN_ERR "failed to add edac_pci for %s\n", 481 - pci_info->ctl_name); 482 - goto err_edac_free_ctl; 483 - } 484 - 485 - printk(KERN_INFO "added one edac_pci on AMD8111 " 486 - "vendor %x, device %x, name %s\n", 487 - PCI_VENDOR_ID_AMD, pci_info->err_dev, 488 - pci_info->ctl_name); 489 - 490 - return 0; 491 - 492 - err_edac_free_ctl: 493 - edac_pci_free_ctl_info(pci_info->edac_dev); 494 - err_dev_put: 495 - 
pci_dev_put(pci_info->dev); 496 - err: 497 - return ret; 498 - } 499 - 500 - static void amd8111_pci_remove(struct pci_dev *dev) 501 - { 502 - struct amd8111_pci_info *pci_info; 503 - 504 - for (pci_info = amd8111_pcis; pci_info->err_dev; pci_info++) 505 - if (pci_info->dev->device == dev->device) 506 - break; 507 - 508 - if (!pci_info->err_dev) /* should never happen */ 509 - return; 510 - 511 - if (pci_info->edac_dev) { 512 - edac_pci_del_device(pci_info->edac_dev->dev); 513 - edac_pci_free_ctl_info(pci_info->edac_dev); 514 - } 515 - 516 - if (pci_info->exit) 517 - pci_info->exit(pci_info); 518 - 519 - pci_dev_put(pci_info->dev); 520 - } 521 - 522 - /* PCI Device ID talbe for general EDAC device */ 523 - static const struct pci_device_id amd8111_edac_dev_tbl[] = { 524 - { 525 - PCI_VEND_DEV(AMD, 8111_LPC), 526 - .subvendor = PCI_ANY_ID, 527 - .subdevice = PCI_ANY_ID, 528 - .class = 0, 529 - .class_mask = 0, 530 - .driver_data = LPC_BRIDGE, 531 - }, 532 - { 533 - 0, 534 - } /* table is NULL-terminated */ 535 - }; 536 - MODULE_DEVICE_TABLE(pci, amd8111_edac_dev_tbl); 537 - 538 - static struct pci_driver amd8111_edac_dev_driver = { 539 - .name = "AMD8111_EDAC_DEV", 540 - .probe = amd8111_dev_probe, 541 - .remove = amd8111_dev_remove, 542 - .id_table = amd8111_edac_dev_tbl, 543 - }; 544 - 545 - /* PCI Device ID table for EDAC PCI controller */ 546 - static const struct pci_device_id amd8111_edac_pci_tbl[] = { 547 - { 548 - PCI_VEND_DEV(AMD, 8111_PCI), 549 - .subvendor = PCI_ANY_ID, 550 - .subdevice = PCI_ANY_ID, 551 - .class = 0, 552 - .class_mask = 0, 553 - .driver_data = PCI_BRIDGE, 554 - }, 555 - { 556 - 0, 557 - } /* table is NULL-terminated */ 558 - }; 559 - MODULE_DEVICE_TABLE(pci, amd8111_edac_pci_tbl); 560 - 561 - static struct pci_driver amd8111_edac_pci_driver = { 562 - .name = "AMD8111_EDAC_PCI", 563 - .probe = amd8111_pci_probe, 564 - .remove = amd8111_pci_remove, 565 - .id_table = amd8111_edac_pci_tbl, 566 - }; 567 - 568 - static int __init 
amd8111_edac_init(void) 569 - { 570 - int val; 571 - 572 - printk(KERN_INFO "AMD8111 EDAC driver " AMD8111_EDAC_REVISION "\n"); 573 - printk(KERN_INFO "\t(c) 2008 Wind River Systems, Inc.\n"); 574 - 575 - /* Only POLL mode supported so far */ 576 - edac_op_state = EDAC_OPSTATE_POLL; 577 - 578 - val = pci_register_driver(&amd8111_edac_dev_driver); 579 - val |= pci_register_driver(&amd8111_edac_pci_driver); 580 - 581 - return val; 582 - } 583 - 584 - static void __exit amd8111_edac_exit(void) 585 - { 586 - pci_unregister_driver(&amd8111_edac_pci_driver); 587 - pci_unregister_driver(&amd8111_edac_dev_driver); 588 - } 589 - 590 - 591 - module_init(amd8111_edac_init); 592 - module_exit(amd8111_edac_exit); 593 - 594 - MODULE_LICENSE("GPL"); 595 - MODULE_AUTHOR("Cao Qingtao <qingtao.cao@windriver.com>"); 596 - MODULE_DESCRIPTION("AMD8111 HyperTransport I/O Hub EDAC kernel module");
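The removed amd8111/amd8131 code above leans on one chipset-driver idiom throughout: read a status register, and if any sticky error bit is latched, write the read value straight back so the write-1-to-clear (W1C) flags reset. A minimal standalone sketch of that idiom, with the register modeled as a plain variable and a hypothetical `ERR_CLEAR_MASK` (the register layout here is illustrative, not the AMD8111's):

```c
#include <stdint.h>

/* Hypothetical sticky-error register modeled as a plain variable.
 * Bits in ERR_CLEAR_MASK are write-1-to-clear (W1C): hardware latches
 * them on error, software clears them by writing the read value back. */
#define ERR_CLEAR_MASK 0xF0000000u

static uint32_t fake_reg = 0xA0000100u;	/* two latched errors + a control bit */

static void reg_read(uint32_t *val32)
{
	*val32 = fake_reg;
}

static void reg_write(uint32_t val32)
{
	/* Only the W1C status bits react to the write in this model;
	 * bits outside the mask keep their value. */
	fake_reg &= ~(val32 & ERR_CLEAR_MASK);
}

/* Same shape as the removed amd8111_pci_bridge_init() code: read the
 * register and, if any error flag is set, write the value straight back. */
static void clear_sticky_errors(void)
{
	uint32_t val32;

	reg_read(&val32);
	if (val32 & ERR_CLEAR_MASK)
		reg_write(val32);
}
```

Writing the unmodified value back is safe precisely because only the W1C status bits react to a 1, which is why the removed init/check routines never mask the value before writing it.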
drivers/edac/amd8111_edac.h (-118)
···
-/* SPDX-License-Identifier: GPL-2.0-only */
-/*
- * amd8111_edac.h, EDAC defs for AMD8111 hypertransport chip
- *
- * Copyright (c) 2008 Wind River Systems, Inc.
- *
- * Authors:	Cao Qingtao <qingtao.cao@windriver.com>
- *		Benjamin Walsh <benjamin.walsh@windriver.com>
- *		Hu Yongqi <yongqi.hu@windriver.com>
- */
-
-#ifndef _AMD8111_EDAC_H_
-#define _AMD8111_EDAC_H_
-
-/************************************************************
- *	PCI Bridge Status and Command Register, DevA:0x04
- ************************************************************/
-#define REG_PCI_STSCMD	0x04
-enum pci_stscmd_bits {
-	PCI_STSCMD_SSE = BIT(30),
-	PCI_STSCMD_RMA = BIT(29),
-	PCI_STSCMD_RTA = BIT(28),
-	PCI_STSCMD_SERREN = BIT(8),
-	PCI_STSCMD_CLEAR_MASK = (PCI_STSCMD_SSE |
-				 PCI_STSCMD_RMA |
-				 PCI_STSCMD_RTA)
-};
-
-/************************************************************
- *	PCI Bridge Memory Base-Limit Register, DevA:0x1c
- ************************************************************/
-#define REG_MEM_LIM	0x1c
-enum mem_limit_bits {
-	MEM_LIMIT_DPE = BIT(31),
-	MEM_LIMIT_RSE = BIT(30),
-	MEM_LIMIT_RMA = BIT(29),
-	MEM_LIMIT_RTA = BIT(28),
-	MEM_LIMIT_STA = BIT(27),
-	MEM_LIMIT_MDPE = BIT(24),
-	MEM_LIMIT_CLEAR_MASK = (MEM_LIMIT_DPE |
-				MEM_LIMIT_RSE |
-				MEM_LIMIT_RMA |
-				MEM_LIMIT_RTA |
-				MEM_LIMIT_STA |
-				MEM_LIMIT_MDPE)
-};
-
-/************************************************************
- *	HyperTransport Link Control Register, DevA:0xc4
- ************************************************************/
-#define REG_HT_LINK	0xc4
-enum ht_link_bits {
-	HT_LINK_LKFAIL = BIT(4),
-	HT_LINK_CRCFEN = BIT(1),
-	HT_LINK_CLEAR_MASK = (HT_LINK_LKFAIL)
-};
-
-/************************************************************
- *	PCI Bridge Interrupt and Bridge Control, DevA:0x3c
- ************************************************************/
-#define REG_PCI_INTBRG_CTRL	0x3c
-enum pci_intbrg_ctrl_bits {
-	PCI_INTBRG_CTRL_DTSERREN = BIT(27),
-	PCI_INTBRG_CTRL_DTSTAT = BIT(26),
-	PCI_INTBRG_CTRL_MARSP = BIT(21),
-	PCI_INTBRG_CTRL_SERREN = BIT(17),
-	PCI_INTBRG_CTRL_PEREN = BIT(16),
-	PCI_INTBRG_CTRL_CLEAR_MASK = (PCI_INTBRG_CTRL_DTSTAT),
-	PCI_INTBRG_CTRL_POLL_MASK = (PCI_INTBRG_CTRL_DTSERREN |
-				     PCI_INTBRG_CTRL_MARSP |
-				     PCI_INTBRG_CTRL_SERREN)
-};
-
-/************************************************************
- *	I/O Control 1 Register, DevB:0x40
- ************************************************************/
-#define REG_IO_CTRL_1	0x40
-enum io_ctrl_1_bits {
-	IO_CTRL_1_NMIONERR = BIT(7),
-	IO_CTRL_1_LPC_ERR = BIT(6),
-	IO_CTRL_1_PW2LPC = BIT(1),
-	IO_CTRL_1_CLEAR_MASK = (IO_CTRL_1_LPC_ERR | IO_CTRL_1_PW2LPC)
-};
-
-/************************************************************
- *	Legacy I/O Space Registers
- ************************************************************/
-#define REG_AT_COMPAT	0x61
-enum at_compat_bits {
-	AT_COMPAT_SERR = BIT(7),
-	AT_COMPAT_IOCHK = BIT(6),
-	AT_COMPAT_CLRIOCHK = BIT(3),
-	AT_COMPAT_CLRSERR = BIT(2),
-};
-
-struct amd8111_dev_info {
-	u16 err_dev;	/* PCI Device ID */
-	struct pci_dev *dev;
-	int edac_idx;	/* device index */
-	char *ctl_name;
-	struct edac_device_ctl_info *edac_dev;
-	void (*init)(struct amd8111_dev_info *dev_info);
-	void (*exit)(struct amd8111_dev_info *dev_info);
-	void (*check)(struct edac_device_ctl_info *edac_dev);
-};
-
-struct amd8111_pci_info {
-	u16 err_dev;	/* PCI Device ID */
-	struct pci_dev *dev;
-	int edac_idx;	/* pci index */
-	const char *ctl_name;
-	struct edac_pci_ctl_info *edac_dev;
-	void (*init)(struct amd8111_pci_info *dev_info);
-	void (*exit)(struct amd8111_pci_info *dev_info);
-	void (*check)(struct edac_pci_ctl_info *edac_dev);
-};
-
-#endif /* _AMD8111_EDAC_H_ */
drivers/edac/amd8131_edac.c (-358)
···
-// SPDX-License-Identifier: GPL-2.0-only
-/*
- * amd8131_edac.c, AMD8131 hypertransport chip EDAC kernel module
- *
- * Copyright (c) 2008 Wind River Systems, Inc.
- *
- * Authors:	Cao Qingtao <qingtao.cao@windriver.com>
- *		Benjamin Walsh <benjamin.walsh@windriver.com>
- *		Hu Yongqi <yongqi.hu@windriver.com>
- */
-
-#include <linux/module.h>
-#include <linux/init.h>
-#include <linux/interrupt.h>
-#include <linux/io.h>
-#include <linux/bitops.h>
-#include <linux/edac.h>
-#include <linux/pci_ids.h>
-
-#include "edac_module.h"
-#include "amd8131_edac.h"
-
-#define AMD8131_EDAC_REVISION	" Ver: 1.0.0"
-#define AMD8131_EDAC_MOD_STR	"amd8131_edac"
-
-/* Wrapper functions for accessing PCI configuration space */
-static void edac_pci_read_dword(struct pci_dev *dev, int reg, u32 *val32)
-{
-	int ret;
-
-	ret = pci_read_config_dword(dev, reg, val32);
-	if (ret != 0)
-		printk(KERN_ERR AMD8131_EDAC_MOD_STR
-			" PCI Access Read Error at 0x%x\n", reg);
-}
-
-static void edac_pci_write_dword(struct pci_dev *dev, int reg, u32 val32)
-{
-	int ret;
-
-	ret = pci_write_config_dword(dev, reg, val32);
-	if (ret != 0)
-		printk(KERN_ERR AMD8131_EDAC_MOD_STR
-			" PCI Access Write Error at 0x%x\n", reg);
-}
-
-/* Support up to two AMD8131 chipsets on a platform */
-static struct amd8131_dev_info amd8131_devices[] = {
-	{
-		.inst = NORTH_A,
-		.devfn = DEVFN_PCIX_BRIDGE_NORTH_A,
-		.ctl_name = "AMD8131_PCIX_NORTH_A",
-	},
-	{
-		.inst = NORTH_B,
-		.devfn = DEVFN_PCIX_BRIDGE_NORTH_B,
-		.ctl_name = "AMD8131_PCIX_NORTH_B",
-	},
-	{
-		.inst = SOUTH_A,
-		.devfn = DEVFN_PCIX_BRIDGE_SOUTH_A,
-		.ctl_name = "AMD8131_PCIX_SOUTH_A",
-	},
-	{
-		.inst = SOUTH_B,
-		.devfn = DEVFN_PCIX_BRIDGE_SOUTH_B,
-		.ctl_name = "AMD8131_PCIX_SOUTH_B",
-	},
-	{.inst = NO_BRIDGE,},
-};
-
-static void amd8131_pcix_init(struct amd8131_dev_info *dev_info)
-{
-	u32 val32;
-	struct pci_dev *dev = dev_info->dev;
-
-	/* First clear error detection flags */
-	edac_pci_read_dword(dev, REG_MEM_LIM, &val32);
-	if (val32 & MEM_LIMIT_MASK)
-		edac_pci_write_dword(dev, REG_MEM_LIM, val32);
-
-	/* Clear Discard Timer Timedout flag */
-	edac_pci_read_dword(dev, REG_INT_CTLR, &val32);
-	if (val32 & INT_CTLR_DTS)
-		edac_pci_write_dword(dev, REG_INT_CTLR, val32);
-
-	/* Clear CRC Error flag on link side A */
-	edac_pci_read_dword(dev, REG_LNK_CTRL_A, &val32);
-	if (val32 & LNK_CTRL_CRCERR_A)
-		edac_pci_write_dword(dev, REG_LNK_CTRL_A, val32);
-
-	/* Clear CRC Error flag on link side B */
-	edac_pci_read_dword(dev, REG_LNK_CTRL_B, &val32);
-	if (val32 & LNK_CTRL_CRCERR_B)
-		edac_pci_write_dword(dev, REG_LNK_CTRL_B, val32);
-
-	/*
-	 * Then enable all error detections.
-	 *
-	 * Setup Discard Timer Sync Flood Enable,
-	 * System Error Enable and Parity Error Enable.
-	 */
-	edac_pci_read_dword(dev, REG_INT_CTLR, &val32);
-	val32 |= INT_CTLR_PERR | INT_CTLR_SERR | INT_CTLR_DTSE;
-	edac_pci_write_dword(dev, REG_INT_CTLR, val32);
-
-	/* Enable overall SERR Error detection */
-	edac_pci_read_dword(dev, REG_STS_CMD, &val32);
-	val32 |= STS_CMD_SERREN;
-	edac_pci_write_dword(dev, REG_STS_CMD, val32);
-
-	/* Setup CRC Flood Enable for link side A */
-	edac_pci_read_dword(dev, REG_LNK_CTRL_A, &val32);
-	val32 |= LNK_CTRL_CRCFEN;
-	edac_pci_write_dword(dev, REG_LNK_CTRL_A, val32);
-
-	/* Setup CRC Flood Enable for link side B */
-	edac_pci_read_dword(dev, REG_LNK_CTRL_B, &val32);
-	val32 |= LNK_CTRL_CRCFEN;
-	edac_pci_write_dword(dev, REG_LNK_CTRL_B, val32);
-}
-
-static void amd8131_pcix_exit(struct amd8131_dev_info *dev_info)
-{
-	u32 val32;
-	struct pci_dev *dev = dev_info->dev;
-
-	/* Disable SERR, PERR and DTSE Error detection */
-	edac_pci_read_dword(dev, REG_INT_CTLR, &val32);
-	val32 &= ~(INT_CTLR_PERR | INT_CTLR_SERR | INT_CTLR_DTSE);
-	edac_pci_write_dword(dev, REG_INT_CTLR, val32);
-
-	/* Disable overall System Error detection */
-	edac_pci_read_dword(dev, REG_STS_CMD, &val32);
-	val32 &= ~STS_CMD_SERREN;
-	edac_pci_write_dword(dev, REG_STS_CMD, val32);
-
-	/* Disable CRC Sync Flood on link side A */
-	edac_pci_read_dword(dev, REG_LNK_CTRL_A, &val32);
-	val32 &= ~LNK_CTRL_CRCFEN;
-	edac_pci_write_dword(dev, REG_LNK_CTRL_A, val32);
-
-	/* Disable CRC Sync Flood on link side B */
-	edac_pci_read_dword(dev, REG_LNK_CTRL_B, &val32);
-	val32 &= ~LNK_CTRL_CRCFEN;
-	edac_pci_write_dword(dev, REG_LNK_CTRL_B, val32);
-}
-
-static void amd8131_pcix_check(struct edac_pci_ctl_info *edac_dev)
-{
-	struct amd8131_dev_info *dev_info = edac_dev->pvt_info;
-	struct pci_dev *dev = dev_info->dev;
-	u32 val32;
-
-	/* Check PCI-X Bridge Memory Base-Limit Register for errors */
-	edac_pci_read_dword(dev, REG_MEM_LIM, &val32);
-	if (val32 & MEM_LIMIT_MASK) {
-		printk(KERN_INFO "Error(s) in mem limit register "
-			"on %s bridge\n", dev_info->ctl_name);
-		printk(KERN_INFO "DPE: %d, RSE: %d, RMA: %d\n"
-			"RTA: %d, STA: %d, MDPE: %d\n",
-			val32 & MEM_LIMIT_DPE,
-			val32 & MEM_LIMIT_RSE,
-			val32 & MEM_LIMIT_RMA,
-			val32 & MEM_LIMIT_RTA,
-			val32 & MEM_LIMIT_STA,
-			val32 & MEM_LIMIT_MDPE);
-
-		val32 |= MEM_LIMIT_MASK;
-		edac_pci_write_dword(dev, REG_MEM_LIM, val32);
-
-		edac_pci_handle_npe(edac_dev, edac_dev->ctl_name);
-	}
-
-	/* Check if Discard Timer timed out */
-	edac_pci_read_dword(dev, REG_INT_CTLR, &val32);
-	if (val32 & INT_CTLR_DTS) {
-		printk(KERN_INFO "Error(s) in interrupt and control register "
-			"on %s bridge\n", dev_info->ctl_name);
-		printk(KERN_INFO "DTS: %d\n", val32 & INT_CTLR_DTS);
-
-		val32 |= INT_CTLR_DTS;
-		edac_pci_write_dword(dev, REG_INT_CTLR, val32);
-
-		edac_pci_handle_npe(edac_dev, edac_dev->ctl_name);
-	}
-
-	/* Check if CRC error happens on link side A */
-	edac_pci_read_dword(dev, REG_LNK_CTRL_A, &val32);
-	if (val32 & LNK_CTRL_CRCERR_A) {
-		printk(KERN_INFO "Error(s) in link conf and control register "
-			"on %s bridge\n", dev_info->ctl_name);
-		printk(KERN_INFO "CRCERR: %d\n", val32 & LNK_CTRL_CRCERR_A);
-
-		val32 |= LNK_CTRL_CRCERR_A;
-		edac_pci_write_dword(dev, REG_LNK_CTRL_A, val32);
-
-		edac_pci_handle_npe(edac_dev, edac_dev->ctl_name);
-	}
-
-	/* Check if CRC error happens on link side B */
-	edac_pci_read_dword(dev, REG_LNK_CTRL_B, &val32);
-	if (val32 & LNK_CTRL_CRCERR_B) {
-		printk(KERN_INFO "Error(s) in link conf and control register "
-			"on %s bridge\n", dev_info->ctl_name);
-		printk(KERN_INFO "CRCERR: %d\n", val32 & LNK_CTRL_CRCERR_B);
-
-		val32 |= LNK_CTRL_CRCERR_B;
-		edac_pci_write_dword(dev, REG_LNK_CTRL_B, val32);
-
-		edac_pci_handle_npe(edac_dev, edac_dev->ctl_name);
-	}
-}
-
-static struct amd8131_info amd8131_chipset = {
-	.err_dev = PCI_DEVICE_ID_AMD_8131_APIC,
-	.devices = amd8131_devices,
-	.init = amd8131_pcix_init,
-	.exit = amd8131_pcix_exit,
-	.check = amd8131_pcix_check,
-};
-
-/*
- * There are 4 PCIX Bridges on ATCA-6101 that share the same PCI Device ID,
- * so amd8131_probe() would be called by kernel 4 times, with different
- * address of pci_dev for each of them each time.
- */
-static int amd8131_probe(struct pci_dev *dev, const struct pci_device_id *id)
-{
-	struct amd8131_dev_info *dev_info;
-
-	for (dev_info = amd8131_chipset.devices; dev_info->inst != NO_BRIDGE;
-		dev_info++)
-		if (dev_info->devfn == dev->devfn)
-			break;
-
-	if (dev_info->inst == NO_BRIDGE) /* should never happen */
-		return -ENODEV;
-
-	/*
-	 * We can't call pci_get_device() as we are used to do because
-	 * there are 4 of them but pci_dev_get() instead.
-	 */
-	dev_info->dev = pci_dev_get(dev);
-
-	if (pci_enable_device(dev_info->dev)) {
-		pci_dev_put(dev_info->dev);
-		printk(KERN_ERR "failed to enable:"
-			"vendor %x, device %x, devfn %x, name %s\n",
-			PCI_VENDOR_ID_AMD, amd8131_chipset.err_dev,
-			dev_info->devfn, dev_info->ctl_name);
-		return -ENODEV;
-	}
-
-	/*
-	 * we do not allocate extra private structure for
-	 * edac_pci_ctl_info, but make use of existing
-	 * one instead.
-	 */
-	dev_info->edac_idx = edac_pci_alloc_index();
-	dev_info->edac_dev = edac_pci_alloc_ctl_info(0, dev_info->ctl_name);
-	if (!dev_info->edac_dev)
-		return -ENOMEM;
-
-	dev_info->edac_dev->pvt_info = dev_info;
-	dev_info->edac_dev->dev = &dev_info->dev->dev;
-	dev_info->edac_dev->mod_name = AMD8131_EDAC_MOD_STR;
-	dev_info->edac_dev->ctl_name = dev_info->ctl_name;
-	dev_info->edac_dev->dev_name = dev_name(&dev_info->dev->dev);
-
-	if (edac_op_state == EDAC_OPSTATE_POLL)
-		dev_info->edac_dev->edac_check = amd8131_chipset.check;
-
-	if (amd8131_chipset.init)
-		amd8131_chipset.init(dev_info);
-
-	if (edac_pci_add_device(dev_info->edac_dev, dev_info->edac_idx) > 0) {
-		printk(KERN_ERR "failed edac_pci_add_device() for %s\n",
-			dev_info->ctl_name);
-		edac_pci_free_ctl_info(dev_info->edac_dev);
-		return -ENODEV;
-	}
-
-	printk(KERN_INFO "added one device on AMD8131 "
-		"vendor %x, device %x, devfn %x, name %s\n",
-		PCI_VENDOR_ID_AMD, amd8131_chipset.err_dev,
-		dev_info->devfn, dev_info->ctl_name);
-
-	return 0;
-}
-
-static void amd8131_remove(struct pci_dev *dev)
-{
-	struct amd8131_dev_info *dev_info;
-
-	for (dev_info = amd8131_chipset.devices; dev_info->inst != NO_BRIDGE;
-		dev_info++)
-		if (dev_info->devfn == dev->devfn)
-			break;
-
-	if (dev_info->inst == NO_BRIDGE) /* should never happen */
-		return;
-
-	if (dev_info->edac_dev) {
-		edac_pci_del_device(dev_info->edac_dev->dev);
-		edac_pci_free_ctl_info(dev_info->edac_dev);
-	}
-
-	if (amd8131_chipset.exit)
-		amd8131_chipset.exit(dev_info);
-
-	pci_dev_put(dev_info->dev);
-}
-
-static const struct pci_device_id amd8131_edac_pci_tbl[] = {
-	{
-		PCI_VEND_DEV(AMD, 8131_BRIDGE),
-		.subvendor = PCI_ANY_ID,
-		.subdevice = PCI_ANY_ID,
-		.class = 0,
-		.class_mask = 0,
-		.driver_data = 0,
-	},
-	{
-		0,
-	}	/* table is NULL-terminated */
-};
-MODULE_DEVICE_TABLE(pci, amd8131_edac_pci_tbl);
-
-static struct pci_driver amd8131_edac_driver = {
-	.name = AMD8131_EDAC_MOD_STR,
-	.probe = amd8131_probe,
-	.remove = amd8131_remove,
-	.id_table = amd8131_edac_pci_tbl,
-};
-
-static int __init amd8131_edac_init(void)
-{
-	printk(KERN_INFO "AMD8131 EDAC driver " AMD8131_EDAC_REVISION "\n");
-	printk(KERN_INFO "\t(c) 2008 Wind River Systems, Inc.\n");
-
-	/* Only POLL mode supported so far */
-	edac_op_state = EDAC_OPSTATE_POLL;
-
-	return pci_register_driver(&amd8131_edac_driver);
-}
-
-static void __exit amd8131_edac_exit(void)
-{
-	pci_unregister_driver(&amd8131_edac_driver);
-}
-
-module_init(amd8131_edac_init);
-module_exit(amd8131_edac_exit);
-
-MODULE_LICENSE("GPL");
-MODULE_AUTHOR("Cao Qingtao <qingtao.cao@windriver.com>");
-MODULE_DESCRIPTION("AMD8131 HyperTransport PCI-X Tunnel EDAC kernel module");
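All four AMD8131 PCI-X bridges report the same PCI device ID, so amd8131_probe() above tells them apart by devfn, walking a table until a NO_BRIDGE sentinel. A small sketch of that lookup in isolation (devfn values and sentinel taken from the removed amd8131_edac.h; the struct and function names here are illustrative):

```c
/* devfn values and the NO_BRIDGE sentinel mirror the removed header */
enum pcix_bridge_inst { NORTH_A, NORTH_B, SOUTH_A, SOUTH_B, NO_BRIDGE };

struct bridge_entry {
	int devfn;
	enum pcix_bridge_inst inst;
};

static const struct bridge_entry bridges[] = {
	{  8, NORTH_A },
	{ 16, NORTH_B },
	{ 24, SOUTH_A },
	{ 32, SOUTH_B },
	{  0, NO_BRIDGE },	/* terminator, like {.inst = NO_BRIDGE,} */
};

/* Same loop shape as amd8131_probe(): scan until the sentinel, so an
 * unknown devfn falls out of the loop holding NO_BRIDGE. */
static enum pcix_bridge_inst bridge_lookup(int devfn)
{
	const struct bridge_entry *b;

	for (b = bridges; b->inst != NO_BRIDGE; b++)
		if (b->devfn == devfn)
			break;
	return b->inst;
}
```

The sentinel entry is what lets the "should never happen" guard in probe/remove be a single comparison after the loop.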
drivers/edac/amd8131_edac.h (-107)
···
-/* SPDX-License-Identifier: GPL-2.0-only */
-/*
- * amd8131_edac.h, EDAC defs for AMD8131 hypertransport chip
- *
- * Copyright (c) 2008 Wind River Systems, Inc.
- *
- * Authors:	Cao Qingtao <qingtao.cao@windriver.com>
- *		Benjamin Walsh <benjamin.walsh@windriver.com>
- *		Hu Yongqi <yongqi.hu@windriver.com>
- */
-
-#ifndef _AMD8131_EDAC_H_
-#define _AMD8131_EDAC_H_
-
-#define DEVFN_PCIX_BRIDGE_NORTH_A	8
-#define DEVFN_PCIX_BRIDGE_NORTH_B	16
-#define DEVFN_PCIX_BRIDGE_SOUTH_A	24
-#define DEVFN_PCIX_BRIDGE_SOUTH_B	32
-
-/************************************************************
- *	PCI-X Bridge Status and Command Register, DevA:0x04
- ************************************************************/
-#define REG_STS_CMD	0x04
-enum sts_cmd_bits {
-	STS_CMD_SSE = BIT(30),
-	STS_CMD_SERREN = BIT(8)
-};
-
-/************************************************************
- *	PCI-X Bridge Interrupt and Bridge Control Register,
- ************************************************************/
-#define REG_INT_CTLR	0x3c
-enum int_ctlr_bits {
-	INT_CTLR_DTSE = BIT(27),
-	INT_CTLR_DTS = BIT(26),
-	INT_CTLR_SERR = BIT(17),
-	INT_CTLR_PERR = BIT(16)
-};
-
-/************************************************************
- *	PCI-X Bridge Memory Base-Limit Register, DevA:0x1C
- ************************************************************/
-#define REG_MEM_LIM	0x1c
-enum mem_limit_bits {
-	MEM_LIMIT_DPE = BIT(31),
-	MEM_LIMIT_RSE = BIT(30),
-	MEM_LIMIT_RMA = BIT(29),
-	MEM_LIMIT_RTA = BIT(28),
-	MEM_LIMIT_STA = BIT(27),
-	MEM_LIMIT_MDPE = BIT(24),
-	MEM_LIMIT_MASK = MEM_LIMIT_DPE|MEM_LIMIT_RSE|MEM_LIMIT_RMA|
-			MEM_LIMIT_RTA|MEM_LIMIT_STA|MEM_LIMIT_MDPE
-};
-
-/************************************************************
- *	Link Configuration And Control Register, side A
- ************************************************************/
-#define REG_LNK_CTRL_A	0xc4
-
-/************************************************************
- *	Link Configuration And Control Register, side B
- ************************************************************/
-#define REG_LNK_CTRL_B	0xc8
-
-enum lnk_ctrl_bits {
-	LNK_CTRL_CRCERR_A = BIT(9),
-	LNK_CTRL_CRCERR_B = BIT(8),
-	LNK_CTRL_CRCFEN = BIT(1)
-};
-
-enum pcix_bridge_inst {
-	NORTH_A = 0,
-	NORTH_B = 1,
-	SOUTH_A = 2,
-	SOUTH_B = 3,
-	NO_BRIDGE = 4
-};
-
-struct amd8131_dev_info {
-	int devfn;
-	enum pcix_bridge_inst inst;
-	struct pci_dev *dev;
-	int edac_idx;	/* pci device index */
-	char *ctl_name;
-	struct edac_pci_ctl_info *edac_dev;
-};
-
-/*
- * AMD8131 chipset has two pairs of PCIX Bridge and related IOAPIC
- * Controller, and ATCA-6101 has two AMD8131 chipsets, so there are
- * four PCIX Bridges on ATCA-6101 altogether.
- *
- * These PCIX Bridges share the same PCI Device ID and are all of
- * Function Zero, they could be discrimated by their pci_dev->devfn.
- * They share the same set of init/check/exit methods, and their
- * private structures are collected in the devices[] array.
- */
-struct amd8131_info {
-	u16 err_dev;	/* PCI Device ID for AMD8131 APIC*/
-	struct amd8131_dev_info *devices;
-	void (*init)(struct amd8131_dev_info *dev_info);
-	void (*exit)(struct amd8131_dev_info *dev_info);
-	void (*check)(struct edac_pci_ctl_info *edac_dev);
-};
-
-#endif /* _AMD8131_EDAC_H_ */
-
+10 -9
drivers/macintosh/via-pmu-led.c
··· 92 92 if (dt == NULL) 93 93 return -ENODEV; 94 94 model = of_get_property(dt, "model", NULL); 95 - if (model == NULL) { 96 - of_node_put(dt); 97 - return -ENODEV; 98 - } 95 + if (!model) 96 + goto put_node; 97 + 99 98 if (strncmp(model, "PowerBook", strlen("PowerBook")) != 0 && 100 99 strncmp(model, "iBook", strlen("iBook")) != 0 && 101 100 strcmp(model, "PowerMac7,2") != 0 && 102 - strcmp(model, "PowerMac7,3") != 0) { 103 - of_node_put(dt); 104 - /* ignore */ 105 - return -ENODEV; 106 - } 101 + strcmp(model, "PowerMac7,3") != 0) 102 + goto put_node; 103 + 107 104 of_node_put(dt); 108 105 109 106 spin_lock_init(&pmu_blink_lock); ··· 109 112 pmu_blink_req.done = pmu_req_done; 110 113 111 114 return led_classdev_register(NULL, &pmu_led); 115 + 116 + put_node: 117 + of_node_put(dt); 118 + return -ENODEV; 112 119 } 113 120 114 121 late_initcall(via_pmu_led_init);
+1 -1
drivers/ps3/ps3-lpm.c
··· 91 91 * struct ps3_lpm_priv - Private lpm device data. 92 92 * 93 93 * @open: An atomic variable indicating the lpm driver has been opened. 94 - * @rights: The lpm rigths granted by the system policy module. A logical 94 + * @rights: The lpm rights granted by the system policy module. A logical 95 95 * OR of enum ps3_lpm_rights. 96 96 * @node_id: The node id of a BE processor whose performance monitor this 97 97 * lpar has the right to use.
+1 -1
drivers/ps3/ps3-sys-manager.c
··· 362 362 * ps3_sys_manager_send_response - Send a 'response' to the system manager. 363 363 * @status: zero = success, others fail. 364 364 * 365 - * The guest sends this message to the system manager to acnowledge success or 365 + * The guest sends this message to the system manager to acknowledge success or 366 366 * failure of a command sent by the system manager. 367 367 */ 368 368
+2 -2
drivers/ps3/ps3-vuart.c
··· 467 467 * 468 468 * If the port is idle on entry as much of the incoming data is written to 469 469 * the port as the port will accept. Otherwise a list buffer is created 470 - * and any remaning incoming data is copied to that buffer. The buffer is 471 - * then enqueued for transmision via the transmit interrupt. 470 + * and any remaining incoming data is copied to that buffer. The buffer is 471 + * then enqueued for transmission via the transmit interrupt. 472 472 */ 473 473 474 474 int ps3_vuart_write(struct ps3_system_bus_device *dev, const void *buf,
+1 -1
drivers/ps3/sys-manager-core.c
··· 12 12 #include <asm/ps3.h> 13 13 14 14 /** 15 - * Staticly linked routines that allow late binding of a loaded sys-manager 15 + * Statically linked routines that allow late binding of a loaded sys-manager 16 16 * module. 17 17 */ 18 18
+84 -1
samples/ftrace/ftrace-direct-modify.c
··· 2 2 #include <linux/module.h> 3 3 #include <linux/kthread.h> 4 4 #include <linux/ftrace.h> 5 - #ifndef CONFIG_ARM64 5 + #if !defined(CONFIG_ARM64) && !defined(CONFIG_PPC32) 6 6 #include <asm/asm-offsets.h> 7 7 #endif 8 8 ··· 198 198 ); 199 199 200 200 #endif /* CONFIG_LOONGARCH */ 201 + 202 + #ifdef CONFIG_PPC 203 + #include <asm/ppc_asm.h> 204 + 205 + #ifdef CONFIG_PPC64 206 + #define STACK_FRAME_SIZE 48 207 + #else 208 + #define STACK_FRAME_SIZE 24 209 + #endif 210 + 211 + #if defined(CONFIG_PPC64_ELF_ABI_V2) && !defined(CONFIG_PPC_KERNEL_PCREL) 212 + #define PPC64_TOC_SAVE_AND_UPDATE \ 213 + " std 2, 24(1)\n" \ 214 + " bcl 20, 31, 1f\n" \ 215 + " 1: mflr 12\n" \ 216 + " ld 2, (99f - 1b)(12)\n" 217 + #define PPC64_TOC_RESTORE \ 218 + " ld 2, 24(1)\n" 219 + #define PPC64_TOC \ 220 + " 99: .quad .TOC.@tocbase\n" 221 + #else 222 + #define PPC64_TOC_SAVE_AND_UPDATE "" 223 + #define PPC64_TOC_RESTORE "" 224 + #define PPC64_TOC "" 225 + #endif 226 + 227 + #ifdef CONFIG_PPC_FTRACE_OUT_OF_LINE 228 + #define PPC_FTRACE_RESTORE_LR \ 229 + PPC_LL" 0, "__stringify(PPC_LR_STKOFF)"(1)\n" \ 230 + " mtlr 0\n" 231 + #define PPC_FTRACE_RET \ 232 + " blr\n" 233 + #else 234 + #define PPC_FTRACE_RESTORE_LR \ 235 + PPC_LL" 0, "__stringify(PPC_LR_STKOFF)"(1)\n" \ 236 + " mtctr 0\n" 237 + #define PPC_FTRACE_RET \ 238 + " mtlr 0\n" \ 239 + " bctr\n" 240 + #endif 241 + 242 + asm ( 243 + " .pushsection .text, \"ax\", @progbits\n" 244 + " .type my_tramp1, @function\n" 245 + " .globl my_tramp1\n" 246 + " my_tramp1:\n" 247 + PPC_STL" 0, "__stringify(PPC_LR_STKOFF)"(1)\n" 248 + PPC_STLU" 1, -"__stringify(STACK_FRAME_MIN_SIZE)"(1)\n" 249 + " mflr 0\n" 250 + PPC_STL" 0, "__stringify(PPC_LR_STKOFF)"(1)\n" 251 + PPC_STLU" 1, -"__stringify(STACK_FRAME_SIZE)"(1)\n" 252 + PPC64_TOC_SAVE_AND_UPDATE 253 + " bl my_direct_func1\n" 254 + PPC64_TOC_RESTORE 255 + " addi 1, 1, "__stringify(STACK_FRAME_SIZE)"\n" 256 + PPC_FTRACE_RESTORE_LR 257 + " addi 1, 1, "__stringify(STACK_FRAME_MIN_SIZE)"\n" 258 + PPC_LL" 0, "__stringify(PPC_LR_STKOFF)"(1)\n" 259 + PPC_FTRACE_RET 260 + " .size my_tramp1, .-my_tramp1\n" 261 + 262 + " .type my_tramp2, @function\n" 263 + " .globl my_tramp2\n" 264 + " my_tramp2:\n" 265 + PPC_STL" 0, "__stringify(PPC_LR_STKOFF)"(1)\n" 266 + PPC_STLU" 1, -"__stringify(STACK_FRAME_MIN_SIZE)"(1)\n" 267 + " mflr 0\n" 268 + PPC_STL" 0, "__stringify(PPC_LR_STKOFF)"(1)\n" 269 + PPC_STLU" 1, -"__stringify(STACK_FRAME_SIZE)"(1)\n" 270 + PPC64_TOC_SAVE_AND_UPDATE 271 + " bl my_direct_func2\n" 272 + PPC64_TOC_RESTORE 273 + " addi 1, 1, "__stringify(STACK_FRAME_SIZE)"\n" 274 + PPC_FTRACE_RESTORE_LR 275 + " addi 1, 1, "__stringify(STACK_FRAME_MIN_SIZE)"\n" 276 + PPC_LL" 0, "__stringify(PPC_LR_STKOFF)"(1)\n" 277 + PPC_FTRACE_RET 278 + PPC64_TOC 279 + " .size my_tramp2, .-my_tramp2\n" 280 + " .popsection\n" 281 + ); 282 + 283 + #endif /* CONFIG_PPC */ 201 284 202 285 static struct ftrace_ops direct; 203 286
+100 -1
samples/ftrace/ftrace-direct-multi-modify.c
··· 2 2 #include <linux/module.h> 3 3 #include <linux/kthread.h> 4 4 #include <linux/ftrace.h> 5 - #ifndef CONFIG_ARM64 5 + #if !defined(CONFIG_ARM64) && !defined(CONFIG_PPC32) 6 6 #include <asm/asm-offsets.h> 7 7 #endif 8 8 ··· 224 224 ); 225 225 226 226 #endif /* CONFIG_LOONGARCH */ 227 + 228 + #ifdef CONFIG_PPC 229 + #include <asm/ppc_asm.h> 230 + 231 + #ifdef CONFIG_PPC64 232 + #define STACK_FRAME_SIZE 48 233 + #else 234 + #define STACK_FRAME_SIZE 24 235 + #endif 236 + 237 + #if defined(CONFIG_PPC64_ELF_ABI_V2) && !defined(CONFIG_PPC_KERNEL_PCREL) 238 + #define PPC64_TOC_SAVE_AND_UPDATE \ 239 + " std 2, 24(1)\n" \ 240 + " bcl 20, 31, 1f\n" \ 241 + " 1: mflr 12\n" \ 242 + " ld 2, (99f - 1b)(12)\n" 243 + #define PPC64_TOC_RESTORE \ 244 + " ld 2, 24(1)\n" 245 + #define PPC64_TOC \ 246 + " 99: .quad .TOC.@tocbase\n" 247 + #else 248 + #define PPC64_TOC_SAVE_AND_UPDATE "" 249 + #define PPC64_TOC_RESTORE "" 250 + #define PPC64_TOC "" 251 + #endif 252 + 253 + #ifdef CONFIG_PPC_FTRACE_OUT_OF_LINE 254 + #define PPC_FTRACE_RESTORE_LR \ 255 + PPC_LL" 0, "__stringify(PPC_LR_STKOFF)"(1)\n" \ 256 + " mtlr 0\n" 257 + #define PPC_FTRACE_RET \ 258 + " blr\n" 259 + #define PPC_FTRACE_RECOVER_IP \ 260 + " lwz 8, 4(3)\n" \ 261 + " li 9, 6\n" \ 262 + " slw 8, 8, 9\n" \ 263 + " sraw 8, 8, 9\n" \ 264 + " add 3, 3, 8\n" \ 265 + " addi 3, 3, 4\n" 266 + #else 267 + #define PPC_FTRACE_RESTORE_LR \ 268 + PPC_LL" 0, "__stringify(PPC_LR_STKOFF)"(1)\n" \ 269 + " mtctr 0\n" 270 + #define PPC_FTRACE_RET \ 271 + " mtlr 0\n" \ 272 + " bctr\n" 273 + #define PPC_FTRACE_RECOVER_IP "" 274 + #endif 275 + 276 + asm ( 277 + " .pushsection .text, \"ax\", @progbits\n" 278 + " .type my_tramp1, @function\n" 279 + " .globl my_tramp1\n" 280 + " my_tramp1:\n" 281 + PPC_STL" 0, "__stringify(PPC_LR_STKOFF)"(1)\n" 282 + PPC_STLU" 1, -"__stringify(STACK_FRAME_MIN_SIZE)"(1)\n" 283 + " mflr 0\n" 284 + PPC_STL" 0, "__stringify(PPC_LR_STKOFF)"(1)\n" 285 + PPC_STLU" 1, -"__stringify(STACK_FRAME_SIZE)"(1)\n" 286 + PPC64_TOC_SAVE_AND_UPDATE 287 + PPC_STL" 3, "__stringify(STACK_FRAME_MIN_SIZE)"(1)\n" 288 + " mr 3, 0\n" 289 + PPC_FTRACE_RECOVER_IP 290 + " bl my_direct_func1\n" 291 + PPC_LL" 3, "__stringify(STACK_FRAME_MIN_SIZE)"(1)\n" 292 + PPC64_TOC_RESTORE 293 + " addi 1, 1, "__stringify(STACK_FRAME_SIZE)"\n" 294 + PPC_FTRACE_RESTORE_LR 295 + " addi 1, 1, "__stringify(STACK_FRAME_MIN_SIZE)"\n" 296 + PPC_LL" 0, "__stringify(PPC_LR_STKOFF)"(1)\n" 297 + PPC_FTRACE_RET 298 + " .size my_tramp1, .-my_tramp1\n" 299 + 300 + " .type my_tramp2, @function\n" 301 + " .globl my_tramp2\n" 302 + " my_tramp2:\n" 303 + PPC_STL" 0, "__stringify(PPC_LR_STKOFF)"(1)\n" 304 + PPC_STLU" 1, -"__stringify(STACK_FRAME_MIN_SIZE)"(1)\n" 305 + " mflr 0\n" 306 + PPC_STL" 0, "__stringify(PPC_LR_STKOFF)"(1)\n" 307 + PPC_STLU" 1, -"__stringify(STACK_FRAME_SIZE)"(1)\n" 308 + PPC64_TOC_SAVE_AND_UPDATE 309 + PPC_STL" 3, "__stringify(STACK_FRAME_MIN_SIZE)"(1)\n" 310 + " mr 3, 0\n" 311 + PPC_FTRACE_RECOVER_IP 312 + " bl my_direct_func2\n" 313 + PPC_LL" 3, "__stringify(STACK_FRAME_MIN_SIZE)"(1)\n" 314 + PPC64_TOC_RESTORE 315 + " addi 1, 1, "__stringify(STACK_FRAME_SIZE)"\n" 316 + PPC_FTRACE_RESTORE_LR 317 + " addi 1, 1, "__stringify(STACK_FRAME_MIN_SIZE)"\n" 318 + PPC_LL" 0, "__stringify(PPC_LR_STKOFF)"(1)\n" 319 + PPC_FTRACE_RET 320 + PPC64_TOC 321 + " .size my_tramp2, .-my_tramp2\n" 322 + " .popsection\n" 323 + ); 324 + 325 + #endif /* CONFIG_PPC */ 227 326 228 327 static unsigned long my_tramp = (unsigned long)my_tramp1; 229 328 static unsigned long tramps[2] = {
+78 -1
samples/ftrace/ftrace-direct-multi.c
··· 4 4 #include <linux/mm.h> /* for handle_mm_fault() */ 5 5 #include <linux/ftrace.h> 6 6 #include <linux/sched/stat.h> 7 - #ifndef CONFIG_ARM64 7 + #if !defined(CONFIG_ARM64) && !defined(CONFIG_PPC32) 8 8 #include <asm/asm-offsets.h> 9 9 #endif 10 10 ··· 140 140 ); 141 141 142 142 #endif /* CONFIG_LOONGARCH */ 143 + 144 + #ifdef CONFIG_PPC 145 + #include <asm/ppc_asm.h> 146 + 147 + #ifdef CONFIG_PPC64 148 + #define STACK_FRAME_SIZE 48 149 + #else 150 + #define STACK_FRAME_SIZE 24 151 + #endif 152 + 153 + #if defined(CONFIG_PPC64_ELF_ABI_V2) && !defined(CONFIG_PPC_KERNEL_PCREL) 154 + #define PPC64_TOC_SAVE_AND_UPDATE \ 155 + " std 2, 24(1)\n" \ 156 + " bcl 20, 31, 1f\n" \ 157 + " 1: mflr 12\n" \ 158 + " ld 2, (99f - 1b)(12)\n" 159 + #define PPC64_TOC_RESTORE \ 160 + " ld 2, 24(1)\n" 161 + #define PPC64_TOC \ 162 + " 99: .quad .TOC.@tocbase\n" 163 + #else 164 + #define PPC64_TOC_SAVE_AND_UPDATE "" 165 + #define PPC64_TOC_RESTORE "" 166 + #define PPC64_TOC "" 167 + #endif 168 + 169 + #ifdef CONFIG_PPC_FTRACE_OUT_OF_LINE 170 + #define PPC_FTRACE_RESTORE_LR \ 171 + PPC_LL" 0, "__stringify(PPC_LR_STKOFF)"(1)\n" \ 172 + " mtlr 0\n" 173 + #define PPC_FTRACE_RET \ 174 + " blr\n" 175 + #define PPC_FTRACE_RECOVER_IP \ 176 + " lwz 8, 4(3)\n" \ 177 + " li 9, 6\n" \ 178 + " slw 8, 8, 9\n" \ 179 + " sraw 8, 8, 9\n" \ 180 + " add 3, 3, 8\n" \ 181 + " addi 3, 3, 4\n" 182 + #else 183 + #define PPC_FTRACE_RESTORE_LR \ 184 + PPC_LL" 0, "__stringify(PPC_LR_STKOFF)"(1)\n" \ 185 + " mtctr 0\n" 186 + #define PPC_FTRACE_RET \ 187 + " mtlr 0\n" \ 188 + " bctr\n" 189 + #define PPC_FTRACE_RECOVER_IP "" 190 + #endif 191 + 192 + asm ( 193 + " .pushsection .text, \"ax\", @progbits\n" 194 + " .type my_tramp, @function\n" 195 + " .globl my_tramp\n" 196 + " my_tramp:\n" 197 + PPC_STL" 0, "__stringify(PPC_LR_STKOFF)"(1)\n" 198 + PPC_STLU" 1, -"__stringify(STACK_FRAME_MIN_SIZE)"(1)\n" 199 + " mflr 0\n" 200 + PPC_STL" 0, "__stringify(PPC_LR_STKOFF)"(1)\n" 201 + PPC_STLU" 1, -"__stringify(STACK_FRAME_SIZE)"(1)\n" 202 + PPC64_TOC_SAVE_AND_UPDATE 203 + PPC_STL" 3, "__stringify(STACK_FRAME_MIN_SIZE)"(1)\n" 204 + " mr 3, 0\n" 205 + PPC_FTRACE_RECOVER_IP 206 + " bl my_direct_func\n" 207 + PPC_LL" 3, "__stringify(STACK_FRAME_MIN_SIZE)"(1)\n" 208 + PPC64_TOC_RESTORE 209 + " addi 1, 1, "__stringify(STACK_FRAME_SIZE)"\n" 210 + PPC_FTRACE_RESTORE_LR 211 + " addi 1, 1, "__stringify(STACK_FRAME_MIN_SIZE)"\n" 212 + PPC_LL" 0, "__stringify(PPC_LR_STKOFF)"(1)\n" 213 + PPC_FTRACE_RET 214 + PPC64_TOC 215 + " .size my_tramp, .-my_tramp\n" 216 + " .popsection\n" 217 + ); 218 + 219 + #endif /* CONFIG_PPC */ 143 220 144 221 static struct ftrace_ops direct; 145 222
+82 -1
samples/ftrace/ftrace-direct-too.c
··· 3 3 4 4 #include <linux/mm.h> /* for handle_mm_fault() */ 5 5 #include <linux/ftrace.h> 6 - #ifndef CONFIG_ARM64 6 + #if !defined(CONFIG_ARM64) && !defined(CONFIG_PPC32) 7 7 #include <asm/asm-offsets.h> 8 8 #endif 9 9 ··· 152 152 ); 153 153 154 154 #endif /* CONFIG_LOONGARCH */ 155 + 156 + #ifdef CONFIG_PPC 157 + #include <asm/ppc_asm.h> 158 + 159 + #ifdef CONFIG_PPC64 160 + #define STACK_FRAME_SIZE 64 161 + #define STACK_FRAME_ARG1 32 162 + #define STACK_FRAME_ARG2 40 163 + #define STACK_FRAME_ARG3 48 164 + #define STACK_FRAME_ARG4 56 165 + #else 166 + #define STACK_FRAME_SIZE 32 167 + #define STACK_FRAME_ARG1 16 168 + #define STACK_FRAME_ARG2 20 169 + #define STACK_FRAME_ARG3 24 170 + #define STACK_FRAME_ARG4 28 171 + #endif 172 + 173 + #if defined(CONFIG_PPC64_ELF_ABI_V2) && !defined(CONFIG_PPC_KERNEL_PCREL) 174 + #define PPC64_TOC_SAVE_AND_UPDATE \ 175 + " std 2, 24(1)\n" \ 176 + " bcl 20, 31, 1f\n" \ 177 + " 1: mflr 12\n" \ 178 + " ld 2, (99f - 1b)(12)\n" 179 + #define PPC64_TOC_RESTORE \ 180 + " ld 2, 24(1)\n" 181 + #define PPC64_TOC \ 182 + " 99: .quad .TOC.@tocbase\n" 183 + #else 184 + #define PPC64_TOC_SAVE_AND_UPDATE "" 185 + #define PPC64_TOC_RESTORE "" 186 + #define PPC64_TOC "" 187 + #endif 188 + 189 + #ifdef CONFIG_PPC_FTRACE_OUT_OF_LINE 190 + #define PPC_FTRACE_RESTORE_LR \ 191 + PPC_LL" 0, "__stringify(PPC_LR_STKOFF)"(1)\n" \ 192 + " mtlr 0\n" 193 + #define PPC_FTRACE_RET \ 194 + " blr\n" 195 + #else 196 + #define PPC_FTRACE_RESTORE_LR \ 197 + PPC_LL" 0, "__stringify(PPC_LR_STKOFF)"(1)\n" \ 198 + " mtctr 0\n" 199 + #define PPC_FTRACE_RET \ 200 + " mtlr 0\n" \ 201 + " bctr\n" 202 + #endif 203 + 204 + asm ( 205 + " .pushsection .text, \"ax\", @progbits\n" 206 + " .type my_tramp, @function\n" 207 + " .globl my_tramp\n" 208 + " my_tramp:\n" 209 + PPC_STL" 0, "__stringify(PPC_LR_STKOFF)"(1)\n" 210 + PPC_STLU" 1, -"__stringify(STACK_FRAME_MIN_SIZE)"(1)\n" 211 + " mflr 0\n" 212 + PPC_STL" 0, "__stringify(PPC_LR_STKOFF)"(1)\n" 213 + PPC_STLU" 1, -"__stringify(STACK_FRAME_SIZE)"(1)\n" 214 + PPC64_TOC_SAVE_AND_UPDATE 215 + PPC_STL" 3, "__stringify(STACK_FRAME_ARG1)"(1)\n" 216 + PPC_STL" 4, "__stringify(STACK_FRAME_ARG2)"(1)\n" 217 + PPC_STL" 5, "__stringify(STACK_FRAME_ARG3)"(1)\n" 218 + PPC_STL" 6, "__stringify(STACK_FRAME_ARG4)"(1)\n" 219 + " bl my_direct_func\n" 220 + PPC_LL" 6, "__stringify(STACK_FRAME_ARG4)"(1)\n" 221 + PPC_LL" 5, "__stringify(STACK_FRAME_ARG3)"(1)\n" 222 + PPC_LL" 4, "__stringify(STACK_FRAME_ARG2)"(1)\n" 223 + PPC_LL" 3, "__stringify(STACK_FRAME_ARG1)"(1)\n" 224 + PPC64_TOC_RESTORE 225 + " addi 1, 1, "__stringify(STACK_FRAME_SIZE)"\n" 226 + PPC_FTRACE_RESTORE_LR 227 + " addi 1, 1, "__stringify(STACK_FRAME_MIN_SIZE)"\n" 228 + PPC_LL" 0, "__stringify(PPC_LR_STKOFF)"(1)\n" 229 + PPC_FTRACE_RET 230 + PPC64_TOC 231 + " .size my_tramp, .-my_tramp\n" 232 + " .popsection\n" 233 + ); 234 + 235 + #endif /* CONFIG_PPC */ 155 236 156 237 static struct ftrace_ops direct; 157 238
+68 -1
samples/ftrace/ftrace-direct.c
··· 3 3 4 4 #include <linux/sched.h> /* for wake_up_process() */ 5 5 #include <linux/ftrace.h> 6 - #ifndef CONFIG_ARM64 6 + #if !defined(CONFIG_ARM64) && !defined(CONFIG_PPC32) 7 7 #include <asm/asm-offsets.h> 8 8 #endif 9 9 ··· 133 133 ); 134 134 135 135 #endif /* CONFIG_LOONGARCH */ 136 + 137 + #ifdef CONFIG_PPC 138 + #include <asm/ppc_asm.h> 139 + 140 + #ifdef CONFIG_PPC64 141 + #define STACK_FRAME_SIZE 48 142 + #else 143 + #define STACK_FRAME_SIZE 24 144 + #endif 145 + 146 + #if defined(CONFIG_PPC64_ELF_ABI_V2) && !defined(CONFIG_PPC_KERNEL_PCREL) 147 + #define PPC64_TOC_SAVE_AND_UPDATE \ 148 + " std 2, 24(1)\n" \ 149 + " bcl 20, 31, 1f\n" \ 150 + " 1: mflr 12\n" \ 151 + " ld 2, (99f - 1b)(12)\n" 152 + #define PPC64_TOC_RESTORE \ 153 + " ld 2, 24(1)\n" 154 + #define PPC64_TOC \ 155 + " 99: .quad .TOC.@tocbase\n" 156 + #else 157 + #define PPC64_TOC_SAVE_AND_UPDATE "" 158 + #define PPC64_TOC_RESTORE "" 159 + #define PPC64_TOC "" 160 + #endif 161 + 162 + #ifdef CONFIG_PPC_FTRACE_OUT_OF_LINE 163 + #define PPC_FTRACE_RESTORE_LR \ 164 + PPC_LL" 0, "__stringify(PPC_LR_STKOFF)"(1)\n" \ 165 + " mtlr 0\n" 166 + #define PPC_FTRACE_RET \ 167 + " blr\n" 168 + #else 169 + #define PPC_FTRACE_RESTORE_LR \ 170 + PPC_LL" 0, "__stringify(PPC_LR_STKOFF)"(1)\n" \ 171 + " mtctr 0\n" 172 + #define PPC_FTRACE_RET \ 173 + " mtlr 0\n" \ 174 + " bctr\n" 175 + #endif 176 + 177 + asm ( 178 + " .pushsection .text, \"ax\", @progbits\n" 179 + " .type my_tramp, @function\n" 180 + " .globl my_tramp\n" 181 + " my_tramp:\n" 182 + PPC_STL" 0, "__stringify(PPC_LR_STKOFF)"(1)\n" 183 + PPC_STLU" 1, -"__stringify(STACK_FRAME_MIN_SIZE)"(1)\n" 184 + " mflr 0\n" 185 + PPC_STL" 0, "__stringify(PPC_LR_STKOFF)"(1)\n" 186 + PPC_STLU" 1, -"__stringify(STACK_FRAME_SIZE)"(1)\n" 187 + PPC64_TOC_SAVE_AND_UPDATE 188 + PPC_STL" 3, "__stringify(STACK_FRAME_MIN_SIZE)"(1)\n" 189 + " bl my_direct_func\n" 190 + PPC_LL" 3, "__stringify(STACK_FRAME_MIN_SIZE)"(1)\n" 191 + PPC64_TOC_RESTORE 192 + " addi 1, 1, "__stringify(STACK_FRAME_SIZE)"\n" 193 + PPC_FTRACE_RESTORE_LR 194 + " addi 1, 1, "__stringify(STACK_FRAME_MIN_SIZE)"\n" 195 + PPC_LL" 0, "__stringify(PPC_LR_STKOFF)"(1)\n" 196 + PPC_FTRACE_RET 197 + PPC64_TOC 198 + " .size my_tramp, .-my_tramp\n" 199 + " .popsection\n" 200 + ); 201 + 202 + #endif /* CONFIG_PPC */ 136 203 137 204 static struct ftrace_ops direct; 138 205
+7
scripts/Makefile.vmlinux
··· 22 22 vmlinux: .vmlinux.export.o 23 23 endif 24 24 25 + ifdef CONFIG_ARCH_WANTS_PRE_LINK_VMLINUX 26 + vmlinux: arch/$(SRCARCH)/tools/vmlinux.arch.o 27 + 28 + arch/$(SRCARCH)/tools/vmlinux.arch.o: vmlinux.o FORCE 29 + $(Q)$(MAKE) $(build)=arch/$(SRCARCH)/tools $@ 30 + endif 31 + 25 32 ARCH_POSTLINK := $(wildcard $(srctree)/arch/$(SRCARCH)/Makefile.postlink) 26 33 27 34 # Final link of vmlinux with optional arch pass after final link
+1
tools/testing/selftests/powerpc/alignment/settings
··· 1 + timeout=130
+1
tools/testing/selftests/powerpc/cache_shape/settings
··· 1 + timeout=130
+1
tools/testing/selftests/powerpc/copyloops/settings
··· 1 + timeout=130
+1
tools/testing/selftests/powerpc/dexcr/settings
··· 1 + timeout=130
+1
tools/testing/selftests/powerpc/dscr/settings
··· 1 + timeout=130
+1
tools/testing/selftests/powerpc/lib/settings
··· 1 + timeout=130
+1
tools/testing/selftests/powerpc/math/settings
··· 1 + timeout=130
+1
tools/testing/selftests/powerpc/mce/settings
··· 1 + timeout=130
+1
tools/testing/selftests/powerpc/mm/settings
··· 1 + timeout=130
+1 -1
tools/testing/selftests/powerpc/mm/stack_expansion_ldst.c
··· 175 175 176 176 page_size = getpagesize(); 177 177 getrlimit(RLIMIT_STACK, &rlimit); 178 - printf("Stack rlimit is 0x%lx\n", rlimit.rlim_cur); 178 + printf("Stack rlimit is 0x%llx\n", (unsigned long long)rlimit.rlim_cur); 179 179 180 180 printf("Testing loads ...\n"); 181 181 test_one_type(LOAD, page_size, rlimit.rlim_cur);
+2 -2
tools/testing/selftests/powerpc/mm/subpage_prot.c
··· 211 211 perror("failed to map file"); 212 212 return 1; 213 213 } 214 - printf("allocated %s for 0x%lx bytes at %p\n", 215 - file_name, filesize, fileblock); 214 + printf("allocated %s for 0x%llx bytes at %p\n", 215 + file_name, (long long)filesize, fileblock); 216 216 217 217 printf("testing file map...\n"); 218 218
+5 -5
tools/testing/selftests/powerpc/mm/tlbie_test.c
··· 313 313 314 314 fclose(f); 315 315 316 - if (nr_anamolies == 0) { 317 - remove(path); 318 - return; 319 - } 320 - 321 316 sprintf(logfile, logfilename, tid); 322 317 strcpy(path, logdir); 323 318 strcat(path, separator); 324 319 strcat(path, logfile); 320 + 321 + if (nr_anamolies == 0) { 322 + remove(path); 323 + return; 324 + } 325 325 326 326 printf("Thread %02d chunk has %d corrupted words. For details check %s\n", 327 327 tid, nr_anamolies, path);
+1
tools/testing/selftests/powerpc/nx-gzip/settings
··· 1 + timeout=130
+1
tools/testing/selftests/powerpc/papr_attributes/settings
··· 1 + timeout=130
+1
tools/testing/selftests/powerpc/papr_sysparm/settings
··· 1 + timeout=130
+1
tools/testing/selftests/powerpc/papr_vpd/settings
··· 1 + timeout=130
-3
tools/testing/selftests/powerpc/pmu/count_stcx_fail.c
··· 144 144 /* Run for 16Bi instructions */ 145 145 FAIL_IF(do_count_loop(events, 16000000000, overhead, true)); 146 146 147 - /* Run for 64Bi instructions */ 148 - FAIL_IF(do_count_loop(events, 64000000000, overhead, true)); 149 - 150 147 event_close(&events[0]); 151 148 event_close(&events[1]); 152 149
+1
tools/testing/selftests/powerpc/pmu/settings
··· 1 + timeout=130
+1
tools/testing/selftests/powerpc/primitives/settings
··· 1 + timeout=130
+1
tools/testing/selftests/powerpc/ptrace/settings
··· 1 + timeout=130
+1
tools/testing/selftests/powerpc/scripts/settings
··· 1 + timeout=130
+4 -4
tools/testing/selftests/powerpc/security/mitigation-patching.sh
··· 36 36 37 37 tainted=$(cat /proc/sys/kernel/tainted) 38 38 if [[ "$tainted" -ne 0 ]]; then 39 - echo "Error: kernel already tainted!" >&2 40 - exit 1 39 + echo "Warning: kernel already tainted! ($tainted)" >&2 41 40 fi 42 41 43 42 mitigations="barrier_nospec stf_barrier count_cache_flush rfi_flush entry_flush uaccess_flush" ··· 67 68 echo "Waiting for timeout ..." 68 69 wait 69 70 71 + orig_tainted=$tainted 70 72 tainted=$(cat /proc/sys/kernel/tainted) 71 - if [[ "$tainted" -ne 0 ]]; then 72 - echo "Error: kernel became tainted!" >&2 73 + if [[ "$tainted" != "$orig_tainted" ]]; then 74 + echo "Error: kernel newly tainted, before ($orig_tainted) after ($tainted)" >&2 73 75 exit 1 74 76 fi 75 77
+1
tools/testing/selftests/powerpc/security/settings
··· 1 + timeout=130
+1 -1
tools/testing/selftests/powerpc/signal/sigfuz.c
··· 321 321 if (!args) 322 322 args = ARG_COMPLETE; 323 323 324 - test_harness(signal_fuzzer, "signal_fuzzer"); 324 + return test_harness(signal_fuzzer, "signal_fuzzer"); 325 325 }
+1
tools/testing/selftests/powerpc/stringloops/settings
··· 1 + timeout=130
+1
tools/testing/selftests/powerpc/switch_endian/settings
··· 1 + timeout=130
+1
tools/testing/selftests/powerpc/syscalls/settings
··· 1 + timeout=130
+1 -1
tools/testing/selftests/powerpc/tm/tm-signal-context-force-tm.c
··· 176 176 177 177 int main(int argc, char **argv) 178 178 { 179 - test_harness(tm_signal_context_force_tm, "tm_signal_context_force_tm"); 179 + return test_harness(tm_signal_context_force_tm, "tm_signal_context_force_tm"); 180 180 }
+1 -2
tools/testing/selftests/powerpc/tm/tm-signal-sigreturn-nt.c
··· 46 46 47 47 int main(int argc, char **argv) 48 48 { 49 - test_harness(tm_signal_sigreturn_nt, "tm_signal_sigreturn_nt"); 49 + return test_harness(tm_signal_sigreturn_nt, "tm_signal_sigreturn_nt"); 50 50 } 51 -
+1
tools/testing/selftests/powerpc/vphn/settings
··· 1 + timeout=130
+1 -2
tools/testing/selftests/vDSO/parse_vdso.c
··· 222 222 ELF(Sym) *sym = &vdso_info.symtab[chain]; 223 223 224 224 /* Check for a defined global or weak function w/ right name. */ 225 - if (ELF64_ST_TYPE(sym->st_info) != STT_FUNC && 226 - ELF64_ST_TYPE(sym->st_info) != STT_NOTYPE) 225 + if (ELF64_ST_TYPE(sym->st_info) != STT_FUNC) 227 226 continue; 228 227 if (ELF64_ST_BIND(sym->st_info) != STB_GLOBAL && 229 228 ELF64_ST_BIND(sym->st_info) != STB_WEAK)