Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'powerpc-6.6-1' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux

Pull powerpc updates from Michael Ellerman:

- Add HOTPLUG_SMT support (/sys/devices/system/cpu/smt) and honour the
configured SMT state when hotplugging CPUs into the system
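  The new interface can be inspected from userspace; a minimal sketch, assuming the generic HOTPLUG_SMT sysfs layout (`control` and `active` files under /sys/devices/system/cpu/smt), which only exists on kernels built with this support:

```shell
# Read the generic SMT control files; they only exist on kernels/arches
# with HOTPLUG_SMT support, so fall back gracefully when absent.
for f in control active; do
  p="/sys/devices/system/cpu/smt/$f"
  if [ -r "$p" ]; then
    printf '%s: %s\n' "$f" "$(cat "$p")"
  else
    printf '%s: not available on this kernel\n' "$f"
  fi
done
```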

- Combine final TLB flush and lazy TLB mm shootdown IPIs when using the
Radix MMU to avoid a broadcast TLBIE flush on exit

- Drop the exclusion between ptrace/perf watchpoints, and drop the now
unused associated arch hooks

- Add support for the "nohlt" command line option to disable CPU idle

- Add support for -fpatchable-function-entry for ftrace, with GCC >=
13.1
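  The Kconfig changes below gate this on a compiler probe (arch/powerpc/tools/gcc-check-fpatchable-function-entry.sh). A rough stand-alone sketch of that kind of probe, assuming only that some C compiler may be on PATH:

```shell
# Probe whether the compiler accepts -fpatchable-function-entry=2, in the
# spirit of the new ARCH_USING_PATCHABLE_FUNCTION_ENTRY Kconfig test.
# This is illustrative; the real check script also verifies the generated code.
CC=${CC:-gcc}
if command -v "$CC" >/dev/null 2>&1 &&
   echo 'void f(void) {}' | "$CC" -Werror -fpatchable-function-entry=2 -S -x c - -o /dev/null 2>/dev/null; then
  echo "supported"
else
  echo "unsupported or compiler unavailable"
fi
```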

- Rework memory block size determination, and support 256MB size on
systems with GPUs that have hotpluggable memory

- Various other small features and fixes

Thanks to Andrew Donnellan, Aneesh Kumar K.V, Arnd Bergmann, Athira
Rajeev, Benjamin Gray, Christophe Leroy, Frederic Barrat, Gautam
Menghani, Geoff Levand, Hari Bathini, Immad Mir, Jialin Zhang, Joel
Stanley, Jordan Niethe, Justin Stitt, Kajol Jain, Kees Cook, Krzysztof
Kozlowski, Laurent Dufour, Liang He, Linus Walleij, Mahesh Salgaonkar,
Masahiro Yamada, Michal Suchanek, Nageswara R Sastry, Nathan Chancellor,
Nathan Lynch, Naveen N Rao, Nicholas Piggin, Nick Desaulniers, Omar
Sandoval, Randy Dunlap, Reza Arbab, Rob Herring, Russell Currey, Sourabh
Jain, Thomas Gleixner, Trevor Woerner, Uwe Kleine-König, Vaibhav Jain,
Xiongfeng Wang, Yuan Tan, Zhang Rui, and Zheng Zengkai.

* tag 'powerpc-6.6-1' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux: (135 commits)
macintosh/ams: linux/platform_device.h is needed
powerpc/xmon: Reapply "Relax frame size for clang"
powerpc/mm/book3s64: Use 256M as the upper limit with coherent device memory attached
powerpc/mm/book3s64: Fix build error with SPARSEMEM disabled
powerpc/iommu: Fix notifiers being shared by PCI and VIO buses
powerpc/mpc5xxx: Add missing fwnode_handle_put()
powerpc/config: Disable SLAB_DEBUG_ON in skiroot
powerpc/pseries: Remove unused hcall tracing instruction
powerpc/pseries: Fix hcall tracepoints with JUMP_LABEL=n
powerpc: dts: add missing space before {
powerpc/eeh: Use pci_dev_id() to simplify the code
powerpc/64s: Move CPU -mtune options into Kconfig
powerpc/powermac: Fix unused function warning
powerpc/pseries: Rework lppaca_shared_proc() to avoid DEBUG_PREEMPT
powerpc: Don't include lppaca.h in paca.h
powerpc/pseries: Move hcall_vphn() prototype into vphn.h
powerpc/pseries: Move VPHN constants into vphn.h
cxl: Drop unused detach_spa()
powerpc: Drop zalloc_maybe_bootmem()
powerpc/powernv: Use struct opal_prd_msg in more places
...

+4120 -3368
+160
Documentation/ABI/testing/sysfs-bus-event_source-devices-hv_gpci
···
 Description:	read only
 		This sysfs file exposes the cpumask which is designated to make
 		HCALLs to retrieve hv-gpci pmu event counter data.
+
+What:		/sys/devices/hv_gpci/interface/processor_bus_topology
+Date:		July 2023
+Contact:	Linux on PowerPC Developer List <linuxppc-dev@lists.ozlabs.org>
+Description:	admin read only
+		This sysfs file exposes the system topology information by making HCALL
+		H_GET_PERF_COUNTER_INFO. The HCALL is made with counter request value
+		PROCESSOR_BUS_TOPOLOGY(0xD0).
+
+		* This sysfs file will be created only for power10 and above platforms.
+
+		* User needs root privileges to read data from this sysfs file.
+
+		* This sysfs file will be created, only when the HCALL returns "H_SUCCESS",
+		  "H_AUTHORITY" or "H_PARAMETER" as the return type.
+
+		  HCALL with return error type "H_AUTHORITY" can be resolved during
+		  runtime by setting "Enable Performance Information Collection" option.
+
+		* The end user reading this sysfs file must decode the content as per
+		  underlying platform/firmware.
+
+		Possible error codes while reading this sysfs file:
+
+		* "-EPERM" : Partition is not permitted to retrieve performance information,
+		  required to set "Enable Performance Information Collection" option.
+
+		* "-EIO" : Can't retrieve system information because of invalid buffer length/invalid address
+		  or because of some hardware error. Refer to getPerfCountInfo documentation for
+		  more information.
+
+		* "-EFBIG" : System information exceeds PAGE_SIZE.
+
+What:		/sys/devices/hv_gpci/interface/processor_config
+Date:		July 2023
+Contact:	Linux on PowerPC Developer List <linuxppc-dev@lists.ozlabs.org>
+Description:	admin read only
+		This sysfs file exposes the system topology information by making HCALL
+		H_GET_PERF_COUNTER_INFO. The HCALL is made with counter request value
+		PROCESSOR_CONFIG(0x90).
+
+		* This sysfs file will be created only for power10 and above platforms.
+
+		* User needs root privileges to read data from this sysfs file.
+
+		* This sysfs file will be created, only when the HCALL returns "H_SUCCESS",
+		  "H_AUTHORITY" or "H_PARAMETER" as the return type.
+
+		  HCALL with return error type "H_AUTHORITY" can be resolved during
+		  runtime by setting "Enable Performance Information Collection" option.
+
+		* The end user reading this sysfs file must decode the content as per
+		  underlying platform/firmware.
+
+		Possible error codes while reading this sysfs file:
+
+		* "-EPERM" : Partition is not permitted to retrieve performance information,
+		  required to set "Enable Performance Information Collection" option.
+
+		* "-EIO" : Can't retrieve system information because of invalid buffer length/invalid address
+		  or because of some hardware error. Refer to getPerfCountInfo documentation for
+		  more information.
+
+		* "-EFBIG" : System information exceeds PAGE_SIZE.
+
+What:		/sys/devices/hv_gpci/interface/affinity_domain_via_virtual_processor
+Date:		July 2023
+Contact:	Linux on PowerPC Developer List <linuxppc-dev@lists.ozlabs.org>
+Description:	admin read only
+		This sysfs file exposes the system topology information by making HCALL
+		H_GET_PERF_COUNTER_INFO. The HCALL is made with counter request value
+		AFFINITY_DOMAIN_INFORMATION_BY_VIRTUAL_PROCESSOR(0xA0).
+
+		* This sysfs file will be created only for power10 and above platforms.
+
+		* User needs root privileges to read data from this sysfs file.
+
+		* This sysfs file will be created, only when the HCALL returns "H_SUCCESS",
+		  "H_AUTHORITY" or "H_PARAMETER" as the return type.
+
+		  HCALL with return error type "H_AUTHORITY" can be resolved during
+		  runtime by setting "Enable Performance Information Collection" option.
+
+		* The end user reading this sysfs file must decode the content as per
+		  underlying platform/firmware.
+
+		Possible error codes while reading this sysfs file:
+
+		* "-EPERM" : Partition is not permitted to retrieve performance information,
+		  required to set "Enable Performance Information Collection" option.
+
+		* "-EIO" : Can't retrieve system information because of invalid buffer length/invalid address
+		  or because of some hardware error. Refer to getPerfCountInfo documentation for
+		  more information.
+
+		* "-EFBIG" : System information exceeds PAGE_SIZE.
+
+What:		/sys/devices/hv_gpci/interface/affinity_domain_via_domain
+Date:		July 2023
+Contact:	Linux on PowerPC Developer List <linuxppc-dev@lists.ozlabs.org>
+Description:	admin read only
+		This sysfs file exposes the system topology information by making HCALL
+		H_GET_PERF_COUNTER_INFO. The HCALL is made with counter request value
+		AFFINITY_DOMAIN_INFORMATION_BY_DOMAIN(0xB0).
+
+		* This sysfs file will be created only for power10 and above platforms.
+
+		* User needs root privileges to read data from this sysfs file.
+
+		* This sysfs file will be created, only when the HCALL returns "H_SUCCESS",
+		  "H_AUTHORITY" or "H_PARAMETER" as the return type.
+
+		  HCALL with return error type "H_AUTHORITY" can be resolved during
+		  runtime by setting "Enable Performance Information Collection" option.
+
+		* The end user reading this sysfs file must decode the content as per
+		  underlying platform/firmware.
+
+		Possible error codes while reading this sysfs file:
+
+		* "-EPERM" : Partition is not permitted to retrieve performance information,
+		  required to set "Enable Performance Information Collection" option.
+
+		* "-EIO" : Can't retrieve system information because of invalid buffer length/invalid address
+		  or because of some hardware error. Refer to getPerfCountInfo documentation for
+		  more information.
+
+		* "-EFBIG" : System information exceeds PAGE_SIZE.
+
+What:		/sys/devices/hv_gpci/interface/affinity_domain_via_partition
+Date:		July 2023
+Contact:	Linux on PowerPC Developer List <linuxppc-dev@lists.ozlabs.org>
+Description:	admin read only
+		This sysfs file exposes the system topology information by making HCALL
+		H_GET_PERF_COUNTER_INFO. The HCALL is made with counter request value
+		AFFINITY_DOMAIN_INFORMATION_BY_PARTITION(0xB1).
+
+		* This sysfs file will be created only for power10 and above platforms.
+
+		* User needs root privileges to read data from this sysfs file.
+
+		* This sysfs file will be created, only when the HCALL returns "H_SUCCESS",
+		  "H_AUTHORITY" or "H_PARAMETER" as the return type.
+
+		  HCALL with return error type "H_AUTHORITY" can be resolved during
+		  runtime by setting "Enable Performance Information Collection" option.
+
+		* The end user reading this sysfs file must decode the content as per
+		  underlying platform/firmware.
+
+		Possible error codes while reading this sysfs file:
+
+		* "-EPERM" : Partition is not permitted to retrieve performance information,
+		  required to set "Enable Performance Information Collection" option.
+
+		* "-EIO" : Can't retrieve system information because of invalid buffer length/invalid address
+		  or because of some hardware error. Refer to getPerfCountInfo documentation for
+		  more information.
+
+		* "-EFBIG" : System information exceeds PAGE_SIZE.
+3 -3
Documentation/admin-guide/kernel-parameters.txt
···
 	nohibernate	[HIBERNATION] Disable hibernation and resume.

-	nohlt		[ARM,ARM64,MICROBLAZE,MIPS,SH] Forces the kernel to
+	nohlt		[ARM,ARM64,MICROBLAZE,MIPS,PPC,SH] Forces the kernel to
 			busy wait in do_idle() and not use the arch_cpu_idle()
 			implementation; requires CONFIG_GENERIC_IDLE_POLL_SETUP
 			to be effective. This is useful on platforms where the
···
 	nosmp		[SMP] Tells an SMP kernel to act as a UP kernel,
 			and disable the IO APIC.  legacy for "maxcpus=0".

-	nosmt		[KNL,MIPS,S390] Disable symmetric multithreading (SMT).
+	nosmt		[KNL,MIPS,PPC,S390] Disable symmetric multithreading (SMT).
 			Equivalent to smt=1.

-			[KNL,X86] Disable symmetric multithreading (SMT).
+			[KNL,X86,PPC] Disable symmetric multithreading (SMT).
 			nosmt=force: Force disable SMT, cannot be undone
 			via the sysfs control file.
+4 -4
Documentation/powerpc/ptrace.rst
···
 that GDB doesn't need to special-case each of them. We added the
 following 3 new ptrace requests.

-1. PTRACE_PPC_GETHWDEBUGINFO
+1. PPC_PTRACE_GETHWDBGINFO
 ============================

 Query for GDB to discover the hardware debug features. The main info to
···
 #define PPC_DEBUG_FEATURE_DATA_BP_DAWR		0x10
 #define PPC_DEBUG_FEATURE_DATA_BP_ARCH_31	0x20

-2. PTRACE_SETHWDEBUG
+2. PPC_PTRACE_SETHWDEBUG

 Sets a hardware breakpoint or watchpoint, according to the provided structure::
···
 are not contemplated, but that is out of the scope of this work.

 ptrace will return an integer (handle) uniquely identifying the breakpoint or
-watchpoint just created. This integer will be used in the PTRACE_DELHWDEBUG
+watchpoint just created. This integer will be used in the PPC_PTRACE_DELHWDEBUG
 request to ask for its removal. Return -ENOSPC if the requested breakpoint
 can't be allocated on the registers.
···
 p.addr2           = (uint64_t) end_range;
 p.condition_value = 0;

-3. PTRACE_DELHWDEBUG
+3. PPC_PTRACE_DELHWDEBUG

 Takes an integer which identifies an existing breakpoint or watchpoint
 (i.e., the value returned from PTRACE_SETHWDEBUG), and deletes the
+14 -9
arch/powerpc/Kconfig
···
 	select DYNAMIC_FTRACE			if FUNCTION_TRACER
 	select EDAC_ATOMIC_SCRUB
 	select EDAC_SUPPORT
+	select FTRACE_MCOUNT_USE_PATCHABLE_FUNCTION_ENTRY if ARCH_USING_PATCHABLE_FUNCTION_ENTRY
 	select GENERIC_ATOMIC64			if PPC32
 	select GENERIC_CLOCKEVENTS_BROADCAST	if SMP
 	select GENERIC_CMOS_UPDATE
···
 	select GENERIC_CPU_VULNERABILITIES	if PPC_BARRIER_NOSPEC
 	select GENERIC_EARLY_IOREMAP
 	select GENERIC_GETTIMEOFDAY
+	select GENERIC_IDLE_POLL_SETUP
 	select GENERIC_IOREMAP
 	select GENERIC_IRQ_SHOW
 	select GENERIC_IRQ_SHOW_LEVEL
···
 	select HAVE_DEBUG_KMEMLEAK
 	select HAVE_DEBUG_STACKOVERFLOW
 	select HAVE_DYNAMIC_FTRACE
-	select HAVE_DYNAMIC_FTRACE_WITH_ARGS	if MPROFILE_KERNEL || PPC32
-	select HAVE_DYNAMIC_FTRACE_WITH_REGS	if MPROFILE_KERNEL || PPC32
+	select HAVE_DYNAMIC_FTRACE_WITH_ARGS	if ARCH_USING_PATCHABLE_FUNCTION_ENTRY || MPROFILE_KERNEL || PPC32
+	select HAVE_DYNAMIC_FTRACE_WITH_REGS	if ARCH_USING_PATCHABLE_FUNCTION_ENTRY || MPROFILE_KERNEL || PPC32
 	select HAVE_EBPF_JIT
 	select HAVE_EFFICIENT_UNALIGNED_ACCESS
 	select HAVE_FAST_GUP
···
 	select HAVE_MOD_ARCH_SPECIFIC
 	select HAVE_NMI				if PERF_EVENTS || (PPC64 && PPC_BOOK3S)
 	select HAVE_OPTPROBES
-	select HAVE_OBJTOOL			if PPC32 || MPROFILE_KERNEL
+	select HAVE_OBJTOOL			if ARCH_USING_PATCHABLE_FUNCTION_ENTRY || MPROFILE_KERNEL || PPC32
 	select HAVE_OBJTOOL_MCOUNT		if HAVE_OBJTOOL
 	select HAVE_PERF_EVENTS
 	select HAVE_PERF_EVENTS_NMI		if PPC64
···
 	select HAVE_SYSCALL_TRACEPOINTS
 	select HAVE_VIRT_CPU_ACCOUNTING
 	select HAVE_VIRT_CPU_ACCOUNTING_GEN
+	select HOTPLUG_SMT			if HOTPLUG_CPU
+	select SMT_NUM_THREADS_DYNAMIC
 	select HUGETLB_PAGE_SIZE_VARIABLE	if PPC_BOOK3S_64 && HUGETLB_PAGE
 	select IOMMU_HELPER			if PPC64
 	select IRQ_DOMAIN
···
 	depends on PPC64_ELF_ABI_V2 && FUNCTION_TRACER
 	def_bool $(success,$(srctree)/arch/powerpc/tools/gcc-check-mprofile-kernel.sh $(CC) -mlittle-endian) if CPU_LITTLE_ENDIAN
 	def_bool $(success,$(srctree)/arch/powerpc/tools/gcc-check-mprofile-kernel.sh $(CC) -mbig-endian) if CPU_BIG_ENDIAN
+
+config ARCH_USING_PATCHABLE_FUNCTION_ENTRY
+	depends on FUNCTION_TRACER && (PPC32 || PPC64_ELF_ABI_V2)
+	depends on $(cc-option,-fpatchable-function-entry=2)
+	def_bool y if PPC32
+	def_bool $(success,$(srctree)/arch/powerpc/tools/gcc-check-fpatchable-function-entry.sh $(CC) -mlittle-endian) if PPC64 && CPU_LITTLE_ENDIAN
+	def_bool $(success,$(srctree)/arch/powerpc/tools/gcc-check-fpatchable-function-entry.sh $(CC) -mbig-endian) if PPC64 && CPU_BIG_ENDIAN

 config HOTPLUG_CPU
 	bool "Support for enabling/disabling CPUs"
···
 	depends on PPC_83xx || QUICC_ENGINE || CPM2
 	help
 	  Freescale General-purpose Timers support
-
-config PCI_8260
-	bool
-	depends on PCI && 8260
-	select PPC_INDIRECT_PCI
-	default y

 config FSL_RIO
 	bool "Freescale Embedded SRIO Controller support"
+6 -3
arch/powerpc/Makefile
···
 CFLAGS-$(CONFIG_PPC32)	+= $(call cc-option,-mno-readonly-in-sdata)

 ifdef CONFIG_FUNCTION_TRACER
+ifdef CONFIG_ARCH_USING_PATCHABLE_FUNCTION_ENTRY
+KBUILD_CPPFLAGS	+= -DCC_USING_PATCHABLE_FUNCTION_ENTRY
+CC_FLAGS_FTRACE := -fpatchable-function-entry=2
+else
 CC_FLAGS_FTRACE := -pg
 ifdef CONFIG_MPROFILE_KERNEL
 CC_FLAGS_FTRACE += -mprofile-kernel
+endif
 endif
 endif

 CFLAGS-$(CONFIG_TARGET_CPU_BOOL) += -mcpu=$(CONFIG_TARGET_CPU)
 AFLAGS-$(CONFIG_TARGET_CPU_BOOL) += -mcpu=$(CONFIG_TARGET_CPU)

-CFLAGS-$(CONFIG_POWERPC64_CPU) += $(call cc-option,-mtune=power10,	\
-				  $(call cc-option,-mtune=power9,	\
-				  $(call cc-option,-mtune=power8)))
+CFLAGS-y += $(CONFIG_TUNE_CPU)

 asinstr := $(call as-instr,lis 9$(comma)foo@high,-DHAVE_AS_ATHIGH=1)
+6 -6
arch/powerpc/boot/dts/fsl/c293si-post.dtsi
···
 		reg = <0x80000 0x20000>;
 		ranges = <0x0 0x80000 0x20000>;

-		jr@1000{
+		jr@1000 {
 			interrupts = <45 2 0 0>;
 		};
-		jr@2000{
+		jr@2000 {
 			interrupts = <57 2 0 0>;
 		};
 	};
···
 		reg = <0xa0000 0x20000>;
 		ranges = <0x0 0xa0000 0x20000>;

-		jr@1000{
+		jr@1000 {
 			interrupts = <49 2 0 0>;
 		};
-		jr@2000{
+		jr@2000 {
 			interrupts = <50 2 0 0>;
 		};
 	};
···
 		reg = <0xc0000 0x20000>;
 		ranges = <0x0 0xc0000 0x20000>;

-		jr@1000{
+		jr@1000 {
 			interrupts = <55 2 0 0>;
 		};
-		jr@2000{
+		jr@2000 {
 			interrupts = <56 2 0 0>;
 		};
 	};
+5 -5
arch/powerpc/boot/dts/fsl/p1022rdk.dts
···
 		compatible = "st,m41t62";
 		reg = <0x68>;
 	};
-	adt7461@4c{
+	adt7461@4c {
 		compatible = "adi,adt7461";
 		reg = <0x4c>;
 	};
-	zl6100@21{
+	zl6100@21 {
 		compatible = "isil,zl6100";
 		reg = <0x21>;
 	};
-	zl6100@24{
+	zl6100@24 {
 		compatible = "isil,zl6100";
 		reg = <0x24>;
 	};
-	zl6100@26{
+	zl6100@26 {
 		compatible = "isil,zl6100";
 		reg = <0x26>;
 	};
-	zl6100@29{
+	zl6100@29 {
 		compatible = "isil,zl6100";
 		reg = <0x29>;
 	};
+1 -1
arch/powerpc/boot/dts/fsl/p1022si-post.dtsi
···
 		fsl,has-rstcr;
 	};

-	power@e0070{
+	power@e0070 {
 		compatible = "fsl,mpc8536-pmc", "fsl,mpc8548-pmc";
 		reg = <0xe0070 0x20>;
 	};
+2 -2
arch/powerpc/boot/dts/fsl/p3041ds.dts
···
 	#size-cells = <2>;
 	interrupt-parent = <&mpic>;

-	aliases{
+	aliases {
 		phy_rgmii_0 = &phy_rgmii_0;
 		phy_rgmii_1 = &phy_rgmii_1;
 		phy_sgmii_1c = &phy_sgmii_1c;
···
 		};
 	};

-	fman@400000{
+	fman@400000 {
 		ethernet@e0000 {
 			phy-handle = <&phy_sgmii_1c>;
 			phy-connection-type = "sgmii";
+1 -1
arch/powerpc/boot/dts/fsl/p5040ds.dts
···
 	#size-cells = <2>;
 	interrupt-parent = <&mpic>;

-	aliases{
+	aliases {
 		phy_sgmii_slot2_1c = &phy_sgmii_slot2_1c;
 		phy_sgmii_slot2_1d = &phy_sgmii_slot2_1d;
 		phy_sgmii_slot2_1e = &phy_sgmii_slot2_1e;
+1 -1
arch/powerpc/boot/dts/fsl/t4240qds.dts
···
 	#size-cells = <2>;
 	interrupt-parent = <&mpic>;

-	aliases{
+	aliases {
 		phy_rgmii1 = &phyrgmii1;
 		phy_rgmii2 = &phyrgmii2;
 		phy_sgmii3 = &phy3;
+1 -1
arch/powerpc/boot/dts/mpc5121.dtsi
···
 	};

 	/* Power Management Controller */
-	pmc@1000{
+	pmc@1000 {
 		compatible = "fsl,mpc5121-pmc";
 		reg = <0x1000 0x100>;
 		interrupts = <83 0x8>;
+1 -1
arch/powerpc/boot/dts/mpc5125twr.dts
···
 		clock-names = "osc";
 	};

-	pmc@1000{ // Power Management Controller
+	pmc@1000 { // Power Management Controller
 		compatible = "fsl,mpc5121-pmc";
 		reg = <0x1000 0x100>;
 		interrupts = <83 0x2>;
+2 -1
arch/powerpc/configs/pmac32_defconfig
···
 # CONFIG_SERIO_I8042 is not set
 # CONFIG_SERIO_SERPORT is not set
 CONFIG_SERIAL_8250=m
-CONFIG_SERIAL_PMACZILOG=m
+CONFIG_SERIAL_PMACZILOG=y
 CONFIG_SERIAL_PMACZILOG_TTYS=y
+CONFIG_SERIAL_PMACZILOG_CONSOLE=y
 CONFIG_NVRAM=y
 CONFIG_I2C_CHARDEV=m
 CONFIG_APM_POWER=y
+3
arch/powerpc/configs/ppc64_defconfig
···
 CONFIG_CRYPTO_WP512=m
 CONFIG_CRYPTO_LZO=m
 CONFIG_CRYPTO_CRC32C_VPMSUM=m
+CONFIG_CRYPTO_CRCT10DIF_VPMSUM=m
+CONFIG_CRYPTO_VPMSUM_TESTER=m
 CONFIG_CRYPTO_MD5_PPC=m
 CONFIG_CRYPTO_SHA1_PPC=m
+CONFIG_CRYPTO_AES_GCM_P10=m
 CONFIG_CRYPTO_DEV_NX=y
 CONFIG_CRYPTO_DEV_NX_ENCRYPT=m
 CONFIG_CRYPTO_DEV_VMX=y
-1
arch/powerpc/configs/ppc6xx_defconfig
···
 CONFIG_IP_NF_FILTER=m
 CONFIG_IP_NF_TARGET_REJECT=m
 CONFIG_IP_NF_MANGLE=m
-CONFIG_IP_NF_TARGET_CLUSTERIP=m
 CONFIG_IP_NF_TARGET_ECN=m
 CONFIG_IP_NF_TARGET_TTL=m
 CONFIG_IP_NF_RAW=m
-1
arch/powerpc/configs/skiroot_defconfig
···
 # CONFIG_XZ_DEC_SPARC is not set
 CONFIG_PRINTK_TIME=y
 CONFIG_MAGIC_SYSRQ=y
-CONFIG_SLUB_DEBUG_ON=y
 CONFIG_SCHED_STACK_END_CHECK=y
 CONFIG_DEBUG_STACKOVERFLOW=y
 CONFIG_PANIC_ON_OOPS=y
+1 -1
arch/powerpc/crypto/Kconfig
···
 	select CRYPTO_LIB_AES
 	select CRYPTO_ALGAPI
 	select CRYPTO_AEAD
-	default m
+	select CRYPTO_SKCIPHER
 	help
 	  AEAD cipher: AES cipher algorithms (FIPS-197)
 	  GCM (Galois/Counter Mode) authenticated encryption mode (NIST SP800-38D)
+2
arch/powerpc/include/asm/8xx_immap.h
···
 	cpm8xx_t	im_cpm;		/* Communication processor */
 } immap_t;

+extern immap_t __iomem *mpc8xx_immr;
+
 #endif /* __IMMAP_8XX__ */
 #endif /* __KERNEL__ */
-1
arch/powerpc/include/asm/Kbuild
···
 generated-y += syscall_table_64.h
 generated-y += syscall_table_spu.h
 generic-y += agp.h
-generic-y += export.h
 generic-y += kvm_types.h
 generic-y += mcs_spinlock.h
 generic-y += qrwlock.h
+56 -71
arch/powerpc/include/asm/book3s/32/kup.h
···

 #ifndef __ASSEMBLY__

-#include <linux/jump_label.h>
-
-extern struct static_key_false disable_kuap_key;
-
-static __always_inline bool kuep_is_disabled(void)
-{
-	return !IS_ENABLED(CONFIG_PPC_KUEP);
-}
-
 #ifdef CONFIG_PPC_KUAP

 #include <linux/sched.h>

 #define KUAP_NONE	(~0UL)
-#define KUAP_ALL	(~1UL)

-static __always_inline bool kuap_is_disabled(void)
-{
-	return static_branch_unlikely(&disable_kuap_key);
-}
-
-static inline void kuap_lock_one(unsigned long addr)
+static __always_inline void kuap_lock_one(unsigned long addr)
 {
 	mtsr(mfsr(addr) | SR_KS, addr);
 	isync();	/* Context sync required after mtsr() */
 }

-static inline void kuap_unlock_one(unsigned long addr)
+static __always_inline void kuap_unlock_one(unsigned long addr)
 {
 	mtsr(mfsr(addr) & ~SR_KS, addr);
 	isync();	/* Context sync required after mtsr() */
 }

-static inline void kuap_lock_all(void)
+static __always_inline void uaccess_begin_32s(unsigned long addr)
 {
-	update_user_segments(mfsr(0) | SR_KS);
-	isync();	/* Context sync required after mtsr() */
+	unsigned long tmp;
+
+	asm volatile(ASM_MMU_FTR_IFSET(
+		"mfsrin %0, %1;"
+		"rlwinm %0, %0, 0, %2;"
+		"mtsrin %0, %1;"
+		"isync", "", %3)
+		: "=&r"(tmp)
+		: "r"(addr), "i"(~SR_KS), "i"(MMU_FTR_KUAP)
+		: "memory");
 }

-static inline void kuap_unlock_all(void)
+static __always_inline void uaccess_end_32s(unsigned long addr)
 {
-	update_user_segments(mfsr(0) & ~SR_KS);
-	isync();	/* Context sync required after mtsr() */
+	unsigned long tmp;
+
+	asm volatile(ASM_MMU_FTR_IFSET(
+		"mfsrin %0, %1;"
+		"oris %0, %0, %2;"
+		"mtsrin %0, %1;"
+		"isync", "", %3)
+		: "=&r"(tmp)
+		: "r"(addr), "i"(SR_KS >> 16), "i"(MMU_FTR_KUAP)
+		: "memory");
 }

-void kuap_lock_all_ool(void);
-void kuap_unlock_all_ool(void);
-
-static inline void kuap_lock_addr(unsigned long addr, bool ool)
-{
-	if (likely(addr != KUAP_ALL))
-		kuap_lock_one(addr);
-	else if (!ool)
-		kuap_lock_all();
-	else
-		kuap_lock_all_ool();
-}
-
-static inline void kuap_unlock(unsigned long addr, bool ool)
-{
-	if (likely(addr != KUAP_ALL))
-		kuap_unlock_one(addr);
-	else if (!ool)
-		kuap_unlock_all();
-	else
-		kuap_unlock_all_ool();
-}
-
-static inline void __kuap_lock(void)
-{
-}
-
-static inline void __kuap_save_and_lock(struct pt_regs *regs)
+static __always_inline void __kuap_save_and_lock(struct pt_regs *regs)
 {
 	unsigned long kuap = current->thread.kuap;

···
 		return;

 	current->thread.kuap = KUAP_NONE;
-	kuap_lock_addr(kuap, false);
+	kuap_lock_one(kuap);
 }
+#define __kuap_save_and_lock __kuap_save_and_lock

-static inline void kuap_user_restore(struct pt_regs *regs)
+static __always_inline void kuap_user_restore(struct pt_regs *regs)
 {
 }

-static inline void __kuap_kernel_restore(struct pt_regs *regs, unsigned long kuap)
+static __always_inline void __kuap_kernel_restore(struct pt_regs *regs, unsigned long kuap)
 {
 	if (unlikely(kuap != KUAP_NONE)) {
 		current->thread.kuap = KUAP_NONE;
-		kuap_lock_addr(kuap, false);
+		kuap_lock_one(kuap);
 	}

 	if (likely(regs->kuap == KUAP_NONE))
···

 	current->thread.kuap = regs->kuap;

-	kuap_unlock(regs->kuap, false);
+	kuap_unlock_one(regs->kuap);
 }

-static inline unsigned long __kuap_get_and_assert_locked(void)
+static __always_inline unsigned long __kuap_get_and_assert_locked(void)
 {
 	unsigned long kuap = current->thread.kuap;

···

 	return kuap;
 }
+#define __kuap_get_and_assert_locked __kuap_get_and_assert_locked

-static __always_inline void __allow_user_access(void __user *to, const void __user *from,
-						u32 size, unsigned long dir)
+static __always_inline void allow_user_access(void __user *to, const void __user *from,
+					      u32 size, unsigned long dir)
 {
 	BUILD_BUG_ON(!__builtin_constant_p(dir));

···
 		return;

 	current->thread.kuap = (__force u32)to;
-	kuap_unlock_one((__force u32)to);
+	uaccess_begin_32s((__force u32)to);
 }

-static __always_inline void __prevent_user_access(unsigned long dir)
+static __always_inline void prevent_user_access(unsigned long dir)
 {
 	u32 kuap = current->thread.kuap;

···
 		return;

 	current->thread.kuap = KUAP_NONE;
-	kuap_lock_addr(kuap, true);
+	uaccess_end_32s(kuap);
 }

-static inline unsigned long __prevent_user_access_return(void)
+static __always_inline unsigned long prevent_user_access_return(void)
 {
 	unsigned long flags = current->thread.kuap;

 	if (flags != KUAP_NONE) {
 		current->thread.kuap = KUAP_NONE;
-		kuap_lock_addr(flags, true);
+		uaccess_end_32s(flags);
 	}

 	return flags;
 }

-static inline void __restore_user_access(unsigned long flags)
+static __always_inline void restore_user_access(unsigned long flags)
 {
 	if (flags != KUAP_NONE) {
 		current->thread.kuap = flags;
-		kuap_unlock(flags, true);
+		uaccess_begin_32s(flags);
 	}
 }

-static inline bool
+static __always_inline bool
 __bad_kuap_fault(struct pt_regs *regs, unsigned long address, bool is_write)
 {
 	unsigned long kuap = regs->kuap;

-	if (!is_write || kuap == KUAP_ALL)
+	if (!is_write)
 		return false;
 	if (kuap == KUAP_NONE)
 		return true;

-	/* If faulting address doesn't match unlocked segment, unlock all */
-	if ((kuap ^ address) & 0xf0000000)
-		regs->kuap = KUAP_ALL;
+	/*
+	 * If faulting address doesn't match unlocked segment, change segment.
+	 * In case of unaligned store crossing two segments, emulate store.
+	 */
+	if ((kuap ^ address) & 0xf0000000) {
+		if (!(kuap & 0x0fffffff) && address > kuap - 4 && fix_alignment(regs)) {
+			regs_add_return_ip(regs, 4);
+			emulate_single_step(regs);
+		} else {
+			regs->kuap = address;
+		}
+	}

 	return false;
 }
+31 -46
arch/powerpc/include/asm/book3s/32/pgtable.h
···


 /* This low level function performs the actual PTE insertion
- * Setting the PTE depends on the MMU type and other factors. It's
- * an horrible mess that I'm not going to try to clean up now but
- * I'm keeping it in one place rather than spread around
+ * Setting the PTE depends on the MMU type and other factors.
+ *
+ * First case is 32-bit in UP mode with 32-bit PTEs, we need to preserve
+ * the _PAGE_HASHPTE bit since we may not have invalidated the previous
+ * translation in the hash yet (done in a subsequent flush_tlb_xxx())
+ * and see we need to keep track that this PTE needs invalidating.
+ *
+ * Second case is 32-bit with 64-bit PTE. In this case, we
+ * can just store as long as we do the two halves in the right order
+ * with a barrier in between. This is possible because we take care,
+ * in the hash code, to pre-invalidate if the PTE was already hashed,
+ * which synchronizes us with any concurrent invalidation.
+ * In the percpu case, we fallback to the simple update preserving
+ * the hash bits (ie, same as the non-SMP case).
+ *
+ * Third case is 32-bit in SMP mode with 32-bit PTEs. We use the
+ * helper pte_update() which does an atomic update. We need to do that
+ * because a concurrent invalidation can clear _PAGE_HASHPTE. If it's a
+ * per-CPU PTE such as a kmap_atomic, we also do a simple update preserving
+ * the hash bits instead.
  */
 static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr,
				pte_t *ptep, pte_t pte, int percpu)
 {
-#if defined(CONFIG_SMP) && !defined(CONFIG_PTE_64BIT)
-	/* First case is 32-bit Hash MMU in SMP mode with 32-bit PTEs. We use the
-	 * helper pte_update() which does an atomic update. We need to do that
-	 * because a concurrent invalidation can clear _PAGE_HASHPTE. If it's a
-	 * per-CPU PTE such as a kmap_atomic, we do a simple update preserving
-	 * the hash bits instead (ie, same as the non-SMP case)
-	 */
-	if (percpu)
-		*ptep = __pte((pte_val(*ptep) & _PAGE_HASHPTE)
-			      | (pte_val(pte) & ~_PAGE_HASHPTE));
-	else
+	if ((!IS_ENABLED(CONFIG_SMP) && !IS_ENABLED(CONFIG_PTE_64BIT)) || percpu) {
+		*ptep = __pte((pte_val(*ptep) & _PAGE_HASHPTE) |
+			      (pte_val(pte) & ~_PAGE_HASHPTE));
+	} else if (IS_ENABLED(CONFIG_PTE_64BIT)) {
+		if (pte_val(*ptep) & _PAGE_HASHPTE)
+			flush_hash_entry(mm, ptep, addr);
+
+		asm volatile("stw%X0 %2,%0; eieio; stw%X1 %L2,%1" :
+			     "=m" (*ptep), "=m" (*((unsigned char *)ptep+4)) :
+			     "r" (pte) : "memory");
+	} else {
 		pte_update(mm, addr, ptep, ~_PAGE_HASHPTE, pte_val(pte), 0);
-
-#elif defined(CONFIG_PTE_64BIT)
-	/* Second case is 32-bit with 64-bit PTE. In this case, we
-	 * can just store as long as we do the two halves in the right order
-	 * with a barrier in between. This is possible because we take care,
-	 * in the hash code, to pre-invalidate if the PTE was already hashed,
-	 * which synchronizes us with any concurrent invalidation.
-	 * In the percpu case, we also fallback to the simple update preserving
-	 * the hash bits
-	 */
-	if (percpu) {
-		*ptep = __pte((pte_val(*ptep) & _PAGE_HASHPTE)
-			      | (pte_val(pte) & ~_PAGE_HASHPTE));
-		return;
 	}
-	if (pte_val(*ptep) & _PAGE_HASHPTE)
-		flush_hash_entry(mm, ptep, addr);
-	__asm__ __volatile__("\
-		stw%X0 %2,%0\n\
-		eieio\n\
-		stw%X1 %L2,%1"
-	: "=m" (*ptep), "=m" (*((unsigned char *)ptep+4))
-	: "r" (pte) : "memory");
-
-#else
-	/* Third case is 32-bit hash table in UP mode, we need to preserve
-	 * the _PAGE_HASHPTE bit since we may not have invalidated the previous
-	 * translation in the hash yet (done in a subsequent flush_tlb_xxx())
-	 * and see we need to keep track that this PTE needs invalidating
-	 */
-	*ptep = __pte((pte_val(*ptep) & _PAGE_HASHPTE)
-		      | (pte_val(pte) & ~_PAGE_HASHPTE));
-#endif
 }

 /*
+1 -1
arch/powerpc/include/asm/book3s/64/hash-pkey.h
··· 24 24 ((pteflags & H_PTE_PKEY_BIT1) ? HPTE_R_KEY_BIT1 : 0x0UL) | 25 25 ((pteflags & H_PTE_PKEY_BIT0) ? HPTE_R_KEY_BIT0 : 0x0UL)); 26 26 27 - if (mmu_has_feature(MMU_FTR_BOOK3S_KUAP) || 27 + if (mmu_has_feature(MMU_FTR_KUAP) || 28 28 mmu_has_feature(MMU_FTR_BOOK3S_KUEP)) { 29 29 if ((pte_pkey == 0) && (flags & HPTE_USE_KERNEL_KEY)) 30 30 return HASH_DEFAULT_KERNEL_KEY;
+22 -32
arch/powerpc/include/asm/book3s/64/kup.h
··· 31 31 mfspr \gpr2, SPRN_AMR 32 32 cmpd \gpr1, \gpr2 33 33 beq 99f 34 - END_MMU_FTR_SECTION_NESTED_IFCLR(MMU_FTR_BOOK3S_KUAP, 68) 34 + END_MMU_FTR_SECTION_NESTED_IFCLR(MMU_FTR_KUAP, 68) 35 35 36 36 isync 37 37 mtspr SPRN_AMR, \gpr1 ··· 78 78 * No need to restore IAMR when returning to kernel space. 79 79 */ 80 80 100: 81 - END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_BOOK3S_KUAP, 67) 81 + END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_KUAP, 67) 82 82 #endif 83 83 .endm 84 84 ··· 91 91 LOAD_REG_IMMEDIATE(\gpr2, AMR_KUAP_BLOCKED) 92 92 999: tdne \gpr1, \gpr2 93 93 EMIT_WARN_ENTRY 999b, __FILE__, __LINE__, (BUGFLAG_WARNING | BUGFLAG_ONCE) 94 - END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_BOOK3S_KUAP, 67) 94 + END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_KUAP, 67) 95 95 #endif 96 96 .endm 97 97 #endif ··· 130 130 */ 131 131 BEGIN_MMU_FTR_SECTION_NESTED(68) 132 132 b 100f // skip_save_amr 133 - END_MMU_FTR_SECTION_NESTED_IFCLR(MMU_FTR_PKEY | MMU_FTR_BOOK3S_KUAP, 68) 133 + END_MMU_FTR_SECTION_NESTED_IFCLR(MMU_FTR_PKEY | MMU_FTR_KUAP, 68) 134 134 135 135 /* 136 136 * if pkey is disabled and we are entering from userspace ··· 166 166 mtspr SPRN_AMR, \gpr2 167 167 isync 168 168 102: 169 - END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_BOOK3S_KUAP, 69) 169 + END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_KUAP, 69) 170 170 171 171 /* 172 172 * if entering from kernel we don't need save IAMR ··· 213 213 * access restrictions. Because of this ignore AMR value when accessing 214 214 * userspace via kernel thread. 
215 215 */ 216 - static inline u64 current_thread_amr(void) 216 + static __always_inline u64 current_thread_amr(void) 217 217 { 218 218 if (current->thread.regs) 219 219 return current->thread.regs->amr; 220 220 return default_amr; 221 221 } 222 222 223 - static inline u64 current_thread_iamr(void) 223 + static __always_inline u64 current_thread_iamr(void) 224 224 { 225 225 if (current->thread.regs) 226 226 return current->thread.regs->iamr; ··· 230 230 231 231 #ifdef CONFIG_PPC_KUAP 232 232 233 - static __always_inline bool kuap_is_disabled(void) 234 - { 235 - return !mmu_has_feature(MMU_FTR_BOOK3S_KUAP); 236 - } 237 - 238 - static inline void kuap_user_restore(struct pt_regs *regs) 233 + static __always_inline void kuap_user_restore(struct pt_regs *regs) 239 234 { 240 235 bool restore_amr = false, restore_iamr = false; 241 236 unsigned long amr, iamr; ··· 238 243 if (!mmu_has_feature(MMU_FTR_PKEY)) 239 244 return; 240 245 241 - if (!mmu_has_feature(MMU_FTR_BOOK3S_KUAP)) { 246 + if (!mmu_has_feature(MMU_FTR_KUAP)) { 242 247 amr = mfspr(SPRN_AMR); 243 248 if (amr != regs->amr) 244 249 restore_amr = true; ··· 269 274 */ 270 275 } 271 276 272 - static inline void __kuap_kernel_restore(struct pt_regs *regs, unsigned long amr) 277 + static __always_inline void __kuap_kernel_restore(struct pt_regs *regs, unsigned long amr) 273 278 { 274 279 if (likely(regs->amr == amr)) 275 280 return; ··· 285 290 */ 286 291 } 287 292 288 - static inline unsigned long __kuap_get_and_assert_locked(void) 293 + static __always_inline unsigned long __kuap_get_and_assert_locked(void) 289 294 { 290 295 unsigned long amr = mfspr(SPRN_AMR); 291 296 ··· 293 298 WARN_ON_ONCE(amr != AMR_KUAP_BLOCKED); 294 299 return amr; 295 300 } 301 + #define __kuap_get_and_assert_locked __kuap_get_and_assert_locked 296 302 297 - /* Do nothing, book3s/64 does that in ASM */ 298 - static inline void __kuap_lock(void) 299 - { 300 - } 301 - 302 - static inline void __kuap_save_and_lock(struct pt_regs *regs) 303 - { 
304 - } 303 + /* __kuap_lock() not required, book3s/64 does that in ASM */ 305 304 306 305 /* 307 306 * We support individually allowing read or write, but we don't support nesting 308 307 * because that would require an expensive read/modify write of the AMR. 309 308 */ 310 309 311 - static inline unsigned long get_kuap(void) 310 + static __always_inline unsigned long get_kuap(void) 312 311 { 313 312 /* 314 313 * We return AMR_KUAP_BLOCKED when we don't support KUAP because ··· 312 323 * This has no effect in terms of actually blocking things on hash, 313 324 * so it doesn't break anything. 314 325 */ 315 - if (!mmu_has_feature(MMU_FTR_BOOK3S_KUAP)) 326 + if (!mmu_has_feature(MMU_FTR_KUAP)) 316 327 return AMR_KUAP_BLOCKED; 317 328 318 329 return mfspr(SPRN_AMR); ··· 320 331 321 332 static __always_inline void set_kuap(unsigned long value) 322 333 { 323 - if (!mmu_has_feature(MMU_FTR_BOOK3S_KUAP)) 334 + if (!mmu_has_feature(MMU_FTR_KUAP)) 324 335 return; 325 336 326 337 /* ··· 332 343 isync(); 333 344 } 334 345 335 - static inline bool __bad_kuap_fault(struct pt_regs *regs, unsigned long address, bool is_write) 346 + static __always_inline bool 347 + __bad_kuap_fault(struct pt_regs *regs, unsigned long address, bool is_write) 336 348 { 337 349 /* 338 350 * For radix this will be a storage protection fault (DSISR_PROTFAULT). 
··· 376 386 377 387 #else /* CONFIG_PPC_KUAP */ 378 388 379 - static inline unsigned long get_kuap(void) 389 + static __always_inline unsigned long get_kuap(void) 380 390 { 381 391 return AMR_KUAP_BLOCKED; 382 392 } 383 393 384 - static inline void set_kuap(unsigned long value) { } 394 + static __always_inline void set_kuap(unsigned long value) { } 385 395 386 396 static __always_inline void allow_user_access(void __user *to, const void __user *from, 387 397 unsigned long size, unsigned long dir) ··· 396 406 do_uaccess_flush(); 397 407 } 398 408 399 - static inline unsigned long prevent_user_access_return(void) 409 + static __always_inline unsigned long prevent_user_access_return(void) 400 410 { 401 411 unsigned long flags = get_kuap(); 402 412 ··· 407 417 return flags; 408 418 } 409 419 410 - static inline void restore_user_access(unsigned long flags) 420 + static __always_inline void restore_user_access(unsigned long flags) 411 421 { 412 422 set_kuap(flags); 413 423 if (static_branch_unlikely(&uaccess_flush_key) && flags == AMR_KUAP_BLOCKED)
+2 -5
arch/powerpc/include/asm/book3s/64/mmu.h
··· 71 71 /* Base PID to allocate from */ 72 72 extern unsigned int mmu_base_pid; 73 73 74 - /* 75 - * memory block size used with radix translation. 76 - */ 77 - extern unsigned long __ro_after_init radix_mem_block_size; 74 + extern unsigned long __ro_after_init memory_block_size; 78 75 79 76 #define PRTB_SIZE_SHIFT (mmu_pid_bits + 4) 80 77 #define PRTB_ENTRIES (1ul << mmu_pid_bits) ··· 258 261 #define arch_clear_mm_cpumask_cpu(cpu, mm) \ 259 262 do { \ 260 263 if (cpumask_test_cpu(cpu, mm_cpumask(mm))) { \ 261 - atomic_dec(&(mm)->context.active_cpus); \ 264 + dec_mm_active_cpus(mm); \ 262 265 cpumask_clear_cpu(cpu, mm_cpumask(mm)); \ 263 266 } \ 264 267 } while (0)
+1
arch/powerpc/include/asm/bug.h
··· 120 120 struct pt_regs; 121 121 void hash__do_page_fault(struct pt_regs *); 122 122 void bad_page_fault(struct pt_regs *, int); 123 + void emulate_single_step(struct pt_regs *regs); 123 124 extern void _exception(int, struct pt_regs *, int, unsigned long); 124 125 extern void _exception_pkey(struct pt_regs *, unsigned long, int); 125 126 extern void die(const char *, struct pt_regs *, long);
+3
arch/powerpc/include/asm/cpm2.h
··· 1080 1080 #define FCC2_MEM_OFFSET FCC_MEM_OFFSET(1) 1081 1081 #define FCC3_MEM_OFFSET FCC_MEM_OFFSET(2) 1082 1082 1083 + /* Pipeline Maximum Depth */ 1084 + #define MPC82XX_BCR_PLDP 0x00800000 1085 + 1083 1086 /* Clocks and GRG's */ 1084 1087 1085 1088 enum cpm_clk_dir {
+1 -1
arch/powerpc/include/asm/cputable.h
··· 252 252 * This is also required by 52xx family. 253 253 */ 254 254 #if defined(CONFIG_SMP) || defined(CONFIG_MPC10X_BRIDGE) \ 255 - || defined(CONFIG_PPC_83xx) || defined(CONFIG_8260) \ 255 + || defined(CONFIG_PPC_83xx) || defined(CONFIG_PPC_82xx) \ 256 256 || defined(CONFIG_PPC_MPC52xx) 257 257 #define CPU_FTR_COMMON CPU_FTR_NEED_COHERENT 258 258 #else
-1
arch/powerpc/include/asm/dtl.h
··· 39 39 40 40 extern void register_dtl_buffer(int cpu); 41 41 extern void alloc_dtl_buffers(unsigned long *time_limit); 42 - extern long hcall_vphn(unsigned long cpu, u64 flags, __be32 *associativity); 43 42 44 43 #endif /* _ASM_POWERPC_DTL_H */
+1
arch/powerpc/include/asm/feature-fixups.h
··· 292 292 extern long __start__btb_flush_fixup, __stop__btb_flush_fixup; 293 293 294 294 void apply_feature_fixups(void); 295 + void update_mmu_feature_fixups(unsigned long mask); 295 296 void setup_feature_keys(void); 296 297 #endif 297 298
-22
arch/powerpc/include/asm/fs_pd.h
··· 14 14 #include <sysdev/fsl_soc.h> 15 15 #include <asm/time.h> 16 16 17 - #ifdef CONFIG_CPM2 18 - #include <asm/cpm2.h> 19 - 20 - #if defined(CONFIG_8260) 21 - #include <asm/mpc8260.h> 22 - #endif 23 - 24 - #define cpm2_map(member) (&cpm2_immr->member) 25 - #define cpm2_map_size(member, size) (&cpm2_immr->member) 26 - #define cpm2_unmap(addr) do {} while(0) 27 - #endif 28 - 29 - #ifdef CONFIG_PPC_8xx 30 - #include <asm/8xx_immap.h> 31 - 32 - extern immap_t __iomem *mpc8xx_immr; 33 - 34 - #define immr_map(member) (&mpc8xx_immr->member) 35 - #define immr_map_size(member, size) (&mpc8xx_immr->member) 36 - #define immr_unmap(addr) do {} while (0) 37 - #endif 38 - 39 17 static inline int uart_baudrate(void) 40 18 { 41 19 return get_baudrate();
+18 -6
arch/powerpc/include/asm/ftrace.h
··· 11 11 #define HAVE_FUNCTION_GRAPH_RET_ADDR_PTR 12 12 13 13 /* Ignore unused weak functions which will have larger offsets */ 14 - #ifdef CONFIG_MPROFILE_KERNEL 15 - #define FTRACE_MCOUNT_MAX_OFFSET 12 14 + #if defined(CONFIG_MPROFILE_KERNEL) || defined(CONFIG_ARCH_USING_PATCHABLE_FUNCTION_ENTRY) 15 + #define FTRACE_MCOUNT_MAX_OFFSET 16 16 16 #elif defined(CONFIG_PPC32) 17 17 #define FTRACE_MCOUNT_MAX_OFFSET 8 18 18 #endif ··· 22 22 23 23 static inline unsigned long ftrace_call_adjust(unsigned long addr) 24 24 { 25 - /* relocation of mcount call site is the same as the address */ 25 + if (IS_ENABLED(CONFIG_ARCH_USING_PATCHABLE_FUNCTION_ENTRY)) 26 + addr += MCOUNT_INSN_SIZE; 27 + 26 28 return addr; 27 29 } 28 30 29 31 unsigned long prepare_ftrace_return(unsigned long parent, unsigned long ip, 30 32 unsigned long sp); 31 33 34 + struct module; 35 + struct dyn_ftrace; 32 36 struct dyn_arch_ftrace { 33 37 struct module *mod; 34 38 }; 35 39 36 40 #ifdef CONFIG_DYNAMIC_FTRACE_WITH_ARGS 41 + #define ftrace_need_init_nop() (true) 42 + int ftrace_init_nop(struct module *mod, struct dyn_ftrace *rec); 43 + #define ftrace_init_nop ftrace_init_nop 44 + 37 45 struct ftrace_regs { 38 46 struct pt_regs regs; 39 47 }; ··· 132 124 { 133 125 return get_paca()->ftrace_enabled; 134 126 } 135 - 136 - void ftrace_free_init_tramp(void); 137 127 #else /* CONFIG_PPC64 */ 138 128 static inline void this_cpu_disable_ftrace(void) { } 139 129 static inline void this_cpu_enable_ftrace(void) { } 140 130 static inline void this_cpu_set_ftrace_enabled(u8 ftrace_enabled) { } 141 131 static inline u8 this_cpu_get_ftrace_enabled(void) { return 1; } 142 - static inline void ftrace_free_init_tramp(void) { } 143 132 #endif /* CONFIG_PPC64 */ 133 + 134 + #ifdef CONFIG_FUNCTION_TRACER 135 + extern unsigned int ftrace_tramp_text[], ftrace_tramp_init[]; 136 + void ftrace_free_init_tramp(void); 137 + #else 138 + static inline void ftrace_free_init_tramp(void) { } 139 + #endif 144 140 #endif /* 
!__ASSEMBLY__ */ 145 141 146 142 #endif /* _ASM_POWERPC_FTRACE */
+1
arch/powerpc/include/asm/hw_breakpoint.h
··· 18 18 u16 len; /* length of the target data symbol */ 19 19 u16 hw_len; /* length programmed in hw */ 20 20 u8 flags; 21 + bool perf_single_step; /* temporarily uninstalled for a perf single step */ 21 22 }; 22 23 23 24 /* Note: Don't change the first 6 bits below as they are in the same order
+2
arch/powerpc/include/asm/ibmebus.h
··· 46 46 #include <linux/of_device.h> 47 47 #include <linux/of_platform.h> 48 48 49 + struct platform_driver; 50 + 49 51 extern struct bus_type ibmebus_bus_type; 50 52 51 53 int ibmebus_register_driver(struct platform_driver *drv);
+3
arch/powerpc/include/asm/iommu.h
··· 28 28 #define IOMMU_PAGE_MASK(tblptr) (~((1 << (tblptr)->it_page_shift) - 1)) 29 29 #define IOMMU_PAGE_ALIGN(addr, tblptr) ALIGN(addr, IOMMU_PAGE_SIZE(tblptr)) 30 30 31 + #define DIRECT64_PROPNAME "linux,direct64-ddr-window-info" 32 + #define DMA64_PROPNAME "linux,dma64-ddr-window-info" 33 + 31 34 /* Boot time flags */ 32 35 extern int iommu_is_off; 33 36 extern int iommu_force_on;
+1 -1
arch/powerpc/include/asm/kfence.h
··· 23 23 #ifdef CONFIG_PPC64 24 24 static inline bool kfence_protect_page(unsigned long addr, bool protect) 25 25 { 26 - struct page *page = virt_to_page(addr); 26 + struct page *page = virt_to_page((void *)addr); 27 27 28 28 __kernel_map_pages(page, 1, !protect); 29 29
+31 -60
arch/powerpc/include/asm/kup.h
··· 6 6 #define KUAP_WRITE 2 7 7 #define KUAP_READ_WRITE (KUAP_READ | KUAP_WRITE) 8 8 9 + #ifndef __ASSEMBLY__ 10 + #include <linux/types.h> 11 + 12 + static __always_inline bool kuap_is_disabled(void); 13 + #endif 14 + 9 15 #ifdef CONFIG_PPC_BOOK3S_64 10 16 #include <asm/book3s/64/kup.h> 11 17 #endif ··· 47 41 48 42 #ifdef CONFIG_PPC_KUAP 49 43 void setup_kuap(bool disabled); 44 + 45 + static __always_inline bool kuap_is_disabled(void) 46 + { 47 + return !mmu_has_feature(MMU_FTR_KUAP); 48 + } 50 49 #else 51 50 static inline void setup_kuap(bool disabled) { } 52 51 53 52 static __always_inline bool kuap_is_disabled(void) { return true; } 54 53 55 - static inline bool 54 + static __always_inline bool 56 55 __bad_kuap_fault(struct pt_regs *regs, unsigned long address, bool is_write) 57 56 { 58 57 return false; 59 58 } 60 59 61 - static inline void __kuap_lock(void) { } 62 - static inline void __kuap_save_and_lock(struct pt_regs *regs) { } 63 - static inline void kuap_user_restore(struct pt_regs *regs) { } 64 - static inline void __kuap_kernel_restore(struct pt_regs *regs, unsigned long amr) { } 65 - 66 - static inline unsigned long __kuap_get_and_assert_locked(void) 67 - { 68 - return 0; 69 - } 60 + static __always_inline void kuap_user_restore(struct pt_regs *regs) { } 61 + static __always_inline void __kuap_kernel_restore(struct pt_regs *regs, unsigned long amr) { } 70 62 71 63 /* 72 64 * book3s/64/kup-radix.h defines these functions for the !KUAP case to flush ··· 72 68 * platforms. 
73 69 */ 74 70 #ifndef CONFIG_PPC_BOOK3S_64 75 - static inline void __allow_user_access(void __user *to, const void __user *from, 76 - unsigned long size, unsigned long dir) { } 77 - static inline void __prevent_user_access(unsigned long dir) { } 78 - static inline unsigned long __prevent_user_access_return(void) { return 0UL; } 79 - static inline void __restore_user_access(unsigned long flags) { } 71 + static __always_inline void allow_user_access(void __user *to, const void __user *from, 72 + unsigned long size, unsigned long dir) { } 73 + static __always_inline void prevent_user_access(unsigned long dir) { } 74 + static __always_inline unsigned long prevent_user_access_return(void) { return 0UL; } 75 + static __always_inline void restore_user_access(unsigned long flags) { } 80 76 #endif /* CONFIG_PPC_BOOK3S_64 */ 81 77 #endif /* CONFIG_PPC_KUAP */ 82 78 ··· 89 85 return __bad_kuap_fault(regs, address, is_write); 90 86 } 91 87 92 - static __always_inline void kuap_assert_locked(void) 93 - { 94 - if (kuap_is_disabled()) 95 - return; 96 - 97 - if (IS_ENABLED(CONFIG_PPC_KUAP_DEBUG)) 98 - __kuap_get_and_assert_locked(); 99 - } 100 - 101 88 static __always_inline void kuap_lock(void) 102 89 { 90 + #ifdef __kuap_lock 103 91 if (kuap_is_disabled()) 104 92 return; 105 93 106 94 __kuap_lock(); 95 + #endif 107 96 } 108 97 109 98 static __always_inline void kuap_save_and_lock(struct pt_regs *regs) 110 99 { 100 + #ifdef __kuap_save_and_lock 111 101 if (kuap_is_disabled()) 112 102 return; 113 103 114 104 __kuap_save_and_lock(regs); 105 + #endif 115 106 } 116 107 117 108 static __always_inline void kuap_kernel_restore(struct pt_regs *regs, unsigned long amr) ··· 119 120 120 121 static __always_inline unsigned long kuap_get_and_assert_locked(void) 121 122 { 122 - if (kuap_is_disabled()) 123 - return 0; 124 - 125 - return __kuap_get_and_assert_locked(); 123 + #ifdef __kuap_get_and_assert_locked 124 + if (!kuap_is_disabled()) 125 + return __kuap_get_and_assert_locked(); 126 + 
#endif 127 + return 0; 126 128 } 127 129 128 - #ifndef CONFIG_PPC_BOOK3S_64 129 - static __always_inline void allow_user_access(void __user *to, const void __user *from, 130 - unsigned long size, unsigned long dir) 130 + static __always_inline void kuap_assert_locked(void) 131 131 { 132 - if (kuap_is_disabled()) 133 - return; 134 - 135 - __allow_user_access(to, from, size, dir); 132 + if (IS_ENABLED(CONFIG_PPC_KUAP_DEBUG)) 133 + kuap_get_and_assert_locked(); 136 134 } 137 - 138 - static __always_inline void prevent_user_access(unsigned long dir) 139 - { 140 - if (kuap_is_disabled()) 141 - return; 142 - 143 - __prevent_user_access(dir); 144 - } 145 - 146 - static __always_inline unsigned long prevent_user_access_return(void) 147 - { 148 - if (kuap_is_disabled()) 149 - return 0; 150 - 151 - return __prevent_user_access_return(); 152 - } 153 - 154 - static __always_inline void restore_user_access(unsigned long flags) 155 - { 156 - if (kuap_is_disabled()) 157 - return; 158 - 159 - __restore_user_access(flags); 160 - } 161 - #endif /* CONFIG_PPC_BOOK3S_64 */ 162 135 163 136 static __always_inline void allow_read_from_user(const void __user *from, unsigned long size) 164 137 {
+12 -25
arch/powerpc/include/asm/lppaca.h
··· 6 6 #ifndef _ASM_POWERPC_LPPACA_H 7 7 #define _ASM_POWERPC_LPPACA_H 8 8 9 - /* 10 - * The below VPHN macros are outside the __KERNEL__ check since these are 11 - * used for compiling the vphn selftest in userspace 12 - */ 13 - 14 - /* The H_HOME_NODE_ASSOCIATIVITY h_call returns 6 64-bit registers. */ 15 - #define VPHN_REGISTER_COUNT 6 16 - 17 - /* 18 - * 6 64-bit registers unpacked into up to 24 be32 associativity values. To 19 - * form the complete property we have to add the length in the first cell. 20 - */ 21 - #define VPHN_ASSOC_BUFSIZE (VPHN_REGISTER_COUNT*sizeof(u64)/sizeof(u16) + 1) 22 - 23 - /* 24 - * The H_HOME_NODE_ASSOCIATIVITY hcall takes two values for flags: 25 - * 1 for retrieving associativity information for a guest cpu 26 - * 2 for retrieving associativity information for a host/hypervisor cpu 27 - */ 28 - #define VPHN_FLAG_VCPU 1 29 - #define VPHN_FLAG_PCPU 2 30 - 31 9 #ifdef __KERNEL__ 32 10 33 11 /* ··· 23 45 #include <asm/types.h> 24 46 #include <asm/mmu.h> 25 47 #include <asm/firmware.h> 48 + #include <asm/paca.h> 26 49 27 50 /* 28 51 * The lppaca is the "virtual processor area" registered with the hypervisor, ··· 106 127 */ 107 128 #define LPPACA_OLD_SHARED_PROC 2 108 129 109 - static inline bool lppaca_shared_proc(struct lppaca *l) 130 + #ifdef CONFIG_PPC_PSERIES 131 + /* 132 + * All CPUs should have the same shared proc value, so directly access the PACA 133 + * to avoid false positives from DEBUG_PREEMPT. 134 + */ 135 + static inline bool lppaca_shared_proc(void) 110 136 { 137 + struct lppaca *l = local_paca->lppaca_ptr; 138 + 111 139 if (!firmware_has_feature(FW_FEATURE_SPLPAR)) 112 140 return false; 113 141 return !!(l->__old_status & LPPACA_OLD_SHARED_PROC); 114 142 } 143 + 144 + #define get_lppaca() (get_paca()->lppaca_ptr) 145 + #endif 115 146 116 147 /* 117 148 * SLB shadow buffer structure as defined in the PAPR. 
The save_area ··· 137 148 __be64 vsid; 138 149 } save_area[SLB_NUM_BOLTED]; 139 150 } ____cacheline_aligned; 140 - 141 - extern long hcall_vphn(unsigned long cpu, u64 flags, __be32 *associativity); 142 151 143 152 #endif /* CONFIG_PPC_BOOK3S */ 144 153 #endif /* __KERNEL__ */
+2 -1
arch/powerpc/include/asm/macio.h
··· 3 3 #define __MACIO_ASIC_H__ 4 4 #ifdef __KERNEL__ 5 5 6 - #include <linux/of_device.h> 6 + #include <linux/of.h> 7 + #include <linux/platform_device.h> 7 8 8 9 extern struct bus_type macio_bus_type; 9 10
+2 -7
arch/powerpc/include/asm/mmu.h
··· 33 33 * key 0 controlling userspace addresses on radix 34 34 * Key 3 on hash 35 35 */ 36 - #define MMU_FTR_BOOK3S_KUAP ASM_CONST(0x00000200) 36 + #define MMU_FTR_KUAP ASM_CONST(0x00000200) 37 37 38 38 /* 39 39 * Supports KUEP feature ··· 144 144 145 145 typedef pte_t *pgtable_t; 146 146 147 - #ifdef CONFIG_PPC_E500 148 - #include <asm/percpu.h> 149 - DECLARE_PER_CPU(int, next_tlbcam_idx); 150 - #endif 151 - 152 147 enum { 153 148 MMU_FTRS_POSSIBLE = 154 149 #if defined(CONFIG_PPC_BOOK3S_604) ··· 183 188 #endif /* CONFIG_PPC_RADIX_MMU */ 184 189 #endif 185 190 #ifdef CONFIG_PPC_KUAP 186 - MMU_FTR_BOOK3S_KUAP | 191 + MMU_FTR_KUAP | 187 192 #endif /* CONFIG_PPC_KUAP */ 188 193 #ifdef CONFIG_PPC_MEM_KEYS 189 194 MMU_FTR_PKEY |
+1
arch/powerpc/include/asm/mmu_context.h
··· 127 127 128 128 static inline void dec_mm_active_cpus(struct mm_struct *mm) 129 129 { 130 + VM_WARN_ON_ONCE(atomic_read(&mm->context.active_cpus) <= 0); 130 131 atomic_dec(&mm->context.active_cpus); 131 132 } 132 133
-4
arch/powerpc/include/asm/module.h
··· 75 75 #endif 76 76 77 77 #ifdef CONFIG_DYNAMIC_FTRACE 78 - # ifdef MODULE 79 - asm(".section .ftrace.tramp,\"ax\",@nobits; .align 3; .previous"); 80 - # endif /* MODULE */ 81 - 82 78 int module_trampoline_target(struct module *mod, unsigned long trampoline, 83 79 unsigned long *target); 84 80 int module_finalize_ftrace(struct module *mod, const Elf_Shdr *sechdrs);
-22
arch/powerpc/include/asm/mpc8260.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 2 - /* 3 - * Since there are many different boards and no standard configuration, 4 - * we have a unique include file for each. Rather than change every 5 - * file that has to include MPC8260 configuration, they all include 6 - * this one and the configuration switching is done here. 7 - */ 8 - #ifdef __KERNEL__ 9 - #ifndef __ASM_POWERPC_MPC8260_H__ 10 - #define __ASM_POWERPC_MPC8260_H__ 11 - 12 - #define MPC82XX_BCR_PLDP 0x00800000 /* Pipeline Maximum Depth */ 13 - 14 - #ifdef CONFIG_8260 15 - 16 - #ifdef CONFIG_PCI_8260 17 - #include <platforms/82xx/m82xx_pci.h> 18 - #endif 19 - 20 - #endif /* CONFIG_8260 */ 21 - #endif /* !__ASM_POWERPC_MPC8260_H__ */ 22 - #endif /* __KERNEL__ */
+31 -33
arch/powerpc/include/asm/nohash/32/kup-8xx.h
··· 9 9 10 10 #ifndef __ASSEMBLY__ 11 11 12 - #include <linux/jump_label.h> 13 - 14 12 #include <asm/reg.h> 15 13 16 - extern struct static_key_false disable_kuap_key; 17 - 18 - static __always_inline bool kuap_is_disabled(void) 19 - { 20 - return static_branch_unlikely(&disable_kuap_key); 21 - } 22 - 23 - static inline void __kuap_lock(void) 24 - { 25 - } 26 - 27 - static inline void __kuap_save_and_lock(struct pt_regs *regs) 14 + static __always_inline void __kuap_save_and_lock(struct pt_regs *regs) 28 15 { 29 16 regs->kuap = mfspr(SPRN_MD_AP); 30 17 mtspr(SPRN_MD_AP, MD_APG_KUAP); 31 18 } 19 + #define __kuap_save_and_lock __kuap_save_and_lock 32 20 33 - static inline void kuap_user_restore(struct pt_regs *regs) 21 + static __always_inline void kuap_user_restore(struct pt_regs *regs) 34 22 { 35 23 } 36 24 37 - static inline void __kuap_kernel_restore(struct pt_regs *regs, unsigned long kuap) 25 + static __always_inline void __kuap_kernel_restore(struct pt_regs *regs, unsigned long kuap) 38 26 { 39 27 mtspr(SPRN_MD_AP, regs->kuap); 40 28 } 41 29 42 - static inline unsigned long __kuap_get_and_assert_locked(void) 30 + #ifdef CONFIG_PPC_KUAP_DEBUG 31 + static __always_inline unsigned long __kuap_get_and_assert_locked(void) 43 32 { 44 - unsigned long kuap; 33 + WARN_ON_ONCE(mfspr(SPRN_MD_AP) >> 16 != MD_APG_KUAP >> 16); 45 34 46 - kuap = mfspr(SPRN_MD_AP); 35 + return 0; 36 + } 37 + #define __kuap_get_and_assert_locked __kuap_get_and_assert_locked 38 + #endif 47 39 48 - if (IS_ENABLED(CONFIG_PPC_KUAP_DEBUG)) 49 - WARN_ON_ONCE(kuap >> 16 != MD_APG_KUAP >> 16); 50 - 51 - return kuap; 40 + static __always_inline void uaccess_begin_8xx(unsigned long val) 41 + { 42 + asm(ASM_MMU_FTR_IFSET("mtspr %0, %1", "", %2) : : 43 + "i"(SPRN_MD_AP), "r"(val), "i"(MMU_FTR_KUAP) : "memory"); 52 44 } 53 45 54 - static inline void __allow_user_access(void __user *to, const void __user *from, 55 - unsigned long size, unsigned long dir) 46 + static __always_inline void 
uaccess_end_8xx(void) 56 47 { 57 - mtspr(SPRN_MD_AP, MD_APG_INIT); 48 + asm(ASM_MMU_FTR_IFSET("mtspr %0, %1", "", %2) : : 49 + "i"(SPRN_MD_AP), "r"(MD_APG_KUAP), "i"(MMU_FTR_KUAP) : "memory"); 58 50 } 59 51 60 - static inline void __prevent_user_access(unsigned long dir) 52 + static __always_inline void allow_user_access(void __user *to, const void __user *from, 53 + unsigned long size, unsigned long dir) 61 54 { 62 - mtspr(SPRN_MD_AP, MD_APG_KUAP); 55 + uaccess_begin_8xx(MD_APG_INIT); 63 56 } 64 57 65 - static inline unsigned long __prevent_user_access_return(void) 58 + static __always_inline void prevent_user_access(unsigned long dir) 59 + { 60 + uaccess_end_8xx(); 61 + } 62 + 63 + static __always_inline unsigned long prevent_user_access_return(void) 66 64 { 67 65 unsigned long flags; 68 66 69 67 flags = mfspr(SPRN_MD_AP); 70 68 71 - mtspr(SPRN_MD_AP, MD_APG_KUAP); 69 + uaccess_end_8xx(); 72 70 73 71 return flags; 74 72 } 75 73 76 - static inline void __restore_user_access(unsigned long flags) 74 + static __always_inline void restore_user_access(unsigned long flags) 77 75 { 78 - mtspr(SPRN_MD_AP, flags); 76 + uaccess_begin_8xx(flags); 79 77 } 80 78 81 - static inline bool 79 + static __always_inline bool 82 80 __bad_kuap_fault(struct pt_regs *regs, unsigned long address, bool is_write) 83 81 { 84 82 return !((regs->kuap ^ MD_APG_KUAP) & 0xff000000);
+1 -1
arch/powerpc/include/asm/nohash/32/pgtable.h
··· 355 355 #define pmd_pfn(pmd) (pmd_val(pmd) >> PAGE_SHIFT) 356 356 #else 357 357 #define pmd_page_vaddr(pmd) \ 358 - ((unsigned long)(pmd_val(pmd) & ~(PTE_TABLE_SIZE - 1))) 358 + ((const void *)(pmd_val(pmd) & ~(PTE_TABLE_SIZE - 1))) 359 359 #define pmd_pfn(pmd) (__pa(pmd_val(pmd)) >> PAGE_SHIFT) 360 360 #endif 361 361
+1 -1
arch/powerpc/include/asm/nohash/64/pgtable.h
··· 127 127 #define pmd_bad(pmd) (!is_kernel_addr(pmd_val(pmd)) \ 128 128 || (pmd_val(pmd) & PMD_BAD_BITS)) 129 129 #define pmd_present(pmd) (!pmd_none(pmd)) 130 - #define pmd_page_vaddr(pmd) (pmd_val(pmd) & ~PMD_MASKED_BITS) 130 + #define pmd_page_vaddr(pmd) ((const void *)(pmd_val(pmd) & ~PMD_MASKED_BITS)) 131 131 extern struct page *pmd_page(pmd_t pmd); 132 132 #define pmd_pfn(pmd) (page_to_pfn(pmd_page(pmd))) 133 133
+35 -33
arch/powerpc/include/asm/nohash/kup-booke.h
··· 3 3 #define _ASM_POWERPC_KUP_BOOKE_H_ 4 4 5 5 #include <asm/bug.h> 6 + #include <asm/mmu.h> 6 7 7 8 #ifdef CONFIG_PPC_KUAP 8 9 ··· 14 13 15 14 #else 16 15 17 - #include <linux/jump_label.h> 18 16 #include <linux/sched.h> 19 17 20 18 #include <asm/reg.h> 21 19 22 - extern struct static_key_false disable_kuap_key; 23 - 24 - static __always_inline bool kuap_is_disabled(void) 25 - { 26 - return static_branch_unlikely(&disable_kuap_key); 27 - } 28 - 29 - static inline void __kuap_lock(void) 20 + static __always_inline void __kuap_lock(void) 30 21 { 31 22 mtspr(SPRN_PID, 0); 32 23 isync(); 33 24 } 25 + #define __kuap_lock __kuap_lock 34 26 35 - static inline void __kuap_save_and_lock(struct pt_regs *regs) 27 + static __always_inline void __kuap_save_and_lock(struct pt_regs *regs) 36 28 { 37 29 regs->kuap = mfspr(SPRN_PID); 38 30 mtspr(SPRN_PID, 0); 39 31 isync(); 40 32 } 33 + #define __kuap_save_and_lock __kuap_save_and_lock 41 34 42 - static inline void kuap_user_restore(struct pt_regs *regs) 35 + static __always_inline void kuap_user_restore(struct pt_regs *regs) 43 36 { 44 37 if (kuap_is_disabled()) 45 38 return; ··· 43 48 /* Context synchronisation is performed by rfi */ 44 49 } 45 50 46 - static inline void __kuap_kernel_restore(struct pt_regs *regs, unsigned long kuap) 51 + static __always_inline void __kuap_kernel_restore(struct pt_regs *regs, unsigned long kuap) 47 52 { 48 53 if (regs->kuap) 49 54 mtspr(SPRN_PID, current->thread.pid); ··· 51 56 /* Context synchronisation is performed by rfi */ 52 57 } 53 58 54 - static inline unsigned long __kuap_get_and_assert_locked(void) 59 + #ifdef CONFIG_PPC_KUAP_DEBUG 60 + static __always_inline unsigned long __kuap_get_and_assert_locked(void) 55 61 { 56 - unsigned long kuap = mfspr(SPRN_PID); 62 + WARN_ON_ONCE(mfspr(SPRN_PID)); 57 63 58 - if (IS_ENABLED(CONFIG_PPC_KUAP_DEBUG)) 59 - WARN_ON_ONCE(kuap); 64 + return 0; 65 + } 66 + #define __kuap_get_and_assert_locked __kuap_get_and_assert_locked 67 + #endif 60 68 61 - 
return kuap; 69 + static __always_inline void uaccess_begin_booke(unsigned long val) 70 + { 71 + asm(ASM_MMU_FTR_IFSET("mtspr %0, %1; isync", "", %2) : : 72 + "i"(SPRN_PID), "r"(val), "i"(MMU_FTR_KUAP) : "memory"); 62 73 } 63 74 64 - static inline void __allow_user_access(void __user *to, const void __user *from, 65 - unsigned long size, unsigned long dir) 75 + static __always_inline void uaccess_end_booke(void) 66 76 { 67 - mtspr(SPRN_PID, current->thread.pid); 68 - isync(); 77 + asm(ASM_MMU_FTR_IFSET("mtspr %0, %1; isync", "", %2) : : 78 + "i"(SPRN_PID), "r"(0), "i"(MMU_FTR_KUAP) : "memory"); 69 79 } 70 80 71 - static inline void __prevent_user_access(unsigned long dir) 81 + static __always_inline void allow_user_access(void __user *to, const void __user *from, 82 + unsigned long size, unsigned long dir) 72 83 { 73 - mtspr(SPRN_PID, 0); 74 - isync(); 84 + uaccess_begin_booke(current->thread.pid); 75 85 } 76 86 77 - static inline unsigned long __prevent_user_access_return(void) 87 + static __always_inline void prevent_user_access(unsigned long dir) 88 + { 89 + uaccess_end_booke(); 90 + } 91 + 92 + static __always_inline unsigned long prevent_user_access_return(void) 78 93 { 79 94 unsigned long flags = mfspr(SPRN_PID); 80 95 81 - mtspr(SPRN_PID, 0); 82 - isync(); 96 + uaccess_end_booke(); 83 97 84 98 return flags; 85 99 } 86 100 87 - static inline void __restore_user_access(unsigned long flags) 101 + static __always_inline void restore_user_access(unsigned long flags) 88 102 { 89 - if (flags) { 90 - mtspr(SPRN_PID, current->thread.pid); 91 - isync(); 92 - } 103 + if (flags) 104 + uaccess_begin_booke(current->thread.pid); 93 105 } 94 106 95 - static inline bool 107 + static __always_inline bool 96 108 __bad_kuap_fault(struct pt_regs *regs, unsigned long address, bool is_write) 97 109 { 98 110 return !regs->kuap;
+3
arch/powerpc/include/asm/nohash/mmu-e500.h
··· 319 319 320 320 #endif 321 321 322 + #include <asm/percpu.h> 323 + DECLARE_PER_CPU(int, next_tlbcam_idx); 324 + 322 325 #endif /* !__ASSEMBLY__ */ 323 326 324 327 #endif /* _ASM_POWERPC_MMU_BOOK3E_H_ */
+1 -5
arch/powerpc/include/asm/paca.h
··· 15 15 #include <linux/cache.h> 16 16 #include <linux/string.h> 17 17 #include <asm/types.h> 18 - #include <asm/lppaca.h> 19 18 #include <asm/mmu.h> 20 19 #include <asm/page.h> 21 20 #ifdef CONFIG_PPC_BOOK3E_64 ··· 46 47 #define get_paca() local_paca 47 48 #endif 48 49 49 - #ifdef CONFIG_PPC_PSERIES 50 - #define get_lppaca() (get_paca()->lppaca_ptr) 51 - #endif 52 - 53 50 #define get_slb_shadow() (get_paca()->slb_shadow_ptr) 54 51 55 52 struct task_struct; 56 53 struct rtas_args; 54 + struct lppaca; 57 55 58 56 /* 59 57 * Defines the layout of the paca.
+20 -10
arch/powerpc/include/asm/page.h
···
  #ifndef __ASSEMBLY__
  #include <linux/types.h>
  #include <linux/kernel.h>
+ #include <linux/bug.h>
  #else
  #include <asm/types.h>
  #endif
···
  #define ARCH_PFN_OFFSET		((unsigned long)(MEMORY_START >> PAGE_SHIFT))
  #endif
  
- #define virt_to_pfn(kaddr)	(__pa(kaddr) >> PAGE_SHIFT)
- #define virt_to_page(kaddr)	pfn_to_page(virt_to_pfn(kaddr))
- #define pfn_to_kaddr(pfn)	__va((pfn) << PAGE_SHIFT)
- 
- #define virt_addr_valid(vaddr)	({					\
- 	unsigned long _addr = (unsigned long)vaddr;			\
- 	_addr >= PAGE_OFFSET && _addr < (unsigned long)high_memory &&	\
- 	pfn_valid(virt_to_pfn(_addr));					\
- })
- 
  /*
   * On Book-E parts we need __va to parse the device tree and we can't
   * determine MEMORY_START until then.  However we can determine PHYSICAL_START
···
  #define __pa(x) ((unsigned long)(x) - PAGE_OFFSET + MEMORY_START)
  #endif
  #endif
+ 
+ #ifndef __ASSEMBLY__
+ static inline unsigned long virt_to_pfn(const void *kaddr)
+ {
+ 	return __pa(kaddr) >> PAGE_SHIFT;
+ }
+ 
+ static inline const void *pfn_to_kaddr(unsigned long pfn)
+ {
+ 	return __va(pfn << PAGE_SHIFT);
+ }
+ #endif
+ 
+ #define virt_to_page(kaddr)	pfn_to_page(virt_to_pfn(kaddr))
+ #define virt_addr_valid(vaddr)	({					\
+ 	unsigned long _addr = (unsigned long)vaddr;			\
+ 	_addr >= PAGE_OFFSET && _addr < (unsigned long)high_memory &&	\
+ 	pfn_valid(virt_to_pfn((void *)_addr));				\
+ })
  
  /*
   * Unfortunately the PLT is in the BSS in the PPC32 ELF ABI,
+1
arch/powerpc/include/asm/paravirt.h
···
  #include <asm/smp.h>
  #ifdef CONFIG_PPC64
  #include <asm/paca.h>
+ #include <asm/lppaca.h>
  #include <asm/hvcall.h>
  #endif
  
+2 -1
arch/powerpc/include/asm/pci.h
···
  extern int pci_mmap_legacy_page_range(struct pci_bus *bus,
  				      struct vm_area_struct *vma,
  				      enum pci_mmap_state mmap_state);
- 
+ extern void pci_adjust_legacy_attr(struct pci_bus *bus,
+ 				   enum pci_mmap_state mmap_type);
  #define HAVE_PCI_LEGACY	1
  
  extern void pcibios_claim_one_bus(struct pci_bus *b);
+2 -2
arch/powerpc/include/asm/pgtable.h
···
  }
  
  #ifndef pmd_page_vaddr
- static inline unsigned long pmd_page_vaddr(pmd_t pmd)
+ static inline const void *pmd_page_vaddr(pmd_t pmd)
  {
- 	return ((unsigned long)__va(pmd_val(pmd) & ~PMD_MASKED_BITS));
+ 	return __va(pmd_val(pmd) & ~PMD_MASKED_BITS);
  }
  #define pmd_page_vaddr pmd_page_vaddr
  #endif
+1
arch/powerpc/include/asm/plpar_wrappers.h
···
  
  #include <asm/hvcall.h>
  #include <asm/paca.h>
+ #include <asm/lppaca.h>
  #include <asm/page.h>
  
  static inline long poll_pending(void)
+2
arch/powerpc/include/asm/ppc-opcode.h
···
  #define PPC_RAW_RFCI			(0x4c000066)
  #define PPC_RAW_RFDI			(0x4c00004e)
  #define PPC_RAW_RFMCI			(0x4c00004c)
+ #define PPC_RAW_TLBILX_LPID		(0x7c000024)
  #define PPC_RAW_TLBILX(t, a, b)		(0x7c000024 | __PPC_T_TLB(t) | __PPC_RA0(a) | __PPC_RB(b))
  #define PPC_RAW_WAIT_v203		(0x7c00007c)
  #define PPC_RAW_WAIT(w, p)		(0x7c00003c | __PPC_WC(w) | __PPC_PL(p))
···
  #define PPC_TLBILX(t, a, b)	stringify_in_c(.long PPC_RAW_TLBILX(t, a, b))
  #define PPC_TLBILX_ALL(a, b)	PPC_TLBILX(0, a, b)
  #define PPC_TLBILX_PID(a, b)	PPC_TLBILX(1, a, b)
+ #define PPC_TLBILX_LPID		stringify_in_c(.long PPC_RAW_TLBILX_LPID)
  #define PPC_TLBILX_VA(a, b)	PPC_TLBILX(3, a, b)
  #define PPC_WAIT_v203		stringify_in_c(.long PPC_RAW_WAIT_v203)
  #define PPC_WAIT(w, p)		stringify_in_c(.long PPC_RAW_WAIT(w, p))
-5
arch/powerpc/include/asm/processor.h
···
  	unsigned int	align_ctl;	/* alignment handling control */
  #ifdef CONFIG_HAVE_HW_BREAKPOINT
  	struct perf_event *ptrace_bps[HBP_NUM_MAX];
- 	/*
- 	 * Helps identify source of single-step exception and subsequent
- 	 * hw-breakpoint enablement
- 	 */
- 	struct perf_event *last_hit_ubp[HBP_NUM_MAX];
  #endif /* CONFIG_HAVE_HW_BREAKPOINT */
  	struct arch_hw_breakpoint hw_brk[HBP_NUM_MAX]; /* hardware breakpoint info */
  	unsigned long	trap_nr;	/* last trap # on this thread */
-2
arch/powerpc/include/asm/reg.h
···
  #define mfspr(rn)	({unsigned long rval; \
  			asm volatile("mfspr %0," __stringify(rn) \
  				: "=r" (rval)); rval;})
- #ifndef mtspr
  #define mtspr(rn, v)	asm volatile("mtspr " __stringify(rn) ",%0" : \
  				     : "r" ((unsigned long)(v)) \
  				     : "memory")
- #endif
  #define wrtspr(rn)	asm volatile("mtspr " __stringify(rn) ",2" : : : "memory")
  
  static inline void wrtee(unsigned long val)
+3
arch/powerpc/include/asm/rtas.h
···
  #define RTAS_USER_REGION_SIZE (64 * 1024)
  
  /* RTAS return status codes */
+ #define RTAS_HARDWARE_ERROR	-1    /* Hardware Error */
  #define RTAS_BUSY		-2    /* RTAS Busy */
+ #define RTAS_INVALID_PARAMETER	-3    /* Invalid indicator/domain/sensor etc. */
  #define RTAS_EXTENDED_DELAY_MIN	9900
  #define RTAS_EXTENDED_DELAY_MAX	9905
···
  extern int rtas_set_indicator_fast(int indicator, int index, int new_value);
  extern void rtas_progress(char *s, unsigned short hex);
  int rtas_ibm_suspend_me(int *fw_status);
+ int rtas_error_rc(int rtas_rc);
  
  struct rtc_time;
  extern time64_t rtas_get_boot_time(void);
+2
arch/powerpc/include/asm/sections.h
···
  	       (unsigned long)_stext < end;
  }
  
+ #else
+ static inline unsigned long kernel_toc_addr(void) { BUILD_BUG(); return -1UL; }
  #endif
  
  #endif /* __KERNEL__ */
-1
arch/powerpc/include/asm/setup.h
···
  extern void ppc_printk_progress(char *s, unsigned short hex);
  
  extern unsigned long long memory_limit;
- extern void *zalloc_maybe_bootmem(size_t size, gfp_t mask);
  
  struct device_node;
  
+15
arch/powerpc/include/asm/topology.h
···
  #endif
  #endif
  
+ #ifdef CONFIG_HOTPLUG_SMT
+ #include <linux/cpu_smt.h>
+ #include <asm/cputhreads.h>
+ 
+ static inline bool topology_is_primary_thread(unsigned int cpu)
+ {
+ 	return cpu == cpu_first_thread_sibling(cpu);
+ }
+ 
+ static inline bool topology_smt_thread_allowed(unsigned int cpu)
+ {
+ 	return cpu_thread_in_core(cpu) < cpu_smt_num_threads;
+ }
+ #endif
+ 
  #endif /* __KERNEL__ */
  #endif /* _ASM_POWERPC_TOPOLOGY_H */
+3 -3
arch/powerpc/include/asm/uaccess.h
···
  extern long __copy_from_user_flushcache(void *dst, const void __user *src,
  		unsigned size);
  
- static __must_check inline bool user_access_begin(const void __user *ptr, size_t len)
+ static __must_check __always_inline bool user_access_begin(const void __user *ptr, size_t len)
  {
  	if (unlikely(!access_ok(ptr, len)))
  		return false;
···
  #define user_access_save	prevent_user_access_return
  #define user_access_restore	restore_user_access
  
- static __must_check inline bool
+ static __must_check __always_inline bool
  user_read_access_begin(const void __user *ptr, size_t len)
  {
  	if (unlikely(!access_ok(ptr, len)))
···
  #define user_read_access_begin	user_read_access_begin
  #define user_read_access_end	prevent_current_read_from_user
  
- static __must_check inline bool
+ static __must_check __always_inline bool
  user_write_access_begin(const void __user *ptr, size_t len)
  {
  	if (unlikely(!access_ok(ptr, len)))
+3 -1
arch/powerpc/include/asm/vermagic.h
···
  #ifndef _ASM_VERMAGIC_H
  #define _ASM_VERMAGIC_H
  
- #ifdef CONFIG_MPROFILE_KERNEL
+ #ifdef CONFIG_ARCH_USING_PATCHABLE_FUNCTION_ENTRY
+ #define MODULE_ARCH_VERMAGIC_FTRACE	"patchable-function-entry "
+ #elif defined(CONFIG_MPROFILE_KERNEL)
  #define MODULE_ARCH_VERMAGIC_FTRACE	"mprofile-kernel "
  #else
  #define MODULE_ARCH_VERMAGIC_FTRACE	""
+24
arch/powerpc/include/asm/vphn.h
···
+ /* SPDX-License-Identifier: GPL-2.0-or-later */
+ #ifndef _ASM_POWERPC_VPHN_H
+ #define _ASM_POWERPC_VPHN_H
+ 
+ /* The H_HOME_NODE_ASSOCIATIVITY h_call returns 6 64-bit registers. */
+ #define VPHN_REGISTER_COUNT 6
+ 
+ /*
+  * 6 64-bit registers unpacked into up to 24 be32 associativity values. To
+  * form the complete property we have to add the length in the first cell.
+  */
+ #define VPHN_ASSOC_BUFSIZE (VPHN_REGISTER_COUNT*sizeof(u64)/sizeof(u16) + 1)
+ 
+ /*
+  * The H_HOME_NODE_ASSOCIATIVITY hcall takes two values for flags:
+  * 1 for retrieving associativity information for a guest cpu
+  * 2 for retrieving associativity information for a host/hypervisor cpu
+  */
+ #define VPHN_FLAG_VCPU	1
+ #define VPHN_FLAG_PCPU	2
+ 
+ long hcall_vphn(unsigned long cpu, u64 flags, __be32 *associativity);
+ 
+ #endif // _ASM_POWERPC_VPHN_H
+2 -1
arch/powerpc/kernel/audit.c
···
  #include <linux/audit.h>
  #include <asm/unistd.h>
  
+ #include "audit_32.h"
+ 
  static unsigned dir_class[] = {
  #include <asm-generic/audit_dir_write.h>
  ~0U
···
  int audit_classify_syscall(int abi, unsigned syscall)
  {
  #ifdef CONFIG_PPC64
- 	extern int ppc32_classify_syscall(unsigned);
  	if (abi == AUDIT_ARCH_PPC)
  		return ppc32_classify_syscall(syscall);
  #endif
+7
arch/powerpc/kernel/audit_32.h
···
+ // SPDX-License-Identifier: GPL-2.0
+ #ifndef __AUDIT_32_H__
+ #define __AUDIT_32_H__
+ 
+ extern int ppc32_classify_syscall(unsigned);
+ 
+ #endif
+2
arch/powerpc/kernel/compat_audit.c
···
  #include <linux/audit_arch.h>
  #include <asm/unistd.h>
  
+ #include "audit_32.h"
+ 
  unsigned ppc32_dir_class[] = {
  #include <asm-generic/audit_dir_write.h>
  ~0U
+4
arch/powerpc/kernel/cputable.c
···
  		t->cpu_features |= old.cpu_features & CPU_FTR_PMAO_BUG;
  	}
  
+ 	/* Set kuap ON at startup, will be disabled later if cmdline has 'nosmap' */
+ 	if (IS_ENABLED(CONFIG_PPC_KUAP) && IS_ENABLED(CONFIG_PPC32))
+ 		t->mmu_features |= MMU_FTR_KUAP;
+ 
  	*PTRRELOC(&cur_cpu_spec) = &the_cpu_spec;
  
  	/*
-1
arch/powerpc/kernel/entry_32.S
···
  #include <asm/asm-offsets.h>
  #include <asm/unistd.h>
  #include <asm/ptrace.h>
- #include <asm/export.h>
  #include <asm/feature-fixups.h>
  #include <asm/barrier.h>
  #include <asm/kup.h>
+1 -1
arch/powerpc/kernel/epapr_hcalls.S
···
   * Copyright (C) 2012 Freescale Semiconductor, Inc.
   */
  
+ #include <linux/export.h>
  #include <linux/threads.h>
  #include <asm/epapr_hcalls.h>
  #include <asm/reg.h>
···
  #include <asm/ppc_asm.h>
  #include <asm/asm-compat.h>
  #include <asm/asm-offsets.h>
- #include <asm/export.h>
  
  #ifndef CONFIG_PPC64
  /* epapr_ev_idle() was derived from e500_idle() */
+1
arch/powerpc/kernel/fadump.c
···
  	return ret;
  error_out:
  	fw_dump.fadump_enabled = 0;
+ 	fw_dump.reserve_dump_area_size = 0;
  	return 0;
  }
  
+1 -1
arch/powerpc/kernel/fpu.S
···
   *  Copyright (C) 1997 Dan Malek (dmalek@jlc.net).
   */
  
+ #include <linux/export.h>
  #include <asm/reg.h>
  #include <asm/page.h>
  #include <asm/mmu.h>
···
  #include <asm/ppc_asm.h>
  #include <asm/asm-offsets.h>
  #include <asm/ptrace.h>
- #include <asm/export.h>
  #include <asm/asm-compat.h>
  #include <asm/feature-fixups.h>
  
-1
arch/powerpc/kernel/head_40x.S
···
  #include <asm/ppc_asm.h>
  #include <asm/asm-offsets.h>
  #include <asm/ptrace.h>
- #include <asm/export.h>
  
  #include "head_32.h"
  
-1
arch/powerpc/kernel/head_44x.S
··· 35 35 #include <asm/asm-offsets.h> 36 36 #include <asm/ptrace.h> 37 37 #include <asm/synch.h> 38 - #include <asm/export.h> 39 38 #include <asm/code-patching-asm.h> 40 39 #include "head_booke.h" 41 40
-1
arch/powerpc/kernel/head_64.S
··· 40 40 #include <asm/hw_irq.h> 41 41 #include <asm/cputhreads.h> 42 42 #include <asm/ppc-opcode.h> 43 - #include <asm/export.h> 44 43 #include <asm/feature-fixups.h> 45 44 #ifdef CONFIG_PPC_BOOK3S 46 45 #include <asm/exception-64s.h>
-1
arch/powerpc/kernel/head_85xx.S
··· 40 40 #include <asm/asm-offsets.h> 41 41 #include <asm/cache.h> 42 42 #include <asm/ptrace.h> 43 - #include <asm/export.h> 44 43 #include <asm/feature-fixups.h> 45 44 #include "head_booke.h" 46 45
-1
arch/powerpc/kernel/head_8xx.S
··· 29 29 #include <asm/ppc_asm.h> 30 30 #include <asm/asm-offsets.h> 31 31 #include <asm/ptrace.h> 32 - #include <asm/export.h> 33 32 #include <asm/code-patching-asm.h> 34 33 #include <asm/interrupt.h> 35 34
-1
arch/powerpc/kernel/head_book3s_32.S
··· 31 31 #include <asm/ptrace.h> 32 32 #include <asm/bug.h> 33 33 #include <asm/kvm_book3s_asm.h> 34 - #include <asm/export.h> 35 34 #include <asm/feature-fixups.h> 36 35 #include <asm/interrupt.h> 37 36
+57 -331
arch/powerpc/kernel/hw_breakpoint.c
···
  	return 0;	/* no instruction breakpoints available */
  }
  
- static bool single_step_pending(void)
- {
- 	int i;
- 
- 	for (i = 0; i < nr_wp_slots(); i++) {
- 		if (current->thread.last_hit_ubp[i])
- 			return true;
- 	}
- 	return false;
- }
  
  /*
   * Install a perf counter breakpoint.
···
  	 * Do not install DABR values if the instruction must be single-stepped.
  	 * If so, DABR will be populated in single_step_dabr_instruction().
  	 */
- 	if (!single_step_pending())
+ 	if (!info->perf_single_step)
  		__set_breakpoint(i, info);
  
  	return 0;
···
  static bool is_ptrace_bp(struct perf_event *bp)
  {
  	return bp->overflow_handler == ptrace_triggered;
- }
- 
- struct breakpoint {
- 	struct list_head list;
- 	struct perf_event *bp;
- 	bool ptrace_bp;
- };
- 
- /*
-  * While kernel/events/hw_breakpoint.c does its own synchronization, we cannot
-  * rely on it safely synchronizing internals here; however, we can rely on it
-  * not requesting more breakpoints than available.
-  */
- static DEFINE_SPINLOCK(cpu_bps_lock);
- static DEFINE_PER_CPU(struct breakpoint *, cpu_bps[HBP_NUM_MAX]);
- static DEFINE_SPINLOCK(task_bps_lock);
- static LIST_HEAD(task_bps);
- 
- static struct breakpoint *alloc_breakpoint(struct perf_event *bp)
- {
- 	struct breakpoint *tmp;
- 
- 	tmp = kzalloc(sizeof(*tmp), GFP_KERNEL);
- 	if (!tmp)
- 		return ERR_PTR(-ENOMEM);
- 	tmp->bp = bp;
- 	tmp->ptrace_bp = is_ptrace_bp(bp);
- 	return tmp;
- }
- 
- static bool bp_addr_range_overlap(struct perf_event *bp1, struct perf_event *bp2)
- {
- 	__u64 bp1_saddr, bp1_eaddr, bp2_saddr, bp2_eaddr;
- 
- 	bp1_saddr = ALIGN_DOWN(bp1->attr.bp_addr, HW_BREAKPOINT_SIZE);
- 	bp1_eaddr = ALIGN(bp1->attr.bp_addr + bp1->attr.bp_len, HW_BREAKPOINT_SIZE);
- 	bp2_saddr = ALIGN_DOWN(bp2->attr.bp_addr, HW_BREAKPOINT_SIZE);
- 	bp2_eaddr = ALIGN(bp2->attr.bp_addr + bp2->attr.bp_len, HW_BREAKPOINT_SIZE);
- 
- 	return (bp1_saddr < bp2_eaddr && bp1_eaddr > bp2_saddr);
- }
- 
- static bool alternate_infra_bp(struct breakpoint *b, struct perf_event *bp)
- {
- 	return is_ptrace_bp(bp) ? !b->ptrace_bp : b->ptrace_bp;
- }
- 
- static bool can_co_exist(struct breakpoint *b, struct perf_event *bp)
- {
- 	return !(alternate_infra_bp(b, bp) && bp_addr_range_overlap(b->bp, bp));
- }
- 
- static int task_bps_add(struct perf_event *bp)
- {
- 	struct breakpoint *tmp;
- 
- 	tmp = alloc_breakpoint(bp);
- 	if (IS_ERR(tmp))
- 		return PTR_ERR(tmp);
- 
- 	spin_lock(&task_bps_lock);
- 	list_add(&tmp->list, &task_bps);
- 	spin_unlock(&task_bps_lock);
- 	return 0;
- }
- 
- static void task_bps_remove(struct perf_event *bp)
- {
- 	struct list_head *pos, *q;
- 
- 	spin_lock(&task_bps_lock);
- 	list_for_each_safe(pos, q, &task_bps) {
- 		struct breakpoint *tmp = list_entry(pos, struct breakpoint, list);
- 
- 		if (tmp->bp == bp) {
- 			list_del(&tmp->list);
- 			kfree(tmp);
- 			break;
- 		}
- 	}
- 	spin_unlock(&task_bps_lock);
- }
- 
- /*
-  * If any task has breakpoint from alternate infrastructure,
-  * return true. Otherwise return false.
-  */
- static bool all_task_bps_check(struct perf_event *bp)
- {
- 	struct breakpoint *tmp;
- 	bool ret = false;
- 
- 	spin_lock(&task_bps_lock);
- 	list_for_each_entry(tmp, &task_bps, list) {
- 		if (!can_co_exist(tmp, bp)) {
- 			ret = true;
- 			break;
- 		}
- 	}
- 	spin_unlock(&task_bps_lock);
- 	return ret;
- }
- 
- /*
-  * If same task has breakpoint from alternate infrastructure,
-  * return true. Otherwise return false.
-  */
- static bool same_task_bps_check(struct perf_event *bp)
- {
- 	struct breakpoint *tmp;
- 	bool ret = false;
- 
- 	spin_lock(&task_bps_lock);
- 	list_for_each_entry(tmp, &task_bps, list) {
- 		if (tmp->bp->hw.target == bp->hw.target &&
- 		    !can_co_exist(tmp, bp)) {
- 			ret = true;
- 			break;
- 		}
- 	}
- 	spin_unlock(&task_bps_lock);
- 	return ret;
- }
- 
- static int cpu_bps_add(struct perf_event *bp)
- {
- 	struct breakpoint **cpu_bp;
- 	struct breakpoint *tmp;
- 	int i = 0;
- 
- 	tmp = alloc_breakpoint(bp);
- 	if (IS_ERR(tmp))
- 		return PTR_ERR(tmp);
- 
- 	spin_lock(&cpu_bps_lock);
- 	cpu_bp = per_cpu_ptr(cpu_bps, bp->cpu);
- 	for (i = 0; i < nr_wp_slots(); i++) {
- 		if (!cpu_bp[i]) {
- 			cpu_bp[i] = tmp;
- 			break;
- 		}
- 	}
- 	spin_unlock(&cpu_bps_lock);
- 	return 0;
- }
- 
- static void cpu_bps_remove(struct perf_event *bp)
- {
- 	struct breakpoint **cpu_bp;
- 	int i = 0;
- 
- 	spin_lock(&cpu_bps_lock);
- 	cpu_bp = per_cpu_ptr(cpu_bps, bp->cpu);
- 	for (i = 0; i < nr_wp_slots(); i++) {
- 		if (!cpu_bp[i])
- 			continue;
- 
- 		if (cpu_bp[i]->bp == bp) {
- 			kfree(cpu_bp[i]);
- 			cpu_bp[i] = NULL;
- 			break;
- 		}
- 	}
- 	spin_unlock(&cpu_bps_lock);
- }
- 
- static bool cpu_bps_check(int cpu, struct perf_event *bp)
- {
- 	struct breakpoint **cpu_bp;
- 	bool ret = false;
- 	int i;
- 
- 	spin_lock(&cpu_bps_lock);
- 	cpu_bp = per_cpu_ptr(cpu_bps, cpu);
- 	for (i = 0; i < nr_wp_slots(); i++) {
- 		if (cpu_bp[i] && !can_co_exist(cpu_bp[i], bp)) {
- 			ret = true;
- 			break;
- 		}
- 	}
- 	spin_unlock(&cpu_bps_lock);
- 	return ret;
- }
- 
- static bool all_cpu_bps_check(struct perf_event *bp)
- {
- 	int cpu;
- 
- 	for_each_online_cpu(cpu) {
- 		if (cpu_bps_check(cpu, bp))
- 			return true;
- 	}
- 	return false;
- }
- 
- int arch_reserve_bp_slot(struct perf_event *bp)
- {
- 	int ret;
- 
- 	/* ptrace breakpoint */
- 	if (is_ptrace_bp(bp)) {
- 		if (all_cpu_bps_check(bp))
- 			return -ENOSPC;
- 
- 		if (same_task_bps_check(bp))
- 			return -ENOSPC;
- 
- 		return task_bps_add(bp);
- 	}
- 
- 	/* perf breakpoint */
- 	if (is_kernel_addr(bp->attr.bp_addr))
- 		return 0;
- 
- 	if (bp->hw.target && bp->cpu == -1) {
- 		if (same_task_bps_check(bp))
- 			return -ENOSPC;
- 
- 		return task_bps_add(bp);
- 	} else if (!bp->hw.target && bp->cpu != -1) {
- 		if (all_task_bps_check(bp))
- 			return -ENOSPC;
- 
- 		return cpu_bps_add(bp);
- 	}
- 
- 	if (same_task_bps_check(bp))
- 		return -ENOSPC;
- 
- 	ret = cpu_bps_add(bp);
- 	if (ret)
- 		return ret;
- 	ret = task_bps_add(bp);
- 	if (ret)
- 		cpu_bps_remove(bp);
- 
- 	return ret;
- }
- 
- void arch_release_bp_slot(struct perf_event *bp)
- {
- 	if (!is_kernel_addr(bp->attr.bp_addr)) {
- 		if (bp->hw.target)
- 			task_bps_remove(bp);
- 		if (bp->cpu != -1)
- 			cpu_bps_remove(bp);
- 	}
- }
- 
- /*
-  * Perform cleanup of arch-specific counters during unregistration
-  * of the perf-event
-  */
- void arch_unregister_hw_breakpoint(struct perf_event *bp)
- {
- 	/*
- 	 * If the breakpoint is unregistered between a hw_breakpoint_handler()
- 	 * and the single_step_dabr_instruction(), then cleanup the breakpoint
- 	 * restoration variables to prevent dangling pointers.
- 	 * FIXME, this should not be using bp->ctx at all! Sayeth peterz.
- 	 */
- 	if (bp->ctx && bp->ctx->task && bp->ctx->task != ((void *)-1L)) {
- 		int i;
- 
- 		for (i = 0; i < nr_wp_slots(); i++) {
- 			if (bp->ctx->task->thread.last_hit_ubp[i] == bp)
- 				bp->ctx->task->thread.last_hit_ubp[i] = NULL;
- 		}
- 	}
  }
  
  /*
···
   * Restores the breakpoint on the debug registers.
   * Invoke this function if it is known that the execution context is
   * about to change to cause loss of MSR_SE settings.
+  *
+  * The perf watchpoint will simply re-trigger once the thread is started again,
+  * and the watchpoint handler will set up MSR_SE and perf_single_step as
+  * needed.
   */
  void thread_change_pc(struct task_struct *tsk, struct pt_regs *regs)
  {
···
  	int i;
  
  	for (i = 0; i < nr_wp_slots(); i++) {
- 		if (unlikely(tsk->thread.last_hit_ubp[i]))
+ 		struct perf_event *bp = __this_cpu_read(bp_per_reg[i]);
+ 
+ 		if (unlikely(bp && counter_arch_bp(bp)->perf_single_step))
  			goto reset;
  	}
  	return;
···
  	for (i = 0; i < nr_wp_slots(); i++) {
  		info = counter_arch_bp(__this_cpu_read(bp_per_reg[i]));
  		__set_breakpoint(i, info);
- 		tsk->thread.last_hit_ubp[i] = NULL;
+ 		info->perf_single_step = false;
  	}
  }
  
···
   * We've failed in reliably handling the hw-breakpoint. Unregister
   * it and throw a warning message to let the user know about it.
   */
- static void handler_error(struct perf_event *bp, struct arch_hw_breakpoint *info)
+ static void handler_error(struct perf_event *bp)
  {
  	WARN(1, "Unable to handle hardware breakpoint. Breakpoint at 0x%lx will be disabled.",
- 	     info->address);
+ 	     counter_arch_bp(bp)->address);
  	perf_event_disable_inatomic(bp);
  }
  
- static void larx_stcx_err(struct perf_event *bp, struct arch_hw_breakpoint *info)
+ static void larx_stcx_err(struct perf_event *bp)
  {
  	printk_ratelimited("Breakpoint hit on instruction that can't be emulated. Breakpoint at 0x%lx will be disabled.\n",
- 			   info->address);
+ 			   counter_arch_bp(bp)->address);
  	perf_event_disable_inatomic(bp);
  }
  
  static bool stepping_handler(struct pt_regs *regs, struct perf_event **bp,
- 			     struct arch_hw_breakpoint **info, int *hit,
- 			     ppc_inst_t instr)
+ 			     int *hit, ppc_inst_t instr)
  {
  	int i;
  	int stepped;
···
  	for (i = 0; i < nr_wp_slots(); i++) {
  		if (!hit[i])
  			continue;
- 		current->thread.last_hit_ubp[i] = bp[i];
- 		info[i] = NULL;
+ 
+ 		counter_arch_bp(bp[i])->perf_single_step = true;
+ 		bp[i] = NULL;
  	}
  	regs_set_return_msr(regs, regs->msr | MSR_SE);
  	return false;
···
  	for (i = 0; i < nr_wp_slots(); i++) {
  		if (!hit[i])
  			continue;
- 		handler_error(bp[i], info[i]);
- 		info[i] = NULL;
+ 		handler_error(bp[i]);
+ 		bp[i] = NULL;
  	}
  	return false;
  	}
  	return true;
  }
  
- static void handle_p10dd1_spurious_exception(struct arch_hw_breakpoint **info,
+ static void handle_p10dd1_spurious_exception(struct perf_event **bp,
  					     int *hit, unsigned long ea)
  {
  	int i;
···
  	 * spurious exception.
  	 */
  	for (i = 0; i < nr_wp_slots(); i++) {
- 		if (!info[i])
+ 		struct arch_hw_breakpoint *info;
+ 
+ 		if (!bp[i])
  			continue;
  
- 		hw_end_addr = ALIGN(info[i]->address + info[i]->len, HW_BREAKPOINT_SIZE);
+ 		info = counter_arch_bp(bp[i]);
+ 
+ 		hw_end_addr = ALIGN(info->address + info->len, HW_BREAKPOINT_SIZE);
  
  		/*
  		 * Ending address of DAWR range is less than starting
···
  		return;
  
  	for (i = 0; i < nr_wp_slots(); i++) {
- 		if (info[i]) {
+ 		if (bp[i]) {
  			hit[i] = 1;
- 			info[i]->type |= HW_BRK_TYPE_EXTRANEOUS_IRQ;
+ 			counter_arch_bp(bp[i])->type |= HW_BRK_TYPE_EXTRANEOUS_IRQ;
  		}
  	}
  }
···
  	int rc = NOTIFY_STOP;
  	struct perf_event *bp[HBP_NUM_MAX] = { NULL };
  	struct pt_regs *regs = args->regs;
- 	struct arch_hw_breakpoint *info[HBP_NUM_MAX] = { NULL };
  	int i;
  	int hit[HBP_NUM_MAX] = {0};
  	int nr_hit = 0;
···
  	wp_get_instr_detail(regs, &instr, &type, &size, &ea);
  
  	for (i = 0; i < nr_wp_slots(); i++) {
+ 		struct arch_hw_breakpoint *info;
+ 
  		bp[i] = __this_cpu_read(bp_per_reg[i]);
  		if (!bp[i])
  			continue;
  
- 		info[i] = counter_arch_bp(bp[i]);
- 		info[i]->type &= ~HW_BRK_TYPE_EXTRANEOUS_IRQ;
+ 		info = counter_arch_bp(bp[i]);
+ 		info->type &= ~HW_BRK_TYPE_EXTRANEOUS_IRQ;
  
- 		if (wp_check_constraints(regs, instr, ea, type, size, info[i])) {
+ 		if (wp_check_constraints(regs, instr, ea, type, size, info)) {
  			if (!IS_ENABLED(CONFIG_PPC_8xx) &&
  			    ppc_inst_equal(instr, ppc_inst(0))) {
- 				handler_error(bp[i], info[i]);
- 				info[i] = NULL;
+ 				handler_error(bp[i]);
+ 				bp[i] = NULL;
  				err = 1;
  				continue;
  			}
···
  		/* Workaround for Power10 DD1 */
  		if (!IS_ENABLED(CONFIG_PPC_8xx) && mfspr(SPRN_PVR) == 0x800100 &&
  		    is_octword_vsx_instr(type, size)) {
- 			handle_p10dd1_spurious_exception(info, hit, ea);
+ 			handle_p10dd1_spurious_exception(bp, hit, ea);
  		} else {
  			rc = NOTIFY_DONE;
  			goto out;
···
  	 */
  	if (ptrace_bp) {
  		for (i = 0; i < nr_wp_slots(); i++) {
- 			if (!hit[i])
+ 			if (!hit[i] || !is_ptrace_bp(bp[i]))
  				continue;
  			perf_bp_event(bp[i], regs);
- 			info[i] = NULL;
+ 			bp[i] = NULL;
  		}
  		rc = NOTIFY_DONE;
  		goto reset;
···
  		for (i = 0; i < nr_wp_slots(); i++) {
  			if (!hit[i])
  				continue;
- 			larx_stcx_err(bp[i], info[i]);
- 			info[i] = NULL;
+ 			larx_stcx_err(bp[i]);
+ 			bp[i] = NULL;
  		}
  		goto reset;
  	}
  
- 	if (!stepping_handler(regs, bp, info, hit, instr))
+ 	if (!stepping_handler(regs, bp, hit, instr))
  		goto reset;
  }
  
···
  	for (i = 0; i < nr_wp_slots(); i++) {
  		if (!hit[i])
  			continue;
- 		if (!(info[i]->type & HW_BRK_TYPE_EXTRANEOUS_IRQ))
+ 		if (!(counter_arch_bp(bp[i])->type & HW_BRK_TYPE_EXTRANEOUS_IRQ))
  			perf_bp_event(bp[i], regs);
  	}
  
  reset:
  	for (i = 0; i < nr_wp_slots(); i++) {
- 		if (!info[i])
+ 		if (!bp[i])
  			continue;
- 		__set_breakpoint(i, info[i]);
+ 		__set_breakpoint(i, counter_arch_bp(bp[i]));
  	}
  
  out:
···
  static int single_step_dabr_instruction(struct die_args *args)
  {
  	struct pt_regs *regs = args->regs;
- 	struct perf_event *bp = NULL;
- 	struct arch_hw_breakpoint *info;
- 	int i;
  	bool found = false;
  
  	/*
  	 * Check if we are single-stepping as a result of a
  	 * previous HW Breakpoint exception
  	 */
- 	for (i = 0; i < nr_wp_slots(); i++) {
- 		bp = current->thread.last_hit_ubp[i];
+ 	for (int i = 0; i < nr_wp_slots(); i++) {
+ 		struct perf_event *bp;
+ 		struct arch_hw_breakpoint *info;
+ 
+ 		bp = __this_cpu_read(bp_per_reg[i]);
  
  		if (!bp)
  			continue;
  
- 		found = true;
  		info = counter_arch_bp(bp);
+ 
+ 		if (!info->perf_single_step)
+ 			continue;
+ 
+ 		found = true;
  
  		/*
  		 * We shall invoke the user-defined callback function in the
···
  		 */
  		if (!(info->type & HW_BRK_TYPE_EXTRANEOUS_IRQ))
  			perf_bp_event(bp, regs);
- 		current->thread.last_hit_ubp[i] = NULL;
- 	}
  
- 	if (!found)
- 		return NOTIFY_DONE;
- 
- 	for (i = 0; i < nr_wp_slots(); i++) {
- 		bp = __this_cpu_read(bp_per_reg[i]);
- 		if (!bp)
- 			continue;
- 
- 		info = counter_arch_bp(bp);
- 		__set_breakpoint(i, info);
+ 		info->perf_single_step = false;
+ 		__set_breakpoint(i, counter_arch_bp(bp));
  	}
  
  	/*
  	 * If the process was being single-stepped by ptrace, let the
  	 * other single-step actions occur (e.g. generate SIGTRAP).
  	 */
- 	if (test_thread_flag(TIF_SINGLESTEP))
+ 	if (!found || test_thread_flag(TIF_SINGLESTEP))
  		return NOTIFY_DONE;
  
  	return NOTIFY_STOP;
+14 -3
arch/powerpc/kernel/iommu.c
···
  	return 0;
  }
  
- static struct notifier_block fail_iommu_bus_notifier = {
+ /*
+  * PCI and VIO buses need separate notifier_block structs, since they're linked
+  * list nodes. Sharing a notifier_block would mean that any notifiers later
+  * registered for PCI buses would also get called by VIO buses and vice versa.
+  */
+ static struct notifier_block fail_iommu_pci_bus_notifier = {
  	.notifier_call = fail_iommu_bus_notify
  };
+ 
+ #ifdef CONFIG_IBMVIO
+ static struct notifier_block fail_iommu_vio_bus_notifier = {
+ 	.notifier_call = fail_iommu_bus_notify
+ };
+ #endif
  
  static int __init fail_iommu_setup(void)
  {
  #ifdef CONFIG_PCI
- 	bus_register_notifier(&pci_bus_type, &fail_iommu_bus_notifier);
+ 	bus_register_notifier(&pci_bus_type, &fail_iommu_pci_bus_notifier);
  #endif
  #ifdef CONFIG_IBMVIO
- 	bus_register_notifier(&vio_bus_type, &fail_iommu_bus_notifier);
+ 	bus_register_notifier(&vio_bus_type, &fail_iommu_vio_bus_notifier);
  #endif
  
  	return 0;
+1 -1
arch/powerpc/kernel/legacy_serial.c
···
  #include <linux/serial_core.h>
  #include <linux/console.h>
  #include <linux/pci.h>
+ #include <linux/of.h>
  #include <linux/of_address.h>
- #include <linux/of_device.h>
  #include <linux/of_irq.h>
  #include <linux/serial_reg.h>
  #include <asm/io.h>
+1 -1
arch/powerpc/kernel/misc.S
···
   *
   * setjmp/longjmp code by Paul Mackerras.
   */
+ #include <linux/export.h>
  #include <asm/ppc_asm.h>
  #include <asm/unistd.h>
  #include <asm/asm-compat.h>
  #include <asm/asm-offsets.h>
- #include <asm/export.h>
  
  	.text
  
+1 -1
arch/powerpc/kernel/misc_32.S
···
   *
   */
  
+ #include <linux/export.h>
  #include <linux/sys.h>
  #include <asm/unistd.h>
  #include <asm/errno.h>
···
  #include <asm/processor.h>
  #include <asm/bug.h>
  #include <asm/ptrace.h>
- #include <asm/export.h>
  #include <asm/feature-fixups.h>
  
  	.text
+1 -1
arch/powerpc/kernel/misc_64.S
···
   * PPC64 updates by Dave Engebretsen (engebret@us.ibm.com)
   */
  
+ #include <linux/export.h>
  #include <linux/linkage.h>
  #include <linux/sys.h>
  #include <asm/unistd.h>
···
  #include <asm/kexec.h>
  #include <asm/ptrace.h>
  #include <asm/mmu.h>
- #include <asm/export.h>
  #include <asm/feature-fixups.h>
  
  	.text
+1 -1
arch/powerpc/kernel/module_64.c
···
  	return 0;
  }
  
- #ifdef CONFIG_MPROFILE_KERNEL
+ #if defined(CONFIG_MPROFILE_KERNEL) || defined(CONFIG_ARCH_USING_PATCHABLE_FUNCTION_ENTRY)
  
  static u32 stub_insns[] = {
  #ifdef CONFIG_PPC_KERNEL_PCREL
+1 -3
arch/powerpc/kernel/of_platform.c
···
 #include <linux/export.h>
 #include <linux/mod_devicetable.h>
 #include <linux/pci.h>
-#include <linux/of.h>
-#include <linux/of_device.h>
-#include <linux/of_platform.h>
+#include <linux/platform_device.h>
 #include <linux/atomic.h>
 
 #include <asm/errno.h>
+1 -1
arch/powerpc/kernel/pci-common.c
···
 {
 	struct pci_controller *phb;
 
-	phb = zalloc_maybe_bootmem(sizeof(struct pci_controller), GFP_KERNEL);
+	phb = kzalloc(sizeof(struct pci_controller), GFP_KERNEL);
 	if (phb == NULL)
 		return NULL;
 
+1 -1
arch/powerpc/kernel/pmc.c
···
 }
 EXPORT_SYMBOL_GPL(release_pmc_hardware);
 
-#ifdef CONFIG_PPC64
+#ifdef CONFIG_PPC_BOOK3S_64
 void power4_enable_pmcs(void)
 {
 	unsigned long hid0;
+67 -38
arch/powerpc/kernel/ptrace/ptrace-view.c
···
 	return membuf_zero(&to, (ELF_NGREG - PT_REGS_COUNT) * sizeof(u32));
 }
 
-int gpr32_set_common(struct task_struct *target,
-		     const struct user_regset *regset,
-		     unsigned int pos, unsigned int count,
-		     const void *kbuf, const void __user *ubuf,
-		     unsigned long *regs)
+static int gpr32_set_common_kernel(struct task_struct *target,
+				   const struct user_regset *regset,
+				   unsigned int pos, unsigned int count,
+				   const void *kbuf, unsigned long *regs)
 {
 	const compat_ulong_t *k = kbuf;
+
+	pos /= sizeof(compat_ulong_t);
+	count /= sizeof(compat_ulong_t);
+
+	for (; count > 0 && pos < PT_MSR; --count)
+		regs[pos++] = *k++;
+
+	if (count > 0 && pos == PT_MSR) {
+		set_user_msr(target, *k++);
+		++pos;
+		--count;
+	}
+
+	for (; count > 0 && pos <= PT_MAX_PUT_REG; --count)
+		regs[pos++] = *k++;
+	for (; count > 0 && pos < PT_TRAP; --count, ++pos)
+		++k;
+
+	if (count > 0 && pos == PT_TRAP) {
+		set_user_trap(target, *k++);
+		++pos;
+		--count;
+	}
+
+	kbuf = k;
+	pos *= sizeof(compat_ulong_t);
+	count *= sizeof(compat_ulong_t);
+	user_regset_copyin_ignore(&pos, &count, &kbuf, NULL,
+				  (PT_TRAP + 1) * sizeof(compat_ulong_t), -1);
+	return 0;
+}
+
+static int gpr32_set_common_user(struct task_struct *target,
+				 const struct user_regset *regset,
+				 unsigned int pos, unsigned int count,
+				 const void __user *ubuf, unsigned long *regs)
+{
 	const compat_ulong_t __user *u = ubuf;
+	const void *kbuf = NULL;
 	compat_ulong_t reg;
 
-	if (!kbuf && !user_read_access_begin(u, count))
+	if (!user_read_access_begin(u, count))
 		return -EFAULT;
 
 	pos /= sizeof(reg);
 	count /= sizeof(reg);
 
-	if (kbuf)
-		for (; count > 0 && pos < PT_MSR; --count)
-			regs[pos++] = *k++;
-	else
-		for (; count > 0 && pos < PT_MSR; --count) {
-			unsafe_get_user(reg, u++, Efault);
-			regs[pos++] = reg;
-		}
-
+	for (; count > 0 && pos < PT_MSR; --count) {
+		unsafe_get_user(reg, u++, Efault);
+		regs[pos++] = reg;
+	}
 
 	if (count > 0 && pos == PT_MSR) {
-		if (kbuf)
-			reg = *k++;
-		else
-			unsafe_get_user(reg, u++, Efault);
+		unsafe_get_user(reg, u++, Efault);
 		set_user_msr(target, reg);
 		++pos;
 		--count;
 	}
 
-	if (kbuf) {
-		for (; count > 0 && pos <= PT_MAX_PUT_REG; --count)
-			regs[pos++] = *k++;
-		for (; count > 0 && pos < PT_TRAP; --count, ++pos)
-			++k;
-	} else {
-		for (; count > 0 && pos <= PT_MAX_PUT_REG; --count) {
-			unsafe_get_user(reg, u++, Efault);
-			regs[pos++] = reg;
-		}
-		for (; count > 0 && pos < PT_TRAP; --count, ++pos)
-			unsafe_get_user(reg, u++, Efault);
+	for (; count > 0 && pos <= PT_MAX_PUT_REG; --count) {
+		unsafe_get_user(reg, u++, Efault);
+		regs[pos++] = reg;
 	}
+	for (; count > 0 && pos < PT_TRAP; --count, ++pos)
+		unsafe_get_user(reg, u++, Efault);
 
 	if (count > 0 && pos == PT_TRAP) {
-		if (kbuf)
-			reg = *k++;
-		else
-			unsafe_get_user(reg, u++, Efault);
+		unsafe_get_user(reg, u++, Efault);
 		set_user_trap(target, reg);
 		++pos;
 		--count;
 	}
-	if (!kbuf)
-		user_read_access_end();
+	user_read_access_end();
 
-	kbuf = k;
 	ubuf = u;
 	pos *= sizeof(reg);
 	count *= sizeof(reg);
···
 Efault:
 	user_read_access_end();
 	return -EFAULT;
+}
+
+int gpr32_set_common(struct task_struct *target,
+		     const struct user_regset *regset,
+		     unsigned int pos, unsigned int count,
+		     const void *kbuf, const void __user *ubuf,
+		     unsigned long *regs)
+{
+	if (kbuf)
+		return gpr32_set_common_kernel(target, regset, pos, count, kbuf, regs);
+	else
+		return gpr32_set_common_user(target, regset, pos, count, ubuf, regs);
 }
 
 static int gpr32_get(struct task_struct *target,
+24 -21
arch/powerpc/kernel/rtas.c
···
 }
 EXPORT_SYMBOL_GPL(rtas_busy_delay);
 
-static int rtas_error_rc(int rtas_rc)
+int rtas_error_rc(int rtas_rc)
 {
 	int rc;
 
 	switch (rtas_rc) {
-	case -1:			/* Hardware Error */
-		rc = -EIO;
-		break;
-	case -3:			/* Bad indicator/domain/etc */
-		rc = -EINVAL;
-		break;
-	case -9000:			/* Isolation error */
-		rc = -EFAULT;
-		break;
-	case -9001:			/* Outstanding TCE/PTE */
-		rc = -EEXIST;
-		break;
-	case -9002:			/* No usable slot */
-		rc = -ENODEV;
-		break;
-	default:
-		pr_err("%s: unexpected error %d\n", __func__, rtas_rc);
-		rc = -ERANGE;
-		break;
+	case RTAS_HARDWARE_ERROR:	/* Hardware Error */
+		rc = -EIO;
+		break;
+	case RTAS_INVALID_PARAMETER:	/* Bad indicator/domain/etc */
+		rc = -EINVAL;
+		break;
+	case -9000:			/* Isolation error */
+		rc = -EFAULT;
+		break;
+	case -9001:			/* Outstanding TCE/PTE */
+		rc = -EEXIST;
+		break;
+	case -9002:			/* No usable slot */
+		rc = -ENODEV;
+		break;
+	default:
+		pr_err("%s: unexpected error %d\n", __func__, rtas_rc);
+		rc = -ERANGE;
+		break;
 	}
 	return rc;
 }
+EXPORT_SYMBOL_GPL(rtas_error_rc);
 
 int rtas_get_power_level(int powerdomain, int *level)
 {
···
 void rtas_os_term(char *str)
 {
 	s32 token = rtas_function_token(RTAS_FN_IBM_OS_TERM);
+	static struct rtas_args args;
 	int status;
 
 	/*
···
 	 * schedules.
 	 */
 	do {
-		status = rtas_call(token, 1, 1, NULL, __pa(rtas_os_term_buf));
+		rtas_call_unlocked(&args, token, 1, 1, NULL, __pa(rtas_os_term_buf));
+		status = be32_to_cpu(args.rets[0]);
 	} while (rtas_busy_delay_time(status));
 
 	if (status != 0)
+7 -3
arch/powerpc/kernel/setup-common.c
···
 #include <linux/serial_8250.h>
 #include <linux/percpu.h>
 #include <linux/memblock.h>
-#include <linux/of_irq.h>
+#include <linux/of.h>
 #include <linux/of_fdt.h>
-#include <linux/of_platform.h>
+#include <linux/of_irq.h>
 #include <linux/hugetlb.h>
 #include <linux/pgtable.h>
 #include <asm/io.h>
···
 	klp_init_thread_info(&init_task);
 
 	setup_initial_init_mm(_stext, _etext, _edata, _end);
-
+	/* sched_init() does the mmgrab(&init_mm) for the primary CPU */
+	VM_WARN_ON(cpumask_test_cpu(smp_processor_id(), mm_cpumask(&init_mm)));
+	cpumask_set_cpu(smp_processor_id(), mm_cpumask(&init_mm));
+	inc_mm_active_cpus(&init_mm);
 	mm_iommu_init(&init_mm);
+
 	irqstack_early_init();
 	exc_lvl_early_init();
 	emergency_stack_init();
+19 -1
arch/powerpc/kernel/smp.c
···
 #include <asm/smp.h>
 #include <asm/time.h>
 #include <asm/machdep.h>
+#include <asm/mmu_context.h>
 #include <asm/cputhreads.h>
 #include <asm/cputable.h>
 #include <asm/mpic.h>
···
 
 void __init smp_prepare_cpus(unsigned int max_cpus)
 {
-	unsigned int cpu;
+	unsigned int cpu, num_threads;
 
 	DBG("smp_prepare_cpus\n");
 
···
 
 	if (smp_ops && smp_ops->probe)
 		smp_ops->probe();
+
+	// Initalise the generic SMT topology support
+	num_threads = 1;
+	if (smt_enabled_at_boot)
+		num_threads = smt_enabled_at_boot;
+	cpu_smt_set_num_threads(num_threads, threads_per_core);
 }
 
 void smp_prepare_boot_cpu(void)
···
 
 	mmgrab_lazy_tlb(&init_mm);
 	current->active_mm = &init_mm;
+	VM_WARN_ON(cpumask_test_cpu(smp_processor_id(), mm_cpumask(&init_mm)));
+	cpumask_set_cpu(cpu, mm_cpumask(&init_mm));
+	inc_mm_active_cpus(&init_mm);
 
 	smp_store_cpu_info(cpu);
 	set_dec(tb_ticks_per_jiffy);
···
 
 void __cpu_die(unsigned int cpu)
 {
+	/*
+	 * This could perhaps be a generic call in idlea_task_dead(), but
+	 * that requires testing from all archs, so first put it here to
+	 */
+	VM_WARN_ON_ONCE(!cpumask_test_cpu(cpu, mm_cpumask(&init_mm)));
+	dec_mm_active_cpus(&init_mm);
+	cpumask_clear_cpu(cpu, mm_cpumask(&init_mm));
+
 	if (smp_ops->cpu_die)
 		smp_ops->cpu_die(cpu);
 }
+1 -1
arch/powerpc/kernel/syscall.c
···
 	iamr = mfspr(SPRN_IAMR);
 	regs->amr = amr;
 	regs->iamr = iamr;
-	if (mmu_has_feature(MMU_FTR_BOOK3S_KUAP)) {
+	if (mmu_has_feature(MMU_FTR_KUAP)) {
 		mtspr(SPRN_AMR, AMR_KUAP_BLOCKED);
 		flush_needed = true;
 	}
+1 -1
arch/powerpc/kernel/tm.S
···
  * Copyright 2012 Matt Evans & Michael Neuling, IBM Corporation.
  */
 
+#include <linux/export.h>
 #include <asm/asm-offsets.h>
 #include <asm/ppc_asm.h>
 #include <asm/ppc-opcode.h>
 #include <asm/ptrace.h>
 #include <asm/reg.h>
 #include <asm/bug.h>
-#include <asm/export.h>
 #include <asm/feature-fixups.h>
 
 #ifdef CONFIG_VSX
+8 -4
arch/powerpc/kernel/trace/Makefile
···
 ifdef CONFIG_FUNCTION_TRACER
 # do not trace tracer code
 CFLAGS_REMOVE_ftrace.o = $(CC_FLAGS_FTRACE)
+CFLAGS_REMOVE_ftrace_64_pg.o = $(CC_FLAGS_FTRACE)
 endif
 
-obj32-$(CONFIG_FUNCTION_TRACER)	+= ftrace_mprofile.o
+obj32-$(CONFIG_FUNCTION_TRACER)	+= ftrace.o ftrace_entry.o
 ifdef CONFIG_MPROFILE_KERNEL
-obj64-$(CONFIG_FUNCTION_TRACER)	+= ftrace_mprofile.o
+obj64-$(CONFIG_FUNCTION_TRACER)	+= ftrace.o ftrace_entry.o
 else
-obj64-$(CONFIG_FUNCTION_TRACER)	+= ftrace_64_pg.o
+obj64-$(CONFIG_FUNCTION_TRACER)	+= ftrace_64_pg.o ftrace_64_pg_entry.o
 endif
-obj-$(CONFIG_FUNCTION_TRACER)	+= ftrace_low.o ftrace.o
 obj-$(CONFIG_TRACING)		+= trace_clock.o
 
 obj-$(CONFIG_PPC64)		+= $(obj64-y)
···
 KCOV_INSTRUMENT_ftrace.o := n
 KCSAN_SANITIZE_ftrace.o := n
 UBSAN_SANITIZE_ftrace.o := n
+GCOV_PROFILE_ftrace_64_pg.o := n
+KCOV_INSTRUMENT_ftrace_64_pg.o := n
+KCSAN_SANITIZE_ftrace_64_pg.o := n
+UBSAN_SANITIZE_ftrace_64_pg.o := n
+232 -676
arch/powerpc/kernel/trace/ftrace.c
··· 28 28 #include <asm/syscall.h> 29 29 #include <asm/inst.h> 30 30 31 - /* 32 - * We generally only have a single long_branch tramp and at most 2 or 3 plt 33 - * tramps generated. But, we don't use the plt tramps currently. We also allot 34 - * 2 tramps after .text and .init.text. So, we only end up with around 3 usable 35 - * tramps in total. Set aside 8 just to be sure. 36 - */ 37 - #define NUM_FTRACE_TRAMPS 8 31 + #define NUM_FTRACE_TRAMPS 2 38 32 static unsigned long ftrace_tramps[NUM_FTRACE_TRAMPS]; 39 33 40 - static ppc_inst_t 41 - ftrace_call_replace(unsigned long ip, unsigned long addr, int link) 34 + static ppc_inst_t ftrace_create_branch_inst(unsigned long ip, unsigned long addr, int link) 42 35 { 43 36 ppc_inst_t op; 44 37 45 - addr = ppc_function_entry((void *)addr); 46 - 47 - /* if (link) set op to 'bl' else 'b' */ 38 + WARN_ON(!is_offset_in_branch_range(addr - ip)); 48 39 create_branch(&op, (u32 *)ip, addr, link ? BRANCH_SET_LINK : 0); 49 40 50 41 return op; 51 42 } 52 43 53 - static inline int 54 - ftrace_modify_code(unsigned long ip, ppc_inst_t old, ppc_inst_t new) 44 + static inline int ftrace_read_inst(unsigned long ip, ppc_inst_t *op) 55 45 { 56 - ppc_inst_t replaced; 57 - 58 - /* 59 - * Note: 60 - * We are paranoid about modifying text, as if a bug was to happen, it 61 - * could cause us to read or write to someplace that could cause harm. 62 - * Carefully read and modify the code with probe_kernel_*(), and make 63 - * sure what we read is what we expected it to be before modifying it. 
64 - */ 65 - 66 - /* read the text we want to modify */ 67 - if (copy_inst_from_kernel_nofault(&replaced, (void *)ip)) 46 + if (copy_inst_from_kernel_nofault(op, (void *)ip)) { 47 + pr_err("0x%lx: fetching instruction failed\n", ip); 68 48 return -EFAULT; 69 - 70 - /* Make sure it is what we expect it to be */ 71 - if (!ppc_inst_equal(replaced, old)) { 72 - pr_err("%p: replaced (%08lx) != old (%08lx)", (void *)ip, 73 - ppc_inst_as_ulong(replaced), ppc_inst_as_ulong(old)); 74 - return -EINVAL; 75 49 } 76 50 77 - /* replace the text with the new text */ 78 - return patch_instruction((u32 *)ip, new); 51 + return 0; 79 52 } 80 53 81 - /* 82 - * Helper functions that are the same for both PPC64 and PPC32. 83 - */ 84 - static int test_24bit_addr(unsigned long ip, unsigned long addr) 54 + static inline int ftrace_validate_inst(unsigned long ip, ppc_inst_t inst) 85 55 { 86 - addr = ppc_function_entry((void *)addr); 56 + ppc_inst_t op; 57 + int ret; 87 58 88 - return is_offset_in_branch_range(addr - ip); 59 + ret = ftrace_read_inst(ip, &op); 60 + if (!ret && !ppc_inst_equal(op, inst)) { 61 + pr_err("0x%lx: expected (%08lx) != found (%08lx)\n", 62 + ip, ppc_inst_as_ulong(inst), ppc_inst_as_ulong(op)); 63 + ret = -EINVAL; 64 + } 65 + 66 + return ret; 67 + } 68 + 69 + static inline int ftrace_modify_code(unsigned long ip, ppc_inst_t old, ppc_inst_t new) 70 + { 71 + int ret = ftrace_validate_inst(ip, old); 72 + 73 + if (!ret) 74 + ret = patch_instruction((u32 *)ip, new); 75 + 76 + return ret; 89 77 } 90 78 91 79 static int is_bl_op(ppc_inst_t op) ··· 81 93 return (ppc_inst_val(op) & ~PPC_LI_MASK) == PPC_RAW_BL(0); 82 94 } 83 95 84 - static int is_b_op(ppc_inst_t op) 85 - { 86 - return (ppc_inst_val(op) & ~PPC_LI_MASK) == PPC_RAW_BRANCH(0); 87 - } 88 - 89 - static unsigned long find_bl_target(unsigned long ip, ppc_inst_t op) 90 - { 91 - int offset; 92 - 93 - offset = PPC_LI(ppc_inst_val(op)); 94 - /* make it signed */ 95 - if (offset & 0x02000000) 96 - offset |= 0xfe000000; 97 - 
98 - return ip + (long)offset; 99 - } 100 - 101 - #ifdef CONFIG_MODULES 102 - static int 103 - __ftrace_make_nop(struct module *mod, 104 - struct dyn_ftrace *rec, unsigned long addr) 105 - { 106 - unsigned long entry, ptr, tramp; 107 - unsigned long ip = rec->ip; 108 - ppc_inst_t op, pop; 109 - 110 - /* read where this goes */ 111 - if (copy_inst_from_kernel_nofault(&op, (void *)ip)) { 112 - pr_err("Fetching opcode failed.\n"); 113 - return -EFAULT; 114 - } 115 - 116 - /* Make sure that this is still a 24bit jump */ 117 - if (!is_bl_op(op)) { 118 - pr_err("Not expected bl: opcode is %08lx\n", ppc_inst_as_ulong(op)); 119 - return -EINVAL; 120 - } 121 - 122 - /* lets find where the pointer goes */ 123 - tramp = find_bl_target(ip, op); 124 - 125 - pr_devel("ip:%lx jumps to %lx", ip, tramp); 126 - 127 - if (module_trampoline_target(mod, tramp, &ptr)) { 128 - pr_err("Failed to get trampoline target\n"); 129 - return -EFAULT; 130 - } 131 - 132 - pr_devel("trampoline target %lx", ptr); 133 - 134 - entry = ppc_global_function_entry((void *)addr); 135 - /* This should match what was called */ 136 - if (ptr != entry) { 137 - pr_err("addr %lx does not match expected %lx\n", ptr, entry); 138 - return -EINVAL; 139 - } 140 - 141 - if (IS_ENABLED(CONFIG_MPROFILE_KERNEL)) { 142 - if (copy_inst_from_kernel_nofault(&op, (void *)(ip - 4))) { 143 - pr_err("Fetching instruction at %lx failed.\n", ip - 4); 144 - return -EFAULT; 145 - } 146 - 147 - /* We expect either a mflr r0, or a std r0, LRSAVE(r1) */ 148 - if (!ppc_inst_equal(op, ppc_inst(PPC_RAW_MFLR(_R0))) && 149 - !ppc_inst_equal(op, ppc_inst(PPC_INST_STD_LR))) { 150 - pr_err("Unexpected instruction %08lx around bl _mcount\n", 151 - ppc_inst_as_ulong(op)); 152 - return -EINVAL; 153 - } 154 - } else if (IS_ENABLED(CONFIG_PPC64)) { 155 - /* 156 - * Check what is in the next instruction. We can see ld r2,40(r1), but 157 - * on first pass after boot we will see mflr r0. 
158 - */ 159 - if (copy_inst_from_kernel_nofault(&op, (void *)(ip + 4))) { 160 - pr_err("Fetching op failed.\n"); 161 - return -EFAULT; 162 - } 163 - 164 - if (!ppc_inst_equal(op, ppc_inst(PPC_INST_LD_TOC))) { 165 - pr_err("Expected %08lx found %08lx\n", PPC_INST_LD_TOC, 166 - ppc_inst_as_ulong(op)); 167 - return -EINVAL; 168 - } 169 - } 170 - 171 - /* 172 - * When using -mprofile-kernel or PPC32 there is no load to jump over. 173 - * 174 - * Otherwise our original call site looks like: 175 - * 176 - * bl <tramp> 177 - * ld r2,XX(r1) 178 - * 179 - * Milton Miller pointed out that we can not simply nop the branch. 180 - * If a task was preempted when calling a trace function, the nops 181 - * will remove the way to restore the TOC in r2 and the r2 TOC will 182 - * get corrupted. 183 - * 184 - * Use a b +8 to jump over the load. 185 - * XXX: could make PCREL depend on MPROFILE_KERNEL 186 - * XXX: check PCREL && MPROFILE_KERNEL calling sequence 187 - */ 188 - if (IS_ENABLED(CONFIG_MPROFILE_KERNEL) || IS_ENABLED(CONFIG_PPC32)) 189 - pop = ppc_inst(PPC_RAW_NOP()); 190 - else 191 - pop = ppc_inst(PPC_RAW_BRANCH(8)); /* b +8 */ 192 - 193 - if (patch_instruction((u32 *)ip, pop)) { 194 - pr_err("Patching NOP failed.\n"); 195 - return -EPERM; 196 - } 197 - 198 - return 0; 199 - } 200 - #else 201 - static int __ftrace_make_nop(struct module *mod, struct dyn_ftrace *rec, unsigned long addr) 202 - { 203 - return 0; 204 - } 205 - #endif /* CONFIG_MODULES */ 206 - 207 96 static unsigned long find_ftrace_tramp(unsigned long ip) 208 97 { 209 98 int i; 210 99 211 - /* 212 - * We have the compiler generated long_branch tramps at the end 213 - * and we prefer those 214 - */ 215 - for (i = NUM_FTRACE_TRAMPS - 1; i >= 0; i--) 100 + for (i = 0; i < NUM_FTRACE_TRAMPS; i++) 216 101 if (!ftrace_tramps[i]) 217 102 continue; 218 103 else if (is_offset_in_branch_range(ftrace_tramps[i] - ip)) ··· 94 233 return 0; 95 234 } 96 235 97 - static int add_ftrace_tramp(unsigned long tramp) 98 - { 99 - 
int i; 100 - 101 - for (i = 0; i < NUM_FTRACE_TRAMPS; i++) 102 - if (!ftrace_tramps[i]) { 103 - ftrace_tramps[i] = tramp; 104 - return 0; 105 - } 106 - 107 - return -1; 108 - } 109 - 110 - /* 111 - * If this is a compiler generated long_branch trampoline (essentially, a 112 - * trampoline that has a branch to _mcount()), we re-write the branch to 113 - * instead go to ftrace_[regs_]caller() and note down the location of this 114 - * trampoline. 115 - */ 116 - static int setup_mcount_compiler_tramp(unsigned long tramp) 117 - { 118 - int i; 119 - ppc_inst_t op; 120 - unsigned long ptr; 121 - 122 - /* Is this a known long jump tramp? */ 123 - for (i = 0; i < NUM_FTRACE_TRAMPS; i++) 124 - if (ftrace_tramps[i] == tramp) 125 - return 0; 126 - 127 - /* New trampoline -- read where this goes */ 128 - if (copy_inst_from_kernel_nofault(&op, (void *)tramp)) { 129 - pr_debug("Fetching opcode failed.\n"); 130 - return -1; 131 - } 132 - 133 - /* Is this a 24 bit branch? */ 134 - if (!is_b_op(op)) { 135 - pr_debug("Trampoline is not a long branch tramp.\n"); 136 - return -1; 137 - } 138 - 139 - /* lets find where the pointer goes */ 140 - ptr = find_bl_target(tramp, op); 141 - 142 - if (ptr != ppc_global_function_entry((void *)_mcount)) { 143 - pr_debug("Trampoline target %p is not _mcount\n", (void *)ptr); 144 - return -1; 145 - } 146 - 147 - /* Let's re-write the tramp to go to ftrace_[regs_]caller */ 148 - if (IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_REGS)) 149 - ptr = ppc_global_function_entry((void *)ftrace_regs_caller); 150 - else 151 - ptr = ppc_global_function_entry((void *)ftrace_caller); 152 - 153 - if (patch_branch((u32 *)tramp, ptr, 0)) { 154 - pr_debug("REL24 out of range!\n"); 155 - return -1; 156 - } 157 - 158 - if (add_ftrace_tramp(tramp)) { 159 - pr_debug("No tramp locations left\n"); 160 - return -1; 161 - } 162 - 163 - return 0; 164 - } 165 - 166 - static int __ftrace_make_nop_kernel(struct dyn_ftrace *rec, unsigned long addr) 167 - { 168 - unsigned long tramp, ip 
= rec->ip; 169 - ppc_inst_t op; 170 - 171 - /* Read where this goes */ 172 - if (copy_inst_from_kernel_nofault(&op, (void *)ip)) { 173 - pr_err("Fetching opcode failed.\n"); 174 - return -EFAULT; 175 - } 176 - 177 - /* Make sure that this is still a 24bit jump */ 178 - if (!is_bl_op(op)) { 179 - pr_err("Not expected bl: opcode is %08lx\n", ppc_inst_as_ulong(op)); 180 - return -EINVAL; 181 - } 182 - 183 - /* Let's find where the pointer goes */ 184 - tramp = find_bl_target(ip, op); 185 - 186 - pr_devel("ip:%lx jumps to %lx", ip, tramp); 187 - 188 - if (setup_mcount_compiler_tramp(tramp)) { 189 - /* Are other trampolines reachable? */ 190 - if (!find_ftrace_tramp(ip)) { 191 - pr_err("No ftrace trampolines reachable from %ps\n", 192 - (void *)ip); 193 - return -EINVAL; 194 - } 195 - } 196 - 197 - if (patch_instruction((u32 *)ip, ppc_inst(PPC_RAW_NOP()))) { 198 - pr_err("Patching NOP failed.\n"); 199 - return -EPERM; 200 - } 201 - 202 - return 0; 203 - } 204 - 205 - int ftrace_make_nop(struct module *mod, 206 - struct dyn_ftrace *rec, unsigned long addr) 236 + static int ftrace_get_call_inst(struct dyn_ftrace *rec, unsigned long addr, ppc_inst_t *call_inst) 207 237 { 208 238 unsigned long ip = rec->ip; 209 - ppc_inst_t old, new; 239 + unsigned long stub; 210 240 211 - /* 212 - * If the calling address is more that 24 bits away, 213 - * then we had to use a trampoline to make the call. 214 - * Otherwise just update the call site. 215 - */ 216 - if (test_24bit_addr(ip, addr)) { 217 - /* within range */ 218 - old = ftrace_call_replace(ip, addr, 1); 219 - new = ppc_inst(PPC_RAW_NOP()); 220 - return ftrace_modify_code(ip, old, new); 221 - } else if (core_kernel_text(ip)) { 222 - return __ftrace_make_nop_kernel(rec, addr); 223 - } else if (!IS_ENABLED(CONFIG_MODULES)) { 224 - return -EINVAL; 225 - } 226 - 227 - /* 228 - * Out of range jumps are called from modules. 229 - * We should either already have a pointer to the module 230 - * or it has been passed in. 
231 - */ 232 - if (!rec->arch.mod) { 233 - if (!mod) { 234 - pr_err("No module loaded addr=%lx\n", addr); 235 - return -EFAULT; 236 - } 237 - rec->arch.mod = mod; 238 - } else if (mod) { 239 - if (mod != rec->arch.mod) { 240 - pr_err("Record mod %p not equal to passed in mod %p\n", 241 - rec->arch.mod, mod); 242 - return -EINVAL; 243 - } 244 - /* nothing to do if mod == rec->arch.mod */ 245 - } else 246 - mod = rec->arch.mod; 247 - 248 - return __ftrace_make_nop(mod, rec, addr); 249 - } 250 - 241 + if (is_offset_in_branch_range(addr - ip)) { 242 + /* Within range */ 243 + stub = addr; 251 244 #ifdef CONFIG_MODULES 252 - /* 253 - * Examine the existing instructions for __ftrace_make_call. 254 - * They should effectively be a NOP, and follow formal constraints, 255 - * depending on the ABI. Return false if they don't. 256 - */ 257 - static bool expected_nop_sequence(void *ip, ppc_inst_t op0, ppc_inst_t op1) 258 - { 259 - if (IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_REGS)) 260 - return ppc_inst_equal(op0, ppc_inst(PPC_RAW_NOP())); 261 - else 262 - return ppc_inst_equal(op0, ppc_inst(PPC_RAW_BRANCH(8))) && 263 - ppc_inst_equal(op1, ppc_inst(PPC_INST_LD_TOC)); 264 - } 265 - 266 - static int 267 - __ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr) 268 - { 269 - ppc_inst_t op[2]; 270 - void *ip = (void *)rec->ip; 271 - unsigned long entry, ptr, tramp; 272 - struct module *mod = rec->arch.mod; 273 - 274 - /* read where this goes */ 275 - if (copy_inst_from_kernel_nofault(op, ip)) 276 - return -EFAULT; 277 - 278 - if (!IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_REGS) && 279 - copy_inst_from_kernel_nofault(op + 1, ip + 4)) 280 - return -EFAULT; 281 - 282 - if (!expected_nop_sequence(ip, op[0], op[1])) { 283 - pr_err("Unexpected call sequence at %p: %08lx %08lx\n", ip, 284 - ppc_inst_as_ulong(op[0]), ppc_inst_as_ulong(op[1])); 285 - return -EINVAL; 286 - } 287 - 288 - /* If we never set up ftrace trampoline(s), then bail */ 289 - if (!mod->arch.tramp || 290 - 
(IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_REGS) && !mod->arch.tramp_regs)) { 291 - pr_err("No ftrace trampoline\n"); 292 - return -EINVAL; 293 - } 294 - 295 - if (IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_REGS) && rec->flags & FTRACE_FL_REGS) 296 - tramp = mod->arch.tramp_regs; 297 - else 298 - tramp = mod->arch.tramp; 299 - 300 - if (module_trampoline_target(mod, tramp, &ptr)) { 301 - pr_err("Failed to get trampoline target\n"); 302 - return -EFAULT; 303 - } 304 - 305 - pr_devel("trampoline target %lx", ptr); 306 - 307 - entry = ppc_global_function_entry((void *)addr); 308 - /* This should match what was called */ 309 - if (ptr != entry) { 310 - pr_err("addr %lx does not match expected %lx\n", ptr, entry); 311 - return -EINVAL; 312 - } 313 - 314 - if (patch_branch(ip, tramp, BRANCH_SET_LINK)) { 315 - pr_err("REL24 out of range!\n"); 316 - return -EINVAL; 317 - } 318 - 319 - return 0; 320 - } 321 - #else 322 - static int __ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr) 323 - { 324 - return 0; 325 - } 326 - #endif /* CONFIG_MODULES */ 327 - 328 - static int __ftrace_make_call_kernel(struct dyn_ftrace *rec, unsigned long addr) 329 - { 330 - ppc_inst_t op; 331 - void *ip = (void *)rec->ip; 332 - unsigned long tramp, entry, ptr; 333 - 334 - /* Make sure we're being asked to patch branch to a known ftrace addr */ 335 - entry = ppc_global_function_entry((void *)ftrace_caller); 336 - ptr = ppc_global_function_entry((void *)addr); 337 - 338 - if (ptr != entry && IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_REGS)) 339 - entry = ppc_global_function_entry((void *)ftrace_regs_caller); 340 - 341 - if (ptr != entry) { 342 - pr_err("Unknown ftrace addr to patch: %ps\n", (void *)ptr); 343 - return -EINVAL; 344 - } 345 - 346 - /* Make sure we have a nop */ 347 - if (copy_inst_from_kernel_nofault(&op, ip)) { 348 - pr_err("Unable to read ftrace location %p\n", ip); 349 - return -EFAULT; 350 - } 351 - 352 - if (!ppc_inst_equal(op, ppc_inst(PPC_RAW_NOP()))) { 353 - pr_err("Unexpected 
call sequence at %p: %08lx\n", 354 - ip, ppc_inst_as_ulong(op)); 355 - return -EINVAL; 356 - } 357 - 358 - tramp = find_ftrace_tramp((unsigned long)ip); 359 - if (!tramp) { 360 - pr_err("No ftrace trampolines reachable from %ps\n", ip); 361 - return -EINVAL; 362 - } 363 - 364 - if (patch_branch(ip, tramp, BRANCH_SET_LINK)) { 365 - pr_err("Error patching branch to ftrace tramp!\n"); 366 - return -EINVAL; 367 - } 368 - 369 - return 0; 370 - } 371 - 372 - int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr) 373 - { 374 - unsigned long ip = rec->ip; 375 - ppc_inst_t old, new; 376 - 377 - /* 378 - * If the calling address is more that 24 bits away, 379 - * then we had to use a trampoline to make the call. 380 - * Otherwise just update the call site. 381 - */ 382 - if (test_24bit_addr(ip, addr)) { 383 - /* within range */ 384 - old = ppc_inst(PPC_RAW_NOP()); 385 - new = ftrace_call_replace(ip, addr, 1); 386 - return ftrace_modify_code(ip, old, new); 245 + } else if (rec->arch.mod) { 246 + /* Module code would be going to one of the module stubs */ 247 + stub = (addr == (unsigned long)ftrace_caller ? rec->arch.mod->arch.tramp : 248 + rec->arch.mod->arch.tramp_regs); 249 + #endif 387 250 } else if (core_kernel_text(ip)) { 388 - return __ftrace_make_call_kernel(rec, addr); 389 - } else if (!IS_ENABLED(CONFIG_MODULES)) { 390 - /* We should not get here without modules */ 251 + /* We would be branching to one of our ftrace stubs */ 252 + stub = find_ftrace_tramp(ip); 253 + if (!stub) { 254 + pr_err("0x%lx: No ftrace stubs reachable\n", ip); 255 + return -EINVAL; 256 + } 257 + } else { 391 258 return -EINVAL; 392 259 } 393 260 394 - /* 395 - * Out of range jumps are called from modules. 396 - * Being that we are converting from nop, it had better 397 - * already have a module defined. 
398 - */ 399 - if (!rec->arch.mod) { 400 - pr_err("No module loaded\n"); 401 - return -EINVAL; 402 - } 403 - 404 - return __ftrace_make_call(rec, addr); 261 + *call_inst = ftrace_create_branch_inst(ip, stub, 1); 262 + return 0; 405 263 } 406 264 407 265 #ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS 408 - #ifdef CONFIG_MODULES 409 - static int 410 - __ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr, 411 - unsigned long addr) 266 + int ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr, unsigned long addr) 412 267 { 413 - ppc_inst_t op; 414 - unsigned long ip = rec->ip; 415 - unsigned long entry, ptr, tramp; 416 - struct module *mod = rec->arch.mod; 268 + /* This should never be called since we override ftrace_replace_code() */ 269 + WARN_ON(1); 270 + return -EINVAL; 271 + } 272 + #endif 417 273 418 - /* If we never set up ftrace trampolines, then bail */ 419 - if (!mod->arch.tramp || !mod->arch.tramp_regs) { 420 - pr_err("No ftrace trampoline\n"); 274 + int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr) 275 + { 276 + ppc_inst_t old, new; 277 + int ret; 278 + 279 + /* This can only ever be called during module load */ 280 + if (WARN_ON(!IS_ENABLED(CONFIG_MODULES) || core_kernel_text(rec->ip))) 281 + return -EINVAL; 282 + 283 + old = ppc_inst(PPC_RAW_NOP()); 284 + ret = ftrace_get_call_inst(rec, addr, &new); 285 + if (ret) 286 + return ret; 287 + 288 + return ftrace_modify_code(rec->ip, old, new); 289 + } 290 + 291 + int ftrace_make_nop(struct module *mod, struct dyn_ftrace *rec, unsigned long addr) 292 + { 293 + /* 294 + * This should never be called since we override ftrace_replace_code(), 295 + * as well as ftrace_init_nop() 296 + */ 297 + WARN_ON(1); 298 + return -EINVAL; 299 + } 300 + 301 + void ftrace_replace_code(int enable) 302 + { 303 + ppc_inst_t old, new, call_inst, new_call_inst; 304 + ppc_inst_t nop_inst = ppc_inst(PPC_RAW_NOP()); 305 + unsigned long ip, new_addr, addr; 306 + struct ftrace_rec_iter *iter; 307 + 
struct dyn_ftrace *rec; 308 + int ret = 0, update; 309 + 310 + for_ftrace_rec_iter(iter) { 311 + rec = ftrace_rec_iter_record(iter); 312 + ip = rec->ip; 313 + 314 + if (rec->flags & FTRACE_FL_DISABLED && !(rec->flags & FTRACE_FL_ENABLED)) 315 + continue; 316 + 317 + addr = ftrace_get_addr_curr(rec); 318 + new_addr = ftrace_get_addr_new(rec); 319 + update = ftrace_update_record(rec, enable); 320 + 321 + switch (update) { 322 + case FTRACE_UPDATE_IGNORE: 323 + default: 324 + continue; 325 + case FTRACE_UPDATE_MODIFY_CALL: 326 + ret = ftrace_get_call_inst(rec, new_addr, &new_call_inst); 327 + ret |= ftrace_get_call_inst(rec, addr, &call_inst); 328 + old = call_inst; 329 + new = new_call_inst; 330 + break; 331 + case FTRACE_UPDATE_MAKE_NOP: 332 + ret = ftrace_get_call_inst(rec, addr, &call_inst); 333 + old = call_inst; 334 + new = nop_inst; 335 + break; 336 + case FTRACE_UPDATE_MAKE_CALL: 337 + ret = ftrace_get_call_inst(rec, new_addr, &call_inst); 338 + old = nop_inst; 339 + new = call_inst; 340 + break; 341 + } 342 + 343 + if (!ret) 344 + ret = ftrace_modify_code(ip, old, new); 345 + if (ret) 346 + goto out; 347 + } 348 + 349 + out: 350 + if (ret) 351 + ftrace_bug(ret, rec); 352 + return; 353 + } 354 + 355 + int ftrace_init_nop(struct module *mod, struct dyn_ftrace *rec) 356 + { 357 + unsigned long addr, ip = rec->ip; 358 + ppc_inst_t old, new; 359 + int ret = 0; 360 + 361 + /* Verify instructions surrounding the ftrace location */ 362 + if (IS_ENABLED(CONFIG_ARCH_USING_PATCHABLE_FUNCTION_ENTRY)) { 363 + /* Expect nops */ 364 + ret = ftrace_validate_inst(ip - 4, ppc_inst(PPC_RAW_NOP())); 365 + if (!ret) 366 + ret = ftrace_validate_inst(ip, ppc_inst(PPC_RAW_NOP())); 367 + } else if (IS_ENABLED(CONFIG_PPC32)) { 368 + /* Expected sequence: 'mflr r0', 'stw r0,4(r1)', 'bl _mcount' */ 369 + ret = ftrace_validate_inst(ip - 8, ppc_inst(PPC_RAW_MFLR(_R0))); 370 + if (!ret) 371 + ret = ftrace_validate_inst(ip - 4, ppc_inst(PPC_RAW_STW(_R0, _R1, 4))); 372 + } else if 
(IS_ENABLED(CONFIG_MPROFILE_KERNEL)) { 373 + /* Expected sequence: 'mflr r0', ['std r0,16(r1)'], 'bl _mcount' */ 374 + ret = ftrace_read_inst(ip - 4, &old); 375 + if (!ret && !ppc_inst_equal(old, ppc_inst(PPC_RAW_MFLR(_R0)))) { 376 + ret = ftrace_validate_inst(ip - 8, ppc_inst(PPC_RAW_MFLR(_R0))); 377 + ret |= ftrace_validate_inst(ip - 4, ppc_inst(PPC_RAW_STD(_R0, _R1, 16))); 378 + } 379 + } else { 421 380 return -EINVAL; 422 381 } 423 382 424 - /* read where this goes */ 425 - if (copy_inst_from_kernel_nofault(&op, (void *)ip)) { 426 - pr_err("Fetching opcode failed.\n"); 427 - return -EFAULT; 428 - } 383 + if (ret) 384 + return ret; 429 385 430 - /* Make sure that this is still a 24bit jump */ 431 - if (!is_bl_op(op)) { 432 - pr_err("Not expected bl: opcode is %08lx\n", ppc_inst_as_ulong(op)); 433 - return -EINVAL; 434 - } 435 - 436 - /* lets find where the pointer goes */ 437 - tramp = find_bl_target(ip, op); 438 - entry = ppc_global_function_entry((void *)old_addr); 439 - 440 - pr_devel("ip:%lx jumps to %lx", ip, tramp); 441 - 442 - if (tramp != entry) { 443 - /* old_addr is not within range, so we must have used a trampoline */ 444 - if (module_trampoline_target(mod, tramp, &ptr)) { 445 - pr_err("Failed to get trampoline target\n"); 386 + if (!core_kernel_text(ip)) { 387 + if (!mod) { 388 + pr_err("0x%lx: No module provided for non-kernel address\n", ip); 446 389 return -EFAULT; 447 390 } 448 - 449 - pr_devel("trampoline target %lx", ptr); 450 - 451 - /* This should match what was called */ 452 - if (ptr != entry) { 453 - pr_err("addr %lx does not match expected %lx\n", ptr, entry); 454 - return -EINVAL; 455 - } 391 + rec->arch.mod = mod; 456 392 } 457 393 458 - /* The new target may be within range */ 459 - if (test_24bit_addr(ip, addr)) { 460 - /* within range */ 461 - if (patch_branch((u32 *)ip, addr, BRANCH_SET_LINK)) { 462 - pr_err("REL24 out of range!\n"); 463 - return -EINVAL; 464 - } 465 - 466 - return 0; 467 - } 468 - 469 - if (rec->flags & 
FTRACE_FL_REGS) 470 - tramp = mod->arch.tramp_regs; 471 - else 472 - tramp = mod->arch.tramp; 473 - 474 - if (module_trampoline_target(mod, tramp, &ptr)) { 475 - pr_err("Failed to get trampoline target\n"); 476 - return -EFAULT; 477 - } 478 - 479 - pr_devel("trampoline target %lx", ptr); 480 - 481 - entry = ppc_global_function_entry((void *)addr); 482 - /* This should match what was called */ 483 - if (ptr != entry) { 484 - pr_err("addr %lx does not match expected %lx\n", ptr, entry); 485 - return -EINVAL; 486 - } 487 - 488 - if (patch_branch((u32 *)ip, tramp, BRANCH_SET_LINK)) { 489 - pr_err("REL24 out of range!\n"); 490 - return -EINVAL; 491 - } 492 - 493 - return 0; 494 - } 495 - #else 496 - static int __ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr, unsigned long addr) 497 - { 498 - return 0; 499 - } 500 - #endif 501 - 502 - int ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr, 503 - unsigned long addr) 504 - { 505 - unsigned long ip = rec->ip; 506 - ppc_inst_t old, new; 507 - 508 - /* 509 - * If the calling address is more that 24 bits away, 510 - * then we had to use a trampoline to make the call. 511 - * Otherwise just update the call site. 
512 - */ 513 - if (test_24bit_addr(ip, addr) && test_24bit_addr(ip, old_addr)) { 514 - /* within range */ 515 - old = ftrace_call_replace(ip, old_addr, 1); 516 - new = ftrace_call_replace(ip, addr, 1); 517 - return ftrace_modify_code(ip, old, new); 518 - } else if (core_kernel_text(ip)) { 394 + /* Nop-out the ftrace location */ 395 + new = ppc_inst(PPC_RAW_NOP()); 396 + addr = MCOUNT_ADDR; 397 + if (IS_ENABLED(CONFIG_ARCH_USING_PATCHABLE_FUNCTION_ENTRY)) { 398 + /* we instead patch-in the 'mflr r0' */ 399 + old = ppc_inst(PPC_RAW_NOP()); 400 + new = ppc_inst(PPC_RAW_MFLR(_R0)); 401 + ret = ftrace_modify_code(ip - 4, old, new); 402 + } else if (is_offset_in_branch_range(addr - ip)) { 403 + /* Within range */ 404 + old = ftrace_create_branch_inst(ip, addr, 1); 405 + ret = ftrace_modify_code(ip, old, new); 406 + } else if (core_kernel_text(ip) || (IS_ENABLED(CONFIG_MODULES) && mod)) { 519 407 /* 520 - * We always patch out of range locations to go to the regs 521 - * variant, so there is nothing to do here 408 + * We would be branching to a linker-generated stub, or to the module _mcount 409 + * stub. Let's just confirm we have a 'bl' here. 522 410 */ 523 - return 0; 524 - } else if (!IS_ENABLED(CONFIG_MODULES)) { 525 - /* We should not get here without modules */ 411 + ret = ftrace_read_inst(ip, &old); 412 + if (ret) 413 + return ret; 414 + if (!is_bl_op(old)) { 415 + pr_err("0x%lx: expected (bl) != found (%08lx)\n", ip, ppc_inst_as_ulong(old)); 416 + return -EINVAL; 417 + } 418 + ret = patch_instruction((u32 *)ip, new); 419 + } else { 526 420 return -EINVAL; 527 421 } 528 422 529 - /* 530 - * Out of range jumps are called from modules. 
531 - */ 532 - if (!rec->arch.mod) { 533 - pr_err("No module loaded\n"); 534 - return -EINVAL; 535 - } 536 - 537 - return __ftrace_modify_call(rec, old_addr, addr); 423 + return ret; 538 424 } 539 - #endif 540 425 541 426 int ftrace_update_ftrace_func(ftrace_func_t func) 542 427 { ··· 291 684 int ret; 292 685 293 686 old = ppc_inst_read((u32 *)&ftrace_call); 294 - new = ftrace_call_replace(ip, (unsigned long)func, 1); 687 + new = ftrace_create_branch_inst(ip, ppc_function_entry(func), 1); 295 688 ret = ftrace_modify_code(ip, old, new); 296 689 297 690 /* Also update the regs callback function */ 298 691 if (IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_REGS) && !ret) { 299 692 ip = (unsigned long)(&ftrace_regs_call); 300 693 old = ppc_inst_read((u32 *)&ftrace_regs_call); 301 - new = ftrace_call_replace(ip, (unsigned long)func, 1); 694 + new = ftrace_create_branch_inst(ip, ppc_function_entry(func), 1); 302 695 ret = ftrace_modify_code(ip, old, new); 303 696 } 304 697 ··· 314 707 ftrace_modify_all_code(command); 315 708 } 316 709 317 - #ifdef CONFIG_PPC64 318 - #define PACATOC offsetof(struct paca_struct, kernel_toc) 319 - 320 - extern unsigned int ftrace_tramp_text[], ftrace_tramp_init[]; 321 - 322 710 void ftrace_free_init_tramp(void) 323 711 { 324 712 int i; ··· 325 723 } 326 724 } 327 725 328 - int __init ftrace_dyn_arch_init(void) 726 + static void __init add_ftrace_tramp(unsigned long tramp) 329 727 { 330 728 int i; 729 + 730 + for (i = 0; i < NUM_FTRACE_TRAMPS; i++) 731 + if (!ftrace_tramps[i]) { 732 + ftrace_tramps[i] = tramp; 733 + return; 734 + } 735 + } 736 + 737 + int __init ftrace_dyn_arch_init(void) 738 + { 331 739 unsigned int *tramp[] = { ftrace_tramp_text, ftrace_tramp_init }; 332 - #ifdef CONFIG_PPC_KERNEL_PCREL 740 + unsigned long addr = FTRACE_REGS_ADDR; 741 + long reladdr; 742 + int i; 333 743 u32 stub_insns[] = { 744 + #ifdef CONFIG_PPC_KERNEL_PCREL 334 745 /* pla r12,addr */ 335 746 PPC_PREFIX_MLS | __PPC_PRFX_R(1), 336 747 PPC_INST_PADDI | 
___PPC_RT(_R12), 337 748 PPC_RAW_MTCTR(_R12), 338 749 PPC_RAW_BCTR() 339 - }; 340 - #else 341 - u32 stub_insns[] = { 342 - PPC_RAW_LD(_R12, _R13, PACATOC), 750 + #elif defined(CONFIG_PPC64) 751 + PPC_RAW_LD(_R12, _R13, offsetof(struct paca_struct, kernel_toc)), 343 752 PPC_RAW_ADDIS(_R12, _R12, 0), 344 753 PPC_RAW_ADDI(_R12, _R12, 0), 345 754 PPC_RAW_MTCTR(_R12), 346 755 PPC_RAW_BCTR() 347 - }; 756 + #else 757 + PPC_RAW_LIS(_R12, 0), 758 + PPC_RAW_ADDI(_R12, _R12, 0), 759 + PPC_RAW_MTCTR(_R12), 760 + PPC_RAW_BCTR() 348 761 #endif 349 - 350 - unsigned long addr; 351 - long reladdr; 352 - 353 - if (IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_REGS)) 354 - addr = ppc_global_function_entry((void *)ftrace_regs_caller); 355 - else 356 - addr = ppc_global_function_entry((void *)ftrace_caller); 762 + }; 357 763 358 764 if (IS_ENABLED(CONFIG_PPC_KERNEL_PCREL)) { 359 765 for (i = 0; i < 2; i++) { ··· 378 768 tramp[i][1] |= IMM_L(reladdr); 379 769 add_ftrace_tramp((unsigned long)tramp[i]); 380 770 } 381 - } else { 771 + } else if (IS_ENABLED(CONFIG_PPC64)) { 382 772 reladdr = addr - kernel_toc_addr(); 383 773 384 - if (reladdr >= (long)SZ_2G || reladdr < -(long)SZ_2G) { 774 + if (reladdr >= (long)SZ_2G || reladdr < -(long long)SZ_2G) { 385 775 pr_err("Address of %ps out of range of kernel_toc.\n", 386 776 (void *)addr); 387 777 return -1; ··· 393 783 tramp[i][2] |= PPC_LO(reladdr); 394 784 add_ftrace_tramp((unsigned long)tramp[i]); 395 785 } 786 + } else { 787 + for (i = 0; i < 2; i++) { 788 + memcpy(tramp[i], stub_insns, sizeof(stub_insns)); 789 + tramp[i][0] |= PPC_HA(addr); 790 + tramp[i][1] |= PPC_LO(addr); 791 + add_ftrace_tramp((unsigned long)tramp[i]); 792 + } 396 793 } 397 794 398 795 return 0; 399 796 } 400 - #endif 401 797 402 798 #ifdef CONFIG_FUNCTION_GRAPH_TRACER 403 - 404 - extern void ftrace_graph_call(void); 405 - extern void ftrace_graph_stub(void); 406 - 407 - static int ftrace_modify_ftrace_graph_caller(bool enable) 799 + void ftrace_graph_func(unsigned long ip, 
unsigned long parent_ip, 800 + struct ftrace_ops *op, struct ftrace_regs *fregs) 408 801 { 409 - unsigned long ip = (unsigned long)(&ftrace_graph_call); 410 - unsigned long addr = (unsigned long)(&ftrace_graph_caller); 411 - unsigned long stub = (unsigned long)(&ftrace_graph_stub); 412 - ppc_inst_t old, new; 413 - 414 - if (IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_ARGS)) 415 - return 0; 416 - 417 - old = ftrace_call_replace(ip, enable ? stub : addr, 0); 418 - new = ftrace_call_replace(ip, enable ? addr : stub, 0); 419 - 420 - return ftrace_modify_code(ip, old, new); 421 - } 422 - 423 - int ftrace_enable_ftrace_graph_caller(void) 424 - { 425 - return ftrace_modify_ftrace_graph_caller(true); 426 - } 427 - 428 - int ftrace_disable_ftrace_graph_caller(void) 429 - { 430 - return ftrace_modify_ftrace_graph_caller(false); 431 - } 432 - 433 - /* 434 - * Hook the return address and push it in the stack of return addrs 435 - * in current thread info. Return the address we want to divert to. 436 - */ 437 - static unsigned long 438 - __prepare_ftrace_return(unsigned long parent, unsigned long ip, unsigned long sp) 439 - { 440 - unsigned long return_hooker; 802 + unsigned long sp = fregs->regs.gpr[1]; 441 803 int bit; 442 804 443 805 if (unlikely(ftrace_graph_is_dead())) ··· 418 836 if (unlikely(atomic_read(&current->tracing_graph_pause))) 419 837 goto out; 420 838 421 - bit = ftrace_test_recursion_trylock(ip, parent); 839 + bit = ftrace_test_recursion_trylock(ip, parent_ip); 422 840 if (bit < 0) 423 841 goto out; 424 842 425 - return_hooker = ppc_function_entry(return_to_handler); 426 - 427 - if (!function_graph_enter(parent, ip, 0, (unsigned long *)sp)) 428 - parent = return_hooker; 843 + if (!function_graph_enter(parent_ip, ip, 0, (unsigned long *)sp)) 844 + parent_ip = ppc_function_entry(return_to_handler); 429 845 430 846 ftrace_test_recursion_unlock(bit); 431 847 out: 432 - return parent; 848 + fregs->regs.link = parent_ip; 433 849 } 434 - 435 - #ifdef 
CONFIG_DYNAMIC_FTRACE_WITH_ARGS 436 - void ftrace_graph_func(unsigned long ip, unsigned long parent_ip, 437 - struct ftrace_ops *op, struct ftrace_regs *fregs) 438 - { 439 - fregs->regs.link = __prepare_ftrace_return(parent_ip, ip, fregs->regs.gpr[1]); 440 - } 441 - #else 442 - unsigned long prepare_ftrace_return(unsigned long parent, unsigned long ip, 443 - unsigned long sp) 444 - { 445 - return __prepare_ftrace_return(parent, ip, sp); 446 - } 447 - #endif 448 850 #endif /* CONFIG_FUNCTION_GRAPH_TRACER */ 449 - 450 - #ifdef CONFIG_PPC64_ELF_ABI_V1 451 - char *arch_ftrace_match_adjust(char *str, const char *search) 452 - { 453 - if (str[0] == '.' && search[0] != '.') 454 - return str + 1; 455 - else 456 - return str; 457 - } 458 - #endif /* CONFIG_PPC64_ELF_ABI_V1 */
arch/powerpc/kernel/trace/ftrace_64_pg.S (-67 lines, file deleted)
···
- /* SPDX-License-Identifier: GPL-2.0-or-later */
- /*
-  * Split from ftrace_64.S
-  */
-
- #include <linux/magic.h>
- #include <asm/ppc_asm.h>
- #include <asm/asm-offsets.h>
- #include <asm/ftrace.h>
- #include <asm/ppc-opcode.h>
- #include <asm/export.h>
-
- _GLOBAL_TOC(ftrace_caller)
- 	lbz	r3, PACA_FTRACE_ENABLED(r13)
- 	cmpdi	r3, 0
- 	beqlr
-
- 	/* Taken from output of objdump from lib64/glibc */
- 	mflr	r3
- 	ld	r11, 0(r1)
- 	stdu	r1, -112(r1)
- 	std	r3, 128(r1)
- 	ld	r4, 16(r11)
- 	subi	r3, r3, MCOUNT_INSN_SIZE
- .globl ftrace_call
- ftrace_call:
- 	bl	ftrace_stub
- 	nop
- #ifdef CONFIG_FUNCTION_GRAPH_TRACER
- .globl ftrace_graph_call
- ftrace_graph_call:
- 	b	ftrace_graph_stub
- _GLOBAL(ftrace_graph_stub)
- #endif
- 	ld	r0, 128(r1)
- 	mtlr	r0
- 	addi	r1, r1, 112
-
- _GLOBAL(ftrace_stub)
- 	blr
-
- #ifdef CONFIG_FUNCTION_GRAPH_TRACER
- _GLOBAL(ftrace_graph_caller)
- 	addi	r5, r1, 112
- 	/* load r4 with local address */
- 	ld	r4, 128(r1)
- 	subi	r4, r4, MCOUNT_INSN_SIZE
-
- 	/* Grab the LR out of the caller stack frame */
- 	ld	r11, 112(r1)
- 	ld	r3, 16(r11)
-
- 	bl	prepare_ftrace_return
- 	nop
-
- 	/*
- 	 * prepare_ftrace_return gives us the address we divert to.
- 	 * Change the LR in the callers stack frame to this.
- 	 */
- 	ld	r11, 112(r1)
- 	std	r3, 16(r11)
-
- 	ld	r0, 128(r1)
- 	mtlr	r0
- 	addi	r1, r1, 112
- 	blr
- #endif /* CONFIG_FUNCTION_GRAPH_TRACER */
arch/powerpc/kernel/trace/ftrace_64_pg.c (+846 lines, new file)
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Code for replacing ftrace calls with jumps. 4 + * 5 + * Copyright (C) 2007-2008 Steven Rostedt <srostedt@redhat.com> 6 + * 7 + * Thanks goes out to P.A. Semi, Inc for supplying me with a PPC64 box. 8 + * 9 + * Added function graph tracer code, taken from x86 that was written 10 + * by Frederic Weisbecker, and ported to PPC by Steven Rostedt. 11 + * 12 + */ 13 + 14 + #define pr_fmt(fmt) "ftrace-powerpc: " fmt 15 + 16 + #include <linux/spinlock.h> 17 + #include <linux/hardirq.h> 18 + #include <linux/uaccess.h> 19 + #include <linux/module.h> 20 + #include <linux/ftrace.h> 21 + #include <linux/percpu.h> 22 + #include <linux/init.h> 23 + #include <linux/list.h> 24 + 25 + #include <asm/cacheflush.h> 26 + #include <asm/code-patching.h> 27 + #include <asm/ftrace.h> 28 + #include <asm/syscall.h> 29 + #include <asm/inst.h> 30 + 31 + /* 32 + * We generally only have a single long_branch tramp and at most 2 or 3 plt 33 + * tramps generated. But, we don't use the plt tramps currently. We also allot 34 + * 2 tramps after .text and .init.text. So, we only end up with around 3 usable 35 + * tramps in total. Set aside 8 just to be sure. 36 + */ 37 + #define NUM_FTRACE_TRAMPS 8 38 + static unsigned long ftrace_tramps[NUM_FTRACE_TRAMPS]; 39 + 40 + static ppc_inst_t 41 + ftrace_call_replace(unsigned long ip, unsigned long addr, int link) 42 + { 43 + ppc_inst_t op; 44 + 45 + addr = ppc_function_entry((void *)addr); 46 + 47 + /* if (link) set op to 'bl' else 'b' */ 48 + create_branch(&op, (u32 *)ip, addr, link ? BRANCH_SET_LINK : 0); 49 + 50 + return op; 51 + } 52 + 53 + static inline int 54 + ftrace_modify_code(unsigned long ip, ppc_inst_t old, ppc_inst_t new) 55 + { 56 + ppc_inst_t replaced; 57 + 58 + /* 59 + * Note: 60 + * We are paranoid about modifying text, as if a bug was to happen, it 61 + * could cause us to read or write to someplace that could cause harm. 
62 + * Carefully read and modify the code with probe_kernel_*(), and make 63 + * sure what we read is what we expected it to be before modifying it. 64 + */ 65 + 66 + /* read the text we want to modify */ 67 + if (copy_inst_from_kernel_nofault(&replaced, (void *)ip)) 68 + return -EFAULT; 69 + 70 + /* Make sure it is what we expect it to be */ 71 + if (!ppc_inst_equal(replaced, old)) { 72 + pr_err("%p: replaced (%08lx) != old (%08lx)", (void *)ip, 73 + ppc_inst_as_ulong(replaced), ppc_inst_as_ulong(old)); 74 + return -EINVAL; 75 + } 76 + 77 + /* replace the text with the new text */ 78 + return patch_instruction((u32 *)ip, new); 79 + } 80 + 81 + /* 82 + * Helper functions that are the same for both PPC64 and PPC32. 83 + */ 84 + static int test_24bit_addr(unsigned long ip, unsigned long addr) 85 + { 86 + addr = ppc_function_entry((void *)addr); 87 + 88 + return is_offset_in_branch_range(addr - ip); 89 + } 90 + 91 + static int is_bl_op(ppc_inst_t op) 92 + { 93 + return (ppc_inst_val(op) & ~PPC_LI_MASK) == PPC_RAW_BL(0); 94 + } 95 + 96 + static int is_b_op(ppc_inst_t op) 97 + { 98 + return (ppc_inst_val(op) & ~PPC_LI_MASK) == PPC_RAW_BRANCH(0); 99 + } 100 + 101 + static unsigned long find_bl_target(unsigned long ip, ppc_inst_t op) 102 + { 103 + int offset; 104 + 105 + offset = PPC_LI(ppc_inst_val(op)); 106 + /* make it signed */ 107 + if (offset & 0x02000000) 108 + offset |= 0xfe000000; 109 + 110 + return ip + (long)offset; 111 + } 112 + 113 + #ifdef CONFIG_MODULES 114 + static int 115 + __ftrace_make_nop(struct module *mod, 116 + struct dyn_ftrace *rec, unsigned long addr) 117 + { 118 + unsigned long entry, ptr, tramp; 119 + unsigned long ip = rec->ip; 120 + ppc_inst_t op, pop; 121 + 122 + /* read where this goes */ 123 + if (copy_inst_from_kernel_nofault(&op, (void *)ip)) { 124 + pr_err("Fetching opcode failed.\n"); 125 + return -EFAULT; 126 + } 127 + 128 + /* Make sure that this is still a 24bit jump */ 129 + if (!is_bl_op(op)) { 130 + pr_err("Not expected bl: 
opcode is %08lx\n", ppc_inst_as_ulong(op)); 131 + return -EINVAL; 132 + } 133 + 134 + /* lets find where the pointer goes */ 135 + tramp = find_bl_target(ip, op); 136 + 137 + pr_devel("ip:%lx jumps to %lx", ip, tramp); 138 + 139 + if (module_trampoline_target(mod, tramp, &ptr)) { 140 + pr_err("Failed to get trampoline target\n"); 141 + return -EFAULT; 142 + } 143 + 144 + pr_devel("trampoline target %lx", ptr); 145 + 146 + entry = ppc_global_function_entry((void *)addr); 147 + /* This should match what was called */ 148 + if (ptr != entry) { 149 + pr_err("addr %lx does not match expected %lx\n", ptr, entry); 150 + return -EINVAL; 151 + } 152 + 153 + if (IS_ENABLED(CONFIG_MPROFILE_KERNEL)) { 154 + if (copy_inst_from_kernel_nofault(&op, (void *)(ip - 4))) { 155 + pr_err("Fetching instruction at %lx failed.\n", ip - 4); 156 + return -EFAULT; 157 + } 158 + 159 + /* We expect either a mflr r0, or a std r0, LRSAVE(r1) */ 160 + if (!ppc_inst_equal(op, ppc_inst(PPC_RAW_MFLR(_R0))) && 161 + !ppc_inst_equal(op, ppc_inst(PPC_INST_STD_LR))) { 162 + pr_err("Unexpected instruction %08lx around bl _mcount\n", 163 + ppc_inst_as_ulong(op)); 164 + return -EINVAL; 165 + } 166 + } else if (IS_ENABLED(CONFIG_PPC64)) { 167 + /* 168 + * Check what is in the next instruction. We can see ld r2,40(r1), but 169 + * on first pass after boot we will see mflr r0. 170 + */ 171 + if (copy_inst_from_kernel_nofault(&op, (void *)(ip + 4))) { 172 + pr_err("Fetching op failed.\n"); 173 + return -EFAULT; 174 + } 175 + 176 + if (!ppc_inst_equal(op, ppc_inst(PPC_INST_LD_TOC))) { 177 + pr_err("Expected %08lx found %08lx\n", PPC_INST_LD_TOC, 178 + ppc_inst_as_ulong(op)); 179 + return -EINVAL; 180 + } 181 + } 182 + 183 + /* 184 + * When using -mprofile-kernel or PPC32 there is no load to jump over. 185 + * 186 + * Otherwise our original call site looks like: 187 + * 188 + * bl <tramp> 189 + * ld r2,XX(r1) 190 + * 191 + * Milton Miller pointed out that we can not simply nop the branch. 
192 + * If a task was preempted when calling a trace function, the nops 193 + * will remove the way to restore the TOC in r2 and the r2 TOC will 194 + * get corrupted. 195 + * 196 + * Use a b +8 to jump over the load. 197 + */ 198 + if (IS_ENABLED(CONFIG_MPROFILE_KERNEL) || IS_ENABLED(CONFIG_PPC32)) 199 + pop = ppc_inst(PPC_RAW_NOP()); 200 + else 201 + pop = ppc_inst(PPC_RAW_BRANCH(8)); /* b +8 */ 202 + 203 + if (patch_instruction((u32 *)ip, pop)) { 204 + pr_err("Patching NOP failed.\n"); 205 + return -EPERM; 206 + } 207 + 208 + return 0; 209 + } 210 + #else 211 + static int __ftrace_make_nop(struct module *mod, struct dyn_ftrace *rec, unsigned long addr) 212 + { 213 + return 0; 214 + } 215 + #endif /* CONFIG_MODULES */ 216 + 217 + static unsigned long find_ftrace_tramp(unsigned long ip) 218 + { 219 + int i; 220 + 221 + /* 222 + * We have the compiler generated long_branch tramps at the end 223 + * and we prefer those 224 + */ 225 + for (i = NUM_FTRACE_TRAMPS - 1; i >= 0; i--) 226 + if (!ftrace_tramps[i]) 227 + continue; 228 + else if (is_offset_in_branch_range(ftrace_tramps[i] - ip)) 229 + return ftrace_tramps[i]; 230 + 231 + return 0; 232 + } 233 + 234 + static int add_ftrace_tramp(unsigned long tramp) 235 + { 236 + int i; 237 + 238 + for (i = 0; i < NUM_FTRACE_TRAMPS; i++) 239 + if (!ftrace_tramps[i]) { 240 + ftrace_tramps[i] = tramp; 241 + return 0; 242 + } 243 + 244 + return -1; 245 + } 246 + 247 + /* 248 + * If this is a compiler generated long_branch trampoline (essentially, a 249 + * trampoline that has a branch to _mcount()), we re-write the branch to 250 + * instead go to ftrace_[regs_]caller() and note down the location of this 251 + * trampoline. 252 + */ 253 + static int setup_mcount_compiler_tramp(unsigned long tramp) 254 + { 255 + int i; 256 + ppc_inst_t op; 257 + unsigned long ptr; 258 + 259 + /* Is this a known long jump tramp? 
*/ 260 + for (i = 0; i < NUM_FTRACE_TRAMPS; i++) 261 + if (ftrace_tramps[i] == tramp) 262 + return 0; 263 + 264 + /* New trampoline -- read where this goes */ 265 + if (copy_inst_from_kernel_nofault(&op, (void *)tramp)) { 266 + pr_debug("Fetching opcode failed.\n"); 267 + return -1; 268 + } 269 + 270 + /* Is this a 24 bit branch? */ 271 + if (!is_b_op(op)) { 272 + pr_debug("Trampoline is not a long branch tramp.\n"); 273 + return -1; 274 + } 275 + 276 + /* lets find where the pointer goes */ 277 + ptr = find_bl_target(tramp, op); 278 + 279 + if (ptr != ppc_global_function_entry((void *)_mcount)) { 280 + pr_debug("Trampoline target %p is not _mcount\n", (void *)ptr); 281 + return -1; 282 + } 283 + 284 + /* Let's re-write the tramp to go to ftrace_[regs_]caller */ 285 + if (IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_REGS)) 286 + ptr = ppc_global_function_entry((void *)ftrace_regs_caller); 287 + else 288 + ptr = ppc_global_function_entry((void *)ftrace_caller); 289 + 290 + if (patch_branch((u32 *)tramp, ptr, 0)) { 291 + pr_debug("REL24 out of range!\n"); 292 + return -1; 293 + } 294 + 295 + if (add_ftrace_tramp(tramp)) { 296 + pr_debug("No tramp locations left\n"); 297 + return -1; 298 + } 299 + 300 + return 0; 301 + } 302 + 303 + static int __ftrace_make_nop_kernel(struct dyn_ftrace *rec, unsigned long addr) 304 + { 305 + unsigned long tramp, ip = rec->ip; 306 + ppc_inst_t op; 307 + 308 + /* Read where this goes */ 309 + if (copy_inst_from_kernel_nofault(&op, (void *)ip)) { 310 + pr_err("Fetching opcode failed.\n"); 311 + return -EFAULT; 312 + } 313 + 314 + /* Make sure that this is still a 24bit jump */ 315 + if (!is_bl_op(op)) { 316 + pr_err("Not expected bl: opcode is %08lx\n", ppc_inst_as_ulong(op)); 317 + return -EINVAL; 318 + } 319 + 320 + /* Let's find where the pointer goes */ 321 + tramp = find_bl_target(ip, op); 322 + 323 + pr_devel("ip:%lx jumps to %lx", ip, tramp); 324 + 325 + if (setup_mcount_compiler_tramp(tramp)) { 326 + /* Are other trampolines reachable? 
*/ 327 + if (!find_ftrace_tramp(ip)) { 328 + pr_err("No ftrace trampolines reachable from %ps\n", 329 + (void *)ip); 330 + return -EINVAL; 331 + } 332 + } 333 + 334 + if (patch_instruction((u32 *)ip, ppc_inst(PPC_RAW_NOP()))) { 335 + pr_err("Patching NOP failed.\n"); 336 + return -EPERM; 337 + } 338 + 339 + return 0; 340 + } 341 + 342 + int ftrace_make_nop(struct module *mod, 343 + struct dyn_ftrace *rec, unsigned long addr) 344 + { 345 + unsigned long ip = rec->ip; 346 + ppc_inst_t old, new; 347 + 348 + /* 349 + * If the calling address is more that 24 bits away, 350 + * then we had to use a trampoline to make the call. 351 + * Otherwise just update the call site. 352 + */ 353 + if (test_24bit_addr(ip, addr)) { 354 + /* within range */ 355 + old = ftrace_call_replace(ip, addr, 1); 356 + new = ppc_inst(PPC_RAW_NOP()); 357 + return ftrace_modify_code(ip, old, new); 358 + } else if (core_kernel_text(ip)) { 359 + return __ftrace_make_nop_kernel(rec, addr); 360 + } else if (!IS_ENABLED(CONFIG_MODULES)) { 361 + return -EINVAL; 362 + } 363 + 364 + /* 365 + * Out of range jumps are called from modules. 366 + * We should either already have a pointer to the module 367 + * or it has been passed in. 368 + */ 369 + if (!rec->arch.mod) { 370 + if (!mod) { 371 + pr_err("No module loaded addr=%lx\n", addr); 372 + return -EFAULT; 373 + } 374 + rec->arch.mod = mod; 375 + } else if (mod) { 376 + if (mod != rec->arch.mod) { 377 + pr_err("Record mod %p not equal to passed in mod %p\n", 378 + rec->arch.mod, mod); 379 + return -EINVAL; 380 + } 381 + /* nothing to do if mod == rec->arch.mod */ 382 + } else 383 + mod = rec->arch.mod; 384 + 385 + return __ftrace_make_nop(mod, rec, addr); 386 + } 387 + 388 + #ifdef CONFIG_MODULES 389 + /* 390 + * Examine the existing instructions for __ftrace_make_call. 391 + * They should effectively be a NOP, and follow formal constraints, 392 + * depending on the ABI. Return false if they don't. 
393 + */ 394 + static bool expected_nop_sequence(void *ip, ppc_inst_t op0, ppc_inst_t op1) 395 + { 396 + if (IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_REGS)) 397 + return ppc_inst_equal(op0, ppc_inst(PPC_RAW_NOP())); 398 + else 399 + return ppc_inst_equal(op0, ppc_inst(PPC_RAW_BRANCH(8))) && 400 + ppc_inst_equal(op1, ppc_inst(PPC_INST_LD_TOC)); 401 + } 402 + 403 + static int 404 + __ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr) 405 + { 406 + ppc_inst_t op[2]; 407 + void *ip = (void *)rec->ip; 408 + unsigned long entry, ptr, tramp; 409 + struct module *mod = rec->arch.mod; 410 + 411 + /* read where this goes */ 412 + if (copy_inst_from_kernel_nofault(op, ip)) 413 + return -EFAULT; 414 + 415 + if (!IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_REGS) && 416 + copy_inst_from_kernel_nofault(op + 1, ip + 4)) 417 + return -EFAULT; 418 + 419 + if (!expected_nop_sequence(ip, op[0], op[1])) { 420 + pr_err("Unexpected call sequence at %p: %08lx %08lx\n", ip, 421 + ppc_inst_as_ulong(op[0]), ppc_inst_as_ulong(op[1])); 422 + return -EINVAL; 423 + } 424 + 425 + /* If we never set up ftrace trampoline(s), then bail */ 426 + if (!mod->arch.tramp || 427 + (IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_REGS) && !mod->arch.tramp_regs)) { 428 + pr_err("No ftrace trampoline\n"); 429 + return -EINVAL; 430 + } 431 + 432 + if (IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_REGS) && rec->flags & FTRACE_FL_REGS) 433 + tramp = mod->arch.tramp_regs; 434 + else 435 + tramp = mod->arch.tramp; 436 + 437 + if (module_trampoline_target(mod, tramp, &ptr)) { 438 + pr_err("Failed to get trampoline target\n"); 439 + return -EFAULT; 440 + } 441 + 442 + pr_devel("trampoline target %lx", ptr); 443 + 444 + entry = ppc_global_function_entry((void *)addr); 445 + /* This should match what was called */ 446 + if (ptr != entry) { 447 + pr_err("addr %lx does not match expected %lx\n", ptr, entry); 448 + return -EINVAL; 449 + } 450 + 451 + if (patch_branch(ip, tramp, BRANCH_SET_LINK)) { 452 + pr_err("REL24 out of range!\n"); 
453 + return -EINVAL; 454 + } 455 + 456 + return 0; 457 + } 458 + #else 459 + static int __ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr) 460 + { 461 + return 0; 462 + } 463 + #endif /* CONFIG_MODULES */ 464 + 465 + static int __ftrace_make_call_kernel(struct dyn_ftrace *rec, unsigned long addr) 466 + { 467 + ppc_inst_t op; 468 + void *ip = (void *)rec->ip; 469 + unsigned long tramp, entry, ptr; 470 + 471 + /* Make sure we're being asked to patch branch to a known ftrace addr */ 472 + entry = ppc_global_function_entry((void *)ftrace_caller); 473 + ptr = ppc_global_function_entry((void *)addr); 474 + 475 + if (ptr != entry && IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_REGS)) 476 + entry = ppc_global_function_entry((void *)ftrace_regs_caller); 477 + 478 + if (ptr != entry) { 479 + pr_err("Unknown ftrace addr to patch: %ps\n", (void *)ptr); 480 + return -EINVAL; 481 + } 482 + 483 + /* Make sure we have a nop */ 484 + if (copy_inst_from_kernel_nofault(&op, ip)) { 485 + pr_err("Unable to read ftrace location %p\n", ip); 486 + return -EFAULT; 487 + } 488 + 489 + if (!ppc_inst_equal(op, ppc_inst(PPC_RAW_NOP()))) { 490 + pr_err("Unexpected call sequence at %p: %08lx\n", 491 + ip, ppc_inst_as_ulong(op)); 492 + return -EINVAL; 493 + } 494 + 495 + tramp = find_ftrace_tramp((unsigned long)ip); 496 + if (!tramp) { 497 + pr_err("No ftrace trampolines reachable from %ps\n", ip); 498 + return -EINVAL; 499 + } 500 + 501 + if (patch_branch(ip, tramp, BRANCH_SET_LINK)) { 502 + pr_err("Error patching branch to ftrace tramp!\n"); 503 + return -EINVAL; 504 + } 505 + 506 + return 0; 507 + } 508 + 509 + int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr) 510 + { 511 + unsigned long ip = rec->ip; 512 + ppc_inst_t old, new; 513 + 514 + /* 515 + * If the calling address is more that 24 bits away, 516 + * then we had to use a trampoline to make the call. 517 + * Otherwise just update the call site. 
518 + */ 519 + if (test_24bit_addr(ip, addr)) { 520 + /* within range */ 521 + old = ppc_inst(PPC_RAW_NOP()); 522 + new = ftrace_call_replace(ip, addr, 1); 523 + return ftrace_modify_code(ip, old, new); 524 + } else if (core_kernel_text(ip)) { 525 + return __ftrace_make_call_kernel(rec, addr); 526 + } else if (!IS_ENABLED(CONFIG_MODULES)) { 527 + /* We should not get here without modules */ 528 + return -EINVAL; 529 + } 530 + 531 + /* 532 + * Out of range jumps are called from modules. 533 + * Being that we are converting from nop, it had better 534 + * already have a module defined. 535 + */ 536 + if (!rec->arch.mod) { 537 + pr_err("No module loaded\n"); 538 + return -EINVAL; 539 + } 540 + 541 + return __ftrace_make_call(rec, addr); 542 + } 543 + 544 + #ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS 545 + #ifdef CONFIG_MODULES 546 + static int 547 + __ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr, 548 + unsigned long addr) 549 + { 550 + ppc_inst_t op; 551 + unsigned long ip = rec->ip; 552 + unsigned long entry, ptr, tramp; 553 + struct module *mod = rec->arch.mod; 554 + 555 + /* If we never set up ftrace trampolines, then bail */ 556 + if (!mod->arch.tramp || !mod->arch.tramp_regs) { 557 + pr_err("No ftrace trampoline\n"); 558 + return -EINVAL; 559 + } 560 + 561 + /* read where this goes */ 562 + if (copy_inst_from_kernel_nofault(&op, (void *)ip)) { 563 + pr_err("Fetching opcode failed.\n"); 564 + return -EFAULT; 565 + } 566 + 567 + /* Make sure that this is still a 24bit jump */ 568 + if (!is_bl_op(op)) { 569 + pr_err("Not expected bl: opcode is %08lx\n", ppc_inst_as_ulong(op)); 570 + return -EINVAL; 571 + } 572 + 573 + /* lets find where the pointer goes */ 574 + tramp = find_bl_target(ip, op); 575 + entry = ppc_global_function_entry((void *)old_addr); 576 + 577 + pr_devel("ip:%lx jumps to %lx", ip, tramp); 578 + 579 + if (tramp != entry) { 580 + /* old_addr is not within range, so we must have used a trampoline */ 581 + if 
(module_trampoline_target(mod, tramp, &ptr)) { 582 + pr_err("Failed to get trampoline target\n"); 583 + return -EFAULT; 584 + } 585 + 586 + pr_devel("trampoline target %lx", ptr); 587 + 588 + /* This should match what was called */ 589 + if (ptr != entry) { 590 + pr_err("addr %lx does not match expected %lx\n", ptr, entry); 591 + return -EINVAL; 592 + } 593 + } 594 + 595 + /* The new target may be within range */ 596 + if (test_24bit_addr(ip, addr)) { 597 + /* within range */ 598 + if (patch_branch((u32 *)ip, addr, BRANCH_SET_LINK)) { 599 + pr_err("REL24 out of range!\n"); 600 + return -EINVAL; 601 + } 602 + 603 + return 0; 604 + } 605 + 606 + if (rec->flags & FTRACE_FL_REGS) 607 + tramp = mod->arch.tramp_regs; 608 + else 609 + tramp = mod->arch.tramp; 610 + 611 + if (module_trampoline_target(mod, tramp, &ptr)) { 612 + pr_err("Failed to get trampoline target\n"); 613 + return -EFAULT; 614 + } 615 + 616 + pr_devel("trampoline target %lx", ptr); 617 + 618 + entry = ppc_global_function_entry((void *)addr); 619 + /* This should match what was called */ 620 + if (ptr != entry) { 621 + pr_err("addr %lx does not match expected %lx\n", ptr, entry); 622 + return -EINVAL; 623 + } 624 + 625 + if (patch_branch((u32 *)ip, tramp, BRANCH_SET_LINK)) { 626 + pr_err("REL24 out of range!\n"); 627 + return -EINVAL; 628 + } 629 + 630 + return 0; 631 + } 632 + #else 633 + static int __ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr, unsigned long addr) 634 + { 635 + return 0; 636 + } 637 + #endif 638 + 639 + int ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr, 640 + unsigned long addr) 641 + { 642 + unsigned long ip = rec->ip; 643 + ppc_inst_t old, new; 644 + 645 + /* 646 + * If the calling address is more that 24 bits away, 647 + * then we had to use a trampoline to make the call. 648 + * Otherwise just update the call site. 
649 + */ 650 + if (test_24bit_addr(ip, addr) && test_24bit_addr(ip, old_addr)) { 651 + /* within range */ 652 + old = ftrace_call_replace(ip, old_addr, 1); 653 + new = ftrace_call_replace(ip, addr, 1); 654 + return ftrace_modify_code(ip, old, new); 655 + } else if (core_kernel_text(ip)) { 656 + /* 657 + * We always patch out of range locations to go to the regs 658 + * variant, so there is nothing to do here 659 + */ 660 + return 0; 661 + } else if (!IS_ENABLED(CONFIG_MODULES)) { 662 + /* We should not get here without modules */ 663 + return -EINVAL; 664 + } 665 + 666 + /* 667 + * Out of range jumps are called from modules. 668 + */ 669 + if (!rec->arch.mod) { 670 + pr_err("No module loaded\n"); 671 + return -EINVAL; 672 + } 673 + 674 + return __ftrace_modify_call(rec, old_addr, addr); 675 + } 676 + #endif 677 + 678 + int ftrace_update_ftrace_func(ftrace_func_t func) 679 + { 680 + unsigned long ip = (unsigned long)(&ftrace_call); 681 + ppc_inst_t old, new; 682 + int ret; 683 + 684 + old = ppc_inst_read((u32 *)&ftrace_call); 685 + new = ftrace_call_replace(ip, (unsigned long)func, 1); 686 + ret = ftrace_modify_code(ip, old, new); 687 + 688 + /* Also update the regs callback function */ 689 + if (IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_REGS) && !ret) { 690 + ip = (unsigned long)(&ftrace_regs_call); 691 + old = ppc_inst_read((u32 *)&ftrace_regs_call); 692 + new = ftrace_call_replace(ip, (unsigned long)func, 1); 693 + ret = ftrace_modify_code(ip, old, new); 694 + } 695 + 696 + return ret; 697 + } 698 + 699 + /* 700 + * Use the default ftrace_modify_all_code, but without 701 + * stop_machine(). 
702 + */ 703 + void arch_ftrace_update_code(int command) 704 + { 705 + ftrace_modify_all_code(command); 706 + } 707 + 708 + #ifdef CONFIG_PPC64 709 + #define PACATOC offsetof(struct paca_struct, kernel_toc) 710 + 711 + extern unsigned int ftrace_tramp_text[], ftrace_tramp_init[]; 712 + 713 + void ftrace_free_init_tramp(void) 714 + { 715 + int i; 716 + 717 + for (i = 0; i < NUM_FTRACE_TRAMPS && ftrace_tramps[i]; i++) 718 + if (ftrace_tramps[i] == (unsigned long)ftrace_tramp_init) { 719 + ftrace_tramps[i] = 0; 720 + return; 721 + } 722 + } 723 + 724 + int __init ftrace_dyn_arch_init(void) 725 + { 726 + int i; 727 + unsigned int *tramp[] = { ftrace_tramp_text, ftrace_tramp_init }; 728 + u32 stub_insns[] = { 729 + PPC_RAW_LD(_R12, _R13, PACATOC), 730 + PPC_RAW_ADDIS(_R12, _R12, 0), 731 + PPC_RAW_ADDI(_R12, _R12, 0), 732 + PPC_RAW_MTCTR(_R12), 733 + PPC_RAW_BCTR() 734 + }; 735 + unsigned long addr; 736 + long reladdr; 737 + 738 + if (IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_REGS)) 739 + addr = ppc_global_function_entry((void *)ftrace_regs_caller); 740 + else 741 + addr = ppc_global_function_entry((void *)ftrace_caller); 742 + 743 + reladdr = addr - kernel_toc_addr(); 744 + 745 + if (reladdr >= SZ_2G || reladdr < -(long)SZ_2G) { 746 + pr_err("Address of %ps out of range of kernel_toc.\n", 747 + (void *)addr); 748 + return -1; 749 + } 750 + 751 + for (i = 0; i < 2; i++) { 752 + memcpy(tramp[i], stub_insns, sizeof(stub_insns)); 753 + tramp[i][1] |= PPC_HA(reladdr); 754 + tramp[i][2] |= PPC_LO(reladdr); 755 + add_ftrace_tramp((unsigned long)tramp[i]); 756 + } 757 + 758 + return 0; 759 + } 760 + #endif 761 + 762 + #ifdef CONFIG_FUNCTION_GRAPH_TRACER 763 + 764 + extern void ftrace_graph_call(void); 765 + extern void ftrace_graph_stub(void); 766 + 767 + static int ftrace_modify_ftrace_graph_caller(bool enable) 768 + { 769 + unsigned long ip = (unsigned long)(&ftrace_graph_call); 770 + unsigned long addr = (unsigned long)(&ftrace_graph_caller); 771 + unsigned long stub = (unsigned 
long)(&ftrace_graph_stub); 772 + ppc_inst_t old, new; 773 + 774 + if (IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_ARGS)) 775 + return 0; 776 + 777 + old = ftrace_call_replace(ip, enable ? stub : addr, 0); 778 + new = ftrace_call_replace(ip, enable ? addr : stub, 0); 779 + 780 + return ftrace_modify_code(ip, old, new); 781 + } 782 + 783 + int ftrace_enable_ftrace_graph_caller(void) 784 + { 785 + return ftrace_modify_ftrace_graph_caller(true); 786 + } 787 + 788 + int ftrace_disable_ftrace_graph_caller(void) 789 + { 790 + return ftrace_modify_ftrace_graph_caller(false); 791 + } 792 + 793 + /* 794 + * Hook the return address and push it in the stack of return addrs 795 + * in current thread info. Return the address we want to divert to. 796 + */ 797 + static unsigned long 798 + __prepare_ftrace_return(unsigned long parent, unsigned long ip, unsigned long sp) 799 + { 800 + unsigned long return_hooker; 801 + int bit; 802 + 803 + if (unlikely(ftrace_graph_is_dead())) 804 + goto out; 805 + 806 + if (unlikely(atomic_read(&current->tracing_graph_pause))) 807 + goto out; 808 + 809 + bit = ftrace_test_recursion_trylock(ip, parent); 810 + if (bit < 0) 811 + goto out; 812 + 813 + return_hooker = ppc_function_entry(return_to_handler); 814 + 815 + if (!function_graph_enter(parent, ip, 0, (unsigned long *)sp)) 816 + parent = return_hooker; 817 + 818 + ftrace_test_recursion_unlock(bit); 819 + out: 820 + return parent; 821 + } 822 + 823 + #ifdef CONFIG_DYNAMIC_FTRACE_WITH_ARGS 824 + void ftrace_graph_func(unsigned long ip, unsigned long parent_ip, 825 + struct ftrace_ops *op, struct ftrace_regs *fregs) 826 + { 827 + fregs->regs.link = __prepare_ftrace_return(parent_ip, ip, fregs->regs.gpr[1]); 828 + } 829 + #else 830 + unsigned long prepare_ftrace_return(unsigned long parent, unsigned long ip, 831 + unsigned long sp) 832 + { 833 + return __prepare_ftrace_return(parent, ip, sp); 834 + } 835 + #endif 836 + #endif /* CONFIG_FUNCTION_GRAPH_TRACER */ 837 + 838 + #ifdef CONFIG_PPC64_ELF_ABI_V1 
839 + char *arch_ftrace_match_adjust(char *str, const char *search) 840 + { 841 + if (str[0] == '.' && search[0] != '.') 842 + return str + 1; 843 + else 844 + return str; 845 + } 846 + #endif /* CONFIG_PPC64_ELF_ABI_V1 */
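The ftrace patching paths above repeatedly gate on `test_24bit_addr()` before emitting a direct `bl`. As a minimal sketch (helper name local to this sketch; the kernel's equivalent check is `is_offset_in_branch_range()`), a ppc `bl` encodes a signed 24-bit word offset, i.e. a 4-byte-aligned byte offset within roughly ±32 MB:

```c
#include <assert.h>
#include <stdbool.h>

/* Sketch of the range check behind test_24bit_addr(): the LI field of a
 * ppc "bl" is 24 bits, shifted left by 2 and sign-extended, so a direct
 * branch can only reach a 4-byte-aligned target within +/- 32 MB. */
static bool offset_in_branch_range(long offset)
{
	return offset >= -0x2000000 && offset <= 0x1fffffc && !(offset & 0x3);
}
```

When the offset fits, the call site is patched with a direct `bl`; otherwise the code above falls back to a kernel or module trampoline.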
+60 -6
arch/powerpc/kernel/trace/ftrace_low.S arch/powerpc/kernel/trace/ftrace_64_pg_entry.S
··· 1 1 /* SPDX-License-Identifier: GPL-2.0-or-later */ 2 2 /* 3 - * Split from entry_64.S 3 + * Split from ftrace_64.S 4 4 */ 5 5 6 + #include <linux/export.h> 6 7 #include <linux/magic.h> 7 8 #include <asm/ppc_asm.h> 8 9 #include <asm/asm-offsets.h> 9 10 #include <asm/ftrace.h> 10 11 #include <asm/ppc-opcode.h> 11 - #include <asm/export.h> 12 12 13 - #ifdef CONFIG_PPC64 13 + _GLOBAL_TOC(ftrace_caller) 14 + lbz r3, PACA_FTRACE_ENABLED(r13) 15 + cmpdi r3, 0 16 + beqlr 17 + 18 + /* Taken from output of objdump from lib64/glibc */ 19 + mflr r3 20 + ld r11, 0(r1) 21 + stdu r1, -112(r1) 22 + std r3, 128(r1) 23 + ld r4, 16(r11) 24 + subi r3, r3, MCOUNT_INSN_SIZE 25 + .globl ftrace_call 26 + ftrace_call: 27 + bl ftrace_stub 28 + nop 29 + #ifdef CONFIG_FUNCTION_GRAPH_TRACER 30 + .globl ftrace_graph_call 31 + ftrace_graph_call: 32 + b ftrace_graph_stub 33 + _GLOBAL(ftrace_graph_stub) 34 + #endif 35 + ld r0, 128(r1) 36 + mtlr r0 37 + addi r1, r1, 112 38 + 39 + _GLOBAL(ftrace_stub) 40 + blr 41 + 42 + #ifdef CONFIG_FUNCTION_GRAPH_TRACER 43 + _GLOBAL(ftrace_graph_caller) 44 + addi r5, r1, 112 45 + /* load r4 with local address */ 46 + ld r4, 128(r1) 47 + subi r4, r4, MCOUNT_INSN_SIZE 48 + 49 + /* Grab the LR out of the caller stack frame */ 50 + ld r11, 112(r1) 51 + ld r3, 16(r11) 52 + 53 + bl prepare_ftrace_return 54 + nop 55 + 56 + /* 57 + * prepare_ftrace_return gives us the address we divert to. 58 + * Change the LR in the callers stack frame to this. 
59 + */ 60 + ld r11, 112(r1) 61 + std r3, 16(r11) 62 + 63 + ld r0, 128(r1) 64 + mtlr r0 65 + addi r1, r1, 112 66 + blr 67 + #endif /* CONFIG_FUNCTION_GRAPH_TRACER */ 68 + 14 69 .pushsection ".tramp.ftrace.text","aw",@progbits; 15 70 .globl ftrace_tramp_text 16 71 ftrace_tramp_text: 17 - .space 64 72 + .space 32 18 73 .popsection 19 74 20 75 .pushsection ".tramp.ftrace.init","aw",@progbits; 21 76 .globl ftrace_tramp_init 22 77 ftrace_tramp_init: 23 - .space 64 78 + .space 32 24 79 .popsection 25 - #endif 26 80 27 81 _GLOBAL(mcount) 28 82 _GLOBAL(_mcount)
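The `ftrace_tramp_text`/`ftrace_tramp_init` areas reserved above are filled at boot by `ftrace_dyn_arch_init()` (earlier in this series), which patches a 32-bit TOC-relative offset into the stub's `addis`/`addi` pair via `PPC_HA()` and `PPC_LO()`. A sketch of that split (function names local to this sketch): `addi` sign-extends the low half, so the high half must absorb a carry when bit 15 of the offset is set.

```c
#include <assert.h>
#include <stdint.h>

/* Low 16 bits of the offset, later sign-extended by addi. */
static uint16_t ppc_lo(int32_t v)
{
	return (uint16_t)(v & 0xffff);
}

/* High-adjusted 16 bits: add 1 when the low half will sign-extend
 * negative, so that (ha << 16) + sext(lo) reproduces the offset. */
static uint16_t ppc_ha(int32_t v)
{
	return (uint16_t)(((v >> 16) + ((v & 0x8000) ? 1 : 0)) & 0xffff);
}

/* What the patched addis (ha << 16) followed by addi (sign-extended lo)
 * computes at run time. */
static int32_t reconstruct(uint16_t ha, uint16_t lo)
{
	return (int32_t)(((uint32_t)ha << 16) + (uint32_t)(int32_t)(int16_t)lo);
}
```

This is why the stub in `ftrace_dyn_arch_init()` ORs `PPC_HA(reladdr)` into the `addis` slot and `PPC_LO(reladdr)` into the `addi` slot, and why the offset must stay within ±2 GB of the kernel TOC.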
+68 -1
arch/powerpc/kernel/trace/ftrace_mprofile.S arch/powerpc/kernel/trace/ftrace_entry.S
··· 3 3 * Split from ftrace_64.S 4 4 */ 5 5 6 + #include <linux/export.h> 6 7 #include <linux/magic.h> 7 8 #include <asm/ppc_asm.h> 8 9 #include <asm/asm-offsets.h> 9 10 #include <asm/ftrace.h> 10 11 #include <asm/ppc-opcode.h> 11 - #include <asm/export.h> 12 12 #include <asm/thread_info.h> 13 13 #include <asm/bug.h> 14 14 #include <asm/ptrace.h> ··· 254 254 /* Return to original caller of live patched function */ 255 255 blr 256 256 #endif /* CONFIG_LIVEPATCH */ 257 + 258 + #ifndef CONFIG_ARCH_USING_PATCHABLE_FUNCTION_ENTRY 259 + _GLOBAL(mcount) 260 + _GLOBAL(_mcount) 261 + EXPORT_SYMBOL(_mcount) 262 + mflr r12 263 + mtctr r12 264 + mtlr r0 265 + bctr 266 + #endif 267 + 268 + #ifdef CONFIG_FUNCTION_GRAPH_TRACER 269 + _GLOBAL(return_to_handler) 270 + /* need to save return values */ 271 + #ifdef CONFIG_PPC64 272 + std r4, -32(r1) 273 + std r3, -24(r1) 274 + /* save TOC */ 275 + std r2, -16(r1) 276 + std r31, -8(r1) 277 + mr r31, r1 278 + stdu r1, -112(r1) 279 + 280 + /* 281 + * We might be called from a module. 282 + * Switch to our TOC to run inside the core kernel. 283 + */ 284 + LOAD_PACA_TOC() 285 + #else 286 + stwu r1, -16(r1) 287 + stw r3, 8(r1) 288 + stw r4, 12(r1) 289 + #endif 290 + 291 + bl ftrace_return_to_handler 292 + nop 293 + 294 + /* return value has real return address */ 295 + mtlr r3 296 + 297 + #ifdef CONFIG_PPC64 298 + ld r1, 0(r1) 299 + ld r4, -32(r1) 300 + ld r3, -24(r1) 301 + ld r2, -16(r1) 302 + ld r31, -8(r1) 303 + #else 304 + lwz r3, 8(r1) 305 + lwz r4, 12(r1) 306 + addi r1, r1, 16 307 + #endif 308 + 309 + /* Jump back to real return address */ 310 + blr 311 + #endif /* CONFIG_FUNCTION_GRAPH_TRACER */ 312 + 313 + .pushsection ".tramp.ftrace.text","aw",@progbits; 314 + .globl ftrace_tramp_text 315 + ftrace_tramp_text: 316 + .space 32 317 + .popsection 318 + 319 + .pushsection ".tramp.ftrace.init","aw",@progbits; 320 + .globl ftrace_tramp_init 321 + ftrace_tramp_init: 322 + .space 32 323 + .popsection
+2 -13
arch/powerpc/kernel/traps.c
··· 1158 1158 * pretend we got a single-step exception. This was pointed out 1159 1159 * by Kumar Gala. -- paulus 1160 1160 */ 1161 - static void emulate_single_step(struct pt_regs *regs) 1161 + void emulate_single_step(struct pt_regs *regs) 1162 1162 { 1163 1163 if (single_stepping(regs)) 1164 1164 __single_step_exception(regs); ··· 2225 2225 } 2226 2226 2227 2227 #if defined(CONFIG_BOOKE_WDT) || defined(CONFIG_40x) 2228 - /* 2229 - * Default handler for a Watchdog exception, 2230 - * spins until a reboot occurs 2231 - */ 2232 - void __attribute__ ((weak)) WatchdogHandler(struct pt_regs *regs) 2233 - { 2234 - /* Generic WatchdogHandler, implement your own */ 2235 - mtspr(SPRN_TCR, mfspr(SPRN_TCR)&(~TCR_WIE)); 2236 - return; 2237 - } 2238 - 2239 2228 DEFINE_INTERRUPT_HANDLER_NMI(WatchdogException) 2240 2229 { 2241 2230 printk (KERN_EMERG "PowerPC Book-E Watchdog Exception\n"); 2242 - WatchdogHandler(regs); 2231 + mtspr(SPRN_TCR, mfspr(SPRN_TCR) & ~TCR_WIE); 2243 2232 return 0; 2244 2233 } 2245 2234 #endif
+1 -1
arch/powerpc/kernel/ucall.S
··· 5 5 * Copyright 2019, IBM Corporation. 6 6 * 7 7 */ 8 + #include <linux/export.h> 8 9 #include <asm/ppc_asm.h> 9 - #include <asm/export.h> 10 10 11 11 _GLOBAL(ucall_norets) 12 12 EXPORT_SYMBOL_GPL(ucall_norets)
+1 -1
arch/powerpc/kernel/vector.S
··· 1 1 /* SPDX-License-Identifier: GPL-2.0 */ 2 + #include <linux/export.h> 2 3 #include <linux/linkage.h> 3 4 #include <asm/processor.h> 4 5 #include <asm/ppc_asm.h> ··· 9 8 #include <asm/thread_info.h> 10 9 #include <asm/page.h> 11 10 #include <asm/ptrace.h> 12 - #include <asm/export.h> 13 11 #include <asm/asm-compat.h> 14 12 15 13 /*
-4
arch/powerpc/kernel/vmlinux.lds.S
··· 107 107 #endif 108 108 /* careful! __ftr_alt_* sections need to be close to .text */ 109 109 *(.text.hot .text.hot.* TEXT_MAIN .text.fixup .text.unlikely .text.unlikely.* .fixup __ftr_alt_* .ref.text); 110 - #ifdef CONFIG_PPC64 111 110 *(.tramp.ftrace.text); 112 - #endif 113 111 NOINSTR_TEXT 114 112 SCHED_TEXT 115 113 LOCK_TEXT ··· 274 276 */ 275 277 . = ALIGN(PAGE_SIZE); 276 278 _einittext = .; 277 - #ifdef CONFIG_PPC64 278 279 *(.tramp.ftrace.init); 279 - #endif 280 280 } :text 281 281 282 282 /* .exit.text is discarded at runtime, not link time,
+1 -1
arch/powerpc/kexec/crash.c
··· 350 350 351 351 void default_machine_crash_shutdown(struct pt_regs *regs) 352 352 { 353 - unsigned int i; 353 + volatile unsigned int i; 354 354 int (*old_handler)(struct pt_regs *regs); 355 355 356 356 if (TRAP(regs) == INTERRUPT_SYSTEM_RESET)
+5 -8
arch/powerpc/kexec/file_load_64.c
··· 17 17 #include <linux/kexec.h> 18 18 #include <linux/of_fdt.h> 19 19 #include <linux/libfdt.h> 20 - #include <linux/of_device.h> 20 + #include <linux/of.h> 21 21 #include <linux/memblock.h> 22 22 #include <linux/slab.h> 23 23 #include <linux/vmalloc.h> ··· 27 27 #include <asm/kexec_ranges.h> 28 28 #include <asm/crashdump-ppc64.h> 29 29 #include <asm/mmzone.h> 30 + #include <asm/iommu.h> 30 31 #include <asm/prom.h> 31 32 #include <asm/plpks.h> 32 33 ··· 934 933 } 935 934 936 935 /** 937 - * get_cpu_node_size - Compute the size of a CPU node in the FDT. 938 - * This should be done only once and the value is stored in 939 - * a static variable. 936 + * cpu_node_size - Compute the size of a CPU node in the FDT. 937 + * This should be done only once and the value is stored in 938 + * a static variable. 940 939 * Returns the max size of a CPU node in the FDT. 941 940 */ 942 941 static unsigned int cpu_node_size(void) ··· 1209 1208 if (ret < 0) 1210 1209 goto out; 1211 1210 1212 - #define DIRECT64_PROPNAME "linux,direct64-ddr-window-info" 1213 - #define DMA64_PROPNAME "linux,dma64-ddr-window-info" 1214 1211 ret = update_pci_dma_nodes(fdt, DIRECT64_PROPNAME); 1215 1212 if (ret < 0) 1216 1213 goto out; ··· 1216 1217 ret = update_pci_dma_nodes(fdt, DMA64_PROPNAME); 1217 1218 if (ret < 0) 1218 1219 goto out; 1219 - #undef DMA64_PROPNAME 1220 - #undef DIRECT64_PROPNAME 1221 1220 1222 1221 /* Update memory reserve map */ 1223 1222 ret = get_reserved_memory_ranges(&rmem);
+1 -1
arch/powerpc/kexec/ranges.c
··· 18 18 19 19 #include <linux/sort.h> 20 20 #include <linux/kexec.h> 21 - #include <linux/of_device.h> 21 + #include <linux/of.h> 22 22 #include <linux/slab.h> 23 23 #include <asm/sections.h> 24 24 #include <asm/kexec_ranges.h>
+1 -1
arch/powerpc/kvm/book3s_64_entry.S
··· 1 1 /* SPDX-License-Identifier: GPL-2.0-only */ 2 + #include <linux/export.h> 2 3 #include <asm/asm-offsets.h> 3 4 #include <asm/cache.h> 4 5 #include <asm/code-patching-asm.h> 5 6 #include <asm/exception-64s.h> 6 - #include <asm/export.h> 7 7 #include <asm/kvm_asm.h> 8 8 #include <asm/kvm_book3s_asm.h> 9 9 #include <asm/mmu.h>
+1 -1
arch/powerpc/kvm/book3s_64_mmu_hv.c
··· 182 182 vfree(info->rev); 183 183 info->rev = NULL; 184 184 if (info->cma) 185 - kvm_free_hpt_cma(virt_to_page(info->virt), 185 + kvm_free_hpt_cma(virt_to_page((void *)info->virt), 186 186 1 << (info->order - PAGE_SHIFT)); 187 187 else if (info->virt) 188 188 free_pages(info->virt, info->order - PAGE_SHIFT);
+1
arch/powerpc/kvm/book3s_hv_ras.c
··· 9 9 #include <linux/kvm.h> 10 10 #include <linux/kvm_host.h> 11 11 #include <linux/kernel.h> 12 + #include <asm/lppaca.h> 12 13 #include <asm/opal.h> 13 14 #include <asm/mce.h> 14 15 #include <asm/machdep.h>
+1 -1
arch/powerpc/kvm/book3s_hv_rmhandlers.S
··· 10 10 * Authors: Alexander Graf <agraf@suse.de> 11 11 */ 12 12 13 + #include <linux/export.h> 13 14 #include <linux/linkage.h> 14 15 #include <linux/objtool.h> 15 16 #include <asm/ppc_asm.h> ··· 25 24 #include <asm/exception-64s.h> 26 25 #include <asm/kvm_book3s_asm.h> 27 26 #include <asm/book3s/64/mmu-hash.h> 28 - #include <asm/export.h> 29 27 #include <asm/tm.h> 30 28 #include <asm/opal.h> 31 29 #include <asm/thread_info.h>
+6 -1
arch/powerpc/kvm/e500mc.c
··· 20 20 #include <asm/cputable.h> 21 21 #include <asm/kvm_ppc.h> 22 22 #include <asm/dbell.h> 23 + #include <asm/ppc-opcode.h> 23 24 24 25 #include "booke.h" 25 26 #include "e500.h" ··· 93 92 94 93 local_irq_save(flags); 95 94 mtspr(SPRN_MAS5, MAS5_SGS | get_lpid(&vcpu_e500->vcpu)); 96 - asm volatile("tlbilxlpid"); 95 + /* 96 + * clang-17 and older could not assemble tlbilxlpid. 97 + * https://github.com/ClangBuiltLinux/linux/issues/1891 98 + */ 99 + asm volatile (PPC_TLBILX_LPID); 97 100 mtspr(SPRN_MAS5, 0); 98 101 local_irq_restore(flags); 99 102 }
+1 -1
arch/powerpc/kvm/tm.S
··· 6 6 * Copyright 2011 Paul Mackerras, IBM Corp. <paulus@au1.ibm.com> 7 7 */ 8 8 9 + #include <linux/export.h> 9 10 #include <asm/reg.h> 10 11 #include <asm/ppc_asm.h> 11 12 #include <asm/asm-offsets.h> 12 - #include <asm/export.h> 13 13 #include <asm/tm.h> 14 14 #include <asm/cputable.h> 15 15
+1 -1
arch/powerpc/lib/Makefile
··· 27 27 CFLAGS_code-patching.o += $(DISABLE_LATENT_ENTROPY_PLUGIN) 28 28 CFLAGS_feature-fixups.o += $(DISABLE_LATENT_ENTROPY_PLUGIN) 29 29 30 - obj-y += alloc.o code-patching.o feature-fixups.o pmem.o 30 + obj-y += code-patching.o feature-fixups.o pmem.o 31 31 32 32 obj-$(CONFIG_CODE_PATCHING_SELFTEST) += test-code-patching.o 33 33
-23
arch/powerpc/lib/alloc.c
··· 1 - // SPDX-License-Identifier: GPL-2.0 2 - #include <linux/types.h> 3 - #include <linux/init.h> 4 - #include <linux/slab.h> 5 - #include <linux/memblock.h> 6 - #include <linux/string.h> 7 - #include <asm/setup.h> 8 - 9 - 10 - void * __ref zalloc_maybe_bootmem(size_t size, gfp_t mask) 11 - { 12 - void *p; 13 - 14 - if (slab_is_available()) 15 - p = kzalloc(size, mask); 16 - else { 17 - p = memblock_alloc(size, SMP_CACHE_BYTES); 18 - if (!p) 19 - panic("%s: Failed to allocate %zu bytes\n", __func__, 20 - size); 21 - } 22 - return p; 23 - }
+1 -1
arch/powerpc/lib/checksum_32.S
··· 8 8 * Severely hacked about by Paul Mackerras (paulus@cs.anu.edu.au). 9 9 */ 10 10 11 + #include <linux/export.h> 11 12 #include <linux/sys.h> 12 13 #include <asm/processor.h> 13 14 #include <asm/cache.h> 14 15 #include <asm/errno.h> 15 16 #include <asm/ppc_asm.h> 16 - #include <asm/export.h> 17 17 18 18 .text 19 19
+1 -1
arch/powerpc/lib/checksum_64.S
··· 8 8 * Severely hacked about by Paul Mackerras (paulus@cs.anu.edu.au). 9 9 */ 10 10 11 + #include <linux/export.h> 11 12 #include <linux/sys.h> 12 13 #include <asm/processor.h> 13 14 #include <asm/errno.h> 14 15 #include <asm/ppc_asm.h> 15 - #include <asm/export.h> 16 16 17 17 /* 18 18 * Computes the checksum of a memory block at buff, length len,
+1 -1
arch/powerpc/lib/copy_32.S
··· 4 4 * 5 5 * Copyright (C) 1996-2005 Paul Mackerras. 6 6 */ 7 + #include <linux/export.h> 7 8 #include <asm/processor.h> 8 9 #include <asm/cache.h> 9 10 #include <asm/errno.h> 10 11 #include <asm/ppc_asm.h> 11 - #include <asm/export.h> 12 12 #include <asm/code-patching-asm.h> 13 13 #include <asm/kasan.h> 14 14
+1 -1
arch/powerpc/lib/copy_mc_64.S
··· 4 4 * Derived from copyuser_power7.s by Anton Blanchard <anton@au.ibm.com> 5 5 * Author - Balbir Singh <bsingharora@gmail.com> 6 6 */ 7 + #include <linux/export.h> 7 8 #include <asm/ppc_asm.h> 8 9 #include <asm/errno.h> 9 - #include <asm/export.h> 10 10 11 11 .macro err1 12 12 100:
+1 -1
arch/powerpc/lib/copypage_64.S
··· 2 2 /* 3 3 * Copyright (C) 2008 Mark Nelson, IBM Corp. 4 4 */ 5 + #include <linux/export.h> 5 6 #include <asm/page.h> 6 7 #include <asm/processor.h> 7 8 #include <asm/ppc_asm.h> 8 9 #include <asm/asm-offsets.h> 9 - #include <asm/export.h> 10 10 #include <asm/feature-fixups.h> 11 11 12 12 _GLOBAL_TOC(copy_page)
+1 -1
arch/powerpc/lib/copyuser_64.S
··· 2 2 /* 3 3 * Copyright (C) 2002 Paul Mackerras, IBM Corp. 4 4 */ 5 + #include <linux/export.h> 5 6 #include <asm/processor.h> 6 7 #include <asm/ppc_asm.h> 7 - #include <asm/export.h> 8 8 #include <asm/asm-compat.h> 9 9 #include <asm/feature-fixups.h> 10 10
+27 -4
arch/powerpc/lib/feature-fixups.c
··· 67 67 return 0; 68 68 } 69 69 70 - static int patch_feature_section(unsigned long value, struct fixup_entry *fcur) 70 + static int patch_feature_section_mask(unsigned long value, unsigned long mask, 71 + struct fixup_entry *fcur) 71 72 { 72 73 u32 *start, *end, *alt_start, *alt_end, *src, *dest; 73 74 ··· 80 79 if ((alt_end - alt_start) > (end - start)) 81 80 return 1; 82 81 83 - if ((value & fcur->mask) == fcur->value) 82 + if ((value & fcur->mask & mask) == (fcur->value & mask)) 84 83 return 0; 85 84 86 85 src = alt_start; ··· 98 97 return 0; 99 98 } 100 99 101 - void do_feature_fixups(unsigned long value, void *fixup_start, void *fixup_end) 100 + static void do_feature_fixups_mask(unsigned long value, unsigned long mask, 101 + void *fixup_start, void *fixup_end) 102 102 { 103 103 struct fixup_entry *fcur, *fend; 104 104 ··· 107 105 fend = fixup_end; 108 106 109 107 for (; fcur < fend; fcur++) { 110 - if (patch_feature_section(value, fcur)) { 108 + if (patch_feature_section_mask(value, mask, fcur)) { 111 109 WARN_ON(1); 112 110 printk("Unable to patch feature section at %p - %p" \ 113 111 " with %p - %p\n", ··· 117 115 calc_addr(fcur, fcur->alt_end_off)); 118 116 } 119 117 } 118 + } 119 + 120 + void do_feature_fixups(unsigned long value, void *fixup_start, void *fixup_end) 121 + { 122 + do_feature_fixups_mask(value, ~0, fixup_start, fixup_end); 120 123 } 121 124 122 125 #ifdef CONFIG_PPC_BARRIER_NOSPEC ··· 658 651 do_final_fixups(); 659 652 } 660 653 654 + void __init update_mmu_feature_fixups(unsigned long mask) 655 + { 656 + saved_mmu_features &= ~mask; 657 + saved_mmu_features |= cur_cpu_spec->mmu_features & mask; 658 + 659 + do_feature_fixups_mask(cur_cpu_spec->mmu_features, mask, 660 + PTRRELOC(&__start___mmu_ftr_fixup), 661 + PTRRELOC(&__stop___mmu_ftr_fixup)); 662 + mmu_feature_keys_init(); 663 + } 664 + 661 665 void __init setup_feature_keys(void) 662 666 { 663 667 /* ··· 700 682 701 683 #define check(x) \ 702 684 if (!(x)) printk("feature-fixups: 
test failed at line %d\n", __LINE__); 685 + 686 + static int patch_feature_section(unsigned long value, struct fixup_entry *fcur) 687 + { 688 + return patch_feature_section_mask(value, ~0, fcur); 689 + } 703 690 704 691 /* This must be after the text it fixes up, vmlinux.lds.S enforces that atm */ 705 692 static struct fixup_entry fixup;
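The key change in `patch_feature_section_mask()` above is that only the bits selected by the caller's mask take part in the match, which is what lets the late `update_mmu_feature_fixups()` call re-evaluate a single feature bit (e.g. KUAP) without re-patching fixups keyed on other MMU features. A self-contained sketch of just that comparison (types and names local to this sketch):

```c
#include <assert.h>
#include <stdbool.h>

/* Each fixup entry carries its own (mask, value) pair, as in the
 * kernel's struct fixup_entry. */
struct fixup {
	unsigned long mask;
	unsigned long value;
};

/* Mirrors the early-return condition in patch_feature_section_mask():
 * true means the feature condition holds and the alternative code is
 * left in place; bits outside the caller's mask are ignored. */
static bool feature_matches(unsigned long cur_features, unsigned long mask,
			    const struct fixup *f)
{
	return (cur_features & f->mask & mask) == (f->value & mask);
}
```

Passing `~0` as the mask reproduces the old `patch_feature_section()` behaviour, which is exactly how the diff reimplements the unmasked entry points.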
+1 -1
arch/powerpc/lib/hweight_64.S
··· 5 5 * 6 6 * Author: Anton Blanchard <anton@au.ibm.com> 7 7 */ 8 + #include <linux/export.h> 8 9 #include <asm/processor.h> 9 10 #include <asm/ppc_asm.h> 10 - #include <asm/export.h> 11 11 #include <asm/feature-fixups.h> 12 12 13 13 /* Note: This code relies on -mminimal-toc */
+1 -1
arch/powerpc/lib/mem_64.S
··· 4 4 * 5 5 * Copyright (C) 1996 Paul Mackerras. 6 6 */ 7 + #include <linux/export.h> 7 8 #include <asm/processor.h> 8 9 #include <asm/errno.h> 9 10 #include <asm/ppc_asm.h> 10 - #include <asm/export.h> 11 11 #include <asm/kasan.h> 12 12 13 13 #ifndef CONFIG_KASAN
+1 -1
arch/powerpc/lib/memcmp_32.S
··· 7 7 * 8 8 */ 9 9 10 + #include <linux/export.h> 10 11 #include <asm/ppc_asm.h> 11 - #include <asm/export.h> 12 12 13 13 .text 14 14
+1 -1
arch/powerpc/lib/memcmp_64.S
··· 3 3 * Author: Anton Blanchard <anton@au.ibm.com> 4 4 * Copyright 2015 IBM Corporation. 5 5 */ 6 + #include <linux/export.h> 6 7 #include <asm/ppc_asm.h> 7 - #include <asm/export.h> 8 8 #include <asm/ppc-opcode.h> 9 9 10 10 #define off8 r6
+1 -1
arch/powerpc/lib/memcpy_64.S
··· 2 2 /* 3 3 * Copyright (C) 2002 Paul Mackerras, IBM Corp. 4 4 */ 5 + #include <linux/export.h> 5 6 #include <asm/processor.h> 6 7 #include <asm/ppc_asm.h> 7 - #include <asm/export.h> 8 8 #include <asm/asm-compat.h> 9 9 #include <asm/feature-fixups.h> 10 10 #include <asm/kasan.h>
+2 -2
arch/powerpc/lib/sstep.c
··· 485 485 * Copy from a buffer to userspace, using the largest possible 486 486 * aligned accesses, up to sizeof(long). 487 487 */ 488 - static nokprobe_inline int __copy_mem_out(u8 *dest, unsigned long ea, int nb, struct pt_regs *regs) 488 + static __always_inline int __copy_mem_out(u8 *dest, unsigned long ea, int nb, struct pt_regs *regs) 489 489 { 490 490 int c; 491 491 ··· 1043 1043 } 1044 1044 #endif /* CONFIG_VSX */ 1045 1045 1046 - static int __emulate_dcbz(unsigned long ea) 1046 + static __always_inline int __emulate_dcbz(unsigned long ea) 1047 1047 { 1048 1048 unsigned long i; 1049 1049 unsigned long size = l1_dcache_bytes();
+1 -1
arch/powerpc/lib/string.S
··· 4 4 * 5 5 * Copyright (C) 1996 Paul Mackerras. 6 6 */ 7 + #include <linux/export.h> 7 8 #include <asm/ppc_asm.h> 8 - #include <asm/export.h> 9 9 #include <asm/cache.h> 10 10 11 11 .text
+1 -1
arch/powerpc/lib/string_32.S
··· 7 7 * 8 8 */ 9 9 10 + #include <linux/export.h> 10 11 #include <asm/ppc_asm.h> 11 - #include <asm/export.h> 12 12 #include <asm/cache.h> 13 13 14 14 .text
+1 -1
arch/powerpc/lib/string_64.S
··· 6 6 * Author: Anton Blanchard <anton@au.ibm.com> 7 7 */ 8 8 9 + #include <linux/export.h> 9 10 #include <asm/ppc_asm.h> 10 11 #include <asm/linkage.h> 11 12 #include <asm/asm-offsets.h> 12 - #include <asm/export.h> 13 13 14 14 /** 15 15 * __arch_clear_user: - Zero a block of memory in user space, with less checking.
+1 -1
arch/powerpc/lib/strlen_32.S
··· 6 6 * 7 7 * Inspired from glibc implementation 8 8 */ 9 + #include <linux/export.h> 9 10 #include <asm/ppc_asm.h> 10 - #include <asm/export.h> 11 11 #include <asm/cache.h> 12 12 13 13 .text
+1 -1
arch/powerpc/mm/book3s32/hash_low.S
··· 14 14 * hash table, so this file is not used on them.) 15 15 */ 16 16 17 + #include <linux/export.h> 17 18 #include <linux/pgtable.h> 18 19 #include <linux/init.h> 19 20 #include <asm/reg.h> ··· 23 22 #include <asm/ppc_asm.h> 24 23 #include <asm/thread_info.h> 25 24 #include <asm/asm-offsets.h> 26 - #include <asm/export.h> 27 25 #include <asm/feature-fixups.h> 28 26 #include <asm/code-patching-asm.h> 29 27
+3 -17
arch/powerpc/mm/book3s32/kuap.c
··· 3 3 #include <asm/kup.h> 4 4 #include <asm/smp.h> 5 5 6 - struct static_key_false disable_kuap_key; 7 - EXPORT_SYMBOL(disable_kuap_key); 8 - 9 - void kuap_lock_all_ool(void) 10 - { 11 - kuap_lock_all(); 12 - } 13 - EXPORT_SYMBOL(kuap_lock_all_ool); 14 - 15 - void kuap_unlock_all_ool(void) 16 - { 17 - kuap_unlock_all(); 18 - } 19 - EXPORT_SYMBOL(kuap_unlock_all_ool); 20 - 21 6 void setup_kuap(bool disabled) 22 7 { 23 8 if (!disabled) { 24 - kuap_lock_all_ool(); 9 + update_user_segments(mfsr(0) | SR_KS); 10 + isync(); /* Context sync required after mtsr() */ 25 11 init_mm.context.sr0 |= SR_KS; 26 12 current->thread.sr0 |= SR_KS; 27 13 } ··· 16 30 return; 17 31 18 32 if (disabled) 19 - static_branch_enable(&disable_kuap_key); 33 + cur_cpu_spec->mmu_features &= ~MMU_FTR_KUAP; 20 34 else 21 35 pr_info("Activating Kernel Userspace Access Protection\n"); 22 36 }
+1 -1
arch/powerpc/mm/book3s32/mmu_context.c
··· 71 71 mm->context.id = __init_new_context(); 72 72 mm->context.sr0 = CTX_TO_VSID(mm->context.id, 0); 73 73 74 - if (!kuep_is_disabled()) 74 + if (IS_ENABLED(CONFIG_PPC_KUEP)) 75 75 mm->context.sr0 |= SR_NX; 76 76 if (!kuap_is_disabled()) 77 77 mm->context.sr0 |= SR_KS;
+1
arch/powerpc/mm/book3s64/pgtable.c
··· 9 9 #include <linux/memremap.h> 10 10 #include <linux/pkeys.h> 11 11 #include <linux/debugfs.h> 12 + #include <linux/proc_fs.h> 12 13 #include <misc/cxl-base.h> 13 14 14 15 #include <asm/pgalloc.h>
+1 -1
arch/powerpc/mm/book3s64/pkeys.c
··· 291 291 292 292 if (smp_processor_id() == boot_cpuid) { 293 293 pr_info("Activating Kernel Userspace Access Prevention\n"); 294 - cur_cpu_spec->mmu_features |= MMU_FTR_BOOK3S_KUAP; 294 + cur_cpu_spec->mmu_features |= MMU_FTR_KUAP; 295 295 } 296 296 297 297 /*
+1 -64
arch/powerpc/mm/book3s64/radix_pgtable.c
··· 37 37 #include <mm/mmu_decl.h> 38 38 39 39 unsigned int mmu_base_pid; 40 - unsigned long radix_mem_block_size __ro_after_init; 41 40 42 41 static __ref void *early_alloc_pgtable(unsigned long size, int nid, 43 42 unsigned long region_start, unsigned long region_end) ··· 299 300 bool prev_exec, exec = false; 300 301 pgprot_t prot; 301 302 int psize; 302 - unsigned long max_mapping_size = radix_mem_block_size; 303 + unsigned long max_mapping_size = memory_block_size; 303 304 304 305 if (debug_pagealloc_enabled_or_kfence()) 305 306 max_mapping_size = PAGE_SIZE; ··· 501 502 return 1; 502 503 } 503 504 504 - #ifdef CONFIG_MEMORY_HOTPLUG 505 - static int __init probe_memory_block_size(unsigned long node, const char *uname, int 506 - depth, void *data) 507 - { 508 - unsigned long *mem_block_size = (unsigned long *)data; 509 - const __be32 *prop; 510 - int len; 511 - 512 - if (depth != 1) 513 - return 0; 514 - 515 - if (strcmp(uname, "ibm,dynamic-reconfiguration-memory")) 516 - return 0; 517 - 518 - prop = of_get_flat_dt_prop(node, "ibm,lmb-size", &len); 519 - 520 - if (!prop || len < dt_root_size_cells * sizeof(__be32)) 521 - /* 522 - * Nothing in the device tree 523 - */ 524 - *mem_block_size = MIN_MEMORY_BLOCK_SIZE; 525 - else 526 - *mem_block_size = of_read_number(prop, dt_root_size_cells); 527 - return 1; 528 - } 529 - 530 - static unsigned long __init radix_memory_block_size(void) 531 - { 532 - unsigned long mem_block_size = MIN_MEMORY_BLOCK_SIZE; 533 - 534 - /* 535 - * OPAL firmware feature is set by now. Hence we are ok 536 - * to test OPAL feature. 
537 - */ 538 - if (firmware_has_feature(FW_FEATURE_OPAL)) 539 - mem_block_size = 1UL * 1024 * 1024 * 1024; 540 - else 541 - of_scan_flat_dt(probe_memory_block_size, &mem_block_size); 542 - 543 - return mem_block_size; 544 - } 545 - 546 - #else /* CONFIG_MEMORY_HOTPLUG */ 547 - 548 - static unsigned long __init radix_memory_block_size(void) 549 - { 550 - return 1UL * 1024 * 1024 * 1024; 551 - } 552 - 553 - #endif /* CONFIG_MEMORY_HOTPLUG */ 554 - 555 - 556 505 void __init radix__early_init_devtree(void) 557 506 { 558 507 int rc; ··· 524 577 mmu_psize_defs[MMU_PAGE_64K].h_rpt_pgsize = 525 578 psize_to_rpti_pgsize(MMU_PAGE_64K); 526 579 } 527 - 528 - /* 529 - * Max mapping size used when mapping pages. We don't use 530 - * ppc_md.memory_block_size() here because this get called 531 - * early and we don't have machine probe called yet. Also 532 - * the pseries implementation only check for ibm,lmb-size. 533 - * All hypervisor supporting radix do expose that device 534 - * tree node. 535 - */ 536 - radix_mem_block_size = radix_memory_block_size(); 537 580 return; 538 581 } 539 582
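The removed `probe_memory_block_size()` read the `ibm,lmb-size` property with `of_read_number()`, which decodes device-tree cells. As a sketch of that decoding (helper name local to this sketch): property values are stored as consecutive big-endian 32-bit cells, most significant cell first.

```c
#include <assert.h>
#include <stdint.h>

/* Decode an ncells-wide device-tree number from its raw property
 * bytes, as of_read_number() does for "ibm,lmb-size". Each cell is
 * 4 big-endian bytes, concatenated MSB-first. */
static uint64_t dt_read_number(const uint8_t *prop, int ncells)
{
	uint64_t val = 0;
	int i;

	for (i = 0; i < ncells * 4; i++)
		val = (val << 8) | prop[i];
	return val;
}
```

With the generic memory-block-size rework in this series, the radix code no longer probes this itself and instead caps mappings at `memory_block_size`, which is how the 256MB (`0x10000000`) limit with coherent device memory is honoured.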
+151 -121
arch/powerpc/mm/book3s64/radix_tlb.c
··· 127 127 trace_tlbie(0, 0, rb, rs, ric, prs, r); 128 128 } 129 129 130 - static __always_inline void __tlbie_pid_lpid(unsigned long pid, 131 - unsigned long lpid, 132 - unsigned long ric) 133 - { 134 - unsigned long rb, rs, prs, r; 135 - 136 - rb = PPC_BIT(53); /* IS = 1 */ 137 - rs = (pid << PPC_BITLSHIFT(31)) | (lpid & ~(PPC_BITMASK(0, 31))); 138 - prs = 1; /* process scoped */ 139 - r = 1; /* radix format */ 140 - 141 - asm volatile(PPC_TLBIE_5(%0, %4, %3, %2, %1) 142 - : : "r"(rb), "i"(r), "i"(prs), "i"(ric), "r"(rs) : "memory"); 143 - trace_tlbie(0, 0, rb, rs, ric, prs, r); 144 - } 145 130 static __always_inline void __tlbie_lpid(unsigned long lpid, unsigned long ric) 146 131 { 147 132 unsigned long rb,rs,prs,r; ··· 187 202 trace_tlbie(0, 0, rb, rs, ric, prs, r); 188 203 } 189 204 190 - static __always_inline void __tlbie_va_lpid(unsigned long va, unsigned long pid, 191 - unsigned long lpid, 192 - unsigned long ap, unsigned long ric) 193 - { 194 - unsigned long rb, rs, prs, r; 195 - 196 - rb = va & ~(PPC_BITMASK(52, 63)); 197 - rb |= ap << PPC_BITLSHIFT(58); 198 - rs = (pid << PPC_BITLSHIFT(31)) | (lpid & ~(PPC_BITMASK(0, 31))); 199 - prs = 1; /* process scoped */ 200 - r = 1; /* radix format */ 201 - 202 - asm volatile(PPC_TLBIE_5(%0, %4, %3, %2, %1) 203 - : : "r"(rb), "i"(r), "i"(prs), "i"(ric), "r"(rs) : "memory"); 204 - trace_tlbie(0, 0, rb, rs, ric, prs, r); 205 - } 206 - 207 205 static __always_inline void __tlbie_lpid_va(unsigned long va, unsigned long lpid, 208 206 unsigned long ap, unsigned long ric) 209 207 { ··· 232 264 } 233 265 } 234 266 235 - static inline void fixup_tlbie_va_range_lpid(unsigned long va, 236 - unsigned long pid, 237 - unsigned long lpid, 238 - unsigned long ap) 239 - { 240 - if (cpu_has_feature(CPU_FTR_P9_TLBIE_ERAT_BUG)) { 241 - asm volatile("ptesync" : : : "memory"); 242 - __tlbie_pid_lpid(0, lpid, RIC_FLUSH_TLB); 243 - } 244 - 245 - if (cpu_has_feature(CPU_FTR_P9_TLBIE_STQ_BUG)) { 246 - asm volatile("ptesync" : : : 
"memory"); 247 - __tlbie_va_lpid(va, pid, lpid, ap, RIC_FLUSH_TLB); 248 - } 249 - } 250 - 251 267 static inline void fixup_tlbie_pid(unsigned long pid) 252 268 { 253 269 /* ··· 248 296 if (cpu_has_feature(CPU_FTR_P9_TLBIE_STQ_BUG)) { 249 297 asm volatile("ptesync": : :"memory"); 250 298 __tlbie_va(va, pid, mmu_get_ap(MMU_PAGE_64K), RIC_FLUSH_TLB); 251 - } 252 - } 253 - 254 - static inline void fixup_tlbie_pid_lpid(unsigned long pid, unsigned long lpid) 255 - { 256 - /* 257 - * We can use any address for the invalidation, pick one which is 258 - * probably unused as an optimisation. 259 - */ 260 - unsigned long va = ((1UL << 52) - 1); 261 - 262 - if (cpu_has_feature(CPU_FTR_P9_TLBIE_ERAT_BUG)) { 263 - asm volatile("ptesync" : : : "memory"); 264 - __tlbie_pid_lpid(0, lpid, RIC_FLUSH_TLB); 265 - } 266 - 267 - if (cpu_has_feature(CPU_FTR_P9_TLBIE_STQ_BUG)) { 268 - asm volatile("ptesync" : : : "memory"); 269 - __tlbie_va_lpid(va, pid, lpid, mmu_get_ap(MMU_PAGE_64K), 270 - RIC_FLUSH_TLB); 271 299 } 272 300 } 273 301 ··· 348 416 asm volatile("eieio; tlbsync; ptesync": : :"memory"); 349 417 } 350 418 351 - static inline void _tlbie_pid_lpid(unsigned long pid, unsigned long lpid, 352 - unsigned long ric) 353 - { 354 - asm volatile("ptesync" : : : "memory"); 355 - 356 - /* 357 - * Workaround the fact that the "ric" argument to __tlbie_pid 358 - * must be a compile-time contraint to match the "i" constraint 359 - * in the asm statement. 
360 - */ 361 - switch (ric) { 362 - case RIC_FLUSH_TLB: 363 - __tlbie_pid_lpid(pid, lpid, RIC_FLUSH_TLB); 364 - fixup_tlbie_pid_lpid(pid, lpid); 365 - break; 366 - case RIC_FLUSH_PWC: 367 - __tlbie_pid_lpid(pid, lpid, RIC_FLUSH_PWC); 368 - break; 369 - case RIC_FLUSH_ALL: 370 - default: 371 - __tlbie_pid_lpid(pid, lpid, RIC_FLUSH_ALL); 372 - fixup_tlbie_pid_lpid(pid, lpid); 373 - } 374 - asm volatile("eieio; tlbsync; ptesync" : : : "memory"); 375 - } 376 419 struct tlbiel_pid { 377 420 unsigned long pid; 378 421 unsigned long ric; ··· 473 566 fixup_tlbie_va_range(addr - page_size, pid, ap); 474 567 } 475 568 476 - static inline void __tlbie_va_range_lpid(unsigned long start, unsigned long end, 477 - unsigned long pid, unsigned long lpid, 478 - unsigned long page_size, 479 - unsigned long psize) 480 - { 481 - unsigned long addr; 482 - unsigned long ap = mmu_get_ap(psize); 483 - 484 - for (addr = start; addr < end; addr += page_size) 485 - __tlbie_va_lpid(addr, pid, lpid, ap, RIC_FLUSH_TLB); 486 - 487 - fixup_tlbie_va_range_lpid(addr - page_size, pid, lpid, ap); 488 - } 489 - 490 569 static __always_inline void _tlbie_va(unsigned long va, unsigned long pid, 491 570 unsigned long psize, unsigned long ric) 492 571 { ··· 551 658 __tlbie_pid(pid, RIC_FLUSH_PWC); 552 659 __tlbie_va_range(start, end, pid, page_size, psize); 553 660 asm volatile("eieio; tlbsync; ptesync": : :"memory"); 554 - } 555 - 556 - static inline void _tlbie_va_range_lpid(unsigned long start, unsigned long end, 557 - unsigned long pid, unsigned long lpid, 558 - unsigned long page_size, 559 - unsigned long psize, bool also_pwc) 560 - { 561 - asm volatile("ptesync" : : : "memory"); 562 - if (also_pwc) 563 - __tlbie_pid_lpid(pid, lpid, RIC_FLUSH_PWC); 564 - __tlbie_va_range_lpid(start, end, pid, lpid, page_size, psize); 565 - asm volatile("eieio; tlbsync; ptesync" : : : "memory"); 566 661 } 567 662 568 663 static inline void _tlbiel_va_range_multicast(struct mm_struct *mm, ··· 701 820 * that's what the 
caller expects. 702 821 */ 703 822 if (cpumask_test_cpu(cpu, mm_cpumask(mm))) { 704 - atomic_dec(&mm->context.active_cpus); 823 + dec_mm_active_cpus(mm); 705 824 cpumask_clear_cpu(cpu, mm_cpumask(mm)); 706 825 always_flush = true; 707 826 } ··· 1197 1316 * See the comment for radix in arch_exit_mmap(). 1198 1317 */ 1199 1318 if (tlb->fullmm) { 1200 - __flush_all_mm(mm, true); 1319 + if (IS_ENABLED(CONFIG_MMU_LAZY_TLB_SHOOTDOWN)) { 1320 + /* 1321 + * Shootdown based lazy tlb mm refcounting means we 1322 + * have to IPI everyone in the mm_cpumask anyway soon 1323 + * when the mm goes away, so might as well do it as 1324 + * part of the final flush now. 1325 + * 1326 + * If lazy shootdown was improved to reduce IPIs (e.g., 1327 + * by batching), then it may end up being better to use 1328 + * tlbies here instead. 1329 + */ 1330 + preempt_disable(); 1331 + 1332 + smp_mb(); /* see radix__flush_tlb_mm */ 1333 + exit_flush_lazy_tlbs(mm); 1334 + _tlbiel_pid(mm->context.id, RIC_FLUSH_ALL); 1335 + 1336 + /* 1337 + * It should not be possible to have coprocessors still 1338 + * attached here. 
1339 + */ 1340 + if (WARN_ON_ONCE(atomic_read(&mm->context.copros) > 0)) 1341 + __flush_all_mm(mm, true); 1342 + 1343 + preempt_enable(); 1344 + } else { 1345 + __flush_all_mm(mm, true); 1346 + } 1347 + 1201 1348 } else if ( (psize = radix_get_mmu_psize(page_size)) == -1) { 1202 1349 if (!tlb->freed_tables) 1203 1350 radix__flush_tlb_mm(mm); ··· 1406 1497 } 1407 1498 1408 1499 #ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE 1500 + static __always_inline void __tlbie_pid_lpid(unsigned long pid, 1501 + unsigned long lpid, 1502 + unsigned long ric) 1503 + { 1504 + unsigned long rb, rs, prs, r; 1505 + 1506 + rb = PPC_BIT(53); /* IS = 1 */ 1507 + rs = (pid << PPC_BITLSHIFT(31)) | (lpid & ~(PPC_BITMASK(0, 31))); 1508 + prs = 1; /* process scoped */ 1509 + r = 1; /* radix format */ 1510 + 1511 + asm volatile(PPC_TLBIE_5(%0, %4, %3, %2, %1) 1512 + : : "r"(rb), "i"(r), "i"(prs), "i"(ric), "r"(rs) : "memory"); 1513 + trace_tlbie(0, 0, rb, rs, ric, prs, r); 1514 + } 1515 + 1516 + static __always_inline void __tlbie_va_lpid(unsigned long va, unsigned long pid, 1517 + unsigned long lpid, 1518 + unsigned long ap, unsigned long ric) 1519 + { 1520 + unsigned long rb, rs, prs, r; 1521 + 1522 + rb = va & ~(PPC_BITMASK(52, 63)); 1523 + rb |= ap << PPC_BITLSHIFT(58); 1524 + rs = (pid << PPC_BITLSHIFT(31)) | (lpid & ~(PPC_BITMASK(0, 31))); 1525 + prs = 1; /* process scoped */ 1526 + r = 1; /* radix format */ 1527 + 1528 + asm volatile(PPC_TLBIE_5(%0, %4, %3, %2, %1) 1529 + : : "r"(rb), "i"(r), "i"(prs), "i"(ric), "r"(rs) : "memory"); 1530 + trace_tlbie(0, 0, rb, rs, ric, prs, r); 1531 + } 1532 + 1533 + static inline void fixup_tlbie_pid_lpid(unsigned long pid, unsigned long lpid) 1534 + { 1535 + /* 1536 + * We can use any address for the invalidation, pick one which is 1537 + * probably unused as an optimisation. 
1538 + */ 1539 + unsigned long va = ((1UL << 52) - 1); 1540 + 1541 + if (cpu_has_feature(CPU_FTR_P9_TLBIE_ERAT_BUG)) { 1542 + asm volatile("ptesync" : : : "memory"); 1543 + __tlbie_pid_lpid(0, lpid, RIC_FLUSH_TLB); 1544 + } 1545 + 1546 + if (cpu_has_feature(CPU_FTR_P9_TLBIE_STQ_BUG)) { 1547 + asm volatile("ptesync" : : : "memory"); 1548 + __tlbie_va_lpid(va, pid, lpid, mmu_get_ap(MMU_PAGE_64K), 1549 + RIC_FLUSH_TLB); 1550 + } 1551 + } 1552 + 1553 + static inline void _tlbie_pid_lpid(unsigned long pid, unsigned long lpid, 1554 + unsigned long ric) 1555 + { 1556 + asm volatile("ptesync" : : : "memory"); 1557 + 1558 + /* 1559 + * Workaround the fact that the "ric" argument to __tlbie_pid 1560 + * must be a compile-time contraint to match the "i" constraint 1561 + * in the asm statement. 1562 + */ 1563 + switch (ric) { 1564 + case RIC_FLUSH_TLB: 1565 + __tlbie_pid_lpid(pid, lpid, RIC_FLUSH_TLB); 1566 + fixup_tlbie_pid_lpid(pid, lpid); 1567 + break; 1568 + case RIC_FLUSH_PWC: 1569 + __tlbie_pid_lpid(pid, lpid, RIC_FLUSH_PWC); 1570 + break; 1571 + case RIC_FLUSH_ALL: 1572 + default: 1573 + __tlbie_pid_lpid(pid, lpid, RIC_FLUSH_ALL); 1574 + fixup_tlbie_pid_lpid(pid, lpid); 1575 + } 1576 + asm volatile("eieio; tlbsync; ptesync" : : : "memory"); 1577 + } 1578 + 1579 + static inline void fixup_tlbie_va_range_lpid(unsigned long va, 1580 + unsigned long pid, 1581 + unsigned long lpid, 1582 + unsigned long ap) 1583 + { 1584 + if (cpu_has_feature(CPU_FTR_P9_TLBIE_ERAT_BUG)) { 1585 + asm volatile("ptesync" : : : "memory"); 1586 + __tlbie_pid_lpid(0, lpid, RIC_FLUSH_TLB); 1587 + } 1588 + 1589 + if (cpu_has_feature(CPU_FTR_P9_TLBIE_STQ_BUG)) { 1590 + asm volatile("ptesync" : : : "memory"); 1591 + __tlbie_va_lpid(va, pid, lpid, ap, RIC_FLUSH_TLB); 1592 + } 1593 + } 1594 + 1595 + static inline void __tlbie_va_range_lpid(unsigned long start, unsigned long end, 1596 + unsigned long pid, unsigned long lpid, 1597 + unsigned long page_size, 1598 + unsigned long psize) 1599 + { 1600 + 
unsigned long addr; 1601 + unsigned long ap = mmu_get_ap(psize); 1602 + 1603 + for (addr = start; addr < end; addr += page_size) 1604 + __tlbie_va_lpid(addr, pid, lpid, ap, RIC_FLUSH_TLB); 1605 + 1606 + fixup_tlbie_va_range_lpid(addr - page_size, pid, lpid, ap); 1607 + } 1608 + 1609 + static inline void _tlbie_va_range_lpid(unsigned long start, unsigned long end, 1610 + unsigned long pid, unsigned long lpid, 1611 + unsigned long page_size, 1612 + unsigned long psize, bool also_pwc) 1613 + { 1614 + asm volatile("ptesync" : : : "memory"); 1615 + if (also_pwc) 1616 + __tlbie_pid_lpid(pid, lpid, RIC_FLUSH_PWC); 1617 + __tlbie_va_range_lpid(start, end, pid, lpid, page_size, psize); 1618 + asm volatile("eieio; tlbsync; ptesync" : : : "memory"); 1619 + } 1620 + 1409 1621 /* 1410 1622 * Performs process-scoped invalidations for a given LPID 1411 1623 * as part of H_RPT_INVALIDATE hcall.
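The `__tlbie_pid_lpid()`/`__tlbie_va_lpid()` helpers moved under `CONFIG_KVM_BOOK3S_HV_POSSIBLE` all build the tlbie RS operand the same way: PID in the upper 32 bits, LPID in the lower 32, expressed with the kernel's big-endian (IBM) bit-numbering macros. A minimal userspace sketch, with the macros re-defined locally to mirror (as an assumption) the kernel's definitions:

```c
#include <assert.h>
#include <stdint.h>

/* Big-endian (IBM) bit numbering: bit 0 is the MSB of a 64-bit register.
 * These definitions mirror the kernel macros; they are local stand-ins here. */
#define PPC_BITLSHIFT(be)   (64 - 1 - (be))
#define PPC_BIT(bit)        (1UL << PPC_BITLSHIFT(bit))
#define PPC_BITMASK(bs, be) ((PPC_BIT(bs) - PPC_BIT(be)) | PPC_BIT(bs))

/* RS operand layout used by __tlbie_pid_lpid()/__tlbie_va_lpid():
 * PID in BE bits 0..31 (upper word), LPID in BE bits 32..63 (lower word). */
static inline uint64_t tlbie_rs(uint64_t pid, uint64_t lpid)
{
	return (pid << PPC_BITLSHIFT(31)) | (lpid & ~PPC_BITMASK(0, 31));
}
```

With these definitions, `PPC_BITLSHIFT(31)` is 32 and `~PPC_BITMASK(0, 31)` is `0x00000000FFFFFFFF`, so the expression reduces to `(pid << 32) | (lpid & 0xFFFFFFFF)`; likewise `rb = PPC_BIT(53)` in `__tlbie_pid_lpid()` sets BE bit 53, i.e. `1UL << 10`.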
+1
arch/powerpc/mm/book3s64/slb.c
··· 13 13 #include <asm/mmu.h> 14 14 #include <asm/mmu_context.h> 15 15 #include <asm/paca.h> 16 + #include <asm/lppaca.h> 16 17 #include <asm/ppc-opcode.h> 17 18 #include <asm/cputable.h> 18 19 #include <asm/cacheflush.h>
+2
arch/powerpc/mm/init_32.c
··· 126 126 127 127 setup_kup(); 128 128 129 + update_mmu_feature_fixups(MMU_FTR_KUAP); 130 + 129 131 /* Shortly after that, the entire linear mapping will be available */ 130 132 memblock_set_current_limit(lowmem_end_addr); 131 133 }
+127
arch/powerpc/mm/init_64.c
··· 40 40 #include <linux/of_fdt.h> 41 41 #include <linux/libfdt.h> 42 42 #include <linux/memremap.h> 43 + #include <linux/memory.h> 43 44 44 45 #include <asm/pgalloc.h> 45 46 #include <asm/page.h> ··· 494 493 return 1; 495 494 } 496 495 496 + /* 497 + * Outside hotplug the kernel uses this value to map the kernel direct map 498 + * with radix. To be compatible with older kernels, let's keep this value 499 + * as 16M which is also SECTION_SIZE with SPARSEMEM. We can ideally map 500 + * things with 1GB size in the case where we don't support hotplug. 501 + */ 502 + #ifndef CONFIG_MEMORY_HOTPLUG 503 + #define DEFAULT_MEMORY_BLOCK_SIZE SZ_16M 504 + #else 505 + #define DEFAULT_MEMORY_BLOCK_SIZE MIN_MEMORY_BLOCK_SIZE 506 + #endif 507 + 508 + static void update_memory_block_size(unsigned long *block_size, unsigned long mem_size) 509 + { 510 + unsigned long min_memory_block_size = DEFAULT_MEMORY_BLOCK_SIZE; 511 + 512 + for (; *block_size > min_memory_block_size; *block_size >>= 2) { 513 + if ((mem_size & *block_size) == 0) 514 + break; 515 + } 516 + } 517 + 518 + static int __init probe_memory_block_size(unsigned long node, const char *uname, int 519 + depth, void *data) 520 + { 521 + const char *type; 522 + unsigned long *block_size = (unsigned long *)data; 523 + const __be32 *reg, *endp; 524 + int l; 525 + 526 + if (depth != 1) 527 + return 0; 528 + /* 529 + * If we have dynamic-reconfiguration-memory node, use the 530 + * lmb value. 531 + */ 532 + if (strcmp(uname, "ibm,dynamic-reconfiguration-memory") == 0) { 533 + 534 + const __be32 *prop; 535 + 536 + prop = of_get_flat_dt_prop(node, "ibm,lmb-size", &l); 537 + 538 + if (!prop || l < dt_root_size_cells * sizeof(__be32)) 539 + /* 540 + * Nothing in the device tree 541 + */ 542 + *block_size = DEFAULT_MEMORY_BLOCK_SIZE; 543 + else 544 + *block_size = of_read_number(prop, dt_root_size_cells); 545 + /* 546 + * We have found the final value. Don't probe further. 
547 + */ 548 + return 1; 549 + } 550 + /* 551 + * Find all the device tree nodes of memory type and make sure 552 + * the area can be mapped using the memory block size value 553 + * we end up using. We start with 1G value and keep reducing 554 + * it such that we can map the entire area using memory_block_size. 555 + * This will be used on powernv and older pseries that don't 556 + * have ibm,lmb-size node. 557 + * For ex: with P5 we can end up with 558 + * memory@0 -> 128MB 559 + * memory@128M -> 64M 560 + * This will end up using 64MB memory block size value. 561 + */ 562 + type = of_get_flat_dt_prop(node, "device_type", NULL); 563 + if (type == NULL || strcmp(type, "memory") != 0) 564 + return 0; 565 + 566 + reg = of_get_flat_dt_prop(node, "linux,usable-memory", &l); 567 + if (!reg) 568 + reg = of_get_flat_dt_prop(node, "reg", &l); 569 + if (!reg) 570 + return 0; 571 + 572 + endp = reg + (l / sizeof(__be32)); 573 + while ((endp - reg) >= (dt_root_addr_cells + dt_root_size_cells)) { 574 + const char *compatible; 575 + u64 size; 576 + 577 + dt_mem_next_cell(dt_root_addr_cells, &reg); 578 + size = dt_mem_next_cell(dt_root_size_cells, &reg); 579 + 580 + if (size) { 581 + update_memory_block_size(block_size, size); 582 + continue; 583 + } 584 + /* 585 + * ibm,coherent-device-memory with linux,usable-memory = 0 586 + * Force 256MiB block size. Work around for GPUs on P9 PowerNV 587 + * linux,usable-memory == 0 implies driver managed memory and 588 + * we can't use large memory block size due to hotplug/unplug 589 + * limitations. 590 + */ 591 + compatible = of_get_flat_dt_prop(node, "compatible", NULL); 592 + if (compatible && !strcmp(compatible, "ibm,coherent-device-memory")) { 593 + if (*block_size > SZ_256M) 594 + *block_size = SZ_256M; 595 + /* 596 + * We keep 256M as the upper limit with GPU present. 
597 + */ 598 + return 0; 599 + } 600 + } 601 + /* continue looking for other memory device types */ 602 + return 0; 603 + } 604 + 605 + /* 606 + * start with 1G memory block size. Early init will 607 + * fix this with correct value. 608 + */ 609 + unsigned long memory_block_size __ro_after_init = 1UL << 30; 610 + static void __init early_init_memory_block_size(void) 611 + { 612 + /* 613 + * We need to do memory_block_size probe early so that 614 + * radix__early_init_mmu() can use this as limit for 615 + * mapping page size. 616 + */ 617 + of_scan_flat_dt(probe_memory_block_size, &memory_block_size); 618 + } 619 + 497 620 void __init mmu_early_init_devtree(void) 498 621 { 499 622 bool hvmode = !!(mfmsr() & MSR_HV); ··· 650 525 */ 651 526 if (!hvmode) 652 527 early_check_vec5(); 528 + 529 + early_init_memory_block_size(); 653 530 654 531 if (early_radix_enabled()) { 655 532 radix__early_init_devtree();
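`probe_memory_block_size()` reads `ibm,lmb-size` with `of_read_number(prop, dt_root_size_cells)`, which assembles one number from `dt_root_size_cells` big-endian 32-bit cells. A sketch of that assembly, taking the cells as raw big-endian bytes (an assumption for the sketch; the kernel reads `__be32` words straight out of the FDT blob):

```c
#include <assert.h>
#include <stdint.h>

/* Model of the flattened-device-tree cell read used for "ibm,lmb-size":
 * size_cells consecutive big-endian 32-bit cells form one 64-bit number. */
static uint64_t fdt_read_number(const uint8_t *cells, int size_cells)
{
	uint64_t r = 0;

	for (int i = 0; i < size_cells * 4; i++)
		r = (r << 8) | cells[i];   /* big-endian accumulation */
	return r;
}
```

For example, a 256MB LMB size encoded in two size cells is the byte sequence `00 00 00 00 10 00 00 00`, which reads back as `0x10000000`.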
+6 -2
arch/powerpc/mm/mmu_context.c
··· 43 43 void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next, 44 44 struct task_struct *tsk) 45 45 { 46 + int cpu = smp_processor_id(); 46 47 bool new_on_cpu = false; 47 48 48 49 /* Mark this context has been used on the new CPU */ 49 - if (!cpumask_test_cpu(smp_processor_id(), mm_cpumask(next))) { 50 - cpumask_set_cpu(smp_processor_id(), mm_cpumask(next)); 50 + if (!cpumask_test_cpu(cpu, mm_cpumask(next))) { 51 + VM_WARN_ON_ONCE(next == &init_mm); 52 + cpumask_set_cpu(cpu, mm_cpumask(next)); 51 53 inc_mm_active_cpus(next); 52 54 53 55 /* ··· 102 100 * sub architectures. Out of line for now 103 101 */ 104 102 switch_mmu_context(prev, next, tsk); 103 + 104 + VM_WARN_ON_ONCE(!cpumask_test_cpu(cpu, mm_cpumask(prev))); 105 105 } 106 106 107 107 #ifndef CONFIG_PPC_BOOK3S_64
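The `switch_mm_irqs_off()` hunk keeps the invariant that `mm->context.active_cpus` counts exactly the CPUs set in `mm_cpumask()`, incrementing under the same first-use test that sets the bit (and the radix_tlb.c hunk above makes the lazy-flush path decrement through `dec_mm_active_cpus()` symmetrically). A toy model of that bookkeeping, with stand-in types rather than the kernel's:

```c
#include <assert.h>

/* Toy model (not the kernel's types) of the cpumask/active_cpus invariant. */
struct toy_mm {
	unsigned long cpumask;   /* one bit per CPU */
	int active_cpus;
};

static void toy_switch_in(struct toy_mm *mm, int cpu)
{
	if (!(mm->cpumask & (1UL << cpu))) {     /* first use on this CPU */
		mm->cpumask |= 1UL << cpu;
		mm->active_cpus++;               /* inc_mm_active_cpus() */
	}
}

static void toy_exit_lazy(struct toy_mm *mm, int cpu)
{
	if (mm->cpumask & (1UL << cpu)) {
		mm->active_cpus--;               /* dec_mm_active_cpus() */
		mm->cpumask &= ~(1UL << cpu);
	}
}
```

Repeated switches on the same CPU are idempotent; only the first sets the bit and bumps the count.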
+1
arch/powerpc/mm/mmu_decl.h
··· 110 110 void MMU_init_hw_patch(void); 111 111 unsigned long mmu_mapin_ram(unsigned long base, unsigned long top); 112 112 #endif 113 + void mmu_init_secondary(int cpu); 113 114 114 115 #ifdef CONFIG_PPC_E500 115 116 extern unsigned long map_mem_in_cams(unsigned long ram, int max_cam_idx,
+2 -6
arch/powerpc/mm/nohash/kup.c
··· 5 5 6 6 #include <linux/export.h> 7 7 #include <linux/init.h> 8 - #include <linux/jump_label.h> 9 8 #include <linux/printk.h> 10 9 #include <linux/smp.h> 11 10 ··· 12 13 #include <asm/smp.h> 13 14 14 15 #ifdef CONFIG_PPC_KUAP 15 - struct static_key_false disable_kuap_key; 16 - EXPORT_SYMBOL(disable_kuap_key); 17 - 18 16 void setup_kuap(bool disabled) 19 17 { 20 18 if (disabled) { 21 19 if (IS_ENABLED(CONFIG_40x)) 22 20 disable_kuep = true; 23 21 if (smp_processor_id() == boot_cpuid) 24 - static_branch_enable(&disable_kuap_key); 22 + cur_cpu_spec->mmu_features &= ~MMU_FTR_KUAP; 25 23 return; 26 24 } 27 25 28 26 pr_info("Activating Kernel Userspace Access Protection\n"); 29 27 30 - __prevent_user_access(KUAP_READ_WRITE); 28 + prevent_user_access(KUAP_READ_WRITE); 31 29 } 32 30 #endif
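The nohash/kup.c hunk replaces the `disable_kuap_key` static key with clearing `MMU_FTR_KUAP` out of `cur_cpu_spec->mmu_features`, so the disabled case is handled by the ordinary MMU-feature check. A toy model of that mechanism (the bit value and names here are illustrative stand-ins, not the real kernel definitions):

```c
#include <assert.h>

#define MMU_FTR_KUAP (1u << 5)   /* illustrative bit position only */

/* Stand-in for cur_cpu_spec->mmu_features, seeded with KUAP enabled. */
static unsigned int mmu_features = MMU_FTR_KUAP;

static int kuap_active(void)
{
	return !!(mmu_features & MMU_FTR_KUAP);   /* mmu_has_feature() analogue */
}

static void setup_kuap_sketch(int disabled)
{
	if (disabled)
		mmu_features &= ~MMU_FTR_KUAP;   /* feature cleared once at boot */
}
```

After `setup_kuap_sketch(1)` every later `kuap_active()` check sees the feature gone, which is what lets the real code drop the exported static key entirely.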
+5 -14
arch/powerpc/mm/nohash/tlb.c
··· 318 318 319 319 #endif /* CONFIG_SMP */ 320 320 321 - #ifdef CONFIG_PPC_47x 322 - void __init early_init_mmu_47x(void) 323 - { 324 - #ifdef CONFIG_SMP 325 - unsigned long root = of_get_flat_dt_root(); 326 - if (of_get_flat_dt_prop(root, "cooperative-partition", NULL)) 327 - mmu_clear_feature(MMU_FTR_USE_TLBIVAX_BCAST); 328 - #endif /* CONFIG_SMP */ 329 - } 330 - #endif /* CONFIG_PPC_47x */ 331 - 332 321 /* 333 322 * Flush kernel TLB entries in the given range 334 323 */ ··· 735 746 #else /* ! CONFIG_PPC64 */ 736 747 void __init early_init_mmu(void) 737 748 { 738 - #ifdef CONFIG_PPC_47x 739 - early_init_mmu_47x(); 740 - #endif 749 + unsigned long root = of_get_flat_dt_root(); 750 + 751 + if (IS_ENABLED(CONFIG_PPC_47x) && IS_ENABLED(CONFIG_SMP) && 752 + of_get_flat_dt_prop(root, "cooperative-partition", NULL)) 753 + mmu_clear_feature(MMU_FTR_USE_TLBIVAX_BCAST); 741 754 } 742 755 #endif /* CONFIG_PPC64 */
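The nohash/tlb.c hunk folds the `#ifdef CONFIG_PPC_47x` / `#ifdef CONFIG_SMP` blocks into one `IS_ENABLED()` condition, so the guarded code is always parsed and type-checked but compiles away when either option is off. A sketch of the pattern, with stand-in config values and a simplified `IS_ENABLED()` (the kernel's macro is fancier but has the same constant-folding effect):

```c
#include <assert.h>

#define CONFIG_PPC_47x 1
#define CONFIG_SMP     0          /* this configuration disables the branch */
#define IS_ENABLED(x)  (x)        /* simplified; kernel version checks definedness */

static int feature_cleared;

static void mmu_clear_feature_stub(void)
{
	feature_cleared = 1;
}

/* Shape of the consolidated early_init_mmu(): the whole condition folds to
 * a compile-time constant, so the call is eliminated when either CONFIG_ is 0,
 * yet the callee must still exist and type-check in every configuration. */
static void early_init_mmu_sketch(int have_cooperative_partition_prop)
{
	if (IS_ENABLED(CONFIG_PPC_47x) && IS_ENABLED(CONFIG_SMP) &&
	    have_cooperative_partition_prop)
		mmu_clear_feature_stub();
}
```

With `CONFIG_SMP` off, even a present device-tree property leaves the feature untouched, matching the old `#ifdef CONFIG_SMP` behaviour without the preprocessor nesting.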
+1
arch/powerpc/mm/numa.c
··· 34 34 #include <asm/hvcall.h> 35 35 #include <asm/setup.h> 36 36 #include <asm/vdso.h> 37 + #include <asm/vphn.h> 37 38 #include <asm/drmem.h> 38 39 39 40 static int numa_enabled = 1;
+5 -3
arch/powerpc/perf/core-fsl-emb.c
··· 645 645 struct cpu_hw_events *cpuhw = this_cpu_ptr(&cpu_hw_events); 646 646 struct perf_event *event; 647 647 unsigned long val; 648 - int found = 0; 649 648 650 649 for (i = 0; i < ppmu->n_counter; ++i) { 651 650 event = cpuhw->event[i]; ··· 653 654 if ((int)val < 0) { 654 655 if (event) { 655 656 /* event has overflowed */ 656 - found = 1; 657 657 record_and_restart(event, val, regs); 658 658 } else { 659 659 /* ··· 670 672 isync(); 671 673 } 672 674 673 - void hw_perf_event_setup(int cpu) 675 + static int fsl_emb_pmu_prepare_cpu(unsigned int cpu) 674 676 { 675 677 struct cpu_hw_events *cpuhw = &per_cpu(cpu_hw_events, cpu); 676 678 677 679 memset(cpuhw, 0, sizeof(*cpuhw)); 680 + 681 + return 0; 678 682 } 679 683 680 684 int register_fsl_emb_pmu(struct fsl_emb_pmu *pmu) ··· 689 689 pmu->name); 690 690 691 691 perf_pmu_register(&fsl_emb_pmu, "cpu", PERF_TYPE_RAW); 692 + cpuhp_setup_state(CPUHP_PERF_POWER, "perf/powerpc:prepare", 693 + fsl_emb_pmu_prepare_cpu, NULL); 692 694 693 695 return 0; 694 696 }
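The interrupt loop retained in core-fsl-emb.c detects an overflowed PMC with `(int)val < 0`: perf programs these 32-bit counters so that crossing `0x80000000` (the sign bit) marks a pending overflow. A sketch of that predicate in isolation:

```c
#include <assert.h>
#include <stdint.h>

/* A 32-bit PMC value with its MSB set indicates the overflow threshold was
 * crossed; reinterpreting as signed makes that a simple sign test. */
static int counter_overflowed(uint32_t val)
{
	return (int32_t)val < 0;
}
```

This is why the `found` bookkeeping could be dropped from the handler: the sign test alone decides whether `record_and_restart()` or the spurious-overflow reset path runs for each counter.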
+635 -2
arch/powerpc/perf/hv-gpci.c
··· 102 102 return cpumap_print_to_pagebuf(true, buf, &hv_gpci_cpumask); 103 103 } 104 104 105 + /* Interface attribute array index to store system information */ 106 + #define INTERFACE_PROCESSOR_BUS_TOPOLOGY_ATTR 6 107 + #define INTERFACE_PROCESSOR_CONFIG_ATTR 7 108 + #define INTERFACE_AFFINITY_DOMAIN_VIA_VP_ATTR 8 109 + #define INTERFACE_AFFINITY_DOMAIN_VIA_DOM_ATTR 9 110 + #define INTERFACE_AFFINITY_DOMAIN_VIA_PAR_ATTR 10 111 + #define INTERFACE_NULL_ATTR 11 112 + 113 + /* Counter request value to retrieve system information */ 114 + enum { 115 + PROCESSOR_BUS_TOPOLOGY, 116 + PROCESSOR_CONFIG, 117 + AFFINITY_DOMAIN_VIA_VP, /* affinity domain via virtual processor */ 118 + AFFINITY_DOMAIN_VIA_DOM, /* affinity domain via domain */ 119 + AFFINITY_DOMAIN_VIA_PAR, /* affinity domain via partition */ 120 + }; 121 + 122 + static int sysinfo_counter_request[] = { 123 + [PROCESSOR_BUS_TOPOLOGY] = 0xD0, 124 + [PROCESSOR_CONFIG] = 0x90, 125 + [AFFINITY_DOMAIN_VIA_VP] = 0xA0, 126 + [AFFINITY_DOMAIN_VIA_DOM] = 0xB0, 127 + [AFFINITY_DOMAIN_VIA_PAR] = 0xB1, 128 + }; 129 + 130 + static DEFINE_PER_CPU(char, hv_gpci_reqb[HGPCI_REQ_BUFFER_SIZE]) __aligned(sizeof(uint64_t)); 131 + 132 + static unsigned long systeminfo_gpci_request(u32 req, u32 starting_index, 133 + u16 secondary_index, char *buf, 134 + size_t *n, struct hv_gpci_request_buffer *arg) 135 + { 136 + unsigned long ret; 137 + size_t i, j; 138 + 139 + arg->params.counter_request = cpu_to_be32(req); 140 + arg->params.starting_index = cpu_to_be32(starting_index); 141 + arg->params.secondary_index = cpu_to_be16(secondary_index); 142 + 143 + ret = plpar_hcall_norets(H_GET_PERF_COUNTER_INFO, 144 + virt_to_phys(arg), HGPCI_REQ_BUFFER_SIZE); 145 + 146 + /* 147 + * ret value as 'H_PARAMETER' corresponds to 'GEN_BUF_TOO_SMALL', 148 + * which means that the current buffer size cannot accommodate 149 + * all the information and a partial buffer returned. 150 + * hcall fails incase of ret value other than H_SUCCESS or H_PARAMETER. 
151 + * 152 + * ret value as H_AUTHORITY implies that partition is not permitted to retrieve 153 + * performance information, and required to set 154 + * "Enable Performance Information Collection" option. 155 + */ 156 + if (ret == H_AUTHORITY) 157 + return -EPERM; 158 + 159 + /* 160 + * hcall can fail with other possible ret value like H_PRIVILEGE/H_HARDWARE 161 + * because of invalid buffer-length/address or due to some hardware 162 + * error. 163 + */ 164 + if (ret && (ret != H_PARAMETER)) 165 + return -EIO; 166 + 167 + /* 168 + * hcall H_GET_PERF_COUNTER_INFO populates the 'returned_values' 169 + * to show the total number of counter_value array elements 170 + * returned via hcall. 171 + * hcall also populates 'cv_element_size' corresponds to individual 172 + * counter_value array element size. Below loop go through all 173 + * counter_value array elements as per their size and add it to 174 + * the output buffer. 175 + */ 176 + for (i = 0; i < be16_to_cpu(arg->params.returned_values); i++) { 177 + j = i * be16_to_cpu(arg->params.cv_element_size); 178 + 179 + for (; j < (i + 1) * be16_to_cpu(arg->params.cv_element_size); j++) 180 + *n += sprintf(buf + *n, "%02x", (u8)arg->bytes[j]); 181 + *n += sprintf(buf + *n, "\n"); 182 + } 183 + 184 + if (*n >= PAGE_SIZE) { 185 + pr_info("System information exceeds PAGE_SIZE\n"); 186 + return -EFBIG; 187 + } 188 + 189 + return ret; 190 + } 191 + 192 + static ssize_t processor_bus_topology_show(struct device *dev, struct device_attribute *attr, 193 + char *buf) 194 + { 195 + struct hv_gpci_request_buffer *arg; 196 + unsigned long ret; 197 + size_t n = 0; 198 + 199 + arg = (void *)get_cpu_var(hv_gpci_reqb); 200 + memset(arg, 0, HGPCI_REQ_BUFFER_SIZE); 201 + 202 + /* 203 + * Pass the counter request value 0xD0 corresponds to request 204 + * type 'Processor_bus_topology', to retrieve 205 + * the system topology information. 206 + * starting_index value implies the starting hardware 207 + * chip id. 
208 + */ 209 + ret = systeminfo_gpci_request(sysinfo_counter_request[PROCESSOR_BUS_TOPOLOGY], 210 + 0, 0, buf, &n, arg); 211 + 212 + if (!ret) 213 + return n; 214 + 215 + if (ret != H_PARAMETER) 216 + goto out; 217 + 218 + /* 219 + * ret value as 'H_PARAMETER' corresponds to 'GEN_BUF_TOO_SMALL', which 220 + * implies that buffer can't accommodate all information, and a partial buffer 221 + * returned. To handle that, we need to make subsequent requests 222 + * with next starting index to retrieve additional (missing) data. 223 + * Below loop do subsequent hcalls with next starting index and add it 224 + * to buffer util we get all the information. 225 + */ 226 + while (ret == H_PARAMETER) { 227 + int returned_values = be16_to_cpu(arg->params.returned_values); 228 + int elementsize = be16_to_cpu(arg->params.cv_element_size); 229 + int last_element = (returned_values - 1) * elementsize; 230 + 231 + /* 232 + * Since the starting index value is part of counter_value 233 + * buffer elements, use the starting index value in the last 234 + * element and add 1 to make subsequent hcalls. 
235 + */ 236 + u32 starting_index = arg->bytes[last_element + 3] + 237 + (arg->bytes[last_element + 2] << 8) + 238 + (arg->bytes[last_element + 1] << 16) + 239 + (arg->bytes[last_element] << 24) + 1; 240 + 241 + memset(arg, 0, HGPCI_REQ_BUFFER_SIZE); 242 + 243 + ret = systeminfo_gpci_request(sysinfo_counter_request[PROCESSOR_BUS_TOPOLOGY], 244 + starting_index, 0, buf, &n, arg); 245 + 246 + if (!ret) 247 + return n; 248 + 249 + if (ret != H_PARAMETER) 250 + goto out; 251 + } 252 + 253 + return n; 254 + 255 + out: 256 + put_cpu_var(hv_gpci_reqb); 257 + return ret; 258 + } 259 + 260 + static ssize_t processor_config_show(struct device *dev, struct device_attribute *attr, 261 + char *buf) 262 + { 263 + struct hv_gpci_request_buffer *arg; 264 + unsigned long ret; 265 + size_t n = 0; 266 + 267 + arg = (void *)get_cpu_var(hv_gpci_reqb); 268 + memset(arg, 0, HGPCI_REQ_BUFFER_SIZE); 269 + 270 + /* 271 + * Pass the counter request value 0x90 corresponds to request 272 + * type 'Processor_config', to retrieve 273 + * the system processor information. 274 + * starting_index value implies the starting hardware 275 + * processor index. 276 + */ 277 + ret = systeminfo_gpci_request(sysinfo_counter_request[PROCESSOR_CONFIG], 278 + 0, 0, buf, &n, arg); 279 + 280 + if (!ret) 281 + return n; 282 + 283 + if (ret != H_PARAMETER) 284 + goto out; 285 + 286 + /* 287 + * ret value as 'H_PARAMETER' corresponds to 'GEN_BUF_TOO_SMALL', which 288 + * implies that buffer can't accommodate all information, and a partial buffer 289 + * returned. To handle that, we need to take subsequent requests 290 + * with next starting index to retrieve additional (missing) data. 291 + * Below loop do subsequent hcalls with next starting index and add it 292 + * to buffer util we get all the information. 
293 + */ 294 + while (ret == H_PARAMETER) { 295 + int returned_values = be16_to_cpu(arg->params.returned_values); 296 + int elementsize = be16_to_cpu(arg->params.cv_element_size); 297 + int last_element = (returned_values - 1) * elementsize; 298 + 299 + /* 300 + * Since the starting index is part of counter_value 301 + * buffer elements, use the starting index value in the last 302 + * element and add 1 to subsequent hcalls. 303 + */ 304 + u32 starting_index = arg->bytes[last_element + 3] + 305 + (arg->bytes[last_element + 2] << 8) + 306 + (arg->bytes[last_element + 1] << 16) + 307 + (arg->bytes[last_element] << 24) + 1; 308 + 309 + memset(arg, 0, HGPCI_REQ_BUFFER_SIZE); 310 + 311 + ret = systeminfo_gpci_request(sysinfo_counter_request[PROCESSOR_CONFIG], 312 + starting_index, 0, buf, &n, arg); 313 + 314 + if (!ret) 315 + return n; 316 + 317 + if (ret != H_PARAMETER) 318 + goto out; 319 + } 320 + 321 + return n; 322 + 323 + out: 324 + put_cpu_var(hv_gpci_reqb); 325 + return ret; 326 + } 327 + 328 + static ssize_t affinity_domain_via_virtual_processor_show(struct device *dev, 329 + struct device_attribute *attr, char *buf) 330 + { 331 + struct hv_gpci_request_buffer *arg; 332 + unsigned long ret; 333 + size_t n = 0; 334 + 335 + arg = (void *)get_cpu_var(hv_gpci_reqb); 336 + memset(arg, 0, HGPCI_REQ_BUFFER_SIZE); 337 + 338 + /* 339 + * Pass the counter request 0xA0 corresponds to request 340 + * type 'Affinity_domain_information_by_virutal_processor', 341 + * to retrieve the system affinity domain information. 342 + * starting_index value refers to the starting hardware 343 + * processor index. 
344 + */ 345 + ret = systeminfo_gpci_request(sysinfo_counter_request[AFFINITY_DOMAIN_VIA_VP], 346 + 0, 0, buf, &n, arg); 347 + 348 + if (!ret) 349 + return n; 350 + 351 + if (ret != H_PARAMETER) 352 + goto out; 353 + 354 + /* 355 + * ret value as 'H_PARAMETER' corresponds to 'GEN_BUF_TOO_SMALL', which 356 + * implies that buffer can't accommodate all information, and a partial buffer 357 + * returned. To handle that, we need to take subsequent requests 358 + * with next secondary index to retrieve additional (missing) data. 359 + * Below loop do subsequent hcalls with next secondary index and add it 360 + * to buffer util we get all the information. 361 + */ 362 + while (ret == H_PARAMETER) { 363 + int returned_values = be16_to_cpu(arg->params.returned_values); 364 + int elementsize = be16_to_cpu(arg->params.cv_element_size); 365 + int last_element = (returned_values - 1) * elementsize; 366 + 367 + /* 368 + * Since the starting index and secondary index type is part of the 369 + * counter_value buffer elements, use the starting index value in the 370 + * last array element as subsequent starting index, and use secondary index 371 + * value in the last array element plus 1 as subsequent secondary index. 372 + * For counter request '0xA0', starting index points to partition id 373 + * and secondary index points to corresponding virtual processor index. 
374 + */ 375 + u32 starting_index = arg->bytes[last_element + 1] + (arg->bytes[last_element] << 8); 376 + u16 secondary_index = arg->bytes[last_element + 3] + 377 + (arg->bytes[last_element + 2] << 8) + 1; 378 + 379 + memset(arg, 0, HGPCI_REQ_BUFFER_SIZE); 380 + 381 + ret = systeminfo_gpci_request(sysinfo_counter_request[AFFINITY_DOMAIN_VIA_VP], 382 + starting_index, secondary_index, buf, &n, arg); 383 + 384 + if (!ret) 385 + return n; 386 + 387 + if (ret != H_PARAMETER) 388 + goto out; 389 + } 390 + 391 + return n; 392 + 393 + out: 394 + put_cpu_var(hv_gpci_reqb); 395 + return ret; 396 + } 397 + 398 + static ssize_t affinity_domain_via_domain_show(struct device *dev, struct device_attribute *attr, 399 + char *buf) 400 + { 401 + struct hv_gpci_request_buffer *arg; 402 + unsigned long ret; 403 + size_t n = 0; 404 + 405 + arg = (void *)get_cpu_var(hv_gpci_reqb); 406 + memset(arg, 0, HGPCI_REQ_BUFFER_SIZE); 407 + 408 + /* 409 + * Pass the counter request 0xB0 corresponds to request 410 + * type 'Affinity_domain_information_by_domain', 411 + * to retrieve the system affinity domain information. 412 + * starting_index value refers to the starting hardware 413 + * processor index. 414 + */ 415 + ret = systeminfo_gpci_request(sysinfo_counter_request[AFFINITY_DOMAIN_VIA_DOM], 416 + 0, 0, buf, &n, arg); 417 + 418 + if (!ret) 419 + return n; 420 + 421 + if (ret != H_PARAMETER) 422 + goto out; 423 + 424 + /* 425 + * ret value as 'H_PARAMETER' corresponds to 'GEN_BUF_TOO_SMALL', which 426 + * implies that buffer can't accommodate all information, and a partial buffer 427 + * returned. To handle that, we need to take subsequent requests 428 + * with next starting index to retrieve additional (missing) data. 429 + * Below loop do subsequent hcalls with next starting index and add it 430 + * to buffer util we get all the information. 
431 + */ 432 + while (ret == H_PARAMETER) { 433 + int returned_values = be16_to_cpu(arg->params.returned_values); 434 + int elementsize = be16_to_cpu(arg->params.cv_element_size); 435 + int last_element = (returned_values - 1) * elementsize; 436 + 437 + /* 438 + * Since the starting index value is part of counter_value 439 + * buffer elements, use the starting index value in the last 440 + * element and add 1 to make subsequent hcalls. 441 + */ 442 + u32 starting_index = arg->bytes[last_element + 1] + 443 + (arg->bytes[last_element] << 8) + 1; 444 + 445 + memset(arg, 0, HGPCI_REQ_BUFFER_SIZE); 446 + 447 + ret = systeminfo_gpci_request(sysinfo_counter_request[AFFINITY_DOMAIN_VIA_DOM], 448 + starting_index, 0, buf, &n, arg); 449 + 450 + if (!ret) 451 + return n; 452 + 453 + if (ret != H_PARAMETER) 454 + goto out; 455 + } 456 + 457 + return n; 458 + 459 + out: 460 + put_cpu_var(hv_gpci_reqb); 461 + return ret; 462 + } 463 + 464 + static void affinity_domain_via_partition_result_parse(int returned_values, 465 + int element_size, char *buf, size_t *last_element, 466 + size_t *n, struct hv_gpci_request_buffer *arg) 467 + { 468 + size_t i = 0, j = 0; 469 + size_t k, l, m; 470 + uint16_t total_affinity_domain_ele, size_of_each_affinity_domain_ele; 471 + 472 + /* 473 + * hcall H_GET_PERF_COUNTER_INFO populates the 'returned_values' 474 + * to show the total number of counter_value array elements 475 + * returned via hcall. 476 + * Unlike other request types, the data structure returned by this 477 + * request is variable-size. For this counter request type, 478 + * hcall populates 'cv_element_size' corresponds to minimum size of 479 + * the structure returned i.e; the size of the structure with no domain 480 + * information. Below loop go through all counter_value array 481 + * to determine the number and size of each domain array element and 482 + * add it to the output buffer. 
483 + */ 484 + while (i < returned_values) { 485 + k = j; 486 + for (; k < j + element_size; k++) 487 + *n += sprintf(buf + *n, "%02x", (u8)arg->bytes[k]); 488 + *n += sprintf(buf + *n, "\n"); 489 + 490 + total_affinity_domain_ele = (u8)arg->bytes[k - 2] << 8 | (u8)arg->bytes[k - 3]; 491 + size_of_each_affinity_domain_ele = (u8)arg->bytes[k] << 8 | (u8)arg->bytes[k - 1]; 492 + 493 + for (l = 0; l < total_affinity_domain_ele; l++) { 494 + for (m = 0; m < size_of_each_affinity_domain_ele; m++) { 495 + *n += sprintf(buf + *n, "%02x", (u8)arg->bytes[k]); 496 + k++; 497 + } 498 + *n += sprintf(buf + *n, "\n"); 499 + } 500 + 501 + *n += sprintf(buf + *n, "\n"); 502 + i++; 503 + j = k; 504 + } 505 + 506 + *last_element = k; 507 + } 508 + 509 + static ssize_t affinity_domain_via_partition_show(struct device *dev, struct device_attribute *attr, 510 + char *buf) 511 + { 512 + struct hv_gpci_request_buffer *arg; 513 + unsigned long ret; 514 + size_t n = 0; 515 + size_t last_element = 0; 516 + u32 starting_index; 517 + 518 + arg = (void *)get_cpu_var(hv_gpci_reqb); 519 + memset(arg, 0, HGPCI_REQ_BUFFER_SIZE); 520 + 521 + /* 522 + * Pass the counter request value 0xB1, which corresponds to counter request 523 + * type 'Affinity_domain_information_by_partition', 524 + * to retrieve the system affinity domain by partition information. 525 + * starting_index value refers to the starting hardware 526 + * processor index. 527 + */ 528 + arg->params.counter_request = cpu_to_be32(sysinfo_counter_request[AFFINITY_DOMAIN_VIA_PAR]); 529 + arg->params.starting_index = cpu_to_be32(0); 530 + 531 + ret = plpar_hcall_norets(H_GET_PERF_COUNTER_INFO, 532 + virt_to_phys(arg), HGPCI_REQ_BUFFER_SIZE); 533 + 534 + if (!ret) 535 + goto parse_result; 536 + 537 + /* 538 + * A ret value of 'H_PARAMETER' implies that the current buffer size 539 + * can't accommodate all the information, and a partial buffer is 540 + * returned. 
To handle that, we need to make subsequent requests 541 + * with the next starting index to retrieve the additional (missing) data. 542 + * The below loop does subsequent hcalls with the next starting index and adds 543 + * the data to the buffer until we get all the information. 544 + */ 545 + while (ret == H_PARAMETER) { 546 + affinity_domain_via_partition_result_parse( 547 + be16_to_cpu(arg->params.returned_values) - 1, 548 + be16_to_cpu(arg->params.cv_element_size), buf, 549 + &last_element, &n, arg); 550 + 551 + if (n >= PAGE_SIZE) { 552 + put_cpu_var(hv_gpci_reqb); 553 + pr_debug("System information exceeds PAGE_SIZE\n"); 554 + return -EFBIG; 555 + } 556 + 557 + /* 558 + * Since the starting index value is part of counter_value 559 + * buffer elements, use the starting_index value in the last 560 + * element and add 1 to make subsequent hcalls. 561 + */ 562 + starting_index = (u8)arg->bytes[last_element] << 8 | 563 + (u8)arg->bytes[last_element + 1]; 564 + 565 + memset(arg, 0, HGPCI_REQ_BUFFER_SIZE); 566 + arg->params.counter_request = cpu_to_be32( 567 + sysinfo_counter_request[AFFINITY_DOMAIN_VIA_PAR]); 568 + arg->params.starting_index = cpu_to_be32(starting_index); 569 + 570 + ret = plpar_hcall_norets(H_GET_PERF_COUNTER_INFO, 571 + virt_to_phys(arg), HGPCI_REQ_BUFFER_SIZE); 572 + 573 + if (ret && (ret != H_PARAMETER)) 574 + goto out; 575 + } 576 + 577 + parse_result: 578 + affinity_domain_via_partition_result_parse( 579 + be16_to_cpu(arg->params.returned_values), 580 + be16_to_cpu(arg->params.cv_element_size), 581 + buf, &last_element, &n, arg); 582 + 583 + put_cpu_var(hv_gpci_reqb); 584 + return n; 585 + 586 + out: 587 + put_cpu_var(hv_gpci_reqb); 588 + 589 + /* 590 + * A ret value of 'H_PARAMETER' corresponds to 'GEN_BUF_TOO_SMALL', 591 + * which means that the current buffer size cannot accommodate 592 + * all the information and a partial buffer is returned. 593 + * The hcall fails in case of a ret value other than H_SUCCESS or H_PARAMETER. 
594 + * 595 + * A ret value of H_AUTHORITY implies that the partition is not permitted to retrieve 596 + * performance information, and is required to set the 597 + * "Enable Performance Information Collection" option. 598 + */ 599 + if (ret == H_AUTHORITY) 600 + return -EPERM; 601 + 602 + /* 603 + * The hcall can fail with other possible ret values like H_PRIVILEGE/H_HARDWARE 604 + * because of an invalid buffer length/address or due to some hardware 605 + * error. 606 + */ 607 + return -EIO; 608 + } 609 + 105 610 static DEVICE_ATTR_RO(kernel_version); 106 611 static DEVICE_ATTR_RO(cpumask); 107 612 ··· 623 118 &hv_caps_attr_expanded.attr, 624 119 &hv_caps_attr_lab.attr, 625 120 &hv_caps_attr_collect_privileged.attr, 121 + /* 122 + * This NULL is a placeholder for the processor_bus_topology 123 + * attribute, set in init function if applicable. 124 + */ 125 + NULL, 126 + /* 127 + * This NULL is a placeholder for the processor_config 128 + * attribute, set in init function if applicable. 129 + */ 130 + NULL, 131 + /* 132 + * This NULL is a placeholder for the affinity_domain_via_virtual_processor 133 + * attribute, set in init function if applicable. 134 + */ 135 + NULL, 136 + /* 137 + * This NULL is a placeholder for the affinity_domain_via_domain 138 + * attribute, set in init function if applicable. 139 + */ 140 + NULL, 141 + /* 142 + * This NULL is a placeholder for the affinity_domain_via_partition 143 + * attribute, set in init function if applicable. 
144 + */ 145 + NULL, 626 146 NULL, 627 147 }; 628 148 ··· 672 142 &cpumask_attr_group, 673 143 NULL, 674 144 }; 675 - 676 - static DEFINE_PER_CPU(char, hv_gpci_reqb[HGPCI_REQ_BUFFER_SIZE]) __aligned(sizeof(uint64_t)); 677 145 678 146 static unsigned long single_gpci_request(u32 req, u32 starting_index, 679 147 u16 secondary_index, u8 version_in, u32 offset, u8 length, ··· 853 325 ppc_hv_gpci_cpu_offline); 854 326 } 855 327 328 + static struct device_attribute *sysinfo_device_attr_create(int 329 + sysinfo_interface_group_index, u32 req) 330 + { 331 + struct device_attribute *attr = NULL; 332 + unsigned long ret; 333 + struct hv_gpci_request_buffer *arg; 334 + 335 + if (sysinfo_interface_group_index < INTERFACE_PROCESSOR_BUS_TOPOLOGY_ATTR || 336 + sysinfo_interface_group_index >= INTERFACE_NULL_ATTR) { 337 + pr_info("Wrong interface group index for system information\n"); 338 + return NULL; 339 + } 340 + 341 + /* Check support for the given counter request value */ 342 + arg = (void *)get_cpu_var(hv_gpci_reqb); 343 + memset(arg, 0, HGPCI_REQ_BUFFER_SIZE); 344 + 345 + arg->params.counter_request = cpu_to_be32(req); 346 + 347 + ret = plpar_hcall_norets(H_GET_PERF_COUNTER_INFO, 348 + virt_to_phys(arg), HGPCI_REQ_BUFFER_SIZE); 349 + 350 + put_cpu_var(hv_gpci_reqb); 351 + 352 + /* 353 + * Add the given counter request value's attribute to the interface_attrs 354 + * attribute array, only for valid return types. 
355 + */ 356 + if (!ret || ret == H_AUTHORITY || ret == H_PARAMETER) { 357 + attr = kzalloc(sizeof(*attr), GFP_KERNEL); 358 + if (!attr) 359 + return NULL; 360 + 361 + sysfs_attr_init(&attr->attr); 362 + attr->attr.mode = 0444; 363 + 364 + switch (sysinfo_interface_group_index) { 365 + case INTERFACE_PROCESSOR_BUS_TOPOLOGY_ATTR: 366 + attr->attr.name = "processor_bus_topology"; 367 + attr->show = processor_bus_topology_show; 368 + break; 369 + case INTERFACE_PROCESSOR_CONFIG_ATTR: 370 + attr->attr.name = "processor_config"; 371 + attr->show = processor_config_show; 372 + break; 373 + case INTERFACE_AFFINITY_DOMAIN_VIA_VP_ATTR: 374 + attr->attr.name = "affinity_domain_via_virtual_processor"; 375 + attr->show = affinity_domain_via_virtual_processor_show; 376 + break; 377 + case INTERFACE_AFFINITY_DOMAIN_VIA_DOM_ATTR: 378 + attr->attr.name = "affinity_domain_via_domain"; 379 + attr->show = affinity_domain_via_domain_show; 380 + break; 381 + case INTERFACE_AFFINITY_DOMAIN_VIA_PAR_ATTR: 382 + attr->attr.name = "affinity_domain_via_partition"; 383 + attr->show = affinity_domain_via_partition_show; 384 + break; 385 + } 386 + } else 387 + pr_devel("hcall failed, with error: 0x%lx\n", ret); 388 + 389 + return attr; 390 + } 391 + 392 + static void add_sysinfo_interface_files(void) 393 + { 394 + int sysfs_count; 395 + struct device_attribute *attr[INTERFACE_NULL_ATTR - INTERFACE_PROCESSOR_BUS_TOPOLOGY_ATTR]; 396 + int i; 397 + 398 + sysfs_count = INTERFACE_NULL_ATTR - INTERFACE_PROCESSOR_BUS_TOPOLOGY_ATTR; 399 + 400 + /* Get device attribute for a given counter request value */ 401 + for (i = 0; i < sysfs_count; i++) { 402 + attr[i] = sysinfo_device_attr_create(i + INTERFACE_PROCESSOR_BUS_TOPOLOGY_ATTR, 403 + sysinfo_counter_request[i]); 404 + 405 + if (!attr[i]) 406 + goto out; 407 + } 408 + 409 + /* Add sysinfo interface attributes in the interface_attrs attribute array */ 410 + for (i = 0; i < sysfs_count; i++) 411 + interface_attrs[i + 
INTERFACE_PROCESSOR_BUS_TOPOLOGY_ATTR] = &attr[i]->attr; 412 + 413 + return; 414 + 415 + out: 416 + /* 417 + * The sysinfo interface attributes will be added only if the hcall passed for 418 + * all the counter request values. Free the device attribute array in case 419 + * of any hcall failure. 420 + */ 421 + if (i > 0) { 422 + while (i >= 0) { 423 + kfree(attr[i]); 424 + i--; 425 + } 426 + } 427 + } 428 + 856 429 static int hv_gpci_init(void) 857 430 { 858 431 int r; ··· 1016 387 r = perf_pmu_register(&h_gpci_pmu, h_gpci_pmu.name, -1); 1017 388 if (r) 1018 389 return r; 390 + 391 + /* sysinfo interface files are only available on Power10 and above platforms */ 392 + if (PVR_VER(mfspr(SPRN_PVR)) >= PVR_POWER10) 393 + add_sysinfo_interface_files(); 1019 394 1020 395 return 0; 1021 396 }
-55
arch/powerpc/platforms/44x/warp.c
··· 83 83 84 84 #ifdef CONFIG_SENSORS_AD7414 85 85 86 - static LIST_HEAD(dtm_shutdown_list); 87 86 static void __iomem *dtm_fpga; 88 - 89 - struct dtm_shutdown { 90 - struct list_head list; 91 - void (*func)(void *arg); 92 - void *arg; 93 - }; 94 - 95 - int pika_dtm_register_shutdown(void (*func)(void *arg), void *arg) 96 - { 97 - struct dtm_shutdown *shutdown; 98 - 99 - shutdown = kmalloc(sizeof(struct dtm_shutdown), GFP_KERNEL); 100 - if (shutdown == NULL) 101 - return -ENOMEM; 102 - 103 - shutdown->func = func; 104 - shutdown->arg = arg; 105 - 106 - list_add(&shutdown->list, &dtm_shutdown_list); 107 - 108 - return 0; 109 - } 110 - 111 - int pika_dtm_unregister_shutdown(void (*func)(void *arg), void *arg) 112 - { 113 - struct dtm_shutdown *shutdown; 114 - 115 - list_for_each_entry(shutdown, &dtm_shutdown_list, list) 116 - if (shutdown->func == func && shutdown->arg == arg) { 117 - list_del(&shutdown->list); 118 - kfree(shutdown); 119 - return 0; 120 - } 121 - 122 - return -EINVAL; 123 - } 124 87 125 88 #define WARP_GREEN_LED 0 126 89 #define WARP_RED_LED 1 ··· 116 153 117 154 static irqreturn_t temp_isr(int irq, void *context) 118 155 { 119 - struct dtm_shutdown *shutdown; 120 156 int value = 1; 121 157 122 158 local_irq_disable(); 123 159 124 160 gpiod_set_value(warp_gpio_led_pins[WARP_GREEN_LED].gpiod, 0); 125 - 126 - /* Run through the shutdown list. 
*/ 127 - list_for_each_entry(shutdown, &dtm_shutdown_list, list) 128 - shutdown->func(shutdown->arg); 129 161 130 162 printk(KERN_EMERG "\n\nCritical Temperature Shutdown\n\n"); 131 163 ··· 324 366 325 367 #else /* !CONFIG_SENSORS_AD7414 */ 326 368 327 - int pika_dtm_register_shutdown(void (*func)(void *arg), void *arg) 328 - { 329 - return 0; 330 - } 331 - 332 - int pika_dtm_unregister_shutdown(void (*func)(void *arg), void *arg) 333 - { 334 - return 0; 335 - } 336 - 337 369 machine_late_initcall(warp, warp_post_info); 338 370 339 371 #endif 340 - 341 - EXPORT_SYMBOL(pika_dtm_register_shutdown); 342 - EXPORT_SYMBOL(pika_dtm_unregister_shutdown);
+1 -1
arch/powerpc/platforms/4xx/cpm.c
··· 18 18 */ 19 19 20 20 #include <linux/kernel.h> 21 - #include <linux/of_platform.h> 21 + #include <linux/of.h> 22 22 #include <linux/sysfs.h> 23 23 #include <linux/cpu.h> 24 24 #include <linux/suspend.h>
+1 -1
arch/powerpc/platforms/4xx/hsta_msi.c
··· 11 11 #include <linux/msi.h> 12 12 #include <linux/of.h> 13 13 #include <linux/of_irq.h> 14 - #include <linux/of_platform.h> 14 + #include <linux/platform_device.h> 15 15 #include <linux/pci.h> 16 16 #include <linux/semaphore.h> 17 17 #include <asm/msi_bitmap.h>
+2 -1
arch/powerpc/platforms/4xx/soc.c
··· 15 15 #include <linux/errno.h> 16 16 #include <linux/interrupt.h> 17 17 #include <linux/irq.h> 18 + #include <linux/of.h> 18 19 #include <linux/of_irq.h> 19 - #include <linux/of_platform.h> 20 20 21 21 #include <asm/dcr.h> 22 22 #include <asm/dcr-regs.h> 23 23 #include <asm/reg.h> 24 + #include <asm/ppc4xx.h> 24 25 25 26 static u32 dcrbase_l2c; 26 27
+1
arch/powerpc/platforms/4xx/uic.c
··· 24 24 #include <asm/irq.h> 25 25 #include <asm/io.h> 26 26 #include <asm/dcr.h> 27 + #include <asm/uic.h> 27 28 28 29 #define NR_UIC_INTS 32 29 30
+1 -1
arch/powerpc/platforms/512x/mpc5121_ads.c
··· 10 10 11 11 #include <linux/kernel.h> 12 12 #include <linux/io.h> 13 - #include <linux/of_platform.h> 13 + #include <linux/of.h> 14 14 15 15 #include <asm/machdep.h> 16 16 #include <asm/ipic.h>
-1
arch/powerpc/platforms/512x/mpc512x.h
··· 13 13 extern void __init mpc512x_setup_arch(void); 14 14 extern int __init mpc5121_clk_init(void); 15 15 const char *__init mpc512x_select_psc_compat(void); 16 - const char *__init mpc512x_select_reset_compat(void); 17 16 extern void __noreturn mpc512x_restart(char *cmd); 18 17 19 18 #endif /* __MPC512X_H__ */
+1 -1
arch/powerpc/platforms/512x/mpc512x_generic.c
··· 9 9 */ 10 10 11 11 #include <linux/kernel.h> 12 - #include <linux/of_platform.h> 12 + #include <linux/of.h> 13 13 14 14 #include <asm/machdep.h> 15 15 #include <asm/ipic.h>
+1 -1
arch/powerpc/platforms/512x/mpc512x_lpbfifo.c
··· 10 10 #include <linux/kernel.h> 11 11 #include <linux/module.h> 12 12 #include <linux/of.h> 13 - #include <linux/of_platform.h> 14 13 #include <linux/of_address.h> 15 14 #include <linux/of_irq.h> 15 + #include <linux/platform_device.h> 16 16 #include <asm/mpc5121.h> 17 17 #include <asm/io.h> 18 18 #include <linux/spinlock.h>
+15 -15
arch/powerpc/platforms/512x/mpc512x_shared.c
··· 29 29 30 30 static struct mpc512x_reset_module __iomem *reset_module_base; 31 31 32 - static void __init mpc512x_restart_init(void) 33 - { 34 - struct device_node *np; 35 - const char *reset_compat; 36 - 37 - reset_compat = mpc512x_select_reset_compat(); 38 - np = of_find_compatible_node(NULL, NULL, reset_compat); 39 - if (!np) 40 - return; 41 - 42 - reset_module_base = of_iomap(np, 0); 43 - of_node_put(np); 44 - } 45 - 46 32 void __noreturn mpc512x_restart(char *cmd) 47 33 { 48 34 if (reset_module_base) { ··· 349 363 return NULL; 350 364 } 351 365 352 - const char *__init mpc512x_select_reset_compat(void) 366 + static const char *__init mpc512x_select_reset_compat(void) 353 367 { 354 368 if (of_machine_is_compatible("fsl,mpc5121")) 355 369 return "fsl,mpc5121-reset"; ··· 439 453 440 454 iounmap(psc); 441 455 } 456 + } 457 + 458 + static void __init mpc512x_restart_init(void) 459 + { 460 + struct device_node *np; 461 + const char *reset_compat; 462 + 463 + reset_compat = mpc512x_select_reset_compat(); 464 + np = of_find_compatible_node(NULL, NULL, reset_compat); 465 + if (!np) 466 + return; 467 + 468 + reset_module_base = of_iomap(np, 0); 469 + of_node_put(np); 442 470 } 443 471 444 472 void __init mpc512x_init_early(void)
+2 -1
arch/powerpc/platforms/512x/pdm360ng.c
··· 7 7 * PDM360NG board setup 8 8 */ 9 9 10 + #include <linux/device.h> 10 11 #include <linux/kernel.h> 11 12 #include <linux/io.h> 13 + #include <linux/of.h> 12 14 #include <linux/of_address.h> 13 15 #include <linux/of_fdt.h> 14 - #include <linux/of_platform.h> 15 16 16 17 #include <asm/machdep.h> 17 18 #include <asm/ipic.h>
+1 -2
arch/powerpc/platforms/52xx/mpc52xx_gpt.c
··· 48 48 * the output mode. This driver does not change the output mode setting. 49 49 */ 50 50 51 - #include <linux/device.h> 52 51 #include <linux/irq.h> 53 52 #include <linux/interrupt.h> 54 53 #include <linux/io.h> ··· 56 57 #include <linux/of.h> 57 58 #include <linux/of_address.h> 58 59 #include <linux/of_irq.h> 59 - #include <linux/of_platform.h> 60 60 #include <linux/of_gpio.h> 61 + #include <linux/platform_device.h> 61 62 #include <linux/kernel.h> 62 63 #include <linux/property.h> 63 64 #include <linux/slab.h>
+4 -20
arch/powerpc/platforms/82xx/Kconfig
··· 7 7 8 8 config EP8248E 9 9 bool "Embedded Planet EP8248E (a.k.a. CWH-PPC-8248N-VE)" 10 - select 8272 11 - select 8260 10 + select CPM2 11 + select PPC_INDIRECT_PCI if PCI 12 12 select FSL_SOC 13 13 select PHYLIB if NETDEVICES 14 14 select MDIO_BITBANG if PHYLIB ··· 20 20 21 21 config MGCOGE 22 22 bool "Keymile MGCOGE" 23 - select 8272 24 - select 8260 23 + select CPM2 24 + select PPC_INDIRECT_PCI if PCI 25 25 select FSL_SOC 26 26 help 27 27 This enables support for the Keymile MGCOGE board. 28 28 29 29 endif 30 - 31 - config 8260 32 - bool 33 - depends on PPC_BOOK3S_32 34 - select CPM2 35 - help 36 - The MPC8260 is a typical embedded CPU made by Freescale. Selecting 37 - this option means that you wish to build a kernel for a machine with 38 - an 8260 class CPU. 39 - 40 - config 8272 41 - bool 42 - select 8260 43 - help 44 - The MPC8272 CPM has a different internal dpram setup than other CPM2 45 - devices
+2 -8
arch/powerpc/platforms/82xx/ep8248e.c
··· 13 13 #include <linux/of_mdio.h> 14 14 #include <linux/slab.h> 15 15 #include <linux/of_platform.h> 16 + #include <linux/platform_device.h> 16 17 17 18 #include <asm/io.h> 18 19 #include <asm/cpm2.h> 19 20 #include <asm/udbg.h> 20 21 #include <asm/machdep.h> 21 22 #include <asm/time.h> 22 - #include <asm/mpc8260.h> 23 23 24 24 #include <sysdev/fsl_soc.h> 25 25 #include <sysdev/cpm2_pic.h> ··· 140 140 return ret; 141 141 } 142 142 143 - static int ep8248e_mdio_remove(struct platform_device *ofdev) 144 - { 145 - BUG(); 146 - return 0; 147 - } 148 - 149 143 static const struct of_device_id ep8248e_mdio_match[] = { 150 144 { 151 145 .compatible = "fsl,ep8248e-mdio-bitbang", ··· 151 157 .driver = { 152 158 .name = "ep8248e-mdio-bitbang", 153 159 .of_match_table = ep8248e_mdio_match, 160 + .suppress_bind_attrs = true, 154 161 }, 155 162 .probe = ep8248e_mdio_probe, 156 - .remove = ep8248e_mdio_remove, 157 163 }; 158 164 159 165 struct cpm_pin {
-1
arch/powerpc/platforms/82xx/km82xx.c
··· 19 19 #include <asm/udbg.h> 20 20 #include <asm/machdep.h> 21 21 #include <linux/time.h> 22 - #include <asm/mpc8260.h> 23 22 24 23 #include <sysdev/fsl_soc.h> 25 24 #include <sysdev/cpm2_pic.h>
-14
arch/powerpc/platforms/82xx/m82xx_pci.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0-or-later */ 2 - #ifndef _PPC_KERNEL_M82XX_PCI_H 3 - #define _PPC_KERNEL_M82XX_PCI_H 4 - 5 - /* 6 - */ 7 - 8 - #define SIU_INT_IRQ1 ((uint)0x13 + CPM_IRQ_OFFSET) 9 - 10 - #ifndef _IO_BASE 11 - #define _IO_BASE isa_io_base 12 - #endif 13 - 14 - #endif /* _PPC_KERNEL_M8260_PCI_H */
-46
arch/powerpc/platforms/82xx/pq2.c
··· 32 32 panic("Restart failed\n"); 33 33 } 34 34 NOKPROBE_SYMBOL(pq2_restart) 35 - 36 - #ifdef CONFIG_PCI 37 - static int pq2_pci_exclude_device(struct pci_controller *hose, 38 - u_char bus, u8 devfn) 39 - { 40 - if (bus == 0 && PCI_SLOT(devfn) == 0) 41 - return PCIBIOS_DEVICE_NOT_FOUND; 42 - else 43 - return PCIBIOS_SUCCESSFUL; 44 - } 45 - 46 - static void __init pq2_pci_add_bridge(struct device_node *np) 47 - { 48 - struct pci_controller *hose; 49 - struct resource r; 50 - 51 - if (of_address_to_resource(np, 0, &r) || r.end - r.start < 0x10b) 52 - goto err; 53 - 54 - pci_add_flags(PCI_REASSIGN_ALL_BUS); 55 - 56 - hose = pcibios_alloc_controller(np); 57 - if (!hose) 58 - return; 59 - 60 - hose->dn = np; 61 - 62 - setup_indirect_pci(hose, r.start + 0x100, r.start + 0x104, 0); 63 - pci_process_bridge_OF_ranges(hose, np, 1); 64 - 65 - return; 66 - 67 - err: 68 - printk(KERN_ERR "No valid PCI reg property in device tree\n"); 69 - } 70 - 71 - void __init pq2_init_pci(void) 72 - { 73 - struct device_node *np; 74 - 75 - ppc_md.pci_exclude_device = pq2_pci_exclude_device; 76 - 77 - for_each_compatible_node(np, NULL, "fsl,pq2-pci") 78 - pq2_pci_add_bridge(np); 79 - } 80 - #endif
+4 -1
arch/powerpc/platforms/83xx/Makefile
··· 2 2 # 3 3 # Makefile for the PowerPC 83xx linux kernel. 4 4 # 5 - obj-y := misc.o usb.o 5 + obj-y := misc.o 6 6 obj-$(CONFIG_SUSPEND) += suspend.o suspend-asm.o 7 7 obj-$(CONFIG_MCU_MPC8349EMITX) += mcu_mpc8349emitx.o 8 8 obj-$(CONFIG_MPC830x_RDB) += mpc830x_rdb.o ··· 13 13 obj-$(CONFIG_MPC837x_RDB) += mpc837x_rdb.o 14 14 obj-$(CONFIG_ASP834x) += asp834x.o 15 15 obj-$(CONFIG_KMETER1) += km83xx.o 16 + obj-$(CONFIG_PPC_MPC831x) += usb_831x.o 17 + obj-$(CONFIG_PPC_MPC834x) += usb_834x.o 18 + obj-$(CONFIG_PPC_MPC837x) += usb_837x.o
+2 -2
arch/powerpc/platforms/83xx/km83xx.c
··· 20 20 #include <linux/seq_file.h> 21 21 #include <linux/root_dev.h> 22 22 #include <linux/initrd.h> 23 - #include <linux/of_platform.h> 24 - #include <linux/of_device.h> 23 + #include <linux/of.h> 24 + #include <linux/of_address.h> 25 25 26 26 #include <linux/atomic.h> 27 27 #include <linux/time.h>
+3 -1
arch/powerpc/platforms/83xx/mpc832x_rdb.c
··· 15 15 #include <linux/spi/spi.h> 16 16 #include <linux/spi/mmc_spi.h> 17 17 #include <linux/mmc/host.h> 18 + #include <linux/of.h> 19 + #include <linux/of_address.h> 18 20 #include <linux/of_irq.h> 19 - #include <linux/of_platform.h> 21 + #include <linux/platform_device.h> 20 22 #include <linux/fsl_devices.h> 21 23 22 24 #include <asm/time.h>
-2
arch/powerpc/platforms/83xx/mpc83xx.h
··· 3 3 #define __MPC83XX_H__ 4 4 5 5 #include <linux/init.h> 6 - #include <linux/device.h> 7 - #include <asm/pci-bridge.h> 8 6 9 7 /* System Clock Control Register */ 10 8 #define MPC83XX_SCCR_OFFS 0xA08
+1 -1
arch/powerpc/platforms/83xx/suspend.c
··· 19 19 #include <linux/fsl_devices.h> 20 20 #include <linux/of_address.h> 21 21 #include <linux/of_irq.h> 22 - #include <linux/of_platform.h> 22 + #include <linux/platform_device.h> 23 23 #include <linux/export.h> 24 24 25 25 #include <asm/reg.h>
-251
arch/powerpc/platforms/83xx/usb.c
··· 1 - // SPDX-License-Identifier: GPL-2.0-or-later 2 - /* 3 - * Freescale 83xx USB SOC setup code 4 - * 5 - * Copyright (C) 2007 Freescale Semiconductor, Inc. 6 - * Author: Li Yang 7 - */ 8 - 9 - 10 - #include <linux/stddef.h> 11 - #include <linux/kernel.h> 12 - #include <linux/errno.h> 13 - #include <linux/of.h> 14 - #include <linux/of_address.h> 15 - 16 - #include <asm/io.h> 17 - #include <sysdev/fsl_soc.h> 18 - 19 - #include "mpc83xx.h" 20 - 21 - 22 - #ifdef CONFIG_PPC_MPC834x 23 - int __init mpc834x_usb_cfg(void) 24 - { 25 - unsigned long sccr, sicrl, sicrh; 26 - void __iomem *immap; 27 - struct device_node *np = NULL; 28 - int port0_is_dr = 0, port1_is_dr = 0; 29 - const void *prop, *dr_mode; 30 - 31 - immap = ioremap(get_immrbase(), 0x1000); 32 - if (!immap) 33 - return -ENOMEM; 34 - 35 - /* Read registers */ 36 - /* Note: DR and MPH must use the same clock setting in SCCR */ 37 - sccr = in_be32(immap + MPC83XX_SCCR_OFFS) & ~MPC83XX_SCCR_USB_MASK; 38 - sicrl = in_be32(immap + MPC83XX_SICRL_OFFS) & ~MPC834X_SICRL_USB_MASK; 39 - sicrh = in_be32(immap + MPC83XX_SICRH_OFFS) & ~MPC834X_SICRH_USB_UTMI; 40 - 41 - np = of_find_compatible_node(NULL, NULL, "fsl-usb2-dr"); 42 - if (np) { 43 - sccr |= MPC83XX_SCCR_USB_DRCM_11; /* 1:3 */ 44 - 45 - prop = of_get_property(np, "phy_type", NULL); 46 - port1_is_dr = 1; 47 - if (prop && (!strcmp(prop, "utmi") || 48 - !strcmp(prop, "utmi_wide"))) { 49 - sicrl |= MPC834X_SICRL_USB0 | MPC834X_SICRL_USB1; 50 - sicrh |= MPC834X_SICRH_USB_UTMI; 51 - port0_is_dr = 1; 52 - } else if (prop && !strcmp(prop, "serial")) { 53 - dr_mode = of_get_property(np, "dr_mode", NULL); 54 - if (dr_mode && !strcmp(dr_mode, "otg")) { 55 - sicrl |= MPC834X_SICRL_USB0 | MPC834X_SICRL_USB1; 56 - port0_is_dr = 1; 57 - } else { 58 - sicrl |= MPC834X_SICRL_USB1; 59 - } 60 - } else if (prop && !strcmp(prop, "ulpi")) { 61 - sicrl |= MPC834X_SICRL_USB1; 62 - } else { 63 - printk(KERN_WARNING "834x USB PHY type not supported\n"); 64 - } 65 - of_node_put(np); 66 
- } 67 - np = of_find_compatible_node(NULL, NULL, "fsl-usb2-mph"); 68 - if (np) { 69 - sccr |= MPC83XX_SCCR_USB_MPHCM_11; /* 1:3 */ 70 - 71 - prop = of_get_property(np, "port0", NULL); 72 - if (prop) { 73 - if (port0_is_dr) 74 - printk(KERN_WARNING 75 - "834x USB port0 can't be used by both DR and MPH!\n"); 76 - sicrl &= ~MPC834X_SICRL_USB0; 77 - } 78 - prop = of_get_property(np, "port1", NULL); 79 - if (prop) { 80 - if (port1_is_dr) 81 - printk(KERN_WARNING 82 - "834x USB port1 can't be used by both DR and MPH!\n"); 83 - sicrl &= ~MPC834X_SICRL_USB1; 84 - } 85 - of_node_put(np); 86 - } 87 - 88 - /* Write back */ 89 - out_be32(immap + MPC83XX_SCCR_OFFS, sccr); 90 - out_be32(immap + MPC83XX_SICRL_OFFS, sicrl); 91 - out_be32(immap + MPC83XX_SICRH_OFFS, sicrh); 92 - 93 - iounmap(immap); 94 - return 0; 95 - } 96 - #endif /* CONFIG_PPC_MPC834x */ 97 - 98 - #ifdef CONFIG_PPC_MPC831x 99 - int __init mpc831x_usb_cfg(void) 100 - { 101 - u32 temp; 102 - void __iomem *immap, *usb_regs; 103 - struct device_node *np = NULL; 104 - struct device_node *immr_node = NULL; 105 - const void *prop; 106 - struct resource res; 107 - int ret = 0; 108 - #ifdef CONFIG_USB_OTG 109 - const void *dr_mode; 110 - #endif 111 - 112 - np = of_find_compatible_node(NULL, NULL, "fsl-usb2-dr"); 113 - if (!np) 114 - return -ENODEV; 115 - prop = of_get_property(np, "phy_type", NULL); 116 - 117 - /* Map IMMR space for pin and clock settings */ 118 - immap = ioremap(get_immrbase(), 0x1000); 119 - if (!immap) { 120 - of_node_put(np); 121 - return -ENOMEM; 122 - } 123 - 124 - /* Configure clock */ 125 - immr_node = of_get_parent(np); 126 - if (immr_node && (of_device_is_compatible(immr_node, "fsl,mpc8315-immr") || 127 - of_device_is_compatible(immr_node, "fsl,mpc8308-immr"))) 128 - clrsetbits_be32(immap + MPC83XX_SCCR_OFFS, 129 - MPC8315_SCCR_USB_MASK, 130 - MPC8315_SCCR_USB_DRCM_01); 131 - else 132 - clrsetbits_be32(immap + MPC83XX_SCCR_OFFS, 133 - MPC83XX_SCCR_USB_MASK, 134 - MPC83XX_SCCR_USB_DRCM_11); 135 
- 136 - /* Configure pin mux for ULPI. There is no pin mux for UTMI */ 137 - if (prop && !strcmp(prop, "ulpi")) { 138 - if (of_device_is_compatible(immr_node, "fsl,mpc8308-immr")) { 139 - clrsetbits_be32(immap + MPC83XX_SICRH_OFFS, 140 - MPC8308_SICRH_USB_MASK, 141 - MPC8308_SICRH_USB_ULPI); 142 - } else if (of_device_is_compatible(immr_node, "fsl,mpc8315-immr")) { 143 - clrsetbits_be32(immap + MPC83XX_SICRL_OFFS, 144 - MPC8315_SICRL_USB_MASK, 145 - MPC8315_SICRL_USB_ULPI); 146 - clrsetbits_be32(immap + MPC83XX_SICRH_OFFS, 147 - MPC8315_SICRH_USB_MASK, 148 - MPC8315_SICRH_USB_ULPI); 149 - } else { 150 - clrsetbits_be32(immap + MPC83XX_SICRL_OFFS, 151 - MPC831X_SICRL_USB_MASK, 152 - MPC831X_SICRL_USB_ULPI); 153 - clrsetbits_be32(immap + MPC83XX_SICRH_OFFS, 154 - MPC831X_SICRH_USB_MASK, 155 - MPC831X_SICRH_USB_ULPI); 156 - } 157 - } 158 - 159 - iounmap(immap); 160 - 161 - of_node_put(immr_node); 162 - 163 - /* Map USB SOC space */ 164 - ret = of_address_to_resource(np, 0, &res); 165 - if (ret) { 166 - of_node_put(np); 167 - return ret; 168 - } 169 - usb_regs = ioremap(res.start, resource_size(&res)); 170 - 171 - /* Using on-chip PHY */ 172 - if (prop && (!strcmp(prop, "utmi_wide") || 173 - !strcmp(prop, "utmi"))) { 174 - u32 refsel; 175 - 176 - if (of_device_is_compatible(immr_node, "fsl,mpc8308-immr")) 177 - goto out; 178 - 179 - if (of_device_is_compatible(immr_node, "fsl,mpc8315-immr")) 180 - refsel = CONTROL_REFSEL_24MHZ; 181 - else 182 - refsel = CONTROL_REFSEL_48MHZ; 183 - /* Set UTMI_PHY_EN and REFSEL */ 184 - out_be32(usb_regs + FSL_USB2_CONTROL_OFFS, 185 - CONTROL_UTMI_PHY_EN | refsel); 186 - /* Using external UPLI PHY */ 187 - } else if (prop && !strcmp(prop, "ulpi")) { 188 - /* Set PHY_CLK_SEL to ULPI */ 189 - temp = CONTROL_PHY_CLK_SEL_ULPI; 190 - #ifdef CONFIG_USB_OTG 191 - /* Set OTG_PORT */ 192 - if (!of_device_is_compatible(immr_node, "fsl,mpc8308-immr")) { 193 - dr_mode = of_get_property(np, "dr_mode", NULL); 194 - if (dr_mode && !strcmp(dr_mode, 
"otg")) 195 - temp |= CONTROL_OTG_PORT; 196 - } 197 - #endif /* CONFIG_USB_OTG */ 198 - out_be32(usb_regs + FSL_USB2_CONTROL_OFFS, temp); 199 - } else { 200 - printk(KERN_WARNING "831x USB PHY type not supported\n"); 201 - ret = -EINVAL; 202 - } 203 - 204 - out: 205 - iounmap(usb_regs); 206 - of_node_put(np); 207 - return ret; 208 - } 209 - #endif /* CONFIG_PPC_MPC831x */ 210 - 211 - #ifdef CONFIG_PPC_MPC837x 212 - int __init mpc837x_usb_cfg(void) 213 - { 214 - void __iomem *immap; 215 - struct device_node *np = NULL; 216 - const void *prop; 217 - int ret = 0; 218 - 219 - np = of_find_compatible_node(NULL, NULL, "fsl-usb2-dr"); 220 - if (!np || !of_device_is_available(np)) { 221 - of_node_put(np); 222 - return -ENODEV; 223 - } 224 - prop = of_get_property(np, "phy_type", NULL); 225 - 226 - if (!prop || (strcmp(prop, "ulpi") && strcmp(prop, "serial"))) { 227 - printk(KERN_WARNING "837x USB PHY type not supported\n"); 228 - of_node_put(np); 229 - return -EINVAL; 230 - } 231 - 232 - /* Map IMMR space for pin and clock settings */ 233 - immap = ioremap(get_immrbase(), 0x1000); 234 - if (!immap) { 235 - of_node_put(np); 236 - return -ENOMEM; 237 - } 238 - 239 - /* Configure clock */ 240 - clrsetbits_be32(immap + MPC83XX_SCCR_OFFS, MPC837X_SCCR_USB_DRCM_11, 241 - MPC837X_SCCR_USB_DRCM_11); 242 - 243 - /* Configure pin mux for ULPI/serial */ 244 - clrsetbits_be32(immap + MPC83XX_SICRL_OFFS, MPC837X_SICRL_USB_MASK, 245 - MPC837X_SICRL_USB_ULPI); 246 - 247 - iounmap(immap); 248 - of_node_put(np); 249 - return ret; 250 - } 251 - #endif /* CONFIG_PPC_MPC837x */
+128
arch/powerpc/platforms/83xx/usb_831x.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-or-later 2 + /* 3 + * Freescale 83xx USB SOC setup code 4 + * 5 + * Copyright (C) 2007 Freescale Semiconductor, Inc. 6 + * Author: Li Yang 7 + */ 8 + 9 + #include <linux/stddef.h> 10 + #include <linux/kernel.h> 11 + #include <linux/errno.h> 12 + #include <linux/of.h> 13 + #include <linux/of_address.h> 14 + #include <linux/io.h> 15 + 16 + #include <sysdev/fsl_soc.h> 17 + 18 + #include "mpc83xx.h" 19 + 20 + int __init mpc831x_usb_cfg(void) 21 + { 22 + u32 temp; 23 + void __iomem *immap, *usb_regs; 24 + struct device_node *np = NULL; 25 + struct device_node *immr_node = NULL; 26 + const void *prop; 27 + struct resource res; 28 + int ret = 0; 29 + #ifdef CONFIG_USB_OTG 30 + const void *dr_mode; 31 + #endif 32 + 33 + np = of_find_compatible_node(NULL, NULL, "fsl-usb2-dr"); 34 + if (!np) 35 + return -ENODEV; 36 + prop = of_get_property(np, "phy_type", NULL); 37 + 38 + /* Map IMMR space for pin and clock settings */ 39 + immap = ioremap(get_immrbase(), 0x1000); 40 + if (!immap) { 41 + of_node_put(np); 42 + return -ENOMEM; 43 + } 44 + 45 + /* Configure clock */ 46 + immr_node = of_get_parent(np); 47 + if (immr_node && (of_device_is_compatible(immr_node, "fsl,mpc8315-immr") || 48 + of_device_is_compatible(immr_node, "fsl,mpc8308-immr"))) 49 + clrsetbits_be32(immap + MPC83XX_SCCR_OFFS, 50 + MPC8315_SCCR_USB_MASK, 51 + MPC8315_SCCR_USB_DRCM_01); 52 + else 53 + clrsetbits_be32(immap + MPC83XX_SCCR_OFFS, 54 + MPC83XX_SCCR_USB_MASK, 55 + MPC83XX_SCCR_USB_DRCM_11); 56 + 57 + /* Configure pin mux for ULPI. 
There is no pin mux for UTMI */ 58 + if (prop && !strcmp(prop, "ulpi")) { 59 + if (of_device_is_compatible(immr_node, "fsl,mpc8308-immr")) { 60 + clrsetbits_be32(immap + MPC83XX_SICRH_OFFS, 61 + MPC8308_SICRH_USB_MASK, 62 + MPC8308_SICRH_USB_ULPI); 63 + } else if (of_device_is_compatible(immr_node, "fsl,mpc8315-immr")) { 64 + clrsetbits_be32(immap + MPC83XX_SICRL_OFFS, 65 + MPC8315_SICRL_USB_MASK, 66 + MPC8315_SICRL_USB_ULPI); 67 + clrsetbits_be32(immap + MPC83XX_SICRH_OFFS, 68 + MPC8315_SICRH_USB_MASK, 69 + MPC8315_SICRH_USB_ULPI); 70 + } else { 71 + clrsetbits_be32(immap + MPC83XX_SICRL_OFFS, 72 + MPC831X_SICRL_USB_MASK, 73 + MPC831X_SICRL_USB_ULPI); 74 + clrsetbits_be32(immap + MPC83XX_SICRH_OFFS, 75 + MPC831X_SICRH_USB_MASK, 76 + MPC831X_SICRH_USB_ULPI); 77 + } 78 + } 79 + 80 + iounmap(immap); 81 + 82 + of_node_put(immr_node); 83 + 84 + /* Map USB SOC space */ 85 + ret = of_address_to_resource(np, 0, &res); 86 + if (ret) { 87 + of_node_put(np); 88 + return ret; 89 + } 90 + usb_regs = ioremap(res.start, resource_size(&res)); 91 + 92 + /* Using on-chip PHY */ 93 + if (prop && (!strcmp(prop, "utmi_wide") || !strcmp(prop, "utmi"))) { 94 + u32 refsel; 95 + 96 + if (of_device_is_compatible(immr_node, "fsl,mpc8308-immr")) 97 + goto out; 98 + 99 + if (of_device_is_compatible(immr_node, "fsl,mpc8315-immr")) 100 + refsel = CONTROL_REFSEL_24MHZ; 101 + else 102 + refsel = CONTROL_REFSEL_48MHZ; 103 + /* Set UTMI_PHY_EN and REFSEL */ 104 + out_be32(usb_regs + FSL_USB2_CONTROL_OFFS, 105 + CONTROL_UTMI_PHY_EN | refsel); 106 + /* Using external ULPI PHY */ 107 + } else if (prop && !strcmp(prop, "ulpi")) { 108 + /* Set PHY_CLK_SEL to ULPI */ 109 + temp = CONTROL_PHY_CLK_SEL_ULPI; 110 + #ifdef CONFIG_USB_OTG 111 + /* Set OTG_PORT */ 112 + if (!of_device_is_compatible(immr_node, "fsl,mpc8308-immr")) { 113 + dr_mode = of_get_property(np, "dr_mode", NULL); 114 + if (dr_mode && !strcmp(dr_mode, "otg")) 115 + temp |= CONTROL_OTG_PORT; 116 + } 117 + #endif /* CONFIG_USB_OTG */ 118 + 
out_be32(usb_regs + FSL_USB2_CONTROL_OFFS, temp); 119 + } else { 120 + pr_warn("831x USB PHY type not supported\n"); 121 + ret = -EINVAL; 122 + } 123 + 124 + out: 125 + iounmap(usb_regs); 126 + of_node_put(np); 127 + return ret; 128 + }
+90
arch/powerpc/platforms/83xx/usb_834x.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-or-later 2 + /* 3 + * Freescale 83xx USB SOC setup code 4 + * 5 + * Copyright (C) 2007 Freescale Semiconductor, Inc. 6 + * Author: Li Yang 7 + */ 8 + 9 + #include <linux/stddef.h> 10 + #include <linux/kernel.h> 11 + #include <linux/errno.h> 12 + #include <linux/of.h> 13 + #include <linux/of_address.h> 14 + #include <linux/io.h> 15 + 16 + #include <sysdev/fsl_soc.h> 17 + 18 + #include "mpc83xx.h" 19 + 20 + int __init mpc834x_usb_cfg(void) 21 + { 22 + unsigned long sccr, sicrl, sicrh; 23 + void __iomem *immap; 24 + struct device_node *np = NULL; 25 + int port0_is_dr = 0, port1_is_dr = 0; 26 + const void *prop, *dr_mode; 27 + 28 + immap = ioremap(get_immrbase(), 0x1000); 29 + if (!immap) 30 + return -ENOMEM; 31 + 32 + /* Read registers */ 33 + /* Note: DR and MPH must use the same clock setting in SCCR */ 34 + sccr = in_be32(immap + MPC83XX_SCCR_OFFS) & ~MPC83XX_SCCR_USB_MASK; 35 + sicrl = in_be32(immap + MPC83XX_SICRL_OFFS) & ~MPC834X_SICRL_USB_MASK; 36 + sicrh = in_be32(immap + MPC83XX_SICRH_OFFS) & ~MPC834X_SICRH_USB_UTMI; 37 + 38 + np = of_find_compatible_node(NULL, NULL, "fsl-usb2-dr"); 39 + if (np) { 40 + sccr |= MPC83XX_SCCR_USB_DRCM_11; /* 1:3 */ 41 + 42 + prop = of_get_property(np, "phy_type", NULL); 43 + port1_is_dr = 1; 44 + if (prop && 45 + (!strcmp(prop, "utmi") || !strcmp(prop, "utmi_wide"))) { 46 + sicrl |= MPC834X_SICRL_USB0 | MPC834X_SICRL_USB1; 47 + sicrh |= MPC834X_SICRH_USB_UTMI; 48 + port0_is_dr = 1; 49 + } else if (prop && !strcmp(prop, "serial")) { 50 + dr_mode = of_get_property(np, "dr_mode", NULL); 51 + if (dr_mode && !strcmp(dr_mode, "otg")) { 52 + sicrl |= MPC834X_SICRL_USB0 | MPC834X_SICRL_USB1; 53 + port0_is_dr = 1; 54 + } else { 55 + sicrl |= MPC834X_SICRL_USB1; 56 + } 57 + } else if (prop && !strcmp(prop, "ulpi")) { 58 + sicrl |= MPC834X_SICRL_USB1; 59 + } else { 60 + pr_warn("834x USB PHY type not supported\n"); 61 + } 62 + of_node_put(np); 63 + } 64 + np = of_find_compatible_node(NULL, NULL, "fsl-usb2-mph"); 65 + if (np) { 66 + sccr |= MPC83XX_SCCR_USB_MPHCM_11; /* 1:3 */ 67 + 68 + prop = of_get_property(np, "port0", NULL); 69 + if (prop) { 70 + if (port0_is_dr) 71 + pr_warn("834x USB port0 can't be used by both DR and MPH!\n"); 72 + sicrl &= ~MPC834X_SICRL_USB0; 73 + } 74 + prop = of_get_property(np, "port1", NULL); 75 + if (prop) { 76 + if (port1_is_dr) 77 + pr_warn("834x USB port1 can't be used by both DR and MPH!\n"); 78 + sicrl &= ~MPC834X_SICRL_USB1; 79 + } 80 + of_node_put(np); 81 + } 82 + 83 + /* Write back */ 84 + out_be32(immap + MPC83XX_SCCR_OFFS, sccr); 85 + out_be32(immap + MPC83XX_SICRL_OFFS, sicrl); 86 + out_be32(immap + MPC83XX_SICRH_OFFS, sicrh); 87 + 88 + iounmap(immap); 89 + return 0; 90 + }
+58
arch/powerpc/platforms/83xx/usb_837x.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-or-later 2 + /* 3 + * Freescale 83xx USB SOC setup code 4 + * 5 + * Copyright (C) 2007 Freescale Semiconductor, Inc. 6 + * Author: Li Yang 7 + */ 8 + 9 + #include <linux/stddef.h> 10 + #include <linux/kernel.h> 11 + #include <linux/errno.h> 12 + #include <linux/of.h> 13 + #include <linux/of_address.h> 14 + #include <linux/io.h> 15 + 16 + #include <sysdev/fsl_soc.h> 17 + 18 + #include "mpc83xx.h" 19 + 20 + int __init mpc837x_usb_cfg(void) 21 + { 22 + void __iomem *immap; 23 + struct device_node *np = NULL; 24 + const void *prop; 25 + int ret = 0; 26 + 27 + np = of_find_compatible_node(NULL, NULL, "fsl-usb2-dr"); 28 + if (!np || !of_device_is_available(np)) { 29 + of_node_put(np); 30 + return -ENODEV; 31 + } 32 + prop = of_get_property(np, "phy_type", NULL); 33 + 34 + if (!prop || (strcmp(prop, "ulpi") && strcmp(prop, "serial"))) { 35 + pr_warn("837x USB PHY type not supported\n"); 36 + of_node_put(np); 37 + return -EINVAL; 38 + } 39 + 40 + /* Map IMMR space for pin and clock settings */ 41 + immap = ioremap(get_immrbase(), 0x1000); 42 + if (!immap) { 43 + of_node_put(np); 44 + return -ENOMEM; 45 + } 46 + 47 + /* Configure clock */ 48 + clrsetbits_be32(immap + MPC83XX_SCCR_OFFS, MPC837X_SCCR_USB_DRCM_11, 49 + MPC837X_SCCR_USB_DRCM_11); 50 + 51 + /* Configure pin mux for ULPI/serial */ 52 + clrsetbits_be32(immap + MPC83XX_SICRL_OFFS, MPC837X_SICRL_USB_MASK, 53 + MPC837X_SICRL_USB_ULPI); 54 + 55 + iounmap(immap); 56 + of_node_put(np); 57 + return ret; 58 + }
+1 -1
arch/powerpc/platforms/85xx/bsc913x_qds.c
··· 9 9 * Copyright 2014 Freescale Semiconductor Inc. 10 10 */ 11 11 12 - #include <linux/of_platform.h> 12 + #include <linux/of.h> 13 13 #include <linux/pci.h> 14 14 #include <asm/mpic.h> 15 15 #include <sysdev/fsl_soc.h>
+1 -1
arch/powerpc/platforms/85xx/bsc913x_rdb.c
··· 7 7 * Copyright 2011-2012 Freescale Semiconductor Inc. 8 8 */ 9 9 10 - #include <linux/of_platform.h> 10 + #include <linux/of.h> 11 11 #include <linux/pci.h> 12 12 #include <asm/mpic.h> 13 13 #include <sysdev/fsl_soc.h>
+1 -2
arch/powerpc/platforms/85xx/c293pcie.c
··· 7 7 8 8 #include <linux/stddef.h> 9 9 #include <linux/kernel.h> 10 - #include <linux/of_fdt.h> 11 - #include <linux/of_platform.h> 10 + #include <linux/of.h> 12 11 13 12 #include <asm/machdep.h> 14 13 #include <asm/udbg.h>
+1
arch/powerpc/platforms/85xx/common.c
··· 3 3 * Routines common to most mpc85xx-based boards. 4 4 */ 5 5 6 + #include <linux/of.h> 6 7 #include <linux/of_irq.h> 7 8 #include <linux/of_platform.h> 8 9
+3 -3
arch/powerpc/platforms/85xx/corenet_generic.c
··· 30 30 #include "smp.h" 31 31 #include "mpc85xx.h" 32 32 33 - void __init corenet_gen_pic_init(void) 33 + static void __init corenet_gen_pic_init(void) 34 34 { 35 35 struct mpic *mpic; 36 36 unsigned int flags = MPIC_BIG_ENDIAN | MPIC_SINGLE_DEST_CPU | ··· 48 48 /* 49 49 * Setup the architecture 50 50 */ 51 - void __init corenet_gen_setup_arch(void) 51 + static void __init corenet_gen_setup_arch(void) 52 52 { 53 53 mpc85xx_smp_init(); 54 54 ··· 101 101 {} 102 102 }; 103 103 104 - int __init corenet_gen_publish_devices(void) 104 + static int __init corenet_gen_publish_devices(void) 105 105 { 106 106 return of_platform_bus_probe(NULL, of_device_ids, NULL); 107 107 }
+1 -1
arch/powerpc/platforms/85xx/ge_imp3a.c
··· 17 17 #include <linux/delay.h> 18 18 #include <linux/seq_file.h> 19 19 #include <linux/interrupt.h> 20 + #include <linux/of.h> 20 21 #include <linux/of_address.h> 21 - #include <linux/of_platform.h> 22 22 23 23 #include <asm/time.h> 24 24 #include <asm/machdep.h>
+2 -1
arch/powerpc/platforms/85xx/ksi8560.c
··· 18 18 #include <linux/kdev_t.h> 19 19 #include <linux/delay.h> 20 20 #include <linux/seq_file.h> 21 - #include <linux/of_platform.h> 21 + #include <linux/of.h> 22 + #include <linux/of_address.h> 22 23 23 24 #include <asm/time.h> 24 25 #include <asm/machdep.h>
+1 -1
arch/powerpc/platforms/85xx/mpc8536_ds.c
··· 12 12 #include <linux/delay.h> 13 13 #include <linux/seq_file.h> 14 14 #include <linux/interrupt.h> 15 - #include <linux/of_platform.h> 15 + #include <linux/of.h> 16 16 17 17 #include <asm/time.h> 18 18 #include <asm/machdep.h>
+1 -1
arch/powerpc/platforms/85xx/mpc85xx_ds.c
··· 15 15 #include <linux/delay.h> 16 16 #include <linux/seq_file.h> 17 17 #include <linux/interrupt.h> 18 + #include <linux/of.h> 18 19 #include <linux/of_irq.h> 19 - #include <linux/of_platform.h> 20 20 21 21 #include <asm/time.h> 22 22 #include <asm/machdep.h>
+2 -2
arch/powerpc/platforms/85xx/mpc85xx_mds.c
··· 26 26 #include <linux/seq_file.h> 27 27 #include <linux/initrd.h> 28 28 #include <linux/fsl_devices.h> 29 - #include <linux/of_platform.h> 30 - #include <linux/of_device.h> 29 + #include <linux/of.h> 30 + #include <linux/of_address.h> 31 31 #include <linux/phy.h> 32 32 #include <linux/memblock.h> 33 33 #include <linux/fsl/guts.h>
+2 -1
arch/powerpc/platforms/85xx/mpc85xx_rdb.c
··· 12 12 #include <linux/delay.h> 13 13 #include <linux/seq_file.h> 14 14 #include <linux/interrupt.h> 15 - #include <linux/of_platform.h> 15 + #include <linux/of.h> 16 + #include <linux/of_address.h> 16 17 #include <linux/fsl/guts.h> 17 18 18 19 #include <asm/time.h>
+1 -1
arch/powerpc/platforms/85xx/p1010rdb.c
··· 10 10 #include <linux/pci.h> 11 11 #include <linux/delay.h> 12 12 #include <linux/interrupt.h> 13 - #include <linux/of_platform.h> 13 + #include <linux/of.h> 14 14 15 15 #include <asm/time.h> 16 16 #include <asm/machdep.h>
+1 -1
arch/powerpc/platforms/85xx/p1022_ds.c
··· 18 18 19 19 #include <linux/fsl/guts.h> 20 20 #include <linux/pci.h> 21 + #include <linux/of.h> 21 22 #include <linux/of_address.h> 22 - #include <linux/of_platform.h> 23 23 #include <asm/div64.h> 24 24 #include <asm/mpic.h> 25 25 #include <asm/swiotlb.h>
+1 -1
arch/powerpc/platforms/85xx/p1022_rdk.c
··· 14 14 15 15 #include <linux/fsl/guts.h> 16 16 #include <linux/pci.h> 17 + #include <linux/of.h> 17 18 #include <linux/of_address.h> 18 - #include <linux/of_platform.h> 19 19 #include <asm/div64.h> 20 20 #include <asm/mpic.h> 21 21 #include <asm/swiotlb.h>
+1 -2
arch/powerpc/platforms/85xx/p1023_rdb.c
··· 15 15 #include <linux/delay.h> 16 16 #include <linux/module.h> 17 17 #include <linux/fsl_devices.h> 18 + #include <linux/of.h> 18 19 #include <linux/of_address.h> 19 - #include <linux/of_platform.h> 20 - #include <linux/of_device.h> 21 20 22 21 #include <asm/time.h> 23 22 #include <asm/machdep.h>
+1 -1
arch/powerpc/platforms/85xx/qemu_e500.c
··· 25 25 #include "smp.h" 26 26 #include "mpc85xx.h" 27 27 28 - void __init qemu_e500_pic_init(void) 28 + static void __init qemu_e500_pic_init(void) 29 29 { 30 30 struct mpic *mpic; 31 31 unsigned int flags = MPIC_BIG_ENDIAN | MPIC_SINGLE_DEST_CPU |
+1 -1
arch/powerpc/platforms/85xx/socrates.c
··· 23 23 #include <linux/kdev_t.h> 24 24 #include <linux/delay.h> 25 25 #include <linux/seq_file.h> 26 - #include <linux/of_platform.h> 26 + #include <linux/of.h> 27 27 28 28 #include <asm/time.h> 29 29 #include <asm/machdep.h>
-1
arch/powerpc/platforms/85xx/socrates_fpga_pic.c
··· 6 6 #include <linux/irq.h> 7 7 #include <linux/of_address.h> 8 8 #include <linux/of_irq.h> 9 - #include <linux/of_platform.h> 10 9 #include <linux/io.h> 11 10 12 11 /*
+1 -1
arch/powerpc/platforms/85xx/stx_gp3.c
··· 22 22 #include <linux/kdev_t.h> 23 23 #include <linux/delay.h> 24 24 #include <linux/seq_file.h> 25 - #include <linux/of_platform.h> 25 + #include <linux/of.h> 26 26 27 27 #include <asm/time.h> 28 28 #include <asm/machdep.h>
+1 -1
arch/powerpc/platforms/85xx/tqm85xx.c
··· 20 20 #include <linux/kdev_t.h> 21 21 #include <linux/delay.h> 22 22 #include <linux/seq_file.h> 23 - #include <linux/of_platform.h> 23 + #include <linux/of.h> 24 24 25 25 #include <asm/time.h> 26 26 #include <asm/machdep.h>
+2 -1
arch/powerpc/platforms/85xx/twr_p102x.c
··· 13 13 #include <linux/errno.h> 14 14 #include <linux/fsl/guts.h> 15 15 #include <linux/pci.h> 16 - #include <linux/of_platform.h> 16 + #include <linux/of.h> 17 + #include <linux/of_address.h> 17 18 18 19 #include <asm/pci-bridge.h> 19 20 #include <asm/udbg.h>
+1 -1
arch/powerpc/platforms/85xx/xes_mpc85xx.c
··· 16 16 #include <linux/delay.h> 17 17 #include <linux/seq_file.h> 18 18 #include <linux/interrupt.h> 19 + #include <linux/of.h> 19 20 #include <linux/of_address.h> 20 - #include <linux/of_platform.h> 21 21 22 22 #include <asm/time.h> 23 23 #include <asm/machdep.h>
+3
arch/powerpc/platforms/86xx/common.c
··· 3 3 * Routines common to most mpc86xx-based boards. 4 4 */ 5 5 6 + #include <linux/init.h> 7 + #include <linux/mod_devicetable.h> 6 8 #include <linux/of_platform.h> 9 + #include <asm/reg.h> 7 10 #include <asm/synch.h> 8 11 9 12 #include "mpc86xx.h"
+1 -1
arch/powerpc/platforms/86xx/gef_ppc9a.c
··· 18 18 #include <linux/kdev_t.h> 19 19 #include <linux/delay.h> 20 20 #include <linux/seq_file.h> 21 + #include <linux/of.h> 21 22 #include <linux/of_address.h> 22 - #include <linux/of_platform.h> 23 23 24 24 #include <asm/time.h> 25 25 #include <asm/machdep.h>
+1 -1
arch/powerpc/platforms/86xx/gef_sbc310.c
··· 18 18 #include <linux/kdev_t.h> 19 19 #include <linux/delay.h> 20 20 #include <linux/seq_file.h> 21 + #include <linux/of.h> 21 22 #include <linux/of_address.h> 22 - #include <linux/of_platform.h> 23 23 24 24 #include <asm/time.h> 25 25 #include <asm/machdep.h>
+1 -1
arch/powerpc/platforms/86xx/gef_sbc610.c
··· 18 18 #include <linux/kdev_t.h> 19 19 #include <linux/delay.h> 20 20 #include <linux/seq_file.h> 21 + #include <linux/of.h> 21 22 #include <linux/of_address.h> 22 - #include <linux/of_platform.h> 23 23 24 24 #include <asm/time.h> 25 25 #include <asm/machdep.h>
-1
arch/powerpc/platforms/86xx/mvme7100.c
··· 20 20 #include <linux/pci.h> 21 21 #include <linux/of.h> 22 22 #include <linux/of_fdt.h> 23 - #include <linux/of_platform.h> 24 23 #include <linux/of_address.h> 25 24 #include <asm/udbg.h> 26 25 #include <asm/mpic.h>
+3 -1
arch/powerpc/platforms/86xx/pic.c
··· 6 6 #include <linux/stddef.h> 7 7 #include <linux/kernel.h> 8 8 #include <linux/interrupt.h> 9 + #include <linux/of.h> 9 10 #include <linux/of_irq.h> 10 - #include <linux/of_platform.h> 11 11 12 12 #include <asm/mpic.h> 13 13 #include <asm/i8259.h> 14 + 15 + #include "mpc86xx.h" 14 16 15 17 #ifdef CONFIG_PPC_I8259 16 18 static void mpc86xx_8259_cascade(struct irq_desc *desc)
+1 -1
arch/powerpc/platforms/8xx/adder875.c
··· 12 12 #include <asm/time.h> 13 13 #include <asm/machdep.h> 14 14 #include <asm/cpm1.h> 15 - #include <asm/fs_pd.h> 15 + #include <asm/8xx_immap.h> 16 16 #include <asm/udbg.h> 17 17 18 18 #include "mpc8xx.h"
+3 -7
arch/powerpc/platforms/8xx/cpm1.c
··· 41 41 #include <asm/rheap.h> 42 42 #include <asm/cpm.h> 43 43 44 - #include <asm/fs_pd.h> 44 + #include <sysdev/fsl_soc.h> 45 45 46 46 #ifdef CONFIG_8xx_GPIO 47 47 #include <linux/gpio/legacy-of-mm-gpiochip.h> ··· 54 54 55 55 void __init cpm_reset(void) 56 56 { 57 - sysconf8xx_t __iomem *siu_conf; 58 - 59 57 cpmp = &mpc8xx_immr->im_cpm; 60 58 61 59 #ifndef CONFIG_PPC_EARLY_DEBUG_CPM ··· 75 77 * manual recommends it. 76 78 * Bit 25, FAM can also be set to use FEC aggressive mode (860T). 77 79 */ 78 - siu_conf = immr_map(im_siu_conf); 79 80 if ((mfspr(SPRN_IMMR) & 0xffff) == 0x0900) /* MPC885 */ 80 - out_be32(&siu_conf->sc_sdcr, 0x40); 81 + out_be32(&mpc8xx_immr->im_siu_conf.sc_sdcr, 0x40); 81 82 else 82 - out_be32(&siu_conf->sc_sdcr, 1); 83 - immr_unmap(siu_conf); 83 + out_be32(&mpc8xx_immr->im_siu_conf.sc_sdcr, 1); 84 84 } 85 85 86 86 static DEFINE_SPINLOCK(cmd_lock);
+22 -57
arch/powerpc/platforms/8xx/m8xx_setup.c
··· 22 22 23 23 #include <asm/io.h> 24 24 #include <asm/8xx_immap.h> 25 - #include <asm/fs_pd.h> 26 25 #include <mm/mmu_decl.h> 27 26 28 27 #include "pic.h" ··· 34 35 printk ("timebase_interrupt()\n"); 35 36 36 37 return IRQ_HANDLED; 37 - } 38 - 39 - /* per-board overridable init_internal_rtc() function. */ 40 - void __init __attribute__ ((weak)) 41 - init_internal_rtc(void) 42 - { 43 - sit8xx_t __iomem *sys_tmr = immr_map(im_sit); 44 - 45 - /* Disable the RTC one second and alarm interrupts. */ 46 - clrbits16(&sys_tmr->sit_rtcsc, (RTCSC_SIE | RTCSC_ALE)); 47 - 48 - /* Enable the RTC */ 49 - setbits16(&sys_tmr->sit_rtcsc, (RTCSC_RTF | RTCSC_RTE)); 50 - immr_unmap(sys_tmr); 51 38 } 52 39 53 40 static int __init get_freq(char *name, unsigned long *val) ··· 65 80 void __init mpc8xx_calibrate_decr(void) 66 81 { 67 82 struct device_node *cpu; 68 - cark8xx_t __iomem *clk_r1; 69 - car8xx_t __iomem *clk_r2; 70 - sitk8xx_t __iomem *sys_tmr1; 71 - sit8xx_t __iomem *sys_tmr2; 72 83 int irq, virq; 73 84 74 - clk_r1 = immr_map(im_clkrstk); 75 - 76 85 /* Unlock the SCCR. */ 77 - out_be32(&clk_r1->cark_sccrk, ~KAPWR_KEY); 78 - out_be32(&clk_r1->cark_sccrk, KAPWR_KEY); 79 - immr_unmap(clk_r1); 86 + out_be32(&mpc8xx_immr->im_clkrstk.cark_sccrk, ~KAPWR_KEY); 87 + out_be32(&mpc8xx_immr->im_clkrstk.cark_sccrk, KAPWR_KEY); 80 88 81 89 /* Force all 8xx processors to use divide by 16 processor clock. */ 82 - clk_r2 = immr_map(im_clkrst); 83 - setbits32(&clk_r2->car_sccr, 0x02000000); 84 - immr_unmap(clk_r2); 90 + setbits32(&mpc8xx_immr->im_clkrst.car_sccr, 0x02000000); 85 91 86 92 /* Processor frequency is MHz. 87 93 */ ··· 99 123 * we guarantee the registers are locked, then we unlock them 100 124 * for our use. 101 125 */ 102 - sys_tmr1 = immr_map(im_sitk); 103 - out_be32(&sys_tmr1->sitk_tbscrk, ~KAPWR_KEY); 104 - out_be32(&sys_tmr1->sitk_rtcsck, ~KAPWR_KEY); 105 - out_be32(&sys_tmr1->sitk_tbk, ~KAPWR_KEY); 106 - out_be32(&sys_tmr1->sitk_tbscrk, KAPWR_KEY); 107 - out_be32(&sys_tmr1->sitk_rtcsck, KAPWR_KEY); 108 - out_be32(&sys_tmr1->sitk_tbk, KAPWR_KEY); 109 - immr_unmap(sys_tmr1); 126 + out_be32(&mpc8xx_immr->im_sitk.sitk_tbscrk, ~KAPWR_KEY); 127 + out_be32(&mpc8xx_immr->im_sitk.sitk_rtcsck, ~KAPWR_KEY); 128 + out_be32(&mpc8xx_immr->im_sitk.sitk_tbk, ~KAPWR_KEY); 129 + out_be32(&mpc8xx_immr->im_sitk.sitk_tbscrk, KAPWR_KEY); 130 + out_be32(&mpc8xx_immr->im_sitk.sitk_rtcsck, KAPWR_KEY); 131 + out_be32(&mpc8xx_immr->im_sitk.sitk_tbk, KAPWR_KEY); 110 132 111 - init_internal_rtc(); 133 + /* Disable the RTC one second and alarm interrupts. */ 134 + clrbits16(&mpc8xx_immr->im_sit.sit_rtcsc, (RTCSC_SIE | RTCSC_ALE)); 135 + 136 + /* Enable the RTC */ 137 + setbits16(&mpc8xx_immr->im_sit.sit_rtcsc, (RTCSC_RTF | RTCSC_RTE)); 112 138 113 139 /* Enabling the decrementer also enables the timebase interrupts 114 140 * (or from the other point of view, to get decrementer interrupts ··· 122 144 of_node_put(cpu); 123 145 irq = virq_to_hw(virq); 124 146 125 - sys_tmr2 = immr_map(im_sit); 126 - out_be16(&sys_tmr2->sit_tbscr, ((1 << (7 - (irq/2))) << 8) | 127 - (TBSCR_TBF | TBSCR_TBE)); 128 - immr_unmap(sys_tmr2); 147 + out_be16(&mpc8xx_immr->im_sit.sit_tbscr, 148 + ((1 << (7 - (irq / 2))) << 8) | (TBSCR_TBF | TBSCR_TBE)); 129 149 130 150 if (request_irq(virq, timebase_interrupt, IRQF_NO_THREAD, "tbint", 131 151 NULL)) ··· 137 161 138 162 int mpc8xx_set_rtc_time(struct rtc_time *tm) 139 163 { 140 - sitk8xx_t __iomem *sys_tmr1; 141 - sit8xx_t __iomem *sys_tmr2; 142 164 time64_t time; 143 165 144 - sys_tmr1 = immr_map(im_sitk); 145 - sys_tmr2 = immr_map(im_sit); 146 166 time = rtc_tm_to_time64(tm); 147 167 148 - out_be32(&sys_tmr1->sitk_rtck, KAPWR_KEY); 149 - out_be32(&sys_tmr2->sit_rtc, (u32)time); 150 - out_be32(&sys_tmr1->sitk_rtck, ~KAPWR_KEY); 168 + out_be32(&mpc8xx_immr->im_sitk.sitk_rtck, KAPWR_KEY); 169 + out_be32(&mpc8xx_immr->im_sit.sit_rtc, (u32)time); 170 + out_be32(&mpc8xx_immr->im_sitk.sitk_rtck, ~KAPWR_KEY); 151 171 152 - immr_unmap(sys_tmr2); 153 - immr_unmap(sys_tmr1); 154 172 return 0; 155 173 } 156 174 157 175 void mpc8xx_get_rtc_time(struct rtc_time *tm) 158 176 { 159 177 unsigned long data; 160 - sit8xx_t __iomem *sys_tmr = immr_map(im_sit); 161 178 162 179 /* Get time from the RTC. */ 163 - data = in_be32(&sys_tmr->sit_rtc); 180 + data = in_be32(&mpc8xx_immr->im_sit.sit_rtc); 164 181 rtc_time64_to_tm(data, tm); 165 - immr_unmap(sys_tmr); 166 182 return; 167 183 } 168 184 169 185 void __noreturn mpc8xx_restart(char *cmd) 170 186 { 171 - car8xx_t __iomem *clk_r = immr_map(im_clkrst); 172 - 173 - 174 187 local_irq_disable(); 175 188 176 - setbits32(&clk_r->car_plprcr, 0x00000080); 189 + setbits32(&mpc8xx_immr->im_clkrst.car_plprcr, 0x00000080); 177 190 /* Clear the ME bit in MSR to cause checkstop on machine check 178 191 */ 179 192 mtmsr(mfmsr() & ~0x1000); 180 193 181 - in_8(&clk_r->res[0]); 194 + in_8(&mpc8xx_immr->im_clkrst.res[0]); 182 195 panic("Restart failed\n"); 183 196 }
-1
arch/powerpc/platforms/8xx/mpc86xads_setup.c
··· 24 24 #include <asm/time.h> 25 25 #include <asm/8xx_immap.h> 26 26 #include <asm/cpm1.h> 27 - #include <asm/fs_pd.h> 28 27 #include <asm/udbg.h> 29 28 30 29 #include "mpc86xads.h"
-1
arch/powerpc/platforms/8xx/mpc885ads_setup.c
··· 36 36 #include <asm/time.h> 37 37 #include <asm/8xx_immap.h> 38 38 #include <asm/cpm1.h> 39 - #include <asm/fs_pd.h> 40 39 #include <asm/udbg.h> 41 40 42 41 #include "mpc885ads.h"
-1
arch/powerpc/platforms/8xx/tqm8xx_setup.c
··· 38 38 #include <asm/time.h> 39 39 #include <asm/8xx_immap.h> 40 40 #include <asm/cpm1.h> 41 - #include <asm/fs_pd.h> 42 41 #include <asm/udbg.h> 43 42 44 43 #include "mpc8xx.h"
+1 -1
arch/powerpc/platforms/Kconfig
··· 251 251 252 252 config CPM2 253 253 bool "Enable support for the CPM2 (Communications Processor Module)" 254 - depends on (FSL_SOC_BOOKE && PPC32) || 8260 254 + depends on (FSL_SOC_BOOKE && PPC32) || PPC_82xx 255 255 select CPM 256 256 select HAVE_PCI 257 257 select GPIOLIB
+7
arch/powerpc/platforms/Kconfig.cputype
··· 276 276 default "e500mc" if E500MC_CPU 277 277 default "powerpc" if POWERPC_CPU 278 278 279 + config TUNE_CPU 280 + string 281 + depends on POWERPC64_CPU 282 + default "-mtune=power10" if $(cc-option,-mtune=power10) 283 + default "-mtune=power9" if $(cc-option,-mtune=power9) 284 + default "-mtune=power8" if $(cc-option,-mtune=power8) 285 + 279 286 config PPC_BOOK3S 280 287 def_bool y 281 288 depends on PPC_BOOK3S_32 || PPC_BOOK3S_64
+2 -1
arch/powerpc/platforms/cell/axon_msi.c
··· 10 10 #include <linux/pci.h> 11 11 #include <linux/msi.h> 12 12 #include <linux/export.h> 13 - #include <linux/of_platform.h> 14 13 #include <linux/slab.h> 15 14 #include <linux/debugfs.h> 15 + #include <linux/of.h> 16 16 #include <linux/of_irq.h> 17 + #include <linux/platform_device.h> 17 18 18 19 #include <asm/dcr.h> 19 20 #include <asm/machdep.h>
+1 -2
arch/powerpc/platforms/cell/cbe_regs.c
··· 10 10 #include <linux/percpu.h> 11 11 #include <linux/types.h> 12 12 #include <linux/export.h> 13 + #include <linux/of.h> 13 14 #include <linux/of_address.h> 14 - #include <linux/of_device.h> 15 - #include <linux/of_platform.h> 16 15 #include <linux/pgtable.h> 17 16 18 17 #include <asm/io.h>
+1 -1
arch/powerpc/platforms/cell/iommu.c
··· 16 16 #include <linux/notifier.h> 17 17 #include <linux/of.h> 18 18 #include <linux/of_address.h> 19 - #include <linux/of_platform.h> 19 + #include <linux/platform_device.h> 20 20 #include <linux/slab.h> 21 21 #include <linux/memblock.h> 22 22
+1 -1
arch/powerpc/platforms/cell/ras.c
··· 22 22 #include <asm/cell-regs.h> 23 23 24 24 #include "ras.h" 25 - 25 + #include "pervasive.h" 26 26 27 27 static void dump_fir(int cpu) 28 28 {
+1
arch/powerpc/platforms/cell/setup.c
··· 27 27 #include <linux/mutex.h> 28 28 #include <linux/memory_hotplug.h> 29 29 #include <linux/of_platform.h> 30 + #include <linux/platform_device.h> 30 31 31 32 #include <asm/mmu.h> 32 33 #include <asm/processor.h>
-1
arch/powerpc/platforms/cell/spider-pci.c
··· 9 9 10 10 #include <linux/kernel.h> 11 11 #include <linux/of_address.h> 12 - #include <linux/of_platform.h> 13 12 #include <linux/slab.h> 14 13 #include <linux/io.h> 15 14
+1
arch/powerpc/platforms/cell/spu_manage.c
··· 25 25 26 26 #include "spufs/spufs.h" 27 27 #include "interrupt.h" 28 + #include "spu_priv1_mmio.h" 28 29 29 30 struct device_node *spu_devnode(struct spu *spu) 30 31 {
+1 -1
arch/powerpc/platforms/embedded6xx/holly.c
··· 22 22 #include <linux/serial.h> 23 23 #include <linux/tty.h> 24 24 #include <linux/serial_core.h> 25 + #include <linux/of.h> 25 26 #include <linux/of_address.h> 26 27 #include <linux/of_irq.h> 27 - #include <linux/of_platform.h> 28 28 #include <linux/extable.h> 29 29 30 30 #include <asm/time.h>
+2 -1
arch/powerpc/platforms/maple/setup.c
··· 36 36 #include <linux/serial.h> 37 37 #include <linux/smp.h> 38 38 #include <linux/bitops.h> 39 + #include <linux/of.h> 39 40 #include <linux/of_address.h> 40 - #include <linux/of_device.h> 41 + #include <linux/platform_device.h> 41 42 #include <linux/memblock.h> 42 43 43 44 #include <asm/processor.h>
+1 -1
arch/powerpc/platforms/pasemi/gpio_mdio.c
··· 20 20 #include <linux/phy.h> 21 21 #include <linux/of_address.h> 22 22 #include <linux/of_mdio.h> 23 - #include <linux/of_platform.h> 23 + #include <linux/platform_device.h> 24 24 25 25 #define DELAY 1 26 26
+1
arch/powerpc/platforms/pasemi/pasemi.h
··· 4 4 5 5 extern time64_t pas_get_boot_time(void); 6 6 extern void pas_pci_init(void); 7 + struct pci_dev; 7 8 extern void pas_pci_irq_fixup(struct pci_dev *dev); 8 9 extern void pas_pci_dma_dev_setup(struct pci_dev *dev); 9 10
+2
arch/powerpc/platforms/pasemi/setup.c
··· 16 16 #include <linux/console.h> 17 17 #include <linux/export.h> 18 18 #include <linux/pci.h> 19 + #include <linux/of.h> 19 20 #include <linux/of_platform.h> 21 + #include <linux/platform_device.h> 20 22 #include <linux/gfp.h> 21 23 #include <linux/irqdomain.h> 22 24
+2
arch/powerpc/platforms/pasemi/time.c
··· 9 9 10 10 #include <asm/time.h> 11 11 12 + #include "pasemi.h" 13 + 12 14 time64_t __init pas_get_boot_time(void) 13 15 { 14 16 /* Let's just return a fake date right now */
+6 -4
arch/powerpc/platforms/powermac/feature.c
··· 37 37 #include <asm/pci-bridge.h> 38 38 #include <asm/pmac_low_i2c.h> 39 39 40 + #include "pmac.h" 41 + 40 42 #undef DEBUG_FEATURE 41 43 42 44 #ifdef DEBUG_FEATURE ··· 134 132 * Here are the chip specific feature functions 135 133 */ 136 134 137 - static inline int simple_feature_tweak(struct device_node *node, int type, 138 - int reg, u32 mask, int value) 135 + #ifndef CONFIG_PPC64 136 + 137 + static int simple_feature_tweak(struct device_node *node, int type, int reg, 138 + u32 mask, int value) 139 139 { 140 140 struct macio_chip* macio; 141 141 unsigned long flags; ··· 155 151 156 152 return 0; 157 153 } 158 - 159 - #ifndef CONFIG_PPC64 160 154 161 155 static long ohare_htw_scc_enable(struct device_node *node, long param, 162 156 long value)
+1 -1
arch/powerpc/platforms/powermac/setup.c
··· 45 45 #include <linux/root_dev.h> 46 46 #include <linux/bitops.h> 47 47 #include <linux/suspend.h> 48 - #include <linux/of_device.h> 48 + #include <linux/of.h> 49 49 #include <linux/of_platform.h> 50 50 51 51 #include <asm/reg.h>
+1 -2
arch/powerpc/platforms/powernv/eeh-powernv.c
··· 855 855 struct pci_controller *hose = pci_bus_to_host(pdev->bus); 856 856 struct pnv_phb *phb = hose->private_data; 857 857 struct device_node *dn = pci_device_to_OF_node(pdev); 858 - uint64_t id = PCI_SLOT_ID(phb->opal_id, 859 - (pdev->bus->number << 8) | pdev->devfn); 858 + uint64_t id = PCI_SLOT_ID(phb->opal_id, pci_dev_id(pdev)); 860 859 uint8_t scope; 861 860 int64_t rc; 862 861
+1 -1
arch/powerpc/platforms/powernv/ocxl.c
··· 449 449 if (!data) 450 450 return -ENOMEM; 451 451 452 - bdfn = (dev->bus->number << 8) | dev->devfn; 452 + bdfn = pci_dev_id(dev); 453 453 rc = opal_npu_spa_setup(phb->opal_id, bdfn, virt_to_phys(spa_mem), 454 454 PE_mask); 455 455 if (rc) {
-1
arch/powerpc/platforms/powernv/opal-imc.c
··· 11 11 #include <linux/platform_device.h> 12 12 #include <linux/of.h> 13 13 #include <linux/of_address.h> 14 - #include <linux/of_platform.h> 15 14 #include <linux/crash_dump.h> 16 15 #include <linux/debugfs.h> 17 16 #include <asm/opal.h>
+17 -10
arch/powerpc/platforms/powernv/opal-prd.c
··· 24 24 #include <linux/uaccess.h> 25 25 26 26 27 + struct opal_prd_msg { 28 + union { 29 + struct opal_prd_msg_header header; 30 + DECLARE_FLEX_ARRAY(u8, data); 31 + }; 32 + }; 33 + 27 34 /* 28 35 * The msg member must be at the end of the struct, as it's followed by the 29 36 * message data. 30 37 */ 31 38 struct opal_prd_msg_queue_item { 32 - struct list_head list; 33 - struct opal_prd_msg_header msg; 39 + struct list_head list; 40 + struct opal_prd_msg msg; 34 41 }; 35 42 36 43 static struct device_node *prd_node; ··· 163 156 int rc; 164 157 165 158 /* we need at least a header's worth of data */ 166 - if (count < sizeof(item->msg)) 159 + if (count < sizeof(item->msg.header)) 167 160 return -EINVAL; 168 161 169 162 if (*ppos) ··· 193 186 return -EINTR; 194 187 } 195 188 196 - size = be16_to_cpu(item->msg.size); 189 + size = be16_to_cpu(item->msg.header.size); 197 190 if (size > count) { 198 191 err = -EINVAL; 199 192 goto err_requeue; ··· 221 214 size_t count, loff_t *ppos) 222 215 { 223 216 struct opal_prd_msg_header hdr; 217 + struct opal_prd_msg *msg; 224 218 ssize_t size; 225 - void *msg; 226 219 int rc; 227 220 228 221 size = sizeof(hdr); ··· 254 247 255 248 static int opal_prd_release(struct inode *inode, struct file *file) 256 249 { 257 - struct opal_prd_msg_header msg; 250 + struct opal_prd_msg msg; 258 251 259 - msg.size = cpu_to_be16(sizeof(msg)); 260 - msg.type = OPAL_PRD_MSG_TYPE_FINI; 252 + msg.header.size = cpu_to_be16(sizeof(msg)); 253 + msg.header.type = OPAL_PRD_MSG_TYPE_FINI; 261 254 262 - opal_prd_msg((struct opal_prd_msg *)&msg); 255 + opal_prd_msg(&msg); 263 256 264 257 atomic_xchg(&prd_usage, 0); 265 258 ··· 359 352 if (!item) 360 353 return -ENOMEM; 361 354 362 - memcpy(&item->msg, msg->params, msg_size); 355 + memcpy(&item->msg.data, msg->params, msg_size); 363 356 364 357 spin_lock_irqsave(&opal_prd_msg_queue_lock, flags); 365 358 list_add_tail(&item->list, &opal_prd_msg_queue);
+2 -1
arch/powerpc/platforms/powernv/opal-rtc.c
··· 11 11 #include <linux/bcd.h> 12 12 #include <linux/rtc.h> 13 13 #include <linux/delay.h> 14 - #include <linux/platform_device.h> 14 + #include <linux/of.h> 15 15 #include <linux/of_platform.h> 16 + #include <linux/platform_device.h> 16 17 17 18 #include <asm/opal.h> 18 19 #include <asm/firmware.h>
+1 -1
arch/powerpc/platforms/powernv/opal-secvar.c
··· 12 12 #define pr_fmt(fmt) "secvar: "fmt 13 13 14 14 #include <linux/types.h> 15 + #include <linux/of.h> 15 16 #include <linux/platform_device.h> 16 - #include <linux/of_platform.h> 17 17 #include <asm/opal.h> 18 18 #include <asm/secvar.h> 19 19 #include <asm/secure_boot.h>
+2
arch/powerpc/platforms/powernv/opal-sensor.c
··· 6 6 */ 7 7 8 8 #include <linux/delay.h> 9 + #include <linux/of.h> 9 10 #include <linux/of_platform.h> 11 + #include <linux/platform_device.h> 10 12 #include <asm/opal.h> 11 13 #include <asm/machdep.h> 12 14
+2 -2
arch/powerpc/platforms/powernv/opal-xscom.c
··· 168 168 ent->path.size = strlen((char *)ent->path.data); 169 169 170 170 dir = debugfs_create_dir(ent->name, root); 171 - if (!dir) { 171 + if (IS_ERR(dir)) { 172 172 kfree(ent->path.data); 173 173 kfree(ent); 174 174 return -1; ··· 190 190 return 0; 191 191 192 192 root = debugfs_create_dir("scom", arch_debugfs_dir); 193 - if (!root) 193 + if (IS_ERR(root)) 194 194 return -1; 195 195 196 196 rc = 0;
+3 -3
arch/powerpc/platforms/powernv/pci-ioda.c
··· 997 997 struct pnv_ioda_pe *pe; 998 998 999 999 /* Check if the BDFN for this device is associated with a PE yet */ 1000 - pe = pnv_pci_bdfn_to_pe(phb, pdev->devfn | (pdev->bus->number << 8)); 1000 + pe = pnv_pci_bdfn_to_pe(phb, pci_dev_id(pdev)); 1001 1001 if (!pe) { 1002 1002 /* VF PEs should be pre-configured in pnv_pci_sriov_enable() */ 1003 1003 if (WARN_ON(pdev->is_virtfn)) 1004 1004 return; 1005 1005 1006 1006 pnv_pci_configure_bus(pdev->bus); 1007 - pe = pnv_pci_bdfn_to_pe(phb, pdev->devfn | (pdev->bus->number << 8)); 1007 + pe = pnv_pci_bdfn_to_pe(phb, pci_dev_id(pdev)); 1008 1008 pci_info(pdev, "Configured PE#%x\n", pe ? pe->pe_number : 0xfffff); 1009 1009 1010 1010 ··· 2526 2526 if (WARN_ON(!phb)) 2527 2527 return ERR_PTR(-ENODEV); 2528 2528 2529 - pe = pnv_pci_bdfn_to_pe(phb, pdev->devfn | (pdev->bus->number << 8)); 2529 + pe = pnv_pci_bdfn_to_pe(phb, pci_dev_id(pdev)); 2530 2530 if (!pe) 2531 2531 return ERR_PTR(-ENODEV); 2532 2532
+1 -9
arch/powerpc/platforms/powernv/setup.c
··· 482 482 #ifdef CONFIG_MEMORY_HOTPLUG 483 483 static unsigned long pnv_memory_block_size(void) 484 484 { 485 - /* 486 - * We map the kernel linear region with 1GB large pages on radix. For 487 - * memory hot unplug to work our memory block size must be at least 488 - * this size. 489 - */ 490 - if (radix_enabled()) 491 - return radix_mem_block_size; 492 - else 493 - return 256UL * 1024 * 1024; 485 + return memory_block_size; 494 486 } 495 487 #endif 496 488
+2 -2
arch/powerpc/platforms/ps3/repository.c
··· 73 73 74 74 static u64 make_first_field(const char *text, u64 index) 75 75 { 76 - u64 n; 76 + u64 n = 0; 77 77 78 - strncpy((char *)&n, text, 8); 78 + memcpy((char *)&n, text, strnlen(text, sizeof(n))); 79 79 return PS3_VENDOR_ID_NONE + (n >> 32) + index; 80 80 } 81 81
+21 -9
arch/powerpc/platforms/pseries/hotplug-cpu.c
··· 398 398 for_each_present_cpu(cpu) { 399 399 if (get_hard_smp_processor_id(cpu) != thread) 400 400 continue; 401 + 402 + if (!topology_is_primary_thread(cpu)) { 403 + if (cpu_smt_control != CPU_SMT_ENABLED) 404 + break; 405 + if (!topology_smt_thread_allowed(cpu)) 406 + break; 407 + } 408 + 401 409 cpu_maps_update_done(); 402 410 find_and_update_cpu_nid(cpu); 403 411 rc = device_online(get_cpu_device(cpu)); ··· 853 845 .notifier_call = pseries_smp_notifier, 854 846 }; 855 847 856 - static int __init pseries_cpu_hotplug_init(void) 848 + void __init pseries_cpu_hotplug_init(void) 857 849 { 858 850 int qcss_tok; 859 - unsigned int node; 860 - 861 - #ifdef CONFIG_ARCH_CPU_PROBE_RELEASE 862 - ppc_md.cpu_probe = dlpar_cpu_probe; 863 - ppc_md.cpu_release = dlpar_cpu_release; 864 - #endif /* CONFIG_ARCH_CPU_PROBE_RELEASE */ 865 851 866 852 rtas_stop_self_token = rtas_function_token(RTAS_FN_STOP_SELF); 867 853 qcss_tok = rtas_function_token(RTAS_FN_QUERY_CPU_STOPPED_STATE); ··· 864 862 qcss_tok == RTAS_UNKNOWN_SERVICE) { 865 863 printk(KERN_INFO "CPU Hotplug not supported by firmware " 866 864 "- disabling.\n"); 867 - return 0; 865 + return; 868 866 } 869 867 870 868 smp_ops->cpu_offline_self = pseries_cpu_offline_self; 871 869 smp_ops->cpu_disable = pseries_cpu_disable; 872 870 smp_ops->cpu_die = pseries_cpu_die; 871 + } 872 + 873 + static int __init pseries_dlpar_init(void) 874 + { 875 + unsigned int node; 876 + 877 + #ifdef CONFIG_ARCH_CPU_PROBE_RELEASE 878 + ppc_md.cpu_probe = dlpar_cpu_probe; 879 + ppc_md.cpu_release = dlpar_cpu_release; 880 + #endif /* CONFIG_ARCH_CPU_PROBE_RELEASE */ 873 881 874 882 /* Processors can be added/removed only on LPAR */ 875 883 if (firmware_has_feature(FW_FEATURE_LPAR)) { ··· 898 886 899 887 return 0; 900 888 } 901 - machine_arch_initcall(pseries, pseries_cpu_hotplug_init); 889 + machine_arch_initcall(pseries, pseries_dlpar_init);
+4 -56
arch/powerpc/platforms/pseries/hotplug-memory.c
··· 21 21 #include <asm/drmem.h> 22 22 #include "pseries.h" 23 23 24 - unsigned long pseries_memory_block_size(void) 25 - { 26 - struct device_node *np; 27 - u64 memblock_size = MIN_MEMORY_BLOCK_SIZE; 28 - struct resource r; 29 - 30 - np = of_find_node_by_path("/ibm,dynamic-reconfiguration-memory"); 31 - if (np) { 32 - int len; 33 - int size_cells; 34 - const __be32 *prop; 35 - 36 - size_cells = of_n_size_cells(np); 37 - 38 - prop = of_get_property(np, "ibm,lmb-size", &len); 39 - if (prop && len >= size_cells * sizeof(__be32)) 40 - memblock_size = of_read_number(prop, size_cells); 41 - of_node_put(np); 42 - 43 - } else if (machine_is(pseries)) { 44 - /* This fallback really only applies to pseries */ 45 - unsigned int memzero_size = 0; 46 - 47 - np = of_find_node_by_path("/memory@0"); 48 - if (np) { 49 - if (!of_address_to_resource(np, 0, &r)) 50 - memzero_size = resource_size(&r); 51 - of_node_put(np); 52 - } 53 - 54 - if (memzero_size) { 55 - /* We now know the size of memory@0, use this to find 56 - * the first memoryblock and get its size. 
57 - */ 58 - char buf[64]; 59 - 60 - sprintf(buf, "/memory@%x", memzero_size); 61 - np = of_find_node_by_path(buf); 62 - if (np) { 63 - if (!of_address_to_resource(np, 0, &r)) 64 - memblock_size = resource_size(&r); 65 - of_node_put(np); 66 - } 67 - } 68 - } 69 - return memblock_size; 70 - } 71 - 72 24 static void dlpar_free_property(struct property *prop) 73 25 { 74 26 kfree(prop->name); ··· 235 283 236 284 static int pseries_remove_memblock(unsigned long base, unsigned long memblock_size) 237 285 { 238 - unsigned long block_sz, start_pfn; 286 + unsigned long start_pfn; 239 287 int sections_per_block; 240 288 int i; 241 289 ··· 246 294 if (!pfn_valid(start_pfn)) 247 295 goto out; 248 296 249 - block_sz = pseries_memory_block_size(); 250 - sections_per_block = block_sz / MIN_MEMORY_BLOCK_SIZE; 297 + sections_per_block = memory_block_size / MIN_MEMORY_BLOCK_SIZE; 251 298 252 299 for (i = 0; i < sections_per_block; i++) { 253 300 __remove_memory(base, MIN_MEMORY_BLOCK_SIZE); ··· 305 354 static int dlpar_remove_lmb(struct drmem_lmb *lmb) 306 355 { 307 356 struct memory_block *mem_block; 308 - unsigned long block_sz; 309 357 int rc; 310 358 311 359 if (!lmb_is_removable(lmb)) ··· 320 370 return rc; 321 371 } 322 372 323 - block_sz = pseries_memory_block_size(); 324 - 325 - __remove_memory(lmb->base_addr, block_sz); 373 + __remove_memory(lmb->base_addr, memory_block_size); 326 374 put_device(&mem_block->dev); 327 375 328 376 /* Update memory regions for memory remove */ 329 - memblock_remove(lmb->base_addr, block_sz); 377 + memblock_remove(lmb->base_addr, memory_block_size); 330 378 331 379 invalidate_lmb_associativity_index(lmb); 332 380 lmb->flags &= ~DRCONF_MEM_ASSIGNED;
+1 -1
arch/powerpc/platforms/pseries/hvCall.S
··· 91 91 b 1f; \ 92 92 END_FTR_SECTION(0, 1); \ 93 93 LOAD_REG_ADDR(r12, hcall_tracepoint_refcount) ; \ 94 - std r12,32(r1); \ 94 + ld r12,0(r12); \ 95 95 cmpdi r12,0; \ 96 96 bne- LABEL; \ 97 97 1:
+2
arch/powerpc/platforms/pseries/ibmebus.c
··· 47 47 #include <linux/slab.h> 48 48 #include <linux/stat.h> 49 49 #include <linux/of_platform.h> 50 + #include <linux/platform_device.h> 50 51 #include <asm/ibmebus.h> 51 52 #include <asm/machdep.h> 52 53 ··· 461 460 if (err) { 462 461 printk(KERN_WARNING "%s: device_register returned %i\n", 463 462 __func__, err); 463 + put_device(&ibmebus_bus_device); 464 464 bus_unregister(&ibmebus_bus_type); 465 465 466 466 return err;
-2
arch/powerpc/platforms/pseries/iommu.c
··· 395 395 static DEFINE_SPINLOCK(dma_win_list_lock); 396 396 /* protects initializing window twice for same device */ 397 397 static DEFINE_MUTEX(dma_win_init_mutex); 398 - #define DIRECT64_PROPNAME "linux,direct64-ddr-window-info" 399 - #define DMA64_PROPNAME "linux,dma64-ddr-window-info" 400 398 401 399 static int tce_clearrange_multi_pSeriesLP(unsigned long start_pfn, 402 400 unsigned long num_pfn, const void *arg)
+2 -9
arch/powerpc/platforms/pseries/lpar.c
··· 41 41 #include <asm/kexec.h> 42 42 #include <asm/fadump.h> 43 43 #include <asm/dtl.h> 44 + #include <asm/vphn.h> 44 45 45 46 #include "pseries.h" 46 47 ··· 640 639 641 640 static int __init vcpudispatch_stats_procfs_init(void) 642 641 { 643 - /* 644 - * Avoid smp_processor_id while preemptible. All CPUs should have 645 - * the same value for lppaca_shared_proc. 646 - */ 647 - preempt_disable(); 648 - if (!lppaca_shared_proc(get_lppaca())) { 649 - preempt_enable(); 642 + if (!lppaca_shared_proc()) 650 643 return 0; 651 - } 652 - preempt_enable(); 653 644 654 645 if (!proc_create("powerpc/vcpudispatch_stats", 0600, NULL, 655 646 &vcpudispatch_stats_proc_ops))
+2 -2
arch/powerpc/platforms/pseries/lparcfg.c
··· 206 206 ppp_data.active_system_procs); 207 207 208 208 /* pool related entries are appropriate for shared configs */ 209 - if (lppaca_shared_proc(get_lppaca())) { 209 + if (lppaca_shared_proc()) { 210 210 unsigned long pool_idle_time, pool_procs; 211 211 212 212 seq_printf(m, "pool=%d\n", ppp_data.pool_num); ··· 560 560 partition_potential_processors); 561 561 562 562 seq_printf(m, "shared_processor_mode=%d\n", 563 - lppaca_shared_proc(get_lppaca())); 563 + lppaca_shared_proc()); 564 564 565 565 #ifdef CONFIG_PPC_64S_HASH_MMU 566 566 if (!radix_enabled())
+1 -1
arch/powerpc/platforms/pseries/plpks.c
··· 194 194 return auth; 195 195 } 196 196 197 - /** 197 + /* 198 198 * Label is combination of label attributes + name. 199 199 * Label attributes are used internally by kernel and not exposed to the user. 200 200 */
+2 -2
arch/powerpc/platforms/pseries/pseries.h
··· 75 75 76 76 #ifdef CONFIG_HOTPLUG_CPU 77 77 int dlpar_cpu(struct pseries_hp_errorlog *hp_elog); 78 + void pseries_cpu_hotplug_init(void); 78 79 #else 79 80 static inline int dlpar_cpu(struct pseries_hp_errorlog *hp_elog) 80 81 { 81 82 return -EOPNOTSUPP; 82 83 } 84 + static inline void pseries_cpu_hotplug_init(void) { } 83 85 #endif 84 86 85 87 /* PCI root bridge prepare function override for pseries */ ··· 91 89 extern struct pci_controller_ops pseries_pci_controller_ops; 92 90 int pseries_msi_allocate_domains(struct pci_controller *phb); 93 91 void pseries_msi_free_domains(struct pci_controller *phb); 94 - 95 - unsigned long pseries_memory_block_size(void); 96 92 97 93 extern int CMO_PrPSP; 98 94 extern int CMO_SecPSP;
+10 -1
arch/powerpc/platforms/pseries/setup.c
··· 816 816 /* Discover PIC type and setup ppc_md accordingly */ 817 817 smp_init_pseries(); 818 818 819 + // Setup CPU hotplug callbacks 820 + pseries_cpu_hotplug_init(); 819 821 820 822 if (radix_enabled() && !mmu_has_feature(MMU_FTR_GTSE)) 821 823 if (!firmware_has_feature(FW_FEATURE_RPT_INVALIDATE)) ··· 849 847 if (firmware_has_feature(FW_FEATURE_LPAR)) { 850 848 vpa_init(boot_cpuid); 851 849 852 - if (lppaca_shared_proc(get_lppaca())) { 850 + if (lppaca_shared_proc()) { 853 851 static_branch_enable(&shared_processor); 854 852 pv_spinlocks_init(); 855 853 #ifdef CONFIG_PARAVIRT_TIME_ACCOUNTING ··· 1117 1115 return PCI_PROBE_DEVTREE; 1118 1116 return PCI_PROBE_NORMAL; 1119 1117 } 1118 + 1119 + #ifdef CONFIG_MEMORY_HOTPLUG 1120 + static unsigned long pseries_memory_block_size(void) 1121 + { 1122 + return memory_block_size; 1123 + } 1124 + #endif 1120 1125 1121 1126 struct pci_controller_ops pseries_pci_controller_ops = { 1122 1127 .probe_mode = pSeries_pci_probe_mode,
+1
arch/powerpc/platforms/pseries/vas.c
··· 17 17 #include <asm/hvcall.h> 18 18 #include <asm/plpar_wrappers.h> 19 19 #include <asm/firmware.h> 20 + #include <asm/vphn.h> 20 21 #include <asm/vas.h> 21 22 #include "vas.h" 22 23
+1 -1
arch/powerpc/platforms/pseries/vphn.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 #include <asm/byteorder.h> 3 - #include <asm/lppaca.h> 3 + #include <asm/vphn.h> 4 4 5 5 /* 6 6 * The associativity domain numbers are returned from the hypervisor as a
+11 -22
arch/powerpc/sysdev/cpm2.c
··· 37 37 38 38 #include <asm/io.h> 39 39 #include <asm/irq.h> 40 - #include <asm/mpc8260.h> 41 40 #include <asm/page.h> 42 41 #include <asm/cpm2.h> 43 42 #include <asm/rheap.h> 44 - #include <asm/fs_pd.h> 45 43 46 44 #include <sysdev/fsl_soc.h> 47 45 ··· 117 119 /* This is good enough to get SMCs running..... 118 120 */ 119 121 if (brg < 4) { 120 - bp = cpm2_map_size(im_brgc1, 16); 122 + bp = &cpm2_immr->im_brgc1; 121 123 } else { 122 - bp = cpm2_map_size(im_brgc5, 16); 124 + bp = &cpm2_immr->im_brgc5; 123 125 brg -= 4; 124 126 } 125 127 bp += brg; ··· 129 131 val |= CPM_BRG_DIV16; 130 132 131 133 out_be32(bp, val); 132 - cpm2_unmap(bp); 133 134 } 134 135 EXPORT_SYMBOL(__cpm2_setbrg); 135 136 ··· 137 140 int ret = 0; 138 141 int shift; 139 142 int i, bits = 0; 140 - cpmux_t __iomem *im_cpmux; 141 143 u32 __iomem *reg; 142 144 u32 mask = 7; 143 145 ··· 199 203 {CPM_CLK_SCC4, CPM_CLK8, 7}, 200 204 }; 201 205 202 - im_cpmux = cpm2_map(im_cpmux); 203 - 204 206 switch (target) { 205 207 case CPM_CLK_SCC1: 206 - reg = &im_cpmux->cmx_scr; 208 + reg = &cpm2_immr->im_cpmux.cmx_scr; 207 209 shift = 24; 208 210 break; 209 211 case CPM_CLK_SCC2: 210 - reg = &im_cpmux->cmx_scr; 212 + reg = &cpm2_immr->im_cpmux.cmx_scr; 211 213 shift = 16; 212 214 break; 213 215 case CPM_CLK_SCC3: 214 - reg = &im_cpmux->cmx_scr; 216 + reg = &cpm2_immr->im_cpmux.cmx_scr; 215 217 shift = 8; 216 218 break; 217 219 case CPM_CLK_SCC4: 218 - reg = &im_cpmux->cmx_scr; 220 + reg = &cpm2_immr->im_cpmux.cmx_scr; 219 221 shift = 0; 220 222 break; 221 223 case CPM_CLK_FCC1: 222 - reg = &im_cpmux->cmx_fcr; 224 + reg = &cpm2_immr->im_cpmux.cmx_fcr; 223 225 shift = 24; 224 226 break; 225 227 case CPM_CLK_FCC2: 226 - reg = &im_cpmux->cmx_fcr; 228 + reg = &cpm2_immr->im_cpmux.cmx_fcr; 227 229 shift = 16; 228 230 break; 229 231 case CPM_CLK_FCC3: 230 - reg = &im_cpmux->cmx_fcr; 232 + reg = &cpm2_immr->im_cpmux.cmx_fcr; 231 233 shift = 8; 232 234 break; 233 235 default: ··· 255 261 256 262 out_be32(reg, 
(in_be32(reg) & ~mask) | bits); 257 263 258 - cpm2_unmap(im_cpmux); 259 264 return ret; 260 265 } 261 266 ··· 263 270 int ret = 0; 264 271 int shift; 265 272 int i, bits = 0; 266 - cpmux_t __iomem *im_cpmux; 267 273 u8 __iomem *reg; 268 274 u8 mask = 3; 269 275 ··· 277 285 {CPM_CLK_SMC2, CPM_CLK15, 3}, 278 286 }; 279 287 280 - im_cpmux = cpm2_map(im_cpmux); 281 - 282 288 switch (target) { 283 289 case CPM_CLK_SMC1: 284 - reg = &im_cpmux->cmx_smr; 290 + reg = &cpm2_immr->im_cpmux.cmx_smr; 285 291 mask = 3; 286 292 shift = 4; 287 293 break; 288 294 case CPM_CLK_SMC2: 289 - reg = &im_cpmux->cmx_smr; 295 + reg = &cpm2_immr->im_cpmux.cmx_smr; 290 296 mask = 3; 291 297 shift = 0; 292 298 break; ··· 307 317 308 318 out_8(reg, (in_8(reg) & ~mask) | bits); 309 319 310 - cpm2_unmap(im_cpmux); 311 320 return ret; 312 321 } 313 322
+1 -3
arch/powerpc/sysdev/cpm2_pic.c
··· 33 33 #include <linux/irqdomain.h> 34 34 35 35 #include <asm/immap_cpm2.h> 36 - #include <asm/mpc8260.h> 37 36 #include <asm/io.h> 38 - #include <asm/fs_pd.h> 39 37 40 38 #include "cpm2_pic.h" 41 39 ··· 229 231 { 230 232 int i; 231 233 232 - cpm2_intctl = cpm2_map(im_intctl); 234 + cpm2_intctl = &cpm2_immr->im_intctl; 233 235 234 236 /* Clear the CPM IRQ controller, in case it has any bits set 235 237 * from the bootloader
-2
arch/powerpc/sysdev/cpm_common.c
··· 15 15 */ 16 16 17 17 #include <linux/init.h> 18 - #include <linux/of_device.h> 19 18 #include <linux/spinlock.h> 20 19 #include <linux/export.h> 21 20 #include <linux/of.h> 22 - #include <linux/of_address.h> 23 21 #include <linux/slab.h> 24 22 25 23 #include <asm/udbg.h>
+2 -1
arch/powerpc/sysdev/cpm_gpio.c
··· 9 9 */ 10 10 11 11 #include <linux/module.h> 12 - #include <linux/of_device.h> 12 + #include <linux/of.h> 13 + #include <linux/platform_device.h> 13 14 14 15 #include <asm/cpm.h> 15 16 #ifdef CONFIG_8xx_GPIO
+1 -1
arch/powerpc/sysdev/dcr-low.S
··· 5 5 * Copyright (c) 2004 Eugene Surovegin <ebs@ebshome.net> 6 6 */ 7 7 8 + #include <linux/export.h> 8 9 #include <asm/ppc_asm.h> 9 10 #include <asm/processor.h> 10 11 #include <asm/bug.h> 11 - #include <asm/export.h> 12 12 13 13 #define DCR_ACCESS_PROLOG(table) \ 14 14 cmplwi cr0,r3,1024; \
+6 -6
arch/powerpc/sysdev/ehv_pic.c
··· 42 42 * Linux descriptor level callbacks 43 43 */ 44 44 45 - void ehv_pic_unmask_irq(struct irq_data *d) 45 + static void ehv_pic_unmask_irq(struct irq_data *d) 46 46 { 47 47 unsigned int src = virq_to_hw(d->irq); 48 48 49 49 ev_int_set_mask(src, 0); 50 50 } 51 51 52 - void ehv_pic_mask_irq(struct irq_data *d) 52 + static void ehv_pic_mask_irq(struct irq_data *d) 53 53 { 54 54 unsigned int src = virq_to_hw(d->irq); 55 55 56 56 ev_int_set_mask(src, 1); 57 57 } 58 58 59 - void ehv_pic_end_irq(struct irq_data *d) 59 + static void ehv_pic_end_irq(struct irq_data *d) 60 60 { 61 61 unsigned int src = virq_to_hw(d->irq); 62 62 63 63 ev_int_eoi(src); 64 64 } 65 65 66 - void ehv_pic_direct_end_irq(struct irq_data *d) 66 + static void ehv_pic_direct_end_irq(struct irq_data *d) 67 67 { 68 68 out_be32(mpic_percpu_base_vaddr + MPIC_EOI / 4, 0); 69 69 } 70 70 71 - int ehv_pic_set_affinity(struct irq_data *d, const struct cpumask *dest, 71 + static int ehv_pic_set_affinity(struct irq_data *d, const struct cpumask *dest, 72 72 bool force) 73 73 { 74 74 unsigned int src = virq_to_hw(d->irq); ··· 109 109 } 110 110 } 111 111 112 - int ehv_pic_set_irq_type(struct irq_data *d, unsigned int flow_type) 112 + static int ehv_pic_set_irq_type(struct irq_data *d, unsigned int flow_type) 113 113 { 114 114 unsigned int src = virq_to_hw(d->irq); 115 115 unsigned int vecpri, vold, vnew, prio, cpu_dest;
+2 -2
arch/powerpc/sysdev/fsl_pci.c
··· 519 519 } 520 520 } 521 521 522 - int fsl_add_bridge(struct platform_device *pdev, int is_primary) 522 + static int fsl_add_bridge(struct platform_device *pdev, int is_primary) 523 523 { 524 524 int len; 525 525 struct pci_controller *hose; ··· 767 767 u32 cfg_bar; 768 768 int ret = -ENOMEM; 769 769 770 - pcie = zalloc_maybe_bootmem(sizeof(*pcie), GFP_KERNEL); 770 + pcie = kzalloc(sizeof(*pcie), GFP_KERNEL); 771 771 if (!pcie) 772 772 return ret; 773 773
-1
arch/powerpc/sysdev/fsl_pci.h
··· 112 112 113 113 }; 114 114 115 - extern int fsl_add_bridge(struct platform_device *pdev, int is_primary); 116 115 extern void fsl_pcibios_fixup_bus(struct pci_bus *bus); 117 116 extern void fsl_pcibios_fixup_phb(struct pci_controller *phb); 118 117 extern int mpc83xx_add_bridge(struct device_node *dev);
+2 -2
arch/powerpc/sysdev/fsl_pmc.c
··· 13 13 #include <linux/export.h> 14 14 #include <linux/suspend.h> 15 15 #include <linux/delay.h> 16 - #include <linux/device.h> 16 + #include <linux/mod_devicetable.h> 17 17 #include <linux/of_address.h> 18 - #include <linux/of_platform.h> 18 + #include <linux/platform_device.h> 19 19 20 20 struct pmc_regs { 21 21 __be32 devdisr;
+7 -6
arch/powerpc/sysdev/fsl_rio.c
··· 23 23 #include <linux/types.h> 24 24 #include <linux/dma-mapping.h> 25 25 #include <linux/interrupt.h> 26 - #include <linux/device.h> 26 + #include <linux/of.h> 27 27 #include <linux/of_address.h> 28 28 #include <linux/of_irq.h> 29 - #include <linux/of_platform.h> 29 + #include <linux/platform_device.h> 30 30 #include <linux/delay.h> 31 31 #include <linux/slab.h> 32 32 33 33 #include <linux/io.h> 34 34 #include <linux/uaccess.h> 35 35 #include <asm/machdep.h> 36 + #include <asm/rio.h> 36 37 37 38 #include "fsl_rio.h" 38 39 ··· 304 303 out_be32(&priv->inb_atmu_regs[i].riwar, 0); 305 304 } 306 305 307 - int fsl_map_inb_mem(struct rio_mport *mport, dma_addr_t lstart, 308 - u64 rstart, u64 size, u32 flags) 306 + static int fsl_map_inb_mem(struct rio_mport *mport, dma_addr_t lstart, 307 + u64 rstart, u64 size, u32 flags) 309 308 { 310 309 struct rio_priv *priv = mport->priv; 311 310 u32 base_size; ··· 355 354 return 0; 356 355 } 357 356 358 - void fsl_unmap_inb_mem(struct rio_mport *mport, dma_addr_t lstart) 357 + static void fsl_unmap_inb_mem(struct rio_mport *mport, dma_addr_t lstart) 359 358 { 360 359 u32 win_start_shift, base_start_shift; 361 360 struct rio_priv *priv = mport->priv; ··· 443 442 * master port with system-specific info, and registers the 444 443 * master port with the RapidIO subsystem. 445 444 */ 446 - int fsl_rio_setup(struct platform_device *dev) 445 + static int fsl_rio_setup(struct platform_device *dev) 447 446 { 448 447 struct rio_ops *ops; 449 448 struct rio_mport *port;
+1 -2
arch/powerpc/sysdev/fsl_rmu.c
··· 25 25 #include <linux/interrupt.h> 26 26 #include <linux/of_address.h> 27 27 #include <linux/of_irq.h> 28 - #include <linux/of_platform.h> 29 28 #include <linux/slab.h> 30 29 31 30 #include "fsl_rio.h" ··· 359 360 return IRQ_HANDLED; 360 361 } 361 362 362 - void msg_unit_error_handler(void) 363 + static void msg_unit_error_handler(void) 363 364 { 364 365 365 366 /*XXX: Error recovery is not implemented, we just clear errors */
-1
arch/powerpc/sysdev/fsl_soc.c
··· 19 19 #include <linux/device.h> 20 20 #include <linux/platform_device.h> 21 21 #include <linux/of.h> 22 - #include <linux/of_platform.h> 23 22 #include <linux/phy.h> 24 23 #include <linux/spi/spi.h> 25 24 #include <linux/fsl_devices.h>
+3 -1
arch/powerpc/sysdev/mpc5xxx_clocks.c
··· 25 25 26 26 fwnode_for_each_parent_node(fwnode, parent) { 27 27 ret = fwnode_property_read_u32(parent, "bus-frequency", &bus_freq); 28 - if (!ret) 28 + if (!ret) { 29 + fwnode_handle_put(parent); 29 30 return bus_freq; 31 + } 30 32 } 31 33 32 34 return 0;
+2 -1
arch/powerpc/sysdev/mpic_msgr.c
··· 7 7 */ 8 8 9 9 #include <linux/list.h> 10 + #include <linux/of.h> 10 11 #include <linux/of_address.h> 11 12 #include <linux/of_irq.h> 12 - #include <linux/of_platform.h> 13 + #include <linux/platform_device.h> 13 14 #include <linux/errno.h> 14 15 #include <linux/err.h> 15 16 #include <linux/export.h>
-1
arch/powerpc/sysdev/mpic_timer.c
··· 16 16 #include <linux/slab.h> 17 17 #include <linux/of.h> 18 18 #include <linux/of_address.h> 19 - #include <linux/of_device.h> 20 19 #include <linux/of_irq.h> 21 20 #include <linux/syscore_ops.h> 22 21 #include <sysdev/fsl_soc.h>
+2 -2
arch/powerpc/sysdev/of_rtc.c
··· 5 5 * Copyright 2007 David Gibson <dwg@au1.ibm.com>, IBM Corporation. 6 6 */ 7 7 #include <linux/kernel.h> 8 - #include <linux/of.h> 9 8 #include <linux/init.h> 9 + #include <linux/of.h> 10 10 #include <linux/of_address.h> 11 - #include <linux/of_platform.h> 11 + #include <linux/platform_device.h> 12 12 #include <linux/slab.h> 13 13 14 14 #include <asm/prom.h>
+2 -2
arch/powerpc/sysdev/pmi.c
··· 16 16 #include <linux/completion.h> 17 17 #include <linux/spinlock.h> 18 18 #include <linux/module.h> 19 + #include <linux/mod_devicetable.h> 19 20 #include <linux/workqueue.h> 20 21 #include <linux/of_address.h> 21 - #include <linux/of_device.h> 22 22 #include <linux/of_irq.h> 23 - #include <linux/of_platform.h> 23 + #include <linux/platform_device.h> 24 24 25 25 #include <asm/io.h> 26 26 #include <asm/pmi.h>
-1
arch/powerpc/sysdev/xics/ics-opal.c
··· 111 111 __func__, d->irq, hw_irq, rc); 112 112 return -1; 113 113 } 114 - server = be16_to_cpu(oserver); 115 114 116 115 wanted_server = xics_get_irq_server(d->irq, cpumask, 1); 117 116 if (wanted_server < 0) {
+26
arch/powerpc/tools/gcc-check-fpatchable-function-entry.sh
··· 1 + #!/bin/bash 2 + # SPDX-License-Identifier: GPL-2.0 3 + 4 + set -e 5 + set -o pipefail 6 + 7 + # To debug, uncomment the following line 8 + # set -x 9 + 10 + # Output from -fpatchable-function-entry can only vary on ppc64 elfv2, so this 11 + # should not be invoked for other targets. Therefore we can pass in -m64 and 12 + # -mabi explicitly, to take care of toolchains defaulting to other targets. 13 + 14 + # Test whether the compile option -fpatchable-function-entry exists and 15 + # generates appropriate code 16 + echo "int func() { return 0; }" | \ 17 + $* -m64 -mabi=elfv2 -S -x c -O2 -fpatchable-function-entry=2 - -o - 2> /dev/null | \ 18 + grep -q "__patchable_function_entries" 19 + 20 + # Test whether nops are generated after the local entry point 21 + echo "int x; int func() { return x; }" | \ 22 + $* -m64 -mabi=elfv2 -S -x c -O2 -fpatchable-function-entry=2 - -o - 2> /dev/null | \ 23 + awk 'BEGIN { RS = ";" } /\.localentry.*nop.*\n[[:space:]]*nop/ { print $0 }' | \ 24 + grep -q "func:" 25 + 26 + exit 0
+4 -6
arch/powerpc/xmon/Makefile
··· 10 10 # Disable ftrace for the entire directory 11 11 ccflags-remove-$(CONFIG_FUNCTION_TRACER) += $(CC_FLAGS_FTRACE) 12 12 13 - ifdef CONFIG_CC_IS_CLANG 14 - # clang stores addresses on the stack causing the frame size to blow 15 - # out. See https://github.com/ClangBuiltLinux/linux/issues/252 16 - KBUILD_CFLAGS += -Wframe-larger-than=4096 17 - endif 18 - 19 13 ccflags-$(CONFIG_PPC64) := $(NO_MINIMAL_TOC) 14 + 15 + # Clang stores addresses on the stack causing the frame size to blow 16 + # out. See https://github.com/ClangBuiltLinux/linux/issues/252 17 + ccflags-$(CONFIG_CC_IS_CLANG) += -Wframe-larger-than=4096 20 18 21 19 obj-y += xmon.o nonstdio.o spr_access.o xmon_bpts.o 22 20
+4 -7
arch/powerpc/xmon/xmon.c
··· 58 58 #ifdef CONFIG_PPC64 59 59 #include <asm/hvcall.h> 60 60 #include <asm/paca.h> 61 + #include <asm/lppaca.h> 61 62 #endif 62 63 63 64 #include "nonstdio.h" ··· 3304 3303 { 3305 3304 unsigned long tskv = 0; 3306 3305 struct task_struct *volatile tsk = NULL; 3307 - struct mm_struct *mm; 3306 + struct mm_struct *volatile mm; 3308 3307 pgd_t *pgdp; 3309 3308 p4d_t *p4dp; 3310 3309 pud_t *pudp; ··· 3829 3828 #ifdef CONFIG_PPC_BOOK3E_64 3830 3829 static void dump_tlb_book3e(void) 3831 3830 { 3832 - u32 mmucfg, pidmask, lpidmask; 3831 + u32 mmucfg; 3833 3832 u64 ramask; 3834 - int i, tlb, ntlbs, pidsz, lpidsz, rasz, lrat = 0; 3833 + int i, tlb, ntlbs, pidsz, lpidsz, rasz; 3835 3834 int mmu_version; 3836 3835 static const char *pgsz_names[] = { 3837 3836 " 1K", ··· 3875 3874 pidsz = ((mmucfg >> 6) & 0x1f) + 1; 3876 3875 lpidsz = (mmucfg >> 24) & 0xf; 3877 3876 rasz = (mmucfg >> 16) & 0x7f; 3878 - if ((mmu_version > 1) && (mmucfg & 0x10000)) 3879 - lrat = 1; 3880 3877 printf("Book3E MMU MAV=%d.0,%d TLBs,%d-bit PID,%d-bit LPID,%d-bit RA\n", 3881 3878 mmu_version, ntlbs, pidsz, lpidsz, rasz); 3882 - pidmask = (1ul << pidsz) - 1; 3883 - lpidmask = (1ul << lpidsz) - 1; 3884 3879 ramask = (1ull << rasz) - 1; 3885 3880 3886 3881 for (tlb = 0; tlb < ntlbs; tlb++) {
+1 -7
drivers/cpuidle/cpuidle-pseries.c
··· 414 414 return -ENODEV; 415 415 416 416 if (firmware_has_feature(FW_FEATURE_SPLPAR)) { 417 - /* 418 - * Use local_paca instead of get_lppaca() since 419 - * preemption is not disabled, and it is not required in 420 - * fact, since lppaca_ptr does not need to be the value 421 - * associated to the current CPU, it can be from any CPU. 422 - */ 423 - if (lppaca_shared_proc(local_paca->lppaca_ptr)) { 417 + if (lppaca_shared_proc()) { 424 418 cpuidle_state_table = shared_states; 425 419 max_idle_state = ARRAY_SIZE(shared_states); 426 420 } else {
+1 -1
drivers/macintosh/ams/ams-core.c
··· 176 176 return result; 177 177 } 178 178 179 - int __init ams_init(void) 179 + static int __init ams_init(void) 180 180 { 181 181 struct device_node *np; 182 182
+1
drivers/macintosh/ams/ams.h
··· 6 6 #include <linux/input.h> 7 7 #include <linux/kthread.h> 8 8 #include <linux/mutex.h> 9 + #include <linux/platform_device.h> 9 10 #include <linux/spinlock.h> 10 11 #include <linux/types.h> 11 12
-5
drivers/misc/cxl/native.c
··· 269 269 cxl_p1n_write(afu, CXL_PSL_SPAP_An, spap); 270 270 } 271 271 272 - static inline void detach_spa(struct cxl_afu *afu) 273 - { 274 - cxl_p1n_write(afu, CXL_PSL_SPAP_An, 0); 275 - } 276 - 277 272 void cxl_release_spa(struct cxl_afu *afu) 278 273 { 279 274 if (afu->native->spa) {
+1 -10
drivers/misc/cxl/pci.c
··· 150 150 151 151 static int find_cxl_vsec(struct pci_dev *dev) 152 152 { 153 - int vsec = 0; 154 - u16 val; 155 - 156 - while ((vsec = pci_find_next_ext_capability(dev, vsec, PCI_EXT_CAP_ID_VNDR))) { 157 - pci_read_config_word(dev, vsec + 0x4, &val); 158 - if (val == CXL_PCI_VSEC_ID) 159 - return vsec; 160 - } 161 - return 0; 162 - 153 + return pci_find_vsec_capability(dev, PCI_VENDOR_ID_IBM, CXL_PCI_VSEC_ID); 163 154 } 164 155 165 156 static void dump_cxl_config_space(struct pci_dev *dev)
-2
drivers/net/ethernet/freescale/fs_enet/fs_enet.h
··· 10 10 #include <linux/phy.h> 11 11 #include <linux/dma-mapping.h> 12 12 13 - #include <asm/fs_pd.h> 14 - 15 13 #ifdef CONFIG_CPM1 16 14 #include <asm/cpm1.h> 17 15 #endif
-1
drivers/net/ethernet/freescale/fs_enet/mac-fcc.c
··· 37 37 #include <linux/pgtable.h> 38 38 39 39 #include <asm/immap_cpm2.h> 40 - #include <asm/mpc8260.h> 41 40 #include <asm/cpm2.h> 42 41 43 42 #include <asm/irq.h>
+82 -3
drivers/pci/hotplug/rpaphp_pci.c
··· 19 19 #include "../pci.h" /* for pci_add_new_bus */ 20 20 #include "rpaphp.h" 21 21 22 + /* 23 + * RTAS call get-sensor-state(DR_ENTITY_SENSE) return values as per PAPR: 24 + * -- generic return codes --- 25 + * -1: Hardware Error 26 + * -2: RTAS_BUSY 27 + * -3: Invalid sensor. RTAS Parameter Error. 28 + * -- rtas_get_sensor function specific return codes --- 29 + * -9000: Need DR entity to be powered up and unisolated before RTAS call 30 + * -9001: Need DR entity to be powered up, but not unisolated, before RTAS call 31 + * -9002: DR entity unusable 32 + * 990x: Extended delay - where x is a number in the range of 0-5 33 + */ 34 + #define RTAS_SLOT_UNISOLATED -9000 35 + #define RTAS_SLOT_NOT_UNISOLATED -9001 36 + #define RTAS_SLOT_NOT_USABLE -9002 37 + 38 + static int rtas_get_sensor_errno(int rtas_rc) 39 + { 40 + switch (rtas_rc) { 41 + case 0: 42 + /* Success case */ 43 + return 0; 44 + case RTAS_SLOT_UNISOLATED: 45 + case RTAS_SLOT_NOT_UNISOLATED: 46 + return -EFAULT; 47 + case RTAS_SLOT_NOT_USABLE: 48 + return -ENODEV; 49 + case RTAS_BUSY: 50 + case RTAS_EXTENDED_DELAY_MIN...RTAS_EXTENDED_DELAY_MAX: 51 + return -EBUSY; 52 + default: 53 + return rtas_error_rc(rtas_rc); 54 + } 55 + } 56 + 57 + /* 58 + * get_adapter_status() can be called by the EEH handler during EEH recovery. 59 + * On certain PHB failures, the RTAS call rtas_call(get-sensor-state) returns 60 + * extended busy error (9902) until PHB is recovered by pHyp. The RTAS call 61 + * interface rtas_get_sensor() loops over the RTAS call on extended delay 62 + * return code (9902) until the return value is either success (0) or error 63 + * (-1). This causes the EEH handler to get stuck for ~6 seconds before it 64 + * could notify that the PCI error has been detected and stop any active 65 + * operations. This sometimes causes EEH recovery to fail. 
To avoid this issue, 66 + * invoke rtas_call(get-sensor-state) directly if the respective PE is in EEH 67 + * recovery state and return -EBUSY error based on RTAS return status. This 68 + * will help the EEH handler to notify the driver about the PCI error 69 + * immediately and successfully proceed with EEH recovery steps. 70 + */ 71 + 72 + static int __rpaphp_get_sensor_state(struct slot *slot, int *state) 73 + { 74 + int rc; 75 + int token = rtas_token("get-sensor-state"); 76 + struct pci_dn *pdn; 77 + struct eeh_pe *pe; 78 + struct pci_controller *phb = PCI_DN(slot->dn)->phb; 79 + 80 + if (token == RTAS_UNKNOWN_SERVICE) 81 + return -ENOENT; 82 + 83 + /* 84 + * Fallback to existing method for empty slot or PE isn't in EEH 85 + * recovery. 86 + */ 87 + pdn = list_first_entry_or_null(&PCI_DN(phb->dn)->child_list, 88 + struct pci_dn, list); 89 + if (!pdn) 90 + goto fallback; 91 + 92 + pe = eeh_dev_to_pe(pdn->edev); 93 + if (pe && (pe->state & EEH_PE_RECOVERING)) { 94 + rc = rtas_call(token, 2, 2, state, DR_ENTITY_SENSE, 95 + slot->index); 96 + return rtas_get_sensor_errno(rc); 97 + } 98 + fallback: 99 + return rtas_get_sensor(DR_ENTITY_SENSE, slot->index, state); 100 + } 101 + 22 102 int rpaphp_get_sensor_state(struct slot *slot, int *state) 23 103 { 24 104 int rc; 25 105 int setlevel; 26 106 27 - rc = rtas_get_sensor(DR_ENTITY_SENSE, slot->index, state); 107 + rc = __rpaphp_get_sensor_state(slot, state); 28 108 29 109 if (rc < 0) { 30 110 if (rc == -EFAULT || rc == -EEXIST) { ··· 120 40 dbg("%s: power on slot[%s] failed rc=%d.\n", 121 41 __func__, slot->name, rc); 122 42 } else { 123 - rc = rtas_get_sensor(DR_ENTITY_SENSE, 124 - slot->index, state); 43 + rc = __rpaphp_get_sensor_state(slot, state); 125 44 } 126 45 } else if (rc == -ENODEV) 127 46 info("%s: slot is unusable\n", __func__);
-3
include/linux/hw_breakpoint.h
··· 90 90 extern int dbg_release_bp_slot(struct perf_event *bp); 91 91 extern int reserve_bp_slot(struct perf_event *bp); 92 92 extern void release_bp_slot(struct perf_event *bp); 93 - int arch_reserve_bp_slot(struct perf_event *bp); 94 - void arch_release_bp_slot(struct perf_event *bp); 95 - void arch_unregister_hw_breakpoint(struct perf_event *bp); 96 93 97 94 extern void flush_ptrace_hw_breakpoint(struct task_struct *tsk); 98 95
-28
kernel/events/hw_breakpoint.c
··· 523 523 return 0; 524 524 } 525 525 526 - __weak int arch_reserve_bp_slot(struct perf_event *bp) 527 - { 528 - return 0; 529 - } 530 - 531 - __weak void arch_release_bp_slot(struct perf_event *bp) 532 - { 533 - } 534 - 535 - /* 536 - * Function to perform processor-specific cleanup during unregistration 537 - */ 538 - __weak void arch_unregister_hw_breakpoint(struct perf_event *bp) 539 - { 540 - /* 541 - * A weak stub function here for those archs that don't define 542 - * it inside arch/.../kernel/hw_breakpoint.c 543 - */ 544 - } 545 - 546 526 /* 547 527 * Constraints to check before allowing this new breakpoint counter. 548 528 * ··· 574 594 enum bp_type_idx type; 575 595 int max_pinned_slots; 576 596 int weight; 577 - int ret; 578 597 579 598 /* We couldn't initialize breakpoint constraints on boot */ 580 599 if (!constraints_initialized) ··· 592 613 if (max_pinned_slots > hw_breakpoint_slots_cached(type)) 593 614 return -ENOSPC; 594 615 595 - ret = arch_reserve_bp_slot(bp); 596 - if (ret) 597 - return ret; 598 - 599 616 return toggle_bp_slot(bp, true, type, weight); 600 617 } 601 618 ··· 609 634 enum bp_type_idx type; 610 635 int weight; 611 636 612 - arch_release_bp_slot(bp); 613 - 614 637 type = find_slot_idx(bp_type); 615 638 weight = hw_breakpoint_weight(bp); 616 639 WARN_ON(toggle_bp_slot(bp, false, type, weight)); ··· 618 645 { 619 646 struct mutex *mtx = bp_constraints_lock(bp); 620 647 621 - arch_unregister_hw_breakpoint(bp); 622 648 __release_bp_slot(bp, bp->attr.bp_type); 623 649 bp_constraints_unlock(mtx); 624 650 }
tools/testing/selftests/powerpc/copyloops/asm/export.h tools/testing/selftests/powerpc/copyloops/linux/export.h
+2 -2
tools/testing/selftests/powerpc/harness.c
··· 24 24 /* Setting timeout to -1 disables the alarm */ 25 25 static uint64_t timeout = 120; 26 26 27 - int run_test(int (test_function)(void), char *name) 27 + int run_test(int (test_function)(void), const char *name) 28 28 { 29 29 bool terminated; 30 30 int rc, status; ··· 101 101 timeout = time; 102 102 } 103 103 104 - int test_harness(int (test_function)(void), char *name) 104 + int test_harness(int (test_function)(void), const char *name) 105 105 { 106 106 int rc; 107 107
+8 -8
tools/testing/selftests/powerpc/include/subunit.h
··· 6 6 #ifndef _SELFTESTS_POWERPC_SUBUNIT_H 7 7 #define _SELFTESTS_POWERPC_SUBUNIT_H 8 8 9 - static inline void test_start(char *name) 9 + static inline void test_start(const char *name) 10 10 { 11 11 printf("test: %s\n", name); 12 12 } 13 13 14 - static inline void test_failure_detail(char *name, char *detail) 14 + static inline void test_failure_detail(const char *name, const char *detail) 15 15 { 16 16 printf("failure: %s [%s]\n", name, detail); 17 17 } 18 18 19 - static inline void test_failure(char *name) 19 + static inline void test_failure(const char *name) 20 20 { 21 21 printf("failure: %s\n", name); 22 22 } 23 23 24 - static inline void test_error(char *name) 24 + static inline void test_error(const char *name) 25 25 { 26 26 printf("error: %s\n", name); 27 27 } 28 28 29 - static inline void test_skip(char *name) 29 + static inline void test_skip(const char *name) 30 30 { 31 31 printf("skip: %s\n", name); 32 32 } 33 33 34 - static inline void test_success(char *name) 34 + static inline void test_success(const char *name) 35 35 { 36 36 printf("success: %s\n", name); 37 37 } 38 38 39 - static inline void test_finish(char *name, int status) 39 + static inline void test_finish(const char *name, int status) 40 40 { 41 41 if (status) 42 42 test_failure(name); ··· 44 44 test_success(name); 45 45 } 46 46 47 - static inline void test_set_git_version(char *value) 47 + static inline void test_set_git_version(const char *value) 48 48 { 49 49 printf("tags: git_version:%s\n", value); 50 50 }
+1 -1
tools/testing/selftests/powerpc/include/utils.h
··· 32 32 typedef uint8_t u8; 33 33 34 34 void test_harness_set_timeout(uint64_t time); 35 - int test_harness(int (test_function)(void), char *name); 35 + int test_harness(int (test_function)(void), const char *name); 36 36 37 37 int read_auxv(char *buf, ssize_t buf_size); 38 38 void *find_auxv_entry(int type, char *auxv);
+10 -9
tools/testing/selftests/powerpc/mm/.gitignore
··· 1 1 # SPDX-License-Identifier: GPL-2.0-only 2 - hugetlb_vs_thp_test 3 - subpage_prot 4 - tempfile 5 - prot_sao 6 - segv_errors 7 - wild_bctr 8 - large_vm_fork_separation 9 2 bad_accesses 10 - tlbie_test 3 + exec_prot 4 + hugetlb_vs_thp_test 5 + large_vm_fork_separation 6 + large_vm_gpr_corruption 11 7 pkey_exec_prot 12 8 pkey_siginfo 9 + prot_sao 10 + segv_errors 13 11 stack_expansion_ldst 14 12 stack_expansion_signal 15 - large_vm_gpr_corruption 13 + subpage_prot 14 + tempfile 15 + tlbie_test 16 + wild_bctr
+1
tools/testing/selftests/powerpc/ptrace/Makefile
··· 36 36 CFLAGS += $(KHDR_INCLUDES) -fno-pie 37 37 38 38 $(OUTPUT)/ptrace-gpr: ptrace-gpr.S 39 + $(OUTPUT)/ptrace-perf-hwbreak: ptrace-perf-asm.S 39 40 $(OUTPUT)/ptrace-pkey $(OUTPUT)/core-pkey: LDLIBS += -pthread 40 41 41 42 $(TEST_GEN_PROGS): ../harness.c ../utils.c ../lib/reg.S
+2 -2
tools/testing/selftests/powerpc/ptrace/child.h
··· 48 48 } \ 49 49 } while (0) 50 50 51 - #define PARENT_SKIP_IF_UNSUPPORTED(x, sync) \ 51 + #define PARENT_SKIP_IF_UNSUPPORTED(x, sync, msg) \ 52 52 do { \ 53 53 if ((x) == -1 && (errno == ENODEV || errno == EINVAL)) { \ 54 54 (sync)->parent_gave_up = true; \ 55 55 prod_child(sync); \ 56 - SKIP_IF(1); \ 56 + SKIP_IF_MSG(1, msg); \ 57 57 } \ 58 58 } while (0) 59 59
+1 -1
tools/testing/selftests/powerpc/ptrace/core-pkey.c
··· 266 266 * to the child. 267 267 */ 268 268 ret = ptrace_read_regs(pid, NT_PPC_PKEY, regs, 3); 269 - PARENT_SKIP_IF_UNSUPPORTED(ret, &info->child_sync); 269 + PARENT_SKIP_IF_UNSUPPORTED(ret, &info->child_sync, "PKEYs not supported"); 270 270 PARENT_FAIL_IF(ret, &info->child_sync); 271 271 272 272 info->amr = regs[0];
+1 -1
tools/testing/selftests/powerpc/ptrace/perf-hwbreak.c
··· 884 884 { 885 885 srand ( time(NULL) ); 886 886 887 - SKIP_IF(!perf_breakpoint_supported()); 887 + SKIP_IF_MSG(!perf_breakpoint_supported(), "Perf breakpoints not supported"); 888 888 889 889 return runtest(); 890 890 }
+13 -13
tools/testing/selftests/powerpc/ptrace/ptrace-hwbreak.c
··· 64 64 65 65 static void write_var(int len) 66 66 { 67 - __u8 *pcvar; 68 - __u16 *psvar; 69 - __u32 *pivar; 70 - __u64 *plvar; 67 + volatile __u8 *pcvar; 68 + volatile __u16 *psvar; 69 + volatile __u32 *pivar; 70 + volatile __u64 *plvar; 71 71 72 72 switch (len) { 73 73 case 1: 74 - pcvar = (__u8 *)&glvar; 74 + pcvar = (volatile __u8 *)&glvar; 75 75 *pcvar = 0xff; 76 76 break; 77 77 case 2: 78 - psvar = (__u16 *)&glvar; 78 + psvar = (volatile __u16 *)&glvar; 79 79 *psvar = 0xffff; 80 80 break; 81 81 case 4: 82 - pivar = (__u32 *)&glvar; 82 + pivar = (volatile __u32 *)&glvar; 83 83 *pivar = 0xffffffff; 84 84 break; 85 85 case 8: 86 - plvar = (__u64 *)&glvar; 86 + plvar = (volatile __u64 *)&glvar; 87 87 *plvar = 0xffffffffffffffffLL; 88 88 break; 89 89 } ··· 98 98 99 99 switch (len) { 100 100 case 1: 101 - cvar = (__u8)glvar; 101 + cvar = (volatile __u8)glvar; 102 102 break; 103 103 case 2: 104 - svar = (__u16)glvar; 104 + svar = (volatile __u16)glvar; 105 105 break; 106 106 case 4: 107 - ivar = (__u32)glvar; 107 + ivar = (volatile __u32)glvar; 108 108 break; 109 109 case 8: 110 - lvar = (__u64)glvar; 110 + lvar = (volatile __u64)glvar; 111 111 break; 112 112 } 113 113 } ··· 603 603 wait(NULL); 604 604 605 605 get_dbginfo(child_pid, &dbginfo); 606 - SKIP_IF(dbginfo.num_data_bps == 0); 606 + SKIP_IF_MSG(dbginfo.num_data_bps == 0, "No data breakpoints present"); 607 607 608 608 dawr = dawr_present(&dbginfo); 609 609 run_tests(child_pid, &dbginfo, dawr);
+33
tools/testing/selftests/powerpc/ptrace/ptrace-perf-asm.S
··· 1 + /* SPDX-License-Identifier: GPL-2.0-or-later */ 2 + 3 + #include <ppc-asm.h> 4 + 5 + .global same_watch_addr_load 6 + .global same_watch_addr_trap 7 + 8 + FUNC_START(same_watch_addr_child) 9 + nop 10 + same_watch_addr_load: 11 + ld 0,0(3) 12 + nop 13 + same_watch_addr_trap: 14 + trap 15 + blr 16 + FUNC_END(same_watch_addr_child) 17 + 18 + 19 + .global perf_then_ptrace_load1 20 + .global perf_then_ptrace_load2 21 + .global perf_then_ptrace_trap 22 + 23 + FUNC_START(perf_then_ptrace_child) 24 + nop 25 + perf_then_ptrace_load1: 26 + ld 0,0(3) 27 + perf_then_ptrace_load2: 28 + ld 0,0(4) 29 + nop 30 + perf_then_ptrace_trap: 31 + trap 32 + blr 33 + FUNC_END(perf_then_ptrace_child)
+384 -598
tools/testing/selftests/powerpc/ptrace/ptrace-perf-hwbreak.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0+ 2 - #include <stdio.h> 3 - #include <string.h> 4 - #include <signal.h> 5 - #include <stdlib.h> 6 - #include <unistd.h> 7 - #include <errno.h> 8 - #include <linux/hw_breakpoint.h> 9 - #include <linux/perf_event.h> 2 + 10 3 #include <asm/unistd.h> 11 - #include <sys/ptrace.h> 4 + #include <linux/hw_breakpoint.h> 5 + #include <linux/ptrace.h> 6 + #include <memory.h> 7 + #include <stdlib.h> 12 8 #include <sys/wait.h> 13 - #include "ptrace.h" 14 9 15 - char data[16]; 10 + #include "utils.h" 16 11 17 - /* Overlapping address range */ 18 - volatile __u64 *ptrace_data1 = (__u64 *)&data[0]; 19 - volatile __u64 *perf_data1 = (__u64 *)&data[4]; 12 + /* 13 + * Child subroutine that performs a load on the address, then traps 14 + */ 15 + void same_watch_addr_child(unsigned long *addr); 20 16 21 - /* Non-overlapping address range */ 22 - volatile __u64 *ptrace_data2 = (__u64 *)&data[0]; 23 - volatile __u64 *perf_data2 = (__u64 *)&data[8]; 17 + /* Address of the ld instruction in same_watch_addr_child() */ 18 + extern char same_watch_addr_load[]; 24 19 25 - static unsigned long pid_max_addr(void) 20 + /* Address of the end trap instruction in same_watch_addr_child() */ 21 + extern char same_watch_addr_trap[]; 22 + 23 + /* 24 + * Child subroutine that performs a load on the first address, then a load on 25 + * the second address (with no instructions separating this from the first 26 + * load), then traps. 
27 + */ 28 + void perf_then_ptrace_child(unsigned long *first_addr, unsigned long *second_addr); 29 + 30 + /* Address of the first ld instruction in perf_then_ptrace_child() */ 31 + extern char perf_then_ptrace_load1[]; 32 + 33 + /* Address of the second ld instruction in perf_then_ptrace_child() */ 34 + extern char perf_then_ptrace_load2[]; 35 + 36 + /* Address of the end trap instruction in perf_then_ptrace_child() */ 37 + extern char perf_then_ptrace_trap[]; 38 + 39 + static inline long sys_ptrace(long request, pid_t pid, unsigned long addr, unsigned long data) 26 40 { 27 - FILE *fp; 28 - char *line, *c; 29 - char addr[100]; 30 - size_t len = 0; 31 - 32 - fp = fopen("/proc/kallsyms", "r"); 33 - if (!fp) { 34 - printf("Failed to read /proc/kallsyms. Exiting..\n"); 35 - exit(EXIT_FAILURE); 36 - } 37 - 38 - while (getline(&line, &len, fp) != -1) { 39 - if (!strstr(line, "pid_max") || strstr(line, "pid_max_max") || 40 - strstr(line, "pid_max_min")) 41 - continue; 42 - 43 - strncpy(addr, line, len < 100 ? len : 100); 44 - c = strchr(addr, ' '); 45 - *c = '\0'; 46 - return strtoul(addr, &c, 16); 47 - } 48 - fclose(fp); 49 - printf("Could not find pix_max. 
Exiting..\n"); 50 - exit(EXIT_FAILURE); 51 - return -1; 41 + return syscall(__NR_ptrace, request, pid, addr, data); 52 42 } 53 43 54 - static void perf_user_event_attr_set(struct perf_event_attr *attr, __u64 addr, __u64 len) 44 + static long ptrace_traceme(void) 55 45 { 56 - memset(attr, 0, sizeof(struct perf_event_attr)); 57 - attr->type = PERF_TYPE_BREAKPOINT; 58 - attr->size = sizeof(struct perf_event_attr); 59 - attr->bp_type = HW_BREAKPOINT_R; 60 - attr->bp_addr = addr; 61 - attr->bp_len = len; 62 - attr->exclude_kernel = 1; 63 - attr->exclude_hv = 1; 46 + return sys_ptrace(PTRACE_TRACEME, 0, 0, 0); 64 47 } 65 48 66 - static void perf_kernel_event_attr_set(struct perf_event_attr *attr) 49 + static long ptrace_getregs(pid_t pid, struct pt_regs *result) 67 50 { 68 - memset(attr, 0, sizeof(struct perf_event_attr)); 69 - attr->type = PERF_TYPE_BREAKPOINT; 70 - attr->size = sizeof(struct perf_event_attr); 71 - attr->bp_type = HW_BREAKPOINT_R; 72 - attr->bp_addr = pid_max_addr(); 73 - attr->bp_len = sizeof(unsigned long); 74 - attr->exclude_user = 1; 75 - attr->exclude_hv = 1; 51 + return sys_ptrace(PTRACE_GETREGS, pid, 0, (unsigned long)result); 76 52 } 77 53 78 - static int perf_cpu_event_open(int cpu, __u64 addr, __u64 len) 54 + static long ptrace_setregs(pid_t pid, struct pt_regs *result) 79 55 { 80 - struct perf_event_attr attr; 81 - 82 - perf_user_event_attr_set(&attr, addr, len); 83 - return syscall(__NR_perf_event_open, &attr, -1, cpu, -1, 0); 56 + return sys_ptrace(PTRACE_SETREGS, pid, 0, (unsigned long)result); 84 57 } 85 58 86 - static int perf_thread_event_open(pid_t child_pid, __u64 addr, __u64 len) 59 + static long ptrace_cont(pid_t pid, long signal) 87 60 { 88 - struct perf_event_attr attr; 89 - 90 - perf_user_event_attr_set(&attr, addr, len); 91 - return syscall(__NR_perf_event_open, &attr, child_pid, -1, -1, 0); 61 + return sys_ptrace(PTRACE_CONT, pid, 0, signal); 92 62 } 93 63 94 - static int perf_thread_cpu_event_open(pid_t child_pid, int cpu, 
__u64 addr, __u64 len) 64 + static long ptrace_singlestep(pid_t pid, long signal) 95 65 { 96 - struct perf_event_attr attr; 97 - 98 - perf_user_event_attr_set(&attr, addr, len); 99 - return syscall(__NR_perf_event_open, &attr, child_pid, cpu, -1, 0); 66 + return sys_ptrace(PTRACE_SINGLESTEP, pid, 0, signal); 100 67 } 101 68 102 - static int perf_thread_kernel_event_open(pid_t child_pid) 69 + static long ppc_ptrace_gethwdbginfo(pid_t pid, struct ppc_debug_info *dbginfo) 103 70 { 104 - struct perf_event_attr attr; 105 - 106 - perf_kernel_event_attr_set(&attr); 107 - return syscall(__NR_perf_event_open, &attr, child_pid, -1, -1, 0); 71 + return sys_ptrace(PPC_PTRACE_GETHWDBGINFO, pid, 0, (unsigned long)dbginfo); 108 72 } 109 73 110 - static int perf_cpu_kernel_event_open(int cpu) 74 + static long ppc_ptrace_sethwdbg(pid_t pid, struct ppc_hw_breakpoint *bp_info) 111 75 { 112 - struct perf_event_attr attr; 113 - 114 - perf_kernel_event_attr_set(&attr); 115 - return syscall(__NR_perf_event_open, &attr, -1, cpu, -1, 0); 76 + return sys_ptrace(PPC_PTRACE_SETHWDEBUG, pid, 0, (unsigned long)bp_info); 116 77 } 117 78 118 - static int child(void) 79 + static long ppc_ptrace_delhwdbg(pid_t pid, int bp_id) 119 80 { 120 - int ret; 81 + return sys_ptrace(PPC_PTRACE_DELHWDEBUG, pid, 0L, bp_id); 82 + } 121 83 122 - ret = ptrace(PTRACE_TRACEME, 0, NULL, 0); 123 - if (ret) { 124 - printf("Error: PTRACE_TRACEME failed\n"); 125 - return 0; 126 - } 127 - kill(getpid(), SIGUSR1); /* --> parent (SIGUSR1) */ 84 + static long ptrace_getreg_pc(pid_t pid, void **pc) 85 + { 86 + struct pt_regs regs; 87 + long err; 88 + 89 + err = ptrace_getregs(pid, &regs); 90 + if (err) 91 + return err; 92 + 93 + *pc = (void *)regs.nip; 128 94 129 95 return 0; 130 96 } 131 97 132 - static void ptrace_ppc_hw_breakpoint(struct ppc_hw_breakpoint *info, int type, 133 - __u64 addr, int len) 98 + static long ptrace_setreg_pc(pid_t pid, void *pc) 99 + { 100 + struct pt_regs regs; 101 + long err; 102 + 103 + err = 
ptrace_getregs(pid, &regs); 104 + if (err) 105 + return err; 106 + 107 + regs.nip = (unsigned long)pc; 108 + 109 + err = ptrace_setregs(pid, &regs); 110 + if (err) 111 + return err; 112 + 113 + return 0; 114 + } 115 + 116 + static int perf_event_open(struct perf_event_attr *attr, pid_t pid, int cpu, 117 + int group_fd, unsigned long flags) 118 + { 119 + return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags); 120 + } 121 + 122 + static void perf_user_event_attr_set(struct perf_event_attr *attr, void *addr, u64 len) 123 + { 124 + memset(attr, 0, sizeof(struct perf_event_attr)); 125 + 126 + attr->type = PERF_TYPE_BREAKPOINT; 127 + attr->size = sizeof(struct perf_event_attr); 128 + attr->bp_type = HW_BREAKPOINT_R; 129 + attr->bp_addr = (u64)addr; 130 + attr->bp_len = len; 131 + attr->exclude_kernel = 1; 132 + attr->exclude_hv = 1; 133 + } 134 + 135 + static int perf_watchpoint_open(pid_t child_pid, void *addr, u64 len) 136 + { 137 + struct perf_event_attr attr; 138 + 139 + perf_user_event_attr_set(&attr, addr, len); 140 + return perf_event_open(&attr, child_pid, -1, -1, 0); 141 + } 142 + 143 + static int perf_read_counter(int perf_fd, u64 *count) 144 + { 145 + /* 146 + * A perf counter is retrieved by the read() syscall. 
It contains 147 + * the current count as 8 bytes that are interpreted as a u64 148 + */ 149 + ssize_t len = read(perf_fd, count, sizeof(*count)); 150 + 151 + if (len != sizeof(*count)) 152 + return -1; 153 + 154 + return 0; 155 + } 156 + 157 + static void ppc_ptrace_init_breakpoint(struct ppc_hw_breakpoint *info, 158 + int type, void *addr, int len) 134 159 { 135 160 info->version = 1; 136 161 info->trigger_type = type; 137 162 info->condition_mode = PPC_BREAKPOINT_CONDITION_NONE; 138 - info->addr = addr; 139 - info->addr2 = addr + len; 163 + info->addr = (u64)addr; 164 + info->addr2 = (u64)addr + len; 140 165 info->condition_value = 0; 141 166 if (!len) 142 167 info->addr_mode = PPC_BREAKPOINT_MODE_EXACT; ··· 169 144 info->addr_mode = PPC_BREAKPOINT_MODE_RANGE_INCLUSIVE; 170 145 } 171 146 172 - static int ptrace_open(pid_t child_pid, __u64 wp_addr, int len) 147 + /* 148 + * Checks if we can place at least 2 watchpoints on the child process 149 + */ 150 + static int check_watchpoints(pid_t pid) 173 151 { 174 - struct ppc_hw_breakpoint info; 175 - 176 - ptrace_ppc_hw_breakpoint(&info, PPC_BREAKPOINT_TRIGGER_RW, wp_addr, len); 177 - return ptrace(PPC_PTRACE_SETHWDEBUG, child_pid, 0, &info); 178 - } 179 - 180 - static int test1(pid_t child_pid) 181 - { 182 - int perf_fd; 183 - int ptrace_fd; 184 - int ret = 0; 185 - 186 - /* Test: 187 - * if (new per thread event by ptrace) 188 - * if (existing cpu event by perf) 189 - * if (addr range overlaps) 190 - * fail; 191 - */ 192 - 193 - perf_fd = perf_cpu_event_open(0, (__u64)perf_data1, sizeof(*perf_data1)); 194 - if (perf_fd < 0) 195 - return -1; 196 - 197 - ptrace_fd = ptrace_open(child_pid, (__u64)ptrace_data1, sizeof(*ptrace_data1)); 198 - if (ptrace_fd > 0 || errno != ENOSPC) 199 - ret = -1; 200 - 201 - close(perf_fd); 202 - return ret; 203 - } 204 - 205 - static int test2(pid_t child_pid) 206 - { 207 - int perf_fd; 208 - int ptrace_fd; 209 - int ret = 0; 210 - 211 - /* Test: 212 - * if (new per thread event by ptrace) 
213 - * if (existing cpu event by perf) 214 - * if (addr range does not overlaps) 215 - * allow; 216 - */ 217 - 218 - perf_fd = perf_cpu_event_open(0, (__u64)perf_data2, sizeof(*perf_data2)); 219 - if (perf_fd < 0) 220 - return -1; 221 - 222 - ptrace_fd = ptrace_open(child_pid, (__u64)ptrace_data2, sizeof(*ptrace_data2)); 223 - if (ptrace_fd < 0) { 224 - ret = -1; 225 - goto perf_close; 226 - } 227 - ptrace(PPC_PTRACE_DELHWDEBUG, child_pid, 0, ptrace_fd); 228 - 229 - perf_close: 230 - close(perf_fd); 231 - return ret; 232 - } 233 - 234 - static int test3(pid_t child_pid) 235 - { 236 - int perf_fd; 237 - int ptrace_fd; 238 - int ret = 0; 239 - 240 - /* Test: 241 - * if (new per thread event by ptrace) 242 - * if (existing thread event by perf on the same thread) 243 - * if (addr range overlaps) 244 - * fail; 245 - */ 246 - perf_fd = perf_thread_event_open(child_pid, (__u64)perf_data1, 247 - sizeof(*perf_data1)); 248 - if (perf_fd < 0) 249 - return -1; 250 - 251 - ptrace_fd = ptrace_open(child_pid, (__u64)ptrace_data1, sizeof(*ptrace_data1)); 252 - if (ptrace_fd > 0 || errno != ENOSPC) 253 - ret = -1; 254 - 255 - close(perf_fd); 256 - return ret; 257 - } 258 - 259 - static int test4(pid_t child_pid) 260 - { 261 - int perf_fd; 262 - int ptrace_fd; 263 - int ret = 0; 264 - 265 - /* Test: 266 - * if (new per thread event by ptrace) 267 - * if (existing thread event by perf on the same thread) 268 - * if (addr range does not overlaps) 269 - * fail; 270 - */ 271 - perf_fd = perf_thread_event_open(child_pid, (__u64)perf_data2, 272 - sizeof(*perf_data2)); 273 - if (perf_fd < 0) 274 - return -1; 275 - 276 - ptrace_fd = ptrace_open(child_pid, (__u64)ptrace_data2, sizeof(*ptrace_data2)); 277 - if (ptrace_fd < 0) { 278 - ret = -1; 279 - goto perf_close; 280 - } 281 - ptrace(PPC_PTRACE_DELHWDEBUG, child_pid, 0, ptrace_fd); 282 - 283 - perf_close: 284 - close(perf_fd); 285 - return ret; 286 - } 287 - 288 - static int test5(pid_t child_pid) 289 - { 290 - int perf_fd; 291 - int 
ptrace_fd; 292 - int cpid; 293 - int ret = 0; 294 - 295 - /* Test: 296 - * if (new per thread event by ptrace) 297 - * if (existing thread event by perf on the different thread) 298 - * allow; 299 - */ 300 - cpid = fork(); 301 - if (!cpid) { 302 - /* Temporary Child */ 303 - pause(); 304 - exit(EXIT_SUCCESS); 305 - } 306 - 307 - perf_fd = perf_thread_event_open(cpid, (__u64)perf_data1, sizeof(*perf_data1)); 308 - if (perf_fd < 0) { 309 - ret = -1; 310 - goto kill_child; 311 - } 312 - 313 - ptrace_fd = ptrace_open(child_pid, (__u64)ptrace_data1, sizeof(*ptrace_data1)); 314 - if (ptrace_fd < 0) { 315 - ret = -1; 316 - goto perf_close; 317 - } 318 - 319 - ptrace(PPC_PTRACE_DELHWDEBUG, child_pid, 0, ptrace_fd); 320 - perf_close: 321 - close(perf_fd); 322 - kill_child: 323 - kill(cpid, SIGINT); 324 - return ret; 325 - } 326 - 327 - static int test6(pid_t child_pid) 328 - { 329 - int perf_fd; 330 - int ptrace_fd; 331 - int ret = 0; 332 - 333 - /* Test: 334 - * if (new per thread kernel event by perf) 335 - * if (existing thread event by ptrace on the same thread) 336 - * allow; 337 - * -- OR -- 338 - * if (new per cpu kernel event by perf) 339 - * if (existing thread event by ptrace) 340 - * allow; 341 - */ 342 - ptrace_fd = ptrace_open(child_pid, (__u64)ptrace_data1, sizeof(*ptrace_data1)); 343 - if (ptrace_fd < 0) 344 - return -1; 345 - 346 - perf_fd = perf_thread_kernel_event_open(child_pid); 347 - if (perf_fd < 0) { 348 - ret = -1; 349 - goto ptrace_close; 350 - } 351 - close(perf_fd); 352 - 353 - perf_fd = perf_cpu_kernel_event_open(0); 354 - if (perf_fd < 0) { 355 - ret = -1; 356 - goto ptrace_close; 357 - } 358 - close(perf_fd); 359 - 360 - ptrace_close: 361 - ptrace(PPC_PTRACE_DELHWDEBUG, child_pid, 0, ptrace_fd); 362 - return ret; 363 - } 364 - 365 - static int test7(pid_t child_pid) 366 - { 367 - int perf_fd; 368 - int ptrace_fd; 369 - int ret = 0; 370 - 371 - /* Test: 372 - * if (new per thread event by perf) 373 - * if (existing thread event by ptrace on the 
same thread) 374 - * if (addr range overlaps) 375 - * fail; 376 - */ 377 - ptrace_fd = ptrace_open(child_pid, (__u64)ptrace_data1, sizeof(*ptrace_data1)); 378 - if (ptrace_fd < 0) 379 - return -1; 380 - 381 - perf_fd = perf_thread_event_open(child_pid, (__u64)perf_data1, 382 - sizeof(*perf_data1)); 383 - if (perf_fd > 0 || errno != ENOSPC) 384 - ret = -1; 385 - 386 - ptrace(PPC_PTRACE_DELHWDEBUG, child_pid, 0, ptrace_fd); 387 - return ret; 388 - } 389 - 390 - static int test8(pid_t child_pid) 391 - { 392 - int perf_fd; 393 - int ptrace_fd; 394 - int ret = 0; 395 - 396 - /* Test: 397 - * if (new per thread event by perf) 398 - * if (existing thread event by ptrace on the same thread) 399 - * if (addr range does not overlaps) 400 - * allow; 401 - */ 402 - ptrace_fd = ptrace_open(child_pid, (__u64)ptrace_data2, sizeof(*ptrace_data2)); 403 - if (ptrace_fd < 0) 404 - return -1; 405 - 406 - perf_fd = perf_thread_event_open(child_pid, (__u64)perf_data2, 407 - sizeof(*perf_data2)); 408 - if (perf_fd < 0) { 409 - ret = -1; 410 - goto ptrace_close; 411 - } 412 - close(perf_fd); 413 - 414 - ptrace_close: 415 - ptrace(PPC_PTRACE_DELHWDEBUG, child_pid, 0, ptrace_fd); 416 - return ret; 417 - } 418 - 419 - static int test9(pid_t child_pid) 420 - { 421 - int perf_fd; 422 - int ptrace_fd; 423 - int cpid; 424 - int ret = 0; 425 - 426 - /* Test: 427 - * if (new per thread event by perf) 428 - * if (existing thread event by ptrace on the other thread) 429 - * allow; 430 - */ 431 - ptrace_fd = ptrace_open(child_pid, (__u64)ptrace_data1, sizeof(*ptrace_data1)); 432 - if (ptrace_fd < 0) 433 - return -1; 434 - 435 - cpid = fork(); 436 - if (!cpid) { 437 - /* Temporary Child */ 438 - pause(); 439 - exit(EXIT_SUCCESS); 440 - } 441 - 442 - perf_fd = perf_thread_event_open(cpid, (__u64)perf_data1, sizeof(*perf_data1)); 443 - if (perf_fd < 0) { 444 - ret = -1; 445 - goto kill_child; 446 - } 447 - close(perf_fd); 448 - 449 - kill_child: 450 - kill(cpid, SIGINT); 451 - 
ptrace(PPC_PTRACE_DELHWDEBUG, child_pid, 0, ptrace_fd); 452 - return ret; 453 - } 454 - 455 - static int test10(pid_t child_pid) 456 - { 457 - int perf_fd; 458 - int ptrace_fd; 459 - int ret = 0; 460 - 461 - /* Test: 462 - * if (new per cpu event by perf) 463 - * if (existing thread event by ptrace on the same thread) 464 - * if (addr range overlaps) 465 - * fail; 466 - */ 467 - ptrace_fd = ptrace_open(child_pid, (__u64)ptrace_data1, sizeof(*ptrace_data1)); 468 - if (ptrace_fd < 0) 469 - return -1; 470 - 471 - perf_fd = perf_cpu_event_open(0, (__u64)perf_data1, sizeof(*perf_data1)); 472 - if (perf_fd > 0 || errno != ENOSPC) 473 - ret = -1; 474 - 475 - ptrace(PPC_PTRACE_DELHWDEBUG, child_pid, 0, ptrace_fd); 476 - return ret; 477 - } 478 - 479 - static int test11(pid_t child_pid) 480 - { 481 - int perf_fd; 482 - int ptrace_fd; 483 - int ret = 0; 484 - 485 - /* Test: 486 - * if (new per cpu event by perf) 487 - * if (existing thread event by ptrace on the same thread) 488 - * if (addr range does not overlap) 489 - * allow; 490 - */ 491 - ptrace_fd = ptrace_open(child_pid, (__u64)ptrace_data2, sizeof(*ptrace_data2)); 492 - if (ptrace_fd < 0) 493 - return -1; 494 - 495 - perf_fd = perf_cpu_event_open(0, (__u64)perf_data2, sizeof(*perf_data2)); 496 - if (perf_fd < 0) { 497 - ret = -1; 498 - goto ptrace_close; 499 - } 500 - close(perf_fd); 501 - 502 - ptrace_close: 503 - ptrace(PPC_PTRACE_DELHWDEBUG, child_pid, 0, ptrace_fd); 504 - return ret; 505 - } 506 - 507 - static int test12(pid_t child_pid) 508 - { 509 - int perf_fd; 510 - int ptrace_fd; 511 - int ret = 0; 512 - 513 - /* Test: 514 - * if (new per thread and per cpu event by perf) 515 - * if (existing thread event by ptrace on the same thread) 516 - * if (addr range overlaps) 517 - * fail; 518 - */ 519 - ptrace_fd = ptrace_open(child_pid, (__u64)ptrace_data1, sizeof(*ptrace_data1)); 520 - if (ptrace_fd < 0) 521 - return -1; 522 - 523 - perf_fd = perf_thread_cpu_event_open(child_pid, 0, (__u64)perf_data1, 
sizeof(*perf_data1)); 524 - if (perf_fd > 0 || errno != ENOSPC) 525 - ret = -1; 526 - 527 - ptrace(PPC_PTRACE_DELHWDEBUG, child_pid, 0, ptrace_fd); 528 - return ret; 529 - } 530 - 531 - static int test13(pid_t child_pid) 532 - { 533 - int perf_fd; 534 - int ptrace_fd; 535 - int ret = 0; 536 - 537 - /* Test: 538 - * if (new per thread and per cpu event by perf) 539 - * if (existing thread event by ptrace on the same thread) 540 - * if (addr range does not overlap) 541 - * allow; 542 - */ 543 - ptrace_fd = ptrace_open(child_pid, (__u64)ptrace_data2, sizeof(*ptrace_data2)); 544 - if (ptrace_fd < 0) 545 - return -1; 546 - 547 - perf_fd = perf_thread_cpu_event_open(child_pid, 0, (__u64)perf_data2, sizeof(*perf_data2)); 548 - if (perf_fd < 0) { 549 - ret = -1; 550 - goto ptrace_close; 551 - } 552 - close(perf_fd); 553 - 554 - ptrace_close: 555 - ptrace(PPC_PTRACE_DELHWDEBUG, child_pid, 0, ptrace_fd); 556 - return ret; 557 - } 558 - 559 - static int test14(pid_t child_pid) 560 - { 561 - int perf_fd; 562 - int ptrace_fd; 563 - int cpid; 564 - int ret = 0; 565 - 566 - /* Test: 567 - * if (new per thread and per cpu event by perf) 568 - * if (existing thread event by ptrace on the other thread) 569 - * allow; 570 - */ 571 - ptrace_fd = ptrace_open(child_pid, (__u64)ptrace_data1, sizeof(*ptrace_data1)); 572 - if (ptrace_fd < 0) 573 - return -1; 574 - 575 - cpid = fork(); 576 - if (!cpid) { 577 - /* Temporary Child */ 578 - pause(); 579 - exit(EXIT_SUCCESS); 580 - } 581 - 582 - perf_fd = perf_thread_cpu_event_open(cpid, 0, (__u64)perf_data1, 583 - sizeof(*perf_data1)); 584 - if (perf_fd < 0) { 585 - ret = -1; 586 - goto kill_child; 587 - } 588 - close(perf_fd); 589 - 590 - kill_child: 591 - kill(cpid, SIGINT); 592 - ptrace(PPC_PTRACE_DELHWDEBUG, child_pid, 0, ptrace_fd); 593 - return ret; 594 - } 595 - 596 - static int do_test(const char *msg, int (*fun)(pid_t arg), pid_t arg) 597 - { 598 - int ret; 599 - 600 - ret = fun(arg); 601 - if (ret) 602 - printf("%s: Error\n", msg); 
603 - else 604 - printf("%s: Ok\n", msg); 605 - return ret; 606 - } 607 - 608 - char *desc[14] = { 609 - "perf cpu event -> ptrace thread event (Overlapping)", 610 - "perf cpu event -> ptrace thread event (Non-overlapping)", 611 - "perf thread event -> ptrace same thread event (Overlapping)", 612 - "perf thread event -> ptrace same thread event (Non-overlapping)", 613 - "perf thread event -> ptrace other thread event", 614 - "ptrace thread event -> perf kernel event", 615 - "ptrace thread event -> perf same thread event (Overlapping)", 616 - "ptrace thread event -> perf same thread event (Non-overlapping)", 617 - "ptrace thread event -> perf other thread event", 618 - "ptrace thread event -> perf cpu event (Overlapping)", 619 - "ptrace thread event -> perf cpu event (Non-overlapping)", 620 - "ptrace thread event -> perf same thread & cpu event (Overlapping)", 621 - "ptrace thread event -> perf same thread & cpu event (Non-overlapping)", 622 - "ptrace thread event -> perf other thread & cpu event", 623 - }; 624 - 625 - static int test(pid_t child_pid) 626 - { 627 - int ret = TEST_PASS; 628 - 629 - ret |= do_test(desc[0], test1, child_pid); 630 - ret |= do_test(desc[1], test2, child_pid); 631 - ret |= do_test(desc[2], test3, child_pid); 632 - ret |= do_test(desc[3], test4, child_pid); 633 - ret |= do_test(desc[4], test5, child_pid); 634 - ret |= do_test(desc[5], test6, child_pid); 635 - ret |= do_test(desc[6], test7, child_pid); 636 - ret |= do_test(desc[7], test8, child_pid); 637 - ret |= do_test(desc[8], test9, child_pid); 638 - ret |= do_test(desc[9], test10, child_pid); 639 - ret |= do_test(desc[10], test11, child_pid); 640 - ret |= do_test(desc[11], test12, child_pid); 641 - ret |= do_test(desc[12], test13, child_pid); 642 - ret |= do_test(desc[13], test14, child_pid); 643 - 644 - return ret; 645 - } 646 - 647 - static void get_dbginfo(pid_t child_pid, struct ppc_debug_info *dbginfo) 648 - { 649 - if (ptrace(PPC_PTRACE_GETHWDBGINFO, child_pid, NULL, dbginfo)) { 
-		perror("Can't get breakpoint info");
-		exit(-1);
-	}
-}
-
-static int ptrace_perf_hwbreak(void)
-{
-	int ret;
-	pid_t child_pid;
 	struct ppc_debug_info dbginfo;
 
-	child_pid = fork();
-	if (!child_pid)
-		return child();
+	FAIL_IF_MSG(ppc_ptrace_gethwdbginfo(pid, &dbginfo), "PPC_PTRACE_GETHWDBGINFO failed");
+	SKIP_IF_MSG(dbginfo.num_data_bps <= 1, "Not enough data watchpoints (need at least 2)");
 
-	/* parent */
-	wait(NULL); /* <-- child (SIGUSR1) */
+	return 0;
+}
 
-	get_dbginfo(child_pid, &dbginfo);
-	SKIP_IF(dbginfo.num_data_bps <= 1);
+/*
+ * Wrapper around a plain fork() call that sets up the child for
+ * ptrace-ing. Both the parent and child return from this, though
+ * the child is stopped until ptrace_cont(pid) is run by the parent.
+ */
+static int ptrace_fork_child(pid_t *pid)
+{
+	int status;
 
-	ret = perf_cpu_event_open(0, (__u64)perf_data1, sizeof(*perf_data1));
-	SKIP_IF(ret < 0);
-	close(ret);
+	*pid = fork();
 
-	ret = test(child_pid);
+	if (*pid < 0)
+		FAIL_IF_MSG(1, "Failed to fork child");
 
-	ptrace(PTRACE_CONT, child_pid, NULL, 0);
-	return ret;
+	if (!*pid) {
+		FAIL_IF_EXIT_MSG(ptrace_traceme(), "PTRACE_TRACEME failed");
+		FAIL_IF_EXIT_MSG(raise(SIGSTOP), "Child failed to raise SIGSTOP");
+	} else {
+		/* Synchronise on child SIGSTOP */
+		FAIL_IF_MSG(waitpid(*pid, &status, 0) == -1, "Failed to wait for child");
+		FAIL_IF_MSG(!WIFSTOPPED(status), "Child is not stopped");
+	}
+
+	return 0;
+}
+
+/*
+ * Tests the interaction between ptrace and perf watching the same data.
+ *
+ * We expect ptrace to take 'priority', as it has before-execute
+ * semantics.
+ *
+ * The perf counter should not be incremented yet because perf has after-execute
+ * semantics. E.g., if ptrace changes the child PC, we don't even execute the
+ * instruction at all.
+ *
+ * When the child is stopped for ptrace, we test both continue and single step.
+ * Both should increment the perf counter. We also test changing the PC somewhere
+ * different and stepping, which should not increment the perf counter.
+ */
+int same_watch_addr_test(void)
+{
+	struct ppc_hw_breakpoint bp_info;	/* ptrace breakpoint info */
+	int bp_id;		/* Breakpoint handle of ptrace watchpoint */
+	int perf_fd;		/* File descriptor of perf performance counter */
+	u64 perf_count;		/* Most recently fetched perf performance counter value */
+	pid_t pid;		/* PID of child process */
+	void *pc;		/* Most recently fetched child PC value */
+	int status;		/* Stop status of child after waitpid */
+	unsigned long value;	/* Dummy value to be read/written to by child */
+	int err;
+
+	err = ptrace_fork_child(&pid);
+	if (err)
+		return err;
+
+	if (!pid) {
+		same_watch_addr_child(&value);
+		exit(1);
+	}
+
+	err = check_watchpoints(pid);
+	if (err)
+		return err;
+
+	/* Place a perf watchpoint counter on value */
+	perf_fd = perf_watchpoint_open(pid, &value, sizeof(value));
+	FAIL_IF_MSG(perf_fd < 0, "Failed to open perf performance counter");
+
+	/* Place a ptrace watchpoint on value */
+	ppc_ptrace_init_breakpoint(&bp_info, PPC_BREAKPOINT_TRIGGER_READ, &value, sizeof(value));
+	bp_id = ppc_ptrace_sethwdbg(pid, &bp_info);
+	FAIL_IF_MSG(bp_id < 0, "Failed to set ptrace watchpoint");
+
+	/* Let the child run. It should stop on the ptrace watchpoint */
+	FAIL_IF_MSG(ptrace_cont(pid, 0), "Failed to continue child");
+
+	FAIL_IF_MSG(waitpid(pid, &status, 0) == -1, "Failed to wait for child");
+	FAIL_IF_MSG(!WIFSTOPPED(status), "Child is not stopped");
+	FAIL_IF_MSG(ptrace_getreg_pc(pid, &pc), "Failed to get child PC");
+	FAIL_IF_MSG(pc != same_watch_addr_load, "Child did not stop on load instruction");
+
+	/*
+	 * We stopped before executing the load, so perf should not have
+	 * recorded any events yet
+	 */
+	FAIL_IF_MSG(perf_read_counter(perf_fd, &perf_count), "Failed to read perf counter");
+	FAIL_IF_MSG(perf_count != 0, "perf recorded unexpected event");
+
+	/* Single stepping over the load should increment the perf counter */
+	FAIL_IF_MSG(ptrace_singlestep(pid, 0), "Failed to single step child");
+
+	FAIL_IF_MSG(waitpid(pid, &status, 0) == -1, "Failed to wait for child");
+	FAIL_IF_MSG(!WIFSTOPPED(status), "Child is not stopped");
+	FAIL_IF_MSG(ptrace_getreg_pc(pid, &pc), "Failed to get child PC");
+	FAIL_IF_MSG(pc != same_watch_addr_load + 4, "Failed to single step load instruction");
+	FAIL_IF_MSG(perf_read_counter(perf_fd, &perf_count), "Failed to read perf counter");
+	FAIL_IF_MSG(perf_count != 1, "perf counter did not increment");
+
+	/*
+	 * Set up a ptrace watchpoint on the value again and trigger it.
+	 * The perf counter should not have incremented because we do not
+	 * execute the load yet.
+	 */
+	FAIL_IF_MSG(ppc_ptrace_delhwdbg(pid, bp_id), "Failed to remove old ptrace watchpoint");
+	bp_id = ppc_ptrace_sethwdbg(pid, &bp_info);
+	FAIL_IF_MSG(bp_id < 0, "Failed to set ptrace watchpoint");
+	FAIL_IF_MSG(ptrace_setreg_pc(pid, same_watch_addr_load), "Failed to set child PC");
+	FAIL_IF_MSG(ptrace_cont(pid, 0), "Failed to continue child");
+
+	FAIL_IF_MSG(waitpid(pid, &status, 0) == -1, "Failed to wait for child");
+	FAIL_IF_MSG(!WIFSTOPPED(status), "Child is not stopped");
+	FAIL_IF_MSG(ptrace_getreg_pc(pid, &pc), "Failed to get child PC");
+	FAIL_IF_MSG(pc != same_watch_addr_load, "Child did not stop on load trap");
+	FAIL_IF_MSG(perf_read_counter(perf_fd, &perf_count), "Failed to read perf counter");
+	FAIL_IF_MSG(perf_count != 1, "perf counter should not have changed");
+
+	/* Continuing over the load should increment the perf counter */
+	FAIL_IF_MSG(ptrace_cont(pid, 0), "Failed to continue child");
+
+	FAIL_IF_MSG(waitpid(pid, &status, 0) == -1, "Failed to wait for child");
+	FAIL_IF_MSG(!WIFSTOPPED(status), "Child is not stopped");
+	FAIL_IF_MSG(ptrace_getreg_pc(pid, &pc), "Failed to get child PC");
+	FAIL_IF_MSG(pc != same_watch_addr_trap, "Child did not stop on end trap");
+	FAIL_IF_MSG(perf_read_counter(perf_fd, &perf_count), "Failed to read perf counter");
+	FAIL_IF_MSG(perf_count != 2, "perf counter did not increment");
+
+	/*
+	 * If we set the child PC back to the load instruction, then continue,
+	 * we should reach the end trap (because ptrace is one-shot) and have
+	 * another perf event.
+	 */
+	FAIL_IF_MSG(ptrace_setreg_pc(pid, same_watch_addr_load), "Failed to set child PC");
+	FAIL_IF_MSG(ptrace_cont(pid, 0), "Failed to continue child");
+
+	FAIL_IF_MSG(waitpid(pid, &status, 0) == -1, "Failed to wait for child");
+	FAIL_IF_MSG(!WIFSTOPPED(status), "Child is not stopped");
+	FAIL_IF_MSG(ptrace_getreg_pc(pid, &pc), "Failed to get child PC");
+	FAIL_IF_MSG(pc != same_watch_addr_trap, "Child did not stop on end trap");
+	FAIL_IF_MSG(perf_read_counter(perf_fd, &perf_count), "Failed to read perf counter");
+	FAIL_IF_MSG(perf_count != 3, "perf counter did not increment");
+
+	/*
+	 * If we set the child PC back to the load instruction, set a ptrace
+	 * watchpoint on the load, then continue, we should immediately get
+	 * the ptrace trap without incrementing the perf counter
+	 */
+	FAIL_IF_MSG(ppc_ptrace_delhwdbg(pid, bp_id), "Failed to remove old ptrace watchpoint");
+	bp_id = ppc_ptrace_sethwdbg(pid, &bp_info);
+	FAIL_IF_MSG(bp_id < 0, "Failed to set ptrace watchpoint");
+	FAIL_IF_MSG(ptrace_setreg_pc(pid, same_watch_addr_load), "Failed to set child PC");
+	FAIL_IF_MSG(ptrace_cont(pid, 0), "Failed to continue child");
+
+	FAIL_IF_MSG(waitpid(pid, &status, 0) == -1, "Failed to wait for child");
+	FAIL_IF_MSG(!WIFSTOPPED(status), "Child is not stopped");
+	FAIL_IF_MSG(ptrace_getreg_pc(pid, &pc), "Failed to get child PC");
+	FAIL_IF_MSG(pc != same_watch_addr_load, "Child did not stop on load instruction");
+	FAIL_IF_MSG(perf_read_counter(perf_fd, &perf_count), "Failed to read perf counter");
+	FAIL_IF_MSG(perf_count != 3, "perf counter should not have changed");
+
+	/*
+	 * If we change the PC while stopped on the load instruction, we should
+	 * not increment the perf counter (because ptrace is before-execute,
+	 * perf is after-execute).
+	 */
+	FAIL_IF_MSG(ptrace_setreg_pc(pid, same_watch_addr_load + 4), "Failed to set child PC");
+	FAIL_IF_MSG(ptrace_cont(pid, 0), "Failed to continue child");
+
+	FAIL_IF_MSG(waitpid(pid, &status, 0) == -1, "Failed to wait for child");
+	FAIL_IF_MSG(!WIFSTOPPED(status), "Child is not stopped");
+	FAIL_IF_MSG(ptrace_getreg_pc(pid, &pc), "Failed to get child PC");
+	FAIL_IF_MSG(pc != same_watch_addr_trap, "Child did not stop on end trap");
+	FAIL_IF_MSG(perf_read_counter(perf_fd, &perf_count), "Failed to read perf counter");
+	FAIL_IF_MSG(perf_count != 3, "perf counter should not have changed");
+
+	/* Clean up child */
+	FAIL_IF_MSG(kill(pid, SIGKILL) != 0, "Failed to kill child");
+
+	return 0;
+}
+
+/*
+ * Tests the interaction between ptrace and perf when:
+ * 1. perf watches a value
+ * 2. ptrace watches a different value
+ * 3. The perf value is read, then the ptrace value is read immediately after
+ *
+ * A breakpoint implementation may accidentally misattribute/skip one of
+ * the ptrace or perf handlers, as interrupt-based work is done after perf
+ * and before ptrace.
+ *
+ * We expect the perf counter to increment before the ptrace watchpoint
+ * triggers.
+ */
+int perf_then_ptrace_test(void)
+{
+	struct ppc_hw_breakpoint bp_info;	/* ptrace breakpoint info */
+	int bp_id;		/* Breakpoint handle of ptrace watchpoint */
+	int perf_fd;		/* File descriptor of perf performance counter */
+	u64 perf_count;		/* Most recently fetched perf performance counter value */
+	pid_t pid;		/* PID of child process */
+	void *pc;		/* Most recently fetched child PC value */
+	int status;		/* Stop status of child after waitpid */
+	unsigned long perf_value;	/* Dummy value to be watched by perf */
+	unsigned long ptrace_value;	/* Dummy value to be watched by ptrace */
+	int err;
+
+	err = ptrace_fork_child(&pid);
+	if (err)
+		return err;
+
+	/*
+	 * If we are the child, run a subroutine that reads the perf value,
+	 * then reads the ptrace value with consecutive load instructions
+	 */
+	if (!pid) {
+		perf_then_ptrace_child(&perf_value, &ptrace_value);
+		exit(0);
+	}
+
+	err = check_watchpoints(pid);
+	if (err)
+		return err;
+
+	/* Place a perf watchpoint counter */
+	perf_fd = perf_watchpoint_open(pid, &perf_value, sizeof(perf_value));
+	FAIL_IF_MSG(perf_fd < 0, "Failed to open perf performance counter");
+
+	/* Place a ptrace watchpoint */
+	ppc_ptrace_init_breakpoint(&bp_info, PPC_BREAKPOINT_TRIGGER_READ,
+				   &ptrace_value, sizeof(ptrace_value));
+	bp_id = ppc_ptrace_sethwdbg(pid, &bp_info);
+	FAIL_IF_MSG(bp_id < 0, "Failed to set ptrace watchpoint");
+
+	/* Let the child run. It should stop on the ptrace watchpoint */
+	FAIL_IF_MSG(ptrace_cont(pid, 0), "Failed to continue child");
+
+	FAIL_IF_MSG(waitpid(pid, &status, 0) == -1, "Failed to wait for child");
+	FAIL_IF_MSG(!WIFSTOPPED(status), "Child is not stopped");
+	FAIL_IF_MSG(ptrace_getreg_pc(pid, &pc), "Failed to get child PC");
+	FAIL_IF_MSG(pc != perf_then_ptrace_load2, "Child did not stop on ptrace load");
+
+	/* perf should have recorded the first load */
+	FAIL_IF_MSG(perf_read_counter(perf_fd, &perf_count), "Failed to read perf counter");
+	FAIL_IF_MSG(perf_count != 1, "perf counter did not increment");
+
+	/* Clean up child */
+	FAIL_IF_MSG(kill(pid, SIGKILL) != 0, "Failed to kill child");
+
+	return 0;
 }
 
 int main(int argc, char *argv[])
 {
-	return test_harness(ptrace_perf_hwbreak, "ptrace-perf-hwbreak");
+	int err = 0;
+
+	err |= test_harness(same_watch_addr_test, "same_watch_addr");
+	err |= test_harness(perf_then_ptrace_test, "perf_then_ptrace");
+
+	return err;
 }
+1 -1
tools/testing/selftests/powerpc/ptrace/ptrace-pkey.c
···
 	 * to the child.
 	 */
 	ret = ptrace_read_regs(pid, NT_PPC_PKEY, regs, 3);
-	PARENT_SKIP_IF_UNSUPPORTED(ret, &info->child_sync);
+	PARENT_SKIP_IF_UNSUPPORTED(ret, &info->child_sync, "PKEYs not supported");
 	PARENT_FAIL_IF(ret, &info->child_sync);
 
 	info->amr1 = info->amr2 = regs[0];
+1 -1
tools/testing/selftests/powerpc/ptrace/ptrace-tar.c
···
 	int ret, status;
 
 	// TAR was added in v2.07
-	SKIP_IF(!have_hwcap2(PPC_FEATURE2_ARCH_2_07));
+	SKIP_IF_MSG(!have_hwcap2(PPC_FEATURE2_ARCH_2_07), "TAR requires ISA 2.07 compatible hardware");
 
 	shm_id = shmget(IPC_PRIVATE, sizeof(int) * 3, 0777|IPC_CREAT);
 	pid = fork();
+2 -2
tools/testing/selftests/powerpc/ptrace/ptrace-tm-gpr.c
···
 	pid_t pid;
 	int ret, status;
 
-	SKIP_IF(!have_htm());
-	SKIP_IF(htm_is_synthetic());
+	SKIP_IF_MSG(!have_htm(), "Don't have transactional memory");
+	SKIP_IF_MSG(htm_is_synthetic(), "Transactional memory is synthetic");
 	shm_id = shmget(IPC_PRIVATE, sizeof(int) * 2, 0777|IPC_CREAT);
 	pid = fork();
 	if (pid < 0) {
+2 -2
tools/testing/selftests/powerpc/ptrace/ptrace-tm-spd-gpr.c
···
 	pid_t pid;
 	int ret, status;
 
-	SKIP_IF(!have_htm());
-	SKIP_IF(htm_is_synthetic());
+	SKIP_IF_MSG(!have_htm(), "Don't have transactional memory");
+	SKIP_IF_MSG(htm_is_synthetic(), "Transactional memory is synthetic");
 	shm_id = shmget(IPC_PRIVATE, sizeof(int) * 3, 0777|IPC_CREAT);
 	pid = fork();
 	if (pid < 0) {
+2 -2
tools/testing/selftests/powerpc/ptrace/ptrace-tm-spd-tar.c
···
 	pid_t pid;
 	int ret, status;
 
-	SKIP_IF(!have_htm());
-	SKIP_IF(htm_is_synthetic());
+	SKIP_IF_MSG(!have_htm(), "Don't have transactional memory");
+	SKIP_IF_MSG(htm_is_synthetic(), "Transactional memory is synthetic");
 	shm_id = shmget(IPC_PRIVATE, sizeof(int) * 3, 0777|IPC_CREAT);
 	pid = fork();
 	if (pid == 0)
+2 -2
tools/testing/selftests/powerpc/ptrace/ptrace-tm-spd-vsx.c
···
 	pid_t pid;
 	int ret, status, i;
 
-	SKIP_IF(!have_htm());
-	SKIP_IF(htm_is_synthetic());
+	SKIP_IF_MSG(!have_htm(), "Don't have transactional memory");
+	SKIP_IF_MSG(htm_is_synthetic(), "Transactional memory is synthetic");
 	shm_id = shmget(IPC_PRIVATE, sizeof(int) * 3, 0777|IPC_CREAT);
 
 	for (i = 0; i < 128; i++) {
+2 -2
tools/testing/selftests/powerpc/ptrace/ptrace-tm-spr.c
···
 	pid_t pid;
 	int ret, status;
 
-	SKIP_IF(!have_htm());
-	SKIP_IF(htm_is_synthetic());
+	SKIP_IF_MSG(!have_htm(), "Don't have transactional memory");
+	SKIP_IF_MSG(htm_is_synthetic(), "Transactional memory is synthetic");
 	shm_id = shmget(IPC_PRIVATE, sizeof(struct shared), 0777|IPC_CREAT);
 	shm_id1 = shmget(IPC_PRIVATE, sizeof(int), 0777|IPC_CREAT);
 	pid = fork();
+2 -2
tools/testing/selftests/powerpc/ptrace/ptrace-tm-tar.c
···
 	pid_t pid;
 	int ret, status;
 
-	SKIP_IF(!have_htm());
-	SKIP_IF(htm_is_synthetic());
+	SKIP_IF_MSG(!have_htm(), "Don't have transactional memory");
+	SKIP_IF_MSG(htm_is_synthetic(), "Transactional memory is synthetic");
 	shm_id = shmget(IPC_PRIVATE, sizeof(int) * 2, 0777|IPC_CREAT);
 	pid = fork();
 	if (pid == 0)
+2 -2
tools/testing/selftests/powerpc/ptrace/ptrace-tm-vsx.c
···
 	pid_t pid;
 	int ret, status, i;
 
-	SKIP_IF(!have_htm());
-	SKIP_IF(htm_is_synthetic());
+	SKIP_IF_MSG(!have_htm(), "Don't have transactional memory");
+	SKIP_IF_MSG(htm_is_synthetic(), "Transactional memory is synthetic");
 	shm_id = shmget(IPC_PRIVATE, sizeof(int) * 2, 0777|IPC_CREAT);
 
 	for (i = 0; i < 128; i++) {
+1 -1
tools/testing/selftests/powerpc/ptrace/ptrace-vsx.c
···
 	pid_t pid;
 	int ret, status, i;
 
-	SKIP_IF(!have_hwcap(PPC_FEATURE_HAS_VSX));
+	SKIP_IF_MSG(!have_hwcap(PPC_FEATURE_HAS_VSX), "Don't have VSX");
 
 	shm_id = shmget(IPC_PRIVATE, sizeof(int) * 2, 0777|IPC_CREAT);
 
tools/testing/selftests/powerpc/stringloops/{asm/export.h → linux/export.h}