Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'powerpc-6.10-1' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux

Pull powerpc updates from Michael Ellerman:

- Enable BPF Kernel Functions (kfuncs) in the powerpc BPF JIT.

- Allow per-process DEXCR (Dynamic Execution Control Register) settings
via prctl, notably NPHIE which controls hashst/hashchk for ROP
protection.

- Install powerpc selftests in sub-directories. Note this changes the
way run_kselftest.sh needs to be invoked for powerpc selftests.

- Change fadump (Firmware Assisted Dump) to better handle memory
add/remove.

- Add support for passing additional parameters to the fadump kernel.

- Add support for updating the kdump image on CPU/memory add/remove
events.

- Other small features, cleanups and fixes.

Thanks to Andrew Donnellan, Andy Shevchenko, Aneesh Kumar K.V, Arnd
Bergmann, Benjamin Gray, Bjorn Helgaas, Christian Zigotzky, Christophe
Jaillet, Christophe Leroy, Colin Ian King, Cédric Le Goater, Dr. David
Alan Gilbert, Erhard Furtner, Frank Li, GUO Zihua, Ganesh Goudar, Geoff
Levand, Ghanshyam Agrawal, Greg Kurz, Hari Bathini, Joel Stanley, Justin
Stitt, Kunwu Chan, Li Yang, Lidong Zhong, Madhavan Srinivasan, Mahesh
Salgaonkar, Masahiro Yamada, Matthias Schiffer, Naresh Kamboju, Nathan
Chancellor, Nathan Lynch, Naveen N Rao, Nicholas Miehlbradt, Ran Wang,
Randy Dunlap, Ritesh Harjani, Sachin Sant, Shirisha Ganta, Shrikanth
Hegde, Sourabh Jain, Stephen Rothwell, sundar, Thorsten Blum, Vaibhav
Jain, Xiaowei Bao, Yang Li, and Zhao Chenhui.

* tag 'powerpc-6.10-1' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux: (85 commits)
powerpc/fadump: Fix section mismatch warning
powerpc/85xx: fix compile error without CONFIG_CRASH_DUMP
powerpc/fadump: update documentation about bootargs_append
powerpc/fadump: pass additional parameters when fadump is active
powerpc/fadump: setup additional parameters for dump capture kernel
powerpc/pseries/fadump: add support for multiple boot memory regions
selftests/powerpc/dexcr: Fix spelling mistake "predicition" -> "prediction"
KVM: PPC: Book3S HV nestedv2: Fix an error handling path in gs_msg_ops_kvmhv_nestedv2_config_fill_info()
KVM: PPC: Fix documentation for ppc mmu caps
KVM: PPC: code cleanup for kvmppc_book3s_irqprio_deliver
KVM: PPC: Book3S HV nestedv2: Cancel pending DEC exception
powerpc/xmon: Check cpu id in commands "c#", "dp#" and "dx#"
powerpc/code-patching: Use dedicated memory routines for patching
powerpc/code-patching: Test patch_instructions() during boot
powerpc64/kasan: Pass virtual addresses to kasan_init_phys_region()
powerpc: rename SPRN_HID2 define to SPRN_HID2_750FX
powerpc: Fix typos
powerpc/eeh: Fix spelling of the word "auxillary" and update comment
macintosh/ams: Fix unused variable warning
powerpc/Makefile: Remove bits related to the previous use of -mcmodel=large
...

+3051 -1269
+7 -7
Documentation/ABI/testing/sysfs-devices-system-cpu
···
 /sys/devices/system/cpu/cpuX/cpufreq/throttle_stats/occ_reset
 Date:		March 2016
 Contact:	Linux kernel mailing list <linux-kernel@vger.kernel.org>
-		Linux for PowerPC mailing list <linuxppc-dev@ozlabs.org>
+		Linux for PowerPC mailing list <linuxppc-dev@lists.ozlabs.org>
 Description:	POWERNV CPUFreq driver's frequency throttle stats directory and
		attributes
···
 /sys/devices/system/cpu/cpufreq/policyX/throttle_stats/occ_reset
 Date:		March 2016
 Contact:	Linux kernel mailing list <linux-kernel@vger.kernel.org>
-		Linux for PowerPC mailing list <linuxppc-dev@ozlabs.org>
+		Linux for PowerPC mailing list <linuxppc-dev@lists.ozlabs.org>
 Description:	POWERNV CPUFreq driver's frequency throttle stats directory and
		attributes
···
 What:		/sys/devices/system/cpu/svm
 Date:		August 2019
 Contact:	Linux kernel mailing list <linux-kernel@vger.kernel.org>
-		Linux for PowerPC mailing list <linuxppc-dev@ozlabs.org>
+		Linux for PowerPC mailing list <linuxppc-dev@lists.ozlabs.org>
 Description:	Secure Virtual Machine

		If 1, it means the system is using the Protected Execution
···
 What:		/sys/devices/system/cpu/cpuX/purr
 Date:		Apr 2005
-Contact:	Linux for PowerPC mailing list <linuxppc-dev@ozlabs.org>
+Contact:	Linux for PowerPC mailing list <linuxppc-dev@lists.ozlabs.org>
 Description:	PURR ticks for this CPU since the system boot.

		The Processor Utilization Resources Register (PURR) is
···
 What:		/sys/devices/system/cpu/cpuX/spurr
 Date:		Dec 2006
-Contact:	Linux for PowerPC mailing list <linuxppc-dev@ozlabs.org>
+Contact:	Linux for PowerPC mailing list <linuxppc-dev@lists.ozlabs.org>
 Description:	SPURR ticks for this CPU since the system boot.

		The Scaled Processor Utilization Resources Register
···
 What:		/sys/devices/system/cpu/cpuX/idle_purr
 Date:		Apr 2020
-Contact:	Linux for PowerPC mailing list <linuxppc-dev@ozlabs.org>
+Contact:	Linux for PowerPC mailing list <linuxppc-dev@lists.ozlabs.org>
 Description:	PURR ticks for cpuX when it was idle.

		This sysfs interface exposes the number of PURR ticks
···
 What:		/sys/devices/system/cpu/cpuX/idle_spurr
 Date:		Apr 2020
-Contact:	Linux for PowerPC mailing list <linuxppc-dev@ozlabs.org>
+Contact:	Linux for PowerPC mailing list <linuxppc-dev@lists.ozlabs.org>
 Description:	SPURR ticks for cpuX when it was idle.

		This sysfs interface exposes the number of SPURR ticks
+2 -2
Documentation/ABI/testing/sysfs-firmware-opal-powercap
···
 What:		/sys/firmware/opal/powercap
 Date:		August 2017
-Contact:	Linux for PowerPC mailing list <linuxppc-dev@ozlabs.org>
+Contact:	Linux for PowerPC mailing list <linuxppc-dev@lists.ozlabs.org>
 Description:	Powercap directory for Powernv (P8, P9) servers

		Each folder in this directory contains a
···
		/sys/firmware/opal/powercap/system-powercap/powercap-max
		/sys/firmware/opal/powercap/system-powercap/powercap-current
 Date:		August 2017
-Contact:	Linux for PowerPC mailing list <linuxppc-dev@ozlabs.org>
+Contact:	Linux for PowerPC mailing list <linuxppc-dev@lists.ozlabs.org>
 Description:	System powercap directory and attributes applicable for
		Powernv (P8, P9) servers

+2 -2
Documentation/ABI/testing/sysfs-firmware-opal-psr
···
 What:		/sys/firmware/opal/psr
 Date:		August 2017
-Contact:	Linux for PowerPC mailing list <linuxppc-dev@ozlabs.org>
+Contact:	Linux for PowerPC mailing list <linuxppc-dev@lists.ozlabs.org>
 Description:	Power-Shift-Ratio directory for Powernv P9 servers

		Power-Shift-Ratio allows to provide hints the firmware
···
 What:		/sys/firmware/opal/psr/cpu_to_gpu_X
 Date:		August 2017
-Contact:	Linux for PowerPC mailing list <linuxppc-dev@ozlabs.org>
+Contact:	Linux for PowerPC mailing list <linuxppc-dev@lists.ozlabs.org>
 Description:	PSR sysfs attributes for Powernv P9 servers

		Power-Shift-Ratio between CPU and GPU for a given chip
+2 -2
Documentation/ABI/testing/sysfs-firmware-opal-sensor-groups
···
 What:		/sys/firmware/opal/sensor_groups
 Date:		August 2017
-Contact:	Linux for PowerPC mailing list <linuxppc-dev@ozlabs.org>
+Contact:	Linux for PowerPC mailing list <linuxppc-dev@lists.ozlabs.org>
 Description:	Sensor groups directory for POWER9 powernv servers

		Each folder in this directory contains a sensor group
···

 What:		/sys/firmware/opal/sensor_groups/<sensor_group_name>/clear
 Date:		August 2017
-Contact:	Linux for PowerPC mailing list <linuxppc-dev@ozlabs.org>
+Contact:	Linux for PowerPC mailing list <linuxppc-dev@lists.ozlabs.org>
 Description:	Sysfs file to clear the min-max of all the sensors
		belonging to the group.

+5 -5
Documentation/ABI/testing/sysfs-firmware-papr-energy-scale-info
···
 What:		/sys/firmware/papr/energy_scale_info
 Date:		February 2022
-Contact:	Linux for PowerPC mailing list <linuxppc-dev@ozlabs.org>
+Contact:	Linux for PowerPC mailing list <linuxppc-dev@lists.ozlabs.org>
 Description:	Directory hosting a set of platform attributes like
		energy/frequency on Linux running as a PAPR guest.

···
 What:		/sys/firmware/papr/energy_scale_info/<id>
 Date:		February 2022
-Contact:	Linux for PowerPC mailing list <linuxppc-dev@ozlabs.org>
+Contact:	Linux for PowerPC mailing list <linuxppc-dev@lists.ozlabs.org>
 Description:	Energy, frequency attributes directory for POWERVM servers

 What:		/sys/firmware/papr/energy_scale_info/<id>/desc
 Date:		February 2022
-Contact:	Linux for PowerPC mailing list <linuxppc-dev@ozlabs.org>
+Contact:	Linux for PowerPC mailing list <linuxppc-dev@lists.ozlabs.org>
 Description:	String description of the energy attribute of <id>

 What:		/sys/firmware/papr/energy_scale_info/<id>/value
 Date:		February 2022
-Contact:	Linux for PowerPC mailing list <linuxppc-dev@ozlabs.org>
+Contact:	Linux for PowerPC mailing list <linuxppc-dev@lists.ozlabs.org>
 Description:	Numeric value of the energy attribute of <id>

 What:		/sys/firmware/papr/energy_scale_info/<id>/value_desc
 Date:		February 2022
-Contact:	Linux for PowerPC mailing list <linuxppc-dev@ozlabs.org>
+Contact:	Linux for PowerPC mailing list <linuxppc-dev@lists.ozlabs.org>
 Description:	String value of the energy attribute of <id>
+18
Documentation/ABI/testing/sysfs-kernel-fadump
···
 Description:	read only
		Provide information about the amount of memory reserved by
		FADump to save the crash dump in bytes.
+
+What:		/sys/kernel/fadump/hotplug_ready
+Date:		Apr 2024
+Contact:	linuxppc-dev@lists.ozlabs.org
+Description:	read only
+		Kdump udev rule re-registers fadump on memory add/remove events,
+		primarily to update the elfcorehdr. This sysfs indicates the
+		kdump udev rule that fadump re-registration is not required on
+		memory add/remove events because elfcorehdr is now prepared in
+		the second/fadump kernel.
+User:		kexec-tools
+
+What:		/sys/kernel/fadump/bootargs_append
+Date:		May 2024
+Contact:	linuxppc-dev@lists.ozlabs.org
+Description:	read/write
+		This is a special sysfs file available to setup additional
+		parameters to be passed to capture kernel.
+139 -2
Documentation/arch/powerpc/dexcr.rst
···
 Configuration
 =============

-The DEXCR is currently unconfigurable. All threads are run with the
-NPHIE aspect enabled.
+prctl
+-----
+
+A process can control its own userspace DEXCR value using the
+``PR_PPC_GET_DEXCR`` and ``PR_PPC_SET_DEXCR`` pair of
+:manpage:`prctl(2)` commands. These calls have the form::
+
+    prctl(PR_PPC_GET_DEXCR, unsigned long which, 0, 0, 0);
+    prctl(PR_PPC_SET_DEXCR, unsigned long which, unsigned long ctrl, 0, 0);
+
+The possible 'which' and 'ctrl' values are as follows. Note there is no relation
+between the 'which' value and the DEXCR aspect's index.
+
+.. flat-table::
+   :header-rows: 1
+   :widths: 2 7 1
+
+   * - ``prctl()`` which
+     - Aspect name
+     - Aspect index
+
+   * - ``PR_PPC_DEXCR_SBHE``
+     - Speculative Branch Hint Enable (SBHE)
+     - 0
+
+   * - ``PR_PPC_DEXCR_IBRTPD``
+     - Indirect Branch Recurrent Target Prediction Disable (IBRTPD)
+     - 3
+
+   * - ``PR_PPC_DEXCR_SRAPD``
+     - Subroutine Return Address Prediction Disable (SRAPD)
+     - 4
+
+   * - ``PR_PPC_DEXCR_NPHIE``
+     - Non-Privileged Hash Instruction Enable (NPHIE)
+     - 5
+
+.. flat-table::
+   :header-rows: 1
+   :widths: 2 8
+
+   * - ``prctl()`` ctrl
+     - Meaning
+
+   * - ``PR_PPC_DEXCR_CTRL_EDITABLE``
+     - This aspect can be configured with PR_PPC_SET_DEXCR (get only)
+
+   * - ``PR_PPC_DEXCR_CTRL_SET``
+     - This aspect is set / set this aspect
+
+   * - ``PR_PPC_DEXCR_CTRL_CLEAR``
+     - This aspect is clear / clear this aspect
+
+   * - ``PR_PPC_DEXCR_CTRL_SET_ONEXEC``
+     - This aspect will be set after exec / set this aspect after exec
+
+   * - ``PR_PPC_DEXCR_CTRL_CLEAR_ONEXEC``
+     - This aspect will be clear after exec / clear this aspect after exec
+
+Note that
+
+* which is a plain value, not a bitmask. Aspects must be worked with individually.
+
+* ctrl is a bitmask. ``PR_PPC_GET_DEXCR`` returns both the current and onexec
+  configuration. For example, ``PR_PPC_GET_DEXCR`` may return
+  ``PR_PPC_DEXCR_CTRL_EDITABLE | PR_PPC_DEXCR_CTRL_SET |
+  PR_PPC_DEXCR_CTRL_CLEAR_ONEXEC``. This would indicate the aspect is currently
+  set, it will be cleared when you run exec, and you can change this with the
+  ``PR_PPC_SET_DEXCR`` prctl.
+
+* The set/clear terminology refers to setting/clearing the bit in the DEXCR.
+  For example::
+
+      prctl(PR_PPC_SET_DEXCR, PR_PPC_DEXCR_IBRTPD, PR_PPC_DEXCR_CTRL_SET, 0, 0);
+
+  will set the IBRTPD aspect bit in the DEXCR, causing indirect branch prediction
+  to be disabled.
+
+* The status returned by ``PR_PPC_GET_DEXCR`` represents what value the process
+  would like applied. It does not include any alternative overrides, such as if
+  the hypervisor is enforcing the aspect be set. To see the true DEXCR state
+  software should read the appropriate SPRs directly.
+
+* The aspect state when starting a process is copied from the parent's state on
+  :manpage:`fork(2)`. The state is reset to a fixed value on
+  :manpage:`execve(2)`. The PR_PPC_SET_DEXCR prctl() can control both of these
+  values.
+
+* The ``*_ONEXEC`` controls do not change the current process's DEXCR.
+
+Use ``PR_PPC_SET_DEXCR`` with one of ``PR_PPC_DEXCR_CTRL_SET`` or
+``PR_PPC_DEXCR_CTRL_CLEAR`` to edit a given aspect.
+
+Common error codes for both getting and setting the DEXCR are as follows:
+
+.. flat-table::
+   :header-rows: 1
+   :widths: 2 8
+
+   * - Error
+     - Meaning
+
+   * - ``EINVAL``
+     - The DEXCR is not supported by the kernel.
+
+   * - ``ENODEV``
+     - The aspect is not recognised by the kernel or not supported by the
+       hardware.
+
+``PR_PPC_SET_DEXCR`` may also report the following error codes:
+
+.. flat-table::
+   :header-rows: 1
+   :widths: 2 8
+
+   * - Error
+     - Meaning
+
+   * - ``EINVAL``
+     - The ctrl value contains unrecognised flags.
+
+   * - ``EINVAL``
+     - The ctrl value contains mutually conflicting flags (e.g.,
+       ``PR_PPC_DEXCR_CTRL_SET | PR_PPC_DEXCR_CTRL_CLEAR``)
+
+   * - ``EPERM``
+     - This aspect cannot be modified with prctl() (check for the
+       PR_PPC_DEXCR_CTRL_EDITABLE flag with PR_PPC_GET_DEXCR).
+
+   * - ``EPERM``
+     - The process does not have sufficient privilege to perform the operation.
+       For example, clearing NPHIE on exec is a privileged operation (a process
+       can still clear its own NPHIE aspect without privileges).
+
+This interface allows a process to control its own DEXCR aspects, and also set
+the initial DEXCR value for any children in its process tree (up to the next
+child to use an ``*_ONEXEC`` control). This allows fine-grained control over the
+default value of the DEXCR, for example allowing containers to run with different
+default values.


 coredump and ptrace
+42 -49
Documentation/arch/powerpc/firmware-assisted-dump.rst
···
 memory is held.

 If there is no waiting dump data, then only the memory required to
-hold CPU state, HPTE region, boot memory dump, FADump header and
-elfcore header, is usually reserved at an offset greater than boot
-memory size (see Fig. 1). This area is *not* released: this region
-will be kept permanently reserved, so that it can act as a receptacle
-for a copy of the boot memory content in addition to CPU state and
-HPTE region, in the case a crash does occur.
+hold CPU state, HPTE region, boot memory dump, and FADump header is
+usually reserved at an offset greater than boot memory size (see Fig. 1).
+This area is *not* released: this region will be kept permanently
+reserved, so that it can act as a receptacle for a copy of the boot
+memory content in addition to CPU state and HPTE region, in the case
+a crash does occur.

 Since this reserved memory area is used only after the system crash,
 there is no point in blocking this significant chunk of memory from
···

 o Memory Reservation during first kernel

-  Low memory                                        Top of memory
-  0     boot memory size  |<--- Reserved dump area --->|       |
-  |          |            |   Permanent Reservation   |        |
-  V          V            |                           |        V
-  +-----------+-----/ /---+---+----+-------+-----+-----+----+--+
-  |           |           |///|////| DUMP  | HDR | ELF |////|  |
-  +-----------+-----/ /---+---+----+-------+-----+-----+----+--+
-        |                   ^    ^     ^      ^            ^
-        |                   |    |     |      |            |
-        \                  CPU  HPTE   /      |            |
-         ------------------------------       |            |
-     Boot memory content gets transferred     |            |
-     to reserved area by firmware at the      |            |
-     time of crash.                           |            |
-                                        FADump Header      |
-                                         (meta area)       |
+  Low memory                                         Top of memory
+  0     boot memory size  |<------ Reserved dump area ----->|      |
+  |          |            |     Permanent Reservation       |      |
+  V          V            |                                 |      V
+  +-----------+-----/ /---+---+----+-----------+-------+----+-----+
+  |           |           |///|////|   DUMP    |  HDR  |////|     |
+  +-----------+-----/ /---+---+----+-----------+-------+----+-----+
+        |                   ^    ^       ^         ^         ^
+        |                   |    |       |         |         |
+        \                  CPU  HPTE     /         |         |
+         --------------------------------          |         |
+     Boot memory content gets transferred          |         |
+     to reserved area by firmware at the           |         |
+     time of crash.                                |         |
+                                             FADump Header   |
+                                              (meta area)    |
                                                              |
                                                              |
                            Metadata: This area holds a metadata structure whose
···
  0          boot memory size                                       |
  |               |        |<------------ Crash preserved area ------------>|
  V               V        |<--- Reserved dump area --->|                   |
-  +-----------+-----/ /---+---+----+-------+-----+-----+----+--+
-  |           |           |///|////| DUMP  | HDR | ELF |////|  |
-  +-----------+-----/ /---+---+----+-------+-----+-----+----+--+
-       |                                                       |
-       V                                                       V
-  Used by second                                         /proc/vmcore
-  kernel to boot
+  +----+---+--+-----/ /---+---+----+-------+-----+-----+-------+
+  |    |ELF|  |           |///|////| DUMP  | HDR |/////|       |
+  +----+---+--+-----/ /---+---+----+-------+-----+-----+-------+
+    |    |  |               |        |        |        |
+    -----   ------------------------------    ---------------
+      \                |                             |
+       \               |                             |
+        \              |                             |
+         \             |        ----------------------------
+          \            |       /
+           \           |      /
+            \          |     /
+             /proc/vmcore
+

  +---+
  |///| -> Regions (CPU, HPTE & Metadata) marked like this in the above
  +---+    figures are not always present. For example, OPAL platform
           does not have CPU & HPTE regions while Metadata region is
           not supported on pSeries currently.
+
+ +---+
+ |ELF| -> elfcorehdr, it is created in second kernel after crash.
+ +---+
+
+ Note: Memory from 0 to the boot memory size is used by second kernel

  Fig. 2
···
   - Need to come up with the better approach to find out more
     accurate boot memory size that is required for a kernel to
     boot successfully when booted with restricted memory.
-  - The FADump implementation introduces a FADump crash info structure
-    in the scratch area before the ELF core header. The idea of introducing
-    this structure is to pass some important crash info data to the second
-    kernel which will help second kernel to populate ELF core header with
-    correct data before it gets exported through /proc/vmcore. The current
-    design implementation does not address a possibility of introducing
-    additional fields (in future) to this structure without affecting
-    compatibility. Need to come up with the better approach to address this.
-
-    The possible approaches are:
-
-    1. Introduce version field for version tracking, bump up the version
-    whenever a new field is added to the structure in future. The version
-    field can be used to find out what fields are valid for the current
-    version of the structure.
-    2. Reserve the area of predefined size (say PAGE_SIZE) for this
-    structure and have unused area as reserved (initialized to zero)
-    for future field additions.
-
-    The advantage of approach 1 over 2 is we don't need to reserve extra space.

 Author: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
+4 -4
Documentation/virt/kvm/api.rst
···
 4.100 KVM_PPC_CONFIGURE_V3_MMU
 ------------------------------

-:Capability: KVM_CAP_PPC_RADIX_MMU or KVM_CAP_PPC_HASH_MMU_V3
+:Capability: KVM_CAP_PPC_MMU_RADIX or KVM_CAP_PPC_MMU_HASH_V3
 :Architectures: ppc
 :Type: vm ioctl
 :Parameters: struct kvm_ppc_mmuv3_cfg (in)
···
 4.101 KVM_PPC_GET_RMMU_INFO
 ---------------------------

-:Capability: KVM_CAP_PPC_RADIX_MMU
+:Capability: KVM_CAP_PPC_MMU_RADIX
 :Architectures: ppc
 :Type: vm ioctl
 :Parameters: struct kvm_ppc_rmmu_info (out)
···
 will disable the use of APIC hardware virtualization even if supported
 by the CPU, as it's incompatible with SynIC auto-EOI behavior.

-8.3 KVM_CAP_PPC_RADIX_MMU
+8.3 KVM_CAP_PPC_MMU_RADIX
 -------------------------

 :Architectures: ppc
···
 radix MMU defined in Power ISA V3.00 (as implemented in the POWER9
 processor).

-8.4 KVM_CAP_PPC_HASH_MMU_V3
+8.4 KVM_CAP_PPC_MMU_HASH_V3
 ---------------------------

 :Architectures: ppc
+1 -2
MAINTAINERS
···
 M:	Michael Ellerman <mpe@ellerman.id.au>
 R:	Nicholas Piggin <npiggin@gmail.com>
 R:	Christophe Leroy <christophe.leroy@csgroup.eu>
-R:	Aneesh Kumar K.V <aneesh.kumar@kernel.org>
 R:	Naveen N. Rao <naveen.n.rao@linux.ibm.com>
 L:	linuxppc-dev@lists.ozlabs.org
 S:	Supported
···

 MMU GATHER AND TLB INVALIDATION
 M:	Will Deacon <will@kernel.org>
-M:	"Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
+M:	"Aneesh Kumar K.V" <aneesh.kumar@kernel.org>
 M:	Andrew Morton <akpm@linux-foundation.org>
 M:	Nick Piggin <npiggin@gmail.com>
 M:	Peter Zijlstra <peterz@infradead.org>
+2 -1
arch/powerpc/Kbuild
···
 # SPDX-License-Identifier: GPL-2.0
-subdir-ccflags-$(CONFIG_PPC_WERROR) := -Werror
+subdir-ccflags-$(CONFIG_PPC_WERROR) := -Werror -Wa,--fatal-warnings
+subdir-asflags-$(CONFIG_PPC_WERROR) := -Wa,--fatal-warnings

 obj-y += kernel/
 obj-y += mm/
+4
arch/powerpc/Kconfig
···
 	depends on CRASH_DUMP
 	select RELOCATABLE if PPC64 || 44x || PPC_85xx

+config ARCH_SUPPORTS_CRASH_HOTPLUG
+	def_bool y
+	depends on PPC64
+
 config FA_DUMP
 	bool "Firmware-assisted dump"
 	depends on CRASH_DUMP && PPC64 && (PPC_RTAS || PPC_POWERNV)
+1 -5
arch/powerpc/Makefile
···

 ifdef CONFIG_PPC64
 ifndef CONFIG_PPC_KERNEL_PCREL
-ifeq ($(call cc-option-yn,-mcmodel=medium),y)
 # -mcmodel=medium breaks modules because it uses 32bit offsets from
 # the TOC pointer to create pointers where possible. Pointers into the
 # percpu data area are created by this method.
···
 # kernel percpu data space (starting with 0xc...). We need a full
 # 64bit relocation for this to work, hence -mcmodel=large.
 KBUILD_CFLAGS_MODULE += -mcmodel=large
-else
-export NO_MINIMAL_TOC := -mno-minimal-toc
-endif
 endif
 endif
···
 CFLAGS-$(CONFIG_PPC64)	+= $(call cc-option,-mcall-aixdesc)
 endif
 endif
-CFLAGS-$(CONFIG_PPC64)	+= $(call cc-option,-mcmodel=medium,$(call cc-option,-mminimal-toc))
+CFLAGS-$(CONFIG_PPC64)	+= -mcmodel=medium
 CFLAGS-$(CONFIG_PPC64)	+= $(call cc-option,-mno-pointers-to-nested-functions)
 CFLAGS-$(CONFIG_PPC64)	+= $(call cc-option,-mlong-double-128)
+2 -2
arch/powerpc/boot/Makefile
···
 # these files into the build dir, fix up any includes and ensure that dependent
 # files are copied in the right order.

-# these need to be seperate variables because they are copied out of different
-# directories in the kernel tree. Sure you COULd merge them, but it's a
+# these need to be separate variables because they are copied out of different
+# directories in the kernel tree. Sure you COULD merge them, but it's a
 # cure-is-worse-than-disease situation.
 zlib-decomp-$(CONFIG_KERNEL_GZIP) := decompress_inflate.c
 zlib-$(CONFIG_KERNEL_GZIP) := inffast.c inflate.c inftrees.c
+1 -1
arch/powerpc/boot/decompress.c
···
  * @input_size: length of the input buffer
  * @outbuf: output buffer
  * @output_size: length of the output buffer
- * @skip number of output bytes to ignore
+ * @_skip: number of output bytes to ignore
  *
  * This function takes compressed data from inbuf, decompresses and write it to
  * outbuf. Once output_size bytes are written to the output buffer, or the
+1 -1
arch/powerpc/boot/dts/acadia.dts
···
 	reg = <0xef602800 0x60>;
 	interrupt-parent = <&UIC0>;
 	interrupts = <0x4 0x4>;
-	/* This thing is a bit weird.  It has it's own UIC
+	/* This thing is a bit weird.  It has its own UIC
 	 * that it uses to generate snapshot triggers.  We
 	 * don't really support this device yet, and it needs
 	 * work to figure this out.
+1 -1
arch/powerpc/boot/dts/fsl/b4si-post.dtsi
···
 &ifc {
 	#address-cells = <2>;
 	#size-cells = <1>;
-	compatible = "fsl,ifc", "simple-bus";
+	compatible = "fsl,ifc";
 	interrupts = <25 2 0 0>;
 };

+1 -1
arch/powerpc/boot/dts/fsl/bsc9131rdb.dts
···
 	device_type = "memory";
 };

-board_ifc: ifc: ifc@ff71e000 {
+board_ifc: ifc: memory-controller@ff71e000 {
 	/* NAND Flash on board */
 	ranges = <0x0 0x0 0x0 0xff800000 0x00004000>;
 	reg = <0x0 0xff71e000 0x0 0x2000>;
+1 -1
arch/powerpc/boot/dts/fsl/bsc9131si-post.dtsi
···
 &ifc {
 	#address-cells = <2>;
 	#size-cells = <1>;
-	compatible = "fsl,ifc", "simple-bus";
+	compatible = "fsl,ifc";
 	interrupts = <16 2 0 0 20 2 0 0>;
 };

+1 -1
arch/powerpc/boot/dts/fsl/bsc9132qds.dts
···
 	device_type = "memory";
 };

-ifc: ifc@ff71e000 {
+ifc: memory-controller@ff71e000 {
 	/* NOR, NAND Flash on board */
 	ranges = <0x0 0x0 0x0 0x88000000 0x08000000
 		  0x1 0x0 0x0 0xff800000 0x00010000>;
+1 -1
arch/powerpc/boot/dts/fsl/bsc9132si-post.dtsi
···
 &ifc {
 	#address-cells = <2>;
 	#size-cells = <1>;
-	compatible = "fsl,ifc", "simple-bus";
+	compatible = "fsl,ifc";
 	/* FIXME: Test whether interrupts are split */
 	interrupts = <16 2 0 0 20 2 0 0>;
 };
+1 -1
arch/powerpc/boot/dts/fsl/c293pcie.dts
···
 	device_type = "memory";
 };

-ifc: ifc@fffe1e000 {
+ifc: memory-controller@fffe1e000 {
 	reg = <0xf 0xffe1e000 0 0x2000>;
 	ranges = <0x0 0x0 0xf 0xec000000 0x04000000
 		  0x1 0x0 0xf 0xff800000 0x00010000
+1 -1
arch/powerpc/boot/dts/fsl/c293si-post.dtsi
···
 &ifc {
 	#address-cells = <2>;
 	#size-cells = <1>;
-	compatible = "fsl,ifc", "simple-bus";
+	compatible = "fsl,ifc";
 	interrupts = <19 2 0 0>;
 };

+12 -2
arch/powerpc/boot/dts/fsl/mpc8536si-post.dtsi
···

 /include/ "pq3-dma-0.dtsi"
 /include/ "pq3-etsec1-0.dtsi"
+enet0: ethernet@24000 {
+	fsl,wake-on-filer;
+	fsl,pmc-handle = <&etsec1_clk>;
+};
 /include/ "pq3-etsec1-timer-0.dtsi"

 usb@22000 {
···
 };

 /include/ "pq3-etsec1-2.dtsi"
-
-ethernet@26000 {
+enet2: ethernet@26000 {
 	cell-index = <1>;
+	fsl,wake-on-filer;
+	fsl,pmc-handle = <&etsec3_clk>;
 };

 usb@2b000 {
···
 	compatible = "fsl,mpc8536-guts";
 	reg = <0xe0000 0x1000>;
 	fsl,has-rstcr;
+};
+
+/include/ "pq3-power.dtsi"
+power@e0070 {
+	compatible = "fsl,mpc8536-pmc", "fsl,mpc8548-pmc";
 };
+2
arch/powerpc/boot/dts/fsl/mpc8544si-post.dtsi
···
 	reg = <0xe0000 0x1000>;
 	fsl,has-rstcr;
 };
+
+/include/ "pq3-power.dtsi"
+2
arch/powerpc/boot/dts/fsl/mpc8548si-post.dtsi
···
 	reg = <0xe0000 0x1000>;
 	fsl,has-rstcr;
 };
+
+/include/ "pq3-power.dtsi"
+2
arch/powerpc/boot/dts/fsl/mpc8572si-post.dtsi
···
 	reg = <0xe0000 0x1000>;
 	fsl,has-rstcr;
 };
+
+/include/ "pq3-power.dtsi"
+16
arch/powerpc/boot/dts/fsl/p1010rdb-pb.dts
···
 };

 /include/ "p1010si-post.dtsi"
+
+&pci0 {
+	pcie@0 {
+		interrupt-map = <
+			/* IDSEL 0x0 */
+			/*
+			 *irq[4:5] are active-high
+			 *irq[6:7] are active-low
+			 */
+			0000 0x0 0x0 0x1 &mpic 0x4 0x2 0x0 0x0
+			0000 0x0 0x0 0x2 &mpic 0x5 0x2 0x0 0x0
+			0000 0x0 0x0 0x3 &mpic 0x6 0x1 0x0 0x0
+			0000 0x0 0x0 0x4 &mpic 0x7 0x1 0x0 0x0
+			>;
+	};
+};
+16
arch/powerpc/boot/dts/fsl/p1010rdb-pb_36b.dts
···
 };

 /include/ "p1010si-post.dtsi"
+
+&pci0 {
+	pcie@0 {
+		interrupt-map = <
+			/* IDSEL 0x0 */
+			/*
+			 *irq[4:5] are active-high
+			 *irq[6:7] are active-low
+			 */
+			0000 0x0 0x0 0x1 &mpic 0x4 0x2 0x0 0x0
+			0000 0x0 0x0 0x2 &mpic 0x5 0x2 0x0 0x0
+			0000 0x0 0x0 0x3 &mpic 0x6 0x1 0x0 0x0
+			0000 0x0 0x0 0x4 &mpic 0x7 0x1 0x0 0x0
+			>;
+	};
+};
-16
arch/powerpc/boot/dts/fsl/p1010rdb.dtsi
···
 		phy-connection-type = "sgmii";
 	};
 };
-
-&pci0 {
-	pcie@0 {
-		interrupt-map = <
-			/* IDSEL 0x0 */
-			/*
-			 *irq[4:5] are active-high
-			 *irq[6:7] are active-low
-			 */
-			0000 0x0 0x0 0x1 &mpic 0x4 0x2 0x0 0x0
-			0000 0x0 0x0 0x2 &mpic 0x5 0x2 0x0 0x0
-			0000 0x0 0x0 0x3 &mpic 0x6 0x1 0x0 0x0
-			0000 0x0 0x0 0x4 &mpic 0x7 0x1 0x0 0x0
-			>;
-	};
-};
+1 -1
arch/powerpc/boot/dts/fsl/p1010rdb_32b.dtsi
···
 	device_type = "memory";
 };

-board_ifc: ifc: ifc@ffe1e000 {
+board_ifc: ifc: memory-controller@ffe1e000 {
 	/* NOR, NAND Flashes and CPLD on board */
 	ranges = <0x0 0x0 0x0 0xee000000 0x02000000
 		  0x1 0x0 0x0 0xff800000 0x00010000
+1 -1
arch/powerpc/boot/dts/fsl/p1010rdb_36b.dtsi
···
 	device_type = "memory";
 };

-board_ifc: ifc: ifc@fffe1e000 {
+board_ifc: ifc: memory-controller@fffe1e000 {
 	/* NOR, NAND Flashes and CPLD on board */
 	ranges = <0x0 0x0 0xf 0xee000000 0x02000000
 		  0x1 0x0 0xf 0xff800000 0x00010000
+15 -1
arch/powerpc/boot/dts/fsl/p1010si-post.dtsi
···
 &ifc {
 	#address-cells = <2>;
 	#size-cells = <1>;
-	compatible = "fsl,ifc", "simple-bus";
+	compatible = "fsl,ifc";
 	interrupts = <16 2 0 0 19 2 0 0>;
 };
···
 /include/ "pq3-etsec2-1.dtsi"
 /include/ "pq3-etsec2-2.dtsi"

+enet0: ethernet@b0000 {
+	fsl,pmc-handle = <&etsec1_clk>;
+};
+
+enet1: ethernet@b1000 {
+	fsl,pmc-handle = <&etsec2_clk>;
+};
+
+enet2: ethernet@b2000 {
+	fsl,pmc-handle = <&etsec3_clk>;
+};
+
 global-utilities@e0000 {
 	compatible = "fsl,p1010-guts";
 	reg = <0xe0000 0x1000>;
 	fsl,has-rstcr;
 };
+
+/include/ "pq3-power.dtsi"
 };
+5
arch/powerpc/boot/dts/fsl/p1020si-post.dtsi
··· 163 163 164 164 /include/ "pq3-etsec2-0.dtsi" 165 165 enet0: enet0_grp2: ethernet@b0000 { 166 + fsl,pmc-handle = <&etsec1_clk>; 166 167 }; 167 168 168 169 /include/ "pq3-etsec2-1.dtsi" 169 170 enet1: enet1_grp2: ethernet@b1000 { 171 + fsl,pmc-handle = <&etsec2_clk>; 170 172 }; 171 173 172 174 /include/ "pq3-etsec2-2.dtsi" 173 175 enet2: enet2_grp2: ethernet@b2000 { 176 + fsl,pmc-handle = <&etsec3_clk>; 174 177 }; 175 178 176 179 global-utilities@e0000 { ··· 181 178 reg = <0xe0000 0x1000>; 182 179 fsl,has-rstcr; 183 180 }; 181 + 182 + /include/ "pq3-power.dtsi" 184 183 }; 185 184 186 185 /include/ "pq3-etsec2-grp2-0.dtsi"
+5
arch/powerpc/boot/dts/fsl/p1021si-post.dtsi
··· 159 159 160 160 /include/ "pq3-etsec2-0.dtsi" 161 161 enet0: enet0_grp2: ethernet@b0000 { 162 + fsl,pmc-handle = <&etsec1_clk>; 162 163 }; 163 164 164 165 /include/ "pq3-etsec2-1.dtsi" 165 166 enet1: enet1_grp2: ethernet@b1000 { 167 + fsl,pmc-handle = <&etsec2_clk>; 166 168 }; 167 169 168 170 /include/ "pq3-etsec2-2.dtsi" 169 171 enet2: enet2_grp2: ethernet@b2000 { 172 + fsl,pmc-handle = <&etsec3_clk>; 170 173 }; 171 174 172 175 global-utilities@e0000 { ··· 177 174 reg = <0xe0000 0x1000>; 178 175 fsl,has-rstcr; 179 176 }; 177 + 178 + /include/ "pq3-power.dtsi" 180 179 }; 181 180 182 181 &qe {
+5 -2
arch/powerpc/boot/dts/fsl/p1022si-post.dtsi
··· 225 225 /include/ "pq3-etsec2-0.dtsi" 226 226 enet0: enet0_grp2: ethernet@b0000 { 227 227 fsl,wake-on-filer; 228 + fsl,pmc-handle = <&etsec1_clk>; 228 229 }; 229 230 230 231 /include/ "pq3-etsec2-1.dtsi" 231 232 enet1: enet1_grp2: ethernet@b1000 { 232 233 fsl,wake-on-filer; 234 + fsl,pmc-handle = <&etsec2_clk>; 233 235 }; 234 236 235 237 global-utilities@e0000 { ··· 240 238 fsl,has-rstcr; 241 239 }; 242 240 241 + /include/ "pq3-power.dtsi" 243 242 power@e0070 { 244 - compatible = "fsl,mpc8536-pmc", "fsl,mpc8548-pmc"; 245 - reg = <0xe0070 0x20>; 243 + compatible = "fsl,p1022-pmc", "fsl,mpc8536-pmc", 244 + "fsl,mpc8548-pmc"; 246 245 }; 247 246 248 247 };
+13 -4
arch/powerpc/boot/dts/fsl/p2020si-post.dtsi
··· 178 178 compatible = "fsl-usb2-dr-v1.6", "fsl-usb2-dr"; 179 179 }; 180 180 /include/ "pq3-etsec1-0.dtsi" 181 + enet0: ethernet@24000 { 182 + fsl,pmc-handle = <&etsec1_clk>; 183 + 184 + }; 181 185 /include/ "pq3-etsec1-timer-0.dtsi" 182 186 183 187 ptp_clock@24e00 { ··· 190 186 191 187 192 188 /include/ "pq3-etsec1-1.dtsi" 189 + enet1: ethernet@25000 { 190 + fsl,pmc-handle = <&etsec2_clk>; 191 + }; 192 + 193 193 /include/ "pq3-etsec1-2.dtsi" 194 + enet2: ethernet@26000 { 195 + fsl,pmc-handle = <&etsec3_clk>; 196 + }; 197 + 194 198 /include/ "pq3-esdhc-0.dtsi" 195 199 sdhc@2e000 { 196 200 compatible = "fsl,p2020-esdhc", "fsl,esdhc"; ··· 214 202 fsl,has-rstcr; 215 203 }; 216 204 217 - pmc: power@e0070 { 218 - compatible = "fsl,mpc8548-pmc"; 219 - reg = <0xe0070 0x20>; 220 - }; 205 + /include/ "pq3-power.dtsi" 221 206 };
+19
arch/powerpc/boot/dts/fsl/pq3-power.dtsi
··· 1 + // SPDX-License-Identifier: (GPL-2.0+) 2 + /* 3 + * Copyright 2024 NXP 4 + */ 5 + 6 + power@e0070 { 7 + compatible = "fsl,mpc8548-pmc"; 8 + reg = <0xe0070 0x20>; 9 + 10 + etsec1_clk: soc-clk@24 { 11 + fsl,pmcdr-mask = <0x00000080>; 12 + }; 13 + etsec2_clk: soc-clk@25 { 14 + fsl,pmcdr-mask = <0x00000040>; 15 + }; 16 + etsec3_clk: soc-clk@26 { 17 + fsl,pmcdr-mask = <0x00000020>; 18 + }; 19 + };
+1 -1
arch/powerpc/boot/dts/fsl/t1023si-post.dtsi
··· 52 52 &ifc { 53 53 #address-cells = <2>; 54 54 #size-cells = <1>; 55 - compatible = "fsl,ifc", "simple-bus"; 55 + compatible = "fsl,ifc"; 56 56 interrupts = <25 2 0 0>; 57 57 }; 58 58
+1 -1
arch/powerpc/boot/dts/fsl/t1024rdb.dts
··· 91 91 board-control@2,0 { 92 92 #address-cells = <1>; 93 93 #size-cells = <1>; 94 - compatible = "fsl,t1024-cpld"; 94 + compatible = "fsl,t1024-cpld", "fsl,deepsleep-cpld"; 95 95 reg = <3 0 0x300>; 96 96 ranges = <0 3 0 0x300>; 97 97 bank-width = <1>;
+1 -1
arch/powerpc/boot/dts/fsl/t1040rdb.dts
··· 104 104 105 105 ifc: localbus@ffe124000 { 106 106 cpld@3,0 { 107 - compatible = "fsl,t1040rdb-cpld"; 107 + compatible = "fsl,t104xrdb-cpld", "fsl,deepsleep-cpld"; 108 108 }; 109 109 }; 110 110 };
+1 -1
arch/powerpc/boot/dts/fsl/t1040si-post.dtsi
··· 52 52 &ifc { 53 53 #address-cells = <2>; 54 54 #size-cells = <1>; 55 - compatible = "fsl,ifc", "simple-bus"; 55 + compatible = "fsl,ifc"; 56 56 interrupts = <25 2 0 0>; 57 57 }; 58 58
+1 -1
arch/powerpc/boot/dts/fsl/t1042rdb.dts
··· 68 68 69 69 ifc: localbus@ffe124000 { 70 70 cpld@3,0 { 71 - compatible = "fsl,t1042rdb-cpld"; 71 + compatible = "fsl,t104xrdb-cpld", "fsl,deepsleep-cpld"; 72 72 }; 73 73 }; 74 74 };
+1 -1
arch/powerpc/boot/dts/fsl/t1042rdb_pi.dts
··· 41 41 42 42 ifc: localbus@ffe124000 { 43 43 cpld@3,0 { 44 - compatible = "fsl,t1042rdb_pi-cpld"; 44 + compatible = "fsl,t104xrdb-cpld", "fsl,deepsleep-cpld"; 45 45 }; 46 46 }; 47 47
+1 -1
arch/powerpc/boot/dts/fsl/t2081si-post.dtsi
··· 50 50 &ifc { 51 51 #address-cells = <2>; 52 52 #size-cells = <1>; 53 - compatible = "fsl,ifc", "simple-bus"; 53 + compatible = "fsl,ifc"; 54 54 interrupts = <25 2 0 0>; 55 55 }; 56 56
+1 -1
arch/powerpc/boot/dts/fsl/t4240si-post.dtsi
··· 50 50 &ifc { 51 51 #address-cells = <2>; 52 52 #size-cells = <1>; 53 - compatible = "fsl,ifc", "simple-bus"; 53 + compatible = "fsl,ifc"; 54 54 interrupts = <25 2 0 0>; 55 55 }; 56 56
+1 -1
arch/powerpc/boot/main.c
··· 188 188 189 189 /* A buffer that may be edited by tools operating on a zImage binary so as to 190 190 * edit the command line passed to vmlinux (by setting /chosen/bootargs). 191 - * The buffer is put in it's own section so that tools may locate it easier. 191 + * The buffer is put in its own section so that tools may locate it easier. 192 192 */ 193 193 static char cmdline[BOOT_COMMAND_LINE_SIZE] 194 194 __attribute__((__section__("__builtin_cmdline")));
+1 -1
arch/powerpc/boot/ps3.c
··· 25 25 26 26 /* A buffer that may be edited by tools operating on a zImage binary so as to 27 27 * edit the command line passed to vmlinux (by setting /chosen/bootargs). 28 - * The buffer is put in it's own section so that tools may locate it easier. 28 + * The buffer is put in its own section so that tools may locate it easier. 29 29 */ 30 30 31 31 static char cmdline[BOOT_COMMAND_LINE_SIZE]
+1 -1
arch/powerpc/include/asm/cpu_has_feature.h
··· 29 29 #endif 30 30 31 31 #ifdef CONFIG_JUMP_LABEL_FEATURE_CHECK_DEBUG 32 - if (!static_key_initialized) { 32 + if (!static_key_feature_checks_initialized) { 33 33 printk("Warning! cpu_has_feature() used prior to jump label init!\n"); 34 34 dump_stack(); 35 35 return early_cpu_has_feature(feature);
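The cpu_has_feature.h hunk above swaps the generic `static_key_initialized` check for the powerpc-specific `static_key_feature_checks_initialized` flag, which is only set once the CPU/MMU feature jump labels themselves have been patched. A rough userspace analogue of that guard (all names hypothetical, with a plain boolean standing in for the jump-label machinery):

```c
#include <stdbool.h>

static bool feature_keys_initialized;
static unsigned long cpu_features = 0x5;	/* hypothetical feature bitmask */
static unsigned long fast_path_features;	/* "patched" at init time      */

/* Slow path: consult the raw bitmask directly. */
static bool early_has_feature(unsigned long f)
{
	return cpu_features & f;
}

/* Until initialization, fall back to the slow bitmask test instead of
 * trusting the (not yet patched) fast path -- the situation the debug
 * warning in the hunk is there to catch. */
static bool has_feature(unsigned long f)
{
	if (!feature_keys_initialized)
		return early_has_feature(f);
	return fast_path_features & f;		/* stand-in for static branch */
}

static void init_feature_keys(void)
{
	fast_path_features = cpu_features;	/* stand-in for key patching */
	feature_keys_initialized = true;
}
```

After `init_feature_keys()` runs, the fast path is authoritative and the fallback is never taken again.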
+1 -1
arch/powerpc/include/asm/eeh.h
··· 82 82 int false_positives; /* Times of reported #ff's */ 83 83 atomic_t pass_dev_cnt; /* Count of passed through devs */ 84 84 struct eeh_pe *parent; /* Parent PE */ 85 - void *data; /* PE auxillary data */ 85 + void *data; /* PE auxiliary data */ 86 86 struct list_head child_list; /* List of PEs below this PE */ 87 87 struct list_head child; /* Memb. child_list/eeh_phb_pe */ 88 88 struct list_head edevs; /* List of eeh_dev in this PE */
+33 -3
arch/powerpc/include/asm/fadump-internal.h
··· 42 42 43 43 #define FADUMP_CPU_UNKNOWN (~((u32)0)) 44 44 45 - #define FADUMP_CRASH_INFO_MAGIC fadump_str_to_u64("FADMPINF") 45 + /* 46 + * The introduction of new fields in the fadump crash info header has 47 + * led to a change in the magic key from `FADMPINF` to `FADMPSIG` for 48 + * identifying a kernel crash from an old kernel. 49 + * 50 + * To prevent the need for further changes to the magic number in the 51 + * event of future modifications to the fadump crash info header, a 52 + * version field has been introduced to track the fadump crash info 53 + * header version. 54 + * 55 + * Consider a few points before adding new members to the fadump crash info 56 + * header structure: 57 + * 58 + * - Append new members; avoid adding them in between. 59 + * - Non-primitive members should have a size member as well. 60 + * - For every change in the fadump header, increment the 61 + * fadump header version. This helps the updated kernel decide how to 62 + * handle kernel dumps from older kernels. 63 + */ 64 + #define FADUMP_CRASH_INFO_MAGIC_OLD fadump_str_to_u64("FADMPINF") 65 + #define FADUMP_CRASH_INFO_MAGIC fadump_str_to_u64("FADMPSIG") 66 + #define FADUMP_HEADER_VERSION 1 46 67 47 68 /* fadump crash info structure */ 48 69 struct fadump_crash_info_header { 49 70 u64 magic_number; 50 - u64 elfcorehdr_addr; 71 + u32 version; 51 72 u32 crashing_cpu; 73 + u64 vmcoreinfo_raddr; 74 + u64 vmcoreinfo_size; 75 + u32 pt_regs_sz; 76 + u32 cpu_mask_sz; 52 77 struct pt_regs regs; 53 78 struct cpumask cpu_mask; 54 79 }; ··· 119 94 u64 boot_mem_regs_cnt; 120 95 121 96 unsigned long fadumphdr_addr; 97 + u64 elfcorehdr_addr; 98 + u64 elfcorehdr_size; 122 99 unsigned long cpu_notes_buf_vaddr; 123 100 unsigned long cpu_notes_buf_size; 101 + 102 + unsigned long param_area; 124 103 125 104 /* 126 105 * Maximum size supported by firmware to copy from source to ··· 140 111 unsigned long dump_active:1; 141 112 unsigned long dump_registered:1; 142 113 unsigned long nocma:1; 114 + unsigned long param_area_supported:1; 143 115 144 116 struct fadump_ops *ops; 145 117 }; ··· 159 129 struct seq_file *m); 160 130 void (*fadump_trigger)(struct fadump_crash_info_header *fdh, 161 131 const char *msg); 132 + int (*fadump_max_boot_mem_rgns)(void); 162 133 }; 163 134 164 135 /* Helper functions */ ··· 167 136 void fadump_free_cpu_notes_buf(void); 168 137 u32 *__init fadump_regs_to_elf_notes(u32 *buf, struct pt_regs *regs); 169 138 void __init fadump_update_elfcore_header(char *bufp); 170 - bool is_fadump_boot_mem_contiguous(void); 171 139 bool is_fadump_reserved_mem_contiguous(void); 172 140 173 141 #else /* !CONFIG_PRESERVE_FA_DUMP */
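The comment block in the fadump-internal.h hunk above describes a small compatibility protocol: the magic number distinguishes unversioned `FADMPINF` headers written by old kernels from versioned `FADMPSIG` ones, and the version field tells the capture kernel which layout to expect. A minimal userspace sketch of that decision, with `str_to_u64` mirroring the kernel's `fadump_str_to_u64()` packing (the `header_version` helper itself is hypothetical, not kernel code):

```c
#include <stdint.h>

/* Pack up to 8 characters into a big-endian-style u64 key, the same
 * scheme fadump_str_to_u64() uses for the magic values. */
static uint64_t str_to_u64(const char *str)
{
	uint64_t val = 0;
	int i;

	for (i = 0; i < 8; i++)
		val = (*str) ? (val << 8) | *str++ : val << 8;
	return val;
}

#define MAGIC_OLD str_to_u64("FADMPINF")	/* pre-version header  */
#define MAGIC_NEW str_to_u64("FADMPSIG")	/* versioned header    */

/* Decide how to interpret a crash info header: 0 selects the legacy
 * layout (no version field), a positive value is the stored header
 * version, and -1 means the magic is not recognized at all. */
static int header_version(uint64_t magic, uint32_t version)
{
	if (magic == MAGIC_OLD)
		return 0;
	if (magic == MAGIC_NEW)
		return (int)version;
	return -1;
}
```

This is why the comment insists on append-only growth and a version bump per change: an updated capture kernel can then branch on `header_version()` instead of needing yet another magic value.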
+2
arch/powerpc/include/asm/fadump.h
··· 19 19 extern int should_fadump_crash(void); 20 20 extern void crash_fadump(struct pt_regs *, const char *); 21 21 extern void fadump_cleanup(void); 22 + extern void fadump_append_bootargs(void); 22 23 23 24 #else /* CONFIG_FA_DUMP */ 24 25 static inline int is_fadump_active(void) { return 0; } 25 26 static inline int should_fadump_crash(void) { return 0; } 26 27 static inline void crash_fadump(struct pt_regs *regs, const char *str) { } 27 28 static inline void fadump_cleanup(void) { } 29 + static inline void fadump_append_bootargs(void) { } 28 30 #endif /* !CONFIG_FA_DUMP */ 29 31 30 32 #if defined(CONFIG_FA_DUMP) || defined(CONFIG_PRESERVE_FA_DUMP)
+2
arch/powerpc/include/asm/feature-fixups.h
··· 291 291 extern long __start___barrier_nospec_fixup, __stop___barrier_nospec_fixup; 292 292 extern long __start__btb_flush_fixup, __stop__btb_flush_fixup; 293 293 294 + extern bool static_key_feature_checks_initialized; 295 + 294 296 void apply_feature_fixups(void); 295 297 void update_mmu_feature_fixups(unsigned long mask); 296 298 void setup_feature_keys(void);
+5 -5
arch/powerpc/include/asm/hvcall.h
··· 524 524 * Used for all but the craziest of phyp interfaces (see plpar_hcall9) 525 525 */ 526 526 #define PLPAR_HCALL_BUFSIZE 4 527 - long plpar_hcall(unsigned long opcode, unsigned long *retbuf, ...); 527 + long plpar_hcall(unsigned long opcode, unsigned long retbuf[static PLPAR_HCALL_BUFSIZE], ...); 528 528 529 529 /** 530 530 * plpar_hcall_raw: - Make a hypervisor call without calculating hcall stats ··· 538 538 * plpar_hcall, but plpar_hcall_raw works in real mode and does not 539 539 * calculate hypervisor call statistics. 540 540 */ 541 - long plpar_hcall_raw(unsigned long opcode, unsigned long *retbuf, ...); 541 + long plpar_hcall_raw(unsigned long opcode, unsigned long retbuf[static PLPAR_HCALL_BUFSIZE], ...); 542 542 543 543 /** 544 544 * plpar_hcall9: - Make a pseries hypervisor call with up to 9 return arguments ··· 549 549 * PLPAR_HCALL9_BUFSIZE to size the return argument buffer. 550 550 */ 551 551 #define PLPAR_HCALL9_BUFSIZE 9 552 - long plpar_hcall9(unsigned long opcode, unsigned long *retbuf, ...); 553 - long plpar_hcall9_raw(unsigned long opcode, unsigned long *retbuf, ...); 552 + long plpar_hcall9(unsigned long opcode, unsigned long retbuf[static PLPAR_HCALL9_BUFSIZE], ...); 553 + long plpar_hcall9_raw(unsigned long opcode, unsigned long retbuf[static PLPAR_HCALL9_BUFSIZE], ...); 554 554 555 555 /* pseries hcall tracing */ 556 556 extern struct static_key hcall_tracepoint_key; ··· 570 570 unsigned long backing_mem; 571 571 }; 572 572 573 - int h_get_mpp(struct hvcall_mpp_data *); 573 + long h_get_mpp(struct hvcall_mpp_data *mpp_data); 574 574 575 575 struct hvcall_mpp_x_data { 576 576 unsigned long coalesced_bytes;
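The hvcall.h prototype changes above replace `unsigned long *retbuf` with C99's `[static N]` array-parameter form: `retbuf[static PLPAR_HCALL_BUFSIZE]` documents that callers must supply at least that many valid elements, and compilers can warn at call sites where a visibly smaller array is passed. A standalone sketch of the idiom (all names here are hypothetical):

```c
#define BUFSIZE 4

/* '[static BUFSIZE]' promises the callee at least BUFSIZE elements;
 * passing e.g. 'long small[2]' can draw a compiler diagnostic where
 * the array's size is visible at the call site. */
static long fill_retbuf(long retbuf[static BUFSIZE])
{
	for (int i = 0; i < BUFSIZE; i++)
		retbuf[i] = i * 10;	/* stand-in for hypervisor return values */
	return 0;			/* stand-in for a success code */
}
```

The semantics of the call are unchanged (the parameter still decays to a pointer), so this is purely a documentation and diagnostics improvement, which is why the hunk touches only the prototypes.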
+10
arch/powerpc/include/asm/interrupt.h
··· 336 336 if (IS_ENABLED(CONFIG_KASAN)) 337 337 return; 338 338 339 + /* 340 + * Likewise, do not use it in real mode if percpu first chunk is not 341 + * embedded. With CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK enabled there 342 + * are chances where percpu allocation can come from vmalloc area. 343 + */ 344 + if (percpu_first_chunk_is_paged) 345 + return; 346 + 339 347 /* Otherwise, it should be safe to call it */ 340 348 nmi_enter(); 341 349 } ··· 359 351 // no nmi_exit for a pseries hash guest taking a real mode exception 360 352 } else if (IS_ENABLED(CONFIG_KASAN)) { 361 353 // no nmi_exit for KASAN in real mode 354 + } else if (percpu_first_chunk_is_paged) { 355 + // no nmi_exit if percpu first chunk is not embedded 362 356 } else { 363 357 nmi_exit(); 364 358 }
+14 -14
arch/powerpc/include/asm/io.h
··· 37 37 * define properly based on the platform 38 38 */ 39 39 #ifndef CONFIG_PCI 40 - #define _IO_BASE 0 40 + #define _IO_BASE POISON_POINTER_DELTA 41 41 #define _ISA_MEM_BASE 0 42 42 #define PCI_DRAM_OFFSET 0 43 43 #elif defined(CONFIG_PPC32) ··· 585 585 #define __do_inw(port) _rec_inw(port) 586 586 #define __do_inl(port) _rec_inl(port) 587 587 #else /* CONFIG_PPC32 */ 588 - #define __do_outb(val, port) writeb(val,(PCI_IO_ADDR)_IO_BASE+port); 589 - #define __do_outw(val, port) writew(val,(PCI_IO_ADDR)_IO_BASE+port); 590 - #define __do_outl(val, port) writel(val,(PCI_IO_ADDR)_IO_BASE+port); 591 - #define __do_inb(port) readb((PCI_IO_ADDR)_IO_BASE + port); 592 - #define __do_inw(port) readw((PCI_IO_ADDR)_IO_BASE + port); 593 - #define __do_inl(port) readl((PCI_IO_ADDR)_IO_BASE + port); 588 + #define __do_outb(val, port) writeb(val,(PCI_IO_ADDR)(_IO_BASE+port)); 589 + #define __do_outw(val, port) writew(val,(PCI_IO_ADDR)(_IO_BASE+port)); 590 + #define __do_outl(val, port) writel(val,(PCI_IO_ADDR)(_IO_BASE+port)); 591 + #define __do_inb(port) readb((PCI_IO_ADDR)(_IO_BASE + port)); 592 + #define __do_inw(port) readw((PCI_IO_ADDR)(_IO_BASE + port)); 593 + #define __do_inl(port) readl((PCI_IO_ADDR)(_IO_BASE + port)); 594 594 #endif /* !CONFIG_PPC32 */ 595 595 596 596 #ifdef CONFIG_EEH ··· 606 606 #define __do_writesw(a, b, n) _outsw(PCI_FIX_ADDR(a),(b),(n)) 607 607 #define __do_writesl(a, b, n) _outsl(PCI_FIX_ADDR(a),(b),(n)) 608 608 609 - #define __do_insb(p, b, n) readsb((PCI_IO_ADDR)_IO_BASE+(p), (b), (n)) 610 - #define __do_insw(p, b, n) readsw((PCI_IO_ADDR)_IO_BASE+(p), (b), (n)) 611 - #define __do_insl(p, b, n) readsl((PCI_IO_ADDR)_IO_BASE+(p), (b), (n)) 612 - #define __do_outsb(p, b, n) writesb((PCI_IO_ADDR)_IO_BASE+(p),(b),(n)) 613 - #define __do_outsw(p, b, n) writesw((PCI_IO_ADDR)_IO_BASE+(p),(b),(n)) 614 - #define __do_outsl(p, b, n) writesl((PCI_IO_ADDR)_IO_BASE+(p),(b),(n)) 609 + #define __do_insb(p, b, n) readsb((PCI_IO_ADDR)(_IO_BASE+(p)), (b), (n)) 610 
+ #define __do_insw(p, b, n) readsw((PCI_IO_ADDR)(_IO_BASE+(p)), (b), (n)) 611 + #define __do_insl(p, b, n) readsl((PCI_IO_ADDR)(_IO_BASE+(p)), (b), (n)) 612 + #define __do_outsb(p, b, n) writesb((PCI_IO_ADDR)(_IO_BASE+(p)),(b),(n)) 613 + #define __do_outsw(p, b, n) writesw((PCI_IO_ADDR)(_IO_BASE+(p)),(b),(n)) 614 + #define __do_outsl(p, b, n) writesl((PCI_IO_ADDR)(_IO_BASE+(p)),(b),(n)) 615 615 616 616 #define __do_memset_io(addr, c, n) \ 617 617 _memset_io(PCI_FIX_ADDR(addr), c, n) ··· 982 982 } 983 983 984 984 /* 985 - * 32 bits still uses virt_to_bus() for it's implementation of DMA 985 + * 32 bits still uses virt_to_bus() for its implementation of DMA 986 986 * mappings se we have to keep it defined here. We also have some old 987 987 * drivers (shame shame shame) that use bus_to_virt() and haven't been 988 988 * fixed yet so I need to define it here.
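The io.h hunks above add parentheses around `_IO_BASE + port` inside the accessor macros. This is the standard macro-hygiene rule: without the parentheses, operator precedence can regroup the expansion whenever a macro argument is itself an expression. A tiny self-contained demonstration with made-up `BASE`/`SCALED_*` names:

```c
#define BASE 0x100

/* Unparenthesized: 'x * 4' binds tighter than the outer '+', so a
 * compound argument regroups as BASE + 1 + (1 * 4). */
#define SCALED_BAD(x)  BASE + x * 4

/* Parenthesized, in the spirit of the io.h fix: the argument and the
 * base+offset sum are each evaluated as a unit. */
#define SCALED_GOOD(x) (BASE + ((x) * 4))
```

With the argument `1 + 1`, the two forms disagree, which is exactly the class of silent miscomputation the parenthesized accessors rule out.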
+15
arch/powerpc/include/asm/kexec.h
··· 135 135 ppc_save_regs(newregs); 136 136 } 137 137 138 + #ifdef CONFIG_CRASH_HOTPLUG 139 + void arch_crash_handle_hotplug_event(struct kimage *image, void *arg); 140 + #define arch_crash_handle_hotplug_event arch_crash_handle_hotplug_event 141 + 142 + int arch_crash_hotplug_support(struct kimage *image, unsigned long kexec_flags); 143 + #define arch_crash_hotplug_support arch_crash_hotplug_support 144 + 145 + unsigned int arch_crash_get_elfcorehdr_size(void); 146 + #define crash_get_elfcorehdr_size arch_crash_get_elfcorehdr_size 147 + #endif /* CONFIG_CRASH_HOTPLUG */ 148 + 138 149 extern int crashing_cpu; 139 150 extern void crash_send_ipi(void (*crash_ipi_callback)(struct pt_regs *)); 140 151 extern void crash_ipi_callback(struct pt_regs *regs); ··· 195 184 } 196 185 197 186 #endif /* CONFIG_CRASH_DUMP */ 187 + 188 + #if defined(CONFIG_KEXEC_FILE) || defined(CONFIG_CRASH_DUMP) 189 + int update_cpus_node(void *fdt); 190 + #endif 198 191 199 192 #ifdef CONFIG_PPC_BOOK3S_64 200 193 #include <asm/book3s/64/kexec.h>
+5 -15
arch/powerpc/include/asm/kexec_ranges.h
··· 7 7 void sort_memory_ranges(struct crash_mem *mrngs, bool merge); 8 8 struct crash_mem *realloc_mem_ranges(struct crash_mem **mem_ranges); 9 9 int add_mem_range(struct crash_mem **mem_ranges, u64 base, u64 size); 10 - int add_tce_mem_ranges(struct crash_mem **mem_ranges); 11 - int add_initrd_mem_range(struct crash_mem **mem_ranges); 12 - #ifdef CONFIG_PPC_64S_HASH_MMU 13 - int add_htab_mem_range(struct crash_mem **mem_ranges); 14 - #else 15 - static inline int add_htab_mem_range(struct crash_mem **mem_ranges) 16 - { 17 - return 0; 18 - } 19 - #endif 20 - int add_kernel_mem_range(struct crash_mem **mem_ranges); 21 - int add_rtas_mem_range(struct crash_mem **mem_ranges); 22 - int add_opal_mem_range(struct crash_mem **mem_ranges); 23 - int add_reserved_mem_ranges(struct crash_mem **mem_ranges); 24 - 10 + int remove_mem_range(struct crash_mem **mem_ranges, u64 base, u64 size); 11 + int get_exclude_memory_ranges(struct crash_mem **mem_ranges); 12 + int get_reserved_memory_ranges(struct crash_mem **mem_ranges); 13 + int get_crash_memory_ranges(struct crash_mem **mem_ranges); 14 + int get_usable_memory_ranges(struct crash_mem **mem_ranges); 25 15 #endif /* _ASM_POWERPC_KEXEC_RANGES_H */
+1 -1
arch/powerpc/include/asm/mmu.h
··· 251 251 #endif 252 252 253 253 #ifdef CONFIG_JUMP_LABEL_FEATURE_CHECK_DEBUG 254 - if (!static_key_initialized) { 254 + if (!static_key_feature_checks_initialized) { 255 255 printk("Warning! mmu_has_feature() used prior to jump label init!\n"); 256 256 dump_stack(); 257 257 return early_mmu_has_feature(feature);
-5
arch/powerpc/include/asm/module.h
··· 48 48 unsigned long tramp; 49 49 unsigned long tramp_regs; 50 50 #endif 51 - 52 - /* List of BUG addresses, source line numbers and filenames */ 53 - struct list_head bug_list; 54 - struct bug_entry *bug_table; 55 - unsigned int num_bugs; 56 51 }; 57 52 58 53 /*
+2 -2
arch/powerpc/include/asm/opal-api.h
··· 1027 1027 * The host will pass on OPAL, a buffer of length OPAL_SYSEPOW_MAX 1028 1028 * with individual elements being 16 bits wide to fetch the system 1029 1029 * wide EPOW status. Each element in the buffer will contain the 1030 - * EPOW status in it's bit representation for a particular EPOW sub 1030 + * EPOW status in its bit representation for a particular EPOW sub 1031 1031 * class as defined here. So multiple detailed EPOW status bits 1032 1032 * specific for any sub class can be represented in a single buffer 1033 - * element as it's bit representation. 1033 + * element as its bit representation. 1034 1034 */ 1035 1035 1036 1036 /* System EPOW type */
+10
arch/powerpc/include/asm/percpu.h
··· 15 15 #endif /* CONFIG_SMP */ 16 16 #endif /* __powerpc64__ */ 17 17 18 + #if defined(CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK) && defined(CONFIG_SMP) 19 + #include <linux/jump_label.h> 20 + DECLARE_STATIC_KEY_FALSE(__percpu_first_chunk_is_paged); 21 + 22 + #define percpu_first_chunk_is_paged \ 23 + (static_key_enabled(&__percpu_first_chunk_is_paged.key)) 24 + #else 25 + #define percpu_first_chunk_is_paged false 26 + #endif /* CONFIG_PPC64 && CONFIG_SMP */ 27 + 18 28 #include <asm-generic/percpu.h> 19 29 20 30 #include <asm/paca.h>
+1 -1
arch/powerpc/include/asm/pmac_feature.h
··· 192 192 193 193 /* PMAC_FTR_BMAC_ENABLE (struct device_node* node, 0, int value) 194 194 * enable/disable the bmac (ethernet) cell of a mac-io ASIC, also drive 195 - * it's reset line 195 + * its reset line 196 196 */ 197 197 #define PMAC_FTR_BMAC_ENABLE PMAC_FTR_DEF(6) 198 198
+4
arch/powerpc/include/asm/ppc-opcode.h
··· 510 510 #define PPC_RAW_STB(r, base, i) (0x98000000 | ___PPC_RS(r) | ___PPC_RA(base) | IMM_L(i)) 511 511 #define PPC_RAW_LBZ(r, base, i) (0x88000000 | ___PPC_RT(r) | ___PPC_RA(base) | IMM_L(i)) 512 512 #define PPC_RAW_LDX(r, base, b) (0x7c00002a | ___PPC_RT(r) | ___PPC_RA(base) | ___PPC_RB(b)) 513 + #define PPC_RAW_LHA(r, base, i) (0xa8000000 | ___PPC_RT(r) | ___PPC_RA(base) | IMM_L(i)) 513 514 #define PPC_RAW_LHZ(r, base, i) (0xa0000000 | ___PPC_RT(r) | ___PPC_RA(base) | IMM_L(i)) 514 515 #define PPC_RAW_LHBRX(r, base, b) (0x7c00062c | ___PPC_RT(r) | ___PPC_RA(base) | ___PPC_RB(b)) 515 516 #define PPC_RAW_LWBRX(r, base, b) (0x7c00042c | ___PPC_RT(r) | ___PPC_RA(base) | ___PPC_RB(b)) ··· 533 532 #define PPC_RAW_MULW(d, a, b) (0x7c0001d6 | ___PPC_RT(d) | ___PPC_RA(a) | ___PPC_RB(b)) 534 533 #define PPC_RAW_MULHWU(d, a, b) (0x7c000016 | ___PPC_RT(d) | ___PPC_RA(a) | ___PPC_RB(b)) 535 534 #define PPC_RAW_MULI(d, a, i) (0x1c000000 | ___PPC_RT(d) | ___PPC_RA(a) | IMM_L(i)) 535 + #define PPC_RAW_DIVW(d, a, b) (0x7c0003d6 | ___PPC_RT(d) | ___PPC_RA(a) | ___PPC_RB(b)) 536 536 #define PPC_RAW_DIVWU(d, a, b) (0x7c000396 | ___PPC_RT(d) | ___PPC_RA(a) | ___PPC_RB(b)) 537 537 #define PPC_RAW_DIVDU(d, a, b) (0x7c000392 | ___PPC_RT(d) | ___PPC_RA(a) | ___PPC_RB(b)) 538 538 #define PPC_RAW_DIVDE(t, a, b) (0x7c000352 | ___PPC_RT(t) | ___PPC_RA(a) | ___PPC_RB(b)) ··· 552 550 #define PPC_RAW_XOR(d, a, b) (0x7c000278 | ___PPC_RA(d) | ___PPC_RS(a) | ___PPC_RB(b)) 553 551 #define PPC_RAW_XORI(d, a, i) (0x68000000 | ___PPC_RA(d) | ___PPC_RS(a) | IMM_L(i)) 554 552 #define PPC_RAW_XORIS(d, a, i) (0x6c000000 | ___PPC_RA(d) | ___PPC_RS(a) | IMM_L(i)) 553 + #define PPC_RAW_EXTSB(d, a) (0x7c000774 | ___PPC_RA(d) | ___PPC_RS(a)) 554 + #define PPC_RAW_EXTSH(d, a) (0x7c000734 | ___PPC_RA(d) | ___PPC_RS(a)) 555 555 #define PPC_RAW_EXTSW(d, a) (0x7c0007b4 | ___PPC_RA(d) | ___PPC_RS(a)) 556 556 #define PPC_RAW_SLW(d, a, s) (0x7c000030 | ___PPC_RA(d) | ___PPC_RS(a) | ___PPC_RB(s)) 557 557 #define PPC_RAW_SLD(d, a, s) (0x7c000036 | ___PPC_RA(d) | ___PPC_RS(a) | ___PPC_RB(s))
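The new PPC_RAW_* entries in the ppc-opcode.h hunk above (LHA, DIVW, EXTSB/EXTSH) all follow the same scheme: a fixed opcode template ORed with register and immediate fields shifted into their ISA-defined bit positions. A self-contained sketch of the `lhz` encoding, matching the PPC_RAW_LHZ line shown (the field helpers are reimplemented locally, mirroring the `___PPC_RT`/`___PPC_RA`/`IMM_L` definitions):

```c
#include <stdint.h>

/* D-form field placement: RT in bits 21-25, RA in bits 16-20, and a
 * 16-bit immediate in the low half of the instruction word. */
#define RT(r)    (((uint32_t)(r) & 0x1f) << 21)
#define RA(r)    (((uint32_t)(r) & 0x1f) << 16)
#define IMM_L(i) ((uint32_t)(i) & 0xffff)

/* lhz rt, imm(ra): primary opcode 40 gives the 0xa0000000 template,
 * exactly as in the PPC_RAW_LHZ macro above. */
static uint32_t raw_lhz(int rt, int ra, int imm)
{
	return 0xa0000000 | RT(rt) | RA(ra) | IMM_L(imm);
}
```

Emitting instructions as plain `uint32_t` values this way is what lets the BPF JIT build code buffers without an assembler in the loop.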
+12 -1
arch/powerpc/include/asm/processor.h
··· 260 260 unsigned long sier2; 261 261 unsigned long sier3; 262 262 unsigned long hashkeyr; 263 - 263 + unsigned long dexcr; 264 + unsigned long dexcr_onexec; /* Reset value to load on exec */ 264 265 #endif 265 266 }; 266 267 ··· 333 332 334 333 extern int get_unalign_ctl(struct task_struct *tsk, unsigned long adr); 335 334 extern int set_unalign_ctl(struct task_struct *tsk, unsigned int val); 335 + 336 + #ifdef CONFIG_PPC_BOOK3S_64 337 + 338 + #define PPC_GET_DEXCR_ASPECT(tsk, asp) get_dexcr_prctl((tsk), (asp)) 339 + #define PPC_SET_DEXCR_ASPECT(tsk, asp, val) set_dexcr_prctl((tsk), (asp), (val)) 340 + 341 + int get_dexcr_prctl(struct task_struct *tsk, unsigned long asp); 342 + int set_dexcr_prctl(struct task_struct *tsk, unsigned long asp, unsigned long val); 343 + 344 + #endif 336 345 337 346 extern void load_fp_state(struct thread_fp_state *fp); 338 347 extern void store_fp_state(struct thread_fp_state *fp);
+1 -1
arch/powerpc/include/asm/reg.h
··· 615 615 #define HID1_ABE (1<<10) /* 7450 Address Broadcast Enable */ 616 616 #define HID1_PS (1<<16) /* 750FX PLL selection */ 617 617 #endif 618 - #define SPRN_HID2 0x3F8 /* Hardware Implementation Register 2 */ 618 + #define SPRN_HID2_750FX 0x3F8 /* IBM 750FX HID2 Register */ 619 619 #define SPRN_HID2_GEKKO 0x398 /* Gekko HID2 Register */ 620 620 #define SPRN_HID2_G2_LE 0x3F3 /* G2_LE HID2 Register */ 621 621 #define HID2_G2_LE_HBE (1<<18) /* High BAT Enable (G2_LE) */
+1 -1
arch/powerpc/include/asm/uninorth.h
··· 144 144 #define UNI_N_HWINIT_STATE_SLEEPING 0x01 145 145 #define UNI_N_HWINIT_STATE_RUNNING 0x02 146 146 /* This last bit appear to be used by the bootROM to know the second 147 - * CPU has started and will enter it's sleep loop with IP=0 147 + * CPU has started and will enter its sleep loop with IP=0 148 148 */ 149 149 #define UNI_N_HWINIT_STATE_CPU1_FLAG 0x10000000 150 150
+1 -1
arch/powerpc/include/uapi/asm/bootx.h
··· 108 108 /* ALL BELOW NEW (vers. 4) */ 109 109 110 110 /* This defines the physical memory. Valid with BOOT_ARCH_NUBUS flag 111 - (non-PCI) only. On PCI, memory is contiguous and it's size is in the 111 + (non-PCI) only. On PCI, memory is contiguous and its size is in the 112 112 device-tree. */ 113 113 boot_info_map_entry_t 114 114 physMemoryMap[MAX_MEM_MAP_SIZE]; /* Where the phys memory is */
+1 -6
arch/powerpc/kernel/Makefile
··· 3 3 # Makefile for the linux kernel. 4 4 # 5 5 6 - ifdef CONFIG_PPC64 7 - CFLAGS_prom_init.o += $(NO_MINIMAL_TOC) 8 - endif 9 6 ifdef CONFIG_PPC32 10 7 CFLAGS_prom_init.o += -fPIC 11 8 CFLAGS_btext.o += -fPIC ··· 84 87 obj-$(CONFIG_PPC_DAWR) += dawr.o 85 88 obj-$(CONFIG_PPC_BOOK3S_64) += cpu_setup_ppc970.o cpu_setup_pa6t.o 86 89 obj-$(CONFIG_PPC_BOOK3S_64) += cpu_setup_power.o 90 + obj-$(CONFIG_PPC_BOOK3S_64) += dexcr.o 87 91 obj-$(CONFIG_PPC_BOOK3S_64) += mce.o mce_power.o 88 92 obj-$(CONFIG_PPC_BOOK3E_64) += exceptions-64e.o idle_64e.o 89 93 obj-$(CONFIG_PPC_BARRIER_NOSPEC) += security.o ··· 188 190 KCOV_INSTRUMENT_kprobes-ftrace.o := n 189 191 KCSAN_SANITIZE_kprobes-ftrace.o := n 190 192 UBSAN_SANITIZE_kprobes-ftrace.o := n 191 - GCOV_PROFILE_syscall_64.o := n 192 - KCOV_INSTRUMENT_syscall_64.o := n 193 - UBSAN_SANITIZE_syscall_64.o := n 194 193 UBSAN_SANITIZE_vdso.o := n 195 194 196 195 # Necessary for booting with kcov enabled on book3e machines
+2 -2
arch/powerpc/kernel/cpu_setup_6xx.S
··· 401 401 andi. r3,r3,0xff00 402 402 cmpwi cr0,r3,0x0200 403 403 bne 1f 404 - mfspr r4,SPRN_HID2 404 + mfspr r4,SPRN_HID2_750FX 405 405 stw r4,CS_HID2(r5) 406 406 1: 407 407 mtcr r7 ··· 496 496 bne 4f 497 497 lwz r4,CS_HID2(r5) 498 498 rlwinm r4,r4,0,19,17 499 - mtspr SPRN_HID2,r4 499 + mtspr SPRN_HID2_750FX,r4 500 500 sync 501 501 4: 502 502 lwz r4,CS_HID1(r5)
+124
arch/powerpc/kernel/dexcr.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-or-later 2 + 3 + #include <linux/capability.h> 4 + #include <linux/cpu.h> 5 + #include <linux/init.h> 6 + #include <linux/prctl.h> 7 + #include <linux/sched.h> 8 + 9 + #include <asm/cpu_has_feature.h> 10 + #include <asm/cputable.h> 11 + #include <asm/processor.h> 12 + #include <asm/reg.h> 13 + 14 + static int __init init_task_dexcr(void) 15 + { 16 + if (!early_cpu_has_feature(CPU_FTR_ARCH_31)) 17 + return 0; 18 + 19 + current->thread.dexcr_onexec = mfspr(SPRN_DEXCR); 20 + 21 + return 0; 22 + } 23 + early_initcall(init_task_dexcr) 24 + 25 + /* Allow thread local configuration of these by default */ 26 + #define DEXCR_PRCTL_EDITABLE ( \ 27 + DEXCR_PR_IBRTPD | \ 28 + DEXCR_PR_SRAPD | \ 29 + DEXCR_PR_NPHIE) 30 + 31 + static int prctl_to_aspect(unsigned long which, unsigned int *aspect) 32 + { 33 + switch (which) { 34 + case PR_PPC_DEXCR_SBHE: 35 + *aspect = DEXCR_PR_SBHE; 36 + break; 37 + case PR_PPC_DEXCR_IBRTPD: 38 + *aspect = DEXCR_PR_IBRTPD; 39 + break; 40 + case PR_PPC_DEXCR_SRAPD: 41 + *aspect = DEXCR_PR_SRAPD; 42 + break; 43 + case PR_PPC_DEXCR_NPHIE: 44 + *aspect = DEXCR_PR_NPHIE; 45 + break; 46 + default: 47 + return -ENODEV; 48 + } 49 + 50 + return 0; 51 + } 52 + 53 + int get_dexcr_prctl(struct task_struct *task, unsigned long which) 54 + { 55 + unsigned int aspect; 56 + int ret; 57 + 58 + ret = prctl_to_aspect(which, &aspect); 59 + if (ret) 60 + return ret; 61 + 62 + if (aspect & DEXCR_PRCTL_EDITABLE) 63 + ret |= PR_PPC_DEXCR_CTRL_EDITABLE; 64 + 65 + if (aspect & mfspr(SPRN_DEXCR)) 66 + ret |= PR_PPC_DEXCR_CTRL_SET; 67 + else 68 + ret |= PR_PPC_DEXCR_CTRL_CLEAR; 69 + 70 + if (aspect & task->thread.dexcr_onexec) 71 + ret |= PR_PPC_DEXCR_CTRL_SET_ONEXEC; 72 + else 73 + ret |= PR_PPC_DEXCR_CTRL_CLEAR_ONEXEC; 74 + 75 + return ret; 76 + } 77 + 78 + int set_dexcr_prctl(struct task_struct *task, unsigned long which, unsigned long ctrl) 79 + { 80 + unsigned long dexcr; 81 + unsigned int aspect; 82 + int err = 0; 83 + 84 + err = prctl_to_aspect(which, &aspect); 85 + if (err) 86 + return err; 87 + 88 + if (!(aspect & DEXCR_PRCTL_EDITABLE)) 89 + return -EPERM; 90 + 91 + if (ctrl & ~PR_PPC_DEXCR_CTRL_MASK) 92 + return -EINVAL; 93 + 94 + if (ctrl & PR_PPC_DEXCR_CTRL_SET && ctrl & PR_PPC_DEXCR_CTRL_CLEAR) 95 + return -EINVAL; 96 + 97 + if (ctrl & PR_PPC_DEXCR_CTRL_SET_ONEXEC && ctrl & PR_PPC_DEXCR_CTRL_CLEAR_ONEXEC) 98 + return -EINVAL; 99 + 100 + /* 101 + * We do not want an unprivileged process being able to disable 102 + * a setuid process's hash check instructions 103 + */ 104 + if (aspect == DEXCR_PR_NPHIE && 105 + ctrl & PR_PPC_DEXCR_CTRL_CLEAR_ONEXEC && 106 + !capable(CAP_SYS_ADMIN)) 107 + return -EPERM; 108 + 109 + dexcr = mfspr(SPRN_DEXCR); 110 + 111 + if (ctrl & PR_PPC_DEXCR_CTRL_SET) 112 + dexcr |= aspect; 113 + else if (ctrl & PR_PPC_DEXCR_CTRL_CLEAR) 114 + dexcr &= ~aspect; 115 + 116 + if (ctrl & PR_PPC_DEXCR_CTRL_SET_ONEXEC) 117 + task->thread.dexcr_onexec |= aspect; 118 + else if (ctrl & PR_PPC_DEXCR_CTRL_CLEAR_ONEXEC) 119 + task->thread.dexcr_onexec &= ~aspect; 120 + 121 + mtspr(SPRN_DEXCR, dexcr); 122 + 123 + return 0; 124 + }
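`set_dexcr_prctl()` above rejects unknown or contradictory control bits before touching the DEXCR. That validation step can be isolated and exercised on its own; note the flag values below are illustrative stand-ins, not the authoritative `PR_PPC_DEXCR_CTRL_*` uapi values:

```c
/* Illustrative control bits (placeholders, not the real uapi values). */
#define CTRL_SET           0x2
#define CTRL_CLEAR         0x4
#define CTRL_SET_ONEXEC    0x8
#define CTRL_CLEAR_ONEXEC  0x10
#define CTRL_MASK (CTRL_SET | CTRL_CLEAR | CTRL_SET_ONEXEC | CTRL_CLEAR_ONEXEC)

/* Mirror of the sanity checks in set_dexcr_prctl(): unknown bits and
 * contradictory set+clear requests fail with -EINVAL (-22). */
static int validate_ctrl(unsigned long ctrl)
{
	if (ctrl & ~CTRL_MASK)
		return -22;	/* unknown control bit */
	if ((ctrl & CTRL_SET) && (ctrl & CTRL_CLEAR))
		return -22;	/* can't set and clear the live value */
	if ((ctrl & CTRL_SET_ONEXEC) && (ctrl & CTRL_CLEAR_ONEXEC))
		return -22;	/* can't set and clear the onexec value */
	return 0;
}
```

Only after these checks pass does the kernel version apply the change to the live SPR and/or the `dexcr_onexec` reset value, with the extra `CAP_SYS_ADMIN` gate protecting NPHIE so unprivileged processes cannot disarm hash checks in setuid children.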
+10 -1
arch/powerpc/kernel/eeh.c
··· 506 506 * We will punt with the following conditions: Failure to get 507 507 * PE's state, EEH not support and Permanently unavailable 508 508 * state, PE is in good state. 509 + * 510 + * On the pSeries, after reaching the threshold, get_state might 511 + * return EEH_STATE_NOT_SUPPORT. However, it's possible that the 512 + * device state remains uncleared if the device is not marked 513 + * pci_channel_io_perm_failure. Therefore, consider logging the 514 + * event to let device removal happen. 515 + * 509 516 */ 510 517 if ((ret < 0) || 511 - (ret == EEH_STATE_NOT_SUPPORT) || eeh_state_active(ret)) { 518 + (ret == EEH_STATE_NOT_SUPPORT && 519 + dev->error_state == pci_channel_io_perm_failure) || 520 + eeh_state_active(ret)) { 512 521 eeh_stats.false_positives++; 513 522 pe->false_positives++; 514 523 rc = 0;
+11 -2
arch/powerpc/kernel/eeh_driver.c
··· 865 865 devices++; 866 866 867 867 if (!devices) { 868 - pr_debug("EEH: Frozen PHB#%x-PE#%x is empty!\n", 868 + pr_warn("EEH: Frozen PHB#%x-PE#%x is empty!\n", 869 869 pe->phb->global_number, pe->addr); 870 - goto out; /* nothing to recover */ 870 + /* 871 + * The device is removed, tear down its state, on powernv 872 + * hotplug driver would take care of it but not on pseries, 873 + * permanently disable the card as it is hot removed. 874 + * 875 + * In the case of powernv, note that the removal of device 876 + * is covered by pci rescan lock, so no problem even if hotplug 877 + * driver attempts to remove the device. 878 + */ 879 + goto recover_failed; 871 880 } 872 881 873 882 /* Log the event */
+4 -4
arch/powerpc/kernel/eeh_pe.c
··· 24 24 static LIST_HEAD(eeh_phb_pe); 25 25 26 26 /** 27 - * eeh_set_pe_aux_size - Set PE auxillary data size 28 - * @size: PE auxillary data size 27 + * eeh_set_pe_aux_size - Set PE auxiliary data size 28 + * @size: PE auxiliary data size in bytes 29 29 * 30 - * Set PE auxillary data size 30 + * Set PE auxiliary data size. 31 31 */ 32 32 void eeh_set_pe_aux_size(int size) 33 33 { ··· 527 527 * eeh_pe_mark_isolated 528 528 * @pe: EEH PE 529 529 * 530 - * Record that a PE has been isolated by marking the PE and it's children as 530 + * Record that a PE has been isolated by marking the PE and its children as 531 531 * EEH_PE_ISOLATED (and EEH_PE_CFG_BLOCKED, if required) and their PCI devices 532 532 * as pci_channel_io_frozen. 533 533 */
+345 -197
arch/powerpc/kernel/fadump.c
··· 53 53 static atomic_t cpus_in_fadump; 54 54 static DEFINE_MUTEX(fadump_mutex); 55 55 56 - static struct fadump_mrange_info crash_mrange_info = { "crash", NULL, 0, 0, 0, false }; 57 - 58 56 #define RESERVED_RNGS_SZ 16384 /* 16K - 128 entries */ 59 57 #define RESERVED_RNGS_CNT (RESERVED_RNGS_SZ / \ 60 58 sizeof(struct fadump_memory_range)) ··· 130 132 #else 131 133 static int __init fadump_cma_init(void) { return 1; } 132 134 #endif /* CONFIG_CMA */ 135 + 136 + /* 137 + * Additional parameters meant for capture kernel are placed in a dedicated area. 138 + * If this is capture kernel boot, append these parameters to bootargs. 139 + */ 140 + void __init fadump_append_bootargs(void) 141 + { 142 + char *append_args; 143 + size_t len; 144 + 145 + if (!fw_dump.dump_active || !fw_dump.param_area_supported || !fw_dump.param_area) 146 + return; 147 + 148 + if (fw_dump.param_area >= fw_dump.boot_mem_top) { 149 + if (memblock_reserve(fw_dump.param_area, COMMAND_LINE_SIZE)) { 150 + pr_warn("WARNING: Can't use additional parameters area!\n"); 151 + fw_dump.param_area = 0; 152 + return; 153 + } 154 + } 155 + 156 + append_args = (char *)fw_dump.param_area; 157 + len = strlen(boot_command_line); 158 + 159 + /* 160 + * Too late to fail even if cmdline size exceeds. Truncate additional parameters 161 + * to cmdline size and proceed anyway. 162 + */ 163 + if (len + strlen(append_args) >= COMMAND_LINE_SIZE - 1) 164 + pr_warn("WARNING: Appending parameters exceeds cmdline size. Truncating!\n"); 165 + 166 + pr_debug("Cmdline: %s\n", boot_command_line); 167 + snprintf(boot_command_line + len, COMMAND_LINE_SIZE - len, " %s", append_args); 168 + pr_info("Updated cmdline: %s\n", boot_command_line); 169 + } 133 170 134 171 /* Scan the Firmware Assisted dump configuration details. 
*/ 135 172 int __init early_init_dt_scan_fw_dump(unsigned long node, const char *uname, ··· 250 217 251 218 d_start = end + 1; 252 219 } 253 - } 254 - 255 - return ret; 256 - } 257 - 258 - /* 259 - * Returns true, if there are no holes in boot memory area, 260 - * false otherwise. 261 - */ 262 - bool is_fadump_boot_mem_contiguous(void) 263 - { 264 - unsigned long d_start, d_end; 265 - bool ret = false; 266 - int i; 267 - 268 - for (i = 0; i < fw_dump.boot_mem_regs_cnt; i++) { 269 - d_start = fw_dump.boot_mem_addr[i]; 270 - d_end = d_start + fw_dump.boot_mem_sz[i]; 271 - 272 - ret = is_fadump_mem_area_contiguous(d_start, d_end); 273 - if (!ret) 274 - break; 275 220 } 276 221 277 222 return ret; ··· 384 373 size = PAGE_ALIGN(size); 385 374 size += fw_dump.boot_memory_size; 386 375 size += sizeof(struct fadump_crash_info_header); 387 - size += sizeof(struct elfhdr); /* ELF core header.*/ 388 - size += sizeof(struct elf_phdr); /* place holder for cpu notes */ 389 - /* Program headers for crash memory regions. */ 390 - size += sizeof(struct elf_phdr) * (memblock_num_regions(memory) + 2); 391 - 392 - size = PAGE_ALIGN(size); 393 376 394 377 /* This is to hold kernel metadata on platforms that support it */ 395 378 size += (fw_dump.ops->fadump_get_metadata_size ? ··· 394 389 static int __init add_boot_mem_region(unsigned long rstart, 395 390 unsigned long rsize) 396 391 { 392 + int max_boot_mem_rgns = fw_dump.ops->fadump_max_boot_mem_rgns(); 397 393 int i = fw_dump.boot_mem_regs_cnt++; 398 394 399 - if (fw_dump.boot_mem_regs_cnt > FADUMP_MAX_MEM_REGS) { 400 - fw_dump.boot_mem_regs_cnt = FADUMP_MAX_MEM_REGS; 395 + if (fw_dump.boot_mem_regs_cnt > max_boot_mem_rgns) { 396 + fw_dump.boot_mem_regs_cnt = max_boot_mem_rgns; 401 397 return 0; 402 398 } 403 399 ··· 579 573 } 580 574 } 581 575 582 - /* 583 - * Calculate the memory boundary. 
584 - * If memory_limit is less than actual memory boundary then reserve 585 - * the memory for fadump beyond the memory_limit and adjust the 586 - * memory_limit accordingly, so that the running kernel can run with 587 - * specified memory_limit. 588 - */ 589 - if (memory_limit && memory_limit < memblock_end_of_DRAM()) { 590 - size = get_fadump_area_size(); 591 - if ((memory_limit + size) < memblock_end_of_DRAM()) 592 - memory_limit += size; 593 - else 594 - memory_limit = memblock_end_of_DRAM(); 595 - printk(KERN_INFO "Adjusted memory_limit for firmware-assisted" 596 - " dump, now %#016llx\n", memory_limit); 597 - } 598 576 if (memory_limit) 599 577 mem_boundary = memory_limit; 600 578 else ··· 695 705 * old_cpu == -1 means this is the first CPU which has come here, 696 706 * go ahead and trigger fadump. 697 707 * 698 - * old_cpu != -1 means some other CPU has already on it's way 708 + * old_cpu != -1 means some other CPU has already on its way 699 709 * to trigger fadump, just keep looping here. 
700 710 */ 701 711 this_cpu = smp_processor_id(); ··· 921 931 return 0; 922 932 } 923 933 924 - static int fadump_exclude_reserved_area(u64 start, u64 end) 925 - { 926 - u64 ra_start, ra_end; 927 - int ret = 0; 928 - 929 - ra_start = fw_dump.reserve_dump_area_start; 930 - ra_end = ra_start + fw_dump.reserve_dump_area_size; 931 - 932 - if ((ra_start < end) && (ra_end > start)) { 933 - if ((start < ra_start) && (end > ra_end)) { 934 - ret = fadump_add_mem_range(&crash_mrange_info, 935 - start, ra_start); 936 - if (ret) 937 - return ret; 938 - 939 - ret = fadump_add_mem_range(&crash_mrange_info, 940 - ra_end, end); 941 - } else if (start < ra_start) { 942 - ret = fadump_add_mem_range(&crash_mrange_info, 943 - start, ra_start); 944 - } else if (ra_end < end) { 945 - ret = fadump_add_mem_range(&crash_mrange_info, 946 - ra_end, end); 947 - } 948 - } else 949 - ret = fadump_add_mem_range(&crash_mrange_info, start, end); 950 - 951 - return ret; 952 - } 953 - 954 934 static int fadump_init_elfcore_header(char *bufp) 955 935 { 956 936 struct elfhdr *elf; ··· 958 998 } 959 999 960 1000 /* 961 - * Traverse through memblock structure and setup crash memory ranges. These 962 - * ranges will be used create PT_LOAD program headers in elfcore header. 963 - */ 964 - static int fadump_setup_crash_memory_ranges(void) 965 - { 966 - u64 i, start, end; 967 - int ret; 968 - 969 - pr_debug("Setup crash memory ranges.\n"); 970 - crash_mrange_info.mem_range_cnt = 0; 971 - 972 - /* 973 - * Boot memory region(s) registered with firmware are moved to 974 - * different location at the time of crash. Create separate program 975 - * header(s) for this memory chunk(s) with the correct offset. 
976 - */ 977 - for (i = 0; i < fw_dump.boot_mem_regs_cnt; i++) { 978 - start = fw_dump.boot_mem_addr[i]; 979 - end = start + fw_dump.boot_mem_sz[i]; 980 - ret = fadump_add_mem_range(&crash_mrange_info, start, end); 981 - if (ret) 982 - return ret; 983 - } 984 - 985 - for_each_mem_range(i, &start, &end) { 986 - /* 987 - * skip the memory chunk that is already added 988 - * (0 through boot_memory_top). 989 - */ 990 - if (start < fw_dump.boot_mem_top) { 991 - if (end > fw_dump.boot_mem_top) 992 - start = fw_dump.boot_mem_top; 993 - else 994 - continue; 995 - } 996 - 997 - /* add this range excluding the reserved dump area. */ 998 - ret = fadump_exclude_reserved_area(start, end); 999 - if (ret) 1000 - return ret; 1001 - } 1002 - 1003 - return 0; 1004 - } 1005 - 1006 - /* 1007 1001 * If the given physical address falls within the boot memory region then 1008 1002 * return the relocated address that points to the dump region reserved 1009 1003 * for saving initial boot memory contents. ··· 987 1073 return raddr; 988 1074 } 989 1075 990 - static int fadump_create_elfcore_headers(char *bufp) 1076 + static void __init populate_elf_pt_load(struct elf_phdr *phdr, u64 start, 1077 + u64 size, unsigned long long offset) 991 1078 { 992 - unsigned long long raddr, offset; 993 - struct elf_phdr *phdr; 994 - struct elfhdr *elf; 995 - int i, j; 1079 + phdr->p_align = 0; 1080 + phdr->p_memsz = size; 1081 + phdr->p_filesz = size; 1082 + phdr->p_paddr = start; 1083 + phdr->p_offset = offset; 1084 + phdr->p_type = PT_LOAD; 1085 + phdr->p_flags = PF_R|PF_W|PF_X; 1086 + phdr->p_vaddr = (unsigned long)__va(start); 1087 + } 996 1088 1089 + static void __init fadump_populate_elfcorehdr(struct fadump_crash_info_header *fdh) 1090 + { 1091 + char *bufp; 1092 + struct elfhdr *elf; 1093 + struct elf_phdr *phdr; 1094 + u64 boot_mem_dest_offset; 1095 + unsigned long long i, ra_start, ra_end, ra_size, mstart, mend; 1096 + 1097 + bufp = (char *) fw_dump.elfcorehdr_addr; 997 1098 
fadump_init_elfcore_header(bufp); 998 1099 elf = (struct elfhdr *)bufp; 999 1100 bufp += sizeof(struct elfhdr); 1000 1101 1001 1102 /* 1002 - * setup ELF PT_NOTE, place holder for cpu notes info. The notes info 1003 - * will be populated during second kernel boot after crash. Hence 1004 - * this PT_NOTE will always be the first elf note. 1103 + * Set up ELF PT_NOTE, a placeholder for CPU notes information. 1104 + * The notes info will be populated later by platform-specific code. 1105 + * Hence, this PT_NOTE will always be the first ELF note. 1005 1106 * 1006 1107 * NOTE: Any new ELF note addition should be placed after this note. 1007 1108 */ 1008 1109 phdr = (struct elf_phdr *)bufp; 1009 1110 bufp += sizeof(struct elf_phdr); 1010 1111 phdr->p_type = PT_NOTE; 1011 - phdr->p_flags = 0; 1012 - phdr->p_vaddr = 0; 1013 - phdr->p_align = 0; 1014 - 1015 - phdr->p_offset = 0; 1016 - phdr->p_paddr = 0; 1017 - phdr->p_filesz = 0; 1018 - phdr->p_memsz = 0; 1019 - 1112 + phdr->p_flags = 0; 1113 + phdr->p_vaddr = 0; 1114 + phdr->p_align = 0; 1115 + phdr->p_offset = 0; 1116 + phdr->p_paddr = 0; 1117 + phdr->p_filesz = 0; 1118 + phdr->p_memsz = 0; 1119 + /* Increment number of program headers. */ 1020 1120 (elf->e_phnum)++; 1021 1121 1022 1122 /* setup ELF PT_NOTE for vmcoreinfo */ ··· 1040 1112 phdr->p_flags = 0; 1041 1113 phdr->p_vaddr = 0; 1042 1114 phdr->p_align = 0; 1043 - 1044 - phdr->p_paddr = fadump_relocate(paddr_vmcoreinfo_note()); 1045 - phdr->p_offset = phdr->p_paddr; 1046 - phdr->p_memsz = phdr->p_filesz = VMCOREINFO_NOTE_SIZE; 1047 - 1115 + phdr->p_paddr = phdr->p_offset = fdh->vmcoreinfo_raddr; 1116 + phdr->p_memsz = phdr->p_filesz = fdh->vmcoreinfo_size; 1048 1117 /* Increment number of program headers. */ 1049 1118 (elf->e_phnum)++; 1050 1119 1051 - /* setup PT_LOAD sections. 
*/ 1052 - j = 0; 1053 - offset = 0; 1054 - raddr = fw_dump.boot_mem_addr[0]; 1055 - for (i = 0; i < crash_mrange_info.mem_range_cnt; i++) { 1056 - u64 mbase, msize; 1057 - 1058 - mbase = crash_mrange_info.mem_ranges[i].base; 1059 - msize = crash_mrange_info.mem_ranges[i].size; 1060 - if (!msize) 1061 - continue; 1062 - 1120 + /* 1121 + * Setup PT_LOAD sections. first include boot memory regions 1122 + * and then add rest of the memory regions. 1123 + */ 1124 + boot_mem_dest_offset = fw_dump.boot_mem_dest_addr; 1125 + for (i = 0; i < fw_dump.boot_mem_regs_cnt; i++) { 1063 1126 phdr = (struct elf_phdr *)bufp; 1064 1127 bufp += sizeof(struct elf_phdr); 1065 - phdr->p_type = PT_LOAD; 1066 - phdr->p_flags = PF_R|PF_W|PF_X; 1067 - phdr->p_offset = mbase; 1128 + populate_elf_pt_load(phdr, fw_dump.boot_mem_addr[i], 1129 + fw_dump.boot_mem_sz[i], 1130 + boot_mem_dest_offset); 1131 + /* Increment number of program headers. */ 1132 + (elf->e_phnum)++; 1133 + boot_mem_dest_offset += fw_dump.boot_mem_sz[i]; 1134 + } 1068 1135 1069 - if (mbase == raddr) { 1070 - /* 1071 - * The entire real memory region will be moved by 1072 - * firmware to the specified destination_address. 1073 - * Hence set the correct offset. 
1074 - */ 1075 - phdr->p_offset = fw_dump.boot_mem_dest_addr + offset; 1076 - if (j < (fw_dump.boot_mem_regs_cnt - 1)) { 1077 - offset += fw_dump.boot_mem_sz[j]; 1078 - raddr = fw_dump.boot_mem_addr[++j]; 1079 - } 1136 + /* Memory reserved for fadump in first kernel */ 1137 + ra_start = fw_dump.reserve_dump_area_start; 1138 + ra_size = get_fadump_area_size(); 1139 + ra_end = ra_start + ra_size; 1140 + 1141 + phdr = (struct elf_phdr *)bufp; 1142 + for_each_mem_range(i, &mstart, &mend) { 1143 + /* Boot memory regions already added, skip them now */ 1144 + if (mstart < fw_dump.boot_mem_top) { 1145 + if (mend > fw_dump.boot_mem_top) 1146 + mstart = fw_dump.boot_mem_top; 1147 + else 1148 + continue; 1080 1149 } 1081 1150 1082 - phdr->p_paddr = mbase; 1083 - phdr->p_vaddr = (unsigned long)__va(mbase); 1084 - phdr->p_filesz = msize; 1085 - phdr->p_memsz = msize; 1086 - phdr->p_align = 0; 1151 + /* Handle memblock regions overlaps with fadump reserved area */ 1152 + if ((ra_start < mend) && (ra_end > mstart)) { 1153 + if ((mstart < ra_start) && (mend > ra_end)) { 1154 + populate_elf_pt_load(phdr, mstart, ra_start - mstart, mstart); 1155 + /* Increment number of program headers. */ 1156 + (elf->e_phnum)++; 1157 + bufp += sizeof(struct elf_phdr); 1158 + phdr = (struct elf_phdr *)bufp; 1159 + populate_elf_pt_load(phdr, ra_end, mend - ra_end, ra_end); 1160 + } else if (mstart < ra_start) { 1161 + populate_elf_pt_load(phdr, mstart, ra_start - mstart, mstart); 1162 + } else if (ra_end < mend) { 1163 + populate_elf_pt_load(phdr, ra_end, mend - ra_end, ra_end); 1164 + } 1165 + } else { 1166 + /* No overlap with fadump reserved memory region */ 1167 + populate_elf_pt_load(phdr, mstart, mend - mstart, mstart); 1168 + } 1087 1169 1088 1170 /* Increment number of program headers. 
*/ 1089 1171 (elf->e_phnum)++; 1172 + bufp += sizeof(struct elf_phdr); 1173 + phdr = (struct elf_phdr *) bufp; 1090 1174 } 1091 - return 0; 1092 1175 } 1093 1176 1094 1177 static unsigned long init_fadump_header(unsigned long addr) ··· 1114 1175 1115 1176 memset(fdh, 0, sizeof(struct fadump_crash_info_header)); 1116 1177 fdh->magic_number = FADUMP_CRASH_INFO_MAGIC; 1117 - fdh->elfcorehdr_addr = addr; 1178 + fdh->version = FADUMP_HEADER_VERSION; 1118 1179 /* We will set the crashing cpu id in crash_fadump() during crash. */ 1119 1180 fdh->crashing_cpu = FADUMP_CPU_UNKNOWN; 1181 + 1182 + /* 1183 + * The physical address and size of vmcoreinfo are required in the 1184 + * second kernel to prepare elfcorehdr. 1185 + */ 1186 + fdh->vmcoreinfo_raddr = fadump_relocate(paddr_vmcoreinfo_note()); 1187 + fdh->vmcoreinfo_size = VMCOREINFO_NOTE_SIZE; 1188 + 1189 + 1190 + fdh->pt_regs_sz = sizeof(struct pt_regs); 1120 1191 /* 1121 1192 * When LPAR is terminated by PYHP, ensure all possible CPUs' 1122 1193 * register data is processed while exporting the vmcore. 1123 1194 */ 1124 1195 fdh->cpu_mask = *cpu_possible_mask; 1196 + fdh->cpu_mask_sz = sizeof(struct cpumask); 1125 1197 1126 1198 return addr; 1127 1199 } ··· 1140 1190 static int register_fadump(void) 1141 1191 { 1142 1192 unsigned long addr; 1143 - void *vaddr; 1144 - int ret; 1145 1193 1146 1194 /* 1147 1195 * If no memory is reserved then we can not register for firmware- ··· 1148 1200 if (!fw_dump.reserve_dump_area_size) 1149 1201 return -ENODEV; 1150 1202 1151 - ret = fadump_setup_crash_memory_ranges(); 1152 - if (ret) 1153 - return ret; 1154 - 1155 1203 addr = fw_dump.fadumphdr_addr; 1156 1204 1157 1205 /* Initialize fadump crash info header. */ 1158 1206 addr = init_fadump_header(addr); 1159 - vaddr = __va(addr); 1160 - 1161 - pr_debug("Creating ELF core headers at %#016lx\n", addr); 1162 - fadump_create_elfcore_headers(vaddr); 1163 1207 1164 1208 /* register the future kernel dump with firmware. 
*/ 1165 1209 pr_debug("Registering for firmware-assisted kernel dump...\n"); ··· 1170 1230 } else if (fw_dump.dump_registered) { 1171 1231 /* Un-register Firmware-assisted dump if it was registered. */ 1172 1232 fw_dump.ops->fadump_unregister(&fw_dump); 1173 - fadump_free_mem_ranges(&crash_mrange_info); 1174 1233 } 1175 1234 1176 1235 if (fw_dump.ops->fadump_cleanup) ··· 1355 1416 fadump_release_reserved_area(tstart, end); 1356 1417 } 1357 1418 1419 + static void fadump_free_elfcorehdr_buf(void) 1420 + { 1421 + if (fw_dump.elfcorehdr_addr == 0 || fw_dump.elfcorehdr_size == 0) 1422 + return; 1423 + 1424 + /* 1425 + * Before freeing the memory of `elfcorehdr`, reset the global 1426 + * `elfcorehdr_addr` to prevent modules like `vmcore` from accessing 1427 + * invalid memory. 1428 + */ 1429 + elfcorehdr_addr = ELFCORE_ADDR_ERR; 1430 + fadump_free_buffer(fw_dump.elfcorehdr_addr, fw_dump.elfcorehdr_size); 1431 + fw_dump.elfcorehdr_addr = 0; 1432 + fw_dump.elfcorehdr_size = 0; 1433 + } 1434 + 1358 1435 static void fadump_invalidate_release_mem(void) 1359 1436 { 1360 1437 mutex_lock(&fadump_mutex); ··· 1382 1427 fadump_cleanup(); 1383 1428 mutex_unlock(&fadump_mutex); 1384 1429 1430 + fadump_free_elfcorehdr_buf(); 1385 1431 fadump_release_memory(fw_dump.boot_mem_top, memblock_end_of_DRAM()); 1386 1432 fadump_free_cpu_notes_buf(); 1387 1433 ··· 1440 1484 return sprintf(buf, "%d\n", fw_dump.fadump_enabled); 1441 1485 } 1442 1486 1487 + /* 1488 + * /sys/kernel/fadump/hotplug_ready sysfs node returns 1, which indicates 1489 + * to userspace that fadump re-registration is not required on memory 1490 + * hotplug events. 
1491 + */ 1492 + static ssize_t hotplug_ready_show(struct kobject *kobj, 1493 + struct kobj_attribute *attr, 1494 + char *buf) 1495 + { 1496 + return sprintf(buf, "%d\n", 1); 1497 + } 1498 + 1443 1499 static ssize_t mem_reserved_show(struct kobject *kobj, 1444 1500 struct kobj_attribute *attr, 1445 1501 char *buf) ··· 1464 1496 char *buf) 1465 1497 { 1466 1498 return sprintf(buf, "%d\n", fw_dump.dump_registered); 1499 + } 1500 + 1501 + static ssize_t bootargs_append_show(struct kobject *kobj, 1502 + struct kobj_attribute *attr, 1503 + char *buf) 1504 + { 1505 + return sprintf(buf, "%s\n", (char *)__va(fw_dump.param_area)); 1506 + } 1507 + 1508 + static ssize_t bootargs_append_store(struct kobject *kobj, 1509 + struct kobj_attribute *attr, 1510 + const char *buf, size_t count) 1511 + { 1512 + char *params; 1513 + 1514 + if (!fw_dump.fadump_enabled || fw_dump.dump_active) 1515 + return -EPERM; 1516 + 1517 + if (count >= COMMAND_LINE_SIZE) 1518 + return -EINVAL; 1519 + 1520 + /* 1521 + * Fail here instead of handling this scenario with 1522 + * some silly workaround in capture kernel. 1523 + */ 1524 + if (saved_command_line_len + count >= COMMAND_LINE_SIZE) { 1525 + pr_err("Appending parameters exceeds cmdline size!\n"); 1526 + return -ENOSPC; 1527 + } 1528 + 1529 + params = __va(fw_dump.param_area); 1530 + strscpy_pad(params, buf, COMMAND_LINE_SIZE); 1531 + /* Remove newline character at the end. 
*/ 1532 + if (params[count-1] == '\n') 1533 + params[count-1] = '\0'; 1534 + 1535 + return count; 1467 1536 } 1468 1537 1469 1538 static ssize_t registered_store(struct kobject *kobj, ··· 1561 1556 static struct kobj_attribute enable_attr = __ATTR_RO(enabled); 1562 1557 static struct kobj_attribute register_attr = __ATTR_RW(registered); 1563 1558 static struct kobj_attribute mem_reserved_attr = __ATTR_RO(mem_reserved); 1559 + static struct kobj_attribute hotplug_ready_attr = __ATTR_RO(hotplug_ready); 1560 + static struct kobj_attribute bootargs_append_attr = __ATTR_RW(bootargs_append); 1564 1561 1565 1562 static struct attribute *fadump_attrs[] = { 1566 1563 &enable_attr.attr, 1567 1564 &register_attr.attr, 1568 1565 &mem_reserved_attr.attr, 1566 + &hotplug_ready_attr.attr, 1569 1567 NULL, 1570 1568 }; 1571 1569 ··· 1640 1632 return; 1641 1633 } 1642 1634 1635 + static int __init fadump_setup_elfcorehdr_buf(void) 1636 + { 1637 + int elf_phdr_cnt; 1638 + unsigned long elfcorehdr_size; 1639 + 1640 + /* 1641 + * Program header for CPU notes comes first, followed by one for 1642 + * vmcoreinfo, and the remaining program headers correspond to 1643 + * memory regions. 1644 + */ 1645 + elf_phdr_cnt = 2 + fw_dump.boot_mem_regs_cnt + memblock_num_regions(memory); 1646 + elfcorehdr_size = sizeof(struct elfhdr) + (elf_phdr_cnt * sizeof(struct elf_phdr)); 1647 + elfcorehdr_size = PAGE_ALIGN(elfcorehdr_size); 1648 + 1649 + fw_dump.elfcorehdr_addr = (u64)fadump_alloc_buffer(elfcorehdr_size); 1650 + if (!fw_dump.elfcorehdr_addr) { 1651 + pr_err("Failed to allocate %lu bytes for elfcorehdr\n", 1652 + elfcorehdr_size); 1653 + return -ENOMEM; 1654 + } 1655 + fw_dump.elfcorehdr_size = elfcorehdr_size; 1656 + return 0; 1657 + } 1658 + 1659 + /* 1660 + * Check if the fadump header of crashed kernel is compatible with fadump kernel. 1661 + * 1662 + * It checks the magic number, endianness, and size of non-primitive type 1663 + * members of fadump header to ensure safe dump collection. 
1664 + */ 1665 + static bool __init is_fadump_header_compatible(struct fadump_crash_info_header *fdh) 1666 + { 1667 + if (fdh->magic_number == FADUMP_CRASH_INFO_MAGIC_OLD) { 1668 + pr_err("Old magic number, can't process the dump.\n"); 1669 + return false; 1670 + } 1671 + 1672 + if (fdh->magic_number != FADUMP_CRASH_INFO_MAGIC) { 1673 + if (fdh->magic_number == swab64(FADUMP_CRASH_INFO_MAGIC)) 1674 + pr_err("Endianness mismatch between the crashed and fadump kernels.\n"); 1675 + else 1676 + pr_err("Fadump header is corrupted.\n"); 1677 + 1678 + return false; 1679 + } 1680 + 1681 + /* 1682 + * Dump collection is not safe if the size of non-primitive type members 1683 + * of the fadump header do not match between crashed and fadump kernel. 1684 + */ 1685 + if (fdh->pt_regs_sz != sizeof(struct pt_regs) || 1686 + fdh->cpu_mask_sz != sizeof(struct cpumask)) { 1687 + pr_err("Fadump header size mismatch.\n"); 1688 + return false; 1689 + } 1690 + 1691 + return true; 1692 + } 1693 + 1694 + static void __init fadump_process(void) 1695 + { 1696 + struct fadump_crash_info_header *fdh; 1697 + 1698 + fdh = (struct fadump_crash_info_header *) __va(fw_dump.fadumphdr_addr); 1699 + if (!fdh) { 1700 + pr_err("Crash info header is empty.\n"); 1701 + goto err_out; 1702 + } 1703 + 1704 + /* Avoid processing the dump if fadump header isn't compatible */ 1705 + if (!is_fadump_header_compatible(fdh)) 1706 + goto err_out; 1707 + 1708 + /* Allocate buffer for elfcorehdr */ 1709 + if (fadump_setup_elfcorehdr_buf()) 1710 + goto err_out; 1711 + 1712 + fadump_populate_elfcorehdr(fdh); 1713 + 1714 + /* Let platform update the CPU notes in elfcorehdr */ 1715 + if (fw_dump.ops->fadump_process(&fw_dump) < 0) 1716 + goto err_out; 1717 + 1718 + /* 1719 + * elfcorehdr is now ready to be exported. 1720 + * 1721 + * set elfcorehdr_addr so that vmcore module will export the 1722 + * elfcorehdr through '/proc/vmcore'. 
1723 + */ 1724 + elfcorehdr_addr = virt_to_phys((void *)fw_dump.elfcorehdr_addr); 1725 + return; 1726 + 1727 + err_out: 1728 + fadump_invalidate_release_mem(); 1729 + } 1730 + 1731 + /* 1732 + * Reserve memory to store additional parameters to be passed 1733 + * for fadump/capture kernel. 1734 + */ 1735 + static void __init fadump_setup_param_area(void) 1736 + { 1737 + phys_addr_t range_start, range_end; 1738 + 1739 + if (!fw_dump.param_area_supported || fw_dump.dump_active) 1740 + return; 1741 + 1742 + /* This memory can't be used by PFW or bootloader as it is shared across kernels */ 1743 + if (radix_enabled()) { 1744 + /* 1745 + * Anywhere in the upper half should be good enough as all memory 1746 + * is accessible in real mode. 1747 + */ 1748 + range_start = memblock_end_of_DRAM() / 2; 1749 + range_end = memblock_end_of_DRAM(); 1750 + } else { 1751 + /* 1752 + * Passing additional parameters is supported for hash MMU only 1753 + * if the first memory block size is 768MB or higher. 1754 + */ 1755 + if (ppc64_rma_size < 0x30000000) 1756 + return; 1757 + 1758 + /* 1759 + * 640 MB to 768 MB is not used by PFW/bootloader. So, try reserving 1760 + * memory for passing additional parameters in this range to avoid 1761 + * being stomped on by PFW/bootloader. 1762 + */ 1763 + range_start = 0x2A000000; 1764 + range_end = range_start + 0x4000000; 1765 + } 1766 + 1767 + fw_dump.param_area = memblock_phys_alloc_range(COMMAND_LINE_SIZE, 1768 + COMMAND_LINE_SIZE, 1769 + range_start, 1770 + range_end); 1771 + if (!fw_dump.param_area || sysfs_create_file(fadump_kobj, &bootargs_append_attr.attr)) { 1772 + pr_warn("WARNING: Could not setup area to pass additional parameters!\n"); 1773 + return; 1774 + } 1775 + 1776 + memset(phys_to_virt(fw_dump.param_area), 0, COMMAND_LINE_SIZE); 1777 + } 1778 + 1643 1779 /* 1644 1780 * Prepare for firmware-assisted dump. 1645 1781 */ ··· 1803 1651 * saving it to the disk. 
1804 1652 */ 1805 1653 if (fw_dump.dump_active) { 1806 - /* 1807 - * if dump process fails then invalidate the registration 1808 - * and release memory before proceeding for re-registration. 1809 - */ 1810 - if (fw_dump.ops->fadump_process(&fw_dump) < 0) 1811 - fadump_invalidate_release_mem(); 1654 + fadump_process(); 1812 1655 } 1813 1656 /* Initialize the kernel dump memory structure and register with f/w */ 1814 1657 else if (fw_dump.reserve_dump_area_size) { 1658 + fadump_setup_param_area(); 1815 1659 fw_dump.ops->fadump_init_mem_struct(&fw_dump); 1816 1660 register_fadump(); 1817 1661 }
+2 -2
arch/powerpc/kernel/misc_64.S
··· 192 192 xori r0,r0,MSR_EE 193 193 mtmsrd r0,1 194 194 195 - /* rotate 24 bits SCOM address 8 bits left and mask out it's low 8 bits 195 + /* rotate 24 bits SCOM address 8 bits left and mask out its low 8 bits 196 196 * (including parity). On current CPUs they must be 0'd, 197 197 * and finally or in RW bit 198 198 */ ··· 226 226 xori r0,r0,MSR_EE 227 227 mtmsrd r0,1 228 228 229 - /* rotate 24 bits SCOM address 8 bits left and mask out it's low 8 bits 229 + /* rotate 24 bits SCOM address 8 bits left and mask out its low 8 bits 230 230 * (including parity). On current CPUs they must be 0'd. 231 231 */ 232 232
-2
arch/powerpc/kernel/module.c
··· 16 16 #include <asm/setup.h> 17 17 #include <asm/sections.h> 18 18 19 - static LIST_HEAD(module_bug_list); 20 - 21 19 static const Elf_Shdr *find_section(const Elf_Ehdr *hdr, 22 20 const Elf_Shdr *sechdrs, 23 21 const char *name)
+23 -6
arch/powerpc/kernel/process.c
··· 1185 1185 1186 1186 if (cpu_has_feature(CPU_FTR_DEXCR_NPHIE)) 1187 1187 t->hashkeyr = mfspr(SPRN_HASHKEYR); 1188 + 1189 + if (cpu_has_feature(CPU_FTR_ARCH_31)) 1190 + t->dexcr = mfspr(SPRN_DEXCR); 1188 1191 #endif 1189 1192 } 1190 1193 ··· 1270 1267 if (cpu_has_feature(CPU_FTR_DEXCR_NPHIE) && 1271 1268 old_thread->hashkeyr != new_thread->hashkeyr) 1272 1269 mtspr(SPRN_HASHKEYR, new_thread->hashkeyr); 1270 + 1271 + if (cpu_has_feature(CPU_FTR_ARCH_31) && 1272 + old_thread->dexcr != new_thread->dexcr) 1273 + mtspr(SPRN_DEXCR, new_thread->dexcr); 1273 1274 #endif 1274 1275 1275 1276 } ··· 1641 1634 current->thread.regs->amr = default_amr; 1642 1635 current->thread.regs->iamr = default_iamr; 1643 1636 #endif 1637 + 1638 + #ifdef CONFIG_PPC_BOOK3S_64 1639 + if (cpu_has_feature(CPU_FTR_ARCH_31)) { 1640 + current->thread.dexcr = current->thread.dexcr_onexec; 1641 + mtspr(SPRN_DEXCR, current->thread.dexcr); 1642 + } 1643 + #endif /* CONFIG_PPC_BOOK3S_64 */ 1644 1644 } 1645 1645 1646 1646 #ifdef CONFIG_PPC64 ··· 1661 1647 * cases will happen: 1662 1648 * 1663 1649 * 1. The correct thread is running, the wrong thread is not 1664 - * In this situation, the correct thread is woken and proceeds to pass it's 1650 + * In this situation, the correct thread is woken and proceeds to pass its 1665 1651 * condition check. 1666 1652 * 1667 1653 * 2. Neither threads are running ··· 1671 1657 * for the wrong thread, or they will execute the condition check immediately. 1672 1658 * 1673 1659 * 3. The wrong thread is running, the correct thread is not 1674 - * The wrong thread will be woken, but will fail it's condition check and 1660 + * The wrong thread will be woken, but will fail its condition check and 1675 1661 * re-execute wait. 
The correct thread, when scheduled, will execute either 1676 - * it's condition check (which will pass), or wait, which returns immediately 1677 - * when called the first time after the thread is scheduled, followed by it's 1662 + * its condition check (which will pass), or wait, which returns immediately 1663 + * when called the first time after the thread is scheduled, followed by its 1678 1664 * condition check (which will pass). 1679 1665 * 1680 1666 * 4. Both threads are running 1681 - * Both threads will be woken. The wrong thread will fail it's condition check 1682 - * and execute another wait, while the correct thread will pass it's condition 1667 + * Both threads will be woken. The wrong thread will fail its condition check 1668 + * and execute another wait, while the correct thread will pass its condition 1683 1669 * check. 1684 1670 * 1685 1671 * @t: the task to set the thread ID for ··· 1892 1878 #ifdef CONFIG_PPC_BOOK3S_64 1893 1879 if (cpu_has_feature(CPU_FTR_DEXCR_NPHIE)) 1894 1880 p->thread.hashkeyr = current->thread.hashkeyr; 1881 + 1882 + if (cpu_has_feature(CPU_FTR_ARCH_31)) 1883 + p->thread.dexcr = mfspr(SPRN_DEXCR); 1895 1884 #endif 1896 1885 return 0; 1897 1886 }
+18 -5
arch/powerpc/kernel/prom.c
··· 779 779 780 780 void __init early_init_devtree(void *params) 781 781 { 782 - phys_addr_t limit; 782 + phys_addr_t int_vector_size; 783 783 784 784 DBG(" -> early_init_devtree(%px)\n", params); 785 785 ··· 813 813 */ 814 814 of_scan_flat_dt(early_init_dt_scan_chosen_ppc, boot_command_line); 815 815 816 + /* Append additional parameters passed for fadump capture kernel */ 817 + fadump_append_bootargs(); 818 + 816 819 /* Scan memory nodes and rebuild MEMBLOCKs */ 817 820 early_init_dt_scan_root(); 818 821 early_init_dt_scan_memory_ppc(); ··· 835 832 setup_initial_memory_limit(memstart_addr, first_memblock_size); 836 833 /* Reserve MEMBLOCK regions used by kernel, initrd, dt, etc... */ 837 834 memblock_reserve(PHYSICAL_START, __pa(_end) - PHYSICAL_START); 835 + #ifdef CONFIG_PPC64 836 + /* If relocatable, reserve at least 32k for interrupt vectors etc. */ 837 + int_vector_size = __end_interrupts - _stext; 838 + int_vector_size = max_t(phys_addr_t, SZ_32K, int_vector_size); 839 + #else 838 840 /* If relocatable, reserve first 32k for interrupt vectors etc. */ 841 + int_vector_size = SZ_32K; 842 + #endif 839 843 if (PHYSICAL_START > MEMORY_START) 840 - memblock_reserve(MEMORY_START, 0x8000); 844 + memblock_reserve(MEMORY_START, int_vector_size); 841 845 reserve_kdump_trampoline(); 842 846 #if defined(CONFIG_FA_DUMP) || defined(CONFIG_PRESERVE_FA_DUMP) 843 847 /* ··· 856 846 reserve_crashkernel(); 857 847 early_reserve_mem(); 858 848 859 - /* Ensure that total memory size is page-aligned. 
*/ 860 - limit = ALIGN(memory_limit ?: memblock_phys_mem_size(), PAGE_SIZE); 861 - memblock_enforce_memory_limit(limit); 849 + if (memory_limit > memblock_phys_mem_size()) 850 + memory_limit = 0; 851 + 852 + /* Align down to 16 MB which is large page size with hash page translation */ 853 + memory_limit = ALIGN_DOWN(memory_limit ?: memblock_phys_mem_size(), SZ_16M); 854 + memblock_enforce_memory_limit(memory_limit); 862 855 863 856 #if defined(CONFIG_PPC_BOOK3S_64) && defined(CONFIG_PPC_4K_PAGES) 864 857 if (!early_radix_enabled())
+2 -2
arch/powerpc/kernel/prom_init.c
··· 817 817 opt += 4; 818 818 prom_memory_limit = prom_memparse(opt, (const char **)&opt); 819 819 #ifdef CONFIG_PPC64 820 - /* Align to 16 MB == size of ppc64 large page */ 821 - prom_memory_limit = ALIGN(prom_memory_limit, 0x1000000); 820 + /* Align down to 16 MB which is large page size with hash page translation */ 821 + prom_memory_limit = ALIGN_DOWN(prom_memory_limit, SZ_16M); 822 822 #endif 823 823 } 824 824
+1 -1
arch/powerpc/kernel/ptrace/ptrace-tm.c
··· 12 12 { 13 13 /* 14 14 * If task is not current, it will have been flushed already to 15 - * it's thread_struct during __switch_to(). 15 + * its thread_struct during __switch_to(). 16 16 * 17 17 * A reclaim flushes ALL the state or if not in TM save TM SPRs 18 18 * in the appropriate thread structures from live.
+1 -6
arch/powerpc/kernel/ptrace/ptrace-view.c
··· 469 469 if (!cpu_has_feature(CPU_FTR_ARCH_31)) 470 470 return -ENODEV; 471 471 472 - /* 473 - * The DEXCR is currently static across all CPUs, so we don't 474 - * store the target's value anywhere, but the static value 475 - * will also be correct. 476 - */ 477 - membuf_store(&to, (u64)lower_32_bits(DEXCR_INIT)); 472 + membuf_store(&to, (u64)lower_32_bits(target->thread.dexcr)); 478 473 479 474 /* 480 475 * Technically the HDEXCR is per-cpu, but a hypervisor can't reasonably
+1 -1
arch/powerpc/kernel/setup-common.c
··· 405 405 cpumask_set_cpu(i, &threads_core_mask); 406 406 407 407 printk(KERN_INFO "CPU maps initialized for %d thread%s per core\n", 408 - tpc, tpc > 1 ? "s" : ""); 408 + tpc, str_plural(tpc)); 409 409 printk(KERN_DEBUG " (thread shift is %d)\n", threads_shift); 410 410 } 411 411
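`str_plural()` replaces the open-coded ternary in the thread-count message above. A sketch of what that helper does, modeled on the kernel's `string_choices.h` (behavior inferred from the call site; shown here as a plain function for illustration):

```c
#include <assert.h>
#include <string.h>

/* Returns "s" unless num is exactly 1, so callers can write
 * printk("%d thread%s", tpc, str_plural(tpc)) without a ternary. */
const char *str_plural(long num)
{
	return num == 1 ? "" : "s";
}
```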
+2
arch/powerpc/kernel/setup_64.c
··· 834 834 835 835 unsigned long __per_cpu_offset[NR_CPUS] __read_mostly; 836 836 EXPORT_SYMBOL(__per_cpu_offset); 837 + DEFINE_STATIC_KEY_FALSE(__percpu_first_chunk_is_paged); 837 838 838 839 void __init setup_per_cpu_areas(void) 839 840 { ··· 877 876 if (rc < 0) 878 877 panic("cannot initialize percpu area (err=%d)", rc); 879 878 879 + static_key_enable(&__percpu_first_chunk_is_paged.key); 880 880 delta = (unsigned long)pcpu_base_addr - (unsigned long)__per_cpu_start; 881 881 for_each_possible_cpu(cpu) { 882 882 __per_cpu_offset[cpu] = delta + pcpu_unit_offsets[cpu];
+1 -1
arch/powerpc/kernel/smp.c
··· 1567 1567 1568 1568 /* 1569 1569 * This CPU will not be in the online mask yet so we need to manually 1570 - * add it to it's own thread sibling mask. 1570 + * add it to its own thread sibling mask. 1571 1571 */ 1572 1572 map_cpu_to_node(cpu, cpu_to_node(cpu)); 1573 1573 cpumask_set_cpu(cpu, cpu_sibling_mask(cpu));
+2 -2
arch/powerpc/kernel/sysfs.c
··· 139 139 * @val: Returned cpu specific DSCR default value 140 140 * 141 141 * This function returns the per cpu DSCR default value 142 - * for any cpu which is contained in it's PACA structure. 142 + * for any cpu which is contained in its PACA structure. 143 143 */ 144 144 static void read_dscr(void *val) 145 145 { ··· 152 152 * @val: New cpu specific DSCR default value to update 153 153 * 154 154 * This function updates the per cpu DSCR default value 155 - * for any cpu which is contained in it's PACA structure. 155 + * for any cpu which is contained in its PACA structure. 156 156 */ 157 157 static void write_dscr(void *val) 158 158 {
+2 -2
arch/powerpc/kexec/Makefile
··· 3 3 # Makefile for the linux kernel. 4 4 # 5 5 6 - obj-y += core.o core_$(BITS).o 6 + obj-y += core.o core_$(BITS).o ranges.o 7 7 8 8 obj-$(CONFIG_PPC32) += relocate_32.o 9 9 10 - obj-$(CONFIG_KEXEC_FILE) += file_load.o ranges.o file_load_$(BITS).o elf_$(BITS).o 10 + obj-$(CONFIG_KEXEC_FILE) += file_load.o file_load_$(BITS).o elf_$(BITS).o 11 11 obj-$(CONFIG_VMCORE_INFO) += vmcore_info.o 12 12 obj-$(CONFIG_CRASH_DUMP) += crash.o 13 13
+91
arch/powerpc/kexec/core_64.c
··· 17 17 #include <linux/cpu.h> 18 18 #include <linux/hardirq.h> 19 19 #include <linux/of.h> 20 + #include <linux/libfdt.h> 20 21 21 22 #include <asm/page.h> 22 23 #include <asm/current.h> ··· 31 30 #include <asm/hw_breakpoint.h> 32 31 #include <asm/svm.h> 33 32 #include <asm/ultravisor.h> 33 + #include <asm/crashdump-ppc64.h> 34 34 35 35 int machine_kexec_prepare(struct kimage *image) 36 36 { ··· 421 419 } 422 420 late_initcall(export_htab_values); 423 421 #endif /* CONFIG_PPC_64S_HASH_MMU */ 422 + 423 + #if defined(CONFIG_KEXEC_FILE) || defined(CONFIG_CRASH_DUMP) 424 + /** 425 + * add_node_props - Reads node properties from device node structure and add 426 + * them to fdt. 427 + * @fdt: Flattened device tree of the kernel 428 + * @node_offset: offset of the node to add a property at 429 + * @dn: device node pointer 430 + * 431 + * Returns 0 on success, negative errno on error. 432 + */ 433 + static int add_node_props(void *fdt, int node_offset, const struct device_node *dn) 434 + { 435 + int ret = 0; 436 + struct property *pp; 437 + 438 + if (!dn) 439 + return -EINVAL; 440 + 441 + for_each_property_of_node(dn, pp) { 442 + ret = fdt_setprop(fdt, node_offset, pp->name, pp->value, pp->length); 443 + if (ret < 0) { 444 + pr_err("Unable to add %s property: %s\n", pp->name, fdt_strerror(ret)); 445 + return ret; 446 + } 447 + } 448 + return ret; 449 + } 450 + 451 + /** 452 + * update_cpus_node - Update cpus node of flattened device tree using of_root 453 + * device node. 454 + * @fdt: Flattened device tree of the kernel. 455 + * 456 + * Returns 0 on success, negative errno on error. 
457 + */ 458 + int update_cpus_node(void *fdt) 459 + { 460 + struct device_node *cpus_node, *dn; 461 + int cpus_offset, cpus_subnode_offset, ret = 0; 462 + 463 + cpus_offset = fdt_path_offset(fdt, "/cpus"); 464 + if (cpus_offset < 0 && cpus_offset != -FDT_ERR_NOTFOUND) { 465 + pr_err("Malformed device tree: error reading /cpus node: %s\n", 466 + fdt_strerror(cpus_offset)); 467 + return cpus_offset; 468 + } 469 + 470 + if (cpus_offset > 0) { 471 + ret = fdt_del_node(fdt, cpus_offset); 472 + if (ret < 0) { 473 + pr_err("Error deleting /cpus node: %s\n", fdt_strerror(ret)); 474 + return -EINVAL; 475 + } 476 + } 477 + 478 + /* Add cpus node to fdt */ 479 + cpus_offset = fdt_add_subnode(fdt, fdt_path_offset(fdt, "/"), "cpus"); 480 + if (cpus_offset < 0) { 481 + pr_err("Error creating /cpus node: %s\n", fdt_strerror(cpus_offset)); 482 + return -EINVAL; 483 + } 484 + 485 + /* Add cpus node properties */ 486 + cpus_node = of_find_node_by_path("/cpus"); 487 + ret = add_node_props(fdt, cpus_offset, cpus_node); 488 + of_node_put(cpus_node); 489 + if (ret < 0) 490 + return ret; 491 + 492 + /* Loop through all subnodes of cpus and add them to fdt */ 493 + for_each_node_by_type(dn, "cpu") { 494 + cpus_subnode_offset = fdt_add_subnode(fdt, cpus_offset, dn->full_name); 495 + if (cpus_subnode_offset < 0) { 496 + pr_err("Unable to add %s subnode: %s\n", dn->full_name, 497 + fdt_strerror(cpus_subnode_offset)); 498 + ret = cpus_subnode_offset; 499 + goto out; 500 + } 501 + 502 + ret = add_node_props(fdt, cpus_subnode_offset, dn); 503 + if (ret < 0) 504 + goto out; 505 + } 506 + out: 507 + of_node_put(dn); 508 + return ret; 509 + } 510 + #endif /* CONFIG_KEXEC_FILE || CONFIG_CRASH_DUMP */
+195
arch/powerpc/kexec/crash.c
··· 16 16 #include <linux/delay.h> 17 17 #include <linux/irq.h> 18 18 #include <linux/types.h> 19 + #include <linux/libfdt.h> 20 + #include <linux/memory.h> 19 21 20 22 #include <asm/processor.h> 21 23 #include <asm/machdep.h> ··· 26 24 #include <asm/setjmp.h> 27 25 #include <asm/debug.h> 28 26 #include <asm/interrupt.h> 27 + #include <asm/kexec_ranges.h> 29 28 30 29 /* 31 30 * The primary CPU waits a while for all secondary CPUs to enter. This is to ··· 395 392 if (ppc_md.kexec_cpu_down) 396 393 ppc_md.kexec_cpu_down(1, 0); 397 394 } 395 + 396 + #ifdef CONFIG_CRASH_HOTPLUG 397 + #undef pr_fmt 398 + #define pr_fmt(fmt) "crash hp: " fmt 399 + 400 + /* 401 + * Advertise preferred elfcorehdr size to userspace via 402 + * /sys/kernel/crash_elfcorehdr_size sysfs interface. 403 + */ 404 + unsigned int arch_crash_get_elfcorehdr_size(void) 405 + { 406 + unsigned long phdr_cnt; 407 + 408 + /* A program header for possible CPUs + vmcoreinfo */ 409 + phdr_cnt = num_possible_cpus() + 1; 410 + if (IS_ENABLED(CONFIG_MEMORY_HOTPLUG)) 411 + phdr_cnt += CONFIG_CRASH_MAX_MEMORY_RANGES; 412 + 413 + return sizeof(struct elfhdr) + (phdr_cnt * sizeof(Elf64_Phdr)); 414 + } 415 + 416 + /** 417 + * update_crash_elfcorehdr() - Recreate the elfcorehdr and replace the old 418 + * elfcorehdr in the kexec segment array with it.
419 + * @image: the active struct kimage 420 + * @mn: struct memory_notify data handler 421 + */ 422 + static void update_crash_elfcorehdr(struct kimage *image, struct memory_notify *mn) 423 + { 424 + int ret; 425 + struct crash_mem *cmem = NULL; 426 + struct kexec_segment *ksegment; 427 + void *ptr, *mem, *elfbuf = NULL; 428 + unsigned long elfsz, memsz, base_addr, size; 429 + 430 + ksegment = &image->segment[image->elfcorehdr_index]; 431 + mem = (void *) ksegment->mem; 432 + memsz = ksegment->memsz; 433 + 434 + ret = get_crash_memory_ranges(&cmem); 435 + if (ret) { 436 + pr_err("Failed to get crash mem range\n"); 437 + return; 438 + } 439 + 440 + /* 441 + * The hot-unplugged memory is still part of the crash memory 442 + * ranges; remove it here. 443 + */ 444 + if (image->hp_action == KEXEC_CRASH_HP_REMOVE_MEMORY) { 445 + base_addr = PFN_PHYS(mn->start_pfn); 446 + size = mn->nr_pages * PAGE_SIZE; 447 + ret = remove_mem_range(&cmem, base_addr, size); 448 + if (ret) { 449 + pr_err("Failed to remove hot-unplugged memory from crash memory ranges\n"); 450 + goto out; 451 + } 452 + } 453 + 454 + ret = crash_prepare_elf64_headers(cmem, false, &elfbuf, &elfsz); 455 + if (ret) { 456 + pr_err("Failed to prepare elf header\n"); 457 + goto out; 458 + } 459 + 460 + /* 461 + * It is unlikely that the kernel will hit this, because the elfcorehdr 462 + * kexec segment (memsz) is built with additional space to accommodate a 463 + * growing number of crash memory ranges while loading the kdump kernel. 464 + * This is just a safeguard against any unforeseen case.
465 + */ 466 + if (elfsz > memsz) { 467 + pr_err("Updated crash elfcorehdr elfsz %lu > memsz %lu", elfsz, memsz); 468 + goto out; 469 + } 470 + 471 + ptr = __va(mem); 472 + if (ptr) { 473 + /* Temporarily invalidate the crash image while it is replaced */ 474 + xchg(&kexec_crash_image, NULL); 475 + 476 + /* Replace the old elfcorehdr with newly prepared elfcorehdr */ 477 + memcpy((void *)ptr, elfbuf, elfsz); 478 + 479 + /* The crash image is now valid once again */ 480 + xchg(&kexec_crash_image, image); 481 + } 482 + out: 483 + kvfree(cmem); 484 + kvfree(elfbuf); 485 + } 486 + 487 + /** 488 + * get_fdt_index - Loop through the kexec segment array and find 489 + * the index of the FDT segment. 490 + * @image: a pointer to kexec_crash_image 491 + * 492 + * Returns the index of FDT segment in the kexec segment array 493 + * if found; otherwise -1. 494 + */ 495 + static int get_fdt_index(struct kimage *image) 496 + { 497 + void *ptr; 498 + unsigned long mem; 499 + int i, fdt_index = -1; 500 + 501 + /* Find the FDT segment index in kexec segment array. */ 502 + for (i = 0; i < image->nr_segments; i++) { 503 + mem = image->segment[i].mem; 504 + ptr = __va(mem); 505 + 506 + if (ptr && fdt_magic(ptr) == FDT_MAGIC) { 507 + fdt_index = i; 508 + break; 509 + } 510 + } 511 + 512 + return fdt_index; 513 + } 514 + 515 + /** 516 + * update_crash_fdt - updates the cpus node of the crash FDT. 
517 + * 518 + * @image: a pointer to kexec_crash_image 519 + */ 520 + static void update_crash_fdt(struct kimage *image) 521 + { 522 + void *fdt; 523 + int fdt_index; 524 + 525 + fdt_index = get_fdt_index(image); 526 + if (fdt_index < 0) { 527 + pr_err("Unable to locate FDT segment.\n"); 528 + return; 529 + } 530 + 531 + fdt = __va((void *)image->segment[fdt_index].mem); 532 + 533 + /* Temporarily invalidate the crash image while it is replaced */ 534 + xchg(&kexec_crash_image, NULL); 535 + 536 + /* update FDT to reflect changes in CPU resources */ 537 + if (update_cpus_node(fdt)) 538 + pr_err("Failed to update crash FDT"); 539 + 540 + /* The crash image is now valid once again */ 541 + xchg(&kexec_crash_image, image); 542 + } 543 + 544 + int arch_crash_hotplug_support(struct kimage *image, unsigned long kexec_flags) 545 + { 546 + #ifdef CONFIG_KEXEC_FILE 547 + if (image->file_mode) 548 + return 1; 549 + #endif 550 + return kexec_flags & KEXEC_CRASH_HOTPLUG_SUPPORT; 551 + } 552 + 553 + /** 554 + * arch_crash_handle_hotplug_event - Handle crash CPU/Memory hotplug events to update the 555 + * necessary kexec segments based on the hotplug event. 556 + * @image: a pointer to kexec_crash_image 557 + * @arg: a struct memory_notify pointer for memory hotplug events; NULL for CPU hotplug events. 558 + * 559 + * Update the kdump image based on the type of hotplug event, represented by image->hp_action. 560 + * CPU add: Update the FDT segment to include the newly added CPU. 561 + * CPU remove: No action is needed, with the assumption that it's okay to have offline CPUs 562 + * part of the FDT. 563 + * Memory add/remove: Recreate the elfcorehdr segment to reflect the updated memory map.
564 + */ 565 + void arch_crash_handle_hotplug_event(struct kimage *image, void *arg) 566 + { 567 + struct memory_notify *mn; 568 + 569 + switch (image->hp_action) { 570 + case KEXEC_CRASH_HP_REMOVE_CPU: 571 + return; 572 + 573 + case KEXEC_CRASH_HP_ADD_CPU: 574 + update_crash_fdt(image); 575 + break; 576 + 577 + case KEXEC_CRASH_HP_REMOVE_MEMORY: 578 + case KEXEC_CRASH_HP_ADD_MEMORY: 579 + mn = (struct memory_notify *)arg; 580 + update_crash_elfcorehdr(image, mn); 581 + return; 582 + default: 583 + pr_warn_once("Unknown hotplug action\n"); 584 + } 585 + } 586 + #endif /* CONFIG_CRASH_HOTPLUG */
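The size advertised by `arch_crash_get_elfcorehdr_size()` in this file is one ELF header plus one program header per possible CPU, one for vmcoreinfo, and (with memory hotplug) one per configured crash memory range. That arithmetic, sketched standalone (`elfcorehdr_size` is an illustrative helper; the kernel reads the CPU count and `CONFIG_CRASH_MAX_MEMORY_RANGES` itself):

```c
#include <assert.h>
#include <elf.h>
#include <stddef.h>

/* Mirrors arch_crash_get_elfcorehdr_size(): headers for every possible
 * CPU, plus vmcoreinfo, plus headroom for hot-added memory ranges. */
size_t elfcorehdr_size(unsigned long possible_cpus, unsigned long max_mem_ranges)
{
	unsigned long phdr_cnt = possible_cpus + 1;	/* CPUs + vmcoreinfo */

	phdr_cnt += max_mem_ranges;	/* 0 when memory hotplug is disabled */
	return sizeof(Elf64_Ehdr) + phdr_cnt * sizeof(Elf64_Phdr);
}
```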
+2 -1
arch/powerpc/kexec/elf_64.c
··· 116 116 if (ret) 117 117 goto out_free_fdt; 118 118 119 - fdt_pack(fdt); 119 + if (!IS_ENABLED(CONFIG_CRASH_HOTPLUG) || image->type != KEXEC_TYPE_CRASH) 120 + fdt_pack(fdt); 120 121 121 122 kbuf.buffer = fdt; 122 123 kbuf.bufsz = kbuf.memsz = fdt_totalsize(fdt);
+36 -278
arch/powerpc/kexec/file_load_64.c
··· 30 30 #include <asm/iommu.h> 31 31 #include <asm/prom.h> 32 32 #include <asm/plpks.h> 33 + #include <asm/cputhreads.h> 33 34 34 35 struct umem_info { 35 36 __be64 *buf; /* data buffer for usable-memory property */ ··· 47 46 &kexec_elf64_ops, 48 47 NULL 49 48 }; 50 - 51 - /** 52 - * get_exclude_memory_ranges - Get exclude memory ranges. This list includes 53 - * regions like opal/rtas, tce-table, initrd, 54 - * kernel, htab which should be avoided while 55 - * setting up kexec load segments. 56 - * @mem_ranges: Range list to add the memory ranges to. 57 - * 58 - * Returns 0 on success, negative errno on error. 59 - */ 60 - static int get_exclude_memory_ranges(struct crash_mem **mem_ranges) 61 - { 62 - int ret; 63 - 64 - ret = add_tce_mem_ranges(mem_ranges); 65 - if (ret) 66 - goto out; 67 - 68 - ret = add_initrd_mem_range(mem_ranges); 69 - if (ret) 70 - goto out; 71 - 72 - ret = add_htab_mem_range(mem_ranges); 73 - if (ret) 74 - goto out; 75 - 76 - ret = add_kernel_mem_range(mem_ranges); 77 - if (ret) 78 - goto out; 79 - 80 - ret = add_rtas_mem_range(mem_ranges); 81 - if (ret) 82 - goto out; 83 - 84 - ret = add_opal_mem_range(mem_ranges); 85 - if (ret) 86 - goto out; 87 - 88 - ret = add_reserved_mem_ranges(mem_ranges); 89 - if (ret) 90 - goto out; 91 - 92 - /* exclude memory ranges should be sorted for easy lookup */ 93 - sort_memory_ranges(*mem_ranges, true); 94 - out: 95 - if (ret) 96 - pr_err("Failed to setup exclude memory ranges\n"); 97 - return ret; 98 - } 99 - 100 - /** 101 - * get_reserved_memory_ranges - Get reserve memory ranges. This list includes 102 - * memory regions that should be added to the 103 - * memory reserve map to ensure the region is 104 - * protected from any mischief. 105 - * @mem_ranges: Range list to add the memory ranges to. 106 - * 107 - * Returns 0 on success, negative errno on error. 
108 - */ 109 - static int get_reserved_memory_ranges(struct crash_mem **mem_ranges) 110 - { 111 - int ret; 112 - 113 - ret = add_rtas_mem_range(mem_ranges); 114 - if (ret) 115 - goto out; 116 - 117 - ret = add_tce_mem_ranges(mem_ranges); 118 - if (ret) 119 - goto out; 120 - 121 - ret = add_reserved_mem_ranges(mem_ranges); 122 - out: 123 - if (ret) 124 - pr_err("Failed to setup reserved memory ranges\n"); 125 - return ret; 126 - } 127 49 128 50 /** 129 51 * __locate_mem_hole_top_down - Looks top down for a large enough memory hole ··· 246 322 } 247 323 248 324 #ifdef CONFIG_CRASH_DUMP 249 - /** 250 - * get_usable_memory_ranges - Get usable memory ranges. This list includes 251 - * regions like crashkernel, opal/rtas & tce-table, 252 - * that kdump kernel could use. 253 - * @mem_ranges: Range list to add the memory ranges to. 254 - * 255 - * Returns 0 on success, negative errno on error. 256 - */ 257 - static int get_usable_memory_ranges(struct crash_mem **mem_ranges) 258 - { 259 - int ret; 260 - 261 - /* 262 - * Early boot failure observed on guests when low memory (first memory 263 - * block?) is not added to usable memory. So, add [0, crashk_res.end] 264 - * instead of [crashk_res.start, crashk_res.end] to workaround it. 265 - * Also, crashed kernel's memory must be added to reserve map to 266 - * avoid kdump kernel from using it. 267 - */ 268 - ret = add_mem_range(mem_ranges, 0, crashk_res.end + 1); 269 - if (ret) 270 - goto out; 271 - 272 - ret = add_rtas_mem_range(mem_ranges); 273 - if (ret) 274 - goto out; 275 - 276 - ret = add_opal_mem_range(mem_ranges); 277 - if (ret) 278 - goto out; 279 - 280 - ret = add_tce_mem_ranges(mem_ranges); 281 - out: 282 - if (ret) 283 - pr_err("Failed to setup usable memory ranges\n"); 284 - return ret; 285 - } 286 - 287 - /** 288 - * get_crash_memory_ranges - Get crash memory ranges. This list includes 289 - * first/crashing kernel's memory regions that 290 - * would be exported via an elfcore. 
291 - * @mem_ranges: Range list to add the memory ranges to. 292 - * 293 - * Returns 0 on success, negative errno on error. 294 - */ 295 - static int get_crash_memory_ranges(struct crash_mem **mem_ranges) 296 - { 297 - phys_addr_t base, end; 298 - struct crash_mem *tmem; 299 - u64 i; 300 - int ret; 301 - 302 - for_each_mem_range(i, &base, &end) { 303 - u64 size = end - base; 304 - 305 - /* Skip backup memory region, which needs a separate entry */ 306 - if (base == BACKUP_SRC_START) { 307 - if (size > BACKUP_SRC_SIZE) { 308 - base = BACKUP_SRC_END + 1; 309 - size -= BACKUP_SRC_SIZE; 310 - } else 311 - continue; 312 - } 313 - 314 - ret = add_mem_range(mem_ranges, base, size); 315 - if (ret) 316 - goto out; 317 - 318 - /* Try merging adjacent ranges before reallocation attempt */ 319 - if ((*mem_ranges)->nr_ranges == (*mem_ranges)->max_nr_ranges) 320 - sort_memory_ranges(*mem_ranges, true); 321 - } 322 - 323 - /* Reallocate memory ranges if there is no space to split ranges */ 324 - tmem = *mem_ranges; 325 - if (tmem && (tmem->nr_ranges == tmem->max_nr_ranges)) { 326 - tmem = realloc_mem_ranges(mem_ranges); 327 - if (!tmem) 328 - goto out; 329 - } 330 - 331 - /* Exclude crashkernel region */ 332 - ret = crash_exclude_mem_range(tmem, crashk_res.start, crashk_res.end); 333 - if (ret) 334 - goto out; 335 - 336 - /* 337 - * FIXME: For now, stay in parity with kexec-tools but if RTAS/OPAL 338 - * regions are exported to save their context at the time of 339 - * crash, they should actually be backed up just like the 340 - * first 64K bytes of memory. 
341 - */ 342 - ret = add_rtas_mem_range(mem_ranges); 343 - if (ret) 344 - goto out; 345 - 346 - ret = add_opal_mem_range(mem_ranges); 347 - if (ret) 348 - goto out; 349 - 350 - /* create a separate program header for the backup region */ 351 - ret = add_mem_range(mem_ranges, BACKUP_SRC_START, BACKUP_SRC_SIZE); 352 - if (ret) 353 - goto out; 354 - 355 - sort_memory_ranges(*mem_ranges, false); 356 - out: 357 - if (ret) 358 - pr_err("Failed to setup crash memory ranges\n"); 359 - return ret; 360 - } 361 - 362 325 /** 363 326 * check_realloc_usable_mem - Reallocate buffer if it can't accommodate entries 364 327 * @um_info: Usable memory buffer and ranges info. ··· 595 784 } 596 785 } 597 786 787 + static unsigned int kdump_extra_elfcorehdr_size(struct crash_mem *cmem) 788 + { 789 + #if defined(CONFIG_CRASH_HOTPLUG) && defined(CONFIG_MEMORY_HOTPLUG) 790 + unsigned int extra_sz = 0; 791 + 792 + if (CONFIG_CRASH_MAX_MEMORY_RANGES > (unsigned int)PN_XNUM) 793 + pr_warn("Number of Phdrs %u exceeds max\n", CONFIG_CRASH_MAX_MEMORY_RANGES); 794 + else if (cmem->nr_ranges >= CONFIG_CRASH_MAX_MEMORY_RANGES) 795 + pr_warn("Configured crash mem ranges may not be enough\n"); 796 + else 797 + extra_sz = (CONFIG_CRASH_MAX_MEMORY_RANGES - cmem->nr_ranges) * sizeof(Elf64_Phdr); 798 + 799 + return extra_sz; 800 + #endif 801 + return 0; 802 + } 803 + 598 804 /** 599 805 * load_elfcorehdr_segment - Setup crash memory ranges and initialize elfcorehdr 600 806 * segment needed to load kdump kernel. 
··· 643 815 644 816 kbuf->buffer = headers; 645 817 kbuf->mem = KEXEC_BUF_MEM_UNKNOWN; 646 - kbuf->bufsz = kbuf->memsz = headers_sz; 818 + kbuf->bufsz = headers_sz; 819 + kbuf->memsz = headers_sz + kdump_extra_elfcorehdr_size(cmem); 647 820 kbuf->top_down = false; 648 821 649 822 ret = kexec_add_buffer(kbuf); ··· 808 979 unsigned int cpu_nodes, extra_size = 0; 809 980 struct device_node *dn; 810 981 u64 usm_entries; 982 + #ifdef CONFIG_CRASH_HOTPLUG 983 + unsigned int possible_cpu_nodes; 984 + #endif 811 985 812 986 if (!IS_ENABLED(CONFIG_CRASH_DUMP) || image->type != KEXEC_TYPE_CRASH) 813 987 return 0; ··· 838 1006 if (cpu_nodes > boot_cpu_node_count) 839 1007 extra_size += (cpu_nodes - boot_cpu_node_count) * cpu_node_size(); 840 1008 1009 + #ifdef CONFIG_CRASH_HOTPLUG 1010 + /* 1011 + * Make sure enough space is reserved to accommodate possible CPU nodes 1012 + * in the crash FDT. This allows packing possible CPU nodes which are 1013 + * not yet present in the system without regenerating the entire FDT. 1014 + */ 1015 + if (image->type == KEXEC_TYPE_CRASH) { 1016 + possible_cpu_nodes = num_possible_cpus() / threads_per_core; 1017 + if (possible_cpu_nodes > cpu_nodes) 1018 + extra_size += (possible_cpu_nodes - cpu_nodes) * cpu_node_size(); 1019 + } 1020 + #endif 1021 + 841 1022 return extra_size; 842 1023 } 843 1024 ··· 871 1026 extra_size += (unsigned int)plpks_get_passwordlen(); 872 1027 873 1028 return extra_size + kdump_extra_fdt_size_ppc64(image); 874 - } 875 - 876 - /** 877 - * add_node_props - Reads node properties from device node structure and add 878 - * them to fdt. 879 - * @fdt: Flattened device tree of the kernel 880 - * @node_offset: offset of the node to add a property at 881 - * @dn: device node pointer 882 - * 883 - * Returns 0 on success, negative errno on error. 
884 - */ 885 - static int add_node_props(void *fdt, int node_offset, const struct device_node *dn) 886 - { 887 - int ret = 0; 888 - struct property *pp; 889 - 890 - if (!dn) 891 - return -EINVAL; 892 - 893 - for_each_property_of_node(dn, pp) { 894 - ret = fdt_setprop(fdt, node_offset, pp->name, pp->value, pp->length); 895 - if (ret < 0) { 896 - pr_err("Unable to add %s property: %s\n", pp->name, fdt_strerror(ret)); 897 - return ret; 898 - } 899 - } 900 - return ret; 901 - } 902 - 903 - /** 904 - * update_cpus_node - Update cpus node of flattened device tree using of_root 905 - * device node. 906 - * @fdt: Flattened device tree of the kernel. 907 - * 908 - * Returns 0 on success, negative errno on error. 909 - */ 910 - static int update_cpus_node(void *fdt) 911 - { 912 - struct device_node *cpus_node, *dn; 913 - int cpus_offset, cpus_subnode_offset, ret = 0; 914 - 915 - cpus_offset = fdt_path_offset(fdt, "/cpus"); 916 - if (cpus_offset < 0 && cpus_offset != -FDT_ERR_NOTFOUND) { 917 - pr_err("Malformed device tree: error reading /cpus node: %s\n", 918 - fdt_strerror(cpus_offset)); 919 - return cpus_offset; 920 - } 921 - 922 - if (cpus_offset > 0) { 923 - ret = fdt_del_node(fdt, cpus_offset); 924 - if (ret < 0) { 925 - pr_err("Error deleting /cpus node: %s\n", fdt_strerror(ret)); 926 - return -EINVAL; 927 - } 928 - } 929 - 930 - /* Add cpus node to fdt */ 931 - cpus_offset = fdt_add_subnode(fdt, fdt_path_offset(fdt, "/"), "cpus"); 932 - if (cpus_offset < 0) { 933 - pr_err("Error creating /cpus node: %s\n", fdt_strerror(cpus_offset)); 934 - return -EINVAL; 935 - } 936 - 937 - /* Add cpus node properties */ 938 - cpus_node = of_find_node_by_path("/cpus"); 939 - ret = add_node_props(fdt, cpus_offset, cpus_node); 940 - of_node_put(cpus_node); 941 - if (ret < 0) 942 - return ret; 943 - 944 - /* Loop through all subnodes of cpus and add them to fdt */ 945 - for_each_node_by_type(dn, "cpu") { 946 - cpus_subnode_offset = fdt_add_subnode(fdt, cpus_offset, dn->full_name); 947 - 
if (cpus_subnode_offset < 0) { 948 - pr_err("Unable to add %s subnode: %s\n", dn->full_name, 949 - fdt_strerror(cpus_subnode_offset)); 950 - ret = cpus_subnode_offset; 951 - goto out; 952 - } 953 - 954 - ret = add_node_props(fdt, cpus_subnode_offset, dn); 955 - if (ret < 0) 956 - goto out; 957 - } 958 - out: 959 - of_node_put(dn); 960 - return ret; 961 1029 } 962 1030 963 1031 static int copy_property(void *fdt, int node_offset, const struct device_node *dn,
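`kdump_extra_elfcorehdr_size()` above pads the elfcorehdr segment's `memsz` so memory hot-added later can still be described without reloading the kdump kernel. The guard-and-padding logic, sketched with illustrative parameter names (`max_ranges` stands in for `CONFIG_CRASH_MAX_MEMORY_RANGES`; `PN_XNUM`, the ELF cap on directly-encoded program headers, comes from `<elf.h>`):

```c
#include <assert.h>
#include <elf.h>

/* Spare Elf64_Phdr bytes to reserve beyond the current crash ranges.
 * Returns 0 when the configuration cannot be honored, matching the
 * warn-and-fall-through behavior of kdump_extra_elfcorehdr_size(). */
unsigned int extra_elfcorehdr_bytes(unsigned int max_ranges, unsigned int nr_ranges)
{
	if (max_ranges > (unsigned int)PN_XNUM)	/* more phdrs than ELF allows */
		return 0;
	if (nr_ranges >= max_ranges)		/* already at or past the cap */
		return 0;
	return (max_ranges - nr_ranges) * sizeof(Elf64_Phdr);
}
```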
+303 -9
arch/powerpc/kexec/ranges.c
··· 20 20 #include <linux/kexec.h> 21 21 #include <linux/of.h> 22 22 #include <linux/slab.h> 23 + #include <linux/memblock.h> 24 + #include <linux/crash_core.h> 23 25 #include <asm/sections.h> 24 26 #include <asm/kexec_ranges.h> 27 + #include <asm/crashdump-ppc64.h> 25 28 29 + #if defined(CONFIG_KEXEC_FILE) || defined(CONFIG_CRASH_DUMP) 26 30 /** 27 31 * get_max_nr_ranges - Get the max no. of ranges crash_mem structure 28 32 * could hold, given the size allocated for it. ··· 238 234 return __add_mem_range(mem_ranges, base, size); 239 235 } 240 236 237 + #endif /* CONFIG_KEXEC_FILE || CONFIG_CRASH_DUMP */ 238 + 239 + #ifdef CONFIG_KEXEC_FILE 241 240 /** 242 241 * add_tce_mem_ranges - Adds tce-table range to the given memory ranges list. 243 242 * @mem_ranges: Range list to add the memory range(s) to. 244 243 * 245 244 * Returns 0 on success, negative errno on error. 246 245 */ 247 - int add_tce_mem_ranges(struct crash_mem **mem_ranges) 246 + static int add_tce_mem_ranges(struct crash_mem **mem_ranges) 248 247 { 249 248 struct device_node *dn = NULL; 250 249 int ret = 0; ··· 286 279 * 287 280 * Returns 0 on success, negative errno on error. 288 281 */ 289 - int add_initrd_mem_range(struct crash_mem **mem_ranges) 282 + static int add_initrd_mem_range(struct crash_mem **mem_ranges) 290 283 { 291 284 u64 base, end; 292 285 int ret; ··· 303 296 return ret; 304 297 } 305 298 306 - #ifdef CONFIG_PPC_64S_HASH_MMU 307 299 /** 308 300 * add_htab_mem_range - Adds htab range to the given memory ranges list, 309 301 * if it exists ··· 310 304 * 311 305 * Returns 0 on success, negative errno on error. 
312 306 */ 313 - int add_htab_mem_range(struct crash_mem **mem_ranges) 307 + static int add_htab_mem_range(struct crash_mem **mem_ranges) 314 308 { 309 + 310 + #ifdef CONFIG_PPC_64S_HASH_MMU 315 311 if (!htab_address) 316 312 return 0; 317 313 318 314 return add_mem_range(mem_ranges, __pa(htab_address), htab_size_bytes); 319 - } 315 + #else 316 + return 0; 320 317 #endif 318 + } 321 319 322 320 /** 323 321 * add_kernel_mem_range - Adds kernel text region to the given ··· 330 320 * 331 321 * Returns 0 on success, negative errno on error. 332 322 */ 333 - int add_kernel_mem_range(struct crash_mem **mem_ranges) 323 + static int add_kernel_mem_range(struct crash_mem **mem_ranges) 334 324 { 335 325 return add_mem_range(mem_ranges, 0, __pa(_end)); 336 326 } 327 + #endif /* CONFIG_KEXEC_FILE */ 337 328 329 + #if defined(CONFIG_KEXEC_FILE) || defined(CONFIG_CRASH_DUMP) 338 330 /** 339 331 * add_rtas_mem_range - Adds RTAS region to the given memory ranges list. 340 332 * @mem_ranges: Range list to add the memory range to. 341 333 * 342 334 * Returns 0 on success, negative errno on error. 343 335 */ 344 - int add_rtas_mem_range(struct crash_mem **mem_ranges) 336 + static int add_rtas_mem_range(struct crash_mem **mem_ranges) 345 337 { 346 338 struct device_node *dn; 347 339 u32 base, size; ··· 368 356 * 369 357 * Returns 0 on success, negative errno on error. 370 358 */ 371 - int add_opal_mem_range(struct crash_mem **mem_ranges) 359 + static int add_opal_mem_range(struct crash_mem **mem_ranges) 372 360 { 373 361 struct device_node *dn; 374 362 u64 base, size; ··· 386 374 of_node_put(dn); 387 375 return ret; 388 376 } 377 + #endif /* CONFIG_KEXEC_FILE || CONFIG_CRASH_DUMP */ 389 378 379 + #ifdef CONFIG_KEXEC_FILE 390 380 /** 391 381 * add_reserved_mem_ranges - Adds "/reserved-ranges" regions exported by f/w 392 382 * to the given memory ranges list. ··· 396 382 * 397 383 * Returns 0 on success, negative errno on error. 
398 384 */ 399 - int add_reserved_mem_ranges(struct crash_mem **mem_ranges) 385 + static int add_reserved_mem_ranges(struct crash_mem **mem_ranges) 400 386 { 401 387 int n_mem_addr_cells, n_mem_size_cells, i, len, cells, ret = 0; 402 388 struct device_node *root = of_find_node_by_path("/"); ··· 426 412 427 413 return ret; 428 414 } 415 + 416 + /** 417 + * get_reserved_memory_ranges - Get reserve memory ranges. This list includes 418 + * memory regions that should be added to the 419 + * memory reserve map to ensure the region is 420 + * protected from any mischief. 421 + * @mem_ranges: Range list to add the memory ranges to. 422 + * 423 + * Returns 0 on success, negative errno on error. 424 + */ 425 + int get_reserved_memory_ranges(struct crash_mem **mem_ranges) 426 + { 427 + int ret; 428 + 429 + ret = add_rtas_mem_range(mem_ranges); 430 + if (ret) 431 + goto out; 432 + 433 + ret = add_tce_mem_ranges(mem_ranges); 434 + if (ret) 435 + goto out; 436 + 437 + ret = add_reserved_mem_ranges(mem_ranges); 438 + out: 439 + if (ret) 440 + pr_err("Failed to setup reserved memory ranges\n"); 441 + return ret; 442 + } 443 + 444 + /** 445 + * get_exclude_memory_ranges - Get exclude memory ranges. This list includes 446 + * regions like opal/rtas, tce-table, initrd, 447 + * kernel, htab which should be avoided while 448 + * setting up kexec load segments. 449 + * @mem_ranges: Range list to add the memory ranges to. 450 + * 451 + * Returns 0 on success, negative errno on error. 
452 + */ 453 + int get_exclude_memory_ranges(struct crash_mem **mem_ranges) 454 + { 455 + int ret; 456 + 457 + ret = add_tce_mem_ranges(mem_ranges); 458 + if (ret) 459 + goto out; 460 + 461 + ret = add_initrd_mem_range(mem_ranges); 462 + if (ret) 463 + goto out; 464 + 465 + ret = add_htab_mem_range(mem_ranges); 466 + if (ret) 467 + goto out; 468 + 469 + ret = add_kernel_mem_range(mem_ranges); 470 + if (ret) 471 + goto out; 472 + 473 + ret = add_rtas_mem_range(mem_ranges); 474 + if (ret) 475 + goto out; 476 + 477 + ret = add_opal_mem_range(mem_ranges); 478 + if (ret) 479 + goto out; 480 + 481 + ret = add_reserved_mem_ranges(mem_ranges); 482 + if (ret) 483 + goto out; 484 + 485 + /* exclude memory ranges should be sorted for easy lookup */ 486 + sort_memory_ranges(*mem_ranges, true); 487 + out: 488 + if (ret) 489 + pr_err("Failed to setup exclude memory ranges\n"); 490 + return ret; 491 + } 492 + 493 + #ifdef CONFIG_CRASH_DUMP 494 + /** 495 + * get_usable_memory_ranges - Get usable memory ranges. This list includes 496 + * regions like crashkernel, opal/rtas & tce-table, 497 + * that kdump kernel could use. 498 + * @mem_ranges: Range list to add the memory ranges to. 499 + * 500 + * Returns 0 on success, negative errno on error. 501 + */ 502 + int get_usable_memory_ranges(struct crash_mem **mem_ranges) 503 + { 504 + int ret; 505 + 506 + /* 507 + * Early boot failure observed on guests when low memory (first memory 508 + * block?) is not added to usable memory. So, add [0, crashk_res.end] 509 + * instead of [crashk_res.start, crashk_res.end] to workaround it. 510 + * Also, crashed kernel's memory must be added to reserve map to 511 + * avoid kdump kernel from using it. 
512 + */ 513 + ret = add_mem_range(mem_ranges, 0, crashk_res.end + 1); 514 + if (ret) 515 + goto out; 516 + 517 + ret = add_rtas_mem_range(mem_ranges); 518 + if (ret) 519 + goto out; 520 + 521 + ret = add_opal_mem_range(mem_ranges); 522 + if (ret) 523 + goto out; 524 + 525 + ret = add_tce_mem_ranges(mem_ranges); 526 + out: 527 + if (ret) 528 + pr_err("Failed to setup usable memory ranges\n"); 529 + return ret; 530 + } 531 + #endif /* CONFIG_CRASH_DUMP */ 532 + #endif /* CONFIG_KEXEC_FILE */ 533 + 534 + #ifdef CONFIG_CRASH_DUMP 535 + /** 536 + * get_crash_memory_ranges - Get crash memory ranges. This list includes 537 + * first/crashing kernel's memory regions that 538 + * would be exported via an elfcore. 539 + * @mem_ranges: Range list to add the memory ranges to. 540 + * 541 + * Returns 0 on success, negative errno on error. 542 + */ 543 + int get_crash_memory_ranges(struct crash_mem **mem_ranges) 544 + { 545 + phys_addr_t base, end; 546 + struct crash_mem *tmem; 547 + u64 i; 548 + int ret; 549 + 550 + for_each_mem_range(i, &base, &end) { 551 + u64 size = end - base; 552 + 553 + /* Skip backup memory region, which needs a separate entry */ 554 + if (base == BACKUP_SRC_START) { 555 + if (size > BACKUP_SRC_SIZE) { 556 + base = BACKUP_SRC_END + 1; 557 + size -= BACKUP_SRC_SIZE; 558 + } else 559 + continue; 560 + } 561 + 562 + ret = add_mem_range(mem_ranges, base, size); 563 + if (ret) 564 + goto out; 565 + 566 + /* Try merging adjacent ranges before reallocation attempt */ 567 + if ((*mem_ranges)->nr_ranges == (*mem_ranges)->max_nr_ranges) 568 + sort_memory_ranges(*mem_ranges, true); 569 + } 570 + 571 + /* Reallocate memory ranges if there is no space to split ranges */ 572 + tmem = *mem_ranges; 573 + if (tmem && (tmem->nr_ranges == tmem->max_nr_ranges)) { 574 + tmem = realloc_mem_ranges(mem_ranges); 575 + if (!tmem) 576 + goto out; 577 + } 578 + 579 + /* Exclude crashkernel region */ 580 + ret = crash_exclude_mem_range(tmem, crashk_res.start, crashk_res.end); 581 + 
if (ret) 582 + goto out; 583 + 584 + /* 585 + * FIXME: For now, stay in parity with kexec-tools but if RTAS/OPAL 586 + * regions are exported to save their context at the time of 587 + * crash, they should actually be backed up just like the 588 + * first 64K bytes of memory. 589 + */ 590 + ret = add_rtas_mem_range(mem_ranges); 591 + if (ret) 592 + goto out; 593 + 594 + ret = add_opal_mem_range(mem_ranges); 595 + if (ret) 596 + goto out; 597 + 598 + /* create a separate program header for the backup region */ 599 + ret = add_mem_range(mem_ranges, BACKUP_SRC_START, BACKUP_SRC_SIZE); 600 + if (ret) 601 + goto out; 602 + 603 + sort_memory_ranges(*mem_ranges, false); 604 + out: 605 + if (ret) 606 + pr_err("Failed to setup crash memory ranges\n"); 607 + return ret; 608 + } 609 + 610 + /** 611 + * remove_mem_range - Removes the given memory range from the range list. 612 + * @mem_ranges: Range list to remove the memory range to. 613 + * @base: Base address of the range to remove. 614 + * @size: Size of the memory range to remove. 615 + * 616 + * (Re)allocates memory, if needed. 617 + * 618 + * Returns 0 on success, negative errno on error. 619 + */ 620 + int remove_mem_range(struct crash_mem **mem_ranges, u64 base, u64 size) 621 + { 622 + u64 end; 623 + int ret = 0; 624 + unsigned int i; 625 + u64 mstart, mend; 626 + struct crash_mem *mem_rngs = *mem_ranges; 627 + 628 + if (!size) 629 + return 0; 630 + 631 + /* 632 + * Memory range are stored as start and end address, use 633 + * the same format to do remove operation. 
634 + */ 635 + end = base + size - 1; 636 + 637 + for (i = 0; i < mem_rngs->nr_ranges; i++) { 638 + mstart = mem_rngs->ranges[i].start; 639 + mend = mem_rngs->ranges[i].end; 640 + 641 + /* 642 + * Memory range to remove is not part of this range entry 643 + * in the memory range list 644 + */ 645 + if (!(base >= mstart && end <= mend)) 646 + continue; 647 + 648 + /* 649 + * Memory range to remove is equivalent to this entry in the 650 + * memory range list. Remove the range entry from the list. 651 + */ 652 + if (base == mstart && end == mend) { 653 + for (; i < mem_rngs->nr_ranges - 1; i++) { 654 + mem_rngs->ranges[i].start = mem_rngs->ranges[i+1].start; 655 + mem_rngs->ranges[i].end = mem_rngs->ranges[i+1].end; 656 + } 657 + mem_rngs->nr_ranges--; 658 + goto out; 659 + } 660 + /* 661 + * Start address of the memory range to remove and the 662 + * current memory range entry in the list is same. Just 663 + * move the start address of the current memory range 664 + * entry in the list to end + 1. 665 + */ 666 + else if (base == mstart) { 667 + mem_rngs->ranges[i].start = end + 1; 668 + goto out; 669 + } 670 + /* 671 + * End address of the memory range to remove and the 672 + * current memory range entry in the list is same. 673 + * Just move the end address of the current memory 674 + * range entry in the list to base - 1. 675 + */ 676 + else if (end == mend) { 677 + mem_rngs->ranges[i].end = base - 1; 678 + goto out; 679 + } 680 + /* 681 + * Memory range to remove is not at the edge of current 682 + * memory range entry. Split the current memory entry into 683 + * two halves. 684 + */ 685 + else { 686 + mem_rngs->ranges[i].end = base - 1; 687 + size = mend - end; 688 + ret = add_mem_range(mem_ranges, end + 1, size); 689 + } 690 + } 691 + out: 692 + return ret; 693 + } 694 + #endif /* CONFIG_CRASH_DUMP */
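The four cases handled by remove_mem_range() above — exact match, trim from the start, trim from the end, and an interior split — can be sketched as a small userspace analogue. `range_t`, the inclusive bounds, and the two-slot output array are illustrative, not the kernel's `struct crash_mem`:

```c
#include <stdint.h>

/* Illustrative stand-in for one crash_mem entry: inclusive [start, end]. */
typedef struct { uint64_t start, end; } range_t;

/*
 * Remove [base, base + size - 1] from a single range, mirroring the
 * four cases in remove_mem_range(). Writes the surviving pieces to
 * out[] and returns how many there are (0, 1 or 2).
 */
static int remove_range(range_t r, uint64_t base, uint64_t size, range_t *out)
{
	uint64_t end = base + size - 1;
	int n = 0;

	if (base < r.start || end > r.end) {	/* not contained: keep as-is */
		out[n++] = r;
		return n;
	}
	if (base == r.start && end == r.end)	/* exact match: drop the entry */
		return 0;
	if (base == r.start) {			/* trim from the start */
		out[n++] = (range_t){ end + 1, r.end };
		return n;
	}
	if (end == r.end) {			/* trim from the end */
		out[n++] = (range_t){ r.start, base - 1 };
		return n;
	}
	/* interior: split into two halves */
	out[n++] = (range_t){ r.start, base - 1 };
	out[n++] = (range_t){ end + 1, r.end };
	return n;
}
```

The kernel version additionally shifts later entries down for the exact-match case and reuses add_mem_range() (with possible reallocation) for the split.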
-4
arch/powerpc/kvm/book3s.c
··· 360 360 break; 361 361 } 362 362 363 - #if 0 364 - printk(KERN_INFO "Deliver interrupt 0x%x? %x\n", vec, deliver); 365 - #endif 366 - 367 363 if (deliver) 368 364 kvmppc_inject_interrupt(vcpu, vec, 0); 369 365
+2 -2
arch/powerpc/kvm/book3s_emulate.c
··· 714 714 case SPRN_HID1: 715 715 to_book3s(vcpu)->hid[1] = spr_val; 716 716 break; 717 - case SPRN_HID2: 717 + case SPRN_HID2_750FX: 718 718 to_book3s(vcpu)->hid[2] = spr_val; 719 719 break; 720 720 case SPRN_HID2_GEKKO: ··· 900 900 case SPRN_HID1: 901 901 *spr_val = to_book3s(vcpu)->hid[1]; 902 902 break; 903 - case SPRN_HID2: 903 + case SPRN_HID2_750FX: 904 904 case SPRN_HID2_GEKKO: 905 905 *spr_val = to_book3s(vcpu)->hid[2]; 906 906 break;
+1 -1
arch/powerpc/kvm/book3s_hv.c
··· 4857 4857 * entering a nested guest in which case the decrementer is now owned 4858 4858 * by L2 and the L1 decrementer is provided in hdec_expires 4859 4859 */ 4860 - if (!kvmhv_is_nestedv2() && kvmppc_core_pending_dec(vcpu) && 4860 + if (kvmppc_core_pending_dec(vcpu) && 4861 4861 ((tb < kvmppc_dec_expires_host_tb(vcpu)) || 4862 4862 (trap == BOOK3S_INTERRUPT_SYSCALL && 4863 4863 kvmppc_get_gpr(vcpu, 3) == H_ENTER_NESTED)))
+2 -2
arch/powerpc/kvm/book3s_hv_nestedv2.c
··· 71 71 } 72 72 73 73 if (kvmppc_gsm_includes(gsm, KVMPPC_GSID_RUN_OUTPUT)) { 74 - kvmppc_gse_put_buff_info(gsb, KVMPPC_GSID_RUN_OUTPUT, 75 - cfg->vcpu_run_output_cfg); 74 + rc = kvmppc_gse_put_buff_info(gsb, KVMPPC_GSID_RUN_OUTPUT, 75 + cfg->vcpu_run_output_cfg); 76 76 if (rc < 0) 77 77 return rc; 78 78 }
+1 -1
arch/powerpc/kvm/book3s_xive.c
··· 531 531 xc->cppr = xive_prio_from_guest(new_cppr); 532 532 533 533 /* 534 - * IPIs are synthetized from MFRR and thus don't need 534 + * IPIs are synthesized from MFRR and thus don't need 535 535 * any special EOI handling. The underlying interrupt 536 536 * used to signal MFRR changes is EOId when fetched from 537 537 * the queue.
-2
arch/powerpc/lib/Makefile
··· 3 3 # Makefile for ppc-specific library files.. 4 4 # 5 5 6 - ccflags-$(CONFIG_PPC64) := $(NO_MINIMAL_TOC) 7 - 8 6 CFLAGS_code-patching.o += -fno-stack-protector 9 7 CFLAGS_feature-fixups.o += -fno-stack-protector 10 8
+27 -4
arch/powerpc/lib/code-patching.c
··· 372 372 } 373 373 NOKPROBE_SYMBOL(patch_instruction); 374 374 375 + static int patch_memset64(u64 *addr, u64 val, size_t count) 376 + { 377 + for (u64 *end = addr + count; addr < end; addr++) 378 + __put_kernel_nofault(addr, &val, u64, failed); 379 + 380 + return 0; 381 + 382 + failed: 383 + return -EPERM; 384 + } 385 + 386 + static int patch_memset32(u32 *addr, u32 val, size_t count) 387 + { 388 + for (u32 *end = addr + count; addr < end; addr++) 389 + __put_kernel_nofault(addr, &val, u32, failed); 390 + 391 + return 0; 392 + 393 + failed: 394 + return -EPERM; 395 + } 396 + 375 397 static int __patch_instructions(u32 *patch_addr, u32 *code, size_t len, bool repeat_instr) 376 398 { 377 399 unsigned long start = (unsigned long)patch_addr; 400 + int err; 378 401 379 402 /* Repeat instruction */ 380 403 if (repeat_instr) { ··· 406 383 if (ppc_inst_prefixed(instr)) { 407 384 u64 val = ppc_inst_as_ulong(instr); 408 385 409 - memset64((u64 *)patch_addr, val, len / 8); 386 + err = patch_memset64((u64 *)patch_addr, val, len / 8); 410 387 } else { 411 388 u32 val = ppc_inst_val(instr); 412 389 413 - memset32(patch_addr, val, len / 4); 390 + err = patch_memset32(patch_addr, val, len / 4); 414 391 } 415 392 } else { 416 - memcpy(patch_addr, code, len); 393 + err = copy_to_kernel_nofault(patch_addr, code, len); 417 394 } 418 395 419 396 smp_wmb(); /* smp write barrier */ 420 397 flush_icache_range(start, start + len); 421 - return 0; 398 + return err; 422 399 } 423 400 424 401 /*
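The point of the patch above is that memset32()/memset64() would oops on a fault mid-patch, while a `__put_kernel_nofault()` loop aborts cleanly with -EPERM. The fill pattern itself is simple; here is a userspace sketch where a plain store stands in for the fault-tolerant kernel primitive:

```c
#include <stdint.h>
#include <stddef.h>

/*
 * Userspace sketch of patch_memset32()/patch_memset64(): fill `count`
 * slots with `val`, one store at a time. In the kernel each store goes
 * through __put_kernel_nofault(), so a fault breaks out of the loop
 * and the function returns -EPERM instead of oopsing.
 */
static int patch_memset32(uint32_t *addr, uint32_t val, size_t count)
{
	for (uint32_t *end = addr + count; addr < end; addr++)
		*addr = val;	/* kernel: __put_kernel_nofault(addr, &val, u32, failed) */
	return 0;
}

static int patch_memset64(uint64_t *addr, uint64_t val, size_t count)
{
	for (uint64_t *end = addr + count; addr < end; addr++)
		*addr = val;	/* kernel: __put_kernel_nofault(addr, &val, u64, failed) */
	return 0;
}
```

The 64-bit variant exists because a repeated prefixed instruction is an 8-byte unit and must be stored as one, not as two independent words.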
+8
arch/powerpc/lib/feature-fixups.c
··· 25 25 #include <asm/firmware.h> 26 26 #include <asm/inst.h> 27 27 28 + /* 29 + * Used to generate warnings if mmu or cpu feature check functions that 30 + * use static keys before they are initialized. 31 + */ 32 + bool static_key_feature_checks_initialized __read_mostly; 33 + EXPORT_SYMBOL_GPL(static_key_feature_checks_initialized); 34 + 28 35 struct fixup_entry { 29 36 unsigned long mask; 30 37 unsigned long value; ··· 686 679 jump_label_init(); 687 680 cpu_feature_keys_init(); 688 681 mmu_feature_keys_init(); 682 + static_key_feature_checks_initialized = true; 689 683 } 690 684 691 685 static int __init check_features(void)
+92
arch/powerpc/lib/test-code-patching.c
··· 347 347 check(!memcmp(iptr, expected, sizeof(expected))); 348 348 } 349 349 350 + static void __init test_multi_instruction_patching(void) 351 + { 352 + u32 code[32]; 353 + void *buf; 354 + u32 *addr32; 355 + u64 *addr64; 356 + ppc_inst_t inst64 = ppc_inst_prefix(OP_PREFIX << 26 | 3UL << 24, PPC_RAW_TRAP()); 357 + u32 inst32 = PPC_RAW_NOP(); 358 + 359 + buf = vzalloc(PAGE_SIZE * 8); 360 + check(buf); 361 + if (!buf) 362 + return; 363 + 364 + /* Test single page 32-bit repeated instruction */ 365 + addr32 = buf + PAGE_SIZE; 366 + check(!patch_instructions(addr32 + 1, &inst32, 12, true)); 367 + 368 + check(addr32[0] == 0); 369 + check(addr32[1] == inst32); 370 + check(addr32[2] == inst32); 371 + check(addr32[3] == inst32); 372 + check(addr32[4] == 0); 373 + 374 + /* Test single page 64-bit repeated instruction */ 375 + if (IS_ENABLED(CONFIG_PPC64)) { 376 + check(ppc_inst_prefixed(inst64)); 377 + 378 + addr64 = buf + PAGE_SIZE * 2; 379 + ppc_inst_write(code, inst64); 380 + check(!patch_instructions((u32 *)(addr64 + 1), code, 24, true)); 381 + 382 + check(addr64[0] == 0); 383 + check(ppc_inst_equal(ppc_inst_read((u32 *)&addr64[1]), inst64)); 384 + check(ppc_inst_equal(ppc_inst_read((u32 *)&addr64[2]), inst64)); 385 + check(ppc_inst_equal(ppc_inst_read((u32 *)&addr64[3]), inst64)); 386 + check(addr64[4] == 0); 387 + } 388 + 389 + /* Test single page memcpy */ 390 + addr32 = buf + PAGE_SIZE * 3; 391 + 392 + for (int i = 0; i < ARRAY_SIZE(code); i++) 393 + code[i] = i + 1; 394 + 395 + check(!patch_instructions(addr32 + 1, code, sizeof(code), false)); 396 + 397 + check(addr32[0] == 0); 398 + check(!memcmp(&addr32[1], code, sizeof(code))); 399 + check(addr32[ARRAY_SIZE(code) + 1] == 0); 400 + 401 + /* Test multipage 32-bit repeated instruction */ 402 + addr32 = buf + PAGE_SIZE * 4 - 8; 403 + check(!patch_instructions(addr32 + 1, &inst32, 12, true)); 404 + 405 + check(addr32[0] == 0); 406 + check(addr32[1] == inst32); 407 + check(addr32[2] == inst32); 408 + 
check(addr32[3] == inst32); 409 + check(addr32[4] == 0); 410 + 411 + /* Test multipage 64-bit repeated instruction */ 412 + if (IS_ENABLED(CONFIG_PPC64)) { 413 + check(ppc_inst_prefixed(inst64)); 414 + 415 + addr64 = buf + PAGE_SIZE * 5 - 8; 416 + ppc_inst_write(code, inst64); 417 + check(!patch_instructions((u32 *)(addr64 + 1), code, 24, true)); 418 + 419 + check(addr64[0] == 0); 420 + check(ppc_inst_equal(ppc_inst_read((u32 *)&addr64[1]), inst64)); 421 + check(ppc_inst_equal(ppc_inst_read((u32 *)&addr64[2]), inst64)); 422 + check(ppc_inst_equal(ppc_inst_read((u32 *)&addr64[3]), inst64)); 423 + check(addr64[4] == 0); 424 + } 425 + 426 + /* Test multipage memcpy */ 427 + addr32 = buf + PAGE_SIZE * 6 - 12; 428 + 429 + for (int i = 0; i < ARRAY_SIZE(code); i++) 430 + code[i] = i + 1; 431 + 432 + check(!patch_instructions(addr32 + 1, code, sizeof(code), false)); 433 + 434 + check(addr32[0] == 0); 435 + check(!memcmp(&addr32[1], code, sizeof(code))); 436 + check(addr32[ARRAY_SIZE(code) + 1] == 0); 437 + 438 + vfree(buf); 439 + } 440 + 350 441 static int __init test_code_patching(void) 351 442 { 352 443 pr_info("Running code patching self-tests ...\n"); ··· 447 356 test_create_function_call(); 448 357 test_translate_branch(); 449 358 test_prefixed_patching(); 359 + test_multi_instruction_patching(); 450 360 451 361 return 0; 452 362 }
-2
arch/powerpc/mm/Makefile
··· 3 3 # Makefile for the linux ppc-specific parts of the memory manager. 4 4 # 5 5 6 - ccflags-$(CONFIG_PPC64) := $(NO_MINIMAL_TOC) 7 - 8 6 obj-y := fault.o mem.o pgtable.o maccess.o pageattr.o \ 9 7 init_$(BITS).o pgtable_$(BITS).o \ 10 8 pgtable-frag.o ioremap.o ioremap_$(BITS).o \
-2
arch/powerpc/mm/book3s64/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0 2 2 3 - ccflags-y := $(NO_MINIMAL_TOC) 4 - 5 3 obj-y += mmu_context.o pgtable.o trace.o 6 4 ifdef CONFIG_PPC_64S_HASH_MMU 7 5 CFLAGS_REMOVE_slb.o = $(CC_FLAGS_FTRACE)
+1 -1
arch/powerpc/mm/cacheflush.c
··· 78 78 79 79 #ifdef CONFIG_HIGHMEM 80 80 /** 81 - * flush_dcache_icache_phys() - Flush a page by it's physical address 81 + * flush_dcache_icache_phys() - Flush a page by its physical address 82 82 * @physaddr: the physical address of the page 83 83 */ 84 84 static void flush_dcache_icache_phys(unsigned long physaddr)
+1 -1
arch/powerpc/mm/kasan/init_book3e_64.c
··· 112 112 pte_t zero_pte = pfn_pte(virt_to_pfn(kasan_early_shadow_page), PAGE_KERNEL_RO); 113 113 114 114 for_each_mem_range(i, &start, &end) 115 - kasan_init_phys_region((void *)start, (void *)end); 115 + kasan_init_phys_region(phys_to_virt(start), phys_to_virt(end)); 116 116 117 117 if (IS_ENABLED(CONFIG_KASAN_VMALLOC)) 118 118 kasan_remove_zero_shadow((void *)VMALLOC_START, VMALLOC_SIZE);
+1 -1
arch/powerpc/mm/kasan/init_book3s_64.c
··· 62 62 } 63 63 64 64 for_each_mem_range(i, &start, &end) 65 - kasan_init_phys_region((void *)start, (void *)end); 65 + kasan_init_phys_region(phys_to_virt(start), phys_to_virt(end)); 66 66 67 67 for (i = 0; i < PTRS_PER_PTE; i++) 68 68 __set_pte_at(&init_mm, (unsigned long)kasan_early_shadow_page,
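The two KASAN fixes above address the same mistake: for_each_mem_range() yields physical addresses, and casting one straight to `void *` only works if the linear map happens to start at zero. A minimal sketch of the translation, with a made-up linear-map base (the real offset on ppc64 comes from PAGE_OFFSET; the constant here is purely illustrative):

```c
#include <stdint.h>

/* Hypothetical linear-map base; stands in for the real PAGE_OFFSET. */
#define LINEAR_MAP_BASE	0xc000000000000000ULL

/*
 * Sketch of phys_to_virt(): a physical address becomes a linear-map
 * virtual address by adding the map's base. Returned as an integer so
 * the example stays portable.
 */
static uint64_t phys_to_virt_sketch(uint64_t pa)
{
	return pa + LINEAR_MAP_BASE;
}
```

With the old code, physical address 0 (which memblock can legitimately hand back) would have produced a NULL "virtual" pointer.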
+1 -1
arch/powerpc/mm/mem.c
··· 31 31 32 32 #include <mm/mmu_decl.h> 33 33 34 - unsigned long long memory_limit; 34 + unsigned long long memory_limit __initdata; 35 35 36 36 unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)] __page_aligned_bss; 37 37 EXPORT_SYMBOL(empty_zero_page);
-2
arch/powerpc/mm/nohash/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0 2 2 3 - ccflags-$(CONFIG_PPC64) := $(NO_MINIMAL_TOC) 4 - 5 3 obj-y += mmu_context.o tlb.o tlb_low.o kup.o 6 4 obj-$(CONFIG_PPC_BOOK3E_64) += tlb_low_64e.o book3e_pgtable.o 7 5 obj-$(CONFIG_40x) += 40x.o
+1 -1
arch/powerpc/mm/nohash/kaslr_booke.c
··· 376 376 create_kaslr_tlb_entry(1, tlb_virt, tlb_phys); 377 377 } 378 378 379 - /* Copy the kernel to it's new location and run */ 379 + /* Copy the kernel to its new location and run */ 380 380 memcpy((void *)kernstart_virt_addr, (void *)_stext, kernel_sz); 381 381 flush_icache_range(kernstart_virt_addr, kernstart_virt_addr + kernel_sz); 382 382
+1 -1
arch/powerpc/mm/ptdump/hashpagetable.c
··· 491 491 * Traverse the vmemmaped memory and dump pages that are in the hash 492 492 * pagetable. 493 493 */ 494 - while (ptr->list) { 494 + while (ptr) { 495 495 hpte_find(st, ptr->virt_addr, mmu_vmemmap_psize); 496 496 ptr = ptr->list; 497 497 }
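The hashpagetable change is a classic list-traversal off-by-one: looping on `ptr->list` skips the final node and dereferences NULL on an empty list, while looping on `ptr` visits everything safely. A self-contained illustration (the node struct is simplified; the kernel's `vmemmap_backing` carries more fields):

```c
#include <stddef.h>

/* Simplified stand-in for the kernel's vmemmap list node. */
struct node {
	struct node *list;	/* next pointer, as in vmemmap_backing */
};

/* Pre-fix condition: stops one node early; crashes if ptr is NULL. */
static int count_buggy(struct node *ptr)
{
	int n = 0;

	while (ptr->list) {	/* original: while (ptr->list) */
		n++;
		ptr = ptr->list;
	}
	return n;
}

/* Fixed condition: visits every node, including the last one. */
static int count_fixed(struct node *ptr)
{
	int n = 0;

	while (ptr) {		/* fixed: while (ptr) */
		n++;
		ptr = ptr->list;
	}
	return n;
}
```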
+10
arch/powerpc/net/bpf_jit_comp.c
··· 359 359 360 360 bpf_prog_unlock_free(fp); 361 361 } 362 + 363 + bool bpf_jit_supports_kfunc_call(void) 364 + { 365 + return true; 366 + } 367 + 368 + bool bpf_jit_supports_far_kfunc_call(void) 369 + { 370 + return IS_ENABLED(CONFIG_PPC64); 371 + }
+106 -31
arch/powerpc/net/bpf_jit_comp32.c
··· 450 450 } 451 451 break; 452 452 case BPF_ALU | BPF_DIV | BPF_X: /* (u32) dst /= (u32) src */ 453 - EMIT(PPC_RAW_DIVWU(dst_reg, src2_reg, src_reg)); 453 + if (off) 454 + EMIT(PPC_RAW_DIVW(dst_reg, src2_reg, src_reg)); 455 + else 456 + EMIT(PPC_RAW_DIVWU(dst_reg, src2_reg, src_reg)); 454 457 break; 455 458 case BPF_ALU | BPF_MOD | BPF_X: /* (u32) dst %= (u32) src */ 456 - EMIT(PPC_RAW_DIVWU(_R0, src2_reg, src_reg)); 459 + if (off) 460 + EMIT(PPC_RAW_DIVW(_R0, src2_reg, src_reg)); 461 + else 462 + EMIT(PPC_RAW_DIVWU(_R0, src2_reg, src_reg)); 457 463 EMIT(PPC_RAW_MULW(_R0, src_reg, _R0)); 458 464 EMIT(PPC_RAW_SUB(dst_reg, src2_reg, _R0)); 459 465 break; ··· 473 467 if (imm == 1) { 474 468 EMIT(PPC_RAW_MR(dst_reg, src2_reg)); 475 469 } else if (is_power_of_2((u32)imm)) { 476 - EMIT(PPC_RAW_SRWI(dst_reg, src2_reg, ilog2(imm))); 470 + if (off) 471 + EMIT(PPC_RAW_SRAWI(dst_reg, src2_reg, ilog2(imm))); 472 + else 473 + EMIT(PPC_RAW_SRWI(dst_reg, src2_reg, ilog2(imm))); 477 474 } else { 478 475 PPC_LI32(_R0, imm); 479 - EMIT(PPC_RAW_DIVWU(dst_reg, src2_reg, _R0)); 476 + if (off) 477 + EMIT(PPC_RAW_DIVW(dst_reg, src2_reg, _R0)); 478 + else 479 + EMIT(PPC_RAW_DIVWU(dst_reg, src2_reg, _R0)); 480 480 } 481 481 break; 482 482 case BPF_ALU | BPF_MOD | BPF_K: /* (u32) dst %= (u32) imm */ ··· 492 480 if (!is_power_of_2((u32)imm)) { 493 481 bpf_set_seen_register(ctx, tmp_reg); 494 482 PPC_LI32(tmp_reg, imm); 495 - EMIT(PPC_RAW_DIVWU(_R0, src2_reg, tmp_reg)); 483 + if (off) 484 + EMIT(PPC_RAW_DIVW(_R0, src2_reg, tmp_reg)); 485 + else 486 + EMIT(PPC_RAW_DIVWU(_R0, src2_reg, tmp_reg)); 496 487 EMIT(PPC_RAW_MULW(_R0, tmp_reg, _R0)); 497 488 EMIT(PPC_RAW_SUB(dst_reg, src2_reg, _R0)); 498 489 } else if (imm == 1) { 499 490 EMIT(PPC_RAW_LI(dst_reg, 0)); 491 + } else if (off) { 492 + EMIT(PPC_RAW_SRAWI(_R0, src2_reg, ilog2(imm))); 493 + EMIT(PPC_RAW_ADDZE(_R0, _R0)); 494 + EMIT(PPC_RAW_SLWI(_R0, _R0, ilog2(imm))); 495 + EMIT(PPC_RAW_SUB(dst_reg, src2_reg, _R0)); 500 496 } else { 501 497 
imm = ilog2((u32)imm); 502 498 EMIT(PPC_RAW_RLWINM(dst_reg, src2_reg, 0, 32 - imm, 31)); ··· 517 497 imm = -imm; 518 498 if (!is_power_of_2(imm)) 519 499 return -EOPNOTSUPP; 520 - if (imm == 1) 500 + if (imm == 1) { 521 501 EMIT(PPC_RAW_LI(dst_reg, 0)); 522 - else 502 + EMIT(PPC_RAW_LI(dst_reg_h, 0)); 503 + } else if (off) { 504 + EMIT(PPC_RAW_SRAWI(dst_reg_h, src2_reg_h, 31)); 505 + EMIT(PPC_RAW_XOR(dst_reg, src2_reg, dst_reg_h)); 506 + EMIT(PPC_RAW_SUBFC(dst_reg, dst_reg_h, dst_reg)); 507 + EMIT(PPC_RAW_RLWINM(dst_reg, dst_reg, 0, 32 - ilog2(imm), 31)); 508 + EMIT(PPC_RAW_XOR(dst_reg, dst_reg, dst_reg_h)); 509 + EMIT(PPC_RAW_SUBFC(dst_reg, dst_reg_h, dst_reg)); 510 + EMIT(PPC_RAW_SUBFE(dst_reg_h, dst_reg_h, dst_reg_h)); 511 + } else { 523 512 EMIT(PPC_RAW_RLWINM(dst_reg, src2_reg, 0, 32 - ilog2(imm), 31)); 524 - EMIT(PPC_RAW_LI(dst_reg_h, 0)); 513 + EMIT(PPC_RAW_LI(dst_reg_h, 0)); 514 + } 525 515 break; 526 516 case BPF_ALU64 | BPF_DIV | BPF_K: /* dst /= imm */ 527 517 if (!imm) ··· 757 727 * MOV 758 728 */ 759 729 case BPF_ALU64 | BPF_MOV | BPF_X: /* dst = src */ 760 - if (dst_reg == src_reg) 761 - break; 762 - EMIT(PPC_RAW_MR(dst_reg, src_reg)); 763 - EMIT(PPC_RAW_MR(dst_reg_h, src_reg_h)); 730 + if (off == 8) { 731 + EMIT(PPC_RAW_EXTSB(dst_reg, src_reg)); 732 + EMIT(PPC_RAW_SRAWI(dst_reg_h, dst_reg, 31)); 733 + } else if (off == 16) { 734 + EMIT(PPC_RAW_EXTSH(dst_reg, src_reg)); 735 + EMIT(PPC_RAW_SRAWI(dst_reg_h, dst_reg, 31)); 736 + } else if (off == 32 && dst_reg == src_reg) { 737 + EMIT(PPC_RAW_SRAWI(dst_reg_h, src_reg, 31)); 738 + } else if (off == 32) { 739 + EMIT(PPC_RAW_MR(dst_reg, src_reg)); 740 + EMIT(PPC_RAW_SRAWI(dst_reg_h, src_reg, 31)); 741 + } else if (dst_reg != src_reg) { 742 + EMIT(PPC_RAW_MR(dst_reg, src_reg)); 743 + EMIT(PPC_RAW_MR(dst_reg_h, src_reg_h)); 744 + } 764 745 break; 765 746 case BPF_ALU | BPF_MOV | BPF_X: /* (u32) dst = src */ 766 747 /* special mov32 for zext */ 767 748 if (imm == 1) 768 749 EMIT(PPC_RAW_LI(dst_reg_h, 0)); 750 
+ else if (off == 8) 751 + EMIT(PPC_RAW_EXTSB(dst_reg, src_reg)); 752 + else if (off == 16) 753 + EMIT(PPC_RAW_EXTSH(dst_reg, src_reg)); 769 754 else if (dst_reg != src_reg) 770 755 EMIT(PPC_RAW_MR(dst_reg, src_reg)); 771 756 break; ··· 796 751 * BPF_FROM_BE/LE 797 752 */ 798 753 case BPF_ALU | BPF_END | BPF_FROM_LE: 754 + case BPF_ALU64 | BPF_END | BPF_FROM_LE: 799 755 switch (imm) { 800 756 case 16: 801 757 /* Copy 16 bits to upper part */ ··· 831 785 EMIT(PPC_RAW_MR(dst_reg_h, tmp_reg)); 832 786 break; 833 787 } 788 + if (BPF_CLASS(code) == BPF_ALU64 && imm != 64) 789 + EMIT(PPC_RAW_LI(dst_reg_h, 0)); 834 790 break; 835 791 case BPF_ALU | BPF_END | BPF_FROM_BE: 836 792 switch (imm) { ··· 966 918 * BPF_LDX 967 919 */ 968 920 case BPF_LDX | BPF_MEM | BPF_B: /* dst = *(u8 *)(ul) (src + off) */ 921 + case BPF_LDX | BPF_MEMSX | BPF_B: 969 922 case BPF_LDX | BPF_PROBE_MEM | BPF_B: 923 + case BPF_LDX | BPF_PROBE_MEMSX | BPF_B: 970 924 case BPF_LDX | BPF_MEM | BPF_H: /* dst = *(u16 *)(ul) (src + off) */ 925 + case BPF_LDX | BPF_MEMSX | BPF_H: 971 926 case BPF_LDX | BPF_PROBE_MEM | BPF_H: 927 + case BPF_LDX | BPF_PROBE_MEMSX | BPF_H: 972 928 case BPF_LDX | BPF_MEM | BPF_W: /* dst = *(u32 *)(ul) (src + off) */ 929 + case BPF_LDX | BPF_MEMSX | BPF_W: 973 930 case BPF_LDX | BPF_PROBE_MEM | BPF_W: 931 + case BPF_LDX | BPF_PROBE_MEMSX | BPF_W: 974 932 case BPF_LDX | BPF_MEM | BPF_DW: /* dst = *(u64 *)(ul) (src + off) */ 975 933 case BPF_LDX | BPF_PROBE_MEM | BPF_DW: 976 934 /* ··· 985 931 * load only if addr is kernel address (see is_kernel_addr()), otherwise 986 932 * set dst_reg=0 and move on. 987 933 */ 988 - if (BPF_MODE(code) == BPF_PROBE_MEM) { 934 + if (BPF_MODE(code) == BPF_PROBE_MEM || BPF_MODE(code) == BPF_PROBE_MEMSX) { 989 935 PPC_LI32(_R0, TASK_SIZE - off); 990 936 EMIT(PPC_RAW_CMPLW(src_reg, _R0)); 991 937 PPC_BCC_SHORT(COND_GT, (ctx->idx + 4) * 4); ··· 1007 953 * as there are two load instructions for dst_reg_h & dst_reg 1008 954 * respectively. 
1009 955 */ 1010 - if (size == BPF_DW) 956 + if (size == BPF_DW || 957 + (size == BPF_B && BPF_MODE(code) == BPF_PROBE_MEMSX)) 1011 958 PPC_JMP((ctx->idx + 3) * 4); 1012 959 else 1013 960 PPC_JMP((ctx->idx + 2) * 4); 1014 961 } 1015 962 1016 - switch (size) { 1017 - case BPF_B: 1018 - EMIT(PPC_RAW_LBZ(dst_reg, src_reg, off)); 1019 - break; 1020 - case BPF_H: 1021 - EMIT(PPC_RAW_LHZ(dst_reg, src_reg, off)); 1022 - break; 1023 - case BPF_W: 1024 - EMIT(PPC_RAW_LWZ(dst_reg, src_reg, off)); 1025 - break; 1026 - case BPF_DW: 1027 - EMIT(PPC_RAW_LWZ(dst_reg_h, src_reg, off)); 1028 - EMIT(PPC_RAW_LWZ(dst_reg, src_reg, off + 4)); 1029 - break; 1030 - } 963 + if (BPF_MODE(code) == BPF_MEMSX || BPF_MODE(code) == BPF_PROBE_MEMSX) { 964 + switch (size) { 965 + case BPF_B: 966 + EMIT(PPC_RAW_LBZ(dst_reg, src_reg, off)); 967 + EMIT(PPC_RAW_EXTSB(dst_reg, dst_reg)); 968 + break; 969 + case BPF_H: 970 + EMIT(PPC_RAW_LHA(dst_reg, src_reg, off)); 971 + break; 972 + case BPF_W: 973 + EMIT(PPC_RAW_LWZ(dst_reg, src_reg, off)); 974 + break; 975 + } 976 + if (!fp->aux->verifier_zext) 977 + EMIT(PPC_RAW_SRAWI(dst_reg_h, dst_reg, 31)); 1031 978 1032 - if (size != BPF_DW && !fp->aux->verifier_zext) 1033 - EMIT(PPC_RAW_LI(dst_reg_h, 0)); 979 + } else { 980 + switch (size) { 981 + case BPF_B: 982 + EMIT(PPC_RAW_LBZ(dst_reg, src_reg, off)); 983 + break; 984 + case BPF_H: 985 + EMIT(PPC_RAW_LHZ(dst_reg, src_reg, off)); 986 + break; 987 + case BPF_W: 988 + EMIT(PPC_RAW_LWZ(dst_reg, src_reg, off)); 989 + break; 990 + case BPF_DW: 991 + EMIT(PPC_RAW_LWZ(dst_reg_h, src_reg, off)); 992 + EMIT(PPC_RAW_LWZ(dst_reg, src_reg, off + 4)); 993 + break; 994 + } 995 + if (size != BPF_DW && !fp->aux->verifier_zext) 996 + EMIT(PPC_RAW_LI(dst_reg_h, 0)); 997 + } 1034 998 1035 999 if (BPF_MODE(code) == BPF_PROBE_MEM) { 1036 1000 int insn_idx = ctx->idx - 1; ··· 1139 1067 */ 1140 1068 case BPF_JMP | BPF_JA: 1141 1069 PPC_JMP(addrs[i + 1 + off]); 1070 + break; 1071 + case BPF_JMP32 | BPF_JA: 1072 + PPC_JMP(addrs[i 
+ 1 + imm]); 1142 1073 break; 1143 1074 1144 1075 case BPF_JMP | BPF_JGT | BPF_K:
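Much of the bpf_jit_comp32.c diff keys off the instruction's `off` field, which in BPF "cpu v4" selects the signed variants (sdiv/smod, movsx) — hence divw instead of divwu, and the srawi/addze sequence for signed modulo by a power of two. A sketch of the semantics the JIT must reproduce (this leans on arithmetic `>>` for negative values, which matches srawi and is how common compilers behave):

```c
#include <stdint.h>

/* Signed division truncates toward zero (divw / BPF sdiv); the same
 * bit pattern divided as unsigned (divwu) gives a very different result. */
static int32_t sdiv32(int32_t a, int32_t b)    { return a / b; }
static uint32_t udiv32(uint32_t a, uint32_t b) { return a / b; }

/*
 * Signed modulo by a power of two, mirroring the JIT's
 * srawi/addze/slwi/sub sequence: srawi rounds the quotient toward
 * -infinity and (on PPC) sets carry iff the dividend was negative with
 * nonzero bits shifted out; addze folds that carry back in, turning the
 * rounding into truncation toward zero.
 */
static int32_t smod_pow2(int32_t x, unsigned int k)
{
	int32_t q = x >> k;			/* srawi: round toward -infinity */
	q += (x < 0) && (x & ((1 << k) - 1));	/* addze: correct toward zero */
	return x - q * (1 << k);		/* slwi + sub */
}
```

The result carries the sign of the dividend, matching C's `%` and BPF's smod.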
+60 -17
arch/powerpc/net/bpf_jit_comp64.c
··· 202 202 EMIT(PPC_RAW_BLR()); 203 203 } 204 204 205 - static int bpf_jit_emit_func_call_hlp(u32 *image, struct codegen_context *ctx, u64 func) 205 + static int 206 + bpf_jit_emit_func_call_hlp(u32 *image, u32 *fimage, struct codegen_context *ctx, u64 func) 206 207 { 207 208 unsigned long func_addr = func ? ppc_function_entry((void *)func) : 0; 208 209 long reladdr; 209 210 210 - if (WARN_ON_ONCE(!core_kernel_text(func_addr))) 211 + if (WARN_ON_ONCE(!kernel_text_address(func_addr))) 211 212 return -EINVAL; 212 213 213 - if (IS_ENABLED(CONFIG_PPC_KERNEL_PCREL)) { 214 - reladdr = func_addr - CTX_NIA(ctx); 214 + #ifdef CONFIG_PPC_KERNEL_PCREL 215 + reladdr = func_addr - local_paca->kernelbase; 215 216 216 - if (reladdr >= (long)SZ_8G || reladdr < -(long)SZ_8G) { 217 - pr_err("eBPF: address of %ps out of range of pcrel address.\n", 218 - (void *)func); 219 - return -ERANGE; 220 - } 221 - /* pla r12,addr */ 222 - EMIT(PPC_PREFIX_MLS | __PPC_PRFX_R(1) | IMM_H18(reladdr)); 223 - EMIT(PPC_INST_PADDI | ___PPC_RT(_R12) | IMM_L(reladdr)); 224 - EMIT(PPC_RAW_MTCTR(_R12)); 225 - EMIT(PPC_RAW_BCTR()); 226 - 217 + if (reladdr < (long)SZ_8G && reladdr >= -(long)SZ_8G) { 218 + EMIT(PPC_RAW_LD(_R12, _R13, offsetof(struct paca_struct, kernelbase))); 219 + /* Align for subsequent prefix instruction */ 220 + if (!IS_ALIGNED((unsigned long)fimage + CTX_NIA(ctx), 8)) 221 + EMIT(PPC_RAW_NOP()); 222 + /* paddi r12,r12,addr */ 223 + EMIT(PPC_PREFIX_MLS | __PPC_PRFX_R(0) | IMM_H18(reladdr)); 224 + EMIT(PPC_INST_PADDI | ___PPC_RT(_R12) | ___PPC_RA(_R12) | IMM_L(reladdr)); 227 225 } else { 226 + unsigned long pc = (unsigned long)fimage + CTX_NIA(ctx); 227 + bool alignment_needed = !IS_ALIGNED(pc, 8); 228 + 229 + reladdr = func_addr - (alignment_needed ? 
pc + 4 : pc); 230 + 231 + if (reladdr < (long)SZ_8G && reladdr >= -(long)SZ_8G) { 232 + if (alignment_needed) 233 + EMIT(PPC_RAW_NOP()); 234 + /* pla r12,addr */ 235 + EMIT(PPC_PREFIX_MLS | __PPC_PRFX_R(1) | IMM_H18(reladdr)); 236 + EMIT(PPC_INST_PADDI | ___PPC_RT(_R12) | IMM_L(reladdr)); 237 + } else { 238 + /* We can clobber r12 */ 239 + PPC_LI64(_R12, func); 240 + } 241 + } 242 + EMIT(PPC_RAW_MTCTR(_R12)); 243 + EMIT(PPC_RAW_BCTRL()); 244 + #else 245 + if (core_kernel_text(func_addr)) { 228 246 reladdr = func_addr - kernel_toc_addr(); 229 247 if (reladdr > 0x7FFFFFFF || reladdr < -(0x80000000L)) { 230 248 pr_err("eBPF: address of %ps out of range of kernel_toc.\n", (void *)func); ··· 253 235 EMIT(PPC_RAW_ADDI(_R12, _R12, PPC_LO(reladdr))); 254 236 EMIT(PPC_RAW_MTCTR(_R12)); 255 237 EMIT(PPC_RAW_BCTRL()); 238 + } else { 239 + if (IS_ENABLED(CONFIG_PPC64_ELF_ABI_V1)) { 240 + /* func points to the function descriptor */ 241 + PPC_LI64(bpf_to_ppc(TMP_REG_2), func); 242 + /* Load actual entry point from function descriptor */ 243 + EMIT(PPC_RAW_LD(bpf_to_ppc(TMP_REG_1), bpf_to_ppc(TMP_REG_2), 0)); 244 + /* ... and move it to CTR */ 245 + EMIT(PPC_RAW_MTCTR(bpf_to_ppc(TMP_REG_1))); 246 + /* 247 + * Load TOC from function descriptor at offset 8. 248 + * We can clobber r2 since we get called through a 249 + * function pointer (so caller will save/restore r2). 250 + */ 251 + EMIT(PPC_RAW_LD(_R2, bpf_to_ppc(TMP_REG_2), 8)); 252 + } else { 253 + PPC_LI64(_R12, func); 254 + EMIT(PPC_RAW_MTCTR(_R12)); 255 + } 256 + EMIT(PPC_RAW_BCTRL()); 257 + /* 258 + * Load r2 with kernel TOC as kernel TOC is used if function address falls 259 + * within core kernel text. 
260 + */ 261 + EMIT(PPC_RAW_LD(_R2, _R13, offsetof(struct paca_struct, kernel_toc))); 256 262 } 263 + #endif 257 264 258 265 return 0; 259 266 } ··· 328 285 int b2p_index = bpf_to_ppc(BPF_REG_3); 329 286 int bpf_tailcall_prologue_size = 8; 330 287 331 - if (IS_ENABLED(CONFIG_PPC64_ELF_ABI_V2)) 288 + if (!IS_ENABLED(CONFIG_PPC_KERNEL_PCREL) && IS_ENABLED(CONFIG_PPC64_ELF_ABI_V2)) 332 289 bpf_tailcall_prologue_size += 4; /* skip past the toc load */ 333 290 334 291 /* ··· 1036 993 return ret; 1037 994 1038 995 if (func_addr_fixed) 1039 - ret = bpf_jit_emit_func_call_hlp(image, ctx, func_addr); 996 + ret = bpf_jit_emit_func_call_hlp(image, fimage, ctx, func_addr); 1040 997 else 1041 998 ret = bpf_jit_emit_func_call_rel(image, fimage, ctx, func_addr); 1042 999
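The repeated `reladdr < (long)SZ_8G && reladdr >= -(long)SZ_8G` guards above come from the pla/paddi encoding: the displacement is a signed 34-bit immediate split between the prefix word (upper 18 bits, IMM_H18) and the suffix word (lower 16 bits, IMM_L), so only targets within [-8G, 8G) are reachable. A sketch of the range check and the split/reassemble round trip:

```c
#include <stdint.h>

#define SZ_8G	(8ULL * 1024 * 1024 * 1024)

/* A displacement is encodable in paddi/pla iff it fits in signed 34 bits. */
static int fits_pcrel34(int64_t reladdr)
{
	return reladdr < (int64_t)SZ_8G && reladdr >= -(int64_t)SZ_8G;
}

/*
 * Split a displacement the way IMM_H18/IMM_L do, reassemble the 34-bit
 * value and sign-extend it, to check the encoding round-trips.
 */
static int64_t roundtrip34(int64_t d)
{
	uint32_t hi18 = ((uint64_t)d >> 16) & 0x3ffff;	/* IMM_H18 */
	uint32_t lo16 = (uint64_t)d & 0xffff;		/* IMM_L */
	uint64_t v = ((uint64_t)hi18 << 16) | lo16;

	return (int64_t)(v << 30) >> 30;	/* sign-extend from bit 33 */
}
```

This is also why the fallback path loads the full 64-bit address into r12 with PPC_LI64 when the target is out of the pcrel window.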
+1 -1
arch/powerpc/platforms/512x/mpc512x_shared.c
··· 279 279 * and so negatively affect boot time. Instead we reserve the 280 280 * already configured frame buffer area so that it won't be 281 281 * destroyed. The starting address of the area to reserve and 282 - * also it's length is passed to memblock_reserve(). It will be 282 + * also its length is passed to memblock_reserve(). It will be 283 283 * freed later on first open of fbdev, when splash image is not 284 284 * needed any more. 285 285 */
+4 -2
arch/powerpc/platforms/52xx/lite5200_sleep.S
··· 203 203 204 204 /* HIDs, MSR */ 205 205 LOAD_SPRN(HID1, 0x19) 206 - LOAD_SPRN(HID2, 0x1a) 206 + /* FIXME: Should this use HID2_G2_LE? */ 207 + LOAD_SPRN(HID2_750FX, 0x1a) 207 208 208 209 209 210 /* address translation is tricky (see turn_on_mmu) */ ··· 284 283 285 284 SAVE_SPRN(HID0, 0x18) 286 285 SAVE_SPRN(HID1, 0x19) 287 - SAVE_SPRN(HID2, 0x1a) 286 + /* FIXME: Should this use HID2_G2_LE? */ 287 + SAVE_SPRN(HID2_750FX, 0x1a) 288 288 mfmsr r10 289 289 stw r10, (4*0x1b)(r4) 290 290 /*SAVE_SPRN(LR, 0x1c) have to save it before the call */
-2
arch/powerpc/platforms/52xx/mpc52xx_common.c
··· 12 12 13 13 #undef DEBUG 14 14 15 - #include <linux/gpio.h> 16 15 #include <linux/kernel.h> 17 16 #include <linux/spinlock.h> 18 17 #include <linux/of_address.h> 19 18 #include <linux/of_platform.h> 20 - #include <linux/of_gpio.h> 21 19 #include <linux/export.h> 22 20 #include <asm/io.h> 23 21 #include <asm/mpc52xx.h>
+1 -1
arch/powerpc/platforms/52xx/mpc52xx_gpt.c
··· 48 48 * the output mode. This driver does not change the output mode setting. 49 49 */ 50 50 51 + #include <linux/gpio/driver.h> 51 52 #include <linux/irq.h> 52 53 #include <linux/interrupt.h> 53 54 #include <linux/io.h> ··· 57 56 #include <linux/of.h> 58 57 #include <linux/of_address.h> 59 58 #include <linux/of_irq.h> 60 - #include <linux/of_gpio.h> 61 59 #include <linux/platform_device.h> 62 60 #include <linux/kernel.h> 63 61 #include <linux/property.h>
+4 -2
arch/powerpc/platforms/83xx/suspend-asm.S
··· 68 68 69 69 mfspr r5, SPRN_HID0 70 70 mfspr r6, SPRN_HID1 71 - mfspr r7, SPRN_HID2 71 + /* FIXME: Should this use SPRN_HID2_G2_LE? */ 72 + mfspr r7, SPRN_HID2_750FX 72 73 73 74 stw r5, SS_HID+0(r3) 74 75 stw r6, SS_HID+4(r3) ··· 397 396 398 397 mtspr SPRN_HID0, r5 399 398 mtspr SPRN_HID1, r6 400 - mtspr SPRN_HID2, r7 399 + /* FIXME: Should this use SPRN_HID2_G2_LE? */ 400 + mtspr SPRN_HID2_750FX, r7 401 401 402 402 lwz r4, SS_IABR+0(r3) 403 403 lwz r5, SS_IABR+4(r3)
+6 -3
arch/powerpc/platforms/85xx/smp.c
··· 398 398 hard_irq_disable(); 399 399 mpic_teardown_this_cpu(secondary); 400 400 401 + #ifdef CONFIG_CRASH_DUMP 401 402 if (cpu == crashing_cpu && cpu_thread_in_core(cpu) != 0) { 402 403 /* 403 404 * We enter the crash kernel on whatever cpu crashed, ··· 407 406 */ 408 407 disable_threadbit = 1; 409 408 disable_cpu = cpu_first_thread_sibling(cpu); 410 - } else if (sibling != crashing_cpu && 411 - cpu_thread_in_core(cpu) == 0 && 412 - cpu_thread_in_core(sibling) != 0) { 409 + } else if (sibling == crashing_cpu) { 410 + return; 411 + } 412 + #endif 413 + if (cpu_thread_in_core(cpu) == 0 && cpu_thread_in_core(sibling) != 0) { 413 414 disable_threadbit = 2; 414 415 disable_cpu = sibling; 415 416 }
-17
arch/powerpc/platforms/cell/iommu.c
··· 424 424 cell_iommu_enable_hardware(iommu); 425 425 } 426 426 427 - #if 0/* Unused for now */ 428 - static struct iommu_window *find_window(struct cbe_iommu *iommu, 429 - unsigned long offset, unsigned long size) 430 - { 431 - struct iommu_window *window; 432 - 433 - /* todo: check for overlapping (but not equal) windows) */ 434 - 435 - list_for_each_entry(window, &(iommu->windows), list) { 436 - if (window->offset == offset && window->size == size) 437 - return window; 438 - } 439 - 440 - return NULL; 441 - } 442 - #endif 443 - 444 427 static inline u32 cell_iommu_get_ioid(struct device_node *np) 445 428 { 446 429 const u32 *ioid;
+1
arch/powerpc/platforms/cell/smp.c
··· 54 54 55 55 /** 56 56 * smp_startup_cpu() - start the given cpu 57 + * @lcpu: Logical CPU ID of the CPU to be started. 57 58 * 58 59 * At boot time, there is nothing to do for primary threads which were 59 60 * started from Open Firmware. For anything else, call RTAS with the
+4 -16
arch/powerpc/platforms/cell/spufs/file.c
··· 1704 1704 1705 1705 ret = spu_acquire(ctx); 1706 1706 if (ret) 1707 - goto out; 1708 - #if 0 1709 - /* this currently hangs */ 1710 - ret = spufs_wait(ctx->mfc_wq, 1711 - ctx->ops->set_mfc_query(ctx, ctx->tagwait, 2)); 1712 - if (ret) 1713 - goto out; 1714 - ret = spufs_wait(ctx->mfc_wq, 1715 - ctx->ops->read_mfc_tagstatus(ctx) == ctx->tagwait); 1716 - if (ret) 1717 - goto out; 1718 - #else 1719 - ret = 0; 1720 - #endif 1707 + return ret; 1708 + 1721 1709 spu_release(ctx); 1722 - out: 1723 - return ret; 1710 + 1711 + return 0; 1724 1712 } 1725 1713 1726 1714 static int spufs_mfc_fsync(struct file *file, loff_t start, loff_t end, int datasync)
+1 -1
arch/powerpc/platforms/cell/spufs/sched.c
··· 868 868 } 869 869 870 870 /** 871 - * spu_deactivate - unbind a context from it's physical spu 871 + * spu_deactivate - unbind a context from its physical spu 872 872 * @ctx: spu context to unbind 873 873 * 874 874 * Unbind @ctx from the physical spu it is running on and schedule
+1 -1
arch/powerpc/platforms/maple/pci.c
··· 595 595 596 596 /* Probe root PCI hosts, that is on U3 the AGP host and the 597 597 * HyperTransport host. That one is actually "kept" around 598 - * and actually added last as it's resource management relies 598 + * and actually added last as its resource management relies 599 599 * on the AGP resources to have been setup first 600 600 */ 601 601 root = of_find_node_by_path("/");
+1 -1
arch/powerpc/platforms/powermac/pic.c
··· 2 2 /* 3 3 * Support for the interrupt controllers found on Power Macintosh, 4 4 * currently Apple's "Grand Central" interrupt controller in all 5 - * it's incarnations. OpenPIC support used on newer machines is 5 + * its incarnations. OpenPIC support used on newer machines is 6 6 * in a separate file 7 7 * 8 8 * Copyright (C) 1997 Paul Mackerras (paulus@samba.org)
+1 -1
arch/powerpc/platforms/powermac/sleep.S
··· 176 176 * memory location containing the PC to resume from 177 177 * at address 0. 178 178 * - On Core99, we must store the wakeup vector at 179 - * address 0x80 and eventually it's parameters 179 + * address 0x80 and eventually its parameters 180 180 * at address 0x84. I've have some trouble with those 181 181 * parameters however and I no longer use them. 182 182 */
+14 -21
arch/powerpc/platforms/powernv/opal-fadump.c
··· 513 513 final_note(note_buf); 514 514 515 515 pr_debug("Updating elfcore header (%llx) with cpu notes\n", 516 - fdh->elfcorehdr_addr); 517 - fadump_update_elfcore_header(__va(fdh->elfcorehdr_addr)); 516 + fadump_conf->elfcorehdr_addr); 517 + fadump_update_elfcore_header((char *)fadump_conf->elfcorehdr_addr); 518 518 return 0; 519 519 } 520 520 ··· 526 526 if (!opal_fdm_active || !fadump_conf->fadumphdr_addr) 527 527 return rc; 528 528 529 - /* Validate the fadump crash info header */ 530 529 fdh = __va(fadump_conf->fadumphdr_addr); 531 - if (fdh->magic_number != FADUMP_CRASH_INFO_MAGIC) { 532 - pr_err("Crash info header is not valid.\n"); 533 - return rc; 534 - } 535 530 536 531 #ifdef CONFIG_OPAL_CORE 537 532 /* ··· 540 545 kernel_initiated = true; 541 546 #endif 542 547 543 - rc = opal_fadump_build_cpu_notes(fadump_conf, fdh); 544 - if (rc) 545 - return rc; 546 - 547 - /* 548 - * We are done validating dump info and elfcore header is now ready 549 - * to be exported. set elfcorehdr_addr so that vmcore module will 550 - * export the elfcore header through '/proc/vmcore'. 551 - */ 552 - elfcorehdr_addr = fdh->elfcorehdr_addr; 553 - 554 - return rc; 548 + return opal_fadump_build_cpu_notes(fadump_conf, fdh); 555 549 } 556 550 557 551 static void opal_fadump_region_show(struct fw_dump *fadump_conf, ··· 599 615 pr_emerg("No backend support for MPIPL!\n"); 600 616 } 601 617 618 + /* FADUMP_MAX_MEM_REGS or lower */ 619 + static int opal_fadump_max_boot_mem_rgns(void) 620 + { 621 + return FADUMP_MAX_MEM_REGS; 622 + } 623 + 602 624 static struct fadump_ops opal_fadump_ops = { 603 625 .fadump_init_mem_struct = opal_fadump_init_mem_struct, 604 626 .fadump_get_metadata_size = opal_fadump_get_metadata_size, ··· 617 627 .fadump_process = opal_fadump_process, 618 628 .fadump_region_show = opal_fadump_region_show, 619 629 .fadump_trigger = opal_fadump_trigger, 630 + .fadump_max_boot_mem_rgns = opal_fadump_max_boot_mem_rgns, 620 631 }; 621 632 622 633 void __init opal_fadump_dt_scan(struct fw_dump *fadump_conf, u64 node) ··· 665 674 } 666 675 } 667 676 668 - fadump_conf->ops = &opal_fadump_ops; 669 - fadump_conf->fadump_supported = 1; 677 + fadump_conf->ops = &opal_fadump_ops; 678 + fadump_conf->fadump_supported = 1; 679 + /* TODO: Add support to pass additional parameters */ 680 + fadump_conf->param_area_supported = 0; 670 681 671 682 /* 672 683 * Firmware supports 32-bit field for size. Align it to PAGE_SIZE
+2 -2
arch/powerpc/platforms/powernv/pci-sriov.c
··· 238 238 } else if (pdev->is_physfn) { 239 239 /* 240 240 * For PFs adjust their allocated IOV resources to match what 241 - * the PHB can support using it's M64 BAR table. 241 + * the PHB can support using its M64 BAR table. 242 242 */ 243 243 pnv_pci_ioda_fixup_iov_resources(pdev); 244 244 } ··· 658 658 list_add_tail(&pe->list, &phb->ioda.pe_list); 659 659 mutex_unlock(&phb->ioda.pe_list_mutex); 660 660 661 - /* associate this pe to it's pdn */ 661 + /* associate this pe to its pdn */ 662 662 list_for_each_entry(vf_pdn, &pdn->parent->child_list, list) { 663 663 if (vf_pdn->busno == vf_bus && 664 664 vf_pdn->devfn == vf_devfn) {
+1 -1
arch/powerpc/platforms/powernv/vas-window.c
··· 1059 1059 } 1060 1060 } else { 1061 1061 /* 1062 - * Interrupt hanlder or fault window setup failed. Means 1062 + * Interrupt handler or fault window setup failed. Means 1063 1063 * NX can not generate fault for page fault. So not 1064 1064 * opening for user space tx window. 1065 1065 */
+32 -29
arch/powerpc/platforms/ps3/device-init.c
··· 770 770 771 771 static int ps3_probe_thread(void *data) 772 772 { 773 - struct ps3_notification_device dev; 773 + struct { 774 + struct ps3_notification_device dev; 775 + u8 buf[512]; 776 + } *local; 777 + struct ps3_notify_cmd *notify_cmd; 778 + struct ps3_notify_event *notify_event; 774 779 int res; 775 780 unsigned int irq; 776 781 u64 lpar; 777 - void *buf; 778 - struct ps3_notify_cmd *notify_cmd; 779 - struct ps3_notify_event *notify_event; 780 782 781 783 pr_debug(" -> %s:%u: kthread started\n", __func__, __LINE__); 782 784 783 - buf = kzalloc(512, GFP_KERNEL); 784 - if (!buf) 785 + local = kzalloc(sizeof(*local), GFP_KERNEL); 786 + if (!local) 785 787 return -ENOMEM; 786 788 787 - lpar = ps3_mm_phys_to_lpar(__pa(buf)); 788 - notify_cmd = buf; 789 - notify_event = buf; 789 + lpar = ps3_mm_phys_to_lpar(__pa(&local->buf)); 790 + notify_cmd = (struct ps3_notify_cmd *)&local->buf; 791 + notify_event = (struct ps3_notify_event *)&local->buf; 790 792 791 793 /* dummy system bus device */ 792 - dev.sbd.bus_id = (u64)data; 793 - dev.sbd.dev_id = PS3_NOTIFICATION_DEV_ID; 794 - dev.sbd.interrupt_id = PS3_NOTIFICATION_INTERRUPT_ID; 794 + local->dev.sbd.bus_id = (u64)data; 795 + local->dev.sbd.dev_id = PS3_NOTIFICATION_DEV_ID; 796 + local->dev.sbd.interrupt_id = PS3_NOTIFICATION_INTERRUPT_ID; 795 797 796 - res = lv1_open_device(dev.sbd.bus_id, dev.sbd.dev_id, 0); 798 + res = lv1_open_device(local->dev.sbd.bus_id, local->dev.sbd.dev_id, 0); 797 799 if (res) { 798 800 pr_err("%s:%u: lv1_open_device failed %s\n", __func__, 799 801 __LINE__, ps3_result(res)); 800 802 goto fail_free; 801 803 } 802 804 803 - res = ps3_sb_event_receive_port_setup(&dev.sbd, PS3_BINDING_CPU_ANY, 804 - &irq); 805 + res = ps3_sb_event_receive_port_setup(&local->dev.sbd, 806 + PS3_BINDING_CPU_ANY, &irq); 805 807 if (res) { 806 808 pr_err("%s:%u: ps3_sb_event_receive_port_setup failed %d\n", 807 809 __func__, __LINE__, res); 808 810 goto fail_close_device; 809 811 } 810 812 811 - spin_lock_init(&dev.lock); 812 - rcuwait_init(&dev.wait); 813 + spin_lock_init(&local->dev.lock); 814 + rcuwait_init(&local->dev.wait); 813 815 814 816 res = request_irq(irq, ps3_notification_interrupt, 0, 815 - "ps3_notification", &dev); 817 + "ps3_notification", &local->dev); 816 818 if (res) { 817 819 pr_err("%s:%u: request_irq failed %d\n", __func__, __LINE__, 818 820 res); ··· 825 823 notify_cmd->operation_code = 0; /* must be zero */ 826 824 notify_cmd->event_mask = 1UL << notify_region_probe; 827 825 828 - res = ps3_notification_read_write(&dev, lpar, 1); 826 + res = ps3_notification_read_write(&local->dev, lpar, 1); 829 827 if (res) 830 828 goto fail_free_irq; 831 829 ··· 836 834 837 835 memset(notify_event, 0, sizeof(*notify_event)); 838 836 839 - res = ps3_notification_read_write(&dev, lpar, 0); 837 + res = ps3_notification_read_write(&local->dev, lpar, 0); 840 838 if (res) 841 839 break; 842 840 843 841 pr_debug("%s:%u: notify event type 0x%llx bus id %llu dev id %llu" 844 842 " type %llu port %llu\n", __func__, __LINE__, 845 - notify_event->event_type, notify_event->bus_id, 846 - notify_event->dev_id, notify_event->dev_type, 847 - notify_event->dev_port); 843 + notify_event->event_type, notify_event->bus_id, 844 + notify_event->dev_id, notify_event->dev_type, 845 + notify_event->dev_port); 848 846 849 847 if (notify_event->event_type != notify_region_probe || 850 - notify_event->bus_id != dev.sbd.bus_id) { 848 + notify_event->bus_id != local->dev.sbd.bus_id) { 851 849 pr_warn("%s:%u: bad notify_event: event %llu, dev_id %llu, dev_type %llu\n", 852 850 __func__, __LINE__, notify_event->event_type, 853 851 notify_event->dev_id, notify_event->dev_type); 854 852 continue; 855 853 } 856 854 857 - ps3_find_and_add_device(dev.sbd.bus_id, notify_event->dev_id); 855 + ps3_find_and_add_device(local->dev.sbd.bus_id, 856 + notify_event->dev_id); 858 857 859 858 } while (!kthread_should_stop()); 860 859 861 860 fail_free_irq: 862 - free_irq(irq, &dev); 861 + free_irq(irq, &local->dev); 863 862 fail_sb_event_receive_port_destroy: 864 - ps3_sb_event_receive_port_destroy(&dev.sbd, irq); 863 + ps3_sb_event_receive_port_destroy(&local->dev.sbd, irq); 865 864 fail_close_device: 866 - lv1_close_device(dev.sbd.bus_id, dev.sbd.dev_id); 865 + lv1_close_device(local->dev.sbd.bus_id, local->dev.sbd.dev_id); 867 866 fail_free: 868 - kfree(buf); 867 + kfree(local); 869 868 870 869 probe_task = NULL; 871 870
-1
arch/powerpc/platforms/pseries/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0 2 - ccflags-$(CONFIG_PPC64) := $(NO_MINIMAL_TOC) 3 2 ccflags-$(CONFIG_PPC_PSERIES_DEBUG) += -DDEBUG 4 3 5 4 obj-y := lpar.o hvCall.o nvram.o reconfig.o \
+3 -3
arch/powerpc/platforms/pseries/lpar.c
··· 1886 1886 * h_get_mpp 1887 1887 * H_GET_MPP hcall returns info in 7 parms 1888 1888 */ 1889 - int h_get_mpp(struct hvcall_mpp_data *mpp_data) 1889 + long h_get_mpp(struct hvcall_mpp_data *mpp_data) 1890 1890 { 1891 - int rc; 1892 - unsigned long retbuf[PLPAR_HCALL9_BUFSIZE]; 1891 + unsigned long retbuf[PLPAR_HCALL9_BUFSIZE] = {0}; 1892 + long rc; 1893 1893 1894 1894 rc = plpar_hcall9(H_GET_MPP, retbuf); 1895 1895
+33 -12
arch/powerpc/platforms/pseries/lparcfg.c
··· 113 113 */ 114 114 static unsigned int h_get_ppp(struct hvcall_ppp_data *ppp_data) 115 115 { 116 - unsigned long rc; 117 - unsigned long retbuf[PLPAR_HCALL9_BUFSIZE]; 116 + unsigned long retbuf[PLPAR_HCALL9_BUFSIZE] = {0}; 117 + long rc; 118 118 119 119 rc = plpar_hcall9(H_GET_PPP, retbuf); 120 120 ··· 170 170 kfree(buf); 171 171 } 172 172 173 - static unsigned h_pic(unsigned long *pool_idle_time, 174 - unsigned long *num_procs) 173 + static long h_pic(unsigned long *pool_idle_time, 174 + unsigned long *num_procs) 175 175 { 176 - unsigned long rc; 177 - unsigned long retbuf[PLPAR_HCALL_BUFSIZE]; 176 + long rc; 177 + unsigned long retbuf[PLPAR_HCALL_BUFSIZE] = {0}; 178 178 179 179 rc = plpar_hcall(H_PIC, retbuf); 180 180 181 - *pool_idle_time = retbuf[0]; 182 - *num_procs = retbuf[1]; 181 + if (pool_idle_time) 182 + *pool_idle_time = retbuf[0]; 183 + if (num_procs) 184 + *num_procs = retbuf[1]; 183 185 184 186 return rc; 185 187 } 188 + 189 + unsigned long boot_pool_idle_time; 186 190 187 191 /* 188 192 * parse_ppp_data ··· 197 193 struct hvcall_ppp_data ppp_data; 198 194 struct device_node *root; 199 195 const __be32 *perf_level; 200 - int rc; 196 + long rc; 201 197 202 198 rc = h_get_ppp(&ppp_data); 203 199 if (rc) ··· 219 215 seq_printf(m, "pool_capacity=%d\n", 220 216 ppp_data.active_procs_in_pool * 100); 221 217 222 - h_pic(&pool_idle_time, &pool_procs); 223 - seq_printf(m, "pool_idle_time=%ld\n", pool_idle_time); 224 - seq_printf(m, "pool_num_procs=%ld\n", pool_procs); 218 + /* In case h_pic call is not successful, this would result in 219 + * APP values being wrong in tools like lparstat. 220 + */ 221 + 222 + if (h_pic(&pool_idle_time, &pool_procs) == H_SUCCESS) { 223 + seq_printf(m, "pool_idle_time=%ld\n", pool_idle_time); 224 + seq_printf(m, "pool_num_procs=%ld\n", pool_procs); 225 + seq_printf(m, "boot_pool_idle_time=%ld\n", boot_pool_idle_time); 226 + } 225 227 } 226 228 227 229 seq_printf(m, "unallocated_capacity_weight=%d\n", ··· 802 792 static int __init lparcfg_init(void) 803 793 { 804 794 umode_t mode = 0444; 795 + long retval; 805 796 806 797 /* Allow writing if we have FW_FEATURE_SPLPAR */ 807 798 if (firmware_has_feature(FW_FEATURE_SPLPAR)) ··· 812 801 printk(KERN_ERR "Failed to create powerpc/lparcfg\n"); 813 802 return -EIO; 814 803 } 804 + 805 + /* If this call fails, it would result in APP values 806 + * being wrong for since boot reports of lparstat 807 + */ 808 + retval = h_pic(&boot_pool_idle_time, NULL); 809 + 810 + if (retval != H_SUCCESS) 811 + pr_debug("H_PIC failed during lparcfg init retval: %ld\n", 812 + retval); 813 + 815 814 return 0; 816 815 } 817 816 machine_device_initcall(pseries, lparcfg_init);
-27
arch/powerpc/platforms/pseries/pci.c
··· 18 18 #include <asm/pci.h> 19 19 #include "pseries.h" 20 20 21 - #if 0 22 - void pcibios_name_device(struct pci_dev *dev) 23 - { 24 - struct device_node *dn; 25 - 26 - /* 27 - * Add IBM loc code (slot) as a prefix to the device names for service 28 - */ 29 - dn = pci_device_to_OF_node(dev); 30 - if (dn) { 31 - const char *loc_code = of_get_property(dn, "ibm,loc-code", 32 - NULL); 33 - if (loc_code) { 34 - int loc_len = strlen(loc_code); 35 - if (loc_len < sizeof(dev->dev.name)) { 36 - memmove(dev->dev.name+loc_len+1, dev->dev.name, 37 - sizeof(dev->dev.name)-loc_len-1); 38 - memcpy(dev->dev.name, loc_code, loc_len); 39 - dev->dev.name[loc_len] = ' '; 40 - dev->dev.name[sizeof(dev->dev.name)-1] = '\0'; 41 - } 42 - } 43 - } 44 - } 45 - DECLARE_PCI_FIXUP_HEADER(PCI_ANY_ID, PCI_ANY_ID, pcibios_name_device); 46 - #endif 47 - 48 21 #ifdef CONFIG_PCI_IOV 49 22 #define MAX_VFS_FOR_MAP_PE 256 50 23 struct pe_map_bar_entry {
+206 -114
arch/powerpc/platforms/pseries/rtas-fadump.c
··· 18 18 19 19 #include <asm/page.h> 20 20 #include <asm/rtas.h> 21 + #include <asm/setup.h> 21 22 #include <asm/fadump.h> 22 23 #include <asm/fadump-internal.h> 23 24 ··· 30 29 static void rtas_fadump_update_config(struct fw_dump *fadump_conf, 31 30 const struct rtas_fadump_mem_struct *fdm) 32 31 { 33 - fadump_conf->boot_mem_dest_addr = 34 - be64_to_cpu(fdm->rmr_region.destination_address); 35 - 36 32 fadump_conf->fadumphdr_addr = (fadump_conf->boot_mem_dest_addr + 37 33 fadump_conf->boot_memory_size); 38 34 } ··· 41 43 static void __init rtas_fadump_get_config(struct fw_dump *fadump_conf, 42 44 const struct rtas_fadump_mem_struct *fdm) 43 45 { 44 - fadump_conf->boot_mem_addr[0] = 45 - be64_to_cpu(fdm->rmr_region.source_address); 46 - fadump_conf->boot_mem_sz[0] = be64_to_cpu(fdm->rmr_region.source_len); 47 - fadump_conf->boot_memory_size = fadump_conf->boot_mem_sz[0]; 46 + unsigned long base, size, last_end, hole_size; 48 47 49 - fadump_conf->boot_mem_top = fadump_conf->boot_memory_size; 50 - fadump_conf->boot_mem_regs_cnt = 1; 48 + last_end = 0; 49 + hole_size = 0; 50 + fadump_conf->boot_memory_size = 0; 51 + fadump_conf->boot_mem_regs_cnt = 0; 52 + pr_debug("Boot memory regions:\n"); 53 + for (int i = 0; i < be16_to_cpu(fdm->header.dump_num_sections); i++) { 54 + int type = be16_to_cpu(fdm->rgn[i].source_data_type); 55 + u64 addr; 51 56 52 - /* 53 - * Start address of reserve dump area (permanent reservation) for 54 - * re-registering FADump after dump capture. 55 - */ 56 - fadump_conf->reserve_dump_area_start = 57 - be64_to_cpu(fdm->cpu_state_data.destination_address); 57 + switch (type) { 58 + case RTAS_FADUMP_CPU_STATE_DATA: 59 + addr = be64_to_cpu(fdm->rgn[i].destination_address); 60 + 61 + fadump_conf->cpu_state_dest_vaddr = (u64)__va(addr); 62 + /* 63 + * Start address of reserve dump area (permanent reservation) for 64 + * re-registering FADump after dump capture. 65 + */ 66 + fadump_conf->reserve_dump_area_start = addr; 67 + break; 68 + case RTAS_FADUMP_HPTE_REGION: 69 + /* Not processed currently. */ 70 + break; 71 + case RTAS_FADUMP_REAL_MODE_REGION: 72 + base = be64_to_cpu(fdm->rgn[i].source_address); 73 + size = be64_to_cpu(fdm->rgn[i].source_len); 74 + pr_debug("\t[%03d] base: 0x%lx, size: 0x%lx\n", i, base, size); 75 + if (!base) { 76 + fadump_conf->boot_mem_dest_addr = 77 + be64_to_cpu(fdm->rgn[i].destination_address); 78 + } 79 + 80 + fadump_conf->boot_mem_addr[fadump_conf->boot_mem_regs_cnt] = base; 81 + fadump_conf->boot_mem_sz[fadump_conf->boot_mem_regs_cnt] = size; 82 + fadump_conf->boot_memory_size += size; 83 + hole_size += (base - last_end); 84 + last_end = base + size; 85 + fadump_conf->boot_mem_regs_cnt++; 86 + break; 87 + case RTAS_FADUMP_PARAM_AREA: 88 + fadump_conf->param_area = be64_to_cpu(fdm->rgn[i].destination_address); 89 + break; 90 + default: 91 + pr_warn("Section type %d unsupported on this kernel. Ignoring!\n", type); 92 + break; 93 + } 94 + } 95 + fadump_conf->boot_mem_top = fadump_conf->boot_memory_size + hole_size; 58 96 59 97 rtas_fadump_update_config(fadump_conf, fdm); 60 98 } ··· 98 64 static u64 rtas_fadump_init_mem_struct(struct fw_dump *fadump_conf) 99 65 { 100 66 u64 addr = fadump_conf->reserve_dump_area_start; 67 + u16 sec_cnt = 0; 101 68 102 69 memset(&fdm, 0, sizeof(struct rtas_fadump_mem_struct)); 103 70 addr = addr & PAGE_MASK; 104 71 105 72 fdm.header.dump_format_version = cpu_to_be32(0x00000001); 106 - fdm.header.dump_num_sections = cpu_to_be16(3); 107 73 fdm.header.dump_status_flag = 0; 108 74 fdm.header.offset_first_dump_section = 109 - cpu_to_be32((u32)offsetof(struct rtas_fadump_mem_struct, 110 - cpu_state_data)); 75 + cpu_to_be32((u32)offsetof(struct rtas_fadump_mem_struct, rgn)); 111 76 112 77 /* 113 78 * Fields for disk dump option. ··· 122 89 123 90 /* Kernel dump sections */ 124 91 /* cpu state data section. */ 125 - fdm.cpu_state_data.request_flag = 126 - cpu_to_be32(RTAS_FADUMP_REQUEST_FLAG); 127 - fdm.cpu_state_data.source_data_type = 128 - cpu_to_be16(RTAS_FADUMP_CPU_STATE_DATA); 129 - fdm.cpu_state_data.source_address = 0; 130 - fdm.cpu_state_data.source_len = 131 - cpu_to_be64(fadump_conf->cpu_state_data_size); 132 - fdm.cpu_state_data.destination_address = cpu_to_be64(addr); 92 + fdm.rgn[sec_cnt].request_flag = cpu_to_be32(RTAS_FADUMP_REQUEST_FLAG); 93 + fdm.rgn[sec_cnt].source_data_type = cpu_to_be16(RTAS_FADUMP_CPU_STATE_DATA); 94 + fdm.rgn[sec_cnt].source_address = 0; 95 + fdm.rgn[sec_cnt].source_len = cpu_to_be64(fadump_conf->cpu_state_data_size); 96 + fdm.rgn[sec_cnt].destination_address = cpu_to_be64(addr); 133 97 addr += fadump_conf->cpu_state_data_size; 98 + sec_cnt++; 134 99 135 100 /* hpte region section */ 136 - fdm.hpte_region.request_flag = cpu_to_be32(RTAS_FADUMP_REQUEST_FLAG); 137 - fdm.hpte_region.source_data_type = 138 - cpu_to_be16(RTAS_FADUMP_HPTE_REGION); 139 - fdm.hpte_region.source_address = 0; 140 - fdm.hpte_region.source_len = 141 - cpu_to_be64(fadump_conf->hpte_region_size); 142 - fdm.hpte_region.destination_address = cpu_to_be64(addr); 101 + fdm.rgn[sec_cnt].request_flag = cpu_to_be32(RTAS_FADUMP_REQUEST_FLAG); 102 + fdm.rgn[sec_cnt].source_data_type = cpu_to_be16(RTAS_FADUMP_HPTE_REGION); 103 + fdm.rgn[sec_cnt].source_address = 0; 104 + fdm.rgn[sec_cnt].source_len = cpu_to_be64(fadump_conf->hpte_region_size); 105 + fdm.rgn[sec_cnt].destination_address = cpu_to_be64(addr); 143 106 addr += fadump_conf->hpte_region_size; 107 + sec_cnt++; 144 108 145 109 /* 146 110 * Align boot memory area destination address to page boundary to ··· 145 115 */ 146 116 addr = PAGE_ALIGN(addr); 147 117 148 - /* RMA region section */ 149 - fdm.rmr_region.request_flag = cpu_to_be32(RTAS_FADUMP_REQUEST_FLAG); 150 - fdm.rmr_region.source_data_type = 151 - cpu_to_be16(RTAS_FADUMP_REAL_MODE_REGION); 152 - fdm.rmr_region.source_address = cpu_to_be64(0); 153 - fdm.rmr_region.source_len = cpu_to_be64(fadump_conf->boot_memory_size); 154 - fdm.rmr_region.destination_address = cpu_to_be64(addr); 155 - addr += fadump_conf->boot_memory_size; 118 + /* First boot memory region destination address */ 119 + fadump_conf->boot_mem_dest_addr = addr; 120 + for (int i = 0; i < fadump_conf->boot_mem_regs_cnt; i++) { 121 + /* Boot memory regions */ 122 + fdm.rgn[sec_cnt].request_flag = cpu_to_be32(RTAS_FADUMP_REQUEST_FLAG); 123 + fdm.rgn[sec_cnt].source_data_type = cpu_to_be16(RTAS_FADUMP_REAL_MODE_REGION); 124 + fdm.rgn[sec_cnt].source_address = cpu_to_be64(fadump_conf->boot_mem_addr[i]); 125 + fdm.rgn[sec_cnt].source_len = cpu_to_be64(fadump_conf->boot_mem_sz[i]); 126 + fdm.rgn[sec_cnt].destination_address = cpu_to_be64(addr); 127 + addr += fadump_conf->boot_mem_sz[i]; 128 + sec_cnt++; 129 + } 130 + 131 + /* Parameters area */ 132 + if (fadump_conf->param_area) { 133 + fdm.rgn[sec_cnt].request_flag = cpu_to_be32(RTAS_FADUMP_REQUEST_FLAG); 134 + fdm.rgn[sec_cnt].source_data_type = cpu_to_be16(RTAS_FADUMP_PARAM_AREA); 135 + fdm.rgn[sec_cnt].source_address = cpu_to_be64(fadump_conf->param_area); 136 + fdm.rgn[sec_cnt].source_len = cpu_to_be64(COMMAND_LINE_SIZE); 137 + fdm.rgn[sec_cnt].destination_address = cpu_to_be64(fadump_conf->param_area); 138 + sec_cnt++; 139 + } 140 + fdm.header.dump_num_sections = cpu_to_be16(sec_cnt); 156 141 157 142 rtas_fadump_update_config(fadump_conf, &fdm); 158 143 ··· 181 136 182 137 static int rtas_fadump_register(struct fw_dump *fadump_conf) 183 138 { 184 - unsigned int wait_time; 139 + unsigned int wait_time, fdm_size; 185 140 int rc, err = -EIO; 141 + 142 + /* 143 + * Platform requires the exact size of the Dump Memory Structure. 144 + * Avoid including any unused rgns in the calculation, as this 145 + * could result in a parameter error (-3) from the platform. 146 + */ 147 + fdm_size = sizeof(struct rtas_fadump_section_header); 148 + fdm_size += be16_to_cpu(fdm.header.dump_num_sections) * sizeof(struct rtas_fadump_section); 186 149 187 150 /* TODO: Add upper time limit for the delay */ 188 151 do { 189 152 rc = rtas_call(fadump_conf->ibm_configure_kernel_dump, 3, 1, 190 - NULL, FADUMP_REGISTER, &fdm, 191 - sizeof(struct rtas_fadump_mem_struct)); 153 + NULL, FADUMP_REGISTER, &fdm, fdm_size); 192 154 193 155 wait_time = rtas_busy_delay_time(rc); 194 156 if (wait_time) ··· 213 161 pr_err("Failed to register. Hardware Error(%d).\n", rc); 214 162 break; 215 163 case -3: 216 - if (!is_fadump_boot_mem_contiguous()) 217 - pr_err("Can't have holes in boot memory area.\n"); 218 - else if (!is_fadump_reserved_mem_contiguous()) 164 + if (!is_fadump_reserved_mem_contiguous()) 219 165 pr_err("Can't have holes in reserved memory area.\n"); 220 166 221 167 pr_err("Failed to register. Parameter Error(%d).\n", rc); ··· 366 316 u32 num_cpus, *note_buf; 367 317 int i, rc = 0, cpu = 0; 368 318 struct pt_regs regs; 369 - unsigned long addr; 370 319 void *vaddr; 371 320 372 - addr = be64_to_cpu(fdm_active->cpu_state_data.destination_address); 373 - vaddr = __va(addr); 321 + vaddr = (void *)fadump_conf->cpu_state_dest_vaddr; 374 322 375 323 reg_header = vaddr; 376 324 if (be64_to_cpu(reg_header->magic_number) != ··· 423 375 } 424 376 final_note(note_buf); 425 377 426 - if (fdh) { 427 - pr_debug("Updating elfcore header (%llx) with cpu notes\n", 428 - fdh->elfcorehdr_addr); 429 - fadump_update_elfcore_header(__va(fdh->elfcorehdr_addr)); 430 - } 378 + pr_debug("Updating elfcore header (%llx) with cpu notes\n", fadump_conf->elfcorehdr_addr); 379 + fadump_update_elfcore_header((char *)fadump_conf->elfcorehdr_addr); 431 380 return 0; 432 381 433 382 error_out: ··· 434 389 } 435 390 436 391 /* 437 - * Validate and process the dump data stored by firmware before exporting 438 - * it through '/proc/vmcore'. 392 + * Validate and process the dump data stored by the firmware, and update 393 + * the CPU notes of elfcorehdr. 439 394 */ 440 395 static int __init rtas_fadump_process(struct fw_dump *fadump_conf) 441 396 { 442 - struct fadump_crash_info_header *fdh; 443 - int rc = 0; 444 - 445 397 if (!fdm_active || !fadump_conf->fadumphdr_addr) 446 398 return -EINVAL; 447 399 448 400 /* Check if the dump data is valid. */ 449 - if ((be16_to_cpu(fdm_active->header.dump_status_flag) == 450 - RTAS_FADUMP_ERROR_FLAG) || 451 - (fdm_active->cpu_state_data.error_flags != 0) || 452 - (fdm_active->rmr_region.error_flags != 0)) { 453 - pr_err("Dump taken by platform is not valid\n"); 454 - return -EINVAL; 401 + for (int i = 0; i < be16_to_cpu(fdm_active->header.dump_num_sections); i++) { 402 + int type = be16_to_cpu(fdm_active->rgn[i].source_data_type); 403 + int rc = 0; 404 + 405 + switch (type) { 406 + case RTAS_FADUMP_CPU_STATE_DATA: 407 + case RTAS_FADUMP_HPTE_REGION: 408 + case RTAS_FADUMP_REAL_MODE_REGION: 409 + if (fdm_active->rgn[i].error_flags != 0) { 410 + pr_err("Dump taken by platform is not valid (%d)\n", i); 411 + rc = -EINVAL; 412 + } 413 + if (fdm_active->rgn[i].bytes_dumped != fdm_active->rgn[i].source_len) { 414 + pr_err("Dump taken by platform is incomplete (%d)\n", i); 415 + rc = -EINVAL; 416 + } 417 + if (rc) { 418 + pr_warn("Region type: %u src addr: 0x%llx dest addr: 0x%llx\n", 419 + be16_to_cpu(fdm_active->rgn[i].source_data_type), 420 + be64_to_cpu(fdm_active->rgn[i].source_address), 421 + be64_to_cpu(fdm_active->rgn[i].destination_address)); 422 + return rc; 423 + } 424 + break; 425 + case RTAS_FADUMP_PARAM_AREA: 426 + if (fdm_active->rgn[i].bytes_dumped != fdm_active->rgn[i].source_len || 427 + fdm_active->rgn[i].error_flags != 0) { 428 + pr_warn("Failed to process additional parameters! Proceeding anyway..\n"); 429 + fadump_conf->param_area = 0; 430 + } 431 + break; 432 + default: 433 + /* 434 + * If the first/crashed kernel added a new region type that the 435 + * second/fadump kernel doesn't recognize, skip it and process 436 + * assuming backward compatibility. 437 + */ 438 + pr_warn("Unknown region found: type: %u src addr: 0x%llx dest addr: 0x%llx\n", 439 + be16_to_cpu(fdm_active->rgn[i].source_data_type), 440 + be64_to_cpu(fdm_active->rgn[i].source_address), 441 + be64_to_cpu(fdm_active->rgn[i].destination_address)); 442 + break; 443 + } 455 444 } 456 - if ((fdm_active->rmr_region.bytes_dumped != 457 - fdm_active->rmr_region.source_len) || 458 - !fdm_active->cpu_state_data.bytes_dumped) { 459 - pr_err("Dump taken by platform is incomplete\n"); 460 - return -EINVAL; 461 - } 462 445 463 - /* Validate the fadump crash info header */ 464 - fdh = __va(fadump_conf->fadumphdr_addr); 465 - if (fdh->magic_number != FADUMP_CRASH_INFO_MAGIC) { 466 - pr_err("Crash info header is not valid.\n"); 467 - return -EINVAL; 468 - } 469 - 470 - rc = rtas_fadump_build_cpu_notes(fadump_conf); 471 - if (rc) 472 - return rc; 473 - 474 - /* 475 - * We are done validating dump info and elfcore header is now ready 476 - * to be exported. set elfcorehdr_addr so that vmcore module will 477 - * export the elfcore header through '/proc/vmcore'. 478 - */ 479 - elfcorehdr_addr = fdh->elfcorehdr_addr; 480 - 481 - return 0; 446 + return rtas_fadump_build_cpu_notes(fadump_conf); 482 447 } 483 448 484 449 static void rtas_fadump_region_show(struct fw_dump *fadump_conf, 485 450 struct seq_file *m) 486 451 { 487 - const struct rtas_fadump_section *cpu_data_section; 488 452 const struct rtas_fadump_mem_struct *fdm_ptr; 489 453 490 454 if (fdm_active) ··· 501 447 else 502 448 fdm_ptr = &fdm; 503 449 504 - cpu_data_section = &(fdm_ptr->cpu_state_data); 505 - seq_printf(m, "CPU :[%#016llx-%#016llx] %#llx bytes, Dumped: %#llx\n", 506 - be64_to_cpu(cpu_data_section->destination_address), 507 - be64_to_cpu(cpu_data_section->destination_address) + 508 - be64_to_cpu(cpu_data_section->source_len) - 1, 509 - be64_to_cpu(cpu_data_section->source_len), 510 - be64_to_cpu(cpu_data_section->bytes_dumped)); 511 450 512 - seq_printf(m, "HPTE:[%#016llx-%#016llx] %#llx bytes, Dumped: %#llx\n", 513 - be64_to_cpu(fdm_ptr->hpte_region.destination_address), 514 - be64_to_cpu(fdm_ptr->hpte_region.destination_address) + 515 - be64_to_cpu(fdm_ptr->hpte_region.source_len) - 1, 516 - be64_to_cpu(fdm_ptr->hpte_region.source_len), 517 - be64_to_cpu(fdm_ptr->hpte_region.bytes_dumped)); 451 + for (int i = 0; i < be16_to_cpu(fdm_ptr->header.dump_num_sections); i++) { 452 + int type = be16_to_cpu(fdm_ptr->rgn[i].source_data_type); 518 453 519 - seq_printf(m, "DUMP: Src: %#016llx, Dest: %#016llx, ", 520 - be64_to_cpu(fdm_ptr->rmr_region.source_address), 521 - be64_to_cpu(fdm_ptr->rmr_region.destination_address)); 522 - seq_printf(m, "Size: %#llx, Dumped: %#llx bytes\n", 523 - be64_to_cpu(fdm_ptr->rmr_region.source_len), 524 - be64_to_cpu(fdm_ptr->rmr_region.bytes_dumped)); 454 + switch (type) { 455 + case RTAS_FADUMP_CPU_STATE_DATA: 456 + seq_printf(m, "CPU :[%#016llx-%#016llx] %#llx bytes, Dumped: %#llx\n", 457 + be64_to_cpu(fdm_ptr->rgn[i].destination_address), 458 + be64_to_cpu(fdm_ptr->rgn[i].destination_address) + 459 + be64_to_cpu(fdm_ptr->rgn[i].source_len) - 1, 460 + be64_to_cpu(fdm_ptr->rgn[i].source_len), 461 + be64_to_cpu(fdm_ptr->rgn[i].bytes_dumped)); 462 + break; 463 + case RTAS_FADUMP_HPTE_REGION: 464 + seq_printf(m, "HPTE:[%#016llx-%#016llx] %#llx bytes, Dumped: %#llx\n", 465 + be64_to_cpu(fdm_ptr->rgn[i].destination_address), 466 + be64_to_cpu(fdm_ptr->rgn[i].destination_address) + 467 + be64_to_cpu(fdm_ptr->rgn[i].source_len) - 1, 468 + be64_to_cpu(fdm_ptr->rgn[i].source_len), 469 + be64_to_cpu(fdm_ptr->rgn[i].bytes_dumped)); 470 + break; 471 + case RTAS_FADUMP_REAL_MODE_REGION: 472 + seq_printf(m, "DUMP: Src: %#016llx, Dest: %#016llx, ", 473 + be64_to_cpu(fdm_ptr->rgn[i].source_address), 474 + be64_to_cpu(fdm_ptr->rgn[i].destination_address)); 475 + seq_printf(m, "Size: %#llx, Dumped: %#llx bytes\n", 476 + be64_to_cpu(fdm_ptr->rgn[i].source_len), 477 + be64_to_cpu(fdm_ptr->rgn[i].bytes_dumped)); 478 + break; 479 + case RTAS_FADUMP_PARAM_AREA: 480 + seq_printf(m, "\n[%#016llx-%#016llx]: cmdline append: '%s'\n", 481 + be64_to_cpu(fdm_ptr->rgn[i].destination_address), 482 + be64_to_cpu(fdm_ptr->rgn[i].destination_address) + 483 + be64_to_cpu(fdm_ptr->rgn[i].source_len) - 1, 484 + (char *)__va(be64_to_cpu(fdm_ptr->rgn[i].destination_address))); 485 + break; 486 + default: 487 + seq_printf(m, "Unknown region type %d : Src: %#016llx, Dest: %#016llx, ", 488 + type, be64_to_cpu(fdm_ptr->rgn[i].source_address), 489 + be64_to_cpu(fdm_ptr->rgn[i].destination_address)); 490 + break; 491 + } 492 + } 525 493 526 494 /* Dump is active. Show preserved area start address. */ 527 495 if (fdm_active) { ··· 559 483 rtas_os_term((char *)msg); 560 484 } 561 485 486 + /* FADUMP_MAX_MEM_REGS or lower */ 487 + static int rtas_fadump_max_boot_mem_rgns(void) 488 + { 489 + /* 490 + * Version 1 of Kernel Assisted Dump Memory Structure (PAPR) supports 10 sections. 491 + * With one each section taken for CPU state data & HPTE respectively, 8 sections 492 + * can be used for boot memory regions. 493 + * 494 + * If new region(s) is(are) defined, maximum boot memory regions will decrease 495 + * proportionally. 496 + */ 497 + return RTAS_FADUMP_MAX_BOOT_MEM_REGS; 498 + } 499 + 562 500 static struct fadump_ops rtas_fadump_ops = { 563 501 .fadump_init_mem_struct = rtas_fadump_init_mem_struct, 564 502 .fadump_get_bootmem_min = rtas_fadump_get_bootmem_min, ··· 582 492 .fadump_process = rtas_fadump_process, 583 493 .fadump_region_show = rtas_fadump_region_show, 584 494 .fadump_trigger = rtas_fadump_trigger, 495 + .fadump_max_boot_mem_rgns = rtas_fadump_max_boot_mem_rgns, 585 496 }; 586 497 587 498 void __init rtas_fadump_dt_scan(struct fw_dump *fadump_conf, u64 node) ··· 599 508 if (!token) 600 509 return; 601 510 602 - fadump_conf->ibm_configure_kernel_dump = be32_to_cpu(*token); 603 - fadump_conf->ops = &rtas_fadump_ops; 604 - fadump_conf->fadump_supported = 1; 511 + fadump_conf->ibm_configure_kernel_dump = be32_to_cpu(*token); 512 + fadump_conf->ops = &rtas_fadump_ops; 513 + fadump_conf->fadump_supported = 1; 514 + fadump_conf->param_area_supported = 1; 605 515 606 516 /* Firmware supports 64-bit value for size, align it to pagesize. */ 607 517 fadump_conf->max_copy_size = ALIGN_DOWN(U64_MAX, PAGE_SIZE);
+18 -11
arch/powerpc/platforms/pseries/rtas-fadump.h
··· 23 23 #define RTAS_FADUMP_HPTE_REGION 0x0002 24 24 #define RTAS_FADUMP_REAL_MODE_REGION 0x0011 25 25 26 + /* OS defined sections */ 27 + #define RTAS_FADUMP_PARAM_AREA 0x0100 28 + 26 29 /* Dump request flag */ 27 30 #define RTAS_FADUMP_REQUEST_FLAG 0x00000001 28 31 29 32 /* Dump status flag */ 30 33 #define RTAS_FADUMP_ERROR_FLAG 0x2000 34 + 35 + /* 36 + * The Firmware Assisted Dump Memory structure supports a maximum of 10 sections 37 + * in the dump memory structure. Presently, three sections are used for 38 + * CPU state data, HPTE & Parameters area, while the remaining seven sections 39 + * can be used for boot memory regions. 40 + */ 41 + #define MAX_SECTIONS 10 42 + #define RTAS_FADUMP_MAX_BOOT_MEM_REGS 7 31 43 32 44 /* Kernel Dump section info */ 33 45 struct rtas_fadump_section { ··· 73 61 * Firmware Assisted dump memory structure. This structure is required for 74 62 * registering future kernel dump with power firmware through rtas call. 75 63 * 76 - * No disk dump option. Hence disk dump path string section is not included. 64 + * In version 1, the platform permits one section header, dump-disk path 65 + * and ten sections. 66 + * 67 + * Note: No disk dump option. Hence disk dump path string section is not 68 + * included. 77 69 */ 78 70 struct rtas_fadump_mem_struct { 79 71 struct rtas_fadump_section_header header; 80 - 81 - /* Kernel dump sections */ 82 - struct rtas_fadump_section cpu_state_data; 83 - struct rtas_fadump_section hpte_region; 84 - 85 - /* 86 - * TODO: Extend multiple boot memory regions support in the kernel 87 - * for this platform. 88 - */ 89 - struct rtas_fadump_section rmr_region; 72 + struct rtas_fadump_section rgn[MAX_SECTIONS]; 90 73 }; 91 74 92 75 /*
+1 -1
arch/powerpc/platforms/pseries/vas.c
··· 228 228 struct pseries_vas_window *txwin = data; 229 229 230 230 /* 231 - * The thread hanlder will process this interrupt if it is 231 + * The thread handler will process this interrupt if it is 232 232 * already running. 233 233 */ 234 234 atomic_inc(&txwin->pending_faults);
+2 -6
arch/powerpc/platforms/pseries/vio.c
··· 1592 1592 const char *cp; 1593 1593 1594 1594 dn = dev->of_node; 1595 - if (!dn) 1596 - return -ENODEV; 1597 - cp = of_get_property(dn, "compatible", NULL); 1598 - if (!cp) 1599 - return -ENODEV; 1595 + if (dn && (cp = of_get_property(dn, "compatible", NULL))) 1596 + add_uevent_var(env, "MODALIAS=vio:T%sS%s", vio_dev->type, cp); 1600 1597 1601 - add_uevent_var(env, "MODALIAS=vio:T%sS%s", vio_dev->type, cp); 1602 1598 return 0; 1603 1599 } 1604 1600
-2
arch/powerpc/sysdev/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0 2 2 3 - ccflags-$(CONFIG_PPC64) := $(NO_MINIMAL_TOC) 4 - 5 3 mpic-msi-obj-$(CONFIG_PCI_MSI) += mpic_msi.o mpic_u3msi.o 6 4 obj-$(CONFIG_MPIC) += mpic.o $(mpic-msi-obj-y) 7 5 obj-$(CONFIG_MPIC_TIMER) += mpic_timer.o
-4
arch/powerpc/sysdev/dart_iommu.c
··· 24 24 #include <linux/suspend.h> 25 25 #include <linux/memblock.h> 26 26 #include <linux/gfp.h> 27 - #include <linux/kmemleak.h> 28 27 #include <linux/of_address.h> 29 28 #include <asm/io.h> 30 29 #include <asm/iommu.h> ··· 241 242 NUMA_NO_NODE); 242 243 if (!dart_tablebase) 243 244 panic("Failed to allocate 16MB below 2GB for DART table\n"); 244 - 245 - /* There is no point scanning the DART space for leaks*/ 246 - kmemleak_no_scan((void *)dart_tablebase); 247 245 248 246 /* Allocate a spare page to map all invalid DART pages. We need to do 249 247 * that to work around what looks like a problem with the HT bridge
+3 -3
arch/powerpc/sysdev/fsl_gtm.c
··· 77 77 static LIST_HEAD(gtms); 78 78 79 79 /** 80 - * gtm_get_timer - request GTM timer to use it with the rest of GTM API 80 + * gtm_get_timer16 - request GTM timer to use it with the rest of GTM API 81 81 * Context: non-IRQ 82 82 * 83 83 * This function reserves GTM timer for later use. It returns gtm_timer ··· 110 110 EXPORT_SYMBOL(gtm_get_timer16); 111 111 112 112 /** 113 - * gtm_get_specific_timer - request specific GTM timer 113 + * gtm_get_specific_timer16 - request specific GTM timer 114 114 * @gtm: specific GTM, pass here GTM's device_node->data 115 115 * @timer: specific timer number, Timer1 is 0. 116 116 * Context: non-IRQ ··· 260 260 EXPORT_SYMBOL(gtm_set_timer16); 261 261 262 262 /** 263 - * gtm_set_exact_utimer16 - (re)set 16 bits timer 263 + * gtm_set_exact_timer16 - (re)set 16 bits timer 264 264 * @tmr: pointer to the gtm_timer structure obtained from gtm_get_timer 265 265 * @usec: timer interval in microseconds 266 266 * @reload: if set, the timer will reset upon expiry rather than
+2
arch/powerpc/sysdev/fsl_msi.c
··· 564 564 .msiir_offset = 0x38, 565 565 }; 566 566 567 + #ifdef CONFIG_EPAPR_PARAVIRT 567 568 static const struct fsl_msi_feature vmpic_msi_feature = { 568 569 .fsl_pic_ip = FSL_PIC_IP_VMPIC, 569 570 .msiir_offset = 0, 570 571 }; 572 + #endif 571 573 572 574 static const struct of_device_id fsl_of_msi_ids[] = { 573 575 {
+2 -2
arch/powerpc/sysdev/xive/common.c
··· 383 383 * CPU. 384 384 * 385 385 * If we find that there is indeed more in there, we call 386 - * force_external_irq_replay() to make Linux synthetize an 386 + * force_external_irq_replay() to make Linux synthesize an 387 387 * external interrupt on the next call to local_irq_restore(). 388 388 */ 389 389 static void xive_do_queue_eoi(struct xive_cpu *xc) ··· 874 874 * 875 875 * This also tells us that it's in flight to a host queue 876 876 * or has already been fetched but hasn't been EOIed yet 877 - * by the host. This it's potentially using up a host 877 + * by the host. Thus it's potentially using up a host 878 878 * queue slot. This is important to know because as long 879 879 * as this is the case, we must not hard-unmask it when 880 880 * "returning" that interrupt to the host.
+1 -1
arch/powerpc/sysdev/xive/native.c
··· 415 415 return; 416 416 } 417 417 418 - /* Grab it's CAM value */ 418 + /* Grab its CAM value */ 419 419 rc = opal_xive_get_vp_info(vp, NULL, &vp_cam_be, NULL, NULL); 420 420 if (rc) { 421 421 pr_err("Failed to get pool VP info CPU %d\n", cpu);
-2
arch/powerpc/xmon/Makefile
··· 10 10 # Disable ftrace for the entire directory 11 11 ccflags-remove-$(CONFIG_FUNCTION_TRACER) += $(CC_FLAGS_FTRACE) 12 12 13 - ccflags-$(CONFIG_PPC64) := $(NO_MINIMAL_TOC) 14 - 15 13 # Clang stores addresses on the stack causing the frame size to blow 16 14 # out. See https://github.com/ClangBuiltLinux/linux/issues/252 17 15 ccflags-$(CONFIG_CC_IS_CLANG) += -Wframe-larger-than=4096
+3 -3
arch/powerpc/xmon/xmon.c
··· 1350 1350 } 1351 1351 termch = cpu; 1352 1352 1353 - if (!scanhex(&cpu)) { 1353 + if (!scanhex(&cpu) || cpu >= num_possible_cpus()) { 1354 1354 /* print cpus waiting or in xmon */ 1355 1355 printf("cpus stopped:"); 1356 1356 last_cpu = first_cpu = NR_CPUS; ··· 2772 2772 2773 2773 termch = c; /* Put c back, it wasn't 'a' */ 2774 2774 2775 - if (scanhex(&num)) 2775 + if (scanhex(&num) && num < num_possible_cpus()) 2776 2776 dump_one_paca(num); 2777 2777 else 2778 2778 dump_one_paca(xmon_owner); ··· 2845 2845 2846 2846 termch = c; /* Put c back, it wasn't 'a' */ 2847 2847 2848 - if (scanhex(&num)) 2848 + if (scanhex(&num) && num < num_possible_cpus()) 2849 2849 dump_one_xive(num); 2850 2850 else 2851 2851 dump_one_xive(xmon_owner);
+3 -10
arch/x86/include/asm/kexec.h
··· 207 207 extern void kdump_nmi_shootdown_cpus(void); 208 208 209 209 #ifdef CONFIG_CRASH_HOTPLUG 210 - void arch_crash_handle_hotplug_event(struct kimage *image); 210 + void arch_crash_handle_hotplug_event(struct kimage *image, void *arg); 211 211 #define arch_crash_handle_hotplug_event arch_crash_handle_hotplug_event 212 212 213 - #ifdef CONFIG_HOTPLUG_CPU 214 - int arch_crash_hotplug_cpu_support(void); 215 - #define crash_hotplug_cpu_support arch_crash_hotplug_cpu_support 216 - #endif 217 - 218 - #ifdef CONFIG_MEMORY_HOTPLUG 219 - int arch_crash_hotplug_memory_support(void); 220 - #define crash_hotplug_memory_support arch_crash_hotplug_memory_support 221 - #endif 213 + int arch_crash_hotplug_support(struct kimage *image, unsigned long kexec_flags); 214 + #define arch_crash_hotplug_support arch_crash_hotplug_support 222 215 223 216 unsigned int arch_crash_get_elfcorehdr_size(void); 224 217 #define crash_get_elfcorehdr_size arch_crash_get_elfcorehdr_size
+20 -12
arch/x86/kernel/crash.c
··· 402 402 #undef pr_fmt 403 403 #define pr_fmt(fmt) "crash hp: " fmt 404 404 405 - /* These functions provide the value for the sysfs crash_hotplug nodes */ 406 - #ifdef CONFIG_HOTPLUG_CPU 407 - int arch_crash_hotplug_cpu_support(void) 405 + int arch_crash_hotplug_support(struct kimage *image, unsigned long kexec_flags) 408 406 { 409 - return crash_check_update_elfcorehdr(); 410 - } 411 - #endif 412 407 413 - #ifdef CONFIG_MEMORY_HOTPLUG 414 - int arch_crash_hotplug_memory_support(void) 415 - { 416 - return crash_check_update_elfcorehdr(); 417 - } 408 + #ifdef CONFIG_KEXEC_FILE 409 + if (image->file_mode) 410 + return 1; 418 411 #endif 412 + /* 413 + * Initially, crash hotplug support for kexec_load was added 414 + * with the KEXEC_UPDATE_ELFCOREHDR flag. Later, this 415 + * functionality was expanded to accommodate multiple kexec 416 + * segment updates, leading to the introduction of the 417 + * KEXEC_CRASH_HOTPLUG_SUPPORT kexec flag bit. Consequently, 418 + * when the kexec tool sends either of these flags, it indicates 419 + * that the required kexec segment (elfcorehdr) is excluded from 420 + * the SHA calculation. 421 + */ 422 + return (kexec_flags & KEXEC_UPDATE_ELFCOREHDR || 423 + kexec_flags & KEXEC_CRASH_HOTPLUG_SUPPORT); 424 + } 419 425 420 426 unsigned int arch_crash_get_elfcorehdr_size(void) 421 427 { ··· 438 432 /** 439 433 * arch_crash_handle_hotplug_event() - Handle hotplug elfcorehdr changes 440 434 * @image: a pointer to kexec_crash_image 435 + * @arg: struct memory_notify handler for memory hotplug case and 436 + * NULL for CPU hotplug case. 441 437 * 442 438 * Prepare the new elfcorehdr and replace the existing elfcorehdr. 443 439 */ 444 - void arch_crash_handle_hotplug_event(struct kimage *image) 440 + void arch_crash_handle_hotplug_event(struct kimage *image, void *arg) 445 441 { 446 442 void *elfbuf = NULL, *old_elfcorehdr; 447 443 unsigned long nr_mem_ranges;
+1 -1
drivers/base/cpu.c
··· 306 306 struct device_attribute *attr, 307 307 char *buf) 308 308 { 309 - return sysfs_emit(buf, "%d\n", crash_hotplug_cpu_support()); 309 + return sysfs_emit(buf, "%d\n", crash_check_hotplug_support()); 310 310 } 311 311 static DEVICE_ATTR_ADMIN_RO(crash_hotplug); 312 312 #endif
+1 -1
drivers/base/memory.c
··· 535 535 static ssize_t crash_hotplug_show(struct device *dev, 536 536 struct device_attribute *attr, char *buf) 537 537 { 538 - return sysfs_emit(buf, "%d\n", crash_hotplug_memory_support()); 538 + return sysfs_emit(buf, "%d\n", crash_check_hotplug_support()); 539 539 } 540 540 static DEVICE_ATTR_RO(crash_hotplug); 541 541 #endif
+4 -4
drivers/cpufreq/pmac32-cpufreq.c
··· 120 120 121 121 /* tweak L2 for high voltage */ 122 122 if (has_cpu_l2lve) { 123 - hid2 = mfspr(SPRN_HID2); 123 + hid2 = mfspr(SPRN_HID2_750FX); 124 124 hid2 &= ~0x2000; 125 - mtspr(SPRN_HID2, hid2); 125 + mtspr(SPRN_HID2_750FX, hid2); 126 126 } 127 127 } 128 128 #ifdef CONFIG_PPC_BOOK3S_32 ··· 131 131 if (low_speed == 1) { 132 132 /* tweak L2 for low voltage */ 133 133 if (has_cpu_l2lve) { 134 - hid2 = mfspr(SPRN_HID2); 134 + hid2 = mfspr(SPRN_HID2_750FX); 135 135 hid2 |= 0x2000; 136 - mtspr(SPRN_HID2, hid2); 136 + mtspr(SPRN_HID2_750FX, hid2); 137 137 } 138 138 139 139 /* ramping down, set voltage last */
+1 -1
drivers/macintosh/Kconfig
··· 262 262 will be called ams. 263 263 264 264 config SENSORS_AMS_PMU 265 - bool "PMU variant" 265 + bool "PMU variant" if SENSORS_AMS_I2C 266 266 depends on SENSORS_AMS && ADB_PMU 267 267 default y 268 268 help
+10 -14
drivers/macintosh/macio-adb.c
··· 83 83 84 84 int macio_probe(void) 85 85 { 86 - struct device_node *np; 86 + struct device_node *np __free(device_node) = 87 + of_find_compatible_node(NULL, "adb", "chrp,adb0"); 87 88 88 - np = of_find_compatible_node(NULL, "adb", "chrp,adb0"); 89 - if (np) { 90 - of_node_put(np); 89 + if (np) 91 90 return 0; 92 - } 91 + 93 92 return -ENODEV; 94 93 } 95 94 96 95 int macio_init(void) 97 96 { 98 - struct device_node *adbs; 97 + struct device_node *adbs __free(device_node) = 98 + of_find_compatible_node(NULL, "adb", "chrp,adb0"); 99 99 struct resource r; 100 100 unsigned int irq; 101 101 102 - adbs = of_find_compatible_node(NULL, "adb", "chrp,adb0"); 103 102 if (!adbs) 104 103 return -ENXIO; 105 104 106 - if (of_address_to_resource(adbs, 0, &r)) { 107 - of_node_put(adbs); 105 + if (of_address_to_resource(adbs, 0, &r)) 108 106 return -ENXIO; 109 - } 107 + 110 108 adb = ioremap(r.start, sizeof(struct adb_regs)); 111 - if (!adb) { 112 - of_node_put(adbs); 109 + if (!adb) 113 110 return -ENOMEM; 114 - } 111 + 115 112 116 113 out_8(&adb->ctrl.r, 0); 117 114 out_8(&adb->intr.r, 0); ··· 118 121 out_8(&adb->autopoll.r, APE); 119 122 120 123 irq = irq_of_parse_and_map(adbs, 0); 121 - of_node_put(adbs); 122 124 if (request_irq(irq, macio_adb_interrupt, 0, "ADB", (void *)0)) { 123 125 iounmap(adb); 124 126 printk(KERN_ERR "ADB: can't get irq %d\n", irq);
+7 -8
include/linux/crash_core.h
··· 37 37 38 38 39 39 #ifndef arch_crash_handle_hotplug_event 40 - static inline void arch_crash_handle_hotplug_event(struct kimage *image) { } 40 + static inline void arch_crash_handle_hotplug_event(struct kimage *image, void *arg) { } 41 41 #endif 42 42 43 - int crash_check_update_elfcorehdr(void); 43 + int crash_check_hotplug_support(void); 44 44 45 - #ifndef crash_hotplug_cpu_support 46 - static inline int crash_hotplug_cpu_support(void) { return 0; } 47 - #endif 48 - 49 - #ifndef crash_hotplug_memory_support 50 - static inline int crash_hotplug_memory_support(void) { return 0; } 45 + #ifndef arch_crash_hotplug_support 46 + static inline int arch_crash_hotplug_support(struct kimage *image, unsigned long kexec_flags) 47 + { 48 + return 0; 49 + } 51 50 #endif 52 51 53 52 #ifndef crash_get_elfcorehdr_size
+7 -4
include/linux/kexec.h
··· 319 319 /* If set, we are using file mode kexec syscall */ 320 320 unsigned int file_mode:1; 321 321 #ifdef CONFIG_CRASH_HOTPLUG 322 - /* If set, allow changes to elfcorehdr of kexec_load'd image */ 323 - unsigned int update_elfcorehdr:1; 322 + /* If set, it is safe to update kexec segments that are 323 + * excluded from SHA calculation. 324 + */ 325 + unsigned int hotplug_support:1; 324 326 #endif 325 327 326 328 #ifdef ARCH_HAS_KIMAGE_ARCH ··· 393 391 394 392 /* List of defined/legal kexec flags */ 395 393 #ifndef CONFIG_KEXEC_JUMP 396 - #define KEXEC_FLAGS (KEXEC_ON_CRASH | KEXEC_UPDATE_ELFCOREHDR) 394 + #define KEXEC_FLAGS (KEXEC_ON_CRASH | KEXEC_UPDATE_ELFCOREHDR | KEXEC_CRASH_HOTPLUG_SUPPORT) 397 395 #else 398 - #define KEXEC_FLAGS (KEXEC_ON_CRASH | KEXEC_PRESERVE_CONTEXT | KEXEC_UPDATE_ELFCOREHDR) 396 + #define KEXEC_FLAGS (KEXEC_ON_CRASH | KEXEC_PRESERVE_CONTEXT | KEXEC_UPDATE_ELFCOREHDR | \ 397 + KEXEC_CRASH_HOTPLUG_SUPPORT) 399 398 #endif 400 399 401 400 /* List of defined/legal kexec file flags */
+1
include/uapi/linux/kexec.h
··· 13 13 #define KEXEC_ON_CRASH 0x00000001 14 14 #define KEXEC_PRESERVE_CONTEXT 0x00000002 15 15 #define KEXEC_UPDATE_ELFCOREHDR 0x00000004 16 + #define KEXEC_CRASH_HOTPLUG_SUPPORT 0x00000008 16 17 #define KEXEC_ARCH_MASK 0xffff0000 17 18 18 19 /*
+2 -2
include/uapi/linux/kvm.h
··· 1221 1221 /* Available with KVM_CAP_SPAPR_RESIZE_HPT */ 1222 1222 #define KVM_PPC_RESIZE_HPT_PREPARE _IOR(KVMIO, 0xad, struct kvm_ppc_resize_hpt) 1223 1223 #define KVM_PPC_RESIZE_HPT_COMMIT _IOR(KVMIO, 0xae, struct kvm_ppc_resize_hpt) 1224 - /* Available with KVM_CAP_PPC_RADIX_MMU or KVM_CAP_PPC_HASH_MMU_V3 */ 1224 + /* Available with KVM_CAP_PPC_MMU_RADIX or KVM_CAP_PPC_MMU_HASH_V3 */ 1225 1225 #define KVM_PPC_CONFIGURE_V3_MMU _IOW(KVMIO, 0xaf, struct kvm_ppc_mmuv3_cfg) 1226 - /* Available with KVM_CAP_PPC_RADIX_MMU */ 1226 + /* Available with KVM_CAP_PPC_MMU_RADIX */ 1227 1227 #define KVM_PPC_GET_RMMU_INFO _IOW(KVMIO, 0xb0, struct kvm_ppc_rmmu_info) 1228 1228 /* Available with KVM_CAP_PPC_GET_CPU_CHAR */ 1229 1229 #define KVM_PPC_GET_CPU_CHAR _IOR(KVMIO, 0xb1, struct kvm_ppc_cpu_char)
+16
include/uapi/linux/prctl.h
··· 306 306 # define PR_RISCV_V_VSTATE_CTRL_NEXT_MASK 0xc 307 307 # define PR_RISCV_V_VSTATE_CTRL_MASK 0x1f 308 308 309 + /* PowerPC Dynamic Execution Control Register (DEXCR) controls */ 310 + #define PR_PPC_GET_DEXCR 72 311 + #define PR_PPC_SET_DEXCR 73 312 + /* DEXCR aspect to act on */ 313 + # define PR_PPC_DEXCR_SBHE 0 /* Speculative branch hint enable */ 314 + # define PR_PPC_DEXCR_IBRTPD 1 /* Indirect branch recurrent target prediction disable */ 315 + # define PR_PPC_DEXCR_SRAPD 2 /* Subroutine return address prediction disable */ 316 + # define PR_PPC_DEXCR_NPHIE 3 /* Non-privileged hash instruction enable */ 317 + /* Action to apply / return */ 318 + # define PR_PPC_DEXCR_CTRL_EDITABLE 0x1 /* Aspect can be modified with PR_PPC_SET_DEXCR */ 319 + # define PR_PPC_DEXCR_CTRL_SET 0x2 /* Set the aspect for this process */ 320 + # define PR_PPC_DEXCR_CTRL_CLEAR 0x4 /* Clear the aspect for this process */ 321 + # define PR_PPC_DEXCR_CTRL_SET_ONEXEC 0x8 /* Set the aspect on exec */ 322 + # define PR_PPC_DEXCR_CTRL_CLEAR_ONEXEC 0x10 /* Clear the aspect on exec */ 323 + # define PR_PPC_DEXCR_CTRL_MASK 0x1f 324 + 309 325 #endif /* _LINUX_PRCTL_H */
+13 -16
kernel/crash_core.c
··· 493 493 494 494 /* 495 495 * This routine utilized when the crash_hotplug sysfs node is read. 496 - * It reflects the kernel's ability/permission to update the crash 497 - * elfcorehdr directly. 496 + * It reflects the kernel's ability/permission to update the kdump 497 + * image directly. 498 498 */ 499 - int crash_check_update_elfcorehdr(void) 499 + int crash_check_hotplug_support(void) 500 500 { 501 501 int rc = 0; 502 502 ··· 508 508 return 0; 509 509 } 510 510 if (kexec_crash_image) { 511 - if (kexec_crash_image->file_mode) 512 - rc = 1; 513 - else 514 - rc = kexec_crash_image->update_elfcorehdr; 511 + rc = kexec_crash_image->hotplug_support; 515 512 } 516 513 /* Release lock now that update complete */ 517 514 kexec_unlock(); ··· 531 534 * list of segments it checks (since the elfcorehdr changes and thus 532 535 * would require an update to purgatory itself to update the digest). 533 536 */ 534 - static void crash_handle_hotplug_event(unsigned int hp_action, unsigned int cpu) 537 + static void crash_handle_hotplug_event(unsigned int hp_action, unsigned int cpu, void *arg) 535 538 { 536 539 struct kimage *image; 537 540 ··· 549 552 550 553 image = kexec_crash_image; 551 554 552 - /* Check that updating elfcorehdr is permitted */ 553 - if (!(image->file_mode || image->update_elfcorehdr)) 555 + /* Check that kexec segments update is permitted */ 556 + if (!image->hotplug_support) 554 557 goto out; 555 558 556 559 if (hp_action == KEXEC_CRASH_HP_ADD_CPU || ··· 593 596 image->hp_action = hp_action; 594 597 595 598 /* Now invoke arch-specific update handler */ 596 - arch_crash_handle_hotplug_event(image); 599 + arch_crash_handle_hotplug_event(image, arg); 597 600 598 601 /* No longer handling a hotplug event */ 599 602 image->hp_action = KEXEC_CRASH_HP_NONE; ··· 609 612 crash_hotplug_unlock(); 610 613 } 611 614 612 - static int crash_memhp_notifier(struct notifier_block *nb, unsigned long val, void *v) 615 + static int crash_memhp_notifier(struct notifier_block *nb, unsigned long val, void *arg) 613 616 { 614 617 switch (val) { 615 618 case MEM_ONLINE: 616 619 crash_handle_hotplug_event(KEXEC_CRASH_HP_ADD_MEMORY, 617 - KEXEC_CRASH_HP_INVALID_CPU); 620 + KEXEC_CRASH_HP_INVALID_CPU, arg); 618 621 break; 619 622 620 623 case MEM_OFFLINE: 621 624 crash_handle_hotplug_event(KEXEC_CRASH_HP_REMOVE_MEMORY, 622 - KEXEC_CRASH_HP_INVALID_CPU); 625 + KEXEC_CRASH_HP_INVALID_CPU, arg); 623 626 break; 624 627 } 625 628 return NOTIFY_OK; ··· 632 635 633 636 static int crash_cpuhp_online(unsigned int cpu) 634 637 { 635 638 crash_handle_hotplug_event(KEXEC_CRASH_HP_ADD_CPU, cpu, NULL); 636 639 return 0; 637 640 } 638 641 639 642 static int crash_cpuhp_offline(unsigned int cpu) 640 643 { 641 644 crash_handle_hotplug_event(KEXEC_CRASH_HP_REMOVE_CPU, cpu, NULL); 642 645 return 0; 643 646 } 644 647
+2 -2
kernel/kexec.c
··· 135 135 image->preserve_context = 1; 136 136 137 137 #ifdef CONFIG_CRASH_HOTPLUG 138 - if (flags & KEXEC_UPDATE_ELFCOREHDR) 139 - image->update_elfcorehdr = 1; 138 + if ((flags & KEXEC_ON_CRASH) && arch_crash_hotplug_support(image, flags)) 139 + image->hotplug_support = 1; 140 140 #endif 141 141 142 142 ret = machine_kexec_prepare(image);
+5
kernel/kexec_file.c
··· 376 376 if (ret) 377 377 goto out; 378 378 379 + #ifdef CONFIG_CRASH_HOTPLUG 380 + if ((flags & KEXEC_FILE_ON_CRASH) && arch_crash_hotplug_support(image, flags)) 381 + image->hotplug_support = 1; 382 + #endif 383 + 379 384 ret = machine_kexec_prepare(image); 380 385 if (ret) 381 386 goto out;
+16
kernel/sys.c
··· 146 146 #ifndef RISCV_V_GET_CONTROL 147 147 # define RISCV_V_GET_CONTROL() (-EINVAL) 148 148 #endif 149 + #ifndef PPC_GET_DEXCR_ASPECT 150 + # define PPC_GET_DEXCR_ASPECT(a, b) (-EINVAL) 151 + #endif 152 + #ifndef PPC_SET_DEXCR_ASPECT 153 + # define PPC_SET_DEXCR_ASPECT(a, b, c) (-EINVAL) 154 + #endif 149 155 150 156 /* 151 157 * this is where the system-wide overflow UID and GID are defined, for ··· 2731 2725 break; 2732 2726 case PR_GET_MDWE: 2733 2727 error = prctl_get_mdwe(arg2, arg3, arg4, arg5); 2728 + break; 2729 + case PR_PPC_GET_DEXCR: 2730 + if (arg3 || arg4 || arg5) 2731 + return -EINVAL; 2732 + error = PPC_GET_DEXCR_ASPECT(me, arg2); 2733 + break; 2734 + case PR_PPC_SET_DEXCR: 2735 + if (arg4 || arg5) 2736 + return -EINVAL; 2737 + error = PPC_SET_DEXCR_ASPECT(me, arg2, arg3); 2734 2738 break; 2735 2739 case PR_SET_VMA: 2736 2740 error = prctl_set_vma(arg2, arg3, arg4, arg5);
+1 -1
tools/include/uapi/linux/kvm.h
··· 1221 1221 /* Available with KVM_CAP_SPAPR_RESIZE_HPT */ 1222 1222 #define KVM_PPC_RESIZE_HPT_PREPARE _IOR(KVMIO, 0xad, struct kvm_ppc_resize_hpt) 1223 1223 #define KVM_PPC_RESIZE_HPT_COMMIT _IOR(KVMIO, 0xae, struct kvm_ppc_resize_hpt) 1224 - /* Available with KVM_CAP_PPC_RADIX_MMU or KVM_CAP_PPC_HASH_MMU_V3 */ 1224 + /* Available with KVM_CAP_PPC_RADIX_MMU or KVM_CAP_PPC_MMU_HASH_V3 */ 1225 1225 #define KVM_PPC_CONFIGURE_V3_MMU _IOW(KVMIO, 0xaf, struct kvm_ppc_mmuv3_cfg) 1226 1226 /* Available with KVM_CAP_PPC_RADIX_MMU */ 1227 1227 #define KVM_PPC_GET_RMMU_INFO _IOW(KVMIO, 0xb0, struct kvm_ppc_rmmu_info)
+3 -8
tools/testing/selftests/powerpc/Makefile
··· 7 7 8 8 ifeq ($(ARCH),powerpc) 9 9 10 - GIT_VERSION = $(shell git describe --always --long --dirty || echo "unknown") 11 - 12 - CFLAGS := -std=gnu99 -O2 -Wall -Werror -DGIT_VERSION='"$(GIT_VERSION)"' -I$(CURDIR)/include $(CFLAGS) 13 - 14 - export CFLAGS 15 - 16 10 SUB_DIRS = alignment \ 17 11 benchmarks \ 18 12 cache_shape \ ··· 40 46 BUILD_TARGET=$(OUTPUT)/$@; mkdir -p $$BUILD_TARGET; $(MAKE) OUTPUT=$$BUILD_TARGET -k -C $@ all 41 47 42 48 include ../lib.mk 49 + include ./flags.mk 43 50 44 51 override define RUN_TESTS 45 52 +@for TARGET in $(SUB_DIRS); do \ ··· 52 57 override define INSTALL_RULE 53 58 +@for TARGET in $(SUB_DIRS); do \ 54 59 BUILD_TARGET=$(OUTPUT)/$$TARGET; \ 55 - $(MAKE) OUTPUT=$$BUILD_TARGET -C $$TARGET install;\ 60 + $(MAKE) OUTPUT=$$BUILD_TARGET INSTALL_PATH=$$INSTALL_PATH/$$TARGET -C $$TARGET install;\ 56 61 done; 57 62 endef 58 63 59 64 emit_tests: 60 65 +@for TARGET in $(SUB_DIRS); do \ 61 66 BUILD_TARGET=$(OUTPUT)/$$TARGET; \ 62 - $(MAKE) OUTPUT=$$BUILD_TARGET -s -C $$TARGET $@;\ 67 + $(MAKE) OUTPUT=$$BUILD_TARGET COLLECTION=$(COLLECTION)/$$TARGET -s -C $$TARGET $@;\ 63 68 done; 64 69 65 70 override define CLEAN
+1
tools/testing/selftests/powerpc/alignment/Makefile
··· 3 3 4 4 top_srcdir = ../../../../.. 5 5 include ../../lib.mk 6 + include ../flags.mk 6 7 7 8 $(TEST_GEN_PROGS): ../harness.c ../utils.c
+3 -2
tools/testing/selftests/powerpc/benchmarks/Makefile
··· 4 4 5 5 TEST_FILES := settings 6 6 7 - CFLAGS += -O2 8 - 9 7 top_srcdir = ../../../../.. 10 8 include ../../lib.mk 9 + include ../flags.mk 10 + 11 + CFLAGS += -O2 11 12 12 13 $(TEST_GEN_PROGS): ../harness.c 13 14
+1
tools/testing/selftests/powerpc/cache_shape/Makefile
··· 3 3 4 4 top_srcdir = ../../../../.. 5 5 include ../../lib.mk 6 + include ../flags.mk 6 7 7 8 $(TEST_GEN_PROGS): ../harness.c ../utils.c
+11 -10
tools/testing/selftests/powerpc/copyloops/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0 2 - # The loops are all 64-bit code 3 - CFLAGS += -m64 4 - CFLAGS += -I$(CURDIR) 5 - CFLAGS += -D SELFTEST 6 - CFLAGS += -maltivec 7 - CFLAGS += -mcpu=power4 8 - 9 - # Use our CFLAGS for the implicit .S rule & set the asm machine type 10 - ASFLAGS = $(CFLAGS) -Wa,-mpower4 11 - 12 2 TEST_GEN_PROGS := copyuser_64_t0 copyuser_64_t1 copyuser_64_t2 \ 13 3 copyuser_p7_t0 copyuser_p7_t1 \ 14 4 memcpy_64_t0 memcpy_64_t1 memcpy_64_t2 \ ··· 10 20 11 21 top_srcdir = ../../../../.. 12 22 include ../../lib.mk 23 + include ../flags.mk 24 + 25 + # The loops are all 64-bit code 26 + CFLAGS += -m64 27 + CFLAGS += -I$(CURDIR) 28 + CFLAGS += -D SELFTEST 29 + CFLAGS += -maltivec 30 + CFLAGS += -mcpu=power4 31 + 32 + # Use our CFLAGS for the implicit .S rule & set the asm machine type 33 + ASFLAGS = $(CFLAGS) -Wa,-mpower4 13 34 14 35 $(OUTPUT)/copyuser_64_t%: copyuser_64.S $(EXTRA_SOURCES) 15 36 $(CC) $(CPPFLAGS) $(CFLAGS) \
+2
tools/testing/selftests/powerpc/dexcr/.gitignore
··· 1 + dexcr_test 1 2 hashchk_test 3 + chdexcr 2 4 lsdexcr
+6 -3
tools/testing/selftests/powerpc/dexcr/Makefile
··· 1 - TEST_GEN_PROGS := hashchk_test 2 - TEST_GEN_FILES := lsdexcr 1 + TEST_GEN_PROGS := dexcr_test hashchk_test 2 + TEST_GEN_FILES := lsdexcr chdexcr 3 3 4 4 include ../../lib.mk 5 + include ../flags.mk 5 6 6 - $(OUTPUT)/hashchk_test: CFLAGS += -fno-pie $(call cc-option,-mno-rop-protect) 7 + CFLAGS += $(KHDR_INCLUDES) 8 + 9 + $(OUTPUT)/hashchk_test: CFLAGS += -fno-pie -no-pie $(call cc-option,-mno-rop-protect) 7 10 8 11 $(TEST_GEN_PROGS): ../harness.c ../utils.c ./dexcr.c 9 12 $(TEST_GEN_FILES): ../utils.c ./dexcr.c
+112
tools/testing/selftests/powerpc/dexcr/chdexcr.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-or-later 2 + 3 + #include <errno.h> 4 + #include <stddef.h> 5 + #include <stdio.h> 6 + #include <stdlib.h> 7 + #include <string.h> 8 + #include <sys/prctl.h> 9 + 10 + #include "dexcr.h" 11 + #include "utils.h" 12 + 13 + static void die(const char *msg) 14 + { 15 + printf("%s\n", msg); 16 + exit(1); 17 + } 18 + 19 + static void help(void) 20 + { 21 + printf("Invoke a provided program with a custom DEXCR on-exec reset value\n" 22 + "\n" 23 + "usage: chdexcr [CHDEXCR OPTIONS] -- PROGRAM [ARGS...]\n" 24 + "\n" 25 + "Each configurable DEXCR aspect is exposed as an option.\n" 26 + "\n" 27 + "The normal option sets the aspect in the DEXCR. The --no- variant\n" 28 + "clears that aspect. For example, --ibrtpd sets the IBRTPD aspect bit,\n" 29 + "so indirect branch prediction will be disabled in the provided program.\n" 30 + "Conversely, --no-ibrtpd clears the aspect bit, so indirect branch\n" 31 + "prediction may occur.\n" 32 + "\n" 33 + "CHDEXCR OPTIONS:\n"); 34 + 35 + for (int i = 0; i < ARRAY_SIZE(aspects); i++) { 36 + const struct dexcr_aspect *aspect = &aspects[i]; 37 + 38 + if (aspect->prctl == -1) 39 + continue; 40 + 41 + printf(" --%-6s / --no-%-6s : %s\n", aspect->opt, aspect->opt, aspect->desc); 42 + } 43 + } 44 + 45 + static const struct dexcr_aspect *opt_to_aspect(const char *opt) 46 + { 47 + for (int i = 0; i < ARRAY_SIZE(aspects); i++) 48 + if (aspects[i].prctl != -1 && !strcmp(aspects[i].opt, opt)) 49 + return &aspects[i]; 50 + 51 + return NULL; 52 + } 53 + 54 + static int apply_option(const char *option) 55 + { 56 + const struct dexcr_aspect *aspect; 57 + const char *opt = NULL; 58 + const char *set_prefix = "--"; 59 + const char *clear_prefix = "--no-"; 60 + unsigned long ctrl = 0; 61 + int err; 62 + 63 + if (!strcmp(option, "-h") || !strcmp(option, "--help")) { 64 + help(); 65 + exit(0); 66 + } 67 + 68 + /* Strip out --(no-) prefix and determine ctrl value */ 69 + if (!strncmp(option, clear_prefix, strlen(clear_prefix))) { 70 + opt = &option[strlen(clear_prefix)]; 71 + ctrl |= PR_PPC_DEXCR_CTRL_CLEAR_ONEXEC; 72 + } else if (!strncmp(option, set_prefix, strlen(set_prefix))) { 73 + opt = &option[strlen(set_prefix)]; 74 + ctrl |= PR_PPC_DEXCR_CTRL_SET_ONEXEC; 75 + } 76 + 77 + if (!opt || !*opt) 78 + return 1; 79 + 80 + aspect = opt_to_aspect(opt); 81 + if (!aspect) 82 + die("unknown aspect"); 83 + 84 + err = pr_set_dexcr(aspect->prctl, ctrl); 85 + if (err) 86 + die("failed to apply option"); 87 + 88 + return 0; 89 + } 90 + 91 + int main(int argc, char *const argv[]) 92 + { 93 + int i; 94 + 95 + if (!dexcr_exists()) 96 + die("DEXCR not detected on this hardware"); 97 + 98 + for (i = 1; i < argc; i++) 99 + if (apply_option(argv[i])) 100 + break; 101 + 102 + if (i < argc && !strcmp(argv[i], "--")) 103 + i++; 104 + 105 + if (i >= argc) 106 + die("missing command"); 107 + 108 + execvp(argv[i], &argv[i]); 109 + perror("execve"); 110 + 111 + return errno; 112 + }
+40
tools/testing/selftests/powerpc/dexcr/dexcr.c
··· 3 3 #include <errno.h> 4 4 #include <setjmp.h> 5 5 #include <signal.h> 6 + #include <sys/prctl.h> 6 7 #include <sys/types.h> 7 8 #include <sys/wait.h> 8 9 ··· 42 41 out: 43 42 pop_signal_handler(SIGILL, old); 44 43 return exists; 44 + } 45 + 46 + unsigned int pr_which_to_aspect(unsigned long which) 47 + { 48 + switch (which) { 49 + case PR_PPC_DEXCR_SBHE: 50 + return DEXCR_PR_SBHE; 51 + case PR_PPC_DEXCR_IBRTPD: 52 + return DEXCR_PR_IBRTPD; 53 + case PR_PPC_DEXCR_SRAPD: 54 + return DEXCR_PR_SRAPD; 55 + case PR_PPC_DEXCR_NPHIE: 56 + return DEXCR_PR_NPHIE; 57 + default: 58 + FAIL_IF_EXIT_MSG(true, "unknown PR aspect"); 59 + } 60 + } 61 + 62 + int pr_get_dexcr(unsigned long which) 63 + { 64 + return prctl(PR_PPC_GET_DEXCR, which, 0UL, 0UL, 0UL); 65 + } 66 + 67 + int pr_set_dexcr(unsigned long which, unsigned long ctrl) 68 + { 69 + return prctl(PR_PPC_SET_DEXCR, which, ctrl, 0UL, 0UL); 70 + } 71 + 72 + bool pr_dexcr_aspect_supported(unsigned long which) 73 + { 74 + if (pr_get_dexcr(which) == -1) 75 + return errno == ENODEV; 76 + 77 + return true; 78 + } 79 + 80 + bool pr_dexcr_aspect_editable(unsigned long which) 81 + { 82 + return pr_get_dexcr(which) & PR_PPC_DEXCR_CTRL_EDITABLE; 45 83 } 46 84 47 85 /*
+57
tools/testing/selftests/powerpc/dexcr/dexcr.h
··· 9 9 #define _SELFTESTS_POWERPC_DEXCR_DEXCR_H 10 10 11 11 #include <stdbool.h> 12 + #include <sys/prctl.h> 12 13 #include <sys/types.h> 13 14 14 15 #include "reg.h" ··· 27 26 #define PPC_RAW_HASHCHK(b, i, a) \ 28 27 str(.long (0x7C0005E4 | PPC_RAW_HASH_ARGS(b, i, a));) 29 28 29 + struct dexcr_aspect { 30 + const char *name; /* Short display name */ 31 + const char *opt; /* Option name for chdexcr */ 32 + const char *desc; /* Expanded aspect meaning */ 33 + unsigned int index; /* Aspect bit index in DEXCR */ 34 + unsigned long prctl; /* 'which' value for get/set prctl */ 35 + }; 36 + 37 + static const struct dexcr_aspect aspects[] = { 38 + { 39 + .name = "SBHE", 40 + .opt = "sbhe", 41 + .desc = "Speculative branch hint enable", 42 + .index = 0, 43 + .prctl = PR_PPC_DEXCR_SBHE, 44 + }, 45 + { 46 + .name = "IBRTPD", 47 + .opt = "ibrtpd", 48 + .desc = "Indirect branch recurrent target prediction disable", 49 + .index = 3, 50 + .prctl = PR_PPC_DEXCR_IBRTPD, 51 + }, 52 + { 53 + .name = "SRAPD", 54 + .opt = "srapd", 55 + .desc = "Subroutine return address prediction disable", 56 + .index = 4, 57 + .prctl = PR_PPC_DEXCR_SRAPD, 58 + }, 59 + { 60 + .name = "NPHIE", 61 + .opt = "nphie", 62 + .desc = "Non-privileged hash instruction enable", 63 + .index = 5, 64 + .prctl = PR_PPC_DEXCR_NPHIE, 65 + }, 66 + { 67 + .name = "PHIE", 68 + .opt = "phie", 69 + .desc = "Privileged hash instruction enable", 70 + .index = 6, 71 + .prctl = -1, 72 + }, 73 + }; 74 + 30 75 bool dexcr_exists(void); 76 + 77 + bool pr_dexcr_aspect_supported(unsigned long which); 78 + 79 + bool pr_dexcr_aspect_editable(unsigned long which); 80 + 81 + int pr_get_dexcr(unsigned long pr_aspect); 82 + 83 + int pr_set_dexcr(unsigned long pr_aspect, unsigned long ctrl); 84 + 85 + unsigned int pr_which_to_aspect(unsigned long which); 31 86 32 87 bool hashchk_triggers(void); 33 88
+215
tools/testing/selftests/powerpc/dexcr/dexcr_test.c
// SPDX-License-Identifier: GPL-2.0-or-later

#include <errno.h>
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <sys/prctl.h>
#include <unistd.h>

#include "dexcr.h"
#include "utils.h"

/*
 * Helper function for testing the behaviour of a newly exec-ed process
 */
static int dexcr_prctl_onexec_test_child(unsigned long which, const char *status)
{
	unsigned long dexcr = mfspr(SPRN_DEXCR_RO);
	unsigned long aspect = pr_which_to_aspect(which);
	int ctrl = pr_get_dexcr(which);

	if (!strcmp(status, "set")) {
		FAIL_IF_EXIT_MSG(!(ctrl & PR_PPC_DEXCR_CTRL_SET),
				 "setting aspect across exec not applied");

		FAIL_IF_EXIT_MSG(!(ctrl & PR_PPC_DEXCR_CTRL_SET_ONEXEC),
				 "setting aspect across exec not inherited");

		FAIL_IF_EXIT_MSG(!(aspect & dexcr), "setting aspect across exec did not take effect");
	} else if (!strcmp(status, "clear")) {
		FAIL_IF_EXIT_MSG(!(ctrl & PR_PPC_DEXCR_CTRL_CLEAR),
				 "clearing aspect across exec not applied");

		FAIL_IF_EXIT_MSG(!(ctrl & PR_PPC_DEXCR_CTRL_CLEAR_ONEXEC),
				 "clearing aspect across exec not inherited");

		FAIL_IF_EXIT_MSG(aspect & dexcr, "clearing aspect across exec did not take effect");
	} else {
		FAIL_IF_EXIT_MSG(true, "unknown expected status");
	}

	return 0;
}

/*
 * Test that the given prctl value can be manipulated freely
 */
static int dexcr_prctl_aspect_test(unsigned long which)
{
	unsigned long aspect = pr_which_to_aspect(which);
	pid_t pid;
	int ctrl;
	int err;
	int errno_save;

	SKIP_IF_MSG(!dexcr_exists(), "DEXCR not supported");
	SKIP_IF_MSG(!pr_dexcr_aspect_supported(which), "DEXCR aspect not supported");
	SKIP_IF_MSG(!pr_dexcr_aspect_editable(which), "DEXCR aspect not editable with prctl");

	/* We reject invalid combinations of arguments */
	err = pr_set_dexcr(which, PR_PPC_DEXCR_CTRL_SET | PR_PPC_DEXCR_CTRL_CLEAR);
	errno_save = errno;
	FAIL_IF_MSG(err != -1, "simultaneous set and clear should be rejected");
	FAIL_IF_MSG(errno_save != EINVAL, "simultaneous set and clear should be rejected with EINVAL");

	err = pr_set_dexcr(which, PR_PPC_DEXCR_CTRL_SET_ONEXEC | PR_PPC_DEXCR_CTRL_CLEAR_ONEXEC);
	errno_save = errno;
	FAIL_IF_MSG(err != -1, "simultaneous set and clear on exec should be rejected");
	FAIL_IF_MSG(errno_save != EINVAL, "simultaneous set and clear on exec should be rejected with EINVAL");

	/* We set the aspect */
	err = pr_set_dexcr(which, PR_PPC_DEXCR_CTRL_SET);
	FAIL_IF_MSG(err, "PR_PPC_DEXCR_CTRL_SET failed");

	ctrl = pr_get_dexcr(which);
	FAIL_IF_MSG(!(ctrl & PR_PPC_DEXCR_CTRL_SET), "config value not PR_PPC_DEXCR_CTRL_SET");
	FAIL_IF_MSG(ctrl & PR_PPC_DEXCR_CTRL_CLEAR, "config value unexpected clear flag");
	FAIL_IF_MSG(!(aspect & mfspr(SPRN_DEXCR_RO)), "setting aspect did not take effect");

	/* We clear the aspect */
	err = pr_set_dexcr(which, PR_PPC_DEXCR_CTRL_CLEAR);
	FAIL_IF_MSG(err, "PR_PPC_DEXCR_CTRL_CLEAR failed");

	ctrl = pr_get_dexcr(which);
	FAIL_IF_MSG(!(ctrl & PR_PPC_DEXCR_CTRL_CLEAR), "config value not PR_PPC_DEXCR_CTRL_CLEAR");
	FAIL_IF_MSG(ctrl & PR_PPC_DEXCR_CTRL_SET, "config value unexpected set flag");
	FAIL_IF_MSG(aspect & mfspr(SPRN_DEXCR_RO), "clearing aspect did not take effect");

	/* We make it set on exec (doesn't change our current value) */
	err = pr_set_dexcr(which, PR_PPC_DEXCR_CTRL_SET_ONEXEC);
	FAIL_IF_MSG(err, "PR_PPC_DEXCR_CTRL_SET_ONEXEC failed");

	ctrl = pr_get_dexcr(which);
	FAIL_IF_MSG(!(ctrl & PR_PPC_DEXCR_CTRL_CLEAR), "process aspect should still be cleared");
	FAIL_IF_MSG(!(ctrl & PR_PPC_DEXCR_CTRL_SET_ONEXEC), "config value not PR_PPC_DEXCR_CTRL_SET_ONEXEC");
	FAIL_IF_MSG(ctrl & PR_PPC_DEXCR_CTRL_CLEAR_ONEXEC,
		    "config value unexpected clear on exec flag");
	FAIL_IF_MSG(aspect & mfspr(SPRN_DEXCR_RO), "scheduling aspect to set on exec should not change it now");

	/* We make it clear on exec (doesn't change our current value) */
	err = pr_set_dexcr(which, PR_PPC_DEXCR_CTRL_CLEAR_ONEXEC);
	FAIL_IF_MSG(err, "PR_PPC_DEXCR_CTRL_CLEAR_ONEXEC failed");

	ctrl = pr_get_dexcr(which);
	FAIL_IF_MSG(!(ctrl & PR_PPC_DEXCR_CTRL_CLEAR), "process aspect config should still be cleared");
	FAIL_IF_MSG(!(ctrl & PR_PPC_DEXCR_CTRL_CLEAR_ONEXEC), "config value not PR_PPC_DEXCR_CTRL_CLEAR_ONEXEC");
	FAIL_IF_MSG(ctrl & PR_PPC_DEXCR_CTRL_SET_ONEXEC, "config value unexpected set on exec flag");
	FAIL_IF_MSG(aspect & mfspr(SPRN_DEXCR_RO), "process aspect should still be cleared");

	/* We allow setting the current and on-exec value in a single call */
	err = pr_set_dexcr(which, PR_PPC_DEXCR_CTRL_SET | PR_PPC_DEXCR_CTRL_CLEAR_ONEXEC);
	FAIL_IF_MSG(err, "PR_PPC_DEXCR_CTRL_SET | PR_PPC_DEXCR_CTRL_CLEAR_ONEXEC failed");

	ctrl = pr_get_dexcr(which);
	FAIL_IF_MSG(!(ctrl & PR_PPC_DEXCR_CTRL_SET), "config value not PR_PPC_DEXCR_CTRL_SET");
	FAIL_IF_MSG(!(ctrl & PR_PPC_DEXCR_CTRL_CLEAR_ONEXEC), "config value not PR_PPC_DEXCR_CTRL_CLEAR_ONEXEC");
	FAIL_IF_MSG(!(aspect & mfspr(SPRN_DEXCR_RO)), "process aspect should be set");

	err = pr_set_dexcr(which, PR_PPC_DEXCR_CTRL_CLEAR | PR_PPC_DEXCR_CTRL_SET_ONEXEC);
	FAIL_IF_MSG(err, "PR_PPC_DEXCR_CTRL_CLEAR | PR_PPC_DEXCR_CTRL_SET_ONEXEC failed");

	ctrl = pr_get_dexcr(which);
	FAIL_IF_MSG(!(ctrl & PR_PPC_DEXCR_CTRL_CLEAR), "config value not PR_PPC_DEXCR_CTRL_CLEAR");
	FAIL_IF_MSG(!(ctrl & PR_PPC_DEXCR_CTRL_SET_ONEXEC), "config value not PR_PPC_DEXCR_CTRL_SET_ONEXEC");
	FAIL_IF_MSG(aspect & mfspr(SPRN_DEXCR_RO), "process aspect should be clear");

	/* Verify the onexec value is applied across exec */
	pid = fork();
	if (!pid) {
		char which_str[32] = {};
		char *args[] = { "dexcr_prctl_onexec_test_child", which_str, "set", NULL };
		unsigned int ctrl = pr_get_dexcr(which);

		sprintf(which_str, "%lu", which);

		FAIL_IF_EXIT_MSG(!(ctrl & PR_PPC_DEXCR_CTRL_SET_ONEXEC),
				 "setting aspect on exec not copied across fork");

		FAIL_IF_EXIT_MSG(mfspr(SPRN_DEXCR_RO) & aspect,
				 "setting aspect on exec wrongly applied to fork");

		execve("/proc/self/exe", args, NULL);
		_exit(errno);
	}
	await_child_success(pid);

	err = pr_set_dexcr(which, PR_PPC_DEXCR_CTRL_SET | PR_PPC_DEXCR_CTRL_CLEAR_ONEXEC);
	FAIL_IF_MSG(err, "PR_PPC_DEXCR_CTRL_SET | PR_PPC_DEXCR_CTRL_CLEAR_ONEXEC failed");

	pid = fork();
	if (!pid) {
		char which_str[32] = {};
		char *args[] = { "dexcr_prctl_onexec_test_child", which_str, "clear", NULL };
		unsigned int ctrl = pr_get_dexcr(which);

		sprintf(which_str, "%lu", which);

		FAIL_IF_EXIT_MSG(!(ctrl & PR_PPC_DEXCR_CTRL_CLEAR_ONEXEC),
				 "clearing aspect on exec not copied across fork");

		FAIL_IF_EXIT_MSG(!(mfspr(SPRN_DEXCR_RO) & aspect),
				 "clearing aspect on exec wrongly applied to fork");

		execve("/proc/self/exe", args, NULL);
		_exit(errno);
	}
	await_child_success(pid);

	return 0;
}

static int dexcr_prctl_ibrtpd_test(void)
{
	return dexcr_prctl_aspect_test(PR_PPC_DEXCR_IBRTPD);
}

static int dexcr_prctl_srapd_test(void)
{
	return dexcr_prctl_aspect_test(PR_PPC_DEXCR_SRAPD);
}

static int dexcr_prctl_nphie_test(void)
{
	return dexcr_prctl_aspect_test(PR_PPC_DEXCR_NPHIE);
}

int main(int argc, char *argv[])
{
	int err = 0;

	/*
	 * Some tests require checking what happens across exec, so we may be
	 * invoked as the child of a particular test
	 */
	if (argc > 1) {
		if (argc == 3 && !strcmp(argv[0], "dexcr_prctl_onexec_test_child")) {
			unsigned long which;

			err = parse_ulong(argv[1], strlen(argv[1]), &which, 10);
			FAIL_IF_MSG(err, "failed to parse which value for child");

			return dexcr_prctl_onexec_test_child(which, argv[2]);
		}

		FAIL_IF_MSG(true, "unknown test case");
	}

	/*
	 * Otherwise we are the main test invocation and run the full suite
	 */
	err |= test_harness(dexcr_prctl_ibrtpd_test, "dexcr_prctl_ibrtpd");
	err |= test_harness(dexcr_prctl_srapd_test, "dexcr_prctl_srapd");
	err |= test_harness(dexcr_prctl_nphie_test, "dexcr_prctl_nphie");

	return err;
}
+7 -1
tools/testing/selftests/powerpc/dexcr/hashchk_test.c
···
 static int require_nphie(void)
 {
 	SKIP_IF_MSG(!dexcr_exists(), "DEXCR not supported");
+
+	pr_set_dexcr(PR_PPC_DEXCR_NPHIE, PR_PPC_DEXCR_CTRL_SET | PR_PPC_DEXCR_CTRL_SET_ONEXEC);
+
+	if (get_dexcr(EFFECTIVE) & DEXCR_PR_NPHIE)
+		return 0;
+
 	SKIP_IF_MSG(!(get_dexcr(EFFECTIVE) & DEXCR_PR_NPHIE),
-		    "DEXCR[NPHIE] not enabled");
+		    "Failed to enable DEXCR[NPHIE]");
 
 	return 0;
 }
+67 -36
tools/testing/selftests/powerpc/dexcr/lsdexcr.c
···
 // SPDX-License-Identifier: GPL-2.0+
 
-#include <errno.h>
 #include <stddef.h>
 #include <stdio.h>
 #include <string.h>
+#include <sys/prctl.h>
 
 #include "dexcr.h"
 #include "utils.h"
···
 static unsigned int dexcr;
 static unsigned int hdexcr;
 static unsigned int effective;
-
-struct dexcr_aspect {
-	const char *name;
-	const char *desc;
-	unsigned int index;
-};
-
-static const struct dexcr_aspect aspects[] = {
-	{
-		.name = "SBHE",
-		.desc = "Speculative branch hint enable",
-		.index = 0,
-	},
-	{
-		.name = "IBRTPD",
-		.desc = "Indirect branch recurrent target prediction disable",
-		.index = 3,
-	},
-	{
-		.name = "SRAPD",
-		.desc = "Subroutine return address prediction disable",
-		.index = 4,
-	},
-	{
-		.name = "NPHIE",
-		.desc = "Non-privileged hash instruction enable",
-		.index = 5,
-	},
-	{
-		.name = "PHIE",
-		.desc = "Privileged hash instruction enable",
-		.index = 6,
-	},
-};
 
 static void print_list(const char *list[], size_t len)
 {
···
 	const char *enabled_aspects[ARRAY_SIZE(aspects) + 1] = {NULL};
 	size_t j = 0;
 
-	printf("%s: %08x", name, bits);
+	printf("%s: 0x%08x", name, bits);
 
 	if (bits == 0) {
 		printf("\n");
···
 	printf(" \t(%s)\n", aspect->desc);
 }
 
+static void print_aspect_config(const struct dexcr_aspect *aspect)
+{
+	const char *reason = NULL;
+	const char *reason_hyp = NULL;
+	const char *reason_prctl = "no prctl";
+	bool actual = effective & DEXCR_PR_BIT(aspect->index);
+	bool expected = actual; /* Assume it's fine if we don't expect a specific set/clear value */
+
+	if (actual)
+		reason = "set by unknown";
+	else
+		reason = "cleared by unknown";
+
+	if (aspect->prctl != -1) {
+		int ctrl = pr_get_dexcr(aspect->prctl);
+
+		if (ctrl < 0) {
+			reason_prctl = "failed to read prctl";
+		} else {
+			if (ctrl & PR_PPC_DEXCR_CTRL_SET) {
+				reason_prctl = "set by prctl";
+				expected = true;
+			} else if (ctrl & PR_PPC_DEXCR_CTRL_CLEAR) {
+				reason_prctl = "cleared by prctl";
+				expected = false;
+			} else {
+				reason_prctl = "unknown prctl";
+			}
+
+			reason = reason_prctl;
+		}
+	}
+
+	if (hdexcr & DEXCR_PR_BIT(aspect->index)) {
+		reason_hyp = "set by hypervisor";
+		reason = reason_hyp;
+		expected = true;
+	} else {
+		reason_hyp = "not modified by hypervisor";
+	}
+
+	printf("%12s (%d): %-28s (%s, %s)\n",
+	       aspect->name,
+	       aspect->index,
+	       reason,
+	       reason_hyp,
+	       reason_prctl);
+
+	/*
+	 * The checks are not atomic, so this can technically trigger if the
+	 * hypervisor makes a change while we are checking each source. It's
+	 * far more likely to be a bug if we see this though.
+	 */
+	if (actual != expected)
+		printf(" : ! actual %s does not match config\n", aspect->name);
+}
+
 int main(int argc, char *argv[])
 {
 	if (!dexcr_exists()) {
···
 	dexcr = get_dexcr(DEXCR);
 	hdexcr = get_dexcr(HDEXCR);
 	effective = dexcr | hdexcr;
+
+	printf("current status:\n");
 
 	print_dexcr(" DEXCR", dexcr);
 	print_dexcr(" HDEXCR", hdexcr);
···
 		else
 			printf("ignored\n");
 	}
+	printf("\n");
+
+	printf("configuration:\n");
+	for (size_t i = 0; i < ARRAY_SIZE(aspects); i++)
+		print_aspect_config(&aspects[i]);
+	printf("\n");
 
 	return 0;
 }
+1
tools/testing/selftests/powerpc/dscr/Makefile
···
 top_srcdir = ../../../../..
 include ../../lib.mk
+include ../flags.mk
 
 $(OUTPUT)/dscr_default_test: LDLIBS += -lpthread
 $(OUTPUT)/dscr_explicit_test: LDLIBS += -lpthread
+1
tools/testing/selftests/powerpc/eeh/Makefile
···
 top_srcdir = ../../../../..
 include ../../lib.mk
+include ../flags.mk
+12
tools/testing/selftests/powerpc/flags.mk
#This checks for any ENV variables and add those.

ifeq ($(GIT_VERSION),)
GIT_VERSION := $(shell git describe --always --long --dirty || echo "unknown")
export GIT_VERSION
endif

ifeq ($(CFLAGS),)
CFLAGS := -std=gnu99 -O2 -Wall -Werror -DGIT_VERSION='"$(GIT_VERSION)"' -I$(selfdir)/powerpc/include $(CFLAGS)
export CFLAGS
endif
+1
tools/testing/selftests/powerpc/math/Makefile
···
 top_srcdir = ../../../../..
 include ../../lib.mk
+include ../flags.mk
 
 $(TEST_GEN_PROGS): ../harness.c
 $(TEST_GEN_PROGS): CFLAGS += -O2 -g -pthread -m64 -maltivec
+1
tools/testing/selftests/powerpc/mce/Makefile
···
 TEST_GEN_PROGS := inject-ra-err
 
 include ../../lib.mk
+include ../flags.mk
 
 $(TEST_GEN_PROGS): ../harness.c
+1
tools/testing/selftests/powerpc/mm/Makefile
···
 top_srcdir = ../../../../..
 include ../../lib.mk
+include ../flags.mk
 
 $(TEST_GEN_PROGS): ../harness.c ../utils.c
+3 -2
tools/testing/selftests/powerpc/nx-gzip/Makefile
···
-CFLAGS = -O3 -m64 -I./include -I../include
-
 TEST_GEN_FILES := gzfht_test gunz_test
 TEST_PROGS := nx-gzip-test.sh
 
 include ../../lib.mk
+include ../flags.mk
+
+CFLAGS = -O3 -m64 -I./include -I../include
 
 $(TEST_GEN_FILES): gzip_vas.c ../utils.c
+1
tools/testing/selftests/powerpc/papr_attributes/Makefile
···
 top_srcdir = ../../../../..
 include ../../lib.mk
+include ../flags.mk
 
 $(TEST_GEN_PROGS): ../harness.c ../utils.c
+1
tools/testing/selftests/powerpc/papr_sysparm/Makefile
···
 top_srcdir = ../../../../..
 include ../../lib.mk
+include ../flags.mk
 
 $(TEST_GEN_PROGS): ../harness.c ../utils.c
+1
tools/testing/selftests/powerpc/papr_vpd/Makefile
···
 top_srcdir = ../../../../..
 include ../../lib.mk
+include ../flags.mk
 
 $(TEST_GEN_PROGS): ../harness.c ../utils.c
+23 -21
tools/testing/selftests/powerpc/pmu/Makefile
···
 top_srcdir = ../../../../..
 include ../../lib.mk
+include ../flags.mk
 
-all: $(TEST_GEN_PROGS) ebb sampling_tests event_code_tests
+SUB_DIRS := ebb sampling_tests event_code_tests
+
+all: $(TEST_GEN_PROGS) $(SUB_DIRS)
 
 $(TEST_GEN_PROGS): $(EXTRA_SOURCES)
···
 $(OUTPUT)/per_event_excludes: ../utils.c
 
+$(SUB_DIRS):
+	BUILD_TARGET=$(OUTPUT)/$@; mkdir -p $$BUILD_TARGET; $(MAKE) OUTPUT=$$BUILD_TARGET -k -C $@ all
+
 DEFAULT_RUN_TESTS := $(RUN_TESTS)
 override define RUN_TESTS
 	$(DEFAULT_RUN_TESTS)
-	+TARGET=ebb; BUILD_TARGET=$$OUTPUT/$$TARGET; $(MAKE) OUTPUT=$$BUILD_TARGET -C $$TARGET run_tests
-	+TARGET=sampling_tests; BUILD_TARGET=$$OUTPUT/$$TARGET; $(MAKE) OUTPUT=$$BUILD_TARGET -C $$TARGET run_tests
-	+TARGET=event_code_tests; BUILD_TARGET=$$OUTPUT/$$TARGET; $(MAKE) OUTPUT=$$BUILD_TARGET -C $$TARGET run_tests
+	+@for TARGET in $(SUB_DIRS); do \
+		BUILD_TARGET=$(OUTPUT)/$$TARGET; \
+		$(MAKE) OUTPUT=$$BUILD_TARGET -C $$TARGET run_tests; \
+	done;
 endef
 
 emit_tests:
···
 		BASENAME_TEST=`basename $$TEST`; \
 		echo "$(COLLECTION):$$BASENAME_TEST"; \
 	done
-	+TARGET=ebb; BUILD_TARGET=$$OUTPUT/$$TARGET; $(MAKE) OUTPUT=$$BUILD_TARGET -s -C $$TARGET emit_tests
-	+TARGET=sampling_tests; BUILD_TARGET=$$OUTPUT/$$TARGET; $(MAKE) OUTPUT=$$BUILD_TARGET -s -C $$TARGET emit_tests
-	+TARGET=event_code_tests; BUILD_TARGET=$$OUTPUT/$$TARGET; $(MAKE) OUTPUT=$$BUILD_TARGET -s -C $$TARGET emit_tests
+	+@for TARGET in $(SUB_DIRS); do \
+		BUILD_TARGET=$(OUTPUT)/$$TARGET; \
+		$(MAKE) OUTPUT=$$BUILD_TARGET COLLECTION=$(COLLECTION)/$$TARGET -s -C $$TARGET emit_tests; \
+	done;
 
 DEFAULT_INSTALL_RULE := $(INSTALL_RULE)
 override define INSTALL_RULE
 	$(DEFAULT_INSTALL_RULE)
-	+TARGET=ebb; BUILD_TARGET=$$OUTPUT/$$TARGET; $(MAKE) OUTPUT=$$BUILD_TARGET -C $$TARGET install
-	+TARGET=sampling_tests; BUILD_TARGET=$$OUTPUT/$$TARGET; $(MAKE) OUTPUT=$$BUILD_TARGET -C $$TARGET install
-	+TARGET=event_code_tests; BUILD_TARGET=$$OUTPUT/$$TARGET; $(MAKE) OUTPUT=$$BUILD_TARGET -C $$TARGET install
+	+@for TARGET in $(SUB_DIRS); do \
+		BUILD_TARGET=$(OUTPUT)/$$TARGET; \
+		$(MAKE) OUTPUT=$$BUILD_TARGET INSTALL_PATH=$$INSTALL_PATH/$$TARGET -C $$TARGET install; \
+	done;
 endef
 
 DEFAULT_CLEAN := $(CLEAN)
 override define CLEAN
 	$(DEFAULT_CLEAN)
 	$(RM) $(TEST_GEN_PROGS) $(OUTPUT)/loop.o
-	+TARGET=ebb; BUILD_TARGET=$$OUTPUT/$$TARGET; $(MAKE) OUTPUT=$$BUILD_TARGET -C $$TARGET clean
-	+TARGET=sampling_tests; BUILD_TARGET=$$OUTPUT/$$TARGET; $(MAKE) OUTPUT=$$BUILD_TARGET -C $$TARGET clean
-	+TARGET=event_code_tests; BUILD_TARGET=$$OUTPUT/$$TARGET; $(MAKE) OUTPUT=$$BUILD_TARGET -C $$TARGET clean
+	+@for TARGET in $(SUB_DIRS); do \
+		BUILD_TARGET=$(OUTPUT)/$$TARGET; \
+		$(MAKE) OUTPUT=$$BUILD_TARGET -C $$TARGET clean; \
+	done;
 endef
 
-ebb:
-	TARGET=$@; BUILD_TARGET=$$OUTPUT/$$TARGET; mkdir -p $$BUILD_TARGET; $(MAKE) OUTPUT=$$BUILD_TARGET -k -C $$TARGET all
-
-sampling_tests:
-	TARGET=$@; BUILD_TARGET=$$OUTPUT/$$TARGET; mkdir -p $$BUILD_TARGET; $(MAKE) OUTPUT=$$BUILD_TARGET -k -C $$TARGET all
-
-event_code_tests:
-	TARGET=$@; BUILD_TARGET=$$OUTPUT/$$TARGET; mkdir -p $$BUILD_TARGET; $(MAKE) OUTPUT=$$BUILD_TARGET -k -C $$TARGET all
 
 .PHONY: all run_tests ebb sampling_tests event_code_tests emit_tests
+11 -10
tools/testing/selftests/powerpc/pmu/ebb/Makefile
···
 noarg:
 	$(MAKE) -C ../../
 
-# The EBB handler is 64-bit code and everything links against it
-CFLAGS += -m64
-
-TMPOUT = $(OUTPUT)/TMPDIR/
-# Toolchains may build PIE by default which breaks the assembly
-no-pie-option := $(call try-run, echo 'int main() { return 0; }' | \
-        $(CC) -Werror $(KBUILD_CPPFLAGS) $(CC_OPTION_CFLAGS) -no-pie -x c - -o "$$TMP", -no-pie)
-
-LDFLAGS += $(no-pie-option)
-
 TEST_GEN_PROGS := reg_access_test event_attributes_test cycles_test \
 	cycles_with_freeze_test pmc56_overflow_test \
 	ebb_vs_cpu_event_test cpu_event_vs_ebb_test \
···
 top_srcdir = ../../../../../..
 include ../../../lib.mk
+include ../../flags.mk
+
+# The EBB handler is 64-bit code and everything links against it
+CFLAGS += -m64
+
+TMPOUT = $(OUTPUT)/TMPDIR/
+# Toolchains may build PIE by default which breaks the assembly
+no-pie-option := $(call try-run, echo 'int main() { return 0; }' | \
+        $(CC) -Werror $(KBUILD_CPPFLAGS) $(CC_OPTION_CFLAGS) -no-pie -x c - -o "$$TMP", -no-pie)
+
+LDFLAGS += $(no-pie-option)
 
 $(TEST_GEN_PROGS): ../../harness.c ../../utils.c ../event.c ../lib.c \
 	ebb.c ebb_handler.S trace.c busy_loop.S
+3 -2
tools/testing/selftests/powerpc/pmu/event_code_tests/Makefile
···
 # SPDX-License-Identifier: GPL-2.0
-CFLAGS += -m64
-
 TEST_GEN_PROGS := group_constraint_pmc56_test group_pmc56_exclude_constraints_test group_constraint_pmc_count_test \
 	group_constraint_repeat_test group_constraint_radix_scope_qual_test reserved_bits_mmcra_sample_elig_mode_test \
 	group_constraint_mmcra_sample_test invalid_event_code_test reserved_bits_mmcra_thresh_ctl_test \
···
 top_srcdir = ../../../../../..
 include ../../../lib.mk
+include ../../flags.mk
+
+CFLAGS += -m64
 
 $(TEST_GEN_PROGS): ../../harness.c ../../utils.c ../event.c ../lib.c ../sampling_tests/misc.h ../sampling_tests/misc.c
+3 -2
tools/testing/selftests/powerpc/pmu/sampling_tests/Makefile
···
 # SPDX-License-Identifier: GPL-2.0
-CFLAGS += -m64
-
 TEST_GEN_PROGS := mmcr0_exceptionbits_test mmcr0_cc56run_test mmcr0_pmccext_test \
 	mmcr0_pmcjce_test mmcr0_fc56_pmc1ce_test mmcr0_fc56_pmc56_test \
 	mmcr1_comb_test mmcr2_l2l3_test mmcr2_fcs_fch_test \
···
 top_srcdir = ../../../../../..
 include ../../../lib.mk
+include ../../flags.mk
+
+CFLAGS += -m64
 
 $(TEST_GEN_PROGS): ../../harness.c ../../utils.c ../event.c ../lib.c misc.c misc.h ../loop.S ../branch_loops.S
+3 -2
tools/testing/selftests/powerpc/primitives/Makefile
···
 # SPDX-License-Identifier: GPL-2.0-only
-CFLAGS += -I$(CURDIR)
-
 TEST_GEN_PROGS := load_unaligned_zeropad
 
 top_srcdir = ../../../../..
 include ../../lib.mk
+include ../flags.mk
+
+CFLAGS += -I$(CURDIR)
 
 $(TEST_GEN_PROGS): ../harness.c
+1
tools/testing/selftests/powerpc/ptrace/Makefile
···
 top_srcdir = ../../../../..
 include ../../lib.mk
+include ../flags.mk
 
 TM_TESTS := $(patsubst %,$(OUTPUT)/%,$(TM_TESTS))
 TESTS_64 := $(patsubst %,$(OUTPUT)/%,$(TESTS_64))
+3 -2
tools/testing/selftests/powerpc/security/Makefile
···
 top_srcdir = ../../../../..
 
-CFLAGS += $(KHDR_INCLUDES)
-
 include ../../lib.mk
+include ../flags.mk
+
+CFLAGS += $(KHDR_INCLUDES)
 
 $(TEST_GEN_PROGS): ../harness.c ../utils.c
+3 -1
tools/testing/selftests/powerpc/signal/Makefile
···
 TEST_GEN_PROGS += sigreturn_kernel
 TEST_GEN_PROGS += sigreturn_unaligned
 
-CFLAGS += -maltivec
 $(OUTPUT)/signal_tm: CFLAGS += -mhtm
 $(OUTPUT)/sigfuz: CFLAGS += -pthread -m64
···
 top_srcdir = ../../../../..
 include ../../lib.mk
+include ../flags.mk
+
+CFLAGS += -maltivec
 
 $(TEST_GEN_PROGS): ../harness.c ../utils.c signal.S
+6 -5
tools/testing/selftests/powerpc/stringloops/Makefile
···
 # SPDX-License-Identifier: GPL-2.0
-# The loops are all 64-bit code
-CFLAGS += -I$(CURDIR)
-
 EXTRA_SOURCES := ../harness.c
 
 build_32bit = $(shell if ($(CC) $(CFLAGS) -m32 -o /dev/null memcmp.c >/dev/null 2>&1) then echo "1"; fi)
···
 TEST_GEN_PROGS += strlen_32
 endif
 
-ASFLAGS = $(CFLAGS)
-
 top_srcdir = ../../../../..
 include ../../lib.mk
+include ../flags.mk
+
+# The loops are all 64-bit code
+CFLAGS += -I$(CURDIR)
+
+ASFLAGS = $(CFLAGS)
 
 $(TEST_GEN_PROGS): $(EXTRA_SOURCES)
+3 -2
tools/testing/selftests/powerpc/switch_endian/Makefile
···
 # SPDX-License-Identifier: GPL-2.0
 TEST_GEN_PROGS := switch_endian_test
 
-ASFLAGS += -O2 -Wall -g -nostdlib -m64
-
 EXTRA_CLEAN = $(OUTPUT)/*.o $(OUTPUT)/check-reversed.S
 
 top_srcdir = ../../../../..
 include ../../lib.mk
+include ../flags.mk
+
+ASFLAGS += -O2 -Wall -g -nostdlib -m64
 
 $(OUTPUT)/switch_endian_test: ASFLAGS += -I $(OUTPUT)
 $(OUTPUT)/switch_endian_test: $(OUTPUT)/check-reversed.S
+3 -2
tools/testing/selftests/powerpc/syscalls/Makefile
···
 # SPDX-License-Identifier: GPL-2.0-only
 TEST_GEN_PROGS := ipc_unmuxed rtas_filter
 
-CFLAGS += $(KHDR_INCLUDES)
-
 top_srcdir = ../../../../..
 include ../../lib.mk
+include ../flags.mk
+
+CFLAGS += $(KHDR_INCLUDES)
 
 $(TEST_GEN_PROGS): ../harness.c ../utils.c
+1
tools/testing/selftests/powerpc/tm/Makefile
···
 top_srcdir = ../../../../..
 include ../../lib.mk
+include ../flags.mk
 
 $(TEST_GEN_PROGS): ../harness.c ../utils.c
+3 -2
tools/testing/selftests/powerpc/vphn/Makefile
···
 # SPDX-License-Identifier: GPL-2.0-only
 TEST_GEN_PROGS := test-vphn
 
-CFLAGS += -m64 -I$(CURDIR)
-
 top_srcdir = ../../../../..
 include ../../lib.mk
+include ../flags.mk
+
+CFLAGS += -m64 -I$(CURDIR)
 
 $(TEST_GEN_PROGS): ../harness.c