Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'powerpc-5.6-1' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux

Pull powerpc updates from Michael Ellerman:
"A pretty small batch for us, and apologies for it being a bit late, I
wanted to sneak Christophe's user_access_begin() series in.

Summary:

- Implement user_access_begin() and friends for our platforms that
support controlling kernel access to userspace.

- Enable CONFIG_VMAP_STACK on 32-bit Book3S and 8xx.

- Some tweaks to our pseries IOMMU code to allow SVMs ("secure"
virtual machines) to use the IOMMU.

- Add support for CLOCK_{REALTIME/MONOTONIC}_COARSE to the 32-bit
VDSO, and some other improvements.

 - A series to use the PCI hotplug framework to control OpenCAPI
   cards, so that they can be reset and re-read after flashing a new
   FPGA image.

As well as other minor fixes and improvements as usual.

Thanks to: Alastair D'Silva, Alexandre Ghiti, Alexey Kardashevskiy,
Andrew Donnellan, Aneesh Kumar K.V, Anju T Sudhakar, Bai Yingjie, Chen
Zhou, Christophe Leroy, Frederic Barrat, Greg Kurz, Jason A.
Donenfeld, Joel Stanley, Jordan Niethe, Julia Lawall, Krzysztof
Kozlowski, Laurent Dufour, Laurentiu Tudor, Linus Walleij, Michael
Bringmann, Nathan Chancellor, Nicholas Piggin, Nick Desaulniers,
Oliver O'Halloran, Peter Ujfalusi, Pingfan Liu, Ram Pai, Randy Dunlap,
Russell Currey, Sam Bobroff, Sebastian Andrzej Siewior, Shawn
Anastasio, Stephen Rothwell, Steve Best, Sukadev Bhattiprolu, Thiago
Jung Bauermann, Tyrel Datwyler, Vaibhav Jain"
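The user_access_begin() series mentioned in the summary follows a fixed calling pattern: open a user-access window once, do a run of "unsafe" accesses, then close the window. A minimal userspace sketch of that pattern, with the kernel primitives replaced by hypothetical stubs (the real ones live in the uaccess headers and manipulate KUAP state), looks like this:

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical userspace stand-ins for the kernel primitives: on powerpc,
 * user_access_begin() opens a KUAP window covering [ptr, ptr+len) and
 * user_access_end() closes it again. */
static bool user_access_begin(const void *ptr, size_t len)
{
	return ptr != NULL && len > 0;	/* kernel: access_ok() + unlock */
}

static void user_access_end(void)
{
}

/* Stub of unsafe_put_user(): jumps to err_label on a bad pointer, the way
 * the real macro bails out on a fault. */
#define unsafe_put_user(val, ptr, err_label)	\
	do {					\
		if (!(ptr))			\
			goto err_label;		\
		*(ptr) = (val);			\
	} while (0)

/* The canonical pattern: one begin, a run of unsafe accesses, one end. */
static int put_two_words(unsigned int *uptr)
{
	if (!user_access_begin(uptr, 2 * sizeof(*uptr)))
		return -1;			/* -EFAULT in the kernel */
	unsafe_put_user(1u, &uptr[0], efault);
	unsafe_put_user(2u, &uptr[1], efault);
	user_access_end();
	return 0;

efault:
	user_access_end();
	return -1;
}
```

The point of the pattern is that the (relatively expensive) open/close of the access window is paid once per batch rather than once per access; every exit path, including the fault path, must still reach user_access_end().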

* tag 'powerpc-5.6-1' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux: (131 commits)
powerpc: configs: Cleanup old Kconfig options
powerpc/configs/skiroot: Enable some more hardening options
powerpc/configs/skiroot: Disable xmon default & enable reboot on panic
powerpc/configs/skiroot: Enable security features
powerpc/configs/skiroot: Update for symbol movement only
powerpc/configs/skiroot: Drop default n CONFIG_CRYPTO_ECHAINIV
powerpc/configs/skiroot: Drop HID_LOGITECH
powerpc/configs: Drop NET_VENDOR_HP which moved to staging
powerpc/configs: NET_CADENCE became NET_VENDOR_CADENCE
powerpc/configs: Drop CONFIG_QLGE which moved to staging
powerpc: Do not consider weak unresolved symbol relocations as bad
powerpc/32s: Fix kasan_early_hash_table() for CONFIG_VMAP_STACK
powerpc: indent to improve Kconfig readability
powerpc: Provide initial documentation for PAPR hcalls
powerpc: Implement user_access_save() and user_access_restore()
powerpc: Implement user_access_begin and friends
powerpc/32s: Prepare prevent_user_access() for user_access_end()
powerpc/32s: Drop NULL addr verification
powerpc/kuap: Fix set direction in allow/prevent_user_access()
powerpc/32s: Fix bad_kuap_fault()
...

+2315 -1126
+4 -4
Documentation/devicetree/bindings/spi/fsl-spi.txt
···
  - clock-frequency : input clock frequency to non FSL_SOC cores

 Optional properties:
-- gpios : specifies the gpio pins to be used for chipselects.
+- cs-gpios : specifies the gpio pins to be used for chipselects.
   The gpios will be referred to as reg = <index> in the SPI child nodes.
   If unspecified, a single SPI device without a chip select can be used.
 - fsl,spisel_boot : for the MPC8306 and MPC8309, specifies that the
   SPISEL_BOOT signal is used as chip select for a slave device. Use
   reg = <number of gpios> in the corresponding child node, i.e. 0 if
-  the gpios property is not present.
+  the cs-gpios property is not present.

 Example:
 	spi@4c0 {
···
 		interrupts = <82 0>;
 		interrupt-parent = <700>;
 		mode = "cpu";
-		gpios = <&gpio 18 1	// device reg=<0>
-			 &gpio 19 1>;	// device reg=<1>
+		cs-gpios = <&gpio 18 1	// device reg=<0>
+			    &gpio 19 1>;	// device reg=<1>
 	};
+199
Documentation/powerpc/imc.rst
.. SPDX-License-Identifier: GPL-2.0
.. _imc:

===================================
IMC (In-Memory Collection Counters)
===================================

Anju T Sudhakar, 10 May 2019

.. contents::
    :depth: 3


Basic overview
==============

IMC (In-Memory Collection counters) is a hardware monitoring facility that
collects large numbers of hardware performance events at the Nest level (these
are on-chip but off-core), Core level and Thread level.

The Nest PMU counters are handled by Nest IMC microcode which runs in the OCC
(On-Chip Controller) complex. The microcode collects the counter data and
moves the nest IMC counter data to memory.

The Core and Thread IMC PMU counters are handled in the core. Core-level PMU
counters give us the IMC counters' data per core, and thread-level PMU
counters give us the IMC counters' data per CPU thread.

OPAL obtains the IMC PMU and supported-event information from the IMC Catalog
and passes it on to the kernel via the device tree. The event information
contains:

 - Event name
 - Event Offset
 - Event description

and possibly also:

 - Event scale
 - Event unit

Some PMUs may have common scale and unit values for all their supported
events. In those cases, the scale and unit properties for those events are
inherited from the PMU.

The event offset in memory is where the counter data gets accumulated.

The IMC catalog is available at:
	https://github.com/open-power/ima-catalog

The kernel discovers the IMC counters information in the device tree at the
`imc-counters` device node, which has a compatible field
`ibm,opal-in-memory-counters`. From the device tree, the kernel parses the
PMUs and their event information and registers the PMUs and their attributes
in the kernel.

IMC example usage
=================

.. code-block:: sh

  # perf list
  [...]
  nest_mcs01/PM_MCS01_64B_RD_DISP_PORT01/            [Kernel PMU event]
  nest_mcs01/PM_MCS01_64B_RD_DISP_PORT23/            [Kernel PMU event]
  [...]
  core_imc/CPM_0THRD_NON_IDLE_PCYC/                  [Kernel PMU event]
  core_imc/CPM_1THRD_NON_IDLE_INST/                  [Kernel PMU event]
  [...]
  thread_imc/CPM_0THRD_NON_IDLE_PCYC/                [Kernel PMU event]
  thread_imc/CPM_1THRD_NON_IDLE_INST/                [Kernel PMU event]

To see per-chip data for nest_mcs0/PM_MCS_DOWN_128B_DATA_XFER_MC0/:

.. code-block:: sh

  # ./perf stat -e "nest_mcs01/PM_MCS01_64B_WR_DISP_PORT01/" -a --per-socket

To see non-idle instructions for core 0:

.. code-block:: sh

  # ./perf stat -e "core_imc/CPM_NON_IDLE_INST/" -C 0 -I 1000

To see non-idle instructions for a "make":

.. code-block:: sh

  # ./perf stat -e "thread_imc/CPM_NON_IDLE_PCYC/" make


IMC Trace-mode
==============

POWER9 supports two modes for IMC: Accumulation mode and Trace mode. In
Accumulation mode, event counts are accumulated in system memory, and the
hypervisor then reads the posted counts periodically or when requested. In
IMC Trace mode, the 64-bit trace SCOM value is initialized with the event
information. The CPMCxSEL and CPMC_LOAD fields in the trace SCOM specify the
event to be monitored and the sampling duration. On each overflow of the
selected CPMC, hardware snapshots the program counter along with the event
counts and writes into the memory pointed to by LDBAR.

LDBAR is a 64-bit special-purpose per-thread register; it has bits to
indicate whether hardware is configured for accumulation or trace mode.

LDBAR Register Layout
---------------------

	+-------+----------------------+
	| 0     | Enable/Disable       |
	+-------+----------------------+
	| 1     | 0: Accumulation Mode |
	|       +----------------------+
	|       | 1: Trace Mode        |
	+-------+----------------------+
	| 2:3   | Reserved             |
	+-------+----------------------+
	| 4-6   | PB scope             |
	+-------+----------------------+
	| 7     | Reserved             |
	+-------+----------------------+
	| 8:50  | Counter Address      |
	+-------+----------------------+
	| 51:63 | Reserved             |
	+-------+----------------------+

TRACE_IMC_SCOM bit representation
---------------------------------

	+-------+------------+
	| 0:1   | SAMPSEL    |
	+-------+------------+
	| 2:33  | CPMC_LOAD  |
	+-------+------------+
	| 34:40 | CPMC1SEL   |
	+-------+------------+
	| 41:47 | CPMC2SEL   |
	+-------+------------+
	| 48:50 | BUFFERSIZE |
	+-------+------------+
	| 51:63 | RESERVED   |
	+-------+------------+

CPMC_LOAD contains the sampling duration. SAMPSEL and CPMCxSEL determine the
event to count. BUFFERSIZE indicates the memory range. On each overflow,
hardware snapshots the program counter along with the event counts, updates
the memory and reloads the CPMC_LOAD value for the next sampling duration.
IMC hardware does not support exceptions, so it quietly wraps around if the
memory buffer reaches the end.

*Currently the event monitored in trace mode is fixed as cycles.*

Trace IMC example usage
=======================

.. code-block:: sh

  # perf list
  [....]
  trace_imc/trace_cycles/                            [Kernel PMU event]

To record an application/process with the trace-imc event:

.. code-block:: sh

  # perf record -e trace_imc/trace_cycles/ yes > /dev/null
  [ perf record: Woken up 1 times to write data ]
  [ perf record: Captured and wrote 0.012 MB perf.data (21 samples) ]

The generated `perf.data` can be read using perf report.

Benefits of using IMC trace-mode
================================

PMI (Performance Monitoring Interrupt) handling is avoided, since IMC trace
mode snapshots the program counter and updates memory directly. This also
gives the operating system a way to do instruction sampling in real time
without PMI processing overhead.

Performance data using `perf top` with and without the trace-imc event:

.. code-block:: sh

  # grep PMI /proc/interrupts
  PMI:          0          0          0          0   Performance monitoring interrupts
  # ./perf top
  ...
  # grep PMI /proc/interrupts
  PMI:      39735       8710      17338      17801   Performance monitoring interrupts
  # ./perf top -e trace_imc/trace_cycles/
  ...
  # grep PMI /proc/interrupts
  PMI:      39735       8710      17338      17801   Performance monitoring interrupts

That is, the PMI interrupt counts do not increment when using the
`trace_imc` event.
+2
Documentation/powerpc/index.rst
···
 	elfnote
 	firmware-assisted-dump
 	hvcs
+	imc
 	isa-versions
 	kaslr-booke32
 	mpc52xx
+	papr_hcalls
 	pci_iov_resource_on_powernv
 	pmu-ebb
 	ptrace
+250
Documentation/powerpc/papr_hcalls.rst
.. SPDX-License-Identifier: GPL-2.0

===========================
Hypercall Op-codes (hcalls)
===========================

Overview
========

Virtualization on 64-bit Power Book3S platforms is based on the PAPR
specification [1]_, which describes the run-time environment for a guest
operating system and how it should interact with the hypervisor for
privileged operations. Currently there are two PAPR-compliant hypervisors:

- **IBM PowerVM (PHYP)**: IBM's proprietary hypervisor that supports AIX,
  IBM-i and Linux as supported guests (termed Logical Partitions or LPARs).
  It supports the full PAPR specification.

- **Qemu/KVM**: Supports PPC64 Linux guests running on a PPC64 Linux host,
  though it only implements a subset of the PAPR specification called
  LoPAPR [2]_.

On the PPC64 arch, a guest kernel running on top of a PAPR hypervisor is
called a *pSeries guest*. A pSeries guest runs in supervisor mode (HV=0) and
must issue hypercalls to the hypervisor whenever it needs to perform an
action that is hypervisor privileged [3]_ or for other services managed by
the hypervisor.

Hence a hypercall (hcall) is essentially a request by the pSeries guest
asking the hypervisor to perform a privileged operation on its behalf. The
guest issues the hcall with the necessary input operands. After performing
the privileged operation the hypervisor returns a status code and output
operands back to the guest.

HCALL ABI
=========
The ABI specification for an hcall between a pSeries guest and a PAPR
hypervisor is covered in section 14.5.3 of ref [2]_. The switch to the
hypervisor context is done via the **HVCS** instruction, which expects the
opcode for the hcall in *r3* and any in-arguments for the hcall in registers
*r4-r12*. If values have to be passed through a memory buffer, the data
stored in that buffer should be in big-endian byte order.

Once control returns to the guest after the hypervisor has serviced the
'HVCS' instruction, the return value of the hcall is available in *r3* and
any out values are returned in registers *r4-r12*. As with in-arguments, any
out values stored in a memory buffer will be in big-endian byte order.

Powerpc arch code provides convenient wrappers named **plpar_hcall_xxx**,
defined in an arch-specific header [4]_, to issue hcalls from the Linux
kernel running as a pSeries guest.

Register Conventions
====================

Any hcall should follow the same register convention as described in section
2.2.1.1 of the "64-Bit ELF V2 ABI Specification: Power Architecture" [5]_.
The table below summarizes these conventions:

+----------+----------+-------------------------------------------+
| Register | Volatile | Purpose                                   |
| Range    | (Y/N)    |                                           |
+==========+==========+===========================================+
| r0       | Y        | Optional-usage                            |
+----------+----------+-------------------------------------------+
| r1       | N        | Stack Pointer                             |
+----------+----------+-------------------------------------------+
| r2       | N        | TOC                                       |
+----------+----------+-------------------------------------------+
| r3       | Y        | hcall opcode/return value                 |
+----------+----------+-------------------------------------------+
| r4-r10   | Y        | in and out values                         |
+----------+----------+-------------------------------------------+
| r11      | Y        | Optional-usage/Environmental pointer      |
+----------+----------+-------------------------------------------+
| r12      | Y        | Optional-usage/Function entry address at  |
|          |          | global entry point                        |
+----------+----------+-------------------------------------------+
| r13      | N        | Thread-Pointer                            |
+----------+----------+-------------------------------------------+
| r14-r31  | N        | Local Variables                           |
+----------+----------+-------------------------------------------+
| LR       | Y        | Link Register                             |
+----------+----------+-------------------------------------------+
| CTR      | Y        | Loop Counter                              |
+----------+----------+-------------------------------------------+
| XER      | Y        | Fixed-point exception register            |
+----------+----------+-------------------------------------------+
| CR0-1    | Y        | Condition register fields                 |
+----------+----------+-------------------------------------------+
| CR2-4    | N        | Condition register fields                 |
+----------+----------+-------------------------------------------+
| CR5-7    | Y        | Condition register fields                 |
+----------+----------+-------------------------------------------+
| Others   | N        |                                           |
+----------+----------+-------------------------------------------+

DRC & DRC Indexes
=================
::

     DR1                                  Guest
     +--+        +------------+         +---------+
     |  | <----> |            |         |  User   |
     +--+  DRC1  |            |   DRC   |  Space  |
                 |    PAPR    |  Index  +---------+
     DR2         | Hypervisor |         |         |
     +--+        |            | <-----> |  Kernel |
     |  | <----> |            |  Hcall  |         |
     +--+  DRC2  +------------+         +---------+

The PAPR hypervisor terms shared hardware resources like PCI devices,
NVDIMMs etc. available for use by LPARs as Dynamic Resources (DR). When a DR
is allocated to an LPAR, PHYP creates a data structure called a Dynamic
Resource Connector (DRC) to manage LPAR access. An LPAR refers to a DRC via
an opaque 32-bit number called the DRC-Index. The DRC-Index value is
provided to the LPAR via the device tree, where it is present as an
attribute in the device tree node associated with the DR.

HCALL Return-values
===================

After servicing the hcall, the hypervisor sets the return value in *r3*
indicating success or failure of the hcall. In case of a failure an error
code indicates the cause of the error. These codes are defined and
documented in an arch-specific header [4]_.

In some cases an hcall can potentially take a long time and needs to be
issued multiple times in order to be completely serviced. Such hcalls will
usually accept an opaque value *continue-token* within their argument list,
and a return value of *H_CONTINUE* indicates that the hypervisor hasn't
finished servicing the hcall yet.

To make such hcalls the guest needs to set *continue-token == 0* for the
initial call and use the hypervisor-returned value of *continue-token* for
each subsequent hcall, until the hypervisor returns a non-*H_CONTINUE*
return value.

HCALL Op-codes
==============

Below is a partial list of HCALLs that are supported by PHYP. For the
corresponding opcode values please look into the arch-specific header [4]_:

**H_SCM_READ_METADATA**

| Input: *drcIndex, offset, buffer-address, numBytesToRead*
| Out: *numBytesRead*
| Return Value: *H_Success, H_Parameter, H_P2, H_P3, H_Hardware*

Given a DRC Index of an NVDIMM, read N bytes from the metadata area
associated with it, at the specified offset, and copy them to the provided
buffer. The metadata area stores configuration information such as label
information, bad blocks etc. The metadata area is located out-of-band of the
NVDIMM storage area, hence separate access semantics are provided.

**H_SCM_WRITE_METADATA**

| Input: *drcIndex, offset, data, numBytesToWrite*
| Out: *None*
| Return Value: *H_Success, H_Parameter, H_P2, H_P4, H_Hardware*

Given a DRC Index of an NVDIMM, write N bytes to the metadata area
associated with it, at the specified offset and from the provided buffer.

**H_SCM_BIND_MEM**

| Input: *drcIndex, startingScmBlockIndex, numScmBlocksToBind,*
| *targetLogicalMemoryAddress, continue-token*
| Out: *continue-token, targetLogicalMemoryAddress, numScmBlocksToBound*
| Return Value: *H_Success, H_Parameter, H_P2, H_P3, H_P4, H_Overlap,*
| *H_Too_Big, H_P5, H_Busy*

Given a DRC-Index of an NVDIMM, map a contiguous range of SCM blocks
*(startingScmBlockIndex, startingScmBlockIndex+numScmBlocksToBind)* to the
guest at *targetLogicalMemoryAddress* within the guest physical address
space. In case *targetLogicalMemoryAddress == 0xFFFFFFFF_FFFFFFFF* the
hypervisor assigns a target address to the guest. The HCALL can fail if the
guest has an active PTE entry to the SCM block being bound.

**H_SCM_UNBIND_MEM**

| Input: *drcIndex, startingScmLogicalMemoryAddress, numScmBlocksToUnbind*
| Out: *numScmBlocksUnbound*
| Return Value: *H_Success, H_Parameter, H_P2, H_P3, H_In_Use, H_Overlap,*
| *H_Busy, H_LongBusyOrder1mSec, H_LongBusyOrder10mSec*

Given a DRC-Index of an NVDIMM, unmap *numScmBlocksToUnbind* SCM blocks
starting at *startingScmLogicalMemoryAddress* from the guest physical
address space. The HCALL can fail if the guest has an active PTE entry to
the SCM block being unbound.

**H_SCM_QUERY_BLOCK_MEM_BINDING**

| Input: *drcIndex, scmBlockIndex*
| Out: *Guest-Physical-Address*
| Return Value: *H_Success, H_Parameter, H_P2, H_NotFound*

Given a DRC-Index and an SCM block index, return the guest physical address
to which the SCM block is mapped.

**H_SCM_QUERY_LOGICAL_MEM_BINDING**

| Input: *Guest-Physical-Address*
| Out: *drcIndex, scmBlockIndex*
| Return Value: *H_Success, H_Parameter, H_P2, H_NotFound*

Given a guest physical address, return the DRC Index and SCM block that are
mapped to that address.

**H_SCM_UNBIND_ALL**

| Input: *scmTargetScope, drcIndex*
| Out: *None*
| Return Value: *H_Success, H_Parameter, H_P2, H_P3, H_In_Use, H_Busy,*
| *H_LongBusyOrder1mSec, H_LongBusyOrder10mSec*

Depending on the target scope, unmap all SCM blocks belonging to all
NVDIMMs, or all SCM blocks belonging to a single NVDIMM identified by its
drcIndex, from the LPAR memory.

**H_SCM_HEALTH**

| Input: *drcIndex*
| Out: *health-bitmap, health-bit-valid-bitmap*
| Return Value: *H_Success, H_Parameter, H_Hardware*

Given a DRC Index, return info on predictive failure and the overall health
of the NVDIMM. The asserted bits in the health-bitmap indicate a single
predictive failure, and the health-bit-valid-bitmap indicates which bits in
the health-bitmap are valid.

**H_SCM_PERFORMANCE_STATS**

| Input: *drcIndex, resultBuffer Addr*
| Out: *None*
| Return Value: *H_Success, H_Parameter, H_Unsupported, H_Hardware, H_Authority, H_Privilege*

Given a DRC Index, collect the performance statistics for the NVDIMM and
copy them to the resultBuffer.

References
==========
.. [1] "Power Architecture Platform Reference"
       https://en.wikipedia.org/wiki/Power_Architecture_Platform_Reference
.. [2] "Linux on Power Architecture Platform Reference"
       https://members.openpowerfoundation.org/document/dl/469
.. [3] "Definitions and Notation" Book III-Section 14.5.3
       https://openpowerfoundation.org/?resource_lib=power-isa-version-3-0
.. [4] arch/powerpc/include/asm/hvcall.h
.. [5] "64-Bit ELF V2 ABI Specification: Power Architecture"
       https://openpowerfoundation.org/?resource_lib=64-bit-elf-v2-abi-specification-power-architecture
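The *continue-token* protocol described in papr_hcalls.rst above amounts to a simple retry loop in the guest. The sketch below mocks it in userspace; `mock_hcall`, and the `H_SUCCESS`/`H_CONTINUE` values, are illustrative stand-ins, not the real `plpar_hcall` API or the constants from hvcall.h:

```c
/* Illustrative return codes; the real values live in hvcall.h. */
#define H_SUCCESS   0
#define H_CONTINUE  1

/* Mock hypervisor: pretends the operation needs three passes to finish,
 * handing back an updated continue-token until it is done. */
static long mock_hcall(unsigned long *token)
{
	if (*token < 2) {
		(*token)++;		/* hypervisor returns a new token */
		return H_CONTINUE;
	}
	return H_SUCCESS;
}

/* Issue the hcall the way the doc describes: token starts at 0, and each
 * retry passes back the token the hypervisor returned, until the return
 * value is something other than H_CONTINUE. */
static long issue_hcall_to_completion(void)
{
	unsigned long token = 0;
	long rc;

	do {
		rc = mock_hcall(&token);
	} while (rc == H_CONTINUE);
	return rc;
}
```

Treating the token as opaque is deliberate: the guest only echoes it back, so the hypervisor is free to encode whatever resume state it needs in it.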
+3 -6
arch/powerpc/Kconfig
···
 # SPDX-License-Identifier: GPL-2.0
 source "arch/powerpc/platforms/Kconfig.cputype"

-config PPC32
-	bool
-	default y if !PPC64
-
 config 32BIT
 	bool
 	default y if PPC32
···
 	select ARCH_HAS_PTE_SPECIAL
 	select ARCH_HAS_MEMBARRIER_CALLBACKS
 	select ARCH_HAS_SCALED_CPUTIME		if VIRT_CPU_ACCOUNTING_NATIVE && PPC_BOOK3S_64
-	select ARCH_HAS_STRICT_KERNEL_RWX	if ((PPC_BOOK3S_64 || PPC32) && !RELOCATABLE && !HIBERNATION)
+	select ARCH_HAS_STRICT_KERNEL_RWX	if ((PPC_BOOK3S_64 || PPC32) && !HIBERNATION)
 	select ARCH_HAS_TICK_BROADCAST		if GENERIC_CLOCKEVENTS_BROADCAST
 	select ARCH_HAS_UACCESS_FLUSHCACHE
 	select ARCH_HAS_UACCESS_MCSAFE		if PPC64
···
 	select HAVE_ARCH_HUGE_VMAP		if PPC_BOOK3S_64 && PPC_RADIX_MMU
 	select HAVE_ARCH_JUMP_LABEL
 	select HAVE_ARCH_KASAN			if PPC32
+	select HAVE_ARCH_KASAN_VMALLOC		if PPC32
 	select HAVE_ARCH_KGDB
 	select HAVE_ARCH_MMAP_RND_BITS
 	select HAVE_ARCH_MMAP_RND_COMPAT_BITS	if COMPAT
···
 config HOTPLUG_CPU
 	bool "Support for enabling/disabling CPUs"
 	depends on SMP && (PPC_PSERIES || \
-		PPC_PMAC || PPC_POWERNV || FSL_SOC_BOOKE)
+		   PPC_PMAC || PPC_POWERNV || FSL_SOC_BOOKE)
 	help
 	  Say Y here to be able to disable and re-enable individual
 	  CPUs at runtime on SMP machines.
+1 -1
arch/powerpc/Kconfig.debug
···
 config PPC_DEBUG_WX
 	bool "Warn on W+X mappings at boot"
-	depends on PPC_PTDUMP
+	depends on PPC_PTDUMP && STRICT_KERNEL_RWX
 	help
 	  Generate a warning if any W+X mappings are found at boot.
+1 -1
arch/powerpc/boot/4xx.c
···
 		dpath = 8; /* 64 bits */

 	/* get address pins (rows) */
-	val = SDRAM0_READ(DDR0_42);
+	val = SDRAM0_READ(DDR0_42);

 	row = DDR_GET_VAL(val, DDR_APIN, DDR_APIN_SHIFT);
 	if (row > max_row)
+1 -1
arch/powerpc/boot/dts/mgcoge.dts
···
 		reg = <0x11a80 0x40 0x89fc 0x2>;
 		interrupts = <2 8>;
 		interrupt-parent = <&PIC>;
-		gpios = < &cpm2_pio_d 19 0>;
+		cs-gpios = < &cpm2_pio_d 19 0>;
 		#address-cells = <1>;
 		#size-cells = <0>;
 		ds3106@1 {
+1 -1
arch/powerpc/boot/dts/mpc832x_rdb.dts
···
 		reg = <0x4c0 0x40>;
 		interrupts = <2>;
 		interrupt-parent = <&qeic>;
-		gpios = <&qe_pio_d 13 0>;
+		cs-gpios = <&qe_pio_d 13 0>;
 		mode = "cpu-qe";

 		mmc-slot@0 {
+1 -1
arch/powerpc/boot/dts/mpc8610_hpcd.dts
···
 		interrupts = <59 2>;
 		interrupt-parent = <&mpic>;
 		mode = "cpu";
-		gpios = <&sdcsr_pio 7 0>;
+		cs-gpios = <&sdcsr_pio 7 0>;
 		sleep = <&pmc 0x00000800 0>;

 		mmc-slot@0 {
-1
arch/powerpc/configs/44x/akebono_defconfig
···
 # CONFIG_NET_VENDOR_DLINK is not set
 # CONFIG_NET_VENDOR_EMULEX is not set
 # CONFIG_NET_VENDOR_EXAR is not set
-# CONFIG_NET_VENDOR_HP is not set
 CONFIG_IBM_EMAC=y
 # CONFIG_NET_VENDOR_MARVELL is not set
 # CONFIG_NET_VENDOR_MELLANOX is not set
-2
arch/powerpc/configs/44x/sam440ep_defconfig
···
 # CONFIG_BLK_DEV_BSG is not set
 CONFIG_PARTITION_ADVANCED=y
 CONFIG_AMIGA_PARTITION=y
-# CONFIG_IOSCHED_DEADLINE is not set
-# CONFIG_IOSCHED_CFQ is not set
 # CONFIG_EBONY is not set
 CONFIG_SAM440EP=y
 CONFIG_CMDLINE_BOOL=y
-2
arch/powerpc/configs/52xx/pcm030_defconfig
···
 CONFIG_MODULES=y
 CONFIG_MODULE_UNLOAD=y
 # CONFIG_BLK_DEV_BSG is not set
-# CONFIG_IOSCHED_DEADLINE is not set
-# CONFIG_IOSCHED_CFQ is not set
 # CONFIG_PPC_CHRP is not set
 CONFIG_PPC_MPC52xx=y
 CONFIG_PPC_MPC5200_SIMPLE=y
-2
arch/powerpc/configs/83xx/kmeter1_defconfig
···
 # CONFIG_BLK_DEV_BSG is not set
 CONFIG_PARTITION_ADVANCED=y
 # CONFIG_MSDOS_PARTITION is not set
-# CONFIG_IOSCHED_DEADLINE is not set
-# CONFIG_IOSCHED_CFQ is not set
 # CONFIG_PPC_CHRP is not set
 # CONFIG_PPC_PMAC is not set
 CONFIG_PPC_83xx=y
-1
arch/powerpc/configs/adder875_defconfig
···
 # CONFIG_VM_EVENT_COUNTERS is not set
 # CONFIG_BLK_DEV_BSG is not set
 CONFIG_PARTITION_ADVANCED=y
-# CONFIG_IOSCHED_CFQ is not set
 CONFIG_PPC_ADDER875=y
 CONFIG_8xx_COPYBACK=y
 CONFIG_GEN_RTC=y
-1
arch/powerpc/configs/ep8248e_defconfig
···
 CONFIG_KALLSYMS_ALL=y
 CONFIG_SLAB=y
 CONFIG_PARTITION_ADVANCED=y
-# CONFIG_IOSCHED_CFQ is not set
 # CONFIG_PPC_CHRP is not set
 # CONFIG_PPC_PMAC is not set
 CONFIG_PPC_82xx=y
-1
arch/powerpc/configs/ep88xc_defconfig
···
 # CONFIG_VM_EVENT_COUNTERS is not set
 # CONFIG_BLK_DEV_BSG is not set
 CONFIG_PARTITION_ADVANCED=y
-# CONFIG_IOSCHED_CFQ is not set
 CONFIG_PPC_EP88XC=y
 CONFIG_8xx_COPYBACK=y
 CONFIG_GEN_RTC=y
-1
arch/powerpc/configs/mgcoge_defconfig
···
 CONFIG_EMBEDDED=y
 CONFIG_SLAB=y
 CONFIG_PARTITION_ADVANCED=y
-# CONFIG_IOSCHED_CFQ is not set
 # CONFIG_PPC_PMAC is not set
 CONFIG_PPC_82xx=y
 CONFIG_MGCOGE=y
-1
arch/powerpc/configs/mpc512x_defconfig
···
 CONFIG_MODULE_UNLOAD=y
 # CONFIG_BLK_DEV_BSG is not set
 CONFIG_PARTITION_ADVANCED=y
-# CONFIG_IOSCHED_CFQ is not set
 # CONFIG_PPC_CHRP is not set
 CONFIG_PPC_MPC512x=y
 CONFIG_MPC512x_LPBFIFO=y
-1
arch/powerpc/configs/mpc885_ads_defconfig
···
 # CONFIG_VM_EVENT_COUNTERS is not set
 # CONFIG_BLK_DEV_BSG is not set
 CONFIG_PARTITION_ADVANCED=y
-# CONFIG_IOSCHED_CFQ is not set
 CONFIG_8xx_COPYBACK=y
 CONFIG_GEN_RTC=y
 CONFIG_HZ_100=y
-1
arch/powerpc/configs/powernv_defconfig
···
 CONFIG_MLX5_CORE_EN=y
 CONFIG_MLX5_CORE_IPOIB=y
 CONFIG_MYRI10GE=m
-CONFIG_QLGE=m
 CONFIG_NETXEN_NIC=m
 CONFIG_USB_NET_DRIVERS=m
 # CONFIG_WLAN is not set
-1
arch/powerpc/configs/ppc64_defconfig
···
 CONFIG_MYRI10GE=m
 CONFIG_S2IO=m
 CONFIG_PASEMI_MAC=y
-CONFIG_QLGE=m
 CONFIG_NETXEN_NIC=m
 CONFIG_SUNGEM=y
 CONFIG_GELIC_NET=m
-1
arch/powerpc/configs/ppc6xx_defconfig
···
 CONFIG_HAMACHI=m
 CONFIG_YELLOWFIN=m
 CONFIG_QLA3XXX=m
-CONFIG_QLGE=m
 CONFIG_NETXEN_NIC=m
 CONFIG_8139CP=m
 CONFIG_8139TOO=m
-1
arch/powerpc/configs/pseries_defconfig
···
 CONFIG_I40E=m
 CONFIG_MLX4_EN=m
 CONFIG_MYRI10GE=m
-CONFIG_QLGE=m
 CONFIG_NETXEN_NIC=m
 CONFIG_PPP=m
 CONFIG_PPP_BSDCOMP=m
+41 -27
arch/powerpc/configs/skiroot_defconfig
···
-CONFIG_PPC64=y
-CONFIG_ALTIVEC=y
-CONFIG_VSX=y
-CONFIG_NR_CPUS=2048
-CONFIG_CPU_LITTLE_ENDIAN=y
 CONFIG_KERNEL_XZ=y
 # CONFIG_SWAP is not set
 CONFIG_SYSVIPC=y
···
 # CONFIG_AIO is not set
 CONFIG_PERF_EVENTS=y
 # CONFIG_COMPAT_BRK is not set
+# CONFIG_SLAB_MERGE_DEFAULT is not set
+CONFIG_SLAB_FREELIST_RANDOM=y
 CONFIG_SLAB_FREELIST_HARDENED=y
-CONFIG_JUMP_LABEL=y
-CONFIG_STRICT_KERNEL_RWX=y
-CONFIG_MODULES=y
-CONFIG_MODULE_UNLOAD=y
-CONFIG_MODULE_SIG=y
-CONFIG_MODULE_SIG_FORCE=y
-CONFIG_MODULE_SIG_SHA512=y
-CONFIG_PARTITION_ADVANCED=y
-# CONFIG_MQ_IOSCHED_DEADLINE is not set
-# CONFIG_MQ_IOSCHED_KYBER is not set
+CONFIG_PPC64=y
+CONFIG_ALTIVEC=y
+CONFIG_VSX=y
+CONFIG_NR_CPUS=2048
+CONFIG_CPU_LITTLE_ENDIAN=y
+CONFIG_PANIC_TIMEOUT=30
 # CONFIG_PPC_VAS is not set
 # CONFIG_PPC_PSERIES is not set
 # CONFIG_PPC_OF_BOOT_TRAMPOLINE is not set
···
 CONFIG_CPU_IDLE=y
 CONFIG_HZ_100=y
 CONFIG_KEXEC=y
+CONFIG_KEXEC_FILE=y
 CONFIG_PRESERVE_FA_DUMP=y
 CONFIG_IRQ_ALL_CPUS=y
 CONFIG_NUMA=y
-# CONFIG_COMPACTION is not set
-# CONFIG_MIGRATION is not set
 CONFIG_PPC_64K_PAGES=y
 CONFIG_SCHED_SMT=y
 CONFIG_CMDLINE_BOOL=y
 CONFIG_CMDLINE="console=tty0 console=hvc0 ipr.fast_reboot=1 quiet"
 # CONFIG_SECCOMP is not set
 # CONFIG_PPC_MEM_KEYS is not set
+CONFIG_JUMP_LABEL=y
+CONFIG_STRICT_KERNEL_RWX=y
+CONFIG_MODULES=y
+CONFIG_MODULE_UNLOAD=y
+CONFIG_MODULE_SIG_FORCE=y
+CONFIG_MODULE_SIG_SHA512=y
+CONFIG_PARTITION_ADVANCED=y
+# CONFIG_MQ_IOSCHED_DEADLINE is not set
+# CONFIG_MQ_IOSCHED_KYBER is not set
+# CONFIG_COMPACTION is not set
+# CONFIG_MIGRATION is not set
 CONFIG_NET=y
 CONFIG_PACKET=y
 CONFIG_UNIX=y
···
 CONFIG_BNX2X=m
 # CONFIG_NET_VENDOR_BROCADE is not set
 # CONFIG_NET_VENDOR_CADENCE is not set
-# CONFIG_NET_CADENCE is not set
 # CONFIG_NET_VENDOR_CAVIUM is not set
 CONFIG_CHELSIO_T1=m
 # CONFIG_NET_VENDOR_CISCO is not set
···
 # CONFIG_NET_VENDOR_DLINK is not set
 CONFIG_BE2NET=m
 # CONFIG_NET_VENDOR_EZCHIP is not set
-# CONFIG_NET_VENDOR_HP is not set
 # CONFIG_NET_VENDOR_HUAWEI is not set
 CONFIG_E1000=m
 CONFIG_E1000E=m
···
 CONFIG_IXGB=m
 CONFIG_IXGBE=m
 CONFIG_I40E=m
-CONFIG_S2IO=m
 # CONFIG_NET_VENDOR_MARVELL is not set
 CONFIG_MLX4_EN=m
 # CONFIG_MLX4_CORE_GEN2 is not set
···
 # CONFIG_NET_VENDOR_MICROSEMI is not set
 CONFIG_MYRI10GE=m
 # CONFIG_NET_VENDOR_NATSEMI is not set
+CONFIG_S2IO=m
 # CONFIG_NET_VENDOR_NETRONOME is not set
 # CONFIG_NET_VENDOR_NI is not set
 # CONFIG_NET_VENDOR_NVIDIA is not set
 # CONFIG_NET_VENDOR_OKI is not set
 # CONFIG_NET_VENDOR_PACKET_ENGINES is not set
-CONFIG_QLGE=m
 CONFIG_NETXEN_NIC=m
 CONFIG_QED=m
 CONFIG_QEDE=m
···
 CONFIG_HID_EZKEY=y
 CONFIG_HID_ITE=y
 CONFIG_HID_KENSINGTON=y
-CONFIG_HID_LOGITECH=y
 CONFIG_HID_MICROSOFT=y
 CONFIG_HID_MONTEREY=y
 CONFIG_USB_HIDDEV=y
···
 CONFIG_NLS_ASCII=y
 CONFIG_NLS_ISO8859_1=y
 CONFIG_NLS_UTF8=y
+CONFIG_ENCRYPTED_KEYS=y
+CONFIG_SECURITY=y
+CONFIG_HARDENED_USERCOPY=y
+# CONFIG_HARDENED_USERCOPY_FALLBACK is not set
+CONFIG_HARDENED_USERCOPY_PAGESPAN=y
+CONFIG_FORTIFY_SOURCE=y
+CONFIG_SECURITY_LOCKDOWN_LSM=y
+CONFIG_SECURITY_LOCKDOWN_LSM_EARLY=y
+CONFIG_LOCK_DOWN_KERNEL_FORCE_INTEGRITY=y
+# CONFIG_INTEGRITY is not set
+CONFIG_LSM="yama,loadpin,safesetid,integrity"
+# CONFIG_CRYPTO_HW is not set
 CONFIG_CRC16=y
 CONFIG_CRC_ITU_T=y
 CONFIG_LIBCRC32C=y
···
 # CONFIG_XZ_DEC_SPARC is not set
 CONFIG_PRINTK_TIME=y
 CONFIG_MAGIC_SYSRQ=y
+CONFIG_SLUB_DEBUG_ON=y
+CONFIG_SCHED_STACK_END_CHECK=y
 CONFIG_DEBUG_STACKOVERFLOW=y
+CONFIG_PANIC_ON_OOPS=y
 CONFIG_SOFTLOCKUP_DETECTOR=y
 CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC=y
 CONFIG_HARDLOCKUP_DETECTOR=y
 CONFIG_BOOTPARAM_HARDLOCKUP_PANIC=y
 CONFIG_WQ_WATCHDOG=y
 # CONFIG_SCHED_DEBUG is not set
+CONFIG_DEBUG_SG=y
+CONFIG_DEBUG_NOTIFIERS=y
+CONFIG_BUG_ON_DATA_CORRUPTION=y
+CONFIG_DEBUG_CREDENTIALS=y
 # CONFIG_FTRACE is not set
-# CONFIG_RUNTIME_TESTING_MENU is not set
 CONFIG_XMON=y
-CONFIG_XMON_DEFAULT=y
-CONFIG_ENCRYPTED_KEYS=y
-# CONFIG_CRYPTO_ECHAINIV is not set
-# CONFIG_CRYPTO_HW is not set
+# CONFIG_RUNTIME_TESTING_MENU is not set
-1
arch/powerpc/configs/storcenter_defconfig
···
  CONFIG_NLS_ISO8859_1=y
  CONFIG_NLS_UTF8=y
  CONFIG_CRC_T10DIF=y
- # CONFIG_ENABLE_WARN_DEPRECATED is not set
  # CONFIG_ENABLE_MUST_CHECK is not set
-1
arch/powerpc/configs/tqm8xx_defconfig
···
  CONFIG_MODULE_SRCVERSION_ALL=y
  # CONFIG_BLK_DEV_BSG is not set
  CONFIG_PARTITION_ADVANCED=y
- # CONFIG_IOSCHED_CFQ is not set
  CONFIG_TQM8XX=y
  CONFIG_8xx_COPYBACK=y
  # CONFIG_8xx_CPU15 is not set
+59 -9
arch/powerpc/include/asm/book3s/32/kup.h
···
  	isync();	/* Context sync required after mtsrin() */
  }

- static inline void allow_user_access(void __user *to, const void __user *from, u32 size)
+ static __always_inline void allow_user_access(void __user *to, const void __user *from,
+ 					      u32 size, unsigned long dir)
  {
  	u32 addr, end;

- 	if (__builtin_constant_p(to) && to == NULL)
+ 	BUILD_BUG_ON(!__builtin_constant_p(dir));
+ 	BUILD_BUG_ON(dir == KUAP_CURRENT);
+ 
+ 	if (!(dir & KUAP_WRITE))
  		return;

  	addr = (__force u32)to;

- 	if (!addr || addr >= TASK_SIZE || !size)
+ 	if (unlikely(addr >= TASK_SIZE || !size))
  		return;

  	end = min(addr + size, TASK_SIZE);
+ 
  	current->thread.kuap = (addr & 0xf0000000) | ((((end - 1) >> 28) + 1) & 0xf);
  	kuap_update_sr(mfsrin(addr) & ~SR_KS, addr, end);	/* Clear Ks */
  }

- static inline void prevent_user_access(void __user *to, const void __user *from, u32 size)
+ static __always_inline void prevent_user_access(void __user *to, const void __user *from,
+ 						u32 size, unsigned long dir)
  {
- 	u32 addr = (__force u32)to;
- 	u32 end = min(addr + size, TASK_SIZE);
+ 	u32 addr, end;

- 	if (!addr || addr >= TASK_SIZE || !size)
+ 	BUILD_BUG_ON(!__builtin_constant_p(dir));
+ 
+ 	if (dir == KUAP_CURRENT) {
+ 		u32 kuap = current->thread.kuap;
+ 
+ 		if (unlikely(!kuap))
+ 			return;
+ 
+ 		addr = kuap & 0xf0000000;
+ 		end = kuap << 28;
+ 	} else if (dir & KUAP_WRITE) {
+ 		addr = (__force u32)to;
+ 		end = min(addr + size, TASK_SIZE);
+ 
+ 		if (unlikely(addr >= TASK_SIZE || !size))
+ 			return;
+ 	} else {
  		return;
+ 	}

  	current->thread.kuap = 0;
  	kuap_update_sr(mfsrin(addr) | SR_KS, addr, end);	/* set Ks */
  }

- static inline bool bad_kuap_fault(struct pt_regs *regs, bool is_write)
+ static inline unsigned long prevent_user_access_return(void)
  {
+ 	unsigned long flags = current->thread.kuap;
+ 	unsigned long addr = flags & 0xf0000000;
+ 	unsigned long end = flags << 28;
+ 	void __user *to = (__force void __user *)addr;
+ 
+ 	if (flags)
+ 		prevent_user_access(to, to, end - addr, KUAP_READ_WRITE);
+ 
+ 	return flags;
+ }
+ 
+ static inline void restore_user_access(unsigned long flags)
+ {
+ 	unsigned long addr = flags & 0xf0000000;
+ 	unsigned long end = flags << 28;
+ 	void __user *to = (__force void __user *)addr;
+ 
+ 	if (flags)
+ 		allow_user_access(to, to, end - addr, KUAP_READ_WRITE);
+ }
+ 
+ static inline bool
+ bad_kuap_fault(struct pt_regs *regs, unsigned long address, bool is_write)
+ {
+ 	unsigned long begin = regs->kuap & 0xf0000000;
+ 	unsigned long end = regs->kuap << 28;
+ 
  	if (!is_write)
  		return false;

- 	return WARN(!regs->kuap, "Bug: write fault blocked by segment registers !");
+ 	return WARN(address < begin || address >= end,
+ 		    "Bug: write fault blocked by segment registers !");
  }

  #endif /* CONFIG_PPC_KUAP */
+5
arch/powerpc/include/asm/book3s/32/pgtable.h
···
  #else
  #define VMALLOC_START ((((long)high_memory + VMALLOC_OFFSET) & ~(VMALLOC_OFFSET-1)))
  #endif
+ 
+ #ifdef CONFIG_KASAN_VMALLOC
+ #define VMALLOC_END	_ALIGN_DOWN(ioremap_bot, PAGE_SIZE << KASAN_SHADOW_SCALE_SHIFT)
+ #else
  #define VMALLOC_END	ioremap_bot
+ #endif

  #ifndef __ASSEMBLY__
  #include <linux/sched.h>
+33 -7
arch/powerpc/include/asm/book3s/64/kup-radix.h
···
   * because that would require an expensive read/modify write of the AMR.
   */

+ static inline unsigned long get_kuap(void)
+ {
+ 	if (!early_mmu_has_feature(MMU_FTR_RADIX_KUAP))
+ 		return 0;
+ 
+ 	return mfspr(SPRN_AMR);
+ }
+ 
  static inline void set_kuap(unsigned long value)
  {
  	if (!early_mmu_has_feature(MMU_FTR_RADIX_KUAP))
···
  	isync();
  }

- static inline void allow_user_access(void __user *to, const void __user *from,
- 				     unsigned long size)
+ static __always_inline void allow_user_access(void __user *to, const void __user *from,
+ 					      unsigned long size, unsigned long dir)
  {
  	// This is written so we can resolve to a single case at build time
- 	if (__builtin_constant_p(to) && to == NULL)
+ 	BUILD_BUG_ON(!__builtin_constant_p(dir));
+ 	if (dir == KUAP_READ)
  		set_kuap(AMR_KUAP_BLOCK_WRITE);
- 	else if (__builtin_constant_p(from) && from == NULL)
+ 	else if (dir == KUAP_WRITE)
  		set_kuap(AMR_KUAP_BLOCK_READ);
- 	else
+ 	else if (dir == KUAP_READ_WRITE)
  		set_kuap(0);
+ 	else
+ 		BUILD_BUG();
  }

  static inline void prevent_user_access(void __user *to, const void __user *from,
- 				       unsigned long size)
+ 				       unsigned long size, unsigned long dir)
  {
  	set_kuap(AMR_KUAP_BLOCKED);
  }

- static inline bool bad_kuap_fault(struct pt_regs *regs, bool is_write)
+ static inline unsigned long prevent_user_access_return(void)
+ {
+ 	unsigned long flags = get_kuap();
+ 
+ 	set_kuap(AMR_KUAP_BLOCKED);
+ 
+ 	return flags;
+ }
+ 
+ static inline void restore_user_access(unsigned long flags)
+ {
+ 	set_kuap(flags);
+ }
+ 
+ static inline bool
+ bad_kuap_fault(struct pt_regs *regs, unsigned long address, bool is_write)
  {
  	return WARN(mmu_has_feature(MMU_FTR_RADIX_KUAP) &&
  		    (regs->kuap & (is_write ? AMR_KUAP_BLOCK_WRITE : AMR_KUAP_BLOCK_READ)),
+5 -2
arch/powerpc/include/asm/cputable.h
···
  #define CPU_FTR_P9_TLBIE_STQ_BUG	LONG_ASM_CONST(0x0000400000000000)
  #define CPU_FTR_P9_TIDR			LONG_ASM_CONST(0x0000800000000000)
  #define CPU_FTR_P9_TLBIE_ERAT_BUG	LONG_ASM_CONST(0x0001000000000000)
+ #define CPU_FTR_P9_RADIX_PREFETCH_BUG	LONG_ASM_CONST(0x0002000000000000)

  #ifndef __ASSEMBLY__

···
  	    CPU_FTR_DBELL | CPU_FTR_HAS_PPR | CPU_FTR_ARCH_207S | \
  	    CPU_FTR_TM_COMP | CPU_FTR_ARCH_300 | CPU_FTR_PKEY | \
  	    CPU_FTR_P9_TLBIE_STQ_BUG | CPU_FTR_P9_TLBIE_ERAT_BUG | CPU_FTR_P9_TIDR)
- #define CPU_FTRS_POWER9_DD2_0 CPU_FTRS_POWER9
- #define CPU_FTRS_POWER9_DD2_1 (CPU_FTRS_POWER9 | CPU_FTR_POWER9_DD2_1)
+ #define CPU_FTRS_POWER9_DD2_0 (CPU_FTRS_POWER9 | CPU_FTR_P9_RADIX_PREFETCH_BUG)
+ #define CPU_FTRS_POWER9_DD2_1 (CPU_FTRS_POWER9 | \
+ 			       CPU_FTR_P9_RADIX_PREFETCH_BUG | \
+ 			       CPU_FTR_POWER9_DD2_1)
  #define CPU_FTRS_POWER9_DD2_2 (CPU_FTRS_POWER9 | CPU_FTR_POWER9_DD2_1 | \
  			       CPU_FTR_P9_TM_HV_ASSIST | \
  			       CPU_FTR_P9_TM_XER_SO_BUG)
+4 -2
arch/powerpc/include/asm/firmware.h
···
  #define FW_FEATURE_LLAN		ASM_CONST(0x0000000000010000)
  #define FW_FEATURE_BULK_REMOVE	ASM_CONST(0x0000000000020000)
  #define FW_FEATURE_XDABR	ASM_CONST(0x0000000000040000)
- #define FW_FEATURE_MULTITCE	ASM_CONST(0x0000000000080000)
+ #define FW_FEATURE_PUT_TCE_IND	ASM_CONST(0x0000000000080000)
  #define FW_FEATURE_SPLPAR	ASM_CONST(0x0000000000100000)
  #define FW_FEATURE_LPAR		ASM_CONST(0x0000000000400000)
  #define FW_FEATURE_PS3_LV1	ASM_CONST(0x0000000000800000)
···
  #define FW_FEATURE_BLOCK_REMOVE	ASM_CONST(0x0000001000000000)
  #define FW_FEATURE_PAPR_SCM	ASM_CONST(0x0000002000000000)
  #define FW_FEATURE_ULTRAVISOR	ASM_CONST(0x0000004000000000)
+ #define FW_FEATURE_STUFF_TCE	ASM_CONST(0x0000008000000000)

  #ifndef __ASSEMBLY__

···
  		FW_FEATURE_MIGRATE | FW_FEATURE_PERFMON | FW_FEATURE_CRQ |
  		FW_FEATURE_VIO | FW_FEATURE_RDMA | FW_FEATURE_LLAN |
  		FW_FEATURE_BULK_REMOVE | FW_FEATURE_XDABR |
- 		FW_FEATURE_MULTITCE | FW_FEATURE_SPLPAR | FW_FEATURE_LPAR |
+ 		FW_FEATURE_PUT_TCE_IND | FW_FEATURE_STUFF_TCE |
+ 		FW_FEATURE_SPLPAR | FW_FEATURE_LPAR |
  		FW_FEATURE_CMO | FW_FEATURE_VPHN | FW_FEATURE_XCMO |
  		FW_FEATURE_SET_MODE | FW_FEATURE_BEST_ENERGY |
  		FW_FEATURE_TYPE1_AFFINITY | FW_FEATURE_PRRN |
+4
arch/powerpc/include/asm/hw_breakpoint.h
···
  #define HW_BRK_TYPE_PRIV_ALL	(HW_BRK_TYPE_USER | HW_BRK_TYPE_KERNEL | \
  				 HW_BRK_TYPE_HYP)

+ #ifdef CONFIG_PPC_8xx
+ #define HW_BREAKPOINT_ALIGN 0x3
+ #else
  #define HW_BREAKPOINT_ALIGN 0x7
+ #endif

  #define DABR_MAX_LEN	8
  #define DAWR_MAX_LEN	512
+2
arch/powerpc/include/asm/kasan.h
···
  void kasan_early_init(void);
  void kasan_mmu_init(void);
  void kasan_init(void);
+ void kasan_late_init(void);
  #else
  static inline void kasan_init(void) { }
  static inline void kasan_mmu_init(void) { }
+ static inline void kasan_late_init(void) { }
  #endif

  #endif /* __ASSEMBLY */
+41 -8
arch/powerpc/include/asm/kup.h
···
  #ifndef _ASM_POWERPC_KUP_H_
  #define _ASM_POWERPC_KUP_H_

+ #define KUAP_READ	1
+ #define KUAP_WRITE	2
+ #define KUAP_READ_WRITE	(KUAP_READ | KUAP_WRITE)
+ /*
+  * For prevent_user_access() only.
+  * Use the current saved situation instead of the to/from/size params.
+  * Used on book3s/32
+  */
+ #define KUAP_CURRENT	4
+ 
  #ifdef CONFIG_PPC64
  #include <asm/book3s/64/kup-radix.h>
  #endif
···
  #else
  static inline void setup_kuap(bool disabled) { }
  static inline void allow_user_access(void __user *to, const void __user *from,
- 				     unsigned long size) { }
+ 				     unsigned long size, unsigned long dir) { }
  static inline void prevent_user_access(void __user *to, const void __user *from,
- 				       unsigned long size) { }
- static inline bool bad_kuap_fault(struct pt_regs *regs, bool is_write) { return false; }
+ 				       unsigned long size, unsigned long dir) { }
+ static inline unsigned long prevent_user_access_return(void) { return 0UL; }
+ static inline void restore_user_access(unsigned long flags) { }
+ static inline bool
+ bad_kuap_fault(struct pt_regs *regs, unsigned long address, bool is_write)
+ {
+ 	return false;
+ }
  #endif /* CONFIG_PPC_KUAP */

  static inline void allow_read_from_user(const void __user *from, unsigned long size)
  {
- 	allow_user_access(NULL, from, size);
+ 	allow_user_access(NULL, from, size, KUAP_READ);
  }

  static inline void allow_write_to_user(void __user *to, unsigned long size)
  {
- 	allow_user_access(to, NULL, size);
+ 	allow_user_access(to, NULL, size, KUAP_WRITE);
+ }
+ 
+ static inline void allow_read_write_user(void __user *to, const void __user *from,
+ 					 unsigned long size)
+ {
+ 	allow_user_access(to, from, size, KUAP_READ_WRITE);
  }

  static inline void prevent_read_from_user(const void __user *from, unsigned long size)
  {
- 	prevent_user_access(NULL, from, size);
+ 	prevent_user_access(NULL, from, size, KUAP_READ);
  }

  static inline void prevent_write_to_user(void __user *to, unsigned long size)
  {
- 	prevent_user_access(to, NULL, size);
+ 	prevent_user_access(to, NULL, size, KUAP_WRITE);
+ }
+ 
+ static inline void prevent_read_write_user(void __user *to, const void __user *from,
+ 					   unsigned long size)
+ {
+ 	prevent_user_access(to, from, size, KUAP_READ_WRITE);
+ }
+ 
+ static inline void prevent_current_access_user(void)
+ {
+ 	prevent_user_access(NULL, NULL, ~0UL, KUAP_CURRENT);
  }

  #endif /* !__ASSEMBLY__ */

- #endif /* _ASM_POWERPC_KUP_H_ */
+ #endif /* _ASM_POWERPC_KUAP_H_ */
+18 -3
arch/powerpc/include/asm/nohash/32/kup-8xx.h
···
  #include <asm/reg.h>

  static inline void allow_user_access(void __user *to, const void __user *from,
- 				     unsigned long size)
+ 				     unsigned long size, unsigned long dir)
  {
  	mtspr(SPRN_MD_AP, MD_APG_INIT);
  }

  static inline void prevent_user_access(void __user *to, const void __user *from,
- 				       unsigned long size)
+ 				       unsigned long size, unsigned long dir)
  {
  	mtspr(SPRN_MD_AP, MD_APG_KUAP);
  }

- static inline bool bad_kuap_fault(struct pt_regs *regs, bool is_write)
+ static inline unsigned long prevent_user_access_return(void)
+ {
+ 	unsigned long flags = mfspr(SPRN_MD_AP);
+ 
+ 	mtspr(SPRN_MD_AP, MD_APG_KUAP);
+ 
+ 	return flags;
+ }
+ 
+ static inline void restore_user_access(unsigned long flags)
+ {
+ 	mtspr(SPRN_MD_AP, flags);
+ }
+ 
+ static inline bool
+ bad_kuap_fault(struct pt_regs *regs, unsigned long address, bool is_write)
  {
  	return WARN(!((regs->kuap ^ MD_APG_KUAP) & 0xf0000000),
  		    "Bug: fault blocked by AP register !");
+5
arch/powerpc/include/asm/nohash/32/pgtable.h
···
  #else
  #define VMALLOC_START ((((long)high_memory + VMALLOC_OFFSET) & ~(VMALLOC_OFFSET-1)))
  #endif
+ 
+ #ifdef CONFIG_KASAN_VMALLOC
+ #define VMALLOC_END	_ALIGN_DOWN(ioremap_bot, PAGE_SIZE << KASAN_SHADOW_SCALE_SHIFT)
+ #else
  #define VMALLOC_END	ioremap_bot
+ #endif

  /*
   * Bits in a linux-style PTE.  These match the bits in the
+1 -1
arch/powerpc/include/asm/page.h
···
   */
  #if defined(CONFIG_PPC32) && defined(CONFIG_BOOKE)
  #define __va(x) ((void *)(unsigned long)((phys_addr_t)(x) + VIRT_PHYS_OFFSET))
- #define __pa(x) ((unsigned long)(x) - VIRT_PHYS_OFFSET)
+ #define __pa(x) ((phys_addr_t)(unsigned long)(x) - VIRT_PHYS_OFFSET)
  #else
  #ifdef CONFIG_PPC64
  /*
+5 -2
arch/powerpc/include/asm/pci-bridge.h
···
  extern struct pci_dn *pci_get_pdn_by_devfn(struct pci_bus *bus,
  					   int devfn);
  extern struct pci_dn *pci_get_pdn(struct pci_dev *pdev);
- extern struct pci_dn *add_dev_pci_data(struct pci_dev *pdev);
- extern void remove_dev_pci_data(struct pci_dev *pdev);
  extern struct pci_dn *pci_add_device_node_info(struct pci_controller *hose,
  					       struct device_node *dn);
  extern void pci_remove_device_node_info(struct device_node *dn);
+ 
+ #ifdef CONFIG_PCI_IOV
+ struct pci_dn *add_sriov_vf_pdns(struct pci_dev *pdev);
+ void remove_sriov_vf_pdns(struct pci_dev *pdev);
+ #endif

  static inline int pci_device_from_OF_node(struct device_node *np,
  					  u8 *bus, u8 *devfn)
-1
arch/powerpc/include/asm/pci.h
···
  				  pgprot_t prot);

  extern resource_size_t pcibios_io_space_offset(struct pci_controller *hose);
- extern void pcibios_setup_bus_devices(struct pci_bus *bus);
  extern void pcibios_setup_bus_self(struct pci_bus *bus);
  extern void pcibios_setup_phb_io_space(struct pci_controller *hose);
  extern void pcibios_scan_phb(struct pci_controller *hose);
-6
arch/powerpc/include/asm/pgtable.h
···
  static inline void mark_initmem_nx(void) { }
  #endif

- #ifdef CONFIG_PPC_DEBUG_WX
- void ptdump_check_wx(void);
- #else
- static inline void ptdump_check_wx(void) { }
- #endif
- 
  /*
   * When used, PTE_FRAG_NR is defined in subarch pgtable.h
   * so we are sure it is included when arriving here.
+1
arch/powerpc/include/asm/pnv-pci.h
···
  #define PCI_SLOT_ID_PREFIX	(1UL << 63)
  #define PCI_SLOT_ID(phb_id, bdfn)	\
  	(PCI_SLOT_ID_PREFIX | ((uint64_t)(bdfn) << 16) | (phb_id))
+ #define PCI_PHB_SLOT_ID(phb_id)	(phb_id)

  extern int pnv_pci_get_slot_id(struct device_node *np, uint64_t *id);
  extern int pnv_pci_get_device_tree(uint32_t phandle, void *buf, uint64_t len);
+9
arch/powerpc/include/asm/processor.h
···
  #if defined(CONFIG_PPC_BOOK3S_32) && defined(CONFIG_PPC_KUAP)
  	unsigned long	kuap;		/* opened segments for user access */
  #endif
+ #ifdef CONFIG_VMAP_STACK
+ 	unsigned long	srr0;
+ 	unsigned long	srr1;
+ 	unsigned long	dar;
+ 	unsigned long	dsisr;
+ #endif
  	/* Debug Registers */
  	struct debug_reg debug;
  	struct thread_fp_state	fp_state;
···
  extern unsigned long isa300_idle_stop_noloss(unsigned long psscr_val);
  extern unsigned long isa300_idle_stop_mayloss(unsigned long psscr_val);
  extern unsigned long isa206_idle_insn_mayloss(unsigned long type);
+ #ifdef CONFIG_PPC_970_NAP
+ extern void power4_idle_nap(void);
+ #endif

  extern unsigned long cpuidle_disable;
  enum idle_boot_override {IDLE_NO_OVERRIDE = 0, IDLE_POWERSAVE_OFF};
+14
arch/powerpc/include/asm/reg_8xx.h
···
  #define SPRN_CMPE	152
  #define SPRN_CMPF	153
  #define SPRN_LCTRL1	156
+ #define   LCTRL1_CTE_GT		0xc0000000
+ #define   LCTRL1_CTF_LT		0x14000000
+ #define   LCTRL1_CRWE_RW	0x00000000
+ #define   LCTRL1_CRWE_RO	0x00040000
+ #define   LCTRL1_CRWE_WO	0x000c0000
+ #define   LCTRL1_CRWF_RW	0x00000000
+ #define   LCTRL1_CRWF_RO	0x00010000
+ #define   LCTRL1_CRWF_WO	0x00030000
  #define SPRN_LCTRL2	157
+ #define   LCTRL2_LW0EN		0x80000000
+ #define   LCTRL2_LW0LA_E	0x00000000
+ #define   LCTRL2_LW0LA_F	0x04000000
+ #define   LCTRL2_LW0LA_EandF	0x08000000
+ #define   LCTRL2_LW0LADC	0x02000000
+ #define   LCTRL2_SLW0EN		0x00000002
  #ifdef CONFIG_PPC_8xx
  #define SPRN_ICTRL	158
  #endif
+18
arch/powerpc/include/asm/thread_info.h
···
  #define _ASM_POWERPC_THREAD_INFO_H

  #include <asm/asm-const.h>
+ #include <asm/page.h>

  #ifdef __KERNEL__

+ #if defined(CONFIG_VMAP_STACK) && CONFIG_THREAD_SHIFT < PAGE_SHIFT
+ #define THREAD_SHIFT		PAGE_SHIFT
+ #else
  #define THREAD_SHIFT		CONFIG_THREAD_SHIFT
+ #endif

  #define THREAD_SIZE		(1 << THREAD_SHIFT)
+ 
+ /*
+  * By aligning VMAP'd stacks to 2 * THREAD_SIZE, we can detect overflow by
+  * checking sp & (1 << THREAD_SHIFT), which we can do cheaply in the entry
+  * assembly.
+  */
+ #ifdef CONFIG_VMAP_STACK
+ #define THREAD_ALIGN_SHIFT	(THREAD_SHIFT + 1)
+ #else
+ #define THREAD_ALIGN_SHIFT	THREAD_SHIFT
+ #endif
+ 
+ #define THREAD_ALIGN		(1 << THREAD_ALIGN_SHIFT)

  #ifndef __ASSEMBLY__
  #include <linux/cache.h>
+67 -21
arch/powerpc/include/asm/uaccess.h
···
  	__put_user_check((__typeof__(*(ptr)))(x), (ptr), sizeof(*(ptr)))

  #define __get_user(x, ptr) \
- 	__get_user_nocheck((x), (ptr), sizeof(*(ptr)))
+ 	__get_user_nocheck((x), (ptr), sizeof(*(ptr)), true)
  #define __put_user(x, ptr) \
- 	__put_user_nocheck((__typeof__(*(ptr)))(x), (ptr), sizeof(*(ptr)))
+ 	__put_user_nocheck((__typeof__(*(ptr)))(x), (ptr), sizeof(*(ptr)), true)
+ 
+ #define __get_user_allowed(x, ptr) \
+ 	__get_user_nocheck((x), (ptr), sizeof(*(ptr)), false)
+ #define __put_user_allowed(x, ptr) \
+ 	__put_user_nocheck((__typeof__(*(ptr)))(x), (ptr), sizeof(*(ptr)), false)

  #define __get_user_inatomic(x, ptr) \
  	__get_user_nosleep((x), (ptr), sizeof(*(ptr)))
···
  		: "r" (x), "b" (addr), "i" (-EFAULT), "0" (err))
  #endif /* __powerpc64__ */

- #define __put_user_size(x, ptr, size, retval) \
+ #define __put_user_size_allowed(x, ptr, size, retval) \
  do { \
  	retval = 0; \
- 	allow_write_to_user(ptr, size); \
  	switch (size) { \
  	  case 1: __put_user_asm(x, ptr, retval, "stb"); break; \
  	  case 2: __put_user_asm(x, ptr, retval, "sth"); break; \
···
  	  case 8: __put_user_asm2(x, ptr, retval); break; \
  	  default: __put_user_bad(); \
  	} \
+ } while (0)
+ 
+ #define __put_user_size(x, ptr, size, retval) \
+ do { \
+ 	allow_write_to_user(ptr, size); \
+ 	__put_user_size_allowed(x, ptr, size, retval); \
  	prevent_write_to_user(ptr, size); \
  } while (0)

- #define __put_user_nocheck(x, ptr, size) \
+ #define __put_user_nocheck(x, ptr, size, do_allow) \
  ({ \
  	long __pu_err; \
  	__typeof__(*(ptr)) __user *__pu_addr = (ptr); \
  	if (!is_kernel_addr((unsigned long)__pu_addr)) \
  		might_fault(); \
  	__chk_user_ptr(ptr); \
- 	__put_user_size((x), __pu_addr, (size), __pu_err); \
+ 	if (do_allow) \
+ 		__put_user_size((x), __pu_addr, (size), __pu_err); \
+ 	else \
+ 		__put_user_size_allowed((x), __pu_addr, (size), __pu_err); \
  	__pu_err; \
  })
···
  		: "b" (addr), "i" (-EFAULT), "0" (err))
  #endif /* __powerpc64__ */

- #define __get_user_size(x, ptr, size, retval) \
+ #define __get_user_size_allowed(x, ptr, size, retval) \
  do { \
  	retval = 0; \
  	__chk_user_ptr(ptr); \
  	if (size > sizeof(x)) \
  		(x) = __get_user_bad(); \
- 	allow_read_from_user(ptr, size); \
  	switch (size) { \
  	case 1: __get_user_asm(x, ptr, retval, "lbz"); break; \
  	case 2: __get_user_asm(x, ptr, retval, "lhz"); break; \
···
  	case 8: __get_user_asm2(x, ptr, retval); break; \
  	default: (x) = __get_user_bad(); \
  	} \
+ } while (0)
+ 
+ #define __get_user_size(x, ptr, size, retval) \
+ do { \
+ 	allow_read_from_user(ptr, size); \
+ 	__get_user_size_allowed(x, ptr, size, retval); \
  	prevent_read_from_user(ptr, size); \
  } while (0)
···
  #define __long_type(x) \
  	__typeof__(__builtin_choose_expr(sizeof(x) > sizeof(0UL), 0ULL, 0UL))

- #define __get_user_nocheck(x, ptr, size) \
+ #define __get_user_nocheck(x, ptr, size, do_allow) \
  ({ \
  	long __gu_err; \
  	__long_type(*(ptr)) __gu_val; \
···
  	if (!is_kernel_addr((unsigned long)__gu_addr)) \
  		might_fault(); \
  	barrier_nospec(); \
- 	__get_user_size(__gu_val, __gu_addr, (size), __gu_err); \
+ 	if (do_allow) \
+ 		__get_user_size(__gu_val, __gu_addr, (size), __gu_err); \
+ 	else \
+ 		__get_user_size_allowed(__gu_val, __gu_addr, (size), __gu_err); \
  	(x) = (__typeof__(*(ptr)))__gu_val; \
  	__gu_err; \
  })
···
  	unsigned long ret;

  	barrier_nospec();
- 	allow_user_access(to, from, n);
+ 	allow_read_write_user(to, from, n);
  	ret = __copy_tofrom_user(to, from, n);
- 	prevent_user_access(to, from, n);
+ 	prevent_read_write_user(to, from, n);
  	return ret;
  }
  #endif /* __powerpc64__ */
···
  	return ret;
  }

- static inline unsigned long raw_copy_to_user(void __user *to,
- 		const void *from, unsigned long n)
+ static inline unsigned long
+ raw_copy_to_user_allowed(void __user *to, const void *from, unsigned long n)
  {
- 	unsigned long ret;
  	if (__builtin_constant_p(n) && (n <= 8)) {
- 		ret = 1;
+ 		unsigned long ret = 1;

  		switch (n) {
  		case 1:
- 			__put_user_size(*(u8 *)from, (u8 __user *)to, 1, ret);
+ 			__put_user_size_allowed(*(u8 *)from, (u8 __user *)to, 1, ret);
  			break;
  		case 2:
- 			__put_user_size(*(u16 *)from, (u16 __user *)to, 2, ret);
+ 			__put_user_size_allowed(*(u16 *)from, (u16 __user *)to, 2, ret);
  			break;
  		case 4:
- 			__put_user_size(*(u32 *)from, (u32 __user *)to, 4, ret);
+ 			__put_user_size_allowed(*(u32 *)from, (u32 __user *)to, 4, ret);
  			break;
  		case 8:
- 			__put_user_size(*(u64 *)from, (u64 __user *)to, 8, ret);
+ 			__put_user_size_allowed(*(u64 *)from, (u64 __user *)to, 8, ret);
  			break;
  		}
  		if (ret == 0)
  			return 0;
  	}

+ 	return __copy_tofrom_user(to, (__force const void __user *)from, n);
+ }
+ 
+ static inline unsigned long
+ raw_copy_to_user(void __user *to, const void *from, unsigned long n)
+ {
+ 	unsigned long ret;
+ 
  	allow_write_to_user(to, n);
- 	ret = __copy_tofrom_user(to, (__force const void __user *)from, n);
+ 	ret = raw_copy_to_user_allowed(to, from, n);
  	prevent_write_to_user(to, n);
  	return ret;
  }
···
  		unsigned size);
  extern void memcpy_page_flushcache(char *to, struct page *page, size_t offset,
  				   size_t len);
+ 
+ static __must_check inline bool user_access_begin(const void __user *ptr, size_t len)
+ {
+ 	if (unlikely(!access_ok(ptr, len)))
+ 		return false;
+ 	allow_read_write_user((void __user *)ptr, ptr, len);
+ 	return true;
+ }
+ #define user_access_begin	user_access_begin
+ #define user_access_end		prevent_current_access_user
+ #define user_access_save	prevent_user_access_return
+ #define user_access_restore	restore_user_access
+ 
+ #define unsafe_op_wrap(op, err) do { if (unlikely(op)) goto err; } while (0)
+ #define unsafe_get_user(x, p, e) unsafe_op_wrap(__get_user_allowed(x, p), e)
+ #define unsafe_put_user(x, p, e) unsafe_op_wrap(__put_user_allowed(x, p), e)
+ #define unsafe_copy_to_user(d, s, l, e) \
+ 	unsafe_op_wrap(raw_copy_to_user_allowed(d, s, l), e)

  #endif	/* _ARCH_POWERPC_UACCESS_H */
+10 -4
arch/powerpc/include/asm/vdso_datapage.h
···
  	__u32 stamp_sec_fraction;	/* fractional seconds of stamp_xtime */
  	__u32 hrtimer_res;		/* hrtimer resolution */
  	__u32 syscall_map_32[SYSCALL_MAP_SIZE];	/* map of syscalls */
- 	__u32 dcache_block_size;	/* L1 d-cache block size */
- 	__u32 icache_block_size;	/* L1 i-cache block size */
- 	__u32 dcache_log_block_size;	/* L1 d-cache log block size */
- 	__u32 icache_log_block_size;	/* L1 i-cache log block size */
  };

  #endif /* CONFIG_PPC64 */

  extern struct vdso_data *vdso_data;
+ 
+ #else /* __ASSEMBLY__ */
+ 
+ .macro get_datapage ptr, tmp
+ 	bcl	20, 31, .+4
+ 	mflr	\ptr
+ 	addi	\ptr, \ptr, (__kernel_datapage_offset - (.-4))@l
+ 	lwz	\tmp, 0(\ptr)
+ 	add	\ptr, \tmp, \ptr
+ .endm

  #endif /* __ASSEMBLY__ */
+42 -42
arch/powerpc/include/asm/xive.h
···

  static inline bool xive_enabled(void) { return __xive_enabled; }

- extern bool xive_spapr_init(void);
- extern bool xive_native_init(void);
- extern void xive_smp_probe(void);
- extern int  xive_smp_prepare_cpu(unsigned int cpu);
- extern void xive_smp_setup_cpu(void);
- extern void xive_smp_disable_cpu(void);
- extern void xive_teardown_cpu(void);
- extern void xive_shutdown(void);
- extern void xive_flush_interrupt(void);
+ bool xive_spapr_init(void);
+ bool xive_native_init(void);
+ void xive_smp_probe(void);
+ int  xive_smp_prepare_cpu(unsigned int cpu);
+ void xive_smp_setup_cpu(void);
+ void xive_smp_disable_cpu(void);
+ void xive_teardown_cpu(void);
+ void xive_shutdown(void);
+ void xive_flush_interrupt(void);

  /* xmon hook */
- extern void xmon_xive_do_dump(int cpu);
- extern int xmon_xive_get_irq_config(u32 hw_irq, struct irq_data *d);
+ void xmon_xive_do_dump(int cpu);
+ int xmon_xive_get_irq_config(u32 hw_irq, struct irq_data *d);

  /* APIs used by KVM */
- extern u32 xive_native_default_eq_shift(void);
- extern u32 xive_native_alloc_vp_block(u32 max_vcpus);
- extern void xive_native_free_vp_block(u32 vp_base);
- extern int xive_native_populate_irq_data(u32 hw_irq,
- 					 struct xive_irq_data *data);
- extern void xive_cleanup_irq_data(struct xive_irq_data *xd);
- extern u32 xive_native_alloc_irq(void);
- extern void xive_native_free_irq(u32 irq);
- extern int xive_native_configure_irq(u32 hw_irq, u32 target, u8 prio, u32 sw_irq);
+ u32 xive_native_default_eq_shift(void);
+ u32 xive_native_alloc_vp_block(u32 max_vcpus);
+ void xive_native_free_vp_block(u32 vp_base);
+ int xive_native_populate_irq_data(u32 hw_irq,
+ 				  struct xive_irq_data *data);
+ void xive_cleanup_irq_data(struct xive_irq_data *xd);
+ u32 xive_native_alloc_irq(void);
+ void xive_native_free_irq(u32 irq);
+ int xive_native_configure_irq(u32 hw_irq, u32 target, u8 prio, u32 sw_irq);

- extern int xive_native_configure_queue(u32 vp_id, struct xive_q *q, u8 prio,
- 				       __be32 *qpage, u32 order, bool can_escalate);
- extern void xive_native_disable_queue(u32 vp_id, struct xive_q *q, u8 prio);
+ int xive_native_configure_queue(u32 vp_id, struct xive_q *q, u8 prio,
+ 				__be32 *qpage, u32 order, bool can_escalate);
+ void xive_native_disable_queue(u32 vp_id, struct xive_q *q, u8 prio);

- extern void xive_native_sync_source(u32 hw_irq);
- extern void xive_native_sync_queue(u32 hw_irq);
- extern bool is_xive_irq(struct irq_chip *chip);
- extern int xive_native_enable_vp(u32 vp_id, bool single_escalation);
- extern int xive_native_disable_vp(u32 vp_id);
- extern int xive_native_get_vp_info(u32 vp_id, u32 *out_cam_id, u32 *out_chip_id);
- extern bool xive_native_has_single_escalation(void);
+ void xive_native_sync_source(u32 hw_irq);
+ void xive_native_sync_queue(u32 hw_irq);
+ bool is_xive_irq(struct irq_chip *chip);
+ int xive_native_enable_vp(u32 vp_id, bool single_escalation);
+ int xive_native_disable_vp(u32 vp_id);
+ int xive_native_get_vp_info(u32 vp_id, u32 *out_cam_id, u32 *out_chip_id);
+ bool xive_native_has_single_escalation(void);

- extern int xive_native_get_queue_info(u32 vp_id, uint32_t prio,
- 				      u64 *out_qpage,
- 				      u64 *out_qsize,
- 				      u64 *out_qeoi_page,
- 				      u32 *out_escalate_irq,
- 				      u64 *out_qflags);
+ int xive_native_get_queue_info(u32 vp_id, uint32_t prio,
+ 			       u64 *out_qpage,
+ 			       u64 *out_qsize,
+ 			       u64 *out_qeoi_page,
+ 			       u32 *out_escalate_irq,
+ 			       u64 *out_qflags);

- extern int xive_native_get_queue_state(u32 vp_id, uint32_t prio, u32 *qtoggle,
- 				       u32 *qindex);
- extern int xive_native_set_queue_state(u32 vp_id, uint32_t prio, u32 qtoggle,
- 				       u32 qindex);
- extern int xive_native_get_vp_state(u32 vp_id, u64 *out_state);
- extern bool xive_native_has_queue_state_support(void);
+ int xive_native_get_queue_state(u32 vp_id, uint32_t prio, u32 *qtoggle,
+ 				u32 *qindex);
+ int xive_native_set_queue_state(u32 vp_id, uint32_t prio, u32 qtoggle,
+ 				u32 qindex);
+ int xive_native_get_vp_state(u32 vp_id, u64 *out_state);
+ bool xive_native_has_queue_state_support(void);

  #else
+1 -2
arch/powerpc/kernel/Makefile
···
  obj-$(CONFIG_PPC_BARRIER_NOSPEC) += security.o
  obj-$(CONFIG_PPC64)		+= vdso64/
  obj-$(CONFIG_ALTIVEC)		+= vecemu.o
- obj-$(CONFIG_PPC_970_NAP)	+= idle_power4.o
- obj-$(CONFIG_PPC_P7_NAP)	+= idle_book3s.o
+ obj-$(CONFIG_PPC_BOOK3S_IDLE)	+= idle_book3s.o
  procfs-y			:= proc_powerpc.o
  obj-$(CONFIG_PROC_FS)		+= $(procfs-y)
  rtaspci-$(CONFIG_PPC64)-$(CONFIG_PCI)	:= rtas_pci.o
+10 -1
arch/powerpc/kernel/asm-offsets.c
··· 127 127 OFFSET(KSP_VSID, thread_struct, ksp_vsid); 128 128 #else /* CONFIG_PPC64 */ 129 129 OFFSET(PGDIR, thread_struct, pgdir); 130 + #ifdef CONFIG_VMAP_STACK 131 + OFFSET(SRR0, thread_struct, srr0); 132 + OFFSET(SRR1, thread_struct, srr1); 133 + OFFSET(DAR, thread_struct, dar); 134 + OFFSET(DSISR, thread_struct, dsisr); 135 + #endif 130 136 #ifdef CONFIG_SPE 131 137 OFFSET(THREAD_EVR0, thread_struct, evr[0]); 132 138 OFFSET(THREAD_ACC, thread_struct, acc); ··· 395 389 OFFSET(STAMP_XTIME_NSEC, vdso_data, stamp_xtime_nsec); 396 390 OFFSET(STAMP_SEC_FRAC, vdso_data, stamp_sec_fraction); 397 391 OFFSET(CLOCK_HRTIMER_RES, vdso_data, hrtimer_res); 392 + #ifdef CONFIG_PPC64 398 393 OFFSET(CFG_ICACHE_BLOCKSZ, vdso_data, icache_block_size); 399 394 OFFSET(CFG_DCACHE_BLOCKSZ, vdso_data, dcache_block_size); 400 395 OFFSET(CFG_ICACHE_LOGBLOCKSZ, vdso_data, icache_log_block_size); 401 396 OFFSET(CFG_DCACHE_LOGBLOCKSZ, vdso_data, dcache_log_block_size); 402 - #ifdef CONFIG_PPC64 403 397 OFFSET(CFG_SYSCALL_MAP64, vdso_data, syscall_map_64); 404 398 OFFSET(TVAL64_TV_SEC, __kernel_old_timeval, tv_sec); 405 399 OFFSET(TVAL64_TV_USEC, __kernel_old_timeval, tv_usec); ··· 419 413 DEFINE(CLOCK_MONOTONIC, CLOCK_MONOTONIC); 420 414 DEFINE(CLOCK_REALTIME_COARSE, CLOCK_REALTIME_COARSE); 421 415 DEFINE(CLOCK_MONOTONIC_COARSE, CLOCK_MONOTONIC_COARSE); 416 + DEFINE(CLOCK_MAX, CLOCK_TAI); 422 417 DEFINE(NSEC_PER_SEC, NSEC_PER_SEC); 418 + DEFINE(EINVAL, EINVAL); 419 + DEFINE(KTIME_LOW_RES, KTIME_LOW_RES); 423 420 424 421 #ifdef CONFIG_BUG 425 422 DEFINE(BUG_ENTRY_SIZE, sizeof(struct bug_entry));
+8 -5
arch/powerpc/kernel/dt_cpu_ftrs.c
··· 727 727 /* 728 728 * Not all quirks can be derived from the cpufeatures device tree. 729 729 */ 730 - if ((version & 0xffffefff) == 0x004e0200) 731 - ; /* DD2.0 has no feature flag */ 732 - else if ((version & 0xffffefff) == 0x004e0201) 730 + if ((version & 0xffffefff) == 0x004e0200) { 731 + /* DD2.0 has no feature flag */ 732 + cur_cpu_spec->cpu_features |= CPU_FTR_P9_RADIX_PREFETCH_BUG; 733 + } else if ((version & 0xffffefff) == 0x004e0201) { 733 734 cur_cpu_spec->cpu_features |= CPU_FTR_POWER9_DD2_1; 734 - else if ((version & 0xffffefff) == 0x004e0202) { 735 + cur_cpu_spec->cpu_features |= CPU_FTR_P9_RADIX_PREFETCH_BUG; 736 + } else if ((version & 0xffffefff) == 0x004e0202) { 735 737 cur_cpu_spec->cpu_features |= CPU_FTR_P9_TM_HV_ASSIST; 736 738 cur_cpu_spec->cpu_features |= CPU_FTR_P9_TM_XER_SO_BUG; 737 739 cur_cpu_spec->cpu_features |= CPU_FTR_POWER9_DD2_1; 738 - } else if ((version & 0xffff0000) == 0x004e0000) 740 + } else if ((version & 0xffff0000) == 0x004e0000) { 739 741 /* DD2.1 and up have DD2_1 */ 740 742 cur_cpu_spec->cpu_features |= CPU_FTR_POWER9_DD2_1; 743 + } 741 744 742 745 if ((version & 0xffff0000) == 0x004e0000) { 743 746 cur_cpu_spec->cpu_features &= ~(CPU_FTR_DAWR);
+18 -14
arch/powerpc/kernel/eeh.c
··· 503 503 rc = 1; 504 504 if (pe->state & EEH_PE_ISOLATED) { 505 505 pe->check_count++; 506 - if (pe->check_count % EEH_MAX_FAILS == 0) { 506 + if (pe->check_count == EEH_MAX_FAILS) { 507 507 dn = pci_device_to_OF_node(dev); 508 508 if (dn) 509 509 location = of_get_property(dn, "ibm,loc-code", ··· 1191 1191 eeh_rmv_from_parent_pe(edev); 1192 1192 eeh_addr_cache_rmv_dev(edev->pdev); 1193 1193 eeh_sysfs_remove_device(edev->pdev); 1194 - edev->mode &= ~EEH_DEV_SYSFS; 1195 1194 1196 1195 /* 1197 1196 * We definitely should have the PCI device removed ··· 1295 1296 edev->pdev = NULL; 1296 1297 1297 1298 /* 1299 + * eeh_sysfs_remove_device() uses pci_dev_to_eeh_dev() so we need to 1300 + * remove the sysfs files before clearing dev.archdata.edev 1301 + */ 1302 + if (edev->mode & EEH_DEV_SYSFS) 1303 + eeh_sysfs_remove_device(dev); 1304 + 1305 + /* 1306 + * We're removing from the PCI subsystem, that means 1307 + * the PCI device driver can't support EEH or not 1308 + * well. So we rely on hotplug completely to do recovery 1309 + * for the specific PCI device. 1310 + */ 1311 + edev->mode |= EEH_DEV_NO_HANDLER; 1312 + 1313 + eeh_addr_cache_rmv_dev(dev); 1314 + 1315 + /* 1298 1316 * The flag "in_error" is used to trace EEH devices for VFs 1299 1317 * in error state or not. It's set in eeh_report_error(). If 1300 1318 * it's not set, eeh_report_{reset,resume}() won't be called ··· 1323 1307 eeh_rmv_from_parent_pe(edev); 1324 1308 else 1325 1309 edev->mode |= EEH_DEV_DISCONNECTED; 1326 - 1327 - /* 1328 - * We're removing from the PCI subsystem, that means 1329 - * the PCI device driver can't support EEH or not 1330 - * well. So we rely on hotplug completely to do recovery 1331 - * for the specific PCI device. 1332 - */ 1333 - edev->mode |= EEH_DEV_NO_HANDLER; 1334 - 1335 - eeh_addr_cache_rmv_dev(dev); 1336 - eeh_sysfs_remove_device(dev); 1337 - edev->mode &= ~EEH_DEV_SYSFS; 1338 1310 } 1339 1311 1340 1312 int eeh_unfreeze_pe(struct eeh_pe *pe)
+1 -9
arch/powerpc/kernel/eeh_cache.c
··· 159 159 160 160 static void __eeh_addr_cache_insert_dev(struct pci_dev *dev) 161 161 { 162 - struct pci_dn *pdn; 163 162 struct eeh_dev *edev; 164 163 int i; 165 164 166 - pdn = pci_get_pdn_by_devfn(dev->bus, dev->devfn); 167 - if (!pdn) { 168 - pr_warn("PCI: no pci dn found for dev=%s\n", 169 - pci_name(dev)); 170 - return; 171 - } 172 - 173 - edev = pdn_to_eeh_dev(pdn); 165 + edev = pci_dev_to_eeh_dev(dev); 174 166 if (!edev) { 175 167 pr_warn("PCI: no EEH dev found for %s\n", 176 168 pci_name(dev));
-6
arch/powerpc/kernel/eeh_driver.c
··· 525 525 526 526 pci_iov_remove_virtfn(edev->physfn, pdn->vf_index); 527 527 edev->pdev = NULL; 528 - 529 - /* 530 - * We have to set the VF PE number to invalid one, which is 531 - * required to plug the VF successfully. 532 - */ 533 - pdn->pe_number = IODA_INVALID_PE; 534 528 #endif 535 529 if (rmv_data) 536 530 list_add(&edev->rmv_entry, &rmv_data->removed_vf_list);
+11 -11
arch/powerpc/kernel/eeh_sysfs.c
··· 14 14 /** 15 15 * EEH_SHOW_ATTR -- Create sysfs entry for eeh statistic 16 16 * @_name: name of file in sysfs directory 17 - * @_memb: name of member in struct pci_dn to access 17 + * @_memb: name of member in struct eeh_dev to access 18 18 * @_format: printf format for display 19 19 * 20 20 * All of the attributes look very similar, so just ··· 75 75 76 76 static DEVICE_ATTR_RW(eeh_pe_state); 77 77 78 - #ifdef CONFIG_PCI_IOV 78 + #if defined(CONFIG_PCI_IOV) && defined(CONFIG_PPC_PSERIES) 79 79 static ssize_t eeh_notify_resume_show(struct device *dev, 80 80 struct device_attribute *attr, char *buf) 81 81 { ··· 86 86 if (!edev || !edev->pe) 87 87 return -ENODEV; 88 88 89 - pdn = pci_get_pdn(pdev); 90 89 return sprintf(buf, "%d\n", pdn->last_allow_rc); 91 90 } 92 91 ··· 131 132 #else 132 133 static inline int eeh_notify_resume_add(struct pci_dev *pdev) { return 0; } 133 134 static inline void eeh_notify_resume_remove(struct pci_dev *pdev) { } 134 - #endif /* CONFIG_PCI_IOV */ 135 + #endif /* CONFIG_PCI_IOV && CONFIG PPC_PSERIES*/ 135 136 136 137 void eeh_sysfs_add_device(struct pci_dev *pdev) 137 138 { ··· 159 160 { 160 161 struct eeh_dev *edev = pci_dev_to_eeh_dev(pdev); 161 162 163 + if (!edev) { 164 + WARN_ON(eeh_enabled()); 165 + return; 166 + } 167 + 168 + edev->mode &= ~EEH_DEV_SYSFS; 169 + 162 170 /* 163 171 * The parent directory might have been removed. We needn't 164 172 * continue for that case. 165 173 */ 166 - if (!pdev->dev.kobj.sd) { 167 - if (edev) 168 - edev->mode &= ~EEH_DEV_SYSFS; 174 + if (!pdev->dev.kobj.sd) 169 175 return; 170 - } 171 176 172 177 device_remove_file(&pdev->dev, &dev_attr_eeh_mode); 173 178 device_remove_file(&pdev->dev, &dev_attr_eeh_pe_config_addr); 174 179 device_remove_file(&pdev->dev, &dev_attr_eeh_pe_state); 175 180 176 181 eeh_notify_resume_remove(pdev); 177 - 178 - if (edev) 179 - edev->mode &= ~EEH_DEV_SYSFS; 180 182 }
+19 -13
arch/powerpc/kernel/entry_32.S
··· 140 140 stw r12,_CTR(r11) 141 141 stw r2,_XER(r11) 142 142 mfspr r12,SPRN_SPRG_THREAD 143 + tovirt_vmstack r12, r12 143 144 beq 2f /* if from user, fix up THREAD.regs */ 144 145 addi r2, r12, -THREAD 145 146 addi r11,r1,STACK_FRAME_OVERHEAD ··· 180 179 2: /* if from kernel, check interrupted DOZE/NAP mode and 181 180 * check for stack overflow 182 181 */ 183 - kuap_save_and_lock r11, r12, r9, r2, r0 182 + kuap_save_and_lock r11, r12, r9, r2, r6 184 183 addi r2, r12, -THREAD 184 + #ifndef CONFIG_VMAP_STACK 185 185 lwz r9,KSP_LIMIT(r12) 186 186 cmplw r1,r9 /* if r1 <= ksp_limit */ 187 187 ble- stack_ovf /* then the kernel stack overflowed */ 188 + #endif 188 189 5: 189 190 #if defined(CONFIG_PPC_BOOK3S_32) || defined(CONFIG_E500) 190 191 lwz r12,TI_LOCAL_FLAGS(r2) ··· 198 195 transfer_to_handler_cont: 199 196 3: 200 197 mflr r9 201 - tovirt(r2, r2) /* set r2 to current */ 198 + tovirt_novmstack r2, r2 /* set r2 to current */ 199 + tovirt_vmstack r9, r9 202 200 lwz r11,0(r9) /* virtual address of handler */ 203 201 lwz r9,4(r9) /* where to go when done */ 204 202 #if defined(CONFIG_PPC_8xx) && defined(CONFIG_PERF_EVENTS) ··· 288 284 rlwinm r9,r9,0,~MSR_EE 289 285 lwz r12,_LINK(r11) /* and return to address in LR */ 290 286 kuap_restore r11, r2, r3, r4, r5 287 + lwz r2, GPR2(r11) 291 288 b fast_exception_return 292 289 #endif 293 290 291 + #ifndef CONFIG_VMAP_STACK 294 292 /* 295 293 * On kernel stack overflow, load up an initial stack pointer 296 294 * and call StackOverflow(regs), which should not return. ··· 318 312 mtspr SPRN_SRR0,r10 319 313 mtspr SPRN_SRR1,r10 320 314 SYNC 321 315 RFI 316 + #endif 322 317 #ifdef CONFIG_TRACE_IRQFLAGS 323 318 trace_syscall_entry_irq_off: ··· 404 397 LOAD_REG_IMMEDIATE(r10,MSR_KERNEL) /* doesn't include MSR_EE */ 405 398 /* Note: We don't bother telling lockdep about it */ 406 399 SYNC 407 - MTMSRD(r10) 400 + mtmsr r10 408 401 lwz r9,TI_FLAGS(r2) 409 402 li r8,-MAX_ERRNO 410 403 andi. r0,r9,(_TIF_SYSCALL_DOTRACE|_TIF_SINGLESTEP|_TIF_USER_WORK_MASK|_TIF_PERSYSCALL_MASK)
··· 561 554 */ 562 555 ori r10,r10,MSR_EE 563 556 SYNC 564 - MTMSRD(r10) 557 + mtmsr r10 565 558 566 559 /* Save NVGPRS if they're not saved already */ 567 560 lwz r4,_TRAP(r1) ··· 628 621 */ 629 622 .globl handle_page_fault 630 623 handle_page_fault: 631 - stw r4,_DAR(r1) 632 624 addi r3,r1,STACK_FRAME_OVERHEAD 633 625 #ifdef CONFIG_PPC_BOOK3S_32 634 626 andis. r0,r5,DSISR_DABRMATCH@h ··· 703 697 and. r0,r0,r11 /* FP or altivec or SPE enabled? */ 704 698 beq+ 1f 705 699 andc r11,r11,r0 706 - MTMSRD(r11) 700 + mtmsr r11 707 701 isync 708 702 1: stw r11,_MSR(r1) 709 703 mfcr r10 ··· 837 831 /* Note: We don't bother telling lockdep about it */ 838 832 LOAD_REG_IMMEDIATE(r10,MSR_KERNEL) 839 833 SYNC /* Some chip revs have problems here... */ 840 - MTMSRD(r10) /* disable interrupts */ 834 + mtmsr r10 /* disable interrupts */ 841 835 842 836 lwz r3,_MSR(r1) /* Returning to user mode? */ 843 837 andi. r0,r3,MSR_PR ··· 1004 998 */ 1005 999 LOAD_REG_IMMEDIATE(r10,MSR_KERNEL & ~MSR_RI) 1006 1000 SYNC 1007 - MTMSRD(r10) /* clear the RI bit */ 1001 + mtmsr r10 /* clear the RI bit */ 1008 1002 .globl exc_exit_restart 1009 1003 exc_exit_restart: 1010 1004 lwz r12,_NIP(r1) ··· 1240 1234 #endif 1241 1235 ori r10,r10,MSR_EE 1242 1236 SYNC 1243 - MTMSRD(r10) /* hard-enable interrupts */ 1237 + mtmsr r10 /* hard-enable interrupts */ 1244 1238 bl schedule 1245 1239 recheck: 1246 1240 /* Note: And we don't tell it we are disabling them again ··· 1249 1243 */ 1250 1244 LOAD_REG_IMMEDIATE(r10,MSR_KERNEL) 1251 1245 SYNC 1252 - MTMSRD(r10) /* disable interrupts */ 1246 + mtmsr r10 /* disable interrupts */ 1253 1247 lwz r9,TI_FLAGS(r2) 1254 1248 andi. r0,r9,_TIF_NEED_RESCHED
1255 1249 bne- do_resched ··· 1258 1252 do_user_signal: /* r10 contains MSR_KERNEL here */ 1259 1253 ori r10,r10,MSR_EE 1260 1254 SYNC 1261 - MTMSRD(r10) /* hard-enable interrupts */ 1255 + mtmsr r10 /* hard-enable interrupts */ 1262 1256 /* save r13-r31 in the exception frame, if not already done */ 1263 1257 lwz r3,_TRAP(r1) 1264 1258 andi. r0,r3,1 ··· 1340 1334 lis r6,1f@ha /* physical return address for rtas */ 1341 1335 addi r6,r6,1f@l 1342 1336 tophys(r6,r6) 1343 - tophys(r7,r1) 1337 + tophys_novmstack r7, r1 1344 1338 lwz r8,RTASENTRY(r4) 1345 1339 lwz r4,RTASBASE(r4) 1346 1340 mfmsr r9 1347 1341 stw r9,8(r1) 1348 1342 LOAD_REG_IMMEDIATE(r0,MSR_KERNEL) 1349 1343 SYNC /* disable interrupts so SRR0/1 */ 1350 - MTMSRD(r0) /* don't get trashed */ 1344 + mtmsr r0 /* don't get trashed */ 1351 1345 li r9,MSR_KERNEL & ~(MSR_IR|MSR_DR) 1352 1346 mtlr r6 1353 1347 stw r7, THREAD + RTAS_SP(r2)
+6 -12
arch/powerpc/kernel/entry_64.S
··· 597 597 std r0,16(r1) 598 598 stdu r1,-SWITCH_FRAME_SIZE(r1) 599 599 /* r3-r13 are caller saved -- Cort */ 600 - SAVE_8GPRS(14, r1) 601 - SAVE_10GPRS(22, r1) 600 + SAVE_NVGPRS(r1) 602 601 std r0,_NIP(r1) /* Return to switch caller */ 603 602 mfcr r23 604 603 std r23,_CCR(r1) ··· 721 722 mtcrf 0xFF,r6 722 723 723 724 /* r3-r13 are destroyed -- Cort */ 724 - REST_8GPRS(14, r1) 725 - REST_10GPRS(22, r1) 725 + REST_NVGPRS(r1) 726 726 727 727 /* convert old thread to its task_struct for return value */ 728 728 addi r3,r3,-THREAD ··· 1153 1155 */ 1154 1156 SAVE_GPR(2, r1) /* Save the TOC */ 1155 1157 SAVE_GPR(13, r1) /* Save paca */ 1156 - SAVE_8GPRS(14, r1) /* Save the non-volatiles */ 1157 - SAVE_10GPRS(22, r1) /* ditto */ 1158 + SAVE_NVGPRS(r1) /* Save the non-volatiles */ 1158 1159 1159 1160 mfcr r4 1160 1161 std r4,_CCR(r1) ··· 1260 1263 /* relocation is on at this point */ 1261 1264 REST_GPR(2, r1) /* Restore the TOC */ 1262 1265 REST_GPR(13, r1) /* Restore paca */ 1263 - REST_8GPRS(14, r1) /* Restore the non-volatiles */ 1264 - REST_10GPRS(22, r1) /* ditto */ 1266 + REST_NVGPRS(r1) /* Restore the non-volatiles */ 1265 1267 1266 1268 GET_PACA(r13) 1267 1269 ··· 1294 1298 */ 1295 1299 SAVE_GPR(2, r1) 1296 1300 SAVE_GPR(13, r1) 1297 - SAVE_8GPRS(14, r1) 1298 - SAVE_10GPRS(22, r1) 1301 + SAVE_NVGPRS(r1) 1299 1302 mfcr r10 1300 1303 mfmsr r11 1301 1304 std r10,_CCR(r1) ··· 1338 1343 /* Restore other registers */ 1339 1344 REST_GPR(2, r1) 1340 1345 REST_GPR(13, r1) 1341 - REST_8GPRS(14, r1) 1342 - REST_10GPRS(22, r1) 1346 + REST_NVGPRS(r1) 1343 1347 ld r4,_CCR(r1) 1344 1348 mtcr r4 1345 1349
+14 -18
arch/powerpc/kernel/exceptions-64s.S
··· 1408 1408 * 1409 1409 * Call convention: 1410 1410 * 1411 - * syscall register convention is in Documentation/powerpc/syscall64-abi.rst 1412 - * 1413 - * For hypercalls, the register convention is as follows: 1414 - * r0 volatile 1415 - * r1-2 nonvolatile 1416 - * r3 volatile parameter and return value for status 1417 - * r4-r10 volatile input and output value 1418 - * r11 volatile hypercall number and output value 1419 - * r12 volatile input and output value 1420 - * r13-r31 nonvolatile 1421 - * LR nonvolatile 1422 - * CTR volatile 1423 - * XER volatile 1424 - * CR0-1 CR5-7 volatile 1425 - * CR2-4 nonvolatile 1426 - * Other registers nonvolatile 1411 + * syscall and hypercalls register conventions are documented in 1412 + * Documentation/powerpc/syscall64-abi.rst and 1413 + * Documentation/powerpc/papr_hcalls.rst respectively. 1427 1414 * 1428 1415 * The intersection of volatile registers that don't contain possible 1429 1416 * inputs is: cr0, xer, ctr. We may use these as scratch regs upon entry ··· 2195 2208 DEFINE_FIXED_SYMBOL(__end_interrupts) 2196 2209 2197 2210 #ifdef CONFIG_PPC_970_NAP 2211 + /* 2212 + * Called by exception entry code if _TLF_NAPPING was set, this clears 2213 + * the NAPPING flag, and redirects the exception exit to 2214 + * power4_fixup_nap_return. 2215 + */ 2216 + .globl power4_fixup_nap 2198 2217 EXC_COMMON_BEGIN(power4_fixup_nap) 2199 2218 andc r9,r9,r10 2200 2219 std r9,TI_LOCAL_FLAGS(r11) 2201 - ld r10,_LINK(r1) /* make idle task do the */ 2202 - std r10,_NIP(r1) /* equivalent of a blr */ 2220 + LOAD_REG_ADDR(r10, power4_idle_nap_return) 2221 + std r10,_NIP(r1) 2222 + blr 2223 + 2224 + power4_idle_nap_return: 2203 2225 blr 2204 2226 #endif 2205 2227
+3
arch/powerpc/kernel/fpu.S
··· 94 94 /* enable use of FP after return */ 95 95 #ifdef CONFIG_PPC32 96 96 mfspr r5,SPRN_SPRG_THREAD /* current task's THREAD (phys) */ 97 + #ifdef CONFIG_VMAP_STACK 98 + tovirt(r5, r5) 99 + #endif 97 100 lwz r4,THREAD_FPEXC_MODE(r5) 98 101 ori r9,r9,MSR_FP /* enable FP for current */ 99 102 or r9,r9,r4
+41 -25
arch/powerpc/kernel/head_32.S
··· 272 272 */ 273 273 . = 0x200 274 274 DO_KVM 0x200 275 - mtspr SPRN_SPRG_SCRATCH0,r10 276 - mtspr SPRN_SPRG_SCRATCH1,r11 277 - mfcr r10 275 + MachineCheck: 276 + EXCEPTION_PROLOG_0 277 + #ifdef CONFIG_VMAP_STACK 278 + li r11, MSR_KERNEL & ~(MSR_IR | MSR_RI) /* can take DTLB miss */ 279 + mtmsr r11 280 + isync 281 + #endif 278 282 #ifdef CONFIG_PPC_CHRP 279 283 mfspr r11, SPRN_SPRG_THREAD 284 + tovirt_vmstack r11, r11 280 285 lwz r11, RTAS_SP(r11) 281 286 cmpwi cr1, r11, 0 282 287 bne cr1, 7f 283 288 #endif /* CONFIG_PPC_CHRP */ 284 - EXCEPTION_PROLOG_1 289 + EXCEPTION_PROLOG_1 for_rtas=1 285 290 7: EXCEPTION_PROLOG_2 286 291 addi r3,r1,STACK_FRAME_OVERHEAD 287 292 #ifdef CONFIG_PPC_CHRP ··· 301 296 . = 0x300 302 297 DO_KVM 0x300 303 298 DataAccess: 304 - EXCEPTION_PROLOG 305 - mfspr r10,SPRN_DSISR 306 - stw r10,_DSISR(r11) 307 - #ifdef CONFIG_PPC_KUAP 308 - andis. r0,r10,(DSISR_BAD_FAULT_32S | DSISR_DABRMATCH | DSISR_PROTFAULT)@h 309 - #else 310 - andis. r0,r10,(DSISR_BAD_FAULT_32S|DSISR_DABRMATCH)@h 311 - #endif 312 - bne 1f /* if not, try to put a PTE */ 313 - mfspr r4,SPRN_DAR /* into the hash table */ 314 - rlwinm r3,r10,32-15,21,21 /* DSISR_STORE -> _PAGE_RW */ 299 + EXCEPTION_PROLOG handle_dar_dsisr=1 300 + get_and_save_dar_dsisr_on_stack r4, r5, r11 315 301 BEGIN_MMU_FTR_SECTION 302 + #ifdef CONFIG_PPC_KUAP 303 + andis. r0, r5, (DSISR_BAD_FAULT_32S | DSISR_DABRMATCH | DSISR_PROTFAULT)@h 304 + #else 305 + andis. r0, r5, (DSISR_BAD_FAULT_32S | DSISR_DABRMATCH)@h
306 + #endif 307 + bne handle_page_fault_tramp_2 /* if not, try to put a PTE */ 308 + rlwinm r3, r5, 32 - 15, 21, 21 /* DSISR_STORE -> _PAGE_RW */ 316 309 bl hash_page 317 - END_MMU_FTR_SECTION_IFSET(MMU_FTR_HPTE_TABLE) 318 - 1: lwz r5,_DSISR(r11) /* get DSISR value */ 319 - mfspr r4,SPRN_DAR 320 - EXC_XFER_LITE(0x300, handle_page_fault) 321 - 310 + b handle_page_fault_tramp_1 311 + FTR_SECTION_ELSE 312 + b handle_page_fault_tramp_2 313 + ALT_MMU_FTR_SECTION_END_IFSET(MMU_FTR_HPTE_TABLE) 322 314 323 315 /* Instruction access exception. */ 324 316 . = 0x400 ··· 331 329 END_MMU_FTR_SECTION_IFSET(MMU_FTR_HPTE_TABLE) 332 330 1: mr r4,r12 333 331 andis. r5,r9,DSISR_SRR1_MATCH_32S@h /* Filter relevant SRR1 bits */ 332 + stw r4, _DAR(r11) 334 333 EXC_XFER_LITE(0x400, handle_page_fault) 335 334 336 335 /* External interrupt */ ··· 341 338 . = 0x600 342 339 DO_KVM 0x600 343 340 Alignment: 344 - EXCEPTION_PROLOG 345 - mfspr r4,SPRN_DAR 346 - stw r4,_DAR(r11) 347 - mfspr r5,SPRN_DSISR 348 - stw r5,_DSISR(r11) 341 + EXCEPTION_PROLOG handle_dar_dsisr=1 342 + save_dar_dsisr_on_stack r4, r5, r11 349 343 addi r3,r1,STACK_FRAME_OVERHEAD 350 344 EXC_XFER_STD(0x600, alignment_exception) ··· 645 645 646 646 . = 0x3000
647 647 648 + handle_page_fault_tramp_1: 649 + lwz r4, _DAR(r11) 650 + lwz r5, _DSISR(r11) 651 + /* fall through */ 652 + handle_page_fault_tramp_2: 653 + EXC_XFER_LITE(0x300, handle_page_fault) 654 + 655 + stack_overflow: 656 + vmap_stack_overflow_exception 657 + 648 658 AltiVecUnavailable: 649 659 EXCEPTION_PROLOG 650 660 #ifdef CONFIG_ALTIVEC ··· 927 917 ori r4,r4,2f@l 928 918 tophys(r4,r4) 929 919 li r3,MSR_KERNEL & ~(MSR_IR|MSR_DR) 920 + 921 + .align 4 930 922 mtspr SPRN_SRR0,r4 931 923 mtspr SPRN_SRR1,r3 932 924 SYNC ··· 1070 1058 rlwinm r0, r6, 0, ~MSR_RI 1071 1059 rlwinm r0, r0, 0, ~MSR_EE 1072 1060 mtmsr r0 1061 + 1062 + .align 4 1073 1063 mtspr SPRN_SRR0, r4 1074 1064 mtspr SPRN_SRR1, r3 1075 1065 SYNC ··· 1111 1097 andi. r0,r3,MSR_DR|MSR_IR /* MMU enabled? */ 1112 1098 beqlr 1113 1099 andc r3,r3,r0 1100 + 1101 + .align 4 1114 1102 mtspr SPRN_SRR0,r4 1115 1103 mtspr SPRN_SRR1,r3 1116 1104 sync
+161 -19
arch/powerpc/kernel/head_32.h
··· 10 10 * We assume sprg3 has the physical address of the current 11 11 * task's thread_struct. 12 12 */ 13 + .macro EXCEPTION_PROLOG handle_dar_dsisr=0 14 + EXCEPTION_PROLOG_0 handle_dar_dsisr=\handle_dar_dsisr 15 + EXCEPTION_PROLOG_1 16 + EXCEPTION_PROLOG_2 handle_dar_dsisr=\handle_dar_dsisr 17 + .endm 13 18 14 - .macro EXCEPTION_PROLOG 19 + .macro EXCEPTION_PROLOG_0 handle_dar_dsisr=0 15 20 mtspr SPRN_SPRG_SCRATCH0,r10 16 21 mtspr SPRN_SPRG_SCRATCH1,r11 22 + #ifdef CONFIG_VMAP_STACK 23 + mfspr r10, SPRN_SPRG_THREAD 24 + .if \handle_dar_dsisr 25 + mfspr r11, SPRN_DAR 26 + stw r11, DAR(r10) 27 + mfspr r11, SPRN_DSISR 28 + stw r11, DSISR(r10) 29 + .endif 30 + mfspr r11, SPRN_SRR0 31 + stw r11, SRR0(r10) 32 + #endif 33 + mfspr r11, SPRN_SRR1 /* check whether user or kernel */ 34 + #ifdef CONFIG_VMAP_STACK 35 + stw r11, SRR1(r10) 36 + #endif 17 37 mfcr r10 18 - EXCEPTION_PROLOG_1 19 - EXCEPTION_PROLOG_2 38 + andi. r11, r11, MSR_PR 20 39 .endm 21 40 22 - .macro EXCEPTION_PROLOG_1 23 - mfspr r11,SPRN_SRR1 /* check whether user or kernel */ 24 - andi. r11,r11,MSR_PR 41 + .macro EXCEPTION_PROLOG_1 for_rtas=0 42 + #ifdef CONFIG_VMAP_STACK 43 + .ifeq \for_rtas 44 + li r11, MSR_KERNEL & ~(MSR_IR | MSR_RI) /* can take DTLB miss */ 45 + mtmsr r11 46 + isync 47 + .endif 48 + subi r11, r1, INT_FRAME_SIZE /* use r1 if kernel */ 49 + #else 25 50 tophys(r11,r1) /* use tophys(r1) if kernel */ 51 + subi r11, r11, INT_FRAME_SIZE /* alloc exc. frame */ 52 + #endif 26 53 beq 1f 27 54 mfspr r11,SPRN_SPRG_THREAD 55 + tovirt_vmstack r11, r11 28 56 lwz r11,TASK_STACK-THREAD(r11) 29 - addi r11,r11,THREAD_SIZE 30 - tophys(r11,r11) 31 - 1: subi r11,r11,INT_FRAME_SIZE /* alloc exc. frame */
57 + addi r11, r11, THREAD_SIZE - INT_FRAME_SIZE 58 + tophys_novmstack r11, r11 59 + 1: 60 + #ifdef CONFIG_VMAP_STACK 61 + mtcrf 0x7f, r11 62 + bt 32 - THREAD_ALIGN_SHIFT, stack_overflow 63 + #endif 32 64 .endm 33 65 34 - .macro EXCEPTION_PROLOG_2 66 + .macro EXCEPTION_PROLOG_2 handle_dar_dsisr=0 35 67 stw r10,_CCR(r11) /* save registers */ 36 68 stw r12,GPR12(r11) 37 69 stw r9,GPR9(r11) ··· 73 41 stw r12,GPR11(r11) 74 42 mflr r10 75 43 stw r10,_LINK(r11) 44 + #ifdef CONFIG_VMAP_STACK 45 + mfspr r12, SPRN_SPRG_THREAD 46 + tovirt(r12, r12) 47 + .if \handle_dar_dsisr 48 + lwz r10, DAR(r12) 49 + stw r10, _DAR(r11) 50 + lwz r10, DSISR(r12) 51 + stw r10, _DSISR(r11) 52 + .endif 53 + lwz r9, SRR1(r12) 54 + lwz r12, SRR0(r12) 55 + #else 76 56 mfspr r12,SPRN_SRR0 77 57 mfspr r9,SPRN_SRR1 58 + #endif 78 59 stw r1,GPR1(r11) 79 60 stw r1,0(r11) 80 - tovirt(r1,r11) /* set new kernel sp */ 61 + tovirt_novmstack r1, r11 /* set new kernel sp */ 81 62 #ifdef CONFIG_40x 82 63 rlwinm r9,r9,0,14,12 /* clear MSR_WE (necessary?) */
83 64 #else 65 + #ifdef CONFIG_VMAP_STACK 66 + li r10, MSR_KERNEL & ~MSR_IR /* can take exceptions */ 67 + #else 84 68 li r10,MSR_KERNEL & ~(MSR_IR|MSR_DR) /* can take exceptions */ 85 - MTMSRD(r10) /* (except for mach check in rtas) */ 69 + #endif 70 + mtmsr r10 /* (except for mach check in rtas) */ 86 71 #endif 87 72 stw r0,GPR0(r11) 88 73 lis r10,STACK_FRAME_REGS_MARKER@ha /* exception frame marker */ ··· 111 62 112 63 .macro SYSCALL_ENTRY trapno 113 64 mfspr r12,SPRN_SPRG_THREAD 65 + #ifdef CONFIG_VMAP_STACK 66 + mfspr r9, SPRN_SRR0 67 + mfspr r11, SPRN_SRR1 68 + stw r9, SRR0(r12) 69 + stw r11, SRR1(r12) 70 + #endif 114 71 mfcr r10 115 72 lwz r11,TASK_STACK-THREAD(r12) 116 - mflr r9 117 - addi r11,r11,THREAD_SIZE - INT_FRAME_SIZE 118 73 rlwinm r10,r10,0,4,2 /* Clear SO bit in CR */ 119 - tophys(r11,r11) 74 + addi r11, r11, THREAD_SIZE - INT_FRAME_SIZE 75 + #ifdef CONFIG_VMAP_STACK 76 + li r9, MSR_KERNEL & ~(MSR_IR | MSR_RI) /* can take DTLB miss */ 77 + mtmsr r9 78 + isync 79 + #endif 80 + tovirt_vmstack r12, r12 81 + tophys_novmstack r11, r11 82 + mflr r9 120 83 stw r10,_CCR(r11) /* save registers */ 84 + stw r9, _LINK(r11) 85 + #ifdef CONFIG_VMAP_STACK 86 + lwz r10, SRR0(r12) 87 + lwz r9, SRR1(r12) 88 + #else 121 89 mfspr r10,SPRN_SRR0 122 - stw r9,_LINK(r11) 123 90 mfspr r9,SPRN_SRR1 91 + #endif 124 92 stw r1,GPR1(r11) 125 93 stw r1,0(r11) 126 - tovirt(r1,r11) /* set new kernel sp */ 94 + tovirt_novmstack r1, r11 /* set new kernel sp */ 127 95 stw r10,_NIP(r11) 128 96 #ifdef CONFIG_40x 129 97 rlwinm r9,r9,0,14,12 /* clear MSR_WE (necessary?) */
130 98 #else 99 + #ifdef CONFIG_VMAP_STACK 100 + LOAD_REG_IMMEDIATE(r10, MSR_KERNEL & ~MSR_IR) /* can take exceptions */ 101 + #else 131 102 LOAD_REG_IMMEDIATE(r10, MSR_KERNEL & ~(MSR_IR|MSR_DR)) /* can take exceptions */ 132 - MTMSRD(r10) /* (except for mach check in rtas) */ 103 + #endif 104 + mtmsr r10 /* (except for mach check in rtas) */ 133 105 #endif 134 106 lis r10,STACK_FRAME_REGS_MARKER@ha /* exception frame marker */ 135 107 stw r2,GPR2(r11) ··· 188 118 #endif 189 119 190 120 3: 191 - tovirt(r2, r2) /* set r2 to current */ 121 + tovirt_novmstack r2, r2 /* set r2 to current */ 192 122 lis r11, transfer_to_syscall@h 193 123 ori r11, r11, transfer_to_syscall@l 194 124 #ifdef CONFIG_TRACE_IRQFLAGS ··· 209 139 mtspr SPRN_SRR0,r11 210 140 SYNC 211 141 RFI /* jump to handler, enable MMU */ 142 + .endm 143 + 144 + .macro save_dar_dsisr_on_stack reg1, reg2, sp 145 + #ifndef CONFIG_VMAP_STACK 146 + mfspr \reg1, SPRN_DAR 147 + mfspr \reg2, SPRN_DSISR 148 + stw \reg1, _DAR(\sp) 149 + stw \reg2, _DSISR(\sp) 150 + #endif 151 + .endm 152 + 153 + .macro get_and_save_dar_dsisr_on_stack reg1, reg2, sp 154 + #ifdef CONFIG_VMAP_STACK 155 + lwz \reg1, _DAR(\sp) 156 + lwz \reg2, _DSISR(\sp) 157 + #else 158 + save_dar_dsisr_on_stack \reg1, \reg2, \sp 159 + #endif 160 + .endm 161 + 162 + .macro tovirt_vmstack dst, src 163 + #ifdef CONFIG_VMAP_STACK 164 + tovirt(\dst, \src) 165 + #else 166 + .ifnc \dst, \src 167 + mr \dst, \src 168 + .endif 169 + #endif 170 + .endm 171 + 172 + .macro tovirt_novmstack dst, src 173 + #ifndef CONFIG_VMAP_STACK 174 + tovirt(\dst, \src) 175 + #else 176 + .ifnc \dst, \src 177 + mr \dst, \src 178 + .endif 179 + #endif 180 + .endm 181 + 182 + .macro tophys_novmstack dst, src 183 + #ifndef CONFIG_VMAP_STACK 184 + tophys(\dst, \src) 185 + #else 186 + .ifnc \dst, \src 187 + mr \dst, \src 188 + .endif 189 + #endif 212 190 .endm 213 191 214 192 /* ··· 304 186 #define EXC_XFER_LITE(n, hdlr) \ 305 187 EXC_XFER_TEMPLATE(hdlr, n+1, MSR_KERNEL, transfer_to_handler, \
306 188 ret_from_except) 189 + 190 + .macro vmap_stack_overflow_exception 191 + #ifdef CONFIG_VMAP_STACK 192 + #ifdef CONFIG_SMP 193 + mfspr r11, SPRN_SPRG_THREAD 194 + tovirt(r11, r11) 195 + lwz r11, TASK_CPU - THREAD(r11) 196 + slwi r11, r11, 3 197 + addis r11, r11, emergency_ctx@ha 198 + #else 199 + lis r11, emergency_ctx@ha 200 + #endif 201 + lwz r11, emergency_ctx@l(r11) 202 + cmpwi cr1, r11, 0 203 + bne cr1, 1f 204 + lis r11, init_thread_union@ha 205 + addi r11, r11, init_thread_union@l 206 + 1: addi r11, r11, THREAD_SIZE - INT_FRAME_SIZE 207 + EXCEPTION_PROLOG_2 208 + SAVE_NVGPRS(r11) 209 + addi r3, r1, STACK_FRAME_OVERHEAD 210 + EXC_XFER_STD(0, stack_overflow_exception) 211 + #endif 212 + .endm 307 213 308 214 #endif /* __HEAD_32_H__ */
+2
arch/powerpc/kernel/head_40x.S
··· 313 313 START_EXCEPTION(0x0400, InstructionAccess) 314 314 EXCEPTION_PROLOG 315 315 mr r4,r12 /* Pass SRR0 as arg2 */ 316 + stw r4, _DEAR(r11) 316 317 li r5,0 /* Pass zero as arg3 */ 317 318 EXC_XFER_LITE(0x400, handle_page_fault) 318 319 ··· 677 676 mfspr r5,SPRN_ESR /* Grab the ESR, save it, pass arg3 */ 678 677 stw r5,_ESR(r11) 679 678 mfspr r4,SPRN_DEAR /* Grab the DEAR, save it, pass arg2 */ 679 + stw r4, _DEAR(r11) 680 680 EXC_XFER_LITE(0x300, handle_page_fault) 681 681 682 682 /* Other PowerPC processors, namely those derived from the 6xx-series
+88 -101
arch/powerpc/kernel/head_8xx.S
··· 127 127 /* Machine check */ 128 128 . = 0x200 129 129 MachineCheck: 130 - EXCEPTION_PROLOG 131 - mfspr r4,SPRN_DAR 132 - stw r4,_DAR(r11) 133 - li r5,RPN_PATTERN 134 - mtspr SPRN_DAR,r5 /* Tag DAR, to be used in DTLB Error */ 135 - mfspr r5,SPRN_DSISR 136 - stw r5,_DSISR(r11) 130 + EXCEPTION_PROLOG handle_dar_dsisr=1 131 + save_dar_dsisr_on_stack r4, r5, r11 132 + li r6, RPN_PATTERN 133 + mtspr SPRN_DAR, r6 /* Tag DAR, to be used in DTLB Error */ 137 134 addi r3,r1,STACK_FRAME_OVERHEAD 138 135 EXC_XFER_STD(0x200, machine_check_exception) 139 - 140 - /* Data access exception. 141 - * This is "never generated" by the MPC8xx. 142 - */ 143 - . = 0x300 144 - DataAccess: 145 - 146 - /* Instruction access exception. 147 - * This is "never generated" by the MPC8xx. 148 - */ 149 - . = 0x400 150 - InstructionAccess: 151 136 152 137 /* External interrupt */ 153 138 EXCEPTION(0x500, HardwareInterrupt, do_IRQ, EXC_XFER_LITE) ··· 140 155 /* Alignment exception */ 141 156 . = 0x600 142 157 Alignment: 143 - EXCEPTION_PROLOG 144 - mfspr r4,SPRN_DAR 145 - stw r4,_DAR(r11) 146 - li r5,RPN_PATTERN 147 - mtspr SPRN_DAR,r5 /* Tag DAR, to be used in DTLB Error */ 148 - mfspr r5,SPRN_DSISR 149 - stw r5,_DSISR(r11) 158 + EXCEPTION_PROLOG handle_dar_dsisr=1 159 + save_dar_dsisr_on_stack r4, r5, r11 160 + li r6, RPN_PATTERN 161 + mtspr SPRN_DAR, r6 /* Tag DAR, to be used in DTLB Error */ 150 162 addi r3,r1,STACK_FRAME_OVERHEAD 151 - EXC_XFER_STD(0x600, alignment_exception) 163 + b .Lalignment_exception_ool 152 164 153 165 /* Program check exception */ 154 166 EXCEPTION(0x700, ProgramCheck, program_check_exception, EXC_XFER_STD) 155 167 156 - /* No FPU on MPC8xx. This exception is not supposed to happen. 
157 - */ 158 - EXCEPTION(0x800, FPUnavailable, unknown_exception, EXC_XFER_STD) 159 - 160 168 /* Decrementer */ 161 169 EXCEPTION(0x900, Decrementer, timer_interrupt, EXC_XFER_LITE) 162 170 163 - EXCEPTION(0xa00, Trap_0a, unknown_exception, EXC_XFER_STD) 164 - EXCEPTION(0xb00, Trap_0b, unknown_exception, EXC_XFER_STD) 171 + /* With VMAP_STACK there's not enough room for this at 0x600 */ 172 + . = 0xa00 173 + .Lalignment_exception_ool: 174 + EXC_XFER_STD(0x600, alignment_exception) 165 175 166 176 /* System call */ 167 177 . = 0xc00 ··· 165 185 166 186 /* Single step - not used on 601 */ 167 187 EXCEPTION(0xd00, SingleStep, single_step_exception, EXC_XFER_STD) 168 - EXCEPTION(0xe00, Trap_0e, unknown_exception, EXC_XFER_STD) 169 - EXCEPTION(0xf00, Trap_0f, unknown_exception, EXC_XFER_STD) 170 188 171 189 /* On the MPC8xx, this is a software emulation interrupt. It occurs 172 190 * for all unimplemented and illegal instructions. 173 191 */ 174 192 EXCEPTION(0x1000, SoftEmu, program_check_exception, EXC_XFER_STD) 175 - 176 - /* Called from DataStoreTLBMiss when perf TLB misses events are activated */ 177 - #ifdef CONFIG_PERF_EVENTS 178 - patch_site 0f, patch__dtlbmiss_perf 179 - 0: lwz r10, (dtlb_miss_counter - PAGE_OFFSET)@l(0) 180 - addi r10, r10, 1 181 - stw r10, (dtlb_miss_counter - PAGE_OFFSET)@l(0) 182 - mfspr r10, SPRN_SPRG_SCRATCH0 183 - mfspr r11, SPRN_SPRG_SCRATCH1 184 - rfi 185 - #endif 186 193 187 194 . = 0x1100 188 195 /* ··· 310 343 311 344 . = 0x1200
312 345 DataStoreTLBMiss: 313 - mtspr SPRN_SPRG_SCRATCH0, r10 314 - mtspr SPRN_SPRG_SCRATCH1, r11 346 + mtspr SPRN_DAR, r10 347 + mtspr SPRN_M_TW, r11 315 348 mfcr r11 316 349 317 350 /* If we are faulting a kernel address, we have to use the ··· 376 409 mtspr SPRN_MD_RPN, r10 /* Update TLB entry */ 377 410 378 411 /* Restore registers */ 379 - mtspr SPRN_DAR, r11 /* Tag DAR */ 380 412 381 - 0: mfspr r10, SPRN_SPRG_SCRATCH0 382 - mfspr r11, SPRN_SPRG_SCRATCH1 413 + 0: mfspr r10, SPRN_DAR 414 + mtspr SPRN_DAR, r11 /* Tag DAR */ 415 + mfspr r11, SPRN_M_TW 383 416 rfi 384 417 patch_site 0b, patch__dtlbmiss_exit_1 385 418 ··· 395 428 mtspr SPRN_MD_RPN, r10 /* Update TLB entry */ 396 429 397 430 li r11, RPN_PATTERN 398 - mtspr SPRN_DAR, r11 /* Tag DAR */ 399 431 400 - 0: mfspr r10, SPRN_SPRG_SCRATCH0 401 - mfspr r11, SPRN_SPRG_SCRATCH1 432 + 0: mfspr r10, SPRN_DAR 433 + mtspr SPRN_DAR, r11 /* Tag DAR */ 434 + mfspr r11, SPRN_M_TW 402 435 rfi 403 436 patch_site 0b, patch__dtlbmiss_exit_2 404 437 ··· 432 465 mtspr SPRN_MD_RPN, r10 /* Update TLB entry */ 433 466 434 467 li r11, RPN_PATTERN 435 - mtspr SPRN_DAR, r11 /* Tag DAR */ 436 468 437 - 0: mfspr r10, SPRN_SPRG_SCRATCH0 438 - mfspr r11, SPRN_SPRG_SCRATCH1 469 + 0: mfspr r10, SPRN_DAR 470 + mtspr SPRN_DAR, r11 /* Tag DAR */ 471 + mfspr r11, SPRN_M_TW 439 472 rfi 440 473 patch_site 0b, patch__dtlbmiss_exit_3 441 474 ··· 453 486 tlbie r4 454 487 /* 0x400 is InstructionAccess exception, needed by bad_page_fault() */ 455 488 .Litlbie: 489 + stw r4, _DAR(r11) 456 490 EXC_XFER_LITE(0x400, handle_page_fault) 457 491 458 492 /* This is the data TLB error on the MPC8xx. This could be due to ··· 462 494 */ 463 495 . = 0x1400 464 496 DataTLBError: 465 - mtspr SPRN_SPRG_SCRATCH0, r10 466 - mtspr SPRN_SPRG_SCRATCH1, r11 467 - mfcr r10 468 - 497 + EXCEPTION_PROLOG_0 handle_dar_dsisr=1 469 498 mfspr r11, SPRN_DAR 470 - cmpwi cr0, r11, RPN_PATTERN 471 - beq- FixupDAR /* must be a buggy dcbX, icbi insn. */
499 + cmpwi cr1, r11, RPN_PATTERN 500 + beq- cr1, FixupDAR /* must be a buggy dcbX, icbi insn. */ 501 DARFixed:/* Return from dcbx instruction bug workaround */ 502 + #ifdef CONFIG_VMAP_STACK 503 + li r11, RPN_PATTERN 504 + mtspr SPRN_DAR, r11 /* Tag DAR, to be used in DTLB Error */ 505 + #endif 473 506 EXCEPTION_PROLOG_1 474 - EXCEPTION_PROLOG_2 475 - mfspr r5,SPRN_DSISR 476 - stw r5,_DSISR(r11) 477 - mfspr r4,SPRN_DAR 507 + EXCEPTION_PROLOG_2 handle_dar_dsisr=1 508 + get_and_save_dar_dsisr_on_stack r4, r5, r11 478 509 andis. r10,r5,DSISR_NOHPTE@h 479 510 beq+ .Ldtlbie 480 511 tlbie r4 481 512 .Ldtlbie: 513 + #ifndef CONFIG_VMAP_STACK 482 514 li r10,RPN_PATTERN 483 515 mtspr SPRN_DAR,r10 /* Tag DAR, to be used in DTLB Error */ 516 + #endif 484 517 /* 0x300 is DataAccess exception, needed by bad_page_fault() */ 485 518 EXC_XFER_LITE(0x300, handle_page_fault) 486 519 487 - EXCEPTION(0x1500, Trap_15, unknown_exception, EXC_XFER_STD) 488 - EXCEPTION(0x1600, Trap_16, unknown_exception, EXC_XFER_STD) 489 - EXCEPTION(0x1700, Trap_17, unknown_exception, EXC_XFER_STD) 490 - EXCEPTION(0x1800, Trap_18, unknown_exception, EXC_XFER_STD) 491 - EXCEPTION(0x1900, Trap_19, unknown_exception, EXC_XFER_STD) 492 - EXCEPTION(0x1a00, Trap_1a, unknown_exception, EXC_XFER_STD) 493 - EXCEPTION(0x1b00, Trap_1b, unknown_exception, EXC_XFER_STD) 520 + /* Called from DataStoreTLBMiss when perf TLB misses events are activated */ 521 + #ifdef CONFIG_PERF_EVENTS 522 + patch_site 0f, patch__dtlbmiss_perf 523 + 0: lwz r10, (dtlb_miss_counter - PAGE_OFFSET)@l(0) 524 + addi r10, r10, 1 525 + stw r10, (dtlb_miss_counter - PAGE_OFFSET)@l(0) 526 + mfspr r10, SPRN_DAR 527 + mtspr SPRN_DAR, r11 /* Tag DAR */ 528 + mfspr r11, SPRN_M_TW 529 + rfi 530 + #endif 531 + 532 + stack_overflow: 533 + vmap_stack_overflow_exception 494 534 495 535 /* On the MPC8xx, these next four traps are used for development 496 536 * support of breakpoints and such. Someday I will get around to 497 537 * using them.
498 538 */ 499 - . = 0x1c00 500 - DataBreakpoint: 501 - mtspr SPRN_SPRG_SCRATCH0, r10 502 - mtspr SPRN_SPRG_SCRATCH1, r11 503 - mfcr r10 504 - mfspr r11, SPRN_SRR0 505 - cmplwi cr0, r11, (.Ldtlbie - PAGE_OFFSET)@l 506 - cmplwi cr7, r11, (.Litlbie - PAGE_OFFSET)@l 507 - beq- cr0, 11f 508 - beq- cr7, 11f 539 + do_databreakpoint: 509 540 EXCEPTION_PROLOG_1 510 - EXCEPTION_PROLOG_2 541 + EXCEPTION_PROLOG_2 handle_dar_dsisr=1 511 542 addi r3,r1,STACK_FRAME_OVERHEAD 512 543 mfspr r4,SPRN_BAR 513 544 stw r4,_DAR(r11) 545 + #ifdef CONFIG_VMAP_STACK 546 + lwz r5,_DSISR(r11) 547 + #else 514 548 mfspr r5,SPRN_DSISR 549 + #endif 515 550 EXC_XFER_STD(0x1c00, do_break) 516 - 11: 551 + 552 + . = 0x1c00 553 + DataBreakpoint: 554 + EXCEPTION_PROLOG_0 handle_dar_dsisr=1 555 + mfspr r11, SPRN_SRR0 556 + cmplwi cr1, r11, (.Ldtlbie - PAGE_OFFSET)@l 557 + cmplwi cr7, r11, (.Litlbie - PAGE_OFFSET)@l 558 + cror 4*cr1+eq, 4*cr1+eq, 4*cr7+eq 559 + bne cr1, do_databreakpoint 517 560 mtcr r10 518 561 mfspr r10, SPRN_SPRG_SCRATCH0 519 562 mfspr r11, SPRN_SPRG_SCRATCH1 ··· 560 581 mfspr r10, SPRN_SRR0 561 582 mtspr SPRN_MD_EPN, r10 562 583 rlwinm r11, r10, 16, 0xfff8 563 - cmpli cr0, r11, PAGE_OFFSET@h 584 + cmpli cr1, r11, PAGE_OFFSET@h 564 585 mfspr r11, SPRN_M_TWB /* Get level 1 table */ 565 - blt+ 3f 586 + blt+ cr1, 3f 566 587 rlwinm r11, r10, 16, 0xfff8 567 588 568 589 0: cmpli cr7, r11, (PAGE_OFFSET + 0x1800000)@h ··· 577 598 3: 578 599 lwz r11, (swapper_pg_dir-PAGE_OFFSET)@l(r11) /* Get the level 1 entry */ 579 600 mtspr SPRN_MD_TWC, r11 580 - mtcr r11 601 + mtcrf 0x01, r11 581 602 mfspr r11, SPRN_MD_TWC 582 603 lwz r11, 0(r11) /* Get the pte */ 583 604 bt 28,200f /* bit 28 = Large page (8M) */ ··· 590 611 * no need to include them here */ 591 612 xoris r10, r11, 0x7c00 /* check if major OP code is 31 */ 592 613 rlwinm r10, r10, 0, 21, 5 593 - cmpwi cr0, r10, 2028 /* Is dcbz? */ 594 - beq+ 142f 595 - cmpwi cr0, r10, 940 /* Is dcbi? 
*/ 596 - beq+ 142f 597 - cmpwi cr0, r10, 108 /* Is dcbst? */ 598 - beq+ 144f /* Fix up store bit! */ 599 - cmpwi cr0, r10, 172 /* Is dcbf? */ 600 - beq+ 142f 601 - cmpwi cr0, r10, 1964 /* Is icbi? */ 602 - beq+ 142f 614 + cmpwi cr1, r10, 2028 /* Is dcbz? */ 615 + beq+ cr1, 142f 616 + cmpwi cr1, r10, 940 /* Is dcbi? */ 617 + beq+ cr1, 142f 618 + cmpwi cr1, r10, 108 /* Is dcbst? */ 619 + beq+ cr1, 144f /* Fix up store bit! */ 620 + cmpwi cr1, r10, 172 /* Is dcbf? */ 621 + beq+ cr1, 142f 622 + cmpwi cr1, r10, 1964 /* Is icbi? */ 623 + beq+ cr1, 142f 603 624 141: mfspr r10,SPRN_M_TW 604 625 b DARFixed /* Nope, go back to normal TLB processing */ 605 626 ··· 658 679 add r10, r10, r30 ;b 151f 659 680 add r10, r10, r31 660 681 151: 661 - rlwinm. r11,r11,19,24,28 /* offset into jump table for reg RA */ 662 - beq 152f /* if reg RA is zero, don't add it */ 682 + rlwinm r11,r11,19,24,28 /* offset into jump table for reg RA */ 683 + cmpwi cr1, r11, 0 684 + beq cr1, 152f /* if reg RA is zero, don't add it */ 663 685 addi r11, r11, 150b@l /* add start of table */ 664 686 mtctr r11 /* load ctr with jump address */ 665 687 rlwinm r11,r11,0,16,10 /* make sure we don't execute this more than once */ ··· 668 688 152: 669 689 mfdar r11 670 690 mtctr r11 /* restore ctr reg from DAR */ 691 + #ifdef CONFIG_VMAP_STACK 692 + mfspr r11, SPRN_SPRG_THREAD 693 + stw r10, DAR(r11) 694 + mfspr r10, SPRN_DSISR 695 + stw r10, DSISR(r11) 696 + #else 671 697 mtdar r10 /* save fault EA to DAR */ 698 + #endif 672 699 mfspr r10,SPRN_M_TW 673 700 b DARFixed /* Go back to normal TLB handling */ 674 701
+2
arch/powerpc/kernel/head_booke.h
···
 	mfspr	r5,SPRN_ESR;		/* Grab the ESR and save it */	\
 	stw	r5,_ESR(r11);						\
 	mfspr	r4,SPRN_DEAR;		/* Grab the DEAR */		\
+	stw	r4, _DEAR(r11);						\
 	EXC_XFER_LITE(0x0300, handle_page_fault)

 #define INSTRUCTION_STORAGE_EXCEPTION					\
···
 	mfspr	r5,SPRN_ESR;		/* Grab the ESR and save it */	\
 	stw	r5,_ESR(r11);						\
 	mr	r4,r12;			/* Pass SRR0 as arg2 */		\
+	stw	r4, _DEAR(r11);						\
 	li	r5,0;			/* Pass zero as arg3 */		\
 	EXC_XFER_LITE(0x0400, handle_page_fault)
+1
arch/powerpc/kernel/head_fsl_booke.S
···
 	mfspr	r4,SPRN_DEAR		/* Grab the DEAR, save it, pass arg2 */
 	andis.	r10,r5,(ESR_ILK|ESR_DLK)@h
 	bne	1f
+	stw	r4, _DEAR(r11)
 	EXC_XFER_LITE(0x0300, handle_page_fault)
1:
 	addi	r3,r1,STACK_FRAME_OVERHEAD
+8 -7
arch/powerpc/kernel/hw_breakpoint.c
···
 		/* DAWR region can't cross 512 bytes boundary */
 		if ((start_addr >> 9) != (end_addr >> 9))
 			return -EINVAL;
+	} else if (IS_ENABLED(CONFIG_PPC_8xx)) {
+		/* 8xx can setup a range without limitation */
+		max_len = U16_MAX;
 	}

 	if (hw_len > max_len)
···
 	}

 	info->type &= ~HW_BRK_TYPE_EXTRANEOUS_IRQ;
-	if (IS_ENABLED(CONFIG_PPC_8xx)) {
-		if (!dar_within_range(regs->dar, info))
-			info->type |= HW_BRK_TYPE_EXTRANEOUS_IRQ;
-	} else {
-		if (!stepping_handler(regs, bp, info))
-			goto out;
-	}
+	if (!dar_within_range(regs->dar, info))
+		info->type |= HW_BRK_TYPE_EXTRANEOUS_IRQ;
+
+	if (!IS_ENABLED(CONFIG_PPC_8xx) && !stepping_handler(regs, bp, info))
+		goto out;

 	/*
 	 * As a policy, the callback is invoked in a 'trigger-after-execute'
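The validation change above can be read as a pure function: DAWR caps the watched length and forbids the region from crossing a 512-byte boundary, while the 8xx comparators can cover an arbitrary range. A standalone sketch of that flow (the `DAWR_MAX_LEN` value and default DABR length are illustrative assumptions, not taken from this diff):

```c
#include <assert.h>
#include <stdint.h>

#define DAWR_MAX_LEN 512u	/* assumed DAWR length cap, for illustration */

/* Mirror of the validation flow in hw_breakpoint_validate: a DAWR region
 * may not cross a 512-byte boundary; the 8xx watchpoint range is only
 * bounded by a 16-bit length. Returns 0 on success, -1 for -EINVAL. */
static int validate_hw_len(unsigned long start_addr, unsigned long hw_len,
			   int have_dawr, int is_8xx)
{
	unsigned long end_addr = start_addr + hw_len - 1;
	unsigned long max_len = 8;	/* assumed legacy DABR: one doubleword */

	if (have_dawr) {
		max_len = DAWR_MAX_LEN;
		/* DAWR region can't cross 512 bytes boundary */
		if ((start_addr >> 9) != (end_addr >> 9))
			return -1;
	} else if (is_8xx) {
		/* 8xx can setup a range without that limitation */
		max_len = UINT16_MAX;
	}

	return hw_len > max_len ? -1 : 0;
}
```

With these assumptions, a 16-byte DAWR region starting at 0x1f8 is rejected (it straddles the 0x200 boundary), while a 1 KiB range is fine on 8xx.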
+25
arch/powerpc/kernel/idle.c
···

 int powersave_nap;

+#ifdef CONFIG_PPC_970_NAP
+void power4_idle(void)
+{
+	if (!cpu_has_feature(CPU_FTR_CAN_NAP))
+		return;
+
+	if (!powersave_nap)
+		return;
+
+	if (!prep_irq_for_idle())
+		return;
+
+	if (cpu_has_feature(CPU_FTR_ALTIVEC))
+		asm volatile("DSSALL ; sync" ::: "memory");
+
+	power4_idle_nap();
+
+	/*
+	 * power4_idle_nap returns to our caller with interrupts enabled
+	 * (soft and hard). Our caller can cope with either interrupts
+	 * disabled or enabled upon return.
+	 */
+}
+#endif
+
 #ifdef CONFIG_SYSCTL
 /*
  * Register the sysctl to set/clear powersave_nap.
+20
arch/powerpc/kernel/idle_book3s.S
···
 #include <asm/asm-offsets.h>
 #include <asm/ppc-opcode.h>
 #include <asm/cpuidle.h>
+#include <asm/thread_info.h>	/* TLF_NAPPING */

+#ifdef CONFIG_PPC_P7_NAP
 /*
  * Desired PSSCR in r3
  *
···
 	bne	2f
 	IDLE_STATE_ENTER_SEQ_NORET(PPC_SLEEP)
2:	IDLE_STATE_ENTER_SEQ_NORET(PPC_WINKLE)
+#endif

+#ifdef CONFIG_PPC_970_NAP
+_GLOBAL(power4_idle_nap)
+	LOAD_REG_IMMEDIATE(r7, MSR_KERNEL|MSR_EE|MSR_POW)
+	ld	r9,PACA_THREAD_INFO(r13)
+	ld	r8,TI_LOCAL_FLAGS(r9)
+	ori	r8,r8,_TLF_NAPPING
+	std	r8,TI_LOCAL_FLAGS(r9)
+	/*
+	 * NAPPING bit is set, from this point onward power4_fixup_nap
+	 * will cause exceptions to return to power4_idle_nap_return.
+	 */
+1:	sync
+	isync
+	mtmsrd	r7
+	isync
+	b	1b
+#endif
-83
arch/powerpc/kernel/idle_power4.S
··· 1 - /* SPDX-License-Identifier: GPL-2.0-or-later */ 2 - /* 3 - * This file contains the power_save function for 970-family CPUs. 4 - */ 5 - 6 - #include <linux/threads.h> 7 - #include <asm/processor.h> 8 - #include <asm/page.h> 9 - #include <asm/cputable.h> 10 - #include <asm/thread_info.h> 11 - #include <asm/ppc_asm.h> 12 - #include <asm/asm-offsets.h> 13 - #include <asm/irqflags.h> 14 - #include <asm/hw_irq.h> 15 - #include <asm/feature-fixups.h> 16 - 17 - #undef DEBUG 18 - 19 - .text 20 - 21 - _GLOBAL(power4_idle) 22 - BEGIN_FTR_SECTION 23 - blr 24 - END_FTR_SECTION_IFCLR(CPU_FTR_CAN_NAP) 25 - /* Now check if user or arch enabled NAP mode */ 26 - LOAD_REG_ADDRBASE(r3,powersave_nap) 27 - lwz r4,ADDROFF(powersave_nap)(r3) 28 - cmpwi 0,r4,0 29 - beqlr 30 - 31 - /* This sequence is similar to prep_irq_for_idle() */ 32 - 33 - /* Hard disable interrupts */ 34 - mfmsr r7 35 - rldicl r0,r7,48,1 36 - rotldi r0,r0,16 37 - mtmsrd r0,1 38 - 39 - /* Check if something happened while soft-disabled */ 40 - lbz r0,PACAIRQHAPPENED(r13) 41 - cmpwi cr0,r0,0 42 - bne- 2f 43 - 44 - /* 45 - * Soft-enable interrupts. This will make power4_fixup_nap return 46 - * to our caller with interrupts enabled (soft and hard). The caller 47 - * can cope with either interrupts disabled or enabled upon return. 48 - */ 49 - #ifdef CONFIG_TRACE_IRQFLAGS 50 - /* Tell the tracer interrupts are on, because idle responds to them. 
*/ 51 - mflr r0 52 - std r0,16(r1) 53 - stdu r1,-128(r1) 54 - bl trace_hardirqs_on 55 - addi r1,r1,128 56 - ld r0,16(r1) 57 - mtlr r0 58 - mfmsr r7 59 - #endif /* CONFIG_TRACE_IRQFLAGS */ 60 - 61 - li r0,IRQS_ENABLED 62 - stb r0,PACAIRQSOFTMASK(r13) /* we'll hard-enable shortly */ 63 - BEGIN_FTR_SECTION 64 - DSSALL 65 - sync 66 - END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC) 67 - ld r9, PACA_THREAD_INFO(r13) 68 - ld r8,TI_LOCAL_FLAGS(r9) /* set napping bit */ 69 - ori r8,r8,_TLF_NAPPING /* so when we take an exception */ 70 - std r8,TI_LOCAL_FLAGS(r9) /* it will return to our caller */ 71 - ori r7,r7,MSR_EE 72 - oris r7,r7,MSR_POW@h 73 - 1: sync 74 - isync 75 - mtmsrd r7 76 - isync 77 - b 1b 78 - 79 - 2: /* Return if an interrupt had happened while soft disabled */ 80 - /* Set the HARD_DIS flag because interrupts are now hard disabled */ 81 - ori r0,r0,PACA_IRQ_HARD_DIS 82 - stb r0,PACAIRQHAPPENED(r13) 83 - blr
+22
arch/powerpc/kernel/irq.c
···
 #include <linux/debugfs.h>
 #include <linux/of.h>
 #include <linux/of_irq.h>
+#include <linux/vmalloc.h>

 #include <linux/uaccess.h>
 #include <asm/io.h>
···
 	set_irq_regs(old_regs);
 }

+static void *__init alloc_vm_stack(void)
+{
+	return __vmalloc_node_range(THREAD_SIZE, THREAD_ALIGN, VMALLOC_START,
+				    VMALLOC_END, THREADINFO_GFP, PAGE_KERNEL,
+				    0, NUMA_NO_NODE, (void *)_RET_IP_);
+}
+
+static void __init vmap_irqstack_init(void)
+{
+	int i;
+
+	for_each_possible_cpu(i) {
+		softirq_ctx[i] = alloc_vm_stack();
+		hardirq_ctx[i] = alloc_vm_stack();
+	}
+}
+
 void __init init_IRQ(void)
 {
+	if (IS_ENABLED(CONFIG_VMAP_STACK))
+		vmap_irqstack_init();
+
 	if (ppc_md.init_IRQ)
 		ppc_md.init_IRQ();
 }
+4 -42
arch/powerpc/kernel/pci-common.c
··· 261 261 262 262 #endif /* CONFIG_PCI_IOV */ 263 263 264 - void pcibios_bus_add_device(struct pci_dev *pdev) 265 - { 266 - if (ppc_md.pcibios_bus_add_device) 267 - ppc_md.pcibios_bus_add_device(pdev); 268 - } 269 - 270 264 static resource_size_t pcibios_io_size(const struct pci_controller *hose) 271 265 { 272 266 #ifdef CONFIG_PPC64 ··· 958 964 phb->controller_ops.dma_bus_setup(bus); 959 965 } 960 966 961 - static void pcibios_setup_device(struct pci_dev *dev) 967 + void pcibios_bus_add_device(struct pci_dev *dev) 962 968 { 963 969 struct pci_controller *phb; 964 970 /* Fixup NUMA node as it may not be setup yet by the generic ··· 979 985 pci_read_irq_line(dev); 980 986 if (ppc_md.pci_irq_fixup) 981 987 ppc_md.pci_irq_fixup(dev); 988 + 989 + if (ppc_md.pcibios_bus_add_device) 990 + ppc_md.pcibios_bus_add_device(dev); 982 991 } 983 992 984 993 int pcibios_add_device(struct pci_dev *dev) 985 994 { 986 - /* 987 - * We can only call pcibios_setup_device() after bus setup is complete, 988 - * since some of the platform specific DMA setup code depends on it. 989 - */ 990 - if (dev->bus->is_added) 991 - pcibios_setup_device(dev); 992 - 993 995 #ifdef CONFIG_PCI_IOV 994 996 if (ppc_md.pcibios_fixup_sriov) 995 997 ppc_md.pcibios_fixup_sriov(dev); 996 998 #endif /* CONFIG_PCI_IOV */ 997 999 998 1000 return 0; 999 - } 1000 - 1001 - void pcibios_setup_bus_devices(struct pci_bus *bus) 1002 - { 1003 - struct pci_dev *dev; 1004 - 1005 - pr_debug("PCI: Fixup bus devices %d (%s)\n", 1006 - bus->number, bus->self ? 
pci_name(bus->self) : "PHB"); 1007 - 1008 - list_for_each_entry(dev, &bus->devices, bus_list) { 1009 - /* Cardbus can call us to add new devices to a bus, so ignore 1010 - * those who are already fully discovered 1011 - */ 1012 - if (pci_dev_is_added(dev)) 1013 - continue; 1014 - 1015 - pcibios_setup_device(dev); 1016 - } 1017 1001 } 1018 1002 1019 1003 void pcibios_set_master(struct pci_dev *dev) ··· 1009 1037 1010 1038 /* Now fixup the bus bus */ 1011 1039 pcibios_setup_bus_self(bus); 1012 - 1013 - /* Now fixup devices on that bus */ 1014 - pcibios_setup_bus_devices(bus); 1015 1040 } 1016 1041 EXPORT_SYMBOL(pcibios_fixup_bus); 1017 - 1018 - void pci_fixup_cardbus(struct pci_bus *bus) 1019 - { 1020 - /* Now fixup devices on that bus */ 1021 - pcibios_setup_bus_devices(bus); 1022 - } 1023 - 1024 1042 1025 1043 static int skip_isa_ioresource_align(struct pci_dev *dev) 1026 1044 {
-1
arch/powerpc/kernel/pci-hotplug.c
···
 	 */
 	slotno = PCI_SLOT(PCI_DN(dn->child)->devfn);
 	pci_scan_slot(bus, PCI_DEVFN(slotno, 0));
-	pcibios_setup_bus_devices(bus);
 	max = bus->busn_res.start;
 	/*
 	 * Scan bridges that are already configured. We don't touch
+22 -25
arch/powerpc/kernel/pci_dn.c
··· 125 125 } 126 126 127 127 #ifdef CONFIG_PCI_IOV 128 - static struct pci_dn *add_one_dev_pci_data(struct pci_dn *parent, 128 + static struct pci_dn *add_one_sriov_vf_pdn(struct pci_dn *parent, 129 129 int vf_index, 130 130 int busno, int devfn) 131 131 { ··· 151 151 152 152 return pdn; 153 153 } 154 - #endif 155 154 156 - struct pci_dn *add_dev_pci_data(struct pci_dev *pdev) 155 + struct pci_dn *add_sriov_vf_pdns(struct pci_dev *pdev) 157 156 { 158 - #ifdef CONFIG_PCI_IOV 159 157 struct pci_dn *parent, *pdn; 160 158 int i; 161 159 162 160 /* Only support IOV for now */ 163 - if (!pdev->is_physfn) 164 - return pci_get_pdn(pdev); 161 + if (WARN_ON(!pdev->is_physfn)) 162 + return NULL; 165 163 166 164 /* Check if VFs have been populated */ 167 165 pdn = pci_get_pdn(pdev); ··· 174 176 for (i = 0; i < pci_sriov_get_totalvfs(pdev); i++) { 175 177 struct eeh_dev *edev __maybe_unused; 176 178 177 - pdn = add_one_dev_pci_data(parent, i, 179 + pdn = add_one_sriov_vf_pdn(parent, i, 178 180 pci_iov_virtfn_bus(pdev, i), 179 181 pci_iov_virtfn_devfn(pdev, i)); 180 182 if (!pdn) { ··· 190 192 edev->physfn = pdev; 191 193 #endif /* CONFIG_EEH */ 192 194 } 193 - #endif /* CONFIG_PCI_IOV */ 194 - 195 195 return pci_get_pdn(pdev); 196 196 } 197 197 198 - void remove_dev_pci_data(struct pci_dev *pdev) 198 + void remove_sriov_vf_pdns(struct pci_dev *pdev) 199 199 { 200 - #ifdef CONFIG_PCI_IOV 201 200 struct pci_dn *parent; 202 201 struct pci_dn *pdn, *tmp; 203 202 int i; 204 203 205 - /* 206 - * VF and VF PE are created/released dynamically, so we need to 207 - * bind/unbind them. Otherwise the VF and VF PE would be mismatched 208 - * when re-enabling SR-IOV. 
209 - */ 210 - if (pdev->is_virtfn) { 211 - pdn = pci_get_pdn(pdev); 212 - pdn->pe_number = IODA_INVALID_PE; 213 - return; 214 - } 215 - 216 204 /* Only support IOV PF for now */ 217 - if (!pdev->is_physfn) 205 + if (WARN_ON(!pdev->is_physfn)) 218 206 return; 219 207 220 208 /* Check if VFs have been populated */ ··· 228 244 continue; 229 245 230 246 #ifdef CONFIG_EEH 231 - /* Release EEH device for the VF */ 247 + /* 248 + * Release EEH state for this VF. The PCI core 249 + * has already torn down the pci_dev for this VF, but 250 + * we're responsible to removing the eeh_dev since it 251 + * has the same lifetime as the pci_dn that spawned it. 252 + */ 232 253 edev = pdn_to_eeh_dev(pdn); 233 254 if (edev) { 255 + /* 256 + * We allocate pci_dn's for the totalvfs count, 257 + * but only only the vfs that were activated 258 + * have a configured PE. 259 + */ 260 + if (edev->pe) 261 + eeh_rmv_from_parent_pe(edev); 262 + 234 263 pdn->edev = NULL; 235 264 kfree(edev); 236 265 } ··· 255 258 kfree(pdn); 256 259 } 257 260 } 258 - #endif /* CONFIG_PCI_IOV */ 259 261 } 262 + #endif /* CONFIG_PCI_IOV */ 260 263 261 264 struct pci_dn *pci_add_device_node_info(struct pci_controller *hose, 262 265 struct device_node *dn)
-1
arch/powerpc/kernel/pci_of_scan.c
···
 	 */
 	if (!rescan_existing)
 		pcibios_setup_bus_self(bus);
-	pcibios_setup_bus_devices(bus);

 	/* Now scan child busses */
 	for_each_pci_bridge(dev, bus)
+36 -33
arch/powerpc/kernel/process.c
··· 740 740 mtspr(SPRN_DABRX, dabrx); 741 741 return 0; 742 742 } 743 - #elif defined(CONFIG_PPC_8xx) 744 - static inline int __set_dabr(unsigned long dabr, unsigned long dabrx) 745 - { 746 - unsigned long addr = dabr & ~HW_BRK_TYPE_DABR; 747 - unsigned long lctrl1 = 0x90000000; /* compare type: equal on E & F */ 748 - unsigned long lctrl2 = 0x8e000002; /* watchpoint 1 on cmp E | F */ 749 - 750 - if ((dabr & HW_BRK_TYPE_RDWR) == HW_BRK_TYPE_READ) 751 - lctrl1 |= 0xa0000; 752 - else if ((dabr & HW_BRK_TYPE_RDWR) == HW_BRK_TYPE_WRITE) 753 - lctrl1 |= 0xf0000; 754 - else if ((dabr & HW_BRK_TYPE_RDWR) == 0) 755 - lctrl2 = 0; 756 - 757 - mtspr(SPRN_LCTRL2, 0); 758 - mtspr(SPRN_CMPE, addr); 759 - mtspr(SPRN_CMPF, addr + 4); 760 - mtspr(SPRN_LCTRL1, lctrl1); 761 - mtspr(SPRN_LCTRL2, lctrl2); 762 - 763 - return 0; 764 - } 765 743 #else 766 744 static inline int __set_dabr(unsigned long dabr, unsigned long dabrx) 767 745 { ··· 760 782 return __set_dabr(dabr, dabrx); 761 783 } 762 784 785 + static inline int set_breakpoint_8xx(struct arch_hw_breakpoint *brk) 786 + { 787 + unsigned long lctrl1 = LCTRL1_CTE_GT | LCTRL1_CTF_LT | LCTRL1_CRWE_RW | 788 + LCTRL1_CRWF_RW; 789 + unsigned long lctrl2 = LCTRL2_LW0EN | LCTRL2_LW0LADC | LCTRL2_SLW0EN; 790 + unsigned long start_addr = brk->address & ~HW_BREAKPOINT_ALIGN; 791 + unsigned long end_addr = (brk->address + brk->len - 1) | HW_BREAKPOINT_ALIGN; 792 + 793 + if (start_addr == 0) 794 + lctrl2 |= LCTRL2_LW0LA_F; 795 + else if (end_addr == ~0U) 796 + lctrl2 |= LCTRL2_LW0LA_E; 797 + else 798 + lctrl2 |= LCTRL2_LW0LA_EandF; 799 + 800 + mtspr(SPRN_LCTRL2, 0); 801 + 802 + if ((brk->type & HW_BRK_TYPE_RDWR) == 0) 803 + return 0; 804 + 805 + if ((brk->type & HW_BRK_TYPE_RDWR) == HW_BRK_TYPE_READ) 806 + lctrl1 |= LCTRL1_CRWE_RO | LCTRL1_CRWF_RO; 807 + if ((brk->type & HW_BRK_TYPE_RDWR) == HW_BRK_TYPE_WRITE) 808 + lctrl1 |= LCTRL1_CRWE_WO | LCTRL1_CRWF_WO; 809 + 810 + mtspr(SPRN_CMPE, start_addr - 1); 811 + mtspr(SPRN_CMPF, end_addr + 1); 812 
+ mtspr(SPRN_LCTRL1, lctrl1); 813 + mtspr(SPRN_LCTRL2, lctrl2); 814 + 815 + return 0; 816 + } 817 + 763 818 void __set_breakpoint(struct arch_hw_breakpoint *brk) 764 819 { 765 820 memcpy(this_cpu_ptr(&current_brk), brk, sizeof(*brk)); ··· 800 789 if (dawr_enabled()) 801 790 // Power8 or later 802 791 set_dawr(brk); 792 + else if (IS_ENABLED(CONFIG_PPC_8xx)) 793 + set_breakpoint_8xx(brk); 803 794 else if (!cpu_has_feature(CPU_FTR_ARCH_207S)) 804 795 // Power7 or earlier 805 796 set_dabr(brk); ··· 1277 1264 1278 1265 pc = regs->nip - (NR_INSN_TO_PRINT * 3 / 4 * sizeof(int)); 1279 1266 1280 - /* 1281 - * Make sure the NIP points at userspace, not kernel text/data or 1282 - * elsewhere. 1283 - */ 1284 - if (!__access_ok(pc, NR_INSN_TO_PRINT * sizeof(int), USER_DS)) { 1285 - pr_info("%s[%d]: Bad NIP, not dumping instructions.\n", 1286 - current->comm, current->pid); 1287 - return; 1288 - } 1289 - 1290 1267 seq_buf_init(&s, buf, sizeof(buf)); 1291 1268 1292 1269 while (n) { ··· 1287 1284 for (i = 0; i < 8 && n; i++, n--, pc += sizeof(int)) { 1288 1285 int instr; 1289 1286 1290 - if (probe_kernel_address((const void *)pc, instr)) { 1287 + if (probe_user_read(&instr, (void __user *)pc, sizeof(instr))) { 1291 1288 seq_buf_printf(&s, "XXXXXXXX "); 1292 1289 continue; 1293 1290 }
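The new `set_breakpoint_8xx()` rounds the watched range outward to the hardware alignment, then programs the comparators one byte outside that range, because the compare types chosen (`LCTRL1_CTE_GT`/`LCTRL1_CTF_LT`) match strictly greater-than/less-than. The address arithmetic can be checked in isolation (the `HW_BREAKPOINT_ALIGN` value of 0x7 is an assumption for this sketch):

```c
#define HW_BREAKPOINT_ALIGN 0x7UL	/* assumed 8-byte alignment mask */

struct bp_range { unsigned long cmpe, cmpf; };

/* Mirror the rounding in set_breakpoint_8xx(): align the watched range
 * outward, then place CMPE one byte below the start and CMPF one byte
 * above the end, since the match condition is strictly GT/LT. */
static struct bp_range bp_8xx_range(unsigned long address, unsigned long len)
{
	unsigned long start_addr = address & ~HW_BREAKPOINT_ALIGN;
	unsigned long end_addr = (address + len - 1) | HW_BREAKPOINT_ALIGN;

	return (struct bp_range){ .cmpe = start_addr - 1, .cmpf = end_addr + 1 };
}
```

For a 4-byte watch at 0x1004 this yields a watched window of 0x1000..0x1007, with the comparators set to 0xfff and 0x1008.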
+1 -1
arch/powerpc/kernel/setup.h
···
 static inline void exc_lvl_early_init(void) { };
 #endif

-#ifdef CONFIG_PPC64
+#if defined(CONFIG_PPC64) || defined(CONFIG_VMAP_STACK)
 void emergency_stack_init(void);
 #else
 static inline void emergency_stack_init(void) { };
+16 -1
arch/powerpc/kernel/setup_32.c
···
 static void *__init alloc_stack(void)
 {
-	void *ptr = memblock_alloc(THREAD_SIZE, THREAD_SIZE);
+	void *ptr = memblock_alloc(THREAD_SIZE, THREAD_ALIGN);

 	if (!ptr)
 		panic("cannot allocate %d bytes for stack at %pS\n",
···
 {
 	unsigned int i;

+	if (IS_ENABLED(CONFIG_VMAP_STACK))
+		return;
+
 	/* interrupt stacks must be in lowmem, we get that for free on ppc32
 	 * as the memblock is limited to lowmem by default */
 	for_each_possible_cpu(i) {
···
 		hardirq_ctx[i] = alloc_stack();
 	}
 }
+
+#ifdef CONFIG_VMAP_STACK
+void *emergency_ctx[NR_CPUS] __ro_after_init;
+
+void __init emergency_stack_init(void)
+{
+	unsigned int i;
+
+	for_each_possible_cpu(i)
+		emergency_ctx[i] = alloc_stack();
+}
+#endif

 #if defined(CONFIG_BOOKE) || defined(CONFIG_40x)
 void __init exc_lvl_early_init(void)
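The `IS_ENABLED(CONFIG_VMAP_STACK)` check used here is an ordinary C expression, so both branches are always compiled and type-checked, unlike `#ifdef`. A condensed, self-contained sketch of the preprocessor trick behind it (the real macros live in include/linux/kconfig.h; `MY_IS_ENABLED` and the sample `CONFIG_FOO`/`CONFIG_BAR` names are made up here):

```c
/* Enabled Kconfig options are defined to 1. Pasting that 1 onto
 * __ARG_PLACEHOLDER_ yields "0,", which shifts an extra argument into
 * __take_second_arg so the macro evaluates to 1; an undefined option
 * pastes to junk, leaving only two arguments and a result of 0. */
#define __ARG_PLACEHOLDER_1 0,
#define __take_second_arg(__ignored, val, ...) val
#define __is_defined(x)			___is_defined(x)
#define ___is_defined(val)		____is_defined(__ARG_PLACEHOLDER_##val)
#define ____is_defined(arg1_or_junk)	__take_second_arg(arg1_or_junk 1, 0)
#define MY_IS_ENABLED(option)		__is_defined(option)

#define CONFIG_FOO 1
/* CONFIG_BAR deliberately left undefined */
```

`MY_IS_ENABLED(CONFIG_FOO)` expands to 1 and `MY_IS_ENABLED(CONFIG_BAR)` to 0, and both are compile-time constants, so `if (MY_IS_ENABLED(...))` lets the optimizer drop the dead branch while still compiling it.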
+1 -1
arch/powerpc/kernel/setup_64.c
···
 	BUILD_BUG_ON(STACK_INT_FRAME_SIZE % 16);

-	ptr = memblock_alloc_try_nid(THREAD_SIZE, THREAD_SIZE,
+	ptr = memblock_alloc_try_nid(THREAD_SIZE, THREAD_ALIGN,
 				     MEMBLOCK_LOW_LIMIT, limit,
 				     early_cpu_to_node(cpu));
 	if (!ptr)
+9
arch/powerpc/kernel/traps.c
···
 	panic("kernel stack overflow");
 }

+void stack_overflow_exception(struct pt_regs *regs)
+{
+	enum ctx_state prev_state = exception_enter();
+
+	die("Kernel stack overflow", regs, SIGSEGV);
+
+	exception_exit(prev_state);
+}
+
 void kernel_fp_unavailable_exception(struct pt_regs *regs)
 {
 	enum ctx_state prev_state = exception_enter();
-5
arch/powerpc/kernel/vdso.c
···
 	 */
 	vdso64_pages = (&vdso64_end - &vdso64_start) >> PAGE_SHIFT;
 	DBG("vdso64_kbase: %p, 0x%x pages\n", vdso64_kbase, vdso64_pages);
-#else
-	vdso_data->dcache_block_size = L1_CACHE_BYTES;
-	vdso_data->dcache_log_block_size = L1_CACHE_SHIFT;
-	vdso_data->icache_block_size = L1_CACHE_BYTES;
-	vdso_data->icache_log_block_size = L1_CACHE_SHIFT;
 #endif /* CONFIG_PPC64 */
+1 -3
arch/powerpc/kernel/vdso32/Makefile
···

 # List of files in the vdso, has to be asm only for now

-obj-vdso32-$(CONFIG_PPC64) = getcpu.o
-obj-vdso32 = sigtramp.o gettimeofday.o datapage.o cacheflush.o note.o \
-		$(obj-vdso32-y)
+obj-vdso32 = sigtramp.o gettimeofday.o datapage.o cacheflush.o note.o getcpu.o

 # Build rules
+27 -5
arch/powerpc/kernel/vdso32/cacheflush.S
··· 8 8 #include <asm/processor.h> 9 9 #include <asm/ppc_asm.h> 10 10 #include <asm/vdso.h> 11 + #include <asm/vdso_datapage.h> 11 12 #include <asm/asm-offsets.h> 13 + #include <asm/cache.h> 12 14 13 15 .text 14 16 ··· 24 22 */ 25 23 V_FUNCTION_BEGIN(__kernel_sync_dicache) 26 24 .cfi_startproc 25 + #ifdef CONFIG_PPC64 27 26 mflr r12 28 27 .cfi_register lr,r12 29 - mr r11,r3 30 - bl __get_datapage@local 28 + get_datapage r10, r0 31 29 mtlr r12 32 - mr r10,r3 30 + #endif 33 31 32 + #ifdef CONFIG_PPC64 34 33 lwz r7,CFG_DCACHE_BLOCKSZ(r10) 35 34 addi r5,r7,-1 36 - andc r6,r11,r5 /* round low to line bdy */ 35 + #else 36 + li r5, L1_CACHE_BYTES - 1 37 + #endif 38 + andc r6,r3,r5 /* round low to line bdy */ 37 39 subf r8,r6,r4 /* compute length */ 38 40 add r8,r8,r5 /* ensure we get enough */ 41 + #ifdef CONFIG_PPC64 39 42 lwz r9,CFG_DCACHE_LOGBLOCKSZ(r10) 40 43 srw. r8,r8,r9 /* compute line count */ 44 + #else 45 + srwi. r8, r8, L1_CACHE_SHIFT 46 + mr r7, r6 47 + #endif 41 48 crclr cr0*4+so 42 49 beqlr /* nothing to do? */ 43 50 mtctr r8 44 51 1: dcbst 0,r6 52 + #ifdef CONFIG_PPC64 45 53 add r6,r6,r7 54 + #else 55 + addi r6, r6, L1_CACHE_BYTES 56 + #endif 46 57 bdnz 1b 47 58 sync 48 59 49 60 /* Now invalidate the instruction cache */ 50 61 62 + #ifdef CONFIG_PPC64 51 63 lwz r7,CFG_ICACHE_BLOCKSZ(r10) 52 64 addi r5,r7,-1 53 - andc r6,r11,r5 /* round low to line bdy */ 65 + andc r6,r3,r5 /* round low to line bdy */ 54 66 subf r8,r6,r4 /* compute length */ 55 67 add r8,r8,r5 56 68 lwz r9,CFG_ICACHE_LOGBLOCKSZ(r10) 57 69 srw. r8,r8,r9 /* compute line count */ 58 70 crclr cr0*4+so 59 71 beqlr /* nothing to do? */ 72 + #endif 60 73 mtctr r8 74 + #ifdef CONFIG_PPC64 61 75 2: icbi 0,r6 62 76 add r6,r6,r7 77 + #else 78 + 2: icbi 0, r7 79 + addi r7, r7, L1_CACHE_BYTES 80 + #endif 63 81 bdnz 2b 64 82 isync 65 83 li r3,0
+4 -27
arch/powerpc/kernel/vdso32/datapage.S
···
 #include <asm/asm-offsets.h>
 #include <asm/unistd.h>
 #include <asm/vdso.h>
+#include <asm/vdso_datapage.h>

 	.text
 	.global	__kernel_datapage_offset;
__kernel_datapage_offset:
 	.long	0
-
-V_FUNCTION_BEGIN(__get_datapage)
-  .cfi_startproc
-	/* We don't want that exposed or overridable as we want other objects
-	 * to be able to bl directly to here
-	 */
-	.protected __get_datapage
-	.hidden __get_datapage
-
-	mflr	r0
-  .cfi_register lr,r0
-
-	bcl	20,31,data_page_branch
-data_page_branch:
-	mflr	r3
-	mtlr	r0
-	addi	r3, r3, __kernel_datapage_offset-data_page_branch
-	lwz	r0,0(r3)
-  .cfi_restore lr
-	add	r3,r0,r3
-	blr
-  .cfi_endproc
-V_FUNCTION_END(__get_datapage)

 /*
  * void *__kernel_get_syscall_map(unsigned int *syscall_count) ;
···
  .cfi_startproc
 	mflr	r12
  .cfi_register lr,r12
-	mr	r4,r3
-	bl	__get_datapage@local
+	mr.	r4,r3
+	get_datapage	r3, r0
 	mtlr	r12
 	addi	r3,r3,CFG_SYSCALL_MAP32
-	cmpli	cr0,r4,0
 	beqlr
 	li	r0,NR_syscalls
 	stw	r0,0(r4)
···
  .cfi_startproc
 	mflr	r12
  .cfi_register lr,r12
-	bl	__get_datapage@local
+	get_datapage	r3, r0
 	lwz	r4,(CFG_TB_TICKS_PER_SEC + 4)(r3)
 	lwz	r3,CFG_TB_TICKS_PER_SEC(r3)
 	mtlr	r12
+20 -3
arch/powerpc/kernel/vdso32/getcpu.S
···
  * int __kernel_getcpu(unsigned *cpu, unsigned *node);
  *
  */
+#if defined(CONFIG_PPC64)
V_FUNCTION_BEGIN(__kernel_getcpu)
  .cfi_startproc
 	mfspr	r5,SPRN_SPRG_VDSO_READ
···
 	rlwinm	r7,r5,16,31-15,31-0
 	beq	cr0,1f
 	stw	r6,0(r3)
-1:	beq	cr1,2f
-	stw	r7,0(r4)
-2:	crclr	cr0*4+so
+1:	crclr	cr0*4+so
 	li	r3,0		/* always success */
+	beqlr	cr1
+	stw	r7,0(r4)
 	blr
  .cfi_endproc
V_FUNCTION_END(__kernel_getcpu)
+#elif !defined(CONFIG_SMP)
+V_FUNCTION_BEGIN(__kernel_getcpu)
+  .cfi_startproc
+	cmpwi	cr0, r3, 0
+	cmpwi	cr1, r4, 0
+	li	r5, 0
+	beq	cr0, 1f
+	stw	r5, 0(r3)
+1:	li	r3, 0		/* always success */
+	crclr	cr0*4+so
+	beqlr	cr1
+	stw	r5, 0(r4)
+	blr
+  .cfi_endproc
+V_FUNCTION_END(__kernel_getcpu)
+#endif
+84 -35
arch/powerpc/kernel/vdso32/gettimeofday.S
··· 9 9 #include <asm/processor.h> 10 10 #include <asm/ppc_asm.h> 11 11 #include <asm/vdso.h> 12 + #include <asm/vdso_datapage.h> 12 13 #include <asm/asm-offsets.h> 13 14 #include <asm/unistd.h> 14 15 ··· 32 31 mflr r12 33 32 .cfi_register lr,r12 34 33 35 - mr r10,r3 /* r10 saves tv */ 34 + mr. r10,r3 /* r10 saves tv */ 36 35 mr r11,r4 /* r11 saves tz */ 37 - bl __get_datapage@local /* get data page */ 38 - mr r9, r3 /* datapage ptr in r9 */ 39 - cmplwi r10,0 /* check if tv is NULL */ 36 + get_datapage r9, r0 40 37 beq 3f 41 - lis r7,1000000@ha /* load up USEC_PER_SEC */ 42 - addi r7,r7,1000000@l /* so we get microseconds in r4 */ 38 + LOAD_REG_IMMEDIATE(r7, 1000000) /* load up USEC_PER_SEC */ 43 39 bl __do_get_tspec@local /* get sec/usec from tb & kernel */ 44 40 stw r3,TVAL32_TV_SEC(r10) 45 41 stw r4,TVAL32_TV_USEC(r10) 46 42 47 43 3: cmplwi r11,0 /* check if tz is NULL */ 48 - beq 1f 44 + mtlr r12 45 + crclr cr0*4+so 46 + li r3,0 47 + beqlr 48 + 49 49 lwz r4,CFG_TZ_MINUTEWEST(r9)/* fill tz */ 50 50 lwz r5,CFG_TZ_DSTTIME(r9) 51 51 stw r4,TZONE_TZ_MINWEST(r11) 52 52 stw r5,TZONE_TZ_DSTTIME(r11) 53 53 54 - 1: mtlr r12 55 - crclr cr0*4+so 56 - li r3,0 57 54 blr 58 55 .cfi_endproc 59 56 V_FUNCTION_END(__kernel_gettimeofday) ··· 68 69 cmpli cr0,r3,CLOCK_REALTIME 69 70 cmpli cr1,r3,CLOCK_MONOTONIC 70 71 cror cr0*4+eq,cr0*4+eq,cr1*4+eq 71 - bne cr0,99f 72 + 73 + cmpli cr5,r3,CLOCK_REALTIME_COARSE 74 + cmpli cr6,r3,CLOCK_MONOTONIC_COARSE 75 + cror cr5*4+eq,cr5*4+eq,cr6*4+eq 76 + 77 + cror cr0*4+eq,cr0*4+eq,cr5*4+eq 78 + bne cr0, .Lgettime_fallback 72 79 73 80 mflr r12 /* r12 saves lr */ 74 81 .cfi_register lr,r12 75 82 mr r11,r4 /* r11 saves tp */ 76 - bl __get_datapage@local /* get data page */ 77 - mr r9,r3 /* datapage ptr in r9 */ 78 - lis r7,NSEC_PER_SEC@h /* want nanoseconds */ 79 - ori r7,r7,NSEC_PER_SEC@l 80 - 50: bl __do_get_tspec@local /* get sec/nsec from tb & kernel */ 81 - bne cr1,80f /* not monotonic -> all done */ 83 + get_datapage r9, r0 84 + 
LOAD_REG_IMMEDIATE(r7, NSEC_PER_SEC) /* load up NSEC_PER_SEC */ 85 + beq cr5, .Lcoarse_clocks 86 + .Lprecise_clocks: 87 + bl __do_get_tspec@local /* get sec/nsec from tb & kernel */ 88 + bne cr1, .Lfinish /* not monotonic -> all done */ 82 89 83 90 /* 84 91 * CLOCK_MONOTONIC ··· 108 103 add r9,r9,r0 109 104 lwz r0,(CFG_TB_UPDATE_COUNT+LOPART)(r9) 110 105 cmpl cr0,r8,r0 /* check if updated */ 111 - bne- 50b 106 + bne- .Lprecise_clocks 107 + b .Lfinish_monotonic 108 + 109 + /* 110 + * For coarse clocks we get data directly from the vdso data page, so 111 + * we don't need to call __do_get_tspec, but we still need to do the 112 + * counter trick. 113 + */ 114 + .Lcoarse_clocks: 115 + lwz r8,(CFG_TB_UPDATE_COUNT+LOPART)(r9) 116 + andi. r0,r8,1 /* pending update ? loop */ 117 + bne- .Lcoarse_clocks 118 + add r9,r9,r0 /* r0 is already 0 */ 119 + 120 + /* 121 + * CLOCK_REALTIME_COARSE, below values are needed for MONOTONIC_COARSE 122 + * too 123 + */ 124 + lwz r3,STAMP_XTIME_SEC+LOPART(r9) 125 + lwz r4,STAMP_XTIME_NSEC+LOPART(r9) 126 + bne cr6,1f 127 + 128 + /* CLOCK_MONOTONIC_COARSE */ 129 + lwz r5,(WTOM_CLOCK_SEC+LOPART)(r9) 130 + lwz r6,WTOM_CLOCK_NSEC(r9) 131 + 132 + /* check if counter has updated */ 133 + or r0,r6,r5 134 + 1: or r0,r0,r3 135 + or r0,r0,r4 136 + xor r0,r0,r0 137 + add r3,r3,r0 138 + lwz r0,CFG_TB_UPDATE_COUNT+LOPART(r9) 139 + cmpl cr0,r0,r8 /* check if updated */ 140 + bne- .Lcoarse_clocks 141 + 142 + /* Counter has not updated, so continue calculating proper values for 143 + * sec and nsec if monotonic coarse, or just return with the proper 144 + * values for realtime. 145 + */ 146 + bne cr6, .Lfinish 112 147 113 148 /* Calculate and store result. Note that this mimics the C code, 114 149 * which may cause funny results if nsec goes negative... is that 115 150 * possible at all ? 
116 151 */ 152 + .Lfinish_monotonic: 117 153 add r3,r3,r5 118 154 add r4,r4,r6 119 155 cmpw cr0,r4,r7 ··· 162 116 blt 1f 163 117 subf r4,r7,r4 164 118 addi r3,r3,1 165 - 1: bge cr1,80f 119 + 1: bge cr1, .Lfinish 166 120 addi r3,r3,-1 167 121 add r4,r4,r7 168 122 169 - 80: stw r3,TSPC32_TV_SEC(r11) 123 + .Lfinish: 124 + stw r3,TSPC32_TV_SEC(r11) 170 125 stw r4,TSPC32_TV_NSEC(r11) 171 126 172 127 mtlr r12 ··· 178 131 /* 179 132 * syscall fallback 180 133 */ 181 - 99: 134 + .Lgettime_fallback: 182 135 li r0,__NR_clock_gettime 183 136 .cfi_restore lr 184 137 sc ··· 196 149 V_FUNCTION_BEGIN(__kernel_clock_getres) 197 150 .cfi_startproc 198 151 /* Check for supported clock IDs */ 199 - cmpwi cr0,r3,CLOCK_REALTIME 200 - cmpwi cr1,r3,CLOCK_MONOTONIC 201 - cror cr0*4+eq,cr0*4+eq,cr1*4+eq 202 - bne cr0,99f 152 + cmplwi cr0, r3, CLOCK_MAX 153 + cmpwi cr1, r3, CLOCK_REALTIME_COARSE 154 + cmpwi cr7, r3, CLOCK_MONOTONIC_COARSE 155 + bgt cr0, 99f 156 + LOAD_REG_IMMEDIATE(r5, KTIME_LOW_RES) 157 + beq cr1, 1f 158 + beq cr7, 1f 203 159 204 160 mflr r12 205 161 .cfi_register lr,r12 206 - bl __get_datapage@local /* get data page */ 162 + get_datapage r3, r0 207 163 lwz r5, CLOCK_HRTIMER_RES(r3) 208 164 mtlr r12 209 - li r3,0 165 + 1: li r3,0 210 166 cmpli cr0,r4,0 211 167 crclr cr0*4+so 212 168 beqlr ··· 218 168 blr 219 169 220 170 /* 221 - * syscall fallback 171 + * invalid clock 222 172 */ 223 173 99: 224 - li r0,__NR_clock_getres 225 - sc 174 + li r3, EINVAL 175 + crset so 226 176 blr 227 177 .cfi_endproc 228 178 V_FUNCTION_END(__kernel_clock_getres) ··· 240 190 .cfi_register lr,r12 241 191 242 192 mr r11,r3 /* r11 holds t */ 243 - bl __get_datapage@local 244 - mr r9, r3 /* datapage ptr in r9 */ 193 + get_datapage r9, r0 245 194 246 195 lwz r3,STAMP_XTIME_SEC+LOPART(r9) 247 196 248 197 cmplwi r11,0 /* check if t is NULL */ 249 - beq 2f 250 - stw r3,0(r11) /* store result at *t */ 251 - 2: mtlr r12 198 + mtlr r12 252 199 crclr cr0*4+so 200 + beqlr 201 + stw r3,0(r11) /* store result 
at *t */ 253 202 blr 254 203 .cfi_endproc 255 204 V_FUNCTION_END(__kernel_time)
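The new `.Lcoarse_clocks` path above reads stamped sec/nsec values straight from the vDSO data page, retrying whenever `CFG_TB_UPDATE_COUNT` is odd (update in progress) or changes between the two reads. A minimal C sketch of that seqcount-style retry loop, with a hypothetical `struct vdso_data` layout standing in for the real data page:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical mirror of the vdso data page fields the COARSE clocks
 * use: an update counter plus stamped sec/nsec.  An odd counter value
 * means the kernel is mid-update. */
struct vdso_data {
    volatile uint32_t tb_update_count;
    uint32_t stamp_sec;
    uint32_t stamp_nsec;
};

/* Retry loop equivalent to .Lcoarse_clocks: reread until the counter
 * is even and unchanged across the data reads. */
static void coarse_gettime(const struct vdso_data *d,
                           uint32_t *sec, uint32_t *nsec)
{
    uint32_t start;

    for (;;) {
        start = d->tb_update_count;
        if (start & 1)          /* pending update? loop */
            continue;
        *sec  = d->stamp_sec;
        *nsec = d->stamp_nsec;
        if (d->tb_update_count == start)
            return;             /* no concurrent update raced us */
    }
}

/* Single-threaded demonstration with a fixed snapshot. */
static uint32_t demo_sec(void)
{
    struct vdso_data d = { .tb_update_count = 2,
                           .stamp_sec = 1234, .stamp_nsec = 56 };
    uint32_t s = 0, ns = 0;

    coarse_gettime(&d, &s, &ns);
    (void)ns;
    return s;
}
```

The assembly folds the pending-update check and the recheck into one counter comparison via register tricks; the control flow is the same.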
+1 -1
arch/powerpc/kernel/vdso32/vdso32.lds.S
··· 155 155 __kernel_sync_dicache_p5; 156 156 __kernel_sigtramp32; 157 157 __kernel_sigtramp_rt32; 158 - #ifdef CONFIG_PPC64 158 + #if defined(CONFIG_PPC64) || !defined(CONFIG_SMP) 159 159 __kernel_getcpu; 160 160 #endif 161 161
+3
arch/powerpc/kernel/vector.S
··· 67 67 #ifdef CONFIG_PPC32 68 68 mfspr r5,SPRN_SPRG_THREAD /* current task's THREAD (phys) */ 69 69 oris r9,r9,MSR_VEC@h 70 + #ifdef CONFIG_VMAP_STACK 71 + tovirt(r5, r5) 72 + #endif 70 73 #else 71 74 ld r4,PACACURRENT(r13) 72 75 addi r5,r4,THREAD /* Get THREAD */
+1 -1
arch/powerpc/kernel/vmlinux.lds.S
··· 323 323 #endif 324 324 325 325 /* The initial task and kernel stack */ 326 - INIT_TASK_DATA_SECTION(THREAD_SIZE) 326 + INIT_TASK_DATA_SECTION(THREAD_ALIGN) 327 327 328 328 .data..page_aligned : AT(ADDR(.data..page_aligned) - LOAD_OFFSET) { 329 329 PAGE_ALIGNED_DATA(PAGE_SIZE)
+2 -4
arch/powerpc/kvm/book3s_64_mmu_radix.c
··· 63 63 } 64 64 isync(); 65 65 66 - pagefault_disable(); 67 66 if (is_load) 68 - ret = raw_copy_from_user(to, from, n); 67 + ret = probe_user_read(to, (const void __user *)from, n); 69 68 else 70 - ret = raw_copy_to_user(to, from, n); 71 - pagefault_enable(); 69 + ret = probe_user_write((void __user *)to, from, n); 72 70 73 71 /* switch the pid first to avoid running host with unallocated pid */ 74 72 if (quadrant == 1 && pid != old_pid)
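This hunk (and several below in fault.c, oprofile, and perf) replaces open-coded `pagefault_disable()`/`raw_copy_*` pairs with `probe_user_read()`/`probe_user_write()`, which bundle the pagefault-disabled copy and return non-zero on fault. A userspace stand-in for the call pattern, with `mock_probe_user_read` as an invented name and `NULL` standing in for an unmapped user address:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Userspace analogue of the kernel's probe_user_read(): 0 on success,
 * -EFAULT (-14) when the source would fault.  The real helper also
 * disables pagefaults around the copy, which this sketch cannot model. */
static int mock_probe_user_read(void *dst, const void *src, size_t n)
{
    if (src == NULL)            /* stands in for an unmapped address */
        return -14;
    memcpy(dst, src, n);
    return 0;
}
```

Callers then collapse to a single `if (probe_user_read(...))` check, as seen in the fault.c and callchain.c hunks further down.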
+2
arch/powerpc/kvm/book3s_hv_rmhandlers.S
··· 1801 1801 tlbsync 1802 1802 ptesync 1803 1803 1804 + BEGIN_FTR_SECTION 1804 1805 /* Radix: Handle the case where the guest used an illegal PID */ 1805 1806 LOAD_REG_ADDR(r4, mmu_base_pid) 1806 1807 lwz r3, VCPU_GUEST_PID(r9) ··· 1831 1830 addi r7,r7,0x1000 1832 1831 bdnz 1b 1833 1832 ptesync 1833 + END_FTR_SECTION_IFSET(CPU_FTR_P9_RADIX_PREFETCH_BUG) 1834 1834 1835 1835 2: 1836 1836 #endif /* CONFIG_PPC_RADIX_MMU */
+1 -1
arch/powerpc/kvm/book3s_xive.c
··· 484 484 kvmppc_xive_select_irq(state, &hw_num, &xd); 485 485 486 486 /* 487 - * See command in xive_lock_and_mask() concerning masking 487 + * See comment in xive_lock_and_mask() concerning masking 488 488 * via firmware. 489 489 */ 490 490 if (xd->flags & OPAL_XIVE_IRQ_MASK_VIA_FW) {
+29 -17
arch/powerpc/mm/book3s32/hash_low.S
··· 25 25 #include <asm/feature-fixups.h> 26 26 #include <asm/code-patching-asm.h> 27 27 28 + #ifdef CONFIG_VMAP_STACK 29 + #define ADDR_OFFSET 0 30 + #else 31 + #define ADDR_OFFSET PAGE_OFFSET 32 + #endif 33 + 28 34 #ifdef CONFIG_SMP 29 35 .section .bss 30 36 .align 2 ··· 53 47 .text 54 48 _GLOBAL(hash_page) 55 49 #ifdef CONFIG_SMP 56 - lis r8, (mmu_hash_lock - PAGE_OFFSET)@h 57 - ori r8, r8, (mmu_hash_lock - PAGE_OFFSET)@l 50 + lis r8, (mmu_hash_lock - ADDR_OFFSET)@h 51 + ori r8, r8, (mmu_hash_lock - ADDR_OFFSET)@l 58 52 lis r0,0x0fff 59 53 b 10f 60 54 11: lwz r6,0(r8) ··· 72 66 cmplw 0,r4,r0 73 67 ori r3,r3,_PAGE_USER|_PAGE_PRESENT /* test low addresses as user */ 74 68 mfspr r5, SPRN_SPRG_PGDIR /* phys page-table root */ 69 + #ifdef CONFIG_VMAP_STACK 70 + tovirt(r5, r5) 71 + #endif 75 72 blt+ 112f /* assume user more likely */ 76 - lis r5, (swapper_pg_dir - PAGE_OFFSET)@ha /* if kernel address, use */ 77 - addi r5 ,r5 ,(swapper_pg_dir - PAGE_OFFSET)@l /* kernel page table */ 73 + lis r5, (swapper_pg_dir - ADDR_OFFSET)@ha /* if kernel address, use */ 74 + addi r5 ,r5 ,(swapper_pg_dir - ADDR_OFFSET)@l /* kernel page table */ 78 75 rlwimi r3,r9,32-12,29,29 /* MSR_PR -> _PAGE_USER */ 79 76 112: 80 77 #ifndef CONFIG_PTE_64BIT ··· 88 79 rlwinm r8,r4,13,19,29 /* Compute pgdir/pmd offset */ 89 80 lwzx r8,r8,r5 /* Get L1 entry */ 90 81 rlwinm. 
r8,r8,0,0,20 /* extract pt base address */ 82 + #endif 83 + #ifdef CONFIG_VMAP_STACK 84 + tovirt(r8, r8) 91 85 #endif 92 86 #ifdef CONFIG_SMP 93 87 beq- hash_page_out /* return if no mapping */ ··· 149 137 150 138 #ifdef CONFIG_SMP 151 139 eieio 152 - lis r8, (mmu_hash_lock - PAGE_OFFSET)@ha 140 + lis r8, (mmu_hash_lock - ADDR_OFFSET)@ha 153 141 li r0,0 154 - stw r0, (mmu_hash_lock - PAGE_OFFSET)@l(r8) 142 + stw r0, (mmu_hash_lock - ADDR_OFFSET)@l(r8) 155 143 #endif 156 144 157 145 /* Return from the exception */ ··· 164 152 #ifdef CONFIG_SMP 165 153 hash_page_out: 166 154 eieio 167 - lis r8, (mmu_hash_lock - PAGE_OFFSET)@ha 155 + lis r8, (mmu_hash_lock - ADDR_OFFSET)@ha 168 156 li r0,0 169 - stw r0, (mmu_hash_lock - PAGE_OFFSET)@l(r8) 157 + stw r0, (mmu_hash_lock - ADDR_OFFSET)@l(r8) 170 158 blr 171 159 #endif /* CONFIG_SMP */ 172 160 ··· 341 329 patch_site 1f, patch__hash_page_A1 342 330 patch_site 2f, patch__hash_page_A2 343 331 /* Get the address of the primary PTE group in the hash table (r3) */ 344 - 0: lis r0, (Hash_base - PAGE_OFFSET)@h /* base address of hash table */ 332 + 0: lis r0, (Hash_base - ADDR_OFFSET)@h /* base address of hash table */ 345 333 1: rlwimi r0,r3,LG_PTEG_SIZE,HASH_LEFT,HASH_RIGHT /* VSID -> hash */ 346 334 2: rlwinm r3,r4,20+LG_PTEG_SIZE,HASH_LEFT,HASH_RIGHT /* PI -> hash */ 347 335 xor r3,r3,r0 /* make primary hash */ ··· 355 343 beq+ 10f /* no PTE: go look for an empty slot */ 356 344 tlbie r4 357 345 358 - lis r4, (htab_hash_searches - PAGE_OFFSET)@ha 359 - lwz r6, (htab_hash_searches - PAGE_OFFSET)@l(r4) 346 + lis r4, (htab_hash_searches - ADDR_OFFSET)@ha 347 + lwz r6, (htab_hash_searches - ADDR_OFFSET)@l(r4) 360 348 addi r6,r6,1 /* count how many searches we do */ 361 - stw r6, (htab_hash_searches - PAGE_OFFSET)@l(r4) 349 + stw r6, (htab_hash_searches - ADDR_OFFSET)@l(r4) 362 350 363 351 /* Search the primary PTEG for a PTE whose 1st (d)word matches r5 */ 364 352 mtctr r0 ··· 390 378 beq+ found_empty 391 379 392 380 /* update 
counter of times that the primary PTEG is full */ 393 - lis r4, (primary_pteg_full - PAGE_OFFSET)@ha 394 - lwz r6, (primary_pteg_full - PAGE_OFFSET)@l(r4) 381 + lis r4, (primary_pteg_full - ADDR_OFFSET)@ha 382 + lwz r6, (primary_pteg_full - ADDR_OFFSET)@l(r4) 395 383 addi r6,r6,1 396 - stw r6, (primary_pteg_full - PAGE_OFFSET)@l(r4) 384 + stw r6, (primary_pteg_full - ADDR_OFFSET)@l(r4) 397 385 398 386 patch_site 0f, patch__hash_page_C 399 387 /* Search the secondary PTEG for an empty slot */ ··· 427 415 * lockup here but that shouldn't happen 428 416 */ 429 417 430 - 1: lis r4, (next_slot - PAGE_OFFSET)@ha /* get next evict slot */ 431 - lwz r6, (next_slot - PAGE_OFFSET)@l(r4) 418 + 1: lis r4, (next_slot - ADDR_OFFSET)@ha /* get next evict slot */ 419 + lwz r6, (next_slot - ADDR_OFFSET)@l(r4) 432 420 addi r6,r6,HPTE_SIZE /* search for candidate */ 433 421 andi. r6,r6,7*HPTE_SIZE 434 422 stw r6,next_slot@l(r4)
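The `ADDR_OFFSET` macro introduced at the top of hash_low.S makes every `symbol - offset` computation conditional: without `CONFIG_VMAP_STACK` the code accesses symbols at their physical addresses (virtual minus `PAGE_OFFSET`), while with it the offset collapses to 0. A sketch of that address selection, assuming the typical ppc32 `PAGE_OFFSET` of 0xc0000000:

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_OFFSET 0xc0000000u  /* typical ppc32 value, assumed here */

/* Mirrors the ADDR_OFFSET choice in hash_low.S: subtract PAGE_OFFSET
 * to reach a symbol's physical address in the non-VMAP_STACK case,
 * or use the virtual address unchanged when VMAP_STACK is enabled. */
static uint32_t sym_addr(uint32_t virt, int vmap_stack)
{
    uint32_t addr_offset = vmap_stack ? 0 : PAGE_OFFSET;

    return virt - addr_offset;
}
```

The matching `tovirt()` additions convert the few values that still arrive as physical addresses (SPRG_THREAD, the page-table base) back to virtual ones.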
+7 -2
arch/powerpc/mm/book3s32/mmu.c
··· 413 413 void __init MMU_init_hw_patch(void) 414 414 { 415 415 unsigned int hmask = Hash_mask >> (16 - LG_HPTEG_SIZE); 416 + unsigned int hash; 416 417 417 418 if (ppc_md.progress) 418 419 ppc_md.progress("hash:patch", 0x345); ··· 425 424 /* 426 425 * Patch up the instructions in hashtable.S:create_hpte 427 426 */ 428 - modify_instruction_site(&patch__hash_page_A0, 0xffff, 429 - ((unsigned int)Hash - PAGE_OFFSET) >> 16); 427 + if (IS_ENABLED(CONFIG_VMAP_STACK)) 428 + hash = (unsigned int)Hash; 429 + else 430 + hash = (unsigned int)Hash - PAGE_OFFSET; 431 + 432 + modify_instruction_site(&patch__hash_page_A0, 0xffff, hash >> 16); 430 433 modify_instruction_site(&patch__hash_page_A1, 0x7c0, hash_mb << 6); 431 434 modify_instruction_site(&patch__hash_page_A2, 0x7c0, hash_mb2 << 6); 432 435 modify_instruction_site(&patch__hash_page_B, 0xffff, hmask);
+10 -1
arch/powerpc/mm/book3s64/hash_utils.c
··· 652 652 653 653 static void __init htab_init_page_sizes(void) 654 654 { 655 + bool aligned = true; 655 656 init_hpte_page_sizes(); 656 657 657 658 if (!debug_pagealloc_enabled()) { ··· 660 659 * Pick a size for the linear mapping. Currently, we only 661 660 * support 16M, 1M and 4K which is the default 662 661 */ 663 - if (mmu_psize_defs[MMU_PAGE_16M].shift) 662 + if (IS_ENABLED(STRICT_KERNEL_RWX) && 663 + (unsigned long)_stext % 0x1000000) { 664 + if (mmu_psize_defs[MMU_PAGE_16M].shift) 665 + pr_warn("Kernel not 16M aligned, " 666 + "disabling 16M linear map alignment"); 667 + aligned = false; 668 + } 669 + 670 + if (mmu_psize_defs[MMU_PAGE_16M].shift && aligned) 664 671 mmu_linear_psize = MMU_PAGE_16M; 665 672 else if (mmu_psize_defs[MMU_PAGE_1M].shift) 666 673 mmu_linear_psize = MMU_PAGE_1M;
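The htab_init_page_sizes() change only permits a 16M linear mapping when the kernel text is itself 16M aligned, since STRICT_KERNEL_RWX cannot change permissions on part of a large page. The decision reduces to a modulus test, sketched here:

```c
#include <assert.h>
#include <stdint.h>

/* Mirror of the new alignment gate: with strict kernel RWX enabled,
 * a 16M linear map is only usable if _stext sits on a 16M boundary. */
static int can_use_16m_linear_map(uint64_t stext, int strict_rwx)
{
    if (strict_rwx && (stext % 0x1000000))  /* not 16M aligned */
        return 0;
    return 1;
}
```

When the test fails the code falls through to the 1M (or 4K) page-size choices, exactly as the `aligned` flag does in the hunk.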
+5 -1
arch/powerpc/mm/book3s64/radix_pgtable.c
··· 337 337 } 338 338 339 339 /* Find out how many PID bits are supported */ 340 - if (cpu_has_feature(CPU_FTR_HVMODE)) { 340 + if (!cpu_has_feature(CPU_FTR_P9_RADIX_PREFETCH_BUG)) { 341 + if (!mmu_pid_bits) 342 + mmu_pid_bits = 20; 343 + mmu_base_pid = 1; 344 + } else if (cpu_has_feature(CPU_FTR_HVMODE)) { 341 345 if (!mmu_pid_bits) 342 346 mmu_pid_bits = 20; 343 347 #ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
+3
arch/powerpc/mm/book3s64/radix_tlb.c
··· 1161 1161 if (unlikely(pid == MMU_NO_CONTEXT)) 1162 1162 return; 1163 1163 1164 + if (!cpu_has_feature(CPU_FTR_P9_RADIX_PREFETCH_BUG)) 1165 + return; 1166 + 1164 1167 /* 1165 1168 * If this context hasn't run on that CPU before and KVM is 1166 1169 * around, there's a slim chance that the guest on another
+5 -6
arch/powerpc/mm/fault.c
··· 233 233 234 234 // Read/write fault in a valid region (the exception table search passed 235 235 // above), but blocked by KUAP is bad, it can never succeed. 236 - if (bad_kuap_fault(regs, is_write)) 236 + if (bad_kuap_fault(regs, address, is_write)) 237 237 return true; 238 238 239 239 // What's left? Kernel fault on user in well defined regions (extable ··· 279 279 if ((flags & FAULT_FLAG_WRITE) && (flags & FAULT_FLAG_USER) && 280 280 access_ok(nip, sizeof(*nip))) { 281 281 unsigned int inst; 282 - int res; 283 282 284 - pagefault_disable(); 285 - res = __get_user_inatomic(inst, nip); 286 - pagefault_enable(); 287 - if (!res) 283 + if (!probe_user_read(&inst, nip, sizeof(inst))) 288 284 return !store_updates_sp(inst); 289 285 *must_retry = true; 290 286 } ··· 350 354 * Userspace trying to access kernel address, we get PROTFAULT for that. 351 355 */ 352 356 if (is_user && address >= TASK_SIZE) { 357 + if ((long)address == -1) 358 + return; 359 + 353 360 pr_crit_ratelimited("%s[%d]: User access of kernel address (%lx) - exploit attempt? (uid: %d)\n", 354 361 current->comm, current->pid, address, 355 362 from_kuid(&init_user_ns, current_uid()));
+43 -46
arch/powerpc/mm/kasan/kasan_init_32.c
··· 12 12 #include <asm/code-patching.h> 13 13 #include <mm/mmu_decl.h> 14 14 15 - static pgprot_t kasan_prot_ro(void) 15 + static pgprot_t __init kasan_prot_ro(void) 16 16 { 17 17 if (early_mmu_has_feature(MMU_FTR_HPTE_TABLE)) 18 18 return PAGE_READONLY; ··· 20 20 return PAGE_KERNEL_RO; 21 21 } 22 22 23 - static void kasan_populate_pte(pte_t *ptep, pgprot_t prot) 23 + static void __init kasan_populate_pte(pte_t *ptep, pgprot_t prot) 24 24 { 25 25 unsigned long va = (unsigned long)kasan_early_shadow_page; 26 26 phys_addr_t pa = __pa(kasan_early_shadow_page); ··· 30 30 __set_pte_at(&init_mm, va, ptep, pfn_pte(PHYS_PFN(pa), prot), 0); 31 31 } 32 32 33 - static int __ref kasan_init_shadow_page_tables(unsigned long k_start, unsigned long k_end) 33 + static int __init kasan_init_shadow_page_tables(unsigned long k_start, unsigned long k_end) 34 34 { 35 35 pmd_t *pmd; 36 36 unsigned long k_cur, k_next; 37 - pgprot_t prot = slab_is_available() ? kasan_prot_ro() : PAGE_KERNEL; 37 + pte_t *new = NULL; 38 38 39 39 pmd = pmd_offset(pud_offset(pgd_offset_k(k_start), k_start), k_start); 40 40 41 41 for (k_cur = k_start; k_cur != k_end; k_cur = k_next, pmd++) { 42 - pte_t *new; 43 - 44 42 k_next = pgd_addr_end(k_cur, k_end); 45 43 if ((void *)pmd_page_vaddr(*pmd) != kasan_early_shadow_pte) 46 44 continue; 47 45 48 - if (slab_is_available()) 49 - new = pte_alloc_one_kernel(&init_mm); 50 - else 46 + if (!new) 51 47 new = memblock_alloc(PTE_FRAG_SIZE, PTE_FRAG_SIZE); 52 48 53 49 if (!new) 54 50 return -ENOMEM; 55 - kasan_populate_pte(new, prot); 51 + kasan_populate_pte(new, PAGE_KERNEL); 56 52 57 53 smp_wmb(); /* See comment in __pte_alloc */ 58 54 ··· 59 63 new = NULL; 60 64 } 61 65 spin_unlock(&init_mm.page_table_lock); 62 - 63 - if (new && slab_is_available()) 64 - pte_free_kernel(&init_mm, new); 65 66 } 66 67 return 0; 67 68 } 68 69 69 - static void __ref *kasan_get_one_page(void) 70 - { 71 - if (slab_is_available()) 72 - return (void *)__get_free_page(GFP_KERNEL | __GFP_ZERO); 
73 - 74 - return memblock_alloc(PAGE_SIZE, PAGE_SIZE); 75 - } 76 - 77 - static int __ref kasan_init_region(void *start, size_t size) 70 + static int __init kasan_init_region(void *start, size_t size) 78 71 { 79 72 unsigned long k_start = (unsigned long)kasan_mem_to_shadow(start); 80 73 unsigned long k_end = (unsigned long)kasan_mem_to_shadow(start + size); 81 74 unsigned long k_cur; 82 75 int ret; 83 - void *block = NULL; 76 + void *block; 84 77 85 78 ret = kasan_init_shadow_page_tables(k_start, k_end); 86 79 if (ret) 87 80 return ret; 88 81 89 - if (!slab_is_available()) 90 - block = memblock_alloc(k_end - k_start, PAGE_SIZE); 82 + block = memblock_alloc(k_end - k_start, PAGE_SIZE); 91 83 92 84 for (k_cur = k_start & PAGE_MASK; k_cur < k_end; k_cur += PAGE_SIZE) { 93 85 pmd_t *pmd = pmd_offset(pud_offset(pgd_offset_k(k_cur), k_cur), k_cur); 94 - void *va = block ? block + k_cur - k_start : kasan_get_one_page(); 86 + void *va = block + k_cur - k_start; 95 87 pte_t pte = pfn_pte(PHYS_PFN(__pa(va)), PAGE_KERNEL); 96 88 97 89 if (!va) ··· 111 127 __set_pte_at(&init_mm, k_cur, ptep, pfn_pte(PHYS_PFN(pa), prot), 0); 112 128 } 113 129 flush_tlb_kernel_range(KASAN_SHADOW_START, KASAN_SHADOW_END); 130 + } 131 + 132 + static void __init kasan_unmap_early_shadow_vmalloc(void) 133 + { 134 + unsigned long k_start = (unsigned long)kasan_mem_to_shadow((void *)VMALLOC_START); 135 + unsigned long k_end = (unsigned long)kasan_mem_to_shadow((void *)VMALLOC_END); 136 + unsigned long k_cur; 137 + phys_addr_t pa = __pa(kasan_early_shadow_page); 138 + 139 + if (!early_mmu_has_feature(MMU_FTR_HPTE_TABLE)) { 140 + int ret = kasan_init_shadow_page_tables(k_start, k_end); 141 + 142 + if (ret) 143 + panic("kasan: kasan_init_shadow_page_tables() failed"); 144 + } 145 + for (k_cur = k_start & PAGE_MASK; k_cur < k_end; k_cur += PAGE_SIZE) { 146 + pmd_t *pmd = pmd_offset(pud_offset(pgd_offset_k(k_cur), k_cur), k_cur); 147 + pte_t *ptep = pte_offset_kernel(pmd, k_cur); 148 + 149 + if 
((pte_val(*ptep) & PTE_RPN_MASK) != pa) 150 + continue; 151 + 152 + __set_pte_at(&init_mm, k_cur, ptep, __pte(0), 0); 153 + } 154 + flush_tlb_kernel_range(k_start, k_end); 114 155 } 115 156 116 157 void __init kasan_mmu_init(void) ··· 174 165 pr_info("KASAN init done\n"); 175 166 } 176 167 177 - #ifdef CONFIG_MODULES 178 - void *module_alloc(unsigned long size) 168 + void __init kasan_late_init(void) 179 169 { 180 - void *base; 181 - 182 - base = __vmalloc_node_range(size, MODULE_ALIGN, VMALLOC_START, VMALLOC_END, 183 - GFP_KERNEL, PAGE_KERNEL_EXEC, VM_FLUSH_RESET_PERMS, 184 - NUMA_NO_NODE, __builtin_return_address(0)); 185 - 186 - if (!base) 187 - return NULL; 188 - 189 - if (!kasan_init_region(base, size)) 190 - return base; 191 - 192 - vfree(base); 193 - 194 - return NULL; 170 + if (IS_ENABLED(CONFIG_KASAN_VMALLOC)) 171 + kasan_unmap_early_shadow_vmalloc(); 195 172 } 196 - #endif 197 173 198 174 #ifdef CONFIG_PPC_BOOK3S_32 199 175 u8 __initdata early_hash[256 << 10] __aligned(256 << 10) = {0}; 200 176 201 177 static void __init kasan_early_hash_table(void) 202 178 { 203 - modify_instruction_site(&patch__hash_page_A0, 0xffff, __pa(early_hash) >> 16); 204 - modify_instruction_site(&patch__flush_hash_A0, 0xffff, __pa(early_hash) >> 16); 179 + unsigned int hash = IS_ENABLED(CONFIG_VMAP_STACK) ? (unsigned int)early_hash : 180 + __pa(early_hash); 181 + 182 + modify_instruction_site(&patch__hash_page_A0, 0xffff, hash >> 16); 183 + modify_instruction_site(&patch__flush_hash_A0, 0xffff, hash >> 16); 205 184 206 185 Hash = (struct hash_pte *)early_hash; 207 186 }
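The new `kasan_unmap_early_shadow_vmalloc()` walks the shadow range covering `[VMALLOC_START, VMALLOC_END)`, obtained via `kasan_mem_to_shadow()`. Generic KASAN maps each 8 bytes of memory to 1 shadow byte, so the translation is a shift plus a constant offset; the offset below is purely illustrative, not the real ppc32 value:

```c
#include <assert.h>
#include <stdint.h>

#define KASAN_SHADOW_SCALE_SHIFT 3           /* 1 shadow byte per 8 bytes */
#define KASAN_SHADOW_OFFSET      0xe0000000u /* illustrative value only */

/* Generic KASAN shadow translation: the shadow region covers 1/8th of
 * the address range it tracks, which is why the vmalloc shadow walk
 * above steps through (VMALLOC_END - VMALLOC_START) / 8 bytes. */
static uint32_t kasan_mem_to_shadow(uint32_t addr)
{
    return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
}
```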
+4
arch/powerpc/mm/mem.c
··· 49 49 #include <asm/fixmap.h> 50 50 #include <asm/swiotlb.h> 51 51 #include <asm/rtas.h> 52 + #include <asm/kasan.h> 52 53 53 54 #include <mm/mmu_decl.h> 54 55 ··· 302 301 303 302 high_memory = (void *) __va(max_low_pfn * PAGE_SIZE); 304 303 set_max_mapnr(max_pfn); 304 + 305 + kasan_late_init(); 306 + 305 307 memblock_free_all(); 306 308 307 309 #ifdef CONFIG_HIGHMEM
+6
arch/powerpc/mm/mmu_decl.h
··· 181 181 static inline void mmu_mark_initmem_nx(void) { } 182 182 static inline void mmu_mark_rodata_ro(void) { } 183 183 #endif 184 + 185 + #ifdef CONFIG_PPC_DEBUG_WX 186 + void ptdump_check_wx(void); 187 + #else 188 + static inline void ptdump_check_wx(void) { } 189 + #endif
+7 -6
arch/powerpc/mm/nohash/8xx.c
··· 21 21 static unsigned long block_mapped_ram; 22 22 23 23 /* 24 - * Return PA for this VA if it is in an area mapped with LTLBs. 24 + * Return PA for this VA if it is in an area mapped with LTLBs or fixmap. 25 25 * Otherwise, returns 0 26 26 */ 27 27 phys_addr_t v_block_mapped(unsigned long va) 28 28 { 29 29 unsigned long p = PHYS_IMMR_BASE; 30 30 31 - if (__map_without_ltlbs) 32 - return 0; 33 31 if (va >= VIRT_IMMR_BASE && va < VIRT_IMMR_BASE + IMMR_SIZE) 34 32 return p + va - VIRT_IMMR_BASE; 33 + if (__map_without_ltlbs) 34 + return 0; 35 35 if (va >= PAGE_OFFSET && va < PAGE_OFFSET + block_mapped_ram) 36 36 return __pa(va); 37 37 return 0; 38 38 } 39 39 40 40 /* 41 - * Return VA for a given PA mapped with LTLBs or 0 if not mapped 41 + * Return VA for a given PA mapped with LTLBs or fixmap 42 + * Return 0 if not mapped 42 43 */ 43 44 unsigned long p_block_mapped(phys_addr_t pa) 44 45 { 45 46 unsigned long p = PHYS_IMMR_BASE; 46 47 47 - if (__map_without_ltlbs) 48 - return 0; 49 48 if (pa >= p && pa < p + IMMR_SIZE) 50 49 return VIRT_IMMR_BASE + pa - p; 50 + if (__map_without_ltlbs) 51 + return 0; 51 52 if (pa < block_mapped_ram) 52 53 return (unsigned long)__va(pa); 53 54 return 0;
+1
arch/powerpc/mm/pgtable_32.c
··· 218 218 219 219 if (v_block_mapped((unsigned long)_sinittext)) { 220 220 mmu_mark_rodata_ro(); 221 + ptdump_check_wx(); 221 222 return; 222 223 } 223 224
+5 -1
arch/powerpc/mm/ptdump/ptdump.c
··· 24 24 #include <asm/page.h> 25 25 #include <asm/pgalloc.h> 26 26 27 + #include <mm/mmu_decl.h> 28 + 27 29 #include "ptdump.h" 28 30 29 31 /* ··· 175 173 176 174 static void note_prot_wx(struct pg_state *st, unsigned long addr) 177 175 { 176 + pte_t pte = __pte(st->current_flags); 177 + 178 178 if (!IS_ENABLED(CONFIG_PPC_DEBUG_WX) || !st->check_wx) 179 179 return; 180 180 181 - if (!((st->current_flags & pgprot_val(PAGE_KERNEL_X)) == pgprot_val(PAGE_KERNEL_X))) 181 + if (!pte_write(pte) || !pte_exec(pte)) 182 182 return; 183 183 184 184 WARN_ONCE(1, "powerpc/mm: Found insecure W+X mapping at address %p/%pS\n",
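The reworked `note_prot_wx()` stops comparing raw flag bits against `PAGE_KERNEL_X` and instead asks the `pte_write()`/`pte_exec()` accessors whether the mapping is simultaneously writable and executable. The predicate itself is trivial, sketched here with made-up flag bits rather than the real ppc PTE layout:

```c
#include <assert.h>
#include <stdint.h>

#define F_WRITE 0x1u   /* illustrative flag bits, not the real PTE layout */
#define F_EXEC  0x2u

/* A mapping is flagged only when it is both writable and executable,
 * matching the pte_write() && pte_exec() test in note_prot_wx(). */
static int is_insecure_wx(uint32_t flags)
{
    return (flags & F_WRITE) && (flags & F_EXEC);
}
```

Using the accessors matters because on some platforms "executable" is not a single literal bit, which the old raw comparison assumed.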
+2 -12
arch/powerpc/oprofile/backtrace.c
··· 28 28 unsigned int stack_frame[2]; 29 29 void __user *p = compat_ptr(sp); 30 30 31 - if (!access_ok(p, sizeof(stack_frame))) 32 - return 0; 33 - 34 31 /* 35 32 * The most likely reason for this is that we returned -EFAULT, 36 33 * which means that we've done all that we can do from 37 34 * interrupt context. 38 35 */ 39 - if (__copy_from_user_inatomic(stack_frame, p, sizeof(stack_frame))) 36 + if (probe_user_read(stack_frame, (void __user *)p, sizeof(stack_frame))) 40 37 return 0; 41 38 42 39 if (!is_first) ··· 51 54 { 52 55 unsigned long stack_frame[3]; 53 56 54 - if (!access_ok((void __user *)sp, sizeof(stack_frame))) 55 - return 0; 56 - 57 - if (__copy_from_user_inatomic(stack_frame, (void __user *)sp, 58 - sizeof(stack_frame))) 57 + if (probe_user_read(stack_frame, (void __user *)sp, sizeof(stack_frame))) 59 58 return 0; 60 59 61 60 if (!is_first) ··· 96 103 first_frame = 0; 97 104 } 98 105 } else { 99 - pagefault_disable(); 100 106 #ifdef CONFIG_PPC64 101 107 if (!is_32bit_task()) { 102 108 while (depth--) { ··· 104 112 break; 105 113 first_frame = 0; 106 114 } 107 - pagefault_enable(); 108 115 return; 109 116 } 110 117 #endif ··· 114 123 break; 115 124 first_frame = 0; 116 125 } 117 - pagefault_enable(); 118 126 } 119 127 }
+8 -4
arch/powerpc/perf/8xx-pmu.c
··· 157 157 158 158 static void mpc8xx_pmu_del(struct perf_event *event, int flags) 159 159 { 160 - /* mfspr r10, SPRN_SPRG_SCRATCH0 */ 161 - unsigned int insn = PPC_INST_MFSPR | __PPC_RS(R10) | 162 - __PPC_SPR(SPRN_SPRG_SCRATCH0); 163 - 164 160 mpc8xx_pmu_read(event); 165 161 166 162 /* If it was the last user, stop counting to avoid useles overhead */ ··· 169 173 break; 170 174 case PERF_8xx_ID_ITLB_LOAD_MISS: 171 175 if (atomic_dec_return(&itlb_miss_ref) == 0) { 176 + /* mfspr r10, SPRN_SPRG_SCRATCH0 */ 177 + unsigned int insn = PPC_INST_MFSPR | __PPC_RS(R10) | 178 + __PPC_SPR(SPRN_SPRG_SCRATCH0); 179 + 172 180 patch_instruction_site(&patch__itlbmiss_exit_1, insn); 173 181 #ifndef CONFIG_PIN_TLB_TEXT 174 182 patch_instruction_site(&patch__itlbmiss_exit_2, insn); ··· 181 181 break; 182 182 case PERF_8xx_ID_DTLB_LOAD_MISS: 183 183 if (atomic_dec_return(&dtlb_miss_ref) == 0) { 184 + /* mfspr r10, SPRN_DAR */ 185 + unsigned int insn = PPC_INST_MFSPR | __PPC_RS(R10) | 186 + __PPC_SPR(SPRN_DAR); 187 + 184 188 patch_instruction_site(&patch__dtlbmiss_exit_1, insn); 185 189 patch_instruction_site(&patch__dtlbmiss_exit_2, insn); 186 190 patch_instruction_site(&patch__dtlbmiss_exit_3, insn);
+3 -17
arch/powerpc/perf/callchain.c
··· 155 155 ((unsigned long)ptr & 7)) 156 156 return -EFAULT; 157 157 158 - pagefault_disable(); 159 - if (!__get_user_inatomic(*ret, ptr)) { 160 - pagefault_enable(); 158 + if (!probe_user_read(ret, ptr, sizeof(*ret))) 161 159 return 0; 162 - } 163 - pagefault_enable(); 164 160 165 161 return read_user_stack_slow(ptr, ret, 8); 166 162 } ··· 167 171 ((unsigned long)ptr & 3)) 168 172 return -EFAULT; 169 173 170 - pagefault_disable(); 171 - if (!__get_user_inatomic(*ret, ptr)) { 172 - pagefault_enable(); 174 + if (!probe_user_read(ret, ptr, sizeof(*ret))) 173 175 return 0; 174 - } 175 - pagefault_enable(); 176 176 177 177 return read_user_stack_slow(ptr, ret, 4); 178 178 } ··· 285 293 */ 286 294 static int read_user_stack_32(unsigned int __user *ptr, unsigned int *ret) 287 295 { 288 - int rc; 289 - 290 296 if ((unsigned long)ptr > TASK_SIZE - sizeof(unsigned int) || 291 297 ((unsigned long)ptr & 3)) 292 298 return -EFAULT; 293 299 294 - pagefault_disable(); 295 - rc = __get_user_inatomic(*ret, ptr); 296 - pagefault_enable(); 297 - 298 - return rc; 300 + return probe_user_read(ret, ptr, sizeof(*ret)); 299 301 } 300 302 301 303 static inline void perf_callchain_user_64(struct perf_callchain_entry_ctx *entry,
+1 -7
arch/powerpc/perf/core-book3s.c
··· 415 415 static __u64 power_pmu_bhrb_to(u64 addr) 416 416 { 417 417 unsigned int instr; 418 - int ret; 419 418 __u64 target; 420 419 421 420 if (is_kernel_addr(addr)) { ··· 425 426 } 426 427 427 428 /* Userspace: need copy instruction here then translate it */ 428 - pagefault_disable(); 429 - ret = __get_user_inatomic(instr, (unsigned int __user *)addr); 430 - if (ret) { 431 - pagefault_enable(); 429 + if (probe_user_read(&instr, (unsigned int __user *)addr, sizeof(instr))) 432 430 return 0; 433 - } 434 - pagefault_enable(); 435 431 436 432 target = branch_target(&instr); 437 433 if ((!target) || (instr & BRANCH_ABSOLUTE))
+3 -3
arch/powerpc/platforms/512x/mpc512x_lpbfifo.c
··· 434 434 memset(&lpbfifo, 0, sizeof(struct lpbfifo_data)); 435 435 spin_lock_init(&lpbfifo.lock); 436 436 437 - lpbfifo.chan = dma_request_slave_channel(&pdev->dev, "rx-tx"); 438 - if (lpbfifo.chan == NULL) 439 - return -EPROBE_DEFER; 437 + lpbfifo.chan = dma_request_chan(&pdev->dev, "rx-tx"); 438 + if (IS_ERR(lpbfifo.chan)) 439 + return PTR_ERR(lpbfifo.chan); 440 440 441 441 if (of_address_to_resource(pdev->dev.of_node, 0, &r) != 0) { 442 442 dev_err(&pdev->dev, "bad 'reg' in 'sclpc' device tree node\n");
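Unlike the deprecated `dma_request_slave_channel()`, `dma_request_chan()` reports failures as ERR_PTR-encoded pointers, which is why the hunk switches from a NULL check to `IS_ERR()`/`PTR_ERR()`. The kernel's encoding packs small negative errnos into the last page of the address range so one pointer can carry either a channel or an error; a self-contained sketch:

```c
#include <assert.h>
#include <stdint.h>

#define MAX_ERRNO 4095

/* Kernel-style ERR_PTR encoding: values in the top MAX_ERRNO slots of
 * the address space are error codes, everything else is a real pointer. */
static void *ERR_PTR(long err)      { return (void *)err; }
static long  PTR_ERR(const void *p) { return (long)p; }
static int   IS_ERR(const void *p)
{
    return (unsigned long)p >= (unsigned long)-MAX_ERRNO;
}
```

Propagating `PTR_ERR(lpbfifo.chan)` also preserves `-EPROBE_DEFER` when the DMA provider has not probed yet, rather than hard-coding it.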
+1 -1
arch/powerpc/platforms/83xx/km83xx.c
··· 64 64 return; 65 65 } 66 66 67 - base = ioremap(res.start, res.end - res.start + 1); 67 + base = ioremap(res.start, resource_size(&res)); 68 68 69 69 /* 70 70 * set output delay adjustments to default values according
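`struct resource` ranges are inclusive on both ends, so the open-coded `res.end - res.start + 1` in km83xx.c is exactly what `resource_size()` computes; the hunk just switches to the helper. A minimal model:

```c
#include <assert.h>
#include <stdint.h>

/* Minimal stand-in for struct resource: [start, end] is inclusive,
 * so a one-page resource at 0x1000 ends at 0x1fff. */
struct res { uint64_t start, end; };

static uint64_t res_size(const struct res *r)
{
    return r->end - r->start + 1;
}
```

Forgetting the `+ 1` (or keeping the arithmetic inline) is a classic off-by-one source, which is why the helper is preferred.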
+9
arch/powerpc/platforms/85xx/smp.c
··· 252 252 out_be64((u64 *)(&spin_table->addr_h), 253 253 __pa(ppc_function_entry(generic_secondary_smp_init))); 254 254 #else 255 + #ifdef CONFIG_PHYS_ADDR_T_64BIT 256 + /* 257 + * We need also to write addr_h to spin table for systems 258 + * in which their physical memory start address was configured 259 + * to above 4G, otherwise the secondary core can not get 260 + * correct entry to start from. 261 + */ 262 + out_be32(&spin_table->addr_h, __pa(__early_start) >> 32); 263 + #endif 255 264 out_be32(&spin_table->addr_l, __pa(__early_start)); 256 265 #endif 257 266 flush_spin_table(spin_table);
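The 85xx fix writes the high word of the secondary-core entry address into `addr_h` whenever `CONFIG_PHYS_ADDR_T_64BIT` is set, so boards with RAM above 4G hand the spin-table waiter a complete 64-bit physical address. The split/recombine is a plain shift-and-mask, sketched here with an assumed two-field spin-table layout:

```c
#include <assert.h>
#include <stdint.h>

/* Assumed shape of the spin-table entry fields touched by the patch:
 * the 64-bit physical entry point is split across two 32-bit words. */
struct spin_table { uint32_t addr_h, addr_l; };

static void spin_table_set(struct spin_table *t, uint64_t pa)
{
    t->addr_h = (uint32_t)(pa >> 32);   /* needed for memory above 4G */
    t->addr_l = (uint32_t)pa;
}

static uint64_t spin_table_get(const struct spin_table *t)
{
    return ((uint64_t)t->addr_h << 32) | t->addr_l;
}
```

Before the fix only `addr_l` was written on 32-bit kernels, so a secondary core on a >4G system would jump to a truncated address.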
+1 -4
arch/powerpc/platforms/85xx/twr_p102x.c
··· 60 60 */ 61 61 static void __init twr_p1025_setup_arch(void) 62 62 { 63 - #ifdef CONFIG_QUICC_ENGINE 64 - struct device_node *np; 65 - #endif 66 - 67 63 if (ppc_md.progress) 68 64 ppc_md.progress("twr_p1025_setup_arch()", 0); 69 65 ··· 73 77 #if IS_ENABLED(CONFIG_UCC_GETH) || IS_ENABLED(CONFIG_SERIAL_QE) 74 78 if (machine_is(twr_p1025)) { 75 79 struct ccsr_guts __iomem *guts; 80 + struct device_node *np; 76 81 77 82 np = of_find_compatible_node(NULL, NULL, "fsl,p1021-guts"); 78 83 if (np) {
+4
arch/powerpc/platforms/Kconfig
··· 177 177 config PPC_P7_NAP 178 178 bool 179 179 180 + config PPC_BOOK3S_IDLE 181 + def_bool y 182 + depends on (PPC_970_NAP || PPC_P7_NAP) 183 + 180 184 config PPC_INDIRECT_PIO 181 185 bool 182 186 select GENERIC_IOMAP
+8
arch/powerpc/platforms/Kconfig.cputype
··· 1 1 # SPDX-License-Identifier: GPL-2.0 2 + config PPC32 3 + bool 4 + default y if !PPC64 5 + select KASAN_VMALLOC if KASAN && MODULES 6 + 2 7 config PPC64 3 8 bool "64-bit kernel" 4 9 select ZLIB_DEFLATE ··· 36 31 select PPC_HAVE_PMU_SUPPORT 37 32 select PPC_HAVE_KUEP 38 33 select PPC_HAVE_KUAP 34 + select HAVE_ARCH_VMAP_STACK 39 35 40 36 config PPC_BOOK3S_601 41 37 bool "PowerPC 601" 42 38 select PPC_BOOK3S_32 43 39 select PPC_FPU 44 40 select PPC_HAVE_KUAP 41 + select HAVE_ARCH_VMAP_STACK 45 42 46 43 config PPC_85xx 47 44 bool "Freescale 85xx" ··· 56 49 select PPC_HAVE_KUEP 57 50 select PPC_HAVE_KUAP 58 51 select PPC_MM_SLICES if HUGETLB_PAGE 52 + select HAVE_ARCH_VMAP_STACK 59 53 60 54 config 40x 61 55 bool "AMCC 40x"
+1 -1
arch/powerpc/platforms/maple/setup.c
··· 229 229 root = of_find_node_by_path("/"); 230 230 naddr = of_n_addr_cells(root); 231 231 opprop = of_get_property(root, "platform-open-pic", &opplen); 232 - if (opprop != 0) { 232 + if (opprop) { 233 233 openpic_addr = of_read_number(opprop, naddr); 234 234 has_isus = (opplen > naddr); 235 235 printk(KERN_DEBUG "OpenPIC addr: %lx, has ISUs: %d\n",
+80 -78
arch/powerpc/platforms/powernv/opal.c
··· 790 790 return 0; 791 791 } 792 792 793 - static ssize_t symbol_map_read(struct file *fp, struct kobject *kobj, 794 - struct bin_attribute *bin_attr, 795 - char *buf, loff_t off, size_t count) 796 - { 797 - return memory_read_from_buffer(buf, count, &off, bin_attr->private, 798 - bin_attr->size); 799 - } 800 - 801 - static struct bin_attribute symbol_map_attr = { 802 - .attr = {.name = "symbol_map", .mode = 0400}, 803 - .read = symbol_map_read 804 - }; 805 - 806 - static void opal_export_symmap(void) 807 - { 808 - const __be64 *syms; 809 - unsigned int size; 810 - struct device_node *fw; 811 - int rc; 812 - 813 - fw = of_find_node_by_path("/ibm,opal/firmware"); 814 - if (!fw) 815 - return; 816 - syms = of_get_property(fw, "symbol-map", &size); 817 - if (!syms || size != 2 * sizeof(__be64)) 818 - return; 819 - 820 - /* Setup attributes */ 821 - symbol_map_attr.private = __va(be64_to_cpu(syms[0])); 822 - symbol_map_attr.size = be64_to_cpu(syms[1]); 823 - 824 - rc = sysfs_create_bin_file(opal_kobj, &symbol_map_attr); 825 - if (rc) 826 - pr_warn("Error %d creating OPAL symbols file\n", rc); 827 - } 828 - 829 793 static ssize_t export_attr_read(struct file *fp, struct kobject *kobj, 830 794 struct bin_attribute *bin_attr, char *buf, 831 795 loff_t off, size_t count) 832 796 { 833 797 return memory_read_from_buffer(buf, count, &off, bin_attr->private, 834 798 bin_attr->size); 799 + } 800 + 801 + static int opal_add_one_export(struct kobject *parent, const char *export_name, 802 + struct device_node *np, const char *prop_name) 803 + { 804 + struct bin_attribute *attr = NULL; 805 + const char *name = NULL; 806 + u64 vals[2]; 807 + int rc; 808 + 809 + rc = of_property_read_u64_array(np, prop_name, &vals[0], 2); 810 + if (rc) 811 + goto out; 812 + 813 + attr = kzalloc(sizeof(*attr), GFP_KERNEL); 814 + name = kstrdup(export_name, GFP_KERNEL); 815 + if (!name) { 816 + rc = -ENOMEM; 817 + goto out; 818 + } 819 + 820 + sysfs_bin_attr_init(attr); 821 + attr->attr.name = name; 
822 + attr->attr.mode = 0400; 823 + attr->read = export_attr_read; 824 + attr->private = __va(vals[0]); 825 + attr->size = vals[1]; 826 + 827 + rc = sysfs_create_bin_file(parent, attr); 828 + out: 829 + if (rc) { 830 + kfree(name); 831 + kfree(attr); 832 + } 833 + 834 + return rc; 835 + } 836 + 837 + static void opal_add_exported_attrs(struct device_node *np, 838 + struct kobject *kobj) 839 + { 840 + struct device_node *child; 841 + struct property *prop; 842 + 843 + for_each_property_of_node(np, prop) { 844 + int rc; 845 + 846 + if (!strcmp(prop->name, "name") || 847 + !strcmp(prop->name, "phandle")) 848 + continue; 849 + 850 + rc = opal_add_one_export(kobj, prop->name, np, prop->name); 851 + if (rc) { 852 + pr_warn("Unable to add export %pOF/%s, rc = %d!\n", 853 + np, prop->name, rc); 854 + } 855 + } 856 + 857 + for_each_child_of_node(np, child) { 858 + struct kobject *child_kobj; 859 + 860 + child_kobj = kobject_create_and_add(child->name, kobj); 861 + if (!child_kobj) { 862 + pr_err("Unable to create export dir for %pOF\n", child); 863 + continue; 864 + } 865 + 866 + opal_add_exported_attrs(child, child_kobj); 867 + } 835 868 } 836 869 837 870 /* ··· 876 843 */ 877 844 static void opal_export_attrs(void) 878 845 { 879 - struct bin_attribute *attr; 880 846 struct device_node *np; 881 - struct property *prop; 882 847 struct kobject *kobj; 883 - u64 vals[2]; 884 848 int rc; 885 849 886 850 np = of_find_node_by_path("/ibm,opal/firmware/exports"); ··· 891 861 return; 892 862 } 893 863 894 - for_each_property_of_node(np, prop) { 895 - if (!strcmp(prop->name, "name") || !strcmp(prop->name, "phandle")) 896 - continue; 864 + opal_add_exported_attrs(np, kobj); 897 865 898 - if (of_property_read_u64_array(np, prop->name, &vals[0], 2)) 899 - continue; 900 - 901 - attr = kzalloc(sizeof(*attr), GFP_KERNEL); 902 - 903 - if (attr == NULL) { 904 - pr_warn("Failed kmalloc for bin_attribute!"); 905 - continue; 906 - } 907 - 908 - sysfs_bin_attr_init(attr); 909 - attr->attr.name = 
kstrdup(prop->name, GFP_KERNEL); 910 - attr->attr.mode = 0400; 911 - attr->read = export_attr_read; 912 - attr->private = __va(vals[0]); 913 - attr->size = vals[1]; 914 - 915 - if (attr->attr.name == NULL) { 916 - pr_warn("Failed kstrdup for bin_attribute attr.name"); 917 - kfree(attr); 918 - continue; 919 - } 920 - 921 - rc = sysfs_create_bin_file(kobj, attr); 922 - if (rc) { 923 - pr_warn("Error %d creating OPAL sysfs exports/%s file\n", 924 - rc, prop->name); 925 - kfree(attr->attr.name); 926 - kfree(attr); 927 - } 928 - } 866 + /* 867 + * NB: symbol_map existed before the generic export interface so it 868 + * lives under the top level opal_kobj. 869 + */ 870 + rc = opal_add_one_export(opal_kobj, "symbol_map", 871 + np->parent, "symbol-map"); 872 + if (rc) 873 + pr_warn("Error %d creating OPAL symbols file\n", rc); 929 874 930 875 of_node_put(np); 931 876 } ··· 1047 1042 /* Create "opal" kobject under /sys/firmware */ 1048 1043 rc = opal_sysfs_init(); 1049 1044 if (rc == 0) { 1050 - /* Export symbol map to userspace */ 1051 - opal_export_symmap(); 1052 1045 /* Setup dump region interface */ 1053 1046 opal_dump_region_init(); 1054 1047 /* Setup error log interface */ ··· 1059 1056 opal_sys_param_init(); 1060 1057 /* Setup message log sysfs interface. */ 1061 1058 opal_msglog_sysfs_init(); 1059 + /* Add all export properties */ 1060 + opal_export_attrs(); 1062 1061 } 1063 - 1064 - /* Export all properties */ 1065 - opal_export_attrs(); 1066 1062 1067 1063 /* Initialize platform devices: IPMI backend, PRD & flash interface */ 1068 1064 opal_pdev_init("ibm,opal-ipmi");
+180 -64
arch/powerpc/platforms/powernv/pci-ioda.c
··· 188 188 unsigned int pe_num = pe->pe_number; 189 189 190 190 WARN_ON(pe->pdev); 191 - WARN_ON(pe->npucomp); /* NPUs are not supposed to be freed */ 191 + WARN_ON(pe->npucomp); /* NPUs for nvlink are not supposed to be freed */ 192 192 kfree(pe->npucomp); 193 193 memset(pe, 0, sizeof(struct pnv_ioda_pe)); 194 194 clear_bit(pe_num, phb->ioda.pe_alloc); ··· 777 777 return 0; 778 778 } 779 779 780 + static void pnv_ioda_unset_peltv(struct pnv_phb *phb, 781 + struct pnv_ioda_pe *pe, 782 + struct pci_dev *parent) 783 + { 784 + int64_t rc; 785 + 786 + while (parent) { 787 + struct pci_dn *pdn = pci_get_pdn(parent); 788 + 789 + if (pdn && pdn->pe_number != IODA_INVALID_PE) { 790 + rc = opal_pci_set_peltv(phb->opal_id, pdn->pe_number, 791 + pe->pe_number, 792 + OPAL_REMOVE_PE_FROM_DOMAIN); 793 + /* XXX What to do in case of error ? */ 794 + } 795 + parent = parent->bus->self; 796 + } 797 + 798 + opal_pci_eeh_freeze_clear(phb->opal_id, pe->pe_number, 799 + OPAL_EEH_ACTION_CLEAR_FREEZE_ALL); 800 + 801 + /* Disassociate PE in PELT */ 802 + rc = opal_pci_set_peltv(phb->opal_id, pe->pe_number, 803 + pe->pe_number, OPAL_REMOVE_PE_FROM_DOMAIN); 804 + if (rc) 805 + pe_warn(pe, "OPAL error %lld remove self from PELTV\n", rc); 806 + } 807 + 780 808 static int pnv_ioda_deconfigure_pe(struct pnv_phb *phb, struct pnv_ioda_pe *pe) 781 809 { 782 810 struct pci_dev *parent; ··· 820 792 fcomp = OPAL_IGNORE_RID_FUNCTION_NUMBER; 821 793 parent = pe->pbus->self; 822 794 if (pe->flags & PNV_IODA_PE_BUS_ALL) 823 - count = pe->pbus->busn_res.end - pe->pbus->busn_res.start + 1; 795 + count = resource_size(&pe->pbus->busn_res); 824 796 else 825 797 count = 1; 826 798 ··· 855 827 for (rid = pe->rid; rid < rid_end; rid++) 856 828 phb->ioda.pe_rmap[rid] = IODA_INVALID_PE; 857 829 858 - /* Release from all parents PELT-V */ 859 - while (parent) { 860 - struct pci_dn *pdn = pci_get_pdn(parent); 861 - if (pdn && pdn->pe_number != IODA_INVALID_PE) { 862 - rc = opal_pci_set_peltv(phb->opal_id, 
pdn->pe_number, 863 - pe->pe_number, OPAL_REMOVE_PE_FROM_DOMAIN); 864 - /* XXX What to do in case of error ? */ 865 - } 866 - parent = parent->bus->self; 867 - } 830 + /* 831 + * Release from all parents PELT-V. NPUs don't have a PELTV 832 + * table 833 + */ 834 + if (phb->type != PNV_PHB_NPU_NVLINK && phb->type != PNV_PHB_NPU_OCAPI) 835 + pnv_ioda_unset_peltv(phb, pe, parent); 868 836 869 - opal_pci_eeh_freeze_clear(phb->opal_id, pe->pe_number, 870 - OPAL_EEH_ACTION_CLEAR_FREEZE_ALL); 871 - 872 - /* Disassociate PE in PELT */ 873 - rc = opal_pci_set_peltv(phb->opal_id, pe->pe_number, 874 - pe->pe_number, OPAL_REMOVE_PE_FROM_DOMAIN); 875 - if (rc) 876 - pe_warn(pe, "OPAL error %lld remove self from PELTV\n", rc); 877 837 rc = opal_pci_set_pe(phb->opal_id, pe->pe_number, pe->rid, 878 838 bcomp, dcomp, fcomp, OPAL_UNMAP_PE); 879 839 if (rc) ··· 890 874 fcomp = OPAL_IGNORE_RID_FUNCTION_NUMBER; 891 875 parent = pe->pbus->self; 892 876 if (pe->flags & PNV_IODA_PE_BUS_ALL) 893 - count = pe->pbus->busn_res.end - pe->pbus->busn_res.start + 1; 877 + count = resource_size(&pe->pbus->busn_res); 894 878 else 895 879 count = 1; 896 880 ··· 1078 1062 return NULL; 1079 1063 } 1080 1064 1081 - /* NOTE: We get only one ref to the pci_dev for the pdn, not for the 1082 - * pointer in the PE data structure, both should be destroyed at the 1083 - * same time. However, this needs to be looked at more closely again 1084 - * once we actually start removing things (Hotplug, SR-IOV, ...) 1065 + /* NOTE: We don't get a reference for the pointer in the PE 1066 + * data structure, both the device and PE structures should be 1067 + * destroyed at the same time. However, removing nvlink 1068 + * devices will need some work. 
1085 1069 * 1086 1070 * At some point we want to remove the PDN completely anyways 1087 1071 */ 1088 - pci_dev_get(dev); 1089 1072 pdn->pe_number = pe->pe_number; 1090 1073 pe->flags = PNV_IODA_PE_DEV; 1091 1074 pe->pdev = dev; 1092 1075 pe->pbus = NULL; 1093 1076 pe->mve_number = -1; 1094 1077 pe->rid = dev->bus->number << 8 | pdn->devfn; 1078 + pe->device_count++; 1095 1079 1096 1080 pe_info(pe, "Associated device to PE\n"); 1097 1081 ··· 1100 1084 pnv_ioda_free_pe(pe); 1101 1085 pdn->pe_number = IODA_INVALID_PE; 1102 1086 pe->pdev = NULL; 1103 - pci_dev_put(dev); 1104 1087 return NULL; 1105 1088 } 1106 1089 1107 1090 /* Put PE to the list */ 1091 + mutex_lock(&phb->ioda.pe_list_mutex); 1108 1092 list_add_tail(&pe->list, &phb->ioda.pe_list); 1109 - 1093 + mutex_unlock(&phb->ioda.pe_list_mutex); 1110 1094 return pe; 1111 1095 } 1112 1096 ··· 1222 1206 struct pnv_phb *phb = hose->private_data; 1223 1207 1224 1208 /* 1209 + * Intentionally leak a reference on the npu device (for 1210 + * nvlink only; this is not an opencapi path) to make sure it 1211 + * never goes away, as it's been the case all along and some 1212 + * work is needed otherwise. 1213 + */ 1214 + pci_dev_get(npu_pdev); 1215 + 1216 + /* 1225 1217 * Due to a hardware errata PE#0 on the NPU is reserved for 1226 1218 * error handling. 
This means we only have three PEs remaining 1227 1219 * which need to be assigned to four links, implying some ··· 1252 1228 */ 1253 1229 dev_info(&npu_pdev->dev, 1254 1230 "Associating to existing PE %x\n", pe_num); 1255 - pci_dev_get(npu_pdev); 1256 1231 npu_pdn = pci_get_pdn(npu_pdev); 1257 1232 rid = npu_pdev->bus->number << 8 | npu_pdn->devfn; 1258 1233 npu_pdn->pe_number = pe_num; 1259 1234 phb->ioda.pe_rmap[rid] = pe->pe_number; 1235 + pe->device_count++; 1260 1236 1261 1237 /* Map the PE to this link */ 1262 1238 rc = opal_pci_set_pe(phb->opal_id, pe_num, rid, ··· 1292 1268 { 1293 1269 struct pci_controller *hose; 1294 1270 struct pnv_phb *phb; 1295 - struct pci_bus *bus; 1296 - struct pci_dev *pdev; 1297 1271 struct pnv_ioda_pe *pe; 1298 1272 1299 1273 list_for_each_entry(hose, &hose_list, list_node) { ··· 1302 1280 pnv_ioda_setup_npu_PEs(hose->bus); 1303 1281 if (phb->model == PNV_PHB_MODEL_NPU2) 1304 1282 WARN_ON_ONCE(pnv_npu2_init(hose)); 1305 - } 1306 - if (phb->type == PNV_PHB_NPU_OCAPI) { 1307 - bus = hose->bus; 1308 - list_for_each_entry(pdev, &bus->devices, bus_list) 1309 - pnv_ioda_setup_dev_PE(pdev); 1310 1283 } 1311 1284 } 1312 1285 list_for_each_entry(hose, &hose_list, list_node) { ··· 1575 1558 1576 1559 /* Reserve PE for each VF */ 1577 1560 for (vf_index = 0; vf_index < num_vfs; vf_index++) { 1561 + int vf_devfn = pci_iov_virtfn_devfn(pdev, vf_index); 1562 + int vf_bus = pci_iov_virtfn_bus(pdev, vf_index); 1563 + struct pci_dn *vf_pdn; 1564 + 1578 1565 if (pdn->m64_single_mode) 1579 1566 pe_num = pdn->pe_num_map[vf_index]; 1580 1567 else ··· 1591 1570 pe->pbus = NULL; 1592 1571 pe->parent_dev = pdev; 1593 1572 pe->mve_number = -1; 1594 - pe->rid = (pci_iov_virtfn_bus(pdev, vf_index) << 8) | 1595 - pci_iov_virtfn_devfn(pdev, vf_index); 1573 + pe->rid = (vf_bus << 8) | vf_devfn; 1596 1574 1597 1575 pe_info(pe, "VF %04d:%02d:%02d.%d associated with PE#%x\n", 1598 1576 hose->global_number, pdev->bus->number, 1599 - 
PCI_SLOT(pci_iov_virtfn_devfn(pdev, vf_index)), 1600 - PCI_FUNC(pci_iov_virtfn_devfn(pdev, vf_index)), pe_num); 1577 + PCI_SLOT(vf_devfn), PCI_FUNC(vf_devfn), pe_num); 1601 1578 1602 1579 if (pnv_ioda_configure_pe(phb, pe)) { 1603 1580 /* XXX What do we do here ? */ ··· 1608 1589 mutex_lock(&phb->ioda.pe_list_mutex); 1609 1590 list_add_tail(&pe->list, &phb->ioda.pe_list); 1610 1591 mutex_unlock(&phb->ioda.pe_list_mutex); 1592 + 1593 + /* associate this pe to its pdn */ 1594 + list_for_each_entry(vf_pdn, &pdn->parent->child_list, list) { 1595 + if (vf_pdn->busno == vf_bus && 1596 + vf_pdn->devfn == vf_devfn) { 1597 + vf_pdn->pe_number = pe_num; 1598 + break; 1599 + } 1600 + } 1611 1601 1612 1602 pnv_pci_ioda2_setup_dma_pe(phb, pe); 1613 1603 #ifdef CONFIG_IOMMU_API ··· 1747 1719 pnv_pci_sriov_disable(pdev); 1748 1720 1749 1721 /* Release PCI data */ 1750 - remove_dev_pci_data(pdev); 1722 + remove_sriov_vf_pdns(pdev); 1751 1723 return 0; 1752 1724 } 1753 1725 1754 1726 int pnv_pcibios_sriov_enable(struct pci_dev *pdev, u16 num_vfs) 1755 1727 { 1756 1728 /* Allocate PCI data */ 1757 - add_dev_pci_data(pdev); 1729 + add_sriov_vf_pdns(pdev); 1758 1730 1759 1731 return pnv_pci_sriov_enable(pdev, num_vfs); 1760 1732 } 1761 1733 #endif /* CONFIG_PCI_IOV */ 1762 1734 1763 - static void pnv_pci_ioda_dma_dev_setup(struct pnv_phb *phb, struct pci_dev *pdev) 1735 + static void pnv_pci_ioda_dma_dev_setup(struct pci_dev *pdev) 1764 1736 { 1737 + struct pci_controller *hose = pci_bus_to_host(pdev->bus); 1738 + struct pnv_phb *phb = hose->private_data; 1765 1739 struct pci_dn *pdn = pci_get_pdn(pdev); 1766 1740 struct pnv_ioda_pe *pe; 1767 1741 ··· 2919 2889 struct pci_dn *pdn; 2920 2890 int mul, total_vfs; 2921 2891 2922 - if (!pdev->is_physfn || pci_dev_is_added(pdev)) 2923 - return; 2924 - 2925 2892 pdn = pci_get_pdn(pdev); 2926 2893 pdn->vfs_expanded = 0; 2927 2894 pdn->m64_single_mode = false; ··· 2991 2964 res = &pdev->resource[i + PCI_IOV_RESOURCES]; 2992 2965 res->flags = 
0; 2993 2966 res->end = res->start - 1; 2967 + } 2968 + } 2969 + 2970 + static void pnv_pci_ioda_fixup_iov(struct pci_dev *pdev) 2971 + { 2972 + if (WARN_ON(pci_dev_is_added(pdev))) 2973 + return; 2974 + 2975 + if (pdev->is_virtfn) { 2976 + struct pnv_ioda_pe *pe = pnv_ioda_get_pe(pdev); 2977 + 2978 + /* 2979 + * VF PEs are single-device PEs so their pdev pointer needs to 2980 + * be set. The pdev doesn't exist when the PE is allocated (in 2981 + * pcibios_sriov_enable()) so we fix it up here. 2982 + */ 2983 + pe->pdev = pdev; 2984 + WARN_ON(!(pe->flags & PNV_IODA_PE_VF)); 2985 + } else if (pdev->is_physfn) { 2986 + /* 2987 + * For PFs adjust their allocated IOV resources to match what 2988 + * the PHB can support using its M64 BAR table. 2989 + */ 2990 + pnv_pci_ioda_fixup_iov_resources(pdev); 2994 2991 } 2995 2992 } 2996 2993 #endif /* CONFIG_PCI_IOV */ ··· 3113 3062 #ifdef CONFIG_DEBUG_FS 3114 3063 static int pnv_pci_diag_data_set(void *data, u64 val) 3115 3064 { 3116 - struct pci_controller *hose; 3117 - struct pnv_phb *phb; 3065 + struct pnv_phb *phb = data; 3118 3066 s64 ret; 3119 - 3120 - if (val != 1ULL) 3121 - return -EINVAL; 3122 - 3123 - hose = (struct pci_controller *)data; 3124 - if (!hose || !hose->private_data) 3125 - return -ENODEV; 3126 - 3127 - phb = hose->private_data; 3128 3067 3129 3068 /* Retrieve the diag data from firmware */ 3130 3069 ret = opal_pci_get_phb_diag_data2(phb->opal_id, phb->diag_data, ··· 3129 3088 3130 3089 DEFINE_DEBUGFS_ATTRIBUTE(pnv_pci_diag_data_fops, NULL, pnv_pci_diag_data_set, 3131 3090 "%llu\n"); 3091 + 3092 + static int pnv_pci_ioda_pe_dump(void *data, u64 val) 3093 + { 3094 + struct pnv_phb *phb = data; 3095 + int pe_num; 3096 + 3097 + for (pe_num = 0; pe_num < phb->ioda.total_pe_num; pe_num++) { 3098 + struct pnv_ioda_pe *pe = &phb->ioda.pe_array[pe_num]; 3099 + 3100 + if (!test_bit(pe_num, phb->ioda.pe_alloc)) 3101 + continue; 3102 + 3103 + pe_warn(pe, "rid: %04x dev count: %2d flags: %s%s%s%s%s%s\n", 3104 + 
pe->rid, pe->device_count, 3105 + (pe->flags & PNV_IODA_PE_DEV) ? "dev " : "", 3106 + (pe->flags & PNV_IODA_PE_BUS) ? "bus " : "", 3107 + (pe->flags & PNV_IODA_PE_BUS_ALL) ? "all " : "", 3108 + (pe->flags & PNV_IODA_PE_MASTER) ? "master " : "", 3109 + (pe->flags & PNV_IODA_PE_SLAVE) ? "slave " : "", 3110 + (pe->flags & PNV_IODA_PE_VF) ? "vf " : ""); 3111 + } 3112 + 3113 + return 0; 3114 + } 3115 + 3116 + DEFINE_DEBUGFS_ATTRIBUTE(pnv_pci_ioda_pe_dump_fops, NULL, 3117 + pnv_pci_ioda_pe_dump, "%llu\n"); 3132 3118 3133 3119 #endif /* CONFIG_DEBUG_FS */ 3134 3120 ··· 3181 3113 } 3182 3114 3183 3115 debugfs_create_file_unsafe("dump_diag_regs", 0200, phb->dbgfs, 3184 - hose, &pnv_pci_diag_data_fops); 3116 + phb, &pnv_pci_diag_data_fops); 3117 + debugfs_create_file_unsafe("dump_ioda_pe_state", 0200, phb->dbgfs, 3118 + phb, &pnv_pci_ioda_pe_dump_fops); 3185 3119 } 3186 3120 #endif /* CONFIG_DEBUG_FS */ 3187 3121 } ··· 3453 3383 return true; 3454 3384 } 3455 3385 3386 + static bool pnv_ocapi_enable_device_hook(struct pci_dev *dev) 3387 + { 3388 + struct pci_controller *hose = pci_bus_to_host(dev->bus); 3389 + struct pnv_phb *phb = hose->private_data; 3390 + struct pci_dn *pdn; 3391 + struct pnv_ioda_pe *pe; 3392 + 3393 + if (!phb->initialized) 3394 + return true; 3395 + 3396 + pdn = pci_get_pdn(dev); 3397 + if (!pdn) 3398 + return false; 3399 + 3400 + if (pdn->pe_number == IODA_INVALID_PE) { 3401 + pe = pnv_ioda_setup_dev_PE(dev); 3402 + if (!pe) 3403 + return false; 3404 + } 3405 + return true; 3406 + } 3407 + 3456 3408 static long pnv_pci_ioda1_unset_window(struct iommu_table_group *table_group, 3457 3409 int num) 3458 3410 { ··· 3604 3512 struct pnv_phb *phb = pe->phb; 3605 3513 struct pnv_ioda_pe *slave, *tmp; 3606 3514 3515 + mutex_lock(&phb->ioda.pe_list_mutex); 3607 3516 list_del(&pe->list); 3517 + mutex_unlock(&phb->ioda.pe_list_mutex); 3518 + 3608 3519 switch (phb->type) { 3609 3520 case PNV_PHB_IODA1: 3610 3521 pnv_pci_ioda1_release_pe_dma(pe); 3611 3522 break; 
3612 3523 case PNV_PHB_IODA2: 3613 3524 pnv_pci_ioda2_release_pe_dma(pe); 3525 + break; 3526 + case PNV_PHB_NPU_OCAPI: 3614 3527 break; 3615 3528 default: 3616 3529 WARN_ON(1); ··· 3691 3594 OPAL_ASSERT_RESET); 3692 3595 } 3693 3596 3597 + static void pnv_pci_ioda_dma_bus_setup(struct pci_bus *bus) 3598 + { 3599 + struct pci_controller *hose = bus->sysdata; 3600 + struct pnv_phb *phb = hose->private_data; 3601 + struct pnv_ioda_pe *pe; 3602 + 3603 + list_for_each_entry(pe, &phb->ioda.pe_list, list) { 3604 + if (!(pe->flags & (PNV_IODA_PE_BUS | PNV_IODA_PE_BUS_ALL))) 3605 + continue; 3606 + 3607 + if (!pe->pbus) 3608 + continue; 3609 + 3610 + if (bus->number == ((pe->rid >> 8) & 0xFF)) { 3611 + pe->pbus = bus; 3612 + break; 3613 + } 3614 + } 3615 + } 3616 + 3694 3617 static const struct pci_controller_ops pnv_pci_ioda_controller_ops = { 3695 - .dma_dev_setup = pnv_pci_dma_dev_setup, 3696 - .dma_bus_setup = pnv_pci_dma_bus_setup, 3618 + .dma_dev_setup = pnv_pci_ioda_dma_dev_setup, 3619 + .dma_bus_setup = pnv_pci_ioda_dma_bus_setup, 3697 3620 .iommu_bypass_supported = pnv_pci_ioda_iommu_bypass_supported, 3698 3621 .setup_msi_irqs = pnv_setup_msi_irqs, 3699 3622 .teardown_msi_irqs = pnv_teardown_msi_irqs, ··· 3726 3609 }; 3727 3610 3728 3611 static const struct pci_controller_ops pnv_npu_ioda_controller_ops = { 3729 - .dma_dev_setup = pnv_pci_dma_dev_setup, 3730 3612 .setup_msi_irqs = pnv_setup_msi_irqs, 3731 3613 .teardown_msi_irqs = pnv_teardown_msi_irqs, 3732 3614 .enable_device_hook = pnv_pci_enable_device_hook, ··· 3736 3620 }; 3737 3621 3738 3622 static const struct pci_controller_ops pnv_npu_ocapi_ioda_controller_ops = { 3739 - .enable_device_hook = pnv_pci_enable_device_hook, 3623 + .enable_device_hook = pnv_ocapi_enable_device_hook, 3624 + .release_device = pnv_pci_release_device, 3740 3625 .window_alignment = pnv_pci_window_alignment, 3741 3626 .reset_secondary_bus = pnv_pci_reset_secondary_bus, 3742 3627 .shutdown = pnv_pci_ioda_shutdown, ··· 3972 3855 
hose->controller_ops = pnv_npu_ocapi_ioda_controller_ops; 3973 3856 break; 3974 3857 default: 3975 - phb->dma_dev_setup = pnv_pci_ioda_dma_dev_setup; 3976 3858 hose->controller_ops = pnv_pci_ioda_controller_ops; 3977 3859 } 3978 3860 3979 3861 ppc_md.pcibios_default_alignment = pnv_pci_default_alignment; 3980 3862 3981 3863 #ifdef CONFIG_PCI_IOV 3982 - ppc_md.pcibios_fixup_sriov = pnv_pci_ioda_fixup_iov_resources; 3864 + ppc_md.pcibios_fixup_sriov = pnv_pci_ioda_fixup_iov; 3983 3865 ppc_md.pcibios_iov_resource_alignment = pnv_pci_iov_resource_alignment; 3984 3866 ppc_md.pcibios_sriov_enable = pnv_pcibios_sriov_enable; 3985 3867 ppc_md.pcibios_sriov_disable = pnv_pcibios_sriov_disable;
+14 -57
arch/powerpc/platforms/powernv/pci.c
··· 38 38 39 39 int pnv_pci_get_slot_id(struct device_node *np, uint64_t *id) 40 40 { 41 - struct device_node *parent = np; 41 + struct device_node *node = np; 42 42 u32 bdfn; 43 43 u64 phbid; 44 44 int ret; ··· 48 48 return -ENXIO; 49 49 50 50 bdfn = ((bdfn & 0x00ffff00) >> 8); 51 - while ((parent = of_get_parent(parent))) { 52 - if (!PCI_DN(parent)) { 53 - of_node_put(parent); 51 + for (node = np; node; node = of_get_parent(node)) { 52 + if (!PCI_DN(node)) { 53 + of_node_put(node); 54 54 break; 55 55 } 56 56 57 - if (!of_device_is_compatible(parent, "ibm,ioda2-phb") && 58 - !of_device_is_compatible(parent, "ibm,ioda3-phb")) { 59 - of_node_put(parent); 57 + if (!of_device_is_compatible(node, "ibm,ioda2-phb") && 58 + !of_device_is_compatible(node, "ibm,ioda3-phb") && 59 + !of_device_is_compatible(node, "ibm,ioda2-npu2-opencapi-phb")) { 60 + of_node_put(node); 60 61 continue; 61 62 } 62 63 63 - ret = of_property_read_u64(parent, "ibm,opal-phbid", &phbid); 64 + ret = of_property_read_u64(node, "ibm,opal-phbid", &phbid); 64 65 if (ret) { 65 - of_node_put(parent); 66 + of_node_put(node); 66 67 return -ENXIO; 67 68 } 68 69 69 - *id = PCI_SLOT_ID(phbid, bdfn); 70 + if (of_device_is_compatible(node, "ibm,ioda2-npu2-opencapi-phb")) 71 + *id = PCI_PHB_SLOT_ID(phbid); 72 + else 73 + *id = PCI_SLOT_ID(phbid, bdfn); 70 74 return 0; 71 75 } 72 76 ··· 812 808 kref_init(&tbl->it_kref); 813 809 814 810 return tbl; 815 - } 816 - 817 - void pnv_pci_dma_dev_setup(struct pci_dev *pdev) 818 - { 819 - struct pci_controller *hose = pci_bus_to_host(pdev->bus); 820 - struct pnv_phb *phb = hose->private_data; 821 - #ifdef CONFIG_PCI_IOV 822 - struct pnv_ioda_pe *pe; 823 - struct pci_dn *pdn; 824 - 825 - /* Fix the VF pdn PE number */ 826 - if (pdev->is_virtfn) { 827 - pdn = pci_get_pdn(pdev); 828 - WARN_ON(pdn->pe_number != IODA_INVALID_PE); 829 - list_for_each_entry(pe, &phb->ioda.pe_list, list) { 830 - if (pe->rid == ((pdev->bus->number << 8) | 831 - (pdev->devfn & 0xff))) { 832 - 
pdn->pe_number = pe->pe_number; 833 - pe->pdev = pdev; 834 - break; 835 - } 836 - } 837 - } 838 - #endif /* CONFIG_PCI_IOV */ 839 - 840 - if (phb && phb->dma_dev_setup) 841 - phb->dma_dev_setup(phb, pdev); 842 - } 843 - 844 - void pnv_pci_dma_bus_setup(struct pci_bus *bus) 845 - { 846 - struct pci_controller *hose = bus->sysdata; 847 - struct pnv_phb *phb = hose->private_data; 848 - struct pnv_ioda_pe *pe; 849 - 850 - list_for_each_entry(pe, &phb->ioda.pe_list, list) { 851 - if (!(pe->flags & (PNV_IODA_PE_BUS | PNV_IODA_PE_BUS_ALL))) 852 - continue; 853 - 854 - if (!pe->pbus) 855 - continue; 856 - 857 - if (bus->number == ((pe->rid >> 8) & 0xFF)) { 858 - pe->pbus = bus; 859 - break; 860 - } 861 - } 862 811 } 863 812 864 813 struct device_node *pnv_pci_get_phb_node(struct pci_dev *dev)
-3
arch/powerpc/platforms/powernv/pci.h
··· 108 108 int (*msi_setup)(struct pnv_phb *phb, struct pci_dev *dev, 109 109 unsigned int hwirq, unsigned int virq, 110 110 unsigned int is_64, struct msi_msg *msg); 111 - void (*dma_dev_setup)(struct pnv_phb *phb, struct pci_dev *pdev); 112 111 int (*init_m64)(struct pnv_phb *phb); 113 112 int (*get_pe_state)(struct pnv_phb *phb, int pe_no); 114 113 void (*freeze_pe)(struct pnv_phb *phb, int pe_no); ··· 188 189 extern void pnv_pci_reset_secondary_bus(struct pci_dev *dev); 189 190 extern int pnv_eeh_phb_reset(struct pci_controller *hose, int option); 190 191 191 - extern void pnv_pci_dma_dev_setup(struct pci_dev *pdev); 192 - extern void pnv_pci_dma_bus_setup(struct pci_bus *bus); 193 192 extern int pnv_setup_msi_irqs(struct pci_dev *pdev, int nvec, int type); 194 193 extern void pnv_teardown_msi_irqs(struct pci_dev *pdev); 195 194 extern struct pnv_ioda_pe *pnv_ioda_get_pe(struct pci_dev *dev);
+4
arch/powerpc/platforms/powernv/setup.c
··· 233 233 rc = opal_cec_reboot(); 234 234 else if (strcmp(cmd, "full") == 0) 235 235 rc = opal_cec_reboot2(OPAL_REBOOT_FULL_IPL, NULL); 236 + else if (strcmp(cmd, "mpipl") == 0) 237 + rc = opal_cec_reboot2(OPAL_REBOOT_MPIPL, NULL); 238 + else if (strcmp(cmd, "error") == 0) 239 + rc = opal_cec_reboot2(OPAL_REBOOT_PLATFORM_ERROR, NULL); 236 240 else 237 241 rc = OPAL_UNSUPPORTED; 238 242
-1
arch/powerpc/platforms/pseries/Kconfig
··· 21 21 select PPC_DOORBELL 22 22 select HOTPLUG_CPU 23 23 select ARCH_RANDOM 24 - select PPC_DOORBELL 25 24 select FORCE_SMP 26 25 select SWIOTLB 27 26 default y
+9 -1
arch/powerpc/platforms/pseries/firmware.c
··· 22 22 #include <asm/firmware.h> 23 23 #include <asm/prom.h> 24 24 #include <asm/udbg.h> 25 + #include <asm/svm.h> 25 26 26 27 #include "pseries.h" 27 28 ··· 56 55 {FW_FEATURE_LLAN, "hcall-lLAN"}, 57 56 {FW_FEATURE_BULK_REMOVE, "hcall-bulk"}, 58 57 {FW_FEATURE_XDABR, "hcall-xdabr"}, 59 - {FW_FEATURE_MULTITCE, "hcall-multi-tce"}, 58 + {FW_FEATURE_PUT_TCE_IND | FW_FEATURE_STUFF_TCE, 59 + "hcall-multi-tce"}, 60 60 {FW_FEATURE_SPLPAR, "hcall-splpar"}, 61 61 {FW_FEATURE_VPHN, "hcall-vphn"}, 62 62 {FW_FEATURE_SET_MODE, "hcall-set-mode"}, ··· 100 98 hypertas_fw_features_table[i].val; 101 99 break; 102 100 } 101 + } 102 + 103 + if (is_secure_guest() && 104 + (powerpc_firmware_features & FW_FEATURE_PUT_TCE_IND)) { 105 + powerpc_firmware_features &= ~FW_FEATURE_PUT_TCE_IND; 106 + pr_debug("SVM: disabling PUT_TCE_IND firmware feature\n"); 103 107 } 104 108 105 109 pr_debug(" <- fw_hypertas_feature_init()\n");
+3 -1
arch/powerpc/platforms/pseries/hotplug-memory.c
··· 360 360 361 361 for (i = 0; i < scns_per_block; i++) { 362 362 pfn = PFN_DOWN(phys_addr); 363 - if (!pfn_present(pfn)) 363 + if (!pfn_present(pfn)) { 364 + phys_addr += MIN_MEMORY_BLOCK_SIZE; 364 365 continue; 366 + } 365 367 366 368 rc = rc && is_mem_section_removable(pfn, PAGES_PER_SECTION); 367 369 phys_addr += MIN_MEMORY_BLOCK_SIZE;
+36 -28
arch/powerpc/platforms/pseries/iommu.c
··· 36 36 #include <asm/udbg.h> 37 37 #include <asm/mmzone.h> 38 38 #include <asm/plpar_wrappers.h> 39 - #include <asm/svm.h> 40 39 41 40 #include "pseries.h" 42 41 ··· 132 133 return be64_to_cpu(*tcep); 133 134 } 134 135 135 - static void tce_free_pSeriesLP(struct iommu_table*, long, long); 136 + static void tce_free_pSeriesLP(unsigned long liobn, long, long); 136 137 static void tce_freemulti_pSeriesLP(struct iommu_table*, long, long); 137 138 138 - static int tce_build_pSeriesLP(struct iommu_table *tbl, long tcenum, 139 + static int tce_build_pSeriesLP(unsigned long liobn, long tcenum, long tceshift, 139 140 long npages, unsigned long uaddr, 140 141 enum dma_data_direction direction, 141 142 unsigned long attrs) ··· 146 147 int ret = 0; 147 148 long tcenum_start = tcenum, npages_start = npages; 148 149 149 - rpn = __pa(uaddr) >> TCE_SHIFT; 150 + rpn = __pa(uaddr) >> tceshift; 150 151 proto_tce = TCE_PCI_READ; 151 152 if (direction != DMA_TO_DEVICE) 152 153 proto_tce |= TCE_PCI_WRITE; 153 154 154 155 while (npages--) { 155 - tce = proto_tce | (rpn & TCE_RPN_MASK) << TCE_RPN_SHIFT; 156 - rc = plpar_tce_put((u64)tbl->it_index, (u64)tcenum << 12, tce); 156 + tce = proto_tce | (rpn & TCE_RPN_MASK) << tceshift; 157 + rc = plpar_tce_put((u64)liobn, (u64)tcenum << tceshift, tce); 157 158 158 159 if (unlikely(rc == H_NOT_ENOUGH_RESOURCES)) { 159 160 ret = (int)rc; 160 - tce_free_pSeriesLP(tbl, tcenum_start, 161 + tce_free_pSeriesLP(liobn, tcenum_start, 161 162 (npages_start - (npages + 1))); 162 163 break; 163 164 } 164 165 165 166 if (rc && printk_ratelimit()) { 166 167 printk("tce_build_pSeriesLP: plpar_tce_put failed. 
rc=%lld\n", rc); 167 - printk("\tindex = 0x%llx\n", (u64)tbl->it_index); 168 + printk("\tindex = 0x%llx\n", (u64)liobn); 168 169 printk("\ttcenum = 0x%llx\n", (u64)tcenum); 169 170 printk("\ttce val = 0x%llx\n", tce ); 170 171 dump_stack(); ··· 192 193 int ret = 0; 193 194 unsigned long flags; 194 195 195 - if ((npages == 1) || !firmware_has_feature(FW_FEATURE_MULTITCE)) { 196 - return tce_build_pSeriesLP(tbl, tcenum, npages, uaddr, 196 + if ((npages == 1) || !firmware_has_feature(FW_FEATURE_PUT_TCE_IND)) { 197 + return tce_build_pSeriesLP(tbl->it_index, tcenum, 198 + tbl->it_page_shift, npages, uaddr, 197 199 direction, attrs); 198 200 } 199 201 ··· 210 210 /* If allocation fails, fall back to the loop implementation */ 211 211 if (!tcep) { 212 212 local_irq_restore(flags); 213 - return tce_build_pSeriesLP(tbl, tcenum, npages, uaddr, 214 - direction, attrs); 213 + return tce_build_pSeriesLP(tbl->it_index, tcenum, 214 + tbl->it_page_shift, 215 + npages, uaddr, direction, attrs); 215 216 } 216 217 __this_cpu_write(tce_page, tcep); 217 218 } ··· 263 262 return ret; 264 263 } 265 264 266 - static void tce_free_pSeriesLP(struct iommu_table *tbl, long tcenum, long npages) 265 + static void tce_free_pSeriesLP(unsigned long liobn, long tcenum, long npages) 267 266 { 268 267 u64 rc; 269 268 270 269 while (npages--) { 271 - rc = plpar_tce_put((u64)tbl->it_index, (u64)tcenum << 12, 0); 270 + rc = plpar_tce_put((u64)liobn, (u64)tcenum << 12, 0); 272 271 273 272 if (rc && printk_ratelimit()) { 274 273 printk("tce_free_pSeriesLP: plpar_tce_put failed. 
rc=%lld\n", rc); 275 - printk("\tindex = 0x%llx\n", (u64)tbl->it_index); 274 + printk("\tindex = 0x%llx\n", (u64)liobn); 276 275 printk("\ttcenum = 0x%llx\n", (u64)tcenum); 277 276 dump_stack(); 278 277 } ··· 286 285 { 287 286 u64 rc; 288 287 289 - if (!firmware_has_feature(FW_FEATURE_MULTITCE)) 290 - return tce_free_pSeriesLP(tbl, tcenum, npages); 288 + if (!firmware_has_feature(FW_FEATURE_STUFF_TCE)) 289 + return tce_free_pSeriesLP(tbl->it_index, tcenum, npages); 291 290 292 291 rc = plpar_tce_stuff((u64)tbl->it_index, (u64)tcenum << 12, 0, npages); 293 292 ··· 401 400 u32 tce_shift; 402 401 u64 rc = 0; 403 402 long l, limit; 403 + 404 + if (!firmware_has_feature(FW_FEATURE_PUT_TCE_IND)) { 405 + unsigned long tceshift = be32_to_cpu(maprange->tce_shift); 406 + unsigned long dmastart = (start_pfn << PAGE_SHIFT) + 407 + be64_to_cpu(maprange->dma_base); 408 + unsigned long tcenum = dmastart >> tceshift; 409 + unsigned long npages = num_pfn << PAGE_SHIFT >> tceshift; 410 + void *uaddr = __va(start_pfn << PAGE_SHIFT); 411 + 412 + return tce_build_pSeriesLP(be32_to_cpu(maprange->liobn), 413 + tcenum, tceshift, npages, (unsigned long) uaddr, 414 + DMA_BIDIRECTIONAL, 0); 415 + } 404 416 405 417 local_irq_disable(); /* to protect tcep and the page behind it */ 406 418 tcep = __this_cpu_read(tce_page); ··· 1334 1320 of_reconfig_notifier_register(&iommu_reconfig_nb); 1335 1321 register_memory_notifier(&iommu_mem_nb); 1336 1322 1337 - /* 1338 - * Secure guest memory is inacessible to devices so regular DMA isn't 1339 - * possible. 1340 - * 1341 - * In that case keep devices' dma_map_ops as NULL so that the generic 1342 - * DMA code path will use SWIOTLB to bounce buffers for DMA. 
1343 - */ 1344 - if (!is_secure_guest()) 1345 - set_pci_dma_ops(&dma_iommu_ops); 1323 + set_pci_dma_ops(&dma_iommu_ops); 1346 1324 } 1347 1325 1348 1326 static int __init disable_multitce(char *str) 1349 1327 { 1350 1328 if (strcmp(str, "off") == 0 && 1351 1329 firmware_has_feature(FW_FEATURE_LPAR) && 1352 - firmware_has_feature(FW_FEATURE_MULTITCE)) { 1330 + (firmware_has_feature(FW_FEATURE_PUT_TCE_IND) || 1331 + firmware_has_feature(FW_FEATURE_STUFF_TCE))) { 1353 1332 printk(KERN_INFO "Disabling MULTITCE firmware feature\n"); 1354 - powerpc_firmware_features &= ~FW_FEATURE_MULTITCE; 1333 + powerpc_firmware_features &= 1334 + ~(FW_FEATURE_PUT_TCE_IND | FW_FEATURE_STUFF_TCE); 1355 1335 } 1356 1336 return 1; 1357 1337 }
+2 -2
arch/powerpc/platforms/pseries/lparcfg.c
··· 435 435 { 436 436 unsigned long maxmem = 0; 437 437 438 - maxmem += drmem_info->n_lmbs * drmem_info->lmb_size; 438 + maxmem += (unsigned long)drmem_info->n_lmbs * drmem_info->lmb_size; 439 439 maxmem += hugetlb_total_pages() * PAGE_SIZE; 440 440 441 - seq_printf(m, "MaxMem=%ld\n", maxmem); 441 + seq_printf(m, "MaxMem=%lu\n", maxmem); 442 442 } 443 443 444 444 static int pseries_lparcfg_data(struct seq_file *m, void *v)
+5 -3
arch/powerpc/platforms/pseries/papr_scm.c
··· 69 69 return rc; 70 70 71 71 p->bound_addr = saved; 72 - dev_dbg(&p->pdev->dev, "bound drc 0x%x to %pR\n", p->drc_index, &p->res); 72 + dev_dbg(&p->pdev->dev, "bound drc 0x%x to 0x%lx\n", 73 + p->drc_index, (unsigned long)saved); 73 74 return rc; 74 75 } 75 76 ··· 134 133 goto err_out; 135 134 136 135 p->bound_addr = start_addr; 137 - dev_dbg(&p->pdev->dev, "bound drc 0x%x to %pR\n", p->drc_index, &p->res); 136 + dev_dbg(&p->pdev->dev, "bound drc 0x%x to 0x%lx\n", p->drc_index, start_addr); 138 137 return rc; 139 138 140 139 err_out: ··· 323 322 p->bus = nvdimm_bus_register(NULL, &p->bus_desc); 324 323 if (!p->bus) { 325 324 dev_err(dev, "Error creating nvdimm bus %pOF\n", p->dn); 325 + kfree(p->bus_desc.provider_name); 326 326 return -ENXIO; 327 327 } 328 328 ··· 358 356 ndr_desc.mapping = &mapping; 359 357 ndr_desc.num_mappings = 1; 360 358 ndr_desc.nd_set = &p->nd_set; 361 - set_bit(ND_REGION_PAGEMAP, &ndr_desc.flags); 362 359 363 360 if (p->is_volatile) 364 361 p->region = nvdimm_volatile_region_create(p->bus, &ndr_desc); ··· 478 477 479 478 nvdimm_bus_unregister(p->bus); 480 479 drc_pmem_unbind(p); 480 + kfree(p->bus_desc.provider_name); 481 481 kfree(p); 482 482 483 483 return 0;
+2 -2
arch/powerpc/platforms/pseries/pci.c
··· 192 192 int pseries_pcibios_sriov_enable(struct pci_dev *pdev, u16 num_vfs) 193 193 { 194 194 /* Allocate PCI data */ 195 - add_dev_pci_data(pdev); 195 + add_sriov_vf_pdns(pdev); 196 196 return pseries_pci_sriov_enable(pdev, num_vfs); 197 197 } 198 198 ··· 204 204 /* Releasing pe_num_map */ 205 205 kfree(pdn->pe_num_map); 206 206 /* Release PCI data */ 207 - remove_dev_pci_data(pdev); 207 + remove_sriov_vf_pdns(pdev); 208 208 pci_vf_drivers_autoprobe(pdev, true); 209 209 return 0; 210 210 }
+2
arch/powerpc/platforms/pseries/vio.c
··· 1176 1176 if (tbl == NULL) 1177 1177 return NULL; 1178 1178 1179 + kref_init(&tbl->it_kref); 1180 + 1179 1181 of_parse_dma_window(dev->dev.of_node, dma_window, 1180 1182 &tbl->it_index, &offset, &size); 1181 1183
+4 -6
arch/powerpc/sysdev/fsl_pci.c
··· 1065 1065 addr += mfspr(SPRN_MCAR); 1066 1066 1067 1067 if (is_in_pci_mem_space(addr)) { 1068 - if (user_mode(regs)) { 1069 - pagefault_disable(); 1070 - ret = get_user(inst, (__u32 __user *)regs->nip); 1071 - pagefault_enable(); 1072 - } else { 1068 + if (user_mode(regs)) 1069 + ret = probe_user_read(&inst, (void __user *)regs->nip, 1070 + sizeof(inst)); 1071 + else 1073 1072 ret = probe_kernel_address((void *)regs->nip, inst); 1074 - } 1075 1073 1076 1074 if (!ret && mcheck_handle_load(regs, inst)) { 1077 1075 regs->nip += 4;
+2 -2
arch/powerpc/sysdev/mpic.c
··· 964 964 }; 965 965 966 966 #ifdef CONFIG_SMP 967 - static struct irq_chip mpic_ipi_chip = { 967 + static const struct irq_chip mpic_ipi_chip = { 968 968 .irq_mask = mpic_mask_ipi, 969 969 .irq_unmask = mpic_unmask_ipi, 970 970 .irq_eoi = mpic_end_ipi, ··· 978 978 }; 979 979 980 980 #ifdef CONFIG_MPIC_U3_HT_IRQS 981 - static struct irq_chip mpic_irq_ht_chip = { 981 + static const struct irq_chip mpic_irq_ht_chip = { 982 982 .irq_startup = mpic_startup_ht_irq, 983 983 .irq_shutdown = mpic_shutdown_ht_irq, 984 984 .irq_mask = mpic_mask_irq,
+12 -8
arch/powerpc/tools/relocs_check.sh
··· 10 10 # based on relocs_check.pl 11 11 # Copyright © 2009 IBM Corporation 12 12 13 - if [ $# -lt 2 ]; then 14 - echo "$0 [path to objdump] [path to vmlinux]" 1>&2 13 + if [ $# -lt 3 ]; then 14 + echo "$0 [path to objdump] [path to nm] [path to vmlinux]" 1>&2 15 15 exit 1 16 16 fi 17 17 18 - # Have Kbuild supply the path to objdump so we handle cross compilation. 18 + # Have Kbuild supply the path to objdump and nm so we handle cross compilation. 19 19 objdump="$1" 20 - vmlinux="$2" 20 + nm="$2" 21 + vmlinux="$3" 22 + 23 + # Remove from the bad relocations those that match an undefined weak symbol 24 + # which will result in an absolute relocation to 0. 25 + # Weak unresolved symbols are of that form in nm output: 26 + # " w _binary__btf_vmlinux_bin_end" 27 + undef_weak_symbols=$($nm "$vmlinux" | awk '$1 ~ /w/ { print $2 }') 21 28 22 29 bad_relocs=$( 23 30 $objdump -R "$vmlinux" | ··· 33 26 # These relocations are okay 34 27 # On PPC64: 35 28 # R_PPC64_RELATIVE, R_PPC64_NONE 36 - # R_PPC64_ADDR64 mach_<name> 37 - # R_PPC64_ADDR64 __crc_<name> 38 29 # On PPC: 39 30 # R_PPC_RELATIVE, R_PPC_ADDR16_HI, 40 31 # R_PPC_ADDR16_HA,R_PPC_ADDR16_LO, ··· 44 39 R_PPC_ADDR16_HA 45 40 R_PPC_RELATIVE 46 41 R_PPC_NONE' | 47 - grep -E -v '\<R_PPC64_ADDR64[[:space:]]+mach_' | 48 - grep -E -v '\<R_PPC64_ADDR64[[:space:]]+__crc_' 42 + ([ "$undef_weak_symbols" ] && grep -F -w -v "$undef_weak_symbols" || cat) 49 43 ) 50 44 51 45 if [ -z "$bad_relocs" ]; then
+2 -2
arch/powerpc/xmon/dis-asm.h
··· 13 13 #else 14 14 static inline int print_insn_powerpc(unsigned long insn, unsigned long memaddr) 15 15 { 16 - printf("%.8x", insn); 16 + printf("%.8lx", insn); 17 17 return 0; 18 18 } 19 19 20 20 static inline int print_insn_spu(unsigned long insn, unsigned long memaddr) 21 21 { 22 - printf("%.8x", insn); 22 + printf("%.8lx", insn); 23 23 return 0; 24 24 } 25 25 #endif
+11 -9
arch/powerpc/xmon/xmon.c
··· 1192 1192 1193 1193 static void bootcmds(void) 1194 1194 { 1195 + char tmp[64]; 1195 1196 int cmd; 1196 1197 1197 1198 cmd = inchar(); 1198 - if (cmd == 'r') 1199 - ppc_md.restart(NULL); 1200 - else if (cmd == 'h') 1199 + if (cmd == 'r') { 1200 + getstring(tmp, 64); 1201 + ppc_md.restart(tmp); 1202 + } else if (cmd == 'h') { 1201 1203 ppc_md.halt(); 1202 - else if (cmd == 'p') 1204 + } else if (cmd == 'p') { 1203 1205 if (pm_power_off) 1204 1206 pm_power_off(); 1207 + } 1205 1208 } 1206 1209 1207 1210 static int cpu_cmd(void) ··· 1952 1949 1953 1950 printf("pidr = %.16lx tidr = %.16lx\n", 1954 1951 mfspr(SPRN_PID), mfspr(SPRN_TIDR)); 1955 - printf("asdr = %.16lx psscr = %.16lx\n", 1956 - mfspr(SPRN_ASDR), hv ? mfspr(SPRN_PSSCR) 1957 - : mfspr(SPRN_PSSCR_PR)); 1952 + printf("psscr = %.16lx\n", 1953 + hv ? mfspr(SPRN_PSSCR) : mfspr(SPRN_PSSCR_PR)); 1958 1954 1959 1955 if (!hv) 1960 1956 return; 1961 1957 1962 - printf("ptcr = %.16lx\n", 1963 - mfspr(SPRN_PTCR)); 1958 + printf("ptcr = %.16lx asdr = %.16lx\n", 1959 + mfspr(SPRN_PTCR), mfspr(SPRN_ASDR)); 1964 1960 #endif 1965 1961 } 1966 1962
+3 -3
drivers/macintosh/Kconfig
··· 178 178 depends on I2C && I2C_POWERMAC && PPC_PMAC && !PPC_PMAC64 179 179 help 180 180 This driver provides some thermostat and fan control for the 181 - iBook G4, and the ATI based aluminium PowerBooks, allowing slightly 181 + iBook G4, and the ATI based aluminium PowerBooks, allowing slightly 182 182 better fan behaviour by default, and some manual control. 183 183 184 184 config WINDFARM ··· 214 214 select I2C_POWERMAC 215 215 help 216 216 This driver provides thermal control for the PowerMac9,1 217 - which is the recent (SMU based) single CPU desktop G5 217 + which is the recent (SMU based) single CPU desktop G5 218 218 219 219 config WINDFARM_PM112 220 220 tristate "Support for thermal management on PowerMac11,2" ··· 242 242 depends on PPC_PMAC 243 243 help 244 244 This driver provides some support to control the front panel 245 - blue LEDs "vu-meter" of the XServer macs. 245 + blue LEDs "vu-meter" of the XServer macs. 246 246 247 247 config SENSORS_AMS 248 248 tristate "Apple Motion Sensor driver"
+1
drivers/misc/ocxl/Kconfig
··· 11 11 tristate "OpenCAPI coherent accelerator support" 12 12 depends on PPC_POWERNV && PCI && EEH 13 13 select OCXL_BASE 14 + select HOTPLUG_PCI_POWERNV 14 15 default m 15 16 help 16 17 Select this option to enable the ocxl driver for Open
+49 -33
drivers/pci/hotplug/pnv_php.c
··· 18 18 #define DRIVER_AUTHOR "Gavin Shan, IBM Corporation" 19 19 #define DRIVER_DESC "PowerPC PowerNV PCI Hotplug Driver" 20 20 21 + #define SLOT_WARN(sl, x...) \ 22 + ((sl)->pdev ? pci_warn((sl)->pdev, x) : dev_warn(&(sl)->bus->dev, x)) 23 + 21 24 struct pnv_php_event { 22 25 bool added; 23 26 struct pnv_php_slot *php_slot; ··· 154 151 static void pnv_php_detach_device_nodes(struct device_node *parent) 155 152 { 156 153 struct device_node *dn; 157 - int refcount; 158 154 159 155 for_each_child_of_node(parent, dn) { 160 156 pnv_php_detach_device_nodes(dn); 161 157 162 158 of_node_put(dn); 163 - refcount = kref_read(&dn->kobj.kref); 164 - if (refcount != 1) 165 - pr_warn("Invalid refcount %d on <%pOF>\n", 166 - refcount, dn); 167 - 168 159 of_detach_node(dn); 169 160 } 170 161 } ··· 268 271 269 272 ret = pnv_pci_get_device_tree(php_slot->dn->phandle, fdt1, 0x10000); 270 273 if (ret) { 271 - pci_warn(php_slot->pdev, "Error %d getting FDT blob\n", ret); 274 + SLOT_WARN(php_slot, "Error %d getting FDT blob\n", ret); 272 275 goto free_fdt1; 273 276 } 274 277 ··· 282 285 dt = of_fdt_unflatten_tree(fdt, php_slot->dn, NULL); 283 286 if (!dt) { 284 287 ret = -EINVAL; 285 - pci_warn(php_slot->pdev, "Cannot unflatten FDT\n"); 288 + SLOT_WARN(php_slot, "Cannot unflatten FDT\n"); 286 289 goto free_fdt; 287 290 } 288 291 ··· 292 295 ret = pnv_php_populate_changeset(&php_slot->ocs, php_slot->dn); 293 296 if (ret) { 294 297 pnv_php_reverse_nodes(php_slot->dn); 295 - pci_warn(php_slot->pdev, "Error %d populating changeset\n", 298 + SLOT_WARN(php_slot, "Error %d populating changeset\n", 296 299 ret); 297 300 goto free_dt; 298 301 } 299 302 300 303 php_slot->dn->child = NULL; 301 304 ret = of_changeset_apply(&php_slot->ocs); 302 305 if (ret) { 303 - pci_warn(php_slot->pdev, "Error %d applying changeset\n", ret); 306 + SLOT_WARN(php_slot, "Error %d applying changeset\n", ret); 304 307 goto destroy_changeset; 305 308 } 306 309 ··· 339 342 ret = pnv_pci_set_power_state(php_slot->id, state, &msg); 340 343 if (ret > 0) { 341 344 if (be64_to_cpu(msg.params[1]) != php_slot->dn->phandle || 342 - be64_to_cpu(msg.params[2]) != state || 343 - be64_to_cpu(msg.params[3]) != OPAL_SUCCESS) { 344 - pci_warn(php_slot->pdev, "Wrong msg (%lld, %lld, %lld)\n", 345 - be64_to_cpu(msg.params[1]), 346 - be64_to_cpu(msg.params[2]), 347 - be64_to_cpu(msg.params[3])); 345 + be64_to_cpu(msg.params[2]) != state) { 346 + SLOT_WARN(php_slot, "Wrong msg (%lld, %lld, %lld)\n", 347 + be64_to_cpu(msg.params[1]), 348 + be64_to_cpu(msg.params[2]), 349 + be64_to_cpu(msg.params[3])); 348 350 return -ENOMSG; 349 351 } 352 + if (be64_to_cpu(msg.params[3]) != OPAL_SUCCESS) { 353 + ret = -ENODEV; 354 + goto error; 355 + } 350 356 } else if (ret < 0) { 351 - pci_warn(php_slot->pdev, "Error %d powering %s\n", 352 - ret, (state == OPAL_PCI_SLOT_POWER_ON) ? "on" : "off"); 353 - return ret; 357 + goto error; 354 358 } 355 359 356 360 if (state == OPAL_PCI_SLOT_POWER_OFF || state == OPAL_PCI_SLOT_OFFLINE) ··· 359 361 else 360 362 ret = pnv_php_add_devtree(php_slot); 361 363 364 + return ret; 365 + 366 + error: 367 + SLOT_WARN(php_slot, "Error %d powering %s\n", 368 + ret, (state == OPAL_PCI_SLOT_POWER_ON) ? "on" : "off"); 362 369 return ret; 363 370 } 364 371 EXPORT_SYMBOL_GPL(pnv_php_set_slot_power_state); ··· 381 378 */ 382 379 ret = pnv_pci_get_power_state(php_slot->id, &power_state); 383 380 if (ret) { 384 - pci_warn(php_slot->pdev, "Error %d getting power status\n", 385 - ret); 381 + SLOT_WARN(php_slot, "Error %d getting power status\n", 382 + ret); 386 383 } else { 387 384 *state = power_state; 388 385 } ··· 405 402 *state = presence; 406 403 ret = 0; 407 404 } else { 408 - pci_warn(php_slot->pdev, "Error %d getting presence\n", ret); 405 + SLOT_WARN(php_slot, "Error %d getting presence\n", ret); 409 406 } 410 407 411 408 return ret; ··· 569 566 struct pnv_php_slot *php_slot = to_pnv_php_slot(slot); 570 567 int ret; 571 568 572 - if (php_slot->state != PNV_PHP_STATE_POPULATED) 569 + /* 570 + * Allow to disable a slot already in the registered state to 571 + * cover cases where the slot couldn't be enabled and never 572 + * reached the populated state 573 + */ 574 + if (php_slot->state != PNV_PHP_STATE_POPULATED && 575 + php_slot->state != PNV_PHP_STATE_REGISTERED) 573 576 return 0; 574 577 575 578 /* Remove all devices behind the slot */ ··· 684 675 ret = pci_hp_register(&php_slot->slot, php_slot->bus, 685 676 php_slot->slot_no, php_slot->name); 686 677 if (ret) { 687 - pci_warn(php_slot->pdev, "Error %d registering slot\n", ret); 678 + SLOT_WARN(php_slot, "Error %d registering slot\n", ret); 688 679 return ret; 689 680 } ··· 737 728 /* Enable MSIx */ 738 729 ret = pci_enable_msix_exact(pdev, &entry, 1); 739 730 if (ret) { 740 - pci_warn(pdev, "Error %d enabling MSIx\n", ret); 731 + SLOT_WARN(php_slot, "Error %d enabling MSIx\n", ret); 741 732 return ret; 742 733 } 743 734 ··· 787 778 (sts & PCI_EXP_SLTSTA_PDC)) { 788 779 ret = pnv_pci_get_presence_state(php_slot->id, &presence); 789 780 if (ret) { 790 - pci_warn(pdev, "PCI slot [%s] error %d getting presence (0x%04x), to retry the operation.\n", 791 - php_slot->name, ret, sts); 781 + SLOT_WARN(php_slot, 782 + "PCI slot [%s] error %d getting presence (0x%04x), to retry the operation.\n", 783 + php_slot->name, ret, sts); 792 784 return IRQ_HANDLED; 793 785 } 794 786 ··· 819 809 */ 820 810 event = kzalloc(sizeof(*event), GFP_ATOMIC); 821 811 if (!event) { 822 - pci_warn(pdev, "PCI slot [%s] missed hotplug event 0x%04x\n", 823 - php_slot->name, sts); 812 + SLOT_WARN(php_slot, 813 + "PCI slot [%s] missed hotplug event 0x%04x\n", 814 + php_slot->name, sts); 824 815 return IRQ_HANDLED; 825 816 } 826 817 ··· 845 834 /* Allocate workqueue */ 846 835 php_slot->wq = alloc_workqueue("pciehp-%s", 0, 0, php_slot->name); 847 836 if (!php_slot->wq) { 848 - pci_warn(pdev, "Cannot alloc workqueue\n"); 837 + SLOT_WARN(php_slot, "Cannot alloc workqueue\n"); 849 838 pnv_php_disable_irq(php_slot, true); 850 839 return; 851 840 } ··· 869 858 php_slot->name, php_slot); 870 859 if (ret) { 871 860 pnv_php_disable_irq(php_slot, true); 872 - pci_warn(pdev, "Error %d enabling IRQ %d\n", ret, irq); 861 + SLOT_WARN(php_slot, "Error %d enabling IRQ %d\n", ret, irq); 873 862 return; 874 863 } ··· 905 894 906 895 ret = pci_enable_device(pdev); 907 896 if (ret) { 908 - pci_warn(pdev, "Error %d enabling device\n", ret); 897 + SLOT_WARN(php_slot, "Error %d enabling device\n", ret); 909 898 return; 910 899 } ··· 1020 1009 for_each_compatible_node(dn, NULL, "ibm,ioda3-phb") 1021 1010 pnv_php_register(dn); 1022 1011 1012 + for_each_compatible_node(dn, NULL, "ibm,ioda2-npu2-opencapi-phb") 1013 + pnv_php_register_one(dn); /* slot directly under the PHB */ 1023 1014 return 0; 1024 1015 } ··· 1034 1021 1035 1022 for_each_compatible_node(dn, NULL, "ibm,ioda3-phb") 1036 1023 pnv_php_unregister(dn); 1024 + 1025 + for_each_compatible_node(dn, NULL, "ibm,ioda2-npu2-opencapi-phb") 1026 + pnv_php_unregister_one(dn); /* slot directly under the PHB */ 1037 1027 } 1038 1028 1039 1029 module_init(pnv_php_init);
+7 -3
tools/testing/selftests/powerpc/eeh/eeh-functions.sh
··· 53 53 # is a no-op. 54 54 echo $dev >/sys/kernel/debug/powerpc/eeh_dev_check 55 55 56 - # Enforce a 30s timeout for recovery. Even the IPR, which is infamously 57 - # slow to reset, should recover within 30s. 58 - max_wait=30 56 + # Default to a 60s timeout when waiting for a device to recover. This 57 + # is an arbitrary default which can be overridden by setting the 58 + # EEH_MAX_WAIT environmental variable when required. 59 + 60 + # The current record holder for longest recovery time is: 61 + # "Adaptec Series 8 12G SAS/PCIe 3" at 39 seconds 62 + max_wait=${EEH_MAX_WAIT:=60} 59 63 60 64 for i in `seq 0 ${max_wait}` ; do 61 65 if pe_ok $dev ; then
+1
tools/testing/selftests/powerpc/mm/.gitignore
··· 5 5 segv_errors 6 6 wild_bctr 7 7 large_vm_fork_separation 8 + bad_accesses
+2 -1
tools/testing/selftests/powerpc/mm/Makefile
··· 3 3 $(MAKE) -C ../ 4 4 5 5 TEST_GEN_PROGS := hugetlb_vs_thp_test subpage_prot prot_sao segv_errors wild_bctr \ 6 - large_vm_fork_separation 6 + large_vm_fork_separation bad_accesses 7 7 TEST_GEN_PROGS_EXTENDED := tlbie_test 8 8 TEST_GEN_FILES := tempfile 9 9 ··· 16 16 17 17 $(OUTPUT)/wild_bctr: CFLAGS += -m64 18 18 $(OUTPUT)/large_vm_fork_separation: CFLAGS += -m64 19 + $(OUTPUT)/bad_accesses: CFLAGS += -m64 19 20 20 21 $(OUTPUT)/tempfile: 21 22 dd if=/dev/zero of=$@ bs=64k count=1
+171
tools/testing/selftests/powerpc/mm/bad_accesses.c
··· 1 + // SPDX-License-Identifier: GPL-2.0+ 2 + // 3 + // Copyright 2019, Michael Ellerman, IBM Corp. 4 + // 5 + // Test that out-of-bounds reads/writes behave as expected. 6 + 7 + #include <setjmp.h> 8 + #include <stdbool.h> 9 + #include <stdio.h> 10 + #include <stdlib.h> 11 + #include <string.h> 12 + #include <sys/types.h> 13 + #include <sys/wait.h> 14 + #include <unistd.h> 15 + 16 + #include "utils.h" 17 + 18 + // Old distros (Ubuntu 16.04 at least) don't define this 19 + #ifndef SEGV_BNDERR 20 + #define SEGV_BNDERR 3 21 + #endif 22 + 23 + // 64-bit kernel is always here 24 + #define PAGE_OFFSET (0xcul << 60) 25 + 26 + static unsigned long kernel_virt_end; 27 + 28 + static volatile int fault_code; 29 + static volatile unsigned long fault_addr; 30 + static jmp_buf setjmp_env; 31 + 32 + static void segv_handler(int n, siginfo_t *info, void *ctxt_v) 33 + { 34 + fault_code = info->si_code; 35 + fault_addr = (unsigned long)info->si_addr; 36 + siglongjmp(setjmp_env, 1); 37 + } 38 + 39 + int bad_access(char *p, bool write) 40 + { 41 + char x; 42 + 43 + fault_code = 0; 44 + fault_addr = 0; 45 + 46 + if (sigsetjmp(setjmp_env, 1) == 0) { 47 + if (write) 48 + *p = 1; 49 + else 50 + x = *p; 51 + 52 + printf("Bad - no SEGV! (%c)\n", x); 53 + return 1; 54 + } 55 + 56 + // If we see MAPERR that means we took a page fault rather than an SLB 57 + // miss. We only expect to take page faults for addresses within the 58 + // valid kernel range. 59 + FAIL_IF(fault_code == SEGV_MAPERR && \ 60 + (fault_addr < PAGE_OFFSET || fault_addr >= kernel_virt_end)); 61 + 62 + FAIL_IF(fault_code != SEGV_MAPERR && fault_code != SEGV_BNDERR); 63 + 64 + return 0; 65 + } 66 + 67 + static int using_hash_mmu(bool *using_hash) 68 + { 69 + char line[128]; 70 + FILE *f; 71 + int rc; 72 + 73 + f = fopen("/proc/cpuinfo", "r"); 74 + FAIL_IF(!f); 75 + 76 + rc = 0; 77 + while (fgets(line, sizeof(line), f) != NULL) { 78 + if (strcmp(line, "MMU : Hash\n") == 0) { 79 + *using_hash = true; 80 + goto out; 81 + } 82 + 83 + if (strcmp(line, "MMU : Radix\n") == 0) { 84 + *using_hash = false; 85 + goto out; 86 + } 87 + } 88 + 89 + rc = -1; 90 + out: 91 + fclose(f); 92 + return rc; 93 + } 94 + 95 + static int test(void) 96 + { 97 + unsigned long i, j, addr, region_shift, page_shift, page_size; 98 + struct sigaction sig; 99 + bool hash_mmu; 100 + 101 + sig = (struct sigaction) { 102 + .sa_sigaction = segv_handler, 103 + .sa_flags = SA_SIGINFO, 104 + }; 105 + 106 + FAIL_IF(sigaction(SIGSEGV, &sig, NULL) != 0); 107 + 108 + FAIL_IF(using_hash_mmu(&hash_mmu)); 109 + 110 + page_size = sysconf(_SC_PAGESIZE); 111 + if (page_size == (64 * 1024)) 112 + page_shift = 16; 113 + else 114 + page_shift = 12; 115 + 116 + if (page_size == (64 * 1024) || !hash_mmu) { 117 + region_shift = 52; 118 + 119 + // We have 7 512T regions (4 kernel linear, vmalloc, io, vmemmap) 120 + kernel_virt_end = PAGE_OFFSET + (7 * (512ul << 40)); 121 + } else if (page_size == (4 * 1024) && hash_mmu) { 122 + region_shift = 46; 123 + 124 + // We have 7 64T regions (4 kernel linear, vmalloc, io, vmemmap) 125 + kernel_virt_end = PAGE_OFFSET + (7 * (64ul << 40)); 126 + } else 127 + FAIL_IF(true); 128 + 129 + printf("Using %s MMU, PAGE_SIZE = %dKB start address 0x%016lx\n", 130 + hash_mmu ? "hash" : "radix", 131 + (1 << page_shift) >> 10, 132 + 1ul << region_shift); 133 + 134 + // This generates access patterns like: 135 + // 0x0010000000000000 136 + // 0x0010000000010000 137 + // 0x0010000000020000 138 + // ... 139 + // 0x0014000000000000 140 + // 0x0018000000000000 141 + // 0x0020000000000000 142 + // 0x0020000000010000 143 + // 0x0020000000020000 144 + // ... 145 + // 0xf400000000000000 146 + // 0xf800000000000000 147 + 148 + for (i = 1; i <= ((0xful << 60) >> region_shift); i++) { 149 + for (j = page_shift - 1; j < 60; j++) { 150 + unsigned long base, delta; 151 + 152 + base = i << region_shift; 153 + delta = 1ul << j; 154 + 155 + if (delta >= base) 156 + break; 157 + 158 + addr = (base | delta) & ~((1 << page_shift) - 1); 159 + 160 + FAIL_IF(bad_access((char *)addr, false)); 161 + FAIL_IF(bad_access((char *)addr, true)); 162 + } 163 + } 164 + 165 + return 0; 166 + } 167 + 168 + int main(void) 169 + { 170 + return test_harness(test, "bad_accesses"); 171 + }
+2 -3
tools/testing/selftests/powerpc/ptrace/ptrace-hwbreak.c
··· 455 455 if (dbginfo->features & PPC_DEBUG_FEATURE_DATA_BP_RANGE) { 456 456 test_sethwdebug_exact(child_pid); 457 457 458 - if (!is_8xx) 459 - test_sethwdebug_range_aligned(child_pid); 460 - if (dawr && !is_8xx) { 458 + test_sethwdebug_range_aligned(child_pid); 459 + if (dawr || is_8xx) { 461 460 test_sethwdebug_range_unaligned(child_pid); 462 461 test_sethwdebug_range_unaligned_dar(child_pid); 463 462 test_sethwdebug_dawr_max_range(child_pid);