Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/benh/powerpc

Pull powerpc updates from Ben Herrenschmidt:
"These are the new powerpc goodies for 3.17. The short story:

The biggest bit is Michael removing all pre-POWER4 processor
support from the 64-bit kernel: POWER3 and rs64. This gets rid of a
ton of old cruft that had been bitrotting for a long while. It was
broken for quite a few releases already and nobody noticed. Nobody
uses those machines anymore. While at it, he cleaned out a bunch of
old dusty cabinets, getting rid of a skeleton or two.

Then, we have some base VFIO support for KVM, which allows assignment
of PCI devices to KVM guests, support for large 64-bit BARs on
"powernv" platforms, support for HMIs (Hypervisor Maintenance
Interrupts) on those same platforms, and some sparse-vmemmap
improvements (for memory hotplug).

There is the usual batch of Freescale embedded updates (summary in the
merge commit) and fixes here and there. I think that's it for the
highlights"

* 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/benh/powerpc: (102 commits)
powerpc/eeh: Export eeh_iommu_group_to_pe()
powerpc/eeh: Add missing #ifdef CONFIG_IOMMU_API
powerpc: Reduce scariness of interrupt frames in stack traces
powerpc: start loop at section start of start in vmemmap_populated()
powerpc: implement vmemmap_free()
powerpc: implement vmemmap_remove_mapping() for BOOK3S
powerpc: implement vmemmap_list_free()
powerpc: Fail remap_4k_pfn() if PFN doesn't fit inside PTE
powerpc/book3s: Fix endianess issue for HMI handling on napping cpus.
powerpc/book3s: handle HMIs for cpus in nap mode.
powerpc/powernv: Invoke opal call to handle hmi.
powerpc/book3s: Add basic infrastructure to handle HMI in Linux.
powerpc/iommu: Fix comments with it_page_shift
powerpc/powernv: Handle compound PE in config accessors
powerpc/powernv: Handle compound PE for EEH
powerpc/powernv: Handle compound PE
powerpc/powernv: Split ioda_eeh_get_state()
powerpc/powernv: Allow to freeze PE
powerpc/powernv: Enable M64 aperatus for PHB3
powerpc/eeh: Aux PE data for error log
...

+5503 -3310
+16
Documentation/devicetree/bindings/powerpc/fsl/board.txt
···
 		compatible = "fsl,bsc9132qds-fpga", "fsl,fpga-qixis-i2c";
 		reg = <0x66>;
 	};
+
+* Freescale on-board CPLD
+
+Some Freescale boards like T1040RDB have an on board CPLD connected.
+
+Required properties:
+	- compatible: Should be a board-specific string like "fsl,<board>-cpld"
+	  Example:
+	  "fsl,t1040rdb-cpld", "fsl,t1042rdb-cpld", "fsl,t1042rdb_pi-cpld"
+	- reg: should describe CPLD registers
+
+Example:
+	cpld@3,0 {
+		compatible = "fsl,t1040rdb-cpld";
+		reg = <3 0 0x300>;
+	};
+84 -3
Documentation/vfio.txt
···
 an excellent performance which has limitations such as inability to do
 locked pages accounting in real time.
 
-So 3 additional ioctls have been added:
+4) According to sPAPR specification, A Partitionable Endpoint (PE) is an I/O
+subtree that can be treated as a unit for the purposes of partitioning and
+error recovery. A PE may be a single or multi-function IOA (IO Adapter), a
+function of a multi-function IOA, or multiple IOAs (possibly including switch
+and bridge structures above the multiple IOAs). PPC64 guests detect PCI errors
+and recover from them via EEH RTAS services, which works on the basis of
+additional ioctl commands.
+
+So 4 additional ioctls have been added:
 
 	VFIO_IOMMU_SPAPR_TCE_GET_INFO - returns the size and the start
 		of the DMA window on the PCI bus.
···
 	VFIO_IOMMU_DISABLE - disables the container.
 
+	VFIO_EEH_PE_OP - provides an API for EEH setup, error detection and recovery.
 
 The code flow from the example above should be slightly changed:
+
+	struct vfio_eeh_pe_op pe_op = { .argsz = sizeof(pe_op), .flags = 0 };
 
 	.....
 	/* Add the group to the container */
···
 	dma_map.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE;
 
 	/* Check here is .iova/.size are within DMA window from spapr_iommu_info */
-
 	ioctl(container, VFIO_IOMMU_MAP_DMA, &dma_map);
-	.....
+
+	/* Get a file descriptor for the device */
+	device = ioctl(group, VFIO_GROUP_GET_DEVICE_FD, "0000:06:0d.0");
+
+	....
+
+	/* Gratuitous device reset and go... */
+	ioctl(device, VFIO_DEVICE_RESET);
+
+	/* Make sure EEH is supported */
+	ioctl(container, VFIO_CHECK_EXTENSION, VFIO_EEH);
+
+	/* Enable the EEH functionality on the device */
+	pe_op.op = VFIO_EEH_PE_ENABLE;
+	ioctl(container, VFIO_EEH_PE_OP, &pe_op);
+
+	/* You're suggested to create additional data struct to represent
+	 * PE, and put child devices belonging to same IOMMU group to the
+	 * PE instance for later reference.
+	 */
+
+	/* Check the PE's state and make sure it's in functional state */
+	pe_op.op = VFIO_EEH_PE_GET_STATE;
+	ioctl(container, VFIO_EEH_PE_OP, &pe_op);
+
+	/* Save device state using pci_save_state().
+	 * EEH should be enabled on the specified device.
+	 */
+
+	....
+
+	/* When 0xFF's returned from reading PCI config space or IO BARs
+	 * of the PCI device. Check the PE's state to see if that has been
+	 * frozen.
+	 */
+	ioctl(container, VFIO_EEH_PE_OP, &pe_op);
+
+	/* Waiting for pending PCI transactions to be completed and don't
+	 * produce any more PCI traffic from/to the affected PE until
+	 * recovery is finished.
+	 */
+
+	/* Enable IO for the affected PE and collect logs. Usually, the
+	 * standard part of PCI config space, AER registers are dumped
+	 * as logs for further analysis.
+	 */
+	pe_op.op = VFIO_EEH_PE_UNFREEZE_IO;
+	ioctl(container, VFIO_EEH_PE_OP, &pe_op);
+
+	/*
+	 * Issue PE reset: hot or fundamental reset. Usually, hot reset
+	 * is enough. However, the firmware of some PCI adapters would
+	 * require fundamental reset.
+	 */
+	pe_op.op = VFIO_EEH_PE_RESET_HOT;
+	ioctl(container, VFIO_EEH_PE_OP, &pe_op);
+	pe_op.op = VFIO_EEH_PE_RESET_DEACTIVATE;
+	ioctl(container, VFIO_EEH_PE_OP, &pe_op);
+
+	/* Configure the PCI bridges for the affected PE */
+	pe_op.op = VFIO_EEH_PE_CONFIGURE;
+	ioctl(container, VFIO_EEH_PE_OP, &pe_op);
+
+	/* Restored state we saved at initialization time. pci_restore_state()
+	 * is good enough as an example.
+	 */
+
+	/* Hopefully, error is recovered successfully. Now, you can resume to
+	 * start PCI traffic to/from the affected PE.
+	 */
+
+	....
 
-------------------------------------------------------------------------------
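For readers without a POWER host at hand, the recovery sequence that the Documentation/vfio.txt change walks through (unfreeze, hot reset, deactivate, reconfigure) can be sketched as a small C helper. This is an illustrative condensation, not kernel code: the `EEH_PE_*` enum values and the `pe_ioctl_fn` callback are stand-ins so the flow can be exercised with a stub instead of a real `ioctl()` on a VFIO container; the real op codes live in `<linux/vfio.h>`.

```c
#include <stddef.h>

/* Stand-in op codes; the real VFIO_EEH_PE_* values come from <linux/vfio.h>. */
enum eeh_op {
	EEH_PE_ENABLE,
	EEH_PE_GET_STATE,
	EEH_PE_UNFREEZE_IO,
	EEH_PE_RESET_HOT,
	EEH_PE_RESET_DEACTIVATE,
	EEH_PE_CONFIGURE,
};

/* Minimal mirror of struct vfio_eeh_pe_op from the documentation example. */
struct pe_op {
	unsigned int argsz;
	unsigned int flags;
	unsigned int op;
};

/* Injectable ioctl-like callback, so the sequence is testable without hardware. */
typedef int (*pe_ioctl_fn)(int fd, struct pe_op *op, void *ctx);

/* Drive the unfreeze -> hot reset -> deactivate -> configure sequence the
 * vfio.txt example describes for a frozen PE. Returns 0 on success, or the
 * first nonzero return from the callback. */
static int eeh_recover(int container, pe_ioctl_fn do_op, void *ctx)
{
	static const unsigned int seq[] = {
		EEH_PE_UNFREEZE_IO,
		EEH_PE_RESET_HOT,
		EEH_PE_RESET_DEACTIVATE,
		EEH_PE_CONFIGURE,
	};
	struct pe_op op = { .argsz = sizeof(op), .flags = 0 };
	size_t i;

	for (i = 0; i < sizeof(seq) / sizeof(seq[0]); i++) {
		int rc;

		op.op = seq[i];
		rc = do_op(container, &op, ctx);
		if (rc)
			return rc;	/* stop at the first failing step */
	}
	return 0;
}
```

In real userspace code each `do_op` call would be `ioctl(container, VFIO_EEH_PE_OP, &pe_op)`; keeping the sequence in one table makes the required ordering explicit.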
+3 -1
MAINTAINERS
···
 LINUX FOR POWERPC (32-BIT AND 64-BIT)
 M:	Benjamin Herrenschmidt <benh@kernel.crashing.org>
 M:	Paul Mackerras <paulus@samba.org>
+M:	Michael Ellerman <mpe@ellerman.id.au>
 W:	http://www.penguinppc.org/
 L:	linuxppc-dev@lists.ozlabs.org
 Q:	http://patchwork.ozlabs.org/project/linuxppc-dev/list/
···
 
 LINUX FOR POWERPC EMBEDDED PPC8XX
 M:	Vitaly Bordug <vitb@kernel.crashing.org>
-M:	Marcelo Tosatti <marcelo@kvack.org>
 W:	http://www.penguinppc.org/
 L:	linuxppc-dev@lists.ozlabs.org
 S:	Maintained
 F:	arch/powerpc/platforms/8xx/
 
 LINUX FOR POWERPC EMBEDDED PPC83XX AND PPC85XX
+M:	Scott Wood <scottwood@freescale.com>
 M:	Kumar Gala <galak@kernel.crashing.org>
 W:	http://www.penguinppc.org/
 L:	linuxppc-dev@lists.ozlabs.org
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/scottwood/linux.git
 S:	Maintained
 F:	arch/powerpc/platforms/83xx/
 F:	arch/powerpc/platforms/85xx/
+1
arch/powerpc/boot/dts/fsl/p2041si-post.dtsi
···
 		compatible = "fsl,qoriq-core-mux-1.0";
 		clocks = <&pll0 0>, <&pll0 1>, <&pll1 0>, <&pll1 1>;
 		clock-names = "pll0", "pll0-div2", "pll1", "pll1-div2";
+		clock-output-names = "cmux2";
 	};
 
 	mux3: mux3@60 {
+69
arch/powerpc/boot/dts/fsl/t2080si-post.dtsi
+/*
+ * T2080 Silicon/SoC Device Tree Source (post include)
+ *
+ * Copyright 2013 Freescale Semiconductor Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in the
+ *       documentation and/or other materials provided with the distribution.
+ *     * Neither the name of Freescale Semiconductor nor the
+ *       names of its contributors may be used to endorse or promote products
+ *       derived from this software without specific prior written permission.
+ *
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor "AS IS" AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+ * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+/include/ "t2081si-post.dtsi"
+
+&soc {
+/include/ "qoriq-sata2-0.dtsi"
+	sata@220000 {
+		fsl,iommu-parent = <&pamu1>;
+		fsl,liodn-reg = <&guts 0x550>; /* SATA1LIODNR */
+	};
+
+/include/ "qoriq-sata2-1.dtsi"
+	sata@221000 {
+		fsl,iommu-parent = <&pamu1>;
+		fsl,liodn-reg = <&guts 0x554>; /* SATA2LIODNR */
+	};
+};
+
+&rio {
+	compatible = "fsl,srio";
+	interrupts = <16 2 1 11>;
+	#address-cells = <2>;
+	#size-cells = <2>;
+	ranges;
+
+	port1 {
+		#address-cells = <2>;
+		#size-cells = <2>;
+		cell-index = <1>;
+	};
+
+	port2 {
+		#address-cells = <2>;
+		#size-cells = <2>;
+		cell-index = <2>;
+	};
+};
+435
arch/powerpc/boot/dts/fsl/t2081si-post.dtsi
+/*
+ * T2081 Silicon/SoC Device Tree Source (post include)
+ *
+ * Copyright 2013 Freescale Semiconductor Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in the
+ *       documentation and/or other materials provided with the distribution.
+ *     * Neither the name of Freescale Semiconductor nor the
+ *       names of its contributors may be used to endorse or promote products
+ *       derived from this software without specific prior written permission.
+ *
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor "AS IS" AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+ * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+&ifc {
+	#address-cells = <2>;
+	#size-cells = <1>;
+	compatible = "fsl,ifc", "simple-bus";
+	interrupts = <25 2 0 0>;
+};
+
+/* controller at 0x240000 */
+&pci0 {
+	compatible = "fsl,t2080-pcie", "fsl,qoriq-pcie-v3.0", "fsl,qoriq-pcie";
+	device_type = "pci";
+	#size-cells = <2>;
+	#address-cells = <3>;
+	bus-range = <0x0 0xff>;
+	interrupts = <20 2 0 0>;
+	fsl,iommu-parent = <&pamu0>;
+	pcie@0 {
+		reg = <0 0 0 0 0>;
+		#interrupt-cells = <1>;
+		#size-cells = <2>;
+		#address-cells = <3>;
+		device_type = "pci";
+		interrupts = <20 2 0 0>;
+		interrupt-map-mask = <0xf800 0 0 7>;
+		interrupt-map = <
+			/* IDSEL 0x0 */
+			0000 0 0 1 &mpic 40 1 0 0
+			0000 0 0 2 &mpic 1 1 0 0
+			0000 0 0 3 &mpic 2 1 0 0
+			0000 0 0 4 &mpic 3 1 0 0
+			>;
+	};
+};
+
+/* controller at 0x250000 */
+&pci1 {
+	compatible = "fsl,t2080-pcie", "fsl,qoriq-pcie-v3.0", "fsl,qoriq-pcie";
+	device_type = "pci";
+	#size-cells = <2>;
+	#address-cells = <3>;
+	bus-range = <0 0xff>;
+	interrupts = <21 2 0 0>;
+	fsl,iommu-parent = <&pamu0>;
+	pcie@0 {
+		reg = <0 0 0 0 0>;
+		#interrupt-cells = <1>;
+		#size-cells = <2>;
+		#address-cells = <3>;
+		device_type = "pci";
+		interrupts = <21 2 0 0>;
+		interrupt-map-mask = <0xf800 0 0 7>;
+		interrupt-map = <
+			/* IDSEL 0x0 */
+			0000 0 0 1 &mpic 41 1 0 0
+			0000 0 0 2 &mpic 5 1 0 0
+			0000 0 0 3 &mpic 6 1 0 0
+			0000 0 0 4 &mpic 7 1 0 0
+			>;
+	};
+};
+
+/* controller at 0x260000 */
+&pci2 {
+	compatible = "fsl,t2080-pcie", "fsl,qoriq-pcie-v3.0", "fsl,qoriq-pcie";
+	device_type = "pci";
+	#size-cells = <2>;
+	#address-cells = <3>;
+	bus-range = <0x0 0xff>;
+	interrupts = <22 2 0 0>;
+	fsl,iommu-parent = <&pamu0>;
+	pcie@0 {
+		reg = <0 0 0 0 0>;
+		#interrupt-cells = <1>;
+		#size-cells = <2>;
+		#address-cells = <3>;
+		device_type = "pci";
+		interrupts = <22 2 0 0>;
+		interrupt-map-mask = <0xf800 0 0 7>;
+		interrupt-map = <
+			/* IDSEL 0x0 */
+			0000 0 0 1 &mpic 42 1 0 0
+			0000 0 0 2 &mpic 9 1 0 0
+			0000 0 0 3 &mpic 10 1 0 0
+			0000 0 0 4 &mpic 11 1 0 0
+			>;
+	};
+};
+
+/* controller at 0x270000 */
+&pci3 {
+	compatible = "fsl,t2080-pcie", "fsl,qoriq-pcie-v3.0", "fsl,qoriq-pcie";
+	device_type = "pci";
+	#size-cells = <2>;
+	#address-cells = <3>;
+	bus-range = <0x0 0xff>;
+	interrupts = <23 2 0 0>;
+	fsl,iommu-parent = <&pamu0>;
+	pcie@0 {
+		reg = <0 0 0 0 0>;
+		#interrupt-cells = <1>;
+		#size-cells = <2>;
+		#address-cells = <3>;
+		device_type = "pci";
+		interrupts = <23 2 0 0>;
+		interrupt-map-mask = <0xf800 0 0 7>;
+		interrupt-map = <
+			/* IDSEL 0x0 */
+			0000 0 0 1 &mpic 43 1 0 0
+			0000 0 0 2 &mpic 0 1 0 0
+			0000 0 0 3 &mpic 4 1 0 0
+			0000 0 0 4 &mpic 8 1 0 0
+			>;
+	};
+};
+
+&dcsr {
+	#address-cells = <1>;
+	#size-cells = <1>;
+	compatible = "fsl,dcsr", "simple-bus";
+
+	dcsr-epu@0 {
+		compatible = "fsl,t2080-dcsr-epu", "fsl,dcsr-epu";
+		interrupts = <52 2 0 0
+			      84 2 0 0
+			      85 2 0 0
+			      94 2 0 0
+			      95 2 0 0>;
+		reg = <0x0 0x1000>;
+	};
+	dcsr-npc {
+		compatible = "fsl,t2080-dcsr-cnpc", "fsl,dcsr-cnpc";
+		reg = <0x1000 0x1000 0x1002000 0x10000>;
+	};
+	dcsr-nxc@2000 {
+		compatible = "fsl,dcsr-nxc";
+		reg = <0x2000 0x1000>;
+	};
+	dcsr-corenet {
+		compatible = "fsl,dcsr-corenet";
+		reg = <0x8000 0x1000 0x1A000 0x1000>;
+	};
+	dcsr-ocn@11000 {
+		compatible = "fsl,t2080-dcsr-ocn", "fsl,dcsr-ocn";
+		reg = <0x11000 0x1000>;
+	};
+	dcsr-ddr@12000 {
+		compatible = "fsl,dcsr-ddr";
+		dev-handle = <&ddr1>;
+		reg = <0x12000 0x1000>;
+	};
+	dcsr-nal@18000 {
+		compatible = "fsl,t2080-dcsr-nal", "fsl,dcsr-nal";
+		reg = <0x18000 0x1000>;
+	};
+	dcsr-rcpm@22000 {
+		compatible = "fsl,t2080-dcsr-rcpm", "fsl,dcsr-rcpm";
+		reg = <0x22000 0x1000>;
+	};
+	dcsr-snpc@30000 {
+		compatible = "fsl,t2080-dcsr-snpc", "fsl,dcsr-snpc";
+		reg = <0x30000 0x1000 0x1022000 0x10000>;
+	};
+	dcsr-snpc@31000 {
+		compatible = "fsl,t2080-dcsr-snpc", "fsl,dcsr-snpc";
+		reg = <0x31000 0x1000 0x1042000 0x10000>;
+	};
+	dcsr-snpc@32000 {
+		compatible = "fsl,t2080-dcsr-snpc", "fsl,dcsr-snpc";
+		reg = <0x32000 0x1000 0x1062000 0x10000>;
+	};
+	dcsr-cpu-sb-proxy@100000 {
+		compatible = "fsl,dcsr-e6500-sb-proxy", "fsl,dcsr-cpu-sb-proxy";
+		cpu-handle = <&cpu0>;
+		reg = <0x100000 0x1000 0x101000 0x1000>;
+	};
+	dcsr-cpu-sb-proxy@108000 {
+		compatible = "fsl,dcsr-e6500-sb-proxy", "fsl,dcsr-cpu-sb-proxy";
+		cpu-handle = <&cpu1>;
+		reg = <0x108000 0x1000 0x109000 0x1000>;
+	};
+	dcsr-cpu-sb-proxy@110000 {
+		compatible = "fsl,dcsr-e6500-sb-proxy", "fsl,dcsr-cpu-sb-proxy";
+		cpu-handle = <&cpu2>;
+		reg = <0x110000 0x1000 0x111000 0x1000>;
+	};
+	dcsr-cpu-sb-proxy@118000 {
+		compatible = "fsl,dcsr-e6500-sb-proxy", "fsl,dcsr-cpu-sb-proxy";
+		cpu-handle = <&cpu3>;
+		reg = <0x118000 0x1000 0x119000 0x1000>;
+	};
+};
+
+&soc {
+	#address-cells = <1>;
+	#size-cells = <1>;
+	device_type = "soc";
+	compatible = "simple-bus";
+
+	soc-sram-error {
+		compatible = "fsl,soc-sram-error";
+		interrupts = <16 2 1 29>;
+	};
+
+	corenet-law@0 {
+		compatible = "fsl,corenet-law";
+		reg = <0x0 0x1000>;
+		fsl,num-laws = <32>;
+	};
+
+	ddr1: memory-controller@8000 {
+		compatible = "fsl,qoriq-memory-controller-v4.7",
+			     "fsl,qoriq-memory-controller";
+		reg = <0x8000 0x1000>;
+		interrupts = <16 2 1 23>;
+	};
+
+	cpc: l3-cache-controller@10000 {
+		compatible = "fsl,t2080-l3-cache-controller", "cache";
+		reg = <0x10000 0x1000
+		       0x11000 0x1000
+		       0x12000 0x1000>;
+		interrupts = <16 2 1 27
+			      16 2 1 26
+			      16 2 1 25>;
+	};
+
+	corenet-cf@18000 {
+		compatible = "fsl,corenet2-cf", "fsl,corenet-cf";
+		reg = <0x18000 0x1000>;
+		interrupts = <16 2 1 31>;
+		fsl,ccf-num-csdids = <32>;
+		fsl,ccf-num-snoopids = <32>;
+	};
+
+	iommu@20000 {
+		compatible = "fsl,pamu-v1.0", "fsl,pamu";
+		reg = <0x20000 0x3000>;
+		fsl,portid-mapping = <0x8000>;
+		ranges = <0 0x20000 0x3000>;
+		#address-cells = <1>;
+		#size-cells = <1>;
+		interrupts = <
+			24 2 0 0
+			16 2 1 30>;
+
+		pamu0: pamu@0 {
+			reg = <0 0x1000>;
+			fsl,primary-cache-geometry = <32 1>;
+			fsl,secondary-cache-geometry = <128 2>;
+		};
+
+		pamu1: pamu@1000 {
+			reg = <0x1000 0x1000>;
+			fsl,primary-cache-geometry = <32 1>;
+			fsl,secondary-cache-geometry = <128 2>;
+		};
+
+		pamu2: pamu@2000 {
+			reg = <0x2000 0x1000>;
+			fsl,primary-cache-geometry = <32 1>;
+			fsl,secondary-cache-geometry = <128 2>;
+		};
+	};
+
+/include/ "qoriq-mpic4.3.dtsi"
+
+	guts: global-utilities@e0000 {
+		compatible = "fsl,t2080-device-config", "fsl,qoriq-device-config-2.0";
+		reg = <0xe0000 0xe00>;
+		fsl,has-rstcr;
+		fsl,liodn-bits = <12>;
+	};
+
+	clockgen: global-utilities@e1000 {
+		compatible = "fsl,t2080-clockgen", "fsl,qoriq-clockgen-2.0";
+		ranges = <0x0 0xe1000 0x1000>;
+		reg = <0xe1000 0x1000>;
+		#address-cells = <1>;
+		#size-cells = <1>;
+
+		sysclk: sysclk {
+			#clock-cells = <0>;
+			compatible = "fsl,qoriq-sysclk-2.0";
+			clock-output-names = "sysclk", "fixed-clock";
+		};
+
+		pll0: pll0@800 {
+			#clock-cells = <1>;
+			reg = <0x800 4>;
+			compatible = "fsl,qoriq-core-pll-2.0";
+			clocks = <&sysclk>;
+			clock-output-names = "pll0", "pll0-div2", "pll0-div4";
+		};
+
+		pll1: pll1@820 {
+			#clock-cells = <1>;
+			reg = <0x820 4>;
+			compatible = "fsl,qoriq-core-pll-2.0";
+			clocks = <&sysclk>;
+			clock-output-names = "pll1", "pll1-div2", "pll1-div4";
+		};
+
+		mux0: mux0@0 {
+			#clock-cells = <0>;
+			reg = <0x0 4>;
+			compatible = "fsl,qoriq-core-mux-2.0";
+			clocks = <&pll0 0>, <&pll0 1>, <&pll0 2>,
+				 <&pll1 0>, <&pll1 1>, <&pll1 2>;
+			clock-names = "pll0", "pll0-div2", "pll1-div4",
+				      "pll1", "pll1-div2", "pll1-div4";
+			clock-output-names = "cmux0";
+		};
+
+		mux1: mux1@20 {
+			#clock-cells = <0>;
+			reg = <0x20 4>;
+			compatible = "fsl,qoriq-core-mux-2.0";
+			clocks = <&pll0 0>, <&pll0 1>, <&pll0 2>,
+				 <&pll1 0>, <&pll1 1>, <&pll1 2>;
+			clock-names = "pll0", "pll0-div2", "pll1-div4",
+				      "pll1", "pll1-div2", "pll1-div4";
+			clock-output-names = "cmux1";
+		};
+	};
+
+	rcpm: global-utilities@e2000 {
+		compatible = "fsl,t2080-rcpm", "fsl,qoriq-rcpm-2.0";
+		reg = <0xe2000 0x1000>;
+	};
+
+	sfp: sfp@e8000 {
+		compatible = "fsl,t2080-sfp";
+		reg = <0xe8000 0x1000>;
+	};
+
+	serdes: serdes@ea000 {
+		compatible = "fsl,t2080-serdes";
+		reg = <0xea000 0x4000>;
+	};
+
+/include/ "elo3-dma-0.dtsi"
+	dma@100300 {
+		fsl,iommu-parent = <&pamu0>;
+		fsl,liodn-reg = <&guts 0x580>; /* DMA1LIODNR */
+	};
+/include/ "elo3-dma-1.dtsi"
+	dma@101300 {
+		fsl,iommu-parent = <&pamu0>;
+		fsl,liodn-reg = <&guts 0x584>; /* DMA2LIODNR */
+	};
+/include/ "elo3-dma-2.dtsi"
+	dma@102300 {
+		fsl,iommu-parent = <&pamu0>;
+		fsl,liodn-reg = <&guts 0x588>; /* DMA3LIODNR */
+	};
+
+/include/ "qoriq-espi-0.dtsi"
+	spi@110000 {
+		fsl,espi-num-chipselects = <4>;
+	};
+
+/include/ "qoriq-esdhc-0.dtsi"
+	sdhc@114000 {
+		compatible = "fsl,t2080-esdhc", "fsl,esdhc";
+		fsl,iommu-parent = <&pamu1>;
+		fsl,liodn-reg = <&guts 0x530>; /* SDMMCLIODNR */
+		sdhci,auto-cmd12;
+	};
+/include/ "qoriq-i2c-0.dtsi"
+/include/ "qoriq-i2c-1.dtsi"
+/include/ "qoriq-duart-0.dtsi"
+/include/ "qoriq-duart-1.dtsi"
+/include/ "qoriq-gpio-0.dtsi"
+/include/ "qoriq-gpio-1.dtsi"
+/include/ "qoriq-gpio-2.dtsi"
+/include/ "qoriq-gpio-3.dtsi"
+/include/ "qoriq-usb2-mph-0.dtsi"
+	usb0: usb@210000 {
+		compatible = "fsl-usb2-mph-v2.4", "fsl-usb2-mph";
+		fsl,iommu-parent = <&pamu1>;
+		fsl,liodn-reg = <&guts 0x520>; /* USB1LIODNR */
+		phy_type = "utmi";
+		port0;
+	};
+/include/ "qoriq-usb2-dr-0.dtsi"
+	usb1: usb@211000 {
+		compatible = "fsl-usb2-dr-v2.4", "fsl-usb2-dr";
+		fsl,iommu-parent = <&pamu1>;
+		fsl,liodn-reg = <&guts 0x524>; /* USB1LIODNR */
+		dr_mode = "host";
+		phy_type = "utmi";
+	};
+/include/ "qoriq-sec5.2-0.dtsi"
+
+	L2_1: l2-cache-controller@c20000 {
+		/* Cluster 0 L2 cache */
+		compatible = "fsl,t2080-l2-cache-controller";
+		reg = <0xc20000 0x40000>;
+		next-level-cache = <&cpc>;
+	};
+};
+99
arch/powerpc/boot/dts/fsl/t208xsi-pre.dtsi
+/*
+ * T2080/T2081 Silicon/SoC Device Tree Source (pre include)
+ *
+ * Copyright 2013 Freescale Semiconductor Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in the
+ *       documentation and/or other materials provided with the distribution.
+ *     * Neither the name of Freescale Semiconductor nor the
+ *       names of its contributors may be used to endorse or promote products
+ *       derived from this software without specific prior written permission.
+ *
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor "AS IS" AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+ * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+/dts-v1/;
+
+/include/ "e6500_power_isa.dtsi"
+
+/ {
+	#address-cells = <2>;
+	#size-cells = <2>;
+	interrupt-parent = <&mpic>;
+
+	aliases {
+		ccsr = &soc;
+		dcsr = &dcsr;
+
+		serial0 = &serial0;
+		serial1 = &serial1;
+		serial2 = &serial2;
+		serial3 = &serial3;
+
+		crypto = &crypto;
+		pci0 = &pci0;
+		pci1 = &pci1;
+		pci2 = &pci2;
+		pci3 = &pci3;
+		usb0 = &usb0;
+		usb1 = &usb1;
+		dma0 = &dma0;
+		dma1 = &dma1;
+		dma2 = &dma2;
+		sdhc = &sdhc;
+	};
+
+	cpus {
+		#address-cells = <1>;
+		#size-cells = <0>;
+
+		cpu0: PowerPC,e6500@0 {
+			device_type = "cpu";
+			reg = <0 1>;
+			clocks = <&mux0>;
+			next-level-cache = <&L2_1>;
+			fsl,portid-mapping = <0x80000000>;
+		};
+		cpu1: PowerPC,e6500@2 {
+			device_type = "cpu";
+			reg = <2 3>;
+			clocks = <&mux0>;
+			next-level-cache = <&L2_1>;
+			fsl,portid-mapping = <0x80000000>;
+		};
+		cpu2: PowerPC,e6500@4 {
+			device_type = "cpu";
+			reg = <4 5>;
+			clocks = <&mux0>;
+			next-level-cache = <&L2_1>;
+			fsl,portid-mapping = <0x80000000>;
+		};
+		cpu3: PowerPC,e6500@6 {
+			device_type = "cpu";
+			reg = <6 7>;
+			clocks = <&mux0>;
+			next-level-cache = <&L2_1>;
+			fsl,portid-mapping = <0x80000000>;
+		};
+	};
+};
+1
arch/powerpc/boot/dts/fsl/t4240si-post.dtsi
···
 
 /include/ "elo3-dma-0.dtsi"
 /include/ "elo3-dma-1.dtsi"
+/include/ "elo3-dma-2.dtsi"
 
 /include/ "qoriq-espi-0.dtsi"
 	spi@110000 {
+1
arch/powerpc/boot/dts/fsl/t4240si-pre.dtsi
···
 		pci3 = &pci3;
 		dma0 = &dma0;
 		dma1 = &dma1;
+		dma2 = &dma2;
 		sdhc = &sdhc;
 	};
 
+57
arch/powerpc/boot/dts/t2080qds.dts
+/*
+ * T2080QDS Device Tree Source
+ *
+ * Copyright 2013 Freescale Semiconductor Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in the
+ *       documentation and/or other materials provided with the distribution.
+ *     * Neither the name of Freescale Semiconductor nor the
+ *       names of its contributors may be used to endorse or promote products
+ *       derived from this software without specific prior written permission.
+ *
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor "AS IS" AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+ * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+/include/ "fsl/t208xsi-pre.dtsi"
+/include/ "t208xqds.dtsi"
+
+/ {
+	model = "fsl,T2080QDS";
+	compatible = "fsl,T2080QDS";
+	#address-cells = <2>;
+	#size-cells = <2>;
+	interrupt-parent = <&mpic>;
+
+	rio: rapidio@ffe0c0000 {
+		reg = <0xf 0xfe0c0000 0 0x11000>;
+
+		port1 {
+			ranges = <0 0 0xc 0x20000000 0 0x10000000>;
+		};
+		port2 {
+			ranges = <0 0 0xc 0x30000000 0 0x10000000>;
+		};
+	};
+};
+
+/include/ "fsl/t2080si-post.dtsi"
+57
arch/powerpc/boot/dts/t2080rdb.dts
+/*
+ * T2080PCIe-RDB Board Device Tree Source
+ *
+ * Copyright 2014 Freescale Semiconductor Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in the
+ *       documentation and/or other materials provided with the distribution.
+ *     * Neither the name of Freescale Semiconductor nor the
+ *       names of its contributors may be used to endorse or promote products
+ *       derived from this software without specific prior written permission.
+ *
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor "AS IS" AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+ * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+/include/ "fsl/t208xsi-pre.dtsi"
+/include/ "t208xrdb.dtsi"
+
+/ {
+	model = "fsl,T2080RDB";
+	compatible = "fsl,T2080RDB";
+	#address-cells = <2>;
+	#size-cells = <2>;
+	interrupt-parent = <&mpic>;
+
+	rio: rapidio@ffe0c0000 {
+		reg = <0xf 0xfe0c0000 0 0x11000>;
+
+		port1 {
+			ranges = <0 0 0xc 0x20000000 0 0x10000000>;
+		};
+		port2 {
+			ranges = <0 0 0xc 0x30000000 0 0x10000000>;
+		};
+	};
+};
+
+/include/ "fsl/t2080si-post.dtsi"
+46
arch/powerpc/boot/dts/t2081qds.dts
··· 1 + /* 2 + * T2081QDS Device Tree Source 3 + * 4 + * Copyright 2013 Freescale Semiconductor Inc. 5 + * 6 + * Redistribution and use in source and binary forms, with or without 7 + * modification, are permitted provided that the following conditions are met: 8 + * * Redistributions of source code must retain the above copyright 9 + * notice, this list of conditions and the following disclaimer. 10 + * * Redistributions in binary form must reproduce the above copyright 11 + * notice, this list of conditions and the following disclaimer in the 12 + * documentation and/or other materials provided with the distribution. 13 + * * Neither the name of Freescale Semiconductor nor the 14 + * names of its contributors may be used to endorse or promote products 15 + * derived from this software without specific prior written permission. 16 + * 17 + * 18 + * ALTERNATIVELY, this software may be distributed under the terms of the 19 + * GNU General Public License ("GPL") as published by the Free Software 20 + * Foundation, either version 2 of that License or (at your option) any 21 + * later version. 22 + * 23 + * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor "AS IS" AND ANY 24 + * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED 25 + * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE 26 + * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY 27 + * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES 28 + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; 29 + * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND 30 + * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT 31 + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS 32 + * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
33 + */ 34 + 35 + /include/ "fsl/t208xsi-pre.dtsi" 36 + /include/ "t208xqds.dtsi" 37 + 38 + / { 39 + model = "fsl,T2081QDS"; 40 + compatible = "fsl,T2081QDS"; 41 + #address-cells = <2>; 42 + #size-cells = <2>; 43 + interrupt-parent = <&mpic>; 44 + }; 45 + 46 + /include/ "fsl/t2081si-post.dtsi"
+239
arch/powerpc/boot/dts/t208xqds.dtsi
··· 1 + /* 2 + * T2080/T2081 QDS Device Tree Source 3 + * 4 + * Copyright 2013 Freescale Semiconductor Inc. 5 + * 6 + * Redistribution and use in source and binary forms, with or without 7 + * modification, are permitted provided that the following conditions are met: 8 + * * Redistributions of source code must retain the above copyright 9 + * notice, this list of conditions and the following disclaimer. 10 + * * Redistributions in binary form must reproduce the above copyright 11 + * notice, this list of conditions and the following disclaimer in the 12 + * documentation and/or other materials provided with the distribution. 13 + * * Neither the name of Freescale Semiconductor nor the 14 + * names of its contributors may be used to endorse or promote products 15 + * derived from this software without specific prior written permission. 16 + * 17 + * 18 + * ALTERNATIVELY, this software may be distributed under the terms of the 19 + * GNU General Public License ("GPL") as published by the Free Software 20 + * Foundation, either version 2 of that License or (at your option) any 21 + * later version. 22 + * 23 + * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor "AS IS" AND ANY 24 + * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED 25 + * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE 26 + * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY 27 + * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES 28 + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; 29 + * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND 30 + * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT 31 + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS 32 + * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
33 + */ 34 + 35 + / { 36 + model = "fsl,T2080QDS"; 37 + compatible = "fsl,T2080QDS"; 38 + #address-cells = <2>; 39 + #size-cells = <2>; 40 + interrupt-parent = <&mpic>; 41 + 42 + ifc: localbus@ffe124000 { 43 + reg = <0xf 0xfe124000 0 0x2000>; 44 + ranges = <0 0 0xf 0xe8000000 0x08000000 45 + 2 0 0xf 0xff800000 0x00010000 46 + 3 0 0xf 0xffdf0000 0x00008000>; 47 + 48 + nor@0,0 { 49 + #address-cells = <1>; 50 + #size-cells = <1>; 51 + compatible = "cfi-flash"; 52 + reg = <0x0 0x0 0x8000000>; 53 + bank-width = <2>; 54 + device-width = <1>; 55 + }; 56 + 57 + nand@2,0 { 58 + #address-cells = <1>; 59 + #size-cells = <1>; 60 + compatible = "fsl,ifc-nand"; 61 + reg = <0x2 0x0 0x10000>; 62 + }; 63 + 64 + boardctrl: board-control@3,0 { 65 + #address-cells = <1>; 66 + #size-cells = <1>; 67 + compatible = "fsl,fpga-qixis"; 68 + reg = <3 0 0x300>; 69 + ranges = <0 3 0 0x300>; 70 + }; 71 + }; 72 + 73 + memory { 74 + device_type = "memory"; 75 + }; 76 + 77 + dcsr: dcsr@f00000000 { 78 + ranges = <0x00000000 0xf 0x00000000 0x01072000>; 79 + }; 80 + 81 + soc: soc@ffe000000 { 82 + ranges = <0x00000000 0xf 0xfe000000 0x1000000>; 83 + reg = <0xf 0xfe000000 0 0x00001000>; 84 + spi@110000 { 85 + flash@0 { 86 + #address-cells = <1>; 87 + #size-cells = <1>; 88 + compatible = "micron,n25q128a11"; /* 16MB */ 89 + reg = <0>; 90 + spi-max-frequency = <40000000>; /* input clock */ 91 + }; 92 + 93 + flash@1 { 94 + #address-cells = <1>; 95 + #size-cells = <1>; 96 + compatible = "sst,sst25wf040"; 97 + reg = <1>; 98 + spi-max-frequency = <35000000>; 99 + }; 100 + 101 + flash@2 { 102 + #address-cells = <1>; 103 + #size-cells = <1>; 104 + compatible = "eon,en25s64"; 105 + reg = <2>; 106 + spi-max-frequency = <35000000>; 107 + }; 108 + }; 109 + 110 + i2c@118000 { 111 + pca9547@77 { 112 + compatible = "nxp,pca9547"; 113 + reg = <0x77>; 114 + #address-cells = <1>; 115 + #size-cells = <0>; 116 + 117 + i2c@0 { 118 + #address-cells = <1>; 119 + #size-cells = <0>; 120 + reg = <0x0>; 121 + 122 + eeprom@50 { 
123 + compatible = "at24,24c512"; 124 + reg = <0x50>; 125 + }; 126 + 127 + eeprom@51 { 128 + compatible = "at24,24c02"; 129 + reg = <0x51>; 130 + }; 131 + 132 + eeprom@57 { 133 + compatible = "at24,24c02"; 134 + reg = <0x57>; 135 + }; 136 + 137 + rtc@68 { 138 + compatible = "dallas,ds3232"; 139 + reg = <0x68>; 140 + interrupts = <0x1 0x1 0 0>; 141 + }; 142 + }; 143 + 144 + i2c@1 { 145 + #address-cells = <1>; 146 + #size-cells = <0>; 147 + reg = <0x1>; 148 + 149 + eeprom@55 { 150 + compatible = "at24,24c02"; 151 + reg = <0x55>; 152 + }; 153 + }; 154 + 155 + i2c@2 { 156 + #address-cells = <1>; 157 + #size-cells = <0>; 158 + reg = <0x2>; 159 + 160 + ina220@40 { 161 + compatible = "ti,ina220"; 162 + reg = <0x40>; 163 + shunt-resistor = <1000>; 164 + }; 165 + 166 + ina220@41 { 167 + compatible = "ti,ina220"; 168 + reg = <0x41>; 169 + shunt-resistor = <1000>; 170 + }; 171 + }; 172 + }; 173 + }; 174 + 175 + sdhc@114000 { 176 + voltage-ranges = <1800 1800 3300 3300>; 177 + }; 178 + }; 179 + 180 + pci0: pcie@ffe240000 { 181 + reg = <0xf 0xfe240000 0 0x10000>; 182 + ranges = <0x02000000 0 0xe0000000 0xc 0x00000000 0x0 0x20000000 183 + 0x01000000 0 0x00000000 0xf 0xf8000000 0x0 0x00010000>; 184 + pcie@0 { 185 + ranges = <0x02000000 0 0xe0000000 186 + 0x02000000 0 0xe0000000 187 + 0 0x20000000 188 + 189 + 0x01000000 0 0x00000000 190 + 0x01000000 0 0x00000000 191 + 0 0x00010000>; 192 + }; 193 + }; 194 + 195 + pci1: pcie@ffe250000 { 196 + reg = <0xf 0xfe250000 0 0x10000>; 197 + ranges = <0x02000000 0x0 0xe0000000 0xc 0x20000000 0x0 0x10000000 198 + 0x01000000 0x0 0x00000000 0xf 0xf8010000 0x0 0x00010000>; 199 + pcie@0 { 200 + ranges = <0x02000000 0 0xe0000000 201 + 0x02000000 0 0xe0000000 202 + 0 0x20000000 203 + 204 + 0x01000000 0 0x00000000 205 + 0x01000000 0 0x00000000 206 + 0 0x00010000>; 207 + }; 208 + }; 209 + 210 + pci2: pcie@ffe260000 { 211 + reg = <0xf 0xfe260000 0 0x1000>; 212 + ranges = <0x02000000 0 0xe0000000 0xc 0x30000000 0 0x10000000 213 + 0x01000000 0 0x00000000 
0xf 0xf8020000 0 0x00010000>; 214 + pcie@0 { 215 + ranges = <0x02000000 0 0xe0000000 216 + 0x02000000 0 0xe0000000 217 + 0 0x20000000 218 + 219 + 0x01000000 0 0x00000000 220 + 0x01000000 0 0x00000000 221 + 0 0x00010000>; 222 + }; 223 + }; 224 + 225 + pci3: pcie@ffe270000 { 226 + reg = <0xf 0xfe270000 0 0x10000>; 227 + ranges = <0x02000000 0 0xe0000000 0xc 0x40000000 0 0x10000000 228 + 0x01000000 0 0x00000000 0xf 0xf8030000 0 0x00010000>; 229 + pcie@0 { 230 + ranges = <0x02000000 0 0xe0000000 231 + 0x02000000 0 0xe0000000 232 + 0 0x20000000 233 + 234 + 0x01000000 0 0x00000000 235 + 0x01000000 0 0x00000000 236 + 0 0x00010000>; 237 + }; 238 + }; 239 + };
+184
arch/powerpc/boot/dts/t208xrdb.dtsi
··· 1 + /* 2 + * T2080PCIe-RDB Board Device Tree Source 3 + * 4 + * Copyright 2014 Freescale Semiconductor Inc. 5 + * 6 + * Redistribution and use in source and binary forms, with or without 7 + * modification, are permitted provided that the following conditions are met: 8 + * * Redistributions of source code must retain the above copyright 9 + * notice, this list of conditions and the following disclaimer. 10 + * * Redistributions in binary form must reproduce the above copyright 11 + * notice, this list of conditions and the following disclaimer in the 12 + * documentation and/or other materials provided with the distribution. 13 + * * Neither the name of Freescale Semiconductor nor the 14 + * names of its contributors may be used to endorse or promote products 15 + * derived from this software without specific prior written permission. 16 + * 17 + * 18 + * ALTERNATIVELY, this software may be distributed under the terms of the 19 + * GNU General Public License ("GPL") as published by the Free Software 20 + * Foundation, either version 2 of that License or (at your option) any 21 + * later version. 22 + * 23 + * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor "AS IS" AND ANY 24 + * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED 25 + * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE 26 + * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY 27 + * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES 28 + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; 29 + * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND 30 + * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT 31 + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS 32 + * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
33 + */ 34 + 35 + / { 36 + model = "fsl,T2080RDB"; 37 + compatible = "fsl,T2080RDB"; 38 + #address-cells = <2>; 39 + #size-cells = <2>; 40 + interrupt-parent = <&mpic>; 41 + 42 + ifc: localbus@ffe124000 { 43 + reg = <0xf 0xfe124000 0 0x2000>; 44 + ranges = <0 0 0xf 0xe8000000 0x08000000 45 + 2 0 0xf 0xff800000 0x00010000 46 + 3 0 0xf 0xffdf0000 0x00008000>; 47 + 48 + nor@0,0 { 49 + #address-cells = <1>; 50 + #size-cells = <1>; 51 + compatible = "cfi-flash"; 52 + reg = <0x0 0x0 0x8000000>; 53 + 54 + bank-width = <2>; 55 + device-width = <1>; 56 + }; 57 + 58 + nand@1,0 { 59 + #address-cells = <1>; 60 + #size-cells = <1>; 61 + compatible = "fsl,ifc-nand"; 62 + reg = <0x2 0x0 0x10000>; 63 + }; 64 + 65 + boardctrl: board-control@2,0 { 66 + #address-cells = <1>; 67 + #size-cells = <1>; 68 + compatible = "fsl,t2080-cpld"; 69 + reg = <3 0 0x300>; 70 + ranges = <0 3 0 0x300>; 71 + }; 72 + }; 73 + 74 + memory { 75 + device_type = "memory"; 76 + }; 77 + 78 + dcsr: dcsr@f00000000 { 79 + ranges = <0x00000000 0xf 0x00000000 0x01072000>; 80 + }; 81 + 82 + soc: soc@ffe000000 { 83 + ranges = <0x00000000 0xf 0xfe000000 0x1000000>; 84 + reg = <0xf 0xfe000000 0 0x00001000>; 85 + spi@110000 { 86 + flash@0 { 87 + #address-cells = <1>; 88 + #size-cells = <1>; 89 + compatible = "micron,n25q512a"; 90 + reg = <0>; 91 + spi-max-frequency = <10000000>; /* input clock */ 92 + }; 93 + }; 94 + 95 + i2c@118000 { 96 + adt7481@4c { 97 + compatible = "adi,adt7481"; 98 + reg = <0x4c>; 99 + }; 100 + 101 + rtc@68 { 102 + compatible = "dallas,ds1339"; 103 + reg = <0x68>; 104 + interrupts = <0x1 0x1 0 0>; 105 + }; 106 + 107 + eeprom@50 { 108 + compatible = "atmel,24c256"; 109 + reg = <0x50>; 110 + }; 111 + }; 112 + 113 + i2c@118100 { 114 + pca9546@77 { 115 + compatible = "nxp,pca9546"; 116 + reg = <0x77>; 117 + }; 118 + }; 119 + 120 + sdhc@114000 { 121 + voltage-ranges = <1800 1800 3300 3300>; 122 + }; 123 + }; 124 + 125 + pci0: pcie@ffe240000 { 126 + reg = <0xf 0xfe240000 0 0x10000>; 127 + ranges = 
<0x02000000 0 0xe0000000 0xc 0x00000000 0x0 0x20000000 128 + 0x01000000 0 0x00000000 0xf 0xf8000000 0x0 0x00010000>; 129 + pcie@0 { 130 + ranges = <0x02000000 0 0xe0000000 131 + 0x02000000 0 0xe0000000 132 + 0 0x20000000 133 + 134 + 0x01000000 0 0x00000000 135 + 0x01000000 0 0x00000000 136 + 0 0x00010000>; 137 + }; 138 + }; 139 + 140 + pci1: pcie@ffe250000 { 141 + reg = <0xf 0xfe250000 0 0x10000>; 142 + ranges = <0x02000000 0x0 0xe0000000 0xc 0x20000000 0x0 0x10000000 143 + 0x01000000 0x0 0x00000000 0xf 0xf8010000 0x0 0x00010000>; 144 + pcie@0 { 145 + ranges = <0x02000000 0 0xe0000000 146 + 0x02000000 0 0xe0000000 147 + 0 0x20000000 148 + 149 + 0x01000000 0 0x00000000 150 + 0x01000000 0 0x00000000 151 + 0 0x00010000>; 152 + }; 153 + }; 154 + 155 + pci2: pcie@ffe260000 { 156 + reg = <0xf 0xfe260000 0 0x1000>; 157 + ranges = <0x02000000 0 0xe0000000 0xc 0x30000000 0 0x10000000 158 + 0x01000000 0 0x00000000 0xf 0xf8020000 0 0x00010000>; 159 + pcie@0 { 160 + ranges = <0x02000000 0 0xe0000000 161 + 0x02000000 0 0xe0000000 162 + 0 0x20000000 163 + 164 + 0x01000000 0 0x00000000 165 + 0x01000000 0 0x00000000 166 + 0 0x00010000>; 167 + }; 168 + }; 169 + 170 + pci3: pcie@ffe270000 { 171 + reg = <0xf 0xfe270000 0 0x10000>; 172 + ranges = <0x02000000 0 0xe0000000 0xc 0x40000000 0 0x10000000 173 + 0x01000000 0 0x00000000 0xf 0xf8030000 0 0x00010000>; 174 + pcie@0 { 175 + ranges = <0x02000000 0 0xe0000000 176 + 0x02000000 0 0xe0000000 177 + 0 0x20000000 178 + 179 + 0x01000000 0 0x00000000 180 + 0x01000000 0 0x00000000 181 + 0 0x00010000>; 182 + }; 183 + }; 184 + };
+186
arch/powerpc/boot/dts/t4240rdb.dts
··· 1 + /* 2 + * T4240RDB Device Tree Source 3 + * 4 + * Copyright 2014 Freescale Semiconductor Inc. 5 + * 6 + * Redistribution and use in source and binary forms, with or without 7 + * modification, are permitted provided that the following conditions are met: 8 + * * Redistributions of source code must retain the above copyright 9 + * notice, this list of conditions and the following disclaimer. 10 + * * Redistributions in binary form must reproduce the above copyright 11 + * notice, this list of conditions and the following disclaimer in the 12 + * documentation and/or other materials provided with the distribution. 13 + * * Neither the name of Freescale Semiconductor nor the 14 + * names of its contributors may be used to endorse or promote products 15 + * derived from this software without specific prior written permission. 16 + * 17 + * 18 + * ALTERNATIVELY, this software may be distributed under the terms of the 19 + * GNU General Public License ("GPL") as published by the Free Software 20 + * Foundation, either version 2 of that License or (at your option) any 21 + * later version. 22 + * 23 + * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor "AS IS" AND ANY 24 + * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED 25 + * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE 26 + * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY 27 + * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES 28 + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; 29 + * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND 30 + * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT 31 + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS 32 + * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
33 + */ 34 + 35 + /include/ "fsl/t4240si-pre.dtsi" 36 + 37 + / { 38 + model = "fsl,T4240RDB"; 39 + compatible = "fsl,T4240RDB"; 40 + #address-cells = <2>; 41 + #size-cells = <2>; 42 + interrupt-parent = <&mpic>; 43 + 44 + ifc: localbus@ffe124000 { 45 + reg = <0xf 0xfe124000 0 0x2000>; 46 + ranges = <0 0 0xf 0xe8000000 0x08000000 47 + 2 0 0xf 0xff800000 0x00010000 48 + 3 0 0xf 0xffdf0000 0x00008000>; 49 + 50 + nor@0,0 { 51 + #address-cells = <1>; 52 + #size-cells = <1>; 53 + compatible = "cfi-flash"; 54 + reg = <0x0 0x0 0x8000000>; 55 + 56 + bank-width = <2>; 57 + device-width = <1>; 58 + }; 59 + 60 + nand@2,0 { 61 + #address-cells = <1>; 62 + #size-cells = <1>; 63 + compatible = "fsl,ifc-nand"; 64 + reg = <0x2 0x0 0x10000>; 65 + }; 66 + }; 67 + 68 + memory { 69 + device_type = "memory"; 70 + }; 71 + 72 + dcsr: dcsr@f00000000 { 73 + ranges = <0x00000000 0xf 0x00000000 0x01072000>; 74 + }; 75 + 76 + soc: soc@ffe000000 { 77 + ranges = <0x00000000 0xf 0xfe000000 0x1000000>; 78 + reg = <0xf 0xfe000000 0 0x00001000>; 79 + spi@110000 { 80 + flash@0 { 81 + #address-cells = <1>; 82 + #size-cells = <1>; 83 + compatible = "sst,sst25wf040"; 84 + reg = <0>; 85 + spi-max-frequency = <40000000>; /* input clock */ 86 + }; 87 + }; 88 + 89 + i2c@118000 { 90 + eeprom@52 { 91 + compatible = "at24,24c256"; 92 + reg = <0x52>; 93 + }; 94 + eeprom@54 { 95 + compatible = "at24,24c256"; 96 + reg = <0x54>; 97 + }; 98 + eeprom@56 { 99 + compatible = "at24,24c256"; 100 + reg = <0x56>; 101 + }; 102 + rtc@68 { 103 + compatible = "dallas,ds1374"; 104 + reg = <0x68>; 105 + interrupts = <0x1 0x1 0 0>; 106 + }; 107 + }; 108 + 109 + sdhc@114000 { 110 + voltage-ranges = <1800 1800 3300 3300>; 111 + }; 112 + }; 113 + 114 + pci0: pcie@ffe240000 { 115 + reg = <0xf 0xfe240000 0 0x10000>; 116 + ranges = <0x02000000 0 0xe0000000 0xc 0x00000000 0x0 0x20000000 117 + 0x01000000 0 0x00000000 0xf 0xf8000000 0x0 0x00010000>; 118 + pcie@0 { 119 + ranges = <0x02000000 0 0xe0000000 120 + 0x02000000 0 0xe0000000 121 
+ 0 0x20000000 122 + 123 + 0x01000000 0 0x00000000 124 + 0x01000000 0 0x00000000 125 + 0 0x00010000>; 126 + }; 127 + }; 128 + 129 + pci1: pcie@ffe250000 { 130 + reg = <0xf 0xfe250000 0 0x10000>; 131 + ranges = <0x02000000 0x0 0xe0000000 0xc 0x20000000 0x0 0x20000000 132 + 0x01000000 0x0 0x00000000 0xf 0xf8010000 0x0 0x00010000>; 133 + pcie@0 { 134 + ranges = <0x02000000 0 0xe0000000 135 + 0x02000000 0 0xe0000000 136 + 0 0x20000000 137 + 138 + 0x01000000 0 0x00000000 139 + 0x01000000 0 0x00000000 140 + 0 0x00010000>; 141 + }; 142 + }; 143 + 144 + pci2: pcie@ffe260000 { 145 + reg = <0xf 0xfe260000 0 0x1000>; 146 + ranges = <0x02000000 0 0xe0000000 0xc 0x40000000 0 0x20000000 147 + 0x01000000 0 0x00000000 0xf 0xf8020000 0 0x00010000>; 148 + pcie@0 { 149 + ranges = <0x02000000 0 0xe0000000 150 + 0x02000000 0 0xe0000000 151 + 0 0x20000000 152 + 153 + 0x01000000 0 0x00000000 154 + 0x01000000 0 0x00000000 155 + 0 0x00010000>; 156 + }; 157 + }; 158 + 159 + pci3: pcie@ffe270000 { 160 + reg = <0xf 0xfe270000 0 0x10000>; 161 + ranges = <0x02000000 0 0xe0000000 0xc 0x60000000 0 0x20000000 162 + 0x01000000 0 0x00000000 0xf 0xf8030000 0 0x00010000>; 163 + pcie@0 { 164 + ranges = <0x02000000 0 0xe0000000 165 + 0x02000000 0 0xe0000000 166 + 0 0x20000000 167 + 168 + 0x01000000 0 0x00000000 169 + 0x01000000 0 0x00000000 170 + 0 0x00010000>; 171 + }; 172 + }; 173 + 174 + rio: rapidio@ffe0c0000 { 175 + reg = <0xf 0xfe0c0000 0 0x11000>; 176 + 177 + port1 { 178 + ranges = <0 0 0xc 0x20000000 0 0x10000000>; 179 + }; 180 + port2 { 181 + ranges = <0 0 0xc 0x30000000 0 0x10000000>; 182 + }; 183 + }; 184 + }; 185 + 186 + /include/ "fsl/t4240si-post.dtsi"
+1 -1
arch/powerpc/boot/io.h
··· 1 1 #ifndef _IO_H 2 - #define __IO_H 2 + #define _IO_H 3 3 4 4 #include "types.h" 5 5
+3 -1
arch/powerpc/configs/corenet32_smp_defconfig
··· 139 139 CONFIG_EDAC_MM_EDAC=y 140 140 CONFIG_EDAC_MPC85XX=y 141 141 CONFIG_RTC_CLASS=y 142 + CONFIG_RTC_DRV_DS1307=y 143 + CONFIG_RTC_DRV_DS1374=y 142 144 CONFIG_RTC_DRV_DS3232=y 143 - CONFIG_RTC_DRV_CMOS=y 144 145 CONFIG_UIO=y 145 146 CONFIG_STAGING=y 146 147 CONFIG_VIRT_DRIVERS=y ··· 180 179 CONFIG_CRYPTO_AES=y 181 180 # CONFIG_CRYPTO_ANSI_CPRNG is not set 182 181 CONFIG_CRYPTO_DEV_FSL_CAAM=y 182 + CONFIG_FSL_CORENET_CF=y
+5
arch/powerpc/configs/corenet64_smp_defconfig
··· 123 123 CONFIG_USB_STORAGE=y 124 124 CONFIG_MMC=y 125 125 CONFIG_MMC_SDHCI=y 126 + CONFIG_RTC_CLASS=y 127 + CONFIG_RTC_DRV_DS1307=y 128 + CONFIG_RTC_DRV_DS1374=y 129 + CONFIG_RTC_DRV_DS3232=y 126 130 CONFIG_EDAC=y 127 131 CONFIG_EDAC_MM_EDAC=y 128 132 CONFIG_DMADEVICES=y ··· 179 175 CONFIG_CRYPTO_SHA512=y 180 176 # CONFIG_CRYPTO_ANSI_CPRNG is not set 181 177 CONFIG_CRYPTO_DEV_FSL_CAAM=y 178 + CONFIG_FSL_CORENET_CF=y
+3
arch/powerpc/configs/mpc85xx_defconfig
··· 209 209 CONFIG_EDAC=y 210 210 CONFIG_EDAC_MM_EDAC=y 211 211 CONFIG_RTC_CLASS=y 212 + CONFIG_RTC_DRV_DS1307=y 213 + CONFIG_RTC_DRV_DS1374=y 214 + CONFIG_RTC_DRV_DS3232=y 212 215 CONFIG_RTC_DRV_CMOS=y 213 216 CONFIG_RTC_DRV_DS1307=y 214 217 CONFIG_DMADEVICES=y
+3
arch/powerpc/configs/mpc85xx_smp_defconfig
··· 210 210 CONFIG_EDAC=y 211 211 CONFIG_EDAC_MM_EDAC=y 212 212 CONFIG_RTC_CLASS=y 213 + CONFIG_RTC_DRV_DS1307=y 214 + CONFIG_RTC_DRV_DS1374=y 215 + CONFIG_RTC_DRV_DS3232=y 213 216 CONFIG_RTC_DRV_CMOS=y 214 217 CONFIG_RTC_DRV_DS1307=y 215 218 CONFIG_DMADEVICES=y
+10 -21
arch/powerpc/include/asm/cputable.h
··· 195 195 196 196 #define CPU_FTR_PPCAS_ARCH_V2 (CPU_FTR_NOEXECUTE | CPU_FTR_NODSISRALIGN) 197 197 198 - #define MMU_FTR_PPCAS_ARCH_V2 (MMU_FTR_SLB | MMU_FTR_TLBIEL | \ 199 - MMU_FTR_16M_PAGE) 198 + #define MMU_FTR_PPCAS_ARCH_V2 (MMU_FTR_TLBIEL | MMU_FTR_16M_PAGE) 200 199 201 200 /* We only set the altivec features if the kernel was compiled with altivec 202 201 * support ··· 266 267 #define CPU_FTR_MAYBE_CAN_DOZE 0 267 268 #define CPU_FTR_MAYBE_CAN_NAP 0 268 269 #endif 269 - 270 - #define CLASSIC_PPC (!defined(CONFIG_8xx) && !defined(CONFIG_4xx) && \ 271 - !defined(CONFIG_POWER3) && !defined(CONFIG_POWER4) && \ 272 - !defined(CONFIG_BOOKE)) 273 270 274 271 #define CPU_FTRS_PPC601 (CPU_FTR_COMMON | CPU_FTR_601 | \ 275 272 CPU_FTR_COHERENT_ICACHE | CPU_FTR_UNIFIED_ID_CACHE) ··· 391 396 CPU_FTR_L2CSR | CPU_FTR_LWSYNC | CPU_FTR_NOEXECUTE | \ 392 397 CPU_FTR_DBELL | CPU_FTR_POPCNTB | CPU_FTR_POPCNTD | \ 393 398 CPU_FTR_DEBUG_LVL_EXC | CPU_FTR_EMB_HV | CPU_FTR_ALTIVEC_COMP | \ 394 - CPU_FTR_CELL_TB_BUG) 399 + CPU_FTR_CELL_TB_BUG | CPU_FTR_SMT) 395 400 #define CPU_FTRS_GENERIC_32 (CPU_FTR_COMMON | CPU_FTR_NODSISRALIGN) 396 401 397 402 /* 64-bit CPUs */ 398 - #define CPU_FTRS_POWER3 (CPU_FTR_USE_TB | \ 399 - CPU_FTR_IABR | CPU_FTR_PPC_LE) 400 - #define CPU_FTRS_RS64 (CPU_FTR_USE_TB | \ 401 - CPU_FTR_IABR | \ 402 - CPU_FTR_MMCRA | CPU_FTR_CTRL) 403 403 #define CPU_FTRS_POWER4 (CPU_FTR_USE_TB | CPU_FTR_LWSYNC | \ 404 404 CPU_FTR_PPCAS_ARCH_V2 | CPU_FTR_CTRL | \ 405 405 CPU_FTR_MMCRA | CPU_FTR_CP_USE_DCBTZ | \ ··· 457 467 #define CPU_FTRS_POSSIBLE (CPU_FTRS_E6500 | CPU_FTRS_E5500 | CPU_FTRS_A2) 458 468 #else 459 469 #define CPU_FTRS_POSSIBLE \ 460 - (CPU_FTRS_POWER3 | CPU_FTRS_RS64 | CPU_FTRS_POWER4 | \ 461 - CPU_FTRS_PPC970 | CPU_FTRS_POWER5 | CPU_FTRS_POWER6 | \ 462 - CPU_FTRS_POWER7 | CPU_FTRS_POWER8E | CPU_FTRS_POWER8 | \ 463 - CPU_FTRS_CELL | CPU_FTRS_PA6T | CPU_FTR_VSX) 470 + (CPU_FTRS_POWER4 | CPU_FTRS_PPC970 | CPU_FTRS_POWER5 | \ 471 + CPU_FTRS_POWER6 | 
CPU_FTRS_POWER7 | CPU_FTRS_POWER8E | \ 472 + CPU_FTRS_POWER8 | CPU_FTRS_CELL | CPU_FTRS_PA6T | CPU_FTR_VSX) 464 473 #endif 465 474 #else 466 475 enum { 467 476 CPU_FTRS_POSSIBLE = 468 - #if CLASSIC_PPC 477 + #ifdef CONFIG_PPC_BOOK3S_32 469 478 CPU_FTRS_PPC601 | CPU_FTRS_603 | CPU_FTRS_604 | CPU_FTRS_740_NOTAU | 470 479 CPU_FTRS_740 | CPU_FTRS_750 | CPU_FTRS_750FX1 | 471 480 CPU_FTRS_750FX2 | CPU_FTRS_750FX | CPU_FTRS_750GX | ··· 507 518 #define CPU_FTRS_ALWAYS (CPU_FTRS_E6500 & CPU_FTRS_E5500 & CPU_FTRS_A2) 508 519 #else 509 520 #define CPU_FTRS_ALWAYS \ 510 - (CPU_FTRS_POWER3 & CPU_FTRS_RS64 & CPU_FTRS_POWER4 & \ 511 - CPU_FTRS_PPC970 & CPU_FTRS_POWER5 & CPU_FTRS_POWER6 & \ 512 - CPU_FTRS_POWER7 & CPU_FTRS_CELL & CPU_FTRS_PA6T & CPU_FTRS_POSSIBLE) 521 + (CPU_FTRS_POWER4 & CPU_FTRS_PPC970 & CPU_FTRS_POWER5 & \ 522 + CPU_FTRS_POWER6 & CPU_FTRS_POWER7 & CPU_FTRS_CELL & \ 523 + CPU_FTRS_PA6T & CPU_FTRS_POSSIBLE) 513 524 #endif 514 525 #else 515 526 enum { 516 527 CPU_FTRS_ALWAYS = 517 - #if CLASSIC_PPC 528 + #ifdef CONFIG_PPC_BOOK3S_32 518 529 CPU_FTRS_PPC601 & CPU_FTRS_603 & CPU_FTRS_604 & CPU_FTRS_740_NOTAU & 519 530 CPU_FTRS_740 & CPU_FTRS_750 & CPU_FTRS_750FX1 & 520 531 CPU_FTRS_750FX2 & CPU_FTRS_750FX & CPU_FTRS_750GX &
+40 -28
arch/powerpc/include/asm/eeh.h
··· 25 25 #include <linux/list.h> 26 26 #include <linux/string.h> 27 27 #include <linux/time.h> 28 + #include <linux/atomic.h> 28 29 29 30 struct pci_dev; 30 31 struct pci_bus; ··· 34 33 #ifdef CONFIG_EEH 35 34 36 35 /* EEH subsystem flags */ 37 - #define EEH_ENABLED 0x1 /* EEH enabled */ 38 - #define EEH_FORCE_DISABLED 0x2 /* EEH disabled */ 39 - #define EEH_PROBE_MODE_DEV 0x4 /* From PCI device */ 40 - #define EEH_PROBE_MODE_DEVTREE 0x8 /* From device tree */ 36 + #define EEH_ENABLED 0x01 /* EEH enabled */ 37 + #define EEH_FORCE_DISABLED 0x02 /* EEH disabled */ 38 + #define EEH_PROBE_MODE_DEV 0x04 /* From PCI device */ 39 + #define EEH_PROBE_MODE_DEVTREE 0x08 /* From device tree */ 40 + #define EEH_ENABLE_IO_FOR_LOG 0x10 /* Enable IO for log */ 41 41 42 42 /* 43 43 * Delay for PE reset, all in ms ··· 86 84 int freeze_count; /* Times of froze up */ 87 85 struct timeval tstamp; /* Time on first-time freeze */ 88 86 int false_positives; /* Times of reported #ff's */ 87 + atomic_t pass_dev_cnt; /* Count of passed through devs */ 89 88 struct eeh_pe *parent; /* Parent PE */ 89 + void *data; /* PE auxillary data */ 90 90 struct list_head child_list; /* Link PE to the child list */ 91 91 struct list_head edevs; /* Link list of EEH devices */ 92 92 struct list_head child; /* Child PEs */ ··· 96 92 97 93 #define eeh_pe_for_each_dev(pe, edev, tmp) \ 98 94 list_for_each_entry_safe(edev, tmp, &pe->edevs, list) 95 + 96 + static inline bool eeh_pe_passed(struct eeh_pe *pe) 97 + { 98 + return pe ? 
!!atomic_read(&pe->pass_dev_cnt) : false; 99 + } 99 100 100 101 /* 101 102 * The struct is used to trace EEH state for the associated ··· 174 165 #define EEH_STATE_DMA_ACTIVE (1 << 4) /* Active DMA */ 175 166 #define EEH_STATE_MMIO_ENABLED (1 << 5) /* MMIO enabled */ 176 167 #define EEH_STATE_DMA_ENABLED (1 << 6) /* DMA enabled */ 168 + #define EEH_PE_STATE_NORMAL 0 /* Normal state */ 169 + #define EEH_PE_STATE_RESET 1 /* PE reset asserted */ 170 + #define EEH_PE_STATE_STOPPED_IO_DMA 2 /* Frozen PE */ 171 + #define EEH_PE_STATE_STOPPED_DMA 4 /* Stopped DMA, Enabled IO */ 172 + #define EEH_PE_STATE_UNAVAIL 5 /* Unavailable */ 177 173 #define EEH_RESET_DEACTIVATE 0 /* Deactivate the PE reset */ 178 174 #define EEH_RESET_HOT 1 /* Hot reset */ 179 175 #define EEH_RESET_FUNDAMENTAL 3 /* Fundamental reset */ ··· 208 194 extern struct eeh_ops *eeh_ops; 209 195 extern raw_spinlock_t confirm_error_lock; 210 196 211 - static inline bool eeh_enabled(void) 212 - { 213 - if ((eeh_subsystem_flags & EEH_FORCE_DISABLED) || 214 - !(eeh_subsystem_flags & EEH_ENABLED)) 215 - return false; 216 - 217 - return true; 218 - } 219 - 220 - static inline void eeh_set_enable(bool mode) 221 - { 222 - if (mode) 223 - eeh_subsystem_flags |= EEH_ENABLED; 224 - else 225 - eeh_subsystem_flags &= ~EEH_ENABLED; 226 - } 227 - 228 - static inline void eeh_probe_mode_set(int flag) 197 + static inline void eeh_add_flag(int flag) 229 198 { 230 199 eeh_subsystem_flags |= flag; 231 200 } 232 201 233 - static inline int eeh_probe_mode_devtree(void) 202 + static inline void eeh_clear_flag(int flag) 234 203 { 235 - return (eeh_subsystem_flags & EEH_PROBE_MODE_DEVTREE); 204 + eeh_subsystem_flags &= ~flag; 236 205 } 237 206 238 - static inline int eeh_probe_mode_dev(void) 207 + static inline bool eeh_has_flag(int flag) 239 208 { 240 - return (eeh_subsystem_flags & EEH_PROBE_MODE_DEV); 209 + return !!(eeh_subsystem_flags & flag); 210 + } 211 + 212 + static inline bool eeh_enabled(void) 213 + { 214 + if 
(eeh_has_flag(EEH_FORCE_DISABLED) || 215 + !eeh_has_flag(EEH_ENABLED)) 216 + return false; 217 + 218 + return true; 241 219 } 242 220 243 221 static inline void eeh_serialize_lock(unsigned long *flags) ··· 249 243 #define EEH_MAX_ALLOWED_FREEZES 5 250 244 251 245 typedef void *(*eeh_traverse_func)(void *data, void *flag); 246 + void eeh_set_pe_aux_size(int size); 252 247 int eeh_phb_pe_create(struct pci_controller *phb); 253 248 struct eeh_pe *eeh_phb_pe_get(struct pci_controller *phb); 254 249 struct eeh_pe *eeh_pe_get(struct eeh_dev *edev); ··· 279 272 void eeh_add_device_tree_late(struct pci_bus *); 280 273 void eeh_add_sysfs_files(struct pci_bus *); 281 274 void eeh_remove_device(struct pci_dev *); 275 + int eeh_dev_open(struct pci_dev *pdev); 276 + void eeh_dev_release(struct pci_dev *pdev); 277 + struct eeh_pe *eeh_iommu_group_to_pe(struct iommu_group *group); 278 + int eeh_pe_set_option(struct eeh_pe *pe, int option); 279 + int eeh_pe_get_state(struct eeh_pe *pe); 280 + int eeh_pe_reset(struct eeh_pe *pe, int option); 281 + int eeh_pe_configure(struct eeh_pe *pe); 282 282 283 283 /** 284 284 * EEH_POSSIBLE_ERROR() -- test for possible MMIO failure. ··· 308 294 { 309 295 return false; 310 296 } 311 - 312 - static inline void eeh_set_enable(bool mode) { } 313 297 314 298 static inline int eeh_init(void) 315 299 {
+10 -4
arch/powerpc/include/asm/exception-64s.h
··· 425 425 #define SOFTEN_VALUE_0xa00 PACA_IRQ_DBELL 426 426 #define SOFTEN_VALUE_0xe80 PACA_IRQ_DBELL 427 427 #define SOFTEN_VALUE_0xe82 PACA_IRQ_DBELL 428 + #define SOFTEN_VALUE_0xe60 PACA_IRQ_HMI 429 + #define SOFTEN_VALUE_0xe62 PACA_IRQ_HMI 428 430 429 431 #define __SOFTEN_TEST(h, vec) \ 430 432 lbz r10,PACASOFTIRQEN(r13); \ ··· 515 513 * runlatch, etc... 516 514 */ 517 515 518 - /* Exception addition: Hard disable interrupts */ 519 - #define DISABLE_INTS RECONCILE_IRQ_STATE(r10,r11) 516 + /* 517 + * This addition reconciles our actual IRQ state with the various software 518 + * flags that track it. This may call C code. 519 + */ 520 + #define ADD_RECONCILE RECONCILE_IRQ_STATE(r10,r11) 520 521 521 522 #define ADD_NVGPRS \ 522 523 bl save_nvgprs ··· 537 532 .globl label##_common; \ 538 533 label##_common: \ 539 534 EXCEPTION_PROLOG_COMMON(trap, PACA_EXGEN); \ 535 + /* Volatile regs are potentially clobbered here */ \ 540 536 additions; \ 541 537 addi r3,r1,STACK_FRAME_OVERHEAD; \ 542 538 bl hdlr; \ ··· 545 539 546 540 #define STD_EXCEPTION_COMMON(trap, label, hdlr) \ 547 541 EXCEPTION_COMMON(trap, label, hdlr, ret_from_except, \ 548 - ADD_NVGPRS;DISABLE_INTS) 542 + ADD_NVGPRS;ADD_RECONCILE) 549 543 550 544 /* 551 545 * Like STD_EXCEPTION_COMMON, but for exceptions that can occur ··· 554 548 */ 555 549 #define STD_EXCEPTION_COMMON_ASYNC(trap, label, hdlr) \ 556 550 EXCEPTION_COMMON(trap, label, hdlr, ret_from_except_lite, \ 557 - FINISH_NAP;DISABLE_INTS;RUNLATCH_ON) 551 + FINISH_NAP;ADD_RECONCILE;RUNLATCH_ON) 558 552 559 553 /* 560 554 * When the idle code in power4_idle puts the CPU into NAP mode,
-1
arch/powerpc/include/asm/fs_pd.h
···
 
 #ifdef CONFIG_8xx
 #include <asm/8xx_immap.h>
-#include <asm/mpc8xx.h>
 
 extern immap_t __iomem *mpc8xx_immr;
 
+1
arch/powerpc/include/asm/hardirq.h
···
 	unsigned int pmu_irqs;
 	unsigned int mce_exceptions;
 	unsigned int spurious_irqs;
+	unsigned int hmi_exceptions;
 #ifdef CONFIG_PPC_DOORBELL
 	unsigned int doorbell_irqs;
 #endif
+1
arch/powerpc/include/asm/hw_irq.h
···
 #define PACA_IRQ_EE		0x04
 #define PACA_IRQ_DEC		0x08 /* Or FIT */
 #define PACA_IRQ_EE_EDGE	0x10 /* BookE only */
+#define PACA_IRQ_HMI		0x20
 
 #endif /* CONFIG_PPC64 */
 
+5 -3
arch/powerpc/include/asm/irqflags.h
···
 #endif
 
 /*
- * Most of the CPU's IRQ-state tracing is done from assembly code; we
- * have to call a C function so call a wrapper that saves all the
- * C-clobbered registers.
+ * These are calls to C code, so the caller must be prepared for volatiles to
+ * be clobbered.
  */
 #define TRACE_ENABLE_INTS	TRACE_WITH_FRAME_BUFFER(trace_hardirqs_on)
 #define TRACE_DISABLE_INTS	TRACE_WITH_FRAME_BUFFER(trace_hardirqs_off)
···
 /*
  * This is used by assembly code to soft-disable interrupts first and
  * reconcile irq state.
+ *
+ * NB: This may call C code, so the caller must be prepared for volatiles to
+ * be clobbered.
  */
 #define RECONCILE_IRQ_STATE(__rA, __rB)		\
 	lbz	__rA,PACASOFTIRQEN(r13);	\
+9
arch/powerpc/include/asm/jump_label.h
···
  * 2 of the License, or (at your option) any later version.
  */
 
+#ifndef __ASSEMBLY__
 #include <linux/types.h>
 
 #include <asm/feature-fixups.h>
···
 	jump_label_t target;
 	jump_label_t key;
 };
+
+#else
+#define ARCH_STATIC_BRANCH(LABEL, KEY)		\
+1098:	nop;					\
+	.pushsection __jump_table, "aw";	\
+	FTR_ENTRY_LONG 1098b, LABEL, KEY;	\
+	.popsection
+#endif
 
 #endif /* _ASM_POWERPC_JUMP_LABEL_H */
+1
arch/powerpc/include/asm/kvm_asm.h
···
 #define BOOK3S_INTERRUPT_H_DATA_STORAGE	0xe00
 #define BOOK3S_INTERRUPT_H_INST_STORAGE	0xe20
 #define BOOK3S_INTERRUPT_H_EMUL_ASSIST	0xe40
+#define BOOK3S_INTERRUPT_HMI		0xe60
 #define BOOK3S_INTERRUPT_H_DOORBELL	0xe80
 #define BOOK3S_INTERRUPT_PERFMON	0xf00
 #define BOOK3S_INTERRUPT_ALTIVEC	0xf20
+5
arch/powerpc/include/asm/machdep.h
···
 	/* Exception handlers */
 	int		(*system_reset_exception)(struct pt_regs *regs);
 	int		(*machine_check_exception)(struct pt_regs *regs);
+	int		(*handle_hmi_exception)(struct pt_regs *regs);
+
+	/* Early exception handlers called in realmode */
+	int		(*hmi_exception_early)(struct pt_regs *regs);
 
 	/* Called during machine check exception to retrive fixup address. */
 	bool		(*mce_check_early_recovery)(struct pt_regs *regs);
···
 	}							\
 	__define_initcall(__machine_initcall_##mach##_##fn, id);
 
+#define machine_early_initcall(mach, fn)	__define_machine_initcall(mach, fn, early)
 #define machine_core_initcall(mach, fn)		__define_machine_initcall(mach, fn, 1)
 #define machine_core_initcall_sync(mach, fn)	__define_machine_initcall(mach, fn, 1s)
 #define machine_postcore_initcall(mach, fn)	__define_machine_initcall(mach, fn, 2)
-22
arch/powerpc/include/asm/mmu-hash64.h
···
 #include <asm/processor.h>
 
 /*
- * Segment table
- */
-
-#define STE_ESID_V		0x80
-#define STE_ESID_KS		0x20
-#define STE_ESID_KP		0x10
-#define STE_ESID_N		0x08
-
-#define STE_VSID_SHIFT		12
-
-/* Location of cpu0's segment table */
-#define STAB0_PAGE		0x8
-#define STAB0_OFFSET		(STAB0_PAGE << 12)
-#define STAB0_PHYS_ADDR		(STAB0_OFFSET + PHYSICAL_START)
-
-#ifndef __ASSEMBLY__
-extern char initial_stab[];
-#endif /* ! __ASSEMBLY */
-
-/*
  * SLB
  */
 
···
 extern void hpte_init_beat(void);
 extern void hpte_init_beat_v3(void);
 
-extern void stabs_alloc(void);
 extern void slb_initialize(void);
 extern void slb_flush_and_rebolt(void);
-extern void stab_initialize(unsigned long stab);
 
 extern void slb_vmalloc_update(void);
 extern void slb_set_size(u16 size);
+2 -6
arch/powerpc/include/asm/mmu.h
···
  */
 #define MMU_FTR_USE_PAIRED_MAS		ASM_CONST(0x01000000)
 
-/* MMU is SLB-based
+/* Doesn't support the B bit (1T segment) in SLBIE
  */
-#define MMU_FTR_SLB			ASM_CONST(0x02000000)
+#define MMU_FTR_NO_SLBIE_B		ASM_CONST(0x02000000)
 
 /* Support 16M large pages
  */
···
 /* 1T segments available
  */
 #define MMU_FTR_1T_SEGMENT		ASM_CONST(0x40000000)
-
-/* Doesn't support the B bit (1T segment) in SLBIE
- */
-#define MMU_FTR_NO_SLBIE_B		ASM_CONST(0x80000000)
 
 /* MMU feature bit sets for various CPUs */
 #define MMU_FTRS_DEFAULT_HPTE_ARCH_V2	\
+1 -5
arch/powerpc/include/asm/mmu_context.h
···
 extern void destroy_context(struct mm_struct *mm);
 
 extern void switch_mmu_context(struct mm_struct *prev, struct mm_struct *next);
-extern void switch_stab(struct task_struct *tsk, struct mm_struct *mm);
 extern void switch_slb(struct task_struct *tsk, struct mm_struct *mm);
 extern void set_context(unsigned long id, pgd_t *pgd);
 
···
 	 * sub architectures.
 	 */
 #ifdef CONFIG_PPC_STD_MMU_64
-	if (mmu_has_feature(MMU_FTR_SLB))
-		switch_slb(tsk, next);
-	else
-		switch_stab(tsk, next);
+	switch_slb(tsk, next);
 #else
 	/* Out of line for now */
 	switch_mmu_context(prev, next);
+2
arch/powerpc/include/asm/mpc85xx.h
···
 #define SVR_T1020	0x852100
 #define SVR_T1021	0x852101
 #define SVR_T1022	0x852102
+#define SVR_T2080	0x853000
+#define SVR_T2081	0x853100
 
 #define SVR_8610	0x80A000
 #define SVR_8641	0x809000
-12
arch/powerpc/include/asm/mpc8xx.h
···
-/* This is the single file included by all MPC8xx build options.
- * Since there are many different boards and no standard configuration,
- * we have a unique include file for each. Rather than change every
- * file that has to include MPC8xx configuration, they all include
- * this one and the configuration switching is done here.
- */
-#ifndef __CONFIG_8xx_DEFS
-#define __CONFIG_8xx_DEFS
-
-extern struct mpc8xx_pcmcia_ops m8xx_pcmcia_ops;
-
-#endif /* __CONFIG_8xx_DEFS */
+128 -66
arch/powerpc/include/asm/opal.h
···
 #define OPAL_SET_PARAM				90
 #define OPAL_DUMP_RESEND			91
 #define OPAL_DUMP_INFO2				94
+#define OPAL_PCI_EEH_FREEZE_SET			97
+#define OPAL_HANDLE_HMI				98
 
 #ifndef __ASSEMBLY__
 
···
 enum OpalEehFreezeActionToken {
 	OPAL_EEH_ACTION_CLEAR_FREEZE_MMIO = 1,
 	OPAL_EEH_ACTION_CLEAR_FREEZE_DMA = 2,
-	OPAL_EEH_ACTION_CLEAR_FREEZE_ALL = 3
+	OPAL_EEH_ACTION_CLEAR_FREEZE_ALL = 3,
+
+	OPAL_EEH_ACTION_SET_FREEZE_MMIO = 1,
+	OPAL_EEH_ACTION_SET_FREEZE_DMA = 2,
+	OPAL_EEH_ACTION_SET_FREEZE_ALL = 3
 };
 
 enum OpalPciStatusToken {
···
 	OPAL_MSG_MEM_ERR,
 	OPAL_MSG_EPOW,
 	OPAL_MSG_SHUTDOWN,
+	OPAL_MSG_HMI_EVT,
 	OPAL_MSG_TYPE_MAX,
 };
···
 enum OpalMveEnableAction {
 	OPAL_DISABLE_MVE = 0,
 	OPAL_ENABLE_MVE = 1
+};
+
+enum OpalM64EnableAction {
+	OPAL_DISABLE_M64 = 0,
+	OPAL_ENABLE_M64_SPLIT = 1,
+	OPAL_ENABLE_M64_NON_SPLIT = 2
 };
 
 enum OpalPciResetScope {
···
 	} u;
 };
 
+/* HMI interrupt event */
+enum OpalHMI_Version {
+	OpalHMIEvt_V1 = 1,
+};
+
+enum OpalHMI_Severity {
+	OpalHMI_SEV_NO_ERROR = 0,
+	OpalHMI_SEV_WARNING = 1,
+	OpalHMI_SEV_ERROR_SYNC = 2,
+	OpalHMI_SEV_FATAL = 3,
+};
+
+enum OpalHMI_Disposition {
+	OpalHMI_DISPOSITION_RECOVERED = 0,
+	OpalHMI_DISPOSITION_NOT_RECOVERED = 1,
+};
+
+enum OpalHMI_ErrType {
+	OpalHMI_ERROR_MALFUNC_ALERT = 0,
+	OpalHMI_ERROR_PROC_RECOV_DONE,
+	OpalHMI_ERROR_PROC_RECOV_DONE_AGAIN,
+	OpalHMI_ERROR_PROC_RECOV_MASKED,
+	OpalHMI_ERROR_TFAC,
+	OpalHMI_ERROR_TFMR_PARITY,
+	OpalHMI_ERROR_HA_OVERFLOW_WARN,
+	OpalHMI_ERROR_XSCOM_FAIL,
+	OpalHMI_ERROR_XSCOM_DONE,
+	OpalHMI_ERROR_SCOM_FIR,
+	OpalHMI_ERROR_DEBUG_TRIG_FIR,
+	OpalHMI_ERROR_HYP_RESOURCE,
+};
+
+struct OpalHMIEvent {
+	uint8_t		version;	/* 0x00 */
+	uint8_t		severity;	/* 0x01 */
+	uint8_t		type;		/* 0x02 */
+	uint8_t		disposition;	/* 0x03 */
+	uint8_t		reserved_1[4];	/* 0x04 */
+
+	__be64		hmer;
+	/* TFMR register. Valid only for TFAC and TFMR_PARITY error type. */
+	__be64		tfmr;
+};
+
 enum {
 	OPAL_P7IOC_DIAG_TYPE_NONE	= 0,
 	OPAL_P7IOC_DIAG_TYPE_RGC	= 1,
···
 };
 
 struct OpalIoP7IOCErrorData {
-	uint16_t type;
+	__be16  type;
 
 	/* GEM */
-	uint64_t gemXfir;
-	uint64_t gemRfir;
-	uint64_t gemRirqfir;
-	uint64_t gemMask;
-	uint64_t gemRwof;
+	__be64  gemXfir;
+	__be64  gemRfir;
+	__be64  gemRirqfir;
+	__be64  gemMask;
+	__be64  gemRwof;
 
 	/* LEM */
-	uint64_t lemFir;
-	uint64_t lemErrMask;
-	uint64_t lemAction0;
-	uint64_t lemAction1;
-	uint64_t lemWof;
+	__be64  lemFir;
+	__be64  lemErrMask;
+	__be64  lemAction0;
+	__be64  lemAction1;
+	__be64  lemWof;
 
 	union {
 		struct OpalIoP7IOCRgcErrorData {
-			uint64_t rgcStatus;	/* 3E1C10 */
-			uint64_t rgcLdcp;	/* 3E1C18 */
+			__be64 rgcStatus;	/* 3E1C10 */
+			__be64 rgcLdcp;		/* 3E1C18 */
 		}rgc;
 		struct OpalIoP7IOCBiErrorData {
-			uint64_t biLdcp0;	/* 3C0100, 3C0118 */
-			uint64_t biLdcp1;	/* 3C0108, 3C0120 */
-			uint64_t biLdcp2;	/* 3C0110, 3C0128 */
-			uint64_t biFenceStatus;	/* 3C0130, 3C0130 */
+			__be64 biLdcp0;		/* 3C0100, 3C0118 */
+			__be64 biLdcp1;		/* 3C0108, 3C0120 */
+			__be64 biLdcp2;		/* 3C0110, 3C0128 */
+			__be64 biFenceStatus;	/* 3C0130, 3C0130 */
 
-			uint8_t biDownbound;	/* BI Downbound or Upbound */
+			u8 biDownbound;		/* BI Downbound or Upbound */
 		}bi;
 		struct OpalIoP7IOCCiErrorData {
-			uint64_t ciPortStatus;	/* 3Dn008 */
-			uint64_t ciPortLdcp;	/* 3Dn010 */
+			__be64 ciPortStatus;	/* 3Dn008 */
+			__be64 ciPortLdcp;	/* 3Dn010 */
 
-			uint8_t ciPort;		/* Index of CI port: 0/1 */
+			u8 ciPort;		/* Index of CI port: 0/1 */
 		}ci;
 	};
 };
···
 struct OpalIoP7IOCPhbErrorData {
 	struct OpalIoPhbErrorCommon common;
 
-	uint32_t brdgCtl;
+	__be32 brdgCtl;
 
 	// P7IOC utl regs
-	uint32_t portStatusReg;
-	uint32_t rootCmplxStatus;
-	uint32_t busAgentStatus;
+	__be32 portStatusReg;
+	__be32 rootCmplxStatus;
+	__be32 busAgentStatus;
 
 	// P7IOC cfg regs
-	uint32_t deviceStatus;
-	uint32_t slotStatus;
-	uint32_t linkStatus;
-	uint32_t devCmdStatus;
-	uint32_t devSecStatus;
+	__be32 deviceStatus;
+	__be32 slotStatus;
+	__be32 linkStatus;
+	__be32 devCmdStatus;
+	__be32 devSecStatus;
 
 	// cfg AER regs
-	uint32_t rootErrorStatus;
-	uint32_t uncorrErrorStatus;
-	uint32_t corrErrorStatus;
-	uint32_t tlpHdr1;
-	uint32_t tlpHdr2;
-	uint32_t tlpHdr3;
-	uint32_t tlpHdr4;
-	uint32_t sourceId;
+	__be32 rootErrorStatus;
+	__be32 uncorrErrorStatus;
+	__be32 corrErrorStatus;
+	__be32 tlpHdr1;
+	__be32 tlpHdr2;
+	__be32 tlpHdr3;
+	__be32 tlpHdr4;
+	__be32 sourceId;
 
-	uint32_t rsv3;
+	__be32 rsv3;
 
 	// Record data about the call to allocate a buffer.
-	uint64_t errorClass;
-	uint64_t correlator;
+	__be64 errorClass;
+	__be64 correlator;
 
 	//P7IOC MMIO Error Regs
-	uint64_t p7iocPlssr;		// n120
-	uint64_t p7iocCsr;		// n110
-	uint64_t lemFir;		// nC00
-	uint64_t lemErrorMask;		// nC18
-	uint64_t lemWOF;		// nC40
-	uint64_t phbErrorStatus;	// nC80
-	uint64_t phbFirstErrorStatus;	// nC88
-	uint64_t phbErrorLog0;		// nCC0
-	uint64_t phbErrorLog1;		// nCC8
-	uint64_t mmioErrorStatus;	// nD00
-	uint64_t mmioFirstErrorStatus;	// nD08
-	uint64_t mmioErrorLog0;		// nD40
-	uint64_t mmioErrorLog1;		// nD48
-	uint64_t dma0ErrorStatus;	// nD80
-	uint64_t dma0FirstErrorStatus;	// nD88
-	uint64_t dma0ErrorLog0;		// nDC0
-	uint64_t dma0ErrorLog1;		// nDC8
-	uint64_t dma1ErrorStatus;	// nE00
-	uint64_t dma1FirstErrorStatus;	// nE08
-	uint64_t dma1ErrorLog0;		// nE40
-	uint64_t dma1ErrorLog1;		// nE48
-	uint64_t pestA[OPAL_P7IOC_NUM_PEST_REGS];
-	uint64_t pestB[OPAL_P7IOC_NUM_PEST_REGS];
+	__be64 p7iocPlssr;		// n120
+	__be64 p7iocCsr;		// n110
+	__be64 lemFir;			// nC00
+	__be64 lemErrorMask;		// nC18
+	__be64 lemWOF;			// nC40
+	__be64 phbErrorStatus;		// nC80
+	__be64 phbFirstErrorStatus;	// nC88
+	__be64 phbErrorLog0;		// nCC0
+	__be64 phbErrorLog1;		// nCC8
+	__be64 mmioErrorStatus;		// nD00
+	__be64 mmioFirstErrorStatus;	// nD08
+	__be64 mmioErrorLog0;		// nD40
+	__be64 mmioErrorLog1;		// nD48
+	__be64 dma0ErrorStatus;		// nD80
+	__be64 dma0FirstErrorStatus;	// nD88
+	__be64 dma0ErrorLog0;		// nDC0
+	__be64 dma0ErrorLog1;		// nDC8
+	__be64 dma1ErrorStatus;		// nE00
+	__be64 dma1FirstErrorStatus;	// nE08
+	__be64 dma1ErrorLog0;		// nE40
+	__be64 dma1ErrorLog1;		// nE48
+	__be64 pestA[OPAL_P7IOC_NUM_PEST_REGS];
+	__be64 pestB[OPAL_P7IOC_NUM_PEST_REGS];
 };
 
 struct OpalIoPhb3ErrorData {
···
 				       __be64 *phb_status);
 int64_t opal_pci_eeh_freeze_clear(uint64_t phb_id, uint64_t pe_number,
 				  uint64_t eeh_action_token);
+int64_t opal_pci_eeh_freeze_set(uint64_t phb_id, uint64_t pe_number,
+				uint64_t eeh_action_token);
 int64_t opal_pci_shpc(uint64_t phb_id, uint64_t shpc_action, uint8_t *state);
 
···
 			       uint16_t window_num,
 			       uint64_t starting_real_address,
 			       uint64_t starting_pci_address,
-			       uint16_t segment_size);
+			       uint64_t size);
 int64_t opal_pci_map_pe_mmio_window(uint64_t phb_id, uint16_t pe_number,
 				    uint16_t window_type, uint16_t window_num,
 				    uint16_t segment_num);
···
 int64_t opal_set_param(uint64_t token, uint32_t param_id, uint64_t buffer,
 		       uint64_t length);
 int64_t opal_sensor_read(uint32_t sensor_hndl, int token, __be32 *sensor_data);
+int64_t opal_handle_hmi(void);
 
 /* Internal functions */
 extern int early_init_dt_scan_opal(unsigned long node, const char *uname,
···
 
 extern int opal_machine_check(struct pt_regs *regs);
 extern bool opal_mce_check_early_recovery(struct pt_regs *regs);
+extern int opal_hmi_exception_early(struct pt_regs *regs);
+extern int opal_handle_hmi_exception(struct pt_regs *regs);
 
 extern void opal_shutdown(void);
 extern int opal_resync_timebase(void);
-1
arch/powerpc/include/asm/oprofile_impl.h
···
 };
 
 extern struct op_powerpc_model op_model_fsl_emb;
-extern struct op_powerpc_model op_model_rs64;
 extern struct op_powerpc_model op_model_power4;
 extern struct op_powerpc_model op_model_7450;
 extern struct op_powerpc_model op_model_cell;
+1 -4
arch/powerpc/include/asm/paca.h
···
 	u64 kernel_toc;			/* Kernel TOC address */
 	u64 kernelbase;			/* Base address of kernel */
 	u64 kernel_msr;			/* MSR while running in kernel */
-#ifdef CONFIG_PPC_STD_MMU_64
-	u64 stab_real;			/* Absolute address of segment table */
-	u64 stab_addr;			/* Virtual address of segment table */
-#endif /* CONFIG_PPC_STD_MMU_64 */
 	void *emergency_sp;		/* pointer to emergency stack */
 	u64 data_offset;		/* per cpu data offset */
 	s16 hw_cpu_id;			/* Physical processor number */
···
 	 * and already using emergency stack.
 	 */
 	u16 in_mce;
+	u8 hmi_event_available;		/* HMI event is available */
 #endif
 
 	/* Stuff for accurate time accounting */
+4 -1
arch/powerpc/include/asm/perf_event_server.h
···
 #define MAX_EVENT_ALTERNATIVES	8
 #define MAX_LIMITED_HWCOUNTERS	2
 
+struct perf_event;
+
 /*
  * This struct provides the constants and functions needed to
  * describe the PMU on a particular POWER-family CPU.
···
 	unsigned long	add_fields;
 	unsigned long	test_adder;
 	int		(*compute_mmcr)(u64 events[], int n_ev,
-				unsigned int hwc[], unsigned long mmcr[]);
+				unsigned int hwc[], unsigned long mmcr[],
+				struct perf_event *pevents[]);
 	int		(*get_constraint)(u64 event_id, unsigned long *mskp,
 				unsigned long *valp);
 	int		(*get_alternatives)(u64 event_id, unsigned int flags,
+9
arch/powerpc/include/asm/ppc-opcode.h
···
 #define PPC_INST_MCRXR_MASK		0xfc0007fe
 #define PPC_INST_MFSPR_PVR		0x7c1f42a6
 #define PPC_INST_MFSPR_PVR_MASK		0xfc1fffff
+#define PPC_INST_MFTMR			0x7c0002dc
 #define PPC_INST_MSGSND			0x7c00019c
 #define PPC_INST_MSGSNDP		0x7c00011c
+#define PPC_INST_MTTMR			0x7c0003dc
 #define PPC_INST_NOP			0x60000000
 #define PPC_INST_POPCNTB		0x7c0000f4
 #define PPC_INST_POPCNTB_MASK		0xfc0007fe
···
 					       | __PPC_RA(r))
 #define TABORT(r)		stringify_in_c(.long PPC_INST_TABORT \
 					       | __PPC_RA(r))
+
+/* book3e thread control instructions */
+#define TMRN(x)			((((x) & 0x1f) << 16) | (((x) & 0x3e0) << 6))
+#define MTTMR(tmr, r)		stringify_in_c(.long PPC_INST_MTTMR | \
+					       TMRN(tmr) | ___PPC_RS(r))
+#define MFTMR(tmr, r)		stringify_in_c(.long PPC_INST_MFTMR | \
+					       TMRN(tmr) | ___PPC_RT(r))
 
 #endif /* _ASM_POWERPC_PPC_OPCODE_H */
+2
arch/powerpc/include/asm/pte-fsl-booke.h
···
 #define _PMD_PRESENT_MASK (PAGE_MASK)
 #define _PMD_BAD	(~PAGE_MASK)
 
+#define PTE_WIMGE_SHIFT (6)
+
 #endif /* __KERNEL__ */
 #endif /* _ASM_POWERPC_PTE_FSL_BOOKE_H */
+3 -2
arch/powerpc/include/asm/pte-hash64-64k.h
···
 	(((pte) & _PAGE_COMBO)? MMU_PAGE_4K: MMU_PAGE_64K)
 
 #define remap_4k_pfn(vma, addr, pfn, prot)				\
-	remap_pfn_range((vma), (addr), (pfn), PAGE_SIZE,		\
-			__pgprot(pgprot_val((prot)) | _PAGE_4K_PFN))
+	(WARN_ON(((pfn) >= (1UL << (64 - PTE_RPN_SHIFT)))) ? -EINVAL :	\
+		remap_pfn_range((vma), (addr), (pfn), PAGE_SIZE,	\
+			__pgprot(pgprot_val((prot)) | _PAGE_4K_PFN)))
 
 #endif /* __ASSEMBLY__ */
+1 -1
arch/powerpc/include/asm/reg.h
···
 #define   DSISR_PROTFAULT	0x08000000	/* protection fault */
 #define   DSISR_ISSTORE		0x02000000	/* access was a store */
 #define   DSISR_DABRMATCH	0x00400000	/* hit data breakpoint */
-#define   DSISR_NOSEGMENT	0x00200000	/* STAB/SLB miss */
+#define   DSISR_NOSEGMENT	0x00200000	/* SLB miss */
 #define   DSISR_KEYFAULT	0x00200000	/* Key fault */
 #define SPRN_TBRL	0x10C	/* Time Base Read Lower Register (user, R/O) */
 #define SPRN_TBRU	0x10D	/* Time Base Read Upper Register (user, R/O) */
+47 -10
arch/powerpc/include/asm/reg_booke.h
···
 #ifndef __ASM_POWERPC_REG_BOOKE_H__
 #define __ASM_POWERPC_REG_BOOKE_H__
 
+#include <asm/ppc-opcode.h>
+
 /* Machine State Register (MSR) Fields */
-#define MSR_GS		(1<<28)	/* Guest state */
-#define MSR_UCLE	(1<<26)	/* User-mode cache lock enable */
-#define MSR_SPE		(1<<25)	/* Enable SPE */
-#define MSR_DWE		(1<<10)	/* Debug Wait Enable */
-#define MSR_UBLE	(1<<10)	/* BTB lock enable (e500) */
-#define MSR_IS		MSR_IR	/* Instruction Space */
-#define MSR_DS		MSR_DR	/* Data Space */
-#define MSR_PMM		(1<<2)	/* Performance monitor mark bit */
-#define MSR_CM		(1<<31)	/* Computation Mode (0=32-bit, 1=64-bit) */
+#define MSR_GS_LG	28	/* Guest state */
+#define MSR_UCLE_LG	26	/* User-mode cache lock enable */
+#define MSR_SPE_LG	25	/* Enable SPE */
+#define MSR_DWE_LG	10	/* Debug Wait Enable */
+#define MSR_UBLE_LG	10	/* BTB lock enable (e500) */
+#define MSR_IS_LG	MSR_IR_LG /* Instruction Space */
+#define MSR_DS_LG	MSR_DR_LG /* Data Space */
+#define MSR_PMM_LG	2	/* Performance monitor mark bit */
+#define MSR_CM_LG	31	/* Computation Mode (0=32-bit, 1=64-bit) */
+
+#define MSR_GS		__MASK(MSR_GS_LG)
+#define MSR_UCLE	__MASK(MSR_UCLE_LG)
+#define MSR_SPE		__MASK(MSR_SPE_LG)
+#define MSR_DWE		__MASK(MSR_DWE_LG)
+#define MSR_UBLE	__MASK(MSR_UBLE_LG)
+#define MSR_IS		__MASK(MSR_IS_LG)
+#define MSR_DS		__MASK(MSR_DS_LG)
+#define MSR_PMM		__MASK(MSR_PMM_LG)
+#define MSR_CM		__MASK(MSR_CM_LG)
 
 #if defined(CONFIG_PPC_BOOK3E_64)
 #define MSR_64BIT	MSR_CM
···
 /* e500mc */
 #define MCSR_DCPERR_MC	0x20000000UL /* D-Cache Parity Error */
-#define MCSR_L2MMU_MHIT	0x04000000UL /* Hit on multiple TLB entries */
+#define MCSR_L2MMU_MHIT	0x08000000UL /* Hit on multiple TLB entries */
 #define MCSR_NMI	0x00100000UL /* Non-Maskable Interrupt */
 #define MCSR_MAV	0x00080000UL /* MCAR address valid */
 #define MCSR_MEA	0x00040000UL /* MCAR is effective address */
···
 /* Bit definitions for L1CSR2. */
 #define L1CSR2_DCWS	0x40000000	/* Data Cache write shadow */
 
+/* Bit definitions for BUCSR. */
+#define BUCSR_STAC_EN	0x01000000	/* Segment Target Address Cache */
+#define BUCSR_LS_EN	0x00400000	/* Link Stack */
+#define BUCSR_BBFI	0x00000200	/* Branch Buffer flash invalidate */
+#define BUCSR_BPEN	0x00000001	/* Branch prediction enable */
+#define BUCSR_INIT	(BUCSR_STAC_EN | BUCSR_LS_EN | BUCSR_BBFI | BUCSR_BPEN)
+
 /* Bit definitions for L2CSR0. */
 #define L2CSR0_L2E	0x80000000	/* L2 Cache Enable */
 #define L2CSR0_L2PE	0x40000000	/* L2 Cache Parity/ECC Enable */
···
 #define MMUBE1_VBE3		0x00000004
 #define MMUBE1_VBE4		0x00000002
 #define MMUBE1_VBE5		0x00000001
+
+#define TMRN_IMSR0	0x120	/* Initial MSR Register 0 (e6500) */
+#define TMRN_IMSR1	0x121	/* Initial MSR Register 1 (e6500) */
+#define TMRN_INIA0	0x140	/* Next Instruction Address Register 0 */
+#define TMRN_INIA1	0x141	/* Next Instruction Address Register 1 */
+#define SPRN_TENSR	0x1b5	/* Thread Enable Status Register */
+#define SPRN_TENS	0x1b6	/* Thread Enable Set Register */
+#define SPRN_TENC	0x1b7	/* Thread Enable Clear Register */
+
+#define TEN_THREAD(x)	(1 << (x))
+
+#ifndef __ASSEMBLY__
+#define mftmr(rn)	({unsigned long rval; \
+			asm volatile(MFTMR(rn, %0) : "=r" (rval)); rval;})
+#define mttmr(rn, v)	asm volatile(MTTMR(rn, %0) : \
+				     : "r" ((unsigned long)(v)) \
+				     : "memory")
+#endif /* !__ASSEMBLY__ */
 
 #endif /* __ASM_POWERPC_REG_BOOKE_H__ */
 #endif /* __KERNEL__ */
+2 -2
arch/powerpc/include/asm/systbl.h
···
 SYSCALL_SPU(setregid)
 #define compat_sys_sigsuspend sys_sigsuspend
 SYS32ONLY(sigsuspend)
-COMPAT_SYS(sigpending)
+SYSX(sys_ni_syscall,compat_sys_sigpending,sys_sigpending)
 SYSCALL_SPU(sethostname)
 COMPAT_SYS_SPU(setrlimit)
-COMPAT_SYS(old_getrlimit)
+SYSX(sys_ni_syscall,compat_sys_old_getrlimit,sys_old_getrlimit)
 COMPAT_SYS_SPU(getrusage)
 COMPAT_SYS_SPU(gettimeofday)
 COMPAT_SYS_SPU(settimeofday)
+45
arch/powerpc/include/asm/trace.h
···
 );
 #endif
 
+#ifdef CONFIG_PPC_POWERNV
+extern void opal_tracepoint_regfunc(void);
+extern void opal_tracepoint_unregfunc(void);
+
+TRACE_EVENT_FN(opal_entry,
+
+	TP_PROTO(unsigned long opcode, unsigned long *args),
+
+	TP_ARGS(opcode, args),
+
+	TP_STRUCT__entry(
+		__field(unsigned long, opcode)
+	),
+
+	TP_fast_assign(
+		__entry->opcode = opcode;
+	),
+
+	TP_printk("opcode=%lu", __entry->opcode),
+
+	opal_tracepoint_regfunc, opal_tracepoint_unregfunc
+);
+
+TRACE_EVENT_FN(opal_exit,
+
+	TP_PROTO(unsigned long opcode, unsigned long retval),
+
+	TP_ARGS(opcode, retval),
+
+	TP_STRUCT__entry(
+		__field(unsigned long, opcode)
+		__field(unsigned long, retval)
+	),
+
+	TP_fast_assign(
+		__entry->opcode = opcode;
+		__entry->retval = retval;
+	),
+
+	TP_printk("opcode=%lu retval=%lu", __entry->opcode, __entry->retval),
+
+	opal_tracepoint_regfunc, opal_tracepoint_unregfunc
+);
+#endif
+
 #endif /* _TRACE_POWERPC_H */
 
 #undef TRACE_INCLUDE_PATH
-2
arch/powerpc/kernel/asm-offsets.c
···
 #endif /* CONFIG_PPC_BOOK3E */
 
 #ifdef CONFIG_PPC_STD_MMU_64
-	DEFINE(PACASTABREAL, offsetof(struct paca_struct, stab_real));
-	DEFINE(PACASTABVIRT, offsetof(struct paca_struct, stab_addr));
 	DEFINE(PACASLBCACHE, offsetof(struct paca_struct, slb_cache));
 	DEFINE(PACASLBCACHEPTR, offsetof(struct paca_struct, slb_cache_ptr));
 	DEFINE(PACAVMALLOCSLLP, offsetof(struct paca_struct, vmalloc_sllp));
+2 -92
arch/powerpc/kernel/cputable.c
···
 
 static struct cpu_spec __initdata cpu_specs[] = {
 #ifdef CONFIG_PPC_BOOK3S_64
-	{	/* Power3 */
-		.pvr_mask		= 0xffff0000,
-		.pvr_value		= 0x00400000,
-		.cpu_name		= "POWER3 (630)",
-		.cpu_features		= CPU_FTRS_POWER3,
-		.cpu_user_features	= COMMON_USER_PPC64|PPC_FEATURE_PPC_LE,
-		.mmu_features		= MMU_FTR_HPTE_TABLE,
-		.icache_bsize		= 128,
-		.dcache_bsize		= 128,
-		.num_pmcs		= 8,
-		.pmc_type		= PPC_PMC_IBM,
-		.oprofile_cpu_type	= "ppc64/power3",
-		.oprofile_type		= PPC_OPROFILE_RS64,
-		.platform		= "power3",
-	},
-	{	/* Power3+ */
-		.pvr_mask		= 0xffff0000,
-		.pvr_value		= 0x00410000,
-		.cpu_name		= "POWER3 (630+)",
-		.cpu_features		= CPU_FTRS_POWER3,
-		.cpu_user_features	= COMMON_USER_PPC64|PPC_FEATURE_PPC_LE,
-		.mmu_features		= MMU_FTR_HPTE_TABLE,
-		.icache_bsize		= 128,
-		.dcache_bsize		= 128,
-		.num_pmcs		= 8,
-		.pmc_type		= PPC_PMC_IBM,
-		.oprofile_cpu_type	= "ppc64/power3",
-		.oprofile_type		= PPC_OPROFILE_RS64,
-		.platform		= "power3",
-	},
-	{	/* Northstar */
-		.pvr_mask		= 0xffff0000,
-		.pvr_value		= 0x00330000,
-		.cpu_name		= "RS64-II (northstar)",
-		.cpu_features		= CPU_FTRS_RS64,
-		.cpu_user_features	= COMMON_USER_PPC64,
-		.mmu_features		= MMU_FTR_HPTE_TABLE,
-		.icache_bsize		= 128,
-		.dcache_bsize		= 128,
-		.num_pmcs		= 8,
-		.pmc_type		= PPC_PMC_IBM,
-		.oprofile_cpu_type	= "ppc64/rs64",
-		.oprofile_type		= PPC_OPROFILE_RS64,
-		.platform		= "rs64",
-	},
-	{	/* Pulsar */
-		.pvr_mask		= 0xffff0000,
-		.pvr_value		= 0x00340000,
-		.cpu_name		= "RS64-III (pulsar)",
-		.cpu_features		= CPU_FTRS_RS64,
-		.cpu_user_features	= COMMON_USER_PPC64,
-		.mmu_features		= MMU_FTR_HPTE_TABLE,
-		.icache_bsize		= 128,
-		.dcache_bsize		= 128,
-		.num_pmcs		= 8,
-		.pmc_type		= PPC_PMC_IBM,
-		.oprofile_cpu_type	= "ppc64/rs64",
-		.oprofile_type		= PPC_OPROFILE_RS64,
-		.platform		= "rs64",
-	},
-	{	/* I-star */
-		.pvr_mask		= 0xffff0000,
-		.pvr_value		= 0x00360000,
-		.cpu_name		= "RS64-III (icestar)",
-		.cpu_features		= CPU_FTRS_RS64,
-		.cpu_user_features	= COMMON_USER_PPC64,
-		.mmu_features		= MMU_FTR_HPTE_TABLE,
-		.icache_bsize		= 128,
-		.dcache_bsize		= 128,
-		.num_pmcs		= 8,
-		.pmc_type		= PPC_PMC_IBM,
-		.oprofile_cpu_type	= "ppc64/rs64",
-		.oprofile_type		= PPC_OPROFILE_RS64,
-		.platform		= "rs64",
-	},
-	{	/* S-star */
-		.pvr_mask		= 0xffff0000,
-		.pvr_value		= 0x00370000,
-		.cpu_name		= "RS64-IV (sstar)",
-		.cpu_features		= CPU_FTRS_RS64,
-		.cpu_user_features	= COMMON_USER_PPC64,
-		.mmu_features		= MMU_FTR_HPTE_TABLE,
-		.icache_bsize		= 128,
-		.dcache_bsize		= 128,
-		.num_pmcs		= 8,
-		.pmc_type		= PPC_PMC_IBM,
-		.oprofile_cpu_type	= "ppc64/rs64",
-		.oprofile_type		= PPC_OPROFILE_RS64,
-		.platform		= "rs64",
-	},
 	{	/* Power4 */
 		.pvr_mask		= 0xffff0000,
 		.pvr_value		= 0x00350000,
···
 #endif	/* CONFIG_PPC_BOOK3S_64 */
 
 #ifdef CONFIG_PPC32
-#if CLASSIC_PPC
+#ifdef CONFIG_PPC_BOOK3S_32
 	{	/* 601 */
 		.pvr_mask		= 0xffff0000,
 		.pvr_value		= 0x00010000,
···
 		.machine_check		= machine_check_generic,
 		.platform		= "ppc603",
 	},
-#endif /* CLASSIC_PPC */
+#endif /* CONFIG_PPC_BOOK3S_32 */
 #ifdef CONFIG_8xx
 	{	/* 8xx */
 		.pvr_mask		= 0xffff0000,
+343 -23
arch/powerpc/kernel/eeh.c
···
 #include <linux/init.h>
 #include <linux/list.h>
 #include <linux/pci.h>
+#include <linux/iommu.h>
 #include <linux/proc_fs.h>
 #include <linux/rbtree.h>
 #include <linux/reboot.h>
···
 #include <asm/eeh.h>
 #include <asm/eeh_event.h>
 #include <asm/io.h>
+#include <asm/iommu.h>
 #include <asm/machdep.h>
 #include <asm/ppc-pci.h>
 #include <asm/rtas.h>
···
 /* Lock to avoid races due to multiple reports of an error */
 DEFINE_RAW_SPINLOCK(confirm_error_lock);
 
+/* Lock to protect passed flags */
+static DEFINE_MUTEX(eeh_dev_mutex);
+
 /* Buffer for reporting pci register dumps. Its here in BSS, and
  * not dynamically alloced, so that it ends up in RMO where RTAS
  * can access it.
···
 static int __init eeh_setup(char *str)
 {
 	if (!strcmp(str, "off"))
-		eeh_subsystem_flags |= EEH_FORCE_DISABLED;
+		eeh_add_flag(EEH_FORCE_DISABLED);
 
 	return 1;
 }
···
  * This routine captures assorted PCI configuration space data,
  * and puts them into a buffer for RTAS error logging.
  */
-static size_t eeh_gather_pci_data(struct eeh_dev *edev, char * buf, size_t len)
+static size_t eeh_gather_pci_data(struct eeh_dev *edev, char *buf, size_t len)
 {
 	struct device_node *dn = eeh_dev_to_of_node(edev);
 	u32 cfg;
 	int cap, i;
-	int n = 0;
+	int n = 0, l = 0;
+	char buffer[128];
 
 	n += scnprintf(buf+n, len-n, "%s\n", dn->full_name);
 	pr_warn("EEH: of node=%s\n", dn->full_name);
···
 		for (i=0; i<=8; i++) {
 			eeh_ops->read_config(dn, cap+4*i, 4, &cfg);
 			n += scnprintf(buf+n, len-n, "%02x:%x\n", 4*i, cfg);
-			pr_warn("EEH: PCI-E %02x: %08x\n", i, cfg);
+
+			if ((i % 4) == 0) {
+				if (i != 0)
+					pr_warn("%s\n", buffer);
+
+				l = scnprintf(buffer, sizeof(buffer),
+					      "EEH: PCI-E %02x: %08x ",
+					      4*i, cfg);
+			} else {
+				l += scnprintf(buffer+l, sizeof(buffer)-l,
+					       "%08x ", cfg);
+			}
+
 		}
+
+		pr_warn("%s\n", buffer);
 	}
 
 	/* If AER capable, dump it */
···
 		n += scnprintf(buf+n, len-n, "pci-e AER:\n");
 		pr_warn("EEH: PCI-E AER capability register set follows:\n");
 
-		for (i=0; i<14; i++) {
+		for (i=0; i<=13; i++) {
 			eeh_ops->read_config(dn, cap+4*i, 4, &cfg);
 			n += scnprintf(buf+n, len-n, "%02x:%x\n", 4*i, cfg);
-			pr_warn("EEH: PCI-E AER %02x: %08x\n", i, cfg);
+
+			if ((i % 4) == 0) {
+				if (i != 0)
+					pr_warn("%s\n", buffer);
+
+				l = scnprintf(buffer, sizeof(buffer),
+					      "EEH: PCI-E AER %02x: %08x ",
+					      4*i, cfg);
+			} else {
+				l += scnprintf(buffer+l, sizeof(buffer)-l,
+					       "%08x ", cfg);
+			}
 		}
+
+		pr_warn("%s\n", buffer);
 	}
 
 	return n;
···
 	 * 0xFF's is always returned from PCI config space.
 	 */
 	if (!(pe->type & EEH_PE_PHB)) {
-		if (eeh_probe_mode_devtree())
+		if (eeh_has_flag(EEH_ENABLE_IO_FOR_LOG))
 			eeh_pci_enable(pe, EEH_OPT_THAW_MMIO);
 		eeh_ops->configure_bridge(pe);
 		eeh_pe_restore_bars(pe);
···
 	unsigned long flags;
 	int ret;
 
-	if (!eeh_probe_mode_dev())
+	if (!eeh_has_flag(EEH_PROBE_MODE_DEV))
 		return -EPERM;
 
 	/* Find the PHB PE */
 	phb_pe = eeh_phb_pe_get(pe->phb);
 	if (!phb_pe) {
-		pr_warning("%s Can't find PE for PHB#%d\n",
-			   __func__, pe->phb->global_number);
+		pr_warn("%s Can't find PE for PHB#%d\n",
+			__func__, pe->phb->global_number);
 		return -EEXIST;
 	}
 
···
 	ret = eeh_phb_check_failure(pe);
 	if (ret > 0)
 		return ret;
+
+	/*
+	 * If the PE isn't owned by us, we shouldn't check the
+	 * state. Instead, let the owner handle it if the PE has
+	 * been frozen.
+	 */
+	if (eeh_pe_passed(pe))
+		return 0;
 
 	/* If we already have a pending isolation event for this
 	 * slot, we know it's bad already, we don't need to check.
···
 int __init eeh_ops_register(struct eeh_ops *ops)
 {
 	if (!ops->name) {
-		pr_warning("%s: Invalid EEH ops name for %p\n",
+		pr_warn("%s: Invalid EEH ops name for %p\n",
 			__func__, ops);
 		return -EINVAL;
 	}
 
 	if (eeh_ops && eeh_ops != ops) {
-		pr_warning("%s: EEH ops of platform %s already existing (%s)\n",
+		pr_warn("%s: EEH ops of platform %s already existing (%s)\n",
 			__func__, eeh_ops->name, ops->name);
 		return -EEXIST;
 	}
···
 int __exit eeh_ops_unregister(const char *name)
 {
 	if (!name || !strlen(name)) {
-		pr_warning("%s: Invalid EEH ops name\n",
+		pr_warn("%s: Invalid EEH ops name\n",
 			__func__);
 		return -EINVAL;
 	}
···
 static int eeh_reboot_notifier(struct notifier_block *nb,
 			       unsigned long action, void *unused)
 {
-	eeh_set_enable(false);
+	eeh_clear_flag(EEH_ENABLED);
 	return NOTIFY_DONE;
 }
···
 
 	/* call platform initialization function */
 	if (!eeh_ops) {
-		pr_warning("%s: Platform EEH operation not found\n",
+		pr_warn("%s: Platform EEH operation not found\n",
 			__func__);
 		return -EEXIST;
 	} else if ((ret = eeh_ops->init())) {
-		pr_warning("%s: Failed to call platform init function (%d)\n",
+		pr_warn("%s: Failed to call platform init function (%d)\n",
 			__func__, ret);
 		return ret;
 	}
···
 		return ret;
 
 	/* Enable EEH for all adapters */
-	if (eeh_probe_mode_devtree()) {
+	if (eeh_has_flag(EEH_PROBE_MODE_DEVTREE)) {
 		list_for_each_entry_safe(hose, tmp,
 					 &hose_list, list_node) {
 			phb = hose->dn;
 			traverse_pci_devices(phb, eeh_ops->of_probe, NULL);
 		}
-	} else if (eeh_probe_mode_dev()) {
+	} else if (eeh_has_flag(EEH_PROBE_MODE_DEV)) {
 		list_for_each_entry_safe(hose, tmp,
 					 &hose_list, list_node)
864 pci_walk_bus(hose->bus, eeh_ops->dev_probe, NULL); ··· 923 882 if (eeh_enabled()) 924 883 pr_info("EEH: PCI Enhanced I/O Error Handling Enabled\n"); 925 884 else 926 - pr_warning("EEH: No capable adapters found\n"); 885 + pr_warn("EEH: No capable adapters found\n"); 927 886 928 887 return ret; 929 888 } ··· 951 910 * would delay the probe until late stage because 952 911 * the PCI device isn't available this moment. 953 912 */ 954 - if (!eeh_probe_mode_devtree()) 913 + if (!eeh_has_flag(EEH_PROBE_MODE_DEVTREE)) 955 914 return; 956 915 957 916 if (!of_node_to_eeh_dev(dn)) ··· 1037 996 * We have to do the EEH probe here because the PCI device 1038 997 * hasn't been created yet in the early stage. 1039 998 */ 1040 - if (eeh_probe_mode_dev()) 999 + if (eeh_has_flag(EEH_PROBE_MODE_DEV)) 1041 1000 eeh_ops->dev_probe(dev, NULL); 1042 1001 1043 1002 eeh_addr_cache_insert_dev(dev); ··· 1141 1100 edev->mode &= ~EEH_DEV_SYSFS; 1142 1101 } 1143 1102 1103 + /** 1104 + * eeh_dev_open - Increase count of pass through devices for PE 1105 + * @pdev: PCI device 1106 + * 1107 + * Increase count of passed through devices for the indicated 1108 + * PE. In the result, the EEH errors detected on the PE won't be 1109 + * reported. The PE owner will be responsible for detection 1110 + * and recovery. 1111 + */ 1112 + int eeh_dev_open(struct pci_dev *pdev) 1113 + { 1114 + struct eeh_dev *edev; 1115 + 1116 + mutex_lock(&eeh_dev_mutex); 1117 + 1118 + /* No PCI device ? */ 1119 + if (!pdev) 1120 + goto out; 1121 + 1122 + /* No EEH device or PE ? 
*/ 1123 + edev = pci_dev_to_eeh_dev(pdev); 1124 + if (!edev || !edev->pe) 1125 + goto out; 1126 + 1127 + /* Increase PE's pass through count */ 1128 + atomic_inc(&edev->pe->pass_dev_cnt); 1129 + mutex_unlock(&eeh_dev_mutex); 1130 + 1131 + return 0; 1132 + out: 1133 + mutex_unlock(&eeh_dev_mutex); 1134 + return -ENODEV; 1135 + } 1136 + EXPORT_SYMBOL_GPL(eeh_dev_open); 1137 + 1138 + /** 1139 + * eeh_dev_release - Decrease count of pass through devices for PE 1140 + * @pdev: PCI device 1141 + * 1142 + * Decrease count of pass through devices for the indicated PE. If 1143 + * there is no passed through device in PE, the EEH errors detected 1144 + * on the PE will be reported and handled as usual. 1145 + */ 1146 + void eeh_dev_release(struct pci_dev *pdev) 1147 + { 1148 + struct eeh_dev *edev; 1149 + 1150 + mutex_lock(&eeh_dev_mutex); 1151 + 1152 + /* No PCI device ? */ 1153 + if (!pdev) 1154 + goto out; 1155 + 1156 + /* No EEH device ? */ 1157 + edev = pci_dev_to_eeh_dev(pdev); 1158 + if (!edev || !edev->pe || !eeh_pe_passed(edev->pe)) 1159 + goto out; 1160 + 1161 + /* Decrease PE's pass through count */ 1162 + atomic_dec(&edev->pe->pass_dev_cnt); 1163 + WARN_ON(atomic_read(&edev->pe->pass_dev_cnt) < 0); 1164 + out: 1165 + mutex_unlock(&eeh_dev_mutex); 1166 + } 1167 + EXPORT_SYMBOL(eeh_dev_release); 1168 + 1169 + #ifdef CONFIG_IOMMU_API 1170 + 1171 + static int dev_has_iommu_table(struct device *dev, void *data) 1172 + { 1173 + struct pci_dev *pdev = to_pci_dev(dev); 1174 + struct pci_dev **ppdev = data; 1175 + struct iommu_table *tbl; 1176 + 1177 + if (!dev) 1178 + return 0; 1179 + 1180 + tbl = get_iommu_table_base(dev); 1181 + if (tbl && tbl->it_group) { 1182 + *ppdev = pdev; 1183 + return 1; 1184 + } 1185 + 1186 + return 0; 1187 + } 1188 + 1189 + /** 1190 + * eeh_iommu_group_to_pe - Convert IOMMU group to EEH PE 1191 + * @group: IOMMU group 1192 + * 1193 + * The routine is called to convert IOMMU group to EEH PE. 
1194 + */ 1195 + struct eeh_pe *eeh_iommu_group_to_pe(struct iommu_group *group) 1196 + { 1197 + struct pci_dev *pdev = NULL; 1198 + struct eeh_dev *edev; 1199 + int ret; 1200 + 1201 + /* No IOMMU group ? */ 1202 + if (!group) 1203 + return NULL; 1204 + 1205 + ret = iommu_group_for_each_dev(group, &pdev, dev_has_iommu_table); 1206 + if (!ret || !pdev) 1207 + return NULL; 1208 + 1209 + /* No EEH device or PE ? */ 1210 + edev = pci_dev_to_eeh_dev(pdev); 1211 + if (!edev || !edev->pe) 1212 + return NULL; 1213 + 1214 + return edev->pe; 1215 + } 1216 + EXPORT_SYMBOL_GPL(eeh_iommu_group_to_pe); 1217 + 1218 + #endif /* CONFIG_IOMMU_API */ 1219 + 1220 + /** 1221 + * eeh_pe_set_option - Set options for the indicated PE 1222 + * @pe: EEH PE 1223 + * @option: requested option 1224 + * 1225 + * The routine is called to enable or disable EEH functionality 1226 + * on the indicated PE, to enable IO or DMA for the frozen PE. 1227 + */ 1228 + int eeh_pe_set_option(struct eeh_pe *pe, int option) 1229 + { 1230 + int ret = 0; 1231 + 1232 + /* Invalid PE ? */ 1233 + if (!pe) 1234 + return -ENODEV; 1235 + 1236 + /* 1237 + * EEH functionality could possibly be disabled, just 1238 + * return error for the case. And the EEH functinality 1239 + * isn't expected to be disabled on one specific PE. 
1240 + */ 1241 + switch (option) { 1242 + case EEH_OPT_ENABLE: 1243 + if (eeh_enabled()) 1244 + break; 1245 + ret = -EIO; 1246 + break; 1247 + case EEH_OPT_DISABLE: 1248 + break; 1249 + case EEH_OPT_THAW_MMIO: 1250 + case EEH_OPT_THAW_DMA: 1251 + if (!eeh_ops || !eeh_ops->set_option) { 1252 + ret = -ENOENT; 1253 + break; 1254 + } 1255 + 1256 + ret = eeh_ops->set_option(pe, option); 1257 + break; 1258 + default: 1259 + pr_debug("%s: Option %d out of range (%d, %d)\n", 1260 + __func__, option, EEH_OPT_DISABLE, EEH_OPT_THAW_DMA); 1261 + ret = -EINVAL; 1262 + } 1263 + 1264 + return ret; 1265 + } 1266 + EXPORT_SYMBOL_GPL(eeh_pe_set_option); 1267 + 1268 + /** 1269 + * eeh_pe_get_state - Retrieve PE's state 1270 + * @pe: EEH PE 1271 + * 1272 + * Retrieve the PE's state, which includes 3 aspects: enabled 1273 + * DMA, enabled IO and asserted reset. 1274 + */ 1275 + int eeh_pe_get_state(struct eeh_pe *pe) 1276 + { 1277 + int result, ret = 0; 1278 + bool rst_active, dma_en, mmio_en; 1279 + 1280 + /* Existing PE ? 
*/ 1281 + if (!pe) 1282 + return -ENODEV; 1283 + 1284 + if (!eeh_ops || !eeh_ops->get_state) 1285 + return -ENOENT; 1286 + 1287 + result = eeh_ops->get_state(pe, NULL); 1288 + rst_active = !!(result & EEH_STATE_RESET_ACTIVE); 1289 + dma_en = !!(result & EEH_STATE_DMA_ENABLED); 1290 + mmio_en = !!(result & EEH_STATE_MMIO_ENABLED); 1291 + 1292 + if (rst_active) 1293 + ret = EEH_PE_STATE_RESET; 1294 + else if (dma_en && mmio_en) 1295 + ret = EEH_PE_STATE_NORMAL; 1296 + else if (!dma_en && !mmio_en) 1297 + ret = EEH_PE_STATE_STOPPED_IO_DMA; 1298 + else if (!dma_en && mmio_en) 1299 + ret = EEH_PE_STATE_STOPPED_DMA; 1300 + else 1301 + ret = EEH_PE_STATE_UNAVAIL; 1302 + 1303 + return ret; 1304 + } 1305 + EXPORT_SYMBOL_GPL(eeh_pe_get_state); 1306 + 1307 + /** 1308 + * eeh_pe_reset - Issue PE reset according to specified type 1309 + * @pe: EEH PE 1310 + * @option: reset type 1311 + * 1312 + * The routine is called to reset the specified PE with the 1313 + * indicated type, either fundamental reset or hot reset. 1314 + * PE reset is the most important part for error recovery. 1315 + */ 1316 + int eeh_pe_reset(struct eeh_pe *pe, int option) 1317 + { 1318 + int ret = 0; 1319 + 1320 + /* Invalid PE ? */ 1321 + if (!pe) 1322 + return -ENODEV; 1323 + 1324 + if (!eeh_ops || !eeh_ops->set_option || !eeh_ops->reset) 1325 + return -ENOENT; 1326 + 1327 + switch (option) { 1328 + case EEH_RESET_DEACTIVATE: 1329 + ret = eeh_ops->reset(pe, option); 1330 + if (ret) 1331 + break; 1332 + 1333 + /* 1334 + * The PE is still in frozen state and we need to clear 1335 + * that. It's good to clear frozen state after deassert 1336 + * to avoid messy IO access during reset, which might 1337 + * cause recursive frozen PE. 
1338 + */ 1339 + ret = eeh_ops->set_option(pe, EEH_OPT_THAW_MMIO); 1340 + if (!ret) 1341 + ret = eeh_ops->set_option(pe, EEH_OPT_THAW_DMA); 1342 + if (!ret) 1343 + eeh_pe_state_clear(pe, EEH_PE_ISOLATED); 1344 + break; 1345 + case EEH_RESET_HOT: 1346 + case EEH_RESET_FUNDAMENTAL: 1347 + ret = eeh_ops->reset(pe, option); 1348 + break; 1349 + default: 1350 + pr_debug("%s: Unsupported option %d\n", 1351 + __func__, option); 1352 + ret = -EINVAL; 1353 + } 1354 + 1355 + return ret; 1356 + } 1357 + EXPORT_SYMBOL_GPL(eeh_pe_reset); 1358 + 1359 + /** 1360 + * eeh_pe_configure - Configure PCI bridges after PE reset 1361 + * @pe: EEH PE 1362 + * 1363 + * The routine is called to restore the PCI config space for 1364 + * those PCI devices, especially PCI bridges affected by PE 1365 + * reset issued previously. 1366 + */ 1367 + int eeh_pe_configure(struct eeh_pe *pe) 1368 + { 1369 + int ret = 0; 1370 + 1371 + /* Invalid PE ? */ 1372 + if (!pe) 1373 + return -ENODEV; 1374 + 1375 + /* Restore config space for the affected devices */ 1376 + eeh_pe_restore_bars(pe); 1377 + 1378 + return ret; 1379 + } 1380 + EXPORT_SYMBOL_GPL(eeh_pe_configure); 1381 + 1144 1382 static int proc_eeh_show(struct seq_file *m, void *v) 1145 1383 { 1146 1384 if (!eeh_enabled()) { ··· 1463 1143 static int eeh_enable_dbgfs_set(void *data, u64 val) 1464 1144 { 1465 1145 if (val) 1466 - eeh_subsystem_flags &= ~EEH_FORCE_DISABLED; 1146 + eeh_clear_flag(EEH_FORCE_DISABLED); 1467 1147 else 1468 - eeh_subsystem_flags |= EEH_FORCE_DISABLED; 1148 + eeh_add_flag(EEH_FORCE_DISABLED); 1469 1149 1470 1150 /* Notify the backend */ 1471 1151 if (eeh_ops->post_init)
arch/powerpc/kernel/eeh_cache.c (+5, -4)
··· 143 143 } else { 144 144 if (dev != piar->pcidev || 145 145 alo != piar->addr_lo || ahi != piar->addr_hi) { 146 - pr_warning("PIAR: overlapping address range\n"); 146 + pr_warn("PIAR: overlapping address range\n"); 147 147 } 148 148 return piar; 149 149 } ··· 177 177 178 178 dn = pci_device_to_OF_node(dev); 179 179 if (!dn) { 180 - pr_warning("PCI: no pci dn found for dev=%s\n", pci_name(dev)); 180 + pr_warn("PCI: no pci dn found for dev=%s\n", 181 + pci_name(dev)); 181 182 return; 182 183 } 183 184 184 185 edev = of_node_to_eeh_dev(dn); 185 186 if (!edev) { 186 - pr_warning("PCI: no EEH dev found for dn=%s\n", 187 + pr_warn("PCI: no EEH dev found for dn=%s\n", 187 188 dn->full_name); 188 189 return; 189 190 } 190 191 191 192 /* Skip any devices for which EEH is not enabled. */ 192 - if (!eeh_probe_mode_dev() && !edev->pe) { 193 + if (!edev->pe) { 193 194 #ifdef DEBUG 194 195 pr_info("PCI: skip building address cache for=%s - %s\n", 195 196 pci_name(dev), dn->full_name);
arch/powerpc/kernel/eeh_dev.c (+2, -1)
··· 57 57 /* Allocate EEH device */ 58 58 edev = kzalloc(sizeof(*edev), GFP_KERNEL); 59 59 if (!edev) { 60 - pr_warning("%s: out of memory\n", __func__); 60 + pr_warn("%s: out of memory\n", 61 + __func__); 61 62 return NULL; 62 63 } 63 64
arch/powerpc/kernel/eeh_driver.c (+8, -8)
··· 599 599 pe->freeze_count++; 600 600 if (pe->freeze_count > EEH_MAX_ALLOWED_FREEZES) 601 601 goto excess_failures; 602 - pr_warning("EEH: This PCI device has failed %d times in the last hour\n", 602 + pr_warn("EEH: This PCI device has failed %d times in the last hour\n", 603 603 pe->freeze_count); 604 604 605 605 /* Walk the various device drivers attached to this slot through ··· 616 616 */ 617 617 rc = eeh_ops->wait_state(pe, MAX_WAIT_FOR_RECOVERY*1000); 618 618 if (rc < 0 || rc == EEH_STATE_NOT_SUPPORT) { 619 - pr_warning("EEH: Permanent failure\n"); 619 + pr_warn("EEH: Permanent failure\n"); 620 620 goto hard_fail; 621 621 } 622 622 ··· 635 635 pr_info("EEH: Reset with hotplug activity\n"); 636 636 rc = eeh_reset_device(pe, frozen_bus); 637 637 if (rc) { 638 - pr_warning("%s: Unable to reset, err=%d\n", 639 - __func__, rc); 638 + pr_warn("%s: Unable to reset, err=%d\n", 639 + __func__, rc); 640 640 goto hard_fail; 641 641 } 642 642 } ··· 678 678 679 679 /* If any device has a hard failure, then shut off everything. */ 680 680 if (result == PCI_ERS_RESULT_DISCONNECT) { 681 - pr_warning("EEH: Device driver gave up\n"); 681 + pr_warn("EEH: Device driver gave up\n"); 682 682 goto hard_fail; 683 683 } 684 684 ··· 687 687 pr_info("EEH: Reset without hotplug activity\n"); 688 688 rc = eeh_reset_device(pe, NULL); 689 689 if (rc) { 690 - pr_warning("%s: Cannot reset, err=%d\n", 691 - __func__, rc); 690 + pr_warn("%s: Cannot reset, err=%d\n", 691 + __func__, rc); 692 692 goto hard_fail; 693 693 } 694 694 ··· 701 701 /* All devices should claim they have recovered by now. */ 702 702 if ((result != PCI_ERS_RESULT_RECOVERED) && 703 703 (result != PCI_ERS_RESULT_NONE)) { 704 - pr_warning("EEH: Not recovered\n"); 704 + pr_warn("EEH: Not recovered\n"); 705 705 goto hard_fail; 706 706 } 707 707
arch/powerpc/kernel/eeh_pe.c (+40, -46)
··· 32 32 #include <asm/pci-bridge.h> 33 33 #include <asm/ppc-pci.h> 34 34 35 + static int eeh_pe_aux_size = 0; 35 36 static LIST_HEAD(eeh_phb_pe); 37 + 38 + /** 39 + * eeh_set_pe_aux_size - Set PE auxillary data size 40 + * @size: PE auxillary data size 41 + * 42 + * Set PE auxillary data size 43 + */ 44 + void eeh_set_pe_aux_size(int size) 45 + { 46 + if (size < 0) 47 + return; 48 + 49 + eeh_pe_aux_size = size; 50 + } 36 51 37 52 /** 38 53 * eeh_pe_alloc - Allocate PE ··· 59 44 static struct eeh_pe *eeh_pe_alloc(struct pci_controller *phb, int type) 60 45 { 61 46 struct eeh_pe *pe; 47 + size_t alloc_size; 48 + 49 + alloc_size = sizeof(struct eeh_pe); 50 + if (eeh_pe_aux_size) { 51 + alloc_size = ALIGN(alloc_size, cache_line_size()); 52 + alloc_size += eeh_pe_aux_size; 53 + } 62 54 63 55 /* Allocate PHB PE */ 64 - pe = kzalloc(sizeof(struct eeh_pe), GFP_KERNEL); 56 + pe = kzalloc(alloc_size, GFP_KERNEL); 65 57 if (!pe) return NULL; 66 58 67 59 /* Initialize PHB PE */ ··· 78 56 INIT_LIST_HEAD(&pe->child); 79 57 INIT_LIST_HEAD(&pe->edevs); 80 58 59 + pe->data = (void *)pe + ALIGN(sizeof(struct eeh_pe), 60 + cache_line_size()); 81 61 return pe; 82 62 } 83 63 ··· 203 179 void *ret; 204 180 205 181 if (!root) { 206 - pr_warning("%s: Invalid PE %p\n", __func__, root); 182 + pr_warn("%s: Invalid PE %p\n", 183 + __func__, root); 207 184 return NULL; 208 185 } 209 186 ··· 374 349 } 375 350 pe->addr = edev->pe_config_addr; 376 351 pe->config_addr = edev->config_addr; 377 - 378 - /* 379 - * While doing PE reset, we probably hot-reset the 380 - * upstream bridge. However, the PCI devices including 381 - * the associated EEH devices might be removed when EEH 382 - * core is doing recovery. So that won't safe to retrieve 383 - * the bridge through downstream EEH device. We have to 384 - * trace the parent PCI bus, then the upstream bridge. 
385 - */ 386 - if (eeh_probe_mode_dev()) 387 - pe->bus = eeh_dev_to_pci_dev(edev)->bus; 388 352 389 353 /* 390 354 * Put the new EEH PE into hierarchy tree. If the parent ··· 816 802 */ 817 803 const char *eeh_pe_loc_get(struct eeh_pe *pe) 818 804 { 819 - struct pci_controller *hose; 820 805 struct pci_bus *bus = eeh_pe_bus_get(pe); 821 - struct pci_dev *pdev; 822 - struct device_node *dn; 823 - const char *loc; 806 + struct device_node *dn = pci_bus_to_OF_node(bus); 807 + const char *loc = NULL; 824 808 825 - if (!bus) 826 - return "N/A"; 809 + if (!dn) 810 + goto out; 827 811 828 812 /* PHB PE or root PE ? */ 829 813 if (pci_is_root_bus(bus)) { 830 - hose = pci_bus_to_host(bus); 831 - loc = of_get_property(hose->dn, 832 - "ibm,loc-code", NULL); 814 + loc = of_get_property(dn, "ibm,loc-code", NULL); 815 + if (!loc) 816 + loc = of_get_property(dn, "ibm,io-base-loc-code", NULL); 833 817 if (loc) 834 - return loc; 835 - loc = of_get_property(hose->dn, 836 - "ibm,io-base-loc-code", NULL); 837 - if (loc) 838 - return loc; 818 + goto out; 839 819 840 - pdev = pci_get_slot(bus, 0x0); 841 - } else { 842 - pdev = bus->self; 843 - } 844 - 845 - if (!pdev) { 846 - loc = "N/A"; 847 - goto out; 848 - } 849 - 850 - dn = pci_device_to_OF_node(pdev); 851 - if (!dn) { 852 - loc = "N/A"; 853 - goto out; 820 + /* Check the root port */ 821 + dn = dn->child; 822 + if (!dn) 823 + goto out; 854 824 } 855 825 856 826 loc = of_get_property(dn, "ibm,loc-code", NULL); 857 827 if (!loc) 858 828 loc = of_get_property(dn, "ibm,slot-location-code", NULL); 859 - if (!loc) 860 - loc = "N/A"; 861 829 862 830 out: 863 - if (pci_is_root_bus(bus) && pdev) 864 - pci_dev_put(pdev); 865 - return loc; 831 + return loc ? loc : "N/A"; 866 832 } 867 833 868 834 /**
arch/powerpc/kernel/entry_64.S (+7, -6)
··· 482 482 ld r8,KSP(r4) /* new stack pointer */ 483 483 #ifdef CONFIG_PPC_BOOK3S 484 484 BEGIN_FTR_SECTION 485 - BEGIN_FTR_SECTION_NESTED(95) 486 485 clrrdi r6,r8,28 /* get its ESID */ 487 486 clrrdi r9,r1,28 /* get current sp ESID */ 488 - FTR_SECTION_ELSE_NESTED(95) 487 + FTR_SECTION_ELSE 489 488 clrrdi r6,r8,40 /* get its 1T ESID */ 490 489 clrrdi r9,r1,40 /* get current sp 1T ESID */ 491 - ALT_MMU_FTR_SECTION_END_NESTED_IFCLR(MMU_FTR_1T_SEGMENT, 95) 492 - FTR_SECTION_ELSE 493 - b 2f 494 - ALT_MMU_FTR_SECTION_END_IFSET(MMU_FTR_SLB) 490 + ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_1T_SEGMENT) 495 491 clrldi. r0,r6,2 /* is new ESID c00000000? */ 496 492 cmpd cr1,r6,r9 /* or is new ESID the same as current ESID? */ 497 493 cror eq,4*cr1+eq,eq ··· 914 918 bne 1f 915 919 addi r3,r1,STACK_FRAME_OVERHEAD; 916 920 bl do_IRQ 921 + b ret_from_except 922 + 1: cmpwi cr0,r3,0xe60 923 + bne 1f 924 + addi r3,r1,STACK_FRAME_OVERHEAD; 925 + bl handle_hmi_exception 917 926 b ret_from_except 918 927 1: cmpwi cr0,r3,0x900 919 928 bne 1f
arch/powerpc/kernel/exceptions-64s.S (+133, -228)
··· 188 188 data_access_pSeries: 189 189 HMT_MEDIUM_PPR_DISCARD 190 190 SET_SCRATCH0(r13) 191 - BEGIN_FTR_SECTION 192 - b data_access_check_stab 193 - data_access_not_stab: 194 - END_MMU_FTR_SECTION_IFCLR(MMU_FTR_SLB) 195 191 EXCEPTION_PROLOG_PSERIES(PACA_EXGEN, data_access_common, EXC_STD, 196 192 KVMTEST, 0x300) 197 193 ··· 335 339 hv_exception_trampoline: 336 340 SET_SCRATCH0(r13) 337 341 EXCEPTION_PROLOG_0(PACA_EXGEN) 338 - b hmi_exception_hv 342 + b hmi_exception_early 339 343 340 344 . = 0xe80 341 345 hv_doorbell_trampoline: ··· 510 514 EXCEPTION_PROLOG_1(PACA_EXMC, KVMTEST, 0x200) 511 515 EXCEPTION_PROLOG_PSERIES_1(machine_check_common, EXC_STD) 512 516 KVM_HANDLER_SKIP(PACA_EXMC, EXC_STD, 0x200) 513 - 514 - /* moved from 0x300 */ 515 - data_access_check_stab: 516 - GET_PACA(r13) 517 - std r9,PACA_EXSLB+EX_R9(r13) 518 - std r10,PACA_EXSLB+EX_R10(r13) 519 - mfspr r10,SPRN_DAR 520 - mfspr r9,SPRN_DSISR 521 - srdi r10,r10,60 522 - rlwimi r10,r9,16,0x20 523 - #ifdef CONFIG_KVM_BOOK3S_PR_POSSIBLE 524 - lbz r9,HSTATE_IN_GUEST(r13) 525 - rlwimi r10,r9,8,0x300 526 - #endif 527 - mfcr r9 528 - cmpwi r10,0x2c 529 - beq do_stab_bolted_pSeries 530 - mtcrf 0x80,r9 531 - ld r9,PACA_EXSLB+EX_R9(r13) 532 - ld r10,PACA_EXSLB+EX_R10(r13) 533 - b data_access_not_stab 534 - do_stab_bolted_pSeries: 535 - std r11,PACA_EXSLB+EX_R11(r13) 536 - std r12,PACA_EXSLB+EX_R12(r13) 537 - GET_SCRATCH0(r10) 538 - std r10,PACA_EXSLB+EX_R13(r13) 539 - EXCEPTION_PROLOG_PSERIES_1(do_stab_bolted, EXC_STD) 540 - 541 517 KVM_HANDLER_SKIP(PACA_EXGEN, EXC_STD, 0x300) 542 518 KVM_HANDLER_SKIP(PACA_EXSLB, EXC_STD, 0x380) 543 519 KVM_HANDLER_PR(PACA_EXGEN, EXC_STD, 0x400) ··· 589 621 KVM_HANDLER(PACA_EXGEN, EXC_HV, 0xe22) 590 622 STD_EXCEPTION_HV_OOL(0xe42, emulation_assist) 591 623 KVM_HANDLER(PACA_EXGEN, EXC_HV, 0xe42) 592 - STD_EXCEPTION_HV_OOL(0xe62, hmi_exception) /* need to flush cache ? 
*/ 624 + MASKABLE_EXCEPTION_HV_OOL(0xe62, hmi_exception) 593 625 KVM_HANDLER(PACA_EXGEN, EXC_HV, 0xe62) 626 + 627 + .globl hmi_exception_early 628 + hmi_exception_early: 629 + EXCEPTION_PROLOG_1(PACA_EXGEN, NOTEST, 0xe60) 630 + mr r10,r1 /* Save r1 */ 631 + ld r1,PACAEMERGSP(r13) /* Use emergency stack */ 632 + subi r1,r1,INT_FRAME_SIZE /* alloc stack frame */ 633 + std r9,_CCR(r1) /* save CR in stackframe */ 634 + mfspr r11,SPRN_HSRR0 /* Save HSRR0 */ 635 + std r11,_NIP(r1) /* save HSRR0 in stackframe */ 636 + mfspr r12,SPRN_HSRR1 /* Save SRR1 */ 637 + std r12,_MSR(r1) /* save SRR1 in stackframe */ 638 + std r10,0(r1) /* make stack chain pointer */ 639 + std r0,GPR0(r1) /* save r0 in stackframe */ 640 + std r10,GPR1(r1) /* save r1 in stackframe */ 641 + EXCEPTION_PROLOG_COMMON_2(PACA_EXGEN) 642 + EXCEPTION_PROLOG_COMMON_3(0xe60) 643 + addi r3,r1,STACK_FRAME_OVERHEAD 644 + bl hmi_exception_realmode 645 + /* Windup the stack. */ 646 + /* Clear MSR_RI before setting SRR0 and SRR1. */ 647 + li r0,MSR_RI 648 + mfmsr r9 /* get MSR value */ 649 + andc r9,r9,r0 650 + mtmsrd r9,1 /* Clear MSR_RI */ 651 + /* Move original HSRR0 and HSRR1 into the respective regs */ 652 + ld r9,_MSR(r1) 653 + mtspr SPRN_HSRR1,r9 654 + ld r3,_NIP(r1) 655 + mtspr SPRN_HSRR0,r3 656 + ld r9,_CTR(r1) 657 + mtctr r9 658 + ld r9,_XER(r1) 659 + mtxer r9 660 + ld r9,_LINK(r1) 661 + mtlr r9 662 + REST_GPR(0, r1) 663 + REST_8GPRS(2, r1) 664 + REST_GPR(10, r1) 665 + ld r11,_CCR(r1) 666 + mtcr r11 667 + REST_GPR(11, r1) 668 + REST_2GPRS(12, r1) 669 + /* restore original r1. */ 670 + ld r1,GPR1(r1) 671 + 672 + /* 673 + * Go to virtual mode and pull the HMI event information from 674 + * firmware. 
675 + */ 676 + .globl hmi_exception_after_realmode 677 + hmi_exception_after_realmode: 678 + SET_SCRATCH0(r13) 679 + EXCEPTION_PROLOG_0(PACA_EXGEN) 680 + b hmi_exception_hv 681 + 594 682 MASKABLE_EXCEPTION_HV_OOL(0xe82, h_doorbell) 595 683 KVM_HANDLER(PACA_EXGEN, EXC_HV, 0xe82) 596 684 ··· 667 643 * - If it was a decrementer interrupt, we bump the dec to max and and return. 668 644 * - If it was a doorbell we return immediately since doorbells are edge 669 645 * triggered and won't automatically refire. 646 + * - If it was a HMI we return immediately since we handled it in realmode 647 + * and it won't refire. 670 648 * - else we hard disable and return. 671 649 * This is called with r10 containing the value to OR to the paca field. 672 650 */ ··· 685 659 mtspr SPRN_DEC,r10; \ 686 660 b 2f; \ 687 661 1: cmpwi r10,PACA_IRQ_DBELL; \ 662 + beq 2f; \ 663 + cmpwi r10,PACA_IRQ_HMI; \ 688 664 beq 2f; \ 689 665 mfspr r10,SPRN_##_H##SRR1; \ 690 666 rldicl r10,r10,48,1; /* clear MSR_EE */ \ ··· 827 799 STD_EXCEPTION_COMMON(0xd00, single_step, single_step_exception) 828 800 STD_EXCEPTION_COMMON(0xe00, trap_0e, unknown_exception) 829 801 STD_EXCEPTION_COMMON(0xe40, emulation_assist, emulation_assist_interrupt) 830 - STD_EXCEPTION_COMMON(0xe60, hmi_exception, unknown_exception) 802 + STD_EXCEPTION_COMMON_ASYNC(0xe60, hmi_exception, handle_hmi_exception) 831 803 #ifdef CONFIG_PPC_DOORBELL 832 804 STD_EXCEPTION_COMMON_ASYNC(0xe80, h_doorbell, doorbell_exception) 833 805 #else ··· 1013 985 b __ppc64_runlatch_on 1014 986 1015 987 /* 1016 - * Here we have detected that the kernel stack pointer is bad. 1017 - * R9 contains the saved CR, r13 points to the paca, 1018 - * r10 contains the (bad) kernel stack pointer, 1019 - * r11 and r12 contain the saved SRR0 and SRR1. 1020 - * We switch to using an emergency stack, save the registers there, 1021 - * and call kernel_bad_stack(), which panics. 
1022 - */ 1023 - bad_stack: 1024 - ld r1,PACAEMERGSP(r13) 1025 - subi r1,r1,64+INT_FRAME_SIZE 1026 - std r9,_CCR(r1) 1027 - std r10,GPR1(r1) 1028 - std r11,_NIP(r1) 1029 - std r12,_MSR(r1) 1030 - mfspr r11,SPRN_DAR 1031 - mfspr r12,SPRN_DSISR 1032 - std r11,_DAR(r1) 1033 - std r12,_DSISR(r1) 1034 - mflr r10 1035 - mfctr r11 1036 - mfxer r12 1037 - std r10,_LINK(r1) 1038 - std r11,_CTR(r1) 1039 - std r12,_XER(r1) 1040 - SAVE_GPR(0,r1) 1041 - SAVE_GPR(2,r1) 1042 - ld r10,EX_R3(r3) 1043 - std r10,GPR3(r1) 1044 - SAVE_GPR(4,r1) 1045 - SAVE_4GPRS(5,r1) 1046 - ld r9,EX_R9(r3) 1047 - ld r10,EX_R10(r3) 1048 - SAVE_2GPRS(9,r1) 1049 - ld r9,EX_R11(r3) 1050 - ld r10,EX_R12(r3) 1051 - ld r11,EX_R13(r3) 1052 - std r9,GPR11(r1) 1053 - std r10,GPR12(r1) 1054 - std r11,GPR13(r1) 1055 - BEGIN_FTR_SECTION 1056 - ld r10,EX_CFAR(r3) 1057 - std r10,ORIG_GPR3(r1) 1058 - END_FTR_SECTION_IFSET(CPU_FTR_CFAR) 1059 - SAVE_8GPRS(14,r1) 1060 - SAVE_10GPRS(22,r1) 1061 - lhz r12,PACA_TRAP_SAVE(r13) 1062 - std r12,_TRAP(r1) 1063 - addi r11,r1,INT_FRAME_SIZE 1064 - std r11,0(r1) 1065 - li r12,0 1066 - std r12,0(r11) 1067 - ld r2,PACATOC(r13) 1068 - ld r11,exception_marker@toc(r2) 1069 - std r12,RESULT(r1) 1070 - std r11,STACK_FRAME_OVERHEAD-16(r1) 1071 - 1: addi r3,r1,STACK_FRAME_OVERHEAD 1072 - bl kernel_bad_stack 1073 - b 1b 1074 - 1075 - /* 1076 988 * Here r13 points to the paca, r9 contains the saved CR, 1077 989 * SRR0 and SRR1 are saved in r11 and r12, 1078 990 * r9 - r13 are saved in paca->exgen. 
··· 1025 1057 mfspr r10,SPRN_DSISR 1026 1058 stw r10,PACA_EXGEN+EX_DSISR(r13) 1027 1059 EXCEPTION_PROLOG_COMMON(0x300, PACA_EXGEN) 1028 - DISABLE_INTS 1060 + RECONCILE_IRQ_STATE(r10, r11) 1029 1061 ld r12,_MSR(r1) 1030 1062 ld r3,PACA_EXGEN+EX_DAR(r13) 1031 1063 lwz r4,PACA_EXGEN+EX_DSISR(r13) ··· 1041 1073 stw r10,PACA_EXGEN+EX_DSISR(r13) 1042 1074 EXCEPTION_PROLOG_COMMON(0xe00, PACA_EXGEN) 1043 1075 bl save_nvgprs 1044 - DISABLE_INTS 1076 + RECONCILE_IRQ_STATE(r10, r11) 1045 1077 addi r3,r1,STACK_FRAME_OVERHEAD 1046 1078 bl unknown_exception 1047 1079 b ret_from_except ··· 1050 1082 .globl instruction_access_common 1051 1083 instruction_access_common: 1052 1084 EXCEPTION_PROLOG_COMMON(0x400, PACA_EXGEN) 1053 - DISABLE_INTS 1085 + RECONCILE_IRQ_STATE(r10, r11) 1054 1086 ld r12,_MSR(r1) 1055 1087 ld r3,_NIP(r1) 1056 1088 andis. r4,r12,0x5820 ··· 1114 1146 1115 1147 unrecov_user_slb: 1116 1148 EXCEPTION_PROLOG_COMMON(0x4200, PACA_EXGEN) 1117 - DISABLE_INTS 1149 + RECONCILE_IRQ_STATE(r10, r11) 1118 1150 bl save_nvgprs 1119 1151 1: addi r3,r1,STACK_FRAME_OVERHEAD 1120 1152 bl unrecoverable_exception ··· 1137 1169 stw r10,PACA_EXGEN+EX_DSISR(r13) 1138 1170 EXCEPTION_PROLOG_COMMON(0x200, PACA_EXMC) 1139 1171 FINISH_NAP 1140 - DISABLE_INTS 1172 + RECONCILE_IRQ_STATE(r10, r11) 1141 1173 ld r3,PACA_EXGEN+EX_DAR(r13) 1142 1174 lwz r4,PACA_EXGEN+EX_DSISR(r13) 1143 1175 std r3,_DAR(r1) ··· 1160 1192 std r3,_DAR(r1) 1161 1193 std r4,_DSISR(r1) 1162 1194 bl save_nvgprs 1163 - DISABLE_INTS 1195 + RECONCILE_IRQ_STATE(r10, r11) 1164 1196 addi r3,r1,STACK_FRAME_OVERHEAD 1165 1197 bl alignment_exception 1166 1198 b ret_from_except ··· 1170 1202 program_check_common: 1171 1203 EXCEPTION_PROLOG_COMMON(0x700, PACA_EXGEN) 1172 1204 bl save_nvgprs 1173 - DISABLE_INTS 1205 + RECONCILE_IRQ_STATE(r10, r11) 1174 1206 addi r3,r1,STACK_FRAME_OVERHEAD 1175 1207 bl program_check_exception 1176 1208 b ret_from_except ··· 1181 1213 EXCEPTION_PROLOG_COMMON(0x800, PACA_EXGEN) 1182 1214 bne 1f /* if 
from user, just load it up */ 1183 1215 bl save_nvgprs 1184 - DISABLE_INTS 1216 + RECONCILE_IRQ_STATE(r10, r11) 1185 1217 addi r3,r1,STACK_FRAME_OVERHEAD 1186 1218 bl kernel_fp_unavailable_exception 1187 1219 BUG_OPCODE ··· 1200 1232 #ifdef CONFIG_PPC_TRANSACTIONAL_MEM 1201 1233 2: /* User process was in a transaction */ 1202 1234 bl save_nvgprs 1203 - DISABLE_INTS 1235 + RECONCILE_IRQ_STATE(r10, r11) 1204 1236 addi r3,r1,STACK_FRAME_OVERHEAD 1205 1237 bl fp_unavailable_tm 1206 1238 b ret_from_except ··· 1226 1258 #ifdef CONFIG_PPC_TRANSACTIONAL_MEM 1227 1259 2: /* User process was in a transaction */ 1228 1260 bl save_nvgprs 1229 - DISABLE_INTS 1261 + RECONCILE_IRQ_STATE(r10, r11) 1230 1262 addi r3,r1,STACK_FRAME_OVERHEAD 1231 1263 bl altivec_unavailable_tm 1232 1264 b ret_from_except ··· 1235 1267 END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC) 1236 1268 #endif 1237 1269 bl save_nvgprs 1238 - DISABLE_INTS 1270 + RECONCILE_IRQ_STATE(r10, r11) 1239 1271 addi r3,r1,STACK_FRAME_OVERHEAD 1240 1272 bl altivec_unavailable_exception 1241 1273 b ret_from_except ··· 1260 1292 #ifdef CONFIG_PPC_TRANSACTIONAL_MEM 1261 1293 2: /* User process was in a transaction */ 1262 1294 bl save_nvgprs 1263 - DISABLE_INTS 1295 + RECONCILE_IRQ_STATE(r10, r11) 1264 1296 addi r3,r1,STACK_FRAME_OVERHEAD 1265 1297 bl vsx_unavailable_tm 1266 1298 b ret_from_except ··· 1269 1301 END_FTR_SECTION_IFSET(CPU_FTR_VSX) 1270 1302 #endif 1271 1303 bl save_nvgprs 1272 - DISABLE_INTS 1304 + RECONCILE_IRQ_STATE(r10, r11) 1273 1305 addi r3,r1,STACK_FRAME_OVERHEAD 1274 1306 bl vsx_unavailable_exception 1275 1307 b ret_from_except ··· 1305 1337 */ 1306 1338 . 
= 0x8000 1307 1339 #endif /* defined(CONFIG_PPC_PSERIES) || defined(CONFIG_PPC_POWERNV) */ 1308 - 1309 - /* Space for CPU0's segment table */ 1310 - .balign 4096 1311 - .globl initial_stab 1312 - initial_stab: 1313 - .space 4096 1314 1340 1315 1341 #ifdef CONFIG_PPC_POWERNV 1316 1342 _GLOBAL(opal_mc_secondary_handler) ··· 1528 1566 1529 1567 unrecov_slb: 1530 1568 EXCEPTION_PROLOG_COMMON(0x4100, PACA_EXSLB) 1531 - DISABLE_INTS 1569 + RECONCILE_IRQ_STATE(r10, r11) 1532 1570 bl save_nvgprs 1533 1571 1: addi r3,r1,STACK_FRAME_OVERHEAD 1534 1572 bl unrecoverable_exception ··· 1556 1594 bne- handle_page_fault /* if not, try to insert a HPTE */ 1557 1595 andis. r0,r4,DSISR_DABRMATCH@h 1558 1596 bne- handle_dabr_fault 1559 - 1560 - BEGIN_FTR_SECTION 1561 - andis. r0,r4,0x0020 /* Is it a segment table fault? */ 1562 - bne- do_ste_alloc /* If so handle it */ 1563 - END_MMU_FTR_SECTION_IFCLR(MMU_FTR_SLB) 1564 - 1565 1597 CURRENT_THREAD_INFO(r11, r1) 1566 1598 lwz r0,TI_PREEMPT(r11) /* If we're in an "NMI" */ 1567 1599 andis. r0,r0,NMI_MASK@h /* (i.e. an irq when soft-disabled) */ ··· 1637 1681 bl bad_page_fault 1638 1682 b ret_from_except 1639 1683 1640 - /* here we have a segment miss */ 1641 - do_ste_alloc: 1642 - bl ste_allocate /* try to insert stab entry */ 1643 - cmpdi r3,0 1644 - bne- handle_page_fault 1645 - b fast_exception_return 1646 - 1647 1684 /* 1648 - * r13 points to the PACA, r9 contains the saved CR, 1685 + * Here we have detected that the kernel stack pointer is bad. 1686 + * R9 contains the saved CR, r13 points to the paca, 1687 + * r10 contains the (bad) kernel stack pointer, 1649 1688 * r11 and r12 contain the saved SRR0 and SRR1. 1650 - * r9 - r13 are saved in paca->exslb. 1651 - * We assume we aren't going to take any exceptions during this procedure. 1652 - * We assume (DAR >> 60) == 0xc. 1689 + * We switch to using an emergency stack, save the registers there, 1690 + * and call kernel_bad_stack(), which panics. 
1653 1691 */ 1654 - .align 7 1655 - do_stab_bolted: 1656 - stw r9,PACA_EXSLB+EX_CCR(r13) /* save CR in exc. frame */ 1657 - std r11,PACA_EXSLB+EX_SRR0(r13) /* save SRR0 in exc. frame */ 1658 - mfspr r11,SPRN_DAR /* ea */ 1659 - 1660 - /* 1661 - * check for bad kernel/user address 1662 - * (ea & ~REGION_MASK) >= PGTABLE_RANGE 1663 - */ 1664 - rldicr. r9,r11,4,(63 - 46 - 4) 1665 - li r9,0 /* VSID = 0 for bad address */ 1666 - bne- 0f 1667 - 1668 - /* 1669 - * Calculate VSID: 1670 - * This is the kernel vsid, we take the top for context from 1671 - * the range. context = (MAX_USER_CONTEXT) + ((ea >> 60) - 0xc) + 1 1672 - * Here we know that (ea >> 60) == 0xc 1673 - */ 1674 - lis r9,(MAX_USER_CONTEXT + 1)@ha 1675 - addi r9,r9,(MAX_USER_CONTEXT + 1)@l 1676 - 1677 - srdi r10,r11,SID_SHIFT 1678 - rldimi r10,r9,ESID_BITS,0 /* proto vsid */ 1679 - ASM_VSID_SCRAMBLE(r10, r9, 256M) 1680 - rldic r9,r10,12,16 /* r9 = vsid << 12 */ 1681 - 1682 - 0: 1683 - /* Hash to the primary group */ 1684 - ld r10,PACASTABVIRT(r13) 1685 - srdi r11,r11,SID_SHIFT 1686 - rldimi r10,r11,7,52 /* r10 = first ste of the group */ 1687 - 1688 - /* Search the primary group for a free entry */ 1689 - 1: ld r11,0(r10) /* Test valid bit of the current ste */ 1690 - andi. r11,r11,0x80 1691 - beq 2f 1692 - addi r10,r10,16 1693 - andi. r11,r10,0x70 1694 - bne 1b 1695 - 1696 - /* Stick for only searching the primary group for now. 
*/ 1697 - /* At least for now, we use a very simple random castout scheme */ 1698 - /* Use the TB as a random number ; OR in 1 to avoid entry 0 */ 1699 - mftb r11 1700 - rldic r11,r11,4,57 /* r11 = (r11 << 4) & 0x70 */ 1701 - ori r11,r11,0x10 1702 - 1703 - /* r10 currently points to an ste one past the group of interest */ 1704 - /* make it point to the randomly selected entry */ 1705 - subi r10,r10,128 1706 - or r10,r10,r11 /* r10 is the entry to invalidate */ 1707 - 1708 - isync /* mark the entry invalid */ 1709 - ld r11,0(r10) 1710 - rldicl r11,r11,56,1 /* clear the valid bit */ 1711 - rotldi r11,r11,8 1712 - std r11,0(r10) 1713 - sync 1714 - 1715 - clrrdi r11,r11,28 /* Get the esid part of the ste */ 1716 - slbie r11 1717 - 1718 - 2: std r9,8(r10) /* Store the vsid part of the ste */ 1719 - eieio 1720 - 1721 - mfspr r11,SPRN_DAR /* Get the new esid */ 1722 - clrrdi r11,r11,28 /* Permits a full 32b of ESID */ 1723 - ori r11,r11,0x90 /* Turn on valid and kp */ 1724 - std r11,0(r10) /* Put new entry back into the stab */ 1725 - 1726 - sync 1727 - 1728 - /* All done -- return from exception. */ 1729 - lwz r9,PACA_EXSLB+EX_CCR(r13) /* get saved CR */ 1730 - ld r11,PACA_EXSLB+EX_SRR0(r13) /* get saved SRR0 */ 1731 - 1732 - andi. r10,r12,MSR_RI 1733 - beq- unrecov_slb 1734 - 1735 - mtcrf 0x80,r9 /* restore CR */ 1736 - 1737 - mfmsr r10 1738 - clrrdi r10,r10,2 1739 - mtmsrd r10,1 1740 - 1741 - mtspr SPRN_SRR0,r11 1742 - mtspr SPRN_SRR1,r12 1743 - ld r9,PACA_EXSLB+EX_R9(r13) 1744 - ld r10,PACA_EXSLB+EX_R10(r13) 1745 - ld r11,PACA_EXSLB+EX_R11(r13) 1746 - ld r12,PACA_EXSLB+EX_R12(r13) 1747 - ld r13,PACA_EXSLB+EX_R13(r13) 1748 - rfid 1749 - b . 
/* prevent speculative execution */ 1692 + bad_stack: 1693 + ld r1,PACAEMERGSP(r13) 1694 + subi r1,r1,64+INT_FRAME_SIZE 1695 + std r9,_CCR(r1) 1696 + std r10,GPR1(r1) 1697 + std r11,_NIP(r1) 1698 + std r12,_MSR(r1) 1699 + mfspr r11,SPRN_DAR 1700 + mfspr r12,SPRN_DSISR 1701 + std r11,_DAR(r1) 1702 + std r12,_DSISR(r1) 1703 + mflr r10 1704 + mfctr r11 1705 + mfxer r12 1706 + std r10,_LINK(r1) 1707 + std r11,_CTR(r1) 1708 + std r12,_XER(r1) 1709 + SAVE_GPR(0,r1) 1710 + SAVE_GPR(2,r1) 1711 + ld r10,EX_R3(r3) 1712 + std r10,GPR3(r1) 1713 + SAVE_GPR(4,r1) 1714 + SAVE_4GPRS(5,r1) 1715 + ld r9,EX_R9(r3) 1716 + ld r10,EX_R10(r3) 1717 + SAVE_2GPRS(9,r1) 1718 + ld r9,EX_R11(r3) 1719 + ld r10,EX_R12(r3) 1720 + ld r11,EX_R13(r3) 1721 + std r9,GPR11(r1) 1722 + std r10,GPR12(r1) 1723 + std r11,GPR13(r1) 1724 + BEGIN_FTR_SECTION 1725 + ld r10,EX_CFAR(r3) 1726 + std r10,ORIG_GPR3(r1) 1727 + END_FTR_SECTION_IFSET(CPU_FTR_CFAR) 1728 + SAVE_8GPRS(14,r1) 1729 + SAVE_10GPRS(22,r1) 1730 + lhz r12,PACA_TRAP_SAVE(r13) 1731 + std r12,_TRAP(r1) 1732 + addi r11,r1,INT_FRAME_SIZE 1733 + std r11,0(r1) 1734 + li r12,0 1735 + std r12,0(r11) 1736 + ld r2,PACATOC(r13) 1737 + ld r11,exception_marker@toc(r2) 1738 + std r12,RESULT(r1) 1739 + std r11,STACK_FRAME_OVERHEAD-16(r1) 1740 + 1: addi r3,r1,STACK_FRAME_OVERHEAD 1741 + bl kernel_bad_stack 1742 + b 1b
+27 -3
arch/powerpc/kernel/head_64.S
··· 180 180 #include "exceptions-64s.S" 181 181 #endif 182 182 183 + #ifdef CONFIG_PPC_BOOK3E 184 + _GLOBAL(fsl_secondary_thread_init) 185 + /* Enable branch prediction */ 186 + lis r3,BUCSR_INIT@h 187 + ori r3,r3,BUCSR_INIT@l 188 + mtspr SPRN_BUCSR,r3 189 + isync 190 + 191 + /* 192 + * Fix PIR to match the linear numbering in the device tree. 193 + * 194 + * On e6500, the reset value of PIR uses the low three bits for 195 + * the thread within a core, and the upper bits for the core 196 + * number. There are two threads per core, so shift everything 197 + * but the low bit right by two bits so that the cpu numbering is 198 + * continuous. 199 + */ 200 + mfspr r3, SPRN_PIR 201 + rlwimi r3, r3, 30, 2, 30 202 + mtspr SPRN_PIR, r3 203 + #endif 204 + 183 205 _GLOBAL(generic_secondary_thread_init) 184 206 mr r24,r3 185 207 ··· 640 618 addi r14,r14,THREAD_SIZE-STACK_FRAME_OVERHEAD 641 619 std r14,PACAKSAVE(r13) 642 620 643 - /* Do early setup for that CPU (stab, slb, hash table pointer) */ 621 + /* Do early setup for that CPU (SLB and hash table pointer) */ 644 622 bl early_setup_secondary 645 623 646 624 /* ··· 793 771 li r0,0 794 772 stdu r0,-STACK_FRAME_OVERHEAD(r1) 795 773 796 - /* Do very early kernel initializations, including initial hash table, 797 - * stab and slb setup before we turn on relocation. */ 774 + /* 775 + * Do very early kernel initializations, including initial hash table 776 + * and SLB setup before we turn on relocation. 777 + */ 798 778 799 779 /* Restore parameters passed from prom_init/kexec */ 800 780 mr r3,r31
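The PIR renumbering described in the comment above (`rlwimi r3, r3, 30, 2, 30`) can be sketched in plain C. This model is an illustration, not kernel code; it ignores the top two PIR bits, which are zero in practice on e6500:

```c
#include <assert.h>
#include <stdint.h>

/* Model of the e6500 PIR fixup: the thread number lives in the low bit,
 * the core number starts three bits up; shifting everything but the low
 * bit right by two makes the numbering contiguous (core*2 + thread). */
static uint32_t fixup_pir(uint32_t pir)
{
    return ((pir >> 2) & ~1u) | (pir & 1u);
}
```

So core 1, thread 1 (reset PIR 9) becomes logical CPU 3, matching the linear numbering in the device tree.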
+63 -6
arch/powerpc/kernel/idle_power7.S
··· 135 135 b power7_powersave_common 136 136 /* No return */ 137 137 138 + /* 139 + * Make opal call in realmode. This is a generic function to be called 140 + * from realmode from reset vector. It handles endianess. 141 + * 142 + * r13 - paca pointer 143 + * r1 - stack pointer 144 + * r3 - opal token 145 + */ 146 + opal_call_realmode: 147 + mflr r12 148 + std r12,_LINK(r1) 149 + ld r2,PACATOC(r13) 150 + /* Set opal return address */ 151 + LOAD_REG_ADDR(r0,return_from_opal_call) 152 + mtlr r0 153 + /* Handle endian-ness */ 154 + li r0,MSR_LE 155 + mfmsr r12 156 + andc r12,r12,r0 157 + mtspr SPRN_HSRR1,r12 158 + mr r0,r3 /* Move opal token to r0 */ 159 + LOAD_REG_ADDR(r11,opal) 160 + ld r12,8(r11) 161 + ld r2,0(r11) 162 + mtspr SPRN_HSRR0,r12 163 + hrfid 164 + 165 + return_from_opal_call: 166 + FIXUP_ENDIAN 167 + ld r0,_LINK(r1) 168 + mtlr r0 169 + blr 170 + 171 + #define CHECK_HMI_INTERRUPT \ 172 + mfspr r0,SPRN_SRR1; \ 173 + BEGIN_FTR_SECTION_NESTED(66); \ 174 + rlwinm r0,r0,45-31,0xf; /* extract wake reason field (P8) */ \ 175 + FTR_SECTION_ELSE_NESTED(66); \ 176 + rlwinm r0,r0,45-31,0xe; /* P7 wake reason field is 3 bits */ \ 177 + ALT_FTR_SECTION_END_NESTED_IFSET(CPU_FTR_ARCH_207S, 66); \ 178 + cmpwi r0,0xa; /* Hypervisor maintenance ? 
*/ \ 179 + bne 20f; \ 180 + /* Invoke opal call to handle hmi */ \ 181 + ld r2,PACATOC(r13); \ 182 + ld r1,PACAR1(r13); \ 183 + std r3,ORIG_GPR3(r1); /* Save original r3 */ \ 184 + li r3,OPAL_HANDLE_HMI; /* Pass opal token argument*/ \ 185 + bl opal_call_realmode; \ 186 + ld r3,ORIG_GPR3(r1); /* Restore original r3 */ \ 187 + 20: nop; 188 + 189 + 138 190 _GLOBAL(power7_wakeup_tb_loss) 139 191 ld r2,PACATOC(r13); 140 192 ld r1,PACAR1(r13) 141 193 194 + BEGIN_FTR_SECTION 195 + CHECK_HMI_INTERRUPT 196 + END_FTR_SECTION_IFSET(CPU_FTR_HVMODE) 142 197 /* Time base re-sync */ 143 - li r0,OPAL_RESYNC_TIMEBASE 144 - LOAD_REG_ADDR(r11,opal); 145 - ld r12,8(r11); 146 - ld r2,0(r11); 147 - mtctr r12 148 - bctrl 198 + li r3,OPAL_RESYNC_TIMEBASE 199 + bl opal_call_realmode; 149 200 150 201 /* TODO: Check r3 for failure */ 151 202 ··· 214 163 215 164 _GLOBAL(power7_wakeup_loss) 216 165 ld r1,PACAR1(r13) 166 + BEGIN_FTR_SECTION 167 + CHECK_HMI_INTERRUPT 168 + END_FTR_SECTION_IFSET(CPU_FTR_HVMODE) 217 169 REST_NVGPRS(r1) 218 170 REST_GPR(2, r1) 219 171 ld r3,_CCR(r1) ··· 232 178 lbz r0,PACA_NAPSTATELOST(r13) 233 179 cmpwi r0,0 234 180 bne power7_wakeup_loss 181 + BEGIN_FTR_SECTION 182 + CHECK_HMI_INTERRUPT 183 + END_FTR_SECTION_IFSET(CPU_FTR_HVMODE) 235 184 ld r1,PACAR1(r13) 236 185 ld r4,_MSR(r1) 237 186 ld r5,_NIP(r1)
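The `rlwinm` in `CHECK_HMI_INTERRUPT` above extracts the SRR1 wake-reason field and compares it with 0xa. A C sketch of the POWER8 branch (field assumed at bits 18..21 counting from the LSB, per the `45-31` rotate and 0xf mask):

```c
#include <assert.h>
#include <stdint.h>

/* Rough model of the POWER8 case: rlwinm r0,r0,45-31,0xf computes
 * (srr1 >> 18) & 0xf on the low word; a value of 0xa means the core
 * woke from nap for a Hypervisor Maintenance Interrupt. */
static int wake_reason_is_hmi(uint64_t srr1)
{
    return ((srr1 >> 18) & 0xf) == 0xa;
}
```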
+2 -2
arch/powerpc/kernel/iommu.c
··· 1037 1037 1038 1038 /* if (unlikely(ret)) 1039 1039 pr_err("iommu_tce: %s failed on hwaddr=%lx ioba=%lx kva=%lx ret=%d\n", 1040 - __func__, hwaddr, entry << IOMMU_PAGE_SHIFT(tbl), 1040 + __func__, hwaddr, entry << tbl->it_page_shift, 1041 1041 hwaddr, ret); */ 1042 1042 1043 1043 return ret; ··· 1056 1056 direction != DMA_TO_DEVICE, &page); 1057 1057 if (unlikely(ret != 1)) { 1058 1058 /* pr_err("iommu_tce: get_user_pages_fast failed tce=%lx ioba=%lx ret=%d\n", 1059 - tce, entry << IOMMU_PAGE_SHIFT(tbl), ret); */ 1059 + tce, entry << tbl->it_page_shift, ret); */ 1060 1060 return -EFAULT; 1061 1061 } 1062 1062 hwaddr = (unsigned long) page_address(page) + offset;
+14
arch/powerpc/kernel/irq.c
··· 189 189 } 190 190 #endif /* CONFIG_PPC_BOOK3E */ 191 191 192 + /* Check if a Hypervisor Maintenance Interrupt happened */ 193 + local_paca->irq_happened &= ~PACA_IRQ_HMI; 194 + if (happened & PACA_IRQ_HMI) 195 + return 0xe60; 196 + 192 197 /* There should be nothing left ! */ 193 198 BUG_ON(local_paca->irq_happened != 0); 194 199 ··· 382 377 seq_printf(p, "%10u ", per_cpu(irq_stat, j).mce_exceptions); 383 378 seq_printf(p, " Machine check exceptions\n"); 384 379 380 + if (cpu_has_feature(CPU_FTR_HVMODE)) { 381 + seq_printf(p, "%*s: ", prec, "HMI"); 382 + for_each_online_cpu(j) 383 + seq_printf(p, "%10u ", 384 + per_cpu(irq_stat, j).hmi_exceptions); 385 + seq_printf(p, " Hypervisor Maintenance Interrupts\n"); 386 + } 387 + 385 388 #ifdef CONFIG_PPC_DOORBELL 386 389 if (cpu_has_feature(CPU_FTR_DBELL)) { 387 390 seq_printf(p, "%*s: ", prec, "DBL"); ··· 413 400 sum += per_cpu(irq_stat, cpu).mce_exceptions; 414 401 sum += per_cpu(irq_stat, cpu).spurious_irqs; 415 402 sum += per_cpu(irq_stat, cpu).timer_irqs_others; 403 + sum += per_cpu(irq_stat, cpu).hmi_exceptions; 416 404 #ifdef CONFIG_PPC_DOORBELL 417 405 sum += per_cpu(irq_stat, cpu).doorbell_irqs; 418 406 #endif
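The replay hunk above tests a snapshot of `irq_happened` while clearing the live flag, so a pending HMI is delivered exactly once. A user-space sketch of just that hunk (the flag's bit value here is an assumption, not the kernel's):

```c
#include <assert.h>

#define PACA_IRQ_HMI 0x20   /* illustrative bit value, an assumption */

static unsigned int irq_happened;

/* Model of the __check_irq_replay() addition: consume the pending-HMI
 * bit and return the 0xe60 vector if it was set when we sampled. */
static unsigned int check_irq_replay(void)
{
    unsigned int happened = irq_happened;   /* snapshot */

    irq_happened &= ~PACA_IRQ_HMI;          /* clear the live flag */
    if (happened & PACA_IRQ_HMI)
        return 0xe60;                        /* HMI vector */
    return 0;                                /* nothing to replay */
}
```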
+19 -15
arch/powerpc/kernel/process.c
··· 1095 1095 return 0; 1096 1096 } 1097 1097 1098 + static void setup_ksp_vsid(struct task_struct *p, unsigned long sp) 1099 + { 1100 + #ifdef CONFIG_PPC_STD_MMU_64 1101 + unsigned long sp_vsid; 1102 + unsigned long llp = mmu_psize_defs[mmu_linear_psize].sllp; 1103 + 1104 + if (mmu_has_feature(MMU_FTR_1T_SEGMENT)) 1105 + sp_vsid = get_kernel_vsid(sp, MMU_SEGSIZE_1T) 1106 + << SLB_VSID_SHIFT_1T; 1107 + else 1108 + sp_vsid = get_kernel_vsid(sp, MMU_SEGSIZE_256M) 1109 + << SLB_VSID_SHIFT; 1110 + sp_vsid |= SLB_VSID_KERNEL | llp; 1111 + p->thread.ksp_vsid = sp_vsid; 1112 + #endif 1113 + } 1114 + 1098 1115 /* 1099 1116 * Copy a thread.. 1100 1117 */ ··· 1191 1174 p->thread.vr_save_area = NULL; 1192 1175 #endif 1193 1176 1194 - #ifdef CONFIG_PPC_STD_MMU_64 1195 - if (mmu_has_feature(MMU_FTR_SLB)) { 1196 - unsigned long sp_vsid; 1197 - unsigned long llp = mmu_psize_defs[mmu_linear_psize].sllp; 1177 + setup_ksp_vsid(p, sp); 1198 1178 1199 - if (mmu_has_feature(MMU_FTR_1T_SEGMENT)) 1200 - sp_vsid = get_kernel_vsid(sp, MMU_SEGSIZE_1T) 1201 - << SLB_VSID_SHIFT_1T; 1202 - else 1203 - sp_vsid = get_kernel_vsid(sp, MMU_SEGSIZE_256M) 1204 - << SLB_VSID_SHIFT; 1205 - sp_vsid |= SLB_VSID_KERNEL | llp; 1206 - p->thread.ksp_vsid = sp_vsid; 1207 - } 1208 - #endif /* CONFIG_PPC_STD_MMU_64 */ 1209 1179 #ifdef CONFIG_PPC64 1210 1180 if (cpu_has_feature(CPU_FTR_DSCR)) { 1211 1181 p->thread.dscr_inherit = current->thread.dscr_inherit; ··· 1581 1577 struct pt_regs *regs = (struct pt_regs *) 1582 1578 (sp + STACK_FRAME_OVERHEAD); 1583 1579 lr = regs->link; 1584 - printk("--- Exception: %lx at %pS\n LR = %pS\n", 1580 + printk("--- interrupt: %lx at %pS\n LR = %pS\n", 1585 1581 regs->trap, (void *)regs->nip, (void *)lr); 1586 1582 firstframe = 1; 1587 1583 }
+4 -7
arch/powerpc/kernel/prom.c
··· 155 155 } ibm_pa_features[] __initdata = { 156 156 {0, 0, PPC_FEATURE_HAS_MMU, 0, 0, 0}, 157 157 {0, 0, PPC_FEATURE_HAS_FPU, 0, 1, 0}, 158 - {0, MMU_FTR_SLB, 0, 0, 2, 0}, 159 158 {CPU_FTR_CTRL, 0, 0, 0, 3, 0}, 160 159 {CPU_FTR_NOEXECUTE, 0, 0, 0, 6, 0}, 161 160 {CPU_FTR_NODSISRALIGN, 0, 0, 1, 1, 1}, ··· 308 309 309 310 /* Get physical cpuid */ 310 311 intserv = of_get_flat_dt_prop(node, "ibm,ppc-interrupt-server#s", &len); 311 - if (intserv) { 312 - nthreads = len / sizeof(int); 313 - } else { 314 - intserv = of_get_flat_dt_prop(node, "reg", NULL); 315 - nthreads = 1; 316 - } 312 + if (!intserv) 313 + intserv = of_get_flat_dt_prop(node, "reg", &len); 314 + 315 + nthreads = len / sizeof(int); 317 316 318 317 /* 319 318 * Now see if any of these threads match our boot cpu.
+4 -2
arch/powerpc/kernel/setup-common.c
··· 456 456 intserv = of_get_property(dn, "ibm,ppc-interrupt-server#s", 457 457 &len); 458 458 if (intserv) { 459 - nthreads = len / sizeof(int); 460 459 DBG(" ibm,ppc-interrupt-server#s -> %d threads\n", 461 460 nthreads); 462 461 } else { 463 462 DBG(" no ibm,ppc-interrupt-server#s -> 1 thread\n"); 464 - intserv = of_get_property(dn, "reg", NULL); 463 + intserv = of_get_property(dn, "reg", &len); 465 464 if (!intserv) { 466 465 cpu_be = cpu_to_be32(cpu); 467 466 intserv = &cpu_be; /* assume logical == phys */ 467 + len = 4; 468 468 } 469 469 } 470 + 471 + nthreads = len / sizeof(int); 470 472 471 473 for (j = 0; j < nthreads && cpu < nr_cpu_ids; j++) { 472 474 bool avail;
+10 -5
arch/powerpc/kernel/setup_64.c
··· 201 201 /* Set IR and DR in PACA MSR */ 202 202 get_paca()->kernel_msr = MSR_KERNEL; 203 203 204 - /* Enable AIL if supported */ 204 + /* 205 + * Enable AIL if supported, and we are in hypervisor mode. If we are 206 + * not in hypervisor mode, we enable relocation-on interrupts later 207 + * in pSeries_setup_arch() using the H_SET_MODE hcall. 208 + */ 205 209 if (cpu_has_feature(CPU_FTR_HVMODE) && 206 210 cpu_has_feature(CPU_FTR_ARCH_207S)) { 207 211 unsigned long lpcr = mfspr(SPRN_LPCR); ··· 511 507 check_smt_enabled(); 512 508 setup_tlb_core_data(); 513 509 514 - #ifdef CONFIG_SMP 510 + /* 511 + * Freescale Book3e parts spin in a loop provided by firmware, 512 + * so smp_release_cpus() does nothing for them 513 + */ 514 + #if defined(CONFIG_SMP) && !defined(CONFIG_PPC_FSL_BOOK3E) 515 515 /* Release secondary cpus out of their spinloops at 0x60 now that 516 516 * we can map physical -> logical CPU ids 517 517 */ ··· 681 673 exc_lvl_early_init(); 682 674 emergency_stack_init(); 683 675 684 - #ifdef CONFIG_PPC_STD_MMU_64 685 - stabs_alloc(); 686 - #endif 687 676 /* set up the bootmem stuff with available memory */ 688 677 do_init_bootmem(); 689 678 sparse_init();
-3
arch/powerpc/kernel/systbl.S
··· 39 39 .section .rodata,"a" 40 40 41 41 #ifdef CONFIG_PPC64 42 - #define sys_sigpending sys_ni_syscall 43 - #define sys_old_getrlimit sys_ni_syscall 44 - 45 42 .p2align 3 46 43 #endif 47 44
+25 -1
arch/powerpc/kernel/traps.c
··· 302 302 return handled; 303 303 } 304 304 305 + long hmi_exception_realmode(struct pt_regs *regs) 306 + { 307 + __get_cpu_var(irq_stat).hmi_exceptions++; 308 + 309 + if (ppc_md.hmi_exception_early) 310 + ppc_md.hmi_exception_early(regs); 311 + 312 + return 0; 313 + } 314 + 305 315 #endif 306 316 307 317 /* ··· 619 609 if (reason & MCSR_BUS_RBERR) 620 610 printk("Bus - Read Data Bus Error\n"); 621 611 if (reason & MCSR_BUS_WBERR) 622 - printk("Bus - Read Data Bus Error\n"); 612 + printk("Bus - Write Data Bus Error\n"); 623 613 if (reason & MCSR_BUS_IPERR) 624 614 printk("Bus - Instruction Parity Error\n"); 625 615 if (reason & MCSR_BUS_RPERR) ··· 746 736 void SMIException(struct pt_regs *regs) 747 737 { 748 738 die("System Management Interrupt", regs, SIGABRT); 739 + } 740 + 741 + void handle_hmi_exception(struct pt_regs *regs) 742 + { 743 + struct pt_regs *old_regs; 744 + 745 + old_regs = set_irq_regs(regs); 746 + irq_enter(); 747 + 748 + if (ppc_md.handle_hmi_exception) 749 + ppc_md.handle_hmi_exception(regs); 750 + 751 + irq_exit(); 752 + set_irq_regs(old_regs); 749 753 } 750 754 751 755 void unknown_exception(struct pt_regs *regs)
+6
arch/powerpc/kvm/book3s_hv_rmhandlers.S
··· 159 159 cmpwi r12, BOOK3S_INTERRUPT_EXTERNAL 160 160 BEGIN_FTR_SECTION 161 161 beq 11f 162 + cmpwi cr2, r12, BOOK3S_INTERRUPT_HMI 163 + beq cr2, 14f /* HMI check */ 162 164 END_FTR_SECTION_IFSET(CPU_FTR_ARCH_206) 163 165 164 166 /* RFI into the highmem handler, or branch to interrupt handler */ ··· 180 178 ba 0x500 181 179 182 180 13: b machine_check_fwnmi 181 + 182 + 14: mtspr SPRN_HSRR0, r8 183 + mtspr SPRN_HSRR1, r7 184 + b hmi_exception_after_realmode 183 185 184 186 kvmppc_primary_no_guest: 185 187 /* We handle this much like a ceded vcpu */
+1 -2
arch/powerpc/lib/copyuser_64.S
··· 461 461 /* 462 462 * Routine to copy a whole page of data, optimized for POWER4. 463 463 * On POWER4 it is more than 50% faster than the simple loop 464 - * above (following the .Ldst_aligned label) but it runs slightly 465 - * slower on POWER3. 464 + * above (following the .Ldst_aligned label). 466 465 */ 467 466 .Lcopy_page_4K: 468 467 std r31,-32(1)
+1 -3
arch/powerpc/mm/Makefile
··· 13 13 tlb_nohash_low.o 14 14 obj-$(CONFIG_PPC_BOOK3E) += tlb_low_$(CONFIG_WORD_SIZE)e.o 15 15 hash64-$(CONFIG_PPC_NATIVE) := hash_native_64.o 16 - obj-$(CONFIG_PPC_STD_MMU_64) += hash_utils_64.o \ 17 - slb_low.o slb.o stab.o \ 18 - $(hash64-y) 16 + obj-$(CONFIG_PPC_STD_MMU_64) += hash_utils_64.o slb_low.o slb.o $(hash64-y) 19 17 obj-$(CONFIG_PPC_STD_MMU_32) += ppc_mmu_32.o 20 18 obj-$(CONFIG_PPC_STD_MMU) += hash_low_$(CONFIG_WORD_SIZE).o \ 21 19 tlb_hash$(CONFIG_WORD_SIZE).o \
+7 -19
arch/powerpc/mm/hash_utils_64.c
··· 243 243 } 244 244 245 245 #ifdef CONFIG_MEMORY_HOTPLUG 246 - static int htab_remove_mapping(unsigned long vstart, unsigned long vend, 246 + int htab_remove_mapping(unsigned long vstart, unsigned long vend, 247 247 int psize, int ssize) 248 248 { 249 249 unsigned long vaddr; ··· 821 821 822 822 void __init early_init_mmu(void) 823 823 { 824 - /* Setup initial STAB address in the PACA */ 825 - get_paca()->stab_real = __pa((u64)&initial_stab); 826 - get_paca()->stab_addr = (u64)&initial_stab; 827 - 828 824 /* Initialize the MMU Hash table and create the linear mapping 829 - * of memory. Has to be done before stab/slb initialization as 830 - * this is currently where the page size encoding is obtained 825 + * of memory. Has to be done before SLB initialization as this is 826 + * currently where the page size encoding is obtained. 831 827 */ 832 828 htab_initialize(); 833 829 834 - /* Initialize stab / SLB management */ 835 - if (mmu_has_feature(MMU_FTR_SLB)) 836 - slb_initialize(); 837 - else 838 - stab_initialize(get_paca()->stab_real); 830 + /* Initialize SLB management */ 831 + slb_initialize(); 839 832 } 840 833 841 834 #ifdef CONFIG_SMP ··· 838 845 if (!firmware_has_feature(FW_FEATURE_LPAR)) 839 846 mtspr(SPRN_SDR1, _SDR1); 840 847 841 - /* Initialize STAB/SLB. We use a virtual address as it works 842 - * in real mode on pSeries. 843 - */ 844 - if (mmu_has_feature(MMU_FTR_SLB)) 845 - slb_initialize(); 846 - else 847 - stab_initialize(get_paca()->stab_addr); 848 + /* Initialize SLB */ 849 + slb_initialize(); 848 850 } 849 851 #endif /* CONFIG_SMP */ 850 852
+120 -12
arch/powerpc/mm/init_64.c
··· 175 175 static int __meminit vmemmap_populated(unsigned long start, int page_size) 176 176 { 177 177 unsigned long end = start + page_size; 178 + start = (unsigned long)(pfn_to_page(vmemmap_section_start(start))); 178 179 179 180 for (; start < end; start += (PAGES_PER_SECTION * sizeof(struct page))) 180 - if (pfn_valid(vmemmap_section_start(start))) 181 + if (pfn_valid(page_to_pfn((struct page *)start))) 181 182 return 1; 182 183 183 184 return 0; ··· 213 212 for (i = 0; i < page_size; i += PAGE_SIZE) 214 213 BUG_ON(map_kernel_page(start + i, phys, flags)); 215 214 } 215 + 216 + #ifdef CONFIG_MEMORY_HOTPLUG 217 + static void vmemmap_remove_mapping(unsigned long start, 218 + unsigned long page_size) 219 + { 220 + } 221 + #endif 216 222 #else /* CONFIG_PPC_BOOK3E */ 217 223 static void __meminit vmemmap_create_mapping(unsigned long start, 218 224 unsigned long page_size, ··· 231 223 mmu_kernel_ssize); 232 224 BUG_ON(mapped < 0); 233 225 } 226 + 227 + #ifdef CONFIG_MEMORY_HOTPLUG 228 + extern int htab_remove_mapping(unsigned long vstart, unsigned long vend, 229 + int psize, int ssize); 230 + 231 + static void vmemmap_remove_mapping(unsigned long start, 232 + unsigned long page_size) 233 + { 234 + int mapped = htab_remove_mapping(start, start + page_size, 235 + mmu_vmemmap_psize, 236 + mmu_kernel_ssize); 237 + BUG_ON(mapped < 0); 238 + } 239 + #endif 240 + 234 241 #endif /* CONFIG_PPC_BOOK3E */ 235 242 236 243 struct vmemmap_backing *vmemmap_list; 244 + static struct vmemmap_backing *next; 245 + static int num_left; 246 + static int num_freed; 237 247 238 248 static __meminit struct vmemmap_backing * vmemmap_list_alloc(int node) 239 249 { 240 - static struct vmemmap_backing *next; 241 - static int num_left; 250 + struct vmemmap_backing *vmem_back; 251 + /* get from freed entries first */ 252 + if (num_freed) { 253 + num_freed--; 254 + vmem_back = next; 255 + next = next->list; 256 + 257 + return vmem_back; 258 + } 242 259 243 260 /* allocate a page when required 
and hand out chunks */ 244 - if (!next || !num_left) { 261 + if (!num_left) { 245 262 next = vmemmap_alloc_block(PAGE_SIZE, node); 246 263 if (unlikely(!next)) { 247 264 WARN_ON(1); ··· 329 296 return 0; 330 297 } 331 298 332 - void vmemmap_free(unsigned long start, unsigned long end) 299 + #ifdef CONFIG_MEMORY_HOTPLUG 300 + static unsigned long vmemmap_list_free(unsigned long start) 333 301 { 302 + struct vmemmap_backing *vmem_back, *vmem_back_prev; 303 + 304 + vmem_back_prev = vmem_back = vmemmap_list; 305 + 306 + /* look for it with prev pointer recorded */ 307 + for (; vmem_back; vmem_back = vmem_back->list) { 308 + if (vmem_back->virt_addr == start) 309 + break; 310 + vmem_back_prev = vmem_back; 311 + } 312 + 313 + if (unlikely(!vmem_back)) { 314 + WARN_ON(1); 315 + return 0; 316 + } 317 + 318 + /* remove it from vmemmap_list */ 319 + if (vmem_back == vmemmap_list) /* remove head */ 320 + vmemmap_list = vmem_back->list; 321 + else 322 + vmem_back_prev->list = vmem_back->list; 323 + 324 + /* next point to this freed entry */ 325 + vmem_back->list = next; 326 + next = vmem_back; 327 + num_freed++; 328 + 329 + return vmem_back->phys; 334 330 } 335 331 332 + void __ref vmemmap_free(unsigned long start, unsigned long end) 333 + { 334 + unsigned long page_size = 1 << mmu_psize_defs[mmu_vmemmap_psize].shift; 335 + 336 + start = _ALIGN_DOWN(start, page_size); 337 + 338 + pr_debug("vmemmap_free %lx...%lx\n", start, end); 339 + 340 + for (; start < end; start += page_size) { 341 + unsigned long addr; 342 + 343 + /* 344 + * the section has already be marked as invalid, so 345 + * vmemmap_populated() true means some other sections still 346 + * in this page, so skip it. 
347 + */ 348 + if (vmemmap_populated(start, page_size)) 349 + continue; 350 + 351 + addr = vmemmap_list_free(start); 352 + if (addr) { 353 + struct page *page = pfn_to_page(addr >> PAGE_SHIFT); 354 + 355 + if (PageReserved(page)) { 356 + /* allocated from bootmem */ 357 + if (page_size < PAGE_SIZE) { 358 + /* 359 + * this shouldn't happen, but if it is 360 + * the case, leave the memory there 361 + */ 362 + WARN_ON_ONCE(1); 363 + } else { 364 + unsigned int nr_pages = 365 + 1 << get_order(page_size); 366 + while (nr_pages--) 367 + free_reserved_page(page++); 368 + } 369 + } else 370 + free_pages((unsigned long)(__va(addr)), 371 + get_order(page_size)); 372 + 373 + vmemmap_remove_mapping(start, page_size); 374 + } 375 + } 376 + } 377 + #endif 336 378 void register_page_bootmem_memmap(unsigned long section_nr, 337 379 struct page *start_page, unsigned long size) 338 380 { ··· 439 331 if (pg_va < vmem_back->virt_addr) 440 332 continue; 441 333 442 - /* Check that page struct is not split between real pages */ 443 - if ((pg_va + sizeof(struct page)) > 444 - (vmem_back->virt_addr + page_size)) 445 - return NULL; 446 - 447 - page = (struct page *) (vmem_back->phys + pg_va - 334 + /* After vmemmap_list entry free is possible, need check all */ 335 + if ((pg_va + sizeof(struct page)) <= 336 + (vmem_back->virt_addr + page_size)) { 337 + page = (struct page *) (vmem_back->phys + pg_va - 448 338 vmem_back->virt_addr); 449 - return page; 339 + return page; 340 + } 450 341 } 451 342 343 + /* Probably that page struct is split between real pages */ 452 344 return NULL; 453 345 } 454 346 EXPORT_SYMBOL_GPL(realmode_pfn_to_page);
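The `vmemmap_list_alloc()`/`vmemmap_list_free()` pair above forms a small chunked allocator with a free list threaded through `->list`: freed entries are handed out again before a fresh chunk is carved. A stand-alone C model of that pattern (the names and the chunk size of 64 are assumptions for illustration):

```c
#include <assert.h>
#include <stdlib.h>

#define CHUNK 64                /* entries carved per allocation */

struct entry {
    struct entry *list;         /* free-list / backing-list link */
};

static struct entry *next;
static int num_left, num_freed;

static struct entry *entry_alloc(void)
{
    struct entry *e;

    if (num_freed) {            /* reuse a freed entry first */
        num_freed--;
        e = next;
        next = next->list;      /* restores the old chunk cursor */
        return e;
    }
    if (!num_left) {            /* carve a new chunk when empty */
        next = calloc(CHUNK, sizeof(*next));
        if (!next)
            return NULL;
        num_left = CHUNK;
    }
    num_left--;
    return next++;
}

static void entry_free(struct entry *e)
{
    e->list = next;             /* push onto the free list */
    next = e;
    num_freed++;
}
```

Note the fixed allocation test mirrors the diff: checking only `num_left` (not `next`) matters once freed entries can leave `next` pointing at the free list rather than the chunk.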
+1 -1
arch/powerpc/mm/mmu_context_hash32.c
··· 2 2 * This file contains the routines for handling the MMU on those 3 3 * PowerPC implementations where the MMU substantially follows the 4 4 * architecture specification. This includes the 6xx, 7xx, 7xxx, 5 - * 8260, and POWER3 implementations but excludes the 8xx and 4xx. 5 + * and 8260 implementations but excludes the 8xx and 4xx. 6 6 * -- paulus 7 7 * 8 8 * Derived from arch/ppc/mm/init.c:
+1 -1
arch/powerpc/mm/numa.c
··· 611 611 case CPU_UP_CANCELED: 612 612 case CPU_UP_CANCELED_FROZEN: 613 613 unmap_cpu_from_node(lcpu); 614 - break; 615 614 ret = NOTIFY_OK; 615 + break; 616 616 #endif 617 617 } 618 618 return ret;
+1 -1
arch/powerpc/mm/pgtable_32.c
··· 41 41 unsigned long ioremap_bot; 42 42 EXPORT_SYMBOL(ioremap_bot); /* aka VMALLOC_END */ 43 43 44 - #if defined(CONFIG_6xx) || defined(CONFIG_POWER3) 44 + #ifdef CONFIG_6xx 45 45 #define HAVE_BATS 1 46 46 #endif 47 47
+1 -1
arch/powerpc/mm/pgtable_64.c
··· 68 68 unsigned long ioremap_bot = IOREMAP_BASE; 69 69 70 70 #ifdef CONFIG_PPC_MMU_NOHASH 71 - static void *early_alloc_pgtable(unsigned long size) 71 + static __ref void *early_alloc_pgtable(unsigned long size) 72 72 { 73 73 void *pt; 74 74
+1 -1
arch/powerpc/mm/ppc_mmu_32.c
··· 2 2 * This file contains the routines for handling the MMU on those 3 3 * PowerPC implementations where the MMU substantially follows the 4 4 * architecture specification. This includes the 6xx, 7xx, 7xxx, 5 - * 8260, and POWER3 implementations but excludes the 8xx and 4xx. 5 + * and 8260 implementations but excludes the 8xx and 4xx. 6 6 * -- paulus 7 7 * 8 8 * Derived from arch/ppc/mm/init.c:
-286
arch/powerpc/mm/stab.c
··· 1 - /* 2 - * PowerPC64 Segment Translation Support. 3 - * 4 - * Dave Engebretsen and Mike Corrigan {engebret|mikejc}@us.ibm.com 5 - * Copyright (c) 2001 Dave Engebretsen 6 - * 7 - * Copyright (C) 2002 Anton Blanchard <anton@au.ibm.com>, IBM 8 - * 9 - * This program is free software; you can redistribute it and/or 10 - * modify it under the terms of the GNU General Public License 11 - * as published by the Free Software Foundation; either version 12 - * 2 of the License, or (at your option) any later version. 13 - */ 14 - 15 - #include <linux/memblock.h> 16 - 17 - #include <asm/pgtable.h> 18 - #include <asm/mmu.h> 19 - #include <asm/mmu_context.h> 20 - #include <asm/paca.h> 21 - #include <asm/cputable.h> 22 - #include <asm/prom.h> 23 - 24 - struct stab_entry { 25 - unsigned long esid_data; 26 - unsigned long vsid_data; 27 - }; 28 - 29 - #define NR_STAB_CACHE_ENTRIES 8 30 - static DEFINE_PER_CPU(long, stab_cache_ptr); 31 - static DEFINE_PER_CPU(long [NR_STAB_CACHE_ENTRIES], stab_cache); 32 - 33 - /* 34 - * Create a segment table entry for the given esid/vsid pair. 35 - */ 36 - static int make_ste(unsigned long stab, unsigned long esid, unsigned long vsid) 37 - { 38 - unsigned long esid_data, vsid_data; 39 - unsigned long entry, group, old_esid, castout_entry, i; 40 - unsigned int global_entry; 41 - struct stab_entry *ste, *castout_ste; 42 - unsigned long kernel_segment = (esid << SID_SHIFT) >= PAGE_OFFSET; 43 - 44 - vsid_data = vsid << STE_VSID_SHIFT; 45 - esid_data = esid << SID_SHIFT | STE_ESID_KP | STE_ESID_V; 46 - if (! kernel_segment) 47 - esid_data |= STE_ESID_KS; 48 - 49 - /* Search the primary group first. */ 50 - global_entry = (esid & 0x1f) << 3; 51 - ste = (struct stab_entry *)(stab | ((esid & 0x1f) << 7)); 52 - 53 - /* Find an empty entry, if one exists. 
*/ 54 - for (group = 0; group < 2; group++) { 55 - for (entry = 0; entry < 8; entry++, ste++) { 56 - if (!(ste->esid_data & STE_ESID_V)) { 57 - ste->vsid_data = vsid_data; 58 - eieio(); 59 - ste->esid_data = esid_data; 60 - return (global_entry | entry); 61 - } 62 - } 63 - /* Now search the secondary group. */ 64 - global_entry = ((~esid) & 0x1f) << 3; 65 - ste = (struct stab_entry *)(stab | (((~esid) & 0x1f) << 7)); 66 - } 67 - 68 - /* 69 - * Could not find empty entry, pick one with a round robin selection. 70 - * Search all entries in the two groups. 71 - */ 72 - castout_entry = get_paca()->stab_rr; 73 - for (i = 0; i < 16; i++) { 74 - if (castout_entry < 8) { 75 - global_entry = (esid & 0x1f) << 3; 76 - ste = (struct stab_entry *)(stab | ((esid & 0x1f) << 7)); 77 - castout_ste = ste + castout_entry; 78 - } else { 79 - global_entry = ((~esid) & 0x1f) << 3; 80 - ste = (struct stab_entry *)(stab | (((~esid) & 0x1f) << 7)); 81 - castout_ste = ste + (castout_entry - 8); 82 - } 83 - 84 - /* Dont cast out the first kernel segment */ 85 - if ((castout_ste->esid_data & ESID_MASK) != PAGE_OFFSET) 86 - break; 87 - 88 - castout_entry = (castout_entry + 1) & 0xf; 89 - } 90 - 91 - get_paca()->stab_rr = (castout_entry + 1) & 0xf; 92 - 93 - /* Modify the old entry to the new value. */ 94 - 95 - /* Force previous translations to complete. 
DRENG */ 96 - asm volatile("isync" : : : "memory"); 97 - 98 - old_esid = castout_ste->esid_data >> SID_SHIFT; 99 - castout_ste->esid_data = 0; /* Invalidate old entry */ 100 - 101 - asm volatile("sync" : : : "memory"); /* Order update */ 102 - 103 - castout_ste->vsid_data = vsid_data; 104 - eieio(); /* Order update */ 105 - castout_ste->esid_data = esid_data; 106 - 107 - asm volatile("slbie %0" : : "r" (old_esid << SID_SHIFT)); 108 - /* Ensure completion of slbie */ 109 - asm volatile("sync" : : : "memory"); 110 - 111 - return (global_entry | (castout_entry & 0x7)); 112 - } 113 - 114 - /* 115 - * Allocate a segment table entry for the given ea and mm 116 - */ 117 - static int __ste_allocate(unsigned long ea, struct mm_struct *mm) 118 - { 119 - unsigned long vsid; 120 - unsigned char stab_entry; 121 - unsigned long offset; 122 - 123 - /* Kernel or user address? */ 124 - if (is_kernel_addr(ea)) { 125 - vsid = get_kernel_vsid(ea, MMU_SEGSIZE_256M); 126 - } else { 127 - if ((ea >= TASK_SIZE_USER64) || (! 
mm)) 128 - return 1; 129 - 130 - vsid = get_vsid(mm->context.id, ea, MMU_SEGSIZE_256M); 131 - } 132 - 133 - stab_entry = make_ste(get_paca()->stab_addr, GET_ESID(ea), vsid); 134 - 135 - if (!is_kernel_addr(ea)) { 136 - offset = __get_cpu_var(stab_cache_ptr); 137 - if (offset < NR_STAB_CACHE_ENTRIES) 138 - __get_cpu_var(stab_cache[offset++]) = stab_entry; 139 - else 140 - offset = NR_STAB_CACHE_ENTRIES+1; 141 - __get_cpu_var(stab_cache_ptr) = offset; 142 - 143 - /* Order update */ 144 - asm volatile("sync":::"memory"); 145 - } 146 - 147 - return 0; 148 - } 149 - 150 - int ste_allocate(unsigned long ea) 151 - { 152 - return __ste_allocate(ea, current->mm); 153 - } 154 - 155 - /* 156 - * Do the segment table work for a context switch: flush all user 157 - * entries from the table, then preload some probably useful entries 158 - * for the new task 159 - */ 160 - void switch_stab(struct task_struct *tsk, struct mm_struct *mm) 161 - { 162 - struct stab_entry *stab = (struct stab_entry *) get_paca()->stab_addr; 163 - struct stab_entry *ste; 164 - unsigned long offset; 165 - unsigned long pc = KSTK_EIP(tsk); 166 - unsigned long stack = KSTK_ESP(tsk); 167 - unsigned long unmapped_base; 168 - 169 - /* Force previous translations to complete. DRENG */ 170 - asm volatile("isync" : : : "memory"); 171 - 172 - /* 173 - * We need interrupts hard-disabled here, not just soft-disabled, 174 - * so that a PMU interrupt can't occur, which might try to access 175 - * user memory (to get a stack trace) and possible cause an STAB miss 176 - * which would update the stab_cache/stab_cache_ptr per-cpu variables. 177 - */ 178 - hard_irq_disable(); 179 - 180 - offset = __get_cpu_var(stab_cache_ptr); 181 - if (offset <= NR_STAB_CACHE_ENTRIES) { 182 - int i; 183 - 184 - for (i = 0; i < offset; i++) { 185 - ste = stab + __get_cpu_var(stab_cache[i]); 186 - ste->esid_data = 0; /* invalidate entry */ 187 - } 188 - } else { 189 - unsigned long entry; 190 - 191 - /* Invalidate all entries. 
*/ 192 - ste = stab; 193 - 194 - /* Never flush the first entry. */ 195 - ste += 1; 196 - for (entry = 1; 197 - entry < (HW_PAGE_SIZE / sizeof(struct stab_entry)); 198 - entry++, ste++) { 199 - unsigned long ea; 200 - ea = ste->esid_data & ESID_MASK; 201 - if (!is_kernel_addr(ea)) { 202 - ste->esid_data = 0; 203 - } 204 - } 205 - } 206 - 207 - asm volatile("sync; slbia; sync":::"memory"); 208 - 209 - __get_cpu_var(stab_cache_ptr) = 0; 210 - 211 - /* Now preload some entries for the new task */ 212 - if (test_tsk_thread_flag(tsk, TIF_32BIT)) 213 - unmapped_base = TASK_UNMAPPED_BASE_USER32; 214 - else 215 - unmapped_base = TASK_UNMAPPED_BASE_USER64; 216 - 217 - __ste_allocate(pc, mm); 218 - 219 - if (GET_ESID(pc) == GET_ESID(stack)) 220 - return; 221 - 222 - __ste_allocate(stack, mm); 223 - 224 - if ((GET_ESID(pc) == GET_ESID(unmapped_base)) 225 - || (GET_ESID(stack) == GET_ESID(unmapped_base))) 226 - return; 227 - 228 - __ste_allocate(unmapped_base, mm); 229 - 230 - /* Order update */ 231 - asm volatile("sync" : : : "memory"); 232 - } 233 - 234 - /* 235 - * Allocate segment tables for secondary CPUs. These must all go in 236 - * the first (bolted) segment, so that do_stab_bolted won't get a 237 - * recursive segment miss on the segment table itself. 
238 - */ 239 - void __init stabs_alloc(void) 240 - { 241 - int cpu; 242 - 243 - if (mmu_has_feature(MMU_FTR_SLB)) 244 - return; 245 - 246 - for_each_possible_cpu(cpu) { 247 - unsigned long newstab; 248 - 249 - if (cpu == 0) 250 - continue; /* stab for CPU 0 is statically allocated */ 251 - 252 - newstab = memblock_alloc_base(HW_PAGE_SIZE, HW_PAGE_SIZE, 253 - 1<<SID_SHIFT); 254 - newstab = (unsigned long)__va(newstab); 255 - 256 - memset((void *)newstab, 0, HW_PAGE_SIZE); 257 - 258 - paca[cpu].stab_addr = newstab; 259 - paca[cpu].stab_real = __pa(newstab); 260 - printk(KERN_INFO "Segment table for CPU %d at 0x%llx " 261 - "virtual, 0x%llx absolute\n", 262 - cpu, paca[cpu].stab_addr, paca[cpu].stab_real); 263 - } 264 - } 265 - 266 - /* 267 - * Build an entry for the base kernel segment and put it into 268 - * the segment table or SLB. All other segment table or SLB 269 - * entries are faulted in. 270 - */ 271 - void stab_initialize(unsigned long stab) 272 - { 273 - unsigned long vsid = get_kernel_vsid(PAGE_OFFSET, MMU_SEGSIZE_256M); 274 - unsigned long stabreal; 275 - 276 - asm volatile("isync; slbia; isync":::"memory"); 277 - make_ste(stab, GET_ESID(PAGE_OFFSET), vsid); 278 - 279 - /* Order update */ 280 - asm volatile("sync":::"memory"); 281 - 282 - /* Set ASR */ 283 - stabreal = get_paca()->stab_real | 0x1ul; 284 - 285 - mtspr(SPRN_ASR, stabreal); 286 - }
+58 -11
arch/powerpc/mm/tlb_low_64e.S
··· 296 296 * r14 = page table base 297 297 * r13 = PACA 298 298 * r11 = tlb_per_core ptr 299 - * r10 = cpu number 299 + * r10 = crap (free to use) 300 300 */ 301 301 tlb_miss_common_e6500: 302 + crmove cr2*4+2,cr0*4+2 /* cr2.eq != 0 if kernel address */ 303 + 304 + BEGIN_FTR_SECTION /* CPU_FTR_SMT */ 302 305 /* 303 306 * Search if we already have an indirect entry for that virtual 304 307 * address, and if we do, bail out. ··· 312 309 lhz r10,PACAPACAINDEX(r13) 313 310 cmpdi r15,0 314 311 cmpdi cr1,r15,1 /* set cr1.eq = 0 for non-recursive */ 312 + addi r10,r10,1 315 313 bne 2f 316 314 stbcx. r10,0,r11 317 315 bne 1b ··· 326 322 b 1b 327 323 .previous 328 324 325 + /* 326 + * Erratum A-008139 says that we can't use tlbwe to change 327 + * an indirect entry in any way (including replacing or 328 + * invalidating) if the other thread could be in the process 329 + * of a lookup. The workaround is to invalidate the entry 330 + * with tlbilx before overwriting. 331 + */ 332 + 333 + lbz r15,TCD_ESEL_NEXT(r11) 334 + rlwinm r10,r15,16,0xff0000 335 + oris r10,r10,MAS0_TLBSEL(1)@h 336 + mtspr SPRN_MAS0,r10 337 + isync 338 + tlbre 339 + mfspr r15,SPRN_MAS1 340 + andis. r15,r15,MAS1_VALID@h 341 + beq 5f 342 + 343 + BEGIN_FTR_SECTION_NESTED(532) 344 + mfspr r10,SPRN_MAS8 345 + rlwinm r10,r10,0,0x80000fff /* tgs,tlpid -> sgs,slpid */ 346 + mtspr SPRN_MAS5,r10 347 + END_FTR_SECTION_NESTED(CPU_FTR_EMB_HV,CPU_FTR_EMB_HV,532) 348 + 349 + mfspr r10,SPRN_MAS1 350 + rlwinm r15,r10,0,0x3fff0000 /* tid -> spid */ 351 + rlwimi r15,r10,20,0x00000003 /* ind,ts -> sind,sas */ 352 + mfspr r10,SPRN_MAS6 353 + mtspr SPRN_MAS6,r15 354 + 329 355 mfspr r15,SPRN_MAS2 356 + isync 357 + tlbilxva 0,r15 358 + isync 359 + 360 + mtspr SPRN_MAS6,r10 361 + 362 + 5: 363 + BEGIN_FTR_SECTION_NESTED(532) 364 + li r10,0 365 + mtspr SPRN_MAS8,r10 366 + mtspr SPRN_MAS5,r10 367 + END_FTR_SECTION_NESTED(CPU_FTR_EMB_HV,CPU_FTR_EMB_HV,532) 330 368 331 369 tlbsx 0,r16 332 370 mfspr r10,SPRN_MAS1 333 - andis. 
r10,r10,MAS1_VALID@h 371 + andis. r15,r10,MAS1_VALID@h 334 372 bne tlb_miss_done_e6500 335 - 336 - /* Undo MAS-damage from the tlbsx */ 373 + FTR_SECTION_ELSE 337 374 mfspr r10,SPRN_MAS1 375 + ALT_FTR_SECTION_END_IFSET(CPU_FTR_SMT) 376 + 338 377 oris r10,r10,MAS1_VALID@h 339 - mtspr SPRN_MAS1,r10 340 - mtspr SPRN_MAS2,r15 378 + beq cr2,4f 379 + rlwinm r10,r10,0,16,1 /* Clear TID */ 380 + 4: mtspr SPRN_MAS1,r10 341 381 342 382 /* Now, we need to walk the page tables. First check if we are in 343 383 * range. ··· 442 394 443 395 tlb_miss_done_e6500: 444 396 .macro tlb_unlock_e6500 397 + BEGIN_FTR_SECTION 445 398 beq cr1,1f /* no unlock if lock was recursively grabbed */ 446 399 li r15,0 447 400 isync 448 401 stb r15,0(r11) 449 402 1: 403 + END_FTR_SECTION_IFSET(CPU_FTR_SMT) 450 404 .endm 451 405 452 406 tlb_unlock_e6500 ··· 457 407 rfi 458 408 459 409 tlb_miss_kernel_e6500: 460 - mfspr r10,SPRN_MAS1 461 410 ld r14,PACA_KERNELPGD(r13) 462 - cmpldi cr0,r15,8 /* Check for vmalloc region */ 463 - rlwinm r10,r10,0,16,1 /* Clear TID */ 464 - mtspr SPRN_MAS1,r10 465 - beq+ tlb_miss_common_e6500 411 + cmpldi cr1,r15,8 /* Check for vmalloc region */ 412 + beq+ cr1,tlb_miss_common_e6500 466 413 467 414 tlb_miss_fault_e6500: 468 415 tlb_unlock_e6500
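The erratum A-008139 comment in the hunk above boils down to an ordering rule: a valid indirect TLB entry must be invalidated (tlbilx) before it is overwritten (tlbwe), so the sibling thread can never walk through a half-replaced entry. A rough C model of that two-step flow; the struct and function names are illustrative, not kernel APIs:

```c
#include <assert.h>

/* Illustrative model of one indirect TLB slot. */
struct tlb_slot {
    int valid;          /* models MAS1[V] */
    unsigned long ea;   /* models the MAS2 effective-address tag */
};

/*
 * Replace a possibly-valid indirect entry. Per the A-008139 workaround,
 * never rewrite a valid entry in place: invalidate first (models
 * tlbilxva + isync), then write the new contents (models tlbwe).
 */
static void replace_indirect(struct tlb_slot *slot, unsigned long new_ea)
{
    if (slot->valid)
        slot->valid = 0;    /* step 1: invalidate the old translation */
        /* on hardware, isync orders this against the write below */

    slot->ea = new_ea;      /* step 2: install the replacement */
    slot->valid = 1;
}
```

The real workaround additionally saves and restores MAS6/MAS5/MAS8 around the tlbilxva, which this sketch elides.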
+1 -1
arch/powerpc/oprofile/Makefile
··· 14 14 oprofile-$(CONFIG_OPROFILE_CELL) += op_model_cell.o \ 15 15 cell/spu_profiler.o cell/vma_map.o \ 16 16 cell/spu_task_sync.o 17 - oprofile-$(CONFIG_PPC_BOOK3S_64) += op_model_rs64.o op_model_power4.o op_model_pa6t.o 17 + oprofile-$(CONFIG_PPC_BOOK3S_64) += op_model_power4.o op_model_pa6t.o 18 18 oprofile-$(CONFIG_FSL_EMB_PERFMON) += op_model_fsl_emb.o 19 19 oprofile-$(CONFIG_6xx) += op_model_7450.o
-3
arch/powerpc/oprofile/common.c
··· 205 205 ops->sync_stop = model->sync_stop; 206 206 break; 207 207 #endif 208 - case PPC_OPROFILE_RS64: 209 - model = &op_model_rs64; 210 - break; 211 208 case PPC_OPROFILE_POWER4: 212 209 model = &op_model_power4; 213 210 break;
-222
arch/powerpc/oprofile/op_model_rs64.c
··· 1 - /* 2 - * Copyright (C) 2004 Anton Blanchard <anton@au.ibm.com>, IBM 3 - * 4 - * This program is free software; you can redistribute it and/or 5 - * modify it under the terms of the GNU General Public License 6 - * as published by the Free Software Foundation; either version 7 - * 2 of the License, or (at your option) any later version. 8 - */ 9 - 10 - #include <linux/oprofile.h> 11 - #include <linux/smp.h> 12 - #include <asm/ptrace.h> 13 - #include <asm/processor.h> 14 - #include <asm/cputable.h> 15 - #include <asm/oprofile_impl.h> 16 - 17 - #define dbg(args...) 18 - 19 - static void ctrl_write(unsigned int i, unsigned int val) 20 - { 21 - unsigned int tmp = 0; 22 - unsigned long shift = 0, mask = 0; 23 - 24 - dbg("ctrl_write %d %x\n", i, val); 25 - 26 - switch(i) { 27 - case 0: 28 - tmp = mfspr(SPRN_MMCR0); 29 - shift = 6; 30 - mask = 0x7F; 31 - break; 32 - case 1: 33 - tmp = mfspr(SPRN_MMCR0); 34 - shift = 0; 35 - mask = 0x3F; 36 - break; 37 - case 2: 38 - tmp = mfspr(SPRN_MMCR1); 39 - shift = 31 - 4; 40 - mask = 0x1F; 41 - break; 42 - case 3: 43 - tmp = mfspr(SPRN_MMCR1); 44 - shift = 31 - 9; 45 - mask = 0x1F; 46 - break; 47 - case 4: 48 - tmp = mfspr(SPRN_MMCR1); 49 - shift = 31 - 14; 50 - mask = 0x1F; 51 - break; 52 - case 5: 53 - tmp = mfspr(SPRN_MMCR1); 54 - shift = 31 - 19; 55 - mask = 0x1F; 56 - break; 57 - case 6: 58 - tmp = mfspr(SPRN_MMCR1); 59 - shift = 31 - 24; 60 - mask = 0x1F; 61 - break; 62 - case 7: 63 - tmp = mfspr(SPRN_MMCR1); 64 - shift = 31 - 28; 65 - mask = 0xF; 66 - break; 67 - } 68 - 69 - tmp = tmp & ~(mask << shift); 70 - tmp |= val << shift; 71 - 72 - switch(i) { 73 - case 0: 74 - case 1: 75 - mtspr(SPRN_MMCR0, tmp); 76 - break; 77 - default: 78 - mtspr(SPRN_MMCR1, tmp); 79 - } 80 - 81 - dbg("ctrl_write mmcr0 %lx mmcr1 %lx\n", mfspr(SPRN_MMCR0), 82 - mfspr(SPRN_MMCR1)); 83 - } 84 - 85 - static unsigned long reset_value[OP_MAX_COUNTER]; 86 - 87 - static int num_counters; 88 - 89 - static int rs64_reg_setup(struct op_counter_config 
*ctr, 90 - struct op_system_config *sys, 91 - int num_ctrs) 92 - { 93 - int i; 94 - 95 - num_counters = num_ctrs; 96 - 97 - for (i = 0; i < num_counters; ++i) 98 - reset_value[i] = 0x80000000UL - ctr[i].count; 99 - 100 - /* XXX setup user and kernel profiling */ 101 - return 0; 102 - } 103 - 104 - static int rs64_cpu_setup(struct op_counter_config *ctr) 105 - { 106 - unsigned int mmcr0; 107 - 108 - /* reset MMCR0 and set the freeze bit */ 109 - mmcr0 = MMCR0_FC; 110 - mtspr(SPRN_MMCR0, mmcr0); 111 - 112 - /* reset MMCR1, MMCRA */ 113 - mtspr(SPRN_MMCR1, 0); 114 - 115 - if (cpu_has_feature(CPU_FTR_MMCRA)) 116 - mtspr(SPRN_MMCRA, 0); 117 - 118 - mmcr0 |= MMCR0_FCM1|MMCR0_PMXE|MMCR0_FCECE; 119 - /* Only applies to POWER3, but should be safe on RS64 */ 120 - mmcr0 |= MMCR0_PMC1CE|MMCR0_PMCjCE; 121 - mtspr(SPRN_MMCR0, mmcr0); 122 - 123 - dbg("setup on cpu %d, mmcr0 %lx\n", smp_processor_id(), 124 - mfspr(SPRN_MMCR0)); 125 - dbg("setup on cpu %d, mmcr1 %lx\n", smp_processor_id(), 126 - mfspr(SPRN_MMCR1)); 127 - 128 - return 0; 129 - } 130 - 131 - static int rs64_start(struct op_counter_config *ctr) 132 - { 133 - int i; 134 - unsigned int mmcr0; 135 - 136 - /* set the PMM bit (see comment below) */ 137 - mtmsrd(mfmsr() | MSR_PMM); 138 - 139 - for (i = 0; i < num_counters; ++i) { 140 - if (ctr[i].enabled) { 141 - classic_ctr_write(i, reset_value[i]); 142 - ctrl_write(i, ctr[i].event); 143 - } else { 144 - classic_ctr_write(i, 0); 145 - } 146 - } 147 - 148 - mmcr0 = mfspr(SPRN_MMCR0); 149 - 150 - /* 151 - * now clear the freeze bit, counting will not start until we 152 - * rfid from this excetion, because only at that point will 153 - * the PMM bit be cleared 154 - */ 155 - mmcr0 &= ~MMCR0_FC; 156 - mtspr(SPRN_MMCR0, mmcr0); 157 - 158 - dbg("start on cpu %d, mmcr0 %x\n", smp_processor_id(), mmcr0); 159 - return 0; 160 - } 161 - 162 - static void rs64_stop(void) 163 - { 164 - unsigned int mmcr0; 165 - 166 - /* freeze counters */ 167 - mmcr0 = mfspr(SPRN_MMCR0); 168 - mmcr0 
|= MMCR0_FC; 169 - mtspr(SPRN_MMCR0, mmcr0); 170 - 171 - dbg("stop on cpu %d, mmcr0 %x\n", smp_processor_id(), mmcr0); 172 - 173 - mb(); 174 - } 175 - 176 - static void rs64_handle_interrupt(struct pt_regs *regs, 177 - struct op_counter_config *ctr) 178 - { 179 - unsigned int mmcr0; 180 - int is_kernel; 181 - int val; 182 - int i; 183 - unsigned long pc = mfspr(SPRN_SIAR); 184 - 185 - is_kernel = is_kernel_addr(pc); 186 - 187 - /* set the PMM bit (see comment below) */ 188 - mtmsrd(mfmsr() | MSR_PMM); 189 - 190 - for (i = 0; i < num_counters; ++i) { 191 - val = classic_ctr_read(i); 192 - if (val < 0) { 193 - if (ctr[i].enabled) { 194 - oprofile_add_ext_sample(pc, regs, i, is_kernel); 195 - classic_ctr_write(i, reset_value[i]); 196 - } else { 197 - classic_ctr_write(i, 0); 198 - } 199 - } 200 - } 201 - 202 - mmcr0 = mfspr(SPRN_MMCR0); 203 - 204 - /* reset the perfmon trigger */ 205 - mmcr0 |= MMCR0_PMXE; 206 - 207 - /* 208 - * now clear the freeze bit, counting will not start until we 209 - * rfid from this exception, because only at that point will 210 - * the PMM bit be cleared 211 - */ 212 - mmcr0 &= ~MMCR0_FC; 213 - mtspr(SPRN_MMCR0, mmcr0); 214 - } 215 - 216 - struct op_powerpc_model op_model_rs64 = { 217 - .reg_setup = rs64_reg_setup, 218 - .cpu_setup = rs64_cpu_setup, 219 - .start = rs64_start, 220 - .stop = rs64_stop, 221 - .handle_interrupt = rs64_handle_interrupt, 222 - };
+49 -24
arch/powerpc/perf/core-book3s.c
··· 36 36 struct perf_event *event[MAX_HWEVENTS]; 37 37 u64 events[MAX_HWEVENTS]; 38 38 unsigned int flags[MAX_HWEVENTS]; 39 - unsigned long mmcr[3]; 39 + /* 40 + * The order of the MMCR array is: 41 + * - 64-bit, MMCR0, MMCR1, MMCRA, MMCR2 42 + * - 32-bit, MMCR0, MMCR1, MMCR2 43 + */ 44 + unsigned long mmcr[4]; 40 45 struct perf_event *limited_counter[MAX_LIMITED_HWCOUNTERS]; 41 46 u8 limited_hwidx[MAX_LIMITED_HWCOUNTERS]; 42 47 u64 alternatives[MAX_HWEVENTS][MAX_EVENT_ALTERNATIVES]; ··· 117 112 static int ebb_event_check(struct perf_event *event) { return 0; } 118 113 static void ebb_event_add(struct perf_event *event) { } 119 114 static void ebb_switch_out(unsigned long mmcr0) { } 120 - static unsigned long ebb_switch_in(bool ebb, unsigned long mmcr0) 115 + static unsigned long ebb_switch_in(bool ebb, struct cpu_hw_events *cpuhw) 121 116 { 122 - return mmcr0; 117 + return cpuhw->mmcr[0]; 123 118 } 124 119 125 120 static inline void power_pmu_bhrb_enable(struct perf_event *event) {} ··· 547 542 current->thread.mmcr2 = mfspr(SPRN_MMCR2) & MMCR2_USER_MASK; 548 543 } 549 544 550 - static unsigned long ebb_switch_in(bool ebb, unsigned long mmcr0) 545 + static unsigned long ebb_switch_in(bool ebb, struct cpu_hw_events *cpuhw) 551 546 { 547 + unsigned long mmcr0 = cpuhw->mmcr[0]; 548 + 552 549 if (!ebb) 553 550 goto out; 554 551 ··· 575 568 mtspr(SPRN_SIAR, current->thread.siar); 576 569 mtspr(SPRN_SIER, current->thread.sier); 577 570 mtspr(SPRN_SDAR, current->thread.sdar); 578 - mtspr(SPRN_MMCR2, current->thread.mmcr2); 571 + 572 + /* 573 + * Merge the kernel & user values of MMCR2. The semantics we implement 574 + * are that the user MMCR2 can set bits, ie. cause counters to freeze, 575 + * but not clear bits. If a task wants to be able to clear bits, ie. 576 + * unfreeze counters, it should not set exclude_xxx in its events and 577 + * instead manage the MMCR2 entirely by itself. 
578 + */ 579 + mtspr(SPRN_MMCR2, cpuhw->mmcr[3] | current->thread.mmcr2); 579 580 out: 580 581 return mmcr0; 581 582 } ··· 930 915 int i, n, first; 931 916 struct perf_event *event; 932 917 918 + /* 919 + * If the PMU we're on supports per event exclude settings then we 920 + * don't need to do any of this logic. NB. This assumes no PMU has both 921 + * per event exclude and limited PMCs. 922 + */ 923 + if (ppmu->flags & PPMU_ARCH_207S) 924 + return 0; 925 + 933 926 n = n_prev + n_new; 934 927 if (n <= 1) 935 928 return 0; ··· 1242 1219 } 1243 1220 1244 1221 /* 1245 - * Compute MMCR* values for the new set of events 1222 + * Clear all MMCR settings and recompute them for the new set of events. 1246 1223 */ 1224 + memset(cpuhw->mmcr, 0, sizeof(cpuhw->mmcr)); 1225 + 1247 1226 if (ppmu->compute_mmcr(cpuhw->events, cpuhw->n_events, hwc_index, 1248 - cpuhw->mmcr)) { 1227 + cpuhw->mmcr, cpuhw->event)) { 1249 1228 /* shouldn't ever get here */ 1250 1229 printk(KERN_ERR "oops compute_mmcr failed\n"); 1251 1230 goto out; 1252 1231 } 1253 1232 1254 - /* 1255 - * Add in MMCR0 freeze bits corresponding to the 1256 - * attr.exclude_* bits for the first event. 1257 - * We have already checked that all events have the 1258 - * same values for these bits as the first event. 1259 - */ 1260 - event = cpuhw->event[0]; 1261 - if (event->attr.exclude_user) 1262 - cpuhw->mmcr[0] |= MMCR0_FCP; 1263 - if (event->attr.exclude_kernel) 1264 - cpuhw->mmcr[0] |= freeze_events_kernel; 1265 - if (event->attr.exclude_hv) 1266 - cpuhw->mmcr[0] |= MMCR0_FCHV; 1233 + if (!(ppmu->flags & PPMU_ARCH_207S)) { 1234 + /* 1235 + * Add in MMCR0 freeze bits corresponding to the attr.exclude_* 1236 + * bits for the first event. We have already checked that all 1237 + * events have the same value for these bits as the first event. 
1238 + */ 1239 + event = cpuhw->event[0]; 1240 + if (event->attr.exclude_user) 1241 + cpuhw->mmcr[0] |= MMCR0_FCP; 1242 + if (event->attr.exclude_kernel) 1243 + cpuhw->mmcr[0] |= freeze_events_kernel; 1244 + if (event->attr.exclude_hv) 1245 + cpuhw->mmcr[0] |= MMCR0_FCHV; 1246 + } 1267 1247 1268 1248 /* 1269 1249 * Write the new configuration to MMCR* with the freeze ··· 1278 1252 mtspr(SPRN_MMCR1, cpuhw->mmcr[1]); 1279 1253 mtspr(SPRN_MMCR0, (cpuhw->mmcr[0] & ~(MMCR0_PMC1CE | MMCR0_PMCjCE)) 1280 1254 | MMCR0_FC); 1255 + if (ppmu->flags & PPMU_ARCH_207S) 1256 + mtspr(SPRN_MMCR2, cpuhw->mmcr[3]); 1281 1257 1282 1258 /* 1283 1259 * Read off any pre-existing events that need to move ··· 1335 1307 out_enable: 1336 1308 pmao_restore_workaround(ebb); 1337 1309 1338 - if (ppmu->flags & PPMU_ARCH_207S) 1339 - mtspr(SPRN_MMCR2, 0); 1340 - 1341 - mmcr0 = ebb_switch_in(ebb, cpuhw->mmcr[0]); 1310 + mmcr0 = ebb_switch_in(ebb, cpuhw); 1342 1311 1343 1312 mb(); 1344 1313 if (cpuhw->bhrb_users)
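The new comment in ebb_switch_in() spells out the merge policy for MMCR2: OR-ing the kernel's value with the task's means user space can only add freeze bits, never clear bits the kernel set. A minimal sketch of that policy (the bit values are illustrative):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Kernel/user MMCR2 merge as done in ebb_switch_in(): the effective
 * value is kernel bits OR user bits, so an EBB task can freeze extra
 * counters but cannot unfreeze counters the kernel froze via the
 * exclude_* settings.
 */
static uint64_t effective_mmcr2(uint64_t kernel_mmcr2, uint64_t user_mmcr2)
{
    return kernel_mmcr2 | user_mmcr2;
}
```

A task that wants full control of MMCR2, including clearing freeze bits, must therefore avoid exclude_xxx in its events, as the comment notes.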
+3 -2
arch/powerpc/perf/mpc7450-pmu.c
··· 260 260 /* 261 261 * Compute MMCR0/1/2 values for a set of events. 262 262 */ 263 - static int mpc7450_compute_mmcr(u64 event[], int n_ev, 264 - unsigned int hwc[], unsigned long mmcr[]) 263 + static int mpc7450_compute_mmcr(u64 event[], int n_ev, unsigned int hwc[], 264 + unsigned long mmcr[], 265 + struct perf_event *pevents[]) 265 266 { 266 267 u8 event_index[N_CLASSES][N_COUNTER]; 267 268 int n_classevent[N_CLASSES];
+1 -1
arch/powerpc/perf/power4-pmu.c
··· 356 356 } 357 357 358 358 static int p4_compute_mmcr(u64 event[], int n_ev, 359 - unsigned int hwc[], unsigned long mmcr[]) 359 + unsigned int hwc[], unsigned long mmcr[], struct perf_event *pevents[]) 360 360 { 361 361 unsigned long mmcr0 = 0, mmcr1 = 0, mmcra = 0; 362 362 unsigned int pmc, unit, byte, psel, lower;
+1 -1
arch/powerpc/perf/power5+-pmu.c
··· 452 452 } 453 453 454 454 static int power5p_compute_mmcr(u64 event[], int n_ev, 455 - unsigned int hwc[], unsigned long mmcr[]) 455 + unsigned int hwc[], unsigned long mmcr[], struct perf_event *pevents[]) 456 456 { 457 457 unsigned long mmcr1 = 0; 458 458 unsigned long mmcra = 0;
+1 -1
arch/powerpc/perf/power5-pmu.c
··· 383 383 } 384 384 385 385 static int power5_compute_mmcr(u64 event[], int n_ev, 386 - unsigned int hwc[], unsigned long mmcr[]) 386 + unsigned int hwc[], unsigned long mmcr[], struct perf_event *pevents[]) 387 387 { 388 388 unsigned long mmcr1 = 0; 389 389 unsigned long mmcra = MMCRA_SDAR_DCACHE_MISS | MMCRA_SDAR_ERAT_MISS;
+1 -1
arch/powerpc/perf/power6-pmu.c
··· 175 175 * Assign PMC numbers and compute MMCR1 value for a set of events 176 176 */ 177 177 static int p6_compute_mmcr(u64 event[], int n_ev, 178 - unsigned int hwc[], unsigned long mmcr[]) 178 + unsigned int hwc[], unsigned long mmcr[], struct perf_event *pevents[]) 179 179 { 180 180 unsigned long mmcr1 = 0; 181 181 unsigned long mmcra = MMCRA_SDAR_DCACHE_MISS | MMCRA_SDAR_ERAT_MISS;
+1 -1
arch/powerpc/perf/power7-pmu.c
··· 245 245 } 246 246 247 247 static int power7_compute_mmcr(u64 event[], int n_ev, 248 - unsigned int hwc[], unsigned long mmcr[]) 248 + unsigned int hwc[], unsigned long mmcr[], struct perf_event *pevents[]) 249 249 { 250 250 unsigned long mmcr1 = 0; 251 251 unsigned long mmcra = MMCRA_SDAR_DCACHE_MISS | MMCRA_SDAR_ERAT_MISS;
+24 -3
arch/powerpc/perf/power8-pmu.c
··· 15 15 #include <linux/kernel.h> 16 16 #include <linux/perf_event.h> 17 17 #include <asm/firmware.h> 18 + #include <asm/cputable.h> 18 19 19 20 20 21 /* ··· 267 266 #define MMCRA_SDAR_MODE_TLB (1ull << 42) 268 267 #define MMCRA_IFM_SHIFT 30 269 268 269 + /* Bits in MMCR2 for POWER8 */ 270 + #define MMCR2_FCS(pmc) (1ull << (63 - (((pmc) - 1) * 9))) 271 + #define MMCR2_FCP(pmc) (1ull << (62 - (((pmc) - 1) * 9))) 272 + #define MMCR2_FCH(pmc) (1ull << (57 - (((pmc) - 1) * 9))) 273 + 270 274 271 275 static inline bool event_is_fab_match(u64 event) 272 276 { ··· 399 393 } 400 394 401 395 static int power8_compute_mmcr(u64 event[], int n_ev, 402 - unsigned int hwc[], unsigned long mmcr[]) 396 + unsigned int hwc[], unsigned long mmcr[], 397 + struct perf_event *pevents[]) 403 398 { 404 - unsigned long mmcra, mmcr1, unit, combine, psel, cache, val; 399 + unsigned long mmcra, mmcr1, mmcr2, unit, combine, psel, cache, val; 405 400 unsigned int pmc, pmc_inuse; 406 401 int i; 407 402 ··· 417 410 418 411 /* In continous sampling mode, update SDAR on TLB miss */ 419 412 mmcra = MMCRA_SDAR_MODE_TLB; 420 - mmcr1 = 0; 413 + mmcr1 = mmcr2 = 0; 421 414 422 415 /* Second pass: assign PMCs, set all MMCR1 fields */ 423 416 for (i = 0; i < n_ev; ++i) { ··· 479 472 mmcra |= val << MMCRA_IFM_SHIFT; 480 473 } 481 474 475 + if (pevents[i]->attr.exclude_user) 476 + mmcr2 |= MMCR2_FCP(pmc); 477 + 478 + if (pevents[i]->attr.exclude_hv) 479 + mmcr2 |= MMCR2_FCH(pmc); 480 + 481 + if (pevents[i]->attr.exclude_kernel) { 482 + if (cpu_has_feature(CPU_FTR_HVMODE)) 483 + mmcr2 |= MMCR2_FCH(pmc); 484 + else 485 + mmcr2 |= MMCR2_FCS(pmc); 486 + } 487 + 482 488 hwc[i] = pmc - 1; 483 489 } 484 490 ··· 511 491 512 492 mmcr[1] = mmcr1; 513 493 mmcr[2] = mmcra; 494 + mmcr[3] = mmcr2; 514 495 515 496 return 0; 516 497 }
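The three new macros pack per-PMC freeze bits into MMCR2: each PMC owns a 9-bit field, with PMC1's field at the top of the register, and FCS/FCP/FCH selecting the privilege state in which that counter freezes (used above for exclude_kernel, exclude_user and exclude_hv respectively). A quick check of the resulting bit positions:

```c
#include <assert.h>

/* Same macros as the diff: per-PMC fields are 9 bits wide, PMC1 topmost. */
#define MMCR2_FCS(pmc) (1ull << (63 - (((pmc) - 1) * 9)))
#define MMCR2_FCP(pmc) (1ull << (62 - (((pmc) - 1) * 9)))
#define MMCR2_FCH(pmc) (1ull << (57 - (((pmc) - 1) * 9)))
```

With six PMCs the lowest field starts at bit 63 - 5*9 = 18, so the fields never collide or underflow.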
+1 -1
arch/powerpc/perf/ppc970-pmu.c
··· 257 257 } 258 258 259 259 static int p970_compute_mmcr(u64 event[], int n_ev, 260 - unsigned int hwc[], unsigned long mmcr[]) 260 + unsigned int hwc[], unsigned long mmcr[], struct perf_event *pevents[]) 261 261 { 262 262 unsigned long mmcr0 = 0, mmcr1 = 0, mmcra = 0; 263 263 unsigned int pmc, unit, byte, psel;
+1 -1
arch/powerpc/platforms/85xx/Kconfig
··· 274 274 For 32bit kernel, the following boards are supported: 275 275 P2041 RDB, P3041 DS, P4080 DS, kmcoge4, and OCA4080 276 276 For 64bit kernel, the following boards are supported: 277 - T4240 QDS and B4 QDS 277 + T208x QDS/RDB, T4240 QDS/RDB and B4 QDS 278 278 The following boards are supported for both 32bit and 64bit kernel: 279 279 P5020 DS, P5040 DS and T104xQDS 280 280
+24 -29
arch/powerpc/platforms/85xx/corenet_generic.c
··· 119 119 "fsl,P4080DS", 120 120 "fsl,P5020DS", 121 121 "fsl,P5040DS", 122 + "fsl,T2080QDS", 123 + "fsl,T2080RDB", 124 + "fsl,T2081QDS", 122 125 "fsl,T4240QDS", 126 + "fsl,T4240RDB", 123 127 "fsl,B4860QDS", 124 128 "fsl,B4420QDS", 125 129 "fsl,B4220QDS", ··· 133 129 NULL 134 130 }; 135 131 136 - static const char * const hv_boards[] __initconst = { 137 - "fsl,P2041RDB-hv", 138 - "fsl,P3041DS-hv", 139 - "fsl,OCA4080-hv", 140 - "fsl,P4080DS-hv", 141 - "fsl,P5020DS-hv", 142 - "fsl,P5040DS-hv", 143 - "fsl,T4240QDS-hv", 144 - "fsl,B4860QDS-hv", 145 - "fsl,B4420QDS-hv", 146 - "fsl,B4220QDS-hv", 147 - "fsl,T1040QDS-hv", 148 - "fsl,T1042QDS-hv", 149 - NULL 150 - }; 151 - 152 132 /* 153 133 * Called very early, device-tree isn't unflattened 154 134 */ 155 135 static int __init corenet_generic_probe(void) 156 136 { 157 137 unsigned long root = of_get_flat_dt_root(); 138 + char hv_compat[24]; 139 + int i; 158 140 #ifdef CONFIG_SMP 159 141 extern struct smp_ops_t smp_85xx_ops; 160 142 #endif ··· 149 159 return 1; 150 160 151 161 /* Check if we're running under the Freescale hypervisor */ 152 - if (of_flat_dt_match(root, hv_boards)) { 153 - ppc_md.init_IRQ = ehv_pic_init; 154 - ppc_md.get_irq = ehv_pic_get_irq; 155 - ppc_md.restart = fsl_hv_restart; 156 - ppc_md.power_off = fsl_hv_halt; 157 - ppc_md.halt = fsl_hv_halt; 162 + for (i = 0; boards[i]; i++) { 163 + snprintf(hv_compat, sizeof(hv_compat), "%s-hv", boards[i]); 164 + if (of_flat_dt_is_compatible(root, hv_compat)) { 165 + ppc_md.init_IRQ = ehv_pic_init; 166 + 167 + ppc_md.get_irq = ehv_pic_get_irq; 168 + ppc_md.restart = fsl_hv_restart; 169 + ppc_md.power_off = fsl_hv_halt; 170 + ppc_md.halt = fsl_hv_halt; 158 171 #ifdef CONFIG_SMP 159 - /* 160 - * Disable the timebase sync operations because we can't write 161 - * to the timebase registers under the hypervisor. 
162 - */ 163 - smp_85xx_ops.give_timebase = NULL; 164 - smp_85xx_ops.take_timebase = NULL; 172 + /* 173 + * Disable the timebase sync operations because we 174 + * can't write to the timebase registers under the 175 + * hypervisor. 176 + */ 177 + smp_85xx_ops.give_timebase = NULL; 178 + smp_85xx_ops.take_timebase = NULL; 165 179 #endif 166 - return 1; 180 + return 1; 181 + } 167 182 } 168 183 169 184 return 0;
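Instead of maintaining a parallel hv_boards[] list, the probe now derives each hypervisor compatible string by appending "-hv" to the bare board name. A userspace sketch of that loop, with a plain string compare standing in for the real of_flat_dt_is_compatible() device-tree walk (the abridged board list is illustrative):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

static const char *const boards[] = {
    "fsl,P2041RDB", "fsl,P4080DS", "fsl,T4240QDS", NULL, /* abridged */
};

/* Stub: the real of_flat_dt_is_compatible() walks the flattened
 * device tree; here we compare against a single probed string. */
static int dt_is_compatible(const char *root_compat, const char *compat)
{
    return strcmp(root_compat, compat) == 0;
}

/* Returns 1 if root_compat names a board's "-hv" (hypervisor) variant. */
static int under_fsl_hypervisor(const char *root_compat)
{
    char hv_compat[24];
    int i;

    for (i = 0; boards[i]; i++) {
        snprintf(hv_compat, sizeof(hv_compat), "%s-hv", boards[i]);
        if (dt_is_compatible(root_compat, hv_compat))
            return 1;
    }
    return 0;
}
```

This is why new boards like the T2080RDB only need one entry in boards[]: the hypervisor variant comes for free.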
+44
arch/powerpc/platforms/85xx/smp.c
··· 28 28 #include <asm/dbell.h> 29 29 #include <asm/fsl_guts.h> 30 30 #include <asm/code-patching.h> 31 + #include <asm/cputhreads.h> 31 32 32 33 #include <sysdev/fsl_soc.h> 33 34 #include <sysdev/mpic.h> ··· 169 168 return in_be32(&((struct epapr_spin_table *)spin_table)->addr_l); 170 169 } 171 170 171 + #ifdef CONFIG_PPC64 172 + static void wake_hw_thread(void *info) 173 + { 174 + void fsl_secondary_thread_init(void); 175 + unsigned long imsr1, inia1; 176 + int nr = *(const int *)info; 177 + 178 + imsr1 = MSR_KERNEL; 179 + inia1 = *(unsigned long *)fsl_secondary_thread_init; 180 + 181 + mttmr(TMRN_IMSR1, imsr1); 182 + mttmr(TMRN_INIA1, inia1); 183 + mtspr(SPRN_TENS, TEN_THREAD(1)); 184 + 185 + smp_generic_kick_cpu(nr); 186 + } 187 + #endif 188 + 172 189 static int smp_85xx_kick_cpu(int nr) 173 190 { 174 191 unsigned long flags; ··· 201 182 WARN_ON(hw_cpu < 0 || hw_cpu >= NR_CPUS); 202 183 203 184 pr_debug("smp_85xx_kick_cpu: kick CPU #%d\n", nr); 185 + 186 + #ifdef CONFIG_PPC64 187 + /* Threads don't use the spin table */ 188 + if (cpu_thread_in_core(nr) != 0) { 189 + int primary = cpu_first_thread_sibling(nr); 190 + 191 + if (WARN_ON_ONCE(!cpu_has_feature(CPU_FTR_SMT))) 192 + return -ENOENT; 193 + 194 + if (cpu_thread_in_core(nr) != 1) { 195 + pr_err("%s: cpu %d: invalid hw thread %d\n", 196 + __func__, nr, cpu_thread_in_core(nr)); 197 + return -ENOENT; 198 + } 199 + 200 + if (!cpu_online(primary)) { 201 + pr_err("%s: cpu %d: primary %d not online\n", 202 + __func__, nr, primary); 203 + return -ENOENT; 204 + } 205 + 206 + smp_call_function_single(primary, wake_hw_thread, &nr, 0); 207 + return 0; 208 + } 209 + #endif 204 210 205 211 np = of_get_cpu_node(nr, NULL); 206 212 cpu_rel_addr = of_get_property(np, "cpu-release-addr", NULL);
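The new kick path only uses the spin table for thread 0 of each core; a secondary hardware thread is started by its primary sibling via the TMRN_IMSR1/INIA1 thread-management registers. The thread/primary arithmetic relies on the usual linear CPU numbering; a sketch under the e6500 assumption of two threads per core (pick_waker is an illustrative name, not a kernel helper):

```c
#include <assert.h>

#define THREADS_PER_CORE 2   /* e6500: two HW threads per core */

/* Linear numbering: cpu N is thread (N % tpc) of core (N / tpc). */
static int cpu_thread_in_core(int cpu)       { return cpu % THREADS_PER_CORE; }
static int cpu_first_thread_sibling(int cpu) { return cpu - cpu_thread_in_core(cpu); }

/* Mirrors the checks in smp_85xx_kick_cpu(): only hw thread 1 is a
 * valid secondary, and it is woken by its primary sibling. */
static int pick_waker(int cpu)
{
    if (cpu_thread_in_core(cpu) == 0)
        return -1;                        /* primary: spin-table path */
    if (cpu_thread_in_core(cpu) != 1)
        return -2;                        /* invalid hw thread */
    return cpu_first_thread_sibling(cpu); /* cpu that runs wake_hw_thread() */
}
```

The kernel additionally requires the primary to already be online before issuing the cross-call, which this sketch omits.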
-3
arch/powerpc/platforms/8xx/m8xx_setup.c
··· 18 18 #include <linux/fsl_devices.h> 19 19 20 20 #include <asm/io.h> 21 - #include <asm/mpc8xx.h> 22 21 #include <asm/8xx_immap.h> 23 22 #include <asm/prom.h> 24 23 #include <asm/fs_pd.h> ··· 26 27 #include <sysdev/mpc8xx_pic.h> 27 28 28 29 #include "mpc8xx.h" 29 - 30 - struct mpc8xx_pcmcia_ops m8xx_pcmcia_ops; 31 30 32 31 extern int cpm_pic_init(void); 33 32 extern int cpm_get_irq(void);
-62
arch/powerpc/platforms/8xx/mpc885ads_setup.c
··· 35 35 #include <asm/page.h> 36 36 #include <asm/processor.h> 37 37 #include <asm/time.h> 38 - #include <asm/mpc8xx.h> 39 38 #include <asm/8xx_immap.h> 40 39 #include <asm/cpm1.h> 41 40 #include <asm/fs_pd.h> ··· 44 45 #include "mpc8xx.h" 45 46 46 47 static u32 __iomem *bcsr, *bcsr5; 47 - 48 - #ifdef CONFIG_PCMCIA_M8XX 49 - static void pcmcia_hw_setup(int slot, int enable) 50 - { 51 - if (enable) 52 - clrbits32(&bcsr[1], BCSR1_PCCEN); 53 - else 54 - setbits32(&bcsr[1], BCSR1_PCCEN); 55 - } 56 - 57 - static int pcmcia_set_voltage(int slot, int vcc, int vpp) 58 - { 59 - u32 reg = 0; 60 - 61 - switch (vcc) { 62 - case 0: 63 - break; 64 - case 33: 65 - reg |= BCSR1_PCCVCC0; 66 - break; 67 - case 50: 68 - reg |= BCSR1_PCCVCC1; 69 - break; 70 - default: 71 - return 1; 72 - } 73 - 74 - switch (vpp) { 75 - case 0: 76 - break; 77 - case 33: 78 - case 50: 79 - if (vcc == vpp) 80 - reg |= BCSR1_PCCVPP1; 81 - else 82 - return 1; 83 - break; 84 - case 120: 85 - if ((vcc == 33) || (vcc == 50)) 86 - reg |= BCSR1_PCCVPP0; 87 - else 88 - return 1; 89 - default: 90 - return 1; 91 - } 92 - 93 - /* first, turn off all power */ 94 - clrbits32(&bcsr[1], 0x00610000); 95 - 96 - /* enable new powersettings */ 97 - setbits32(&bcsr[1], reg); 98 - 99 - return 0; 100 - } 101 - #endif 102 48 103 49 struct cpm_pin { 104 50 int port, pin, flags; ··· 189 245 of_detach_node(np); 190 246 of_node_put(np); 191 247 } 192 - 193 - #ifdef CONFIG_PCMCIA_M8XX 194 - /* Set up board specific hook-ups.*/ 195 - m8xx_pcmcia_ops.hw_ctrl = pcmcia_hw_setup; 196 - m8xx_pcmcia_ops.voltage_set = pcmcia_set_voltage; 197 - #endif 198 248 } 199 249 200 250 static int __init mpc885ads_probe(void)
-1
arch/powerpc/platforms/8xx/tqm8xx_setup.c
··· 37 37 #include <asm/page.h> 38 38 #include <asm/processor.h> 39 39 #include <asm/time.h> 40 - #include <asm/mpc8xx.h> 41 40 #include <asm/8xx_immap.h> 42 41 #include <asm/cpm1.h> 43 42 #include <asm/fs_pd.h>
+5 -13
arch/powerpc/platforms/Kconfig.cputype
··· 61 61 help 62 62 There are two families of 64 bit PowerPC chips supported. 63 63 The most common ones are the desktop and server CPUs 64 - (POWER3, RS64, POWER4, POWER5, POWER5+, POWER6, ...) 64 + (POWER4, POWER5, 970, POWER5+, POWER6, POWER7, POWER8 ...) 65 65 66 66 The other are the "embedded" processors compliant with the 67 67 "Book 3E" variant of the architecture ··· 139 139 def_bool y 140 140 depends on PPC32 && PPC_BOOK3S 141 141 select PPC_HAVE_PMU_SUPPORT 142 - 143 - config POWER3 144 - depends on PPC64 && PPC_BOOK3S 145 - def_bool y 146 - 147 - config POWER4 148 - depends on PPC64 && PPC_BOOK3S 149 - def_bool y 150 142 151 143 config TUNE_CELL 152 144 bool "Optimize for Cell Broadband Engine" ··· 236 244 237 245 config ALTIVEC 238 246 bool "AltiVec Support" 239 - depends on 6xx || POWER4 || (PPC_E500MC && PPC64) 247 + depends on 6xx || PPC_BOOK3S_64 || (PPC_E500MC && PPC64) 240 248 ---help--- 241 249 This option enables kernel support for the Altivec extensions to the 242 250 PowerPC processor. The kernel currently supports saving and restoring ··· 252 260 253 261 config VSX 254 262 bool "VSX Support" 255 - depends on POWER4 && ALTIVEC && PPC_FPU 263 + depends on PPC_BOOK3S_64 && ALTIVEC && PPC_FPU 256 264 ---help--- 257 265 258 266 This option enables kernel support for the Vector Scaler extensions ··· 268 276 269 277 config PPC_ICSWX 270 278 bool "Support for PowerPC icswx coprocessor instruction" 271 - depends on POWER4 279 + depends on PPC_BOOK3S_64 272 280 default n 273 281 ---help--- 274 282 ··· 286 294 287 295 config PPC_ICSWX_PID 288 296 bool "icswx requires direct PID management" 289 - depends on PPC_ICSWX && POWER4 297 + depends on PPC_ICSWX 290 298 default y 291 299 ---help--- 292 300 The PID register in server is used explicitly for ICSWX. In
+1 -1
arch/powerpc/platforms/powermac/Kconfig
··· 10 10 11 11 config PPC_PMAC64 12 12 bool 13 - depends on PPC_PMAC && POWER4 13 + depends on PPC_PMAC && PPC64 14 14 select MPIC 15 15 select U3_DART 16 16 select MPIC_U3_HT_IRQS
+21 -21
arch/powerpc/platforms/powermac/feature.c
··· 158 158 return 0; 159 159 } 160 160 161 - #ifndef CONFIG_POWER4 161 + #ifndef CONFIG_PPC64 162 162 163 163 static long ohare_htw_scc_enable(struct device_node *node, long param, 164 164 long value) ··· 1318 1318 } 1319 1319 1320 1320 1321 - #endif /* CONFIG_POWER4 */ 1321 + #endif /* CONFIG_PPC64 */ 1322 1322 1323 1323 static long 1324 1324 core99_read_gpio(struct device_node *node, long param, long value) ··· 1338 1338 return 0; 1339 1339 } 1340 1340 1341 - #ifdef CONFIG_POWER4 1341 + #ifdef CONFIG_PPC64 1342 1342 static long g5_gmac_enable(struct device_node *node, long param, long value) 1343 1343 { 1344 1344 struct macio_chip *macio = &macio_chips[0]; ··· 1550 1550 if (uninorth_maj == 3) 1551 1551 UN_OUT(U3_API_PHY_CONFIG_1, 0); 1552 1552 } 1553 - #endif /* CONFIG_POWER4 */ 1553 + #endif /* CONFIG_PPC64 */ 1554 1554 1555 - #ifndef CONFIG_POWER4 1555 + #ifndef CONFIG_PPC64 1556 1556 1557 1557 1558 1558 #ifdef CONFIG_PM ··· 1864 1864 return 0; 1865 1865 } 1866 1866 1867 - #endif /* CONFIG_POWER4 */ 1867 + #endif /* CONFIG_PPC64 */ 1868 1868 1869 1869 static long 1870 1870 generic_dev_can_wake(struct device_node *node, long param, long value) ··· 1906 1906 { 0, NULL } 1907 1907 }; 1908 1908 1909 - #ifndef CONFIG_POWER4 1909 + #ifndef CONFIG_PPC64 1910 1910 1911 1911 /* OHare based motherboards. Currently, we only use these on the 1912 1912 * 2400,3400 and 3500 series powerbooks. 
Some older desktops seem ··· 2056 2056 { 0, NULL } 2057 2057 }; 2058 2058 2059 - #else /* CONFIG_POWER4 */ 2059 + #else /* CONFIG_PPC64 */ 2060 2060 2061 2061 /* G5 features 2062 2062 */ ··· 2074 2074 { 0, NULL } 2075 2075 }; 2076 2076 2077 - #endif /* CONFIG_POWER4 */ 2077 + #endif /* CONFIG_PPC64 */ 2078 2078 2079 2079 static struct pmac_mb_def pmac_mb_defs[] = { 2080 - #ifndef CONFIG_POWER4 2080 + #ifndef CONFIG_PPC64 2081 2081 /* 2082 2082 * Desktops 2083 2083 */ ··· 2342 2342 PMAC_TYPE_UNKNOWN_INTREPID, intrepid_features, 2343 2343 PMAC_MB_MAY_SLEEP | PMAC_MB_HAS_FW_POWER | PMAC_MB_MOBILE, 2344 2344 }, 2345 - #else /* CONFIG_POWER4 */ 2345 + #else /* CONFIG_PPC64 */ 2346 2346 { "PowerMac7,2", "PowerMac G5", 2347 2347 PMAC_TYPE_POWERMAC_G5, g5_features, 2348 2348 0, ··· 2373 2373 0, 2374 2374 }, 2375 2375 #endif /* CONFIG_PPC64 */ 2376 - #endif /* CONFIG_POWER4 */ 2376 + #endif /* CONFIG_PPC64 */ 2377 2377 }; 2378 2378 2379 2379 /* ··· 2441 2441 2442 2442 /* Fallback to selection depending on mac-io chip type */ 2443 2443 switch(macio->type) { 2444 - #ifndef CONFIG_POWER4 2444 + #ifndef CONFIG_PPC64 2445 2445 case macio_grand_central: 2446 2446 pmac_mb.model_id = PMAC_TYPE_PSURGE; 2447 2447 pmac_mb.model_name = "Unknown PowerSurge"; ··· 2475 2475 pmac_mb.model_name = "Unknown Intrepid-based"; 2476 2476 pmac_mb.features = intrepid_features; 2477 2477 break; 2478 - #else /* CONFIG_POWER4 */ 2478 + #else /* CONFIG_PPC64 */ 2479 2479 case macio_keylargo2: 2480 2480 pmac_mb.model_id = PMAC_TYPE_UNKNOWN_K2; 2481 2481 pmac_mb.model_name = "Unknown K2-based"; ··· 2486 2486 pmac_mb.model_name = "Unknown Shasta-based"; 2487 2487 pmac_mb.features = g5_features; 2488 2488 break; 2489 - #endif /* CONFIG_POWER4 */ 2489 + #endif /* CONFIG_PPC64 */ 2490 2490 default: 2491 2491 ret = -ENODEV; 2492 2492 goto done; 2493 2493 } 2494 2494 found: 2495 - #ifndef CONFIG_POWER4 2495 + #ifndef CONFIG_PPC64 2496 2496 /* Fixup Hooper vs. 
Comet */ 2497 2497 if (pmac_mb.model_id == PMAC_TYPE_HOOPER) { 2498 2498 u32 __iomem * mach_id_ptr = ioremap(0xf3000034, 4); ··· 2546 2546 */ 2547 2547 powersave_lowspeed = 1; 2548 2548 2549 - #else /* CONFIG_POWER4 */ 2549 + #else /* CONFIG_PPC64 */ 2550 2550 powersave_nap = 1; 2551 - #endif /* CONFIG_POWER4 */ 2551 + #endif /* CONFIG_PPC64 */ 2552 2552 2553 2553 /* Check for "mobile" machine */ 2554 2554 if (model && (strncmp(model, "PowerBook", 9) == 0 ··· 2786 2786 MACIO_BIS(OHARE_FCR, OH_IOBUS_ENABLE); 2787 2787 } 2788 2788 2789 - #ifdef CONFIG_POWER4 2789 + #ifdef CONFIG_PPC64 2790 2790 if (macio_chips[0].type == macio_keylargo2 || 2791 2791 macio_chips[0].type == macio_shasta) { 2792 2792 #ifndef CONFIG_SMP ··· 2826 2826 np = of_find_node_by_name(np, "firewire"); 2827 2827 } 2828 2828 } 2829 - #else /* CONFIG_POWER4 */ 2829 + #else /* CONFIG_PPC64 */ 2830 2830 2831 2831 if (macio_chips[0].type == macio_keylargo || 2832 2832 macio_chips[0].type == macio_pangea || ··· 2895 2895 MACIO_BIC(HEATHROW_FCR, HRW_SOUND_POWER_N); 2896 2896 } 2897 2897 2898 - #endif /* CONFIG_POWER4 */ 2898 + #endif /* CONFIG_PPC64 */ 2899 2899 2900 2900 /* On all machines, switch modem & serial ports off */ 2901 2901 for_each_node_by_name(np, "ch-a")
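The feature.c hunks above mechanically swap `CONFIG_POWER4` guards for `CONFIG_PPC64`: with pre-POWER4 64-bit support removed, the two conditions are equivalent, and the remaining 64-bit PowerMacs are all G5-class. A minimal sketch of the guard pattern, with hypothetical stand-ins for the real feature tables in `arch/powerpc/platforms/powermac/feature.c`:

```c
#include <assert.h>
#include <string.h>

/* Hypothetical stand-in for the pmac fallback model selection; the real
 * tables live in feature.c. CONFIG_PPC64 is forced on here for the sketch. */
#define CONFIG_PPC64 1

#ifdef CONFIG_PPC64
/* 64-bit kernels only ever run on G5-class (K2/Shasta) machines. */
static const char *default_model = "Unknown K2-based";
#else
/* 32-bit kernels cover the older OHare/Heathrow/KeyLargo boards. */
static const char *default_model = "Unknown PowerSurge";
#endif

const char *pmac_default_model(void)
{
	return default_model;
}
```

The point of the rename is exactly that both `#ifdef` spellings now select the same code on every supported configuration.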
+2 -1
arch/powerpc/platforms/powernv/Makefile
··· 1 1 obj-y += setup.o opal-wrappers.o opal.o opal-async.o 2 2 obj-y += opal-rtc.o opal-nvram.o opal-lpc.o opal-flash.o 3 3 obj-y += rng.o opal-elog.o opal-dump.o opal-sysparam.o opal-sensor.o 4 - obj-y += opal-msglog.o 4 + obj-y += opal-msglog.o opal-hmi.o 5 5 6 6 obj-$(CONFIG_SMP) += smp.o subcore.o subcore-asm.o 7 7 obj-$(CONFIG_PCI) += pci.o pci-p5ioc2.o pci-ioda.o 8 8 obj-$(CONFIG_EEH) += eeh-ioda.o eeh-powernv.o 9 9 obj-$(CONFIG_PPC_SCOM) += opal-xscom.o 10 10 obj-$(CONFIG_MEMORY_FAILURE) += opal-memory-errors.o 11 + obj-$(CONFIG_TRACEPOINTS) += opal-tracepoints.o
+262 -177
arch/powerpc/platforms/powernv/eeh-ioda.c
··· 187 187 */ 188 188 static int ioda_eeh_set_option(struct eeh_pe *pe, int option) 189 189 { 190 - s64 ret; 191 - u32 pe_no; 192 190 struct pci_controller *hose = pe->phb; 193 191 struct pnv_phb *phb = hose->private_data; 192 + int enable, ret = 0; 193 + s64 rc; 194 194 195 195 /* Check on PE number */ 196 196 if (pe->addr < 0 || pe->addr >= phb->ioda.total_pe) { ··· 201 201 return -EINVAL; 202 202 } 203 203 204 - pe_no = pe->addr; 205 204 switch (option) { 206 205 case EEH_OPT_DISABLE: 207 - ret = -EEXIST; 208 - break; 206 + return -EPERM; 209 207 case EEH_OPT_ENABLE: 210 - ret = 0; 211 - break; 208 + return 0; 212 209 case EEH_OPT_THAW_MMIO: 213 - ret = opal_pci_eeh_freeze_clear(phb->opal_id, pe_no, 214 - OPAL_EEH_ACTION_CLEAR_FREEZE_MMIO); 215 - if (ret) { 216 - pr_warning("%s: Failed to enable MMIO for " 217 - "PHB#%x-PE#%x, err=%lld\n", 218 - __func__, hose->global_number, pe_no, ret); 219 - return -EIO; 220 - } 221 - 210 + enable = OPAL_EEH_ACTION_CLEAR_FREEZE_MMIO; 222 211 break; 223 212 case EEH_OPT_THAW_DMA: 224 - ret = opal_pci_eeh_freeze_clear(phb->opal_id, pe_no, 225 - OPAL_EEH_ACTION_CLEAR_FREEZE_DMA); 226 - if (ret) { 227 - pr_warning("%s: Failed to enable DMA for " 228 - "PHB#%x-PE#%x, err=%lld\n", 229 - __func__, hose->global_number, pe_no, ret); 230 - return -EIO; 231 - } 232 - 213 + enable = OPAL_EEH_ACTION_CLEAR_FREEZE_DMA; 233 214 break; 234 215 default: 235 - pr_warning("%s: Invalid option %d\n", __func__, option); 216 + pr_warn("%s: Invalid option %d\n", 217 + __func__, option); 236 218 return -EINVAL; 219 + } 220 + 221 + /* If PHB supports compound PE, to handle it */ 222 + if (phb->unfreeze_pe) { 223 + ret = phb->unfreeze_pe(phb, pe->addr, enable); 224 + } else { 225 + rc = opal_pci_eeh_freeze_clear(phb->opal_id, 226 + pe->addr, 227 + enable); 228 + if (rc != OPAL_SUCCESS) { 229 + pr_warn("%s: Failure %lld enable %d for PHB#%x-PE#%x\n", 230 + __func__, rc, option, phb->hose->global_number, 231 + pe->addr); 232 + ret = -EIO; 233 + } 237 234 
} 238 235 239 236 return ret; 240 237 } 241 238 242 - static void ioda_eeh_phb_diag(struct pci_controller *hose) 239 + static void ioda_eeh_phb_diag(struct eeh_pe *pe) 243 240 { 244 - struct pnv_phb *phb = hose->private_data; 241 + struct pnv_phb *phb = pe->phb->private_data; 245 242 long rc; 246 243 247 - rc = opal_pci_get_phb_diag_data2(phb->opal_id, phb->diag.blob, 244 + rc = opal_pci_get_phb_diag_data2(phb->opal_id, pe->data, 248 245 PNV_PCI_DIAG_BUF_SIZE); 246 + if (rc != OPAL_SUCCESS) 247 + pr_warn("%s: Failed to get diag-data for PHB#%x (%ld)\n", 248 + __func__, pe->phb->global_number, rc); 249 + } 250 + 251 + static int ioda_eeh_get_phb_state(struct eeh_pe *pe) 252 + { 253 + struct pnv_phb *phb = pe->phb->private_data; 254 + u8 fstate; 255 + __be16 pcierr; 256 + s64 rc; 257 + int result = 0; 258 + 259 + rc = opal_pci_eeh_freeze_status(phb->opal_id, 260 + pe->addr, 261 + &fstate, 262 + &pcierr, 263 + NULL); 249 264 if (rc != OPAL_SUCCESS) { 250 - pr_warning("%s: Failed to get diag-data for PHB#%x (%ld)\n", 251 - __func__, hose->global_number, rc); 252 - return; 265 + pr_warn("%s: Failure %lld getting PHB#%x state\n", 266 + __func__, rc, phb->hose->global_number); 267 + return EEH_STATE_NOT_SUPPORT; 253 268 } 254 269 255 - pnv_pci_dump_phb_diag_data(hose, phb->diag.blob); 270 + /* 271 + * Check PHB state. If the PHB is frozen for the 272 + * first time, to dump the PHB diag-data. 
273 + */ 274 + if (be16_to_cpu(pcierr) != OPAL_EEH_PHB_ERROR) { 275 + result = (EEH_STATE_MMIO_ACTIVE | 276 + EEH_STATE_DMA_ACTIVE | 277 + EEH_STATE_MMIO_ENABLED | 278 + EEH_STATE_DMA_ENABLED); 279 + } else if (!(pe->state & EEH_PE_ISOLATED)) { 280 + eeh_pe_state_mark(pe, EEH_PE_ISOLATED); 281 + ioda_eeh_phb_diag(pe); 282 + } 283 + 284 + return result; 285 + } 286 + 287 + static int ioda_eeh_get_pe_state(struct eeh_pe *pe) 288 + { 289 + struct pnv_phb *phb = pe->phb->private_data; 290 + u8 fstate; 291 + __be16 pcierr; 292 + s64 rc; 293 + int result; 294 + 295 + /* 296 + * We don't clobber hardware frozen state until PE 297 + * reset is completed. In order to keep EEH core 298 + * moving forward, we have to return operational 299 + * state during PE reset. 300 + */ 301 + if (pe->state & EEH_PE_RESET) { 302 + result = (EEH_STATE_MMIO_ACTIVE | 303 + EEH_STATE_DMA_ACTIVE | 304 + EEH_STATE_MMIO_ENABLED | 305 + EEH_STATE_DMA_ENABLED); 306 + return result; 307 + } 308 + 309 + /* 310 + * Fetch PE state from hardware. If the PHB 311 + * supports compound PE, let it handle that. 
312 + */ 313 + if (phb->get_pe_state) { 314 + fstate = phb->get_pe_state(phb, pe->addr); 315 + } else { 316 + rc = opal_pci_eeh_freeze_status(phb->opal_id, 317 + pe->addr, 318 + &fstate, 319 + &pcierr, 320 + NULL); 321 + if (rc != OPAL_SUCCESS) { 322 + pr_warn("%s: Failure %lld getting PHB#%x-PE%x state\n", 323 + __func__, rc, phb->hose->global_number, pe->addr); 324 + return EEH_STATE_NOT_SUPPORT; 325 + } 326 + } 327 + 328 + /* Figure out state */ 329 + switch (fstate) { 330 + case OPAL_EEH_STOPPED_NOT_FROZEN: 331 + result = (EEH_STATE_MMIO_ACTIVE | 332 + EEH_STATE_DMA_ACTIVE | 333 + EEH_STATE_MMIO_ENABLED | 334 + EEH_STATE_DMA_ENABLED); 335 + break; 336 + case OPAL_EEH_STOPPED_MMIO_FREEZE: 337 + result = (EEH_STATE_DMA_ACTIVE | 338 + EEH_STATE_DMA_ENABLED); 339 + break; 340 + case OPAL_EEH_STOPPED_DMA_FREEZE: 341 + result = (EEH_STATE_MMIO_ACTIVE | 342 + EEH_STATE_MMIO_ENABLED); 343 + break; 344 + case OPAL_EEH_STOPPED_MMIO_DMA_FREEZE: 345 + result = 0; 346 + break; 347 + case OPAL_EEH_STOPPED_RESET: 348 + result = EEH_STATE_RESET_ACTIVE; 349 + break; 350 + case OPAL_EEH_STOPPED_TEMP_UNAVAIL: 351 + result = EEH_STATE_UNAVAILABLE; 352 + break; 353 + case OPAL_EEH_STOPPED_PERM_UNAVAIL: 354 + result = EEH_STATE_NOT_SUPPORT; 355 + break; 356 + default: 357 + result = EEH_STATE_NOT_SUPPORT; 358 + pr_warn("%s: Invalid PHB#%x-PE#%x state %x\n", 359 + __func__, phb->hose->global_number, 360 + pe->addr, fstate); 361 + } 362 + 363 + /* 364 + * If PHB supports compound PE, to freeze all 365 + * slave PEs for consistency. 366 + * 367 + * If the PE is switching to frozen state for the 368 + * first time, to dump the PHB diag-data. 
369 + */ 370 + if (!(result & EEH_STATE_NOT_SUPPORT) && 371 + !(result & EEH_STATE_UNAVAILABLE) && 372 + !(result & EEH_STATE_MMIO_ACTIVE) && 373 + !(result & EEH_STATE_DMA_ACTIVE) && 374 + !(pe->state & EEH_PE_ISOLATED)) { 375 + if (phb->freeze_pe) 376 + phb->freeze_pe(phb, pe->addr); 377 + 378 + eeh_pe_state_mark(pe, EEH_PE_ISOLATED); 379 + ioda_eeh_phb_diag(pe); 380 + } 381 + 382 + return result; 256 383 } 257 384 258 385 /** ··· 392 265 */ 393 266 static int ioda_eeh_get_state(struct eeh_pe *pe) 394 267 { 395 - s64 ret = 0; 396 - u8 fstate; 397 - __be16 pcierr; 398 - u32 pe_no; 399 - int result; 400 - struct pci_controller *hose = pe->phb; 401 - struct pnv_phb *phb = hose->private_data; 268 + struct pnv_phb *phb = pe->phb->private_data; 402 269 403 - /* 404 - * Sanity check on PE address. The PHB PE address should 405 - * be zero. 406 - */ 407 - if (pe->addr < 0 || pe->addr >= phb->ioda.total_pe) { 408 - pr_err("%s: PE address %x out of range [0, %x] " 409 - "on PHB#%x\n", 410 - __func__, pe->addr, phb->ioda.total_pe, 411 - hose->global_number); 270 + /* Sanity check on PE number. PHB PE should have 0 */ 271 + if (pe->addr < 0 || 272 + pe->addr >= phb->ioda.total_pe) { 273 + pr_warn("%s: PHB#%x-PE#%x out of range [0, %x]\n", 274 + __func__, phb->hose->global_number, 275 + pe->addr, phb->ioda.total_pe); 412 276 return EEH_STATE_NOT_SUPPORT; 413 277 } 414 278 415 - /* 416 - * If we're in middle of PE reset, return normal 417 - * state to keep EEH core going. For PHB reset, we 418 - * still expect to have fenced PHB cleared with 419 - * PHB reset. 
420 - */ 421 - if (!(pe->type & EEH_PE_PHB) && 422 - (pe->state & EEH_PE_RESET)) { 423 - result = (EEH_STATE_MMIO_ACTIVE | 424 - EEH_STATE_DMA_ACTIVE | 425 - EEH_STATE_MMIO_ENABLED | 426 - EEH_STATE_DMA_ENABLED); 427 - return result; 428 - } 279 + if (pe->type & EEH_PE_PHB) 280 + return ioda_eeh_get_phb_state(pe); 429 281 430 - /* Retrieve PE status through OPAL */ 431 - pe_no = pe->addr; 432 - ret = opal_pci_eeh_freeze_status(phb->opal_id, pe_no, 433 - &fstate, &pcierr, NULL); 434 - if (ret) { 435 - pr_err("%s: Failed to get EEH status on " 436 - "PHB#%x-PE#%x\n, err=%lld\n", 437 - __func__, hose->global_number, pe_no, ret); 438 - return EEH_STATE_NOT_SUPPORT; 439 - } 440 - 441 - /* Check PHB status */ 442 - if (pe->type & EEH_PE_PHB) { 443 - result = 0; 444 - result &= ~EEH_STATE_RESET_ACTIVE; 445 - 446 - if (be16_to_cpu(pcierr) != OPAL_EEH_PHB_ERROR) { 447 - result |= EEH_STATE_MMIO_ACTIVE; 448 - result |= EEH_STATE_DMA_ACTIVE; 449 - result |= EEH_STATE_MMIO_ENABLED; 450 - result |= EEH_STATE_DMA_ENABLED; 451 - } else if (!(pe->state & EEH_PE_ISOLATED)) { 452 - eeh_pe_state_mark(pe, EEH_PE_ISOLATED); 453 - ioda_eeh_phb_diag(hose); 454 - } 455 - 456 - return result; 457 - } 458 - 459 - /* Parse result out */ 460 - result = 0; 461 - switch (fstate) { 462 - case OPAL_EEH_STOPPED_NOT_FROZEN: 463 - result &= ~EEH_STATE_RESET_ACTIVE; 464 - result |= EEH_STATE_MMIO_ACTIVE; 465 - result |= EEH_STATE_DMA_ACTIVE; 466 - result |= EEH_STATE_MMIO_ENABLED; 467 - result |= EEH_STATE_DMA_ENABLED; 468 - break; 469 - case OPAL_EEH_STOPPED_MMIO_FREEZE: 470 - result &= ~EEH_STATE_RESET_ACTIVE; 471 - result |= EEH_STATE_DMA_ACTIVE; 472 - result |= EEH_STATE_DMA_ENABLED; 473 - break; 474 - case OPAL_EEH_STOPPED_DMA_FREEZE: 475 - result &= ~EEH_STATE_RESET_ACTIVE; 476 - result |= EEH_STATE_MMIO_ACTIVE; 477 - result |= EEH_STATE_MMIO_ENABLED; 478 - break; 479 - case OPAL_EEH_STOPPED_MMIO_DMA_FREEZE: 480 - result &= ~EEH_STATE_RESET_ACTIVE; 481 - break; 482 - case 
OPAL_EEH_STOPPED_RESET: 483 - result |= EEH_STATE_RESET_ACTIVE; 484 - break; 485 - case OPAL_EEH_STOPPED_TEMP_UNAVAIL: 486 - result |= EEH_STATE_UNAVAILABLE; 487 - break; 488 - case OPAL_EEH_STOPPED_PERM_UNAVAIL: 489 - result |= EEH_STATE_NOT_SUPPORT; 490 - break; 491 - default: 492 - pr_warning("%s: Unexpected EEH status 0x%x " 493 - "on PHB#%x-PE#%x\n", 494 - __func__, fstate, hose->global_number, pe_no); 495 - } 496 - 497 - /* Dump PHB diag-data for frozen PE */ 498 - if (result != EEH_STATE_NOT_SUPPORT && 499 - (result & (EEH_STATE_MMIO_ACTIVE | EEH_STATE_DMA_ACTIVE)) != 500 - (EEH_STATE_MMIO_ACTIVE | EEH_STATE_DMA_ACTIVE) && 501 - !(pe->state & EEH_PE_ISOLATED)) { 502 - eeh_pe_state_mark(pe, EEH_PE_ISOLATED); 503 - ioda_eeh_phb_diag(hose); 504 - } 505 - 506 - return result; 282 + return ioda_eeh_get_pe_state(pe); 507 283 } 508 284 509 285 static s64 ioda_eeh_phb_poll(struct pnv_phb *phb) ··· 619 589 } 620 590 621 591 /** 592 + * ioda_eeh_get_log - Retrieve error log 593 + * @pe: frozen PE 594 + * @severity: permanent or temporary error 595 + * @drv_log: device driver log 596 + * @len: length of device driver log 597 + * 598 + * Retrieve error log, which contains log from device driver 599 + * and firmware. 
600 + */ 601 + int ioda_eeh_get_log(struct eeh_pe *pe, int severity, 602 + char *drv_log, unsigned long len) 603 + { 604 + pnv_pci_dump_phb_diag_data(pe->phb, pe->data); 605 + 606 + return 0; 607 + } 608 + 609 + /** 622 610 * ioda_eeh_configure_bridge - Configure the PCI bridges for the indicated PE 623 611 * @pe: EEH PE 624 612 * ··· 653 605 static void ioda_eeh_hub_diag_common(struct OpalIoP7IOCErrorData *data) 654 606 { 655 607 /* GEM */ 656 - pr_info(" GEM XFIR: %016llx\n", data->gemXfir); 657 - pr_info(" GEM RFIR: %016llx\n", data->gemRfir); 658 - pr_info(" GEM RIRQFIR: %016llx\n", data->gemRirqfir); 659 - pr_info(" GEM Mask: %016llx\n", data->gemMask); 660 - pr_info(" GEM RWOF: %016llx\n", data->gemRwof); 608 + if (data->gemXfir || data->gemRfir || 609 + data->gemRirqfir || data->gemMask || data->gemRwof) 610 + pr_info(" GEM: %016llx %016llx %016llx %016llx %016llx\n", 611 + be64_to_cpu(data->gemXfir), 612 + be64_to_cpu(data->gemRfir), 613 + be64_to_cpu(data->gemRirqfir), 614 + be64_to_cpu(data->gemMask), 615 + be64_to_cpu(data->gemRwof)); 661 616 662 617 /* LEM */ 663 - pr_info(" LEM FIR: %016llx\n", data->lemFir); 664 - pr_info(" LEM Error Mask: %016llx\n", data->lemErrMask); 665 - pr_info(" LEM Action 0: %016llx\n", data->lemAction0); 666 - pr_info(" LEM Action 1: %016llx\n", data->lemAction1); 667 - pr_info(" LEM WOF: %016llx\n", data->lemWof); 618 + if (data->lemFir || data->lemErrMask || 619 + data->lemAction0 || data->lemAction1 || data->lemWof) 620 + pr_info(" LEM: %016llx %016llx %016llx %016llx %016llx\n", 621 + be64_to_cpu(data->lemFir), 622 + be64_to_cpu(data->lemErrMask), 623 + be64_to_cpu(data->lemAction0), 624 + be64_to_cpu(data->lemAction1), 625 + be64_to_cpu(data->lemWof)); 668 626 } 669 627 670 628 static void ioda_eeh_hub_diag(struct pci_controller *hose) ··· 681 627 682 628 rc = opal_pci_get_hub_diag_data(phb->hub_id, data, sizeof(*data)); 683 629 if (rc != OPAL_SUCCESS) { 684 - pr_warning("%s: Failed to get HUB#%llx diag-data (%ld)\n", 
685 - __func__, phb->hub_id, rc); 630 + pr_warn("%s: Failed to get HUB#%llx diag-data (%ld)\n", 631 + __func__, phb->hub_id, rc); 686 632 return; 687 633 } 688 634 ··· 690 636 case OPAL_P7IOC_DIAG_TYPE_RGC: 691 637 pr_info("P7IOC diag-data for RGC\n\n"); 692 638 ioda_eeh_hub_diag_common(data); 693 - pr_info(" RGC Status: %016llx\n", data->rgc.rgcStatus); 694 - pr_info(" RGC LDCP: %016llx\n", data->rgc.rgcLdcp); 639 + if (data->rgc.rgcStatus || data->rgc.rgcLdcp) 640 + pr_info(" RGC: %016llx %016llx\n", 641 + be64_to_cpu(data->rgc.rgcStatus), 642 + be64_to_cpu(data->rgc.rgcLdcp)); 695 643 break; 696 644 case OPAL_P7IOC_DIAG_TYPE_BI: 697 645 pr_info("P7IOC diag-data for BI %s\n\n", 698 646 data->bi.biDownbound ? "Downbound" : "Upbound"); 699 647 ioda_eeh_hub_diag_common(data); 700 - pr_info(" BI LDCP 0: %016llx\n", data->bi.biLdcp0); 701 - pr_info(" BI LDCP 1: %016llx\n", data->bi.biLdcp1); 702 - pr_info(" BI LDCP 2: %016llx\n", data->bi.biLdcp2); 703 - pr_info(" BI Fence Status: %016llx\n", data->bi.biFenceStatus); 648 + if (data->bi.biLdcp0 || data->bi.biLdcp1 || 649 + data->bi.biLdcp2 || data->bi.biFenceStatus) 650 + pr_info(" BI: %016llx %016llx %016llx %016llx\n", 651 + be64_to_cpu(data->bi.biLdcp0), 652 + be64_to_cpu(data->bi.biLdcp1), 653 + be64_to_cpu(data->bi.biLdcp2), 654 + be64_to_cpu(data->bi.biFenceStatus)); 704 655 break; 705 656 case OPAL_P7IOC_DIAG_TYPE_CI: 706 - pr_info("P7IOC diag-data for CI Port %d\\nn", 657 + pr_info("P7IOC diag-data for CI Port %d\n\n", 707 658 data->ci.ciPort); 708 659 ioda_eeh_hub_diag_common(data); 709 - pr_info(" CI Port Status: %016llx\n", data->ci.ciPortStatus); 710 - pr_info(" CI Port LDCP: %016llx\n", data->ci.ciPortLdcp); 660 + if (data->ci.ciPortStatus || data->ci.ciPortLdcp) 661 + pr_info(" CI: %016llx %016llx\n", 662 + be64_to_cpu(data->ci.ciPortStatus), 663 + be64_to_cpu(data->ci.ciPortLdcp)); 711 664 break; 712 665 case OPAL_P7IOC_DIAG_TYPE_MISC: 713 666 pr_info("P7IOC diag-data for MISC\n\n"); ··· 725 664 
ioda_eeh_hub_diag_common(data); 726 665 break; 727 666 default: 728 - pr_warning("%s: Invalid type of HUB#%llx diag-data (%d)\n", 729 - __func__, phb->hub_id, data->type); 667 + pr_warn("%s: Invalid type of HUB#%llx diag-data (%d)\n", 668 + __func__, phb->hub_id, data->type); 730 669 } 731 670 } 732 671 733 672 static int ioda_eeh_get_pe(struct pci_controller *hose, 734 673 u16 pe_no, struct eeh_pe **pe) 735 674 { 736 - struct eeh_pe *phb_pe, *dev_pe; 737 - struct eeh_dev dev; 675 + struct pnv_phb *phb = hose->private_data; 676 + struct pnv_ioda_pe *pnv_pe; 677 + struct eeh_pe *dev_pe; 678 + struct eeh_dev edev; 738 679 739 - /* Find the PHB PE */ 740 - phb_pe = eeh_phb_pe_get(hose); 741 - if (!phb_pe) 742 - return -EEXIST; 680 + /* 681 + * If PHB supports compound PE, to fetch 682 + * the master PE because slave PE is invisible 683 + * to EEH core. 684 + */ 685 + if (phb->get_pe_state) { 686 + pnv_pe = &phb->ioda.pe_array[pe_no]; 687 + if (pnv_pe->flags & PNV_IODA_PE_SLAVE) { 688 + pnv_pe = pnv_pe->master; 689 + WARN_ON(!pnv_pe || 690 + !(pnv_pe->flags & PNV_IODA_PE_MASTER)); 691 + pe_no = pnv_pe->pe_number; 692 + } 693 + } 743 694 744 695 /* Find the PE according to PE# */ 745 - memset(&dev, 0, sizeof(struct eeh_dev)); 746 - dev.phb = hose; 747 - dev.pe_config_addr = pe_no; 748 - dev_pe = eeh_pe_get(&dev); 749 - if (!dev_pe) return -EEXIST; 696 + memset(&edev, 0, sizeof(struct eeh_dev)); 697 + edev.phb = hose; 698 + edev.pe_config_addr = pe_no; 699 + dev_pe = eeh_pe_get(&edev); 700 + if (!dev_pe) 701 + return -EEXIST; 750 702 703 + /* 704 + * At this point, we're sure the compound PE should 705 + * be put into frozen state. 
706 + */ 751 707 *pe = dev_pe; 708 + if (phb->freeze_pe && 709 + !(dev_pe->state & EEH_PE_ISOLATED)) 710 + phb->freeze_pe(phb, pe_no); 711 + 752 712 return 0; 753 713 } 754 714 ··· 874 792 "detected, location: %s\n", 875 793 hose->global_number, 876 794 eeh_pe_loc_get(phb_pe)); 877 - ioda_eeh_phb_diag(hose); 795 + ioda_eeh_phb_diag(phb_pe); 796 + pnv_pci_dump_phb_diag_data(hose, phb_pe->data); 878 797 ret = EEH_NEXT_ERR_NONE; 879 798 } 880 799 ··· 895 812 opal_pci_eeh_freeze_clear(phb->opal_id, frozen_pe_no, 896 813 OPAL_EEH_ACTION_CLEAR_FREEZE_ALL); 897 814 ret = EEH_NEXT_ERR_NONE; 898 - } else if ((*pe)->state & EEH_PE_ISOLATED) { 815 + } else if ((*pe)->state & EEH_PE_ISOLATED || 816 + eeh_pe_passed(*pe)) { 899 817 ret = EEH_NEXT_ERR_NONE; 900 818 } else { 901 819 pr_err("EEH: Frozen PE#%x on PHB#%x detected\n", ··· 923 839 ret == EEH_NEXT_ERR_FENCED_PHB) && 924 840 !((*pe)->state & EEH_PE_ISOLATED)) { 925 841 eeh_pe_state_mark(*pe, EEH_PE_ISOLATED); 926 - ioda_eeh_phb_diag(hose); 842 + ioda_eeh_phb_diag(*pe); 927 843 } 928 844 929 845 /* ··· 969 885 .set_option = ioda_eeh_set_option, 970 886 .get_state = ioda_eeh_get_state, 971 887 .reset = ioda_eeh_reset, 888 + .get_log = ioda_eeh_get_log, 972 889 .configure_bridge = ioda_eeh_configure_bridge, 973 890 .next_error = ioda_eeh_next_error 974 891 };
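The new `ioda_eeh_get_pe_state()` above centralizes the translation of a firmware freeze state into the EEH core's state bitmask. A self-contained sketch of that switch, using illustrative constant values rather than the kernel's actual ones:

```c
#include <assert.h>

/* Simplified stand-ins for the EEH/OPAL constants used by
 * ioda_eeh_get_pe_state(); the values are illustrative only. */
enum {
	EEH_STATE_MMIO_ACTIVE  = 0x01,
	EEH_STATE_DMA_ACTIVE   = 0x02,
	EEH_STATE_MMIO_ENABLED = 0x04,
	EEH_STATE_DMA_ENABLED  = 0x08,
	EEH_STATE_RESET_ACTIVE = 0x10,
};

enum {
	OPAL_EEH_STOPPED_NOT_FROZEN      = 0,
	OPAL_EEH_STOPPED_MMIO_FREEZE     = 1,
	OPAL_EEH_STOPPED_DMA_FREEZE      = 2,
	OPAL_EEH_STOPPED_MMIO_DMA_FREEZE = 3,
	OPAL_EEH_STOPPED_RESET           = 4,
};

/* Map a firmware freeze state to the EEH core's state bitmask,
 * mirroring the switch in ioda_eeh_get_pe_state(). */
static int decode_freeze_state(int fstate)
{
	switch (fstate) {
	case OPAL_EEH_STOPPED_NOT_FROZEN:
		return EEH_STATE_MMIO_ACTIVE | EEH_STATE_DMA_ACTIVE |
		       EEH_STATE_MMIO_ENABLED | EEH_STATE_DMA_ENABLED;
	case OPAL_EEH_STOPPED_MMIO_FREEZE:
		return EEH_STATE_DMA_ACTIVE | EEH_STATE_DMA_ENABLED;
	case OPAL_EEH_STOPPED_DMA_FREEZE:
		return EEH_STATE_MMIO_ACTIVE | EEH_STATE_MMIO_ENABLED;
	case OPAL_EEH_STOPPED_MMIO_DMA_FREEZE:
		return 0;	/* fully frozen */
	case OPAL_EEH_STOPPED_RESET:
		return EEH_STATE_RESET_ACTIVE;
	default:
		return 0;	/* unknown: treat as frozen */
	}
}
```

Splitting PHB state and PE state into separate helpers is what lets the compound-PE hooks (`get_pe_state`, `freeze_pe`, `unfreeze_pe`) slot in without duplicating this decoding.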
+42 -13
arch/powerpc/platforms/powernv/eeh-powernv.c
··· 45 45 */ 46 46 static int powernv_eeh_init(void) 47 47 { 48 + struct pci_controller *hose; 49 + struct pnv_phb *phb; 50 + 48 51 /* We require OPALv3 */ 49 52 if (!firmware_has_feature(FW_FEATURE_OPALv3)) { 50 - pr_warning("%s: OPALv3 is required !\n", __func__); 53 + pr_warn("%s: OPALv3 is required !\n", 54 + __func__); 51 55 return -EINVAL; 52 56 } 53 57 54 - /* Set EEH probe mode */ 55 - eeh_probe_mode_set(EEH_PROBE_MODE_DEV); 58 + /* Set probe mode */ 59 + eeh_add_flag(EEH_PROBE_MODE_DEV); 60 + 61 + /* 62 + * P7IOC blocks PCI config access to frozen PE, but PHB3 63 + * doesn't do that. So we have to selectively enable I/O 64 + * prior to collecting error log. 65 + */ 66 + list_for_each_entry(hose, &hose_list, list_node) { 67 + phb = hose->private_data; 68 + 69 + if (phb->model == PNV_PHB_MODEL_P7IOC) 70 + eeh_add_flag(EEH_ENABLE_IO_FOR_LOG); 71 + break; 72 + } 56 73 57 74 return 0; 58 75 } ··· 124 107 struct pnv_phb *phb = hose->private_data; 125 108 struct device_node *dn = pci_device_to_OF_node(dev); 126 109 struct eeh_dev *edev = of_node_to_eeh_dev(dn); 110 + int ret; 127 111 128 112 /* 129 113 * When probing the root bridge, which doesn't have any ··· 161 143 edev->pe_config_addr = phb->bdfn_to_pe(phb, dev->bus, dev->devfn & 0xff); 162 144 163 145 /* Create PE */ 164 - eeh_add_to_parent_pe(edev); 146 + ret = eeh_add_to_parent_pe(edev); 147 + if (ret) { 148 + pr_warn("%s: Can't add PCI dev %s to parent PE (%d)\n", 149 + __func__, pci_name(dev), ret); 150 + return ret; 151 + } 152 + 153 + /* 154 + * Cache the PE primary bus, which can't be fetched when 155 + * full hotplug is in progress. In that case, all child 156 + * PCI devices of the PE are expected to be removed prior 157 + * to PE reset. 
158 + */ 159 + if (!edev->pe->bus) 160 + edev->pe->bus = dev->bus; 165 161 166 162 /* 167 163 * Enable EEH explicitly so that we will do EEH check 168 164 * while accessing I/O stuff 169 165 */ 170 - eeh_set_enable(true); 166 + eeh_add_flag(EEH_ENABLED); 171 167 172 168 /* Save memory bars */ 173 169 eeh_save_bars(edev); ··· 305 273 306 274 max_wait -= mwait; 307 275 if (max_wait <= 0) { 308 - pr_warning("%s: Timeout getting PE#%x's state (%d)\n", 309 - __func__, pe->addr, max_wait); 276 + pr_warn("%s: Timeout getting PE#%x's state (%d)\n", 277 + __func__, pe->addr, max_wait); 310 278 return EEH_STATE_NOT_SUPPORT; 311 279 } 312 280 ··· 326 294 * Retrieve the temporary or permanent error from the PE. 327 295 */ 328 296 static int powernv_eeh_get_log(struct eeh_pe *pe, int severity, 329 - char *drv_log, unsigned long len) 297 + char *drv_log, unsigned long len) 330 298 { 331 299 struct pci_controller *hose = pe->phb; 332 300 struct pnv_phb *phb = hose->private_data; ··· 430 398 { 431 399 int ret = -EINVAL; 432 400 433 - if (!machine_is(powernv)) 434 - return ret; 435 - 401 + eeh_set_pe_aux_size(PNV_PCI_DIAG_BUF_SIZE); 436 402 ret = eeh_ops_register(&powernv_eeh_ops); 437 403 if (!ret) 438 404 pr_info("EEH: PowerNV platform initialized\n"); ··· 439 409 440 410 return ret; 441 411 } 442 - 443 - early_initcall(eeh_powernv_init); 412 + machine_early_initcall(powernv, eeh_powernv_init);
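The eeh-powernv.c hunk replaces the old `eeh_probe_mode_set()` / `eeh_set_enable()` calls with a single `eeh_add_flag()` accessor over one flags word, which also makes per-platform tweaks like `EEH_ENABLE_IO_FOR_LOG` (needed because P7IOC blocks config access to frozen PEs while PHB3 does not) one-liners. A sketch of that flag-word pattern, with illustrative flag values rather than the real ones from `asm/eeh.h`:

```c
#include <assert.h>

/* Illustrative flag values; the kernel's actual EEH flags differ. */
#define EEH_ENABLED		0x01
#define EEH_PROBE_MODE_DEV	0x04
#define EEH_ENABLE_IO_FOR_LOG	0x10

static int eeh_subsystem_flags;

static void eeh_add_flag(int flag)
{
	eeh_subsystem_flags |= flag;
}

static void eeh_clear_flag(int flag)
{
	eeh_subsystem_flags &= ~flag;
}

static int eeh_has_flag(int flag)
{
	return !!(eeh_subsystem_flags & flag);
}
```

One word of state with set/clear/test helpers scales better than a dedicated setter per capability, which is presumably why the interface was converted here.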
+2 -1
arch/powerpc/platforms/powernv/opal-async.c
··· 20 20 #include <linux/wait.h> 21 21 #include <linux/gfp.h> 22 22 #include <linux/of.h> 23 + #include <asm/machdep.h> 23 24 #include <asm/opal.h> 24 25 25 26 #define N_ASYNC_COMPLETIONS 64 ··· 202 201 out: 203 202 return err; 204 203 } 205 - subsys_initcall(opal_async_comp_init); 204 + machine_subsys_initcall(powernv, opal_async_comp_init);
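Several files in this merge (opal-async.c here, opal-lpc.c and opal-memory-errors.c below) convert bare initcalls to `machine_*_initcall(powernv, fn)`, so the init function only runs on the matching platform instead of open-coding a `machine_is()` check. A conceptual userspace sketch of what that gating buys, with illustrative names:

```c
#include <assert.h>
#include <string.h>

/* Stand-in for the booted platform identity. */
static const char *booted_platform = "powernv";

static int machine_is(const char *name)
{
	return strcmp(booted_platform, name) == 0;
}

static int init_ran;

/* Conceptually what machine_subsys_initcall(plat, fn) arranges:
 * the initcall body is skipped entirely on other platforms. */
static int run_machine_initcall(const char *plat, int (*fn)(void))
{
	if (!machine_is(plat))
		return 0;
	return fn();
}

static int opal_async_comp_init(void)
{
	init_ran = 1;
	return 0;
}
```

This matters on multiplatform kernels, where pseries and powernv code are built into the same image but only one platform's initcalls should fire.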
+188
arch/powerpc/platforms/powernv/opal-hmi.c
··· 1 + /* 2 + * OPAL hypervisor Maintenance interrupt handling support in PowreNV. 3 + * 4 + * This program is free software; you can redistribute it and/or modify 5 + * it under the terms of the GNU General Public License as published by 6 + * the Free Software Foundation; either version 2 of the License, or 7 + * (at your option) any later version. 8 + * 9 + * This program is distributed in the hope that it will be useful, 10 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 11 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 12 + * GNU General Public License for more details. 13 + * 14 + * You should have received a copy of the GNU General Public License 15 + * along with this program; If not, see <http://www.gnu.org/licenses/>. 16 + * 17 + * Copyright 2014 IBM Corporation 18 + * Author: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com> 19 + */ 20 + 21 + #undef DEBUG 22 + 23 + #include <linux/kernel.h> 24 + #include <linux/init.h> 25 + #include <linux/of.h> 26 + #include <linux/mm.h> 27 + #include <linux/slab.h> 28 + 29 + #include <asm/opal.h> 30 + #include <asm/cputable.h> 31 + 32 + static int opal_hmi_handler_nb_init; 33 + struct OpalHmiEvtNode { 34 + struct list_head list; 35 + struct OpalHMIEvent hmi_evt; 36 + }; 37 + static LIST_HEAD(opal_hmi_evt_list); 38 + static DEFINE_SPINLOCK(opal_hmi_evt_lock); 39 + 40 + static void print_hmi_event_info(struct OpalHMIEvent *hmi_evt) 41 + { 42 + const char *level, *sevstr, *error_info; 43 + static const char *hmi_error_types[] = { 44 + "Malfunction Alert", 45 + "Processor Recovery done", 46 + "Processor recovery occurred again", 47 + "Processor recovery occurred for masked error", 48 + "Timer facility experienced an error", 49 + "TFMR SPR is corrupted", 50 + "UPS (Uniterrupted Power System) Overflow indication", 51 + "An XSCOM operation failure", 52 + "An XSCOM operation completed", 53 + "SCOM has set a reserved FIR bit to cause recovery", 54 + "Debug trigger has set a reserved FIR bit to 
cause recovery", 55 + "A hypervisor resource error occurred" 56 + }; 57 + 58 + /* Print things out */ 59 + if (hmi_evt->version != OpalHMIEvt_V1) { 60 + pr_err("HMI Interrupt, Unknown event version %d !\n", 61 + hmi_evt->version); 62 + return; 63 + } 64 + switch (hmi_evt->severity) { 65 + case OpalHMI_SEV_NO_ERROR: 66 + level = KERN_INFO; 67 + sevstr = "Harmless"; 68 + break; 69 + case OpalHMI_SEV_WARNING: 70 + level = KERN_WARNING; 71 + sevstr = ""; 72 + break; 73 + case OpalHMI_SEV_ERROR_SYNC: 74 + level = KERN_ERR; 75 + sevstr = "Severe"; 76 + break; 77 + case OpalHMI_SEV_FATAL: 78 + default: 79 + level = KERN_ERR; 80 + sevstr = "Fatal"; 81 + break; 82 + } 83 + 84 + printk("%s%s Hypervisor Maintenance interrupt [%s]\n", 85 + level, sevstr, 86 + hmi_evt->disposition == OpalHMI_DISPOSITION_RECOVERED ? 87 + "Recovered" : "Not recovered"); 88 + error_info = hmi_evt->type < ARRAY_SIZE(hmi_error_types) ? 89 + hmi_error_types[hmi_evt->type] 90 + : "Unknown"; 91 + printk("%s Error detail: %s\n", level, error_info); 92 + printk("%s HMER: %016llx\n", level, be64_to_cpu(hmi_evt->hmer)); 93 + if ((hmi_evt->type == OpalHMI_ERROR_TFAC) || 94 + (hmi_evt->type == OpalHMI_ERROR_TFMR_PARITY)) 95 + printk("%s TFMR: %016llx\n", level, 96 + be64_to_cpu(hmi_evt->tfmr)); 97 + } 98 + 99 + static void hmi_event_handler(struct work_struct *work) 100 + { 101 + unsigned long flags; 102 + struct OpalHMIEvent *hmi_evt; 103 + struct OpalHmiEvtNode *msg_node; 104 + uint8_t disposition; 105 + 106 + spin_lock_irqsave(&opal_hmi_evt_lock, flags); 107 + while (!list_empty(&opal_hmi_evt_list)) { 108 + msg_node = list_entry(opal_hmi_evt_list.next, 109 + struct OpalHmiEvtNode, list); 110 + list_del(&msg_node->list); 111 + spin_unlock_irqrestore(&opal_hmi_evt_lock, flags); 112 + 113 + hmi_evt = (struct OpalHMIEvent *) &msg_node->hmi_evt; 114 + print_hmi_event_info(hmi_evt); 115 + disposition = hmi_evt->disposition; 116 + kfree(msg_node); 117 + 118 + /* 119 + * Check if HMI event has been recovered or 
not. If not 120 + * then we can't continue, invoke panic. 121 + */ 122 + if (disposition != OpalHMI_DISPOSITION_RECOVERED) 123 + panic("Unrecoverable HMI exception"); 124 + 125 + spin_lock_irqsave(&opal_hmi_evt_lock, flags); 126 + } 127 + spin_unlock_irqrestore(&opal_hmi_evt_lock, flags); 128 + } 129 + 130 + static DECLARE_WORK(hmi_event_work, hmi_event_handler); 131 + /* 132 + * opal_handle_hmi_event - notifier handler that queues up HMI events 133 + * to be preocessed later. 134 + */ 135 + static int opal_handle_hmi_event(struct notifier_block *nb, 136 + unsigned long msg_type, void *msg) 137 + { 138 + unsigned long flags; 139 + struct OpalHMIEvent *hmi_evt; 140 + struct opal_msg *hmi_msg = msg; 141 + struct OpalHmiEvtNode *msg_node; 142 + 143 + /* Sanity Checks */ 144 + if (msg_type != OPAL_MSG_HMI_EVT) 145 + return 0; 146 + 147 + /* HMI event info starts from param[0] */ 148 + hmi_evt = (struct OpalHMIEvent *)&hmi_msg->params[0]; 149 + 150 + /* Delay the logging of HMI events to workqueue. 
*/ 151 + msg_node = kzalloc(sizeof(*msg_node), GFP_ATOMIC); 152 + if (!msg_node) { 153 + pr_err("HMI: out of memory, Opal message event not handled\n"); 154 + return -ENOMEM; 155 + } 156 + memcpy(&msg_node->hmi_evt, hmi_evt, sizeof(struct OpalHMIEvent)); 157 + 158 + spin_lock_irqsave(&opal_hmi_evt_lock, flags); 159 + list_add(&msg_node->list, &opal_hmi_evt_list); 160 + spin_unlock_irqrestore(&opal_hmi_evt_lock, flags); 161 + 162 + schedule_work(&hmi_event_work); 163 + return 0; 164 + } 165 + 166 + static struct notifier_block opal_hmi_handler_nb = { 167 + .notifier_call = opal_handle_hmi_event, 168 + .next = NULL, 169 + .priority = 0, 170 + }; 171 + 172 + static int __init opal_hmi_handler_init(void) 173 + { 174 + int ret; 175 + 176 + if (!opal_hmi_handler_nb_init) { 177 + ret = opal_message_notifier_register( 178 + OPAL_MSG_HMI_EVT, &opal_hmi_handler_nb); 179 + if (ret) { 180 + pr_err("%s: Can't register OPAL event notifier (%d)\n", 181 + __func__, ret); 182 + return ret; 183 + } 184 + opal_hmi_handler_nb_init = 1; 185 + } 186 + return 0; 187 + } 188 + subsys_initcall(opal_hmi_handler_init);
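The new opal-hmi.c handler cannot do heavyweight logging in the notifier context, so it copies each event onto a locked list with `GFP_ATOMIC` and lets a work item drain and print the queue later, panicking only if an event was not recovered. A simplified userspace sketch of that queue-then-drain shape (no spinlock or workqueue here; the list handling is collapsed to a singly linked list):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Minimal stand-in for struct OpalHMIEvent: just the disposition. */
struct hmi_event {
	int disposition;	/* nonzero: recovered */
};

struct evt_node {
	struct evt_node *next;
	struct hmi_event evt;
};

static struct evt_node *evt_list;

/* "Notifier" side: allocate without sleeping, copy, enqueue. */
static int queue_event(const struct hmi_event *evt)
{
	struct evt_node *n = malloc(sizeof(*n));

	if (!n)
		return -1;
	memcpy(&n->evt, evt, sizeof(*evt));
	n->next = evt_list;
	evt_list = n;
	return 0;
}

/* "Work handler" side: drain the list, count unrecovered events
 * (the kernel would panic on the first one instead). */
static int drain_events(void)
{
	int unrecovered = 0;

	while (evt_list) {
		struct evt_node *n = evt_list;

		evt_list = n->next;
		if (!n->evt.disposition)
			unrecovered++;
		free(n);
	}
	return unrecovered;
}
```

The split keeps the interrupt-side path tiny and moves everything that can sleep or take time into process context.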
+1 -1
arch/powerpc/platforms/powernv/opal-lpc.c
··· 324 324 rc |= opal_lpc_debugfs_create_type(root, "fw", OPAL_LPC_FW); 325 325 return rc; 326 326 } 327 - device_initcall(opal_lpc_init_debugfs); 327 + machine_device_initcall(powernv, opal_lpc_init_debugfs); 328 328 #endif /* CONFIG_DEBUG_FS */ 329 329 330 330 void opal_lpc_init(void)
+2 -1
arch/powerpc/platforms/powernv/opal-memory-errors.c
··· 27 27 #include <linux/mm.h> 28 28 #include <linux/slab.h> 29 29 30 + #include <asm/machdep.h> 30 31 #include <asm/opal.h> 31 32 #include <asm/cputable.h> 32 33 ··· 144 143 } 145 144 return 0; 146 145 } 147 - subsys_initcall(opal_mem_err_init); 146 + machine_subsys_initcall(powernv, opal_mem_err_init);
+84
arch/powerpc/platforms/powernv/opal-tracepoints.c
··· 1 + #include <linux/percpu.h> 2 + #include <linux/jump_label.h> 3 + #include <asm/trace.h> 4 + 5 + #ifdef CONFIG_JUMP_LABEL 6 + struct static_key opal_tracepoint_key = STATIC_KEY_INIT; 7 + 8 + void opal_tracepoint_regfunc(void) 9 + { 10 + static_key_slow_inc(&opal_tracepoint_key); 11 + } 12 + 13 + void opal_tracepoint_unregfunc(void) 14 + { 15 + static_key_slow_dec(&opal_tracepoint_key); 16 + } 17 + #else 18 + /* 19 + * We optimise OPAL calls by placing opal_tracepoint_refcount 20 + * directly in the TOC so we can check if the opal tracepoints are 21 + * enabled via a single load. 22 + */ 23 + 24 + /* NB: reg/unreg are called while guarded with the tracepoints_mutex */ 25 + extern long opal_tracepoint_refcount; 26 + 27 + void opal_tracepoint_regfunc(void) 28 + { 29 + opal_tracepoint_refcount++; 30 + } 31 + 32 + void opal_tracepoint_unregfunc(void) 33 + { 34 + opal_tracepoint_refcount--; 35 + } 36 + #endif 37 + 38 + /* 39 + * Since the tracing code might execute OPAL calls we need to guard against 40 + * recursion. 41 + */ 42 + static DEFINE_PER_CPU(unsigned int, opal_trace_depth); 43 + 44 + void __trace_opal_entry(unsigned long opcode, unsigned long *args) 45 + { 46 + unsigned long flags; 47 + unsigned int *depth; 48 + 49 + local_irq_save(flags); 50 + 51 + depth = &__get_cpu_var(opal_trace_depth); 52 + 53 + if (*depth) 54 + goto out; 55 + 56 + (*depth)++; 57 + preempt_disable(); 58 + trace_opal_entry(opcode, args); 59 + (*depth)--; 60 + 61 + out: 62 + local_irq_restore(flags); 63 + } 64 + 65 + void __trace_opal_exit(long opcode, unsigned long retval) 66 + { 67 + unsigned long flags; 68 + unsigned int *depth; 69 + 70 + local_irq_save(flags); 71 + 72 + depth = &__get_cpu_var(opal_trace_depth); 73 + 74 + if (*depth) 75 + goto out; 76 + 77 + (*depth)++; 78 + trace_opal_exit(opcode, retval); 79 + preempt_enable(); 80 + (*depth)--; 81 + 82 + out: 83 + local_irq_restore(flags); 84 + }
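The interesting detail in opal-tracepoints.c is the recursion guard: tracing code may itself issue OPAL calls, so `__trace_opal_entry()` bails out if a per-CPU depth counter is already nonzero. A sketch of that guard with the per-CPU counter and IRQ masking collapsed to a single static for illustration:

```c
#include <assert.h>

/* Single-threaded stand-in for the per-CPU opal_trace_depth counter. */
static unsigned int trace_depth;
static int trace_calls;

static void do_trace(int opcode);

static void trace_entry(int opcode)
{
	if (trace_depth)
		return;		/* already inside tracing: don't recurse */
	trace_depth++;
	do_trace(opcode);
	trace_depth--;
}

/* Tracing backend that re-enters the traced path, as an OPAL call
 * made from within the tracing code would. */
static void do_trace(int opcode)
{
	trace_calls++;
	trace_entry(opcode);	/* swallowed by the depth guard */
}
```

Without the guard, a tracepoint firing inside the tracing path would recurse until the stack overflowed; with it, the inner call is simply dropped.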
+108 -9
arch/powerpc/platforms/powernv/opal-wrappers.S
··· 13 13 #include <asm/hvcall.h> 14 14 #include <asm/asm-offsets.h> 15 15 #include <asm/opal.h> 16 + #include <asm/jump_label.h> 17 + 18 + .section ".text" 19 + 20 + #ifdef CONFIG_TRACEPOINTS 21 + #ifdef CONFIG_JUMP_LABEL 22 + #define OPAL_BRANCH(LABEL) \ 23 + ARCH_STATIC_BRANCH(LABEL, opal_tracepoint_key) 24 + #else 25 + 26 + .section ".toc","aw" 27 + 28 + .globl opal_tracepoint_refcount 29 + opal_tracepoint_refcount: 30 + .llong 0 31 + 32 + .section ".text" 33 + 34 + /* 35 + * We branch around this in early init by using an unconditional cpu 36 + * feature. 37 + */ 38 + #define OPAL_BRANCH(LABEL) \ 39 + BEGIN_FTR_SECTION; \ 40 + b 1f; \ 41 + END_FTR_SECTION(0, 1); \ 42 + ld r12,opal_tracepoint_refcount@toc(r2); \ 43 + std r12,32(r1); \ 44 + cmpdi r12,0; \ 45 + bne- LABEL; \ 46 + 1: 47 + 48 + #endif 49 + 50 + #else 51 + #define OPAL_BRANCH(LABEL) 52 + #endif 16 53 17 54 /* TODO: 18 55 * 19 56 * - Trace irqs in/off (needs saving/restoring all args, argh...) 20 57 * - Get r11 feed up by Dave so I can have better register usage 21 58 */ 59 + 22 60 #define OPAL_CALL(name, token) \ 23 61 _GLOBAL(name); \ 24 62 mflr r0; \ 25 - mfcr r12; \ 26 63 std r0,16(r1); \ 64 + li r0,token; \ 65 + OPAL_BRANCH(opal_tracepoint_entry) \ 66 + mfcr r12; \ 27 67 stw r12,8(r1); \ 28 68 std r1,PACAR1(r13); \ 29 - li r0,0; \ 69 + li r11,0; \ 30 70 mfmsr r12; \ 31 - ori r0,r0,MSR_EE; \ 71 + ori r11,r11,MSR_EE; \ 32 72 std r12,PACASAVEDMSR(r13); \ 33 - andc r12,r12,r0; \ 73 + andc r12,r12,r11; \ 34 74 mtmsrd r12,1; \ 35 - LOAD_REG_ADDR(r0,opal_return); \ 36 - mtlr r0; \ 37 - li r0,MSR_DR|MSR_IR|MSR_LE;\ 38 - andc r12,r12,r0; \ 39 - li r0,token; \ 75 + LOAD_REG_ADDR(r11,opal_return); \ 76 + mtlr r11; \ 77 + li r11,MSR_DR|MSR_IR|MSR_LE;\ 78 + andc r12,r12,r11; \ 40 79 mtspr SPRN_HSRR1,r12; \ 41 80 LOAD_REG_ADDR(r11,opal); \ 42 81 ld r12,8(r11); \ ··· 99 60 mtspr SPRN_SRR1,r6; 100 61 mtcr r4; 101 62 rfid 63 + 64 + #ifdef CONFIG_TRACEPOINTS 65 + opal_tracepoint_entry: 66 + stdu 
r1,-STACKFRAMESIZE(r1) 67 + std r0,STK_REG(R23)(r1) 68 + std r3,STK_REG(R24)(r1) 69 + std r4,STK_REG(R25)(r1) 70 + std r5,STK_REG(R26)(r1) 71 + std r6,STK_REG(R27)(r1) 72 + std r7,STK_REG(R28)(r1) 73 + std r8,STK_REG(R29)(r1) 74 + std r9,STK_REG(R30)(r1) 75 + std r10,STK_REG(R31)(r1) 76 + mr r3,r0 77 + addi r4,r1,STK_REG(R24) 78 + bl __trace_opal_entry 79 + ld r0,STK_REG(R23)(r1) 80 + ld r3,STK_REG(R24)(r1) 81 + ld r4,STK_REG(R25)(r1) 82 + ld r5,STK_REG(R26)(r1) 83 + ld r6,STK_REG(R27)(r1) 84 + ld r7,STK_REG(R28)(r1) 85 + ld r8,STK_REG(R29)(r1) 86 + ld r9,STK_REG(R30)(r1) 87 + ld r10,STK_REG(R31)(r1) 88 + LOAD_REG_ADDR(r11,opal_tracepoint_return) 89 + mfcr r12 90 + std r11,16(r1) 91 + stw r12,8(r1) 92 + std r1,PACAR1(r13) 93 + li r11,0 94 + mfmsr r12 95 + ori r11,r11,MSR_EE 96 + std r12,PACASAVEDMSR(r13) 97 + andc r12,r12,r11 98 + mtmsrd r12,1 99 + LOAD_REG_ADDR(r11,opal_return) 100 + mtlr r11 101 + li r11,MSR_DR|MSR_IR|MSR_LE 102 + andc r12,r12,r11 103 + mtspr SPRN_HSRR1,r12 104 + LOAD_REG_ADDR(r11,opal) 105 + ld r12,8(r11) 106 + ld r2,0(r11) 107 + mtspr SPRN_HSRR0,r12 108 + hrfid 109 + 110 + opal_tracepoint_return: 111 + std r3,STK_REG(R31)(r1) 112 + mr r4,r3 113 + ld r0,STK_REG(R23)(r1) 114 + bl __trace_opal_exit 115 + ld r3,STK_REG(R31)(r1) 116 + addi r1,r1,STACKFRAMESIZE 117 + ld r0,16(r1) 118 + mtlr r0 119 + blr 120 + #endif 102 121 103 122 OPAL_CALL(opal_invalid_call, OPAL_INVALID_CALL); 104 123 OPAL_CALL(opal_console_write, OPAL_CONSOLE_WRITE); ··· 183 86 OPAL_CALL(opal_register_exception_handler, OPAL_REGISTER_OPAL_EXCEPTION_HANDLER); 184 87 OPAL_CALL(opal_pci_eeh_freeze_status, OPAL_PCI_EEH_FREEZE_STATUS); 185 88 OPAL_CALL(opal_pci_eeh_freeze_clear, OPAL_PCI_EEH_FREEZE_CLEAR); 89 + OPAL_CALL(opal_pci_eeh_freeze_set, OPAL_PCI_EEH_FREEZE_SET); 186 90 OPAL_CALL(opal_pci_shpc, OPAL_PCI_SHPC); 187 91 OPAL_CALL(opal_pci_phb_mmio_enable, OPAL_PCI_PHB_MMIO_ENABLE); 188 92 OPAL_CALL(opal_pci_set_phb_mem_window, OPAL_PCI_SET_PHB_MEM_WINDOW); ··· 244 146 
OPAL_CALL(opal_sensor_read, OPAL_SENSOR_READ); 245 147 OPAL_CALL(opal_get_param, OPAL_GET_PARAM); 246 148 OPAL_CALL(opal_set_param, OPAL_SET_PARAM); 149 + OPAL_CALL(opal_handle_hmi, OPAL_HANDLE_HMI);
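When jump labels aren't available, the wrapper above keeps `opal_tracepoint_refcount` in the TOC so a single load plus compare decides whether to branch to the traced slow path. Approximately the same gating in plain C; the call stub and counters are illustrative, not the kernel API:

```c
#include <assert.h>

/* Stand-in for opal_tracepoint_refcount: one load and compare per
 * call decides whether to take the traced slow path. */
static long tracepoint_refcount;
static int slow_path_taken;

static void tracepoint_regfunc(void)   { tracepoint_refcount++; }
static void tracepoint_unregfunc(void) { tracepoint_refcount--; }

/* Mirrors the OPAL_BRANCH fallback: non-zero refcount takes the
 * (simulated) traced slow path, zero goes straight to the call. */
static long opal_call_stub(long token)
{
	if (tracepoint_refcount)
		slow_path_taken++;	/* would branch to opal_tracepoint_entry */
	return token;
}
```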
+1 -1
arch/powerpc/platforms/powernv/opal-xscom.c
··· 130 130 scom_init(&opal_scom_controller); 131 131 return 0; 132 132 } 133 - arch_initcall(opal_xscom_init); 133 + machine_arch_initcall(powernv, opal_xscom_init);
+45 -7
arch/powerpc/platforms/powernv/opal.c
··· 22 22 #include <linux/kobject.h> 23 23 #include <linux/delay.h> 24 24 #include <linux/memblock.h> 25 + 26 + #include <asm/machdep.h> 25 27 #include <asm/opal.h> 26 28 #include <asm/firmware.h> 27 29 #include <asm/mce.h> ··· 194 192 * fwnmi area at 0x7000 to provide the glue space to OPAL 195 193 */ 196 194 glue = 0x7000; 197 - opal_register_exception_handler(OPAL_HYPERVISOR_MAINTENANCE_HANDLER, 198 - 0, glue); 199 - glue += 128; 200 195 opal_register_exception_handler(OPAL_SOFTPATCH_HANDLER, 0, glue); 201 196 #endif 202 197 203 198 return 0; 204 199 } 205 - 206 - early_initcall(opal_register_exception_handlers); 200 + machine_early_initcall(powernv, opal_register_exception_handlers); 207 201 208 202 int opal_notifier_register(struct notifier_block *nb) 209 203 { ··· 366 368 } 367 369 return 0; 368 370 } 369 - early_initcall(opal_message_init); 371 + machine_early_initcall(powernv, opal_message_init); 370 372 371 373 int opal_get_chars(uint32_t vtermno, char *buf, int count) 372 374 { ··· 511 513 return 0; 512 514 } 513 515 516 + /* Early hmi handler called in real mode. */ 517 + int opal_hmi_exception_early(struct pt_regs *regs) 518 + { 519 + s64 rc; 520 + 521 + /* 522 + * call opal hmi handler. Pass paca address as token. 523 + * The return value OPAL_SUCCESS is an indication that there is 524 + * an HMI event generated waiting to pull by Linux. 525 + */ 526 + rc = opal_handle_hmi(); 527 + if (rc == OPAL_SUCCESS) { 528 + local_paca->hmi_event_available = 1; 529 + return 1; 530 + } 531 + return 0; 532 + } 533 + 534 + /* HMI exception handler called in virtual mode during check_irq_replay. */ 535 + int opal_handle_hmi_exception(struct pt_regs *regs) 536 + { 537 + s64 rc; 538 + __be64 evt = 0; 539 + 540 + /* 541 + * Check if HMI event is available. 542 + * if Yes, then call opal_poll_events to pull opal messages and 543 + * process them. 
544 + */ 545 + if (!local_paca->hmi_event_available) 546 + return 0; 547 + 548 + local_paca->hmi_event_available = 0; 549 + rc = opal_poll_events(&evt); 550 + if (rc == OPAL_SUCCESS && evt) 551 + opal_do_notifier(be64_to_cpu(evt)); 552 + 553 + return 1; 554 + } 555 + 514 556 static uint64_t find_recovery_address(uint64_t nip) 515 557 { 516 558 int i; ··· 668 630 669 631 return 0; 670 632 } 671 - subsys_initcall(opal_init); 633 + machine_subsys_initcall(powernv, opal_init); 672 634 673 635 void opal_shutdown(void) 674 636 {
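The HMI path in opal.c is split in two: `opal_hmi_exception_early()` runs in real mode and only records in the paca that an event is pending, while `opal_handle_hmi_exception()` later consumes that flag in virtual mode and polls OPAL. A sketch of the deferred-flag pattern, single-threaded and with the firmware event source faked:

```c
#include <assert.h>

/* Deferred-flag sketch of the two-phase HMI handling: the early,
 * real-mode half only records that an event is pending; the later,
 * virtual-mode half consumes the flag and processes the event. */
static int hmi_event_available;
static int events_processed;

static int hmi_exception_early(void)
{
	/* pretend opal_handle_hmi() returned OPAL_SUCCESS */
	hmi_event_available = 1;
	return 1;
}

static int handle_hmi_exception(void)
{
	if (!hmi_event_available)
		return 0;
	hmi_event_available = 0;
	events_processed++;	/* stand-in for opal_poll_events() */
	return 1;
}
```

Splitting it this way keeps the real-mode handler minimal; all the work that needs virtual mode is deferred until interrupts are replayed.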
+458 -41
arch/powerpc/platforms/powernv/pci-ioda.c
··· 36 36 #include <asm/tce.h> 37 37 #include <asm/xics.h> 38 38 #include <asm/debug.h> 39 + #include <asm/firmware.h> 39 40 40 41 #include "powernv.h" 41 42 #include "pci.h" ··· 83 82 : : "r" (val), "r" (paddr) : "memory"); 84 83 } 85 84 85 + static inline bool pnv_pci_is_mem_pref_64(unsigned long flags) 86 + { 87 + return ((flags & (IORESOURCE_MEM_64 | IORESOURCE_PREFETCH)) == 88 + (IORESOURCE_MEM_64 | IORESOURCE_PREFETCH)); 89 + } 90 + 86 91 static int pnv_ioda_alloc_pe(struct pnv_phb *phb) 87 92 { 88 93 unsigned long pe; ··· 111 104 112 105 memset(&phb->ioda.pe_array[pe], 0, sizeof(struct pnv_ioda_pe)); 113 106 clear_bit(pe, phb->ioda.pe_alloc); 107 + } 108 + 109 + /* The default M64 BAR is shared by all PEs */ 110 + static int pnv_ioda2_init_m64(struct pnv_phb *phb) 111 + { 112 + const char *desc; 113 + struct resource *r; 114 + s64 rc; 115 + 116 + /* Configure the default M64 BAR */ 117 + rc = opal_pci_set_phb_mem_window(phb->opal_id, 118 + OPAL_M64_WINDOW_TYPE, 119 + phb->ioda.m64_bar_idx, 120 + phb->ioda.m64_base, 121 + 0, /* unused */ 122 + phb->ioda.m64_size); 123 + if (rc != OPAL_SUCCESS) { 124 + desc = "configuring"; 125 + goto fail; 126 + } 127 + 128 + /* Enable the default M64 BAR */ 129 + rc = opal_pci_phb_mmio_enable(phb->opal_id, 130 + OPAL_M64_WINDOW_TYPE, 131 + phb->ioda.m64_bar_idx, 132 + OPAL_ENABLE_M64_SPLIT); 133 + if (rc != OPAL_SUCCESS) { 134 + desc = "enabling"; 135 + goto fail; 136 + } 137 + 138 + /* Mark the M64 BAR assigned */ 139 + set_bit(phb->ioda.m64_bar_idx, &phb->ioda.m64_bar_alloc); 140 + 141 + /* 142 + * Strip off the segment used by the reserved PE, which is 143 + * expected to be 0 or last one of PE capabicity. 
144 + */ 145 + r = &phb->hose->mem_resources[1]; 146 + if (phb->ioda.reserved_pe == 0) 147 + r->start += phb->ioda.m64_segsize; 148 + else if (phb->ioda.reserved_pe == (phb->ioda.total_pe - 1)) 149 + r->end -= phb->ioda.m64_segsize; 150 + else 151 + pr_warn(" Cannot strip M64 segment for reserved PE#%d\n", 152 + phb->ioda.reserved_pe); 153 + 154 + return 0; 155 + 156 + fail: 157 + pr_warn(" Failure %lld %s M64 BAR#%d\n", 158 + rc, desc, phb->ioda.m64_bar_idx); 159 + opal_pci_phb_mmio_enable(phb->opal_id, 160 + OPAL_M64_WINDOW_TYPE, 161 + phb->ioda.m64_bar_idx, 162 + OPAL_DISABLE_M64); 163 + return -EIO; 164 + } 165 + 166 + static void pnv_ioda2_alloc_m64_pe(struct pnv_phb *phb) 167 + { 168 + resource_size_t sgsz = phb->ioda.m64_segsize; 169 + struct pci_dev *pdev; 170 + struct resource *r; 171 + int base, step, i; 172 + 173 + /* 174 + * Root bus always has full M64 range and root port has 175 + * M64 range used in reality. So we're checking root port 176 + * instead of root bus. 177 + */ 178 + list_for_each_entry(pdev, &phb->hose->bus->devices, bus_list) { 179 + for (i = PCI_BRIDGE_RESOURCES; 180 + i <= PCI_BRIDGE_RESOURCE_END; i++) { 181 + r = &pdev->resource[i]; 182 + if (!r->parent || 183 + !pnv_pci_is_mem_pref_64(r->flags)) 184 + continue; 185 + 186 + base = (r->start - phb->ioda.m64_base) / sgsz; 187 + for (step = 0; step < resource_size(r) / sgsz; step++) 188 + set_bit(base + step, phb->ioda.pe_alloc); 189 + } 190 + } 191 + } 192 + 193 + static int pnv_ioda2_pick_m64_pe(struct pnv_phb *phb, 194 + struct pci_bus *bus, int all) 195 + { 196 + resource_size_t segsz = phb->ioda.m64_segsize; 197 + struct pci_dev *pdev; 198 + struct resource *r; 199 + struct pnv_ioda_pe *master_pe, *pe; 200 + unsigned long size, *pe_alloc; 201 + bool found; 202 + int start, i, j; 203 + 204 + /* Root bus shouldn't use M64 */ 205 + if (pci_is_root_bus(bus)) 206 + return IODA_INVALID_PE; 207 + 208 + /* We support only one M64 window on each bus */ 209 + found = false; 210 + 
pci_bus_for_each_resource(bus, r, i) { 211 + if (r && r->parent && 212 + pnv_pci_is_mem_pref_64(r->flags)) { 213 + found = true; 214 + break; 215 + } 216 + } 217 + 218 + /* No M64 window found? */ 219 + if (!found) 220 + return IODA_INVALID_PE; 221 + 222 + /* Allocate bitmap */ 223 + size = _ALIGN_UP(phb->ioda.total_pe / 8, sizeof(unsigned long)); 224 + pe_alloc = kzalloc(size, GFP_KERNEL); 225 + if (!pe_alloc) { 226 + pr_warn("%s: Out of memory!\n", 227 + __func__); 228 + return IODA_INVALID_PE; 229 + } 230 + 231 + /* 232 + * Figure out reserved PE numbers from the PE 233 + * and its child PEs. 234 + */ 235 + start = (r->start - phb->ioda.m64_base) / segsz; 236 + for (i = 0; i < resource_size(r) / segsz; i++) 237 + set_bit(start + i, pe_alloc); 238 + 239 + if (all) 240 + goto done; 241 + 242 + /* 243 + * If the PE doesn't cover all subordinate buses, 244 + * we need to subtract the children's segments from the reserved PEs. 245 + */ 246 + list_for_each_entry(pdev, &bus->devices, bus_list) { 247 + if (!pdev->subordinate) 248 + continue; 249 + 250 + pci_bus_for_each_resource(pdev->subordinate, r, i) { 251 + if (!r || !r->parent || 252 + !pnv_pci_is_mem_pref_64(r->flags)) 253 + continue; 254 + 255 + start = (r->start - phb->ioda.m64_base) / segsz; 256 + for (j = 0; j < resource_size(r) / segsz; j++) 257 + clear_bit(start + j, pe_alloc); 258 + } 259 + } 260 + 261 + /* 262 + * The current bus might not own an M64 window itself; it may all 263 + * be contributed by its child buses. In that case, we needn't 264 + * pick an M64-dependent PE#. 265 + */ 266 + if (bitmap_empty(pe_alloc, phb->ioda.total_pe)) { 267 + kfree(pe_alloc); 268 + return IODA_INVALID_PE; 269 + } 270 + 271 + /* 272 + * Figure out the master PE and add all slave PEs to the 273 + * master PE's list to form a compound PE. 
274 + */ 275 + done: 276 + master_pe = NULL; 277 + i = -1; 278 + while ((i = find_next_bit(pe_alloc, phb->ioda.total_pe, i + 1)) < 279 + phb->ioda.total_pe) { 280 + pe = &phb->ioda.pe_array[i]; 281 + pe->phb = phb; 282 + pe->pe_number = i; 283 + 284 + if (!master_pe) { 285 + pe->flags |= PNV_IODA_PE_MASTER; 286 + INIT_LIST_HEAD(&pe->slaves); 287 + master_pe = pe; 288 + } else { 289 + pe->flags |= PNV_IODA_PE_SLAVE; 290 + pe->master = master_pe; 291 + list_add_tail(&pe->list, &master_pe->slaves); 292 + } 293 + } 294 + 295 + kfree(pe_alloc); 296 + return master_pe->pe_number; 297 + } 298 + 299 + static void __init pnv_ioda_parse_m64_window(struct pnv_phb *phb) 300 + { 301 + struct pci_controller *hose = phb->hose; 302 + struct device_node *dn = hose->dn; 303 + struct resource *res; 304 + const u32 *r; 305 + u64 pci_addr; 306 + 307 + if (!firmware_has_feature(FW_FEATURE_OPALv3)) { 308 + pr_info(" Firmware too old to support M64 window\n"); 309 + return; 310 + } 311 + 312 + r = of_get_property(dn, "ibm,opal-m64-window", NULL); 313 + if (!r) { 314 + pr_info(" No <ibm,opal-m64-window> on %s\n", 315 + dn->full_name); 316 + return; 317 + } 318 + 319 + /* FIXME: Support M64 for P7IOC */ 320 + if (phb->type != PNV_PHB_IODA2) { 321 + pr_info(" Not support M64 window\n"); 322 + return; 323 + } 324 + 325 + res = &hose->mem_resources[1]; 326 + res->start = of_translate_address(dn, r + 2); 327 + res->end = res->start + of_read_number(r + 4, 2) - 1; 328 + res->flags = (IORESOURCE_MEM | IORESOURCE_MEM_64 | IORESOURCE_PREFETCH); 329 + pci_addr = of_read_number(r, 2); 330 + hose->mem_offset[1] = res->start - pci_addr; 331 + 332 + phb->ioda.m64_size = resource_size(res); 333 + phb->ioda.m64_segsize = phb->ioda.m64_size / phb->ioda.total_pe; 334 + phb->ioda.m64_base = pci_addr; 335 + 336 + /* Use last M64 BAR to cover M64 window */ 337 + phb->ioda.m64_bar_idx = 15; 338 + phb->init_m64 = pnv_ioda2_init_m64; 339 + phb->alloc_m64_pe = pnv_ioda2_alloc_m64_pe; 340 + phb->pick_m64_pe = 
pnv_ioda2_pick_m64_pe; 341 + } 342 + 343 + static void pnv_ioda_freeze_pe(struct pnv_phb *phb, int pe_no) 344 + { 345 + struct pnv_ioda_pe *pe = &phb->ioda.pe_array[pe_no]; 346 + struct pnv_ioda_pe *slave; 347 + s64 rc; 348 + 349 + /* Fetch master PE */ 350 + if (pe->flags & PNV_IODA_PE_SLAVE) { 351 + pe = pe->master; 352 + WARN_ON(!pe || !(pe->flags & PNV_IODA_PE_MASTER)); 353 + pe_no = pe->pe_number; 354 + } 355 + 356 + /* Freeze master PE */ 357 + rc = opal_pci_eeh_freeze_set(phb->opal_id, 358 + pe_no, 359 + OPAL_EEH_ACTION_SET_FREEZE_ALL); 360 + if (rc != OPAL_SUCCESS) { 361 + pr_warn("%s: Failure %lld freezing PHB#%x-PE#%x\n", 362 + __func__, rc, phb->hose->global_number, pe_no); 363 + return; 364 + } 365 + 366 + /* Freeze slave PEs */ 367 + if (!(pe->flags & PNV_IODA_PE_MASTER)) 368 + return; 369 + 370 + list_for_each_entry(slave, &pe->slaves, list) { 371 + rc = opal_pci_eeh_freeze_set(phb->opal_id, 372 + slave->pe_number, 373 + OPAL_EEH_ACTION_SET_FREEZE_ALL); 374 + if (rc != OPAL_SUCCESS) 375 + pr_warn("%s: Failure %lld freezing PHB#%x-PE#%x\n", 376 + __func__, rc, phb->hose->global_number, 377 + slave->pe_number); 378 + } 379 + } 380 + 381 + int pnv_ioda_unfreeze_pe(struct pnv_phb *phb, int pe_no, int opt) 382 + { 383 + struct pnv_ioda_pe *pe, *slave; 384 + s64 rc; 385 + 386 + /* Find master PE */ 387 + pe = &phb->ioda.pe_array[pe_no]; 388 + if (pe->flags & PNV_IODA_PE_SLAVE) { 389 + pe = pe->master; 390 + WARN_ON(!pe || !(pe->flags & PNV_IODA_PE_MASTER)); 391 + pe_no = pe->pe_number; 392 + } 393 + 394 + /* Clear frozen state for master PE */ 395 + rc = opal_pci_eeh_freeze_clear(phb->opal_id, pe_no, opt); 396 + if (rc != OPAL_SUCCESS) { 397 + pr_warn("%s: Failure %lld clear %d on PHB#%x-PE#%x\n", 398 + __func__, rc, opt, phb->hose->global_number, pe_no); 399 + return -EIO; 400 + } 401 + 402 + if (!(pe->flags & PNV_IODA_PE_MASTER)) 403 + return 0; 404 + 405 + /* Clear frozen state for slave PEs */ 406 + list_for_each_entry(slave, &pe->slaves, list) { 407 + 
rc = opal_pci_eeh_freeze_clear(phb->opal_id, 408 + slave->pe_number, 409 + opt); 410 + if (rc != OPAL_SUCCESS) { 411 + pr_warn("%s: Failure %lld clear %d on PHB#%x-PE#%x\n", 412 + __func__, rc, opt, phb->hose->global_number, 413 + slave->pe_number); 414 + return -EIO; 415 + } 416 + } 417 + 418 + return 0; 419 + } 420 + 421 + static int pnv_ioda_get_pe_state(struct pnv_phb *phb, int pe_no) 422 + { 423 + struct pnv_ioda_pe *slave, *pe; 424 + u8 fstate, state; 425 + __be16 pcierr; 426 + s64 rc; 427 + 428 + /* Sanity check on PE number */ 429 + if (pe_no < 0 || pe_no >= phb->ioda.total_pe) 430 + return OPAL_EEH_STOPPED_PERM_UNAVAIL; 431 + 432 + /* 433 + * Fetch the master PE and the PE instance might be 434 + * not initialized yet. 435 + */ 436 + pe = &phb->ioda.pe_array[pe_no]; 437 + if (pe->flags & PNV_IODA_PE_SLAVE) { 438 + pe = pe->master; 439 + WARN_ON(!pe || !(pe->flags & PNV_IODA_PE_MASTER)); 440 + pe_no = pe->pe_number; 441 + } 442 + 443 + /* Check the master PE */ 444 + rc = opal_pci_eeh_freeze_status(phb->opal_id, pe_no, 445 + &state, &pcierr, NULL); 446 + if (rc != OPAL_SUCCESS) { 447 + pr_warn("%s: Failure %lld getting " 448 + "PHB#%x-PE#%x state\n", 449 + __func__, rc, 450 + phb->hose->global_number, pe_no); 451 + return OPAL_EEH_STOPPED_TEMP_UNAVAIL; 452 + } 453 + 454 + /* Check the slave PE */ 455 + if (!(pe->flags & PNV_IODA_PE_MASTER)) 456 + return state; 457 + 458 + list_for_each_entry(slave, &pe->slaves, list) { 459 + rc = opal_pci_eeh_freeze_status(phb->opal_id, 460 + slave->pe_number, 461 + &fstate, 462 + &pcierr, 463 + NULL); 464 + if (rc != OPAL_SUCCESS) { 465 + pr_warn("%s: Failure %lld getting " 466 + "PHB#%x-PE#%x state\n", 467 + __func__, rc, 468 + phb->hose->global_number, slave->pe_number); 469 + return OPAL_EEH_STOPPED_TEMP_UNAVAIL; 470 + } 471 + 472 + /* 473 + * Override the result based on the ascending 474 + * priority. 
475 + */ 476 + if (fstate > state) 477 + state = fstate; 478 + } 479 + 480 + return state; 114 481 } 115 482 116 483 /* Currently those 2 are only used when MSIs are enabled, this will change ··· 744 363 struct pci_controller *hose = pci_bus_to_host(bus); 745 364 struct pnv_phb *phb = hose->private_data; 746 365 struct pnv_ioda_pe *pe; 747 - int pe_num; 366 + int pe_num = IODA_INVALID_PE; 748 367 749 - pe_num = pnv_ioda_alloc_pe(phb); 368 + /* Check if PE is determined by M64 */ 369 + if (phb->pick_m64_pe) 370 + pe_num = phb->pick_m64_pe(phb, bus, all); 371 + 372 + /* The PE number isn't pinned by M64 */ 373 + if (pe_num == IODA_INVALID_PE) 374 + pe_num = pnv_ioda_alloc_pe(phb); 375 + 750 376 if (pe_num == IODA_INVALID_PE) { 751 377 pr_warning("%s: Not enough PE# available for PCI bus %04x:%02x\n", 752 378 __func__, pci_domain_nr(bus), bus->number); ··· 761 373 } 762 374 763 375 pe = &phb->ioda.pe_array[pe_num]; 764 - pe->flags = (all ? PNV_IODA_PE_BUS_ALL : PNV_IODA_PE_BUS); 376 + pe->flags |= (all ? 
PNV_IODA_PE_BUS_ALL : PNV_IODA_PE_BUS); 765 377 pe->pbus = bus; 766 378 pe->pdev = NULL; 767 379 pe->tce32_seg = -1; ··· 829 441 static void pnv_pci_ioda_setup_PEs(void) 830 442 { 831 443 struct pci_controller *hose, *tmp; 444 + struct pnv_phb *phb; 832 445 833 446 list_for_each_entry_safe(hose, tmp, &hose_list, list_node) { 447 + phb = hose->private_data; 448 + 449 + /* M64 layout might affect PE allocation */ 450 + if (phb->alloc_m64_pe) 451 + phb->alloc_m64_pe(phb); 452 + 834 453 pnv_ioda_setup_PEs(hose->bus); 835 454 } 836 455 } ··· 886 491 set_dma_ops(&pdev->dev, &dma_iommu_ops); 887 492 set_iommu_table_base(&pdev->dev, &pe->tce32_table); 888 493 } 494 + *pdev->dev.dma_mask = dma_mask; 889 495 return 0; 890 496 } 891 497 892 - static void pnv_ioda_setup_bus_dma(struct pnv_ioda_pe *pe, struct pci_bus *bus) 498 + static void pnv_ioda_setup_bus_dma(struct pnv_ioda_pe *pe, 499 + struct pci_bus *bus, 500 + bool add_to_iommu_group) 893 501 { 894 502 struct pci_dev *dev; 895 503 896 504 list_for_each_entry(dev, &bus->devices, bus_list) { 897 - set_iommu_table_base_and_group(&dev->dev, &pe->tce32_table); 505 + if (add_to_iommu_group) 506 + set_iommu_table_base_and_group(&dev->dev, 507 + &pe->tce32_table); 508 + else 509 + set_iommu_table_base(&dev->dev, &pe->tce32_table); 510 + 898 511 if (dev->subordinate) 899 - pnv_ioda_setup_bus_dma(pe, dev->subordinate); 512 + pnv_ioda_setup_bus_dma(pe, dev->subordinate, 513 + add_to_iommu_group); 900 514 } 901 515 } 902 516 ··· 917 513 (__be64 __iomem *)pe->tce_inval_reg_phys : 918 514 (__be64 __iomem *)tbl->it_index; 919 515 unsigned long start, end, inc; 516 + const unsigned shift = tbl->it_page_shift; 920 517 921 518 start = __pa(startp); 922 519 end = __pa(endp); 923 520 924 521 /* BML uses this case for p6/p7/galaxy2: Shift addr and put in node */ 925 522 if (tbl->it_busno) { 926 - start <<= 12; 927 - end <<= 12; 928 - inc = 128 << 12; 523 + start <<= shift; 524 + end <<= shift; 525 + inc = 128ull << shift; 929 526 start |= 
tbl->it_busno; 930 527 end |= tbl->it_busno; 931 528 } else if (tbl->it_type & TCE_PCI_SWINV_PAIR) { ··· 964 559 __be64 __iomem *invalidate = rm ? 965 560 (__be64 __iomem *)pe->tce_inval_reg_phys : 966 561 (__be64 __iomem *)tbl->it_index; 562 + const unsigned shift = tbl->it_page_shift; 967 563 968 564 /* We'll invalidate DMA address in PE scope */ 969 - start = 0x2ul << 60; 565 + start = 0x2ull << 60; 970 566 start |= (pe->pe_number & 0xFF); 971 567 end = start; 972 568 973 569 /* Figure out the start, end and step */ 974 570 inc = tbl->it_offset + (((u64)startp - tbl->it_base) / sizeof(u64)); 975 - start |= (inc << 12); 571 + start |= (inc << shift); 976 572 inc = tbl->it_offset + (((u64)endp - tbl->it_base) / sizeof(u64)); 977 - end |= (inc << 12); 978 - inc = (0x1ul << 12); 573 + end |= (inc << shift); 574 + inc = (0x1ull << shift); 979 575 mb(); 980 576 981 577 while (start <= end) { ··· 1060 654 /* Setup linux iommu table */ 1061 655 tbl = &pe->tce32_table; 1062 656 pnv_pci_setup_iommu_table(tbl, addr, TCE32_TABLE_SIZE * segs, 1063 - base << 28); 657 + base << 28, IOMMU_PAGE_SHIFT_4K); 1064 658 1065 659 /* OPAL variant of P7IOC SW invalidated TCEs */ 1066 660 swinvp = of_get_property(phb->hose->dn, "ibm,opal-tce-kill", NULL); ··· 1083 677 if (pe->pdev) 1084 678 set_iommu_table_base_and_group(&pe->pdev->dev, tbl); 1085 679 else 1086 - pnv_ioda_setup_bus_dma(pe, pe->pbus); 680 + pnv_ioda_setup_bus_dma(pe, pe->pbus, true); 1087 681 1088 682 return; 1089 683 fail: ··· 1119 713 0); 1120 714 1121 715 /* 1122 - * We might want to reset the DMA ops of all devices on 1123 - * this PE. However in theory, that shouldn't be necessary 1124 - * as this is used for VFIO/KVM pass-through and the device 1125 - * hasn't yet been returned to its kernel driver 716 + * EEH needs the mapping between IOMMU table and group 717 + * of those VFIO/KVM pass-through devices. We can postpone 718 + * resetting DMA ops until the DMA mask is configured in 719 + * host side. 
1126 720 */ 721 + if (pe->pdev) 722 + set_iommu_table_base(&pe->pdev->dev, tbl); 723 + else 724 + pnv_ioda_setup_bus_dma(pe, pe->pbus, false); 1127 725 } 1128 726 if (rc) 1129 727 pe_err(pe, "OPAL error %lld configuring bypass window\n", rc); ··· 1194 784 1195 785 /* Setup linux iommu table */ 1196 786 tbl = &pe->tce32_table; 1197 - pnv_pci_setup_iommu_table(tbl, addr, tce_table_size, 0); 787 + pnv_pci_setup_iommu_table(tbl, addr, tce_table_size, 0, 788 + IOMMU_PAGE_SHIFT_4K); 1198 789 1199 790 /* OPAL variant of PHB3 invalidated TCEs */ 1200 791 swinvp = of_get_property(phb->hose->dn, "ibm,opal-tce-kill", NULL); ··· 1216 805 if (pe->pdev) 1217 806 set_iommu_table_base_and_group(&pe->pdev->dev, tbl); 1218 807 else 1219 - pnv_ioda_setup_bus_dma(pe, pe->pbus); 808 + pnv_ioda_setup_bus_dma(pe, pe->pbus, true); 1220 809 1221 810 /* Also create a bypass window */ 1222 811 pnv_pci_ioda2_setup_bypass_pe(phb, pe); ··· 1466 1055 index++; 1467 1056 } 1468 1057 } else if (res->flags & IORESOURCE_MEM) { 1469 - /* WARNING: Assumes M32 is mem region 0 in PHB. 
We need to 1470 - * harden that algorithm when we start supporting M64 1471 - */ 1472 1058 region.start = res->start - 1473 1059 hose->mem_offset[0] - 1474 1060 phb->ioda.m32_pci_base; ··· 1549 1141 pnv_pci_ioda_create_dbgfs(); 1550 1142 1551 1143 #ifdef CONFIG_EEH 1552 - eeh_probe_mode_set(EEH_PROBE_MODE_DEV); 1553 - eeh_addr_cache_build(); 1554 1144 eeh_init(); 1145 + eeh_addr_cache_build(); 1555 1146 #endif 1556 1147 } 1557 1148 ··· 1585 1178 bridge = bridge->bus->self; 1586 1179 } 1587 1180 1588 - /* We need support prefetchable memory window later */ 1181 + /* We fail back to M32 if M64 isn't supported */ 1182 + if (phb->ioda.m64_segsize && 1183 + pnv_pci_is_mem_pref_64(type)) 1184 + return phb->ioda.m64_segsize; 1589 1185 if (type & IORESOURCE_MEM) 1590 1186 return phb->ioda.m32_segsize; 1591 1187 ··· 1709 1299 prop32 = of_get_property(np, "ibm,opal-reserved-pe", NULL); 1710 1300 if (prop32) 1711 1301 phb->ioda.reserved_pe = be32_to_cpup(prop32); 1302 + 1303 + /* Parse 64-bit MMIO range */ 1304 + pnv_ioda_parse_m64_window(phb); 1305 + 1712 1306 phb->ioda.m32_size = resource_size(&hose->mem_resources[0]); 1713 1307 /* FW Has already off top 64k of M32 space (MSI space) */ 1714 1308 phb->ioda.m32_size += 0x10000; ··· 1748 1334 /* Calculate how many 32-bit TCE segments we have */ 1749 1335 phb->ioda.tce32_count = phb->ioda.m32_pci_base >> 28; 1750 1336 1751 - /* Clear unusable m64 */ 1752 - hose->mem_resources[1].flags = 0; 1753 - hose->mem_resources[1].start = 0; 1754 - hose->mem_resources[1].end = 0; 1755 - hose->mem_resources[2].flags = 0; 1756 - hose->mem_resources[2].start = 0; 1757 - hose->mem_resources[2].end = 0; 1758 - 1759 1337 #if 0 /* We should really do that ... 
*/ 1760 1338 rc = opal_pci_set_phb_mem_window(opal->phb_id, 1761 1339 window_type, ··· 1757 1351 segment_size); 1758 1352 #endif 1759 1353 1760 - pr_info(" %d (%d) PE's M32: 0x%x [segment=0x%x]" 1761 - " IO: 0x%x [segment=0x%x]\n", 1762 - phb->ioda.total_pe, 1763 - phb->ioda.reserved_pe, 1764 - phb->ioda.m32_size, phb->ioda.m32_segsize, 1765 - phb->ioda.io_size, phb->ioda.io_segsize); 1354 + pr_info(" %03d (%03d) PE's M32: 0x%x [segment=0x%x]\n", 1355 + phb->ioda.total_pe, phb->ioda.reserved_pe, 1356 + phb->ioda.m32_size, phb->ioda.m32_segsize); 1357 + if (phb->ioda.m64_size) 1358 + pr_info(" M64: 0x%lx [segment=0x%lx]\n", 1359 + phb->ioda.m64_size, phb->ioda.m64_segsize); 1360 + if (phb->ioda.io_size) 1361 + pr_info(" IO: 0x%x [segment=0x%x]\n", 1362 + phb->ioda.io_size, phb->ioda.io_segsize); 1363 + 1766 1364 1767 1365 phb->hose->ops = &pnv_pci_ops; 1366 + phb->get_pe_state = pnv_ioda_get_pe_state; 1367 + phb->freeze_pe = pnv_ioda_freeze_pe; 1368 + phb->unfreeze_pe = pnv_ioda_unfreeze_pe; 1768 1369 #ifdef CONFIG_EEH 1769 1370 phb->eeh_ops = &ioda_eeh_ops; 1770 1371 #endif ··· 1817 1404 ioda_eeh_phb_reset(hose, EEH_RESET_FUNDAMENTAL); 1818 1405 ioda_eeh_phb_reset(hose, OPAL_DEASSERT_RESET); 1819 1406 } 1407 + 1408 + /* Configure M64 window */ 1409 + if (phb->init_m64 && phb->init_m64(phb)) 1410 + hose->mem_resources[1].flags = 0; 1820 1411 } 1821 1412 1822 1413 void __init pnv_pci_init_ioda2_phb(struct device_node *np)
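The M64 code above maps a window's address range onto PE numbers by subtracting the M64 base and dividing by the segment size, then marking each covered segment in a bitmap (as in `pnv_ioda2_alloc_m64_pe()`). A userspace sketch of that segment accounting; the geometry constants are illustrative, not the real firmware-provided values:

```c
#include <assert.h>

#define TOTAL_PE 16

/* Illustrative geometry: a 64KB segment size and a 1MB-aligned
 * M64 base (real values come from the device tree). */
static const unsigned long m64_base = 0x100000;
static const unsigned long segsz    = 0x10000;
static unsigned char pe_alloc[TOTAL_PE];

/* Mark every PE segment covered by [start, start + size). */
static void mark_window(unsigned long start, unsigned long size)
{
	unsigned long base = (start - m64_base) / segsz;
	unsigned long step;

	for (step = 0; step < size / segsz; step++)
		pe_alloc[base + step] = 1;
}
```

A window at 0x120000 of size 0x20000 thus pins segments (and hence PE numbers) 2 and 3, which is how M64 BAR placement constrains PE allocation.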
+2 -1
arch/powerpc/platforms/powernv/pci-p5ioc2.c
··· 172 172 /* Setup TCEs */ 173 173 phb->dma_dev_setup = pnv_pci_p5ioc2_dma_dev_setup; 174 174 pnv_pci_setup_iommu_table(&phb->p5ioc2.iommu_table, 175 - tce_mem, tce_size, 0); 175 + tce_mem, tce_size, 0, 176 + IOMMU_PAGE_SHIFT_4K); 176 177 } 177 178 178 179 void __init pnv_pci_init_p5ioc2_hub(struct device_node *np)
+109 -60
arch/powerpc/platforms/powernv/pci.c
··· 132 132 133 133 data = (struct OpalIoP7IOCPhbErrorData *)common; 134 134 pr_info("P7IOC PHB#%d Diag-data (Version: %d)\n", 135 - hose->global_number, common->version); 135 + hose->global_number, be32_to_cpu(common->version)); 136 136 137 137 if (data->brdgCtl) 138 138 pr_info("brdgCtl: %08x\n", 139 - data->brdgCtl); 139 + be32_to_cpu(data->brdgCtl)); 140 140 if (data->portStatusReg || data->rootCmplxStatus || 141 141 data->busAgentStatus) 142 142 pr_info("UtlSts: %08x %08x %08x\n", 143 - data->portStatusReg, data->rootCmplxStatus, 144 - data->busAgentStatus); 143 + be32_to_cpu(data->portStatusReg), 144 + be32_to_cpu(data->rootCmplxStatus), 145 + be32_to_cpu(data->busAgentStatus)); 145 146 if (data->deviceStatus || data->slotStatus || 146 147 data->linkStatus || data->devCmdStatus || 147 148 data->devSecStatus) 148 149 pr_info("RootSts: %08x %08x %08x %08x %08x\n", 149 - data->deviceStatus, data->slotStatus, 150 - data->linkStatus, data->devCmdStatus, 151 - data->devSecStatus); 150 + be32_to_cpu(data->deviceStatus), 151 + be32_to_cpu(data->slotStatus), 152 + be32_to_cpu(data->linkStatus), 153 + be32_to_cpu(data->devCmdStatus), 154 + be32_to_cpu(data->devSecStatus)); 152 155 if (data->rootErrorStatus || data->uncorrErrorStatus || 153 156 data->corrErrorStatus) 154 157 pr_info("RootErrSts: %08x %08x %08x\n", 155 - data->rootErrorStatus, data->uncorrErrorStatus, 156 - data->corrErrorStatus); 158 + be32_to_cpu(data->rootErrorStatus), 159 + be32_to_cpu(data->uncorrErrorStatus), 160 + be32_to_cpu(data->corrErrorStatus)); 157 161 if (data->tlpHdr1 || data->tlpHdr2 || 158 162 data->tlpHdr3 || data->tlpHdr4) 159 163 pr_info("RootErrLog: %08x %08x %08x %08x\n", 160 - data->tlpHdr1, data->tlpHdr2, 161 - data->tlpHdr3, data->tlpHdr4); 164 + be32_to_cpu(data->tlpHdr1), 165 + be32_to_cpu(data->tlpHdr2), 166 + be32_to_cpu(data->tlpHdr3), 167 + be32_to_cpu(data->tlpHdr4)); 162 168 if (data->sourceId || data->errorClass || 163 169 data->correlator) 164 170 pr_info("RootErrLog1: 
%08x %016llx %016llx\n", 165 - data->sourceId, data->errorClass, 166 - data->correlator); 171 + be32_to_cpu(data->sourceId), 172 + be64_to_cpu(data->errorClass), 173 + be64_to_cpu(data->correlator)); 167 174 if (data->p7iocPlssr || data->p7iocCsr) 168 175 pr_info("PhbSts: %016llx %016llx\n", 169 - data->p7iocPlssr, data->p7iocCsr); 176 + be64_to_cpu(data->p7iocPlssr), 177 + be64_to_cpu(data->p7iocCsr)); 170 178 if (data->lemFir) 171 179 pr_info("Lem: %016llx %016llx %016llx\n", 172 - data->lemFir, data->lemErrorMask, 173 - data->lemWOF); 180 + be64_to_cpu(data->lemFir), 181 + be64_to_cpu(data->lemErrorMask), 182 + be64_to_cpu(data->lemWOF)); 174 183 if (data->phbErrorStatus) 175 184 pr_info("PhbErr: %016llx %016llx %016llx %016llx\n", 176 - data->phbErrorStatus, data->phbFirstErrorStatus, 177 - data->phbErrorLog0, data->phbErrorLog1); 185 + be64_to_cpu(data->phbErrorStatus), 186 + be64_to_cpu(data->phbFirstErrorStatus), 187 + be64_to_cpu(data->phbErrorLog0), 188 + be64_to_cpu(data->phbErrorLog1)); 178 189 if (data->mmioErrorStatus) 179 190 pr_info("OutErr: %016llx %016llx %016llx %016llx\n", 180 - data->mmioErrorStatus, data->mmioFirstErrorStatus, 181 - data->mmioErrorLog0, data->mmioErrorLog1); 191 + be64_to_cpu(data->mmioErrorStatus), 192 + be64_to_cpu(data->mmioFirstErrorStatus), 193 + be64_to_cpu(data->mmioErrorLog0), 194 + be64_to_cpu(data->mmioErrorLog1)); 182 195 if (data->dma0ErrorStatus) 183 196 pr_info("InAErr: %016llx %016llx %016llx %016llx\n", 184 - data->dma0ErrorStatus, data->dma0FirstErrorStatus, 185 - data->dma0ErrorLog0, data->dma0ErrorLog1); 197 + be64_to_cpu(data->dma0ErrorStatus), 198 + be64_to_cpu(data->dma0FirstErrorStatus), 199 + be64_to_cpu(data->dma0ErrorLog0), 200 + be64_to_cpu(data->dma0ErrorLog1)); 186 201 if (data->dma1ErrorStatus) 187 202 pr_info("InBErr: %016llx %016llx %016llx %016llx\n", 188 - data->dma1ErrorStatus, data->dma1FirstErrorStatus, 189 - data->dma1ErrorLog0, data->dma1ErrorLog1); 203 + 
be64_to_cpu(data->dma1ErrorStatus), 204 + be64_to_cpu(data->dma1FirstErrorStatus), 205 + be64_to_cpu(data->dma1ErrorLog0), 206 + be64_to_cpu(data->dma1ErrorLog1)); 190 207 191 208 for (i = 0; i < OPAL_P7IOC_NUM_PEST_REGS; i++) { 192 209 if ((data->pestA[i] >> 63) == 0 && ··· 211 194 continue; 212 195 213 196 pr_info("PE[%3d] A/B: %016llx %016llx\n", 214 - i, data->pestA[i], data->pestB[i]); 197 + i, be64_to_cpu(data->pestA[i]), 198 + be64_to_cpu(data->pestB[i])); 215 199 } 216 200 } 217 201 ··· 337 319 static void pnv_pci_handle_eeh_config(struct pnv_phb *phb, u32 pe_no) 338 320 { 339 321 unsigned long flags, rc; 340 - int has_diag; 322 + int has_diag, ret = 0; 341 323 342 324 spin_lock_irqsave(&phb->lock, flags); 343 325 326 + /* Fetch PHB diag-data */ 344 327 rc = opal_pci_get_phb_diag_data2(phb->opal_id, phb->diag.blob, 345 328 PNV_PCI_DIAG_BUF_SIZE); 346 329 has_diag = (rc == OPAL_SUCCESS); 347 330 348 - rc = opal_pci_eeh_freeze_clear(phb->opal_id, pe_no, 331 + /* If PHB supports compound PE, to handle it */ 332 + if (phb->unfreeze_pe) { 333 + ret = phb->unfreeze_pe(phb, 334 + pe_no, 349 335 OPAL_EEH_ACTION_CLEAR_FREEZE_ALL); 350 - if (rc) { 351 - pr_warning("PCI %d: Failed to clear EEH freeze state" 352 - " for PE#%d, err %ld\n", 353 - phb->hose->global_number, pe_no, rc); 354 - 355 - /* For now, let's only display the diag buffer when we fail to clear 356 - * the EEH status. We'll do more sensible things later when we have 357 - * proper EEH support. 
We need to make sure we don't pollute ourselves 358 - * with the normal errors generated when probing empty slots 359 - */ 360 - if (has_diag) 361 - pnv_pci_dump_phb_diag_data(phb->hose, phb->diag.blob); 362 - else 363 - pr_warning("PCI %d: No diag data available\n", 364 - phb->hose->global_number); 336 + } else { 337 + rc = opal_pci_eeh_freeze_clear(phb->opal_id, 338 + pe_no, 339 + OPAL_EEH_ACTION_CLEAR_FREEZE_ALL); 340 + if (rc) { 341 + pr_warn("%s: Failure %ld clearing frozen " 342 + "PHB#%x-PE#%x\n", 343 + __func__, rc, phb->hose->global_number, 344 + pe_no); 345 + ret = -EIO; 346 + } 365 347 } 348 + 349 + /* 350 + * For now, let's only display the diag buffer when we fail to clear 351 + * the EEH status. We'll do more sensible things later when we have 352 + * proper EEH support. We need to make sure we don't pollute ourselves 353 + * with the normal errors generated when probing empty slots 354 + */ 355 + if (has_diag && ret) 356 + pnv_pci_dump_phb_diag_data(phb->hose, phb->diag.blob); 366 357 367 358 spin_unlock_irqrestore(&phb->lock, flags); 368 359 } ··· 379 352 static void pnv_pci_config_check_eeh(struct pnv_phb *phb, 380 353 struct device_node *dn) 381 354 { 382 - s64 rc; 383 355 u8 fstate; 384 356 __be16 pcierr; 385 - u32 pe_no; 357 + int pe_no; 358 + s64 rc; 386 359 387 360 /* 388 361 * Get the PE#. During the PCI probe stage, we might not ··· 397 370 pe_no = phb->ioda.reserved_pe; 398 371 } 399 372 400 - /* Read freeze status */ 401 - rc = opal_pci_eeh_freeze_status(phb->opal_id, pe_no, &fstate, &pcierr, 402 - NULL); 403 - if (rc) { 404 - pr_warning("%s: Can't read EEH status (PE#%d) for " 405 - "%s, err %lld\n", 406 - __func__, pe_no, dn->full_name, rc); 407 - return; 373 + /* 374 + * Fetch the frozen state. If the PHB supports compound PE, 375 + * we need to handle that case.
376 + */ 377 + if (phb->get_pe_state) { 378 + fstate = phb->get_pe_state(phb, pe_no); 379 + } else { 380 + rc = opal_pci_eeh_freeze_status(phb->opal_id, 381 + pe_no, 382 + &fstate, 383 + &pcierr, 384 + NULL); 385 + if (rc) { 386 + pr_warn("%s: Failure %lld getting PHB#%x-PE#%x state\n", 387 + __func__, rc, phb->hose->global_number, pe_no); 388 + return; 389 + } 408 390 } 391 + 409 392 cfg_dbg(" -> EEH check, bdfn=%04x PE#%d fstate=%x\n", 410 393 (PCI_DN(dn)->busno << 8) | (PCI_DN(dn)->devfn), 411 394 pe_no, fstate); 412 - if (fstate != 0) 395 + 396 + /* Clear the frozen state if applicable */ 397 + if (fstate == OPAL_EEH_STOPPED_MMIO_FREEZE || 398 + fstate == OPAL_EEH_STOPPED_DMA_FREEZE || 399 + fstate == OPAL_EEH_STOPPED_MMIO_DMA_FREEZE) { 400 + /* 401 + * If PHB supports compound PE, freeze it for 402 + * consistency. 403 + */ 404 + if (phb->freeze_pe) 405 + phb->freeze_pe(phb, pe_no); 406 + 413 407 pnv_pci_handle_eeh_config(phb, pe_no); 408 + } 414 409 } 415 410 416 411 int pnv_pci_cfg_read(struct device_node *dn, ··· 613 564 proto_tce |= TCE_PCI_WRITE; 614 565 615 566 tces = tcep = ((__be64 *)tbl->it_base) + index - tbl->it_offset; 616 - rpn = __pa(uaddr) >> TCE_SHIFT; 567 + rpn = __pa(uaddr) >> tbl->it_page_shift; 617 568 618 569 while (npages--) 619 - *(tcep++) = cpu_to_be64(proto_tce | (rpn++ << TCE_RPN_SHIFT)); 570 + *(tcep++) = cpu_to_be64(proto_tce | 571 + (rpn++ << tbl->it_page_shift)); 620 572 621 573 /* Some implementations won't cache invalid TCEs and thus may not 622 574 * need that flush. 
We'll probably turn it_type into a bit mask ··· 677 627 678 628 void pnv_pci_setup_iommu_table(struct iommu_table *tbl, 679 629 void *tce_mem, u64 tce_size, 680 - u64 dma_offset) 630 + u64 dma_offset, unsigned page_shift) 681 631 { 682 632 tbl->it_blocksize = 16; 683 633 tbl->it_base = (unsigned long)tce_mem; 684 - tbl->it_page_shift = IOMMU_PAGE_SHIFT_4K; 634 + tbl->it_page_shift = page_shift; 685 635 tbl->it_offset = dma_offset >> tbl->it_page_shift; 686 636 tbl->it_index = 0; 687 637 tbl->it_size = tce_size >> 3; ··· 706 656 if (WARN_ON(!tbl)) 707 657 return NULL; 708 658 pnv_pci_setup_iommu_table(tbl, __va(be64_to_cpup(basep)), 709 - be32_to_cpup(sizep), 0); 659 + be32_to_cpup(sizep), 0, IOMMU_PAGE_SHIFT_4K); 710 660 iommu_init_table(tbl, hose->node); 711 661 iommu_register_group(tbl, pci_domain_nr(hose->bus), 0); 712 662 ··· 892 842 bus_register_notifier(&pci_bus_type, &tce_iommu_bus_nb); 893 843 return 0; 894 844 } 895 - 896 - subsys_initcall_sync(tce_iommu_bus_notifier_init); 845 + machine_subsys_initcall_sync(powernv, tce_iommu_bus_notifier_init);
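The long run of `be64_to_cpu()` wrappers in the pci.c hunks above exists because OPAL firmware hands diag-data structures back in big-endian layout, so a little-endian host kernel must byte-swap each field before printing it. A minimal userspace sketch of what such a conversion does; `be64_load` is a hypothetical stand-in for the kernel helper, reading raw big-endian bytes in an endian-independent way:

```c
#include <assert.h>
#include <stdint.h>

/* Interpret 8 bytes stored most-significant-first (big-endian) as a
 * host-order integer, regardless of the host's own endianness. */
static uint64_t be64_load(const uint8_t b[8])
{
	uint64_t v = 0;
	int i;

	for (i = 0; i < 8; i++)
		v = (v << 8) | b[i];
	return v;
}

/* Big-endian encoding of the value 0x0102. */
static const uint8_t demo_be[8] = { 0, 0, 0, 0, 0, 0, 0x01, 0x02 };
```

On a big-endian host the kernel's `be64_to_cpu()` compiles to a no-op; on little-endian it becomes a byte swap, which is why omitting it went unnoticed until powerpc gained little-endian configurations.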
+24 -1
arch/powerpc/platforms/powernv/pci.h
··· 21 21 #define PNV_IODA_PE_DEV (1 << 0) /* PE has single PCI device */ 22 22 #define PNV_IODA_PE_BUS (1 << 1) /* PE has primary PCI bus */ 23 23 #define PNV_IODA_PE_BUS_ALL (1 << 2) /* PE has subordinate buses */ 24 + #define PNV_IODA_PE_MASTER (1 << 3) /* Master PE in compound case */ 25 + #define PNV_IODA_PE_SLAVE (1 << 4) /* Slave PE in compound case */ 24 26 25 27 /* Data associated with a PE, including IOMMU tracking etc.. */ 26 28 struct pnv_phb; ··· 65 63 * PE number) 66 64 */ 67 65 int mve_number; 66 + 67 + /* PEs in compound case */ 68 + struct pnv_ioda_pe *master; 69 + struct list_head slaves; 68 70 69 71 /* Link in list of PE#s */ 70 72 struct list_head dma_link; ··· 125 119 void (*fixup_phb)(struct pci_controller *hose); 126 120 u32 (*bdfn_to_pe)(struct pnv_phb *phb, struct pci_bus *bus, u32 devfn); 127 121 void (*shutdown)(struct pnv_phb *phb); 122 + int (*init_m64)(struct pnv_phb *phb); 123 + void (*alloc_m64_pe)(struct pnv_phb *phb); 124 + int (*pick_m64_pe)(struct pnv_phb *phb, struct pci_bus *bus, int all); 125 + int (*get_pe_state)(struct pnv_phb *phb, int pe_no); 126 + void (*freeze_pe)(struct pnv_phb *phb, int pe_no); 127 + int (*unfreeze_pe)(struct pnv_phb *phb, int pe_no, int opt); 128 128 129 129 union { 130 130 struct { ··· 141 129 /* Global bridge info */ 142 130 unsigned int total_pe; 143 131 unsigned int reserved_pe; 132 + 133 + /* 32-bit MMIO window */ 144 134 unsigned int m32_size; 145 135 unsigned int m32_segsize; 146 136 unsigned int m32_pci_base; 137 + 138 + /* 64-bit MMIO window */ 139 + unsigned int m64_bar_idx; 140 + unsigned long m64_size; 141 + unsigned long m64_segsize; 142 + unsigned long m64_base; 143 + unsigned long m64_bar_alloc; 144 + 145 + /* IO ports */ 147 146 unsigned int io_size; 148 147 unsigned int io_segsize; 149 148 unsigned int io_pci_base; ··· 221 198 int where, int size, u32 val); 222 199 extern void pnv_pci_setup_iommu_table(struct iommu_table *tbl, 223 200 void *tce_mem, u64 tce_size, 224 - u64 
dma_offset); 201 + u64 dma_offset, unsigned page_shift); 225 202 extern void pnv_pci_init_p5ioc2_hub(struct device_node *np); 226 203 extern void pnv_pci_init_ioda_hub(struct device_node *np); 227 204 extern void pnv_pci_init_ioda2_phb(struct device_node *np);
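The `PNV_IODA_PE_MASTER`/`PNV_IODA_PE_SLAVE` flags and the `master`/`slaves` fields added to the PE structure above implement "compound" PEs: several hardware PEs that must be frozen and unfrozen as one unit. A hypothetical userspace sketch of that relationship (field names and the singly linked slave chain are illustrative, not the kernel's `list_head`-based layout):

```c
#include <assert.h>
#include <stddef.h>

struct pe {
	int pe_no;
	int frozen;
	struct pe *master;	/* back-pointer to the master PE, NULL if none */
	struct pe *next_slave;	/* chain of slave PEs hanging off the master */
};

/* Freezing the master propagates to every slave, keeping the compound
 * PE in a consistent state. */
static void freeze_compound(struct pe *master)
{
	struct pe *s;

	master->frozen = 1;
	for (s = master->next_slave; s; s = s->next_slave)
		s->frozen = 1;
}

static int demo_freeze(void)
{
	struct pe s2 = { .pe_no = 3 };
	struct pe s1 = { .pe_no = 2, .next_slave = &s2 };
	struct pe m  = { .pe_no = 1, .next_slave = &s1 };

	s1.master = &m;
	s2.master = &m;
	freeze_compound(&m);
	return m.frozen && s1.frozen && s2.frozen;
}
```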
+1 -1
arch/powerpc/platforms/powernv/rng.c
··· 123 123 124 124 return 0; 125 125 } 126 - subsys_initcall(rng_init); 126 + machine_subsys_initcall(powernv, rng_init);
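The recurring `subsys_initcall(fn)` → `machine_subsys_initcall(platform, fn)` conversions in this series (here and in the pseries files below) gate each platform initcall on the booted machine, instead of running it unconditionally in a multiplatform kernel; that is also why several files can drop their open-coded `machine_is()` guards. A hypothetical userspace sketch of the gating, assuming an illustrative `run_machine_initcall` wrapper:

```c
#include <assert.h>
#include <string.h>

/* Stand-in for the kernel's machine_is(): compare against the platform
 * we actually booted on. */
static const char *booted_platform = "powernv";

static int machine_is(const char *name)
{
	return strcmp(booted_platform, name) == 0;
}

/* What machine_subsys_initcall() conceptually expands to: only invoke
 * the initcall body when the platform matches. */
static int run_machine_initcall(const char *platform, int (*fn)(void))
{
	if (!machine_is(platform))
		return 0;	/* skip silently on other platforms */
	return fn();
}

static int calls;
static int demo_init(void) { calls++; return 0; }
```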
+2
arch/powerpc/platforms/powernv/setup.c
··· 264 264 ppc_md.halt = pnv_halt; 265 265 ppc_md.machine_check_exception = opal_machine_check; 266 266 ppc_md.mce_check_early_recovery = opal_mce_check_early_recovery; 267 + ppc_md.hmi_exception_early = opal_hmi_exception_early; 268 + ppc_md.handle_hmi_exception = opal_handle_hmi_exception; 267 269 } 268 270 269 271 #ifdef CONFIG_PPC_POWERNV_RTAS
+2 -1
arch/powerpc/platforms/pseries/dtl.c
··· 29 29 #include <asm/lppaca.h> 30 30 #include <asm/debug.h> 31 31 #include <asm/plpar_wrappers.h> 32 + #include <asm/machdep.h> 32 33 33 34 struct dtl { 34 35 struct dtl_entry *buf; ··· 392 391 err: 393 392 return rc; 394 393 } 395 - arch_initcall(dtl_init); 394 + machine_arch_initcall(pseries, dtl_init);
+18 -22
arch/powerpc/platforms/pseries/eeh_pseries.c
··· 89 89 * of domain/bus/slot/function for EEH RTAS operations. 90 90 */ 91 91 if (ibm_set_eeh_option == RTAS_UNKNOWN_SERVICE) { 92 - pr_warning("%s: RTAS service <ibm,set-eeh-option> invalid\n", 92 + pr_warn("%s: RTAS service <ibm,set-eeh-option> invalid\n", 93 93 __func__); 94 94 return -EINVAL; 95 95 } else if (ibm_set_slot_reset == RTAS_UNKNOWN_SERVICE) { 96 - pr_warning("%s: RTAS service <ibm,set-slot-reset> invalid\n", 96 + pr_warn("%s: RTAS service <ibm,set-slot-reset> invalid\n", 97 97 __func__); 98 98 return -EINVAL; 99 99 } else if (ibm_read_slot_reset_state2 == RTAS_UNKNOWN_SERVICE && 100 100 ibm_read_slot_reset_state == RTAS_UNKNOWN_SERVICE) { 101 - pr_warning("%s: RTAS service <ibm,read-slot-reset-state2> and " 101 + pr_warn("%s: RTAS service <ibm,read-slot-reset-state2> and " 102 102 "<ibm,read-slot-reset-state> invalid\n", 103 103 __func__); 104 104 return -EINVAL; 105 105 } else if (ibm_slot_error_detail == RTAS_UNKNOWN_SERVICE) { 106 - pr_warning("%s: RTAS service <ibm,slot-error-detail> invalid\n", 106 + pr_warn("%s: RTAS service <ibm,slot-error-detail> invalid\n", 107 107 __func__); 108 108 return -EINVAL; 109 109 } else if (ibm_configure_pe == RTAS_UNKNOWN_SERVICE && 110 110 ibm_configure_bridge == RTAS_UNKNOWN_SERVICE) { 111 - pr_warning("%s: RTAS service <ibm,configure-pe> and " 111 + pr_warn("%s: RTAS service <ibm,configure-pe> and " 112 112 "<ibm,configure-bridge> invalid\n", 113 113 __func__); 114 114 return -EINVAL; ··· 118 118 spin_lock_init(&slot_errbuf_lock); 119 119 eeh_error_buf_size = rtas_token("rtas-error-log-max"); 120 120 if (eeh_error_buf_size == RTAS_UNKNOWN_SERVICE) { 121 - pr_warning("%s: unknown EEH error log size\n", 121 + pr_warn("%s: unknown EEH error log size\n", 122 122 __func__); 123 123 eeh_error_buf_size = 1024; 124 124 } else if (eeh_error_buf_size > RTAS_ERROR_LOG_MAX) { 125 - pr_warning("%s: EEH error log size %d exceeds the maximal %d\n", 125 + pr_warn("%s: EEH error log size %d exceeds the maximal %d\n", 126 
126 __func__, eeh_error_buf_size, RTAS_ERROR_LOG_MAX); 127 127 eeh_error_buf_size = RTAS_ERROR_LOG_MAX; 128 128 } 129 129 130 130 /* Set EEH probe mode */ 131 - eeh_probe_mode_set(EEH_PROBE_MODE_DEVTREE); 131 + eeh_add_flag(EEH_PROBE_MODE_DEVTREE | EEH_ENABLE_IO_FOR_LOG); 132 132 133 133 return 0; 134 134 } ··· 270 270 /* Retrieve the device address */ 271 271 regs = of_get_property(dn, "reg", NULL); 272 272 if (!regs) { 273 - pr_warning("%s: OF node property %s::reg not found\n", 273 + pr_warn("%s: OF node property %s::reg not found\n", 274 274 __func__, dn->full_name); 275 275 return NULL; 276 276 } ··· 297 297 enable = 1; 298 298 299 299 if (enable) { 300 - eeh_set_enable(true); 300 + eeh_add_flag(EEH_ENABLED); 301 301 eeh_add_to_parent_pe(edev); 302 302 303 303 pr_debug("%s: EEH enabled on %s PHB#%d-PE#%x, config addr#%x\n", ··· 398 398 pe->config_addr, BUID_HI(pe->phb->buid), 399 399 BUID_LO(pe->phb->buid), 0); 400 400 if (ret) { 401 - pr_warning("%s: Failed to get address for PHB#%d-PE#%x\n", 401 + pr_warn("%s: Failed to get address for PHB#%d-PE#%x\n", 402 402 __func__, pe->phb->global_number, pe->config_addr); 403 403 return 0; 404 404 } ··· 411 411 pe->config_addr, BUID_HI(pe->phb->buid), 412 412 BUID_LO(pe->phb->buid), 0); 413 413 if (ret) { 414 - pr_warning("%s: Failed to get address for PHB#%d-PE#%x\n", 414 + pr_warn("%s: Failed to get address for PHB#%d-PE#%x\n", 415 415 __func__, pe->phb->global_number, pe->config_addr); 416 416 return 0; 417 417 } ··· 584 584 return ret; 585 585 586 586 if (max_wait <= 0) { 587 - pr_warning("%s: Timeout when getting PE's state (%d)\n", 587 + pr_warn("%s: Timeout when getting PE's state (%d)\n", 588 588 __func__, max_wait); 589 589 return EEH_STATE_NOT_SUPPORT; 590 590 } 591 591 592 592 if (mwait <= 0) { 593 - pr_warning("%s: Firmware returned bad wait value %d\n", 593 + pr_warn("%s: Firmware returned bad wait value %d\n", 594 594 __func__, mwait); 595 595 mwait = EEH_STATE_MIN_WAIT_TIME; 596 596 } else if (mwait > 
EEH_STATE_MAX_WAIT_TIME) { 597 - pr_warning("%s: Firmware returned too long wait value %d\n", 597 + pr_warn("%s: Firmware returned too long a wait value %d\n", 598 598 __func__, mwait); 599 599 mwait = EEH_STATE_MAX_WAIT_TIME; 600 600 } ··· 675 675 } 676 676 677 677 if (ret) 678 - pr_warning("%s: Unable to configure bridge PHB#%d-PE#%x (%d)\n", 678 + pr_warn("%s: Unable to configure bridge PHB#%d-PE#%x (%d)\n", 679 679 __func__, pe->phb->global_number, pe->addr, ret); 680 680 681 681 return ret; ··· 743 743 */ 744 744 static int __init eeh_pseries_init(void) 745 745 { 746 - int ret = -EINVAL; 747 - 748 - if (!machine_is(pseries)) 749 - return ret; 746 + int ret; 750 747 751 748 ret = eeh_ops_register(&pseries_eeh_ops); 752 749 if (!ret) ··· 754 757 755 758 return ret; 756 759 } 757 - 758 - early_initcall(eeh_pseries_init); 760 + machine_early_initcall(pseries, eeh_pseries_init);
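Besides the `pr_warning` → `pr_warn` churn, the eeh_pseries.c hunk replaces the old single-purpose setters (`eeh_probe_mode_set()`, `eeh_set_enable()`) with a consolidated bitmask manipulated through `eeh_add_flag()`. A hypothetical userspace sketch of that flag API; the flag values below are illustrative, not the kernel's actual constants:

```c
#include <assert.h>

/* Illustrative flag bits, mirroring the shape of EEH_ENABLED,
 * EEH_PROBE_MODE_DEVTREE, EEH_ENABLE_IO_FOR_LOG. */
#define EEH_ENABLED		0x01u
#define EEH_PROBE_MODE_DEVTREE	0x04u
#define EEH_ENABLE_IO_FOR_LOG	0x10u

static unsigned int eeh_state;

/* Multiple flags can be OR'd into one call, as the diff does. */
static void eeh_add_flag(unsigned int flag) { eeh_state |= flag; }
static int  eeh_has_flag(unsigned int flag) { return (eeh_state & flag) == flag; }

static int demo_flags(void)
{
	eeh_add_flag(EEH_PROBE_MODE_DEVTREE | EEH_ENABLE_IO_FOR_LOG);
	return eeh_has_flag(EEH_PROBE_MODE_DEVTREE) &&
	       eeh_has_flag(EEH_ENABLE_IO_FOR_LOG) &&
	       !eeh_has_flag(EEH_ENABLED);
}
```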
+121 -51
arch/powerpc/platforms/pseries/hvCall.S
··· 12 12 #include <asm/ppc_asm.h> 13 13 #include <asm/asm-offsets.h> 14 14 #include <asm/ptrace.h> 15 + #include <asm/jump_label.h> 16 + 17 + .section ".text" 15 18 16 19 #ifdef CONFIG_TRACEPOINTS 17 20 21 + #ifndef CONFIG_JUMP_LABEL 18 22 .section ".toc","aw" 19 23 20 24 .globl hcall_tracepoint_refcount ··· 26 22 .llong 0 27 23 28 24 .section ".text" 25 + #endif 29 26 30 27 /* 31 28 * precall must preserve all registers. use unused STK_PARAM() 32 - * areas to save snapshots and opcode. We branch around this 33 - * in early init (eg when populating the MMU hashtable) by using an 34 - * unconditional cpu feature. 29 + * areas to save snapshots and opcode. 35 30 */ 36 31 #define HCALL_INST_PRECALL(FIRST_REG) \ 37 - BEGIN_FTR_SECTION; \ 38 - b 1f; \ 39 - END_FTR_SECTION(0, 1); \ 40 - ld r12,hcall_tracepoint_refcount@toc(r2); \ 41 - std r12,32(r1); \ 42 - cmpdi r12,0; \ 43 - beq+ 1f; \ 44 32 mflr r0; \ 45 33 std r3,STK_PARAM(R3)(r1); \ 46 34 std r4,STK_PARAM(R4)(r1); \ ··· 46 50 addi r4,r1,STK_PARAM(FIRST_REG); \ 47 51 stdu r1,-STACK_FRAME_OVERHEAD(r1); \ 48 52 bl __trace_hcall_entry; \ 49 - addi r1,r1,STACK_FRAME_OVERHEAD; \ 50 - ld r0,16(r1); \ 51 - ld r3,STK_PARAM(R3)(r1); \ 52 - ld r4,STK_PARAM(R4)(r1); \ 53 - ld r5,STK_PARAM(R5)(r1); \ 54 - ld r6,STK_PARAM(R6)(r1); \ 55 - ld r7,STK_PARAM(R7)(r1); \ 56 - ld r8,STK_PARAM(R8)(r1); \ 57 - ld r9,STK_PARAM(R9)(r1); \ 58 - ld r10,STK_PARAM(R10)(r1); \ 59 - mtlr r0; \ 60 - 1: 53 + ld r3,STACK_FRAME_OVERHEAD+STK_PARAM(R3)(r1); \ 54 + ld r4,STACK_FRAME_OVERHEAD+STK_PARAM(R4)(r1); \ 55 + ld r5,STACK_FRAME_OVERHEAD+STK_PARAM(R5)(r1); \ 56 + ld r6,STACK_FRAME_OVERHEAD+STK_PARAM(R6)(r1); \ 57 + ld r7,STACK_FRAME_OVERHEAD+STK_PARAM(R7)(r1); \ 58 + ld r8,STACK_FRAME_OVERHEAD+STK_PARAM(R8)(r1); \ 59 + ld r9,STACK_FRAME_OVERHEAD+STK_PARAM(R9)(r1); \ 60 + ld r10,STACK_FRAME_OVERHEAD+STK_PARAM(R10)(r1) 61 61 62 62 /* 63 63 * postcall is performed immediately before function return which 64 - * allows liberal use of volatile 
registers. We branch around this 65 - * in early init (eg when populating the MMU hashtable) by using an 66 - * unconditional cpu feature. 64 + * allows liberal use of volatile registers. 67 65 */ 68 66 #define __HCALL_INST_POSTCALL \ 69 - BEGIN_FTR_SECTION; \ 70 - b 1f; \ 71 - END_FTR_SECTION(0, 1); \ 72 - ld r12,32(r1); \ 73 - cmpdi r12,0; \ 74 - beq+ 1f; \ 75 - mflr r0; \ 76 - ld r6,STK_PARAM(R3)(r1); \ 77 - std r3,STK_PARAM(R3)(r1); \ 67 + ld r0,STACK_FRAME_OVERHEAD+STK_PARAM(R3)(r1); \ 68 + std r3,STACK_FRAME_OVERHEAD+STK_PARAM(R3)(r1); \ 78 69 mr r4,r3; \ 79 - mr r3,r6; \ 80 - std r0,16(r1); \ 81 - stdu r1,-STACK_FRAME_OVERHEAD(r1); \ 70 + mr r3,r0; \ 82 71 bl __trace_hcall_exit; \ 72 + ld r0,STACK_FRAME_OVERHEAD+16(r1); \ 83 73 addi r1,r1,STACK_FRAME_OVERHEAD; \ 84 - ld r0,16(r1); \ 85 74 ld r3,STK_PARAM(R3)(r1); \ 86 - mtlr r0; \ 87 - 1: 75 + mtlr r0 88 76 89 77 #define HCALL_INST_POSTCALL_NORETS \ 90 78 li r5,0; \ ··· 78 98 mr r5,BUFREG; \ 79 99 __HCALL_INST_POSTCALL 80 100 101 + #ifdef CONFIG_JUMP_LABEL 102 + #define HCALL_BRANCH(LABEL) \ 103 + ARCH_STATIC_BRANCH(LABEL, hcall_tracepoint_key) 104 + #else 105 + 106 + /* 107 + * We branch around this in early init (eg when populating the MMU 108 + * hashtable) by using an unconditional cpu feature. 
109 + */ 110 + #define HCALL_BRANCH(LABEL) \ 111 + BEGIN_FTR_SECTION; \ 112 + b 1f; \ 113 + END_FTR_SECTION(0, 1); \ 114 + ld r12,hcall_tracepoint_refcount@toc(r2); \ 115 + std r12,32(r1); \ 116 + cmpdi r12,0; \ 117 + bne- LABEL; \ 118 + 1: 119 + #endif 120 + 81 121 #else 82 122 #define HCALL_INST_PRECALL(FIRST_ARG) 83 123 #define HCALL_INST_POSTCALL_NORETS 84 124 #define HCALL_INST_POSTCALL(BUFREG) 125 + #define HCALL_BRANCH(LABEL) 85 126 #endif 86 - 87 - .text 88 127 89 128 _GLOBAL_TOC(plpar_hcall_norets) 90 129 HMT_MEDIUM 91 130 92 131 mfcr r0 93 132 stw r0,8(r1) 94 - 95 - HCALL_INST_PRECALL(R4) 96 - 133 + HCALL_BRANCH(plpar_hcall_norets_trace) 97 134 HVSC /* invoke the hypervisor */ 98 - 99 - HCALL_INST_POSTCALL_NORETS 100 135 101 136 lwz r0,8(r1) 102 137 mtcrf 0xff,r0 103 138 blr /* return r3 = status */ 139 + 140 + #ifdef CONFIG_TRACEPOINTS 141 + plpar_hcall_norets_trace: 142 + HCALL_INST_PRECALL(R4) 143 + HVSC 144 + HCALL_INST_POSTCALL_NORETS 145 + lwz r0,8(r1) 146 + mtcrf 0xff,r0 147 + blr 148 + #endif 104 149 105 150 _GLOBAL_TOC(plpar_hcall) 106 151 HMT_MEDIUM ··· 133 128 mfcr r0 134 129 stw r0,8(r1) 135 130 136 - HCALL_INST_PRECALL(R5) 131 + HCALL_BRANCH(plpar_hcall_trace) 137 132 138 133 std r4,STK_PARAM(R4)(r1) /* Save ret buffer */ 139 134 ··· 152 147 std r6, 16(r12) 153 148 std r7, 24(r12) 154 149 150 + lwz r0,8(r1) 151 + mtcrf 0xff,r0 152 + 153 + blr /* return r3 = status */ 154 + 155 + #ifdef CONFIG_TRACEPOINTS 156 + plpar_hcall_trace: 157 + HCALL_INST_PRECALL(R5) 158 + 159 + std r4,STK_PARAM(R4)(r1) 160 + mr r0,r4 161 + 162 + mr r4,r5 163 + mr r5,r6 164 + mr r6,r7 165 + mr r7,r8 166 + mr r8,r9 167 + mr r9,r10 168 + 169 + HVSC 170 + 171 + ld r12,STK_PARAM(R4)(r1) 172 + std r4,0(r12) 173 + std r5,8(r12) 174 + std r6,16(r12) 175 + std r7,24(r12) 176 + 155 177 HCALL_INST_POSTCALL(r12) 156 178 157 179 lwz r0,8(r1) 158 180 mtcrf 0xff,r0 159 181 160 - blr /* return r3 = status */ 182 + blr 183 + #endif 161 184 162 185 /* 163 186 * plpar_hcall_raw can be 
called in real mode. kexec/kdump need some ··· 227 194 mfcr r0 228 195 stw r0,8(r1) 229 196 230 - HCALL_INST_PRECALL(R5) 197 + HCALL_BRANCH(plpar_hcall9_trace) 231 198 232 199 std r4,STK_PARAM(R4)(r1) /* Save ret buffer */ 233 200 ··· 255 222 std r11,56(r12) 256 223 std r0, 64(r12) 257 224 225 + lwz r0,8(r1) 226 + mtcrf 0xff,r0 227 + 228 + blr /* return r3 = status */ 229 + 230 + #ifdef CONFIG_TRACEPOINTS 231 + plpar_hcall9_trace: 232 + HCALL_INST_PRECALL(R5) 233 + 234 + std r4,STK_PARAM(R4)(r1) 235 + mr r0,r4 236 + 237 + mr r4,r5 238 + mr r5,r6 239 + mr r6,r7 240 + mr r7,r8 241 + mr r8,r9 242 + mr r9,r10 243 + ld r10,STACK_FRAME_OVERHEAD+STK_PARAM(R11)(r1) 244 + ld r11,STACK_FRAME_OVERHEAD+STK_PARAM(R12)(r1) 245 + ld r12,STACK_FRAME_OVERHEAD+STK_PARAM(R13)(r1) 246 + 247 + HVSC 248 + 249 + mr r0,r12 250 + ld r12,STACK_FRAME_OVERHEAD+STK_PARAM(R4)(r1) 251 + std r4,0(r12) 252 + std r5,8(r12) 253 + std r6,16(r12) 254 + std r7,24(r12) 255 + std r8,32(r12) 256 + std r9,40(r12) 257 + std r10,48(r12) 258 + std r11,56(r12) 259 + std r0,64(r12) 260 + 258 261 HCALL_INST_POSTCALL(r12) 259 262 260 263 lwz r0,8(r1) 261 264 mtcrf 0xff,r0 262 265 263 - blr /* return r3 = status */ 266 + blr 267 + #endif 264 268 265 269 /* See plpar_hcall_raw to see why this is needed */ 266 270 _GLOBAL(plpar_hcall9_raw)
+2 -1
arch/powerpc/platforms/pseries/hvCall_inst.c
··· 27 27 #include <asm/firmware.h> 28 28 #include <asm/cputable.h> 29 29 #include <asm/trace.h> 30 + #include <asm/machdep.h> 30 31 31 32 DEFINE_PER_CPU(struct hcall_stats[HCALL_STAT_ARRAY_SIZE], hcall_stats); 32 33 ··· 163 162 164 163 return 0; 165 164 } 166 - __initcall(hcall_inst_init); 165 + machine_device_initcall(pseries, hcall_inst_init);
+23 -7
arch/powerpc/platforms/pseries/lpar.c
··· 26 26 #include <linux/dma-mapping.h> 27 27 #include <linux/console.h> 28 28 #include <linux/export.h> 29 + #include <linux/static_key.h> 29 30 #include <asm/processor.h> 30 31 #include <asm/mmu.h> 31 32 #include <asm/page.h> ··· 650 649 #endif 651 650 652 651 #ifdef CONFIG_TRACEPOINTS 652 + #ifdef CONFIG_JUMP_LABEL 653 + struct static_key hcall_tracepoint_key = STATIC_KEY_INIT; 654 + 655 + void hcall_tracepoint_regfunc(void) 656 + { 657 + static_key_slow_inc(&hcall_tracepoint_key); 658 + } 659 + 660 + void hcall_tracepoint_unregfunc(void) 661 + { 662 + static_key_slow_dec(&hcall_tracepoint_key); 663 + } 664 + #else 653 665 /* 654 666 * We optimise our hcall path by placing hcall_tracepoint_refcount 655 667 * directly in the TOC so we can check if the hcall tracepoints are ··· 671 657 672 658 /* NB: reg/unreg are called while guarded with the tracepoints_mutex */ 673 659 extern long hcall_tracepoint_refcount; 674 - 675 - /* 676 - * Since the tracing code might execute hcalls we need to guard against 677 - * recursion. One example of this are spinlocks calling H_YIELD on 678 - * shared processor partitions. 679 - */ 680 - static DEFINE_PER_CPU(unsigned int, hcall_trace_depth); 681 660 682 661 void hcall_tracepoint_regfunc(void) 683 662 { ··· 681 674 { 682 675 hcall_tracepoint_refcount--; 683 676 } 677 + #endif 678 + 679 + /* 680 + * Since the tracing code might execute hcalls we need to guard against 681 + * recursion. One example of this are spinlocks calling H_YIELD on 682 + * shared processor partitions. 683 + */ 684 + static DEFINE_PER_CPU(unsigned int, hcall_trace_depth); 685 + 684 686 685 687 void __trace_hcall_entry(unsigned long opcode, unsigned long *args) 686 688 {
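The lpar.c and hvCall.S hunks above split hcall tracing into two gating strategies: with `CONFIG_JUMP_LABEL`, a static key patches the branch in and out of the hot path; without it, the code falls back to testing `hcall_tracepoint_refcount` in the TOC. A userspace sketch of the refcount fallback (the static-key variant has no direct userspace analogue, since it rewrites instructions at runtime):

```c
#include <assert.h>

/* Fallback gate: a plain counter incremented/decremented as tracepoint
 * consumers register and unregister. */
static long hcall_tracepoint_refcount;

static void hcall_tracepoint_regfunc(void)   { hcall_tracepoint_refcount++; }
static void hcall_tracepoint_unregfunc(void) { hcall_tracepoint_refcount--; }

/* The hcall path checks this on every call; the jump-label variant
 * avoids even this load-and-compare on the fast path. */
static int hcall_needs_trace(void)
{
	return hcall_tracepoint_refcount > 0;
}

static int demo_toggle(void)
{
	int ok = !hcall_needs_trace();	/* nothing registered yet */

	hcall_tracepoint_regfunc();	/* a tracer attaches */
	ok = ok && hcall_needs_trace();
	hcall_tracepoint_unregfunc();	/* tracer detaches */
	return ok && !hcall_needs_trace();
}
```

This is also why the assembly gains separate `plpar_hcall*_trace` entry points: the untraced path no longer pays for saving and restoring all the argument registers.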
+2 -1
arch/powerpc/platforms/pseries/mobility.c
··· 18 18 #include <linux/delay.h> 19 19 #include <linux/slab.h> 20 20 21 + #include <asm/machdep.h> 21 22 #include <asm/rtas.h> 22 23 #include "pseries.h" 23 24 ··· 363 362 364 363 return rc; 365 364 } 366 - device_initcall(mobility_sysfs_init); 365 + machine_device_initcall(pseries, mobility_sysfs_init);
+2 -2
arch/powerpc/platforms/pseries/msi.c
··· 16 16 #include <asm/rtas.h> 17 17 #include <asm/hw_irq.h> 18 18 #include <asm/ppc-pci.h> 19 + #include <asm/machdep.h> 19 20 20 21 static int query_token, change_token; 21 22 ··· 533 532 534 533 return 0; 535 534 } 536 - arch_initcall(rtas_msi_init); 537 - 535 + machine_arch_initcall(pseries, rtas_msi_init);
+2 -2
arch/powerpc/platforms/pseries/pci_dlpar.c
··· 118 118 } 119 119 } 120 120 121 - /* Unregister the bridge device from sysfs and remove the PCI bus */ 122 - device_unregister(b->bridge); 121 + /* Remove the PCI bus and unregister the bridge device from sysfs */ 123 122 phb->bus = NULL; 124 123 pci_remove_bus(b); 124 + device_unregister(b->bridge); 125 125 126 126 /* Now release the IO resource */ 127 127 if (res->flags & IORESOURCE_IO)
+3 -2
arch/powerpc/platforms/pseries/power.c
··· 25 25 #include <linux/string.h> 26 26 #include <linux/errno.h> 27 27 #include <linux/init.h> 28 + #include <asm/machdep.h> 28 29 29 30 unsigned long rtas_poweron_auto; /* default and normal state is 0 */ 30 31 ··· 72 71 return -ENOMEM; 73 72 return sysfs_create_group(power_kobj, &attr_group); 74 73 } 75 - core_initcall(pm_init); 74 + machine_core_initcall(pseries, pm_init); 76 75 #else 77 76 static int __init apo_pm_init(void) 78 77 { 79 78 return (sysfs_create_file(power_kobj, &auto_poweron_attr.attr)); 80 79 } 81 - __initcall(apo_pm_init); 80 + machine_device_initcall(pseries, apo_pm_init); 82 81 #endif
+1 -1
arch/powerpc/platforms/pseries/ras.c
··· 71 71 72 72 return 0; 73 73 } 74 - subsys_initcall(init_ras_IRQ); 74 + machine_subsys_initcall(pseries, init_ras_IRQ); 75 75 76 76 #define EPOW_SHUTDOWN_NORMAL 1 77 77 #define EPOW_SHUTDOWN_ON_UPS 2
+1 -4
arch/powerpc/platforms/pseries/reconfig.c
··· 446 446 { 447 447 struct proc_dir_entry *ent; 448 448 449 - if (!machine_is(pseries)) 450 - return 0; 451 - 452 449 ent = proc_create("powerpc/ofdt", S_IWUSR, NULL, &ofdt_fops); 453 450 if (ent) 454 451 proc_set_size(ent, 0); 455 452 456 453 return 0; 457 454 } 458 - __initcall(proc_ppc64_create_ofdt); 455 + machine_device_initcall(pseries, proc_ppc64_create_ofdt);
+1 -1
arch/powerpc/platforms/pseries/rng.c
··· 42 42 43 43 return 0; 44 44 } 45 - subsys_initcall(rng_init); 45 + machine_subsys_initcall(pseries, rng_init);
+1 -1
arch/powerpc/platforms/pseries/setup.c
··· 351 351 352 352 return alloc_dispatch_logs(); 353 353 } 354 - early_initcall(alloc_dispatch_log_kmem_cache); 354 + machine_early_initcall(pseries, alloc_dispatch_log_kmem_cache); 355 355 356 356 static void pseries_lpar_idle(void) 357 357 {
+2 -3
arch/powerpc/platforms/pseries/suspend.c
··· 265 265 { 266 266 int rc; 267 267 268 - if (!machine_is(pseries) || !firmware_has_feature(FW_FEATURE_LPAR)) 268 + if (!firmware_has_feature(FW_FEATURE_LPAR)) 269 269 return 0; 270 270 271 271 suspend_data.token = rtas_token("ibm,suspend-me"); ··· 280 280 suspend_set_ops(&pseries_suspend_ops); 281 281 return 0; 282 282 } 283 - 284 - __initcall(pseries_suspend_init); 283 + machine_device_initcall(pseries, pseries_suspend_init);
+2 -2
arch/powerpc/sysdev/fsl_pci.c
··· 853 853 in = pcie->cfg_type0 + PEX_RC_INWIN_BASE; 854 854 for (i = 0; i < 4; i++) { 855 855 /* not enabled, skip */ 856 - if (!in_le32(&in[i].ar) & PEX_RCIWARn_EN) 857 - continue; 856 + if (!(in_le32(&in[i].ar) & PEX_RCIWARn_EN)) 857 + continue; 858 858 859 859 if (get_immrbase() == in_le32(&in[i].tar)) 860 860 return (u64)in_le32(&in[i].barh) << 32 |
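The one-line fsl_pci.c fix above is an operator-precedence bug: `!` binds tighter than `&`, so `!in_le32(&in[i].ar) & PEX_RCIWARn_EN` negated the whole register first and then masked the 0-or-1 result, meaning the window was only skipped when the register was entirely zero rather than whenever the enable bit was clear. A small sketch of the two forms, with an illustrative enable bit:

```c
#include <assert.h>
#include <stdint.h>

#define EN 0x1u	/* illustrative enable bit, like PEX_RCIWARn_EN */

/* Buggy form: evaluates as (!reg) & EN, i.e. "register is all zero". */
static int skip_buggy(uint32_t reg) { return !reg & EN; }

/* Fixed form: mask first, then negate, i.e. "enable bit is clear". */
static int skip_fixed(uint32_t reg) { return !(reg & EN); }
```

A window with the enable bit clear but any other bit set shows the difference: the fixed form skips it, the buggy form did not.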
-1
arch/powerpc/sysdev/micropatch.c
··· 13 13 #include <linux/mm.h> 14 14 #include <linux/interrupt.h> 15 15 #include <asm/irq.h> 16 - #include <asm/mpc8xx.h> 17 16 #include <asm/page.h> 18 17 #include <asm/pgtable.h> 19 18 #include <asm/8xx_immap.h>
+1 -1
arch/powerpc/sysdev/mpic_msgr.c
··· 184 184 dev_info(&dev->dev, "Found %d message registers\n", 185 185 mpic_msgr_count); 186 186 187 - mpic_msgrs = kzalloc(sizeof(struct mpic_msgr) * mpic_msgr_count, 187 + mpic_msgrs = kcalloc(mpic_msgr_count, sizeof(*mpic_msgrs), 188 188 GFP_KERNEL); 189 189 if (!mpic_msgrs) { 190 190 dev_err(&dev->dev,
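The mpic_msgr.c change swaps an open-coded `kzalloc(sizeof(...) * count, ...)` for `kcalloc(count, sizeof(...), ...)`: the explicit multiplication can wrap on a large (e.g. attacker- or firmware-supplied) count, silently allocating a smaller buffer than intended, whereas `kcalloc` checks for overflow and fails the allocation instead. A sketch of the check `kcalloc` performs internally:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Would n * size wrap around size_t? This is the guard that the plain
 * kzalloc(n * size) call lacks. */
static int mul_overflows(size_t n, size_t size)
{
	return size != 0 && n > SIZE_MAX / size;
}
```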
+1 -33
arch/powerpc/xmon/xmon.c
··· 2058 2058 DUMP(p, kernel_toc, "lx"); 2059 2059 DUMP(p, kernelbase, "lx"); 2060 2060 DUMP(p, kernel_msr, "lx"); 2061 - #ifdef CONFIG_PPC_STD_MMU_64 2062 - DUMP(p, stab_real, "lx"); 2063 - DUMP(p, stab_addr, "lx"); 2064 - #endif 2065 2061 DUMP(p, emergency_sp, "p"); 2066 2062 #ifdef CONFIG_PPC_BOOK3S_64 2067 2063 DUMP(p, mc_emergency_sp, "p"); ··· 2690 2694 } 2691 2695 2692 2696 #ifdef CONFIG_PPC_BOOK3S_64 2693 - static void dump_slb(void) 2697 + void dump_segments(void) 2694 2698 { 2695 2699 int i; 2696 2700 unsigned long esid,vsid,valid; ··· 2721 2725 printf("\n"); 2722 2726 } 2723 2727 } 2724 - } 2725 - 2726 - static void dump_stab(void) 2727 - { 2728 - int i; 2729 - unsigned long *tmp = (unsigned long *)local_paca->stab_addr; 2730 - 2731 - printf("Segment table contents of cpu 0x%x\n", smp_processor_id()); 2732 - 2733 - for (i = 0; i < PAGE_SIZE/16; i++) { 2734 - unsigned long a, b; 2735 - 2736 - a = *tmp++; 2737 - b = *tmp++; 2738 - 2739 - if (a || b) { 2740 - printf("%03d %016lx ", i, a); 2741 - printf("%016lx\n", b); 2742 - } 2743 - } 2744 - } 2745 - 2746 - void dump_segments(void) 2747 - { 2748 - if (mmu_has_feature(MMU_FTR_SLB)) 2749 - dump_slb(); 2750 - else 2751 - dump_stab(); 2752 2728 } 2753 2729 #endif 2754 2730
+16 -2
drivers/cpufreq/powernv-cpufreq.c
··· 28 28 #include <linux/of.h> 29 29 30 30 #include <asm/cputhreads.h> 31 + #include <asm/firmware.h> 31 32 #include <asm/reg.h> 32 33 #include <asm/smp.h> /* Required for cpu_sibling_mask() in UP configs */ 33 34 ··· 99 98 return -ENODEV; 100 99 } 101 100 102 - WARN_ON(len_ids != len_freqs); 101 + if (len_ids != len_freqs) { 102 + pr_warn("Entries in ibm,pstate-ids and " 103 + "ibm,pstate-frequencies-mhz do not match\n"); 104 + } 105 + 103 106 nr_pstates = min(len_ids, len_freqs) / sizeof(u32); 104 107 if (!nr_pstates) { 105 108 pr_warn("No PStates found\n"); ··· 136 131 int i; 137 132 138 133 i = powernv_pstate_info.max - pstate_id; 139 - BUG_ON(i >= powernv_pstate_info.nr_pstates || i < 0); 134 + if (i >= powernv_pstate_info.nr_pstates || i < 0) { 135 + pr_warn("PState id %d outside of PState table, " 136 + "reporting nominal id %d instead\n", 137 + pstate_id, powernv_pstate_info.nominal); 138 + i = powernv_pstate_info.max - powernv_pstate_info.nominal; 139 + } 140 140 141 141 return powernv_freqs[i].frequency; 142 142 } ··· 330 320 static int __init powernv_cpufreq_init(void) 331 321 { 332 322 int rc = 0; 323 + 324 + /* Don't probe on pseries (guest) platforms */ 325 + if (!firmware_has_feature(FW_FEATURE_OPALv3)) 326 + return -ENODEV; 333 327 334 328 /* Discover pstates from device tree and init */ 335 329 rc = init_powernv_pstates();
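The cpufreq hunk above downgrades a `BUG_ON` on an out-of-range pstate id to a warn-and-clamp: rather than crashing the kernel on bad firmware data, the lookup falls back to the nominal pstate's table index. A sketch of that index computation and fallback, with illustrative values in the demo table:

```c
#include <assert.h>

/* Shape of the pstate bookkeeping: ids count down from max, and the
 * table index is the distance from max. */
struct pstate_info { int min, max, nominal, nr_pstates; };

static int pstate_to_index(const struct pstate_info *p, int pstate_id)
{
	int i = p->max - pstate_id;

	if (i >= p->nr_pstates || i < 0)
		i = p->max - p->nominal;	/* real code also pr_warn()s */
	return i;
}

/* Illustrative table: ids 0 down to -7, nominal id -1. */
static const struct pstate_info demo = {
	.min = -7, .max = 0, .nominal = -1, .nr_pstates = 8,
};
```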
+8 -8
drivers/cpuidle/cpuidle-powernv.c
··· 160 160 static int powernv_add_idle_states(void) 161 161 { 162 162 struct device_node *power_mgt; 163 - struct property *prop; 164 163 int nr_idle_states = 1; /* Snooze */ 165 164 int dt_idle_states; 166 - u32 *flags; 165 + const __be32 *idle_state_flags; 166 + u32 len_flags, flags; 167 167 int i; 168 168 169 169 /* Currently we have snooze statically defined */ ··· 174 174 return nr_idle_states; 175 175 } 176 176 177 - prop = of_find_property(power_mgt, "ibm,cpu-idle-state-flags", NULL); 178 - if (!prop) { 177 + idle_state_flags = of_get_property(power_mgt, "ibm,cpu-idle-state-flags", &len_flags); 178 + if (!idle_state_flags) { 179 179 pr_warn("DT-PowerMgmt: missing ibm,cpu-idle-state-flags\n"); 180 180 return nr_idle_states; 181 181 } 182 182 183 - dt_idle_states = prop->length / sizeof(u32); 184 - flags = (u32 *) prop->value; 183 + dt_idle_states = len_flags / sizeof(u32); 185 184 186 185 for (i = 0; i < dt_idle_states; i++) { 187 186 188 - if (flags[i] & IDLE_USE_INST_NAP) { 187 + flags = be32_to_cpu(idle_state_flags[i]); 188 + if (flags & IDLE_USE_INST_NAP) { 189 189 /* Add NAP state */ 190 190 strcpy(powernv_states[nr_idle_states].name, "Nap"); 191 191 strcpy(powernv_states[nr_idle_states].desc, "Nap"); ··· 196 196 nr_idle_states++; 197 197 } 198 198 199 - if (flags[i] & IDLE_USE_INST_SLEEP) { 199 + if (flags & IDLE_USE_INST_SLEEP) { 200 200 /* Add FASTSLEEP state */ 201 201 strcpy(powernv_states[nr_idle_states].name, "FastSleep"); 202 202 strcpy(powernv_states[nr_idle_states].desc, "FastSleep");
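The cpuidle hunk stops dereferencing the `ibm,cpu-idle-state-flags` property as native `u32`s and converts each cell with `be32_to_cpu()`: device-tree property data is always stored big-endian, so reading it raw only worked on big-endian kernels. An endian-independent sketch of pulling one 32-bit cell out of raw property bytes (`dt_cell` is a hypothetical helper, not a kernel API):

```c
#include <assert.h>
#include <stdint.h>

/* Read cell idx from a DT property blob: 4 bytes, most significant
 * first, regardless of host endianness. */
static uint32_t dt_cell(const uint8_t *prop, int idx)
{
	const uint8_t *b = prop + 4 * idx;

	return ((uint32_t)b[0] << 24) | ((uint32_t)b[1] << 16) |
	       ((uint32_t)b[2] << 8)  |  (uint32_t)b[3];
}

/* Two big-endian cells: 1 and 2. */
static const uint8_t demo_prop[8] = { 0, 0, 0, 1, 0, 0, 0, 2 };
```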
+10
drivers/memory/Kconfig
··· 61 61 analysis, especially for IOMMU/SMMU(System Memory Management 62 62 Unit) module. 63 63 64 + config FSL_CORENET_CF 65 + tristate "Freescale CoreNet Error Reporting" 66 + depends on FSL_SOC_BOOKE 67 + help 68 + Say Y for reporting of errors from the Freescale CoreNet 69 + Coherency Fabric. Errors reported include accesses to 70 + physical addresses that are mapped by no local access window 71 + (LAW) or an invalid LAW, as well as bad cache state that 72 + represents a coherency violation. 73 + 64 74 config FSL_IFC 65 75 bool 66 76 depends on FSL_SOC
+1
drivers/memory/Makefile
··· 7 7 endif 8 8 obj-$(CONFIG_TI_AEMIF) += ti-aemif.o 9 9 obj-$(CONFIG_TI_EMIF) += emif.o 10 + obj-$(CONFIG_FSL_CORENET_CF) += fsl-corenet-cf.o 10 11 obj-$(CONFIG_FSL_IFC) += fsl_ifc.o 11 12 obj-$(CONFIG_MVEBU_DEVBUS) += mvebu-devbus.o 12 13 obj-$(CONFIG_TEGRA20_MC) += tegra20-mc.o
+251
drivers/memory/fsl-corenet-cf.c
+ /*
+  * CoreNet Coherency Fabric error reporting
+  *
+  * Copyright 2014 Freescale Semiconductor Inc.
+  *
+  * This program is free software; you can redistribute it and/or modify it
+  * under the terms of the GNU General Public License as published by the
+  * Free Software Foundation; either version 2 of the License, or (at your
+  * option) any later version.
+  */
+
+ #include <linux/interrupt.h>
+ #include <linux/io.h>
+ #include <linux/irq.h>
+ #include <linux/module.h>
+ #include <linux/of.h>
+ #include <linux/of_address.h>
+ #include <linux/of_device.h>
+ #include <linux/of_irq.h>
+ #include <linux/platform_device.h>
+
+ enum ccf_version {
+     CCF1,
+     CCF2,
+ };
+
+ struct ccf_info {
+     enum ccf_version version;
+     int err_reg_offs;
+ };
+
+ static const struct ccf_info ccf1_info = {
+     .version = CCF1,
+     .err_reg_offs = 0xa00,
+ };
+
+ static const struct ccf_info ccf2_info = {
+     .version = CCF2,
+     .err_reg_offs = 0xe40,
+ };
+
+ static const struct of_device_id ccf_matches[] = {
+     {
+         .compatible = "fsl,corenet1-cf",
+         .data = &ccf1_info,
+     },
+     {
+         .compatible = "fsl,corenet2-cf",
+         .data = &ccf2_info,
+     },
+     {}
+ };
+
+ struct ccf_err_regs {
+     u32 errdet;     /* 0x00 Error Detect Register */
+     /* 0x04 Error Enable (ccf1)/Disable (ccf2) Register */
+     u32 errdis;
+     /* 0x08 Error Interrupt Enable Register (ccf2 only) */
+     u32 errinten;
+     u32 cecar;      /* 0x0c Error Capture Attribute Register */
+     u32 cecaddrh;   /* 0x10 Error Capture Address High */
+     u32 cecaddrl;   /* 0x14 Error Capture Address Low */
+     u32 cecar2;     /* 0x18 Error Capture Attribute Register 2 */
+ };
+
+ /* LAE/CV also valid for errdis and errinten */
+ #define ERRDET_LAE          (1 << 0)  /* Local Access Error */
+ #define ERRDET_CV           (1 << 1)  /* Coherency Violation */
+ #define ERRDET_CTYPE_SHIFT  26        /* Capture Type (ccf2 only) */
+ #define ERRDET_CTYPE_MASK   (0x1f << ERRDET_CTYPE_SHIFT)
+ #define ERRDET_CAP          (1 << 31) /* Capture Valid (ccf2 only) */
+
+ #define CECAR_VAL           (1 << 0)  /* Valid (ccf1 only) */
+ #define CECAR_UVT           (1 << 15) /* Unavailable target ID (ccf1) */
+ #define CECAR_SRCID_SHIFT_CCF1 24
+ #define CECAR_SRCID_MASK_CCF1 (0xff << CECAR_SRCID_SHIFT_CCF1)
+ #define CECAR_SRCID_SHIFT_CCF2 18
+ #define CECAR_SRCID_MASK_CCF2 (0xff << CECAR_SRCID_SHIFT_CCF2)
+
+ #define CECADDRH_ADDRH      0xff
+
+ struct ccf_private {
+     const struct ccf_info *info;
+     struct device *dev;
+     void __iomem *regs;
+     struct ccf_err_regs __iomem *err_regs;
+ };
+
+ static irqreturn_t ccf_irq(int irq, void *dev_id)
+ {
+     struct ccf_private *ccf = dev_id;
+     static DEFINE_RATELIMIT_STATE(ratelimit, DEFAULT_RATELIMIT_INTERVAL,
+                                   DEFAULT_RATELIMIT_BURST);
+     u32 errdet, cecar, cecar2;
+     u64 addr;
+     u32 src_id;
+     bool uvt = false;
+     bool cap_valid = false;
+
+     errdet = ioread32be(&ccf->err_regs->errdet);
+     cecar = ioread32be(&ccf->err_regs->cecar);
+     cecar2 = ioread32be(&ccf->err_regs->cecar2);
+     addr = ioread32be(&ccf->err_regs->cecaddrl);
+     addr |= ((u64)(ioread32be(&ccf->err_regs->cecaddrh) &
+                    CECADDRH_ADDRH)) << 32;
+
+     if (!__ratelimit(&ratelimit))
+         goto out;
+
+     switch (ccf->info->version) {
+     case CCF1:
+         if (cecar & CECAR_VAL) {
+             if (cecar & CECAR_UVT)
+                 uvt = true;
+
+             src_id = (cecar & CECAR_SRCID_MASK_CCF1) >>
+                      CECAR_SRCID_SHIFT_CCF1;
+             cap_valid = true;
+         }
+
+         break;
+     case CCF2:
+         if (errdet & ERRDET_CAP) {
+             src_id = (cecar & CECAR_SRCID_MASK_CCF2) >>
+                      CECAR_SRCID_SHIFT_CCF2;
+             cap_valid = true;
+         }
+
+         break;
+     }
+
+     dev_crit(ccf->dev, "errdet 0x%08x cecar 0x%08x cecar2 0x%08x\n",
+              errdet, cecar, cecar2);
+
+     if (errdet & ERRDET_LAE) {
+         if (uvt)
+             dev_crit(ccf->dev, "LAW Unavailable Target ID\n");
+         else
+             dev_crit(ccf->dev, "Local Access Window Error\n");
+     }
+
+     if (errdet & ERRDET_CV)
+         dev_crit(ccf->dev, "Coherency Violation\n");
+
+     if (cap_valid) {
+         dev_crit(ccf->dev, "address 0x%09llx, src id 0x%x\n",
+                  addr, src_id);
+     }
+
+ out:
+     iowrite32be(errdet, &ccf->err_regs->errdet);
+     return errdet ? IRQ_HANDLED : IRQ_NONE;
+ }
+
+ static int ccf_probe(struct platform_device *pdev)
+ {
+     struct ccf_private *ccf;
+     struct resource *r;
+     const struct of_device_id *match;
+     int ret, irq;
+
+     match = of_match_device(ccf_matches, &pdev->dev);
+     if (WARN_ON(!match))
+         return -ENODEV;
+
+     ccf = devm_kzalloc(&pdev->dev, sizeof(*ccf), GFP_KERNEL);
+     if (!ccf)
+         return -ENOMEM;
+
+     r = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+     if (!r) {
+         dev_err(&pdev->dev, "%s: no mem resource\n", __func__);
+         return -ENXIO;
+     }
+
+     ccf->regs = devm_ioremap_resource(&pdev->dev, r);
+     if (IS_ERR(ccf->regs)) {
+         dev_err(&pdev->dev, "%s: can't map mem resource\n", __func__);
+         return PTR_ERR(ccf->regs);
+     }
+
+     ccf->dev = &pdev->dev;
+     ccf->info = match->data;
+     ccf->err_regs = ccf->regs + ccf->info->err_reg_offs;
+
+     dev_set_drvdata(&pdev->dev, ccf);
+
+     irq = platform_get_irq(pdev, 0);
+     if (!irq) {
+         dev_err(&pdev->dev, "%s: no irq\n", __func__);
+         return -ENXIO;
+     }
+
+     ret = devm_request_irq(&pdev->dev, irq, ccf_irq, 0, pdev->name, ccf);
+     if (ret) {
+         dev_err(&pdev->dev, "%s: can't request irq\n", __func__);
+         return ret;
+     }
+
+     switch (ccf->info->version) {
+     case CCF1:
+         /* On CCF1 this register enables rather than disables. */
+         iowrite32be(ERRDET_LAE | ERRDET_CV, &ccf->err_regs->errdis);
+         break;
+
+     case CCF2:
+         iowrite32be(0, &ccf->err_regs->errdis);
+         iowrite32be(ERRDET_LAE | ERRDET_CV, &ccf->err_regs->errinten);
+         break;
+     }
+
+     return 0;
+ }
+
+ static int ccf_remove(struct platform_device *pdev)
+ {
+     struct ccf_private *ccf = dev_get_drvdata(&pdev->dev);
+
+     switch (ccf->info->version) {
+     case CCF1:
+         iowrite32be(0, &ccf->err_regs->errdis);
+         break;
+
+     case CCF2:
+         /*
+          * We clear errdis on ccf1 because that's the only way to
+          * disable interrupts, but on ccf2 there's no need to disable
+          * detection.
+          */
+         iowrite32be(0, &ccf->err_regs->errinten);
+         break;
+     }
+
+     return 0;
+ }
+
+ static struct platform_driver ccf_driver = {
+     .driver = {
+         .name = KBUILD_MODNAME,
+         .owner = THIS_MODULE,
+         .of_match_table = ccf_matches,
+     },
+     .probe = ccf_probe,
+     .remove = ccf_remove,
+ };
+
+ module_platform_driver(ccf_driver);
+
+ MODULE_LICENSE("GPL");
+ MODULE_AUTHOR("Freescale Semiconductor");
+ MODULE_DESCRIPTION("Freescale CoreNet Coherency Fabric error reporting");
drivers/net/ethernet/freescale/fs_enet/mac-fec.c (1 deletion)
  #ifdef CONFIG_8xx
  #include <asm/8xx_immap.h>
  #include <asm/pgtable.h>
- #include <asm/mpc8xx.h>
  #include <asm/cpm1.h>
  #endif
drivers/net/ethernet/freescale/fs_enet/mac-scc.c (1 deletion)
  #ifdef CONFIG_8xx
  #include <asm/8xx_immap.h>
  #include <asm/pgtable.h>
- #include <asm/mpc8xx.h>
  #include <asm/cpm1.h>
  #endif
drivers/pcmcia/Kconfig (10 deletions)
      "Bridge" is the name used for the hardware inside your computer that
      PCMCIA cards are plugged into. If unsure, say N.

- config PCMCIA_M8XX
-     tristate "MPC8xx PCMCIA support"
-     depends on PCCARD && PPC && 8xx
-     select PCCARD_IODYN if PCMCIA != n
-     help
-       Say Y here to include support for PowerPC 8xx series PCMCIA
-       controller.
-
-       This driver is also available as a module called m8xx_pcmcia.
-
  config PCMCIA_ALCHEMY_DEVBOARD
      tristate "Alchemy Db/Pb1xxx PCMCIA socket services"
      depends on MIPS_ALCHEMY && PCMCIA
drivers/pcmcia/Makefile (1 deletion)
  obj-$(CONFIG_I82365)              += i82365.o
  obj-$(CONFIG_I82092)              += i82092.o
  obj-$(CONFIG_TCIC)                += tcic.o
- obj-$(CONFIG_PCMCIA_M8XX)         += m8xx_pcmcia.o
  obj-$(CONFIG_PCMCIA_SOC_COMMON)   += soc_common.o
  obj-$(CONFIG_PCMCIA_SA11XX_BASE)  += sa11xx_base.o
  obj-$(CONFIG_PCMCIA_SA1100)       += sa1100_cs.o
drivers/pcmcia/m8xx_pcmcia.c (file removed, 1168 deletions; listing truncated below)
- /*
-  * m8xx_pcmcia.c - Linux PCMCIA socket driver for the mpc8xx series.
-  *
-  * (C) 1999-2000 Magnus Damm <damm@opensource.se>
-  * (C) 2001-2002 Montavista Software, Inc.
-  *               <mlocke@mvista.com>
-  *
-  * Support for two slots by Cyclades Corporation
-  *               <oliver.kurth@cyclades.de>
-  * Further fixes, v2.6 kernel port
-  *               <marcelo.tosatti@cyclades.com>
-  *
-  * Some fixes, additions (C) 2005-2007 Montavista Software, Inc.
-  *               <vbordug@ru.mvista.com>
-  *
-  * "The ExCA standard specifies that socket controllers should provide
-  * two IO and five memory windows per socket, which can be independently
-  * configured and positioned in the host address space and mapped to
-  * arbitrary segments of card address space. " - David A Hinds. 1999
-  *
-  * This controller does _not_ meet the ExCA standard.
-  *
-  * m8xx pcmcia controller brief info:
-  * + 8 windows (attrib, mem, i/o)
-  * + up to two slots (SLOT_A and SLOT_B)
-  * + inputpins, outputpins, event and mask registers.
-  * - no offset register. sigh.
-  *
-  * Because of the lacking offset register we must map the whole card.
-  * We assign each memory window PCMCIA_MEM_WIN_SIZE address space.
-  * Make sure there is (PCMCIA_MEM_WIN_SIZE * PCMCIA_MEM_WIN_NO
-  * * PCMCIA_SOCKETS_NO) bytes at PCMCIA_MEM_WIN_BASE.
-  * The i/o windows are dynamically allocated at PCMCIA_IO_WIN_BASE.
-  * They are maximum 64KByte each...
-  */
-
- #include <linux/module.h>
- #include <linux/init.h>
- #include <linux/types.h>
- #include <linux/fcntl.h>
- #include <linux/string.h>
-
- #include <linux/kernel.h>
- #include <linux/errno.h>
- #include <linux/timer.h>
- #include <linux/ioport.h>
- #include <linux/delay.h>
- #include <linux/interrupt.h>
- #include <linux/fsl_devices.h>
- #include <linux/bitops.h>
- #include <linux/of_address.h>
- #include <linux/of_device.h>
- #include <linux/of_irq.h>
- #include <linux/of_platform.h>
-
- #include <asm/io.h>
- #include <asm/time.h>
- #include <asm/mpc8xx.h>
- #include <asm/8xx_immap.h>
- #include <asm/irq.h>
- #include <asm/fs_pd.h>
-
- #include <pcmcia/ss.h>
-
- #define pcmcia_info(args...) printk(KERN_INFO "m8xx_pcmcia: "args)
- #define pcmcia_error(args...) printk(KERN_ERR "m8xx_pcmcia: "args)
-
- static const char *version = "Version 0.06, Aug 2005";
- MODULE_LICENSE("Dual MPL/GPL");
-
- #if !defined(CONFIG_PCMCIA_SLOT_A) && !defined(CONFIG_PCMCIA_SLOT_B)
-
- /* The ADS board use SLOT_A */
- #ifdef CONFIG_ADS
- #define CONFIG_PCMCIA_SLOT_A
- #define CONFIG_BD_IS_MHZ
- #endif
-
- /* The FADS series are a mess */
- #ifdef CONFIG_FADS
- #if defined(CONFIG_MPC860T) || defined(CONFIG_MPC860) || defined(CONFIG_MPC821)
- #define CONFIG_PCMCIA_SLOT_A
- #else
- #define CONFIG_PCMCIA_SLOT_B
- #endif
- #endif
-
- #if defined(CONFIG_MPC885ADS)
- #define CONFIG_PCMCIA_SLOT_A
- #define PCMCIA_GLITCHY_CD
- #endif
-
- /* Cyclades ACS uses both slots */
- #ifdef CONFIG_PRxK
- #define CONFIG_PCMCIA_SLOT_A
- #define CONFIG_PCMCIA_SLOT_B
- #endif
-
- #endif /* !defined(CONFIG_PCMCIA_SLOT_A) && !defined(CONFIG_PCMCIA_SLOT_B) */
-
- #if defined(CONFIG_PCMCIA_SLOT_A) && defined(CONFIG_PCMCIA_SLOT_B)
-
- #define PCMCIA_SOCKETS_NO 2
- /* We have only 8 windows, dualsocket support will be limited. */
- #define PCMCIA_MEM_WIN_NO 2
- #define PCMCIA_IO_WIN_NO 2
- #define PCMCIA_SLOT_MSG "SLOT_A and SLOT_B"
-
- #elif defined(CONFIG_PCMCIA_SLOT_A) || defined(CONFIG_PCMCIA_SLOT_B)
-
- #define PCMCIA_SOCKETS_NO 1
- /* full support for one slot */
- #define PCMCIA_MEM_WIN_NO 5
- #define PCMCIA_IO_WIN_NO 2
-
- /* define _slot_ to be able to optimize macros */
-
- #ifdef CONFIG_PCMCIA_SLOT_A
- #define _slot_ 0
- #define PCMCIA_SLOT_MSG "SLOT_A"
- #else
- #define _slot_ 1
- #define PCMCIA_SLOT_MSG "SLOT_B"
- #endif
-
- #else
- #error m8xx_pcmcia: Bad configuration!
- #endif
-
- /* ------------------------------------------------------------------------- */
-
- #define PCMCIA_MEM_WIN_BASE 0xe0000000  /* base address for memory window 0 */
- #define PCMCIA_MEM_WIN_SIZE 0x04000000  /* each memory window is 64 MByte */
- #define PCMCIA_IO_WIN_BASE _IO_BASE     /* base address for io window 0 */
- /* ------------------------------------------------------------------------- */
-
- static int pcmcia_schlvl;
-
- static DEFINE_SPINLOCK(events_lock);
-
- #define PCMCIA_SOCKET_KEY_5V 1
- #define PCMCIA_SOCKET_KEY_LV 2
-
- /* look up table for pgcrx registers */
- static u32 *m8xx_pgcrx[2];
-
- /*
-  * This structure is used to address each window in the PCMCIA controller.
-  *
-  * Keep in mind that we assume that pcmcia_win[n+1] is mapped directly
-  * after pcmcia_win[n]...
-  */
-
- struct pcmcia_win {
-     u32 br;
-     u32 or;
- };
-
- /*
-  * For some reason the hardware guys decided to make both slots share
-  * some registers.
-  *
-  * Could someone invent object oriented hardware ?
-  *
-  * The macros are used to get the right bit from the registers.
-  * SLOT_A : slot = 0
-  * SLOT_B : slot = 1
-  */
-
- #define M8XX_PCMCIA_VS1(slot) (0x80000000 >> (slot << 4))
- #define M8XX_PCMCIA_VS2(slot) (0x40000000 >> (slot << 4))
- #define M8XX_PCMCIA_VS_MASK(slot) (0xc0000000 >> (slot << 4))
- #define M8XX_PCMCIA_VS_SHIFT(slot) (30 - (slot << 4))
-
- #define M8XX_PCMCIA_WP(slot) (0x20000000 >> (slot << 4))
- #define M8XX_PCMCIA_CD2(slot) (0x10000000 >> (slot << 4))
- #define M8XX_PCMCIA_CD1(slot) (0x08000000 >> (slot << 4))
- #define M8XX_PCMCIA_BVD2(slot) (0x04000000 >> (slot << 4))
- #define M8XX_PCMCIA_BVD1(slot) (0x02000000 >> (slot << 4))
- #define M8XX_PCMCIA_RDY(slot) (0x01000000 >> (slot << 4))
- #define M8XX_PCMCIA_RDY_L(slot) (0x00800000 >> (slot << 4))
- #define M8XX_PCMCIA_RDY_H(slot) (0x00400000 >> (slot << 4))
- #define M8XX_PCMCIA_RDY_R(slot) (0x00200000 >> (slot << 4))
- #define M8XX_PCMCIA_RDY_F(slot) (0x00100000 >> (slot << 4))
- #define M8XX_PCMCIA_MASK(slot) (0xFFFF0000 >> (slot << 4))
-
- #define M8XX_PCMCIA_POR_VALID 0x00000001
- #define M8XX_PCMCIA_POR_WRPROT 0x00000002
- #define M8XX_PCMCIA_POR_ATTRMEM 0x00000010
- #define M8XX_PCMCIA_POR_IO 0x00000018
- #define M8XX_PCMCIA_POR_16BIT 0x00000040
-
- #define M8XX_PGCRX(slot) m8xx_pgcrx[slot]
-
- #define M8XX_PGCRX_CXOE 0x00000080
- #define M8XX_PGCRX_CXRESET 0x00000040
-
- /* we keep one lookup table per socket to check flags */
-
- #define PCMCIA_EVENTS_MAX 5  /* 4 max at a time + termination */
-
- struct event_table {
-     u32 regbit;
-     u32 eventbit;
- };
-
- static const char driver_name[] = "m8xx-pcmcia";
-
- struct socket_info {
-     void (*handler) (void *info, u32 events);
-     void *info;
-
-     u32 slot;
-     pcmconf8xx_t *pcmcia;
-     u32 bus_freq;
-     int hwirq;
-
-     socket_state_t state;
-     struct pccard_mem_map mem_win[PCMCIA_MEM_WIN_NO];
-     struct pccard_io_map io_win[PCMCIA_IO_WIN_NO];
-     struct event_table events[PCMCIA_EVENTS_MAX];
-     struct pcmcia_socket socket;
- };
-
- static struct socket_info socket[PCMCIA_SOCKETS_NO];
-
- /*
-  * Search this table to see if the windowsize is
-  * supported...
-  */
-
- #define M8XX_SIZES_NO 32
-
- static const u32 m8xx_size_to_gray[M8XX_SIZES_NO] = {
-     0x00000001, 0x00000002, 0x00000008, 0x00000004,
-     0x00000080, 0x00000040, 0x00000010, 0x00000020,
-     0x00008000, 0x00004000, 0x00001000, 0x00002000,
-     0x00000100, 0x00000200, 0x00000800, 0x00000400,
-
-     0x0fffffff, 0xffffffff, 0xffffffff, 0xffffffff,
-     0x01000000, 0x02000000, 0xffffffff, 0x04000000,
-     0x00010000, 0x00020000, 0x00080000, 0x00040000,
-     0x00800000, 0x00400000, 0x00100000, 0x00200000
- };
-
- /* ------------------------------------------------------------------------- */
-
- static irqreturn_t m8xx_interrupt(int irq, void *dev);
-
- #define PCMCIA_BMT_LIMIT (15*4)  /* Bus Monitor Timeout value */
-
- /* FADS Boards from Motorola */
-
- #if defined(CONFIG_FADS)
-
- #define PCMCIA_BOARD_MSG "FADS"
-
- static int voltage_set(int slot, int vcc, int vpp)
- {
-     u32 reg = 0;
-
-     switch (vcc) {
-     case 0:
-         break;
-     case 33:
-         reg |= BCSR1_PCCVCC0;
-         break;
-     case 50:
-         reg |= BCSR1_PCCVCC1;
-         break;
-     default:
-         return 1;
-     }
-
-     switch (vpp) {
-     case 0:
-         break;
-     case 33:
-     case 50:
-         if (vcc == vpp)
-             reg |= BCSR1_PCCVPP1;
-         else
-             return 1;
-         break;
-     case 120:
-         if ((vcc == 33) || (vcc == 50))
-             reg |= BCSR1_PCCVPP0;
-         else
-             return 1;
-     default:
-         return 1;
-     }
-
-     /* first, turn off all power */
-     out_be32((u32 *) BCSR1,
-              in_be32((u32 *) BCSR1) & ~(BCSR1_PCCVCC_MASK |
-                                         BCSR1_PCCVPP_MASK));
-
-     /* enable new powersettings */
-     out_be32((u32 *) BCSR1, in_be32((u32 *) BCSR1) | reg);
-
-     return 0;
- }
-
- #define socket_get(_slot_) PCMCIA_SOCKET_KEY_5V
-
- static void hardware_enable(int slot)
- {
-     out_be32((u32 *) BCSR1, in_be32((u32 *) BCSR1) & ~BCSR1_PCCEN);
- }
-
- static void hardware_disable(int slot)
- {
-     out_be32((u32 *) BCSR1, in_be32((u32 *) BCSR1) | BCSR1_PCCEN);
- }
-
- #endif
-
- /* MPC885ADS Boards */
-
- #if defined(CONFIG_MPC885ADS)
-
- #define PCMCIA_BOARD_MSG "MPC885ADS"
- #define socket_get(_slot_) PCMCIA_SOCKET_KEY_5V
-
- static inline void hardware_enable(int slot)
- {
-     m8xx_pcmcia_ops.hw_ctrl(slot, 1);
- }
-
- static inline void hardware_disable(int slot)
- {
-     m8xx_pcmcia_ops.hw_ctrl(slot, 0);
- }
-
- static inline int voltage_set(int slot, int vcc, int vpp)
- {
-     return m8xx_pcmcia_ops.voltage_set(slot, vcc, vpp);
- }
-
- #endif
-
- #if defined(CONFIG_PRxK)
- #include <asm/cpld.h>
- extern volatile fpga_pc_regs *fpga_pc;
-
- #define PCMCIA_BOARD_MSG "MPC855T"
-
- static int voltage_set(int slot, int vcc, int vpp)
- {
-     u8 reg = 0;
-     u8 regread;
-     cpld_regs *ccpld = get_cpld();
-
-     switch (vcc) {
-     case 0:
-         break;
-     case 33:
-         reg |= PCMCIA_VCC_33;
-         break;
-     case 50:
-         reg |= PCMCIA_VCC_50;
-         break;
-     default:
-         return 1;
-     }
-
-     switch (vpp) {
-     case 0:
-         break;
-     case 33:
-     case 50:
-         if (vcc == vpp)
-             reg |= PCMCIA_VPP_VCC;
-         else
-             return 1;
-         break;
-     case 120:
-         if ((vcc == 33) || (vcc == 50))
-             reg |= PCMCIA_VPP_12;
-         else
-             return 1;
-     default:
-         return 1;
-     }
-
-     reg = reg >> (slot << 2);
-     regread = in_8(&ccpld->fpga_pc_ctl);
-     if (reg !=
-         (regread & ((PCMCIA_VCC_MASK | PCMCIA_VPP_MASK) >> (slot << 2)))) {
-         /* enable new powersettings */
-         regread =
-             regread & ~((PCMCIA_VCC_MASK | PCMCIA_VPP_MASK) >>
-                         (slot << 2));
-         out_8(&ccpld->fpga_pc_ctl, reg | regread);
-         msleep(100);
-     }
-
-     return 0;
- }
-
- #define socket_get(_slot_) PCMCIA_SOCKET_KEY_LV
- #define hardware_enable(_slot_)   /* No hardware to enable */
- #define hardware_disable(_slot_)  /* No hardware to disable */
-
- #endif /* CONFIG_PRxK */
-
- static u32 pending_events[PCMCIA_SOCKETS_NO];
- static DEFINE_SPINLOCK(pending_event_lock);
-
- static irqreturn_t m8xx_interrupt(int irq, void *dev)
- {
-     struct socket_info *s;
-     struct event_table *e;
-     unsigned int i, events, pscr, pipr, per;
-     pcmconf8xx_t *pcmcia = socket[0].pcmcia;
-
-     pr_debug("m8xx_pcmcia: Interrupt!\n");
-     /* get interrupt sources */
-
-     pscr = in_be32(&pcmcia->pcmc_pscr);
-     pipr = in_be32(&pcmcia->pcmc_pipr);
-     per = in_be32(&pcmcia->pcmc_per);
-
-     for (i = 0; i < PCMCIA_SOCKETS_NO; i++) {
-         s = &socket[i];
-         e = &s->events[0];
-         events = 0;
-
-         while (e->regbit) {
-             if (pscr & e->regbit)
-                 events |= e->eventbit;
-
-             e++;
-         }
-
-         /*
-          * report only if both card detect signals are the same
-          * not too nice done,
-          * we depend on that CD2 is the bit to the left of CD1...
-          */
-         if (events & SS_DETECT)
-             if (((pipr & M8XX_PCMCIA_CD2(i)) >> 1) ^
-                 (pipr & M8XX_PCMCIA_CD1(i))) {
-                 events &= ~SS_DETECT;
-             }
- #ifdef PCMCIA_GLITCHY_CD
-         /*
-          * I've experienced CD problems with my ADS board.
-          * We make an extra check to see if there was a
-          * real change of Card detection.
-          */
-
-         if ((events & SS_DETECT) &&
-             ((pipr &
-               (M8XX_PCMCIA_CD2(i) | M8XX_PCMCIA_CD1(i))) == 0) &&
-             (s->state.Vcc | s->state.Vpp)) {
-             events &= ~SS_DETECT;
-             /*printk( "CD glitch workaround - CD = 0x%08x!\n",
-                (pipr & (M8XX_PCMCIA_CD2(i)
-                | M8XX_PCMCIA_CD1(i)))); */
-         }
- #endif
-
-         /* call the handler */
-
-         pr_debug("m8xx_pcmcia: slot %u: events = 0x%02x, pscr = 0x%08x, "
-                  "pipr = 0x%08x\n", i, events, pscr, pipr);
-
-         if (events) {
-             spin_lock(&pending_event_lock);
-             pending_events[i] |= events;
-             spin_unlock(&pending_event_lock);
-             /*
-              * Turn off RDY_L bits in the PER mask on
-              * CD interrupt receival.
-              *
-              * They can generate bad interrupts on the
-              * ACS4,8,16,32. - marcelo
-              */
-             per &= ~M8XX_PCMCIA_RDY_L(0);
-             per &= ~M8XX_PCMCIA_RDY_L(1);
-
-             out_be32(&pcmcia->pcmc_per, per);
-
-             if (events)
-                 pcmcia_parse_events(&socket[i].socket, events);
-         }
-     }
-
-     /* clear the interrupt sources */
-     out_be32(&pcmcia->pcmc_pscr, pscr);
-
-     pr_debug("m8xx_pcmcia: Interrupt done.\n");
-
-     return IRQ_HANDLED;
- }
-
- static u32 m8xx_get_graycode(u32 size)
- {
-     u32 k;
-
-     for (k = 0; k < M8XX_SIZES_NO; k++)
-         if (m8xx_size_to_gray[k] == size)
-             break;
-
-     if ((k == M8XX_SIZES_NO) || (m8xx_size_to_gray[k] == -1))
-         k = -1;
-
-     return k;
- }
-
- static u32 m8xx_get_speed(u32 ns, u32 is_io, u32 bus_freq)
- {
-     u32 reg, clocks, psst, psl, psht;
-
-     if (!ns) {
-
-         /*
-          * We get called with IO maps setup to 0ns
-          * if not specified by the user.
-          * They should be 255ns.
-          */
-
-         if (is_io)
-             ns = 255;
-         else
-             ns = 100;  /* fast memory if 0 */
-     }
-
-     /*
-      * In PSST, PSL, PSHT fields we tell the controller
-      * timing parameters in CLKOUT clock cycles.
-      * CLKOUT is the same as GCLK2_50.
-      */
-
- /* how we want to adjust the timing - in percent */
-
- #define ADJ 180  /* 80 % longer accesstime - to be sure */
-
-     clocks = ((bus_freq / 1000) * ns) / 1000;
-     clocks = (clocks * ADJ) / (100 * 1000);
-     if (clocks >= PCMCIA_BMT_LIMIT) {
-         printk("Max access time limit reached\n");
-         clocks = PCMCIA_BMT_LIMIT - 1;
-     }
-
-     psst = clocks / 7;          /* setup time */
-     psht = clocks / 7;          /* hold time */
-     psl = (clocks * 5) / 7;     /* strobe length */
-
-     psst += clocks - (psst + psht + psl);
-
-     reg = psst << 12;
-     reg |= psl << 7;
-     reg |= psht << 16;
-
-     return reg;
- }
-
- static int m8xx_get_status(struct pcmcia_socket *sock, unsigned int *value)
- {
-     int lsock = container_of(sock, struct socket_info, socket)->slot;
-     struct socket_info *s = &socket[lsock];
-     unsigned int pipr, reg;
-     pcmconf8xx_t *pcmcia = s->pcmcia;
-
-     pipr = in_be32(&pcmcia->pcmc_pipr);
-
-     *value = ((pipr & (M8XX_PCMCIA_CD1(lsock)
-                        | M8XX_PCMCIA_CD2(lsock))) == 0) ? SS_DETECT : 0;
-     *value |= (pipr & M8XX_PCMCIA_WP(lsock)) ? SS_WRPROT : 0;
-
-     if (s->state.flags & SS_IOCARD)
-         *value |= (pipr & M8XX_PCMCIA_BVD1(lsock)) ? SS_STSCHG : 0;
-     else {
-         *value |= (pipr & M8XX_PCMCIA_RDY(lsock)) ? SS_READY : 0;
-         *value |= (pipr & M8XX_PCMCIA_BVD1(lsock)) ? SS_BATDEAD : 0;
-         *value |= (pipr & M8XX_PCMCIA_BVD2(lsock)) ? SS_BATWARN : 0;
-     }
-
-     if (s->state.Vcc | s->state.Vpp)
-         *value |= SS_POWERON;
-
-     /*
-      * Voltage detection:
-      * This driver only supports 16-Bit pc-cards.
-      * Cardbus is not handled here.
-      *
-      * To determine what voltage to use we must read the VS1 and VS2 pin.
-      * Depending on what socket type is present,
-      * different combinations mean different things.
-      *
-      * Card Key  Socket Key  VS1  VS2  Card         Vcc for CIS parse
-      *
-      * 5V        5V, LV*     NC   NC   5V only      5V (if available)
-      *
-      * 5V        5V, LV*     GND  NC   5 or 3.3V    as low as possible
-      *
-      * 5V        5V, LV*     GND  GND  5, 3.3, x.xV as low as possible
-      *
-      * LV*       5V          -    -    shall not fit into socket
-      *
-      * LV*       LV*         GND  NC   3.3V only    3.3V
-      *
-      * LV*       LV*         NC   GND  x.xV         x.xV (if avail.)
-      *
-      * LV*       LV*         GND  GND  3.3 or x.xV  as low as possible
-      *
-      * *LV means Low Voltage
-      *
-      *
-      * That gives us the following table:
-      *
-      * Socket    VS1  VS2   Voltage
-      *
-      * 5V        NC   NC    5V
-      * 5V        NC   GND   none (should not be possible)
-      * 5V        GND  NC    >= 3.3V
-      * 5V        GND  GND   >= x.xV
-      *
-      * LV        NC   NC    5V (if available)
-      * LV        NC   GND   x.xV (if available)
-      * LV        GND  NC    3.3V
-      * LV        GND  GND   >= x.xV
-      *
-      * So, how do I determine if I have a 5V or a LV
-      * socket on my board? Look at the socket!
-      *
-      *
-      * Socket with 5V key:
-      * ++--------------------------------------------+
-      * ||                                            |
-      * ||                                           ||
-      * ||                                           ||
-      * |                                             |
-      * +---------------------------------------------+
-      *
-      * Socket with LV key:
-      * ++--------------------------------------------+
-      * ||                                            |
-      * |                                            ||
-      * |                                            ||
-      * |                                             |
-      * +---------------------------------------------+
-      *
-      *
-      * With other words - LV only cards does not fit
-      * into the 5V socket!
-      */
-
-     /* read out VS1 and VS2 */
-
-     reg = (pipr & M8XX_PCMCIA_VS_MASK(lsock))
-         >> M8XX_PCMCIA_VS_SHIFT(lsock);
-
-     if (socket_get(lsock) == PCMCIA_SOCKET_KEY_LV) {
-         switch (reg) {
-         case 1:
-             *value |= SS_3VCARD;
-             break;      /* GND, NC - 3.3V only */
-         case 2:
-             *value |= SS_XVCARD;
-             break;      /* NC. GND - x.xV only */
-         };
-     }
-
-     pr_debug("m8xx_pcmcia: GetStatus(%d) = %#2.2x\n", lsock, *value);
-     return 0;
- }
-
- static int m8xx_set_socket(struct pcmcia_socket *sock, socket_state_t * state)
- {
-     int lsock = container_of(sock, struct socket_info, socket)->slot;
-     struct socket_info *s = &socket[lsock];
-     struct event_table *e;
-     unsigned int reg;
-     unsigned long flags;
-     pcmconf8xx_t *pcmcia = socket[0].pcmcia;
-
-     pr_debug("m8xx_pcmcia: SetSocket(%d, flags %#3.3x, Vcc %d, Vpp %d, "
-              "io_irq %d, csc_mask %#2.2x)\n", lsock, state->flags,
-              state->Vcc, state->Vpp, state->io_irq, state->csc_mask);
-
-     /* First, set voltage - bail out if invalid */
-     if (voltage_set(lsock, state->Vcc, state->Vpp))
-         return -EINVAL;
-
-     /* Take care of reset... */
-     if (state->flags & SS_RESET)
-         out_be32(M8XX_PGCRX(lsock), in_be32(M8XX_PGCRX(lsock)) | M8XX_PGCRX_CXRESET);  /* active high */
-     else
-         out_be32(M8XX_PGCRX(lsock),
-                  in_be32(M8XX_PGCRX(lsock)) & ~M8XX_PGCRX_CXRESET);
-
-     /* ... and output enable. */
-
-     /* The CxOE signal is connected to a 74541 on the ADS.
-        I guess most other boards used the ADS as a reference.
-        I tried to control the CxOE signal with SS_OUTPUT_ENA,
-        but the reset signal seems connected via the 541.
-        If the CxOE is left high are some signals tristated and
-        no pullups are present -> the cards act weird.
-        So right now the buffers are enabled if the power is on. */
-
-     if (state->Vcc || state->Vpp)
-         out_be32(M8XX_PGCRX(lsock), in_be32(M8XX_PGCRX(lsock)) & ~M8XX_PGCRX_CXOE);  /* active low */
-     else
-         out_be32(M8XX_PGCRX(lsock),
-                  in_be32(M8XX_PGCRX(lsock)) | M8XX_PGCRX_CXOE);
-
-     /*
-      * We'd better turn off interrupts before
-      * we mess with the events-table..
-      */
-
-     spin_lock_irqsave(&events_lock, flags);
-
-     /*
-      * Play around with the interrupt mask to be able to
-      * give the events the generic pcmcia driver wants us to.
-      */
-
-     e = &s->events[0];
-     reg = 0;
-
-     if (state->csc_mask & SS_DETECT) {
-         e->eventbit = SS_DETECT;
-         reg |= e->regbit = (M8XX_PCMCIA_CD2(lsock)
-                             | M8XX_PCMCIA_CD1(lsock));
-         e++;
-     }
-     if (state->flags & SS_IOCARD) {
-         /*
-          * I/O card
-          */
-         if (state->csc_mask & SS_STSCHG) {
-             e->eventbit = SS_STSCHG;
-             reg |= e->regbit = M8XX_PCMCIA_BVD1(lsock);
-             e++;
-         }
-         /*
-          * If io_irq is non-zero we should enable irq.
-          */
-         if (state->io_irq) {
-             out_be32(M8XX_PGCRX(lsock),
-                      in_be32(M8XX_PGCRX(lsock)) |
-                      mk_int_int_mask(s->hwirq) << 24);
-             /*
-              * Strange thing here:
-              * The manual does not tell us which interrupt
-              * the sources generate.
-              * Anyhow, I found out that RDY_L generates IREQLVL.
-              *
-              * We use level triggerd interrupts, and they don't
-              * have to be cleared in PSCR in the interrupt handler.
-              */
-             reg |= M8XX_PCMCIA_RDY_L(lsock);
-         } else
-             out_be32(M8XX_PGCRX(lsock),
-                      in_be32(M8XX_PGCRX(lsock)) & 0x00ffffff);
-     } else {
-         /*
-          * Memory card
-          */
-         if (state->csc_mask & SS_BATDEAD) {
-             e->eventbit = SS_BATDEAD;
-             reg |= e->regbit = M8XX_PCMCIA_BVD1(lsock);
-             e++;
-         }
-         if (state->csc_mask & SS_BATWARN) {
-             e->eventbit = SS_BATWARN;
-             reg |= e->regbit = M8XX_PCMCIA_BVD2(lsock);
-             e++;
-         }
-         /* What should I trigger on - low/high,raise,fall? */
-         if (state->csc_mask & SS_READY) {
-             e->eventbit = SS_READY;
-             reg |= e->regbit = 0;  //??
-             e++;
-         }
-     }
-
-     e->regbit = 0;  /* terminate list */
-
-     /*
-      * Clear the status changed .
-      * Port A and Port B share the same port.
-      * Writing ones will clear the bits.
-      */
-
-     out_be32(&pcmcia->pcmc_pscr, reg);
-
-     /*
-      * Write the mask.
-      * Port A and Port B share the same port.
-      * Need for read-modify-write.
-      * Ones will enable the interrupt.
-      */
-
-     reg |=
-         in_be32(&pcmcia->
-                 pcmc_per) & (M8XX_PCMCIA_MASK(0) | M8XX_PCMCIA_MASK(1));
-     out_be32(&pcmcia->pcmc_per, reg);
-
-     spin_unlock_irqrestore(&events_lock, flags);
-
-     /* copy the struct and modify the copy */
-
-     s->state = *state;
-
-     return 0;
- }
-
- static int m8xx_set_io_map(struct pcmcia_socket *sock, struct pccard_io_map *io)
- {
-     int lsock = container_of(sock, struct socket_info, socket)->slot;
-
-     struct socket_info *s = &socket[lsock];
-     struct pcmcia_win *w;
-     unsigned int reg, winnr;
-     pcmconf8xx_t *pcmcia = s->pcmcia;
-
- #define M8XX_SIZE (io->stop - io->start + 1)
- #define M8XX_BASE (PCMCIA_IO_WIN_BASE + io->start)
-
-     pr_debug("m8xx_pcmcia: SetIOMap(%d, %d, %#2.2x, %d ns, "
-              "%#4.4llx-%#4.4llx)\n", lsock, io->map, io->flags,
-              io->speed, (unsigned long long)io->start,
-              (unsigned long long)io->stop);
-
-     if ((io->map >= PCMCIA_IO_WIN_NO) || (io->start > 0xffff)
-         || (io->stop > 0xffff) || (io->stop < io->start))
-         return -EINVAL;
-
-     if ((reg = m8xx_get_graycode(M8XX_SIZE)) == -1)
-         return -EINVAL;
-
-     if (io->flags & MAP_ACTIVE) {
-
-         pr_debug("m8xx_pcmcia: io->flags & MAP_ACTIVE\n");
-
-         winnr = (PCMCIA_MEM_WIN_NO * PCMCIA_SOCKETS_NO)
-             + (lsock * PCMCIA_IO_WIN_NO) + io->map;
-
-         /* setup registers */
-
-         w = (void *)&pcmcia->pcmc_pbr0;
-         w += winnr;
-
-         out_be32(&w->or, 0);  /* turn off window first */
-         out_be32(&w->br, M8XX_BASE);
-
-         reg <<= 27;
-         reg |= M8XX_PCMCIA_POR_IO | (lsock << 2);
-
-         reg |= m8xx_get_speed(io->speed, 1, s->bus_freq);
-
-         if (io->flags & MAP_WRPROT)
-             reg |= M8XX_PCMCIA_POR_WRPROT;
-
-         /*if(io->flags & (MAP_16BIT | MAP_AUTOSZ)) */
-         if (io->flags & MAP_16BIT)
-             reg |= M8XX_PCMCIA_POR_16BIT;
-
-         if (io->flags & MAP_ACTIVE)
-             reg |= M8XX_PCMCIA_POR_VALID;
-
-         out_be32(&w->or, reg);
-
-         pr_debug("m8xx_pcmcia: Socket %u: Mapped io window %u at "
-                  "%#8.8x, OR = %#8.8x.\n", lsock, io->map, w->br, w->or);
-     } else {
-         /* shutdown IO window */
-         winnr = (PCMCIA_MEM_WIN_NO * PCMCIA_SOCKETS_NO)
-             + (lsock * PCMCIA_IO_WIN_NO) + io->map;
-
-         /* setup registers */
-
-         w = (void *)&pcmcia->pcmc_pbr0;
-         w += winnr;
-
-         out_be32(&w->or, 0);  /* turn off window */
-         out_be32(&w->br, 0);  /* turn off base address */
-
-         pr_debug("m8xx_pcmcia: Socket %u: Unmapped io window %u at "
-                  "%#8.8x, OR = %#8.8x.\n", lsock, io->map, w->br, w->or);
-     }
-
-     /* copy the struct and modify the copy */
-     s->io_win[io->map] = *io;
-     s->io_win[io->map].flags &= (MAP_WRPROT | MAP_16BIT | MAP_ACTIVE);
-     pr_debug("m8xx_pcmcia: SetIOMap exit\n");
-
-     return 0;
- }
-
- static int m8xx_set_mem_map(struct pcmcia_socket *sock,
-                             struct pccard_mem_map *mem)
- {
-     int lsock = container_of(sock, struct socket_info, socket)->slot;
-     struct socket_info *s = &socket[lsock];
-     struct pcmcia_win *w;
-     struct pccard_mem_map *old;
-     unsigned int reg, winnr;
-     pcmconf8xx_t *pcmcia = s->pcmcia;
-
-     pr_debug("m8xx_pcmcia: SetMemMap(%d, %d, %#2.2x, %d ns, "
-              "%#5.5llx, %#5.5x)\n", lsock, mem->map, mem->flags,
-              mem->speed, (unsigned long long)mem->static_start,
-              mem->card_start);
-
-     if ((mem->map >= PCMCIA_MEM_WIN_NO)
- //        || ((mem->s) >= PCMCIA_MEM_WIN_SIZE)
-         || (mem->card_start >= 0x04000000)
-         || (mem->static_start & 0xfff)  /* 4KByte resolution */
-         ||(mem->card_start & 0xfff))
-         return -EINVAL;
-
-     if ((reg = m8xx_get_graycode(PCMCIA_MEM_WIN_SIZE)) == -1) {
-         printk("Cannot set size to 0x%08x.\n", PCMCIA_MEM_WIN_SIZE);
-         return -EINVAL;
-     }
-     reg <<= 27;
-
-     winnr = (lsock * PCMCIA_MEM_WIN_NO) + mem->map;
-
-     /* Setup the window in the pcmcia controller */
-
-     w = (void *)&pcmcia->pcmc_pbr0;
-     w += winnr;
-
-     reg |= lsock << 2;
-
-     reg |= m8xx_get_speed(mem->speed, 0, s->bus_freq);
-
-     if (mem->flags & MAP_ATTRIB)
-         reg |= M8XX_PCMCIA_POR_ATTRMEM;
-
-     if (mem->flags & MAP_WRPROT)
-         reg |= M8XX_PCMCIA_POR_WRPROT;
-
-     if (mem->flags & MAP_16BIT)
-         reg |= M8XX_PCMCIA_POR_16BIT;
-
-     if (mem->flags & MAP_ACTIVE)
-         reg |= M8XX_PCMCIA_POR_VALID;
-
-     out_be32(&w->or, reg);
-
-     pr_debug("m8xx_pcmcia: Socket %u: Mapped memory window %u at %#8.8x, "
-              "OR = %#8.8x.\n", lsock, mem->map, w->br, w->or);
-
-     if (mem->flags & MAP_ACTIVE) {
-         /* get the new base address */
-         mem->static_start = PCMCIA_MEM_WIN_BASE +
-             (PCMCIA_MEM_WIN_SIZE * winnr)
-             + mem->card_start;
-     }
-
-     pr_debug("m8xx_pcmcia: SetMemMap(%d, %d, %#2.2x, %d ns, "
-              "%#5.5llx, %#5.5x)\n", lsock, mem->map, mem->flags,
-              mem->speed, (unsigned long long)mem->static_start,
-              mem->card_start);
-
-     /* copy the struct and modify the copy */
-
-     old = &s->mem_win[mem->map];
-
-     *old = *mem;
-     old->flags &= (MAP_ATTRIB | MAP_WRPROT | MAP_16BIT | MAP_ACTIVE);
-
-     return 0;
- }
-
- static int m8xx_sock_init(struct pcmcia_socket *sock)
- {
-     int i;
-     pccard_io_map io = { 0, 0, 0, 0, 1 };
-     pccard_mem_map mem = { 0, 0, 0, 0, 0, 0 };
-
-     pr_debug("m8xx_pcmcia: sock_init(%d)\n", s);
-
-     m8xx_set_socket(sock, &dead_socket);
-     for (i = 0; i < PCMCIA_IO_WIN_NO; i++) {
-         io.map = i;
-         m8xx_set_io_map(sock, &io);
-     }
-     for (i = 0; i < PCMCIA_MEM_WIN_NO; i++) {
-         mem.map = i;
-         m8xx_set_mem_map(sock, &mem);
-     }
-
-     return 0;
-
- }
-
- static int m8xx_sock_suspend(struct pcmcia_socket *sock)
- {
-     return m8xx_set_socket(sock, &dead_socket);
} 1002 - 1003 - static struct pccard_operations m8xx_services = { 1004 - .init = m8xx_sock_init, 1005 - .suspend = m8xx_sock_suspend, 1006 - .get_status = m8xx_get_status, 1007 - .set_socket = m8xx_set_socket, 1008 - .set_io_map = m8xx_set_io_map, 1009 - .set_mem_map = m8xx_set_mem_map, 1010 - }; 1011 - 1012 - static int __init m8xx_probe(struct platform_device *ofdev) 1013 - { 1014 - struct pcmcia_win *w; 1015 - unsigned int i, m, hwirq; 1016 - pcmconf8xx_t *pcmcia; 1017 - int status; 1018 - struct device_node *np = ofdev->dev.of_node; 1019 - 1020 - pcmcia_info("%s\n", version); 1021 - 1022 - pcmcia = of_iomap(np, 0); 1023 - if (pcmcia == NULL) 1024 - return -EINVAL; 1025 - 1026 - pcmcia_schlvl = irq_of_parse_and_map(np, 0); 1027 - hwirq = irq_map[pcmcia_schlvl].hwirq; 1028 - if (pcmcia_schlvl < 0) { 1029 - iounmap(pcmcia); 1030 - return -EINVAL; 1031 - } 1032 - 1033 - m8xx_pgcrx[0] = &pcmcia->pcmc_pgcra; 1034 - m8xx_pgcrx[1] = &pcmcia->pcmc_pgcrb; 1035 - 1036 - pcmcia_info(PCMCIA_BOARD_MSG " using " PCMCIA_SLOT_MSG 1037 - " with IRQ %u (%d). 
\n", pcmcia_schlvl, hwirq); 1038 - 1039 - /* Configure Status change interrupt */ 1040 - 1041 - if (request_irq(pcmcia_schlvl, m8xx_interrupt, IRQF_SHARED, 1042 - driver_name, socket)) { 1043 - pcmcia_error("Cannot allocate IRQ %u for SCHLVL!\n", 1044 - pcmcia_schlvl); 1045 - iounmap(pcmcia); 1046 - return -1; 1047 - } 1048 - 1049 - w = (void *)&pcmcia->pcmc_pbr0; 1050 - 1051 - out_be32(&pcmcia->pcmc_pscr, M8XX_PCMCIA_MASK(0) | M8XX_PCMCIA_MASK(1)); 1052 - clrbits32(&pcmcia->pcmc_per, M8XX_PCMCIA_MASK(0) | M8XX_PCMCIA_MASK(1)); 1053 - 1054 - /* connect interrupt and disable CxOE */ 1055 - 1056 - out_be32(M8XX_PGCRX(0), 1057 - M8XX_PGCRX_CXOE | (mk_int_int_mask(hwirq) << 16)); 1058 - out_be32(M8XX_PGCRX(1), 1059 - M8XX_PGCRX_CXOE | (mk_int_int_mask(hwirq) << 16)); 1060 - 1061 - /* initialize the fixed memory windows */ 1062 - 1063 - for (i = 0; i < PCMCIA_SOCKETS_NO; i++) { 1064 - for (m = 0; m < PCMCIA_MEM_WIN_NO; m++) { 1065 - out_be32(&w->br, PCMCIA_MEM_WIN_BASE + 1066 - (PCMCIA_MEM_WIN_SIZE 1067 - * (m + i * PCMCIA_MEM_WIN_NO))); 1068 - 1069 - out_be32(&w->or, 0); /* set to not valid */ 1070 - 1071 - w++; 1072 - } 1073 - } 1074 - 1075 - /* turn off voltage */ 1076 - voltage_set(0, 0, 0); 1077 - voltage_set(1, 0, 0); 1078 - 1079 - /* Enable external hardware */ 1080 - hardware_enable(0); 1081 - hardware_enable(1); 1082 - 1083 - for (i = 0; i < PCMCIA_SOCKETS_NO; i++) { 1084 - socket[i].slot = i; 1085 - socket[i].socket.owner = THIS_MODULE; 1086 - socket[i].socket.features = 1087 - SS_CAP_PCCARD | SS_CAP_MEM_ALIGN | SS_CAP_STATIC_MAP; 1088 - socket[i].socket.irq_mask = 0x000; 1089 - socket[i].socket.map_size = 0x1000; 1090 - socket[i].socket.io_offset = 0; 1091 - socket[i].socket.pci_irq = pcmcia_schlvl; 1092 - socket[i].socket.ops = &m8xx_services; 1093 - socket[i].socket.resource_ops = &pccard_iodyn_ops; 1094 - socket[i].socket.cb_dev = NULL; 1095 - socket[i].socket.dev.parent = &ofdev->dev; 1096 - socket[i].pcmcia = pcmcia; 1097 - socket[i].bus_freq = 
ppc_proc_freq; 1098 - socket[i].hwirq = hwirq; 1099 - 1100 - } 1101 - 1102 - for (i = 0; i < PCMCIA_SOCKETS_NO; i++) { 1103 - status = pcmcia_register_socket(&socket[i].socket); 1104 - if (status < 0) 1105 - pcmcia_error("Socket register failed\n"); 1106 - } 1107 - 1108 - return 0; 1109 - } 1110 - 1111 - static int m8xx_remove(struct platform_device *ofdev) 1112 - { 1113 - u32 m, i; 1114 - struct pcmcia_win *w; 1115 - pcmconf8xx_t *pcmcia = socket[0].pcmcia; 1116 - 1117 - for (i = 0; i < PCMCIA_SOCKETS_NO; i++) { 1118 - w = (void *)&pcmcia->pcmc_pbr0; 1119 - 1120 - out_be32(&pcmcia->pcmc_pscr, M8XX_PCMCIA_MASK(i)); 1121 - out_be32(&pcmcia->pcmc_per, 1122 - in_be32(&pcmcia->pcmc_per) & ~M8XX_PCMCIA_MASK(i)); 1123 - 1124 - /* turn off interrupt and disable CxOE */ 1125 - out_be32(M8XX_PGCRX(i), M8XX_PGCRX_CXOE); 1126 - 1127 - /* turn off memory windows */ 1128 - for (m = 0; m < PCMCIA_MEM_WIN_NO; m++) { 1129 - out_be32(&w->or, 0); /* set to not valid */ 1130 - w++; 1131 - } 1132 - 1133 - /* turn off voltage */ 1134 - voltage_set(i, 0, 0); 1135 - 1136 - /* disable external hardware */ 1137 - hardware_disable(i); 1138 - } 1139 - for (i = 0; i < PCMCIA_SOCKETS_NO; i++) 1140 - pcmcia_unregister_socket(&socket[i].socket); 1141 - iounmap(pcmcia); 1142 - 1143 - free_irq(pcmcia_schlvl, NULL); 1144 - 1145 - return 0; 1146 - } 1147 - 1148 - static const struct of_device_id m8xx_pcmcia_match[] = { 1149 - { 1150 - .type = "pcmcia", 1151 - .compatible = "fsl,pq-pcmcia", 1152 - }, 1153 - {}, 1154 - }; 1155 - 1156 - MODULE_DEVICE_TABLE(of, m8xx_pcmcia_match); 1157 - 1158 - static struct platform_driver m8xx_pcmcia_driver = { 1159 - .driver = { 1160 - .name = driver_name, 1161 - .owner = THIS_MODULE, 1162 - .of_match_table = m8xx_pcmcia_match, 1163 - }, 1164 - .probe = m8xx_probe, 1165 - .remove = m8xx_remove, 1166 - }; 1167 - 1168 - module_platform_driver(m8xx_pcmcia_driver);
+1
drivers/vfio/Makefile
 obj-$(CONFIG_VFIO) += vfio.o
 obj-$(CONFIG_VFIO_IOMMU_TYPE1) += vfio_iommu_type1.o
 obj-$(CONFIG_VFIO_IOMMU_SPAPR_TCE) += vfio_iommu_spapr_tce.o
+obj-$(CONFIG_EEH) += vfio_spapr_eeh.o
 obj-$(CONFIG_VFIO_PCI) += pci/
+14 -4
drivers/vfio/pci/vfio_pci.c
···
 {
 	struct vfio_pci_device *vdev = device_data;
 
-	if (atomic_dec_and_test(&vdev->refcnt))
+	if (atomic_dec_and_test(&vdev->refcnt)) {
+		vfio_spapr_pci_eeh_release(vdev->pdev);
 		vfio_pci_disable(vdev);
+	}
 
 	module_put(THIS_MODULE);
 }
···
 static int vfio_pci_open(void *device_data)
 {
 	struct vfio_pci_device *vdev = device_data;
+	int ret;
 
 	if (!try_module_get(THIS_MODULE))
 		return -ENODEV;
 
 	if (atomic_inc_return(&vdev->refcnt) == 1) {
-		int ret = vfio_pci_enable(vdev);
+		ret = vfio_pci_enable(vdev);
+		if (ret)
+			goto error;
+
+		ret = vfio_spapr_pci_eeh_open(vdev->pdev);
 		if (ret) {
-			module_put(THIS_MODULE);
-			return ret;
+			vfio_pci_disable(vdev);
+			goto error;
 		}
 	}
 
 	return 0;
+error:
+	module_put(THIS_MODULE);
+	return ret;
 }
 
 static int vfio_pci_get_irq_count(struct vfio_pci_device *vdev, int irq_type)
+16 -1
drivers/vfio/vfio_iommu_spapr_tce.c
···
 
 	switch (cmd) {
 	case VFIO_CHECK_EXTENSION:
-		return (arg == VFIO_SPAPR_TCE_IOMMU) ? 1 : 0;
+		switch (arg) {
+		case VFIO_SPAPR_TCE_IOMMU:
+			ret = 1;
+			break;
+		default:
+			ret = vfio_spapr_iommu_eeh_ioctl(NULL, cmd, arg);
+			break;
+		}
+
+		return (ret < 0) ? 0 : ret;
 
 	case VFIO_IOMMU_SPAPR_TCE_GET_INFO: {
 		struct vfio_iommu_spapr_tce_info info;
···
 		tce_iommu_disable(container);
 		mutex_unlock(&container->lock);
 		return 0;
+	case VFIO_EEH_PE_OP:
+		if (!container->tbl || !container->tbl->it_group)
+			return -ENODEV;
+
+		return vfio_spapr_iommu_eeh_ioctl(container->tbl->it_group,
+						  cmd, arg);
 	}
 
 	return -ENOTTY;
+87
drivers/vfio/vfio_spapr_eeh.c
+/*
+ * EEH functionality support for VFIO devices. The feature is only
+ * available on sPAPR compatible platforms.
+ *
+ * Copyright Gavin Shan, IBM Corporation 2014.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/uaccess.h>
+#include <linux/vfio.h>
+#include <asm/eeh.h>
+
+/* We might build address mapping here for "fast" path later */
+int vfio_spapr_pci_eeh_open(struct pci_dev *pdev)
+{
+	return eeh_dev_open(pdev);
+}
+
+void vfio_spapr_pci_eeh_release(struct pci_dev *pdev)
+{
+	eeh_dev_release(pdev);
+}
+
+long vfio_spapr_iommu_eeh_ioctl(struct iommu_group *group,
+				unsigned int cmd, unsigned long arg)
+{
+	struct eeh_pe *pe;
+	struct vfio_eeh_pe_op op;
+	unsigned long minsz;
+	long ret = -EINVAL;
+
+	switch (cmd) {
+	case VFIO_CHECK_EXTENSION:
+		if (arg == VFIO_EEH)
+			ret = eeh_enabled() ? 1 : 0;
+		else
+			ret = 0;
+		break;
+	case VFIO_EEH_PE_OP:
+		pe = eeh_iommu_group_to_pe(group);
+		if (!pe)
+			return -ENODEV;
+
+		minsz = offsetofend(struct vfio_eeh_pe_op, op);
+		if (copy_from_user(&op, (void __user *)arg, minsz))
+			return -EFAULT;
+		if (op.argsz < minsz || op.flags)
+			return -EINVAL;
+
+		switch (op.op) {
+		case VFIO_EEH_PE_DISABLE:
+			ret = eeh_pe_set_option(pe, EEH_OPT_DISABLE);
+			break;
+		case VFIO_EEH_PE_ENABLE:
+			ret = eeh_pe_set_option(pe, EEH_OPT_ENABLE);
+			break;
+		case VFIO_EEH_PE_UNFREEZE_IO:
+			ret = eeh_pe_set_option(pe, EEH_OPT_THAW_MMIO);
+			break;
+		case VFIO_EEH_PE_UNFREEZE_DMA:
+			ret = eeh_pe_set_option(pe, EEH_OPT_THAW_DMA);
+			break;
+		case VFIO_EEH_PE_GET_STATE:
+			ret = eeh_pe_get_state(pe);
+			break;
+		case VFIO_EEH_PE_RESET_DEACTIVATE:
+			ret = eeh_pe_reset(pe, EEH_RESET_DEACTIVATE);
+			break;
+		case VFIO_EEH_PE_RESET_HOT:
+			ret = eeh_pe_reset(pe, EEH_RESET_HOT);
+			break;
+		case VFIO_EEH_PE_RESET_FUNDAMENTAL:
+			ret = eeh_pe_reset(pe, EEH_RESET_FUNDAMENTAL);
+			break;
+		case VFIO_EEH_PE_CONFIGURE:
+			ret = eeh_pe_configure(pe);
+			break;
+		default:
+			ret = -EINVAL;
+		}
+	}
+
+	return ret;
+}
+23
include/linux/vfio.h
···
 extern long vfio_external_check_extension(struct vfio_group *group,
 					  unsigned long arg);
 
+#ifdef CONFIG_EEH
+extern int vfio_spapr_pci_eeh_open(struct pci_dev *pdev);
+extern void vfio_spapr_pci_eeh_release(struct pci_dev *pdev);
+extern long vfio_spapr_iommu_eeh_ioctl(struct iommu_group *group,
+				       unsigned int cmd,
+				       unsigned long arg);
+#else
+static inline int vfio_spapr_pci_eeh_open(struct pci_dev *pdev)
+{
+	return 0;
+}
+
+static inline void vfio_spapr_pci_eeh_release(struct pci_dev *pdev)
+{
+}
+
+static inline long vfio_spapr_iommu_eeh_ioctl(struct iommu_group *group,
+					      unsigned int cmd,
+					      unsigned long arg)
+{
+	return -ENOTTY;
+}
+#endif /* CONFIG_EEH */
 #endif /* VFIO_H */
+34
include/uapi/linux/vfio.h
···
  */
 #define VFIO_DMA_CC_IOMMU		4
 
+/* Check if EEH is supported */
+#define VFIO_EEH			5
+
 /*
  * The IOCTL interface is designed for extensibility by embedding the
  * structure length (argsz) and flags into structures passed between
···
 };
 
 #define VFIO_IOMMU_SPAPR_TCE_GET_INFO	_IO(VFIO_TYPE, VFIO_BASE + 12)
+
+/*
+ * EEH PE operation struct provides ways to:
+ * - enable/disable EEH functionality;
+ * - unfreeze IO/DMA for frozen PE;
+ * - read PE state;
+ * - reset PE;
+ * - configure PE.
+ */
+struct vfio_eeh_pe_op {
+	__u32 argsz;
+	__u32 flags;
+	__u32 op;
+};
+
+#define VFIO_EEH_PE_DISABLE		0	/* Disable EEH functionality */
+#define VFIO_EEH_PE_ENABLE		1	/* Enable EEH functionality  */
+#define VFIO_EEH_PE_UNFREEZE_IO		2	/* Enable IO for frozen PE   */
+#define VFIO_EEH_PE_UNFREEZE_DMA	3	/* Enable DMA for frozen PE  */
+#define VFIO_EEH_PE_GET_STATE		4	/* PE state retrieval        */
+#define  VFIO_EEH_PE_STATE_NORMAL	0	/* PE in functional state    */
+#define  VFIO_EEH_PE_STATE_RESET	1	/* PE reset in progress      */
+#define  VFIO_EEH_PE_STATE_STOPPED	2	/* Stopped DMA and IO        */
+#define  VFIO_EEH_PE_STATE_STOPPED_DMA	4	/* Stopped DMA only          */
+#define  VFIO_EEH_PE_STATE_UNAVAIL	5	/* State unavailable         */
+#define VFIO_EEH_PE_RESET_DEACTIVATE	5	/* Deassert PE reset         */
+#define VFIO_EEH_PE_RESET_HOT		6	/* Assert hot reset          */
+#define VFIO_EEH_PE_RESET_FUNDAMENTAL	7	/* Assert fundamental reset  */
+#define VFIO_EEH_PE_CONFIGURE		8	/* PE configuration          */
+
+#define VFIO_EEH_PE_OP			_IO(VFIO_TYPE, VFIO_BASE + 21)
 
 /* ***************************************************************** */
 
+5 -5
tools/testing/selftests/powerpc/Makefile
···
 
 endif
 
-all:
-	@for TARGET in $(TARGETS); do \
-		$(MAKE) -C $$TARGET all; \
-	done;
+all: $(TARGETS)
+
+$(TARGETS):
+	$(MAKE) -k -C $@ all
 
 run_tests: all
 	@for TARGET in $(TARGETS); do \
···
 tags:
 	find . -name '*.c' -o -name '*.h' | xargs ctags
 
-.PHONY: all run_tests clean tags
+.PHONY: all run_tests clean tags $(TARGETS)
+8 -11
tools/testing/selftests/powerpc/pmu/Makefile
···
 noarg:
 	$(MAKE) -C ../
 
-PROGS := count_instructions
-EXTRA_SOURCES := ../harness.c event.c
+PROGS := count_instructions l3_bank_test per_event_excludes
+EXTRA_SOURCES := ../harness.c event.c lib.c
 
-all: $(PROGS) sub_all
+SUB_TARGETS = ebb
+
+all: $(PROGS) $(SUB_TARGETS)
 
 $(PROGS): $(EXTRA_SOURCES)
 
···
 clean: sub_clean
 	rm -f $(PROGS) loop.o
 
-
-SUB_TARGETS = ebb
-
-sub_all:
-	@for TARGET in $(SUB_TARGETS); do \
-		$(MAKE) -C $$TARGET all; \
-	done;
+$(SUB_TARGETS):
+	$(MAKE) -k -C $@ all
 
 sub_run_tests: all
 	@for TARGET in $(SUB_TARGETS); do \
···
 		$(MAKE) -C $$TARGET clean; \
 	done;
 
-.PHONY: all run_tests clean sub_all sub_run_tests sub_clean
+.PHONY: all run_tests clean sub_run_tests sub_clean $(SUB_TARGETS)
+21 -9
tools/testing/selftests/powerpc/pmu/count_instructions.c
···
 
 #include "event.h"
 #include "utils.h"
+#include "lib.h"
 
 extern void thirty_two_instruction_loop(u64 loops);
 
···
 	return overhead;
 }
 
-static int count_instructions(void)
+static int test_body(void)
 {
 	struct event events[2];
 	u64 overhead;
···
 	overhead = determine_overhead(events);
 	printf("Overhead of null loop: %llu instructions\n", overhead);
 
-	/* Run for 1M instructions */
-	FAIL_IF(do_count_loop(events, 0x100000, overhead, true));
+	/* Run for 1Mi instructions */
+	FAIL_IF(do_count_loop(events, 1000000, overhead, true));
 
-	/* Run for 10M instructions */
-	FAIL_IF(do_count_loop(events, 0xa00000, overhead, true));
+	/* Run for 10Mi instructions */
+	FAIL_IF(do_count_loop(events, 10000000, overhead, true));
 
-	/* Run for 100M instructions */
-	FAIL_IF(do_count_loop(events, 0x6400000, overhead, true));
+	/* Run for 100Mi instructions */
+	FAIL_IF(do_count_loop(events, 100000000, overhead, true));
 
-	/* Run for 1G instructions */
-	FAIL_IF(do_count_loop(events, 0x40000000, overhead, true));
+	/* Run for 1Bi instructions */
+	FAIL_IF(do_count_loop(events, 1000000000, overhead, true));
+
+	/* Run for 16Bi instructions */
+	FAIL_IF(do_count_loop(events, 16000000000, overhead, true));
+
+	/* Run for 64Bi instructions */
+	FAIL_IF(do_count_loop(events, 64000000000, overhead, true));
 
 	event_close(&events[0]);
 	event_close(&events[1]);
 
 	return 0;
+}
+
+static int count_instructions(void)
+{
+	return eat_cpu(test_body);
 }
 
 int main(void)
+3 -2
tools/testing/selftests/powerpc/pmu/ebb/Makefile
···
 	close_clears_pmcc_test instruction_count_test		\
 	fork_cleanup_test ebb_on_child_test			\
 	ebb_on_willing_child_test back_to_back_ebbs_test	\
-	lost_exception_test no_handler_test
+	lost_exception_test no_handler_test			\
+	cycles_with_mmcr2_test
 
 all: $(PROGS)
 
-$(PROGS): ../../harness.c ../event.c ../lib.c ebb.c ebb_handler.S trace.c
+$(PROGS): ../../harness.c ../event.c ../lib.c ebb.c ebb_handler.S trace.c busy_loop.S
 
 instruction_count_test: ../loop.S
 
+271
tools/testing/selftests/powerpc/pmu/ebb/busy_loop.S
··· 1 + /* 2 + * Copyright 2014, Michael Ellerman, IBM Corp. 3 + * Licensed under GPLv2. 4 + */ 5 + 6 + #include <ppc-asm.h> 7 + 8 + .text 9 + 10 + FUNC_START(core_busy_loop) 11 + stdu %r1, -168(%r1) 12 + std r14, 160(%r1) 13 + std r15, 152(%r1) 14 + std r16, 144(%r1) 15 + std r17, 136(%r1) 16 + std r18, 128(%r1) 17 + std r19, 120(%r1) 18 + std r20, 112(%r1) 19 + std r21, 104(%r1) 20 + std r22, 96(%r1) 21 + std r23, 88(%r1) 22 + std r24, 80(%r1) 23 + std r25, 72(%r1) 24 + std r26, 64(%r1) 25 + std r27, 56(%r1) 26 + std r28, 48(%r1) 27 + std r29, 40(%r1) 28 + std r30, 32(%r1) 29 + std r31, 24(%r1) 30 + 31 + li r3, 0x3030 32 + std r3, -96(%r1) 33 + li r4, 0x4040 34 + std r4, -104(%r1) 35 + li r5, 0x5050 36 + std r5, -112(%r1) 37 + li r6, 0x6060 38 + std r6, -120(%r1) 39 + li r7, 0x7070 40 + std r7, -128(%r1) 41 + li r8, 0x0808 42 + std r8, -136(%r1) 43 + li r9, 0x0909 44 + std r9, -144(%r1) 45 + li r10, 0x1010 46 + std r10, -152(%r1) 47 + li r11, 0x1111 48 + std r11, -160(%r1) 49 + li r14, 0x1414 50 + std r14, -168(%r1) 51 + li r15, 0x1515 52 + std r15, -176(%r1) 53 + li r16, 0x1616 54 + std r16, -184(%r1) 55 + li r17, 0x1717 56 + std r17, -192(%r1) 57 + li r18, 0x1818 58 + std r18, -200(%r1) 59 + li r19, 0x1919 60 + std r19, -208(%r1) 61 + li r20, 0x2020 62 + std r20, -216(%r1) 63 + li r21, 0x2121 64 + std r21, -224(%r1) 65 + li r22, 0x2222 66 + std r22, -232(%r1) 67 + li r23, 0x2323 68 + std r23, -240(%r1) 69 + li r24, 0x2424 70 + std r24, -248(%r1) 71 + li r25, 0x2525 72 + std r25, -256(%r1) 73 + li r26, 0x2626 74 + std r26, -264(%r1) 75 + li r27, 0x2727 76 + std r27, -272(%r1) 77 + li r28, 0x2828 78 + std r28, -280(%r1) 79 + li r29, 0x2929 80 + std r29, -288(%r1) 81 + li r30, 0x3030 82 + li r31, 0x3131 83 + 84 + li r3, 0 85 + 0: addi r3, r3, 1 86 + cmpwi r3, 100 87 + blt 0b 88 + 89 + /* Return 1 (fail) unless we get through all the checks */ 90 + li r3, 1 91 + 92 + /* Check none of our registers have been corrupted */ 93 + cmpwi r4, 0x4040 94 + bne 1f 95 + cmpwi 
r5, 0x5050 96 + bne 1f 97 + cmpwi r6, 0x6060 98 + bne 1f 99 + cmpwi r7, 0x7070 100 + bne 1f 101 + cmpwi r8, 0x0808 102 + bne 1f 103 + cmpwi r9, 0x0909 104 + bne 1f 105 + cmpwi r10, 0x1010 106 + bne 1f 107 + cmpwi r11, 0x1111 108 + bne 1f 109 + cmpwi r14, 0x1414 110 + bne 1f 111 + cmpwi r15, 0x1515 112 + bne 1f 113 + cmpwi r16, 0x1616 114 + bne 1f 115 + cmpwi r17, 0x1717 116 + bne 1f 117 + cmpwi r18, 0x1818 118 + bne 1f 119 + cmpwi r19, 0x1919 120 + bne 1f 121 + cmpwi r20, 0x2020 122 + bne 1f 123 + cmpwi r21, 0x2121 124 + bne 1f 125 + cmpwi r22, 0x2222 126 + bne 1f 127 + cmpwi r23, 0x2323 128 + bne 1f 129 + cmpwi r24, 0x2424 130 + bne 1f 131 + cmpwi r25, 0x2525 132 + bne 1f 133 + cmpwi r26, 0x2626 134 + bne 1f 135 + cmpwi r27, 0x2727 136 + bne 1f 137 + cmpwi r28, 0x2828 138 + bne 1f 139 + cmpwi r29, 0x2929 140 + bne 1f 141 + cmpwi r30, 0x3030 142 + bne 1f 143 + cmpwi r31, 0x3131 144 + bne 1f 145 + 146 + /* Load junk into all our registers before we reload them from the stack. */ 147 + li r3, 0xde 148 + li r4, 0xad 149 + li r5, 0xbe 150 + li r6, 0xef 151 + li r7, 0xde 152 + li r8, 0xad 153 + li r9, 0xbe 154 + li r10, 0xef 155 + li r11, 0xde 156 + li r14, 0xad 157 + li r15, 0xbe 158 + li r16, 0xef 159 + li r17, 0xde 160 + li r18, 0xad 161 + li r19, 0xbe 162 + li r20, 0xef 163 + li r21, 0xde 164 + li r22, 0xad 165 + li r23, 0xbe 166 + li r24, 0xef 167 + li r25, 0xde 168 + li r26, 0xad 169 + li r27, 0xbe 170 + li r28, 0xef 171 + li r29, 0xdd 172 + 173 + ld r3, -96(%r1) 174 + cmpwi r3, 0x3030 175 + bne 1f 176 + ld r4, -104(%r1) 177 + cmpwi r4, 0x4040 178 + bne 1f 179 + ld r5, -112(%r1) 180 + cmpwi r5, 0x5050 181 + bne 1f 182 + ld r6, -120(%r1) 183 + cmpwi r6, 0x6060 184 + bne 1f 185 + ld r7, -128(%r1) 186 + cmpwi r7, 0x7070 187 + bne 1f 188 + ld r8, -136(%r1) 189 + cmpwi r8, 0x0808 190 + bne 1f 191 + ld r9, -144(%r1) 192 + cmpwi r9, 0x0909 193 + bne 1f 194 + ld r10, -152(%r1) 195 + cmpwi r10, 0x1010 196 + bne 1f 197 + ld r11, -160(%r1) 198 + cmpwi r11, 0x1111 199 + bne 
1f 200 + ld r14, -168(%r1) 201 + cmpwi r14, 0x1414 202 + bne 1f 203 + ld r15, -176(%r1) 204 + cmpwi r15, 0x1515 205 + bne 1f 206 + ld r16, -184(%r1) 207 + cmpwi r16, 0x1616 208 + bne 1f 209 + ld r17, -192(%r1) 210 + cmpwi r17, 0x1717 211 + bne 1f 212 + ld r18, -200(%r1) 213 + cmpwi r18, 0x1818 214 + bne 1f 215 + ld r19, -208(%r1) 216 + cmpwi r19, 0x1919 217 + bne 1f 218 + ld r20, -216(%r1) 219 + cmpwi r20, 0x2020 220 + bne 1f 221 + ld r21, -224(%r1) 222 + cmpwi r21, 0x2121 223 + bne 1f 224 + ld r22, -232(%r1) 225 + cmpwi r22, 0x2222 226 + bne 1f 227 + ld r23, -240(%r1) 228 + cmpwi r23, 0x2323 229 + bne 1f 230 + ld r24, -248(%r1) 231 + cmpwi r24, 0x2424 232 + bne 1f 233 + ld r25, -256(%r1) 234 + cmpwi r25, 0x2525 235 + bne 1f 236 + ld r26, -264(%r1) 237 + cmpwi r26, 0x2626 238 + bne 1f 239 + ld r27, -272(%r1) 240 + cmpwi r27, 0x2727 241 + bne 1f 242 + ld r28, -280(%r1) 243 + cmpwi r28, 0x2828 244 + bne 1f 245 + ld r29, -288(%r1) 246 + cmpwi r29, 0x2929 247 + bne 1f 248 + 249 + /* Load 0 (success) to return */ 250 + li r3, 0 251 + 252 + 1: ld r14, 160(%r1) 253 + ld r15, 152(%r1) 254 + ld r16, 144(%r1) 255 + ld r17, 136(%r1) 256 + ld r18, 128(%r1) 257 + ld r19, 120(%r1) 258 + ld r20, 112(%r1) 259 + ld r21, 104(%r1) 260 + ld r22, 96(%r1) 261 + ld r23, 88(%r1) 262 + ld r24, 80(%r1) 263 + ld r25, 72(%r1) 264 + ld r26, 64(%r1) 265 + ld r27, 56(%r1) 266 + ld r28, 48(%r1) 267 + ld r29, 40(%r1) 268 + ld r30, 32(%r1) 269 + ld r31, 24(%r1) 270 + addi %r1, %r1, 168 271 + blr
+91
tools/testing/selftests/powerpc/pmu/ebb/cycles_with_mmcr2_test.c
+/*
+ * Copyright 2014, Michael Ellerman, IBM Corp.
+ * Licensed under GPLv2.
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <stdbool.h>
+
+#include "ebb.h"
+
+
+/*
+ * Test of counting cycles while manipulating the user accessible bits in MMCR2.
+ */
+
+/* We use two values because the first freezes PMC1 and so we would get no EBBs */
+#define MMCR2_EXPECTED_1 0x4020100804020000UL /* (FC1P|FC2P|FC3P|FC4P|FC5P|FC6P) */
+#define MMCR2_EXPECTED_2 0x0020100804020000UL /* (     FC2P|FC3P|FC4P|FC5P|FC6P) */
+
+
+int cycles_with_mmcr2(void)
+{
+	struct event event;
+	uint64_t val, expected[2], actual;
+	int i;
+	bool bad_mmcr2;
+
+	event_init_named(&event, 0x1001e, "cycles");
+	event_leader_ebb_init(&event);
+
+	event.attr.exclude_kernel = 1;
+	event.attr.exclude_hv = 1;
+	event.attr.exclude_idle = 1;
+
+	FAIL_IF(event_open(&event));
+
+	ebb_enable_pmc_counting(1);
+	setup_ebb_handler(standard_ebb_callee);
+	ebb_global_enable();
+
+	FAIL_IF(ebb_event_enable(&event));
+
+	mtspr(SPRN_PMC1, pmc_sample_period(sample_period));
+
+	/* XXX Set of MMCR2 must be after enable */
+	expected[0] = MMCR2_EXPECTED_1;
+	expected[1] = MMCR2_EXPECTED_2;
+	i = 0;
+	bad_mmcr2 = false;
+
+	/* Make sure we loop until we take at least one EBB */
+	while ((ebb_state.stats.ebb_count < 20 && !bad_mmcr2) ||
+		ebb_state.stats.ebb_count < 1)
+	{
+		mtspr(SPRN_MMCR2, expected[i % 2]);
+
+		FAIL_IF(core_busy_loop());
+
+		val = mfspr(SPRN_MMCR2);
+		if (val != expected[i % 2]) {
+			bad_mmcr2 = true;
+			actual = val;
+		}
+
+		i++;
+	}
+
+	ebb_global_disable();
+	ebb_freeze_pmcs();
+
+	count_pmc(1, sample_period);
+
+	dump_ebb_state();
+
+	event_close(&event);
+
+	FAIL_IF(ebb_state.stats.ebb_count == 0);
+
+	if (bad_mmcr2)
+		printf("Bad MMCR2 value seen is 0x%lx\n", actual);
+
+	FAIL_IF(bad_mmcr2);
+
+	return 0;
+}
+
+int main(void)
+{
+	return test_harness(cycles_with_mmcr2, "cycles_with_mmcr2");
+}
+6 -255
tools/testing/selftests/powerpc/pmu/ebb/ebb.c
··· 224 224 225 225 printf("HW state:\n" \ 226 226 "MMCR0 0x%016x %s\n" \ 227 + "MMCR2 0x%016lx\n" \ 227 228 "EBBHR 0x%016lx\n" \ 228 229 "BESCR 0x%016llx %s\n" \ 229 230 "PMC1 0x%016lx\n" \ ··· 234 233 "PMC5 0x%016lx\n" \ 235 234 "PMC6 0x%016lx\n" \ 236 235 "SIAR 0x%016lx\n", 237 - mmcr0, decode_mmcr0(mmcr0), mfspr(SPRN_EBBHR), bescr, 238 - decode_bescr(bescr), mfspr(SPRN_PMC1), mfspr(SPRN_PMC2), 239 - mfspr(SPRN_PMC3), mfspr(SPRN_PMC4), mfspr(SPRN_PMC5), 240 - mfspr(SPRN_PMC6), mfspr(SPRN_SIAR)); 236 + mmcr0, decode_mmcr0(mmcr0), mfspr(SPRN_MMCR2), 237 + mfspr(SPRN_EBBHR), bescr, decode_bescr(bescr), 238 + mfspr(SPRN_PMC1), mfspr(SPRN_PMC2), mfspr(SPRN_PMC3), 239 + mfspr(SPRN_PMC4), mfspr(SPRN_PMC5), mfspr(SPRN_PMC6), 240 + mfspr(SPRN_SIAR)); 241 241 } 242 242 243 243 void dump_ebb_state(void) ··· 335 333 336 334 e->attr.exclusive = 1; 337 335 e->attr.pinned = 1; 338 - } 339 - 340 - int core_busy_loop(void) 341 - { 342 - int rc; 343 - 344 - asm volatile ( 345 - "li 3, 0x3030\n" 346 - "std 3, -96(1)\n" 347 - "li 4, 0x4040\n" 348 - "std 4, -104(1)\n" 349 - "li 5, 0x5050\n" 350 - "std 5, -112(1)\n" 351 - "li 6, 0x6060\n" 352 - "std 6, -120(1)\n" 353 - "li 7, 0x7070\n" 354 - "std 7, -128(1)\n" 355 - "li 8, 0x0808\n" 356 - "std 8, -136(1)\n" 357 - "li 9, 0x0909\n" 358 - "std 9, -144(1)\n" 359 - "li 10, 0x1010\n" 360 - "std 10, -152(1)\n" 361 - "li 11, 0x1111\n" 362 - "std 11, -160(1)\n" 363 - "li 14, 0x1414\n" 364 - "std 14, -168(1)\n" 365 - "li 15, 0x1515\n" 366 - "std 15, -176(1)\n" 367 - "li 16, 0x1616\n" 368 - "std 16, -184(1)\n" 369 - "li 17, 0x1717\n" 370 - "std 17, -192(1)\n" 371 - "li 18, 0x1818\n" 372 - "std 18, -200(1)\n" 373 - "li 19, 0x1919\n" 374 - "std 19, -208(1)\n" 375 - "li 20, 0x2020\n" 376 - "std 20, -216(1)\n" 377 - "li 21, 0x2121\n" 378 - "std 21, -224(1)\n" 379 - "li 22, 0x2222\n" 380 - "std 22, -232(1)\n" 381 - "li 23, 0x2323\n" 382 - "std 23, -240(1)\n" 383 - "li 24, 0x2424\n" 384 - "std 24, -248(1)\n" 385 - "li 25, 0x2525\n" 386 - "std 25, 
-256(1)\n" 387 - "li 26, 0x2626\n" 388 - "std 26, -264(1)\n" 389 - "li 27, 0x2727\n" 390 - "std 27, -272(1)\n" 391 - "li 28, 0x2828\n" 392 - "std 28, -280(1)\n" 393 - "li 29, 0x2929\n" 394 - "std 29, -288(1)\n" 395 - "li 30, 0x3030\n" 396 - "li 31, 0x3131\n" 397 - 398 - "li 3, 0\n" 399 - "0: " 400 - "addi 3, 3, 1\n" 401 - "cmpwi 3, 100\n" 402 - "blt 0b\n" 403 - 404 - /* Return 1 (fail) unless we get through all the checks */ 405 - "li 0, 1\n" 406 - 407 - /* Check none of our registers have been corrupted */ 408 - "cmpwi 4, 0x4040\n" 409 - "bne 1f\n" 410 - "cmpwi 5, 0x5050\n" 411 - "bne 1f\n" 412 - "cmpwi 6, 0x6060\n" 413 - "bne 1f\n" 414 - "cmpwi 7, 0x7070\n" 415 - "bne 1f\n" 416 - "cmpwi 8, 0x0808\n" 417 - "bne 1f\n" 418 - "cmpwi 9, 0x0909\n" 419 - "bne 1f\n" 420 - "cmpwi 10, 0x1010\n" 421 - "bne 1f\n" 422 - "cmpwi 11, 0x1111\n" 423 - "bne 1f\n" 424 - "cmpwi 14, 0x1414\n" 425 - "bne 1f\n" 426 - "cmpwi 15, 0x1515\n" 427 - "bne 1f\n" 428 - "cmpwi 16, 0x1616\n" 429 - "bne 1f\n" 430 - "cmpwi 17, 0x1717\n" 431 - "bne 1f\n" 432 - "cmpwi 18, 0x1818\n" 433 - "bne 1f\n" 434 - "cmpwi 19, 0x1919\n" 435 - "bne 1f\n" 436 - "cmpwi 20, 0x2020\n" 437 - "bne 1f\n" 438 - "cmpwi 21, 0x2121\n" 439 - "bne 1f\n" 440 - "cmpwi 22, 0x2222\n" 441 - "bne 1f\n" 442 - "cmpwi 23, 0x2323\n" 443 - "bne 1f\n" 444 - "cmpwi 24, 0x2424\n" 445 - "bne 1f\n" 446 - "cmpwi 25, 0x2525\n" 447 - "bne 1f\n" 448 - "cmpwi 26, 0x2626\n" 449 - "bne 1f\n" 450 - "cmpwi 27, 0x2727\n" 451 - "bne 1f\n" 452 - "cmpwi 28, 0x2828\n" 453 - "bne 1f\n" 454 - "cmpwi 29, 0x2929\n" 455 - "bne 1f\n" 456 - "cmpwi 30, 0x3030\n" 457 - "bne 1f\n" 458 - "cmpwi 31, 0x3131\n" 459 - "bne 1f\n" 460 - 461 - /* Load junk into all our registers before we reload them from the stack. 
*/ 462 - "li 3, 0xde\n" 463 - "li 4, 0xad\n" 464 - "li 5, 0xbe\n" 465 - "li 6, 0xef\n" 466 - "li 7, 0xde\n" 467 - "li 8, 0xad\n" 468 - "li 9, 0xbe\n" 469 - "li 10, 0xef\n" 470 - "li 11, 0xde\n" 471 - "li 14, 0xad\n" 472 - "li 15, 0xbe\n" 473 - "li 16, 0xef\n" 474 - "li 17, 0xde\n" 475 - "li 18, 0xad\n" 476 - "li 19, 0xbe\n" 477 - "li 20, 0xef\n" 478 - "li 21, 0xde\n" 479 - "li 22, 0xad\n" 480 - "li 23, 0xbe\n" 481 - "li 24, 0xef\n" 482 - "li 25, 0xde\n" 483 - "li 26, 0xad\n" 484 - "li 27, 0xbe\n" 485 - "li 28, 0xef\n" 486 - "li 29, 0xdd\n" 487 - 488 - "ld 3, -96(1)\n" 489 - "cmpwi 3, 0x3030\n" 490 - "bne 1f\n" 491 - "ld 4, -104(1)\n" 492 - "cmpwi 4, 0x4040\n" 493 - "bne 1f\n" 494 - "ld 5, -112(1)\n" 495 - "cmpwi 5, 0x5050\n" 496 - "bne 1f\n" 497 - "ld 6, -120(1)\n" 498 - "cmpwi 6, 0x6060\n" 499 - "bne 1f\n" 500 - "ld 7, -128(1)\n" 501 - "cmpwi 7, 0x7070\n" 502 - "bne 1f\n" 503 - "ld 8, -136(1)\n" 504 - "cmpwi 8, 0x0808\n" 505 - "bne 1f\n" 506 - "ld 9, -144(1)\n" 507 - "cmpwi 9, 0x0909\n" 508 - "bne 1f\n" 509 - "ld 10, -152(1)\n" 510 - "cmpwi 10, 0x1010\n" 511 - "bne 1f\n" 512 - "ld 11, -160(1)\n" 513 - "cmpwi 11, 0x1111\n" 514 - "bne 1f\n" 515 - "ld 14, -168(1)\n" 516 - "cmpwi 14, 0x1414\n" 517 - "bne 1f\n" 518 - "ld 15, -176(1)\n" 519 - "cmpwi 15, 0x1515\n" 520 - "bne 1f\n" 521 - "ld 16, -184(1)\n" 522 - "cmpwi 16, 0x1616\n" 523 - "bne 1f\n" 524 - "ld 17, -192(1)\n" 525 - "cmpwi 17, 0x1717\n" 526 - "bne 1f\n" 527 - "ld 18, -200(1)\n" 528 - "cmpwi 18, 0x1818\n" 529 - "bne 1f\n" 530 - "ld 19, -208(1)\n" 531 - "cmpwi 19, 0x1919\n" 532 - "bne 1f\n" 533 - "ld 20, -216(1)\n" 534 - "cmpwi 20, 0x2020\n" 535 - "bne 1f\n" 536 - "ld 21, -224(1)\n" 537 - "cmpwi 21, 0x2121\n" 538 - "bne 1f\n" 539 - "ld 22, -232(1)\n" 540 - "cmpwi 22, 0x2222\n" 541 - "bne 1f\n" 542 - "ld 23, -240(1)\n" 543 - "cmpwi 23, 0x2323\n" 544 - "bne 1f\n" 545 - "ld 24, -248(1)\n" 546 - "cmpwi 24, 0x2424\n" 547 - "bne 1f\n" 548 - "ld 25, -256(1)\n" 549 - "cmpwi 25, 0x2525\n" 550 - "bne 1f\n" 551 - "ld 26, 
-264(1)\n" 552 - "cmpwi 26, 0x2626\n" 553 - "bne 1f\n" 554 - "ld 27, -272(1)\n" 555 - "cmpwi 27, 0x2727\n" 556 - "bne 1f\n" 557 - "ld 28, -280(1)\n" 558 - "cmpwi 28, 0x2828\n" 559 - "bne 1f\n" 560 - "ld 29, -288(1)\n" 561 - "cmpwi 29, 0x2929\n" 562 - "bne 1f\n" 563 - 564 - /* Load 0 (success) to return */ 565 - "li 0, 0\n" 566 - 567 - "1: mr %0, 0\n" 568 - 569 - : "=r" (rc) 570 - : /* no inputs */ 571 - : "3", "4", "5", "6", "7", "8", "9", "10", "11", "14", 572 - "15", "16", "17", "18", "19", "20", "21", "22", "23", 573 - "24", "25", "26", "27", "28", "29", "30", "31", 574 - "memory" 575 - ); 576 - 577 - return rc; 578 - } 579 - 580 - int core_busy_loop_with_freeze(void) 581 - { 582 - int rc; 583 - 584 - mtspr(SPRN_MMCR0, mfspr(SPRN_MMCR0) & ~MMCR0_FC); 585 - rc = core_busy_loop(); 586 - mtspr(SPRN_MMCR0, mfspr(SPRN_MMCR0) | MMCR0_FC); 587 - 588 - return rc; 589 336 } 590 337 591 338 int ebb_child(union pipe read_pipe, union pipe write_pipe)
tools/testing/selftests/powerpc/pmu/ebb/ebb.h (-1)
···
 extern u64 sample_period;
 
 int core_busy_loop(void);
-int core_busy_loop_with_freeze(void);
 int ebb_child(union pipe read_pipe, union pipe write_pipe);
 int catch_sigill(void (*func)(void));
 void write_pmc1(void);
tools/testing/selftests/powerpc/pmu/l3_bank_test.c (+48)
···
+/*
+ * Copyright 2014, Michael Ellerman, IBM Corp.
+ * Licensed under GPLv2.
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+
+#include "event.h"
+#include "utils.h"
+
+#define MALLOC_SIZE	(0x10000 * 10)	/* Ought to be enough .. */
+
+/*
+ * Tests that the L3 bank handling is correct. We fixed it in commit e9aaac1.
+ */
+static int l3_bank_test(void)
+{
+	struct event event;
+	char *p;
+	int i;
+
+	p = malloc(MALLOC_SIZE);
+	FAIL_IF(!p);
+
+	event_init(&event, 0x84918F);
+
+	FAIL_IF(event_open(&event));
+
+	for (i = 0; i < MALLOC_SIZE; i += 0x10000)
+		p[i] = i;
+
+	event_read(&event);
+	event_report(&event);
+
+	FAIL_IF(event.result.running == 0);
+	FAIL_IF(event.result.enabled == 0);
+
+	event_close(&event);
+	free(p);
+
+	return 0;
+}
+
+int main(void)
+{
+	return test_harness(l3_bank_test, "l3_bank_test");
+}
tools/testing/selftests/powerpc/pmu/lib.c (+49 -1)
···
 
 #define _GNU_SOURCE	/* For CPU_ZERO etc. */
 
+#include <elf.h>
 #include <errno.h>
+#include <fcntl.h>
+#include <link.h>
 #include <sched.h>
 #include <setjmp.h>
 #include <stdlib.h>
+#include <sys/stat.h>
+#include <sys/types.h>
 #include <sys/wait.h>
 
 #include "utils.h"
···
 
 int parse_proc_maps(void)
 {
+	unsigned long start, end;
 	char execute, name[128];
-	uint64_t start, end;
 	FILE *f;
 	int rc;
···
 	fclose(f);
 out:
 	return rc;
+}
+
+static char auxv[4096];
+
+void *get_auxv_entry(int type)
+{
+	ElfW(auxv_t) *p;
+	void *result;
+	ssize_t num;
+	int fd;
+
+	fd = open("/proc/self/auxv", O_RDONLY);
+	if (fd == -1) {
+		perror("open");
+		return NULL;
+	}
+
+	result = NULL;
+
+	num = read(fd, auxv, sizeof(auxv));
+	if (num < 0) {
+		perror("read");
+		goto out;
+	}
+
+	if (num > sizeof(auxv)) {
+		printf("Overflowed auxv buffer\n");
+		goto out;
+	}
+
+	p = (ElfW(auxv_t) *)auxv;
+
+	while (p->a_type != AT_NULL) {
+		if (p->a_type == type) {
+			result = (void *)p->a_un.a_val;
+			break;
+		}
+
+		p++;
+	}
+out:
+	close(fd);
+	return result;
 }
tools/testing/selftests/powerpc/pmu/lib.h (+1)
···
 extern int notify_parent_of_error(union pipe write_pipe);
 extern pid_t eat_cpu(int (test_function)(void));
 extern bool require_paranoia_below(int level);
+extern void *get_auxv_entry(int type);
 
 struct addr_range {
 	uint64_t first, last;
tools/testing/selftests/powerpc/pmu/per_event_excludes.c (+114)
···
+/*
+ * Copyright 2014, Michael Ellerman, IBM Corp.
+ * Licensed under GPLv2.
+ */
+
+#define _GNU_SOURCE
+
+#include <elf.h>
+#include <limits.h>
+#include <stdio.h>
+#include <stdbool.h>
+#include <string.h>
+#include <sys/prctl.h>
+
+#include "event.h"
+#include "lib.h"
+#include "utils.h"
+
+/*
+ * Test that per-event excludes work.
+ */
+
+static int per_event_excludes(void)
+{
+	struct event *e, events[4];
+	char *platform;
+	int i;
+
+	platform = (char *)get_auxv_entry(AT_BASE_PLATFORM);
+	FAIL_IF(!platform);
+	SKIP_IF(strcmp(platform, "power8") != 0);
+
+	/*
+	 * We need to create the events disabled, otherwise the running/enabled
+	 * counts don't match up.
+	 */
+	e = &events[0];
+	event_init_opts(e, PERF_COUNT_HW_INSTRUCTIONS,
+			PERF_TYPE_HARDWARE, "instructions");
+	e->attr.disabled = 1;
+
+	e = &events[1];
+	event_init_opts(e, PERF_COUNT_HW_INSTRUCTIONS,
+			PERF_TYPE_HARDWARE, "instructions(k)");
+	e->attr.disabled = 1;
+	e->attr.exclude_user = 1;
+	e->attr.exclude_hv = 1;
+
+	e = &events[2];
+	event_init_opts(e, PERF_COUNT_HW_INSTRUCTIONS,
+			PERF_TYPE_HARDWARE, "instructions(h)");
+	e->attr.disabled = 1;
+	e->attr.exclude_user = 1;
+	e->attr.exclude_kernel = 1;
+
+	e = &events[3];
+	event_init_opts(e, PERF_COUNT_HW_INSTRUCTIONS,
+			PERF_TYPE_HARDWARE, "instructions(u)");
+	e->attr.disabled = 1;
+	e->attr.exclude_hv = 1;
+	e->attr.exclude_kernel = 1;
+
+	FAIL_IF(event_open(&events[0]));
+
+	/*
+	 * The open here will fail if we don't have per event exclude support,
+	 * because the second event has an incompatible set of exclude settings
+	 * and we're asking for the events to be in a group.
+	 */
+	for (i = 1; i < 4; i++)
+		FAIL_IF(event_open_with_group(&events[i], events[0].fd));
+
+	/*
+	 * Even though the above will fail without per-event excludes we keep
+	 * testing in order to be thorough.
+	 */
+	prctl(PR_TASK_PERF_EVENTS_ENABLE);
+
+	/* Spin for a while */
+	for (i = 0; i < INT_MAX; i++)
+		asm volatile("" : : : "memory");
+
+	prctl(PR_TASK_PERF_EVENTS_DISABLE);
+
+	for (i = 0; i < 4; i++) {
+		FAIL_IF(event_read(&events[i]));
+		event_report(&events[i]);
+	}
+
+	/*
+	 * We should see that all events have enabled == running. That
+	 * shows that they were all on the PMU at once.
+	 */
+	for (i = 0; i < 4; i++)
+		FAIL_IF(events[i].result.running != events[i].result.enabled);
+
+	/*
+	 * We can also check that the result for instructions is >= all the
+	 * other counts. That's because it is counting all instructions while
+	 * the others are counting a subset.
+	 */
+	for (i = 1; i < 4; i++)
+		FAIL_IF(events[0].result.value < events[i].result.value);
+
+	for (i = 0; i < 4; i++)
+		event_close(&events[i]);
+
+	return 0;
+}
+
+int main(void)
+{
+	return test_harness(per_event_excludes, "per_event_excludes");
+}