Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'powerpc-4.3-1' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux

Pull powerpc updates from Michael Ellerman:

- support "hybrid" iommu/direct DMA ops for coherent_mask < dma_mask
from Benjamin Herrenschmidt

- EEH fixes for SRIOV from Gavin

- introduce rtas_get_sensor_fast() for IRQ handlers from Thomas Huth

- use hardware RNG for arch_get_random_seed_* not arch_get_random_*
from Paul Mackerras

- seccomp filter support from Michael Ellerman

- opal_cec_reboot2() handling for HMIs & machine checks from Mahesh
Salgaonkar

- add powerpc timebase as a trace clock source from Naveen N. Rao

- misc cleanups in the xmon, signal & SLB code from Anshuman Khandual

- add an inline function to update POWER8 HID0 from Gautham R. Shenoy

- fix pte_pagesize_index() crash on 4K w/64K hash from Michael Ellerman

- drop support for 64K local store on 4K kernels from Michael Ellerman

- move dma_get_required_mask() from pnv_phb to pci_controller_ops from
Andrew Donnellan

- initialize distance lookup table from drconf path from Nikunj A
Dadhania

- enable RTC class support from Vaibhav Jain

- disable automatically blocked PCI config from Gavin Shan

- add LEDs driver for PowerNV platform from Vasant Hegde

- fix endianness issues in the HVSI driver from Laurent Dufour

- kexec endian fixes from Samuel Mendoza-Jonas

- fix corrupted pdn list from Gavin Shan

- fix fenced PHB caused by eeh_slot_error_detail() from Gavin Shan

- Freescale updates from Scott: Highlights include 32-bit memcpy/memset
optimizations, checksum optimizations, 85xx config fragments and
updates, device tree updates, e6500 fixes for non-SMP, and misc
cleanup and minor fixes.

- a ton of cxl updates & fixes:
- add explicit precision specifiers from Rasmus Villemoes
- use more common format specifier from Rasmus Villemoes
- destroy cxl_adapter_idr on module_exit from Johannes Thumshirn
- destroy afu->contexts_idr on release of an afu from Johannes
Thumshirn
- compile with -Werror from Daniel Axtens
- EEH support from Daniel Axtens
- plug irq_bitmap getting leaked in cxl_context from Vaibhav Jain
- add alternate MMIO error handling from Ian Munsie
- allow release of contexts which have been OPENED but not STARTED
from Andrew Donnellan
- remove use of macro DEFINE_PCI_DEVICE_TABLE from Vaishali Thakkar
- release irqs if memory allocation fails from Vaibhav Jain
- remove racy attempt to force EEH invocation in reset from Daniel
Axtens
- fix + cleanup error paths in cxl_dev_context_init from Ian Munsie
- fix force unmapping mmaps of contexts allocated through the kernel
api from Ian Munsie
- set up and enable PSL Timebase from Philippe Bergheaud

* tag 'powerpc-4.3-1' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux: (140 commits)
cxl: Set up and enable PSL Timebase
cxl: Fix force unmapping mmaps of contexts allocated through the kernel api
cxl: Fix + cleanup error paths in cxl_dev_context_init
powerpc/eeh: Fix fenced PHB caused by eeh_slot_error_detail()
powerpc/pseries: Cleanup on pci_dn_reconfig_notifier()
powerpc/pseries: Fix corrupted pdn list
powerpc/powernv: Enable LEDS support
powerpc/iommu: Set default DMA offset in dma_dev_setup
cxl: Remove racy attempt to force EEH invocation in reset
cxl: Release irqs if memory allocation fails
cxl: Remove use of macro DEFINE_PCI_DEVICE_TABLE
powerpc/powernv: Fix mis-merge of OPAL support for LEDS driver
powerpc/powernv: Reset HILE before kexec_sequence()
powerpc/kexec: Reset secondary cpu endianness before kexec
powerpc/hvsi: Fix endianness issues in the HVSI driver
leds/powernv: Add driver for PowerNV platform
powerpc/powernv: Create LED platform device
powerpc/powernv: Add OPAL interfaces for accessing and modifying system LED states
powerpc/powernv: Fix the log message when disabling VF
cxl: Allow release of contexts which have been OPENED but not STARTED
...

+3597 -2035
+10
Documentation/ABI/testing/sysfs-class-cxl
@@ -223,3 +223,13 @@
 		Writing 1 will issue a PERST to card which may cause the card
 		to reload the FPGA depending on load_image_on_perst.
 Users:		https://github.com/ibm-capi/libcxl
+
+What:		/sys/class/cxl/<card>/perst_reloads_same_image
+Date:		July 2015
+Contact:	linuxppc-dev@lists.ozlabs.org
+Description:	read/write
+		Trust that when an image is reloaded via PERST, it will not
+		have changed.
+		0 = don't trust, the image may be different (default)
+		1 = trust that the image will not change.
+Users:		https://github.com/ibm-capi/libcxl
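For context, the new attribute above would be toggled from userspace roughly like the following sketch; the `card0` path and the helper name are illustrative assumptions, not part of the patch:

```shell
# Hypothetical helper for the new perst_reloads_same_image attribute.
# A real card would appear as e.g. /sys/class/cxl/card0/perst_reloads_same_image.
set_perst_trust() {
    attr="$1"   # path to a perst_reloads_same_image attribute
    val="$2"    # 0 = image may differ after PERST (default), 1 = trust it is unchanged
    case "$val" in
        0|1) echo "$val" > "$attr" ;;
        *)   echo "value must be 0 or 1" >&2; return 1 ;;
    esac
}
```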
+26
Documentation/devicetree/bindings/leds/leds-powernv.txt
@@ -0,0 +1,26 @@
+Device Tree binding for LEDs on IBM Power Systems
+-------------------------------------------------
+
+Required properties:
+- compatible : Should be "ibm,opal-v3-led".
+- led-mode   : Should be "lightpath" or "guidinglight".
+
+Each location code of FRU/Enclosure must be expressed in the
+form of a sub-node.
+
+Required properties for the sub nodes:
+- led-types : Supported LED types (attention/identify/fault) provided
+              in the form of string array.
+
+Example:
+
+leds {
+    compatible = "ibm,opal-v3-led";
+    led-mode = "lightpath";
+
+    U78C9.001.RST0027-P1-C1 {
+        led-types = "identify", "fault";
+    };
+    ...
+    ...
+};
+3
Documentation/devicetree/bindings/memory-controllers/fsl/ifc.txt
@@ -18,6 +18,8 @@
                       interrupt (NAND_EVTER_STAT).  If there is only one,
                       that interrupt reports both types of event.
 
+ - little-endian : If this property is absent, the big-endian mode will
+                   be in use as default for registers.
 
  - ranges : Each range corresponds to a single chipselect, and covers
             the entire access window as configured.
@@ -36,6 +34,7 @@
     #size-cells = <1>;
     reg = <0x0 0xffe1e000 0 0x2000>;
     interrupts = <16 2 19 2>;
+    little-endian;
 
     /* NOR, NAND Flashes and CPLD on board */
     ranges = <0x0 0x0 0x0 0xee000000 0x02000000
+18
Documentation/devicetree/bindings/powerpc/fsl/scfg.txt
@@ -0,0 +1,18 @@
+Freescale Supplement configuration unit (SCFG)
+
+SCFG is the supplemental configuration unit, that provides SoC specific
+configuration and status registers for the chip. Such as getting PEX port
+status.
+
+Required properties:
+
+- compatible: should be "fsl,<chip>-scfg"
+- reg: should contain base address and length of SCFG memory-mapped
+  registers
+
+Example:
+
+	scfg: global-utilities@fc000 {
+		compatible = "fsl,t1040-scfg";
+		reg = <0xfc000 0x1000>;
+	};
+5
Documentation/trace/ftrace.txt
@@ -346,6 +346,11 @@
       x86-tsc: Architectures may define their own clocks. For
                example, x86 uses its own TSC cycle clock here.
 
+      ppc-tb: This uses the powerpc timebase register value.
+              This is in sync across CPUs and can also be used
+              to correlate events across hypervisor/guest if
+              tb_offset is known.
+
 To set a clock, simply echo the clock name into this file.
 
   echo global > trace_clock
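The hunk above documents the selection mechanism ("echo the clock name into this file"). A minimal sketch of that workflow, wrapped in a function for reuse; the tracefs mount point is the conventional one and `ppc-tb` only exists on powerpc kernels carrying this series:

```shell
# Sketch: select a trace clock (e.g. the new ppc-tb) via tracefs.
# Requires root and CONFIG_TRACING; mount point is an assumption.
set_trace_clock() {
    clock="$1"
    tracefs="${2:-/sys/kernel/debug/tracing}"
    if [ -w "$tracefs/trace_clock" ]; then
        echo "$clock" > "$tracefs/trace_clock"
        # the currently selected clock is shown in [brackets]
        cat "$tracefs/trace_clock"
    else
        echo "trace_clock not writable; need root and CONFIG_TRACING" >&2
        return 1
    fi
}
```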
+11 -11
arch/powerpc/Kconfig
@@ -82,6 +82,9 @@
 	bool
 	default y
 
+config ARCH_HAS_DMA_SET_COHERENT_MASK
+	bool
+
 config PPC
 	bool
 	default y
@@ -158,6 +155,8 @@
 	select HAVE_PERF_EVENTS_NMI if PPC64
 	select EDAC_SUPPORT
 	select EDAC_ATOMIC_SCRUB
+	select ARCH_HAS_DMA_SET_COHERENT_MASK
+	select HAVE_ARCH_SECCOMP_FILTER
 
 config GENERIC_CSUM
 	def_bool CPU_LITTLE_ENDIAN
@@ -519,11 +514,6 @@
 	def_bool y
 	depends on NEED_MULTIPLE_NODES
 
-config PPC_HAS_HASH_64K
-	bool
-	depends on PPC64
-	default n
-
 config STDBINUTILS
 	bool "Using standard binutils settings"
 	depends on 44x
@@ -560,16 +560,16 @@
 	bool "4k page size"
 
 config PPC_16K_PAGES
-	bool "16k page size" if 44x || PPC_8xx
+	bool "16k page size"
+	depends on 44x || PPC_8xx
 
 config PPC_64K_PAGES
-	bool "64k page size" if 44x || PPC_STD_MMU_64 || PPC_BOOK3E_64
-	depends on !PPC_FSL_BOOK3E
-	select PPC_HAS_HASH_64K if PPC_STD_MMU_64
+	bool "64k page size"
+	depends on !PPC_FSL_BOOK3E && (44x || PPC_STD_MMU_64 || PPC_BOOK3E_64)
 
 config PPC_256K_PAGES
-	bool "256k page size" if 44x
-	depends on !STDBINUTILS
+	bool "256k page size"
+	depends on 44x && !STDBINUTILS
 	help
 	  Make the page size 256k.
+20
arch/powerpc/Makefile
@@ -288,6 +288,26 @@
 pseries_le_defconfig:
 	$(call merge_into_defconfig,pseries_defconfig,le)
 
+PHONY += mpc85xx_defconfig
+mpc85xx_defconfig:
+	$(call merge_into_defconfig,mpc85xx_basic_defconfig,\
+		85xx-32bit 85xx-hw fsl-emb-nonhw)
+
+PHONY += mpc85xx_smp_defconfig
+mpc85xx_smp_defconfig:
+	$(call merge_into_defconfig,mpc85xx_basic_defconfig,\
+		85xx-32bit 85xx-smp 85xx-hw fsl-emb-nonhw)
+
+PHONY += corenet32_smp_defconfig
+corenet32_smp_defconfig:
+	$(call merge_into_defconfig,corenet_basic_defconfig,\
+		85xx-32bit 85xx-smp 85xx-hw fsl-emb-nonhw)
+
+PHONY += corenet64_smp_defconfig
+corenet64_smp_defconfig:
+	$(call merge_into_defconfig,corenet_basic_defconfig,\
+		85xx-64bit 85xx-smp altivec 85xx-hw fsl-emb-nonhw)
+
 define archhelp
   @echo '* zImage          - Build default images selected by kernel config'
   @echo '  zImage.*        - Compressed kernel image (arch/$(ARCH)/boot/zImage.*)'
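The Makefile hunk above replaces the monolithic 85xx/corenet defconfigs (removed later in this series) with targets that merge config fragments. A hedged sketch of invoking one of the new targets; the guard and helper name are assumptions for illustration:

```shell
# Sketch: build a merged 85xx config from fragments (run against a kernel
# source tree that carries this series; cross toolchain setup not shown).
build_85xx_config() {
    srctree="$1"
    if [ ! -f "$srctree/arch/powerpc/Makefile" ]; then
        echo "not a kernel source tree: $srctree" >&2
        return 1
    fi
    make -C "$srctree" ARCH=powerpc mpc85xx_smp_defconfig
}
```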
+1 -1
arch/powerpc/boot/dts/fsl/p1022si-post.dtsi
@@ -175,7 +175,7 @@
 
 /include/ "pq3-gpio-0.dtsi"
 
-	display@10000 {
+	display: display@10000 {
 		compatible = "fsl,diu", "fsl,p1022-diu";
 		reg = <0x10000 1000>;
 		interrupts = <64 2 0 0>;
+2
arch/powerpc/boot/dts/fsl/p1022si-pre.dtsi
@@ -50,6 +50,8 @@
 		pci0 = &pci0;
 		pci1 = &pci1;
 		pci2 = &pci2;
+		vga = &display;
+		display = &display;
 	};
 
 	cpus {
+5
arch/powerpc/boot/dts/fsl/t1040si-post.dtsi
@@ -484,6 +484,11 @@
 		reg = <0xea000 0x4000>;
 	};
 
+	scfg: global-utilities@fc000 {
+		compatible = "fsl,t1040-scfg";
+		reg = <0xfc000 0x1000>;
+	};
+
 /include/ "elo3-dma-0.dtsi"
 /include/ "elo3-dma-1.dtsi"
 /include/ "qoriq-espi-0.dtsi"
+12 -1
arch/powerpc/boot/dts/t1023rdb.dts
@@ -60,7 +60,7 @@
 			#address-cells = <1>;
 			#size-cells = <1>;
 			compatible = "fsl,ifc-nand";
-			reg = <0x2 0x0 0x10000>;
+			reg = <0x1 0x0 0x10000>;
 		};
 	};
 
@@ -99,6 +99,17 @@
 	};
 
 	i2c@118100 {
+		current-sensor@40 {
+			compatible = "ti,ina220";
+			reg = <0x40>;
+			shunt-resistor = <1000>;
+		};
+
+		current-sensor@41 {
+			compatible = "ti,ina220";
+			reg = <0x41>;
+			shunt-resistor = <1000>;
+		};
 	};
 };
 
+6
arch/powerpc/boot/dts/t1024rdb.dts
@@ -114,6 +114,12 @@
 			reg = <0x4c>;
 		};
 
+		current-sensor@40 {
+			compatible = "ti,ina220";
+			reg = <0x40>;
+			shunt-resistor = <1000>;
+		};
+
 		eeprom@50 {
 			compatible = "atmel,24c256";
 			reg = <0x50>;
+46
arch/powerpc/boot/dts/t1040d4rdb.dts
@@ -0,0 +1,46 @@
+/*
+ * T1040D4RDB Device Tree Source
+ *
+ * Copyright 2015 Freescale Semiconductor Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in the
+ *       documentation and/or other materials provided with the distribution.
+ *     * Neither the name of Freescale Semiconductor nor the
+ *       names of its contributors may be used to endorse or promote products
+ *       derived from this software without specific prior written permission.
+ *
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor "AS IS" AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+ * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+/include/ "fsl/t104xsi-pre.dtsi"
+/include/ "t104xd4rdb.dtsi"
+
+/ {
+	model = "fsl,T1040D4RDB";
+	compatible = "fsl,T1040D4RDB";
+	#address-cells = <2>;
+	#size-cells = <2>;
+	interrupt-parent = <&mpic>;
+};
+
+/include/ "fsl/t1040si-post.dtsi"
+53
arch/powerpc/boot/dts/t1042d4rdb.dts
@@ -0,0 +1,53 @@
+/*
+ * T1042D4RDB Device Tree Source
+ *
+ * Copyright 2015 Freescale Semiconductor Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in the
+ *       documentation and/or other materials provided with the distribution.
+ *     * Neither the name of Freescale Semiconductor nor the
+ *       names of its contributors may be used to endorse or promote products
+ *       derived from this software without specific prior written permission.
+ *
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor "AS IS" AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+ * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+/include/ "fsl/t104xsi-pre.dtsi"
+/include/ "t104xd4rdb.dtsi"
+
+/ {
+	model = "fsl,T1042D4RDB";
+	compatible = "fsl,T1042D4RDB";
+	#address-cells = <2>;
+	#size-cells = <2>;
+	interrupt-parent = <&mpic>;
+
+	ifc: localbus@ffe124000 {
+		cpld@3,0 {
+			compatible = "fsl,t1040d4rdb-cpld",
+				     "fsl,deepsleep-cpld";
+		};
+	};
+};
+
+/include/ "fsl/t1040si-post.dtsi"
+205
arch/powerpc/boot/dts/t104xd4rdb.dtsi
@@ -0,0 +1,205 @@
+/*
+ * T1040D4RDB/T1042D4RDB Device Tree Source
+ *
+ * Copyright 2015 Freescale Semiconductor Inc.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in the
+ *       documentation and/or other materials provided with the distribution.
+ *     * Neither the name of Freescale Semiconductor nor the
+ *       names of its contributors may be used to endorse or promote products
+ *       derived from this software without specific prior written permission.
+ *
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor "AS IS" AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+ * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+/ {
+	reserved-memory {
+		#address-cells = <2>;
+		#size-cells = <2>;
+		ranges;
+
+		bman_fbpr: bman-fbpr {
+			size = <0 0x1000000>;
+			alignment = <0 0x1000000>;
+		};
+		qman_fqd: qman-fqd {
+			size = <0 0x400000>;
+			alignment = <0 0x400000>;
+		};
+		qman_pfdr: qman-pfdr {
+			size = <0 0x2000000>;
+			alignment = <0 0x2000000>;
+		};
+	};
+
+	ifc: localbus@ffe124000 {
+		reg = <0xf 0xfe124000 0 0x2000>;
+		ranges = <0 0 0xf 0xe8000000 0x08000000
+			  2 0 0xf 0xff800000 0x00010000
+			  3 0 0xf 0xffdf0000 0x00008000>;
+
+		nor@0,0 {
+			#address-cells = <1>;
+			#size-cells = <1>;
+			compatible = "cfi-flash";
+			reg = <0x0 0x0 0x8000000>;
+			bank-width = <2>;
+			device-width = <1>;
+		};
+
+		nand@2,0 {
+			#address-cells = <1>;
+			#size-cells = <1>;
+			compatible = "fsl,ifc-nand";
+			reg = <0x2 0x0 0x10000>;
+		};
+
+		cpld@3,0 {
+			compatible = "fsl,t1040d4rdb-cpld";
+			reg = <3 0 0x300>;
+		};
+	};
+
+	memory {
+		device_type = "memory";
+	};
+
+	dcsr: dcsr@f00000000 {
+		ranges = <0x00000000 0xf 0x00000000 0x01072000>;
+	};
+
+	bportals: bman-portals@ff4000000 {
+		ranges = <0x0 0xf 0xf4000000 0x2000000>;
+	};
+
+	qportals: qman-portals@ff6000000 {
+		ranges = <0x0 0xf 0xf6000000 0x2000000>;
+	};
+
+	soc: soc@ffe000000 {
+		ranges = <0x00000000 0xf 0xfe000000 0x1000000>;
+		reg = <0xf 0xfe000000 0 0x00001000>;
+
+		spi@110000 {
+			flash@0 {
+				#address-cells = <1>;
+				#size-cells = <1>;
+				compatible = "micron,n25q512ax3";
+				reg = <0>;
+				/* input clock */
+				spi-max-frequency = <10000000>;
+			};
+		};
+		i2c@118000 {
+			hwmon@4c {
+				compatible = "adi,adt7461";
+				reg = <0x4c>;
+			};
+
+			rtc@68 {
+				compatible = "dallas,ds1337";
+				reg = <0x68>;
+				interrupts = <0x2 0x1 0 0>;
+			};
+		};
+
+		i2c@118100 {
+			mux@77 {
+				/*
+				 * Child nodes of mux depend on which i2c
+				 * devices are connected via the mini PCI
+				 * connector slot1, the mini PCI connector
+				 * slot2, the HDMI connector, and the PEX
+				 * slot. Systems with such devices attached
+				 * should provide a wrapper .dts file that
+				 * includes this one, and adds those nodes
+				 */
+				compatible = "nxp,pca9546";
+				reg = <0x77>;
+				#address-cells = <1>;
+				#size-cells = <0>;
+			};
+		};
+
+	};
+
+	pci0: pcie@ffe240000 {
+		reg = <0xf 0xfe240000 0 0x10000>;
+		ranges = <0x02000000 0 0xe0000000 0xc 0x0 0x0 0x10000000
+			  0x01000000 0 0x0 0xf 0xf8000000 0x0 0x00010000>;
+		pcie@0 {
+			ranges = <0x02000000 0 0xe0000000
+				  0x02000000 0 0xe0000000
+				  0 0x10000000
+
+				  0x01000000 0 0x00000000
+				  0x01000000 0 0x00000000
+				  0 0x00010000>;
+		};
+	};
+
+	pci1: pcie@ffe250000 {
+		reg = <0xf 0xfe250000 0 0x10000>;
+		ranges = <0x02000000 0 0xe0000000 0xc 0x10000000 0 0x10000000
+			  0x01000000 0 0 0xf 0xf8010000 0 0x00010000>;
+		pcie@0 {
+			ranges = <0x02000000 0 0xe0000000
+				  0x02000000 0 0xe0000000
+				  0 0x10000000
+
+				  0x01000000 0 0x00000000
+				  0x01000000 0 0x00000000
+				  0 0x00010000>;
+		};
+	};
+
+	pci2: pcie@ffe260000 {
+		reg = <0xf 0xfe260000 0 0x10000>;
+		ranges = <0x02000000 0 0xe0000000 0xc 0x20000000 0 0x10000000
+			  0x01000000 0 0x00000000 0xf 0xf8020000 0 0x00010000>;
+		pcie@0 {
+			ranges = <0x02000000 0 0xe0000000
+				  0x02000000 0 0xe0000000
+				  0 0x10000000
+
+				  0x01000000 0 0x00000000
+				  0x01000000 0 0x00000000
+				  0 0x00010000>;
+		};
+	};
+
+	pci3: pcie@ffe270000 {
+		reg = <0xf 0xfe270000 0 0x10000>;
+		ranges = <0x02000000 0 0xe0000000 0xc 0x30000000 0 0x10000000
+			  0x01000000 0 0x00000000 0xf 0xf8030000 0 0x00010000>;
+		pcie@0 {
+			ranges = <0x02000000 0 0xe0000000
+				  0x02000000 0 0xe0000000
+				  0 0x10000000
+
+				  0x01000000 0 0x00000000
+				  0x01000000 0 0x00000000
+				  0 0x00010000>;
+		};
+	};
+};
+5
arch/powerpc/configs/85xx-32bit.config
@@ -0,0 +1,5 @@
+CONFIG_HIGHMEM=y
+CONFIG_KEXEC=y
+CONFIG_PPC_85xx=y
+CONFIG_PROC_KCORE=y
+CONFIG_PHYS_64BIT=y
+4
arch/powerpc/configs/85xx-64bit.config
@@ -0,0 +1,4 @@
+CONFIG_MATH_EMULATION=y
+CONFIG_MATH_EMULATION_HW_UNIMPLEMENTED=y
+CONFIG_PPC64=y
+CONFIG_PPC_BOOK3E_64=y
+142
arch/powerpc/configs/85xx-hw.config
@@ -0,0 +1,142 @@
+CONFIG_AQUANTIA_PHY=y
+CONFIG_AT803X_PHY=y
+CONFIG_ATA=y
+CONFIG_BLK_DEV_SD=y
+CONFIG_BLK_DEV_SR_VENDOR=y
+CONFIG_BLK_DEV_SR=y
+CONFIG_BROADCOM_PHY=y
+CONFIG_C293_PCIE=y
+CONFIG_CHR_DEV_SG=y
+CONFIG_CHR_DEV_ST=y
+CONFIG_CICADA_PHY=y
+CONFIG_CLK_QORIQ=y
+CONFIG_CRYPTO_DEV_FSL_CAAM=y
+CONFIG_CRYPTO_DEV_TALITOS=y
+CONFIG_DAVICOM_PHY=y
+CONFIG_DMADEVICES=y
+CONFIG_E1000E=y
+CONFIG_E1000=y
+CONFIG_EDAC_MM_EDAC=y
+CONFIG_EDAC_MPC85XX=y
+CONFIG_EDAC=y
+CONFIG_EEPROM_AT24=y
+CONFIG_EEPROM_LEGACY=y
+CONFIG_FB_FSL_DIU=y
+CONFIG_FS_ENET=y
+CONFIG_FSL_CORENET_CF=y
+CONFIG_FSL_DMA=y
+CONFIG_FSL_HV_MANAGER=y
+CONFIG_FSL_PQ_MDIO=y
+CONFIG_FSL_RIO=y
+CONFIG_FSL_XGMAC_MDIO=y
+CONFIG_GIANFAR=y
+CONFIG_GPIO_MPC8XXX=y
+CONFIG_HID_A4TECH=y
+CONFIG_HID_APPLE=y
+CONFIG_HID_BELKIN=y
+CONFIG_HID_CHERRY=y
+CONFIG_HID_CHICONY=y
+CONFIG_HID_CYPRESS=y
+CONFIG_HID_EZKEY=y
+CONFIG_HID_GYRATION=y
+CONFIG_HID_LOGITECH=y
+CONFIG_HID_MICROSOFT=y
+CONFIG_HID_MONTEREY=y
+CONFIG_HID_PANTHERLORD=y
+CONFIG_HID_PETALYNX=y
+CONFIG_HID_SAMSUNG=y
+CONFIG_HID_SUNPLUS=y
+CONFIG_I2C_CHARDEV=y
+CONFIG_I2C_CPM=m
+CONFIG_I2C_MPC=y
+CONFIG_I2C_MUX_PCA954x=y
+CONFIG_I2C_MUX=y
+CONFIG_I2C=y
+CONFIG_IGB=y
+CONFIG_INPUT_FF_MEMLESS=m
+# CONFIG_INPUT_KEYBOARD is not set
+# CONFIG_INPUT_MOUSEDEV is not set
+# CONFIG_INPUT_MOUSE is not set
+CONFIG_MARVELL_PHY=y
+CONFIG_MDIO_BUS_MUX_GPIO=y
+CONFIG_MDIO_BUS_MUX_MMIOREG=y
+CONFIG_MMC_SDHCI_OF_ESDHC=y
+CONFIG_MMC_SDHCI_PLTFM=y
+CONFIG_MMC_SDHCI=y
+CONFIG_MMC=y
+CONFIG_MTD_BLOCK=y
+CONFIG_MTD_CFI_AMDSTD=y
+CONFIG_MTD_CFI_INTELEXT=y
+CONFIG_MTD_CFI=y
+CONFIG_MTD_CMDLINE_PARTS=y
+CONFIG_MTD_M25P80=y
+CONFIG_MTD_NAND_FSL_ELBC=y
+CONFIG_MTD_NAND_FSL_IFC=y
+CONFIG_MTD_NAND=y
+CONFIG_MTD_PHYSMAP_OF=y
+CONFIG_MTD_PHYSMAP=y
+CONFIG_MTD_PLATRAM=y
+CONFIG_MTD_SPI_NOR=y
+CONFIG_NETDEVICES=y
+CONFIG_NVRAM=y
+CONFIG_PATA_ALI=y
+CONFIG_PATA_SIL680=y
+CONFIG_PATA_VIA=y
+# CONFIG_PCIEASPM is not set
+CONFIG_PCIEPORTBUS=y
+CONFIG_PCI_MSI=y
+CONFIG_PCI=y
+CONFIG_PPC_EPAPR_HV_BYTECHAN=y
+# CONFIG_PPC_OF_BOOT_TRAMPOLINE is not set
+CONFIG_QE_GPIO=y
+CONFIG_QUICC_ENGINE=y
+CONFIG_RAPIDIO=y
+CONFIG_RTC_CLASS=y
+CONFIG_RTC_DRV_CMOS=y
+CONFIG_RTC_DRV_DS1307=y
+CONFIG_RTC_DRV_DS1374=y
+CONFIG_RTC_DRV_DS3232=y
+CONFIG_SATA_AHCI=y
+CONFIG_SATA_FSL=y
+CONFIG_SATA_SIL24=y
+CONFIG_SATA_SIL=y
+CONFIG_SCSI_LOGGING=y
+CONFIG_SCSI_SYM53C8XX_2=y
+CONFIG_SENSORS_INA2XX=y
+CONFIG_SENSORS_LM90=y
+CONFIG_SERIAL_8250_CONSOLE=y
+CONFIG_SERIAL_8250_DETECT_IRQ=y
+CONFIG_SERIAL_8250_MANY_PORTS=y
+CONFIG_SERIAL_8250_NR_UARTS=6
+CONFIG_SERIAL_8250_RSA=y
+CONFIG_SERIAL_8250_RUNTIME_UARTS=6
+CONFIG_SERIAL_8250=y
+CONFIG_SERIAL_QE=m
+CONFIG_SERIO_LIBPS2=y
+# CONFIG_SND_DRIVERS is not set
+CONFIG_SND_INTEL8X0=y
+CONFIG_SND_POWERPC_SOC=y
+# CONFIG_SND_PPC is not set
+CONFIG_SND_SOC=y
+# CONFIG_SND_SUPPORT_OLD_API is not set
+# CONFIG_SND_USB is not set
+CONFIG_SND=y
+CONFIG_SOUND=y
+CONFIG_SPI_FSL_ESPI=y
+CONFIG_SPI_FSL_SPI=y
+CONFIG_SPI_GPIO=y
+CONFIG_SPI=y
+CONFIG_TERANETICS_PHY=y
+CONFIG_UCC_GETH=y
+CONFIG_USB_EHCI_FSL=y
+CONFIG_USB_EHCI_HCD=y
+CONFIG_USB_HID=m
+CONFIG_USB_MON=y
+CONFIG_USB_OHCI_HCD_PPC_OF_BE=y
+CONFIG_USB_OHCI_HCD_PPC_OF_LE=y
+CONFIG_USB_OHCI_HCD=y
+CONFIG_USB_STORAGE=y
+CONFIG_USB=y
+# CONFIG_VGA_CONSOLE is not set
+CONFIG_VIRT_DRIVERS=y
+CONFIG_VITESSE_PHY=y
+2
arch/powerpc/configs/85xx-smp.config
@@ -0,0 +1,2 @@
+CONFIG_NR_CPUS=24
+CONFIG_SMP=y
+1
arch/powerpc/configs/altivec.config
@@ -0,0 +1,1 @@
+CONFIG_ALTIVEC=y
-185
arch/powerpc/configs/corenet32_smp_defconfig
@@ -1,185 +0,0 @@
-CONFIG_PPC_85xx=y
-CONFIG_SMP=y
-CONFIG_NR_CPUS=8
-CONFIG_SYSVIPC=y
-CONFIG_POSIX_MQUEUE=y
-CONFIG_AUDIT=y
-CONFIG_NO_HZ=y
-CONFIG_HIGH_RES_TIMERS=y
-CONFIG_BSD_PROCESS_ACCT=y
-CONFIG_IKCONFIG=y
-CONFIG_IKCONFIG_PROC=y
-CONFIG_LOG_BUF_SHIFT=14
-CONFIG_BLK_DEV_INITRD=y
-CONFIG_KALLSYMS_ALL=y
-CONFIG_EMBEDDED=y
-CONFIG_PERF_EVENTS=y
-CONFIG_SLAB=y
-CONFIG_MODULES=y
-CONFIG_MODULE_UNLOAD=y
-CONFIG_MODULE_FORCE_UNLOAD=y
-CONFIG_MODVERSIONS=y
-# CONFIG_BLK_DEV_BSG is not set
-CONFIG_PARTITION_ADVANCED=y
-CONFIG_MAC_PARTITION=y
-CONFIG_CORENET_GENERIC=y
-CONFIG_HIGHMEM=y
-# CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
-CONFIG_BINFMT_MISC=m
-CONFIG_KEXEC=y
-CONFIG_FORCE_MAX_ZONEORDER=13
-CONFIG_PCI=y
-CONFIG_PCIEPORTBUS=y
-# CONFIG_PCIEASPM is not set
-CONFIG_PCI_MSI=y
-CONFIG_RAPIDIO=y
-CONFIG_FSL_RIO=y
-CONFIG_NET=y
-CONFIG_PACKET=y
-CONFIG_UNIX=y
-CONFIG_XFRM_USER=y
-CONFIG_XFRM_SUB_POLICY=y
-CONFIG_XFRM_STATISTICS=y
-CONFIG_NET_KEY=y
-CONFIG_NET_KEY_MIGRATE=y
-CONFIG_INET=y
-CONFIG_IP_MULTICAST=y
-CONFIG_IP_ADVANCED_ROUTER=y
-CONFIG_IP_MULTIPLE_TABLES=y
-CONFIG_IP_ROUTE_MULTIPATH=y
-CONFIG_IP_ROUTE_VERBOSE=y
-CONFIG_IP_PNP=y
-CONFIG_IP_PNP_DHCP=y
-CONFIG_IP_PNP_BOOTP=y
-CONFIG_IP_PNP_RARP=y
-CONFIG_NET_IPIP=y
-CONFIG_IP_MROUTE=y
-CONFIG_IP_PIMSM_V1=y
-CONFIG_IP_PIMSM_V2=y
-CONFIG_INET_AH=y
-CONFIG_INET_ESP=y
-CONFIG_INET_IPCOMP=y
-# CONFIG_INET_LRO is not set
-CONFIG_IPV6=y
-CONFIG_IP_SCTP=m
-CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
-CONFIG_DEVTMPFS=y
-CONFIG_DEVTMPFS_MOUNT=y
-CONFIG_MTD=y
-CONFIG_MTD_CMDLINE_PARTS=y
-CONFIG_MTD_BLOCK=y
-CONFIG_MTD_CFI=y
-CONFIG_MTD_CFI_INTELEXT=y
-CONFIG_MTD_CFI_AMDSTD=y
-CONFIG_MTD_PHYSMAP_OF=y
-CONFIG_MTD_NAND=y
-CONFIG_MTD_NAND_FSL_ELBC=y
-CONFIG_MTD_NAND_FSL_IFC=y
-CONFIG_MTD_SPI_NOR=y
-CONFIG_BLK_DEV_LOOP=y
-CONFIG_BLK_DEV_RAM=y
-CONFIG_BLK_DEV_RAM_SIZE=131072
-CONFIG_BLK_DEV_SD=y
-CONFIG_CHR_DEV_ST=y
-CONFIG_BLK_DEV_SR=y
-CONFIG_CHR_DEV_SG=y
-CONFIG_SCSI_LOGGING=y
-CONFIG_SCSI_SYM53C8XX_2=y
-CONFIG_ATA=y
-CONFIG_SATA_AHCI=y
-CONFIG_SATA_FSL=y
-CONFIG_SATA_SIL24=y
-CONFIG_SATA_SIL=y
-CONFIG_PATA_SIL680=y
-CONFIG_NETDEVICES=y
-CONFIG_FSL_PQ_MDIO=y
-CONFIG_FSL_XGMAC_MDIO=y
-CONFIG_E1000=y
-CONFIG_E1000E=y
-CONFIG_AT803X_PHY=y
-CONFIG_VITESSE_PHY=y
-CONFIG_FIXED_PHY=y
-CONFIG_MDIO_BUS_MUX_GPIO=y
-CONFIG_MDIO_BUS_MUX_MMIOREG=y
-# CONFIG_INPUT_MOUSEDEV is not set
-# CONFIG_INPUT_KEYBOARD is not set
-# CONFIG_INPUT_MOUSE is not set
-CONFIG_SERIO_LIBPS2=y
-# CONFIG_LEGACY_PTYS is not set
-CONFIG_PPC_EPAPR_HV_BYTECHAN=y
-CONFIG_SERIAL_8250=y
-CONFIG_SERIAL_8250_CONSOLE=y
-CONFIG_SERIAL_8250_MANY_PORTS=y
-CONFIG_SERIAL_8250_DETECT_IRQ=y
-CONFIG_SERIAL_8250_RSA=y
-CONFIG_NVRAM=y
-CONFIG_I2C=y
-CONFIG_I2C_CHARDEV=y
-CONFIG_I2C_MPC=y
-CONFIG_I2C_MUX=y
-CONFIG_I2C_MUX_PCA954x=y
-CONFIG_SPI=y
-CONFIG_SPI_GPIO=y
-CONFIG_SPI_FSL_SPI=y
-CONFIG_SPI_FSL_ESPI=y
-CONFIG_SENSORS_LM90=y
-CONFIG_SENSORS_INA2XX=y
-CONFIG_USB_HID=m
-CONFIG_USB=y
-CONFIG_USB_MON=y
-CONFIG_USB_EHCI_HCD=y
-CONFIG_USB_EHCI_FSL=y
-CONFIG_USB_OHCI_HCD=y
-CONFIG_USB_OHCI_HCD_PPC_OF_BE=y
-CONFIG_USB_OHCI_HCD_PPC_OF_LE=y
-CONFIG_USB_STORAGE=y
-CONFIG_MMC=y
-CONFIG_MMC_SDHCI=y
-CONFIG_EDAC=y
-CONFIG_EDAC_MM_EDAC=y
-CONFIG_EDAC_MPC85XX=y
-CONFIG_RTC_CLASS=y
-CONFIG_RTC_DRV_DS1307=y
-CONFIG_RTC_DRV_DS1374=y
-CONFIG_RTC_DRV_DS3232=y
-CONFIG_UIO=y
-CONFIG_VIRT_DRIVERS=y
-CONFIG_FSL_HV_MANAGER=y
-CONFIG_STAGING=y
-CONFIG_FSL_CORENET_CF=y
-CONFIG_CLK_QORIQ=y
-CONFIG_EXT2_FS=y
-CONFIG_EXT3_FS=y
-# CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set
-CONFIG_ISO9660_FS=m
-CONFIG_JOLIET=y
-CONFIG_ZISOFS=y
-CONFIG_UDF_FS=m
-CONFIG_MSDOS_FS=m
-CONFIG_VFAT_FS=y
-CONFIG_NTFS_FS=y
-CONFIG_PROC_KCORE=y
-CONFIG_TMPFS=y
-CONFIG_HUGETLBFS=y
-CONFIG_JFFS2_FS=y
-CONFIG_CRAMFS=y
-CONFIG_NFS_FS=y
-CONFIG_NFS_V4=y
-CONFIG_ROOT_NFS=y
-CONFIG_NFSD=m
-CONFIG_NLS_CODEPAGE_437=y
-CONFIG_NLS_CODEPAGE_850=y
-CONFIG_NLS_ISO8859_1=y
-CONFIG_NLS_UTF8=m
-CONFIG_DEBUG_INFO=y
-CONFIG_MAGIC_SYSRQ=y
-CONFIG_DEBUG_SHIRQ=y
-CONFIG_DETECT_HUNG_TASK=y
-CONFIG_RCU_TRACE=y
-CONFIG_CRYPTO_NULL=y
-CONFIG_CRYPTO_PCBC=m
-CONFIG_CRYPTO_MD4=y
-CONFIG_CRYPTO_SHA256=y
-CONFIG_CRYPTO_SHA512=y
-# CONFIG_CRYPTO_ANSI_CPRNG is not set
-CONFIG_CRYPTO_DEV_FSL_CAAM=y
-176
arch/powerpc/configs/corenet64_smp_defconfig
···
-CONFIG_PPC64=y
-CONFIG_PPC_BOOK3E_64=y
-CONFIG_ALTIVEC=y
-CONFIG_SMP=y
-CONFIG_NR_CPUS=24
-CONFIG_SYSVIPC=y
-CONFIG_FHANDLE=y
-CONFIG_IRQ_DOMAIN_DEBUG=y
-CONFIG_NO_HZ=y
-CONFIG_HIGH_RES_TIMERS=y
-CONFIG_BSD_PROCESS_ACCT=y
-CONFIG_IKCONFIG=y
-CONFIG_IKCONFIG_PROC=y
-CONFIG_LOG_BUF_SHIFT=14
-CONFIG_CGROUPS=y
-CONFIG_CPUSETS=y
-CONFIG_CGROUP_CPUACCT=y
-CONFIG_CGROUP_SCHED=y
-CONFIG_BLK_DEV_INITRD=y
-CONFIG_EXPERT=y
-CONFIG_KALLSYMS_ALL=y
-CONFIG_MODULES=y
-CONFIG_MODULE_UNLOAD=y
-CONFIG_MODULE_FORCE_UNLOAD=y
-CONFIG_MODVERSIONS=y
-# CONFIG_BLK_DEV_BSG is not set
-CONFIG_PARTITION_ADVANCED=y
-CONFIG_MAC_PARTITION=y
-CONFIG_CORENET_GENERIC=y
-# CONFIG_PPC_OF_BOOT_TRAMPOLINE is not set
-CONFIG_BINFMT_MISC=m
-CONFIG_MATH_EMULATION=y
-CONFIG_MATH_EMULATION_HW_UNIMPLEMENTED=y
-CONFIG_PCIEPORTBUS=y
-CONFIG_PCI_MSI=y
-CONFIG_RAPIDIO=y
-CONFIG_FSL_RIO=y
-CONFIG_NET=y
-CONFIG_PACKET=y
-CONFIG_UNIX=y
-CONFIG_XFRM_USER=y
-CONFIG_NET_KEY=y
-CONFIG_INET=y
-CONFIG_IP_MULTICAST=y
-CONFIG_IP_ADVANCED_ROUTER=y
-CONFIG_IP_MULTIPLE_TABLES=y
-CONFIG_IP_ROUTE_MULTIPATH=y
-CONFIG_IP_ROUTE_VERBOSE=y
-CONFIG_IP_PNP=y
-CONFIG_IP_PNP_DHCP=y
-CONFIG_IP_PNP_BOOTP=y
-CONFIG_IP_PNP_RARP=y
-CONFIG_NET_IPIP=y
-CONFIG_IP_MROUTE=y
-CONFIG_IP_PIMSM_V1=y
-CONFIG_IP_PIMSM_V2=y
-CONFIG_INET_ESP=y
-# CONFIG_INET_XFRM_MODE_BEET is not set
-# CONFIG_INET_LRO is not set
-CONFIG_IPV6=y
-CONFIG_IP_SCTP=m
-CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
-CONFIG_DEVTMPFS=y
-CONFIG_DEVTMPFS_MOUNT=y
-CONFIG_MTD=y
-CONFIG_MTD_CMDLINE_PARTS=y
-CONFIG_MTD_BLOCK=y
-CONFIG_FTL=y
-CONFIG_MTD_CFI=y
-CONFIG_MTD_CFI_INTELEXT=y
-CONFIG_MTD_CFI_AMDSTD=y
-CONFIG_MTD_PHYSMAP_OF=y
-CONFIG_MTD_NAND=y
-CONFIG_MTD_NAND_FSL_ELBC=y
-CONFIG_MTD_NAND_FSL_IFC=y
-CONFIG_MTD_SPI_NOR=y
-CONFIG_MTD_UBI=y
-CONFIG_BLK_DEV_LOOP=y
-CONFIG_BLK_DEV_RAM=y
-CONFIG_BLK_DEV_RAM_SIZE=131072
-CONFIG_EEPROM_LEGACY=y
-CONFIG_BLK_DEV_SD=y
-CONFIG_BLK_DEV_SR=y
-CONFIG_BLK_DEV_SR_VENDOR=y
-CONFIG_CHR_DEV_SG=y
-CONFIG_ATA=y
-CONFIG_SATA_FSL=y
-CONFIG_SATA_SIL24=y
-CONFIG_NETDEVICES=y
-CONFIG_DUMMY=y
-CONFIG_FSL_PQ_MDIO=y
-CONFIG_FSL_XGMAC_MDIO=y
-CONFIG_E1000E=y
-CONFIG_VITESSE_PHY=y
-CONFIG_FIXED_PHY=y
-CONFIG_MDIO_BUS_MUX_GPIO=y
-CONFIG_MDIO_BUS_MUX_MMIOREG=y
-CONFIG_INPUT_FF_MEMLESS=m
-# CONFIG_INPUT_MOUSEDEV is not set
-# CONFIG_INPUT_KEYBOARD is not set
-# CONFIG_INPUT_MOUSE is not set
-CONFIG_SERIO_LIBPS2=y
-CONFIG_PPC_EPAPR_HV_BYTECHAN=y
-CONFIG_SERIAL_8250=y
-CONFIG_SERIAL_8250_CONSOLE=y
-CONFIG_SERIAL_8250_MANY_PORTS=y
-CONFIG_SERIAL_8250_DETECT_IRQ=y
-CONFIG_SERIAL_8250_RSA=y
-CONFIG_I2C=y
-CONFIG_I2C_CHARDEV=y
-CONFIG_I2C_MPC=y
-CONFIG_I2C_MUX=y
-CONFIG_I2C_MUX_PCA954x=y
-CONFIG_SPI=y
-CONFIG_SPI_GPIO=y
-CONFIG_SPI_FSL_SPI=y
-CONFIG_SPI_FSL_ESPI=y
-CONFIG_SENSORS_LM90=y
-CONFIG_SENSORS_INA2XX=y
-CONFIG_USB_HID=m
-CONFIG_USB=y
-CONFIG_USB_MON=y
-CONFIG_USB_EHCI_HCD=y
-CONFIG_USB_EHCI_FSL=y
-CONFIG_USB_STORAGE=y
-CONFIG_MMC=y
-CONFIG_MMC_SDHCI=y
-CONFIG_EDAC=y
-CONFIG_EDAC_MM_EDAC=y
-CONFIG_RTC_CLASS=y
-CONFIG_RTC_DRV_DS1307=y
-CONFIG_RTC_DRV_DS1374=y
-CONFIG_RTC_DRV_DS3232=y
-CONFIG_DMADEVICES=y
-CONFIG_FSL_DMA=y
-CONFIG_VIRT_DRIVERS=y
-CONFIG_FSL_HV_MANAGER=y
-CONFIG_CLK_QORIQ=y
-CONFIG_FSL_CORENET_CF=y
-CONFIG_EXT2_FS=y
-CONFIG_EXT3_FS=y
-CONFIG_ISO9660_FS=m
-CONFIG_JOLIET=y
-CONFIG_ZISOFS=y
-CONFIG_UDF_FS=m
-CONFIG_MSDOS_FS=m
-CONFIG_VFAT_FS=y
-CONFIG_NTFS_FS=y
-CONFIG_PROC_KCORE=y
-CONFIG_TMPFS=y
-CONFIG_HUGETLBFS=y
-CONFIG_JFFS2_FS=y
-CONFIG_JFFS2_FS_DEBUG=1
-CONFIG_UBIFS_FS=y
-CONFIG_NFS_FS=y
-CONFIG_NFS_V4=y
-CONFIG_ROOT_NFS=y
-CONFIG_NFSD=m
-CONFIG_NLS_CODEPAGE_437=y
-CONFIG_NLS_CODEPAGE_850=y
-CONFIG_NLS_ISO8859_1=y
-CONFIG_NLS_UTF8=m
-CONFIG_CRC_T10DIF=y
-CONFIG_DEBUG_INFO=y
-CONFIG_FRAME_WARN=1024
-CONFIG_DEBUG_FS=y
-CONFIG_MAGIC_SYSRQ=y
-CONFIG_DEBUG_SHIRQ=y
-CONFIG_DETECT_HUNG_TASK=y
-CONFIG_CRYPTO_NULL=y
-CONFIG_CRYPTO_PCBC=m
-CONFIG_CRYPTO_MD4=y
-CONFIG_CRYPTO_SHA256=y
-CONFIG_CRYPTO_SHA512=y
-# CONFIG_CRYPTO_ANSI_CPRNG is not set
-CONFIG_CRYPTO_DEV_FSL_CAAM=y
+1
arch/powerpc/configs/corenet_basic_defconfig
···
+CONFIG_CORENET_GENERIC=y
+126
arch/powerpc/configs/fsl-emb-nonhw.config
···
+CONFIG_ADFS_FS=m
+CONFIG_AFFS_FS=m
+CONFIG_AUDIT=y
+CONFIG_BEFS_FS=m
+CONFIG_BFS_FS=m
+CONFIG_BINFMT_MISC=m
+# CONFIG_BLK_DEV_BSG is not set
+CONFIG_BLK_DEV_INITRD=y
+CONFIG_BLK_DEV_LOOP=y
+CONFIG_BLK_DEV_NBD=y
+CONFIG_BLK_DEV_RAM_SIZE=131072
+CONFIG_BLK_DEV_RAM=y
+CONFIG_BSD_PROCESS_ACCT=y
+CONFIG_CGROUP_CPUACCT=y
+CONFIG_CGROUP_SCHED=y
+CONFIG_CGROUPS=y
+# CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
+CONFIG_CRC_T10DIF=y
+CONFIG_CPUSETS=y
+CONFIG_CRAMFS=y
+CONFIG_CRYPTO_MD4=y
+CONFIG_CRYPTO_NULL=y
+CONFIG_CRYPTO_PCBC=m
+CONFIG_CRYPTO_SHA256=y
+CONFIG_CRYPTO_SHA512=y
+CONFIG_DEBUG_FS=y
+CONFIG_DEBUG_INFO=y
+CONFIG_DEBUG_SHIRQ=y
+CONFIG_DETECT_HUNG_TASK=y
+CONFIG_DEVTMPFS_MOUNT=y
+CONFIG_DEVTMPFS=y
+CONFIG_DUMMY=y
+CONFIG_EFS_FS=m
+CONFIG_EXPERT=y
+CONFIG_EXT2_FS=y
+# CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set
+CONFIG_EXT3_FS=y
+CONFIG_FB=y
+CONFIG_FHANDLE=y
+CONFIG_FIXED_PHY=y
+CONFIG_FONT_8x16=y
+CONFIG_FONT_8x8=y
+CONFIG_FONTS=y
+CONFIG_FORCE_MAX_ZONEORDER=13
+CONFIG_FRAMEBUFFER_CONSOLE=y
+CONFIG_FRAME_WARN=1024
+CONFIG_FTL=y
+CONFIG_HFS_FS=m
+CONFIG_HFSPLUS_FS=m
+CONFIG_HIGH_RES_TIMERS=y
+CONFIG_HPFS_FS=m
+CONFIG_HUGETLBFS=y
+CONFIG_IKCONFIG_PROC=y
+CONFIG_IKCONFIG=y
+CONFIG_INET_AH=y
+CONFIG_INET_ESP=y
+CONFIG_INET_IPCOMP=y
+# CONFIG_INET_LRO is not set
+# CONFIG_INET_XFRM_MODE_BEET is not set
+CONFIG_INET=y
+CONFIG_IP_ADVANCED_ROUTER=y
+CONFIG_IP_MROUTE=y
+CONFIG_IP_MULTICAST=y
+CONFIG_IP_MULTIPLE_TABLES=y
+CONFIG_IP_PIMSM_V1=y
+CONFIG_IP_PIMSM_V2=y
+CONFIG_IP_PNP_BOOTP=y
+CONFIG_IP_PNP_DHCP=y
+CONFIG_IP_PNP_RARP=y
+CONFIG_IP_PNP=y
+CONFIG_IP_ROUTE_MULTIPATH=y
+CONFIG_IP_ROUTE_VERBOSE=y
+CONFIG_IP_SCTP=m
+CONFIG_IPV6=y
+CONFIG_IRQ_DOMAIN_DEBUG=y
+CONFIG_ISO9660_FS=m
+CONFIG_JFFS2_FS_DEBUG=1
+CONFIG_JFFS2_FS=y
+CONFIG_JOLIET=y
+CONFIG_KALLSYMS_ALL=y
+# CONFIG_LEGACY_PTYS is not set
+CONFIG_LOG_BUF_SHIFT=14
+CONFIG_MAC_PARTITION=y
+CONFIG_MAGIC_SYSRQ=y
+CONFIG_MODULE_FORCE_UNLOAD=y
+CONFIG_MODULES=y
+CONFIG_MODULE_UNLOAD=y
+CONFIG_MODVERSIONS=y
+CONFIG_MSDOS_FS=m
+CONFIG_MTD_UBI=y
+CONFIG_MTD=y
+CONFIG_NET_IPIP=y
+CONFIG_NET_KEY_MIGRATE=y
+CONFIG_NET_KEY=y
+CONFIG_NET=y
+CONFIG_NFSD=y
+CONFIG_NFS_FS=y
+CONFIG_NFS_V4=y
+CONFIG_NLS_CODEPAGE_437=y
+CONFIG_NLS_CODEPAGE_850=y
+CONFIG_NLS_ISO8859_1=y
+CONFIG_NLS_UTF8=m
+CONFIG_NO_HZ=y
+CONFIG_NTFS_FS=y
+CONFIG_PACKET=y
+CONFIG_PARTITION_ADVANCED=y
+CONFIG_PERF_EVENTS=y
+CONFIG_POSIX_MQUEUE=y
+CONFIG_QNX4FS_FS=m
+CONFIG_RCU_TRACE=y
+CONFIG_ROOT_NFS=y
+CONFIG_SYSV_FS=m
+CONFIG_SYSVIPC=y
+CONFIG_TMPFS=y
+CONFIG_UBIFS_FS=y
+CONFIG_UDF_FS=m
+CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
+CONFIG_UFS_FS=m
+CONFIG_UIO=y
+CONFIG_UNIX=y
+CONFIG_VFAT_FS=y
+CONFIG_VXFS_FS=m
+CONFIG_XFRM_STATISTICS=y
+CONFIG_XFRM_SUB_POLICY=y
+CONFIG_XFRM_USER=y
+CONFIG_ZISOFS=y
+23
arch/powerpc/configs/mpc85xx_basic_defconfig
···
+CONFIG_MATH_EMULATION=y
+CONFIG_MPC8536_DS=y
+CONFIG_MPC8540_ADS=y
+CONFIG_MPC8560_ADS=y
+CONFIG_MPC85xx_CDS=y
+CONFIG_MPC85xx_DS=y
+CONFIG_MPC85xx_MDS=y
+CONFIG_MPC85xx_RDB=y
+CONFIG_KSI8560=y
+CONFIG_MVME2500=y
+CONFIG_P1010_RDB=y
+CONFIG_P1022_DS=y
+CONFIG_P1022_RDK=y
+CONFIG_P1023_RDB=y
+CONFIG_SBC8548=y
+CONFIG_SOCRATES=y
+CONFIG_STX_GP3=y
+CONFIG_TQM8540=y
+CONFIG_TQM8541=y
+CONFIG_TQM8548=y
+CONFIG_TQM8555=y
+CONFIG_TQM8560=y
+CONFIG_XES_MPC85xx=y
-252
arch/powerpc/configs/mpc85xx_defconfig
···
-CONFIG_PPC_85xx=y
-CONFIG_PHYS_64BIT=y
-CONFIG_SYSVIPC=y
-CONFIG_POSIX_MQUEUE=y
-CONFIG_AUDIT=y
-CONFIG_IRQ_DOMAIN_DEBUG=y
-CONFIG_NO_HZ=y
-CONFIG_HIGH_RES_TIMERS=y
-CONFIG_BSD_PROCESS_ACCT=y
-CONFIG_IKCONFIG=y
-CONFIG_IKCONFIG_PROC=y
-CONFIG_LOG_BUF_SHIFT=14
-CONFIG_BLK_DEV_INITRD=y
-CONFIG_EXPERT=y
-CONFIG_KALLSYMS_ALL=y
-CONFIG_MODULES=y
-CONFIG_MODULE_UNLOAD=y
-CONFIG_MODULE_FORCE_UNLOAD=y
-CONFIG_MODVERSIONS=y
-# CONFIG_BLK_DEV_BSG is not set
-CONFIG_PARTITION_ADVANCED=y
-CONFIG_MAC_PARTITION=y
-CONFIG_C293_PCIE=y
-CONFIG_MPC8540_ADS=y
-CONFIG_MPC8560_ADS=y
-CONFIG_MPC85xx_CDS=y
-CONFIG_MPC85xx_MDS=y
-CONFIG_MPC8536_DS=y
-CONFIG_MPC85xx_DS=y
-CONFIG_MPC85xx_RDB=y
-CONFIG_P1010_RDB=y
-CONFIG_P1022_DS=y
-CONFIG_P1022_RDK=y
-CONFIG_P1023_RDB=y
-CONFIG_SOCRATES=y
-CONFIG_KSI8560=y
-CONFIG_XES_MPC85xx=y
-CONFIG_STX_GP3=y
-CONFIG_TQM8540=y
-CONFIG_TQM8541=y
-CONFIG_TQM8548=y
-CONFIG_TQM8555=y
-CONFIG_TQM8560=y
-CONFIG_SBC8548=y
-CONFIG_MVME2500=y
-CONFIG_QUICC_ENGINE=y
-CONFIG_QE_GPIO=y
-CONFIG_HIGHMEM=y
-CONFIG_BINFMT_MISC=m
-CONFIG_MATH_EMULATION=y
-CONFIG_FORCE_MAX_ZONEORDER=12
-CONFIG_PCI=y
-CONFIG_PCIEPORTBUS=y
-# CONFIG_PCIEASPM is not set
-CONFIG_PCI_MSI=y
-CONFIG_RAPIDIO=y
-CONFIG_NET=y
-CONFIG_PACKET=y
-CONFIG_UNIX=y
-CONFIG_XFRM_USER=y
-CONFIG_NET_KEY=y
-CONFIG_INET=y
-CONFIG_IP_MULTICAST=y
-CONFIG_IP_ADVANCED_ROUTER=y
-CONFIG_IP_MULTIPLE_TABLES=y
-CONFIG_IP_ROUTE_MULTIPATH=y
-CONFIG_IP_ROUTE_VERBOSE=y
-CONFIG_IP_PNP=y
-CONFIG_IP_PNP_DHCP=y
-CONFIG_IP_PNP_BOOTP=y
-CONFIG_IP_PNP_RARP=y
-CONFIG_NET_IPIP=y
-CONFIG_IP_MROUTE=y
-CONFIG_IP_PIMSM_V1=y
-CONFIG_IP_PIMSM_V2=y
-CONFIG_INET_ESP=y
-# CONFIG_INET_XFRM_MODE_BEET is not set
-# CONFIG_INET_LRO is not set
-CONFIG_IPV6=y
-CONFIG_IP_SCTP=m
-CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
-CONFIG_DEVTMPFS=y
-CONFIG_DEVTMPFS_MOUNT=y
-CONFIG_MTD=y
-CONFIG_MTD_CMDLINE_PARTS=y
-CONFIG_MTD_BLOCK=y
-CONFIG_FTL=y
-CONFIG_MTD_CFI=y
-CONFIG_MTD_CFI_INTELEXT=y
-CONFIG_MTD_CFI_AMDSTD=y
-CONFIG_MTD_PHYSMAP=y
-CONFIG_MTD_PHYSMAP_OF=y
-CONFIG_MTD_PLATRAM=y
-CONFIG_MTD_M25P80=y
-CONFIG_MTD_NAND=y
-CONFIG_MTD_NAND_FSL_ELBC=y
-CONFIG_MTD_NAND_FSL_IFC=y
-CONFIG_MTD_SPI_NOR=y
-CONFIG_MTD_UBI=y
-CONFIG_BLK_DEV_LOOP=y
-CONFIG_BLK_DEV_NBD=y
-CONFIG_BLK_DEV_RAM=y
-CONFIG_BLK_DEV_RAM_SIZE=131072
-CONFIG_EEPROM_AT24=y
-CONFIG_EEPROM_LEGACY=y
-CONFIG_BLK_DEV_SD=y
-CONFIG_CHR_DEV_ST=y
-CONFIG_BLK_DEV_SR=y
-CONFIG_CHR_DEV_SG=y
-CONFIG_SCSI_LOGGING=y
-CONFIG_ATA=y
-CONFIG_SATA_AHCI=y
-CONFIG_SATA_FSL=y
-CONFIG_SATA_SIL24=y
-CONFIG_PATA_ALI=y
-CONFIG_PATA_VIA=y
-CONFIG_NETDEVICES=y
-CONFIG_DUMMY=y
-CONFIG_FS_ENET=y
-CONFIG_UCC_GETH=y
-CONFIG_GIANFAR=y
-CONFIG_E1000=y
-CONFIG_E1000E=y
-CONFIG_IGB=y
-CONFIG_AT803X_PHY=y
-CONFIG_MARVELL_PHY=y
-CONFIG_DAVICOM_PHY=y
-CONFIG_CICADA_PHY=y
-CONFIG_VITESSE_PHY=y
-CONFIG_BROADCOM_PHY=y
-CONFIG_FIXED_PHY=y
-CONFIG_INPUT_FF_MEMLESS=m
-# CONFIG_INPUT_MOUSEDEV is not set
-# CONFIG_INPUT_KEYBOARD is not set
-# CONFIG_INPUT_MOUSE is not set
-CONFIG_SERIO_LIBPS2=y
-CONFIG_SERIAL_8250=y
-CONFIG_SERIAL_8250_CONSOLE=y
-CONFIG_SERIAL_8250_NR_UARTS=6
-CONFIG_SERIAL_8250_RUNTIME_UARTS=6
-CONFIG_SERIAL_8250_MANY_PORTS=y
-CONFIG_SERIAL_8250_DETECT_IRQ=y
-CONFIG_SERIAL_8250_RSA=y
-CONFIG_SERIAL_QE=m
-CONFIG_NVRAM=y
-CONFIG_I2C_CHARDEV=y
-CONFIG_I2C_CPM=m
-CONFIG_I2C_MPC=y
-CONFIG_SPI=y
-CONFIG_SPI_FSL_SPI=y
-CONFIG_SPI_FSL_ESPI=y
-CONFIG_GPIO_MPC8XXX=y
-CONFIG_SENSORS_LM90=y
-CONFIG_FB=y
-CONFIG_FB_FSL_DIU=y
-# CONFIG_VGA_CONSOLE is not set
-CONFIG_FRAMEBUFFER_CONSOLE=y
-CONFIG_SOUND=y
-CONFIG_SND=y
-# CONFIG_SND_SUPPORT_OLD_API is not set
-# CONFIG_SND_DRIVERS is not set
-CONFIG_SND_INTEL8X0=y
-# CONFIG_SND_PPC is not set
-# CONFIG_SND_USB is not set
-CONFIG_SND_SOC=y
-CONFIG_SND_POWERPC_SOC=y
-CONFIG_HID_A4TECH=y
-CONFIG_HID_APPLE=y
-CONFIG_HID_BELKIN=y
-CONFIG_HID_CHERRY=y
-CONFIG_HID_CHICONY=y
-CONFIG_HID_CYPRESS=y
-CONFIG_HID_EZKEY=y
-CONFIG_HID_GYRATION=y
-CONFIG_HID_LOGITECH=y
-CONFIG_HID_MICROSOFT=y
-CONFIG_HID_MONTEREY=y
-CONFIG_HID_PANTHERLORD=y
-CONFIG_HID_PETALYNX=y
-CONFIG_HID_SAMSUNG=y
-CONFIG_HID_SUNPLUS=y
-CONFIG_USB=y
-CONFIG_USB_MON=y
-CONFIG_USB_EHCI_HCD=y
-CONFIG_USB_EHCI_FSL=y
-CONFIG_USB_OHCI_HCD=y
-CONFIG_USB_OHCI_HCD_PPC_OF_BE=y
-CONFIG_USB_OHCI_HCD_PPC_OF_LE=y
-CONFIG_USB_STORAGE=y
-CONFIG_MMC=y
-CONFIG_MMC_SDHCI=y
-CONFIG_MMC_SDHCI_PLTFM=y
-CONFIG_MMC_SDHCI_OF_ESDHC=y
-CONFIG_EDAC=y
-CONFIG_EDAC_MM_EDAC=y
-CONFIG_EDAC_MPC85XX=y
-CONFIG_RTC_CLASS=y
-CONFIG_RTC_DRV_DS1307=y
-CONFIG_RTC_DRV_DS1374=y
-CONFIG_RTC_DRV_DS3232=y
-CONFIG_RTC_DRV_CMOS=y
-CONFIG_DMADEVICES=y
-CONFIG_FSL_DMA=y
-CONFIG_EXT2_FS=y
-CONFIG_EXT3_FS=y
-# CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set
-CONFIG_ISO9660_FS=m
-CONFIG_JOLIET=y
-CONFIG_ZISOFS=y
-CONFIG_UDF_FS=m
-CONFIG_MSDOS_FS=m
-CONFIG_VFAT_FS=y
-CONFIG_NTFS_FS=y
-CONFIG_PROC_KCORE=y
-CONFIG_TMPFS=y
-CONFIG_HUGETLBFS=y
-CONFIG_ADFS_FS=m
-CONFIG_AFFS_FS=m
-CONFIG_HFS_FS=m
-CONFIG_HFSPLUS_FS=m
-CONFIG_BEFS_FS=m
-CONFIG_BFS_FS=m
-CONFIG_EFS_FS=m
-CONFIG_JFFS2_FS=y
-CONFIG_JFFS2_FS_DEBUG=1
-CONFIG_UBIFS_FS=y
-CONFIG_CRAMFS=y
-CONFIG_VXFS_FS=m
-CONFIG_HPFS_FS=m
-CONFIG_QNX4FS_FS=m
-CONFIG_SYSV_FS=m
-CONFIG_UFS_FS=m
-CONFIG_NFS_FS=y
-CONFIG_NFS_V4=y
-CONFIG_ROOT_NFS=y
-CONFIG_NFSD=y
-CONFIG_NLS_CODEPAGE_437=y
-CONFIG_NLS_CODEPAGE_850=y
-CONFIG_NLS_ISO8859_1=y
-CONFIG_CRC_T10DIF=y
-CONFIG_FONTS=y
-CONFIG_FONT_8x8=y
-CONFIG_FONT_8x16=y
-CONFIG_DEBUG_INFO=y
-CONFIG_DEBUG_FS=y
-CONFIG_DETECT_HUNG_TASK=y
-CONFIG_CRYPTO_PCBC=m
-CONFIG_CRYPTO_SHA256=y
-CONFIG_CRYPTO_SHA512=y
-# CONFIG_CRYPTO_ANSI_CPRNG is not set
-CONFIG_CRYPTO_DEV_FSL_CAAM=y
-CONFIG_CRYPTO_DEV_TALITOS=y
-244
arch/powerpc/configs/mpc85xx_smp_defconfig
···
-CONFIG_PPC_85xx=y
-CONFIG_PHYS_64BIT=y
-CONFIG_SMP=y
-CONFIG_NR_CPUS=8
-CONFIG_SYSVIPC=y
-CONFIG_POSIX_MQUEUE=y
-CONFIG_AUDIT=y
-CONFIG_IRQ_DOMAIN_DEBUG=y
-CONFIG_NO_HZ=y
-CONFIG_HIGH_RES_TIMERS=y
-CONFIG_BSD_PROCESS_ACCT=y
-CONFIG_IKCONFIG=y
-CONFIG_IKCONFIG_PROC=y
-CONFIG_LOG_BUF_SHIFT=14
-CONFIG_BLK_DEV_INITRD=y
-CONFIG_EXPERT=y
-CONFIG_KALLSYMS_ALL=y
-CONFIG_MODULES=y
-CONFIG_MODULE_UNLOAD=y
-CONFIG_MODULE_FORCE_UNLOAD=y
-CONFIG_MODVERSIONS=y
-# CONFIG_BLK_DEV_BSG is not set
-CONFIG_PARTITION_ADVANCED=y
-CONFIG_MAC_PARTITION=y
-CONFIG_C293_PCIE=y
-CONFIG_MPC8540_ADS=y
-CONFIG_MPC8560_ADS=y
-CONFIG_MPC85xx_CDS=y
-CONFIG_MPC85xx_MDS=y
-CONFIG_MPC8536_DS=y
-CONFIG_MPC85xx_DS=y
-CONFIG_MPC85xx_RDB=y
-CONFIG_P1010_RDB=y
-CONFIG_P1022_DS=y
-CONFIG_P1022_RDK=y
-CONFIG_P1023_RDB=y
-CONFIG_SOCRATES=y
-CONFIG_KSI8560=y
-CONFIG_XES_MPC85xx=y
-CONFIG_STX_GP3=y
-CONFIG_TQM8540=y
-CONFIG_TQM8541=y
-CONFIG_TQM8548=y
-CONFIG_TQM8555=y
-CONFIG_TQM8560=y
-CONFIG_SBC8548=y
-CONFIG_QUICC_ENGINE=y
-CONFIG_QE_GPIO=y
-CONFIG_HIGHMEM=y
-CONFIG_BINFMT_MISC=m
-CONFIG_MATH_EMULATION=y
-CONFIG_FORCE_MAX_ZONEORDER=12
-CONFIG_PCI=y
-CONFIG_PCI_MSI=y
-CONFIG_RAPIDIO=y
-CONFIG_NET=y
-CONFIG_PACKET=y
-CONFIG_UNIX=y
-CONFIG_XFRM_USER=y
-CONFIG_NET_KEY=y
-CONFIG_INET=y
-CONFIG_IP_MULTICAST=y
-CONFIG_IP_ADVANCED_ROUTER=y
-CONFIG_IP_MULTIPLE_TABLES=y
-CONFIG_IP_ROUTE_MULTIPATH=y
-CONFIG_IP_ROUTE_VERBOSE=y
-CONFIG_IP_PNP=y
-CONFIG_IP_PNP_DHCP=y
-CONFIG_IP_PNP_BOOTP=y
-CONFIG_IP_PNP_RARP=y
-CONFIG_NET_IPIP=y
-CONFIG_IP_MROUTE=y
-CONFIG_IP_PIMSM_V1=y
-CONFIG_IP_PIMSM_V2=y
-CONFIG_INET_ESP=y
-# CONFIG_INET_XFRM_MODE_BEET is not set
-# CONFIG_INET_LRO is not set
-CONFIG_IPV6=y
-CONFIG_IP_SCTP=m
-CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
-CONFIG_DEVTMPFS=y
-CONFIG_DEVTMPFS_MOUNT=y
-CONFIG_MTD=y
-CONFIG_MTD_CMDLINE_PARTS=y
-CONFIG_MTD_BLOCK=y
-CONFIG_FTL=y
-CONFIG_MTD_CFI=y
-CONFIG_MTD_CFI_INTELEXT=y
-CONFIG_MTD_CFI_AMDSTD=y
-CONFIG_MTD_PHYSMAP_OF=y
-CONFIG_MTD_NAND=y
-CONFIG_MTD_NAND_FSL_ELBC=y
-CONFIG_MTD_NAND_FSL_IFC=y
-CONFIG_MTD_SPI_NOR=y
-CONFIG_MTD_UBI=y
-CONFIG_BLK_DEV_LOOP=y
-CONFIG_BLK_DEV_NBD=y
-CONFIG_BLK_DEV_RAM=y
-CONFIG_BLK_DEV_RAM_SIZE=131072
-CONFIG_EEPROM_AT24=y
-CONFIG_EEPROM_LEGACY=y
-CONFIG_BLK_DEV_SD=y
-CONFIG_CHR_DEV_ST=y
-CONFIG_BLK_DEV_SR=y
-CONFIG_CHR_DEV_SG=y
-CONFIG_SCSI_LOGGING=y
-CONFIG_ATA=y
-CONFIG_SATA_AHCI=y
-CONFIG_SATA_FSL=y
-CONFIG_SATA_SIL24=y
-CONFIG_PATA_ALI=y
-CONFIG_NETDEVICES=y
-CONFIG_DUMMY=y
-CONFIG_FS_ENET=y
-CONFIG_UCC_GETH=y
-CONFIG_GIANFAR=y
-CONFIG_E1000E=y
-CONFIG_AT803X_PHY=y
-CONFIG_MARVELL_PHY=y
-CONFIG_DAVICOM_PHY=y
-CONFIG_CICADA_PHY=y
-CONFIG_VITESSE_PHY=y
-CONFIG_FIXED_PHY=y
-CONFIG_INPUT_FF_MEMLESS=m
-# CONFIG_INPUT_MOUSEDEV is not set
-# CONFIG_INPUT_KEYBOARD is not set
-# CONFIG_INPUT_MOUSE is not set
-CONFIG_SERIO_LIBPS2=y
-CONFIG_SERIAL_8250=y
-CONFIG_SERIAL_8250_CONSOLE=y
-CONFIG_SERIAL_8250_NR_UARTS=2
-CONFIG_SERIAL_8250_RUNTIME_UARTS=2
-CONFIG_SERIAL_8250_MANY_PORTS=y
-CONFIG_SERIAL_8250_DETECT_IRQ=y
-CONFIG_SERIAL_8250_RSA=y
-CONFIG_SERIAL_QE=m
-CONFIG_NVRAM=y
-CONFIG_I2C=y
-CONFIG_I2C_CHARDEV=y
-CONFIG_I2C_CPM=m
-CONFIG_I2C_MPC=y
-CONFIG_SPI=y
-CONFIG_SPI_FSL_SPI=y
-CONFIG_SPI_FSL_ESPI=y
-CONFIG_GPIO_MPC8XXX=y
-CONFIG_SENSORS_LM90=y
-CONFIG_FB=y
-CONFIG_FB_FSL_DIU=y
-# CONFIG_VGA_CONSOLE is not set
-CONFIG_FRAMEBUFFER_CONSOLE=y
-CONFIG_SOUND=y
-CONFIG_SND=y
-# CONFIG_SND_SUPPORT_OLD_API is not set
-# CONFIG_SND_DRIVERS is not set
-CONFIG_SND_INTEL8X0=y
-# CONFIG_SND_PPC is not set
-# CONFIG_SND_USB is not set
-CONFIG_SND_SOC=y
-CONFIG_SND_POWERPC_SOC=y
-CONFIG_HID_A4TECH=y
-CONFIG_HID_APPLE=y
-CONFIG_HID_BELKIN=y
-CONFIG_HID_CHERRY=y
-CONFIG_HID_CHICONY=y
-CONFIG_HID_CYPRESS=y
-CONFIG_HID_EZKEY=y
-CONFIG_HID_GYRATION=y
-CONFIG_HID_LOGITECH=y
-CONFIG_HID_MICROSOFT=y
-CONFIG_HID_MONTEREY=y
-CONFIG_HID_PANTHERLORD=y
-CONFIG_HID_PETALYNX=y
-CONFIG_HID_SAMSUNG=y
-CONFIG_HID_SUNPLUS=y
-CONFIG_USB=y
-CONFIG_USB_MON=y
-CONFIG_USB_EHCI_HCD=y
-CONFIG_USB_EHCI_FSL=y
-CONFIG_USB_OHCI_HCD=y
-CONFIG_USB_OHCI_HCD_PPC_OF_BE=y
-CONFIG_USB_OHCI_HCD_PPC_OF_LE=y
-CONFIG_USB_STORAGE=y
-CONFIG_MMC=y
-CONFIG_MMC_SDHCI=y
-CONFIG_MMC_SDHCI_PLTFM=y
-CONFIG_MMC_SDHCI_OF_ESDHC=y
-CONFIG_EDAC=y
-CONFIG_EDAC_MM_EDAC=y
-CONFIG_RTC_CLASS=y
-CONFIG_RTC_DRV_DS1307=y
-CONFIG_RTC_DRV_DS1374=y
-CONFIG_RTC_DRV_DS3232=y
-CONFIG_RTC_DRV_CMOS=y
-CONFIG_DMADEVICES=y
-CONFIG_FSL_DMA=y
-CONFIG_EXT2_FS=y
-CONFIG_EXT3_FS=y
-# CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set
-CONFIG_ISO9660_FS=m
-CONFIG_JOLIET=y
-CONFIG_ZISOFS=y
-CONFIG_UDF_FS=m
-CONFIG_MSDOS_FS=m
-CONFIG_VFAT_FS=y
-CONFIG_NTFS_FS=y
-CONFIG_PROC_KCORE=y
-CONFIG_TMPFS=y
-CONFIG_HUGETLBFS=y
-CONFIG_ADFS_FS=m
-CONFIG_AFFS_FS=m
-CONFIG_HFS_FS=m
-CONFIG_HFSPLUS_FS=m
-CONFIG_BEFS_FS=m
-CONFIG_BFS_FS=m
-CONFIG_EFS_FS=m
-CONFIG_JFFS2_FS=y
-CONFIG_JFFS2_FS_DEBUG=1
-CONFIG_UBIFS_FS=y
-CONFIG_CRAMFS=y
-CONFIG_VXFS_FS=m
-CONFIG_HPFS_FS=m
-CONFIG_QNX4FS_FS=m
-CONFIG_SYSV_FS=m
-CONFIG_UFS_FS=m
-CONFIG_NFS_FS=y
-CONFIG_NFS_V4=y
-CONFIG_ROOT_NFS=y
-CONFIG_NFSD=y
-CONFIG_NLS_CODEPAGE_437=y
-CONFIG_NLS_CODEPAGE_850=y
-CONFIG_NLS_ISO8859_1=y
-CONFIG_CRC_T10DIF=y
-CONFIG_FONTS=y
-CONFIG_FONT_8x8=y
-CONFIG_FONT_8x16=y
-CONFIG_DEBUG_INFO=y
-CONFIG_DEBUG_FS=y
-CONFIG_DETECT_HUNG_TASK=y
-CONFIG_CRYPTO_PCBC=m
-CONFIG_CRYPTO_SHA256=y
-CONFIG_CRYPTO_SHA512=y
-# CONFIG_CRYPTO_ANSI_CPRNG is not set
-CONFIG_CRYPTO_DEV_FSL_CAAM=y
-CONFIG_CRYPTO_DEV_TALITOS=y
+3
arch/powerpc/configs/ppc64_defconfig
···
 CONFIG_VIRTUALIZATION=y
 CONFIG_KVM_BOOK3S_64=m
 CONFIG_KVM_BOOK3S_64_HV=m
+CONFIG_NEW_LEDS=y
+CONFIG_LEDS_CLASS=m
+CONFIG_LEDS_POWERNV=m
+5 -1
arch/powerpc/configs/pseries_defconfig
···
 CONFIG_HVCS=m
 CONFIG_VIRTIO_CONSOLE=m
 CONFIG_IBM_BSR=m
-CONFIG_GEN_RTC=y
+CONFIG_RTC_CLASS=y
+CONFIG_RTC_DRV_GENERIC=y
 CONFIG_RAW_DRIVER=y
 CONFIG_MAX_RAW_DEVS=1024
 CONFIG_FB=y
···
 CONFIG_VIRTUALIZATION=y
 CONFIG_KVM_BOOK3S_64=m
 CONFIG_KVM_BOOK3S_64_HV=m
+CONFIG_NEW_LEDS=y
+CONFIG_LEDS_CLASS=m
+CONFIG_LEDS_POWERNV=m
-1
arch/powerpc/include/asm/Kbuild
···
 generic-y += mcs_spinlock.h
 generic-y += preempt.h
 generic-y += rwsem.h
-generic-y += trace_clock.h
 generic-y += vtime.h
+14 -14
arch/powerpc/include/asm/archrandom.h
···
 
 static inline int arch_get_random_long(unsigned long *v)
 {
-	if (ppc_md.get_random_long)
-		return ppc_md.get_random_long(v);
-
 	return 0;
 }
 
 static inline int arch_get_random_int(unsigned int *v)
+{
+	return 0;
+}
+
+static inline int arch_get_random_seed_long(unsigned long *v)
+{
+	if (ppc_md.get_random_seed)
+		return ppc_md.get_random_seed(v);
+
+	return 0;
+}
+static inline int arch_get_random_seed_int(unsigned int *v)
 {
 	unsigned long val;
 	int rc;
···
 
 static inline int arch_has_random(void)
 {
-	return !!ppc_md.get_random_long;
+	return 0;
 }
 
-static inline int arch_get_random_seed_long(unsigned long *v)
-{
-	return 0;
-}
-static inline int arch_get_random_seed_int(unsigned int *v)
-{
-	return 0;
-}
 static inline int arch_has_random_seed(void)
 {
-	return 0;
+	return !!ppc_md.get_random_seed;
 }
-
 #endif /* CONFIG_ARCH_RANDOM */
 
 #ifdef CONFIG_PPC_POWERNV
+6 -1
arch/powerpc/include/asm/cacheflush.h
···
 extern void flush_dcache_icache_page(struct page *page);
 #if defined(CONFIG_PPC32) && !defined(CONFIG_BOOKE)
 extern void __flush_dcache_icache_phys(unsigned long physaddr);
-#endif /* CONFIG_PPC32 && !CONFIG_BOOKE */
+#else
+static inline void __flush_dcache_icache_phys(unsigned long physaddr)
+{
+	BUG();
+}
+#endif
 
 extern void flush_dcache_range(unsigned long start, unsigned long stop);
 #ifdef CONFIG_PPC32
+28 -9
arch/powerpc/include/asm/checksum.h
···
 extern __sum16 ip_fast_csum(const void *iph, unsigned int ihl);
 
 /*
- * computes the checksum of the TCP/UDP pseudo-header
- * returns a 16-bit checksum, already complemented
- */
-extern __sum16 csum_tcpudp_magic(__be32 saddr, __be32 daddr,
-				 unsigned short len,
-				 unsigned short proto,
-				 __wsum sum);
-
-/*
  * computes the checksum of a memory block at buff, length len,
  * and adds in "sum" (32-bit)
  *
···
 	: "=r" (sum)
 	: "r" (daddr), "r"(saddr), "r"(proto + len), "0"(sum));
 	return sum;
+#endif
+}
+
+/*
+ * computes the checksum of the TCP/UDP pseudo-header
+ * returns a 16-bit checksum, already complemented
+ */
+static inline __sum16 csum_tcpudp_magic(__be32 saddr, __be32 daddr,
+					unsigned short len,
+					unsigned short proto,
+					__wsum sum)
+{
+	return csum_fold(csum_tcpudp_nofold(saddr, daddr, len, proto, sum));
+}
+
+#define HAVE_ARCH_CSUM_ADD
+static inline __wsum csum_add(__wsum csum, __wsum addend)
+{
+#ifdef __powerpc64__
+	u64 res = (__force u64)csum;
+
+	res += (__force u64)addend;
+	return (__force __wsum)((u32)res + (res >> 32));
+#else
+	asm("addc %0,%0,%1;"
+	    "addze %0,%0;"
+	    : "+r" (csum) : "r" (addend));
+	return csum;
 #endif
 }
 
+7
arch/powerpc/include/asm/compat.h
···
 		int _band;	/* POLL_IN, POLL_OUT, POLL_MSG */
 		int _fd;
 	} _sigpoll;
+
+	/* SIGSYS */
+	struct {
+		unsigned int _call_addr; /* calling insn */
+		int _syscall;		/* triggering system call number */
+		unsigned int _arch;	/* AUDIT_ARCH_* of syscall */
+	} _sigsys;
 } _sifields;
 } compat_siginfo_t;
 
+9 -6
arch/powerpc/include/asm/device.h
···
 struct device_node;
 #ifdef CONFIG_PPC64
 struct pci_dn;
+struct iommu_table;
 #endif
 
 /*
···
 	struct dma_map_ops *dma_ops;
 
 	/*
-	 * When an iommu is in use, dma_data is used as a ptr to the base of the
-	 * iommu_table. Otherwise, it is a simple numerical offset.
+	 * These two used to be a union. However, with the hybrid ops we need
+	 * both so here we store both a DMA offset for direct mappings and
+	 * an iommu_table for remapped DMA.
 	 */
-	union {
-		dma_addr_t	dma_offset;
-		void		*iommu_table_base;
-	} dma_data;
+	dma_addr_t		dma_offset;
+
+#ifdef CONFIG_PPC64
+	struct iommu_table	*iommu_table_base;
+#endif
 
 #ifdef CONFIG_IOMMU_API
 	void *iommu_domain;
+7 -7
arch/powerpc/include/asm/dma-mapping.h
···
 #define DMA_ERROR_CODE		(~(dma_addr_t)0x0)
 
 /* Some dma direct funcs must be visible for use in other dma_ops */
-extern void *dma_direct_alloc_coherent(struct device *dev, size_t size,
-				       dma_addr_t *dma_handle, gfp_t flag,
+extern void *__dma_direct_alloc_coherent(struct device *dev, size_t size,
+					 dma_addr_t *dma_handle, gfp_t flag,
+					 struct dma_attrs *attrs);
+extern void __dma_direct_free_coherent(struct device *dev, size_t size,
+				       void *vaddr, dma_addr_t dma_handle,
 				       struct dma_attrs *attrs);
-extern void dma_direct_free_coherent(struct device *dev, size_t size,
-				     void *vaddr, dma_addr_t dma_handle,
-				     struct dma_attrs *attrs);
 extern int dma_direct_mmap_coherent(struct device *dev,
 				    struct vm_area_struct *vma,
 				    void *cpu_addr, dma_addr_t handle,
···
 static inline dma_addr_t get_dma_offset(struct device *dev)
 {
 	if (dev)
-		return dev->archdata.dma_data.dma_offset;
+		return dev->archdata.dma_offset;
 
 	return PCI_DRAM_OFFSET;
 }
···
 static inline void set_dma_offset(struct device *dev, dma_addr_t off)
 {
 	if (dev)
-		dev->archdata.dma_data.dma_offset = off;
+		dev->archdata.dma_offset = off;
 }
 
 /* this will be removed soon */
+25 -6
arch/powerpc/include/asm/iommu.h
···
  * Copyright (C) 2001 Mike Corrigan & Dave Engebretsen, IBM Corporation
  * Rewrite, cleanup:
  * Copyright (C) 2004 Olof Johansson <olof@lixom.net>, IBM Corporation
- * 
+ *
  * This program is free software; you can redistribute it and/or modify
  * it under the terms of the GNU General Public License as published by
  * the Free Software Foundation; either version 2 of the License, or
  * (at your option) any later version.
- * 
+ *
  * This program is distributed in the hope that it will be useful,
  * but WITHOUT ANY WARRANTY; without even the implied warranty of
  * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
  * GNU General Public License for more details.
- * 
+ *
  * You should have received a copy of the GNU General Public License
  * along with this program; if not, write to the Free Software
  * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
···
 
 struct scatterlist;
 
-static inline void set_iommu_table_base(struct device *dev, void *base)
+#ifdef CONFIG_PPC64
+
+static inline void set_iommu_table_base(struct device *dev,
+					struct iommu_table *base)
 {
-	dev->archdata.dma_data.iommu_table_base = base;
+	dev->archdata.iommu_table_base = base;
 }
 
 static inline void *get_iommu_table_base(struct device *dev)
 {
-	return dev->archdata.dma_data.iommu_table_base;
+	return dev->archdata.iommu_table_base;
 }
+
+extern int dma_iommu_dma_supported(struct device *dev, u64 mask);
 
 /* Frees table for an individual device node */
 extern void iommu_free_table(struct iommu_table *tbl, const char *node_name);
···
 	return 0;
 }
 #endif /* !CONFIG_IOMMU_API */
+
+#else
+
+static inline void *get_iommu_table_base(struct device *dev)
+{
+	return NULL;
+}
+
+static inline int dma_iommu_dma_supported(struct device *dev, u64 mask)
+{
+	return 0;
+}
+
+#endif /* CONFIG_PPC64 */
 
 extern int ppc_iommu_map_sg(struct device *dev, struct iommu_table *tbl,
 			    struct scatterlist *sglist, int nelems,
+1 -1
arch/powerpc/include/asm/machdep.h
···
 #endif
 
 #ifdef CONFIG_ARCH_RANDOM
-	int (*get_random_long)(unsigned long *v);
+	int (*get_random_seed)(unsigned long *v);
 #endif
 };
 
+123 -1
arch/powerpc/include/asm/opal-api.h
···
 #define OPAL_FLASH_WRITE			111
 #define OPAL_FLASH_ERASE			112
 #define OPAL_PRD_MSG				113
-#define OPAL_LAST				113
+#define OPAL_LEDS_GET_INDICATOR			114
+#define OPAL_LEDS_SET_INDICATOR			115
+#define OPAL_CEC_REBOOT2			116
+#define OPAL_LAST				116
 
 /* Device tree flags */
 
···
 	OPAL_ASSERT_RESET = 1
 };
 
+enum OpalSlotLedType {
+	OPAL_SLOT_LED_TYPE_ID = 0,	/* IDENTIFY LED */
+	OPAL_SLOT_LED_TYPE_FAULT = 1,	/* FAULT LED */
+	OPAL_SLOT_LED_TYPE_ATTN = 2,	/* System Attention LED */
+	OPAL_SLOT_LED_TYPE_MAX = 3
+};
+
+enum OpalSlotLedState {
+	OPAL_SLOT_LED_STATE_OFF = 0,	/* LED is OFF */
+	OPAL_SLOT_LED_STATE_ON = 1	/* LED is ON */
+};
+
 /*
  * Address cycle types for LPC accesses. These also correspond
  * to the content of the first cell of the "reg" property for
···
 /* HMI interrupt event */
 enum OpalHMI_Version {
 	OpalHMIEvt_V1 = 1,
+	OpalHMIEvt_V2 = 2,
 };
 
 enum OpalHMI_Severity {
···
 	OpalHMI_ERROR_CAPP_RECOVERY,
 };
 
+enum OpalHMI_XstopType {
+	CHECKSTOP_TYPE_UNKNOWN	= 0,
+	CHECKSTOP_TYPE_CORE	= 1,
+	CHECKSTOP_TYPE_NX	= 2,
+};
+
+enum OpalHMI_CoreXstopReason {
+	CORE_CHECKSTOP_IFU_REGFILE		= 0x00000001,
+	CORE_CHECKSTOP_IFU_LOGIC		= 0x00000002,
+	CORE_CHECKSTOP_PC_DURING_RECOV		= 0x00000004,
+	CORE_CHECKSTOP_ISU_REGFILE		= 0x00000008,
+	CORE_CHECKSTOP_ISU_LOGIC		= 0x00000010,
+	CORE_CHECKSTOP_FXU_LOGIC		= 0x00000020,
+	CORE_CHECKSTOP_VSU_LOGIC		= 0x00000040,
+	CORE_CHECKSTOP_PC_RECOV_IN_MAINT_MODE	= 0x00000080,
+	CORE_CHECKSTOP_LSU_REGFILE		= 0x00000100,
+	CORE_CHECKSTOP_PC_FWD_PROGRESS		= 0x00000200,
+	CORE_CHECKSTOP_LSU_LOGIC		= 0x00000400,
+	CORE_CHECKSTOP_PC_LOGIC			= 0x00000800,
+	CORE_CHECKSTOP_PC_HYP_RESOURCE		= 0x00001000,
+	CORE_CHECKSTOP_PC_HANG_RECOV_FAILED	= 0x00002000,
+	CORE_CHECKSTOP_PC_AMBI_HANG_DETECTED	= 0x00004000,
+	CORE_CHECKSTOP_PC_DEBUG_TRIG_ERR_INJ	= 0x00008000,
+	CORE_CHECKSTOP_PC_SPRD_HYP_ERR_INJ	= 0x00010000,
+};
+
+enum OpalHMI_NestAccelXstopReason {
+	NX_CHECKSTOP_SHM_INVAL_STATE_ERR	= 0x00000001,
+	NX_CHECKSTOP_DMA_INVAL_STATE_ERR_1	= 0x00000002,
+	NX_CHECKSTOP_DMA_INVAL_STATE_ERR_2	= 0x00000004,
+	NX_CHECKSTOP_DMA_CH0_INVAL_STATE_ERR	= 0x00000008,
+	NX_CHECKSTOP_DMA_CH1_INVAL_STATE_ERR	= 0x00000010,
+	NX_CHECKSTOP_DMA_CH2_INVAL_STATE_ERR	= 0x00000020,
+	NX_CHECKSTOP_DMA_CH3_INVAL_STATE_ERR	= 0x00000040,
+	NX_CHECKSTOP_DMA_CH4_INVAL_STATE_ERR	= 0x00000080,
+	NX_CHECKSTOP_DMA_CH5_INVAL_STATE_ERR	= 0x00000100,
+	NX_CHECKSTOP_DMA_CH6_INVAL_STATE_ERR	= 0x00000200,
+	NX_CHECKSTOP_DMA_CH7_INVAL_STATE_ERR	= 0x00000400,
+	NX_CHECKSTOP_DMA_CRB_UE			= 0x00000800,
+	NX_CHECKSTOP_DMA_CRB_SUE		= 0x00001000,
+	NX_CHECKSTOP_PBI_ISN_UE			= 0x00002000,
+};
+
 struct OpalHMIEvent {
 	uint8_t		version;	/* 0x00 */
 	uint8_t		severity;	/* 0x01 */
···
 	__be64	hmer;
 	/* TFMR register. Valid only for TFAC and TFMR_PARITY error type. */
 	__be64	tfmr;
+
+	/* version 2 and later */
+	union {
+		/*
+		 * checkstop info (Core/NX).
+		 * Valid for OpalHMI_ERROR_MALFUNC_ALERT.
+		 */
+		struct {
+			uint8_t	xstop_type;	/* enum OpalHMI_XstopType */
+			uint8_t	reserved_1[3];
+			__be32	xstop_reason;
+			union {
+				__be32 pir;	/* for CHECKSTOP_TYPE_CORE */
+				__be32 chip_id;	/* for CHECKSTOP_TYPE_NX */
+			} u;
+		} xstop_error;
+	} u;
 };
 
 enum {
···
 	__be32 subaddr;		/* Sub-address if any */
 	__be32 size;		/* Data size */
 	__be64 buffer_ra;	/* Buffer real address */
+};
+
+/*
+ * EPOW status sharing (OPAL and the host)
+ *
+ * The host will pass OPAL a buffer of length OPAL_SYSEPOW_MAX
+ * with individual elements being 16 bits wide to fetch the system
+ * wide EPOW status. Each element in the buffer will contain the
+ * EPOW status in its bit representation for a particular EPOW sub
+ * class as defined here. So multiple detailed EPOW status bits
+ * specific to any sub class can be represented in a single buffer
+ * element as its bit representation.
+ */
+
+/* System EPOW type */
+enum OpalSysEpow {
+	OPAL_SYSEPOW_POWER	= 0,	/* Power EPOW */
+	OPAL_SYSEPOW_TEMP	= 1,	/* Temperature EPOW */
+	OPAL_SYSEPOW_COOLING	= 2,	/* Cooling EPOW */
+	OPAL_SYSEPOW_MAX	= 3,	/* Max EPOW categories */
+};
+
+/* Power EPOW */
+enum OpalSysPower {
+	OPAL_SYSPOWER_UPS	= 0x0001,	/* System on UPS power */
+	OPAL_SYSPOWER_CHNG	= 0x0002,	/* System power config change */
+	OPAL_SYSPOWER_FAIL	= 0x0004,	/* System impending power failure */
+	OPAL_SYSPOWER_INCL	= 0x0008,	/* System incomplete power */
+};
+
+/* Temperature EPOW */
+enum OpalSysTemp {
+	OPAL_SYSTEMP_AMB	= 0x0001,	/* System over ambient temperature */
+	OPAL_SYSTEMP_INT	= 0x0002,	/* System over internal temperature */
+	OPAL_SYSTEMP_HMD	= 0x0004,	/* System over ambient humidity */
+};
+
+/* Cooling EPOW */
+enum OpalSysCooling {
+	OPAL_SYSCOOL_INSF	= 0x0001,	/* System insufficient cooling */
+};
+
+/* Argument to OPAL_CEC_REBOOT2() */
+enum {
+	OPAL_REBOOT_NORMAL		= 0,
+	OPAL_REBOOT_PLATFORM_ERROR	= 1,
+};
 
 #endif /* __ASSEMBLY__ */
+7 -1
arch/powerpc/include/asm/opal.h
··· 44 44 uint32_t hour_min); 45 45 int64_t opal_cec_power_down(uint64_t request); 46 46 int64_t opal_cec_reboot(void); 47 + int64_t opal_cec_reboot2(uint32_t reboot_type, char *diag); 47 48 int64_t opal_read_nvram(uint64_t buffer, uint64_t size, uint64_t offset); 48 49 int64_t opal_write_nvram(uint64_t buffer, uint64_t size, uint64_t offset); 49 50 int64_t opal_handle_interrupt(uint64_t isn, __be64 *outstanding_event_mask); ··· 142 141 int64_t opal_pci_reinit(uint64_t phb_id, uint64_t reinit_scope, uint64_t data); 143 142 int64_t opal_pci_mask_pe_error(uint64_t phb_id, uint16_t pe_number, uint8_t error_type, uint8_t mask_action); 144 143 int64_t opal_set_slot_led_status(uint64_t phb_id, uint64_t slot_id, uint8_t led_type, uint8_t led_action); 145 - int64_t opal_get_epow_status(__be64 *status); 144 + int64_t opal_get_epow_status(__be16 *epow_status, __be16 *num_epow_classes); 145 + int64_t opal_get_dpo_status(__be64 *dpo_timeout); 146 146 int64_t opal_set_system_attention_led(uint8_t led_action); 147 147 int64_t opal_pci_next_error(uint64_t phb_id, __be64 *first_frozen_pe, 148 148 __be16 *pci_error_type, __be16 *severity); ··· 197 195 int64_t opal_i2c_request(uint64_t async_token, uint32_t bus_id, 198 196 struct opal_i2c_request *oreq); 199 197 int64_t opal_prd_msg(struct opal_prd_msg *msg); 198 + int64_t opal_leds_get_ind(char *loc_code, __be64 *led_mask, 199 + __be64 *led_value, __be64 *max_led_type); 200 + int64_t opal_leds_set_ind(uint64_t token, char *loc_code, const u64 led_mask, 201 + const u64 led_value, __be64 *max_led_type); 200 202 201 203 int64_t opal_flash_read(uint64_t id, uint64_t offset, uint64_t buf, 202 204 uint64_t size, uint64_t token);
+1
arch/powerpc/include/asm/pci-bridge.h
··· 42 42 #endif 43 43 44 44 int (*dma_set_mask)(struct pci_dev *dev, u64 dma_mask); 45 + u64 (*dma_get_required_mask)(struct pci_dev *dev); 45 46 46 47 void (*shutdown)(struct pci_controller *); 47 48 };
+4 -4
arch/powerpc/include/asm/pgtable-ppc64.h
··· 134 134 135 135 #define pte_iterate_hashed_end() } while(0) 136 136 137 - #ifdef CONFIG_PPC_HAS_HASH_64K 138 - #define pte_pagesize_index(mm, addr, pte) get_slice_psize(mm, addr) 139 - #else 137 + /* 138 + * We expect this to be called only for user addresses or kernel virtual 139 + * addresses other than the linear mapping. 140 + */ 140 141 #define pte_pagesize_index(mm, addr, pte) MMU_PAGE_4K 141 - #endif 142 142 143 143 #endif /* __real_pte */ 144 144
+11
arch/powerpc/include/asm/pgtable.h
··· 169 169 * cases, and 32-bit non-hash with 32-bit PTEs. 170 170 */ 171 171 *ptep = pte; 172 + 173 + #ifdef CONFIG_PPC_BOOK3E_64 174 + /* 175 + * With hardware tablewalk, a sync is needed to ensure that 176 + * subsequent accesses see the PTE we just wrote. Unlike userspace 177 + * mappings, we can't tolerate spurious faults, so make sure 178 + * the new PTE will be seen the first time. 179 + */ 180 + if (is_kernel_addr(addr)) 181 + mb(); 182 + #endif 172 183 #endif 173 184 } 174 185
+1
arch/powerpc/include/asm/ppc-pci.h
··· 61 61 int rtas_read_config(struct pci_dn *, int where, int size, u32 *val); 62 62 void eeh_pe_state_mark(struct eeh_pe *pe, int state); 63 63 void eeh_pe_state_clear(struct eeh_pe *pe, int state); 64 + void eeh_pe_state_mark_with_cfg(struct eeh_pe *pe, int state); 64 65 void eeh_pe_dev_mode_mark(struct eeh_pe *pe, int mode); 65 66 66 67 void eeh_sysfs_add_device(struct pci_dev *pdev);
-1
arch/powerpc/include/asm/processor.h
··· 264 264 u64 tm_tfhar; /* Transaction fail handler addr */ 265 265 u64 tm_texasr; /* Transaction exception & summary */ 266 266 u64 tm_tfiar; /* Transaction fail instr address reg */ 267 - unsigned long tm_orig_msr; /* Thread's MSR on ctx switch */ 268 267 struct pt_regs ckpt_regs; /* Checkpointed registers */ 269 268 270 269 unsigned long tm_tar;
+2 -1
arch/powerpc/include/asm/pte-common.h
··· 109 109 * the processor might need it for DMA coherency. 110 110 */ 111 111 #define _PAGE_BASE_NC (_PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_PSIZE) 112 - #if defined(CONFIG_SMP) || defined(CONFIG_PPC_STD_MMU) 112 + #if defined(CONFIG_SMP) || defined(CONFIG_PPC_STD_MMU) || \ 113 + defined(CONFIG_PPC_E500MC) 113 114 #define _PAGE_BASE (_PAGE_BASE_NC | _PAGE_COHERENT) 114 115 #else 115 116 #define _PAGE_BASE (_PAGE_BASE_NC)
+10 -2
arch/powerpc/include/asm/reg.h
··· 1193 1193 #ifdef CONFIG_PPC_BOOK3S_64 1194 1194 #define __mtmsrd(v, l) asm volatile("mtmsrd %0," __stringify(l) \ 1195 1195 : : "r" (v) : "memory") 1196 - #define mtmsrd(v) __mtmsrd((v), 0) 1197 - #define mtmsr(v) mtmsrd(v) 1196 + #define mtmsr(v) __mtmsrd((v), 0) 1198 1197 #else 1199 1198 #define mtmsr(v) asm volatile("mtmsr %0" : \ 1200 1199 : "r" ((unsigned long)(v)) \ ··· 1280 1281 1281 1282 extern void ppc_save_regs(struct pt_regs *regs); 1282 1283 1284 + static inline void update_power8_hid0(unsigned long hid0) 1285 + { 1286 + /* 1287 + * The HID0 update on Power8 should at the very least be 1288 + * preceded by a SYNC instruction followed by an ISYNC 1289 + * instruction 1290 + */ 1291 + asm volatile("sync; mtspr %0,%1; isync":: "i"(SPRN_HID0), "r"(hid0)); 1292 + } 1283 1293 #endif /* __ASSEMBLY__ */ 1284 1294 #endif /* __KERNEL__ */ 1285 1295 #endif /* _ASM_POWERPC_REG_H */
+1
arch/powerpc/include/asm/rtas.h
··· 343 343 extern void rtas_halt(void); 344 344 extern void rtas_os_term(char *str); 345 345 extern int rtas_get_sensor(int sensor, int index, int *state); 346 + extern int rtas_get_sensor_fast(int sensor, int index, int *state); 346 347 extern int rtas_get_power_level(int powerdomain, int *level); 347 348 extern int rtas_set_power_level(int powerdomain, int level, int *setlevel); 348 349 extern bool rtas_indicator_present(int token, int *maxindex);
-6
arch/powerpc/include/asm/spu_csa.h
··· 241 241 */ 242 242 struct spu_state { 243 243 struct spu_lscsa *lscsa; 244 - #ifdef CONFIG_SPU_FS_64K_LS 245 - int use_big_pages; 246 - /* One struct page per 64k page */ 247 - #define SPU_LSCSA_NUM_BIG_PAGES (sizeof(struct spu_lscsa) / 0x10000) 248 - struct page *lscsa_pages[SPU_LSCSA_NUM_BIG_PAGES]; 249 - #endif 250 244 struct spu_problem_collapsed prob; 251 245 struct spu_priv1_collapsed priv1; 252 246 struct spu_priv2_collapsed priv2;
+33 -21
arch/powerpc/include/asm/syscall.h
··· 22 22 extern const unsigned long sys_call_table[]; 23 23 #endif /* CONFIG_FTRACE_SYSCALLS */ 24 24 25 - static inline long syscall_get_nr(struct task_struct *task, 26 - struct pt_regs *regs) 25 + static inline int syscall_get_nr(struct task_struct *task, struct pt_regs *regs) 27 26 { 28 - return TRAP(regs) == 0xc00 ? regs->gpr[0] : -1L; 27 + /* 28 + * Note that we are returning an int here. That means 0xffffffff, ie. 29 + * 32-bit negative 1, will be interpreted as -1 on a 64-bit kernel. 30 + * This is important for seccomp so that compat tasks can set r0 = -1 31 + * to reject the syscall. 32 + */ 33 + return TRAP(regs) == 0xc00 ? regs->gpr[0] : -1; 29 34 } 30 35 31 36 static inline void syscall_rollback(struct task_struct *task, 32 37 struct pt_regs *regs) 33 38 { 34 39 regs->gpr[3] = regs->orig_gpr3; 35 - } 36 - 37 - static inline long syscall_get_error(struct task_struct *task, 38 - struct pt_regs *regs) 39 - { 40 - return (regs->ccr & 0x10000000) ? -regs->gpr[3] : 0; 41 40 } 42 41 43 42 static inline long syscall_get_return_value(struct task_struct *task, ··· 49 50 struct pt_regs *regs, 50 51 int error, long val) 51 52 { 53 + /* 54 + * In the general case it's not obvious that we must deal with CCR 55 + * here, as the syscall exit path will also do that for us. However 56 + * there are some places, eg. the signal code, which check ccr to 57 + * decide if the value in r3 is actually an error. 58 + */ 52 59 if (error) { 53 60 regs->ccr |= 0x10000000L; 54 - regs->gpr[3] = -error; 61 + regs->gpr[3] = error; 55 62 } else { 56 63 regs->ccr &= ~0x10000000L; 57 64 regs->gpr[3] = val; ··· 69 64 unsigned int i, unsigned int n, 70 65 unsigned long *args) 71 66 { 67 + unsigned long val, mask = -1UL; 68 + 72 69 BUG_ON(i + n > 6); 73 - #ifdef CONFIG_PPC64 74 - if (test_tsk_thread_flag(task, TIF_32BIT)) { 75 - /* 76 - * Zero-extend 32-bit argument values. The high bits are 77 - * garbage ignored by the actual syscall dispatch. 
78 - */ 79 - while (n-- > 0) 80 - args[n] = (u32) regs->gpr[3 + i + n]; 81 - return; 82 - } 70 + 71 + #ifdef CONFIG_COMPAT 72 + if (test_tsk_thread_flag(task, TIF_32BIT)) 73 + mask = 0xffffffff; 83 74 #endif 84 - memcpy(args, &regs->gpr[3 + i], n * sizeof(args[0])); 75 + while (n--) { 76 + if (n == 0 && i == 0) 77 + val = regs->orig_gpr3; 78 + else 79 + val = regs->gpr[3 + i + n]; 80 + 81 + args[n] = val & mask; 82 + } 85 83 } 86 84 87 85 static inline void syscall_set_arguments(struct task_struct *task, ··· 94 86 { 95 87 BUG_ON(i + n > 6); 96 88 memcpy(&regs->gpr[3 + i], args, n * sizeof(args[0])); 89 + 90 + /* Also copy the first argument into orig_gpr3 */ 91 + if (i == 0 && n > 0) 92 + regs->orig_gpr3 = args[0]; 97 93 } 98 94 99 95 static inline int syscall_get_arch(void)
+19
arch/powerpc/include/asm/trace_clock.h
··· 1 + /* 2 + * This program is free software; you can redistribute it and/or modify 3 + * it under the terms of the GNU General Public License, version 2, as 4 + * published by the Free Software Foundation. 5 + * 6 + * Copyright (C) 2015 Naveen N. Rao, IBM Corporation 7 + */ 8 + 9 + #ifndef _ASM_PPC_TRACE_CLOCK_H 10 + #define _ASM_PPC_TRACE_CLOCK_H 11 + 12 + #include <linux/compiler.h> 13 + #include <linux/types.h> 14 + 15 + extern u64 notrace trace_clock_ppc_tb(void); 16 + 17 + #define ARCH_TRACE_CLOCKS { trace_clock_ppc_tb, "ppc-tb", 0 }, 18 + 19 + #endif /* _ASM_PPC_TRACE_CLOCK_H */
+1
arch/powerpc/include/uapi/asm/Kbuild
··· 6 6 header-y += bootx.h 7 7 header-y += byteorder.h 8 8 header-y += cputable.h 9 + header-y += eeh.h 9 10 header-y += elf.h 10 11 header-y += epapr_hcalls.h 11 12 header-y += errno.h
-2
arch/powerpc/include/uapi/asm/errno.h
··· 6 6 #undef EDEADLOCK 7 7 #define EDEADLOCK 58 /* File locking deadlock error */ 8 8 9 - #define _LAST_ERRNO 516 10 - 11 9 #endif /* _ASM_POWERPC_ERRNO_H */
+2 -2
arch/powerpc/include/uapi/asm/sigcontext.h
··· 28 28 /* 29 29 * To maintain compatibility with current implementations the sigcontext is 30 30 * extended by appending a pointer (v_regs) to a quadword type (elf_vrreg_t) 31 - * followed by an unstructured (vmx_reserve) field of 69 doublewords. This 31 + * followed by an unstructured (vmx_reserve) field of 101 doublewords. This 32 32 * allows the array of vector registers to be quadword aligned independent of 33 33 * the alignment of the containing sigcontext or ucontext. It is the 34 34 * responsibility of the code setting the sigcontext to set this pointer to ··· 80 80 * registers and vscr/vrsave. 81 81 */ 82 82 elf_vrreg_t __user *v_regs; 83 - long vmx_reserve[ELF_NVRREG+ELF_NVRREG+32+1]; 83 + long vmx_reserve[ELF_NVRREG + ELF_NVRREG + 1 + 32]; 84 84 #endif 85 85 }; 86 86
+1
arch/powerpc/kernel/Makefile
··· 118 118 obj-$(CONFIG_DYNAMIC_FTRACE) += ftrace.o 119 119 obj-$(CONFIG_FUNCTION_GRAPH_TRACER) += ftrace.o 120 120 obj-$(CONFIG_FTRACE_SYSCALLS) += ftrace.o 121 + obj-$(CONFIG_TRACING) += trace_clock.o 121 122 122 123 ifneq ($(CONFIG_PPC_INDIRECT_PIO),y) 123 124 obj-y += iomap.o
-1
arch/powerpc/kernel/asm-offsets.c
··· 213 213 offsetof(struct tlb_core_data, esel_max)); 214 214 DEFINE(TCD_ESEL_FIRST, 215 215 offsetof(struct tlb_core_data, esel_first)); 216 - DEFINE(TCD_LOCK, offsetof(struct tlb_core_data, lock)); 217 216 #endif /* CONFIG_PPC_BOOK3E */ 218 217 219 218 #ifdef CONFIG_PPC_STD_MMU_64
+1 -1
arch/powerpc/kernel/dma-iommu.c
··· 73 73 } 74 74 75 75 /* We support DMA to/from any memory page via the iommu */ 76 - static int dma_iommu_dma_supported(struct device *dev, u64 mask) 76 + int dma_iommu_dma_supported(struct device *dev, u64 mask) 77 77 { 78 78 struct iommu_table *tbl = get_iommu_table_base(dev); 79 79
+2 -2
arch/powerpc/kernel/dma-swiotlb.c
··· 47 47 * for everything else. 48 48 */ 49 49 struct dma_map_ops swiotlb_dma_ops = { 50 - .alloc = dma_direct_alloc_coherent, 51 - .free = dma_direct_free_coherent, 50 + .alloc = __dma_direct_alloc_coherent, 51 + .free = __dma_direct_free_coherent, 52 52 .mmap = dma_direct_mmap_coherent, 53 53 .map_sg = swiotlb_map_sg_attrs, 54 54 .unmap_sg = swiotlb_unmap_sg_attrs,
+100 -18
arch/powerpc/kernel/dma.c
··· 16 16 #include <asm/bug.h> 17 17 #include <asm/machdep.h> 18 18 #include <asm/swiotlb.h> 19 + #include <asm/iommu.h> 19 20 20 21 /* 21 22 * Generic direct DMA implementation ··· 40 39 return pfn; 41 40 } 42 41 43 - void *dma_direct_alloc_coherent(struct device *dev, size_t size, 44 - dma_addr_t *dma_handle, gfp_t flag, 45 - struct dma_attrs *attrs) 42 + static int dma_direct_dma_supported(struct device *dev, u64 mask) 43 + { 44 + #ifdef CONFIG_PPC64 45 + u64 limit = get_dma_offset(dev) + (memblock_end_of_DRAM() - 1); 46 + 47 + /* Limit fits in the mask, we are good */ 48 + if (mask >= limit) 49 + return 1; 50 + 51 + #ifdef CONFIG_FSL_SOC 52 + /* Freescale gets another chance via ZONE_DMA/ZONE_DMA32, however 53 + * that will have to be refined if/when they support iommus 54 + */ 55 + return 1; 56 + #endif 57 + /* Sorry ... */ 58 + return 0; 59 + #else 60 + return 1; 61 + #endif 62 + } 63 + 64 + void *__dma_direct_alloc_coherent(struct device *dev, size_t size, 65 + dma_addr_t *dma_handle, gfp_t flag, 66 + struct dma_attrs *attrs) 46 67 { 47 68 void *ret; 48 69 #ifdef CONFIG_NOT_COHERENT_CACHE ··· 119 96 #endif 120 97 } 121 98 122 - void dma_direct_free_coherent(struct device *dev, size_t size, 123 - void *vaddr, dma_addr_t dma_handle, 124 - struct dma_attrs *attrs) 99 + void __dma_direct_free_coherent(struct device *dev, size_t size, 100 + void *vaddr, dma_addr_t dma_handle, 101 + struct dma_attrs *attrs) 125 102 { 126 103 #ifdef CONFIG_NOT_COHERENT_CACHE 127 104 __dma_free_coherent(size, vaddr); 128 105 #else 129 106 free_pages((unsigned long)vaddr, get_order(size)); 130 107 #endif 108 + } 109 + 110 + static void *dma_direct_alloc_coherent(struct device *dev, size_t size, 111 + dma_addr_t *dma_handle, gfp_t flag, 112 + struct dma_attrs *attrs) 113 + { 114 + struct iommu_table *iommu; 115 + 116 + /* The coherent mask may be smaller than the real mask, check if 117 + * we can really use the direct ops 118 + */ 119 + if (dma_direct_dma_supported(dev, 
dev->coherent_dma_mask)) 120 + return __dma_direct_alloc_coherent(dev, size, dma_handle, 121 + flag, attrs); 122 + 123 + /* Ok we can't ... do we have an iommu ? If not, fail */ 124 + iommu = get_iommu_table_base(dev); 125 + if (!iommu) 126 + return NULL; 127 + 128 + /* Try to use the iommu */ 129 + return iommu_alloc_coherent(dev, iommu, size, dma_handle, 130 + dev->coherent_dma_mask, flag, 131 + dev_to_node(dev)); 132 + } 133 + 134 + static void dma_direct_free_coherent(struct device *dev, size_t size, 135 + void *vaddr, dma_addr_t dma_handle, 136 + struct dma_attrs *attrs) 137 + { 138 + struct iommu_table *iommu; 139 + 140 + /* See comments in dma_direct_alloc_coherent() */ 141 + if (dma_direct_dma_supported(dev, dev->coherent_dma_mask)) 142 + return __dma_direct_free_coherent(dev, size, vaddr, dma_handle, 143 + attrs); 144 + /* Maybe we used an iommu ... */ 145 + iommu = get_iommu_table_base(dev); 146 + 147 + /* If we hit that we should have never allocated in the first 148 + * place so how come we are freeing ? 
149 + */ 150 + if (WARN_ON(!iommu)) 151 + return; 152 + iommu_free_coherent(iommu, size, vaddr, dma_handle); 131 153 } 132 154 133 155 int dma_direct_mmap_coherent(struct device *dev, struct vm_area_struct *vma, ··· 213 145 int nents, enum dma_data_direction direction, 214 146 struct dma_attrs *attrs) 215 147 { 216 - } 217 - 218 - static int dma_direct_dma_supported(struct device *dev, u64 mask) 219 - { 220 - #ifdef CONFIG_PPC64 221 - /* Could be improved so platforms can set the limit in case 222 - * they have limited DMA windows 223 - */ 224 - return mask >= get_dma_offset(dev) + (memblock_end_of_DRAM() - 1); 225 - #else 226 - return 1; 227 - #endif 228 148 } 229 149 230 150 static u64 dma_direct_get_required_mask(struct device *dev) ··· 286 230 }; 287 231 EXPORT_SYMBOL(dma_direct_ops); 288 232 233 + int dma_set_coherent_mask(struct device *dev, u64 mask) 234 + { 235 + if (!dma_supported(dev, mask)) { 236 + /* 237 + * We need to special case the direct DMA ops which can 238 + * support a fallback for coherent allocations. There 239 + * is no dma_op->set_coherent_mask() so we have to do 240 + * things the hard way: 241 + */ 242 + if (get_dma_ops(dev) != &dma_direct_ops || 243 + get_iommu_table_base(dev) == NULL || 244 + !dma_iommu_dma_supported(dev, mask)) 245 + return -EIO; 246 + } 247 + dev->coherent_dma_mask = mask; 248 + return 0; 249 + } 250 + EXPORT_SYMBOL_GPL(dma_set_coherent_mask); 251 + 289 252 #define PREALLOC_DMA_DEBUG_ENTRIES (1 << 16) 290 253 291 254 int __dma_set_mask(struct device *dev, u64 dma_mask) ··· 352 277 { 353 278 if (ppc_md.dma_get_required_mask) 354 279 return ppc_md.dma_get_required_mask(dev); 280 + 281 + if (dev_is_pci(dev)) { 282 + struct pci_dev *pdev = to_pci_dev(dev); 283 + struct pci_controller *phb = pci_bus_to_host(pdev->bus); 284 + if (phb->controller_ops.dma_get_required_mask) 285 + return phb->controller_ops.dma_get_required_mask(pdev); 286 + } 355 287 356 288 return __dma_get_required_mask(dev); 357 289 }
+24 -9
arch/powerpc/kernel/eeh.c
··· 308 308 if (!(pe->type & EEH_PE_PHB)) { 309 309 if (eeh_has_flag(EEH_ENABLE_IO_FOR_LOG)) 310 310 eeh_pci_enable(pe, EEH_OPT_THAW_MMIO); 311 - eeh_ops->configure_bridge(pe); 312 - eeh_pe_restore_bars(pe); 313 311 314 - pci_regs_buf[0] = 0; 315 - eeh_pe_traverse(pe, eeh_dump_pe_log, &loglen); 312 + /* 313 + * The config space of some PCI devices can't be accessed 314 + * when their PEs are in frozen state. Otherwise, fenced 315 + * PHB might be seen. Those PEs are identified with flag 316 + * EEH_PE_CFG_RESTRICTED, indicating EEH_PE_CFG_BLOCKED 317 + * is set automatically when the PE is put to EEH_PE_ISOLATED. 318 + * 319 + * Restoring BARs possibly triggers PCI config access in 320 + * (OPAL) firmware and then causes fenced PHB. If the 321 + * PCI config is blocked with flag EEH_PE_CFG_BLOCKED, it's 322 + * pointless to restore BARs and dump config space. 323 + */ 324 + eeh_ops->configure_bridge(pe); 325 + if (!(pe->state & EEH_PE_CFG_BLOCKED)) { 326 + eeh_pe_restore_bars(pe); 327 + 328 + pci_regs_buf[0] = 0; 329 + eeh_pe_traverse(pe, eeh_dump_pe_log, &loglen); 330 + } 316 331 } 317 332 318 333 eeh_ops->get_log(pe, severity, pci_regs_buf, loglen); ··· 765 750 eeh_pe_state_clear(pe, EEH_PE_ISOLATED); 766 751 break; 767 752 case pcie_hot_reset: 768 - eeh_pe_state_mark(pe, EEH_PE_ISOLATED); 753 + eeh_pe_state_mark_with_cfg(pe, EEH_PE_ISOLATED); 769 754 eeh_ops->set_option(pe, EEH_OPT_FREEZE_PE); 770 755 eeh_pe_dev_traverse(pe, eeh_disable_and_save_dev_state, dev); 771 756 eeh_pe_state_mark(pe, EEH_PE_CFG_BLOCKED); 772 757 eeh_ops->reset(pe, EEH_RESET_HOT); 773 758 break; 774 759 case pcie_warm_reset: 775 - eeh_pe_state_mark(pe, EEH_PE_ISOLATED); 760 + eeh_pe_state_mark_with_cfg(pe, EEH_PE_ISOLATED); 776 761 eeh_ops->set_option(pe, EEH_OPT_FREEZE_PE); 777 762 eeh_pe_dev_traverse(pe, eeh_disable_and_save_dev_state, dev); 778 763 eeh_pe_state_mark(pe, EEH_PE_CFG_BLOCKED); ··· 1131 1116 return; 1132 1117 } 1133 1118 1134 - if (eeh_has_flag(EEH_PROBE_MODE_DEV)) 1135 - 
eeh_ops->probe(pdn, NULL); 1136 - 1137 1119 /* 1138 1120 * The EEH cache might not be removed correctly because of 1139 1121 * unbalanced kref to the device during unplug time, which ··· 1153 1141 edev->pdev = NULL; 1154 1142 dev->dev.archdata.edev = NULL; 1155 1143 } 1144 + 1145 + if (eeh_has_flag(EEH_PROBE_MODE_DEV)) 1146 + eeh_ops->probe(pdn, NULL); 1156 1147 1157 1148 edev->pdev = dev; 1158 1149 dev->dev.archdata.edev = edev;
+22
arch/powerpc/kernel/eeh_pe.c
··· 657 657 eeh_pe_traverse(pe, __eeh_pe_state_clear, &state); 658 658 } 659 659 660 + /** 661 + * eeh_pe_state_mark_with_cfg - Mark PE state with unblocked config space 662 + * @pe: PE 663 + * @state: PE state to be set 664 + * 665 + * Set specified flag to PE and its child PEs. The PCI config space 666 + * of some PEs is blocked automatically when EEH_PE_ISOLATED is set, 667 + * which isn't needed in some situations. The function allows to set 668 + * the specified flag to indicated PEs without blocking their PCI 669 + * config space. 670 + */ 671 + void eeh_pe_state_mark_with_cfg(struct eeh_pe *pe, int state) 672 + { 673 + eeh_pe_traverse(pe, __eeh_pe_state_mark, &state); 674 + if (!(state & EEH_PE_ISOLATED)) 675 + return; 676 + 677 + /* Clear EEH_PE_CFG_BLOCKED, which might be set just now */ 678 + state = EEH_PE_CFG_BLOCKED; 679 + eeh_pe_traverse(pe, __eeh_pe_state_clear, &state); 680 + } 681 + 660 682 /* 661 683 * Some PCI bridges (e.g. PLX bridges) have primary/secondary 662 684 * buses assigned explicitly by firmware, and we probably have
+6 -1
arch/powerpc/kernel/entry_32.S
··· 20 20 */ 21 21 22 22 #include <linux/errno.h> 23 + #include <linux/err.h> 23 24 #include <linux/sys.h> 24 25 #include <linux/threads.h> 25 26 #include <asm/reg.h> ··· 355 354 SYNC 356 355 MTMSRD(r10) 357 356 lwz r9,TI_FLAGS(r12) 358 - li r8,-_LAST_ERRNO 357 + li r8,-MAX_ERRNO 359 358 andi. r0,r9,(_TIF_SYSCALL_DOTRACE|_TIF_SINGLESTEP|_TIF_USER_WORK_MASK|_TIF_PERSYSCALL_MASK) 360 359 bne- syscall_exit_work 361 360 cmplw 0,r3,r8 ··· 458 457 lwz r7,GPR7(r1) 459 458 lwz r8,GPR8(r1) 460 459 REST_NVGPRS(r1) 460 + 461 + cmplwi r0,NR_syscalls 462 + /* Return code is already in r3 thanks to do_syscall_trace_enter() */ 463 + bge- ret_from_syscall 461 464 b syscall_dotrace_cont 462 465 463 466 syscall_exit_work:
+20 -8
arch/powerpc/kernel/entry_64.S
··· 19 19 */ 20 20 21 21 #include <linux/errno.h> 22 + #include <linux/err.h> 22 23 #include <asm/unistd.h> 23 24 #include <asm/processor.h> 24 25 #include <asm/page.h> ··· 151 150 CURRENT_THREAD_INFO(r11, r1) 152 151 ld r10,TI_FLAGS(r11) 153 152 andi. r11,r10,_TIF_SYSCALL_DOTRACE 154 - bne syscall_dotrace 155 - .Lsyscall_dotrace_cont: 153 + bne syscall_dotrace /* does not return */ 156 154 cmpldi 0,r0,NR_syscalls 157 155 bge- syscall_enosys 158 156 ··· 207 207 #endif /* CONFIG_PPC_BOOK3E */ 208 208 209 209 ld r9,TI_FLAGS(r12) 210 - li r11,-_LAST_ERRNO 210 + li r11,-MAX_ERRNO 211 211 andi. r0,r9,(_TIF_SYSCALL_DOTRACE|_TIF_SINGLESTEP|_TIF_USER_WORK_MASK|_TIF_PERSYSCALL_MASK) 212 212 bne- syscall_exit_work 213 213 cmpld r3,r11 ··· 245 245 bl save_nvgprs 246 246 addi r3,r1,STACK_FRAME_OVERHEAD 247 247 bl do_syscall_trace_enter 248 + 248 249 /* 249 - * Restore argument registers possibly just changed. 250 - * We use the return value of do_syscall_trace_enter 251 - * for the call number to look up in the table (r0). 250 + * We use the return value of do_syscall_trace_enter() as the syscall 251 + * number. If the syscall was rejected for any reason do_syscall_trace_enter() 252 + * returns an invalid syscall number and the test below against 253 + * NR_syscalls will fail. 252 254 */ 253 255 mr r0,r3 256 + 257 + /* Restore argument registers just clobbered and/or possibly changed. 
*/ 254 258 ld r3,GPR3(r1) 255 259 ld r4,GPR4(r1) 256 260 ld r5,GPR5(r1) 257 261 ld r6,GPR6(r1) 258 262 ld r7,GPR7(r1) 259 263 ld r8,GPR8(r1) 264 + 265 + /* Repopulate r9 and r10 for the system_call path */ 260 266 addi r9,r1,STACK_FRAME_OVERHEAD 261 267 CURRENT_THREAD_INFO(r10, r1) 262 268 ld r10,TI_FLAGS(r10) 263 - b .Lsyscall_dotrace_cont 269 + 270 + cmpldi r0,NR_syscalls 271 + blt+ system_call 272 + 273 + /* Return code is already in r3 thanks to do_syscall_trace_enter() */ 274 + b .Lsyscall_exit 275 + 264 276 265 277 syscall_enosys: 266 278 li r3,-ENOSYS ··· 289 277 beq+ 0f 290 278 REST_NVGPRS(r1) 291 279 b 2f 292 - 0: cmpld r3,r11 /* r10 is -LAST_ERRNO */ 280 + 0: cmpld r3,r11 /* r11 is -MAX_ERRNO */ 293 281 blt+ 1f 294 282 andi. r0,r9,_TIF_NOERROR 295 283 bne- 1f
+8 -5
arch/powerpc/kernel/exceptions-64e.S
··· 1313 1313 sync 1314 1314 isync 1315 1315 1316 - /* The mapping only needs to be cache-coherent on SMP */ 1317 - #ifdef CONFIG_SMP 1318 - #define M_IF_SMP MAS2_M 1316 + /* 1317 + * The mapping only needs to be cache-coherent on SMP, except on 1318 + * Freescale e500mc derivatives where it's also needed for coherent DMA. 1319 + */ 1320 + #if defined(CONFIG_SMP) || defined(CONFIG_PPC_E500MC) 1321 + #define M_IF_NEEDED MAS2_M 1319 1322 #else 1320 - #define M_IF_SMP 0 1323 + #define M_IF_NEEDED 0 1321 1324 #endif 1322 1325 1323 1326 /* 6. Setup KERNELBASE mapping in TLB[0] ··· 1335 1332 ori r6,r6,(MAS1_TSIZE(BOOK3E_PAGESZ_1GB))@l 1336 1333 mtspr SPRN_MAS1,r6 1337 1334 1338 - LOAD_REG_IMMEDIATE(r6, PAGE_OFFSET | M_IF_SMP) 1335 + LOAD_REG_IMMEDIATE(r6, PAGE_OFFSET | M_IF_NEEDED) 1339 1336 mtspr SPRN_MAS2,r6 1340 1337 1341 1338 rlwinm r5,r5,0,0,25
+9 -6
arch/powerpc/kernel/fsl_booke_entry_mapping.S
··· 152 152 tlbivax 0,r9 153 153 TLBSYNC 154 154 155 - /* The mapping only needs to be cache-coherent on SMP */ 156 - #ifdef CONFIG_SMP 157 - #define M_IF_SMP MAS2_M 155 + /* 156 + * The mapping only needs to be cache-coherent on SMP, except on 157 + * Freescale e500mc derivatives where it's also needed for coherent DMA. 158 + */ 159 + #if defined(CONFIG_SMP) || defined(CONFIG_PPC_E500MC) 160 + #define M_IF_NEEDED MAS2_M 158 161 #else 159 - #define M_IF_SMP 0 162 + #define M_IF_NEEDED 0 160 163 #endif 161 164 162 165 #if defined(ENTRY_MAPPING_BOOT_SETUP) ··· 170 167 lis r6,(MAS1_VALID|MAS1_IPROT)@h 171 168 ori r6,r6,(MAS1_TSIZE(BOOK3E_PAGESZ_64M))@l 172 169 mtspr SPRN_MAS1,r6 173 - lis r6,MAS2_VAL(PAGE_OFFSET, BOOK3E_PAGESZ_64M, M_IF_SMP)@h 174 - ori r6,r6,MAS2_VAL(PAGE_OFFSET, BOOK3E_PAGESZ_64M, M_IF_SMP)@l 170 + lis r6,MAS2_VAL(PAGE_OFFSET, BOOK3E_PAGESZ_64M, M_IF_NEEDED)@h 171 + ori r6,r6,MAS2_VAL(PAGE_OFFSET, BOOK3E_PAGESZ_64M, M_IF_NEEDED)@l 175 172 mtspr SPRN_MAS2,r6 176 173 mtspr SPRN_MAS3,r8 177 174 tlbwe
-1
arch/powerpc/kernel/kvm.c
··· 649 649 kvm_patch_ins_mtsrin(inst, inst_rt, inst_rb); 650 650 } 651 651 break; 652 - break; 653 652 #endif 654 653 } 655 654
+11 -2
arch/powerpc/kernel/misc_64.S
··· 475 475 #ifdef CONFIG_KEXEC /* use no memory without kexec */ 476 476 lwz r4,0(r5) 477 477 cmpwi 0,r4,0 478 - bnea 0x60 478 + beq 99b 479 + #ifdef CONFIG_PPC_BOOK3S_64 480 + li r10,0x60 481 + mfmsr r11 482 + clrrdi r11,r11,1 /* Clear MSR_LE */ 483 + mtsrr0 r10 484 + mtsrr1 r11 485 + rfid 486 + #else 487 + ba 0x60 479 488 #endif 480 - b 99b 489 + #endif 481 490 482 491 /* this can be in text because we won't change it until we are 483 492 * running in real anyways
+5 -5
arch/powerpc/kernel/nvram_64.c
··· 541 541 time->tv_sec = be64_to_cpu(oops_hdr->timestamp); 542 542 time->tv_nsec = 0; 543 543 } 544 - *buf = kmalloc(length, GFP_KERNEL); 544 + *buf = kmemdup(buff + hdr_size, length, GFP_KERNEL); 545 545 if (*buf == NULL) 546 546 return -ENOMEM; 547 - memcpy(*buf, buff + hdr_size, length); 548 547 kfree(buff); 549 548 550 549 if (err_type == ERR_TYPE_KERNEL_PANIC_GZ) ··· 581 582 spin_lock_init(&nvram_pstore_info.buf_lock); 582 583 583 584 rc = pstore_register(&nvram_pstore_info); 584 - if (rc != 0) 585 - pr_err("nvram: pstore_register() failed, defaults to " 586 - "kmsg_dump; returned %d\n", rc); 585 + if (rc && (rc != -EPERM)) 586 + /* Print error only when pstore.backend == nvram */ 587 + pr_err("nvram: pstore_register() failed, returned %d. " 588 + "Defaults to kmsg_dump\n", rc); 587 589 588 590 return rc; 589 591 }
+18 -54
arch/powerpc/kernel/pci-common.c
···
823 823 (reg.start == 0 && !pci_has_flag(PCI_PROBE_ONLY))) {
824 824 /* Only print message if not re-assigning */
825 825 if (!pci_has_flag(PCI_REASSIGN_ALL_RSRC))
826 - pr_debug("PCI:%s Resource %d %016llx-%016llx [%x] "
827 - "is unassigned\n",
828 - pci_name(dev), i,
829 - (unsigned long long)res->start,
830 - (unsigned long long)res->end,
831 - (unsigned int)res->flags);
826 + pr_debug("PCI:%s Resource %d %pR is unassigned\n",
827 + pci_name(dev), i, res);
832 828 res->end -= res->start;
833 829 res->start = 0;
834 830 res->flags |= IORESOURCE_UNSET;
835 831 continue;
836 832 }
837 833
838 - pr_debug("PCI:%s Resource %d %016llx-%016llx [%x]\n",
839 - pci_name(dev), i,
840 - (unsigned long long)res->start,
841 - (unsigned long long)res->end,
842 - (unsigned int)res->flags);
834 + pr_debug("PCI:%s Resource %d %pR\n", pci_name(dev), i, res);
843 835 }
844 836
845 837 /* Call machine specific resource fixup */
···
935 943 continue;
936 944 }
937 945
938 - pr_debug("PCI:%s Bus rsrc %d %016llx-%016llx [%x]\n",
939 - pci_name(dev), i,
940 - (unsigned long long)res->start,
941 - (unsigned long long)res->end,
942 - (unsigned int)res->flags);
946 + pr_debug("PCI:%s Bus rsrc %d %pR\n", pci_name(dev), i, res);
943 947
944 948 /* Try to detect uninitialized P2P bridge resources,
945 949 * and clear them out so they get re-assigned later
···
1114 1126 *pp = NULL;
1115 1127 for (p = res->child; p != NULL; p = p->sibling) {
1116 1128 p->parent = res;
1117 - pr_debug("PCI: Reparented %s [%llx..%llx] under %s\n",
1118 - p->name,
1119 - (unsigned long long)p->start,
1120 - (unsigned long long)p->end, res->name);
1129 + pr_debug("PCI: Reparented %s %pR under %s\n",
1130 + p->name, p, res->name);
1121 1131 }
1122 1132 return 0;
1123 1133 }
···
1184 1198 }
1185 1199 }
1186 1200
1187 - pr_debug("PCI: %s (bus %d) bridge rsrc %d: %016llx-%016llx "
1188 - "[0x%x], parent %p (%s)\n",
1189 - bus->self ? pci_name(bus->self) : "PHB",
1190 - bus->number, i,
1191 - (unsigned long long)res->start,
1192 - (unsigned long long)res->end,
1193 - (unsigned int)res->flags,
1194 - pr, (pr && pr->name) ? pr->name : "nil");
1201 + pr_debug("PCI: %s (bus %d) bridge rsrc %d: %pR, parent %p (%s)\n",
1202 + bus->self ? pci_name(bus->self) : "PHB", bus->number,
1203 + i, res, pr, (pr && pr->name) ? pr->name : "nil");
1195 1204
1196 1205 if (pr && !(pr->flags & IORESOURCE_UNSET)) {
1197 1206 struct pci_dev *dev = bus->self;
···
1228 1247 {
1229 1248 struct resource *pr, *r = &dev->resource[idx];
1230 1249
1231 - pr_debug("PCI: Allocating %s: Resource %d: %016llx..%016llx [%x]\n",
1232 - pci_name(dev), idx,
1233 - (unsigned long long)r->start,
1234 - (unsigned long long)r->end,
1235 - (unsigned int)r->flags);
1250 + pr_debug("PCI: Allocating %s: Resource %d: %pR\n",
1251 + pci_name(dev), idx, r);
1236 1252
1237 1253 pr = pci_find_parent_resource(dev, r);
1238 1254 if (!pr || (pr->flags & IORESOURCE_UNSET) ||
···
1237 1259 printk(KERN_WARNING "PCI: Cannot allocate resource region %d"
1238 1260 " of device %s, will remap\n", idx, pci_name(dev));
1239 1261 if (pr)
1240 - pr_debug("PCI: parent is %p: %016llx-%016llx [%x]\n",
1241 - pr,
1242 - (unsigned long long)pr->start,
1243 - (unsigned long long)pr->end,
1244 - (unsigned int)pr->flags);
1262 + pr_debug("PCI: parent is %p: %pR\n", pr, pr);
1245 1263 /* We'll assign a new address later */
1246 1264 r->flags |= IORESOURCE_UNSET;
1247 1265 r->end -= r->start;
···
1399 1425 if (r->parent || !r->start || !r->flags)
1400 1426 continue;
1401 1427
1402 - pr_debug("PCI: Claiming %s: "
1403 - "Resource %d: %016llx..%016llx [%x]\n",
1404 - pci_name(dev), i,
1405 - (unsigned long long)r->start,
1406 - (unsigned long long)r->end,
1407 - (unsigned int)r->flags);
1428 + pr_debug("PCI: Claiming %s: Resource %d: %pR\n",
1429 + pci_name(dev), i, r);
1408 1430
1409 1431 if (pci_claim_resource(dev, i) == 0)
1410 1432 continue;
···
1484 1514 } else {
1485 1515 offset = pcibios_io_space_offset(hose);
1486 1516
1487 - pr_debug("PCI: PHB IO resource = %08llx-%08llx [%lx] off 0x%08llx\n",
1488 - (unsigned long long)res->start,
1489 - (unsigned long long)res->end,
1490 - (unsigned long)res->flags,
1491 - (unsigned long long)offset);
1517 + pr_debug("PCI: PHB IO resource = %pR off 0x%08llx\n",
1518 + res, (unsigned long long)offset);
1492 1519 pci_add_resource_offset(resources, res, offset);
1493 1520 }
···
1502 1535 offset = hose->mem_offset[i];
1503 1536
1504 1537
1505 - pr_debug("PCI: PHB MEM resource %d = %08llx-%08llx [%lx] off 0x%08llx\n", i,
1506 - (unsigned long long)res->start,
1507 - (unsigned long long)res->end,
1508 - (unsigned long)res->flags,
1509 - (unsigned long long)offset);
1538 + pr_debug("PCI: PHB MEM resource %d = %pR off 0x%08llx\n", i,
1539 + res, (unsigned long long)offset);
1510 1540
1511 1541 pci_add_resource_offset(resources, res, offset);
1512 1542 }
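The hunks above replace open-coded `%016llx-%016llx [%x]` formatting with the kernel's `%pR` resource specifier, which prints a `struct resource` in one go. As a rough userspace sketch of what every call site used to spell out by hand (the struct and helper names here are illustrative, not kernel API):

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Illustrative stand-in for the kernel's struct resource */
struct resource_ex {
	uint64_t start;
	uint64_t end;
	unsigned int flags;
};

/*
 * Format a resource range the way the old open-coded pr_debug() calls
 * did. In the kernel, the %pR vsnprintf extension does this centrally,
 * so each call site in the patch shrinks to a single "%pR".
 */
static int format_resource(char *buf, size_t len, const struct resource_ex *r)
{
	return snprintf(buf, len, "%016llx-%016llx [%x]",
			(unsigned long long)r->start,
			(unsigned long long)r->end, r->flags);
}
```

The consolidation is purely cosmetic: the printed range is the same, but the format string can no longer drift out of sync between call sites.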
+7 -7
arch/powerpc/kernel/process.c
···
86 86 if (tsk == current && tsk->thread.regs &&
87 87 MSR_TM_ACTIVE(tsk->thread.regs->msr) &&
88 88 !test_thread_flag(TIF_RESTORE_TM)) {
89 - tsk->thread.tm_orig_msr = tsk->thread.regs->msr;
89 + tsk->thread.ckpt_regs.msr = tsk->thread.regs->msr;
90 90 set_thread_flag(TIF_RESTORE_TM);
91 91 }
92 92
···
104 104 if (tsk == current && tsk->thread.regs &&
105 105 MSR_TM_ACTIVE(tsk->thread.regs->msr) &&
106 106 !test_thread_flag(TIF_RESTORE_TM)) {
107 - tsk->thread.tm_orig_msr = tsk->thread.regs->msr;
107 + tsk->thread.ckpt_regs.msr = tsk->thread.regs->msr;
108 108 set_thread_flag(TIF_RESTORE_TM);
109 109 }
110 110
···
540 540 * the thread will no longer be transactional.
541 541 */
542 542 if (test_ti_thread_flag(ti, TIF_RESTORE_TM)) {
543 - msr_diff = thr->tm_orig_msr & ~thr->regs->msr;
543 + msr_diff = thr->ckpt_regs.msr & ~thr->regs->msr;
544 544 if (msr_diff & MSR_FP)
545 545 memcpy(&thr->transact_fp, &thr->fp_state,
546 546 sizeof(struct thread_fp_state));
···
591 591 /* Stash the original thread MSR, as giveup_fpu et al will
592 592 * modify it. We hold onto it to see whether the task used
593 593 * FP & vector regs. If the TIF_RESTORE_TM flag is set,
594 - * tm_orig_msr is already set.
594 + * ckpt_regs.msr is already set.
595 595 */
596 596 if (!test_ti_thread_flag(task_thread_info(tsk), TIF_RESTORE_TM))
597 - thr->tm_orig_msr = thr->regs->msr;
597 + thr->ckpt_regs.msr = thr->regs->msr;
598 598
599 599 TM_DEBUG("--- tm_reclaim on pid %d (NIP=%lx, "
600 600 "ccr=%lx, msr=%lx, trap=%lx)\n",
···
663 663 tm_restore_sprs(&new->thread);
664 664 return;
665 665 }
666 - msr = new->thread.tm_orig_msr;
666 + msr = new->thread.ckpt_regs.msr;
667 667 /* Recheckpoint to restore original checkpointed register state. */
668 668 TM_DEBUG("*** tm_recheckpoint of pid %d "
669 669 "(new->msr 0x%lx, new->origmsr 0x%lx)\n",
···
723 723 if (!MSR_TM_ACTIVE(regs->msr))
724 724 return;
725 725
726 - msr_diff = current->thread.tm_orig_msr & ~regs->msr;
726 + msr_diff = current->thread.ckpt_regs.msr & ~regs->msr;
727 727 msr_diff &= MSR_FP | MSR_VEC | MSR_VSX;
728 728 if (msr_diff & MSR_FP) {
729 729 fp_enable();
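The TM (transactional memory) code above repeatedly computes `msr_diff = ckpt_regs.msr & ~regs->msr`: the facility bits (FP/VEC/VSX) that were enabled at checkpoint time but have since been turned off, and therefore need their state restored. A standalone sketch of that set-difference, using the conventional PowerPC MSR bit positions as plain constants (treat the values as illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Facility-enable bits, at the positions conventionally used in the
 * 64-bit PowerPC MSR; values here are for illustration only. */
#define MSR_FP  (1u << 13)
#define MSR_VEC (1u << 25)
#define MSR_VSX (1u << 23)

/*
 * Mirror of the msr_diff computation in tm_reclaim_thread() and
 * restore_tm_state(): a bit set at checkpoint time but clear now is
 * exactly a facility whose checkpointed state must be restored.
 */
static uint32_t tm_facility_diff(uint32_t ckpt_msr, uint32_t cur_msr)
{
	return (ckpt_msr & ~cur_msr) & (MSR_FP | MSR_VEC | MSR_VSX);
}
```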
+11 -14
arch/powerpc/kernel/prom.c
···
218 218 }
219 219
220 220 #ifdef CONFIG_PPC_STD_MMU_64
221 - static void __init check_cpu_slb_size(unsigned long node)
221 + static void __init init_mmu_slb_size(unsigned long node)
222 222 {
223 223 const __be32 *slb_size_ptr;
224 224
225 - slb_size_ptr = of_get_flat_dt_prop(node, "slb-size", NULL);
226 - if (slb_size_ptr != NULL) {
225 + slb_size_ptr = of_get_flat_dt_prop(node, "slb-size", NULL) ? :
226 + of_get_flat_dt_prop(node, "ibm,slb-size", NULL);
227 +
228 + if (slb_size_ptr)
227 229 mmu_slb_size = be32_to_cpup(slb_size_ptr);
228 - return;
229 - }
230 - slb_size_ptr = of_get_flat_dt_prop(node, "ibm,slb-size", NULL);
231 - if (slb_size_ptr != NULL) {
232 - mmu_slb_size = be32_to_cpup(slb_size_ptr);
233 - }
234 230 }
235 231 #else
236 - #define check_cpu_slb_size(node) do { } while(0)
232 + #define init_mmu_slb_size(node) do { } while(0)
237 233 #endif
238 234
239 235 static struct feature_property {
···
376 380
377 381 check_cpu_feature_properties(node);
378 382 check_cpu_pa_features(node);
379 - check_cpu_slb_size(node);
383 + init_mmu_slb_size(node);
380 384
381 385 #ifdef CONFIG_PPC64
382 386 if (nthreads > 1)
···
472 476 flags = of_read_number(&dm[3], 1);
473 477 /* skip DRC index, pad, assoc. list index, flags */
474 478 dm += 4;
475 - /* skip this block if the reserved bit is set in flags (0x80)
476 - or if the block is not assigned to this partition (0x8) */
477 - if ((flags & 0x80) || !(flags & 0x8))
479 + /* skip this block if the reserved bit is set in flags
480 + or if the block is not assigned to this partition */
481 + if ((flags & DRCONF_MEM_RESERVED) ||
482 + !(flags & DRCONF_MEM_ASSIGNED))
478 483 continue;
479 484 size = memblock_size;
480 485 rngs = 1;
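The last hunk above replaces the bare `0x80`/`0x8` literals in the drconf (dynamic-reconfiguration memory) flags test with named constants. The values come straight from the removed comment; a minimal sketch of the predicate being encoded:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Flag bits in the LMB flags word; these are the 0x80 and 0x8 literals
 * that the hunk above gives names to. */
#define DRCONF_MEM_RESERVED 0x00000080
#define DRCONF_MEM_ASSIGNED 0x00000008

/* A memory block is usable only if it is not reserved and is assigned
 * to this partition -- the inverse of the "skip this block" test. */
static bool drconf_block_usable(uint32_t flags)
{
	return !(flags & DRCONF_MEM_RESERVED) && (flags & DRCONF_MEM_ASSIGNED);
}
```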
+17 -8
arch/powerpc/kernel/prom_init.c
···
641 641 #define W(x) ((x) >> 24) & 0xff, ((x) >> 16) & 0xff, \
642 642 ((x) >> 8) & 0xff, (x) & 0xff
643 643
644 + /* Firmware expects the value to be n - 1, where n is the # of vectors */
645 + #define NUM_VECTORS(n) ((n) - 1)
646 +
647 + /*
648 + * Firmware expects 1 + n - 2, where n is the length of the option vector in
649 + * bytes. The 1 accounts for the length byte itself, the - 2 .. ?
650 + */
651 + #define VECTOR_LENGTH(n) (1 + (n) - 2)
652 +
644 653 unsigned char ibm_architecture_vec[] = {
645 654 W(0xfffe0000), W(0x003a0000), /* POWER5/POWER5+ */
646 655 W(0xffff0000), W(0x003e0000), /* POWER6 */
···
660 651 W(0xffffffff), W(0x0f000003), /* all 2.06-compliant */
661 652 W(0xffffffff), W(0x0f000002), /* all 2.05-compliant */
662 653 W(0xfffffffe), W(0x0f000001), /* all 2.04-compliant and earlier */
663 - 6 - 1, /* 6 option vectors */
654 + NUM_VECTORS(6), /* 6 option vectors */
664 655
665 656 /* option vector 1: processor architectures supported */
666 - 3 - 2, /* length */
657 + VECTOR_LENGTH(2), /* length */
667 658 0, /* don't ignore, don't halt */
668 659 OV1_PPC_2_00 | OV1_PPC_2_01 | OV1_PPC_2_02 | OV1_PPC_2_03 |
669 660 OV1_PPC_2_04 | OV1_PPC_2_05 | OV1_PPC_2_06 | OV1_PPC_2_07,
670 661
671 662 /* option vector 2: Open Firmware options supported */
672 - 34 - 2, /* length */
663 + VECTOR_LENGTH(33), /* length */
673 664 OV2_REAL_MODE,
674 665 0, 0,
675 666 W(0xffffffff), /* real_base */
···
683 674 48, /* max log_2(hash table size) */
684 675
685 676 /* option vector 3: processor options supported */
686 - 3 - 2, /* length */
677 + VECTOR_LENGTH(2), /* length */
687 678 0, /* don't ignore, don't halt */
688 679 OV3_FP | OV3_VMX | OV3_DFP,
689 680
690 681 /* option vector 4: IBM PAPR implementation */
691 - 3 - 2, /* length */
682 + VECTOR_LENGTH(2), /* length */
692 683 0, /* don't halt */
693 684 OV4_MIN_ENT_CAP, /* minimum VP entitled capacity */
694 685
695 686 /* option vector 5: PAPR/OF options */
696 - 19 - 2, /* length */
687 + VECTOR_LENGTH(18), /* length */
697 688 0, /* don't ignore, don't halt */
698 689 OV5_FEAT(OV5_LPAR) | OV5_FEAT(OV5_SPLPAR) | OV5_FEAT(OV5_LARGE_PAGES) |
699 690 OV5_FEAT(OV5_DRCONF_MEMORY) | OV5_FEAT(OV5_DONATE_DEDICATE_CPU) |
···
726 717 OV5_FEAT(OV5_PFO_HW_RNG) | OV5_FEAT(OV5_PFO_HW_ENCR) |
727 718 OV5_FEAT(OV5_PFO_HW_842),
728 719 OV5_FEAT(OV5_SUB_PROCESSORS),
720 +
729 721 /* option vector 6: IBM PAPR hints */
730 - 4 - 2, /* length */
722 + VECTOR_LENGTH(3), /* length */
731 723 0,
732 724 0,
733 725 OV6_LINUX,
734 -
735 726 };
736 727
737 728 /* Old method - ELF header with PT_NOTE sections only works on BE */
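The point of the two macros above is that the byte actually emitted into `ibm_architecture_vec` must not change; only the spelling does. Each replaced literal can be checked against its macro form at compile time, reproducing the macros verbatim from the hunk:

```c
#include <assert.h>

/* The two helper macros introduced in the patch, copied verbatim */
#define NUM_VECTORS(n)   ((n) - 1)
#define VECTOR_LENGTH(n) (1 + (n) - 2)

/* Every literal the patch replaced must encode to the same value */
_Static_assert(NUM_VECTORS(6) == 6 - 1, "vector count byte unchanged");
_Static_assert(VECTOR_LENGTH(2) == 3 - 2, "ov1/ov3/ov4 length unchanged");
_Static_assert(VECTOR_LENGTH(33) == 34 - 2, "ov2 length unchanged");
_Static_assert(VECTOR_LENGTH(18) == 19 - 2, "ov5 length unchanged");
_Static_assert(VECTOR_LENGTH(3) == 4 - 2, "ov6 length unchanged");
```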
+77 -12
arch/powerpc/kernel/ptrace.c
···
1762 1762 return ret;
1763 1763 }
1764 1764
1765 - /*
1766 - * We must return the syscall number to actually look up in the table.
1767 - * This can be -1L to skip running any syscall at all.
1765 + #ifdef CONFIG_SECCOMP
1766 + static int do_seccomp(struct pt_regs *regs)
1767 + {
1768 + if (!test_thread_flag(TIF_SECCOMP))
1769 + return 0;
1770 +
1771 + /*
1772 + * The ABI we present to seccomp tracers is that r3 contains
1773 + * the syscall return value and orig_gpr3 contains the first
1774 + * syscall parameter. This is different to the ptrace ABI where
1775 + * both r3 and orig_gpr3 contain the first syscall parameter.
1776 + */
1777 + regs->gpr[3] = -ENOSYS;
1778 +
1779 + /*
1780 + * We use the __ version here because we have already checked
1781 + * TIF_SECCOMP. If this fails, there is nothing left to do, we
1782 + * have already loaded -ENOSYS into r3, or seccomp has put
1783 + * something else in r3 (via SECCOMP_RET_ERRNO/TRACE).
1784 + */
1785 + if (__secure_computing())
1786 + return -1;
1787 +
1788 + /*
1789 + * The syscall was allowed by seccomp, restore the register
1790 + * state to what ptrace and audit expect.
1791 + * Note that we use orig_gpr3, which means a seccomp tracer can
1792 + * modify the first syscall parameter (in orig_gpr3) and also
1793 + * allow the syscall to proceed.
1794 + */
1795 + regs->gpr[3] = regs->orig_gpr3;
1796 +
1797 + return 0;
1798 + }
1799 + #else
1800 + static inline int do_seccomp(struct pt_regs *regs) { return 0; }
1801 + #endif /* CONFIG_SECCOMP */
1802 +
1803 + /**
1804 + * do_syscall_trace_enter() - Do syscall tracing on kernel entry.
1805 + * @regs: the pt_regs of the task to trace (current)
1806 + *
1807 + * Performs various types of tracing on syscall entry. This includes seccomp,
1808 + * ptrace, syscall tracepoints and audit.
1809 + *
1810 + * The pt_regs are potentially visible to userspace via ptrace, so their
1811 + * contents is ABI.
1812 + *
1813 + * One or more of the tracers may modify the contents of pt_regs, in particular
1814 + * to modify arguments or even the syscall number itself.
1815 + *
1816 + * It's also possible that a tracer can choose to reject the system call. In
1817 + * that case this function will return an illegal syscall number, and will put
1818 + * an appropriate return value in regs->r3.
1819 + *
1820 + * Return: the (possibly changed) syscall number.
1768 1821 */
1769 1822 long do_syscall_trace_enter(struct pt_regs *regs)
1770 1823 {
1771 - long ret = 0;
1824 + bool abort = false;
1772 1825
1773 1826 user_exit();
1774 1827
1775 - secure_computing_strict(regs->gpr[0]);
1828 + if (do_seccomp(regs))
1829 + return -1;
1776 1830
1777 - if (test_thread_flag(TIF_SYSCALL_TRACE) &&
1778 - tracehook_report_syscall_entry(regs))
1831 + if (test_thread_flag(TIF_SYSCALL_TRACE)) {
1779 1832 /*
1780 - * Tracing decided this syscall should not happen.
1781 - * We'll return a bogus call number to get an ENOSYS
1782 - * error, but leave the original number in regs->gpr[0].
1833 + * The tracer may decide to abort the syscall, if so tracehook
1834 + * will return !0. Note that the tracer may also just change
1835 + * regs->gpr[0] to an invalid syscall number, that is handled
1836 + * below on the exit path.
1783 1837 */
1784 - ret = -1L;
1838 + abort = tracehook_report_syscall_entry(regs) != 0;
1839 + }
1785 1840
1786 1841 if (unlikely(test_thread_flag(TIF_SYSCALL_TRACEPOINT)))
1787 1842 trace_sys_enter(regs, regs->gpr[0]);
···
1853 1798 regs->gpr[5] & 0xffffffff,
1854 1799 regs->gpr[6] & 0xffffffff);
1855 1800
1856 - return ret ?: regs->gpr[0];
1801 + if (abort || regs->gpr[0] >= NR_syscalls) {
1802 + /*
1803 + * If we are aborting explicitly, or if the syscall number is
1804 + * now invalid, set the return value to -ENOSYS.
1805 + */
1806 + regs->gpr[3] = -ENOSYS;
1807 + return -1;
1808 + }
1809 +
1810 + /* Return the possibly modified but valid syscall number */
1811 + return regs->gpr[0];
1857 1812 }
1858 1813
1859 1814 void do_syscall_trace_leave(struct pt_regs *regs)
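The tail of the new `do_syscall_trace_enter()` folds two failure modes into one outcome: an explicit tracer abort, or a (possibly tracer-modified) syscall number that is out of range, both become "skip the syscall, return -ENOSYS". A compact userspace sketch of that decision (register layout and the table size are illustrative stand-ins, not the kernel's):

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

#define NR_SYSCALLS_EX 379 /* illustrative table size, not the real NR_syscalls */

/* Illustrative stand-in for the relevant pt_regs fields */
struct regs_ex {
	unsigned long gpr0; /* syscall number */
	long gpr3;          /* syscall return value */
};

/*
 * Mirrors the exit path above: abort or out-of-range number means the
 * syscall is not run and -ENOSYS becomes the visible return value;
 * otherwise the (possibly modified) number is returned for dispatch.
 */
static long trace_enter_result(struct regs_ex *regs, bool abort)
{
	if (abort || regs->gpr0 >= NR_SYSCALLS_EX) {
		regs->gpr3 = -ENOSYS;
		return -1;
	}
	return (long)regs->gpr0;
}
```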
+22 -3
arch/powerpc/kernel/rtas.c
···
478 478
479 479 if (status == RTAS_BUSY) {
480 480 ms = 1;
481 - } else if (status >= 9900 && status <= 9905) {
482 - order = status - 9900;
481 + } else if (status >= RTAS_EXTENDED_DELAY_MIN &&
482 + status <= RTAS_EXTENDED_DELAY_MAX) {
483 + order = status - RTAS_EXTENDED_DELAY_MIN;
483 484 for (ms = 1; order > 0; order--)
484 485 ms *= 10;
485 486 }
···
585 584 }
586 585 EXPORT_SYMBOL(rtas_get_sensor);
587 586
587 + int rtas_get_sensor_fast(int sensor, int index, int *state)
588 + {
589 + int token = rtas_token("get-sensor-state");
590 + int rc;
591 +
592 + if (token == RTAS_UNKNOWN_SERVICE)
593 + return -ENOENT;
594 +
595 + rc = rtas_call(token, 2, 2, state, sensor, index);
596 + WARN_ON(rc == RTAS_BUSY || (rc >= RTAS_EXTENDED_DELAY_MIN &&
597 + rc <= RTAS_EXTENDED_DELAY_MAX));
598 +
599 + if (rc < 0)
600 + return rtas_error_rc(rc);
601 + return rc;
602 + }
603 +
588 604 bool rtas_indicator_present(int token, int *maxindex)
589 605 {
590 606 int proplen, count, i;
···
659 641
660 642 rc = rtas_call(token, 3, 1, NULL, indicator, index, new_value);
661 643
662 - WARN_ON(rc == -2 || (rc >= 9900 && rc <= 9905));
644 + WARN_ON(rc == RTAS_BUSY || (rc >= RTAS_EXTENDED_DELAY_MIN &&
645 + rc <= RTAS_EXTENDED_DELAY_MAX));
663 646
664 647 if (rc < 0)
665 648 return rtas_error_rc(rc);
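The extended-delay statuses that the patch names (`RTAS_EXTENDED_DELAY_MIN`..`MAX`, i.e. the old 9900..9905 literals) encode a suggested wait of 10^(status - 9900) milliseconds; that is exactly why `rtas_get_sensor_fast()` warns if it ever sees one from IRQ context, where sleeping is not an option. A sketch of the decode loop from the first hunk:

```c
#include <assert.h>

/* Extended-delay status codes, as named in the hunk above */
#define RTAS_EXTENDED_DELAY_MIN 9900
#define RTAS_EXTENDED_DELAY_MAX 9905

/*
 * Mirror of the delay computation: status 9900..9905 means
 * "retry after 10^(status - 9900) milliseconds".
 */
static unsigned int rtas_extended_delay_ms(int status)
{
	unsigned int ms = 1;
	int order = status - RTAS_EXTENDED_DELAY_MIN;

	for (; order > 0; order--)
		ms *= 10;
	return ms;
}
```

So 9900 asks for 1 ms, 9903 for a full second, and 9905 for 100 seconds -- none of which a "fast" IRQ-safe caller can honour.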
+5
arch/powerpc/kernel/signal_32.c
···
949 949 err |= __put_user(s->si_overrun, &d->si_overrun);
950 950 err |= __put_user(s->si_int, &d->si_int);
951 951 break;
952 + case __SI_SYS >> 16:
953 + err |= __put_user(ptr_to_compat(s->si_call_addr), &d->si_call_addr);
954 + err |= __put_user(s->si_syscall, &d->si_syscall);
955 + err |= __put_user(s->si_arch, &d->si_arch);
956 + break;
952 957 case __SI_RT >> 16: /* This is not generated by the kernel as of now. */
953 958 case __SI_MESGQ >> 16:
954 959 err |= __put_user(s->si_int, &d->si_int);
+16 -5
arch/powerpc/kernel/signal_64.c
···
74 74 "%s[%d]: bad frame in %s: %016lx nip %016lx lr %016lx\n";
75 75
76 76 /*
77 + * This computes a quad word aligned pointer inside the vmx_reserve array
78 + * element. For historical reasons sigcontext might not be quad word aligned,
79 + * but the location we write the VMX regs to must be. See the comment in
80 + * sigcontext for more detail.
81 + */
82 + #ifdef CONFIG_ALTIVEC
83 + static elf_vrreg_t __user *sigcontext_vmx_regs(struct sigcontext __user *sc)
84 + {
85 + return (elf_vrreg_t __user *) (((unsigned long)sc->vmx_reserve + 15) & ~0xful);
86 + }
87 + #endif
88 +
89 + /*
77 90 * Set up the sigcontext for the signal frame.
78 91 */
79 92
···
103 90 * v_regs pointer or not
104 91 */
105 92 #ifdef CONFIG_ALTIVEC
106 - elf_vrreg_t __user *v_regs = (elf_vrreg_t __user *)(((unsigned long)sc->vmx_reserve + 15) & ~0xful);
93 + elf_vrreg_t __user *v_regs = sigcontext_vmx_regs(sc);
107 94 #endif
108 95 unsigned long msr = regs->msr;
109 96 long err = 0;
···
194 181 * v_regs pointer or not.
195 182 */
196 183 #ifdef CONFIG_ALTIVEC
197 - elf_vrreg_t __user *v_regs = (elf_vrreg_t __user *)
198 - (((unsigned long)sc->vmx_reserve + 15) & ~0xful);
199 - elf_vrreg_t __user *tm_v_regs = (elf_vrreg_t __user *)
200 - (((unsigned long)tm_sc->vmx_reserve + 15) & ~0xful);
184 + elf_vrreg_t __user *v_regs = sigcontext_vmx_regs(sc);
185 + elf_vrreg_t __user *tm_v_regs = sigcontext_vmx_regs(tm_sc);
201 186 #endif
202 187 unsigned long msr = regs->msr;
203 188 long err = 0;
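The expression factored into `sigcontext_vmx_regs()` is the standard round-up-to-16-bytes idiom: add 15, then clear the low four bits, so the result is the next quad-word boundary at or after the input. Isolated for clarity:

```c
#include <assert.h>
#include <stdint.h>

/*
 * The rounding done by sigcontext_vmx_regs(): bump an address to the
 * next quad-word (16-byte) boundary, so VMX registers can be stored
 * quad-word aligned even when sigcontext itself is not.
 */
static uintptr_t quadword_align_up(uintptr_t p)
{
	return (p + 15) & ~(uintptr_t)0xf;
}
```

An already-aligned address is unchanged; anything else moves forward by at most 15 bytes, which is why `vmx_reserve` carries slack space.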
+15
arch/powerpc/kernel/trace_clock.c
···
1 + /*
2 + * This program is free software; you can redistribute it and/or modify
3 + * it under the terms of the GNU General Public License, version 2, as
4 + * published by the Free Software Foundation.
5 + *
6 + * Copyright (C) 2015 Naveen N. Rao, IBM Corporation
7 + */
8 +
9 + #include <asm/trace_clock.h>
10 + #include <asm/time.h>
11 +
12 + u64 notrace trace_clock_ppc_tb(void)
13 + {
14 + return get_tb();
15 + }
-16
arch/powerpc/lib/checksum_32.S
···
41 41 blr
42 42
43 43 /*
44 - * Compute checksum of TCP or UDP pseudo-header:
45 - * csum_tcpudp_magic(saddr, daddr, len, proto, sum)
46 - */
47 - _GLOBAL(csum_tcpudp_magic)
48 - rlwimi r5,r6,16,0,15 /* put proto in upper half of len */
49 - addc r0,r3,r4 /* add 4 32-bit words together */
50 - adde r0,r0,r5
51 - adde r0,r0,r7
52 - addze r0,r0 /* add in final carry */
53 - rlwinm r3,r0,16,0,31 /* fold two halves together */
54 - add r3,r0,r3
55 - not r3,r3
56 - srwi r3,r3,16
57 - blr
58 -
59 - /*
60 44 * computes the checksum of a memory block at buff, length len,
61 45 * and adds in "sum" (32-bit)
62 46 *
-21
arch/powerpc/lib/checksum_64.S
···
45 45 blr
46 46
47 47 /*
48 - * Compute checksum of TCP or UDP pseudo-header:
49 - * csum_tcpudp_magic(r3=saddr, r4=daddr, r5=len, r6=proto, r7=sum)
50 - * No real gain trying to do this specially for 64 bit, but
51 - * the 32 bit addition may spill into the upper bits of
52 - * the doubleword so we still must fold it down from 64.
53 - */
54 - _GLOBAL(csum_tcpudp_magic)
55 - rlwimi r5,r6,16,0,15 /* put proto in upper half of len */
56 - addc r0,r3,r4 /* add 4 32-bit words together */
57 - adde r0,r0,r5
58 - adde r0,r0,r7
59 - rldicl r4,r0,32,0 /* fold 64 bit value */
60 - add r0,r4,r0
61 - srdi r0,r0,32
62 - rlwinm r3,r0,16,0,31 /* fold two halves together */
63 - add r3,r0,r3
64 - not r3,r3
65 - srwi r3,r3,16
66 - blr
67 -
68 - /*
69 48 * Computes the checksum of a memory block at buff, length len,
70 49 * and adds in "sum" (32-bit).
71 50 *
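Both removed `csum_tcpudp_magic` routines end with the same fold: rotate the 32-bit partial sum by 16, add, complement, and keep the top half. A portable C sketch of what that sequence computes (this is an equivalent formulation of the ones-complement fold, not a transcription of the asm):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Collapse a 32-bit partial ones-complement sum into the final 16-bit
 * Internet checksum: fold the high half into the low half twice (the
 * second pass absorbs any carry), then complement.
 */
static uint16_t csum_fold32(uint32_t sum)
{
	sum = (sum & 0xffff) + (sum >> 16); /* fold high half in */
	sum = (sum & 0xffff) + (sum >> 16); /* absorb any carry  */
	return (uint16_t)~sum;
}
```

The generic C versions these asm routines are dropped in favour of do the same arithmetic, which is why removing the hand-written variants is safe.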
+108 -1
arch/powerpc/lib/copy_32.S
···
69 69 LG_CACHELINE_BYTES = L1_CACHE_SHIFT
70 70 CACHELINE_MASK = (L1_CACHE_BYTES-1)
71 71
72 + /*
73 + * Use dcbz on the complete cache lines in the destination
74 + * to set them to zero. This requires that the destination
75 + * area is cacheable. -- paulus
76 + */
72 77 _GLOBAL(memset)
73 78 rlwimi r4,r4,8,16,23
74 79 rlwimi r4,r4,16,0,15
80 +
75 81 addi r6,r3,-4
76 82 cmplwi 0,r5,4
77 83 blt 7f
···
86 80 andi. r0,r6,3
87 81 add r5,r0,r5
88 82 subf r6,r0,r6
89 - srwi r0,r5,2
83 + cmplwi 0,r4,0
84 + bne 2f /* Use normal procedure if r4 is not zero */
85 +
86 + clrlwi r7,r6,32-LG_CACHELINE_BYTES
87 + add r8,r7,r5
88 + srwi r9,r8,LG_CACHELINE_BYTES
89 + addic. r9,r9,-1 /* total number of complete cachelines */
90 + ble 2f
91 + xori r0,r7,CACHELINE_MASK & ~3
92 + srwi. r0,r0,2
93 + beq 3f
94 + mtctr r0
95 + 4: stwu r4,4(r6)
96 + bdnz 4b
97 + 3: mtctr r9
98 + li r7,4
99 + 10: dcbz r7,r6
100 + addi r6,r6,CACHELINE_BYTES
101 + bdnz 10b
102 + clrlwi r5,r8,32-LG_CACHELINE_BYTES
103 + addi r5,r5,4
104 +
105 + 2: srwi r0,r5,2
90 106 mtctr r0
91 107 bdz 6f
92 108 1: stwu r4,4(r6)
···
122 94 bdnz 8b
123 95 blr
124 96
97 + /*
98 + * This version uses dcbz on the complete cache lines in the
99 + * destination area to reduce memory traffic. This requires that
100 + * the destination area is cacheable.
101 + * We only use this version if the source and dest don't overlap.
102 + * -- paulus.
103 + */
125 104 _GLOBAL(memmove)
126 105 cmplw 0,r3,r4
127 106 bgt backwards_memcpy
128 107 /* fall through */
129 108
130 109 _GLOBAL(memcpy)
110 + add r7,r3,r5 /* test if the src & dst overlap */
111 + add r8,r4,r5
112 + cmplw 0,r4,r7
113 + cmplw 1,r3,r8
114 + crand 0,0,4 /* cr0.lt &= cr1.lt */
115 + blt generic_memcpy /* if regions overlap */
116 +
117 + addi r4,r4,-4
118 + addi r6,r3,-4
119 + neg r0,r3
120 + andi. r0,r0,CACHELINE_MASK /* # bytes to start of cache line */
121 + beq 58f
122 +
123 + cmplw 0,r5,r0 /* is this more than total to do? */
124 + blt 63f /* if not much to do */
125 + andi. r8,r0,3 /* get it word-aligned first */
126 + subf r5,r0,r5
127 + mtctr r8
128 + beq+ 61f
129 + 70: lbz r9,4(r4) /* do some bytes */
130 + addi r4,r4,1
131 + addi r6,r6,1
132 + stb r9,3(r6)
133 + bdnz 70b
134 + 61: srwi. r0,r0,2
135 + mtctr r0
136 + beq 58f
137 + 72: lwzu r9,4(r4) /* do some words */
138 + stwu r9,4(r6)
139 + bdnz 72b
140 +
141 + 58: srwi. r0,r5,LG_CACHELINE_BYTES /* # complete cachelines */
142 + clrlwi r5,r5,32-LG_CACHELINE_BYTES
143 + li r11,4
144 + mtctr r0
145 + beq 63f
146 + 53:
147 + dcbz r11,r6
148 + COPY_16_BYTES
149 + #if L1_CACHE_BYTES >= 32
150 + COPY_16_BYTES
151 + #if L1_CACHE_BYTES >= 64
152 + COPY_16_BYTES
153 + COPY_16_BYTES
154 + #if L1_CACHE_BYTES >= 128
155 + COPY_16_BYTES
156 + COPY_16_BYTES
157 + COPY_16_BYTES
158 + COPY_16_BYTES
159 + #endif
160 + #endif
161 + #endif
162 + bdnz 53b
163 +
164 + 63: srwi. r0,r5,2
165 + mtctr r0
166 + beq 64f
167 + 30: lwzu r0,4(r4)
168 + stwu r0,4(r6)
169 + bdnz 30b
170 +
171 + 64: andi. r0,r5,3
172 + mtctr r0
173 + beq+ 65f
174 + addi r4,r4,3
175 + addi r6,r6,3
176 + 40: lbzu r0,1(r4)
177 + stbu r0,1(r6)
178 + bdnz 40b
179 + 65: blr
180 +
181 + _GLOBAL(generic_memcpy)
131 182 srwi. r7,r5,3
132 183 addi r6,r3,-4
133 184 addi r4,r4,-4
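The new `memcpy` prologue computes how far the destination is from the next cache-line boundary with `neg r0,r3; andi. r0,r0,CACHELINE_MASK`: negate the address and mask with the line size minus one. Those are the bytes that must be copied the slow way before `dcbz` can be used on whole lines. The same arithmetic in C (line size is illustrative):

```c
#include <assert.h>
#include <stdint.h>

#define L1_CACHE_BYTES 32u /* illustrative line size */
#define CACHELINE_MASK (L1_CACHE_BYTES - 1)

/*
 * Bytes needed to bring dst up to a cache-line boundary; 0 if it is
 * already aligned. Unsigned negation makes (-dst) mod 2^N well defined,
 * so masking yields (line_size - dst % line_size) % line_size.
 */
static unsigned int bytes_to_cacheline(uintptr_t dst)
{
	return (unsigned int)(-dst & CACHELINE_MASK);
}
```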
+1 -1
arch/powerpc/mm/fsl_booke_mmu.c
···
112 112
113 113 tsize = __ilog2(size) - 10;
114 114
115 - #ifdef CONFIG_SMP
115 + #if defined(CONFIG_SMP) || defined(CONFIG_PPC_E500MC)
116 116 if ((flags & _PAGE_NO_CACHE) == 0)
117 117 flags |= _PAGE_COHERENT;
118 118 #endif
+2 -2
arch/powerpc/mm/hash_low_64.S
···
701 701
702 702 #endif /* CONFIG_PPC_64K_PAGES */
703 703
704 - #ifdef CONFIG_PPC_HAS_HASH_64K
704 + #ifdef CONFIG_PPC_64K_PAGES
705 705
706 706 /*****************************************************************************
707 707 * *
···
993 993 b ht64_bail
994 994
995 995
996 - #endif /* CONFIG_PPC_HAS_HASH_64K */
996 + #endif /* CONFIG_PPC_64K_PAGES */
997 997
998 998
999 999 /*****************************************************************************
+6 -6
arch/powerpc/mm/hash_utils_64.c
···
640 640
641 641 static void __init htab_finish_init(void)
642 642 {
643 - #ifdef CONFIG_PPC_HAS_HASH_64K
643 + #ifdef CONFIG_PPC_64K_PAGES
644 644 patch_branch(ht64_call_hpte_insert1,
645 645 ppc_function_entry(ppc_md.hpte_insert),
646 646 BRANCH_SET_LINK);
···
653 653 patch_branch(ht64_call_hpte_updatepp,
654 654 ppc_function_entry(ppc_md.hpte_updatepp),
655 655 BRANCH_SET_LINK);
656 - #endif /* CONFIG_PPC_HAS_HASH_64K */
656 + #endif /* CONFIG_PPC_64K_PAGES */
657 657
658 658 patch_branch(htab_call_hpte_insert1,
659 659 ppc_function_entry(ppc_md.hpte_insert),
···
1151 1151 check_paca_psize(ea, mm, psize, user_region);
1152 1152 #endif /* CONFIG_PPC_64K_PAGES */
1153 1153
1154 - #ifdef CONFIG_PPC_HAS_HASH_64K
1154 + #ifdef CONFIG_PPC_64K_PAGES
1155 1155 if (psize == MMU_PAGE_64K)
1156 1156 rc = __hash_page_64K(ea, access, vsid, ptep, trap,
1157 1157 flags, ssize);
1158 1158 else
1159 - #endif /* CONFIG_PPC_HAS_HASH_64K */
1159 + #endif /* CONFIG_PPC_64K_PAGES */
1160 1160 {
1161 1161 int spp = subpage_protection(mm, ea);
1162 1162 if (access & spp)
···
1264 1264 update_flags |= HPTE_LOCAL_UPDATE;
1265 1265
1266 1266 /* Hash it in */
1267 - #ifdef CONFIG_PPC_HAS_HASH_64K
1267 + #ifdef CONFIG_PPC_64K_PAGES
1268 1268 if (mm->context.user_psize == MMU_PAGE_64K)
1269 1269 rc = __hash_page_64K(ea, access, vsid, ptep, trap,
1270 1270 update_flags, ssize);
1271 1271 else
1272 - #endif /* CONFIG_PPC_HAS_HASH_64K */
1272 + #endif /* CONFIG_PPC_64K_PAGES */
1273 1273 rc = __hash_page_4K(ea, access, vsid, ptep, trap, update_flags,
1274 1274 ssize, subpage_protection(mm, ea));
1275 1275
-8
arch/powerpc/mm/hugetlbpage.c
···
808 808 if ((mmu_psize = shift_to_mmu_psize(shift)) < 0)
809 809 return -EINVAL;
810 810
811 - #ifdef CONFIG_SPU_FS_64K_LS
812 - /* Disable support for 64K huge pages when 64K SPU local store
813 - * support is enabled as the current implementation conflicts.
814 - */
815 - if (shift == PAGE_SHIFT_64K)
816 - return -EINVAL;
817 - #endif /* CONFIG_SPU_FS_64K_LS */
818 -
819 811 BUG_ON(mmu_psize_defs[mmu_psize].shift != shift);
820 812
821 813 /* Return if huge page size has already been setup */
+7 -7
arch/powerpc/mm/mem.c
···
414 414 return;
415 415 }
416 416 #endif
417 - #ifdef CONFIG_BOOKE
418 - {
417 + #if defined(CONFIG_8xx) || defined(CONFIG_PPC64)
418 + /* On 8xx there is no need to kmap since highmem is not supported */
419 + __flush_dcache_icache(page_address(page));
420 + #else
421 + if (IS_ENABLED(CONFIG_BOOKE) || sizeof(phys_addr_t) > sizeof(void *)) {
419 422 void *start = kmap_atomic(page);
420 423 __flush_dcache_icache(start);
421 424 kunmap_atomic(start);
425 + } else {
426 + __flush_dcache_icache_phys(page_to_pfn(page) << PAGE_SHIFT);
422 427 }
423 - #elif defined(CONFIG_8xx) || defined(CONFIG_PPC64)
424 - /* On 8xx there is no need to kmap since highmem is not supported */
425 - __flush_dcache_icache(page_address(page));
426 - #else
427 - __flush_dcache_icache_phys(page_to_pfn(page) << PAGE_SHIFT);
428 428 #endif
429 429 }
430 430 EXPORT_SYMBOL(flush_dcache_icache_page);
+13 -3
arch/powerpc/mm/numa.c
···
225 225 for (i = 0; i < distance_ref_points_depth; i++) {
226 226 const __be32 *entry;
227 227
228 - entry = &associativity[be32_to_cpu(distance_ref_points[i])];
228 + entry = &associativity[be32_to_cpu(distance_ref_points[i]) - 1];
229 229 distance_lookup_table[nid][i] = of_read_number(entry, 1);
230 230 }
231 231 }
···
248 248 nid = -1;
249 249
250 250 if (nid > 0 &&
251 - of_read_number(associativity, 1) >= distance_ref_points_depth)
252 - initialize_distance_lookup_table(nid, associativity);
251 + of_read_number(associativity, 1) >= distance_ref_points_depth) {
252 + /*
253 + * Skip the length field and send start of associativity array
254 + */
255 + initialize_distance_lookup_table(nid, associativity + 1);
256 + }
253 257
254 258 out:
255 259 return nid;
···
511 507
512 508 if (nid == 0xffff || nid >= MAX_NUMNODES)
513 509 nid = default_nid;
510 +
511 + if (nid > 0) {
512 + index = drmem->aa_index * aa->array_sz;
513 + initialize_distance_lookup_table(nid,
514 + &aa->arrays[index]);
515 + }
514 516 }
515 517
516 518 return nid;
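The indexing fix above pairs two conventions: the `ibm,associativity` property begins with a length word, and the reference-point indices in `ibm,associativity-reference-points` are 1-based relative to the array that follows that length word. The caller now passes the array start (skipping the length) and the lookup subtracts 1 from each reference point. A self-contained sketch with made-up data:

```c
#include <assert.h>
#include <stdint.h>

/*
 * assoc_after_len points just past the leading length word of an
 * associativity property; ref_point is a 1-based index into the
 * domain entries that follow it, so it is converted with "- 1".
 * (Real properties are big-endian __be32; plain uint32_t here.)
 */
static uint32_t assoc_entry(const uint32_t *assoc_after_len,
			    uint32_t ref_point /* 1-based */)
{
	return assoc_after_len[ref_point - 1];
}
```

Before the fix the two off-by-ones were applied inconsistently, so the lookup read the wrong (or out-of-range) domain IDs into `distance_lookup_table`.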
-10
arch/powerpc/mm/pgtable_64.c
···
149 149 #endif /* !CONFIG_PPC_MMU_NOHASH */
150 150 }
151 151
152 - #ifdef CONFIG_PPC_BOOK3E_64
153 - /*
154 - * With hardware tablewalk, a sync is needed to ensure that
155 - * subsequent accesses see the PTE we just wrote. Unlike userspace
156 - * mappings, we can't tolerate spurious faults, so make sure
157 - * the new PTE will be seen the first time.
158 - */
159 - mb();
160 - #else
161 152 smp_wmb();
162 - #endif
163 153 return 0;
164 154 }
165 155
+18 -6
arch/powerpc/mm/slb.c
···
41 41 (((ssize) == MMU_SEGSIZE_256M)? ESID_MASK: ESID_MASK_1T)
42 42
43 43 static inline unsigned long mk_esid_data(unsigned long ea, int ssize,
44 - unsigned long slot)
44 + unsigned long entry)
45 45 {
46 - return (ea & slb_esid_mask(ssize)) | SLB_ESID_V | slot;
46 + return (ea & slb_esid_mask(ssize)) | SLB_ESID_V | entry;
47 47 }
48 48
49 49 static inline unsigned long mk_vsid_data(unsigned long ea, int ssize,
···
249 249 static inline void patch_slb_encoding(unsigned int *insn_addr,
250 250 unsigned int immed)
251 251 {
252 - int insn = (*insn_addr & 0xffff0000) | immed;
252 +
253 + /*
254 + * This function patches either an li or a cmpldi instruction with
255 + * a new immediate value. This relies on the fact that both li
256 + * (which is actually addi) and cmpldi both take a 16-bit immediate
257 + * value, and it is situated in the same location in the instruction,
258 + * ie. bits 16-31 (Big endian bit order) or the lower 16 bits.
259 + * The signedness of the immediate operand differs between the two
260 + * instructions however this code is only ever patching a small value,
261 + * much less than 1 << 15, so we can get away with it.
262 + * To patch the value we read the existing instruction, clear the
263 + * immediate value, and or in our new value, then write the instruction
264 + * back.
265 + */
266 + unsigned int insn = (*insn_addr & 0xffff0000) | immed;
253 267 patch_instruction(insn_addr, insn);
254 268 }
255 269
256 - extern u32 slb_compare_rr_to_size[];
257 270 extern u32 slb_miss_kernel_load_linear[];
258 271 extern u32 slb_miss_kernel_load_io[];
259 272 extern u32 slb_compare_rr_to_size[];
···
322 309 lflags = SLB_VSID_KERNEL | linear_llp;
323 310 vflags = SLB_VSID_KERNEL | vmalloc_llp;
324 311
325 - /* Invalidate the entire SLB (even slot 0) & all the ERATS */
312 + /* Invalidate the entire SLB (even entry 0) & all the ERATS */
326 313 asm volatile("isync":::"memory");
327 314 asm volatile("slbmte %0,%0"::"r" (0) : "memory");
328 315 asm volatile("isync; slbia; isync":::"memory");
329 316 create_shadowed_slbe(PAGE_OFFSET, mmu_kernel_ssize, lflags, 0);
330 -
331 317 create_shadowed_slbe(VMALLOC_START, mmu_kernel_ssize, vflags, 1);
332 318
333 319 /* For the boot cpu, we're running on the stack in init_thread_union,
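The patching operation that the new comment documents is a single mask-and-or: both `li` and `cmpldi` keep their 16-bit immediate in the low half of the instruction word, so the old immediate can be cleared and a new one OR-ed in. Isolated below (the sample opcode is an illustrative `li r10,0` encoding, not taken from the patch):

```c
#include <assert.h>
#include <stdint.h>

/*
 * What patch_slb_encoding() computes before calling
 * patch_instruction(): keep the top 16 bits (opcode + registers),
 * replace the bottom 16 bits (the immediate operand).
 */
static uint32_t patch_imm16(uint32_t insn, uint16_t immed)
{
	return (insn & 0xffff0000u) | immed;
}
```

As the comment notes, this ignores the differing signedness of the two instructions' immediates, which is fine only because the patched values stay well below 1 << 15.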
+5 -5
arch/powerpc/mm/tlb_low_64e.S
···
308 308 *
309 309 * MAS6:IND should be already set based on MAS4
310 310 */
311 - 1: lbarx r15,0,r11
312 311 lhz r10,PACAPACAINDEX(r13)
313 - cmpdi r15,0
314 - cmpdi cr1,r15,1 /* set cr1.eq = 0 for non-recursive */
315 312 addi r10,r10,1
313 + crclr cr1*4+eq /* set cr1.eq = 0 for non-recursive */
314 + 1: lbarx r15,0,r11
315 + cmpdi r15,0
316 316 bne 2f
317 317 stbcx. r10,0,r11
318 318 bne 1b
···
320 320 .subsection 1
321 321 2: cmpd cr1,r15,r10 /* recursive lock due to mcheck/crit/etc? */
322 322 beq cr1,3b /* unlock will happen if cr1.eq = 0 */
323 - lbz r15,0(r11)
323 + 10: lbz r15,0(r11)
324 324 cmpdi r15,0
325 - bne 2b
325 + bne 10b
326 326 b 1b
327 327 .previous
328 328
+2 -2
arch/powerpc/oprofile/op_model_power4.c
···
207 207 unsigned int mmcr0;
208 208
209 209 /* set the PMM bit (see comment below) */
210 - mtmsrd(mfmsr() | MSR_PMM);
210 + mtmsr(mfmsr() | MSR_PMM);
211 211
212 212 for (i = 0; i < cur_cpu_spec->num_pmcs; ++i) {
213 213 if (ctr[i].enabled) {
···
377 377 is_kernel = get_kernel(pc, mmcra);
378 378
379 379 /* set the PMM bit (see comment below) */
380 - mtmsrd(mfmsr() | MSR_PMM);
380 + mtmsr(mfmsr() | MSR_PMM);
381 381
382 382 /* Check that the SIAR valid bit in MMCRA is set to 1. */
383 383 if ((mmcra & MMCRA_SIAR_VALID_MASK) == MMCRA_SIAR_VALID_MASK)
+2 -2
arch/powerpc/perf/core-book3s.c
···
53 53
54 54 /* BHRB bits */
55 55 u64 bhrb_filter; /* BHRB HW branch filter */
56 - int bhrb_users;
56 + unsigned int bhrb_users;
57 57 void *bhrb_context;
58 58 struct perf_branch_stack bhrb_stack;
59 59 struct perf_branch_entry bhrb_entries[BHRB_MAX_ENTRIES];
···
369 369 if (!ppmu->bhrb_nr)
370 370 return;
371 371
372 + WARN_ON_ONCE(!cpuhw->bhrb_users);
372 373 cpuhw->bhrb_users--;
373 - WARN_ON_ONCE(cpuhw->bhrb_users < 0);
374 374 perf_sched_cb_dec(event->ctx->pmu);
375 375
376 376 if (!cpuhw->disabled && !cpuhw->bhrb_users) {
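The reordering above matters once `bhrb_users` is unsigned: `bhrb_users < 0` can never be true after the decrement, because `0 - 1` wraps to `UINT_MAX` instead of going negative. The imbalance check therefore has to run before decrementing. A minimal demonstration of the wraparound (the warning flag stands in for `WARN_ON_ONCE`):

```c
#include <assert.h>
#include <limits.h>

/*
 * Decrement an unsigned reference count, flagging an imbalance the way
 * the patched code does: check for zero *before* the decrement, since
 * afterwards the wrapped value looks like a huge positive count.
 */
static unsigned int bhrb_users_dec(unsigned int users, int *warned)
{
	*warned = (users == 0); /* stands in for WARN_ON_ONCE(!bhrb_users) */
	return users - 1;
}
```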
+11 -13
arch/powerpc/perf/hv-24x7.c
···
416 416 }
417 417
418 418 static struct attribute *event_to_desc_attr(struct hv_24x7_event_data *event,
419 - int nonce)
419 + int nonce)
420 420 {
421 421 int nl, dl;
422 422 char *name = event_name(event, &nl);
···
444 444 }
445 445
446 446 static ssize_t event_data_to_attrs(unsigned ix, struct attribute **attrs,
447 - struct hv_24x7_event_data *event, int nonce)
447 + struct hv_24x7_event_data *event, int nonce)
448 448 {
449 449 unsigned i;
450 450
···
512 512 }
513 513
514 514 static int ev_uniq_ord(const void *v1, size_t s1, unsigned d1, const void *v2,
515 - size_t s2, unsigned d2)
515 + size_t s2, unsigned d2)
516 516 {
517 517 int r = memord(v1, s1, v2, s2);
518 518
···
526 526 }
527 527
528 528 static int event_uniq_add(struct rb_root *root, const char *name, int nl,
529 - unsigned domain)
529 + unsigned domain)
530 530 {
531 531 struct rb_node **new = &(root->rb_node), *parent = NULL;
532 532 struct event_uniq *data;
···
650 650 #define MAX_4K (SIZE_MAX / 4096)
651 651
652 652 static int create_events_from_catalog(struct attribute ***events_,
653 - struct attribute ***event_descs_,
654 - struct attribute ***event_long_descs_)
653 + struct attribute ***event_descs_,
654 + struct attribute ***event_long_descs_)
655 655 {
656 656 unsigned long hret;
657 657 size_t catalog_len, catalog_page_len, event_entry_count,
···
1008 1008 };
1009 1009
1010 1010 static void log_24x7_hcall(struct hv_24x7_request_buffer *request_buffer,
1011 - struct hv_24x7_data_result_buffer *result_buffer,
1012 - unsigned long ret)
1011 + struct hv_24x7_data_result_buffer *result_buffer,
1012 + unsigned long ret)
1013 1013 {
1014 1014 struct hv_24x7_request *req;
1015 1015
···
1026 1026 * Start the process for a new H_GET_24x7_DATA hcall.
1027 1027 */
1028 1028 static void init_24x7_request(struct hv_24x7_request_buffer *request_buffer,
1029 - struct hv_24x7_data_result_buffer *result_buffer)
1029 + struct hv_24x7_data_result_buffer *result_buffer)
1030 1030 {
1031 1031
1032 1032 memset(request_buffer, 0, 4096);
···
1041 1041 * by 'init_24x7_request()' and 'add_event_to_24x7_request()'.
1042 1042 */
1043 1043 static int make_24x7_request(struct hv_24x7_request_buffer *request_buffer,
1044 - struct hv_24x7_data_result_buffer *result_buffer)
1044 + struct hv_24x7_data_result_buffer *result_buffer)
1045 1045 {
1046 1046 unsigned long ret;
1047 1047
···
1104 1104 unsigned long ret;
1105 1105 struct hv_24x7_request_buffer *request_buffer;
1106 1106 struct hv_24x7_data_result_buffer *result_buffer;
1107 - struct hv_24x7_result *resb;
1108 1107
1109 1108 BUILD_BUG_ON(sizeof(*request_buffer) > 4096);
1110 1109 BUILD_BUG_ON(sizeof(*result_buffer) > 4096);
···
1124 1125 }
1125 1126
1126 1127 /* process result from hcall */
1127 - resb = &result_buffer->results[0];
1128 - *count = be64_to_cpu(resb->elements[0].element_data[0]);
1128 + *count = be64_to_cpu(result_buffer->results[0].elements[0].element_data[0]);
1129 1129
1130 1130 out:
1131 1131 put_cpu_var(hv_24x7_reqb);
arch/powerpc/platforms/512x/Kconfig | +2 -2
··· 7 7 select PPC_PCI_CHOICE 8 8 select FSL_PCI if PCI 9 9 select ARCH_WANT_OPTIONAL_GPIOLIB 10 - select USB_EHCI_BIG_ENDIAN_MMIO 11 - select USB_EHCI_BIG_ENDIAN_DESC 10 + select USB_EHCI_BIG_ENDIAN_MMIO if USB_EHCI_HCD 11 + select USB_EHCI_BIG_ENDIAN_DESC if USB_EHCI_HCD 12 12 13 13 config MPC5121_ADS 14 14 bool "Freescale MPC5121E ADS"
arch/powerpc/platforms/85xx/c293pcie.c | -4
··· 66 66 .probe = c293_pcie_probe, 67 67 .setup_arch = c293_pcie_setup_arch, 68 68 .init_IRQ = c293_pcie_pic_init, 69 - #ifdef CONFIG_PCI 70 - .pcibios_fixup_bus = fsl_pcibios_fixup_bus, 71 - .pcibios_fixup_phb = fsl_pcibios_fixup_phb, 72 - #endif 73 69 .get_irq = mpic_get_irq, 74 70 .restart = fsl_rstcr_restart, 75 71 .calibrate_decr = generic_calibrate_decr,
arch/powerpc/platforms/85xx/corenet_generic.c | +2
··· 153 153 "fsl,T1023RDB", 154 154 "fsl,T1024QDS", 155 155 "fsl,T1024RDB", 156 + "fsl,T1040D4RDB", 157 + "fsl,T1042D4RDB", 156 158 "fsl,T1040QDS", 157 159 "fsl,T1042QDS", 158 160 "fsl,T1040RDB",
arch/powerpc/platforms/cell/Kconfig | -15
··· 57 57 Units on machines implementing the Broadband Processor 58 58 Architecture. 59 59 60 - config SPU_FS_64K_LS 61 - bool "Use 64K pages to map SPE local store" 62 - # we depend on PPC_MM_SLICES for now rather than selecting 63 - # it because we depend on hugetlbfs hooks being present. We 64 - # will fix that when the generic code has been improved to 65 - # not require hijacking hugetlbfs hooks. 66 - depends on SPU_FS && PPC_MM_SLICES && !PPC_64K_PAGES 67 - default y 68 - select PPC_HAS_HASH_64K 69 - help 70 - This option causes SPE local stores to be mapped in process 71 - address spaces using 64K pages while the rest of the kernel 72 - uses 4K pages. This can improve performances of applications 73 - using multiple SPEs by lowering the TLB pressure on them. 74 - 75 60 config SPU_BASE 76 61 bool 77 62 default n
arch/powerpc/platforms/cell/spufs/file.c | -55
··· 239 239 unsigned long address = (unsigned long)vmf->virtual_address; 240 240 unsigned long pfn, offset; 241 241 242 - #ifdef CONFIG_SPU_FS_64K_LS 243 - struct spu_state *csa = &ctx->csa; 244 - int psize; 245 - 246 - /* Check what page size we are using */ 247 - psize = get_slice_psize(vma->vm_mm, address); 248 - 249 - /* Some sanity checking */ 250 - BUG_ON(csa->use_big_pages != (psize == MMU_PAGE_64K)); 251 - 252 - /* Wow, 64K, cool, we need to align the address though */ 253 - if (csa->use_big_pages) { 254 - BUG_ON(vma->vm_start & 0xffff); 255 - address &= ~0xfffful; 256 - } 257 - #endif /* CONFIG_SPU_FS_64K_LS */ 258 - 259 242 offset = vmf->pgoff << PAGE_SHIFT; 260 243 if (offset >= LS_SIZE) 261 244 return VM_FAULT_SIGBUS; ··· 293 310 294 311 static int spufs_mem_mmap(struct file *file, struct vm_area_struct *vma) 295 312 { 296 - #ifdef CONFIG_SPU_FS_64K_LS 297 - struct spu_context *ctx = file->private_data; 298 - struct spu_state *csa = &ctx->csa; 299 - 300 - /* Sanity check VMA alignment */ 301 - if (csa->use_big_pages) { 302 - pr_debug("spufs_mem_mmap 64K, start=0x%lx, end=0x%lx," 303 - " pgoff=0x%lx\n", vma->vm_start, vma->vm_end, 304 - vma->vm_pgoff); 305 - if (vma->vm_start & 0xffff) 306 - return -EINVAL; 307 - if (vma->vm_pgoff & 0xf) 308 - return -EINVAL; 309 - } 310 - #endif /* CONFIG_SPU_FS_64K_LS */ 311 - 312 313 if (!(vma->vm_flags & VM_SHARED)) 313 314 return -EINVAL; 314 315 ··· 303 336 return 0; 304 337 } 305 338 306 - #ifdef CONFIG_SPU_FS_64K_LS 307 - static unsigned long spufs_get_unmapped_area(struct file *file, 308 - unsigned long addr, unsigned long len, unsigned long pgoff, 309 - unsigned long flags) 310 - { 311 - struct spu_context *ctx = file->private_data; 312 - struct spu_state *csa = &ctx->csa; 313 - 314 - /* If not using big pages, fallback to normal MM g_u_a */ 315 - if (!csa->use_big_pages) 316 - return current->mm->get_unmapped_area(file, addr, len, 317 - pgoff, flags); 318 - 319 - /* Else, try to obtain a 64K pages slice */ 320 
- return slice_get_unmapped_area(addr, len, flags, 321 - MMU_PAGE_64K, 1); 322 - } 323 - #endif /* CONFIG_SPU_FS_64K_LS */ 324 - 325 339 static const struct file_operations spufs_mem_fops = { 326 340 .open = spufs_mem_open, 327 341 .release = spufs_mem_release, ··· 310 362 .write = spufs_mem_write, 311 363 .llseek = generic_file_llseek, 312 364 .mmap = spufs_mem_mmap, 313 - #ifdef CONFIG_SPU_FS_64K_LS 314 - .get_unmapped_area = spufs_get_unmapped_area, 315 - #endif 316 365 }; 317 366 318 367 static int spufs_ps_fault(struct vm_area_struct *vma,
arch/powerpc/platforms/cell/spufs/lscsa_alloc.c | +2 -122
··· 31 31 32 32 #include "spufs.h" 33 33 34 - static int spu_alloc_lscsa_std(struct spu_state *csa) 34 + int spu_alloc_lscsa(struct spu_state *csa) 35 35 { 36 36 struct spu_lscsa *lscsa; 37 37 unsigned char *p; ··· 48 48 return 0; 49 49 } 50 50 51 - static void spu_free_lscsa_std(struct spu_state *csa) 51 + void spu_free_lscsa(struct spu_state *csa) 52 52 { 53 53 /* Clear reserved bit before vfree. */ 54 54 unsigned char *p; ··· 61 61 62 62 vfree(csa->lscsa); 63 63 } 64 - 65 - #ifdef CONFIG_SPU_FS_64K_LS 66 - 67 - #define SPU_64K_PAGE_SHIFT 16 68 - #define SPU_64K_PAGE_ORDER (SPU_64K_PAGE_SHIFT - PAGE_SHIFT) 69 - #define SPU_64K_PAGE_COUNT (1ul << SPU_64K_PAGE_ORDER) 70 - 71 - int spu_alloc_lscsa(struct spu_state *csa) 72 - { 73 - struct page **pgarray; 74 - unsigned char *p; 75 - int i, j, n_4k; 76 - 77 - /* Check availability of 64K pages */ 78 - if (!spu_64k_pages_available()) 79 - goto fail; 80 - 81 - csa->use_big_pages = 1; 82 - 83 - pr_debug("spu_alloc_lscsa(csa=0x%p), trying to allocate 64K pages\n", 84 - csa); 85 - 86 - /* First try to allocate our 64K pages. We need 5 of them 87 - * with the current implementation. In the future, we should try 88 - * to separate the lscsa with the actual local store image, thus 89 - * allowing us to require only 4 64K pages per context 90 - */ 91 - for (i = 0; i < SPU_LSCSA_NUM_BIG_PAGES; i++) { 92 - /* XXX This is likely to fail, we should use a special pool 93 - * similar to what hugetlbfs does. 94 - */ 95 - csa->lscsa_pages[i] = alloc_pages(GFP_KERNEL, 96 - SPU_64K_PAGE_ORDER); 97 - if (csa->lscsa_pages[i] == NULL) 98 - goto fail; 99 - } 100 - 101 - pr_debug(" success ! creating vmap...\n"); 102 - 103 - /* Now we need to create a vmalloc mapping of these for the kernel 104 - * and SPU context switch code to use. 
Currently, we stick to a 105 - * normal kernel vmalloc mapping, which in our case will be 4K 106 - */ 107 - n_4k = SPU_64K_PAGE_COUNT * SPU_LSCSA_NUM_BIG_PAGES; 108 - pgarray = kmalloc(sizeof(struct page *) * n_4k, GFP_KERNEL); 109 - if (pgarray == NULL) 110 - goto fail; 111 - for (i = 0; i < SPU_LSCSA_NUM_BIG_PAGES; i++) 112 - for (j = 0; j < SPU_64K_PAGE_COUNT; j++) 113 - /* We assume all the struct page's are contiguous 114 - * which should be hopefully the case for an order 4 115 - * allocation.. 116 - */ 117 - pgarray[i * SPU_64K_PAGE_COUNT + j] = 118 - csa->lscsa_pages[i] + j; 119 - csa->lscsa = vmap(pgarray, n_4k, VM_USERMAP, PAGE_KERNEL); 120 - kfree(pgarray); 121 - if (csa->lscsa == NULL) 122 - goto fail; 123 - 124 - memset(csa->lscsa, 0, sizeof(struct spu_lscsa)); 125 - 126 - /* Set LS pages reserved to allow for user-space mapping. 127 - * 128 - * XXX isn't that a bit obsolete ? I think we should just 129 - * make sure the page count is high enough. Anyway, won't harm 130 - * for now 131 - */ 132 - for (p = csa->lscsa->ls; p < csa->lscsa->ls + LS_SIZE; p += PAGE_SIZE) 133 - SetPageReserved(vmalloc_to_page(p)); 134 - 135 - pr_debug(" all good !\n"); 136 - 137 - return 0; 138 - fail: 139 - pr_debug("spufs: failed to allocate lscsa 64K pages, falling back\n"); 140 - spu_free_lscsa(csa); 141 - return spu_alloc_lscsa_std(csa); 142 - } 143 - 144 - void spu_free_lscsa(struct spu_state *csa) 145 - { 146 - unsigned char *p; 147 - int i; 148 - 149 - if (!csa->use_big_pages) { 150 - spu_free_lscsa_std(csa); 151 - return; 152 - } 153 - csa->use_big_pages = 0; 154 - 155 - if (csa->lscsa == NULL) 156 - goto free_pages; 157 - 158 - for (p = csa->lscsa->ls; p < csa->lscsa->ls + LS_SIZE; p += PAGE_SIZE) 159 - ClearPageReserved(vmalloc_to_page(p)); 160 - 161 - vunmap(csa->lscsa); 162 - csa->lscsa = NULL; 163 - 164 - free_pages: 165 - 166 - for (i = 0; i < SPU_LSCSA_NUM_BIG_PAGES; i++) 167 - if (csa->lscsa_pages[i]) 168 - __free_pages(csa->lscsa_pages[i], 
SPU_64K_PAGE_ORDER); 169 - } 170 - 171 - #else /* CONFIG_SPU_FS_64K_LS */ 172 - 173 - int spu_alloc_lscsa(struct spu_state *csa) 174 - { 175 - return spu_alloc_lscsa_std(csa); 176 - } 177 - 178 - void spu_free_lscsa(struct spu_state *csa) 179 - { 180 - spu_free_lscsa_std(csa); 181 - } 182 - 183 - #endif /* !defined(CONFIG_SPU_FS_64K_LS) */
arch/powerpc/platforms/powernv/eeh-powernv.c | +10 -2
··· 1394 1394 */ 1395 1395 if (pnv_eeh_get_pe(hose, 1396 1396 be64_to_cpu(frozen_pe_no), pe)) { 1397 - /* Try best to clear it */ 1398 1397 pr_info("EEH: Clear non-existing PHB#%x-PE#%llx\n", 1399 - hose->global_number, frozen_pe_no); 1398 + hose->global_number, be64_to_cpu(frozen_pe_no)); 1400 1399 pr_info("EEH: PHB location: %s\n", 1401 1400 eeh_pe_loc_get(phb_pe)); 1401 + 1402 + /* Dump PHB diag-data */ 1403 + rc = opal_pci_get_phb_diag_data2(phb->opal_id, 1404 + phb->diag.blob, PNV_PCI_DIAG_BUF_SIZE); 1405 + if (rc == OPAL_SUCCESS) 1406 + pnv_pci_dump_phb_diag_data(hose, 1407 + phb->diag.blob); 1408 + 1409 + /* Try best to clear it */ 1402 1410 opal_pci_eeh_freeze_clear(phb->opal_id, 1403 1411 frozen_pe_no, 1404 1412 OPAL_EEH_ACTION_CLEAR_FREEZE_ALL);
arch/powerpc/platforms/powernv/opal-hmi.c | +175 -2
··· 35 35 struct list_head list; 36 36 struct OpalHMIEvent hmi_evt; 37 37 }; 38 + 39 + struct xstop_reason { 40 + uint32_t xstop_reason; 41 + const char *unit_failed; 42 + const char *description; 43 + }; 44 + 38 45 static LIST_HEAD(opal_hmi_evt_list); 39 46 static DEFINE_SPINLOCK(opal_hmi_evt_lock); 47 + 48 + static void print_core_checkstop_reason(const char *level, 49 + struct OpalHMIEvent *hmi_evt) 50 + { 51 + int i; 52 + static const struct xstop_reason xstop_reason[] = { 53 + { CORE_CHECKSTOP_IFU_REGFILE, "IFU", 54 + "RegFile core check stop" }, 55 + { CORE_CHECKSTOP_IFU_LOGIC, "IFU", "Logic core check stop" }, 56 + { CORE_CHECKSTOP_PC_DURING_RECOV, "PC", 57 + "Core checkstop during recovery" }, 58 + { CORE_CHECKSTOP_ISU_REGFILE, "ISU", 59 + "RegFile core check stop (mapper error)" }, 60 + { CORE_CHECKSTOP_ISU_LOGIC, "ISU", "Logic core check stop" }, 61 + { CORE_CHECKSTOP_FXU_LOGIC, "FXU", "Logic core check stop" }, 62 + { CORE_CHECKSTOP_VSU_LOGIC, "VSU", "Logic core check stop" }, 63 + { CORE_CHECKSTOP_PC_RECOV_IN_MAINT_MODE, "PC", 64 + "Recovery in maintenance mode" }, 65 + { CORE_CHECKSTOP_LSU_REGFILE, "LSU", 66 + "RegFile core check stop" }, 67 + { CORE_CHECKSTOP_PC_FWD_PROGRESS, "PC", 68 + "Forward Progress Error" }, 69 + { CORE_CHECKSTOP_LSU_LOGIC, "LSU", "Logic core check stop" }, 70 + { CORE_CHECKSTOP_PC_LOGIC, "PC", "Logic core check stop" }, 71 + { CORE_CHECKSTOP_PC_HYP_RESOURCE, "PC", 72 + "Hypervisor Resource error - core check stop" }, 73 + { CORE_CHECKSTOP_PC_HANG_RECOV_FAILED, "PC", 74 + "Hang Recovery Failed (core check stop)" }, 75 + { CORE_CHECKSTOP_PC_AMBI_HANG_DETECTED, "PC", 76 + "Ambiguous Hang Detected (unknown source)" }, 77 + { CORE_CHECKSTOP_PC_DEBUG_TRIG_ERR_INJ, "PC", 78 + "Debug Trigger Error inject" }, 79 + { CORE_CHECKSTOP_PC_SPRD_HYP_ERR_INJ, "PC", 80 + "Hypervisor check stop via SPRC/SPRD" }, 81 + }; 82 + 83 + /* Validity check */ 84 + if (!hmi_evt->u.xstop_error.xstop_reason) { 85 + printk("%s Unknown Core check stop.\n", 
level); 86 + return; 87 + } 88 + 89 + printk("%s CPU PIR: %08x\n", level, 90 + be32_to_cpu(hmi_evt->u.xstop_error.u.pir)); 91 + for (i = 0; i < ARRAY_SIZE(xstop_reason); i++) 92 + if (be32_to_cpu(hmi_evt->u.xstop_error.xstop_reason) & 93 + xstop_reason[i].xstop_reason) 94 + printk("%s [Unit: %-3s] %s\n", level, 95 + xstop_reason[i].unit_failed, 96 + xstop_reason[i].description); 97 + } 98 + 99 + static void print_nx_checkstop_reason(const char *level, 100 + struct OpalHMIEvent *hmi_evt) 101 + { 102 + int i; 103 + static const struct xstop_reason xstop_reason[] = { 104 + { NX_CHECKSTOP_SHM_INVAL_STATE_ERR, "DMA & Engine", 105 + "SHM invalid state error" }, 106 + { NX_CHECKSTOP_DMA_INVAL_STATE_ERR_1, "DMA & Engine", 107 + "DMA invalid state error bit 15" }, 108 + { NX_CHECKSTOP_DMA_INVAL_STATE_ERR_2, "DMA & Engine", 109 + "DMA invalid state error bit 16" }, 110 + { NX_CHECKSTOP_DMA_CH0_INVAL_STATE_ERR, "DMA & Engine", 111 + "Channel 0 invalid state error" }, 112 + { NX_CHECKSTOP_DMA_CH1_INVAL_STATE_ERR, "DMA & Engine", 113 + "Channel 1 invalid state error" }, 114 + { NX_CHECKSTOP_DMA_CH2_INVAL_STATE_ERR, "DMA & Engine", 115 + "Channel 2 invalid state error" }, 116 + { NX_CHECKSTOP_DMA_CH3_INVAL_STATE_ERR, "DMA & Engine", 117 + "Channel 3 invalid state error" }, 118 + { NX_CHECKSTOP_DMA_CH4_INVAL_STATE_ERR, "DMA & Engine", 119 + "Channel 4 invalid state error" }, 120 + { NX_CHECKSTOP_DMA_CH5_INVAL_STATE_ERR, "DMA & Engine", 121 + "Channel 5 invalid state error" }, 122 + { NX_CHECKSTOP_DMA_CH6_INVAL_STATE_ERR, "DMA & Engine", 123 + "Channel 6 invalid state error" }, 124 + { NX_CHECKSTOP_DMA_CH7_INVAL_STATE_ERR, "DMA & Engine", 125 + "Channel 7 invalid state error" }, 126 + { NX_CHECKSTOP_DMA_CRB_UE, "DMA & Engine", 127 + "UE error on CRB(CSB address, CCB)" }, 128 + { NX_CHECKSTOP_DMA_CRB_SUE, "DMA & Engine", 129 + "SUE error on CRB(CSB address, CCB)" }, 130 + { NX_CHECKSTOP_PBI_ISN_UE, "PowerBus Interface", 131 + "CRB Kill ISN received while holding ISN with UE error" 
}, 132 + }; 133 + 134 + /* Validity check */ 135 + if (!hmi_evt->u.xstop_error.xstop_reason) { 136 + printk("%s Unknown NX check stop.\n", level); 137 + return; 138 + } 139 + 140 + printk("%s NX checkstop on CHIP ID: %x\n", level, 141 + be32_to_cpu(hmi_evt->u.xstop_error.u.chip_id)); 142 + for (i = 0; i < ARRAY_SIZE(xstop_reason); i++) 143 + if (be32_to_cpu(hmi_evt->u.xstop_error.xstop_reason) & 144 + xstop_reason[i].xstop_reason) 145 + printk("%s [Unit: %-3s] %s\n", level, 146 + xstop_reason[i].unit_failed, 147 + xstop_reason[i].description); 148 + } 149 + 150 + static void print_checkstop_reason(const char *level, 151 + struct OpalHMIEvent *hmi_evt) 152 + { 153 + switch (hmi_evt->u.xstop_error.xstop_type) { 154 + case CHECKSTOP_TYPE_CORE: 155 + print_core_checkstop_reason(level, hmi_evt); 156 + break; 157 + case CHECKSTOP_TYPE_NX: 158 + print_nx_checkstop_reason(level, hmi_evt); 159 + break; 160 + case CHECKSTOP_TYPE_UNKNOWN: 161 + printk("%s Unknown Malfunction Alert.\n", level); 162 + break; 163 + } 164 + } 40 165 41 166 static void print_hmi_event_info(struct OpalHMIEvent *hmi_evt) 42 167 { ··· 220 95 (hmi_evt->type == OpalHMI_ERROR_TFMR_PARITY)) 221 96 printk("%s TFMR: %016llx\n", level, 222 97 be64_to_cpu(hmi_evt->tfmr)); 98 + 99 + if (hmi_evt->version < OpalHMIEvt_V2) 100 + return; 101 + 102 + /* OpalHMIEvt_V2 and above provides reason for malfunction alert. */ 103 + if (hmi_evt->type == OpalHMI_ERROR_MALFUNC_ALERT) 104 + print_checkstop_reason(level, hmi_evt); 223 105 } 224 106 225 107 static void hmi_event_handler(struct work_struct *work) ··· 235 103 struct OpalHMIEvent *hmi_evt; 236 104 struct OpalHmiEvtNode *msg_node; 237 105 uint8_t disposition; 106 + struct opal_msg msg; 107 + int unrecoverable = 0; 238 108 239 109 spin_lock_irqsave(&opal_hmi_evt_lock, flags); 240 110 while (!list_empty(&opal_hmi_evt_list)) { ··· 252 118 253 119 /* 254 120 * Check if HMI event has been recovered or not. If not 255 - * then we can't continue, invoke panic. 
121 + * then kernel can't continue, we need to panic. 122 + * But before we do that, display all the HMI event 123 + * available on the list and set unrecoverable flag to 1. 256 124 */ 257 125 if (disposition != OpalHMI_DISPOSITION_RECOVERED) 258 - panic("Unrecoverable HMI exception"); 126 + unrecoverable = 1; 259 127 260 128 spin_lock_irqsave(&opal_hmi_evt_lock, flags); 261 129 } 262 130 spin_unlock_irqrestore(&opal_hmi_evt_lock, flags); 131 + 132 + if (unrecoverable) { 133 + int ret; 134 + 135 + /* Pull all HMI events from OPAL before we panic. */ 136 + while (opal_get_msg(__pa(&msg), sizeof(msg)) == OPAL_SUCCESS) { 137 + u32 type; 138 + 139 + type = be32_to_cpu(msg.msg_type); 140 + 141 + /* skip if not HMI event */ 142 + if (type != OPAL_MSG_HMI_EVT) 143 + continue; 144 + 145 + /* HMI event info starts from param[0] */ 146 + hmi_evt = (struct OpalHMIEvent *)&msg.params[0]; 147 + print_hmi_event_info(hmi_evt); 148 + } 149 + 150 + /* 151 + * Unrecoverable HMI exception. We need to inform BMC/OCC 152 + * about this error so that it can collect relevant data 153 + * for error analysis before rebooting. 154 + */ 155 + ret = opal_cec_reboot2(OPAL_REBOOT_PLATFORM_ERROR, 156 + "Unrecoverable HMI exception"); 157 + if (ret == OPAL_UNSUPPORTED) { 158 + pr_emerg("Reboot type %d not supported\n", 159 + OPAL_REBOOT_PLATFORM_ERROR); 160 + } 161 + 162 + /* 163 + * Fall through and panic if opal_cec_reboot2() returns 164 + * OPAL_UNSUPPORTED. 165 + */ 166 + panic("Unrecoverable HMI exception"); 167 + } 263 168 } 264 169 265 170 static DECLARE_WORK(hmi_event_work, hmi_event_handler);
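The checkstop reporting added above is table-driven: every bit set in xstop_error.xstop_reason is matched against a reason table and printed with the name of the failing unit, so multiple simultaneous causes each get a line. A minimal standalone sketch of that lookup, using hypothetical bit values rather than the real CORE_CHECKSTOP_* constants from the OPAL headers:

```c
#include <stdio.h>
#include <stdint.h>

/* Hypothetical reason bits for illustration; the real CORE_CHECKSTOP_*
 * values are defined in the OPAL API headers. */
#define XSTOP_IFU_LOGIC	0x00000001u
#define XSTOP_FXU_LOGIC	0x00000002u
#define XSTOP_PC_LOGIC	0x00000004u

struct xstop_reason {
	uint32_t mask;
	const char *unit;
	const char *desc;
};

static const struct xstop_reason reasons[] = {
	{ XSTOP_IFU_LOGIC, "IFU", "Logic core check stop" },
	{ XSTOP_FXU_LOGIC, "FXU", "Logic core check stop" },
	{ XSTOP_PC_LOGIC,  "PC",  "Logic core check stop" },
};

/* Print one line per reason bit set in 'xstop'; return how many matched. */
static int decode_xstop(uint32_t xstop)
{
	int i, n = 0;

	for (i = 0; i < (int)(sizeof(reasons) / sizeof(reasons[0])); i++) {
		if (xstop & reasons[i].mask) {
			printf("[Unit: %-3s] %s\n",
			       reasons[i].unit, reasons[i].desc);
			n++;
		}
	}
	return n;
}
```

The same shape is used for both the core and NX tables in the patch, which keeps the decode loop identical and pushes all knowledge into data.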
arch/powerpc/platforms/powernv/opal-power.c | +131 -18
··· 9 9 * 2 of the License, or (at your option) any later version. 10 10 */ 11 11 12 + #define pr_fmt(fmt) "opal-power: " fmt 13 + 12 14 #include <linux/kernel.h> 13 15 #include <linux/reboot.h> 14 16 #include <linux/notifier.h> 17 + #include <linux/of.h> 15 18 16 19 #include <asm/opal.h> 17 20 #include <asm/machdep.h> ··· 22 19 #define SOFT_OFF 0x00 23 20 #define SOFT_REBOOT 0x01 24 21 25 - static int opal_power_control_event(struct notifier_block *nb, 26 - unsigned long msg_type, void *msg) 22 + /* Detect EPOW event */ 23 + static bool detect_epow(void) 27 24 { 28 - struct opal_msg *power_msg = msg; 25 + u16 epow; 26 + int i, rc; 27 + __be16 epow_classes; 28 + __be16 opal_epow_status[OPAL_SYSEPOW_MAX] = {0}; 29 + 30 + /* 31 + * Check for EPOW event. Kernel sends supported EPOW classes info 32 + * to OPAL. OPAL returns EPOW info along with classes present. 33 + */ 34 + epow_classes = cpu_to_be16(OPAL_SYSEPOW_MAX); 35 + rc = opal_get_epow_status(opal_epow_status, &epow_classes); 36 + if (rc != OPAL_SUCCESS) { 37 + pr_err("Failed to get EPOW event information\n"); 38 + return false; 39 + } 40 + 41 + /* Look for EPOW events present */ 42 + for (i = 0; i < be16_to_cpu(epow_classes); i++) { 43 + epow = be16_to_cpu(opal_epow_status[i]); 44 + 45 + /* Filter events which do not need shutdown. 
*/ 46 + if (i == OPAL_SYSEPOW_POWER) 47 + epow &= ~(OPAL_SYSPOWER_CHNG | OPAL_SYSPOWER_FAIL | 48 + OPAL_SYSPOWER_INCL); 49 + if (epow) 50 + return true; 51 + } 52 + 53 + return false; 54 + } 55 + 56 + /* Check for existing EPOW, DPO events */ 57 + static bool poweroff_pending(void) 58 + { 59 + int rc; 60 + __be64 opal_dpo_timeout; 61 + 62 + /* Check for DPO event */ 63 + rc = opal_get_dpo_status(&opal_dpo_timeout); 64 + if (rc == OPAL_SUCCESS) { 65 + pr_info("Existing DPO event detected.\n"); 66 + return true; 67 + } 68 + 69 + /* Check for EPOW event */ 70 + if (detect_epow()) { 71 + pr_info("Existing EPOW event detected.\n"); 72 + return true; 73 + } 74 + 75 + return false; 76 + } 77 + 78 + /* OPAL power-control events notifier */ 79 + static int opal_power_control_event(struct notifier_block *nb, 80 + unsigned long msg_type, void *msg) 81 + { 29 82 uint64_t type; 30 83 31 - type = be64_to_cpu(power_msg->params[0]); 32 - 33 - switch (type) { 34 - case SOFT_REBOOT: 35 - pr_info("OPAL: reboot requested\n"); 36 - orderly_reboot(); 84 + switch (msg_type) { 85 + case OPAL_MSG_EPOW: 86 + if (detect_epow()) { 87 + pr_info("EPOW msg received. Powering off system\n"); 88 + orderly_poweroff(true); 89 + } 37 90 break; 38 - case SOFT_OFF: 39 - pr_info("OPAL: poweroff requested\n"); 91 + case OPAL_MSG_DPO: 92 + pr_info("DPO msg received. 
Powering off system\n"); 40 93 orderly_poweroff(true); 41 94 break; 95 + case OPAL_MSG_SHUTDOWN: 96 + type = be64_to_cpu(((struct opal_msg *)msg)->params[0]); 97 + switch (type) { 98 + case SOFT_REBOOT: 99 + pr_info("Reboot requested\n"); 100 + orderly_reboot(); 101 + break; 102 + case SOFT_OFF: 103 + pr_info("Poweroff requested\n"); 104 + orderly_poweroff(true); 105 + break; 106 + default: 107 + pr_err("Unknown power-control type %llu\n", type); 108 + } 109 + break; 42 110 default: 43 - pr_err("OPAL: power control type unexpected %016llx\n", type); 111 + pr_err("Unknown OPAL message type %lu\n", msg_type); 44 112 } 45 113 46 114 return 0; 47 115 } 48 116 117 + /* OPAL EPOW event notifier block */ 118 + static struct notifier_block opal_epow_nb = { 119 + .notifier_call = opal_power_control_event, 120 + .next = NULL, 121 + .priority = 0, 122 + }; 123 + 124 + /* OPAL DPO event notifier block */ 125 + static struct notifier_block opal_dpo_nb = { 126 + .notifier_call = opal_power_control_event, 127 + .next = NULL, 128 + .priority = 0, 129 + }; 130 + 131 + /* OPAL power-control event notifier block */ 49 132 static struct notifier_block opal_power_control_nb = { 50 133 .notifier_call = opal_power_control_event, 51 134 .next = NULL, ··· 140 51 141 52 static int __init opal_power_control_init(void) 142 53 { 143 - int ret; 54 + int ret, supported = 0; 55 + struct device_node *np; 144 56 57 + /* Register OPAL power-control events notifier */ 145 58 ret = opal_message_notifier_register(OPAL_MSG_SHUTDOWN, 146 - &opal_power_control_nb); 147 - if (ret) { 148 - pr_err("%s: Can't register OPAL event notifier (%d)\n", 149 - __func__, ret); 150 - return ret; 59 + &opal_power_control_nb); 60 + if (ret) 61 + pr_err("Failed to register SHUTDOWN notifier, ret = %d\n", ret); 62 + 63 + /* Determine OPAL EPOW, DPO support */ 64 + np = of_find_node_by_path("/ibm,opal/epow"); 65 + if (np) { 66 + supported = of_device_is_compatible(np, "ibm,opal-v3-epow"); 67 + of_node_put(np); 151 68 } 69 + 
70 + if (!supported) 71 + return 0; 72 + pr_info("OPAL EPOW, DPO support detected.\n"); 73 + 74 + /* Register EPOW event notifier */ 75 + ret = opal_message_notifier_register(OPAL_MSG_EPOW, &opal_epow_nb); 76 + if (ret) 77 + pr_err("Failed to register EPOW notifier, ret = %d\n", ret); 78 + 79 + /* Register DPO event notifier */ 80 + ret = opal_message_notifier_register(OPAL_MSG_DPO, &opal_dpo_nb); 81 + if (ret) 82 + pr_err("Failed to register DPO notifier, ret = %d\n", ret); 83 + 84 + /* Check for any pending EPOW or DPO events. */ 85 + if (poweroff_pending()) 86 + orderly_poweroff(true); 152 87 153 88 return 0; 154 89 }
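detect_epow() above filters the power class before deciding to shut down: status-change bits (CHNG/FAIL/INCL) are masked off, and any bit remaining in any class triggers an orderly poweroff. A standalone sketch of that filtering, with hypothetical constant values standing in for the real OPAL_SYSEPOW_*/OPAL_SYSPOWER_* definitions:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical values for illustration; the real OPAL_SYSEPOW_* and
 * OPAL_SYSPOWER_* constants come from the OPAL API headers. */
#define SYSEPOW_POWER	0
#define SYSEPOW_TEMP	1
#define SYSEPOW_MAX	2

#define SYSPOWER_CHNG	0x0001	/* power status change: no shutdown */
#define SYSPOWER_FAIL	0x0002	/* supply failure: no shutdown */
#define SYSPOWER_INCL	0x0004	/* incomplete supply: no shutdown */
#define SYSPOWER_UPS	0x0008	/* on UPS: example shutdown-worthy bit */

/* Mirror of the detect_epow() loop: true if any class still reports an
 * event after the non-shutdown power bits are filtered out. */
static bool epow_needs_shutdown(const uint16_t status[], int nr_classes)
{
	int i;

	for (i = 0; i < nr_classes; i++) {
		uint16_t epow = status[i];

		/* These power-class events do not warrant a shutdown. */
		if (i == SYSEPOW_POWER)
			epow &= (uint16_t)~(SYSPOWER_CHNG | SYSPOWER_FAIL |
					    SYSPOWER_INCL);
		if (epow)
			return true;
	}
	return false;
}
```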
arch/powerpc/platforms/powernv/opal-wrappers.S | +4
··· 202 202 OPAL_CALL(opal_rtc_write, OPAL_RTC_WRITE); 203 203 OPAL_CALL(opal_cec_power_down, OPAL_CEC_POWER_DOWN); 204 204 OPAL_CALL(opal_cec_reboot, OPAL_CEC_REBOOT); 205 + OPAL_CALL(opal_cec_reboot2, OPAL_CEC_REBOOT2); 205 206 OPAL_CALL(opal_read_nvram, OPAL_READ_NVRAM); 206 207 OPAL_CALL(opal_write_nvram, OPAL_WRITE_NVRAM); 207 208 OPAL_CALL(opal_handle_interrupt, OPAL_HANDLE_INTERRUPT); ··· 250 249 OPAL_CALL(opal_pci_mask_pe_error, OPAL_PCI_MASK_PE_ERROR); 251 250 OPAL_CALL(opal_set_slot_led_status, OPAL_SET_SLOT_LED_STATUS); 252 251 OPAL_CALL(opal_get_epow_status, OPAL_GET_EPOW_STATUS); 252 + OPAL_CALL(opal_get_dpo_status, OPAL_GET_DPO_STATUS); 253 253 OPAL_CALL(opal_set_system_attention_led, OPAL_SET_SYSTEM_ATTENTION_LED); 254 254 OPAL_CALL(opal_pci_next_error, OPAL_PCI_NEXT_ERROR); 255 255 OPAL_CALL(opal_pci_poll, OPAL_PCI_POLL); ··· 299 297 OPAL_CALL(opal_flash_write, OPAL_FLASH_WRITE); 300 298 OPAL_CALL(opal_flash_erase, OPAL_FLASH_ERASE); 301 299 OPAL_CALL(opal_prd_msg, OPAL_PRD_MSG); 300 + OPAL_CALL(opal_leds_get_ind, OPAL_LEDS_GET_INDICATOR); 301 + OPAL_CALL(opal_leds_set_ind, OPAL_LEDS_SET_INDICATOR);
arch/powerpc/platforms/powernv/opal.c | +46 -1
··· 441 441 int opal_machine_check(struct pt_regs *regs) 442 442 { 443 443 struct machine_check_event evt; 444 + int ret; 444 445 445 446 if (!get_mce_event(&evt, MCE_EVENT_RELEASE)) 446 447 return 0; ··· 456 455 457 456 if (opal_recover_mce(regs, &evt)) 458 457 return 1; 458 + 459 + /* 460 + * Unrecovered machine check, we are heading to panic path. 461 + * 462 + * We may have hit this MCE in very early stage of kernel 463 + * initialization even before opal-prd has started running. If 464 + * this is the case then this MCE error may go un-noticed or 465 + * un-analyzed if we go down panic path. We need to inform 466 + * BMC/OCC about this error so that they can collect relevant 467 + * data for error analysis before rebooting. 468 + * Use opal_cec_reboot2(OPAL_REBOOT_PLATFORM_ERROR) to do so. 469 + * This function may not return on BMC based system. 470 + */ 471 + ret = opal_cec_reboot2(OPAL_REBOOT_PLATFORM_ERROR, 472 + "Unrecoverable Machine Check exception"); 473 + if (ret == OPAL_UNSUPPORTED) { 474 + pr_emerg("Reboot type %d not supported\n", 475 + OPAL_REBOOT_PLATFORM_ERROR); 476 + } 477 + 478 + /* 479 + * We reached here. There can be three possibilities: 480 + * 1. We are running on a firmware level that do not support 481 + * opal_cec_reboot2() 482 + * 2. We are running on a firmware level that do not support 483 + * OPAL_REBOOT_PLATFORM_ERROR reboot type. 484 + * 3. We are running on FSP based system that does not need opal 485 + * to trigger checkstop explicitly for error analysis. The FSP 486 + * PRD component would have already got notified about this 487 + * error through other channels. 488 + * 489 + * In any case, let us just fall through. We anyway heading 490 + * down to panic path. 
491 + */ 459 492 return 0; 460 493 } 461 494 ··· 683 648 684 649 static int __init opal_init(void) 685 650 { 686 - struct device_node *np, *consoles; 651 + struct device_node *np, *consoles, *leds; 687 652 int rc; 688 653 689 654 opal_node = of_find_node_by_path("/ibm,opal"); ··· 723 688 724 689 /* Setup a heatbeat thread if requested by OPAL */ 725 690 opal_init_heartbeat(); 691 + 692 + /* Create leds platform devices */ 693 + leds = of_find_node_by_path("/ibm,opal/leds"); 694 + if (leds) { 695 + of_platform_device_create(leds, "opal_leds", NULL); 696 + of_node_put(leds); 697 + } 726 698 727 699 /* Create "opal" kobject under /sys/firmware */ 728 700 rc = opal_sysfs_init(); ··· 883 841 EXPORT_SYMBOL_GPL(opal_tpo_read); 884 842 EXPORT_SYMBOL_GPL(opal_tpo_write); 885 843 EXPORT_SYMBOL_GPL(opal_i2c_request); 844 + /* Export these symbols for PowerNV LED class driver */ 845 + EXPORT_SYMBOL_GPL(opal_leds_get_ind); 846 + EXPORT_SYMBOL_GPL(opal_leds_set_ind);
arch/powerpc/platforms/powernv/pci-ioda.c | +58 -87
··· 140 140 return; 141 141 } 142 142 143 - if (test_and_set_bit(pe_no, phb->ioda.pe_alloc)) { 144 - pr_warn("%s: PE %d was assigned on PHB#%x\n", 145 - __func__, pe_no, phb->hose->global_number); 146 - return; 147 - } 143 + if (test_and_set_bit(pe_no, phb->ioda.pe_alloc)) 144 + pr_debug("%s: PE %d was reserved on PHB#%x\n", 145 + __func__, pe_no, phb->hose->global_number); 148 146 149 147 phb->ioda.pe_array[pe_no].phb = phb; 150 148 phb->ioda.pe_array[pe_no].pe_number = pe_no; ··· 229 231 return -EIO; 230 232 } 231 233 232 - static void pnv_ioda2_reserve_m64_pe(struct pnv_phb *phb) 234 + static void pnv_ioda2_reserve_dev_m64_pe(struct pci_dev *pdev, 235 + unsigned long *pe_bitmap) 233 236 { 234 - resource_size_t sgsz = phb->ioda.m64_segsize; 235 - struct pci_dev *pdev; 237 + struct pci_controller *hose = pci_bus_to_host(pdev->bus); 238 + struct pnv_phb *phb = hose->private_data; 236 239 struct resource *r; 237 - int base, step, i; 240 + resource_size_t base, sgsz, start, end; 241 + int segno, i; 238 242 239 - /* 240 - * Root bus always has full M64 range and root port has 241 - * M64 range used in reality. So we're checking root port 242 - * instead of root bus. 
243 - */ 244 - list_for_each_entry(pdev, &phb->hose->bus->devices, bus_list) { 245 - for (i = 0; i < PCI_BRIDGE_RESOURCE_NUM; i++) { 246 - r = &pdev->resource[PCI_BRIDGE_RESOURCES + i]; 247 - if (!r->parent || 248 - !pnv_pci_is_mem_pref_64(r->flags)) 249 - continue; 243 + base = phb->ioda.m64_base; 244 + sgsz = phb->ioda.m64_segsize; 245 + for (i = 0; i <= PCI_ROM_RESOURCE; i++) { 246 + r = &pdev->resource[i]; 247 + if (!r->parent || !pnv_pci_is_mem_pref_64(r->flags)) 248 + continue; 250 249 251 - base = (r->start - phb->ioda.m64_base) / sgsz; 252 - for (step = 0; step < resource_size(r) / sgsz; step++) 253 - pnv_ioda_reserve_pe(phb, base + step); 250 + start = _ALIGN_DOWN(r->start - base, sgsz); 251 + end = _ALIGN_UP(r->end - base, sgsz); 252 + for (segno = start / sgsz; segno < end / sgsz; segno++) { 253 + if (pe_bitmap) 254 + set_bit(segno, pe_bitmap); 255 + else 256 + pnv_ioda_reserve_pe(phb, segno); 254 257 } 255 258 } 256 259 } 257 260 258 - static int pnv_ioda2_pick_m64_pe(struct pnv_phb *phb, 259 - struct pci_bus *bus, int all) 261 + static void pnv_ioda2_reserve_m64_pe(struct pci_bus *bus, 262 + unsigned long *pe_bitmap, 263 + bool all) 260 264 { 261 - resource_size_t segsz = phb->ioda.m64_segsize; 262 265 struct pci_dev *pdev; 263 - struct resource *r; 266 + 267 + list_for_each_entry(pdev, &bus->devices, bus_list) { 268 + pnv_ioda2_reserve_dev_m64_pe(pdev, pe_bitmap); 269 + 270 + if (all && pdev->subordinate) 271 + pnv_ioda2_reserve_m64_pe(pdev->subordinate, 272 + pe_bitmap, all); 273 + } 274 + } 275 + 276 + static int pnv_ioda2_pick_m64_pe(struct pci_bus *bus, bool all) 277 + { 278 + struct pci_controller *hose = pci_bus_to_host(bus); 279 + struct pnv_phb *phb = hose->private_data; 264 280 struct pnv_ioda_pe *master_pe, *pe; 265 281 unsigned long size, *pe_alloc; 266 - bool found; 267 - int start, i, j; 282 + int i; 268 283 269 284 /* Root bus shouldn't use M64 */ 270 285 if (pci_is_root_bus(bus)) 271 - return IODA_INVALID_PE; 272 - 273 - /* We support 
only one M64 window on each bus */ 274 - found = false; 275 - pci_bus_for_each_resource(bus, r, i) { 276 - if (r && r->parent && 277 - pnv_pci_is_mem_pref_64(r->flags)) { 278 - found = true; 279 - break; 280 - } 281 - } 282 - 283 - /* No M64 window found ? */ 284 - if (!found) 285 286 return IODA_INVALID_PE; 286 287 287 288 /* Allocate bitmap */ ··· 292 295 return IODA_INVALID_PE; 293 296 } 294 297 295 - /* 296 - * Figure out reserved PE numbers by the PE 297 - * the its child PEs. 298 - */ 299 - start = (r->start - phb->ioda.m64_base) / segsz; 300 - for (i = 0; i < resource_size(r) / segsz; i++) 301 - set_bit(start + i, pe_alloc); 302 - 303 - if (all) 304 - goto done; 305 - 306 - /* 307 - * If the PE doesn't cover all subordinate buses, 308 - * we need subtract from reserved PEs for children. 309 - */ 310 - list_for_each_entry(pdev, &bus->devices, bus_list) { 311 - if (!pdev->subordinate) 312 - continue; 313 - 314 - pci_bus_for_each_resource(pdev->subordinate, r, i) { 315 - if (!r || !r->parent || 316 - !pnv_pci_is_mem_pref_64(r->flags)) 317 - continue; 318 - 319 - start = (r->start - phb->ioda.m64_base) / segsz; 320 - for (j = 0; j < resource_size(r) / segsz ; j++) 321 - clear_bit(start + j, pe_alloc); 322 - } 323 - } 298 + /* Figure out reserved PE numbers by the PE */ 299 + pnv_ioda2_reserve_m64_pe(bus, pe_alloc, all); 324 300 325 301 /* 326 302 * the current bus might not own M64 window and that's all ··· 309 339 * Figure out the master PE and put all slave PEs to master 310 340 * PE's list to form compound PE. 
311 341 */ 312 - done: 313 342 master_pe = NULL; 314 343 i = -1; 315 344 while ((i = find_next_bit(pe_alloc, phb->ioda.total_pe, i + 1)) < ··· 622 653 pdev = pe->pdev->bus->self; 623 654 #ifdef CONFIG_PCI_IOV 624 655 else if (pe->flags & PNV_IODA_PE_VF) 625 - pdev = pe->parent_dev->bus->self; 656 + pdev = pe->parent_dev; 626 657 #endif /* CONFIG_PCI_IOV */ 627 658 while (pdev) { 628 659 struct pci_dn *pdn = pci_get_pdn(pdev); ··· 701 732 parent = parent->bus->self; 702 733 } 703 734 704 - opal_pci_eeh_freeze_set(phb->opal_id, pe->pe_number, 735 + opal_pci_eeh_freeze_clear(phb->opal_id, pe->pe_number, 705 736 OPAL_EEH_ACTION_CLEAR_FREEZE_ALL); 706 737 707 738 /* Disassociate PE in PELT */ ··· 915 946 res2 = *res; 916 947 res->start += size * offset; 917 948 918 - dev_info(&dev->dev, "VF BAR%d: %pR shifted to %pR (enabling %d VFs shifted by %d)\n", 919 - i, &res2, res, num_vfs, offset); 949 + dev_info(&dev->dev, "VF BAR%d: %pR shifted to %pR (%sabling %d VFs shifted by %d)\n", 950 + i, &res2, res, (offset > 0) ? "En" : "Dis", 951 + num_vfs, offset); 920 952 pci_update_resource(dev, i + PCI_IOV_RESOURCES); 921 953 } 922 954 return 0; ··· 1020 1050 * subordinate PCI devices and buses. The second type of PE is normally 1021 1051 * orgiriated by PCIe-to-PCI bridge or PLX switch downstream ports. 
1022 1052 */ 1023 - static void pnv_ioda_setup_bus_PE(struct pci_bus *bus, int all) 1053 + static void pnv_ioda_setup_bus_PE(struct pci_bus *bus, bool all) 1024 1054 { 1025 1055 struct pci_controller *hose = pci_bus_to_host(bus); 1026 1056 struct pnv_phb *phb = hose->private_data; ··· 1029 1059 1030 1060 /* Check if PE is determined by M64 */ 1031 1061 if (phb->pick_m64_pe) 1032 - pe_num = phb->pick_m64_pe(phb, bus, all); 1062 + pe_num = phb->pick_m64_pe(bus, all); 1033 1063 1034 1064 /* The PE number isn't pinned by M64 */ 1035 1065 if (pe_num == IODA_INVALID_PE) ··· 1087 1117 { 1088 1118 struct pci_dev *dev; 1089 1119 1090 - pnv_ioda_setup_bus_PE(bus, 0); 1120 + pnv_ioda_setup_bus_PE(bus, false); 1091 1121 1092 1122 list_for_each_entry(dev, &bus->devices, bus_list) { 1093 1123 if (dev->subordinate) { 1094 1124 if (pci_pcie_type(dev) == PCI_EXP_TYPE_PCI_BRIDGE) 1095 - pnv_ioda_setup_bus_PE(dev->subordinate, 1); 1125 + pnv_ioda_setup_bus_PE(dev->subordinate, true); 1096 1126 else 1097 1127 pnv_ioda_setup_PEs(dev->subordinate); 1098 1128 } ··· 1117 1147 1118 1148 /* M64 layout might affect PE allocation */ 1119 1149 if (phb->reserve_m64_pe) 1120 - phb->reserve_m64_pe(phb); 1150 + phb->reserve_m64_pe(hose->bus, NULL, true); 1121 1151 1122 1152 pnv_ioda_setup_PEs(hose->bus); 1123 1153 } ··· 1560 1590 1561 1591 pe = &phb->ioda.pe_array[pdn->pe_number]; 1562 1592 WARN_ON(get_dma_ops(&pdev->dev) != &dma_iommu_ops); 1593 + set_dma_offset(&pdev->dev, pe->tce_bypass_base); 1563 1594 set_iommu_table_base(&pdev->dev, pe->table_group.tables[0]); 1564 1595 /* 1565 1596 * Note: iommu_add_device() will fail here as ··· 1591 1620 if (bypass) { 1592 1621 dev_info(&pdev->dev, "Using 64-bit DMA iommu bypass\n"); 1593 1622 set_dma_ops(&pdev->dev, &dma_direct_ops); 1594 - set_dma_offset(&pdev->dev, pe->tce_bypass_base); 1595 1623 } else { 1596 1624 dev_info(&pdev->dev, "Using 32-bit DMA via iommu\n"); 1597 1625 set_dma_ops(&pdev->dev, &dma_iommu_ops); 1598 - 
set_iommu_table_base(&pdev->dev, pe->table_group.tables[0]); 1599 1626 } 1600 1627 *pdev->dev.dma_mask = dma_mask; 1601 1628 return 0; 1602 1629 } 1603 1630 1604 - static u64 pnv_pci_ioda_dma_get_required_mask(struct pnv_phb *phb, 1605 - struct pci_dev *pdev) 1631 + static u64 pnv_pci_ioda_dma_get_required_mask(struct pci_dev *pdev) 1606 1632 { 1633 + struct pci_controller *hose = pci_bus_to_host(pdev->bus); 1634 + struct pnv_phb *phb = hose->private_data; 1607 1635 struct pci_dn *pdn = pci_get_pdn(pdev); 1608 1636 struct pnv_ioda_pe *pe; 1609 1637 u64 end, mask; ··· 1629 1659 1630 1660 list_for_each_entry(dev, &bus->devices, bus_list) { 1631 1661 set_iommu_table_base(&dev->dev, pe->table_group.tables[0]); 1662 + set_dma_offset(&dev->dev, pe->tce_bypass_base); 1632 1663 iommu_add_device(&dev->dev); 1633 1664 1634 1665 if ((pe->flags & PNV_IODA_PE_BUS_ALL) && dev->subordinate) ··· 3028 3057 .window_alignment = pnv_pci_window_alignment, 3029 3058 .reset_secondary_bus = pnv_pci_reset_secondary_bus, 3030 3059 .dma_set_mask = pnv_pci_ioda_dma_set_mask, 3060 + .dma_get_required_mask = pnv_pci_ioda_dma_get_required_mask, 3031 3061 .shutdown = pnv_pci_ioda_shutdown, 3032 3062 }; 3033 3063 ··· 3175 3203 3176 3204 /* Setup TCEs */ 3177 3205 phb->dma_dev_setup = pnv_pci_ioda_dma_dev_setup; 3178 - phb->dma_get_required_mask = pnv_pci_ioda_dma_get_required_mask; 3179 3206 3180 3207 /* Setup MSI support */ 3181 3208 pnv_pci_init_ioda_msis(phb);
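The pci-ioda.c hunks above centralize M64 segment reservation into a bitmap (via the new `reserve_m64_pe` hook) and have `pick_m64_pe()` choose the master PE by walking that bitmap with `find_next_bit()`. A minimal userspace sketch of the pattern, with hypothetical names and plain 64-bit ops standing in for the kernel's bitmap API:

```c
#include <stddef.h>

#define MAX_PE 64

/* Mark the PE numbers [start, start+count) as reserved in a 64-bit
 * bitmap -- a stand-in for the kernel's set_bit() loop over segments. */
static unsigned long long reserve_segments(unsigned long long bitmap,
                                           int start, int count)
{
    for (int i = 0; i < count; i++)
        bitmap |= 1ULL << (start + i);
    return bitmap;
}

/* Return the lowest reserved PE number (the "master" PE), or -1 if
 * none is set -- mirroring the find_next_bit() walk in pick_m64_pe(). */
static int pick_master_pe(unsigned long long bitmap)
{
    for (int i = 0; i < MAX_PE; i++)
        if (bitmap & (1ULL << i))
            return i;
    return -1;
}
```

In the real code the slave PEs found after the master are chained onto the master's list to form a compound PE; the sketch only shows the selection step.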
-11
arch/powerpc/platforms/powernv/pci.c
··· 761 761 phb->dma_dev_setup(phb, pdev); 762 762 } 763 763 764 - u64 pnv_pci_dma_get_required_mask(struct pci_dev *pdev) 765 - { 766 - struct pci_controller *hose = pci_bus_to_host(pdev->bus); 767 - struct pnv_phb *phb = hose->private_data; 768 - 769 - if (phb && phb->dma_get_required_mask) 770 - return phb->dma_get_required_mask(phb, pdev); 771 - 772 - return __dma_get_required_mask(&pdev->dev); 773 - } 774 - 775 764 void pnv_pci_shutdown(void) 776 765 { 777 766 struct pci_controller *hose;
+3 -4
arch/powerpc/platforms/powernv/pci.h
··· 105 105 unsigned int hwirq, unsigned int virq, 106 106 unsigned int is_64, struct msi_msg *msg); 107 107 void (*dma_dev_setup)(struct pnv_phb *phb, struct pci_dev *pdev); 108 - u64 (*dma_get_required_mask)(struct pnv_phb *phb, 109 - struct pci_dev *pdev); 110 108 void (*fixup_phb)(struct pci_controller *hose); 111 109 u32 (*bdfn_to_pe)(struct pnv_phb *phb, struct pci_bus *bus, u32 devfn); 112 110 int (*init_m64)(struct pnv_phb *phb); 113 - void (*reserve_m64_pe)(struct pnv_phb *phb); 114 - int (*pick_m64_pe)(struct pnv_phb *phb, struct pci_bus *bus, int all); 111 + void (*reserve_m64_pe)(struct pci_bus *bus, 112 + unsigned long *pe_bitmap, bool all); 113 + int (*pick_m64_pe)(struct pci_bus *bus, bool all); 115 114 int (*get_pe_state)(struct pnv_phb *phb, int pe_no); 116 115 void (*freeze_pe)(struct pnv_phb *phb, int pe_no); 117 116 int (*unfreeze_pe)(struct pnv_phb *phb, int pe_no, int opt);
-6
arch/powerpc/platforms/powernv/powernv.h
··· 12 12 #ifdef CONFIG_PCI 13 13 extern void pnv_pci_init(void); 14 14 extern void pnv_pci_shutdown(void); 15 - extern u64 pnv_pci_dma_get_required_mask(struct pci_dev *pdev); 16 15 #else 17 16 static inline void pnv_pci_init(void) { } 18 17 static inline void pnv_pci_shutdown(void) { } 19 - 20 - static inline u64 pnv_pci_dma_get_required_mask(struct pci_dev *pdev) 21 - { 22 - return 0; 23 - } 24 18 #endif 25 19 26 20 extern u32 pnv_get_supported_cpuidle_states(void);
+1 -1
arch/powerpc/platforms/powernv/rng.c
··· 128 128 129 129 pr_info_once("Registering arch random hook.\n"); 130 130 131 - ppc_md.get_random_long = powernv_get_random_long; 131 + ppc_md.get_random_seed = powernv_get_random_long; 132 132 133 133 return 0; 134 134 }
+7 -9
arch/powerpc/platforms/powernv/setup.c
··· 165 165 { 166 166 } 167 167 168 - static u64 pnv_dma_get_required_mask(struct device *dev) 169 - { 170 - if (dev_is_pci(dev)) 171 - return pnv_pci_dma_get_required_mask(to_pci_dev(dev)); 172 - 173 - return __dma_get_required_mask(dev); 174 - } 175 - 176 168 static void pnv_shutdown(void) 177 169 { 178 170 /* Let the PCI code clear up IODA tables */ ··· 235 243 } else { 236 244 /* Primary waits for the secondaries to have reached OPAL */ 237 245 pnv_kexec_wait_secondaries_down(); 246 + 247 + /* 248 + * We might be running as little-endian - now that interrupts 249 + * are disabled, reset the HILE bit to big-endian so we don't 250 + * take interrupts in the wrong endian later 251 + */ 252 + opal_reinit_cpus(OPAL_REINIT_CPUS_HILE_BE); 238 253 } 239 254 } 240 255 #endif /* CONFIG_KEXEC */ ··· 313 314 .machine_shutdown = pnv_shutdown, 314 315 .power_save = power7_idle, 315 316 .calibrate_decr = generic_calibrate_decr, 316 - .dma_get_required_mask = pnv_dma_get_required_mask, 317 317 #ifdef CONFIG_KEXEC 318 318 .kexec_cpu_down = pnv_kexec_cpu_down, 319 319 #endif
+2 -2
arch/powerpc/platforms/powernv/subcore.c
··· 190 190 191 191 hid0 = mfspr(SPRN_HID0); 192 192 hid0 &= ~HID0_POWER8_DYNLPARDIS; 193 - mtspr(SPRN_HID0, hid0); 193 + update_power8_hid0(hid0); 194 194 update_hid_in_slw(hid0); 195 195 196 196 while (mfspr(SPRN_HID0) & mask) ··· 227 227 /* Write new mode */ 228 228 hid0 = mfspr(SPRN_HID0); 229 229 hid0 |= HID0_POWER8_DYNLPARDIS | split_parms[i].value; 230 - mtspr(SPRN_HID0, hid0); 230 + update_power8_hid0(hid0); 231 231 update_hid_in_slw(hid0); 232 232 233 233 /* Wait for it to happen */
+1 -2
arch/powerpc/platforms/pseries/hotplug-memory.c
··· 92 92 return NULL; 93 93 94 94 new_prop->name = kstrdup(prop->name, GFP_KERNEL); 95 - new_prop->value = kmalloc(prop->length, GFP_KERNEL); 95 + new_prop->value = kmemdup(prop->value, prop->length, GFP_KERNEL); 96 96 if (!new_prop->name || !new_prop->value) { 97 97 dlpar_free_drconf_property(new_prop); 98 98 return NULL; 99 99 } 100 100 101 - memcpy(new_prop->value, prop->value, prop->length); 102 101 new_prop->length = prop->length; 103 102 104 103 /* Convert the property to cpu endian-ness */
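The hotplug-memory change above replaces a separate `kmalloc()` + `memcpy()` pair with a single `kmemdup()` call. A userspace equivalent (hypothetical `buf_dup()`, `malloc` standing in for a GFP_KERNEL allocation) shows why the one-call form is preferred:

```c
#include <stdlib.h>
#include <string.h>

/* Duplicate a buffer in one call, like the kernel's kmemdup():
 * allocation and copy happen together, so the caller never sees a
 * buffer that exists but still holds uninitialized data. */
static void *buf_dup(const void *src, size_t len)
{
    void *dst = malloc(len);
    if (dst)
        memcpy(dst, src, len);
    return dst;
}
```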
+1 -2
arch/powerpc/platforms/pseries/iommu.c
··· 1253 1253 } 1254 1254 } 1255 1255 1256 - /* fall back on iommu ops, restore table pointer with ops */ 1256 + /* fall back on iommu ops */ 1257 1257 if (!ddw_enabled && get_dma_ops(dev) != &dma_iommu_ops) { 1258 1258 dev_info(dev, "Restoring 32-bit DMA via iommu\n"); 1259 1259 set_dma_ops(dev, &dma_iommu_ops); 1260 - pci_dma_dev_setup_pSeriesLP(pdev); 1261 1260 } 1262 1261 1263 1262 check_mask:
+2 -1
arch/powerpc/platforms/pseries/ras.c
··· 189 189 int state; 190 190 int critical; 191 191 192 - status = rtas_get_sensor(EPOW_SENSOR_TOKEN, EPOW_SENSOR_INDEX, &state); 192 + status = rtas_get_sensor_fast(EPOW_SENSOR_TOKEN, EPOW_SENSOR_INDEX, 193 + &state); 193 194 194 195 if (state > 3) 195 196 critical = 1; /* Time Critical */
+1 -1
arch/powerpc/platforms/pseries/rng.c
··· 38 38 39 39 pr_info("Registering arch random hook.\n"); 40 40 41 - ppc_md.get_random_long = pseries_get_random_long; 41 + ppc_md.get_random_seed = pseries_get_random_long; 42 42 43 43 return 0; 44 44 }
+15 -8
arch/powerpc/platforms/pseries/setup.c
··· 254 254 static int pci_dn_reconfig_notifier(struct notifier_block *nb, unsigned long action, void *data) 255 255 { 256 256 struct of_reconfig_data *rd = data; 257 - struct device_node *np = rd->dn; 258 - struct pci_dn *pci = NULL; 257 + struct device_node *parent, *np = rd->dn; 258 + struct pci_dn *pdn; 259 259 int err = NOTIFY_OK; 260 260 261 261 switch (action) { 262 262 case OF_RECONFIG_ATTACH_NODE: 263 - pci = np->parent->data; 264 - if (pci) { 265 - update_dn_pci_info(np, pci->phb); 266 - 267 - /* Create EEH device for the OF node */ 268 - eeh_dev_init(PCI_DN(np), pci->phb); 263 + parent = of_get_parent(np); 264 + pdn = parent ? PCI_DN(parent) : NULL; 265 + if (pdn) { 266 + /* Create pdn and EEH device */ 267 + update_dn_pci_info(np, pdn->phb); 268 + eeh_dev_init(PCI_DN(np), pdn->phb); 269 269 } 270 + 271 + of_node_put(parent); 272 + break; 273 + case OF_RECONFIG_DETACH_NODE: 274 + pdn = PCI_DN(np); 275 + if (pdn) 276 + list_del(&pdn->list); 270 277 break; 271 278 default: 272 279 err = NOTIFY_DONE;
+1 -1
arch/powerpc/sysdev/cpm_common.c
··· 147 147 spin_lock_irqsave(&cpm_muram_lock, flags); 148 148 cpm_muram_info.alignment = align; 149 149 start = rh_alloc(&cpm_muram_info, size, "commproc"); 150 - memset(cpm_muram_addr(start), 0, size); 150 + memset_io(cpm_muram_addr(start), 0, size); 151 151 spin_unlock_irqrestore(&cpm_muram_lock, flags); 152 152 153 153 return start;
+3 -13
arch/powerpc/sysdev/dart_iommu.c
··· 313 313 set_bit(iommu_table_dart.it_size - 1, iommu_table_dart.it_map); 314 314 } 315 315 316 - static void dma_dev_setup_dart(struct device *dev) 317 - { 318 - /* We only have one iommu table on the mac for now, which makes 319 - * things simple. Setup all PCI devices to point to this table 320 - */ 321 - if (get_dma_ops(dev) == &dma_direct_ops) 322 - set_dma_offset(dev, DART_U4_BYPASS_BASE); 323 - else 324 - set_iommu_table_base(dev, &iommu_table_dart); 325 - } 326 - 327 316 static void pci_dma_dev_setup_dart(struct pci_dev *dev) 328 317 { 329 - dma_dev_setup_dart(&dev->dev); 318 + if (dart_is_u4) 319 + set_dma_offset(&dev->dev, DART_U4_BYPASS_BASE); 320 + set_iommu_table_base(&dev->dev, &iommu_table_dart); 330 321 } 331 322 332 323 static void pci_dma_bus_setup_dart(struct pci_bus *bus) ··· 361 370 dev_info(dev, "Using 32-bit DMA via iommu\n"); 362 371 set_dma_ops(dev, &dma_iommu_ops); 363 372 } 364 - dma_dev_setup_dart(dev); 365 373 366 374 *dev->dma_mask = dma_mask; 367 375 return 0;
+2 -2
arch/powerpc/sysdev/ppc4xx_hsta_msi.c
··· 132 132 struct pci_controller *phb; 133 133 134 134 mem = platform_get_resource(pdev, IORESOURCE_MEM, 0); 135 - if (IS_ERR(mem)) { 135 + if (!mem) { 136 136 dev_err(dev, "Unable to get mmio space\n"); 137 137 return -EINVAL; 138 138 } ··· 157 157 goto out; 158 158 159 159 ppc4xx_hsta_msi.irq_map = kmalloc(sizeof(int) * irq_count, GFP_KERNEL); 160 - if (IS_ERR(ppc4xx_hsta_msi.irq_map)) { 160 + if (!ppc4xx_hsta_msi.irq_map) { 161 161 ret = -ENOMEM; 162 162 goto out1; 163 163 }
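The ppc4xx_hsta_msi fix above swaps `IS_ERR()` checks for NULL checks: `platform_get_resource()` and `kmalloc()` report failure with NULL, not an encoded error pointer, so `IS_ERR()` never fired. A hedged sketch of the two conventions, using mini err-pointer helpers modeled on (but not identical to) the kernel's `include/linux/err.h`:

```c
#include <stddef.h>
#include <stdint.h>

/* Kernel-style encoded-error pointers occupy the top 4095 values of
 * the address space; NULL (0) is NOT in that range, which is exactly
 * why IS_ERR() cannot catch a NULL-returning allocator. */
#define MAX_ERRNO 4095

static void *err_ptr(long err)
{
    return (void *)(intptr_t)err;
}

static int is_err(const void *p)
{
    return (uintptr_t)p >= (uintptr_t)-MAX_ERRNO;
}
```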
+3 -5
arch/powerpc/xmon/xmon.c
··· 1987 1987 case '^': 1988 1988 adrs -= size; 1989 1989 break; 1990 - break; 1991 1990 case '/': 1992 1991 if (nslash > 0) 1993 1992 adrs -= 1 << nslash; ··· 2730 2731 void dump_segments(void) 2731 2732 { 2732 2733 int i; 2733 - unsigned long esid,vsid,valid; 2734 + unsigned long esid,vsid; 2734 2735 unsigned long llp; 2735 2736 2736 2737 printf("SLB contents of cpu 0x%x\n", smp_processor_id()); ··· 2738 2739 for (i = 0; i < mmu_slb_size; i++) { 2739 2740 asm volatile("slbmfee %0,%1" : "=r" (esid) : "r" (i)); 2740 2741 asm volatile("slbmfev %0,%1" : "=r" (vsid) : "r" (i)); 2741 - valid = (esid & SLB_ESID_V); 2742 - if (valid | esid | vsid) { 2742 + if (esid || vsid) { 2743 2743 printf("%02d %016lx %016lx", i, esid, vsid); 2744 - if (valid) { 2744 + if (esid & SLB_ESID_V) { 2745 2745 llp = vsid & SLB_VSID_LLP; 2746 2746 if (vsid & SLB_VSID_B_1T) { 2747 2747 printf(" 1T ESID=%9lx VSID=%13lx LLP:%3lx \n",
+11
drivers/leds/Kconfig
··· 565 565 This option enables support for the BlinkM RGB LED connected 566 566 through I2C. Say Y to enable support for the BlinkM LED. 567 567 568 + config LEDS_POWERNV 569 + tristate "LED support for PowerNV Platform" 570 + depends on LEDS_CLASS 571 + depends on PPC_POWERNV 572 + depends on OF 573 + help 574 + This option enables support for the system LEDs present on 575 + PowerNV platforms. Say 'y' to enable this support in kernel. 576 + To compile this driver as a module, choose 'm' here: the module 577 + will be called leds-powernv. 578 + 568 579 config LEDS_SYSCON 569 580 bool "LED support for LEDs on system controllers" 570 581 depends on LEDS_CLASS=y
+1
drivers/leds/Makefile
··· 65 65 obj-$(CONFIG_LEDS_MENF21BMC) += leds-menf21bmc.o 66 66 obj-$(CONFIG_LEDS_PM8941_WLED) += leds-pm8941-wled.o 67 67 obj-$(CONFIG_LEDS_KTD2692) += leds-ktd2692.o 68 + obj-$(CONFIG_LEDS_POWERNV) += leds-powernv.o 68 69 69 70 # LED SPI Drivers 70 71 obj-$(CONFIG_LEDS_DAC124S085) += leds-dac124s085.o
+345
drivers/leds/leds-powernv.c
··· 1 + /* 2 + * PowerNV LED Driver 3 + * 4 + * Copyright IBM Corp. 2015 5 + * 6 + * Author: Vasant Hegde <hegdevasant@linux.vnet.ibm.com> 7 + * Author: Anshuman Khandual <khandual@linux.vnet.ibm.com> 8 + * 9 + * This program is free software; you can redistribute it and/or 10 + * modify it under the terms of the GNU General Public License 11 + * as published by the Free Software Foundation; either version 12 + * 2 of the License, or (at your option) any later version. 13 + */ 14 + 15 + #include <linux/leds.h> 16 + #include <linux/module.h> 17 + #include <linux/of.h> 18 + #include <linux/platform_device.h> 19 + #include <linux/slab.h> 20 + #include <linux/types.h> 21 + 22 + #include <asm/opal.h> 23 + 24 + /* Map LED type to description. */ 25 + struct led_type_map { 26 + const int type; 27 + const char *desc; 28 + }; 29 + static const struct led_type_map led_type_map[] = { 30 + {OPAL_SLOT_LED_TYPE_ID, "identify"}, 31 + {OPAL_SLOT_LED_TYPE_FAULT, "fault"}, 32 + {OPAL_SLOT_LED_TYPE_ATTN, "attention"}, 33 + {-1, NULL}, 34 + }; 35 + 36 + struct powernv_led_common { 37 + /* 38 + * By default unload path resets all the LEDs. But on PowerNV 39 + * platform we want to retain LED state across reboot as these 40 + * are controlled by firmware. Also service processor can modify 41 + * the LEDs independent of OS. Hence avoid resetting LEDs in 42 + * unload path. 
43 + */ 44 + bool led_disabled; 45 + 46 + /* Max supported LED type */ 47 + __be64 max_led_type; 48 + 49 + /* glabal lock */ 50 + struct mutex lock; 51 + }; 52 + 53 + /* PowerNV LED data */ 54 + struct powernv_led_data { 55 + struct led_classdev cdev; 56 + char *loc_code; /* LED location code */ 57 + int led_type; /* OPAL_SLOT_LED_TYPE_* */ 58 + 59 + struct powernv_led_common *common; 60 + }; 61 + 62 + 63 + /* Returns OPAL_SLOT_LED_TYPE_* for given led type string */ 64 + static int powernv_get_led_type(const char *led_type_desc) 65 + { 66 + int i; 67 + 68 + for (i = 0; i < ARRAY_SIZE(led_type_map); i++) 69 + if (!strcmp(led_type_map[i].desc, led_type_desc)) 70 + return led_type_map[i].type; 71 + 72 + return -1; 73 + } 74 + 75 + /* 76 + * This commits the state change of the requested LED through an OPAL call. 77 + * This function is called from work queue task context when ever it gets 78 + * scheduled. This function can sleep at opal_async_wait_response call. 79 + */ 80 + static void powernv_led_set(struct powernv_led_data *powernv_led, 81 + enum led_brightness value) 82 + { 83 + int rc, token; 84 + u64 led_mask, led_value = 0; 85 + __be64 max_type; 86 + struct opal_msg msg; 87 + struct device *dev = powernv_led->cdev.dev; 88 + struct powernv_led_common *powernv_led_common = powernv_led->common; 89 + 90 + /* Prepare for the OPAL call */ 91 + max_type = powernv_led_common->max_led_type; 92 + led_mask = OPAL_SLOT_LED_STATE_ON << powernv_led->led_type; 93 + if (value) 94 + led_value = led_mask; 95 + 96 + /* OPAL async call */ 97 + token = opal_async_get_token_interruptible(); 98 + if (token < 0) { 99 + if (token != -ERESTARTSYS) 100 + dev_err(dev, "%s: Couldn't get OPAL async token\n", 101 + __func__); 102 + return; 103 + } 104 + 105 + rc = opal_leds_set_ind(token, powernv_led->loc_code, 106 + led_mask, led_value, &max_type); 107 + if (rc != OPAL_ASYNC_COMPLETION) { 108 + dev_err(dev, "%s: OPAL set LED call failed for %s [rc=%d]\n", 109 + __func__, 
powernv_led->loc_code, rc); 110 + goto out_token; 111 + } 112 + 113 + rc = opal_async_wait_response(token, &msg); 114 + if (rc) { 115 + dev_err(dev, 116 + "%s: Failed to wait for the async response [rc=%d]\n", 117 + __func__, rc); 118 + goto out_token; 119 + } 120 + 121 + rc = be64_to_cpu(msg.params[1]); 122 + if (rc != OPAL_SUCCESS) 123 + dev_err(dev, "%s : OAPL async call returned failed [rc=%d]\n", 124 + __func__, rc); 125 + 126 + out_token: 127 + opal_async_release_token(token); 128 + } 129 + 130 + /* 131 + * This function fetches the LED state for a given LED type for 132 + * mentioned LED classdev structure. 133 + */ 134 + static enum led_brightness powernv_led_get(struct powernv_led_data *powernv_led) 135 + { 136 + int rc; 137 + __be64 mask, value, max_type; 138 + u64 led_mask, led_value; 139 + struct device *dev = powernv_led->cdev.dev; 140 + struct powernv_led_common *powernv_led_common = powernv_led->common; 141 + 142 + /* Fetch all LED status */ 143 + mask = cpu_to_be64(0); 144 + value = cpu_to_be64(0); 145 + max_type = powernv_led_common->max_led_type; 146 + 147 + rc = opal_leds_get_ind(powernv_led->loc_code, 148 + &mask, &value, &max_type); 149 + if (rc != OPAL_SUCCESS && rc != OPAL_PARTIAL) { 150 + dev_err(dev, "%s: OPAL get led call failed [rc=%d]\n", 151 + __func__, rc); 152 + return LED_OFF; 153 + } 154 + 155 + led_mask = be64_to_cpu(mask); 156 + led_value = be64_to_cpu(value); 157 + 158 + /* LED status available */ 159 + if (!((led_mask >> powernv_led->led_type) & OPAL_SLOT_LED_STATE_ON)) { 160 + dev_err(dev, "%s: LED status not available for %s\n", 161 + __func__, powernv_led->cdev.name); 162 + return LED_OFF; 163 + } 164 + 165 + /* LED status value */ 166 + if ((led_value >> powernv_led->led_type) & OPAL_SLOT_LED_STATE_ON) 167 + return LED_FULL; 168 + 169 + return LED_OFF; 170 + } 171 + 172 + /* 173 + * LED classdev 'brightness_get' function. This schedules work 174 + * to update LED state. 
175 + */ 176 + static void powernv_brightness_set(struct led_classdev *led_cdev, 177 + enum led_brightness value) 178 + { 179 + struct powernv_led_data *powernv_led = 180 + container_of(led_cdev, struct powernv_led_data, cdev); 181 + struct powernv_led_common *powernv_led_common = powernv_led->common; 182 + 183 + /* Do not modify LED in unload path */ 184 + if (powernv_led_common->led_disabled) 185 + return; 186 + 187 + mutex_lock(&powernv_led_common->lock); 188 + powernv_led_set(powernv_led, value); 189 + mutex_unlock(&powernv_led_common->lock); 190 + } 191 + 192 + /* LED classdev 'brightness_get' function */ 193 + static enum led_brightness powernv_brightness_get(struct led_classdev *led_cdev) 194 + { 195 + struct powernv_led_data *powernv_led = 196 + container_of(led_cdev, struct powernv_led_data, cdev); 197 + 198 + return powernv_led_get(powernv_led); 199 + } 200 + 201 + /* 202 + * This function registers classdev structure for any given type of LED on 203 + * a given child LED device node. 
204 + */ 205 + static int powernv_led_create(struct device *dev, 206 + struct powernv_led_data *powernv_led, 207 + const char *led_type_desc) 208 + { 209 + int rc; 210 + 211 + /* Make sure LED type is supported */ 212 + powernv_led->led_type = powernv_get_led_type(led_type_desc); 213 + if (powernv_led->led_type == -1) { 214 + dev_warn(dev, "%s: No support for led type : %s\n", 215 + __func__, led_type_desc); 216 + return -EINVAL; 217 + } 218 + 219 + /* Create the name for classdev */ 220 + powernv_led->cdev.name = devm_kasprintf(dev, GFP_KERNEL, "%s:%s", 221 + powernv_led->loc_code, 222 + led_type_desc); 223 + if (!powernv_led->cdev.name) { 224 + dev_err(dev, 225 + "%s: Memory allocation failed for classdev name\n", 226 + __func__); 227 + return -ENOMEM; 228 + } 229 + 230 + powernv_led->cdev.brightness_set = powernv_brightness_set; 231 + powernv_led->cdev.brightness_get = powernv_brightness_get; 232 + powernv_led->cdev.brightness = LED_OFF; 233 + powernv_led->cdev.max_brightness = LED_FULL; 234 + 235 + /* Register the classdev */ 236 + rc = devm_led_classdev_register(dev, &powernv_led->cdev); 237 + if (rc) { 238 + dev_err(dev, "%s: Classdev registration failed for %s\n", 239 + __func__, powernv_led->cdev.name); 240 + } 241 + 242 + return rc; 243 + } 244 + 245 + /* Go through LED device tree node and register LED classdev structure */ 246 + static int powernv_led_classdev(struct platform_device *pdev, 247 + struct device_node *led_node, 248 + struct powernv_led_common *powernv_led_common) 249 + { 250 + const char *cur = NULL; 251 + int rc = -1; 252 + struct property *p; 253 + struct device_node *np; 254 + struct powernv_led_data *powernv_led; 255 + struct device *dev = &pdev->dev; 256 + 257 + for_each_child_of_node(led_node, np) { 258 + p = of_find_property(np, "led-types", NULL); 259 + if (!p) 260 + continue; 261 + 262 + while ((cur = of_prop_next_string(p, cur)) != NULL) { 263 + powernv_led = devm_kzalloc(dev, sizeof(*powernv_led), 264 + GFP_KERNEL); 265 + if 
(!powernv_led) 266 + return -ENOMEM; 267 + 268 + powernv_led->common = powernv_led_common; 269 + powernv_led->loc_code = (char *)np->name; 270 + 271 + rc = powernv_led_create(dev, powernv_led, cur); 272 + if (rc) 273 + return rc; 274 + } /* while end */ 275 + } 276 + 277 + return rc; 278 + } 279 + 280 + /* Platform driver probe */ 281 + static int powernv_led_probe(struct platform_device *pdev) 282 + { 283 + struct device_node *led_node; 284 + struct powernv_led_common *powernv_led_common; 285 + struct device *dev = &pdev->dev; 286 + 287 + led_node = of_find_node_by_path("/ibm,opal/leds"); 288 + if (!led_node) { 289 + dev_err(dev, "%s: LED parent device node not found\n", 290 + __func__); 291 + return -EINVAL; 292 + } 293 + 294 + powernv_led_common = devm_kzalloc(dev, sizeof(*powernv_led_common), 295 + GFP_KERNEL); 296 + if (!powernv_led_common) 297 + return -ENOMEM; 298 + 299 + mutex_init(&powernv_led_common->lock); 300 + powernv_led_common->max_led_type = cpu_to_be64(OPAL_SLOT_LED_TYPE_MAX); 301 + 302 + platform_set_drvdata(pdev, powernv_led_common); 303 + 304 + return powernv_led_classdev(pdev, led_node, powernv_led_common); 305 + } 306 + 307 + /* Platform driver remove */ 308 + static int powernv_led_remove(struct platform_device *pdev) 309 + { 310 + struct powernv_led_common *powernv_led_common; 311 + 312 + /* Disable LED operation */ 313 + powernv_led_common = platform_get_drvdata(pdev); 314 + powernv_led_common->led_disabled = true; 315 + 316 + /* Destroy lock */ 317 + mutex_destroy(&powernv_led_common->lock); 318 + 319 + dev_info(&pdev->dev, "PowerNV led module unregistered\n"); 320 + return 0; 321 + } 322 + 323 + /* Platform driver property match */ 324 + static const struct of_device_id powernv_led_match[] = { 325 + { 326 + .compatible = "ibm,opal-v3-led", 327 + }, 328 + {}, 329 + }; 330 + MODULE_DEVICE_TABLE(of, powernv_led_match); 331 + 332 + static struct platform_driver powernv_led_driver = { 333 + .probe = powernv_led_probe, 334 + .remove = 
powernv_led_remove, 335 + .driver = { 336 + .name = "powernv-led-driver", 337 + .of_match_table = powernv_led_match, 338 + }, 339 + }; 340 + 341 + module_platform_driver(powernv_led_driver); 342 + 343 + MODULE_LICENSE("GPL v2"); 344 + MODULE_DESCRIPTION("PowerNV LED driver"); 345 + MODULE_AUTHOR("Vasant Hegde <hegdevasant@linux.vnet.ibm.com>");
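`powernv_get_led_type()` in the new driver above resolves a device-tree "led-types" string to an OPAL constant through a small lookup table. A userspace sketch of the same lookup, with hypothetical constants standing in for `OPAL_SLOT_LED_TYPE_*`:

```c
#include <string.h>

/* Hypothetical stand-ins for the OPAL_SLOT_LED_TYPE_* constants. */
enum { LED_TYPE_ID = 0, LED_TYPE_FAULT = 1, LED_TYPE_ATTN = 2 };

struct led_type_entry {
    int type;
    const char *desc;
};

static const struct led_type_entry led_types[] = {
    { LED_TYPE_ID,    "identify"  },
    { LED_TYPE_FAULT, "fault"     },
    { LED_TYPE_ATTN,  "attention" },
};

/* Return the numeric type for a description, or -1 when unsupported --
 * the contract powernv_led_create() relies on to reject unknown types. */
static int led_type_from_desc(const char *desc)
{
    for (size_t i = 0; i < sizeof(led_types) / sizeof(led_types[0]); i++)
        if (strcmp(led_types[i].desc, desc) == 0)
            return led_types[i].type;
    return -1;
}
```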
+2
drivers/macintosh/therm_windtunnel.c
··· 408 408 { "therm_adm1030", adm1030 }, 409 409 { } 410 410 }; 411 + MODULE_DEVICE_TABLE(i2c, therm_windtunnel_id); 411 412 412 413 static int 413 414 do_probe(struct i2c_client *cl, const struct i2c_device_id *id) ··· 460 459 .compatible = "adm1030" 461 460 }, {} 462 461 }; 462 + MODULE_DEVICE_TABLE(of, therm_of_match); 463 463 464 464 static struct platform_driver therm_of_driver = { 465 465 .driver = {
-4
drivers/macintosh/windfarm.h
··· 53 53 * the kref and wf_unregister_control will decrement it, thus the 54 54 * object creating/disposing a given control shouldn't assume it 55 55 * still exists after wf_unregister_control has been called. 56 - * wf_find_control will inc the refcount for you 57 56 */ 58 57 extern int wf_register_control(struct wf_control *ct); 59 58 extern void wf_unregister_control(struct wf_control *ct); 60 - extern struct wf_control * wf_find_control(const char *name); 61 59 extern int wf_get_control(struct wf_control *ct); 62 60 extern void wf_put_control(struct wf_control *ct); 63 61 ··· 115 117 /* Same lifetime rules as controls */ 116 118 extern int wf_register_sensor(struct wf_sensor *sr); 117 119 extern void wf_unregister_sensor(struct wf_sensor *sr); 118 - extern struct wf_sensor * wf_find_sensor(const char *name); 119 120 extern int wf_get_sensor(struct wf_sensor *sr); 120 121 extern void wf_put_sensor(struct wf_sensor *sr); 121 122 ··· 141 144 /* Overtemp conditions. Those are refcounted */ 142 145 extern void wf_set_overtemp(void); 143 146 extern void wf_clear_overtemp(void); 144 - extern int wf_is_overtemp(void); 145 147 146 148 #define WF_EVENT_NEW_CONTROL 0 /* param is wf_control * */ 147 149 #define WF_EVENT_NEW_SENSOR 1 /* param is wf_sensor * */
+2 -45
drivers/macintosh/windfarm_core.c
··· 72 72 blocking_notifier_call_chain(&wf_client_list, event, param); 73 73 } 74 74 75 - int wf_critical_overtemp(void) 75 + static int wf_critical_overtemp(void) 76 76 { 77 77 static char * critical_overtemp_path = "/sbin/critical_overtemp"; 78 78 char *argv[] = { critical_overtemp_path, NULL }; ··· 84 84 return call_usermodehelper(critical_overtemp_path, 85 85 argv, envp, UMH_WAIT_EXEC); 86 86 } 87 - EXPORT_SYMBOL_GPL(wf_critical_overtemp); 88 87 89 88 static int wf_thread_func(void *data) 90 89 { ··· 254 255 } 255 256 EXPORT_SYMBOL_GPL(wf_unregister_control); 256 257 257 - struct wf_control * wf_find_control(const char *name) 258 - { 259 - struct wf_control *ct; 260 - 261 - mutex_lock(&wf_lock); 262 - list_for_each_entry(ct, &wf_controls, link) { 263 - if (!strcmp(ct->name, name)) { 264 - if (wf_get_control(ct)) 265 - ct = NULL; 266 - mutex_unlock(&wf_lock); 267 - return ct; 268 - } 269 - } 270 - mutex_unlock(&wf_lock); 271 - return NULL; 272 - } 273 - EXPORT_SYMBOL_GPL(wf_find_control); 274 - 275 258 int wf_get_control(struct wf_control *ct) 276 259 { 277 260 if (!try_module_get(ct->ops->owner)) ··· 349 368 } 350 369 EXPORT_SYMBOL_GPL(wf_unregister_sensor); 351 370 352 - struct wf_sensor * wf_find_sensor(const char *name) 353 - { 354 - struct wf_sensor *sr; 355 - 356 - mutex_lock(&wf_lock); 357 - list_for_each_entry(sr, &wf_sensors, link) { 358 - if (!strcmp(sr->name, name)) { 359 - if (wf_get_sensor(sr)) 360 - sr = NULL; 361 - mutex_unlock(&wf_lock); 362 - return sr; 363 - } 364 - } 365 - mutex_unlock(&wf_lock); 366 - return NULL; 367 - } 368 - EXPORT_SYMBOL_GPL(wf_find_sensor); 369 - 370 371 int wf_get_sensor(struct wf_sensor *sr) 371 372 { 372 373 if (!try_module_get(sr->ops->owner)) ··· 398 435 { 399 436 mutex_lock(&wf_lock); 400 437 blocking_notifier_chain_unregister(&wf_client_list, nb); 401 - wf_client_count++; 438 + wf_client_count--; 402 439 if (wf_client_count == 0) 403 440 wf_stop_thread(); 404 441 mutex_unlock(&wf_lock); ··· 436 473 
mutex_unlock(&wf_lock); 437 474 } 438 475 EXPORT_SYMBOL_GPL(wf_clear_overtemp); 439 - 440 - int wf_is_overtemp(void) 441 - { 442 - return (wf_overtemp != 0); 443 - } 444 - EXPORT_SYMBOL_GPL(wf_is_overtemp); 445 476 446 477 static int __init windfarm_core_init(void) 447 478 {
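The windfarm_core hunk above fixes `wf_unregister_client()`, which incremented `wf_client_count` instead of decrementing it, so the count never returned to zero and the polling thread was never stopped. A minimal sketch of the intended pairing (hypothetical names, a flag standing in for the kernel thread):

```c
/* The poll thread should run while at least one client is registered. */
static int client_count;
static int thread_running;

static void register_client(void)
{
    if (client_count++ == 0)
        thread_running = 1;    /* first client: start polling */
}

static void unregister_client(void)
{
    if (--client_count == 0)   /* the fix: decrement, not increment */
        thread_running = 0;    /* last client gone: stop polling */
}
```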
+30 -13
drivers/memory/fsl_ifc.c
··· 62 62 return -ENODEV; 63 63 64 64 for (i = 0; i < fsl_ifc_ctrl_dev->banks; i++) { 65 - u32 cspr = in_be32(&fsl_ifc_ctrl_dev->regs->cspr_cs[i].cspr); 65 + u32 cspr = ifc_in32(&fsl_ifc_ctrl_dev->regs->cspr_cs[i].cspr); 66 66 if (cspr & CSPR_V && (cspr & CSPR_BA) == 67 67 convert_ifc_address(addr_base)) 68 68 return i; ··· 79 79 /* 80 80 * Clear all the common status and event registers 81 81 */ 82 - if (in_be32(&ifc->cm_evter_stat) & IFC_CM_EVTER_STAT_CSER) 83 - out_be32(&ifc->cm_evter_stat, IFC_CM_EVTER_STAT_CSER); 82 + if (ifc_in32(&ifc->cm_evter_stat) & IFC_CM_EVTER_STAT_CSER) 83 + ifc_out32(IFC_CM_EVTER_STAT_CSER, &ifc->cm_evter_stat); 84 84 85 85 /* enable all error and events */ 86 - out_be32(&ifc->cm_evter_en, IFC_CM_EVTER_EN_CSEREN); 86 + ifc_out32(IFC_CM_EVTER_EN_CSEREN, &ifc->cm_evter_en); 87 87 88 88 /* enable all error and event interrupts */ 89 - out_be32(&ifc->cm_evter_intr_en, IFC_CM_EVTER_INTR_EN_CSERIREN); 90 - out_be32(&ifc->cm_erattr0, 0x0); 91 - out_be32(&ifc->cm_erattr1, 0x0); 89 + ifc_out32(IFC_CM_EVTER_INTR_EN_CSERIREN, &ifc->cm_evter_intr_en); 90 + ifc_out32(0x0, &ifc->cm_erattr0); 91 + ifc_out32(0x0, &ifc->cm_erattr1); 92 92 93 93 return 0; 94 94 } ··· 127 127 128 128 spin_lock_irqsave(&nand_irq_lock, flags); 129 129 130 - stat = in_be32(&ifc->ifc_nand.nand_evter_stat); 130 + stat = ifc_in32(&ifc->ifc_nand.nand_evter_stat); 131 131 if (stat) { 132 - out_be32(&ifc->ifc_nand.nand_evter_stat, stat); 132 + ifc_out32(stat, &ifc->ifc_nand.nand_evter_stat); 133 133 ctrl->nand_stat = stat; 134 134 wake_up(&ctrl->nand_wait); 135 135 } ··· 161 161 irqreturn_t ret = IRQ_NONE; 162 162 163 163 /* read for chip select error */ 164 - cs_err = in_be32(&ifc->cm_evter_stat); 164 + cs_err = ifc_in32(&ifc->cm_evter_stat); 165 165 if (cs_err) { 166 166 dev_err(ctrl->dev, "transaction sent to IFC is not mapped to" 167 167 "any memory bank 0x%08X\n", cs_err); 168 168 /* clear the chip select error */ 169 - out_be32(&ifc->cm_evter_stat, IFC_CM_EVTER_STAT_CSER); 
169 + ifc_out32(IFC_CM_EVTER_STAT_CSER, &ifc->cm_evter_stat); 170 170 171 171 /* read error attribute registers print the error information */ 172 - status = in_be32(&ifc->cm_erattr0); 173 - err_addr = in_be32(&ifc->cm_erattr1); 172 + status = ifc_in32(&ifc->cm_erattr0); 173 + err_addr = ifc_in32(&ifc->cm_erattr1); 174 174 175 175 if (status & IFC_CM_ERATTR0_ERTYP_READ) 176 176 dev_err(ctrl->dev, "Read transaction error" ··· 229 229 dev_err(&dev->dev, "failed to get memory region\n"); 230 230 ret = -ENODEV; 231 231 goto err; 232 + } 233 + 234 + version = ifc_in32(&fsl_ifc_ctrl_dev->regs->ifc_rev) & 235 + FSL_IFC_VERSION_MASK; 236 + banks = (version == FSL_IFC_VERSION_1_0_0) ? 4 : 8; 237 + dev_info(&dev->dev, "IFC version %d.%d, %d banks\n", 238 + version >> 24, (version >> 16) & 0xf, banks); 239 + 240 + fsl_ifc_ctrl_dev->version = version; 241 + fsl_ifc_ctrl_dev->banks = banks; 242 + 243 + if (of_property_read_bool(dev->dev.of_node, "little-endian")) { 244 + fsl_ifc_ctrl_dev->little_endian = true; 245 + dev_dbg(&dev->dev, "IFC REGISTERS are LITTLE endian\n"); 246 + } else { 247 + fsl_ifc_ctrl_dev->little_endian = false; 248 + dev_dbg(&dev->dev, "IFC REGISTERS are BIG endian\n"); 232 249 } 233 250 234 251 version = ioread32be(&fsl_ifc_ctrl_dev->regs->ifc_rev) &
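The fsl_ifc changes above route all register access through `ifc_in32()`/`ifc_out32()` so one driver can serve both big- and little-endian IFC blocks, with the endianness chosen at probe time from the "little-endian" device-tree property. A userspace sketch of that dispatch (hypothetical context struct, byte-level decode in place of MMIO accessors):

```c
#include <stdint.h>

struct ifc_ctx {
    int little_endian;   /* set from the "little-endian" DT property */
};

/* Decode a 32-bit register image according to the controller's
 * endianness flag -- the role ifc_in32() plays in the driver. */
static uint32_t reg_read32(const struct ifc_ctx *ctx, const uint8_t *reg)
{
    if (ctx->little_endian)
        return (uint32_t)reg[0] | (uint32_t)reg[1] << 8 |
               (uint32_t)reg[2] << 16 | (uint32_t)reg[3] << 24;
    return (uint32_t)reg[0] << 24 | (uint32_t)reg[1] << 16 |
           (uint32_t)reg[2] << 8 | (uint32_t)reg[3];
}
```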
+6 -1
drivers/misc/cxl/Kconfig
···
11 11 	bool
12 12 	default n
13 13 
14 + config CXL_EEH
15 + 	bool
16 + 	default n
17 + 
14 18 config CXL
15 19 	tristate "Support for IBM Coherent Accelerators (CXL)"
16 - 	depends on PPC_POWERNV && PCI_MSI
20 + 	depends on PPC_POWERNV && PCI_MSI && EEH
17 21 	select CXL_BASE
18 22 	select CXL_KERNEL_API
23 + 	select CXL_EEH
19 24 	default m
20 25 	help
21 26 	  Select this option to enable driver support for IBM Coherent
+2
drivers/misc/cxl/Makefile
···
1 + ccflags-y := -Werror
2 + 
1 3 cxl-y += main.o file.o irq.o fault.o native.o
2 4 cxl-y += context.o sysfs.o debugfs.o pci.o trace.o
3 5 cxl-y += vphb.o api.o
+49 -10
drivers/misc/cxl/api.c
···
12 12 #include <linux/anon_inodes.h>
13 13 #include <linux/file.h>
14 14 #include <misc/cxl.h>
15 + #include <linux/fs.h>
15 16 
16 17 #include "cxl.h"
17 18 
18 19 struct cxl_context *cxl_dev_context_init(struct pci_dev *dev)
19 20 {
21 + 	struct address_space *mapping;
20 22 	struct cxl_afu *afu;
21 23 	struct cxl_context *ctx;
22 24 	int rc;
···
27 25 
28 26 	get_device(&afu->dev);
29 27 	ctx = cxl_context_alloc();
30 - 	if (IS_ERR(ctx))
31 - 		return ctx;
28 + 	if (IS_ERR(ctx)) {
29 + 		rc = PTR_ERR(ctx);
30 + 		goto err_dev;
31 + 	}
32 + 
33 + 	ctx->kernelapi = true;
34 + 
35 + 	/*
36 + 	 * Make our own address space since we won't have one from the
37 + 	 * filesystem like the user api has, and even if we do associate a file
38 + 	 * with this context we don't want to use the global anonymous inode's
39 + 	 * address space as that can invalidate unrelated users:
40 + 	 */
41 + 	mapping = kmalloc(sizeof(struct address_space), GFP_KERNEL);
42 + 	if (!mapping) {
43 + 		rc = -ENOMEM;
44 + 		goto err_ctx;
45 + 	}
46 + 	address_space_init_once(mapping);
32 47 
33 48 	/* Make it a slave context.  We can promote it later? */
34 - 	rc = cxl_context_init(ctx, afu, false, NULL);
35 - 	if (rc) {
36 - 		kfree(ctx);
37 - 		put_device(&afu->dev);
38 - 		return ERR_PTR(-ENOMEM);
39 - 	}
49 + 	rc = cxl_context_init(ctx, afu, false, mapping);
50 + 	if (rc)
51 + 		goto err_mapping;
52 + 
40 53 	cxl_assign_psn_space(ctx);
41 54 
42 55 	return ctx;
56 + 
57 + err_mapping:
58 + 	kfree(mapping);
59 + err_ctx:
60 + 	kfree(ctx);
61 + err_dev:
62 + 	put_device(&afu->dev);
63 + 	return ERR_PTR(rc);
43 64 }
44 65 EXPORT_SYMBOL_GPL(cxl_dev_context_init);
45 66 
···
84 59 
85 60 int cxl_release_context(struct cxl_context *ctx)
86 61 {
87 - 	if (ctx->status != CLOSED)
62 + 	if (ctx->status >= STARTED)
88 63 		return -EBUSY;
89 64 
90 65 	put_device(&ctx->afu->dev);
···
280 255 
281 256 	file = anon_inode_getfile("cxl", fops, ctx, flags);
282 257 	if (IS_ERR(file))
283 - 		put_unused_fd(fdtmp);
258 + 		goto err_fd;
259 + 
260 + 	file->f_mapping = ctx->mapping;
261 + 
284 262 	*fd = fdtmp;
285 263 	return file;
264 + 
265 + err_fd:
266 + 	put_unused_fd(fdtmp);
267 + 	return NULL;
286 268 }
287 269 EXPORT_SYMBOL_GPL(cxl_get_fd);
288 270 
···
359 327 	return cxl_afu_check_and_enable(afu);
360 328 }
361 329 EXPORT_SYMBOL_GPL(cxl_afu_reset);
330 + 
331 + void cxl_perst_reloads_same_image(struct cxl_afu *afu,
332 + 				  bool perst_reloads_same_image)
333 + {
334 + 	afu->adapter->perst_same_image = perst_reloads_same_image;
335 + }
336 + EXPORT_SYMBOL_GPL(cxl_perst_reloads_same_image);
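The reworked cxl_dev_context_init() above replaces ad-hoc cleanup with a single goto-unwind ladder (err_mapping/err_ctx/err_dev), so each failure point releases exactly what was acquired before it, in reverse order. A minimal stand-alone sketch of that pattern, with illustrative names (acquire/init_chain are not cxl API), assuming three resources acquired in sequence:

```c
#include <assert.h>

/* Illustrative stand-ins for the device reference, context and
 * address_space acquired in turn by cxl_dev_context_init(). */
static int acquired[3];

static int acquire(int i, int should_fail)
{
	if (should_fail)
		return -1;
	acquired[i] = 1;
	return 0;
}

/* Unwind in reverse order of acquisition, mirroring the driver's
 * err_mapping/err_ctx/err_dev label ladder: a failure at step N jumps
 * to the label that frees step N-1, then falls through the rest. */
static int init_chain(int fail_at)
{
	if (acquire(0, fail_at == 0))
		goto err;
	if (acquire(1, fail_at == 1))
		goto err_dev;
	if (acquire(2, fail_at == 2))
		goto err_ctx;
	return 0;

err_ctx:
	acquired[1] = 0;	/* kfree(ctx) in the driver */
err_dev:
	acquired[0] = 0;	/* put_device() in the driver */
err:
	return -1;
}
```

The fall-through labels guarantee no resource is freed twice and none is leaked, whichever step fails.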
+21 -1
drivers/misc/cxl/context.c
···
126 126 	if (ctx->status != STARTED) {
127 127 		mutex_unlock(&ctx->status_mutex);
128 128 		pr_devel("%s: Context not started, failing problem state access\n", __func__);
129 + 		if (ctx->mmio_err_ff) {
130 + 			if (!ctx->ff_page) {
131 + 				ctx->ff_page = alloc_page(GFP_USER);
132 + 				if (!ctx->ff_page)
133 + 					return VM_FAULT_OOM;
134 + 				memset(page_address(ctx->ff_page), 0xff, PAGE_SIZE);
135 + 			}
136 + 			get_page(ctx->ff_page);
137 + 			vmf->page = ctx->ff_page;
138 + 			vma->vm_page_prot = pgprot_cached(vma->vm_page_prot);
139 + 			return 0;
140 + 		}
129 141 		return VM_FAULT_SIGBUS;
130 142 	}
···
205 193 	if (status != STARTED)
206 194 		return -EBUSY;
207 195 
208 - 	WARN_ON(cxl_detach_process(ctx));
196 + 	/* Only warn if we detached while the link was OK.
197 + 	 * If detach fails when hw is down, we don't care.
198 + 	 */
199 + 	WARN_ON(cxl_detach_process(ctx) &&
200 + 		cxl_adapter_link_ok(ctx->afu->adapter));
209 201 	flush_work(&ctx->fault_work); /* Only needed for dedicated process */
210 202 	put_pid(ctx->pid);
211 203 	cxl_ctx_put();
···
269 253 	struct cxl_context *ctx = container_of(rcu, struct cxl_context, rcu);
270 254 
271 255 	free_page((u64)ctx->sstp);
256 + 	if (ctx->ff_page)
257 + 		__free_page(ctx->ff_page);
272 258 	ctx->sstp = NULL;
259 + 	if (ctx->kernelapi)
260 + 		kfree(ctx->mapping);
273 261 
274 262 	kfree(ctx);
275 263 }
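The fault path added to context.c above lazily allocates a page filled with 0xff and maps it into any context that has stopped (when userspace opted in via mmio_err_ff), so reads see all-ones — the same pattern real MMIO returns from a fenced device — instead of taking SIGBUS. A hedged user-space sketch of that substitution (fake_fault_page and FAKE_PAGE_SIZE are illustrative, not driver code):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define FAKE_PAGE_SIZE 4096

/* Lazily-allocated page of 0xff, modelling ctx->ff_page: allocated on
 * first fault after the context stops, then reused for every fault. */
static unsigned char *ff_page;

static const unsigned char *fake_fault_page(int context_started)
{
	if (context_started)
		return NULL;		/* normal MMIO mapping path */
	if (!ff_page) {
		ff_page = malloc(FAKE_PAGE_SIZE);
		if (!ff_page)
			return NULL;	/* would be VM_FAULT_OOM */
		memset(ff_page, 0xff, FAKE_PAGE_SIZE);
	}
	return ff_page;		/* every faulting read sees all-ones */
}
```

Handing back a shared all-ones page lets applications that poll MMIO keep running across an EEH event and notice the failure themselves, rather than being killed mid-recovery.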
+77 -17
drivers/misc/cxl/cxl.h
··· 34 34 * Bump version each time a user API change is made, whether it is 35 35 * backwards compatible ot not. 36 36 */ 37 - #define CXL_API_VERSION 1 37 + #define CXL_API_VERSION 2 38 38 #define CXL_API_VERSION_COMPATIBLE 1 39 39 40 40 /* ··· 83 83 /* 0x00C0:7EFF Implementation dependent area */ 84 84 static const cxl_p1_reg_t CXL_PSL_FIR1 = {0x0100}; 85 85 static const cxl_p1_reg_t CXL_PSL_FIR2 = {0x0108}; 86 + static const cxl_p1_reg_t CXL_PSL_Timebase = {0x0110}; 86 87 static const cxl_p1_reg_t CXL_PSL_VERSION = {0x0118}; 87 88 static const cxl_p1_reg_t CXL_PSL_RESLCKTO = {0x0128}; 89 + static const cxl_p1_reg_t CXL_PSL_TB_CTLSTAT = {0x0140}; 88 90 static const cxl_p1_reg_t CXL_PSL_FIR_CNTL = {0x0148}; 89 91 static const cxl_p1_reg_t CXL_PSL_DSNDCTL = {0x0150}; 90 92 static const cxl_p1_reg_t CXL_PSL_SNWRALLOC = {0x0158}; ··· 153 151 #define CXL_PSL_SPAP_Size 0x0000000000000ff0ULL 154 152 #define CXL_PSL_SPAP_Size_Shift 4 155 153 #define CXL_PSL_SPAP_V 0x0000000000000001ULL 154 + 155 + /****** CXL_PSL_Control ****************************************************/ 156 + #define CXL_PSL_Control_tb 0x0000000000000001ULL 156 157 157 158 /****** CXL_PSL_DLCNTL *****************************************************/ 158 159 #define CXL_PSL_DLCNTL_D (0x1ull << (63-28)) ··· 423 418 /* Used to unmap any mmaps when force detaching */ 424 419 struct address_space *mapping; 425 420 struct mutex mapping_lock; 421 + struct page *ff_page; 422 + bool mmio_err_ff; 423 + bool kernelapi; 426 424 427 425 spinlock_t sste_lock; /* Protects segment table entries */ 428 426 struct cxl_sste *sstp; ··· 501 493 bool user_image_loaded; 502 494 bool perst_loads_image; 503 495 bool perst_select_user; 496 + bool perst_same_image; 504 497 }; 505 498 506 499 int cxl_alloc_one_irq(struct cxl *adapter); ··· 540 531 __be32 software_state; 541 532 } __packed; 542 533 534 + static inline bool cxl_adapter_link_ok(struct cxl *cxl) 535 + { 536 + struct pci_dev *pdev; 537 + 538 + pdev = 
to_pci_dev(cxl->dev.parent); 539 + return !pci_channel_offline(pdev); 540 + } 541 + 543 542 static inline void __iomem *_cxl_p1_addr(struct cxl *cxl, cxl_p1_reg_t reg) 544 543 { 545 544 WARN_ON(!cpu_has_feature(CPU_FTR_HVMODE)); 546 545 return cxl->p1_mmio + cxl_reg_off(reg); 547 546 } 548 547 549 - #define cxl_p1_write(cxl, reg, val) \ 550 - out_be64(_cxl_p1_addr(cxl, reg), val) 551 - #define cxl_p1_read(cxl, reg) \ 552 - in_be64(_cxl_p1_addr(cxl, reg)) 548 + static inline void cxl_p1_write(struct cxl *cxl, cxl_p1_reg_t reg, u64 val) 549 + { 550 + if (likely(cxl_adapter_link_ok(cxl))) 551 + out_be64(_cxl_p1_addr(cxl, reg), val); 552 + } 553 + 554 + static inline u64 cxl_p1_read(struct cxl *cxl, cxl_p1_reg_t reg) 555 + { 556 + if (likely(cxl_adapter_link_ok(cxl))) 557 + return in_be64(_cxl_p1_addr(cxl, reg)); 558 + else 559 + return ~0ULL; 560 + } 553 561 554 562 static inline void __iomem *_cxl_p1n_addr(struct cxl_afu *afu, cxl_p1n_reg_t reg) 555 563 { ··· 574 548 return afu->p1n_mmio + cxl_reg_off(reg); 575 549 } 576 550 577 - #define cxl_p1n_write(afu, reg, val) \ 578 - out_be64(_cxl_p1n_addr(afu, reg), val) 579 - #define cxl_p1n_read(afu, reg) \ 580 - in_be64(_cxl_p1n_addr(afu, reg)) 551 + static inline void cxl_p1n_write(struct cxl_afu *afu, cxl_p1n_reg_t reg, u64 val) 552 + { 553 + if (likely(cxl_adapter_link_ok(afu->adapter))) 554 + out_be64(_cxl_p1n_addr(afu, reg), val); 555 + } 556 + 557 + static inline u64 cxl_p1n_read(struct cxl_afu *afu, cxl_p1n_reg_t reg) 558 + { 559 + if (likely(cxl_adapter_link_ok(afu->adapter))) 560 + return in_be64(_cxl_p1n_addr(afu, reg)); 561 + else 562 + return ~0ULL; 563 + } 581 564 582 565 static inline void __iomem *_cxl_p2n_addr(struct cxl_afu *afu, cxl_p2n_reg_t reg) 583 566 { 584 567 return afu->p2n_mmio + cxl_reg_off(reg); 585 568 } 586 569 587 - #define cxl_p2n_write(afu, reg, val) \ 588 - out_be64(_cxl_p2n_addr(afu, reg), val) 589 - #define cxl_p2n_read(afu, reg) \ 590 - in_be64(_cxl_p2n_addr(afu, reg)) 570 + static 
inline void cxl_p2n_write(struct cxl_afu *afu, cxl_p2n_reg_t reg, u64 val) 571 + { 572 + if (likely(cxl_adapter_link_ok(afu->adapter))) 573 + out_be64(_cxl_p2n_addr(afu, reg), val); 574 + } 591 575 576 + static inline u64 cxl_p2n_read(struct cxl_afu *afu, cxl_p2n_reg_t reg) 577 + { 578 + if (likely(cxl_adapter_link_ok(afu->adapter))) 579 + return in_be64(_cxl_p2n_addr(afu, reg)); 580 + else 581 + return ~0ULL; 582 + } 592 583 593 - #define cxl_afu_cr_read64(afu, cr, off) \ 594 - in_le64((afu)->afu_desc_mmio + (afu)->crs_offset + ((cr) * (afu)->crs_len) + (off)) 595 - #define cxl_afu_cr_read32(afu, cr, off) \ 596 - in_le32((afu)->afu_desc_mmio + (afu)->crs_offset + ((cr) * (afu)->crs_len) + (off)) 584 + static inline u64 cxl_afu_cr_read64(struct cxl_afu *afu, int cr, u64 off) 585 + { 586 + if (likely(cxl_adapter_link_ok(afu->adapter))) 587 + return in_le64((afu)->afu_desc_mmio + (afu)->crs_offset + 588 + ((cr) * (afu)->crs_len) + (off)); 589 + else 590 + return ~0ULL; 591 + } 592 + 593 + static inline u32 cxl_afu_cr_read32(struct cxl_afu *afu, int cr, u64 off) 594 + { 595 + if (likely(cxl_adapter_link_ok(afu->adapter))) 596 + return in_le32((afu)->afu_desc_mmio + (afu)->crs_offset + 597 + ((cr) * (afu)->crs_len) + (off)); 598 + else 599 + return 0xffffffff; 600 + } 597 601 u16 cxl_afu_cr_read16(struct cxl_afu *afu, int cr, u64 off); 598 602 u8 cxl_afu_cr_read8(struct cxl_afu *afu, int cr, u64 off); 599 603 ··· 640 584 641 585 int cxl_alloc_adapter_nr(struct cxl *adapter); 642 586 void cxl_remove_adapter_nr(struct cxl *adapter); 587 + 588 + int cxl_alloc_spa(struct cxl_afu *afu); 589 + void cxl_release_spa(struct cxl_afu *afu); 643 590 644 591 int cxl_file_init(void); 645 592 void cxl_file_exit(void); ··· 734 675 735 676 void cxl_stop_trace(struct cxl *cxl); 736 677 int cxl_pci_vphb_add(struct cxl_afu *afu); 678 + void cxl_pci_vphb_reconfigure(struct cxl_afu *afu); 737 679 void cxl_pci_vphb_remove(struct cxl_afu *afu); 738 680 739 681 extern struct pci_driver 
cxl_pci_driver;
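The cxl.h changes above turn the raw cxl_p1/p1n/p2n read/write macros into inline functions that consult cxl_adapter_link_ok() first: writes to a downed link are silently dropped, and reads return all-ones, which is what the PCI bus would deliver anyway. A minimal model of that behaviour (fake_adapter, guarded_read and guarded_write are illustrative stand-ins, with link_ok playing the role of !pci_channel_offline()):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Stand-in for the adapter: link_ok models the PCI channel state,
 * reg models a 64-bit MMIO register. */
struct fake_adapter {
	bool link_ok;
	uint64_t reg;
};

/* Mirrors the shape of the new cxl_p1_read(): never touch the device
 * when the link is down; return ~0ULL, the bus's natural error value. */
static uint64_t guarded_read(const struct fake_adapter *a)
{
	return a->link_ok ? a->reg : ~0ULL;
}

/* Mirrors cxl_p1_write(): a write to a downed link is a no-op. */
static void guarded_write(struct fake_adapter *a, uint64_t val)
{
	if (a->link_ok)
		a->reg = val;
}
```

Centralising the check in the accessors means every existing register access in the driver becomes EEH-safe at once, without sprinkling link checks through the callers.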
+1 -1
drivers/misc/cxl/debugfs.c
···
48 48 static struct dentry *debugfs_create_io_x64(const char *name, umode_t mode,
49 49 					    struct dentry *parent, u64 __iomem *value)
50 50 {
51 - 	return debugfs_create_file(name, mode, parent, (void *)value, &fops_io_x64);
51 + 	return debugfs_create_file(name, mode, parent, (void __force *)value, &fops_io_x64);
52 52 }
53 53 
54 54 int cxl_debugfs_adapter_add(struct cxl *adapter)
+24 -3
drivers/misc/cxl/file.c
···
73 73 	if (!afu->current_mode)
74 74 		goto err_put_afu;
75 75 
76 + 	if (!cxl_adapter_link_ok(adapter)) {
77 + 		rc = -EIO;
78 + 		goto err_put_afu;
79 + 	}
80 + 
76 81 	if (!(ctx = cxl_context_alloc())) {
77 82 		rc = -ENOMEM;
78 83 		goto err_put_afu;
···
184 179 	if (work.flags & CXL_START_WORK_AMR)
185 180 		amr = work.amr & mfspr(SPRN_UAMOR);
186 181 
182 + 	ctx->mmio_err_ff = !!(work.flags & CXL_START_WORK_ERR_FF);
183 + 
187 184 	/*
188 185 	 * We grab the PID here and not in the file open to allow for the case
189 186 	 * where a process (master, some daemon, etc) has opened the chardev on
···
245 238 	if (ctx->status == CLOSED)
246 239 		return -EIO;
247 240 
241 + 	if (!cxl_adapter_link_ok(ctx->afu->adapter))
242 + 		return -EIO;
243 + 
248 244 	pr_devel("afu_ioctl\n");
249 245 	switch (cmd) {
250 246 	case CXL_IOCTL_START_WORK:
···
261 251 		return -EINVAL;
262 252 }
263 253 
264 - long afu_compat_ioctl(struct file *file, unsigned int cmd,
254 + static long afu_compat_ioctl(struct file *file, unsigned int cmd,
265 255 			     unsigned long arg)
266 256 {
267 257 	return afu_ioctl(file, cmd, arg);
···
273 263 
274 264 	/* AFU must be started before we can MMIO */
275 265 	if (ctx->status != STARTED)
266 + 		return -EIO;
267 + 
268 + 	if (!cxl_adapter_link_ok(ctx->afu->adapter))
276 269 		return -EIO;
277 270 
278 271 	return cxl_context_iomap(ctx, vm);
···
322 309 	int rc;
323 310 	DEFINE_WAIT(wait);
324 311 
312 + 	if (!cxl_adapter_link_ok(ctx->afu->adapter))
313 + 		return -EIO;
314 + 
325 315 	if (count < CXL_READ_MIN_SIZE)
326 316 		return -EINVAL;
···
334 318 		prepare_to_wait(&ctx->wq, &wait, TASK_INTERRUPTIBLE);
335 319 		if (ctx_event_pending(ctx))
336 320 			break;
321 + 
322 + 		if (!cxl_adapter_link_ok(ctx->afu->adapter)) {
323 + 			rc = -EIO;
324 + 			goto out;
325 + 		}
337 326 
338 327 		if (file->f_flags & O_NONBLOCK) {
339 328 			rc = -EAGAIN;
···
417 396 	.mmap = afu_mmap,
418 397 };
419 398 
420 - const struct file_operations afu_master_fops = {
399 + static const struct file_operations afu_master_fops = {
421 400 	.owner		= THIS_MODULE,
422 401 	.open		= afu_master_open,
423 402 	.poll		= afu_poll,
···
540 519 	 * If these change we really need to update API. Either change some
541 520 	 * flags or update API version number CXL_API_VERSION.
542 521 	 */
543 - 	BUILD_BUG_ON(CXL_API_VERSION != 1);
522 + 	BUILD_BUG_ON(CXL_API_VERSION != 2);
544 523 	BUILD_BUG_ON(sizeof(struct cxl_ioctl_start_work) != 64);
545 524 	BUILD_BUG_ON(sizeof(struct cxl_event_header) != 8);
546 525 	BUILD_BUG_ON(sizeof(struct cxl_event_afu_interrupt) != 8);
+36 -20
drivers/misc/cxl/irq.c
··· 30 30 serr = cxl_p1n_read(ctx->afu, CXL_PSL_SERR_An); 31 31 afu_debug = cxl_p1n_read(ctx->afu, CXL_AFU_DEBUG_An); 32 32 33 - dev_crit(&ctx->afu->dev, "PSL ERROR STATUS: 0x%.16llx\n", errstat); 34 - dev_crit(&ctx->afu->dev, "PSL_FIR1: 0x%.16llx\n", fir1); 35 - dev_crit(&ctx->afu->dev, "PSL_FIR2: 0x%.16llx\n", fir2); 36 - dev_crit(&ctx->afu->dev, "PSL_SERR_An: 0x%.16llx\n", serr); 37 - dev_crit(&ctx->afu->dev, "PSL_FIR_SLICE_An: 0x%.16llx\n", fir_slice); 38 - dev_crit(&ctx->afu->dev, "CXL_PSL_AFU_DEBUG_An: 0x%.16llx\n", afu_debug); 33 + dev_crit(&ctx->afu->dev, "PSL ERROR STATUS: 0x%016llx\n", errstat); 34 + dev_crit(&ctx->afu->dev, "PSL_FIR1: 0x%016llx\n", fir1); 35 + dev_crit(&ctx->afu->dev, "PSL_FIR2: 0x%016llx\n", fir2); 36 + dev_crit(&ctx->afu->dev, "PSL_SERR_An: 0x%016llx\n", serr); 37 + dev_crit(&ctx->afu->dev, "PSL_FIR_SLICE_An: 0x%016llx\n", fir_slice); 38 + dev_crit(&ctx->afu->dev, "CXL_PSL_AFU_DEBUG_An: 0x%016llx\n", afu_debug); 39 39 40 40 dev_crit(&ctx->afu->dev, "STOPPING CXL TRACE\n"); 41 41 cxl_stop_trace(ctx->afu->adapter); ··· 54 54 fir_slice = cxl_p1n_read(afu, CXL_PSL_FIR_SLICE_An); 55 55 errstat = cxl_p2n_read(afu, CXL_PSL_ErrStat_An); 56 56 afu_debug = cxl_p1n_read(afu, CXL_AFU_DEBUG_An); 57 - dev_crit(&afu->dev, "PSL_SERR_An: 0x%.16llx\n", serr); 58 - dev_crit(&afu->dev, "PSL_FIR_SLICE_An: 0x%.16llx\n", fir_slice); 59 - dev_crit(&afu->dev, "CXL_PSL_ErrStat_An: 0x%.16llx\n", errstat); 60 - dev_crit(&afu->dev, "CXL_PSL_AFU_DEBUG_An: 0x%.16llx\n", afu_debug); 57 + dev_crit(&afu->dev, "PSL_SERR_An: 0x%016llx\n", serr); 58 + dev_crit(&afu->dev, "PSL_FIR_SLICE_An: 0x%016llx\n", fir_slice); 59 + dev_crit(&afu->dev, "CXL_PSL_ErrStat_An: 0x%016llx\n", errstat); 60 + dev_crit(&afu->dev, "CXL_PSL_AFU_DEBUG_An: 0x%016llx\n", afu_debug); 61 61 62 62 cxl_p1n_write(afu, CXL_PSL_SERR_An, serr); 63 63 ··· 72 72 WARN(1, "CXL ERROR interrupt %i\n", irq); 73 73 74 74 err_ivte = cxl_p1_read(adapter, CXL_PSL_ErrIVTE); 75 - dev_crit(&adapter->dev, "PSL_ErrIVTE: 
0x%.16llx\n", err_ivte); 75 + dev_crit(&adapter->dev, "PSL_ErrIVTE: 0x%016llx\n", err_ivte); 76 76 77 77 dev_crit(&adapter->dev, "STOPPING CXL TRACE\n"); 78 78 cxl_stop_trace(adapter); ··· 80 80 fir1 = cxl_p1_read(adapter, CXL_PSL_FIR1); 81 81 fir2 = cxl_p1_read(adapter, CXL_PSL_FIR2); 82 82 83 - dev_crit(&adapter->dev, "PSL_FIR1: 0x%.16llx\nPSL_FIR2: 0x%.16llx\n", fir1, fir2); 83 + dev_crit(&adapter->dev, "PSL_FIR1: 0x%016llx\nPSL_FIR2: 0x%016llx\n", fir1, fir2); 84 84 85 85 return IRQ_HANDLED; 86 86 } ··· 147 147 if (dsisr & CXL_PSL_DSISR_An_PE) 148 148 return handle_psl_slice_error(ctx, dsisr, irq_info->errstat); 149 149 if (dsisr & CXL_PSL_DSISR_An_AE) { 150 - pr_devel("CXL interrupt: AFU Error %.llx\n", irq_info->afu_err); 150 + pr_devel("CXL interrupt: AFU Error 0x%016llx\n", irq_info->afu_err); 151 151 152 152 if (ctx->pending_afu_err) { 153 153 /* ··· 158 158 * probably best that we log them somewhere: 159 159 */ 160 160 dev_err_ratelimited(&ctx->afu->dev, "CXL AFU Error " 161 - "undelivered to pe %i: %.llx\n", 161 + "undelivered to pe %i: 0x%016llx\n", 162 162 ctx->pe, irq_info->afu_err); 163 163 } else { 164 164 spin_lock(&ctx->lock); ··· 211 211 } 212 212 rcu_read_unlock(); 213 213 214 - WARN(1, "Unable to demultiplex CXL PSL IRQ for PE %i DSISR %.16llx DAR" 215 - " %.16llx\n(Possible AFU HW issue - was a term/remove acked" 214 + WARN(1, "Unable to demultiplex CXL PSL IRQ for PE %i DSISR %016llx DAR" 215 + " %016llx\n(Possible AFU HW issue - was a term/remove acked" 216 216 " with outstanding transactions?)\n", ph, irq_info.dsisr, 217 217 irq_info.dar); 218 218 return fail_psl_irq(afu, &irq_info); ··· 341 341 342 342 void cxl_release_psl_err_irq(struct cxl *adapter) 343 343 { 344 + if (adapter->err_virq != irq_find_mapping(NULL, adapter->err_hwirq)) 345 + return; 346 + 344 347 cxl_p1_write(adapter, CXL_PSL_ErrIVTE, 0x0000000000000000); 345 348 cxl_unmap_irq(adapter->err_virq, adapter); 346 349 cxl_release_one_irq(adapter, adapter->err_hwirq); ··· 377 374 
378 375 void cxl_release_serr_irq(struct cxl_afu *afu) 379 376 { 377 + if (afu->serr_virq != irq_find_mapping(NULL, afu->serr_hwirq)) 378 + return; 379 + 380 380 cxl_p1n_write(afu, CXL_PSL_SERR_An, 0x0000000000000000); 381 381 cxl_unmap_irq(afu->serr_virq, afu); 382 382 cxl_release_one_irq(afu->adapter, afu->serr_hwirq); ··· 406 400 407 401 void cxl_release_psl_irq(struct cxl_afu *afu) 408 402 { 403 + if (afu->psl_virq != irq_find_mapping(NULL, afu->psl_hwirq)) 404 + return; 405 + 409 406 cxl_unmap_irq(afu->psl_virq, afu); 410 407 cxl_release_one_irq(afu->adapter, afu->psl_hwirq); 411 408 kfree(afu->psl_irq_name); 412 409 } 413 410 414 - void afu_irq_name_free(struct cxl_context *ctx) 411 + static void afu_irq_name_free(struct cxl_context *ctx) 415 412 { 416 413 struct cxl_irq_name *irq_name, *tmp; 417 414 ··· 430 421 int rc, r, i, j = 1; 431 422 struct cxl_irq_name *irq_name; 432 423 424 + /* Initialize the list head to hold irq names */ 425 + INIT_LIST_HEAD(&ctx->irq_names); 426 + 433 427 if ((rc = cxl_alloc_irq_ranges(&ctx->irqs, ctx->afu->adapter, count))) 434 428 return rc; 435 429 ··· 444 432 ctx->irq_bitmap = kcalloc(BITS_TO_LONGS(count), 445 433 sizeof(*ctx->irq_bitmap), GFP_KERNEL); 446 434 if (!ctx->irq_bitmap) 447 - return -ENOMEM; 435 + goto out; 448 436 449 437 /* 450 438 * Allocate names first. If any fail, bail out before allocating 451 439 * actual hardware IRQs. 
452 440 */ 453 - INIT_LIST_HEAD(&ctx->irq_names); 454 441 for (r = 1; r < CXL_IRQ_RANGES; r++) { 455 442 for (i = 0; i < ctx->irqs.range[r]; i++) { 456 443 irq_name = kmalloc(sizeof(struct cxl_irq_name), ··· 471 460 return 0; 472 461 473 462 out: 463 + cxl_release_irq_ranges(&ctx->irqs, ctx->afu->adapter); 474 464 afu_irq_name_free(ctx); 475 465 return -ENOMEM; 476 466 } 477 467 478 - void afu_register_hwirqs(struct cxl_context *ctx) 468 + static void afu_register_hwirqs(struct cxl_context *ctx) 479 469 { 480 470 irq_hw_number_t hwirq; 481 471 struct cxl_irq_name *irq_name; ··· 523 511 524 512 afu_irq_name_free(ctx); 525 513 cxl_release_irq_ranges(&ctx->irqs, ctx->afu->adapter); 514 + 515 + kfree(ctx->irq_bitmap); 516 + ctx->irq_bitmap = NULL; 517 + ctx->irq_count = 0; 526 518 }
+1
drivers/misc/cxl/main.c
···
222 222 	cxl_debugfs_exit();
223 223 	cxl_file_exit();
224 224 	unregister_cxl_calls(&cxl_calls);
225 + 	idr_destroy(&cxl_adapter_idr);
225 226 }
226 227 
227 228 module_init(init_cxl);
+97 -22
drivers/misc/cxl/native.c
··· 41 41 rc = -EBUSY; 42 42 goto out; 43 43 } 44 - pr_devel_ratelimited("AFU control... (0x%.16llx)\n", 44 + 45 + if (!cxl_adapter_link_ok(afu->adapter)) { 46 + afu->enabled = enabled; 47 + rc = -EIO; 48 + goto out; 49 + } 50 + 51 + pr_devel_ratelimited("AFU control... (0x%016llx)\n", 45 52 AFU_Cntl | command); 46 53 cpu_relax(); 47 54 AFU_Cntl = cxl_p2n_read(afu, CXL_AFU_Cntl_An); ··· 92 85 93 86 int cxl_afu_check_and_enable(struct cxl_afu *afu) 94 87 { 88 + if (!cxl_adapter_link_ok(afu->adapter)) { 89 + WARN(1, "Refusing to enable afu while link down!\n"); 90 + return -EIO; 91 + } 95 92 if (afu->enabled) 96 93 return 0; 97 94 return afu_enable(afu); ··· 114 103 115 104 pr_devel("PSL purge request\n"); 116 105 106 + if (!cxl_adapter_link_ok(afu->adapter)) { 107 + dev_warn(&afu->dev, "PSL Purge called with link down, ignoring\n"); 108 + rc = -EIO; 109 + goto out; 110 + } 111 + 117 112 if ((AFU_Cntl & CXL_AFU_Cntl_An_ES_MASK) != CXL_AFU_Cntl_An_ES_Disabled) { 118 113 WARN(1, "psl_purge request while AFU not disabled!\n"); 119 114 cxl_afu_disable(afu); ··· 136 119 rc = -EBUSY; 137 120 goto out; 138 121 } 122 + if (!cxl_adapter_link_ok(afu->adapter)) { 123 + rc = -EIO; 124 + goto out; 125 + } 126 + 139 127 dsisr = cxl_p2n_read(afu, CXL_PSL_DSISR_An); 140 - pr_devel_ratelimited("PSL purging... PSL_CNTL: 0x%.16llx PSL_DSISR: 0x%.16llx\n", PSL_CNTL, dsisr); 128 + pr_devel_ratelimited("PSL purging... 
PSL_CNTL: 0x%016llx PSL_DSISR: 0x%016llx\n", PSL_CNTL, dsisr); 141 129 if (dsisr & CXL_PSL_DSISR_TRANS) { 142 130 dar = cxl_p2n_read(afu, CXL_PSL_DAR_An); 143 - dev_notice(&afu->dev, "PSL purge terminating pending translation, DSISR: 0x%.16llx, DAR: 0x%.16llx\n", dsisr, dar); 131 + dev_notice(&afu->dev, "PSL purge terminating pending translation, DSISR: 0x%016llx, DAR: 0x%016llx\n", dsisr, dar); 144 132 cxl_p2n_write(afu, CXL_PSL_TFC_An, CXL_PSL_TFC_An_AE); 145 133 } else if (dsisr) { 146 - dev_notice(&afu->dev, "PSL purge acknowledging pending non-translation fault, DSISR: 0x%.16llx\n", dsisr); 134 + dev_notice(&afu->dev, "PSL purge acknowledging pending non-translation fault, DSISR: 0x%016llx\n", dsisr); 147 135 cxl_p2n_write(afu, CXL_PSL_TFC_An, CXL_PSL_TFC_An_A); 148 136 } else { 149 137 cpu_relax(); ··· 183 161 return ((spa_size / 8) - 96) / 17; 184 162 } 185 163 186 - static int alloc_spa(struct cxl_afu *afu) 164 + int cxl_alloc_spa(struct cxl_afu *afu) 187 165 { 188 - u64 spap; 189 - 190 166 /* Work out how many pages to allocate */ 191 167 afu->spa_order = 0; 192 168 do { ··· 203 183 pr_devel("spa pages: %i afu->spa_max_procs: %i afu->num_procs: %i\n", 204 184 1<<afu->spa_order, afu->spa_max_procs, afu->num_procs); 205 185 186 + return 0; 187 + } 188 + 189 + static void attach_spa(struct cxl_afu *afu) 190 + { 191 + u64 spap; 192 + 206 193 afu->sw_command_status = (__be64 *)((char *)afu->spa + 207 194 ((afu->spa_max_procs + 3) * 128)); 208 195 ··· 218 191 spap |= CXL_PSL_SPAP_V; 219 192 pr_devel("cxl: SPA allocated at 0x%p. 
Max processes: %i, sw_command_status: 0x%p CXL_PSL_SPAP_An=0x%016llx\n", afu->spa, afu->spa_max_procs, afu->sw_command_status, spap); 220 193 cxl_p1n_write(afu, CXL_PSL_SPAP_An, spap); 221 - 222 - return 0; 223 194 } 224 195 225 - static void release_spa(struct cxl_afu *afu) 196 + static inline void detach_spa(struct cxl_afu *afu) 226 197 { 227 198 cxl_p1n_write(afu, CXL_PSL_SPAP_An, 0); 228 - free_pages((unsigned long) afu->spa, afu->spa_order); 199 + } 200 + 201 + void cxl_release_spa(struct cxl_afu *afu) 202 + { 203 + if (afu->spa) { 204 + free_pages((unsigned long) afu->spa, afu->spa_order); 205 + afu->spa = NULL; 206 + } 229 207 } 230 208 231 209 int cxl_tlb_slb_invalidate(struct cxl *adapter) ··· 247 215 dev_warn(&adapter->dev, "WARNING: CXL adapter wide TLBIA timed out!\n"); 248 216 return -EBUSY; 249 217 } 218 + if (!cxl_adapter_link_ok(adapter)) 219 + return -EIO; 250 220 cpu_relax(); 251 221 } 252 222 ··· 258 224 dev_warn(&adapter->dev, "WARNING: CXL adapter wide SLBIA timed out!\n"); 259 225 return -EBUSY; 260 226 } 227 + if (!cxl_adapter_link_ok(adapter)) 228 + return -EIO; 261 229 cpu_relax(); 262 230 } 263 231 return 0; ··· 276 240 dev_warn(&afu->dev, "WARNING: CXL AFU SLBIA timed out!\n"); 277 241 return -EBUSY; 278 242 } 243 + /* If the adapter has gone down, we can assume that we 244 + * will PERST it and that will invalidate everything. 
245 + */ 246 + if (!cxl_adapter_link_ok(afu->adapter)) 247 + return -EIO; 279 248 cpu_relax(); 280 249 } 281 250 return 0; ··· 320 279 cxl_p1_write(adapter, CXL_PSL_SLBIA, CXL_TLB_SLB_IQ_LPIDPID); 321 280 322 281 while (1) { 282 + if (!cxl_adapter_link_ok(adapter)) 283 + break; 323 284 slbia = cxl_p1_read(adapter, CXL_PSL_SLBIA); 324 285 if (!(slbia & CXL_TLB_SLB_P)) 325 286 break; ··· 349 306 if (time_after_eq(jiffies, timeout)) { 350 307 dev_warn(&ctx->afu->dev, "WARNING: Process Element Command timed out!\n"); 351 308 rc = -EBUSY; 309 + goto out; 310 + } 311 + if (!cxl_adapter_link_ok(ctx->afu->adapter)) { 312 + dev_warn(&ctx->afu->dev, "WARNING: Device link down, aborting Process Element Command!\n"); 313 + rc = -EIO; 352 314 goto out; 353 315 } 354 316 state = be64_to_cpup(ctx->afu->sw_command_status); ··· 403 355 404 356 mutex_lock(&ctx->afu->spa_mutex); 405 357 pr_devel("%s Terminate pe: %i started\n", __func__, ctx->pe); 406 - rc = do_process_element_cmd(ctx, CXL_SPA_SW_CMD_TERMINATE, 407 - CXL_PE_SOFTWARE_STATE_V | CXL_PE_SOFTWARE_STATE_T); 358 + /* We could be asked to terminate when the hw is down. That 359 + * should always succeed: it's not running if the hw has gone 360 + * away and is being reset. 361 + */ 362 + if (cxl_adapter_link_ok(ctx->afu->adapter)) 363 + rc = do_process_element_cmd(ctx, CXL_SPA_SW_CMD_TERMINATE, 364 + CXL_PE_SOFTWARE_STATE_V | CXL_PE_SOFTWARE_STATE_T); 408 365 ctx->elem->software_state = 0; /* Remove Valid bit */ 409 366 pr_devel("%s Terminate pe: %i finished\n", __func__, ctx->pe); 410 367 mutex_unlock(&ctx->afu->spa_mutex); ··· 422 369 423 370 mutex_lock(&ctx->afu->spa_mutex); 424 371 pr_devel("%s Remove pe: %i started\n", __func__, ctx->pe); 425 - if (!(rc = do_process_element_cmd(ctx, CXL_SPA_SW_CMD_REMOVE, 0))) 372 + 373 + /* We could be asked to remove when the hw is down. Again, if 374 + * the hw is down, the PE is gone, so we succeed. 
375 + */ 376 + if (cxl_adapter_link_ok(ctx->afu->adapter)) 377 + rc = do_process_element_cmd(ctx, CXL_SPA_SW_CMD_REMOVE, 0); 378 + 379 + if (!rc) 426 380 ctx->pe_inserted = false; 427 381 slb_invalid(ctx); 428 382 pr_devel("%s Remove pe: %i finished\n", __func__, ctx->pe); ··· 457 397 458 398 dev_info(&afu->dev, "Activating AFU directed mode\n"); 459 399 460 - if (alloc_spa(afu)) 461 - return -ENOMEM; 400 + if (afu->spa == NULL) { 401 + if (cxl_alloc_spa(afu)) 402 + return -ENOMEM; 403 + } 404 + attach_spa(afu); 462 405 463 406 cxl_p1n_write(afu, CXL_PSL_SCNTL_An, CXL_PSL_SCNTL_An_PM_AFU); 464 407 cxl_p1n_write(afu, CXL_PSL_AMOR_An, 0xFFFFFFFFFFFFFFFFULL); ··· 555 492 if ((result = cxl_afu_check_and_enable(ctx->afu))) 556 493 return result; 557 494 558 - add_process_element(ctx); 559 - 560 - return 0; 495 + return add_process_element(ctx); 561 496 } 562 497 563 498 static int deactivate_afu_directed(struct cxl_afu *afu) ··· 571 510 __cxl_afu_reset(afu); 572 511 cxl_afu_disable(afu); 573 512 cxl_psl_purge(afu); 574 - 575 - release_spa(afu); 576 513 577 514 return 0; 578 515 } ··· 673 614 if (!(mode & afu->modes_supported)) 674 615 return -EINVAL; 675 616 617 + if (!cxl_adapter_link_ok(afu->adapter)) { 618 + WARN(1, "Device link is down, refusing to activate!\n"); 619 + return -EIO; 620 + } 621 + 676 622 if (mode == CXL_MODE_DIRECTED) 677 623 return activate_afu_directed(afu); 678 624 if (mode == CXL_MODE_DEDICATED) ··· 688 624 689 625 int cxl_attach_process(struct cxl_context *ctx, bool kernel, u64 wed, u64 amr) 690 626 { 627 + if (!cxl_adapter_link_ok(ctx->afu->adapter)) { 628 + WARN(1, "Device link is down, refusing to attach process!\n"); 629 + return -EIO; 630 + } 631 + 691 632 ctx->kernel = kernel; 692 633 if (ctx->afu->current_mode == CXL_MODE_DIRECTED) 693 634 return attach_afu_directed(ctx, wed, amr); ··· 737 668 { 738 669 u64 pidtid; 739 670 671 + /* If the adapter has gone away, we can't get any meaningful 672 + * information. 
673 + */ 674 + if (!cxl_adapter_link_ok(afu->adapter)) 675 + return -EIO; 676 + 740 677 info->dsisr = cxl_p2n_read(afu, CXL_PSL_DSISR_An); 741 678 info->dar = cxl_p2n_read(afu, CXL_PSL_DAR_An); 742 679 info->dsr = cxl_p2n_read(afu, CXL_PSL_DSR_An); ··· 759 684 { 760 685 u64 dsisr; 761 686 762 - pr_devel("RECOVERING FROM PSL ERROR... (0x%.16llx)\n", errstat); 687 + pr_devel("RECOVERING FROM PSL ERROR... (0x%016llx)\n", errstat); 763 688 764 689 /* Clear PSL_DSISR[PE] */ 765 690 dsisr = cxl_p2n_read(afu, CXL_PSL_DSISR_An);
+504 -143
drivers/misc/cxl/pci.c
··· 24 24 #include <asm/io.h> 25 25 26 26 #include "cxl.h" 27 + #include <misc/cxl.h> 27 28 28 29 29 30 #define CXL_PCI_VSEC_ID 0x1280 ··· 134 133 return (val >> ((off & 0x3) * 8)) & 0xff; 135 134 } 136 135 137 - static DEFINE_PCI_DEVICE_TABLE(cxl_pci_tbl) = { 136 + static const struct pci_device_id cxl_pci_tbl[] = { 138 137 { PCI_DEVICE(PCI_VENDOR_ID_IBM, 0x0477), }, 139 138 { PCI_DEVICE(PCI_VENDOR_ID_IBM, 0x044b), }, 140 139 { PCI_DEVICE(PCI_VENDOR_ID_IBM, 0x04cf), }, ··· 370 369 return 0; 371 370 } 372 371 372 + #define TBSYNC_CNT(n) (((u64)n & 0x7) << (63-6)) 373 + #define _2048_250MHZ_CYCLES 1 374 + 375 + static int cxl_setup_psl_timebase(struct cxl *adapter, struct pci_dev *dev) 376 + { 377 + u64 psl_tb; 378 + int delta; 379 + unsigned int retry = 0; 380 + struct device_node *np; 381 + 382 + if (!(np = pnv_pci_get_phb_node(dev))) 383 + return -ENODEV; 384 + 385 + /* Do not fail when CAPP timebase sync is not supported by OPAL */ 386 + of_node_get(np); 387 + if (! of_get_property(np, "ibm,capp-timebase-sync", NULL)) { 388 + of_node_put(np); 389 + pr_err("PSL: Timebase sync: OPAL support missing\n"); 390 + return 0; 391 + } 392 + of_node_put(np); 393 + 394 + /* 395 + * Setup PSL Timebase Control and Status register 396 + * with the recommended Timebase Sync Count value 397 + */ 398 + cxl_p1_write(adapter, CXL_PSL_TB_CTLSTAT, 399 + TBSYNC_CNT(2 * _2048_250MHZ_CYCLES)); 400 + 401 + /* Enable PSL Timebase */ 402 + cxl_p1_write(adapter, CXL_PSL_Control, 0x0000000000000000); 403 + cxl_p1_write(adapter, CXL_PSL_Control, CXL_PSL_Control_tb); 404 + 405 + /* Wait until CORE TB and PSL TB difference <= 16usecs */ 406 + do { 407 + msleep(1); 408 + if (retry++ > 5) { 409 + pr_err("PSL: Timebase sync: giving up!\n"); 410 + return -EIO; 411 + } 412 + psl_tb = cxl_p1_read(adapter, CXL_PSL_Timebase); 413 + delta = mftb() - psl_tb; 414 + if (delta < 0) 415 + delta = -delta; 416 + } while (cputime_to_usecs(delta) > 16); 417 + 418 + return 0; 419 + } 420 + 373 421 static int 
init_implementation_afu_regs(struct cxl_afu *afu) 374 422 { 375 423 /* read/write masks for this slice */ ··· 589 539 590 540 static void cxl_unmap_slice_regs(struct cxl_afu *afu) 591 541 { 592 - if (afu->p2n_mmio) 542 + if (afu->p2n_mmio) { 593 543 iounmap(afu->p2n_mmio); 594 - if (afu->p1n_mmio) 544 + afu->p2n_mmio = NULL; 545 + } 546 + if (afu->p1n_mmio) { 595 547 iounmap(afu->p1n_mmio); 548 + afu->p1n_mmio = NULL; 549 + } 550 + if (afu->afu_desc_mmio) { 551 + iounmap(afu->afu_desc_mmio); 552 + afu->afu_desc_mmio = NULL; 553 + } 596 554 } 597 555 598 556 static void cxl_release_afu(struct device *dev) ··· 608 550 struct cxl_afu *afu = to_cxl_afu(dev); 609 551 610 552 pr_devel("cxl_release_afu\n"); 553 + 554 + idr_destroy(&afu->contexts_idr); 555 + cxl_release_spa(afu); 611 556 612 557 kfree(afu); 613 558 } ··· 717 656 */ 718 657 reg = cxl_p2n_read(afu, CXL_AFU_Cntl_An); 719 658 if ((reg & CXL_AFU_Cntl_An_ES_MASK) != CXL_AFU_Cntl_An_ES_Disabled) { 720 - dev_warn(&afu->dev, "WARNING: AFU was not disabled: %#.16llx\n", reg); 659 + dev_warn(&afu->dev, "WARNING: AFU was not disabled: %#016llx\n", reg); 721 660 if (__cxl_afu_reset(afu)) 722 661 return -EIO; 723 662 if (cxl_afu_disable(afu)) ··· 738 677 cxl_p2n_write(afu, CXL_SSTP0_An, 0x0000000000000000); 739 678 reg = cxl_p2n_read(afu, CXL_PSL_DSISR_An); 740 679 if (reg) { 741 - dev_warn(&afu->dev, "AFU had pending DSISR: %#.16llx\n", reg); 680 + dev_warn(&afu->dev, "AFU had pending DSISR: %#016llx\n", reg); 742 681 if (reg & CXL_PSL_DSISR_TRANS) 743 682 cxl_p2n_write(afu, CXL_PSL_TFC_An, CXL_PSL_TFC_An_AE); 744 683 else ··· 747 686 reg = cxl_p1n_read(afu, CXL_PSL_SERR_An); 748 687 if (reg) { 749 688 if (reg & ~0xffff) 750 - dev_warn(&afu->dev, "AFU had pending SERR: %#.16llx\n", reg); 689 + dev_warn(&afu->dev, "AFU had pending SERR: %#016llx\n", reg); 751 690 cxl_p1n_write(afu, CXL_PSL_SERR_An, reg & ~0xffff); 752 691 } 753 692 reg = cxl_p2n_read(afu, CXL_PSL_ErrStat_An); 754 693 if (reg) { 755 - dev_warn(&afu->dev, 
"AFU had pending error status: %#.16llx\n", reg); 694 + dev_warn(&afu->dev, "AFU had pending error status: %#016llx\n", reg); 756 695 cxl_p2n_write(afu, CXL_PSL_ErrStat_An, reg); 757 696 } 758 697 ··· 803 742 return count; 804 743 } 805 744 806 - static int cxl_init_afu(struct cxl *adapter, int slice, struct pci_dev *dev) 745 + static int cxl_configure_afu(struct cxl_afu *afu, struct cxl *adapter, struct pci_dev *dev) 807 746 { 808 - struct cxl_afu *afu; 809 - bool free = true; 810 747 int rc; 811 748 812 - if (!(afu = cxl_alloc_afu(adapter, slice))) 813 - return -ENOMEM; 814 - 815 - if ((rc = dev_set_name(&afu->dev, "afu%i.%i", adapter->adapter_num, slice))) 816 - goto err1; 817 - 818 749 if ((rc = cxl_map_slice_regs(afu, adapter, dev))) 819 - goto err1; 750 + return rc; 820 751 821 752 if ((rc = sanitise_afu_regs(afu))) 822 - goto err2; 753 + goto err1; 823 754 824 755 /* We need to reset the AFU before we can read the AFU descriptor */ 825 756 if ((rc = __cxl_afu_reset(afu))) 826 - goto err2; 757 + goto err1; 827 758 828 759 if (cxl_verbose) 829 760 dump_afu_descriptor(afu); 830 761 831 762 if ((rc = cxl_read_afu_descriptor(afu))) 832 - goto err2; 763 + goto err1; 833 764 834 765 if ((rc = cxl_afu_descriptor_looks_ok(afu))) 835 - goto err2; 766 + goto err1; 836 767 837 768 if ((rc = init_implementation_afu_regs(afu))) 838 - goto err2; 769 + goto err1; 839 770 840 771 if ((rc = cxl_register_serr_irq(afu))) 841 - goto err2; 772 + goto err1; 842 773 843 774 if ((rc = cxl_register_psl_irq(afu))) 844 - goto err3; 775 + goto err2; 776 + 777 + return 0; 778 + 779 + err2: 780 + cxl_release_serr_irq(afu); 781 + err1: 782 + cxl_unmap_slice_regs(afu); 783 + return rc; 784 + } 785 + 786 + static void cxl_deconfigure_afu(struct cxl_afu *afu) 787 + { 788 + cxl_release_psl_irq(afu); 789 + cxl_release_serr_irq(afu); 790 + cxl_unmap_slice_regs(afu); 791 + } 792 + 793 + static int cxl_init_afu(struct cxl *adapter, int slice, struct pci_dev *dev) 794 + { 795 + struct cxl_afu *afu; 
796 + int rc; 797 + 798 + afu = cxl_alloc_afu(adapter, slice); 799 + if (!afu) 800 + return -ENOMEM; 801 + 802 + rc = dev_set_name(&afu->dev, "afu%i.%i", adapter->adapter_num, slice); 803 + if (rc) 804 + goto err_free; 805 + 806 + rc = cxl_configure_afu(afu, adapter, dev); 807 + if (rc) 808 + goto err_free; 845 809 846 810 /* Don't care if this fails */ 847 811 cxl_debugfs_afu_add(afu); ··· 881 795 if ((rc = cxl_sysfs_afu_add(afu))) 882 796 goto err_put1; 883 797 884 - 885 - if ((rc = cxl_afu_select_best_mode(afu))) 886 - goto err_put2; 887 - 888 798 adapter->afu[afu->slice] = afu; 889 799 890 800 if ((rc = cxl_pci_vphb_add(afu))) ··· 888 806 889 807 return 0; 890 808 891 - err_put2: 892 - cxl_sysfs_afu_remove(afu); 893 809 err_put1: 894 - device_unregister(&afu->dev); 895 - free = false; 810 + cxl_deconfigure_afu(afu); 896 811 cxl_debugfs_afu_remove(afu); 897 - cxl_release_psl_irq(afu); 898 - err3: 899 - cxl_release_serr_irq(afu); 900 - err2: 901 - cxl_unmap_slice_regs(afu); 902 - err1: 903 - if (free) 904 - kfree(afu); 812 + device_unregister(&afu->dev); 905 813 return rc; 814 + 815 + err_free: 816 + kfree(afu); 817 + return rc; 818 + 906 819 } 907 820 908 821 static void cxl_remove_afu(struct cxl_afu *afu) ··· 917 840 cxl_context_detach_all(afu); 918 841 cxl_afu_deactivate_mode(afu); 919 842 920 - cxl_release_psl_irq(afu); 921 - cxl_release_serr_irq(afu); 922 - cxl_unmap_slice_regs(afu); 923 - 843 + cxl_deconfigure_afu(afu); 924 844 device_unregister(&afu->dev); 925 845 } 926 846 ··· 925 851 { 926 852 struct pci_dev *dev = to_pci_dev(adapter->dev.parent); 927 853 int rc; 928 - int i; 929 - u32 val; 854 + 855 + if (adapter->perst_same_image) { 856 + dev_warn(&dev->dev, 857 + "cxl: refusing to reset/reflash when perst_reloads_same_image is set.\n"); 858 + return -EINVAL; 859 + } 930 860 931 861 dev_info(&dev->dev, "CXL reset\n"); 932 - 933 - for (i = 0; i < adapter->slices; i++) { 934 - cxl_pci_vphb_remove(adapter->afu[i]); 935 - cxl_remove_afu(adapter->afu[i]); 
936 - } 937 862 938 863 /* pcie_warm_reset requests a fundamental pci reset which includes a 939 864 * PERST assert/deassert. PERST triggers a loading of the image ··· 941 868 dev_err(&dev->dev, "cxl: pcie_warm_reset failed\n"); 942 869 return rc; 943 870 } 944 - 945 - /* the PERST done above fences the PHB. So, reset depends on EEH 946 - * to unbind the driver, tell Sapphire to reinit the PHB, and rebind 947 - * the driver. Do an mmio read explictly to ensure EEH notices the 948 - * fenced PHB. Retry for a few seconds before giving up. */ 949 - i = 0; 950 - while (((val = mmio_read32be(adapter->p1_mmio)) != 0xffffffff) && 951 - (i < 5)) { 952 - msleep(500); 953 - i++; 954 - } 955 - 956 - if (val != 0xffffffff) 957 - dev_err(&dev->dev, "cxl: PERST failed to trigger EEH\n"); 958 871 959 872 return rc; 960 873 } ··· 952 893 if (pci_request_region(dev, 0, "priv 1 regs")) 953 894 goto err2; 954 895 955 - pr_devel("cxl_map_adapter_regs: p1: %#.16llx %#llx, p2: %#.16llx %#llx", 896 + pr_devel("cxl_map_adapter_regs: p1: %#016llx %#llx, p2: %#016llx %#llx", 956 897 p1_base(dev), p1_size(dev), p2_base(dev), p2_size(dev)); 957 898 958 899 if (!(adapter->p1_mmio = ioremap(p1_base(dev), p1_size(dev)))) ··· 976 917 977 918 static void cxl_unmap_adapter_regs(struct cxl *adapter) 978 919 { 979 - if (adapter->p1_mmio) 920 + if (adapter->p1_mmio) { 980 921 iounmap(adapter->p1_mmio); 981 - if (adapter->p2_mmio) 922 + adapter->p1_mmio = NULL; 923 + pci_release_region(to_pci_dev(adapter->dev.parent), 2); 924 + } 925 + if (adapter->p2_mmio) { 982 926 iounmap(adapter->p2_mmio); 927 + adapter->p2_mmio = NULL; 928 + pci_release_region(to_pci_dev(adapter->dev.parent), 0); 929 + } 983 930 } 984 931 985 932 static int cxl_read_vsec(struct cxl *adapter, struct pci_dev *dev) ··· 1014 949 CXL_READ_VSEC_BASE_IMAGE(dev, vsec, &adapter->base_image); 1015 950 CXL_READ_VSEC_IMAGE_STATE(dev, vsec, &image_state); 1016 951 adapter->user_image_loaded = !!(image_state & CXL_VSEC_USER_IMAGE_LOADED); 1017 
- adapter->perst_loads_image = true; 1018 952 adapter->perst_select_user = !!(image_state & CXL_VSEC_USER_IMAGE_LOADED); 1019 953 1020 954 CXL_READ_VSEC_NAFUS(dev, vsec, &adapter->slices); ··· 1073 1009 1074 1010 pr_devel("cxl_release_adapter\n"); 1075 1011 1012 + cxl_remove_adapter_nr(adapter); 1013 + 1076 1014 kfree(adapter); 1077 1015 } 1078 1016 1079 - static struct cxl *cxl_alloc_adapter(struct pci_dev *dev) 1017 + static struct cxl *cxl_alloc_adapter(void) 1080 1018 { 1081 1019 struct cxl *adapter; 1082 1020 1083 1021 if (!(adapter = kzalloc(sizeof(struct cxl), GFP_KERNEL))) 1084 1022 return NULL; 1085 1023 1086 - adapter->dev.parent = &dev->dev; 1087 - adapter->dev.release = cxl_release_adapter; 1088 - pci_set_drvdata(dev, adapter); 1089 1024 spin_lock_init(&adapter->afu_list_lock); 1090 1025 1026 + if (cxl_alloc_adapter_nr(adapter)) 1027 + goto err1; 1028 + 1029 + if (dev_set_name(&adapter->dev, "card%i", adapter->adapter_num)) 1030 + goto err2; 1031 + 1091 1032 return adapter; 1033 + 1034 + err2: 1035 + cxl_remove_adapter_nr(adapter); 1036 + err1: 1037 + kfree(adapter); 1038 + return NULL; 1092 1039 } 1040 + 1041 + #define CXL_PSL_ErrIVTE_tberror (0x1ull << (63-31)) 1093 1042 1094 1043 static int sanitise_adapter_regs(struct cxl *adapter) 1095 1044 { 1096 - cxl_p1_write(adapter, CXL_PSL_ErrIVTE, 0x0000000000000000); 1045 + /* Clear PSL tberror bit by writing 1 to it */ 1046 + cxl_p1_write(adapter, CXL_PSL_ErrIVTE, CXL_PSL_ErrIVTE_tberror); 1097 1047 return cxl_tlb_slb_invalidate(adapter); 1048 + } 1049 + 1050 + /* This should contain *only* operations that can safely be done in 1051 + * both creation and recovery. 
1052 + */ 1053 + static int cxl_configure_adapter(struct cxl *adapter, struct pci_dev *dev) 1054 + { 1055 + int rc; 1056 + 1057 + adapter->dev.parent = &dev->dev; 1058 + adapter->dev.release = cxl_release_adapter; 1059 + pci_set_drvdata(dev, adapter); 1060 + 1061 + rc = pci_enable_device(dev); 1062 + if (rc) { 1063 + dev_err(&dev->dev, "pci_enable_device failed: %i\n", rc); 1064 + return rc; 1065 + } 1066 + 1067 + if ((rc = cxl_read_vsec(adapter, dev))) 1068 + return rc; 1069 + 1070 + if ((rc = cxl_vsec_looks_ok(adapter, dev))) 1071 + return rc; 1072 + 1073 + if ((rc = setup_cxl_bars(dev))) 1074 + return rc; 1075 + 1076 + if ((rc = switch_card_to_cxl(dev))) 1077 + return rc; 1078 + 1079 + if ((rc = cxl_update_image_control(adapter))) 1080 + return rc; 1081 + 1082 + if ((rc = cxl_map_adapter_regs(adapter, dev))) 1083 + return rc; 1084 + 1085 + if ((rc = sanitise_adapter_regs(adapter))) 1086 + goto err; 1087 + 1088 + if ((rc = init_implementation_adapter_regs(adapter, dev))) 1089 + goto err; 1090 + 1091 + if ((rc = pnv_phb_to_cxl_mode(dev, OPAL_PHB_CAPI_MODE_CAPI))) 1092 + goto err; 1093 + 1094 + /* If recovery happened, the last step is to turn on snooping. 
1095 + * In the non-recovery case this has no effect */ 1096 + if ((rc = pnv_phb_to_cxl_mode(dev, OPAL_PHB_CAPI_MODE_SNOOP_ON))) 1097 + goto err; 1098 + 1099 + if ((rc = cxl_setup_psl_timebase(adapter, dev))) 1100 + goto err; 1101 + 1102 + if ((rc = cxl_register_psl_err_irq(adapter))) 1103 + goto err; 1104 + 1105 + return 0; 1106 + 1107 + err: 1108 + cxl_unmap_adapter_regs(adapter); 1109 + return rc; 1110 + 1111 + } 1112 + 1113 + static void cxl_deconfigure_adapter(struct cxl *adapter) 1114 + { 1115 + struct pci_dev *pdev = to_pci_dev(adapter->dev.parent); 1116 + 1117 + cxl_release_psl_err_irq(adapter); 1118 + cxl_unmap_adapter_regs(adapter); 1119 + 1120 + pci_disable_device(pdev); 1098 1121 } 1099 1122 1100 1123 static struct cxl *cxl_init_adapter(struct pci_dev *dev) 1101 1124 { 1102 1125 struct cxl *adapter; 1103 - bool free = true; 1104 1126 int rc; 1105 1127 1106 - 1107 - if (!(adapter = cxl_alloc_adapter(dev))) 1128 + adapter = cxl_alloc_adapter(); 1129 + if (!adapter) 1108 1130 return ERR_PTR(-ENOMEM); 1109 1131 1110 - if ((rc = cxl_read_vsec(adapter, dev))) 1111 - goto err1; 1132 + /* Set defaults for parameters which need to persist over 1133 + * configure/reconfigure 1134 + */ 1135 + adapter->perst_loads_image = true; 1136 + adapter->perst_same_image = false; 1112 1137 1113 - if ((rc = cxl_vsec_looks_ok(adapter, dev))) 1114 - goto err1; 1115 - 1116 - if ((rc = setup_cxl_bars(dev))) 1117 - goto err1; 1118 - 1119 - if ((rc = switch_card_to_cxl(dev))) 1120 - goto err1; 1121 - 1122 - if ((rc = cxl_alloc_adapter_nr(adapter))) 1123 - goto err1; 1124 - 1125 - if ((rc = dev_set_name(&adapter->dev, "card%i", adapter->adapter_num))) 1126 - goto err2; 1127 - 1128 - if ((rc = cxl_update_image_control(adapter))) 1129 - goto err2; 1130 - 1131 - if ((rc = cxl_map_adapter_regs(adapter, dev))) 1132 - goto err2; 1133 - 1134 - if ((rc = sanitise_adapter_regs(adapter))) 1135 - goto err2; 1136 - 1137 - if ((rc = init_implementation_adapter_regs(adapter, dev))) 1138 - goto 
err3; 1139 - 1140 - if ((rc = pnv_phb_to_cxl_mode(dev, OPAL_PHB_CAPI_MODE_CAPI))) 1141 - goto err3; 1142 - 1143 - /* If recovery happened, the last step is to turn on snooping. 1144 - * In the non-recovery case this has no effect */ 1145 - if ((rc = pnv_phb_to_cxl_mode(dev, OPAL_PHB_CAPI_MODE_SNOOP_ON))) { 1146 - goto err3; 1138 + rc = cxl_configure_adapter(adapter, dev); 1139 + if (rc) { 1140 + pci_disable_device(dev); 1141 + cxl_release_adapter(&adapter->dev); 1142 + return ERR_PTR(rc); 1147 1143 } 1148 - 1149 - if ((rc = cxl_register_psl_err_irq(adapter))) 1150 - goto err3; 1151 1144 1152 1145 /* Don't care if this one fails: */ 1153 1146 cxl_debugfs_adapter_add(adapter); ··· 1222 1101 return adapter; 1223 1102 1224 1103 err_put1: 1225 - device_unregister(&adapter->dev); 1226 - free = false; 1104 + /* This should mirror cxl_remove_adapter, except without the 1105 + * sysfs parts 1106 + */ 1227 1107 cxl_debugfs_adapter_remove(adapter); 1228 - cxl_release_psl_err_irq(adapter); 1229 - err3: 1230 - cxl_unmap_adapter_regs(adapter); 1231 - err2: 1232 - cxl_remove_adapter_nr(adapter); 1233 - err1: 1234 - if (free) 1235 - kfree(adapter); 1108 + cxl_deconfigure_adapter(adapter); 1109 + device_unregister(&adapter->dev); 1236 1110 return ERR_PTR(rc); 1237 1111 } 1238 1112 1239 1113 static void cxl_remove_adapter(struct cxl *adapter) 1240 1114 { 1241 - struct pci_dev *pdev = to_pci_dev(adapter->dev.parent); 1242 - 1243 - pr_devel("cxl_release_adapter\n"); 1115 + pr_devel("cxl_remove_adapter\n"); 1244 1116 1245 1117 cxl_sysfs_adapter_remove(adapter); 1246 1118 cxl_debugfs_adapter_remove(adapter); 1247 - cxl_release_psl_err_irq(adapter); 1248 - cxl_unmap_adapter_regs(adapter); 1249 - cxl_remove_adapter_nr(adapter); 1119 + 1120 + cxl_deconfigure_adapter(adapter); 1250 1121 1251 1122 device_unregister(&adapter->dev); 1252 - 1253 - pci_release_region(pdev, 0); 1254 - pci_release_region(pdev, 2); 1255 - pci_disable_device(pdev); 1256 1123 } 1257 1124 1258 1125 static int 
cxl_probe(struct pci_dev *dev, const struct pci_device_id *id) ··· 1254 1145 if (cxl_verbose) 1255 1146 dump_cxl_config_space(dev); 1256 1147 1257 - if ((rc = pci_enable_device(dev))) { 1258 - dev_err(&dev->dev, "pci_enable_device failed: %i\n", rc); 1259 - return rc; 1260 - } 1261 - 1262 1148 adapter = cxl_init_adapter(dev); 1263 1149 if (IS_ERR(adapter)) { 1264 1150 dev_err(&dev->dev, "cxl_init_adapter failed: %li\n", PTR_ERR(adapter)); 1265 - pci_disable_device(dev); 1266 1151 return PTR_ERR(adapter); 1267 1152 } 1268 1153 1269 1154 for (slice = 0; slice < adapter->slices; slice++) { 1270 - if ((rc = cxl_init_afu(adapter, slice, dev))) 1155 + if ((rc = cxl_init_afu(adapter, slice, dev))) { 1271 1156 dev_err(&dev->dev, "AFU %i failed to initialise: %i\n", slice, rc); 1157 + continue; 1158 + } 1159 + 1160 + rc = cxl_afu_select_best_mode(adapter->afu[slice]); 1161 + if (rc) 1162 + dev_err(&dev->dev, "AFU %i failed to start: %i\n", slice, rc); 1272 1163 } 1273 1164 1274 1165 return 0; ··· 1292 1183 cxl_remove_adapter(adapter); 1293 1184 } 1294 1185 1186 + static pci_ers_result_t cxl_vphb_error_detected(struct cxl_afu *afu, 1187 + pci_channel_state_t state) 1188 + { 1189 + struct pci_dev *afu_dev; 1190 + pci_ers_result_t result = PCI_ERS_RESULT_NEED_RESET; 1191 + pci_ers_result_t afu_result = PCI_ERS_RESULT_NEED_RESET; 1192 + 1193 + /* There should only be one entry, but go through the list 1194 + * anyway 1195 + */ 1196 + list_for_each_entry(afu_dev, &afu->phb->bus->devices, bus_list) { 1197 + if (!afu_dev->driver) 1198 + continue; 1199 + 1200 + afu_dev->error_state = state; 1201 + 1202 + if (afu_dev->driver->err_handler) 1203 + afu_result = afu_dev->driver->err_handler->error_detected(afu_dev, 1204 + state); 1205 + /* Disconnect trumps all, NONE trumps NEED_RESET */ 1206 + if (afu_result == PCI_ERS_RESULT_DISCONNECT) 1207 + result = PCI_ERS_RESULT_DISCONNECT; 1208 + else if ((afu_result == PCI_ERS_RESULT_NONE) && 1209 + (result == PCI_ERS_RESULT_NEED_RESET)) 1210 + 
result = PCI_ERS_RESULT_NONE; 1211 + } 1212 + return result; 1213 + } 1214 + 1215 + static pci_ers_result_t cxl_pci_error_detected(struct pci_dev *pdev, 1216 + pci_channel_state_t state) 1217 + { 1218 + struct cxl *adapter = pci_get_drvdata(pdev); 1219 + struct cxl_afu *afu; 1220 + pci_ers_result_t result = PCI_ERS_RESULT_NEED_RESET; 1221 + int i; 1222 + 1223 + /* At this point, we could still have an interrupt pending. 1224 + * Let's try to get them out of the way before they do 1225 + * anything we don't like. 1226 + */ 1227 + schedule(); 1228 + 1229 + /* If we're permanently dead, give up. */ 1230 + if (state == pci_channel_io_perm_failure) { 1231 + /* Tell the AFU drivers; but we don't care what they 1232 + * say, we're going away. 1233 + */ 1234 + for (i = 0; i < adapter->slices; i++) { 1235 + afu = adapter->afu[i]; 1236 + cxl_vphb_error_detected(afu, state); 1237 + } 1238 + return PCI_ERS_RESULT_DISCONNECT; 1239 + } 1240 + 1241 + /* Are we reflashing? 1242 + * 1243 + * If we reflash, we could come back as something entirely 1244 + * different, including a non-CAPI card. As such, by default 1245 + * we don't participate in the process. We'll be unbound and 1246 + * the slot re-probed. (TODO: check EEH doesn't blindly rebind 1247 + * us!) 1248 + * 1249 + * However, this isn't the entire story: for reliability 1250 + * reasons, we usually want to reflash the FPGA on PERST in 1251 + * order to get back to a more reliable known-good state. 1252 + * 1253 + * This causes us a bit of a problem: if we reflash we can't 1254 + * trust that we'll come back the same - we could have a new 1255 + * image and been PERSTed in order to load that 1256 + * image. However, most of the time we actually *will* come 1257 + * back the same - for example a regular EEH event. 1258 + * 1259 + * Therefore, we allow the user to assert that the image is 1260 + * indeed the same and that we should continue on into EEH 1261 + * anyway.
1262 + */ 1263 + if (adapter->perst_loads_image && !adapter->perst_same_image) { 1264 + /* TODO take the PHB out of CXL mode */ 1265 + dev_info(&pdev->dev, "reflashing, so opting out of EEH!\n"); 1266 + return PCI_ERS_RESULT_NONE; 1267 + } 1268 + 1269 + /* 1270 + * At this point, we want to try to recover. We'll always 1271 + * need a complete slot reset: we don't trust any other reset. 1272 + * 1273 + * Now, we go through each AFU: 1274 + * - We send the driver, if bound, an error_detected callback. 1275 + * We expect it to clean up, but it can also tell us to give 1276 + * up and permanently detach the card. To simplify things, if 1277 + * any bound AFU driver doesn't support EEH, we give up on EEH. 1278 + * 1279 + * - We detach all contexts associated with the AFU. This 1280 + * does not free them, but puts them into a CLOSED state 1281 + * which causes any associated files to return useful 1282 + * errors to userland. It also unmaps, but does not free, 1283 + * any IRQs. 1284 + * 1285 + * - We clean up our side: releasing and unmapping resources we hold 1286 + * so we can wire them up again when the hardware comes back up. 1287 + * 1288 + * Driver authors should note: 1289 + * 1290 + * - Any contexts you create in your kernel driver (except 1291 + * those associated with anonymous file descriptors) are 1292 + * your responsibility to free and recreate. Likewise with 1293 + * any attached resources. 1294 + * 1295 + * - We will take responsibility for re-initialising the 1296 + * device context (the one set up for you in 1297 + * cxl_pci_enable_device_hook and accessed through 1298 + * cxl_get_context). If you've attached IRQs or other 1299 + * resources to it, they remain yours to free. 1300 + * 1301 + * You can call the same functions to release resources as you 1302 + * normally would: we make sure that these functions continue 1303 + * to work when the hardware is down.
1304 + * 1305 + * Two examples: 1306 + * 1307 + * 1) If you normally free all your resources at the end of 1308 + * each request, or if you use anonymous FDs, your 1309 + * error_detected callback can simply set a flag to tell 1310 + * your driver not to start any new calls. You can then 1311 + * clear the flag in the resume callback. 1312 + * 1313 + * 2) If you normally allocate your resources on startup: 1314 + * * Set a flag in error_detected as above. 1315 + * * Let CXL detach your contexts. 1316 + * * In slot_reset, free the old resources and allocate new ones. 1317 + * * In resume, clear the flag to allow things to start. 1318 + */ 1319 + for (i = 0; i < adapter->slices; i++) { 1320 + afu = adapter->afu[i]; 1321 + 1322 + result = cxl_vphb_error_detected(afu, state); 1323 + 1324 + /* Only continue if everyone agrees on NEED_RESET */ 1325 + if (result != PCI_ERS_RESULT_NEED_RESET) 1326 + return result; 1327 + 1328 + cxl_context_detach_all(afu); 1329 + cxl_afu_deactivate_mode(afu); 1330 + cxl_deconfigure_afu(afu); 1331 + } 1332 + cxl_deconfigure_adapter(adapter); 1333 + 1334 + return result; 1335 + } 1336 + 1337 + static pci_ers_result_t cxl_pci_slot_reset(struct pci_dev *pdev) 1338 + { 1339 + struct cxl *adapter = pci_get_drvdata(pdev); 1340 + struct cxl_afu *afu; 1341 + struct cxl_context *ctx; 1342 + struct pci_dev *afu_dev; 1343 + pci_ers_result_t afu_result = PCI_ERS_RESULT_RECOVERED; 1344 + pci_ers_result_t result = PCI_ERS_RESULT_RECOVERED; 1345 + int i; 1346 + 1347 + if (cxl_configure_adapter(adapter, pdev)) 1348 + goto err; 1349 + 1350 + for (i = 0; i < adapter->slices; i++) { 1351 + afu = adapter->afu[i]; 1352 + 1353 + if (cxl_configure_afu(afu, adapter, pdev)) 1354 + goto err; 1355 + 1356 + if (cxl_afu_select_best_mode(afu)) 1357 + goto err; 1358 + 1359 + cxl_pci_vphb_reconfigure(afu); 1360 + 1361 + list_for_each_entry(afu_dev, &afu->phb->bus->devices, bus_list) { 1362 + /* Reset the device context. 
1363 + * TODO: make this less disruptive 1364 + */ 1365 + ctx = cxl_get_context(afu_dev); 1366 + 1367 + if (ctx && cxl_release_context(ctx)) 1368 + goto err; 1369 + 1370 + ctx = cxl_dev_context_init(afu_dev); 1371 + if (!ctx) 1372 + goto err; 1373 + 1374 + afu_dev->dev.archdata.cxl_ctx = ctx; 1375 + 1376 + if (cxl_afu_check_and_enable(afu)) 1377 + goto err; 1378 + 1379 + afu_dev->error_state = pci_channel_io_normal; 1380 + 1381 + /* If there's a driver attached, allow it to 1382 + * chime in on recovery. Drivers should check 1383 + * if everything has come back OK, but 1384 + * shouldn't start new work until we call 1385 + * their resume function. 1386 + */ 1387 + if (!afu_dev->driver) 1388 + continue; 1389 + 1390 + if (afu_dev->driver->err_handler && 1391 + afu_dev->driver->err_handler->slot_reset) 1392 + afu_result = afu_dev->driver->err_handler->slot_reset(afu_dev); 1393 + 1394 + if (afu_result == PCI_ERS_RESULT_DISCONNECT) 1395 + result = PCI_ERS_RESULT_DISCONNECT; 1396 + } 1397 + } 1398 + return result; 1399 + 1400 + err: 1401 + /* All the bits that happen in both error_detected and cxl_remove 1402 + * should be idempotent, so we don't need to worry about leaving a mix 1403 + * of unconfigured and reconfigured resources. 1404 + */ 1405 + dev_err(&pdev->dev, "EEH recovery failed. Asking to be disconnected.\n"); 1406 + return PCI_ERS_RESULT_DISCONNECT; 1407 + } 1408 + 1409 + static void cxl_pci_resume(struct pci_dev *pdev) 1410 + { 1411 + struct cxl *adapter = pci_get_drvdata(pdev); 1412 + struct cxl_afu *afu; 1413 + struct pci_dev *afu_dev; 1414 + int i; 1415 + 1416 + /* Everything is back now. Drivers should restart work now. 1417 + * This is not the place to be checking if everything came back up 1418 + * properly, because there's no return value: do that in slot_reset. 
1419 + */ 1420 + for (i = 0; i < adapter->slices; i++) { 1421 + afu = adapter->afu[i]; 1422 + 1423 + list_for_each_entry(afu_dev, &afu->phb->bus->devices, bus_list) { 1424 + if (afu_dev->driver && afu_dev->driver->err_handler && 1425 + afu_dev->driver->err_handler->resume) 1426 + afu_dev->driver->err_handler->resume(afu_dev); 1427 + } 1428 + } 1429 + } 1430 + 1431 + static const struct pci_error_handlers cxl_err_handler = { 1432 + .error_detected = cxl_pci_error_detected, 1433 + .slot_reset = cxl_pci_slot_reset, 1434 + .resume = cxl_pci_resume, 1435 + }; 1436 + 1295 1437 struct pci_driver cxl_pci_driver = { 1296 1438 .name = "cxl-pci", 1297 1439 .id_table = cxl_pci_tbl, 1298 1440 .probe = cxl_probe, 1299 1441 .remove = cxl_remove, 1300 1442 .shutdown = cxl_remove, 1443 + .err_handler = &cxl_err_handler, 1301 1444 };
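The "Disconnect trumps all, NONE trumps NEED_RESET" rule that `cxl_vphb_error_detected` applies while walking the vPHB's devices can be sketched as a small user-space helper. This is a minimal sketch only: the `merge_ers` name and the simplified enum are illustrative, not part of the kernel's `pci_ers_result_t` API.

```c
#include <assert.h>

/* Illustrative stand-in for the kernel's pci_ers_result_t values. */
enum ers_result { ERS_NONE, ERS_NEED_RESET, ERS_DISCONNECT };

/* Result merging as the patch's cxl_vphb_error_detected does it:
 * the accumulator starts at NEED_RESET; a DISCONNECT from any device
 * wins outright, and a NONE downgrades NEED_RESET but can never
 * override a DISCONNECT already seen. */
enum ers_result merge_ers(enum ers_result result, enum ers_result afu_result)
{
	if (afu_result == ERS_DISCONNECT)
		return ERS_DISCONNECT;
	if (afu_result == ERS_NONE && result == ERS_NEED_RESET)
		return ERS_NONE;
	return result;
}
```

Folding each per-device answer through this rule yields the single result that `cxl_pci_error_detected` then checks for `PCI_ERS_RESULT_NEED_RESET` before detaching contexts.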
+26
drivers/misc/cxl/sysfs.c
··· 112 112 return count; 113 113 } 114 114 115 + static ssize_t perst_reloads_same_image_show(struct device *device, 116 + struct device_attribute *attr, 117 + char *buf) 118 + { 119 + struct cxl *adapter = to_cxl_adapter(device); 120 + 121 + return scnprintf(buf, PAGE_SIZE, "%i\n", adapter->perst_same_image); 122 + } 123 + 124 + static ssize_t perst_reloads_same_image_store(struct device *device, 125 + struct device_attribute *attr, 126 + const char *buf, size_t count) 127 + { 128 + struct cxl *adapter = to_cxl_adapter(device); 129 + int rc; 130 + int val; 131 + 132 + rc = sscanf(buf, "%i", &val); 133 + if ((rc != 1) || !(val == 1 || val == 0)) 134 + return -EINVAL; 135 + 136 + adapter->perst_same_image = (val == 1 ? true : false); 137 + return count; 138 + } 139 + 115 140 static struct device_attribute adapter_attrs[] = { 116 141 __ATTR_RO(caia_version), 117 142 __ATTR_RO(psl_revision), 118 143 __ATTR_RO(base_image), 119 144 __ATTR_RO(image_loaded), 120 145 __ATTR_RW(load_image_on_perst), 146 + __ATTR_RW(perst_reloads_same_image), 121 147 __ATTR(reset, S_IWUSR, NULL, reset_adapter_store), 122 148 }; 123 149
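The input validation in `perst_reloads_same_image_store` above (accept exactly 0 or 1, reject everything else with `-EINVAL`) can be exercised in user space. A minimal sketch, assuming nothing beyond the C library: `parse_perst_flag` is a hypothetical helper name, and `-22` stands in for the kernel's `-EINVAL`.

```c
#include <stdio.h>
#include <assert.h>

/* Mirror of the store handler's parse: sscanf("%i") accepts decimal,
 * octal, or hex input, but only the values 0 and 1 are allowed. */
int parse_perst_flag(const char *buf)
{
	int val;

	if (sscanf(buf, "%i", &val) != 1 || (val != 0 && val != 1))
		return -22; /* stands in for -EINVAL */
	return val;
}
```

Writing `1` to this attribute is what lets `cxl_pci_error_detected` (in the pci.c hunks above) continue into EEH recovery instead of opting out when a PERST would otherwise be assumed to reflash the card.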
+5 -5
drivers/misc/cxl/trace.h
··· 105 105 __entry->num_interrupts = num_interrupts; 106 106 ), 107 107 108 - TP_printk("afu%i.%i pid=%i pe=%i wed=0x%.16llx irqs=%i amr=0x%llx", 108 + TP_printk("afu%i.%i pid=%i pe=%i wed=0x%016llx irqs=%i amr=0x%llx", 109 109 __entry->card, 110 110 __entry->afu, 111 111 __entry->pid, ··· 177 177 __entry->dar = dar; 178 178 ), 179 179 180 - TP_printk("afu%i.%i pe=%i irq=%i dsisr=%s dar=0x%.16llx", 180 + TP_printk("afu%i.%i pe=%i irq=%i dsisr=%s dar=0x%016llx", 181 181 __entry->card, 182 182 __entry->afu, 183 183 __entry->pe, ··· 233 233 __entry->dar = dar; 234 234 ), 235 235 236 - TP_printk("afu%i.%i pe=%i dar=0x%.16llx", 236 + TP_printk("afu%i.%i pe=%i dar=0x%016llx", 237 237 __entry->card, 238 238 __entry->afu, 239 239 __entry->pe, ··· 264 264 __entry->v = v; 265 265 ), 266 266 267 - TP_printk("afu%i.%i pe=%i SSTE[%i] E=0x%.16llx V=0x%.16llx", 267 + TP_printk("afu%i.%i pe=%i SSTE[%i] E=0x%016llx V=0x%016llx", 268 268 __entry->card, 269 269 __entry->afu, 270 270 __entry->pe, ··· 295 295 __entry->dar = dar; 296 296 ), 297 297 298 - TP_printk("afu%i.%i pe=%i dsisr=%s dar=0x%.16llx", 298 + TP_printk("afu%i.%i pe=%i dsisr=%s dar=0x%016llx", 299 299 __entry->card, 300 300 __entry->afu, 301 301 __entry->pe,
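The reason these trace strings switch from `%#.16llx` to `%#016llx`: with a precision of 16, the `0x` prefix is added on top of the 16 digits (18 characters total), whereas a zero-padded field width of 16 includes the prefix within those 16 characters. The difference is easy to see in user space; the helper names below are illustrative only.

```c
#include <stdio.h>
#include <string.h>
#include <assert.h>

/* Render a value with the old precision-based specifier... */
void fmt_precision(unsigned long long v, char out[32])
{
	snprintf(out, 32, "%#.16llx", v); /* 16 digits, "0x" on top */
}

/* ...and with the new width-based one. */
void fmt_width(unsigned long long v, char out[32])
{
	snprintf(out, 32, "%#016llx", v); /* total width 16, prefix included */
}
```

For 0x1234 the old form prints `0x0000000000001234` (18 chars) and the new form `0x00000000001234` (16 chars); for a zero value `#` suppresses the prefix entirely, so both collapse to sixteen zeros.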
+34
drivers/misc/cxl/vphb.c
··· 138 138 return 0; 139 139 } 140 140 141 + 142 + static inline bool cxl_config_link_ok(struct pci_bus *bus) 143 + { 144 + struct pci_controller *phb; 145 + struct cxl_afu *afu; 146 + 147 + /* Config space IO is based on phb->cfg_addr, which is based on 148 + * afu_desc_mmio. This isn't safe to read/write when the link 149 + * goes down, as EEH tears down MMIO space. 150 + * 151 + * Check if the link is OK before proceeding. 152 + */ 153 + 154 + phb = pci_bus_to_host(bus); 155 + if (phb == NULL) 156 + return false; 157 + afu = (struct cxl_afu *)phb->private_data; 158 + return cxl_adapter_link_ok(afu->adapter); 159 + } 160 + 141 161 static int cxl_pcie_read_config(struct pci_bus *bus, unsigned int devfn, 142 162 int offset, int len, u32 *val) 143 163 { ··· 169 149 &mask, &shift); 170 150 if (rc) 171 151 return rc; 152 + 153 + if (!cxl_config_link_ok(bus)) 154 + return PCIBIOS_DEVICE_NOT_FOUND; 172 155 173 156 /* Can only read 32 bits */ 174 157 *val = (in_le32(ioaddr) >> shift) & mask; ··· 189 166 &mask, &shift); 190 167 if (rc) 191 168 return rc; 169 + 170 + if (!cxl_config_link_ok(bus)) 171 + return PCIBIOS_DEVICE_NOT_FOUND; 192 172 193 173 /* Can only write 32 bits so do read-modify-write */ 194 174 mask <<= shift; ··· 266 240 return 0; 267 241 } 268 242 243 + void cxl_pci_vphb_reconfigure(struct cxl_afu *afu) 244 + { 245 + /* When we are reconfigured, the AFU's MMIO space is unmapped 246 + * and remapped. We need to reflect this in the PHB's view of 247 + * the world. 248 + */ 249 + afu->phb->cfg_addr = afu->afu_desc_mmio + afu->crs_offset; 250 + } 269 251 270 252 void cxl_pci_vphb_remove(struct cxl_afu *afu) 271 253 {
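The "read-modify-write" mentioned in `cxl_pcie_write_config` exists because the vPHB config window only supports 32-bit accesses, so a 1- or 2-byte write must be merged into the existing word. A user-space sketch of that merge, where `rmw_config_word` is a hypothetical name and `mask`/`shift` play the roles computed by the driver's config-info helper:

```c
#include <stdint.h>
#include <assert.h>

/* Merge a narrow config-space write into an existing 32-bit word.
 * mask covers the written field before shifting (0xff for one byte,
 * 0xffff for two); shift is the field's bit offset within the word. */
uint32_t rmw_config_word(uint32_t old, uint32_t val, uint32_t mask, int shift)
{
	mask <<= shift;
	val <<= shift;
	return (old & ~mask) | (val & mask);
}
```

The read half of the cycle is why the new `cxl_config_link_ok` check matters: both the read and the write-back go through MMIO that EEH tears down when the link drops.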
+129 -127
drivers/mtd/nand/fsl_ifc_nand.c
··· 238 238 239 239 ifc_nand_ctrl->page = page_addr; 240 240 /* Program ROW0/COL0 */ 241 - iowrite32be(page_addr, &ifc->ifc_nand.row0); 242 - iowrite32be((oob ? IFC_NAND_COL_MS : 0) | column, &ifc->ifc_nand.col0); 241 + ifc_out32(page_addr, &ifc->ifc_nand.row0); 242 + ifc_out32((oob ? IFC_NAND_COL_MS : 0) | column, &ifc->ifc_nand.col0); 243 243 244 244 buf_num = page_addr & priv->bufnum_mask; 245 245 ··· 301 301 int i; 302 302 303 303 /* set the chip select for NAND Transaction */ 304 - iowrite32be(priv->bank << IFC_NAND_CSEL_SHIFT, 305 - &ifc->ifc_nand.nand_csel); 304 + ifc_out32(priv->bank << IFC_NAND_CSEL_SHIFT, 305 + &ifc->ifc_nand.nand_csel); 306 306 307 307 dev_vdbg(priv->dev, 308 308 "%s: fir0=%08x fcr0=%08x\n", 309 309 __func__, 310 - ioread32be(&ifc->ifc_nand.nand_fir0), 311 - ioread32be(&ifc->ifc_nand.nand_fcr0)); 310 + ifc_in32(&ifc->ifc_nand.nand_fir0), 311 + ifc_in32(&ifc->ifc_nand.nand_fcr0)); 312 312 313 313 ctrl->nand_stat = 0; 314 314 315 315 /* start read/write seq */ 316 - iowrite32be(IFC_NAND_SEQ_STRT_FIR_STRT, &ifc->ifc_nand.nandseq_strt); 316 + ifc_out32(IFC_NAND_SEQ_STRT_FIR_STRT, &ifc->ifc_nand.nandseq_strt); 317 317 318 318 /* wait for command complete flag or timeout */ 319 319 wait_event_timeout(ctrl->nand_wait, ctrl->nand_stat, ··· 336 336 int sector_end = sector + chip->ecc.steps - 1; 337 337 338 338 for (i = sector / 4; i <= sector_end / 4; i++) 339 - eccstat[i] = ioread32be(&ifc->ifc_nand.nand_eccstat[i]); 339 + eccstat[i] = ifc_in32(&ifc->ifc_nand.nand_eccstat[i]); 340 340 341 341 for (i = sector; i <= sector_end; i++) { 342 342 errors = check_read_ecc(mtd, ctrl, eccstat, i); ··· 376 376 377 377 /* Program FIR/IFC_NAND_FCR0 for Small/Large page */ 378 378 if (mtd->writesize > 512) { 379 - iowrite32be((IFC_FIR_OP_CW0 << IFC_NAND_FIR0_OP0_SHIFT) | 380 - (IFC_FIR_OP_CA0 << IFC_NAND_FIR0_OP1_SHIFT) | 381 - (IFC_FIR_OP_RA0 << IFC_NAND_FIR0_OP2_SHIFT) | 382 - (IFC_FIR_OP_CMD1 << IFC_NAND_FIR0_OP3_SHIFT) | 383 - (IFC_FIR_OP_RBCD << 
-				IFC_NAND_FIR0_OP4_SHIFT),
-			    &ifc->ifc_nand.nand_fir0);
-		iowrite32be(0x0, &ifc->ifc_nand.nand_fir1);
+		ifc_out32((IFC_FIR_OP_CW0 << IFC_NAND_FIR0_OP0_SHIFT) |
+			  (IFC_FIR_OP_CA0 << IFC_NAND_FIR0_OP1_SHIFT) |
+			  (IFC_FIR_OP_RA0 << IFC_NAND_FIR0_OP2_SHIFT) |
+			  (IFC_FIR_OP_CMD1 << IFC_NAND_FIR0_OP3_SHIFT) |
+			  (IFC_FIR_OP_RBCD << IFC_NAND_FIR0_OP4_SHIFT),
+			  &ifc->ifc_nand.nand_fir0);
+		ifc_out32(0x0, &ifc->ifc_nand.nand_fir1);
 
-		iowrite32be((NAND_CMD_READ0 << IFC_NAND_FCR0_CMD0_SHIFT) |
-			    (NAND_CMD_READSTART << IFC_NAND_FCR0_CMD1_SHIFT),
-			    &ifc->ifc_nand.nand_fcr0);
+		ifc_out32((NAND_CMD_READ0 << IFC_NAND_FCR0_CMD0_SHIFT) |
+			  (NAND_CMD_READSTART << IFC_NAND_FCR0_CMD1_SHIFT),
+			  &ifc->ifc_nand.nand_fcr0);
 	} else {
-		iowrite32be((IFC_FIR_OP_CW0 << IFC_NAND_FIR0_OP0_SHIFT) |
-			    (IFC_FIR_OP_CA0 << IFC_NAND_FIR0_OP1_SHIFT) |
-			    (IFC_FIR_OP_RA0 << IFC_NAND_FIR0_OP2_SHIFT) |
-			    (IFC_FIR_OP_RBCD << IFC_NAND_FIR0_OP3_SHIFT),
-			    &ifc->ifc_nand.nand_fir0);
-		iowrite32be(0x0, &ifc->ifc_nand.nand_fir1);
+		ifc_out32((IFC_FIR_OP_CW0 << IFC_NAND_FIR0_OP0_SHIFT) |
+			  (IFC_FIR_OP_CA0 << IFC_NAND_FIR0_OP1_SHIFT) |
+			  (IFC_FIR_OP_RA0 << IFC_NAND_FIR0_OP2_SHIFT) |
+			  (IFC_FIR_OP_RBCD << IFC_NAND_FIR0_OP3_SHIFT),
+			  &ifc->ifc_nand.nand_fir0);
+		ifc_out32(0x0, &ifc->ifc_nand.nand_fir1);
 
 		if (oob)
-			iowrite32be(NAND_CMD_READOOB <<
-				    IFC_NAND_FCR0_CMD0_SHIFT,
-				    &ifc->ifc_nand.nand_fcr0);
+			ifc_out32(NAND_CMD_READOOB <<
+				  IFC_NAND_FCR0_CMD0_SHIFT,
+				  &ifc->ifc_nand.nand_fcr0);
 		else
-			iowrite32be(NAND_CMD_READ0 <<
-				    IFC_NAND_FCR0_CMD0_SHIFT,
-				    &ifc->ifc_nand.nand_fcr0);
+			ifc_out32(NAND_CMD_READ0 <<
+				  IFC_NAND_FCR0_CMD0_SHIFT,
+				  &ifc->ifc_nand.nand_fcr0);
 	}
 }
···
 	switch (command) {
 	/* READ0 read the entire buffer to use hardware ECC.
 	 */
 	case NAND_CMD_READ0:
-		iowrite32be(0, &ifc->ifc_nand.nand_fbcr);
+		ifc_out32(0, &ifc->ifc_nand.nand_fbcr);
 		set_addr(mtd, 0, page_addr, 0);
 
 		ifc_nand_ctrl->read_bytes = mtd->writesize + mtd->oobsize;
···
 
 	/* READOOB reads only the OOB because no ECC is performed. */
 	case NAND_CMD_READOOB:
-		iowrite32be(mtd->oobsize - column, &ifc->ifc_nand.nand_fbcr);
+		ifc_out32(mtd->oobsize - column, &ifc->ifc_nand.nand_fbcr);
 		set_addr(mtd, column, page_addr, 1);
 
 		ifc_nand_ctrl->read_bytes = mtd->writesize + mtd->oobsize;
···
 		if (command == NAND_CMD_PARAM)
 			timing = IFC_FIR_OP_RBCD;
 
-		iowrite32be((IFC_FIR_OP_CW0 << IFC_NAND_FIR0_OP0_SHIFT) |
-			    (IFC_FIR_OP_UA << IFC_NAND_FIR0_OP1_SHIFT) |
-			    (timing << IFC_NAND_FIR0_OP2_SHIFT),
-			    &ifc->ifc_nand.nand_fir0);
-		iowrite32be(command << IFC_NAND_FCR0_CMD0_SHIFT,
-			    &ifc->ifc_nand.nand_fcr0);
-		iowrite32be(column, &ifc->ifc_nand.row3);
+		ifc_out32((IFC_FIR_OP_CW0 << IFC_NAND_FIR0_OP0_SHIFT) |
+			  (IFC_FIR_OP_UA << IFC_NAND_FIR0_OP1_SHIFT) |
+			  (timing << IFC_NAND_FIR0_OP2_SHIFT),
+			  &ifc->ifc_nand.nand_fir0);
+		ifc_out32(command << IFC_NAND_FCR0_CMD0_SHIFT,
+			  &ifc->ifc_nand.nand_fcr0);
+		ifc_out32(column, &ifc->ifc_nand.row3);
 
 		/*
 		 * although currently it's 8 bytes for READID, we always read
 		 * the maximum 256 bytes(for PARAM)
 		 */
-		iowrite32be(256, &ifc->ifc_nand.nand_fbcr);
+		ifc_out32(256, &ifc->ifc_nand.nand_fbcr);
 		ifc_nand_ctrl->read_bytes = 256;
 
 		set_addr(mtd, 0, 0, 0);
···
 
 	/* ERASE2 uses the block and page address from ERASE1 */
 	case NAND_CMD_ERASE2:
-		iowrite32be((IFC_FIR_OP_CW0 << IFC_NAND_FIR0_OP0_SHIFT) |
-			    (IFC_FIR_OP_RA0 << IFC_NAND_FIR0_OP1_SHIFT) |
-			    (IFC_FIR_OP_CMD1 << IFC_NAND_FIR0_OP2_SHIFT),
-			    &ifc->ifc_nand.nand_fir0);
+		ifc_out32((IFC_FIR_OP_CW0 << IFC_NAND_FIR0_OP0_SHIFT) |
+			  (IFC_FIR_OP_RA0 << IFC_NAND_FIR0_OP1_SHIFT) |
+			  (IFC_FIR_OP_CMD1 << IFC_NAND_FIR0_OP2_SHIFT),
+			  &ifc->ifc_nand.nand_fir0);
 
-		iowrite32be((NAND_CMD_ERASE1 << IFC_NAND_FCR0_CMD0_SHIFT) |
-			    (NAND_CMD_ERASE2 << IFC_NAND_FCR0_CMD1_SHIFT),
-			    &ifc->ifc_nand.nand_fcr0);
+		ifc_out32((NAND_CMD_ERASE1 << IFC_NAND_FCR0_CMD0_SHIFT) |
+			  (NAND_CMD_ERASE2 << IFC_NAND_FCR0_CMD1_SHIFT),
+			  &ifc->ifc_nand.nand_fcr0);
 
-		iowrite32be(0, &ifc->ifc_nand.nand_fbcr);
+		ifc_out32(0, &ifc->ifc_nand.nand_fbcr);
 		ifc_nand_ctrl->read_bytes = 0;
 		fsl_ifc_run_command(mtd);
 		return;
···
 			(NAND_CMD_STATUS << IFC_NAND_FCR0_CMD1_SHIFT) |
 			(NAND_CMD_PAGEPROG << IFC_NAND_FCR0_CMD2_SHIFT);
 
-			iowrite32be(
-				(IFC_FIR_OP_CW0 << IFC_NAND_FIR0_OP0_SHIFT) |
-				(IFC_FIR_OP_CA0 << IFC_NAND_FIR0_OP1_SHIFT) |
-				(IFC_FIR_OP_RA0 << IFC_NAND_FIR0_OP2_SHIFT) |
-				(IFC_FIR_OP_WBCD << IFC_NAND_FIR0_OP3_SHIFT) |
-				(IFC_FIR_OP_CMD2 << IFC_NAND_FIR0_OP4_SHIFT),
-				&ifc->ifc_nand.nand_fir0);
-			iowrite32be(
-				(IFC_FIR_OP_CW1 << IFC_NAND_FIR1_OP5_SHIFT) |
-				(IFC_FIR_OP_RDSTAT <<
-					IFC_NAND_FIR1_OP6_SHIFT) |
-				(IFC_FIR_OP_NOP << IFC_NAND_FIR1_OP7_SHIFT),
-				&ifc->ifc_nand.nand_fir1);
+			ifc_out32(
+				(IFC_FIR_OP_CW0 << IFC_NAND_FIR0_OP0_SHIFT) |
+				(IFC_FIR_OP_CA0 << IFC_NAND_FIR0_OP1_SHIFT) |
+				(IFC_FIR_OP_RA0 << IFC_NAND_FIR0_OP2_SHIFT) |
+				(IFC_FIR_OP_WBCD << IFC_NAND_FIR0_OP3_SHIFT) |
+				(IFC_FIR_OP_CMD2 << IFC_NAND_FIR0_OP4_SHIFT),
+				&ifc->ifc_nand.nand_fir0);
+			ifc_out32(
+				(IFC_FIR_OP_CW1 << IFC_NAND_FIR1_OP5_SHIFT) |
+				(IFC_FIR_OP_RDSTAT << IFC_NAND_FIR1_OP6_SHIFT) |
+				(IFC_FIR_OP_NOP << IFC_NAND_FIR1_OP7_SHIFT),
+				&ifc->ifc_nand.nand_fir1);
 		} else {
 			nand_fcr0 = ((NAND_CMD_PAGEPROG <<
 					IFC_NAND_FCR0_CMD1_SHIFT) |
···
 				(NAND_CMD_STATUS <<
 					IFC_NAND_FCR0_CMD3_SHIFT));
 
-			iowrite32be(
+			ifc_out32(
 				(IFC_FIR_OP_CW0 << IFC_NAND_FIR0_OP0_SHIFT) |
 				(IFC_FIR_OP_CMD2 << IFC_NAND_FIR0_OP1_SHIFT) |
 				(IFC_FIR_OP_CA0 << IFC_NAND_FIR0_OP2_SHIFT) |
 				(IFC_FIR_OP_RA0 << IFC_NAND_FIR0_OP3_SHIFT) |
 				(IFC_FIR_OP_WBCD << IFC_NAND_FIR0_OP4_SHIFT),
 				&ifc->ifc_nand.nand_fir0);
-			iowrite32be(
-				(IFC_FIR_OP_CMD1 << IFC_NAND_FIR1_OP5_SHIFT) |
-				(IFC_FIR_OP_CW3 << IFC_NAND_FIR1_OP6_SHIFT) |
-				(IFC_FIR_OP_RDSTAT <<
-					IFC_NAND_FIR1_OP7_SHIFT) |
-				(IFC_FIR_OP_NOP << IFC_NAND_FIR1_OP8_SHIFT),
-				&ifc->ifc_nand.nand_fir1);
+			ifc_out32(
+				(IFC_FIR_OP_CMD1 << IFC_NAND_FIR1_OP5_SHIFT) |
+				(IFC_FIR_OP_CW3 << IFC_NAND_FIR1_OP6_SHIFT) |
+				(IFC_FIR_OP_RDSTAT << IFC_NAND_FIR1_OP7_SHIFT) |
+				(IFC_FIR_OP_NOP << IFC_NAND_FIR1_OP8_SHIFT),
+				&ifc->ifc_nand.nand_fir1);
 
 			if (column >= mtd->writesize)
 				nand_fcr0 |=
···
 				column -= mtd->writesize;
 				ifc_nand_ctrl->oob = 1;
 			}
-			iowrite32be(nand_fcr0, &ifc->ifc_nand.nand_fcr0);
+			ifc_out32(nand_fcr0, &ifc->ifc_nand.nand_fcr0);
 			set_addr(mtd, column, page_addr, ifc_nand_ctrl->oob);
 			return;
 		}
···
 	/* PAGEPROG reuses all of the setup from SEQIN and adds the length */
 	case NAND_CMD_PAGEPROG: {
 		if (ifc_nand_ctrl->oob) {
-			iowrite32be(ifc_nand_ctrl->index -
-				    ifc_nand_ctrl->column,
-				    &ifc->ifc_nand.nand_fbcr);
+			ifc_out32(ifc_nand_ctrl->index -
+				  ifc_nand_ctrl->column,
+				  &ifc->ifc_nand.nand_fbcr);
 		} else {
-			iowrite32be(0, &ifc->ifc_nand.nand_fbcr);
+			ifc_out32(0, &ifc->ifc_nand.nand_fbcr);
 		}
 
 		fsl_ifc_run_command(mtd);
 		return;
 	}
 
-	case NAND_CMD_STATUS:
-		iowrite32be((IFC_FIR_OP_CW0 << IFC_NAND_FIR0_OP0_SHIFT) |
-			    (IFC_FIR_OP_RB << IFC_NAND_FIR0_OP1_SHIFT),
-			    &ifc->ifc_nand.nand_fir0);
-		iowrite32be(NAND_CMD_STATUS << IFC_NAND_FCR0_CMD0_SHIFT,
-			    &ifc->ifc_nand.nand_fcr0);
-		iowrite32be(1, &ifc->ifc_nand.nand_fbcr);
+	case NAND_CMD_STATUS: {
+		void __iomem *addr;
+
+		ifc_out32((IFC_FIR_OP_CW0 << IFC_NAND_FIR0_OP0_SHIFT) |
+			  (IFC_FIR_OP_RB << IFC_NAND_FIR0_OP1_SHIFT),
+			  &ifc->ifc_nand.nand_fir0);
+		ifc_out32(NAND_CMD_STATUS << IFC_NAND_FCR0_CMD0_SHIFT,
+			  &ifc->ifc_nand.nand_fcr0);
+		ifc_out32(1, &ifc->ifc_nand.nand_fbcr);
 		set_addr(mtd, 0, 0, 0);
 		ifc_nand_ctrl->read_bytes = 1;
···
 		 * The chip always seems to report that it is
 		 * write-protected, even when it is not.
 		 */
+		addr = ifc_nand_ctrl->addr;
 		if (chip->options & NAND_BUSWIDTH_16)
-			setbits16(ifc_nand_ctrl->addr, NAND_STATUS_WP);
+			ifc_out16(ifc_in16(addr) | (NAND_STATUS_WP), addr);
 		else
-			setbits8(ifc_nand_ctrl->addr, NAND_STATUS_WP);
+			ifc_out8(ifc_in8(addr) | (NAND_STATUS_WP), addr);
 		return;
+	}
 
 	case NAND_CMD_RESET:
-		iowrite32be(IFC_FIR_OP_CW0 << IFC_NAND_FIR0_OP0_SHIFT,
-			    &ifc->ifc_nand.nand_fir0);
-		iowrite32be(NAND_CMD_RESET << IFC_NAND_FCR0_CMD0_SHIFT,
-			    &ifc->ifc_nand.nand_fcr0);
+		ifc_out32(IFC_FIR_OP_CW0 << IFC_NAND_FIR0_OP0_SHIFT,
+			  &ifc->ifc_nand.nand_fir0);
+		ifc_out32(NAND_CMD_RESET << IFC_NAND_FCR0_CMD0_SHIFT,
+			  &ifc->ifc_nand.nand_fcr0);
 		fsl_ifc_run_command(mtd);
 		return;
···
 	 */
 	if (ifc_nand_ctrl->index < ifc_nand_ctrl->read_bytes) {
 		offset = ifc_nand_ctrl->index++;
-		return in_8(ifc_nand_ctrl->addr + offset);
+		return ifc_in8(ifc_nand_ctrl->addr + offset);
 	}
 
 	dev_err(priv->dev, "%s: beyond end of buffer\n", __func__);
···
 	 * next byte.
 	 */
 	if (ifc_nand_ctrl->index < ifc_nand_ctrl->read_bytes) {
-		data = in_be16(ifc_nand_ctrl->addr + ifc_nand_ctrl->index);
+		data = ifc_in16(ifc_nand_ctrl->addr + ifc_nand_ctrl->index);
 		ifc_nand_ctrl->index += 2;
 		return (uint8_t) data;
 	}
···
 	u32 nand_fsr;
 
 	/* Use READ_STATUS command, but wait for the device to be ready */
-	iowrite32be((IFC_FIR_OP_CW0 << IFC_NAND_FIR0_OP0_SHIFT) |
-		    (IFC_FIR_OP_RDSTAT << IFC_NAND_FIR0_OP1_SHIFT),
-		    &ifc->ifc_nand.nand_fir0);
-	iowrite32be(NAND_CMD_STATUS << IFC_NAND_FCR0_CMD0_SHIFT,
-		    &ifc->ifc_nand.nand_fcr0);
-	iowrite32be(1, &ifc->ifc_nand.nand_fbcr);
+	ifc_out32((IFC_FIR_OP_CW0 << IFC_NAND_FIR0_OP0_SHIFT) |
+		  (IFC_FIR_OP_RDSTAT << IFC_NAND_FIR0_OP1_SHIFT),
+		  &ifc->ifc_nand.nand_fir0);
+	ifc_out32(NAND_CMD_STATUS << IFC_NAND_FCR0_CMD0_SHIFT,
+		  &ifc->ifc_nand.nand_fcr0);
+	ifc_out32(1, &ifc->ifc_nand.nand_fbcr);
 	set_addr(mtd, 0, 0, 0);
 	ifc_nand_ctrl->read_bytes = 1;
 
 	fsl_ifc_run_command(mtd);
 
-	nand_fsr = ioread32be(&ifc->ifc_nand.nand_fsr);
+	nand_fsr = ifc_in32(&ifc->ifc_nand.nand_fsr);
 
 	/*
 	 * The chip always seems to report that it is
···
 	uint32_t cs = priv->bank;
 
 	/* Save CSOR and CSOR_ext */
-	csor = ioread32be(&ifc->csor_cs[cs].csor);
-	csor_ext = ioread32be(&ifc->csor_cs[cs].csor_ext);
+	csor = ifc_in32(&ifc->csor_cs[cs].csor);
+	csor_ext = ifc_in32(&ifc->csor_cs[cs].csor_ext);
 
 	/* chage PageSize 8K and SpareSize 1K*/
 	csor_8k = (csor & ~(CSOR_NAND_PGS_MASK)) | 0x0018C000;
-	iowrite32be(csor_8k, &ifc->csor_cs[cs].csor);
-	iowrite32be(0x0000400, &ifc->csor_cs[cs].csor_ext);
+	ifc_out32(csor_8k, &ifc->csor_cs[cs].csor);
+	ifc_out32(0x0000400, &ifc->csor_cs[cs].csor_ext);
 
 	/* READID */
-	iowrite32be((IFC_FIR_OP_CW0 << IFC_NAND_FIR0_OP0_SHIFT) |
-		    (IFC_FIR_OP_UA << IFC_NAND_FIR0_OP1_SHIFT) |
-		    (IFC_FIR_OP_RB << IFC_NAND_FIR0_OP2_SHIFT),
-		    &ifc->ifc_nand.nand_fir0);
-	iowrite32be(NAND_CMD_READID << IFC_NAND_FCR0_CMD0_SHIFT,
-		    &ifc->ifc_nand.nand_fcr0);
-	iowrite32be(0x0, &ifc->ifc_nand.row3);
+	ifc_out32((IFC_FIR_OP_CW0 << IFC_NAND_FIR0_OP0_SHIFT) |
+		  (IFC_FIR_OP_UA << IFC_NAND_FIR0_OP1_SHIFT) |
+		  (IFC_FIR_OP_RB << IFC_NAND_FIR0_OP2_SHIFT),
+		  &ifc->ifc_nand.nand_fir0);
+	ifc_out32(NAND_CMD_READID << IFC_NAND_FCR0_CMD0_SHIFT,
+		  &ifc->ifc_nand.nand_fcr0);
+	ifc_out32(0x0, &ifc->ifc_nand.row3);
 
-	iowrite32be(0x0, &ifc->ifc_nand.nand_fbcr);
+	ifc_out32(0x0, &ifc->ifc_nand.nand_fbcr);
 
 	/* Program ROW0/COL0 */
-	iowrite32be(0x0, &ifc->ifc_nand.row0);
-	iowrite32be(0x0, &ifc->ifc_nand.col0);
+	ifc_out32(0x0, &ifc->ifc_nand.row0);
+	ifc_out32(0x0, &ifc->ifc_nand.col0);
 
 	/* set the chip select for NAND Transaction */
-	iowrite32be(cs << IFC_NAND_CSEL_SHIFT, &ifc->ifc_nand.nand_csel);
+	ifc_out32(cs << IFC_NAND_CSEL_SHIFT, &ifc->ifc_nand.nand_csel);
 
 	/* start read seq */
-	iowrite32be(IFC_NAND_SEQ_STRT_FIR_STRT, &ifc->ifc_nand.nandseq_strt);
+	ifc_out32(IFC_NAND_SEQ_STRT_FIR_STRT, &ifc->ifc_nand.nandseq_strt);
 
 	/* wait for command complete flag or timeout */
 	wait_event_timeout(ctrl->nand_wait, ctrl->nand_stat,
···
 		printk(KERN_ERR "fsl-ifc: Failed to Initialise SRAM\n");
 
 	/* Restore CSOR and CSOR_ext */
-	iowrite32be(csor, &ifc->csor_cs[cs].csor);
-	iowrite32be(csor_ext, &ifc->csor_cs[cs].csor_ext);
+	ifc_out32(csor, &ifc->csor_cs[cs].csor);
+	ifc_out32(csor_ext, &ifc->csor_cs[cs].csor_ext);
 }
 
 static int fsl_ifc_chip_init(struct fsl_ifc_mtd *priv)
···
 
 	/* fill in nand_chip structure */
 	/* set up function call table */
-	if ((ioread32be(&ifc->cspr_cs[priv->bank].cspr)) & CSPR_PORT_SIZE_16)
+	if ((ifc_in32(&ifc->cspr_cs[priv->bank].cspr)) & CSPR_PORT_SIZE_16)
 		chip->read_byte = fsl_ifc_read_byte16;
 	else
 		chip->read_byte = fsl_ifc_read_byte;
···
 	chip->bbt_td = &bbt_main_descr;
 	chip->bbt_md = &bbt_mirror_descr;
 
-	iowrite32be(0x0, &ifc->ifc_nand.ncfgr);
+	ifc_out32(0x0, &ifc->ifc_nand.ncfgr);
 
 	/* set up nand options */
 	chip->bbt_options = NAND_BBT_USE_FLASH;
 	chip->options = NAND_NO_SUBPAGE_WRITE;
 
-	if (ioread32be(&ifc->cspr_cs[priv->bank].cspr) & CSPR_PORT_SIZE_16) {
+	if (ifc_in32(&ifc->cspr_cs[priv->bank].cspr) & CSPR_PORT_SIZE_16) {
 		chip->read_byte = fsl_ifc_read_byte16;
 		chip->options |= NAND_BUSWIDTH_16;
 	} else {
···
 	chip->ecc.read_page = fsl_ifc_read_page;
 	chip->ecc.write_page = fsl_ifc_write_page;
 
-	csor = ioread32be(&ifc->csor_cs[priv->bank].csor);
+	csor = ifc_in32(&ifc->csor_cs[priv->bank].csor);
 
 	/* Hardware generates ECC per 512 Bytes */
 	chip->ecc.size = 512;
···
 static int match_bank(struct fsl_ifc_regs __iomem *ifc, int bank,
 		      phys_addr_t addr)
 {
-	u32 cspr = ioread32be(&ifc->cspr_cs[bank].cspr);
+	u32 cspr = ifc_in32(&ifc->cspr_cs[bank].cspr);
 
 	if (!(cspr & CSPR_V))
 		return 0;
···
 
 	dev_set_drvdata(priv->dev, priv);
 
-	iowrite32be(IFC_NAND_EVTER_EN_OPC_EN |
-		    IFC_NAND_EVTER_EN_FTOER_EN |
-		    IFC_NAND_EVTER_EN_WPER_EN,
-		    &ifc->ifc_nand.nand_evter_en);
+	ifc_out32(IFC_NAND_EVTER_EN_OPC_EN |
+		  IFC_NAND_EVTER_EN_FTOER_EN |
+		  IFC_NAND_EVTER_EN_WPER_EN,
+		  &ifc->ifc_nand.nand_evter_en);
 
 	/* enable NAND Machine Interrupts */
-	iowrite32be(IFC_NAND_EVTER_INTR_OPCIR_EN |
-		    IFC_NAND_EVTER_INTR_FTOERIR_EN |
-		    IFC_NAND_EVTER_INTR_WPERIR_EN,
-		    &ifc->ifc_nand.nand_evter_intr_en);
+	ifc_out32(IFC_NAND_EVTER_INTR_OPCIR_EN |
+		  IFC_NAND_EVTER_INTR_FTOERIR_EN |
+		  IFC_NAND_EVTER_INTR_WPERIR_EN,
+		  &ifc->ifc_nand.nand_evter_intr_en);
 	priv->mtd.name = kasprintf(GFP_KERNEL, "%llx.flash", (u64)res.start);
 	if (!priv->mtd.name) {
 		ret = -ENOMEM;
+24 -22
drivers/tty/hvc/hvsi.c
···
 {
 	struct hvsi_control *header = (struct hvsi_control *)packet;
 
-	switch (header->verb) {
+	switch (be16_to_cpu(header->verb)) {
 	case VSV_MODEM_CTL_UPDATE:
-		if ((header->word & HVSI_TSCD) == 0) {
+		if ((be32_to_cpu(header->word) & HVSI_TSCD) == 0) {
 			/* CD went away; no more connection */
 			pr_debug("hvsi%i: CD dropped\n", hp->index);
 			hp->mctrl &= TIOCM_CD;
···
 static void hvsi_recv_response(struct hvsi_struct *hp, uint8_t *packet)
 {
 	struct hvsi_query_response *resp = (struct hvsi_query_response *)packet;
+	uint32_t mctrl_word;
 
 	switch (hp->state) {
 	case HVSI_WAIT_FOR_VER_RESPONSE:
···
 		break;
 	case HVSI_WAIT_FOR_MCTRL_RESPONSE:
 		hp->mctrl = 0;
-		if (resp->u.mctrl_word & HVSI_TSDTR)
+		mctrl_word = be32_to_cpu(resp->u.mctrl_word);
+		if (mctrl_word & HVSI_TSDTR)
 			hp->mctrl |= TIOCM_DTR;
-		if (resp->u.mctrl_word & HVSI_TSCD)
+		if (mctrl_word & HVSI_TSCD)
 			hp->mctrl |= TIOCM_CD;
 		__set_state(hp, HVSI_OPEN);
 		break;
···
 
 	packet.hdr.type = VS_QUERY_RESPONSE_PACKET_HEADER;
 	packet.hdr.len = sizeof(struct hvsi_query_response);
-	packet.hdr.seqno = atomic_inc_return(&hp->seqno);
-	packet.verb = VSV_SEND_VERSION_NUMBER;
+	packet.hdr.seqno = cpu_to_be16(atomic_inc_return(&hp->seqno));
+	packet.verb = cpu_to_be16(VSV_SEND_VERSION_NUMBER);
 	packet.u.version = HVSI_VERSION;
-	packet.query_seqno = query_seqno+1;
+	packet.query_seqno = cpu_to_be16(query_seqno+1);
 
 	pr_debug("%s: sending %i bytes\n", __func__, packet.hdr.len);
 	dbg_dump_hex((uint8_t*)&packet, packet.hdr.len);
···
 
 	switch (hp->state) {
 	case HVSI_WAIT_FOR_VER_QUERY:
-		hvsi_version_respond(hp, query->hdr.seqno);
+		hvsi_version_respond(hp, be16_to_cpu(query->hdr.seqno));
 		__set_state(hp, HVSI_OPEN);
 		break;
 	default:
···
 
 	packet.hdr.type = VS_QUERY_PACKET_HEADER;
 	packet.hdr.len = sizeof(struct hvsi_query);
-	packet.hdr.seqno = atomic_inc_return(&hp->seqno);
-	packet.verb = verb;
+	packet.hdr.seqno = cpu_to_be16(atomic_inc_return(&hp->seqno));
+	packet.verb = cpu_to_be16(verb);
 
 	pr_debug("%s: sending %i bytes\n", __func__, packet.hdr.len);
 	dbg_dump_hex((uint8_t*)&packet, packet.hdr.len);
···
 	struct hvsi_control packet __ALIGNED__;
 	int wrote;
 
-	packet.hdr.type = VS_CONTROL_PACKET_HEADER,
-	packet.hdr.seqno = atomic_inc_return(&hp->seqno);
+	packet.hdr.type = VS_CONTROL_PACKET_HEADER;
+	packet.hdr.seqno = cpu_to_be16(atomic_inc_return(&hp->seqno));
 	packet.hdr.len = sizeof(struct hvsi_control);
-	packet.verb = VSV_SET_MODEM_CTL;
-	packet.mask = HVSI_TSDTR;
+	packet.verb = cpu_to_be16(VSV_SET_MODEM_CTL);
+	packet.mask = cpu_to_be32(HVSI_TSDTR);
 
 	if (mctrl & TIOCM_DTR)
-		packet.word = HVSI_TSDTR;
+		packet.word = cpu_to_be32(HVSI_TSDTR);
 
 	pr_debug("%s: sending %i bytes\n", __func__, packet.hdr.len);
 	dbg_dump_hex((uint8_t*)&packet, packet.hdr.len);
···
 	BUG_ON(count > HVSI_MAX_OUTGOING_DATA);
 
 	packet.hdr.type = VS_DATA_PACKET_HEADER;
-	packet.hdr.seqno = atomic_inc_return(&hp->seqno);
+	packet.hdr.seqno = cpu_to_be16(atomic_inc_return(&hp->seqno));
 	packet.hdr.len = count + sizeof(struct hvsi_header);
 	memcpy(&packet.data, buf, count);
···
 	struct hvsi_control packet __ALIGNED__;
 
 	packet.hdr.type = VS_CONTROL_PACKET_HEADER;
-	packet.hdr.seqno = atomic_inc_return(&hp->seqno);
+	packet.hdr.seqno = cpu_to_be16(atomic_inc_return(&hp->seqno));
 	packet.hdr.len = 6;
-	packet.verb = VSV_CLOSE_PROTOCOL;
+	packet.verb = cpu_to_be16(VSV_CLOSE_PROTOCOL);
 
 	pr_debug("%s: sending %i bytes\n", __func__, packet.hdr.len);
 	dbg_dump_hex((uint8_t*)&packet, packet.hdr.len);
···
 	/* search device tree for vty nodes */
 	for_each_compatible_node(vty, "serial", "hvterm-protocol") {
 		struct hvsi_struct *hp;
-		const uint32_t *vtermno, *irq;
+		const __be32 *vtermno, *irq;
 
 		vtermno = of_get_property(vty, "reg", NULL);
 		irq = of_get_property(vty, "interrupts", NULL);
···
 		hp->index = hvsi_count;
 		hp->inbuf_end = hp->inbuf;
 		hp->state = HVSI_CLOSED;
-		hp->vtermno = *vtermno;
-		hp->virq = irq_create_mapping(NULL, irq[0]);
+		hp->vtermno = be32_to_cpup(vtermno);
+		hp->virq = irq_create_mapping(NULL, be32_to_cpup(irq));
 		if (hp->virq == 0) {
 			printk(KERN_ERR "%s: couldn't create irq mapping for 0x%x\n",
-				__func__, irq[0]);
+				__func__, be32_to_cpup(irq));
 			tty_port_destroy(&hp->port);
 			continue;
 		}
+50
include/linux/fsl_ifc.h
···
 
 	u32 nand_stat;
 	wait_queue_head_t nand_wait;
+	bool little_endian;
 };
 
 extern struct fsl_ifc_ctrl *fsl_ifc_ctrl_dev;
 
+static inline u32 ifc_in32(void __iomem *addr)
+{
+	u32 val;
+
+	if (fsl_ifc_ctrl_dev->little_endian)
+		val = ioread32(addr);
+	else
+		val = ioread32be(addr);
+
+	return val;
+}
+
+static inline u16 ifc_in16(void __iomem *addr)
+{
+	u16 val;
+
+	if (fsl_ifc_ctrl_dev->little_endian)
+		val = ioread16(addr);
+	else
+		val = ioread16be(addr);
+
+	return val;
+}
+
+static inline u8 ifc_in8(void __iomem *addr)
+{
+	return ioread8(addr);
+}
+
+static inline void ifc_out32(u32 val, void __iomem *addr)
+{
+	if (fsl_ifc_ctrl_dev->little_endian)
+		iowrite32(val, addr);
+	else
+		iowrite32be(val, addr);
+}
+
+static inline void ifc_out16(u16 val, void __iomem *addr)
+{
+	if (fsl_ifc_ctrl_dev->little_endian)
+		iowrite16(val, addr);
+	else
+		iowrite16be(val, addr);
+}
+
+static inline void ifc_out8(u8 val, void __iomem *addr)
+{
+	iowrite8(val, addr);
+}
 
 #endif /* __ASM_FSL_IFC_H */
+10
include/misc/cxl.h
···
 ssize_t cxl_fd_read(struct file *file, char __user *buf, size_t count,
 			loff_t *off);
 
+/*
+ * For EEH, a driver may want to assert a PERST will reload the same image
+ * from flash into the FPGA.
+ *
+ * This is a property of the entire adapter, not a single AFU, so drivers
+ * should set this property with care!
+ */
+void cxl_perst_reloads_same_image(struct cxl_afu *afu,
+				  bool perst_reloads_same_image);
+
 #endif /* _MISC_CXL_H */
+3 -1
include/uapi/misc/cxl.h
···
 
 #define CXL_START_WORK_AMR		0x0000000000000001ULL
 #define CXL_START_WORK_NUM_IRQS		0x0000000000000002ULL
+#define CXL_START_WORK_ERR_FF		0x0000000000000004ULL
 #define CXL_START_WORK_ALL		(CXL_START_WORK_AMR |\
-					 CXL_START_WORK_NUM_IRQS)
+					 CXL_START_WORK_NUM_IRQS |\
+					 CXL_START_WORK_ERR_FF)
 
 
 /* Possible modes that an afu can be in */
+2 -1
tools/testing/selftests/powerpc/mm/Makefile
···
 	$(MAKE) -C ../
 
 TEST_PROGS := hugetlb_vs_thp_test subpage_prot
+TEST_FILES := tempfile
 
-all: $(TEST_PROGS) tempfile
+all: $(TEST_PROGS) $(TEST_FILES)
 
 $(TEST_PROGS): ../harness.c
 
+14 -1
tools/testing/selftests/seccomp/seccomp_bpf.c
···
 #include <linux/filter.h>
 #include <sys/prctl.h>
 #include <sys/ptrace.h>
+#include <sys/types.h>
 #include <sys/user.h>
 #include <linux/prctl.h>
 #include <linux/ptrace.h>
···
 };
 #endif
 
+#if __BYTE_ORDER == __LITTLE_ENDIAN
 #define syscall_arg(_n) (offsetof(struct seccomp_data, args[_n]))
+#elif __BYTE_ORDER == __BIG_ENDIAN
+#define syscall_arg(_n) (offsetof(struct seccomp_data, args[_n]) + sizeof(__u32))
+#else
+#error "wut? Unknown __BYTE_ORDER?!"
+#endif
 
 #define SIBLING_EXIT_UNKILLED	0xbadbeef
 #define SIBLING_EXIT_FAILURE	0xbadface
···
 # define ARCH_REGS	struct user_pt_regs
 # define SYSCALL_NUM	regs[8]
 # define SYSCALL_RET	regs[0]
+#elif defined(__powerpc__)
+# define ARCH_REGS	struct pt_regs
+# define SYSCALL_NUM	gpr[0]
+# define SYSCALL_RET	gpr[3]
 #else
 # error "Do not know how to find your architecture's registers and syscalls"
 #endif
···
 	ret = ptrace(PTRACE_GETREGSET, tracee, NT_PRSTATUS, &iov);
 	EXPECT_EQ(0, ret);
 
-#if defined(__x86_64__) || defined(__i386__) || defined(__aarch64__)
+#if defined(__x86_64__) || defined(__i386__) || defined(__aarch64__) || defined(__powerpc__)
 	{
 		regs.SYSCALL_NUM = syscall;
 	}
···
 # define __NR_seccomp 383
 # elif defined(__aarch64__)
 #  define __NR_seccomp 277
+# elif defined(__powerpc__)
+#  define __NR_seccomp 358
 # else
 #  warning "seccomp syscall number unknown for this architecture"
 #  define __NR_seccomp 0xffff