Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'powerpc-6.15-1' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux

Pull powerpc updates from Madhavan Srinivasan:

- Remove support for IBM Cell Blades

- SMP support for the Microwatt platform

- Support for inline static calls on PPC32

- Enable PMU selftests for the Power11 platform

- Enable hardware trace macro (HTM) hcall support

- Support for limited address mode capability

- Increase the RMA size from 512 MB to 768 MB to handle fadump

- Misc fixes and cleanups

Thanks to Abhishek Dubey, Amit Machhiwal, Andreas Schwab, Arnd Bergmann,
Athira Rajeev, Avnish Chouhan, Christophe Leroy, Disha Goel, Donet Tom,
Gaurav Batra, Gautam Menghani, Hari Bathini, Kajol Jain, Kees Cook,
Mahesh Salgaonkar, Michael Ellerman, Paul Mackerras, Ritesh Harjani
(IBM), Sathvika Vasireddy, Segher Boessenkool, Sourabh Jain, Vaibhav
Jain, and Venkat Rao Bagalkote.

* tag 'powerpc-6.15-1' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux: (61 commits)
powerpc/kexec: fix physical address calculation in clear_utlb_entry()
crypto: powerpc: Mark ghashp8-ppc.o as an OBJECT_FILES_NON_STANDARD
powerpc: Fix 'intra_function_call not a direct call' warning
powerpc/perf: Fix ref-counting on the PMU 'vpa_pmu'
KVM: PPC: Enable CAP_SPAPR_TCE_VFIO on pSeries KVM guests
powerpc/prom_init: Fixup missing #size-cells on PowerBook6,7
powerpc/microwatt: Add SMP support
powerpc: Define config option for processors with broadcast TLBIE
powerpc/microwatt: Define an idle power-save function
powerpc/microwatt: Device-tree updates
powerpc/microwatt: Select COMMON_CLK in order to get the clock framework
net: toshiba: Remove reference to PPC_IBM_CELL_BLADE
net: spider_net: Remove powerpc Cell driver
cpufreq: ppc_cbe: Remove powerpc Cell driver
genirq: Remove IRQ_EDGE_EOI_HANDLER
docs: Remove reference to removed CBE_CPUFREQ_SPU_GOVERNOR
powerpc: Remove UDBG_RTAS_CONSOLE
powerpc/io: Use standard barrier macros in io.c
powerpc/io: Rename _insw_ns() etc.
powerpc/io: Use generic raw accessors
...

+1043 -12714
+11
CREDITS
···
2187 2187  D: various lockdep annotations, autofs and other random bugfixes
2188 2188  S: Prague, Czech Republic
2189 2189
2190 +      N: Ishizaki Kou
2191 +      E: kou.ishizaki@toshiba.co.jp
2192 +      D: Spidernet driver for PowerPC Cell platforms
2193 +
2190 2194  N: Gene Kozin
2191 2195  E: 74604.152@compuserve.com
2192 2196  W: https://www.sangoma.com
···
2200 2196  S: Markham, Ontario
2201 2197  S: L3R 8B2
2202 2198  S: Canada
2199 +
2200 +      N: Christian Krafft
2201 +      D: PowerPC Cell support
2203 2202
2204 2203  N: Maxim Krasnyansky
2205 2204  E: maxk@qualcomm.com
···
2395 2388  S: ICP vortex GmbH
2396 2389  S: Neckarsulm
2397 2390  S: Germany
2391 +
2392 +      N: Geoff Levand
2393 +      E: geoff@infradead.org
2394 +      D: Spidernet driver for PowerPC Cell platforms
2398 2395
2399 2396  N: Phil Lewis
2400 2397  E: beans@bucket.ualr.edu
+2 -1
Documentation/ABI/testing/sysfs-kernel-fadump
···
55 55  Contact: linuxppc-dev@lists.ozlabs.org
56 56  Description: read/write
57 57      This is a special sysfs file available to setup additional
58 -       parameters to be passed to capture kernel.
58 +       parameters to be passed to capture kernel. For HASH MMU it
59 +       is exported only if RMA size higher than 768MB.
+1 -6
Documentation/admin-guide/kernel-per-CPU-kthreads.rst
···
278 278     due to the rtas_event_scan() function.
279 279     WARNING: Please check your CPU specifications to
280 280     make sure that this is safe on your particular system.
281 -   e.  If running on Cell Processor, build your kernel with
282 -       CBE_CPUFREQ_SPU_GOVERNOR=n to avoid OS jitter from
283 -       spu_gov_work().
284 -       WARNING: Please check your CPU specifications to
285 -       make sure that this is safe on your particular system.
286 -   f.  If running on PowerMAC, build your kernel with
281 +   e.  If running on PowerMAC, build your kernel with
287 282     CONFIG_PMAC_RACKMETER=n to disable the CPU-meter,
288 283     avoiding OS jitter from rackmeter_do_timer().
289 284
+22
Documentation/arch/powerpc/firmware-assisted-dump.rst
···
120 120  e.g.
121 121  # echo 1 > /sys/firmware/opal/mpipl/release_core
122 122
123 +    -- Support for Additional Kernel Arguments in Fadump
124 +    Fadump has a feature that allows passing additional kernel arguments
125 +    to the fadump kernel. This feature was primarily designed to disable
126 +    kernel functionalities that are not required for the fadump kernel
127 +    and to reduce its memory footprint while collecting the dump.
128 +
129 +    Command to Add Additional Kernel Parameters to Fadump:
130 +    e.g.
131 +    # echo "nr_cpus=16" > /sys/kernel/fadump/bootargs_append
132 +
133 +    The above command is sufficient to add additional arguments to fadump.
134 +    An explicit service restart is not required.
135 +
136 +    Command to Retrieve the Additional Fadump Arguments:
137 +    e.g.
138 +    # cat /sys/kernel/fadump/bootargs_append
139 +
140 +    Note: Additional kernel arguments for fadump with HASH MMU is only
141 +          supported if the RMA size is greater than 768 MB. If the RMA
142 +          size is less than 768 MB, the kernel does not export the
143 +          /sys/kernel/fadump/bootargs_append sysfs node.
144 +
123 145  Implementation details:
124 146  -----------------------
125 147
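The bootargs_append interface documented in the hunk above lends itself to a small shell wrapper. This is only a sketch: the function name is made up, and the optional second argument (an alternate node path) exists purely so the snippet can be exercised outside a fadump-enabled system.

```shell
# Hypothetical helper around /sys/kernel/fadump/bootargs_append.
# $1: kernel arguments to append ("nr_cpus=16" etc.)
# $2: optional override of the sysfs node, for testing only.
fadump_append_args() {
    node="${2:-/sys/kernel/fadump/bootargs_append}"
    if [ ! -e "$node" ]; then
        # With HASH MMU and an RMA of 768 MB or less, the node is not exported.
        echo "fadump: bootargs_append not available" >&2
        return 1
    fi
    printf '%s\n' "$1" > "$node"   # takes effect without a service restart
    cat "$node"                    # read back the appended arguments
}
```

On a live system, `fadump_append_args "nr_cpus=16"` is equivalent to the echo/cat pair shown in the documentation above.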
+11
Documentation/arch/powerpc/papr_hcalls.rst
···
289 289  subsequent hcalls to the hypervisor until the hcall is completely serviced
290 290  at which point H_SUCCESS or other error is returned by the hypervisor.
291 291
292 +    **H_HTM**
293 +
294 +    | Input: flags, target, operation (op), op-param1, op-param2, op-param3
295 +    | Out: *dumphtmbufferdata*
296 +    | Return Value: *H_Success,H_Busy,H_LongBusyOrder,H_Partial,H_Parameter,
297 +      H_P2,H_P3,H_P4,H_P5,H_P6,H_State,H_Not_Available,H_Authority*
298 +
299 +    H_HTM supports setup, configuration, control and dumping of Hardware Trace
300 +    Macro (HTM) function and its data. HTM buffer stores tracing data for functions
301 +    like core instruction, core LLAT and nest.
302 +
292 303  References
293 304  ==========
294 305  .. [1] "Power Architecture Platform Reference"
-1
Documentation/networking/device_drivers/ethernet/index.rst
···
55 55  ti/cpsw_switchdev
56 56  ti/am65_nuss_cpsw_switchdev
57 57  ti/tlan
58 -     toshiba/spider_net
59 58  wangxun/txgbe
60 59  wangxun/ngbe
61 60
-202
Documentation/networking/device_drivers/ethernet/toshiba/spider_net.rst
··· 1 - .. SPDX-License-Identifier: GPL-2.0 2 - 3 - =========================== 4 - The Spidernet Device Driver 5 - =========================== 6 - 7 - Written by Linas Vepstas <linas@austin.ibm.com> 8 - 9 - Version of 7 June 2007 10 - 11 - Abstract 12 - ======== 13 - This document sketches the structure of portions of the spidernet 14 - device driver in the Linux kernel tree. The spidernet is a gigabit 15 - ethernet device built into the Toshiba southbridge commonly used 16 - in the SONY Playstation 3 and the IBM QS20 Cell blade. 17 - 18 - The Structure of the RX Ring. 19 - ============================= 20 - The receive (RX) ring is a circular linked list of RX descriptors, 21 - together with three pointers into the ring that are used to manage its 22 - contents. 23 - 24 - The elements of the ring are called "descriptors" or "descrs"; they 25 - describe the received data. This includes a pointer to a buffer 26 - containing the received data, the buffer size, and various status bits. 27 - 28 - There are three primary states that a descriptor can be in: "empty", 29 - "full" and "not-in-use". An "empty" or "ready" descriptor is ready 30 - to receive data from the hardware. A "full" descriptor has data in it, 31 - and is waiting to be emptied and processed by the OS. A "not-in-use" 32 - descriptor is neither empty or full; it is simply not ready. It may 33 - not even have a data buffer in it, or is otherwise unusable. 34 - 35 - During normal operation, on device startup, the OS (specifically, the 36 - spidernet device driver) allocates a set of RX descriptors and RX 37 - buffers. These are all marked "empty", ready to receive data. This 38 - ring is handed off to the hardware, which sequentially fills in the 39 - buffers, and marks them "full". The OS follows up, taking the full 40 - buffers, processing them, and re-marking them empty. 
41 - 42 - This filling and emptying is managed by three pointers, the "head" 43 - and "tail" pointers, managed by the OS, and a hardware current 44 - descriptor pointer (GDACTDPA). The GDACTDPA points at the descr 45 - currently being filled. When this descr is filled, the hardware 46 - marks it full, and advances the GDACTDPA by one. Thus, when there is 47 - flowing RX traffic, every descr behind it should be marked "full", 48 - and everything in front of it should be "empty". If the hardware 49 - discovers that the current descr is not empty, it will signal an 50 - interrupt, and halt processing. 51 - 52 - The tail pointer tails or trails the hardware pointer. When the 53 - hardware is ahead, the tail pointer will be pointing at a "full" 54 - descr. The OS will process this descr, and then mark it "not-in-use", 55 - and advance the tail pointer. Thus, when there is flowing RX traffic, 56 - all of the descrs in front of the tail pointer should be "full", and 57 - all of those behind it should be "not-in-use". When RX traffic is not 58 - flowing, then the tail pointer can catch up to the hardware pointer. 59 - The OS will then note that the current tail is "empty", and halt 60 - processing. 61 - 62 - The head pointer (somewhat mis-named) follows after the tail pointer. 63 - When traffic is flowing, then the head pointer will be pointing at 64 - a "not-in-use" descr. The OS will perform various housekeeping duties 65 - on this descr. This includes allocating a new data buffer and 66 - dma-mapping it so as to make it visible to the hardware. The OS will 67 - then mark the descr as "empty", ready to receive data. Thus, when there 68 - is flowing RX traffic, everything in front of the head pointer should 69 - be "not-in-use", and everything behind it should be "empty". If no 70 - RX traffic is flowing, then the head pointer can catch up to the tail 71 - pointer, at which point the OS will notice that the head descr is 72 - "empty", and it will halt processing. 
73 - 74 - Thus, in an idle system, the GDACTDPA, tail and head pointers will 75 - all be pointing at the same descr, which should be "empty". All of the 76 - other descrs in the ring should be "empty" as well. 77 - 78 - The show_rx_chain() routine will print out the locations of the 79 - GDACTDPA, tail and head pointers. It will also summarize the contents 80 - of the ring, starting at the tail pointer, and listing the status 81 - of the descrs that follow. 82 - 83 - A typical example of the output, for a nearly idle system, might be:: 84 - 85 - net eth1: Total number of descrs=256 86 - net eth1: Chain tail located at descr=20 87 - net eth1: Chain head is at 20 88 - net eth1: HW curr desc (GDACTDPA) is at 21 89 - net eth1: Have 1 descrs with stat=x40800101 90 - net eth1: HW next desc (GDACNEXTDA) is at 22 91 - net eth1: Last 255 descrs with stat=xa0800000 92 - 93 - In the above, the hardware has filled in one descr, number 20. Both 94 - head and tail are pointing at 20, because it has not yet been emptied. 95 - Meanwhile, hw is pointing at 21, which is free. 96 - 97 - The "Have nnn decrs" refers to the descr starting at the tail: in this 98 - case, nnn=1 descr, starting at descr 20. The "Last nnn descrs" refers 99 - to all of the rest of the descrs, from the last status change. The "nnn" 100 - is a count of how many descrs have exactly the same status. 101 - 102 - The status x4... corresponds to "full" and status xa... corresponds 103 - to "empty". The actual value printed is RXCOMST_A. 
104 - 105 - In the device driver source code, a different set of names are 106 - used for these same concepts, so that:: 107 - 108 - "empty" == SPIDER_NET_DESCR_CARDOWNED == 0xa 109 - "full" == SPIDER_NET_DESCR_FRAME_END == 0x4 110 - "not in use" == SPIDER_NET_DESCR_NOT_IN_USE == 0xf 111 - 112 - 113 - The RX RAM full bug/feature 114 - =========================== 115 - 116 - As long as the OS can empty out the RX buffers at a rate faster than 117 - the hardware can fill them, there is no problem. If, for some reason, 118 - the OS fails to empty the RX ring fast enough, the hardware GDACTDPA 119 - pointer will catch up to the head, notice the not-empty condition, 120 - ad stop. However, RX packets may still continue arriving on the wire. 121 - The spidernet chip can save some limited number of these in local RAM. 122 - When this local ram fills up, the spider chip will issue an interrupt 123 - indicating this (GHIINT0STS will show ERRINT, and the GRMFLLINT bit 124 - will be set in GHIINT1STS). When the RX ram full condition occurs, 125 - a certain bug/feature is triggered that has to be specially handled. 126 - This section describes the special handling for this condition. 127 - 128 - When the OS finally has a chance to run, it will empty out the RX ring. 129 - In particular, it will clear the descriptor on which the hardware had 130 - stopped. However, once the hardware has decided that a certain 131 - descriptor is invalid, it will not restart at that descriptor; instead 132 - it will restart at the next descr. This potentially will lead to a 133 - deadlock condition, as the tail pointer will be pointing at this descr, 134 - which, from the OS point of view, is empty; the OS will be waiting for 135 - this descr to be filled. However, the hardware has skipped this descr, 136 - and is filling the next descrs. 
Since the OS doesn't see this, there 137 - is a potential deadlock, with the OS waiting for one descr to fill, 138 - while the hardware is waiting for a different set of descrs to become 139 - empty. 140 - 141 - A call to show_rx_chain() at this point indicates the nature of the 142 - problem. A typical print when the network is hung shows the following:: 143 - 144 - net eth1: Spider RX RAM full, incoming packets might be discarded! 145 - net eth1: Total number of descrs=256 146 - net eth1: Chain tail located at descr=255 147 - net eth1: Chain head is at 255 148 - net eth1: HW curr desc (GDACTDPA) is at 0 149 - net eth1: Have 1 descrs with stat=xa0800000 150 - net eth1: HW next desc (GDACNEXTDA) is at 1 151 - net eth1: Have 127 descrs with stat=x40800101 152 - net eth1: Have 1 descrs with stat=x40800001 153 - net eth1: Have 126 descrs with stat=x40800101 154 - net eth1: Last 1 descrs with stat=xa0800000 155 - 156 - Both the tail and head pointers are pointing at descr 255, which is 157 - marked xa... which is "empty". Thus, from the OS point of view, there 158 - is nothing to be done. In particular, there is the implicit assumption 159 - that everything in front of the "empty" descr must surely also be empty, 160 - as explained in the last section. The OS is waiting for descr 255 to 161 - become non-empty, which, in this case, will never happen. 162 - 163 - The HW pointer is at descr 0. This descr is marked 0x4.. or "full". 164 - Since its already full, the hardware can do nothing more, and thus has 165 - halted processing. Notice that descrs 0 through 254 are all marked 166 - "full", while descr 254 and 255 are empty. (The "Last 1 descrs" is 167 - descr 254, since tail was at 255.) Thus, the system is deadlocked, 168 - and there can be no forward progress; the OS thinks there's nothing 169 - to do, and the hardware has nowhere to put incoming data. 170 - 171 - This bug/feature is worked around with the spider_net_resync_head_ptr() 172 - routine. 
When the driver receives RX interrupts, but an examination 173 - of the RX chain seems to show it is empty, then it is probable that 174 - the hardware has skipped a descr or two (sometimes dozens under heavy 175 - network conditions). The spider_net_resync_head_ptr() subroutine will 176 - search the ring for the next full descr, and the driver will resume 177 - operations there. Since this will leave "holes" in the ring, there 178 - is also a spider_net_resync_tail_ptr() that will skip over such holes. 179 - 180 - As of this writing, the spider_net_resync() strategy seems to work very 181 - well, even under heavy network loads. 182 - 183 - 184 - The TX ring 185 - =========== 186 - The TX ring uses a low-watermark interrupt scheme to make sure that 187 - the TX queue is appropriately serviced for large packet sizes. 188 - 189 - For packet sizes greater than about 1KBytes, the kernel can fill 190 - the TX ring quicker than the device can drain it. Once the ring 191 - is full, the netdev is stopped. When there is room in the ring, 192 - the netdev needs to be reawakened, so that more TX packets are placed 193 - in the ring. The hardware can empty the ring about four times per jiffy, 194 - so its not appropriate to wait for the poll routine to refill, since 195 - the poll routine runs only once per jiffy. The low-watermark mechanism 196 - marks a descr about 1/4th of the way from the bottom of the queue, so 197 - that an interrupt is generated when the descr is processed. This 198 - interrupt wakes up the netdev, which can then refill the queue. 199 - For large packets, this mechanism generates a relatively small number 200 - of interrupts, about 1K/sec. For smaller packets, this will drop to zero 201 - interrupts, as the hardware can empty the queue faster than the kernel 202 - can fill it.
-9
MAINTAINERS
···
22526 22526  F: include/uapi/linux/spi/
22527 22527  F: tools/spi/
22528 22528
22529 -        SPIDERNET NETWORK DRIVER for CELL
22530 -        M: Ishizaki Kou <kou.ishizaki@toshiba.co.jp>
22531 -        M: Geoff Levand <geoff@infradead.org>
22532 -        L: netdev@vger.kernel.org
22533 -        L: linuxppc-dev@lists.ozlabs.org
22534 -        S: Maintained
22535 -        F: Documentation/networking/device_drivers/ethernet/toshiba/spider_net.rst
22536 -        F: drivers/net/ethernet/toshiba/spider_net*
22537 -
22538 22529  SPMI SUBSYSTEM
22539 22530  M: Stephen Boyd <sboyd@kernel.org>
22540 22531  L: linux-kernel@vger.kernel.org
+2 -9
arch/powerpc/Kconfig
···
288 288  select HAVE_STACKPROTECTOR if PPC32 && $(cc-option,$(m32-flag) -mstack-protector-guard=tls -mstack-protector-guard-reg=r2 -mstack-protector-guard-offset=0)
289 289  select HAVE_STACKPROTECTOR if PPC64 && $(cc-option,$(m64-flag) -mstack-protector-guard=tls -mstack-protector-guard-reg=r13 -mstack-protector-guard-offset=0)
290 290  select HAVE_STATIC_CALL if PPC32
291 +    select HAVE_STATIC_CALL_INLINE if PPC32
291 292  select HAVE_SYSCALL_TRACEPOINTS
292 293  select HAVE_VIRT_CPU_ACCOUNTING
293 294  select HAVE_VIRT_CPU_ACCOUNTING_GEN
···
414 413  config PPC_DCR_NATIVE
415 414      bool
416 415
417 -    config PPC_DCR_MMIO
418 -        bool
419 -
420 416  config PPC_DCR
421 417      bool
422 -        depends on PPC_DCR_NATIVE || PPC_DCR_MMIO
418 +        depends on PPC_DCR_NATIVE
423 419      default y
424 420
425 421  config PPC_PCI_OF_BUS_MAP
···
441 443      PCI domains. When this option is enabled then PCI bus numbers are
442 444      PCI domain dependent and each PCI controller on own domain can have
443 445      256 PCI buses, like it is on other Linux architectures.
444 -
445 -    config PPC_OF_PLATFORM_PCI
446 -        bool
447 -        depends on PCI
448 -        depends on PPC64 # not supported on 32 bits yet
449 446
450 447  config ARCH_SUPPORTS_UPROBES
451 448      def_bool y
-7
arch/powerpc/Kconfig.debug
···
216 216      help
217 217        Select this to enable early debugging via the RTAS panel.
218 218
219 -    config PPC_EARLY_DEBUG_RTAS_CONSOLE
220 -        bool "RTAS Console"
221 -        depends on PPC_RTAS
222 -        select UDBG_RTAS_CONSOLE
223 -        help
224 -          Select this to enable early debugging via the RTAS console.
225 -
226 219  config PPC_EARLY_DEBUG_PAS_REALMODE
227 220      bool "PA Semi real mode"
228 221      depends on PPC_PASEMI
-2
arch/powerpc/boot/Makefile
···
173 173  src-plat-$(CONFIG_EPAPR_BOOT) += epapr.c epapr-wrapper.c
174 174  src-plat-$(CONFIG_PPC_PSERIES) += pseries-head.S
175 175  src-plat-$(CONFIG_PPC_POWERNV) += pseries-head.S
176 -    src-plat-$(CONFIG_PPC_IBM_CELL_BLADE) += pseries-head.S
177 176  src-plat-$(CONFIG_MVME7100) += motload-head.S mvme7100.c
178 177
179 178  src-plat-$(CONFIG_PPC_MICROWATT) += fixed-head.S microwatt.c
···
275 276
276 277  image-$(CONFIG_PPC_PSERIES) += zImage.pseries
277 278  image-$(CONFIG_PPC_POWERNV) += zImage.pseries
278 -    image-$(CONFIG_PPC_IBM_CELL_BLADE) += zImage.pseries
279 279  image-$(CONFIG_PPC_PS3) += dtbImage.ps3
280 280  image-$(CONFIG_PPC_CHRP) += zImage.chrp
281 281  image-$(CONFIG_PPC_EFIKA) += zImage.chrp
+94 -13
arch/powerpc/boot/dts/microwatt.dts
··· 1 1 /dts-v1/; 2 + #include <dt-bindings/gpio/gpio.h> 2 3 3 4 / { 4 5 #size-cells = <0x02>; ··· 9 8 10 9 aliases { 11 10 serial0 = &UART0; 11 + ethernet = &enet0; 12 12 }; 13 13 14 14 reserved-memory { ··· 37 35 38 36 ibm,powerpc-cpu-features { 39 37 display-name = "Microwatt"; 40 - isa = <3000>; 38 + isa = <3010>; 41 39 device_type = "cpu-features"; 42 40 compatible = "ibm,powerpc-cpu-features"; 43 41 44 42 mmu-radix { 45 43 isa = <3000>; 46 - usable-privilege = <2>; 44 + usable-privilege = <6>; 45 + os-support = <0>; 47 46 }; 48 47 49 48 little-endian { 50 - isa = <2050>; 51 - usable-privilege = <3>; 49 + isa = <0>; 50 + usable-privilege = <7>; 51 + os-support = <0>; 52 52 hwcap-bit-nr = <1>; 53 53 }; 54 54 55 55 cache-inhibited-large-page { 56 - isa = <2040>; 57 - usable-privilege = <2>; 56 + isa = <0>; 57 + usable-privilege = <6>; 58 + os-support = <0>; 58 59 }; 59 60 60 61 fixed-point-v3 { 61 62 isa = <3000>; 62 - usable-privilege = <3>; 63 + usable-privilege = <7>; 63 64 }; 64 65 65 66 no-execute { 66 - isa = <2010>; 67 + isa = <0x00>; 67 68 usable-privilege = <2>; 69 + os-support = <0>; 68 70 }; 69 71 70 72 floating-point { 73 + hfscr-bit-nr = <0>; 71 74 hwcap-bit-nr = <27>; 72 75 isa = <0>; 73 - usable-privilege = <3>; 76 + usable-privilege = <7>; 77 + hv-support = <1>; 78 + os-support = <0>; 79 + }; 80 + 81 + prefixed-instructions { 82 + hfscr-bit-nr = <13>; 83 + fscr-bit-nr = <13>; 84 + isa = <3010>; 85 + usable-privilege = <7>; 86 + os-support = <1>; 87 + hv-support = <1>; 88 + }; 89 + 90 + tar { 91 + hfscr-bit-nr = <8>; 92 + fscr-bit-nr = <8>; 93 + isa = <2070>; 94 + usable-privilege = <7>; 95 + os-support = <1>; 96 + hv-support = <1>; 97 + hwcap-bit-nr = <58>; 98 + }; 99 + 100 + control-register { 101 + isa = <0>; 102 + usable-privilege = <7>; 103 + }; 104 + 105 + system-call-vectored { 106 + isa = <3000>; 107 + usable-privilege = <7>; 108 + os-support = <1>; 109 + fscr-bit-nr = <12>; 110 + hwcap-bit-nr = <52>; 74 111 }; 75 112 }; 76 113 ··· 142 101 
ibm,mmu-lpid-bits = <12>; 143 102 ibm,mmu-pid-bits = <20>; 144 103 }; 104 + 105 + PowerPC,Microwatt@1 { 106 + i-cache-sets = <2>; 107 + ibm,dec-bits = <64>; 108 + reservation-granule-size = <64>; 109 + clock-frequency = <100000000>; 110 + timebase-frequency = <100000000>; 111 + i-tlb-sets = <1>; 112 + ibm,ppc-interrupt-server#s = <1>; 113 + i-cache-block-size = <64>; 114 + d-cache-block-size = <64>; 115 + d-cache-sets = <2>; 116 + i-tlb-size = <64>; 117 + cpu-version = <0x990000>; 118 + status = "okay"; 119 + i-cache-size = <0x1000>; 120 + ibm,processor-radix-AP-encodings = <0x0c 0xa0000010 0x20000015 0x4000001e>; 121 + tlb-size = <0>; 122 + tlb-sets = <0>; 123 + device_type = "cpu"; 124 + d-tlb-size = <128>; 125 + d-tlb-sets = <2>; 126 + reg = <1>; 127 + general-purpose; 128 + 64-bit; 129 + d-cache-size = <0x1000>; 130 + ibm,chip-id = <0>; 131 + ibm,mmu-lpid-bits = <12>; 132 + ibm,mmu-pid-bits = <20>; 133 + }; 145 134 }; 146 135 147 136 soc@c0000000 { ··· 184 113 185 114 interrupt-controller@4000 { 186 115 compatible = "openpower,xics-presentation", "ibm,ppc-xicp"; 187 - ibm,interrupt-server-ranges = <0x0 0x1>; 188 - reg = <0x4000 0x100>; 116 + ibm,interrupt-server-ranges = <0x0 0x2>; 117 + reg = <0x4000 0x10 0x4010 0x10>; 189 118 }; 190 119 191 120 ICS: interrupt-controller@5000 { ··· 209 138 interrupts = <0x10 0x1>; 210 139 }; 211 140 212 - ethernet@8020000 { 141 + gpio: gpio@7000 { 142 + device_type = "gpio"; 143 + compatible = "faraday,ftgpio010"; 144 + gpio-controller; 145 + #gpio-cells = <2>; 146 + reg = <0x7000 0x80>; 147 + interrupts = <0x14 1>; 148 + interrupt-controller; 149 + #interrupt-cells = <2>; 150 + }; 151 + 152 + enet0: ethernet@8020000 { 213 153 compatible = "litex,liteeth"; 214 154 reg = <0x8021000 0x100 215 155 0x8020800 0x100 ··· 242 160 reg-names = "phy", "core", "reader", "writer", "irq"; 243 161 bus-width = <4>; 244 162 interrupts = <0x13 1>; 245 - cap-sd-highspeed; 246 163 clocks = <&sys_clk>; 247 164 }; 248 165 };
-2
arch/powerpc/configs/cell_defconfig
···
25 25  CONFIG_PS3_ROM=m
26 26  CONFIG_PS3_FLASH=m
27 27  CONFIG_PS3_LPM=m
28 -    CONFIG_PPC_IBM_CELL_BLADE=y
29 28  CONFIG_RTAS_FLASH=y
30 29  CONFIG_CPU_FREQ=y
31 30  CONFIG_CPU_FREQ_GOV_POWERSAVE=y
···
132 133  CONFIG_SKY2=m
133 134  CONFIG_GELIC_NET=m
134 135  CONFIG_GELIC_WIRELESS=y
135 -      CONFIG_SPIDER_NET=y
136 136  # CONFIG_INPUT_KEYBOARD is not set
137 137  # CONFIG_INPUT_MOUSE is not set
138 138  # CONFIG_SERIO_I8042 is not set
-2
arch/powerpc/configs/ppc64_defconfig
···
51 51  CONFIG_PS3_ROM=m
52 52  CONFIG_PS3_FLASH=m
53 53  CONFIG_PS3_LPM=m
54 -    CONFIG_PPC_IBM_CELL_BLADE=y
55 54  CONFIG_RTAS_FLASH=m
56 55  CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND=y
57 56  CONFIG_CPU_FREQ_GOV_POWERSAVE=y
···
227 228  CONFIG_SUNGEM=y
228 229  CONFIG_GELIC_NET=m
229 230  CONFIG_GELIC_WIRELESS=y
230 -      CONFIG_SPIDER_NET=m
231 231  CONFIG_BROADCOM_PHY=m
232 232  CONFIG_MARVELL_PHY=y
233 233  CONFIG_PPP=m
+1
arch/powerpc/crypto/Makefile
···
51 51  OBJECT_FILES_NON_STANDARD_aesp10-ppc.o := y
52 52  OBJECT_FILES_NON_STANDARD_ghashp10-ppc.o := y
53 53  OBJECT_FILES_NON_STANDARD_aesp8-ppc.o := y
54 +    OBJECT_FILES_NON_STANDARD_ghashp8-ppc.o := y
-56
arch/powerpc/include/asm/cell-pmu.h
··· 20 20 21 21 /* Macros for the pm_control register. */ 22 22 #define CBE_PM_16BIT_CTR(ctr) (1 << (24 - ((ctr) & (NR_PHYS_CTRS - 1)))) 23 - #define CBE_PM_ENABLE_PERF_MON 0x80000000 24 - #define CBE_PM_STOP_AT_MAX 0x40000000 25 - #define CBE_PM_TRACE_MODE_GET(pm_control) (((pm_control) >> 28) & 0x3) 26 - #define CBE_PM_TRACE_MODE_SET(mode) (((mode) & 0x3) << 28) 27 - #define CBE_PM_TRACE_BUF_OVFLW(bit) (((bit) & 0x1) << 17) 28 - #define CBE_PM_COUNT_MODE_SET(count) (((count) & 0x3) << 18) 29 - #define CBE_PM_FREEZE_ALL_CTRS 0x00100000 30 - #define CBE_PM_ENABLE_EXT_TRACE 0x00008000 31 - #define CBE_PM_SPU_ADDR_TRACE_SET(msk) (((msk) & 0x3) << 9) 32 23 33 24 /* Macros for the trace_address register. */ 34 - #define CBE_PM_TRACE_BUF_FULL 0x00000800 35 25 #define CBE_PM_TRACE_BUF_EMPTY 0x00000400 36 - #define CBE_PM_TRACE_BUF_DATA_COUNT(ta) ((ta) & 0x3ff) 37 - #define CBE_PM_TRACE_BUF_MAX_COUNT 0x400 38 - 39 - /* Macros for the pm07_control registers. */ 40 - #define CBE_PM_CTR_INPUT_MUX(pm07_control) (((pm07_control) >> 26) & 0x3f) 41 - #define CBE_PM_CTR_INPUT_CONTROL 0x02000000 42 - #define CBE_PM_CTR_POLARITY 0x01000000 43 - #define CBE_PM_CTR_COUNT_CYCLES 0x00800000 44 - #define CBE_PM_CTR_ENABLE 0x00400000 45 - #define PM07_CTR_INPUT_MUX(x) (((x) & 0x3F) << 26) 46 - #define PM07_CTR_INPUT_CONTROL(x) (((x) & 1) << 25) 47 - #define PM07_CTR_POLARITY(x) (((x) & 1) << 24) 48 - #define PM07_CTR_COUNT_CYCLES(x) (((x) & 1) << 23) 49 - #define PM07_CTR_ENABLE(x) (((x) & 1) << 22) 50 - 51 - /* Macros for the pm_status register. */ 52 - #define CBE_PM_CTR_OVERFLOW_INTR(ctr) (1 << (31 - ((ctr) & 7))) 53 26 54 27 enum pm_reg_name { 55 28 group_control, ··· 34 61 pm_interval, 35 62 pm_start_stop, 36 63 }; 37 - 38 - /* Routines for reading/writing the PMU registers. 
*/ 39 - extern u32 cbe_read_phys_ctr(u32 cpu, u32 phys_ctr); 40 - extern void cbe_write_phys_ctr(u32 cpu, u32 phys_ctr, u32 val); 41 - extern u32 cbe_read_ctr(u32 cpu, u32 ctr); 42 - extern void cbe_write_ctr(u32 cpu, u32 ctr, u32 val); 43 - 44 - extern u32 cbe_read_pm07_control(u32 cpu, u32 ctr); 45 - extern void cbe_write_pm07_control(u32 cpu, u32 ctr, u32 val); 46 - extern u32 cbe_read_pm(u32 cpu, enum pm_reg_name reg); 47 - extern void cbe_write_pm(u32 cpu, enum pm_reg_name reg, u32 val); 48 - 49 - extern u32 cbe_get_ctr_size(u32 cpu, u32 phys_ctr); 50 - extern void cbe_set_ctr_size(u32 cpu, u32 phys_ctr, u32 ctr_size); 51 - 52 - extern void cbe_enable_pm(u32 cpu); 53 - extern void cbe_disable_pm(u32 cpu); 54 - 55 - extern void cbe_read_trace_buffer(u32 cpu, u64 *buf); 56 - 57 - extern void cbe_enable_pm_interrupts(u32 cpu, u32 thread, u32 mask); 58 - extern void cbe_disable_pm_interrupts(u32 cpu); 59 - extern u32 cbe_get_and_clear_pm_interrupts(u32 cpu); 60 - extern void cbe_sync_irq(int node); 61 - 62 - #define CBE_COUNT_SUPERVISOR_MODE 0 63 - #define CBE_COUNT_HYPERVISOR_MODE 1 64 - #define CBE_COUNT_PROBLEM_MODE 2 65 - #define CBE_COUNT_ALL_MODES 3 66 64 67 65 #endif /* __ASM_CELL_PMU_H__ */
-296
arch/powerpc/include/asm/cell-regs.h
···
 
 #include <asm/cell-pmu.h>
 
-/*
- *
- * Some HID register definitions
- *
- */
-
-/* CBE specific HID0 bits */
-#define HID0_CBE_THERM_WAKEUP	0x0000020000000000ul
-#define HID0_CBE_SYSERR_WAKEUP	0x0000008000000000ul
-#define HID0_CBE_THERM_INT_EN	0x0000000400000000ul
-#define HID0_CBE_SYSERR_INT_EN	0x0000000200000000ul
-
-#define MAX_CBE		2
-
-/*
- *
- * Pervasive unit register definitions
- *
- */
-
-union spe_reg {
-	u64 val;
-	u8 spe[8];
-};
-
-union ppe_spe_reg {
-	u64 val;
-	struct {
-		u32 ppe;
-		u32 spe;
-	};
-};
-
-
-struct cbe_pmd_regs {
-	/* Debug Bus Control */
-	u64	pad_0x0000;					/* 0x0000 */
-
-	u64	group_control;					/* 0x0008 */
-
-	u8	pad_0x0010_0x00a8 [0x00a8 - 0x0010];		/* 0x0010 */
-
-	u64	debug_bus_control;				/* 0x00a8 */
-
-	u8	pad_0x00b0_0x0100 [0x0100 - 0x00b0];		/* 0x00b0 */
-
-	u64	trace_aux_data;					/* 0x0100 */
-	u64	trace_buffer_0_63;				/* 0x0108 */
-	u64	trace_buffer_64_127;				/* 0x0110 */
-	u64	trace_address;					/* 0x0118 */
-	u64	ext_tr_timer;					/* 0x0120 */
-
-	u8	pad_0x0128_0x0400 [0x0400 - 0x0128];		/* 0x0128 */
-
-	/* Performance Monitor */
-	u64	pm_status;					/* 0x0400 */
-	u64	pm_control;					/* 0x0408 */
-	u64	pm_interval;					/* 0x0410 */
-	u64	pm_ctr[4];					/* 0x0418 */
-	u64	pm_start_stop;					/* 0x0438 */
-	u64	pm07_control[8];				/* 0x0440 */
-
-	u8	pad_0x0480_0x0800 [0x0800 - 0x0480];		/* 0x0480 */
-
-	/* Thermal Sensor Registers */
-	union	spe_reg	ts_ctsr1;				/* 0x0800 */
-	u64	ts_ctsr2;					/* 0x0808 */
-	union	spe_reg	ts_mtsr1;				/* 0x0810 */
-	u64	ts_mtsr2;					/* 0x0818 */
-	union	spe_reg	ts_itr1;				/* 0x0820 */
-	u64	ts_itr2;					/* 0x0828 */
-	u64	ts_gitr;					/* 0x0830 */
-	u64	ts_isr;						/* 0x0838 */
-	u64	ts_imr;						/* 0x0840 */
-	union	spe_reg	tm_cr1;					/* 0x0848 */
-	u64	tm_cr2;						/* 0x0850 */
-	u64	tm_simr;					/* 0x0858 */
-	union	ppe_spe_reg tm_tpr;				/* 0x0860 */
-	union	spe_reg	tm_str1;				/* 0x0868 */
-	u64	tm_str2;					/* 0x0870 */
-	union	ppe_spe_reg tm_tsr;				/* 0x0878 */
-
-	/* Power Management */
-	u64	pmcr;						/* 0x0880 */
-#define CBE_PMD_PAUSE_ZERO_CONTROL	0x10000
-	u64	pmsr;						/* 0x0888 */
-
-	/* Time Base Register */
-	u64	tbr;						/* 0x0890 */
-
-	u8	pad_0x0898_0x0c00 [0x0c00 - 0x0898];		/* 0x0898 */
-
-	/* Fault Isolation Registers */
-	u64	checkstop_fir;					/* 0x0c00 */
-	u64	recoverable_fir;				/* 0x0c08 */
-	u64	spec_att_mchk_fir;				/* 0x0c10 */
-	u32	fir_mode_reg;					/* 0x0c18 */
-	u8	pad_0x0c1c_0x0c20 [4];				/* 0x0c1c */
-#define CBE_PMD_FIR_MODE_M8		0x00800
-	u64	fir_enable_mask;				/* 0x0c20 */
-
-	u8	pad_0x0c28_0x0ca8 [0x0ca8 - 0x0c28];		/* 0x0c28 */
-	u64	ras_esc_0;					/* 0x0ca8 */
-	u8	pad_0x0cb0_0x1000 [0x1000 - 0x0cb0];		/* 0x0cb0 */
-};
-
-extern struct cbe_pmd_regs __iomem *cbe_get_pmd_regs(struct device_node *np);
-extern struct cbe_pmd_regs __iomem *cbe_get_cpu_pmd_regs(int cpu);
-
-/*
- * PMU shadow registers
- *
- * Many of the registers in the performance monitoring unit are write-only,
- * so we need to save a copy of what we write to those registers.
- *
- * The actual data counters are read/write. However, writing to the counters
- * only takes effect if the PMU is enabled. Otherwise the value is stored in
- * a hardware latch until the next time the PMU is enabled. So we save a copy
- * of the counter values if we need to read them back while the PMU is
- * disabled. The counter_value_in_latch field is a bitmap indicating which
- * counters currently have a value waiting to be written.
- */
-
-struct cbe_pmd_shadow_regs {
-	u32 group_control;
-	u32 debug_bus_control;
-	u32 trace_address;
-	u32 ext_tr_timer;
-	u32 pm_status;
-	u32 pm_control;
-	u32 pm_interval;
-	u32 pm_start_stop;
-	u32 pm07_control[NR_CTRS];
-
-	u32 pm_ctr[NR_PHYS_CTRS];
-	u32 counter_value_in_latch;
-};
-
-extern struct cbe_pmd_shadow_regs *cbe_get_pmd_shadow_regs(struct device_node *np);
-extern struct cbe_pmd_shadow_regs *cbe_get_cpu_pmd_shadow_regs(int cpu);
-
-/*
- *
- * IIC unit register definitions
- *
- */
-
-struct cbe_iic_pending_bits {
-	u32 data;
-	u8 flags;
-	u8 class;
-	u8 source;
-	u8 prio;
-};
-
-#define CBE_IIC_IRQ_VALID	0x80
-#define CBE_IIC_IRQ_IPI		0x40
-
-struct cbe_iic_thread_regs {
-	struct cbe_iic_pending_bits pending;
-	struct cbe_iic_pending_bits pending_destr;
-	u64 generate;
-	u64 prio;
-};
-
-struct cbe_iic_regs {
-	u8	pad_0x0000_0x0400[0x0400 - 0x0000];		/* 0x0000 */
-
-	/* IIC interrupt registers */
-	struct	cbe_iic_thread_regs thread[2];			/* 0x0400 */
-
-	u64	iic_ir;						/* 0x0440 */
-#define CBE_IIC_IR_PRIO(x)	(((x) & 0xf) << 12)
-#define CBE_IIC_IR_DEST_NODE(x)	(((x) & 0xf) << 4)
-#define CBE_IIC_IR_DEST_UNIT(x)	((x) & 0xf)
-#define CBE_IIC_IR_IOC_0	0x0
-#define CBE_IIC_IR_IOC_1S	0xb
-#define CBE_IIC_IR_PT_0		0xe
-#define CBE_IIC_IR_PT_1		0xf
-
-	u64	iic_is;						/* 0x0448 */
-#define CBE_IIC_IS_PMI		0x2
-
-	u8	pad_0x0450_0x0500[0x0500 - 0x0450];		/* 0x0450 */
-
-	/* IOC FIR */
-	u64	ioc_fir_reset;					/* 0x0500 */
-	u64	ioc_fir_set;					/* 0x0508 */
-	u64	ioc_checkstop_enable;				/* 0x0510 */
-	u64	ioc_fir_error_mask;				/* 0x0518 */
-	u64	ioc_syserr_enable;				/* 0x0520 */
-	u64	ioc_fir;					/* 0x0528 */
-
-	u8	pad_0x0530_0x1000[0x1000 - 0x0530];		/* 0x0530 */
-};
-
-extern struct cbe_iic_regs __iomem *cbe_get_iic_regs(struct device_node *np);
-extern struct cbe_iic_regs __iomem *cbe_get_cpu_iic_regs(int cpu);
-
-
-struct cbe_mic_tm_regs {
-	u8	pad_0x0000_0x0040[0x0040 - 0x0000];		/* 0x0000 */
-
-	u64	mic_ctl_cnfg2;					/* 0x0040 */
-#define CBE_MIC_ENABLE_AUX_TRC		0x8000000000000000LL
-#define CBE_MIC_DISABLE_PWR_SAV_2	0x0200000000000000LL
-#define CBE_MIC_DISABLE_AUX_TRC_WRAP	0x0100000000000000LL
-#define CBE_MIC_ENABLE_AUX_TRC_INT	0x0080000000000000LL
-
-	u64	pad_0x0048;					/* 0x0048 */
-
-	u64	mic_aux_trc_base;				/* 0x0050 */
-	u64	mic_aux_trc_max_addr;				/* 0x0058 */
-	u64	mic_aux_trc_cur_addr;				/* 0x0060 */
-	u64	mic_aux_trc_grf_addr;				/* 0x0068 */
-	u64	mic_aux_trc_grf_data;				/* 0x0070 */
-
-	u64	pad_0x0078;					/* 0x0078 */
-
-	u64	mic_ctl_cnfg_0;					/* 0x0080 */
-#define CBE_MIC_DISABLE_PWR_SAV_0	0x8000000000000000LL
-
-	u64	pad_0x0088;					/* 0x0088 */
-
-	u64	slow_fast_timer_0;				/* 0x0090 */
-	u64	slow_next_timer_0;				/* 0x0098 */
-
-	u8	pad_0x00a0_0x00f8[0x00f8 - 0x00a0];		/* 0x00a0 */
-	u64	mic_df_ecc_address_0;				/* 0x00f8 */
-
-	u8	pad_0x0100_0x01b8[0x01b8 - 0x0100];		/* 0x0100 */
-	u64	mic_df_ecc_address_1;				/* 0x01b8 */
-
-	u64	mic_ctl_cnfg_1;					/* 0x01c0 */
-#define CBE_MIC_DISABLE_PWR_SAV_1	0x8000000000000000LL
-
-	u64	pad_0x01c8;					/* 0x01c8 */
-
-	u64	slow_fast_timer_1;				/* 0x01d0 */
-	u64	slow_next_timer_1;				/* 0x01d8 */
-
-	u8	pad_0x01e0_0x0208[0x0208 - 0x01e0];		/* 0x01e0 */
-	u64	mic_exc;					/* 0x0208 */
-#define CBE_MIC_EXC_BLOCK_SCRUB		0x0800000000000000ULL
-#define CBE_MIC_EXC_FAST_SCRUB		0x0100000000000000ULL
-
-	u64	mic_mnt_cfg;					/* 0x0210 */
-#define CBE_MIC_MNT_CFG_CHAN_0_POP	0x0002000000000000ULL
-#define CBE_MIC_MNT_CFG_CHAN_1_POP	0x0004000000000000ULL
-
-	u64	mic_df_config;					/* 0x0218 */
-#define CBE_MIC_ECC_DISABLE_0		0x4000000000000000ULL
-#define CBE_MIC_ECC_REP_SINGLE_0	0x2000000000000000ULL
-#define CBE_MIC_ECC_DISABLE_1		0x0080000000000000ULL
-#define CBE_MIC_ECC_REP_SINGLE_1	0x0040000000000000ULL
-
-	u8	pad_0x0220_0x0230[0x0230 - 0x0220];		/* 0x0220 */
-	u64	mic_fir;					/* 0x0230 */
-#define CBE_MIC_FIR_ECC_SINGLE_0_ERR	0x0200000000000000ULL
-#define CBE_MIC_FIR_ECC_MULTI_0_ERR	0x0100000000000000ULL
-#define CBE_MIC_FIR_ECC_SINGLE_1_ERR	0x0080000000000000ULL
-#define CBE_MIC_FIR_ECC_MULTI_1_ERR	0x0040000000000000ULL
-#define CBE_MIC_FIR_ECC_ERR_MASK	0xffff000000000000ULL
-#define CBE_MIC_FIR_ECC_SINGLE_0_CTE	0x0000020000000000ULL
-#define CBE_MIC_FIR_ECC_MULTI_0_CTE	0x0000010000000000ULL
-#define CBE_MIC_FIR_ECC_SINGLE_1_CTE	0x0000008000000000ULL
-#define CBE_MIC_FIR_ECC_MULTI_1_CTE	0x0000004000000000ULL
-#define CBE_MIC_FIR_ECC_CTE_MASK	0x0000ffff00000000ULL
-#define CBE_MIC_FIR_ECC_SINGLE_0_RESET	0x0000000002000000ULL
-#define CBE_MIC_FIR_ECC_MULTI_0_RESET	0x0000000001000000ULL
-#define CBE_MIC_FIR_ECC_SINGLE_1_RESET	0x0000000000800000ULL
-#define CBE_MIC_FIR_ECC_MULTI_1_RESET	0x0000000000400000ULL
-#define CBE_MIC_FIR_ECC_RESET_MASK	0x00000000ffff0000ULL
-#define CBE_MIC_FIR_ECC_SINGLE_0_SET	0x0000000000000200ULL
-#define CBE_MIC_FIR_ECC_MULTI_0_SET	0x0000000000000100ULL
-#define CBE_MIC_FIR_ECC_SINGLE_1_SET	0x0000000000000080ULL
-#define CBE_MIC_FIR_ECC_MULTI_1_SET	0x0000000000000040ULL
-#define CBE_MIC_FIR_ECC_SET_MASK	0x000000000000ffffULL
-	u64	mic_fir_debug;					/* 0x0238 */
-
-	u8	pad_0x0240_0x1000[0x1000 - 0x0240];		/* 0x0240 */
-};
-
-extern struct cbe_mic_tm_regs __iomem *cbe_get_mic_tm_regs(struct device_node *np);
-extern struct cbe_mic_tm_regs __iomem *cbe_get_cpu_mic_tm_regs(int cpu);
-
-
 /* Cell page table entries */
 #define CBE_IOPTE_PP_W		0x8000000000000000ul /* protection: write */
 #define CBE_IOPTE_PP_R		0x4000000000000000ul /* protection: read */
···
 #define CBE_IOPTE_RPN_Mask	0x07fffffffffff000ul /* RPN */
 #define CBE_IOPTE_H		0x0000000000000800ul /* cache hint */
 #define CBE_IOPTE_IOID_Mask	0x00000000000007fful /* ioid */
-
-/* some utility functions to deal with SMT */
-extern u32 cbe_get_hw_thread_id(int cpu);
-extern u32 cbe_cpu_to_node(int cpu);
-extern u32 cbe_node_to_cpu(int node);
-
-/* Init this module early */
-extern void cbe_regs_init(void);
-
 
 #endif /* CBE_REGS_H */
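The CBE_IOPTE_* masks kept above describe the fixed layout of a Cell I/O page table entry: protection bits at the top, the real page number in the middle, and the I/O device id in the low bits. A minimal sketch of composing and decoding such an entry, using the mask values from the diff; `iopte_make()` is an illustrative helper, not a kernel API, and the sample RPN/ioid values are made up:

```c
#include <assert.h>

/* Bit masks copied from the defines kept in cbe_regs.h */
#define CBE_IOPTE_PP_W		0x8000000000000000ul	/* protection: write */
#define CBE_IOPTE_PP_R		0x4000000000000000ul	/* protection: read */
#define CBE_IOPTE_RPN_Mask	0x07fffffffffff000ul	/* real page number */
#define CBE_IOPTE_H		0x0000000000000800ul	/* cache hint */
#define CBE_IOPTE_IOID_Mask	0x00000000000007fful	/* I/O device id */

/* Hypothetical helper: build a read/write entry for a page and device id */
static unsigned long iopte_make(unsigned long rpn, unsigned int ioid)
{
	return CBE_IOPTE_PP_W | CBE_IOPTE_PP_R |
	       (rpn & CBE_IOPTE_RPN_Mask) | (ioid & CBE_IOPTE_IOID_Mask);
}
```

Because the masks are disjoint, each field can be recovered with a single AND against the same mask.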
arch/powerpc/include/asm/dcr-generic.h (-36)
···
-/* SPDX-License-Identifier: GPL-2.0-or-later */
-/*
- * (c) Copyright 2006 Benjamin Herrenschmidt, IBM Corp.
- *                    <benh@kernel.crashing.org>
- */
-
-#ifndef _ASM_POWERPC_DCR_GENERIC_H
-#define _ASM_POWERPC_DCR_GENERIC_H
-#ifdef __KERNEL__
-#ifndef __ASSEMBLY__
-
-enum host_type_t {DCR_HOST_MMIO, DCR_HOST_NATIVE, DCR_HOST_INVALID};
-
-typedef struct {
-	enum host_type_t type;
-	union {
-		dcr_host_mmio_t mmio;
-		dcr_host_native_t native;
-	} host;
-} dcr_host_t;
-
-extern bool dcr_map_ok_generic(dcr_host_t host);
-
-extern dcr_host_t dcr_map_generic(struct device_node *dev, unsigned int dcr_n,
-				  unsigned int dcr_c);
-extern void dcr_unmap_generic(dcr_host_t host, unsigned int dcr_c);
-
-extern u32 dcr_read_generic(dcr_host_t host, unsigned int dcr_n);
-
-extern void dcr_write_generic(dcr_host_t host, unsigned int dcr_n, u32 value);
-
-#endif /* __ASSEMBLY__ */
-#endif /* __KERNEL__ */
-#endif /* _ASM_POWERPC_DCR_GENERIC_H */
-
arch/powerpc/include/asm/dcr-mmio.h (-44)
···
-/* SPDX-License-Identifier: GPL-2.0-or-later */
-/*
- * (c) Copyright 2006 Benjamin Herrenschmidt, IBM Corp.
- *                    <benh@kernel.crashing.org>
- */
-
-#ifndef _ASM_POWERPC_DCR_MMIO_H
-#define _ASM_POWERPC_DCR_MMIO_H
-#ifdef __KERNEL__
-
-#include <asm/io.h>
-
-typedef struct {
-	void __iomem *token;
-	unsigned int stride;
-	unsigned int base;
-} dcr_host_mmio_t;
-
-static inline bool dcr_map_ok_mmio(dcr_host_mmio_t host)
-{
-	return host.token != NULL;
-}
-
-extern dcr_host_mmio_t dcr_map_mmio(struct device_node *dev,
-				    unsigned int dcr_n,
-				    unsigned int dcr_c);
-extern void dcr_unmap_mmio(dcr_host_mmio_t host, unsigned int dcr_c);
-
-static inline u32 dcr_read_mmio(dcr_host_mmio_t host, unsigned int dcr_n)
-{
-	return in_be32(host.token + ((host.base + dcr_n) * host.stride));
-}
-
-static inline void dcr_write_mmio(dcr_host_mmio_t host,
-				  unsigned int dcr_n,
-				  u32 value)
-{
-	out_be32(host.token + ((host.base + dcr_n) * host.stride), value);
-}
-
-#endif /* __KERNEL__ */
-#endif /* _ASM_POWERPC_DCR_MMIO_H */
-
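The deleted dcr_read_mmio()/dcr_write_mmio() accessors located DCR number N inside an ioremapped window at `token + (base + N) * stride`. A small model of just that address computation, with the field names taken from the removed header; the token, base, and stride values below are made up for illustration:

```c
#include <assert.h>
#include <stdint.h>

/* Mirror of the removed dcr_host_mmio_t (plain uintptr_t instead of __iomem) */
struct dcr_host_mmio {
	uintptr_t token;	/* start of the ioremapped DCR window */
	unsigned int stride;	/* bytes between consecutive DCRs */
	unsigned int base;	/* first DCR number covered by the window */
};

/* Address of DCR number dcr_n, as dcr_read_mmio()/dcr_write_mmio() computed it */
static uintptr_t dcr_mmio_addr(struct dcr_host_mmio host, unsigned int dcr_n)
{
	return host.token + (host.base + dcr_n) * host.stride;
}
```

The real accessors then issued a big-endian 32-bit load or store (`in_be32`/`out_be32`) at that address.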
arch/powerpc/include/asm/dcr.h (-32)
···
 #ifndef __ASSEMBLY__
 #ifdef CONFIG_PPC_DCR
 
-#ifdef CONFIG_PPC_DCR_NATIVE
 #include <asm/dcr-native.h>
-#endif
 
-#ifdef CONFIG_PPC_DCR_MMIO
-#include <asm/dcr-mmio.h>
-#endif
-
-
-/* Indirection layer for providing both NATIVE and MMIO support. */
-
-#if defined(CONFIG_PPC_DCR_NATIVE) && defined(CONFIG_PPC_DCR_MMIO)
-
-#include <asm/dcr-generic.h>
-
-#define DCR_MAP_OK(host)	dcr_map_ok_generic(host)
-#define dcr_map(dev, dcr_n, dcr_c) dcr_map_generic(dev, dcr_n, dcr_c)
-#define dcr_unmap(host, dcr_c) dcr_unmap_generic(host, dcr_c)
-#define dcr_read(host, dcr_n) dcr_read_generic(host, dcr_n)
-#define dcr_write(host, dcr_n, value) dcr_write_generic(host, dcr_n, value)
-
-#else
-
-#ifdef CONFIG_PPC_DCR_NATIVE
 typedef dcr_host_native_t dcr_host_t;
 #define DCR_MAP_OK(host)	dcr_map_ok_native(host)
 #define dcr_map(dev, dcr_n, dcr_c) dcr_map_native(dev, dcr_n, dcr_c)
 #define dcr_unmap(host, dcr_c) dcr_unmap_native(host, dcr_c)
 #define dcr_read(host, dcr_n) dcr_read_native(host, dcr_n)
 #define dcr_write(host, dcr_n, value) dcr_write_native(host, dcr_n, value)
-#else
-typedef dcr_host_mmio_t dcr_host_t;
-#define DCR_MAP_OK(host)	dcr_map_ok_mmio(host)
-#define dcr_map(dev, dcr_n, dcr_c) dcr_map_mmio(dev, dcr_n, dcr_c)
-#define dcr_unmap(host, dcr_c) dcr_unmap_mmio(host, dcr_c)
-#define dcr_read(host, dcr_n) dcr_read_mmio(host, dcr_n)
-#define dcr_write(host, dcr_n, value) dcr_write_mmio(host, dcr_n, value)
-#endif
-
-#endif /* defined(CONFIG_PPC_DCR_NATIVE) && defined(CONFIG_PPC_DCR_MMIO) */
 
 /*
  * additional helpers to read the DCR * base from the device-tree
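With the MMIO and generic variants gone, the dcr_read()/dcr_write() names in dcr.h are now an unconditional alias for the native accessors. A sketch of that remaining shape, with stand-in stubs for the native primitives (the real ones live in asm/dcr-native.h and execute mfdcr/mtdcr-style accesses; the array-backed fakes here are purely illustrative):

```c
#include <assert.h>

/* Hypothetical stand-ins for the native DCR primitives */
typedef struct { unsigned int base; } dcr_host_native_t;
static unsigned int fake_dcrs[64];	/* pretend DCR file */

static unsigned int dcr_read_native(dcr_host_native_t h, unsigned int dcr_n)
{
	return fake_dcrs[h.base + dcr_n];
}
static void dcr_write_native(dcr_host_native_t h, unsigned int dcr_n, unsigned int v)
{
	fake_dcrs[h.base + dcr_n] = v;
}

/* After this patch, the generic names map straight to the native ones */
typedef dcr_host_native_t dcr_host_t;
#define dcr_read(host, dcr_n)		dcr_read_native(host, dcr_n)
#define dcr_write(host, dcr_n, value)	dcr_write_native(host, dcr_n, value)
```

Callers are unchanged: they keep using `dcr_read()`/`dcr_write()` and no longer pay for a runtime host-type dispatch.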
arch/powerpc/include/asm/hvcall.h (+34)
···
 #define H_SCM_FLUSH		0x44C
 #define H_GET_ENERGY_SCALE_INFO	0x450
 #define H_PKS_SIGNED_UPDATE	0x454
+#define H_HTM			0x458
 #define H_WATCHDOG		0x45C
 #define H_GUEST_GET_CAPABILITIES 0x460
 #define H_GUEST_SET_CAPABILITIES 0x464
···
 #define H_GUEST_CAP_POWER10	(1UL<<(63-2))
 #define H_GUEST_CAP_POWER11	(1UL<<(63-3))
 #define H_GUEST_CAP_BITMAP2	(1UL<<(63-63))
+
+/*
+ * Defines for H_HTM - Macros for hardware trace macro (HTM) function.
+ */
+#define H_HTM_FLAGS_HARDWARE_TARGET	(1ul << 63)
+#define H_HTM_FLAGS_LOGICAL_TARGET	(1ul << 62)
+#define H_HTM_FLAGS_PROCID_TARGET	(1ul << 61)
+#define H_HTM_FLAGS_NOWRAP		(1ul << 60)
+
+#define H_HTM_OP_SHIFT			(63-15)
+#define H_HTM_OP(x)			((unsigned long)(x)<<H_HTM_OP_SHIFT)
+#define H_HTM_OP_CAPABILITIES		0x01
+#define H_HTM_OP_STATUS			0x02
+#define H_HTM_OP_SETUP			0x03
+#define H_HTM_OP_CONFIGURE		0x04
+#define H_HTM_OP_START			0x05
+#define H_HTM_OP_STOP			0x06
+#define H_HTM_OP_DECONFIGURE		0x07
+#define H_HTM_OP_DUMP_DETAILS		0x08
+#define H_HTM_OP_DUMP_DATA		0x09
+#define H_HTM_OP_DUMP_SYSMEM_CONF	0x0a
+#define H_HTM_OP_DUMP_SYSPROC_CONF	0x0b
+
+#define H_HTM_TYPE_SHIFT		(63-31)
+#define H_HTM_TYPE(x)			((unsigned long)(x)<<H_HTM_TYPE_SHIFT)
+#define H_HTM_TYPE_NEST			0x01
+#define H_HTM_TYPE_CORE			0x02
+#define H_HTM_TYPE_LLAT			0x03
+#define H_HTM_TYPE_GLOBAL		0xff
+
+#define H_HTM_TARGET_NODE_INDEX(x)		((unsigned long)(x)<<(63-15))
+#define H_HTM_TARGET_NODAL_CHIP_INDEX(x)	((unsigned long)(x)<<(63-31))
+#define H_HTM_TARGET_CORE_INDEX_ON_CHIP(x)	((unsigned long)(x)<<(63-47))
 
 #ifndef __ASSEMBLY__
 #include <linux/types.h>
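The new H_HTM defines use IBM (big-endian) bit numbering: `(63-15)` is a shift of 48, so the operation occupies bits 63..48 of the argument word and the trace type bits 63..32 of its field. A sketch of packing one such word from the macros added above; the helper name and the particular flag/op/type combination are illustrative, not taken from a real caller:

```c
#include <assert.h>

/* Subset of the H_HTM defines added in this merge */
#define H_HTM_FLAGS_NOWRAP	(1ul << 60)
#define H_HTM_OP_SHIFT		(63 - 15)	/* == 48 */
#define H_HTM_OP(x)		((unsigned long)(x) << H_HTM_OP_SHIFT)
#define H_HTM_OP_START		0x05
#define H_HTM_TYPE_SHIFT	(63 - 31)	/* == 32 */
#define H_HTM_TYPE(x)		((unsigned long)(x) << H_HTM_TYPE_SHIFT)
#define H_HTM_TYPE_CORE		0x02

/* Hypothetical: "start a core trace, do not wrap the trace buffer" */
static unsigned long htm_start_core_nowrap(void)
{
	return H_HTM_FLAGS_NOWRAP |
	       H_HTM_OP(H_HTM_OP_START) |
	       H_HTM_TYPE(H_HTM_TYPE_CORE);
}
```

The flags, operation, and type land in disjoint bit ranges, so the word is a plain OR of the three fields.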
arch/powerpc/include/asm/io-defs.h (+12, -58)
···
 /* SPDX-License-Identifier: GPL-2.0 */
 /* This file is meant to be include multiple times by other headers */
-/* last 2 argments are used by platforms/cell/io-workarounds.[ch] */
 
-DEF_PCI_AC_RET(readb, u8, (const PCI_IO_ADDR addr), (addr), mem, addr)
-DEF_PCI_AC_RET(readw, u16, (const PCI_IO_ADDR addr), (addr), mem, addr)
-DEF_PCI_AC_RET(readl, u32, (const PCI_IO_ADDR addr), (addr), mem, addr)
-DEF_PCI_AC_RET(readw_be, u16, (const PCI_IO_ADDR addr), (addr), mem, addr)
-DEF_PCI_AC_RET(readl_be, u32, (const PCI_IO_ADDR addr), (addr), mem, addr)
-DEF_PCI_AC_NORET(writeb, (u8 val, PCI_IO_ADDR addr), (val, addr), mem, addr)
-DEF_PCI_AC_NORET(writew, (u16 val, PCI_IO_ADDR addr), (val, addr), mem, addr)
-DEF_PCI_AC_NORET(writel, (u32 val, PCI_IO_ADDR addr), (val, addr), mem, addr)
-DEF_PCI_AC_NORET(writew_be, (u16 val, PCI_IO_ADDR addr), (val, addr), mem, addr)
-DEF_PCI_AC_NORET(writel_be, (u32 val, PCI_IO_ADDR addr), (val, addr), mem, addr)
-
-#ifdef __powerpc64__
-DEF_PCI_AC_RET(readq, u64, (const PCI_IO_ADDR addr), (addr), mem, addr)
-DEF_PCI_AC_RET(readq_be, u64, (const PCI_IO_ADDR addr), (addr), mem, addr)
-DEF_PCI_AC_NORET(writeq, (u64 val, PCI_IO_ADDR addr), (val, addr), mem, addr)
-DEF_PCI_AC_NORET(writeq_be, (u64 val, PCI_IO_ADDR addr), (val, addr), mem, addr)
-#endif /* __powerpc64__ */
-
-DEF_PCI_AC_RET(inb, u8, (unsigned long port), (port), pio, port)
-DEF_PCI_AC_RET(inw, u16, (unsigned long port), (port), pio, port)
-DEF_PCI_AC_RET(inl, u32, (unsigned long port), (port), pio, port)
-DEF_PCI_AC_NORET(outb, (u8 val, unsigned long port), (val, port), pio, port)
-DEF_PCI_AC_NORET(outw, (u16 val, unsigned long port), (val, port), pio, port)
-DEF_PCI_AC_NORET(outl, (u32 val, unsigned long port), (val, port), pio, port)
-
-DEF_PCI_AC_NORET(readsb, (const PCI_IO_ADDR a, void *b, unsigned long c),
-		 (a, b, c), mem, a)
-DEF_PCI_AC_NORET(readsw, (const PCI_IO_ADDR a, void *b, unsigned long c),
-		 (a, b, c), mem, a)
-DEF_PCI_AC_NORET(readsl, (const PCI_IO_ADDR a, void *b, unsigned long c),
-		 (a, b, c), mem, a)
-DEF_PCI_AC_NORET(writesb, (PCI_IO_ADDR a, const void *b, unsigned long c),
-		 (a, b, c), mem, a)
-DEF_PCI_AC_NORET(writesw, (PCI_IO_ADDR a, const void *b, unsigned long c),
-		 (a, b, c), mem, a)
-DEF_PCI_AC_NORET(writesl, (PCI_IO_ADDR a, const void *b, unsigned long c),
-		 (a, b, c), mem, a)
-
-DEF_PCI_AC_NORET(insb, (unsigned long p, void *b, unsigned long c),
-		 (p, b, c), pio, p)
-DEF_PCI_AC_NORET(insw, (unsigned long p, void *b, unsigned long c),
-		 (p, b, c), pio, p)
-DEF_PCI_AC_NORET(insl, (unsigned long p, void *b, unsigned long c),
-		 (p, b, c), pio, p)
-DEF_PCI_AC_NORET(outsb, (unsigned long p, const void *b, unsigned long c),
-		 (p, b, c), pio, p)
-DEF_PCI_AC_NORET(outsw, (unsigned long p, const void *b, unsigned long c),
-		 (p, b, c), pio, p)
-DEF_PCI_AC_NORET(outsl, (unsigned long p, const void *b, unsigned long c),
-		 (p, b, c), pio, p)
-
-DEF_PCI_AC_NORET(memset_io, (PCI_IO_ADDR a, int c, unsigned long n),
-		 (a, c, n), mem, a)
-DEF_PCI_AC_NORET(memcpy_fromio, (void *d, const PCI_IO_ADDR s, unsigned long n),
-		 (d, s, n), mem, s)
-DEF_PCI_AC_NORET(memcpy_toio, (PCI_IO_ADDR d, const void *s, unsigned long n),
-		 (d, s, n), mem, d)
+DEF_PCI_AC_RET(inb, u8, (unsigned long port), (port))
+DEF_PCI_AC_RET(inw, u16, (unsigned long port), (port))
+DEF_PCI_AC_RET(inl, u32, (unsigned long port), (port))
+DEF_PCI_AC_NORET(outb, (u8 val, unsigned long port), (val, port))
+DEF_PCI_AC_NORET(outw, (u16 val, unsigned long port), (val, port))
+DEF_PCI_AC_NORET(outl, (u32 val, unsigned long port), (val, port))
+DEF_PCI_AC_NORET(insb, (unsigned long p, void *b, unsigned long c), (p, b, c))
+DEF_PCI_AC_NORET(insw, (unsigned long p, void *b, unsigned long c), (p, b, c))
+DEF_PCI_AC_NORET(insl, (unsigned long p, void *b, unsigned long c), (p, b, c))
+DEF_PCI_AC_NORET(outsb, (unsigned long p, const void *b, unsigned long c), (p, b, c))
+DEF_PCI_AC_NORET(outsw, (unsigned long p, const void *b, unsigned long c), (p, b, c))
+DEF_PCI_AC_NORET(outsl, (unsigned long p, const void *b, unsigned long c), (p, b, c))
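io-defs.h is an X-macro file: io.h defines DEF_PCI_AC_RET/DEF_PCI_AC_NORET one way to declare the hook-table members of `struct ppc_pci_io`, includes this file, redefines them to emit the inline wrappers that prefer an installed hook, and includes it again. A self-contained model of that pattern, using hypothetical names (`IO_DEFS`, `demo_pci_io`, a fake port latch) rather than the kernel's, since a standalone program cannot include a header twice the way io.h does:

```c
#include <assert.h>
#include <stddef.h>

/* One list, expanded twice with different definitions of the X-macros */
#define IO_DEFS(X_RET, X_NORET) \
	X_RET(inb, unsigned char, (unsigned long port), (port)) \
	X_NORET(outb, (unsigned char val, unsigned long port), (val, port))

/* Expansion 1: the hook table (NULL members mean "no platform override") */
#define AS_FIELD_RET(name, ret, at, al)	ret (*name) at;
#define AS_FIELD_NORET(name, at, al)	void (*name) at;
struct demo_pci_io {
	IO_DEFS(AS_FIELD_RET, AS_FIELD_NORET)
} demo_pci_io;

/* Default implementations backed by a fake single-port latch */
static unsigned char fake_port;
static unsigned char __do_inb(unsigned long port) { (void)port; return fake_port; }
static void __do_outb(unsigned char val, unsigned long port) { (void)port; fake_port = val; }

/* A platform hook, for demonstration */
static unsigned char hooked_inb(unsigned long port) { (void)port; return 0xff; }

/* Expansion 2: wrappers that call the hook when present, else the default */
#define AS_WRAP_RET(name, ret, at, al) \
	static ret name at { \
		if (demo_pci_io.name != NULL) \
			return demo_pci_io.name al; \
		return __do_##name al; \
	}
#define AS_WRAP_NORET(name, at, al) \
	static void name at { \
		if (demo_pci_io.name != NULL) \
			demo_pci_io.name al; \
		else \
			__do_##name al; \
	}
IO_DEFS(AS_WRAP_RET, AS_WRAP_NORET)
```

This also shows why the diff could drop the last two arguments of each DEF_* line: they only fed the now-removed Cell io-workarounds dispatch, and the remaining hook check needs just the name, signature, and argument list.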
arch/powerpc/include/asm/io-workarounds.h (-55)
···
-/* SPDX-License-Identifier: GPL-2.0-or-later */
-/*
- * Support PCI IO workaround
- *
- * (C) Copyright 2007-2008 TOSHIBA CORPORATION
- */
-
-#ifndef _IO_WORKAROUNDS_H
-#define _IO_WORKAROUNDS_H
-
-#ifdef CONFIG_PPC_IO_WORKAROUNDS
-#include <linux/io.h>
-#include <asm/pci-bridge.h>
-
-/* Bus info */
-struct iowa_bus {
-	struct pci_controller *phb;
-	struct ppc_pci_io *ops;
-	void   *private;
-};
-
-void iowa_register_bus(struct pci_controller *, struct ppc_pci_io *,
-		       int (*)(struct iowa_bus *, void *), void *);
-struct iowa_bus *iowa_mem_find_bus(const PCI_IO_ADDR);
-struct iowa_bus *iowa_pio_find_bus(unsigned long);
-
-extern struct ppc_pci_io spiderpci_ops;
-extern int spiderpci_iowa_init(struct iowa_bus *, void *);
-
-#define SPIDER_PCI_REG_BASE		0xd000
-#define SPIDER_PCI_REG_SIZE		0x1000
-#define SPIDER_PCI_VCI_CNTL_STAT	0x0110
-#define SPIDER_PCI_DUMMY_READ		0x0810
-#define SPIDER_PCI_DUMMY_READ_BASE	0x0814
-
-#endif
-
-#if defined(CONFIG_PPC_IO_WORKAROUNDS) && defined(CONFIG_PPC_INDIRECT_MMIO)
-extern bool io_workaround_inited;
-
-static inline bool iowa_is_active(void)
-{
-	return unlikely(io_workaround_inited);
-}
-#else
-static inline bool iowa_is_active(void)
-{
-	return false;
-}
-#endif
-
-void __iomem *iowa_ioremap(phys_addr_t addr, unsigned long size,
-			   pgprot_t prot, void *caller);
-
-#endif /* _IO_WORKAROUNDS_H */
arch/powerpc/include/asm/io.h (+198, -211)
···
 extern bool isa_io_special;
 
 #ifdef CONFIG_PPC32
-#if defined(CONFIG_PPC_INDIRECT_PIO) || defined(CONFIG_PPC_INDIRECT_MMIO)
-#error CONFIG_PPC_INDIRECT_{PIO,MMIO} are not yet supported on 32 bits
+#ifdef CONFIG_PPC_INDIRECT_PIO
+#error CONFIG_PPC_INDIRECT_PIO is not yet supported on 32 bits
 #endif
 #endif
 
···
  *
  * in_8, in_le16, in_be16, in_le32, in_be32, in_le64, in_be64
  * out_8, out_le16, out_be16, out_le32, out_be32, out_le64, out_be64
- * _insb, _insw_ns, _insl_ns, _outsb, _outsw_ns, _outsl_ns
+ * _insb, _insw, _insl, _outsb, _outsw, _outsl
  *
  * Those operate directly on a kernel virtual address. Note that the prototype
  * for the out_* accessors has the arguments in opposite order from the usual
  * linux PCI accessors. Unlike those, they take the address first and the value
  * next.
- *
- * Note: I might drop the _ns suffix on the stream operations soon as it is
- * simply normal for stream operations to not swap in the first place.
- *
  */
 
 /* -mprefixed can generate offsets beyond range, fall back hack */
···
  */
 extern void _insb(const volatile u8 __iomem *addr, void *buf, long count);
 extern void _outsb(volatile u8 __iomem *addr,const void *buf,long count);
-extern void _insw_ns(const volatile u16 __iomem *addr, void *buf, long count);
-extern void _outsw_ns(volatile u16 __iomem *addr, const void *buf, long count);
-extern void _insl_ns(const volatile u32 __iomem *addr, void *buf, long count);
-extern void _outsl_ns(volatile u32 __iomem *addr, const void *buf, long count);
-
-/* The _ns naming is historical and will be removed. For now, just #define
- * the non _ns equivalent names
- */
-#define _insw _insw_ns
-#define _insl _insl_ns
-#define _outsw _outsw_ns
-#define _outsl _outsl_ns
-
+extern void _insw(const volatile u16 __iomem *addr, void *buf, long count);
+extern void _outsw(volatile u16 __iomem *addr, const void *buf, long count);
+extern void _insl(const volatile u32 __iomem *addr, void *buf, long count);
+extern void _outsl(volatile u32 __iomem *addr, const void *buf, long count);
 
 /*
  * memset_io, memcpy_toio, memcpy_fromio base implementations are out of line
···
  * for PowerPC is as close as possible to the x86 version of these, and thus
  * provides fairly heavy weight barriers for the non-raw versions
  *
- * In addition, they support a hook mechanism when CONFIG_PPC_INDIRECT_MMIO
- * or CONFIG_PPC_INDIRECT_PIO are set allowing the platform to provide its
- * own implementation of some or all of the accessors.
+ * In addition, they support a hook mechanism when CONFIG_PPC_INDIRECT_PIO
+ * is set allowing the platform to provide its own implementation of some
+ * of the accessors.
  */
 
 /*
···
 #include <asm/eeh.h>
 #endif
 
-/* Shortcut to the MMIO argument pointer */
-#define PCI_IO_ADDR volatile void __iomem *
-
-/* Indirect IO address tokens:
- *
- * When CONFIG_PPC_INDIRECT_MMIO is set, the platform can provide hooks
- * on all MMIOs. (Note that this is all 64 bits only for now)
- *
- * To help platforms who may need to differentiate MMIO addresses in
- * their hooks, a bitfield is reserved for use by the platform near the
- * top of MMIO addresses (not PIO, those have to cope the hard way).
- *
- * The highest address in the kernel virtual space are:
- *
- *  d0003fffffffffff	# with Hash MMU
- *  c00fffffffffffff	# with Radix MMU
- *
- * The top 4 bits are reserved as the region ID on hash, leaving us 8 bits
- * that can be used for the field.
- *
- * The direct IO mapping operations will then mask off those bits
- * before doing the actual access, though that only happen when
- * CONFIG_PPC_INDIRECT_MMIO is set, thus be careful when you use that
- * mechanism
- *
- * For PIO, there is a separate CONFIG_PPC_INDIRECT_PIO which makes
- * all PIO functions call through a hook.
- */
-
-#ifdef CONFIG_PPC_INDIRECT_MMIO
-#define PCI_IO_IND_TOKEN_SHIFT	52
-#define PCI_IO_IND_TOKEN_MASK	(0xfful << PCI_IO_IND_TOKEN_SHIFT)
-#define PCI_FIX_ADDR(addr)						\
-	((PCI_IO_ADDR)(((unsigned long)(addr)) & ~PCI_IO_IND_TOKEN_MASK))
-#define PCI_GET_ADDR_TOKEN(addr)					\
-	(((unsigned long)(addr) & PCI_IO_IND_TOKEN_MASK) >>		\
-		PCI_IO_IND_TOKEN_SHIFT)
-#define PCI_SET_ADDR_TOKEN(addr, token)					\
-do {									\
-	unsigned long __a = (unsigned long)(addr);			\
-	__a &= ~PCI_IO_IND_TOKEN_MASK;					\
-	__a |= ((unsigned long)(token)) << PCI_IO_IND_TOKEN_SHIFT;	\
-	(addr) = (void __iomem *)__a;					\
-} while(0)
-#else
-#define PCI_FIX_ADDR(addr) (addr)
-#endif
-
-
-/*
- * Non ordered and non-swapping "raw" accessors
- */
-
-static inline unsigned char __raw_readb(const volatile void __iomem *addr)
-{
-	return *(volatile unsigned char __force *)PCI_FIX_ADDR(addr);
-}
-#define __raw_readb __raw_readb
-
-static inline unsigned short __raw_readw(const volatile void __iomem *addr)
-{
-	return *(volatile unsigned short __force *)PCI_FIX_ADDR(addr);
-}
-#define __raw_readw __raw_readw
-
-static inline unsigned int __raw_readl(const volatile void __iomem *addr)
-{
-	return *(volatile unsigned int __force *)PCI_FIX_ADDR(addr);
-}
-#define __raw_readl __raw_readl
-
-static inline void __raw_writeb(unsigned char v, volatile void __iomem *addr)
-{
-	*(volatile unsigned char __force *)PCI_FIX_ADDR(addr) = v;
-}
-#define __raw_writeb __raw_writeb
-
-static inline void __raw_writew(unsigned short v, volatile void __iomem *addr)
-{
-	*(volatile unsigned short __force *)PCI_FIX_ADDR(addr) = v;
-}
-#define __raw_writew __raw_writew
-
-static inline void __raw_writel(unsigned int v, volatile void __iomem *addr)
-{
-	*(volatile unsigned int __force *)PCI_FIX_ADDR(addr) = v;
-}
-#define __raw_writel __raw_writel
+#define _IO_PORT(port)	((volatile void __iomem *)(_IO_BASE + (port)))
 
 #ifdef __powerpc64__
-static inline unsigned long __raw_readq(const volatile void __iomem *addr)
-{
-	return *(volatile unsigned long __force *)PCI_FIX_ADDR(addr);
-}
-#define __raw_readq __raw_readq
-
-static inline void __raw_writeq(unsigned long v, volatile void __iomem *addr)
-{
-	*(volatile unsigned long __force *)PCI_FIX_ADDR(addr) = v;
-}
-#define __raw_writeq __raw_writeq
-
-static inline void __raw_writeq_be(unsigned long v, volatile void __iomem *addr)
-{
-	__raw_writeq((__force unsigned long)cpu_to_be64(v), addr);
-}
-#define __raw_writeq_be __raw_writeq_be
-
 /*
- * Real mode versions of the above. Those instructions are only supposed
+ * Real mode versions of raw accessors. Those instructions are only supposed
  * to be used in hypervisor real mode as per the architecture spec.
  */
 static inline void __raw_rm_writeb(u8 val, volatile void __iomem *paddr)
···
  * possible to hook directly at the toplevel PIO operation if they have to
  * be handled differently
  */
-#define __do_writeb(val, addr)	out_8(PCI_FIX_ADDR(addr), val)
-#define __do_writew(val, addr)	out_le16(PCI_FIX_ADDR(addr), val)
-#define __do_writel(val, addr)	out_le32(PCI_FIX_ADDR(addr), val)
-#define __do_writeq(val, addr)	out_le64(PCI_FIX_ADDR(addr), val)
-#define __do_writew_be(val, addr) out_be16(PCI_FIX_ADDR(addr), val)
-#define __do_writel_be(val, addr) out_be32(PCI_FIX_ADDR(addr), val)
-#define __do_writeq_be(val, addr) out_be64(PCI_FIX_ADDR(addr), val)
 
 #ifdef CONFIG_EEH
-#define __do_readb(addr)	eeh_readb(PCI_FIX_ADDR(addr))
-#define __do_readw(addr)	eeh_readw(PCI_FIX_ADDR(addr))
-#define __do_readl(addr)	eeh_readl(PCI_FIX_ADDR(addr))
-#define __do_readq(addr)	eeh_readq(PCI_FIX_ADDR(addr))
-#define __do_readw_be(addr)	eeh_readw_be(PCI_FIX_ADDR(addr))
-#define __do_readl_be(addr)	eeh_readl_be(PCI_FIX_ADDR(addr))
-#define __do_readq_be(addr)	eeh_readq_be(PCI_FIX_ADDR(addr))
+#define __do_readb(addr)	eeh_readb(addr)
+#define __do_readw(addr)	eeh_readw(addr)
+#define __do_readl(addr)	eeh_readl(addr)
+#define __do_readq(addr)	eeh_readq(addr)
+#define __do_readw_be(addr)	eeh_readw_be(addr)
+#define __do_readl_be(addr)	eeh_readl_be(addr)
+#define __do_readq_be(addr)	eeh_readq_be(addr)
 #else /* CONFIG_EEH */
-#define __do_readb(addr)	in_8(PCI_FIX_ADDR(addr))
-#define __do_readw(addr)	in_le16(PCI_FIX_ADDR(addr))
-#define __do_readl(addr)	in_le32(PCI_FIX_ADDR(addr))
-#define __do_readq(addr)	in_le64(PCI_FIX_ADDR(addr))
-#define __do_readw_be(addr)	in_be16(PCI_FIX_ADDR(addr))
-#define __do_readl_be(addr)	in_be32(PCI_FIX_ADDR(addr))
-#define __do_readq_be(addr)	in_be64(PCI_FIX_ADDR(addr))
+#define __do_readb(addr)	in_8(addr)
+#define __do_readw(addr)	in_le16(addr)
+#define __do_readl(addr)	in_le32(addr)
+#define __do_readq(addr)	in_le64(addr)
+#define __do_readw_be(addr)	in_be16(addr)
+#define __do_readl_be(addr)	in_be32(addr)
+#define __do_readq_be(addr)	in_be64(addr)
 #endif /* !defined(CONFIG_EEH) */
 
···
 #define __do_inw(port)		_rec_inw(port)
 #define __do_inl(port)		_rec_inl(port)
 #else /* CONFIG_PPC32 */
-#define __do_outb(val, port)	writeb(val,(PCI_IO_ADDR)(_IO_BASE+port));
-#define __do_outw(val, port)	writew(val,(PCI_IO_ADDR)(_IO_BASE+port));
-#define __do_outl(val, port)	writel(val,(PCI_IO_ADDR)(_IO_BASE+port));
-#define __do_inb(port)		readb((PCI_IO_ADDR)(_IO_BASE + port));
-#define __do_inw(port)		readw((PCI_IO_ADDR)(_IO_BASE + port));
-#define __do_inl(port)		readl((PCI_IO_ADDR)(_IO_BASE + port));
+#define __do_outb(val, port)	writeb(val,_IO_PORT(port));
+#define __do_outw(val, port)	writew(val,_IO_PORT(port));
+#define __do_outl(val, port)	writel(val,_IO_PORT(port));
+#define __do_inb(port)		readb(_IO_PORT(port));
+#define __do_inw(port)		readw(_IO_PORT(port));
+#define __do_inl(port)		readl(_IO_PORT(port));
 #endif /* !CONFIG_PPC32 */
 
 #ifdef CONFIG_EEH
-#define __do_readsb(a, b, n)	eeh_readsb(PCI_FIX_ADDR(a), (b), (n))
-#define __do_readsw(a, b, n)	eeh_readsw(PCI_FIX_ADDR(a), (b), (n))
-#define __do_readsl(a, b, n)	eeh_readsl(PCI_FIX_ADDR(a), (b), (n))
+#define __do_readsb(a, b, n)	eeh_readsb(a, (b), (n))
+#define __do_readsw(a, b, n)	eeh_readsw(a, (b), (n))
+#define __do_readsl(a, b, n)	eeh_readsl(a, (b), (n))
 #else /* CONFIG_EEH */
-#define __do_readsb(a, b, n)	_insb(PCI_FIX_ADDR(a), (b), (n))
-#define __do_readsw(a, b, n)	_insw(PCI_FIX_ADDR(a), (b), (n))
-#define __do_readsl(a, b, n)	_insl(PCI_FIX_ADDR(a), (b), (n))
+#define __do_readsb(a, b, n)	_insb(a, (b), (n))
+#define __do_readsw(a, b, n)	_insw(a, (b), (n))
+#define __do_readsl(a, b, n)	_insl(a, (b), (n))
 #endif /* !CONFIG_EEH */
-#define __do_writesb(a, b, n)	_outsb(PCI_FIX_ADDR(a),(b),(n))
-#define __do_writesw(a, b, n)	_outsw(PCI_FIX_ADDR(a),(b),(n))
-#define __do_writesl(a, b, n)	_outsl(PCI_FIX_ADDR(a),(b),(n))
+#define __do_writesb(a, b, n)	_outsb(a, (b), (n))
+#define __do_writesw(a, b, n)	_outsw(a, (b), (n))
+#define __do_writesl(a, b, n)	_outsl(a, (b), (n))
 
-#define __do_insb(p, b, n)	readsb((PCI_IO_ADDR)(_IO_BASE+(p)), (b), (n))
-#define __do_insw(p, b, n)	readsw((PCI_IO_ADDR)(_IO_BASE+(p)), (b), (n))
-#define __do_insl(p, b, n)	readsl((PCI_IO_ADDR)(_IO_BASE+(p)), (b), (n))
-#define __do_outsb(p, b, n)	writesb((PCI_IO_ADDR)(_IO_BASE+(p)),(b),(n))
-#define __do_outsw(p, b, n)	writesw((PCI_IO_ADDR)(_IO_BASE+(p)),(b),(n))
-#define __do_outsl(p, b, n)	writesl((PCI_IO_ADDR)(_IO_BASE+(p)),(b),(n))
-
-#define __do_memset_io(addr, c, n)	\
-		_memset_io(PCI_FIX_ADDR(addr), c, n)
-#define __do_memcpy_toio(dst, src, n)	\
-		_memcpy_toio(PCI_FIX_ADDR(dst), src, n)
+#define __do_insb(p, b, n)	readsb(_IO_PORT(p), (b), (n))
+#define __do_insw(p, b, n)	readsw(_IO_PORT(p), (b), (n))
+#define __do_insl(p, b, n)	readsl(_IO_PORT(p), (b), (n))
+#define __do_outsb(p, b, n)	writesb(_IO_PORT(p),(b),(n))
+#define __do_outsw(p, b, n)	writesw(_IO_PORT(p),(b),(n))
+#define __do_outsl(p, b, n)	writesl(_IO_PORT(p),(b),(n))
 
 #ifdef CONFIG_EEH
 #define __do_memcpy_fromio(dst, src, n)	\
-	eeh_memcpy_fromio(dst, PCI_FIX_ADDR(src), n)
+	eeh_memcpy_fromio(dst, src, n)
 #else /* CONFIG_EEH */
 #define __do_memcpy_fromio(dst, src, n)	\
-	_memcpy_fromio(dst,PCI_FIX_ADDR(src),n)
+	_memcpy_fromio(dst, src, n)
 #endif /* !CONFIG_EEH */
 
-#ifdef CONFIG_PPC_INDIRECT_PIO
-#define DEF_PCI_HOOK_pio(x)	x
-#else
-#define DEF_PCI_HOOK_pio(x)	NULL
-#endif
+static inline u8 readb(const volatile void __iomem *addr)
+{
+	return __do_readb(addr);
+}
+#define readb readb
 
-#ifdef CONFIG_PPC_INDIRECT_MMIO
-#define DEF_PCI_HOOK_mem(x)	x
+static inline u16 readw(const volatile void __iomem *addr)
+{
+	return __do_readw(addr);
+}
+#define readw readw
+
+static inline u32 readl(const volatile void __iomem *addr)
+{
+	return __do_readl(addr);
+}
+#define readl readl
+
+static inline u16 readw_be(const volatile void __iomem *addr)
+{
+	return __do_readw_be(addr);
+}
+
+static inline u32 readl_be(const volatile void __iomem *addr)
+{
+	return __do_readl_be(addr);
+}
+
+static inline void writeb(u8 val, volatile void __iomem *addr)
+{
+	out_8(addr, val);
+}
+#define writeb writeb
+
+static inline void writew(u16 val, volatile void __iomem *addr)
+{
+	out_le16(addr, val);
+}
+#define writew writew
+
+static inline void writel(u32 val, volatile void __iomem *addr)
+{
+	out_le32(addr, val);
+}
+#define writel writel
+
+static inline void writew_be(u16 val, volatile void __iomem *addr)
+{
+	out_be16(addr, val);
+}
+
+static inline void writel_be(u32 val, volatile void __iomem *addr)
+{
+	out_be32(addr, val);
+}
+
+static inline void readsb(const volatile void __iomem *a, void *b, unsigned long c)
+{
+	__do_readsb(a, b, c);
+}
+#define readsb readsb
+
+static inline void readsw(const volatile void __iomem *a, void *b, unsigned long c)
+{
+	__do_readsw(a, b, c);
+}
+#define readsw readsw
+
+static inline void readsl(const volatile void __iomem *a, void *b, unsigned long c)
+{
+	__do_readsl(a, b, c);
+}
+#define readsl readsl
+
+static inline void writesb(volatile void __iomem *a, const void *b, unsigned long c)
+{
+	__do_writesb(a, b, c);
+}
+#define writesb writesb
+
+static inline void writesw(volatile void __iomem *a, const void *b, unsigned long c)
+{
+	__do_writesw(a, b, c);
+}
+#define writesw writesw
+
+static inline void writesl(volatile void __iomem *a, const void *b, unsigned long c)
+{
+	__do_writesl(a, b, c);
+}
+#define writesl writesl
+
+static inline void memset_io(volatile void __iomem *a, int c, unsigned long n)
+{
+	_memset_io(a, c, n);
+}
+#define memset_io memset_io
+
+static inline void memcpy_fromio(void *d, const volatile void __iomem *s, unsigned long n)
+{
+	__do_memcpy_fromio(d, s, n);
+}
+#define memcpy_fromio memcpy_fromio
+
+static inline void memcpy_toio(volatile void __iomem *d, const void *s, unsigned long n)
+{
+	_memcpy_toio(d, s, n);
+}
+#define memcpy_toio memcpy_toio
+
+#ifdef __powerpc64__
+static inline u64 readq(const volatile void __iomem *addr)
+{
+	return __do_readq(addr);
+}
+
+static inline u64 readq_be(const volatile void __iomem *addr)
+{
+	return __do_readq_be(addr);
+}
+
+static inline void writeq(u64 val, volatile void __iomem *addr)
+{
+	out_le64(addr, val);
+}
+
+static inline void writeq_be(u64 val, volatile void __iomem *addr)
+{
+	out_be64(addr, val);
+}
+#endif /* __powerpc64__ */
+
+#ifdef CONFIG_PPC_INDIRECT_PIO
+#define DEF_PCI_HOOK(x)	x
 #else
-#define DEF_PCI_HOOK_mem(x)	NULL
+#define DEF_PCI_HOOK(x)	NULL
 #endif
 
 /* Structure containing all the hooks */
 extern struct ppc_pci_io {
 
-#define DEF_PCI_AC_RET(name, ret, at, al, space, aa)	ret (*name) at;
-#define DEF_PCI_AC_NORET(name, at, al, space, aa)	void (*name) at;
+#define DEF_PCI_AC_RET(name, ret, at, al)	ret (*name) at;
+#define DEF_PCI_AC_NORET(name, at, al)		void (*name) at;
 
 #include <asm/io-defs.h>
 
···
 } ppc_pci_io;
 
 /* The inline wrappers */
-#define DEF_PCI_AC_RET(name, ret, at, al, space, aa)		\
+#define DEF_PCI_AC_RET(name, ret, at, al)			\
 static inline ret name at					\
 {								\
-	if (DEF_PCI_HOOK_##space(ppc_pci_io.name) != NULL)	\
+	if (DEF_PCI_HOOK(ppc_pci_io.name) != NULL)		\
 		return ppc_pci_io.name al;			\
 	return __do_##name al;					\
 }
 
-#define DEF_PCI_AC_NORET(name, at, al, space, aa)		\
+#define DEF_PCI_AC_NORET(name, at, al)				\
 static inline void name at					\
 {								\
-	if (DEF_PCI_HOOK_##space(ppc_pci_io.name) != NULL)	\
+	if (DEF_PCI_HOOK(ppc_pci_io.name) != NULL)		\
 		ppc_pci_io.name al;				\
 	else							\
 		__do_##name al;					\
···
 #undef DEF_PCI_AC_RET
 #undef DEF_PCI_AC_NORET
 
-/* Some drivers check for the presence of readq & writeq with
- * a #ifdef, so we make them happy here.
- */
-#define readb readb
-#define readw readw
-#define readl readl
-#define writeb writeb
-#define writew writew
-#define writel writel
-#define readsb readsb
-#define readsw readsw
-#define readsl readsl
-#define writesb writesb
-#define writesw writesw
-#define writesl writesl
+// Signal to asm-generic/io.h that we have implemented these.
688 678 #define inb inb 689 679 #define inw inw 690 680 #define inl inl ··· 687 705 #define readq readq 688 706 #define writeq writeq 689 707 #endif 690 - #define memset_io memset_io 691 - #define memcpy_fromio memcpy_fromio 692 - #define memcpy_toio memcpy_toio 693 708 694 709 /* 695 710 * We don't do relaxed operations yet, at least not with this semantic ··· 960 981 #define clrsetbits_8(addr, clear, set) clrsetbits(8, addr, clear, set) 961 982 962 983 #include <asm-generic/io.h> 984 + 985 + #ifdef __powerpc64__ 986 + static inline void __raw_writeq_be(unsigned long v, volatile void __iomem *addr) 987 + { 988 + __raw_writeq((__force unsigned long)cpu_to_be64(v), addr); 989 + } 990 + #define __raw_writeq_be __raw_writeq_be 991 + #endif // __powerpc64__ 963 992 964 993 #endif /* __KERNEL__ */ 965 994
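The io.h rework above replaces the separate `DEF_PCI_HOOK_pio`/`DEF_PCI_HOOK_mem` macros with a single `DEF_PCI_HOOK`, keeping the same dispatch pattern: a table of optional function pointers consulted first, with a fall-through to the default accessor. A minimal sketch of that pattern in plain C (names here are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stddef.h>

/* Simulated device register and the default accessor. */
static unsigned char sim_reg = 0x5a;
static unsigned char default_readb(const void *addr)
{
    (void)addr;
    return sim_reg;
}

/* Optional hook table, in the spirit of struct ppc_pci_io. */
struct io_hooks {
    unsigned char (*readb)(const void *addr);
};
static struct io_hooks io_hooks;  /* all-NULL until a platform installs hooks */

/* Inline wrapper, analogous to a DEF_PCI_AC_RET expansion:
 * take the hook when one is installed, else the default path. */
static inline unsigned char my_readb(const void *addr)
{
    if (io_hooks.readb != NULL)
        return io_hooks.readb(addr);
    return default_readb(addr);
}

/* A platform override, standing in for the indirect-PIO backend. */
static unsigned char hooked_readb(const void *addr)
{
    (void)addr;
    return 0xff;
}
```

Installing `hooked_readb` into the table reroutes every subsequent access, which is how `CONFIG_PPC_INDIRECT_PIO` platforms interpose on port I/O without touching the callers.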
-6
arch/powerpc/include/asm/iommu.h
···
317 317 extern enum dma_data_direction iommu_tce_direction(unsigned long tce);
318 318 extern unsigned long iommu_direction_to_tce_perm(enum dma_data_direction dir);
319 319
320 - #ifdef CONFIG_PPC_CELL_NATIVE
321 - extern bool iommu_fixed_is_weak;
322 - #else
323 - #define iommu_fixed_is_weak false
324 - #endif
325 -
326 320 extern const struct dma_map_ops dma_iommu_ops;
327 321
328 322 #endif /* __KERNEL__ */
+1
arch/powerpc/include/asm/mmzone.h
···
29 29 #ifdef CONFIG_MEMORY_HOTPLUG
30 30 extern unsigned long max_pfn;
31 31 u64 memory_hotplug_max(void);
32 + u64 hot_add_drconf_memory_max(void);
32 33 #else
33 34 #define memory_hotplug_max() memblock_end_of_DRAM()
34 35 #endif
+21
arch/powerpc/include/asm/plpar_wrappers.h
···
65 65 	return vpa_call(H_VPA_REG_DTL, cpu, vpa);
66 66 }
67 67
68 + static inline long htm_call(unsigned long flags, unsigned long target,
69 + 		unsigned long operation, unsigned long param1,
70 + 		unsigned long param2, unsigned long param3)
71 + {
72 + 	return plpar_hcall_norets(H_HTM, flags, target, operation,
73 + 			param1, param2, param3);
74 + }
75 +
76 + static inline long htm_get_dump_hardware(unsigned long nodeindex,
77 + 		unsigned long nodalchipindex, unsigned long coreindexonchip,
78 + 		unsigned long type, unsigned long addr, unsigned long size,
79 + 		unsigned long offset)
80 + {
81 + 	return htm_call(H_HTM_FLAGS_HARDWARE_TARGET,
82 + 			H_HTM_TARGET_NODE_INDEX(nodeindex) |
83 + 			H_HTM_TARGET_NODAL_CHIP_INDEX(nodalchipindex) |
84 + 			H_HTM_TARGET_CORE_INDEX_ON_CHIP(coreindexonchip),
85 + 			H_HTM_OP(H_HTM_OP_DUMP_DATA) | H_HTM_TYPE(type),
86 + 			addr, size, offset);
87 + }
88 +
68 89 extern void vpa_init(int cpu);
69 90
70 91 static inline long plpar_pte_enter(unsigned long flags,
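The `htm_get_dump_hardware()` wrapper above is plain argument packing: several small hardware indices are encoded and OR-ed into one target word before the hcall is issued. A minimal sketch of that packing style follows; the shift amounts are made up for the example and are not the real `H_HTM_TARGET_*` layouts, which live in the PAPR headers:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative field encoders -- the shift values are hypothetical,
 * chosen only to demonstrate the packing pattern. */
#define TGT_NODE(n)  ((uint64_t)(n) << 32)
#define TGT_CHIP(c)  ((uint64_t)(c) << 16)
#define TGT_CORE(k)  ((uint64_t)(k) << 0)

/* Same style as htm_get_dump_hardware(): OR the encoded fields into
 * one word that a lower-level call (here, a return value) consumes. */
static uint64_t pack_target(unsigned int node, unsigned int chip,
                            unsigned int core)
{
    return TGT_NODE(node) | TGT_CHIP(chip) | TGT_CORE(core);
}
```

Keeping the encoders as macros, as the kernel does, lets the call site read like a list of named fields rather than raw shift arithmetic.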
-53
arch/powerpc/include/asm/pmi.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0-or-later */ 2 - #ifndef _POWERPC_PMI_H 3 - #define _POWERPC_PMI_H 4 - 5 - /* 6 - * Definitions for talking with PMI device on PowerPC 7 - * 8 - * PMI (Platform Management Interrupt) is a way to communicate 9 - * with the BMC (Baseboard Management Controller) via interrupts. 10 - * Unlike IPMI it is bidirectional and has a low latency. 11 - * 12 - * (C) Copyright IBM Deutschland Entwicklung GmbH 2005 13 - * 14 - * Author: Christian Krafft <krafft@de.ibm.com> 15 - */ 16 - 17 - #ifdef __KERNEL__ 18 - 19 - #define PMI_TYPE_FREQ_CHANGE 0x01 20 - #define PMI_TYPE_POWER_BUTTON 0x02 21 - #define PMI_READ_TYPE 0 22 - #define PMI_READ_DATA0 1 23 - #define PMI_READ_DATA1 2 24 - #define PMI_READ_DATA2 3 25 - #define PMI_WRITE_TYPE 4 26 - #define PMI_WRITE_DATA0 5 27 - #define PMI_WRITE_DATA1 6 28 - #define PMI_WRITE_DATA2 7 29 - 30 - #define PMI_ACK 0x80 31 - 32 - #define PMI_TIMEOUT 100 33 - 34 - typedef struct { 35 - u8 type; 36 - u8 data0; 37 - u8 data1; 38 - u8 data2; 39 - } pmi_message_t; 40 - 41 - struct pmi_handler { 42 - struct list_head node; 43 - u8 type; 44 - void (*handle_pmi_message) (pmi_message_t); 45 - }; 46 - 47 - int pmi_register_handler(struct pmi_handler *); 48 - void pmi_unregister_handler(struct pmi_handler *); 49 - 50 - int pmi_send_message(pmi_message_t); 51 - 52 - #endif /* __KERNEL__ */ 53 - #endif /* _POWERPC_PMI_H */
+2
arch/powerpc/include/asm/prom.h
···
17 17 struct device_node;
18 18 struct property;
19 19
20 + #define MIN_RMA 768		/* Minimum RMA (in MB) for CAS negotiation */
21 +
20 22 #define OF_DT_BEGIN_NODE 0x1 /* Start of node, full name */
21 23 #define OF_DT_END_NODE 0x2 /* End node */
22 24 #define OF_DT_PROP 0x3 /* Property: name off, size,
-2
arch/powerpc/include/asm/spu_priv1.h
···
215 215 * and only intended to be used by the platform setup code.
216 216 */
217 217
218 - extern const struct spu_priv1_ops spu_priv1_mmio_ops;
219 -
220 218 extern const struct spu_management_ops spu_management_of_ops;
221 219
222 220 #endif /* __KERNEL__ */
+2
arch/powerpc/include/asm/static_call.h
···
26 26 #define ARCH_DEFINE_STATIC_CALL_NULL_TRAMP(name) __PPC_SCT(name, "blr")
27 27 #define ARCH_DEFINE_STATIC_CALL_RET0_TRAMP(name) __PPC_SCT(name, "b .+20")
28 28
29 + #define CALL_INSN_SIZE 4
30 +
29 31 #endif /* _ASM_POWERPC_STATIC_CALL_H */
-3
arch/powerpc/include/asm/time.h
···
89 89 #define mulhdu(x, y) mul_u64_u64_shr(x, y, 64)
90 90 #endif
91 91
92 - extern void div128_by_32(u64 dividend_high, u64 dividend_low,
93 - 			 unsigned divisor, struct div_result *dr);
94 -
95 92 extern void secondary_cpu_time_init(void);
96 93 extern void __init time_init(void);
97 94
-1
arch/powerpc/include/asm/xics.h
···
31 31 #ifdef CONFIG_PPC_ICP_NATIVE
32 32 extern int icp_native_init(void);
33 33 extern void icp_native_flush_interrupt(void);
34 - extern void icp_native_cause_ipi_rm(int cpu);
35 34 #else
36 35 static inline int icp_native_init(void) { return -ENODEV; }
37 36 #endif
-2
arch/powerpc/include/asm/xmon.h
···
12 12
13 13 #ifdef CONFIG_XMON
14 14 extern void xmon_setup(void);
15 - void __init xmon_register_spus(struct list_head *list);
16 15 struct pt_regs;
17 16 extern int xmon(struct pt_regs *excp);
18 17 extern irqreturn_t xmon_irq(int, void *);
19 18 #else
20 19 static inline void xmon_setup(void) { }
21 - static inline void xmon_register_spus(struct list_head *list) { }
22 20 #endif
23 21
24 22 #if defined(CONFIG_XMON) && defined(CONFIG_SMP)
+1 -3
arch/powerpc/kernel/Makefile
···
70 70 				   signal.o sysfs.o cacheinfo.o time.o \
71 71 				   prom.o traps.o setup-common.o \
72 72 				   udbg.o misc.o io.o misc_$(BITS).o \
73 - 				   of_platform.o prom_parse.o firmware.o \
73 + 				   prom_parse.o firmware.o \
74 74 				   hw_breakpoint_constraints.o interrupt.o \
75 75 				   kdebugfs.o stacktrace.o syscall.o
76 76 obj-y += ptrace/
···
151 151
152 152 obj-$(CONFIG_AUDIT) += audit.o
153 153 obj64-$(CONFIG_AUDIT) += compat_audit.o
154 -
155 - obj-$(CONFIG_PPC_IO_WORKAROUNDS) += io-workarounds.o
156 154
157 155 obj-y += trace/
158 156
+1 -1
arch/powerpc/kernel/dma-iommu.c
···
136 136 	struct pci_dev *pdev = to_pci_dev(dev);
137 137 	struct pci_controller *phb = pci_bus_to_host(pdev->bus);
138 138
139 - 	if (iommu_fixed_is_weak || !phb->controller_ops.iommu_bypass_supported)
139 + 	if (!phb->controller_ops.iommu_bypass_supported)
140 140 		return false;
141 141 	return phb->controller_ops.iommu_bypass_supported(pdev, mask);
142 142 }
-55
arch/powerpc/kernel/exceptions-64s.S
··· 2537 2537 EXC_VIRT_NONE(0x5000, 0x100) 2538 2538 EXC_REAL_NONE(0x1100, 0x100) 2539 2539 EXC_VIRT_NONE(0x5100, 0x100) 2540 - 2541 - #ifdef CONFIG_CBE_RAS 2542 - INT_DEFINE_BEGIN(cbe_system_error) 2543 - IVEC=0x1200 2544 - IHSRR=1 2545 - INT_DEFINE_END(cbe_system_error) 2546 - 2547 - EXC_REAL_BEGIN(cbe_system_error, 0x1200, 0x100) 2548 - GEN_INT_ENTRY cbe_system_error, virt=0 2549 - EXC_REAL_END(cbe_system_error, 0x1200, 0x100) 2550 - EXC_VIRT_NONE(0x5200, 0x100) 2551 - EXC_COMMON_BEGIN(cbe_system_error_common) 2552 - GEN_COMMON cbe_system_error 2553 - addi r3,r1,STACK_INT_FRAME_REGS 2554 - bl CFUNC(cbe_system_error_exception) 2555 - b interrupt_return_hsrr 2556 - 2557 - #else /* CONFIG_CBE_RAS */ 2558 2540 EXC_REAL_NONE(0x1200, 0x100) 2559 2541 EXC_VIRT_NONE(0x5200, 0x100) 2560 - #endif 2561 2542 2562 2543 /** 2563 2544 * Interrupt 0x1300 - Instruction Address Breakpoint Interrupt. ··· 2689 2708 b interrupt_return_hsrr 2690 2709 2691 2710 2692 - #ifdef CONFIG_CBE_RAS 2693 - INT_DEFINE_BEGIN(cbe_maintenance) 2694 - IVEC=0x1600 2695 - IHSRR=1 2696 - INT_DEFINE_END(cbe_maintenance) 2697 - 2698 - EXC_REAL_BEGIN(cbe_maintenance, 0x1600, 0x100) 2699 - GEN_INT_ENTRY cbe_maintenance, virt=0 2700 - EXC_REAL_END(cbe_maintenance, 0x1600, 0x100) 2701 - EXC_VIRT_NONE(0x5600, 0x100) 2702 - EXC_COMMON_BEGIN(cbe_maintenance_common) 2703 - GEN_COMMON cbe_maintenance 2704 - addi r3,r1,STACK_INT_FRAME_REGS 2705 - bl CFUNC(cbe_maintenance_exception) 2706 - b interrupt_return_hsrr 2707 - 2708 - #else /* CONFIG_CBE_RAS */ 2709 2711 EXC_REAL_NONE(0x1600, 0x100) 2710 2712 EXC_VIRT_NONE(0x5600, 0x100) 2711 - #endif 2712 2713 2713 2714 2714 2715 INT_DEFINE_BEGIN(altivec_assist) ··· 2718 2755 b interrupt_return_srr 2719 2756 2720 2757 2721 - #ifdef CONFIG_CBE_RAS 2722 - INT_DEFINE_BEGIN(cbe_thermal) 2723 - IVEC=0x1800 2724 - IHSRR=1 2725 - INT_DEFINE_END(cbe_thermal) 2726 - 2727 - EXC_REAL_BEGIN(cbe_thermal, 0x1800, 0x100) 2728 - GEN_INT_ENTRY cbe_thermal, virt=0 2729 - 
EXC_REAL_END(cbe_thermal, 0x1800, 0x100) 2730 - EXC_VIRT_NONE(0x5800, 0x100) 2731 - EXC_COMMON_BEGIN(cbe_thermal_common) 2732 - GEN_COMMON cbe_thermal 2733 - addi r3,r1,STACK_INT_FRAME_REGS 2734 - bl CFUNC(cbe_thermal_exception) 2735 - b interrupt_return_hsrr 2736 - 2737 - #else /* CONFIG_CBE_RAS */ 2738 2758 EXC_REAL_NONE(0x1800, 0x100) 2739 2759 EXC_VIRT_NONE(0x5800, 0x100) 2740 - #endif 2741 2760 2742 2761 2743 2762 #ifdef CONFIG_PPC_WATCHDOG
+11 -10
arch/powerpc/kernel/fadump.c
···
33 33 #include <asm/fadump-internal.h>
34 34 #include <asm/setup.h>
35 35 #include <asm/interrupt.h>
36 + #include <asm/prom.h>
36 37
37 38 /*
38 39 * The CPU who acquired the lock to trigger the fadump crash should
···
1765 1764 		range_end = memblock_end_of_DRAM();
1766 1765 	} else {
1767 1766 		/*
1768 - 		 * Passing additional parameters is supported for hash MMU only
1769 - 		 * if the first memory block size is 768MB or higher.
1767 + 		 * Memory range for passing additional parameters for HASH MMU
1768 + 		 * must meet the following conditions:
1769 + 		 * 1. The first memory block size must be higher than the
1770 + 		 *    minimum RMA (MIN_RMA) size. Bootloader can use memory
1771 + 		 *    upto RMA size. So it should be avoided.
1772 + 		 * 2. The range should be between MIN_RMA and RMA size (ppc64_rma_size)
1773 + 		 * 3. It must not overlap with the fadump reserved area.
1770 1774 		 */
1771 - 		if (ppc64_rma_size < 0x30000000)
1775 + 		if (ppc64_rma_size < MIN_RMA*1024*1024)
1772 1776 			return;
1773 1777
1774 - 		/*
1775 - 		 * 640 MB to 768 MB is not used by PFW/bootloader. So, try reserving
1776 - 		 * memory for passing additional parameters in this range to avoid
1777 - 		 * being stomped on by PFW/bootloader.
1778 - 		 */
1779 - 		range_start = 0x2A000000;
1780 - 		range_end = range_start + 0x4000000;
1778 + 		range_start = MIN_RMA * 1024 * 1024;
1779 + 		range_end = min(ppc64_rma_size, fw_dump.boot_mem_top);
1781 1780
1783 1782 	fw_dump.param_area = memblock_phys_alloc_range(COMMAND_LINE_SIZE,
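The fadump change above replaces the hard-coded 640 MB..704 MB window with a range derived from the new `MIN_RMA` constant: the parameter area now starts at the MIN_RMA boundary and ends at whichever comes first, the RMA boundary or the top of boot memory. The arithmetic can be sketched as follows (an illustrative helper, not the kernel function, which also checks overlap with the fadump reservation):

```c
#include <assert.h>
#include <stdint.h>

#define MB (1024ULL * 1024)
#define MIN_RMA 768ULL   /* in MB, matching the new define in asm/prom.h */

/* Range selection for the hash-MMU branch: fail when the RMA is too
 * small, otherwise return [MIN_RMA, min(rma_size, boot_mem_top)). */
static int param_area_range(uint64_t rma_size, uint64_t boot_mem_top,
                            uint64_t *start, uint64_t *end)
{
    if (rma_size < MIN_RMA * MB)
        return -1;            /* first memory block too small, no param area */
    *start = MIN_RMA * MB;
    *end = rma_size < boot_mem_top ? rma_size : boot_mem_top;
    return 0;
}
```

With a 1 GB RMA and 2 GB of boot memory this yields the 768 MB..1024 MB window; a 512 MB RMA fails the check, which is why the series also raises the CAS-negotiated minimum RMA to 768 MB.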
-197
arch/powerpc/kernel/io-workarounds.c
··· 1 - // SPDX-License-Identifier: GPL-2.0-only 2 - /* 3 - * Support PCI IO workaround 4 - * 5 - * Copyright (C) 2006 Benjamin Herrenschmidt <benh@kernel.crashing.org> 6 - * IBM, Corp. 7 - * (C) Copyright 2007-2008 TOSHIBA CORPORATION 8 - */ 9 - #undef DEBUG 10 - 11 - #include <linux/kernel.h> 12 - #include <linux/sched/mm.h> /* for init_mm */ 13 - #include <linux/pgtable.h> 14 - 15 - #include <asm/io.h> 16 - #include <asm/machdep.h> 17 - #include <asm/ppc-pci.h> 18 - #include <asm/io-workarounds.h> 19 - #include <asm/pte-walk.h> 20 - 21 - 22 - #define IOWA_MAX_BUS 8 23 - 24 - static struct iowa_bus iowa_busses[IOWA_MAX_BUS]; 25 - static unsigned int iowa_bus_count; 26 - 27 - static struct iowa_bus *iowa_pci_find(unsigned long vaddr, unsigned long paddr) 28 - { 29 - int i, j; 30 - struct resource *res; 31 - unsigned long vstart, vend; 32 - 33 - for (i = 0; i < iowa_bus_count; i++) { 34 - struct iowa_bus *bus = &iowa_busses[i]; 35 - struct pci_controller *phb = bus->phb; 36 - 37 - if (vaddr) { 38 - vstart = (unsigned long)phb->io_base_virt; 39 - vend = vstart + phb->pci_io_size - 1; 40 - if ((vaddr >= vstart) && (vaddr <= vend)) 41 - return bus; 42 - } 43 - 44 - if (paddr) 45 - for (j = 0; j < 3; j++) { 46 - res = &phb->mem_resources[j]; 47 - if (paddr >= res->start && paddr <= res->end) 48 - return bus; 49 - } 50 - } 51 - 52 - return NULL; 53 - } 54 - 55 - #ifdef CONFIG_PPC_INDIRECT_MMIO 56 - struct iowa_bus *iowa_mem_find_bus(const PCI_IO_ADDR addr) 57 - { 58 - struct iowa_bus *bus; 59 - int token; 60 - 61 - token = PCI_GET_ADDR_TOKEN(addr); 62 - 63 - if (token && token <= iowa_bus_count) 64 - bus = &iowa_busses[token - 1]; 65 - else { 66 - unsigned long vaddr, paddr; 67 - 68 - vaddr = (unsigned long)PCI_FIX_ADDR(addr); 69 - if (vaddr < PHB_IO_BASE || vaddr >= PHB_IO_END) 70 - return NULL; 71 - 72 - paddr = ppc_find_vmap_phys(vaddr); 73 - 74 - bus = iowa_pci_find(vaddr, paddr); 75 - 76 - if (bus == NULL) 77 - return NULL; 78 - } 79 - 80 - return bus; 81 - } 82 - 
#else /* CONFIG_PPC_INDIRECT_MMIO */ 83 - struct iowa_bus *iowa_mem_find_bus(const PCI_IO_ADDR addr) 84 - { 85 - return NULL; 86 - } 87 - #endif /* !CONFIG_PPC_INDIRECT_MMIO */ 88 - 89 - #ifdef CONFIG_PPC_INDIRECT_PIO 90 - struct iowa_bus *iowa_pio_find_bus(unsigned long port) 91 - { 92 - unsigned long vaddr = (unsigned long)pci_io_base + port; 93 - return iowa_pci_find(vaddr, 0); 94 - } 95 - #else 96 - struct iowa_bus *iowa_pio_find_bus(unsigned long port) 97 - { 98 - return NULL; 99 - } 100 - #endif 101 - 102 - #define DEF_PCI_AC_RET(name, ret, at, al, space, aa) \ 103 - static ret iowa_##name at \ 104 - { \ 105 - struct iowa_bus *bus; \ 106 - bus = iowa_##space##_find_bus(aa); \ 107 - if (bus && bus->ops && bus->ops->name) \ 108 - return bus->ops->name al; \ 109 - return __do_##name al; \ 110 - } 111 - 112 - #define DEF_PCI_AC_NORET(name, at, al, space, aa) \ 113 - static void iowa_##name at \ 114 - { \ 115 - struct iowa_bus *bus; \ 116 - bus = iowa_##space##_find_bus(aa); \ 117 - if (bus && bus->ops && bus->ops->name) { \ 118 - bus->ops->name al; \ 119 - return; \ 120 - } \ 121 - __do_##name al; \ 122 - } 123 - 124 - #include <asm/io-defs.h> 125 - 126 - #undef DEF_PCI_AC_RET 127 - #undef DEF_PCI_AC_NORET 128 - 129 - static const struct ppc_pci_io iowa_pci_io = { 130 - 131 - #define DEF_PCI_AC_RET(name, ret, at, al, space, aa) .name = iowa_##name, 132 - #define DEF_PCI_AC_NORET(name, at, al, space, aa) .name = iowa_##name, 133 - 134 - #include <asm/io-defs.h> 135 - 136 - #undef DEF_PCI_AC_RET 137 - #undef DEF_PCI_AC_NORET 138 - 139 - }; 140 - 141 - #ifdef CONFIG_PPC_INDIRECT_MMIO 142 - void __iomem *iowa_ioremap(phys_addr_t addr, unsigned long size, 143 - pgprot_t prot, void *caller) 144 - { 145 - struct iowa_bus *bus; 146 - void __iomem *res = __ioremap_caller(addr, size, prot, caller); 147 - int busno; 148 - 149 - bus = iowa_pci_find(0, (unsigned long)addr); 150 - if (bus != NULL) { 151 - busno = bus - iowa_busses; 152 - PCI_SET_ADDR_TOKEN(res, busno + 1); 153 
- } 154 - return res; 155 - } 156 - #endif /* !CONFIG_PPC_INDIRECT_MMIO */ 157 - 158 - bool io_workaround_inited; 159 - 160 - /* Enable IO workaround */ 161 - static void io_workaround_init(void) 162 - { 163 - if (io_workaround_inited) 164 - return; 165 - ppc_pci_io = iowa_pci_io; 166 - io_workaround_inited = true; 167 - } 168 - 169 - /* Register new bus to support workaround */ 170 - void iowa_register_bus(struct pci_controller *phb, struct ppc_pci_io *ops, 171 - int (*initfunc)(struct iowa_bus *, void *), void *data) 172 - { 173 - struct iowa_bus *bus; 174 - struct device_node *np = phb->dn; 175 - 176 - io_workaround_init(); 177 - 178 - if (iowa_bus_count >= IOWA_MAX_BUS) { 179 - pr_err("IOWA:Too many pci bridges, " 180 - "workarounds disabled for %pOF\n", np); 181 - return; 182 - } 183 - 184 - bus = &iowa_busses[iowa_bus_count]; 185 - bus->phb = phb; 186 - bus->ops = ops; 187 - bus->private = data; 188 - 189 - if (initfunc) 190 - if ((*initfunc)(bus, data)) 191 - return; 192 - 193 - iowa_bus_count++; 194 - 195 - pr_debug("IOWA:[%d]Add bus, %pOF.\n", iowa_bus_count-1, np); 196 - } 197 -
+32 -26
arch/powerpc/kernel/io.c
··· 31 31 32 32 if (unlikely(count <= 0)) 33 33 return; 34 - asm volatile("sync"); 34 + 35 + mb(); 35 36 do { 36 37 tmp = *(const volatile u8 __force *)port; 37 38 eieio(); 38 39 *tbuf++ = tmp; 39 40 } while (--count != 0); 40 - asm volatile("twi 0,%0,0; isync" : : "r" (tmp)); 41 + data_barrier(tmp); 41 42 } 42 43 EXPORT_SYMBOL(_insb); 43 44 ··· 48 47 49 48 if (unlikely(count <= 0)) 50 49 return; 51 - asm volatile("sync"); 50 + 51 + mb(); 52 52 do { 53 53 *(volatile u8 __force *)port = *tbuf++; 54 54 } while (--count != 0); 55 - asm volatile("sync"); 55 + mb(); 56 56 } 57 57 EXPORT_SYMBOL(_outsb); 58 58 59 - void _insw_ns(const volatile u16 __iomem *port, void *buf, long count) 59 + void _insw(const volatile u16 __iomem *port, void *buf, long count) 60 60 { 61 61 u16 *tbuf = buf; 62 62 u16 tmp; 63 63 64 64 if (unlikely(count <= 0)) 65 65 return; 66 - asm volatile("sync"); 66 + 67 + mb(); 67 68 do { 68 69 tmp = *(const volatile u16 __force *)port; 69 70 eieio(); 70 71 *tbuf++ = tmp; 71 72 } while (--count != 0); 72 - asm volatile("twi 0,%0,0; isync" : : "r" (tmp)); 73 + data_barrier(tmp); 73 74 } 74 - EXPORT_SYMBOL(_insw_ns); 75 + EXPORT_SYMBOL(_insw); 75 76 76 - void _outsw_ns(volatile u16 __iomem *port, const void *buf, long count) 77 + void _outsw(volatile u16 __iomem *port, const void *buf, long count) 77 78 { 78 79 const u16 *tbuf = buf; 79 80 80 81 if (unlikely(count <= 0)) 81 82 return; 82 - asm volatile("sync"); 83 + 84 + mb(); 83 85 do { 84 86 *(volatile u16 __force *)port = *tbuf++; 85 87 } while (--count != 0); 86 - asm volatile("sync"); 88 + mb(); 87 89 } 88 - EXPORT_SYMBOL(_outsw_ns); 90 + EXPORT_SYMBOL(_outsw); 89 91 90 - void _insl_ns(const volatile u32 __iomem *port, void *buf, long count) 92 + void _insl(const volatile u32 __iomem *port, void *buf, long count) 91 93 { 92 94 u32 *tbuf = buf; 93 95 u32 tmp; 94 96 95 97 if (unlikely(count <= 0)) 96 98 return; 97 - asm volatile("sync"); 99 + 100 + mb(); 98 101 do { 99 102 tmp = *(const volatile u32 
__force *)port; 100 103 eieio(); 101 104 *tbuf++ = tmp; 102 105 } while (--count != 0); 103 - asm volatile("twi 0,%0,0; isync" : : "r" (tmp)); 106 + data_barrier(tmp); 104 107 } 105 - EXPORT_SYMBOL(_insl_ns); 108 + EXPORT_SYMBOL(_insl); 106 109 107 - void _outsl_ns(volatile u32 __iomem *port, const void *buf, long count) 110 + void _outsl(volatile u32 __iomem *port, const void *buf, long count) 108 111 { 109 112 const u32 *tbuf = buf; 110 113 111 114 if (unlikely(count <= 0)) 112 115 return; 113 - asm volatile("sync"); 116 + 117 + mb(); 114 118 do { 115 119 *(volatile u32 __force *)port = *tbuf++; 116 120 } while (--count != 0); 117 - asm volatile("sync"); 121 + mb(); 118 122 } 119 - EXPORT_SYMBOL(_outsl_ns); 123 + EXPORT_SYMBOL(_outsl); 120 124 121 125 #define IO_CHECK_ALIGN(v,a) ((((unsigned long)(v)) & ((a) - 1)) == 0) 122 126 ··· 133 127 lc |= lc << 8; 134 128 lc |= lc << 16; 135 129 136 - __asm__ __volatile__ ("sync" : : : "memory"); 130 + mb(); 137 131 while(n && !IO_CHECK_ALIGN(p, 4)) { 138 132 *((volatile u8 *)p) = c; 139 133 p++; ··· 149 143 p++; 150 144 n--; 151 145 } 152 - __asm__ __volatile__ ("sync" : : : "memory"); 146 + mb(); 153 147 } 154 148 EXPORT_SYMBOL(_memset_io); 155 149 ··· 158 152 { 159 153 void *vsrc = (void __force *) src; 160 154 161 - __asm__ __volatile__ ("sync" : : : "memory"); 155 + mb(); 162 156 while(n && (!IO_CHECK_ALIGN(vsrc, 4) || !IO_CHECK_ALIGN(dest, 4))) { 163 157 *((u8 *)dest) = *((volatile u8 *)vsrc); 164 158 eieio(); ··· 180 174 dest++; 181 175 n--; 182 176 } 183 - __asm__ __volatile__ ("sync" : : : "memory"); 177 + mb(); 184 178 } 185 179 EXPORT_SYMBOL(_memcpy_fromio); 186 180 ··· 188 182 { 189 183 void *vdest = (void __force *) dest; 190 184 191 - __asm__ __volatile__ ("sync" : : : "memory"); 185 + mb(); 192 186 while(n && (!IO_CHECK_ALIGN(vdest, 4) || !IO_CHECK_ALIGN(src, 4))) { 193 187 *((volatile u8 *)vdest) = *((u8 *)src); 194 188 src++; ··· 207 201 vdest++; 208 202 n--; 209 203 } 210 - __asm__ __volatile__ ("sync" : 
: : "memory"); 204 + mb(); 211 205 } 212 206 EXPORT_SYMBOL(_memcpy_toio);
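The io.c rewrite above swaps open-coded `asm volatile("sync")` / `twi; isync` sequences for the standard `mb()` and `data_barrier()` macros without changing the structure: a full barrier before the access loop and another after it. That shape can be sketched with a compiler/CPU barrier standing in for `mb()` (an illustration of the control flow only; real MMIO needs `volatile` accesses and the PowerPC-specific barriers):

```c
#include <assert.h>
#include <stdint.h>

/* Simulated 16-bit port: each read returns the next buffered value. */
static const uint16_t port_data[] = { 1, 2, 3, 4 };
static long port_pos;
static uint16_t read_port(void)
{
    return port_data[port_pos++];
}

/* Shape of _insw() after the cleanup: barrier, read loop, barrier. */
static void insw_sketch(uint16_t *buf, long count)
{
    if (count <= 0)
        return;
    __sync_synchronize();      /* stands in for mb() */
    do {
        *buf++ = read_port();  /* the real loop reads a volatile MMIO
                                  port with eieio() between accesses */
    } while (--count != 0);
    __sync_synchronize();      /* stands in for data_barrier(tmp), which
                                  orders the last read before later uses */
}
```

The point of the patch is readability, not semantics: `mb()` on powerpc expands to the same `sync` the old inline asm issued.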
-102
arch/powerpc/kernel/of_platform.c
··· 1 - // SPDX-License-Identifier: GPL-2.0-or-later 2 - /* 3 - * Copyright (C) 2006 Benjamin Herrenschmidt, IBM Corp. 4 - * <benh@kernel.crashing.org> 5 - * and Arnd Bergmann, IBM Corp. 6 - */ 7 - 8 - #undef DEBUG 9 - 10 - #include <linux/string.h> 11 - #include <linux/kernel.h> 12 - #include <linux/init.h> 13 - #include <linux/export.h> 14 - #include <linux/mod_devicetable.h> 15 - #include <linux/pci.h> 16 - #include <linux/platform_device.h> 17 - #include <linux/atomic.h> 18 - 19 - #include <asm/errno.h> 20 - #include <asm/topology.h> 21 - #include <asm/pci-bridge.h> 22 - #include <asm/ppc-pci.h> 23 - #include <asm/eeh.h> 24 - 25 - #ifdef CONFIG_PPC_OF_PLATFORM_PCI 26 - 27 - /* The probing of PCI controllers from of_platform is currently 28 - * 64 bits only, mostly due to gratuitous differences between 29 - * the 32 and 64 bits PCI code on PowerPC and the 32 bits one 30 - * lacking some bits needed here. 31 - */ 32 - 33 - static int of_pci_phb_probe(struct platform_device *dev) 34 - { 35 - struct pci_controller *phb; 36 - 37 - /* Check if we can do that ... */ 38 - if (ppc_md.pci_setup_phb == NULL) 39 - return -ENODEV; 40 - 41 - pr_info("Setting up PCI bus %pOF\n", dev->dev.of_node); 42 - 43 - /* Alloc and setup PHB data structure */ 44 - phb = pcibios_alloc_controller(dev->dev.of_node); 45 - if (!phb) 46 - return -ENODEV; 47 - 48 - /* Setup parent in sysfs */ 49 - phb->parent = &dev->dev; 50 - 51 - /* Setup the PHB using arch provided callback */ 52 - if (ppc_md.pci_setup_phb(phb)) { 53 - pcibios_free_controller(phb); 54 - return -ENODEV; 55 - } 56 - 57 - /* Process "ranges" property */ 58 - pci_process_bridge_OF_ranges(phb, dev->dev.of_node, 0); 59 - 60 - /* Init pci_dn data structures */ 61 - pci_devs_phb_init_dynamic(phb); 62 - 63 - /* Create EEH PE for the PHB */ 64 - eeh_phb_pe_create(phb); 65 - 66 - /* Scan the bus */ 67 - pcibios_scan_phb(phb); 68 - if (phb->bus == NULL) 69 - return -ENXIO; 70 - 71 - /* Claim resources. 
This might need some rework as well depending 72 - * whether we are doing probe-only or not, like assigning unassigned 73 - * resources etc... 74 - */ 75 - pcibios_claim_one_bus(phb->bus); 76 - 77 - /* Add probed PCI devices to the device model */ 78 - pci_bus_add_devices(phb->bus); 79 - 80 - return 0; 81 - } 82 - 83 - static const struct of_device_id of_pci_phb_ids[] = { 84 - { .type = "pci", }, 85 - { .type = "pcix", }, 86 - { .type = "pcie", }, 87 - { .type = "pciex", }, 88 - { .type = "ht", }, 89 - {} 90 - }; 91 - 92 - static struct platform_driver of_pci_phb_driver = { 93 - .probe = of_pci_phb_probe, 94 - .driver = { 95 - .name = "of-pci", 96 - .of_match_table = of_pci_phb_ids, 97 - }, 98 - }; 99 - 100 - builtin_platform_driver(of_pci_phb_driver); 101 - 102 - #endif /* CONFIG_PPC_OF_PLATFORM_PCI */
+3 -3
arch/powerpc/kernel/prom_init.c
···
1061 1061 	.virt_base = cpu_to_be32(0xffffffff),
1062 1062 	.virt_size = cpu_to_be32(0xffffffff),
1063 1063 	.load_base = cpu_to_be32(0xffffffff),
1064 - 	.min_rma = cpu_to_be32(512),		/* 512MB min RMA */
1064 + 	.min_rma = cpu_to_be32(MIN_RMA),
1065 1065 	.min_load = cpu_to_be32(0xffffffff),	/* full client load */
1066 1066 	.min_rma_percent = 0,	/* min RMA percentage of total RAM */
1067 1067 	.max_pft_size = 48,	/* max log_2(hash table size) */
···
2889 2889 	char type[8];
2890 2890 	phandle node;
2891 2891
2892 - 	// Some pmacs are missing #size-cells on escc nodes
2892 + 	// Some pmacs are missing #size-cells on escc or i2s nodes
2893 2893 	for (node = 0; prom_next_node(&node); ) {
2894 2894 		type[0] = '\0';
2895 2895 		prom_getprop(node, "device_type", type, sizeof(type));
2896 - 		if (prom_strcmp(type, "escc"))
2896 + 		if (prom_strcmp(type, "escc") && prom_strcmp(type, "i2s"))
2897 2897 			continue;
2898 2898
2899 2899 		if (prom_getproplen(node, "#size-cells") != PROM_ERROR)
-75
arch/powerpc/kernel/rtas.c
··· 798 798 udbg_putc = call_rtas_display_status_delay; 799 799 } 800 800 801 - #ifdef CONFIG_UDBG_RTAS_CONSOLE 802 - 803 - /* If you think you're dying before early_init_dt_scan_rtas() does its 804 - * work, you can hard code the token values for your firmware here and 805 - * hardcode rtas.base/entry etc. 806 - */ 807 - static unsigned int rtas_putchar_token = RTAS_UNKNOWN_SERVICE; 808 - static unsigned int rtas_getchar_token = RTAS_UNKNOWN_SERVICE; 809 - 810 - static void udbg_rtascon_putc(char c) 811 - { 812 - int tries; 813 - 814 - if (!rtas.base) 815 - return; 816 - 817 - /* Add CRs before LFs */ 818 - if (c == '\n') 819 - udbg_rtascon_putc('\r'); 820 - 821 - /* if there is more than one character to be displayed, wait a bit */ 822 - for (tries = 0; tries < 16; tries++) { 823 - if (rtas_call(rtas_putchar_token, 1, 1, NULL, c) == 0) 824 - break; 825 - udelay(1000); 826 - } 827 - } 828 - 829 - static int udbg_rtascon_getc_poll(void) 830 - { 831 - int c; 832 - 833 - if (!rtas.base) 834 - return -1; 835 - 836 - if (rtas_call(rtas_getchar_token, 0, 2, &c)) 837 - return -1; 838 - 839 - return c; 840 - } 841 - 842 - static int udbg_rtascon_getc(void) 843 - { 844 - int c; 845 - 846 - while ((c = udbg_rtascon_getc_poll()) == -1) 847 - ; 848 - 849 - return c; 850 - } 851 - 852 - 853 - void __init udbg_init_rtas_console(void) 854 - { 855 - udbg_putc = udbg_rtascon_putc; 856 - udbg_getc = udbg_rtascon_getc; 857 - udbg_getc_poll = udbg_rtascon_getc_poll; 858 - } 859 - #endif /* CONFIG_UDBG_RTAS_CONSOLE */ 860 - 861 801 void rtas_progress(char *s, unsigned short hex) 862 802 { 863 803 struct device_node *root; ··· 2074 2134 rtas.entry = *entryp; 2075 2135 rtas.size = *sizep; 2076 2136 } 2077 - 2078 - #ifdef CONFIG_UDBG_RTAS_CONSOLE 2079 - basep = of_get_flat_dt_prop(node, "put-term-char", NULL); 2080 - if (basep) 2081 - rtas_putchar_token = *basep; 2082 - 2083 - basep = of_get_flat_dt_prop(node, "get-term-char", NULL); 2084 - if (basep) 2085 - rtas_getchar_token = *basep; 
2086 - 2087 - if (rtas_putchar_token != RTAS_UNKNOWN_SERVICE && 2088 - rtas_getchar_token != RTAS_UNKNOWN_SERVICE) 2089 - udbg_init_rtas_console(); 2090 - 2091 - #endif 2092 2137 2093 2138 /* break now */ 2094 2139 return 1;
+1 -1
arch/powerpc/kernel/setup_64.c
···
892 892 }
893 893 #endif
894 894
895 - #if defined(CONFIG_PPC_INDIRECT_PIO) || defined(CONFIG_PPC_INDIRECT_MMIO)
895 + #ifdef CONFIG_PPC_INDIRECT_PIO
896 896 struct ppc_pci_io ppc_pci_io;
897 897 EXPORT_SYMBOL(ppc_pci_io);
898 898 #endif
+43 -15
arch/powerpc/kernel/static_call.c
···
  8   8 {
  9   9 	int err;
 10  10 	bool is_ret0 = (func == __static_call_return0);
 11   - 	unsigned long target = (unsigned long)(is_ret0 ? tramp + PPC_SCT_RET0 : func);
 12   - 	bool is_short = is_offset_in_branch_range((long)target - (long)tramp);
 13   -
 14   - 	if (!tramp)
 15   - 		return;
 11   + 	unsigned long _tramp = (unsigned long)tramp;
 12   + 	unsigned long _func = (unsigned long)func;
 13   + 	unsigned long _ret0 = _tramp + PPC_SCT_RET0;
 14   + 	bool is_short = is_offset_in_branch_range((long)func - (long)(site ? : tramp));
 16  15
 17  16 	mutex_lock(&text_mutex);
 18  17
 19   - 	if (func && !is_short) {
 20   - 		err = patch_ulong(tramp + PPC_SCT_DATA, target);
 21   - 		if (err)
 22   - 			goto out;
 18   + 	if (site && tail) {
 19   + 		if (!func)
 20   + 			err = patch_instruction(site, ppc_inst(PPC_RAW_BLR()));
 21   + 		else if (is_ret0)
 22   + 			err = patch_branch(site, _ret0, 0);
 23   + 		else if (is_short)
 24   + 			err = patch_branch(site, _func, 0);
 25   + 		else if (tramp)
 26   + 			err = patch_branch(site, _tramp, 0);
 27   + 		else
 28   + 			err = 0;
 29   + 	} else if (site) {
 30   + 		if (!func)
 31   + 			err = patch_instruction(site, ppc_inst(PPC_RAW_NOP()));
 32   + 		else if (is_ret0)
 33   + 			err = patch_instruction(site, ppc_inst(PPC_RAW_LI(_R3, 0)));
 34   + 		else if (is_short)
 35   + 			err = patch_branch(site, _func, BRANCH_SET_LINK);
 36   + 		else if (tramp)
 37   + 			err = patch_branch(site, _tramp, BRANCH_SET_LINK);
 38   + 		else
 39   + 			err = 0;
 40   + 	} else if (tramp) {
 41   + 		if (func && !is_short) {
 42   + 			err = patch_ulong(tramp + PPC_SCT_DATA, _func);
 43   + 			if (err)
 44   + 				goto out;
 45   + 		}
 46   +
 47   + 		if (!func)
 48   + 			err = patch_instruction(tramp, ppc_inst(PPC_RAW_BLR()));
 49   + 		else if (is_ret0)
 50   + 			err = patch_branch(tramp, _ret0, 0);
 51   + 		else if (is_short)
 52   + 			err = patch_branch(tramp, _func, 0);
 53   + 		else
 54   + 			err = patch_instruction(tramp, ppc_inst(PPC_RAW_NOP()));
 55   + 	} else {
 56   + 		err = 0;
 23  57 	}
 24  58
 25   - 	if (!func)
 26   - 		err = patch_instruction(tramp, ppc_inst(PPC_RAW_BLR()));
 27   - 	else if (is_short)
 28   - 		err = patch_branch(tramp, target, 0);
 29   - 	else
 30   - 		err = patch_instruction(tramp, ppc_inst(PPC_RAW_NOP()));
 31  59 out:
 32  60 	mutex_unlock(&text_mutex);
 33  61
-1
arch/powerpc/kernel/switch.S
···
 39  39
 40  40 	// Flush the link stack
 41  41 	.rept 64
 42   - 	ANNOTATE_INTRA_FUNCTION_CALL
 43  42 	bl	.+4
 44  43 	.endr
 45  44 	b	1f
+32 -33
arch/powerpc/kernel/time.c
···
  901  901 	register_decrementer_clockevent(smp_processor_id());
  902  902 }
  903  903
  904    + /*
  905    +  * Divide a 128-bit dividend by a 32-bit divisor, leaving a 128 bit
  906    +  * result.
  907    +  */
  908    + static __init void div128_by_32(u64 dividend_high, u64 dividend_low,
  909    + 				unsigned int divisor, struct div_result *dr)
  910    + {
  911    + 	unsigned long a, b, c, d;
  912    + 	unsigned long w, x, y, z;
  913    + 	u64 ra, rb, rc;
  914    +
  915    + 	a = dividend_high >> 32;
  916    + 	b = dividend_high & 0xffffffff;
  917    + 	c = dividend_low >> 32;
  918    + 	d = dividend_low & 0xffffffff;
  919    +
  920    + 	w = a / divisor;
  921    + 	ra = ((u64)(a - (w * divisor)) << 32) + b;
  922    +
  923    + 	rb = ((u64)do_div(ra, divisor) << 32) + c;
  924    + 	x = ra;
  925    +
  926    + 	rc = ((u64)do_div(rb, divisor) << 32) + d;
  927    + 	y = rb;
  928    +
  929    + 	do_div(rc, divisor);
  930    + 	z = rc;
  931    +
  932    + 	dr->result_high = ((u64)w << 32) + x;
  933    + 	dr->result_low = ((u64)y << 32) + z;
  934    + }
  935    +
  904  936 /* This function is only called on the boot processor */
  905  937 void __init time_init(void)
  906  938 {
···
 1004  972
 1005  973 	of_clk_init(NULL);
 1006  974 	enable_sched_clock_irqtime();
 1007    - }
 1008    -
 1009    - /*
 1010    -  * Divide a 128-bit dividend by a 32-bit divisor, leaving a 128 bit
 1011    -  * result.
 1012    -  */
 1013    - void div128_by_32(u64 dividend_high, u64 dividend_low,
 1014    - 		  unsigned divisor, struct div_result *dr)
 1015    - {
 1016    - 	unsigned long a, b, c, d;
 1017    - 	unsigned long w, x, y, z;
 1018    - 	u64 ra, rb, rc;
 1019    -
 1020    - 	a = dividend_high >> 32;
 1021    - 	b = dividend_high & 0xffffffff;
 1022    - 	c = dividend_low >> 32;
 1023    - 	d = dividend_low & 0xffffffff;
 1024    -
 1025    - 	w = a / divisor;
 1026    - 	ra = ((u64)(a - (w * divisor)) << 32) + b;
 1027    -
 1028    - 	rb = ((u64) do_div(ra, divisor) << 32) + c;
 1029    - 	x = ra;
 1030    -
 1031    - 	rc = ((u64) do_div(rb, divisor) << 32) + d;
 1032    - 	y = rb;
 1033    -
 1034    - 	do_div(rc, divisor);
 1035    - 	z = rc;
 1036    -
 1037    - 	dr->result_high = ((u64)w << 32) + x;
 1038    - 	dr->result_low = ((u64)y << 32) + z;
 1039    -
 1040  975 }
 1041  976
 1042  977 /* We don't need to calibrate delay, we use the CPU timebase for that */
-3
arch/powerpc/kernel/udbg.c
···
 36  36 #elif defined(CONFIG_PPC_EARLY_DEBUG_RTAS_PANEL)
 37  37 	/* RTAS panel debug */
 38  38 	udbg_init_rtas_panel();
 39   - #elif defined(CONFIG_PPC_EARLY_DEBUG_RTAS_CONSOLE)
 40   - 	/* RTAS console debug */
 41   - 	udbg_init_rtas_console();
 42  39 #elif defined(CONFIG_PPC_EARLY_DEBUG_PAS_REALMODE)
 43  40 	udbg_init_pas_realmode();
 44  41 #elif defined(CONFIG_PPC_EARLY_DEBUG_BOOTX)
-9
arch/powerpc/kernel/vmlinux.lds.S
···
   1   1 /* SPDX-License-Identifier: GPL-2.0 */
   2    - #ifdef CONFIG_PPC64
   3    - #define PROVIDE32(x)	PROVIDE(__unused__##x)
   4    - #else
   5    - #define PROVIDE32(x)	PROVIDE(x)
   6    - #endif
   7    -
   8   2 #define BSS_FIRST_SECTIONS *(.bss.prominit)
   9   3 #define EMITS_PT_NOTE
  10   4 #define RO_EXCEPTION_TABLE_ALIGN	0
···
 121 127
 122 128 	. = ALIGN(PAGE_SIZE);
 123 129 	_etext = .;
 124    - 	PROVIDE32 (etext = .);
 125 130
 126 131 	/* Read-only data */
 127 132 	RO_DATA(PAGE_SIZE)
···
 387 394
 388 395 	. = ALIGN(PAGE_SIZE);
 389 396 	_edata = .;
 390    - 	PROVIDE32 (edata = .);
 391 397
 392 398 	/*
 393 399 	 * And finally the bss
···
 396 404
 397 405 	. = ALIGN(PAGE_SIZE);
 398 406 	_end = . ;
 399    - 	PROVIDE32 (end = .);
 400 407
 401 408 	DWARF_DEBUG
 402 409 	ELF_DETAILS
+2 -5
arch/powerpc/kexec/relocate_32.S
···
 348 348 	rlwinm	r10, r24, 0, 22, 27
 349 349
 350 350 	cmpwi	r10, PPC47x_TLB0_4K
 351    - 	bne	0f
 352 351 	li	r10, 0x1000		/* r10 = 4k */
 353    - 	ANNOTATE_INTRA_FUNCTION_CALL
 354    - 	bl	1f
 352    + 	beq	0f
 355 353
 356    - 0:
 357 354 	/* Defaults to 256M */
 358 355 	lis	r10, 0x1000
 359 356
 360    - 	bcl	20,31,$+4
 357    + 0:	bcl	20,31,$+4
 361 358 1:	mflr	r4
 362 359 	addi	r4, r4, (2f-1b)		/* virtual address of 2f */
 363 360
-2
arch/powerpc/kvm/book3s_32_mmu_host.c
···
 125 125 	return (u32*)pteg;
 126 126 }
 127 127
 128    - extern char etext[];
 129    -
 130 128 int kvmppc_mmu_map_page(struct kvm_vcpu *vcpu, struct kvmppc_pte *orig_pte,
 131 129 			bool iswrite)
 132 130 {
-2
arch/powerpc/kvm/book3s_hv_rmhandlers.S
···
 1524 1524
 1525 1525 	/* Flush the link stack. On Power8 it's up to 32 entries in size. */
 1526 1526 	.rept	32
 1527      - 	ANNOTATE_INTRA_FUNCTION_CALL
 1528 1527 	bl	.+4
 1529 1528 	.endr
 1530 1529
 1531 1530 	/* And on Power9 it's up to 64. */
 1532 1531 BEGIN_FTR_SECTION
 1533 1532 	.rept	32
 1534      - 	ANNOTATE_INTRA_FUNCTION_CALL
 1535 1533 	bl	.+4
 1536 1534 	.endr
 1537 1535 END_FTR_SECTION_IFSET(CPU_FTR_ARCH_300)
+1 -4
arch/powerpc/kvm/powerpc.c
···
 550 550
 551 551 #ifdef CONFIG_PPC_BOOK3S_64
 552 552 	case KVM_CAP_SPAPR_TCE:
 553    + 		fallthrough;
 553 554 	case KVM_CAP_SPAPR_TCE_64:
 554    - 		r = 1;
 555    - 		break;
 556 555 	case KVM_CAP_SPAPR_TCE_VFIO:
 557    - 		r = !!cpu_has_feature(CPU_FTR_HVMODE);
 558    - 		break;
 559 556 	case KVM_CAP_PPC_RTAS:
 560 557 	case KVM_CAP_PPC_FIXUP_HCALL:
 561 558 	case KVM_CAP_PPC_ENABLE_HCALL:
-12
arch/powerpc/mm/book3s64/hash_utils.c
···
 1358 1358 	} else {
 1359 1359 		unsigned long limit = MEMBLOCK_ALLOC_ANYWHERE;
 1360 1360
 1361      - #ifdef CONFIG_PPC_CELL
 1362      - 		/*
 1363      - 		 * Cell may require the hash table down low when using the
 1364      - 		 * Axon IOMMU in order to fit the dynamic region over it, see
 1365      - 		 * comments in cell/iommu.c
 1366      - 		 */
 1367      - 		if (fdt_subnode_offset(initial_boot_params, 0, "axon") > 0) {
 1368      - 			limit = 0x80000000;
 1369      - 			pr_info("Hash table forced below 2G for Axon IOMMU\n");
 1370      - 		}
 1371      - #endif /* CONFIG_PPC_CELL */
 1372      -
 1373 1361 		table = memblock_phys_alloc_range(htab_size_bytes,
 1374 1362 						  htab_size_bytes,
 1375 1363 						  0, limit);
+2 -2
arch/powerpc/mm/book3s64/pgtable.c
···
 587 587 /*
 588 588  * Does the CPU support tlbie?
 589 589  */
 590    - bool tlbie_capable __read_mostly = true;
 590    + bool tlbie_capable __read_mostly = IS_ENABLED(CONFIG_PPC_RADIX_BROADCAST_TLBIE);
 591 591 EXPORT_SYMBOL(tlbie_capable);
 592 592
 593 593 /*
···
 595 595  * address spaces? tlbie may still be used for nMMU accelerators, and for KVM
 596 596  * guest address spaces.
 597 597  */
 598    - bool tlbie_enabled __read_mostly = true;
 598    + bool tlbie_enabled __read_mostly = IS_ENABLED(CONFIG_PPC_RADIX_BROADCAST_TLBIE);
 599 599
 600 600 static int __init setup_disable_tlbie(char *str)
 601 601 {
-9
arch/powerpc/mm/ioremap.c
···
  4  4 #include <linux/slab.h>
  5  5 #include <linux/mmzone.h>
  6  6 #include <linux/vmalloc.h>
  7   - #include <asm/io-workarounds.h>
  8  7
  9  8 unsigned long ioremap_bot;
 10  9 EXPORT_SYMBOL(ioremap_bot);
···
 13 14 	pgprot_t prot = pgprot_noncached(PAGE_KERNEL);
 14 15 	void *caller = __builtin_return_address(0);
 15 16
 16   - 	if (iowa_is_active())
 17   - 		return iowa_ioremap(addr, size, prot, caller);
 18 17 	return __ioremap_caller(addr, size, prot, caller);
 19 18 }
 20 19 EXPORT_SYMBOL(ioremap);
···
 22 25 	pgprot_t prot = pgprot_noncached_wc(PAGE_KERNEL);
 23 26 	void *caller = __builtin_return_address(0);
 24 27
 25   - 	if (iowa_is_active())
 26   - 		return iowa_ioremap(addr, size, prot, caller);
 27 28 	return __ioremap_caller(addr, size, prot, caller);
 28 29 }
 29 30 EXPORT_SYMBOL(ioremap_wc);
···
 31 36 	pgprot_t prot = pgprot_cached(PAGE_KERNEL);
 32 37 	void *caller = __builtin_return_address(0);
 33 38
 34   - 	if (iowa_is_active())
 35   - 		return iowa_ioremap(addr, size, prot, caller);
 36 39 	return __ioremap_caller(addr, size, prot, caller);
 37 40 }
···
 43 50 	if (pte_write(pte))
 44 51 		pte = pte_mkdirty(pte);
 45 52
 46   - 	if (iowa_is_active())
 47   - 		return iowa_ioremap(addr, size, pte_pgprot(pte), caller);
 48 53 	return __ioremap_caller(addr, size, pte_pgprot(pte), caller);
 49 54 }
 50 55 EXPORT_SYMBOL(ioremap_prot);
+1 -1
arch/powerpc/mm/ioremap_64.c
···
 52 52 	if (!slab_is_available())
 53 53 		return;
 54 54
 55   - 	generic_iounmap(PCI_FIX_ADDR(token));
 55   + 	generic_iounmap(token);
 56 56 }
 57 57 EXPORT_SYMBOL(iounmap);
-22
arch/powerpc/mm/mem.c
···
 319 319 	per_cpu(next_tlbcam_idx, smp_processor_id()) =
 320 320 		(mfspr(SPRN_TLB1CFG) & TLBnCFG_N_ENTRY) - 1;
 321 321 #endif
 322    -
 323    - #ifdef CONFIG_PPC32
 324    - 	pr_info("Kernel virtual memory layout:\n");
 325    - #ifdef CONFIG_KASAN
 326    - 	pr_info("  * 0x%08lx..0x%08lx  : kasan shadow mem\n",
 327    - 		KASAN_SHADOW_START, KASAN_SHADOW_END);
 328    - #endif
 329    - 	pr_info("  * 0x%08lx..0x%08lx  : fixmap\n", FIXADDR_START, FIXADDR_TOP);
 330    - #ifdef CONFIG_HIGHMEM
 331    - 	pr_info("  * 0x%08lx..0x%08lx  : highmem PTEs\n",
 332    - 		PKMAP_BASE, PKMAP_ADDR(LAST_PKMAP));
 333    - #endif /* CONFIG_HIGHMEM */
 334    - 	if (ioremap_bot != IOREMAP_TOP)
 335    - 		pr_info("  * 0x%08lx..0x%08lx  : early ioremap\n",
 336    - 			ioremap_bot, IOREMAP_TOP);
 337    - 	pr_info("  * 0x%08lx..0x%08lx  : vmalloc & ioremap\n",
 338    - 		VMALLOC_START, VMALLOC_END);
 339    - #ifdef MODULES_VADDR
 340    - 	pr_info("  * 0x%08lx..0x%08lx  : modules\n",
 341    - 		MODULES_VADDR, MODULES_END);
 342    - #endif
 343    - #endif /* CONFIG_PPC32 */
 344 322 }
 345 323
 346 324 void free_initmem(void)
+1 -1
arch/powerpc/mm/numa.c
···
 1336 1336 	return nid;
 1337 1337 }
 1338 1338
 1339      - static u64 hot_add_drconf_memory_max(void)
 1339      + u64 hot_add_drconf_memory_max(void)
 1340 1340 {
 1341 1341 	struct device_node *memory = NULL;
 1342 1342 	struct device_node *dn = NULL;
+20
arch/powerpc/perf/core-book3s.c
···
 2226 2226 #define PERF_SAMPLE_ADDR_TYPE  (PERF_SAMPLE_ADDR |		\
 2227 2227 				 PERF_SAMPLE_PHYS_ADDR |	\
 2228 2228 				 PERF_SAMPLE_DATA_PAGE_SIZE)
 2229    +
 2230    + #define SIER_TYPE_SHIFT	15
 2231    + #define SIER_TYPE_MASK	(0x7ull << SIER_TYPE_SHIFT)
 2232    +
 2229 2233 /*
 2230 2234  * A counter has overflowed; update its count and record
 2231 2235  * things if requested. Note that interrupts are hard-disabled
···
 2297 2293 	    (event->attr.sample_type & PERF_SAMPLE_IP) &&
 2298 2294 	    is_kernel_addr(mfspr(SPRN_SIAR)))
 2299 2295 		record = 0;
 2296    +
 2297    + 	/*
 2298    + 	 * SIER[46-48] presents instruction type of the sampled instruction.
 2299    + 	 * In ISA v3.0 and before values "0" and "7" are considered reserved.
 2300    + 	 * In ISA v3.1, value "7" has been used to indicate "larx/stcx".
 2301    + 	 * Drop the sample if "type" has reserved values for this field with
 2302    + 	 * an ISA version check.
 2303    + 	 */
 2304    + 	if (event->attr.sample_type & PERF_SAMPLE_DATA_SRC &&
 2305    + 	    ppmu->get_mem_data_src) {
 2306    + 		val = (regs->dar & SIER_TYPE_MASK) >> SIER_TYPE_SHIFT;
 2307    + 		if (val == 0 || (val == 7 && !cpu_has_feature(CPU_FTR_ARCH_31))) {
 2308    + 			record = 0;
 2309    + 			atomic64_inc(&event->lost_samples);
 2310    + 		}
 2311    + 	}
 2300 2312
 2301 2313 	/*
 2302 2314 	 * Finally record data if requested.
+15 -3
arch/powerpc/perf/isa207-common.c
···
 319 319 		return;
 320 320 	}
 321 321
 322    - 	sier = mfspr(SPRN_SIER);
 322    + 	/*
 323    + 	 * Use regs->dar for SPRN_SIER which is saved
 324    + 	 * during perf_read_regs at the beginning
 325    + 	 * of the PMU interrupt handler to avoid multiple
 326    + 	 * reads of SPRN_SIER
 327    + 	 */
 328    + 	sier = regs->dar;
 323 329 	val = (sier & ISA207_SIER_TYPE_MASK) >> ISA207_SIER_TYPE_SHIFT;
 324    - 	if (val != 1 && val != 2 && !(val == 7 && cpu_has_feature(CPU_FTR_ARCH_31)))
 330    + 	if (val != 1 && val != 2 && !(val == 7 && cpu_has_feature(CPU_FTR_ARCH_31))) {
 331    + 		dsrc->val = 0;
 325 332 		return;
 333    + 	}
 326 334
 327 335 	idx = (sier & ISA207_SIER_LDST_MASK) >> ISA207_SIER_LDST_SHIFT;
 328 336 	sub_idx = (sier & ISA207_SIER_DATA_SRC_MASK) >> ISA207_SIER_DATA_SRC_SHIFT;
···
 346 338 	 * to determine the exact instruction type. If the sampling
 347 339 	 * criteria is neither load or store, set the type as default
 348 340 	 * to NA.
 341    + 	 *
 342    + 	 * Use regs->dsisr for MMCRA which is saved during perf_read_regs
 343    + 	 * at the beginning of the PMU interrupt handler to avoid
 344    + 	 * multiple reads of SPRN_MMCRA
 349 345 	 */
 350    - 	mmcra = mfspr(SPRN_MMCRA);
 346    + 	mmcra = regs->dsisr;
 351 347
 352 348 	op_type = (mmcra >> MMCRA_SAMP_ELIG_SHIFT) & MMCRA_SAMP_ELIG_MASK;
 353 349 	switch (op_type) {
+1
arch/powerpc/perf/vpa-pmu.c
···
 156 156 }
 157 157
 158 158 static struct pmu vpa_pmu = {
 159    + 	.module		= THIS_MODULE,
 159 160 	.task_ctx_nr	= perf_sw_context,
 160 161 	.name		= "vpa_pmu",
 161 162 	.event_init	= vpa_pmu_event_init,
+1 -1
arch/powerpc/platforms/44x/uic.c
···
 37 37 #define UIC_VR		0x7
 38 38 #define UIC_VCR		0x8
 39 39
 40   - struct uic *primary_uic;
 40   + static struct uic *primary_uic;
 41 41
 42 42 struct uic {
 43 43 	int index;
-10
arch/powerpc/platforms/Kconfig
···
  70  70 	  firmware provides this binding.
  71  71 	  If you're not sure say Y.
  72  72
  73    - config UDBG_RTAS_CONSOLE
  74    - 	bool "RTAS based debug console"
  75    - 	depends on PPC_RTAS
  76    -
  77  73 config PPC_SMP_MUXED_IPI
  78  74 	bool
  79  75 	help
···
 181 185 config PPC_INDIRECT_PIO
 182 186 	bool
 183 187 	select GENERIC_IOMAP
 184    -
 185    - config PPC_INDIRECT_MMIO
 186    - 	bool
 187    -
 188    - config PPC_IO_WORKAROUNDS
 189    - 	bool
 190 188
 191 189 source "drivers/cpufreq/Kconfig"
 192 190
+13
arch/powerpc/platforms/Kconfig.cputype
···
 449 449
 450 450 	  If you're unsure, say Y.
 451 451
 452    + config PPC_RADIX_BROADCAST_TLBIE
 453    + 	bool
 454    + 	depends on PPC_RADIX_MMU
 455    + 	help
 456    + 	  Power ISA v3.0 and later implementations in the Linux Compliancy Subset
 457    + 	  and lower are not required to implement broadcast TLBIE instructions.
 458    + 	  Platforms with CPUs that do implement TLBIE broadcast, that is, where
 459    + 	  a TLB invalidation instruction performed on one CPU operates on the
 460    + 	  TLBs of all CPUs in the system, should select this option. If this
 461    + 	  option is selected, the disable_tlbie kernel command line option can
 462    + 	  be used to cause global TLB invalidations to be done via IPIs; without
 463    + 	  it, IPIs will be used unconditionally.
 464    +
 452 465 config PPC_KERNEL_PREFIXED
 453 466 	depends on PPC_HAVE_PREFIXED_SUPPORT
 454 467 	depends on CC_HAS_PREFIXED
-80
arch/powerpc/platforms/cell/Kconfig
···
  3  3 	select PPC_64S_HASH_MMU if PPC64
  4  4 	bool
  5  5
  6   - config PPC_CELL_COMMON
  7   - 	bool
  8   - 	select PPC_CELL
  9   - 	select PPC_DCR_MMIO
 10   - 	select PPC_INDIRECT_PIO
 11   - 	select PPC_INDIRECT_MMIO
 12   - 	select PPC_HASH_MMU_NATIVE
 13   - 	select PPC_RTAS
 14   - 	select IRQ_EDGE_EOI_HANDLER
 15   -
 16   - config PPC_CELL_NATIVE
 17   - 	bool
 18   - 	select PPC_CELL_COMMON
 19   - 	select MPIC
 20   - 	select PPC_IO_WORKAROUNDS
 21   - 	select IBM_EMAC_EMAC4 if IBM_EMAC
 22   - 	select IBM_EMAC_RGMII if IBM_EMAC
 23   - 	select IBM_EMAC_ZMII if IBM_EMAC #test only
 24   - 	select IBM_EMAC_TAH if IBM_EMAC #test only
 25   -
 26   - config PPC_IBM_CELL_BLADE
 27   - 	bool "IBM Cell Blade"
 28   - 	depends on PPC64 && PPC_BOOK3S && CPU_BIG_ENDIAN
 29   - 	select PPC_CELL_NATIVE
 30   - 	select PPC_OF_PLATFORM_PCI
 31   - 	select FORCE_PCI
 32   - 	select MMIO_NVRAM
 33   - 	select PPC_UDBG_16550
 34   - 	select UDBG_RTAS_CONSOLE
 35   -
 36   - config AXON_MSI
 37   - 	bool
 38   - 	depends on PPC_IBM_CELL_BLADE && PCI_MSI
 39   - 	select IRQ_DOMAIN_NOMAP
 40   - 	default y
 41   -
 42  6 menu "Cell Broadband Engine options"
 43  7 	depends on PPC_CELL
 44  8
···
 56 20 config SPU_BASE
 57 21 	bool
 58 22 	select PPC_COPRO_BASE
 23   -
 24   - config CBE_RAS
 25   - 	bool "RAS features for bare metal Cell BE"
 26   - 	depends on PPC_CELL_NATIVE
 27   - 	default y
 28   -
 29   - config PPC_IBM_CELL_RESETBUTTON
 30   - 	bool "IBM Cell Blade Pinhole reset button"
 31   - 	depends on CBE_RAS && PPC_IBM_CELL_BLADE
 32   - 	default y
 33   - 	help
 34   - 	  Support Pinhole Resetbutton on IBM Cell blades.
 35   - 	  This adds a method to trigger system reset via front panel pinhole button.
 36   -
 37   - config PPC_IBM_CELL_POWERBUTTON
 38   - 	tristate "IBM Cell Blade power button"
 39   - 	depends on PPC_IBM_CELL_BLADE && INPUT_EVDEV
 40   - 	default y
 41   - 	help
 42   - 	  Support Powerbutton on IBM Cell blades.
 43   - 	  This will enable the powerbutton as an input device.
 44   -
 45   - config CBE_THERM
 46   - 	tristate "CBE thermal support"
 47   - 	default m
 48   - 	depends on CBE_RAS && SPU_BASE
 49   -
 50   - config PPC_PMI
 51   - 	tristate
 52   - 	default y
 53   - 	depends on CPU_FREQ_CBE_PMI || PPC_IBM_CELL_POWERBUTTON
 54   - 	help
 55   - 	  PMI (Platform Management Interrupt) is a way to
 56   - 	  communicate with the BMC (Baseboard Management Controller).
 57   - 	  It is used in some IBM Cell blades.
 58   -
 59   - config CBE_CPUFREQ_SPU_GOVERNOR
 60   - 	tristate "CBE frequency scaling based on SPU usage"
 61   - 	depends on SPU_FS && CPU_FREQ
 62   - 	default m
 63   - 	help
 64   - 	  This governor checks for spu usage to adjust the cpu frequency.
 65   - 	  If no spu is running on a given cpu, that cpu will be throttled to
 66   - 	  the minimal possible frequency.
 67 59
 68 60 endmenu
-23
arch/powerpc/platforms/cell/Makefile
···
  1  1 # SPDX-License-Identifier: GPL-2.0
  2   - obj-$(CONFIG_PPC_CELL_COMMON)		+= cbe_regs.o interrupt.o pervasive.o
  3   -
  4   - obj-$(CONFIG_PPC_CELL_NATIVE)		+= iommu.o setup.o spider-pic.o \
  5   - 					   pmu.o spider-pci.o
  6   - obj-$(CONFIG_CBE_RAS)			+= ras.o
  7   -
  8   - obj-$(CONFIG_CBE_THERM)			+= cbe_thermal.o
  9   - obj-$(CONFIG_CBE_CPUFREQ_SPU_GOVERNOR)	+= cpufreq_spudemand.o
 10   -
 11   - obj-$(CONFIG_PPC_IBM_CELL_POWERBUTTON)	+= cbe_powerbutton.o
 12   -
 13   - ifdef CONFIG_SMP
 14   - obj-$(CONFIG_PPC_CELL_NATIVE)		+= smp.o
 15   - endif
 16   -
 17   - # needed only when building loadable spufs.ko
 18   - spu-priv1-$(CONFIG_PPC_CELL_COMMON)	+= spu_priv1_mmio.o
 19   - spu-manage-$(CONFIG_PPC_CELL_COMMON)	+= spu_manage.o
 20   -
 21  2 obj-$(CONFIG_SPU_BASE)			+= spu_callbacks.o spu_base.o \
 22  3 					   spu_syscalls.o \
 23   - 					   $(spu-priv1-y) \
 24   - 					   $(spu-manage-y) \
 25  4 					   spufs/
 26   -
 27   - obj-$(CONFIG_AXON_MSI)			+= axon_msi.o
-481
arch/powerpc/platforms/cell/axon_msi.c
··· 1 - // SPDX-License-Identifier: GPL-2.0-or-later 2 - /* 3 - * Copyright 2007, Michael Ellerman, IBM Corporation. 4 - */ 5 - 6 - 7 - #include <linux/interrupt.h> 8 - #include <linux/irq.h> 9 - #include <linux/kernel.h> 10 - #include <linux/pci.h> 11 - #include <linux/msi.h> 12 - #include <linux/export.h> 13 - #include <linux/slab.h> 14 - #include <linux/debugfs.h> 15 - #include <linux/of.h> 16 - #include <linux/of_irq.h> 17 - #include <linux/platform_device.h> 18 - 19 - #include <asm/dcr.h> 20 - #include <asm/machdep.h> 21 - 22 - #include "cell.h" 23 - 24 - /* 25 - * MSIC registers, specified as offsets from dcr_base 26 - */ 27 - #define MSIC_CTRL_REG 0x0 28 - 29 - /* Base Address registers specify FIFO location in BE memory */ 30 - #define MSIC_BASE_ADDR_HI_REG 0x3 31 - #define MSIC_BASE_ADDR_LO_REG 0x4 32 - 33 - /* Hold the read/write offsets into the FIFO */ 34 - #define MSIC_READ_OFFSET_REG 0x5 35 - #define MSIC_WRITE_OFFSET_REG 0x6 36 - 37 - 38 - /* MSIC control register flags */ 39 - #define MSIC_CTRL_ENABLE 0x0001 40 - #define MSIC_CTRL_FIFO_FULL_ENABLE 0x0002 41 - #define MSIC_CTRL_IRQ_ENABLE 0x0008 42 - #define MSIC_CTRL_FULL_STOP_ENABLE 0x0010 43 - 44 - /* 45 - * The MSIC can be configured to use a FIFO of 32KB, 64KB, 128KB or 256KB. 46 - * Currently we're using a 64KB FIFO size. 47 - */ 48 - #define MSIC_FIFO_SIZE_SHIFT 16 49 - #define MSIC_FIFO_SIZE_BYTES (1 << MSIC_FIFO_SIZE_SHIFT) 50 - 51 - /* 52 - * To configure the FIFO size as (1 << n) bytes, we write (n - 15) into bits 53 - * 8-9 of the MSIC control reg. 54 - */ 55 - #define MSIC_CTRL_FIFO_SIZE (((MSIC_FIFO_SIZE_SHIFT - 15) << 8) & 0x300) 56 - 57 - /* 58 - * We need to mask the read/write offsets to make sure they stay within 59 - * the bounds of the FIFO. Also they should always be 16-byte aligned. 
60 - */ 61 - #define MSIC_FIFO_SIZE_MASK ((MSIC_FIFO_SIZE_BYTES - 1) & ~0xFu) 62 - 63 - /* Each entry in the FIFO is 16 bytes, the first 4 bytes hold the irq # */ 64 - #define MSIC_FIFO_ENTRY_SIZE 0x10 65 - 66 - 67 - struct axon_msic { 68 - struct irq_domain *irq_domain; 69 - __le32 *fifo_virt; 70 - dma_addr_t fifo_phys; 71 - dcr_host_t dcr_host; 72 - u32 read_offset; 73 - #ifdef DEBUG 74 - u32 __iomem *trigger; 75 - #endif 76 - }; 77 - 78 - #ifdef DEBUG 79 - void axon_msi_debug_setup(struct device_node *dn, struct axon_msic *msic); 80 - #else 81 - static inline void axon_msi_debug_setup(struct device_node *dn, 82 - struct axon_msic *msic) { } 83 - #endif 84 - 85 - 86 - static void msic_dcr_write(struct axon_msic *msic, unsigned int dcr_n, u32 val) 87 - { 88 - pr_devel("axon_msi: dcr_write(0x%x, 0x%x)\n", val, dcr_n); 89 - 90 - dcr_write(msic->dcr_host, dcr_n, val); 91 - } 92 - 93 - static void axon_msi_cascade(struct irq_desc *desc) 94 - { 95 - struct irq_chip *chip = irq_desc_get_chip(desc); 96 - struct axon_msic *msic = irq_desc_get_handler_data(desc); 97 - u32 write_offset, msi; 98 - int idx; 99 - int retry = 0; 100 - 101 - write_offset = dcr_read(msic->dcr_host, MSIC_WRITE_OFFSET_REG); 102 - pr_devel("axon_msi: original write_offset 0x%x\n", write_offset); 103 - 104 - /* write_offset doesn't wrap properly, so we have to mask it */ 105 - write_offset &= MSIC_FIFO_SIZE_MASK; 106 - 107 - while (msic->read_offset != write_offset && retry < 100) { 108 - idx = msic->read_offset / sizeof(__le32); 109 - msi = le32_to_cpu(msic->fifo_virt[idx]); 110 - msi &= 0xFFFF; 111 - 112 - pr_devel("axon_msi: woff %x roff %x msi %x\n", 113 - write_offset, msic->read_offset, msi); 114 - 115 - if (msi < irq_get_nr_irqs() && irq_get_chip_data(msi) == msic) { 116 - generic_handle_irq(msi); 117 - msic->fifo_virt[idx] = cpu_to_le32(0xffffffff); 118 - } else { 119 - /* 120 - * Reading the MSIC_WRITE_OFFSET_REG does not 121 - * reliably flush the outstanding DMA to the 122 - * FIFO buffer. 
Here we were reading stale 123 - * data, so we need to retry. 124 - */ 125 - udelay(1); 126 - retry++; 127 - pr_devel("axon_msi: invalid irq 0x%x!\n", msi); 128 - continue; 129 - } 130 - 131 - if (retry) { 132 - pr_devel("axon_msi: late irq 0x%x, retry %d\n", 133 - msi, retry); 134 - retry = 0; 135 - } 136 - 137 - msic->read_offset += MSIC_FIFO_ENTRY_SIZE; 138 - msic->read_offset &= MSIC_FIFO_SIZE_MASK; 139 - } 140 - 141 - if (retry) { 142 - printk(KERN_WARNING "axon_msi: irq timed out\n"); 143 - 144 - msic->read_offset += MSIC_FIFO_ENTRY_SIZE; 145 - msic->read_offset &= MSIC_FIFO_SIZE_MASK; 146 - } 147 - 148 - chip->irq_eoi(&desc->irq_data); 149 - } 150 - 151 - static struct axon_msic *find_msi_translator(struct pci_dev *dev) 152 - { 153 - struct irq_domain *irq_domain; 154 - struct device_node *dn, *tmp; 155 - const phandle *ph; 156 - struct axon_msic *msic = NULL; 157 - 158 - dn = of_node_get(pci_device_to_OF_node(dev)); 159 - if (!dn) { 160 - dev_dbg(&dev->dev, "axon_msi: no pci_dn found\n"); 161 - return NULL; 162 - } 163 - 164 - for (; dn; dn = of_get_next_parent(dn)) { 165 - ph = of_get_property(dn, "msi-translator", NULL); 166 - if (ph) 167 - break; 168 - } 169 - 170 - if (!ph) { 171 - dev_dbg(&dev->dev, 172 - "axon_msi: no msi-translator property found\n"); 173 - goto out_error; 174 - } 175 - 176 - tmp = dn; 177 - dn = of_find_node_by_phandle(*ph); 178 - of_node_put(tmp); 179 - if (!dn) { 180 - dev_dbg(&dev->dev, 181 - "axon_msi: msi-translator doesn't point to a node\n"); 182 - goto out_error; 183 - } 184 - 185 - irq_domain = irq_find_host(dn); 186 - if (!irq_domain) { 187 - dev_dbg(&dev->dev, "axon_msi: no irq_domain found for node %pOF\n", 188 - dn); 189 - goto out_error; 190 - } 191 - 192 - msic = irq_domain->host_data; 193 - 194 - out_error: 195 - of_node_put(dn); 196 - 197 - return msic; 198 - } 199 - 200 - static int setup_msi_msg_address(struct pci_dev *dev, struct msi_msg *msg) 201 - { 202 - struct device_node *dn; 203 - int len; 204 - const u32 
*prop; 205 - 206 - dn = of_node_get(pci_device_to_OF_node(dev)); 207 - if (!dn) { 208 - dev_dbg(&dev->dev, "axon_msi: no pci_dn found\n"); 209 - return -ENODEV; 210 - } 211 - 212 - for (; dn; dn = of_get_next_parent(dn)) { 213 - if (!dev->no_64bit_msi) { 214 - prop = of_get_property(dn, "msi-address-64", &len); 215 - if (prop) 216 - break; 217 - } 218 - 219 - prop = of_get_property(dn, "msi-address-32", &len); 220 - if (prop) 221 - break; 222 - } 223 - 224 - if (!prop) { 225 - dev_dbg(&dev->dev, 226 - "axon_msi: no msi-address-(32|64) properties found\n"); 227 - of_node_put(dn); 228 - return -ENOENT; 229 - } 230 - 231 - switch (len) { 232 - case 8: 233 - msg->address_hi = prop[0]; 234 - msg->address_lo = prop[1]; 235 - break; 236 - case 4: 237 - msg->address_hi = 0; 238 - msg->address_lo = prop[0]; 239 - break; 240 - default: 241 - dev_dbg(&dev->dev, 242 - "axon_msi: malformed msi-address-(32|64) property\n"); 243 - of_node_put(dn); 244 - return -EINVAL; 245 - } 246 - 247 - of_node_put(dn); 248 - 249 - return 0; 250 - } 251 - 252 - static int axon_msi_setup_msi_irqs(struct pci_dev *dev, int nvec, int type) 253 - { 254 - unsigned int virq, rc; 255 - struct msi_desc *entry; 256 - struct msi_msg msg; 257 - struct axon_msic *msic; 258 - 259 - msic = find_msi_translator(dev); 260 - if (!msic) 261 - return -ENODEV; 262 - 263 - rc = setup_msi_msg_address(dev, &msg); 264 - if (rc) 265 - return rc; 266 - 267 - msi_for_each_desc(entry, &dev->dev, MSI_DESC_NOTASSOCIATED) { 268 - virq = irq_create_direct_mapping(msic->irq_domain); 269 - if (!virq) { 270 - dev_warn(&dev->dev, 271 - "axon_msi: virq allocation failed!\n"); 272 - return -1; 273 - } 274 - dev_dbg(&dev->dev, "axon_msi: allocated virq 0x%x\n", virq); 275 - 276 - irq_set_msi_desc(virq, entry); 277 - msg.data = virq; 278 - pci_write_msi_msg(virq, &msg); 279 - } 280 - 281 - return 0; 282 - } 283 - 284 - static void axon_msi_teardown_msi_irqs(struct pci_dev *dev) 285 - { 286 - struct msi_desc *entry; 287 - 288 - 
dev_dbg(&dev->dev, "axon_msi: tearing down msi irqs\n"); 289 - 290 - msi_for_each_desc(entry, &dev->dev, MSI_DESC_ASSOCIATED) { 291 - irq_set_msi_desc(entry->irq, NULL); 292 - irq_dispose_mapping(entry->irq); 293 - entry->irq = 0; 294 - } 295 - } 296 - 297 - static struct irq_chip msic_irq_chip = { 298 - .irq_mask = pci_msi_mask_irq, 299 - .irq_unmask = pci_msi_unmask_irq, 300 - .irq_shutdown = pci_msi_mask_irq, 301 - .name = "AXON-MSI", 302 - }; 303 - 304 - static int msic_host_map(struct irq_domain *h, unsigned int virq, 305 - irq_hw_number_t hw) 306 - { 307 - irq_set_chip_data(virq, h->host_data); 308 - irq_set_chip_and_handler(virq, &msic_irq_chip, handle_simple_irq); 309 - 310 - return 0; 311 - } 312 - 313 - static const struct irq_domain_ops msic_host_ops = { 314 - .map = msic_host_map, 315 - }; 316 - 317 - static void axon_msi_shutdown(struct platform_device *device) 318 - { 319 - struct axon_msic *msic = dev_get_drvdata(&device->dev); 320 - u32 tmp; 321 - 322 - pr_devel("axon_msi: disabling %pOF\n", 323 - irq_domain_get_of_node(msic->irq_domain)); 324 - tmp = dcr_read(msic->dcr_host, MSIC_CTRL_REG); 325 - tmp &= ~MSIC_CTRL_ENABLE & ~MSIC_CTRL_IRQ_ENABLE; 326 - msic_dcr_write(msic, MSIC_CTRL_REG, tmp); 327 - } 328 - 329 - static int axon_msi_probe(struct platform_device *device) 330 - { 331 - struct device_node *dn = device->dev.of_node; 332 - struct axon_msic *msic; 333 - unsigned int virq; 334 - int dcr_base, dcr_len; 335 - 336 - pr_devel("axon_msi: setting up dn %pOF\n", dn); 337 - 338 - msic = kzalloc(sizeof(*msic), GFP_KERNEL); 339 - if (!msic) { 340 - printk(KERN_ERR "axon_msi: couldn't allocate msic for %pOF\n", 341 - dn); 342 - goto out; 343 - } 344 - 345 - dcr_base = dcr_resource_start(dn, 0); 346 - dcr_len = dcr_resource_len(dn, 0); 347 - 348 - if (dcr_base == 0 || dcr_len == 0) { 349 - printk(KERN_ERR 350 - "axon_msi: couldn't parse dcr properties on %pOF\n", 351 - dn); 352 - goto out_free_msic; 353 - } 354 - 355 - msic->dcr_host = dcr_map(dn, 
dcr_base, dcr_len); 356 - if (!DCR_MAP_OK(msic->dcr_host)) { 357 - printk(KERN_ERR "axon_msi: dcr_map failed for %pOF\n", 358 - dn); 359 - goto out_free_msic; 360 - } 361 - 362 - msic->fifo_virt = dma_alloc_coherent(&device->dev, MSIC_FIFO_SIZE_BYTES, 363 - &msic->fifo_phys, GFP_KERNEL); 364 - if (!msic->fifo_virt) { 365 - printk(KERN_ERR "axon_msi: couldn't allocate fifo for %pOF\n", 366 - dn); 367 - goto out_free_msic; 368 - } 369 - 370 - virq = irq_of_parse_and_map(dn, 0); 371 - if (!virq) { 372 - printk(KERN_ERR "axon_msi: irq parse and map failed for %pOF\n", 373 - dn); 374 - goto out_free_fifo; 375 - } 376 - memset(msic->fifo_virt, 0xff, MSIC_FIFO_SIZE_BYTES); 377 - 378 - /* We rely on being able to stash a virq in a u16, so limit irqs to < 65536 */ 379 - msic->irq_domain = irq_domain_add_nomap(dn, 65536, &msic_host_ops, msic); 380 - if (!msic->irq_domain) { 381 - printk(KERN_ERR "axon_msi: couldn't allocate irq_domain for %pOF\n", 382 - dn); 383 - goto out_free_fifo; 384 - } 385 - 386 - irq_set_handler_data(virq, msic); 387 - irq_set_chained_handler(virq, axon_msi_cascade); 388 - pr_devel("axon_msi: irq 0x%x setup for axon_msi\n", virq); 389 - 390 - /* Enable the MSIC hardware */ 391 - msic_dcr_write(msic, MSIC_BASE_ADDR_HI_REG, msic->fifo_phys >> 32); 392 - msic_dcr_write(msic, MSIC_BASE_ADDR_LO_REG, 393 - msic->fifo_phys & 0xFFFFFFFF); 394 - msic_dcr_write(msic, MSIC_CTRL_REG, 395 - MSIC_CTRL_IRQ_ENABLE | MSIC_CTRL_ENABLE | 396 - MSIC_CTRL_FIFO_SIZE); 397 - 398 - msic->read_offset = dcr_read(msic->dcr_host, MSIC_WRITE_OFFSET_REG) 399 - & MSIC_FIFO_SIZE_MASK; 400 - 401 - dev_set_drvdata(&device->dev, msic); 402 - 403 - cell_pci_controller_ops.setup_msi_irqs = axon_msi_setup_msi_irqs; 404 - cell_pci_controller_ops.teardown_msi_irqs = axon_msi_teardown_msi_irqs; 405 - 406 - axon_msi_debug_setup(dn, msic); 407 - 408 - printk(KERN_DEBUG "axon_msi: setup MSIC on %pOF\n", dn); 409 - 410 - return 0; 411 - 412 - out_free_fifo: 413 - dma_free_coherent(&device->dev, 
MSIC_FIFO_SIZE_BYTES, msic->fifo_virt, 414 - msic->fifo_phys); 415 - out_free_msic: 416 - kfree(msic); 417 - out: 418 - 419 - return -1; 420 - } 421 - 422 - static const struct of_device_id axon_msi_device_id[] = { 423 - { 424 - .compatible = "ibm,axon-msic" 425 - }, 426 - {} 427 - }; 428 - 429 - static struct platform_driver axon_msi_driver = { 430 - .probe = axon_msi_probe, 431 - .shutdown = axon_msi_shutdown, 432 - .driver = { 433 - .name = "axon-msi", 434 - .of_match_table = axon_msi_device_id, 435 - }, 436 - }; 437 - 438 - static int __init axon_msi_init(void) 439 - { 440 - return platform_driver_register(&axon_msi_driver); 441 - } 442 - subsys_initcall(axon_msi_init); 443 - 444 - 445 - #ifdef DEBUG 446 - static int msic_set(void *data, u64 val) 447 - { 448 - struct axon_msic *msic = data; 449 - out_le32(msic->trigger, val); 450 - return 0; 451 - } 452 - 453 - static int msic_get(void *data, u64 *val) 454 - { 455 - *val = 0; 456 - return 0; 457 - } 458 - 459 - DEFINE_SIMPLE_ATTRIBUTE(fops_msic, msic_get, msic_set, "%llu\n"); 460 - 461 - void axon_msi_debug_setup(struct device_node *dn, struct axon_msic *msic) 462 - { 463 - char name[8]; 464 - struct resource res; 465 - 466 - if (of_address_to_resource(dn, 0, &res)) { 467 - pr_devel("axon_msi: couldn't get reg property\n"); 468 - return; 469 - } 470 - 471 - msic->trigger = ioremap(res.start, 0x4); 472 - if (!msic->trigger) { 473 - pr_devel("axon_msi: ioremap failed\n"); 474 - return; 475 - } 476 - 477 - snprintf(name, sizeof(name), "msic_%d", of_node_to_nid(dn)); 478 - 479 - debugfs_create_file(name, 0600, arch_debugfs_dir, msic, &fops_msic); 480 - } 481 - #endif /* DEBUG */
-106
arch/powerpc/platforms/cell/cbe_powerbutton.c
// SPDX-License-Identifier: GPL-2.0-or-later
/*
 * driver for powerbutton on IBM cell blades
 *
 * (C) Copyright IBM Corp. 2005-2008
 *
 * Author: Christian Krafft <krafft@de.ibm.com>
 */

#include <linux/input.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/platform_device.h>
#include <asm/pmi.h>

static struct input_dev *button_dev;
static struct platform_device *button_pdev;

static void cbe_powerbutton_handle_pmi(pmi_message_t pmi_msg)
{
	BUG_ON(pmi_msg.type != PMI_TYPE_POWER_BUTTON);

	input_report_key(button_dev, KEY_POWER, 1);
	input_sync(button_dev);
	input_report_key(button_dev, KEY_POWER, 0);
	input_sync(button_dev);
}

static struct pmi_handler cbe_pmi_handler = {
	.type = PMI_TYPE_POWER_BUTTON,
	.handle_pmi_message = cbe_powerbutton_handle_pmi,
};

static int __init cbe_powerbutton_init(void)
{
	int ret = 0;
	struct input_dev *dev;

	if (!of_machine_is_compatible("IBM,CBPLUS-1.0")) {
		printk(KERN_ERR "%s: Not a cell blade.\n", __func__);
		ret = -ENODEV;
		goto out;
	}

	dev = input_allocate_device();
	if (!dev) {
		ret = -ENOMEM;
		printk(KERN_ERR "%s: Not enough memory.\n", __func__);
		goto out;
	}

	set_bit(EV_KEY, dev->evbit);
	set_bit(KEY_POWER, dev->keybit);

	dev->name = "Power Button";
	dev->id.bustype = BUS_HOST;

	/* this makes the button look like an acpi power button
	 * no clue whether anyone relies on that though */
	dev->id.product = 0x02;
	dev->phys = "LNXPWRBN/button/input0";

	button_pdev = platform_device_register_simple("power_button", 0, NULL, 0);
	if (IS_ERR(button_pdev)) {
		ret = PTR_ERR(button_pdev);
		goto out_free_input;
	}

	dev->dev.parent = &button_pdev->dev;
	ret = input_register_device(dev);
	if (ret) {
		printk(KERN_ERR "%s: Failed to register device\n", __func__);
		goto out_free_pdev;
	}

	button_dev = dev;

	ret = pmi_register_handler(&cbe_pmi_handler);
	if (ret) {
		printk(KERN_ERR "%s: Failed to register with pmi.\n", __func__);
		goto out_free_pdev;
	}

	goto out;

out_free_pdev:
	platform_device_unregister(button_pdev);
out_free_input:
	input_free_device(dev);
out:
	return ret;
}

static void __exit cbe_powerbutton_exit(void)
{
	pmi_unregister_handler(&cbe_pmi_handler);
	platform_device_unregister(button_pdev);
	input_free_device(button_dev);
}

module_init(cbe_powerbutton_init);
module_exit(cbe_powerbutton_exit);

MODULE_DESCRIPTION("Driver for powerbutton on IBM cell blades");
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Christian Krafft <krafft@de.ibm.com>");
-298
arch/powerpc/platforms/cell/cbe_regs.c
// SPDX-License-Identifier: GPL-2.0-only
/*
 * cbe_regs.c
 *
 * Accessor routines for the various MMIO register blocks of the CBE
 *
 * (c) 2006 Benjamin Herrenschmidt <benh@kernel.crashing.org>, IBM Corp.
 */

#include <linux/percpu.h>
#include <linux/types.h>
#include <linux/export.h>
#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/pgtable.h>

#include <asm/io.h>
#include <asm/ptrace.h>
#include <asm/cell-regs.h>

/*
 * Current implementation uses "cpu" nodes. We build our own mapping
 * array of cpu numbers to cpu nodes locally for now to allow interrupt
 * time code to have a fast path rather than call of_get_cpu_node(). If
 * we implement cpu hotplug, we'll have to install an appropriate notifier
 * in order to release references to the cpu going away
 */
static struct cbe_regs_map
{
	struct device_node *cpu_node;
	struct device_node *be_node;
	struct cbe_pmd_regs __iomem *pmd_regs;
	struct cbe_iic_regs __iomem *iic_regs;
	struct cbe_mic_tm_regs __iomem *mic_tm_regs;
	struct cbe_pmd_shadow_regs pmd_shadow_regs;
} cbe_regs_maps[MAX_CBE];
static int cbe_regs_map_count;

static struct cbe_thread_map
{
	struct device_node *cpu_node;
	struct device_node *be_node;
	struct cbe_regs_map *regs;
	unsigned int thread_id;
	unsigned int cbe_id;
} cbe_thread_map[NR_CPUS];

static cpumask_t cbe_local_mask[MAX_CBE] = { [0 ... MAX_CBE-1] = {CPU_BITS_NONE} };
static cpumask_t cbe_first_online_cpu = { CPU_BITS_NONE };

static struct cbe_regs_map *cbe_find_map(struct device_node *np)
{
	int i;
	struct device_node *tmp_np;

	if (!of_node_is_type(np, "spe")) {
		for (i = 0; i < cbe_regs_map_count; i++)
			if (cbe_regs_maps[i].cpu_node == np ||
			    cbe_regs_maps[i].be_node == np)
				return &cbe_regs_maps[i];
		return NULL;
	}

	if (np->data)
		return np->data;

	/* walk up path until cpu or be node was found */
	tmp_np = np;
	do {
		tmp_np = tmp_np->parent;
		/* on a correct devicetree we wont get up to root */
		BUG_ON(!tmp_np);
	} while (!of_node_is_type(tmp_np, "cpu") ||
		 !of_node_is_type(tmp_np, "be"));

	np->data = cbe_find_map(tmp_np);

	return np->data;
}

struct cbe_pmd_regs __iomem *cbe_get_pmd_regs(struct device_node *np)
{
	struct cbe_regs_map *map = cbe_find_map(np);
	if (map == NULL)
		return NULL;
	return map->pmd_regs;
}
EXPORT_SYMBOL_GPL(cbe_get_pmd_regs);

struct cbe_pmd_regs __iomem *cbe_get_cpu_pmd_regs(int cpu)
{
	struct cbe_regs_map *map = cbe_thread_map[cpu].regs;
	if (map == NULL)
		return NULL;
	return map->pmd_regs;
}
EXPORT_SYMBOL_GPL(cbe_get_cpu_pmd_regs);

struct cbe_pmd_shadow_regs *cbe_get_pmd_shadow_regs(struct device_node *np)
{
	struct cbe_regs_map *map = cbe_find_map(np);
	if (map == NULL)
		return NULL;
	return &map->pmd_shadow_regs;
}

struct cbe_pmd_shadow_regs *cbe_get_cpu_pmd_shadow_regs(int cpu)
{
	struct cbe_regs_map *map = cbe_thread_map[cpu].regs;
	if (map == NULL)
		return NULL;
	return &map->pmd_shadow_regs;
}

struct cbe_iic_regs __iomem *cbe_get_iic_regs(struct device_node *np)
{
	struct cbe_regs_map *map = cbe_find_map(np);
	if (map == NULL)
		return NULL;
	return map->iic_regs;
}

struct cbe_iic_regs __iomem *cbe_get_cpu_iic_regs(int cpu)
{
	struct cbe_regs_map *map = cbe_thread_map[cpu].regs;
	if (map == NULL)
		return NULL;
	return map->iic_regs;
}

struct cbe_mic_tm_regs __iomem *cbe_get_mic_tm_regs(struct device_node *np)
{
	struct cbe_regs_map *map = cbe_find_map(np);
	if (map == NULL)
		return NULL;
	return map->mic_tm_regs;
}

struct cbe_mic_tm_regs __iomem *cbe_get_cpu_mic_tm_regs(int cpu)
{
	struct cbe_regs_map *map = cbe_thread_map[cpu].regs;
	if (map == NULL)
		return NULL;
	return map->mic_tm_regs;
}
EXPORT_SYMBOL_GPL(cbe_get_cpu_mic_tm_regs);

u32 cbe_get_hw_thread_id(int cpu)
{
	return cbe_thread_map[cpu].thread_id;
}
EXPORT_SYMBOL_GPL(cbe_get_hw_thread_id);

u32 cbe_cpu_to_node(int cpu)
{
	return cbe_thread_map[cpu].cbe_id;
}
EXPORT_SYMBOL_GPL(cbe_cpu_to_node);

u32 cbe_node_to_cpu(int node)
{
	return cpumask_first(&cbe_local_mask[node]);
}
EXPORT_SYMBOL_GPL(cbe_node_to_cpu);

static struct device_node *__init cbe_get_be_node(int cpu_id)
{
	struct device_node *np;

	for_each_node_by_type (np, "be") {
		int len,i;
		const phandle *cpu_handle;

		cpu_handle = of_get_property(np, "cpus", &len);

		/*
		 * the CAB SLOF tree is non compliant, so we just assume
		 * there is only one node
		 */
		if (WARN_ON_ONCE(!cpu_handle))
			return np;

		for (i = 0; i < len; i++) {
			struct device_node *ch_np = of_find_node_by_phandle(cpu_handle[i]);
			struct device_node *ci_np = of_get_cpu_node(cpu_id, NULL);

			of_node_put(ch_np);
			of_node_put(ci_np);

			if (ch_np == ci_np)
				return np;
		}
	}

	return NULL;
}

static void __init cbe_fill_regs_map(struct cbe_regs_map *map)
{
	if(map->be_node) {
		struct device_node *be, *np, *parent_np;

		be = map->be_node;

		for_each_node_by_type(np, "pervasive") {
			parent_np = of_get_parent(np);
			if (parent_np == be)
				map->pmd_regs = of_iomap(np, 0);
			of_node_put(parent_np);
		}

		for_each_node_by_type(np, "CBEA-Internal-Interrupt-Controller") {
			parent_np = of_get_parent(np);
			if (parent_np == be)
				map->iic_regs = of_iomap(np, 2);
			of_node_put(parent_np);
		}

		for_each_node_by_type(np, "mic-tm") {
			parent_np = of_get_parent(np);
			if (parent_np == be)
				map->mic_tm_regs = of_iomap(np, 0);
			of_node_put(parent_np);
		}
	} else {
		struct device_node *cpu;
		/* That hack must die die die ! */
		const struct address_prop {
			unsigned long address;
			unsigned int len;
		} __attribute__((packed)) *prop;

		cpu = map->cpu_node;

		prop = of_get_property(cpu, "pervasive", NULL);
		if (prop != NULL)
			map->pmd_regs = ioremap(prop->address, prop->len);

		prop = of_get_property(cpu, "iic", NULL);
		if (prop != NULL)
			map->iic_regs = ioremap(prop->address, prop->len);

		prop = of_get_property(cpu, "mic-tm", NULL);
		if (prop != NULL)
			map->mic_tm_regs = ioremap(prop->address, prop->len);
	}
}


void __init cbe_regs_init(void)
{
	int i;
	unsigned int thread_id;
	struct device_node *cpu;

	/* Build local fast map of CPUs */
	for_each_possible_cpu(i) {
		cbe_thread_map[i].cpu_node = of_get_cpu_node(i, &thread_id);
		cbe_thread_map[i].be_node = cbe_get_be_node(i);
		cbe_thread_map[i].thread_id = thread_id;
	}

	/* Find maps for each device tree CPU */
	for_each_node_by_type(cpu, "cpu") {
		struct cbe_regs_map *map;
		unsigned int cbe_id;

		cbe_id = cbe_regs_map_count++;
		map = &cbe_regs_maps[cbe_id];

		if (cbe_regs_map_count > MAX_CBE) {
			printk(KERN_ERR "cbe_regs: More BE chips than supported!\n");
			cbe_regs_map_count--;
			of_node_put(cpu);
			return;
		}
		of_node_put(map->cpu_node);
		map->cpu_node = of_node_get(cpu);

		for_each_possible_cpu(i) {
			struct cbe_thread_map *thread = &cbe_thread_map[i];

			if (thread->cpu_node == cpu) {
				thread->regs = map;
				thread->cbe_id = cbe_id;
				map->be_node = thread->be_node;
				cpumask_set_cpu(i, &cbe_local_mask[cbe_id]);
				if(thread->thread_id == 0)
					cpumask_set_cpu(i, &cbe_first_online_cpu);
			}
		}

		cbe_fill_regs_map(map);
	}
}
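In cbe_fill_regs_map() above, the legacy (non-"be") path reinterprets raw device-tree property bytes as a packed {address, len} pair; the __attribute__((packed)) is what keeps the struct congruent with the unpadded property blob. A minimal userspace sketch of why the packing matters, with field widths chosen to match a 64-bit kernel (an assumption for illustration):

```c
#include <assert.h>
#include <stdint.h>

/* Mirrors the driver's view of the "pervasive"/"iic"/"mic-tm" properties:
 * a 64-bit address immediately followed by a 32-bit length, 12 bytes total,
 * with no padding in between. */
struct address_prop {
	uint64_t address;
	uint32_t len;
} __attribute__((packed));

/* The same fields without packing: the compiler may pad the struct out to
 * a multiple of the 8-byte alignment of `address`, so overlaying it on the
 * raw 12-byte property bytes could misread `len`. */
struct address_prop_padded {
	uint64_t address;
	uint32_t len;
};
```

The packed layout is exactly the size of the property data, while the padded one is allowed to grow, which is why the driver comment calls the raw-property scheme a hack that has to be matched byte-for-byte.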
-387
arch/powerpc/platforms/cell/cbe_thermal.c
// SPDX-License-Identifier: GPL-2.0-or-later
/*
 * thermal support for the cell processor
 *
 * This module adds some sysfs attributes to cpu and spu nodes.
 * Base for measurements are the digital thermal sensors (DTS)
 * located on the chip.
 * The accuracy is 2 degrees, starting from 65 up to 125 degrees celsius
 * The attributes can be found under
 * /sys/devices/system/cpu/cpuX/thermal
 * /sys/devices/system/spu/spuX/thermal
 *
 * The following attributes are added for each node:
 * temperature:
 *	contains the current temperature measured by the DTS
 * throttle_begin:
 *	throttling begins when temperature is greater or equal to
 *	throttle_begin. Setting this value to 125 prevents throttling.
 * throttle_end:
 *	throttling is being ceased, if the temperature is lower than
 *	throttle_end. Due to a delay between applying throttling and
 *	a reduced temperature this value should be less than throttle_begin.
 *	A value equal to throttle_begin provides only a very little hysteresis.
 * throttle_full_stop:
 *	If the temperatrue is greater or equal to throttle_full_stop,
 *	full throttling is applied to the cpu or spu. This value should be
 *	greater than throttle_begin and throttle_end. Setting this value to
 *	65 prevents the unit from running code at all.
 *
 * (C) Copyright IBM Deutschland Entwicklung GmbH 2005
 *
 * Author: Christian Krafft <krafft@de.ibm.com>
 */

#include <linux/module.h>
#include <linux/device.h>
#include <linux/kernel.h>
#include <linux/cpu.h>
#include <linux/stringify.h>
#include <asm/spu.h>
#include <asm/io.h>
#include <asm/cell-regs.h>

#include "spu_priv1_mmio.h"

#define TEMP_MIN 65
#define TEMP_MAX 125

#define DEVICE_PREFIX_ATTR(_prefix,_name,_mode)			\
struct device_attribute attr_ ## _prefix ## _ ## _name = {	\
	.attr = { .name = __stringify(_name), .mode = _mode },	\
	.show	= _prefix ## _show_ ## _name,			\
	.store	= _prefix ## _store_ ## _name,			\
};

static inline u8 reg_to_temp(u8 reg_value)
{
	return ((reg_value & 0x3f) << 1) + TEMP_MIN;
}

static inline u8 temp_to_reg(u8 temp)
{
	return ((temp - TEMP_MIN) >> 1) & 0x3f;
}

static struct cbe_pmd_regs __iomem *get_pmd_regs(struct device *dev)
{
	struct spu *spu;

	spu = container_of(dev, struct spu, dev);

	return cbe_get_pmd_regs(spu_devnode(spu));
}

/* returns the value for a given spu in a given register */
static u8 spu_read_register_value(struct device *dev, union spe_reg __iomem *reg)
{
	union spe_reg value;
	struct spu *spu;

	spu = container_of(dev, struct spu, dev);
	value.val = in_be64(&reg->val);

	return value.spe[spu->spe_id];
}

static ssize_t spu_show_temp(struct device *dev, struct device_attribute *attr,
			char *buf)
{
	u8 value;
	struct cbe_pmd_regs __iomem *pmd_regs;

	pmd_regs = get_pmd_regs(dev);

	value = spu_read_register_value(dev, &pmd_regs->ts_ctsr1);

	return sprintf(buf, "%d\n", reg_to_temp(value));
}

static ssize_t show_throttle(struct cbe_pmd_regs __iomem *pmd_regs, char *buf, int pos)
{
	u64 value;

	value = in_be64(&pmd_regs->tm_tpr.val);
	/* access the corresponding byte */
	value >>= pos;
	value &= 0x3F;

	return sprintf(buf, "%d\n", reg_to_temp(value));
}

static ssize_t store_throttle(struct cbe_pmd_regs __iomem *pmd_regs, const char *buf, size_t size, int pos)
{
	u64 reg_value;
	unsigned int temp;
	u64 new_value;
	int ret;

	ret = sscanf(buf, "%u", &temp);

	if (ret != 1 || temp < TEMP_MIN || temp > TEMP_MAX)
		return -EINVAL;

	new_value = temp_to_reg(temp);

	reg_value = in_be64(&pmd_regs->tm_tpr.val);

	/* zero out bits for new value */
	reg_value &= ~(0xffull << pos);
	/* set bits to new value */
	reg_value |= new_value << pos;

	out_be64(&pmd_regs->tm_tpr.val, reg_value);
	return size;
}

static ssize_t spu_show_throttle_end(struct device *dev,
		struct device_attribute *attr, char *buf)
{
	return show_throttle(get_pmd_regs(dev), buf, 0);
}

static ssize_t spu_show_throttle_begin(struct device *dev,
		struct device_attribute *attr, char *buf)
{
	return show_throttle(get_pmd_regs(dev), buf, 8);
}

static ssize_t spu_show_throttle_full_stop(struct device *dev,
		struct device_attribute *attr, char *buf)
{
	return show_throttle(get_pmd_regs(dev), buf, 16);
}

static ssize_t spu_store_throttle_end(struct device *dev,
		struct device_attribute *attr, const char *buf, size_t size)
{
	return store_throttle(get_pmd_regs(dev), buf, size, 0);
}

static ssize_t spu_store_throttle_begin(struct device *dev,
		struct device_attribute *attr, const char *buf, size_t size)
{
	return store_throttle(get_pmd_regs(dev), buf, size, 8);
}

static ssize_t spu_store_throttle_full_stop(struct device *dev,
		struct device_attribute *attr, const char *buf, size_t size)
{
	return store_throttle(get_pmd_regs(dev), buf, size, 16);
}

static ssize_t ppe_show_temp(struct device *dev, char *buf, int pos)
{
	struct cbe_pmd_regs __iomem *pmd_regs;
	u64 value;

	pmd_regs = cbe_get_cpu_pmd_regs(dev->id);
	value = in_be64(&pmd_regs->ts_ctsr2);

	value = (value >> pos) & 0x3f;

	return sprintf(buf, "%d\n", reg_to_temp(value));
}


/* shows the temperature of the DTS on the PPE,
 * located near the linear thermal sensor */
static ssize_t ppe_show_temp0(struct device *dev,
		struct device_attribute *attr, char *buf)
{
	return ppe_show_temp(dev, buf, 32);
}

/* shows the temperature of the second DTS on the PPE */
static ssize_t ppe_show_temp1(struct device *dev,
		struct device_attribute *attr, char *buf)
{
	return ppe_show_temp(dev, buf, 0);
}

static ssize_t ppe_show_throttle_end(struct device *dev,
		struct device_attribute *attr, char *buf)
{
	return show_throttle(cbe_get_cpu_pmd_regs(dev->id), buf, 32);
}

static ssize_t ppe_show_throttle_begin(struct device *dev,
		struct device_attribute *attr, char *buf)
{
	return show_throttle(cbe_get_cpu_pmd_regs(dev->id), buf, 40);
}

static ssize_t ppe_show_throttle_full_stop(struct device *dev,
		struct device_attribute *attr, char *buf)
{
	return show_throttle(cbe_get_cpu_pmd_regs(dev->id), buf, 48);
}

static ssize_t ppe_store_throttle_end(struct device *dev,
		struct device_attribute *attr, const char *buf, size_t size)
{
	return store_throttle(cbe_get_cpu_pmd_regs(dev->id), buf, size, 32);
}

static ssize_t ppe_store_throttle_begin(struct device *dev,
		struct device_attribute *attr, const char *buf, size_t size)
{
	return store_throttle(cbe_get_cpu_pmd_regs(dev->id), buf, size, 40);
}

static ssize_t ppe_store_throttle_full_stop(struct device *dev,
		struct device_attribute *attr, const char *buf, size_t size)
{
	return store_throttle(cbe_get_cpu_pmd_regs(dev->id), buf, size, 48);
}


static struct device_attribute attr_spu_temperature = {
	.attr = {.name = "temperature", .mode = 0400 },
	.show = spu_show_temp,
};

static DEVICE_PREFIX_ATTR(spu, throttle_end, 0600);
static DEVICE_PREFIX_ATTR(spu, throttle_begin, 0600);
static DEVICE_PREFIX_ATTR(spu, throttle_full_stop, 0600);


static struct attribute *spu_attributes[] = {
	&attr_spu_temperature.attr,
	&attr_spu_throttle_end.attr,
	&attr_spu_throttle_begin.attr,
	&attr_spu_throttle_full_stop.attr,
	NULL,
};

static const struct attribute_group spu_attribute_group = {
	.name	= "thermal",
	.attrs	= spu_attributes,
};

static struct device_attribute attr_ppe_temperature0 = {
	.attr = {.name = "temperature0", .mode = 0400 },
	.show = ppe_show_temp0,
};

static struct device_attribute attr_ppe_temperature1 = {
	.attr = {.name = "temperature1", .mode = 0400 },
	.show = ppe_show_temp1,
};

static DEVICE_PREFIX_ATTR(ppe, throttle_end, 0600);
static DEVICE_PREFIX_ATTR(ppe, throttle_begin, 0600);
static DEVICE_PREFIX_ATTR(ppe, throttle_full_stop, 0600);

static struct attribute *ppe_attributes[] = {
	&attr_ppe_temperature0.attr,
	&attr_ppe_temperature1.attr,
	&attr_ppe_throttle_end.attr,
	&attr_ppe_throttle_begin.attr,
	&attr_ppe_throttle_full_stop.attr,
	NULL,
};

static struct attribute_group ppe_attribute_group = {
	.name	= "thermal",
	.attrs	= ppe_attributes,
};

/*
 * initialize throttling with default values
 */
static int __init init_default_values(void)
{
	int cpu;
	struct cbe_pmd_regs __iomem *pmd_regs;
	struct device *dev;
	union ppe_spe_reg tpr;
	union spe_reg str1;
	u64 str2;
	union spe_reg cr1;
	u64 cr2;

	/* TPR defaults */
	/* ppe
	 *	1F - no full stop
	 *	08 - dynamic throttling starts if over 80 degrees
	 *	03 - dynamic throttling ceases if below 70 degrees */
	tpr.ppe = 0x1F0803;
	/* spe
	 *	10 - full stopped when over 96 degrees
	 *	08 - dynamic throttling starts if over 80 degrees
	 *	03 - dynamic throttling ceases if below 70 degrees
	 */
	tpr.spe = 0x100803;

	/* STR defaults */
	/* str1
	 *	10 - stop 16 of 32 cycles
	 */
	str1.val = 0x1010101010101010ull;
	/* str2
	 *	10 - stop 16 of 32 cycles
	 */
	str2 = 0x10;

	/* CR defaults */
	/* cr1
	 *	4 - normal operation
	 */
	cr1.val = 0x0404040404040404ull;
	/* cr2
	 *	4 - normal operation
	 */
	cr2 = 0x04;

	for_each_possible_cpu (cpu) {
		pr_debug("processing cpu %d\n", cpu);
		dev = get_cpu_device(cpu);

		if (!dev) {
			pr_info("invalid dev pointer for cbe_thermal\n");
			return -EINVAL;
		}

		pmd_regs = cbe_get_cpu_pmd_regs(dev->id);

		if (!pmd_regs) {
			pr_info("invalid CBE regs pointer for cbe_thermal\n");
			return -EINVAL;
		}

		out_be64(&pmd_regs->tm_str2, str2);
		out_be64(&pmd_regs->tm_str1.val, str1.val);
		out_be64(&pmd_regs->tm_tpr.val, tpr.val);
		out_be64(&pmd_regs->tm_cr1.val, cr1.val);
		out_be64(&pmd_regs->tm_cr2, cr2);
	}

	return 0;
}


static int __init thermal_init(void)
{
	int rc = init_default_values();

	if (rc == 0) {
		spu_add_dev_attr_group(&spu_attribute_group);
		cpu_add_dev_attr_group(&ppe_attribute_group);
	}

	return rc;
}
module_init(thermal_init);

static void __exit thermal_exit(void)
{
	spu_remove_dev_attr_group(&spu_attribute_group);
	cpu_remove_dev_attr_group(&ppe_attribute_group);
}
module_exit(thermal_exit);

MODULE_DESCRIPTION("Cell processor thermal driver");
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Christian Krafft <krafft@de.ibm.com>");
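The thermal driver's reg_to_temp()/temp_to_reg() helpers encode the DTS reading in the low 6 bits of a register byte, in 2 °C steps starting at 65 °C, which is why the file header describes 2-degree accuracy over the 65-125 °C range. A standalone sketch of that conversion (plain C outside the kernel; names copied from the driver for illustration):

```c
#include <stdint.h>

#define TEMP_MIN 65	/* lowest temperature the DTS can report, in degrees C */
#define TEMP_MAX 125	/* highest reportable temperature */

/* low 6 bits of the register byte -> temperature in degrees C (2 C steps) */
static inline uint8_t reg_to_temp(uint8_t reg_value)
{
	return ((reg_value & 0x3f) << 1) + TEMP_MIN;
}

/* temperature in degrees C -> 6-bit register encoding (rounds down to a step) */
static inline uint8_t temp_to_reg(uint8_t temp)
{
	return ((temp - TEMP_MIN) >> 1) & 0x3f;
}
```

show_throttle()/store_throttle() above apply exactly this encoding to one 6-bit field of tm_tpr, selected by shifting the register value by the field's bit position.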
-15
arch/powerpc/platforms/cell/cell.h
/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
 * Cell Platform common data structures
 *
 * Copyright 2015, Daniel Axtens, IBM Corporation
 */

#ifndef CELL_H
#define CELL_H

#include <asm/pci-bridge.h>

extern struct pci_controller_ops cell_pci_controller_ops;

#endif
-134
arch/powerpc/platforms/cell/cpufreq_spudemand.c
// SPDX-License-Identifier: GPL-2.0-or-later
/*
 * spu aware cpufreq governor for the cell processor
 *
 * © Copyright IBM Corporation 2006-2008
 *
 * Author: Christian Krafft <krafft@de.ibm.com>
 */

#include <linux/cpufreq.h>
#include <linux/sched.h>
#include <linux/sched/loadavg.h>
#include <linux/module.h>
#include <linux/timer.h>
#include <linux/workqueue.h>
#include <linux/atomic.h>
#include <asm/machdep.h>
#include <asm/spu.h>

#define POLL_TIME	100000		/* in µs */
#define EXP		753		/* exp(-1) in fixed-point */

struct spu_gov_info_struct {
	unsigned long busy_spus;	/* fixed-point */
	struct cpufreq_policy *policy;
	struct delayed_work work;
	unsigned int poll_int;		/* µs */
};
static DEFINE_PER_CPU(struct spu_gov_info_struct, spu_gov_info);

static int calc_freq(struct spu_gov_info_struct *info)
{
	int cpu;
	int busy_spus;

	cpu = info->policy->cpu;
	busy_spus = atomic_read(&cbe_spu_info[cpu_to_node(cpu)].busy_spus);

	info->busy_spus = calc_load(info->busy_spus, EXP, busy_spus * FIXED_1);
	pr_debug("cpu %d: busy_spus=%d, info->busy_spus=%ld\n",
			cpu, busy_spus, info->busy_spus);

	return info->policy->max * info->busy_spus / FIXED_1;
}

static void spu_gov_work(struct work_struct *work)
{
	struct spu_gov_info_struct *info;
	int delay;
	unsigned long target_freq;

	info = container_of(work, struct spu_gov_info_struct, work.work);

	/* after cancel_delayed_work_sync we unset info->policy */
	BUG_ON(info->policy == NULL);

	target_freq = calc_freq(info);
	__cpufreq_driver_target(info->policy, target_freq, CPUFREQ_RELATION_H);

	delay = usecs_to_jiffies(info->poll_int);
	schedule_delayed_work_on(info->policy->cpu, &info->work, delay);
}

static void spu_gov_init_work(struct spu_gov_info_struct *info)
{
	int delay = usecs_to_jiffies(info->poll_int);
	INIT_DEFERRABLE_WORK(&info->work, spu_gov_work);
	schedule_delayed_work_on(info->policy->cpu, &info->work, delay);
}

static void spu_gov_cancel_work(struct spu_gov_info_struct *info)
{
	cancel_delayed_work_sync(&info->work);
}

static int spu_gov_start(struct cpufreq_policy *policy)
{
	unsigned int cpu = policy->cpu;
	struct spu_gov_info_struct *info = &per_cpu(spu_gov_info, cpu);
	struct spu_gov_info_struct *affected_info;
	int i;

	if (!cpu_online(cpu)) {
		printk(KERN_ERR "cpu %d is not online\n", cpu);
		return -EINVAL;
	}

	if (!policy->cur) {
		printk(KERN_ERR "no cpu specified in policy\n");
		return -EINVAL;
	}

	/* initialize spu_gov_info for all affected cpus */
	for_each_cpu(i, policy->cpus) {
		affected_info = &per_cpu(spu_gov_info, i);
		affected_info->policy = policy;
	}

	info->poll_int = POLL_TIME;

	/* setup timer */
	spu_gov_init_work(info);

	return 0;
}

static void spu_gov_stop(struct cpufreq_policy *policy)
{
	unsigned int cpu = policy->cpu;
	struct spu_gov_info_struct *info = &per_cpu(spu_gov_info, cpu);
	int i;

	/* cancel timer */
	spu_gov_cancel_work(info);

	/* clean spu_gov_info for all affected cpus */
	for_each_cpu (i, policy->cpus) {
		info = &per_cpu(spu_gov_info, i);
		info->policy = NULL;
	}
}

static struct cpufreq_governor spu_governor = {
	.name = "spudemand",
	.start = spu_gov_start,
	.stop = spu_gov_stop,
	.owner = THIS_MODULE,
};
cpufreq_governor_init(spu_governor);
cpufreq_governor_exit(spu_governor);

MODULE_DESCRIPTION("SPU-aware cpufreq governor for the cell processor");
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Christian Krafft <krafft@de.ibm.com>");
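The heart of the spudemand governor is calc_freq(): it keeps an exponentially-decaying average of the busy-SPU count via the kernel's calc_load() helper (FIXED_1 is fixed-point 1.0, 1 << 11 in Linux, and EXP = 753 ≈ exp(-1) · FIXED_1), then scales policy->max by that average. A userspace sketch of the smoothing step, using a simplified calc_load (the kernel's version additionally rounds up while the load is rising):

```c
#define FSHIFT	11			/* bits of fractional precision */
#define FIXED_1	(1 << FSHIFT)		/* 1.0 in fixed point (2048) */
#define EXP	753			/* exp(-1) * FIXED_1, as in the governor */

/* one smoothing step: load' = load * e^-1 + active * (1 - e^-1),
 * all values in FIXED_1 fixed point */
static unsigned long calc_load(unsigned long load, unsigned long exp,
			       unsigned long active)
{
	return (load * exp + active * (FIXED_1 - exp)) / FIXED_1;
}
```

With one SPU steadily busy (active == FIXED_1) the average settles at FIXED_1, so the governor requests the full policy->max; when SPUs go idle the average decays toward zero at the poll interval, one e-fold per step.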
-390
arch/powerpc/platforms/cell/interrupt.c
// SPDX-License-Identifier: GPL-2.0-or-later
/*
 * Cell Internal Interrupt Controller
 *
 * Copyright (C) 2006 Benjamin Herrenschmidt (benh@kernel.crashing.org)
 *                    IBM, Corp.
 *
 * (C) Copyright IBM Deutschland Entwicklung GmbH 2005
 *
 * Author: Arnd Bergmann <arndb@de.ibm.com>
 *
 * TODO:
 * - Fix various assumptions related to HW CPU numbers vs. linux CPU numbers
 *   vs node numbers in the setup code
 * - Implement proper handling of maxcpus=1/2 (that is, routing of irqs from
 *   a non-active node to the active node)
 */

#include <linux/interrupt.h>
#include <linux/irq.h>
#include <linux/irqdomain.h>
#include <linux/export.h>
#include <linux/percpu.h>
#include <linux/types.h>
#include <linux/ioport.h>
#include <linux/kernel_stat.h>
#include <linux/pgtable.h>
#include <linux/of_address.h>

#include <asm/io.h>
#include <asm/ptrace.h>
#include <asm/machdep.h>
#include <asm/cell-regs.h>

#include "interrupt.h"

struct iic {
	struct cbe_iic_thread_regs __iomem *regs;
	u8 target_id;
	u8 eoi_stack[16];
	int eoi_ptr;
	struct device_node *node;
};

static DEFINE_PER_CPU(struct iic, cpu_iic);
#define IIC_NODE_COUNT	2
static struct irq_domain *iic_host;

/* Convert between "pending" bits and hw irq number */
static irq_hw_number_t iic_pending_to_hwnum(struct cbe_iic_pending_bits bits)
{
	unsigned char unit = bits.source & 0xf;
	unsigned char node = bits.source >> 4;
	unsigned char class = bits.class & 3;

	/* Decode IPIs */
	if (bits.flags & CBE_IIC_IRQ_IPI)
		return IIC_IRQ_TYPE_IPI | (bits.prio >> 4);
	else
		return (node << IIC_IRQ_NODE_SHIFT) | (class << 4) | unit;
}

static void iic_mask(struct irq_data *d)
{
}

static void iic_unmask(struct irq_data *d)
{
}

static void iic_eoi(struct irq_data *d)
{
	struct iic *iic = this_cpu_ptr(&cpu_iic);
	out_be64(&iic->regs->prio, iic->eoi_stack[--iic->eoi_ptr]);
	BUG_ON(iic->eoi_ptr < 0);
}

static struct irq_chip iic_chip = {
	.name = "CELL-IIC",
	.irq_mask = iic_mask,
	.irq_unmask = iic_unmask,
	.irq_eoi = iic_eoi,
};


static void iic_ioexc_eoi(struct irq_data *d)
{
}

static void iic_ioexc_cascade(struct irq_desc *desc)
{
	struct irq_chip *chip = irq_desc_get_chip(desc);
	struct cbe_iic_regs __iomem *node_iic =
		(void __iomem *)irq_desc_get_handler_data(desc);
	unsigned int irq = irq_desc_get_irq(desc);
	unsigned int base = (irq & 0xffffff00) | IIC_IRQ_TYPE_IOEXC;
	unsigned long bits, ack;
	int cascade;

	for (;;) {
		bits = in_be64(&node_iic->iic_is);
		if (bits == 0)
			break;
		/* pre-ack edge interrupts */
		ack = bits & IIC_ISR_EDGE_MASK;
		if (ack)
			out_be64(&node_iic->iic_is, ack);
		/* handle them */
		for (cascade = 63; cascade >= 0; cascade--)
			if (bits & (0x8000000000000000UL >> cascade))
				generic_handle_domain_irq(iic_host,
							  base | cascade);
		/* post-ack level interrupts */
		ack = bits & ~IIC_ISR_EDGE_MASK;
		if (ack)
			out_be64(&node_iic->iic_is, ack);
	}
	chip->irq_eoi(&desc->irq_data);
}


static struct irq_chip iic_ioexc_chip = {
	.name = "CELL-IOEX",
	.irq_mask = iic_mask,
	.irq_unmask = iic_unmask,
	.irq_eoi = iic_ioexc_eoi,
};

/* Get an IRQ number from the pending state register of the IIC */
static unsigned int iic_get_irq(void)
{
	struct cbe_iic_pending_bits pending;
	struct iic *iic;
	unsigned int virq;

	iic = this_cpu_ptr(&cpu_iic);
	*(unsigned long *) &pending =
		in_be64((u64 __iomem *) &iic->regs->pending_destr);
	if (!(pending.flags & CBE_IIC_IRQ_VALID))
		return 0;
	virq = irq_linear_revmap(iic_host, iic_pending_to_hwnum(pending));
	if (!virq)
		return 0;
	iic->eoi_stack[++iic->eoi_ptr] = pending.prio;
	BUG_ON(iic->eoi_ptr > 15);
	return virq;
}

void iic_setup_cpu(void)
{
	out_be64(&this_cpu_ptr(&cpu_iic)->regs->prio, 0xff);
}

u8 iic_get_target_id(int cpu)
{
	return per_cpu(cpu_iic, cpu).target_id;
}

EXPORT_SYMBOL_GPL(iic_get_target_id);

#ifdef CONFIG_SMP

/* Use the highest interrupt priorities for IPI */
static inline int iic_msg_to_irq(int msg)
{
	return IIC_IRQ_TYPE_IPI + 0xf - msg;
}

void iic_message_pass(int cpu, int msg)
{
	out_be64(&per_cpu(cpu_iic, cpu).regs->generate, (0xf - msg) << 4);
}

static void iic_request_ipi(int msg)
{
	int virq;

	virq = irq_create_mapping(iic_host, iic_msg_to_irq(msg));
	if (!virq) {
		printk(KERN_ERR
		       "iic: failed to map IPI %s\n", smp_ipi_name[msg]);
		return;
	}

	/*
	 * If smp_request_message_ipi encounters an error it will notify
	 * the error.  If a message is not needed it will return non-zero.
	 */
	if (smp_request_message_ipi(virq, msg))
		irq_dispose_mapping(virq);
}

void iic_request_IPIs(void)
{
	iic_request_ipi(PPC_MSG_CALL_FUNCTION);
	iic_request_ipi(PPC_MSG_RESCHEDULE);
	iic_request_ipi(PPC_MSG_TICK_BROADCAST);
	iic_request_ipi(PPC_MSG_NMI_IPI);
}

#endif /* CONFIG_SMP */


static int iic_host_match(struct irq_domain *h, struct device_node *node,
			  enum irq_domain_bus_token bus_token)
{
	return of_device_is_compatible(node,
				    "IBM,CBEA-Internal-Interrupt-Controller");
}

static int iic_host_map(struct irq_domain *h, unsigned int virq,
			irq_hw_number_t hw)
{
	switch (hw & IIC_IRQ_TYPE_MASK) {
	case IIC_IRQ_TYPE_IPI:
		irq_set_chip_and_handler(virq, &iic_chip, handle_percpu_irq);
		break;
	case IIC_IRQ_TYPE_IOEXC:
		irq_set_chip_and_handler(virq, &iic_ioexc_chip,
					 handle_edge_eoi_irq);
		break;
	default:
		irq_set_chip_and_handler(virq, &iic_chip, handle_edge_eoi_irq);
	}
	return 0;
}

static int iic_host_xlate(struct irq_domain *h, struct device_node *ct,
			  const u32 *intspec, unsigned int intsize,
			  irq_hw_number_t *out_hwirq, unsigned int *out_flags)

{
	unsigned int node, ext, unit, class;
	const u32 *val;

	if (!of_device_is_compatible(ct,
				     "IBM,CBEA-Internal-Interrupt-Controller"))
		return -ENODEV;
	if (intsize != 1)
		return -ENODEV;
	val = of_get_property(ct, "#interrupt-cells", NULL);
	if (val == NULL || *val != 1)
		return -ENODEV;

	node = intspec[0] >> 24;
	ext = (intspec[0] >> 16) & 0xff;
	class = (intspec[0] >> 8) & 0xff;
	unit = intspec[0] & 0xff;

	/* Check if node is in supported range */
	if (node > 1)
		return -EINVAL;

	/* Build up interrupt number, special case for IO exceptions */
	*out_hwirq = (node << IIC_IRQ_NODE_SHIFT);
	if (unit == IIC_UNIT_IIC && class == 1)
		*out_hwirq |= IIC_IRQ_TYPE_IOEXC | ext;
	else
		*out_hwirq |= IIC_IRQ_TYPE_NORMAL |
			(class << IIC_IRQ_CLASS_SHIFT) | unit;

	/* Dummy flags, ignored by iic code */
	*out_flags = IRQ_TYPE_EDGE_RISING;

	return 0;
}

static const struct irq_domain_ops iic_host_ops = {
	.match = iic_host_match,
	.map = iic_host_map,
	.xlate = iic_host_xlate,
};

static void __init init_one_iic(unsigned int hw_cpu, unsigned long addr,
				struct device_node *node)
{
	/* XXX FIXME: should locate the linux CPU number from the HW cpu
	 * number properly. We are lucky for now
	 */
	struct iic *iic = &per_cpu(cpu_iic, hw_cpu);

	iic->regs = ioremap(addr, sizeof(struct cbe_iic_thread_regs));
	BUG_ON(iic->regs == NULL);

	iic->target_id = ((hw_cpu & 2) << 3) | ((hw_cpu & 1) ? 0xf : 0xe);
	iic->eoi_stack[0] = 0xff;
	iic->node = of_node_get(node);
	out_be64(&iic->regs->prio, 0);

	printk(KERN_INFO "IIC for CPU %d target id 0x%x : %pOF\n",
	       hw_cpu, iic->target_id, node);
}

static int __init setup_iic(void)
{
	struct device_node *dn;
	struct resource r0, r1;
	unsigned int node, cascade, found = 0;
	struct cbe_iic_regs __iomem *node_iic;
	const u32 *np;

	for_each_node_by_name(dn, "interrupt-controller") {
		if (!of_device_is_compatible(dn,
				"IBM,CBEA-Internal-Interrupt-Controller"))
			continue;
		np = of_get_property(dn, "ibm,interrupt-server-ranges", NULL);
		if (np == NULL) {
			printk(KERN_WARNING "IIC: CPU association not found\n");
			of_node_put(dn);
			return -ENODEV;
		}
		if (of_address_to_resource(dn, 0, &r0) ||
		    of_address_to_resource(dn, 1, &r1)) {
			printk(KERN_WARNING "IIC: Can't resolve addresses\n");
			of_node_put(dn);
return -ENODEV; 317 - } 318 - found++; 319 - init_one_iic(np[0], r0.start, dn); 320 - init_one_iic(np[1], r1.start, dn); 321 - 322 - /* Setup cascade for IO exceptions. XXX cleanup tricks to get 323 - * node vs CPU etc... 324 - * Note that we configure the IIC_IRR here with a hard coded 325 - * priority of 1. We might want to improve that later. 326 - */ 327 - node = np[0] >> 1; 328 - node_iic = cbe_get_cpu_iic_regs(np[0]); 329 - cascade = node << IIC_IRQ_NODE_SHIFT; 330 - cascade |= 1 << IIC_IRQ_CLASS_SHIFT; 331 - cascade |= IIC_UNIT_IIC; 332 - cascade = irq_create_mapping(iic_host, cascade); 333 - if (!cascade) 334 - continue; 335 - /* 336 - * irq_data is a generic pointer that gets passed back 337 - * to us later, so the forced cast is fine. 338 - */ 339 - irq_set_handler_data(cascade, (void __force *)node_iic); 340 - irq_set_chained_handler(cascade, iic_ioexc_cascade); 341 - out_be64(&node_iic->iic_ir, 342 - (1 << 12) /* priority */ | 343 - (node << 4) /* dest node */ | 344 - IIC_UNIT_THREAD_0 /* route them to thread 0 */); 345 - /* Flush pending (make sure it triggers if there is 346 - * anything pending 347 - */ 348 - out_be64(&node_iic->iic_is, 0xfffffffffffffffful); 349 - } 350 - 351 - if (found) 352 - return 0; 353 - else 354 - return -ENODEV; 355 - } 356 - 357 - void __init iic_init_IRQ(void) 358 - { 359 - /* Setup an irq host data structure */ 360 - iic_host = irq_domain_add_linear(NULL, IIC_SOURCE_COUNT, &iic_host_ops, 361 - NULL); 362 - BUG_ON(iic_host == NULL); 363 - irq_set_default_host(iic_host); 364 - 365 - /* Discover and initialize iics */ 366 - if (setup_iic() < 0) 367 - panic("IIC: Failed to initialize !\n"); 368 - 369 - /* Set master interrupt handling function */ 370 - ppc_md.get_irq = iic_get_irq; 371 - 372 - /* Enable on current CPU */ 373 - iic_setup_cpu(); 374 - } 375 - 376 - void iic_set_interrupt_routing(int cpu, int thread, int priority) 377 - { 378 - struct cbe_iic_regs __iomem *iic_regs = cbe_get_cpu_iic_regs(cpu); 379 - u64 iic_ir = 
0; 380 - int node = cpu >> 1; 381 - 382 - /* Set which node and thread will handle the next interrupt */ 383 - iic_ir |= CBE_IIC_IR_PRIO(priority) | 384 - CBE_IIC_IR_DEST_NODE(node); 385 - if (thread == 0) 386 - iic_ir |= CBE_IIC_IR_DEST_UNIT(CBE_IIC_IR_PT_0); 387 - else 388 - iic_ir |= CBE_IIC_IR_DEST_UNIT(CBE_IIC_IR_PT_1); 389 - out_be64(&iic_regs->iic_ir, iic_ir); 390 - }
arch/powerpc/platforms/cell/interrupt.h (90 lines removed)
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef ASM_CELL_PIC_H
#define ASM_CELL_PIC_H
#ifdef __KERNEL__
/*
 * Mapping of IIC pending bits into per-node interrupt numbers.
 *
 * Interrupt numbers are in the range 0...0x1ff where the top bit
 * (0x100) represent the source node. Only 2 nodes are supported with
 * the current code though it's trivial to extend that if necessary using
 * higher level bits
 *
 * The bottom 8 bits are split into 2 type bits and 6 data bits that
 * depend on the type:
 *
 * 00 (0x00 | data) : normal interrupt. data is (class << 4) | source
 * 01 (0x40 | data) : IO exception. data is the exception number as
 *                    defined by bit numbers in IIC_SR
 * 10 (0x80 | data) : IPI. data is the IPI number (obtained from the priority)
 *                    and node is always 0 (IPIs are per-cpu, their source is
 *                    not relevant)
 * 11 (0xc0 | data) : reserved
 *
 * In addition, interrupt number 0x80000000 is defined as always invalid
 * (that is the node field is expected to never extend to move than 23 bits)
 *
 */

enum {
	IIC_IRQ_INVALID		= 0x80000000u,
	IIC_IRQ_NODE_MASK	= 0x100,
	IIC_IRQ_NODE_SHIFT	= 8,
	IIC_IRQ_MAX		= 0x1ff,
	IIC_IRQ_TYPE_MASK	= 0xc0,
	IIC_IRQ_TYPE_NORMAL	= 0x00,
	IIC_IRQ_TYPE_IOEXC	= 0x40,
	IIC_IRQ_TYPE_IPI	= 0x80,
	IIC_IRQ_CLASS_SHIFT	= 4,
	IIC_IRQ_CLASS_0		= 0x00,
	IIC_IRQ_CLASS_1		= 0x10,
	IIC_IRQ_CLASS_2		= 0x20,
	IIC_SOURCE_COUNT	= 0x200,

	/* Here are defined the various source/dest units. Avoid using those
	 * definitions if you can, they are mostly here for reference
	 */
	IIC_UNIT_SPU_0		= 0x4,
	IIC_UNIT_SPU_1		= 0x7,
	IIC_UNIT_SPU_2		= 0x3,
	IIC_UNIT_SPU_3		= 0x8,
	IIC_UNIT_SPU_4		= 0x2,
	IIC_UNIT_SPU_5		= 0x9,
	IIC_UNIT_SPU_6		= 0x1,
	IIC_UNIT_SPU_7		= 0xa,
	IIC_UNIT_IOC_0		= 0x0,
	IIC_UNIT_IOC_1		= 0xb,
	IIC_UNIT_THREAD_0	= 0xe, /* target only */
	IIC_UNIT_THREAD_1	= 0xf, /* target only */
	IIC_UNIT_IIC		= 0xe, /* source only (IO exceptions) */

	/* Base numbers for the external interrupts */
	IIC_IRQ_EXT_IOIF0	=
		IIC_IRQ_TYPE_NORMAL | IIC_IRQ_CLASS_2 | IIC_UNIT_IOC_0,
	IIC_IRQ_EXT_IOIF1	=
		IIC_IRQ_TYPE_NORMAL | IIC_IRQ_CLASS_2 | IIC_UNIT_IOC_1,

	/* Base numbers for the IIC_ISR interrupts */
	IIC_IRQ_IOEX_TMI	= IIC_IRQ_TYPE_IOEXC | IIC_IRQ_CLASS_1 | 63,
	IIC_IRQ_IOEX_PMI	= IIC_IRQ_TYPE_IOEXC | IIC_IRQ_CLASS_1 | 62,
	IIC_IRQ_IOEX_ATI	= IIC_IRQ_TYPE_IOEXC | IIC_IRQ_CLASS_1 | 61,
	IIC_IRQ_IOEX_MATBFI	= IIC_IRQ_TYPE_IOEXC | IIC_IRQ_CLASS_1 | 60,
	IIC_IRQ_IOEX_ELDI	= IIC_IRQ_TYPE_IOEXC | IIC_IRQ_CLASS_1 | 59,

	/* Which bits in IIC_ISR are edge sensitive */
	IIC_ISR_EDGE_MASK	= 0x4ul,
};

extern void iic_init_IRQ(void);
extern void iic_message_pass(int cpu, int msg);
extern void iic_request_IPIs(void);
extern void iic_setup_cpu(void);

extern u8 iic_get_target_id(int cpu);

extern void spider_init_IRQ(void);

extern void iic_set_interrupt_routing(int cpu, int thread, int priority);

#endif
#endif /* ASM_CELL_PIC_H */
arch/powerpc/platforms/cell/iommu.c (1060 lines removed)
// SPDX-License-Identifier: GPL-2.0-or-later
/*
 * IOMMU implementation for Cell Broadband Processor Architecture
 *
 * (C) Copyright IBM Corporation 2006-2008
 *
 * Author: Jeremy Kerr <jk@ozlabs.org>
 */

#undef DEBUG

#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/interrupt.h>
#include <linux/irqdomain.h>
#include <linux/notifier.h>
#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/platform_device.h>
#include <linux/slab.h>
#include <linux/memblock.h>

#include <asm/prom.h>
#include <asm/iommu.h>
#include <asm/machdep.h>
#include <asm/pci-bridge.h>
#include <asm/udbg.h>
#include <asm/firmware.h>
#include <asm/cell-regs.h>

#include "cell.h"
#include "interrupt.h"

/* Define CELL_IOMMU_REAL_UNMAP to actually unmap non-used pages
 * instead of leaving them mapped to some dummy page. This can be
 * enabled once the appropriate workarounds for spider bugs have
 * been enabled
 */
#define CELL_IOMMU_REAL_UNMAP

/* Define CELL_IOMMU_STRICT_PROTECTION to enforce protection of
 * IO PTEs based on the transfer direction. That can be enabled
 * once spider-net has been fixed to pass the correct direction
 * to the DMA mapping functions
 */
#define CELL_IOMMU_STRICT_PROTECTION


#define NR_IOMMUS	2

/* IOC mmap registers */
#define IOC_Reg_Size			0x2000

#define IOC_IOPT_CacheInvd		0x908
#define IOC_IOPT_CacheInvd_NE_Mask	0xffe0000000000000ul
#define IOC_IOPT_CacheInvd_IOPTE_Mask	0x000003fffffffff8ul
#define IOC_IOPT_CacheInvd_Busy		0x0000000000000001ul

#define IOC_IOST_Origin			0x918
#define IOC_IOST_Origin_E		0x8000000000000000ul
#define IOC_IOST_Origin_HW		0x0000000000000800ul
#define IOC_IOST_Origin_HL		0x0000000000000400ul

#define IOC_IO_ExcpStat			0x920
#define IOC_IO_ExcpStat_V		0x8000000000000000ul
#define IOC_IO_ExcpStat_SPF_Mask	0x6000000000000000ul
#define IOC_IO_ExcpStat_SPF_S		0x6000000000000000ul
#define IOC_IO_ExcpStat_SPF_P		0x2000000000000000ul
#define IOC_IO_ExcpStat_ADDR_Mask	0x00000007fffff000ul
#define IOC_IO_ExcpStat_RW_Mask		0x0000000000000800ul
#define IOC_IO_ExcpStat_IOID_Mask	0x00000000000007fful

#define IOC_IO_ExcpMask			0x928
#define IOC_IO_ExcpMask_SFE		0x4000000000000000ul
#define IOC_IO_ExcpMask_PFE		0x2000000000000000ul

#define IOC_IOCmd_Offset		0x1000

#define IOC_IOCmd_Cfg			0xc00
#define IOC_IOCmd_Cfg_TE		0x0000800000000000ul


/* Segment table entries */
#define IOSTE_V			0x8000000000000000ul /* valid */
#define IOSTE_H			0x4000000000000000ul /* cache hint */
#define IOSTE_PT_Base_RPN_Mask	0x3ffffffffffff000ul /* base RPN of IOPT */
#define IOSTE_NPPT_Mask		0x0000000000000fe0ul /* no. pages in IOPT */
#define IOSTE_PS_Mask		0x0000000000000007ul /* page size */
#define IOSTE_PS_4K		0x0000000000000001ul /* - 4kB  */
#define IOSTE_PS_64K		0x0000000000000003ul /* - 64kB */
#define IOSTE_PS_1M		0x0000000000000005ul /* - 1MB  */
#define IOSTE_PS_16M		0x0000000000000007ul /* - 16MB */


/* IOMMU sizing */
#define IO_SEGMENT_SHIFT	28
#define IO_PAGENO_BITS(shift)	(IO_SEGMENT_SHIFT - (shift))

/* The high bit needs to be set on every DMA address */
#define SPIDER_DMA_OFFSET	0x80000000ul

struct iommu_window {
	struct list_head list;
	struct cbe_iommu *iommu;
	unsigned long offset;
	unsigned long size;
	unsigned int ioid;
	struct iommu_table table;
};

#define NAMESIZE 8
struct cbe_iommu {
	int nid;
	char name[NAMESIZE];
	void __iomem *xlate_regs;
	void __iomem *cmd_regs;
	unsigned long *stab;
	unsigned long *ptab;
	void *pad_page;
	struct list_head windows;
};

/* Static array of iommus, one per node
 * each contains a list of windows, keyed from dma_window property
 * - on bus setup, look for a matching window, or create one
 * - on dev setup, assign iommu_table ptr
 */
static struct cbe_iommu iommus[NR_IOMMUS];
static int cbe_nr_iommus;

static void invalidate_tce_cache(struct cbe_iommu *iommu, unsigned long *pte,
		long n_ptes)
{
	u64 __iomem *reg;
	u64 val;
	long n;

	reg = iommu->xlate_regs + IOC_IOPT_CacheInvd;

	while (n_ptes > 0) {
		/* we can invalidate up to 1 << 11 PTEs at once */
		n = min(n_ptes, 1l << 11);
		val = (((n /*- 1*/) << 53) & IOC_IOPT_CacheInvd_NE_Mask)
			| (__pa(pte) & IOC_IOPT_CacheInvd_IOPTE_Mask)
			| IOC_IOPT_CacheInvd_Busy;

		out_be64(reg, val);
		while (in_be64(reg) & IOC_IOPT_CacheInvd_Busy)
			;

		n_ptes -= n;
		pte += n;
	}
}

static int tce_build_cell(struct iommu_table *tbl, long index, long npages,
		unsigned long uaddr, enum dma_data_direction direction,
		unsigned long attrs)
{
	int i;
	unsigned long *io_pte, base_pte;
	struct iommu_window *window =
		container_of(tbl, struct iommu_window, table);

	/* implementing proper protection causes problems with the spidernet
	 * driver - check mapping directions later, but allow read & write by
	 * default for now.*/
#ifdef CELL_IOMMU_STRICT_PROTECTION
	/* to avoid referencing a global, we use a trick here to setup the
	 * protection bit. "prot" is setup to be 3 fields of 4 bits appended
	 * together for each of the 3 supported direction values. It is then
	 * shifted left so that the fields matching the desired direction
	 * lands on the appropriate bits, and other bits are masked out.
	 */
	const unsigned long prot = 0xc48;
	base_pte =
		((prot << (52 + 4 * direction)) &
		 (CBE_IOPTE_PP_W | CBE_IOPTE_PP_R)) |
		CBE_IOPTE_M | CBE_IOPTE_SO_RW |
		(window->ioid & CBE_IOPTE_IOID_Mask);
#else
	base_pte = CBE_IOPTE_PP_W | CBE_IOPTE_PP_R | CBE_IOPTE_M |
		CBE_IOPTE_SO_RW | (window->ioid & CBE_IOPTE_IOID_Mask);
#endif
	if (unlikely(attrs & DMA_ATTR_WEAK_ORDERING))
		base_pte &= ~CBE_IOPTE_SO_RW;

	io_pte = (unsigned long *)tbl->it_base + (index - tbl->it_offset);

	for (i = 0; i < npages; i++, uaddr += (1 << tbl->it_page_shift))
		io_pte[i] = base_pte | (__pa(uaddr) & CBE_IOPTE_RPN_Mask);

	mb();

	invalidate_tce_cache(window->iommu, io_pte, npages);

	pr_debug("tce_build_cell(index=%lx,n=%lx,dir=%d,base_pte=%lx)\n",
		 index, npages, direction, base_pte);
	return 0;
}

static void tce_free_cell(struct iommu_table *tbl, long index, long npages)
{

	int i;
	unsigned long *io_pte, pte;
	struct iommu_window *window =
		container_of(tbl, struct iommu_window, table);

	pr_debug("tce_free_cell(index=%lx,n=%lx)\n", index, npages);

#ifdef CELL_IOMMU_REAL_UNMAP
	pte = 0;
#else
	/* spider bridge does PCI reads after freeing - insert a mapping
	 * to a scratch page instead of an invalid entry */
	pte = CBE_IOPTE_PP_R | CBE_IOPTE_M | CBE_IOPTE_SO_RW |
		__pa(window->iommu->pad_page) |
		(window->ioid & CBE_IOPTE_IOID_Mask);
#endif

	io_pte = (unsigned long *)tbl->it_base + (index - tbl->it_offset);

	for (i = 0; i < npages; i++)
		io_pte[i] = pte;

	mb();

	invalidate_tce_cache(window->iommu, io_pte, npages);
}

static irqreturn_t ioc_interrupt(int irq, void *data)
{
	unsigned long stat, spf;
	struct cbe_iommu *iommu = data;

	stat = in_be64(iommu->xlate_regs + IOC_IO_ExcpStat);
	spf = stat & IOC_IO_ExcpStat_SPF_Mask;

	/* Might want to rate limit it */
	printk(KERN_ERR "iommu: DMA exception 0x%016lx\n", stat);
	printk(KERN_ERR "  V=%d, SPF=[%c%c], RW=%s, IOID=0x%04x\n",
	       !!(stat & IOC_IO_ExcpStat_V),
	       (spf == IOC_IO_ExcpStat_SPF_S) ? 'S' : ' ',
	       (spf == IOC_IO_ExcpStat_SPF_P) ? 'P' : ' ',
	       (stat & IOC_IO_ExcpStat_RW_Mask) ? "Read" : "Write",
	       (unsigned int)(stat & IOC_IO_ExcpStat_IOID_Mask));
	printk(KERN_ERR "  page=0x%016lx\n",
	       stat & IOC_IO_ExcpStat_ADDR_Mask);

	/* clear interrupt */
	stat &= ~IOC_IO_ExcpStat_V;
	out_be64(iommu->xlate_regs + IOC_IO_ExcpStat, stat);

	return IRQ_HANDLED;
}

static int __init cell_iommu_find_ioc(int nid, unsigned long *base)
{
	struct device_node *np;
	struct resource r;

	*base = 0;

	/* First look for new style /be nodes */
	for_each_node_by_name(np, "ioc") {
		if (of_node_to_nid(np) != nid)
			continue;
		if (of_address_to_resource(np, 0, &r)) {
			printk(KERN_ERR "iommu: can't get address for %pOF\n",
			       np);
			continue;
		}
		*base = r.start;
		of_node_put(np);
		return 0;
	}

	/* Ok, let's try the old way */
	for_each_node_by_type(np, "cpu") {
		const unsigned int *nidp;
		const unsigned long *tmp;

		nidp = of_get_property(np, "node-id", NULL);
		if (nidp && *nidp == nid) {
			tmp = of_get_property(np, "ioc-translation", NULL);
			if (tmp) {
				*base = *tmp;
				of_node_put(np);
				return 0;
			}
		}
	}

	return -ENODEV;
}

static void __init cell_iommu_setup_stab(struct cbe_iommu *iommu,
		unsigned long dbase, unsigned long dsize,
		unsigned long fbase, unsigned long fsize)
{
	struct page *page;
	unsigned long segments, stab_size;

	segments = max(dbase + dsize, fbase + fsize) >> IO_SEGMENT_SHIFT;

	pr_debug("%s: iommu[%d]: segments: %lu\n",
		 __func__, iommu->nid, segments);

	/* set up the segment table */
	stab_size = segments * sizeof(unsigned long);
	page = alloc_pages_node(iommu->nid, GFP_KERNEL, get_order(stab_size));
	BUG_ON(!page);
	iommu->stab = page_address(page);
	memset(iommu->stab, 0, stab_size);
}

static unsigned long *__init cell_iommu_alloc_ptab(struct cbe_iommu *iommu,
		unsigned long base, unsigned long size, unsigned long gap_base,
		unsigned long gap_size, unsigned long page_shift)
{
	struct page *page;
	int i;
	unsigned long reg, segments, pages_per_segment, ptab_size,
		      n_pte_pages, start_seg, *ptab;

	start_seg = base >> IO_SEGMENT_SHIFT;
	segments = size >> IO_SEGMENT_SHIFT;
	pages_per_segment = 1ull << IO_PAGENO_BITS(page_shift);
	/* PTEs for each segment must start on a 4K boundary */
	pages_per_segment = max(pages_per_segment,
				(1 << 12) / sizeof(unsigned long));

	ptab_size = segments * pages_per_segment * sizeof(unsigned long);
	pr_debug("%s: iommu[%d]: ptab_size: %lu, order: %d\n", __func__,
		 iommu->nid, ptab_size, get_order(ptab_size));
	page = alloc_pages_node(iommu->nid, GFP_KERNEL, get_order(ptab_size));
	BUG_ON(!page);

	ptab = page_address(page);
	memset(ptab, 0, ptab_size);

	/* number of 4K pages needed for a page table */
	n_pte_pages = (pages_per_segment * sizeof(unsigned long)) >> 12;

	pr_debug("%s: iommu[%d]: stab at %p, ptab at %p, n_pte_pages: %lu\n",
		 __func__, iommu->nid, iommu->stab, ptab,
		 n_pte_pages);

	/* initialise the STEs */
	reg = IOSTE_V | ((n_pte_pages - 1) << 5);

	switch (page_shift) {
	case 12: reg |= IOSTE_PS_4K;  break;
	case 16: reg |= IOSTE_PS_64K; break;
	case 20: reg |= IOSTE_PS_1M;  break;
	case 24: reg |= IOSTE_PS_16M; break;
	default: BUG();
	}

	gap_base = gap_base >> IO_SEGMENT_SHIFT;
	gap_size = gap_size >> IO_SEGMENT_SHIFT;

	pr_debug("Setting up IOMMU stab:\n");
	for (i = start_seg; i < (start_seg + segments); i++) {
		if (i >= gap_base && i < (gap_base + gap_size)) {
			pr_debug("\toverlap at %d, skipping\n", i);
			continue;
		}
		iommu->stab[i] = reg | (__pa(ptab) + (n_pte_pages << 12) *
					(i - start_seg));
		pr_debug("\t[%d] 0x%016lx\n", i, iommu->stab[i]);
	}

	return ptab;
}

static void __init cell_iommu_enable_hardware(struct cbe_iommu *iommu)
{
	int ret;
	unsigned long reg, xlate_base;
	unsigned int virq;

	if (cell_iommu_find_ioc(iommu->nid, &xlate_base))
		panic("%s: missing IOC register mappings for node %d\n",
		      __func__, iommu->nid);

	iommu->xlate_regs = ioremap(xlate_base, IOC_Reg_Size);
	iommu->cmd_regs = iommu->xlate_regs + IOC_IOCmd_Offset;

	/* ensure that the STEs have updated */
	mb();

	/* setup interrupts for the iommu. */
	reg = in_be64(iommu->xlate_regs + IOC_IO_ExcpStat);
	out_be64(iommu->xlate_regs + IOC_IO_ExcpStat,
		 reg & ~IOC_IO_ExcpStat_V);
	out_be64(iommu->xlate_regs + IOC_IO_ExcpMask,
		 IOC_IO_ExcpMask_PFE | IOC_IO_ExcpMask_SFE);

	virq = irq_create_mapping(NULL,
			IIC_IRQ_IOEX_ATI | (iommu->nid << IIC_IRQ_NODE_SHIFT));
	BUG_ON(!virq);

	ret = request_irq(virq, ioc_interrupt, 0, iommu->name, iommu);
	BUG_ON(ret);

	/* set the IOC segment table origin register (and turn on the iommu) */
	reg = IOC_IOST_Origin_E | __pa(iommu->stab) | IOC_IOST_Origin_HW;
	out_be64(iommu->xlate_regs + IOC_IOST_Origin, reg);
	in_be64(iommu->xlate_regs + IOC_IOST_Origin);

	/* turn on IO translation */
	reg = in_be64(iommu->cmd_regs + IOC_IOCmd_Cfg) | IOC_IOCmd_Cfg_TE;
	out_be64(iommu->cmd_regs + IOC_IOCmd_Cfg, reg);
}

static void __init cell_iommu_setup_hardware(struct cbe_iommu *iommu,
	unsigned long base, unsigned long size)
{
	cell_iommu_setup_stab(iommu, base, size, 0, 0);
	iommu->ptab = cell_iommu_alloc_ptab(iommu, base, size, 0, 0,
					    IOMMU_PAGE_SHIFT_4K);
	cell_iommu_enable_hardware(iommu);
}

static inline u32 cell_iommu_get_ioid(struct device_node *np)
{
	const u32 *ioid;

	ioid = of_get_property(np, "ioid", NULL);
	if (ioid == NULL) {
		printk(KERN_WARNING "iommu: missing ioid for %pOF using 0\n",
		       np);
		return 0;
	}

	return *ioid;
}

static struct iommu_table_ops cell_iommu_ops = {
	.set = tce_build_cell,
	.clear = tce_free_cell
};

static struct iommu_window * __init
cell_iommu_setup_window(struct cbe_iommu *iommu, struct device_node *np,
			unsigned long offset, unsigned long size,
			unsigned long pte_offset)
{
	struct iommu_window *window;
	struct page *page;
	u32 ioid;

	ioid = cell_iommu_get_ioid(np);

	window = kzalloc_node(sizeof(*window), GFP_KERNEL, iommu->nid);
	BUG_ON(window == NULL);

	window->offset = offset;
	window->size = size;
	window->ioid = ioid;
	window->iommu = iommu;

	window->table.it_blocksize = 16;
	window->table.it_base = (unsigned long)iommu->ptab;
	window->table.it_index = iommu->nid;
	window->table.it_page_shift = IOMMU_PAGE_SHIFT_4K;
	window->table.it_offset =
		(offset >> window->table.it_page_shift) + pte_offset;
	window->table.it_size = size >> window->table.it_page_shift;
	window->table.it_ops = &cell_iommu_ops;

	if (!iommu_init_table(&window->table, iommu->nid, 0, 0))
		panic("Failed to initialize iommu table");

	pr_debug("\tioid      %d\n", window->ioid);
	pr_debug("\tblocksize %ld\n", window->table.it_blocksize);
	pr_debug("\tbase      0x%016lx\n", window->table.it_base);
	pr_debug("\toffset    0x%lx\n", window->table.it_offset);
	pr_debug("\tsize      %ld\n", window->table.it_size);

	list_add(&window->list, &iommu->windows);

	if (offset != 0)
		return window;

	/* We need to map and reserve the first IOMMU page since it's used
	 * by the spider workaround. In theory, we only need to do that when
	 * running on spider but it doesn't really matter.
	 *
	 * This code also assumes that we have a window that starts at 0,
	 * which is the case on all spider based blades.
	 */
	page = alloc_pages_node(iommu->nid, GFP_KERNEL, 0);
	BUG_ON(!page);
	iommu->pad_page = page_address(page);
	clear_page(iommu->pad_page);

	__set_bit(0, window->table.it_map);
	tce_build_cell(&window->table, window->table.it_offset, 1,
		       (unsigned long)iommu->pad_page, DMA_TO_DEVICE, 0);

	return window;
}

static struct cbe_iommu *cell_iommu_for_node(int nid)
{
	int i;

	for (i = 0; i < cbe_nr_iommus; i++)
		if (iommus[i].nid == nid)
			return &iommus[i];
	return NULL;
}

static unsigned long cell_dma_nommu_offset;

static unsigned long dma_iommu_fixed_base;
static bool cell_iommu_enabled;

/* iommu_fixed_is_weak is set if booted with iommu_fixed=weak */
bool iommu_fixed_is_weak;

static struct iommu_table *cell_get_iommu_table(struct device *dev)
{
	struct iommu_window *window;
	struct cbe_iommu *iommu;

	/* Current implementation uses the first window available in that
	 * node's iommu. We -might- do something smarter later though it may
	 * never be necessary
	 */
	iommu = cell_iommu_for_node(dev_to_node(dev));
	if (iommu == NULL || list_empty(&iommu->windows)) {
		dev_err(dev, "iommu: missing iommu for %pOF (node %d)\n",
			dev->of_node, dev_to_node(dev));
		return NULL;
	}
	window = list_entry(iommu->windows.next, struct iommu_window, list);

	return &window->table;
}

static u64 cell_iommu_get_fixed_address(struct device *dev);

static void cell_dma_dev_setup(struct device *dev)
{
	if (cell_iommu_enabled) {
		u64 addr = cell_iommu_get_fixed_address(dev);

		if (addr != OF_BAD_ADDR)
			dev->archdata.dma_offset = addr + dma_iommu_fixed_base;
		set_iommu_table_base(dev, cell_get_iommu_table(dev));
	} else {
		dev->archdata.dma_offset = cell_dma_nommu_offset;
	}
}

static void cell_pci_dma_dev_setup(struct pci_dev *dev)
{
	cell_dma_dev_setup(&dev->dev);
}

static int cell_of_bus_notify(struct notifier_block *nb, unsigned long action,
			      void *data)
{
	struct device *dev = data;

	/* We are only interested in device addition */
	if (action != BUS_NOTIFY_ADD_DEVICE)
		return 0;

	if (cell_iommu_enabled)
		dev->dma_ops = &dma_iommu_ops;
	cell_dma_dev_setup(dev);
	return 0;
}

static struct notifier_block cell_of_bus_notifier = {
	.notifier_call = cell_of_bus_notify
};

static int __init cell_iommu_get_window(struct device_node *np,
					unsigned long *base,
					unsigned long *size)
{
	const __be32 *dma_window;
	unsigned long index;

	/* Use ibm,dma-window if available, else, hard code ! */
	dma_window = of_get_property(np, "ibm,dma-window", NULL);
	if (dma_window == NULL) {
		*base = 0;
		*size = 0x80000000u;
		return -ENODEV;
	}

	of_parse_dma_window(np, dma_window, &index, base, size);
	return 0;
}

static struct cbe_iommu * __init cell_iommu_alloc(struct device_node *np)
{
	struct cbe_iommu *iommu;
	int nid, i;

	/* Get node ID */
	nid = of_node_to_nid(np);
	if (nid < 0) {
		printk(KERN_ERR "iommu: failed to get node for %pOF\n",
		       np);
		return NULL;
	}
	pr_debug("iommu: setting up iommu for node %d (%pOF)\n",
		 nid, np);

	/* XXX todo: If we can have multiple windows on the same IOMMU, which
	 * isn't the case today, we probably want here to check whether the
	 * iommu for that node is already setup.
	 * However, there might be issue with getting the size right so let's
	 * ignore that for now. We might want to completely get rid of the
	 * multiple window support since the cell iommu supports per-page ioids
	 */

	if (cbe_nr_iommus >= NR_IOMMUS) {
		printk(KERN_ERR "iommu: too many IOMMUs detected ! (%pOF)\n",
		       np);
		return NULL;
	}

	/* Init base fields */
	i = cbe_nr_iommus++;
	iommu = &iommus[i];
	iommu->stab = NULL;
	iommu->nid = nid;
	snprintf(iommu->name, sizeof(iommu->name), "iommu%d", i);
	INIT_LIST_HEAD(&iommu->windows);

	return iommu;
}

static void __init cell_iommu_init_one(struct device_node *np,
				       unsigned long offset)
{
	struct cbe_iommu *iommu;
	unsigned long base, size;

	iommu = cell_iommu_alloc(np);
	if (!iommu)
		return;

	/* Obtain a window for it */
	cell_iommu_get_window(np, &base, &size);

	pr_debug("\ttranslating window 0x%lx...0x%lx\n",
		 base, base + size - 1);

	/* Initialize the hardware */
	cell_iommu_setup_hardware(iommu, base, size);

	/* Setup the iommu_table */
	cell_iommu_setup_window(iommu, np, base, size,
				offset >> IOMMU_PAGE_SHIFT_4K);
}

static void __init cell_disable_iommus(void)
{
	int node;
	unsigned long base, val;
	void __iomem *xregs, *cregs;

	/* Make sure IOC translation is disabled on all nodes */
	for_each_online_node(node) {
		if (cell_iommu_find_ioc(node, &base))
			continue;
		xregs = ioremap(base, IOC_Reg_Size);
		if (xregs == NULL)
			continue;
		cregs = xregs + IOC_IOCmd_Offset;

		pr_debug("iommu: cleaning up iommu on node %d\n", node);

		out_be64(xregs + IOC_IOST_Origin, 0);
		(void)in_be64(xregs + IOC_IOST_Origin);
		val = in_be64(cregs + IOC_IOCmd_Cfg);
		val &= ~IOC_IOCmd_Cfg_TE;
		out_be64(cregs + IOC_IOCmd_Cfg, val);
		(void)in_be64(cregs + IOC_IOCmd_Cfg);

		iounmap(xregs);
	}
}

static int __init cell_iommu_init_disabled(void)
{
	struct device_node *np = NULL;
	unsigned long base = 0, size;

	/* When no iommu is present, we use direct DMA ops */

	/* First make sure all IOC translation is turned off */
	cell_disable_iommus();

	/* If we have no Axon, we set up the spider DMA magic offset */
	np = of_find_node_by_name(NULL, "axon");
	if (!np)
		cell_dma_nommu_offset = SPIDER_DMA_OFFSET;
	of_node_put(np);

	/* Now we need to check to see where the memory is mapped
	 * in PCI space. We assume that all busses use the same dma
	 * window which is always the case so far on Cell, thus we
	 * pick up the first pci-internal node we can find and check
	 * the DMA window from there.
	 */
	for_each_node_by_name(np, "axon") {
		if (np->parent == NULL || np->parent->parent != NULL)
			continue;
		if (cell_iommu_get_window(np, &base, &size) == 0)
			break;
	}
	if (np == NULL) {
		for_each_node_by_name(np, "pci-internal") {
			if (np->parent == NULL || np->parent->parent != NULL)
				continue;
			if (cell_iommu_get_window(np, &base, &size) == 0)
				break;
		}
	}
	of_node_put(np);

	/* If we found a DMA window, we check if it's big enough to enclose
	 * all of physical memory. If not, we force enable IOMMU
	 */
	if (np && size < memblock_end_of_DRAM()) {
		printk(KERN_WARNING "iommu: force-enabled, dma window"
		       " (%ldMB) smaller than total memory (%lldMB)\n",
		       size >> 20, memblock_end_of_DRAM() >> 20);
		return -ENODEV;
	}

	cell_dma_nommu_offset += base;

	if (cell_dma_nommu_offset != 0)
		cell_pci_controller_ops.dma_dev_setup = cell_pci_dma_dev_setup;

	printk("iommu: disabled, direct DMA offset is 0x%lx\n",
	       cell_dma_nommu_offset);

	return 0;
}

/*
 * Fixed IOMMU mapping support
 *
 * This code adds support for setting up a fixed IOMMU mapping on certain
 * cell machines. For 64-bit devices this avoids the performance overhead of
 * mapping and unmapping pages at runtime. 32-bit devices are unable to use
 * the fixed mapping.
 *
 * The fixed mapping is established at boot, and maps all of physical memory
 * 1:1 into device space at some offset. On machines with < 30 GB of memory
 * we setup the fixed mapping immediately above the normal IOMMU window.
 *
 * For example a machine with 4GB of memory would end up with the normal
 * IOMMU window from 0-2GB and the fixed mapping window from 2GB to 6GB. In
 * this case a 64-bit device wishing to DMA to 1GB would be told to DMA to
 * 3GB, plus any offset required by firmware. The firmware offset is encoded
 * in the "dma-ranges" property.
 *
 * On machines with 30GB or more of memory, we are unable to place the fixed
 * mapping above the normal IOMMU window as we would run out of address space.
 * Instead we move the normal IOMMU window to coincide with the hash page
 * table, this region does not need to be part of the fixed mapping as no
 * device should ever be DMA'ing to it. We then setup the fixed mapping
 * from 0 to 32GB.
 */

static u64 cell_iommu_get_fixed_address(struct device *dev)
{
	u64 best_size, dev_addr = OF_BAD_ADDR;
	struct device_node *np;
	struct of_range_parser parser;
	struct of_range range;

	/* We can be called for platform devices that have no of_node */
	np = of_node_get(dev->of_node);
	if (!np)
		goto out;

	while ((np = of_get_next_parent(np))) {
		if (of_pci_dma_range_parser_init(&parser, np))
			continue;

		if (of_range_count(&parser))
			break;
	}

	if (!np) {
		dev_dbg(dev, "iommu: no dma-ranges found\n");
		goto out;
	}

	best_size = 0;
	for_each_of_range(&parser, &range) {
		if (!range.cpu_addr)
			continue;

		if (range.size > best_size) {
			best_size = range.size;
			dev_addr = range.bus_addr;
		}
	}

	if (!best_size)
		dev_dbg(dev, "iommu: no suitable range found!\n");

out:
	of_node_put(np);

	return dev_addr;
}

static bool cell_pci_iommu_bypass_supported(struct pci_dev *pdev, u64 mask)
{
	return mask == DMA_BIT_MASK(64) &&
		cell_iommu_get_fixed_address(&pdev->dev) != OF_BAD_ADDR;
}

static void __init insert_16M_pte(unsigned long addr, unsigned long *ptab,
	unsigned long base_pte)
{
	unsigned long segment, offset;

	segment = addr >> IO_SEGMENT_SHIFT;
	offset = (addr >> 24) - (segment << IO_PAGENO_BITS(24));
	ptab = ptab + (segment * (1 << 12) / sizeof(unsigned long));

	pr_debug("iommu: addr %lx ptab %p segment %lx offset %lx\n",
		 addr, ptab, segment, offset);

	ptab[offset] = base_pte | (__pa(addr) & CBE_IOPTE_RPN_Mask);
}

static void __init cell_iommu_setup_fixed_ptab(struct cbe_iommu *iommu,
	struct device_node *np, unsigned long dbase, unsigned long dsize,
	unsigned long fbase, unsigned long fsize)
- { 850 - unsigned long base_pte, uaddr, ioaddr, *ptab; 851 - 852 - ptab = cell_iommu_alloc_ptab(iommu, fbase, fsize, dbase, dsize, 24); 853 - 854 - dma_iommu_fixed_base = fbase; 855 - 856 - pr_debug("iommu: mapping 0x%lx pages from 0x%lx\n", fsize, fbase); 857 - 858 - base_pte = CBE_IOPTE_PP_W | CBE_IOPTE_PP_R | CBE_IOPTE_M | 859 - (cell_iommu_get_ioid(np) & CBE_IOPTE_IOID_Mask); 860 - 861 - if (iommu_fixed_is_weak) 862 - pr_info("IOMMU: Using weak ordering for fixed mapping\n"); 863 - else { 864 - pr_info("IOMMU: Using strong ordering for fixed mapping\n"); 865 - base_pte |= CBE_IOPTE_SO_RW; 866 - } 867 - 868 - for (uaddr = 0; uaddr < fsize; uaddr += (1 << 24)) { 869 - /* Don't touch the dynamic region */ 870 - ioaddr = uaddr + fbase; 871 - if (ioaddr >= dbase && ioaddr < (dbase + dsize)) { 872 - pr_debug("iommu: fixed/dynamic overlap, skipping\n"); 873 - continue; 874 - } 875 - 876 - insert_16M_pte(uaddr, ptab, base_pte); 877 - } 878 - 879 - mb(); 880 - } 881 - 882 - static int __init cell_iommu_fixed_mapping_init(void) 883 - { 884 - unsigned long dbase, dsize, fbase, fsize, hbase, hend; 885 - struct cbe_iommu *iommu; 886 - struct device_node *np; 887 - 888 - /* The fixed mapping is only supported on axon machines */ 889 - np = of_find_node_by_name(NULL, "axon"); 890 - of_node_put(np); 891 - 892 - if (!np) { 893 - pr_debug("iommu: fixed mapping disabled, no axons found\n"); 894 - return -1; 895 - } 896 - 897 - /* We must have dma-ranges properties for fixed mapping to work */ 898 - np = of_find_node_with_property(NULL, "dma-ranges"); 899 - of_node_put(np); 900 - 901 - if (!np) { 902 - pr_debug("iommu: no dma-ranges found, no fixed mapping\n"); 903 - return -1; 904 - } 905 - 906 - /* The default setup is to have the fixed mapping sit after the 907 - * dynamic region, so find the top of the largest IOMMU window 908 - * on any axon, then add the size of RAM and that's our max value. 909 - * If that is > 32GB we have to do other shennanigans. 
910 - */ 911 - fbase = 0; 912 - for_each_node_by_name(np, "axon") { 913 - cell_iommu_get_window(np, &dbase, &dsize); 914 - fbase = max(fbase, dbase + dsize); 915 - } 916 - 917 - fbase = ALIGN(fbase, 1 << IO_SEGMENT_SHIFT); 918 - fsize = memblock_phys_mem_size(); 919 - 920 - if ((fbase + fsize) <= 0x800000000ul) 921 - hbase = 0; /* use the device tree window */ 922 - else { 923 - /* If we're over 32 GB we need to cheat. We can't map all of 924 - * RAM with the fixed mapping, and also fit the dynamic 925 - * region. So try to place the dynamic region where the hash 926 - * table sits, drivers never need to DMA to it, we don't 927 - * need a fixed mapping for that area. 928 - */ 929 - if (!htab_address) { 930 - pr_debug("iommu: htab is NULL, on LPAR? Huh?\n"); 931 - return -1; 932 - } 933 - hbase = __pa(htab_address); 934 - hend = hbase + htab_size_bytes; 935 - 936 - /* The window must start and end on a segment boundary */ 937 - if ((hbase != ALIGN(hbase, 1 << IO_SEGMENT_SHIFT)) || 938 - (hend != ALIGN(hend, 1 << IO_SEGMENT_SHIFT))) { 939 - pr_debug("iommu: hash window not segment aligned\n"); 940 - return -1; 941 - } 942 - 943 - /* Check the hash window fits inside the real DMA window */ 944 - for_each_node_by_name(np, "axon") { 945 - cell_iommu_get_window(np, &dbase, &dsize); 946 - 947 - if (hbase < dbase || (hend > (dbase + dsize))) { 948 - pr_debug("iommu: hash window doesn't fit in" 949 - "real DMA window\n"); 950 - of_node_put(np); 951 - return -1; 952 - } 953 - } 954 - 955 - fbase = 0; 956 - } 957 - 958 - /* Setup the dynamic regions */ 959 - for_each_node_by_name(np, "axon") { 960 - iommu = cell_iommu_alloc(np); 961 - BUG_ON(!iommu); 962 - 963 - if (hbase == 0) 964 - cell_iommu_get_window(np, &dbase, &dsize); 965 - else { 966 - dbase = hbase; 967 - dsize = htab_size_bytes; 968 - } 969 - 970 - printk(KERN_DEBUG "iommu: node %d, dynamic window 0x%lx-0x%lx " 971 - "fixed window 0x%lx-0x%lx\n", iommu->nid, dbase, 972 - dbase + dsize, fbase, fbase + fsize); 973 - 
974 - cell_iommu_setup_stab(iommu, dbase, dsize, fbase, fsize); 975 - iommu->ptab = cell_iommu_alloc_ptab(iommu, dbase, dsize, 0, 0, 976 - IOMMU_PAGE_SHIFT_4K); 977 - cell_iommu_setup_fixed_ptab(iommu, np, dbase, dsize, 978 - fbase, fsize); 979 - cell_iommu_enable_hardware(iommu); 980 - cell_iommu_setup_window(iommu, np, dbase, dsize, 0); 981 - } 982 - 983 - cell_pci_controller_ops.iommu_bypass_supported = 984 - cell_pci_iommu_bypass_supported; 985 - return 0; 986 - } 987 - 988 - static int iommu_fixed_disabled; 989 - 990 - static int __init setup_iommu_fixed(char *str) 991 - { 992 - struct device_node *pciep; 993 - 994 - if (strcmp(str, "off") == 0) 995 - iommu_fixed_disabled = 1; 996 - 997 - /* If we can find a pcie-endpoint in the device tree assume that 998 - * we're on a triblade or a CAB so by default the fixed mapping 999 - * should be set to be weakly ordered; but only if the boot 1000 - * option WASN'T set for strong ordering 1001 - */ 1002 - pciep = of_find_node_by_type(NULL, "pcie-endpoint"); 1003 - 1004 - if (strcmp(str, "weak") == 0 || (pciep && strcmp(str, "strong") != 0)) 1005 - iommu_fixed_is_weak = true; 1006 - 1007 - of_node_put(pciep); 1008 - 1009 - return 1; 1010 - } 1011 - __setup("iommu_fixed=", setup_iommu_fixed); 1012 - 1013 - static int __init cell_iommu_init(void) 1014 - { 1015 - struct device_node *np; 1016 - 1017 - /* If IOMMU is disabled or we have little enough RAM to not need 1018 - * to enable it, we setup a direct mapping. 1019 - * 1020 - * Note: should we make sure we have the IOMMU actually disabled ? 1021 - */ 1022 - if (iommu_is_off || 1023 - (!iommu_force_on && memblock_end_of_DRAM() <= 0x80000000ull)) 1024 - if (cell_iommu_init_disabled() == 0) 1025 - goto bail; 1026 - 1027 - /* Setup various callbacks */ 1028 - cell_pci_controller_ops.dma_dev_setup = cell_pci_dma_dev_setup; 1029 - 1030 - if (!iommu_fixed_disabled && cell_iommu_fixed_mapping_init() == 0) 1031 - goto done; 1032 - 1033 - /* Create an iommu for each /axon node. 
*/ 1034 - for_each_node_by_name(np, "axon") { 1035 - if (np->parent == NULL || np->parent->parent != NULL) 1036 - continue; 1037 - cell_iommu_init_one(np, 0); 1038 - } 1039 - 1040 - /* Create an iommu for each toplevel /pci-internal node for 1041 - * old hardware/firmware 1042 - */ 1043 - for_each_node_by_name(np, "pci-internal") { 1044 - if (np->parent == NULL || np->parent->parent != NULL) 1045 - continue; 1046 - cell_iommu_init_one(np, SPIDER_DMA_OFFSET); 1047 - } 1048 - done: 1049 - /* Setup default PCI iommu ops */ 1050 - set_pci_dma_ops(&dma_iommu_ops); 1051 - cell_iommu_enabled = true; 1052 - bail: 1053 - /* Register callbacks on OF platform device addition/removal 1054 - * to handle linking them to the right DMA operations 1055 - */ 1056 - bus_register_notifier(&platform_bus_type, &cell_of_bus_notifier); 1057 - 1058 - return 0; 1059 - } 1060 - machine_arch_initcall(cell, cell_iommu_init);
-125
arch/powerpc/platforms/cell/pervasive.c
-// SPDX-License-Identifier: GPL-2.0-or-later
-/*
- * CBE Pervasive Monitor and Debug
- *
- * (C) Copyright IBM Corporation 2005
- *
- * Authors: Maximino Aguilar (maguilar@us.ibm.com)
- *          Michael N. Day (mnday@us.ibm.com)
- */
-
-#undef DEBUG
-
-#include <linux/interrupt.h>
-#include <linux/irq.h>
-#include <linux/percpu.h>
-#include <linux/types.h>
-#include <linux/kallsyms.h>
-#include <linux/pgtable.h>
-
-#include <asm/io.h>
-#include <asm/machdep.h>
-#include <asm/reg.h>
-#include <asm/cell-regs.h>
-#include <asm/cpu_has_feature.h>
-
-#include "pervasive.h"
-#include "ras.h"
-
-static void cbe_power_save(void)
-{
-	unsigned long ctrl, thread_switch_control;
-
-	/* Ensure our interrupt state is properly tracked */
-	if (!prep_irq_for_idle())
-		return;
-
-	ctrl = mfspr(SPRN_CTRLF);
-
-	/* Enable DEC and EE interrupt request */
-	thread_switch_control = mfspr(SPRN_TSC_CELL);
-	thread_switch_control |= TSC_CELL_EE_ENABLE | TSC_CELL_EE_BOOST;
-
-	switch (ctrl & CTRL_CT) {
-	case CTRL_CT0:
-		thread_switch_control |= TSC_CELL_DEC_ENABLE_0;
-		break;
-	case CTRL_CT1:
-		thread_switch_control |= TSC_CELL_DEC_ENABLE_1;
-		break;
-	default:
-		printk(KERN_WARNING "%s: unknown configuration\n",
-		       __func__);
-		break;
-	}
-	mtspr(SPRN_TSC_CELL, thread_switch_control);
-
-	/*
-	 * go into low thread priority, medium priority will be
-	 * restored for us after wake-up.
-	 */
-	HMT_low();
-
-	/*
-	 * atomically disable thread execution and runlatch.
-	 * External and Decrementer exceptions are still handled when the
-	 * thread is disabled but now enter in cbe_system_reset_exception()
-	 */
-	ctrl &= ~(CTRL_RUNLATCH | CTRL_TE);
-	mtspr(SPRN_CTRLT, ctrl);
-
-	/* Re-enable interrupts in MSR */
-	__hard_irq_enable();
-}
-
-static int cbe_system_reset_exception(struct pt_regs *regs)
-{
-	switch (regs->msr & SRR1_WAKEMASK) {
-	case SRR1_WAKEDEC:
-		set_dec(1);
-		break;
-	case SRR1_WAKEEE:
-		/*
-		 * Handle these when interrupts get re-enabled and we take
-		 * them as regular exceptions. We are in an NMI context
-		 * and can't handle these here.
-		 */
-		break;
-	case SRR1_WAKEMT:
-		return cbe_sysreset_hack();
-#ifdef CONFIG_CBE_RAS
-	case SRR1_WAKESYSERR:
-		cbe_system_error_exception(regs);
-		break;
-	case SRR1_WAKETHERM:
-		cbe_thermal_exception(regs);
-		break;
-#endif /* CONFIG_CBE_RAS */
-	default:
-		/* do system reset */
-		return 0;
-	}
-	/* everything handled */
-	return 1;
-}
-
-void __init cbe_pervasive_init(void)
-{
-	int cpu;
-
-	if (!cpu_has_feature(CPU_FTR_PAUSE_ZERO))
-		return;
-
-	for_each_possible_cpu(cpu) {
-		struct cbe_pmd_regs __iomem *regs = cbe_get_cpu_pmd_regs(cpu);
-		if (!regs)
-			continue;
-
-		/* Enable Pause(0) control bit */
-		out_be64(&regs->pmcr, in_be64(&regs->pmcr) |
-			 CBE_PMD_PAUSE_ZERO_CONTROL);
-	}
-
-	ppc_md.power_save = cbe_power_save;
-	ppc_md.system_reset_exception = cbe_system_reset_exception;
-}
-26
arch/powerpc/platforms/cell/pervasive.h
-/* SPDX-License-Identifier: GPL-2.0-or-later */
-/*
- * Cell Pervasive Monitor and Debug interface and HW structures
- *
- * (C) Copyright IBM Corporation 2005
- *
- * Authors: Maximino Aguilar (maguilar@us.ibm.com)
- *          David J. Erb (djerb@us.ibm.com)
- */
-
-
-#ifndef PERVASIVE_H
-#define PERVASIVE_H
-
-extern void cbe_pervasive_init(void);
-
-#ifdef CONFIG_PPC_IBM_CELL_RESETBUTTON
-extern int cbe_sysreset_hack(void);
-#else
-static inline int cbe_sysreset_hack(void)
-{
-	return 1;
-}
-#endif /* CONFIG_PPC_IBM_CELL_RESETBUTTON */
-
-#endif
-412
arch/powerpc/platforms/cell/pmu.c
-// SPDX-License-Identifier: GPL-2.0-or-later
-/*
- * Cell Broadband Engine Performance Monitor
- *
- * (C) Copyright IBM Corporation 2001,2006
- *
- * Author:
- *    David Erb (djerb@us.ibm.com)
- *    Kevin Corry (kevcorry@us.ibm.com)
- */
-
-#include <linux/interrupt.h>
-#include <linux/irqdomain.h>
-#include <linux/types.h>
-#include <linux/export.h>
-#include <asm/io.h>
-#include <asm/irq_regs.h>
-#include <asm/machdep.h>
-#include <asm/pmc.h>
-#include <asm/reg.h>
-#include <asm/spu.h>
-#include <asm/cell-regs.h>
-
-#include "interrupt.h"
-
-/*
- * When writing to write-only mmio addresses, save a shadow copy. All of the
- * registers are 32-bit, but stored in the upper-half of a 64-bit field in
- * pmd_regs.
- */
-
-#define WRITE_WO_MMIO(reg, x)					\
-	do {							\
-		u32 _x = (x);					\
-		struct cbe_pmd_regs __iomem *pmd_regs;		\
-		struct cbe_pmd_shadow_regs *shadow_regs;	\
-		pmd_regs = cbe_get_cpu_pmd_regs(cpu);		\
-		shadow_regs = cbe_get_cpu_pmd_shadow_regs(cpu);	\
-		out_be64(&(pmd_regs->reg), (((u64)_x) << 32));	\
-		shadow_regs->reg = _x;				\
-	} while (0)
-
-#define READ_SHADOW_REG(val, reg)				\
-	do {							\
-		struct cbe_pmd_shadow_regs *shadow_regs;	\
-		shadow_regs = cbe_get_cpu_pmd_shadow_regs(cpu);	\
-		(val) = shadow_regs->reg;			\
-	} while (0)
-
-#define READ_MMIO_UPPER32(val, reg)				\
-	do {							\
-		struct cbe_pmd_regs __iomem *pmd_regs;		\
-		pmd_regs = cbe_get_cpu_pmd_regs(cpu);		\
-		(val) = (u32)(in_be64(&pmd_regs->reg) >> 32);	\
-	} while (0)
-
-/*
- * Physical counter registers.
- * Each physical counter can act as one 32-bit counter or two 16-bit counters.
- */
-
-u32 cbe_read_phys_ctr(u32 cpu, u32 phys_ctr)
-{
-	u32 val_in_latch, val = 0;
-
-	if (phys_ctr < NR_PHYS_CTRS) {
-		READ_SHADOW_REG(val_in_latch, counter_value_in_latch);
-
-		/* Read the latch or the actual counter, whichever is newer. */
-		if (val_in_latch & (1 << phys_ctr)) {
-			READ_SHADOW_REG(val, pm_ctr[phys_ctr]);
-		} else {
-			READ_MMIO_UPPER32(val, pm_ctr[phys_ctr]);
-		}
-	}
-
-	return val;
-}
-EXPORT_SYMBOL_GPL(cbe_read_phys_ctr);
-
-void cbe_write_phys_ctr(u32 cpu, u32 phys_ctr, u32 val)
-{
-	struct cbe_pmd_shadow_regs *shadow_regs;
-	u32 pm_ctrl;
-
-	if (phys_ctr < NR_PHYS_CTRS) {
-		/* Writing to a counter only writes to a hardware latch.
-		 * The new value is not propagated to the actual counter
-		 * until the performance monitor is enabled.
-		 */
-		WRITE_WO_MMIO(pm_ctr[phys_ctr], val);
-
-		pm_ctrl = cbe_read_pm(cpu, pm_control);
-		if (pm_ctrl & CBE_PM_ENABLE_PERF_MON) {
-			/* The counters are already active, so we need to
-			 * rewrite the pm_control register to "re-enable"
-			 * the PMU.
-			 */
-			cbe_write_pm(cpu, pm_control, pm_ctrl);
-		} else {
-			shadow_regs = cbe_get_cpu_pmd_shadow_regs(cpu);
-			shadow_regs->counter_value_in_latch |= (1 << phys_ctr);
-		}
-	}
-}
-EXPORT_SYMBOL_GPL(cbe_write_phys_ctr);
-
-/*
- * "Logical" counter registers.
- * These will read/write 16-bits or 32-bits depending on the
- * current size of the counter. Counters 4 - 7 are always 16-bit.
- */
-
-u32 cbe_read_ctr(u32 cpu, u32 ctr)
-{
-	u32 val;
-	u32 phys_ctr = ctr & (NR_PHYS_CTRS - 1);
-
-	val = cbe_read_phys_ctr(cpu, phys_ctr);
-
-	if (cbe_get_ctr_size(cpu, phys_ctr) == 16)
-		val = (ctr < NR_PHYS_CTRS) ? (val >> 16) : (val & 0xffff);
-
-	return val;
-}
-EXPORT_SYMBOL_GPL(cbe_read_ctr);
-
-void cbe_write_ctr(u32 cpu, u32 ctr, u32 val)
-{
-	u32 phys_ctr;
-	u32 phys_val;
-
-	phys_ctr = ctr & (NR_PHYS_CTRS - 1);
-
-	if (cbe_get_ctr_size(cpu, phys_ctr) == 16) {
-		phys_val = cbe_read_phys_ctr(cpu, phys_ctr);
-
-		if (ctr < NR_PHYS_CTRS)
-			val = (val << 16) | (phys_val & 0xffff);
-		else
-			val = (val & 0xffff) | (phys_val & 0xffff0000);
-	}
-
-	cbe_write_phys_ctr(cpu, phys_ctr, val);
-}
-EXPORT_SYMBOL_GPL(cbe_write_ctr);
-
-/*
- * Counter-control registers.
- * Each "logical" counter has a corresponding control register.
- */
-
-u32 cbe_read_pm07_control(u32 cpu, u32 ctr)
-{
-	u32 pm07_control = 0;
-
-	if (ctr < NR_CTRS)
-		READ_SHADOW_REG(pm07_control, pm07_control[ctr]);
-
-	return pm07_control;
-}
-EXPORT_SYMBOL_GPL(cbe_read_pm07_control);
-
-void cbe_write_pm07_control(u32 cpu, u32 ctr, u32 val)
-{
-	if (ctr < NR_CTRS)
-		WRITE_WO_MMIO(pm07_control[ctr], val);
-}
-EXPORT_SYMBOL_GPL(cbe_write_pm07_control);
-
-/*
- * Other PMU control registers. Most of these are write-only.
- */
-
-u32 cbe_read_pm(u32 cpu, enum pm_reg_name reg)
-{
-	u32 val = 0;
-
-	switch (reg) {
-	case group_control:
-		READ_SHADOW_REG(val, group_control);
-		break;
-
-	case debug_bus_control:
-		READ_SHADOW_REG(val, debug_bus_control);
-		break;
-
-	case trace_address:
-		READ_MMIO_UPPER32(val, trace_address);
-		break;
-
-	case ext_tr_timer:
-		READ_SHADOW_REG(val, ext_tr_timer);
-		break;
-
-	case pm_status:
-		READ_MMIO_UPPER32(val, pm_status);
-		break;
-
-	case pm_control:
-		READ_SHADOW_REG(val, pm_control);
-		break;
-
-	case pm_interval:
-		READ_MMIO_UPPER32(val, pm_interval);
-		break;
-
-	case pm_start_stop:
-		READ_SHADOW_REG(val, pm_start_stop);
-		break;
-	}
-
-	return val;
-}
-EXPORT_SYMBOL_GPL(cbe_read_pm);
-
-void cbe_write_pm(u32 cpu, enum pm_reg_name reg, u32 val)
-{
-	switch (reg) {
-	case group_control:
-		WRITE_WO_MMIO(group_control, val);
-		break;
-
-	case debug_bus_control:
-		WRITE_WO_MMIO(debug_bus_control, val);
-		break;
-
-	case trace_address:
-		WRITE_WO_MMIO(trace_address, val);
-		break;
-
-	case ext_tr_timer:
-		WRITE_WO_MMIO(ext_tr_timer, val);
-		break;
-
-	case pm_status:
-		WRITE_WO_MMIO(pm_status, val);
-		break;
-
-	case pm_control:
-		WRITE_WO_MMIO(pm_control, val);
-		break;
-
-	case pm_interval:
-		WRITE_WO_MMIO(pm_interval, val);
-		break;
-
-	case pm_start_stop:
-		WRITE_WO_MMIO(pm_start_stop, val);
-		break;
-	}
-}
-EXPORT_SYMBOL_GPL(cbe_write_pm);
-
-/*
- * Get/set the size of a physical counter to either 16 or 32 bits.
- */
-
-u32 cbe_get_ctr_size(u32 cpu, u32 phys_ctr)
-{
-	u32 pm_ctrl, size = 0;
-
-	if (phys_ctr < NR_PHYS_CTRS) {
-		pm_ctrl = cbe_read_pm(cpu, pm_control);
-		size = (pm_ctrl & CBE_PM_16BIT_CTR(phys_ctr)) ? 16 : 32;
-	}
-
-	return size;
-}
-EXPORT_SYMBOL_GPL(cbe_get_ctr_size);
-
-void cbe_set_ctr_size(u32 cpu, u32 phys_ctr, u32 ctr_size)
-{
-	u32 pm_ctrl;
-
-	if (phys_ctr < NR_PHYS_CTRS) {
-		pm_ctrl = cbe_read_pm(cpu, pm_control);
-		switch (ctr_size) {
-		case 16:
-			pm_ctrl |= CBE_PM_16BIT_CTR(phys_ctr);
-			break;
-
-		case 32:
-			pm_ctrl &= ~CBE_PM_16BIT_CTR(phys_ctr);
-			break;
-		}
-		cbe_write_pm(cpu, pm_control, pm_ctrl);
-	}
-}
-EXPORT_SYMBOL_GPL(cbe_set_ctr_size);
-
-/*
- * Enable/disable the entire performance monitoring unit.
- * When we enable the PMU, all pending writes to counters get committed.
- */
-
-void cbe_enable_pm(u32 cpu)
-{
-	struct cbe_pmd_shadow_regs *shadow_regs;
-	u32 pm_ctrl;
-
-	shadow_regs = cbe_get_cpu_pmd_shadow_regs(cpu);
-	shadow_regs->counter_value_in_latch = 0;
-
-	pm_ctrl = cbe_read_pm(cpu, pm_control) | CBE_PM_ENABLE_PERF_MON;
-	cbe_write_pm(cpu, pm_control, pm_ctrl);
-}
-EXPORT_SYMBOL_GPL(cbe_enable_pm);
-
-void cbe_disable_pm(u32 cpu)
-{
-	u32 pm_ctrl;
-
-	pm_ctrl = cbe_read_pm(cpu, pm_control) & ~CBE_PM_ENABLE_PERF_MON;
-	cbe_write_pm(cpu, pm_control, pm_ctrl);
-}
-EXPORT_SYMBOL_GPL(cbe_disable_pm);
-
-/*
- * Reading from the trace_buffer.
- * The trace buffer is two 64-bit registers. Reading from
- * the second half automatically increments the trace_address.
- */
-
-void cbe_read_trace_buffer(u32 cpu, u64 *buf)
-{
-	struct cbe_pmd_regs __iomem *pmd_regs = cbe_get_cpu_pmd_regs(cpu);
-
-	*buf++ = in_be64(&pmd_regs->trace_buffer_0_63);
-	*buf++ = in_be64(&pmd_regs->trace_buffer_64_127);
-}
-EXPORT_SYMBOL_GPL(cbe_read_trace_buffer);
-
-/*
- * Enabling/disabling interrupts for the entire performance monitoring unit.
- */
-
-u32 cbe_get_and_clear_pm_interrupts(u32 cpu)
-{
-	/* Reading pm_status clears the interrupt bits. */
-	return cbe_read_pm(cpu, pm_status);
-}
-EXPORT_SYMBOL_GPL(cbe_get_and_clear_pm_interrupts);
-
-void cbe_enable_pm_interrupts(u32 cpu, u32 thread, u32 mask)
-{
-	/* Set which node and thread will handle the next interrupt. */
-	iic_set_interrupt_routing(cpu, thread, 0);
-
-	/* Enable the interrupt bits in the pm_status register. */
-	if (mask)
-		cbe_write_pm(cpu, pm_status, mask);
-}
-EXPORT_SYMBOL_GPL(cbe_enable_pm_interrupts);
-
-void cbe_disable_pm_interrupts(u32 cpu)
-{
-	cbe_get_and_clear_pm_interrupts(cpu);
-	cbe_write_pm(cpu, pm_status, 0);
-}
-EXPORT_SYMBOL_GPL(cbe_disable_pm_interrupts);
-
-static irqreturn_t cbe_pm_irq(int irq, void *dev_id)
-{
-	perf_irq(get_irq_regs());
-	return IRQ_HANDLED;
-}
-
-static int __init cbe_init_pm_irq(void)
-{
-	unsigned int irq;
-	int rc, node;
-
-	for_each_online_node(node) {
-		irq = irq_create_mapping(NULL, IIC_IRQ_IOEX_PMI |
-					 (node << IIC_IRQ_NODE_SHIFT));
-		if (!irq) {
-			printk("ERROR: Unable to allocate irq for node %d\n",
-			       node);
-			return -EINVAL;
-		}
-
-		rc = request_irq(irq, cbe_pm_irq,
-				 0, "cbe-pmu-0", NULL);
-		if (rc) {
-			printk("ERROR: Request for irq on node %d failed\n",
-			       node);
-			return rc;
-		}
-	}
-
-	return 0;
-}
-machine_arch_initcall(cell, cbe_init_pm_irq);
-
-void cbe_sync_irq(int node)
-{
-	unsigned int irq;
-
-	irq = irq_find_mapping(NULL,
-			       IIC_IRQ_IOEX_PMI
-			       | (node << IIC_IRQ_NODE_SHIFT));
-
-	if (!irq) {
-		printk(KERN_WARNING "ERROR, unable to get existing irq %d "
-		       "for node %d\n", irq, node);
-		return;
-	}
-
-	synchronize_irq(irq);
-}
-EXPORT_SYMBOL_GPL(cbe_sync_irq);
-352
arch/powerpc/platforms/cell/ras.c
-// SPDX-License-Identifier: GPL-2.0-or-later
-/*
- * Copyright 2006-2008, IBM Corporation.
- */
-
-#undef DEBUG
-
-#include <linux/types.h>
-#include <linux/kernel.h>
-#include <linux/slab.h>
-#include <linux/smp.h>
-#include <linux/reboot.h>
-#include <linux/kexec.h>
-#include <linux/crash_dump.h>
-#include <linux/of.h>
-
-#include <asm/kexec.h>
-#include <asm/reg.h>
-#include <asm/io.h>
-#include <asm/machdep.h>
-#include <asm/rtas.h>
-#include <asm/cell-regs.h>
-
-#include "ras.h"
-#include "pervasive.h"
-
-static void dump_fir(int cpu)
-{
-	struct cbe_pmd_regs __iomem *pregs = cbe_get_cpu_pmd_regs(cpu);
-	struct cbe_iic_regs __iomem *iregs = cbe_get_cpu_iic_regs(cpu);
-
-	if (pregs == NULL)
-		return;
-
-	/* Todo: do some nicer parsing of bits and based on them go down
-	 * to other sub-units FIRs and not only IIC
-	 */
-	printk(KERN_ERR "Global Checkstop FIR    : 0x%016llx\n",
-	       in_be64(&pregs->checkstop_fir));
-	printk(KERN_ERR "Global Recoverable FIR  : 0x%016llx\n",
-	       in_be64(&pregs->checkstop_fir));
-	printk(KERN_ERR "Global MachineCheck FIR : 0x%016llx\n",
-	       in_be64(&pregs->spec_att_mchk_fir));
-
-	if (iregs == NULL)
-		return;
-	printk(KERN_ERR "IOC FIR                 : 0x%016llx\n",
-	       in_be64(&iregs->ioc_fir));
-}
-
-DEFINE_INTERRUPT_HANDLER(cbe_system_error_exception)
-{
-	int cpu = smp_processor_id();
-
-	printk(KERN_ERR "System Error Interrupt on CPU %d !\n", cpu);
-	dump_fir(cpu);
-	dump_stack();
-}
-
-DEFINE_INTERRUPT_HANDLER(cbe_maintenance_exception)
-{
-	int cpu = smp_processor_id();
-
-	/*
-	 * Nothing implemented for the maintenance interrupt at this point
-	 */
-
-	printk(KERN_ERR "Unhandled Maintenance interrupt on CPU %d !\n", cpu);
-	dump_stack();
-}
-
-DEFINE_INTERRUPT_HANDLER(cbe_thermal_exception)
-{
-	int cpu = smp_processor_id();
-
-	/*
-	 * Nothing implemented for the thermal interrupt at this point
-	 */
-
-	printk(KERN_ERR "Unhandled Thermal interrupt on CPU %d !\n", cpu);
-	dump_stack();
-}
-
-static int cbe_machine_check_handler(struct pt_regs *regs)
-{
-	int cpu = smp_processor_id();
-
-	printk(KERN_ERR "Machine Check Interrupt on CPU %d !\n", cpu);
-	dump_fir(cpu);
-
-	/* No recovery from this code now, let's continue */
-	return 0;
-}
-
-struct ptcal_area {
-	struct list_head list;
-	int nid;
-	int order;
-	struct page *pages;
-};
-
-static LIST_HEAD(ptcal_list);
-
-static int ptcal_start_tok, ptcal_stop_tok;
-
-static int __init cbe_ptcal_enable_on_node(int nid, int order)
-{
-	struct ptcal_area *area;
-	int ret = -ENOMEM;
-	unsigned long addr;
-
-	if (is_kdump_kernel())
-		rtas_call(ptcal_stop_tok, 1, 1, NULL, nid);
-
-	area = kmalloc(sizeof(*area), GFP_KERNEL);
-	if (!area)
-		goto out_err;
-
-	area->nid = nid;
-	area->order = order;
-	area->pages = __alloc_pages_node(area->nid,
-					 GFP_KERNEL|__GFP_THISNODE,
-					 area->order);
-
-	if (!area->pages) {
-		printk(KERN_WARNING "%s: no page on node %d\n",
-		       __func__, area->nid);
-		goto out_free_area;
-	}
-
-	/*
-	 * We move the ptcal area to the middle of the allocated
-	 * page, in order to avoid prefetches in memcpy and similar
-	 * functions stepping on it.
-	 */
-	addr = __pa(page_address(area->pages)) + (PAGE_SIZE >> 1);
-	printk(KERN_DEBUG "%s: enabling PTCAL on node %d address=0x%016lx\n",
-	       __func__, area->nid, addr);
-
-	ret = -EIO;
-	if (rtas_call(ptcal_start_tok, 3, 1, NULL, area->nid,
-		      (unsigned int)(addr >> 32),
-		      (unsigned int)(addr & 0xffffffff))) {
-		printk(KERN_ERR "%s: error enabling PTCAL on node %d!\n",
-		       __func__, nid);
-		goto out_free_pages;
-	}
-
-	list_add(&area->list, &ptcal_list);
-
-	return 0;
-
-out_free_pages:
-	__free_pages(area->pages, area->order);
-out_free_area:
-	kfree(area);
-out_err:
-	return ret;
-}
-
-static int __init cbe_ptcal_enable(void)
-{
-	const u32 *size;
-	struct device_node *np;
-	int order, found_mic = 0;
-
-	np = of_find_node_by_path("/rtas");
-	if (!np)
-		return -ENODEV;
-
-	size = of_get_property(np, "ibm,cbe-ptcal-size", NULL);
-	if (!size) {
-		of_node_put(np);
-		return -ENODEV;
-	}
-
-	pr_debug("%s: enabling PTCAL, size = 0x%x\n", __func__, *size);
-	order = get_order(*size);
-	of_node_put(np);
-
-	/* support for malta device trees, with be@/mic@ nodes */
-	for_each_node_by_type(np, "mic-tm") {
-		cbe_ptcal_enable_on_node(of_node_to_nid(np), order);
-		found_mic = 1;
-	}
-
-	if (found_mic)
-		return 0;
-
-	/* support for older device tree - use cpu nodes */
-	for_each_node_by_type(np, "cpu") {
-		const u32 *nid = of_get_property(np, "node-id", NULL);
-		if (!nid) {
-			printk(KERN_ERR "%s: node %pOF is missing node-id?\n",
-			       __func__, np);
-			continue;
-		}
-		cbe_ptcal_enable_on_node(*nid, order);
-		found_mic = 1;
-	}
-
-	return found_mic ? 0 : -ENODEV;
-}
-
-static int cbe_ptcal_disable(void)
-{
-	struct ptcal_area *area, *tmp;
-	int ret = 0;
-
-	pr_debug("%s: disabling PTCAL\n", __func__);
-
-	list_for_each_entry_safe(area, tmp, &ptcal_list, list) {
-		/* disable ptcal on this node */
-		if (rtas_call(ptcal_stop_tok, 1, 1, NULL, area->nid)) {
-			printk(KERN_ERR "%s: error disabling PTCAL "
-			       "on node %d!\n", __func__, area->nid);
-			ret = -EIO;
-			continue;
-		}
-
-		/* ensure we can access the PTCAL area */
-		memset(page_address(area->pages), 0,
-		       1 << (area->order + PAGE_SHIFT));
-
-		/* clean up */
-		list_del(&area->list);
-		__free_pages(area->pages, area->order);
-		kfree(area);
-	}
-
-	return ret;
-}
-
-static int cbe_ptcal_notify_reboot(struct notifier_block *nb,
-				   unsigned long code, void *data)
-{
-	return cbe_ptcal_disable();
-}
-
-static void cbe_ptcal_crash_shutdown(void)
-{
-	cbe_ptcal_disable();
-}
-
-static struct notifier_block cbe_ptcal_reboot_notifier = {
-	.notifier_call = cbe_ptcal_notify_reboot
-};
-
-#ifdef CONFIG_PPC_IBM_CELL_RESETBUTTON
-static int sysreset_hack;
-
-static int __init cbe_sysreset_init(void)
-{
-	struct cbe_pmd_regs __iomem *regs;
-
-	sysreset_hack = of_machine_is_compatible("IBM,CBPLUS-1.0");
-	if (!sysreset_hack)
-		return 0;
-
-	regs = cbe_get_cpu_pmd_regs(0);
-	if (!regs)
-		return 0;
-
-	/* Enable JTAG system-reset hack */
-	out_be32(&regs->fir_mode_reg,
-		 in_be32(&regs->fir_mode_reg) |
-		 CBE_PMD_FIR_MODE_M8);
-
-	return 0;
-}
-device_initcall(cbe_sysreset_init);
-
-int cbe_sysreset_hack(void)
-{
-	struct cbe_pmd_regs __iomem *regs;
-
-	/*
-	 * The BMC can inject user triggered system reset exceptions,
-	 * but cannot set the system reset reason in srr1,
-	 * so check an extra register here.
-	 */
-	if (sysreset_hack && (smp_processor_id() == 0)) {
-		regs = cbe_get_cpu_pmd_regs(0);
-		if (!regs)
-			return 0;
-		if (in_be64(&regs->ras_esc_0) & 0x0000ffff) {
-			out_be64(&regs->ras_esc_0, 0);
-			return 0;
-		}
-	}
-	return 1;
-}
-#endif /* CONFIG_PPC_IBM_CELL_RESETBUTTON */
-
-static int __init cbe_ptcal_init(void)
-{
-	int ret;
-
-	ptcal_start_tok = rtas_function_token(RTAS_FN_IBM_CBE_START_PTCAL);
-	ptcal_stop_tok = rtas_function_token(RTAS_FN_IBM_CBE_STOP_PTCAL);
-
-	if (ptcal_start_tok == RTAS_UNKNOWN_SERVICE
-	    || ptcal_stop_tok == RTAS_UNKNOWN_SERVICE)
-		return -ENODEV;
-
-	ret = register_reboot_notifier(&cbe_ptcal_reboot_notifier);
-	if (ret)
-		goto out1;
-
-	ret = crash_shutdown_register(&cbe_ptcal_crash_shutdown);
-	if (ret)
-		goto out2;
-
-	return cbe_ptcal_enable();
-
-out2:
-	unregister_reboot_notifier(&cbe_ptcal_reboot_notifier);
-out1:
-	printk(KERN_ERR "Can't disable PTCAL, so not enabling\n");
-	return ret;
-}
-
-arch_initcall(cbe_ptcal_init);
-
-void __init cbe_ras_init(void)
-{
-	unsigned long hid0;
-
-	/*
-	 * Enable System Error & thermal interrupts and wakeup conditions
-	 */
-
-	hid0 = mfspr(SPRN_HID0);
-	hid0 |= HID0_CBE_THERM_INT_EN | HID0_CBE_THERM_WAKEUP |
-		HID0_CBE_SYSERR_INT_EN | HID0_CBE_SYSERR_WAKEUP;
-	mtspr(SPRN_HID0, hid0);
-	mb();
-
-	/*
-	 * Install machine check handler. Leave setting of precise mode to
-	 * what the firmware did for now
-	 */
-	ppc_md.machine_check_exception = cbe_machine_check_handler;
-	mb();
-
-	/*
-	 * For now, we assume that IOC_FIR is already set to forward some
-	 * error conditions to the System Error handler. If that is not true
-	 * then it will have to be fixed up here.
-	 */
-}
arch/powerpc/platforms/cell/ras.h (file deleted, -13 lines)

```c
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef RAS_H
#define RAS_H

#include <asm/interrupt.h>

DECLARE_INTERRUPT_HANDLER(cbe_system_error_exception);
DECLARE_INTERRUPT_HANDLER(cbe_maintenance_exception);
DECLARE_INTERRUPT_HANDLER(cbe_thermal_exception);

extern void cbe_ras_init(void);

#endif /* RAS_H */
```
arch/powerpc/platforms/cell/setup.c (file deleted, -274 lines)

```c
// SPDX-License-Identifier: GPL-2.0-or-later
/*
 * linux/arch/powerpc/platforms/cell/cell_setup.c
 *
 * Copyright (C) 1995 Linus Torvalds
 * Adapted from 'alpha' version by Gary Thomas
 * Modified by Cort Dougan (cort@cs.nmt.edu)
 * Modified by PPC64 Team, IBM Corp
 * Modified by Cell Team, IBM Deutschland Entwicklung GmbH
 */
#undef DEBUG

#include <linux/sched.h>
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/stddef.h>
#include <linux/export.h>
#include <linux/unistd.h>
#include <linux/user.h>
#include <linux/reboot.h>
#include <linux/init.h>
#include <linux/delay.h>
#include <linux/irq.h>
#include <linux/seq_file.h>
#include <linux/root_dev.h>
#include <linux/console.h>
#include <linux/mutex.h>
#include <linux/memory_hotplug.h>
#include <linux/of_platform.h>
#include <linux/platform_device.h>

#include <asm/mmu.h>
#include <asm/processor.h>
#include <asm/io.h>
#include <asm/rtas.h>
#include <asm/pci-bridge.h>
#include <asm/iommu.h>
#include <asm/dma.h>
#include <asm/machdep.h>
#include <asm/time.h>
#include <asm/nvram.h>
#include <asm/cputable.h>
#include <asm/ppc-pci.h>
#include <asm/irq.h>
#include <asm/spu.h>
#include <asm/spu_priv1.h>
#include <asm/udbg.h>
#include <asm/mpic.h>
#include <asm/cell-regs.h>
#include <asm/io-workarounds.h>

#include "cell.h"
#include "interrupt.h"
#include "pervasive.h"
#include "ras.h"

#ifdef DEBUG
#define DBG(fmt...) udbg_printf(fmt)
#else
#define DBG(fmt...)
#endif

static void cell_show_cpuinfo(struct seq_file *m)
{
	struct device_node *root;
	const char *model = "";

	root = of_find_node_by_path("/");
	if (root)
		model = of_get_property(root, "model", NULL);
	seq_printf(m, "machine\t\t: CHRP %s\n", model);
	of_node_put(root);
}

static void cell_progress(char *s, unsigned short hex)
{
	printk("*** %04x : %s\n", hex, s ? s : "");
}

static void cell_fixup_pcie_rootcomplex(struct pci_dev *dev)
{
	struct pci_controller *hose;
	const char *s;
	int i;

	if (!machine_is(cell))
		return;

	/* We're searching for a direct child of the PHB */
	if (dev->bus->self != NULL || dev->devfn != 0)
		return;

	hose = pci_bus_to_host(dev->bus);
	if (hose == NULL)
		return;

	/* Only on PCIE */
	if (!of_device_is_compatible(hose->dn, "pciex"))
		return;

	/* And only on axon */
	s = of_get_property(hose->dn, "model", NULL);
	if (!s || strcmp(s, "Axon") != 0)
		return;

	for (i = 0; i < PCI_BRIDGE_RESOURCES; i++) {
		dev->resource[i].start = dev->resource[i].end = 0;
		dev->resource[i].flags = 0;
	}

	printk(KERN_DEBUG "PCI: Hiding resources on Axon PCIE RC %s\n",
	       pci_name(dev));
}
DECLARE_PCI_FIXUP_HEADER(PCI_ANY_ID, PCI_ANY_ID, cell_fixup_pcie_rootcomplex);

static int cell_setup_phb(struct pci_controller *phb)
{
	const char *model;
	struct device_node *np;

	int rc = rtas_setup_phb(phb);
	if (rc)
		return rc;

	phb->controller_ops = cell_pci_controller_ops;

	np = phb->dn;
	model = of_get_property(np, "model", NULL);
	if (model == NULL || !of_node_name_eq(np, "pci"))
		return 0;

	/* Setup workarounds for spider */
	if (strcmp(model, "Spider"))
		return 0;

	iowa_register_bus(phb, &spiderpci_ops, &spiderpci_iowa_init,
			  (void *)SPIDER_PCI_REG_BASE);
	return 0;
}

static const struct of_device_id cell_bus_ids[] __initconst = {
	{ .type = "soc", },
	{ .compatible = "soc", },
	{ .type = "spider", },
	{ .type = "axon", },
	{ .type = "plb5", },
	{ .type = "plb4", },
	{ .type = "opb", },
	{ .type = "ebc", },
	{},
};

static int __init cell_publish_devices(void)
{
	struct device_node *root = of_find_node_by_path("/");
	struct device_node *np;
	int node;

	/* Publish OF platform devices for southbridge IOs */
	of_platform_bus_probe(NULL, cell_bus_ids, NULL);

	/* On spider based blades, we need to manually create the OF
	 * platform devices for the PCI host bridges
	 */
	for_each_child_of_node(root, np) {
		if (!of_node_is_type(np, "pci") && !of_node_is_type(np, "pciex"))
			continue;
		of_platform_device_create(np, NULL, NULL);
	}

	of_node_put(root);

	/* There is no device for the MIC memory controller, thus we create
	 * a platform device for it to attach the EDAC driver to.
	 */
	for_each_online_node(node) {
		if (cbe_get_cpu_mic_tm_regs(cbe_node_to_cpu(node)) == NULL)
			continue;
		platform_device_register_simple("cbe-mic", node, NULL, 0);
	}

	return 0;
}
machine_subsys_initcall(cell, cell_publish_devices);

static void __init mpic_init_IRQ(void)
{
	struct device_node *dn;
	struct mpic *mpic;

	for_each_node_by_name(dn, "interrupt-controller") {
		if (!of_device_is_compatible(dn, "CBEA,platform-open-pic"))
			continue;

		/* The MPIC driver will get everything it needs from the
		 * device-tree, just pass 0 to all arguments
		 */
		mpic = mpic_alloc(dn, 0, MPIC_SECONDARY | MPIC_NO_RESET,
				  0, 0, " MPIC ");
		if (mpic == NULL)
			continue;
		mpic_init(mpic);
	}
}


static void __init cell_init_irq(void)
{
	iic_init_IRQ();
	spider_init_IRQ();
	mpic_init_IRQ();
}

static void __init cell_set_dabrx(void)
{
	mtspr(SPRN_DABRX, DABRX_KERNEL | DABRX_USER);
}

static void __init cell_setup_arch(void)
{
#ifdef CONFIG_SPU_BASE
	spu_priv1_ops = &spu_priv1_mmio_ops;
	spu_management_ops = &spu_management_of_ops;
#endif

	cbe_regs_init();

	cell_set_dabrx();

#ifdef CONFIG_CBE_RAS
	cbe_ras_init();
#endif

#ifdef CONFIG_SMP
	smp_init_cell();
#endif
	/* init to some ~sane value until calibrate_delay() runs */
	loops_per_jiffy = 50000000;

	/* Find and initialize PCI host bridges */
	init_pci_config_tokens();

	cbe_pervasive_init();

	mmio_nvram_init();
}

static int __init cell_probe(void)
{
	if (!of_machine_is_compatible("IBM,CBEA") &&
	    !of_machine_is_compatible("IBM,CPBW-1.0"))
		return 0;

	pm_power_off = rtas_power_off;

	return 1;
}

define_machine(cell) {
	.name			= "Cell",
	.probe			= cell_probe,
	.setup_arch		= cell_setup_arch,
	.show_cpuinfo		= cell_show_cpuinfo,
	.restart		= rtas_restart,
	.halt			= rtas_halt,
	.get_boot_time		= rtas_get_boot_time,
	.get_rtc_time		= rtas_get_rtc_time,
	.set_rtc_time		= rtas_set_rtc_time,
	.progress		= cell_progress,
	.init_IRQ		= cell_init_irq,
	.pci_setup_phb		= cell_setup_phb,
};

struct pci_controller_ops cell_pci_controller_ops;
```
arch/powerpc/platforms/cell/smp.c (file deleted, -162 lines)

```c
// SPDX-License-Identifier: GPL-2.0-or-later
/*
 * SMP support for BPA machines.
 *
 * Dave Engebretsen, Peter Bergner, and
 * Mike Corrigan {engebret|bergner|mikec}@us.ibm.com
 *
 * Plus various changes from other IBM teams...
 */

#undef DEBUG

#include <linux/kernel.h>
#include <linux/sched.h>
#include <linux/smp.h>
#include <linux/interrupt.h>
#include <linux/delay.h>
#include <linux/init.h>
#include <linux/spinlock.h>
#include <linux/cache.h>
#include <linux/err.h>
#include <linux/device.h>
#include <linux/cpu.h>
#include <linux/pgtable.h>

#include <asm/ptrace.h>
#include <linux/atomic.h>
#include <asm/irq.h>
#include <asm/page.h>
#include <asm/io.h>
#include <asm/smp.h>
#include <asm/paca.h>
#include <asm/machdep.h>
#include <asm/cputable.h>
#include <asm/firmware.h>
#include <asm/rtas.h>
#include <asm/cputhreads.h>
#include <asm/text-patching.h>

#include "interrupt.h"
#include <asm/udbg.h>

#ifdef DEBUG
#define DBG(fmt...) udbg_printf(fmt)
#else
#define DBG(fmt...)
#endif

/*
 * The Primary thread of each non-boot processor was started from the OF client
 * interface by prom_hold_cpus and is spinning on secondary_hold_spinloop.
 */
static cpumask_t of_spin_map;

/**
 * smp_startup_cpu() - start the given cpu
 * @lcpu: Logical CPU ID of the CPU to be started.
 *
 * At boot time, there is nothing to do for primary threads which were
 * started from Open Firmware. For anything else, call RTAS with the
 * appropriate start location.
 *
 * Returns:
 *	0	- failure
 *	1	- success
 */
static inline int smp_startup_cpu(unsigned int lcpu)
{
	int status;
	unsigned long start_here =
			__pa(ppc_function_entry(generic_secondary_smp_init));
	unsigned int pcpu;
	int start_cpu;

	if (cpumask_test_cpu(lcpu, &of_spin_map))
		/* Already started by OF and sitting in spin loop */
		return 1;

	pcpu = get_hard_smp_processor_id(lcpu);

	/*
	 * If the RTAS start-cpu token does not exist then presume the
	 * cpu is already spinning.
	 */
	start_cpu = rtas_function_token(RTAS_FN_START_CPU);
	if (start_cpu == RTAS_UNKNOWN_SERVICE)
		return 1;

	status = rtas_call(start_cpu, 3, 1, NULL, pcpu, start_here, lcpu);
	if (status != 0) {
		printk(KERN_ERR "start-cpu failed: %i\n", status);
		return 0;
	}

	return 1;
}

static void smp_cell_setup_cpu(int cpu)
{
	if (cpu != boot_cpuid)
		iic_setup_cpu();

	/*
	 * change default DABRX to allow user watchpoints
	 */
	mtspr(SPRN_DABRX, DABRX_KERNEL | DABRX_USER);
}

static int smp_cell_kick_cpu(int nr)
{
	if (nr < 0 || nr >= nr_cpu_ids)
		return -EINVAL;

	if (!smp_startup_cpu(nr))
		return -ENOENT;

	/*
	 * The processor is currently spinning, waiting for the
	 * cpu_start field to become non-zero After we set cpu_start,
	 * the processor will continue on to secondary_start
	 */
	paca_ptrs[nr]->cpu_start = 1;

	return 0;
}

static struct smp_ops_t bpa_iic_smp_ops = {
	.message_pass	= iic_message_pass,
	.probe		= iic_request_IPIs,
	.kick_cpu	= smp_cell_kick_cpu,
	.setup_cpu	= smp_cell_setup_cpu,
	.cpu_bootable	= smp_generic_cpu_bootable,
};

/* This is called very early */
void __init smp_init_cell(void)
{
	int i;

	DBG(" -> smp_init_cell()\n");

	smp_ops = &bpa_iic_smp_ops;

	/* Mark threads which are still spinning in hold loops. */
	if (cpu_has_feature(CPU_FTR_SMT)) {
		for_each_present_cpu(i) {
			if (cpu_thread_in_core(i) == 0)
				cpumask_set_cpu(i, &of_spin_map);
		}
	} else
		cpumask_copy(&of_spin_map, cpu_present_mask);

	cpumask_clear_cpu(boot_cpuid, &of_spin_map);

	/* Non-lpar has additional take/give timebase */
	if (rtas_function_token(RTAS_FN_FREEZE_TIME_BASE) != RTAS_UNKNOWN_SERVICE) {
		smp_ops->give_timebase = rtas_give_timebase;
		smp_ops->take_timebase = rtas_take_timebase;
	}

	DBG(" <- smp_init_cell()\n");
}
```
arch/powerpc/platforms/cell/spider-pci.c (file deleted, -170 lines)

```c
// SPDX-License-Identifier: GPL-2.0-or-later
/*
 * IO workarounds for PCI on Celleb/Cell platform
 *
 * (C) Copyright 2006-2007 TOSHIBA CORPORATION
 */

#undef DEBUG

#include <linux/kernel.h>
#include <linux/of_address.h>
#include <linux/slab.h>
#include <linux/io.h>

#include <asm/ppc-pci.h>
#include <asm/pci-bridge.h>
#include <asm/io-workarounds.h>

#define SPIDER_PCI_DISABLE_PREFETCH

struct spiderpci_iowa_private {
	void __iomem *regs;
};

static void spiderpci_io_flush(struct iowa_bus *bus)
{
	struct spiderpci_iowa_private *priv;

	priv = bus->private;
	in_be32(priv->regs + SPIDER_PCI_DUMMY_READ);
	iosync();
}

#define SPIDER_PCI_MMIO_READ(name, ret)				\
static ret spiderpci_##name(const PCI_IO_ADDR addr)		\
{								\
	ret val = __do_##name(addr);				\
	spiderpci_io_flush(iowa_mem_find_bus(addr));		\
	return val;						\
}

#define SPIDER_PCI_MMIO_READ_STR(name)				\
static void spiderpci_##name(const PCI_IO_ADDR addr, void *buf,	\
			     unsigned long count)		\
{								\
	__do_##name(addr, buf, count);				\
	spiderpci_io_flush(iowa_mem_find_bus(addr));		\
}

SPIDER_PCI_MMIO_READ(readb, u8)
SPIDER_PCI_MMIO_READ(readw, u16)
SPIDER_PCI_MMIO_READ(readl, u32)
SPIDER_PCI_MMIO_READ(readq, u64)
SPIDER_PCI_MMIO_READ(readw_be, u16)
SPIDER_PCI_MMIO_READ(readl_be, u32)
SPIDER_PCI_MMIO_READ(readq_be, u64)
SPIDER_PCI_MMIO_READ_STR(readsb)
SPIDER_PCI_MMIO_READ_STR(readsw)
SPIDER_PCI_MMIO_READ_STR(readsl)

static void spiderpci_memcpy_fromio(void *dest, const PCI_IO_ADDR src,
				    unsigned long n)
{
	__do_memcpy_fromio(dest, src, n);
	spiderpci_io_flush(iowa_mem_find_bus(src));
}

static int __init spiderpci_pci_setup_chip(struct pci_controller *phb,
					   void __iomem *regs)
{
	void *dummy_page_va;
	dma_addr_t dummy_page_da;

#ifdef SPIDER_PCI_DISABLE_PREFETCH
	u32 val = in_be32(regs + SPIDER_PCI_VCI_CNTL_STAT);
	pr_debug("SPIDER_IOWA:PVCI_Control_Status was 0x%08x\n", val);
	out_be32(regs + SPIDER_PCI_VCI_CNTL_STAT, val | 0x8);
#endif /* SPIDER_PCI_DISABLE_PREFETCH */

	/* setup dummy read */
	/*
	 * On CellBlade, we can't know that which XDR memory is used by
	 * kmalloc() to allocate dummy_page_va.
	 * In order to improve the performance, the XDR which is used to
	 * allocate dummy_page_va is the nearest the spider-pci.
	 * We have to select the CBE which is the nearest the spider-pci
	 * to allocate memory from the best XDR, but I don't know that
	 * how to do.
	 *
	 * Celleb does not have this problem, because it has only one XDR.
	 */
	dummy_page_va = kmalloc(PAGE_SIZE, GFP_KERNEL);
	if (!dummy_page_va) {
		pr_err("SPIDERPCI-IOWA:Alloc dummy_page_va failed.\n");
		return -1;
	}

	dummy_page_da = dma_map_single(phb->parent, dummy_page_va,
				       PAGE_SIZE, DMA_FROM_DEVICE);
	if (dma_mapping_error(phb->parent, dummy_page_da)) {
		pr_err("SPIDER-IOWA:Map dummy page filed.\n");
		kfree(dummy_page_va);
		return -1;
	}

	out_be32(regs + SPIDER_PCI_DUMMY_READ_BASE, dummy_page_da);

	return 0;
}

int __init spiderpci_iowa_init(struct iowa_bus *bus, void *data)
{
	void __iomem *regs = NULL;
	struct spiderpci_iowa_private *priv;
	struct device_node *np = bus->phb->dn;
	struct resource r;
	unsigned long offset = (unsigned long)data;

	pr_debug("SPIDERPCI-IOWA:Bus initialize for spider(%pOF)\n",
		 np);

	priv = kzalloc(sizeof(*priv), GFP_KERNEL);
	if (!priv) {
		pr_err("SPIDERPCI-IOWA:"
		       "Can't allocate struct spiderpci_iowa_private");
		return -1;
	}

	if (of_address_to_resource(np, 0, &r)) {
		pr_err("SPIDERPCI-IOWA:Can't get resource.\n");
		goto error;
	}

	regs = ioremap(r.start + offset, SPIDER_PCI_REG_SIZE);
	if (!regs) {
		pr_err("SPIDERPCI-IOWA:ioremap failed.\n");
		goto error;
	}
	priv->regs = regs;
	bus->private = priv;

	if (spiderpci_pci_setup_chip(bus->phb, regs))
		goto error;

	return 0;

error:
	kfree(priv);
	bus->private = NULL;

	if (regs)
		iounmap(regs);

	return -1;
}

struct ppc_pci_io spiderpci_ops = {
	.readb = spiderpci_readb,
	.readw = spiderpci_readw,
	.readl = spiderpci_readl,
	.readq = spiderpci_readq,
	.readw_be = spiderpci_readw_be,
	.readl_be = spiderpci_readl_be,
	.readq_be = spiderpci_readq_be,
	.readsb = spiderpci_readsb,
	.readsw = spiderpci_readsw,
	.readsl = spiderpci_readsl,
	.memcpy_fromio = spiderpci_memcpy_fromio,
};
```
arch/powerpc/platforms/cell/spider-pic.c (file deleted, -344 lines)

```c
// SPDX-License-Identifier: GPL-2.0-or-later
/*
 * External Interrupt Controller on Spider South Bridge
 *
 * (C) Copyright IBM Deutschland Entwicklung GmbH 2005
 *
 * Author: Arnd Bergmann <arndb@de.ibm.com>
 */

#include <linux/interrupt.h>
#include <linux/irq.h>
#include <linux/ioport.h>
#include <linux/of_address.h>
#include <linux/of_irq.h>
#include <linux/pgtable.h>

#include <asm/io.h>

#include "interrupt.h"

/* register layout taken from Spider spec, table 7.4-4 */
enum {
	TIR_DEN		= 0x004, /* Detection Enable Register */
	TIR_MSK		= 0x084, /* Mask Level Register */
	TIR_EDC		= 0x0c0, /* Edge Detection Clear Register */
	TIR_PNDA	= 0x100, /* Pending Register A */
	TIR_PNDB	= 0x104, /* Pending Register B */
	TIR_CS		= 0x144, /* Current Status Register */
	TIR_LCSA	= 0x150, /* Level Current Status Register A */
	TIR_LCSB	= 0x154, /* Level Current Status Register B */
	TIR_LCSC	= 0x158, /* Level Current Status Register C */
	TIR_LCSD	= 0x15c, /* Level Current Status Register D */
	TIR_CFGA	= 0x200, /* Setting Register A0 */
	TIR_CFGB	= 0x204, /* Setting Register B0 */
			/* 0x208 ... 0x3ff Setting Register An/Bn */
	TIR_PPNDA	= 0x400, /* Packet Pending Register A */
	TIR_PPNDB	= 0x404, /* Packet Pending Register B */
	TIR_PIERA	= 0x408, /* Packet Output Error Register A */
	TIR_PIERB	= 0x40c, /* Packet Output Error Register B */
	TIR_PIEN	= 0x444, /* Packet Output Enable Register */
	TIR_PIPND	= 0x454, /* Packet Output Pending Register */
	TIRDID		= 0x484, /* Spider Device ID Register */
	REISTIM		= 0x500, /* Reissue Command Timeout Time Setting */
	REISTIMEN	= 0x504, /* Reissue Command Timeout Setting */
	REISWAITEN	= 0x508, /* Reissue Wait Control*/
};

#define SPIDER_CHIP_COUNT	4
#define SPIDER_SRC_COUNT	64
#define SPIDER_IRQ_INVALID	63

struct spider_pic {
	struct irq_domain	*host;
	void __iomem		*regs;
	unsigned int		node_id;
};
static struct spider_pic spider_pics[SPIDER_CHIP_COUNT];

static struct spider_pic *spider_irq_data_to_pic(struct irq_data *d)
{
	return irq_data_get_irq_chip_data(d);
}

static void __iomem *spider_get_irq_config(struct spider_pic *pic,
					   unsigned int src)
{
	return pic->regs + TIR_CFGA + 8 * src;
}

static void spider_unmask_irq(struct irq_data *d)
{
	struct spider_pic *pic = spider_irq_data_to_pic(d);
	void __iomem *cfg = spider_get_irq_config(pic, irqd_to_hwirq(d));

	out_be32(cfg, in_be32(cfg) | 0x30000000u);
}

static void spider_mask_irq(struct irq_data *d)
{
	struct spider_pic *pic = spider_irq_data_to_pic(d);
	void __iomem *cfg = spider_get_irq_config(pic, irqd_to_hwirq(d));

	out_be32(cfg, in_be32(cfg) & ~0x30000000u);
}

static void spider_ack_irq(struct irq_data *d)
{
	struct spider_pic *pic = spider_irq_data_to_pic(d);
	unsigned int src = irqd_to_hwirq(d);

	/* Reset edge detection logic if necessary
	 */
	if (irqd_is_level_type(d))
		return;

	/* Only interrupts 47 to 50 can be set to edge */
	if (src < 47 || src > 50)
		return;

	/* Perform the clear of the edge logic */
	out_be32(pic->regs + TIR_EDC, 0x100 | (src & 0xf));
}

static int spider_set_irq_type(struct irq_data *d, unsigned int type)
{
	unsigned int sense = type & IRQ_TYPE_SENSE_MASK;
	struct spider_pic *pic = spider_irq_data_to_pic(d);
	unsigned int hw = irqd_to_hwirq(d);
	void __iomem *cfg = spider_get_irq_config(pic, hw);
	u32 old_mask;
	u32 ic;

	/* Note that only level high is supported for most interrupts */
	if (sense != IRQ_TYPE_NONE && sense != IRQ_TYPE_LEVEL_HIGH &&
	    (hw < 47 || hw > 50))
		return -EINVAL;

	/* Decode sense type */
	switch(sense) {
	case IRQ_TYPE_EDGE_RISING:
		ic = 0x3;
		break;
	case IRQ_TYPE_EDGE_FALLING:
		ic = 0x2;
		break;
	case IRQ_TYPE_LEVEL_LOW:
		ic = 0x0;
		break;
	case IRQ_TYPE_LEVEL_HIGH:
	case IRQ_TYPE_NONE:
		ic = 0x1;
		break;
	default:
		return -EINVAL;
	}

	/* Configure the source. One gross hack that was there before and
	 * that I've kept around is the priority to the BE which I set to
	 * be the same as the interrupt source number. I don't know whether
	 * that's supposed to make any kind of sense however, we'll have to
	 * decide that, but for now, I'm not changing the behaviour.
	 */
	old_mask = in_be32(cfg) & 0x30000000u;
	out_be32(cfg, old_mask | (ic << 24) | (0x7 << 16) |
		 (pic->node_id << 4) | 0xe);
	out_be32(cfg + 4, (0x2 << 16) | (hw & 0xff));

	return 0;
}

static struct irq_chip spider_pic = {
	.name = "SPIDER",
	.irq_unmask = spider_unmask_irq,
	.irq_mask = spider_mask_irq,
	.irq_ack = spider_ack_irq,
	.irq_set_type = spider_set_irq_type,
};

static int spider_host_map(struct irq_domain *h, unsigned int virq,
			irq_hw_number_t hw)
{
	irq_set_chip_data(virq, h->host_data);
	irq_set_chip_and_handler(virq, &spider_pic, handle_level_irq);

	/* Set default irq type */
	irq_set_irq_type(virq, IRQ_TYPE_NONE);

	return 0;
}

static int spider_host_xlate(struct irq_domain *h, struct device_node *ct,
			   const u32 *intspec, unsigned int intsize,
			   irq_hw_number_t *out_hwirq, unsigned int *out_flags)

{
	/* Spider interrupts have 2 cells, first is the interrupt source,
	 * second, well, I don't know for sure yet ... We mask the top bits
	 * because old device-trees encode a node number in there
	 */
	*out_hwirq = intspec[0] & 0x3f;
	*out_flags = IRQ_TYPE_LEVEL_HIGH;
	return 0;
}

static const struct irq_domain_ops spider_host_ops = {
	.map = spider_host_map,
	.xlate = spider_host_xlate,
};

static void spider_irq_cascade(struct irq_desc *desc)
{
	struct irq_chip *chip = irq_desc_get_chip(desc);
	struct spider_pic *pic = irq_desc_get_handler_data(desc);
	unsigned int cs;

	cs = in_be32(pic->regs + TIR_CS) >> 24;
	if (cs != SPIDER_IRQ_INVALID)
		generic_handle_domain_irq(pic->host, cs);

	chip->irq_eoi(&desc->irq_data);
}

/* For hooking up the cascade we have a problem. Our device-tree is
 * crap and we don't know on which BE iic interrupt we are hooked on at
 * least not the "standard" way. We can reconstitute it based on two
 * informations though: which BE node we are connected to and whether
 * we are connected to IOIF0 or IOIF1. Right now, we really only care
 * about the IBM cell blade and we know that its firmware gives us an
 * interrupt-map property which is pretty strange.
 */
static unsigned int __init spider_find_cascade_and_node(struct spider_pic *pic)
{
	unsigned int virq;
	const u32 *imap, *tmp;
	int imaplen, intsize, unit;
	struct device_node *iic;
	struct device_node *of_node;

	of_node = irq_domain_get_of_node(pic->host);

	/* First, we check whether we have a real "interrupts" in the device
	 * tree in case the device-tree is ever fixed
	 */
	virq = irq_of_parse_and_map(of_node, 0);
	if (virq)
		return virq;

	/* Now do the horrible hacks */
	tmp = of_get_property(of_node, "#interrupt-cells", NULL);
	if (tmp == NULL)
		return 0;
	intsize = *tmp;
	imap = of_get_property(of_node, "interrupt-map", &imaplen);
	if (imap == NULL || imaplen < (intsize + 1))
		return 0;
	iic = of_find_node_by_phandle(imap[intsize]);
	if (iic == NULL)
		return 0;
	imap += intsize + 1;
	tmp = of_get_property(iic, "#interrupt-cells", NULL);
	if (tmp == NULL) {
		of_node_put(iic);
		return 0;
	}
	intsize = *tmp;
	/* Assume unit is last entry of interrupt specifier */
	unit = imap[intsize - 1];
	/* Ok, we have a unit, now let's try to get the node */
	tmp = of_get_property(iic, "ibm,interrupt-server-ranges", NULL);
	if (tmp == NULL) {
		of_node_put(iic);
		return 0;
	}
	/* ugly as hell but works for now */
	pic->node_id = (*tmp) >> 1;
	of_node_put(iic);

	/* Ok, now let's get cracking. You may ask me why I just didn't match
	 * the iic host from the iic OF node, but that way I'm still compatible
	 * with really really old old firmwares for which we don't have a node
	 */
	/* Manufacture an IIC interrupt number of class 2 */
	virq = irq_create_mapping(NULL,
				  (pic->node_id << IIC_IRQ_NODE_SHIFT) |
				  (2 << IIC_IRQ_CLASS_SHIFT) |
				  unit);
	if (!virq)
		printk(KERN_ERR "spider_pic: failed to map cascade !");
	return virq;
}


static void __init spider_init_one(struct device_node *of_node, int chip,
				   unsigned long addr)
{
	struct spider_pic *pic = &spider_pics[chip];
	int i, virq;

	/* Map registers */
	pic->regs = ioremap(addr, 0x1000);
	if (pic->regs == NULL)
		panic("spider_pic: can't map registers !");

	/* Allocate a host */
	pic->host = irq_domain_add_linear(of_node, SPIDER_SRC_COUNT,
					  &spider_host_ops, pic);
	if (pic->host == NULL)
		panic("spider_pic: can't allocate irq host !");

	/* Go through all sources and disable them */
	for (i = 0; i < SPIDER_SRC_COUNT; i++) {
		void __iomem *cfg = pic->regs + TIR_CFGA + 8 * i;
		out_be32(cfg, in_be32(cfg) & ~0x30000000u);
	}

	/* do not mask any interrupts because of level */
	out_be32(pic->regs + TIR_MSK, 0x0);

	/* enable interrupt packets to be output */
	out_be32(pic->regs + TIR_PIEN, in_be32(pic->regs + TIR_PIEN) | 0x1);

	/* Hook up the cascade interrupt to the iic and nodeid */
	virq = spider_find_cascade_and_node(pic);
	if (!virq)
		return;
	irq_set_handler_data(virq, pic);
	irq_set_chained_handler(virq, spider_irq_cascade);

	printk(KERN_INFO "spider_pic: node %d, addr: 0x%lx %pOF\n",
	       pic->node_id, addr, of_node);

	/* Enable the interrupt detection enable bit. Do this last! */
	out_be32(pic->regs + TIR_DEN, in_be32(pic->regs + TIR_DEN) | 0x1);
}

void __init spider_init_IRQ(void)
{
	struct resource r;
	struct device_node *dn;
	int chip = 0;

	/* XXX node numbers are totally bogus. We _hope_ we get the device
	 * nodes in the right order here but that's definitely not guaranteed,
	 * we need to get the node from the device tree instead.
	 * There is currently no proper property for it (but our whole
	 * device-tree is bogus anyway) so all we can do is pray or maybe test
	 * the address and deduce the node-id
	 */
	for_each_node_by_name(dn, "interrupt-controller") {
		if (of_device_is_compatible(dn, "CBEA,platform-spider-pic")) {
			if (of_address_to_resource(dn, 0, &r)) {
				printk(KERN_WARNING "spider-pic: Failed\n");
				continue;
			}
		} else if (of_device_is_compatible(dn, "sti,platform-spider-pic")
			   && (chip < 2)) {
			static long hard_coded_pics[] =
				{ 0x24000008000ul, 0x34000008000ul};
			r.start = hard_coded_pics[chip];
		} else
			continue;
		spider_init_one(dn, chip++, r.start);
	}
}
```
arch/powerpc/platforms/cell/spu_base.c (-2 lines)

```diff
@@ -23,7 +23,6 @@
 #include <asm/spu.h>
 #include <asm/spu_priv1.h>
 #include <asm/spu_csa.h>
-#include <asm/xmon.h>
 #include <asm/kexec.h>
 
 const struct spu_management_ops *spu_management_ops;
@@ -771,7 +772,6 @@
 	fb_append_extra_logo(&logo_spe_clut224, ret);
 
 	mutex_lock(&spu_full_list_mutex);
-	xmon_register_spus(&spu_full_list);
 	crash_register_spus(&spu_full_list);
 	mutex_unlock(&spu_full_list_mutex);
 	spu_add_dev_attr(&dev_attr_stat);
```
-530
arch/powerpc/platforms/cell/spu_manage.c
deleted file:
```c
// SPDX-License-Identifier: GPL-2.0-only
/*
 * spu management operations for of based platforms
 *
 * (C) Copyright IBM Deutschland Entwicklung GmbH 2005
 * Copyright 2006 Sony Corp.
 * (C) Copyright 2007 TOSHIBA CORPORATION
 */

#include <linux/interrupt.h>
#include <linux/list.h>
#include <linux/export.h>
#include <linux/ptrace.h>
#include <linux/wait.h>
#include <linux/mm.h>
#include <linux/io.h>
#include <linux/mutex.h>
#include <linux/device.h>
#include <linux/of_address.h>
#include <linux/of_irq.h>

#include <asm/spu.h>
#include <asm/spu_priv1.h>
#include <asm/firmware.h>

#include "spufs/spufs.h"
#include "interrupt.h"
#include "spu_priv1_mmio.h"

struct device_node *spu_devnode(struct spu *spu)
{
	return spu->devnode;
}

EXPORT_SYMBOL_GPL(spu_devnode);

static u64 __init find_spu_unit_number(struct device_node *spe)
{
	const unsigned int *prop;
	int proplen;

	/* new device trees should provide the physical-id attribute */
	prop = of_get_property(spe, "physical-id", &proplen);
	if (proplen == 4)
		return (u64)*prop;

	/* celleb device tree provides the unit-id */
	prop = of_get_property(spe, "unit-id", &proplen);
	if (proplen == 4)
		return (u64)*prop;

	/* legacy device trees provide the id in the reg attribute */
	prop = of_get_property(spe, "reg", &proplen);
	if (proplen == 4)
		return (u64)*prop;

	return 0;
}

static void spu_unmap(struct spu *spu)
{
	if (!firmware_has_feature(FW_FEATURE_LPAR))
		iounmap(spu->priv1);
	iounmap(spu->priv2);
	iounmap(spu->problem);
	iounmap((__force u8 __iomem *)spu->local_store);
}

static int __init spu_map_interrupts_old(struct spu *spu,
	struct device_node *np)
{
	unsigned int isrc;
	const u32 *tmp;
	int nid;

	/* Get the interrupt source unit from the device-tree */
	tmp = of_get_property(np, "isrc", NULL);
	if (!tmp)
		return -ENODEV;
	isrc = tmp[0];

	tmp = of_get_property(np->parent->parent, "node-id", NULL);
	if (!tmp) {
		printk(KERN_WARNING "%s: can't find node-id\n", __func__);
		nid = spu->node;
	} else
		nid = tmp[0];

	/* Add the node number */
	isrc |= nid << IIC_IRQ_NODE_SHIFT;

	/* Now map interrupts of all 3 classes */
	spu->irqs[0] = irq_create_mapping(NULL, IIC_IRQ_CLASS_0 | isrc);
	spu->irqs[1] = irq_create_mapping(NULL, IIC_IRQ_CLASS_1 | isrc);
	spu->irqs[2] = irq_create_mapping(NULL, IIC_IRQ_CLASS_2 | isrc);

	/* Right now, we only fail if class 2 failed */
	if (!spu->irqs[2])
		return -EINVAL;

	return 0;
}

static void __iomem * __init spu_map_prop_old(struct spu *spu,
					      struct device_node *n,
					      const char *name)
{
	const struct address_prop {
		unsigned long address;
		unsigned int len;
	} __attribute__((packed)) *prop;
	int proplen;

	prop = of_get_property(n, name, &proplen);
	if (prop == NULL || proplen != sizeof (struct address_prop))
		return NULL;

	return ioremap(prop->address, prop->len);
}

static int __init spu_map_device_old(struct spu *spu)
{
	struct device_node *node = spu->devnode;
	const char *prop;
	int ret;

	ret = -ENODEV;
	spu->name = of_get_property(node, "name", NULL);
	if (!spu->name)
		goto out;

	prop = of_get_property(node, "local-store", NULL);
	if (!prop)
		goto out;
	spu->local_store_phys = *(unsigned long *)prop;

	/* we use local store as ram, not io memory */
	spu->local_store = (void __force *)
		spu_map_prop_old(spu, node, "local-store");
	if (!spu->local_store)
		goto out;

	prop = of_get_property(node, "problem", NULL);
	if (!prop)
		goto out_unmap;
	spu->problem_phys = *(unsigned long *)prop;

	spu->problem = spu_map_prop_old(spu, node, "problem");
	if (!spu->problem)
		goto out_unmap;

	spu->priv2 = spu_map_prop_old(spu, node, "priv2");
	if (!spu->priv2)
		goto out_unmap;

	if (!firmware_has_feature(FW_FEATURE_LPAR)) {
		spu->priv1 = spu_map_prop_old(spu, node, "priv1");
		if (!spu->priv1)
			goto out_unmap;
	}

	ret = 0;
	goto out;

out_unmap:
	spu_unmap(spu);
out:
	return ret;
}

static int __init spu_map_interrupts(struct spu *spu, struct device_node *np)
{
	int i;

	for (i=0; i < 3; i++) {
		spu->irqs[i] = irq_of_parse_and_map(np, i);
		if (!spu->irqs[i])
			goto err;
	}
	return 0;

err:
	pr_debug("failed to map irq %x for spu %s\n", i, spu->name);
	for (; i >= 0; i--) {
		if (spu->irqs[i])
			irq_dispose_mapping(spu->irqs[i]);
	}
	return -EINVAL;
}

static int __init spu_map_resource(struct spu *spu, int nr,
				   void __iomem** virt, unsigned long *phys)
{
	struct device_node *np = spu->devnode;
	struct resource resource = { };
	unsigned long len;
	int ret;

	ret = of_address_to_resource(np, nr, &resource);
	if (ret)
		return ret;
	if (phys)
		*phys = resource.start;
	len = resource_size(&resource);
	*virt = ioremap(resource.start, len);
	if (!*virt)
		return -EINVAL;
	return 0;
}

static int __init spu_map_device(struct spu *spu)
{
	struct device_node *np = spu->devnode;
	int ret = -ENODEV;

	spu->name = of_get_property(np, "name", NULL);
	if (!spu->name)
		goto out;

	ret = spu_map_resource(spu, 0, (void __iomem**)&spu->local_store,
			       &spu->local_store_phys);
	if (ret) {
		pr_debug("spu_new: failed to map %pOF resource 0\n", np);
		goto out;
	}
	ret = spu_map_resource(spu, 1, (void __iomem**)&spu->problem,
			       &spu->problem_phys);
	if (ret) {
		pr_debug("spu_new: failed to map %pOF resource 1\n", np);
		goto out_unmap;
	}
	ret = spu_map_resource(spu, 2, (void __iomem**)&spu->priv2, NULL);
	if (ret) {
		pr_debug("spu_new: failed to map %pOF resource 2\n", np);
		goto out_unmap;
	}
	if (!firmware_has_feature(FW_FEATURE_LPAR))
		ret = spu_map_resource(spu, 3,
				       (void __iomem**)&spu->priv1, NULL);
	if (ret) {
		pr_debug("spu_new: failed to map %pOF resource 3\n", np);
		goto out_unmap;
	}
	pr_debug("spu_new: %pOF maps:\n", np);
	pr_debug("  local store   : 0x%016lx -> 0x%p\n",
		 spu->local_store_phys, spu->local_store);
	pr_debug("  problem state : 0x%016lx -> 0x%p\n",
		 spu->problem_phys, spu->problem);
	pr_debug("  priv2         : 0x%p\n", spu->priv2);
	pr_debug("  priv1         : 0x%p\n", spu->priv1);

	return 0;

out_unmap:
	spu_unmap(spu);
out:
	pr_debug("failed to map spe %s: %d\n", spu->name, ret);
	return ret;
}

static int __init of_enumerate_spus(int (*fn)(void *data))
{
	int ret;
	struct device_node *node;
	unsigned int n = 0;

	ret = -ENODEV;
	for_each_node_by_type(node, "spe") {
		ret = fn(node);
		if (ret) {
			printk(KERN_WARNING "%s: Error initializing %pOFn\n",
			       __func__, node);
			of_node_put(node);
			break;
		}
		n++;
	}
	return ret ? ret : n;
}

static int __init of_create_spu(struct spu *spu, void *data)
{
	int ret;
	struct device_node *spe = (struct device_node *)data;
	static int legacy_map = 0, legacy_irq = 0;

	spu->devnode = of_node_get(spe);
	spu->spe_id = find_spu_unit_number(spe);

	spu->node = of_node_to_nid(spe);
	if (spu->node >= MAX_NUMNODES) {
		printk(KERN_WARNING "SPE %pOF on node %d ignored,"
		       " node number too big\n", spe, spu->node);
		printk(KERN_WARNING "Check if CONFIG_NUMA is enabled.\n");
		ret = -ENODEV;
		goto out;
	}

	ret = spu_map_device(spu);
	if (ret) {
		if (!legacy_map) {
			legacy_map = 1;
			printk(KERN_WARNING "%s: Legacy device tree found, "
			       "trying to map old style\n", __func__);
		}
		ret = spu_map_device_old(spu);
		if (ret) {
			printk(KERN_ERR "Unable to map %s\n",
			       spu->name);
			goto out;
		}
	}

	ret = spu_map_interrupts(spu, spe);
	if (ret) {
		if (!legacy_irq) {
			legacy_irq = 1;
			printk(KERN_WARNING "%s: Legacy device tree found, "
			       "trying old style irq\n", __func__);
		}
		ret = spu_map_interrupts_old(spu, spe);
		if (ret) {
			printk(KERN_ERR "%s: could not map interrupts\n",
			       spu->name);
			goto out_unmap;
		}
	}

	pr_debug("Using SPE %s %p %p %p %p %d\n", spu->name,
		 spu->local_store, spu->problem, spu->priv1,
		 spu->priv2, spu->number);
	goto out;

out_unmap:
	spu_unmap(spu);
out:
	return ret;
}

static int of_destroy_spu(struct spu *spu)
{
	spu_unmap(spu);
	of_node_put(spu->devnode);
	return 0;
}

static void enable_spu_by_master_run(struct spu_context *ctx)
{
	ctx->ops->master_start(ctx);
}

static void disable_spu_by_master_run(struct spu_context *ctx)
{
	ctx->ops->master_stop(ctx);
}

/* Hardcoded affinity idxs for qs20 */
#define QS20_SPES_PER_BE 8
static int qs20_reg_idxs[QS20_SPES_PER_BE] =   { 0, 2, 4, 6, 7, 5, 3, 1 };
static int qs20_reg_memory[QS20_SPES_PER_BE] = { 1, 1, 0, 0, 0, 0, 0, 0 };

static struct spu *__init spu_lookup_reg(int node, u32 reg)
{
	struct spu *spu;
	const u32 *spu_reg;

	list_for_each_entry(spu, &cbe_spu_info[node].spus, cbe_list) {
		spu_reg = of_get_property(spu_devnode(spu), "reg", NULL);
		if (*spu_reg == reg)
			return spu;
	}
	return NULL;
}

static void __init init_affinity_qs20_harcoded(void)
{
	int node, i;
	struct spu *last_spu, *spu;
	u32 reg;

	for (node = 0; node < MAX_NUMNODES; node++) {
		last_spu = NULL;
		for (i = 0; i < QS20_SPES_PER_BE; i++) {
			reg = qs20_reg_idxs[i];
			spu = spu_lookup_reg(node, reg);
			if (!spu)
				continue;
			spu->has_mem_affinity = qs20_reg_memory[reg];
			if (last_spu)
				list_add_tail(&spu->aff_list,
					      &last_spu->aff_list);
			last_spu = spu;
		}
	}
}

static int __init of_has_vicinity(void)
{
	struct device_node *dn;

	for_each_node_by_type(dn, "spe") {
		if (of_property_present(dn, "vicinity")) {
			of_node_put(dn);
			return 1;
		}
	}
	return 0;
}

static struct spu *__init devnode_spu(int cbe, struct device_node *dn)
{
	struct spu *spu;

	list_for_each_entry(spu, &cbe_spu_info[cbe].spus, cbe_list)
		if (spu_devnode(spu) == dn)
			return spu;
	return NULL;
}

static struct spu * __init
neighbour_spu(int cbe, struct device_node *target, struct device_node *avoid)
{
	struct spu *spu;
	struct device_node *spu_dn;
	const phandle *vic_handles;
	int lenp, i;

	list_for_each_entry(spu, &cbe_spu_info[cbe].spus, cbe_list) {
		spu_dn = spu_devnode(spu);
		if (spu_dn == avoid)
			continue;
		vic_handles = of_get_property(spu_dn, "vicinity", &lenp);
		for (i=0; i < (lenp / sizeof(phandle)); i++) {
			if (vic_handles[i] == target->phandle)
				return spu;
		}
	}
	return NULL;
}

static void __init init_affinity_node(int cbe)
{
	struct spu *spu, *last_spu;
	struct device_node *vic_dn, *last_spu_dn;
	phandle avoid_ph;
	const phandle *vic_handles;
	int lenp, i, added;

	last_spu = list_first_entry(&cbe_spu_info[cbe].spus, struct spu,
				    cbe_list);
	avoid_ph = 0;
	for (added = 1; added < cbe_spu_info[cbe].n_spus; added++) {
		last_spu_dn = spu_devnode(last_spu);
		vic_handles = of_get_property(last_spu_dn, "vicinity", &lenp);

		/*
		 * Walk through each phandle in vicinity property of the spu
		 * (typically two vicinity phandles per spe node)
		 */
		for (i = 0; i < (lenp / sizeof(phandle)); i++) {
			if (vic_handles[i] == avoid_ph)
				continue;

			vic_dn = of_find_node_by_phandle(vic_handles[i]);
			if (!vic_dn)
				continue;

			if (of_node_name_eq(vic_dn, "spe") ) {
				spu = devnode_spu(cbe, vic_dn);
				avoid_ph = last_spu_dn->phandle;
			} else {
				/*
				 * "mic-tm" and "bif0" nodes do not have
				 * vicinity property. So we need to find the
				 * spe which has vic_dn as neighbour, but
				 * skipping the one we came from (last_spu_dn)
				 */
				spu = neighbour_spu(cbe, vic_dn, last_spu_dn);
				if (!spu)
					continue;
				if (of_node_name_eq(vic_dn, "mic-tm")) {
					last_spu->has_mem_affinity = 1;
					spu->has_mem_affinity = 1;
				}
				avoid_ph = vic_dn->phandle;
			}

			of_node_put(vic_dn);

			list_add_tail(&spu->aff_list, &last_spu->aff_list);
			last_spu = spu;
			break;
		}
	}
}

static void __init init_affinity_fw(void)
{
	int cbe;

	for (cbe = 0; cbe < MAX_NUMNODES; cbe++)
		init_affinity_node(cbe);
}

static int __init init_affinity(void)
{
	if (of_has_vicinity()) {
		init_affinity_fw();
	} else {
		if (of_machine_is_compatible("IBM,CPBW-1.0"))
			init_affinity_qs20_harcoded();
		else
			printk("No affinity configuration found\n");
	}

	return 0;
}

const struct spu_management_ops spu_management_of_ops = {
	.enumerate_spus = of_enumerate_spus,
	.create_spu = of_create_spu,
	.destroy_spu = of_destroy_spu,
	.enable_spu = enable_spu_by_master_run,
	.disable_spu = disable_spu_by_master_run,
	.init_affinity = init_affinity,
};
```
-167
arch/powerpc/platforms/cell/spu_priv1_mmio.c
deleted file:
```c
// SPDX-License-Identifier: GPL-2.0-only
/*
 * spu hypervisor abstraction for direct hardware access.
 *
 * (C) Copyright IBM Deutschland Entwicklung GmbH 2005
 * Copyright 2006 Sony Corp.
 */

#include <linux/interrupt.h>
#include <linux/list.h>
#include <linux/ptrace.h>
#include <linux/wait.h>
#include <linux/mm.h>
#include <linux/io.h>
#include <linux/mutex.h>
#include <linux/device.h>
#include <linux/sched.h>

#include <asm/spu.h>
#include <asm/spu_priv1.h>
#include <asm/firmware.h>

#include "interrupt.h"
#include "spu_priv1_mmio.h"

static void int_mask_and(struct spu *spu, int class, u64 mask)
{
	u64 old_mask;

	old_mask = in_be64(&spu->priv1->int_mask_RW[class]);
	out_be64(&spu->priv1->int_mask_RW[class], old_mask & mask);
}

static void int_mask_or(struct spu *spu, int class, u64 mask)
{
	u64 old_mask;

	old_mask = in_be64(&spu->priv1->int_mask_RW[class]);
	out_be64(&spu->priv1->int_mask_RW[class], old_mask | mask);
}

static void int_mask_set(struct spu *spu, int class, u64 mask)
{
	out_be64(&spu->priv1->int_mask_RW[class], mask);
}

static u64 int_mask_get(struct spu *spu, int class)
{
	return in_be64(&spu->priv1->int_mask_RW[class]);
}

static void int_stat_clear(struct spu *spu, int class, u64 stat)
{
	out_be64(&spu->priv1->int_stat_RW[class], stat);
}

static u64 int_stat_get(struct spu *spu, int class)
{
	return in_be64(&spu->priv1->int_stat_RW[class]);
}

static void cpu_affinity_set(struct spu *spu, int cpu)
{
	u64 target;
	u64 route;

	if (nr_cpus_node(spu->node)) {
		const struct cpumask *spumask = cpumask_of_node(spu->node),
			*cpumask = cpumask_of_node(cpu_to_node(cpu));

		if (!cpumask_intersects(spumask, cpumask))
			return;
	}

	target = iic_get_target_id(cpu);
	route = target << 48 | target << 32 | target << 16;
	out_be64(&spu->priv1->int_route_RW, route);
}

static u64 mfc_dar_get(struct spu *spu)
{
	return in_be64(&spu->priv1->mfc_dar_RW);
}

static u64 mfc_dsisr_get(struct spu *spu)
{
	return in_be64(&spu->priv1->mfc_dsisr_RW);
}

static void mfc_dsisr_set(struct spu *spu, u64 dsisr)
{
	out_be64(&spu->priv1->mfc_dsisr_RW, dsisr);
}

static void mfc_sdr_setup(struct spu *spu)
{
	out_be64(&spu->priv1->mfc_sdr_RW, mfspr(SPRN_SDR1));
}

static void mfc_sr1_set(struct spu *spu, u64 sr1)
{
	out_be64(&spu->priv1->mfc_sr1_RW, sr1);
}

static u64 mfc_sr1_get(struct spu *spu)
{
	return in_be64(&spu->priv1->mfc_sr1_RW);
}

static void mfc_tclass_id_set(struct spu *spu, u64 tclass_id)
{
	out_be64(&spu->priv1->mfc_tclass_id_RW, tclass_id);
}

static u64 mfc_tclass_id_get(struct spu *spu)
{
	return in_be64(&spu->priv1->mfc_tclass_id_RW);
}

static void tlb_invalidate(struct spu *spu)
{
	out_be64(&spu->priv1->tlb_invalidate_entry_W, 0ul);
}

static void resource_allocation_groupID_set(struct spu *spu, u64 id)
{
	out_be64(&spu->priv1->resource_allocation_groupID_RW, id);
}

static u64 resource_allocation_groupID_get(struct spu *spu)
{
	return in_be64(&spu->priv1->resource_allocation_groupID_RW);
}

static void resource_allocation_enable_set(struct spu *spu, u64 enable)
{
	out_be64(&spu->priv1->resource_allocation_enable_RW, enable);
}

static u64 resource_allocation_enable_get(struct spu *spu)
{
	return in_be64(&spu->priv1->resource_allocation_enable_RW);
}

const struct spu_priv1_ops spu_priv1_mmio_ops =
{
	.int_mask_and = int_mask_and,
	.int_mask_or = int_mask_or,
	.int_mask_set = int_mask_set,
	.int_mask_get = int_mask_get,
	.int_stat_clear = int_stat_clear,
	.int_stat_get = int_stat_get,
	.cpu_affinity_set = cpu_affinity_set,
	.mfc_dar_get = mfc_dar_get,
	.mfc_dsisr_get = mfc_dsisr_get,
	.mfc_dsisr_set = mfc_dsisr_set,
	.mfc_sdr_setup = mfc_sdr_setup,
	.mfc_sr1_set = mfc_sr1_set,
	.mfc_sr1_get = mfc_sr1_get,
	.mfc_tclass_id_set = mfc_tclass_id_set,
	.mfc_tclass_id_get = mfc_tclass_id_get,
	.tlb_invalidate = tlb_invalidate,
	.resource_allocation_groupID_set = resource_allocation_groupID_set,
	.resource_allocation_groupID_get = resource_allocation_groupID_get,
	.resource_allocation_enable_set = resource_allocation_enable_set,
	.resource_allocation_enable_get = resource_allocation_enable_get,
};
```
-14
arch/powerpc/platforms/cell/spu_priv1_mmio.h
deleted file:
```c
/* SPDX-License-Identifier: GPL-2.0-only */
/*
 * spu hypervisor abstraction for direct hardware access.
 *
 * Copyright (C) 2006 Sony Computer Entertainment Inc.
 * Copyright 2006 Sony Corp.
 */

#ifndef SPU_PRIV1_MMIO_H
#define SPU_PRIV1_MMIO_H

struct device_node *spu_devnode(struct spu *spu);

#endif /* SPU_PRIV1_MMIO_H */
```
+2 -1
arch/powerpc/platforms/microwatt/Kconfig
```diff
 # SPDX-License-Identifier: GPL-2.0
 config PPC_MICROWATT
-	depends on PPC_BOOK3S_64 && !SMP
+	depends on PPC_BOOK3S_64
 	bool "Microwatt SoC platform"
 	select PPC_XICS
 	select PPC_ICS_NATIVE
 	select PPC_ICP_NATIVE
 	select PPC_UDBG_16550
+	select COMMON_CLK
 	help
 	  This option enables support for FPGA-based Microwatt implementations.
 
```
+1
arch/powerpc/platforms/microwatt/Makefile
```diff
 obj-y += setup.o rng.o
+obj-$(CONFIG_SMP) += smp.o
```
+1
arch/powerpc/platforms/microwatt/microwatt.h
```diff
 #define _MICROWATT_H
 
 void microwatt_rng_init(void);
+void microwatt_init_smp(void);
 
 #endif /* _MICROWATT_H */
```
+18
arch/powerpc/platforms/microwatt/setup.c
```diff
 }
 machine_arch_initcall(microwatt, microwatt_populate);
 
+static int __init microwatt_probe(void)
+{
+	/* Main reason for having this is to start the other CPU(s) */
+	if (IS_ENABLED(CONFIG_SMP))
+		microwatt_init_smp();
+	return 1;
+}
+
 static void __init microwatt_setup_arch(void)
 {
 	microwatt_rng_init();
 }
 
+static void microwatt_idle(void)
+{
+	if (!prep_irq_for_idle_irqsoff())
+		return;
+
+	__asm__ __volatile__ ("wait");
+}
+
 define_machine(microwatt) {
 	.name			= "microwatt",
 	.compatible		= "microwatt-soc",
+	.probe			= microwatt_probe,
 	.init_IRQ		= microwatt_init_IRQ,
 	.setup_arch		= microwatt_setup_arch,
 	.progress		= udbg_progress,
+	.power_save		= microwatt_idle,
 };
```
+80
arch/powerpc/platforms/microwatt/smp.c
new file:
```c
// SPDX-License-Identifier: GPL-2.0-or-later

/*
 * SMP support functions for Microwatt
 * Copyright 2025 Paul Mackerras <paulus@ozlabs.org>
 */

#include <linux/kernel.h>
#include <linux/smp.h>
#include <linux/io.h>
#include <asm/early_ioremap.h>
#include <asm/ppc-opcode.h>
#include <asm/reg.h>
#include <asm/smp.h>
#include <asm/xics.h>

#include "microwatt.h"

static void __init microwatt_smp_probe(void)
{
	xics_smp_probe();
}

static void microwatt_smp_setup_cpu(int cpu)
{
	if (cpu != 0)
		xics_setup_cpu();
}

static struct smp_ops_t microwatt_smp_ops = {
	.probe		= microwatt_smp_probe,
	.message_pass	= NULL,		/* Use smp_muxed_ipi_message_pass */
	.kick_cpu	= smp_generic_kick_cpu,
	.setup_cpu	= microwatt_smp_setup_cpu,
};

/* XXX get from device tree */
#define SYSCON_BASE	0xc0000000
#define SYSCON_LENGTH	0x100

#define SYSCON_CPU_CTRL	0x58

void __init microwatt_init_smp(void)
{
	volatile unsigned char __iomem *syscon;
	int ncpus;
	int timeout;

	syscon = early_ioremap(SYSCON_BASE, SYSCON_LENGTH);
	if (syscon == NULL) {
		pr_err("Failed to map SYSCON\n");
		return;
	}
	ncpus = (readl(syscon + SYSCON_CPU_CTRL) >> 8) & 0xff;
	if (ncpus < 2)
		goto out;

	smp_ops = &microwatt_smp_ops;

	/*
	 * Write two instructions at location 0:
	 * mfspr r3, PIR
	 * b __secondary_hold
	 */
	*(unsigned int *)KERNELBASE = PPC_RAW_MFSPR(3, SPRN_PIR);
	*(unsigned int *)(KERNELBASE+4) = PPC_RAW_BRANCH(&__secondary_hold - (char *)(KERNELBASE+4));

	/* enable the other CPUs, they start at location 0 */
	writel((1ul << ncpus) - 1, syscon + SYSCON_CPU_CTRL);

	timeout = 10000;
	while (!__secondary_hold_acknowledge) {
		if (--timeout == 0)
			break;
		barrier();
	}

out:
	early_iounmap((void *)syscon, SYSCON_LENGTH);
}
```
+1
arch/powerpc/platforms/powernv/Kconfig
```diff
 	select MMU_NOTIFIER
 	select FORCE_SMP
 	select ARCH_SUPPORTS_PER_VMA_LOCK
+	select PPC_RADIX_BROADCAST_TLBIE
 	default y
 
 config OPAL_PRD
```
+10
arch/powerpc/platforms/pseries/Kconfig
```diff
 	select FORCE_SMP
 	select SWIOTLB
 	select ARCH_SUPPORTS_PER_VMA_LOCK
+	select PPC_RADIX_BROADCAST_TLBIE
 	default y
 
 config PARAVIRT
···
 	  makes sense for a system running in an LPAR where the unused pages
 	  will be reused for other LPARs. The interface allows firmware to
 	  balance memory across many LPARs.
+
+config HTMDUMP
+	tristate "PowerVM data dumper"
+	depends on PPC_PSERIES && DEBUG_FS
+	default m
+	help
+	  Select this option, if you want to enable the kernel debugfs
+	  interface to dump the Hardware Trace Macro (HTM) function data
+	  in the LPAR.
 
 config HV_PERF_CTRS
 	bool "Hypervisor supplied PMU events (24x7 & GPCI)"
```
+1
arch/powerpc/platforms/pseries/Makefile
```diff
 obj-$(CONFIG_HVCS)		+= hvcserver.o
 obj-$(CONFIG_HCALL_STATS)	+= hvCall_inst.o
 obj-$(CONFIG_CMM)		+= cmm.o
+obj-$(CONFIG_HTMDUMP)		+= htmdump.o
 obj-$(CONFIG_IO_EVENT_IRQ)	+= io_event_irq.o
 obj-$(CONFIG_LPARCFG)		+= lparcfg.o
 obj-$(CONFIG_IBMVIO)		+= vio.o
```
+121
arch/powerpc/platforms/pseries/htmdump.c
new file:
```c
// SPDX-License-Identifier: GPL-2.0-or-later
/*
 * Copyright (C) IBM Corporation, 2024
 */

#define pr_fmt(fmt) "htmdump: " fmt

#include <linux/debugfs.h>
#include <linux/module.h>
#include <asm/io.h>
#include <asm/machdep.h>
#include <asm/plpar_wrappers.h>

static void *htm_buf;
static u32 nodeindex;
static u32 nodalchipindex;
static u32 coreindexonchip;
static u32 htmtype;
static struct dentry *htmdump_debugfs_dir;

static ssize_t htmdump_read(struct file *filp, char __user *ubuf,
			    size_t count, loff_t *ppos)
{
	void *htm_buf = filp->private_data;
	unsigned long page, read_size, available;
	loff_t offset;
	long rc;

	page = ALIGN_DOWN(*ppos, PAGE_SIZE);
	offset = (*ppos) % PAGE_SIZE;

	rc = htm_get_dump_hardware(nodeindex, nodalchipindex, coreindexonchip,
				   htmtype, virt_to_phys(htm_buf), PAGE_SIZE, page);

	switch (rc) {
	case H_SUCCESS:
	/* H_PARTIAL for the case where all available data can't be
	 * returned due to buffer size constraint.
	 */
	case H_PARTIAL:
		break;
	/* H_NOT_AVAILABLE indicates reading from an offset outside the range,
	 * i.e. past end of file.
	 */
	case H_NOT_AVAILABLE:
		return 0;
	case H_BUSY:
	case H_LONG_BUSY_ORDER_1_MSEC:
	case H_LONG_BUSY_ORDER_10_MSEC:
	case H_LONG_BUSY_ORDER_100_MSEC:
	case H_LONG_BUSY_ORDER_1_SEC:
	case H_LONG_BUSY_ORDER_10_SEC:
	case H_LONG_BUSY_ORDER_100_SEC:
		return -EBUSY;
	case H_PARAMETER:
	case H_P2:
	case H_P3:
	case H_P4:
	case H_P5:
	case H_P6:
		return -EINVAL;
	case H_STATE:
		return -EIO;
	case H_AUTHORITY:
		return -EPERM;
	}

	available = PAGE_SIZE;
	read_size = min(count, available);
	*ppos += read_size;
	return simple_read_from_buffer(ubuf, count, &offset, htm_buf, available);
}

static const struct file_operations htmdump_fops = {
	.llseek = NULL,
	.read	= htmdump_read,
	.open	= simple_open,
};

static int htmdump_init_debugfs(void)
{
	htm_buf = kmalloc(PAGE_SIZE, GFP_KERNEL);
	if (!htm_buf) {
		pr_err("Failed to allocate htmdump buf\n");
		return -ENOMEM;
	}

	htmdump_debugfs_dir = debugfs_create_dir("htmdump",
						 arch_debugfs_dir);

	debugfs_create_u32("nodeindex", 0600,
			   htmdump_debugfs_dir, &nodeindex);
	debugfs_create_u32("nodalchipindex", 0600,
			   htmdump_debugfs_dir, &nodalchipindex);
	debugfs_create_u32("coreindexonchip", 0600,
			   htmdump_debugfs_dir, &coreindexonchip);
	debugfs_create_u32("htmtype", 0600,
			   htmdump_debugfs_dir, &htmtype);
	debugfs_create_file("trace", 0400, htmdump_debugfs_dir, htm_buf, &htmdump_fops);

	return 0;
}

static int __init htmdump_init(void)
{
	if (htmdump_init_debugfs())
		return -ENOMEM;

	return 0;
}

static void __exit htmdump_exit(void)
{
	debugfs_remove_recursive(htmdump_debugfs_dir);
	kfree(htm_buf);
}

module_init(htmdump_init);
module_exit(htmdump_exit);
MODULE_DESCRIPTION("PHYP Hardware Trace Macro (HTM) data dumper");
MODULE_LICENSE("GPL");
```
+110 -29
arch/powerpc/platforms/pseries/iommu.c
··· 52 52 enum { 53 53 DDW_EXT_SIZE = 0, 54 54 DDW_EXT_RESET_DMA_WIN = 1, 55 - DDW_EXT_QUERY_OUT_SIZE = 2 55 + DDW_EXT_QUERY_OUT_SIZE = 2, 56 + DDW_EXT_LIMITED_ADDR_MODE = 3 56 57 }; 57 58 58 59 static struct iommu_table *iommu_pseries_alloc_table(int node) ··· 1285 1284 1286 1285 static phys_addr_t ddw_memory_hotplug_max(void) 1287 1286 { 1288 - resource_size_t max_addr = memory_hotplug_max(); 1289 - struct device_node *memory; 1287 + resource_size_t max_addr; 1290 1288 1291 - for_each_node_by_type(memory, "memory") { 1292 - struct resource res; 1293 - 1294 - if (of_address_to_resource(memory, 0, &res)) 1295 - continue; 1296 - 1297 - max_addr = max_t(resource_size_t, max_addr, res.end + 1); 1298 - } 1289 + #if defined(CONFIG_NUMA) && defined(CONFIG_MEMORY_HOTPLUG) 1290 + max_addr = hot_add_drconf_memory_max(); 1291 + #else 1292 + max_addr = memblock_end_of_DRAM(); 1293 + #endif 1299 1294 1300 1295 return max_addr; 1301 1296 } ··· 1326 1329 "ibm,reset-pe-dma-windows(%x) %x %x %x returned %d ", 1327 1330 reset_dma_win, cfg_addr, BUID_HI(buid), BUID_LO(buid), 1328 1331 ret); 1332 + } 1333 + 1334 + /* 1335 + * Platforms support placing PHB in limited address mode starting with LoPAR 1336 + * level 2.13 implement. In this mode, the DMA address returned by DDW is over 1337 + * 4GB but, less than 64-bits. This benefits IO adapters that don't support 1338 + * 64-bits for DMA addresses. 
1339 + */ 1340 + static int limited_dma_window(struct pci_dev *dev, struct device_node *par_dn) 1341 + { 1342 + int ret; 1343 + u32 cfg_addr, reset_dma_win, las_supported; 1344 + u64 buid; 1345 + struct device_node *dn; 1346 + struct pci_dn *pdn; 1347 + 1348 + ret = ddw_read_ext(par_dn, DDW_EXT_RESET_DMA_WIN, &reset_dma_win); 1349 + if (ret) 1350 + goto out; 1351 + 1352 + ret = ddw_read_ext(par_dn, DDW_EXT_LIMITED_ADDR_MODE, &las_supported); 1353 + 1354 + /* Limited Address Space extension available on the platform but DDW in 1355 + * limited addressing mode not supported 1356 + */ 1357 + if (!ret && !las_supported) 1358 + ret = -EPROTO; 1359 + 1360 + if (ret) { 1361 + dev_info(&dev->dev, "Limited Address Space for DDW not Supported, err: %d", ret); 1362 + goto out; 1363 + } 1364 + 1365 + dn = pci_device_to_OF_node(dev); 1366 + pdn = PCI_DN(dn); 1367 + buid = pdn->phb->buid; 1368 + cfg_addr = (pdn->busno << 16) | (pdn->devfn << 8); 1369 + 1370 + ret = rtas_call(reset_dma_win, 4, 1, NULL, cfg_addr, BUID_HI(buid), 1371 + BUID_LO(buid), 1); 1372 + if (ret) 1373 + dev_info(&dev->dev, 1374 + "ibm,reset-pe-dma-windows(%x) for Limited Addr Support: %x %x %x returned %d ", 1375 + reset_dma_win, cfg_addr, BUID_HI(buid), BUID_LO(buid), 1376 + ret); 1377 + 1378 + out: 1379 + return ret; 1329 1380 } 1330 1381 1331 1382 /* Return largest page shift based on "IO Page Sizes" output of ibm,query-pe-dma-window. */ ··· 1443 1398 * 1444 1399 * returns true if can map all pages (direct mapping), false otherwise.. 
1445 1400 */ 1446 - static bool enable_ddw(struct pci_dev *dev, struct device_node *pdn) 1401 + static bool enable_ddw(struct pci_dev *dev, struct device_node *pdn, u64 dma_mask) 1447 1402 { 1448 1403 int len = 0, ret; 1449 1404 int max_ram_len = order_base_2(ddw_memory_hotplug_max()); ··· 1462 1417 bool pmem_present; 1463 1418 struct pci_dn *pci = PCI_DN(pdn); 1464 1419 struct property *default_win = NULL; 1420 + bool limited_addr_req = false, limited_addr_enabled = false; 1421 + int dev_max_ddw; 1422 + int ddw_sz; 1465 1423 1466 1424 dn = of_find_node_by_type(NULL, "ibm,pmemory"); 1467 1425 pmem_present = dn != NULL; ··· 1491 1443 * the ibm,ddw-applicable property holds the tokens for: 1492 1444 * ibm,query-pe-dma-window 1493 1445 * ibm,create-pe-dma-window 1494 - * ibm,remove-pe-dma-window 1495 1446 * for the given node in that order. 1496 1447 * the property is actually in the parent, not the PE 1497 1448 */ ··· 1509 1462 ret = query_ddw(dev, ddw_avail, &query, pdn); 1510 1463 if (ret != 0) 1511 1464 goto out_failed; 1465 + 1466 + /* DMA Limited Addressing required? 
+	 … This is when the driver has
+	 * requested to create DDW but supports mask which is less than 64-bits
+	 */
+	limited_addr_req = (dma_mask != DMA_BIT_MASK(64));
+
+	/* place the PHB in Limited Addressing mode */
+	if (limited_addr_req) {
+		if (limited_dma_window(dev, pdn))
+			goto out_failed;
+
+		/* PHB is in Limited address mode */
+		limited_addr_enabled = true;
+	}
 
 	/*
 	 * If there is no window available, remove the default DMA window,
···
 		goto out_failed;
 	}
 
+	/* Maximum DMA window size that the device can address (in log2) */
+	dev_max_ddw = fls64(dma_mask);
+
+	/* If the device DMA mask is less than 64-bits, make sure the DMA window
+	 * size is not bigger than what the device can access
+	 */
+	ddw_sz = min(order_base_2(query.largest_available_block << page_shift),
+			dev_max_ddw);
+
 	/*
 	 * The "ibm,pmemory" can appear anywhere in the address space.
 	 * Assuming it is still backed by page structs, try MAX_PHYSMEM_BITS
···
 	 */
 	len = max_ram_len;
 	if (pmem_present) {
-		if (query.largest_available_block >=
-		    (1ULL << (MAX_PHYSMEM_BITS - page_shift)))
+		if (ddw_sz >= MAX_PHYSMEM_BITS)
 			len = MAX_PHYSMEM_BITS;
 		else
 			dev_info(&dev->dev, "Skipping ibm,pmemory");
 	}
 
 	/* check if the available block * number of ptes will map everything */
-	if (query.largest_available_block < (1ULL << (len - page_shift))) {
+	if (ddw_sz < len) {
 		dev_dbg(&dev->dev,
 			"can't map partition max 0x%llx with %llu %llu-sized pages\n",
 			1ULL << len,
 			query.largest_available_block,
 			1ULL << page_shift);
 
-		len = order_base_2(query.largest_available_block << page_shift);
-
+		len = ddw_sz;
 		dynamic_mapping = true;
 	} else {
 		direct_mapping = !default_win_removed ||
···
 	 */
 	if (default_win_removed && pmem_present && !direct_mapping) {
 		/* DDW is big enough to be split */
-		if ((query.largest_available_block << page_shift) >=
-		    MIN_DDW_VPMEM_DMA_WINDOW + (1ULL << max_ram_len)) {
+		if ((1ULL << ddw_sz) >=
+		    MIN_DDW_VPMEM_DMA_WINDOW + (1ULL << max_ram_len)) {
+
 			direct_mapping = true;
 
 			/* offset of the Dynamic part of DDW */
···
 			dynamic_mapping = true;
 
 			/* create max size DDW possible */
-			len = order_base_2(query.largest_available_block
-					<< page_shift);
+			len = ddw_sz;
 		}
 	}
 
···
 
 	if (direct_mapping) {
 		/* DDW maps the whole partition, so enable direct DMA mapping */
-		ret = walk_system_ram_range(0, memblock_end_of_DRAM() >> PAGE_SHIFT,
+		ret = walk_system_ram_range(0, ddw_memory_hotplug_max() >> PAGE_SHIFT,
 					    win64->value,
 					    tce_setrange_multi_pSeriesLP_walk);
 		if (ret) {
 			dev_info(&dev->dev, "failed to map DMA window for %pOF: %d\n",
···
 	__remove_dma_window(pdn, ddw_avail, create.liobn);
 
 out_failed:
-	if (default_win_removed)
+	if (default_win_removed || limited_addr_enabled)
 		reset_dma_window(dev, pdn);
 
 	fpdn = kzalloc(sizeof(*fpdn), GFP_KERNEL);
···
 	if (pmem_present && direct_mapping && len != MAX_PHYSMEM_BITS)
 		dev->dev.bus_dma_limit = dev->dev.archdata.dma_offset +
 			(1ULL << max_ram_len);
+
+	dev_info(&dev->dev, "lsa_required: %x, lsa_enabled: %x, direct mapping: %x\n",
+		 limited_addr_req, limited_addr_enabled, direct_mapping);
 
 	return direct_mapping;
 }
···
 {
 	struct device_node *dn = pci_device_to_OF_node(pdev), *pdn;
 
-	/* only attempt to use a new window if 64-bit DMA is requested */
-	if (dma_mask < DMA_BIT_MASK(64))
+	/* For DDW, DMA mask should be more than 32-bits. For mask more then
+	 * 32-bits but less then 64-bits, DMA addressing is supported in
+	 * Limited Addressing mode.
+	 */
+	if (dma_mask <= DMA_BIT_MASK(32))
 		return false;
 
 	dev_dbg(&pdev->dev, "node is %pOF\n", dn);
···
 	 */
 	pdn = pci_dma_find(dn, NULL);
 	if (pdn && PCI_DN(pdn))
-		return enable_ddw(pdev, pdn);
+		return enable_ddw(pdev, pdn, dma_mask);
 
 	return false;
 }
···
 	struct memory_notify *arg = data;
 	int ret = 0;
 
+	/* This notifier can get called when onlining persistent memory as well.
+	 * TCEs are not pre-mapped for persistent memory. Persistent memory will
+	 * always be above ddw_memory_hotplug_max()
+	 */
+
 	switch (action) {
 	case MEM_GOING_ONLINE:
 		spin_lock(&dma_win_list_lock);
 		list_for_each_entry(window, &dma_win_list, list) {
-			if (window->direct) {
+			if (window->direct && (arg->start_pfn << PAGE_SHIFT) <
+			    ddw_memory_hotplug_max()) {
 				ret |= tce_setrange_multi_pSeriesLP(arg->start_pfn,
 						arg->nr_pages, window->prop);
 			}
···
 	case MEM_OFFLINE:
 		spin_lock(&dma_win_list_lock);
 		list_for_each_entry(window, &dma_win_list, list) {
-			if (window->direct) {
+			if (window->direct && (arg->start_pfn << PAGE_SHIFT) <
+			    ddw_memory_hotplug_max()) {
 				ret |= tce_clearrange_multi_pSeriesLP(arg->start_pfn,
 						arg->nr_pages, window->prop);
 			}
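The window sizing introduced in the enable_ddw() hunks above boils down to one clamp: the DDW size (in log2) is limited both by the largest block the hypervisor can provide and by how many address bits the device's DMA mask covers. The sketch below models just that arithmetic outside the kernel; fls64_sketch(), order_base_2_sketch(), and clamp_ddw_size() are hypothetical plain-C stand-ins for the kernel's fls64(), order_base_2(), and min(), not the actual patch code.

```c
#include <assert.h>
#include <stdint.h>

/* 1-based position of the most-significant set bit, like kernel fls64() */
static unsigned int fls64_sketch(uint64_t x)
{
	unsigned int n = 0;

	while (x) {
		n++;
		x >>= 1;
	}
	return n;
}

/* smallest n such that (1ULL << n) >= x, like kernel order_base_2() */
static unsigned int order_base_2_sketch(uint64_t x)
{
	unsigned int n = fls64_sketch(x);

	return (x & (x - 1)) ? n : n - 1;
}

/*
 * Mirror of the patch's logic:
 *   dev_max_ddw = fls64(dma_mask);
 *   ddw_sz = min(order_base_2(largest_block << page_shift), dev_max_ddw);
 */
static unsigned int clamp_ddw_size(uint64_t largest_block,
				   unsigned int page_shift,
				   uint64_t dma_mask)
{
	unsigned int dev_max_ddw = fls64_sketch(dma_mask);
	unsigned int ddw_sz =
		order_base_2_sketch(largest_block << page_shift);

	return ddw_sz < dev_max_ddw ? ddw_sz : dev_max_ddw;
}
```

With a 64-bit mask the clamp is a no-op and the window size is whatever the hypervisor offers; with a 40-bit ("limited addressing") mask a 2^36-byte offer still fits, while a 32-bit mask caps the window at 2^32 bytes.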
-1
arch/powerpc/sysdev/Makefile
···
 
 obj-$(CONFIG_PPC_MPC106)	+= grackle.o
 obj-$(CONFIG_PPC_DCR_NATIVE)	+= dcr-low.o
-obj-$(CONFIG_PPC_PMI)		+= pmi.o
 obj-$(CONFIG_U3_DART)		+= dart_iommu.o
 obj-$(CONFIG_MMIO_NVRAM)	+= mmio_nvram.o
 obj-$(CONFIG_FSL_SOC)		+= fsl_soc.o fsl_mpic_err.o
-182
arch/powerpc/sysdev/dcr.c
··· 11 11 #include <linux/of_address.h> 12 12 #include <asm/dcr.h> 13 13 14 - #ifdef CONFIG_PPC_DCR_MMIO 15 - static struct device_node *find_dcr_parent(struct device_node *node) 16 - { 17 - struct device_node *par, *tmp; 18 - const u32 *p; 19 - 20 - for (par = of_node_get(node); par;) { 21 - if (of_property_read_bool(par, "dcr-controller")) 22 - break; 23 - p = of_get_property(par, "dcr-parent", NULL); 24 - tmp = par; 25 - if (p == NULL) 26 - par = of_get_parent(par); 27 - else 28 - par = of_find_node_by_phandle(*p); 29 - of_node_put(tmp); 30 - } 31 - return par; 32 - } 33 - #endif 34 - 35 - #if defined(CONFIG_PPC_DCR_NATIVE) && defined(CONFIG_PPC_DCR_MMIO) 36 - 37 - bool dcr_map_ok_generic(dcr_host_t host) 38 - { 39 - if (host.type == DCR_HOST_NATIVE) 40 - return dcr_map_ok_native(host.host.native); 41 - else if (host.type == DCR_HOST_MMIO) 42 - return dcr_map_ok_mmio(host.host.mmio); 43 - else 44 - return false; 45 - } 46 - EXPORT_SYMBOL_GPL(dcr_map_ok_generic); 47 - 48 - dcr_host_t dcr_map_generic(struct device_node *dev, 49 - unsigned int dcr_n, 50 - unsigned int dcr_c) 51 - { 52 - dcr_host_t host; 53 - struct device_node *dp; 54 - const char *prop; 55 - 56 - host.type = DCR_HOST_INVALID; 57 - 58 - dp = find_dcr_parent(dev); 59 - if (dp == NULL) 60 - return host; 61 - 62 - prop = of_get_property(dp, "dcr-access-method", NULL); 63 - 64 - pr_debug("dcr_map_generic(dcr-access-method = %s)\n", prop); 65 - 66 - if (!strcmp(prop, "native")) { 67 - host.type = DCR_HOST_NATIVE; 68 - host.host.native = dcr_map_native(dev, dcr_n, dcr_c); 69 - } else if (!strcmp(prop, "mmio")) { 70 - host.type = DCR_HOST_MMIO; 71 - host.host.mmio = dcr_map_mmio(dev, dcr_n, dcr_c); 72 - } 73 - 74 - of_node_put(dp); 75 - return host; 76 - } 77 - EXPORT_SYMBOL_GPL(dcr_map_generic); 78 - 79 - void dcr_unmap_generic(dcr_host_t host, unsigned int dcr_c) 80 - { 81 - if (host.type == DCR_HOST_NATIVE) 82 - dcr_unmap_native(host.host.native, dcr_c); 83 - else if (host.type == DCR_HOST_MMIO) 84 - 
dcr_unmap_mmio(host.host.mmio, dcr_c); 85 - else /* host.type == DCR_HOST_INVALID */ 86 - WARN_ON(true); 87 - } 88 - EXPORT_SYMBOL_GPL(dcr_unmap_generic); 89 - 90 - u32 dcr_read_generic(dcr_host_t host, unsigned int dcr_n) 91 - { 92 - if (host.type == DCR_HOST_NATIVE) 93 - return dcr_read_native(host.host.native, dcr_n); 94 - else if (host.type == DCR_HOST_MMIO) 95 - return dcr_read_mmio(host.host.mmio, dcr_n); 96 - else /* host.type == DCR_HOST_INVALID */ 97 - WARN_ON(true); 98 - return 0; 99 - } 100 - EXPORT_SYMBOL_GPL(dcr_read_generic); 101 - 102 - void dcr_write_generic(dcr_host_t host, unsigned int dcr_n, u32 value) 103 - { 104 - if (host.type == DCR_HOST_NATIVE) 105 - dcr_write_native(host.host.native, dcr_n, value); 106 - else if (host.type == DCR_HOST_MMIO) 107 - dcr_write_mmio(host.host.mmio, dcr_n, value); 108 - else /* host.type == DCR_HOST_INVALID */ 109 - WARN_ON(true); 110 - } 111 - EXPORT_SYMBOL_GPL(dcr_write_generic); 112 - 113 - #endif /* defined(CONFIG_PPC_DCR_NATIVE) && defined(CONFIG_PPC_DCR_MMIO) */ 114 - 115 14 unsigned int dcr_resource_start(const struct device_node *np, 116 15 unsigned int index) 117 16 { ··· 36 137 } 37 138 EXPORT_SYMBOL_GPL(dcr_resource_len); 38 139 39 - #ifdef CONFIG_PPC_DCR_MMIO 40 - 41 - static u64 of_translate_dcr_address(struct device_node *dev, 42 - unsigned int dcr_n, 43 - unsigned int *out_stride) 44 - { 45 - struct device_node *dp; 46 - const u32 *p; 47 - unsigned int stride; 48 - u64 ret = OF_BAD_ADDR; 49 - 50 - dp = find_dcr_parent(dev); 51 - if (dp == NULL) 52 - return OF_BAD_ADDR; 53 - 54 - /* Stride is not properly defined yet, default to 0x10 for Axon */ 55 - p = of_get_property(dp, "dcr-mmio-stride", NULL); 56 - stride = (p == NULL) ? 0x10 : *p; 57 - 58 - /* XXX FIXME: Which property name is to use of the 2 following ? 
*/ 59 - p = of_get_property(dp, "dcr-mmio-range", NULL); 60 - if (p == NULL) 61 - p = of_get_property(dp, "dcr-mmio-space", NULL); 62 - if (p == NULL) 63 - goto done; 64 - 65 - /* Maybe could do some better range checking here */ 66 - ret = of_translate_address(dp, p); 67 - if (ret != OF_BAD_ADDR) 68 - ret += (u64)(stride) * (u64)dcr_n; 69 - if (out_stride) 70 - *out_stride = stride; 71 - 72 - done: 73 - of_node_put(dp); 74 - return ret; 75 - } 76 - 77 - dcr_host_mmio_t dcr_map_mmio(struct device_node *dev, 78 - unsigned int dcr_n, 79 - unsigned int dcr_c) 80 - { 81 - dcr_host_mmio_t ret = { .token = NULL, .stride = 0, .base = dcr_n }; 82 - u64 addr; 83 - 84 - pr_debug("dcr_map(%pOF, 0x%x, 0x%x)\n", 85 - dev, dcr_n, dcr_c); 86 - 87 - addr = of_translate_dcr_address(dev, dcr_n, &ret.stride); 88 - pr_debug("translates to addr: 0x%llx, stride: 0x%x\n", 89 - (unsigned long long) addr, ret.stride); 90 - if (addr == OF_BAD_ADDR) 91 - return ret; 92 - pr_debug("mapping 0x%x bytes\n", dcr_c * ret.stride); 93 - ret.token = ioremap(addr, dcr_c * ret.stride); 94 - if (ret.token == NULL) 95 - return ret; 96 - pr_debug("mapped at 0x%p -> base is 0x%p\n", 97 - ret.token, ret.token - dcr_n * ret.stride); 98 - ret.token -= dcr_n * ret.stride; 99 - return ret; 100 - } 101 - EXPORT_SYMBOL_GPL(dcr_map_mmio); 102 - 103 - void dcr_unmap_mmio(dcr_host_mmio_t host, unsigned int dcr_c) 104 - { 105 - dcr_host_mmio_t h = host; 106 - 107 - if (h.token == NULL) 108 - return; 109 - h.token += host.base * h.stride; 110 - iounmap(h.token); 111 - h.token = NULL; 112 - } 113 - EXPORT_SYMBOL_GPL(dcr_unmap_mmio); 114 - 115 - #endif /* defined(CONFIG_PPC_DCR_MMIO) */ 116 - 117 - #ifdef CONFIG_PPC_DCR_NATIVE 118 140 DEFINE_SPINLOCK(dcr_ind_lock); 119 141 EXPORT_SYMBOL_GPL(dcr_ind_lock); 120 - #endif /* defined(CONFIG_PPC_DCR_NATIVE) */ 121 -
+1 -2
arch/powerpc/sysdev/ipic.c
···
 	ipic_write(ipic->regs, IPIC_SIMSR_H, 0);
 	ipic_write(ipic->regs, IPIC_SIMSR_L, 0);
 
-	printk ("IPIC (%d IRQ sources) at %p\n", NR_IPIC_INTS,
-		primary_ipic->regs);
+	pr_info("IPIC (%d IRQ sources) at MMIO %pa\n", NR_IPIC_INTS, &res.start);
 
 	return ipic;
 }
-267
arch/powerpc/sysdev/pmi.c
··· 1 - // SPDX-License-Identifier: GPL-2.0-or-later 2 - /* 3 - * pmi driver 4 - * 5 - * (C) Copyright IBM Deutschland Entwicklung GmbH 2005 6 - * 7 - * PMI (Platform Management Interrupt) is a way to communicate 8 - * with the BMC (Baseboard Management Controller) via interrupts. 9 - * Unlike IPMI it is bidirectional and has a low latency. 10 - * 11 - * Author: Christian Krafft <krafft@de.ibm.com> 12 - */ 13 - 14 - #include <linux/interrupt.h> 15 - #include <linux/slab.h> 16 - #include <linux/completion.h> 17 - #include <linux/spinlock.h> 18 - #include <linux/module.h> 19 - #include <linux/mod_devicetable.h> 20 - #include <linux/workqueue.h> 21 - #include <linux/of_address.h> 22 - #include <linux/of_irq.h> 23 - #include <linux/platform_device.h> 24 - 25 - #include <asm/io.h> 26 - #include <asm/pmi.h> 27 - 28 - struct pmi_data { 29 - struct list_head handler; 30 - spinlock_t handler_spinlock; 31 - spinlock_t pmi_spinlock; 32 - struct mutex msg_mutex; 33 - pmi_message_t msg; 34 - struct completion *completion; 35 - struct platform_device *dev; 36 - int irq; 37 - u8 __iomem *pmi_reg; 38 - struct work_struct work; 39 - }; 40 - 41 - static struct pmi_data *data; 42 - 43 - static irqreturn_t pmi_irq_handler(int irq, void *dev_id) 44 - { 45 - u8 type; 46 - int rc; 47 - 48 - spin_lock(&data->pmi_spinlock); 49 - 50 - type = ioread8(data->pmi_reg + PMI_READ_TYPE); 51 - pr_debug("pmi: got message of type %d\n", type); 52 - 53 - if (type & PMI_ACK && !data->completion) { 54 - printk(KERN_WARNING "pmi: got unexpected ACK message.\n"); 55 - rc = -EIO; 56 - goto unlock; 57 - } 58 - 59 - if (data->completion && !(type & PMI_ACK)) { 60 - printk(KERN_WARNING "pmi: expected ACK, but got %d\n", type); 61 - rc = -EIO; 62 - goto unlock; 63 - } 64 - 65 - data->msg.type = type; 66 - data->msg.data0 = ioread8(data->pmi_reg + PMI_READ_DATA0); 67 - data->msg.data1 = ioread8(data->pmi_reg + PMI_READ_DATA1); 68 - data->msg.data2 = ioread8(data->pmi_reg + PMI_READ_DATA2); 69 - rc = 0; 70 - 
unlock: 71 - spin_unlock(&data->pmi_spinlock); 72 - 73 - if (rc == -EIO) { 74 - rc = IRQ_HANDLED; 75 - goto out; 76 - } 77 - 78 - if (data->msg.type & PMI_ACK) { 79 - complete(data->completion); 80 - rc = IRQ_HANDLED; 81 - goto out; 82 - } 83 - 84 - schedule_work(&data->work); 85 - 86 - rc = IRQ_HANDLED; 87 - out: 88 - return rc; 89 - } 90 - 91 - 92 - static const struct of_device_id pmi_match[] = { 93 - { .type = "ibm,pmi", .name = "ibm,pmi" }, 94 - { .type = "ibm,pmi" }, 95 - {}, 96 - }; 97 - 98 - MODULE_DEVICE_TABLE(of, pmi_match); 99 - 100 - static void pmi_notify_handlers(struct work_struct *work) 101 - { 102 - struct pmi_handler *handler; 103 - 104 - spin_lock(&data->handler_spinlock); 105 - list_for_each_entry(handler, &data->handler, node) { 106 - pr_debug("pmi: notifying handler %p\n", handler); 107 - if (handler->type == data->msg.type) 108 - handler->handle_pmi_message(data->msg); 109 - } 110 - spin_unlock(&data->handler_spinlock); 111 - } 112 - 113 - static int pmi_of_probe(struct platform_device *dev) 114 - { 115 - struct device_node *np = dev->dev.of_node; 116 - int rc; 117 - 118 - if (data) { 119 - printk(KERN_ERR "pmi: driver has already been initialized.\n"); 120 - rc = -EBUSY; 121 - goto out; 122 - } 123 - 124 - data = kzalloc(sizeof(struct pmi_data), GFP_KERNEL); 125 - if (!data) { 126 - printk(KERN_ERR "pmi: could not allocate memory.\n"); 127 - rc = -ENOMEM; 128 - goto out; 129 - } 130 - 131 - data->pmi_reg = of_iomap(np, 0); 132 - if (!data->pmi_reg) { 133 - printk(KERN_ERR "pmi: invalid register address.\n"); 134 - rc = -EFAULT; 135 - goto error_cleanup_data; 136 - } 137 - 138 - INIT_LIST_HEAD(&data->handler); 139 - 140 - mutex_init(&data->msg_mutex); 141 - spin_lock_init(&data->pmi_spinlock); 142 - spin_lock_init(&data->handler_spinlock); 143 - 144 - INIT_WORK(&data->work, pmi_notify_handlers); 145 - 146 - data->dev = dev; 147 - 148 - data->irq = irq_of_parse_and_map(np, 0); 149 - if (!data->irq) { 150 - printk(KERN_ERR "pmi: invalid 
interrupt.\n"); 151 - rc = -EFAULT; 152 - goto error_cleanup_iomap; 153 - } 154 - 155 - rc = request_irq(data->irq, pmi_irq_handler, 0, "pmi", NULL); 156 - if (rc) { 157 - printk(KERN_ERR "pmi: can't request IRQ %d: returned %d\n", 158 - data->irq, rc); 159 - goto error_cleanup_iomap; 160 - } 161 - 162 - printk(KERN_INFO "pmi: found pmi device at addr %p.\n", data->pmi_reg); 163 - 164 - goto out; 165 - 166 - error_cleanup_iomap: 167 - iounmap(data->pmi_reg); 168 - 169 - error_cleanup_data: 170 - kfree(data); 171 - 172 - out: 173 - return rc; 174 - } 175 - 176 - static void pmi_of_remove(struct platform_device *dev) 177 - { 178 - struct pmi_handler *handler, *tmp; 179 - 180 - free_irq(data->irq, NULL); 181 - iounmap(data->pmi_reg); 182 - 183 - spin_lock(&data->handler_spinlock); 184 - 185 - list_for_each_entry_safe(handler, tmp, &data->handler, node) 186 - list_del(&handler->node); 187 - 188 - spin_unlock(&data->handler_spinlock); 189 - 190 - kfree(data); 191 - data = NULL; 192 - } 193 - 194 - static struct platform_driver pmi_of_platform_driver = { 195 - .probe = pmi_of_probe, 196 - .remove = pmi_of_remove, 197 - .driver = { 198 - .name = "pmi", 199 - .of_match_table = pmi_match, 200 - }, 201 - }; 202 - module_platform_driver(pmi_of_platform_driver); 203 - 204 - int pmi_send_message(pmi_message_t msg) 205 - { 206 - unsigned long flags; 207 - DECLARE_COMPLETION_ONSTACK(completion); 208 - 209 - if (!data) 210 - return -ENODEV; 211 - 212 - mutex_lock(&data->msg_mutex); 213 - 214 - data->msg = msg; 215 - pr_debug("pmi_send_message: msg is %08x\n", *(u32*)&msg); 216 - 217 - data->completion = &completion; 218 - 219 - spin_lock_irqsave(&data->pmi_spinlock, flags); 220 - iowrite8(msg.data0, data->pmi_reg + PMI_WRITE_DATA0); 221 - iowrite8(msg.data1, data->pmi_reg + PMI_WRITE_DATA1); 222 - iowrite8(msg.data2, data->pmi_reg + PMI_WRITE_DATA2); 223 - iowrite8(msg.type, data->pmi_reg + PMI_WRITE_TYPE); 224 - spin_unlock_irqrestore(&data->pmi_spinlock, flags); 225 - 226 - 
pr_debug("pmi_send_message: wait for completion\n"); 227 - 228 - wait_for_completion_interruptible_timeout(data->completion, 229 - PMI_TIMEOUT); 230 - 231 - data->completion = NULL; 232 - 233 - mutex_unlock(&data->msg_mutex); 234 - 235 - return 0; 236 - } 237 - EXPORT_SYMBOL_GPL(pmi_send_message); 238 - 239 - int pmi_register_handler(struct pmi_handler *handler) 240 - { 241 - if (!data) 242 - return -ENODEV; 243 - 244 - spin_lock(&data->handler_spinlock); 245 - list_add_tail(&handler->node, &data->handler); 246 - spin_unlock(&data->handler_spinlock); 247 - 248 - return 0; 249 - } 250 - EXPORT_SYMBOL_GPL(pmi_register_handler); 251 - 252 - void pmi_unregister_handler(struct pmi_handler *handler) 253 - { 254 - if (!data) 255 - return; 256 - 257 - pr_debug("pmi: unregistering handler %p\n", handler); 258 - 259 - spin_lock(&data->handler_spinlock); 260 - list_del(&handler->node); 261 - spin_unlock(&data->handler_spinlock); 262 - } 263 - EXPORT_SYMBOL_GPL(pmi_unregister_handler); 264 - 265 - MODULE_LICENSE("GPL"); 266 - MODULE_AUTHOR("Christian Krafft <krafft@de.ibm.com>"); 267 - MODULE_DESCRIPTION("IBM Platform Management Interrupt driver");
-21
arch/powerpc/sysdev/xics/icp-native.c
···
 	icp_native_set_qirr(cpu, IPI_PRIORITY);
 }
 
-#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
-void icp_native_cause_ipi_rm(int cpu)
-{
-	/*
-	 * Currently not used to send IPIs to another CPU
-	 * on the same core. Only caller is KVM real mode.
-	 * Need the physical address of the XICS to be
-	 * previously saved in kvm_hstate in the paca.
-	 */
-	void __iomem *xics_phys;
-
-	/*
-	 * Just like the cause_ipi functions, it is required to
-	 * include a full barrier before causing the IPI.
-	 */
-	xics_phys = paca_ptrs[cpu]->kvm_hstate.xics_phys;
-	mb();
-	__raw_rm_writeb(IPI_PRIORITY, xics_phys + XICS_MFRR);
-}
-#endif
-
 /*
  * Called when an interrupt is received on an off-line CPU to
  * clear the interrupt, so that the CPU can go back to nap mode.
+1 -4
arch/powerpc/xmon/Makefile
···
 
 obj-y			+= xmon.o nonstdio.o spr_access.o xmon_bpts.o
 
-ifdef CONFIG_XMON_DISASSEMBLY
-obj-y			+= ppc-dis.o ppc-opc.o
-obj-$(CONFIG_SPU_BASE)	+= spu-dis.o spu-opc.o
-endif
+obj-$(CONFIG_XMON_DISASSEMBLY) += ppc-dis.o ppc-opc.o
-237
arch/powerpc/xmon/spu-dis.c
··· 1 - // SPDX-License-Identifier: GPL-2.0-or-later 2 - /* Disassemble SPU instructions 3 - 4 - Copyright 2006 Free Software Foundation, Inc. 5 - 6 - This file is part of GDB, GAS, and the GNU binutils. 7 - 8 - */ 9 - 10 - #include <linux/string.h> 11 - #include "nonstdio.h" 12 - #include "ansidecl.h" 13 - #include "spu.h" 14 - #include "dis-asm.h" 15 - 16 - /* This file provides a disassembler function which uses 17 - the disassembler interface defined in dis-asm.h. */ 18 - 19 - extern const struct spu_opcode spu_opcodes[]; 20 - extern const int spu_num_opcodes; 21 - 22 - #define SPU_DISASM_TBL_SIZE (1 << 11) 23 - static const struct spu_opcode *spu_disassemble_table[SPU_DISASM_TBL_SIZE]; 24 - 25 - static void 26 - init_spu_disassemble (void) 27 - { 28 - int i; 29 - 30 - /* If two instructions have the same opcode then we prefer the first 31 - * one. In most cases it is just an alternate mnemonic. */ 32 - for (i = 0; i < spu_num_opcodes; i++) 33 - { 34 - int o = spu_opcodes[i].opcode; 35 - if (o >= SPU_DISASM_TBL_SIZE) 36 - continue; /* abort (); */ 37 - if (spu_disassemble_table[o] == 0) 38 - spu_disassemble_table[o] = &spu_opcodes[i]; 39 - } 40 - } 41 - 42 - /* Determine the instruction from the 10 least significant bits. */ 43 - static const struct spu_opcode * 44 - get_index_for_opcode (unsigned int insn) 45 - { 46 - const struct spu_opcode *index; 47 - unsigned int opcode = insn >> (32-11); 48 - 49 - /* Init the table. 
This assumes that element 0/opcode 0 (currently 50 - * NOP) is always used */ 51 - if (spu_disassemble_table[0] == 0) 52 - init_spu_disassemble (); 53 - 54 - if ((index = spu_disassemble_table[opcode & 0x780]) != 0 55 - && index->insn_type == RRR) 56 - return index; 57 - 58 - if ((index = spu_disassemble_table[opcode & 0x7f0]) != 0 59 - && (index->insn_type == RI18 || index->insn_type == LBT)) 60 - return index; 61 - 62 - if ((index = spu_disassemble_table[opcode & 0x7f8]) != 0 63 - && index->insn_type == RI10) 64 - return index; 65 - 66 - if ((index = spu_disassemble_table[opcode & 0x7fc]) != 0 67 - && (index->insn_type == RI16)) 68 - return index; 69 - 70 - if ((index = spu_disassemble_table[opcode & 0x7fe]) != 0 71 - && (index->insn_type == RI8)) 72 - return index; 73 - 74 - if ((index = spu_disassemble_table[opcode & 0x7ff]) != 0) 75 - return index; 76 - 77 - return NULL; 78 - } 79 - 80 - /* Print a Spu instruction. */ 81 - 82 - int 83 - print_insn_spu (unsigned long insn, unsigned long memaddr) 84 - { 85 - int value; 86 - int hex_value; 87 - const struct spu_opcode *index; 88 - enum spu_insns tag; 89 - 90 - index = get_index_for_opcode (insn); 91 - 92 - if (index == 0) 93 - { 94 - printf(".long 0x%lx", insn); 95 - } 96 - else 97 - { 98 - int i; 99 - int paren = 0; 100 - tag = (enum spu_insns)(index - spu_opcodes); 101 - printf("%s", index->mnemonic); 102 - if (tag == M_BI || tag == M_BISL || tag == M_IRET || tag == M_BISLED 103 - || tag == M_BIHNZ || tag == M_BIHZ || tag == M_BINZ || tag == M_BIZ 104 - || tag == M_SYNC || tag == M_HBR) 105 - { 106 - int fb = (insn >> (32-18)) & 0x7f; 107 - if (fb & 0x40) 108 - printf(tag == M_SYNC ? 
"c" : "p"); 109 - if (fb & 0x20) 110 - printf("d"); 111 - if (fb & 0x10) 112 - printf("e"); 113 - } 114 - if (index->arg[0] != 0) 115 - printf("\t"); 116 - hex_value = 0; 117 - for (i = 1; i <= index->arg[0]; i++) 118 - { 119 - int arg = index->arg[i]; 120 - if (arg != A_P && !paren && i > 1) 121 - printf(","); 122 - 123 - switch (arg) 124 - { 125 - case A_T: 126 - printf("$%lu", 127 - DECODE_INSN_RT (insn)); 128 - break; 129 - case A_A: 130 - printf("$%lu", 131 - DECODE_INSN_RA (insn)); 132 - break; 133 - case A_B: 134 - printf("$%lu", 135 - DECODE_INSN_RB (insn)); 136 - break; 137 - case A_C: 138 - printf("$%lu", 139 - DECODE_INSN_RC (insn)); 140 - break; 141 - case A_S: 142 - printf("$sp%lu", 143 - DECODE_INSN_RA (insn)); 144 - break; 145 - case A_H: 146 - printf("$ch%lu", 147 - DECODE_INSN_RA (insn)); 148 - break; 149 - case A_P: 150 - paren++; 151 - printf("("); 152 - break; 153 - case A_U7A: 154 - printf("%lu", 155 - 173 - DECODE_INSN_U8 (insn)); 156 - break; 157 - case A_U7B: 158 - printf("%lu", 159 - 155 - DECODE_INSN_U8 (insn)); 160 - break; 161 - case A_S3: 162 - case A_S6: 163 - case A_S7: 164 - case A_S7N: 165 - case A_U3: 166 - case A_U5: 167 - case A_U6: 168 - case A_U7: 169 - hex_value = DECODE_INSN_I7 (insn); 170 - printf("%d", hex_value); 171 - break; 172 - case A_S11: 173 - print_address(memaddr + DECODE_INSN_I9a (insn) * 4); 174 - break; 175 - case A_S11I: 176 - print_address(memaddr + DECODE_INSN_I9b (insn) * 4); 177 - break; 178 - case A_S10: 179 - case A_S10B: 180 - hex_value = DECODE_INSN_I10 (insn); 181 - printf("%d", hex_value); 182 - break; 183 - case A_S14: 184 - hex_value = DECODE_INSN_I10 (insn) * 16; 185 - printf("%d", hex_value); 186 - break; 187 - case A_S16: 188 - hex_value = DECODE_INSN_I16 (insn); 189 - printf("%d", hex_value); 190 - break; 191 - case A_X16: 192 - hex_value = DECODE_INSN_U16 (insn); 193 - printf("%u", hex_value); 194 - break; 195 - case A_R18: 196 - value = DECODE_INSN_I16 (insn) * 4; 197 - if (value == 0) 198 - 
printf("%d", value); 199 - else 200 - { 201 - hex_value = memaddr + value; 202 - print_address(hex_value & 0x3ffff); 203 - } 204 - break; 205 - case A_S18: 206 - value = DECODE_INSN_U16 (insn) * 4; 207 - if (value == 0) 208 - printf("%d", value); 209 - else 210 - print_address(value); 211 - break; 212 - case A_U18: 213 - value = DECODE_INSN_U18 (insn); 214 - if (value == 0 || 1) 215 - { 216 - hex_value = value; 217 - printf("%u", value); 218 - } 219 - else 220 - print_address(value); 221 - break; 222 - case A_U14: 223 - hex_value = DECODE_INSN_U14 (insn); 224 - printf("%u", hex_value); 225 - break; 226 - } 227 - if (arg != A_P && paren) 228 - { 229 - printf(")"); 230 - paren--; 231 - } 232 - } 233 - if (hex_value > 16) 234 - printf("\t# %x", hex_value); 235 - } 236 - return 4; 237 - }
-399
arch/powerpc/xmon/spu-insns.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0-or-later */ 2 - /* SPU ELF support for BFD. 3 - 4 - Copyright 2006 Free Software Foundation, Inc. 5 - 6 - This file is part of BFD, the Binary File Descriptor library. 7 - 8 - */ 9 - 10 - /* SPU Opcode Table 11 - 12 - -=-=-= FORMAT =-=-=- 13 - 14 - +----+-------+-------+-------+-------+ +------------+-------+-------+-------+ 15 - RRR | op | RC | RB | RA | RT | RI7 | op | I7 | RA | RT | 16 - +----+-------+-------+-------+-------+ +------------+-------+-------+-------+ 17 - 0 3 1 1 2 3 0 1 1 2 3 18 - 0 7 4 1 0 7 4 1 19 - 20 - +-----------+--------+-------+-------+ +---------+----------+-------+-------+ 21 - RI8 | op | I8 | RA | RT | RI10 | op | I10 | RA | RT | 22 - +-----------+--------+-------+-------+ +---------+----------+-------+-------+ 23 - 0 9 1 2 3 0 7 1 2 3 24 - 7 4 1 7 4 1 25 - 26 - +----------+-----------------+-------+ +--------+-------------------+-------+ 27 - RI16 | op | I16 | RT | RI18 | op | I18 | RT | 28 - +----------+-----------------+-------+ +--------+-------------------+-------+ 29 - 0 8 2 3 0 6 2 3 30 - 4 1 4 1 31 - 32 - +------------+-------+-------+-------+ +-------+--+-----------------+-------+ 33 - RR | op | RB | RA | RT | LBT | op |RO| I16 | RO | 34 - +------------+-------+-------+-------+ +-------+--+-----------------+-------+ 35 - 0 1 1 2 3 0 6 8 2 3 36 - 0 7 4 1 4 1 37 - 38 - +------------+----+--+-------+-------+ 39 - LBTI | op | // |RO| RA | RO | 40 - +------------+----+--+-------+-------+ 41 - 0 1 1 1 2 3 42 - 0 5 7 4 1 43 - 44 - -=-=-= OPCODE =-=-=- 45 - 46 - OPCODE field specifies the most significant 11bit of the instruction. Some formats don't have 11bits for opcode field, and in this 47 - case, bit field other than op are defined as 0s. For example, opcode of fma instruction which is RRR format is defined as 0x700, 48 - since 0x700 -> 11'b11100000000, this means opcode is 4'b1110, and other 7bits are defined as 7'b0000000. 
49 - 50 - -=-=-= ASM_FORMAT =-=-=- 51 - 52 - RRR category RI7 category 53 - ASM_RRR mnemonic RC, RA, RB, RT ASM_RI4 mnemonic RT, RA, I4 54 - ASM_RI7 mnemonic RT, RA, I7 55 - 56 - RI8 category RI10 category 57 - ASM_RUI8 mnemonic RT, RA, UI8 ASM_AI10 mnemonic RA, I10 58 - ASM_RI10 mnemonic RT, RA, R10 59 - ASM_RI10IDX mnemonic RT, I10(RA) 60 - 61 - RI16 category RI18 category 62 - ASM_I16W mnemonic I16W ASM_RI18 mnemonic RT, I18 63 - ASM_RI16 mnemonic RT, I16 64 - ASM_RI16W mnemonic RT, I16W 65 - 66 - RR category LBT category 67 - ASM_MFSPR mnemonic RT, SA ASM_LBT mnemonic brinst, brtarg 68 - ASM_MTSPR mnemonic SA, RT 69 - ASM_NOOP mnemonic LBTI category 70 - ASM_RA mnemonic RA ASM_LBTI mnemonic brinst, RA 71 - ASM_RAB mnemonic RA, RB 72 - ASM_RDCH mnemonic RT, CA 73 - ASM_RR mnemonic RT, RA, RB 74 - ASM_RT mnemonic RT 75 - ASM_RTA mnemonic RT, RA 76 - ASM_WRCH mnemonic CA, RT 77 - 78 - Note that RRR instructions have the names for RC and RT reversed from 79 - what's in the ISA, in order to put RT in the same position it appears 80 - for other formats. 81 - 82 - -=-=-= DEPENDENCY =-=-=- 83 - 84 - DEPENDENCY filed consists of 5 digits. This represents which register is used as source and which register is used as target. 85 - The first(most significant) digit is always 0. Then it is followd by RC, RB, RA and RT digits. 86 - If the digit is 0, this means the corresponding register is not used in the instruction. 87 - If the digit is 1, this means the corresponding register is used as a source in the instruction. 88 - If the digit is 2, this means the corresponding register is used as a target in the instruction. 89 - If the digit is 3, this means the corresponding register is used as both source and target in the instruction. 90 - For example, fms instruction has 00113 as the DEPENDENCY field. This means RC is not used in this operation, RB and RA are 91 - used as sources and RT is the target. 
92 - 93 - -=-=-= PIPE =-=-=- 94 - 95 - This field shows which execution pipe is used for the instruction 96 - 97 - pipe0 execution pipelines: 98 - FP6 SP floating pipeline 99 - FP7 integer operations executed in SP floating pipeline 100 - FPD DP floating pipeline 101 - FX2 FXU pipeline 102 - FX3 Rotate/Shift pipeline 103 - FXB Byte pipeline 104 - NOP No pipeline 105 - 106 - pipe1 execution pipelines: 107 - BR Branch pipeline 108 - LNOP No pipeline 109 - LS Load/Store pipeline 110 - SHUF Shuffle pipeline 111 - SPR SPR/CH pipeline 112 - 113 - */ 114 - 115 - #define _A0() {0} 116 - #define _A1(a) {1,a} 117 - #define _A2(a,b) {2,a,b} 118 - #define _A3(a,b,c) {3,a,b,c} 119 - #define _A4(a,b,c,d) {4,a,b,c,d} 120 - 121 - /* TAG FORMAT OPCODE MNEMONIC ASM_FORMAT DEPENDENCY PIPE COMMENT */ 122 - /* 0[RC][RB][RA][RT] */ 123 - /* 1:src, 2:target */ 124 - 125 - APUOP(M_BR, RI16, 0x190, "br", _A1(A_R18), 00000, BR) /* BRel IP<-IP+I16 */ 126 - APUOP(M_BRSL, RI16, 0x198, "brsl", _A2(A_T,A_R18), 00002, BR) /* BRelSetLink RT,IP<-IP,IP+I16 */ 127 - APUOP(M_BRA, RI16, 0x180, "bra", _A1(A_S18), 00000, BR) /* BRAbs IP<-I16 */ 128 - APUOP(M_BRASL, RI16, 0x188, "brasl", _A2(A_T,A_S18), 00002, BR) /* BRAbsSetLink RT,IP<-IP,I16 */ 129 - APUOP(M_FSMBI, RI16, 0x194, "fsmbi", _A2(A_T,A_X16), 00002, SHUF) /* FormSelMask%I RT<-fsm(I16) */ 130 - APUOP(M_LQA, RI16, 0x184, "lqa", _A2(A_T,A_S18), 00002, LS) /* LoadQAbs RT<-M[I16] */ 131 - APUOP(M_LQR, RI16, 0x19C, "lqr", _A2(A_T,A_R18), 00002, LS) /* LoadQRel RT<-M[IP+I16] */ 132 - APUOP(M_STOP, RR, 0x000, "stop", _A0(), 00000, BR) /* STOP stop */ 133 - APUOP(M_STOP2, RR, 0x000, "stop", _A1(A_U14), 00000, BR) /* STOP stop */ 134 - APUOP(M_STOPD, RR, 0x140, "stopd", _A3(A_T,A_A,A_B), 00111, BR) /* STOPD stop (with register dependencies) */ 135 - APUOP(M_LNOP, RR, 0x001, "lnop", _A0(), 00000, LNOP) /* LNOP no_operation */ 136 - APUOP(M_SYNC, RR, 0x002, "sync", _A0(), 00000, BR) /* SYNC flush_pipe */ 137 - APUOP(M_DSYNC, RR, 0x003, "dsync", _A0(), 
- 00000, BR) /* DSYNC flush_store_queue */
- APUOP(M_MFSPR, RR, 0x00c, "mfspr", _A2(A_T,A_S), 00002, SPR) /* MFSPR RT<-SA */
- APUOP(M_RDCH, RR, 0x00d, "rdch", _A2(A_T,A_H), 00002, SPR) /* ReaDCHannel RT<-CA:data */
- APUOP(M_RCHCNT, RR, 0x00f, "rchcnt", _A2(A_T,A_H), 00002, SPR) /* ReaDCHanCouNT RT<-CA:count */
- APUOP(M_HBRA, LBT, 0x080, "hbra", _A2(A_S11,A_S18), 00000, LS) /* HBRA BTB[B9]<-M[I16] */
- APUOP(M_HBRR, LBT, 0x090, "hbrr", _A2(A_S11,A_R18), 00000, LS) /* HBRR BTB[B9]<-M[IP+I16] */
- APUOP(M_BRZ, RI16, 0x100, "brz", _A2(A_T,A_R18), 00001, BR) /* BRZ IP<-IP+I16_if(RT) */
- APUOP(M_BRNZ, RI16, 0x108, "brnz", _A2(A_T,A_R18), 00001, BR) /* BRNZ IP<-IP+I16_if(RT) */
- APUOP(M_BRHZ, RI16, 0x110, "brhz", _A2(A_T,A_R18), 00001, BR) /* BRHZ IP<-IP+I16_if(RT) */
- APUOP(M_BRHNZ, RI16, 0x118, "brhnz", _A2(A_T,A_R18), 00001, BR) /* BRHNZ IP<-IP+I16_if(RT) */
- APUOP(M_STQA, RI16, 0x104, "stqa", _A2(A_T,A_S18), 00001, LS) /* SToreQAbs M[I16]<-RT */
- APUOP(M_STQR, RI16, 0x11C, "stqr", _A2(A_T,A_R18), 00001, LS) /* SToreQRel M[IP+I16]<-RT */
- APUOP(M_MTSPR, RR, 0x10c, "mtspr", _A2(A_S,A_T), 00001, SPR) /* MTSPR SA<-RT */
- APUOP(M_WRCH, RR, 0x10d, "wrch", _A2(A_H,A_T), 00001, SPR) /* ChanWRite CA<-RT */
- APUOP(M_LQD, RI10, 0x1a0, "lqd", _A4(A_T,A_S14,A_P,A_A), 00012, LS) /* LoadQDisp RT<-M[Ra+I10] */
- APUOP(M_BI, RR, 0x1a8, "bi", _A1(A_A), 00010, BR) /* BI IP<-RA */
- APUOP(M_BISL, RR, 0x1a9, "bisl", _A2(A_T,A_A), 00012, BR) /* BISL RT,IP<-IP,RA */
- APUOP(M_IRET, RR, 0x1aa, "iret", _A1(A_A), 00010, BR) /* IRET IP<-SRR0 */
- APUOP(M_IRET2, RR, 0x1aa, "iret", _A0(), 00010, BR) /* IRET IP<-SRR0 */
- APUOP(M_BISLED, RR, 0x1ab, "bisled", _A2(A_T,A_A), 00012, BR) /* BISLED RT,IP<-IP,RA_if(ext) */
- APUOP(M_HBR, LBTI, 0x1ac, "hbr", _A2(A_S11I,A_A), 00010, LS) /* HBR BTB[B9]<-M[Ra] */
- APUOP(M_FREST, RR, 0x1b8, "frest", _A2(A_T,A_A), 00012, SHUF) /* FREST RT<-recip(RA) */
- APUOP(M_FRSQEST, RR, 0x1b9, "frsqest", _A2(A_T,A_A), 00012, SHUF) /* FRSQEST RT<-rsqrt(RA) */
- APUOP(M_FSM, RR, 0x1b4, "fsm", _A2(A_T,A_A), 00012, SHUF) /* FormSelMask% RT<-expand(Ra) */
- APUOP(M_FSMH, RR, 0x1b5, "fsmh", _A2(A_T,A_A), 00012, SHUF) /* FormSelMask% RT<-expand(Ra) */
- APUOP(M_FSMB, RR, 0x1b6, "fsmb", _A2(A_T,A_A), 00012, SHUF) /* FormSelMask% RT<-expand(Ra) */
- APUOP(M_GB, RR, 0x1b0, "gb", _A2(A_T,A_A), 00012, SHUF) /* GatherBits% RT<-gather(RA) */
- APUOP(M_GBH, RR, 0x1b1, "gbh", _A2(A_T,A_A), 00012, SHUF) /* GatherBits% RT<-gather(RA) */
- APUOP(M_GBB, RR, 0x1b2, "gbb", _A2(A_T,A_A), 00012, SHUF) /* GatherBits% RT<-gather(RA) */
- APUOP(M_CBD, RI7, 0x1f4, "cbd", _A4(A_T,A_U7,A_P,A_A), 00012, SHUF) /* genCtl%%insD RT<-sta(Ra+I4,siz) */
- APUOP(M_CHD, RI7, 0x1f5, "chd", _A4(A_T,A_U7,A_P,A_A), 00012, SHUF) /* genCtl%%insD RT<-sta(Ra+I4,siz) */
- APUOP(M_CWD, RI7, 0x1f6, "cwd", _A4(A_T,A_U7,A_P,A_A), 00012, SHUF) /* genCtl%%insD RT<-sta(Ra+I4,siz) */
- APUOP(M_CDD, RI7, 0x1f7, "cdd", _A4(A_T,A_U7,A_P,A_A), 00012, SHUF) /* genCtl%%insD RT<-sta(Ra+I4,siz) */
- APUOP(M_ROTQBII, RI7, 0x1f8, "rotqbii", _A3(A_T,A_A,A_U3), 00012, SHUF) /* ROTQBII RT<-RA<<<I7 */
- APUOP(M_ROTQBYI, RI7, 0x1fc, "rotqbyi", _A3(A_T,A_A,A_S7N), 00012, SHUF) /* ROTQBYI RT<-RA<<<(I7*8) */
- APUOP(M_ROTQMBII, RI7, 0x1f9, "rotqmbii", _A3(A_T,A_A,A_S3), 00012, SHUF) /* ROTQMBII RT<-RA<<I7 */
- APUOP(M_ROTQMBYI, RI7, 0x1fd, "rotqmbyi", _A3(A_T,A_A,A_S6), 00012, SHUF) /* ROTQMBYI RT<-RA<<I7 */
- APUOP(M_SHLQBII, RI7, 0x1fb, "shlqbii", _A3(A_T,A_A,A_U3), 00012, SHUF) /* SHLQBII RT<-RA<<I7 */
- APUOP(M_SHLQBYI, RI7, 0x1ff, "shlqbyi", _A3(A_T,A_A,A_U5), 00012, SHUF) /* SHLQBYI RT<-RA<<I7 */
- APUOP(M_STQD, RI10, 0x120, "stqd", _A4(A_T,A_S14,A_P,A_A), 00011, LS) /* SToreQDisp M[Ra+I10]<-RT */
- APUOP(M_BIHNZ, RR, 0x12b, "bihnz", _A2(A_T,A_A), 00011, BR) /* BIHNZ IP<-RA_if(RT) */
- APUOP(M_BIHZ, RR, 0x12a, "bihz", _A2(A_T,A_A), 00011, BR) /* BIHZ IP<-RA_if(RT) */
- APUOP(M_BINZ, RR, 0x129, "binz", _A2(A_T,A_A), 00011, BR) /* BINZ IP<-RA_if(RT) */
- APUOP(M_BIZ, RR, 0x128, "biz", _A2(A_T,A_A), 00011, BR) /* BIZ IP<-RA_if(RT) */
- APUOP(M_CBX, RR, 0x1d4, "cbx", _A3(A_T,A_A,A_B), 00112, SHUF) /* genCtl%%insX RT<-sta(Ra+Rb,siz) */
- APUOP(M_CHX, RR, 0x1d5, "chx", _A3(A_T,A_A,A_B), 00112, SHUF) /* genCtl%%insX RT<-sta(Ra+Rb,siz) */
- APUOP(M_CWX, RR, 0x1d6, "cwx", _A3(A_T,A_A,A_B), 00112, SHUF) /* genCtl%%insX RT<-sta(Ra+Rb,siz) */
- APUOP(M_CDX, RR, 0x1d7, "cdx", _A3(A_T,A_A,A_B), 00112, SHUF) /* genCtl%%insX RT<-sta(Ra+Rb,siz) */
- APUOP(M_LQX, RR, 0x1c4, "lqx", _A3(A_T,A_A,A_B), 00112, LS) /* LoadQindeX RT<-M[Ra+Rb] */
- APUOP(M_ROTQBI, RR, 0x1d8, "rotqbi", _A3(A_T,A_A,A_B), 00112, SHUF) /* ROTQBI RT<-RA<<<Rb */
- APUOP(M_ROTQMBI, RR, 0x1d9, "rotqmbi", _A3(A_T,A_A,A_B), 00112, SHUF) /* ROTQMBI RT<-RA<<Rb */
- APUOP(M_SHLQBI, RR, 0x1db, "shlqbi", _A3(A_T,A_A,A_B), 00112, SHUF) /* SHLQBI RT<-RA<<Rb */
- APUOP(M_ROTQBY, RR, 0x1dc, "rotqby", _A3(A_T,A_A,A_B), 00112, SHUF) /* ROTQBY RT<-RA<<<(Rb*8) */
- APUOP(M_ROTQMBY, RR, 0x1dd, "rotqmby", _A3(A_T,A_A,A_B), 00112, SHUF) /* ROTQMBY RT<-RA<<Rb */
- APUOP(M_SHLQBY, RR, 0x1df, "shlqby", _A3(A_T,A_A,A_B), 00112, SHUF) /* SHLQBY RT<-RA<<Rb */
- APUOP(M_ROTQBYBI, RR, 0x1cc, "rotqbybi", _A3(A_T,A_A,A_B), 00112, SHUF) /* ROTQBYBI RT<-RA<<Rb */
- APUOP(M_ROTQMBYBI, RR, 0x1cd, "rotqmbybi", _A3(A_T,A_A,A_B), 00112, SHUF) /* ROTQMBYBI RT<-RA<<Rb */
- APUOP(M_SHLQBYBI, RR, 0x1cf, "shlqbybi", _A3(A_T,A_A,A_B), 00112, SHUF) /* SHLQBYBI RT<-RA<<Rb */
- APUOP(M_STQX, RR, 0x144, "stqx", _A3(A_T,A_A,A_B), 00111, LS) /* SToreQindeX M[Ra+Rb]<-RT */
- APUOP(M_SHUFB, RRR, 0x580, "shufb", _A4(A_C,A_A,A_B,A_T), 02111, SHUF) /* SHUFfleBytes RC<-f(RA,RB,RT) */
- APUOP(M_IL, RI16, 0x204, "il", _A2(A_T,A_S16), 00002, FX2) /* ImmLoad RT<-sxt(I16) */
- APUOP(M_ILH, RI16, 0x20c, "ilh", _A2(A_T,A_X16), 00002, FX2) /* ImmLoadH RT<-I16 */
- APUOP(M_ILHU, RI16, 0x208, "ilhu", _A2(A_T,A_X16), 00002, FX2) /* ImmLoadHUpper RT<-I16<<16 */
- APUOP(M_ILA, RI18, 0x210, "ila", _A2(A_T,A_U18), 00002, FX2) /* ImmLoadAddr RT<-zxt(I18) */
- APUOP(M_NOP, RR, 0x201, "nop", _A1(A_T), 00000, NOP) /* XNOP no_operation */
- APUOP(M_NOP2, RR, 0x201, "nop", _A0(), 00000, NOP) /* XNOP no_operation */
- APUOP(M_IOHL, RI16, 0x304, "iohl", _A2(A_T,A_X16), 00003, FX2) /* AddImmeXt RT<-RT+sxt(I16) */
- APUOP(M_ANDBI, RI10, 0x0b0, "andbi", _A3(A_T,A_A,A_S10B), 00012, FX2) /* AND%I RT<-RA&I10 */
- APUOP(M_ANDHI, RI10, 0x0a8, "andhi", _A3(A_T,A_A,A_S10), 00012, FX2) /* AND%I RT<-RA&I10 */
- APUOP(M_ANDI, RI10, 0x0a0, "andi", _A3(A_T,A_A,A_S10), 00012, FX2) /* AND%I RT<-RA&I10 */
- APUOP(M_ORBI, RI10, 0x030, "orbi", _A3(A_T,A_A,A_S10B), 00012, FX2) /* OR%I RT<-RA|I10 */
- APUOP(M_ORHI, RI10, 0x028, "orhi", _A3(A_T,A_A,A_S10), 00012, FX2) /* OR%I RT<-RA|I10 */
- APUOP(M_ORI, RI10, 0x020, "ori", _A3(A_T,A_A,A_S10), 00012, FX2) /* OR%I RT<-RA|I10 */
- APUOP(M_ORX, RR, 0x1f0, "orx", _A2(A_T,A_A), 00012, BR) /* ORX RT<-RA.w0|RA.w1|RA.w2|RA.w3 */
- APUOP(M_XORBI, RI10, 0x230, "xorbi", _A3(A_T,A_A,A_S10B), 00012, FX2) /* XOR%I RT<-RA^I10 */
- APUOP(M_XORHI, RI10, 0x228, "xorhi", _A3(A_T,A_A,A_S10), 00012, FX2) /* XOR%I RT<-RA^I10 */
- APUOP(M_XORI, RI10, 0x220, "xori", _A3(A_T,A_A,A_S10), 00012, FX2) /* XOR%I RT<-RA^I10 */
- APUOP(M_AHI, RI10, 0x0e8, "ahi", _A3(A_T,A_A,A_S10), 00012, FX2) /* Add%Immed RT<-RA+I10 */
- APUOP(M_AI, RI10, 0x0e0, "ai", _A3(A_T,A_A,A_S10), 00012, FX2) /* Add%Immed RT<-RA+I10 */
- APUOP(M_SFHI, RI10, 0x068, "sfhi", _A3(A_T,A_A,A_S10), 00012, FX2) /* SubFrom%Imm RT<-I10-RA */
- APUOP(M_SFI, RI10, 0x060, "sfi", _A3(A_T,A_A,A_S10), 00012, FX2) /* SubFrom%Imm RT<-I10-RA */
- APUOP(M_CGTBI, RI10, 0x270, "cgtbi", _A3(A_T,A_A,A_S10B), 00012, FX2) /* CGT%I RT<-(RA>I10) */
- APUOP(M_CGTHI, RI10, 0x268, "cgthi", _A3(A_T,A_A,A_S10), 00012, FX2) /* CGT%I RT<-(RA>I10) */
- APUOP(M_CGTI, RI10, 0x260, "cgti", _A3(A_T,A_A,A_S10), 00012, FX2) /* CGT%I RT<-(RA>I10) */
- APUOP(M_CLGTBI, RI10, 0x2f0, "clgtbi", _A3(A_T,A_A,A_S10B), 00012, FX2) /* CLGT%I RT<-(RA>I10) */
- APUOP(M_CLGTHI, RI10, 0x2e8, "clgthi", _A3(A_T,A_A,A_S10), 00012, FX2) /* CLGT%I RT<-(RA>I10) */
- APUOP(M_CLGTI, RI10, 0x2e0, "clgti", _A3(A_T,A_A,A_S10), 00012, FX2) /* CLGT%I RT<-(RA>I10) */
- APUOP(M_CEQBI, RI10, 0x3f0, "ceqbi", _A3(A_T,A_A,A_S10B), 00012, FX2) /* CEQ%I RT<-(RA=I10) */
- APUOP(M_CEQHI, RI10, 0x3e8, "ceqhi", _A3(A_T,A_A,A_S10), 00012, FX2) /* CEQ%I RT<-(RA=I10) */
- APUOP(M_CEQI, RI10, 0x3e0, "ceqi", _A3(A_T,A_A,A_S10), 00012, FX2) /* CEQ%I RT<-(RA=I10) */
- APUOP(M_HGTI, RI10, 0x278, "hgti", _A3(A_T,A_A,A_S10), 00010, FX2) /* HaltGTI halt_if(RA>I10) */
- APUOP(M_HGTI2, RI10, 0x278, "hgti", _A2(A_A,A_S10), 00010, FX2) /* HaltGTI halt_if(RA>I10) */
- APUOP(M_HLGTI, RI10, 0x2f8, "hlgti", _A3(A_T,A_A,A_S10), 00010, FX2) /* HaltLGTI halt_if(RA>I10) */
- APUOP(M_HLGTI2, RI10, 0x2f8, "hlgti", _A2(A_A,A_S10), 00010, FX2) /* HaltLGTI halt_if(RA>I10) */
- APUOP(M_HEQI, RI10, 0x3f8, "heqi", _A3(A_T,A_A,A_S10), 00010, FX2) /* HaltEQImm halt_if(RA=I10) */
- APUOP(M_HEQI2, RI10, 0x3f8, "heqi", _A2(A_A,A_S10), 00010, FX2) /* HaltEQImm halt_if(RA=I10) */
- APUOP(M_MPYI, RI10, 0x3a0, "mpyi", _A3(A_T,A_A,A_S10), 00012, FP7) /* MPYI RT<-RA*I10 */
- APUOP(M_MPYUI, RI10, 0x3a8, "mpyui", _A3(A_T,A_A,A_S10), 00012, FP7) /* MPYUI RT<-RA*I10 */
- APUOP(M_CFLTS, RI8, 0x3b0, "cflts", _A3(A_T,A_A,A_U7A), 00012, FP7) /* CFLTS RT<-int(RA,I8) */
- APUOP(M_CFLTU, RI8, 0x3b2, "cfltu", _A3(A_T,A_A,A_U7A), 00012, FP7) /* CFLTU RT<-int(RA,I8) */
- APUOP(M_CSFLT, RI8, 0x3b4, "csflt", _A3(A_T,A_A,A_U7B), 00012, FP7) /* CSFLT RT<-flt(RA,I8) */
- APUOP(M_CUFLT, RI8, 0x3b6, "cuflt", _A3(A_T,A_A,A_U7B), 00012, FP7) /* CUFLT RT<-flt(RA,I8) */
- APUOP(M_FESD, RR, 0x3b8, "fesd", _A2(A_T,A_A), 00012, FPD) /* FESD RT<-double(RA) */
- APUOP(M_FRDS, RR, 0x3b9, "frds", _A2(A_T,A_A), 00012, FPD) /* FRDS RT<-single(RA) */
- APUOP(M_FSCRRD, RR, 0x398, "fscrrd", _A1(A_T), 00002, FPD) /* FSCRRD RT<-FP_status */
- APUOP(M_FSCRWR, RR, 0x3ba, "fscrwr", _A2(A_T,A_A), 00010, FP7) /* FSCRWR FP_status<-RA */
- APUOP(M_FSCRWR2, RR, 0x3ba, "fscrwr", _A1(A_A), 00010, FP7) /* FSCRWR FP_status<-RA */
- APUOP(M_CLZ, RR, 0x2a5, "clz", _A2(A_T,A_A), 00012, FX2) /* CLZ RT<-clz(RA) */
- APUOP(M_CNTB, RR, 0x2b4, "cntb", _A2(A_T,A_A), 00012, FXB) /* CNT RT<-pop(RA) */
- APUOP(M_XSBH, RR, 0x2b6, "xsbh", _A2(A_T,A_A), 00012, FX2) /* eXtSignBtoH RT<-sign_ext(RA) */
- APUOP(M_XSHW, RR, 0x2ae, "xshw", _A2(A_T,A_A), 00012, FX2) /* eXtSignHtoW RT<-sign_ext(RA) */
- APUOP(M_XSWD, RR, 0x2a6, "xswd", _A2(A_T,A_A), 00012, FX2) /* eXtSignWtoD RT<-sign_ext(RA) */
- APUOP(M_ROTI, RI7, 0x078, "roti", _A3(A_T,A_A,A_S7N), 00012, FX3) /* ROT%I RT<-RA<<<I7 */
- APUOP(M_ROTMI, RI7, 0x079, "rotmi", _A3(A_T,A_A,A_S7), 00012, FX3) /* ROT%MI RT<-RA<<I7 */
- APUOP(M_ROTMAI, RI7, 0x07a, "rotmai", _A3(A_T,A_A,A_S7), 00012, FX3) /* ROTMA%I RT<-RA<<I7 */
- APUOP(M_SHLI, RI7, 0x07b, "shli", _A3(A_T,A_A,A_U6), 00012, FX3) /* SHL%I RT<-RA<<I7 */
- APUOP(M_ROTHI, RI7, 0x07c, "rothi", _A3(A_T,A_A,A_S7N), 00012, FX3) /* ROT%I RT<-RA<<<I7 */
- APUOP(M_ROTHMI, RI7, 0x07d, "rothmi", _A3(A_T,A_A,A_S6), 00012, FX3) /* ROT%MI RT<-RA<<I7 */
- APUOP(M_ROTMAHI, RI7, 0x07e, "rotmahi", _A3(A_T,A_A,A_S6), 00012, FX3) /* ROTMA%I RT<-RA<<I7 */
- APUOP(M_SHLHI, RI7, 0x07f, "shlhi", _A3(A_T,A_A,A_U5), 00012, FX3) /* SHL%I RT<-RA<<I7 */
- APUOP(M_A, RR, 0x0c0, "a", _A3(A_T,A_A,A_B), 00112, FX2) /* Add% RT<-RA+RB */
- APUOP(M_AH, RR, 0x0c8, "ah", _A3(A_T,A_A,A_B), 00112, FX2) /* Add% RT<-RA+RB */
- APUOP(M_SF, RR, 0x040, "sf", _A3(A_T,A_A,A_B), 00112, FX2) /* SubFrom% RT<-RB-RA */
- APUOP(M_SFH, RR, 0x048, "sfh", _A3(A_T,A_A,A_B), 00112, FX2) /* SubFrom% RT<-RB-RA */
- APUOP(M_CGT, RR, 0x240, "cgt", _A3(A_T,A_A,A_B), 00112, FX2) /* CGT% RT<-(RA>RB) */
- APUOP(M_CGTB, RR, 0x250, "cgtb", _A3(A_T,A_A,A_B), 00112, FX2) /* CGT% RT<-(RA>RB) */
- APUOP(M_CGTH, RR, 0x248, "cgth", _A3(A_T,A_A,A_B), 00112, FX2) /* CGT% RT<-(RA>RB) */
- APUOP(M_CLGT, RR, 0x2c0, "clgt", _A3(A_T,A_A,A_B), 00112, FX2) /* CLGT% RT<-(RA>RB) */
- APUOP(M_CLGTB, RR, 0x2d0, "clgtb", _A3(A_T,A_A,A_B), 00112, FX2) /* CLGT% RT<-(RA>RB) */
- APUOP(M_CLGTH, RR, 0x2c8, "clgth", _A3(A_T,A_A,A_B), 00112, FX2) /* CLGT% RT<-(RA>RB) */
- APUOP(M_CEQ, RR, 0x3c0, "ceq", _A3(A_T,A_A,A_B), 00112, FX2) /* CEQ% RT<-(RA=RB) */
- APUOP(M_CEQB, RR, 0x3d0, "ceqb", _A3(A_T,A_A,A_B), 00112, FX2) /* CEQ% RT<-(RA=RB) */
- APUOP(M_CEQH, RR, 0x3c8, "ceqh", _A3(A_T,A_A,A_B), 00112, FX2) /* CEQ% RT<-(RA=RB) */
- APUOP(M_HGT, RR, 0x258, "hgt", _A3(A_T,A_A,A_B), 00110, FX2) /* HaltGT halt_if(RA>RB) */
- APUOP(M_HGT2, RR, 0x258, "hgt", _A2(A_A,A_B), 00110, FX2) /* HaltGT halt_if(RA>RB) */
- APUOP(M_HLGT, RR, 0x2d8, "hlgt", _A3(A_T,A_A,A_B), 00110, FX2) /* HaltLGT halt_if(RA>RB) */
- APUOP(M_HLGT2, RR, 0x2d8, "hlgt", _A2(A_A,A_B), 00110, FX2) /* HaltLGT halt_if(RA>RB) */
- APUOP(M_HEQ, RR, 0x3d8, "heq", _A3(A_T,A_A,A_B), 00110, FX2) /* HaltEQ halt_if(RA=RB) */
- APUOP(M_HEQ2, RR, 0x3d8, "heq", _A2(A_A,A_B), 00110, FX2) /* HaltEQ halt_if(RA=RB) */
- APUOP(M_FCEQ, RR, 0x3c2, "fceq", _A3(A_T,A_A,A_B), 00112, FX2) /* FCEQ RT<-(RA=RB) */
- APUOP(M_FCMEQ, RR, 0x3ca, "fcmeq", _A3(A_T,A_A,A_B), 00112, FX2) /* FCMEQ RT<-(|RA|=|RB|) */
- APUOP(M_FCGT, RR, 0x2c2, "fcgt", _A3(A_T,A_A,A_B), 00112, FX2) /* FCGT RT<-(RA<RB) */
- APUOP(M_FCMGT, RR, 0x2ca, "fcmgt", _A3(A_T,A_A,A_B), 00112, FX2) /* FCMGT RT<-(|RA|<|RB|) */
- APUOP(M_AND, RR, 0x0c1, "and", _A3(A_T,A_A,A_B), 00112, FX2) /* AND RT<-RA&RB */
- APUOP(M_NAND, RR, 0x0c9, "nand", _A3(A_T,A_A,A_B), 00112, FX2) /* NAND RT<-!(RA&RB) */
- APUOP(M_OR, RR, 0x041, "or", _A3(A_T,A_A,A_B), 00112, FX2) /* OR RT<-RA|RB */
- APUOP(M_NOR, RR, 0x049, "nor", _A3(A_T,A_A,A_B), 00112, FX2) /* NOR RT<-!(RA&RB) */
- APUOP(M_XOR, RR, 0x241, "xor", _A3(A_T,A_A,A_B), 00112, FX2) /* XOR RT<-RA^RB */
- APUOP(M_EQV, RR, 0x249, "eqv", _A3(A_T,A_A,A_B), 00112, FX2) /* EQuiValent RT<-!(RA^RB) */
- APUOP(M_ANDC, RR, 0x2c1, "andc", _A3(A_T,A_A,A_B), 00112, FX2) /* ANDComplement RT<-RA&!RB */
- APUOP(M_ORC, RR, 0x2c9, "orc", _A3(A_T,A_A,A_B), 00112, FX2) /* ORComplement RT<-RA|!RB */
- APUOP(M_ABSDB, RR, 0x053, "absdb", _A3(A_T,A_A,A_B), 00112, FXB) /* ABSoluteDiff RT<-|RA-RB| */
- APUOP(M_AVGB, RR, 0x0d3, "avgb", _A3(A_T,A_A,A_B), 00112, FXB) /* AVG% RT<-(RA+RB+1)/2 */
- APUOP(M_SUMB, RR, 0x253, "sumb", _A3(A_T,A_A,A_B), 00112, FXB) /* SUM% RT<-f(RA,RB) */
- APUOP(M_DFA, RR, 0x2cc, "dfa", _A3(A_T,A_A,A_B), 00112, FPD) /* DFAdd RT<-RA+RB */
- APUOP(M_DFM, RR, 0x2ce, "dfm", _A3(A_T,A_A,A_B), 00112, FPD) /* DFMul RT<-RA*RB */
- APUOP(M_DFS, RR, 0x2cd, "dfs", _A3(A_T,A_A,A_B), 00112, FPD) /* DFSub RT<-RA-RB */
- APUOP(M_FA, RR, 0x2c4, "fa", _A3(A_T,A_A,A_B), 00112, FP6) /* FAdd RT<-RA+RB */
- APUOP(M_FM, RR, 0x2c6, "fm", _A3(A_T,A_A,A_B), 00112, FP6) /* FMul RT<-RA*RB */
- APUOP(M_FS, RR, 0x2c5, "fs", _A3(A_T,A_A,A_B), 00112, FP6) /* FSub RT<-RA-RB */
- APUOP(M_MPY, RR, 0x3c4, "mpy", _A3(A_T,A_A,A_B), 00112, FP7) /* MPY RT<-RA*RB */
- APUOP(M_MPYH, RR, 0x3c5, "mpyh", _A3(A_T,A_A,A_B), 00112, FP7) /* MPYH RT<-(RAh*RB)<<16 */
- APUOP(M_MPYHH, RR, 0x3c6, "mpyhh", _A3(A_T,A_A,A_B), 00112, FP7) /* MPYHH RT<-RAh*RBh */
- APUOP(M_MPYHHU, RR, 0x3ce, "mpyhhu", _A3(A_T,A_A,A_B), 00112, FP7) /* MPYHHU RT<-RAh*RBh */
- APUOP(M_MPYS, RR, 0x3c7, "mpys", _A3(A_T,A_A,A_B), 00112, FP7) /* MPYS RT<-(RA*RB)>>16 */
- APUOP(M_MPYU, RR, 0x3cc, "mpyu", _A3(A_T,A_A,A_B), 00112, FP7) /* MPYU RT<-RA*RB */
- APUOP(M_FI, RR, 0x3d4, "fi", _A3(A_T,A_A,A_B), 00112, FP7) /* FInterpolate RT<-f(RA,RB) */
- APUOP(M_ROT, RR, 0x058, "rot", _A3(A_T,A_A,A_B), 00112, FX3) /* ROT% RT<-RA<<<RB */
- APUOP(M_ROTM, RR, 0x059, "rotm", _A3(A_T,A_A,A_B), 00112, FX3) /* ROT%M RT<-RA<<Rb */
- APUOP(M_ROTMA, RR, 0x05a, "rotma", _A3(A_T,A_A,A_B), 00112, FX3) /* ROTMA% RT<-RA<<Rb */
- APUOP(M_SHL, RR, 0x05b, "shl", _A3(A_T,A_A,A_B), 00112, FX3) /* SHL% RT<-RA<<Rb */
- APUOP(M_ROTH, RR, 0x05c, "roth", _A3(A_T,A_A,A_B), 00112, FX3) /* ROT% RT<-RA<<<RB */
- APUOP(M_ROTHM, RR, 0x05d, "rothm", _A3(A_T,A_A,A_B), 00112, FX3) /* ROT%M RT<-RA<<Rb */
- APUOP(M_ROTMAH, RR, 0x05e, "rotmah", _A3(A_T,A_A,A_B), 00112, FX3) /* ROTMA% RT<-RA<<Rb */
- APUOP(M_SHLH, RR, 0x05f, "shlh", _A3(A_T,A_A,A_B), 00112, FX3) /* SHL% RT<-RA<<Rb */
- APUOP(M_MPYHHA, RR, 0x346, "mpyhha", _A3(A_T,A_A,A_B), 00113, FP7) /* MPYHHA RT<-RAh*RBh+RT */
- APUOP(M_MPYHHAU, RR, 0x34e, "mpyhhau", _A3(A_T,A_A,A_B), 00113, FP7) /* MPYHHAU RT<-RAh*RBh+RT */
- APUOP(M_DFMA, RR, 0x35c, "dfma", _A3(A_T,A_A,A_B), 00113, FPD) /* DFMAdd RT<-RT+RA*RB */
- APUOP(M_DFMS, RR, 0x35d, "dfms", _A3(A_T,A_A,A_B), 00113, FPD) /* DFMSub RT<-RA*RB-RT */
- APUOP(M_DFNMS, RR, 0x35e, "dfnms", _A3(A_T,A_A,A_B), 00113, FPD) /* DFNMSub RT<-RT-RA*RB */
- APUOP(M_DFNMA, RR, 0x35f, "dfnma", _A3(A_T,A_A,A_B), 00113, FPD) /* DFNMAdd RT<-(-RT)-RA*RB */
- APUOP(M_FMA, RRR, 0x700, "fma", _A4(A_C,A_A,A_B,A_T), 02111, FP6) /* FMAdd RC<-RT+RA*RB */
- APUOP(M_FMS, RRR, 0x780, "fms", _A4(A_C,A_A,A_B,A_T), 02111, FP6) /* FMSub RC<-RA*RB-RT */
- APUOP(M_FNMS, RRR, 0x680, "fnms", _A4(A_C,A_A,A_B,A_T), 02111, FP6) /* FNMSub RC<-RT-RA*RB */
- APUOP(M_MPYA, RRR, 0x600, "mpya", _A4(A_C,A_A,A_B,A_T), 02111, FP7) /* MPYA RC<-RA*RB+RT */
- APUOP(M_SELB, RRR, 0x400, "selb", _A4(A_C,A_A,A_B,A_T), 02111, FX2) /* SELectBits RC<-RA&RT|RB&!RT */
- /* for system function call, this uses op-code of mtspr */
- APUOP(M_SYSCALL, RI7, 0x10c, "syscall", _A3(A_T,A_A,A_S7N), 00002, SPR) /* System Call */
- /*
- pseudo instruction:
- system call
- value of I9 operation
- 0 halt
- 1 rt[0] = open(MEM[ra[0]], ra[1])
- 2 rt[0] = close(ra[0])
- 3 rt[0] = read(ra[0], MEM[ra[1]], ra[2])
- 4 rt[0] = write(ra[0], MEM[ra[1]], ra[2])
- 5 printf(MEM[ra[0]], ra[1], ra[2], ra[3])
- 42 rt[0] = clock()
- 52 rt[0] = lseek(ra0, ra1, ra2)
- 
- */
- 
- 
- /* new multiprecision add/sub */
- APUOP(M_ADDX, RR, 0x340, "addx", _A3(A_T,A_A,A_B), 00113, FX2) /* Add_eXtended RT<-RA+RB+RT */
- APUOP(M_CG, RR, 0x0c2, "cg", _A3(A_T,A_A,A_B), 00112, FX2) /* CarryGenerate RT<-cout(RA+RB) */
- APUOP(M_CGX, RR, 0x342, "cgx", _A3(A_T,A_A,A_B), 00113, FX2) /* CarryGen_eXtd RT<-cout(RA+RB+RT) */
- APUOP(M_SFX, RR, 0x341, "sfx", _A3(A_T,A_A,A_B), 00113, FX2) /* Add_eXtended RT<-RA+RB+RT */
- APUOP(M_BG, RR, 0x042, "bg", _A3(A_T,A_A,A_B), 00112, FX2) /* CarryGenerate RT<-cout(RA+RB) */
- APUOP(M_BGX, RR, 0x343, "bgx", _A3(A_T,A_A,A_B), 00113, FX2) /* CarryGen_eXtd RT<-cout(RA+RB+RT) */
- 
- /*
- 
- The following ops are a subset of above except with feature bits set.
- Feature bits are bits 11-17 of the instruction:
- 
-   11 - C & P feature bit
-   12 - disable interrupts
-   13 - enable interrupts
- 
- */
- APUOPFB(M_BID, RR, 0x1a8, 0x20, "bid", _A1(A_A), 00010, BR) /* BI IP<-RA */
- APUOPFB(M_BIE, RR, 0x1a8, 0x10, "bie", _A1(A_A), 00010, BR) /* BI IP<-RA */
- APUOPFB(M_BISLD, RR, 0x1a9, 0x20, "bisld", _A2(A_T,A_A), 00012, BR) /* BISL RT,IP<-IP,RA */
- APUOPFB(M_BISLE, RR, 0x1a9, 0x10, "bisle", _A2(A_T,A_A), 00012, BR) /* BISL RT,IP<-IP,RA */
- APUOPFB(M_IRETD, RR, 0x1aa, 0x20, "iretd", _A1(A_A), 00010, BR) /* IRET IP<-SRR0 */
- APUOPFB(M_IRETD2, RR, 0x1aa, 0x20, "iretd", _A0(), 00010, BR) /* IRET IP<-SRR0 */
- APUOPFB(M_IRETE, RR, 0x1aa, 0x10, "irete", _A1(A_A), 00010, BR) /* IRET IP<-SRR0 */
- APUOPFB(M_IRETE2, RR, 0x1aa, 0x10, "irete", _A0(), 00010, BR) /* IRET IP<-SRR0 */
- APUOPFB(M_BISLEDD, RR, 0x1ab, 0x20, "bisledd", _A2(A_T,A_A), 00012, BR) /* BISLED RT,IP<-IP,RA_if(ext) */
- APUOPFB(M_BISLEDE, RR, 0x1ab, 0x10, "bislede", _A2(A_T,A_A), 00012, BR) /* BISLED RT,IP<-IP,RA_if(ext) */
- APUOPFB(M_BIHNZD, RR, 0x12b, 0x20, "bihnzd", _A2(A_T,A_A), 00011, BR) /* BIHNZ IP<-RA_if(RT) */
- APUOPFB(M_BIHNZE, RR, 0x12b, 0x10, "bihnze", _A2(A_T,A_A), 00011, BR) /* BIHNZ IP<-RA_if(RT) */
- APUOPFB(M_BIHZD, RR, 0x12a, 0x20, "bihzd", _A2(A_T,A_A), 00011, BR) /* BIHZ IP<-RA_if(RT) */
- APUOPFB(M_BIHZE, RR, 0x12a, 0x10, "bihze", _A2(A_T,A_A), 00011, BR) /* BIHZ IP<-RA_if(RT) */
- APUOPFB(M_BINZD, RR, 0x129, 0x20, "binzd", _A2(A_T,A_A), 00011, BR) /* BINZ IP<-RA_if(RT) */
- APUOPFB(M_BINZE, RR, 0x129, 0x10, "binze", _A2(A_T,A_A), 00011, BR) /* BINZ IP<-RA_if(RT) */
- APUOPFB(M_BIZD, RR, 0x128, 0x20, "bizd", _A2(A_T,A_A), 00011, BR) /* BIZ IP<-RA_if(RT) */
- APUOPFB(M_BIZE, RR, 0x128, 0x10, "bize", _A2(A_T,A_A), 00011, BR) /* BIZ IP<-RA_if(RT) */
- APUOPFB(M_SYNCC, RR, 0x002, 0x40, "syncc", _A0(), 00000, BR) /* SYNCC flush_pipe */
- APUOPFB(M_HBRP, LBTI, 0x1ac, 0x40, "hbrp", _A0(), 00010, LS) /* HBR BTB[B9]<-M[Ra] */
- 
- /* Synonyms required by the AS manual. */
- APUOP(M_LR, RI10, 0x020, "lr", _A2(A_T,A_A), 00012, FX2) /* OR%I RT<-RA|I10 */
- APUOP(M_BIHT, RR, 0x12b, "biht", _A2(A_T,A_A), 00011, BR) /* BIHNZ IP<-RA_if(RT) */
- APUOP(M_BIHF, RR, 0x12a, "bihf", _A2(A_T,A_A), 00011, BR) /* BIHZ IP<-RA_if(RT) */
- APUOP(M_BIT, RR, 0x129, "bit", _A2(A_T,A_A), 00011, BR) /* BINZ IP<-RA_if(RT) */
- APUOP(M_BIF, RR, 0x128, "bif", _A2(A_T,A_A), 00011, BR) /* BIZ IP<-RA_if(RT) */
- APUOPFB(M_BIHTD, RR, 0x12b, 0x20, "bihtd", _A2(A_T,A_A), 00011, BR) /* BIHNF IP<-RA_if(RT) */
- APUOPFB(M_BIHTE, RR, 0x12b, 0x10, "bihte", _A2(A_T,A_A), 00011, BR) /* BIHNF IP<-RA_if(RT) */
- APUOPFB(M_BIHFD, RR, 0x12a, 0x20, "bihfd", _A2(A_T,A_A), 00011, BR) /* BIHZ IP<-RA_if(RT) */
- APUOPFB(M_BIHFE, RR, 0x12a, 0x10, "bihfe", _A2(A_T,A_A), 00011, BR) /* BIHZ IP<-RA_if(RT) */
- APUOPFB(M_BITD, RR, 0x129, 0x20, "bitd", _A2(A_T,A_A), 00011, BR) /* BINF IP<-RA_if(RT) */
- APUOPFB(M_BITE, RR, 0x129, 0x10, "bite", _A2(A_T,A_A), 00011, BR) /* BINF IP<-RA_if(RT) */
- APUOPFB(M_BIFD, RR, 0x128, 0x20, "bifd", _A2(A_T,A_A), 00011, BR) /* BIZ IP<-RA_if(RT) */
- APUOPFB(M_BIFE, RR, 0x128, 0x10, "bife", _A2(A_T,A_A), 00011, BR) /* BIZ IP<-RA_if(RT) */
- 
- #undef _A0
- #undef _A1
- #undef _A2
- #undef _A3
- #undef _A4
arch/powerpc/xmon/spu-opc.c (-34)
···
- // SPDX-License-Identifier: GPL-2.0-or-later
- /* SPU opcode list
- 
-    Copyright 2006 Free Software Foundation, Inc.
- 
-    This file is part of GDB, GAS, and the GNU binutils.
- 
-  */
- 
- #include <linux/kernel.h>
- #include <linux/bug.h>
- #include "spu.h"
- 
- /* This file holds the Spu opcode table */
- 
- 
- /*
-    Example contents of spu-insn.h
-       id_tag  mode  mode  type  opcode  mnemonic  asmtype  dependency  FPU  L/S?  branch?  instruction
-                 QUAD  WORD  (0,RC,RB,RA,RT)  latency
-    APUOP(M_LQD, 1, 0, RI9, 0x1f8, "lqd", ASM_RI9IDX, 00012, FXU, 1, 0)  Load Quadword d-form
-  */
- 
- const struct spu_opcode spu_opcodes[] = {
- #define APUOP(TAG,MACFORMAT,OPCODE,MNEMONIC,ASMFORMAT,DEP,PIPE) \
- 	{ MACFORMAT, OPCODE, MNEMONIC, ASMFORMAT },
- #define APUOPFB(TAG,MACFORMAT,OPCODE,FB,MNEMONIC,ASMFORMAT,DEP,PIPE) \
- 	{ MACFORMAT, OPCODE, MNEMONIC, ASMFORMAT },
- #include "spu-insns.h"
- #undef APUOP
- #undef APUOPFB
- };
- 
- const int spu_num_opcodes = ARRAY_SIZE(spu_opcodes);
arch/powerpc/xmon/spu.h (-115)
···
- /* SPDX-License-Identifier: GPL-2.0-or-later */
- /* SPU ELF support for BFD.
- 
-    Copyright 2006 Free Software Foundation, Inc.
- 
-    This file is part of GDB, GAS, and the GNU binutils.
- 
-  */
- 
- 
- /* These two enums are from rel_apu/common/spu_asm_format.h */
- /* definition of instruction format */
- typedef enum {
- 	RRR,
- 	RI18,
- 	RI16,
- 	RI10,
- 	RI8,
- 	RI7,
- 	RR,
- 	LBT,
- 	LBTI,
- 	IDATA,
- 	UNKNOWN_IFORMAT
- } spu_iformat;
- 
- /* These values describe assembly instruction arguments.  They indicate
-  * how to encode, range checking and which relocation to use. */
- typedef enum {
- 	A_T,	/* register at pos 0 */
- 	A_A,	/* register at pos 7 */
- 	A_B,	/* register at pos 14 */
- 	A_C,	/* register at pos 21 */
- 	A_S,	/* special purpose register at pos 7 */
- 	A_H,	/* channel register at pos 7 */
- 	A_P,	/* parenthesis, this has to separate regs from immediates */
- 	A_S3,
- 	A_S6,
- 	A_S7N,
- 	A_S7,
- 	A_U7A,
- 	A_U7B,
- 	A_S10B,
- 	A_S10,
- 	A_S11,
- 	A_S11I,
- 	A_S14,
- 	A_S16,
- 	A_S18,
- 	A_R18,
- 	A_U3,
- 	A_U5,
- 	A_U6,
- 	A_U7,
- 	A_U14,
- 	A_X16,
- 	A_U18,
- 	A_MAX
- } spu_aformat;
- 
- enum spu_insns {
- #define APUOP(TAG,MACFORMAT,OPCODE,MNEMONIC,ASMFORMAT,DEP,PIPE) \
- 	TAG,
- #define APUOPFB(TAG,MACFORMAT,OPCODE,FB,MNEMONIC,ASMFORMAT,DEP,PIPE) \
- 	TAG,
- #include "spu-insns.h"
- #undef APUOP
- #undef APUOPFB
- 	M_SPU_MAX
- };
- 
- struct spu_opcode
- {
- 	spu_iformat insn_type;
- 	unsigned int opcode;
- 	char *mnemonic;
- 	int arg[5];
- };
- 
- #define SIGNED_EXTRACT(insn,size,pos) (((int)((insn) << (32-size-pos))) >> (32-size))
- #define UNSIGNED_EXTRACT(insn,size,pos) (((insn) >> pos) & ((1 << size)-1))
- 
- #define DECODE_INSN_RT(insn) (insn & 0x7f)
- #define DECODE_INSN_RA(insn) ((insn >> 7) & 0x7f)
- #define DECODE_INSN_RB(insn) ((insn >> 14) & 0x7f)
- #define DECODE_INSN_RC(insn) ((insn >> 21) & 0x7f)
- 
- #define DECODE_INSN_I10(insn) SIGNED_EXTRACT(insn,10,14)
- #define DECODE_INSN_U10(insn) UNSIGNED_EXTRACT(insn,10,14)
- 
- /* For branching, immediate loads, hbr and lqa/stqa. */
- #define DECODE_INSN_I16(insn) SIGNED_EXTRACT(insn,16,7)
- #define DECODE_INSN_U16(insn) UNSIGNED_EXTRACT(insn,16,7)
- 
- /* for stop */
- #define DECODE_INSN_U14(insn) UNSIGNED_EXTRACT(insn,14,0)
- 
- /* For ila */
- #define DECODE_INSN_I18(insn) SIGNED_EXTRACT(insn,18,7)
- #define DECODE_INSN_U18(insn) UNSIGNED_EXTRACT(insn,18,7)
- 
- /* For rotate and shift and generate control mask */
- #define DECODE_INSN_I7(insn) SIGNED_EXTRACT(insn,7,14)
- #define DECODE_INSN_U7(insn) UNSIGNED_EXTRACT(insn,7,14)
- 
- /* For float <-> int conversion */
- #define DECODE_INSN_I8(insn) SIGNED_EXTRACT(insn,8,14)
- #define DECODE_INSN_U8(insn) UNSIGNED_EXTRACT(insn,8,14)
- 
- /* For hbr */
- #define DECODE_INSN_I9a(insn) ((SIGNED_EXTRACT(insn,2,23) << 7) | UNSIGNED_EXTRACT(insn,7,0))
- #define DECODE_INSN_I9b(insn) ((SIGNED_EXTRACT(insn,2,14) << 7) | UNSIGNED_EXTRACT(insn,7,0))
- #define DECODE_INSN_U9a(insn) ((UNSIGNED_EXTRACT(insn,2,23) << 7) | UNSIGNED_EXTRACT(insn,7,0))
- #define DECODE_INSN_U9b(insn) ((UNSIGNED_EXTRACT(insn,2,14) << 7) | UNSIGNED_EXTRACT(insn,7,0))
arch/powerpc/xmon/xmon.c (-273)
···
  #include <asm/rtas.h>
  #include <asm/sstep.h>
  #include <asm/irq_regs.h>
- #include <asm/spu.h>
- #include <asm/spu_priv1.h>
  #include <asm/setjmp.h>
  #include <asm/reg.h>
  #include <asm/debug.h>
···
  			const char *after);
  static const char *getvecname(unsigned long vec);
  
- static int do_spu_cmd(void);
- 
  #ifdef CONFIG_44x
  static void dump_tlb_44x(void);
  #endif
···
    P	list processes/tasks\n\
    r	print registers\n\
    s	single step\n"
- #ifdef CONFIG_SPU_BASE
- "  ss	stop execution on all spus\n\
-   sr	restore execution on stopped spus\n\
-   sf	#	dump spu fields for spu # (in hex)\n\
-   sd	#	dump spu local store for spu # (in hex)\n\
-   sdi	#	disassemble spu local store for spu # (in hex)\n"
- #endif
  "  S	print special registers\n\
    Sa	print all SPRs\n\
    Sr	#	read SPR #\n\
···
  		cacheflush();
  		break;
  	case 's':
- 		if (do_spu_cmd() == 0)
- 			break;
  		if (do_step(excp))
  			return cmd;
  		break;
···
  	if (xmon_early)
  		debugger(NULL);
  }
- 
- #ifdef CONFIG_SPU_BASE
- 
- struct spu_info {
- 	struct spu *spu;
- 	u64 saved_mfc_sr1_RW;
- 	u32 saved_spu_runcntl_RW;
- 	unsigned long dump_addr;
- 	u8 stopped_ok;
- };
- 
- #define XMON_NUM_SPUS	16	/* Enough for current hardware */
- 
- static struct spu_info spu_info[XMON_NUM_SPUS];
- 
- void __init xmon_register_spus(struct list_head *list)
- {
- 	struct spu *spu;
- 
- 	list_for_each_entry(spu, list, full_list) {
- 		if (spu->number >= XMON_NUM_SPUS) {
- 			WARN_ON(1);
- 			continue;
- 		}
- 
- 		spu_info[spu->number].spu = spu;
- 		spu_info[spu->number].stopped_ok = 0;
- 		spu_info[spu->number].dump_addr = (unsigned long)
- 			spu_info[spu->number].spu->local_store;
- 	}
- }
- 
- static void stop_spus(void)
- {
- 	struct spu *spu;
- 	volatile int i;
- 	u64 tmp;
- 
- 	for (i = 0; i < XMON_NUM_SPUS; i++) {
- 		if (!spu_info[i].spu)
- 			continue;
- 
- 		if (setjmp(bus_error_jmp) == 0) {
- 			catch_memory_errors = 1;
- 			sync();
- 
- 			spu = spu_info[i].spu;
- 
- 			spu_info[i].saved_spu_runcntl_RW =
- 				in_be32(&spu->problem->spu_runcntl_RW);
- 
- 			tmp = spu_mfc_sr1_get(spu);
- 			spu_info[i].saved_mfc_sr1_RW = tmp;
- 
- 			tmp &= ~MFC_STATE1_MASTER_RUN_CONTROL_MASK;
- 			spu_mfc_sr1_set(spu, tmp);
- 
- 			sync();
- 			__delay(200);
- 
- 			spu_info[i].stopped_ok = 1;
- 
- 			printf("Stopped spu %.2d (was %s)\n", i,
- 					spu_info[i].saved_spu_runcntl_RW ?
- 					"running" : "stopped");
- 		} else {
- 			catch_memory_errors = 0;
- 			printf("*** Error stopping spu %.2d\n", i);
- 		}
- 		catch_memory_errors = 0;
- 	}
- }
- 
- static void restart_spus(void)
- {
- 	struct spu *spu;
- 	volatile int i;
- 
- 	for (i = 0; i < XMON_NUM_SPUS; i++) {
- 		if (!spu_info[i].spu)
- 			continue;
- 
- 		if (!spu_info[i].stopped_ok) {
- 			printf("*** Error, spu %d was not successfully stopped"
- 					", not restarting\n", i);
- 			continue;
- 		}
- 
- 		if (setjmp(bus_error_jmp) == 0) {
- 			catch_memory_errors = 1;
- 			sync();
- 
- 			spu = spu_info[i].spu;
- 			spu_mfc_sr1_set(spu, spu_info[i].saved_mfc_sr1_RW);
- 			out_be32(&spu->problem->spu_runcntl_RW,
- 					spu_info[i].saved_spu_runcntl_RW);
- 
- 			sync();
- 			__delay(200);
- 
- 			printf("Restarted spu %.2d\n", i);
- 		} else {
- 			catch_memory_errors = 0;
- 			printf("*** Error restarting spu %.2d\n", i);
- 		}
- 		catch_memory_errors = 0;
- 	}
- }
- 
- #define DUMP_WIDTH	23
- #define DUMP_VALUE(format, field, value)			\
- do {								\
- 	if (setjmp(bus_error_jmp) == 0) {			\
- 		catch_memory_errors = 1;			\
- 		sync();						\
- 		printf("  %-*s = "format"\n", DUMP_WIDTH,	\
- 				#field, value);			\
- 		sync();						\
- 		__delay(200);					\
- 	} else {						\
- 		catch_memory_errors = 0;			\
- 		printf("  %-*s = *** Error reading field.\n",	\
- 					DUMP_WIDTH, #field);	\
- 	}							\
- 	catch_memory_errors = 0;				\
- } while (0)
- 
- #define DUMP_FIELD(obj, format, field)	\
- 	DUMP_VALUE(format, field, obj->field)
- 
- static void dump_spu_fields(struct spu *spu)
- {
- 	printf("Dumping spu fields at address %p:\n", spu);
- 
- 	DUMP_FIELD(spu, "0x%x", number);
- 	DUMP_FIELD(spu, "%s", name);
- 	DUMP_FIELD(spu, "0x%lx", local_store_phys);
- 	DUMP_FIELD(spu, "0x%p", local_store);
- 	DUMP_FIELD(spu, "0x%lx", ls_size);
- 	DUMP_FIELD(spu, "0x%x", node);
- 	DUMP_FIELD(spu, "0x%lx", flags);
- 	DUMP_FIELD(spu, "%llu", class_0_pending);
- 	DUMP_FIELD(spu, "0x%llx", class_0_dar);
- 	DUMP_FIELD(spu, "0x%llx", class_1_dar);
- 	DUMP_FIELD(spu, "0x%llx", class_1_dsisr);
- 	DUMP_FIELD(spu, "0x%x", irqs[0]);
- 	DUMP_FIELD(spu, "0x%x", irqs[1]);
- 	DUMP_FIELD(spu, "0x%x", irqs[2]);
- 	DUMP_FIELD(spu, "0x%x", slb_replace);
- 	DUMP_FIELD(spu, "%d", pid);
- 	DUMP_FIELD(spu, "0x%p", mm);
- 	DUMP_FIELD(spu, "0x%p", ctx);
- 	DUMP_FIELD(spu, "0x%p", rq);
- 	DUMP_FIELD(spu, "0x%llx", timestamp);
- 	DUMP_FIELD(spu, "0x%lx", problem_phys);
- 	DUMP_FIELD(spu, "0x%p", problem);
- 	DUMP_VALUE("0x%x", problem->spu_runcntl_RW,
- 			in_be32(&spu->problem->spu_runcntl_RW));
- 	DUMP_VALUE("0x%x", problem->spu_status_R,
- 			in_be32(&spu->problem->spu_status_R));
- 	DUMP_VALUE("0x%x", problem->spu_npc_RW,
- 			in_be32(&spu->problem->spu_npc_RW));
- 	DUMP_FIELD(spu, "0x%p", priv2);
- 	DUMP_FIELD(spu, "0x%p", pdata);
- }
- 
- static int spu_inst_dump(unsigned long adr, long count, int praddr)
- {
- 	return generic_inst_dump(adr, count, praddr, print_insn_spu);
- }
- 
- static void dump_spu_ls(unsigned long num, int subcmd)
- {
- 	unsigned long offset, addr, ls_addr;
- 
- 	if (setjmp(bus_error_jmp) == 0) {
- 		catch_memory_errors = 1;
- 		sync();
- 		ls_addr = (unsigned long)spu_info[num].spu->local_store;
- 		sync();
- 		__delay(200);
- 	} else {
- 		catch_memory_errors = 0;
- 		printf("*** Error: accessing spu info for spu %ld\n", num);
- 		return;
- 	}
- 	catch_memory_errors = 0;
- 
- 	if (scanhex(&offset))
- 		addr = ls_addr + offset;
- 	else
- 		addr = spu_info[num].dump_addr;
- 
- 	if (addr >= ls_addr + LS_SIZE) {
- 		printf("*** Error: address outside of local store\n");
- 		return;
- 	}
- 
- 	switch (subcmd) {
- 	case 'i':
- 		addr += spu_inst_dump(addr, 16, 1);
- 		last_cmd = "sdi\n";
- 		break;
- 	default:
- 		prdump(addr, 64);
- 		addr += 64;
- 		last_cmd = "sd\n";
- 		break;
- 	}
- 
- 	spu_info[num].dump_addr = addr;
- }
- 
- static int do_spu_cmd(void)
- {
- 	static unsigned long num = 0;
- 	int cmd, subcmd = 0;
- 
- 	cmd = inchar();
- 	switch (cmd) {
- 	case 's':
- 		stop_spus();
- 		break;
- 	case 'r':
- 		restart_spus();
- 		break;
- 	case 'd':
- 		subcmd = inchar();
- 		if (isxdigit(subcmd) || subcmd == '\n')
- 			termch = subcmd;
- 		fallthrough;
- 	case 'f':
- 		scanhex(&num);
- 		if (num >= XMON_NUM_SPUS || !spu_info[num].spu) {
- 			printf("*** Error: invalid spu number\n");
- 			return 0;
- 		}
- 
- 		switch (cmd) {
- 		case 'f':
- 			dump_spu_fields(spu_info[num].spu);
- 			break;
- 		default:
- 			dump_spu_ls(num, subcmd);
- 			break;
- 		}
- 
- 		break;
- 	default:
- 		return -1;
- 	}
- 
- 	return 0;
- }
- #else	/* ! CONFIG_SPU_BASE */
- static int do_spu_cmd(void)
- {
- 	return -1;
- }
- #endif
arch/x86/kernel/static_call.c (+1 -1)
···
  {
  	mutex_lock(&text_mutex);
  
- 	if (tramp) {
+ 	if (tramp && !site) {
  		__static_call_validate(tramp, true, true);
  		__static_call_transform(tramp, __sc_insn(!func, true), func, false);
  	}
drivers/cpufreq/Kconfig.powerpc (-18)
···
  # SPDX-License-Identifier: GPL-2.0-only
- config CPU_FREQ_CBE
- 	tristate "CBE frequency scaling"
- 	depends on CBE_RAS && PPC_CELL
- 	default m
- 	help
- 	  This adds the cpufreq driver for Cell BE processors.
- 	  For details, take a look at <file:Documentation/cpu-freq/>.
- 	  If you don't have such processor, say N
- 
- config CPU_FREQ_CBE_PMI
- 	bool "CBE frequency scaling using PMI interface"
- 	depends on CPU_FREQ_CBE
- 	default n
- 	help
- 	  Select this, if you want to use the PMI interface to switch
- 	  frequencies. Using PMI, the processor will not only be able to run at
- 	  lower speed, but also at lower core voltage.
- 
  config CPU_FREQ_PMAC
  	bool "Support for Apple PowerBooks"
  	depends on ADB_PMU && PPC32
-3
drivers/cpufreq/Makefile
··· 91 91 92 92 ################################################################################## 93 93 # PowerPC platform drivers 94 - obj-$(CONFIG_CPU_FREQ_CBE) += ppc-cbe-cpufreq.o 95 - ppc-cbe-cpufreq-y += ppc_cbe_cpufreq_pervasive.o ppc_cbe_cpufreq.o 96 - obj-$(CONFIG_CPU_FREQ_CBE_PMI) += ppc_cbe_cpufreq_pmi.o 97 94 obj-$(CONFIG_QORIQ_CPUFREQ) += qoriq-cpufreq.o 98 95 obj-$(CONFIG_CPU_FREQ_PMAC) += pmac32-cpufreq.o 99 96 obj-$(CONFIG_CPU_FREQ_PMAC64) += pmac64-cpufreq.o
-173
drivers/cpufreq/ppc_cbe_cpufreq.c
··· 1 - // SPDX-License-Identifier: GPL-2.0-or-later 2 - /* 3 - * cpufreq driver for the cell processor 4 - * 5 - * (C) Copyright IBM Deutschland Entwicklung GmbH 2005-2007 6 - * 7 - * Author: Christian Krafft <krafft@de.ibm.com> 8 - */ 9 - 10 - #include <linux/cpufreq.h> 11 - #include <linux/module.h> 12 - #include <linux/of.h> 13 - 14 - #include <asm/machdep.h> 15 - #include <asm/cell-regs.h> 16 - 17 - #include "ppc_cbe_cpufreq.h" 18 - 19 - /* the CBE supports an 8 step frequency scaling */ 20 - static struct cpufreq_frequency_table cbe_freqs[] = { 21 - {0, 1, 0}, 22 - {0, 2, 0}, 23 - {0, 3, 0}, 24 - {0, 4, 0}, 25 - {0, 5, 0}, 26 - {0, 6, 0}, 27 - {0, 8, 0}, 28 - {0, 10, 0}, 29 - {0, 0, CPUFREQ_TABLE_END}, 30 - }; 31 - 32 - /* 33 - * hardware specific functions 34 - */ 35 - 36 - static int set_pmode(unsigned int cpu, unsigned int slow_mode) 37 - { 38 - int rc; 39 - 40 - if (cbe_cpufreq_has_pmi) 41 - rc = cbe_cpufreq_set_pmode_pmi(cpu, slow_mode); 42 - else 43 - rc = cbe_cpufreq_set_pmode(cpu, slow_mode); 44 - 45 - pr_debug("register contains slow mode %d\n", cbe_cpufreq_get_pmode(cpu)); 46 - 47 - return rc; 48 - } 49 - 50 - /* 51 - * cpufreq functions 52 - */ 53 - 54 - static int cbe_cpufreq_cpu_init(struct cpufreq_policy *policy) 55 - { 56 - struct cpufreq_frequency_table *pos; 57 - const u32 *max_freqp; 58 - u32 max_freq; 59 - int cur_pmode; 60 - struct device_node *cpu; 61 - 62 - cpu = of_get_cpu_node(policy->cpu, NULL); 63 - 64 - if (!cpu) 65 - return -ENODEV; 66 - 67 - pr_debug("init cpufreq on CPU %d\n", policy->cpu); 68 - 69 - /* 70 - * Let's check we can actually get to the CELL regs 71 - */ 72 - if (!cbe_get_cpu_pmd_regs(policy->cpu) || 73 - !cbe_get_cpu_mic_tm_regs(policy->cpu)) { 74 - pr_info("invalid CBE regs pointers for cpufreq\n"); 75 - of_node_put(cpu); 76 - return -EINVAL; 77 - } 78 - 79 - max_freqp = of_get_property(cpu, "clock-frequency", NULL); 80 - 81 - of_node_put(cpu); 82 - 83 - if (!max_freqp) 84 - return -EINVAL; 85 - 86 - /* we need the 
freq in kHz */ 87 - max_freq = *max_freqp / 1000; 88 - 89 - pr_debug("max clock-frequency is at %u kHz\n", max_freq); 90 - pr_debug("initializing frequency table\n"); 91 - 92 - /* initialize frequency table */ 93 - cpufreq_for_each_entry(pos, cbe_freqs) { 94 - pos->frequency = max_freq / pos->driver_data; 95 - pr_debug("%d: %d\n", (int)(pos - cbe_freqs), pos->frequency); 96 - } 97 - 98 - /* if DEBUG is enabled set_pmode() measures the latency 99 - * of a transition */ 100 - policy->cpuinfo.transition_latency = 25000; 101 - 102 - cur_pmode = cbe_cpufreq_get_pmode(policy->cpu); 103 - pr_debug("current pmode is at %d\n",cur_pmode); 104 - 105 - policy->cur = cbe_freqs[cur_pmode].frequency; 106 - 107 - #ifdef CONFIG_SMP 108 - cpumask_copy(policy->cpus, cpu_sibling_mask(policy->cpu)); 109 - #endif 110 - 111 - policy->freq_table = cbe_freqs; 112 - cbe_cpufreq_pmi_policy_init(policy); 113 - return 0; 114 - } 115 - 116 - static void cbe_cpufreq_cpu_exit(struct cpufreq_policy *policy) 117 - { 118 - cbe_cpufreq_pmi_policy_exit(policy); 119 - } 120 - 121 - static int cbe_cpufreq_target(struct cpufreq_policy *policy, 122 - unsigned int cbe_pmode_new) 123 - { 124 - pr_debug("setting frequency for cpu %d to %d kHz, " \ 125 - "1/%d of max frequency\n", 126 - policy->cpu, 127 - cbe_freqs[cbe_pmode_new].frequency, 128 - cbe_freqs[cbe_pmode_new].driver_data); 129 - 130 - return set_pmode(policy->cpu, cbe_pmode_new); 131 - } 132 - 133 - static struct cpufreq_driver cbe_cpufreq_driver = { 134 - .verify = cpufreq_generic_frequency_table_verify, 135 - .target_index = cbe_cpufreq_target, 136 - .init = cbe_cpufreq_cpu_init, 137 - .exit = cbe_cpufreq_cpu_exit, 138 - .name = "cbe-cpufreq", 139 - .flags = CPUFREQ_CONST_LOOPS, 140 - }; 141 - 142 - /* 143 - * module init and destoy 144 - */ 145 - 146 - static int __init cbe_cpufreq_init(void) 147 - { 148 - int ret; 149 - 150 - if (!machine_is(cell)) 151 - return -ENODEV; 152 - 153 - cbe_cpufreq_pmi_init(); 154 - 155 - ret = 
cpufreq_register_driver(&cbe_cpufreq_driver); 156 - if (ret) 157 - cbe_cpufreq_pmi_exit(); 158 - 159 - return ret; 160 - } 161 - 162 - static void __exit cbe_cpufreq_exit(void) 163 - { 164 - cpufreq_unregister_driver(&cbe_cpufreq_driver); 165 - cbe_cpufreq_pmi_exit(); 166 - } 167 - 168 - module_init(cbe_cpufreq_init); 169 - module_exit(cbe_cpufreq_exit); 170 - 171 - MODULE_DESCRIPTION("cpufreq driver for Cell BE processors"); 172 - MODULE_LICENSE("GPL"); 173 - MODULE_AUTHOR("Christian Krafft <krafft@de.ibm.com>");
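The deleted driver's `cbe_cpufreq_cpu_init()` fills its 8-step table by dividing the device tree's maximum clock by a fixed divisor per pmode. A self-contained sketch of that table derivation (divisors copied from the `cbe_freqs[]` initializer above; the helper name is ours, not kernel API):

```c
#include <assert.h>

#define CBE_NUM_PMODES 8

/* Divisors from the cbe_freqs[] driver_data fields: 1,2,3,4,5,6,8,10 */
static const unsigned int cbe_divisors[CBE_NUM_PMODES] = {
	1, 2, 3, 4, 5, 6, 8, 10
};

/* max_freq_khz: the "clock-frequency" property divided by 1000,
 * as the driver does ("we need the freq in kHz"). */
static void fill_freq_table(unsigned int max_freq_khz,
			    unsigned int table[CBE_NUM_PMODES])
{
	for (int i = 0; i < CBE_NUM_PMODES; i++)
		table[i] = max_freq_khz / cbe_divisors[i];
}
```

For a 3.2 GHz Cell, pmode 0 runs at full clock and pmode 7 at one tenth of it.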
-33
drivers/cpufreq/ppc_cbe_cpufreq.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 2 - /* 3 - * ppc_cbe_cpufreq.h 4 - * 5 - * This file contains the definitions used by the cbe_cpufreq driver. 6 - * 7 - * (C) Copyright IBM Deutschland Entwicklung GmbH 2005-2007 8 - * 9 - * Author: Christian Krafft <krafft@de.ibm.com> 10 - * 11 - */ 12 - 13 - #include <linux/cpufreq.h> 14 - #include <linux/types.h> 15 - 16 - int cbe_cpufreq_set_pmode(int cpu, unsigned int pmode); 17 - int cbe_cpufreq_get_pmode(int cpu); 18 - 19 - int cbe_cpufreq_set_pmode_pmi(int cpu, unsigned int pmode); 20 - 21 - #if IS_ENABLED(CONFIG_CPU_FREQ_CBE_PMI) 22 - extern bool cbe_cpufreq_has_pmi; 23 - void cbe_cpufreq_pmi_policy_init(struct cpufreq_policy *policy); 24 - void cbe_cpufreq_pmi_policy_exit(struct cpufreq_policy *policy); 25 - void cbe_cpufreq_pmi_init(void); 26 - void cbe_cpufreq_pmi_exit(void); 27 - #else 28 - #define cbe_cpufreq_has_pmi (0) 29 - static inline void cbe_cpufreq_pmi_policy_init(struct cpufreq_policy *policy) {} 30 - static inline void cbe_cpufreq_pmi_policy_exit(struct cpufreq_policy *policy) {} 31 - static inline void cbe_cpufreq_pmi_init(void) {} 32 - static inline void cbe_cpufreq_pmi_exit(void) {} 33 - #endif
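The removed header shows a common kernel pattern: when an optional backend is compiled out, the feature flag collapses to a constant and the hooks become empty static inlines, so callers never need `#ifdef`s. A minimal sketch of the same pattern with a made-up config symbol (`SKETCH_HAS_PMI` stands in for `CONFIG_CPU_FREQ_CBE_PMI`):

```c
#include <assert.h>

#define SKETCH_HAS_PMI 0

#if SKETCH_HAS_PMI
extern int sketch_has_pmi;
void sketch_pmi_init(void);
#else
/* Compiled-out variant: constant flag and no-op stubs,
 * exactly as the header above does for the PMI backend. */
#define sketch_has_pmi (0)
static inline void sketch_pmi_init(void) {}
#endif

/* Callers use the flag unconditionally; the compiler
 * eliminates the dead branch. */
static int pick_backend(void)
{
	sketch_pmi_init();
	return sketch_has_pmi ? 1 : 0;  /* 1 = PMI, 0 = pervasive unit */
}
```

This is why `set_pmode()` in the driver can branch on `cbe_cpufreq_has_pmi` without any conditional compilation at the call site.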
-102
drivers/cpufreq/ppc_cbe_cpufreq_pervasive.c
··· 1 - // SPDX-License-Identifier: GPL-2.0-or-later 2 - /* 3 - * pervasive backend for the cbe_cpufreq driver 4 - * 5 - * This driver makes use of the pervasive unit to 6 - * engage the desired frequency. 7 - * 8 - * (C) Copyright IBM Deutschland Entwicklung GmbH 2005-2007 9 - * 10 - * Author: Christian Krafft <krafft@de.ibm.com> 11 - */ 12 - 13 - #include <linux/io.h> 14 - #include <linux/kernel.h> 15 - #include <linux/time.h> 16 - #include <asm/machdep.h> 17 - #include <asm/hw_irq.h> 18 - #include <asm/cell-regs.h> 19 - 20 - #include "ppc_cbe_cpufreq.h" 21 - 22 - /* to write to MIC register */ 23 - static u64 MIC_Slow_Fast_Timer_table[] = { 24 - [0 ... 7] = 0x007fc00000000000ull, 25 - }; 26 - 27 - /* more values for the MIC */ 28 - static u64 MIC_Slow_Next_Timer_table[] = { 29 - 0x0000240000000000ull, 30 - 0x0000268000000000ull, 31 - 0x000029C000000000ull, 32 - 0x00002D0000000000ull, 33 - 0x0000300000000000ull, 34 - 0x0000334000000000ull, 35 - 0x000039C000000000ull, 36 - 0x00003FC000000000ull, 37 - }; 38 - 39 - 40 - int cbe_cpufreq_set_pmode(int cpu, unsigned int pmode) 41 - { 42 - struct cbe_pmd_regs __iomem *pmd_regs; 43 - struct cbe_mic_tm_regs __iomem *mic_tm_regs; 44 - unsigned long flags; 45 - u64 value; 46 - #ifdef DEBUG 47 - long time; 48 - #endif 49 - 50 - local_irq_save(flags); 51 - 52 - mic_tm_regs = cbe_get_cpu_mic_tm_regs(cpu); 53 - pmd_regs = cbe_get_cpu_pmd_regs(cpu); 54 - 55 - #ifdef DEBUG 56 - time = jiffies; 57 - #endif 58 - 59 - out_be64(&mic_tm_regs->slow_fast_timer_0, MIC_Slow_Fast_Timer_table[pmode]); 60 - out_be64(&mic_tm_regs->slow_fast_timer_1, MIC_Slow_Fast_Timer_table[pmode]); 61 - 62 - out_be64(&mic_tm_regs->slow_next_timer_0, MIC_Slow_Next_Timer_table[pmode]); 63 - out_be64(&mic_tm_regs->slow_next_timer_1, MIC_Slow_Next_Timer_table[pmode]); 64 - 65 - value = in_be64(&pmd_regs->pmcr); 66 - /* set bits to zero */ 67 - value &= 0xFFFFFFFFFFFFFFF8ull; 68 - /* set bits to next pmode */ 69 - value |= pmode; 70 - 71 - 
out_be64(&pmd_regs->pmcr, value); 72 - 73 - #ifdef DEBUG 74 - /* wait until new pmode appears in status register */ 75 - value = in_be64(&pmd_regs->pmsr) & 0x07; 76 - while (value != pmode) { 77 - cpu_relax(); 78 - value = in_be64(&pmd_regs->pmsr) & 0x07; 79 - } 80 - 81 - time = jiffies - time; 82 - time = jiffies_to_msecs(time); 83 - pr_debug("had to wait %lu ms for a transition using " \ 84 - "pervasive unit\n", time); 85 - #endif 86 - local_irq_restore(flags); 87 - 88 - return 0; 89 - } 90 - 91 - 92 - int cbe_cpufreq_get_pmode(int cpu) 93 - { 94 - int ret; 95 - struct cbe_pmd_regs __iomem *pmd_regs; 96 - 97 - pmd_regs = cbe_get_cpu_pmd_regs(cpu); 98 - ret = in_be64(&pmd_regs->pmsr) & 0x07; 99 - 100 - return ret; 101 - } 102 -
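The PMCR update in `cbe_cpufreq_set_pmode()` above is a read-modify-write of the register's low three bits: clear the current pmode, OR in the new one. The bit manipulation in isolation (no MMIO here; the helper name is hypothetical):

```c
#include <assert.h>
#include <stdint.h>

/* Compute the new PMCR value: the pmode occupies bits 2:0,
 * mirroring `value &= ...FFF8; value |= pmode;` in the hunk above. */
static uint64_t pmcr_with_pmode(uint64_t pmcr, unsigned int pmode)
{
	pmcr &= 0xFFFFFFFFFFFFFFF8ULL;  /* clear current pmode bits */
	pmcr |= (pmode & 0x7);          /* install the new pmode */
	return pmcr;
}
```

The matching read side, `cbe_cpufreq_get_pmode()`, simply masks the status register with `0x07`.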
-150
drivers/cpufreq/ppc_cbe_cpufreq_pmi.c
··· 1 - // SPDX-License-Identifier: GPL-2.0-or-later 2 - /* 3 - * pmi backend for the cbe_cpufreq driver 4 - * 5 - * (C) Copyright IBM Deutschland Entwicklung GmbH 2005-2007 6 - * 7 - * Author: Christian Krafft <krafft@de.ibm.com> 8 - */ 9 - 10 - #include <linux/kernel.h> 11 - #include <linux/types.h> 12 - #include <linux/timer.h> 13 - #include <linux/init.h> 14 - #include <linux/pm_qos.h> 15 - #include <linux/slab.h> 16 - 17 - #include <asm/processor.h> 18 - #include <asm/pmi.h> 19 - #include <asm/cell-regs.h> 20 - 21 - #ifdef DEBUG 22 - #include <asm/time.h> 23 - #endif 24 - 25 - #include "ppc_cbe_cpufreq.h" 26 - 27 - bool cbe_cpufreq_has_pmi = false; 28 - EXPORT_SYMBOL_GPL(cbe_cpufreq_has_pmi); 29 - 30 - /* 31 - * hardware specific functions 32 - */ 33 - 34 - int cbe_cpufreq_set_pmode_pmi(int cpu, unsigned int pmode) 35 - { 36 - int ret; 37 - pmi_message_t pmi_msg; 38 - #ifdef DEBUG 39 - long time; 40 - #endif 41 - pmi_msg.type = PMI_TYPE_FREQ_CHANGE; 42 - pmi_msg.data1 = cbe_cpu_to_node(cpu); 43 - pmi_msg.data2 = pmode; 44 - 45 - #ifdef DEBUG 46 - time = jiffies; 47 - #endif 48 - pmi_send_message(pmi_msg); 49 - 50 - #ifdef DEBUG 51 - time = jiffies - time; 52 - time = jiffies_to_msecs(time); 53 - pr_debug("had to wait %lu ms for a transition using " \ 54 - "PMI\n", time); 55 - #endif 56 - ret = pmi_msg.data2; 57 - pr_debug("PMI returned slow mode %d\n", ret); 58 - 59 - return ret; 60 - } 61 - EXPORT_SYMBOL_GPL(cbe_cpufreq_set_pmode_pmi); 62 - 63 - 64 - static void cbe_cpufreq_handle_pmi(pmi_message_t pmi_msg) 65 - { 66 - struct cpufreq_policy *policy; 67 - struct freq_qos_request *req; 68 - u8 node, slow_mode; 69 - int cpu, ret; 70 - 71 - BUG_ON(pmi_msg.type != PMI_TYPE_FREQ_CHANGE); 72 - 73 - node = pmi_msg.data1; 74 - slow_mode = pmi_msg.data2; 75 - 76 - cpu = cbe_node_to_cpu(node); 77 - 78 - pr_debug("cbe_handle_pmi: node: %d max_freq: %d\n", node, slow_mode); 79 - 80 - policy = cpufreq_cpu_get(cpu); 81 - if (!policy) { 82 - pr_warn("cpufreq policy not found 
cpu%d\n", cpu); 83 - return; 84 - } 85 - 86 - req = policy->driver_data; 87 - 88 - ret = freq_qos_update_request(req, 89 - policy->freq_table[slow_mode].frequency); 90 - if (ret < 0) 91 - pr_warn("Failed to update freq constraint: %d\n", ret); 92 - else 93 - pr_debug("limiting node %d to slow mode %d\n", node, slow_mode); 94 - 95 - cpufreq_cpu_put(policy); 96 - } 97 - 98 - static struct pmi_handler cbe_pmi_handler = { 99 - .type = PMI_TYPE_FREQ_CHANGE, 100 - .handle_pmi_message = cbe_cpufreq_handle_pmi, 101 - }; 102 - 103 - void cbe_cpufreq_pmi_policy_init(struct cpufreq_policy *policy) 104 - { 105 - struct freq_qos_request *req; 106 - int ret; 107 - 108 - if (!cbe_cpufreq_has_pmi) 109 - return; 110 - 111 - req = kzalloc(sizeof(*req), GFP_KERNEL); 112 - if (!req) 113 - return; 114 - 115 - ret = freq_qos_add_request(&policy->constraints, req, FREQ_QOS_MAX, 116 - policy->freq_table[0].frequency); 117 - if (ret < 0) { 118 - pr_err("Failed to add freq constraint (%d)\n", ret); 119 - kfree(req); 120 - return; 121 - } 122 - 123 - policy->driver_data = req; 124 - } 125 - EXPORT_SYMBOL_GPL(cbe_cpufreq_pmi_policy_init); 126 - 127 - void cbe_cpufreq_pmi_policy_exit(struct cpufreq_policy *policy) 128 - { 129 - struct freq_qos_request *req = policy->driver_data; 130 - 131 - if (cbe_cpufreq_has_pmi) { 132 - freq_qos_remove_request(req); 133 - kfree(req); 134 - } 135 - } 136 - EXPORT_SYMBOL_GPL(cbe_cpufreq_pmi_policy_exit); 137 - 138 - void cbe_cpufreq_pmi_init(void) 139 - { 140 - if (!pmi_register_handler(&cbe_pmi_handler)) 141 - cbe_cpufreq_has_pmi = true; 142 - } 143 - EXPORT_SYMBOL_GPL(cbe_cpufreq_pmi_init); 144 - 145 - void cbe_cpufreq_pmi_exit(void) 146 - { 147 - pmi_unregister_handler(&cbe_pmi_handler); 148 - cbe_cpufreq_has_pmi = false; 149 - } 150 - EXPORT_SYMBOL_GPL(cbe_cpufreq_pmi_exit);
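The PMI handler above reacts to firmware reporting a "slow mode" index by clamping the policy's maximum frequency to that table entry via a freq-QoS request. A simplified sketch of that flow, with plain structs standing in for cpufreq policies and QoS requests (all names here are illustrative, not kernel API):

```c
#include <assert.h>

#define NUM_PMODES 8

struct sketch_policy {
	unsigned int freq_table[NUM_PMODES];  /* kHz, fastest first */
	unsigned int max_freq;                /* current QoS ceiling */
};

/* Firmware said: do not run faster than table entry slow_mode.
 * Mirrors freq_qos_update_request(req, freq_table[slow_mode].frequency). */
static void handle_pmi_slow_mode(struct sketch_policy *p,
				 unsigned int slow_mode)
{
	if (slow_mode < NUM_PMODES)
		p->max_freq = p->freq_table[slow_mode];
}
```

In the real driver the ceiling is installed at policy init (`cbe_cpufreq_pmi_policy_init()` adds a `FREQ_QOS_MAX` request at the fastest entry) and tightened or relaxed as PMI messages arrive.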
+1 -10
drivers/net/ethernet/toshiba/Kconfig
··· 6 6 config NET_VENDOR_TOSHIBA 7 7 bool "Toshiba devices" 8 8 default y 9 - depends on PCI && (PPC_IBM_CELL_BLADE || MIPS) || PPC_PS3 9 + depends on PCI && MIPS || PPC_PS3 10 10 help 11 11 If you have a network (Ethernet) card belonging to this class, say Y. 12 12 ··· 38 38 use wireless feature, disabling this option saves memory. As 39 39 the driver automatically distinguishes the models, you can 40 40 safely enable this option even if you have a wireless-less model. 41 - 42 - config SPIDER_NET 43 - tristate "Spider Gigabit Ethernet driver" 44 - depends on PCI && PPC_IBM_CELL_BLADE 45 - select FW_LOADER 46 - select SUNGEM_PHY 47 - help 48 - This driver supports the Gigabit Ethernet chips present on the 49 - Cell Processor-Based Blades from IBM. 50 41 51 42 config TC35815 52 43 tristate "TOSHIBA TC35815 Ethernet support"
-2
drivers/net/ethernet/toshiba/Makefile
··· 6 6 obj-$(CONFIG_GELIC_NET) += ps3_gelic.o 7 7 gelic_wireless-$(CONFIG_GELIC_WIRELESS) += ps3_gelic_wireless.o 8 8 ps3_gelic-objs += ps3_gelic_net.o $(gelic_wireless-y) 9 - spidernet-y += spider_net.o spider_net_ethtool.o 10 - obj-$(CONFIG_SPIDER_NET) += spidernet.o 11 9 obj-$(CONFIG_TC35815) += tc35815.o
-2556
drivers/net/ethernet/toshiba/spider_net.c
··· 1 - // SPDX-License-Identifier: GPL-2.0-or-later 2 - /* 3 - * Network device driver for Cell Processor-Based Blade and Celleb platform 4 - * 5 - * (C) Copyright IBM Corp. 2005 6 - * (C) Copyright 2006 TOSHIBA CORPORATION 7 - * 8 - * Authors : Utz Bacher <utz.bacher@de.ibm.com> 9 - * Jens Osterkamp <Jens.Osterkamp@de.ibm.com> 10 - */ 11 - 12 - #include <linux/compiler.h> 13 - #include <linux/crc32.h> 14 - #include <linux/delay.h> 15 - #include <linux/etherdevice.h> 16 - #include <linux/ethtool.h> 17 - #include <linux/firmware.h> 18 - #include <linux/if_vlan.h> 19 - #include <linux/in.h> 20 - #include <linux/init.h> 21 - #include <linux/interrupt.h> 22 - #include <linux/gfp.h> 23 - #include <linux/ioport.h> 24 - #include <linux/ip.h> 25 - #include <linux/kernel.h> 26 - #include <linux/mii.h> 27 - #include <linux/module.h> 28 - #include <linux/netdevice.h> 29 - #include <linux/device.h> 30 - #include <linux/pci.h> 31 - #include <linux/skbuff.h> 32 - #include <linux/tcp.h> 33 - #include <linux/types.h> 34 - #include <linux/vmalloc.h> 35 - #include <linux/wait.h> 36 - #include <linux/workqueue.h> 37 - #include <linux/bitops.h> 38 - #include <linux/of.h> 39 - #include <net/checksum.h> 40 - 41 - #include "spider_net.h" 42 - 43 - MODULE_AUTHOR("Utz Bacher <utz.bacher@de.ibm.com> and Jens Osterkamp " \ 44 - "<Jens.Osterkamp@de.ibm.com>"); 45 - MODULE_DESCRIPTION("Spider Southbridge Gigabit Ethernet driver"); 46 - MODULE_LICENSE("GPL"); 47 - MODULE_VERSION(VERSION); 48 - MODULE_FIRMWARE(SPIDER_NET_FIRMWARE_NAME); 49 - 50 - static int rx_descriptors = SPIDER_NET_RX_DESCRIPTORS_DEFAULT; 51 - static int tx_descriptors = SPIDER_NET_TX_DESCRIPTORS_DEFAULT; 52 - 53 - module_param(rx_descriptors, int, 0444); 54 - module_param(tx_descriptors, int, 0444); 55 - 56 - MODULE_PARM_DESC(rx_descriptors, "number of descriptors used " \ 57 - "in rx chains"); 58 - MODULE_PARM_DESC(tx_descriptors, "number of descriptors used " \ 59 - "in tx chain"); 60 - 61 - char spider_net_driver_name[] 
= "spidernet"; 62 - 63 - static const struct pci_device_id spider_net_pci_tbl[] = { 64 - { PCI_VENDOR_ID_TOSHIBA_2, PCI_DEVICE_ID_TOSHIBA_SPIDER_NET, 65 - PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0UL }, 66 - { 0, } 67 - }; 68 - 69 - MODULE_DEVICE_TABLE(pci, spider_net_pci_tbl); 70 - 71 - /** 72 - * spider_net_read_reg - reads an SMMIO register of a card 73 - * @card: device structure 74 - * @reg: register to read from 75 - * 76 - * returns the content of the specified SMMIO register. 77 - */ 78 - static inline u32 79 - spider_net_read_reg(struct spider_net_card *card, u32 reg) 80 - { 81 - /* We use the powerpc specific variants instead of readl_be() because 82 - * we know spidernet is not a real PCI device and we can thus avoid the 83 - * performance hit caused by the PCI workarounds. 84 - */ 85 - return in_be32(card->regs + reg); 86 - } 87 - 88 - /** 89 - * spider_net_write_reg - writes to an SMMIO register of a card 90 - * @card: device structure 91 - * @reg: register to write to 92 - * @value: value to write into the specified SMMIO register 93 - */ 94 - static inline void 95 - spider_net_write_reg(struct spider_net_card *card, u32 reg, u32 value) 96 - { 97 - /* We use the powerpc specific variants instead of writel_be() because 98 - * we know spidernet is not a real PCI device and we can thus avoid the 99 - * performance hit caused by the PCI workarounds. 100 - */ 101 - out_be32(card->regs + reg, value); 102 - } 103 - 104 - /** 105 - * spider_net_write_phy - write to phy register 106 - * @netdev: adapter to be written to 107 - * @mii_id: id of MII 108 - * @reg: PHY register 109 - * @val: value to be written to phy register 110 - * 111 - * spider_net_write_phy_register writes to an arbitrary PHY 112 - * register via the spider GPCWOPCMD register. We assume the queue does 113 - * not run full (not more than 15 commands outstanding). 
114 - **/ 115 - static void 116 - spider_net_write_phy(struct net_device *netdev, int mii_id, 117 - int reg, int val) 118 - { 119 - struct spider_net_card *card = netdev_priv(netdev); 120 - u32 writevalue; 121 - 122 - writevalue = ((u32)mii_id << 21) | 123 - ((u32)reg << 16) | ((u32)val); 124 - 125 - spider_net_write_reg(card, SPIDER_NET_GPCWOPCMD, writevalue); 126 - } 127 - 128 - /** 129 - * spider_net_read_phy - read from phy register 130 - * @netdev: network device to be read from 131 - * @mii_id: id of MII 132 - * @reg: PHY register 133 - * 134 - * Returns value read from PHY register 135 - * 136 - * spider_net_write_phy reads from an arbitrary PHY 137 - * register via the spider GPCROPCMD register 138 - **/ 139 - static int 140 - spider_net_read_phy(struct net_device *netdev, int mii_id, int reg) 141 - { 142 - struct spider_net_card *card = netdev_priv(netdev); 143 - u32 readvalue; 144 - 145 - readvalue = ((u32)mii_id << 21) | ((u32)reg << 16); 146 - spider_net_write_reg(card, SPIDER_NET_GPCROPCMD, readvalue); 147 - 148 - /* we don't use semaphores to wait for an SPIDER_NET_GPROPCMPINT 149 - * interrupt, as we poll for the completion of the read operation 150 - * in spider_net_read_phy. 
Should take about 50 us 151 - */ 152 - do { 153 - readvalue = spider_net_read_reg(card, SPIDER_NET_GPCROPCMD); 154 - } while (readvalue & SPIDER_NET_GPREXEC); 155 - 156 - readvalue &= SPIDER_NET_GPRDAT_MASK; 157 - 158 - return readvalue; 159 - } 160 - 161 - /** 162 - * spider_net_setup_aneg - initial auto-negotiation setup 163 - * @card: device structure 164 - **/ 165 - static void 166 - spider_net_setup_aneg(struct spider_net_card *card) 167 - { 168 - struct mii_phy *phy = &card->phy; 169 - u32 advertise = 0; 170 - u16 bmsr, estat; 171 - 172 - bmsr = spider_net_read_phy(card->netdev, phy->mii_id, MII_BMSR); 173 - estat = spider_net_read_phy(card->netdev, phy->mii_id, MII_ESTATUS); 174 - 175 - if (bmsr & BMSR_10HALF) 176 - advertise |= ADVERTISED_10baseT_Half; 177 - if (bmsr & BMSR_10FULL) 178 - advertise |= ADVERTISED_10baseT_Full; 179 - if (bmsr & BMSR_100HALF) 180 - advertise |= ADVERTISED_100baseT_Half; 181 - if (bmsr & BMSR_100FULL) 182 - advertise |= ADVERTISED_100baseT_Full; 183 - 184 - if ((bmsr & BMSR_ESTATEN) && (estat & ESTATUS_1000_TFULL)) 185 - advertise |= SUPPORTED_1000baseT_Full; 186 - if ((bmsr & BMSR_ESTATEN) && (estat & ESTATUS_1000_THALF)) 187 - advertise |= SUPPORTED_1000baseT_Half; 188 - 189 - sungem_phy_probe(phy, phy->mii_id); 190 - phy->def->ops->setup_aneg(phy, advertise); 191 - 192 - } 193 - 194 - /** 195 - * spider_net_rx_irq_off - switch off rx irq on this spider card 196 - * @card: device structure 197 - * 198 - * switches off rx irq by masking them out in the GHIINTnMSK register 199 - */ 200 - static void 201 - spider_net_rx_irq_off(struct spider_net_card *card) 202 - { 203 - u32 regvalue; 204 - 205 - regvalue = SPIDER_NET_INT0_MASK_VALUE & (~SPIDER_NET_RXINT); 206 - spider_net_write_reg(card, SPIDER_NET_GHIINT0MSK, regvalue); 207 - } 208 - 209 - /** 210 - * spider_net_rx_irq_on - switch on rx irq on this spider card 211 - * @card: device structure 212 - * 213 - * switches on rx irq by enabling them in the GHIINTnMSK register 214 - */ 
215 - static void 216 - spider_net_rx_irq_on(struct spider_net_card *card) 217 - { 218 - u32 regvalue; 219 - 220 - regvalue = SPIDER_NET_INT0_MASK_VALUE | SPIDER_NET_RXINT; 221 - spider_net_write_reg(card, SPIDER_NET_GHIINT0MSK, regvalue); 222 - } 223 - 224 - /** 225 - * spider_net_set_promisc - sets the unicast address or the promiscuous mode 226 - * @card: card structure 227 - * 228 - * spider_net_set_promisc sets the unicast destination address filter and 229 - * thus either allows for non-promisc mode or promisc mode 230 - */ 231 - static void 232 - spider_net_set_promisc(struct spider_net_card *card) 233 - { 234 - u32 macu, macl; 235 - struct net_device *netdev = card->netdev; 236 - 237 - if (netdev->flags & IFF_PROMISC) { 238 - /* clear destination entry 0 */ 239 - spider_net_write_reg(card, SPIDER_NET_GMRUAFILnR, 0); 240 - spider_net_write_reg(card, SPIDER_NET_GMRUAFILnR + 0x04, 0); 241 - spider_net_write_reg(card, SPIDER_NET_GMRUA0FIL15R, 242 - SPIDER_NET_PROMISC_VALUE); 243 - } else { 244 - macu = netdev->dev_addr[0]; 245 - macu <<= 8; 246 - macu |= netdev->dev_addr[1]; 247 - memcpy(&macl, &netdev->dev_addr[2], sizeof(macl)); 248 - 249 - macu |= SPIDER_NET_UA_DESCR_VALUE; 250 - spider_net_write_reg(card, SPIDER_NET_GMRUAFILnR, macu); 251 - spider_net_write_reg(card, SPIDER_NET_GMRUAFILnR + 0x04, macl); 252 - spider_net_write_reg(card, SPIDER_NET_GMRUA0FIL15R, 253 - SPIDER_NET_NONPROMISC_VALUE); 254 - } 255 - } 256 - 257 - /** 258 - * spider_net_get_descr_status -- returns the status of a descriptor 259 - * @hwdescr: descriptor to look at 260 - * 261 - * returns the status as in the dmac_cmd_status field of the descriptor 262 - */ 263 - static inline int 264 - spider_net_get_descr_status(struct spider_net_hw_descr *hwdescr) 265 - { 266 - return hwdescr->dmac_cmd_status & SPIDER_NET_DESCR_IND_PROC_MASK; 267 - } 268 - 269 - /** 270 - * spider_net_free_chain - free descriptor chain 271 - * @card: card structure 272 - * @chain: address of chain 273 - * 274 - */ 
275 - static void 276 - spider_net_free_chain(struct spider_net_card *card, 277 - struct spider_net_descr_chain *chain) 278 - { 279 - struct spider_net_descr *descr; 280 - 281 - descr = chain->ring; 282 - do { 283 - descr->bus_addr = 0; 284 - descr->hwdescr->next_descr_addr = 0; 285 - descr = descr->next; 286 - } while (descr != chain->ring); 287 - 288 - dma_free_coherent(&card->pdev->dev, chain->num_desc * sizeof(struct spider_net_hw_descr), 289 - chain->hwring, chain->dma_addr); 290 - } 291 - 292 - /** 293 - * spider_net_init_chain - alloc and link descriptor chain 294 - * @card: card structure 295 - * @chain: address of chain 296 - * 297 - * We manage a circular list that mirrors the hardware structure, 298 - * except that the hardware uses bus addresses. 299 - * 300 - * Returns 0 on success, <0 on failure 301 - */ 302 - static int 303 - spider_net_init_chain(struct spider_net_card *card, 304 - struct spider_net_descr_chain *chain) 305 - { 306 - int i; 307 - struct spider_net_descr *descr; 308 - struct spider_net_hw_descr *hwdescr; 309 - dma_addr_t buf; 310 - size_t alloc_size; 311 - 312 - alloc_size = chain->num_desc * sizeof(struct spider_net_hw_descr); 313 - 314 - chain->hwring = dma_alloc_coherent(&card->pdev->dev, alloc_size, 315 - &chain->dma_addr, GFP_KERNEL); 316 - if (!chain->hwring) 317 - return -ENOMEM; 318 - 319 - /* Set up the hardware pointers in each descriptor */ 320 - descr = chain->ring; 321 - hwdescr = chain->hwring; 322 - buf = chain->dma_addr; 323 - for (i=0; i < chain->num_desc; i++, descr++, hwdescr++) { 324 - hwdescr->dmac_cmd_status = SPIDER_NET_DESCR_NOT_IN_USE; 325 - hwdescr->next_descr_addr = 0; 326 - 327 - descr->hwdescr = hwdescr; 328 - descr->bus_addr = buf; 329 - descr->next = descr + 1; 330 - descr->prev = descr - 1; 331 - 332 - buf += sizeof(struct spider_net_hw_descr); 333 - } 334 - /* do actual circular list */ 335 - (descr-1)->next = chain->ring; 336 - chain->ring->prev = descr-1; 337 - 338 - spin_lock_init(&chain->lock); 339 
- chain->head = chain->ring; 340 - chain->tail = chain->ring; 341 - return 0; 342 - } 343 - 344 - /** 345 - * spider_net_free_rx_chain_contents - frees descr contents in rx chain 346 - * @card: card structure 347 - * 348 - * returns 0 on success, <0 on failure 349 - */ 350 - static void 351 - spider_net_free_rx_chain_contents(struct spider_net_card *card) 352 - { 353 - struct spider_net_descr *descr; 354 - 355 - descr = card->rx_chain.head; 356 - do { 357 - if (descr->skb) { 358 - dma_unmap_single(&card->pdev->dev, 359 - descr->hwdescr->buf_addr, 360 - SPIDER_NET_MAX_FRAME, 361 - DMA_BIDIRECTIONAL); 362 - dev_kfree_skb(descr->skb); 363 - descr->skb = NULL; 364 - } 365 - descr = descr->next; 366 - } while (descr != card->rx_chain.head); 367 - } 368 - 369 - /** 370 - * spider_net_prepare_rx_descr - Reinitialize RX descriptor 371 - * @card: card structure 372 - * @descr: descriptor to re-init 373 - * 374 - * Return 0 on success, <0 on failure. 375 - * 376 - * Allocates a new rx skb, iommu-maps it and attaches it to the 377 - * descriptor. Mark the descriptor as activated, ready-to-use. 
378 - */ 379 - static int 380 - spider_net_prepare_rx_descr(struct spider_net_card *card, 381 - struct spider_net_descr *descr) 382 - { 383 - struct spider_net_hw_descr *hwdescr = descr->hwdescr; 384 - dma_addr_t buf; 385 - int offset; 386 - int bufsize; 387 - 388 - /* we need to round up the buffer size to a multiple of 128 */ 389 - bufsize = (SPIDER_NET_MAX_FRAME + SPIDER_NET_RXBUF_ALIGN - 1) & 390 - (~(SPIDER_NET_RXBUF_ALIGN - 1)); 391 - 392 - /* and we need to have it 128 byte aligned, therefore we allocate a 393 - * bit more 394 - */ 395 - /* allocate an skb */ 396 - descr->skb = netdev_alloc_skb(card->netdev, 397 - bufsize + SPIDER_NET_RXBUF_ALIGN - 1); 398 - if (!descr->skb) { 399 - if (netif_msg_rx_err(card) && net_ratelimit()) 400 - dev_err(&card->netdev->dev, 401 - "Not enough memory to allocate rx buffer\n"); 402 - card->spider_stats.alloc_rx_skb_error++; 403 - return -ENOMEM; 404 - } 405 - hwdescr->buf_size = bufsize; 406 - hwdescr->result_size = 0; 407 - hwdescr->valid_size = 0; 408 - hwdescr->data_status = 0; 409 - hwdescr->data_error = 0; 410 - 411 - offset = ((unsigned long)descr->skb->data) & 412 - (SPIDER_NET_RXBUF_ALIGN - 1); 413 - if (offset) 414 - skb_reserve(descr->skb, SPIDER_NET_RXBUF_ALIGN - offset); 415 - /* iommu-map the skb */ 416 - buf = dma_map_single(&card->pdev->dev, descr->skb->data, 417 - SPIDER_NET_MAX_FRAME, DMA_FROM_DEVICE); 418 - if (dma_mapping_error(&card->pdev->dev, buf)) { 419 - dev_kfree_skb_any(descr->skb); 420 - descr->skb = NULL; 421 - if (netif_msg_rx_err(card) && net_ratelimit()) 422 - dev_err(&card->netdev->dev, "Could not iommu-map rx buffer\n"); 423 - card->spider_stats.rx_iommu_map_error++; 424 - hwdescr->dmac_cmd_status = SPIDER_NET_DESCR_NOT_IN_USE; 425 - } else { 426 - hwdescr->buf_addr = buf; 427 - wmb(); 428 - hwdescr->dmac_cmd_status = SPIDER_NET_DESCR_CARDOWNED | 429 - SPIDER_NET_DMAC_NOINTR_COMPLETE; 430 - } 431 - 432 - return 0; 433 - } 434 - 435 - /** 436 - * spider_net_enable_rxchtails - sets RX dmac 
chain tail addresses 437 - * @card: card structure 438 - * 439 - * spider_net_enable_rxchtails sets the RX DMAC chain tail addresses in the 440 - * chip by writing to the appropriate register. DMA is enabled in 441 - * spider_net_enable_rxdmac. 442 - */ 443 - static inline void 444 - spider_net_enable_rxchtails(struct spider_net_card *card) 445 - { 446 - /* assume chain is aligned correctly */ 447 - spider_net_write_reg(card, SPIDER_NET_GDADCHA , 448 - card->rx_chain.tail->bus_addr); 449 - } 450 - 451 - /** 452 - * spider_net_enable_rxdmac - enables a receive DMA controller 453 - * @card: card structure 454 - * 455 - * spider_net_enable_rxdmac enables the DMA controller by setting RX_DMA_EN 456 - * in the GDADMACCNTR register 457 - */ 458 - static inline void 459 - spider_net_enable_rxdmac(struct spider_net_card *card) 460 - { 461 - wmb(); 462 - spider_net_write_reg(card, SPIDER_NET_GDADMACCNTR, 463 - SPIDER_NET_DMA_RX_VALUE); 464 - } 465 - 466 - /** 467 - * spider_net_disable_rxdmac - disables the receive DMA controller 468 - * @card: card structure 469 - * 470 - * spider_net_disable_rxdmac terminates processing on the DMA controller 471 - * by turing off the DMA controller, with the force-end flag set. 472 - */ 473 - static inline void 474 - spider_net_disable_rxdmac(struct spider_net_card *card) 475 - { 476 - spider_net_write_reg(card, SPIDER_NET_GDADMACCNTR, 477 - SPIDER_NET_DMA_RX_FEND_VALUE); 478 - } 479 - 480 - /** 481 - * spider_net_refill_rx_chain - refills descriptors/skbs in the rx chains 482 - * @card: card structure 483 - * 484 - * refills descriptors in the rx chain: allocates skbs and iommu-maps them. 485 - */ 486 - static void 487 - spider_net_refill_rx_chain(struct spider_net_card *card) 488 - { 489 - struct spider_net_descr_chain *chain = &card->rx_chain; 490 - unsigned long flags; 491 - 492 - /* one context doing the refill (and a second context seeing that 493 - * and omitting it) is ok. 
If called by NAPI, we'll be called again 494 - * as spider_net_decode_one_descr is called several times. If some 495 - * interrupt calls us, the NAPI is about to clean up anyway. 496 - */ 497 - if (!spin_trylock_irqsave(&chain->lock, flags)) 498 - return; 499 - 500 - while (spider_net_get_descr_status(chain->head->hwdescr) == 501 - SPIDER_NET_DESCR_NOT_IN_USE) { 502 - if (spider_net_prepare_rx_descr(card, chain->head)) 503 - break; 504 - chain->head = chain->head->next; 505 - } 506 - 507 - spin_unlock_irqrestore(&chain->lock, flags); 508 - } 509 - 510 - /** 511 - * spider_net_alloc_rx_skbs - Allocates rx skbs in rx descriptor chains 512 - * @card: card structure 513 - * 514 - * Returns 0 on success, <0 on failure. 515 - */ 516 - static int 517 - spider_net_alloc_rx_skbs(struct spider_net_card *card) 518 - { 519 - struct spider_net_descr_chain *chain = &card->rx_chain; 520 - struct spider_net_descr *start = chain->tail; 521 - struct spider_net_descr *descr = start; 522 - 523 - /* Link up the hardware chain pointers */ 524 - do { 525 - descr->prev->hwdescr->next_descr_addr = descr->bus_addr; 526 - descr = descr->next; 527 - } while (descr != start); 528 - 529 - /* Put at least one buffer into the chain. if this fails, 530 - * we've got a problem. If not, spider_net_refill_rx_chain 531 - * will do the rest at the end of this function. 532 - */ 533 - if (spider_net_prepare_rx_descr(card, chain->head)) 534 - goto error; 535 - else 536 - chain->head = chain->head->next; 537 - 538 - /* This will allocate the rest of the rx buffers; 539 - * if not, it's business as usual later on. 
540 - */ 541 - spider_net_refill_rx_chain(card); 542 - spider_net_enable_rxdmac(card); 543 - return 0; 544 - 545 - error: 546 - spider_net_free_rx_chain_contents(card); 547 - return -ENOMEM; 548 - } 549 - 550 - /** 551 - * spider_net_get_multicast_hash - generates hash for multicast filter table 552 - * @netdev: interface device structure 553 - * @addr: multicast address 554 - * 555 - * returns the hash value. 556 - * 557 - * spider_net_get_multicast_hash calculates a hash value for a given multicast 558 - * address, that is used to set the multicast filter tables 559 - */ 560 - static u8 561 - spider_net_get_multicast_hash(struct net_device *netdev, __u8 *addr) 562 - { 563 - u32 crc; 564 - u8 hash; 565 - char addr_for_crc[ETH_ALEN] = { 0, }; 566 - int i, bit; 567 - 568 - for (i = 0; i < ETH_ALEN * 8; i++) { 569 - bit = (addr[i / 8] >> (i % 8)) & 1; 570 - addr_for_crc[ETH_ALEN - 1 - i / 8] += bit << (7 - (i % 8)); 571 - } 572 - 573 - crc = crc32_be(~0, addr_for_crc, netdev->addr_len); 574 - 575 - hash = (crc >> 27); 576 - hash <<= 3; 577 - hash |= crc & 7; 578 - hash &= 0xff; 579 - 580 - return hash; 581 - } 582 - 583 - /** 584 - * spider_net_set_multi - sets multicast addresses and promisc flags 585 - * @netdev: interface device structure 586 - * 587 - * spider_net_set_multi configures multicast addresses as needed for the 588 - * netdev interface. 
 * It also sets up multicast, allmulti and promisc
 * flags appropriately
 */
static void
spider_net_set_multi(struct net_device *netdev)
{
	struct netdev_hw_addr *ha;
	u8 hash;
	int i;
	u32 reg;
	struct spider_net_card *card = netdev_priv(netdev);
	DECLARE_BITMAP(bitmask, SPIDER_NET_MULTICAST_HASHES);

	spider_net_set_promisc(card);

	if (netdev->flags & IFF_ALLMULTI) {
		bitmap_fill(bitmask, SPIDER_NET_MULTICAST_HASHES);
		goto write_hash;
	}

	bitmap_zero(bitmask, SPIDER_NET_MULTICAST_HASHES);

	/* well, we know, what the broadcast hash value is: it's xfd
	hash = spider_net_get_multicast_hash(netdev, netdev->broadcast); */
	__set_bit(0xfd, bitmask);

	netdev_for_each_mc_addr(ha, netdev) {
		hash = spider_net_get_multicast_hash(netdev, ha->addr);
		__set_bit(hash, bitmask);
	}

write_hash:
	for (i = 0; i < SPIDER_NET_MULTICAST_HASHES / 4; i++) {
		reg = 0;
		if (test_bit(i * 4, bitmask))
			reg += 0x08;
		reg <<= 8;
		if (test_bit(i * 4 + 1, bitmask))
			reg += 0x08;
		reg <<= 8;
		if (test_bit(i * 4 + 2, bitmask))
			reg += 0x08;
		reg <<= 8;
		if (test_bit(i * 4 + 3, bitmask))
			reg += 0x08;

		spider_net_write_reg(card, SPIDER_NET_GMRMHFILnR + i * 4, reg);
	}
}

/**
 * spider_net_prepare_tx_descr - fill tx descriptor with skb data
 * @card: card structure
 * @skb: packet to use
 *
 * returns 0 on success, <0 on failure.
 *
 * fills out the descriptor structure with skb data and len. Copies data,
 * if needed (32bit DMA!)
 */
static int
spider_net_prepare_tx_descr(struct spider_net_card *card,
			    struct sk_buff *skb)
{
	struct spider_net_descr_chain *chain = &card->tx_chain;
	struct spider_net_descr *descr;
	struct spider_net_hw_descr *hwdescr;
	dma_addr_t buf;
	unsigned long flags;

	buf = dma_map_single(&card->pdev->dev, skb->data, skb->len,
			     DMA_TO_DEVICE);
	if (dma_mapping_error(&card->pdev->dev, buf)) {
		if (netif_msg_tx_err(card) && net_ratelimit())
			dev_err(&card->netdev->dev, "could not iommu-map packet (%p, %i). "
				"Dropping packet\n", skb->data, skb->len);
		card->spider_stats.tx_iommu_map_error++;
		return -ENOMEM;
	}

	spin_lock_irqsave(&chain->lock, flags);
	descr = card->tx_chain.head;
	if (descr->next == chain->tail->prev) {
		spin_unlock_irqrestore(&chain->lock, flags);
		dma_unmap_single(&card->pdev->dev, buf, skb->len,
				 DMA_TO_DEVICE);
		return -ENOMEM;
	}
	hwdescr = descr->hwdescr;
	chain->head = descr->next;

	descr->skb = skb;
	hwdescr->buf_addr = buf;
	hwdescr->buf_size = skb->len;
	hwdescr->next_descr_addr = 0;
	hwdescr->data_status = 0;

	hwdescr->dmac_cmd_status =
		SPIDER_NET_DESCR_CARDOWNED | SPIDER_NET_DMAC_TXFRMTL;
	spin_unlock_irqrestore(&chain->lock, flags);

	if (skb->ip_summed == CHECKSUM_PARTIAL)
		switch (ip_hdr(skb)->protocol) {
		case IPPROTO_TCP:
			hwdescr->dmac_cmd_status |= SPIDER_NET_DMAC_TCP;
			break;
		case IPPROTO_UDP:
			hwdescr->dmac_cmd_status |= SPIDER_NET_DMAC_UDP;
			break;
		}

	/* Chain the bus address, so that the DMA engine finds this descr.
	 */
	wmb();
	descr->prev->hwdescr->next_descr_addr = descr->bus_addr;

	netif_trans_update(card->netdev); /* set netdev watchdog timer */
	return 0;
}

static int
spider_net_set_low_watermark(struct spider_net_card *card)
{
	struct spider_net_descr *descr = card->tx_chain.tail;
	struct spider_net_hw_descr *hwdescr;
	unsigned long flags;
	int status;
	int cnt = 0;
	int i;

	/* Measure the length of the queue. Measurement does not
	 * need to be precise -- does not need a lock.
	 */
	while (descr != card->tx_chain.head) {
		status = descr->hwdescr->dmac_cmd_status & SPIDER_NET_DESCR_NOT_IN_USE;
		if (status == SPIDER_NET_DESCR_NOT_IN_USE)
			break;
		descr = descr->next;
		cnt++;
	}

	/* If TX queue is short, don't even bother with interrupts */
	if (cnt < card->tx_chain.num_desc / 4)
		return cnt;

	/* Set low-watermark 3/4th's of the way into the queue. */
	descr = card->tx_chain.tail;
	cnt = (cnt * 3) / 4;
	for (i = 0; i < cnt; i++)
		descr = descr->next;

	/* Set the new watermark, clear the old watermark */
	spin_lock_irqsave(&card->tx_chain.lock, flags);
	descr->hwdescr->dmac_cmd_status |= SPIDER_NET_DESCR_TXDESFLG;
	if (card->low_watermark && card->low_watermark != descr) {
		hwdescr = card->low_watermark->hwdescr;
		hwdescr->dmac_cmd_status =
			hwdescr->dmac_cmd_status & ~SPIDER_NET_DESCR_TXDESFLG;
	}
	card->low_watermark = descr;
	spin_unlock_irqrestore(&card->tx_chain.lock, flags);
	return cnt;
}

/**
 * spider_net_release_tx_chain - processes sent tx descriptors
 * @card: adapter structure
 * @brutal: if set, don't care about whether descriptor seems to be in use
 *
 * returns 0 if the tx ring is empty, otherwise 1.
 *
 * spider_net_release_tx_chain releases the tx descriptors that spider has
 * finished with (if non-brutal) or simply release tx descriptors (if brutal).
 * If some other context is calling this function, we return 1 so that we're
 * scheduled again (if we were scheduled) and will not lose initiative.
 */
static int
spider_net_release_tx_chain(struct spider_net_card *card, int brutal)
{
	struct net_device *dev = card->netdev;
	struct spider_net_descr_chain *chain = &card->tx_chain;
	struct spider_net_descr *descr;
	struct spider_net_hw_descr *hwdescr;
	struct sk_buff *skb;
	u32 buf_addr;
	unsigned long flags;
	int status;

	while (1) {
		spin_lock_irqsave(&chain->lock, flags);
		if (chain->tail == chain->head) {
			spin_unlock_irqrestore(&chain->lock, flags);
			return 0;
		}
		descr = chain->tail;
		hwdescr = descr->hwdescr;

		status = spider_net_get_descr_status(hwdescr);
		switch (status) {
		case SPIDER_NET_DESCR_COMPLETE:
			dev->stats.tx_packets++;
			dev->stats.tx_bytes += descr->skb->len;
			break;

		case SPIDER_NET_DESCR_CARDOWNED:
			if (!brutal) {
				spin_unlock_irqrestore(&chain->lock, flags);
				return 1;
			}

			/* fallthrough, if we release the descriptors
			 * brutally (then we don't care about
			 * SPIDER_NET_DESCR_CARDOWNED)
			 */
			fallthrough;

		case SPIDER_NET_DESCR_RESPONSE_ERROR:
		case SPIDER_NET_DESCR_PROTECTION_ERROR:
		case SPIDER_NET_DESCR_FORCE_END:
			if (netif_msg_tx_err(card))
				dev_err(&card->netdev->dev, "forcing end of tx descriptor "
					"with status x%02x\n", status);
			dev->stats.tx_errors++;
			break;

		default:
			dev->stats.tx_dropped++;
			if (!brutal) {
				spin_unlock_irqrestore(&chain->lock, flags);
				return 1;
			}
		}

		chain->tail = descr->next;
		hwdescr->dmac_cmd_status |= SPIDER_NET_DESCR_NOT_IN_USE;
		skb = descr->skb;
		descr->skb = NULL;
		buf_addr = hwdescr->buf_addr;
		spin_unlock_irqrestore(&chain->lock, flags);

		/* unmap the skb */
		if (skb) {
			dma_unmap_single(&card->pdev->dev, buf_addr, skb->len,
					 DMA_TO_DEVICE);
			dev_consume_skb_any(skb);
		}
	}
	return 0;
}

/**
 * spider_net_kick_tx_dma - enables TX DMA processing
 * @card: card structure
 *
 * This routine will start the transmit DMA running if
 * it is not already running. This routine need only be
 * called when queueing a new packet to an empty tx queue.
 * Writes the current tx chain head as start address
 * of the tx descriptor chain and enables the transmission
 * DMA engine.
 */
static inline void
spider_net_kick_tx_dma(struct spider_net_card *card)
{
	struct spider_net_descr *descr;

	if (spider_net_read_reg(card, SPIDER_NET_GDTDMACCNTR) &
	    SPIDER_NET_TX_DMA_EN)
		goto out;

	descr = card->tx_chain.tail;
	for (;;) {
		if (spider_net_get_descr_status(descr->hwdescr) ==
		    SPIDER_NET_DESCR_CARDOWNED) {
			spider_net_write_reg(card, SPIDER_NET_GDTDCHA,
					     descr->bus_addr);
			spider_net_write_reg(card, SPIDER_NET_GDTDMACCNTR,
					     SPIDER_NET_DMA_TX_VALUE);
			break;
		}
		if (descr == card->tx_chain.head)
			break;
		descr = descr->next;
	}

out:
	mod_timer(&card->tx_timer, jiffies + SPIDER_NET_TX_TIMER);
}

/**
 * spider_net_xmit - transmits a frame over the device
 * @skb: packet to send out
 * @netdev: interface device structure
 *
 * returns NETDEV_TX_OK on success, NETDEV_TX_BUSY on failure
 */
static netdev_tx_t
spider_net_xmit(struct sk_buff *skb, struct net_device *netdev)
{
	int cnt;
	struct spider_net_card *card =
		netdev_priv(netdev);

	spider_net_release_tx_chain(card, 0);

	if (spider_net_prepare_tx_descr(card, skb) != 0) {
		netdev->stats.tx_dropped++;
		netif_stop_queue(netdev);
		return NETDEV_TX_BUSY;
	}

	cnt = spider_net_set_low_watermark(card);
	if (cnt < 5)
		spider_net_kick_tx_dma(card);
	return NETDEV_TX_OK;
}

/**
 * spider_net_cleanup_tx_ring - cleans up the TX ring
 * @t: timer context used to obtain the pointer to net card data structure
 *
 * spider_net_cleanup_tx_ring is called by either the tx_timer
 * or from the NAPI polling routine.
 * This routine releases resources associated with transmitted
 * packets, including updating the queue tail pointer.
 */
static void
spider_net_cleanup_tx_ring(struct timer_list *t)
{
	struct spider_net_card *card = from_timer(card, t, tx_timer);

	if ((spider_net_release_tx_chain(card, 0) != 0) &&
	    (card->netdev->flags & IFF_UP)) {
		spider_net_kick_tx_dma(card);
		netif_wake_queue(card->netdev);
	}
}

/**
 * spider_net_do_ioctl - called for device ioctls
 * @netdev: interface device structure
 * @ifr: request parameter structure for ioctl
 * @cmd: command code for ioctl
 *
 * returns 0 on success, <0 on failure. Currently, we have no special ioctls.
 * -EOPNOTSUPP is returned, if an unknown ioctl was requested
 */
static int
spider_net_do_ioctl(struct net_device *netdev, struct ifreq *ifr, int cmd)
{
	switch (cmd) {
	default:
		return -EOPNOTSUPP;
	}
}

/**
 * spider_net_pass_skb_up - takes an skb from a descriptor and passes it on
 * @descr: descriptor to process
 * @card: card structure
 *
 * Fills out skb structure and passes the data to the stack.
 * The descriptor state is not changed.
 */
static void
spider_net_pass_skb_up(struct spider_net_descr *descr,
		       struct spider_net_card *card)
{
	struct spider_net_hw_descr *hwdescr = descr->hwdescr;
	struct sk_buff *skb = descr->skb;
	struct net_device *netdev = card->netdev;
	u32 data_status = hwdescr->data_status;
	u32 data_error = hwdescr->data_error;

	skb_put(skb, hwdescr->valid_size);

	/* the card seems to add 2 bytes of junk in front
	 * of the ethernet frame
	 */
#define SPIDER_MISALIGN 2
	skb_pull(skb, SPIDER_MISALIGN);
	skb->protocol = eth_type_trans(skb, netdev);

	/* checksum offload */
	skb_checksum_none_assert(skb);
	if (netdev->features & NETIF_F_RXCSUM) {
		if (((data_status & SPIDER_NET_DATA_STATUS_CKSUM_MASK) ==
		     SPIDER_NET_DATA_STATUS_CKSUM_MASK) &&
		    !(data_error & SPIDER_NET_DATA_ERR_CKSUM_MASK))
			skb->ip_summed = CHECKSUM_UNNECESSARY;
	}

	if (data_status & SPIDER_NET_VLAN_PACKET) {
		/* further enhancements: HW-accel VLAN */
	}

	/* update netdevice statistics */
	netdev->stats.rx_packets++;
	netdev->stats.rx_bytes += skb->len;

	/* pass skb up to stack */
	netif_receive_skb(skb);
}

static void show_rx_chain(struct spider_net_card *card)
{
	struct spider_net_descr_chain *chain = &card->rx_chain;
	struct spider_net_descr *start = chain->tail;
	struct spider_net_descr *descr = start;
	struct spider_net_hw_descr *hwd = start->hwdescr;
	struct device *dev = &card->netdev->dev;
	u32 curr_desc, next_desc;
	int status;

	int tot = 0;
	int cnt = 0;
	int off = start - chain->ring;
	int cstat = hwd->dmac_cmd_status;

	dev_info(dev, "Total number of descrs=%d\n",
		 chain->num_desc);
	dev_info(dev, "Chain tail located at descr=%d, status=0x%x\n",
		 off, cstat);

	curr_desc =
		spider_net_read_reg(card, SPIDER_NET_GDACTDPA);
	next_desc = spider_net_read_reg(card, SPIDER_NET_GDACNEXTDA);

	status = cstat;
	do {
		hwd = descr->hwdescr;
		off = descr - chain->ring;
		status = hwd->dmac_cmd_status;

		if (descr == chain->head)
			dev_info(dev, "Chain head is at %d, head status=0x%x\n",
				 off, status);

		if (curr_desc == descr->bus_addr)
			dev_info(dev, "HW curr desc (GDACTDPA) is at %d, status=0x%x\n",
				 off, status);

		if (next_desc == descr->bus_addr)
			dev_info(dev, "HW next desc (GDACNEXTDA) is at %d, status=0x%x\n",
				 off, status);

		if (hwd->next_descr_addr == 0)
			dev_info(dev, "chain is cut at %d\n", off);

		if (cstat != status) {
			int from = (chain->num_desc + off - cnt) % chain->num_desc;
			int to = (chain->num_desc + off - 1) % chain->num_desc;
			dev_info(dev, "Have %d (from %d to %d) descrs "
				 "with stat=0x%08x\n", cnt, from, to, cstat);
			cstat = status;
			cnt = 0;
		}

		cnt++;
		tot++;
		descr = descr->next;
	} while (descr != start);

	dev_info(dev, "Last %d descrs with stat=0x%08x "
		 "for a total of %d descrs\n", cnt, cstat, tot);

#ifdef DEBUG
	/* Now dump the whole ring */
	descr = start;
	do {
		struct spider_net_hw_descr *hwd = descr->hwdescr;
		status = spider_net_get_descr_status(hwd);
		cnt = descr - chain->ring;
		dev_info(dev, "Descr %d stat=0x%08x skb=%p\n",
			 cnt, status, descr->skb);
		dev_info(dev, "bus addr=%08x buf addr=%08x sz=%d\n",
			 descr->bus_addr, hwd->buf_addr, hwd->buf_size);
		dev_info(dev, "next=%08x result sz=%d valid sz=%d\n",
			 hwd->next_descr_addr, hwd->result_size,
			 hwd->valid_size);
		dev_info(dev, "dmac=%08x data stat=%08x data err=%08x\n",
			 hwd->dmac_cmd_status,
			 hwd->data_status,
			 hwd->data_error);
		dev_info(dev, "\n");

		descr = descr->next;
	} while (descr != start);
#endif
}

/**
 * spider_net_resync_head_ptr - Advance head ptr past empty descrs
 * @card: card structure
 *
 * If the driver fails to keep up and empty the queue, then the
 * hardware will run out of room to put incoming packets. This
 * will cause the hardware to skip descrs that are full (instead
 * of halting/retrying). Thus, once the driver runs, it will need
 * to "catch up" to where the hardware chain pointer is at.
 */
static void spider_net_resync_head_ptr(struct spider_net_card *card)
{
	unsigned long flags;
	struct spider_net_descr_chain *chain = &card->rx_chain;
	struct spider_net_descr *descr;
	int i, status;

	/* Advance head pointer past any empty descrs */
	descr = chain->head;
	status = spider_net_get_descr_status(descr->hwdescr);

	if (status == SPIDER_NET_DESCR_NOT_IN_USE)
		return;

	spin_lock_irqsave(&chain->lock, flags);

	descr = chain->head;
	status = spider_net_get_descr_status(descr->hwdescr);
	for (i = 0; i < chain->num_desc; i++) {
		if (status != SPIDER_NET_DESCR_CARDOWNED)
			break;
		descr = descr->next;
		status = spider_net_get_descr_status(descr->hwdescr);
	}
	chain->head = descr;

	spin_unlock_irqrestore(&chain->lock, flags);
}

static int spider_net_resync_tail_ptr(struct spider_net_card *card)
{
	struct spider_net_descr_chain *chain = &card->rx_chain;
	struct spider_net_descr *descr;
	int i, status;

	/* Advance tail pointer past any empty and reaped descrs */
	descr = chain->tail;
	status = spider_net_get_descr_status(descr->hwdescr);

	for (i = 0; i < chain->num_desc; i++) {
		if
		    ((status != SPIDER_NET_DESCR_CARDOWNED) &&
		     (status != SPIDER_NET_DESCR_NOT_IN_USE))
			break;
		descr = descr->next;
		status = spider_net_get_descr_status(descr->hwdescr);
	}
	chain->tail = descr;

	if ((i == chain->num_desc) || (i == 0))
		return 1;
	return 0;
}

/**
 * spider_net_decode_one_descr - processes an RX descriptor
 * @card: card structure
 *
 * Returns 1 if a packet has been sent to the stack, otherwise 0.
 *
 * Processes an RX descriptor by iommu-unmapping the data buffer
 * and passing the packet up to the stack. This function is called
 * in softirq context, e.g. either bottom half from interrupt or
 * NAPI polling context.
 */
static int
spider_net_decode_one_descr(struct spider_net_card *card)
{
	struct net_device *dev = card->netdev;
	struct spider_net_descr_chain *chain = &card->rx_chain;
	struct spider_net_descr *descr = chain->tail;
	struct spider_net_hw_descr *hwdescr = descr->hwdescr;
	u32 hw_buf_addr;
	int status;

	status = spider_net_get_descr_status(hwdescr);

	/* Nothing in the descriptor, or ring must be empty */
	if ((status == SPIDER_NET_DESCR_CARDOWNED) ||
	    (status == SPIDER_NET_DESCR_NOT_IN_USE))
		return 0;

	/* descriptor definitively used -- move on tail */
	chain->tail = descr->next;

	/* unmap descriptor */
	hw_buf_addr = hwdescr->buf_addr;
	hwdescr->buf_addr = 0xffffffff;
	dma_unmap_single(&card->pdev->dev, hw_buf_addr, SPIDER_NET_MAX_FRAME,
			 DMA_FROM_DEVICE);

	if ((status == SPIDER_NET_DESCR_RESPONSE_ERROR) ||
	    (status == SPIDER_NET_DESCR_PROTECTION_ERROR) ||
	    (status == SPIDER_NET_DESCR_FORCE_END)) {
		if (netif_msg_rx_err(card))
			dev_err(&dev->dev,
				"dropping RX descriptor with state %d\n", status);
		dev->stats.rx_dropped++;
		goto bad_desc;
	}

	if ((status != SPIDER_NET_DESCR_COMPLETE) &&
	    (status != SPIDER_NET_DESCR_FRAME_END)) {
		if (netif_msg_rx_err(card))
			dev_err(&card->netdev->dev,
				"RX descriptor with unknown state %d\n", status);
		card->spider_stats.rx_desc_unk_state++;
		goto bad_desc;
	}

	/* The cases we'll throw away the packet immediately */
	if (hwdescr->data_error & SPIDER_NET_DESTROY_RX_FLAGS) {
		if (netif_msg_rx_err(card))
			dev_err(&card->netdev->dev,
				"error in received descriptor found, "
				"data_status=x%08x, data_error=x%08x\n",
				hwdescr->data_status, hwdescr->data_error);
		goto bad_desc;
	}

	if (hwdescr->dmac_cmd_status & SPIDER_NET_DESCR_BAD_STATUS) {
		dev_err(&card->netdev->dev, "bad status, cmd_status=x%08x\n",
			hwdescr->dmac_cmd_status);
		pr_err("buf_addr=x%08x\n", hw_buf_addr);
		pr_err("buf_size=x%08x\n", hwdescr->buf_size);
		pr_err("next_descr_addr=x%08x\n", hwdescr->next_descr_addr);
		pr_err("result_size=x%08x\n", hwdescr->result_size);
		pr_err("valid_size=x%08x\n", hwdescr->valid_size);
		pr_err("data_status=x%08x\n", hwdescr->data_status);
		pr_err("data_error=x%08x\n", hwdescr->data_error);
		pr_err("which=%ld\n", descr - card->rx_chain.ring);

		card->spider_stats.rx_desc_error++;
		goto bad_desc;
	}

	/* Ok, we've got a packet in descr */
	spider_net_pass_skb_up(descr, card);
	descr->skb = NULL;
	hwdescr->dmac_cmd_status = SPIDER_NET_DESCR_NOT_IN_USE;
	return 1;

bad_desc:
	if (netif_msg_rx_err(card))
		show_rx_chain(card);
	dev_kfree_skb_irq(descr->skb);
	descr->skb = NULL;
	hwdescr->dmac_cmd_status = SPIDER_NET_DESCR_NOT_IN_USE;
	return 0;
}

/**
 * spider_net_poll - NAPI poll function called by
 * the stack to return packets
 * @napi: napi device structure
 * @budget: number of packets we can pass to the stack at most
 *
 * returns 0 if no more packets available to the driver/stack. Returns 1
 * if the quota is exceeded but the driver still has packets.
 *
 * spider_net_poll returns all packets from the rx descriptors to the stack
 * (using netif_receive_skb). If all/enough packets are up, the driver
 * reenables interrupts and returns 0. If not, 1 is returned.
 */
static int spider_net_poll(struct napi_struct *napi, int budget)
{
	struct spider_net_card *card = container_of(napi, struct spider_net_card, napi);
	int packets_done = 0;

	while (packets_done < budget) {
		if (!spider_net_decode_one_descr(card))
			break;

		packets_done++;
	}

	if ((packets_done == 0) && (card->num_rx_ints != 0)) {
		if (!spider_net_resync_tail_ptr(card))
			packets_done = budget;
		spider_net_resync_head_ptr(card);
	}
	card->num_rx_ints = 0;

	spider_net_refill_rx_chain(card);
	spider_net_enable_rxdmac(card);

	spider_net_cleanup_tx_ring(&card->tx_timer);

	/* if all packets are in the stack, enable interrupts and return 0 */
	/* if not, return 1 */
	if (packets_done < budget) {
		napi_complete_done(napi, packets_done);
		spider_net_rx_irq_on(card);
		card->ignore_rx_ramfull = 0;
	}

	return packets_done;
}

/**
 * spider_net_set_mac - sets the MAC of an interface
 * @netdev: interface device structure
 * @p: pointer to new MAC address
 *
 * Returns 0 on success, <0 on failure. Currently, we don't support this
 * and will always return EOPNOTSUPP.
 */
static int
spider_net_set_mac(struct net_device *netdev, void *p)
{
	struct spider_net_card *card = netdev_priv(netdev);
	u32 macl, macu, regvalue;
	struct sockaddr *addr = p;

	if (!is_valid_ether_addr(addr->sa_data))
		return -EADDRNOTAVAIL;

	eth_hw_addr_set(netdev, addr->sa_data);

	/* switch off GMACTPE and GMACRPE */
	regvalue = spider_net_read_reg(card, SPIDER_NET_GMACOPEMD);
	regvalue &= ~((1 << 5) | (1 << 6));
	spider_net_write_reg(card, SPIDER_NET_GMACOPEMD, regvalue);

	/* write mac */
	macu = (netdev->dev_addr[0] << 24) + (netdev->dev_addr[1] << 16) +
	       (netdev->dev_addr[2] << 8) + (netdev->dev_addr[3]);
	macl = (netdev->dev_addr[4] << 8) + (netdev->dev_addr[5]);
	spider_net_write_reg(card, SPIDER_NET_GMACUNIMACU, macu);
	spider_net_write_reg(card, SPIDER_NET_GMACUNIMACL, macl);

	/* switch GMACTPE and GMACRPE back on */
	regvalue = spider_net_read_reg(card, SPIDER_NET_GMACOPEMD);
	regvalue |= ((1 << 5) | (1 << 6));
	spider_net_write_reg(card, SPIDER_NET_GMACOPEMD, regvalue);

	spider_net_set_promisc(card);

	return 0;
}

/**
 * spider_net_link_reset
 * @netdev: net device structure
 *
 * This is called when the PHY_LINK signal is asserted. For the blade this is
 * not connected so we should never get here.
 *
 */
static void
spider_net_link_reset(struct net_device *netdev)
{
	struct spider_net_card *card = netdev_priv(netdev);

	del_timer_sync(&card->aneg_timer);

	/* clear interrupt, block further interrupts */
	spider_net_write_reg(card, SPIDER_NET_GMACST,
			     spider_net_read_reg(card, SPIDER_NET_GMACST));
	spider_net_write_reg(card, SPIDER_NET_GMACINTEN, 0);

	/* reset phy and setup aneg */
	card->aneg_count = 0;
	card->medium = BCM54XX_COPPER;
	spider_net_setup_aneg(card);
	mod_timer(&card->aneg_timer, jiffies + SPIDER_NET_ANEG_TIMER);
}

/**
 * spider_net_handle_error_irq - handles errors raised by an interrupt
 * @card: card structure
 * @status_reg: interrupt status register 0 (GHIINT0STS)
 * @error_reg1: interrupt status register 1 (GHIINT1STS)
 * @error_reg2: interrupt status register 2 (GHIINT2STS)
 *
 * spider_net_handle_error_irq treats or ignores all error conditions
 * found when an interrupt is presented
 */
static void
spider_net_handle_error_irq(struct spider_net_card *card, u32 status_reg,
			    u32 error_reg1, u32 error_reg2)
{
	u32 i;
	int show_error = 1;

	/* check GHIINT0STS ************************************/
	if (status_reg)
		for (i = 0; i < 32; i++)
			if (status_reg & (1 << i))
				switch (i) {
				/* let error_reg1 and error_reg2 evaluation decide, what to do
				case SPIDER_NET_PHYINT:
				case SPIDER_NET_GMAC2INT:
				case SPIDER_NET_GMAC1INT:
				case SPIDER_NET_GFIFOINT:
				case SPIDER_NET_DMACINT:
				case SPIDER_NET_GSYSINT:
					break; */

				case SPIDER_NET_GIPSINT:
					show_error = 0;
					break;

				case SPIDER_NET_GPWOPCMPINT:
					/* PHY write operation completed */
					show_error = 0;
					break;
				case SPIDER_NET_GPROPCMPINT:
					/* PHY read operation completed */
					/* we don't use semaphores, as we poll for the completion
					 * of the read operation in spider_net_read_phy. Should take
					 * about 50 us
					 */
					show_error = 0;
					break;
				case SPIDER_NET_GPWFFINT:
					/* PHY command queue full */
					if (netif_msg_intr(card))
						dev_err(&card->netdev->dev, "PHY write queue full\n");
					show_error = 0;
					break;

				/* case SPIDER_NET_GRMDADRINT: not used. print a message */
				/* case SPIDER_NET_GRMARPINT: not used. print a message */
				/* case SPIDER_NET_GRMMPINT: not used. print a message */

				case SPIDER_NET_GDTDEN0INT:
					/* someone has set TX_DMA_EN to 0 */
					show_error = 0;
					break;

				case SPIDER_NET_GDDDEN0INT:
				case SPIDER_NET_GDCDEN0INT:
				case SPIDER_NET_GDBDEN0INT:
				case SPIDER_NET_GDADEN0INT:
					/* someone has set RX_DMA_EN to 0 */
					show_error = 0;
					break;

				/* RX interrupts */
				case SPIDER_NET_GDDFDCINT:
				case SPIDER_NET_GDCFDCINT:
				case SPIDER_NET_GDBFDCINT:
				case SPIDER_NET_GDAFDCINT:
				/* case SPIDER_NET_GDNMINT: not used. print a message */
				/* case SPIDER_NET_GCNMINT: not used. print a message */
				/* case SPIDER_NET_GBNMINT: not used. print a message */
				/* case SPIDER_NET_GANMINT: not used. print a message */
				/* case SPIDER_NET_GRFNMINT: not used. print a message */
					show_error = 0;
					break;

				/* TX interrupts */
				case SPIDER_NET_GDTFDCINT:
					show_error = 0;
					break;
				case SPIDER_NET_GTTEDINT:
					show_error = 0;
					break;
				case SPIDER_NET_GDTDCEINT:
					/* chain end.
					 * If a descriptor should be sent, kick off
					 * tx dma
					if (card->tx_chain.tail != card->tx_chain.head)
						spider_net_kick_tx_dma(card);
					*/
					show_error = 0;
					break;

				/* case SPIDER_NET_G1TMCNTINT: not used. print a message */
				/* case SPIDER_NET_GFREECNTINT: not used. print a message */
				}

	/* check GHIINT1STS ************************************/
	if (error_reg1)
		for (i = 0; i < 32; i++)
			if (error_reg1 & (1 << i))
				switch (i) {
				case SPIDER_NET_GTMFLLINT:
					/* TX RAM full may happen on a usual case.
					 * Logging is not needed.
					 */
					show_error = 0;
					break;
				case SPIDER_NET_GRFDFLLINT:
				case SPIDER_NET_GRFCFLLINT:
				case SPIDER_NET_GRFBFLLINT:
				case SPIDER_NET_GRFAFLLINT:
				case SPIDER_NET_GRMFLLINT:
					/* Could happen when rx chain is full */
					if (card->ignore_rx_ramfull == 0) {
						card->ignore_rx_ramfull = 1;
						spider_net_resync_head_ptr(card);
						spider_net_refill_rx_chain(card);
						spider_net_enable_rxdmac(card);
						card->num_rx_ints++;
						napi_schedule(&card->napi);
					}
					show_error = 0;
					break;

				/* case SPIDER_NET_GTMSHTINT: problem, print a message */
				case SPIDER_NET_GDTINVDINT:
					/* allrighty.
					 * tx from previous descr ok
					 */
					show_error = 0;
					break;

				/* chain end */
				case SPIDER_NET_GDDDCEINT:
				case SPIDER_NET_GDCDCEINT:
				case SPIDER_NET_GDBDCEINT:
				case SPIDER_NET_GDADCEINT:
					spider_net_resync_head_ptr(card);
					spider_net_refill_rx_chain(card);
					spider_net_enable_rxdmac(card);
					card->num_rx_ints++;
					napi_schedule(&card->napi);
					show_error = 0;
					break;

				/* invalid descriptor */
				case SPIDER_NET_GDDINVDINT:
				case SPIDER_NET_GDCINVDINT:
				case SPIDER_NET_GDBINVDINT:
				case SPIDER_NET_GDAINVDINT:
					/* Could happen when rx chain is full */
					spider_net_resync_head_ptr(card);
					spider_net_refill_rx_chain(card);
					spider_net_enable_rxdmac(card);
					card->num_rx_ints++;
					napi_schedule(&card->napi);
					show_error = 0;
					break;

				/* case SPIDER_NET_GDTRSERINT: problem, print a message */
				/* case SPIDER_NET_GDDRSERINT: problem, print a message */
				/* case SPIDER_NET_GDCRSERINT: problem, print a message */
				/* case SPIDER_NET_GDBRSERINT: problem, print a message */
				/* case SPIDER_NET_GDARSERINT: problem, print a message */
				/* case SPIDER_NET_GDSERINT: problem, print a message */
				/* case SPIDER_NET_GDTPTERINT: problem, print a message */
				/* case SPIDER_NET_GDDPTERINT: problem, print a message */
				/* case SPIDER_NET_GDCPTERINT: problem, print a message */
				/* case SPIDER_NET_GDBPTERINT: problem, print a message */
				/* case SPIDER_NET_GDAPTERINT: problem, print a message */
				default:
					show_error = 1;
					break;
				}

	/* check GHIINT2STS ************************************/
	if (error_reg2)
		for (i = 0; i < 32; i++)
			if (error_reg2 & (1 << i))
				switch (i) {
				/* there is nothing we can (want to) do at this time.
				 * Log a
				 * message, we can switch on and off the specific values later on
				case SPIDER_NET_GPROPERINT:
				case SPIDER_NET_GMCTCRSNGINT:
				case SPIDER_NET_GMCTLCOLINT:
				case SPIDER_NET_GMCTTMOTINT:
				case SPIDER_NET_GMCRCAERINT:
				case SPIDER_NET_GMCRCALERINT:
				case SPIDER_NET_GMCRALNERINT:
				case SPIDER_NET_GMCROVRINT:
				case SPIDER_NET_GMCRRNTINT:
				case SPIDER_NET_GMCRRXERINT:
				case SPIDER_NET_GTITCSERINT:
				case SPIDER_NET_GTIFMTERINT:
				case SPIDER_NET_GTIPKTRVKINT:
				case SPIDER_NET_GTISPINGINT:
				case SPIDER_NET_GTISADNGINT:
				case SPIDER_NET_GTISPDNGINT:
				case SPIDER_NET_GRIFMTERINT:
				case SPIDER_NET_GRIPKTRVKINT:
				case SPIDER_NET_GRISPINGINT:
				case SPIDER_NET_GRISADNGINT:
				case SPIDER_NET_GRISPDNGINT:
					break;
				*/
				default:
					break;
				}

	if ((show_error) && (netif_msg_intr(card)) && net_ratelimit())
		dev_err(&card->netdev->dev, "Error interrupt, GHIINT0STS = 0x%08x, "
			"GHIINT1STS = 0x%08x, GHIINT2STS = 0x%08x\n",
			status_reg, error_reg1, error_reg2);

	/* clear interrupt sources */
	spider_net_write_reg(card, SPIDER_NET_GHIINT1STS, error_reg1);
	spider_net_write_reg(card, SPIDER_NET_GHIINT2STS, error_reg2);
}

/**
 * spider_net_interrupt - interrupt handler for spider_net
 * @irq: interrupt number
 * @ptr: pointer to net_device
 *
 * returns IRQ_HANDLED, if interrupt was for driver, or IRQ_NONE, if no
 * interrupt found raised by card.
1588 - * 1589 - * This is the interrupt handler, that turns off 1590 - * interrupts for this device and makes the stack poll the driver 1591 - */ 1592 - static irqreturn_t 1593 - spider_net_interrupt(int irq, void *ptr) 1594 - { 1595 - struct net_device *netdev = ptr; 1596 - struct spider_net_card *card = netdev_priv(netdev); 1597 - u32 status_reg, error_reg1, error_reg2; 1598 - 1599 - status_reg = spider_net_read_reg(card, SPIDER_NET_GHIINT0STS); 1600 - error_reg1 = spider_net_read_reg(card, SPIDER_NET_GHIINT1STS); 1601 - error_reg2 = spider_net_read_reg(card, SPIDER_NET_GHIINT2STS); 1602 - 1603 - if (!(status_reg & SPIDER_NET_INT0_MASK_VALUE) && 1604 - !(error_reg1 & SPIDER_NET_INT1_MASK_VALUE) && 1605 - !(error_reg2 & SPIDER_NET_INT2_MASK_VALUE)) 1606 - return IRQ_NONE; 1607 - 1608 - if (status_reg & SPIDER_NET_RXINT ) { 1609 - spider_net_rx_irq_off(card); 1610 - napi_schedule(&card->napi); 1611 - card->num_rx_ints ++; 1612 - } 1613 - if (status_reg & SPIDER_NET_TXINT) 1614 - napi_schedule(&card->napi); 1615 - 1616 - if (status_reg & SPIDER_NET_LINKINT) 1617 - spider_net_link_reset(netdev); 1618 - 1619 - if (status_reg & SPIDER_NET_ERRINT ) 1620 - spider_net_handle_error_irq(card, status_reg, 1621 - error_reg1, error_reg2); 1622 - 1623 - /* clear interrupt sources */ 1624 - spider_net_write_reg(card, SPIDER_NET_GHIINT0STS, status_reg); 1625 - 1626 - return IRQ_HANDLED; 1627 - } 1628 - 1629 - #ifdef CONFIG_NET_POLL_CONTROLLER 1630 - /** 1631 - * spider_net_poll_controller - artificial interrupt for netconsole etc. 
1632 - * @netdev: interface device structure 1633 - * 1634 - * see Documentation/networking/netconsole.rst 1635 - */ 1636 - static void 1637 - spider_net_poll_controller(struct net_device *netdev) 1638 - { 1639 - disable_irq(netdev->irq); 1640 - spider_net_interrupt(netdev->irq, netdev); 1641 - enable_irq(netdev->irq); 1642 - } 1643 - #endif /* CONFIG_NET_POLL_CONTROLLER */ 1644 - 1645 - /** 1646 - * spider_net_enable_interrupts - enable interrupts 1647 - * @card: card structure 1648 - * 1649 - * spider_net_enable_interrupt enables several interrupts 1650 - */ 1651 - static void 1652 - spider_net_enable_interrupts(struct spider_net_card *card) 1653 - { 1654 - spider_net_write_reg(card, SPIDER_NET_GHIINT0MSK, 1655 - SPIDER_NET_INT0_MASK_VALUE); 1656 - spider_net_write_reg(card, SPIDER_NET_GHIINT1MSK, 1657 - SPIDER_NET_INT1_MASK_VALUE); 1658 - spider_net_write_reg(card, SPIDER_NET_GHIINT2MSK, 1659 - SPIDER_NET_INT2_MASK_VALUE); 1660 - } 1661 - 1662 - /** 1663 - * spider_net_disable_interrupts - disable interrupts 1664 - * @card: card structure 1665 - * 1666 - * spider_net_disable_interrupts disables all the interrupts 1667 - */ 1668 - static void 1669 - spider_net_disable_interrupts(struct spider_net_card *card) 1670 - { 1671 - spider_net_write_reg(card, SPIDER_NET_GHIINT0MSK, 0); 1672 - spider_net_write_reg(card, SPIDER_NET_GHIINT1MSK, 0); 1673 - spider_net_write_reg(card, SPIDER_NET_GHIINT2MSK, 0); 1674 - spider_net_write_reg(card, SPIDER_NET_GMACINTEN, 0); 1675 - } 1676 - 1677 - /** 1678 - * spider_net_init_card - initializes the card 1679 - * @card: card structure 1680 - * 1681 - * spider_net_init_card initializes the card so that other registers can 1682 - * be used 1683 - */ 1684 - static void 1685 - spider_net_init_card(struct spider_net_card *card) 1686 - { 1687 - spider_net_write_reg(card, SPIDER_NET_CKRCTRL, 1688 - SPIDER_NET_CKRCTRL_STOP_VALUE); 1689 - 1690 - spider_net_write_reg(card, SPIDER_NET_CKRCTRL, 1691 - SPIDER_NET_CKRCTRL_RUN_VALUE); 1692 - 1693 - 
/* trigger ETOMOD signal */ 1694 - spider_net_write_reg(card, SPIDER_NET_GMACOPEMD, 1695 - spider_net_read_reg(card, SPIDER_NET_GMACOPEMD) | 0x4); 1696 - 1697 - spider_net_disable_interrupts(card); 1698 - } 1699 - 1700 - /** 1701 - * spider_net_enable_card - enables the card by setting all kinds of regs 1702 - * @card: card structure 1703 - * 1704 - * spider_net_enable_card sets a lot of SMMIO registers to enable the device 1705 - */ 1706 - static void 1707 - spider_net_enable_card(struct spider_net_card *card) 1708 - { 1709 - int i; 1710 - /* the following array consists of (register),(value) pairs 1711 - * that are set in this function. A register of 0 ends the list 1712 - */ 1713 - u32 regs[][2] = { 1714 - { SPIDER_NET_GRESUMINTNUM, 0 }, 1715 - { SPIDER_NET_GREINTNUM, 0 }, 1716 - 1717 - /* set interrupt frame number registers */ 1718 - /* clear the single DMA engine registers first */ 1719 - { SPIDER_NET_GFAFRMNUM, SPIDER_NET_GFXFRAMES_VALUE }, 1720 - { SPIDER_NET_GFBFRMNUM, SPIDER_NET_GFXFRAMES_VALUE }, 1721 - { SPIDER_NET_GFCFRMNUM, SPIDER_NET_GFXFRAMES_VALUE }, 1722 - { SPIDER_NET_GFDFRMNUM, SPIDER_NET_GFXFRAMES_VALUE }, 1723 - /* then set, what we really need */ 1724 - { SPIDER_NET_GFFRMNUM, SPIDER_NET_FRAMENUM_VALUE }, 1725 - 1726 - /* timer counter registers and stuff */ 1727 - { SPIDER_NET_GFREECNNUM, 0 }, 1728 - { SPIDER_NET_GONETIMENUM, 0 }, 1729 - { SPIDER_NET_GTOUTFRMNUM, 0 }, 1730 - 1731 - /* RX mode setting */ 1732 - { SPIDER_NET_GRXMDSET, SPIDER_NET_RXMODE_VALUE }, 1733 - /* TX mode setting */ 1734 - { SPIDER_NET_GTXMDSET, SPIDER_NET_TXMODE_VALUE }, 1735 - /* IPSEC mode setting */ 1736 - { SPIDER_NET_GIPSECINIT, SPIDER_NET_IPSECINIT_VALUE }, 1737 - 1738 - { SPIDER_NET_GFTRESTRT, SPIDER_NET_RESTART_VALUE }, 1739 - 1740 - { SPIDER_NET_GMRWOLCTRL, 0 }, 1741 - { SPIDER_NET_GTESTMD, 0x10000000 }, 1742 - { SPIDER_NET_GTTQMSK, 0x00400040 }, 1743 - 1744 - { SPIDER_NET_GMACINTEN, 0 }, 1745 - 1746 - /* flow control stuff */ 1747 - { SPIDER_NET_GMACAPAUSE, 
SPIDER_NET_MACAPAUSE_VALUE }, 1748 - { SPIDER_NET_GMACTXPAUSE, SPIDER_NET_TXPAUSE_VALUE }, 1749 - 1750 - { SPIDER_NET_GMACBSTLMT, SPIDER_NET_BURSTLMT_VALUE }, 1751 - { 0, 0} 1752 - }; 1753 - 1754 - i = 0; 1755 - while (regs[i][0]) { 1756 - spider_net_write_reg(card, regs[i][0], regs[i][1]); 1757 - i++; 1758 - } 1759 - 1760 - /* clear unicast filter table entries 1 to 14 */ 1761 - for (i = 1; i <= 14; i++) { 1762 - spider_net_write_reg(card, 1763 - SPIDER_NET_GMRUAFILnR + i * 8, 1764 - 0x00080000); 1765 - spider_net_write_reg(card, 1766 - SPIDER_NET_GMRUAFILnR + i * 8 + 4, 1767 - 0x00000000); 1768 - } 1769 - 1770 - spider_net_write_reg(card, SPIDER_NET_GMRUA0FIL15R, 0x08080000); 1771 - 1772 - spider_net_write_reg(card, SPIDER_NET_ECMODE, SPIDER_NET_ECMODE_VALUE); 1773 - 1774 - /* set chain tail address for RX chains and 1775 - * enable DMA 1776 - */ 1777 - spider_net_enable_rxchtails(card); 1778 - spider_net_enable_rxdmac(card); 1779 - 1780 - spider_net_write_reg(card, SPIDER_NET_GRXDMAEN, SPIDER_NET_WOL_VALUE); 1781 - 1782 - spider_net_write_reg(card, SPIDER_NET_GMACLENLMT, 1783 - SPIDER_NET_LENLMT_VALUE); 1784 - spider_net_write_reg(card, SPIDER_NET_GMACOPEMD, 1785 - SPIDER_NET_OPMODE_VALUE); 1786 - 1787 - spider_net_write_reg(card, SPIDER_NET_GDTDMACCNTR, 1788 - SPIDER_NET_GDTBSTA); 1789 - } 1790 - 1791 - /** 1792 - * spider_net_download_firmware - loads firmware into the adapter 1793 - * @card: card structure 1794 - * @firmware_ptr: pointer to firmware data 1795 - * 1796 - * spider_net_download_firmware loads the firmware data into the 1797 - * adapter. It assumes the length etc. to be all right.
1798 - */ 1799 - static int 1800 - spider_net_download_firmware(struct spider_net_card *card, 1801 - const void *firmware_ptr) 1802 - { 1803 - int sequencer, i; 1804 - const u32 *fw_ptr = firmware_ptr; 1805 - 1806 - /* stop sequencers */ 1807 - spider_net_write_reg(card, SPIDER_NET_GSINIT, 1808 - SPIDER_NET_STOP_SEQ_VALUE); 1809 - 1810 - for (sequencer = 0; sequencer < SPIDER_NET_FIRMWARE_SEQS; 1811 - sequencer++) { 1812 - spider_net_write_reg(card, 1813 - SPIDER_NET_GSnPRGADR + sequencer * 8, 0); 1814 - for (i = 0; i < SPIDER_NET_FIRMWARE_SEQWORDS; i++) { 1815 - spider_net_write_reg(card, SPIDER_NET_GSnPRGDAT + 1816 - sequencer * 8, *fw_ptr); 1817 - fw_ptr++; 1818 - } 1819 - } 1820 - 1821 - if (spider_net_read_reg(card, SPIDER_NET_GSINIT)) 1822 - return -EIO; 1823 - 1824 - spider_net_write_reg(card, SPIDER_NET_GSINIT, 1825 - SPIDER_NET_RUN_SEQ_VALUE); 1826 - 1827 - return 0; 1828 - } 1829 - 1830 - /** 1831 - * spider_net_init_firmware - reads in firmware parts 1832 - * @card: card structure 1833 - * 1834 - * Returns 0 on success, <0 on failure 1835 - * 1836 - * spider_net_init_firmware opens the sequencer firmware and does some basic 1837 - * checks. This function opens and releases the firmware structure. A call 1838 - * to download the firmware is performed before the release. 1839 - * 1840 - * Firmware format 1841 - * =============== 1842 - * spider_fw.bin is expected to be a file containing 6*1024*4 bytes, 4k being 1843 - * the program for each sequencer. Use the command 1844 - * tail -q -n +2 Seq_code1_0x088.txt Seq_code2_0x090.txt \ 1845 - * Seq_code3_0x098.txt Seq_code4_0x0A0.txt Seq_code5_0x0A8.txt \ 1846 - * Seq_code6_0x0B0.txt | xxd -r -p -c4 > spider_fw.bin 1847 - * 1848 - * to generate spider_fw.bin, if you have sequencer programs with something 1849 - * like the following contents for each sequencer: 1850 - * <ONE LINE COMMENT> 1851 - * <FIRST 4-BYTES-WORD FOR SEQUENCER> 1852 - * <SECOND 4-BYTES-WORD FOR SEQUENCER> 1853 - * ... 
1854 - * <1024th 4-BYTES-WORD FOR SEQUENCER> 1855 - */ 1856 - static int 1857 - spider_net_init_firmware(struct spider_net_card *card) 1858 - { 1859 - struct firmware *firmware = NULL; 1860 - struct device_node *dn; 1861 - const u8 *fw_prop = NULL; 1862 - int err = -ENOENT; 1863 - int fw_size; 1864 - 1865 - if (request_firmware((const struct firmware **)&firmware, 1866 - SPIDER_NET_FIRMWARE_NAME, &card->pdev->dev) == 0) { 1867 - if ( (firmware->size != SPIDER_NET_FIRMWARE_LEN) && 1868 - netif_msg_probe(card) ) { 1869 - dev_err(&card->netdev->dev, 1870 - "Incorrect size of spidernet firmware in " \ 1871 - "filesystem. Looking in host firmware...\n"); 1872 - goto try_host_fw; 1873 - } 1874 - err = spider_net_download_firmware(card, firmware->data); 1875 - 1876 - release_firmware(firmware); 1877 - if (err) 1878 - goto try_host_fw; 1879 - 1880 - goto done; 1881 - } 1882 - 1883 - try_host_fw: 1884 - dn = pci_device_to_OF_node(card->pdev); 1885 - if (!dn) 1886 - goto out_err; 1887 - 1888 - fw_prop = of_get_property(dn, "firmware", &fw_size); 1889 - if (!fw_prop) 1890 - goto out_err; 1891 - 1892 - if ( (fw_size != SPIDER_NET_FIRMWARE_LEN) && 1893 - netif_msg_probe(card) ) { 1894 - dev_err(&card->netdev->dev, 1895 - "Incorrect size of spidernet firmware in host firmware\n"); 1896 - goto done; 1897 - } 1898 - 1899 - err = spider_net_download_firmware(card, fw_prop); 1900 - 1901 - done: 1902 - return err; 1903 - out_err: 1904 - if (netif_msg_probe(card)) 1905 - dev_err(&card->netdev->dev, 1906 - "Couldn't find spidernet firmware in filesystem " \ 1907 - "or host firmware\n"); 1908 - return err; 1909 - } 1910 - 1911 - /** 1912 - * spider_net_open - called upon ifconfig up 1913 - * @netdev: interface device structure 1914 - * 1915 - * returns 0 on success, <0 on failure 1916 - * 1917 - * spider_net_open allocates all the descriptors and memory needed for 1918 - * operation, sets up multicast list and enables interrupts 1919 - */ 1920 - int 1921 - spider_net_open(struct
net_device *netdev) 1922 - { 1923 - struct spider_net_card *card = netdev_priv(netdev); 1924 - int result; 1925 - 1926 - result = spider_net_init_firmware(card); 1927 - if (result) 1928 - goto init_firmware_failed; 1929 - 1930 - /* start probing with copper */ 1931 - card->aneg_count = 0; 1932 - card->medium = BCM54XX_COPPER; 1933 - spider_net_setup_aneg(card); 1934 - if (card->phy.def->phy_id) 1935 - mod_timer(&card->aneg_timer, jiffies + SPIDER_NET_ANEG_TIMER); 1936 - 1937 - result = spider_net_init_chain(card, &card->tx_chain); 1938 - if (result) 1939 - goto alloc_tx_failed; 1940 - card->low_watermark = NULL; 1941 - 1942 - result = spider_net_init_chain(card, &card->rx_chain); 1943 - if (result) 1944 - goto alloc_rx_failed; 1945 - 1946 - /* Allocate rx skbs */ 1947 - result = spider_net_alloc_rx_skbs(card); 1948 - if (result) 1949 - goto alloc_skbs_failed; 1950 - 1951 - spider_net_set_multi(netdev); 1952 - 1953 - /* further enhancement: setup hw vlan, if needed */ 1954 - 1955 - result = -EBUSY; 1956 - if (request_irq(netdev->irq, spider_net_interrupt, 1957 - IRQF_SHARED, netdev->name, netdev)) 1958 - goto register_int_failed; 1959 - 1960 - spider_net_enable_card(card); 1961 - 1962 - netif_start_queue(netdev); 1963 - netif_carrier_on(netdev); 1964 - napi_enable(&card->napi); 1965 - 1966 - spider_net_enable_interrupts(card); 1967 - 1968 - return 0; 1969 - 1970 - register_int_failed: 1971 - spider_net_free_rx_chain_contents(card); 1972 - alloc_skbs_failed: 1973 - spider_net_free_chain(card, &card->rx_chain); 1974 - alloc_rx_failed: 1975 - spider_net_free_chain(card, &card->tx_chain); 1976 - alloc_tx_failed: 1977 - del_timer_sync(&card->aneg_timer); 1978 - init_firmware_failed: 1979 - return result; 1980 - } 1981 - 1982 - /** 1983 - * spider_net_link_phy 1984 - * @t: timer context used to obtain the pointer to net card data structure 1985 - */ 1986 - static void spider_net_link_phy(struct timer_list *t) 1987 - { 1988 - struct spider_net_card *card = from_timer(card, 
t, aneg_timer); 1989 - struct mii_phy *phy = &card->phy; 1990 - 1991 - /* if link didn't come up after SPIDER_NET_ANEG_TIMEOUT tries, setup phy again */ 1992 - if (card->aneg_count > SPIDER_NET_ANEG_TIMEOUT) { 1993 - 1994 - pr_debug("%s: link is down trying to bring it up\n", 1995 - card->netdev->name); 1996 - 1997 - switch (card->medium) { 1998 - case BCM54XX_COPPER: 1999 - /* enable fiber with autonegotiation first */ 2000 - if (phy->def->ops->enable_fiber) 2001 - phy->def->ops->enable_fiber(phy, 1); 2002 - card->medium = BCM54XX_FIBER; 2003 - break; 2004 - 2005 - case BCM54XX_FIBER: 2006 - /* fiber didn't come up, try to disable fiber autoneg */ 2007 - if (phy->def->ops->enable_fiber) 2008 - phy->def->ops->enable_fiber(phy, 0); 2009 - card->medium = BCM54XX_UNKNOWN; 2010 - break; 2011 - 2012 - case BCM54XX_UNKNOWN: 2013 - /* copper, fiber with and without failed, 2014 - * retry from beginning 2015 - */ 2016 - spider_net_setup_aneg(card); 2017 - card->medium = BCM54XX_COPPER; 2018 - break; 2019 - } 2020 - 2021 - card->aneg_count = 0; 2022 - mod_timer(&card->aneg_timer, jiffies + SPIDER_NET_ANEG_TIMER); 2023 - return; 2024 - } 2025 - 2026 - /* link still not up, try again later */ 2027 - if (!(phy->def->ops->poll_link(phy))) { 2028 - card->aneg_count++; 2029 - mod_timer(&card->aneg_timer, jiffies + SPIDER_NET_ANEG_TIMER); 2030 - return; 2031 - } 2032 - 2033 - /* link came up, get abilities */ 2034 - phy->def->ops->read_link(phy); 2035 - 2036 - spider_net_write_reg(card, SPIDER_NET_GMACST, 2037 - spider_net_read_reg(card, SPIDER_NET_GMACST)); 2038 - spider_net_write_reg(card, SPIDER_NET_GMACINTEN, 0x4); 2039 - 2040 - if (phy->speed == 1000) 2041 - spider_net_write_reg(card, SPIDER_NET_GMACMODE, 0x00000001); 2042 - else 2043 - spider_net_write_reg(card, SPIDER_NET_GMACMODE, 0); 2044 - 2045 - card->aneg_count = 0; 2046 - 2047 - pr_info("%s: link up, %i Mbps, %s-duplex %sautoneg.\n", 2048 - card->netdev->name, phy->speed, 2049 - phy->duplex == 1 ? 
"Full" : "Half", 2050 - phy->autoneg == 1 ? "" : "no "); 2051 - } 2052 - 2053 - /** 2054 - * spider_net_setup_phy - setup PHY 2055 - * @card: card structure 2056 - * 2057 - * returns 0 on success, <0 on failure 2058 - * 2059 - * spider_net_setup_phy is used as part of spider_net_probe. 2060 - **/ 2061 - static int 2062 - spider_net_setup_phy(struct spider_net_card *card) 2063 - { 2064 - struct mii_phy *phy = &card->phy; 2065 - 2066 - spider_net_write_reg(card, SPIDER_NET_GDTDMASEL, 2067 - SPIDER_NET_DMASEL_VALUE); 2068 - spider_net_write_reg(card, SPIDER_NET_GPCCTRL, 2069 - SPIDER_NET_PHY_CTRL_VALUE); 2070 - 2071 - phy->dev = card->netdev; 2072 - phy->mdio_read = spider_net_read_phy; 2073 - phy->mdio_write = spider_net_write_phy; 2074 - 2075 - for (phy->mii_id = 1; phy->mii_id <= 31; phy->mii_id++) { 2076 - unsigned short id; 2077 - id = spider_net_read_phy(card->netdev, phy->mii_id, MII_BMSR); 2078 - if (id != 0x0000 && id != 0xffff) { 2079 - if (!sungem_phy_probe(phy, phy->mii_id)) { 2080 - pr_info("Found %s.\n", phy->def->name); 2081 - break; 2082 - } 2083 - } 2084 - } 2085 - 2086 - return 0; 2087 - } 2088 - 2089 - /** 2090 - * spider_net_workaround_rxramfull - work around firmware bug 2091 - * @card: card structure 2092 - * 2093 - * no return value 2094 - **/ 2095 - static void 2096 - spider_net_workaround_rxramfull(struct spider_net_card *card) 2097 - { 2098 - int i, sequencer = 0; 2099 - 2100 - /* cancel reset */ 2101 - spider_net_write_reg(card, SPIDER_NET_CKRCTRL, 2102 - SPIDER_NET_CKRCTRL_RUN_VALUE); 2103 - 2104 - /* empty sequencer data */ 2105 - for (sequencer = 0; sequencer < SPIDER_NET_FIRMWARE_SEQS; 2106 - sequencer++) { 2107 - spider_net_write_reg(card, SPIDER_NET_GSnPRGADR + 2108 - sequencer * 8, 0x0); 2109 - for (i = 0; i < SPIDER_NET_FIRMWARE_SEQWORDS; i++) { 2110 - spider_net_write_reg(card, SPIDER_NET_GSnPRGDAT + 2111 - sequencer * 8, 0x0); 2112 - } 2113 - } 2114 - 2115 - /* set sequencer operation */ 2116 - spider_net_write_reg(card, 
SPIDER_NET_GSINIT, 0x000000fe); 2117 - 2118 - /* reset */ 2119 - spider_net_write_reg(card, SPIDER_NET_CKRCTRL, 2120 - SPIDER_NET_CKRCTRL_STOP_VALUE); 2121 - } 2122 - 2123 - /** 2124 - * spider_net_stop - called upon ifconfig down 2125 - * @netdev: interface device structure 2126 - * 2127 - * always returns 0 2128 - */ 2129 - int 2130 - spider_net_stop(struct net_device *netdev) 2131 - { 2132 - struct spider_net_card *card = netdev_priv(netdev); 2133 - 2134 - napi_disable(&card->napi); 2135 - netif_carrier_off(netdev); 2136 - netif_stop_queue(netdev); 2137 - del_timer_sync(&card->tx_timer); 2138 - del_timer_sync(&card->aneg_timer); 2139 - 2140 - spider_net_disable_interrupts(card); 2141 - 2142 - free_irq(netdev->irq, netdev); 2143 - 2144 - spider_net_write_reg(card, SPIDER_NET_GDTDMACCNTR, 2145 - SPIDER_NET_DMA_TX_FEND_VALUE); 2146 - 2147 - /* turn off DMA, force end */ 2148 - spider_net_disable_rxdmac(card); 2149 - 2150 - /* release chains */ 2151 - spider_net_release_tx_chain(card, 1); 2152 - spider_net_free_rx_chain_contents(card); 2153 - 2154 - spider_net_free_chain(card, &card->tx_chain); 2155 - spider_net_free_chain(card, &card->rx_chain); 2156 - 2157 - return 0; 2158 - } 2159 - 2160 - /** 2161 - * spider_net_tx_timeout_task - task scheduled by the watchdog timeout 2162 - * function (to be called not under interrupt status) 2163 - * @work: work context used to obtain the pointer to net card data structure 2164 - * 2165 - * called as task when tx hangs, resets interface (if interface is up) 2166 - */ 2167 - static void 2168 - spider_net_tx_timeout_task(struct work_struct *work) 2169 - { 2170 - struct spider_net_card *card = 2171 - container_of(work, struct spider_net_card, tx_timeout_task); 2172 - struct net_device *netdev = card->netdev; 2173 - 2174 - if (!(netdev->flags & IFF_UP)) 2175 - goto out; 2176 - 2177 - netif_device_detach(netdev); 2178 - spider_net_stop(netdev); 2179 - 2180 - spider_net_workaround_rxramfull(card); 2181 - spider_net_init_card(card); 
2182 - 2183 - if (spider_net_setup_phy(card)) 2184 - goto out; 2185 - 2186 - spider_net_open(netdev); 2187 - spider_net_kick_tx_dma(card); 2188 - netif_device_attach(netdev); 2189 - 2190 - out: 2191 - atomic_dec(&card->tx_timeout_task_counter); 2192 - } 2193 - 2194 - /** 2195 - * spider_net_tx_timeout - called when the tx timeout watchdog kicks in. 2196 - * @netdev: interface device structure 2197 - * @txqueue: unused 2198 - * 2199 - * called, if tx hangs. Schedules a task that resets the interface 2200 - */ 2201 - static void 2202 - spider_net_tx_timeout(struct net_device *netdev, unsigned int txqueue) 2203 - { 2204 - struct spider_net_card *card; 2205 - 2206 - card = netdev_priv(netdev); 2207 - atomic_inc(&card->tx_timeout_task_counter); 2208 - if (netdev->flags & IFF_UP) 2209 - schedule_work(&card->tx_timeout_task); 2210 - else 2211 - atomic_dec(&card->tx_timeout_task_counter); 2212 - card->spider_stats.tx_timeouts++; 2213 - } 2214 - 2215 - static const struct net_device_ops spider_net_ops = { 2216 - .ndo_open = spider_net_open, 2217 - .ndo_stop = spider_net_stop, 2218 - .ndo_start_xmit = spider_net_xmit, 2219 - .ndo_set_rx_mode = spider_net_set_multi, 2220 - .ndo_set_mac_address = spider_net_set_mac, 2221 - .ndo_eth_ioctl = spider_net_do_ioctl, 2222 - .ndo_tx_timeout = spider_net_tx_timeout, 2223 - .ndo_validate_addr = eth_validate_addr, 2224 - /* HW VLAN */ 2225 - #ifdef CONFIG_NET_POLL_CONTROLLER 2226 - /* poll controller */ 2227 - .ndo_poll_controller = spider_net_poll_controller, 2228 - #endif /* CONFIG_NET_POLL_CONTROLLER */ 2229 - }; 2230 - 2231 - /** 2232 - * spider_net_setup_netdev_ops - initialization of net_device operations 2233 - * @netdev: net_device structure 2234 - * 2235 - * fills out function pointers in the net_device structure 2236 - */ 2237 - static void 2238 - spider_net_setup_netdev_ops(struct net_device *netdev) 2239 - { 2240 - netdev->netdev_ops = &spider_net_ops; 2241 - netdev->watchdog_timeo = SPIDER_NET_WATCHDOG_TIMEOUT; 2242 - /* 
ethtool ops */ 2243 - netdev->ethtool_ops = &spider_net_ethtool_ops; 2244 - } 2245 - 2246 - /** 2247 - * spider_net_setup_netdev - initialization of net_device 2248 - * @card: card structure 2249 - * 2250 - * Returns 0 on success or <0 on failure 2251 - * 2252 - * spider_net_setup_netdev initializes the net_device structure 2253 - **/ 2254 - static int 2255 - spider_net_setup_netdev(struct spider_net_card *card) 2256 - { 2257 - int result; 2258 - struct net_device *netdev = card->netdev; 2259 - struct device_node *dn; 2260 - struct sockaddr addr; 2261 - const u8 *mac; 2262 - 2263 - SET_NETDEV_DEV(netdev, &card->pdev->dev); 2264 - 2265 - pci_set_drvdata(card->pdev, netdev); 2266 - 2267 - timer_setup(&card->tx_timer, spider_net_cleanup_tx_ring, 0); 2268 - netdev->irq = card->pdev->irq; 2269 - 2270 - card->aneg_count = 0; 2271 - timer_setup(&card->aneg_timer, spider_net_link_phy, 0); 2272 - 2273 - netif_napi_add(netdev, &card->napi, spider_net_poll); 2274 - 2275 - spider_net_setup_netdev_ops(netdev); 2276 - 2277 - netdev->hw_features = NETIF_F_RXCSUM | NETIF_F_IP_CSUM; 2278 - if (SPIDER_NET_RX_CSUM_DEFAULT) 2279 - netdev->features |= NETIF_F_RXCSUM; 2280 - netdev->features |= NETIF_F_IP_CSUM; 2281 - /* some time: NETIF_F_HW_VLAN_CTAG_TX | NETIF_F_HW_VLAN_CTAG_RX | 2282 - * NETIF_F_HW_VLAN_CTAG_FILTER 2283 - */ 2284 - netdev->lltx = true; 2285 - 2286 - /* MTU range: 64 - 2294 */ 2287 - netdev->min_mtu = SPIDER_NET_MIN_MTU; 2288 - netdev->max_mtu = SPIDER_NET_MAX_MTU; 2289 - 2290 - netdev->irq = card->pdev->irq; 2291 - card->num_rx_ints = 0; 2292 - card->ignore_rx_ramfull = 0; 2293 - 2294 - dn = pci_device_to_OF_node(card->pdev); 2295 - if (!dn) 2296 - return -EIO; 2297 - 2298 - mac = of_get_property(dn, "local-mac-address", NULL); 2299 - if (!mac) 2300 - return -EIO; 2301 - memcpy(addr.sa_data, mac, ETH_ALEN); 2302 - 2303 - result = spider_net_set_mac(netdev, &addr); 2304 - if ((result) && (netif_msg_probe(card))) 2305 - dev_err(&card->netdev->dev, 2306 - "Failed to 
set MAC address: %i\n", result); 2307 - 2308 - result = register_netdev(netdev); 2309 - if (result) { 2310 - if (netif_msg_probe(card)) 2311 - dev_err(&card->netdev->dev, 2312 - "Couldn't register net_device: %i\n", result); 2313 - return result; 2314 - } 2315 - 2316 - if (netif_msg_probe(card)) 2317 - pr_info("Initialized device %s.\n", netdev->name); 2318 - 2319 - return 0; 2320 - } 2321 - 2322 - /** 2323 - * spider_net_alloc_card - allocates net_device and card structure 2324 - * 2325 - * returns the card structure or NULL in case of errors 2326 - * 2327 - * the card and net_device structures are linked to each other 2328 - */ 2329 - static struct spider_net_card * 2330 - spider_net_alloc_card(void) 2331 - { 2332 - struct net_device *netdev; 2333 - struct spider_net_card *card; 2334 - 2335 - netdev = alloc_etherdev(struct_size(card, darray, 2336 - size_add(tx_descriptors, rx_descriptors))); 2337 - if (!netdev) 2338 - return NULL; 2339 - 2340 - card = netdev_priv(netdev); 2341 - card->netdev = netdev; 2342 - card->msg_enable = SPIDER_NET_DEFAULT_MSG; 2343 - INIT_WORK(&card->tx_timeout_task, spider_net_tx_timeout_task); 2344 - init_waitqueue_head(&card->waitq); 2345 - atomic_set(&card->tx_timeout_task_counter, 0); 2346 - 2347 - card->rx_chain.num_desc = rx_descriptors; 2348 - card->rx_chain.ring = card->darray; 2349 - card->tx_chain.num_desc = tx_descriptors; 2350 - card->tx_chain.ring = card->darray + rx_descriptors; 2351 - 2352 - return card; 2353 - } 2354 - 2355 - /** 2356 - * spider_net_undo_pci_setup - releases PCI resources 2357 - * @card: card structure 2358 - * 2359 - * spider_net_undo_pci_setup releases the mapped regions 2360 - */ 2361 - static void 2362 - spider_net_undo_pci_setup(struct spider_net_card *card) 2363 - { 2364 - iounmap(card->regs); 2365 - pci_release_regions(card->pdev); 2366 - } 2367 - 2368 - /** 2369 - * spider_net_setup_pci_dev - sets up the device in terms of PCI operations 2370 - * @pdev: PCI device 2371 - * 2372 - * Returns the
card structure or NULL if any errors occur 2373 - * 2374 - * spider_net_setup_pci_dev initializes pdev and together with the 2375 - * functions called in spider_net_open configures the device so that 2376 - * data can be transferred over it 2377 - * The net_device structure is attached to the card structure, if the 2378 - * function returns without error. 2379 - **/ 2380 - static struct spider_net_card * 2381 - spider_net_setup_pci_dev(struct pci_dev *pdev) 2382 - { 2383 - struct spider_net_card *card; 2384 - unsigned long mmio_start, mmio_len; 2385 - 2386 - if (pci_enable_device(pdev)) { 2387 - dev_err(&pdev->dev, "Couldn't enable PCI device\n"); 2388 - return NULL; 2389 - } 2390 - 2391 - if (!(pci_resource_flags(pdev, 0) & IORESOURCE_MEM)) { 2392 - dev_err(&pdev->dev, 2393 - "Couldn't find proper PCI device base address.\n"); 2394 - goto out_disable_dev; 2395 - } 2396 - 2397 - if (pci_request_regions(pdev, spider_net_driver_name)) { 2398 - dev_err(&pdev->dev, 2399 - "Couldn't obtain PCI resources, aborting.\n"); 2400 - goto out_disable_dev; 2401 - } 2402 - 2403 - pci_set_master(pdev); 2404 - 2405 - card = spider_net_alloc_card(); 2406 - if (!card) { 2407 - dev_err(&pdev->dev, 2408 - "Couldn't allocate net_device structure, aborting.\n"); 2409 - goto out_release_regions; 2410 - } 2411 - card->pdev = pdev; 2412 - 2413 - /* fetch base address and length of first resource */ 2414 - mmio_start = pci_resource_start(pdev, 0); 2415 - mmio_len = pci_resource_len(pdev, 0); 2416 - 2417 - card->netdev->mem_start = mmio_start; 2418 - card->netdev->mem_end = mmio_start + mmio_len; 2419 - card->regs = ioremap(mmio_start, mmio_len); 2420 - 2421 - if (!card->regs) { 2422 - dev_err(&pdev->dev, 2423 - "Couldn't obtain PCI resources, aborting.\n"); 2424 - goto out_release_regions; 2425 - } 2426 - 2427 - return card; 2428 - 2429 - out_release_regions: 2430 - pci_release_regions(pdev); 2431 - out_disable_dev: 2432 - pci_disable_device(pdev); 2433 - return NULL; 2434 - } 2435 - 2436 - 
/** 2437 - * spider_net_probe - initialization of a device 2438 - * @pdev: PCI device 2439 - * @ent: entry in the device id list 2440 - * 2441 - * Returns 0 on success, <0 on failure 2442 - * 2443 - * spider_net_probe initializes pdev and registers a net_device 2444 - * structure for it. After that, the device can be ifconfig'ed up 2445 - **/ 2446 - static int 2447 - spider_net_probe(struct pci_dev *pdev, const struct pci_device_id *ent) 2448 - { 2449 - int err = -EIO; 2450 - struct spider_net_card *card; 2451 - 2452 - card = spider_net_setup_pci_dev(pdev); 2453 - if (!card) 2454 - goto out; 2455 - 2456 - spider_net_workaround_rxramfull(card); 2457 - spider_net_init_card(card); 2458 - 2459 - err = spider_net_setup_phy(card); 2460 - if (err) 2461 - goto out_undo_pci; 2462 - 2463 - err = spider_net_setup_netdev(card); 2464 - if (err) 2465 - goto out_undo_pci; 2466 - 2467 - return 0; 2468 - 2469 - out_undo_pci: 2470 - spider_net_undo_pci_setup(card); 2471 - free_netdev(card->netdev); 2472 - out: 2473 - return err; 2474 - } 2475 - 2476 - /** 2477 - * spider_net_remove - removal of a device 2478 - * @pdev: PCI device 2479 - * 2480 - * Returns 0 on success, <0 on failure 2481 - * 2482 - * spider_net_remove is called to remove the device and unregisters the 2483 - * net_device 2484 - **/ 2485 - static void 2486 - spider_net_remove(struct pci_dev *pdev) 2487 - { 2488 - struct net_device *netdev; 2489 - struct spider_net_card *card; 2490 - 2491 - netdev = pci_get_drvdata(pdev); 2492 - card = netdev_priv(netdev); 2493 - 2494 - wait_event(card->waitq, 2495 - atomic_read(&card->tx_timeout_task_counter) == 0); 2496 - 2497 - unregister_netdev(netdev); 2498 - 2499 - /* switch off card */ 2500 - spider_net_write_reg(card, SPIDER_NET_CKRCTRL, 2501 - SPIDER_NET_CKRCTRL_STOP_VALUE); 2502 - spider_net_write_reg(card, SPIDER_NET_CKRCTRL, 2503 - SPIDER_NET_CKRCTRL_RUN_VALUE); 2504 - 2505 - spider_net_undo_pci_setup(card); 2506 - free_netdev(netdev); 2507 - } 2508 - 2509 - static struct 
pci_driver spider_net_driver = { 2510 - .name = spider_net_driver_name, 2511 - .id_table = spider_net_pci_tbl, 2512 - .probe = spider_net_probe, 2513 - .remove = spider_net_remove 2514 - }; 2515 - 2516 - /** 2517 - * spider_net_init - init function when the driver is loaded 2518 - * 2519 - * spider_net_init registers the device driver 2520 - */ 2521 - static int __init spider_net_init(void) 2522 - { 2523 - printk(KERN_INFO "Spidernet version %s.\n", VERSION); 2524 - 2525 - if (rx_descriptors < SPIDER_NET_RX_DESCRIPTORS_MIN) { 2526 - rx_descriptors = SPIDER_NET_RX_DESCRIPTORS_MIN; 2527 - pr_info("adjusting rx descriptors to %i.\n", rx_descriptors); 2528 - } 2529 - if (rx_descriptors > SPIDER_NET_RX_DESCRIPTORS_MAX) { 2530 - rx_descriptors = SPIDER_NET_RX_DESCRIPTORS_MAX; 2531 - pr_info("adjusting rx descriptors to %i.\n", rx_descriptors); 2532 - } 2533 - if (tx_descriptors < SPIDER_NET_TX_DESCRIPTORS_MIN) { 2534 - tx_descriptors = SPIDER_NET_TX_DESCRIPTORS_MIN; 2535 - pr_info("adjusting tx descriptors to %i.\n", tx_descriptors); 2536 - } 2537 - if (tx_descriptors > SPIDER_NET_TX_DESCRIPTORS_MAX) { 2538 - tx_descriptors = SPIDER_NET_TX_DESCRIPTORS_MAX; 2539 - pr_info("adjusting tx descriptors to %i.\n", tx_descriptors); 2540 - } 2541 - 2542 - return pci_register_driver(&spider_net_driver); 2543 - } 2544 - 2545 - /** 2546 - * spider_net_cleanup - exit function when driver is unloaded 2547 - * 2548 - * spider_net_cleanup unregisters the device driver 2549 - */ 2550 - static void __exit spider_net_cleanup(void) 2551 - { 2552 - pci_unregister_driver(&spider_net_driver); 2553 - } 2554 - 2555 - module_init(spider_net_init); 2556 - module_exit(spider_net_cleanup);
-475
drivers/net/ethernet/toshiba/spider_net.h
1 - /* SPDX-License-Identifier: GPL-2.0-or-later */ 2 - /* 3 - * Network device driver for Cell Processor-Based Blade and Celleb platform 4 - * 5 - * (C) Copyright IBM Corp. 2005 6 - * (C) Copyright 2006 TOSHIBA CORPORATION 7 - * 8 - * Authors : Utz Bacher <utz.bacher@de.ibm.com> 9 - * Jens Osterkamp <Jens.Osterkamp@de.ibm.com> 10 - */ 11 - 12 - #ifndef _SPIDER_NET_H 13 - #define _SPIDER_NET_H 14 - 15 - #define VERSION "2.0 B" 16 - 17 - #include <linux/sungem_phy.h> 18 - 19 - int spider_net_stop(struct net_device *netdev); 20 - int spider_net_open(struct net_device *netdev); 21 - 22 - extern const struct ethtool_ops spider_net_ethtool_ops; 23 - 24 - extern char spider_net_driver_name[]; 25 - 26 - #define SPIDER_NET_MAX_FRAME 2312 27 - #define SPIDER_NET_MAX_MTU 2294 28 - #define SPIDER_NET_MIN_MTU 64 29 - 30 - #define SPIDER_NET_RXBUF_ALIGN 128 31 - 32 - #define SPIDER_NET_RX_DESCRIPTORS_DEFAULT 256 33 - #define SPIDER_NET_RX_DESCRIPTORS_MIN 16 34 - #define SPIDER_NET_RX_DESCRIPTORS_MAX 512 35 - 36 - #define SPIDER_NET_TX_DESCRIPTORS_DEFAULT 256 37 - #define SPIDER_NET_TX_DESCRIPTORS_MIN 16 38 - #define SPIDER_NET_TX_DESCRIPTORS_MAX 512 39 - 40 - #define SPIDER_NET_TX_TIMER (HZ/5) 41 - #define SPIDER_NET_ANEG_TIMER (HZ) 42 - #define SPIDER_NET_ANEG_TIMEOUT 5 43 - 44 - #define SPIDER_NET_RX_CSUM_DEFAULT 1 45 - 46 - #define SPIDER_NET_WATCHDOG_TIMEOUT 50*HZ 47 - 48 - #define SPIDER_NET_FIRMWARE_SEQS 6 49 - #define SPIDER_NET_FIRMWARE_SEQWORDS 1024 50 - #define SPIDER_NET_FIRMWARE_LEN (SPIDER_NET_FIRMWARE_SEQS * \ 51 - SPIDER_NET_FIRMWARE_SEQWORDS * \ 52 - sizeof(u32)) 53 - #define SPIDER_NET_FIRMWARE_NAME "spider_fw.bin" 54 - 55 - /** spider_net SMMIO registers */ 56 - #define SPIDER_NET_GHIINT0STS 0x00000000 57 - #define SPIDER_NET_GHIINT1STS 0x00000004 58 - #define SPIDER_NET_GHIINT2STS 0x00000008 59 - #define SPIDER_NET_GHIINT0MSK 0x00000010 60 - #define SPIDER_NET_GHIINT1MSK 0x00000014 61 - #define SPIDER_NET_GHIINT2MSK 0x00000018 62 - 63 - #define
SPIDER_NET_GRESUMINTNUM 0x00000020 64 - #define SPIDER_NET_GREINTNUM 0x00000024 65 - 66 - #define SPIDER_NET_GFFRMNUM 0x00000028 67 - #define SPIDER_NET_GFAFRMNUM 0x0000002c 68 - #define SPIDER_NET_GFBFRMNUM 0x00000030 69 - #define SPIDER_NET_GFCFRMNUM 0x00000034 70 - #define SPIDER_NET_GFDFRMNUM 0x00000038 71 - 72 - /* clear them (don't use it) */ 73 - #define SPIDER_NET_GFREECNNUM 0x0000003c 74 - #define SPIDER_NET_GONETIMENUM 0x00000040 75 - 76 - #define SPIDER_NET_GTOUTFRMNUM 0x00000044 77 - 78 - #define SPIDER_NET_GTXMDSET 0x00000050 79 - #define SPIDER_NET_GPCCTRL 0x00000054 80 - #define SPIDER_NET_GRXMDSET 0x00000058 81 - #define SPIDER_NET_GIPSECINIT 0x0000005c 82 - #define SPIDER_NET_GFTRESTRT 0x00000060 83 - #define SPIDER_NET_GRXDMAEN 0x00000064 84 - #define SPIDER_NET_GMRWOLCTRL 0x00000068 85 - #define SPIDER_NET_GPCWOPCMD 0x0000006c 86 - #define SPIDER_NET_GPCROPCMD 0x00000070 87 - #define SPIDER_NET_GTTFRMCNT 0x00000078 88 - #define SPIDER_NET_GTESTMD 0x0000007c 89 - 90 - #define SPIDER_NET_GSINIT 0x00000080 91 - #define SPIDER_NET_GSnPRGADR 0x00000084 92 - #define SPIDER_NET_GSnPRGDAT 0x00000088 93 - 94 - #define SPIDER_NET_GMACOPEMD 0x00000100 95 - #define SPIDER_NET_GMACLENLMT 0x00000108 96 - #define SPIDER_NET_GMACST 0x00000110 97 - #define SPIDER_NET_GMACINTEN 0x00000118 98 - #define SPIDER_NET_GMACPHYCTRL 0x00000120 99 - 100 - #define SPIDER_NET_GMACAPAUSE 0x00000154 101 - #define SPIDER_NET_GMACTXPAUSE 0x00000164 102 - 103 - #define SPIDER_NET_GMACMODE 0x000001b0 104 - #define SPIDER_NET_GMACBSTLMT 0x000001b4 105 - 106 - #define SPIDER_NET_GMACUNIMACU 0x000001c0 107 - #define SPIDER_NET_GMACUNIMACL 0x000001c8 108 - 109 - #define SPIDER_NET_GMRMHFILnR 0x00000400 110 - #define SPIDER_NET_MULTICAST_HASHES 256 111 - 112 - #define SPIDER_NET_GMRUAFILnR 0x00000500 113 - #define SPIDER_NET_GMRUA0FIL15R 0x00000578 114 - 115 - #define SPIDER_NET_GTTQMSK 0x00000934 116 - 117 - /* RX DMA controller registers, all 0x00000a.. 
are for DMA controller A, 118 - * 0x00000b.. for DMA controller B, etc. */ 119 - #define SPIDER_NET_GDADCHA 0x00000a00 120 - #define SPIDER_NET_GDADMACCNTR 0x00000a04 121 - #define SPIDER_NET_GDACTDPA 0x00000a08 122 - #define SPIDER_NET_GDACTDCNT 0x00000a0c 123 - #define SPIDER_NET_GDACDBADDR 0x00000a20 124 - #define SPIDER_NET_GDACDBSIZE 0x00000a24 125 - #define SPIDER_NET_GDACNEXTDA 0x00000a28 126 - #define SPIDER_NET_GDACCOMST 0x00000a2c 127 - #define SPIDER_NET_GDAWBCOMST 0x00000a30 128 - #define SPIDER_NET_GDAWBRSIZE 0x00000a34 129 - #define SPIDER_NET_GDAWBVSIZE 0x00000a38 130 - #define SPIDER_NET_GDAWBTRST 0x00000a3c 131 - #define SPIDER_NET_GDAWBTRERR 0x00000a40 132 - 133 - /* TX DMA controller registers */ 134 - #define SPIDER_NET_GDTDCHA 0x00000e00 135 - #define SPIDER_NET_GDTDMACCNTR 0x00000e04 136 - #define SPIDER_NET_GDTCDPA 0x00000e08 137 - #define SPIDER_NET_GDTDMASEL 0x00000e14 138 - 139 - #define SPIDER_NET_ECMODE 0x00000f00 140 - /* clock and reset control register */ 141 - #define SPIDER_NET_CKRCTRL 0x00000ff0 142 - 143 - /** SCONFIG registers */ 144 - #define SPIDER_NET_SCONFIG_IOACTE 0x00002810 145 - 146 - /** interrupt mask registers */ 147 - #define SPIDER_NET_INT0_MASK_VALUE 0x3f7fe2c7 148 - #define SPIDER_NET_INT1_MASK_VALUE 0x0000fff2 149 - #define SPIDER_NET_INT2_MASK_VALUE 0x000003f1 150 - 151 - /* we rely on flagged descriptor interrupts */ 152 - #define SPIDER_NET_FRAMENUM_VALUE 0x00000000 153 - /* set this first, then the FRAMENUM_VALUE */ 154 - #define SPIDER_NET_GFXFRAMES_VALUE 0x00000000 155 - 156 - #define SPIDER_NET_STOP_SEQ_VALUE 0x00000000 157 - #define SPIDER_NET_RUN_SEQ_VALUE 0x0000007e 158 - 159 - #define SPIDER_NET_PHY_CTRL_VALUE 0x00040040 160 - /* #define SPIDER_NET_PHY_CTRL_VALUE 0x01070080*/ 161 - #define SPIDER_NET_RXMODE_VALUE 0x00000011 162 - /* auto retransmission in case of MAC aborts */ 163 - #define SPIDER_NET_TXMODE_VALUE 0x00010000 164 - #define SPIDER_NET_RESTART_VALUE 0x00000000 165 - #define 
SPIDER_NET_WOL_VALUE 0x00001111 166 - #if 0 167 - #define SPIDER_NET_WOL_VALUE 0x00000000 168 - #endif 169 - #define SPIDER_NET_IPSECINIT_VALUE 0x6f716f71 170 - 171 - /* pause frames: automatic, no upper retransmission count */ 172 - /* outside loopback mode: ETOMOD signal dont matter, not connected */ 173 - /* ETOMOD signal is brought to PHY reset. bit 2 must be 1 in Celleb */ 174 - #define SPIDER_NET_OPMODE_VALUE 0x00000067 175 - /*#define SPIDER_NET_OPMODE_VALUE 0x001b0062*/ 176 - #define SPIDER_NET_LENLMT_VALUE 0x00000908 177 - 178 - #define SPIDER_NET_MACAPAUSE_VALUE 0x00000800 /* about 1 ms */ 179 - #define SPIDER_NET_TXPAUSE_VALUE 0x00000000 180 - 181 - #define SPIDER_NET_MACMODE_VALUE 0x00000001 182 - #define SPIDER_NET_BURSTLMT_VALUE 0x00000200 /* about 16 us */ 183 - 184 - /* DMAC control register GDMACCNTR 185 - * 186 - * 1(0) enable r/tx dma 187 - * 0000000 fixed to 0 188 - * 189 - * 000000 fixed to 0 190 - * 0(1) en/disable descr writeback on force end 191 - * 0(1) force end 192 - * 193 - * 000000 fixed to 0 194 - * 00 burst alignment: 128 bytes 195 - * 11 burst alignment: 1024 bytes 196 - * 197 - * 00000 fixed to 0 198 - * 0 descr writeback size 32 bytes 199 - * 0(1) descr chain end interrupt enable 200 - * 0(1) descr status writeback enable */ 201 - 202 - /* to set RX_DMA_EN */ 203 - #define SPIDER_NET_DMA_RX_VALUE 0x80000000 204 - #define SPIDER_NET_DMA_RX_FEND_VALUE 0x00030003 205 - /* to set TX_DMA_EN */ 206 - #define SPIDER_NET_TX_DMA_EN 0x80000000 207 - #define SPIDER_NET_GDTBSTA 0x00000300 208 - #define SPIDER_NET_GDTDCEIDIS 0x00000002 209 - #define SPIDER_NET_DMA_TX_VALUE SPIDER_NET_TX_DMA_EN | \ 210 - SPIDER_NET_GDTDCEIDIS | \ 211 - SPIDER_NET_GDTBSTA 212 - 213 - #define SPIDER_NET_DMA_TX_FEND_VALUE 0x00030003 214 - 215 - /* SPIDER_NET_UA_DESCR_VALUE is OR'ed with the unicast address */ 216 - #define SPIDER_NET_UA_DESCR_VALUE 0x00080000 217 - #define SPIDER_NET_PROMISC_VALUE 0x00080000 218 - #define SPIDER_NET_NONPROMISC_VALUE 0x00000000 219 
- 220 - #define SPIDER_NET_DMASEL_VALUE 0x00000001 221 - 222 - #define SPIDER_NET_ECMODE_VALUE 0x00000000 223 - 224 - #define SPIDER_NET_CKRCTRL_RUN_VALUE 0x1fff010f 225 - #define SPIDER_NET_CKRCTRL_STOP_VALUE 0x0000010f 226 - 227 - #define SPIDER_NET_SBIMSTATE_VALUE 0x00000000 228 - #define SPIDER_NET_SBTMSTATE_VALUE 0x00000000 229 - 230 - /* SPIDER_NET_GHIINT0STS bits, in reverse order so that they can be used 231 - * with 1 << SPIDER_NET_... */ 232 - enum spider_net_int0_status { 233 - SPIDER_NET_GPHYINT = 0, 234 - SPIDER_NET_GMAC2INT, 235 - SPIDER_NET_GMAC1INT, 236 - SPIDER_NET_GIPSINT, 237 - SPIDER_NET_GFIFOINT, 238 - SPIDER_NET_GDMACINT, 239 - SPIDER_NET_GSYSINT, 240 - SPIDER_NET_GPWOPCMPINT, 241 - SPIDER_NET_GPROPCMPINT, 242 - SPIDER_NET_GPWFFINT, 243 - SPIDER_NET_GRMDADRINT, 244 - SPIDER_NET_GRMARPINT, 245 - SPIDER_NET_GRMMPINT, 246 - SPIDER_NET_GDTDEN0INT, 247 - SPIDER_NET_GDDDEN0INT, 248 - SPIDER_NET_GDCDEN0INT, 249 - SPIDER_NET_GDBDEN0INT, 250 - SPIDER_NET_GDADEN0INT, 251 - SPIDER_NET_GDTFDCINT, 252 - SPIDER_NET_GDDFDCINT, 253 - SPIDER_NET_GDCFDCINT, 254 - SPIDER_NET_GDBFDCINT, 255 - SPIDER_NET_GDAFDCINT, 256 - SPIDER_NET_GTTEDINT, 257 - SPIDER_NET_GDTDCEINT, 258 - SPIDER_NET_GRFDNMINT, 259 - SPIDER_NET_GRFCNMINT, 260 - SPIDER_NET_GRFBNMINT, 261 - SPIDER_NET_GRFANMINT, 262 - SPIDER_NET_GRFNMINT, 263 - SPIDER_NET_G1TMCNTINT, 264 - SPIDER_NET_GFREECNTINT 265 - }; 266 - /* GHIINT1STS bits */ 267 - enum spider_net_int1_status { 268 - SPIDER_NET_GTMFLLINT = 0, 269 - SPIDER_NET_GRMFLLINT, 270 - SPIDER_NET_GTMSHTINT, 271 - SPIDER_NET_GDTINVDINT, 272 - SPIDER_NET_GRFDFLLINT, 273 - SPIDER_NET_GDDDCEINT, 274 - SPIDER_NET_GDDINVDINT, 275 - SPIDER_NET_GRFCFLLINT, 276 - SPIDER_NET_GDCDCEINT, 277 - SPIDER_NET_GDCINVDINT, 278 - SPIDER_NET_GRFBFLLINT, 279 - SPIDER_NET_GDBDCEINT, 280 - SPIDER_NET_GDBINVDINT, 281 - SPIDER_NET_GRFAFLLINT, 282 - SPIDER_NET_GDADCEINT, 283 - SPIDER_NET_GDAINVDINT, 284 - SPIDER_NET_GDTRSERINT, 285 - SPIDER_NET_GDDRSERINT, 286 - 
SPIDER_NET_GDCRSERINT, 287 - SPIDER_NET_GDBRSERINT, 288 - SPIDER_NET_GDARSERINT, 289 - SPIDER_NET_GDSERINT, 290 - SPIDER_NET_GDTPTERINT, 291 - SPIDER_NET_GDDPTERINT, 292 - SPIDER_NET_GDCPTERINT, 293 - SPIDER_NET_GDBPTERINT, 294 - SPIDER_NET_GDAPTERINT 295 - }; 296 - /* GHIINT2STS bits */ 297 - enum spider_net_int2_status { 298 - SPIDER_NET_GPROPERINT = 0, 299 - SPIDER_NET_GMCTCRSNGINT, 300 - SPIDER_NET_GMCTLCOLINT, 301 - SPIDER_NET_GMCTTMOTINT, 302 - SPIDER_NET_GMCRCAERINT, 303 - SPIDER_NET_GMCRCALERINT, 304 - SPIDER_NET_GMCRALNERINT, 305 - SPIDER_NET_GMCROVRINT, 306 - SPIDER_NET_GMCRRNTINT, 307 - SPIDER_NET_GMCRRXERINT, 308 - SPIDER_NET_GTITCSERINT, 309 - SPIDER_NET_GTIFMTERINT, 310 - SPIDER_NET_GTIPKTRVKINT, 311 - SPIDER_NET_GTISPINGINT, 312 - SPIDER_NET_GTISADNGINT, 313 - SPIDER_NET_GTISPDNGINT, 314 - SPIDER_NET_GRIFMTERINT, 315 - SPIDER_NET_GRIPKTRVKINT, 316 - SPIDER_NET_GRISPINGINT, 317 - SPIDER_NET_GRISADNGINT, 318 - SPIDER_NET_GRISPDNGINT 319 - }; 320 - 321 - #define SPIDER_NET_TXINT (1 << SPIDER_NET_GDTFDCINT) 322 - 323 - /* We rely on flagged descriptor interrupts */ 324 - #define SPIDER_NET_RXINT ( (1 << SPIDER_NET_GDAFDCINT) ) 325 - 326 - #define SPIDER_NET_LINKINT ( 1 << SPIDER_NET_GMAC2INT ) 327 - 328 - #define SPIDER_NET_ERRINT ( 0xffffffff & \ 329 - (~SPIDER_NET_TXINT) & \ 330 - (~SPIDER_NET_RXINT) & \ 331 - (~SPIDER_NET_LINKINT) ) 332 - 333 - #define SPIDER_NET_GPREXEC 0x80000000 334 - #define SPIDER_NET_GPRDAT_MASK 0x0000ffff 335 - 336 - #define SPIDER_NET_DMAC_NOINTR_COMPLETE 0x00800000 337 - #define SPIDER_NET_DMAC_TXFRMTL 0x00040000 338 - #define SPIDER_NET_DMAC_TCP 0x00020000 339 - #define SPIDER_NET_DMAC_UDP 0x00030000 340 - #define SPIDER_NET_TXDCEST 0x08000000 341 - 342 - #define SPIDER_NET_DESCR_RXFDIS 0x00000001 343 - #define SPIDER_NET_DESCR_RXDCEIS 0x00000002 344 - #define SPIDER_NET_DESCR_RXDEN0IS 0x00000004 345 - #define SPIDER_NET_DESCR_RXINVDIS 0x00000008 346 - #define SPIDER_NET_DESCR_RXRERRIS 0x00000010 347 - #define 
SPIDER_NET_DESCR_RXFDCIMS 0x00000100 348 - #define SPIDER_NET_DESCR_RXDCEIMS 0x00000200 349 - #define SPIDER_NET_DESCR_RXDEN0IMS 0x00000400 350 - #define SPIDER_NET_DESCR_RXINVDIMS 0x00000800 351 - #define SPIDER_NET_DESCR_RXRERRMIS 0x00001000 352 - #define SPIDER_NET_DESCR_UNUSED 0x077fe0e0 353 - 354 - #define SPIDER_NET_DESCR_IND_PROC_MASK 0xF0000000 355 - #define SPIDER_NET_DESCR_COMPLETE 0x00000000 /* used in rx and tx */ 356 - #define SPIDER_NET_DESCR_RESPONSE_ERROR 0x10000000 /* used in rx and tx */ 357 - #define SPIDER_NET_DESCR_PROTECTION_ERROR 0x20000000 /* used in rx and tx */ 358 - #define SPIDER_NET_DESCR_FRAME_END 0x40000000 /* used in rx */ 359 - #define SPIDER_NET_DESCR_FORCE_END 0x50000000 /* used in rx and tx */ 360 - #define SPIDER_NET_DESCR_CARDOWNED 0xA0000000 /* used in rx and tx */ 361 - #define SPIDER_NET_DESCR_NOT_IN_USE 0xF0000000 362 - #define SPIDER_NET_DESCR_TXDESFLG 0x00800000 363 - 364 - #define SPIDER_NET_DESCR_BAD_STATUS (SPIDER_NET_DESCR_RXDEN0IS | \ 365 - SPIDER_NET_DESCR_RXRERRIS | \ 366 - SPIDER_NET_DESCR_RXDEN0IMS | \ 367 - SPIDER_NET_DESCR_RXINVDIMS | \ 368 - SPIDER_NET_DESCR_RXRERRMIS | \ 369 - SPIDER_NET_DESCR_UNUSED) 370 - 371 - /* Descriptor, as defined by the hardware */ 372 - struct spider_net_hw_descr { 373 - u32 buf_addr; 374 - u32 buf_size; 375 - u32 next_descr_addr; 376 - u32 dmac_cmd_status; 377 - u32 result_size; 378 - u32 valid_size; /* all zeroes for tx */ 379 - u32 data_status; 380 - u32 data_error; /* all zeroes for tx */ 381 - } __attribute__((aligned(32))); 382 - 383 - struct spider_net_descr { 384 - struct spider_net_hw_descr *hwdescr; 385 - struct sk_buff *skb; 386 - u32 bus_addr; 387 - struct spider_net_descr *next; 388 - struct spider_net_descr *prev; 389 - }; 390 - 391 - struct spider_net_descr_chain { 392 - spinlock_t lock; 393 - struct spider_net_descr *head; 394 - struct spider_net_descr *tail; 395 - struct spider_net_descr *ring; 396 - int num_desc; 397 - struct spider_net_hw_descr *hwring; 398 - 
dma_addr_t dma_addr; 399 - }; 400 - 401 - /* descriptor data_status bits */ 402 - #define SPIDER_NET_RX_IPCHK 29 403 - #define SPIDER_NET_RX_TCPCHK 28 404 - #define SPIDER_NET_VLAN_PACKET 21 405 - #define SPIDER_NET_DATA_STATUS_CKSUM_MASK ( (1 << SPIDER_NET_RX_IPCHK) | \ 406 - (1 << SPIDER_NET_RX_TCPCHK) ) 407 - 408 - /* descriptor data_error bits */ 409 - #define SPIDER_NET_RX_IPCHKERR 27 410 - #define SPIDER_NET_RX_RXTCPCHKERR 28 411 - 412 - #define SPIDER_NET_DATA_ERR_CKSUM_MASK (1 << SPIDER_NET_RX_IPCHKERR) 413 - 414 - /* the cases we don't pass the packet to the stack. 415 - * 701b8000 would be correct, but every packets gets that flag */ 416 - #define SPIDER_NET_DESTROY_RX_FLAGS 0x700b8000 417 - 418 - #define SPIDER_NET_DEFAULT_MSG ( NETIF_MSG_DRV | \ 419 - NETIF_MSG_PROBE | \ 420 - NETIF_MSG_LINK | \ 421 - NETIF_MSG_TIMER | \ 422 - NETIF_MSG_IFDOWN | \ 423 - NETIF_MSG_IFUP | \ 424 - NETIF_MSG_RX_ERR | \ 425 - NETIF_MSG_TX_ERR | \ 426 - NETIF_MSG_TX_QUEUED | \ 427 - NETIF_MSG_INTR | \ 428 - NETIF_MSG_TX_DONE | \ 429 - NETIF_MSG_RX_STATUS | \ 430 - NETIF_MSG_PKTDATA | \ 431 - NETIF_MSG_HW | \ 432 - NETIF_MSG_WOL ) 433 - 434 - struct spider_net_extra_stats { 435 - unsigned long rx_desc_error; 436 - unsigned long tx_timeouts; 437 - unsigned long alloc_rx_skb_error; 438 - unsigned long rx_iommu_map_error; 439 - unsigned long tx_iommu_map_error; 440 - unsigned long rx_desc_unk_state; 441 - }; 442 - 443 - struct spider_net_card { 444 - struct net_device *netdev; 445 - struct pci_dev *pdev; 446 - struct mii_phy phy; 447 - 448 - struct napi_struct napi; 449 - 450 - int medium; 451 - 452 - void __iomem *regs; 453 - 454 - struct spider_net_descr_chain tx_chain; 455 - struct spider_net_descr_chain rx_chain; 456 - struct spider_net_descr *low_watermark; 457 - 458 - int aneg_count; 459 - struct timer_list aneg_timer; 460 - struct timer_list tx_timer; 461 - struct work_struct tx_timeout_task; 462 - atomic_t tx_timeout_task_counter; 463 - wait_queue_head_t waitq; 464 - int 
num_rx_ints; 465 - int ignore_rx_ramfull; 466 - 467 - /* for ethtool */ 468 - int msg_enable; 469 - struct spider_net_extra_stats spider_stats; 470 - 471 - /* Must be last item in struct */ 472 - struct spider_net_descr darray[]; 473 - }; 474 - 475 - #endif
-174
drivers/net/ethernet/toshiba/spider_net_ethtool.c
···
1 - // SPDX-License-Identifier: GPL-2.0-or-later
2 - /*
3 -  * Network device driver for Cell Processor-Based Blade
4 -  *
5 -  * (C) Copyright IBM Corp. 2005
6 -  *
7 -  * Authors : Utz Bacher <utz.bacher@de.ibm.com>
8 -  *           Jens Osterkamp <Jens.Osterkamp@de.ibm.com>
9 -  */
10 -
11 - #include <linux/netdevice.h>
12 - #include <linux/ethtool.h>
13 - #include <linux/pci.h>
14 -
15 - #include "spider_net.h"
16 -
17 -
18 - static struct {
19 -         const char str[ETH_GSTRING_LEN];
20 - } ethtool_stats_keys[] = {
21 -         { "tx_packets" },
22 -         { "tx_bytes" },
23 -         { "rx_packets" },
24 -         { "rx_bytes" },
25 -         { "tx_errors" },
26 -         { "tx_dropped" },
27 -         { "rx_dropped" },
28 -         { "rx_descriptor_error" },
29 -         { "tx_timeouts" },
30 -         { "alloc_rx_skb_error" },
31 -         { "rx_iommu_map_error" },
32 -         { "tx_iommu_map_error" },
33 -         { "rx_desc_unk_state" },
34 - };
35 -
36 - static int
37 - spider_net_ethtool_get_link_ksettings(struct net_device *netdev,
38 -                                       struct ethtool_link_ksettings *cmd)
39 - {
40 -         struct spider_net_card *card;
41 -         card = netdev_priv(netdev);
42 -
43 -         ethtool_link_ksettings_zero_link_mode(cmd, supported);
44 -         ethtool_link_ksettings_add_link_mode(cmd, supported, 1000baseT_Full);
45 -         ethtool_link_ksettings_add_link_mode(cmd, supported, FIBRE);
46 -
47 -         ethtool_link_ksettings_zero_link_mode(cmd, advertising);
48 -         ethtool_link_ksettings_add_link_mode(cmd, advertising, 1000baseT_Full);
49 -         ethtool_link_ksettings_add_link_mode(cmd, advertising, FIBRE);
50 -
51 -         cmd->base.port = PORT_FIBRE;
52 -         cmd->base.speed = card->phy.speed;
53 -         cmd->base.duplex = DUPLEX_FULL;
54 -
55 -         return 0;
56 - }
57 -
58 - static void
59 - spider_net_ethtool_get_drvinfo(struct net_device *netdev,
60 -                                struct ethtool_drvinfo *drvinfo)
61 - {
62 -         struct spider_net_card *card;
63 -         card = netdev_priv(netdev);
64 -
65 -         /* clear and fill out info */
66 -         strscpy(drvinfo->driver, spider_net_driver_name,
67 -                 sizeof(drvinfo->driver));
68 -         strscpy(drvinfo->version, VERSION, sizeof(drvinfo->version));
69 -         strscpy(drvinfo->fw_version, "no information",
70 -                 sizeof(drvinfo->fw_version));
71 -         strscpy(drvinfo->bus_info, pci_name(card->pdev),
72 -                 sizeof(drvinfo->bus_info));
73 - }
74 -
75 - static void
76 - spider_net_ethtool_get_wol(struct net_device *netdev,
77 -                            struct ethtool_wolinfo *wolinfo)
78 - {
79 -         /* no support for wol */
80 -         wolinfo->supported = 0;
81 -         wolinfo->wolopts = 0;
82 - }
83 -
84 - static u32
85 - spider_net_ethtool_get_msglevel(struct net_device *netdev)
86 - {
87 -         struct spider_net_card *card;
88 -         card = netdev_priv(netdev);
89 -         return card->msg_enable;
90 - }
91 -
92 - static void
93 - spider_net_ethtool_set_msglevel(struct net_device *netdev,
94 -                                 u32 level)
95 - {
96 -         struct spider_net_card *card;
97 -         card = netdev_priv(netdev);
98 -         card->msg_enable = level;
99 - }
100 -
101 - static int
102 - spider_net_ethtool_nway_reset(struct net_device *netdev)
103 - {
104 -         if (netif_running(netdev)) {
105 -                 spider_net_stop(netdev);
106 -                 spider_net_open(netdev);
107 -         }
108 -         return 0;
109 - }
110 -
111 - static void
112 - spider_net_ethtool_get_ringparam(struct net_device *netdev,
113 -                                  struct ethtool_ringparam *ering,
114 -                                  struct kernel_ethtool_ringparam *kernel_ering,
115 -                                  struct netlink_ext_ack *extack)
116 - {
117 -         struct spider_net_card *card = netdev_priv(netdev);
118 -
119 -         ering->tx_max_pending = SPIDER_NET_TX_DESCRIPTORS_MAX;
120 -         ering->tx_pending = card->tx_chain.num_desc;
121 -         ering->rx_max_pending = SPIDER_NET_RX_DESCRIPTORS_MAX;
122 -         ering->rx_pending = card->rx_chain.num_desc;
123 - }
124 -
125 - static int spider_net_get_sset_count(struct net_device *netdev, int sset)
126 - {
127 -         switch (sset) {
128 -         case ETH_SS_STATS:
129 -                 return ARRAY_SIZE(ethtool_stats_keys);
130 -         default:
131 -                 return -EOPNOTSUPP;
132 -         }
133 - }
134 -
135 - static void spider_net_get_ethtool_stats(struct net_device *netdev,
136 -                                          struct ethtool_stats *stats, u64 *data)
137 - {
138 -         struct spider_net_card *card = netdev_priv(netdev);
139 -
140 -         data[0] = netdev->stats.tx_packets;
141 -         data[1] = netdev->stats.tx_bytes;
142 -         data[2] = netdev->stats.rx_packets;
143 -         data[3] = netdev->stats.rx_bytes;
144 -         data[4] = netdev->stats.tx_errors;
145 -         data[5] = netdev->stats.tx_dropped;
146 -         data[6] = netdev->stats.rx_dropped;
147 -         data[7] = card->spider_stats.rx_desc_error;
148 -         data[8] = card->spider_stats.tx_timeouts;
149 -         data[9] = card->spider_stats.alloc_rx_skb_error;
150 -         data[10] = card->spider_stats.rx_iommu_map_error;
151 -         data[11] = card->spider_stats.tx_iommu_map_error;
152 -         data[12] = card->spider_stats.rx_desc_unk_state;
153 - }
154 -
155 - static void spider_net_get_strings(struct net_device *netdev, u32 stringset,
156 -                                    u8 *data)
157 - {
158 -         memcpy(data, ethtool_stats_keys, sizeof(ethtool_stats_keys));
159 - }
160 -
161 - const struct ethtool_ops spider_net_ethtool_ops = {
162 -         .get_drvinfo = spider_net_ethtool_get_drvinfo,
163 -         .get_wol = spider_net_ethtool_get_wol,
164 -         .get_msglevel = spider_net_ethtool_get_msglevel,
165 -         .set_msglevel = spider_net_ethtool_set_msglevel,
166 -         .get_link = ethtool_op_get_link,
167 -         .nway_reset = spider_net_ethtool_nway_reset,
168 -         .get_ringparam = spider_net_ethtool_get_ringparam,
169 -         .get_strings = spider_net_get_strings,
170 -         .get_sset_count = spider_net_get_sset_count,
171 -         .get_ethtool_stats = spider_net_get_ethtool_stats,
172 -         .get_link_ksettings = spider_net_ethtool_get_link_ksettings,
173 - };
174 -
-4
kernel/irq/Kconfig
···
47 47 config HARDIRQS_SW_RESEND
48 48         bool
49 49
50 -  # Edge style eoi based handler (cell)
51 -  config IRQ_EDGE_EOI_HANDLER
52 -          bool
53 -
54 50 # Generic configurable interrupt chip implementation
55 51 config GENERIC_IRQ_CHIP
56 52         bool
-47
kernel/irq/chip.c
···
838 838 }
839 839 EXPORT_SYMBOL(handle_edge_irq);
840 840
841 -  #ifdef CONFIG_IRQ_EDGE_EOI_HANDLER
842 -  /**
843 -   * handle_edge_eoi_irq - edge eoi type IRQ handler
844 -   * @desc: the interrupt description structure for this irq
845 -   *
846 -   * Similar as the above handle_edge_irq, but using eoi and w/o the
847 -   * mask/unmask logic.
848 -   */
849 -  void handle_edge_eoi_irq(struct irq_desc *desc)
850 -  {
851 -          struct irq_chip *chip = irq_desc_get_chip(desc);
852 -
853 -          raw_spin_lock(&desc->lock);
854 -
855 -          desc->istate &= ~(IRQS_REPLAY | IRQS_WAITING);
856 -
857 -          if (!irq_may_run(desc)) {
858 -                  desc->istate |= IRQS_PENDING;
859 -                  goto out_eoi;
860 -          }
861 -
862 -          /*
863 -           * If its disabled or no action available then mask it and get
864 -           * out of here.
865 -           */
866 -          if (irqd_irq_disabled(&desc->irq_data) || !desc->action) {
867 -                  desc->istate |= IRQS_PENDING;
868 -                  goto out_eoi;
869 -          }
870 -
871 -          kstat_incr_irqs_this_cpu(desc);
872 -
873 -          do {
874 -                  if (unlikely(!desc->action))
875 -                          goto out_eoi;
876 -
877 -                  handle_irq_event(desc);
878 -
879 -          } while ((desc->istate & IRQS_PENDING) &&
880 -                   !irqd_irq_disabled(&desc->irq_data));
881 -
882 -  out_eoi:
883 -          chip->irq_eoi(&desc->irq_data);
884 -          raw_spin_unlock(&desc->lock);
885 -  }
886 -  #endif
887 -
888 841 /**
889 842  * handle_percpu_irq - Per CPU local irq handler
890 843  * @desc: the interrupt description structure for this irq
+1 -1
kernel/static_call_inline.c
···
206 206                 continue;
207 207         }
208 208
209 -          arch_static_call_transform(site_addr, NULL, func,
209 +          arch_static_call_transform(site_addr, tramp, func,
210 210                                    static_call_is_tail(site));
211 211 }
212 212 }
+9 -1
tools/objtool/arch/powerpc/decode.c
···
55 55
56 56         switch (opcode) {
57 57         case 18: /* b[l][a] */
58 -                  if ((ins & 3) == 1) /* bl */
58 +                  if (ins == 0x48000005) /* bl .+4 */
59 +                          typ = INSN_OTHER;
60 +                  else if (ins & 1) /* bl[a] */
59 61                         typ = INSN_CALL;
62 +                  else /* b[a] */
63 +                          typ = INSN_JUMP_UNCONDITIONAL;
60 64
61 65                 imm = ins & 0x3fffffc;
62 66                 if (imm & 0x2000000)
63 67                         imm -= 0x4000000;
68 +                  imm |= ins & 2; /* AA flag */
64 69                 break;
65 70         }
66 71
···
82 77
83 78 unsigned long arch_jump_destination(struct instruction *insn)
84 79 {
80 +          if (insn->immediate & 2)
81 +                  return insn->immediate & ~2;
82 +
85 83         return insn->offset + insn->immediate;
86 84 }
87 85
+2 -1
tools/testing/selftests/powerpc/pmu/event_code_tests/event_alternatives_tests_p10.c
···
26 26 {
27 27         struct event *e, events[5];
28 28         int i;
29 +          int pvr = PVR_VER(mfspr(SPRN_PVR));
29 30
30 31         /* Check for platform support for the test */
31 32         SKIP_IF(platform_check_for_tests());
···
37 36          * code and using PVR will work correctly for all cases
38 37          * including generic compat mode.
39 38          */
40 -          SKIP_IF(PVR_VER(mfspr(SPRN_PVR)) != POWER10);
39 +          SKIP_IF((pvr != POWER10) && (pvr != POWER11));
41 40
42 41         SKIP_IF(check_for_generic_compat_pmu());
43 42
+2 -1
tools/testing/selftests/powerpc/pmu/event_code_tests/generic_events_valid_test.c
···
17 17 static int generic_events_valid_test(void)
18 18 {
19 19         struct event event;
20 +          int pvr = PVR_VER(mfspr(SPRN_PVR));
20 21
21 22         /* Check for platform support for the test */
22 23         SKIP_IF(platform_check_for_tests());
···
32 31          * - PERF_COUNT_HW_STALLED_CYCLES_BACKEND
33 32          * - PERF_COUNT_HW_REF_CPU_CYCLES
34 33          */
35 -          if (PVR_VER(mfspr(SPRN_PVR)) == POWER10) {
34 +          if ((pvr == POWER10) || (pvr == POWER11)) {
36 35                 event_init_opts(&event, PERF_COUNT_HW_CPU_CYCLES, PERF_TYPE_HARDWARE, "event");
37 36                 FAIL_IF(event_open(&event));
38 37                 event_close(&event);
+1 -1
tools/testing/selftests/powerpc/pmu/event_code_tests/group_constraint_l2l3_sel_test.c
···
30 30
31 31         /*
32 32          * Check for platform support for the test.
33 -           * This test is only aplicable on power10
33 +           * This test is only aplicable on ISA v3.1
34 34          */
35 35         SKIP_IF(platform_check_for_tests());
36 36         SKIP_IF(!have_hwcap2(PPC_FEATURE2_ARCH_3_1));
+1 -1
tools/testing/selftests/powerpc/pmu/event_code_tests/group_constraint_radix_scope_qual_test.c
···
26 26
27 27         /*
28 28          * Check for platform support for the test.
29 -           * This test is aplicable on power10 only.
29 +           * This test is aplicable on ISA v3.1 only.
30 30          */
31 31         SKIP_IF(platform_check_for_tests());
32 32         SKIP_IF(!have_hwcap2(PPC_FEATURE2_ARCH_3_1));
+1 -1
tools/testing/selftests/powerpc/pmu/event_code_tests/group_constraint_thresh_cmp_test.c
···
25 25 /*
26 26  * Testcase for group constraint check of thresh_cmp bits which is
27 27  * used to program thresh compare field in Monitor Mode Control Register A
28 -   * (MMCRA: 9-18 bits for power9 and MMCRA: 8-18 bits for power10).
28 +   * (MMCRA: 9-18 bits for power9 and MMCRA: 8-18 bits for power10/power11).
29 29  * All events in the group should match thresh compare bits otherwise
30 30  * event_open for the group will fail.
31 31  */
+2 -2
tools/testing/selftests/powerpc/pmu/event_code_tests/invalid_event_code_test.c
···
20 20  * Some of the bits in the event code is
21 21  * reserved for specific platforms.
22 22  * Event code bits 52-59 are reserved in power9,
23 -   * whereas in power10, these are used for programming
23 +   * whereas in ISA v3.1, these are used for programming
24 24  * Monitor Mode Control Register 3 (MMCR3).
25 25  * Bit 9 in event code is reserved in power9,
26 26  * whereas it is used for programming "radix_scope_qual"
···
39 39
40 40         /*
41 41          * Events using MMCR3 bits and radix scope qual bits
42 -           * should fail in power9 and should succeed in power10.
42 +           * should fail in power9 and should succeed in power10 ( ISA v3.1 )
43 43          * Init the events and check for pass/fail in event open.
44 44          */
45 45         if (have_hwcap2(PPC_FEATURE2_ARCH_3_1)) {
+3 -2
tools/testing/selftests/powerpc/pmu/event_code_tests/reserved_bits_mmcra_sample_elig_mode_test.c
···
21 21 static int reserved_bits_mmcra_sample_elig_mode(void)
22 22 {
23 23         struct event event;
24 +          int pvr = PVR_VER(mfspr(SPRN_PVR));
24 25
25 26         /* Check for platform support for the test */
26 27         SKIP_IF(platform_check_for_tests());
···
57 56
58 57         /*
59 58          * MMCRA Random Sampling Mode (SM) value 0x10
60 -           * is reserved in power10 and 0xC is reserved in
59 +           * is reserved in power10/power11 and 0xC is reserved in
61 60          * power9.
62 61          */
63 -          if (PVR_VER(mfspr(SPRN_PVR)) == POWER10) {
62 +          if ((pvr == POWER10) || (pvr == POWER11)) {
64 63                 event_init(&event, 0x100401e0);
65 64                 FAIL_IF(!event_open(&event));
66 65         } else if (PVR_VER(mfspr(SPRN_PVR)) == POWER9) {
+2 -1
tools/testing/selftests/powerpc/pmu/sampling_tests/Makefile
···
5 5         mmcr3_src_test mmcra_thresh_marked_sample_test mmcra_thresh_cmp_test \
6 6         mmcra_bhrb_ind_call_test mmcra_bhrb_any_test mmcra_bhrb_cond_test \
7 7         mmcra_bhrb_disable_test bhrb_no_crash_wo_pmu_test intr_regs_no_crash_wo_pmu_test \
8 -          bhrb_filter_map_test mmcr1_sel_unit_cache_test mmcra_bhrb_disable_no_branch_test
8 +          bhrb_filter_map_test mmcr1_sel_unit_cache_test mmcra_bhrb_disable_no_branch_test \
9 +          check_extended_reg_test
9 10
10 11 top_srcdir = ../../../../../..
11 12 include ../../../lib.mk
+10 -7
tools/testing/selftests/powerpc/pmu/sampling_tests/bhrb_filter_map_test.c
···
14 14  * A perf sampling test to check bhrb filter
15 15  * map. All the branch filters are not supported
16 16  * in powerpc. Supported filters in:
17 -   * power10: any, any_call, ind_call, cond
17 +   * power10/power11: any, any_call, ind_call, cond
18 18  * power9: any, any_call
19 19  *
20 20  * Testcase checks event open for invalid bhrb filter
···
24 24  */
25 25
26 26 /* Invalid types for powerpc */
27 -  /* Valid bhrb filters in power9/power10 */
27 +  /* Valid bhrb filters in power9/power10/power11 */
28 28 int bhrb_filter_map_valid_common[] = {
29 29         PERF_SAMPLE_BRANCH_ANY,
30 30         PERF_SAMPLE_BRANCH_ANY_CALL,
31 31 };
32 32
33 -  /* Valid bhrb filters in power10 */
33 +  /* Valid bhrb filters in power10/power11 */
34 34 int bhrb_filter_map_valid_p10[] = {
35 35         PERF_SAMPLE_BRANCH_IND_CALL,
36 36         PERF_SAMPLE_BRANCH_COND,
···
69 69                 FAIL_IF(!event_open(&event));
70 70         }
71 71
72 -          /* valid filter maps for power9/power10 which are expected to pass in event_open */
72 +          /* valid filter maps for power9/power10/power11 which are expected to pass in event_open */
73 73         for (i = 0; i < ARRAY_SIZE(bhrb_filter_map_valid_common); i++) {
74 74                 event.attr.branch_sample_type = bhrb_filter_map_valid_common[i];
75 75                 FAIL_IF(event_open(&event));
77 77         }
78 78
79 79         /*
80 -           * filter maps which are valid in power10 and invalid in power9.
80 +           * filter maps which are valid in power10/power11 and invalid in power9.
81 81          * PVR check is used here since PMU specific data like bhrb filter
82 82          * alternative tests is handled by respective PMU driver code and
83 83          * using PVR will work correctly for all cases including generic
84 84          * compat mode.
85 85          */
86 -          if (PVR_VER(mfspr(SPRN_PVR)) == POWER10) {
86 +          switch (PVR_VER(mfspr(SPRN_PVR))) {
87 +          case POWER11:
88 +          case POWER10:
87 89                 for (i = 0; i < ARRAY_SIZE(bhrb_filter_map_valid_p10); i++) {
88 90                         event.attr.branch_sample_type = bhrb_filter_map_valid_p10[i];
89 91                         FAIL_IF(event_open(&event));
90 92                         event_close(&event);
91 93                 }
92 -          } else {
94 +                  break;
95 +          default:
93 96                 for (i = 0; i < ARRAY_SIZE(bhrb_filter_map_valid_p10); i++) {
94 97                         event.attr.branch_sample_type = bhrb_filter_map_valid_p10[i];
95 98                         FAIL_IF(!event_open(&event));
+35
tools/testing/selftests/powerpc/pmu/sampling_tests/check_extended_reg_test.c
···
1 +  // SPDX-License-Identifier: GPL-2.0-only
2 +  /*
3 +   * Copyright 2024, Kajol Jain, IBM Corp.
4 +   */
5 +
6 +  #include <stdio.h>
7 +  #include <stdlib.h>
8 +
9 +  #include "../event.h"
10 +  #include "misc.h"
11 +  #include "utils.h"
12 +
13 +  /*
14 +   * A perf sampling test to check extended
15 +   * reg support.
16 +   */
17 +  static int check_extended_reg_test(void)
18 +  {
19 +          /* Check for platform support for the test */
20 +          SKIP_IF(!have_hwcap2(PPC_FEATURE2_ARCH_3_00));
21 +
22 +          /* Skip for Generic compat PMU */
23 +          SKIP_IF(check_for_generic_compat_pmu());
24 +
25 +          /* Check if platform supports extended regs */
26 +          platform_extended_mask = perf_get_platform_reg_mask();
27 +          FAIL_IF(check_extended_regs_support());
28 +
29 +          return 0;
30 +  }
31 +
32 +  int main(void)
33 +  {
34 +          return test_harness(check_extended_reg_test, "check_extended_reg_test");
35 +  }
+18 -2
tools/testing/selftests/powerpc/pmu/sampling_tests/misc.c
···
59 59                 ev_shift_thd_stop = 32;
60 60
61 61         switch (pvr) {
62 +          case POWER11:
62 63         case POWER10:
63 64                 ev_mask_thd_cmp = 0x3ffff;
64 65                 ev_shift_thd_cmp = 0;
···
92 91 }
93 92
94 93 /* Return the extended regs mask value */
95 -  static u64 perf_get_platform_reg_mask(void)
94 +  u64 perf_get_platform_reg_mask(void)
96 95 {
97 96         if (have_hwcap2(PPC_FEATURE2_ARCH_3_1))
98 97                 return PERF_POWER10_MASK;
···
130 129          * Check for supported platforms
131 130          * for sampling test
132 131          */
133 -          if ((pvr != POWER10) && (pvr != POWER9))
132 +          switch (pvr) {
133 +          case POWER11:
134 +          case POWER10:
135 +          case POWER9:
136 +                  break;
137 +          default:
134 138                 goto out;
139 +          }
135 140
136 141         /*
137 142          * Check PMU driver registered by looking for
···
497 490  * Utility function to check for generic compat PMU
498 491  * by comparing base_platform value from auxv and real
499 492  * PVR value.
493 +   * auxv_base_platform() func gives information of "base platform"
494 +   * corresponding to PVR value. Incase, if the distro doesn't
495 +   * support platform PVR (missing cputable support), base platform
496 +   * in auxv will have a default value other than the real PVR's.
497 +   * In this case, ISAv3 PMU (generic compat PMU) will be registered
498 +   * in the system. auxv_generic_compat_pmu() makes use of the base
499 +   * platform value from auxv to do this check.
500 500  */
501 501 static bool auxv_generic_compat_pmu(void)
502 502 {
···
513 499                 base_pvr = POWER9;
514 500         else if (!strcmp(auxv_base_platform(), "power10"))
515 501                 base_pvr = POWER10;
502 +          else if (!strcmp(auxv_base_platform(), "power11"))
503 +                  base_pvr = POWER11;
516 504
517 505         return (!base_pvr);
518 506 }
+8 -4
tools/testing/selftests/powerpc/pmu/sampling_tests/misc.h
···
 #include <sys/stat.h>
 #include "../event.h"
 
+#define POWER11 0x82
 #define POWER10 0x80
 #define POWER9 0x4e
 #define PERF_POWER9_MASK 0x7f8ffffffffffff
 #define PERF_POWER10_MASK 0x7ffffffffffffff
+#define PERF_POWER11_MASK PERF_POWER10_MASK
 
 #define MMCR0_FC56 0x00000010UL /* freeze counters 5 and 6 */
 #define MMCR0_PMCCEXT 0x00000200UL /* PMCCEXT control */
···
 extern u64 platform_extended_mask;
 extern int check_pvr_for_sampling_tests(void);
 extern int platform_check_for_tests(void);
+extern int check_extended_regs_support(void);
+extern u64 perf_get_platform_reg_mask(void);
 
 /*
  * Event code field extraction macro.
···
 
 static inline int get_mmcr2_l2l3(u64 mmcr2, int pmc)
 {
-	if (pvr == POWER10)
+	if (have_hwcap2(PPC_FEATURE2_ARCH_3_1))
 		return ((mmcr2 & 0xf8) >> 3);
 	return 0;
 }
 
 static inline int get_mmcr3_src(u64 mmcr3, int pmc)
 {
-	if (pvr != POWER10)
+	if (!have_hwcap2(PPC_FEATURE2_ARCH_3_1))
 		return 0;
 	return ((mmcr3 >> ((49 - (15 * ((pmc) - 1))))) & 0x7fff);
 }
 
 static inline int get_mmcra_thd_cmp(u64 mmcra, int pmc)
 {
-	if (pvr == POWER10)
+	if (have_hwcap2(PPC_FEATURE2_ARCH_3_1))
 		return ((mmcra >> 45) & 0x7ff);
 	return ((mmcra >> 45) & 0x3ff);
 }
···
 
 static inline u64 get_mmcra_bhrb_disable(u64 mmcra, int pmc)
 {
-	if (pvr == POWER10)
+	if (have_hwcap2(PPC_FEATURE2_ARCH_3_1))
 		return mmcra & BHRB_DISABLE;
 	return 0;
 }
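get_mmcr3_src() above packs one 15-bit source field per PMC into MMCR3, with PMC1's field starting at bit 49 and each later PMC shifted 15 bits lower. A standalone version of the same shift/mask arithmetic, with the hwcap gate replaced by a plain parameter so it runs anywhere (an adaptation for illustration, not the selftest API):

```c
#include <stdint.h>

/* Same arithmetic as misc.h's get_mmcr3_src(): PMC1's field occupies
 * bits 49..63; PMC2's occupies bits 34..48, and so on downward. */
static int mmcr3_src(uint64_t mmcr3, int pmc, int has_arch_3_1)
{
	/* Pre-ISA-v3.1 processors have no MMCR3. */
	if (!has_arch_3_1)
		return 0;
	return (mmcr3 >> (49 - (15 * (pmc - 1)))) & 0x7fff;
}
```

This is also why the pvr-compare to hwcap-compare change matters: gating on PPC_FEATURE2_ARCH_3_1 instead of `pvr == POWER10` lets these helpers decode MMCR3 (and the widened MMCRA fields) on Power11 without touching the bit arithmetic.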
+1 -1
tools/testing/selftests/powerpc/pmu/sampling_tests/mmcra_bhrb_cond_test.c
···
 
 	/*
 	 * Check for platform support for the test.
-	 * This test is only aplicable on power10
+	 * This test is only aplicable on ISA v3.1
 	 */
 	SKIP_IF(check_pvr_for_sampling_tests());
 	SKIP_IF(!have_hwcap2(PPC_FEATURE2_ARCH_3_1));
+1 -1
tools/testing/selftests/powerpc/pmu/sampling_tests/mmcra_bhrb_disable_no_branch_test.c
···
 
 	/*
 	 * Check for platform support for the test.
-	 * This test is only aplicable on power10
+	 * This test is only aplicable on ISA v3.1
 	 */
 	SKIP_IF(check_pvr_for_sampling_tests());
 	SKIP_IF(!have_hwcap2(PPC_FEATURE2_ARCH_3_1));
+1 -1
tools/testing/selftests/powerpc/pmu/sampling_tests/mmcra_bhrb_disable_test.c
···
 
 	/*
 	 * Check for platform support for the test.
-	 * This test is only aplicable on power10
+	 * This test is only aplicable on ISA v3.1
 	 */
 	SKIP_IF(check_pvr_for_sampling_tests());
 	SKIP_IF(!have_hwcap2(PPC_FEATURE2_ARCH_3_1));
+1 -1
tools/testing/selftests/powerpc/pmu/sampling_tests/mmcra_bhrb_ind_call_test.c
···
 
 	/*
 	 * Check for platform support for the test.
-	 * This test is only aplicable on power10
+	 * This test is only aplicable on ISA v3.1
 	 */
 	SKIP_IF(check_pvr_for_sampling_tests());
 	SKIP_IF(!have_hwcap2(PPC_FEATURE2_ARCH_3_1));
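All four BHRB tests gate on PPC_FEATURE2_ARCH_3_1 rather than a PVR compare, so Power11 is covered without further edits. have_hwcap2() amounts to an AT_HWCAP2 lookup in the auxiliary vector; a minimal Linux-only sketch (the feature-bit value is taken from the kernel's uapi cputable.h, but treat the fallback define as an assumption of this sketch):

```c
#include <stdbool.h>
#include <sys/auxv.h>	/* getauxval(), Linux-specific */

/* ISA v3.1 feature bit; normally provided by <asm/cputable.h> or glibc. */
#ifndef PPC_FEATURE2_ARCH_3_1
#define PPC_FEATURE2_ARCH_3_1 0x00040000
#endif

/* Same shape as the selftests' have_hwcap2(): test a feature bit in
 * the AT_HWCAP2 entry of the auxiliary vector.  On non-powerpc hosts
 * the bit is simply absent, so the check returns false. */
static bool have_hwcap2(unsigned long feature)
{
	return (getauxval(AT_HWCAP2) & feature) != 0;
}
```

Keying tests to architecture-level hwcaps rather than specific PVRs is the recurring theme of this series: the kernel advertises what the CPU can do, and the tests stop needing a per-processor allowlist.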