Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch '4.14-features' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus

Pull MIPS updates from Ralf Baechle:
"This is the main pull request for 4.14 for MIPS; below a summary of
the non-merge commits:

CM:
- Rename mips_cm_base to mips_gcr_base
- Specify register size when generating accessors
- Use BIT/GENMASK for register fields, order & drop shifts
- Add cluster & block args to mips_cm_lock_other()

CPC:
- Use common CPS accessor generation macros
- Use BIT/GENMASK for register fields, order & drop shifts
- Introduce register modify (set/clear/change) accessors
- Use change_*, set_* & clear_* where appropriate
- Add CM/CPC 3.5 register definitions
- Use GlobalNumber macros rather than magic numbers
- Have asm/mips-cps.h include CM & CPC headers
- Cluster support for topology functions
- Detect CPUs in secondary clusters

CPS:
- Read GIC_VL_IDENT directly, not via irqchip driver

DMA:
- Consolidate coherent and non-coherent dma_alloc code
- Don't use dma_cache_sync to implement fd_cacheflush

FPU emulation / FP assist code:
- Another series of 14 commits fixing corner cases such as NaN
propagation and other special input values.
- Zero bits 32-63 of the result for a CLASS.D instruction.
- Enhanced statistics via debugfs
- Do not use bools for arithmetic. GCC 7.1 moans about this.
- Correct user fault_addr type

Generic MIPS:
- Enhancement of stack backtraces
- Cleanup of non-existing options
- Handle non word sized instructions when examining frame
- Fix detection and decoding of ADDIUSP instruction
- Fix decoding of SWSP16 instruction
- Refactor handling of stack pointer in get_frame_info
- Remove unreachable code from force_fcr31_sig()
- Convert to using %pOF instead of full_name
- Remove the R6000 support.
- Move FP code from *_switch.S to *_fpu.S
- Remove unused ST_OFF from r2300_switch.S
- Allow platform to specify multiple its.S files
- Add #includes to various files to ensure code builds reliably and
without warnings.
- Remove __invalidate_kernel_vmap_range
- Remove plat_timer_setup
- Declare various variables & functions static
- Abstract CPU core & VP(E) ID access through accessor functions
- Store core & VP IDs in GlobalNumber-style variable
- Unify checks for sibling CPUs
- Add CPU cluster number accessors
- Prevent direct use of generic_defconfig
- Make CONFIG_MIPS_MT_SMP default y
- Add __ioread64_copy
- Remove unnecessary inclusions of linux/irqchip/mips-gic.h

GIC:
- Introduce asm/mips-gic.h with accessor functions
- Use new GIC accessor functions in mips-gic-timer
- Remove counter access functions from irq-mips-gic.c
- Remove gic_read_local_vp_id() from irq-mips-gic.c
- Simplify shared interrupt pending/mask reads in irq-mips-gic.c
- Simplify gic_local_irq_domain_map() in irq-mips-gic.c
- Drop gic_(re)set_mask() functions in irq-mips-gic.c
- Remove gic_set_polarity(), gic_set_trigger(), gic_set_dual_edge(),
gic_map_to_pin() and gic_map_to_vpe() from irq-mips-gic.c.
- Convert remaining shared reg access, local int mask access and
remaining local reg access to new accessors
- Move GIC_LOCAL_INT_* to asm/mips-gic.h
- Remove GIC_CPU_INT* macros from irq-mips-gic.c
- Move various definitions to the driver
- Remove gic_get_usm_range()
- Remove __gic_irq_dispatch() forward declaration
- Remove gic_init()
- Use mips_gic_present() in place of gic_present and remove
gic_present
- Move gic_get_c0_*_int() to asm/mips-gic.h
- Remove linux/irqchip/mips-gic.h
- Inline __gic_init()
- Inline gic_basic_init()
- Make pcpu_masks a per-cpu variable
- Use pcpu_masks to avoid reading GIC_SH_MASK*
- Clean up mti, reserved-cpu-vectors handling
- Use cpumask_first_and() in gic_set_affinity()
- Let the core set struct irq_common_data affinity

microMIPS:
- Fix microMIPS stack unwinding on big endian systems

MIPS-GIC:
- SYNC after enabling GIC region

NUMA:
- Remove the unused parent_node() macro

R6:
- Constify r2_decoder_tables
- Add accessor & bit definitions for GlobalNumber

SMP:
- Constify smp ops
- Allow boot_secondary SMP op to return errors

VDSO:
- Drop gic_get_usm_range() usage
- Avoid use of linux/irqchip/mips-gic.h

Platform changes:

Alchemy:
- Add devboard machine type to cpuinfo
- update cpu feature overrides
- Threaded carddetect irqs for devboards

AR7:
- allow NULL clock for clk_get_rate

BCM63xx:
- Fix ENETDMA_6345_MAXBURST_REG offset
- Allow NULL clock for clk_get_rate

CI20:
- Enable GPIO and RTC drivers in defconfig
- Add ethernet and fixed-regulator nodes to DTS

Generic platform:
- Move Boston and NI 169445 FIT image source to their own files
- Include asm/bootinfo.h for plat_fdt_relocated()
- Include asm/time.h for get_c0_*_int()
- Allow filtering enabled boards by requirements
- Don't explicitly disable CONFIG_USB_SUPPORT
- Bump default NR_CPUS to 16

JZ4780:
- Probe the jz4740-rtc driver from devicetree

Lantiq:
- Drop check of boot select from the spi-falcon driver.
- Drop check of boot select from the lantiq-flash MTD driver.
- Access boot cause register in the watchdog driver through regmap
- Add device tree binding documentation for the watchdog driver
- Add docs for the RCU DT bindings.
- Convert the fpi bus driver to a platform_driver
- Remove ltq_reset_cause() and ltq_boot_select()
- Switch to a proper reset driver
- Switch to a new drivers/soc GPHY driver
- Add a USB PHY driver for the Lantiq SoCs using the RCU module
- Use of_platform_default_populate instead of __dt_register_buses
- Enable MFD_SYSCON to be able to use it for the RCU MFD
- Replace ltq_boot_select() with dummy implementation.

Loongson 2F:
- Allow NULL clock for clk_get_rate

Malta:
- Use new GIC accessor functions

NI 169445:
- Add support for NI 169445 board.
- Only include in 32r2el kernels

Octeon:
- Add support for watchdog of 78XX SOCs.
- Add support for watchdog of CN68XX SOCs.
- Expose support for mips32r1, mips32r2 and mips64r1
- Enable more drivers in config file
- Add support for accessing the boot vector.
- Remove old boot vector code from watchdog driver
- Define watchdog registers for 70xx, 73xx, 78xx, F75xx.
- Make CSR functions node aware.
- Allow access to CIU3 IRQ domains.
- Misc cleanups in the watchdog driver

Omega2+:
- New board, add support and defconfig

Pistachio:
- Enable Root FS on NFS in defconfig

Ralink:
- Add Mediatek MT7628A SoC
- Allow NULL clock for clk_get_rate
- Explicitly request exclusive reset control in the pci-mt7620 PCI driver.

SEAD3:
- Only include in 32 bit kernels by default

VoCore:
- Add VoCore as a vendor to dt-bindings
- Add defconfig file"

* '4.14-features' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus: (167 commits)
MIPS: Refactor handling of stack pointer in get_frame_info
MIPS: Stacktrace: Fix microMIPS stack unwinding on big endian systems
MIPS: microMIPS: Fix decoding of swsp16 instruction
MIPS: microMIPS: Fix decoding of addiusp instruction
MIPS: microMIPS: Fix detection of addiusp instruction
MIPS: Handle non word sized instructions when examining frame
MIPS: ralink: allow NULL clock for clk_get_rate
MIPS: Loongson 2F: allow NULL clock for clk_get_rate
MIPS: BCM63XX: allow NULL clock for clk_get_rate
MIPS: AR7: allow NULL clock for clk_get_rate
MIPS: BCM63XX: fix ENETDMA_6345_MAXBURST_REG offset
mips: Save all registers when saving the frame
MIPS: Add DWARF unwinding to assembly
MIPS: Make SAVE_SOME more standard
MIPS: Fix issues in backtraces
MIPS: jz4780: DTS: Probe the jz4740-rtc driver from devicetree
MIPS: Ci20: Enable RTC driver
watchdog: octeon-wdt: Add support for 78XX SOCs.
watchdog: octeon-wdt: Add support for cn68XX SOCs.
watchdog: octeon-wdt: File cleaning.
...

+6360 -3388
+31
Documentation/devicetree/bindings/mips/lantiq/fpi-bus.txt
Lantiq XWAY SoC FPI BUS binding
===============================


-------------------------------------------------------------------------------
Required properties:
- compatible		: Should be one of
				"lantiq,xrx200-fpi"
- reg			: The address and length of the XBAR
			  configuration register.
			  Address and length of the FPI bus itself.
- lantiq,rcu		: A phandle to the RCU syscon
- lantiq,offset-endianness : Offset of the endianness configuration
			  register

-------------------------------------------------------------------------------
Example for the FPI on the xrx200 SoCs:
	fpi@10000000 {
		compatible = "lantiq,xrx200-fpi";
		ranges = <0x0 0x10000000 0xf000000>;
		reg = <0x1f400000 0x1000>,
			<0x10000000 0xf000000>;
		lantiq,rcu = <&rcu0>;
		lantiq,offset-endianness = <0x4c>;
		#address-cells = <1>;
		#size-cells = <1>;

		gptu@e100a00 {
			......
		};
	};
+36
Documentation/devicetree/bindings/mips/lantiq/rcu-gphy.txt
Lantiq XWAY SoC GPHY binding
============================

This binding describes a software-defined ethernet PHY, provided by the RCU
module on newer Lantiq XWAY SoCs (xRX200 and newer).

-------------------------------------------------------------------------------
Required properties:
- compatible	: Should be one of
			"lantiq,xrx200a1x-gphy"
			"lantiq,xrx200a2x-gphy"
			"lantiq,xrx300-gphy"
			"lantiq,xrx330-gphy"
- reg		: Address of the GPHY FW load address register
- resets	: Must reference the RCU GPHY reset bit
- reset-names	: One entry, value must be "gphy" or optional "gphy2"
- clocks	: A reference to the (PMU) GPHY clock gate

Optional properties:
- lantiq,gphy-mode : GPHY_MODE_GE (default) or GPHY_MODE_FE as defined in
		     <dt-bindings/mips/lantiq_xway_gphy.h>


-------------------------------------------------------------------------------
Example for the GPHYs on the xRX200 SoCs:

#include <dt-bindings/mips/lantiq_rcu_gphy.h>
	gphy0: gphy@20 {
		compatible = "lantiq,xrx200a2x-gphy";
		reg = <0x20 0x4>;

		resets = <&reset0 31 30>, <&reset1 7 7>;
		reset-names = "gphy", "gphy2";
		clocks = <&pmu0 XRX200_PMU_GATE_GPHY>;
		lantiq,gphy-mode = <GPHY_MODE_GE>;
	};
+89
Documentation/devicetree/bindings/mips/lantiq/rcu.txt
Lantiq XWAY SoC RCU binding
===========================

This binding describes the RCU (reset controller unit) multifunction device,
where each sub-device has its own set of registers.

The RCU register range is used for multiple purposes. Mostly one device
uses one or multiple registers exclusively, but for some registers some
bits are for one driver and some other bits are for a different driver.
With this patch all accesses to the RCU registers will go through
syscon.


-------------------------------------------------------------------------------
Required properties:
- compatible	: The first and second values must be:
		  "lantiq,xrx200-rcu", "simple-mfd", "syscon"
- reg		: The address and length of the system control registers


-------------------------------------------------------------------------------
Example of the RCU bindings on a xRX200 SoC:
	rcu0: rcu@203000 {
		compatible = "lantiq,xrx200-rcu", "simple-mfd", "syscon";
		reg = <0x203000 0x100>;
		ranges = <0x0 0x203000 0x100>;
		big-endian;

		gphy0: gphy@20 {
			compatible = "lantiq,xrx200a2x-gphy";
			reg = <0x20 0x4>;

			resets = <&reset0 31 30>, <&reset1 7 7>;
			reset-names = "gphy", "gphy2";
			lantiq,gphy-mode = <GPHY_MODE_GE>;
		};

		gphy1: gphy@68 {
			compatible = "lantiq,xrx200a2x-gphy";
			reg = <0x68 0x4>;

			resets = <&reset0 29 28>, <&reset1 6 6>;
			reset-names = "gphy", "gphy2";
			lantiq,gphy-mode = <GPHY_MODE_GE>;
		};

		reset0: reset-controller@10 {
			compatible = "lantiq,xrx200-reset";
			reg = <0x10 4>, <0x14 4>;

			#reset-cells = <2>;
		};

		reset1: reset-controller@48 {
			compatible = "lantiq,xrx200-reset";
			reg = <0x48 4>, <0x24 4>;

			#reset-cells = <2>;
		};

		usb_phy0: usb2-phy@18 {
			compatible = "lantiq,xrx200-usb2-phy";
			reg = <0x18 4>, <0x38 4>;
			status = "disabled";

			resets = <&reset1 4 4>, <&reset0 4 4>;
			reset-names = "phy", "ctrl";
			#phy-cells = <0>;
		};

		usb_phy1: usb2-phy@34 {
			compatible = "lantiq,xrx200-usb2-phy";
			reg = <0x34 4>, <0x3C 4>;
			status = "disabled";

			resets = <&reset1 5 4>, <&reset0 4 4>;
			reset-names = "phy", "ctrl";
			#phy-cells = <0>;
		};

		reboot@10 {
			compatible = "syscon-reboot";
			reg = <0x10 4>;

			regmap = <&rcu0>;
			offset = <0x10>;
			mask = <0x40000000>;
		};
	};
+7
Documentation/devicetree/bindings/mips/ni.txt
National Instruments MIPS platforms

required root node properties:
- compatible: must be "ni,169445"

CPU Nodes
- compatible: must be "mti,mips14KEc"
+1
Documentation/devicetree/bindings/mips/ralink.txt
···
 	ralink,rt5350-soc
 	ralink,mt7620a-soc
 	ralink,mt7620n-soc
+	ralink,mt7628a-soc
+40
Documentation/devicetree/bindings/phy/phy-lantiq-rcu-usb2.txt
Lantiq XWAY SoC RCU USB 1.1/2.0 PHY binding
===========================================

This binding describes the USB PHY hardware provided by the RCU module on the
Lantiq XWAY SoCs.

This node has to be a sub node of the Lantiq RCU block.

-------------------------------------------------------------------------------
Required properties (controller (parent) node):
- compatible	: Should be one of
			"lantiq,ase-usb2-phy"
			"lantiq,danube-usb2-phy"
			"lantiq,xrx100-usb2-phy"
			"lantiq,xrx200-usb2-phy"
			"lantiq,xrx300-usb2-phy"
- reg		: Defines the following sets of registers in the parent
		  syscon device
			- Offset of the USB PHY configuration register
			- Offset of the USB Analog configuration
			  register (only for xrx200 and xrx300)
- clocks	: References to the (PMU) "phy" clk gate.
- clock-names	: Must be "phy"
- resets	: References to the RCU USB configuration reset bits.
- reset-names	: Must be one of the following:
			"phy" (optional)
			"ctrl" (shared)

-------------------------------------------------------------------------------
Example for the USB PHYs on an xRX200 SoC:
	usb_phy0: usb2-phy@18 {
		compatible = "lantiq,xrx200-usb2-phy";
		reg = <0x18 4>, <0x38 4>;

		clocks = <&pmu PMU_GATE_USB0_PHY>;
		clock-names = "phy";
		resets = <&reset1 4 4>, <&reset0 4 4>;
		reset-names = "phy", "ctrl";
		#phy-cells = <0>;
	};
+30
Documentation/devicetree/bindings/reset/lantiq,reset.txt
Lantiq XWAY SoC RCU reset controller binding
============================================

This binding describes a reset-controller found on the RCU module on Lantiq
XWAY SoCs.

This node has to be a sub node of the Lantiq RCU block.

-------------------------------------------------------------------------------
Required properties:
- compatible	: Should be one of
			"lantiq,danube-reset"
			"lantiq,xrx200-reset"
- reg		: Defines the following sets of registers in the parent
		  syscon device
			- Offset of the reset set register
			- Offset of the reset status register
- #reset-cells	: Specifies the number of cells needed to encode the
		  reset line, should be 2.
		  The first cell takes the reset set bit and the
		  second cell takes the status bit.

-------------------------------------------------------------------------------
Example for the reset-controllers on the xRX200 SoCs:
	reset0: reset-controller@10 {
		compatible = "lantiq,xrx200-reset";
		reg = <0x10 0x04>, <0x14 0x04>;

		#reset-cells = <2>;
	};
+1
Documentation/devicetree/bindings/vendor-prefixes.txt
···
 via	VIA Technologies, Inc.
 virtio	Virtual I/O Device Specification, developed by the OASIS consortium
 vivante	Vivante Corporation
+vocore	VoCore Studio
 voipac	Voipac Technologies s.r.o.
 wd	Western Digital Corp.
 wetek	WeTek Electronics, limited.
+24
Documentation/devicetree/bindings/watchdog/lantiq-wdt.txt
Lantiq WDT watchdog binding
===========================

This describes the binding of the Lantiq watchdog driver.

-------------------------------------------------------------------------------
Required properties:
- compatible	: Should be one of
			"lantiq,wdt"
			"lantiq,xrx100-wdt"
			"lantiq,xrx200-wdt", "lantiq,xrx100-wdt"
			"lantiq,falcon-wdt"
- reg		: Address of the watchdog block
- lantiq,rcu	: A phandle to the RCU syscon (required for
		  "lantiq,falcon-wdt" and "lantiq,xrx100-wdt")

-------------------------------------------------------------------------------
Example for the watchdog on the xRX200 SoCs:
	watchdog@803f0 {
		compatible = "lantiq,xrx200-wdt", "lantiq,xrx100-wdt";
		reg = <0x803f0 0x10>;

		lantiq,rcu = <&rcu0>;
	};
+21
MAINTAINERS
···
 L:	linux-mips@linux-mips.org
 S:	Maintained
 F:	arch/mips/lantiq
+F:	drivers/soc/lantiq

 LAPB module
 L:	linux-x25@vger.kernel.org
···
 L:	linux-mips@linux-mips.org
 S:	Supported
 F:	arch/mips/generic/
+F:	arch/mips/tools/generic-board-config.sh

 MIPS/LOONGSON1 ARCHITECTURE
 M:	Keguang Zhang <keguang.zhang@gmail.com>
···
 F:	arch/mips/include/asm/mach-loongson32/
 F:	drivers/*/*loongson1*
 F:	drivers/*/*/*loongson1*
+
+MIPS RINT INSTRUCTION EMULATION
+M:	Aleksandar Markovic <aleksandar.markovic@imgtec.com>
+L:	linux-mips@linux-mips.org
+S:	Supported
+F:	arch/mips/math-emu/sp_rint.c
+F:	arch/mips/math-emu/dp_rint.c

 MIROSOUND PCM20 FM RADIO RECEIVER DRIVER
 M:	Hans Verkuil <hverkuil@xs4all.nl>
···
 F:	drivers/regulator/twl-regulator.c
 F:	drivers/regulator/twl6030-regulator.c
 F:	include/linux/i2c-omap.h
+
+ONION OMEGA2+ BOARD
+M:	Harvey Hunt <harveyhuntnexus@gmail.com>
+L:	linux-mips@linux-mips.org
+S:	Maintained
+F:	arch/mips/boot/dts/ralink/omega2p.dts

 OMFS FILESYSTEM
 M:	Bob Copeland <me@bobcopeland.com>
···
 L:	netdev@vger.kernel.org
 S:	Maintained
 F:	drivers/net/vmxnet3/
+
+VOCORE VOCORE2 BOARD
+M:	Harvey Hunt <harveyhuntnexus@gmail.com>
+L:	linux-mips@linux-mips.org
+S:	Maintained
+F:	arch/mips/boot/dts/ralink/vocore2.dts

 VOLTAGE AND CURRENT REGULATOR FRAMEWORK
 M:	Liam Girdwood <lgirdwood@gmail.com>
+5 -16
arch/mips/Kconfig
···
 	  NEC VR5500 and VR5500A series processors implement 64-bit MIPS IV
 	  instruction set.

-config CPU_R6000
-	bool "R6000"
-	depends on SYS_HAS_CPU_R6000
-	select CPU_SUPPORTS_32BIT_KERNEL
-	help
-	  MIPS Technologies R6000 and R6000A series processors.  Note these
-	  processors are extremely rare and the support for them is incomplete.
-
 config CPU_NEVADA
 	bool "RM52xx"
 	depends on SYS_HAS_CPU_NEVADA
···
 config SYS_HAS_CPU_R5500
 	bool

-config SYS_HAS_CPU_R6000
-	bool
-
 config SYS_HAS_CPU_NEVADA
 	bool
···

 config PAGE_SIZE_64KB
 	bool "64kB"
-	depends on !CPU_R3000 && !CPU_TX39XX && !CPU_R6000
+	depends on !CPU_R3000 && !CPU_TX39XX
 	help
 	  Using 64kB page size will result in higher performance kernel at
 	  the price of higher memory consumption.  This option is available on
···

 config CPU_GENERIC_DUMP_TLB
 	bool
-	default y if !(CPU_R3000 || CPU_R6000 || CPU_R8000 || CPU_TX39XX)
+	default y if !(CPU_R3000 || CPU_R8000 || CPU_TX39XX)

 config CPU_R4K_FPU
 	bool
-	default y if !(CPU_R3000 || CPU_R6000 || CPU_TX39XX || CPU_CAVIUM_OCTEON)
+	default y if !(CPU_R3000 || CPU_TX39XX)

 config CPU_R4K_CACHE_TLB
 	bool
···

 config MIPS_MT_SMP
 	bool "MIPS MT SMP support (1 TC on each available VPE)"
+	default y
 	depends on SYS_SUPPORTS_MULTITHREADING && !CPU_MIPSR6 && !CPU_MICROMIPS
 	select CPU_MIPSR2_IRQ_VI
 	select CPU_MIPSR2_IRQ_EI
···
 	bool "MIPS Coherent Processing System support"
 	depends on SYS_SUPPORTS_MIPS_CPS
 	select MIPS_CM
-	select MIPS_CPC
 	select MIPS_CPS_PM if HOTPLUG_CPU
 	select SMP
 	select SYNC_R4K if (CEVT_R4K || CSRC_R4K)
···

 config MIPS_CPS_PM
 	depends on MIPS_CPS
-	select MIPS_CPC
 	bool

 config MIPS_CM
 	bool
+	select MIPS_CPC

 config MIPS_CPC
 	bool
+27 -4
arch/mips/Makefile
···
 #
 cflags-$(CONFIG_CPU_R3000)	+= -march=r3000
 cflags-$(CONFIG_CPU_TX39XX)	+= -march=r3900
-cflags-$(CONFIG_CPU_R6000)	+= -march=r6000 -Wa,--trap
 cflags-$(CONFIG_CPU_R4300)	+= -march=r4300 -Wa,--trap
 cflags-$(CONFIG_CPU_VR41XX)	+= -march=r4100 -Wa,--trap
 cflags-$(CONFIG_CPU_R4X00)	+= -march=r4600 -Wa,--trap
···

 bootvars-y	= VMLINUX_LOAD_ADDRESS=$(load-y) \
 		  VMLINUX_ENTRY_ADDRESS=$(entry-y) \
-		  PLATFORM="$(platform-y)"
+		  PLATFORM="$(platform-y)" \
+		  ITS_INPUTS="$(its-y)"
 ifdef CONFIG_32BIT
 bootvars-y	+= ADDR_BITS=32
 endif
 ifdef CONFIG_64BIT
 bootvars-y	+= ADDR_BITS=64
 endif
+
+# This is required to get dwarf unwinding tables into .debug_frame
+# instead of .eh_frame so we don't discard them.
+KBUILD_CFLAGS += -fno-asynchronous-unwind-tables

 LDFLAGS			+= -m $(ld-emul)
···
 .PHONY: $(generic_defconfigs)
 $(generic_defconfigs):
 	$(Q)$(CONFIG_SHELL) $(srctree)/scripts/kconfig/merge_config.sh \
-		-m -O $(objtree) $(srctree)/arch/$(ARCH)/configs/generic_defconfig $^ \
-		$(foreach board,$(BOARDS),$(generic_config_dir)/board-$(board).config)
+		-m -O $(objtree) $(srctree)/arch/$(ARCH)/configs/generic_defconfig $^ | \
+		grep -Ev '^#'
+	$(Q)cp $(KCONFIG_CONFIG) $(objtree)/.config.$@
+	$(Q)$(MAKE) -f $(srctree)/Makefile olddefconfig \
+		KCONFIG_CONFIG=$(objtree)/.config.$@ >/dev/null
+	$(Q)$(CONFIG_SHELL) $(srctree)/arch/$(ARCH)/tools/generic-board-config.sh \
+		$(srctree) $(objtree) $(objtree)/.config.$@ $(KCONFIG_CONFIG) \
+		"$(origin BOARDS)" $(BOARDS)
 	$(Q)$(MAKE) -f $(srctree)/Makefile olddefconfig

 #
 # Prevent generic merge_config rules attempting to merge single fragments
 #
 $(generic_config_dir)/%.config: ;
+
+#
+# Prevent direct use of generic_defconfig, which is intended to be used as the
+# basis of the various ISA-specific targets generated above.
+#
+.PHONY: generic_defconfig
+generic_defconfig:
+	$(Q)echo "generic_defconfig is not intended for direct use, but should instead be"
+	$(Q)echo "used via an ISA-specific target from the following list:"
+	$(Q)echo
+	$(Q)for cfg in $(generic_defconfigs); do echo "  $${cfg}"; done
+	$(Q)echo
+	$(Q)false

 #
 # Legacy defconfig compatibility - these targets used to be real defconfigs but
+36 -28
arch/mips/alchemy/devboards/db1200.c
···

 /* SD carddetects:  they're supposed to be edge-triggered, but ack
  * doesn't seem to work (CPLD Rev 2).  Instead, the screaming one
- * is disabled and its counterpart enabled.  The 500ms timeout is
- * because the carddetect isn't debounced in hardware.
+ * is disabled and its counterpart enabled.  The 200ms timeout is
+ * because the carddetect usually triggers twice, after debounce.
  */
 static irqreturn_t db1200_mmc_cd(int irq, void *ptr)
 {
-	void(*mmc_cd)(struct mmc_host *, unsigned long);
+	disable_irq_nosync(irq);
+	return IRQ_WAKE_THREAD;
+}

-	if (irq == DB1200_SD0_INSERT_INT) {
-		disable_irq_nosync(DB1200_SD0_INSERT_INT);
-		enable_irq(DB1200_SD0_EJECT_INT);
-	} else {
-		disable_irq_nosync(DB1200_SD0_EJECT_INT);
-		enable_irq(DB1200_SD0_INSERT_INT);
-	}
+static irqreturn_t db1200_mmc_cdfn(int irq, void *ptr)
+{
+	void (*mmc_cd)(struct mmc_host *, unsigned long);

 	/* link against CONFIG_MMC=m */
 	mmc_cd = symbol_get(mmc_detect_change);
 	if (mmc_cd) {
-		mmc_cd(ptr, msecs_to_jiffies(500));
+		mmc_cd(ptr, msecs_to_jiffies(200));
 		symbol_put(mmc_detect_change);
 	}
+
+	msleep(100);	/* debounce */
+	if (irq == DB1200_SD0_INSERT_INT)
+		enable_irq(DB1200_SD0_EJECT_INT);
+	else
+		enable_irq(DB1200_SD0_INSERT_INT);

 	return IRQ_HANDLED;
 }
···
 	int ret;

 	if (en) {
-		ret = request_irq(DB1200_SD0_INSERT_INT, db1200_mmc_cd,
-				  0, "sd_insert", mmc_host);
+		ret = request_threaded_irq(DB1200_SD0_INSERT_INT, db1200_mmc_cd,
+				db1200_mmc_cdfn, 0, "sd_insert", mmc_host);
 		if (ret)
 			goto out;

-		ret = request_irq(DB1200_SD0_EJECT_INT, db1200_mmc_cd,
-				  0, "sd_eject", mmc_host);
+		ret = request_threaded_irq(DB1200_SD0_EJECT_INT, db1200_mmc_cd,
+				db1200_mmc_cdfn, 0, "sd_eject", mmc_host);
 		if (ret) {
 			free_irq(DB1200_SD0_INSERT_INT, mmc_host);
 			goto out;
···

 static irqreturn_t pb1200_mmc1_cd(int irq, void *ptr)
 {
-	void(*mmc_cd)(struct mmc_host *, unsigned long);
+	disable_irq_nosync(irq);
+	return IRQ_WAKE_THREAD;
+}

-	if (irq == PB1200_SD1_INSERT_INT) {
-		disable_irq_nosync(PB1200_SD1_INSERT_INT);
-		enable_irq(PB1200_SD1_EJECT_INT);
-	} else {
-		disable_irq_nosync(PB1200_SD1_EJECT_INT);
-		enable_irq(PB1200_SD1_INSERT_INT);
-	}
+static irqreturn_t pb1200_mmc1_cdfn(int irq, void *ptr)
+{
+	void (*mmc_cd)(struct mmc_host *, unsigned long);

 	/* link against CONFIG_MMC=m */
 	mmc_cd = symbol_get(mmc_detect_change);
 	if (mmc_cd) {
-		mmc_cd(ptr, msecs_to_jiffies(500));
+		mmc_cd(ptr, msecs_to_jiffies(200));
 		symbol_put(mmc_detect_change);
 	}
+
+	msleep(100);	/* debounce */
+	if (irq == PB1200_SD1_INSERT_INT)
+		enable_irq(PB1200_SD1_EJECT_INT);
+	else
+		enable_irq(PB1200_SD1_INSERT_INT);

 	return IRQ_HANDLED;
 }
···
 	int ret;

 	if (en) {
-		ret = request_irq(PB1200_SD1_INSERT_INT, pb1200_mmc1_cd, 0,
-				  "sd1_insert", mmc_host);
+		ret = request_threaded_irq(PB1200_SD1_INSERT_INT, pb1200_mmc1_cd,
+				pb1200_mmc1_cdfn, 0, "sd1_insert", mmc_host);
 		if (ret)
 			goto out;

-		ret = request_irq(PB1200_SD1_EJECT_INT, pb1200_mmc1_cd, 0,
-				  "sd1_eject", mmc_host);
+		ret = request_threaded_irq(PB1200_SD1_EJECT_INT, pb1200_mmc1_cd,
+				pb1200_mmc1_cdfn, 0, "sd1_eject", mmc_host);
 		if (ret) {
 			free_irq(PB1200_SD1_INSERT_INT, mmc_host);
 			goto out;
+17 -14
arch/mips/alchemy/devboards/db1300.c
···

 static irqreturn_t db1300_mmc_cd(int irq, void *ptr)
 {
-	void(*mmc_cd)(struct mmc_host *, unsigned long);
+	disable_irq_nosync(irq);
+	return IRQ_WAKE_THREAD;
+}

-	/* disable the one currently screaming. No other way to shut it up */
-	if (irq == DB1300_SD1_INSERT_INT) {
-		disable_irq_nosync(DB1300_SD1_INSERT_INT);
-		enable_irq(DB1300_SD1_EJECT_INT);
-	} else {
-		disable_irq_nosync(DB1300_SD1_EJECT_INT);
-		enable_irq(DB1300_SD1_INSERT_INT);
-	}
+static irqreturn_t db1300_mmc_cdfn(int irq, void *ptr)
+{
+	void (*mmc_cd)(struct mmc_host *, unsigned long);

 	/* link against CONFIG_MMC=m. We can only be called once MMC core has
 	 * initialized the controller, so symbol_get() should always succeed.
 	 */
 	mmc_cd = symbol_get(mmc_detect_change);
-	mmc_cd(ptr, msecs_to_jiffies(500));
+	mmc_cd(ptr, msecs_to_jiffies(200));
 	symbol_put(mmc_detect_change);
+
+	msleep(100);	/* debounce */
+	if (irq == DB1300_SD1_INSERT_INT)
+		enable_irq(DB1300_SD1_EJECT_INT);
+	else
+		enable_irq(DB1300_SD1_INSERT_INT);

 	return IRQ_HANDLED;
 }
···
 	int ret;

 	if (en) {
-		ret = request_irq(DB1300_SD1_INSERT_INT, db1300_mmc_cd, 0,
-				  "sd_insert", mmc_host);
+		ret = request_threaded_irq(DB1300_SD1_INSERT_INT, db1300_mmc_cd,
+				db1300_mmc_cdfn, 0, "sd_insert", mmc_host);
 		if (ret)
 			goto out;

-		ret = request_irq(DB1300_SD1_EJECT_INT, db1300_mmc_cd, 0,
-				  "sd_eject", mmc_host);
+		ret = request_threaded_irq(DB1300_SD1_EJECT_INT, db1300_mmc_cd,
+				db1300_mmc_cdfn, 0, "sd_eject", mmc_host);
 		if (ret) {
 			free_irq(DB1300_SD1_INSERT_INT, mmc_host);
 			goto out;
+2
arch/mips/alchemy/devboards/db1xxx.c
···
  * Alchemy DB/PB1xxx board support.
  */

+#include <asm/prom.h>
 #include <asm/mach-au1x00/au1000.h>
 #include <asm/mach-db1x00/bcsr.h>
···

 static int __init db1xxx_dev_init(void)
 {
+	mips_set_machine_name(board_type_str());
 	switch (BCSR_WHOAMI_BOARD(bcsr_read(BCSR_WHOAMI))) {
 	case BCSR_WHOAMI_DB1000:
 	case BCSR_WHOAMI_DB1500:
+3
arch/mips/ar7/clock.c
···

 unsigned long clk_get_rate(struct clk *clk)
 {
+	if (!clk)
+		return 0;
+
 	return clk->rate;
 }
 EXPORT_SYMBOL(clk_get_rate);
+4 -5
arch/mips/ath79/clock.c
···
 {
 	struct clk *ref_clk;
 	void __iomem *pll_base;
-	const char *dnfn = of_node_full_name(np);

 	ref_clk = of_clk_get(np, 0);
 	if (IS_ERR(ref_clk)) {
-		pr_err("%s: of_clk_get failed\n", dnfn);
+		pr_err("%pOF: of_clk_get failed\n", np);
 		goto err;
 	}

 	pll_base = of_iomap(np, 0);
 	if (!pll_base) {
-		pr_err("%s: can't map pll registers\n", dnfn);
+		pr_err("%pOF: can't map pll registers\n", np);
 		goto err_clk;
 	}
···
 	else if (of_device_is_compatible(np, "qca,ar9330-pll"))
 		ar9330_clk_init(ref_clk, pll_base);
 	else {
-		pr_err("%s: could not find any appropriate clk_init()\n", dnfn);
+		pr_err("%pOF: could not find any appropriate clk_init()\n", np);
 		goto err_iounmap;
 	}

 	if (of_clk_add_provider(np, of_clk_src_onecell_get, &clk_data)) {
-		pr_err("%s: could not register clk provider\n", dnfn);
+		pr_err("%pOF: could not register clk provider\n", np);
 		goto err_iounmap;
 	}

+3
arch/mips/bcm63xx/clk.c
···

 unsigned long clk_get_rate(struct clk *clk)
 {
+	if (!clk)
+		return 0;
+
 	return clk->rate;
 }

+11 -5
arch/mips/boot/Makefile
···
 itb_addr_cells = 2
 endif

+quiet_cmd_its_cat = CAT     $@
+      cmd_its_cat = cat $^ >$@
+
+$(obj)/vmlinux.its.S: $(addprefix $(srctree)/arch/mips/$(PLATFORM)/,$(ITS_INPUTS))
+	$(call if_changed,its_cat)
+
 quiet_cmd_cpp_its_S = ITS     $@
       cmd_cpp_its_S = $(CPP) $(cpp_flags) -P -C -o $@ $< \
 		        -DKERNEL_NAME="\"Linux $(KERNELRELEASE)\"" \
···
 		        -DADDR_BITS=$(ADDR_BITS) \
 		        -DADDR_CELLS=$(itb_addr_cells)

-$(obj)/vmlinux.its: $(srctree)/arch/mips/$(PLATFORM)/vmlinux.its.S $(VMLINUX) FORCE
+$(obj)/vmlinux.its: $(obj)/vmlinux.its.S $(VMLINUX) FORCE
 	$(call if_changed_dep,cpp_its_S,none,vmlinux.bin)

-$(obj)/vmlinux.gz.its: $(srctree)/arch/mips/$(PLATFORM)/vmlinux.its.S $(VMLINUX) FORCE
+$(obj)/vmlinux.gz.its: $(obj)/vmlinux.its.S $(VMLINUX) FORCE
 	$(call if_changed_dep,cpp_its_S,gzip,vmlinux.bin.gz)

-$(obj)/vmlinux.bz2.its: $(srctree)/arch/mips/$(PLATFORM)/vmlinux.its.S $(VMLINUX) FORCE
+$(obj)/vmlinux.bz2.its: $(obj)/vmlinux.its.S $(VMLINUX) FORCE
 	$(call if_changed_dep,cpp_its_S,bzip2,vmlinux.bin.bz2)

-$(obj)/vmlinux.lzma.its: $(srctree)/arch/mips/$(PLATFORM)/vmlinux.its.S $(VMLINUX) FORCE
+$(obj)/vmlinux.lzma.its: $(obj)/vmlinux.its.S $(VMLINUX) FORCE
 	$(call if_changed_dep,cpp_its_S,lzma,vmlinux.bin.lzma)

-$(obj)/vmlinux.lzo.its: $(srctree)/arch/mips/$(PLATFORM)/vmlinux.its.S $(VMLINUX) FORCE
+$(obj)/vmlinux.lzo.its: $(obj)/vmlinux.its.S $(VMLINUX) FORCE
 	$(call if_changed_dep,cpp_its_S,lzo,vmlinux.bin.lzo)

 quiet_cmd_itb-image = ITB     $@
+1
arch/mips/boot/dts/Makefile
···
 dts-dirs	+= lantiq
 dts-dirs	+= mti
 dts-dirs	+= netlogic
+dts-dirs	+= ni
 dts-dirs	+= pic32
 dts-dirs	+= qca
 dts-dirs	+= ralink
+37
arch/mips/boot/dts/ingenic/ci20.dts
···
 /dts-v1/;

 #include "jz4780.dtsi"
+#include <dt-bindings/gpio/gpio.h>

 / {
 	compatible = "img,ci20", "ingenic,jz4780";
···
 		device_type = "memory";
 		reg = <0x0 0x10000000
 		       0x30000000 0x30000000>;
+	};
+
+	eth0_power: fixedregulator@0 {
+		compatible = "regulator-fixed";
+		regulator-name = "eth0_power";
+		gpio = <&gpb 25 GPIO_ACTIVE_LOW>;
+		enable-active-high;
 	};
 };
···
 			};
 		};
 	};
+
+	dm9000@6 {
+		compatible = "davicom,dm9000";
+		davicom,no-eeprom;
+
+		pinctrl-names = "default";
+		pinctrl-0 = <&pins_nemc_cs6>;
+
+		reg = <6 0 1   /* addr */
+		       6 2 1>; /* data */
+
+		ingenic,nemc-tAS = <15>;
+		ingenic,nemc-tAH = <10>;
+		ingenic,nemc-tBP = <20>;
+		ingenic,nemc-tAW = <50>;
+		ingenic,nemc-tSTRV = <100>;
+
+		reset-gpios = <&gpf 12 GPIO_ACTIVE_HIGH>;
+		vcc-supply = <&eth0_power>;
+
+		interrupt-parent = <&gpe>;
+		interrupts = <19 4>;
+	};
 };

 &bch {
···
 	pins_nemc_cs1: nemc-cs1 {
 		function = "nemc-cs1";
 		groups = "nemc-cs1";
+		bias-disable;
+	};
+
+	pins_nemc_cs6: nemc-cs6 {
+		function = "nemc-cs6";
+		groups = "nemc-cs6";
 		bias-disable;
 	};
 };
+11
arch/mips/boot/dts/ingenic/jz4780.dtsi
···
 		#clock-cells = <1>;
 	};

+	rtc_dev: rtc@10003000 {
+		compatible = "ingenic,jz4780-rtc";
+		reg = <0x10003000 0x4c>;
+
+		interrupt-parent = <&intc>;
+		interrupts = <32>;
+
+		clocks = <&cgu JZ4780_CLK_RTCLK>;
+		clock-names = "rtc";
+	};
+
 	pinctrl: pin-controller@10010000 {
 		compatible = "ingenic,jz4780-pinctrl";
 		reg = <0x10010000 0x600>;
+100
arch/mips/boot/dts/ni/169445.dts
···
+/dts-v1/;
+
+/ {
+	#address-cells = <1>;
+	#size-cells = <1>;
+	compatible = "ni,169445";
+
+	cpus {
+		#address-cells = <1>;
+		#size-cells = <0>;
+		cpu@0 {
+			device_type = "cpu";
+			compatible = "mti,mips14KEc";
+			clocks = <&baseclk>;
+			reg = <0>;
+		};
+	};
+
+	memory@0 {
+		device_type = "memory";
+		reg = <0x0 0x10000000>;
+	};
+
+	baseclk: baseclock {
+		compatible = "fixed-clock";
+		#clock-cells = <0>;
+		clock-frequency = <50000000>;
+	};
+
+	cpu_intc: interrupt-controller {
+		#address-cells = <0>;
+		compatible = "mti,cpu-interrupt-controller";
+		interrupt-controller;
+		#interrupt-cells = <1>;
+	};
+
+	ahb@1f300000 {
+		compatible = "simple-bus";
+		#address-cells = <1>;
+		#size-cells = <1>;
+		ranges = <0x0 0x1f300000 0x80FFF>;
+
+		gpio1: gpio@10 {
+			compatible = "ni,169445-nand-gpio";
+			reg = <0x10 0x4>;
+			reg-names = "dat";
+			gpio-controller;
+			#gpio-cells = <2>;
+		};
+
+		gpio2: gpio@14 {
+			compatible = "ni,169445-nand-gpio";
+			reg = <0x14 0x4>;
+			reg-names = "dat";
+			gpio-controller;
+			#gpio-cells = <2>;
+			no-output;
+		};
+
+		nand@0 {
+			compatible = "gpio-control-nand";
+			nand-on-flash-bbt;
+			nand-ecc-mode = "soft_bch";
+			nand-ecc-step-size = <512>;
+			nand-ecc-strength = <4>;
+			reg = <0x0 4>;
+			gpios = <&gpio2 0 0>,	/* rdy */
+				<&gpio1 1 0>,	/* nce */
+				<&gpio1 2 0>,	/* ale */
+				<&gpio1 3 0>,	/* cle */
+				<&gpio1 4 0>;	/* nwp */
+		};
+
+		serial@80000 {
+			compatible = "ns16550a";
+			reg = <0x80000 0x1000>;
+			interrupt-parent = <&cpu_intc>;
+			interrupts = <6>;
+			clocks = <&baseclk>;
+			reg-shift = <0>;
+		};
+
+		ethernet@40000 {
+			compatible = "snps,dwmac-4.10a";
+			interrupt-parent = <&cpu_intc>;
+			interrupts = <5>;
+			interrupt-names = "macirq";
+			reg = <0x40000 0x2000>;
+			clock-names = "stmmaceth", "pclk";
+			clocks = <&baseclk>, <&baseclk>;
+
+			phy-mode = "rgmii";
+
+			fixed-link {
+				speed = <1000>;
+				full-duplex;
+			};
+		};
+	};
+};
+7
arch/mips/boot/dts/ni/Makefile
···
+dtb-$(CONFIG_FIT_IMAGE_FDT_NI169445)	+= 169445.dtb
+
+# Force kbuild to make empty built-in.o if necessary
+obj-		+= dummy.o
+
+always		:= $(dtb-y)
+clean-files	:= *.dtb *.dtb.S
+2
arch/mips/boot/dts/ralink/Makefile
···
 dtb-$(CONFIG_DTB_RT305X_EVAL)	+= rt3052_eval.dtb
 dtb-$(CONFIG_DTB_RT3883_EVAL)	+= rt3883_eval.dtb
 dtb-$(CONFIG_DTB_MT7620A_EVAL)	+= mt7620a_eval.dtb
+dtb-$(CONFIG_DTB_OMEGA2P)	+= omega2p.dtb
+dtb-$(CONFIG_DTB_VOCORE2)	+= vocore2.dtb

 obj-y				+= $(patsubst %.dtb, %.dtb.o, $(dtb-y))
+126
arch/mips/boot/dts/ralink/mt7628a.dtsi
···
+/ {
+	#address-cells = <1>;
+	#size-cells = <1>;
+	compatible = "ralink,mt7628a-soc";
+
+	cpus {
+		#address-cells = <1>;
+		#size-cells = <0>;
+
+		cpu@0 {
+			compatible = "mti,mips24KEc";
+			device_type = "cpu";
+			reg = <0>;
+		};
+	};
+
+	resetc: reset-controller {
+		compatible = "ralink,rt2880-reset";
+		#reset-cells = <1>;
+	};
+
+	cpuintc: interrupt-controller {
+		#address-cells = <0>;
+		#interrupt-cells = <1>;
+		interrupt-controller;
+		compatible = "mti,cpu-interrupt-controller";
+	};
+
+	palmbus@10000000 {
+		compatible = "palmbus";
+		reg = <0x10000000 0x200000>;
+		ranges = <0x0 0x10000000 0x1FFFFF>;
+
+		#address-cells = <1>;
+		#size-cells = <1>;
+
+		sysc: system-controller@0 {
+			compatible = "ralink,mt7620a-sysc", "syscon";
+			reg = <0x0 0x100>;
+		};
+
+		intc: interrupt-controller@200 {
+			compatible = "ralink,rt2880-intc";
+			reg = <0x200 0x100>;
+
+			interrupt-controller;
+			#interrupt-cells = <1>;
+
+			resets = <&resetc 9>;
+			reset-names = "intc";
+
+			interrupt-parent = <&cpuintc>;
+			interrupts = <2>;
+
+			ralink,intc-registers = <0x9c 0xa0
+						 0x6c 0xa4
+						 0x80 0x78>;
+		};
+
+		memory-controller@300 {
+			compatible = "ralink,mt7620a-memc";
+			reg = <0x300 0x100>;
+		};
+
+		uart0: uartlite@c00 {
+			compatible = "ns16550a";
+			reg = <0xc00 0x100>;
+
+			resets = <&resetc 12>;
+			reset-names = "uart0";
+
+			interrupt-parent = <&intc>;
+			interrupts = <20>;
+
+			reg-shift = <2>;
+		};
+
+		uart1: uart1@d00 {
+			compatible = "ns16550a";
+			reg = <0xd00 0x100>;
+
+			resets = <&resetc 19>;
+			reset-names = "uart1";
+
+			interrupt-parent = <&intc>;
+			interrupts = <21>;
+
+			reg-shift = <2>;
+		};
+
+		uart2: uart2@e00 {
+			compatible = "ns16550a";
+			reg = <0xe00 0x100>;
+
+			resets = <&resetc 20>;
+			reset-names = "uart2";
+
+			interrupt-parent = <&intc>;
+			interrupts = <22>;
+
+			reg-shift = <2>;
+		};
+	};
+
+	usb_phy: usb-phy@10120000 {
+		compatible = "mediatek,mt7628-usbphy";
+		reg = <0x10120000 0x1000>;
+
+		#phy-cells = <0>;
+
+		ralink,sysctl = <&sysc>;
+		resets = <&resetc 22 &resetc 25>;
+		reset-names = "host", "device";
+	};
+
+	ehci@101c0000 {
+		compatible = "generic-ehci";
+		reg = <0x101c0000 0x1000>;
+
+		phys = <&usb_phy>;
+		phy-names = "usb";
+
+		interrupt-parent = <&intc>;
+		interrupts = <18>;
+	};
+};
+18
arch/mips/boot/dts/ralink/omega2p.dts
···
+/dts-v1/;
+
+/include/ "mt7628a.dtsi"
+
+/ {
+	compatible = "onion,omega2+", "ralink,mt7688a-soc", "ralink,mt7628a-soc";
+	model = "Onion Omega2+";
+
+	memory@0 {
+		device_type = "memory";
+		reg = <0x0 0x8000000>;
+	};
+
+	chosen {
+		bootargs = "console=ttyS0,115200";
+		stdout-path = &uart0;
+	};
+};
+18
arch/mips/boot/dts/ralink/vocore2.dts
···
+/dts-v1/;
+
+#include "mt7628a.dtsi"
+
+/ {
+	compatible = "vocore,vocore2", "ralink,mt7628a-soc";
+	model = "VoCore2";
+
+	memory@0 {
+		device_type = "memory";
+		reg = <0x0 0x8000000>;
+	};
+
+	chosen {
+		bootargs = "console=ttyS2,115200";
+		stdout-path = &uart2;
+	};
+};
+1 -1
arch/mips/cavium-octeon/executive/Makefile
···
 	cvmx-helper-loop.o cvmx-helper-spi.o cvmx-helper-util.o \
 	cvmx-interrupt-decodes.o cvmx-interrupt-rsl.o

-obj-y += cvmx-helper-errata.o cvmx-helper-jtag.o
+obj-y += cvmx-helper-errata.o cvmx-helper-jtag.o cvmx-boot-vector.o
+167
arch/mips/cavium-octeon/executive/cvmx-boot-vector.c
···
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 2004-2017 Cavium, Inc.
+ */
+
+
+/*
+  We install this program at the bootvector:
+------------------------------------
+	.set noreorder
+	.set nomacro
+	.set noat
+reset_vector:
+	dmtc0	$k0, $31, 0	# Save $k0 to DESAVE
+	dmtc0	$k1, $31, 3	# Save $k1 to KScratch2
+
+	mfc0	$k0, $12, 0	# Status
+	mfc0	$k1, $15, 1	# Ebase
+
+	ori	$k0, 0x84	# Enable 64-bit addressing, set
+				# ERL (should already be set)
+	andi	$k1, 0x3ff	# mask out core ID
+
+	mtc0	$k0, $12, 0	# Status
+	sll	$k1, 5
+
+	lui	$k0, 0xbfc0
+	cache	17, 0($0)	# Core-14345, clear L1 Dcache virtual
+				# tags if the core hit an NMI
+
+	ld	$k0, 0x78($k0)	# k0 <- (bfc00078) pointer to the reset vector
+	synci	0($0)		# Invalidate ICache to get coherent
+				# view of target code.
+
+	daddu	$k0, $k0, $k1
+	nop
+
+	ld	$k0, 0($k0)	# k0 <- core specific target address
+	dmfc0	$k1, $31, 3	# Restore $k1 from KScratch2
+
+	beqz	$k0, wait_loop	# Spin in wait loop
+	nop
+
+	jr	$k0
+	nop
+
+	nop			# NOPs needed here to fill delay slots
+	nop			# on endian reversal of previous instructions
+
+wait_loop:
+	wait
+	nop
+
+	b	wait_loop
+	nop
+
+	nop
+	nop
+------------------------------------
+
+0000000000000000 <reset_vector>:
+   0:	40baf800	dmtc0	k0,c0_desave
+   4:	40bbf803	dmtc0	k1,c0_kscratch2
+
+   8:	401a6000	mfc0	k0,c0_status
+   c:	401b7801	mfc0	k1,c0_ebase
+
+  10:	375a0084	ori	k0,k0,0x84
+  14:	337b03ff	andi	k1,k1,0x3ff
+
+  18:	409a6000	mtc0	k0,c0_status
+  1c:	001bd940	sll	k1,k1,0x5
+
+  20:	3c1abfc0	lui	k0,0xbfc0
+  24:	bc110000	cache	0x11,0(zero)
+
+  28:	df5a0078	ld	k0,120(k0)
+  2c:	041f0000	synci	0(zero)
+
+  30:	035bd02d	daddu	k0,k0,k1
+  34:	00000000	nop
+
+  38:	df5a0000	ld	k0,0(k0)
+  3c:	403bf803	dmfc0	k1,c0_kscratch2
+
+  40:	13400005	beqz	k0,58 <wait_loop>
+  44:	00000000	nop
+
+  48:	03400008	jr	k0
+  4c:	00000000	nop
+
+  50:	00000000	nop
+  54:	00000000	nop
+
+0000000000000058 <wait_loop>:
+  58:	42000020	wait
+  5c:	00000000	nop
+
+  60:	1000fffd	b	58 <wait_loop>
+  64:	00000000	nop
+
+  68:	00000000	nop
+  6c:	00000000	nop
+
+*/
+
+#include <asm/octeon/cvmx-boot-vector.h>
+
+static unsigned long long _cvmx_bootvector_data[16] = {
+	0x40baf80040bbf803ull,	/* patch low order 8-bits if no KScratch */
+	0x401a6000401b7801ull,
+	0x375a0084337b03ffull,
+	0x409a6000001bd940ull,
+	0x3c1abfc0bc110000ull,
+	0xdf5a0078041f0000ull,
+	0x035bd02d00000000ull,
+	0xdf5a0000403bf803ull,	/* patch low order 8-bits if no KScratch */
+	0x1340000500000000ull,
+	0x0340000800000000ull,
+	0x0000000000000000ull,
+	0x4200002000000000ull,
+	0x1000fffd00000000ull,
+	0x0000000000000000ull,
+	OCTEON_BOOT_MOVEABLE_MAGIC1,
+	0 /* To be filled in with address of vector block */
+};
+
+/* 2^10 CPUs */
+#define VECTOR_TABLE_SIZE (1024 * sizeof(struct cvmx_boot_vector_element))
+
+static void cvmx_boot_vector_init(void *mem)
+{
+	uint64_t kseg0_mem;
+	int i;
+
+	memset(mem, 0, VECTOR_TABLE_SIZE);
+	kseg0_mem = cvmx_ptr_to_phys(mem) | 0x8000000000000000ull;
+
+	for (i = 0; i < 15; i++) {
+		uint64_t v = _cvmx_bootvector_data[i];
+
+		if (OCTEON_IS_OCTEON1PLUS() && (i == 0 || i == 7))
+			v &= 0xffffffff00000000ull; /* KScratch not available. */
+		cvmx_write_csr(CVMX_MIO_BOOT_LOC_ADR, i * 8);
+		cvmx_write_csr(CVMX_MIO_BOOT_LOC_DAT, v);
+	}
+	cvmx_write_csr(CVMX_MIO_BOOT_LOC_ADR, 15 * 8);
+	cvmx_write_csr(CVMX_MIO_BOOT_LOC_DAT, kseg0_mem);
+	cvmx_write_csr(CVMX_MIO_BOOT_LOC_CFGX(0), 0x81fc0000);
+}
+
+/**
+ * Get a pointer to the per-core table of reset vector pointers
+ *
+ */
+struct cvmx_boot_vector_element *cvmx_boot_vector_get(void)
+{
+	struct cvmx_boot_vector_element *ret;
+
+	ret = cvmx_bootmem_alloc_named_range_once(VECTOR_TABLE_SIZE, 0,
+		(1ull << 32) - 1, 8, "__boot_vector1__", cvmx_boot_vector_init);
+	return ret;
+}
+EXPORT_SYMBOL(cvmx_boot_vector_get);
+85
arch/mips/cavium-octeon/executive/cvmx-bootmem.c
···

 /* See header file for descriptions of functions */

+/**
+ * This macro returns the size of a member of a structure.
+ * Logically it is the same as "sizeof(s::field)" in C++, but
+ * C lacks the "::" operator.
+ */
+#define SIZEOF_FIELD(s, field) sizeof(((s *)NULL)->field)
+
+/**
+ * This macro returns a member of the
+ * cvmx_bootmem_named_block_desc_t structure. These members can't
+ * be directly addressed as they might be in memory not directly
+ * reachable. In the case where bootmem is compiled with
+ * LINUX_HOST, the structure itself might be located on a remote
+ * Octeon. The argument "field" is the member name of the
+ * cvmx_bootmem_named_block_desc_t to read. Regardless of the type
+ * of the field, the return type is always a uint64_t. The "addr"
+ * parameter is the physical address of the structure.
+ */
+#define CVMX_BOOTMEM_NAMED_GET_FIELD(addr, field)			\
+	__cvmx_bootmem_desc_get(addr,					\
+		offsetof(struct cvmx_bootmem_named_block_desc, field),	\
+		SIZEOF_FIELD(struct cvmx_bootmem_named_block_desc, field))
+
+/**
+ * This function is the implementation of the get macros defined
+ * for individual structure members. The arguments are generated
+ * by the macros in order to read only the needed memory.
+ *
+ * @param base   64bit physical address of the complete structure
+ * @param offset Offset from the beginning of the structure to the member being
+ *               accessed.
+ * @param size   Size of the structure member.
+ *
+ * @return Value of the structure member promoted into a uint64_t.
+ */
+static inline uint64_t __cvmx_bootmem_desc_get(uint64_t base, int offset,
+					       int size)
+{
+	base = (1ull << 63) | (base + offset);
+	switch (size) {
+	case 4:
+		return cvmx_read64_uint32(base);
+	case 8:
+		return cvmx_read64_uint64(base);
+	default:
+		return 0;
+	}
+}
+
 /*
  * Wrapper functions are provided for reading/writing the size and
  * next block values as these may not be directly addressible (in 32
···
 {
 	return cvmx_bootmem_alloc_range(size, alignment, 0, 0);
 }
+
+void *cvmx_bootmem_alloc_named_range_once(uint64_t size, uint64_t min_addr,
+					  uint64_t max_addr, uint64_t align,
+					  char *name,
+					  void (*init) (void *))
+{
+	int64_t addr;
+	void *ptr;
+	uint64_t named_block_desc_addr;
+
+	named_block_desc_addr = (uint64_t)
+		cvmx_bootmem_phy_named_block_find(name,
+			(uint32_t)CVMX_BOOTMEM_FLAG_NO_LOCKING);
+
+	if (named_block_desc_addr) {
+		addr = CVMX_BOOTMEM_NAMED_GET_FIELD(named_block_desc_addr,
+						    base_addr);
+		return cvmx_phys_to_ptr(addr);
+	}
+
+	addr = cvmx_bootmem_phy_named_block_alloc(size, min_addr, max_addr,
+						  align, name,
+						  (uint32_t)CVMX_BOOTMEM_FLAG_NO_LOCKING);
+
+	if (addr < 0)
+		return NULL;
+	ptr = cvmx_phys_to_ptr(addr);
+
+	if (init)
+		init(ptr);
+	else
+		memset(ptr, 0, size);
+
+	return ptr;
+}
+EXPORT_SYMBOL(cvmx_bootmem_alloc_named_range_once);

 void *cvmx_bootmem_alloc_named_range(uint64_t size, uint64_t min_addr,
 				     uint64_t max_addr, uint64_t align,
+9
arch/mips/cavium-octeon/octeon-irq.c
···
 }

 #endif	/* CONFIG_HOTPLUG_CPU */
+
+struct irq_domain *octeon_irq_get_block_domain(int node, uint8_t block)
+{
+	struct octeon_ciu3_info *ciu3_info;
+
+	ciu3_info = octeon_ciu3_info_per_node[node & CVMX_NODE_MASK];
+	return ciu3_info->domain[block];
+}
+EXPORT_SYMBOL(octeon_irq_get_block_domain);
+9 -5
arch/mips/cavium-octeon/smp.c
··· 205 205 * Firmware CPU startup hook 206 206 * 207 207 */ 208 - static void octeon_boot_secondary(int cpu, struct task_struct *idle) 208 + static int octeon_boot_secondary(int cpu, struct task_struct *idle) 209 209 { 210 210 int count; 211 211 ··· 223 223 udelay(1); 224 224 count--; 225 225 } 226 - if (count == 0) 226 + if (count == 0) { 227 227 pr_err("Secondary boot timeout\n"); 228 + return -ETIMEDOUT; 229 + } 230 + 231 + return 0; 228 232 } 229 233 230 234 /** ··· 412 408 413 409 #endif /* CONFIG_HOTPLUG_CPU */ 414 410 415 - struct plat_smp_ops octeon_smp_ops = { 411 + const struct plat_smp_ops octeon_smp_ops = { 416 412 .send_ipi_single = octeon_send_ipi_single, 417 413 .send_ipi_mask = octeon_send_ipi_mask, 418 414 .init_secondary = octeon_init_secondary, ··· 489 485 octeon_78xx_send_ipi_single(cpu, action); 490 486 } 491 487 492 - static struct plat_smp_ops octeon_78xx_smp_ops = { 488 + static const struct plat_smp_ops octeon_78xx_smp_ops = { 493 489 .send_ipi_single = octeon_78xx_send_ipi_single, 494 490 .send_ipi_mask = octeon_78xx_send_ipi_mask, 495 491 .init_secondary = octeon_init_secondary, ··· 505 501 506 502 void __init octeon_setup_smp(void) 507 503 { 508 - struct plat_smp_ops *ops; 504 + const struct plat_smp_ops *ops; 509 505 510 506 if (octeon_has_feature(OCTEON_FEATURE_CIU3)) 511 507 ops = &octeon_78xx_smp_ops;
+15 -10
arch/mips/configs/cavium_octeon_defconfig
···
 CONFIG_ATA=y
 CONFIG_SATA_AHCI=y
 CONFIG_SATA_AHCI_PLATFORM=y
-CONFIG_AHCI_OCTEON=y
 CONFIG_PATA_OCTEON_CF=y
-CONFIG_SATA_SIL=y
 CONFIG_NETDEVICES=y
-CONFIG_MII=y
 # CONFIG_NET_VENDOR_3COM is not set
 # CONFIG_NET_VENDOR_ADAPTEC is not set
 # CONFIG_NET_VENDOR_ALTEON is not set
···
 CONFIG_SPI_OCTEON=y
 # CONFIG_HWMON is not set
 CONFIG_WATCHDOG=y
-CONFIG_USB=m
-CONFIG_USB_EHCI_HCD=m
-CONFIG_USB_EHCI_HCD_PLATFORM=m
-CONFIG_USB_OHCI_HCD=m
-CONFIG_USB_OHCI_HCD_PLATFORM=m
+CONFIG_USB=y
+# CONFIG_USB_PCI is not set
+CONFIG_USB_XHCI_HCD=y
+CONFIG_USB_EHCI_HCD=y
+CONFIG_USB_EHCI_HCD_PLATFORM=y
+CONFIG_USB_OHCI_HCD=y
+CONFIG_USB_OHCI_HCD_PLATFORM=y
+CONFIG_USB_STORAGE=y
+CONFIG_USB_DWC3=y
 CONFIG_MMC=y
 # CONFIG_PWRSEQ_EMMC is not set
 # CONFIG_PWRSEQ_SIMPLE is not set
-# CONFIG_MMC_BLOCK_BOUNCE is not set
 CONFIG_MMC_CAVIUM_OCTEON=y
+CONFIG_EDAC=y
+CONFIG_EDAC_OCTEON_PC=y
+CONFIG_EDAC_OCTEON_L2C=y
+CONFIG_EDAC_OCTEON_LMC=y
+CONFIG_EDAC_OCTEON_PCI=y
 CONFIG_RTC_CLASS=y
 CONFIG_RTC_DRV_DS1307=y
 CONFIG_STAGING=y
 CONFIG_OCTEON_ETHERNET=y
-CONFIG_OCTEON_USB=m
 # CONFIG_IOMMU_SUPPORT is not set
+CONFIG_RAS=y
 CONFIG_EXT4_FS=y
 CONFIG_EXT4_FS_POSIX_ACL=y
 CONFIG_EXT4_FS_SECURITY=y
+3
arch/mips/configs/ci20_defconfig
···
 CONFIG_I2C=y
 CONFIG_I2C_JZ4780=y
 CONFIG_GPIO_SYSFS=y
+CONFIG_GPIO_INGENIC=y
 # CONFIG_HWMON is not set
 CONFIG_REGULATOR=y
 CONFIG_REGULATOR_DEBUG=y
···
 # CONFIG_HID is not set
 # CONFIG_USB_SUPPORT is not set
 CONFIG_MMC=y
+CONFIG_RTC_CLASS=y
+CONFIG_RTC_DRV_JZ4740=y
 # CONFIG_IOMMU_SUPPORT is not set
 CONFIG_MEMORY=y
 # CONFIG_DNOTIFY is not set
+30
arch/mips/configs/generic/board-ni169445.config
···
+# require CONFIG_CPU_MIPS32_R2=y
+# require CONFIG_CPU_LITTLE_ENDIAN=y
+
+CONFIG_FIT_IMAGE_FDT_NI169445=y
+
+CONFIG_SERIAL_8250=y
+CONFIG_SERIAL_8250_CONSOLE=y
+CONFIG_SERIAL_OF_PLATFORM=y
+
+CONFIG_GPIOLIB=y
+CONFIG_GPIO_SYSFS=y
+CONFIG_GPIO_GENERIC_PLATFORM=y
+
+CONFIG_MTD=y
+CONFIG_MTD_BLOCK=y
+CONFIG_MTD_CMDLINE_PARTS=y
+
+CONFIG_MTD_NAND_ECC=y
+CONFIG_MTD_NAND_ECC_BCH=y
+CONFIG_MTD_NAND=y
+CONFIG_MTD_NAND_GPIO=y
+CONFIG_MTD_NAND_IDS=y
+
+CONFIG_MTD_UBI=y
+CONFIG_MTD_UBI_BLOCK=y
+
+CONFIG_NETDEVICES=y
+CONFIG_STMMAC_ETH=y
+CONFIG_STMMAC_PLATFORM=y
+CONFIG_DWMAC_GENERIC=y
+2
arch/mips/configs/generic/board-sead-3.config
···
+# require CONFIG_32BIT=y
+
 CONFIG_LEGACY_BOARD_SEAD3=y

 CONFIG_AUXDISPLAY=y
+1 -2
arch/mips/configs/generic_defconfig
···
 CONFIG_MIPS_CPS=y
 CONFIG_CPU_HAS_MSA=y
 CONFIG_HIGHMEM=y
-CONFIG_NR_CPUS=2
+CONFIG_NR_CPUS=16
 CONFIG_MIPS_O32_FP64_SUPPORT=y
 CONFIG_SYSVIPC=y
 CONFIG_NO_HZ_IDLE=y
···
 CONFIG_HID_LOGITECH=y
 CONFIG_HID_MICROSOFT=y
 CONFIG_HID_MONTEREY=y
-# CONFIG_USB_SUPPORT is not set
 # CONFIG_MIPS_PLATFORM_DEVICES is not set
 # CONFIG_IOMMU_SUPPORT is not set
 CONFIG_EXT4_FS=y
-4
arch/mips/configs/gpr_defconfig
···
 CONFIG_DEV_APPLETALK=m
 CONFIG_IPDDP=m
 CONFIG_IPDDP_ENCAP=y
-CONFIG_IPDDP_DECAP=y
 CONFIG_X25=m
 CONFIG_LAPB=m
-CONFIG_ECONET=m
-CONFIG_ECONET_AUNUDP=y
-CONFIG_ECONET_NATIVE=y
 CONFIG_WAN_ROUTER=m
 CONFIG_NET_SCHED=y
 CONFIG_NET_SCH_CBQ=m
-1
arch/mips/configs/lemote2f_defconfig
···
 CONFIG_HIBERNATION=y
 CONFIG_PM_STD_PARTITION="/dev/hda3"
 CONFIG_CPU_FREQ=y
-CONFIG_CPU_FREQ_DEBUG=y
 CONFIG_CPU_FREQ_STAT=y
 CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND=y
 CONFIG_CPU_FREQ_GOV_POWERSAVE=m
-1
arch/mips/configs/malta_defconfig
···
 CONFIG_CPU_LITTLE_ENDIAN=y
 CONFIG_CPU_MIPS32_R2=y
 CONFIG_PAGE_SIZE_16KB=y
-CONFIG_MIPS_MT_SMP=y
 CONFIG_NR_CPUS=8
 CONFIG_HZ_100=y
 CONFIG_SYSVIPC=y
-1
arch/mips/configs/malta_kvm_defconfig
···
 CONFIG_CPU_LITTLE_ENDIAN=y
 CONFIG_CPU_MIPS32_R2=y
 CONFIG_PAGE_SIZE_16KB=y
-CONFIG_MIPS_MT_SMP=y
 CONFIG_NR_CPUS=8
 CONFIG_HZ_100=y
 CONFIG_SYSVIPC=y
+1
arch/mips/configs/malta_kvm_guest_defconfig
···
 CONFIG_CPU_MIPS32_R2=y
 CONFIG_KVM_GUEST=y
 CONFIG_PAGE_SIZE_16KB=y
+# CONFIG_MIPS_MT_SMP is not set
 CONFIG_HZ_100=y
 CONFIG_SYSVIPC=y
 CONFIG_NO_HZ=y
-1
arch/mips/configs/maltasmvp_defconfig
···
 CONFIG_CPU_LITTLE_ENDIAN=y
 CONFIG_CPU_MIPS32_R2=y
 CONFIG_PAGE_SIZE_16KB=y
-CONFIG_MIPS_MT_SMP=y
 CONFIG_SCHED_SMT=y
 CONFIG_MIPS_CPS=y
 CONFIG_NR_CPUS=8
-1
arch/mips/configs/maltasmvp_eva_defconfig
···
 CONFIG_CPU_MIPS32_R2=y
 CONFIG_CPU_MIPS32_3_5_FEATURES=y
 CONFIG_PAGE_SIZE_16KB=y
-CONFIG_MIPS_MT_SMP=y
 CONFIG_SCHED_SMT=y
 CONFIG_MIPS_CPS=y
 CONFIG_NR_CPUS=8
-4
arch/mips/configs/mtx1_defconfig
···
 CONFIG_DEV_APPLETALK=m
 CONFIG_IPDDP=m
 CONFIG_IPDDP_ENCAP=y
-CONFIG_IPDDP_DECAP=y
 CONFIG_X25=m
 CONFIG_LAPB=m
-CONFIG_ECONET=m
-CONFIG_ECONET_AUNUDP=y
-CONFIG_ECONET_NATIVE=y
 CONFIG_WAN_ROUTER=m
 CONFIG_NET_SCHED=y
 CONFIG_NET_SCH_CBQ=m
-1
arch/mips/configs/nlm_xlp_defconfig
···
 CONFIG_DEV_APPLETALK=m
 CONFIG_IPDDP=m
 CONFIG_IPDDP_ENCAP=y
-CONFIG_IPDDP_DECAP=y
 CONFIG_X25=m
 CONFIG_LAPB=m
 CONFIG_WAN_ROUTER=m
-4
arch/mips/configs/nlm_xlr_defconfig
···
 CONFIG_DEV_APPLETALK=m
 CONFIG_IPDDP=m
 CONFIG_IPDDP_ENCAP=y
-CONFIG_IPDDP_DECAP=y
 CONFIG_X25=m
 CONFIG_LAPB=m
-CONFIG_ECONET=m
-CONFIG_ECONET_AUNUDP=y
-CONFIG_ECONET_NATIVE=y
 CONFIG_WAN_ROUTER=m
 CONFIG_PHONET=m
 CONFIG_IEEE802154=m
+129
arch/mips/configs/omega2p_defconfig
···
+CONFIG_RALINK=y
+CONFIG_SOC_MT7620=y
+CONFIG_DTB_OMEGA2P=y
+CONFIG_CPU_MIPS32_R2=y
+# CONFIG_COMPACTION is not set
+CONFIG_HZ_100=y
+CONFIG_PREEMPT=y
+# CONFIG_SECCOMP is not set
+CONFIG_MIPS_CMDLINE_FROM_BOOTLOADER=y
+# CONFIG_LOCALVERSION_AUTO is not set
+CONFIG_SYSVIPC=y
+CONFIG_POSIX_MQUEUE=y
+CONFIG_NO_HZ_IDLE=y
+CONFIG_HIGH_RES_TIMERS=y
+CONFIG_IKCONFIG=y
+CONFIG_IKCONFIG_PROC=y
+CONFIG_LOG_BUF_SHIFT=14
+CONFIG_CGROUPS=y
+CONFIG_MEMCG=y
+CONFIG_CGROUP_SCHED=y
+CONFIG_CGROUP_FREEZER=y
+CONFIG_CGROUP_DEVICE=y
+CONFIG_CGROUP_CPUACCT=y
+CONFIG_NAMESPACES=y
+CONFIG_USER_NS=y
+CONFIG_CC_OPTIMIZE_FOR_SIZE=y
+CONFIG_SYSCTL_SYSCALL=y
+CONFIG_KALLSYMS_ALL=y
+CONFIG_EMBEDDED=y
+# CONFIG_VM_EVENT_COUNTERS is not set
+# CONFIG_SLUB_DEBUG is not set
+# CONFIG_COMPAT_BRK is not set
+# CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
+# CONFIG_SUSPEND is not set
+CONFIG_NET=y
+CONFIG_PACKET=y
+CONFIG_UNIX=y
+CONFIG_INET=y
+# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
+# CONFIG_INET_XFRM_MODE_TUNNEL is not set
+# CONFIG_INET_XFRM_MODE_BEET is not set
+# CONFIG_INET_DIAG is not set
+# CONFIG_IPV6 is not set
+# CONFIG_WIRELESS is not set
+CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
+CONFIG_DEVTMPFS=y
+# CONFIG_FW_LOADER is not set
+# CONFIG_ALLOW_DEV_COREDUMP is not set
+CONFIG_NETDEVICES=y
+# CONFIG_ETHERNET is not set
+# CONFIG_WLAN is not set
+# CONFIG_INPUT_KEYBOARD is not set
+# CONFIG_INPUT_MOUSE is not set
+# CONFIG_SERIO is not set
+CONFIG_VT_HW_CONSOLE_BINDING=y
+CONFIG_LEGACY_PTY_COUNT=2
+CONFIG_SERIAL_8250=y
+CONFIG_SERIAL_8250_CONSOLE=y
+CONFIG_SERIAL_8250_NR_UARTS=3
+CONFIG_SERIAL_8250_RUNTIME_UARTS=3
+CONFIG_SERIAL_OF_PLATFORM=y
+# CONFIG_HW_RANDOM is not set
+# CONFIG_HWMON is not set
+# CONFIG_VGA_CONSOLE is not set
+CONFIG_USB=y
+CONFIG_USB_EHCI_HCD=y
+CONFIG_USB_EHCI_HCD_PLATFORM=y
+CONFIG_MMC=y
+# CONFIG_IOMMU_SUPPORT is not set
+CONFIG_MEMORY=y
+CONFIG_PHY_RALINK_USB=y
+# CONFIG_DNOTIFY is not set
+CONFIG_PROC_KCORE=y
+# CONFIG_PROC_PAGE_MONITOR is not set
+CONFIG_TMPFS=y
+CONFIG_CONFIGFS_FS=y
+# CONFIG_NETWORK_FILESYSTEMS is not set
+CONFIG_NLS_CODEPAGE_437=y
+CONFIG_NLS_CODEPAGE_737=y
+CONFIG_NLS_CODEPAGE_775=y
+CONFIG_NLS_CODEPAGE_850=y
+CONFIG_NLS_CODEPAGE_852=y
+CONFIG_NLS_CODEPAGE_855=y
+CONFIG_NLS_CODEPAGE_857=y
+CONFIG_NLS_CODEPAGE_860=y
+CONFIG_NLS_CODEPAGE_861=y
+CONFIG_NLS_CODEPAGE_862=y
+CONFIG_NLS_CODEPAGE_863=y
+CONFIG_NLS_CODEPAGE_864=y
+CONFIG_NLS_CODEPAGE_865=y
+CONFIG_NLS_CODEPAGE_866=y
+CONFIG_NLS_CODEPAGE_869=y
+CONFIG_NLS_CODEPAGE_936=y
+CONFIG_NLS_CODEPAGE_950=y
+CONFIG_NLS_CODEPAGE_932=y
+CONFIG_NLS_CODEPAGE_949=y
+CONFIG_NLS_CODEPAGE_874=y
+CONFIG_NLS_ISO8859_8=y
+CONFIG_NLS_CODEPAGE_1250=y
+CONFIG_NLS_CODEPAGE_1251=y
+CONFIG_NLS_ASCII=y
+CONFIG_NLS_ISO8859_1=y
+CONFIG_NLS_ISO8859_2=y
+CONFIG_NLS_ISO8859_3=y
+CONFIG_NLS_ISO8859_4=y
+CONFIG_NLS_ISO8859_5=y
+CONFIG_NLS_ISO8859_6=y
+CONFIG_NLS_ISO8859_7=y
+CONFIG_NLS_ISO8859_9=y
+CONFIG_NLS_ISO8859_13=y
+CONFIG_NLS_ISO8859_14=y
+CONFIG_NLS_ISO8859_15=y
+CONFIG_NLS_KOI8_R=y
+CONFIG_NLS_KOI8_U=y
+CONFIG_NLS_UTF8=y
+CONFIG_PRINTK_TIME=y
+CONFIG_DEBUG_INFO=y
+CONFIG_STRIP_ASM_SYMS=y
+CONFIG_DEBUG_FS=y
+CONFIG_MAGIC_SYSRQ=y
+CONFIG_PANIC_TIMEOUT=10
+# CONFIG_SCHED_DEBUG is not set
+# CONFIG_DEBUG_PREEMPT is not set
+CONFIG_STACKTRACE=y
+# CONFIG_FTRACE is not set
+CONFIG_CRYPTO_DEFLATE=y
+CONFIG_CRYPTO_LZO=y
+CONFIG_CRC16=y
+CONFIG_XZ_DEC=y
+4 -1
arch/mips/configs/pistachio_defconfig
···
 CONFIG_IP_MULTIPLE_TABLES=y
 CONFIG_IP_ROUTE_MULTIPATH=y
 CONFIG_IP_ROUTE_VERBOSE=y
+CONFIG_IP_PNP=y
+CONFIG_IP_PNP_DHCP=y
 CONFIG_IP_MROUTE=y
 CONFIG_IP_PIMSM_V1=y
 CONFIG_IP_PIMSM_V2=y
···
 CONFIG_PSTORE=y
 CONFIG_PSTORE_CONSOLE=y
 CONFIG_PSTORE_RAM=y
-# CONFIG_NETWORK_FILESYSTEMS is not set
+CONFIG_NFS_FS=y
+CONFIG_ROOT_NFS=y
 CONFIG_NLS_DEFAULT="utf8"
 CONFIG_NLS_CODEPAGE_437=m
 CONFIG_NLS_ASCII=m
+129
arch/mips/configs/vocore2_defconfig
···
+CONFIG_RALINK=y
+CONFIG_SOC_MT7620=y
+CONFIG_DTB_VOCORE2=y
+CONFIG_CPU_MIPS32_R2=y
+# CONFIG_COMPACTION is not set
+CONFIG_HZ_100=y
+CONFIG_PREEMPT=y
+# CONFIG_SECCOMP is not set
+CONFIG_MIPS_CMDLINE_FROM_BOOTLOADER=y
+# CONFIG_LOCALVERSION_AUTO is not set
+CONFIG_SYSVIPC=y
+CONFIG_POSIX_MQUEUE=y
+CONFIG_NO_HZ_IDLE=y
+CONFIG_HIGH_RES_TIMERS=y
+CONFIG_IKCONFIG=y
+CONFIG_IKCONFIG_PROC=y
+CONFIG_LOG_BUF_SHIFT=14
+CONFIG_CGROUPS=y
+CONFIG_MEMCG=y
+CONFIG_CGROUP_SCHED=y
+CONFIG_CGROUP_FREEZER=y
+CONFIG_CGROUP_DEVICE=y
+CONFIG_CGROUP_CPUACCT=y
+CONFIG_NAMESPACES=y
+CONFIG_USER_NS=y
+CONFIG_CC_OPTIMIZE_FOR_SIZE=y
+CONFIG_SYSCTL_SYSCALL=y
+CONFIG_KALLSYMS_ALL=y
+CONFIG_EMBEDDED=y
+# CONFIG_VM_EVENT_COUNTERS is not set
+# CONFIG_SLUB_DEBUG is not set
+# CONFIG_COMPAT_BRK is not set
+# CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
+# CONFIG_SUSPEND is not set
+CONFIG_NET=y
+CONFIG_PACKET=y
+CONFIG_UNIX=y
+CONFIG_INET=y
+# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
+# CONFIG_INET_XFRM_MODE_TUNNEL is not set
+# CONFIG_INET_XFRM_MODE_BEET is not set
+# CONFIG_INET_DIAG is not set
+# CONFIG_IPV6 is not set
+# CONFIG_WIRELESS is not set
+CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
+CONFIG_DEVTMPFS=y
+# CONFIG_FW_LOADER is not set
+# CONFIG_ALLOW_DEV_COREDUMP is not set
+CONFIG_NETDEVICES=y
+# CONFIG_ETHERNET is not set
+# CONFIG_WLAN is not set
+# CONFIG_INPUT_KEYBOARD is not set
+# CONFIG_INPUT_MOUSE is not set
+# CONFIG_SERIO is not set
+CONFIG_VT_HW_CONSOLE_BINDING=y
+CONFIG_LEGACY_PTY_COUNT=2
+CONFIG_SERIAL_8250=y
+CONFIG_SERIAL_8250_CONSOLE=y
+CONFIG_SERIAL_8250_NR_UARTS=3
+CONFIG_SERIAL_8250_RUNTIME_UARTS=3
+CONFIG_SERIAL_OF_PLATFORM=y
+# CONFIG_HW_RANDOM is not set
+# CONFIG_HWMON is not set
+# CONFIG_VGA_CONSOLE is not set
+CONFIG_USB=y
+CONFIG_USB_EHCI_HCD=y
+CONFIG_USB_EHCI_HCD_PLATFORM=y
+CONFIG_MMC=y
+# CONFIG_IOMMU_SUPPORT is not set
+CONFIG_MEMORY=y
+CONFIG_PHY_RALINK_USB=y
+# CONFIG_DNOTIFY is not set
+CONFIG_PROC_KCORE=y
+# CONFIG_PROC_PAGE_MONITOR is not set
+CONFIG_TMPFS=y
+CONFIG_CONFIGFS_FS=y
+# CONFIG_NETWORK_FILESYSTEMS is not set
+CONFIG_NLS_CODEPAGE_437=y
+CONFIG_NLS_CODEPAGE_737=y
+CONFIG_NLS_CODEPAGE_775=y
+CONFIG_NLS_CODEPAGE_850=y
+CONFIG_NLS_CODEPAGE_852=y
+CONFIG_NLS_CODEPAGE_855=y
+CONFIG_NLS_CODEPAGE_857=y
+CONFIG_NLS_CODEPAGE_860=y
+CONFIG_NLS_CODEPAGE_861=y
+CONFIG_NLS_CODEPAGE_862=y
+CONFIG_NLS_CODEPAGE_863=y
+CONFIG_NLS_CODEPAGE_864=y
+CONFIG_NLS_CODEPAGE_865=y
+CONFIG_NLS_CODEPAGE_866=y
+CONFIG_NLS_CODEPAGE_869=y
+CONFIG_NLS_CODEPAGE_936=y
+CONFIG_NLS_CODEPAGE_950=y
+CONFIG_NLS_CODEPAGE_932=y
+CONFIG_NLS_CODEPAGE_949=y
+CONFIG_NLS_CODEPAGE_874=y
+CONFIG_NLS_ISO8859_8=y
+CONFIG_NLS_CODEPAGE_1250=y
+CONFIG_NLS_CODEPAGE_1251=y
+CONFIG_NLS_ASCII=y
+CONFIG_NLS_ISO8859_1=y
+CONFIG_NLS_ISO8859_2=y
+CONFIG_NLS_ISO8859_3=y
+CONFIG_NLS_ISO8859_4=y
+CONFIG_NLS_ISO8859_5=y
+CONFIG_NLS_ISO8859_6=y
+CONFIG_NLS_ISO8859_7=y
+CONFIG_NLS_ISO8859_9=y
+CONFIG_NLS_ISO8859_13=y
+CONFIG_NLS_ISO8859_14=y
+CONFIG_NLS_ISO8859_15=y
+CONFIG_NLS_KOI8_R=y
+CONFIG_NLS_KOI8_U=y
+CONFIG_NLS_UTF8=y
+CONFIG_PRINTK_TIME=y
+CONFIG_DEBUG_INFO=y
+CONFIG_STRIP_ASM_SYMS=y
+CONFIG_DEBUG_FS=y
+CONFIG_MAGIC_SYSRQ=y
+CONFIG_PANIC_TIMEOUT=10
+# CONFIG_SCHED_DEBUG is not set
+# CONFIG_DEBUG_PREEMPT is not set
+CONFIG_STACKTRACE=y
+# CONFIG_FTRACE is not set
+CONFIG_CRYPTO_DEFLATE=y
+CONFIG_CRYPTO_LZO=y
+CONFIG_CRC16=y
+CONFIG_XZ_DEC=y
+1 -1
arch/mips/fw/arc/init.c
···
 #endif
 #ifdef CONFIG_SGI_IP27
 	{
-		extern struct plat_smp_ops ip27_smp_ops;
+		extern const struct plat_smp_ops ip27_smp_ops;
 
 		register_smp_ops(&ip27_smp_ops);
 	}
+6
arch/mips/generic/Kconfig
···
 	  enable this if you wish to boot on a MIPS Boston board, as it is
 	  expected by the bootloader.
 
+config FIT_IMAGE_FDT_NI169445
+	bool "Include FDT for NI 169445"
+	help
+	  Enable this to include the FDT for the 169445 platform from
+	  National Instruments in the FIT kernel image.
+
 endif
+4
arch/mips/generic/Platform
···
 cflags-$(CONFIG_MIPS_GENERIC)	+= -I$(srctree)/arch/mips/include/asm/mach-generic
 load-$(CONFIG_MIPS_GENERIC)	+= 0xffffffff80100000
 all-$(CONFIG_MIPS_GENERIC)	:= vmlinux.gz.itb
+
+its-y					:= vmlinux.its.S
+its-$(CONFIG_FIT_IMAGE_FDT_BOSTON)	+= board-boston.its.S
+its-$(CONFIG_FIT_IMAGE_FDT_NI169445)	+= board-ni169445.its.S
+22
arch/mips/generic/board-boston.its.S
···
+/ {
+	images {
+		fdt@boston {
+			description = "img,boston Device Tree";
+			data = /incbin/("boot/dts/img/boston.dtb");
+			type = "flat_dt";
+			arch = "mips";
+			compression = "none";
+			hash@0 {
+				algo = "sha1";
+			};
+		};
+	};
+
+	configurations {
+		conf@boston {
+			description = "Boston Linux kernel";
+			kernel = "kernel@0";
+			fdt = "fdt@boston";
+		};
+	};
+};
+22
arch/mips/generic/board-ni169445.its.S
···
+/ {
+	images {
+		fdt@ni169445 {
+			description = "NI 169445 device tree";
+			data = /incbin/("boot/dts/ni/169445.dtb");
+			type = "flat_dt";
+			arch = "mips";
+			compression = "none";
+			hash@0 {
+				algo = "sha1";
+			};
+		};
+	};
+
+	configurations {
+		conf@ni169445 {
+			description = "NI 169445 Linux Kernel";
+			kernel = "kernel@0";
+			fdt = "fdt@ni169445";
+		};
+	};
+};
+5
arch/mips/generic/init.c
···
 #include <linux/of_fdt.h>
 #include <linux/of_platform.h>
 
+#include <asm/bootinfo.h>
 #include <asm/fw/fw.h>
 #include <asm/irq_cpu.h>
 #include <asm/machine.h>
···
 	return (void *)fdt;
 }
 
+#ifdef CONFIG_RELOCATABLE
+
 void __init plat_fdt_relocated(void *new_location)
 {
 	/*
···
 	if (fw_arg0 == -2)
 		fw_arg1 = (unsigned long)new_location;
 }
+
+#endif /* CONFIG_RELOCATABLE */
 
 void __init plat_mem_setup(void)
 {
+5 -4
arch/mips/generic/irq.c
···
 #include <linux/clk-provider.h>
 #include <linux/clocksource.h>
 #include <linux/init.h>
-#include <linux/irqchip/mips-gic.h>
 #include <linux/types.h>
 
 #include <asm/irq.h>
+#include <asm/mips-cps.h>
+#include <asm/time.h>
 
 int get_c0_fdc_int(void)
 {
···
 	if (cpu_has_veic)
 		panic("Unimplemented!");
-	else if (gic_present)
+	else if (mips_gic_present())
 		mips_cpu_fdc_irq = gic_get_c0_fdc_int();
 	else if (cp0_fdc_irq >= 0)
 		mips_cpu_fdc_irq = MIPS_CPU_IRQ_BASE + cp0_fdc_irq;
···
 	if (cpu_has_veic)
 		panic("Unimplemented!");
-	else if (gic_present)
+	else if (mips_gic_present())
 		mips_cpu_perf_irq = gic_get_c0_perfcount_int();
 	else if (cp0_perfcount_irq >= 0)
 		mips_cpu_perf_irq = MIPS_CPU_IRQ_BASE + cp0_perfcount_irq;
···
 	if (cpu_has_veic)
 		panic("Unimplemented!");
-	else if (gic_present)
+	else if (mips_gic_present())
 		mips_cpu_timer_irq = gic_get_c0_compare_int();
 	else
 		mips_cpu_timer_irq = MIPS_CPU_IRQ_BASE + cp0_compare_irq;
-25
arch/mips/generic/vmlinux.its.S
···
 		};
 	};
 };
-
-#ifdef CONFIG_FIT_IMAGE_FDT_BOSTON
-/ {
-	images {
-		fdt@boston {
-			description = "img,boston Device Tree";
-			data = /incbin/("boot/dts/img/boston.dtb");
-			type = "flat_dt";
-			arch = "mips";
-			compression = "none";
-			hash@0 {
-				algo = "sha1";
-			};
-		};
-	};
-
-	configurations {
-		conf@boston {
-			description = "Boston Linux kernel";
-			kernel = "kernel@0";
-			fdt = "fdt@boston";
-		};
-	};
-};
-#endif /* CONFIG_FIT_IMAGE_FDT_BOSTON */
+3
arch/mips/include/asm/asm.h
···
 		.type	symbol, @function;	\
 		.ent	symbol, 0;		\
 symbol:	.frame	sp, 0, ra;		\
+		.cfi_startproc;			\
 		.insn
 
 /*
···
 		.type	symbol, @function;	\
 		.ent	symbol, 0;		\
 symbol:	.frame	sp, framesize, rpc;	\
+		.cfi_startproc;			\
 		.insn
 
 /*
  * END - mark end of function
  */
 #define	END(function)			\
+		.cfi_endproc;		\
 		.end	function;	\
 		.size	function, .-function
 
+2 -2
arch/mips/include/asm/bmips.h
···
 #include <asm/r4kcache.h>
 #include <asm/smp-ops.h>
 
-extern struct plat_smp_ops bmips43xx_smp_ops;
-extern struct plat_smp_ops bmips5000_smp_ops;
+extern const struct plat_smp_ops bmips43xx_smp_ops;
+extern const struct plat_smp_ops bmips5000_smp_ops;
 
 static inline int register_bmips_smp_ops(void)
 {
+49 -13
arch/mips/include/asm/cpu-info.h
···
 #include <linux/cache.h>
 #include <linux/types.h>
 
+#include <asm/mipsregs.h>
+
 /*
  * Descriptor for a cache
  */
···
 	struct cache_desc	tcache;	/* Tertiary/split secondary cache */
 	int			srsets;	/* Shadow register sets */
 	int			package;/* physical package number */
-	int			core;	/* physical core number */
+	unsigned int		globalnumber;
 #ifdef CONFIG_64BIT
 	int			vmbits;	/* Virtual memory size in bits */
-#endif
-#if defined(CONFIG_MIPS_MT_SMP) || defined(CONFIG_CPU_MIPSR6)
-	/*
-	 * There is not necessarily a 1:1 mapping of VPE num to CPU number
-	 * in particular on multi-core systems.
-	 */
-	int			vpe_id;	/* Virtual Processor number */
 #endif
 	void			*data;	/* Additional data */
 	unsigned int		watch_reg_count;	/* Number that exist */
···
 	unsigned long n;
 };
 
-#if defined(CONFIG_MIPS_MT_SMP) || defined(CONFIG_CPU_MIPSR6)
-# define cpu_vpe_id(cpuinfo)	((cpuinfo)->vpe_id)
-#else
-# define cpu_vpe_id(cpuinfo)	({ (void)cpuinfo; 0; })
-#endif
+static inline unsigned int cpu_cluster(struct cpuinfo_mips *cpuinfo)
+{
+	/* Optimisation for systems where multiple clusters aren't used */
+	if (!IS_ENABLED(CONFIG_CPU_MIPSR6))
+		return 0;
+
+	return (cpuinfo->globalnumber & MIPS_GLOBALNUMBER_CLUSTER) >>
+		MIPS_GLOBALNUMBER_CLUSTER_SHF;
+}
+
+static inline unsigned int cpu_core(struct cpuinfo_mips *cpuinfo)
+{
+	return (cpuinfo->globalnumber & MIPS_GLOBALNUMBER_CORE) >>
+		MIPS_GLOBALNUMBER_CORE_SHF;
+}
+
+static inline unsigned int cpu_vpe_id(struct cpuinfo_mips *cpuinfo)
+{
+	/* Optimisation for systems where VP(E)s aren't used */
+	if (!IS_ENABLED(CONFIG_MIPS_MT_SMP) && !IS_ENABLED(CONFIG_CPU_MIPSR6))
+		return 0;
+
+	return (cpuinfo->globalnumber & MIPS_GLOBALNUMBER_VP) >>
+		MIPS_GLOBALNUMBER_VP_SHF;
+}
+
+extern void cpu_set_cluster(struct cpuinfo_mips *cpuinfo, unsigned int cluster);
+extern void cpu_set_core(struct cpuinfo_mips *cpuinfo, unsigned int core);
+extern void cpu_set_vpe_id(struct cpuinfo_mips *cpuinfo, unsigned int vpe);
+
+static inline bool cpus_are_siblings(int cpua, int cpub)
+{
+	struct cpuinfo_mips *infoa = &cpu_data[cpua];
+	struct cpuinfo_mips *infob = &cpu_data[cpub];
+	unsigned int gnuma, gnumb;
+
+	if (infoa->package != infob->package)
+		return false;
+
+	gnuma = infoa->globalnumber & ~MIPS_GLOBALNUMBER_VP;
+	gnumb = infob->globalnumber & ~MIPS_GLOBALNUMBER_VP;
+	if (gnuma != gnumb)
+		return false;
+
+	return true;
+}
 
 static inline unsigned long cpu_asid_inc(void)
 {
-5
arch/mips/include/asm/cpu-type.h
···
 	case CPU_R5500:
 #endif
 
-#ifdef CONFIG_SYS_HAS_CPU_R6000
-	case CPU_R6000:
-	case CPU_R6000A:
-#endif
-
 #ifdef CONFIG_SYS_HAS_CPU_NEVADA
 	case CPU_NEVADA:
 #endif
-5
arch/mips/include/asm/cpu.h
···
 	CPU_R3081, CPU_R3081E,
 
 	/*
-	 * R6000 class processors
-	 */
-	CPU_R6000, CPU_R6000A,
-
-	/*
 	 * R4000 class processors
 	 */
 	CPU_R4000PC, CPU_R4000SC, CPU_R4000MC, CPU_R4200, CPU_R4300, CPU_R4310,
+2 -2
arch/mips/include/asm/floppy.h
···
 #ifndef _ASM_FLOPPY_H
 #define _ASM_FLOPPY_H
 
-#include <linux/dma-mapping.h>
+#include <asm/io.h>
 
 static inline void fd_cacheflush(char * addr, long size)
 {
-	dma_cache_sync(NULL, addr, size, DMA_BIDIRECTIONAL);
+	dma_cache_wback_inv((unsigned long)addr, size);
 }
 
 #define MAX_BUFFER_SECTORS 24
+117 -1
arch/mips/include/asm/fpu_emulator.h
···
 	unsigned long emulated;
 	unsigned long loads;
 	unsigned long stores;
+	unsigned long branches;
 	unsigned long cp1ops;
 	unsigned long cp1xops;
 	unsigned long errors;
···
 	unsigned long ieee754_zerodiv;
 	unsigned long ieee754_invalidop;
 	unsigned long ds_emul;
+
+	unsigned long abs_s;
+	unsigned long abs_d;
+	unsigned long add_s;
+	unsigned long add_d;
+	unsigned long bc1eqz;
+	unsigned long bc1nez;
+	unsigned long ceil_w_s;
+	unsigned long ceil_w_d;
+	unsigned long ceil_l_s;
+	unsigned long ceil_l_d;
+	unsigned long class_s;
+	unsigned long class_d;
+	unsigned long cmp_af_s;
+	unsigned long cmp_af_d;
+	unsigned long cmp_eq_s;
+	unsigned long cmp_eq_d;
+	unsigned long cmp_le_s;
+	unsigned long cmp_le_d;
+	unsigned long cmp_lt_s;
+	unsigned long cmp_lt_d;
+	unsigned long cmp_ne_s;
+	unsigned long cmp_ne_d;
+	unsigned long cmp_or_s;
+	unsigned long cmp_or_d;
+	unsigned long cmp_ueq_s;
+	unsigned long cmp_ueq_d;
+	unsigned long cmp_ule_s;
+	unsigned long cmp_ule_d;
+	unsigned long cmp_ult_s;
+	unsigned long cmp_ult_d;
+	unsigned long cmp_un_s;
+	unsigned long cmp_un_d;
+	unsigned long cmp_une_s;
+	unsigned long cmp_une_d;
+	unsigned long cmp_saf_s;
+	unsigned long cmp_saf_d;
+	unsigned long cmp_seq_s;
+	unsigned long cmp_seq_d;
+	unsigned long cmp_sle_s;
+	unsigned long cmp_sle_d;
+	unsigned long cmp_slt_s;
+	unsigned long cmp_slt_d;
+	unsigned long cmp_sne_s;
+	unsigned long cmp_sne_d;
+	unsigned long cmp_sor_s;
+	unsigned long cmp_sor_d;
+	unsigned long cmp_sueq_s;
+	unsigned long cmp_sueq_d;
+	unsigned long cmp_sule_s;
+	unsigned long cmp_sule_d;
+	unsigned long cmp_sult_s;
+	unsigned long cmp_sult_d;
+	unsigned long cmp_sun_s;
+	unsigned long cmp_sun_d;
+	unsigned long cmp_sune_s;
+	unsigned long cmp_sune_d;
+	unsigned long cvt_d_l;
+	unsigned long cvt_d_s;
+	unsigned long cvt_d_w;
+	unsigned long cvt_l_s;
+	unsigned long cvt_l_d;
+	unsigned long cvt_s_d;
+	unsigned long cvt_s_l;
+	unsigned long cvt_s_w;
+	unsigned long cvt_w_s;
+	unsigned long cvt_w_d;
+	unsigned long div_s;
+	unsigned long div_d;
+	unsigned long floor_w_s;
+	unsigned long floor_w_d;
+	unsigned long floor_l_s;
+	unsigned long floor_l_d;
+	unsigned long maddf_s;
+	unsigned long maddf_d;
+	unsigned long max_s;
+	unsigned long max_d;
+	unsigned long maxa_s;
+	unsigned long maxa_d;
+	unsigned long min_s;
+	unsigned long min_d;
+	unsigned long mina_s;
+	unsigned long mina_d;
+	unsigned long mov_s;
+	unsigned long mov_d;
+	unsigned long msubf_s;
+	unsigned long msubf_d;
+	unsigned long mul_s;
+	unsigned long mul_d;
+	unsigned long neg_s;
+	unsigned long neg_d;
+	unsigned long recip_s;
+	unsigned long recip_d;
+	unsigned long rint_s;
+	unsigned long rint_d;
+	unsigned long round_w_s;
+	unsigned long round_w_d;
+	unsigned long round_l_s;
+	unsigned long round_l_d;
+	unsigned long rsqrt_s;
+	unsigned long rsqrt_d;
+	unsigned long sel_s;
+	unsigned long sel_d;
+	unsigned long seleqz_s;
+	unsigned long seleqz_d;
+	unsigned long selnez_s;
+	unsigned long selnez_d;
+	unsigned long sqrt_s;
+	unsigned long sqrt_d;
+	unsigned long sub_s;
+	unsigned long sub_d;
+	unsigned long trunc_w_s;
+	unsigned long trunc_w_d;
+	unsigned long trunc_l_s;
+	unsigned long trunc_l_d;
 };
 
 DECLARE_PER_CPU(struct mips_fpu_emulator_stats, fpuemustats);
···
 extern int fpu_emulator_cop1Handler(struct pt_regs *xcp,
 				    struct mips_fpu_struct *ctx, int has_fpu,
-				    void *__user *fault_addr);
+				    void __user **fault_addr);
 void force_fcr31_sig(unsigned long fcr31, void __user *fault_addr,
 		     struct task_struct *tsk);
 int process_fpemu_return(int sig, void __user *fault_addr,
+2
arch/mips/include/asm/io.h
···
  */
 #define xlate_dev_kmem_ptr(p)	p
 
+void __ioread64_copy(void *to, const void __iomem *from, size_t count);
+
 #endif /* _ASM_IO_H */
+26
arch/mips/include/asm/mach-au1x00/cpu-feature-overrides.h
···
 #define __ASM_MACH_AU1X00_CPU_FEATURE_OVERRIDES_H
 
 #define cpu_has_tlb			1
+#define cpu_has_ftlb			0
 #define cpu_has_tlbinv			0
 #define cpu_has_segments		0
 #define cpu_has_eva			0
 #define cpu_has_htw			0
+#define cpu_has_ldpte			0
 #define cpu_has_rixiex			0
 #define cpu_has_maar			0
+#define cpu_has_rw_llb			0
+#define cpu_has_3kex			0
 #define cpu_has_4kex			1
 #define cpu_has_3k_cache		0
 #define cpu_has_4k_cache		1
···
 #define cpu_has_mcheck			1
 #define cpu_has_ejtag			1
 #define cpu_has_llsc			1
+#define cpu_has_guestctl0ext		0
+#define cpu_has_guestctl1		0
+#define cpu_has_guestctl2		0
+#define cpu_has_guestid			0
+#define cpu_has_drg			0
+#define cpu_has_bp_ghist		0
 #define cpu_has_mips16			0
 #define cpu_has_mips16e2		0
 #define cpu_has_mdmx			0
···
 #define cpu_has_smartmips		0
 #define cpu_has_rixi			0
 #define cpu_has_mmips			0
+#define cpu_has_lpa			0
+#define cpu_has_mhv			0
 #define cpu_has_vtag_icache		0
 #define cpu_has_dc_aliases		0
 #define cpu_has_ic_fills_f_dc		1
 #define cpu_has_pindexed_dcache		0
 #define cpu_has_mips32r1		1
 #define cpu_has_mips32r2		0
+#define cpu_has_mips32r6		0
 #define cpu_has_mips64r1		0
 #define cpu_has_mips64r2		0
+#define cpu_has_mips64r6		0
 #define cpu_has_dsp			0
 #define cpu_has_dsp2			0
+#define cpu_has_dsp3			0
 #define cpu_has_mipsmt			0
+#define cpu_has_vp			0
 #define cpu_has_userlocal		0
 #define cpu_has_nofpuex			0
 #define cpu_has_64bits			0
···
 #define cpu_dcache_line_size()		32
 #define cpu_icache_line_size()		32
+#define cpu_scache_line_size()		0
 
 #define cpu_has_perf_cntr_intr_bit	0
 #define cpu_has_vz			0
 #define cpu_has_msa			0
+#define cpu_has_fre			0
+#define cpu_has_cdmm			0
+#define cpu_has_small_pages		0
+#define cpu_has_nan_legacy		1
+#define cpu_has_nan_2008		1
+#define cpu_has_ebase_wg		0
+#define cpu_has_badinstr		0
+#define cpu_has_badinstrp		0
+#define cpu_has_contextconfig		0
 
 #endif /* __ASM_MACH_AU1X00_CPU_FEATURE_OVERRIDES_H */
+1 -1
arch/mips/include/asm/mach-bcm63xx/bcm63xx_regs.h
···
 /* Broadcom 6345 ENET DMA definitions */
 #define ENETDMA_6345_CHANCFG_REG	(0x00)
 
-#define ENETDMA_6345_MAXBURST_REG	(0x40)
+#define ENETDMA_6345_MAXBURST_REG	(0x04)
 
 #define ENETDMA_6345_RSTART_REG		(0x08)
 
+3 -3
arch/mips/include/asm/mach-cavium-octeon/cpu-feature-overrides.h
···
 #define cpu_has_64bits			1
 #define cpu_has_octeon_cache		1
 #define cpu_has_saa			octeon_has_saa()
-#define cpu_has_mips32r1		0
-#define cpu_has_mips32r2		0
-#define cpu_has_mips64r1		0
+#define cpu_has_mips32r1		1
+#define cpu_has_mips32r2		1
+#define cpu_has_mips64r1		1
 #define cpu_has_mips64r2		1
 #define cpu_has_dsp			0
 #define cpu_has_dsp2			0
-1
arch/mips/include/asm/mach-ip27/topology.h
···
 extern struct cpuinfo_ip27 sn_cpu_info[NR_CPUS];
 
 #define cpu_to_node(cpu)	(sn_cpu_info[(cpu)].p_nodeid)
-#define parent_node(node)	(node)
 #define cpumask_of_node(node)	((node) == -1 ?		\
 				 cpu_all_mask :		\
 				 &hub_data(node)->h_cpus)
-2
arch/mips/include/asm/mach-lantiq/lantiq.h
···
 
 /* find out what bootsource we have */
 extern unsigned char ltq_boot_select(void);
-/* find out what caused the last cpu reset */
-extern int ltq_reset_cause(void);
 /* find out the soc type */
 extern int ltq_soc_type(void);
 
+1 -1
arch/mips/include/asm/mach-loongson64/loongson.h
···
 /* environment arguments from bootloader */
 extern u32 cpu_clock_freq;
 extern u32 memsize, highmemsize;
-extern struct plat_smp_ops loongson3_smp_ops;
+extern const struct plat_smp_ops loongson3_smp_ops;
 
 /* loongson-specific command line, env and memory initialization */
 extern void __init prom_init_memory(void);
-1
arch/mips/include/asm/mach-loongson64/topology.h
···
 #ifdef CONFIG_NUMA
 
 #define cpu_to_node(cpu)	(cpu_logical_map(cpu) >> 2)
-#define parent_node(node)	(node)
 #define cpumask_of_node(node)	(&__node_data[(node)]->cpumask)
 
 struct pci_bus;
-5
arch/mips/include/asm/mips-boards/maltaint.h
···
 #ifndef _MIPS_MALTAINT_H
 #define _MIPS_MALTAINT_H
 
-#include <linux/irqchip/mips-gic.h>
-
 /*
  * Interrupts 0..15 are used for Malta ISA compatible interrupts
  */
···
 #define MSC01E_INT_PCI		9
 #define MSC01E_INT_PERFCTR	10
 #define MSC01E_INT_CPUCTR	11
-
-/* GIC external interrupts */
-#define GIC_INT_I8259A		GIC_SHARED_TO_HWIRQ(3)
 
 #endif /* !(_MIPS_MALTAINT_H) */
+218 -291
arch/mips/include/asm/mips-cm.h
···
  * option) any later version.
  */
 
+#ifndef __MIPS_ASM_MIPS_CPS_H__
+# error Please include asm/mips-cps.h rather than asm/mips-cm.h
+#endif
+
 #ifndef __MIPS_ASM_MIPS_CM_H__
 #define __MIPS_ASM_MIPS_CM_H__
 
 #include <linux/bitops.h>
 #include <linux/errno.h>
-#include <linux/io.h>
-#include <linux/types.h>
 
 /* The base address of the CM GCR block */
-extern void __iomem *mips_cm_base;
+extern void __iomem *mips_gcr_base;
 
 /* The base address of the CM L2-only sync region */
 extern void __iomem *mips_cm_l2sync_base;
···
 static inline bool mips_cm_present(void)
 {
 #ifdef CONFIG_MIPS_CM
-	return mips_cm_base != NULL;
+	return mips_gcr_base != NULL;
 #else
 	return false;
 #endif
···
 /* Size of the L2-only sync region */
 #define MIPS_CM_L2SYNC_SIZE	0x1000
 
-/* Macros to ease the creation of register access functions */
-#define BUILD_CM_R_(name, off)					\
-static inline unsigned long __iomem *addr_gcr_##name(void)	\
-{								\
-	return (unsigned long __iomem *)(mips_cm_base + (off));	\
-}								\
-								\
-static inline u32 read32_gcr_##name(void)			\
-{								\
-	return __raw_readl(addr_gcr_##name());			\
-}								\
-								\
-static inline u64 read64_gcr_##name(void)			\
-{								\
-	void __iomem *addr = addr_gcr_##name();			\
-	u64 ret;						\
-								\
-	if (mips_cm_is64) {					\
-		ret = __raw_readq(addr);			\
-	} else {						\
-		ret = __raw_readl(addr);			\
-		ret |= (u64)__raw_readl(addr + 0x4) << 32;	\
-	}							\
-								\
-	return ret;						\
-}								\
-								\
-static inline unsigned long read_gcr_##name(void)		\
-{								\
-	if (mips_cm_is64)					\
-		return read64_gcr_##name();			\
-	else							\
-		return read32_gcr_##name();			\
-}
+#define GCR_ACCESSOR_RO(sz, off, name)					\
+	CPS_ACCESSOR_RO(gcr, sz, MIPS_CM_GCB_OFS + off, name)		\
+	CPS_ACCESSOR_RO(gcr, sz, MIPS_CM_COCB_OFS + off, redir_##name)
 
-#define BUILD_CM__W(name, off)					\
-static inline void write32_gcr_##name(u32 value)		\
-{								\
-	__raw_writel(value, addr_gcr_##name());			\
-}								\
-								\
-static inline void write64_gcr_##name(u64 value)		\
-{								\
-	__raw_writeq(value, addr_gcr_##name());			\
-}								\
-								\
-static inline void write_gcr_##name(unsigned long value)	\
-{								\
-	if (mips_cm_is64)					\
-		write64_gcr_##name(value);			\
-	else							\
-		write32_gcr_##name(value);			\
-}
+#define GCR_ACCESSOR_RW(sz, off, name)					\
+	CPS_ACCESSOR_RW(gcr, sz, MIPS_CM_GCB_OFS + off, name)		\
+	CPS_ACCESSOR_RW(gcr, sz, MIPS_CM_COCB_OFS + off, redir_##name)
 
-#define BUILD_CM_RW(name, off)					\
-	BUILD_CM_R_(name, off)					\
-	BUILD_CM__W(name, off)
+#define GCR_CX_ACCESSOR_RO(sz, off, name)				\
+	CPS_ACCESSOR_RO(gcr, sz, MIPS_CM_CLCB_OFS + off, cl_##name)	\
+	CPS_ACCESSOR_RO(gcr, sz, MIPS_CM_COCB_OFS + off, co_##name)
 
-#define BUILD_CM_Cx_R_(name, off)				\
-	BUILD_CM_R_(cl_##name, MIPS_CM_CLCB_OFS + (off))	\
-	BUILD_CM_R_(co_##name, MIPS_CM_COCB_OFS + (off))
+#define GCR_CX_ACCESSOR_RW(sz, off, name)				\
+	CPS_ACCESSOR_RW(gcr, sz, MIPS_CM_CLCB_OFS + off, cl_##name)	\
+	CPS_ACCESSOR_RW(gcr, sz, MIPS_CM_COCB_OFS + off, co_##name)
 
-#define BUILD_CM_Cx__W(name, off)				\
-	BUILD_CM__W(cl_##name, MIPS_CM_CLCB_OFS + (off))	\
-	BUILD_CM__W(co_##name, MIPS_CM_COCB_OFS + (off))
+/* GCR_CONFIG - Information about the system */
+GCR_ACCESSOR_RO(64, 0x000, config)
+#define CM_GCR_CONFIG_CLUSTER_COH_CAPABLE	BIT_ULL(43)
+#define CM_GCR_CONFIG_CLUSTER_ID		GENMASK_ULL(39, 32)
+#define CM_GCR_CONFIG_NUM_CLUSTERS		GENMASK(29, 23)
+#define CM_GCR_CONFIG_NUMIOCU			GENMASK(15, 8)
+#define CM_GCR_CONFIG_PCORES			GENMASK(7, 0)
 
-#define BUILD_CM_Cx_RW(name, off)				\
-	BUILD_CM_Cx_R_(name, off)				\
-	BUILD_CM_Cx__W(name, off)
-
-/* GCB register accessor functions */
-BUILD_CM_R_(config,		MIPS_CM_GCB_OFS + 0x00)
-BUILD_CM_RW(base,		MIPS_CM_GCB_OFS + 0x08)
-BUILD_CM_RW(access,		MIPS_CM_GCB_OFS + 0x20)
-BUILD_CM_R_(rev,		MIPS_CM_GCB_OFS + 0x30)
-BUILD_CM_RW(err_control,	MIPS_CM_GCB_OFS + 0x38)
-BUILD_CM_RW(error_mask,		MIPS_CM_GCB_OFS + 0x40)
-BUILD_CM_RW(error_cause,	MIPS_CM_GCB_OFS + 0x48)
-BUILD_CM_RW(error_addr,		MIPS_CM_GCB_OFS + 0x50)
-BUILD_CM_RW(error_mult,		MIPS_CM_GCB_OFS + 0x58)
-BUILD_CM_RW(l2_only_sync_base,	MIPS_CM_GCB_OFS + 0x70)
-BUILD_CM_RW(gic_base,		MIPS_CM_GCB_OFS + 0x80)
-BUILD_CM_RW(cpc_base,		MIPS_CM_GCB_OFS + 0x88)
-BUILD_CM_RW(reg0_base,		MIPS_CM_GCB_OFS + 0x90)
-BUILD_CM_RW(reg0_mask,		MIPS_CM_GCB_OFS + 0x98)
-BUILD_CM_RW(reg1_base,		MIPS_CM_GCB_OFS + 0xa0)
-BUILD_CM_RW(reg1_mask,		MIPS_CM_GCB_OFS + 0xa8)
-BUILD_CM_RW(reg2_base,		MIPS_CM_GCB_OFS + 0xb0)
-BUILD_CM_RW(reg2_mask,		MIPS_CM_GCB_OFS + 0xb8)
-BUILD_CM_RW(reg3_base,		MIPS_CM_GCB_OFS + 0xc0)
-BUILD_CM_RW(reg3_mask,		MIPS_CM_GCB_OFS + 0xc8)
-BUILD_CM_R_(gic_status,		MIPS_CM_GCB_OFS + 0xd0)
-BUILD_CM_R_(cpc_status,		MIPS_CM_GCB_OFS + 0xf0)
-BUILD_CM_RW(l2_config,		MIPS_CM_GCB_OFS + 0x130)
-BUILD_CM_RW(sys_config2,	MIPS_CM_GCB_OFS + 0x150)
-BUILD_CM_RW(l2_pft_control,	MIPS_CM_GCB_OFS + 0x300)
-BUILD_CM_RW(l2_pft_control_b,	MIPS_CM_GCB_OFS + 0x308)
-BUILD_CM_RW(bev_base,		MIPS_CM_GCB_OFS + 0x680)
-
-/* Core Local & Core Other register accessor functions */
-BUILD_CM_Cx_RW(reset_release,	0x00)
-BUILD_CM_Cx_RW(coherence,	0x08)
-BUILD_CM_Cx_R_(config,		0x10)
-BUILD_CM_Cx_RW(other,		0x18)
-BUILD_CM_Cx_RW(reset_base,	0x20)
-BUILD_CM_Cx_R_(id,		0x28)
-BUILD_CM_Cx_RW(reset_ext_base,	0x30)
-BUILD_CM_Cx_R_(tcid_0_priority,	0x40)
-BUILD_CM_Cx_R_(tcid_1_priority,	0x48)
-BUILD_CM_Cx_R_(tcid_2_priority,	0x50)
-BUILD_CM_Cx_R_(tcid_3_priority,	0x58)
-BUILD_CM_Cx_R_(tcid_4_priority,	0x60)
-BUILD_CM_Cx_R_(tcid_5_priority,	0x68)
-BUILD_CM_Cx_R_(tcid_6_priority,	0x70)
-BUILD_CM_Cx_R_(tcid_7_priority,	0x78)
-BUILD_CM_Cx_R_(tcid_8_priority,	0x80)
-
-/* GCR_CONFIG register fields */
-#define CM_GCR_CONFIG_NUMIOCU_SHF		8
-#define CM_GCR_CONFIG_NUMIOCU_MSK		(_ULCAST_(0xf) << 8)
-#define CM_GCR_CONFIG_PCORES_SHF		0
-#define CM_GCR_CONFIG_PCORES_MSK		(_ULCAST_(0xff) << 0)
-
-/* GCR_BASE register fields */
-#define CM_GCR_BASE_GCRBASE_SHF			15
-#define CM_GCR_BASE_GCRBASE_MSK			(_ULCAST_(0x1ffff) << 15)
-#define CM_GCR_BASE_CMDEFTGT_SHF		0
-#define CM_GCR_BASE_CMDEFTGT_MSK		(_ULCAST_(0x3) << 0)
+/* GCR_BASE - Base address of the Global Configuration Registers (GCRs) */
+GCR_ACCESSOR_RW(64, 0x008, base)
+#define CM_GCR_BASE_GCRBASE			GENMASK_ULL(47, 15)
+#define CM_GCR_BASE_CMDEFTGT			GENMASK(1, 0)
 #define  CM_GCR_BASE_CMDEFTGT_DISABLED		0
 #define  CM_GCR_BASE_CMDEFTGT_MEM		1
 #define  CM_GCR_BASE_CMDEFTGT_IOCU0		2
 #define  CM_GCR_BASE_CMDEFTGT_IOCU1		3
 
-/* GCR_RESET_EXT_BASE register fields */
-#define CM_GCR_RESET_EXT_BASE_EVARESET		BIT(31)
-#define CM_GCR_RESET_EXT_BASE_UEB		BIT(30)
+/* GCR_ACCESS - Controls core/IOCU access to GCRs */
+GCR_ACCESSOR_RW(32, 0x020, access)
+#define CM_GCR_ACCESS_ACCESSEN			GENMASK(7, 0)
 
-/* GCR_ACCESS register fields */
-#define CM_GCR_ACCESS_ACCESSEN_SHF		0
-#define CM_GCR_ACCESS_ACCESSEN_MSK		(_ULCAST_(0xff) << 0)
-
-/* GCR_REV register fields */
-#define CM_GCR_REV_MAJOR_SHF			8
-#define CM_GCR_REV_MAJOR_MSK			(_ULCAST_(0xff) << 8)
-#define CM_GCR_REV_MINOR_SHF			0
-#define CM_GCR_REV_MINOR_MSK			(_ULCAST_(0xff) << 0)
+/* GCR_REV - Indicates the Coherence Manager revision */
+GCR_ACCESSOR_RO(32, 0x030, rev)
+#define CM_GCR_REV_MAJOR			GENMASK(15, 8)
+#define CM_GCR_REV_MINOR			GENMASK(7, 0)
 
 #define CM_ENCODE_REV(major, minor) \
-		(((major) << CM_GCR_REV_MAJOR_SHF) | \
-		 ((minor) << CM_GCR_REV_MINOR_SHF))
+		(((major) << __ffs(CM_GCR_REV_MAJOR)) | \
+		 ((minor) << __ffs(CM_GCR_REV_MINOR)))
 
 #define CM_REV_CM2				CM_ENCODE_REV(6, 0)
 #define CM_REV_CM2_5				CM_ENCODE_REV(7, 0)
 #define CM_REV_CM3				CM_ENCODE_REV(8, 0)
+#define CM_REV_CM3_5				CM_ENCODE_REV(9, 0)
 
-/* GCR_ERR_CONTROL register fields */
-#define CM_GCR_ERR_CONTROL_L2_ECC_EN_SHF	1
-#define CM_GCR_ERR_CONTROL_L2_ECC_EN_MSK	(_ULCAST_(0x1) << 1)
-#define CM_GCR_ERR_CONTROL_L2_ECC_SUPPORT_SHF	0
-#define CM_GCR_ERR_CONTROL_L2_ECC_SUPPORT_MSK	(_ULCAST_(0x1) << 0)
+/* GCR_ERR_CONTROL - Control error checking logic */
+GCR_ACCESSOR_RW(32, 0x038, err_control)
+#define CM_GCR_ERR_CONTROL_L2_ECC_EN		BIT(1)
+#define CM_GCR_ERR_CONTROL_L2_ECC_SUPPORT	BIT(0)
 
-/* GCR_ERROR_CAUSE register fields */
-#define CM_GCR_ERROR_CAUSE_ERRTYPE_SHF		27
-#define CM_GCR_ERROR_CAUSE_ERRTYPE_MSK		(_ULCAST_(0x1f) << 27)
-#define CM3_GCR_ERROR_CAUSE_ERRTYPE_SHF		58
-#define CM3_GCR_ERROR_CAUSE_ERRTYPE_MSK		GENMASK_ULL(63, 58)
-#define CM_GCR_ERROR_CAUSE_ERRINFO_SHF		0
-#define CM_GCR_ERROR_CAUSE_ERRINGO_MSK		(_ULCAST_(0x7ffffff) << 0)
+/* GCR_ERR_MASK - Control which errors are reported as interrupts */
+GCR_ACCESSOR_RW(64, 0x040, error_mask)
 
-/* GCR_ERROR_MULT register fields */
-#define CM_GCR_ERROR_MULT_ERR2ND_SHF		0
-#define CM_GCR_ERROR_MULT_ERR2ND_MSK		(_ULCAST_(0x1f) << 0)
+/* GCR_ERR_CAUSE - Indicates the type of error that occurred */
+GCR_ACCESSOR_RW(64, 0x048, error_cause)
+#define CM_GCR_ERROR_CAUSE_ERRTYPE		GENMASK(31, 27)
+#define CM3_GCR_ERROR_CAUSE_ERRTYPE		GENMASK_ULL(63, 58)
+#define CM_GCR_ERROR_CAUSE_ERRINFO		GENMASK(26, 0)
 
-/* GCR_L2_ONLY_SYNC_BASE register fields */
-#define CM_GCR_L2_ONLY_SYNC_BASE_SYNCBASE_SHF	12
-#define CM_GCR_L2_ONLY_SYNC_BASE_SYNCBASE_MSK	(_ULCAST_(0xfffff) << 12)
-#define CM_GCR_L2_ONLY_SYNC_BASE_SYNCEN_SHF	0
-#define CM_GCR_L2_ONLY_SYNC_BASE_SYNCEN_MSK	(_ULCAST_(0x1) << 0)
+/* GCR_ERR_ADDR - Indicates the address associated with an error */
+GCR_ACCESSOR_RW(64, 0x050, error_addr)
 
-/* GCR_GIC_BASE register fields */
-#define CM_GCR_GIC_BASE_GICBASE_SHF		17
-#define CM_GCR_GIC_BASE_GICBASE_MSK		(_ULCAST_(0x7fff) << 17)
-#define CM_GCR_GIC_BASE_GICEN_SHF		0
-#define CM_GCR_GIC_BASE_GICEN_MSK		(_ULCAST_(0x1) << 0)
+/* GCR_ERR_MULT - Indicates when multiple errors have occurred */
+GCR_ACCESSOR_RW(64, 0x058, error_mult)
+#define CM_GCR_ERROR_MULT_ERR2ND		GENMASK(4, 0)
 
-/* GCR_CPC_BASE register fields */
-#define CM_GCR_CPC_BASE_CPCBASE_SHF		15
-#define CM_GCR_CPC_BASE_CPCBASE_MSK		(_ULCAST_(0x1ffff) << 15)
-#define CM_GCR_CPC_BASE_CPCEN_SHF		0
-#define CM_GCR_CPC_BASE_CPCEN_MSK		(_ULCAST_(0x1) << 0)
+/* GCR_L2_ONLY_SYNC_BASE - Base address of the L2 cache-only sync region */
+GCR_ACCESSOR_RW(64, 0x070, l2_only_sync_base)
+#define CM_GCR_L2_ONLY_SYNC_BASE_SYNCBASE	GENMASK(31, 12)
+#define CM_GCR_L2_ONLY_SYNC_BASE_SYNCEN		BIT(0)
 
-/* GCR_GIC_STATUS register fields */
-#define CM_GCR_GIC_STATUS_GICEX_SHF		0
-#define CM_GCR_GIC_STATUS_GICEX_MSK		(_ULCAST_(0x1) << 0)
+/* GCR_GIC_BASE - Base address of the Global Interrupt Controller (GIC) */
+GCR_ACCESSOR_RW(64, 0x080, gic_base)
+#define CM_GCR_GIC_BASE_GICBASE			GENMASK(31, 17)
+#define CM_GCR_GIC_BASE_GICEN			BIT(0)
 
-/* GCR_REGn_BASE register fields */
-#define CM_GCR_REGn_BASE_BASEADDR_SHF		16
-#define CM_GCR_REGn_BASE_BASEADDR_MSK		(_ULCAST_(0xffff) << 16)
+/* GCR_CPC_BASE - Base address of the Cluster Power Controller (CPC) */
+GCR_ACCESSOR_RW(64, 0x088, cpc_base)
+#define CM_GCR_CPC_BASE_CPCBASE			GENMASK(31, 15)
+#define CM_GCR_CPC_BASE_CPCEN			BIT(0)
 
-/* GCR_REGn_MASK register fields */
-#define CM_GCR_REGn_MASK_ADDRMASK_SHF		16
-#define CM_GCR_REGn_MASK_ADDRMASK_MSK		(_ULCAST_(0xffff) << 16)
-#define CM_GCR_REGn_MASK_CCAOVR_SHF		5
-#define CM_GCR_REGn_MASK_CCAOVR_MSK		(_ULCAST_(0x3) << 5)
-#define CM_GCR_REGn_MASK_CCAOVREN_SHF		4
-#define CM_GCR_REGn_MASK_CCAOVREN_MSK		(_ULCAST_(0x1) << 4)
-#define CM_GCR_REGn_MASK_DROPL2_SHF		2
-#define CM_GCR_REGn_MASK_DROPL2_MSK		(_ULCAST_(0x1) << 2)
-#define CM_GCR_REGn_MASK_CMTGT_SHF		0
-#define CM_GCR_REGn_MASK_CMTGT_MSK		(_ULCAST_(0x3) << 0)
-#define  CM_GCR_REGn_MASK_CMTGT_DISABLED	(_ULCAST_(0x0) << 0)
-#define  CM_GCR_REGn_MASK_CMTGT_MEM		(_ULCAST_(0x1) << 0)
-#define  CM_GCR_REGn_MASK_CMTGT_IOCU0		(_ULCAST_(0x2) << 0)
-#define  CM_GCR_REGn_MASK_CMTGT_IOCU1		(_ULCAST_(0x3) << 0)
+/* GCR_REGn_BASE - Base addresses of CM address regions */
+GCR_ACCESSOR_RW(64, 0x090, reg0_base)
+GCR_ACCESSOR_RW(64, 0x0a0, reg1_base)
+GCR_ACCESSOR_RW(64, 0x0b0, reg2_base)
+GCR_ACCESSOR_RW(64, 0x0c0, reg3_base)
+#define CM_GCR_REGn_BASE_BASEADDR		GENMASK(31, 16)
 
-/* GCR_GIC_STATUS register fields */
-#define CM_GCR_GIC_STATUS_EX_SHF		0
-#define CM_GCR_GIC_STATUS_EX_MSK		(_ULCAST_(0x1) << 0)
+/* GCR_REGn_MASK - Size & destination of CM address regions */
+GCR_ACCESSOR_RW(64, 0x098, reg0_mask)
+GCR_ACCESSOR_RW(64, 0x0a8, reg1_mask)
+GCR_ACCESSOR_RW(64, 0x0b8, reg2_mask)
+GCR_ACCESSOR_RW(64, 0x0c8, reg3_mask)
+#define CM_GCR_REGn_MASK_ADDRMASK		GENMASK(31, 16)
+#define CM_GCR_REGn_MASK_CCAOVR			GENMASK(7, 5)
+#define CM_GCR_REGn_MASK_CCAOVREN		BIT(4)
+#define CM_GCR_REGn_MASK_DROPL2			BIT(2)
+#define CM_GCR_REGn_MASK_CMTGT			GENMASK(1, 0)
+#define  CM_GCR_REGn_MASK_CMTGT_DISABLED	0x0
+#define  CM_GCR_REGn_MASK_CMTGT_MEM		0x1
+#define  CM_GCR_REGn_MASK_CMTGT_IOCU0		0x2
+#define  CM_GCR_REGn_MASK_CMTGT_IOCU1		0x3
 
-/* GCR_CPC_STATUS register fields */
-#define CM_GCR_CPC_STATUS_EX_SHF		0
-#define CM_GCR_CPC_STATUS_EX_MSK		(_ULCAST_(0x1) << 0)
+/* GCR_GIC_STATUS - Indicates presence of a Global Interrupt Controller (GIC) */
+GCR_ACCESSOR_RO(32, 0x0d0, gic_status)
+#define CM_GCR_GIC_STATUS_EX			BIT(0)
 
-/* GCR_L2_CONFIG register fields */
-#define CM_GCR_L2_CONFIG_BYPASS_SHF		20
-#define CM_GCR_L2_CONFIG_BYPASS_MSK		(_ULCAST_(0x1) << 20)
-#define CM_GCR_L2_CONFIG_SET_SIZE_SHF		12
-#define CM_GCR_L2_CONFIG_SET_SIZE_MSK		(_ULCAST_(0xf) << 12)
-#define CM_GCR_L2_CONFIG_LINE_SIZE_SHF		8
-#define CM_GCR_L2_CONFIG_LINE_SIZE_MSK		(_ULCAST_(0xf) << 8)
-#define CM_GCR_L2_CONFIG_ASSOC_SHF		0
-#define CM_GCR_L2_CONFIG_ASSOC_MSK		(_ULCAST_(0xff) << 0)
+/* GCR_CPC_STATUS - Indicates presence of a Cluster Power Controller (CPC) */
+GCR_ACCESSOR_RO(32, 0x0f0, cpc_status)
+#define CM_GCR_CPC_STATUS_EX			BIT(0)
 
-/* GCR_SYS_CONFIG2 register fields */
-#define CM_GCR_SYS_CONFIG2_MAXVPW_SHF		0
-#define CM_GCR_SYS_CONFIG2_MAXVPW_MSK		(_ULCAST_(0xf) << 0)
+/* GCR_L2_CONFIG - Indicates L2 cache configuration when Config5.L2C=1 */
+GCR_ACCESSOR_RW(32, 0x130, l2_config)
+#define CM_GCR_L2_CONFIG_BYPASS			BIT(20)
+#define CM_GCR_L2_CONFIG_SET_SIZE		GENMASK(15, 12)
+#define CM_GCR_L2_CONFIG_LINE_SIZE		GENMASK(11, 8)
+#define CM_GCR_L2_CONFIG_ASSOC			GENMASK(7, 0)
 
-/* GCR_L2_PFT_CONTROL register fields */
-#define CM_GCR_L2_PFT_CONTROL_PAGEMASK_SHF	12
-#define CM_GCR_L2_PFT_CONTROL_PAGEMASK_MSK	(_ULCAST_(0xfffff) << 12)
-#define CM_GCR_L2_PFT_CONTROL_PFTEN_SHF		8
-#define CM_GCR_L2_PFT_CONTROL_PFTEN_MSK		(_ULCAST_(0x1) << 8)
-#define CM_GCR_L2_PFT_CONTROL_NPFT_SHF		0
-#define CM_GCR_L2_PFT_CONTROL_NPFT_MSK		(_ULCAST_(0xff) << 0)
+/* GCR_SYS_CONFIG2 - Further information about the system */
+GCR_ACCESSOR_RO(32, 0x150, sys_config2)
+#define CM_GCR_SYS_CONFIG2_MAXVPW		GENMASK(3, 0)
 
-/* GCR_L2_PFT_CONTROL_B register fields */
-#define CM_GCR_L2_PFT_CONTROL_B_CEN_SHF		8
-#define CM_GCR_L2_PFT_CONTROL_B_CEN_MSK		(_ULCAST_(0x1) << 8)
-#define CM_GCR_L2_PFT_CONTROL_B_PORTID_SHF	0
-#define CM_GCR_L2_PFT_CONTROL_B_PORTID_MSK	(_ULCAST_(0xff) << 0)
+/* GCR_L2_PFT_CONTROL - Controls hardware L2 prefetching */
+GCR_ACCESSOR_RW(32, 0x300, l2_pft_control)
+#define CM_GCR_L2_PFT_CONTROL_PAGEMASK		GENMASK(31, 12)
+#define CM_GCR_L2_PFT_CONTROL_PFTEN		BIT(8)
+#define CM_GCR_L2_PFT_CONTROL_NPFT		GENMASK(7, 0)
 
-/* GCR_Cx_COHERENCE register fields */
-#define CM_GCR_Cx_COHERENCE_COHDOMAINEN_SHF	0
-#define CM_GCR_Cx_COHERENCE_COHDOMAINEN_MSK	(_ULCAST_(0xff) << 0)
-#define CM3_GCR_Cx_COHERENCE_COHEN_MSK		(_ULCAST_(0x1) << 0)
+/* GCR_L2_PFT_CONTROL_B - Controls hardware L2 prefetching */
+GCR_ACCESSOR_RW(32, 0x308, l2_pft_control_b)
+#define CM_GCR_L2_PFT_CONTROL_B_CEN		BIT(8)
+#define CM_GCR_L2_PFT_CONTROL_B_PORTID		GENMASK(7, 0)
 
-/* GCR_Cx_CONFIG register fields */
-#define CM_GCR_Cx_CONFIG_IOCUTYPE_SHF		10
-#define CM_GCR_Cx_CONFIG_IOCUTYPE_MSK		(_ULCAST_(0x3) << 10)
-#define CM_GCR_Cx_CONFIG_PVPE_SHF		0
-#define CM_GCR_Cx_CONFIG_PVPE_MSK		(_ULCAST_(0x3ff) << 0)
+/* GCR_L2SM_COP - L2 cache op state machine control */
+GCR_ACCESSOR_RW(32, 0x620, l2sm_cop)
+#define CM_GCR_L2SM_COP_PRESENT			BIT(31)
+#define CM_GCR_L2SM_COP_RESULT			GENMASK(8, 6)
+#define  CM_GCR_L2SM_COP_RESULT_DONTCARE	0
+#define  CM_GCR_L2SM_COP_RESULT_DONE_OK		1
+#define  CM_GCR_L2SM_COP_RESULT_DONE_ERROR	2
+#define  CM_GCR_L2SM_COP_RESULT_ABORT_OK	3
+#define  CM_GCR_L2SM_COP_RESULT_ABORT_ERROR	4
+#define CM_GCR_L2SM_COP_RUNNING			BIT(5)
+#define CM_GCR_L2SM_COP_TYPE
GENMASK(4, 2) 265 + #define CM_GCR_L2SM_COP_TYPE_IDX_WBINV 0 266 + #define CM_GCR_L2SM_COP_TYPE_IDX_STORETAG 1 267 + #define CM_GCR_L2SM_COP_TYPE_IDX_STORETAGDATA 2 268 + #define CM_GCR_L2SM_COP_TYPE_HIT_INV 4 269 + #define CM_GCR_L2SM_COP_TYPE_HIT_WBINV 5 270 + #define CM_GCR_L2SM_COP_TYPE_HIT_WB 6 271 + #define CM_GCR_L2SM_COP_TYPE_FETCHLOCK 7 272 + #define CM_GCR_L2SM_COP_CMD GENMASK(1, 0) 273 + #define CM_GCR_L2SM_COP_CMD_START 1 /* only when idle */ 274 + #define CM_GCR_L2SM_COP_CMD_ABORT 3 /* only when running */ 378 275 379 - /* GCR_Cx_OTHER register fields */ 380 - #define CM_GCR_Cx_OTHER_CORENUM_SHF 16 381 - #define CM_GCR_Cx_OTHER_CORENUM_MSK (_ULCAST_(0xffff) << 16) 382 - #define CM3_GCR_Cx_OTHER_CORE_SHF 8 383 - #define CM3_GCR_Cx_OTHER_CORE_MSK (_ULCAST_(0x3f) << 8) 384 - #define CM3_GCR_Cx_OTHER_VP_SHF 0 385 - #define CM3_GCR_Cx_OTHER_VP_MSK (_ULCAST_(0x7) << 0) 276 + /* GCR_L2SM_TAG_ADDR_COP - L2 cache op state machine address control */ 277 + GCR_ACCESSOR_RW(64, 0x628, l2sm_tag_addr_cop) 278 + #define CM_GCR_L2SM_TAG_ADDR_COP_NUM_LINES GENMASK_ULL(63, 48) 279 + #define CM_GCR_L2SM_TAG_ADDR_COP_START_TAG GENMASK_ULL(47, 6) 386 280 387 - /* GCR_Cx_RESET_BASE register fields */ 388 - #define CM_GCR_Cx_RESET_BASE_BEVEXCBASE_SHF 12 389 - #define CM_GCR_Cx_RESET_BASE_BEVEXCBASE_MSK (_ULCAST_(0xfffff) << 12) 281 + /* GCR_BEV_BASE - Controls the location of the BEV for powered up cores */ 282 + GCR_ACCESSOR_RW(64, 0x680, bev_base) 390 283 391 - /* GCR_Cx_RESET_EXT_BASE register fields */ 392 - #define CM_GCR_Cx_RESET_EXT_BASE_EVARESET_SHF 31 393 - #define CM_GCR_Cx_RESET_EXT_BASE_EVARESET_MSK (_ULCAST_(0x1) << 31) 394 - #define CM_GCR_Cx_RESET_EXT_BASE_UEB_SHF 30 395 - #define CM_GCR_Cx_RESET_EXT_BASE_UEB_MSK (_ULCAST_(0x1) << 30) 396 - #define CM_GCR_Cx_RESET_EXT_BASE_BEVEXCMASK_SHF 20 397 - #define CM_GCR_Cx_RESET_EXT_BASE_BEVEXCMASK_MSK (_ULCAST_(0xff) << 20) 398 - #define CM_GCR_Cx_RESET_EXT_BASE_BEVEXCPA_SHF 1 399 - #define 
CM_GCR_Cx_RESET_EXT_BASE_BEVEXCPA_MSK (_ULCAST_(0x7f) << 1) 400 - #define CM_GCR_Cx_RESET_EXT_BASE_PRESENT_SHF 0 401 - #define CM_GCR_Cx_RESET_EXT_BASE_PRESENT_MSK (_ULCAST_(0x1) << 0) 284 + /* GCR_Cx_RESET_RELEASE - Controls core reset for CM 1.x */ 285 + GCR_CX_ACCESSOR_RW(32, 0x000, reset_release) 402 286 403 - /** 404 - * mips_cm_numcores - return the number of cores present in the system 405 - * 406 - * Returns the value of the PCORES field of the GCR_CONFIG register plus 1, or 407 - * zero if no Coherence Manager is present. 408 - */ 409 - static inline unsigned mips_cm_numcores(void) 410 - { 411 - if (!mips_cm_present()) 412 - return 0; 287 + /* GCR_Cx_COHERENCE - Controls core coherence */ 288 + GCR_CX_ACCESSOR_RW(32, 0x008, coherence) 289 + #define CM_GCR_Cx_COHERENCE_COHDOMAINEN GENMASK(7, 0) 290 + #define CM3_GCR_Cx_COHERENCE_COHEN BIT(0) 413 291 414 - return ((read_gcr_config() & CM_GCR_CONFIG_PCORES_MSK) 415 - >> CM_GCR_CONFIG_PCORES_SHF) + 1; 416 - } 292 + /* GCR_Cx_CONFIG - Information about a core's configuration */ 293 + GCR_CX_ACCESSOR_RO(32, 0x010, config) 294 + #define CM_GCR_Cx_CONFIG_IOCUTYPE GENMASK(11, 10) 295 + #define CM_GCR_Cx_CONFIG_PVPE GENMASK(9, 0) 417 296 418 - /** 419 - * mips_cm_numiocu - return the number of IOCUs present in the system 420 - * 421 - * Returns the value of the NUMIOCU field of the GCR_CONFIG register, or zero 422 - * if no Coherence Manager is present. 
423 - */ 424 - static inline unsigned mips_cm_numiocu(void) 425 - { 426 - if (!mips_cm_present()) 427 - return 0; 297 + /* GCR_Cx_OTHER - Configure the core-other/redirect GCR block */ 298 + GCR_CX_ACCESSOR_RW(32, 0x018, other) 299 + #define CM_GCR_Cx_OTHER_CORENUM GENMASK(31, 16) /* CM < 3 */ 300 + #define CM_GCR_Cx_OTHER_CLUSTER_EN BIT(31) /* CM >= 3.5 */ 301 + #define CM_GCR_Cx_OTHER_GIC_EN BIT(30) /* CM >= 3.5 */ 302 + #define CM_GCR_Cx_OTHER_BLOCK GENMASK(25, 24) /* CM >= 3.5 */ 303 + #define CM_GCR_Cx_OTHER_BLOCK_LOCAL 0 304 + #define CM_GCR_Cx_OTHER_BLOCK_GLOBAL 1 305 + #define CM_GCR_Cx_OTHER_BLOCK_USER 2 306 + #define CM_GCR_Cx_OTHER_BLOCK_GLOBAL_HIGH 3 307 + #define CM_GCR_Cx_OTHER_CLUSTER GENMASK(21, 16) /* CM >= 3.5 */ 308 + #define CM3_GCR_Cx_OTHER_CORE GENMASK(13, 8) /* CM >= 3 */ 309 + #define CM_GCR_Cx_OTHER_CORE_CM 32 310 + #define CM3_GCR_Cx_OTHER_VP GENMASK(2, 0) /* CM >= 3 */ 428 311 429 - return (read_gcr_config() & CM_GCR_CONFIG_NUMIOCU_MSK) 430 - >> CM_GCR_CONFIG_NUMIOCU_SHF; 431 - } 312 + /* GCR_Cx_RESET_BASE - Configure where powered up cores will fetch from */ 313 + GCR_CX_ACCESSOR_RW(32, 0x020, reset_base) 314 + #define CM_GCR_Cx_RESET_BASE_BEVEXCBASE GENMASK(31, 12) 315 + 316 + /* GCR_Cx_ID - Identify the current core */ 317 + GCR_CX_ACCESSOR_RO(32, 0x028, id) 318 + #define CM_GCR_Cx_ID_CLUSTER GENMASK(15, 8) 319 + #define CM_GCR_Cx_ID_CORE GENMASK(7, 0) 320 + 321 + /* GCR_Cx_RESET_EXT_BASE - Configure behaviour when cores reset or power up */ 322 + GCR_CX_ACCESSOR_RW(32, 0x030, reset_ext_base) 323 + #define CM_GCR_Cx_RESET_EXT_BASE_EVARESET BIT(31) 324 + #define CM_GCR_Cx_RESET_EXT_BASE_UEB BIT(30) 325 + #define CM_GCR_Cx_RESET_EXT_BASE_BEVEXCMASK GENMASK(27, 20) 326 + #define CM_GCR_Cx_RESET_EXT_BASE_BEVEXCPA GENMASK(7, 1) 327 + #define CM_GCR_Cx_RESET_EXT_BASE_PRESENT BIT(0) 432 328 433 329 /** 434 330 * mips_cm_l2sync - perform an L2-only sync operation ··· 369 469 uint32_t cfg; 370 470 371 471 if (mips_cm_revision() >= CM_REV_CM3) 
372 - return read_gcr_sys_config2() & CM_GCR_SYS_CONFIG2_MAXVPW_MSK; 472 + return read_gcr_sys_config2() & CM_GCR_SYS_CONFIG2_MAXVPW; 373 473 374 474 if (mips_cm_present()) { 375 475 /* ··· 377 477 * number of VP(E)s, and if that ever changes then this will 378 478 * need revisiting. 379 479 */ 380 - cfg = read_gcr_cl_config() & CM_GCR_Cx_CONFIG_PVPE_MSK; 381 - return (cfg >> CM_GCR_Cx_CONFIG_PVPE_SHF) + 1; 480 + cfg = read_gcr_cl_config() & CM_GCR_Cx_CONFIG_PVPE; 481 + return (cfg >> __ffs(CM_GCR_Cx_CONFIG_PVPE)) + 1; 382 482 } 383 483 384 484 if (IS_ENABLED(CONFIG_SMP)) ··· 399 499 */ 400 500 static inline unsigned int mips_cm_vp_id(unsigned int cpu) 401 501 { 402 - unsigned int core = cpu_data[cpu].core; 502 + unsigned int core = cpu_core(&cpu_data[cpu]); 403 503 unsigned int vp = cpu_vpe_id(&cpu_data[cpu]); 404 504 405 505 return (core * mips_cm_max_vp_width()) + vp; ··· 408 508 #ifdef CONFIG_MIPS_CM 409 509 410 510 /** 411 - * mips_cm_lock_other - lock access to another core 511 + * mips_cm_lock_other - lock access to redirect/other region 512 + * @cluster: the other cluster to be accessed 412 513 * @core: the other core to be accessed 413 514 * @vp: the VP within the other core to be accessed 515 + * @block: the register block to be accessed 414 516 * 415 - * Call before operating upon a core via the 'other' register region in 416 - * order to prevent the region being moved during access. Must be followed 417 - * by a call to mips_cm_unlock_other. 517 + * Configure the redirect/other region for the local core/VP (depending upon 518 + * the CM revision) to target the specified @cluster, @core, @vp & register 519 + * @block. Must be called before using the redirect/other region, and followed 520 + * by a call to mips_cm_unlock_other() when access to the redirect/other region 521 + * is complete. 
522 + * 523 + * This function acquires a spinlock such that code between it & 524 + * mips_cm_unlock_other() calls cannot be pre-empted by anything which may 525 + * reconfigure the redirect/other region, and cannot be interfered with by 526 + * another VP in the core. As such calls to this function should not be nested. 418 527 */ 419 - extern void mips_cm_lock_other(unsigned int core, unsigned int vp); 528 + extern void mips_cm_lock_other(unsigned int cluster, unsigned int core, 529 + unsigned int vp, unsigned int block); 420 530 421 531 /** 422 - * mips_cm_unlock_other - unlock access to another core 532 + * mips_cm_unlock_other - unlock access to redirect/other region 423 533 * 424 - * Call after operating upon another core via the 'other' register region. 425 - * Must be called after mips_cm_lock_other. 534 + * Must be called after mips_cm_lock_other() once all required access to the 535 + * redirect/other region has been completed. 426 536 */ 427 537 extern void mips_cm_unlock_other(void); 428 538 429 539 #else /* !CONFIG_MIPS_CM */ 430 540 431 - static inline void mips_cm_lock_other(unsigned int core, unsigned int vp) { } 541 + static inline void mips_cm_lock_other(unsigned int cluster, unsigned int core, 542 + unsigned int vp, unsigned int block) { } 432 543 static inline void mips_cm_unlock_other(void) { } 433 544 434 545 #endif /* !CONFIG_MIPS_CM */ 546 + 547 + /** 548 + * mips_cm_lock_other_cpu - lock access to redirect/other region 549 + * @cpu: the other CPU whose register we want to access 550 + * 551 + * Configure the redirect/other region for the local core/VP (depending upon 552 + * the CM revision) to target the specified @cpu & register @block. This is 553 + * equivalent to calling mips_cm_lock_other() but accepts a Linux CPU number 554 + * for convenience. 
555 + */ 556 + static inline void mips_cm_lock_other_cpu(unsigned int cpu, unsigned int block) 557 + { 558 + struct cpuinfo_mips *d = &cpu_data[cpu]; 559 + 560 + mips_cm_lock_other(cpu_cluster(d), cpu_core(d), cpu_vpe_id(d), block); 561 + } 435 562 436 563 #endif /* __MIPS_ASM_MIPS_CM_H__ */
+82 -75
arch/mips/include/asm/mips-cpc.h
···
  * option) any later version.
  */

+#ifndef __MIPS_ASM_MIPS_CPS_H__
+# error Please include asm/mips-cps.h rather than asm/mips-cpc.h
+#endif
+
 #ifndef __MIPS_ASM_MIPS_CPC_H__
 #define __MIPS_ASM_MIPS_CPC_H__

-#include <linux/io.h>
-#include <linux/types.h>
+#include <linux/bitops.h>
+#include <linux/errno.h>

 /* The base address of the CPC registers */
 extern void __iomem *mips_cpc_base;
···
 #define MIPS_CPC_CLCB_OFS	0x2000
 #define MIPS_CPC_COCB_OFS	0x4000

-/* Macros to ease the creation of register access functions */
-#define BUILD_CPC_R_(name, off)					\
-static inline u32 *addr_cpc_##name(void)			\
-{								\
-	return (u32 *)(mips_cpc_base + (off));			\
-}								\
-								\
-static inline u32 read_cpc_##name(void)				\
-{								\
-	return __raw_readl(mips_cpc_base + (off));		\
-}
+#define CPC_ACCESSOR_RO(sz, off, name)					\
+	CPS_ACCESSOR_RO(cpc, sz, MIPS_CPC_GCB_OFS + off, name)		\
+	CPS_ACCESSOR_RO(cpc, sz, MIPS_CPC_COCB_OFS + off, redir_##name)

-#define BUILD_CPC__W(name, off)					\
-static inline void write_cpc_##name(u32 value)			\
-{								\
-	__raw_writel(value, mips_cpc_base + (off));		\
-}
+#define CPC_ACCESSOR_RW(sz, off, name)					\
+	CPS_ACCESSOR_RW(cpc, sz, MIPS_CPC_GCB_OFS + off, name)		\
+	CPS_ACCESSOR_RW(cpc, sz, MIPS_CPC_COCB_OFS + off, redir_##name)

-#define BUILD_CPC_RW(name, off)					\
-	BUILD_CPC_R_(name, off)					\
-	BUILD_CPC__W(name, off)
+#define CPC_CX_ACCESSOR_RO(sz, off, name)				\
+	CPS_ACCESSOR_RO(cpc, sz, MIPS_CPC_CLCB_OFS + off, cl_##name)	\
+	CPS_ACCESSOR_RO(cpc, sz, MIPS_CPC_COCB_OFS + off, co_##name)

-#define BUILD_CPC_Cx_R_(name, off)				\
-	BUILD_CPC_R_(cl_##name, MIPS_CPC_CLCB_OFS + (off))	\
-	BUILD_CPC_R_(co_##name, MIPS_CPC_COCB_OFS + (off))
+#define CPC_CX_ACCESSOR_RW(sz, off, name)				\
+	CPS_ACCESSOR_RW(cpc, sz, MIPS_CPC_CLCB_OFS + off, cl_##name)	\
+	CPS_ACCESSOR_RW(cpc, sz, MIPS_CPC_COCB_OFS + off, co_##name)

-#define BUILD_CPC_Cx__W(name, off)				\
-	BUILD_CPC__W(cl_##name, MIPS_CPC_CLCB_OFS + (off))	\
-	BUILD_CPC__W(co_##name, MIPS_CPC_COCB_OFS + (off))
+/* CPC_ACCESS - Control core/IOCU access to CPC registers prior to CM 3 */
+CPC_ACCESSOR_RW(32, 0x000, access)

-#define BUILD_CPC_Cx_RW(name, off)				\
-	BUILD_CPC_Cx_R_(name, off)				\
-	BUILD_CPC_Cx__W(name, off)
+/* CPC_SEQDEL - Configure delays between command sequencer steps */
+CPC_ACCESSOR_RW(32, 0x008, seqdel)

-/* GCB register accessor functions */
-BUILD_CPC_RW(access, MIPS_CPC_GCB_OFS + 0x00)
-BUILD_CPC_RW(seqdel, MIPS_CPC_GCB_OFS + 0x08)
-BUILD_CPC_RW(rail, MIPS_CPC_GCB_OFS + 0x10)
-BUILD_CPC_RW(resetlen, MIPS_CPC_GCB_OFS + 0x18)
-BUILD_CPC_R_(revision, MIPS_CPC_GCB_OFS + 0x20)
+/* CPC_RAIL - Configure the delay from rail power-up to stability */
+CPC_ACCESSOR_RW(32, 0x010, rail)

-/* Core Local & Core Other accessor functions */
-BUILD_CPC_Cx_RW(cmd, 0x00)
-BUILD_CPC_Cx_RW(stat_conf, 0x08)
-BUILD_CPC_Cx_RW(other, 0x10)
-BUILD_CPC_Cx_RW(vp_stop, 0x20)
-BUILD_CPC_Cx_RW(vp_run, 0x28)
-BUILD_CPC_Cx_RW(vp_running, 0x30)
+/* CPC_RESETLEN - Configure the length of reset sequences */
+CPC_ACCESSOR_RW(32, 0x018, resetlen)

-/* CPC_Cx_CMD register fields */
-#define CPC_Cx_CMD_SHF	0
-#define CPC_Cx_CMD_MSK	(_ULCAST_(0xf) << 0)
-#define CPC_Cx_CMD_CLOCKOFF	(_ULCAST_(0x1) << 0)
-#define CPC_Cx_CMD_PWRDOWN	(_ULCAST_(0x2) << 0)
-#define CPC_Cx_CMD_PWRUP	(_ULCAST_(0x3) << 0)
-#define CPC_Cx_CMD_RESET	(_ULCAST_(0x4) << 0)
+/* CPC_REVISION - Indicates the revisison of the CPC */
+CPC_ACCESSOR_RO(32, 0x020, revision)

-/* CPC_Cx_STAT_CONF register fields */
-#define CPC_Cx_STAT_CONF_PWRUPE_SHF	23
-#define CPC_Cx_STAT_CONF_PWRUPE_MSK	(_ULCAST_(0x1) << 23)
-#define CPC_Cx_STAT_CONF_SEQSTATE_SHF	19
-#define CPC_Cx_STAT_CONF_SEQSTATE_MSK	(_ULCAST_(0xf) << 19)
-#define CPC_Cx_STAT_CONF_SEQSTATE_D0	(_ULCAST_(0x0) << 19)
-#define CPC_Cx_STAT_CONF_SEQSTATE_U0	(_ULCAST_(0x1) << 19)
-#define CPC_Cx_STAT_CONF_SEQSTATE_U1	(_ULCAST_(0x2) << 19)
-#define CPC_Cx_STAT_CONF_SEQSTATE_U2	(_ULCAST_(0x3) << 19)
-#define CPC_Cx_STAT_CONF_SEQSTATE_U3	(_ULCAST_(0x4) << 19)
-#define CPC_Cx_STAT_CONF_SEQSTATE_U4	(_ULCAST_(0x5) << 19)
-#define CPC_Cx_STAT_CONF_SEQSTATE_U5	(_ULCAST_(0x6) << 19)
-#define CPC_Cx_STAT_CONF_SEQSTATE_U6	(_ULCAST_(0x7) << 19)
-#define CPC_Cx_STAT_CONF_SEQSTATE_D1	(_ULCAST_(0x8) << 19)
-#define CPC_Cx_STAT_CONF_SEQSTATE_D3	(_ULCAST_(0x9) << 19)
-#define CPC_Cx_STAT_CONF_SEQSTATE_D2	(_ULCAST_(0xa) << 19)
-#define CPC_Cx_STAT_CONF_CLKGAT_IMPL_SHF	17
-#define CPC_Cx_STAT_CONF_CLKGAT_IMPL_MSK	(_ULCAST_(0x1) << 17)
-#define CPC_Cx_STAT_CONF_PWRDN_IMPL_SHF	16
-#define CPC_Cx_STAT_CONF_PWRDN_IMPL_MSK	(_ULCAST_(0x1) << 16)
-#define CPC_Cx_STAT_CONF_EJTAG_PROBE_SHF	15
-#define CPC_Cx_STAT_CONF_EJTAG_PROBE_MSK	(_ULCAST_(0x1) << 15)
+/* CPC_PWRUP_CTL - Control power to the Coherence Manager (CM) */
+CPC_ACCESSOR_RW(32, 0x030, pwrup_ctl)
+#define CPC_PWRUP_CTL_CM_PWRUP	BIT(0)

-/* CPC_Cx_OTHER register fields */
-#define CPC_Cx_OTHER_CORENUM_SHF	16
-#define CPC_Cx_OTHER_CORENUM_MSK	(_ULCAST_(0xff) << 16)
+/* CPC_CONFIG - Mirrors GCR_CONFIG */
+CPC_ACCESSOR_RW(64, 0x138, config)
+
+/* CPC_SYS_CONFIG - Control cluster endianness */
+CPC_ACCESSOR_RW(32, 0x140, sys_config)
+#define CPC_SYS_CONFIG_BE_IMMEDIATE	BIT(2)
+#define CPC_SYS_CONFIG_BE_STATUS	BIT(1)
+#define CPC_SYS_CONFIG_BE	BIT(0)
+
+/* CPC_Cx_CMD - Instruct the CPC to take action on a core */
+CPC_CX_ACCESSOR_RW(32, 0x000, cmd)
+#define CPC_Cx_CMD	GENMASK(3, 0)
+#define CPC_Cx_CMD_CLOCKOFF	0x1
+#define CPC_Cx_CMD_PWRDOWN	0x2
+#define CPC_Cx_CMD_PWRUP	0x3
+#define CPC_Cx_CMD_RESET	0x4
+
+/* CPC_Cx_STAT_CONF - Indicates core configuration & state */
+CPC_CX_ACCESSOR_RW(32, 0x008, stat_conf)
+#define CPC_Cx_STAT_CONF_PWRUPE	BIT(23)
+#define CPC_Cx_STAT_CONF_SEQSTATE	GENMASK(22, 19)
+#define CPC_Cx_STAT_CONF_SEQSTATE_D0	0x0
+#define CPC_Cx_STAT_CONF_SEQSTATE_U0	0x1
+#define CPC_Cx_STAT_CONF_SEQSTATE_U1	0x2
+#define CPC_Cx_STAT_CONF_SEQSTATE_U2	0x3
+#define CPC_Cx_STAT_CONF_SEQSTATE_U3	0x4
+#define CPC_Cx_STAT_CONF_SEQSTATE_U4	0x5
+#define CPC_Cx_STAT_CONF_SEQSTATE_U5	0x6
+#define CPC_Cx_STAT_CONF_SEQSTATE_U6	0x7
+#define CPC_Cx_STAT_CONF_SEQSTATE_D1	0x8
+#define CPC_Cx_STAT_CONF_SEQSTATE_D3	0x9
+#define CPC_Cx_STAT_CONF_SEQSTATE_D2	0xa
+#define CPC_Cx_STAT_CONF_CLKGAT_IMPL	BIT(17)
+#define CPC_Cx_STAT_CONF_PWRDN_IMPL	BIT(16)
+#define CPC_Cx_STAT_CONF_EJTAG_PROBE	BIT(15)
+
+/* CPC_Cx_OTHER - Configure the core-other register block prior to CM 3 */
+CPC_CX_ACCESSOR_RW(32, 0x010, other)
+#define CPC_Cx_OTHER_CORENUM	GENMASK(23, 16)
+
+/* CPC_Cx_VP_STOP - Stop Virtual Processors (VPs) within a core from running */
+CPC_CX_ACCESSOR_RW(32, 0x020, vp_stop)
+
+/* CPC_Cx_VP_START - Start Virtual Processors (VPs) within a core running */
+CPC_CX_ACCESSOR_RW(32, 0x028, vp_run)
+
+/* CPC_Cx_VP_RUNNING - Indicate which Virtual Processors (VPs) are running */
+CPC_CX_ACCESSOR_RW(32, 0x030, vp_running)
+
+/* CPC_Cx_CONFIG - Mirrors GCR_Cx_CONFIG */
+CPC_CX_ACCESSOR_RW(32, 0x090, config)

 #ifdef CONFIG_MIPS_CPC
+240
arch/mips/include/asm/mips-cps.h
···
+/*
+ * Copyright (C) 2017 Imagination Technologies
+ * Author: Paul Burton <paul.burton@imgtec.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
+
+#ifndef __MIPS_ASM_MIPS_CPS_H__
+#define __MIPS_ASM_MIPS_CPS_H__
+
+#include <linux/io.h>
+#include <linux/types.h>
+
+extern unsigned long __cps_access_bad_size(void)
+	__compiletime_error("Bad size for CPS accessor");
+
+#define CPS_ACCESSOR_A(unit, off, name)					\
+static inline void *addr_##unit##_##name(void)				\
+{									\
+	return mips_##unit##_base + (off);				\
+}
+
+#define CPS_ACCESSOR_R(unit, sz, name)					\
+static inline uint##sz##_t read_##unit##_##name(void)			\
+{									\
+	uint64_t val64;							\
+									\
+	switch (sz) {							\
+	case 32:							\
+		return __raw_readl(addr_##unit##_##name());		\
+									\
+	case 64:							\
+		if (mips_cm_is64)					\
+			return __raw_readq(addr_##unit##_##name());	\
+									\
+		val64 = __raw_readl(addr_##unit##_##name() + 4);	\
+		val64 <<= 32;						\
+		val64 |= __raw_readl(addr_##unit##_##name());		\
+		return val64;						\
+									\
+	default:							\
+		return __cps_access_bad_size();				\
+	}								\
+}
+
+#define CPS_ACCESSOR_W(unit, sz, name)					\
+static inline void write_##unit##_##name(uint##sz##_t val)		\
+{									\
+	switch (sz) {							\
+	case 32:							\
+		__raw_writel(val, addr_##unit##_##name());		\
+		break;							\
+									\
+	case 64:							\
+		if (mips_cm_is64) {					\
+			__raw_writeq(val, addr_##unit##_##name());	\
+			break;						\
+		}							\
+									\
+		__raw_writel((uint64_t)val >> 32,			\
+			     addr_##unit##_##name() + 4);		\
+		__raw_writel(val, addr_##unit##_##name());		\
+		break;							\
+									\
+	default:							\
+		__cps_access_bad_size();				\
+		break;							\
+	}								\
+}
+
+#define CPS_ACCESSOR_M(unit, sz, name)					\
+static inline void change_##unit##_##name(uint##sz##_t mask,		\
+					  uint##sz##_t val)		\
+{									\
+	uint##sz##_t reg_val = read_##unit##_##name();			\
+	reg_val &= ~mask;						\
+	reg_val |= val;							\
+	write_##unit##_##name(reg_val);					\
+}									\
+									\
+static inline void set_##unit##_##name(uint##sz##_t val)		\
+{									\
+	change_##unit##_##name(val, val);				\
+}									\
+									\
+static inline void clear_##unit##_##name(uint##sz##_t val)		\
+{									\
+	change_##unit##_##name(val, 0);					\
+}
+
+#define CPS_ACCESSOR_RO(unit, sz, off, name)				\
+	CPS_ACCESSOR_A(unit, off, name)					\
+	CPS_ACCESSOR_R(unit, sz, name)
+
+#define CPS_ACCESSOR_WO(unit, sz, off, name)				\
+	CPS_ACCESSOR_A(unit, off, name)					\
+	CPS_ACCESSOR_W(unit, sz, name)
+
+#define CPS_ACCESSOR_RW(unit, sz, off, name)				\
+	CPS_ACCESSOR_A(unit, off, name)					\
+	CPS_ACCESSOR_R(unit, sz, name)					\
+	CPS_ACCESSOR_W(unit, sz, name)					\
+	CPS_ACCESSOR_M(unit, sz, name)
+
+#include <asm/mips-cm.h>
+#include <asm/mips-cpc.h>
+#include <asm/mips-gic.h>
+
+/**
+ * mips_cps_numclusters - return the number of clusters present in the system
+ *
+ * Returns the number of clusters in the system.
+ */
+static inline unsigned int mips_cps_numclusters(void)
+{
+	unsigned int num_clusters;
+
+	if (mips_cm_revision() < CM_REV_CM3_5)
+		return 1;
+
+	num_clusters = read_gcr_config() & CM_GCR_CONFIG_NUM_CLUSTERS;
+	num_clusters >>= __ffs(CM_GCR_CONFIG_NUM_CLUSTERS);
+	return num_clusters;
+}
+
+/**
+ * mips_cps_cluster_config - return (GCR|CPC)_CONFIG from a cluster
+ * @cluster: the ID of the cluster whose config we want
+ *
+ * Read the value of GCR_CONFIG (or its CPC_CONFIG mirror) from a @cluster.
+ *
+ * Returns the value of GCR_CONFIG.
+ */
+static inline uint64_t mips_cps_cluster_config(unsigned int cluster)
+{
+	uint64_t config;
+
+	if (mips_cm_revision() < CM_REV_CM3_5) {
+		/*
+		 * Prior to CM 3.5 we don't have the notion of multiple
+		 * clusters so we can trivially read the GCR_CONFIG register
+		 * within this cluster.
+		 */
+		WARN_ON(cluster != 0);
+		config = read_gcr_config();
+	} else {
+		/*
+		 * From CM 3.5 onwards we read the CPC_CONFIG mirror of
+		 * GCR_CONFIG via the redirect region, since the CPC is always
+		 * powered up allowing us not to need to power up the CM.
+		 */
+		mips_cm_lock_other(cluster, 0, 0, CM_GCR_Cx_OTHER_BLOCK_GLOBAL);
+		config = read_cpc_redir_config();
+		mips_cm_unlock_other();
+	}
+
+	return config;
+}
+
+/**
+ * mips_cps_numcores - return the number of cores present in a cluster
+ * @cluster: the ID of the cluster whose core count we want
+ *
+ * Returns the value of the PCORES field of the GCR_CONFIG register plus 1, or
+ * zero if no Coherence Manager is present.
+ */
+static inline unsigned int mips_cps_numcores(unsigned int cluster)
+{
+	if (!mips_cm_present())
+		return 0;
+
+	/* Add one before masking to handle 0xff indicating no cores */
+	return (mips_cps_cluster_config(cluster) + 1) & CM_GCR_CONFIG_PCORES;
+}
+
+/**
+ * mips_cps_numiocu - return the number of IOCUs present in a cluster
+ * @cluster: the ID of the cluster whose IOCU count we want
+ *
+ * Returns the value of the NUMIOCU field of the GCR_CONFIG register, or zero
+ * if no Coherence Manager is present.
+ */
+static inline unsigned int mips_cps_numiocu(unsigned int cluster)
+{
+	unsigned int num_iocu;
+
+	if (!mips_cm_present())
+		return 0;
+
+	num_iocu = mips_cps_cluster_config(cluster) & CM_GCR_CONFIG_NUMIOCU;
+	num_iocu >>= __ffs(CM_GCR_CONFIG_NUMIOCU);
+	return num_iocu;
+}
+
+/**
+ * mips_cps_numvps - return the number of VPs (threads) supported by a core
+ * @cluster: the ID of the cluster containing the core we want to examine
+ * @core: the ID of the core whose VP count we want
+ *
+ * Returns the number of Virtual Processors (VPs, ie. hardware threads) that
+ * are supported by the given @core in the given @cluster. If the core or the
+ * kernel do not support hardware mutlti-threading this returns 1.
+ */
+static inline unsigned int mips_cps_numvps(unsigned int cluster, unsigned int core)
+{
+	unsigned int cfg;
+
+	if (!mips_cm_present())
+		return 1;
+
+	if ((!IS_ENABLED(CONFIG_MIPS_MT_SMP) || !cpu_has_mipsmt)
+		&& (!IS_ENABLED(CONFIG_CPU_MIPSR6) || !cpu_has_vp))
+		return 1;
+
+	mips_cm_lock_other(cluster, core, 0, CM_GCR_Cx_OTHER_BLOCK_LOCAL);
+
+	if (mips_cm_revision() < CM_REV_CM3_5) {
+		/*
+		 * Prior to CM 3.5 we can only have one cluster & don't have
+		 * CPC_Cx_CONFIG, so we read GCR_Cx_CONFIG.
+		 */
+		cfg = read_gcr_co_config();
+	} else {
+		/*
+		 * From CM 3.5 onwards we read CPC_Cx_CONFIG because the CPC is
+		 * always powered, which allows us to not worry about powering
+		 * up the cluster's CM here.
+		 */
+		cfg = read_cpc_co_config();
+	}
+
+	mips_cm_unlock_other();
+
+	return (cfg + 1) & CM_GCR_Cx_CONFIG_PVPE;
+}
+
+#endif /* __MIPS_ASM_MIPS_CPS_H__ */
+347
arch/mips/include/asm/mips-gic.h
···
+/*
+ * Copyright (C) 2017 Imagination Technologies
+ * Author: Paul Burton <paul.burton@imgtec.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
+
+#ifndef __MIPS_ASM_MIPS_CPS_H__
+# error Please include asm/mips-cps.h rather than asm/mips-gic.h
+#endif
+
+#ifndef __MIPS_ASM_MIPS_GIC_H__
+#define __MIPS_ASM_MIPS_GIC_H__
+
+#include <linux/bitops.h>
+
+/* The base address of the GIC registers */
+extern void __iomem *mips_gic_base;
+
+/* Offsets from the GIC base address to various control blocks */
+#define MIPS_GIC_SHARED_OFS	0x00000
+#define MIPS_GIC_SHARED_SZ	0x08000
+#define MIPS_GIC_LOCAL_OFS	0x08000
+#define MIPS_GIC_LOCAL_SZ	0x04000
+#define MIPS_GIC_REDIR_OFS	0x0c000
+#define MIPS_GIC_REDIR_SZ	0x04000
+#define MIPS_GIC_USER_OFS	0x10000
+#define MIPS_GIC_USER_SZ	0x10000
+
+/* For read-only shared registers */
+#define GIC_ACCESSOR_RO(sz, off, name)					\
+	CPS_ACCESSOR_RO(gic, sz, MIPS_GIC_SHARED_OFS + off, name)
+
+/* For read-write shared registers */
+#define GIC_ACCESSOR_RW(sz, off, name)					\
+	CPS_ACCESSOR_RW(gic, sz, MIPS_GIC_SHARED_OFS + off, name)
+
+/* For read-only local registers */
+#define GIC_VX_ACCESSOR_RO(sz, off, name)				\
+	CPS_ACCESSOR_RO(gic, sz, MIPS_GIC_LOCAL_OFS + off, vl_##name)	\
+	CPS_ACCESSOR_RO(gic, sz, MIPS_GIC_REDIR_OFS + off, vo_##name)
+
+/* For read-write local registers */
+#define GIC_VX_ACCESSOR_RW(sz, off, name)				\
+	CPS_ACCESSOR_RW(gic, sz, MIPS_GIC_LOCAL_OFS + off, vl_##name)	\
+	CPS_ACCESSOR_RW(gic, sz, MIPS_GIC_REDIR_OFS + off, vo_##name)
+
+/* For read-only shared per-interrupt registers */
+#define GIC_ACCESSOR_RO_INTR_REG(sz, off, stride, name)			\
+static inline void __iomem *addr_gic_##name(unsigned int intr)		\
+{									\
+	return mips_gic_base + (off) + (intr * (stride));		\
+}									\
+									\
+static inline unsigned int read_gic_##name(unsigned int intr)		\
+{									\
+	BUILD_BUG_ON(sz != 32);						\
+	return __raw_readl(addr_gic_##name(intr));			\
+}
+
+/* For read-write shared per-interrupt registers */
+#define GIC_ACCESSOR_RW_INTR_REG(sz, off, stride, name)			\
+	GIC_ACCESSOR_RO_INTR_REG(sz, off, stride, name)			\
+									\
+static inline void write_gic_##name(unsigned int intr,			\
+				    unsigned int val)			\
+{									\
+	BUILD_BUG_ON(sz != 32);						\
+	__raw_writel(val, addr_gic_##name(intr));			\
+}
+
+/* For read-only local per-interrupt registers */
+#define GIC_VX_ACCESSOR_RO_INTR_REG(sz, off, stride, name)		\
+	GIC_ACCESSOR_RO_INTR_REG(sz, MIPS_GIC_LOCAL_OFS + off,		\
+				 stride, vl_##name)			\
+	GIC_ACCESSOR_RO_INTR_REG(sz, MIPS_GIC_REDIR_OFS + off,		\
+				 stride, vo_##name)
+
+/* For read-write local per-interrupt registers */
+#define GIC_VX_ACCESSOR_RW_INTR_REG(sz, off, stride, name)		\
+	GIC_ACCESSOR_RW_INTR_REG(sz, MIPS_GIC_LOCAL_OFS + off,		\
+				 stride, vl_##name)			\
+	GIC_ACCESSOR_RW_INTR_REG(sz, MIPS_GIC_REDIR_OFS + off,		\
+				 stride, vo_##name)
+
+/* For read-only shared bit-per-interrupt registers */
+#define GIC_ACCESSOR_RO_INTR_BIT(off, name)				\
+static inline void __iomem *addr_gic_##name(void)			\
+{									\
+	return mips_gic_base + (off);					\
+}									\
+									\
+static inline unsigned int read_gic_##name(unsigned int intr)		\
+{									\
+	void __iomem *addr = addr_gic_##name();				\
+	unsigned int val;						\
+									\
+	if (mips_cm_is64) {						\
+		addr += (intr / 64) * sizeof(uint64_t);			\
+		val = __raw_readq(addr) >> intr % 64;			\
+	} else {							\
+		addr += (intr / 32) * sizeof(uint32_t);			\
+		val = __raw_readl(addr) >> intr % 32;			\
+	}								\
+									\
+	return val & 0x1;						\
+}
+
+/* For read-write shared bit-per-interrupt registers */
+#define GIC_ACCESSOR_RW_INTR_BIT(off, name)				\
+	GIC_ACCESSOR_RO_INTR_BIT(off, name)				\
+									\
+static inline void write_gic_##name(unsigned int intr)			\
+{									\
+	void __iomem *addr = addr_gic_##name();				\
+									\
+	if (mips_cm_is64) {						\
+		addr += (intr / 64) * sizeof(uint64_t);			\
+		__raw_writeq(BIT(intr % 64), addr);			\
+	} else {							\
+		addr += (intr / 32) * sizeof(uint32_t);			\
+		__raw_writel(BIT(intr % 32), addr);			\
+	}								\
+}									\
+									\
+static inline void change_gic_##name(unsigned int intr,			\
+				     unsigned int val)			\
+{									\
+	void __iomem *addr = addr_gic_##name();				\
+									\
+	if (mips_cm_is64) {						\
+		uint64_t _val;						\
+									\
+		addr += (intr / 64) * sizeof(uint64_t);			\
+		_val = __raw_readq(addr);				\
+		_val &= ~BIT_ULL(intr % 64);				\
+		_val |= (uint64_t)val << (intr % 64);			\
+		__raw_writeq(_val, addr);				\
+	} else {							\
+		uint32_t _val;						\
+									\
+		addr += (intr / 32) * sizeof(uint32_t);			\
+		_val = __raw_readl(addr);				\
+		_val &= ~BIT(intr % 32);				\
+		_val |= val << (intr % 32);				\
+		__raw_writel(_val, addr);				\
+	}								\
+}
+
+/* For read-only local bit-per-interrupt registers */
+#define GIC_VX_ACCESSOR_RO_INTR_BIT(sz, off, name)			\
+	GIC_ACCESSOR_RO_INTR_BIT(sz, MIPS_GIC_LOCAL_OFS + off,		\
+				 vl_##name)				\
+	GIC_ACCESSOR_RO_INTR_BIT(sz, MIPS_GIC_REDIR_OFS + off,		\
+				 vo_##name)
+
+/* For read-write local bit-per-interrupt registers */
+#define GIC_VX_ACCESSOR_RW_INTR_BIT(sz, off, name)			\
+	GIC_ACCESSOR_RW_INTR_BIT(sz, MIPS_GIC_LOCAL_OFS + off,		\
+				 vl_##name)				\
+	GIC_ACCESSOR_RW_INTR_BIT(sz, MIPS_GIC_REDIR_OFS + off,		\
+				 vo_##name)
+
+/* GIC_SH_CONFIG - Information about the GIC configuration */
+GIC_ACCESSOR_RW(32, 0x000, config)
+#define GIC_CONFIG_COUNTSTOP	BIT(28)
+#define GIC_CONFIG_COUNTBITS	GENMASK(27, 24)
+#define GIC_CONFIG_NUMINTERRUPTS	GENMASK(23, 16)
+#define GIC_CONFIG_PVPS	GENMASK(6, 0)
+
+/* GIC_SH_COUNTER - Shared global counter value */
+GIC_ACCESSOR_RW(64, 0x010, counter)
+GIC_ACCESSOR_RW(32, 0x010, counter_32l)
+GIC_ACCESSOR_RW(32, 0x014, counter_32h)
+
+/* GIC_SH_POL_* - Configures interrupt polarity */
+GIC_ACCESSOR_RW_INTR_BIT(0x100, pol)
+#define GIC_POL_ACTIVE_LOW	0	/* when level triggered */
+#define GIC_POL_ACTIVE_HIGH	1	/* when level triggered */
+#define GIC_POL_FALLING_EDGE	0	/* when single-edge triggered */
+#define GIC_POL_RISING_EDGE	1	/* when single-edge triggered */
+
+/* GIC_SH_TRIG_* - Configures interrupts to be edge or level triggered */
+GIC_ACCESSOR_RW_INTR_BIT(0x180, trig)
+#define GIC_TRIG_LEVEL	0
+#define GIC_TRIG_EDGE	1
+
+/* GIC_SH_DUAL_* - Configures whether interrupts trigger on both edges */
+GIC_ACCESSOR_RW_INTR_BIT(0x200, dual)
+#define GIC_DUAL_SINGLE	0	/* when edge-triggered */
+#define GIC_DUAL_DUAL	1	/* when edge-triggered */
+
+/* GIC_SH_WEDGE - Write an 'edge', ie.
trigger an interrupt */ 197 + GIC_ACCESSOR_RW(32, 0x280, wedge) 198 + #define GIC_WEDGE_RW BIT(31) 199 + #define GIC_WEDGE_INTR GENMASK(7, 0) 200 + 201 + /* GIC_SH_RMASK_* - Reset/clear shared interrupt mask bits */ 202 + GIC_ACCESSOR_RW_INTR_BIT(0x300, rmask) 203 + 204 + /* GIC_SH_SMASK_* - Set shared interrupt mask bits */ 205 + GIC_ACCESSOR_RW_INTR_BIT(0x380, smask) 206 + 207 + /* GIC_SH_MASK_* - Read the current shared interrupt mask */ 208 + GIC_ACCESSOR_RO_INTR_BIT(0x400, mask) 209 + 210 + /* GIC_SH_PEND_* - Read currently pending shared interrupts */ 211 + GIC_ACCESSOR_RO_INTR_BIT(0x480, pend) 212 + 213 + /* GIC_SH_MAPx_PIN - Map shared interrupts to a particular CPU pin */ 214 + GIC_ACCESSOR_RW_INTR_REG(32, 0x500, 0x4, map_pin) 215 + #define GIC_MAP_PIN_MAP_TO_PIN BIT(31) 216 + #define GIC_MAP_PIN_MAP_TO_NMI BIT(30) 217 + #define GIC_MAP_PIN_MAP GENMASK(5, 0) 218 + 219 + /* GIC_SH_MAPx_VP - Map shared interrupts to a particular Virtual Processor */ 220 + GIC_ACCESSOR_RW_INTR_REG(32, 0x2000, 0x20, map_vp) 221 + 222 + /* GIC_Vx_CTL - VP-level interrupt control */ 223 + GIC_VX_ACCESSOR_RW(32, 0x000, ctl) 224 + #define GIC_VX_CTL_FDC_ROUTABLE BIT(4) 225 + #define GIC_VX_CTL_SWINT_ROUTABLE BIT(3) 226 + #define GIC_VX_CTL_PERFCNT_ROUTABLE BIT(2) 227 + #define GIC_VX_CTL_TIMER_ROUTABLE BIT(1) 228 + #define GIC_VX_CTL_EIC BIT(0) 229 + 230 + /* GIC_Vx_PEND - Read currently pending local interrupts */ 231 + GIC_VX_ACCESSOR_RO(32, 0x004, pend) 232 + 233 + /* GIC_Vx_MASK - Read the current local interrupt mask */ 234 + GIC_VX_ACCESSOR_RO(32, 0x008, mask) 235 + 236 + /* GIC_Vx_RMASK - Reset/clear local interrupt mask bits */ 237 + GIC_VX_ACCESSOR_RW(32, 0x00c, rmask) 238 + 239 + /* GIC_Vx_SMASK - Set local interrupt mask bits */ 240 + GIC_VX_ACCESSOR_RW(32, 0x010, smask) 241 + 242 + /* GIC_Vx_*_MAP - Route local interrupts to the desired pins */ 243 + GIC_VX_ACCESSOR_RW_INTR_REG(32, 0x040, 0x4, map) 244 + 245 + /* GIC_Vx_WD_MAP - Route the local watchdog timer interrupt 
*/ 246 + GIC_VX_ACCESSOR_RW(32, 0x040, wd_map) 247 + 248 + /* GIC_Vx_COMPARE_MAP - Route the local count/compare interrupt */ 249 + GIC_VX_ACCESSOR_RW(32, 0x044, compare_map) 250 + 251 + /* GIC_Vx_TIMER_MAP - Route the local CPU timer (cp0 count/compare) interrupt */ 252 + GIC_VX_ACCESSOR_RW(32, 0x048, timer_map) 253 + 254 + /* GIC_Vx_FDC_MAP - Route the local fast debug channel interrupt */ 255 + GIC_VX_ACCESSOR_RW(32, 0x04c, fdc_map) 256 + 257 + /* GIC_Vx_PERFCTR_MAP - Route the local performance counter interrupt */ 258 + GIC_VX_ACCESSOR_RW(32, 0x050, perfctr_map) 259 + 260 + /* GIC_Vx_SWINT0_MAP - Route the local software interrupt 0 */ 261 + GIC_VX_ACCESSOR_RW(32, 0x054, swint0_map) 262 + 263 + /* GIC_Vx_SWINT1_MAP - Route the local software interrupt 1 */ 264 + GIC_VX_ACCESSOR_RW(32, 0x058, swint1_map) 265 + 266 + /* GIC_Vx_OTHER - Configure access to other Virtual Processor registers */ 267 + GIC_VX_ACCESSOR_RW(32, 0x080, other) 268 + #define GIC_VX_OTHER_VPNUM GENMASK(5, 0) 269 + 270 + /* GIC_Vx_IDENT - Retrieve the local Virtual Processor's ID */ 271 + GIC_VX_ACCESSOR_RO(32, 0x088, ident) 272 + #define GIC_VX_IDENT_VPNUM GENMASK(5, 0) 273 + 274 + /* GIC_Vx_COMPARE - Value to compare with GIC_SH_COUNTER */ 275 + GIC_VX_ACCESSOR_RW(64, 0x0a0, compare) 276 + 277 + /* GIC_Vx_EIC_SHADOW_SET_BASE - Set shadow register set for each interrupt */ 278 + GIC_VX_ACCESSOR_RW_INTR_REG(32, 0x100, 0x4, eic_shadow_set) 279 + 280 + /** 281 + * enum mips_gic_local_interrupt - GIC local interrupts 282 + * @GIC_LOCAL_INT_WD: GIC watchdog timer interrupt 283 + * @GIC_LOCAL_INT_COMPARE: GIC count/compare interrupt 284 + * @GIC_LOCAL_INT_TIMER: CP0 count/compare interrupt 285 + * @GIC_LOCAL_INT_PERFCTR: Performance counter interrupt 286 + * @GIC_LOCAL_INT_SWINT0: Software interrupt 0 287 + * @GIC_LOCAL_INT_SWINT1: Software interrupt 1 288 + * @GIC_LOCAL_INT_FDC: Fast debug channel interrupt 289 + * @GIC_NUM_LOCAL_INTRS: The number of local interrupts 290 + * 291 + * Enumerates 
interrupts provided by the GIC that are local to a VP. 292 + */ 293 + enum mips_gic_local_interrupt { 294 + GIC_LOCAL_INT_WD, 295 + GIC_LOCAL_INT_COMPARE, 296 + GIC_LOCAL_INT_TIMER, 297 + GIC_LOCAL_INT_PERFCTR, 298 + GIC_LOCAL_INT_SWINT0, 299 + GIC_LOCAL_INT_SWINT1, 300 + GIC_LOCAL_INT_FDC, 301 + GIC_NUM_LOCAL_INTRS 302 + }; 303 + 304 + /** 305 + * mips_gic_present() - Determine whether a GIC is present 306 + * 307 + * Determines whether a MIPS Global Interrupt Controller (GIC) is present in 308 + * the system that the kernel is running on. 309 + * 310 + * Return true if a GIC is present, else false. 311 + */ 312 + static inline bool mips_gic_present(void) 313 + { 314 + return IS_ENABLED(CONFIG_MIPS_GIC) && mips_gic_base; 315 + } 316 + 317 + /** 318 + * gic_get_c0_compare_int() - Return cp0 count/compare interrupt virq 319 + * 320 + * Determine the virq number to use for the coprocessor 0 count/compare 321 + * interrupt, which may be routed via the GIC. 322 + * 323 + * Returns the virq number or a negative error number. 324 + */ 325 + extern int gic_get_c0_compare_int(void); 326 + 327 + /** 328 + * gic_get_c0_perfcount_int() - Return performance counter interrupt virq 329 + * 330 + * Determine the virq number to use for CPU performance counter interrupts, 331 + * which may be routed via the GIC. 332 + * 333 + * Returns the virq number or a negative error number. 334 + */ 335 + extern int gic_get_c0_perfcount_int(void); 336 + 337 + /** 338 + * gic_get_c0_fdc_int() - Return fast debug channel interrupt virq 339 + * 340 + * Determine the virq number to use for fast debug channel (FDC) interrupts, 341 + * which may be routed via the GIC. 342 + * 343 + * Returns the virq number or a negative error number. 344 + */ 345 + extern int gic_get_c0_fdc_int(void); 346 + 347 + #endif /* __MIPS_ASM_MIPS_CPS_H__ */
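The bit-per-interrupt accessors above pack one interrupt per bit into an array of 32- or 64-bit words, selected by `mips_cm_is64`. A minimal userspace model of that index math (helper names are ours, not kernel API) makes the word/bit split explicit:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Model of the addressing used by GIC_ACCESSOR_RO_INTR_BIT: interrupt
 * N lives at bit (N % width) of word (N / width), where width is 64
 * on a 64-bit CM and 32 otherwise.
 */
static inline unsigned int gic_bit_word_offset(unsigned int intr, int is64)
{
	unsigned int width = is64 ? 64 : 32;

	/* Byte offset of the word holding this interrupt's bit. */
	return (intr / width) * (width / 8);
}

static inline unsigned int gic_bit_shift(unsigned int intr, int is64)
{
	/* Bit position of the interrupt within that word. */
	return intr % (is64 ? 64 : 32);
}
```

For interrupt 65, both register widths happen to land at byte offset 8, but at bit 1 of a 64-bit word versus bit 1 of the third 32-bit word.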
+13
arch/mips/include/asm/mipsregs.h
···
  #define CP0_ENTRYLO0 $2
  #define CP0_ENTRYLO1 $3
  #define CP0_CONF $3
+ #define CP0_GLOBALNUMBER $3, 1
  #define CP0_CONTEXT $4
  #define CP0_PAGEMASK $5
  #define CP0_SEGCTL0 $5, 2
···
  #define MIPS_ENTRYLO_PFN_SHIFT	6
  #define MIPS_ENTRYLO_XI		(_ULCAST_(1) << (BITS_PER_LONG - 2))
  #define MIPS_ENTRYLO_RI		(_ULCAST_(1) << (BITS_PER_LONG - 1))
+
+ /*
+  * MIPSr6+ GlobalNumber register definitions
+  */
+ #define MIPS_GLOBALNUMBER_VP_SHF	0
+ #define MIPS_GLOBALNUMBER_VP		(_ULCAST_(0xff) << MIPS_GLOBALNUMBER_VP_SHF)
+ #define MIPS_GLOBALNUMBER_CORE_SHF	8
+ #define MIPS_GLOBALNUMBER_CORE		(_ULCAST_(0xff) << MIPS_GLOBALNUMBER_CORE_SHF)
+ #define MIPS_GLOBALNUMBER_CLUSTER_SHF	16
+ #define MIPS_GLOBALNUMBER_CLUSTER	(_ULCAST_(0xf) << MIPS_GLOBALNUMBER_CLUSTER_SHF)

  /*
   * Values for PageMask register
···
  #define read_c0_conf()		__read_32bit_c0_register($3, 0)
  #define write_c0_conf(val)	__write_32bit_c0_register($3, 0, val)
+
+ #define read_c0_globalnumber()	__read_32bit_c0_register($3, 1)

  #define read_c0_context()	__read_ulong_c0_register($4, 0)
  #define write_c0_context(val)	__write_ulong_c0_register($4, 0, val)
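The GlobalNumber register added above encodes a VP's position as three packed fields: VP within core (bits 7:0), core within cluster (bits 15:8), and cluster (bits 19:16). A hedged userspace sketch of decoding a raw value (the `GN_*` names below are illustrative stand-ins for the kernel's `_ULCAST_`-based masks):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative equivalents of the MIPS_GLOBALNUMBER_* definitions. */
#define GN_VP_SHF	0
#define GN_VP		(0xffu << GN_VP_SHF)
#define GN_CORE_SHF	8
#define GN_CORE		(0xffu << GN_CORE_SHF)
#define GN_CLUSTER_SHF	16
#define GN_CLUSTER	(0xfu << GN_CLUSTER_SHF)

/* Decompose a raw GlobalNumber value into its three fields. */
static inline unsigned int gn_vp(uint32_t gn)
{
	return (gn & GN_VP) >> GN_VP_SHF;
}

static inline unsigned int gn_core(uint32_t gn)
{
	return (gn & GN_CORE) >> GN_CORE_SHF;
}

static inline unsigned int gn_cluster(uint32_t gn)
{
	return (gn & GN_CLUSTER) >> GN_CLUSTER_SHF;
}
```

So a value of 0x00020103 identifies VP 3 of core 1 in cluster 2.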
-2
arch/mips/include/asm/module.h
···
  #define MODULE_PROC_FAMILY "R5432 "
  #elif defined CONFIG_CPU_R5500
  #define MODULE_PROC_FAMILY "R5500 "
- #elif defined CONFIG_CPU_R6000
- #define MODULE_PROC_FAMILY "R6000 "
  #elif defined CONFIG_CPU_NEVADA
  #define MODULE_PROC_FAMILY "NEVADA "
  #elif defined CONFIG_CPU_R8000
+1 -1
arch/mips/include/asm/netlogic/common.h
···
   */
  void nlm_init_boot_cpu(void);
  unsigned int nlm_get_cpu_frequency(void);
- extern struct plat_smp_ops nlm_smp_ops;
+ extern const struct plat_smp_ops nlm_smp_ops;
  extern char nlm_reset_entry[], nlm_reset_entry_end[];

  /* SWIOTLB */
+53
arch/mips/include/asm/octeon/cvmx-boot-vector.h
···
+ /*
+  * This file is subject to the terms and conditions of the GNU General Public
+  * License. See the file "COPYING" in the main directory of this archive
+  * for more details.
+  *
+  * Copyright (C) 2003-2017 Cavium, Inc.
+  */
+
+ #ifndef __CVMX_BOOT_VECTOR_H__
+ #define __CVMX_BOOT_VECTOR_H__
+
+ #include <asm/octeon/octeon.h>
+
+ /*
+  * The boot vector table is made up of an array of 1024 elements of
+  * struct cvmx_boot_vector_element. There is one entry for each
+  * possible MIPS CPUNum, indexed by the CPUNum.
+  *
+  * Once cvmx_boot_vector_get() returns a non-NULL value (indicating
+  * success), NMI to a core will cause execution to transfer to the
+  * target_ptr location for that core's entry in the vector table.
+  *
+  * The struct cvmx_boot_vector_element fields app0, app1, and app2 can
+  * be used by the application that has set the target_ptr in any
+  * application specific manner, they are not touched by the vectoring
+  * code.
+  *
+  * The boot vector code clobbers the CP0_DESAVE register, and on
+  * OCTEON II and later CPUs also clobbers CP0_KScratch2. All GP
+  * registers are preserved, except on pre-OCTEON II CPUs, where k1 is
+  * clobbered.
+  *
+  */
+
+
+ /*
+  * Applications install the boot bus code in cvmx-boot-vector.c, which
+  * uses this magic:
+  */
+ #define OCTEON_BOOT_MOVEABLE_MAGIC1 0xdb00110ad358eacdull
+
+ struct cvmx_boot_vector_element {
+	/* kseg0 or xkphys address of target code. */
+	uint64_t target_ptr;
+	/* Three application specific arguments. */
+	uint64_t app0;
+	uint64_t app1;
+	uint64_t app2;
+ };
+
+ struct cvmx_boot_vector_element *cvmx_boot_vector_get(void);
+
+ #endif /* __CVMX_BOOT_VECTOR_H__ */
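Each of the 1024 per-CPUNum entries in the table above is four 64-bit words, so an entry's byte offset is simply CPUNum times the 32-byte element size. A small model of the layout (the offset helper is ours, for illustration):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Mirror of struct cvmx_boot_vector_element from the header above. */
struct boot_vector_element {
	uint64_t target_ptr;	/* kseg0/xkphys address of target code */
	uint64_t app0;		/* three application-specific words */
	uint64_t app1;
	uint64_t app2;
};

/* Byte offset of a given CPUNum's entry within the 1024-entry table. */
static inline size_t boot_vector_entry_offset(unsigned int cpunum)
{
	return (size_t)cpunum * sizeof(struct boot_vector_element);
}
```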
+28
arch/mips/include/asm/octeon/cvmx-bootmem.h
···
				    uint64_t max_addr, uint64_t align,
				    char *name);

+ /**
+  * Allocate if needed a block of memory from a specific range of the
+  * free list that was passed to the application by the bootloader, and
+  * assign it a name in the global named block table. (part of the
+  * cvmx_bootmem_descriptor_t structure) Named blocks can later be
+  * freed. If the requested name block is already allocated, return
+  * the pointer to block of memory. If request cannot be satisfied
+  * within the address range specified, NULL is returned
+  *
+  * @param size      Size in bytes of block to allocate
+  * @param min_addr  minimum address of range
+  * @param max_addr  maximum address of range
+  * @param align     Alignment of memory to be allocated. (must be a power of 2)
+  * @param name      name of block - must be less than CVMX_BOOTMEM_NAME_LEN bytes
+  * @param init      Initialization function
+  *
+  * The initialization function is optional, if omitted the named block
+  * is initialized to all zeros when it is created, i.e. once.
+  *
+  * @return pointer to block of memory, NULL on error
+  */
+ void *cvmx_bootmem_alloc_named_range_once(uint64_t size,
+					   uint64_t min_addr,
+					   uint64_t max_addr,
+					   uint64_t align,
+					   char *name,
+					   void (*init) (void *));
+
  extern int cvmx_bootmem_free_named(char *name);

  /**
+10
arch/mips/include/asm/octeon/cvmx-ciu-defs.h
···
	case OCTEON_CN52XX & OCTEON_FAMILY_MASK:
	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
		return CVMX_ADD_IO_SEG(0x0001070000000580ull) + (offset) * 8;
	case OCTEON_CN31XX & OCTEON_FAMILY_MASK:
	case OCTEON_CN50XX & OCTEON_FAMILY_MASK:
···
		return CVMX_ADD_IO_SEG(0x0001070000000580ull) + (offset) * 8;
	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
		return CVMX_ADD_IO_SEG(0x0001070100100200ull) + (offset) * 8;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		return CVMX_ADD_IO_SEG(0x0001010000030000ull) + (offset) * 8;
	}
	return CVMX_ADD_IO_SEG(0x0001070000000580ull) + (offset) * 8;
  }
···
	case OCTEON_CN52XX & OCTEON_FAMILY_MASK:
	case OCTEON_CNF71XX & OCTEON_FAMILY_MASK:
	case OCTEON_CN61XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN70XX & OCTEON_FAMILY_MASK:
		return CVMX_ADD_IO_SEG(0x0001070000000500ull) + (offset) * 8;
	case OCTEON_CN31XX & OCTEON_FAMILY_MASK:
	case OCTEON_CN50XX & OCTEON_FAMILY_MASK:
···
		return CVMX_ADD_IO_SEG(0x0001070000000500ull) + (offset) * 8;
	case OCTEON_CN68XX & OCTEON_FAMILY_MASK:
		return CVMX_ADD_IO_SEG(0x0001070100100000ull) + (offset) * 8;
+	case OCTEON_CNF75XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN73XX & OCTEON_FAMILY_MASK:
+	case OCTEON_CN78XX & OCTEON_FAMILY_MASK:
+		return CVMX_ADD_IO_SEG(0x0001010000020000ull) + (offset) * 8;
	}
	return CVMX_ADD_IO_SEG(0x0001070000000500ull) + (offset) * 8;
  }
+28
arch/mips/include/asm/octeon/cvmx.h
···
	return cvmx_get_core_num() & ((1 << CVMX_NODE_NO_SHIFT) - 1);
  }

+ #define CVMX_NODE_BITS		(2)	/* Number of bits to define a node */
+ #define CVMX_MAX_NODES		(1 << CVMX_NODE_BITS)
+ #define CVMX_NODE_IO_SHIFT	(36)
+ #define CVMX_NODE_MEM_SHIFT	(40)
+ #define CVMX_NODE_IO_MASK	((uint64_t)CVMX_NODE_MASK << CVMX_NODE_IO_SHIFT)
+
+ static inline void cvmx_write_csr_node(uint64_t node, uint64_t csr_addr,
+					uint64_t val)
+ {
+	uint64_t composite_csr_addr, node_addr;
+
+	node_addr = (node & CVMX_NODE_MASK) << CVMX_NODE_IO_SHIFT;
+	composite_csr_addr = (csr_addr & ~CVMX_NODE_IO_MASK) | node_addr;
+
+	cvmx_write64_uint64(composite_csr_addr, val);
+	if (((csr_addr >> 40) & 0x7ffff) == (0x118))
+		cvmx_read64_uint64(CVMX_MIO_BOOT_BIST_STAT | node_addr);
+ }
+
+ static inline uint64_t cvmx_read_csr_node(uint64_t node, uint64_t csr_addr)
+ {
+	uint64_t node_addr;
+
+	node_addr = (csr_addr & ~CVMX_NODE_IO_MASK) |
+		    (node & CVMX_NODE_MASK) << CVMX_NODE_IO_SHIFT;
+	return cvmx_read_csr(node_addr);
+ }
+
  /**
   * Returns the number of bits set in the provided value.
   * Simple wrapper for POP instruction.
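The node-aware CSR accessors above splice a node number into bits 36-37 of the I/O address. `CVMX_NODE_MASK` is not shown in this hunk; assuming it is `CVMX_MAX_NODES - 1` (i.e. 0x3), the address composition can be modelled as:

```c
#include <assert.h>
#include <stdint.h>

/* Assumed values; CVMX_NODE_MASK itself is not defined in this hunk. */
#define NODE_BITS	2
#define NODE_MASK	((1ull << NODE_BITS) - 1)
#define NODE_IO_SHIFT	36
#define NODE_IO_MASK	(NODE_MASK << NODE_IO_SHIFT)

/*
 * Clear any node field already present in the CSR address, then insert
 * the requested node - the same math as cvmx_read_csr_node() above.
 */
static inline uint64_t csr_addr_for_node(uint64_t node, uint64_t csr_addr)
{
	return (csr_addr & ~NODE_IO_MASK) |
	       ((node & NODE_MASK) << NODE_IO_SHIFT);
}
```

Retargeting the CIU address 0x0001070000000580 at node 1 just sets bit 36, giving 0x0001071000000580; mapping back to node 0 restores the original address.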
+2
arch/mips/include/asm/octeon/octeon.h
···

  extern struct semaphore octeon_bootbus_sem;

+ struct irq_domain *octeon_irq_get_block_domain(int node, uint8_t block);
+
  #endif /* __ASM_OCTEON_OCTEON_H */
+8 -8
arch/mips/include/asm/smp-ops.h
···

  #include <linux/errno.h>

- #include <asm/mips-cm.h>
+ #include <asm/mips-cps.h>

  #ifdef CONFIG_SMP

···
	void (*send_ipi_mask)(const struct cpumask *mask, unsigned int action);
	void (*init_secondary)(void);
	void (*smp_finish)(void);
-	void (*boot_secondary)(int cpu, struct task_struct *idle);
+	int (*boot_secondary)(int cpu, struct task_struct *idle);
	void (*smp_setup)(void);
	void (*prepare_cpus)(unsigned int max_cpus);
  #ifdef CONFIG_HOTPLUG_CPU
···
  #endif
  };

- extern void register_smp_ops(struct plat_smp_ops *ops);
+ extern void register_smp_ops(const struct plat_smp_ops *ops);

  static inline void plat_smp_setup(void)
  {
-	extern struct plat_smp_ops *mp_ops;	/* private */
+	extern const struct plat_smp_ops *mp_ops;	/* private */

	mp_ops->smp_setup();
  }
···
	/* UP, nothing to do ... */
  }

- static inline void register_smp_ops(struct plat_smp_ops *ops)
+ static inline void register_smp_ops(const struct plat_smp_ops *ops)
  {
  }

···
  static inline int register_up_smp_ops(void)
  {
  #ifdef CONFIG_SMP_UP
-	extern struct plat_smp_ops up_smp_ops;
+	extern const struct plat_smp_ops up_smp_ops;

	register_smp_ops(&up_smp_ops);

···
  static inline int register_cmp_smp_ops(void)
  {
  #ifdef CONFIG_MIPS_CMP
-	extern struct plat_smp_ops cmp_smp_ops;
+	extern const struct plat_smp_ops cmp_smp_ops;

	if (!mips_cm_present())
		return -ENODEV;
···
  static inline int register_vsmp_smp_ops(void)
  {
  #ifdef CONFIG_MIPS_MT_SMP
-	extern struct plat_smp_ops vsmp_smp_ops;
+	extern const struct plat_smp_ops vsmp_smp_ops;

	register_smp_ops(&vsmp_smp_ops);

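Two changes above are worth noting: `boot_secondary` now returns an error code instead of `void`, and the ops tables become `const` so they can live in read-only memory and cannot be modified after registration. A minimal userspace sketch of the const-ops-table pattern (names here are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-in for struct plat_smp_ops. */
struct demo_smp_ops {
	int (*boot_secondary)(int cpu);	/* now returns an error code */
};

static int demo_boot_secondary(int cpu)
{
	/* Pretend only CPUs 0-3 exist. */
	return (cpu < 4) ? 0 : -1;
}

/* Defined once, const: the table itself can never be rewritten. */
static const struct demo_smp_ops demo_ops = {
	.boot_secondary = demo_boot_secondary,
};

static const struct demo_smp_ops *mp_ops;	/* private, as in the header */

static void demo_register_smp_ops(const struct demo_smp_ops *ops)
{
	mp_ops = ops;
}
```

Callers dispatch through the registered pointer, and a failed boot can now be reported instead of silently ignored.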
+5 -5
arch/mips/include/asm/smp.h
···
   */
  static inline void smp_send_reschedule(int cpu)
  {
-	extern struct plat_smp_ops *mp_ops;	/* private */
+	extern const struct plat_smp_ops *mp_ops;	/* private */

	mp_ops->send_ipi_single(cpu, SMP_RESCHEDULE_YOURSELF);
  }
···
  #ifdef CONFIG_HOTPLUG_CPU
  static inline int __cpu_disable(void)
  {
-	extern struct plat_smp_ops *mp_ops;	/* private */
+	extern const struct plat_smp_ops *mp_ops;	/* private */

	return mp_ops->cpu_disable();
  }

  static inline void __cpu_die(unsigned int cpu)
  {
-	extern struct plat_smp_ops *mp_ops;	/* private */
+	extern const struct plat_smp_ops *mp_ops;	/* private */

	mp_ops->cpu_die(cpu);
  }
···

  static inline void arch_send_call_function_single_ipi(int cpu)
  {
-	extern struct plat_smp_ops *mp_ops;	/* private */
+	extern const struct plat_smp_ops *mp_ops;	/* private */

	mp_ops->send_ipi_mask(cpumask_of(cpu), SMP_CALL_FUNCTION);
  }

  static inline void arch_send_call_function_ipi_mask(const struct cpumask *mask)
  {
-	extern struct plat_smp_ops *mp_ops;	/* private */
+	extern const struct plat_smp_ops *mp_ops;	/* private */

	mp_ops->send_ipi_mask(mask, SMP_CALL_FUNCTION);
  }
+169 -113
arch/mips/include/asm/stackframe.h
···
  #include <asm/asm-offsets.h>
  #include <asm/thread_info.h>

+ /* Make the addition of cfi info a little easier. */
+	.macro cfi_rel_offset reg offset=0 docfi=0
+	.if \docfi
+	.cfi_rel_offset \reg, \offset
+	.endif
+	.endm
+
+	.macro cfi_st reg offset=0 docfi=0
+	LONG_S	\reg, \offset(sp)
+	cfi_rel_offset \reg, \offset, \docfi
+	.endm
+
+	.macro cfi_restore reg offset=0 docfi=0
+	.if \docfi
+	.cfi_restore \reg
+	.endif
+	.endm
+
+	.macro cfi_ld reg offset=0 docfi=0
+	LONG_L	\reg, \offset(sp)
+	cfi_restore \reg \offset \docfi
+	.endm
+
  #if defined(CONFIG_CPU_R3000) || defined(CONFIG_CPU_TX39XX)
  #define STATMASK 0x3f
  #else
  #define STATMASK 0x1f
  #endif

-	.macro	SAVE_AT
+	.macro	SAVE_AT docfi=0
	.set	push
	.set	noat
-	LONG_S	$1, PT_R1(sp)
+	cfi_st	$1, PT_R1, \docfi
	.set	pop
	.endm

-	.macro	SAVE_TEMP
+	.macro	SAVE_TEMP docfi=0
  #ifdef CONFIG_CPU_HAS_SMARTMIPS
	mflhxu	v1
	LONG_S	v1, PT_LO(sp)
···
	mfhi	v1
  #endif
  #ifdef CONFIG_32BIT
-	LONG_S	$8, PT_R8(sp)
-	LONG_S	$9, PT_R9(sp)
+	cfi_st	$8, PT_R8, \docfi
+	cfi_st	$9, PT_R9, \docfi
  #endif
-	LONG_S	$10, PT_R10(sp)
-	LONG_S	$11, PT_R11(sp)
-	LONG_S	$12, PT_R12(sp)
+	cfi_st	$10, PT_R10, \docfi
+	cfi_st	$11, PT_R11, \docfi
+	cfi_st	$12, PT_R12, \docfi
  #if !defined(CONFIG_CPU_HAS_SMARTMIPS) && !defined(CONFIG_CPU_MIPSR6)
	LONG_S	v1, PT_HI(sp)
	mflo	v1
  #endif
-	LONG_S	$13, PT_R13(sp)
-	LONG_S	$14, PT_R14(sp)
-	LONG_S	$15, PT_R15(sp)
-	LONG_S	$24, PT_R24(sp)
+	cfi_st	$13, PT_R13, \docfi
+	cfi_st	$14, PT_R14, \docfi
+	cfi_st	$15, PT_R15, \docfi
+	cfi_st	$24, PT_R24, \docfi
  #if !defined(CONFIG_CPU_HAS_SMARTMIPS) && !defined(CONFIG_CPU_MIPSR6)
	LONG_S	v1, PT_LO(sp)
  #endif
···
  #endif
	.endm

-	.macro	SAVE_STATIC
-	LONG_S	$16, PT_R16(sp)
-	LONG_S	$17, PT_R17(sp)
-	LONG_S	$18, PT_R18(sp)
-	LONG_S	$19, PT_R19(sp)
-	LONG_S	$20, PT_R20(sp)
-	LONG_S	$21, PT_R21(sp)
-	LONG_S	$22, PT_R22(sp)
-	LONG_S	$23, PT_R23(sp)
-	LONG_S	$30, PT_R30(sp)
+	.macro	SAVE_STATIC docfi=0
+	cfi_st	$16, PT_R16, \docfi
+	cfi_st	$17, PT_R17, \docfi
+	cfi_st	$18, PT_R18, \docfi
+	cfi_st	$19, PT_R19, \docfi
+	cfi_st	$20, PT_R20, \docfi
+	cfi_st	$21, PT_R21, \docfi
+	cfi_st	$22, PT_R22, \docfi
+	cfi_st	$23, PT_R23, \docfi
+	cfi_st	$30, PT_R30, \docfi
	.endm

+ /*
+  * get_saved_sp returns the SP for the current CPU by looking in the
+  * kernelsp array for it. If tosp is set, it stores the current sp in
+  * k0 and loads the new value in sp. If not, it clobbers k0 and
+  * stores the new value in k1, leaving sp unaffected.
+  */
  #ifdef CONFIG_SMP
-	.macro	get_saved_sp	/* SMP variation */
+
+	/* SMP variation */
+	.macro	get_saved_sp docfi=0 tosp=0
	ASM_CPUID_MFC0	k0, ASM_SMP_CPUID_REG
  #if defined(CONFIG_32BIT) || defined(KBUILD_64BIT_SYM32)
	lui	k1, %hi(kernelsp)
···
  #endif
	LONG_SRL	k0, SMP_CPUID_PTRSHIFT
	LONG_ADDU	k1, k0
+	.if \tosp
+	move	k0, sp
+	.if \docfi
+	.cfi_register sp, k0
+	.endif
+	LONG_L	sp, %lo(kernelsp)(k1)
+	.else
	LONG_L	k1, %lo(kernelsp)(k1)
+	.endif
	.endm

	.macro	set_saved_sp stackp temp temp2
···
	LONG_S	\stackp, kernelsp(\temp)
	.endm
  #else /* !CONFIG_SMP */
-	.macro	get_saved_sp	/* Uniprocessor variation */
+	/* Uniprocessor variation */
+	.macro	get_saved_sp docfi=0 tosp=0
  #ifdef CONFIG_CPU_JUMP_WORKAROUNDS
	/*
	 * Clear BTB (branch target buffer), forbid RAS (return address
···
	daddiu	k1, %hi(kernelsp)
	dsll	k1, k1, 16
  #endif
+	.if \tosp
+	move	k0, sp
+	.if \docfi
+	.cfi_register sp, k0
+	.endif
+	LONG_L	sp, %lo(kernelsp)(k1)
+	.else
	LONG_L	k1, %lo(kernelsp)(k1)
+	.endif
	.endm

	.macro	set_saved_sp stackp temp temp2
···
	.endm
  #endif

-	.macro	SAVE_SOME
+	.macro	SAVE_SOME docfi=0
	.set	push
	.set	noat
	.set	reorder
···
	sll	k0, 3		/* extract cu0 bit */
	.set	noreorder
	bltz	k0, 8f
-	 move	k1, sp
  #ifdef CONFIG_EVA
	/*
	 * Flush interAptiv's Return Prediction Stack (RPS) by writing
···
	MTC0	k0, CP0_ENTRYHI
  #endif
	.set	reorder
+	move	k0, sp
+	.if \docfi
+	.cfi_register sp, k0
+	.endif
	/* Called from user mode, new stack. */
-	get_saved_sp
- #ifndef CONFIG_CPU_DADDI_WORKAROUNDS
- 8:	move	k0, sp
-	PTR_SUBU sp, k1, PT_SIZE
- #else
-	.set	at=k0
- 8:	PTR_SUBU k1, PT_SIZE
-	.set	noat
-	move	k0, sp
-	move	sp, k1
+	get_saved_sp docfi=\docfi tosp=1
+ 8:
+ #ifdef CONFIG_CPU_DADDI_WORKAROUNDS
+	.set	at=k1
  #endif
-	LONG_S	k0, PT_R29(sp)
-	LONG_S	$3, PT_R3(sp)
+	PTR_SUBU sp, PT_SIZE
+ #ifdef CONFIG_CPU_DADDI_WORKAROUNDS
+	.set	noat
+ #endif
+	.if \docfi
+	.cfi_def_cfa sp,0
+	.endif
+	cfi_st	k0, PT_R29, \docfi
+	cfi_rel_offset  sp, PT_R29, \docfi
+	cfi_st	v1, PT_R3, \docfi
	/*
	 * You might think that you don't need to save $0,
	 * but the FPU emulator and gdb remote debug stub
···
	 */
	LONG_S	$0, PT_R0(sp)
	mfc0	v1, CP0_STATUS
-	LONG_S	$2, PT_R2(sp)
+	cfi_st	v0, PT_R2, \docfi
	LONG_S	v1, PT_STATUS(sp)
-	LONG_S	$4, PT_R4(sp)
+	cfi_st	$4, PT_R4, \docfi
	mfc0	v1, CP0_CAUSE
-	LONG_S	$5, PT_R5(sp)
+	cfi_st	$5, PT_R5, \docfi
	LONG_S	v1, PT_CAUSE(sp)
-	LONG_S	$6, PT_R6(sp)
-	MFC0	v1, CP0_EPC
-	LONG_S	$7, PT_R7(sp)
+	cfi_st	$6, PT_R6, \docfi
+	cfi_st	ra, PT_R31, \docfi
+	MFC0	ra, CP0_EPC
+	cfi_st	$7, PT_R7, \docfi
  #ifdef CONFIG_64BIT
-	LONG_S	$8, PT_R8(sp)
-	LONG_S	$9, PT_R9(sp)
+	cfi_st	$8, PT_R8, \docfi
+	cfi_st	$9, PT_R9, \docfi
  #endif
-	LONG_S	v1, PT_EPC(sp)
-	LONG_S	$25, PT_R25(sp)
-	LONG_S	$28, PT_R28(sp)
-	LONG_S	$31, PT_R31(sp)
+	LONG_S	ra, PT_EPC(sp)
+	.if \docfi
+	.cfi_rel_offset ra, PT_EPC
+	.endif
+	cfi_st	$25, PT_R25, \docfi
+	cfi_st	$28, PT_R28, \docfi

	/* Set thread_info if we're coming from user mode */
	mfc0	k0, CP0_STATUS
···
	.set	pop
	.endm

-	.macro	SAVE_ALL
-	SAVE_SOME
-	SAVE_AT
-	SAVE_TEMP
-	SAVE_STATIC
+	.macro	SAVE_ALL docfi=0
+	SAVE_SOME \docfi
+	SAVE_AT \docfi
+	SAVE_TEMP \docfi
+	SAVE_STATIC \docfi
	.endm

-	.macro	RESTORE_AT
+	.macro	RESTORE_AT docfi=0
	.set	push
	.set	noat
-	LONG_L	$1, PT_R1(sp)
+	cfi_ld	$1, PT_R1, \docfi
	.set	pop
	.endm

-	.macro	RESTORE_TEMP
+	.macro	RESTORE_TEMP docfi=0
  #ifdef CONFIG_CPU_CAVIUM_OCTEON
	/* Restore the Octeon multiplier state */
	jal	octeon_mult_restore
···
	mthi	$24
  #endif
  #ifdef CONFIG_32BIT
-	LONG_L	$8, PT_R8(sp)
-	LONG_L	$9, PT_R9(sp)
+	cfi_ld	$8, PT_R8, \docfi
+	cfi_ld	$9, PT_R9, \docfi
  #endif
-	LONG_L	$10, PT_R10(sp)
-	LONG_L	$11, PT_R11(sp)
-	LONG_L	$12, PT_R12(sp)
-	LONG_L	$13, PT_R13(sp)
-	LONG_L	$14, PT_R14(sp)
-	LONG_L	$15, PT_R15(sp)
-	LONG_L	$24, PT_R24(sp)
+	cfi_ld	$10, PT_R10, \docfi
+	cfi_ld	$11, PT_R11, \docfi
+	cfi_ld	$12, PT_R12, \docfi
+	cfi_ld	$13, PT_R13, \docfi
+	cfi_ld	$14, PT_R14, \docfi
+	cfi_ld	$15, PT_R15, \docfi
+	cfi_ld	$24, PT_R24, \docfi
	.endm

-	.macro	RESTORE_STATIC
-	LONG_L	$16, PT_R16(sp)
-	LONG_L	$17, PT_R17(sp)
-	LONG_L	$18, PT_R18(sp)
-	LONG_L	$19, PT_R19(sp)
-	LONG_L	$20, PT_R20(sp)
-	LONG_L	$21, PT_R21(sp)
-	LONG_L	$22, PT_R22(sp)
-	LONG_L	$23, PT_R23(sp)
-	LONG_L	$30, PT_R30(sp)
+	.macro	RESTORE_STATIC docfi=0
+	cfi_ld	$16, PT_R16, \docfi
+	cfi_ld	$17, PT_R17, \docfi
+	cfi_ld	$18, PT_R18, \docfi
+	cfi_ld	$19, PT_R19, \docfi
+	cfi_ld	$20, PT_R20, \docfi
+	cfi_ld	$21, PT_R21, \docfi
+	cfi_ld	$22, PT_R22, \docfi
+	cfi_ld	$23, PT_R23, \docfi
+	cfi_ld	$30, PT_R30, \docfi
+	.endm
+
+	.macro	RESTORE_SP docfi=0
+	cfi_ld	sp, PT_R29, \docfi
	.endm

  #if defined(CONFIG_CPU_R3000) || defined(CONFIG_CPU_TX39XX)

-	.macro	RESTORE_SOME
+	.macro	RESTORE_SOME docfi=0
	.set	push
	.set	reorder
	.set	noat
···
	and	v0, v1
	or	v0, a0
	mtc0	v0, CP0_STATUS
-	LONG_L	$31, PT_R31(sp)
-	LONG_L	$28, PT_R28(sp)
-	LONG_L	$25, PT_R25(sp)
-	LONG_L	$7, PT_R7(sp)
-	LONG_L	$6, PT_R6(sp)
-	LONG_L	$5, PT_R5(sp)
-	LONG_L	$4, PT_R4(sp)
-	LONG_L	$3, PT_R3(sp)
-	LONG_L	$2, PT_R2(sp)
+	cfi_ld	$31, PT_R31, \docfi
+	cfi_ld	$28, PT_R28, \docfi
+	cfi_ld	$25, PT_R25, \docfi
+	cfi_ld	$7, PT_R7, \docfi
+	cfi_ld	$6, PT_R6, \docfi
+	cfi_ld	$5, PT_R5, \docfi
+	cfi_ld	$4, PT_R4, \docfi
+	cfi_ld	$3, PT_R3, \docfi
+	cfi_ld	$2, PT_R2, \docfi
	.set	pop
	.endm

-	.macro	RESTORE_SP_AND_RET
+	.macro	RESTORE_SP_AND_RET docfi=0
	.set	push
	.set	noreorder
	LONG_L	k0, PT_EPC(sp)
-	LONG_L	sp, PT_R29(sp)
+	RESTORE_SP \docfi
	jr	k0
	 rfe
	.set	pop
	.endm

  #else
-	.macro	RESTORE_SOME
+	.macro	RESTORE_SOME docfi=0
	.set	push
	.set	reorder
	.set	noat
···
	mtc0	v0, CP0_STATUS
	LONG_L	v1, PT_EPC(sp)
	MTC0	v1, CP0_EPC
-	LONG_L	$31, PT_R31(sp)
-	LONG_L	$28, PT_R28(sp)
-	LONG_L	$25, PT_R25(sp)
+	cfi_ld	$31, PT_R31, \docfi
+	cfi_ld	$28, PT_R28, \docfi
+	cfi_ld	$25, PT_R25, \docfi
  #ifdef CONFIG_64BIT
-	LONG_L	$8, PT_R8(sp)
-	LONG_L	$9, PT_R9(sp)
+	cfi_ld	$8, PT_R8, \docfi
+	cfi_ld	$9, PT_R9, \docfi
  #endif
-	LONG_L	$7, PT_R7(sp)
-	LONG_L	$6, PT_R6(sp)
-	LONG_L	$5, PT_R5(sp)
-	LONG_L	$4, PT_R4(sp)
-	LONG_L	$3, PT_R3(sp)
-	LONG_L	$2, PT_R2(sp)
+	cfi_ld	$7, PT_R7, \docfi
+	cfi_ld	$6, PT_R6, \docfi
+	cfi_ld	$5, PT_R5, \docfi
+	cfi_ld	$4, PT_R4, \docfi
+	cfi_ld	$3, PT_R3, \docfi
+	cfi_ld	$2, PT_R2, \docfi
	.set	pop
	.endm

-	.macro	RESTORE_SP_AND_RET
-	LONG_L	sp, PT_R29(sp)
+	.macro	RESTORE_SP_AND_RET docfi=0
+	RESTORE_SP \docfi
  #ifdef CONFIG_CPU_MIPSR6
	eretnc
  #else
···

  #endif

-	.macro	RESTORE_SP
-	LONG_L	sp, PT_R29(sp)
-	.endm
-
-	.macro	RESTORE_ALL
-	RESTORE_TEMP
-	RESTORE_STATIC
-	RESTORE_AT
-	RESTORE_SOME
-	RESTORE_SP
+	.macro	RESTORE_ALL docfi=0
+	RESTORE_TEMP \docfi
+	RESTORE_STATIC \docfi
+	RESTORE_AT \docfi
+	RESTORE_SOME \docfi
+	RESTORE_SP \docfi
	.endm

	/*
+50 -14
arch/mips/include/asm/stacktrace.h
···
 #define _ASM_STACKTRACE_H

 #include <asm/ptrace.h>
+#include <asm/asm.h>
+#include <linux/stringify.h>

 #ifdef CONFIG_KALLSYMS
 extern int raw_show_trace;
···
 }
 #endif

+#define STR_PTR_LA	__stringify(PTR_LA)
+#define STR_LONG_S	__stringify(LONG_S)
+#define STR_LONG_L	__stringify(LONG_L)
+#define STR_LONGSIZE	__stringify(LONGSIZE)
+
+#define STORE_ONE_REG(r) \
+	STR_LONG_S " $" __stringify(r)",("STR_LONGSIZE"*"__stringify(r)")(%1)\n\t"
+
 static __always_inline void prepare_frametrace(struct pt_regs *regs)
 {
 #ifndef CONFIG_KALLSYMS
···
 	__asm__ __volatile__(
 		".set push\n\t"
 		".set noat\n\t"
-#ifdef CONFIG_64BIT
-		"1: dla $1, 1b\n\t"
-		"sd $1, %0\n\t"
-		"sd $29, %1\n\t"
-		"sd $31, %2\n\t"
-#else
-		"1: la $1, 1b\n\t"
-		"sw $1, %0\n\t"
-		"sw $29, %1\n\t"
-		"sw $31, %2\n\t"
-#endif
+		/* Store $1 so we can use it */
+		STR_LONG_S " $1,"STR_LONGSIZE"(%1)\n\t"
+		/* Store the PC */
+		"1: " STR_PTR_LA " $1, 1b\n\t"
+		STR_LONG_S " $1,%0\n\t"
+		STORE_ONE_REG(2)
+		STORE_ONE_REG(3)
+		STORE_ONE_REG(4)
+		STORE_ONE_REG(5)
+		STORE_ONE_REG(6)
+		STORE_ONE_REG(7)
+		STORE_ONE_REG(8)
+		STORE_ONE_REG(9)
+		STORE_ONE_REG(10)
+		STORE_ONE_REG(11)
+		STORE_ONE_REG(12)
+		STORE_ONE_REG(13)
+		STORE_ONE_REG(14)
+		STORE_ONE_REG(15)
+		STORE_ONE_REG(16)
+		STORE_ONE_REG(17)
+		STORE_ONE_REG(18)
+		STORE_ONE_REG(19)
+		STORE_ONE_REG(20)
+		STORE_ONE_REG(21)
+		STORE_ONE_REG(22)
+		STORE_ONE_REG(23)
+		STORE_ONE_REG(24)
+		STORE_ONE_REG(25)
+		STORE_ONE_REG(26)
+		STORE_ONE_REG(27)
+		STORE_ONE_REG(28)
+		STORE_ONE_REG(29)
+		STORE_ONE_REG(30)
+		STORE_ONE_REG(31)
+		/* Restore $1 */
+		STR_LONG_L " $1,"STR_LONGSIZE"(%1)\n\t"
 		".set pop\n\t"
-		: "=m" (regs->cp0_epc),
-		"=m" (regs->regs[29]), "=m" (regs->regs[31])
-		: : "memory");
+		: "=m" (regs->cp0_epc)
+		: "r" (regs->regs)
+		: "memory");
 }

 #endif /* _ASM_STACKTRACE_H */
+1 -1
arch/mips/include/asm/topology.h
···

 #ifdef CONFIG_SMP
 #define topology_physical_package_id(cpu)	(cpu_data[cpu].package)
-#define topology_core_id(cpu)			(cpu_data[cpu].core)
+#define topology_core_id(cpu)			(cpu_core(&cpu_data[cpu]))
 #define topology_core_cpumask(cpu)		(&cpu_core_map[cpu])
 #define topology_sibling_cpumask(cpu)		(&cpu_sibling_map[cpu])
 #endif
+1 -1
arch/mips/include/uapi/asm/inst.h
···
 struct mm16_r5_format {		/* Load/store from stack pointer format */
 	__BITFIELD_FIELD(unsigned int opcode : 6,
 	__BITFIELD_FIELD(unsigned int rt : 5,
-	__BITFIELD_FIELD(signed int simmediate : 5,
+	__BITFIELD_FIELD(unsigned int imm : 5,
 	__BITFIELD_FIELD(unsigned int : 16, /* Ignored */
 	;))))
 };
+9 -5
arch/mips/kernel/Makefile
···
 obj-$(CONFIG_FTRACE_SYSCALLS)	+= ftrace.o
 obj-$(CONFIG_FUNCTION_TRACER)	+= mcount.o ftrace.o

-obj-$(CONFIG_CPU_R4K_FPU)	+= r4k_fpu.o r4k_switch.o
-obj-$(CONFIG_CPU_R3000)		+= r2300_fpu.o r2300_switch.o
-obj-$(CONFIG_CPU_R6000)		+= r6000_fpu.o r4k_switch.o
-obj-$(CONFIG_CPU_TX39XX)	+= r2300_fpu.o r2300_switch.o
-obj-$(CONFIG_CPU_CAVIUM_OCTEON)	+= r4k_fpu.o octeon_switch.o
+sw-y				:= r4k_switch.o
+sw-$(CONFIG_CPU_R3000)		:= r2300_switch.o
+sw-$(CONFIG_CPU_TX39XX)		:= r2300_switch.o
+sw-$(CONFIG_CPU_CAVIUM_OCTEON)	:= octeon_switch.o
+obj-y				+= $(sw-y)
+
+obj-$(CONFIG_CPU_R4K_FPU)	+= r4k_fpu.o
+obj-$(CONFIG_CPU_R3000)		+= r2300_fpu.o
+obj-$(CONFIG_CPU_TX39XX)	+= r2300_fpu.o

 obj-$(CONFIG_SMP)		+= smp.o
 obj-$(CONFIG_SMP_UP)		+= smp-up.o
+2 -2
arch/mips/kernel/cps-vec.S
···
 	 * to handle contiguous VP numbering, but no such systems yet
 	 * exist.
 	 */
-	mfc0	t9, $3, 1
-	andi	t9, t9, 0xff
+	mfc0	t9, CP0_GLOBALNUMBER
+	andi	t9, t9, MIPS_GLOBALNUMBER_VP
 #elif defined(CONFIG_MIPS_MT_SMP)
 	has_mt	ta2, 1f

+38 -21
arch/mips/kernel/cpu-probe.c
···

 __setup("nofpu", fpu_disable);

-int mips_dsp_disabled;
+static int mips_dsp_disabled;

 static int __init dsp_disable(char *s)
 {
···

 #ifndef CONFIG_MIPS_CPS
 	if (cpu_has_mips_r2_r6) {
-		c->core = get_ebase_cpunum();
+		unsigned int core;
+
+		core = get_ebase_cpunum();
 		if (cpu_has_mipsmt)
-			c->core >>= fls(core_nvpes()) - 1;
+			core >>= fls(core_nvpes()) - 1;
+		cpu_set_core(c, core);
 	}
 #endif
 }
···
 		c->options = R4K_OPTS | MIPS_CPU_FPU | MIPS_CPU_32FPR |
 			     MIPS_CPU_DIVEC | MIPS_CPU_LLSC;
 		c->tlbsize = 48;
-		break;
-	case PRID_IMP_R6000:
-		c->cputype = CPU_R6000;
-		__cpu_name[cpu] = "R6000";
-		set_isa(c, MIPS_CPU_ISA_II);
-		c->fpu_msk31 |= FPU_CSR_CONDX | FPU_CSR_FS;
-		c->options = MIPS_CPU_TLB | MIPS_CPU_FPU |
-			     MIPS_CPU_LLSC;
-		c->tlbsize = 32;
-		break;
-	case PRID_IMP_R6000A:
-		c->cputype = CPU_R6000A;
-		__cpu_name[cpu] = "R6000A";
-		set_isa(c, MIPS_CPU_ISA_II);
-		c->fpu_msk31 |= FPU_CSR_CONDX | FPU_CSR_FS;
-		c->options = MIPS_CPU_TLB | MIPS_CPU_FPU |
-			     MIPS_CPU_LLSC;
-		c->tlbsize = 32;
 		break;
 	case PRID_IMP_RM7000:
 		c->cputype = CPU_RM7000;
···
 	printk(KERN_INFO "FPU revision is: %08x\n", c->fpu_id);
 	if (cpu_has_msa)
 		pr_info("MSA revision is: %08x\n", c->msa_id);
+}
+
+void cpu_set_cluster(struct cpuinfo_mips *cpuinfo, unsigned int cluster)
+{
+	/* Ensure the core number fits in the field */
+	WARN_ON(cluster > (MIPS_GLOBALNUMBER_CLUSTER >>
+			   MIPS_GLOBALNUMBER_CLUSTER_SHF));
+
+	cpuinfo->globalnumber &= ~MIPS_GLOBALNUMBER_CLUSTER;
+	cpuinfo->globalnumber |= cluster << MIPS_GLOBALNUMBER_CLUSTER_SHF;
+}
+
+void cpu_set_core(struct cpuinfo_mips *cpuinfo, unsigned int core)
+{
+	/* Ensure the core number fits in the field */
+	WARN_ON(core > (MIPS_GLOBALNUMBER_CORE >> MIPS_GLOBALNUMBER_CORE_SHF));
+
+	cpuinfo->globalnumber &= ~MIPS_GLOBALNUMBER_CORE;
+	cpuinfo->globalnumber |= core << MIPS_GLOBALNUMBER_CORE_SHF;
+}
+
+void cpu_set_vpe_id(struct cpuinfo_mips *cpuinfo, unsigned int vpe)
+{
+	/* Ensure the VP(E) ID fits in the field */
+	WARN_ON(vpe > (MIPS_GLOBALNUMBER_VP >> MIPS_GLOBALNUMBER_VP_SHF));
+
+	/* Ensure we're not using VP(E)s without support */
+	WARN_ON(vpe && !IS_ENABLED(CONFIG_MIPS_MT_SMP) &&
+		!IS_ENABLED(CONFIG_CPU_MIPSR6));
+
+	cpuinfo->globalnumber &= ~MIPS_GLOBALNUMBER_VP;
+	cpuinfo->globalnumber |= vpe << MIPS_GLOBALNUMBER_VP_SHF;
 }
+8 -5
arch/mips/kernel/genex.S
···
 	.align	5
 	BUILD_ROLLBACK_PROLOGUE handle_int
 NESTED(handle_int, PT_SIZE, sp)
+	.cfi_signal_frame
 #ifdef CONFIG_TRACE_IRQFLAGS
 	/*
 	 * Check to see if the interrupted code has just disabled
···
 1:
 	.set	pop
 #endif
-	SAVE_ALL
+	SAVE_ALL docfi=1
 	CLI
 	TRACE_IRQS_OFF

···
 	 */
 	BUILD_ROLLBACK_PROLOGUE except_vec_vi
 NESTED(except_vec_vi, 0, sp)
-	SAVE_SOME
-	SAVE_AT
+	SAVE_SOME docfi=1
+	SAVE_AT docfi=1
 	.set	push
 	.set	noreorder
 	PTR_LA	v1, except_vec_vi_handler
···
 	__FINIT

 NESTED(nmi_handler, PT_SIZE, sp)
+	.cfi_signal_frame
 	.set	push
 	.set	noat
 	/*
···
 	.macro	__BUILD_HANDLER exception handler clear verbose ext
 	.align	5
 	NESTED(handle_\exception, PT_SIZE, sp)
+	.cfi_signal_frame
 	.set	noat
 	SAVE_ALL
 	FEXPORT(handle_\exception\ext)
···
 	.set	at
 	__BUILD_\verbose \exception
 	move	a0, sp
-	PTR_LA	ra, ret_from_exception
-	j	do_\handler
+	jal	do_\handler
+	j	ret_from_exception
 	END(handle_\exception)
 	.endm

+1
arch/mips/kernel/idle.c
···
  * as published by the Free Software Foundation; either version
  * 2 of the License, or (at your option) any later version.
  */
+#include <linux/cpu.h>
 #include <linux/export.h>
 #include <linux/init.h>
 #include <linux/irqflags.h>
+51 -43
arch/mips/kernel/mips-cm.c
···
 #include <linux/percpu.h>
 #include <linux/spinlock.h>

-#include <asm/mips-cm.h>
+#include <asm/mips-cps.h>
 #include <asm/mipsregs.h>

-void __iomem *mips_cm_base;
+void __iomem *mips_gcr_base;
 void __iomem *mips_cm_l2sync_base;
 int mips_cm_is64;

···
 	 * current location.
 	 */
 	base_reg = read_gcr_l2_only_sync_base();
-	if (base_reg & CM_GCR_L2_ONLY_SYNC_BASE_SYNCEN_MSK)
-		return base_reg & CM_GCR_L2_ONLY_SYNC_BASE_SYNCBASE_MSK;
+	if (base_reg & CM_GCR_L2_ONLY_SYNC_BASE_SYNCEN)
+		return base_reg & CM_GCR_L2_ONLY_SYNC_BASE_SYNCBASE;

 	/* Default to following the CM */
 	return mips_cm_phys_base() + MIPS_CM_GCR_SIZE;
···
 	phys_addr_t addr;

 	/* L2-only sync was introduced with CM major revision 6 */
-	major_rev = (read_gcr_rev() & CM_GCR_REV_MAJOR_MSK) >>
-		CM_GCR_REV_MAJOR_SHF;
+	major_rev = (read_gcr_rev() & CM_GCR_REV_MAJOR) >>
+		__ffs(CM_GCR_REV_MAJOR);
 	if (major_rev < 6)
 		return;

 	/* Find a location for the L2 sync region */
 	addr = mips_cm_l2sync_phys_base();
-	BUG_ON((addr & CM_GCR_L2_ONLY_SYNC_BASE_SYNCBASE_MSK) != addr);
+	BUG_ON((addr & CM_GCR_L2_ONLY_SYNC_BASE_SYNCBASE) != addr);
 	if (!addr)
 		return;

 	/* Set the region base address & enable it */
-	write_gcr_l2_only_sync_base(addr | CM_GCR_L2_ONLY_SYNC_BASE_SYNCEN_MSK);
+	write_gcr_l2_only_sync_base(addr | CM_GCR_L2_ONLY_SYNC_BASE_SYNCEN);

 	/* Map the region */
 	mips_cm_l2sync_base = ioremap_nocache(addr, MIPS_CM_L2SYNC_SIZE);
···
 	 * No need to probe again if we have already been
 	 * here before.
 	 */
-	if (mips_cm_base)
+	if (mips_gcr_base)
 		return 0;

 	addr = mips_cm_phys_base();
-	BUG_ON((addr & CM_GCR_BASE_GCRBASE_MSK) != addr);
+	BUG_ON((addr & CM_GCR_BASE_GCRBASE) != addr);
 	if (!addr)
 		return -ENODEV;

-	mips_cm_base = ioremap_nocache(addr, MIPS_CM_GCR_SIZE);
-	if (!mips_cm_base)
+	mips_gcr_base = ioremap_nocache(addr, MIPS_CM_GCR_SIZE);
+	if (!mips_gcr_base)
 		return -ENXIO;

 	/* sanity check that we're looking at a CM */
 	base_reg = read_gcr_base();
-	if ((base_reg & CM_GCR_BASE_GCRBASE_MSK) != addr) {
+	if ((base_reg & CM_GCR_BASE_GCRBASE) != addr) {
 		pr_err("GCRs appear to have been moved (expected them at 0x%08lx)!\n",
 		       (unsigned long)addr);
-		mips_cm_base = NULL;
+		mips_gcr_base = NULL;
 		return -ENODEV;
 	}

 	/* set default target to memory */
-	base_reg &= ~CM_GCR_BASE_CMDEFTGT_MSK;
-	base_reg |= CM_GCR_BASE_CMDEFTGT_MEM;
-	write_gcr_base(base_reg);
+	change_gcr_base(CM_GCR_BASE_CMDEFTGT, CM_GCR_BASE_CMDEFTGT_MEM);

 	/* disable CM regions */
-	write_gcr_reg0_base(CM_GCR_REGn_BASE_BASEADDR_MSK);
-	write_gcr_reg0_mask(CM_GCR_REGn_MASK_ADDRMASK_MSK);
-	write_gcr_reg1_base(CM_GCR_REGn_BASE_BASEADDR_MSK);
-	write_gcr_reg1_mask(CM_GCR_REGn_MASK_ADDRMASK_MSK);
-	write_gcr_reg2_base(CM_GCR_REGn_BASE_BASEADDR_MSK);
-	write_gcr_reg2_mask(CM_GCR_REGn_MASK_ADDRMASK_MSK);
-	write_gcr_reg3_base(CM_GCR_REGn_BASE_BASEADDR_MSK);
-	write_gcr_reg3_mask(CM_GCR_REGn_MASK_ADDRMASK_MSK);
+	write_gcr_reg0_base(CM_GCR_REGn_BASE_BASEADDR);
+	write_gcr_reg0_mask(CM_GCR_REGn_MASK_ADDRMASK);
+	write_gcr_reg1_base(CM_GCR_REGn_BASE_BASEADDR);
+	write_gcr_reg1_mask(CM_GCR_REGn_MASK_ADDRMASK);
+	write_gcr_reg2_base(CM_GCR_REGn_BASE_BASEADDR);
+	write_gcr_reg2_mask(CM_GCR_REGn_MASK_ADDRMASK);
+	write_gcr_reg3_base(CM_GCR_REGn_BASE_BASEADDR);
+	write_gcr_reg3_mask(CM_GCR_REGn_MASK_ADDRMASK);

 	/* probe for an L2-only sync region */
 	mips_cm_probe_l2sync();
···
 	return 0;
 }

-void mips_cm_lock_other(unsigned int core, unsigned int vp)
+void mips_cm_lock_other(unsigned int cluster, unsigned int core,
+			unsigned int vp, unsigned int block)
 {
-	unsigned curr_core;
+	unsigned int curr_core, cm_rev;
 	u32 val;

+	cm_rev = mips_cm_revision();
 	preempt_disable();

-	if (mips_cm_revision() >= CM_REV_CM3) {
-		val = core << CM3_GCR_Cx_OTHER_CORE_SHF;
-		val |= vp << CM3_GCR_Cx_OTHER_VP_SHF;
+	if (cm_rev >= CM_REV_CM3) {
+		val = core << __ffs(CM3_GCR_Cx_OTHER_CORE);
+		val |= vp << __ffs(CM3_GCR_Cx_OTHER_VP);
+
+		if (cm_rev >= CM_REV_CM3_5) {
+			val |= CM_GCR_Cx_OTHER_CLUSTER_EN;
+			val |= cluster << __ffs(CM_GCR_Cx_OTHER_CLUSTER);
+			val |= block << __ffs(CM_GCR_Cx_OTHER_BLOCK);
+		} else {
+			WARN_ON(cluster != 0);
+			WARN_ON(block != CM_GCR_Cx_OTHER_BLOCK_LOCAL);
+		}

 		/*
 		 * We need to disable interrupts in SMP systems in order to
···
 		spin_lock_irqsave(this_cpu_ptr(&cm_core_lock),
 				  *this_cpu_ptr(&cm_core_lock_flags));
 	} else {
+		WARN_ON(cluster != 0);
 		WARN_ON(vp != 0);
+		WARN_ON(block != CM_GCR_Cx_OTHER_BLOCK_LOCAL);

 		/*
 		 * We only have a GCR_CL_OTHER per core in systems with
 		 * CM 2.5 & older, so have to ensure other VP(E)s don't
 		 * race with us.
 		 */
-		curr_core = current_cpu_data.core;
+		curr_core = cpu_core(&current_cpu_data);
 		spin_lock_irqsave(&per_cpu(cm_core_lock, curr_core),
 				  per_cpu(cm_core_lock_flags, curr_core));

-		val = core << CM_GCR_Cx_OTHER_CORENUM_SHF;
+		val = core << __ffs(CM_GCR_Cx_OTHER_CORENUM);
 	}

 	write_gcr_cl_other(val);
···
 	unsigned int curr_core;

 	if (mips_cm_revision() < CM_REV_CM3) {
-		curr_core = current_cpu_data.core;
+		curr_core = cpu_core(&current_cpu_data);
 		spin_unlock_irqrestore(&per_cpu(cm_core_lock, curr_core),
 				       per_cpu(cm_core_lock_flags, curr_core));
 	} else {
···
 		return;

 	revision = mips_cm_revision();
+	cm_error = read_gcr_error_cause();
+	cm_addr = read_gcr_error_addr();
+	cm_other = read_gcr_error_mult();

 	if (revision < CM_REV_CM3) { /* CM2 */
-		cm_error = read_gcr_error_cause();
-		cm_addr = read_gcr_error_addr();
-		cm_other = read_gcr_error_mult();
-		cause = cm_error >> CM_GCR_ERROR_CAUSE_ERRTYPE_SHF;
-		ocause = cm_other >> CM_GCR_ERROR_MULT_ERR2ND_SHF;
+		cause = cm_error >> __ffs(CM_GCR_ERROR_CAUSE_ERRTYPE);
+		ocause = cm_other >> __ffs(CM_GCR_ERROR_MULT_ERR2ND);

 		if (!cause)
 			return;
···
 		ulong core_id_bits, vp_id_bits, cmd_bits, cmd_group_bits;
 		ulong cm3_cca_bits, mcp_bits, cm3_tr_bits, sched_bit;

-		cm_error = read64_gcr_error_cause();
-		cm_addr = read64_gcr_error_addr();
-		cm_other = read64_gcr_error_mult();
-		cause = cm_error >> CM3_GCR_ERROR_CAUSE_ERRTYPE_SHF;
-		ocause = cm_other >> CM_GCR_ERROR_MULT_ERR2ND_SHF;
+		cause = cm_error >> __ffs64(CM3_GCR_ERROR_CAUSE_ERRTYPE);
+		ocause = cm_other >> __ffs(CM_GCR_ERROR_MULT_ERR2ND);

 		if (!cause)
 			return;
+8 -9
arch/mips/kernel/mips-cpc.c
···
 #include <linux/percpu.h>
 #include <linux/spinlock.h>

-#include <asm/mips-cm.h>
-#include <asm/mips-cpc.h>
+#include <asm/mips-cps.h>

 void __iomem *mips_cpc_base;

···
 	if (!mips_cm_present())
 		return 0;

-	if (!(read_gcr_cpc_status() & CM_GCR_CPC_STATUS_EX_MSK))
+	if (!(read_gcr_cpc_status() & CM_GCR_CPC_STATUS_EX))
 		return 0;

 	/* If the CPC is already enabled, leave it so */
 	cpc_base = read_gcr_cpc_base();
-	if (cpc_base & CM_GCR_CPC_BASE_CPCEN_MSK)
-		return cpc_base & CM_GCR_CPC_BASE_CPCBASE_MSK;
+	if (cpc_base & CM_GCR_CPC_BASE_CPCEN)
+		return cpc_base & CM_GCR_CPC_BASE_CPCBASE;

 	/* Otherwise, use the default address */
 	cpc_base = mips_cpc_default_phys_base();
···
 		return cpc_base;

 	/* Enable the CPC, mapped at the default address */
-	write_gcr_cpc_base(cpc_base | CM_GCR_CPC_BASE_CPCEN_MSK);
+	write_gcr_cpc_base(cpc_base | CM_GCR_CPC_BASE_CPCEN);
 	return cpc_base;
 }

···
 		return;

 	preempt_disable();
-	curr_core = current_cpu_data.core;
+	curr_core = cpu_core(&current_cpu_data);
 	spin_lock_irqsave(&per_cpu(cpc_core_lock, curr_core),
 			  per_cpu(cpc_core_lock_flags, curr_core));
-	write_cpc_cl_other(core << CPC_Cx_OTHER_CORENUM_SHF);
+	write_cpc_cl_other(core << __ffs(CPC_Cx_OTHER_CORENUM));

 	/*
 	 * Ensure the core-other region reflects the appropriate core &
···
 		/* Systems with CM >= 3 lock the CPC via mips_cm_lock_other */
 		return;

-	curr_core = current_cpu_data.core;
+	curr_core = cpu_core(&current_cpu_data);
 	spin_unlock_irqrestore(&per_cpu(cpc_core_lock, curr_core),
 			       per_cpu(cpc_core_lock_flags, curr_core));
 	preempt_enable();
+9 -7
arch/mips/kernel/mips-r2-to-r6-emul.c
···
 #define LL	"ll "
 #define SC	"sc "

-DEFINE_PER_CPU(struct mips_r2_emulator_stats, mipsr2emustats);
-DEFINE_PER_CPU(struct mips_r2_emulator_stats, mipsr2bdemustats);
-DEFINE_PER_CPU(struct mips_r2br_emulator_stats, mipsr2bremustats);
+#ifdef CONFIG_DEBUG_FS
+static DEFINE_PER_CPU(struct mips_r2_emulator_stats, mipsr2emustats);
+static DEFINE_PER_CPU(struct mips_r2_emulator_stats, mipsr2bdemustats);
+static DEFINE_PER_CPU(struct mips_r2br_emulator_stats, mipsr2bremustats);
+#endif

 extern const unsigned int fpucondbit[8];

···
 }

 /* R6 removed instructions for the SPECIAL opcode */
-static struct r2_decoder_table spec_op_table[] = {
+static const struct r2_decoder_table spec_op_table[] = {
 	{ 0xfc1ff83f, 0x00000008, jr_func },
 	{ 0xfc00ffff, 0x00000018, mult_func },
 	{ 0xfc00ffff, 0x00000019, multu_func },
···
 }

 /* R6 removed instructions for the SPECIAL2 opcode */
-static struct r2_decoder_table spec2_op_table[] = {
+static const struct r2_decoder_table spec2_op_table[] = {
 	{ 0xfc00ffff, 0x70000000, madd_func },
 	{ 0xfc00ffff, 0x70000001, maddu_func },
 	{ 0xfc0007ff, 0x70000002, mul_func },
···
 };

 static inline int mipsr2_find_op_func(struct pt_regs *regs, u32 inst,
-				      struct r2_decoder_table *table)
+				      const struct r2_decoder_table *table)
 {
-	struct r2_decoder_table *p;
+	const struct r2_decoder_table *p;
 	int err;

 	for (p = table; p->func; p++) {
+6 -5
arch/mips/kernel/octeon_switch.S
···
  * Copyright (C) 2000 MIPS Technologies, Inc.
  *    written by Carsten Langgaard, carstenl@mips.com
  */
+#include <asm/asm.h>
+#include <asm/export.h>
+#include <asm/asm-offsets.h>
+#include <asm/mipsregs.h>
+#include <asm/regdef.h>
+#include <asm/stackframe.h>

-#define USE_ALTERNATE_RESUME_IMPL 1
-	.set push
-	.set arch=mips64r2
-#include "r4k_switch.S"
-	.set pop
 /*
  * task_struct *resume(task_struct *prev, task_struct *next,
  *		       struct thread_info *next_ti)
+8 -9
arch/mips/kernel/pm-cps.c
···
 #include <asm/cacheflush.h>
 #include <asm/cacheops.h>
 #include <asm/idle.h>
-#include <asm/mips-cm.h>
-#include <asm/mips-cpc.h>
+#include <asm/mips-cps.h>
 #include <asm/mipsmtregs.h>
 #include <asm/pm.h>
 #include <asm/pm-cps.h>
···
 		       nc_asm_enter);

 /* Bitmap indicating which states are supported by the system */
-DECLARE_BITMAP(state_support, CPS_PM_STATE_COUNT);
+static DECLARE_BITMAP(state_support, CPS_PM_STATE_COUNT);

 /*
  * Indicates the number of coupled VPEs ready to operate in a non-coherent
···
 int cps_pm_enter_state(enum cps_pm_state state)
 {
 	unsigned cpu = smp_processor_id();
-	unsigned core = current_cpu_data.core;
+	unsigned core = cpu_core(&current_cpu_data);
 	unsigned online, left;
 	cpumask_t *coupled_mask = this_cpu_ptr(&online_coupled);
 	u32 *core_ready_count, *nc_core_ready_count;
···
 	 * defined by the interAptiv & proAptiv SUMs as ensuring that the
 	 * operation resulting from the preceding store is complete.
 	 */
-	uasm_i_addiu(&p, t0, zero, 1 << cpu_data[cpu].core);
+	uasm_i_addiu(&p, t0, zero, 1 << cpu_core(&cpu_data[cpu]));
 	uasm_i_sw(&p, t0, 0, r_pcohctl);
 	uasm_i_lw(&p, t0, 0, r_pcohctl);

···
 	 * rest will just be performing a rather unusual nop.
 	 */
 	uasm_i_addiu(&p, t0, zero, mips_cm_revision() < CM_REV_CM3
-				? CM_GCR_Cx_COHERENCE_COHDOMAINEN_MSK
-				: CM3_GCR_Cx_COHERENCE_COHEN_MSK);
+				? CM_GCR_Cx_COHERENCE_COHDOMAINEN
+				: CM3_GCR_Cx_COHERENCE_COHEN);

 	uasm_i_sw(&p, t0, 0, r_pcohctl);
 	uasm_i_lw(&p, t0, 0, r_pcohctl);
···
 static int cps_pm_online_cpu(unsigned int cpu)
 {
 	enum cps_pm_state state;
-	unsigned core = cpu_data[cpu].core;
+	unsigned core = cpu_core(&cpu_data[cpu]);
 	void *entry_fn, *core_rc;

 	for (state = CPS_PM_NC_WAIT; state < CPS_PM_STATE_COUNT; state++) {
···
 	/* Detect whether a CPC is present */
 	if (mips_cpc_present()) {
 		/* Detect whether clock gating is implemented */
-		if (read_cpc_cl_stat_conf() & CPC_Cx_STAT_CONF_CLKGAT_IMPL_MSK)
+		if (read_cpc_cl_stat_conf() & CPC_Cx_STAT_CONF_CLKGAT_IMPL)
 			set_bit(CPS_PM_CLOCK_GATED, state_support);
 		else
 			pr_warn("pm-cps: CPC does not support clock gating\n");
+3 -3
arch/mips/kernel/proc.c
···
 	seq_printf(m, "kscratch registers\t: %d\n",
 		   hweight8(cpu_data[n].kscratch_mask));
 	seq_printf(m, "package\t\t\t: %d\n", cpu_data[n].package);
-	seq_printf(m, "core\t\t\t: %d\n", cpu_data[n].core);
+	seq_printf(m, "core\t\t\t: %d\n", cpu_core(&cpu_data[n]));

 #if defined(CONFIG_MIPS_MT_SMP) || defined(CONFIG_CPU_MIPSR6)
 	if (cpu_has_mipsmt)
-		seq_printf(m, "VPE\t\t\t: %d\n", cpu_data[n].vpe_id);
+		seq_printf(m, "VPE\t\t\t: %d\n", cpu_vpe_id(&cpu_data[n]));
 	else if (cpu_has_vp)
-		seq_printf(m, "VP\t\t\t: %d\n", cpu_data[n].vpe_id);
+		seq_printf(m, "VP\t\t\t: %d\n", cpu_vpe_id(&cpu_data[n]));
 #endif

 	sprintf(fmt, "VCE%%c exceptions\t\t: %s\n",
+61 -41
arch/mips/kernel/process.c
···
 	 *
 	 * microMIPS is way more fun...
 	 */
-	if (mm_insn_16bit(ip->halfword[1])) {
+	if (mm_insn_16bit(ip->word >> 16)) {
 		switch (ip->mm16_r5_format.opcode) {
 		case mm_swsp16_op:
 			if (ip->mm16_r5_format.rt != 31)
 				return 0;

-			*poff = ip->mm16_r5_format.simmediate;
+			*poff = ip->mm16_r5_format.imm;
 			*poff = (*poff << 2) / sizeof(ulong);
 			return 1;

···
 	 *
 	 * microMIPS is kind of more fun...
 	 */
-	if (mm_insn_16bit(ip->halfword[1])) {
+	if (mm_insn_16bit(ip->word >> 16)) {
 		if ((ip->mm16_r5_format.opcode == mm_pool16c_op &&
 		    (ip->mm16_r5_format.rt & mm_jr16_op) == mm_jr16_op))
 			return 1;
···
 #endif
 }

-static inline int is_sp_move_ins(union mips_instruction *ip)
+static inline int is_sp_move_ins(union mips_instruction *ip, int *frame_size)
 {
 #ifdef CONFIG_CPU_MICROMIPS
+	unsigned short tmp;
+
 	/*
 	 * addiusp -imm
 	 * addius5 sp,-imm
···
 	 *
 	 * microMIPS is not more fun...
 	 */
-	if (mm_insn_16bit(ip->halfword[1])) {
-		return (ip->mm16_r3_format.opcode == mm_pool16d_op &&
-			ip->mm16_r3_format.simmediate && mm_addiusp_func) ||
-		       (ip->mm16_r5_format.opcode == mm_pool16d_op &&
-			ip->mm16_r5_format.rt == 29);
+	if (mm_insn_16bit(ip->word >> 16)) {
+		if (ip->mm16_r3_format.opcode == mm_pool16d_op &&
+		    ip->mm16_r3_format.simmediate & mm_addiusp_func) {
+			tmp = ip->mm_b0_format.simmediate >> 1;
+			tmp = ((tmp & 0x1ff) ^ 0x100) - 0x100;
+			if ((tmp + 2) < 4) /* 0x0,0x1,0x1fe,0x1ff are special */
+				tmp ^= 0x100;
+			*frame_size = -(signed short)(tmp << 2);
+			return 1;
+		}
+		if (ip->mm16_r5_format.opcode == mm_pool16d_op &&
+		    ip->mm16_r5_format.rt == 29) {
+			tmp = ip->mm16_r5_format.imm >> 1;
+			*frame_size = -(signed short)(tmp & 0xf);
+			return 1;
+		}
+		return 0;
 	}

-	return ip->mm_i_format.opcode == mm_addiu32_op &&
-	       ip->mm_i_format.rt == 29 && ip->mm_i_format.rs == 29;
+	if (ip->mm_i_format.opcode == mm_addiu32_op &&
+	    ip->mm_i_format.rt == 29 && ip->mm_i_format.rs == 29) {
+		*frame_size = -ip->i_format.simmediate;
+		return 1;
+	}
 #else
 	/* addiu/daddiu sp,sp,-imm */
 	if (ip->i_format.rs != 29 || ip->i_format.rt != 29)
 		return 0;
-	if (ip->i_format.opcode == addiu_op || ip->i_format.opcode == daddiu_op)
+
+	if (ip->i_format.opcode == addiu_op ||
+	    ip->i_format.opcode == daddiu_op) {
+		*frame_size = -ip->i_format.simmediate;
 		return 1;
+	}
 #endif
 	return 0;
 }
···
 	bool is_mmips = IS_ENABLED(CONFIG_CPU_MICROMIPS);
 	union mips_instruction insn, *ip, *ip_end;
 	const unsigned int max_insns = 128;
+	unsigned int last_insn_size = 0;
 	unsigned int i;
+	bool saw_jump = false;

 	info->pc_offset = -1;
 	info->frame_size = 0;
···

 	ip_end = (void *)ip + info->func_size;

-	for (i = 0; i < max_insns && ip < ip_end; i++, ip++) {
+	for (i = 0; i < max_insns && ip < ip_end; i++) {
+		ip = (void *)ip + last_insn_size;
 		if (is_mmips && mm_insn_16bit(ip->halfword[0])) {
-			insn.halfword[0] = 0;
-			insn.halfword[1] = ip->halfword[0];
+			insn.word = ip->halfword[0] << 16;
+			last_insn_size = 2;
 		} else if (is_mmips) {
-			insn.halfword[0] = ip->halfword[1];
-			insn.halfword[1] = ip->halfword[0];
+			insn.word = ip->halfword[0] << 16 | ip->halfword[1];
+			last_insn_size = 4;
 		} else {
 			insn.word = ip->word;
+			last_insn_size = 4;
 		}

-		if (is_jump_ins(&insn))
-			break;
-
 		if (!info->frame_size) {
-			if (is_sp_move_ins(&insn))
-			{
-#ifdef CONFIG_CPU_MICROMIPS
-				if (mm_insn_16bit(ip->halfword[0]))
-				{
-					unsigned short tmp;
-
-					if (ip->halfword[0] & mm_addiusp_func)
-					{
-						tmp = (((ip->halfword[0] >> 1) & 0x1ff) << 2);
-						info->frame_size = -(signed short)(tmp | ((tmp & 0x100) ? 0xfe00 : 0));
-					} else {
-						tmp = (ip->halfword[0] >> 1);
-						info->frame_size = -(signed short)(tmp & 0xf);
-					}
-					ip = (void *) &ip->halfword[1];
-					ip--;
-				} else
-#endif
-				info->frame_size = - ip->i_format.simmediate;
-			}
+			is_sp_move_ins(&insn, &info->frame_size);
+			continue;
+		} else if (!saw_jump && is_jump_ins(ip)) {
+			/*
+			 * If we see a jump instruction, we are finished
+			 * with the frame save.
+			 *
+			 * Some functions can have a shortcut return at
+			 * the beginning of the function, so don't start
+			 * looking for jump instruction until we see the
+			 * frame setup.
+			 *
+			 * The RA save instruction can get put into the
+			 * delay slot of the jump instruction, so look
+			 * at the next instruction, too.
+			 */
+			saw_jump = true;
 			continue;
 		}
 		if (info->pc_offset == -1 &&
 		    is_ra_save_ins(&insn, &info->pc_offset))
+			break;
+		if (saw_jump)
 			break;
 	}
 	if (info->frame_size && info->pc_offset >= 0) /* nested */
+79 -1
arch/mips/kernel/r2300_fpu.S
···
  * Copyright (c) 1998 Harald Koerfgen
  */
 #include <asm/asm.h>
+#include <asm/asmmacro.h>
 #include <asm/errno.h>
+#include <asm/export.h>
 #include <asm/fpregdef.h>
 #include <asm/mipsregs.h>
 #include <asm/asm-offsets.h>
···
 	PTR	9b+4,bad_stack;					\
 	.previous

-	.set	noreorder
 	.set	mips1
+
+/*
+ * Save a thread's fp context.
+ */
+LEAF(_save_fp)
+EXPORT_SYMBOL(_save_fp)
+	fpu_save_single a0, t1			# clobbers t1
+	jr	ra
+	END(_save_fp)
+
+/*
+ * Restore a thread's fp context.
+ */
+LEAF(_restore_fp)
+	fpu_restore_single a0, t1		# clobbers t1
+	jr	ra
+	END(_restore_fp)
+
+/*
+ * Load the FPU with signalling NANS.  This bit pattern we're using has
+ * the property that no matter whether considered as single or as double
+ * precision represents signaling NANS.
+ *
+ * The value to initialize fcr31 to comes in $a0.
+ */
+
+	.set push
+	SET_HARDFLOAT
+
+LEAF(_init_fpu)
+	mfc0	t0, CP0_STATUS
+	li	t1, ST0_CU1
+	or	t0, t1
+	mtc0	t0, CP0_STATUS
+
+	ctc1	a0, fcr31
+
+	li	t0, -1
+
+	mtc1	t0, $f0
+	mtc1	t0, $f1
+	mtc1	t0, $f2
+	mtc1	t0, $f3
+	mtc1	t0, $f4
+	mtc1	t0, $f5
+	mtc1	t0, $f6
+	mtc1	t0, $f7
+	mtc1	t0, $f8
+	mtc1	t0, $f9
+	mtc1	t0, $f10
+	mtc1	t0, $f11
+	mtc1	t0, $f12
+	mtc1	t0, $f13
+	mtc1	t0, $f14
+	mtc1	t0, $f15
+	mtc1	t0, $f16
+	mtc1	t0, $f17
+	mtc1	t0, $f18
+	mtc1	t0, $f19
+	mtc1	t0, $f20
+	mtc1	t0, $f21
+	mtc1	t0, $f22
+	mtc1	t0, $f23
+	mtc1	t0, $f24
+	mtc1	t0, $f25
+	mtc1	t0, $f26
+	mtc1	t0, $f27
+	mtc1	t0, $f28
+	mtc1	t0, $f29
+	mtc1	t0, $f30
+	mtc1	t0, $f31
+	jr	ra
+	END(_init_fpu)
+
+	.set pop
+
+	.set	noreorder

 /**
  * _save_fp_context() - save FP context from the FPU
-81
arch/mips/kernel/r2300_switch.S
···
 	.align	5

 /*
- * Offset to the current process status flags, the first 32 bytes of the
- * stack are not used.
- */
-#define ST_OFF (_THREAD_SIZE - 32 - PT_SIZE + PT_STATUS)
-
-/*
  * task_struct *resume(task_struct *prev, task_struct *next,
  *		       struct thread_info *next_ti)
  */
···
 	move	v0, a0
 	jr	ra
 	END(resume)
-
-/*
- * Save a thread's fp context.
- */
-LEAF(_save_fp)
-EXPORT_SYMBOL(_save_fp)
-	fpu_save_single a0, t1			# clobbers t1
-	jr	ra
-	END(_save_fp)
-
-/*
- * Restore a thread's fp context.
- */
-LEAF(_restore_fp)
-	fpu_restore_single a0, t1		# clobbers t1
-	jr	ra
-	END(_restore_fp)
-
-/*
- * Load the FPU with signalling NANS.  This bit pattern we're using has
- * the property that no matter whether considered as single or as double
- * precision represents signaling NANS.
- *
- * The value to initialize fcr31 to comes in $a0.
- */
-
-	.set push
-	SET_HARDFLOAT
-
-LEAF(_init_fpu)
-	mfc0	t0, CP0_STATUS
-	li	t1, ST0_CU1
-	or	t0, t1
-	mtc0	t0, CP0_STATUS
-
-	ctc1	a0, fcr31
-
-	li	t0, -1
-
-	mtc1	t0, $f0
-	mtc1	t0, $f1
-	mtc1	t0, $f2
-	mtc1	t0, $f3
-	mtc1	t0, $f4
-	mtc1	t0, $f5
-	mtc1	t0, $f6
-	mtc1	t0, $f7
-	mtc1	t0, $f8
-	mtc1	t0, $f9
-	mtc1	t0, $f10
-	mtc1	t0, $f11
-	mtc1	t0, $f12
-	mtc1	t0, $f13
-	mtc1	t0, $f14
-	mtc1	t0, $f15
-	mtc1	t0, $f16
-	mtc1	t0, $f17
-	mtc1	t0, $f18
-	mtc1	t0, $f19
-	mtc1	t0, $f20
-	mtc1	t0, $f21
-	mtc1	t0, $f22
-	mtc1	t0, $f23
-	mtc1	t0, $f24
-	mtc1	t0, $f25
-	mtc1	t0, $f26
-	mtc1	t0, $f27
-	mtc1	t0, $f28
-	mtc1	t0, $f29
-	mtc1	t0, $f30
-	mtc1	t0, $f31
-	jr	ra
-	END(_init_fpu)
-
-	.set pop
+196
arch/mips/kernel/r4k_fpu.S
··· 15 15 #include <asm/asm.h> 16 16 #include <asm/asmmacro.h> 17 17 #include <asm/errno.h> 18 + #include <asm/export.h> 18 19 #include <asm/fpregdef.h> 19 20 #include <asm/mipsregs.h> 20 21 #include <asm/asm-offsets.h> ··· 34 33 PTR .ex\@, fault 35 34 .previous 36 35 .endm 36 + 37 + /* 38 + * Save a thread's fp context. 39 + */ 40 + LEAF(_save_fp) 41 + EXPORT_SYMBOL(_save_fp) 42 + #if defined(CONFIG_64BIT) || defined(CONFIG_CPU_MIPS32_R2) || \ 43 + defined(CONFIG_CPU_MIPS32_R6) 44 + mfc0 t0, CP0_STATUS 45 + #endif 46 + fpu_save_double a0 t0 t1 # clobbers t1 47 + jr ra 48 + END(_save_fp) 49 + 50 + /* 51 + * Restore a thread's fp context. 52 + */ 53 + LEAF(_restore_fp) 54 + #if defined(CONFIG_64BIT) || defined(CONFIG_CPU_MIPS32_R2) || \ 55 + defined(CONFIG_CPU_MIPS32_R6) 56 + mfc0 t0, CP0_STATUS 57 + #endif 58 + fpu_restore_double a0 t0 t1 # clobbers t1 59 + jr ra 60 + END(_restore_fp) 61 + 62 + #ifdef CONFIG_CPU_HAS_MSA 63 + 64 + /* 65 + * Save a thread's MSA vector context. 66 + */ 67 + LEAF(_save_msa) 68 + EXPORT_SYMBOL(_save_msa) 69 + msa_save_all a0 70 + jr ra 71 + END(_save_msa) 72 + 73 + /* 74 + * Restore a thread's MSA vector context. 75 + */ 76 + LEAF(_restore_msa) 77 + msa_restore_all a0 78 + jr ra 79 + END(_restore_msa) 80 + 81 + LEAF(_init_msa_upper) 82 + msa_init_all_upper 83 + jr ra 84 + END(_init_msa_upper) 85 + 86 + #endif 87 + 88 + /* 89 + * Load the FPU with signalling NANS. This bit pattern we're using has 90 + * the property that no matter whether considered as single or as double 91 + * precision represents signaling NANS. 92 + * 93 + * The value to initialize fcr31 to comes in $a0. 94 + */ 95 + 96 + .set push 97 + SET_HARDFLOAT 98 + 99 + LEAF(_init_fpu) 100 + mfc0 t0, CP0_STATUS 101 + li t1, ST0_CU1 102 + or t0, t1 103 + mtc0 t0, CP0_STATUS 104 + enable_fpu_hazard 105 + 106 + ctc1 a0, fcr31 107 + 108 + li t1, -1 # SNaN 109 + 110 + #ifdef CONFIG_64BIT 111 + sll t0, t0, 5 112 + bgez t0, 1f # 16 / 32 register mode? 
113 + 114 + dmtc1 t1, $f1 115 + dmtc1 t1, $f3 116 + dmtc1 t1, $f5 117 + dmtc1 t1, $f7 118 + dmtc1 t1, $f9 119 + dmtc1 t1, $f11 120 + dmtc1 t1, $f13 121 + dmtc1 t1, $f15 122 + dmtc1 t1, $f17 123 + dmtc1 t1, $f19 124 + dmtc1 t1, $f21 125 + dmtc1 t1, $f23 126 + dmtc1 t1, $f25 127 + dmtc1 t1, $f27 128 + dmtc1 t1, $f29 129 + dmtc1 t1, $f31 130 + 1: 131 + #endif 132 + 133 + #ifdef CONFIG_CPU_MIPS32 134 + mtc1 t1, $f0 135 + mtc1 t1, $f1 136 + mtc1 t1, $f2 137 + mtc1 t1, $f3 138 + mtc1 t1, $f4 139 + mtc1 t1, $f5 140 + mtc1 t1, $f6 141 + mtc1 t1, $f7 142 + mtc1 t1, $f8 143 + mtc1 t1, $f9 144 + mtc1 t1, $f10 145 + mtc1 t1, $f11 146 + mtc1 t1, $f12 147 + mtc1 t1, $f13 148 + mtc1 t1, $f14 149 + mtc1 t1, $f15 150 + mtc1 t1, $f16 151 + mtc1 t1, $f17 152 + mtc1 t1, $f18 153 + mtc1 t1, $f19 154 + mtc1 t1, $f20 155 + mtc1 t1, $f21 156 + mtc1 t1, $f22 157 + mtc1 t1, $f23 158 + mtc1 t1, $f24 159 + mtc1 t1, $f25 160 + mtc1 t1, $f26 161 + mtc1 t1, $f27 162 + mtc1 t1, $f28 163 + mtc1 t1, $f29 164 + mtc1 t1, $f30 165 + mtc1 t1, $f31 166 + 167 + #if defined(CONFIG_CPU_MIPS32_R2) || defined(CONFIG_CPU_MIPS32_R6) 168 + .set push 169 + .set MIPS_ISA_LEVEL_RAW 170 + .set fp=64 171 + sll t0, t0, 5 # is Status.FR set? 
172 + bgez t0, 1f # no: skip setting upper 32b 173 + 174 + mthc1 t1, $f0 175 + mthc1 t1, $f1 176 + mthc1 t1, $f2 177 + mthc1 t1, $f3 178 + mthc1 t1, $f4 179 + mthc1 t1, $f5 180 + mthc1 t1, $f6 181 + mthc1 t1, $f7 182 + mthc1 t1, $f8 183 + mthc1 t1, $f9 184 + mthc1 t1, $f10 185 + mthc1 t1, $f11 186 + mthc1 t1, $f12 187 + mthc1 t1, $f13 188 + mthc1 t1, $f14 189 + mthc1 t1, $f15 190 + mthc1 t1, $f16 191 + mthc1 t1, $f17 192 + mthc1 t1, $f18 193 + mthc1 t1, $f19 194 + mthc1 t1, $f20 195 + mthc1 t1, $f21 196 + mthc1 t1, $f22 197 + mthc1 t1, $f23 198 + mthc1 t1, $f24 199 + mthc1 t1, $f25 200 + mthc1 t1, $f26 201 + mthc1 t1, $f27 202 + mthc1 t1, $f28 203 + mthc1 t1, $f29 204 + mthc1 t1, $f30 205 + mthc1 t1, $f31 206 + 1: .set pop 207 + #endif /* CONFIG_CPU_MIPS32_R2 || CONFIG_CPU_MIPS32_R6 */ 208 + #else 209 + .set MIPS_ISA_ARCH_LEVEL_RAW 210 + dmtc1 t1, $f0 211 + dmtc1 t1, $f2 212 + dmtc1 t1, $f4 213 + dmtc1 t1, $f6 214 + dmtc1 t1, $f8 215 + dmtc1 t1, $f10 216 + dmtc1 t1, $f12 217 + dmtc1 t1, $f14 218 + dmtc1 t1, $f16 219 + dmtc1 t1, $f18 220 + dmtc1 t1, $f20 221 + dmtc1 t1, $f22 222 + dmtc1 t1, $f24 223 + dmtc1 t1, $f26 224 + dmtc1 t1, $f28 225 + dmtc1 t1, $f30 226 + #endif 227 + jr ra 228 + END(_init_fpu) 229 + 230 + .set pop /* SET_HARDFLOAT */ 37 231 38 232 .set noreorder 39 233
-203
arch/mips/kernel/r4k_switch.S
··· 12 12 */ 13 13 #include <asm/asm.h> 14 14 #include <asm/cachectl.h> 15 - #include <asm/export.h> 16 - #include <asm/fpregdef.h> 17 15 #include <asm/mipsregs.h> 18 16 #include <asm/asm-offsets.h> 19 17 #include <asm/regdef.h> ··· 20 22 21 23 #include <asm/asmmacro.h> 22 24 23 - /* preprocessor replaces the fp in ".set fp=64" with $30 otherwise */ 24 - #undef fp 25 - 26 - #ifndef USE_ALTERNATE_RESUME_IMPL 27 25 /* 28 26 * task_struct *resume(task_struct *prev, task_struct *next, 29 27 * struct thread_info *next_ti) ··· 57 63 move v0, a0 58 64 jr ra 59 65 END(resume) 60 - 61 - #endif /* USE_ALTERNATE_RESUME_IMPL */ 62 - 63 - /* 64 - * Save a thread's fp context. 65 - */ 66 - LEAF(_save_fp) 67 - EXPORT_SYMBOL(_save_fp) 68 - #if defined(CONFIG_64BIT) || defined(CONFIG_CPU_MIPS32_R2) || \ 69 - defined(CONFIG_CPU_MIPS32_R6) 70 - mfc0 t0, CP0_STATUS 71 - #endif 72 - fpu_save_double a0 t0 t1 # clobbers t1 73 - jr ra 74 - END(_save_fp) 75 - 76 - /* 77 - * Restore a thread's fp context. 78 - */ 79 - LEAF(_restore_fp) 80 - #if defined(CONFIG_64BIT) || defined(CONFIG_CPU_MIPS32_R2) || \ 81 - defined(CONFIG_CPU_MIPS32_R6) 82 - mfc0 t0, CP0_STATUS 83 - #endif 84 - fpu_restore_double a0 t0 t1 # clobbers t1 85 - jr ra 86 - END(_restore_fp) 87 - 88 - #ifdef CONFIG_CPU_HAS_MSA 89 - 90 - /* 91 - * Save a thread's MSA vector context. 92 - */ 93 - LEAF(_save_msa) 94 - EXPORT_SYMBOL(_save_msa) 95 - msa_save_all a0 96 - jr ra 97 - END(_save_msa) 98 - 99 - /* 100 - * Restore a thread's MSA vector context. 101 - */ 102 - LEAF(_restore_msa) 103 - msa_restore_all a0 104 - jr ra 105 - END(_restore_msa) 106 - 107 - LEAF(_init_msa_upper) 108 - msa_init_all_upper 109 - jr ra 110 - END(_init_msa_upper) 111 - 112 - #endif 113 - 114 - /* 115 - * Load the FPU with signalling NANS. This bit pattern we're using has 116 - * the property that no matter whether considered as single or as double 117 - * precision represents signaling NANS. 118 - * 119 - * The value to initialize fcr31 to comes in $a0. 
120 - */ 121 - 122 - .set push 123 - SET_HARDFLOAT 124 - 125 - LEAF(_init_fpu) 126 - mfc0 t0, CP0_STATUS 127 - li t1, ST0_CU1 128 - or t0, t1 129 - mtc0 t0, CP0_STATUS 130 - enable_fpu_hazard 131 - 132 - ctc1 a0, fcr31 133 - 134 - li t1, -1 # SNaN 135 - 136 - #ifdef CONFIG_64BIT 137 - sll t0, t0, 5 138 - bgez t0, 1f # 16 / 32 register mode? 139 - 140 - dmtc1 t1, $f1 141 - dmtc1 t1, $f3 142 - dmtc1 t1, $f5 143 - dmtc1 t1, $f7 144 - dmtc1 t1, $f9 145 - dmtc1 t1, $f11 146 - dmtc1 t1, $f13 147 - dmtc1 t1, $f15 148 - dmtc1 t1, $f17 149 - dmtc1 t1, $f19 150 - dmtc1 t1, $f21 151 - dmtc1 t1, $f23 152 - dmtc1 t1, $f25 153 - dmtc1 t1, $f27 154 - dmtc1 t1, $f29 155 - dmtc1 t1, $f31 156 - 1: 157 - #endif 158 - 159 - #ifdef CONFIG_CPU_MIPS32 160 - mtc1 t1, $f0 161 - mtc1 t1, $f1 162 - mtc1 t1, $f2 163 - mtc1 t1, $f3 164 - mtc1 t1, $f4 165 - mtc1 t1, $f5 166 - mtc1 t1, $f6 167 - mtc1 t1, $f7 168 - mtc1 t1, $f8 169 - mtc1 t1, $f9 170 - mtc1 t1, $f10 171 - mtc1 t1, $f11 172 - mtc1 t1, $f12 173 - mtc1 t1, $f13 174 - mtc1 t1, $f14 175 - mtc1 t1, $f15 176 - mtc1 t1, $f16 177 - mtc1 t1, $f17 178 - mtc1 t1, $f18 179 - mtc1 t1, $f19 180 - mtc1 t1, $f20 181 - mtc1 t1, $f21 182 - mtc1 t1, $f22 183 - mtc1 t1, $f23 184 - mtc1 t1, $f24 185 - mtc1 t1, $f25 186 - mtc1 t1, $f26 187 - mtc1 t1, $f27 188 - mtc1 t1, $f28 189 - mtc1 t1, $f29 190 - mtc1 t1, $f30 191 - mtc1 t1, $f31 192 - 193 - #if defined(CONFIG_CPU_MIPS32_R2) || defined(CONFIG_CPU_MIPS32_R6) 194 - .set push 195 - .set MIPS_ISA_LEVEL_RAW 196 - .set fp=64 197 - sll t0, t0, 5 # is Status.FR set? 
198 - bgez t0, 1f # no: skip setting upper 32b 199 - 200 - mthc1 t1, $f0 201 - mthc1 t1, $f1 202 - mthc1 t1, $f2 203 - mthc1 t1, $f3 204 - mthc1 t1, $f4 205 - mthc1 t1, $f5 206 - mthc1 t1, $f6 207 - mthc1 t1, $f7 208 - mthc1 t1, $f8 209 - mthc1 t1, $f9 210 - mthc1 t1, $f10 211 - mthc1 t1, $f11 212 - mthc1 t1, $f12 213 - mthc1 t1, $f13 214 - mthc1 t1, $f14 215 - mthc1 t1, $f15 216 - mthc1 t1, $f16 217 - mthc1 t1, $f17 218 - mthc1 t1, $f18 219 - mthc1 t1, $f19 220 - mthc1 t1, $f20 221 - mthc1 t1, $f21 222 - mthc1 t1, $f22 223 - mthc1 t1, $f23 224 - mthc1 t1, $f24 225 - mthc1 t1, $f25 226 - mthc1 t1, $f26 227 - mthc1 t1, $f27 228 - mthc1 t1, $f28 229 - mthc1 t1, $f29 230 - mthc1 t1, $f30 231 - mthc1 t1, $f31 232 - 1: .set pop 233 - #endif /* CONFIG_CPU_MIPS32_R2 || CONFIG_CPU_MIPS32_R6 */ 234 - #else 235 - .set MIPS_ISA_ARCH_LEVEL_RAW 236 - dmtc1 t1, $f0 237 - dmtc1 t1, $f2 238 - dmtc1 t1, $f4 239 - dmtc1 t1, $f6 240 - dmtc1 t1, $f8 241 - dmtc1 t1, $f10 242 - dmtc1 t1, $f12 243 - dmtc1 t1, $f14 244 - dmtc1 t1, $f16 245 - dmtc1 t1, $f18 246 - dmtc1 t1, $f20 247 - dmtc1 t1, $f22 248 - dmtc1 t1, $f24 249 - dmtc1 t1, $f26 250 - dmtc1 t1, $f28 251 - dmtc1 t1, $f30 252 - #endif 253 - jr ra 254 - END(_init_fpu) 255 - 256 - .set pop /* SET_HARDFLOAT */
-99
arch/mips/kernel/r6000_fpu.S
··· 1 - /* 2 - * r6000_fpu.S: Save/restore floating point context for signal handlers. 3 - * 4 - * This file is subject to the terms and conditions of the GNU General Public 5 - * License. See the file "COPYING" in the main directory of this archive 6 - * for more details. 7 - * 8 - * Copyright (C) 1996 by Ralf Baechle 9 - * 10 - * Multi-arch abstraction and asm macros for easier reading: 11 - * Copyright (C) 1996 David S. Miller (davem@davemloft.net) 12 - */ 13 - #include <asm/asm.h> 14 - #include <asm/fpregdef.h> 15 - #include <asm/mipsregs.h> 16 - #include <asm/asm-offsets.h> 17 - #include <asm/regdef.h> 18 - 19 - .set noreorder 20 - .set mips2 21 - .set push 22 - SET_HARDFLOAT 23 - 24 - /** 25 - * _save_fp_context() - save FP context from the FPU 26 - * @a0 - pointer to fpregs field of sigcontext 27 - * @a1 - pointer to fpc_csr field of sigcontext 28 - * 29 - * Save FP context, including the 32 FP data registers and the FP 30 - * control & status register, from the FPU to signal context. 31 - */ 32 - LEAF(_save_fp_context) 33 - mfc0 t0,CP0_STATUS 34 - sll t0,t0,2 35 - bgez t0,1f 36 - nop 37 - 38 - cfc1 t1,fcr31 39 - /* Store the 16 double precision registers */ 40 - sdc1 $f0,0(a0) 41 - sdc1 $f2,16(a0) 42 - sdc1 $f4,32(a0) 43 - sdc1 $f6,48(a0) 44 - sdc1 $f8,64(a0) 45 - sdc1 $f10,80(a0) 46 - sdc1 $f12,96(a0) 47 - sdc1 $f14,112(a0) 48 - sdc1 $f16,128(a0) 49 - sdc1 $f18,144(a0) 50 - sdc1 $f20,160(a0) 51 - sdc1 $f22,176(a0) 52 - sdc1 $f24,192(a0) 53 - sdc1 $f26,208(a0) 54 - sdc1 $f28,224(a0) 55 - sdc1 $f30,240(a0) 56 - jr ra 57 - sw t0,(a1) 58 - 1: jr ra 59 - nop 60 - END(_save_fp_context) 61 - 62 - /** 63 - * _restore_fp_context() - restore FP context to the FPU 64 - * @a0 - pointer to fpregs field of sigcontext 65 - * @a1 - pointer to fpc_csr field of sigcontext 66 - * 67 - * Restore FP context, including the 32 FP data registers and the FP 68 - * control & status register, from signal context to the FPU. 
69 - */ 70 - LEAF(_restore_fp_context) 71 - mfc0 t0,CP0_STATUS 72 - sll t0,t0,2 73 - 74 - bgez t0,1f 75 - lw t0,(a1) 76 - /* Restore the 16 double precision registers */ 77 - ldc1 $f0,0(a0) 78 - ldc1 $f2,16(a0) 79 - ldc1 $f4,32(a0) 80 - ldc1 $f6,48(a0) 81 - ldc1 $f8,64(a0) 82 - ldc1 $f10,80(a0) 83 - ldc1 $f12,96(a0) 84 - ldc1 $f14,112(a0) 85 - ldc1 $f16,128(a0) 86 - ldc1 $f18,144(a0) 87 - ldc1 $f20,160(a0) 88 - ldc1 $f22,176(a0) 89 - ldc1 $f24,192(a0) 90 - ldc1 $f26,208(a0) 91 - ldc1 $f28,224(a0) 92 - ldc1 $f30,240(a0) 93 - jr ra 94 - ctc1 t0,fcr31 95 - 1: jr ra 96 - nop 97 - END(_restore_fp_context) 98 - 99 - .set pop /* SET_HARDFLOAT */
+6 -4
arch/mips/kernel/smp-bmips.c
··· 179 179 /* 180 180 * Tell the hardware to boot CPUx - runs on CPU0 181 181 */ 182 - static void bmips_boot_secondary(int cpu, struct task_struct *idle) 182 + static int bmips_boot_secondary(int cpu, struct task_struct *idle) 183 183 { 184 184 bmips_smp_boot_sp = __KSTK_TOS(idle); 185 185 bmips_smp_boot_gp = (unsigned long)task_thread_info(idle); ··· 231 231 } 232 232 cpumask_set_cpu(cpu, &bmips_booted_mask); 233 233 } 234 + 235 + return 0; 234 236 } 235 237 236 238 /* ··· 247 245 break; 248 246 case CPU_BMIPS5000: 249 247 write_c0_brcm_action(ACTION_CLR_IPI(smp_processor_id(), 0)); 250 - current_cpu_data.core = (read_c0_brcm_config() >> 25) & 3; 248 + cpu_set_core(&current_cpu_data, (read_c0_brcm_config() >> 25) & 3); 251 249 break; 252 250 } 253 251 } ··· 411 409 412 410 #endif /* CONFIG_HOTPLUG_CPU */ 413 411 414 - struct plat_smp_ops bmips43xx_smp_ops = { 412 + const struct plat_smp_ops bmips43xx_smp_ops = { 415 413 .smp_setup = bmips_smp_setup, 416 414 .prepare_cpus = bmips_prepare_cpus, 417 415 .boot_secondary = bmips_boot_secondary, ··· 425 423 #endif 426 424 }; 427 425 428 - struct plat_smp_ops bmips5000_smp_ops = { 426 + const struct plat_smp_ops bmips5000_smp_ops = { 429 427 .smp_setup = bmips_smp_setup, 430 428 .prepare_cpus = bmips_prepare_cpus, 431 429 .boot_secondary = bmips_boot_secondary,
+3 -3
arch/mips/kernel/smp-cmp.c
··· 24 24 #include <linux/cpumask.h> 25 25 #include <linux/interrupt.h> 26 26 #include <linux/compiler.h> 27 - #include <linux/irqchip/mips-gic.h> 28 27 29 28 #include <linux/atomic.h> 30 29 #include <asm/cacheflush.h> ··· 77 78 * __KSTK_TOS(idle) is apparently the stack pointer 78 79 * (unsigned long)idle->thread_info the gp 79 80 */ 80 - static void cmp_boot_secondary(int cpu, struct task_struct *idle) 81 + static int cmp_boot_secondary(int cpu, struct task_struct *idle) 81 82 { 82 83 struct thread_info *gp = task_thread_info(idle); 83 84 unsigned long sp = __KSTK_TOS(idle); ··· 94 95 #endif 95 96 96 97 amon_cpu_start(cpu, pc, sp, (unsigned long)gp, a0); 98 + return 0; 97 99 } 98 100 99 101 /* ··· 148 148 149 149 } 150 150 151 - struct plat_smp_ops cmp_smp_ops = { 151 + const struct plat_smp_ops cmp_smp_ops = { 152 152 .send_ipi_single = mips_smp_send_ipi_single, 153 153 .send_ipi_mask = mips_smp_send_ipi_mask, 154 154 .init_secondary = cmp_init_secondary,
+80 -68
arch/mips/kernel/smp-cps.c
··· 11 11 #include <linux/cpu.h> 12 12 #include <linux/delay.h> 13 13 #include <linux/io.h> 14 - #include <linux/irqchip/mips-gic.h> 15 14 #include <linux/sched/task_stack.h> 16 15 #include <linux/sched/hotplug.h> 17 16 #include <linux/slab.h> ··· 18 19 #include <linux/types.h> 19 20 20 21 #include <asm/bcache.h> 21 - #include <asm/mips-cm.h> 22 - #include <asm/mips-cpc.h> 22 + #include <asm/mips-cps.h> 23 23 #include <asm/mips_mt.h> 24 24 #include <asm/mipsregs.h> 25 25 #include <asm/pm-cps.h> ··· 39 41 } 40 42 early_param("nothreads", setup_nothreads); 41 43 42 - static unsigned core_vpe_count(unsigned core) 44 + static unsigned core_vpe_count(unsigned int cluster, unsigned core) 43 45 { 44 - unsigned cfg; 45 - 46 46 if (threads_disabled) 47 47 return 1; 48 48 49 - if ((!IS_ENABLED(CONFIG_MIPS_MT_SMP) || !cpu_has_mipsmt) 50 - && (!IS_ENABLED(CONFIG_CPU_MIPSR6) || !cpu_has_vp)) 51 - return 1; 52 - 53 - mips_cm_lock_other(core, 0); 54 - cfg = read_gcr_co_config() & CM_GCR_Cx_CONFIG_PVPE_MSK; 55 - mips_cm_unlock_other(); 56 - return (cfg >> CM_GCR_Cx_CONFIG_PVPE_SHF) + 1; 49 + return mips_cps_numvps(cluster, core); 57 50 } 58 51 59 52 static void __init cps_smp_setup(void) 60 53 { 61 - unsigned int ncores, nvpes, core_vpes; 54 + unsigned int nclusters, ncores, nvpes, core_vpes; 62 55 unsigned long core_entry; 63 - int c, v; 56 + int cl, c, v; 64 57 65 58 /* Detect & record VPE topology */ 66 - ncores = mips_cm_numcores(); 59 + nvpes = 0; 60 + nclusters = mips_cps_numclusters(); 67 61 pr_info("%s topology ", cpu_has_mips_r6 ? "VP" : "VPE"); 68 - for (c = nvpes = 0; c < ncores; c++) { 69 - core_vpes = core_vpe_count(c); 70 - pr_cont("%c%u", c ? 
',' : '{', core_vpes); 62 + for (cl = 0; cl < nclusters; cl++) { 63 + if (cl > 0) 64 + pr_cont(","); 65 + pr_cont("{"); 71 66 72 - /* Use the number of VPEs in core 0 for smp_num_siblings */ 73 - if (!c) 74 - smp_num_siblings = core_vpes; 67 + ncores = mips_cps_numcores(cl); 68 + for (c = 0; c < ncores; c++) { 69 + core_vpes = core_vpe_count(cl, c); 75 70 76 - for (v = 0; v < min_t(int, core_vpes, NR_CPUS - nvpes); v++) { 77 - cpu_data[nvpes + v].core = c; 78 - #if defined(CONFIG_MIPS_MT_SMP) || defined(CONFIG_CPU_MIPSR6) 79 - cpu_data[nvpes + v].vpe_id = v; 80 - #endif 71 + if (c > 0) 72 + pr_cont(","); 73 + pr_cont("%u", core_vpes); 74 + 75 + /* Use the number of VPEs in cluster 0 core 0 for smp_num_siblings */ 76 + if (!cl && !c) 77 + smp_num_siblings = core_vpes; 78 + 79 + for (v = 0; v < min_t(int, core_vpes, NR_CPUS - nvpes); v++) { 80 + cpu_set_cluster(&cpu_data[nvpes + v], cl); 81 + cpu_set_core(&cpu_data[nvpes + v], c); 82 + cpu_set_vpe_id(&cpu_data[nvpes + v], v); 83 + } 84 + 85 + nvpes += core_vpes; 81 86 } 82 87 83 - nvpes += core_vpes; 88 + pr_cont("}"); 84 89 } 85 - pr_cont("} total %u\n", nvpes); 90 + pr_cont(" total %u\n", nvpes); 86 91 87 92 /* Indicate present CPUs (CPU being synonymous with VPE) */ 88 93 for (v = 0; v < min_t(unsigned, nvpes, NR_CPUS); v++) { 89 - set_cpu_possible(v, true); 90 - set_cpu_present(v, true); 94 + set_cpu_possible(v, cpu_cluster(&cpu_data[v]) == 0); 95 + set_cpu_present(v, cpu_cluster(&cpu_data[v]) == 0); 91 96 __cpu_number_map[v] = v; 92 97 __cpu_logical_map[v] = v; 93 98 } ··· 122 121 static void __init cps_prepare_cpus(unsigned int max_cpus) 123 122 { 124 123 unsigned ncores, core_vpes, c, cca; 125 - bool cca_unsuitable; 124 + bool cca_unsuitable, cores_limited; 126 125 u32 *entry_code; 127 126 128 127 mips_mt_set_cpuoptions(); ··· 142 141 } 143 142 144 143 /* Warn the user if the CCA prevents multi-core */ 145 - ncores = mips_cm_numcores(); 146 - if ((cca_unsuitable || cpu_has_dc_aliases) && ncores > 1) { 144 + 
cores_limited = false; 145 + if (cca_unsuitable || cpu_has_dc_aliases) { 146 + for_each_present_cpu(c) { 147 + if (cpus_are_siblings(smp_processor_id(), c)) 148 + continue; 149 + 150 + set_cpu_present(c, false); 151 + cores_limited = true; 152 + } 153 + } 154 + if (cores_limited) 147 155 pr_warn("Using only one core due to %s%s%s\n", 148 156 cca_unsuitable ? "unsuitable CCA" : "", 149 157 (cca_unsuitable && cpu_has_dc_aliases) ? " & " : "", 150 158 cpu_has_dc_aliases ? "dcache aliasing" : ""); 151 - 152 - for_each_present_cpu(c) { 153 - if (cpu_data[c].core) 154 - set_cpu_present(c, false); 155 - } 156 - } 157 159 158 160 /* 159 161 * Patch the start of mips_cps_core_entry to provide: ··· 172 168 __sync(); 173 169 174 170 /* Allocate core boot configuration structs */ 171 + ncores = mips_cps_numcores(0); 175 172 mips_cps_core_bootcfg = kcalloc(ncores, sizeof(*mips_cps_core_bootcfg), 176 173 GFP_KERNEL); 177 174 if (!mips_cps_core_bootcfg) { ··· 182 177 183 178 /* Allocate VPE boot configuration structs */ 184 179 for (c = 0; c < ncores; c++) { 185 - core_vpes = core_vpe_count(c); 180 + core_vpes = core_vpe_count(0, c); 186 181 mips_cps_core_bootcfg[c].vpe_config = kcalloc(core_vpes, 187 182 sizeof(*mips_cps_core_bootcfg[c].vpe_config), 188 183 GFP_KERNEL); ··· 194 189 } 195 190 196 191 /* Mark this CPU as booted */ 197 - atomic_set(&mips_cps_core_bootcfg[current_cpu_data.core].vpe_mask, 192 + atomic_set(&mips_cps_core_bootcfg[cpu_core(&current_cpu_data)].vpe_mask, 198 193 1 << cpu_vpe_id(&current_cpu_data)); 199 194 200 195 return; ··· 217 212 218 213 static void boot_core(unsigned int core, unsigned int vpe_id) 219 214 { 220 - u32 access, stat, seq_state; 215 + u32 stat, seq_state; 221 216 unsigned timeout; 222 217 223 218 /* Select the appropriate core */ 224 - mips_cm_lock_other(core, 0); 219 + mips_cm_lock_other(0, core, 0, CM_GCR_Cx_OTHER_BLOCK_LOCAL); 225 220 226 221 /* Set its reset vector */ 227 222 write_gcr_co_reset_base(CKSEG1ADDR((unsigned 
long)mips_cps_core_entry)); ··· 230 225 write_gcr_co_coherence(0); 231 226 232 227 /* Start it with the legacy memory map and exception base */ 233 - write_gcr_co_reset_ext_base(CM_GCR_RESET_EXT_BASE_UEB); 228 + write_gcr_co_reset_ext_base(CM_GCR_Cx_RESET_EXT_BASE_UEB); 234 229 235 230 /* Ensure the core can access the GCRs */ 236 - access = read_gcr_access(); 237 - access |= 1 << (CM_GCR_ACCESS_ACCESSEN_SHF + core); 238 - write_gcr_access(access); 231 + set_gcr_access(1 << core); 239 232 240 233 if (mips_cpc_present()) { 241 234 /* Reset the core */ ··· 256 253 timeout = 100; 257 254 while (true) { 258 255 stat = read_cpc_co_stat_conf(); 259 - seq_state = stat & CPC_Cx_STAT_CONF_SEQSTATE_MSK; 256 + seq_state = stat & CPC_Cx_STAT_CONF_SEQSTATE; 257 + seq_state >>= __ffs(CPC_Cx_STAT_CONF_SEQSTATE); 260 258 261 259 /* U6 == coherent execution, ie. the core is up */ 262 260 if (seq_state == CPC_Cx_STAT_CONF_SEQSTATE_U6) ··· 289 285 290 286 static void remote_vpe_boot(void *dummy) 291 287 { 292 - unsigned core = current_cpu_data.core; 288 + unsigned core = cpu_core(&current_cpu_data); 293 289 struct core_boot_config *core_cfg = &mips_cps_core_bootcfg[core]; 294 290 295 291 mips_cps_boot_vpes(core_cfg, cpu_vpe_id(&current_cpu_data)); 296 292 } 297 293 298 - static void cps_boot_secondary(int cpu, struct task_struct *idle) 294 + static int cps_boot_secondary(int cpu, struct task_struct *idle) 299 295 { 300 - unsigned core = cpu_data[cpu].core; 296 + unsigned core = cpu_core(&cpu_data[cpu]); 301 297 unsigned vpe_id = cpu_vpe_id(&cpu_data[cpu]); 302 298 struct core_boot_config *core_cfg = &mips_cps_core_bootcfg[core]; 303 299 struct vpe_boot_config *vpe_cfg = &core_cfg->vpe_config[vpe_id]; 304 300 unsigned long core_entry; 305 301 unsigned int remote; 306 302 int err; 303 + 304 + /* We don't yet support booting CPUs in other clusters */ 305 + if (cpu_cluster(&cpu_data[cpu]) != cpu_cluster(&current_cpu_data)) 306 + return -ENOSYS; 307 307 308 308 vpe_cfg->pc = (unsigned 
long)&smp_bootstrap; 309 309 vpe_cfg->sp = __KSTK_TOS(idle); ··· 324 316 } 325 317 326 318 if (cpu_has_vp) { 327 - mips_cm_lock_other(core, vpe_id); 319 + mips_cm_lock_other(0, core, vpe_id, CM_GCR_Cx_OTHER_BLOCK_LOCAL); 328 320 core_entry = CKSEG1ADDR((unsigned long)mips_cps_core_entry); 329 321 write_gcr_co_reset_base(core_entry); 330 322 mips_cm_unlock_other(); 331 323 } 332 324 333 - if (core != current_cpu_data.core) { 325 + if (!cpus_are_siblings(cpu, smp_processor_id())) { 334 326 /* Boot a VPE on another powered up core */ 335 327 for (remote = 0; remote < NR_CPUS; remote++) { 336 - if (cpu_data[remote].core != core) 328 + if (!cpus_are_siblings(cpu, remote)) 337 329 continue; 338 330 if (cpu_online(remote)) 339 331 break; ··· 357 349 mips_cps_boot_vpes(core_cfg, vpe_id); 358 350 out: 359 351 preempt_enable(); 352 + return 0; 360 353 } 361 354 362 355 static void cps_init_secondary(void) ··· 367 358 dmt(); 368 359 369 360 if (mips_cm_revision() >= CM_REV_CM3) { 370 - unsigned ident = gic_read_local_vp_id(); 361 + unsigned int ident = read_gic_vl_ident(); 371 362 372 363 /* 373 364 * Ensure that our calculation of the VP ID matches up with ··· 411 402 if (!cps_pm_support_state(CPS_PM_POWER_GATED)) 412 403 return -EINVAL; 413 404 414 - core_cfg = &mips_cps_core_bootcfg[current_cpu_data.core]; 405 + core_cfg = &mips_cps_core_bootcfg[cpu_core(&current_cpu_data)]; 415 406 atomic_sub(1 << cpu_vpe_id(&current_cpu_data), &core_cfg->vpe_mask); 416 407 smp_mb__after_atomic(); 417 408 set_cpu_online(cpu, false); ··· 433 424 local_irq_disable(); 434 425 idle_task_exit(); 435 426 cpu = smp_processor_id(); 436 - core = cpu_data[cpu].core; 427 + core = cpu_core(&cpu_data[cpu]); 437 428 cpu_death = CPU_DEATH_POWER; 438 429 439 430 pr_debug("CPU%d going offline\n", cpu); 440 431 441 432 if (cpu_has_mipsmt || cpu_has_vp) { 433 + core = cpu_core(&cpu_data[cpu]); 434 + 442 435 /* Look for another online VPE within the core */ 443 436 for_each_online_cpu(cpu_death_sibling) { 
444 - if (cpu_data[cpu_death_sibling].core != core) 437 + if (!cpus_are_siblings(cpu, cpu_death_sibling)) 445 438 continue; 446 439 447 440 /* ··· 499 488 500 489 static void cps_cpu_die(unsigned int cpu) 501 490 { 502 - unsigned core = cpu_data[cpu].core; 491 + unsigned core = cpu_core(&cpu_data[cpu]); 503 492 unsigned int vpe_id = cpu_vpe_id(&cpu_data[cpu]); 504 493 ktime_t fail_time; 505 494 unsigned stat; ··· 530 519 */ 531 520 fail_time = ktime_add_ms(ktime_get(), 2000); 532 521 do { 533 - mips_cm_lock_other(core, 0); 522 + mips_cm_lock_other(0, core, 0, CM_GCR_Cx_OTHER_BLOCK_LOCAL); 534 523 mips_cpc_lock_other(core); 535 524 stat = read_cpc_co_stat_conf(); 536 - stat &= CPC_Cx_STAT_CONF_SEQSTATE_MSK; 525 + stat &= CPC_Cx_STAT_CONF_SEQSTATE; 526 + stat >>= __ffs(CPC_Cx_STAT_CONF_SEQSTATE); 537 527 mips_cpc_unlock_other(); 538 528 mips_cm_unlock_other(); 539 529 ··· 556 544 */ 557 545 if (WARN(ktime_after(ktime_get(), fail_time), 558 546 "CPU%u hasn't powered down, seq. state %u\n", 559 - cpu, stat >> CPC_Cx_STAT_CONF_SEQSTATE_SHF)) 547 + cpu, stat)) 560 548 break; 561 549 } while (1); 562 550 ··· 574 562 panic("Failed to call remote sibling CPU\n"); 575 563 } else if (cpu_has_vp) { 576 564 do { 577 - mips_cm_lock_other(core, vpe_id); 565 + mips_cm_lock_other(0, core, vpe_id, CM_GCR_Cx_OTHER_BLOCK_LOCAL); 578 566 stat = read_cpc_co_vp_running(); 579 567 mips_cm_unlock_other(); 580 568 } while (stat & (1 << vpe_id)); ··· 583 571 584 572 #endif /* CONFIG_HOTPLUG_CPU */ 585 573 586 - static struct plat_smp_ops cps_smp_ops = { 574 + static const struct plat_smp_ops cps_smp_ops = { 587 575 .smp_setup = cps_smp_setup, 588 576 .prepare_cpus = cps_prepare_cpus, 589 577 .boot_secondary = cps_boot_secondary, ··· 599 587 600 588 bool mips_cps_smp_in_use(void) 601 589 { 602 - extern struct plat_smp_ops *mp_ops; 590 + extern const struct plat_smp_ops *mp_ops; 603 591 return mp_ops == &cps_smp_ops; 604 592 } 605 593 ··· 611 599 } 612 600 613 601 /* check we have a GIC - we 
need one for IPIs */ 614 - if (!(read_gcr_gic_status() & CM_GCR_GIC_STATUS_EX_MSK)) { 602 + if (!(read_gcr_gic_status() & CM_GCR_GIC_STATUS_EX)) { 615 603 pr_warn("MIPS CPS SMP unable to proceed without a GIC\n"); 616 604 return -ENODEV; 617 605 }
+7 -7
arch/mips/kernel/smp-mt.c
··· 21 21 #include <linux/sched.h> 22 22 #include <linux/cpumask.h> 23 23 #include <linux/interrupt.h> 24 - #include <linux/irqchip/mips-gic.h> 25 24 #include <linux/compiler.h> 26 25 #include <linux/sched/task_stack.h> 27 26 #include <linux/smp.h> ··· 35 36 #include <asm/mipsregs.h> 36 37 #include <asm/mipsmtregs.h> 37 38 #include <asm/mips_mt.h> 39 + #include <asm/mips-cps.h> 38 40 39 41 static void __init smvp_copy_vpe_config(void) 40 42 { ··· 83 83 if (tc != 0) 84 84 smvp_copy_vpe_config(); 85 85 86 - cpu_data[ncpu].vpe_id = tc; 86 + cpu_set_vpe_id(&cpu_data[ncpu], tc); 87 87 88 88 return ncpu; 89 89 } ··· 118 118 119 119 static void vsmp_init_secondary(void) 120 120 { 121 - #ifdef CONFIG_MIPS_GIC 122 121 /* This is Malta specific: IPI,performance and timer interrupts */ 123 - if (gic_present) 122 + if (mips_gic_present()) 124 123 change_c0_status(ST0_IM, STATUSF_IP2 | STATUSF_IP3 | 125 124 STATUSF_IP4 | STATUSF_IP5 | 126 125 STATUSF_IP6 | STATUSF_IP7); 127 126 else 128 - #endif 129 127 change_c0_status(ST0_IM, STATUSF_IP0 | STATUSF_IP1 | 130 128 STATUSF_IP6 | STATUSF_IP7); 131 129 } ··· 150 152 * (unsigned long)idle->thread_info the gp 151 153 * assumes a 1:1 mapping of TC => VPE 152 154 */ 153 - static void vsmp_boot_secondary(int cpu, struct task_struct *idle) 155 + static int vsmp_boot_secondary(int cpu, struct task_struct *idle) 154 156 { 155 157 struct thread_info *gp = task_thread_info(idle); 156 158 dvpe(); ··· 182 184 clear_c0_mvpcontrol(MVPCONTROL_VPC); 183 185 184 186 evpe(EVPE_ENABLE); 187 + 188 + return 0; 185 189 } 186 190 187 191 /* ··· 239 239 mips_mt_set_cpuoptions(); 240 240 } 241 241 242 - struct plat_smp_ops vsmp_smp_ops = { 242 + const struct plat_smp_ops vsmp_smp_ops = { 243 243 .send_ipi_single = mips_smp_send_ipi_single, 244 244 .send_ipi_mask = mips_smp_send_ipi_mask, 245 245 .init_secondary = vsmp_init_secondary,
+3 -2
arch/mips/kernel/smp-up.c
··· 39 39 /* 40 40 * Firmware CPU startup hook 41 41 */ 42 - static void up_boot_secondary(int cpu, struct task_struct *idle) 42 + static int up_boot_secondary(int cpu, struct task_struct *idle) 43 43 { 44 + return 0; 44 45 } 45 46 46 47 static void __init up_smp_setup(void) ··· 64 63 } 65 64 #endif 66 65 67 - struct plat_smp_ops up_smp_ops = { 66 + const struct plat_smp_ops up_smp_ops = { 68 67 .send_ipi_single = up_send_ipi_single, 69 68 .send_ipi_mask = up_send_ipi_mask, 70 69 .init_secondary = up_init_secondary,
+13 -11
arch/mips/kernel/smp.c
··· 96 96 97 97 if (smp_num_siblings > 1) { 98 98 for_each_cpu(i, &cpu_sibling_setup_map) { 99 - if (cpu_data[cpu].package == cpu_data[i].package && 100 - cpu_data[cpu].core == cpu_data[i].core) { 99 + if (cpus_are_siblings(cpu, i)) { 101 100 cpumask_set_cpu(i, &cpu_sibling_map[cpu]); 102 101 cpumask_set_cpu(cpu, &cpu_sibling_map[i]); 103 102 } ··· 133 134 for_each_online_cpu(i) { 134 135 core_present = 0; 135 136 for_each_cpu(k, &temp_foreign_map) 136 - if (cpu_data[i].package == cpu_data[k].package && 137 - cpu_data[i].core == cpu_data[k].core) 137 + if (cpus_are_siblings(i, k)) 138 138 core_present = 1; 139 139 if (!core_present) 140 140 cpumask_set_cpu(i, &temp_foreign_map); ··· 144 146 &temp_foreign_map, &cpu_sibling_map[i]); 145 147 } 146 148 147 - struct plat_smp_ops *mp_ops; 149 + const struct plat_smp_ops *mp_ops; 148 150 EXPORT_SYMBOL(mp_ops); 149 151 150 - void register_smp_ops(struct plat_smp_ops *ops) 152 + void register_smp_ops(const struct plat_smp_ops *ops) 151 153 { 152 154 if (mp_ops) 153 155 printk(KERN_WARNING "Overriding previously set SMP ops\n"); ··· 184 186 185 187 if (mips_cpc_present()) { 186 188 for_each_cpu(cpu, mask) { 187 - core = cpu_data[cpu].core; 188 - 189 - if (core == current_cpu_data.core) 189 + if (cpus_are_siblings(cpu, smp_processor_id())) 190 190 continue; 191 191 192 + core = cpu_core(&cpu_data[cpu]); 193 + 192 194 while (!cpumask_test_cpu(cpu, &cpu_coherent_mask)) { 193 - mips_cm_lock_other(core, 0); 195 + mips_cm_lock_other_cpu(cpu, CM_GCR_Cx_OTHER_BLOCK_LOCAL); 194 196 mips_cpc_lock_other(core); 195 197 write_cpc_co_cmd(CPC_Cx_CMD_PWRUP); 196 198 mips_cpc_unlock_other(); ··· 439 441 440 442 int __cpu_up(unsigned int cpu, struct task_struct *tidle) 441 443 { 442 - mp_ops->boot_secondary(cpu, tidle); 444 + int err; 445 + 446 + err = mp_ops->boot_secondary(cpu, tidle); 447 + if (err) 448 + return err; 443 449 444 450 /* 445 451 * We must check for timeout here, as the CPU will not be marked
-14
arch/mips/kernel/time.c
··· 72 72 unsigned int mips_hpt_frequency; 73 73 EXPORT_SYMBOL_GPL(mips_hpt_frequency); 74 74 75 - /* 76 - * This function exists in order to cause an error due to a duplicate 77 - * definition if platform code should have its own implementation. The hook 78 - * to use instead is plat_time_init. plat_time_init does not receive the 79 - * irqaction pointer argument anymore. This is because any function which 80 - * initializes an interrupt timer now takes care of its own request_irq rsp. 81 - * setup_irq calls and each clock_event_device should use its own 82 - * struct irqrequest. 83 - */ 84 - void __init plat_timer_setup(void) 85 - { 86 - BUG(); 87 - } 88 - 89 75 static __init int cpu_has_mfc0_count_bug(void) 90 76 { 91 77 switch (current_cpu_type()) {
+6 -23
arch/mips/kernel/traps.c
···
 #include <asm/fpu.h>
 #include <asm/fpu_emulator.h>
 #include <asm/idle.h>
-#include <asm/mips-cm.h>
+#include <asm/mips-cps.h>
 #include <asm/mips-r2-to-r6-emul.h>
-#include <asm/mips-cm.h>
 #include <asm/mipsregs.h>
 #include <asm/mipsmtregs.h>
 #include <asm/module.h>
···
                 si.si_code = FPE_FLTUND;
         else if (fcr31 & FPU_CSR_INE_X)
                 si.si_code = FPE_FLTRES;
-        else
-                return; /* Broken hardware? */
+
         force_sig_info(SIGFPE, &si, tsk);
 }
···
         /* Probe L2 ECC support */
         gcr_ectl = read_gcr_err_control();

-        if (!(gcr_ectl & CM_GCR_ERR_CONTROL_L2_ECC_SUPPORT_MSK) ||
+        if (!(gcr_ectl & CM_GCR_ERR_CONTROL_L2_ECC_SUPPORT) ||
             !(cp0_ectl & ERRCTL_PE)) {
                 /*
                  * One of L1 or L2 ECC checking isn't supported,
···

         /* Configure L2 ECC checking */
         if (l2parity)
-                gcr_ectl |= CM_GCR_ERR_CONTROL_L2_ECC_EN_MSK;
+                gcr_ectl |= CM_GCR_ERR_CONTROL_L2_ECC_EN;
         else
-                gcr_ectl &= ~CM_GCR_ERR_CONTROL_L2_ECC_EN_MSK;
+                gcr_ectl &= ~CM_GCR_ERR_CONTROL_L2_ECC_EN;
         write_gcr_err_control(gcr_ectl);
         gcr_ectl = read_gcr_err_control();
-        gcr_ectl &= CM_GCR_ERR_CONTROL_L2_ECC_EN_MSK;
+        gcr_ectl &= CM_GCR_ERR_CONTROL_L2_ECC_EN;
         WARN_ON(!!gcr_ectl != l2parity);

         pr_info("Cache parity protection %sabled\n",
···
         set_except_vector(EXCCODE_OV, handle_ov);
         set_except_vector(EXCCODE_TR, handle_tr);
         set_except_vector(EXCCODE_MSAFPE, handle_msa_fpe);
-
-        if (current_cpu_type() == CPU_R6000 ||
-            current_cpu_type() == CPU_R6000A) {
-                /*
-                 * The R6000 is the only R-series CPU that features a machine
-                 * check exception (similar to the R4000 cache error) and
-                 * unaligned ldc1/sdc1 exception. The handlers have not been
-                 * written yet. Well, anyway there is no R6000 machine on the
-                 * current list of targets for Linux/MIPS.
-                 * (Duh, crap, there is someone with a triple R6k machine)
-                 */
-                //set_except_vector(14, handle_mc);
-                //set_except_vector(15, handle_ndc);
-        }
-

         if (board_nmi_handler_setup)
                 board_nmi_handler_setup();
+1 -1
arch/mips/kernel/unaligned.c
···
 const int reg16to32[] = { 16, 17, 2, 3, 4, 5, 6, 7 };

 /* Recode table from 16-bit STORE register notation to 32-bit GPR. */
-const int reg16to32st[] = { 0, 17, 2, 3, 4, 5, 6, 7 };
+static const int reg16to32st[] = { 0, 17, 2, 3, 4, 5, 6, 7 };

 static void emulate_load_store_microMIPS(struct pt_regs *regs,
                                          void __user *addr)
+5 -10
arch/mips/kernel/vdso.c
···
 #include <linux/err.h>
 #include <linux/init.h>
 #include <linux/ioport.h>
-#include <linux/irqchip/mips-gic.h>
 #include <linux/mm.h>
 #include <linux/sched.h>
 #include <linux/slab.h>
 #include <linux/timekeeper_internal.h>

 #include <asm/abi.h>
+#include <asm/mips-cps.h>
 #include <asm/vdso.h>

 /* Kernel-provided data used by the VDSO. */
···
 {
         struct mips_vdso_image *image = current->thread.abi->vdso;
         struct mm_struct *mm = current->mm;
-        unsigned long gic_size, vvar_size, size, base, data_addr, vdso_addr;
+        unsigned long gic_size, vvar_size, size, base, data_addr, vdso_addr, gic_pfn;
         struct vm_area_struct *vma;
-        struct resource gic_res;
         int ret;

         if (down_write_killable(&mm->mmap_sem))
···
          * only map a page even though the total area is 64K, as we only need
          * the counter registers at the start.
          */
-        gic_size = gic_present ? PAGE_SIZE : 0;
+        gic_size = mips_gic_present() ? PAGE_SIZE : 0;
         vvar_size = gic_size + PAGE_SIZE;
         size = vvar_size + image->size;
···

         /* Map GIC user page. */
         if (gic_size) {
-                ret = gic_get_usm_range(&gic_res);
-                if (ret)
-                        goto out;
+                gic_pfn = virt_to_phys(mips_gic_base + MIPS_GIC_USER_OFS) >> PAGE_SHIFT;

-                ret = io_remap_pfn_range(vma, base,
-                                         gic_res.start >> PAGE_SHIFT,
-                                         gic_size,
+                ret = io_remap_pfn_range(vma, base, gic_pfn, gic_size,
                                          pgprot_noncached(PAGE_READONLY));
                 if (ret)
                         goto out;
+2
arch/mips/lantiq/Kconfig
···
         bool "XWAY"
         select SOC_TYPE_XWAY
         select HW_HAS_PCI
+        select MFD_SYSCON
+        select MFD_CORE

 config SOC_FALCON
         bool "FALCON"
+5 -18
arch/mips/lantiq/falcon/reset.c
···

 #include <lantiq_soc.h>

-/* CPU0 Reset Source Register */
-#define SYS1_CPU0RS             0x0040
-/* reset cause mask */
-#define CPU0RS_MASK             0x0003
-/* CPU0 Boot Mode Register */
-#define SYS1_BM                 0x00a0
-/* boot mode mask */
-#define BM_MASK                 0x0005
-
-/* allow platform code to find out what surce we booted from */
+/*
+ * Dummy implementation. Used to allow platform code to find out what
+ * source was booted from
+ */
 unsigned char ltq_boot_select(void)
 {
-        return ltq_sys1_r32(SYS1_BM) & BM_MASK;
+        return BS_SPI;
 }
-
-/* allow the watchdog driver to find out what the boot reason was */
-int ltq_reset_cause(void)
-{
-        return ltq_sys1_r32(SYS1_CPU0RS) & CPU0RS_MASK;
-}
-EXPORT_SYMBOL_GPL(ltq_reset_cause);

 #define BOOT_REG_BASE   (KSEG1 | 0x1F200000)
 #define BOOT_PW1_REG    (BOOT_REG_BASE | 0x20)
-4
arch/mips/lantiq/irq.c
···
 /* we have a cascade of 8 irqs */
 #define MIPS_CPU_IRQ_CASCADE            8

-#ifdef CONFIG_MIPS_MT_SMP
-int gic_present;
-#endif
-
 static int exin_avail;
 static u32 ltq_eiu_irq[MAX_EIU];
 static void __iomem *ltq_icu_membase[MAX_IM];
+1 -1
arch/mips/lantiq/prom.c
···

 int __init plat_of_setup(void)
 {
-        return __dt_register_buses(soc_info.compatible, "simple-bus");
+        return of_platform_default_populate(NULL, NULL, NULL);
 }

 arch_initcall(plat_of_setup);
+1 -3
arch/mips/lantiq/xway/Makefile
···
-obj-y := prom.o sysctrl.o clk.o reset.o dma.o gptu.o dcdc.o
+obj-y := prom.o sysctrl.o clk.o dma.o gptu.o dcdc.o

 obj-y += vmmc.o
-
-obj-$(CONFIG_XRX200_PHY_FW) += xrx200_phy_fw.o
-387
arch/mips/lantiq/xway/reset.c
···
-/*
- * This program is free software; you can redistribute it and/or modify it
- * under the terms of the GNU General Public License version 2 as published
- * by the Free Software Foundation.
- *
- * Copyright (C) 2010 John Crispin <john@phrozen.org>
- * Copyright (C) 2013-2015 Lantiq Beteiligungs-GmbH & Co.KG
- */
-
-#include <linux/init.h>
-#include <linux/io.h>
-#include <linux/ioport.h>
-#include <linux/pm.h>
-#include <linux/export.h>
-#include <linux/delay.h>
-#include <linux/of_address.h>
-#include <linux/of_platform.h>
-#include <linux/reset-controller.h>
-
-#include <asm/reboot.h>
-
-#include <lantiq_soc.h>
-
-#include "../prom.h"
-
-/* reset request register */
-#define RCU_RST_REQ             0x0010
-/* reset status register */
-#define RCU_RST_STAT            0x0014
-/* vr9 gphy registers */
-#define RCU_GFS_ADD0_XRX200     0x0020
-#define RCU_GFS_ADD1_XRX200     0x0068
-/* xRX300 gphy registers */
-#define RCU_GFS_ADD0_XRX300     0x0020
-#define RCU_GFS_ADD1_XRX300     0x0058
-#define RCU_GFS_ADD2_XRX300     0x00AC
-/* xRX330 gphy registers */
-#define RCU_GFS_ADD0_XRX330     0x0020
-#define RCU_GFS_ADD1_XRX330     0x0058
-#define RCU_GFS_ADD2_XRX330     0x00AC
-#define RCU_GFS_ADD3_XRX330     0x0264
-
-/* xbar BE flag */
-#define RCU_AHB_ENDIAN          0x004C
-#define RCU_VR9_BE_AHB1S        0x00000008
-
-/* reboot bit */
-#define RCU_RD_GPHY0_XRX200     BIT(31)
-#define RCU_RD_SRST             BIT(30)
-#define RCU_RD_GPHY1_XRX200     BIT(29)
-/* xRX300 bits */
-#define RCU_RD_GPHY0_XRX300     BIT(31)
-#define RCU_RD_GPHY1_XRX300     BIT(29)
-#define RCU_RD_GPHY2_XRX300     BIT(28)
-/* xRX330 bits */
-#define RCU_RD_GPHY0_XRX330     BIT(31)
-#define RCU_RD_GPHY1_XRX330     BIT(29)
-#define RCU_RD_GPHY2_XRX330     BIT(28)
-#define RCU_RD_GPHY3_XRX330     BIT(10)
-
-/* reset cause */
-#define RCU_STAT_SHIFT          26
-/* boot selection */
-#define RCU_BOOT_SEL(x)         ((x >> 18) & 0x7)
-#define RCU_BOOT_SEL_XRX200(x)  (((x >> 17) & 0xf) | ((x >> 8) & 0x10))
-
-/* dwc2 USB configuration registers */
-#define RCU_USB1CFG             0x0018
-#define RCU_USB2CFG             0x0034
-
-/* USB DMA endianness bits */
-#define RCU_USBCFG_HDSEL_BIT    BIT(11)
-#define RCU_USBCFG_HOST_END_BIT BIT(10)
-#define RCU_USBCFG_SLV_END_BIT  BIT(9)
-
-/* USB reset bits */
-#define RCU_USBRESET            0x0010
-
-#define USBRESET_BIT            BIT(4)
-
-#define RCU_USBRESET2           0x0048
-
-#define USB1RESET_BIT           BIT(4)
-#define USB2RESET_BIT           BIT(5)
-
-#define RCU_CFG1A               0x0038
-#define RCU_CFG1B               0x003C
-
-/* USB PMU devices */
-#define PMU_AHBM                BIT(15)
-#define PMU_USB0                BIT(6)
-#define PMU_USB1                BIT(27)
-
-/* USB PHY PMU devices */
-#define PMU_USB0_P              BIT(0)
-#define PMU_USB1_P              BIT(26)
-
-/* remapped base addr of the reset control unit */
-static void __iomem *ltq_rcu_membase;
-static struct device_node *ltq_rcu_np;
-static DEFINE_SPINLOCK(ltq_rcu_lock);
-
-static void ltq_rcu_w32(uint32_t val, uint32_t reg_off)
-{
-        ltq_w32(val, ltq_rcu_membase + reg_off);
-}
-
-static uint32_t ltq_rcu_r32(uint32_t reg_off)
-{
-        return ltq_r32(ltq_rcu_membase + reg_off);
-}
-
-static void ltq_rcu_w32_mask(uint32_t clr, uint32_t set, uint32_t reg_off)
-{
-        unsigned long flags;
-
-        spin_lock_irqsave(&ltq_rcu_lock, flags);
-        ltq_rcu_w32((ltq_rcu_r32(reg_off) & ~(clr)) | (set), reg_off);
-        spin_unlock_irqrestore(&ltq_rcu_lock, flags);
-}
-
-/* This function is used by the watchdog driver */
-int ltq_reset_cause(void)
-{
-        u32 val = ltq_rcu_r32(RCU_RST_STAT);
-        return val >> RCU_STAT_SHIFT;
-}
-EXPORT_SYMBOL_GPL(ltq_reset_cause);
-
-/* allow platform code to find out what source we booted from */
-unsigned char ltq_boot_select(void)
-{
-        u32 val = ltq_rcu_r32(RCU_RST_STAT);
-
-        if (of_device_is_compatible(ltq_rcu_np, "lantiq,rcu-xrx200"))
-                return RCU_BOOT_SEL_XRX200(val);
-
-        return RCU_BOOT_SEL(val);
-}
-
-struct ltq_gphy_reset {
-        u32 rd;
-        u32 addr;
-};
-
-/* reset / boot a gphy */
-static struct ltq_gphy_reset xrx200_gphy[] = {
-        {RCU_RD_GPHY0_XRX200, RCU_GFS_ADD0_XRX200},
-        {RCU_RD_GPHY1_XRX200, RCU_GFS_ADD1_XRX200},
-};
-
-/* reset / boot a gphy */
-static struct ltq_gphy_reset xrx300_gphy[] = {
-        {RCU_RD_GPHY0_XRX300, RCU_GFS_ADD0_XRX300},
-        {RCU_RD_GPHY1_XRX300, RCU_GFS_ADD1_XRX300},
-        {RCU_RD_GPHY2_XRX300, RCU_GFS_ADD2_XRX300},
-};
-
-/* reset / boot a gphy */
-static struct ltq_gphy_reset xrx330_gphy[] = {
-        {RCU_RD_GPHY0_XRX330, RCU_GFS_ADD0_XRX330},
-        {RCU_RD_GPHY1_XRX330, RCU_GFS_ADD1_XRX330},
-        {RCU_RD_GPHY2_XRX330, RCU_GFS_ADD2_XRX330},
-        {RCU_RD_GPHY3_XRX330, RCU_GFS_ADD3_XRX330},
-};
-
-static void xrx200_gphy_boot_addr(struct ltq_gphy_reset *phy_regs,
-                                  dma_addr_t dev_addr)
-{
-        ltq_rcu_w32_mask(0, phy_regs->rd, RCU_RST_REQ);
-        ltq_rcu_w32(dev_addr, phy_regs->addr);
-        ltq_rcu_w32_mask(phy_regs->rd, 0, RCU_RST_REQ);
-}
-
-/* reset and boot a gphy. these phys only exist on xrx200 SoC */
-int xrx200_gphy_boot(struct device *dev, unsigned int id, dma_addr_t dev_addr)
-{
-        struct clk *clk;
-
-        if (!of_device_is_compatible(ltq_rcu_np, "lantiq,rcu-xrx200")) {
-                dev_err(dev, "this SoC has no GPHY\n");
-                return -EINVAL;
-        }
-
-        if (of_machine_is_compatible("lantiq,vr9")) {
-                clk = clk_get_sys("1f203000.rcu", "gphy");
-                if (IS_ERR(clk))
-                        return PTR_ERR(clk);
-                clk_enable(clk);
-        }
-
-        dev_info(dev, "booting GPHY%u firmware at %X\n", id, dev_addr);
-
-        if (of_machine_is_compatible("lantiq,vr9")) {
-                if (id >= ARRAY_SIZE(xrx200_gphy)) {
-                        dev_err(dev, "%u is an invalid gphy id\n", id);
-                        return -EINVAL;
-                }
-                xrx200_gphy_boot_addr(&xrx200_gphy[id], dev_addr);
-        } else if (of_machine_is_compatible("lantiq,ar10")) {
-                if (id >= ARRAY_SIZE(xrx300_gphy)) {
-                        dev_err(dev, "%u is an invalid gphy id\n", id);
-                        return -EINVAL;
-                }
-                xrx200_gphy_boot_addr(&xrx300_gphy[id], dev_addr);
-        } else if (of_machine_is_compatible("lantiq,grx390")) {
-                if (id >= ARRAY_SIZE(xrx330_gphy)) {
-                        dev_err(dev, "%u is an invalid gphy id\n", id);
-                        return -EINVAL;
-                }
-                xrx200_gphy_boot_addr(&xrx330_gphy[id], dev_addr);
-        }
-        return 0;
-}
-
-/* reset a io domain for u micro seconds */
-void ltq_reset_once(unsigned int module, ulong u)
-{
-        ltq_rcu_w32(ltq_rcu_r32(RCU_RST_REQ) | module, RCU_RST_REQ);
-        udelay(u);
-        ltq_rcu_w32(ltq_rcu_r32(RCU_RST_REQ) & ~module, RCU_RST_REQ);
-}
-
-static int ltq_assert_device(struct reset_controller_dev *rcdev,
-                             unsigned long id)
-{
-        u32 val;
-
-        if (id < 8)
-                return -1;
-
-        val = ltq_rcu_r32(RCU_RST_REQ);
-        val |= BIT(id);
-        ltq_rcu_w32(val, RCU_RST_REQ);
-
-        return 0;
-}
-
-static int ltq_deassert_device(struct reset_controller_dev *rcdev,
-                               unsigned long id)
-{
-        u32 val;
-
-        if (id < 8)
-                return -1;
-
-        val = ltq_rcu_r32(RCU_RST_REQ);
-        val &= ~BIT(id);
-        ltq_rcu_w32(val, RCU_RST_REQ);
-
-        return 0;
-}
-
-static int ltq_reset_device(struct reset_controller_dev *rcdev,
-                            unsigned long id)
-{
-        ltq_assert_device(rcdev, id);
-        return ltq_deassert_device(rcdev, id);
-}
-
-static const struct reset_control_ops reset_ops = {
-        .reset = ltq_reset_device,
-        .assert = ltq_assert_device,
-        .deassert = ltq_deassert_device,
-};
-
-static struct reset_controller_dev reset_dev = {
-        .ops = &reset_ops,
-        .owner = THIS_MODULE,
-        .nr_resets = 32,
-        .of_reset_n_cells = 1,
-};
-
-void ltq_rst_init(void)
-{
-        reset_dev.of_node = of_find_compatible_node(NULL, NULL,
-                                                "lantiq,xway-reset");
-        if (!reset_dev.of_node)
-                pr_err("Failed to find reset controller node");
-        else
-                reset_controller_register(&reset_dev);
-}
-
-static void ltq_machine_restart(char *command)
-{
-        u32 val = ltq_rcu_r32(RCU_RST_REQ);
-
-        if (of_device_is_compatible(ltq_rcu_np, "lantiq,rcu-xrx200"))
-                val |= RCU_RD_GPHY1_XRX200 | RCU_RD_GPHY0_XRX200;
-
-        val |= RCU_RD_SRST;
-
-        local_irq_disable();
-        ltq_rcu_w32(val, RCU_RST_REQ);
-        unreachable();
-}
-
-static void ltq_machine_halt(void)
-{
-        local_irq_disable();
-        unreachable();
-}
-
-static void ltq_machine_power_off(void)
-{
-        local_irq_disable();
-        unreachable();
-}
-
-static void ltq_usb_init(void)
-{
-        /* Power for USB cores 1 & 2 */
-        ltq_pmu_enable(PMU_AHBM);
-        ltq_pmu_enable(PMU_USB0);
-        ltq_pmu_enable(PMU_USB1);
-
-        ltq_rcu_w32(ltq_rcu_r32(RCU_CFG1A) | BIT(0), RCU_CFG1A);
-        ltq_rcu_w32(ltq_rcu_r32(RCU_CFG1B) | BIT(0), RCU_CFG1B);
-
-        /* Enable USB PHY power for cores 1 & 2 */
-        ltq_pmu_enable(PMU_USB0_P);
-        ltq_pmu_enable(PMU_USB1_P);
-
-        /* Configure cores to host mode */
-        ltq_rcu_w32(ltq_rcu_r32(RCU_USB1CFG) & ~RCU_USBCFG_HDSEL_BIT,
-                RCU_USB1CFG);
-        ltq_rcu_w32(ltq_rcu_r32(RCU_USB2CFG) & ~RCU_USBCFG_HDSEL_BIT,
-                RCU_USB2CFG);
-
-        /* Select DMA endianness (Host-endian: big-endian) */
-        ltq_rcu_w32((ltq_rcu_r32(RCU_USB1CFG) & ~RCU_USBCFG_SLV_END_BIT)
-                | RCU_USBCFG_HOST_END_BIT, RCU_USB1CFG);
-        ltq_rcu_w32(ltq_rcu_r32((RCU_USB2CFG) & ~RCU_USBCFG_SLV_END_BIT)
-                | RCU_USBCFG_HOST_END_BIT, RCU_USB2CFG);
-
-        /* Hard reset USB state machines */
-        ltq_rcu_w32(ltq_rcu_r32(RCU_USBRESET) | USBRESET_BIT, RCU_USBRESET);
-        udelay(50 * 1000);
-        ltq_rcu_w32(ltq_rcu_r32(RCU_USBRESET) & ~USBRESET_BIT, RCU_USBRESET);
-
-        /* Soft reset USB state machines */
-        ltq_rcu_w32(ltq_rcu_r32(RCU_USBRESET2)
-                | USB1RESET_BIT | USB2RESET_BIT, RCU_USBRESET2);
-        udelay(50 * 1000);
-        ltq_rcu_w32(ltq_rcu_r32(RCU_USBRESET2)
-                & ~(USB1RESET_BIT | USB2RESET_BIT), RCU_USBRESET2);
-}
-
-static int __init mips_reboot_setup(void)
-{
-        struct resource res;
-
-        ltq_rcu_np = of_find_compatible_node(NULL, NULL, "lantiq,rcu-xway");
-        if (!ltq_rcu_np)
-                ltq_rcu_np = of_find_compatible_node(NULL, NULL,
-                                                        "lantiq,rcu-xrx200");
-
-        /* check if all the reset register range is available */
-        if (!ltq_rcu_np)
-                panic("Failed to load reset resources from devicetree");
-
-        if (of_address_to_resource(ltq_rcu_np, 0, &res))
-                panic("Failed to get rcu memory range");
-
-        if (!request_mem_region(res.start, resource_size(&res), res.name))
-                pr_err("Failed to request rcu memory");
-
-        ltq_rcu_membase = ioremap_nocache(res.start, resource_size(&res));
-        if (!ltq_rcu_membase)
-                panic("Failed to remap core memory");
-
-        if (of_machine_is_compatible("lantiq,ar9") ||
-            of_machine_is_compatible("lantiq,vr9"))
-                ltq_usb_init();
-
-        if (of_machine_is_compatible("lantiq,vr9"))
-                ltq_rcu_w32(ltq_rcu_r32(RCU_AHB_ENDIAN) | RCU_VR9_BE_AHB1S,
-                        RCU_AHB_ENDIAN);
-
-        _machine_restart = ltq_machine_restart;
-        _machine_halt = ltq_machine_halt;
-        pm_power_off = ltq_machine_power_off;
-
-        return 0;
-}
-
-arch_initcall(mips_reboot_setup);
+22 -61
arch/mips/lantiq/xway/sysctrl.c
···
 #define pmu_w32(x, y)   ltq_w32((x), pmu_membase + (y))
 #define pmu_r32(x)      ltq_r32(pmu_membase + (x))

-#define XBAR_ALWAYS_LAST        0x430
-#define XBAR_FPI_BURST_EN       BIT(1)
-#define XBAR_AHB_BURST_EN       BIT(2)
-
-#define xbar_w32(x, y)  ltq_w32((x), ltq_xbar_membase + (y))
-#define xbar_r32(x)     ltq_r32(ltq_xbar_membase + (x))
-
 static void __iomem *pmu_membase;
-static void __iomem *ltq_xbar_membase;
 void __iomem *ltq_cgu_membase;
 void __iomem *ltq_ebu_membase;
···
 {
         ltq_cgu_w32(ltq_cgu_r32(ifccr) | (1 << 16), ifccr);
         ltq_cgu_w32((1 << 31) | (1 << 30), pcicr);
-}
-
-static void xbar_fpi_burst_disable(void)
-{
-        u32 reg;
-
-        /* bit 1 as 1 --burst; bit 1 as 0 -- single */
-        reg = xbar_r32(XBAR_ALWAYS_LAST);
-        reg &= ~XBAR_FPI_BURST_EN;
-        xbar_w32(reg, XBAR_ALWAYS_LAST);
 }

 /* enable a clockout source */
···
         if (!pmu_membase || !ltq_cgu_membase || !ltq_ebu_membase)
                 panic("Failed to remap core resources");

-        if (of_machine_is_compatible("lantiq,vr9")) {
-                struct resource res_xbar;
-                struct device_node *np_xbar =
-                                of_find_compatible_node(NULL, NULL,
-                                                        "lantiq,xbar-xway");
-
-                if (!np_xbar)
-                        panic("Failed to load xbar nodes from devicetree");
-                if (of_address_to_resource(np_xbar, 0, &res_xbar))
-                        panic("Failed to get xbar resources");
-                if (!request_mem_region(res_xbar.start, resource_size(&res_xbar),
-                        res_xbar.name))
-                        panic("Failed to get xbar resources");
-
-                ltq_xbar_membase = ioremap_nocache(res_xbar.start,
-                                                   resource_size(&res_xbar));
-                if (!ltq_xbar_membase)
-                        panic("Failed to remap xbar resources");
-        }
-
         /* make sure to unprotect the memory region where flash is located */
         ltq_ebu_w32(ltq_ebu_r32(LTQ_EBU_BUSCON0) & ~EBU_WRDIS, LTQ_EBU_BUSCON0);
···

         if (of_machine_is_compatible("lantiq,grx390") ||
             of_machine_is_compatible("lantiq,ar10")) {
-                clkdev_add_pmu("1e101000.usb", "phy", 1, 2, PMU_ANALOG_USB0_P);
-                clkdev_add_pmu("1e106000.usb", "phy", 1, 2, PMU_ANALOG_USB1_P);
+                clkdev_add_pmu("1f203018.usb2-phy", "phy", 1, 2, PMU_ANALOG_USB0_P);
+                clkdev_add_pmu("1f203034.usb2-phy", "phy", 1, 2, PMU_ANALOG_USB1_P);
                 /* rc 0 */
                 clkdev_add_pmu("1d900000.pcie", "phy", 1, 2, PMU_ANALOG_PCIE0_P);
                 clkdev_add_pmu("1d900000.pcie", "msi", 1, 1, PMU1_PCIE_MSI);
···
         else
                 clkdev_add_static(CLOCK_133M, CLOCK_133M,
                                   CLOCK_133M, CLOCK_133M);
-        clkdev_add_pmu("1e101000.usb", "ctl", 1, 0, PMU_USB0);
-        clkdev_add_pmu("1e101000.usb", "phy", 1, 0, PMU_USB0_P);
+        clkdev_add_pmu("1e101000.usb", "otg", 1, 0, PMU_USB0);
+        clkdev_add_pmu("1f203018.usb2-phy", "phy", 1, 0, PMU_USB0_P);
         clkdev_add_pmu("1e180000.etop", "ppe", 1, 0, PMU_PPE);
         clkdev_add_cgu("1e180000.etop", "ephycgu", CGU_EPHY);
         clkdev_add_pmu("1e180000.etop", "ephy", 1, 0, PMU_EPHY);
···
         } else if (of_machine_is_compatible("lantiq,grx390")) {
                 clkdev_add_static(ltq_grx390_cpu_hz(), ltq_grx390_fpi_hz(),
                                   ltq_grx390_fpi_hz(), ltq_grx390_pp32_hz());
-                clkdev_add_pmu("1e101000.usb", "ctl", 1, 0, PMU_USB0);
-                clkdev_add_pmu("1e106000.usb", "ctl", 1, 0, PMU_USB1);
+                clkdev_add_pmu("1e101000.usb", "otg", 1, 0, PMU_USB0);
+                clkdev_add_pmu("1e106000.usb", "otg", 1, 0, PMU_USB1);
                 /* rc 2 */
                 clkdev_add_pmu("1a800000.pcie", "phy", 1, 2, PMU_ANALOG_PCIE2_P);
                 clkdev_add_pmu("1a800000.pcie", "msi", 1, 1, PMU1_PCIE2_MSI);
···
         } else if (of_machine_is_compatible("lantiq,ar10")) {
                 clkdev_add_static(ltq_ar10_cpu_hz(), ltq_ar10_fpi_hz(),
                                   ltq_ar10_fpi_hz(), ltq_ar10_pp32_hz());
-                clkdev_add_pmu("1e101000.usb", "ctl", 1, 0, PMU_USB0);
-                clkdev_add_pmu("1e106000.usb", "ctl", 1, 0, PMU_USB1);
+                clkdev_add_pmu("1e101000.usb", "otg", 1, 0, PMU_USB0);
+                clkdev_add_pmu("1e106000.usb", "otg", 1, 0, PMU_USB1);
                 clkdev_add_pmu("1e108000.eth", NULL, 0, 0, PMU_SWITCH |
                                PMU_PPE_DP | PMU_PPE_TC);
                 clkdev_add_pmu("1da00000.usif", "NULL", 1, 0, PMU_USIF);
-                clkdev_add_pmu("1f203000.rcu", "gphy", 1, 0, PMU_GPHY);
+                clkdev_add_pmu("1f203020.gphy", NULL, 1, 0, PMU_GPHY);
+                clkdev_add_pmu("1f203068.gphy", NULL, 1, 0, PMU_GPHY);
                 clkdev_add_pmu("1e103100.deu", NULL, 1, 0, PMU_DEU);
                 clkdev_add_pmu("1e116000.mei", "afe", 1, 2, PMU_ANALOG_DSL_AFE);
                 clkdev_add_pmu("1e116000.mei", "dfe", 1, 0, PMU_DFE);
         } else if (of_machine_is_compatible("lantiq,vr9")) {
                 clkdev_add_static(ltq_vr9_cpu_hz(), ltq_vr9_fpi_hz(),
                                   ltq_vr9_fpi_hz(), ltq_vr9_pp32_hz());
-                clkdev_add_pmu("1e101000.usb", "phy", 1, 0, PMU_USB0_P);
-                clkdev_add_pmu("1e101000.usb", "ctl", 1, 0, PMU_USB0 | PMU_AHBM);
-                clkdev_add_pmu("1e106000.usb", "phy", 1, 0, PMU_USB1_P);
-                clkdev_add_pmu("1e106000.usb", "ctl", 1, 0, PMU_USB1 | PMU_AHBM);
+                clkdev_add_pmu("1f203018.usb2-phy", "phy", 1, 0, PMU_USB0_P);
+                clkdev_add_pmu("1e101000.usb", "otg", 1, 0, PMU_USB0 | PMU_AHBM);
+                clkdev_add_pmu("1f203034.usb2-phy", "phy", 1, 0, PMU_USB1_P);
+                clkdev_add_pmu("1e106000.usb", "otg", 1, 0, PMU_USB1 | PMU_AHBM);
                 clkdev_add_pmu("1d900000.pcie", "phy", 1, 1, PMU1_PCIE_PHY);
                 clkdev_add_pmu("1d900000.pcie", "bus", 1, 0, PMU_PCIE_CLK);
                 clkdev_add_pmu("1d900000.pcie", "msi", 1, 1, PMU1_PCIE_MSI);
···
                                PMU_SWITCH | PMU_PPE_DPLUS | PMU_PPE_DPLUM |
                                PMU_PPE_EMA | PMU_PPE_TC | PMU_PPE_SLL01 |
                                PMU_PPE_QSB | PMU_PPE_TOP);
-                clkdev_add_pmu("1f203000.rcu", "gphy", 0, 0, PMU_GPHY);
+                clkdev_add_pmu("1f203020.gphy", NULL, 0, 0, PMU_GPHY);
+                clkdev_add_pmu("1f203068.gphy", NULL, 0, 0, PMU_GPHY);
                 clkdev_add_pmu("1e103000.sdio", NULL, 1, 0, PMU_SDIO);
                 clkdev_add_pmu("1e103100.deu", NULL, 1, 0, PMU_DEU);
                 clkdev_add_pmu("1e116000.mei", "dfe", 1, 0, PMU_DFE);
         } else if (of_machine_is_compatible("lantiq,ar9")) {
                 clkdev_add_static(ltq_ar9_cpu_hz(), ltq_ar9_fpi_hz(),
                                   ltq_ar9_fpi_hz(), CLOCK_250M);
-                clkdev_add_pmu("1e101000.usb", "ctl", 1, 0, PMU_USB0);
-                clkdev_add_pmu("1e101000.usb", "phy", 1, 0, PMU_USB0_P);
-                clkdev_add_pmu("1e106000.usb", "ctl", 1, 0, PMU_USB1);
-                clkdev_add_pmu("1e106000.usb", "phy", 1, 0, PMU_USB1_P);
+                clkdev_add_pmu("1f203018.usb2-phy", "phy", 1, 0, PMU_USB0_P);
+                clkdev_add_pmu("1e101000.usb", "otg", 1, 0, PMU_USB0);
+                clkdev_add_pmu("1f203034.usb2-phy", "phy", 1, 0, PMU_USB1_P);
+                clkdev_add_pmu("1e106000.usb", "otg", 1, 0, PMU_USB1);
                 clkdev_add_pmu("1e180000.etop", "switch", 1, 0, PMU_SWITCH);
                 clkdev_add_pmu("1e103000.sdio", NULL, 1, 0, PMU_SDIO);
                 clkdev_add_pmu("1e103100.deu", NULL, 1, 0, PMU_DEU);
···
         } else {
                 clkdev_add_static(ltq_danube_cpu_hz(), ltq_danube_fpi_hz(),
                                   ltq_danube_fpi_hz(), ltq_danube_pp32_hz());
-                clkdev_add_pmu("1e101000.usb", "ctl", 1, 0, PMU_USB0);
-                clkdev_add_pmu("1e101000.usb", "phy", 1, 0, PMU_USB0_P);
+                clkdev_add_pmu("1f203018.usb2-phy", "ctrl", 1, 0, PMU_USB0);
+                clkdev_add_pmu("1f203018.usb2-phy", "phy", 1, 0, PMU_USB0_P);
                 clkdev_add_pmu("1e103000.sdio", NULL, 1, 0, PMU_SDIO);
                 clkdev_add_pmu("1e103100.deu", NULL, 1, 0, PMU_DEU);
                 clkdev_add_pmu("1e116000.mei", "dfe", 1, 0, PMU_DFE);
                 clkdev_add_pmu("1e100400.serial", NULL, 1, 0, PMU_ASC0);
         }
-
-        if (of_machine_is_compatible("lantiq,vr9"))
-                xbar_fpi_burst_disable();
 }
-113
arch/mips/lantiq/xway/xrx200_phy_fw.c
···
-/*
- * Lantiq XRX200 PHY Firmware Loader
- * Author: John Crispin
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms of the GNU General Public License version 2 as published
- * by the Free Software Foundation.
- *
- * Copyright (C) 2012 John Crispin <john@phrozen.org>
- */
-
-#include <linux/delay.h>
-#include <linux/dma-mapping.h>
-#include <linux/firmware.h>
-#include <linux/of_platform.h>
-
-#include <lantiq_soc.h>
-
-#define XRX200_GPHY_FW_ALIGN    (16 * 1024)
-
-static dma_addr_t xway_gphy_load(struct platform_device *pdev)
-{
-        const struct firmware *fw;
-        dma_addr_t dev_addr = 0;
-        const char *fw_name;
-        void *fw_addr;
-        size_t size;
-
-        if (of_get_property(pdev->dev.of_node, "firmware1", NULL) ||
-            of_get_property(pdev->dev.of_node, "firmware2", NULL)) {
-                switch (ltq_soc_type()) {
-                case SOC_TYPE_VR9:
-                        if (of_property_read_string(pdev->dev.of_node,
-                                                    "firmware1", &fw_name)) {
-                                dev_err(&pdev->dev,
-                                        "failed to load firmware filename\n");
-                                return 0;
-                        }
-                        break;
-                case SOC_TYPE_VR9_2:
-                        if (of_property_read_string(pdev->dev.of_node,
-                                                    "firmware2", &fw_name)) {
-                                dev_err(&pdev->dev,
-                                        "failed to load firmware filename\n");
-                                return 0;
-                        }
-                        break;
-                }
-        } else if (of_property_read_string(pdev->dev.of_node,
-                                           "firmware", &fw_name)) {
-                dev_err(&pdev->dev, "failed to load firmware filename\n");
-                return 0;
-        }
-
-        dev_info(&pdev->dev, "requesting %s\n", fw_name);
-        if (request_firmware(&fw, fw_name, &pdev->dev)) {
-                dev_err(&pdev->dev, "failed to load firmware: %s\n", fw_name);
-                return 0;
-        }
-
-        /*
-         * GPHY cores need the firmware code in a persistent and contiguous
-         * memory area with a 16 kB boundary aligned start address
-         */
-        size = fw->size + XRX200_GPHY_FW_ALIGN;
-
-        fw_addr = dma_alloc_coherent(&pdev->dev, size, &dev_addr, GFP_KERNEL);
-        if (fw_addr) {
-                fw_addr = PTR_ALIGN(fw_addr, XRX200_GPHY_FW_ALIGN);
-                dev_addr = ALIGN(dev_addr, XRX200_GPHY_FW_ALIGN);
-                memcpy(fw_addr, fw->data, fw->size);
-        } else {
-                dev_err(&pdev->dev, "failed to alloc firmware memory\n");
-        }
-
-        release_firmware(fw);
-        return dev_addr;
-}
-
-static int xway_phy_fw_probe(struct platform_device *pdev)
-{
-        dma_addr_t fw_addr;
-        struct property *pp;
-        unsigned char *phyids;
-        int i, ret = 0;
-
-        fw_addr = xway_gphy_load(pdev);
-        if (!fw_addr)
-                return -EINVAL;
-        pp = of_find_property(pdev->dev.of_node, "phys", NULL);
-        if (!pp)
-                return -ENOENT;
-        phyids = pp->value;
-        for (i = 0; i < pp->length && !ret; i++)
-                ret = xrx200_gphy_boot(&pdev->dev, phyids[i], fw_addr);
-        if (!ret)
-                mdelay(100);
-        return ret;
-}
-
-static const struct of_device_id xway_phy_match[] = {
-        { .compatible = "lantiq,phy-xrx200" },
-        {},
-};
-
-static struct platform_driver xway_phy_driver = {
-        .probe = xway_phy_fw_probe,
-        .driver = {
-                .name = "phy-xrx200",
-                .of_match_table = xway_phy_match,
-        },
-};
-builtin_platform_driver(xway_phy_driver);
+1 -1
arch/mips/lib/Makefile
···
           mips-atomic.o strncpy_user.o \
           strnlen_user.o uncached.o

-obj-y   += iomap.o
+obj-y   += iomap.o iomap_copy.o
 obj-$(CONFIG_PCI)       += iomap-pci.o
 lib-$(CONFIG_GENERIC_CSUM)      := $(filter-out csum_partial.o, $(lib-y))
+1
arch/mips/lib/delay.c
···
  * Copyright (C) 1999, 2000 Silicon Graphics, Inc.
  * Copyright (C) 2007, 2014 Maciej W. Rozycki
  */
+#include <linux/delay.h>
 #include <linux/export.h>
 #include <linux/param.h>
 #include <linux/smp.h>
+42
arch/mips/lib/iomap_copy.c
···
+/*
+ * This file is free software; you can redistribute it and/or modify
+ * it under the terms of version 2 of the GNU General Public License
+ * as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301, USA.
+ */
+
+#include <linux/export.h>
+#include <linux/io.h>
+
+/**
+ * __ioread64_copy - copy data from MMIO space, in 64-bit units
+ * @to: destination (must be 64-bit aligned)
+ * @from: source, in MMIO space (must be 64-bit aligned)
+ * @count: number of 64-bit quantities to copy
+ *
+ * Copy data from MMIO space to kernel space, in units of 32 or 64 bits at a
+ * time.  Order of access is not guaranteed, nor is a memory barrier
+ * performed afterwards.
+ */
+void __ioread64_copy(void *to, const void __iomem *from, size_t count)
+{
+#ifdef CONFIG_64BIT
+        u64 *dst = to;
+        const u64 __iomem *src = from;
+        const u64 __iomem *end = src + count;
+
+        while (src < end)
+                *dst++ = __raw_readq(src++);
+#else
+        __ioread32_copy(to, from, count * 2);
+#endif
+}
+EXPORT_SYMBOL_GPL(__ioread64_copy);
+3
arch/mips/loongson64/lemote-2f/clock.c
···

 unsigned long clk_get_rate(struct clk *clk)
 {
+        if (!clk)
+                return 0;
+
         return (unsigned long)clk->rate;
 }
 EXPORT_SYMBOL(clk_get_rate);
+9 -7
arch/mips/loongson64/loongson-3/smp.c
···
         loongson3_ipi_write32(0xffffffff, ipi_en0_regs[cpu_logical_map(i)]);

         per_cpu(cpu_state, cpu) = CPU_ONLINE;
-        cpu_data[cpu].core =
-                cpu_logical_map(cpu) % loongson_sysconf.cores_per_package;
+        cpu_set_core(&cpu_data[cpu],
+                     cpu_logical_map(cpu) % loongson_sysconf.cores_per_package);
         cpu_data[cpu].package =
                 cpu_logical_map(cpu) / loongson_sysconf.cores_per_package;
···
         ipi_status0_regs_init();
         ipi_en0_regs_init();
         ipi_mailbox_buf_init();
-        cpu_data[0].core = cpu_logical_map(0) % loongson_sysconf.cores_per_package;
+        cpu_set_core(&cpu_data[0],
+                     cpu_logical_map(0) % loongson_sysconf.cores_per_package);
         cpu_data[0].package = cpu_logical_map(0) / loongson_sysconf.cores_per_package;
 }
···
 /*
  * Setup the PC, SP, and GP of a secondary processor and start it runing!
  */
-static void loongson3_boot_secondary(int cpu, struct task_struct *idle)
+static int loongson3_boot_secondary(int cpu, struct task_struct *idle)
 {
         unsigned long startargs[4];
···
                 (void *)(ipi_mailbox_buf[cpu_logical_map(cpu)]+0x8));
         loongson3_ipi_write64(startargs[0],
                 (void *)(ipi_mailbox_buf[cpu_logical_map(cpu)]+0x0));
+        return 0;
 }

 #ifdef CONFIG_HOTPLUG_CPU
···

 static int loongson3_disable_clock(unsigned int cpu)
 {
-        uint64_t core_id = cpu_data[cpu].core;
+        uint64_t core_id = cpu_core(&cpu_data[cpu]);
         uint64_t package_id = cpu_data[cpu].package;

         if ((read_c0_prid() & PRID_REV_MASK) == PRID_REV_LOONGSON3A_R1) {
···

 static int loongson3_enable_clock(unsigned int cpu)
 {
-        uint64_t core_id = cpu_data[cpu].core;
+        uint64_t core_id = cpu_core(&cpu_data[cpu]);
         uint64_t package_id = cpu_data[cpu].package;

         if ((read_c0_prid() & PRID_REV_MASK) == PRID_REV_LOONGSON3A_R1) {
···

 #endif

-struct plat_smp_ops loongson3_smp_ops = {
+const struct plat_smp_ops loongson3_smp_ops = {
         .send_ipi_single = loongson3_send_ipi_single,
         .send_ipi_mask = loongson3_send_ipi_mask,
         .init_secondary = loongson3_init_secondary,
+4 -2
arch/mips/math-emu/Makefile
··· 4 4 5 5 obj-y += cp1emu.o ieee754dp.o ieee754sp.o ieee754.o \ 6 6 dp_div.o dp_mul.o dp_sub.o dp_add.o dp_fsp.o dp_cmp.o dp_simple.o \ 7 - dp_tint.o dp_fint.o dp_maddf.o dp_2008class.o dp_fmin.o dp_fmax.o \ 7 + dp_tint.o dp_fint.o dp_rint.o dp_maddf.o dp_2008class.o dp_fmin.o \ 8 + dp_fmax.o \ 8 9 sp_div.o sp_mul.o sp_sub.o sp_add.o sp_fdp.o sp_cmp.o sp_simple.o \ 9 - sp_tint.o sp_fint.o sp_maddf.o sp_2008class.o sp_fmin.o sp_fmax.o \ 10 + sp_tint.o sp_fint.o sp_rint.o sp_maddf.o sp_2008class.o sp_fmin.o \ 11 + sp_fmax.o \ 10 12 dsemul.o 11 13 12 14 lib-y += ieee754d.o \
+272 -12
arch/mips/math-emu/cp1emu.c
··· 58 58 mips_instruction); 59 59 60 60 static int fpux_emu(struct pt_regs *, 61 - struct mips_fpu_struct *, mips_instruction, void *__user *); 61 + struct mips_fpu_struct *, mips_instruction, void __user **); 62 62 63 63 /* Control registers */ 64 64 ··· 830 830 } while (0) 831 831 832 832 #define DIFROMREG(di, x) \ 833 - ((di) = get_fpr64(&ctx->fpr[(x) & ~(cop1_64bit(xcp) == 0)], 0)) 833 + ((di) = get_fpr64(&ctx->fpr[(x) & ~(cop1_64bit(xcp) ^ 1)], 0)) 834 834 835 835 #define DITOREG(di, x) \ 836 836 do { \ 837 837 unsigned fpr, i; \ 838 - fpr = (x) & ~(cop1_64bit(xcp) == 0); \ 838 + fpr = (x) & ~(cop1_64bit(xcp) ^ 1); \ 839 839 set_fpr64(&ctx->fpr[fpr], 0, di); \ 840 840 for (i = 1; i < ARRAY_SIZE(ctx->fpr[x].val64); i++) \ 841 841 set_fpr64(&ctx->fpr[fpr], i, 0); \ ··· 973 973 */ 974 974 975 975 static int cop1Emulate(struct pt_regs *xcp, struct mips_fpu_struct *ctx, 976 - struct mm_decoded_insn dec_insn, void *__user *fault_addr) 976 + struct mm_decoded_insn dec_insn, void __user **fault_addr) 977 977 { 978 978 unsigned long contpc = xcp->cp0_epc + dec_insn.pc_inc; 979 979 unsigned int cond, cbit, bit0; ··· 1195 1195 bit0 = get_fpr32(fpr, 0) & 0x1; 1196 1196 switch (MIPSInst_RS(ir)) { 1197 1197 case bc1eqz_op: 1198 + MIPS_FPU_EMU_INC_STATS(bc1eqz); 1198 1199 cond = bit0 == 0; 1199 1200 break; 1200 1201 case bc1nez_op: 1202 + MIPS_FPU_EMU_INC_STATS(bc1nez); 1201 1203 cond = bit0 != 0; 1202 1204 break; 1203 1205 } ··· 1232 1230 break; 1233 1231 } 1234 1232 branch_common: 1233 + MIPS_FPU_EMU_INC_STATS(branches); 1235 1234 set_delay_slot(xcp); 1236 1235 if (cond) { 1237 1236 /* ··· 1463 1460 DEF3OP(nmsub, dp, ieee754dp_mul, ieee754dp_sub, ieee754dp_neg); 1464 1461 1465 1462 static int fpux_emu(struct pt_regs *xcp, struct mips_fpu_struct *ctx, 1466 - mips_instruction ir, void *__user *fault_addr) 1463 + mips_instruction ir, void __user **fault_addr) 1467 1464 { 1468 1465 unsigned rcsr = 0; /* resulting csr */ 1469 1466 ··· 1685 1682 switch (MIPSInst_FUNC(ir)) { 
1686 1683 /* binary ops */ 1687 1684 case fadd_op: 1685 + MIPS_FPU_EMU_INC_STATS(add_s); 1688 1686 handler.b = ieee754sp_add; 1689 1687 goto scopbop; 1690 1688 case fsub_op: 1689 + MIPS_FPU_EMU_INC_STATS(sub_s); 1691 1690 handler.b = ieee754sp_sub; 1692 1691 goto scopbop; 1693 1692 case fmul_op: 1693 + MIPS_FPU_EMU_INC_STATS(mul_s); 1694 1694 handler.b = ieee754sp_mul; 1695 1695 goto scopbop; 1696 1696 case fdiv_op: 1697 + MIPS_FPU_EMU_INC_STATS(div_s); 1697 1698 handler.b = ieee754sp_div; 1698 1699 goto scopbop; 1699 1700 ··· 1706 1699 if (!cpu_has_mips_2_3_4_5_r) 1707 1700 return SIGILL; 1708 1701 1702 + MIPS_FPU_EMU_INC_STATS(sqrt_s); 1709 1703 handler.u = ieee754sp_sqrt; 1710 1704 goto scopuop; 1711 1705 ··· 1719 1711 if (!cpu_has_mips_4_5_64_r2_r6) 1720 1712 return SIGILL; 1721 1713 1714 + MIPS_FPU_EMU_INC_STATS(rsqrt_s); 1722 1715 handler.u = fpemu_sp_rsqrt; 1723 1716 goto scopuop; 1724 1717 ··· 1727 1718 if (!cpu_has_mips_4_5_64_r2_r6) 1728 1719 return SIGILL; 1729 1720 1721 + MIPS_FPU_EMU_INC_STATS(recip_s); 1730 1722 handler.u = fpemu_sp_recip; 1731 1723 goto scopuop; 1732 1724 ··· 1764 1754 if (!cpu_has_mips_r6) 1765 1755 return SIGILL; 1766 1756 1757 + MIPS_FPU_EMU_INC_STATS(seleqz_s); 1767 1758 SPFROMREG(rv.s, MIPSInst_FT(ir)); 1768 1759 if (rv.w & 0x1) 1769 1760 rv.w = 0; ··· 1776 1765 if (!cpu_has_mips_r6) 1777 1766 return SIGILL; 1778 1767 1768 + MIPS_FPU_EMU_INC_STATS(selnez_s); 1779 1769 SPFROMREG(rv.s, MIPSInst_FT(ir)); 1780 1770 if (rv.w & 0x1) 1781 1771 SPFROMREG(rv.s, MIPSInst_FS(ir)); ··· 1790 1778 if (!cpu_has_mips_r6) 1791 1779 return SIGILL; 1792 1780 1781 + MIPS_FPU_EMU_INC_STATS(maddf_s); 1793 1782 SPFROMREG(ft, MIPSInst_FT(ir)); 1794 1783 SPFROMREG(fs, MIPSInst_FS(ir)); 1795 1784 SPFROMREG(fd, MIPSInst_FD(ir)); ··· 1804 1791 if (!cpu_has_mips_r6) 1805 1792 return SIGILL; 1806 1793 1794 + MIPS_FPU_EMU_INC_STATS(msubf_s); 1807 1795 SPFROMREG(ft, MIPSInst_FT(ir)); 1808 1796 SPFROMREG(fs, MIPSInst_FS(ir)); 1809 1797 SPFROMREG(fd, 
MIPSInst_FD(ir)); ··· 1818 1804 if (!cpu_has_mips_r6) 1819 1805 return SIGILL; 1820 1806 1807 + MIPS_FPU_EMU_INC_STATS(rint_s); 1821 1808 SPFROMREG(fs, MIPSInst_FS(ir)); 1822 - rv.l = ieee754sp_tlong(fs); 1823 - rv.s = ieee754sp_flong(rv.l); 1809 + rv.s = ieee754sp_rint(fs); 1824 1810 goto copcsr; 1825 1811 } 1826 1812 ··· 1830 1816 if (!cpu_has_mips_r6) 1831 1817 return SIGILL; 1832 1818 1819 + MIPS_FPU_EMU_INC_STATS(class_s); 1833 1820 SPFROMREG(fs, MIPSInst_FS(ir)); 1834 1821 rv.w = ieee754sp_2008class(fs); 1835 1822 rfmt = w_fmt; ··· 1843 1828 if (!cpu_has_mips_r6) 1844 1829 return SIGILL; 1845 1830 1831 + MIPS_FPU_EMU_INC_STATS(min_s); 1846 1832 SPFROMREG(ft, MIPSInst_FT(ir)); 1847 1833 SPFROMREG(fs, MIPSInst_FS(ir)); 1848 1834 rv.s = ieee754sp_fmin(fs, ft); ··· 1856 1840 if (!cpu_has_mips_r6) 1857 1841 return SIGILL; 1858 1842 1843 + MIPS_FPU_EMU_INC_STATS(mina_s); 1859 1844 SPFROMREG(ft, MIPSInst_FT(ir)); 1860 1845 SPFROMREG(fs, MIPSInst_FS(ir)); 1861 1846 rv.s = ieee754sp_fmina(fs, ft); ··· 1869 1852 if (!cpu_has_mips_r6) 1870 1853 return SIGILL; 1871 1854 1855 + MIPS_FPU_EMU_INC_STATS(max_s); 1872 1856 SPFROMREG(ft, MIPSInst_FT(ir)); 1873 1857 SPFROMREG(fs, MIPSInst_FS(ir)); 1874 1858 rv.s = ieee754sp_fmax(fs, ft); ··· 1882 1864 if (!cpu_has_mips_r6) 1883 1865 return SIGILL; 1884 1866 1867 + MIPS_FPU_EMU_INC_STATS(maxa_s); 1885 1868 SPFROMREG(ft, MIPSInst_FT(ir)); 1886 1869 SPFROMREG(fs, MIPSInst_FS(ir)); 1887 1870 rv.s = ieee754sp_fmaxa(fs, ft); ··· 1890 1871 } 1891 1872 1892 1873 case fabs_op: 1874 + MIPS_FPU_EMU_INC_STATS(abs_s); 1893 1875 handler.u = ieee754sp_abs; 1894 1876 goto scopuop; 1895 1877 1896 1878 case fneg_op: 1879 + MIPS_FPU_EMU_INC_STATS(neg_s); 1897 1880 handler.u = ieee754sp_neg; 1898 1881 goto scopuop; 1899 1882 1900 1883 case fmov_op: 1901 1884 /* an easy one */ 1885 + MIPS_FPU_EMU_INC_STATS(mov_s); 1902 1886 SPFROMREG(rv.s, MIPSInst_FS(ir)); 1903 1887 goto copcsr; 1904 1888 ··· 1944 1922 return SIGILL; /* not defined */ 1945 1923 
1946 1924 case fcvtd_op: 1925 + MIPS_FPU_EMU_INC_STATS(cvt_d_s); 1947 1926 SPFROMREG(fs, MIPSInst_FS(ir)); 1948 1927 rv.d = ieee754dp_fsp(fs); 1949 1928 rfmt = d_fmt; 1950 1929 goto copcsr; 1951 1930 1952 1931 case fcvtw_op: 1932 + MIPS_FPU_EMU_INC_STATS(cvt_w_s); 1953 1933 SPFROMREG(fs, MIPSInst_FS(ir)); 1954 1934 rv.w = ieee754sp_tint(fs); 1955 1935 rfmt = w_fmt; ··· 1963 1939 case ffloor_op: 1964 1940 if (!cpu_has_mips_2_3_4_5_r) 1965 1941 return SIGILL; 1942 + 1943 + if (MIPSInst_FUNC(ir) == fceil_op) 1944 + MIPS_FPU_EMU_INC_STATS(ceil_w_s); 1945 + if (MIPSInst_FUNC(ir) == ffloor_op) 1946 + MIPS_FPU_EMU_INC_STATS(floor_w_s); 1947 + if (MIPSInst_FUNC(ir) == fround_op) 1948 + MIPS_FPU_EMU_INC_STATS(round_w_s); 1949 + if (MIPSInst_FUNC(ir) == ftrunc_op) 1950 + MIPS_FPU_EMU_INC_STATS(trunc_w_s); 1966 1951 1967 1952 oldrm = ieee754_csr.rm; 1968 1953 SPFROMREG(fs, MIPSInst_FS(ir)); ··· 1985 1952 if (!cpu_has_mips_r6) 1986 1953 return SIGILL; 1987 1954 1955 + MIPS_FPU_EMU_INC_STATS(sel_s); 1988 1956 SPFROMREG(fd, MIPSInst_FD(ir)); 1989 1957 if (fd.bits & 0x1) 1990 1958 SPFROMREG(rv.s, MIPSInst_FT(ir)); ··· 1997 1963 if (!cpu_has_mips_3_4_5_64_r2_r6) 1998 1964 return SIGILL; 1999 1965 1966 + MIPS_FPU_EMU_INC_STATS(cvt_l_s); 2000 1967 SPFROMREG(fs, MIPSInst_FS(ir)); 2001 1968 rv.l = ieee754sp_tlong(fs); 2002 1969 rfmt = l_fmt; ··· 2009 1974 case ffloorl_op: 2010 1975 if (!cpu_has_mips_3_4_5_64_r2_r6) 2011 1976 return SIGILL; 1977 + 1978 + if (MIPSInst_FUNC(ir) == fceill_op) 1979 + MIPS_FPU_EMU_INC_STATS(ceil_l_s); 1980 + if (MIPSInst_FUNC(ir) == ffloorl_op) 1981 + MIPS_FPU_EMU_INC_STATS(floor_l_s); 1982 + if (MIPSInst_FUNC(ir) == froundl_op) 1983 + MIPS_FPU_EMU_INC_STATS(round_l_s); 1984 + if (MIPSInst_FUNC(ir) == ftruncl_op) 1985 + MIPS_FPU_EMU_INC_STATS(trunc_l_s); 2012 1986 2013 1987 oldrm = ieee754_csr.rm; 2014 1988 SPFROMREG(fs, MIPSInst_FS(ir)); ··· 2060 2016 switch (MIPSInst_FUNC(ir)) { 2061 2017 /* binary ops */ 2062 2018 case fadd_op: 2019 + 
MIPS_FPU_EMU_INC_STATS(add_d); 2063 2020 handler.b = ieee754dp_add; 2064 2021 goto dcopbop; 2065 2022 case fsub_op: 2023 + MIPS_FPU_EMU_INC_STATS(sub_d); 2066 2024 handler.b = ieee754dp_sub; 2067 2025 goto dcopbop; 2068 2026 case fmul_op: 2027 + MIPS_FPU_EMU_INC_STATS(mul_d); 2069 2028 handler.b = ieee754dp_mul; 2070 2029 goto dcopbop; 2071 2030 case fdiv_op: 2031 + MIPS_FPU_EMU_INC_STATS(div_d); 2072 2032 handler.b = ieee754dp_div; 2073 2033 goto dcopbop; 2074 2034 ··· 2081 2033 if (!cpu_has_mips_2_3_4_5_r) 2082 2034 return SIGILL; 2083 2035 2036 + MIPS_FPU_EMU_INC_STATS(sqrt_d); 2084 2037 handler.u = ieee754dp_sqrt; 2085 2038 goto dcopuop; 2086 2039 /* ··· 2093 2044 if (!cpu_has_mips_4_5_64_r2_r6) 2094 2045 return SIGILL; 2095 2046 2047 + MIPS_FPU_EMU_INC_STATS(rsqrt_d); 2096 2048 handler.u = fpemu_dp_rsqrt; 2097 2049 goto dcopuop; 2098 2050 case frecip_op: 2099 2051 if (!cpu_has_mips_4_5_64_r2_r6) 2100 2052 return SIGILL; 2101 2053 2054 + MIPS_FPU_EMU_INC_STATS(recip_d); 2102 2055 handler.u = fpemu_dp_recip; 2103 2056 goto dcopuop; 2104 2057 case fmovc_op: ··· 2134 2083 if (!cpu_has_mips_r6) 2135 2084 return SIGILL; 2136 2085 2086 + MIPS_FPU_EMU_INC_STATS(seleqz_d); 2137 2087 DPFROMREG(rv.d, MIPSInst_FT(ir)); 2138 2088 if (rv.l & 0x1) 2139 2089 rv.l = 0; ··· 2146 2094 if (!cpu_has_mips_r6) 2147 2095 return SIGILL; 2148 2096 2097 + MIPS_FPU_EMU_INC_STATS(selnez_d); 2149 2098 DPFROMREG(rv.d, MIPSInst_FT(ir)); 2150 2099 if (rv.l & 0x1) 2151 2100 DPFROMREG(rv.d, MIPSInst_FS(ir)); ··· 2160 2107 if (!cpu_has_mips_r6) 2161 2108 return SIGILL; 2162 2109 2110 + MIPS_FPU_EMU_INC_STATS(maddf_d); 2163 2111 DPFROMREG(ft, MIPSInst_FT(ir)); 2164 2112 DPFROMREG(fs, MIPSInst_FS(ir)); 2165 2113 DPFROMREG(fd, MIPSInst_FD(ir)); ··· 2174 2120 if (!cpu_has_mips_r6) 2175 2121 return SIGILL; 2176 2122 2123 + MIPS_FPU_EMU_INC_STATS(msubf_d); 2177 2124 DPFROMREG(ft, MIPSInst_FT(ir)); 2178 2125 DPFROMREG(fs, MIPSInst_FS(ir)); 2179 2126 DPFROMREG(fd, MIPSInst_FD(ir)); ··· 2188 2133 if 
(!cpu_has_mips_r6) 2189 2134 return SIGILL; 2190 2135 2136 + MIPS_FPU_EMU_INC_STATS(rint_d); 2191 2137 DPFROMREG(fs, MIPSInst_FS(ir)); 2192 - rv.l = ieee754dp_tlong(fs); 2193 - rv.d = ieee754dp_flong(rv.l); 2138 + rv.d = ieee754dp_rint(fs); 2194 2139 goto copcsr; 2195 2140 } 2196 2141 ··· 2200 2145 if (!cpu_has_mips_r6) 2201 2146 return SIGILL; 2202 2147 2148 + MIPS_FPU_EMU_INC_STATS(class_d); 2203 2149 DPFROMREG(fs, MIPSInst_FS(ir)); 2204 - rv.w = ieee754dp_2008class(fs); 2205 - rfmt = w_fmt; 2150 + rv.l = ieee754dp_2008class(fs); 2151 + rfmt = l_fmt; 2206 2152 break; 2207 2153 } 2208 2154 ··· 2213 2157 if (!cpu_has_mips_r6) 2214 2158 return SIGILL; 2215 2159 2160 + MIPS_FPU_EMU_INC_STATS(min_d); 2216 2161 DPFROMREG(ft, MIPSInst_FT(ir)); 2217 2162 DPFROMREG(fs, MIPSInst_FS(ir)); 2218 2163 rv.d = ieee754dp_fmin(fs, ft); ··· 2226 2169 if (!cpu_has_mips_r6) 2227 2170 return SIGILL; 2228 2171 2172 + MIPS_FPU_EMU_INC_STATS(mina_d); 2229 2173 DPFROMREG(ft, MIPSInst_FT(ir)); 2230 2174 DPFROMREG(fs, MIPSInst_FS(ir)); 2231 2175 rv.d = ieee754dp_fmina(fs, ft); ··· 2239 2181 if (!cpu_has_mips_r6) 2240 2182 return SIGILL; 2241 2183 2184 + MIPS_FPU_EMU_INC_STATS(max_d); 2242 2185 DPFROMREG(ft, MIPSInst_FT(ir)); 2243 2186 DPFROMREG(fs, MIPSInst_FS(ir)); 2244 2187 rv.d = ieee754dp_fmax(fs, ft); ··· 2252 2193 if (!cpu_has_mips_r6) 2253 2194 return SIGILL; 2254 2195 2196 + MIPS_FPU_EMU_INC_STATS(maxa_d); 2255 2197 DPFROMREG(ft, MIPSInst_FT(ir)); 2256 2198 DPFROMREG(fs, MIPSInst_FS(ir)); 2257 2199 rv.d = ieee754dp_fmaxa(fs, ft); ··· 2260 2200 } 2261 2201 2262 2202 case fabs_op: 2203 + MIPS_FPU_EMU_INC_STATS(abs_d); 2263 2204 handler.u = ieee754dp_abs; 2264 2205 goto dcopuop; 2265 2206 2266 2207 case fneg_op: 2208 + MIPS_FPU_EMU_INC_STATS(neg_d); 2267 2209 handler.u = ieee754dp_neg; 2268 2210 goto dcopuop; 2269 2211 2270 2212 case fmov_op: 2271 2213 /* an easy one */ 2214 + MIPS_FPU_EMU_INC_STATS(mov_d); 2272 2215 DPFROMREG(rv.d, MIPSInst_FS(ir)); 2273 2216 goto copcsr; 2274 2217 
··· 2291 2228 * unary conv ops 2292 2229 */ 2293 2230 case fcvts_op: 2231 + MIPS_FPU_EMU_INC_STATS(cvt_s_d); 2294 2232 DPFROMREG(fs, MIPSInst_FS(ir)); 2295 2233 rv.s = ieee754sp_fdp(fs); 2296 2234 rfmt = s_fmt; ··· 2301 2237 return SIGILL; /* not defined */ 2302 2238 2303 2239 case fcvtw_op: 2240 + MIPS_FPU_EMU_INC_STATS(cvt_w_d); 2304 2241 DPFROMREG(fs, MIPSInst_FS(ir)); 2305 2242 rv.w = ieee754dp_tint(fs); /* wrong */ 2306 2243 rfmt = w_fmt; ··· 2313 2248 case ffloor_op: 2314 2249 if (!cpu_has_mips_2_3_4_5_r) 2315 2250 return SIGILL; 2251 + 2252 + if (MIPSInst_FUNC(ir) == fceil_op) 2253 + MIPS_FPU_EMU_INC_STATS(ceil_w_d); 2254 + if (MIPSInst_FUNC(ir) == ffloor_op) 2255 + MIPS_FPU_EMU_INC_STATS(floor_w_d); 2256 + if (MIPSInst_FUNC(ir) == fround_op) 2257 + MIPS_FPU_EMU_INC_STATS(round_w_d); 2258 + if (MIPSInst_FUNC(ir) == ftrunc_op) 2259 + MIPS_FPU_EMU_INC_STATS(trunc_w_d); 2316 2260 2317 2261 oldrm = ieee754_csr.rm; 2318 2262 DPFROMREG(fs, MIPSInst_FS(ir)); ··· 2335 2261 if (!cpu_has_mips_r6) 2336 2262 return SIGILL; 2337 2263 2264 + MIPS_FPU_EMU_INC_STATS(sel_d); 2338 2265 DPFROMREG(fd, MIPSInst_FD(ir)); 2339 2266 if (fd.bits & 0x1) 2340 2267 DPFROMREG(rv.d, MIPSInst_FT(ir)); ··· 2347 2272 if (!cpu_has_mips_3_4_5_64_r2_r6) 2348 2273 return SIGILL; 2349 2274 2275 + MIPS_FPU_EMU_INC_STATS(cvt_l_d); 2350 2276 DPFROMREG(fs, MIPSInst_FS(ir)); 2351 2277 rv.l = ieee754dp_tlong(fs); 2352 2278 rfmt = l_fmt; ··· 2359 2283 case ffloorl_op: 2360 2284 if (!cpu_has_mips_3_4_5_64_r2_r6) 2361 2285 return SIGILL; 2286 + 2287 + if (MIPSInst_FUNC(ir) == fceill_op) 2288 + MIPS_FPU_EMU_INC_STATS(ceil_l_d); 2289 + if (MIPSInst_FUNC(ir) == ffloorl_op) 2290 + MIPS_FPU_EMU_INC_STATS(floor_l_d); 2291 + if (MIPSInst_FUNC(ir) == froundl_op) 2292 + MIPS_FPU_EMU_INC_STATS(round_l_d); 2293 + if (MIPSInst_FUNC(ir) == ftruncl_op) 2294 + MIPS_FPU_EMU_INC_STATS(trunc_l_d); 2362 2295 2363 2296 oldrm = ieee754_csr.rm; 2364 2297 DPFROMREG(fs, MIPSInst_FS(ir)); ··· 2410 2325 switch (MIPSInst_FUNC(ir)) 
{ 2411 2326 case fcvts_op: 2412 2327 /* convert word to single precision real */ 2328 + MIPS_FPU_EMU_INC_STATS(cvt_s_w); 2413 2329 SPFROMREG(fs, MIPSInst_FS(ir)); 2414 2330 rv.s = ieee754sp_fint(fs.bits); 2415 2331 rfmt = s_fmt; 2416 2332 goto copcsr; 2417 2333 case fcvtd_op: 2418 2334 /* convert word to double precision real */ 2335 + MIPS_FPU_EMU_INC_STATS(cvt_d_w); 2419 2336 SPFROMREG(fs, MIPSInst_FS(ir)); 2420 2337 rv.d = ieee754dp_fint(fs.bits); 2421 2338 rfmt = d_fmt; ··· 2436 2349 if (!cpu_has_mips_r6 || 2437 2350 (MIPSInst_FUNC(ir) & 0x20)) 2438 2351 return SIGILL; 2352 + 2353 + if (!sig) { 2354 + if (!(MIPSInst_FUNC(ir) & PREDICATE_BIT)) { 2355 + switch (cmpop) { 2356 + case 0: 2357 + MIPS_FPU_EMU_INC_STATS(cmp_af_s); 2358 + break; 2359 + case 1: 2360 + MIPS_FPU_EMU_INC_STATS(cmp_un_s); 2361 + break; 2362 + case 2: 2363 + MIPS_FPU_EMU_INC_STATS(cmp_eq_s); 2364 + break; 2365 + case 3: 2366 + MIPS_FPU_EMU_INC_STATS(cmp_ueq_s); 2367 + break; 2368 + case 4: 2369 + MIPS_FPU_EMU_INC_STATS(cmp_lt_s); 2370 + break; 2371 + case 5: 2372 + MIPS_FPU_EMU_INC_STATS(cmp_ult_s); 2373 + break; 2374 + case 6: 2375 + MIPS_FPU_EMU_INC_STATS(cmp_le_s); 2376 + break; 2377 + case 7: 2378 + MIPS_FPU_EMU_INC_STATS(cmp_ule_s); 2379 + break; 2380 + } 2381 + } else { 2382 + switch (cmpop) { 2383 + case 1: 2384 + MIPS_FPU_EMU_INC_STATS(cmp_or_s); 2385 + break; 2386 + case 2: 2387 + MIPS_FPU_EMU_INC_STATS(cmp_une_s); 2388 + break; 2389 + case 3: 2390 + MIPS_FPU_EMU_INC_STATS(cmp_ne_s); 2391 + break; 2392 + } 2393 + } 2394 + } else { 2395 + if (!(MIPSInst_FUNC(ir) & PREDICATE_BIT)) { 2396 + switch (cmpop) { 2397 + case 0: 2398 + MIPS_FPU_EMU_INC_STATS(cmp_saf_s); 2399 + break; 2400 + case 1: 2401 + MIPS_FPU_EMU_INC_STATS(cmp_sun_s); 2402 + break; 2403 + case 2: 2404 + MIPS_FPU_EMU_INC_STATS(cmp_seq_s); 2405 + break; 2406 + case 3: 2407 + MIPS_FPU_EMU_INC_STATS(cmp_sueq_s); 2408 + break; 2409 + case 4: 2410 + MIPS_FPU_EMU_INC_STATS(cmp_slt_s); 2411 + break; 2412 + case 5: 2413 + 
MIPS_FPU_EMU_INC_STATS(cmp_sult_s); 2414 + break; 2415 + case 6: 2416 + MIPS_FPU_EMU_INC_STATS(cmp_sle_s); 2417 + break; 2418 + case 7: 2419 + MIPS_FPU_EMU_INC_STATS(cmp_sule_s); 2420 + break; 2421 + } 2422 + } else { 2423 + switch (cmpop) { 2424 + case 1: 2425 + MIPS_FPU_EMU_INC_STATS(cmp_sor_s); 2426 + break; 2427 + case 2: 2428 + MIPS_FPU_EMU_INC_STATS(cmp_sune_s); 2429 + break; 2430 + case 3: 2431 + MIPS_FPU_EMU_INC_STATS(cmp_sne_s); 2432 + break; 2433 + } 2434 + } 2435 + } 2439 2436 2440 2437 /* fmt is w_fmt for single precision so fix it */ 2441 2438 rfmt = s_fmt; ··· 2565 2394 break; 2566 2395 } 2567 2396 } 2397 + break; 2568 2398 } 2569 2399 2570 2400 case l_fmt: ··· 2578 2406 switch (MIPSInst_FUNC(ir)) { 2579 2407 case fcvts_op: 2580 2408 /* convert long to single precision real */ 2409 + MIPS_FPU_EMU_INC_STATS(cvt_s_l); 2581 2410 rv.s = ieee754sp_flong(bits); 2582 2411 rfmt = s_fmt; 2583 2412 goto copcsr; 2584 2413 case fcvtd_op: 2585 2414 /* convert long to double precision real */ 2415 + MIPS_FPU_EMU_INC_STATS(cvt_d_l); 2586 2416 rv.d = ieee754dp_flong(bits); 2587 2417 rfmt = d_fmt; 2588 2418 goto copcsr; ··· 2597 2423 if (!cpu_has_mips_r6 || 2598 2424 (MIPSInst_FUNC(ir) & 0x20)) 2599 2425 return SIGILL; 2426 + 2427 + if (!sig) { 2428 + if (!(MIPSInst_FUNC(ir) & PREDICATE_BIT)) { 2429 + switch (cmpop) { 2430 + case 0: 2431 + MIPS_FPU_EMU_INC_STATS(cmp_af_d); 2432 + break; 2433 + case 1: 2434 + MIPS_FPU_EMU_INC_STATS(cmp_un_d); 2435 + break; 2436 + case 2: 2437 + MIPS_FPU_EMU_INC_STATS(cmp_eq_d); 2438 + break; 2439 + case 3: 2440 + MIPS_FPU_EMU_INC_STATS(cmp_ueq_d); 2441 + break; 2442 + case 4: 2443 + MIPS_FPU_EMU_INC_STATS(cmp_lt_d); 2444 + break; 2445 + case 5: 2446 + MIPS_FPU_EMU_INC_STATS(cmp_ult_d); 2447 + break; 2448 + case 6: 2449 + MIPS_FPU_EMU_INC_STATS(cmp_le_d); 2450 + break; 2451 + case 7: 2452 + MIPS_FPU_EMU_INC_STATS(cmp_ule_d); 2453 + break; 2454 + } 2455 + } else { 2456 + switch (cmpop) { 2457 + case 1: 2458 + 
MIPS_FPU_EMU_INC_STATS(cmp_or_d); 2459 + break; 2460 + case 2: 2461 + MIPS_FPU_EMU_INC_STATS(cmp_une_d); 2462 + break; 2463 + case 3: 2464 + MIPS_FPU_EMU_INC_STATS(cmp_ne_d); 2465 + break; 2466 + } 2467 + } 2468 + } else { 2469 + if (!(MIPSInst_FUNC(ir) & PREDICATE_BIT)) { 2470 + switch (cmpop) { 2471 + case 0: 2472 + MIPS_FPU_EMU_INC_STATS(cmp_saf_d); 2473 + break; 2474 + case 1: 2475 + MIPS_FPU_EMU_INC_STATS(cmp_sun_d); 2476 + break; 2477 + case 2: 2478 + MIPS_FPU_EMU_INC_STATS(cmp_seq_d); 2479 + break; 2480 + case 3: 2481 + MIPS_FPU_EMU_INC_STATS(cmp_sueq_d); 2482 + break; 2483 + case 4: 2484 + MIPS_FPU_EMU_INC_STATS(cmp_slt_d); 2485 + break; 2486 + case 5: 2487 + MIPS_FPU_EMU_INC_STATS(cmp_sult_d); 2488 + break; 2489 + case 6: 2490 + MIPS_FPU_EMU_INC_STATS(cmp_sle_d); 2491 + break; 2492 + case 7: 2493 + MIPS_FPU_EMU_INC_STATS(cmp_sule_d); 2494 + break; 2495 + } 2496 + } else { 2497 + switch (cmpop) { 2498 + case 1: 2499 + MIPS_FPU_EMU_INC_STATS(cmp_sor_d); 2500 + break; 2501 + case 2: 2502 + MIPS_FPU_EMU_INC_STATS(cmp_sune_d); 2503 + break; 2504 + case 3: 2505 + MIPS_FPU_EMU_INC_STATS(cmp_sne_d); 2506 + break; 2507 + } 2508 + } 2509 + } 2600 2510 2601 2511 /* fmt is l_fmt for double precision so fix it */ 2602 2512 rfmt = d_fmt; ··· 2726 2468 break; 2727 2469 } 2728 2470 } 2471 + break; 2472 + 2729 2473 default: 2730 2474 return SIGILL; 2731 2475 } ··· 2813 2553 * For simplicity we always terminate upon an ISA mode switch. 2814 2554 */ 2815 2555 int fpu_emulator_cop1Handler(struct pt_regs *xcp, struct mips_fpu_struct *ctx, 2816 - int has_fpu, void *__user *fault_addr) 2556 + int has_fpu, void __user **fault_addr) 2817 2557 { 2818 2558 unsigned long oldepc, prevepc; 2819 2559 struct mm_decoded_insn dec_insn;
+63 -21
arch/mips/math-emu/dp_fmax.c
··· 47 47 case CLPAIR(IEEE754_CLASS_SNAN, IEEE754_CLASS_INF): 48 48 return ieee754dp_nanxcpt(x); 49 49 50 - /* numbers are preferred to NaNs */ 50 + /* 51 + * Quiet NaN handling 52 + */ 53 + 54 + /* 55 + * The case of both inputs quiet NaNs 56 + */ 57 + case CLPAIR(IEEE754_CLASS_QNAN, IEEE754_CLASS_QNAN): 58 + return x; 59 + 60 + /* 61 + * The cases of exactly one input quiet NaN (numbers 62 + * are here preferred as returned values to NaNs) 63 + */ 51 64 case CLPAIR(IEEE754_CLASS_ZERO, IEEE754_CLASS_QNAN): 52 65 case CLPAIR(IEEE754_CLASS_NORM, IEEE754_CLASS_QNAN): 53 66 case CLPAIR(IEEE754_CLASS_DNORM, IEEE754_CLASS_QNAN): 54 67 case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_QNAN): 55 68 return x; 56 69 57 - case CLPAIR(IEEE754_CLASS_QNAN, IEEE754_CLASS_QNAN): 58 70 case CLPAIR(IEEE754_CLASS_QNAN, IEEE754_CLASS_ZERO): 59 71 case CLPAIR(IEEE754_CLASS_QNAN, IEEE754_CLASS_NORM): 60 72 case CLPAIR(IEEE754_CLASS_QNAN, IEEE754_CLASS_DNORM): ··· 92 80 return ys ? x : y; 93 81 94 82 case CLPAIR(IEEE754_CLASS_ZERO, IEEE754_CLASS_ZERO): 95 - if (xs == ys) 96 - return x; 97 - return ieee754dp_zero(1); 83 + return ieee754dp_zero(xs & ys); 98 84 99 85 case CLPAIR(IEEE754_CLASS_DNORM, IEEE754_CLASS_DNORM): 100 86 DPDNORMX; ··· 116 106 else if (xs < ys) 117 107 return x; 118 108 119 - /* Compare exponent */ 120 - if (xe > ye) 121 - return x; 122 - else if (xe < ye) 123 - return y; 109 + /* Signs of inputs are equal, let's compare exponents */ 110 + if (xs == 0) { 111 + /* Inputs are both positive */ 112 + if (xe > ye) 113 + return x; 114 + else if (xe < ye) 115 + return y; 116 + } else { 117 + /* Inputs are both negative */ 118 + if (xe > ye) 119 + return y; 120 + else if (xe < ye) 121 + return x; 122 + } 124 123 125 - /* Compare mantissa */ 124 + /* Signs and exponents of inputs are equal, let's compare mantissas */ 125 + if (xs == 0) { 126 + /* Inputs are both positive, with equal signs and exponents */ 127 + if (xm <= ym) 128 + return y; 129 + return x; 130 + } 131 + /* Inputs 
are both negative, with equal signs and exponents */ 126 132 if (xm <= ym) 127 - return y; 128 - return x; 133 + return x; 134 + return y; 129 135 } 130 136 131 137 union ieee754dp ieee754dp_fmaxa(union ieee754dp x, union ieee754dp y) ··· 173 147 case CLPAIR(IEEE754_CLASS_SNAN, IEEE754_CLASS_INF): 174 148 return ieee754dp_nanxcpt(x); 175 149 176 - /* numbers are preferred to NaNs */ 150 + /* 151 + * Quiet NaN handling 152 + */ 153 + 154 + /* 155 + * The case of both inputs quiet NaNs 156 + */ 157 + case CLPAIR(IEEE754_CLASS_QNAN, IEEE754_CLASS_QNAN): 158 + return x; 159 + 160 + /* 161 + * The cases of exactly one input quiet NaN (numbers 162 + * are here preferred as returned values to NaNs) 163 + */ 177 164 case CLPAIR(IEEE754_CLASS_ZERO, IEEE754_CLASS_QNAN): 178 165 case CLPAIR(IEEE754_CLASS_NORM, IEEE754_CLASS_QNAN): 179 166 case CLPAIR(IEEE754_CLASS_DNORM, IEEE754_CLASS_QNAN): 180 167 case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_QNAN): 181 168 return x; 182 169 183 - case CLPAIR(IEEE754_CLASS_QNAN, IEEE754_CLASS_QNAN): 184 170 case CLPAIR(IEEE754_CLASS_QNAN, IEEE754_CLASS_ZERO): 185 171 case CLPAIR(IEEE754_CLASS_QNAN, IEEE754_CLASS_NORM): 186 172 case CLPAIR(IEEE754_CLASS_QNAN, IEEE754_CLASS_DNORM): ··· 202 164 /* 203 165 * Infinity and zero handling 204 166 */ 167 + case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_INF): 168 + return ieee754dp_inf(xs & ys); 169 + 205 170 case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_ZERO): 206 171 case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_NORM): 207 172 case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_DNORM): ··· 212 171 case CLPAIR(IEEE754_CLASS_DNORM, IEEE754_CLASS_ZERO): 213 172 return x; 214 173 215 - case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_INF): 216 174 case CLPAIR(IEEE754_CLASS_NORM, IEEE754_CLASS_INF): 217 175 case CLPAIR(IEEE754_CLASS_DNORM, IEEE754_CLASS_INF): 218 176 case CLPAIR(IEEE754_CLASS_ZERO, IEEE754_CLASS_INF): ··· 220 180 return y; 221 181 222 182 case CLPAIR(IEEE754_CLASS_ZERO, IEEE754_CLASS_ZERO): 223 - if 
(xs == ys) 224 - return x; 225 - return ieee754dp_zero(1); 183 + return ieee754dp_zero(xs & ys); 226 184 227 185 case CLPAIR(IEEE754_CLASS_DNORM, IEEE754_CLASS_DNORM): 228 186 DPDNORMX; ··· 245 207 return y; 246 208 247 209 /* Compare mantissa */ 248 - if (xm <= ym) 210 + if (xm < ym) 249 211 return y; 250 - return x; 212 + else if (xm > ym) 213 + return x; 214 + else if (xs == 0) 215 + return x; 216 + return y; 251 217 }
+64 -22
arch/mips/math-emu/dp_fmin.c
··· 47 47 case CLPAIR(IEEE754_CLASS_SNAN, IEEE754_CLASS_INF): 48 48 return ieee754dp_nanxcpt(x); 49 49 50 - /* numbers are preferred to NaNs */ 50 + /* 51 + * Quiet NaN handling 52 + */ 53 + 54 + /* 55 + * The case of both inputs quiet NaNs 56 + */ 57 + case CLPAIR(IEEE754_CLASS_QNAN, IEEE754_CLASS_QNAN): 58 + return x; 59 + 60 + /* 61 + * The cases of exactly one input quiet NaN (numbers 62 + * are here preferred as returned values to NaNs) 63 + */ 51 64 case CLPAIR(IEEE754_CLASS_ZERO, IEEE754_CLASS_QNAN): 52 65 case CLPAIR(IEEE754_CLASS_NORM, IEEE754_CLASS_QNAN): 53 66 case CLPAIR(IEEE754_CLASS_DNORM, IEEE754_CLASS_QNAN): 54 67 case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_QNAN): 55 68 return x; 56 69 57 - case CLPAIR(IEEE754_CLASS_QNAN, IEEE754_CLASS_QNAN): 58 70 case CLPAIR(IEEE754_CLASS_QNAN, IEEE754_CLASS_ZERO): 59 71 case CLPAIR(IEEE754_CLASS_QNAN, IEEE754_CLASS_NORM): 60 72 case CLPAIR(IEEE754_CLASS_QNAN, IEEE754_CLASS_DNORM): ··· 92 80 return ys ? y : x; 93 81 94 82 case CLPAIR(IEEE754_CLASS_ZERO, IEEE754_CLASS_ZERO): 95 - if (xs == ys) 96 - return x; 97 - return ieee754dp_zero(1); 83 + return ieee754dp_zero(xs | ys); 98 84 99 85 case CLPAIR(IEEE754_CLASS_DNORM, IEEE754_CLASS_DNORM): 100 86 DPDNORMX; ··· 116 106 else if (xs < ys) 117 107 return y; 118 108 119 - /* Compare exponent */ 120 - if (xe > ye) 121 - return y; 122 - else if (xe < ye) 123 - return x; 109 + /* Signs of inputs are the same, let's compare exponents */ 110 + if (xs == 0) { 111 + /* Inputs are both positive */ 112 + if (xe > ye) 113 + return y; 114 + else if (xe < ye) 115 + return x; 116 + } else { 117 + /* Inputs are both negative */ 118 + if (xe > ye) 119 + return x; 120 + else if (xe < ye) 121 + return y; 122 + } 124 123 125 - /* Compare mantissa */ 124 + /* Signs and exponents of inputs are equal, let's compare mantissas */ 125 + if (xs == 0) { 126 + /* Inputs are both positive, with equal signs and exponents */ 127 + if (xm <= ym) 128 + return x; 129 + return y; 130 + } 131 + /* 
Inputs are both negative, with equal signs and exponents */ 126 132 if (xm <= ym) 127 - return x; 128 - return y; 133 + return y; 134 + return x; 129 135 } 130 136 131 137 union ieee754dp ieee754dp_fmina(union ieee754dp x, union ieee754dp y) ··· 173 147 case CLPAIR(IEEE754_CLASS_SNAN, IEEE754_CLASS_INF): 174 148 return ieee754dp_nanxcpt(x); 175 149 176 - /* numbers are preferred to NaNs */ 150 + /* 151 + * Quiet NaN handling 152 + */ 153 + 154 + /* 155 + * The case of both inputs quiet NaNs 156 + */ 157 + case CLPAIR(IEEE754_CLASS_QNAN, IEEE754_CLASS_QNAN): 158 + return x; 159 + 160 + /* 161 + * The cases of exactly one input quiet NaN (numbers 162 + * are here preferred as returned values to NaNs) 163 + */ 177 164 case CLPAIR(IEEE754_CLASS_ZERO, IEEE754_CLASS_QNAN): 178 165 case CLPAIR(IEEE754_CLASS_NORM, IEEE754_CLASS_QNAN): 179 166 case CLPAIR(IEEE754_CLASS_DNORM, IEEE754_CLASS_QNAN): 180 167 case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_QNAN): 181 168 return x; 182 169 183 - case CLPAIR(IEEE754_CLASS_QNAN, IEEE754_CLASS_QNAN): 184 170 case CLPAIR(IEEE754_CLASS_QNAN, IEEE754_CLASS_ZERO): 185 171 case CLPAIR(IEEE754_CLASS_QNAN, IEEE754_CLASS_NORM): 186 172 case CLPAIR(IEEE754_CLASS_QNAN, IEEE754_CLASS_DNORM): ··· 202 164 /* 203 165 * Infinity and zero handling 204 166 */ 167 + case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_INF): 168 + return ieee754dp_inf(xs | ys); 169 + 205 170 case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_ZERO): 206 171 case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_NORM): 207 172 case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_DNORM): 208 173 case CLPAIR(IEEE754_CLASS_NORM, IEEE754_CLASS_ZERO): 209 174 case CLPAIR(IEEE754_CLASS_DNORM, IEEE754_CLASS_ZERO): 210 - return x; 175 + return y; 211 176 212 - case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_INF): 213 177 case CLPAIR(IEEE754_CLASS_NORM, IEEE754_CLASS_INF): 214 178 case CLPAIR(IEEE754_CLASS_DNORM, IEEE754_CLASS_INF): 215 179 case CLPAIR(IEEE754_CLASS_ZERO, IEEE754_CLASS_INF): 216 180 case 
CLPAIR(IEEE754_CLASS_ZERO, IEEE754_CLASS_NORM): 217 181 case CLPAIR(IEEE754_CLASS_ZERO, IEEE754_CLASS_DNORM): 218 - return y; 182 + return x; 219 183 220 184 case CLPAIR(IEEE754_CLASS_ZERO, IEEE754_CLASS_ZERO): 221 - if (xs == ys) 222 - return x; 223 - return ieee754dp_zero(1); 185 + return ieee754dp_zero(xs | ys); 224 186 225 187 case CLPAIR(IEEE754_CLASS_DNORM, IEEE754_CLASS_DNORM): 226 188 DPDNORMX; ··· 245 207 return x; 246 208 247 209 /* Compare mantissa */ 248 - if (xm <= ym) 210 + if (xm < ym) 211 + return x; 212 + else if (xm > ym) 213 + return y; 214 + else if (xs == 1) 249 215 return x; 250 216 return y; 251 217 }
+154 -92
arch/mips/math-emu/dp_maddf.c
··· 14 14 15 15 #include "ieee754dp.h" 16 16 17 - enum maddf_flags { 18 - maddf_negate_product = 1 << 0, 19 - }; 17 + 18 + /* 128 bits shift right logical with rounding. */ 19 + void srl128(u64 *hptr, u64 *lptr, int count) 20 + { 21 + u64 low; 22 + 23 + if (count >= 128) { 24 + *lptr = *hptr != 0 || *lptr != 0; 25 + *hptr = 0; 26 + } else if (count >= 64) { 27 + if (count == 64) { 28 + *lptr = *hptr | (*lptr != 0); 29 + } else { 30 + low = *lptr; 31 + *lptr = *hptr >> (count - 64); 32 + *lptr |= (*hptr << (128 - count)) != 0 || low != 0; 33 + } 34 + *hptr = 0; 35 + } else { 36 + low = *lptr; 37 + *lptr = low >> count | *hptr << (64 - count); 38 + *lptr |= (low << (64 - count)) != 0; 39 + *hptr = *hptr >> count; 40 + } 41 + } 20 42 21 43 static union ieee754dp _dp_maddf(union ieee754dp z, union ieee754dp x, 22 44 union ieee754dp y, enum maddf_flags flags) 23 45 { 24 46 int re; 25 47 int rs; 26 - u64 rm; 27 48 unsigned lxm; 28 49 unsigned hxm; 29 50 unsigned lym; 30 51 unsigned hym; 31 52 u64 lrm; 32 53 u64 hrm; 54 + u64 lzm; 55 + u64 hzm; 33 56 u64 t; 34 57 u64 at; 35 58 int s; ··· 71 48 72 49 ieee754_clearcx(); 73 50 74 - switch (zc) { 75 - case IEEE754_CLASS_SNAN: 76 - ieee754_setcx(IEEE754_INVALID_OPERATION); 51 + /* 52 + * Handle the cases when at least one of x, y or z is a NaN. 53 + * Order of precedence is sNaN, qNaN and z, x, y. 
54 + */ 55 + if (zc == IEEE754_CLASS_SNAN) 77 56 return ieee754dp_nanxcpt(z); 78 - case IEEE754_CLASS_DNORM: 79 - DPDNORMZ; 80 - /* QNAN and ZERO cases are handled separately below */ 81 - } 82 - 83 - switch (CLPAIR(xc, yc)) { 84 - case CLPAIR(IEEE754_CLASS_QNAN, IEEE754_CLASS_SNAN): 85 - case CLPAIR(IEEE754_CLASS_ZERO, IEEE754_CLASS_SNAN): 86 - case CLPAIR(IEEE754_CLASS_NORM, IEEE754_CLASS_SNAN): 87 - case CLPAIR(IEEE754_CLASS_DNORM, IEEE754_CLASS_SNAN): 88 - case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_SNAN): 89 - return ieee754dp_nanxcpt(y); 90 - 91 - case CLPAIR(IEEE754_CLASS_SNAN, IEEE754_CLASS_SNAN): 92 - case CLPAIR(IEEE754_CLASS_SNAN, IEEE754_CLASS_QNAN): 93 - case CLPAIR(IEEE754_CLASS_SNAN, IEEE754_CLASS_ZERO): 94 - case CLPAIR(IEEE754_CLASS_SNAN, IEEE754_CLASS_NORM): 95 - case CLPAIR(IEEE754_CLASS_SNAN, IEEE754_CLASS_DNORM): 96 - case CLPAIR(IEEE754_CLASS_SNAN, IEEE754_CLASS_INF): 57 + if (xc == IEEE754_CLASS_SNAN) 97 58 return ieee754dp_nanxcpt(x); 98 - 99 - case CLPAIR(IEEE754_CLASS_ZERO, IEEE754_CLASS_QNAN): 100 - case CLPAIR(IEEE754_CLASS_NORM, IEEE754_CLASS_QNAN): 101 - case CLPAIR(IEEE754_CLASS_DNORM, IEEE754_CLASS_QNAN): 102 - case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_QNAN): 59 + if (yc == IEEE754_CLASS_SNAN) 60 + return ieee754dp_nanxcpt(y); 61 + if (zc == IEEE754_CLASS_QNAN) 62 + return z; 63 + if (xc == IEEE754_CLASS_QNAN) 64 + return x; 65 + if (yc == IEEE754_CLASS_QNAN) 103 66 return y; 104 67 105 - case CLPAIR(IEEE754_CLASS_QNAN, IEEE754_CLASS_QNAN): 106 - case CLPAIR(IEEE754_CLASS_QNAN, IEEE754_CLASS_ZERO): 107 - case CLPAIR(IEEE754_CLASS_QNAN, IEEE754_CLASS_NORM): 108 - case CLPAIR(IEEE754_CLASS_QNAN, IEEE754_CLASS_DNORM): 109 - case CLPAIR(IEEE754_CLASS_QNAN, IEEE754_CLASS_INF): 110 - return x; 68 + if (zc == IEEE754_CLASS_DNORM) 69 + DPDNORMZ; 70 + /* ZERO z cases are handled separately below */ 111 71 72 + switch (CLPAIR(xc, yc)) { 112 73 113 74 /* 114 75 * Infinity handling 115 76 */ 116 77 case CLPAIR(IEEE754_CLASS_INF, 
IEEE754_CLASS_ZERO): 117 78 case CLPAIR(IEEE754_CLASS_ZERO, IEEE754_CLASS_INF): 118 - if (zc == IEEE754_CLASS_QNAN) 119 - return z; 120 79 ieee754_setcx(IEEE754_INVALID_OPERATION); 121 80 return ieee754dp_indef(); 122 81 ··· 107 102 case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_NORM): 108 103 case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_DNORM): 109 104 case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_INF): 110 - if (zc == IEEE754_CLASS_QNAN) 111 - return z; 112 - return ieee754dp_inf(xs ^ ys); 105 + if ((zc == IEEE754_CLASS_INF) && 106 + ((!(flags & MADDF_NEGATE_PRODUCT) && (zs != (xs ^ ys))) || 107 + ((flags & MADDF_NEGATE_PRODUCT) && (zs == (xs ^ ys))))) { 108 + /* 109 + * Cases of addition of infinities with opposite signs 110 + * or subtraction of infinities with same signs. 111 + */ 112 + ieee754_setcx(IEEE754_INVALID_OPERATION); 113 + return ieee754dp_indef(); 114 + } 115 + /* 116 + * z is here either not an infinity, or an infinity having the 117 + * same sign as product (x*y) (in case of MADDF.D instruction) 118 + * or product -(x*y) (in MSUBF.D case). The result must be an 119 + * infinity, and its sign is determined only by the value of 120 + * (flags & MADDF_NEGATE_PRODUCT) and the signs of x and y. 121 + */ 122 + if (flags & MADDF_NEGATE_PRODUCT) 123 + return ieee754dp_inf(1 ^ (xs ^ ys)); 124 + else 125 + return ieee754dp_inf(xs ^ ys); 113 126 114 127 case CLPAIR(IEEE754_CLASS_ZERO, IEEE754_CLASS_ZERO): 115 128 case CLPAIR(IEEE754_CLASS_ZERO, IEEE754_CLASS_NORM): ··· 136 113 case CLPAIR(IEEE754_CLASS_DNORM, IEEE754_CLASS_ZERO): 137 114 if (zc == IEEE754_CLASS_INF) 138 115 return ieee754dp_inf(zs); 139 - /* Multiplication is 0 so just return z */ 116 + if (zc == IEEE754_CLASS_ZERO) { 117 + /* Handle cases +0 + (-0) and similar ones. 
*/ 118 + if ((!(flags & MADDF_NEGATE_PRODUCT) 119 + && (zs == (xs ^ ys))) || 120 + ((flags & MADDF_NEGATE_PRODUCT) 121 + && (zs != (xs ^ ys)))) 122 + /* 123 + * Cases of addition of zeros of equal signs 124 + * or subtraction of zeroes of opposite signs. 125 + * The sign of the resulting zero is in any 126 + * such case determined only by the sign of z. 127 + */ 128 + return z; 129 + 130 + return ieee754dp_zero(ieee754_csr.rm == FPU_CSR_RD); 131 + } 132 + /* x*y is here 0, and z is not 0, so just return z */ 140 133 return z; 141 134 142 135 case CLPAIR(IEEE754_CLASS_DNORM, IEEE754_CLASS_DNORM): 143 136 DPDNORMX; 144 137 145 138 case CLPAIR(IEEE754_CLASS_NORM, IEEE754_CLASS_DNORM): 146 - if (zc == IEEE754_CLASS_QNAN) 147 - return z; 148 - else if (zc == IEEE754_CLASS_INF) 139 + if (zc == IEEE754_CLASS_INF) 149 140 return ieee754dp_inf(zs); 150 141 DPDNORMY; 151 142 break; 152 143 153 144 case CLPAIR(IEEE754_CLASS_DNORM, IEEE754_CLASS_NORM): 154 - if (zc == IEEE754_CLASS_QNAN) 155 - return z; 156 - else if (zc == IEEE754_CLASS_INF) 145 + if (zc == IEEE754_CLASS_INF) 157 146 return ieee754dp_inf(zs); 158 147 DPDNORMX; 159 148 break; 160 149 161 150 case CLPAIR(IEEE754_CLASS_NORM, IEEE754_CLASS_NORM): 162 - if (zc == IEEE754_CLASS_QNAN) 163 - return z; 164 - else if (zc == IEEE754_CLASS_INF) 151 + if (zc == IEEE754_CLASS_INF) 165 152 return ieee754dp_inf(zs); 166 153 /* fall through to real computations */ 167 154 } ··· 190 157 191 158 re = xe + ye; 192 159 rs = xs ^ ys; 193 - if (flags & maddf_negate_product) 160 + if (flags & MADDF_NEGATE_PRODUCT) 194 161 rs ^= 1; 195 162 196 163 /* shunt to top of word */ ··· 198 165 ym <<= 64 - (DP_FBITS + 1); 199 166 200 167 /* 201 - * Multiply 64 bits xm, ym to give high 64 bits rm with stickness. 168 + * Multiply 64 bits xm and ym to give 128 bits result in hrm:lrm. 
202 169 */ 203 170 204 171 /* 32 * 32 => 64 */ ··· 228 195 229 196 hrm = hrm + (t >> 32); 230 197 231 - rm = hrm | (lrm != 0); 232 - 233 - /* 234 - * Sticky shift down to normal rounding precision. 235 - */ 236 - if ((s64) rm < 0) { 237 - rm = (rm >> (64 - (DP_FBITS + 1 + 3))) | 238 - ((rm << (DP_FBITS + 1 + 3)) != 0); 198 + /* Put explicit bit at bit 126 if necessary */ 199 + if ((int64_t)hrm < 0) { 200 + lrm = (hrm << 63) | (lrm >> 1); 201 + hrm = hrm >> 1; 239 202 re++; 240 - } else { 241 - rm = (rm >> (64 - (DP_FBITS + 1 + 3 + 1))) | 242 - ((rm << (DP_FBITS + 1 + 3 + 1)) != 0); 243 203 } 244 - assert(rm & (DP_HIDDEN_BIT << 3)); 245 204 246 - if (zc == IEEE754_CLASS_ZERO) 247 - return ieee754dp_format(rs, re, rm); 205 + assert(hrm & (1 << 62)); 248 206 249 - /* And now the addition */ 250 - assert(zm & DP_HIDDEN_BIT); 207 + if (zc == IEEE754_CLASS_ZERO) { 208 + /* 209 + * Move explicit bit from bit 126 to bit 55 since the 210 + * ieee754dp_format code expects the mantissa to be 211 + * 56 bits wide (53 + 3 rounding bits). 212 + */ 213 + srl128(&hrm, &lrm, (126 - 55)); 214 + return ieee754dp_format(rs, re, lrm); 215 + } 251 216 252 - /* 253 - * Provide guard,round and stick bit space. 254 - */ 255 - zm <<= 3; 217 + /* Move explicit bit from bit 52 to bit 126 */ 218 + lzm = 0; 219 + hzm = zm << 10; 220 + assert(hzm & (1 << 62)); 256 221 222 + /* Make the exponents the same */ 257 223 if (ze > re) { 258 224 /* 259 225 * Have to shift y fraction right to align. 260 226 */ 261 227 s = ze - re; 262 - rm = XDPSRS(rm, s); 228 + srl128(&hrm, &lrm, s); 263 229 re += s; 264 230 } else if (re > ze) { 265 231 /* 266 232 * Have to shift x fraction right to align. 
267 233 */ 268 234 s = re - ze; 269 - zm = XDPSRS(zm, s); 235 + srl128(&hzm, &lzm, s); 270 236 ze += s; 271 237 } 272 238 assert(ze == re); 273 239 assert(ze <= DP_EMAX); 274 240 241 + /* Do the addition */ 275 242 if (zs == rs) { 276 243 /* 277 - * Generate 28 bit result of adding two 27 bit numbers 278 - * leaving result in xm, xs and xe. 244 + * Generate 128 bit result by adding two 127 bit numbers 245 + * leaving result in hzm:lzm, zs and ze. 279 246 */ 280 - zm = zm + rm; 281 - 282 - if (zm >> (DP_FBITS + 1 + 3)) { /* carry out */ 283 - zm = XDPSRS1(zm); 247 + hzm = hzm + hrm + (lzm > (lzm + lrm)); 248 + lzm = lzm + lrm; 249 + if ((int64_t)hzm < 0) { /* carry out */ 250 + srl128(&hzm, &lzm, 1); 284 251 ze++; 285 252 } 286 253 } else { 287 - if (zm >= rm) { 288 - zm = zm - rm; 254 + if (hzm > hrm || (hzm == hrm && lzm >= lrm)) { 255 + hzm = hzm - hrm - (lzm < lrm); 256 + lzm = lzm - lrm; 289 257 } else { 290 - zm = rm - zm; 258 + hzm = hrm - hzm - (lrm < lzm); 259 + lzm = lrm - lzm; 291 260 zs = rs; 292 261 } 293 - if (zm == 0) 262 + if (lzm == 0 && hzm == 0) 294 263 return ieee754dp_zero(ieee754_csr.rm == FPU_CSR_RD); 295 264 296 265 /* 297 - * Normalize to rounding precision. 266 + * Put explicit bit at bit 126 if necessary. 
298 267 */ 299 - while ((zm >> (DP_FBITS + 3)) == 0) { 300 - zm <<= 1; 301 - ze--; 268 + if (hzm == 0) { 269 + /* left shift by 63 or 64 bits */ 270 + if ((int64_t)lzm < 0) { 271 + /* MSB of lzm is the explicit bit */ 272 + hzm = lzm >> 1; 273 + lzm = lzm << 63; 274 + ze -= 63; 275 + } else { 276 + hzm = lzm; 277 + lzm = 0; 278 + ze -= 64; 279 + } 280 + } 281 + 282 + t = 0; 283 + while ((hzm >> (62 - t)) == 0) 284 + t++; 285 + 286 + assert(t <= 62); 287 + if (t) { 288 + hzm = hzm << t | lzm >> (64 - t); 289 + lzm = lzm << t; 290 + ze -= t; 302 291 } 303 292 } 304 293 305 - return ieee754dp_format(zs, ze, zm); 294 + /* 295 + * Move explicit bit from bit 126 to bit 55 since the 296 + * ieee754dp_format code expects the mantissa to be 297 + * 56 bits wide (53 + 3 rounding bits). 298 + */ 299 + srl128(&hzm, &lzm, (126 - 55)); 300 + 301 + return ieee754dp_format(zs, ze, lzm); 306 302 } 307 303 308 304 union ieee754dp ieee754dp_maddf(union ieee754dp z, union ieee754dp x, ··· 343 281 union ieee754dp ieee754dp_msubf(union ieee754dp z, union ieee754dp x, 344 282 union ieee754dp y) 345 283 { 346 - return _dp_maddf(z, x, y, maddf_negate_product); 284 + return _dp_maddf(z, x, y, MADDF_NEGATE_PRODUCT); 347 285 }
+89
arch/mips/math-emu/dp_rint.c
··· 1 + /* IEEE754 floating point arithmetic 2 + * double precision: common utilities 3 + */ 4 + /* 5 + * MIPS floating point support 6 + * Copyright (C) 1994-2000 Algorithmics Ltd. 7 + * Copyright (C) 2017 Imagination Technologies, Ltd. 8 + * Author: Aleksandar Markovic <aleksandar.markovic@imgtec.com> 9 + * 10 + * This program is free software; you can distribute it and/or modify it 11 + * under the terms of the GNU General Public License (Version 2) as 12 + * published by the Free Software Foundation. 13 + * 14 + * This program is distributed in the hope it will be useful, but WITHOUT 15 + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 16 + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License 17 + * for more details. 18 + * 19 + * You should have received a copy of the GNU General Public License along 20 + * with this program. 21 + */ 22 + 23 + #include "ieee754dp.h" 24 + 25 + union ieee754dp ieee754dp_rint(union ieee754dp x) 26 + { 27 + union ieee754dp ret; 28 + u64 residue; 29 + int sticky; 30 + int round; 31 + int odd; 32 + 33 + COMPXDP; 34 + 35 + ieee754_clearcx(); 36 + 37 + EXPLODEXDP; 38 + FLUSHXDP; 39 + 40 + if (xc == IEEE754_CLASS_SNAN) 41 + return ieee754dp_nanxcpt(x); 42 + 43 + if ((xc == IEEE754_CLASS_QNAN) || 44 + (xc == IEEE754_CLASS_INF) || 45 + (xc == IEEE754_CLASS_ZERO)) 46 + return x; 47 + 48 + if (xe >= DP_FBITS) 49 + return x; 50 + 51 + if (xe < -1) { 52 + residue = xm; 53 + round = 0; 54 + sticky = residue != 0; 55 + xm = 0; 56 + } else { 57 + residue = xm << (64 - DP_FBITS + xe); 58 + round = (residue >> 63) != 0; 59 + sticky = (residue << 1) != 0; 60 + xm >>= DP_FBITS - xe; 61 + } 62 + 63 + odd = (xm & 0x1) != 0x0; 64 + 65 + switch (ieee754_csr.rm) { 66 + case FPU_CSR_RN: /* toward nearest */ 67 + if (round && (sticky || odd)) 68 + xm++; 69 + break; 70 + case FPU_CSR_RZ: /* toward zero */ 71 + break; 72 + case FPU_CSR_RU: /* toward +infinity */ 73 + if ((round || sticky) && !xs) 74 + xm++; 75 + 
break; 76 + case FPU_CSR_RD: /* toward -infinity */ 77 + if ((round || sticky) && xs) 78 + xm++; 79 + break; 80 + } 81 + 82 + if (round || sticky) 83 + ieee754_setcx(IEEE754_INEXACT); 84 + 85 + ret = ieee754dp_flong(xm); 86 + DPSIGN(ret) = xs; 87 + 88 + return ret; 89 + }
+2
arch/mips/math-emu/ieee754.h
··· 67 67 union ieee754sp ieee754sp_fint(int x); 68 68 union ieee754sp ieee754sp_flong(s64 x); 69 69 union ieee754sp ieee754sp_fdp(union ieee754dp x); 70 + union ieee754sp ieee754sp_rint(union ieee754sp x); 70 71 71 72 int ieee754sp_tint(union ieee754sp x); 72 73 s64 ieee754sp_tlong(union ieee754sp x); ··· 102 101 union ieee754dp ieee754dp_fint(int x); 103 102 union ieee754dp ieee754dp_flong(s64 x); 104 103 union ieee754dp ieee754dp_fsp(union ieee754sp x); 104 + union ieee754dp ieee754dp_rint(union ieee754dp x); 105 105 106 106 int ieee754dp_tint(union ieee754dp x); 107 107 s64 ieee754dp_tlong(union ieee754dp x);
+4
arch/mips/math-emu/ieee754int.h
··· 26 26 27 27 #define CLPAIR(x, y) ((x)*6+(y)) 28 28 29 + enum maddf_flags { 30 + MADDF_NEGATE_PRODUCT = 1 << 0, 31 + }; 32 + 29 33 static inline void ieee754_clearcx(void) 30 34 { 31 35 ieee754_csr.cx = 0;
+4
arch/mips/math-emu/ieee754sp.h
··· 45 45 return SPBEXP(x) != SP_EMAX + 1 + SP_EBIAS; 46 46 } 47 47 48 + /* 64 bit right shift with rounding */ 49 + #define XSPSRS64(v, rs) \ 50 + (((rs) >= 64) ? ((v) != 0) : ((v) >> (rs)) | ((v) << (64-(rs)) != 0)) 51 + 48 52 /* 3bit extended single precision sticky right shift */ 49 53 #define XSPSRS(v, rs) \ 50 54 ((rs > (SP_FBITS+3))?1:((v) >> (rs)) | ((v) << (32-(rs)) != 0))
+314 -4
arch/mips/math-emu/me-debugfs.c
··· 28 28 } 29 29 DEFINE_SIMPLE_ATTRIBUTE(fops_fpuemu_stat, fpuemu_stat_get, NULL, "%llu\n"); 30 30 31 + /* 32 + * Used to obtain names for a debugfs instruction counter, given field name 33 + * in fpuemustats structure. For example, for input "cmp_sueq_d", the output 34 + * would be "cmp.sueq.d". This is needed since dots are not allowed to be 35 + * used in structure field names, and are, on the other hand, desired to be 36 + * used in debugfs item names to be clearly associated to corresponding 37 + * MIPS FPU instructions. 38 + */ 39 + static void adjust_instruction_counter_name(char *out_name, char *in_name) 40 + { 41 + int i = 0; 42 + 43 + strcpy(out_name, in_name); 44 + while (in_name[i] != '\0') { 45 + if (out_name[i] == '_') 46 + out_name[i] = '.'; 47 + i++; 48 + } 49 + } 50 + 51 + static int fpuemustats_clear_show(struct seq_file *s, void *unused) 52 + { 53 + __this_cpu_write((fpuemustats).emulated, 0); 54 + __this_cpu_write((fpuemustats).loads, 0); 55 + __this_cpu_write((fpuemustats).stores, 0); 56 + __this_cpu_write((fpuemustats).branches, 0); 57 + __this_cpu_write((fpuemustats).cp1ops, 0); 58 + __this_cpu_write((fpuemustats).cp1xops, 0); 59 + __this_cpu_write((fpuemustats).errors, 0); 60 + __this_cpu_write((fpuemustats).ieee754_inexact, 0); 61 + __this_cpu_write((fpuemustats).ieee754_underflow, 0); 62 + __this_cpu_write((fpuemustats).ieee754_overflow, 0); 63 + __this_cpu_write((fpuemustats).ieee754_zerodiv, 0); 64 + __this_cpu_write((fpuemustats).ieee754_invalidop, 0); 65 + __this_cpu_write((fpuemustats).ds_emul, 0); 66 + 67 + __this_cpu_write((fpuemustats).abs_s, 0); 68 + __this_cpu_write((fpuemustats).abs_d, 0); 69 + __this_cpu_write((fpuemustats).add_s, 0); 70 + __this_cpu_write((fpuemustats).add_d, 0); 71 + __this_cpu_write((fpuemustats).bc1eqz, 0); 72 + __this_cpu_write((fpuemustats).bc1nez, 0); 73 + __this_cpu_write((fpuemustats).ceil_w_s, 0); 74 + __this_cpu_write((fpuemustats).ceil_w_d, 0); 75 + __this_cpu_write((fpuemustats).ceil_l_s, 0); 76 + 
__this_cpu_write((fpuemustats).ceil_l_d, 0); 77 + __this_cpu_write((fpuemustats).class_s, 0); 78 + __this_cpu_write((fpuemustats).class_d, 0); 79 + __this_cpu_write((fpuemustats).cmp_af_s, 0); 80 + __this_cpu_write((fpuemustats).cmp_af_d, 0); 81 + __this_cpu_write((fpuemustats).cmp_eq_s, 0); 82 + __this_cpu_write((fpuemustats).cmp_eq_d, 0); 83 + __this_cpu_write((fpuemustats).cmp_le_s, 0); 84 + __this_cpu_write((fpuemustats).cmp_le_d, 0); 85 + __this_cpu_write((fpuemustats).cmp_lt_s, 0); 86 + __this_cpu_write((fpuemustats).cmp_lt_d, 0); 87 + __this_cpu_write((fpuemustats).cmp_ne_s, 0); 88 + __this_cpu_write((fpuemustats).cmp_ne_d, 0); 89 + __this_cpu_write((fpuemustats).cmp_or_s, 0); 90 + __this_cpu_write((fpuemustats).cmp_or_d, 0); 91 + __this_cpu_write((fpuemustats).cmp_ueq_s, 0); 92 + __this_cpu_write((fpuemustats).cmp_ueq_d, 0); 93 + __this_cpu_write((fpuemustats).cmp_ule_s, 0); 94 + __this_cpu_write((fpuemustats).cmp_ule_d, 0); 95 + __this_cpu_write((fpuemustats).cmp_ult_s, 0); 96 + __this_cpu_write((fpuemustats).cmp_ult_d, 0); 97 + __this_cpu_write((fpuemustats).cmp_un_s, 0); 98 + __this_cpu_write((fpuemustats).cmp_un_d, 0); 99 + __this_cpu_write((fpuemustats).cmp_une_s, 0); 100 + __this_cpu_write((fpuemustats).cmp_une_d, 0); 101 + __this_cpu_write((fpuemustats).cmp_saf_s, 0); 102 + __this_cpu_write((fpuemustats).cmp_saf_d, 0); 103 + __this_cpu_write((fpuemustats).cmp_seq_s, 0); 104 + __this_cpu_write((fpuemustats).cmp_seq_d, 0); 105 + __this_cpu_write((fpuemustats).cmp_sle_s, 0); 106 + __this_cpu_write((fpuemustats).cmp_sle_d, 0); 107 + __this_cpu_write((fpuemustats).cmp_slt_s, 0); 108 + __this_cpu_write((fpuemustats).cmp_slt_d, 0); 109 + __this_cpu_write((fpuemustats).cmp_sne_s, 0); 110 + __this_cpu_write((fpuemustats).cmp_sne_d, 0); 111 + __this_cpu_write((fpuemustats).cmp_sor_s, 0); 112 + __this_cpu_write((fpuemustats).cmp_sor_d, 0); 113 + __this_cpu_write((fpuemustats).cmp_sueq_s, 0); 114 + __this_cpu_write((fpuemustats).cmp_sueq_d, 0); 115 + 
__this_cpu_write((fpuemustats).cmp_sule_s, 0); 116 + __this_cpu_write((fpuemustats).cmp_sule_d, 0); 117 + __this_cpu_write((fpuemustats).cmp_sult_s, 0); 118 + __this_cpu_write((fpuemustats).cmp_sult_d, 0); 119 + __this_cpu_write((fpuemustats).cmp_sun_s, 0); 120 + __this_cpu_write((fpuemustats).cmp_sun_d, 0); 121 + __this_cpu_write((fpuemustats).cmp_sune_s, 0); 122 + __this_cpu_write((fpuemustats).cmp_sune_d, 0); 123 + __this_cpu_write((fpuemustats).cvt_d_l, 0); 124 + __this_cpu_write((fpuemustats).cvt_d_s, 0); 125 + __this_cpu_write((fpuemustats).cvt_d_w, 0); 126 + __this_cpu_write((fpuemustats).cvt_l_s, 0); 127 + __this_cpu_write((fpuemustats).cvt_l_d, 0); 128 + __this_cpu_write((fpuemustats).cvt_s_d, 0); 129 + __this_cpu_write((fpuemustats).cvt_s_l, 0); 130 + __this_cpu_write((fpuemustats).cvt_s_w, 0); 131 + __this_cpu_write((fpuemustats).cvt_w_s, 0); 132 + __this_cpu_write((fpuemustats).cvt_w_d, 0); 133 + __this_cpu_write((fpuemustats).div_s, 0); 134 + __this_cpu_write((fpuemustats).div_d, 0); 135 + __this_cpu_write((fpuemustats).floor_w_s, 0); 136 + __this_cpu_write((fpuemustats).floor_w_d, 0); 137 + __this_cpu_write((fpuemustats).floor_l_s, 0); 138 + __this_cpu_write((fpuemustats).floor_l_d, 0); 139 + __this_cpu_write((fpuemustats).maddf_s, 0); 140 + __this_cpu_write((fpuemustats).maddf_d, 0); 141 + __this_cpu_write((fpuemustats).max_s, 0); 142 + __this_cpu_write((fpuemustats).max_d, 0); 143 + __this_cpu_write((fpuemustats).maxa_s, 0); 144 + __this_cpu_write((fpuemustats).maxa_d, 0); 145 + __this_cpu_write((fpuemustats).min_s, 0); 146 + __this_cpu_write((fpuemustats).min_d, 0); 147 + __this_cpu_write((fpuemustats).mina_s, 0); 148 + __this_cpu_write((fpuemustats).mina_d, 0); 149 + __this_cpu_write((fpuemustats).mov_s, 0); 150 + __this_cpu_write((fpuemustats).mov_d, 0); 151 + __this_cpu_write((fpuemustats).msubf_s, 0); 152 + __this_cpu_write((fpuemustats).msubf_d, 0); 153 + __this_cpu_write((fpuemustats).mul_s, 0); 154 + __this_cpu_write((fpuemustats).mul_d, 0); 
155 + __this_cpu_write((fpuemustats).neg_s, 0); 156 + __this_cpu_write((fpuemustats).neg_d, 0); 157 + __this_cpu_write((fpuemustats).recip_s, 0); 158 + __this_cpu_write((fpuemustats).recip_d, 0); 159 + __this_cpu_write((fpuemustats).rint_s, 0); 160 + __this_cpu_write((fpuemustats).rint_d, 0); 161 + __this_cpu_write((fpuemustats).round_w_s, 0); 162 + __this_cpu_write((fpuemustats).round_w_d, 0); 163 + __this_cpu_write((fpuemustats).round_l_s, 0); 164 + __this_cpu_write((fpuemustats).round_l_d, 0); 165 + __this_cpu_write((fpuemustats).rsqrt_s, 0); 166 + __this_cpu_write((fpuemustats).rsqrt_d, 0); 167 + __this_cpu_write((fpuemustats).sel_s, 0); 168 + __this_cpu_write((fpuemustats).sel_d, 0); 169 + __this_cpu_write((fpuemustats).seleqz_s, 0); 170 + __this_cpu_write((fpuemustats).seleqz_d, 0); 171 + __this_cpu_write((fpuemustats).selnez_s, 0); 172 + __this_cpu_write((fpuemustats).selnez_d, 0); 173 + __this_cpu_write((fpuemustats).sqrt_s, 0); 174 + __this_cpu_write((fpuemustats).sqrt_d, 0); 175 + __this_cpu_write((fpuemustats).sub_s, 0); 176 + __this_cpu_write((fpuemustats).sub_d, 0); 177 + __this_cpu_write((fpuemustats).trunc_w_s, 0); 178 + __this_cpu_write((fpuemustats).trunc_w_d, 0); 179 + __this_cpu_write((fpuemustats).trunc_l_s, 0); 180 + __this_cpu_write((fpuemustats).trunc_l_d, 0); 181 + 182 + return 0; 183 + } 184 + 185 + static int fpuemustats_clear_open(struct inode *inode, struct file *file) 186 + { 187 + return single_open(file, fpuemustats_clear_show, inode->i_private); 188 + } 189 + 190 + static const struct file_operations fpuemustats_clear_fops = { 191 + .open = fpuemustats_clear_open, 192 + .read = seq_read, 193 + .llseek = seq_lseek, 194 + .release = single_release, 195 + }; 196 + 31 197 static int __init debugfs_fpuemu(void) 32 198 { 33 - struct dentry *d, *dir; 199 + struct dentry *fpuemu_debugfs_base_dir; 200 + struct dentry *fpuemu_debugfs_inst_dir; 201 + struct dentry *d, *reset_file; 34 202 35 203 if (!mips_debugfs_dir) 36 204 return -ENODEV; 37 - 
dir = debugfs_create_dir("fpuemustats", mips_debugfs_dir); 38 - if (!dir) 205 + 206 + fpuemu_debugfs_base_dir = debugfs_create_dir("fpuemustats", 207 + mips_debugfs_dir); 208 + if (!fpuemu_debugfs_base_dir) 209 + return -ENOMEM; 210 + 211 + reset_file = debugfs_create_file("fpuemustats_clear", 0444, 212 + mips_debugfs_dir, NULL, 213 + &fpuemustats_clear_fops); 214 + if (!reset_file) 39 215 return -ENOMEM; 40 216 41 217 #define FPU_EMU_STAT_OFFSET(m) \ ··· 219 43 220 44 #define FPU_STAT_CREATE(m) \ 221 45 do { \ 222 - d = debugfs_create_file(#m , S_IRUGO, dir, \ 46 + d = debugfs_create_file(#m, 0444, fpuemu_debugfs_base_dir, \ 223 47 (void *)FPU_EMU_STAT_OFFSET(m), \ 224 48 &fops_fpuemu_stat); \ 225 49 if (!d) \ ··· 229 53 FPU_STAT_CREATE(emulated); 230 54 FPU_STAT_CREATE(loads); 231 55 FPU_STAT_CREATE(stores); 56 + FPU_STAT_CREATE(branches); 232 57 FPU_STAT_CREATE(cp1ops); 233 58 FPU_STAT_CREATE(cp1xops); 234 59 FPU_STAT_CREATE(errors); ··· 239 62 FPU_STAT_CREATE(ieee754_zerodiv); 240 63 FPU_STAT_CREATE(ieee754_invalidop); 241 64 FPU_STAT_CREATE(ds_emul); 65 + 66 + fpuemu_debugfs_inst_dir = debugfs_create_dir("instructions", 67 + fpuemu_debugfs_base_dir); 68 + if (!fpuemu_debugfs_inst_dir) 69 + return -ENOMEM; 70 + 71 + #define FPU_STAT_CREATE_EX(m) \ 72 + do { \ 73 + char name[32]; \ 74 + \ 75 + adjust_instruction_counter_name(name, #m); \ 76 + \ 77 + d = debugfs_create_file(name, 0444, fpuemu_debugfs_inst_dir, \ 78 + (void *)FPU_EMU_STAT_OFFSET(m), \ 79 + &fops_fpuemu_stat); \ 80 + if (!d) \ 81 + return -ENOMEM; \ 82 + } while (0) 83 + 84 + FPU_STAT_CREATE_EX(abs_s); 85 + FPU_STAT_CREATE_EX(abs_d); 86 + FPU_STAT_CREATE_EX(add_s); 87 + FPU_STAT_CREATE_EX(add_d); 88 + FPU_STAT_CREATE_EX(bc1eqz); 89 + FPU_STAT_CREATE_EX(bc1nez); 90 + FPU_STAT_CREATE_EX(ceil_w_s); 91 + FPU_STAT_CREATE_EX(ceil_w_d); 92 + FPU_STAT_CREATE_EX(ceil_l_s); 93 + FPU_STAT_CREATE_EX(ceil_l_d); 94 + FPU_STAT_CREATE_EX(class_s); 95 + FPU_STAT_CREATE_EX(class_d); 96 + 
FPU_STAT_CREATE_EX(cmp_af_s); 97 + FPU_STAT_CREATE_EX(cmp_af_d); 98 + FPU_STAT_CREATE_EX(cmp_eq_s); 99 + FPU_STAT_CREATE_EX(cmp_eq_d); 100 + FPU_STAT_CREATE_EX(cmp_le_s); 101 + FPU_STAT_CREATE_EX(cmp_le_d); 102 + FPU_STAT_CREATE_EX(cmp_lt_s); 103 + FPU_STAT_CREATE_EX(cmp_lt_d); 104 + FPU_STAT_CREATE_EX(cmp_ne_s); 105 + FPU_STAT_CREATE_EX(cmp_ne_d); 106 + FPU_STAT_CREATE_EX(cmp_or_s); 107 + FPU_STAT_CREATE_EX(cmp_or_d); 108 + FPU_STAT_CREATE_EX(cmp_ueq_s); 109 + FPU_STAT_CREATE_EX(cmp_ueq_d); 110 + FPU_STAT_CREATE_EX(cmp_ule_s); 111 + FPU_STAT_CREATE_EX(cmp_ule_d); 112 + FPU_STAT_CREATE_EX(cmp_ult_s); 113 + FPU_STAT_CREATE_EX(cmp_ult_d); 114 + FPU_STAT_CREATE_EX(cmp_un_s); 115 + FPU_STAT_CREATE_EX(cmp_un_d); 116 + FPU_STAT_CREATE_EX(cmp_une_s); 117 + FPU_STAT_CREATE_EX(cmp_une_d); 118 + FPU_STAT_CREATE_EX(cmp_saf_s); 119 + FPU_STAT_CREATE_EX(cmp_saf_d); 120 + FPU_STAT_CREATE_EX(cmp_seq_s); 121 + FPU_STAT_CREATE_EX(cmp_seq_d); 122 + FPU_STAT_CREATE_EX(cmp_sle_s); 123 + FPU_STAT_CREATE_EX(cmp_sle_d); 124 + FPU_STAT_CREATE_EX(cmp_slt_s); 125 + FPU_STAT_CREATE_EX(cmp_slt_d); 126 + FPU_STAT_CREATE_EX(cmp_sne_s); 127 + FPU_STAT_CREATE_EX(cmp_sne_d); 128 + FPU_STAT_CREATE_EX(cmp_sor_s); 129 + FPU_STAT_CREATE_EX(cmp_sor_d); 130 + FPU_STAT_CREATE_EX(cmp_sueq_s); 131 + FPU_STAT_CREATE_EX(cmp_sueq_d); 132 + FPU_STAT_CREATE_EX(cmp_sule_s); 133 + FPU_STAT_CREATE_EX(cmp_sule_d); 134 + FPU_STAT_CREATE_EX(cmp_sult_s); 135 + FPU_STAT_CREATE_EX(cmp_sult_d); 136 + FPU_STAT_CREATE_EX(cmp_sun_s); 137 + FPU_STAT_CREATE_EX(cmp_sun_d); 138 + FPU_STAT_CREATE_EX(cmp_sune_s); 139 + FPU_STAT_CREATE_EX(cmp_sune_d); 140 + FPU_STAT_CREATE_EX(cvt_d_l); 141 + FPU_STAT_CREATE_EX(cvt_d_s); 142 + FPU_STAT_CREATE_EX(cvt_d_w); 143 + FPU_STAT_CREATE_EX(cvt_l_s); 144 + FPU_STAT_CREATE_EX(cvt_l_d); 145 + FPU_STAT_CREATE_EX(cvt_s_d); 146 + FPU_STAT_CREATE_EX(cvt_s_l); 147 + FPU_STAT_CREATE_EX(cvt_s_w); 148 + FPU_STAT_CREATE_EX(cvt_w_s); 149 + FPU_STAT_CREATE_EX(cvt_w_d); 150 + FPU_STAT_CREATE_EX(div_s); 151 
+ FPU_STAT_CREATE_EX(div_d); 152 + FPU_STAT_CREATE_EX(floor_w_s); 153 + FPU_STAT_CREATE_EX(floor_w_d); 154 + FPU_STAT_CREATE_EX(floor_l_s); 155 + FPU_STAT_CREATE_EX(floor_l_d); 156 + FPU_STAT_CREATE_EX(maddf_s); 157 + FPU_STAT_CREATE_EX(maddf_d); 158 + FPU_STAT_CREATE_EX(max_s); 159 + FPU_STAT_CREATE_EX(max_d); 160 + FPU_STAT_CREATE_EX(maxa_s); 161 + FPU_STAT_CREATE_EX(maxa_d); 162 + FPU_STAT_CREATE_EX(min_s); 163 + FPU_STAT_CREATE_EX(min_d); 164 + FPU_STAT_CREATE_EX(mina_s); 165 + FPU_STAT_CREATE_EX(mina_d); 166 + FPU_STAT_CREATE_EX(mov_s); 167 + FPU_STAT_CREATE_EX(mov_d); 168 + FPU_STAT_CREATE_EX(msubf_s); 169 + FPU_STAT_CREATE_EX(msubf_d); 170 + FPU_STAT_CREATE_EX(mul_s); 171 + FPU_STAT_CREATE_EX(mul_d); 172 + FPU_STAT_CREATE_EX(neg_s); 173 + FPU_STAT_CREATE_EX(neg_d); 174 + FPU_STAT_CREATE_EX(recip_s); 175 + FPU_STAT_CREATE_EX(recip_d); 176 + FPU_STAT_CREATE_EX(rint_s); 177 + FPU_STAT_CREATE_EX(rint_d); 178 + FPU_STAT_CREATE_EX(round_w_s); 179 + FPU_STAT_CREATE_EX(round_w_d); 180 + FPU_STAT_CREATE_EX(round_l_s); 181 + FPU_STAT_CREATE_EX(round_l_d); 182 + FPU_STAT_CREATE_EX(rsqrt_s); 183 + FPU_STAT_CREATE_EX(rsqrt_d); 184 + FPU_STAT_CREATE_EX(sel_s); 185 + FPU_STAT_CREATE_EX(sel_d); 186 + FPU_STAT_CREATE_EX(seleqz_s); 187 + FPU_STAT_CREATE_EX(seleqz_d); 188 + FPU_STAT_CREATE_EX(selnez_s); 189 + FPU_STAT_CREATE_EX(selnez_d); 190 + FPU_STAT_CREATE_EX(sqrt_s); 191 + FPU_STAT_CREATE_EX(sqrt_d); 192 + FPU_STAT_CREATE_EX(sub_s); 193 + FPU_STAT_CREATE_EX(sub_d); 194 + FPU_STAT_CREATE_EX(trunc_w_s); 195 + FPU_STAT_CREATE_EX(trunc_w_d); 196 + FPU_STAT_CREATE_EX(trunc_l_s); 197 + FPU_STAT_CREATE_EX(trunc_l_d); 242 198 243 199 return 0; 244 200 }
+63 -21
arch/mips/math-emu/sp_fmax.c
··· 47 47 case CLPAIR(IEEE754_CLASS_SNAN, IEEE754_CLASS_INF): 48 48 return ieee754sp_nanxcpt(x); 49 49 50 - /* numbers are preferred to NaNs */ 50 + /* 51 + * Quiet NaN handling 52 + */ 53 + 54 + /* 55 + * The case of both inputs quiet NaNs 56 + */ 57 + case CLPAIR(IEEE754_CLASS_QNAN, IEEE754_CLASS_QNAN): 58 + return x; 59 + 60 + /* 61 + * The cases of exactly one input quiet NaN (numbers 62 + * are here preferred as returned values to NaNs) 63 + */ 51 64 case CLPAIR(IEEE754_CLASS_ZERO, IEEE754_CLASS_QNAN): 52 65 case CLPAIR(IEEE754_CLASS_NORM, IEEE754_CLASS_QNAN): 53 66 case CLPAIR(IEEE754_CLASS_DNORM, IEEE754_CLASS_QNAN): 54 67 case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_QNAN): 55 68 return x; 56 69 57 - case CLPAIR(IEEE754_CLASS_QNAN, IEEE754_CLASS_QNAN): 58 70 case CLPAIR(IEEE754_CLASS_QNAN, IEEE754_CLASS_ZERO): 59 71 case CLPAIR(IEEE754_CLASS_QNAN, IEEE754_CLASS_NORM): 60 72 case CLPAIR(IEEE754_CLASS_QNAN, IEEE754_CLASS_DNORM): ··· 92 80 return ys ? x : y; 93 81 94 82 case CLPAIR(IEEE754_CLASS_ZERO, IEEE754_CLASS_ZERO): 95 - if (xs == ys) 96 - return x; 97 - return ieee754sp_zero(1); 83 + return ieee754sp_zero(xs & ys); 98 84 99 85 case CLPAIR(IEEE754_CLASS_DNORM, IEEE754_CLASS_DNORM): 100 86 SPDNORMX; ··· 116 106 else if (xs < ys) 117 107 return x; 118 108 119 - /* Compare exponent */ 120 - if (xe > ye) 121 - return x; 122 - else if (xe < ye) 123 - return y; 109 + /* Signs of inputs are equal, let's compare exponents */ 110 + if (xs == 0) { 111 + /* Inputs are both positive */ 112 + if (xe > ye) 113 + return x; 114 + else if (xe < ye) 115 + return y; 116 + } else { 117 + /* Inputs are both negative */ 118 + if (xe > ye) 119 + return y; 120 + else if (xe < ye) 121 + return x; 122 + } 124 123 125 - /* Compare mantissa */ 124 + /* Signs and exponents of inputs are equal, let's compare mantissas */ 125 + if (xs == 0) { 126 + /* Inputs are both positive, with equal signs and exponents */ 127 + if (xm <= ym) 128 + return y; 129 + return x; 130 + } 131 + /* Inputs 
are both negative, with equal signs and exponents */ 126 132 if (xm <= ym) 127 - return y; 128 - return x; 133 + return x; 134 + return y; 129 135 } 130 136 131 137 union ieee754sp ieee754sp_fmaxa(union ieee754sp x, union ieee754sp y) ··· 173 147 case CLPAIR(IEEE754_CLASS_SNAN, IEEE754_CLASS_INF): 174 148 return ieee754sp_nanxcpt(x); 175 149 176 - /* numbers are preferred to NaNs */ 150 + /* 151 + * Quiet NaN handling 152 + */ 153 + 154 + /* 155 + * The case of both inputs quiet NaNs 156 + */ 157 + case CLPAIR(IEEE754_CLASS_QNAN, IEEE754_CLASS_QNAN): 158 + return x; 159 + 160 + /* 161 + * The cases of exactly one input quiet NaN (numbers 162 + * are here preferred as returned values to NaNs) 163 + */ 177 164 case CLPAIR(IEEE754_CLASS_ZERO, IEEE754_CLASS_QNAN): 178 165 case CLPAIR(IEEE754_CLASS_NORM, IEEE754_CLASS_QNAN): 179 166 case CLPAIR(IEEE754_CLASS_DNORM, IEEE754_CLASS_QNAN): 180 167 case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_QNAN): 181 168 return x; 182 169 183 - case CLPAIR(IEEE754_CLASS_QNAN, IEEE754_CLASS_QNAN): 184 170 case CLPAIR(IEEE754_CLASS_QNAN, IEEE754_CLASS_ZERO): 185 171 case CLPAIR(IEEE754_CLASS_QNAN, IEEE754_CLASS_NORM): 186 172 case CLPAIR(IEEE754_CLASS_QNAN, IEEE754_CLASS_DNORM): ··· 202 164 /* 203 165 * Infinity and zero handling 204 166 */ 167 + case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_INF): 168 + return ieee754sp_inf(xs & ys); 169 + 205 170 case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_ZERO): 206 171 case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_NORM): 207 172 case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_DNORM): ··· 212 171 case CLPAIR(IEEE754_CLASS_DNORM, IEEE754_CLASS_ZERO): 213 172 return x; 214 173 215 - case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_INF): 216 174 case CLPAIR(IEEE754_CLASS_NORM, IEEE754_CLASS_INF): 217 175 case CLPAIR(IEEE754_CLASS_DNORM, IEEE754_CLASS_INF): 218 176 case CLPAIR(IEEE754_CLASS_ZERO, IEEE754_CLASS_INF): ··· 220 180 return y; 221 181 222 182 case CLPAIR(IEEE754_CLASS_ZERO, IEEE754_CLASS_ZERO): 223 - if 
(xs == ys) 224 - return x; 225 - return ieee754sp_zero(1); 183 + return ieee754sp_zero(xs & ys); 226 184 227 185 case CLPAIR(IEEE754_CLASS_DNORM, IEEE754_CLASS_DNORM): 228 186 SPDNORMX; ··· 245 207 return y; 246 208 247 209 /* Compare mantissa */ 248 - if (xm <= ym) 210 + if (xm < ym) 249 211 return y; 250 - return x; 212 + else if (xm > ym) 213 + return x; 214 + else if (xs == 0) 215 + return x; 216 + return y; 251 217 }
+64 -22
arch/mips/math-emu/sp_fmin.c
···
  	case CLPAIR(IEEE754_CLASS_SNAN, IEEE754_CLASS_INF):
  		return ieee754sp_nanxcpt(x);
  
- 	/* numbers are preferred to NaNs */
+ 	/*
+ 	 * Quiet NaN handling
+ 	 */
+ 
+ 	/*
+ 	 * The case of both inputs quiet NaNs
+ 	 */
+ 	case CLPAIR(IEEE754_CLASS_QNAN, IEEE754_CLASS_QNAN):
+ 		return x;
+ 
+ 	/*
+ 	 * The cases of exactly one input quiet NaN (numbers
+ 	 * are here preferred as returned values to NaNs)
+ 	 */
  	case CLPAIR(IEEE754_CLASS_ZERO, IEEE754_CLASS_QNAN):
  	case CLPAIR(IEEE754_CLASS_NORM, IEEE754_CLASS_QNAN):
  	case CLPAIR(IEEE754_CLASS_DNORM, IEEE754_CLASS_QNAN):
  	case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_QNAN):
  		return x;
  
- 	case CLPAIR(IEEE754_CLASS_QNAN, IEEE754_CLASS_QNAN):
  	case CLPAIR(IEEE754_CLASS_QNAN, IEEE754_CLASS_ZERO):
  	case CLPAIR(IEEE754_CLASS_QNAN, IEEE754_CLASS_NORM):
  	case CLPAIR(IEEE754_CLASS_QNAN, IEEE754_CLASS_DNORM):
···
  		return ys ? y : x;
  
  	case CLPAIR(IEEE754_CLASS_ZERO, IEEE754_CLASS_ZERO):
- 		if (xs == ys)
- 			return x;
- 		return ieee754sp_zero(1);
+ 		return ieee754sp_zero(xs | ys);
  
  	case CLPAIR(IEEE754_CLASS_DNORM, IEEE754_CLASS_DNORM):
  		SPDNORMX;
···
  	else if (xs < ys)
  		return y;
  
- 	/* Compare exponent */
- 	if (xe > ye)
- 		return y;
- 	else if (xe < ye)
- 		return x;
+ 	/* Signs of inputs are the same, let's compare exponents */
+ 	if (xs == 0) {
+ 		/* Inputs are both positive */
+ 		if (xe > ye)
+ 			return y;
+ 		else if (xe < ye)
+ 			return x;
+ 	} else {
+ 		/* Inputs are both negative */
+ 		if (xe > ye)
+ 			return x;
+ 		else if (xe < ye)
+ 			return y;
+ 	}
  
- 	/* Compare mantissa */
+ 	/* Signs and exponents of inputs are equal, let's compare mantissas */
+ 	if (xs == 0) {
+ 		/* Inputs are both positive, with equal signs and exponents */
+ 		if (xm <= ym)
+ 			return x;
+ 		return y;
+ 	}
+ 	/* Inputs are both negative, with equal signs and exponents */
  	if (xm <= ym)
- 		return x;
- 	return y;
+ 		return y;
+ 	return x;
  }
  
  union ieee754sp ieee754sp_fmina(union ieee754sp x, union ieee754sp y)
···
  	case CLPAIR(IEEE754_CLASS_SNAN, IEEE754_CLASS_INF):
  		return ieee754sp_nanxcpt(x);
  
- 	/* numbers are preferred to NaNs */
+ 	/*
+ 	 * Quiet NaN handling
+ 	 */
+ 
+ 	/*
+ 	 * The case of both inputs quiet NaNs
+ 	 */
+ 	case CLPAIR(IEEE754_CLASS_QNAN, IEEE754_CLASS_QNAN):
+ 		return x;
+ 
+ 	/*
+ 	 * The cases of exactly one input quiet NaN (numbers
+ 	 * are here preferred as returned values to NaNs)
+ 	 */
  	case CLPAIR(IEEE754_CLASS_ZERO, IEEE754_CLASS_QNAN):
  	case CLPAIR(IEEE754_CLASS_NORM, IEEE754_CLASS_QNAN):
  	case CLPAIR(IEEE754_CLASS_DNORM, IEEE754_CLASS_QNAN):
  	case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_QNAN):
  		return x;
  
- 	case CLPAIR(IEEE754_CLASS_QNAN, IEEE754_CLASS_QNAN):
  	case CLPAIR(IEEE754_CLASS_QNAN, IEEE754_CLASS_ZERO):
  	case CLPAIR(IEEE754_CLASS_QNAN, IEEE754_CLASS_NORM):
  	case CLPAIR(IEEE754_CLASS_QNAN, IEEE754_CLASS_DNORM):
···
  	/*
  	 * Infinity and zero handling
  	 */
+ 	case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_INF):
+ 		return ieee754sp_inf(xs | ys);
+ 
  	case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_ZERO):
  	case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_NORM):
  	case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_DNORM):
  	case CLPAIR(IEEE754_CLASS_NORM, IEEE754_CLASS_ZERO):
  	case CLPAIR(IEEE754_CLASS_DNORM, IEEE754_CLASS_ZERO):
- 		return x;
+ 		return y;
  
- 	case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_INF):
  	case CLPAIR(IEEE754_CLASS_NORM, IEEE754_CLASS_INF):
  	case CLPAIR(IEEE754_CLASS_DNORM, IEEE754_CLASS_INF):
  	case CLPAIR(IEEE754_CLASS_ZERO, IEEE754_CLASS_INF):
  	case CLPAIR(IEEE754_CLASS_ZERO, IEEE754_CLASS_NORM):
  	case CLPAIR(IEEE754_CLASS_ZERO, IEEE754_CLASS_DNORM):
- 		return y;
+ 		return x;
  
  	case CLPAIR(IEEE754_CLASS_ZERO, IEEE754_CLASS_ZERO):
- 		if (xs == ys)
- 			return x;
- 		return ieee754sp_zero(1);
+ 		return ieee754sp_zero(xs | ys);
  
  	case CLPAIR(IEEE754_CLASS_DNORM, IEEE754_CLASS_DNORM):
  		SPDNORMX;
···
  		return x;
  
  	/* Compare mantissa */
- 	if (xm <= ym)
+ 	if (xm < ym)
+ 		return x;
+ 	else if (xm > ym)
+ 		return y;
+ 	else if (xs == 1)
  		return x;
  	return y;
  }
+108 -123
arch/mips/math-emu/sp_maddf.c
···
  
  #include "ieee754sp.h"
  
- enum maddf_flags {
- 	maddf_negate_product = 1 << 0,
- };
  
  static union ieee754sp _sp_maddf(union ieee754sp z, union ieee754sp x,
  	union ieee754sp y, enum maddf_flags flags)
···
  	int re;
  	int rs;
  	unsigned rm;
- 	unsigned short lxm;
- 	unsigned short hxm;
- 	unsigned short lym;
- 	unsigned short hym;
- 	unsigned lrm;
- 	unsigned hrm;
- 	unsigned t;
- 	unsigned at;
+ 	uint64_t rm64;
+ 	uint64_t zm64;
  	int s;
  
  	COMPXSP;
···
  
  	ieee754_clearcx();
  
- 	switch (zc) {
- 	case IEEE754_CLASS_SNAN:
- 		ieee754_setcx(IEEE754_INVALID_OPERATION);
+ 	/*
+ 	 * Handle the cases when at least one of x, y or z is a NaN.
+ 	 * Order of precedence is sNaN, qNaN and z, x, y.
+ 	 */
+ 	if (zc == IEEE754_CLASS_SNAN)
  		return ieee754sp_nanxcpt(z);
- 	case IEEE754_CLASS_DNORM:
- 		SPDNORMZ;
- 	/* QNAN and ZERO cases are handled separately below */
- 	}
- 
- 	switch (CLPAIR(xc, yc)) {
- 	case CLPAIR(IEEE754_CLASS_QNAN, IEEE754_CLASS_SNAN):
- 	case CLPAIR(IEEE754_CLASS_ZERO, IEEE754_CLASS_SNAN):
- 	case CLPAIR(IEEE754_CLASS_NORM, IEEE754_CLASS_SNAN):
- 	case CLPAIR(IEEE754_CLASS_DNORM, IEEE754_CLASS_SNAN):
- 	case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_SNAN):
- 		return ieee754sp_nanxcpt(y);
- 
- 	case CLPAIR(IEEE754_CLASS_SNAN, IEEE754_CLASS_SNAN):
- 	case CLPAIR(IEEE754_CLASS_SNAN, IEEE754_CLASS_QNAN):
- 	case CLPAIR(IEEE754_CLASS_SNAN, IEEE754_CLASS_ZERO):
- 	case CLPAIR(IEEE754_CLASS_SNAN, IEEE754_CLASS_NORM):
- 	case CLPAIR(IEEE754_CLASS_SNAN, IEEE754_CLASS_DNORM):
- 	case CLPAIR(IEEE754_CLASS_SNAN, IEEE754_CLASS_INF):
+ 	if (xc == IEEE754_CLASS_SNAN)
  		return ieee754sp_nanxcpt(x);
- 
- 	case CLPAIR(IEEE754_CLASS_ZERO, IEEE754_CLASS_QNAN):
- 	case CLPAIR(IEEE754_CLASS_NORM, IEEE754_CLASS_QNAN):
- 	case CLPAIR(IEEE754_CLASS_DNORM, IEEE754_CLASS_QNAN):
- 	case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_QNAN):
+ 	if (yc == IEEE754_CLASS_SNAN)
+ 		return ieee754sp_nanxcpt(y);
+ 	if (zc == IEEE754_CLASS_QNAN)
+ 		return z;
+ 	if (xc == IEEE754_CLASS_QNAN)
+ 		return x;
+ 	if (yc == IEEE754_CLASS_QNAN)
  		return y;
  
- 	case CLPAIR(IEEE754_CLASS_QNAN, IEEE754_CLASS_QNAN):
- 	case CLPAIR(IEEE754_CLASS_QNAN, IEEE754_CLASS_ZERO):
- 	case CLPAIR(IEEE754_CLASS_QNAN, IEEE754_CLASS_NORM):
- 	case CLPAIR(IEEE754_CLASS_QNAN, IEEE754_CLASS_DNORM):
- 	case CLPAIR(IEEE754_CLASS_QNAN, IEEE754_CLASS_INF):
- 		return x;
+ 	if (zc == IEEE754_CLASS_DNORM)
+ 		SPDNORMZ;
+ 	/* ZERO z cases are handled separately below */
+ 
+ 	switch (CLPAIR(xc, yc)) {
+ 
  
  	/*
  	 * Infinity handling
  	 */
  	case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_ZERO):
  	case CLPAIR(IEEE754_CLASS_ZERO, IEEE754_CLASS_INF):
- 		if (zc == IEEE754_CLASS_QNAN)
- 			return z;
  		ieee754_setcx(IEEE754_INVALID_OPERATION);
  		return ieee754sp_indef();
···
  	case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_NORM):
  	case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_DNORM):
  	case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_INF):
- 		if (zc == IEEE754_CLASS_QNAN)
- 			return z;
- 		return ieee754sp_inf(xs ^ ys);
+ 		if ((zc == IEEE754_CLASS_INF) &&
+ 		    ((!(flags & MADDF_NEGATE_PRODUCT) && (zs != (xs ^ ys))) ||
+ 		     ((flags & MADDF_NEGATE_PRODUCT) && (zs == (xs ^ ys))))) {
+ 			/*
+ 			 * Cases of addition of infinities with opposite signs
+ 			 * or subtraction of infinities with same signs.
+ 			 */
+ 			ieee754_setcx(IEEE754_INVALID_OPERATION);
+ 			return ieee754sp_indef();
+ 		}
+ 		/*
+ 		 * z is here either not an infinity, or an infinity having the
+ 		 * same sign as product (x*y) (in case of MADDF.D instruction)
+ 		 * or product -(x*y) (in MSUBF.D case). The result must be an
+ 		 * infinity, and its sign is determined only by the value of
+ 		 * (flags & MADDF_NEGATE_PRODUCT) and the signs of x and y.
+ 		 */
+ 		if (flags & MADDF_NEGATE_PRODUCT)
+ 			return ieee754sp_inf(1 ^ (xs ^ ys));
+ 		else
+ 			return ieee754sp_inf(xs ^ ys);
  
  	case CLPAIR(IEEE754_CLASS_ZERO, IEEE754_CLASS_ZERO):
  	case CLPAIR(IEEE754_CLASS_ZERO, IEEE754_CLASS_NORM):
···
  	case CLPAIR(IEEE754_CLASS_DNORM, IEEE754_CLASS_ZERO):
  		if (zc == IEEE754_CLASS_INF)
  			return ieee754sp_inf(zs);
- 		/* Multiplication is 0 so just return z */
+ 		if (zc == IEEE754_CLASS_ZERO) {
+ 			/* Handle cases +0 + (-0) and similar ones. */
+ 			if ((!(flags & MADDF_NEGATE_PRODUCT)
+ 					&& (zs == (xs ^ ys))) ||
+ 			    ((flags & MADDF_NEGATE_PRODUCT)
+ 					&& (zs != (xs ^ ys))))
+ 				/*
+ 				 * Cases of addition of zeros of equal signs
+ 				 * or subtraction of zeroes of opposite signs.
+ 				 * The sign of the resulting zero is in any
+ 				 * such case determined only by the sign of z.
+ 				 */
+ 				return z;
+ 
+ 			return ieee754sp_zero(ieee754_csr.rm == FPU_CSR_RD);
+ 		}
+ 		/* x*y is here 0, and z is not 0, so just return z */
  		return z;
  
  	case CLPAIR(IEEE754_CLASS_DNORM, IEEE754_CLASS_DNORM):
  		SPDNORMX;
  
  	case CLPAIR(IEEE754_CLASS_NORM, IEEE754_CLASS_DNORM):
- 		if (zc == IEEE754_CLASS_QNAN)
- 			return z;
- 		else if (zc == IEEE754_CLASS_INF)
+ 		if (zc == IEEE754_CLASS_INF)
  			return ieee754sp_inf(zs);
  		SPDNORMY;
  		break;
  
  	case CLPAIR(IEEE754_CLASS_DNORM, IEEE754_CLASS_NORM):
- 		if (zc == IEEE754_CLASS_QNAN)
- 			return z;
- 		else if (zc == IEEE754_CLASS_INF)
+ 		if (zc == IEEE754_CLASS_INF)
  			return ieee754sp_inf(zs);
  		SPDNORMX;
  		break;
  
  	case CLPAIR(IEEE754_CLASS_NORM, IEEE754_CLASS_NORM):
- 		if (zc == IEEE754_CLASS_QNAN)
- 			return z;
- 		else if (zc == IEEE754_CLASS_INF)
+ 		if (zc == IEEE754_CLASS_INF)
  			return ieee754sp_inf(zs);
  		/* fall through to real computations */
  	}
···
  
  	re = xe + ye;
  	rs = xs ^ ys;
- 	if (flags & maddf_negate_product)
+ 	if (flags & MADDF_NEGATE_PRODUCT)
  		rs ^= 1;
  
- 	/* shunt to top of word */
- 	xm <<= 32 - (SP_FBITS + 1);
- 	ym <<= 32 - (SP_FBITS + 1);
+ 	/* Multiple 24 bit xm and ym to give 48 bit results */
+ 	rm64 = (uint64_t)xm * ym;
  
- 	/*
- 	 * Multiply 32 bits xm, ym to give high 32 bits rm with stickness.
- 	 */
- 	lxm = xm & 0xffff;
- 	hxm = xm >> 16;
- 	lym = ym & 0xffff;
- 	hym = ym >> 16;
+ 	/* Shunt to top of word */
+ 	rm64 = rm64 << 16;
  
- 	lrm = lxm * lym;	/* 16 * 16 => 32 */
- 	hrm = hxm * hym;	/* 16 * 16 => 32 */
- 
- 	t = lxm * hym;		/* 16 * 16 => 32 */
- 	at = lrm + (t << 16);
- 	hrm += at < lrm;
- 	lrm = at;
- 	hrm = hrm + (t >> 16);
- 
- 	t = hxm * lym;		/* 16 * 16 => 32 */
- 	at = lrm + (t << 16);
- 	hrm += at < lrm;
- 	lrm = at;
- 	hrm = hrm + (t >> 16);
- 
- 	rm = hrm | (lrm != 0);
- 
- 	/*
- 	 * Sticky shift down to normal rounding precision.
- 	 */
- 	if ((int) rm < 0) {
- 		rm = (rm >> (32 - (SP_FBITS + 1 + 3))) |
- 		    ((rm << (SP_FBITS + 1 + 3)) != 0);
+ 	/* Put explicit bit at bit 62 if necessary */
+ 	if ((int64_t) rm64 < 0) {
+ 		rm64 = rm64 >> 1;
  		re++;
- 	} else {
- 		rm = (rm >> (32 - (SP_FBITS + 1 + 3 + 1))) |
- 		    ((rm << (SP_FBITS + 1 + 3 + 1)) != 0);
  	}
- 	assert(rm & (SP_HIDDEN_BIT << 3));
  
- 	if (zc == IEEE754_CLASS_ZERO)
+ 	assert(rm64 & (1 << 62));
+ 
+ 	if (zc == IEEE754_CLASS_ZERO) {
+ 		/*
+ 		 * Move explicit bit from bit 62 to bit 26 since the
+ 		 * ieee754sp_format code expects the mantissa to be
+ 		 * 27 bits wide (24 + 3 rounding bits).
+ 		 */
+ 		rm = XSPSRS64(rm64, (62 - 26));
  		return ieee754sp_format(rs, re, rm);
+ 	}
  
- 	/* And now the addition */
+ 	/* Move explicit bit from bit 23 to bit 62 */
+ 	zm64 = (uint64_t)zm << (62 - 23);
+ 	assert(zm64 & (1 << 62));
  
- 	assert(zm & SP_HIDDEN_BIT);
- 
- 	/*
- 	 * Provide guard,round and stick bit space.
- 	 */
- 	zm <<= 3;
- 
+ 	/* Make the exponents the same */
  	if (ze > re) {
  		/*
  		 * Have to shift r fraction right to align.
  		 */
  		s = ze - re;
- 		rm = XSPSRS(rm, s);
+ 		rm64 = XSPSRS64(rm64, s);
  		re += s;
  	} else if (re > ze) {
  		/*
  		 * Have to shift z fraction right to align.
  		 */
  		s = re - ze;
- 		zm = XSPSRS(zm, s);
+ 		zm64 = XSPSRS64(zm64, s);
  		ze += s;
  	}
  	assert(ze == re);
  	assert(ze <= SP_EMAX);
  
+ 	/* Do the addition */
  	if (zs == rs) {
  		/*
- 		 * Generate 28 bit result of adding two 27 bit numbers
- 		 * leaving result in zm, zs and ze.
+ 		 * Generate 64 bit result by adding two 63 bit numbers
+ 		 * leaving result in zm64, zs and ze.
  		 */
- 		zm = zm + rm;
- 
- 		if (zm >> (SP_FBITS + 1 + 3)) {	/* carry out */
- 			zm = XSPSRS1(zm);
+ 		zm64 = zm64 + rm64;
+ 		if ((int64_t)zm64 < 0) {	/* carry out */
+ 			zm64 = XSPSRS1(zm64);
  			ze++;
  		}
  	} else {
- 		if (zm >= rm) {
- 			zm = zm - rm;
+ 		if (zm64 >= rm64) {
+ 			zm64 = zm64 - rm64;
  		} else {
- 			zm = rm - zm;
+ 			zm64 = rm64 - zm64;
  			zs = rs;
  		}
- 		if (zm == 0)
+ 		if (zm64 == 0)
  			return ieee754sp_zero(ieee754_csr.rm == FPU_CSR_RD);
  
  		/*
- 		 * Normalize in extended single precision
+ 		 * Put explicit bit at bit 62 if necessary.
  		 */
- 		while ((zm >> (SP_MBITS + 3)) == 0) {
- 			zm <<= 1;
+ 		while ((zm64 >> 62) == 0) {
+ 			zm64 <<= 1;
  			ze--;
  		}
- 
  	}
+ 
+ 	/*
+ 	 * Move explicit bit from bit 62 to bit 26 since the
+ 	 * ieee754sp_format code expects the mantissa to be
+ 	 * 27 bits wide (24 + 3 rounding bits).
+ 	 */
+ 	zm = XSPSRS64(zm64, (62 - 26));
+ 
  	return ieee754sp_format(zs, ze, zm);
  }
  
···
  union ieee754sp ieee754sp_msubf(union ieee754sp z, union ieee754sp x,
  	union ieee754sp y)
  {
- 	return _sp_maddf(z, x, y, maddf_negate_product);
+ 	return _sp_maddf(z, x, y, MADDF_NEGATE_PRODUCT);
  }
+90
arch/mips/math-emu/sp_rint.c
···
+ /* IEEE754 floating point arithmetic
+  * single precision
+  */
+ /*
+  * MIPS floating point support
+  * Copyright (C) 1994-2000 Algorithmics Ltd.
+  * Copyright (C) 2017 Imagination Technologies, Ltd.
+  * Author: Aleksandar Markovic <aleksandar.markovic@imgtec.com>
+  *
+  * This program is free software; you can distribute it and/or modify it
+  * under the terms of the GNU General Public License (Version 2) as
+  * published by the Free Software Foundation.
+  *
+  * This program is distributed in the hope it will be useful, but WITHOUT
+  * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+  * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
+  * for more details.
+  *
+  * You should have received a copy of the GNU General Public License along
+  * with this program.
+  */
+ 
+ #include "ieee754sp.h"
+ 
+ union ieee754sp ieee754sp_rint(union ieee754sp x)
+ {
+ 	union ieee754sp ret;
+ 	u32 residue;
+ 	int sticky;
+ 	int round;
+ 	int odd;
+ 
+ 	COMPXDP;		/* <-- DP needed for 64-bit mantissa tmp */
+ 
+ 	ieee754_clearcx();
+ 
+ 	EXPLODEXSP;
+ 	FLUSHXSP;
+ 
+ 	if (xc == IEEE754_CLASS_SNAN)
+ 		return ieee754sp_nanxcpt(x);
+ 
+ 	if ((xc == IEEE754_CLASS_QNAN) ||
+ 	    (xc == IEEE754_CLASS_INF) ||
+ 	    (xc == IEEE754_CLASS_ZERO))
+ 		return x;
+ 
+ 	if (xe >= SP_FBITS)
+ 		return x;
+ 
+ 	if (xe < -1) {
+ 		residue = xm;
+ 		round = 0;
+ 		sticky = residue != 0;
+ 		xm = 0;
+ 	} else {
+ 		residue = xm << (xe + 1);
+ 		residue <<= 31 - SP_FBITS;
+ 		round = (residue >> 31) != 0;
+ 		sticky = (residue << 1) != 0;
+ 		xm >>= SP_FBITS - xe;
+ 	}
+ 
+ 	odd = (xm & 0x1) != 0x0;
+ 
+ 	switch (ieee754_csr.rm) {
+ 	case FPU_CSR_RN:	/* toward nearest */
+ 		if (round && (sticky || odd))
+ 			xm++;
+ 		break;
+ 	case FPU_CSR_RZ:	/* toward zero */
+ 		break;
+ 	case FPU_CSR_RU:	/* toward +infinity */
+ 		if ((round || sticky) && !xs)
+ 			xm++;
+ 		break;
+ 	case FPU_CSR_RD:	/* toward -infinity */
+ 		if ((round || sticky) && xs)
+ 			xm++;
+ 		break;
+ 	}
+ 
+ 	if (round || sticky)
+ 		ieee754_setcx(IEEE754_INEXACT);
+ 
+ 	ret = ieee754sp_flong(xm);
+ 	SPSIGN(ret) = xs;
+ 
+ 	return ret;
+ }
+1 -1
arch/mips/mm/c-r4k.c
···
  #include <asm/cacheflush.h>	/* for run_uncached() */
  #include <asm/traps.h>
  #include <asm/dma-coherence.h>
- #include <asm/mips-cm.h>
+ #include <asm/mips-cps.h>
  
  /*
   * Bits describing what cache ops an SMP callback function may perform.
+1 -1
arch/mips/mm/cache.c
···
  #include <asm/processor.h>
  #include <asm/cpu.h>
  #include <asm/cpu-features.h>
+ #include <asm/setup.h>
  
  /* Cache operations. */
  void (*flush_cache_all)(void);
···
  
  void (*__flush_kernel_vmap_range)(unsigned long vaddr, int size);
  EXPORT_SYMBOL_GPL(__flush_kernel_vmap_range);
- void (*__invalidate_kernel_vmap_range)(unsigned long vaddr, int size);
  
  /* MIPS specific cache operations */
  void (*flush_cache_sigtramp)(unsigned long addr);
+5 -41
arch/mips/mm/dma-default.c
···
  	return gfp | dma_flag;
  }
  
- static void *mips_dma_alloc_noncoherent(struct device *dev, size_t size,
- 	dma_addr_t * dma_handle, gfp_t gfp)
- {
- 	void *ret;
- 
- 	gfp = massage_gfp_flags(dev, gfp);
- 
- 	ret = (void *) __get_free_pages(gfp, get_order(size));
- 
- 	if (ret != NULL) {
- 		memset(ret, 0, size);
- 		*dma_handle = plat_map_dma_mem(dev, ret, size);
- 	}
- 
- 	return ret;
- }
- 
  static void *mips_dma_alloc_coherent(struct device *dev, size_t size,
  	dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs)
  {
  	void *ret;
  	struct page *page = NULL;
  	unsigned int count = PAGE_ALIGN(size) >> PAGE_SHIFT;
- 
- 	/*
- 	 * XXX: seems like the coherent and non-coherent implementations could
- 	 * be consolidated.
- 	 */
- 	if (attrs & DMA_ATTR_NON_CONSISTENT)
- 		return mips_dma_alloc_noncoherent(dev, size, dma_handle, gfp);
  
  	gfp = massage_gfp_flags(dev, gfp);
  
···
  	ret = page_address(page);
  	memset(ret, 0, size);
  	*dma_handle = plat_map_dma_mem(dev, ret, size);
- 	if (!plat_device_is_coherent(dev)) {
+ 	if (!(attrs & DMA_ATTR_NON_CONSISTENT) &&
+ 	    !plat_device_is_coherent(dev)) {
  		dma_cache_wback_inv((unsigned long) ret, size);
  		ret = UNCAC_ADDR(ret);
  	}
  
  	return ret;
- }
- 
- 
- static void mips_dma_free_noncoherent(struct device *dev, size_t size,
- 		void *vaddr, dma_addr_t dma_handle)
- {
- 	plat_unmap_dma_mem(dev, dma_handle, size, DMA_BIDIRECTIONAL);
- 	free_pages((unsigned long) vaddr, get_order(size));
  }
  
  static void mips_dma_free_coherent(struct device *dev, size_t size, void *vaddr,
···
  	unsigned int count = PAGE_ALIGN(size) >> PAGE_SHIFT;
  	struct page *page = NULL;
  
- 	if (attrs & DMA_ATTR_NON_CONSISTENT) {
- 		mips_dma_free_noncoherent(dev, size, vaddr, dma_handle);
- 		return;
- 	}
- 
  	plat_unmap_dma_mem(dev, dma_handle, size, DMA_BIDIRECTIONAL);
  
- 	if (!plat_device_is_coherent(dev))
+ 	if (!(attrs & DMA_ATTR_NON_CONSISTENT) && !plat_device_is_coherent(dev))
  		addr = CAC_ADDR(addr);
  
  	page = virt_to_page((void *) addr);
···
  	}
  }
  
- int mips_dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
+ static int mips_dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
  {
  	return 0;
  }
  
- int mips_dma_supported(struct device *dev, u64 mask)
+ static int mips_dma_supported(struct device *dev, u64 mask)
  {
  	return plat_dma_supported(dev, mask);
  }
+1
arch/mips/mm/init.c
···
  #include <linux/gfp.h>
  #include <linux/kcore.h>
  #include <linux/export.h>
+ #include <linux/initrd.h>
  
  #include <asm/asm-offsets.h>
  #include <asm/bootinfo.h>
+1
arch/mips/mm/mmap.c
···
   * written by Ralf Baechle <ralf@linux-mips.org>
   */
  #include <linux/compiler.h>
+ #include <linux/elf-randomize.h>
  #include <linux/errno.h>
  #include <linux/mm.h>
  #include <linux/mman.h>
+19 -28
arch/mips/mm/sc-mips.c
···
  #include <asm/pgtable.h>
  #include <asm/mmu_context.h>
  #include <asm/r4kcache.h>
- #include <asm/mips-cm.h>
+ #include <asm/mips-cps.h>
  
  /*
   * MIPS32/MIPS64 L2 cache handling
···
  	 * prefetching for both code & data, for all ports.
  	 */
  	pftctl = read_gcr_l2_pft_control();
- 	if (pftctl & CM_GCR_L2_PFT_CONTROL_NPFT_MSK) {
- 		pftctl &= ~CM_GCR_L2_PFT_CONTROL_PAGEMASK_MSK;
- 		pftctl |= PAGE_MASK & CM_GCR_L2_PFT_CONTROL_PAGEMASK_MSK;
- 		pftctl |= CM_GCR_L2_PFT_CONTROL_PFTEN_MSK;
+ 	if (pftctl & CM_GCR_L2_PFT_CONTROL_NPFT) {
+ 		pftctl &= ~CM_GCR_L2_PFT_CONTROL_PAGEMASK;
+ 		pftctl |= PAGE_MASK & CM_GCR_L2_PFT_CONTROL_PAGEMASK;
+ 		pftctl |= CM_GCR_L2_PFT_CONTROL_PFTEN;
  		write_gcr_l2_pft_control(pftctl);
  
- 		pftctl = read_gcr_l2_pft_control_b();
- 		pftctl |= CM_GCR_L2_PFT_CONTROL_B_PORTID_MSK;
- 		pftctl |= CM_GCR_L2_PFT_CONTROL_B_CEN_MSK;
- 		write_gcr_l2_pft_control_b(pftctl);
+ 		set_gcr_l2_pft_control_b(CM_GCR_L2_PFT_CONTROL_B_PORTID |
+ 					 CM_GCR_L2_PFT_CONTROL_B_CEN);
  	}
  }
  
  static void mips_sc_prefetch_disable(void)
  {
- 	unsigned long pftctl;
- 
  	if (mips_cm_revision() < CM_REV_CM2_5)
  		return;
  
- 	pftctl = read_gcr_l2_pft_control();
- 	pftctl &= ~CM_GCR_L2_PFT_CONTROL_PFTEN_MSK;
- 	write_gcr_l2_pft_control(pftctl);
- 
- 	pftctl = read_gcr_l2_pft_control_b();
- 	pftctl &= ~CM_GCR_L2_PFT_CONTROL_B_PORTID_MSK;
- 	pftctl &= ~CM_GCR_L2_PFT_CONTROL_B_CEN_MSK;
- 	write_gcr_l2_pft_control_b(pftctl);
+ 	clear_gcr_l2_pft_control(CM_GCR_L2_PFT_CONTROL_PFTEN);
+ 	clear_gcr_l2_pft_control_b(CM_GCR_L2_PFT_CONTROL_B_PORTID |
+ 				   CM_GCR_L2_PFT_CONTROL_B_CEN);
  }
  
  static bool mips_sc_prefetch_is_enabled(void)
···
  		return false;
  
  	pftctl = read_gcr_l2_pft_control();
- 	if (!(pftctl & CM_GCR_L2_PFT_CONTROL_NPFT_MSK))
+ 	if (!(pftctl & CM_GCR_L2_PFT_CONTROL_NPFT))
  		return false;
- 	return !!(pftctl & CM_GCR_L2_PFT_CONTROL_PFTEN_MSK);
+ 	return !!(pftctl & CM_GCR_L2_PFT_CONTROL_PFTEN);
  }
  
  static struct bcache_ops mips_sc_ops = {
···
  	unsigned long cfg = read_gcr_l2_config();
  	unsigned long sets, line_sz, assoc;
  
- 	if (cfg & CM_GCR_L2_CONFIG_BYPASS_MSK)
+ 	if (cfg & CM_GCR_L2_CONFIG_BYPASS)
  		return 0;
  
- 	sets = cfg & CM_GCR_L2_CONFIG_SET_SIZE_MSK;
- 	sets >>= CM_GCR_L2_CONFIG_SET_SIZE_SHF;
+ 	sets = cfg & CM_GCR_L2_CONFIG_SET_SIZE;
+ 	sets >>= __ffs(CM_GCR_L2_CONFIG_SET_SIZE);
  	if (sets)
  		c->scache.sets = 64 << sets;
  
- 	line_sz = cfg & CM_GCR_L2_CONFIG_LINE_SIZE_MSK;
- 	line_sz >>= CM_GCR_L2_CONFIG_LINE_SIZE_SHF;
+ 	line_sz = cfg & CM_GCR_L2_CONFIG_LINE_SIZE;
+ 	line_sz >>= __ffs(CM_GCR_L2_CONFIG_LINE_SIZE);
  	if (line_sz)
  		c->scache.linesz = 2 << line_sz;
  
- 	assoc = cfg & CM_GCR_L2_CONFIG_ASSOC_MSK;
- 	assoc >>= CM_GCR_L2_CONFIG_ASSOC_SHF;
+ 	assoc = cfg & CM_GCR_L2_CONFIG_ASSOC;
+ 	assoc >>= __ffs(CM_GCR_L2_CONFIG_ASSOC);
  	c->scache.ways = assoc + 1;
  	c->scache.waysize = c->scache.sets * c->scache.linesz;
  	c->scache.waybit = __ffs(c->scache.waysize);
+4 -3
arch/mips/mm/tlbex-fault.S
···
  
  	.macro tlb_do_page_fault, write
  	NESTED(tlb_do_page_fault_\write, PT_SIZE, sp)
- 	SAVE_ALL
+ 	.cfi_signal_frame
+ 	SAVE_ALL docfi=1
  	MFC0	a2, CP0_BADVADDR
  	KMODE
  	move	a0, sp
  	REG_S	a2, PT_BVADDR(sp)
  	li	a1, \write
- 	PTR_LA	ra, ret_from_exception
- 	j	do_page_fault
+ 	jal	do_page_fault
+ 	j	ret_from_exception
  	END(tlb_do_page_fault_\write)
  	.endm
-5
arch/mips/mm/tlbex.c
···
  #endif
  		break;
  
- 	case CPU_R6000:
- 	case CPU_R6000A:
- 		panic("No R6000 TLB refill handler yet");
- 		break;
- 
  	case CPU_R8000:
  		panic("No R8000 TLB refill handler yet");
  		break;
+2 -2
arch/mips/mti-malta/malta-dtshim.c
···
  #include <asm/fw/fw.h>
  #include <asm/mips-boards/generic.h>
  #include <asm/mips-boards/malta.h>
- #include <asm/mips-cm.h>
+ #include <asm/mips-cps.h>
  #include <asm/page.h>
  
  #define ROCIT_REG_BASE			0x1f403000
···
  
  	/* if we have a CM which reports a GIC is present, leave the DT alone */
  	err = mips_cm_probe();
- 	if (!err && (read_gcr_gic_status() & CM_GCR_GIC_STATUS_GICEX_MSK))
+ 	if (!err && (read_gcr_gic_status() & CM_GCR_GIC_STATUS_EX))
  		return;
  
  	if (malta_scon() == MIPS_REVISION_SCON_ROCIT) {
+1 -2
arch/mips/mti-malta/malta-init.c
···
  #include <asm/smp-ops.h>
  #include <asm/traps.h>
  #include <asm/fw/fw.h>
- #include <asm/mips-cm.h>
- #include <asm/mips-cpc.h>
+ #include <asm/mips-cps.h>
  #include <asm/mips-boards/generic.h>
  #include <asm/mips-boards/malta.h>
+2 -3
arch/mips/mti-malta/malta-int.c
···
  #include <linux/smp.h>
  #include <linux/interrupt.h>
  #include <linux/io.h>
- #include <linux/irqchip/mips-gic.h>
  #include <linux/of_irq.h>
  #include <linux/kernel_stat.h>
  #include <linux/kernel.h>
···
  #include <asm/i8259.h>
  #include <asm/irq_cpu.h>
  #include <asm/irq_regs.h>
- #include <asm/mips-cm.h>
  #include <asm/mips-boards/malta.h>
  #include <asm/mips-boards/maltaint.h>
+ #include <asm/mips-cps.h>
  #include <asm/gt64120.h>
  #include <asm/mips-boards/generic.h>
  #include <asm/mips-boards/msc01_pci.h>
···
  					    msc_nr_irqs);
  	}
  
- 	if (gic_present) {
+ 	if (mips_gic_present()) {
  		corehi_irq = MIPS_CPU_IRQ_BASE + MIPSCPU_INT_COREHI;
  	} else if (cpu_has_veic) {
  		set_vi_handler(MSC01E_INT_COREHI, corehi_irqdispatch);
+2 -2
arch/mips/mti-malta/malta-setup.c
···
  
  #include <asm/fw/fw.h>
  #include <asm/mach-malta/malta-dtshim.h>
- #include <asm/mips-cm.h>
+ #include <asm/mips-cps.h>
  #include <asm/mips-boards/generic.h>
  #include <asm/mips-boards/malta.h>
  #include <asm/mips-boards/maltaint.h>
···
  				BONITO_PCIMEMBASECFG_MEMBASE1_CACHED);
  			pr_info("Enabled Bonito IOBC coherency\n");
  		}
- 	} else if (mips_cm_numiocu() != 0) {
+ 	} else if (mips_cps_numiocu(0) != 0) {
  		/* Nothing special needs to be done to enable coherency */
  		pr_info("CMP IOCU detected\n");
  		cfg = __raw_readl((u32 *)CKSEG1ADDR(ROCIT_CONFIG_GEN0));
+12 -14
arch/mips/mti-malta/malta-time.c
···
  #include <linux/sched.h>
  #include <linux/spinlock.h>
  #include <linux/interrupt.h>
- #include <linux/irqchip/mips-gic.h>
  #include <linux/timex.h>
  #include <linux/mc146818rtc.h>
  
···
  #include <asm/time.h>
  #include <asm/mc146818-time.h>
  #include <asm/msc01_ic.h>
+ #include <asm/mips-cps.h>
  
  #include <asm/mips-boards/generic.h>
  #include <asm/mips-boards/maltaint.h>
···
  
  	local_irq_save(flags);
  
- 	if (gic_present)
- 		gic_start_count();
+ 	if (mips_gic_present())
+ 		clear_gic_config(GIC_CONFIG_COUNTSTOP);
  
  	/*
  	 * Read counters exactly on rising edge of update flag.
···
  	while (CMOS_READ(RTC_REG_A) & RTC_UIP);
  	while (!(CMOS_READ(RTC_REG_A) & RTC_UIP));
  	start = read_c0_count();
- 	if (gic_present)
- 		gicstart = gic_read_count();
+ 	if (mips_gic_present())
+ 		gicstart = read_gic_counter();
  
  	/* Wait for falling edge before reading RTC. */
  	while (CMOS_READ(RTC_REG_A) & RTC_UIP);
···
  	/* Read counters again exactly on rising edge of update flag. */
  	while (!(CMOS_READ(RTC_REG_A) & RTC_UIP));
  	count = read_c0_count();
- 	if (gic_present)
- 		giccount = gic_read_count();
+ 	if (mips_gic_present())
+ 		giccount = read_gic_counter();
  
  	/* Wait for falling edge before reading RTC again. */
  	while (CMOS_READ(RTC_REG_A) & RTC_UIP);
···
  	count /= secs;
  	mips_hpt_frequency = count;
  
- 	if (gic_present) {
+ 	if (mips_gic_present()) {
  		giccount = div_u64(giccount - gicstart, secs);
  		gic_frequency = giccount;
  	}
···
  
  	if (cpu_has_veic)
  		return -1;
- 	else if (gic_present)
+ 	else if (mips_gic_present())
  		return gic_get_c0_fdc_int();
  	else if (cp0_fdc_irq >= 0)
  		return MIPS_CPU_IRQ_BASE + cp0_fdc_irq;
···
  	if (cpu_has_veic) {
  		set_vi_handler(MSC01E_INT_PERFCTR, mips_perf_dispatch);
  		mips_cpu_perf_irq = MSC01E_INT_BASE + MSC01E_INT_PERFCTR;
- 	} else if (gic_present) {
+ 	} else if (mips_gic_present()) {
  		mips_cpu_perf_irq = gic_get_c0_perfcount_int();
  	} else if (cp0_perfcount_irq >= 0) {
  		mips_cpu_perf_irq = MIPS_CPU_IRQ_BASE + cp0_perfcount_irq;
···
  	if (cpu_has_veic) {
  		set_vi_handler(MSC01E_INT_CPUCTR, mips_timer_dispatch);
  		mips_cpu_timer_irq = MSC01E_INT_BASE + MSC01E_INT_CPUCTR;
- 	} else if (gic_present) {
+ 	} else if (mips_gic_present()) {
  		mips_cpu_timer_irq = gic_get_c0_compare_int();
  	} else {
  		mips_cpu_timer_irq = MIPS_CPU_IRQ_BASE + cp0_compare_irq;
···
  	setup_pit_timer();
  #endif
  
- #ifdef CONFIG_MIPS_GIC
- 	if (gic_present) {
+ 	if (mips_gic_present()) {
  		freq = freqround(gic_frequency, 5000);
  		printk("GIC frequency %d.%02d MHz\n", freq/1000000,
  		       (freq%1000000)*100/1000000);
···
  		timer_probe();
  #endif
  	}
- #endif
  }
+5 -3
arch/mips/netlogic/common/smp.c
···
  	int hwtid;
  
  	hwtid = hard_smp_processor_id();
- 	current_cpu_data.core = hwtid / NLM_THREADS_PER_CORE;
+ 	cpu_set_core(&current_cpu_data, hwtid / NLM_THREADS_PER_CORE);
  	current_cpu_data.package = nlm_nodeid();
  	nlm_percpu_init(hwtid);
  	nlm_smp_irq_init(hwtid);
···
  unsigned long nlm_next_sp;
  static cpumask_t phys_cpu_present_mask;
  
- void nlm_boot_secondary(int logical_cpu, struct task_struct *idle)
+ int nlm_boot_secondary(int logical_cpu, struct task_struct *idle)
  {
  	uint64_t picbase;
  	int hwtid;
···
  	/* barrier for sp/gp store above */
  	__sync();
  	nlm_pic_send_ipi(picbase, hwtid, 1, 1);	/* NMI */
+ 
+ 	return 0;
  }
  
  void __init nlm_smp_setup(void)
···
  	return 0;
  }
  
- struct plat_smp_ops nlm_smp_ops = {
+ const struct plat_smp_ops nlm_smp_ops = {
  	.send_ipi_single	= nlm_send_ipi_single,
  	.send_ipi_mask		= nlm_send_ipi_mask,
  	.init_secondary		= nlm_init_secondary,
+2 -2
arch/mips/oprofile/op_model_mipsxx.c
···
  #ifdef CONFIG_MIPS_MT_SMP
  static int cpu_has_mipsmt_pertccounters;
  #define WHAT		(MIPS_PERFCTRL_MT_EN_VPE | \
- 			 M_PERFCTL_VPEID(cpu_data[smp_processor_id()].vpe_id))
+ 			 M_PERFCTL_VPEID(cpu_vpe_id(&current_cpu_data)))
  #define vpe_id()	(cpu_has_mipsmt_pertccounters ? \
- 			 0 : cpu_data[smp_processor_id()].vpe_id)
+ 			 0 : cpu_vpe_id(&current_cpu_data))
  
  /*
   * The number of bits to shift to convert between counters per core and
+3 -2
arch/mips/paravirt/paravirt-smp.c
···
  	local_irq_enable();
  }
  
- static void paravirt_boot_secondary(int cpu, struct task_struct *idle)
+ static int paravirt_boot_secondary(int cpu, struct task_struct *idle)
  {
  	paravirt_smp_gp[cpu] = (unsigned long)task_thread_info(idle);
  	smp_wmb();
  	paravirt_smp_sp[cpu] = __KSTK_TOS(idle);
+ 	return 0;
  }
  
  static irqreturn_t paravirt_reched_interrupt(int irq, void *dev_id)
···
  	}
  }
  
- struct plat_smp_ops paravirt_smp_ops = {
+ const struct plat_smp_ops paravirt_smp_ops = {
  	.send_ipi_single	= paravirt_send_ipi_single,
  	.send_ipi_mask		= paravirt_send_ipi_mask,
  	.init_secondary		= paravirt_init_secondary,
+1 -1
arch/mips/paravirt/setup.c
···
  #include <asm/smp-ops.h>
  #include <asm/time.h>
  
- extern struct plat_smp_ops paravirt_smp_ops;
+ extern const struct plat_smp_ops paravirt_smp_ops;
  
  const char *get_system_type(void)
  {
+1 -1
arch/mips/pci/pci-legacy.c
···
  	struct of_pci_range range;
  	struct of_pci_range_parser parser;
  
- 	pr_info("PCI host bridge %s ranges:\n", node->full_name);
+ 	pr_info("PCI host bridge %pOF ranges:\n", node);
  	hose->of_node = node;
  
  	if (of_pci_range_parser_init(&parser, node))
+3 -3
arch/mips/pci/pci-malta.c
···
  #include <linux/init.h>
  
  #include <asm/gt64120.h>
- #include <asm/mips-cm.h>
+ #include <asm/mips-cps.h>
  #include <asm/mips-boards/generic.h>
  #include <asm/mips-boards/bonito64.h>
  #include <asm/mips-boards/msc01_pci.h>
···
  	msc_mem_resource.start = start & mask;
  	msc_mem_resource.end = (start & mask) | ~mask;
  	msc_controller.mem_offset = (start & mask) - (map & mask);
- 	if (mips_cm_numiocu()) {
+ 	if (mips_cps_numiocu(0)) {
  		write_gcr_reg0_base(start);
  		write_gcr_reg0_mask(mask |
  				    CM_GCR_REGn_MASK_CMTGT_IOCU0);
···
  	msc_io_resource.end = (map & mask) | ~mask;
  	msc_controller.io_offset = 0;
  	ioport_resource.end = ~mask;
- 	if (mips_cm_numiocu()) {
+ 	if (mips_cps_numiocu(0)) {
  		write_gcr_reg1_base(start);
  		write_gcr_reg1_mask(mask |
  				    CM_GCR_REGn_MASK_CMTGT_IOCU0);
+1 -1
arch/mips/pci/pci-mt7620.c
···
  					 IORESOURCE_MEM, 1);
  	u32 val = 0;
  
- 	rstpcie0 = devm_reset_control_get(&pdev->dev, "pcie0");
+ 	rstpcie0 = devm_reset_control_get_exclusive(&pdev->dev, "pcie0");
  	if (IS_ERR(rstpcie0))
  		return PTR_ERR(rstpcie0);
+5 -6
arch/mips/pci/pci-rt3883.c
···
  
  	irq = irq_of_parse_and_map(rpc->intc_of_node, 0);
  	if (irq == 0) {
- 		dev_err(dev, "%s has no IRQ",
- 			of_node_full_name(rpc->intc_of_node));
+ 		dev_err(dev, "%pOF has no IRQ", rpc->intc_of_node);
  		return -EINVAL;
  	}
  
···
  	}
  
  	if (!rpc->intc_of_node) {
- 		dev_err(dev, "%s has no %s child node",
- 			of_node_full_name(rpc->intc_of_node),
+ 		dev_err(dev, "%pOF has no %s child node",
+ 			rpc->intc_of_node,
  			"interrupt controller");
  		return -EINVAL;
  	}
···
  	}
  
  	if (!rpc->pci_controller.of_node) {
- 		dev_err(dev, "%s has no %s child node",
- 			of_node_full_name(rpc->intc_of_node),
+ 		dev_err(dev, "%pOF has no %s child node",
+ 			rpc->intc_of_node,
  			"PCI host bridge");
  		err = -EINVAL;
  		goto err_put_intc_node;
+1 -2
arch/mips/pistachio/init.c
··· 19 19 #include <asm/dma-coherence.h> 20 20 #include <asm/fw/fw.h> 21 21 #include <asm/mips-boards/generic.h> 22 - #include <asm/mips-cm.h> 23 - #include <asm/mips-cpc.h> 22 + #include <asm/mips-cps.h> 24 23 #include <asm/prom.h> 25 24 #include <asm/smp-ops.h> 26 25 #include <asm/traps.h>
-1
arch/mips/pistachio/irq.c
··· 10 10 11 11 #include <linux/init.h> 12 12 #include <linux/irqchip.h> 13 - #include <linux/irqchip/mips-gic.h> 14 13 #include <linux/kernel.h> 15 14 16 15 #include <asm/cpu-features.h>
+1 -1
arch/mips/pistachio/time.c
··· 12 12 #include <linux/clk-provider.h> 13 13 #include <linux/clocksource.h> 14 14 #include <linux/init.h> 15 - #include <linux/irqchip/mips-gic.h> 16 15 #include <linux/of.h> 17 16 17 + #include <asm/mips-cps.h> 18 18 #include <asm/time.h> 19 19 20 20 unsigned int get_c0_compare_int(void)
+10
arch/mips/ralink/Kconfig
··· 82 82 depends on SOC_MT7620 83 83 select BUILTIN_DTB 84 84 85 + config DTB_OMEGA2P 86 + bool "Onion Omega2+" 87 + depends on SOC_MT7620 88 + select BUILTIN_DTB 89 + 90 + config DTB_VOCORE2 91 + bool "VoCore2" 92 + depends on SOC_MT7620 93 + select BUILTIN_DTB 94 + 85 95 endchoice 86 96 87 97 endif
+3
arch/mips/ralink/clk.c
··· 53 53 54 54 unsigned long clk_get_rate(struct clk *clk) 55 55 { 56 + if (!clk) 57 + return 0; 58 + 56 59 return clk->rate; 57 60 } 58 61 EXPORT_SYMBOL_GPL(clk_get_rate);
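The ralink `clk_get_rate()` hunk above adds a NULL guard, matching the common clk API convention that a NULL clk is a valid "no clock" handle whose rate reads as 0. A minimal standalone sketch of that pattern (the `struct clk` layout here is a simplified assumption, not the real ralink one):

```c
#include <stddef.h>

/* Hypothetical stand-in for the ralink clk structure; the real layout
 * lives in arch/mips/ralink/clk.c. */
struct clk {
	unsigned long rate;
};

/* As in the patch above: treat a NULL clk as "no clock" and report a
 * rate of 0 rather than dereferencing the pointer. */
static unsigned long clk_get_rate(const struct clk *clk)
{
	if (!clk)
		return 0;

	return clk->rate;
}
```

Callers that may legitimately hold a NULL clock handle can then skip their own NULL checks.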
+1 -1
arch/mips/ralink/irq-gic.c
··· 11 11 12 12 #include <linux/of.h> 13 13 #include <linux/irqchip.h> 14 - #include <linux/irqchip/mips-gic.h> 14 + #include <asm/mips-cps.h> 15 15 16 16 int get_c0_perfcount_int(void) 17 17 {
+2 -3
arch/mips/ralink/mt7621.c
··· 12 12 13 13 #include <asm/mipsregs.h> 14 14 #include <asm/smp-ops.h> 15 - #include <asm/mips-cm.h> 16 - #include <asm/mips-cpc.h> 15 + #include <asm/mips-cps.h> 17 16 #include <asm/mach-ralink/ralink_regs.h> 18 17 #include <asm/mach-ralink/mt7621.h> 19 18 ··· 198 199 mips_cm_probe(); 199 200 mips_cpc_probe(); 200 201 201 - if (mips_cm_numiocu()) { 202 + if (mips_cps_numiocu(0)) { 202 203 /* 203 204 * mips_cm_probe() wipes out bootloader 204 205 * config for CM regions and we have to configure them
+3 -2
arch/mips/sgi-ip27/ip27-smp.c
··· 195 195 * set sp to the kernel stack of the newly created idle process, gp to the proc 196 196 * struct so that current_thread_info() will work. 197 197 */ 198 - static void ip27_boot_secondary(int cpu, struct task_struct *idle) 198 + static int ip27_boot_secondary(int cpu, struct task_struct *idle) 199 199 { 200 200 unsigned long gp = (unsigned long)task_thread_info(idle); 201 201 unsigned long sp = __KSTK_TOS(idle); ··· 203 203 LAUNCH_SLAVE(cputonasid(cpu), cputoslice(cpu), 204 204 (launch_proc_t)MAPPED_KERN_RW_TO_K0(smp_bootstrap), 205 205 0, (void *) sp, (void *) gp); 206 + return 0; 206 207 } 207 208 208 209 static void __init ip27_smp_setup(void) ··· 232 231 /* We already did everything necessary earlier */ 233 232 } 234 233 235 - struct plat_smp_ops ip27_smp_ops = { 234 + const struct plat_smp_ops ip27_smp_ops = { 236 235 .send_ipi_single = ip27_send_ipi_single, 237 236 .send_ipi_mask = ip27_send_ipi_mask, 238 237 .init_secondary = ip27_init_secondary,
+3 -2
arch/mips/sibyte/bcm1480/smp.c
··· 117 117 * Setup the PC, SP, and GP of a secondary processor and start it 118 118 * running! 119 119 */ 120 - static void bcm1480_boot_secondary(int cpu, struct task_struct *idle) 120 + static int bcm1480_boot_secondary(int cpu, struct task_struct *idle) 121 121 { 122 122 int retval; 123 123 ··· 126 126 (unsigned long)task_thread_info(idle), 0); 127 127 if (retval != 0) 128 128 printk("cfe_start_cpu(%i) returned %i\n" , cpu, retval); 129 + return retval; 129 130 } 130 131 131 132 /* ··· 158 157 { 159 158 } 160 159 161 - struct plat_smp_ops bcm1480_smp_ops = { 160 + const struct plat_smp_ops bcm1480_smp_ops = { 162 161 .send_ipi_single = bcm1480_send_ipi_single, 163 162 .send_ipi_mask = bcm1480_send_ipi_mask, 164 163 .init_secondary = bcm1480_init_secondary,
+2 -2
arch/mips/sibyte/common/cfe.c
··· 229 229 230 230 #endif 231 231 232 - extern struct plat_smp_ops sb_smp_ops; 233 - extern struct plat_smp_ops bcm1480_smp_ops; 232 + extern const struct plat_smp_ops sb_smp_ops; 233 + extern const struct plat_smp_ops bcm1480_smp_ops; 234 234 235 235 /* 236 236 * prom_init is called just after the cpu type is determined, from setup_arch()
+3 -2
arch/mips/sibyte/sb1250/smp.c
··· 106 106 * Setup the PC, SP, and GP of a secondary processor and start it 107 107 * running! 108 108 */ 109 - static void sb1250_boot_secondary(int cpu, struct task_struct *idle) 109 + static int sb1250_boot_secondary(int cpu, struct task_struct *idle) 110 110 { 111 111 int retval; 112 112 ··· 115 115 (unsigned long)task_thread_info(idle), 0); 116 116 if (retval != 0) 117 117 printk("cfe_start_cpu(%i) returned %i\n" , cpu, retval); 118 + return retval; 118 119 } 119 120 120 121 /* ··· 147 146 { 148 147 } 149 148 150 - struct plat_smp_ops sb_smp_ops = { 149 + const struct plat_smp_ops sb_smp_ops = { 151 150 .send_ipi_single = sb1250_send_ipi_single, 152 151 .send_ipi_mask = sb1250_send_ipi_mask, 153 152 .init_secondary = sb1250_init_secondary,
+90
arch/mips/tools/generic-board-config.sh
··· 1 + #!/bin/sh 2 + # 3 + # Copyright (C) 2017 Imagination Technologies 4 + # Author: Paul Burton <paul.burton@imgtec.com> 5 + # 6 + # This program is free software; you can redistribute it and/or modify it 7 + # under the terms of the GNU General Public License as published by the 8 + # Free Software Foundation; either version 2 of the License, or (at your 9 + # option) any later version. 10 + # 11 + # This script merges configuration fragments for boards supported by the 12 + # generic MIPS kernel. It checks each for requirements specified using 13 + # formatted comments, and then calls merge_config.sh to merge those 14 + # fragments which have no unmet requirements. 15 + # 16 + # An example of requirements in your board config fragment might be: 17 + # 18 + # # require CONFIG_CPU_MIPS32_R2=y 19 + # # require CONFIG_CPU_LITTLE_ENDIAN=y 20 + # 21 + # This would mean that your board is only included in kernels which are 22 + # configured for little endian MIPS32r2 CPUs, and not for example in kernels 23 + # configured for 64 bit or big endian systems. 24 + # 25 + 26 + srctree="$1" 27 + objtree="$2" 28 + ref_cfg="$3" 29 + cfg="$4" 30 + boards_origin="$5" 31 + shift 5 32 + 33 + cd "${srctree}" 34 + 35 + # Only print Skipping... lines if the user explicitly specified BOARDS=. In the 36 + # general case it only serves to obscure the useful output about what actually 37 + # was included. 38 + case ${boards_origin} in 39 + "command line") 40 + print_skipped=1 41 + ;; 42 + environment*) 43 + print_skipped=1 44 + ;; 45 + *) 46 + print_skipped=0 47 + ;; 48 + esac 49 + 50 + for board in $@; do 51 + board_cfg="arch/mips/configs/generic/board-${board}.config" 52 + if [ ! -f "${board_cfg}" ]; then 53 + echo "WARNING: Board config '${board_cfg}' not found" 54 + continue 55 + fi 56 + 57 + # For each line beginning with # require, cut out the field following 58 + # it & search for that in the reference config file.
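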
If the requirement 59 + # is not found then the subshell will exit with code 1, and we'll 60 + # continue on to the next board. 61 + grep -E '^# require ' "${board_cfg}" | \ 62 + cut -d' ' -f 3- | \ 63 + while read req; do 64 + case ${req} in 65 + *=y) 66 + # If we require something =y then we check that a line 67 + # containing it is present in the reference config. 68 + grep -Eq "^${req}\$" "${ref_cfg}" && continue 69 + ;; 70 + *=n) 71 + # If we require something =n then we just invert that 72 + # check, considering the requirement met if there isn't 73 + # a line containing the value =y in the reference 74 + # config. 75 + grep -Eq "^${req/%=n/=y}\$" "${ref_cfg}" || continue 76 + ;; 77 + *) 78 + echo "WARNING: Unhandled requirement '${req}'" 79 + ;; 80 + esac 81 + 82 + [ ${print_skipped} -eq 1 ] && echo "Skipping ${board_cfg}" 83 + exit 1 84 + done || continue 85 + 86 + # Merge this board config fragment into our final config file 87 + ./scripts/kconfig/merge_config.sh \ 88 + -m -O ${objtree} ${cfg} ${board_cfg} \ 89 + | grep -Ev '^(#|Using)' 90 + done
+3 -5
arch/mips/vdso/gettimeofday.c
··· 11 11 #include "vdso.h" 12 12 13 13 #include <linux/compiler.h> 14 - #include <linux/irqchip/mips-gic.h> 15 14 #include <linux/time.h> 16 15 17 16 #include <asm/clocksource.h> 18 17 #include <asm/io.h> 19 - #include <asm/mips-cm.h> 20 18 #include <asm/unistd.h> 21 19 #include <asm/vdso.h> 22 20 ··· 124 126 u32 hi, hi2, lo; 125 127 126 128 do { 127 - hi = __raw_readl(gic + GIC_UMV_SH_COUNTER_63_32_OFS); 128 - lo = __raw_readl(gic + GIC_UMV_SH_COUNTER_31_00_OFS); 129 - hi2 = __raw_readl(gic + GIC_UMV_SH_COUNTER_63_32_OFS); 129 + hi = __raw_readl(gic + sizeof(lo)); 130 + lo = __raw_readl(gic); 131 + hi2 = __raw_readl(gic + sizeof(lo)); 130 132 } while (hi2 != hi); 131 133 132 134 return (((u64)hi) << 32) + lo;
-10
arch/mips/vdso/sigreturn.S
··· 19 19 .cfi_sections .debug_frame 20 20 21 21 LEAF(__vdso_rt_sigreturn) 22 - .cfi_startproc 23 - .frame sp, 0, ra 24 - .mask 0x00000000, 0 25 - .fmask 0x00000000, 0 26 22 .cfi_signal_frame 27 23 28 24 li v0, __NR_rt_sigreturn 29 25 syscall 30 26 31 - .cfi_endproc 32 27 END(__vdso_rt_sigreturn) 33 28 34 29 #if _MIPS_SIM == _MIPS_SIM_ABI32 35 30 36 31 LEAF(__vdso_sigreturn) 37 - .cfi_startproc 38 - .frame sp, 0, ra 39 - .mask 0x00000000, 0 40 - .fmask 0x00000000, 0 41 32 .cfi_signal_frame 42 33 43 34 li v0, __NR_sigreturn 44 35 syscall 45 36 46 - .cfi_endproc 47 37 END(__vdso_sigreturn) 48 38 49 39 #endif
+31 -6
drivers/clocksource/mips-gic-timer.c
··· 10 10 #include <linux/cpu.h> 11 11 #include <linux/init.h> 12 12 #include <linux/interrupt.h> 13 - #include <linux/irqchip/mips-gic.h> 14 13 #include <linux/notifier.h> 15 14 #include <linux/of_irq.h> 16 15 #include <linux/percpu.h> 17 16 #include <linux/smp.h> 18 17 #include <linux/time.h> 18 + #include <asm/mips-cps.h> 19 19 20 20 static DEFINE_PER_CPU(struct clock_event_device, gic_clockevent_device); 21 21 static int gic_timer_irq; 22 22 static unsigned int gic_frequency; 23 23 24 + static u64 notrace gic_read_count(void) 25 + { 26 + unsigned int hi, hi2, lo; 27 + 28 + if (mips_cm_is64) 29 + return read_gic_counter(); 30 + 31 + do { 32 + hi = read_gic_counter_32h(); 33 + lo = read_gic_counter_32l(); 34 + hi2 = read_gic_counter_32h(); 35 + } while (hi2 != hi); 36 + 37 + return (((u64) hi) << 32) + lo; 38 + } 39 + 24 40 static int gic_next_event(unsigned long delta, struct clock_event_device *evt) 25 41 { 42 + unsigned long flags; 26 43 u64 cnt; 27 44 int res; 28 45 29 46 cnt = gic_read_count(); 30 47 cnt += (u64)delta; 31 - gic_write_cpu_compare(cnt, cpumask_first(evt->cpumask)); 48 + local_irq_save(flags); 49 + write_gic_vl_other(mips_cm_vp_id(cpumask_first(evt->cpumask))); 50 + write_gic_vo_compare(cnt); 51 + local_irq_restore(flags); 32 52 res = ((int)(gic_read_count() - cnt) >= 0) ? -ETIME : 0; 33 53 return res; 34 54 } ··· 57 37 { 58 38 struct clock_event_device *cd = dev_id; 59 39 60 - gic_write_compare(gic_read_compare()); 40 + write_gic_vl_compare(read_gic_vl_compare()); 61 41 cd->event_handler(cd); 62 42 return IRQ_HANDLED; 63 43 } ··· 159 139 160 140 static int __init __gic_clocksource_init(void) 161 141 { 142 + unsigned int count_width; 162 143 int ret; 163 144 164 145 /* Set clocksource mask.
*/ 165 - gic_clocksource.mask = CLOCKSOURCE_MASK(gic_get_count_width()); 146 + count_width = read_gic_config() & GIC_CONFIG_COUNTBITS; 147 + count_width >>= __fls(GIC_CONFIG_COUNTBITS); 148 + count_width *= 4; 149 + count_width += 32; 150 + gic_clocksource.mask = CLOCKSOURCE_MASK(count_width); 166 151 167 152 /* Calculate a somewhat reasonable rating value. */ 168 153 gic_clocksource.rating = 200 + gic_frequency / 10000000; ··· 184 159 struct clk *clk; 185 160 int ret; 186 161 187 - if (!gic_present || !node->parent || 162 + if (!mips_gic_present() || !node->parent || 188 163 !of_device_is_compatible(node->parent, "mti,gic")) { 189 164 pr_warn("No DT definition for the mips gic driver\n"); 190 165 return -ENXIO; ··· 222 197 } 223 198 224 199 /* And finally start the counter */ 225 - gic_start_count(); 200 + clear_gic_config(GIC_CONFIG_COUNTSTOP); 226 201 227 202 return 0; 228 203 }
+1 -1
drivers/cpuidle/cpuidle-cps.c
··· 37 37 * TODO: don't treat core 0 specially, just prevent the final core 38 38 * TODO: remap interrupt affinity temporarily 39 39 */ 40 - if (!cpu_data[dev->cpu].core && (index > STATE_NC_WAIT)) 40 + if (cpus_are_siblings(0, dev->cpu) && (index > STATE_NC_WAIT)) 41 41 index = STATE_NC_WAIT; 42 42 43 43 /* Select the appropriate cps_pm_state */
+1 -1
drivers/irqchip/irq-mips-cpu.c
··· 101 101 local_irq_save(flags); 102 102 103 103 /* We can only send IPIs to VPEs within the local core */ 104 - WARN_ON(cpu_data[cpu].core != current_cpu_data.core); 104 + WARN_ON(!cpus_are_siblings(smp_processor_id(), cpu)); 105 105 106 106 vpflags = dvpe(); 107 107 settc(cpu_vpe_id(&cpu_data[cpu]));
+181 -435
drivers/irqchip/irq-mips-gic.c
··· 12 12 #include <linux/interrupt.h> 13 13 #include <linux/irq.h> 14 14 #include <linux/irqchip.h> 15 - #include <linux/irqchip/mips-gic.h> 16 15 #include <linux/of_address.h> 16 + #include <linux/percpu.h> 17 17 #include <linux/sched.h> 18 18 #include <linux/smp.h> 19 19 20 - #include <asm/mips-cm.h> 20 + #include <asm/mips-cps.h> 21 21 #include <asm/setup.h> 22 22 #include <asm/traps.h> 23 23 24 24 #include <dt-bindings/interrupt-controller/mips-gic.h> 25 25 26 - unsigned int gic_present; 26 + #define GIC_MAX_INTRS 256 27 + #define GIC_MAX_LONGS BITS_TO_LONGS(GIC_MAX_INTRS) 27 28 28 - struct gic_pcpu_mask { 29 - DECLARE_BITMAP(pcpu_mask, GIC_MAX_INTRS); 30 - }; 29 + /* Add 2 to convert GIC CPU pin to core interrupt */ 30 + #define GIC_CPU_PIN_OFFSET 2 31 31 32 - static unsigned long __gic_base_addr; 32 + /* Mapped interrupt to pin X, then GIC will generate the vector (X+1). */ 33 34 34 - static void __iomem *gic_base; 35 - static struct gic_pcpu_mask pcpu_masks[NR_CPUS]; 35 + /* Convert between local/shared IRQ number and GIC HW IRQ number.
*/ 36 + #define GIC_LOCAL_HWIRQ_BASE 0 37 + #define GIC_LOCAL_TO_HWIRQ(x) (GIC_LOCAL_HWIRQ_BASE + (x)) 38 + #define GIC_HWIRQ_TO_LOCAL(x) ((x) - GIC_LOCAL_HWIRQ_BASE) 39 + #define GIC_SHARED_HWIRQ_BASE GIC_NUM_LOCAL_INTRS 40 + #define GIC_SHARED_TO_HWIRQ(x) (GIC_SHARED_HWIRQ_BASE + (x)) 41 + #define GIC_HWIRQ_TO_SHARED(x) ((x) - GIC_SHARED_HWIRQ_BASE) 42 + 43 + void __iomem *mips_gic_base; 44 + 45 + DEFINE_PER_CPU_READ_MOSTLY(unsigned long[GIC_MAX_LONGS], pcpu_masks); 46 + 36 47 static DEFINE_SPINLOCK(gic_lock); 37 48 static struct irq_domain *gic_irq_domain; 38 49 static struct irq_domain *gic_ipi_domain; ··· 55 44 DECLARE_BITMAP(ipi_resrv, GIC_MAX_INTRS); 56 45 DECLARE_BITMAP(ipi_available, GIC_MAX_INTRS); 57 46 58 - static void __gic_irq_dispatch(void); 59 - 60 - static inline u32 gic_read32(unsigned int reg) 47 + static void gic_clear_pcpu_masks(unsigned int intr) 61 48 { 62 - return __raw_readl(gic_base + reg); 63 - } 49 + unsigned int i; 64 50 65 - static inline u64 gic_read64(unsigned int reg) 66 - { 67 - return __raw_readq(gic_base + reg); 68 - } 69 - 70 - static inline unsigned long gic_read(unsigned int reg) 71 - { 72 - if (!mips_cm_is64) 73 - return gic_read32(reg); 74 - else 75 - return gic_read64(reg); 76 - } 77 - 78 - static inline void gic_write32(unsigned int reg, u32 val) 79 - { 80 - return __raw_writel(val, gic_base + reg); 81 - } 82 - 83 - static inline void gic_write64(unsigned int reg, u64 val) 84 - { 85 - return __raw_writeq(val, gic_base + reg); 86 - } 87 - 88 - static inline void gic_write(unsigned int reg, unsigned long val) 89 - { 90 - if (!mips_cm_is64) 91 - return gic_write32(reg, (u32)val); 92 - else 93 - return gic_write64(reg, (u64)val); 94 - } 95 - 96 - static inline void gic_update_bits(unsigned int reg, unsigned long mask, 97 - unsigned long val) 98 - { 99 - unsigned long regval; 100 - 101 - regval = gic_read(reg); 102 - regval &= ~mask; 103 - regval |= val; 104 - gic_write(reg, regval); 105 - } 106 - 107 - static inline void
gic_reset_mask(unsigned int intr) 108 - { 109 - gic_write(GIC_REG(SHARED, GIC_SH_RMASK) + GIC_INTR_OFS(intr), 110 - 1ul << GIC_INTR_BIT(intr)); 111 - } 112 - 113 - static inline void gic_set_mask(unsigned int intr) 114 - { 115 - gic_write(GIC_REG(SHARED, GIC_SH_SMASK) + GIC_INTR_OFS(intr), 116 - 1ul << GIC_INTR_BIT(intr)); 117 - } 118 - 119 - static inline void gic_set_polarity(unsigned int intr, unsigned int pol) 120 - { 121 - gic_update_bits(GIC_REG(SHARED, GIC_SH_SET_POLARITY) + 122 - GIC_INTR_OFS(intr), 1ul << GIC_INTR_BIT(intr), 123 - (unsigned long)pol << GIC_INTR_BIT(intr)); 124 - } 125 - 126 - static inline void gic_set_trigger(unsigned int intr, unsigned int trig) 127 - { 128 - gic_update_bits(GIC_REG(SHARED, GIC_SH_SET_TRIGGER) + 129 - GIC_INTR_OFS(intr), 1ul << GIC_INTR_BIT(intr), 130 - (unsigned long)trig << GIC_INTR_BIT(intr)); 131 - } 132 - 133 - static inline void gic_set_dual_edge(unsigned int intr, unsigned int dual) 134 - { 135 - gic_update_bits(GIC_REG(SHARED, GIC_SH_SET_DUAL) + GIC_INTR_OFS(intr), 136 - 1ul << GIC_INTR_BIT(intr), 137 - (unsigned long)dual << GIC_INTR_BIT(intr)); 138 - } 139 - 140 - static inline void gic_map_to_pin(unsigned int intr, unsigned int pin) 141 - { 142 - gic_write32(GIC_REG(SHARED, GIC_SH_INTR_MAP_TO_PIN_BASE) + 143 - GIC_SH_MAP_TO_PIN(intr), GIC_MAP_TO_PIN_MSK | pin); 144 - } 145 - 146 - static inline void gic_map_to_vpe(unsigned int intr, unsigned int vpe) 147 - { 148 - gic_write(GIC_REG(SHARED, GIC_SH_INTR_MAP_TO_VPE_BASE) + 149 - GIC_SH_MAP_TO_VPE_REG_OFF(intr, vpe), 150 - GIC_SH_MAP_TO_VPE_REG_BIT(vpe)); 151 - } 152 - 153 - #ifdef CONFIG_CLKSRC_MIPS_GIC 154 - u64 notrace gic_read_count(void) 155 - { 156 - unsigned int hi, hi2, lo; 157 - 158 - if (mips_cm_is64) 159 - return (u64)gic_read(GIC_REG(SHARED, GIC_SH_COUNTER)); 160 - 161 - do { 162 - hi = gic_read32(GIC_REG(SHARED, GIC_SH_COUNTER_63_32)); 163 - lo = gic_read32(GIC_REG(SHARED, GIC_SH_COUNTER_31_00)); 164 - hi2 = gic_read32(GIC_REG(SHARED,
GIC_SH_COUNTER_63_32)); 165 - } while (hi2 != hi); 166 - 167 - return (((u64) hi) << 32) + lo; 168 - } 169 - 170 - unsigned int gic_get_count_width(void) 171 - { 172 - unsigned int bits, config; 173 - 174 - config = gic_read(GIC_REG(SHARED, GIC_SH_CONFIG)); 175 - bits = 32 + 4 * ((config & GIC_SH_CONFIG_COUNTBITS_MSK) >> 176 - GIC_SH_CONFIG_COUNTBITS_SHF); 177 - 178 - return bits; 179 - } 180 - 181 - void notrace gic_write_compare(u64 cnt) 182 - { 183 - if (mips_cm_is64) { 184 - gic_write(GIC_REG(VPE_LOCAL, GIC_VPE_COMPARE), cnt); 185 - } else { 186 - gic_write32(GIC_REG(VPE_LOCAL, GIC_VPE_COMPARE_HI), 187 - (int)(cnt >> 32)); 188 - gic_write32(GIC_REG(VPE_LOCAL, GIC_VPE_COMPARE_LO), 189 - (int)(cnt & 0xffffffff)); 190 - } 191 - } 192 - 193 - void notrace gic_write_cpu_compare(u64 cnt, int cpu) 194 - { 195 - unsigned long flags; 196 - 197 - local_irq_save(flags); 198 - 199 - gic_write(GIC_REG(VPE_LOCAL, GIC_VPE_OTHER_ADDR), mips_cm_vp_id(cpu)); 200 - 201 - if (mips_cm_is64) { 202 - gic_write(GIC_REG(VPE_OTHER, GIC_VPE_COMPARE), cnt); 203 - } else { 204 - gic_write32(GIC_REG(VPE_OTHER, GIC_VPE_COMPARE_HI), 205 - (int)(cnt >> 32)); 206 - gic_write32(GIC_REG(VPE_OTHER, GIC_VPE_COMPARE_LO), 207 - (int)(cnt & 0xffffffff)); 208 - } 209 - 210 - local_irq_restore(flags); 211 - } 212 - 213 - u64 gic_read_compare(void) 214 - { 215 - unsigned int hi, lo; 216 - 217 - if (mips_cm_is64) 218 - return (u64)gic_read(GIC_REG(VPE_LOCAL, GIC_VPE_COMPARE)); 219 - 220 - hi = gic_read32(GIC_REG(VPE_LOCAL, GIC_VPE_COMPARE_HI)); 221 - lo = gic_read32(GIC_REG(VPE_LOCAL, GIC_VPE_COMPARE_LO)); 222 - 223 - return (((u64) hi) << 32) + lo; 224 - } 225 - 226 - void gic_start_count(void) 227 - { 228 - u32 gicconfig; 229 - 230 - /* Start the counter */ 231 - gicconfig = gic_read(GIC_REG(SHARED, GIC_SH_CONFIG)); 232 - gicconfig &= ~(1 << GIC_SH_CONFIG_COUNTSTOP_SHF); 233 - gic_write(GIC_REG(SHARED, GIC_SH_CONFIG), gicconfig); 234 - } 235 - 236 - void gic_stop_count(void) 237 - { 238 - u32 gicconfig;
239 - 240 - /* Stop the counter */ 241 - gicconfig = gic_read(GIC_REG(SHARED, GIC_SH_CONFIG)); 242 - gicconfig |= 1 << GIC_SH_CONFIG_COUNTSTOP_SHF; 243 - gic_write(GIC_REG(SHARED, GIC_SH_CONFIG), gicconfig); 244 - } 245 - 246 - #endif 247 - 248 - unsigned gic_read_local_vp_id(void) 249 - { 250 - unsigned long ident; 251 - 252 - ident = gic_read(GIC_REG(VPE_LOCAL, GIC_VP_IDENT)); 253 - return ident & GIC_VP_IDENT_VCNUM_MSK; 51 + /* Clear the interrupt's bit in all pcpu_masks */ 52 + for_each_possible_cpu(i) 53 + clear_bit(intr, per_cpu_ptr(pcpu_masks, i)); 254 54 } 255 55 256 56 static bool gic_local_irq_is_routable(int intr) ··· 72 250 if (cpu_has_veic) 73 251 return true; 74 252 75 - vpe_ctl = gic_read32(GIC_REG(VPE_LOCAL, GIC_VPE_CTL)); 253 + vpe_ctl = read_gic_vl_ctl(); 76 254 switch (intr) { 77 255 case GIC_LOCAL_INT_TIMER: 78 - return vpe_ctl & GIC_VPE_CTL_TIMER_RTBL_MSK; 256 + return vpe_ctl & GIC_VX_CTL_TIMER_ROUTABLE; 79 257 case GIC_LOCAL_INT_PERFCTR: 80 - return vpe_ctl & GIC_VPE_CTL_PERFCNT_RTBL_MSK; 258 + return vpe_ctl & GIC_VX_CTL_PERFCNT_ROUTABLE; 81 259 case GIC_LOCAL_INT_FDC: 82 - return vpe_ctl & GIC_VPE_CTL_FDC_RTBL_MSK; 260 + return vpe_ctl & GIC_VX_CTL_FDC_ROUTABLE; 83 261 case GIC_LOCAL_INT_SWINT0: 84 262 case GIC_LOCAL_INT_SWINT1: 85 - return vpe_ctl & GIC_VPE_CTL_SWINT_RTBL_MSK; 263 + return vpe_ctl & GIC_VX_CTL_SWINT_ROUTABLE; 86 264 default: 87 265 return true; 88 266 } ··· 94 272 irq -= GIC_PIN_TO_VEC_OFFSET; 95 273 96 274 /* Set irq to use shadow set */ 97 - gic_write(GIC_REG(VPE_LOCAL, GIC_VPE_EIC_SHADOW_SET_BASE) + 98 - GIC_VPE_EIC_SS(irq), set); 275 + write_gic_vl_eic_shadow_set(irq, set); 99 276 } 100 277 101 278 static void gic_send_ipi(struct irq_data *d, unsigned int cpu) 102 279 { 103 280 irq_hw_number_t hwirq = GIC_HWIRQ_TO_SHARED(irqd_to_hwirq(d)); 104 281 105 - gic_write(GIC_REG(SHARED, GIC_SH_WEDGE), GIC_SH_WEDGE_SET(hwirq)); 282 + write_gic_wedge(GIC_WEDGE_RW | hwirq); 106 283 } 107 284 108 285 int
gic_get_c0_compare_int(void) ··· 137 316 GIC_LOCAL_TO_HWIRQ(GIC_LOCAL_INT_FDC)); 138 317 } 139 318 140 - int gic_get_usm_range(struct resource *gic_usm_res) 141 - { 142 - if (!gic_present) 143 - return -1; 144 - 145 - gic_usm_res->start = __gic_base_addr + USM_VISIBLE_SECTION_OFS; 146 - gic_usm_res->end = gic_usm_res->start + (USM_VISIBLE_SECTION_SIZE - 1); 147 - 148 - return 0; 149 - } 150 - 151 319 static void gic_handle_shared_int(bool chained) 152 320 { 153 - unsigned int i, intr, virq, gic_reg_step = mips_cm_is64 ? 8 : 4; 321 + unsigned int intr, virq; 154 322 unsigned long *pcpu_mask; 155 - unsigned long pending_reg, intrmask_reg; 156 323 DECLARE_BITMAP(pending, GIC_MAX_INTRS); 157 - DECLARE_BITMAP(intrmask, GIC_MAX_INTRS); 158 324 159 325 /* Get per-cpu bitmaps */ 160 - pcpu_mask = pcpu_masks[smp_processor_id()].pcpu_mask; 326 + pcpu_mask = this_cpu_ptr(pcpu_masks); 161 327 162 - pending_reg = GIC_REG(SHARED, GIC_SH_PEND); 163 - intrmask_reg = GIC_REG(SHARED, GIC_SH_MASK); 328 + if (mips_cm_is64) 329 + __ioread64_copy(pending, addr_gic_pend(), 330 + DIV_ROUND_UP(gic_shared_intrs, 64)); 331 + else 332 + __ioread32_copy(pending, addr_gic_pend(), 333 + DIV_ROUND_UP(gic_shared_intrs, 32)); 164 334 165 - for (i = 0; i < BITS_TO_LONGS(gic_shared_intrs); i++) { 166 - pending[i] = gic_read(pending_reg); 167 - intrmask[i] = gic_read(intrmask_reg); 168 - pending_reg += gic_reg_step; 169 - intrmask_reg += gic_reg_step; 170 - 171 - if (!IS_ENABLED(CONFIG_64BIT) || mips_cm_is64) 172 - continue; 173 - 174 - pending[i] |= (u64)gic_read(pending_reg) << 32; 175 - intrmask[i] |= (u64)gic_read(intrmask_reg) << 32; 176 - pending_reg += gic_reg_step; 177 - intrmask_reg += gic_reg_step; 178 - } 179 - 180 - bitmap_and(pending, pending, intrmask, gic_shared_intrs); 181 335 bitmap_and(pending, pending, pcpu_mask, gic_shared_intrs); 182 336 183 337 for_each_set_bit(intr, pending, gic_shared_intrs) { ··· 167 371 168 372 static void gic_mask_irq(struct irq_data *d) 169 373 { 170 -
gic_reset_mask(GIC_HWIRQ_TO_SHARED(d->hwirq)); 374 + unsigned int intr = GIC_HWIRQ_TO_SHARED(d->hwirq); 375 + 376 + write_gic_rmask(BIT(intr)); 377 + gic_clear_pcpu_masks(intr); 171 378 } 172 379 173 380 static void gic_unmask_irq(struct irq_data *d) 174 381 { 175 - gic_set_mask(GIC_HWIRQ_TO_SHARED(d->hwirq)); 382 + struct cpumask *affinity = irq_data_get_affinity_mask(d); 383 + unsigned int intr = GIC_HWIRQ_TO_SHARED(d->hwirq); 384 + unsigned int cpu; 385 + 386 + write_gic_smask(BIT(intr)); 387 + 388 + gic_clear_pcpu_masks(intr); 389 + cpu = cpumask_first_and(affinity, cpu_online_mask); 390 + set_bit(intr, per_cpu_ptr(pcpu_masks, cpu)); 176 391 } 177 392 178 393 static void gic_ack_irq(struct irq_data *d) 179 394 { 180 395 unsigned int irq = GIC_HWIRQ_TO_SHARED(d->hwirq); 181 396 182 - gic_write(GIC_REG(SHARED, GIC_SH_WEDGE), GIC_SH_WEDGE_CLR(irq)); 397 + write_gic_wedge(irq); 183 398 } 184 399 185 400 static int gic_set_type(struct irq_data *d, unsigned int type) ··· 202 395 spin_lock_irqsave(&gic_lock, flags); 203 396 switch (type & IRQ_TYPE_SENSE_MASK) { 204 397 case IRQ_TYPE_EDGE_FALLING: 205 - gic_set_polarity(irq, GIC_POL_NEG); 206 - gic_set_trigger(irq, GIC_TRIG_EDGE); 207 - gic_set_dual_edge(irq, GIC_TRIG_DUAL_DISABLE); 398 + change_gic_pol(irq, GIC_POL_FALLING_EDGE); 399 + change_gic_trig(irq, GIC_TRIG_EDGE); 400 + change_gic_dual(irq, GIC_DUAL_SINGLE); 208 401 is_edge = true; 209 402 break; 210 403 case IRQ_TYPE_EDGE_RISING: 211 - gic_set_polarity(irq, GIC_POL_POS); 212 - gic_set_trigger(irq, GIC_TRIG_EDGE); 213 - gic_set_dual_edge(irq, GIC_TRIG_DUAL_DISABLE); 404 + change_gic_pol(irq, GIC_POL_RISING_EDGE); 405 + change_gic_trig(irq, GIC_TRIG_EDGE); 406 + change_gic_dual(irq, GIC_DUAL_SINGLE); 214 407 is_edge = true; 215 408 break; 216 409 case IRQ_TYPE_EDGE_BOTH: 217 410 /* polarity is irrelevant in this case */ 218 - gic_set_trigger(irq, GIC_TRIG_EDGE); 219 - gic_set_dual_edge(irq, GIC_TRIG_DUAL_ENABLE); 411 + change_gic_trig(irq, GIC_TRIG_EDGE); 412 +
change_gic_dual(irq, GIC_DUAL_DUAL); 220 413 is_edge = true; 221 414 break; 222 415 case IRQ_TYPE_LEVEL_LOW: 223 - gic_set_polarity(irq, GIC_POL_NEG); 224 - gic_set_trigger(irq, GIC_TRIG_LEVEL); 225 - gic_set_dual_edge(irq, GIC_TRIG_DUAL_DISABLE); 416 + change_gic_pol(irq, GIC_POL_ACTIVE_LOW); 417 + change_gic_trig(irq, GIC_TRIG_LEVEL); 418 + change_gic_dual(irq, GIC_DUAL_SINGLE); 226 419 is_edge = false; 227 420 break; 228 421 case IRQ_TYPE_LEVEL_HIGH: 229 422 default: 230 - gic_set_polarity(irq, GIC_POL_POS); 231 - gic_set_trigger(irq, GIC_TRIG_LEVEL); 232 - gic_set_dual_edge(irq, GIC_TRIG_DUAL_DISABLE); 423 + change_gic_pol(irq, GIC_POL_ACTIVE_HIGH); 424 + change_gic_trig(irq, GIC_TRIG_LEVEL); 425 + change_gic_dual(irq, GIC_DUAL_SINGLE); 233 426 is_edge = false; 234 427 break; 235 428 } ··· 250 443 bool force) 251 444 { 252 445 unsigned int irq = GIC_HWIRQ_TO_SHARED(d->hwirq); 253 - cpumask_t tmp = CPU_MASK_NONE; 254 - unsigned long flags; 255 - int i, cpu; 446 + unsigned long flags; 447 + unsigned int cpu; 256 448 257 - cpumask_and(&tmp, cpumask, cpu_online_mask); 258 - if (cpumask_empty(&tmp)) 449 + cpu = cpumask_first_and(cpumask, cpu_online_mask); 450 + if (cpu >= NR_CPUS) 259 451 return -EINVAL; 260 - 261 - cpu = cpumask_first(&tmp); 262 452 263 453 /* Assumption : cpumask refers to a single CPU */ 264 454 spin_lock_irqsave(&gic_lock, flags); 265 455 266 456 /* Re-route this IRQ */ 267 - gic_map_to_vpe(irq, mips_cm_vp_id(cpu)); 457 + write_gic_map_vp(irq, BIT(mips_cm_vp_id(cpu))); 268 458 269 459 /* Update the pcpu_masks */ 270 - for (i = 0; i < min(gic_vpes, NR_CPUS); i++) 271 - clear_bit(irq, pcpu_masks[i].pcpu_mask); 272 - set_bit(irq, pcpu_masks[cpu].pcpu_mask); 460 + gic_clear_pcpu_masks(irq); 461 + if (read_gic_mask(irq)) 462 + set_bit(irq, per_cpu_ptr(pcpu_masks, cpu)); 273 463 274 - cpumask_copy(irq_data_get_affinity_mask(d), cpumask); 275 464 irq_data_update_effective_affinity(d, cpumask_of(cpu)); 276 465 spin_unlock_irqrestore(&gic_lock, flags);
277 466 278 - return IRQ_SET_MASK_OK_NOCOPY; 467 + return IRQ_SET_MASK_OK; 279 468 } 280 469 #endif 281 470 ··· 302 499 unsigned long pending, masked; 303 500 unsigned int intr, virq; 304 501 305 - pending = gic_read32(GIC_REG(VPE_LOCAL, GIC_VPE_PEND)); 306 - masked = gic_read32(GIC_REG(VPE_LOCAL, GIC_VPE_MASK)); 502 + pending = read_gic_vl_pend(); 503 + masked = read_gic_vl_mask(); 307 504 308 505 bitmap_and(&pending, &pending, &masked, GIC_NUM_LOCAL_INTRS); 309 506 ··· 321 518 { 322 519 int intr = GIC_HWIRQ_TO_LOCAL(d->hwirq); 323 520 324 - gic_write32(GIC_REG(VPE_LOCAL, GIC_VPE_RMASK), 1 << intr); 521 + write_gic_vl_rmask(BIT(intr)); 325 522 } 326 523 327 524 static void gic_unmask_local_irq(struct irq_data *d) 328 525 { 329 526 int intr = GIC_HWIRQ_TO_LOCAL(d->hwirq); 330 527 331 - gic_write32(GIC_REG(VPE_LOCAL, GIC_VPE_SMASK), 1 << intr); 528 + write_gic_vl_smask(BIT(intr)); 332 529 } 333 530 334 531 static struct irq_chip gic_local_irq_controller = { ··· 345 542 346 543 spin_lock_irqsave(&gic_lock, flags); 347 544 for (i = 0; i < gic_vpes; i++) { 348 - gic_write(GIC_REG(VPE_LOCAL, GIC_VPE_OTHER_ADDR), 349 - mips_cm_vp_id(i)); 350 - gic_write32(GIC_REG(VPE_OTHER, GIC_VPE_RMASK), 1 << intr); 545 + write_gic_vl_other(mips_cm_vp_id(i)); 546 + write_gic_vo_rmask(BIT(intr)); 351 547 } 352 548 spin_unlock_irqrestore(&gic_lock, flags); 353 549 } ··· 359 557 360 558 spin_lock_irqsave(&gic_lock, flags); 361 559 for (i = 0; i < gic_vpes; i++) { 362 - gic_write(GIC_REG(VPE_LOCAL, GIC_VPE_OTHER_ADDR), 363 - mips_cm_vp_id(i)); 364 - gic_write32(GIC_REG(VPE_OTHER, GIC_VPE_SMASK), 1 << intr); 560 + write_gic_vl_other(mips_cm_vp_id(i)); 561 + write_gic_vo_smask(BIT(intr)); 365 562 } 366 563 spin_unlock_irqrestore(&gic_lock, flags); 367 564 } ··· 383 582 gic_handle_shared_int(true); 384 583 } 385 584 386 - static void __init gic_basic_init(void) 387 - { 388 - unsigned int i; 389 - 390 - board_bind_eic_interrupt = &gic_bind_eic_interrupt; 391 - 392 - /* Setup defaults */ 393 -
for (i = 0; i < gic_shared_intrs; i++) { 394 - gic_set_polarity(i, GIC_POL_POS); 395 - gic_set_trigger(i, GIC_TRIG_LEVEL); 396 - gic_reset_mask(i); 397 - } 398 - 399 - for (i = 0; i < gic_vpes; i++) { 400 - unsigned int j; 401 - 402 - gic_write(GIC_REG(VPE_LOCAL, GIC_VPE_OTHER_ADDR), 403 - mips_cm_vp_id(i)); 404 - for (j = 0; j < GIC_NUM_LOCAL_INTRS; j++) { 405 - if (!gic_local_irq_is_routable(j)) 406 - continue; 407 - gic_write32(GIC_REG(VPE_OTHER, GIC_VPE_RMASK), 1 << j); 408 - } 409 - } 410 - } 411 - 412 585 static int gic_local_irq_domain_map(struct irq_domain *d, unsigned int virq, 413 586 irq_hw_number_t hw) 414 587 { 415 588 int intr = GIC_HWIRQ_TO_LOCAL(hw); 416 - int ret = 0; 417 589 int i; 418 590 unsigned long flags; 591 + u32 val; 419 592 420 593 if (!gic_local_irq_is_routable(intr)) 421 594 return -EPERM; 422 595 596 + if (intr > GIC_LOCAL_INT_FDC) { 597 + pr_err("Invalid local IRQ %d\n", intr); 598 + return -EINVAL; 599 + } 600 + 601 + if (intr == GIC_LOCAL_INT_TIMER) { 602 + /* CONFIG_MIPS_CMP workaround (see __gic_init) */ 603 + val = GIC_MAP_PIN_MAP_TO_PIN | timer_cpu_pin; 604 + } else { 605 + val = GIC_MAP_PIN_MAP_TO_PIN | gic_cpu_pin; 606 + } 607 + 423 608 spin_lock_irqsave(&gic_lock, flags); 424 609 for (i = 0; i < gic_vpes; i++) { 425 - u32 val = GIC_MAP_TO_PIN_MSK | gic_cpu_pin; 426 - 427 - gic_write(GIC_REG(VPE_LOCAL, GIC_VPE_OTHER_ADDR), 428 - mips_cm_vp_id(i)); 429 - 430 - switch (intr) { 431 - case GIC_LOCAL_INT_WD: 432 - gic_write32(GIC_REG(VPE_OTHER, GIC_VPE_WD_MAP), val); 433 - break; 434 - case GIC_LOCAL_INT_COMPARE: 435 - gic_write32(GIC_REG(VPE_OTHER, GIC_VPE_COMPARE_MAP), 436 - val); 437 - break; 438 - case GIC_LOCAL_INT_TIMER: 439 - /* CONFIG_MIPS_CMP workaround (see __gic_init) */ 440 - val = GIC_MAP_TO_PIN_MSK | timer_cpu_pin; 441 - gic_write32(GIC_REG(VPE_OTHER, GIC_VPE_TIMER_MAP), 442 - val); 443 - break; 444 - case GIC_LOCAL_INT_PERFCTR: 445 - gic_write32(GIC_REG(VPE_OTHER, GIC_VPE_PERFCTR_MAP), 446 - val); 447 - break; 448 -
case GIC_LOCAL_INT_SWINT0: 449 - gic_write32(GIC_REG(VPE_OTHER, GIC_VPE_SWINT0_MAP), 450 - val); 451 - break; 452 - case GIC_LOCAL_INT_SWINT1: 453 - gic_write32(GIC_REG(VPE_OTHER, GIC_VPE_SWINT1_MAP), 454 - val); 455 - break; 456 - case GIC_LOCAL_INT_FDC: 457 - gic_write32(GIC_REG(VPE_OTHER, GIC_VPE_FDC_MAP), val); 458 - break; 459 - default: 460 - pr_err("Invalid local IRQ %d\n", intr); 461 - ret = -EINVAL; 462 - break; 463 - } 610 + write_gic_vl_other(mips_cm_vp_id(i)); 611 + write_gic_vo_map(intr, val); 464 612 } 465 613 spin_unlock_irqrestore(&gic_lock, flags); 466 614 467 - return ret; 615 + return 0; 468 616 } 469 617 470 618 static int gic_shared_irq_domain_map(struct irq_domain *d, unsigned int virq, 471 - irq_hw_number_t hw, unsigned int vpe) 619 + irq_hw_number_t hw, unsigned int cpu) 472 620 { 473 621 int intr = GIC_HWIRQ_TO_SHARED(hw); 474 622 unsigned long flags; 475 - int i; 476 623 477 624 spin_lock_irqsave(&gic_lock, flags); 478 - gic_map_to_pin(intr, gic_cpu_pin); 479 - gic_map_to_vpe(intr, mips_cm_vp_id(vpe)); 480 - for (i = 0; i < min(gic_vpes, NR_CPUS); i++) 481 - clear_bit(intr, pcpu_masks[i].pcpu_mask); 482 - set_bit(intr, pcpu_masks[vpe].pcpu_mask); 625 + write_gic_map_pin(intr, GIC_MAP_PIN_MAP_TO_PIN | gic_cpu_pin); 626 + write_gic_map_vp(intr, BIT(mips_cm_vp_id(cpu))); 627 + gic_clear_pcpu_masks(intr); 628 + set_bit(intr, per_cpu_ptr(pcpu_masks, cpu)); 483 629 spin_unlock_irqrestore(&gic_lock, flags); 484 630 485 631 return 0; ··· 633 885 .match = gic_ipi_domain_match, 634 886 }; 635 887 636 - static void __init __gic_init(unsigned long gic_base_addr, 637 - unsigned long gic_addrspace_size, 638 - unsigned int cpu_vec, unsigned int irqbase, 639 - struct device_node *node) 888 + 889 + static int __init gic_of_init(struct device_node *node, 890 + struct device_node *parent) 640 891 { 641 - unsigned int gicconfig, cpu; 642 - unsigned int v[2]; 892 + unsigned int cpu_vec, i, j, gicconfig, cpu, v[2]; 893 + unsigned long reserved; 894 + 
phys_addr_t gic_base; 895 + struct resource res; 896 + size_t gic_len; 643 897 644 - __gic_base_addr = gic_base_addr; 898 + /* Find the first available CPU vector. */ 899 + i = 0; 900 + reserved = (C_SW0 | C_SW1) >> __fls(C_SW0); 901 + while (!of_property_read_u32_index(node, "mti,reserved-cpu-vectors", 902 + i++, &cpu_vec)) 903 + reserved |= BIT(cpu_vec); 645 904 646 - gic_base = ioremap_nocache(gic_base_addr, gic_addrspace_size); 905 + cpu_vec = find_first_zero_bit(&reserved, hweight_long(ST0_IM)); 906 + if (cpu_vec == hweight_long(ST0_IM)) { 907 + pr_err("No CPU vectors available for GIC\n"); 908 + return -ENODEV; 909 + } 647 910 648 - gicconfig = gic_read(GIC_REG(SHARED, GIC_SH_CONFIG)); 649 - gic_shared_intrs = (gicconfig & GIC_SH_CONFIG_NUMINTRS_MSK) >> 650 - GIC_SH_CONFIG_NUMINTRS_SHF; 651 - gic_shared_intrs = ((gic_shared_intrs + 1) * 8); 911 + if (of_address_to_resource(node, 0, &res)) { 912 + /* 913 + * Probe the CM for the GIC base address if not specified 914 + * in the device-tree. 
915 + */ 916 + if (mips_cm_present()) { 917 + gic_base = read_gcr_gic_base() & 918 + ~CM_GCR_GIC_BASE_GICEN; 919 + gic_len = 0x20000; 920 + } else { 921 + pr_err("Failed to get GIC memory range\n"); 922 + return -ENODEV; 923 + } 924 + } else { 925 + gic_base = res.start; 926 + gic_len = resource_size(&res); 927 + } 652 928 653 - gic_vpes = (gicconfig & GIC_SH_CONFIG_NUMVPES_MSK) >> 654 - GIC_SH_CONFIG_NUMVPES_SHF; 929 + if (mips_cm_present()) { 930 + write_gcr_gic_base(gic_base | CM_GCR_GIC_BASE_GICEN); 931 + /* Ensure GIC region is enabled before trying to access it */ 932 + __sync(); 933 + } 934 + 935 + mips_gic_base = ioremap_nocache(gic_base, gic_len); 936 + 937 + gicconfig = read_gic_config(); 938 + gic_shared_intrs = gicconfig & GIC_CONFIG_NUMINTERRUPTS; 939 + gic_shared_intrs >>= __fls(GIC_CONFIG_NUMINTERRUPTS); 940 + gic_shared_intrs = (gic_shared_intrs + 1) * 8; 941 + 942 + gic_vpes = gicconfig & GIC_CONFIG_PVPS; 943 + gic_vpes >>= __fls(GIC_CONFIG_PVPS); 655 944 gic_vpes = gic_vpes + 1; 656 945 657 946 if (cpu_has_veic) { 658 947 /* Set EIC mode for all VPEs */ 659 948 for_each_present_cpu(cpu) { 660 - gic_write(GIC_REG(VPE_LOCAL, GIC_VPE_OTHER_ADDR), 661 - mips_cm_vp_id(cpu)); 662 - gic_write(GIC_REG(VPE_OTHER, GIC_VPE_CTL), 663 - GIC_VPE_CTL_EIC_MODE_MSK); 949 + write_gic_vl_other(mips_cm_vp_id(cpu)); 950 + write_gic_vo_ctl(GIC_VX_CTL_EIC); 664 951 } 665 952 666 953 /* Always use vector 1 in EIC mode */ ··· 720 937 */ 721 938 if (IS_ENABLED(CONFIG_MIPS_CMP) && 722 939 gic_local_irq_is_routable(GIC_LOCAL_INT_TIMER)) { 723 - timer_cpu_pin = gic_read32(GIC_REG(VPE_LOCAL, 724 - GIC_VPE_TIMER_MAP)) & 725 - GIC_MAP_MSK; 940 + timer_cpu_pin = read_gic_vl_timer_map() & GIC_MAP_PIN_MAP; 726 941 irq_set_chained_handler(MIPS_CPU_IRQ_BASE + 727 942 GIC_CPU_PIN_OFFSET + 728 943 timer_cpu_pin, ··· 731 950 } 732 951 733 952 gic_irq_domain = irq_domain_add_simple(node, GIC_NUM_LOCAL_INTRS + 734 - gic_shared_intrs, irqbase, 953 + gic_shared_intrs, 0, 735 954 
&gic_irq_domain_ops, NULL); 736 - if (!gic_irq_domain) 737 - panic("Failed to add GIC IRQ domain"); 955 + if (!gic_irq_domain) { 956 + pr_err("Failed to add GIC IRQ domain"); 957 + return -ENXIO; 958 + } 738 959 739 960 gic_ipi_domain = irq_domain_add_hierarchy(gic_irq_domain, 740 961 IRQ_DOMAIN_FLAG_IPI_PER_CPU, 741 962 GIC_NUM_LOCAL_INTRS + gic_shared_intrs, 742 963 node, &gic_ipi_domain_ops, NULL); 743 - if (!gic_ipi_domain) 744 - panic("Failed to add GIC IPI domain"); 964 + if (!gic_ipi_domain) { 965 + pr_err("Failed to add GIC IPI domain"); 966 + return -ENXIO; 967 + } 745 968 746 969 irq_domain_update_bus_token(gic_ipi_domain, DOMAIN_BUS_IPI); 747 970 ··· 760 975 } 761 976 762 977 bitmap_copy(ipi_available, ipi_resrv, GIC_MAX_INTRS); 763 - gic_basic_init(); 764 - } 765 978 766 - void __init gic_init(unsigned long gic_base_addr, 767 - unsigned long gic_addrspace_size, 768 - unsigned int cpu_vec, unsigned int irqbase) 769 - { 770 - __gic_init(gic_base_addr, gic_addrspace_size, cpu_vec, irqbase, NULL); 771 - } 979 + board_bind_eic_interrupt = &gic_bind_eic_interrupt; 772 980 773 - static int __init gic_of_init(struct device_node *node, 774 - struct device_node *parent) 775 - { 776 - struct resource res; 777 - unsigned int cpu_vec, i = 0, reserved = 0; 778 - phys_addr_t gic_base; 779 - size_t gic_len; 780 - 781 - /* Find the first available CPU vector. 
*/ 782 - while (!of_property_read_u32_index(node, "mti,reserved-cpu-vectors", 783 - i++, &cpu_vec)) 784 - reserved |= BIT(cpu_vec); 785 - for (cpu_vec = 2; cpu_vec < 8; cpu_vec++) { 786 - if (!(reserved & BIT(cpu_vec))) 787 - break; 788 - } 789 - if (cpu_vec == 8) { 790 - pr_err("No CPU vectors available for GIC\n"); 791 - return -ENODEV; 981 + /* Setup defaults */ 982 + for (i = 0; i < gic_shared_intrs; i++) { 983 + change_gic_pol(i, GIC_POL_ACTIVE_HIGH); 984 + change_gic_trig(i, GIC_TRIG_LEVEL); 985 + write_gic_rmask(BIT(i)); 792 986 } 793 987 794 - if (of_address_to_resource(node, 0, &res)) { 795 - /* 796 - * Probe the CM for the GIC base address if not specified 797 - * in the device-tree. 798 - */ 799 - if (mips_cm_present()) { 800 - gic_base = read_gcr_gic_base() & 801 - ~CM_GCR_GIC_BASE_GICEN_MSK; 802 - gic_len = 0x20000; 803 - } else { 804 - pr_err("Failed to get GIC memory range\n"); 805 - return -ENODEV; 988 + for (i = 0; i < gic_vpes; i++) { 989 + write_gic_vl_other(mips_cm_vp_id(i)); 990 + for (j = 0; j < GIC_NUM_LOCAL_INTRS; j++) { 991 + if (!gic_local_irq_is_routable(j)) 992 + continue; 993 + write_gic_vo_rmask(BIT(j)); 806 994 } 807 - } else { 808 - gic_base = res.start; 809 - gic_len = resource_size(&res); 810 995 } 811 - 812 - if (mips_cm_present()) { 813 - write_gcr_gic_base(gic_base | CM_GCR_GIC_BASE_GICEN_MSK); 814 - /* Ensure GIC region is enabled before trying to access it */ 815 - __sync(); 816 - } 817 - gic_present = true; 818 - 819 - __gic_init(gic_base, gic_len, cpu_vec, 0, node); 820 996 821 997 return 0; 822 998 }
-6
drivers/mtd/maps/lantiq-flash.c
··· 114 114 struct cfi_private *cfi; 115 115 int err; 116 116 117 - if (of_machine_is_compatible("lantiq,falcon") && 118 - (ltq_boot_select() != BS_FLASH)) { 119 - dev_err(&pdev->dev, "invalid bootstrap options\n"); 120 - return -ENODEV; 121 - } 122 - 123 117 ltq_mtd = devm_kzalloc(&pdev->dev, sizeof(struct ltq_mtd), GFP_KERNEL); 124 118 if (!ltq_mtd) 125 119 return -ENOMEM;
+19 -14
drivers/pcmcia/db1xxx_ss.c
··· 131 131 return IRQ_HANDLED; 132 132 } 133 133 134 + /* Db/Pb1200 have separate per-socket insertion and ejection 135 + * interrupts which stay asserted as long as the card is 136 + * inserted/missing. The one which caused us to be called 137 + * needs to be disabled and the other one enabled. 138 + */ 134 139 static irqreturn_t db1200_pcmcia_cdirq(int irq, void *data) 140 + { 141 + disable_irq_nosync(irq); 142 + return IRQ_WAKE_THREAD; 143 + } 144 + 145 + static irqreturn_t db1200_pcmcia_cdirq_fn(int irq, void *data) 135 146 { 136 147 struct db1x_pcmcia_sock *sock = data; 137 148 138 - /* Db/Pb1200 have separate per-socket insertion and ejection 139 - * interrupts which stay asserted as long as the card is 140 - * inserted/missing. The one which caused us to be called 141 - * needs to be disabled and the other one enabled. 142 - */ 143 - if (irq == sock->insert_irq) { 144 - disable_irq_nosync(sock->insert_irq); 149 + /* Wait a bit for the signals to stop bouncing. */ 150 + msleep(100); 151 + if (irq == sock->insert_irq) 145 152 enable_irq(sock->eject_irq); 146 - } else { 147 - disable_irq_nosync(sock->eject_irq); 153 + else 148 154 enable_irq(sock->insert_irq); 149 - } 150 155 151 156 pcmcia_parse_events(&sock->socket, SS_DETECT); 152 157 ··· 177 172 */ 178 173 if ((sock->board_type == BOARD_TYPE_DB1200) || 179 174 (sock->board_type == BOARD_TYPE_DB1300)) { 180 - ret = request_irq(sock->insert_irq, db1200_pcmcia_cdirq, 181 - 0, "pcmcia_insert", sock); 175 + ret = request_threaded_irq(sock->insert_irq, db1200_pcmcia_cdirq, 176 + db1200_pcmcia_cdirq_fn, 0, "pcmcia_insert", sock); 182 177 if (ret) 183 178 goto out1; 184 179 185 - ret = request_irq(sock->eject_irq, db1200_pcmcia_cdirq, 186 - 0, "pcmcia_eject", sock); 180 + ret = request_threaded_irq(sock->eject_irq, db1200_pcmcia_cdirq, 181 + db1200_pcmcia_cdirq_fn, 0, "pcmcia_eject", sock); 187 182 if (ret) { 188 183 free_irq(sock->insert_irq, sock); 189 184 goto out1;
+1
drivers/phy/Kconfig
··· 44 44 source "drivers/phy/amlogic/Kconfig" 45 45 source "drivers/phy/broadcom/Kconfig" 46 46 source "drivers/phy/hisilicon/Kconfig" 47 + source "drivers/phy/lantiq/Kconfig" 47 48 source "drivers/phy/marvell/Kconfig" 48 49 source "drivers/phy/mediatek/Kconfig" 49 50 source "drivers/phy/motorola/Kconfig"
+1 -1
drivers/phy/Makefile
··· 6 6 obj-$(CONFIG_PHY_LPC18XX_USB_OTG) += phy-lpc18xx-usb-otg.o 7 7 obj-$(CONFIG_PHY_XGENE) += phy-xgene.o 8 8 obj-$(CONFIG_PHY_PISTACHIO_USB) += phy-pistachio-usb.o 9 - 10 9 obj-$(CONFIG_ARCH_SUNXI) += allwinner/ 11 10 obj-$(CONFIG_ARCH_MESON) += amlogic/ 11 + obj-$(CONFIG_LANTIQ) += lantiq/ 12 12 obj-$(CONFIG_ARCH_MEDIATEK) += mediatek/ 13 13 obj-$(CONFIG_ARCH_RENESAS) += renesas/ 14 14 obj-$(CONFIG_ARCH_ROCKCHIP) += rockchip/
+9
drivers/phy/lantiq/Kconfig
··· 1 + # 2 + # Phy drivers for Lantiq / Intel platforms 3 + # 4 + config PHY_LANTIQ_RCU_USB2 5 + tristate "Lantiq XWAY SoC RCU based USB PHY" 6 + depends on OF && (SOC_TYPE_XWAY || COMPILE_TEST) 7 + select GENERIC_PHY 8 + help 9 + Support for the USB PHY(s) on the Lantiq / Intel XWAY family SoCs.
+1
drivers/phy/lantiq/Makefile
··· 1 + obj-$(CONFIG_PHY_LANTIQ_RCU_USB2) += phy-lantiq-rcu-usb2.o
+254
drivers/phy/lantiq/phy-lantiq-rcu-usb2.c
··· 1 + /* 2 + * Lantiq XWAY SoC RCU module based USB 1.1/2.0 PHY driver 3 + * 4 + * Copyright (C) 2016 Martin Blumenstingl <martin.blumenstingl@googlemail.com> 5 + * Copyright (C) 2017 Hauke Mehrtens <hauke@hauke-m.de> 6 + * 7 + * This program is free software; you can redistribute it and/or modify 8 + * it under the terms of the GNU General Public License version 2 as 9 + * published by the Free Software Foundation. 10 + */ 11 + 12 + #include <linux/clk.h> 13 + #include <linux/delay.h> 14 + #include <linux/mfd/syscon.h> 15 + #include <linux/module.h> 16 + #include <linux/of.h> 17 + #include <linux/of_address.h> 18 + #include <linux/of_device.h> 19 + #include <linux/phy/phy.h> 20 + #include <linux/platform_device.h> 21 + #include <linux/property.h> 22 + #include <linux/regmap.h> 23 + #include <linux/reset.h> 24 + 25 + /* Transmitter HS Pre-Emphasis Enable */ 26 + #define RCU_CFG1_TX_PEE BIT(0) 27 + /* Disconnect Threshold */ 28 + #define RCU_CFG1_DIS_THR_MASK 0x00038000 29 + #define RCU_CFG1_DIS_THR_SHIFT 15 30 + 31 + struct ltq_rcu_usb2_bits { 32 + u8 hostmode; 33 + u8 slave_endianness; 34 + u8 host_endianness; 35 + bool have_ana_cfg; 36 + }; 37 + 38 + struct ltq_rcu_usb2_priv { 39 + struct regmap *regmap; 40 + unsigned int phy_reg_offset; 41 + unsigned int ana_cfg1_reg_offset; 42 + const struct ltq_rcu_usb2_bits *reg_bits; 43 + struct device *dev; 44 + struct phy *phy; 45 + struct clk *phy_gate_clk; 46 + struct reset_control *ctrl_reset; 47 + struct reset_control *phy_reset; 48 + }; 49 + 50 + static const struct ltq_rcu_usb2_bits xway_rcu_usb2_reg_bits = { 51 + .hostmode = 11, 52 + .slave_endianness = 9, 53 + .host_endianness = 10, 54 + .have_ana_cfg = false, 55 + }; 56 + 57 + static const struct ltq_rcu_usb2_bits xrx100_rcu_usb2_reg_bits = { 58 + .hostmode = 11, 59 + .slave_endianness = 17, 60 + .host_endianness = 10, 61 + .have_ana_cfg = false, 62 + }; 63 + 64 + static const struct ltq_rcu_usb2_bits xrx200_rcu_usb2_reg_bits = { 65 + .hostmode = 11, 66 + 
.slave_endianness = 9, 67 + .host_endianness = 10, 68 + .have_ana_cfg = true, 69 + }; 70 + 71 + static const struct of_device_id ltq_rcu_usb2_phy_of_match[] = { 72 + { 73 + .compatible = "lantiq,ase-usb2-phy", 74 + .data = &xway_rcu_usb2_reg_bits, 75 + }, 76 + { 77 + .compatible = "lantiq,danube-usb2-phy", 78 + .data = &xway_rcu_usb2_reg_bits, 79 + }, 80 + { 81 + .compatible = "lantiq,xrx100-usb2-phy", 82 + .data = &xrx100_rcu_usb2_reg_bits, 83 + }, 84 + { 85 + .compatible = "lantiq,xrx200-usb2-phy", 86 + .data = &xrx200_rcu_usb2_reg_bits, 87 + }, 88 + { 89 + .compatible = "lantiq,xrx300-usb2-phy", 90 + .data = &xrx200_rcu_usb2_reg_bits, 91 + }, 92 + { }, 93 + }; 94 + MODULE_DEVICE_TABLE(of, ltq_rcu_usb2_phy_of_match); 95 + 96 + static int ltq_rcu_usb2_phy_init(struct phy *phy) 97 + { 98 + struct ltq_rcu_usb2_priv *priv = phy_get_drvdata(phy); 99 + 100 + if (priv->reg_bits->have_ana_cfg) { 101 + regmap_update_bits(priv->regmap, priv->ana_cfg1_reg_offset, 102 + RCU_CFG1_TX_PEE, RCU_CFG1_TX_PEE); 103 + regmap_update_bits(priv->regmap, priv->ana_cfg1_reg_offset, 104 + RCU_CFG1_DIS_THR_MASK, 7 << RCU_CFG1_DIS_THR_SHIFT); 105 + } 106 + 107 + /* Configure core to host mode */ 108 + regmap_update_bits(priv->regmap, priv->phy_reg_offset, 109 + BIT(priv->reg_bits->hostmode), 0); 110 + 111 + /* Select DMA endianness (Host-endian: big-endian) */ 112 + regmap_update_bits(priv->regmap, priv->phy_reg_offset, 113 + BIT(priv->reg_bits->slave_endianness), 0); 114 + regmap_update_bits(priv->regmap, priv->phy_reg_offset, 115 + BIT(priv->reg_bits->host_endianness), 116 + BIT(priv->reg_bits->host_endianness)); 117 + 118 + return 0; 119 + } 120 + 121 + static int ltq_rcu_usb2_phy_power_on(struct phy *phy) 122 + { 123 + struct ltq_rcu_usb2_priv *priv = phy_get_drvdata(phy); 124 + struct device *dev = priv->dev; 125 + int ret; 126 + 127 + reset_control_deassert(priv->phy_reset); 128 + 129 + ret = clk_prepare_enable(priv->phy_gate_clk); 130 + if (ret) 131 + dev_err(dev, "failed to enable 
PHY gate\n"); 132 + 133 + return ret; 134 + } 135 + 136 + static int ltq_rcu_usb2_phy_power_off(struct phy *phy) 137 + { 138 + struct ltq_rcu_usb2_priv *priv = phy_get_drvdata(phy); 139 + 140 + reset_control_assert(priv->phy_reset); 141 + 142 + clk_disable_unprepare(priv->phy_gate_clk); 143 + 144 + return 0; 145 + } 146 + 147 + static struct phy_ops ltq_rcu_usb2_phy_ops = { 148 + .init = ltq_rcu_usb2_phy_init, 149 + .power_on = ltq_rcu_usb2_phy_power_on, 150 + .power_off = ltq_rcu_usb2_phy_power_off, 151 + .owner = THIS_MODULE, 152 + }; 153 + 154 + static int ltq_rcu_usb2_of_parse(struct ltq_rcu_usb2_priv *priv, 155 + struct platform_device *pdev) 156 + { 157 + struct device *dev = priv->dev; 158 + const __be32 *offset; 159 + int ret; 160 + 161 + priv->reg_bits = of_device_get_match_data(dev); 162 + 163 + priv->regmap = syscon_node_to_regmap(dev->of_node->parent); 164 + if (IS_ERR(priv->regmap)) { 165 + dev_err(dev, "Failed to lookup RCU regmap\n"); 166 + return PTR_ERR(priv->regmap); 167 + } 168 + 169 + offset = of_get_address(dev->of_node, 0, NULL, NULL); 170 + if (!offset) { 171 + dev_err(dev, "Failed to get RCU PHY reg offset\n"); 172 + return -ENOENT; 173 + } 174 + priv->phy_reg_offset = __be32_to_cpu(*offset); 175 + 176 + if (priv->reg_bits->have_ana_cfg) { 177 + offset = of_get_address(dev->of_node, 1, NULL, NULL); 178 + if (!offset) { 179 + dev_err(dev, "Failed to get RCU ANA CFG1 reg offset\n"); 180 + return -ENOENT; 181 + } 182 + priv->ana_cfg1_reg_offset = __be32_to_cpu(*offset); 183 + } 184 + 185 + priv->phy_gate_clk = devm_clk_get(dev, "phy"); 186 + if (IS_ERR(priv->phy_gate_clk)) { 187 + dev_err(dev, "Unable to get USB phy gate clk\n"); 188 + return PTR_ERR(priv->phy_gate_clk); 189 + } 190 + 191 + priv->ctrl_reset = devm_reset_control_get_shared(dev, "ctrl"); 192 + if (IS_ERR(priv->ctrl_reset)) { 193 + if (PTR_ERR(priv->ctrl_reset) != -EPROBE_DEFER) 194 + dev_err(dev, "failed to get 'ctrl' reset\n"); 195 + return PTR_ERR(priv->ctrl_reset); 196 + } 197 
+ 198 + priv->phy_reset = devm_reset_control_get_optional(dev, "phy"); 199 + if (IS_ERR(priv->phy_reset)) 200 + return PTR_ERR(priv->phy_reset); 201 + 202 + return 0; 203 + } 204 + 205 + static int ltq_rcu_usb2_phy_probe(struct platform_device *pdev) 206 + { 207 + struct device *dev = &pdev->dev; 208 + struct ltq_rcu_usb2_priv *priv; 209 + struct phy_provider *provider; 210 + int ret; 211 + 212 + priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL); 213 + if (!priv) 214 + return -ENOMEM; 215 + 216 + priv->dev = dev; 217 + 218 + ret = ltq_rcu_usb2_of_parse(priv, pdev); 219 + if (ret) 220 + return ret; 221 + 222 + /* Reset USB core through reset controller */ 223 + reset_control_deassert(priv->ctrl_reset); 224 + 225 + reset_control_assert(priv->phy_reset); 226 + 227 + priv->phy = devm_phy_create(dev, dev->of_node, &ltq_rcu_usb2_phy_ops); 228 + if (IS_ERR(priv->phy)) { 229 + dev_err(dev, "failed to create PHY\n"); 230 + return PTR_ERR(priv->phy); 231 + } 232 + 233 + phy_set_drvdata(priv->phy, priv); 234 + 235 + provider = devm_of_phy_provider_register(dev, of_phy_simple_xlate); 236 + if (IS_ERR(provider)) 237 + return PTR_ERR(provider); 238 + 239 + dev_set_drvdata(priv->dev, priv); 240 + return 0; 241 + } 242 + 243 + static struct platform_driver ltq_rcu_usb2_phy_driver = { 244 + .probe = ltq_rcu_usb2_phy_probe, 245 + .driver = { 246 + .name = "lantiq-rcu-usb2-phy", 247 + .of_match_table = ltq_rcu_usb2_phy_of_match, 248 + } 249 + }; 250 + module_platform_driver(ltq_rcu_usb2_phy_driver); 251 + 252 + MODULE_AUTHOR("Martin Blumenstingl <martin.blumenstingl@googlemail.com>"); 253 + MODULE_DESCRIPTION("Lantiq XWAY USB2 PHY driver"); 254 + MODULE_LICENSE("GPL v2");
+6
drivers/reset/Kconfig
··· 47 47 help 48 48 This enables the reset controller driver for i.MX7 SoCs. 49 49 50 + config RESET_LANTIQ 51 + bool "Lantiq XWAY Reset Driver" if COMPILE_TEST 52 + default SOC_TYPE_XWAY 53 + help 54 + This enables the reset controller driver for Lantiq / Intel XWAY SoCs. 55 + 50 56 config RESET_LPC18XX 51 57 bool "LPC18xx/43xx Reset Driver" if COMPILE_TEST 52 58 default ARCH_LPC18XX
+1
drivers/reset/Makefile
··· 7 7 obj-$(CONFIG_RESET_BERLIN) += reset-berlin.o 8 8 obj-$(CONFIG_RESET_HSDK_V1) += reset-hsdk-v1.o 9 9 obj-$(CONFIG_RESET_IMX7) += reset-imx7.o 10 + obj-$(CONFIG_RESET_LANTIQ) += reset-lantiq.o 10 11 obj-$(CONFIG_RESET_LPC18XX) += reset-lpc18xx.o 11 12 obj-$(CONFIG_RESET_MESON) += reset-meson.o 12 13 obj-$(CONFIG_RESET_OXNAS) += reset-oxnas.o
+212
drivers/reset/reset-lantiq.c
··· 1 + /* 2 + * This program is free software; you can redistribute it and/or modify it 3 + * under the terms of the GNU General Public License version 2 as published 4 + * by the Free Software Foundation. 5 + * 6 + * Copyright (C) 2010 John Crispin <blogic@phrozen.org> 7 + * Copyright (C) 2013-2015 Lantiq Beteiligungs-GmbH & Co.KG 8 + * Copyright (C) 2016 Martin Blumenstingl <martin.blumenstingl@googlemail.com> 9 + * Copyright (C) 2017 Hauke Mehrtens <hauke@hauke-m.de> 10 + */ 11 + 12 + #include <linux/mfd/syscon.h> 13 + #include <linux/module.h> 14 + #include <linux/regmap.h> 15 + #include <linux/reset-controller.h> 16 + #include <linux/of_address.h> 17 + #include <linux/of_platform.h> 18 + #include <linux/platform_device.h> 19 + #include <linux/property.h> 20 + 21 + #define LANTIQ_RCU_RESET_TIMEOUT 10000 22 + 23 + struct lantiq_rcu_reset_priv { 24 + struct reset_controller_dev rcdev; 25 + struct device *dev; 26 + struct regmap *regmap; 27 + u32 reset_offset; 28 + u32 status_offset; 29 + }; 30 + 31 + static struct lantiq_rcu_reset_priv *to_lantiq_rcu_reset_priv( 32 + struct reset_controller_dev *rcdev) 33 + { 34 + return container_of(rcdev, struct lantiq_rcu_reset_priv, rcdev); 35 + } 36 + 37 + static int lantiq_rcu_reset_status(struct reset_controller_dev *rcdev, 38 + unsigned long id) 39 + { 40 + struct lantiq_rcu_reset_priv *priv = to_lantiq_rcu_reset_priv(rcdev); 41 + unsigned int status = (id >> 8) & 0x1f; 42 + u32 val; 43 + int ret; 44 + 45 + ret = regmap_read(priv->regmap, priv->status_offset, &val); 46 + if (ret) 47 + return ret; 48 + 49 + return !!(val & BIT(status)); 50 + } 51 + 52 + static int lantiq_rcu_reset_status_timeout(struct reset_controller_dev *rcdev, 53 + unsigned long id, bool assert) 54 + { 55 + int ret; 56 + int retry = LANTIQ_RCU_RESET_TIMEOUT; 57 + 58 + do { 59 + ret = lantiq_rcu_reset_status(rcdev, id); 60 + if (ret < 0) 61 + return ret; 62 + if (ret == assert) 63 + return 0; 64 + usleep_range(20, 40); 65 + } while (--retry); 66 + 67 + 
return -ETIMEDOUT; 68 + } 69 + 70 + static int lantiq_rcu_reset_update(struct reset_controller_dev *rcdev, 71 + unsigned long id, bool assert) 72 + { 73 + struct lantiq_rcu_reset_priv *priv = to_lantiq_rcu_reset_priv(rcdev); 74 + unsigned int set = id & 0x1f; 75 + u32 val = assert ? BIT(set) : 0; 76 + int ret; 77 + 78 + ret = regmap_update_bits(priv->regmap, priv->reset_offset, BIT(set), 79 + val); 80 + if (ret) { 81 + dev_err(priv->dev, "Failed to set reset bit %u\n", set); 82 + return ret; 83 + } 84 + 85 + 86 + ret = lantiq_rcu_reset_status_timeout(rcdev, id, assert); 87 + if (ret) 88 + dev_err(priv->dev, "Failed to %s bit %u\n", 89 + assert ? "assert" : "deassert", set); 90 + 91 + return ret; 92 + } 93 + 94 + static int lantiq_rcu_reset_assert(struct reset_controller_dev *rcdev, 95 + unsigned long id) 96 + { 97 + return lantiq_rcu_reset_update(rcdev, id, true); 98 + } 99 + 100 + static int lantiq_rcu_reset_deassert(struct reset_controller_dev *rcdev, 101 + unsigned long id) 102 + { 103 + return lantiq_rcu_reset_update(rcdev, id, false); 104 + } 105 + 106 + static int lantiq_rcu_reset_reset(struct reset_controller_dev *rcdev, 107 + unsigned long id) 108 + { 109 + int ret; 110 + 111 + ret = lantiq_rcu_reset_assert(rcdev, id); 112 + if (ret) 113 + return ret; 114 + 115 + return lantiq_rcu_reset_deassert(rcdev, id); 116 + } 117 + 118 + static const struct reset_control_ops lantiq_rcu_reset_ops = { 119 + .assert = lantiq_rcu_reset_assert, 120 + .deassert = lantiq_rcu_reset_deassert, 121 + .status = lantiq_rcu_reset_status, 122 + .reset = lantiq_rcu_reset_reset, 123 + }; 124 + 125 + static int lantiq_rcu_reset_of_parse(struct platform_device *pdev, 126 + struct lantiq_rcu_reset_priv *priv) 127 + { 128 + struct device *dev = &pdev->dev; 129 + const __be32 *offset; 130 + 131 + priv->regmap = syscon_node_to_regmap(dev->of_node->parent); 132 + if (IS_ERR(priv->regmap)) { 133 + dev_err(&pdev->dev, "Failed to lookup RCU regmap\n"); 134 + return PTR_ERR(priv->regmap); 135 + 
} 136 + 137 + offset = of_get_address(dev->of_node, 0, NULL, NULL); 138 + if (!offset) { 139 + dev_err(&pdev->dev, "Failed to get RCU reset offset\n"); 140 + return -ENOENT; 141 + } 142 + priv->reset_offset = __be32_to_cpu(*offset); 143 + 144 + offset = of_get_address(dev->of_node, 1, NULL, NULL); 145 + if (!offset) { 146 + dev_err(&pdev->dev, "Failed to get RCU status offset\n"); 147 + return -ENOENT; 148 + } 149 + priv->status_offset = __be32_to_cpu(*offset); 150 + 151 + return 0; 152 + } 153 + 154 + static int lantiq_rcu_reset_xlate(struct reset_controller_dev *rcdev, 155 + const struct of_phandle_args *reset_spec) 156 + { 157 + unsigned int status, set; 158 + 159 + set = reset_spec->args[0]; 160 + status = reset_spec->args[1]; 161 + 162 + if (set >= rcdev->nr_resets || status >= rcdev->nr_resets) 163 + return -EINVAL; 164 + 165 + return (status << 8) | set; 166 + } 167 + 168 + static int lantiq_rcu_reset_probe(struct platform_device *pdev) 169 + { 170 + struct lantiq_rcu_reset_priv *priv; 171 + int err; 172 + 173 + priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL); 174 + if (!priv) 175 + return -ENOMEM; 176 + 177 + priv->dev = &pdev->dev; 178 + platform_set_drvdata(pdev, priv); 179 + 180 + err = lantiq_rcu_reset_of_parse(pdev, priv); 181 + if (err) 182 + return err; 183 + 184 + priv->rcdev.ops = &lantiq_rcu_reset_ops; 185 + priv->rcdev.owner = THIS_MODULE; 186 + priv->rcdev.of_node = pdev->dev.of_node; 187 + priv->rcdev.nr_resets = 32; 188 + priv->rcdev.of_xlate = lantiq_rcu_reset_xlate; 189 + priv->rcdev.of_reset_n_cells = 2; 190 + 191 + return reset_controller_register(&priv->rcdev); 192 + } 193 + 194 + static const struct of_device_id lantiq_rcu_reset_dt_ids[] = { 195 + { .compatible = "lantiq,danube-reset", }, 196 + { .compatible = "lantiq,xrx200-reset", }, 197 + { }, 198 + }; 199 + MODULE_DEVICE_TABLE(of, lantiq_rcu_reset_dt_ids); 200 + 201 + static struct platform_driver lantiq_rcu_reset_driver = { 202 + .probe = lantiq_rcu_reset_probe, 203 + 
.driver = { 204 + .name = "lantiq-reset", 205 + .of_match_table = lantiq_rcu_reset_dt_ids, 206 + }, 207 + }; 208 + module_platform_driver(lantiq_rcu_reset_driver); 209 + 210 + MODULE_AUTHOR("Martin Blumenstingl <martin.blumenstingl@googlemail.com>"); 211 + MODULE_DESCRIPTION("Lantiq XWAY RCU Reset Controller Driver"); 212 + MODULE_LICENSE("GPL");
+1
drivers/soc/Makefile
··· 9 9 obj-$(CONFIG_MACH_DOVE) += dove/ 10 10 obj-y += fsl/ 11 11 obj-$(CONFIG_ARCH_MXC) += imx/ 12 + obj-$(CONFIG_SOC_XWAY) += lantiq/ 12 13 obj-$(CONFIG_ARCH_MEDIATEK) += mediatek/ 13 14 obj-$(CONFIG_ARCH_MESON) += amlogic/ 14 15 obj-$(CONFIG_ARCH_QCOM) += qcom/
+2
drivers/soc/lantiq/Makefile
··· 1 + obj-y += fpi-bus.o 2 + obj-$(CONFIG_XRX200_PHY_FW) += gphy.o
+87
drivers/soc/lantiq/fpi-bus.c
··· 1 + /* 2 + * This program is free software; you can redistribute it and/or modify it 3 + * under the terms of the GNU General Public License version 2 as published 4 + * by the Free Software Foundation. 5 + * 6 + * Copyright (C) 2011-2015 John Crispin <blogic@phrozen.org> 7 + * Copyright (C) 2015 Martin Blumenstingl <martin.blumenstingl@googlemail.com> 8 + * Copyright (C) 2017 Hauke Mehrtens <hauke@hauke-m.de> 9 + */ 10 + 11 + #include <linux/device.h> 12 + #include <linux/err.h> 13 + #include <linux/mfd/syscon.h> 14 + #include <linux/module.h> 15 + #include <linux/of.h> 16 + #include <linux/of_platform.h> 17 + #include <linux/platform_device.h> 18 + #include <linux/property.h> 19 + #include <linux/regmap.h> 20 + 21 + #include <lantiq_soc.h> 22 + 23 + #define XBAR_ALWAYS_LAST 0x430 24 + #define XBAR_FPI_BURST_EN BIT(1) 25 + #define XBAR_AHB_BURST_EN BIT(2) 26 + 27 + #define RCU_VR9_BE_AHB1S 0x00000008 28 + 29 + static int ltq_fpi_probe(struct platform_device *pdev) 30 + { 31 + struct device *dev = &pdev->dev; 32 + struct device_node *np = dev->of_node; 33 + struct resource *res_xbar; 34 + struct regmap *rcu_regmap; 35 + void __iomem *xbar_membase; 36 + u32 rcu_ahb_endianness_reg_offset; 37 + int ret; 38 + 39 + res_xbar = platform_get_resource(pdev, IORESOURCE_MEM, 0); 40 + xbar_membase = devm_ioremap_resource(dev, res_xbar); 41 + if (IS_ERR(xbar_membase)) 42 + return PTR_ERR(xbar_membase); 43 + 44 + /* RCU configuration is optional */ 45 + rcu_regmap = syscon_regmap_lookup_by_phandle(np, "lantiq,rcu"); 46 + if (IS_ERR(rcu_regmap)) 47 + return PTR_ERR(rcu_regmap); 48 + 49 + ret = device_property_read_u32(dev, "lantiq,offset-endianness", 50 + &rcu_ahb_endianness_reg_offset); 51 + if (ret) { 52 + dev_err(&pdev->dev, "Failed to get RCU reg offset\n"); 53 + return ret; 54 + } 55 + 56 + ret = regmap_update_bits(rcu_regmap, rcu_ahb_endianness_reg_offset, 57 + RCU_VR9_BE_AHB1S, RCU_VR9_BE_AHB1S); 58 + if (ret) { 59 + dev_warn(&pdev->dev, 60 + "Failed to configure RCU 
AHB endianness\n"); 61 + return ret; 62 + } 63 + 64 + /* disable fpi burst */ 65 + ltq_w32_mask(XBAR_FPI_BURST_EN, 0, xbar_membase + XBAR_ALWAYS_LAST); 66 + 67 + return of_platform_populate(dev->of_node, NULL, NULL, dev); 68 + } 69 + 70 + static const struct of_device_id ltq_fpi_match[] = { 71 + { .compatible = "lantiq,xrx200-fpi" }, 72 + {}, 73 + }; 74 + MODULE_DEVICE_TABLE(of, ltq_fpi_match); 75 + 76 + static struct platform_driver ltq_fpi_driver = { 77 + .probe = ltq_fpi_probe, 78 + .driver = { 79 + .name = "fpi-xway", 80 + .of_match_table = ltq_fpi_match, 81 + }, 82 + }; 83 + 84 + module_platform_driver(ltq_fpi_driver); 85 + 86 + MODULE_DESCRIPTION("Lantiq FPI bus driver"); 87 + MODULE_LICENSE("GPL");
+260
drivers/soc/lantiq/gphy.c
··· 1 + /* 2 + * This program is free software; you can redistribute it and/or modify it 3 + * under the terms of the GNU General Public License version 2 as published 4 + * by the Free Software Foundation. 5 + * 6 + * Copyright (C) 2012 John Crispin <blogic@phrozen.org> 7 + * Copyright (C) 2016 Martin Blumenstingl <martin.blumenstingl@googlemail.com> 8 + * Copyright (C) 2017 Hauke Mehrtens <hauke@hauke-m.de> 9 + */ 10 + 11 + #include <linux/clk.h> 12 + #include <linux/delay.h> 13 + #include <linux/dma-mapping.h> 14 + #include <linux/firmware.h> 15 + #include <linux/mfd/syscon.h> 16 + #include <linux/module.h> 17 + #include <linux/reboot.h> 18 + #include <linux/regmap.h> 19 + #include <linux/reset.h> 20 + #include <linux/of_device.h> 21 + #include <linux/of_platform.h> 22 + #include <linux/property.h> 23 + #include <dt-bindings/mips/lantiq_rcu_gphy.h> 24 + 25 + #include <lantiq_soc.h> 26 + 27 + #define XRX200_GPHY_FW_ALIGN (16 * 1024) 28 + 29 + struct xway_gphy_priv { 30 + struct clk *gphy_clk_gate; 31 + struct reset_control *gphy_reset; 32 + struct reset_control *gphy_reset2; 33 + struct notifier_block gphy_reboot_nb; 34 + void __iomem *membase; 35 + char *fw_name; 36 + }; 37 + 38 + struct xway_gphy_match_data { 39 + char *fe_firmware_name; 40 + char *ge_firmware_name; 41 + }; 42 + 43 + static const struct xway_gphy_match_data xrx200a1x_gphy_data = { 44 + .fe_firmware_name = "lantiq/xrx200_phy22f_a14.bin", 45 + .ge_firmware_name = "lantiq/xrx200_phy11g_a14.bin", 46 + }; 47 + 48 + static const struct xway_gphy_match_data xrx200a2x_gphy_data = { 49 + .fe_firmware_name = "lantiq/xrx200_phy22f_a22.bin", 50 + .ge_firmware_name = "lantiq/xrx200_phy11g_a22.bin", 51 + }; 52 + 53 + static const struct xway_gphy_match_data xrx300_gphy_data = { 54 + .fe_firmware_name = "lantiq/xrx300_phy22f_a21.bin", 55 + .ge_firmware_name = "lantiq/xrx300_phy11g_a21.bin", 56 + }; 57 + 58 + static const struct of_device_id xway_gphy_match[] = { 59 + { .compatible = "lantiq,xrx200a1x-gphy", 
.data = &xrx200a1x_gphy_data }, 60 + { .compatible = "lantiq,xrx200a2x-gphy", .data = &xrx200a2x_gphy_data }, 61 + { .compatible = "lantiq,xrx300-gphy", .data = &xrx300_gphy_data }, 62 + { .compatible = "lantiq,xrx330-gphy", .data = &xrx300_gphy_data }, 63 + {}, 64 + }; 65 + MODULE_DEVICE_TABLE(of, xway_gphy_match); 66 + 67 + static struct xway_gphy_priv *to_xway_gphy_priv(struct notifier_block *nb) 68 + { 69 + return container_of(nb, struct xway_gphy_priv, gphy_reboot_nb); 70 + } 71 + 72 + static int xway_gphy_reboot_notify(struct notifier_block *reboot_nb, 73 + unsigned long code, void *unused) 74 + { 75 + struct xway_gphy_priv *priv = to_xway_gphy_priv(reboot_nb); 76 + 77 + if (priv) { 78 + reset_control_assert(priv->gphy_reset); 79 + reset_control_assert(priv->gphy_reset2); 80 + } 81 + 82 + return NOTIFY_DONE; 83 + } 84 + 85 + static int xway_gphy_load(struct device *dev, struct xway_gphy_priv *priv, 86 + dma_addr_t *dev_addr) 87 + { 88 + const struct firmware *fw; 89 + void *fw_addr; 90 + dma_addr_t dma_addr; 91 + size_t size; 92 + int ret; 93 + 94 + ret = request_firmware(&fw, priv->fw_name, dev); 95 + if (ret) { 96 + dev_err(dev, "failed to load firmware: %s, error: %i\n", 97 + priv->fw_name, ret); 98 + return ret; 99 + } 100 + 101 + /* 102 + * GPHY cores need the firmware code in a persistent and contiguous 103 + * memory area with a 16 kB boundary aligned start address. 
104 + */ 105 + size = fw->size + XRX200_GPHY_FW_ALIGN; 106 + 107 + fw_addr = dmam_alloc_coherent(dev, size, &dma_addr, GFP_KERNEL); 108 + if (fw_addr) { 109 + fw_addr = PTR_ALIGN(fw_addr, XRX200_GPHY_FW_ALIGN); 110 + *dev_addr = ALIGN(dma_addr, XRX200_GPHY_FW_ALIGN); 111 + memcpy(fw_addr, fw->data, fw->size); 112 + } else { 113 + dev_err(dev, "failed to alloc firmware memory\n"); 114 + ret = -ENOMEM; 115 + } 116 + 117 + release_firmware(fw); 118 + 119 + return ret; 120 + } 121 + 122 + static int xway_gphy_of_probe(struct platform_device *pdev, 123 + struct xway_gphy_priv *priv) 124 + { 125 + struct device *dev = &pdev->dev; 126 + const struct xway_gphy_match_data *gphy_fw_name_cfg; 127 + u32 gphy_mode; 128 + int ret; 129 + struct resource *res_gphy; 130 + 131 + gphy_fw_name_cfg = of_device_get_match_data(dev); 132 + 133 + priv->gphy_clk_gate = devm_clk_get(dev, NULL); 134 + if (IS_ERR(priv->gphy_clk_gate)) { 135 + dev_err(dev, "Failed to lookup gate clock\n"); 136 + return PTR_ERR(priv->gphy_clk_gate); 137 + } 138 + 139 + res_gphy = platform_get_resource(pdev, IORESOURCE_MEM, 0); 140 + priv->membase = devm_ioremap_resource(dev, res_gphy); 141 + if (IS_ERR(priv->membase)) 142 + return PTR_ERR(priv->membase); 143 + 144 + priv->gphy_reset = devm_reset_control_get(dev, "gphy"); 145 + if (IS_ERR(priv->gphy_reset)) { 146 + if (PTR_ERR(priv->gphy_reset) != -EPROBE_DEFER) 147 + dev_err(dev, "Failed to lookup gphy reset\n"); 148 + return PTR_ERR(priv->gphy_reset); 149 + } 150 + 151 + priv->gphy_reset2 = devm_reset_control_get_optional(dev, "gphy2"); 152 + if (IS_ERR(priv->gphy_reset2)) 153 + return PTR_ERR(priv->gphy_reset2); 154 + 155 + ret = device_property_read_u32(dev, "lantiq,gphy-mode", &gphy_mode); 156 + /* Default to GE mode */ 157 + if (ret) 158 + gphy_mode = GPHY_MODE_GE; 159 + 160 + switch (gphy_mode) { 161 + case GPHY_MODE_FE: 162 + priv->fw_name = gphy_fw_name_cfg->fe_firmware_name; 163 + break; 164 + case GPHY_MODE_GE: 165 + priv->fw_name = 
gphy_fw_name_cfg->ge_firmware_name; 166 + break; 167 + default: 168 + dev_err(dev, "Unknown GPHY mode %d\n", gphy_mode); 169 + return -EINVAL; 170 + } 171 + 172 + return 0; 173 + } 174 + 175 + static int xway_gphy_probe(struct platform_device *pdev) 176 + { 177 + struct device *dev = &pdev->dev; 178 + struct xway_gphy_priv *priv; 179 + dma_addr_t fw_addr = 0; 180 + int ret; 181 + 182 + priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL); 183 + if (!priv) 184 + return -ENOMEM; 185 + 186 + ret = xway_gphy_of_probe(pdev, priv); 187 + if (ret) 188 + return ret; 189 + 190 + ret = clk_prepare_enable(priv->gphy_clk_gate); 191 + if (ret) 192 + return ret; 193 + 194 + ret = xway_gphy_load(dev, priv, &fw_addr); 195 + if (ret) { 196 + clk_disable_unprepare(priv->gphy_clk_gate); 197 + return ret; 198 + } 199 + 200 + reset_control_assert(priv->gphy_reset); 201 + reset_control_assert(priv->gphy_reset2); 202 + 203 + iowrite32be(fw_addr, priv->membase); 204 + 205 + reset_control_deassert(priv->gphy_reset); 206 + reset_control_deassert(priv->gphy_reset2); 207 + 208 + /* assert the gphy reset because it can hang after a reboot: */ 209 + priv->gphy_reboot_nb.notifier_call = xway_gphy_reboot_notify; 210 + priv->gphy_reboot_nb.priority = -1; 211 + 212 + ret = register_reboot_notifier(&priv->gphy_reboot_nb); 213 + if (ret) 214 + dev_warn(dev, "Failed to register reboot notifier\n"); 215 + 216 + platform_set_drvdata(pdev, priv); 217 + 218 + return ret; 219 + } 220 + 221 + static int xway_gphy_remove(struct platform_device *pdev) 222 + { 223 + struct device *dev = &pdev->dev; 224 + struct xway_gphy_priv *priv = platform_get_drvdata(pdev); 225 + int ret; 226 + 227 + reset_control_assert(priv->gphy_reset); 228 + reset_control_assert(priv->gphy_reset2); 229 + 230 + iowrite32be(0, priv->membase); 231 + 232 + clk_disable_unprepare(priv->gphy_clk_gate); 233 + 234 + ret = unregister_reboot_notifier(&priv->gphy_reboot_nb); 235 + if (ret) 236 + dev_warn(dev, "Failed to unregister reboot 
notifier\n"); 237 + 238 + return 0; 239 + } 240 + 241 + static struct platform_driver xway_gphy_driver = { 242 + .probe = xway_gphy_probe, 243 + .remove = xway_gphy_remove, 244 + .driver = { 245 + .name = "xway-rcu-gphy", 246 + .of_match_table = xway_gphy_match, 247 + }, 248 + }; 249 + 250 + module_platform_driver(xway_gphy_driver); 251 + 252 + MODULE_FIRMWARE("lantiq/xrx300_phy11g_a21.bin"); 253 + MODULE_FIRMWARE("lantiq/xrx300_phy22f_a21.bin"); 254 + MODULE_FIRMWARE("lantiq/xrx200_phy11g_a14.bin"); 255 + MODULE_FIRMWARE("lantiq/xrx200_phy11g_a22.bin"); 256 + MODULE_FIRMWARE("lantiq/xrx200_phy22f_a14.bin"); 257 + MODULE_FIRMWARE("lantiq/xrx200_phy22f_a22.bin"); 258 + MODULE_AUTHOR("Martin Blumenstingl <martin.blumenstingl@googlemail.com>"); 259 + MODULE_DESCRIPTION("Lantiq XWAY GPHY Firmware Loader"); 260 + MODULE_LICENSE("GPL");
+69 -5
drivers/watchdog/lantiq_wdt.c
··· 4 4 * by the Free Software Foundation. 5 5 * 6 6 * Copyright (C) 2010 John Crispin <john@phrozen.org> 7 + * Copyright (C) 2017 Hauke Mehrtens <hauke@hauke-m.de> 7 8 * Based on EP93xx wdt driver 8 9 */ 9 10 ··· 18 17 #include <linux/uaccess.h> 19 18 #include <linux/clk.h> 20 19 #include <linux/io.h> 20 + #include <linux/regmap.h> 21 + #include <linux/mfd/syscon.h> 21 22 22 23 #include <lantiq_soc.h> 24 + 25 + #define LTQ_XRX_RCU_RST_STAT 0x0014 26 + #define LTQ_XRX_RCU_RST_STAT_WDT BIT(31) 27 + 28 + /* CPU0 Reset Source Register */ 29 + #define LTQ_FALCON_SYS1_CPU0RS 0x0060 30 + /* reset cause mask */ 31 + #define LTQ_FALCON_SYS1_CPU0RS_MASK 0x0007 32 + #define LTQ_FALCON_SYS1_CPU0RS_WDT 0x02 23 33 24 34 /* 25 35 * Section 3.4 of the datasheet ··· 198 186 .fops = &ltq_wdt_fops, 199 187 }; 200 188 189 + typedef int (*ltq_wdt_bootstatus_set)(struct platform_device *pdev); 190 + 191 + static int ltq_wdt_bootstatus_xrx(struct platform_device *pdev) 192 + { 193 + struct device *dev = &pdev->dev; 194 + struct regmap *rcu_regmap; 195 + u32 val; 196 + int err; 197 + 198 + rcu_regmap = syscon_regmap_lookup_by_phandle(dev->of_node, "regmap"); 199 + if (IS_ERR(rcu_regmap)) 200 + return PTR_ERR(rcu_regmap); 201 + 202 + err = regmap_read(rcu_regmap, LTQ_XRX_RCU_RST_STAT, &val); 203 + if (err) 204 + return err; 205 + 206 + if (val & LTQ_XRX_RCU_RST_STAT_WDT) 207 + ltq_wdt_bootstatus = WDIOF_CARDRESET; 208 + 209 + return 0; 210 + } 211 + 212 + static int ltq_wdt_bootstatus_falcon(struct platform_device *pdev) 213 + { 214 + struct device *dev = &pdev->dev; 215 + struct regmap *rcu_regmap; 216 + u32 val; 217 + int err; 218 + 219 + rcu_regmap = syscon_regmap_lookup_by_phandle(dev->of_node, 220 + "lantiq,rcu"); 221 + if (IS_ERR(rcu_regmap)) 222 + return PTR_ERR(rcu_regmap); 223 + 224 + err = regmap_read(rcu_regmap, LTQ_FALCON_SYS1_CPU0RS, &val); 225 + if (err) 226 + return err; 227 + 228 + if ((val & LTQ_FALCON_SYS1_CPU0RS_MASK) == LTQ_FALCON_SYS1_CPU0RS_WDT) 229 + 
ltq_wdt_bootstatus = WDIOF_CARDRESET; 230 + 231 + return 0; 232 + } 233 + 201 234 static int 202 235 ltq_wdt_probe(struct platform_device *pdev) 203 236 { 204 237 struct resource *res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 205 238 struct clk *clk; 239 + ltq_wdt_bootstatus_set ltq_wdt_bootstatus_set; 240 + int ret; 206 241 207 242 ltq_wdt_membase = devm_ioremap_resource(&pdev->dev, res); 208 243 if (IS_ERR(ltq_wdt_membase)) 209 244 return PTR_ERR(ltq_wdt_membase); 245 + 246 + ltq_wdt_bootstatus_set = of_device_get_match_data(&pdev->dev); 247 + if (ltq_wdt_bootstatus_set) { 248 + ret = ltq_wdt_bootstatus_set(pdev); 249 + if (ret) 250 + return ret; 251 + } 210 252 211 253 /* we do not need to enable the clock as it is always running */ 212 254 clk = clk_get_io(); ··· 270 204 } 271 205 ltq_io_region_clk_rate = clk_get_rate(clk); 272 206 clk_put(clk); 273 - 274 - /* find out if the watchdog caused the last reboot */ 275 - if (ltq_reset_cause() == LTQ_RST_CAUSE_WDTRST) 276 - ltq_wdt_bootstatus = WDIOF_CARDRESET; 277 207 278 208 dev_info(&pdev->dev, "Init done\n"); 279 209 return misc_register(&ltq_wdt_miscdev); ··· 284 222 } 285 223 286 224 static const struct of_device_id ltq_wdt_match[] = { 287 - { .compatible = "lantiq,wdt" }, 225 + { .compatible = "lantiq,wdt", .data = NULL}, 226 + { .compatible = "lantiq,xrx100-wdt", .data = ltq_wdt_bootstatus_xrx }, 227 + { .compatible = "lantiq,falcon-wdt", .data = ltq_wdt_bootstatus_falcon }, 288 228 {}, 289 229 }; 290 230 MODULE_DEVICE_TABLE(of, ltq_wdt_match);
+175 -179
drivers/watchdog/octeon-wdt-main.c
··· 1 1 /* 2 2 * Octeon Watchdog driver 3 3 * 4 - * Copyright (C) 2007, 2008, 2009, 2010 Cavium Networks 4 + * Copyright (C) 2007-2017 Cavium, Inc. 5 5 * 6 6 * Converted to use WATCHDOG_CORE by Aaro Koskinen <aaro.koskinen@iki.fi>. 7 7 * ··· 59 59 #include <linux/interrupt.h> 60 60 #include <linux/watchdog.h> 61 61 #include <linux/cpumask.h> 62 - #include <linux/bitops.h> 63 - #include <linux/kernel.h> 64 62 #include <linux/module.h> 65 - #include <linux/string.h> 66 63 #include <linux/delay.h> 67 64 #include <linux/cpu.h> 68 - #include <linux/smp.h> 69 - #include <linux/fs.h> 70 65 #include <linux/irq.h> 71 66 72 67 #include <asm/mipsregs.h> 73 68 #include <asm/uasm.h> 74 69 75 70 #include <asm/octeon/octeon.h> 71 + #include <asm/octeon/cvmx-boot-vector.h> 72 + #include <asm/octeon/cvmx-ciu2-defs.h> 73 + #include <asm/octeon/cvmx-rst-defs.h> 74 + 75 + /* Watchdog interrupt major block number (8 MSBs of intsn) */ 76 + #define WD_BLOCK_NUMBER 0x01 77 + 78 + static int divisor; 76 79 77 80 /* The count needed to achieve timeout_sec. */ 78 81 static unsigned int timeout_cnt; ··· 87 84 static unsigned int timeout_sec; 88 85 89 86 /* Set to non-zero when userspace countdown mode active */ 90 - static int do_coundown; 87 + static bool do_countdown; 91 88 static unsigned int countdown_reset; 92 89 static unsigned int per_cpu_countdown[NR_CPUS]; 93 90 ··· 95 92 96 93 #define WD_TIMO 60 /* Default heartbeat = 60 seconds */ 97 94 95 + #define CVMX_GSERX_SCRATCH(offset) (CVMX_ADD_IO_SEG(0x0001180090000020ull) + ((offset) & 15) * 0x1000000ull) 96 + 98 97 static int heartbeat = WD_TIMO; 99 - module_param(heartbeat, int, S_IRUGO); 98 + module_param(heartbeat, int, 0444); 100 99 MODULE_PARM_DESC(heartbeat, 101 100 "Watchdog heartbeat in seconds. 
(0 < heartbeat, default=" 102 101 __MODULE_STRING(WD_TIMO) ")"); 103 102 104 103 static bool nowayout = WATCHDOG_NOWAYOUT; 105 - module_param(nowayout, bool, S_IRUGO); 104 + module_param(nowayout, bool, 0444); 106 105 MODULE_PARM_DESC(nowayout, 107 106 "Watchdog cannot be stopped once started (default=" 108 107 __MODULE_STRING(WATCHDOG_NOWAYOUT) ")"); 109 108 110 - static u32 nmi_stage1_insns[64] __initdata; 111 - /* We need one branch and therefore one relocation per target label. */ 112 - static struct uasm_label labels[5] __initdata; 113 - static struct uasm_reloc relocs[5] __initdata; 109 + static int disable; 110 + module_param(disable, int, 0444); 111 + MODULE_PARM_DESC(disable, 112 + "Disable the watchdog entirely (default=0)"); 114 113 115 - enum lable_id { 116 - label_enter_bootloader = 1 117 - }; 118 - 119 - /* Some CP0 registers */ 120 - #define K0 26 121 - #define C0_CVMMEMCTL 11, 7 122 - #define C0_STATUS 12, 0 123 - #define C0_EBASE 15, 1 124 - #define C0_DESAVE 31, 0 114 + static struct cvmx_boot_vector_element *octeon_wdt_bootvector; 125 115 126 116 void octeon_wdt_nmi_stage2(void); 127 - 128 - static void __init octeon_wdt_build_stage1(void) 129 - { 130 - int i; 131 - int len; 132 - u32 *p = nmi_stage1_insns; 133 - #ifdef CONFIG_HOTPLUG_CPU 134 - struct uasm_label *l = labels; 135 - struct uasm_reloc *r = relocs; 136 - #endif 137 - 138 - /* 139 - * For the next few instructions running the debugger may 140 - * cause corruption of k0 in the saved registers. Since we're 141 - * about to crash, nobody probably cares. 
142 - * 143 - * Save K0 into the debug scratch register 144 - */ 145 - uasm_i_dmtc0(&p, K0, C0_DESAVE); 146 - 147 - uasm_i_mfc0(&p, K0, C0_STATUS); 148 - #ifdef CONFIG_HOTPLUG_CPU 149 - if (octeon_bootloader_entry_addr) 150 - uasm_il_bbit0(&p, &r, K0, ilog2(ST0_NMI), 151 - label_enter_bootloader); 152 - #endif 153 - /* Force 64-bit addressing enabled */ 154 - uasm_i_ori(&p, K0, K0, ST0_UX | ST0_SX | ST0_KX); 155 - uasm_i_mtc0(&p, K0, C0_STATUS); 156 - 157 - #ifdef CONFIG_HOTPLUG_CPU 158 - if (octeon_bootloader_entry_addr) { 159 - uasm_i_mfc0(&p, K0, C0_EBASE); 160 - /* Coreid number in K0 */ 161 - uasm_i_andi(&p, K0, K0, 0xf); 162 - /* 8 * coreid in bits 16-31 */ 163 - uasm_i_dsll_safe(&p, K0, K0, 3 + 16); 164 - uasm_i_ori(&p, K0, K0, 0x8001); 165 - uasm_i_dsll_safe(&p, K0, K0, 16); 166 - uasm_i_ori(&p, K0, K0, 0x0700); 167 - uasm_i_drotr_safe(&p, K0, K0, 32); 168 - /* 169 - * Should result in: 0x8001,0700,0000,8*coreid which is 170 - * CVMX_CIU_WDOGX(coreid) - 0x0500 171 - * 172 - * Now ld K0, CVMX_CIU_WDOGX(coreid) 173 - */ 174 - uasm_i_ld(&p, K0, 0x500, K0); 175 - /* 176 - * If bit one set handle the NMI as a watchdog event. 177 - * otherwise transfer control to bootloader. 178 - */ 179 - uasm_il_bbit0(&p, &r, K0, 1, label_enter_bootloader); 180 - uasm_i_nop(&p); 181 - } 182 - #endif 183 - 184 - /* Clear Dcache so cvmseg works right. 
*/ 185 - uasm_i_cache(&p, 1, 0, 0); 186 - 187 - /* Use K0 to do a read/modify/write of CVMMEMCTL */ 188 - uasm_i_dmfc0(&p, K0, C0_CVMMEMCTL); 189 - /* Clear out the size of CVMSEG */ 190 - uasm_i_dins(&p, K0, 0, 0, 6); 191 - /* Set CVMSEG to its largest value */ 192 - uasm_i_ori(&p, K0, K0, 0x1c0 | 54); 193 - /* Store the CVMMEMCTL value */ 194 - uasm_i_dmtc0(&p, K0, C0_CVMMEMCTL); 195 - 196 - /* Load the address of the second stage handler */ 197 - UASM_i_LA(&p, K0, (long)octeon_wdt_nmi_stage2); 198 - uasm_i_jr(&p, K0); 199 - uasm_i_dmfc0(&p, K0, C0_DESAVE); 200 - 201 - #ifdef CONFIG_HOTPLUG_CPU 202 - if (octeon_bootloader_entry_addr) { 203 - uasm_build_label(&l, p, label_enter_bootloader); 204 - /* Jump to the bootloader and restore K0 */ 205 - UASM_i_LA(&p, K0, (long)octeon_bootloader_entry_addr); 206 - uasm_i_jr(&p, K0); 207 - uasm_i_dmfc0(&p, K0, C0_DESAVE); 208 - } 209 - #endif 210 - uasm_resolve_relocs(relocs, labels); 211 - 212 - len = (int)(p - nmi_stage1_insns); 213 - pr_debug("Synthesized NMI stage 1 handler (%d instructions)\n", len); 214 - 215 - pr_debug("\t.set push\n"); 216 - pr_debug("\t.set noreorder\n"); 217 - for (i = 0; i < len; i++) 218 - pr_debug("\t.word 0x%08x\n", nmi_stage1_insns[i]); 219 - pr_debug("\t.set pop\n"); 220 - 221 - if (len > 32) 222 - panic("NMI stage 1 handler exceeds 32 instructions, was %d\n", 223 - len); 224 - } 225 117 226 118 static int cpu2core(int cpu) 227 119 { 228 120 #ifdef CONFIG_SMP 229 - return cpu_logical_map(cpu); 121 + return cpu_logical_map(cpu) & 0x3f; 230 122 #else 231 123 return cvmx_get_core_num(); 232 - #endif 233 - } 234 - 235 - static int core2cpu(int coreid) 236 - { 237 - #ifdef CONFIG_SMP 238 - return cpu_number_map(coreid); 239 - #else 240 - return 0; 241 124 #endif 242 125 } 243 126 ··· 137 248 */ 138 249 static irqreturn_t octeon_wdt_poke_irq(int cpl, void *dev_id) 139 250 { 140 - unsigned int core = cvmx_get_core_num(); 141 - int cpu = core2cpu(core); 251 + int cpu = raw_smp_processor_id(); 252 + 
unsigned int core = cpu2core(cpu); 253 + int node = cpu_to_node(cpu); 142 254 143 - if (do_coundown) { 255 + if (do_countdown) { 144 256 if (per_cpu_countdown[cpu] > 0) { 145 257 /* We're alive, poke the watchdog */ 146 - cvmx_write_csr(CVMX_CIU_PP_POKEX(core), 1); 258 + cvmx_write_csr_node(node, CVMX_CIU_PP_POKEX(core), 1); 147 259 per_cpu_countdown[cpu]--; 148 260 } else { 149 261 /* Bad news, you are about to reboot. */ ··· 153 263 } 154 264 } else { 155 265 /* Not open, just ping away... */ 156 - cvmx_write_csr(CVMX_CIU_PP_POKEX(core), 1); 266 + cvmx_write_csr_node(node, CVMX_CIU_PP_POKEX(core), 1); 157 267 } 158 268 return IRQ_HANDLED; 159 269 } ··· 228 338 u64 cp0_epc = read_c0_epc(); 229 339 230 340 /* Delay so output from all cores output is not jumbled together. */ 231 - __delay(100000000ull * coreid); 341 + udelay(85000 * coreid); 232 342 233 343 octeon_wdt_write_string("\r\n*** NMI Watchdog interrupt on Core 0x"); 234 - octeon_wdt_write_hex(coreid, 1); 344 + octeon_wdt_write_hex(coreid, 2); 235 345 octeon_wdt_write_string(" ***\r\n"); 236 346 for (i = 0; i < 32; i++) { 237 347 octeon_wdt_write_string("\t"); ··· 254 364 octeon_wdt_write_hex(cp0_cause, 16); 255 365 octeon_wdt_write_string("\r\n"); 256 366 257 - octeon_wdt_write_string("\tsum0\t0x"); 258 - octeon_wdt_write_hex(cvmx_read_csr(CVMX_CIU_INTX_SUM0(coreid * 2)), 16); 259 - octeon_wdt_write_string("\ten0\t0x"); 260 - octeon_wdt_write_hex(cvmx_read_csr(CVMX_CIU_INTX_EN0(coreid * 2)), 16); 261 - octeon_wdt_write_string("\r\n"); 367 + /* The CIU register is different for each Octeon model. 
*/ 368 + if (OCTEON_IS_MODEL(OCTEON_CN68XX)) { 369 + octeon_wdt_write_string("\tsrc_wd\t0x"); 370 + octeon_wdt_write_hex(cvmx_read_csr(CVMX_CIU2_SRC_PPX_IP2_WDOG(coreid)), 16); 371 + octeon_wdt_write_string("\ten_wd\t0x"); 372 + octeon_wdt_write_hex(cvmx_read_csr(CVMX_CIU2_EN_PPX_IP2_WDOG(coreid)), 16); 373 + octeon_wdt_write_string("\r\n"); 374 + octeon_wdt_write_string("\tsrc_rml\t0x"); 375 + octeon_wdt_write_hex(cvmx_read_csr(CVMX_CIU2_SRC_PPX_IP2_RML(coreid)), 16); 376 + octeon_wdt_write_string("\ten_rml\t0x"); 377 + octeon_wdt_write_hex(cvmx_read_csr(CVMX_CIU2_EN_PPX_IP2_RML(coreid)), 16); 378 + octeon_wdt_write_string("\r\n"); 379 + octeon_wdt_write_string("\tsum\t0x"); 380 + octeon_wdt_write_hex(cvmx_read_csr(CVMX_CIU2_SUM_PPX_IP2(coreid)), 16); 381 + octeon_wdt_write_string("\r\n"); 382 + } else if (!octeon_has_feature(OCTEON_FEATURE_CIU3)) { 383 + octeon_wdt_write_string("\tsum0\t0x"); 384 + octeon_wdt_write_hex(cvmx_read_csr(CVMX_CIU_INTX_SUM0(coreid * 2)), 16); 385 + octeon_wdt_write_string("\ten0\t0x"); 386 + octeon_wdt_write_hex(cvmx_read_csr(CVMX_CIU_INTX_EN0(coreid * 2)), 16); 387 + octeon_wdt_write_string("\r\n"); 388 + } 262 389 263 390 octeon_wdt_write_string("*** Chip soft reset soon ***\r\n"); 391 + 392 + /* 393 + * G-30204: We must trigger a soft reset before watchdog 394 + * does an incomplete job of doing it. 395 + */ 396 + if (OCTEON_IS_OCTEON3() && !OCTEON_IS_MODEL(OCTEON_CN70XX)) { 397 + u64 scr; 398 + unsigned int node = cvmx_get_node_num(); 399 + unsigned int lcore = cvmx_get_local_core_num(); 400 + union cvmx_ciu_wdogx ciu_wdog; 401 + 402 + /* 403 + * Wait for other cores to print out information, but 404 + * not too long. Do the soft reset before watchdog 405 + * can trigger it. 
406 + */ 407 + do { 408 + ciu_wdog.u64 = cvmx_read_csr_node(node, CVMX_CIU_WDOGX(lcore)); 409 + } while (ciu_wdog.s.cnt > 0x10000); 410 + 411 + scr = cvmx_read_csr_node(0, CVMX_GSERX_SCRATCH(0)); 412 + scr |= 1 << 11; /* Indicate watchdog in bit 11 */ 413 + cvmx_write_csr_node(0, CVMX_GSERX_SCRATCH(0), scr); 414 + cvmx_write_csr_node(0, CVMX_RST_SOFT_RST, 1); 415 + } 416 + } 417 + 418 + static int octeon_wdt_cpu_to_irq(int cpu) 419 + { 420 + unsigned int coreid; 421 + int node; 422 + int irq; 423 + 424 + coreid = cpu2core(cpu); 425 + node = cpu_to_node(cpu); 426 + 427 + if (octeon_has_feature(OCTEON_FEATURE_CIU3)) { 428 + struct irq_domain *domain; 429 + int hwirq; 430 + 431 + domain = octeon_irq_get_block_domain(node, 432 + WD_BLOCK_NUMBER); 433 + hwirq = WD_BLOCK_NUMBER << 12 | 0x200 | coreid; 434 + irq = irq_find_mapping(domain, hwirq); 435 + } else { 436 + irq = OCTEON_IRQ_WDOG0 + coreid; 437 + } 438 + return irq; 264 439 } 265 440 266 441 static int octeon_wdt_cpu_pre_down(unsigned int cpu) 267 442 { 268 443 unsigned int core; 269 - unsigned int irq; 444 + int node; 270 445 union cvmx_ciu_wdogx ciu_wdog; 271 446 272 447 core = cpu2core(cpu); 273 448 274 - irq = OCTEON_IRQ_WDOG0 + core; 449 + node = cpu_to_node(cpu); 275 450 276 451 /* Poke the watchdog to clear out its state */ 277 - cvmx_write_csr(CVMX_CIU_PP_POKEX(core), 1); 452 + cvmx_write_csr_node(node, CVMX_CIU_PP_POKEX(core), 1); 278 453 279 454 /* Disable the hardware. 
*/ 280 455 ciu_wdog.u64 = 0; 281 - cvmx_write_csr(CVMX_CIU_WDOGX(core), ciu_wdog.u64); 456 + cvmx_write_csr_node(node, CVMX_CIU_WDOGX(core), ciu_wdog.u64); 282 457 283 - free_irq(irq, octeon_wdt_poke_irq); 458 + free_irq(octeon_wdt_cpu_to_irq(cpu), octeon_wdt_poke_irq); 284 459 return 0; 285 460 } 286 461 ··· 354 399 unsigned int core; 355 400 unsigned int irq; 356 401 union cvmx_ciu_wdogx ciu_wdog; 402 + int node; 403 + struct irq_domain *domain; 404 + int hwirq; 357 405 358 406 core = cpu2core(cpu); 407 + node = cpu_to_node(cpu); 408 + 409 + octeon_wdt_bootvector[core].target_ptr = (u64)octeon_wdt_nmi_stage2; 359 410 360 411 /* Disable it before doing anything with the interrupts. */ 361 412 ciu_wdog.u64 = 0; 362 - cvmx_write_csr(CVMX_CIU_WDOGX(core), ciu_wdog.u64); 413 + cvmx_write_csr_node(node, CVMX_CIU_WDOGX(core), ciu_wdog.u64); 363 414 364 415 per_cpu_countdown[cpu] = countdown_reset; 365 416 366 - irq = OCTEON_IRQ_WDOG0 + core; 417 + if (octeon_has_feature(OCTEON_FEATURE_CIU3)) { 418 + /* Must get the domain for the watchdog block */ 419 + domain = octeon_irq_get_block_domain(node, WD_BLOCK_NUMBER); 420 + 421 + /* Get a irq for the wd intsn (hardware interrupt) */ 422 + hwirq = WD_BLOCK_NUMBER << 12 | 0x200 | core; 423 + irq = irq_create_mapping(domain, hwirq); 424 + irqd_set_trigger_type(irq_get_irq_data(irq), 425 + IRQ_TYPE_EDGE_RISING); 426 + } else 427 + irq = OCTEON_IRQ_WDOG0 + core; 367 428 368 429 if (request_irq(irq, octeon_wdt_poke_irq, 369 430 IRQF_NO_THREAD, "octeon_wdt", octeon_wdt_poke_irq)) 370 431 panic("octeon_wdt: Couldn't obtain irq %d", irq); 371 432 433 + /* Must set the irq affinity here */ 434 + if (octeon_has_feature(OCTEON_FEATURE_CIU3)) { 435 + cpumask_t mask; 436 + 437 + cpumask_clear(&mask); 438 + cpumask_set_cpu(cpu, &mask); 439 + irq_set_affinity(irq, &mask); 440 + } 441 + 372 442 cpumask_set_cpu(cpu, &irq_enabled_cpus); 373 443 374 444 /* Poke the watchdog to clear out its state */ 375 - cvmx_write_csr(CVMX_CIU_PP_POKEX(core), 
1); 445 + cvmx_write_csr_node(node, CVMX_CIU_PP_POKEX(core), 1); 376 446 377 447 /* Finally enable the watchdog now that all handlers are installed */ 378 448 ciu_wdog.u64 = 0; 379 449 ciu_wdog.s.len = timeout_cnt; 380 450 ciu_wdog.s.mode = 3; /* 3 = Interrupt + NMI + Soft-Reset */ 381 - cvmx_write_csr(CVMX_CIU_WDOGX(core), ciu_wdog.u64); 451 + cvmx_write_csr_node(node, CVMX_CIU_WDOGX(core), ciu_wdog.u64); 382 452 383 453 return 0; 384 454 } ··· 412 432 { 413 433 int cpu; 414 434 int coreid; 435 + int node; 436 + 437 + if (disable) 438 + return 0; 415 439 416 440 for_each_online_cpu(cpu) { 417 441 coreid = cpu2core(cpu); 418 - cvmx_write_csr(CVMX_CIU_PP_POKEX(coreid), 1); 442 + node = cpu_to_node(cpu); 443 + cvmx_write_csr_node(node, CVMX_CIU_PP_POKEX(coreid), 1); 419 444 per_cpu_countdown[cpu] = countdown_reset; 420 - if ((countdown_reset || !do_coundown) && 445 + if ((countdown_reset || !do_countdown) && 421 446 !cpumask_test_cpu(cpu, &irq_enabled_cpus)) { 422 447 /* We have to enable the irq */ 423 - int irq = OCTEON_IRQ_WDOG0 + coreid; 424 - 425 - enable_irq(irq); 448 + enable_irq(octeon_wdt_cpu_to_irq(cpu)); 426 449 cpumask_set_cpu(cpu, &irq_enabled_cpus); 427 450 } 428 451 } ··· 455 472 456 473 countdown_reset = periods > 2 ? 
periods - 2 : 0; 457 474 heartbeat = t; 458 - timeout_cnt = ((octeon_get_io_clock_rate() >> 8) * timeout_sec) >> 8; 475 + timeout_cnt = ((octeon_get_io_clock_rate() / divisor) * timeout_sec) >> 8; 459 476 } 460 477 461 478 static int octeon_wdt_set_timeout(struct watchdog_device *wdog, ··· 464 481 int cpu; 465 482 int coreid; 466 483 union cvmx_ciu_wdogx ciu_wdog; 484 + int node; 467 485 468 486 if (t <= 0) 469 487 return -1; 470 488 471 489 octeon_wdt_calc_parameters(t); 472 490 491 + if (disable) 492 + return 0; 493 + 473 494 for_each_online_cpu(cpu) { 474 495 coreid = cpu2core(cpu); 475 - cvmx_write_csr(CVMX_CIU_PP_POKEX(coreid), 1); 496 + node = cpu_to_node(cpu); 497 + cvmx_write_csr_node(node, CVMX_CIU_PP_POKEX(coreid), 1); 476 498 ciu_wdog.u64 = 0; 477 499 ciu_wdog.s.len = timeout_cnt; 478 500 ciu_wdog.s.mode = 3; /* 3 = Interrupt + NMI + Soft-Reset */ 479 - cvmx_write_csr(CVMX_CIU_WDOGX(coreid), ciu_wdog.u64); 480 - cvmx_write_csr(CVMX_CIU_PP_POKEX(coreid), 1); 501 + cvmx_write_csr_node(node, CVMX_CIU_WDOGX(coreid), ciu_wdog.u64); 502 + cvmx_write_csr_node(node, CVMX_CIU_PP_POKEX(coreid), 1); 481 503 } 482 504 octeon_wdt_ping(wdog); /* Get the irqs back on. 
*/ 483 505 return 0; ··· 491 503 static int octeon_wdt_start(struct watchdog_device *wdog) 492 504 { 493 505 octeon_wdt_ping(wdog); 494 - do_coundown = 1; 506 + do_countdown = 1; 495 507 return 0; 496 508 } 497 509 498 510 static int octeon_wdt_stop(struct watchdog_device *wdog) 499 511 { 500 - do_coundown = 0; 512 + do_countdown = 0; 501 513 octeon_wdt_ping(wdog); 502 514 return 0; 503 515 } ··· 528 540 */ 529 541 static int __init octeon_wdt_init(void) 530 542 { 531 - int i; 532 543 int ret; 533 - u64 *ptr; 544 + 545 + octeon_wdt_bootvector = cvmx_boot_vector_get(); 546 + if (!octeon_wdt_bootvector) { 547 + pr_err("Error: Cannot allocate boot vector.\n"); 548 + return -ENOMEM; 549 + } 550 + 551 + if (OCTEON_IS_MODEL(OCTEON_CN68XX)) 552 + divisor = 0x200; 553 + else if (OCTEON_IS_MODEL(OCTEON_CN78XX)) 554 + divisor = 0x400; 555 + else 556 + divisor = 0x100; 534 557 535 558 /* 536 559 * Watchdog time expiration length = The 16 bits of LEN 537 560 * represent the most significant bits of a 24 bit decrementer 538 - * that decrements every 256 cycles. 561 + * that decrements every divisor cycle. 539 562 * 540 563 * Try for a timeout of 5 sec, if that fails a smaller number 541 564 * of even seconds, ··· 554 555 max_timeout_sec = 6; 555 556 do { 556 557 max_timeout_sec--; 557 - timeout_cnt = ((octeon_get_io_clock_rate() >> 8) * 558 - max_timeout_sec) >> 8; 558 + timeout_cnt = ((octeon_get_io_clock_rate() / divisor) * max_timeout_sec) >> 8; 559 559 } while (timeout_cnt > 65535); 560 560 561 561 BUG_ON(timeout_cnt == 0); ··· 574 576 return ret; 575 577 } 576 578 577 - /* Build the NMI handler ... */ 578 - octeon_wdt_build_stage1(); 579 - 580 - /* ... and install it. 
*/ 581 - ptr = (u64 *) nmi_stage1_insns; 582 - for (i = 0; i < 16; i++) { 583 - cvmx_write_csr(CVMX_MIO_BOOT_LOC_ADR, i * 8); 584 - cvmx_write_csr(CVMX_MIO_BOOT_LOC_DAT, ptr[i]); 579 + if (disable) { 580 + pr_notice("disabled\n"); 581 + return 0; 585 582 } 586 - cvmx_write_csr(CVMX_MIO_BOOT_LOC_CFGX(0), 0x81fc0000); 587 583 588 584 cpumask_clear(&irq_enabled_cpus); 589 585 ··· 599 607 static void __exit octeon_wdt_cleanup(void) 600 608 { 601 609 watchdog_unregister_device(&octeon_wdt); 610 + 611 + if (disable) 612 + return; 613 + 602 614 cpuhp_remove_state(octeon_wdt_online); 603 615 604 616 /* ··· 613 617 } 614 618 615 619 MODULE_LICENSE("GPL"); 616 - MODULE_AUTHOR("Cavium Networks <support@caviumnetworks.com>"); 617 - MODULE_DESCRIPTION("Cavium Networks Octeon Watchdog driver."); 620 + MODULE_AUTHOR("Cavium Inc. <support@cavium.com>"); 621 + MODULE_DESCRIPTION("Cavium Inc. OCTEON Watchdog driver."); 618 622 module_init(octeon_wdt_init); 619 623 module_exit(octeon_wdt_cleanup);
+34 -8
drivers/watchdog/octeon-wdt-nmi.S
··· 3 3 * License. See the file "COPYING" in the main directory of this archive 4 4 * for more details. 5 5 * 6 - * Copyright (C) 2007 Cavium Networks 6 + * Copyright (C) 2007-2017 Cavium, Inc. 7 7 */ 8 8 #include <asm/asm.h> 9 9 #include <asm/regdef.h> 10 10 11 - #define SAVE_REG(r) sd $r, -32768+6912-(32-r)*8($0) 11 + #define CVMSEG_BASE -32768 12 + #define CVMSEG_SIZE 6912 13 + #define SAVE_REG(r) sd $r, CVMSEG_BASE + CVMSEG_SIZE - ((32 - r) * 8)($0) 12 14 13 15 NESTED(octeon_wdt_nmi_stage2, 0, sp) 14 16 .set push 15 17 .set noreorder 16 18 .set noat 17 - /* Save all registers to the top CVMSEG. This shouldn't 19 + /* Clear Dcache so cvmseg works right. */ 20 + cache 1,0($0) 21 + /* Use K0 to do a read/modify/write of CVMMEMCTL */ 22 + dmfc0 k0, $11, 7 23 + /* Clear out the size of CVMSEG */ 24 + dins k0, $0, 0, 6 25 + /* Set CVMSEG to its largest value */ 26 + ori k0, k0, 0x1c0 | 54 27 + /* Store the CVMMEMCTL value */ 28 + dmtc0 k0, $11, 7 29 + /* 30 + * Restore K0 from the debug scratch register, it was saved in 31 + * the boot-vector code. 32 + */ 33 + dmfc0 k0, $31 34 + 35 + /* 36 + * Save all registers to the top CVMSEG. This shouldn't 18 37 * corrupt any state used by the kernel. Also all registers 19 - * should have the value right before the NMI. */ 38 + * should have the value right before the NMI. 
39 + */ 20 40 SAVE_REG(0) 21 41 SAVE_REG(1) 22 42 SAVE_REG(2) ··· 69 49 SAVE_REG(29) 70 50 SAVE_REG(30) 71 51 SAVE_REG(31) 52 + /* Write zero to all CVMSEG locations per Core-15169 */ 53 + dli a0, CVMSEG_SIZE - (33 * 8) 54 + 1: sd zero, CVMSEG_BASE(a0) 55 + daddiu a0, a0, -8 56 + bgez a0, 1b 57 + nop 72 58 /* Set the stack to begin right below the registers */ 73 - li sp, -32768+6912-32*8 59 + dli sp, CVMSEG_BASE + CVMSEG_SIZE - (32 * 8) 74 60 /* Load the address of the third stage handler */ 75 - dla a0, octeon_wdt_nmi_stage3 61 + dla $25, octeon_wdt_nmi_stage3 76 62 /* Call the third stage handler */ 77 - jal a0 63 + jal $25 78 64 /* a0 is the address of the saved registers */ 79 65 move a0, sp 80 66 /* Loop forvever if we get here. */ 81 - 1: b 1b 67 + 2: b 2b 82 68 nop 83 69 .set pop 84 70 END(octeon_wdt_nmi_stage2)
+15
include/dt-bindings/mips/lantiq_rcu_gphy.h
··· 1 + /* 2 + * This program is free software; you can redistribute it and/or modify it 3 + * under the terms of the GNU General Public License version 2 as published 4 + * by the Free Software Foundation. 5 + * 6 + * Copyright (C) 2016 Martin Blumenstingl <martin.blumenstingl@googlemail.com> 7 + * Copyright (C) 2017 Hauke Mehrtens <hauke@hauke-m.de> 8 + */ 9 + #ifndef _DT_BINDINGS_MIPS_LANTIQ_RCU_GPHY_H 10 + #define _DT_BINDINGS_MIPS_LANTIQ_RCU_GPHY_H 11 + 12 + #define GPHY_MODE_GE 1 13 + #define GPHY_MODE_FE 2 14 + 15 + #endif /* _DT_BINDINGS_MIPS_LANTIQ_RCU_GPHY_H */
-297
include/linux/irqchip/mips-gic.h
/*
 * This file is subject to the terms and conditions of the GNU General Public
 * License. See the file "COPYING" in the main directory of this archive
 * for more details.
 *
 * Copyright (C) 2000, 07 MIPS Technologies, Inc.
 */
#ifndef __LINUX_IRQCHIP_MIPS_GIC_H
#define __LINUX_IRQCHIP_MIPS_GIC_H

#include <linux/clocksource.h>
#include <linux/ioport.h>

#define GIC_MAX_INTRS			256

/* Constants */
#define GIC_POL_POS			1
#define GIC_POL_NEG			0
#define GIC_TRIG_EDGE			1
#define GIC_TRIG_LEVEL			0
#define GIC_TRIG_DUAL_ENABLE		1
#define GIC_TRIG_DUAL_DISABLE		0

#define MSK(n) ((1 << (n)) - 1)

/* Accessors */
#define GIC_REG(segment, offset) (segment##_##SECTION_OFS + offset##_##OFS)

/* GIC Address Space */
#define SHARED_SECTION_OFS		0x0000
#define SHARED_SECTION_SIZE		0x8000
#define VPE_LOCAL_SECTION_OFS		0x8000
#define VPE_LOCAL_SECTION_SIZE		0x4000
#define VPE_OTHER_SECTION_OFS		0xc000
#define VPE_OTHER_SECTION_SIZE		0x4000
#define USM_VISIBLE_SECTION_OFS		0x10000
#define USM_VISIBLE_SECTION_SIZE	0x10000

/* Register Map for Shared Section */

#define GIC_SH_CONFIG_OFS		0x0000

/* Shared Global Counter */
#define GIC_SH_COUNTER_31_00_OFS	0x0010
/* 64-bit counter register for CM3 */
#define GIC_SH_COUNTER_OFS		GIC_SH_COUNTER_31_00_OFS
#define GIC_SH_COUNTER_63_32_OFS	0x0014
#define GIC_SH_REVISIONID_OFS		0x0020

/* Convert an interrupt number to a byte offset/bit for multi-word registers */
#define GIC_INTR_OFS(intr) ({				\
	unsigned bits = mips_cm_is64 ? 64 : 32;		\
	unsigned reg_idx = (intr) / bits;		\
	unsigned reg_width = bits / 8;			\
							\
	reg_idx * reg_width;				\
})
#define GIC_INTR_BIT(intr) ((intr) % (mips_cm_is64 ? 64 : 32))

/* Polarity : Reset Value is always 0 */
#define GIC_SH_SET_POLARITY_OFS		0x0100

/* Triggering : Reset Value is always 0 */
#define GIC_SH_SET_TRIGGER_OFS		0x0180

/* Dual edge triggering : Reset Value is always 0 */
#define GIC_SH_SET_DUAL_OFS		0x0200

/* Set/Clear corresponding bit in Edge Detect Register */
#define GIC_SH_WEDGE_OFS		0x0280

/* Mask manipulation */
#define GIC_SH_RMASK_OFS		0x0300
#define GIC_SH_SMASK_OFS		0x0380

/* Global Interrupt Mask Register (RO) - Bit Set == Interrupt enabled */
#define GIC_SH_MASK_OFS			0x0400

/* Pending Global Interrupts (RO) */
#define GIC_SH_PEND_OFS			0x0480

/* Maps Interrupt X to a Pin */
#define GIC_SH_INTR_MAP_TO_PIN_BASE_OFS	0x0500
#define GIC_SH_MAP_TO_PIN(intr)		(4 * (intr))

/* Maps Interrupt X to a VPE */
#define GIC_SH_INTR_MAP_TO_VPE_BASE_OFS	0x2000
#define GIC_SH_MAP_TO_VPE_REG_OFF(intr, vpe) \
	((32 * (intr)) + (((vpe) / 32) * 4))
#define GIC_SH_MAP_TO_VPE_REG_BIT(vpe)	(1 << ((vpe) % 32))

/* Register Map for Local Section */
#define GIC_VPE_CTL_OFS			0x0000
#define GIC_VPE_PEND_OFS		0x0004
#define GIC_VPE_MASK_OFS		0x0008
#define GIC_VPE_RMASK_OFS		0x000c
#define GIC_VPE_SMASK_OFS		0x0010
#define GIC_VPE_WD_MAP_OFS		0x0040
#define GIC_VPE_COMPARE_MAP_OFS		0x0044
#define GIC_VPE_TIMER_MAP_OFS		0x0048
#define GIC_VPE_FDC_MAP_OFS		0x004c
#define GIC_VPE_PERFCTR_MAP_OFS		0x0050
#define GIC_VPE_SWINT0_MAP_OFS		0x0054
#define GIC_VPE_SWINT1_MAP_OFS		0x0058
#define GIC_VPE_OTHER_ADDR_OFS		0x0080
#define GIC_VP_IDENT_OFS		0x0088
#define GIC_VPE_WD_CONFIG0_OFS		0x0090
#define GIC_VPE_WD_COUNT0_OFS		0x0094
#define GIC_VPE_WD_INITIAL0_OFS		0x0098
#define GIC_VPE_COMPARE_LO_OFS		0x00a0
/* 64-bit Compare register on CM3 */
#define GIC_VPE_COMPARE_OFS		GIC_VPE_COMPARE_LO_OFS
#define GIC_VPE_COMPARE_HI_OFS		0x00a4

#define GIC_VPE_EIC_SHADOW_SET_BASE_OFS	0x0100
#define GIC_VPE_EIC_SS(intr)		(4 * (intr))

#define GIC_VPE_EIC_VEC_BASE_OFS	0x0800
#define GIC_VPE_EIC_VEC(intr)		(4 * (intr))

#define GIC_VPE_TENABLE_NMI_OFS		0x1000
#define GIC_VPE_TENABLE_YQ_OFS		0x1004
#define GIC_VPE_TENABLE_INT_31_0_OFS	0x1080
#define GIC_VPE_TENABLE_INT_63_32_OFS	0x1084

/* User Mode Visible Section Register Map */
#define GIC_UMV_SH_COUNTER_31_00_OFS	0x0000
#define GIC_UMV_SH_COUNTER_63_32_OFS	0x0004

/* Masks */
#define GIC_SH_CONFIG_COUNTSTOP_SHF	28
#define GIC_SH_CONFIG_COUNTSTOP_MSK	(MSK(1) << GIC_SH_CONFIG_COUNTSTOP_SHF)

#define GIC_SH_CONFIG_COUNTBITS_SHF	24
#define GIC_SH_CONFIG_COUNTBITS_MSK	(MSK(4) << GIC_SH_CONFIG_COUNTBITS_SHF)

#define GIC_SH_CONFIG_NUMINTRS_SHF	16
#define GIC_SH_CONFIG_NUMINTRS_MSK	(MSK(8) << GIC_SH_CONFIG_NUMINTRS_SHF)

#define GIC_SH_CONFIG_NUMVPES_SHF	0
#define GIC_SH_CONFIG_NUMVPES_MSK	(MSK(8) << GIC_SH_CONFIG_NUMVPES_SHF)

#define GIC_SH_WEDGE_SET(intr)		((intr) | (0x1 << 31))
#define GIC_SH_WEDGE_CLR(intr)		((intr) & ~(0x1 << 31))

#define GIC_MAP_TO_PIN_SHF		31
#define GIC_MAP_TO_PIN_MSK		(MSK(1) << GIC_MAP_TO_PIN_SHF)
#define GIC_MAP_TO_NMI_SHF		30
#define GIC_MAP_TO_NMI_MSK		(MSK(1) << GIC_MAP_TO_NMI_SHF)
#define GIC_MAP_TO_YQ_SHF		29
#define GIC_MAP_TO_YQ_MSK		(MSK(1) << GIC_MAP_TO_YQ_SHF)
#define GIC_MAP_SHF			0
#define GIC_MAP_MSK			(MSK(6) << GIC_MAP_SHF)

/* GIC_VPE_CTL Masks */
#define GIC_VPE_CTL_FDC_RTBL_SHF	4
#define GIC_VPE_CTL_FDC_RTBL_MSK	(MSK(1) << GIC_VPE_CTL_FDC_RTBL_SHF)
#define GIC_VPE_CTL_SWINT_RTBL_SHF	3
#define GIC_VPE_CTL_SWINT_RTBL_MSK	(MSK(1) << GIC_VPE_CTL_SWINT_RTBL_SHF)
#define GIC_VPE_CTL_PERFCNT_RTBL_SHF	2
#define GIC_VPE_CTL_PERFCNT_RTBL_MSK	(MSK(1) << GIC_VPE_CTL_PERFCNT_RTBL_SHF)
#define GIC_VPE_CTL_TIMER_RTBL_SHF	1
#define GIC_VPE_CTL_TIMER_RTBL_MSK	(MSK(1) << GIC_VPE_CTL_TIMER_RTBL_SHF)
#define GIC_VPE_CTL_EIC_MODE_SHF	0
#define GIC_VPE_CTL_EIC_MODE_MSK	(MSK(1) << GIC_VPE_CTL_EIC_MODE_SHF)

/* GIC_VPE_PEND Masks */
#define GIC_VPE_PEND_WD_SHF		0
#define GIC_VPE_PEND_WD_MSK		(MSK(1) << GIC_VPE_PEND_WD_SHF)
#define GIC_VPE_PEND_CMP_SHF		1
#define GIC_VPE_PEND_CMP_MSK		(MSK(1) << GIC_VPE_PEND_CMP_SHF)
#define GIC_VPE_PEND_TIMER_SHF		2
#define GIC_VPE_PEND_TIMER_MSK		(MSK(1) << GIC_VPE_PEND_TIMER_SHF)
#define GIC_VPE_PEND_PERFCOUNT_SHF	3
#define GIC_VPE_PEND_PERFCOUNT_MSK	(MSK(1) << GIC_VPE_PEND_PERFCOUNT_SHF)
#define GIC_VPE_PEND_SWINT0_SHF		4
#define GIC_VPE_PEND_SWINT0_MSK		(MSK(1) << GIC_VPE_PEND_SWINT0_SHF)
#define GIC_VPE_PEND_SWINT1_SHF		5
#define GIC_VPE_PEND_SWINT1_MSK		(MSK(1) << GIC_VPE_PEND_SWINT1_SHF)
#define GIC_VPE_PEND_FDC_SHF		6
#define GIC_VPE_PEND_FDC_MSK		(MSK(1) << GIC_VPE_PEND_FDC_SHF)

/* GIC_VPE_RMASK Masks */
#define GIC_VPE_RMASK_WD_SHF		0
#define GIC_VPE_RMASK_WD_MSK		(MSK(1) << GIC_VPE_RMASK_WD_SHF)
#define GIC_VPE_RMASK_CMP_SHF		1
#define GIC_VPE_RMASK_CMP_MSK		(MSK(1) << GIC_VPE_RMASK_CMP_SHF)
#define GIC_VPE_RMASK_TIMER_SHF		2
#define GIC_VPE_RMASK_TIMER_MSK		(MSK(1) << GIC_VPE_RMASK_TIMER_SHF)
#define GIC_VPE_RMASK_PERFCNT_SHF	3
#define GIC_VPE_RMASK_PERFCNT_MSK	(MSK(1) << GIC_VPE_RMASK_PERFCNT_SHF)
#define GIC_VPE_RMASK_SWINT0_SHF	4
#define GIC_VPE_RMASK_SWINT0_MSK	(MSK(1) << GIC_VPE_RMASK_SWINT0_SHF)
#define GIC_VPE_RMASK_SWINT1_SHF	5
#define GIC_VPE_RMASK_SWINT1_MSK	(MSK(1) << GIC_VPE_RMASK_SWINT1_SHF)
#define GIC_VPE_RMASK_FDC_SHF		6
#define GIC_VPE_RMASK_FDC_MSK		(MSK(1) << GIC_VPE_RMASK_FDC_SHF)

/* GIC_VPE_SMASK Masks */
#define GIC_VPE_SMASK_WD_SHF		0
#define GIC_VPE_SMASK_WD_MSK		(MSK(1) << GIC_VPE_SMASK_WD_SHF)
#define GIC_VPE_SMASK_CMP_SHF		1
#define GIC_VPE_SMASK_CMP_MSK		(MSK(1) << GIC_VPE_SMASK_CMP_SHF)
#define GIC_VPE_SMASK_TIMER_SHF		2
#define GIC_VPE_SMASK_TIMER_MSK		(MSK(1) << GIC_VPE_SMASK_TIMER_SHF)
#define GIC_VPE_SMASK_PERFCNT_SHF	3
#define GIC_VPE_SMASK_PERFCNT_MSK	(MSK(1) << GIC_VPE_SMASK_PERFCNT_SHF)
#define GIC_VPE_SMASK_SWINT0_SHF	4
#define GIC_VPE_SMASK_SWINT0_MSK	(MSK(1) << GIC_VPE_SMASK_SWINT0_SHF)
#define GIC_VPE_SMASK_SWINT1_SHF	5
#define GIC_VPE_SMASK_SWINT1_MSK	(MSK(1) << GIC_VPE_SMASK_SWINT1_SHF)
#define GIC_VPE_SMASK_FDC_SHF		6
#define GIC_VPE_SMASK_FDC_MSK		(MSK(1) << GIC_VPE_SMASK_FDC_SHF)

/* GIC_VP_IDENT fields */
#define GIC_VP_IDENT_VCNUM_SHF		0
#define GIC_VP_IDENT_VCNUM_MSK		(MSK(6) << GIC_VP_IDENT_VCNUM_SHF)

/* GIC nomenclature for Core Interrupt Pins. */
#define GIC_CPU_INT0		0 /* Core Interrupt 2 */
#define GIC_CPU_INT1		1 /* .		      */
#define GIC_CPU_INT2		2 /* .		      */
#define GIC_CPU_INT3		3 /* .		      */
#define GIC_CPU_INT4		4 /* .		      */
#define GIC_CPU_INT5		5 /* Core Interrupt 7 */

/* Add 2 to convert GIC CPU pin to core interrupt */
#define GIC_CPU_PIN_OFFSET	2

/* Add 2 to convert non-EIC hardware interrupt to EIC vector number. */
#define GIC_CPU_TO_VEC_OFFSET	2

/* Mapped interrupt to pin X, then GIC will generate the vector (X+1). */
#define GIC_PIN_TO_VEC_OFFSET	1

/* Local GIC interrupts. */
#define GIC_LOCAL_INT_WD	0 /* GIC watchdog */
#define GIC_LOCAL_INT_COMPARE	1 /* GIC count and compare timer */
#define GIC_LOCAL_INT_TIMER	2 /* CPU timer interrupt */
#define GIC_LOCAL_INT_PERFCTR	3 /* CPU performance counter */
#define GIC_LOCAL_INT_SWINT0	4 /* CPU software interrupt 0 */
#define GIC_LOCAL_INT_SWINT1	5 /* CPU software interrupt 1 */
#define GIC_LOCAL_INT_FDC	6 /* CPU fast debug channel */
#define GIC_NUM_LOCAL_INTRS	7

/* Convert between local/shared IRQ number and GIC HW IRQ number. */
#define GIC_LOCAL_HWIRQ_BASE	0
#define GIC_LOCAL_TO_HWIRQ(x)	(GIC_LOCAL_HWIRQ_BASE + (x))
#define GIC_HWIRQ_TO_LOCAL(x)	((x) - GIC_LOCAL_HWIRQ_BASE)
#define GIC_SHARED_HWIRQ_BASE	GIC_NUM_LOCAL_INTRS
#define GIC_SHARED_TO_HWIRQ(x)	(GIC_SHARED_HWIRQ_BASE + (x))
#define GIC_HWIRQ_TO_SHARED(x)	((x) - GIC_SHARED_HWIRQ_BASE)

#ifdef CONFIG_MIPS_GIC

extern unsigned int gic_present;

extern void gic_init(unsigned long gic_base_addr,
	unsigned long gic_addrspace_size, unsigned int cpu_vec,
	unsigned int irqbase);
extern u64 gic_read_count(void);
extern unsigned int gic_get_count_width(void);
extern u64 gic_read_compare(void);
extern void gic_write_compare(u64 cnt);
extern void gic_write_cpu_compare(u64 cnt, int cpu);
extern void gic_start_count(void);
extern void gic_stop_count(void);
extern int gic_get_c0_compare_int(void);
extern int gic_get_c0_perfcount_int(void);
extern int gic_get_c0_fdc_int(void);
extern int gic_get_usm_range(struct resource *gic_usm_res);

#else /* CONFIG_MIPS_GIC */

#define gic_present	0

static inline int gic_get_usm_range(struct resource *gic_usm_res)
{
	/* Shouldn't be called. */
	return -1;
}

#endif /* CONFIG_MIPS_GIC */

/**
 * gic_read_local_vp_id() - read the local VP's VCNUM
 *
 * Read the VCNUM of the local VP from the GIC_VP_IDENT register and
 * return it to the caller. This ID should be used to refer to the VP
 * via the GIC's VP-other region, or when calculating an offset to a
 * bit representing the VP in interrupt masks.
 *
 * Return: The VCNUM value for the local VP.
 */
extern unsigned gic_read_local_vp_id(void);

#endif /* __LINUX_IRQCHIP_MIPS_GIC_H */