
nds32: Remove the architecture

The nds32 architecture, also known as AndeStar V3, is a custom 32-bit
RISC target designed by Andes Technology. Support was added to the
kernel in 2016, by which point the replacement RISC-V based V5
processors had already been announced, and was maintained by (current
or former) Andes employees.

As explained by Alan Kao, new customers are now all using RISC-V,
and all known nds32 users are already on longterm stable kernels
provided by Andes, with no development work going into mainline
support any more.

While the port is still in reasonably good shape, it will only degrade
over time without active maintainers, so it seems best to remove it
before it becomes unusable. As always, if it turns out that there are
mainline users after all, and they volunteer to maintain the port in
the future, the removal can be reverted.

Link: https://lore.kernel.org/linux-mm/YhdWNLUhk+x9RAzU@yamatobi.andestech.com/
Link: https://lore.kernel.org/lkml/20220302065213.82702-1-alankao@andestech.com/
Link: https://www.andestech.com/en/products-solutions/andestar-architecture/
Signed-off-by: Alan Kao <alankao@andestech.com>
[arnd: rewrite changelog to provide more background]
Signed-off-by: Arnd Bergmann <arnd@arndb.de>

Authored by Alan Kao, committed by Arnd Bergmann
aec499c7 dd865f09
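For anyone who later needs to revisit the port, the state on either side of the removal can be inspected from a local clone of the mainline tree. This is only a sketch: it assumes you already have a checkout, and that of the two abbreviated hashes shown above, aec499c7 is the removal commit and dd865f09 still contains arch/nds32.

```shell
# Run inside an existing mainline kernel checkout (assumed to exist).
git show --stat aec499c7 | tail -n 2     # diffstat summary of the removal
git show aec499c7 -- MAINTAINERS         # the ANDES ARCHITECTURE entry going away
# Restore the removed port into the working tree, if ever needed:
git checkout dd865f09 -- arch/nds32
```

The same `git checkout <ref> -- <path>` pattern is how a revert of the removal would start, as the changelog suggests.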

+6 -15814
-19
Documentation/devicetree/bindings/interrupt-controller/andestech,ativic32.txt
-* Andestech Internal Vector Interrupt Controller
-
-The Internal Vector Interrupt Controller (IVIC) is a basic interrupt controller
-suitable for a simpler SoC platform not requiring a more sophisticated and
-bigger External Vector Interrupt Controller.
-
-
-Main node required properties:
-
- - compatible : should at least contain "andestech,ativic32".
- - interrupt-controller : Identifies the node as an interrupt controller
- - #interrupt-cells: 1 cells and refer to interrupt-controller/interrupts
-
-Examples:
-	intc: interrupt-controller {
-		compatible = "andestech,ativic32";
-		#interrupt-cells = <1>;
-		interrupt-controller;
-	};
-40
Documentation/devicetree/bindings/nds32/andestech-boards
-Andestech(nds32) AE3XX Platform
------------------------------------------------------------------------------
-The AE3XX prototype demonstrates the AE3XX example platform on the FPGA. It
-is composed of one Andestech(nds32) processor and AE3XX.
-
-Required properties (in root node):
-	- compatible = "andestech,ae3xx";
-
-Example:
-/dts-v1/;
-/ {
-	compatible = "andestech,ae3xx";
-	#address-cells = <1>;
-	#size-cells = <1>;
-	interrupt-parent = <&intc>;
-};
-
-Andestech(nds32) AG101P Platform
------------------------------------------------------------------------------
-AG101P is a generic SoC Platform IP that works with any of Andestech(nds32)
-processors to provide a cost-effective and high performance solution for
-majority of embedded systems in variety of application domains. Users may
-simply attach their IP on one of the system buses together with certain glue
-logics to complete a SoC solution for a specific application. With
-comprehensive simulation and design environments, users may evaluate the
-system performance of their applications and track bugs of their designs
-efficiently. The optional hardware development platform further provides real
-system environment for early prototyping and software/hardware co-development.
-
-Required properties (in root node):
-	compatible = "andestech,ag101p";
-
-Example:
-/dts-v1/;
-/ {
-	compatible = "andestech,ag101p";
-	#address-cells = <1>;
-	#size-cells = <1>;
-	interrupt-parent = <&intc>;
-};
-28
Documentation/devicetree/bindings/nds32/atl2c.txt
-* Andestech L2 cache Controller
-
-The level-2 cache controller plays an important role in reducing memory latency
-for high performance systems, such as thoese designs with AndesCore processors.
-Level-2 cache controller in general enhances overall system performance
-signigicantly and the system power consumption might be reduced as well by
-reducing DRAM accesses.
-
-This binding specifies what properties must be available in the device tree
-representation of an Andestech L2 cache controller.
-
-Required properties:
-  - compatible:
-	Usage: required
-	Value type: <string>
-	Definition: "andestech,atl2c"
-  - reg : Physical base address and size of cache controller's memory mapped
-  - cache-unified : Specifies the cache is a unified cache.
-  - cache-level : Should be set to 2 for a level 2 cache.
-
-* Example
-
-	cache-controller@e0500000 {
-		compatible = "andestech,atl2c";
-		reg = <0xe0500000 0x1000>;
-		cache-unified;
-		cache-level = <2>;
-	};
-38
Documentation/devicetree/bindings/nds32/cpus.txt
-* Andestech Processor Binding
-
-This binding specifies what properties must be available in the device tree
-representation of a Andestech Processor Core, which is the root node in the
-tree.
-
-Required properties:
-
-	- compatible:
-		Usage: required
-		Value type: <string>
-		Definition: Should be "andestech,<core_name>", "andestech,nds32v3" as fallback.
-		Must contain "andestech,nds32v3" as the most generic value, in addition to
-		one of the following identifiers for a particular CPU core:
-			"andestech,n13"
-			"andestech,n15"
-			"andestech,d15"
-			"andestech,n10"
-			"andestech,d10"
-	- device_type
-		Usage: required
-		Value type: <string>
-		Definition: must be "cpu"
-	- reg: Contains CPU index.
-	- clock-frequency: Contains the clock frequency for CPU, in Hz.
-
-* Examples
-
-/ {
-	cpus {
-		cpu@0 {
-			device_type = "cpu";
-			compatible = "andestech,n13", "andestech,nds32v3";
-			reg = <0x0>;
-			clock-frequency = <60000000>
-		};
-	};
-};
-17
Documentation/devicetree/bindings/perf/nds32v3-pmu.txt
-* NDS32 Performance Monitor Units
-
-NDS32 core have a PMU for counting cpu and cache events like cache misses.
-The NDS32 PMU representation in the device tree should be done as under:
-
-Required properties:
-
-- compatible :
-	"andestech,nds32v3-pmu"
-
-- interrupts : The interrupt number for NDS32 PMU is 13.
-
-Example:
-pmu{
-	compatible = "andestech,nds32v3-pmu";
-	interrupts = <13>;
-}
-33
Documentation/devicetree/bindings/timer/andestech,atcpit100-timer.txt
-Andestech ATCPIT100 timer
-------------------------------------------------------------------
-ATCPIT100 is a generic IP block from Andes Technology, embedded in
-Andestech AE3XX platforms and other designs.
-
-This timer is a set of compact multi-function timers, which can be
-used as pulse width modulators (PWM) as well as simple timers.
-
-It supports up to 4 PIT channels. Each PIT channel is a
-multi-function timer and provide the following usage scenarios:
-One 32-bit timer
-Two 16-bit timers
-Four 8-bit timers
-One 16-bit PWM
-One 16-bit timer and one 8-bit PWM
-Two 8-bit timer and one 8-bit PWM
-
-Required properties:
-- compatible : Should be "andestech,atcpit100"
-- reg : Address and length of the register set
-- interrupts : Reference to the timer interrupt
-- clocks : a clock to provide the tick rate for "andestech,atcpit100"
-- clock-names : should be "PCLK" for the peripheral clock source.
-
-Examples:
-
-timer0: timer@f0400000 {
-	compatible = "andestech,atcpit100";
-	reg = <0xf0400000 0x1000>;
-	interrupts = <2>;
-	clocks = <&apb>;
-	clock-names = "PCLK";
-};
-1
Documentation/features/core/cBPF-JIT/arch-support.txt
 |        m68k: | TODO |
 |  microblaze: | TODO |
 |        mips: |  ok  |
-|       nds32: | TODO |
 |       nios2: | TODO |
 |    openrisc: | TODO |
 |      parisc: | TODO |
-1
Documentation/features/core/eBPF-JIT/arch-support.txt
 |        m68k: | TODO |
 |  microblaze: | TODO |
 |        mips: |  ok  |
-|       nds32: | TODO |
 |       nios2: | TODO |
 |    openrisc: | TODO |
 |      parisc: | TODO |
-1
Documentation/features/core/generic-idle-thread/arch-support.txt
 |        m68k: | TODO |
 |  microblaze: | TODO |
 |        mips: |  ok  |
-|       nds32: | TODO |
 |       nios2: | TODO |
 |    openrisc: |  ok  |
 |      parisc: |  ok  |
-1
Documentation/features/core/jump-labels/arch-support.txt
 |        m68k: | TODO |
 |  microblaze: | TODO |
 |        mips: |  ok  |
-|       nds32: | TODO |
 |       nios2: | TODO |
 |    openrisc: | TODO |
 |      parisc: |  ok  |
-1
Documentation/features/core/thread-info-in-task/arch-support.txt
 |        m68k: | TODO |
 |  microblaze: | TODO |
 |        mips: | TODO |
-|       nds32: |  ok  |
 |       nios2: | TODO |
 |    openrisc: | TODO |
 |      parisc: |  ok  |
-1
Documentation/features/core/tracehook/arch-support.txt
 |        m68k: | TODO |
 |  microblaze: | TODO |
 |        mips: |  ok  |
-|       nds32: |  ok  |
 |       nios2: |  ok  |
 |    openrisc: |  ok  |
 |      parisc: |  ok  |
-1
Documentation/features/debug/KASAN/arch-support.txt
 |        m68k: | TODO |
 |  microblaze: | TODO |
 |        mips: | TODO |
-|       nds32: | TODO |
 |       nios2: | TODO |
 |    openrisc: | TODO |
 |      parisc: | TODO |
-1
Documentation/features/debug/debug-vm-pgtable/arch-support.txt
 |        m68k: | TODO |
 |  microblaze: | TODO |
 |        mips: | TODO |
-|       nds32: | TODO |
 |       nios2: | TODO |
 |    openrisc: | TODO |
 |      parisc: | TODO |
-1
Documentation/features/debug/gcov-profile-all/arch-support.txt
 |        m68k: | TODO |
 |  microblaze: |  ok  |
 |        mips: |  ok  |
-|       nds32: | TODO |
 |       nios2: | TODO |
 |    openrisc: | TODO |
 |      parisc: | TODO |
-1
Documentation/features/debug/kcov/arch-support.txt
 |        m68k: | TODO |
 |  microblaze: | TODO |
 |        mips: |  ok  |
-|       nds32: | TODO |
 |       nios2: | TODO |
 |    openrisc: | TODO |
 |      parisc: | TODO |
-1
Documentation/features/debug/kgdb/arch-support.txt
 |        m68k: | TODO |
 |  microblaze: |  ok  |
 |        mips: |  ok  |
-|       nds32: | TODO |
 |       nios2: |  ok  |
 |    openrisc: | TODO |
 |      parisc: |  ok  |
-1
Documentation/features/debug/kmemleak/arch-support.txt
 |        m68k: | TODO |
 |  microblaze: |  ok  |
 |        mips: |  ok  |
-|       nds32: |  ok  |
 |       nios2: | TODO |
 |    openrisc: | TODO |
 |      parisc: | TODO |
-1
Documentation/features/debug/kprobes-on-ftrace/arch-support.txt
 |        m68k: | TODO |
 |  microblaze: | TODO |
 |        mips: | TODO |
-|       nds32: | TODO |
 |       nios2: | TODO |
 |    openrisc: | TODO |
 |      parisc: |  ok  |
-1
Documentation/features/debug/kprobes/arch-support.txt
 |        m68k: | TODO |
 |  microblaze: | TODO |
 |        mips: |  ok  |
-|       nds32: | TODO |
 |       nios2: | TODO |
 |    openrisc: | TODO |
 |      parisc: |  ok  |
-1
Documentation/features/debug/kretprobes/arch-support.txt
 |        m68k: | TODO |
 |  microblaze: | TODO |
 |        mips: |  ok  |
-|       nds32: | TODO |
 |       nios2: | TODO |
 |    openrisc: | TODO |
 |      parisc: |  ok  |
-1
Documentation/features/debug/optprobes/arch-support.txt
 |        m68k: | TODO |
 |  microblaze: | TODO |
 |        mips: | TODO |
-|       nds32: | TODO |
 |       nios2: | TODO |
 |    openrisc: | TODO |
 |      parisc: | TODO |
-1
Documentation/features/debug/stackprotector/arch-support.txt
 |        m68k: | TODO |
 |  microblaze: | TODO |
 |        mips: |  ok  |
-|       nds32: | TODO |
 |       nios2: | TODO |
 |    openrisc: | TODO |
 |      parisc: | TODO |
-1
Documentation/features/debug/uprobes/arch-support.txt
 |        m68k: | TODO |
 |  microblaze: | TODO |
 |        mips: |  ok  |
-|       nds32: | TODO |
 |       nios2: | TODO |
 |    openrisc: | TODO |
 |      parisc: | TODO |
-1
Documentation/features/debug/user-ret-profiler/arch-support.txt
 |        m68k: | TODO |
 |  microblaze: | TODO |
 |        mips: | TODO |
-|       nds32: | TODO |
 |       nios2: | TODO |
 |    openrisc: | TODO |
 |      parisc: | TODO |
-1
Documentation/features/io/dma-contiguous/arch-support.txt
 |        m68k: | TODO |
 |  microblaze: |  ok  |
 |        mips: |  ok  |
-|       nds32: | TODO |
 |       nios2: | TODO |
 |    openrisc: | TODO |
 |      parisc: | TODO |
-1
Documentation/features/locking/cmpxchg-local/arch-support.txt
 |        m68k: | TODO |
 |  microblaze: | TODO |
 |        mips: | TODO |
-|       nds32: | TODO |
 |       nios2: | TODO |
 |    openrisc: | TODO |
 |      parisc: | TODO |
-1
Documentation/features/locking/lockdep/arch-support.txt
 |        m68k: | TODO |
 |  microblaze: |  ok  |
 |        mips: |  ok  |
-|       nds32: |  ok  |
 |       nios2: | TODO |
 |    openrisc: |  ok  |
 |      parisc: | TODO |
-1
Documentation/features/locking/queued-rwlocks/arch-support.txt
 |        m68k: | TODO |
 |  microblaze: | TODO |
 |        mips: |  ok  |
-|       nds32: | TODO |
 |       nios2: | TODO |
 |    openrisc: |  ok  |
 |      parisc: | TODO |
-1
Documentation/features/locking/queued-spinlocks/arch-support.txt
 |        m68k: | TODO |
 |  microblaze: | TODO |
 |        mips: |  ok  |
-|       nds32: | TODO |
 |       nios2: | TODO |
 |    openrisc: |  ok  |
 |      parisc: | TODO |
-1
Documentation/features/perf/kprobes-event/arch-support.txt
 |        m68k: | TODO |
 |  microblaze: | TODO |
 |        mips: |  ok  |
-|       nds32: |  ok  |
 |       nios2: | TODO |
 |    openrisc: | TODO |
 |      parisc: |  ok  |
-1
Documentation/features/perf/perf-regs/arch-support.txt
 |        m68k: | TODO |
 |  microblaze: | TODO |
 |        mips: |  ok  |
-|       nds32: | TODO |
 |       nios2: | TODO |
 |    openrisc: | TODO |
 |      parisc: | TODO |
-1
Documentation/features/perf/perf-stackdump/arch-support.txt
 |        m68k: | TODO |
 |  microblaze: | TODO |
 |        mips: |  ok  |
-|       nds32: | TODO |
 |       nios2: | TODO |
 |    openrisc: | TODO |
 |      parisc: | TODO |
-1
Documentation/features/sched/membarrier-sync-core/arch-support.txt
 |        m68k: | TODO |
 |  microblaze: | TODO |
 |        mips: | TODO |
-|       nds32: | TODO |
 |       nios2: | TODO |
 |    openrisc: | TODO |
 |      parisc: | TODO |
-1
Documentation/features/sched/numa-balancing/arch-support.txt
 |        m68k: |  ..  |
 |  microblaze: |  ..  |
 |        mips: | TODO |
-|       nds32: | TODO |
 |       nios2: |  ..  |
 |    openrisc: |  ..  |
 |      parisc: |  ..  |
-1
Documentation/features/seccomp/seccomp-filter/arch-support.txt
 |        m68k: | TODO |
 |  microblaze: | TODO |
 |        mips: |  ok  |
-|       nds32: | TODO |
 |       nios2: | TODO |
 |    openrisc: | TODO |
 |      parisc: |  ok  |
-1
Documentation/features/time/arch-tick-broadcast/arch-support.txt
 |        m68k: | TODO |
 |  microblaze: | TODO |
 |        mips: |  ok  |
-|       nds32: | TODO |
 |       nios2: | TODO |
 |    openrisc: | TODO |
 |      parisc: | TODO |
-1
Documentation/features/time/clockevents/arch-support.txt
 |        m68k: | TODO |
 |  microblaze: |  ok  |
 |        mips: |  ok  |
-|       nds32: |  ok  |
 |       nios2: |  ok  |
 |    openrisc: |  ok  |
 |      parisc: | TODO |
-1
Documentation/features/time/context-tracking/arch-support.txt
 |        m68k: | TODO |
 |  microblaze: | TODO |
 |        mips: |  ok  |
-|       nds32: | TODO |
 |       nios2: | TODO |
 |    openrisc: | TODO |
 |      parisc: | TODO |
-1
Documentation/features/time/irq-time-acct/arch-support.txt
 |        m68k: | TODO |
 |  microblaze: | TODO |
 |        mips: |  ok  |
-|       nds32: | TODO |
 |       nios2: | TODO |
 |    openrisc: | TODO |
 |      parisc: |  ..  |
-1
Documentation/features/time/virt-cpuacct/arch-support.txt
 |        m68k: | TODO |
 |  microblaze: | TODO |
 |        mips: |  ok  |
-|       nds32: | TODO |
 |       nios2: | TODO |
 |    openrisc: | TODO |
 |      parisc: |  ok  |
-1
Documentation/features/vm/ELF-ASLR/arch-support.txt
 |        m68k: | TODO |
 |  microblaze: | TODO |
 |        mips: |  ok  |
-|       nds32: | TODO |
 |       nios2: | TODO |
 |    openrisc: | TODO |
 |      parisc: |  ok  |
-1
Documentation/features/vm/PG_uncached/arch-support.txt
 |        m68k: | TODO |
 |  microblaze: | TODO |
 |        mips: | TODO |
-|       nds32: | TODO |
 |       nios2: | TODO |
 |    openrisc: | TODO |
 |      parisc: | TODO |
-1
Documentation/features/vm/THP/arch-support.txt
 |        m68k: |  ..  |
 |  microblaze: |  ..  |
 |        mips: |  ok  |
-|       nds32: | TODO |
 |       nios2: |  ..  |
 |    openrisc: |  ..  |
 |      parisc: | TODO |
-1
Documentation/features/vm/TLB/arch-support.txt
 |        m68k: |  ..  |
 |  microblaze: |  ..  |
 |        mips: | TODO |
-|       nds32: | TODO |
 |       nios2: |  ..  |
 |    openrisc: |  ..  |
 |      parisc: | TODO |
-1
Documentation/features/vm/huge-vmap/arch-support.txt
 |        m68k: | TODO |
 |  microblaze: | TODO |
 |        mips: | TODO |
-|       nds32: | TODO |
 |       nios2: | TODO |
 |    openrisc: | TODO |
 |      parisc: | TODO |
-1
Documentation/features/vm/ioremap_prot/arch-support.txt
 |        m68k: | TODO |
 |  microblaze: | TODO |
 |        mips: |  ok  |
-|       nds32: | TODO |
 |       nios2: | TODO |
 |    openrisc: | TODO |
 |      parisc: | TODO |
-1
Documentation/features/vm/pte_special/arch-support.txt
 |        m68k: | TODO |
 |  microblaze: | TODO |
 |        mips: |  ok  |
-|       nds32: | TODO |
 |       nios2: | TODO |
 |    openrisc: | TODO |
 |      parisc: | TODO |
-12
MAINTAINERS
 F:	drivers/clk/analogbits/*
 F:	include/linux/clk/analogbits*
 
-ANDES ARCHITECTURE
-M:	Nick Hu <nickhu@andestech.com>
-M:	Greentime Hu <green.hu@gmail.com>
-M:	Vincent Chen <deanbo422@gmail.com>
-S:	Supported
-T:	git https://git.kernel.org/pub/scm/linux/kernel/git/greentime/linux.git
-F:	Documentation/devicetree/bindings/interrupt-controller/andestech,ativic32.txt
-F:	Documentation/devicetree/bindings/nds32/
-F:	arch/nds32/
-N:	nds32
-K:	nds32
-
 ANDROID CONFIG FRAGMENTS
 M:	Rob Herring <robh@kernel.org>
 S:	Supported
-4
arch/nds32/Kbuild
-# SPDX-License-Identifier: GPL-2.0-only
-
-# for cleaning
-subdir- += boot
-101
arch/nds32/Kconfig
-# SPDX-License-Identifier: GPL-2.0-only
-#
-# For a description of the syntax of this configuration file,
-# see Documentation/kbuild/kconfig-language.rst.
-#
-
-config NDS32
-	def_bool y
-	select ARCH_32BIT_OFF_T
-	select ARCH_HAS_DMA_PREP_COHERENT
-	select ARCH_HAS_SYNC_DMA_FOR_CPU
-	select ARCH_HAS_SYNC_DMA_FOR_DEVICE
-	select ARCH_WANT_FRAME_POINTERS if FTRACE
-	select CLKSRC_MMIO
-	select CLONE_BACKWARDS
-	select COMMON_CLK
-	select DMA_DIRECT_REMAP
-	select GENERIC_ATOMIC64
-	select GENERIC_CPU_DEVICES
-	select GENERIC_IRQ_CHIP
-	select GENERIC_IRQ_SHOW
-	select GENERIC_IOREMAP
-	select GENERIC_LIB_ASHLDI3
-	select GENERIC_LIB_ASHRDI3
-	select GENERIC_LIB_CMPDI2
-	select GENERIC_LIB_LSHRDI3
-	select GENERIC_LIB_MULDI3
-	select GENERIC_LIB_UCMPDI2
-	select GENERIC_TIME_VSYSCALL
-	select HAVE_ARCH_TRACEHOOK
-	select HAVE_DEBUG_KMEMLEAK
-	select HAVE_EXIT_THREAD
-	select HAVE_REGS_AND_STACK_ACCESS_API
-	select HAVE_PERF_EVENTS
-	select IRQ_DOMAIN
-	select LOCKDEP_SUPPORT
-	select MODULES_USE_ELF_RELA
-	select OF
-	select OF_EARLY_FLATTREE
-	select NO_IOPORT_MAP
-	select RTC_LIB
-	select THREAD_INFO_IN_TASK
-	select HAVE_FUNCTION_TRACER
-	select HAVE_FUNCTION_GRAPH_TRACER
-	select HAVE_FTRACE_MCOUNT_RECORD
-	select HAVE_DYNAMIC_FTRACE
-	select TRACE_IRQFLAGS_SUPPORT
-	help
-	  Andes(nds32) Linux support.
-
-config GENERIC_CALIBRATE_DELAY
-	def_bool y
-
-config GENERIC_CSUM
-	def_bool y
-
-config GENERIC_HWEIGHT
-	def_bool y
-
-config GENERIC_LOCKBREAK
-	def_bool y
-	depends on PREEMPTION
-
-config STACKTRACE_SUPPORT
-	def_bool y
-
-config FIX_EARLYCON_MEM
-	def_bool y
-
-config PGTABLE_LEVELS
-	default 2
-
-menu "System Type"
-source "arch/nds32/Kconfig.cpu"
-config NR_CPUS
-	int
-	default 1
-
-config MMU
-	def_bool y
-
-config NDS32_BUILTIN_DTB
-	string "Builtin DTB"
-	default ""
-	help
-	  User can use it to specify the dts of the SoC
-endmenu
-
-menu "Kernel Features"
-source "kernel/Kconfig.hz"
-endmenu
-
-menu "Power management options"
-config SYS_SUPPORTS_APM_EMULATION
-	bool
-
-config ARCH_SUSPEND_POSSIBLE
-	def_bool y
-
-source "kernel/power/Kconfig"
-endmenu
-218
arch/nds32/Kconfig.cpu
-# SPDX-License-Identifier: GPL-2.0-only
-comment "Processor Features"
-
-config CPU_BIG_ENDIAN
-	def_bool !CPU_LITTLE_ENDIAN
-
-config CPU_LITTLE_ENDIAN
-	bool "Little endian"
-	default y
-
-config FPU
-	bool "FPU support"
-	default n
-	help
-	  If FPU ISA is used in user space, this configuration shall be Y to
-	  enable required support in kernel such as fpu context switch and
-	  fpu exception handler.
-
-	  If no FPU ISA is used in user space, say N.
-
-config LAZY_FPU
-	bool "lazy FPU support"
-	depends on FPU
-	default y
-	help
-	  Say Y here to enable the lazy FPU scheme. The lazy FPU scheme can
-	  enhance system performance by reducing the context switch
-	  frequency of the FPU register.
-
-	  For normal case, say Y.
-
-config SUPPORT_DENORMAL_ARITHMETIC
-	bool "Denormal arithmetic support"
-	depends on FPU
-	default n
-	help
-	  Say Y here to enable arithmetic of denormalized number. Enabling
-	  this feature can enhance the precision for tininess number.
-	  However, performance loss in float point calculations is
-	  possibly significant due to additional FPU exception.
-
-	  If the calculated tolerance for tininess number is not critical,
-	  say N to prevent performance loss.
-
-config HWZOL
-	bool "hardware zero overhead loop support"
-	depends on CPU_D10 || CPU_D15
-	default n
-	help
-	  A set of Zero-Overhead Loop mechanism is provided to reduce the
-	  instruction fetch and execution overhead of loop-control instructions.
-	  It will save 3 registers($LB, $LC, $LE) for context saving if say Y.
-	  You don't need to save these registers if you can make sure your user
-	  program doesn't use these registers.
-
-	  If unsure, say N.
-
-config CPU_CACHE_ALIASING
-	bool "Aliasing cache"
-	depends on CPU_N10 || CPU_D10 || CPU_N13 || CPU_V3
-	default y
-	help
-	  If this CPU is using VIPT data cache and its cache way size is larger
-	  than page size, say Y. If it is using PIPT data cache, say N.
-
-	  If unsure, say Y.
-
-choice
-	prompt "minimum CPU type"
-	default CPU_V3
-	help
-	  The data cache of N15/D15 is implemented as PIPT and it will not cause
-	  the cache aliasing issue. The rest cpus(N13, N10 and D10) are
-	  implemented as VIPT data cache. It may cause the cache aliasing issue
-	  if its cache way size is larger than page size. You can specify the
-	  CPU type directly or choose CPU_V3 if unsure.
-
-	  A kernel built for N10 is able to run on N15, D15, N13, N10 or D10.
-	  A kernel built for N15 is able to run on N15 or D15.
-	  A kernel built for D10 is able to run on D10 or D15.
-	  A kernel built for D15 is able to run on D15.
-	  A kernel built for N13 is able to run on N15, N13 or D15.
-
-config CPU_N15
-	bool "AndesCore N15"
-config CPU_N13
-	bool "AndesCore N13"
-	select CPU_CACHE_ALIASING if ANDES_PAGE_SIZE_4KB
-config CPU_N10
-	bool "AndesCore N10"
-	select CPU_CACHE_ALIASING
-config CPU_D15
-	bool "AndesCore D15"
-config CPU_D10
-	bool "AndesCore D10"
-	select CPU_CACHE_ALIASING
-config CPU_V3
-	bool "AndesCore v3 compatible"
-	select CPU_CACHE_ALIASING
-endchoice
-choice
-	prompt "Paging -- page size "
-	default ANDES_PAGE_SIZE_4KB
-config ANDES_PAGE_SIZE_4KB
-	bool "use 4KB page size"
-config ANDES_PAGE_SIZE_8KB
-	bool "use 8KB page size"
-endchoice
-
-config CPU_ICACHE_DISABLE
-	bool "Disable I-Cache"
-	help
-	  Say Y here to disable the processor instruction cache. Unless
-	  you have a reason not to or are unsure, say N.
-
-config CPU_DCACHE_DISABLE
-	bool "Disable D-Cache"
-	help
-	  Say Y here to disable the processor data cache. Unless
-	  you have a reason not to or are unsure, say N.
-
-config CPU_DCACHE_WRITETHROUGH
-	bool "Force write through D-cache"
-	depends on !CPU_DCACHE_DISABLE
-	help
-	  Say Y here to use the data cache in writethrough mode. Unless you
-	  specifically require this or are unsure, say N.
-
-config WBNA
-	bool "WBNA"
-	default n
-	help
-	  Say Y here to enable write-back memory with no-write-allocation policy.
-
-config ALIGNMENT_TRAP
-	bool "Kernel support unaligned access handling by sw"
-	depends on PROC_FS
-	default n
-	help
-	  Andes processors cannot load/store information which is not
-	  naturally aligned on the bus, i.e., a 4 byte load must start at an
-	  address divisible by 4. On 32-bit Andes processors, these non-aligned
-	  load/store instructions will be emulated in software if you say Y
-	  here, which has a severe performance impact. With an IP-only
-	  configuration it is safe to say N, otherwise say Y.
-
-config HW_SUPPORT_UNALIGNMENT_ACCESS
-	bool "Kernel support unaligned access handling by hw"
-	depends on !ALIGNMENT_TRAP
-	default n
-	help
-	  Andes processors load/store world/half-word instructions can access
-	  unaligned memory locations without generating the Data Alignment
-	  Check exceptions. With an IP-only configuration it is safe to say N,
-	  otherwise say Y.
-
-config HIGHMEM
-	bool "High Memory Support"
-	depends on MMU && !CPU_CACHE_ALIASING
-	select KMAP_LOCAL
-	help
-	  The address space of Andes processors is only 4 Gigabytes large
-	  and it has to accommodate user address space, kernel address
-	  space as well as some memory mapped IO. That means that, if you
-	  have a large amount of physical memory and/or IO, not all of the
-	  memory can be "permanently mapped" by the kernel. The physical
-	  memory that is not permanently mapped is called "high memory".
-
-	  Depending on the selected kernel/user memory split, minimum
-	  vmalloc space and actual amount of RAM, you may not need this
-	  option which should result in a slightly faster kernel.
-
-	  If unsure, say N.
-
-config CACHE_L2
-	bool "Support L2 cache"
-	default y
-	help
-	  Say Y here to enable L2 cache if your SoC are integrated with L2CC.
-	  If unsure, say N.
-
-config HW_PRE
-	bool "Enable hardware prefetcher"
-	default y
-	help
-	  Say Y here to enable hardware prefetcher feature.
-	  Only when CPU_VER.REV >= 0x09 can support.
-
-menu "Memory configuration"
-
-choice
-	prompt "Memory split"
-	depends on MMU
-	default VMSPLIT_3G_OPT
-	help
-	  Select the desired split between kernel and user memory.
-
-	  If you are not absolutely sure what you are doing, leave this
-	  option alone!
-
-config VMSPLIT_3G
-	bool "3G/1G user/kernel split"
-config VMSPLIT_3G_OPT
-	bool "3G/1G user/kernel split (for full 1G low memory)"
-config VMSPLIT_2G
-	bool "2G/2G user/kernel split"
-config VMSPLIT_1G
-	bool "1G/3G user/kernel split"
-endchoice
-
-config PAGE_OFFSET
-	hex
-	default 0x40000000 if VMSPLIT_1G
-	default 0x80000000 if VMSPLIT_2G
-	default 0xB0000000 if VMSPLIT_3G_OPT
-	default 0xC0000000
-
-endmenu
-2
arch/nds32/Kconfig.debug
-# SPDX-License-Identifier: GPL-2.0-only
-# dummy file, do not delete
-63
arch/nds32/Makefile
-# SPDX-License-Identifier: GPL-2.0-only
-LDFLAGS_vmlinux := --no-undefined -X
-OBJCOPYFLAGS := -O binary -R .note -R .note.gnu.build-id -R .comment -S
-
-ifdef CONFIG_FUNCTION_TRACER
-arch-y += -malways-save-lp -mno-relax
-endif
-
-# Avoid generating FPU instructions
-arch-y += -mno-ext-fpu-sp -mno-ext-fpu-dp -mfloat-abi=soft
-
-# Enable <nds32_intrinsic.h>
-KBUILD_CFLAGS += -isystem $(shell $(CC) -print-file-name=include)
-KBUILD_CFLAGS += $(call cc-option, -mno-sched-prolog-epilog)
-KBUILD_CFLAGS += -mcmodel=large
-
-KBUILD_CFLAGS +=$(arch-y) $(tune-y)
-KBUILD_AFLAGS +=$(arch-y) $(tune-y)
-
-#Default value
-head-y := arch/nds32/kernel/head.o
-textaddr-y := $(CONFIG_PAGE_OFFSET)+0xc000
-
-TEXTADDR := $(textaddr-y)
-
-export TEXTADDR
-
-
-# If we have a machine-specific directory, then include it in the build.
-core-y += arch/nds32/kernel/ arch/nds32/mm/
-core-$(CONFIG_FPU) += arch/nds32/math-emu/
-libs-y += arch/nds32/lib/
-
-ifdef CONFIG_CPU_LITTLE_ENDIAN
-KBUILD_CFLAGS += $(call cc-option, -EL)
-KBUILD_AFLAGS += $(call cc-option, -EL)
-KBUILD_LDFLAGS += $(call cc-option, -EL)
-CHECKFLAGS += -D__NDS32_EL__
-else
-KBUILD_CFLAGS += $(call cc-option, -EB)
-KBUILD_AFLAGS += $(call cc-option, -EB)
-KBUILD_LDFLAGS += $(call cc-option, -EB)
-CHECKFLAGS += -D__NDS32_EB__
-endif
-
-boot := arch/nds32/boot
-core-y += $(boot)/dts/
-
-Image: vmlinux
-	$(Q)$(MAKE) $(build)=$(boot) $(boot)/$@
-
-
-PHONY += vdso_install
-vdso_install:
-	$(Q)$(MAKE) $(build)=arch/nds32/kernel/vdso $@
-
-prepare: vdso_prepare
-vdso_prepare: prepare0
-	$(Q)$(MAKE) $(build)=arch/nds32/kernel/vdso include/generated/vdso-offsets.h
-
-define archhelp
-	echo '  Image         - kernel image (arch/$(ARCH)/boot/Image)'
-endef
-2
arch/nds32/boot/.gitignore
-# SPDX-License-Identifier: GPL-2.0-only
-/Image
-16
arch/nds32/boot/Makefile
-# SPDX-License-Identifier: GPL-2.0-only
-targets := Image Image.gz
-
-$(obj)/Image: vmlinux FORCE
-	$(call if_changed,objcopy)
-
-$(obj)/Image.gz: $(obj)/Image FORCE
-	$(call if_changed,gzip)
-
-install: $(obj)/Image
-	$(CONFIG_SHELL) $(srctree)/$(src)/install.sh $(KERNELRELEASE) \
-		$(obj)/Image System.map "$(INSTALL_PATH)"
-
-zinstall: $(obj)/Image.gz
-	$(CONFIG_SHELL) $(srctree)/$(src)/install.sh $(KERNELRELEASE) \
-		$(obj)/Image.gz System.map "$(INSTALL_PATH)"
-2
arch/nds32/boot/dts/Makefile
-# SPDX-License-Identifier: GPL-2.0-only
-obj-$(CONFIG_OF) += $(addsuffix .dtb.o, $(CONFIG_NDS32_BUILTIN_DTB))
-90
arch/nds32/boot/dts/ae3xx.dts
-/dts-v1/;
-/ {
-	compatible = "andestech,ae3xx";
-	#address-cells = <1>;
-	#size-cells = <1>;
-	interrupt-parent = <&intc>;
-
-	chosen {
-		stdout-path = &serial0;
-	};
-
-	memory@0 {
-		device_type = "memory";
-		reg = <0x00000000 0x40000000>;
-	};
-
-	cpus {
-		#address-cells = <1>;
-		#size-cells = <0>;
-		cpu@0 {
-			device_type = "cpu";
-			compatible = "andestech,n13", "andestech,nds32v3";
-			reg = <0>;
-			clock-frequency = <60000000>;
-			next-level-cache = <&L2>;
-		};
-	};
-
-	intc: interrupt-controller {
-		compatible = "andestech,ativic32";
-		#interrupt-cells = <1>;
-		interrupt-controller;
-	};
-
-	clock: clk {
-		#clock-cells = <0>;
-		compatible = "fixed-clock";
-		clock-frequency = <30000000>;
-	};
-
-	apb {
-		compatible = "simple-bus";
-		#address-cells = <1>;
-		#size-cells = <1>;
-		ranges;
-
-		serial0: serial@f0300000 {
-			compatible = "andestech,uart16550", "ns16550a";
-			reg = <0xf0300000 0x1000>;
-			interrupts = <8>;
-			clock-frequency = <14745600>;
-			reg-shift = <2>;
-			reg-offset = <32>;
-			no-loopback-test = <1>;
-		};
-
-		timer0: timer@f0400000 {
-			compatible = "andestech,atcpit100";
-			reg = <0xf0400000 0x1000>;
-			interrupts = <2>;
-			clocks = <&clock>;
-			clock-names = "PCLK";
-		};
-	};
-
-	ahb {
-		compatible = "simple-bus";
-		#address-cells = <1>;
-		#size-cells = <1>;
-		ranges;
-
-		L2: cache-controller@e0500000 {
-			compatible = "andestech,atl2c";
-			reg = <0xe0500000 0x1000>;
-			cache-unified;
-			cache-level = <2>;
-		};
-
-		mac0: ethernet@e0100000 {
-			compatible = "andestech,atmac100";
-			reg = <0xe0100000 0x1000>;
-			interrupts = <18>;
-		};
-	};
-
-	pmu {
-		compatible = "andestech,nds32v3-pmu";
-		interrupts= <13>;
-	};
-};
-104
arch/nds32/configs/defconfig
1 - CONFIG_SYSVIPC=y
2 - CONFIG_POSIX_MQUEUE=y
3 - CONFIG_HIGH_RES_TIMERS=y
4 - CONFIG_BSD_PROCESS_ACCT=y
5 - CONFIG_BSD_PROCESS_ACCT_V3=y
6 - CONFIG_IKCONFIG=y
7 - CONFIG_IKCONFIG_PROC=y
8 - CONFIG_LOG_BUF_SHIFT=14
9 - CONFIG_USER_NS=y
10 - CONFIG_RELAY=y
11 - CONFIG_BLK_DEV_INITRD=y
12 - CONFIG_KALLSYMS_ALL=y
13 - CONFIG_PROFILING=y
14 - CONFIG_MODULES=y
15 - CONFIG_MODULE_UNLOAD=y
16 - # CONFIG_BLK_DEV_BSG is not set
17 - # CONFIG_CACHE_L2 is not set
18 - CONFIG_PREEMPT=y
19 - # CONFIG_COMPACTION is not set
20 - CONFIG_HZ_100=y
21 - # CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
22 - CONFIG_NET=y
23 - CONFIG_PACKET=y
24 - CONFIG_UNIX=y
25 - CONFIG_NET_KEY=y
26 - CONFIG_INET=y
27 - CONFIG_IP_MULTICAST=y
28 - # CONFIG_INET_XFRM_MODE_TRANSPORT is not set
29 - # CONFIG_INET_XFRM_MODE_TUNNEL is not set
30 - # CONFIG_INET_XFRM_MODE_BEET is not set
31 - # CONFIG_INET_DIAG is not set
32 - # CONFIG_IPV6 is not set
33 - # CONFIG_BLK_DEV is not set
34 - CONFIG_NETDEVICES=y
35 - # CONFIG_NET_CADENCE is not set
36 - # CONFIG_NET_VENDOR_BROADCOM is not set
37 - CONFIG_FTMAC100=y
38 - # CONFIG_NET_VENDOR_INTEL is not set
39 - # CONFIG_NET_VENDOR_MARVELL is not set
40 - # CONFIG_NET_VENDOR_MICREL is not set
41 - # CONFIG_NET_VENDOR_NATSEMI is not set
42 - # CONFIG_NET_VENDOR_SEEQ is not set
43 - # CONFIG_NET_VENDOR_STMICRO is not set
44 - # CONFIG_NET_VENDOR_WIZNET is not set
45 - CONFIG_INPUT_EVDEV=y
46 - # CONFIG_INPUT_KEYBOARD is not set
47 - # CONFIG_INPUT_MOUSE is not set
48 - CONFIG_INPUT_TOUCHSCREEN=y
49 - # CONFIG_SERIO is not set
50 - CONFIG_VT_HW_CONSOLE_BINDING=y
51 - CONFIG_SERIAL_8250=y
52 - # CONFIG_SERIAL_8250_DEPRECATED_OPTIONS is not set
53 - CONFIG_SERIAL_8250_CONSOLE=y
54 - CONFIG_SERIAL_8250_NR_UARTS=3
55 - CONFIG_SERIAL_8250_RUNTIME_UARTS=3
56 - CONFIG_SERIAL_OF_PLATFORM=y
57 - # CONFIG_HW_RANDOM is not set
58 - # CONFIG_HWMON is not set
59 - # CONFIG_HID_A4TECH is not set
60 - # CONFIG_HID_APPLE is not set
61 - # CONFIG_HID_BELKIN is not set
62 - # CONFIG_HID_CHERRY is not set
63 - # CONFIG_HID_CHICONY is not set
64 - # CONFIG_HID_CYPRESS is not set
65 - # CONFIG_HID_EZKEY is not set
66 - # CONFIG_HID_ITE is not set
67 - # CONFIG_HID_KENSINGTON is not set
68 - # CONFIG_HID_LOGITECH is not set
69 - # CONFIG_HID_MICROSOFT is not set
70 - # CONFIG_HID_MONTEREY is not set
71 - # CONFIG_USB_SUPPORT is not set
72 - CONFIG_GENERIC_PHY=y
73 - CONFIG_EXT4_FS=y
74 - CONFIG_EXT4_FS_POSIX_ACL=y
75 - CONFIG_EXT4_FS_SECURITY=y
76 - CONFIG_FS_ENCRYPTION=y
77 - CONFIG_FUSE_FS=y
78 - CONFIG_MSDOS_FS=y
79 - CONFIG_VFAT_FS=y
80 - CONFIG_TMPFS=y
81 - CONFIG_TMPFS_POSIX_ACL=y
82 - CONFIG_CONFIGFS_FS=y
83 - CONFIG_NFS_FS=y
84 - CONFIG_NFS_V3_ACL=y
85 - CONFIG_NFS_V4=y
86 - CONFIG_NFS_V4_1=y
87 - CONFIG_NFS_USE_LEGACY_DNS=y
88 - CONFIG_NLS_CODEPAGE_437=y
89 - CONFIG_NLS_ISO8859_1=y
90 - CONFIG_DEBUG_INFO=y
91 - CONFIG_DEBUG_INFO_DWARF4=y
92 - CONFIG_GDB_SCRIPTS=y
93 - CONFIG_READABLE_ASM=y
94 - CONFIG_HEADERS_INSTALL=y
95 - CONFIG_HEADERS_CHECK=y
96 - CONFIG_DEBUG_SECTION_MISMATCH=y
97 - CONFIG_MAGIC_SYSRQ=y
98 - CONFIG_DEBUG_KERNEL=y
99 - CONFIG_PANIC_ON_OOPS=y
100 - # CONFIG_SCHED_DEBUG is not set
101 - # CONFIG_DEBUG_PREEMPT is not set
102 - CONFIG_STACKTRACE=y
103 - CONFIG_RCU_CPU_STALL_TIMEOUT=300
104 - # CONFIG_CRYPTO_HW is not set
-8
arch/nds32/include/asm/Kbuild
1 - # SPDX-License-Identifier: GPL-2.0
2 - generic-y += asm-offsets.h
3 - generic-y += cmpxchg.h
4 - generic-y += export.h
5 - generic-y += gpio.h
6 - generic-y += kvm_para.h
7 - generic-y += parport.h
8 - generic-y += user.h
-39
arch/nds32/include/asm/assembler.h
1 - /* SPDX-License-Identifier: GPL-2.0 */
2 - // Copyright (C) 2005-2017 Andes Technology Corporation
3 -
4 - #ifndef __NDS32_ASSEMBLER_H__
5 - #define __NDS32_ASSEMBLER_H__
6 -
7 - .macro gie_disable
8 -     setgie.d
9 -     dsb
10 - .endm
11 -
12 - .macro gie_enable
13 -     setgie.e
14 -     dsb
15 - .endm
16 -
17 - .macro gie_save oldpsw
18 -     mfsr \oldpsw, $ir0
19 -     setgie.d
20 -     dsb
21 - .endm
22 -
23 - .macro gie_restore oldpsw
24 -     andi \oldpsw, \oldpsw, #0x1
25 -     beqz \oldpsw, 7001f
26 -     setgie.e
27 -     dsb
28 - 7001:
29 - .endm
30 -
31 -
32 - #define USER(insn, reg, addr, opr) \
33 - 9999: insn reg, addr, opr; \
34 -     .section __ex_table,"a"; \
35 -     .align 3; \
36 -     .long 9999b, 9001f; \
37 -     .previous
38 -
39 - #endif /* __NDS32_ASSEMBLER_H__ */
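The USER() macro above records, for each user-space access instruction, a pair of addresses in the __ex_table section: the possibly-faulting instruction (label 9999) and recovery code (label 9001). On a fault the handler looks up the trapping PC in that table and resumes at the fixup instead of oopsing. A minimal userspace C sketch of that lookup (struct and function names are invented here, not kernel API):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical mirror of one __ex_table record emitted by USER(). */
struct ex_entry {
	uintptr_t insn;		/* address of the faulting access (9999b) */
	uintptr_t fixup;	/* address of the recovery code (9001f) */
};

/* Scan the table for the trapping PC; return the fixup address on a
 * match, or 0 to signal a genuine (unrecoverable) kernel fault. */
static uintptr_t search_ex_table(const struct ex_entry *tbl, size_t n,
				 uintptr_t fault_pc)
{
	for (size_t i = 0; i < n; i++)
		if (tbl[i].insn == fault_pc)
			return tbl[i].fixup;
	return 0;
}
```

The real kernel keeps the table sorted and binary-searches it; a linear scan is enough to show the mechanism.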
-15
arch/nds32/include/asm/barrier.h
1 - /* SPDX-License-Identifier: GPL-2.0 */
2 - // Copyright (C) 2005-2017 Andes Technology Corporation
3 -
4 - #ifndef __NDS32_ASM_BARRIER_H
5 - #define __NDS32_ASM_BARRIER_H
6 -
7 - #ifndef __ASSEMBLY__
8 - #define mb()		asm volatile("msync all":::"memory")
9 - #define rmb()		asm volatile("msync all":::"memory")
10 - #define wmb()		asm volatile("msync store":::"memory")
11 - #include <asm-generic/barrier.h>
12 -
13 - #endif /* __ASSEMBLY__ */
14 -
15 - #endif /* __NDS32_ASM_BARRIER_H */
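These macros map the kernel's mb()/rmb()/wmb() to the nds32 "msync" instruction. The classic pattern they serve is message passing: a writer publishes data, issues wmb(), then sets a flag; a reader checks the flag, issues rmb(), then reads the data. A userspace sketch of the same ordering using C11 fences (the C11 release/acquire fences play the wmb()/rmb() roles here; this is an analogy, not the kernel implementation):

```c
#include <stdatomic.h>

static int payload;		/* plain data, published by the writer */
static atomic_int ready;	/* flag the reader polls */

static void producer(void)
{
	payload = 42;					/* plain store */
	atomic_thread_fence(memory_order_release);	/* ~ wmb() */
	atomic_store_explicit(&ready, 1, memory_order_relaxed);
}

static int consumer(void)
{
	while (!atomic_load_explicit(&ready, memory_order_relaxed))
		;					/* spin on the flag */
	atomic_thread_fence(memory_order_acquire);	/* ~ rmb() */
	return payload;		/* ordered after the flag load */
}
```

Note that rmb() above is the full "msync all" rather than a cheaper load-only barrier, so on nds32 the read side pays the same cost as mb().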
-985
arch/nds32/include/asm/bitfield.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 2 - // Copyright (C) 2005-2017 Andes Technology Corporation 3 - 4 - #ifndef __NDS32_BITFIELD_H__ 5 - #define __NDS32_BITFIELD_H__ 6 - /****************************************************************************** 7 - * cr0: CPU_VER (CPU Version Register) 8 - *****************************************************************************/ 9 - #define CPU_VER_offCFGID 0 /* Minor configuration */ 10 - #define CPU_VER_offREV 16 /* Revision of the CPU version */ 11 - #define CPU_VER_offCPUID 24 /* Major CPU versions */ 12 - 13 - #define CPU_VER_mskCFGID ( 0xFFFF << CPU_VER_offCFGID ) 14 - #define CPU_VER_mskREV ( 0xFF << CPU_VER_offREV ) 15 - #define CPU_VER_mskCPUID ( 0xFF << CPU_VER_offCPUID ) 16 - 17 - /****************************************************************************** 18 - * cr1: ICM_CFG (Instruction Cache/Memory Configuration Register) 19 - *****************************************************************************/ 20 - #define ICM_CFG_offISET 0 /* I-cache sets (# of cache lines) per way */ 21 - #define ICM_CFG_offIWAY 3 /* I-cache ways */ 22 - #define ICM_CFG_offISZ 6 /* I-cache line size */ 23 - #define ICM_CFG_offILCK 9 /* I-cache locking support */ 24 - #define ICM_CFG_offILMB 10 /* On-chip ILM banks */ 25 - #define ICM_CFG_offBSAV 13 /* ILM base register alignment version */ 26 - /* bit 15:31 reserved */ 27 - 28 - #define ICM_CFG_mskISET ( 0x7 << ICM_CFG_offISET ) 29 - #define ICM_CFG_mskIWAY ( 0x7 << ICM_CFG_offIWAY ) 30 - #define ICM_CFG_mskISZ ( 0x7 << ICM_CFG_offISZ ) 31 - #define ICM_CFG_mskILCK ( 0x1 << ICM_CFG_offILCK ) 32 - #define ICM_CFG_mskILMB ( 0x7 << ICM_CFG_offILMB ) 33 - #define ICM_CFG_mskBSAV ( 0x3 << ICM_CFG_offBSAV ) 34 - 35 - /****************************************************************************** 36 - * cr2: DCM_CFG (Data Cache/Memory Configuration Register) 37 - *****************************************************************************/ 38 - #define DCM_CFG_offDSET 0 
/* D-cache sets (# of cache lines) per way */ 39 - #define DCM_CFG_offDWAY 3 /* D-cache ways */ 40 - #define DCM_CFG_offDSZ 6 /* D-cache line size */ 41 - #define DCM_CFG_offDLCK 9 /* D-cache locking support */ 42 - #define DCM_CFG_offDLMB 10 /* On-chip DLM banks */ 43 - #define DCM_CFG_offBSAV 13 /* DLM base register alignment version */ 44 - /* bit 15:31 reserved */ 45 - 46 - #define DCM_CFG_mskDSET ( 0x7 << DCM_CFG_offDSET ) 47 - #define DCM_CFG_mskDWAY ( 0x7 << DCM_CFG_offDWAY ) 48 - #define DCM_CFG_mskDSZ ( 0x7 << DCM_CFG_offDSZ ) 49 - #define DCM_CFG_mskDLCK ( 0x1 << DCM_CFG_offDLCK ) 50 - #define DCM_CFG_mskDLMB ( 0x7 << DCM_CFG_offDLMB ) 51 - #define DCM_CFG_mskBSAV ( 0x3 << DCM_CFG_offBSAV ) 52 - 53 - /****************************************************************************** 54 - * cr3: MMU_CFG (MMU Configuration Register) 55 - *****************************************************************************/ 56 - #define MMU_CFG_offMMPS 0 /* Memory management protection scheme */ 57 - #define MMU_CFG_offMMPV 2 /* Memory management protection version number */ 58 - #define MMU_CFG_offFATB 7 /* Fully-associative or non-fully-associative TLB */ 59 - 60 - #define MMU_CFG_offTBW 8 /* TLB ways(non-associative) TBS */ 61 - #define MMU_CFG_offTBS 11 /* TLB sets per way(non-associative) TBS */ 62 - /* bit 14:14 reserved */ 63 - 64 - #define MMU_CFG_offEP8MIN4 15 /* 8KB page supported while minimum page is 4KB */ 65 - #define MMU_CFG_offfEPSZ 16 /* Extra page size supported */ 66 - #define MMU_CFG_offTLBLCK 24 /* TLB locking support */ 67 - #define MMU_CFG_offHPTWK 25 /* Hardware Page Table Walker implemented */ 68 - #define MMU_CFG_offDE 26 /* Default endian */ 69 - #define MMU_CFG_offNTPT 27 /* Partitions for non-translated attributes */ 70 - #define MMU_CFG_offIVTB 28 /* Invisible TLB */ 71 - #define MMU_CFG_offVLPT 29 /* VLPT for fast TLB fill handling implemented */ 72 - #define MMU_CFG_offNTME 30 /* Non-translated VA to PA mapping */ 73 - /* bit 31 reserved 
*/ 74 - 75 - #define MMU_CFG_mskMMPS ( 0x3 << MMU_CFG_offMMPS ) 76 - #define MMU_CFG_mskMMPV ( 0x1F << MMU_CFG_offMMPV ) 77 - #define MMU_CFG_mskFATB ( 0x1 << MMU_CFG_offFATB ) 78 - #define MMU_CFG_mskTBW ( 0x7 << MMU_CFG_offTBW ) 79 - #define MMU_CFG_mskTBS ( 0x7 << MMU_CFG_offTBS ) 80 - #define MMU_CFG_mskEP8MIN4 ( 0x1 << MMU_CFG_offEP8MIN4 ) 81 - #define MMU_CFG_mskfEPSZ ( 0xFF << MMU_CFG_offfEPSZ ) 82 - #define MMU_CFG_mskTLBLCK ( 0x1 << MMU_CFG_offTLBLCK ) 83 - #define MMU_CFG_mskHPTWK ( 0x1 << MMU_CFG_offHPTWK ) 84 - #define MMU_CFG_mskDE ( 0x1 << MMU_CFG_offDE ) 85 - #define MMU_CFG_mskNTPT ( 0x1 << MMU_CFG_offNTPT ) 86 - #define MMU_CFG_mskIVTB ( 0x1 << MMU_CFG_offIVTB ) 87 - #define MMU_CFG_mskVLPT ( 0x1 << MMU_CFG_offVLPT ) 88 - #define MMU_CFG_mskNTME ( 0x1 << MMU_CFG_offNTME ) 89 - 90 - /****************************************************************************** 91 - * cr4: MSC_CFG (Misc Configuration Register) 92 - *****************************************************************************/ 93 - #define MSC_CFG_offEDM 0 94 - #define MSC_CFG_offLMDMA 1 95 - #define MSC_CFG_offPFM 2 96 - #define MSC_CFG_offHSMP 3 97 - #define MSC_CFG_offTRACE 4 98 - #define MSC_CFG_offDIV 5 99 - #define MSC_CFG_offMAC 6 100 - #define MSC_CFG_offAUDIO 7 101 - #define MSC_CFG_offL2C 9 102 - #define MSC_CFG_offRDREG 10 103 - #define MSC_CFG_offADR24 11 104 - #define MSC_CFG_offINTLC 12 105 - #define MSC_CFG_offBASEV 13 106 - #define MSC_CFG_offNOD 16 107 - /* bit 13:31 reserved */ 108 - 109 - #define MSC_CFG_mskEDM ( 0x1 << MSC_CFG_offEDM ) 110 - #define MSC_CFG_mskLMDMA ( 0x1 << MSC_CFG_offLMDMA ) 111 - #define MSC_CFG_mskPFM ( 0x1 << MSC_CFG_offPFM ) 112 - #define MSC_CFG_mskHSMP ( 0x1 << MSC_CFG_offHSMP ) 113 - #define MSC_CFG_mskTRACE ( 0x1 << MSC_CFG_offTRACE ) 114 - #define MSC_CFG_mskDIV ( 0x1 << MSC_CFG_offDIV ) 115 - #define MSC_CFG_mskMAC ( 0x1 << MSC_CFG_offMAC ) 116 - #define MSC_CFG_mskAUDIO ( 0x3 << MSC_CFG_offAUDIO ) 117 - #define MSC_CFG_mskL2C ( 0x1 << 
MSC_CFG_offL2C ) 118 - #define MSC_CFG_mskRDREG ( 0x1 << MSC_CFG_offRDREG ) 119 - #define MSC_CFG_mskADR24 ( 0x1 << MSC_CFG_offADR24 ) 120 - #define MSC_CFG_mskINTLC ( 0x1 << MSC_CFG_offINTLC ) 121 - #define MSC_CFG_mskBASEV ( 0x7 << MSC_CFG_offBASEV ) 122 - #define MSC_CFG_mskNOD ( 0x1 << MSC_CFG_offNOD ) 123 - 124 - /****************************************************************************** 125 - * cr5: CORE_CFG (Core Identification Register) 126 - *****************************************************************************/ 127 - #define CORE_ID_offCOREID 0 128 - /* bit 4:31 reserved */ 129 - 130 - #define CORE_ID_mskCOREID ( 0xF << CORE_ID_offCOREID ) 131 - 132 - /****************************************************************************** 133 - * cr6: FUCOP_EXIST (FPU and Coprocessor Existence Configuration Register) 134 - *****************************************************************************/ 135 - #define FUCOP_EXIST_offCP0EX 0 136 - #define FUCOP_EXIST_offCP1EX 1 137 - #define FUCOP_EXIST_offCP2EX 2 138 - #define FUCOP_EXIST_offCP3EX 3 139 - #define FUCOP_EXIST_offCP0ISFPU 31 140 - 141 - #define FUCOP_EXIST_mskCP0EX ( 0x1 << FUCOP_EXIST_offCP0EX ) 142 - #define FUCOP_EXIST_mskCP1EX ( 0x1 << FUCOP_EXIST_offCP1EX ) 143 - #define FUCOP_EXIST_mskCP2EX ( 0x1 << FUCOP_EXIST_offCP2EX ) 144 - #define FUCOP_EXIST_mskCP3EX ( 0x1 << FUCOP_EXIST_offCP3EX ) 145 - #define FUCOP_EXIST_mskCP0ISFPU ( 0x1 << FUCOP_EXIST_offCP0ISFPU ) 146 - 147 - /****************************************************************************** 148 - * ir0: PSW (Processor Status Word Register) 149 - * ir1: IPSW (Interruption PSW Register) 150 - * ir2: P_IPSW (Previous IPSW Register) 151 - *****************************************************************************/ 152 - #define PSW_offGIE 0 /* Global Interrupt Enable */ 153 - #define PSW_offINTL 1 /* Interruption Stack Level */ 154 - #define PSW_offPOM 3 /* Processor Operation Mode, User/Superuser */ 155 - #define PSW_offBE 5 /* 
Endianness for data memory access, 1:MSB, 0:LSB */ 156 - #define PSW_offIT 6 /* Enable instruction address translation */ 157 - #define PSW_offDT 7 /* Enable data address translation */ 158 - #define PSW_offIME 8 /* Instruction Machine Error flag */ 159 - #define PSW_offDME 9 /* Data Machine Error flag */ 160 - #define PSW_offDEX 10 /* Debug Exception */ 161 - #define PSW_offHSS 11 /* Hardware Single Stepping */ 162 - #define PSW_offDRBE 12 /* Device Register Endian Mode */ 163 - #define PSW_offAEN 13 /* Audio ISA special feature */ 164 - #define PSW_offWBNA 14 /* Write Back Non-Allocate */ 165 - #define PSW_offIFCON 15 /* IFC On */ 166 - #define PSW_offCPL 16 /* Current Priority Level */ 167 - /* bit 19:31 reserved */ 168 - 169 - #define PSW_mskGIE ( 0x1 << PSW_offGIE ) 170 - #define PSW_mskINTL ( 0x3 << PSW_offINTL ) 171 - #define PSW_mskPOM ( 0x3 << PSW_offPOM ) 172 - #define PSW_mskBE ( 0x1 << PSW_offBE ) 173 - #define PSW_mskIT ( 0x1 << PSW_offIT ) 174 - #define PSW_mskDT ( 0x1 << PSW_offDT ) 175 - #define PSW_mskIME ( 0x1 << PSW_offIME ) 176 - #define PSW_mskDME ( 0x1 << PSW_offDME ) 177 - #define PSW_mskDEX ( 0x1 << PSW_offDEX ) 178 - #define PSW_mskHSS ( 0x1 << PSW_offHSS ) 179 - #define PSW_mskDRBE ( 0x1 << PSW_offDRBE ) 180 - #define PSW_mskAEN ( 0x1 << PSW_offAEN ) 181 - #define PSW_mskWBNA ( 0x1 << PSW_offWBNA ) 182 - #define PSW_mskIFCON ( 0x1 << PSW_offIFCON ) 183 - #define PSW_mskCPL ( 0x7 << PSW_offCPL ) 184 - 185 - #define PSW_SYSTEM ( 1 << PSW_offPOM ) 186 - #define PSW_INTL_1 ( 1 << PSW_offINTL ) 187 - #define PSW_CPL_NO ( 0 << PSW_offCPL ) 188 - #define PSW_CPL_ANY ( 7 << PSW_offCPL ) 189 - 190 - #define PSW_clr (PSW_mskGIE|PSW_mskINTL|PSW_mskPOM|PSW_mskIT|PSW_mskDT|PSW_mskIME|PSW_mskWBNA) 191 - #ifdef __NDS32_EB__ 192 - #ifdef CONFIG_WBNA 193 - #define PSW_init (PSW_mskWBNA|(1<<PSW_offINTL)|(1<<PSW_offPOM)|PSW_mskIT|PSW_mskDT|PSW_mskBE) 194 - #else 195 - #define PSW_init ((1<<PSW_offINTL)|(1<<PSW_offPOM)|PSW_mskIT|PSW_mskDT|PSW_mskBE) 196 - 
#endif 197 - #else 198 - #ifdef CONFIG_WBNA 199 - #define PSW_init (PSW_mskWBNA|(1<<PSW_offINTL)|(1<<PSW_offPOM)|PSW_mskIT|PSW_mskDT) 200 - #else 201 - #define PSW_init ((1<<PSW_offINTL)|(1<<PSW_offPOM)|PSW_mskIT|PSW_mskDT) 202 - #endif 203 - #endif 204 - /****************************************************************************** 205 - * ir3: IVB (Interruption Vector Base Register) 206 - *****************************************************************************/ 207 - /* bit 0:12 reserved */ 208 - #define IVB_offNIVIC 1 /* Number of input for IVIC Controller */ 209 - #define IVB_offIVIC_VER 11 /* IVIC Version */ 210 - #define IVB_offEVIC 13 /* External Vector Interrupt Controller mode */ 211 - #define IVB_offESZ 14 /* Size of each vector entry */ 212 - #define IVB_offIVBASE 16 /* BasePA of interrupt vector table */ 213 - 214 - #define IVB_mskNIVIC ( 0x7 << IVB_offNIVIC ) 215 - #define IVB_mskIVIC_VER ( 0x3 << IVB_offIVIC_VER ) 216 - #define IVB_mskEVIC ( 0x1 << IVB_offEVIC ) 217 - #define IVB_mskESZ ( 0x3 << IVB_offESZ ) 218 - #define IVB_mskIVBASE ( 0xFFFF << IVB_offIVBASE ) 219 - 220 - #define IVB_valESZ4 0 221 - #define IVB_valESZ16 1 222 - #define IVB_valESZ64 2 223 - #define IVB_valESZ256 3 224 - /****************************************************************************** 225 - * ir4: EVA (Exception Virtual Address Register) 226 - * ir5: P_EVA (Previous EVA Register) 227 - *****************************************************************************/ 228 - 229 - /* This register contains the VA that causes the exception */ 230 - 231 - /****************************************************************************** 232 - * ir6: ITYPE (Interruption Type Register) 233 - * ir7: P_ITYPE (Previous ITYPE Register) 234 - *****************************************************************************/ 235 - #define ITYPE_offETYPE 0 /* Exception Type */ 236 - #define ITYPE_offINST 4 /* Exception caused by insn fetch or data access */ 237 - /* bit 5:15 reserved */ 
238 - #define ITYPE_offVECTOR 5 /* Vector */ 239 - #define ITYPE_offSWID 16 /* SWID of debugging exception */ 240 - /* bit 31:31 reserved */ 241 - 242 - #define ITYPE_mskETYPE ( 0xF << ITYPE_offETYPE ) 243 - #define ITYPE_mskINST ( 0x1 << ITYPE_offINST ) 244 - #define ITYPE_mskVECTOR ( 0x7F << ITYPE_offVECTOR ) 245 - #define ITYPE_mskSWID ( 0x7FFF << ITYPE_offSWID ) 246 - 247 - /* Additional definitions for ITYPE register */ 248 - #define ITYPE_offSTYPE 16 /* Arithmetic Sub Type */ 249 - #define ITYPE_offCPID 20 /* Co-Processor ID which generate the exception */ 250 - 251 - #define ITYPE_mskSTYPE ( 0xF << ITYPE_offSTYPE ) 252 - #define ITYPE_mskCPID ( 0x3 << ITYPE_offCPID ) 253 - 254 - /* Additional definitions of ITYPE register for FPU */ 255 - #define FPU_DISABLE_EXCEPTION (0x1 << ITYPE_offSTYPE) 256 - #define FPU_EXCEPTION (0x2 << ITYPE_offSTYPE) 257 - #define FPU_CPID 0 /* FPU Co-Processor ID is 0 */ 258 - 259 - #define NDS32_VECTOR_mskNONEXCEPTION 0x78 260 - #define NDS32_VECTOR_offEXCEPTION 8 261 - #define NDS32_VECTOR_offINTERRUPT 9 262 - 263 - /* Interrupt vector entry */ 264 - #define ENTRY_RESET_NMI 0 265 - #define ENTRY_TLB_FILL 1 266 - #define ENTRY_PTE_NOT_PRESENT 2 267 - #define ENTRY_TLB_MISC 3 268 - #define ENTRY_TLB_VLPT_MISS 4 269 - #define ENTRY_MACHINE_ERROR 5 270 - #define ENTRY_DEBUG_RELATED 6 271 - #define ENTRY_GENERAL_EXCPETION 7 272 - #define ENTRY_SYSCALL 8 273 - 274 - /* PTE not present exception definition */ 275 - #define ETYPE_NON_LEAF_PTE_NOT_PRESENT 0 276 - #define ETYPE_LEAF_PTE_NOT_PRESENT 1 277 - 278 - /* General exception ETYPE definition */ 279 - #define ETYPE_ALIGNMENT_CHECK 0 280 - #define ETYPE_RESERVED_INSTRUCTION 1 281 - #define ETYPE_TRAP 2 282 - #define ETYPE_ARITHMETIC 3 283 - #define ETYPE_PRECISE_BUS_ERROR 4 284 - #define ETYPE_IMPRECISE_BUS_ERROR 5 285 - #define ETYPE_COPROCESSOR 6 286 - #define ETYPE_RESERVED_VALUE 7 287 - #define ETYPE_NONEXISTENT_MEM_ADDRESS 8 288 - #define ETYPE_MPZIU_CONTROL 9 289 - #define 
ETYPE_NEXT_PRECISE_STACK_OFL 10 290 - 291 - /* Kerenl reserves software ID */ 292 - #define SWID_RAISE_INTERRUPT_LEVEL 0x1a /* SWID_RAISE_INTERRUPT_LEVEL is used to 293 - * raise interrupt level for debug exception 294 - */ 295 - 296 - /****************************************************************************** 297 - * ir8: MERR (Machine Error Log Register) 298 - *****************************************************************************/ 299 - /* bit 0:30 reserved */ 300 - #define MERR_offBUSERR 31 /* Bus error caused by a load insn */ 301 - 302 - #define MERR_mskBUSERR ( 0x1 << MERR_offBUSERR ) 303 - 304 - /****************************************************************************** 305 - * ir9: IPC (Interruption Program Counter Register) 306 - * ir10: P_IPC (Previous IPC Register) 307 - * ir11: OIPC (Overflow Interruption Program Counter Register) 308 - *****************************************************************************/ 309 - 310 - /* This is the shadow stack register of the Program Counter */ 311 - 312 - /****************************************************************************** 313 - * ir12: P_P0 (Previous P0 Register) 314 - * ir13: P_P1 (Previous P1 Register) 315 - *****************************************************************************/ 316 - 317 - /* These are shadow registers of $p0 and $p1 */ 318 - 319 - /****************************************************************************** 320 - * ir14: INT_MASK (Interruption Masking Register) 321 - *****************************************************************************/ 322 - #define INT_MASK_offH0IM 0 /* Hardware Interrupt 0 Mask bit */ 323 - #define INT_MASK_offH1IM 1 /* Hardware Interrupt 1 Mask bit */ 324 - #define INT_MASK_offH2IM 2 /* Hardware Interrupt 2 Mask bit */ 325 - #define INT_MASK_offH3IM 3 /* Hardware Interrupt 3 Mask bit */ 326 - #define INT_MASK_offH4IM 4 /* Hardware Interrupt 4 Mask bit */ 327 - #define INT_MASK_offH5IM 5 /* Hardware Interrupt 5 Mask bit */ 
328 - /* bit 6:15 reserved */ 329 - #define INT_MASK_offSIM 16 /* Software Interrupt Mask bit */ 330 - /* bit 17:29 reserved */ 331 - #define INT_MASK_offIDIVZE 30 /* Enable detection for Divide-By-Zero */ 332 - #define INT_MASK_offDSSIM 31 /* Default Single Stepping Interruption Mask */ 333 - 334 - #define INT_MASK_mskH0IM ( 0x1 << INT_MASK_offH0IM ) 335 - #define INT_MASK_mskH1IM ( 0x1 << INT_MASK_offH1IM ) 336 - #define INT_MASK_mskH2IM ( 0x1 << INT_MASK_offH2IM ) 337 - #define INT_MASK_mskH3IM ( 0x1 << INT_MASK_offH3IM ) 338 - #define INT_MASK_mskH4IM ( 0x1 << INT_MASK_offH4IM ) 339 - #define INT_MASK_mskH5IM ( 0x1 << INT_MASK_offH5IM ) 340 - #define INT_MASK_mskSIM ( 0x1 << INT_MASK_offSIM ) 341 - #define INT_MASK_mskIDIVZE ( 0x1 << INT_MASK_offIDIVZE ) 342 - #define INT_MASK_mskDSSIM ( 0x1 << INT_MASK_offDSSIM ) 343 - 344 - #define INT_MASK_INITAIAL_VAL (INT_MASK_mskDSSIM|INT_MASK_mskIDIVZE) 345 - 346 - /****************************************************************************** 347 - * ir15: INT_PEND (Interrupt Pending Register) 348 - *****************************************************************************/ 349 - #define INT_PEND_offH0I 0 /* Hardware Interrupt 0 pending bit */ 350 - #define INT_PEND_offH1I 1 /* Hardware Interrupt 1 pending bit */ 351 - #define INT_PEND_offH2I 2 /* Hardware Interrupt 2 pending bit */ 352 - #define INT_PEND_offH3I 3 /* Hardware Interrupt 3 pending bit */ 353 - #define INT_PEND_offH4I 4 /* Hardware Interrupt 4 pending bit */ 354 - #define INT_PEND_offH5I 5 /* Hardware Interrupt 5 pending bit */ 355 - 356 - #define INT_PEND_offCIPL 0 /* Current Interrupt Priority Level */ 357 - 358 - /* bit 6:15 reserved */ 359 - #define INT_PEND_offSWI 16 /* Software Interrupt pending bit */ 360 - /* bit 17:31 reserved */ 361 - 362 - #define INT_PEND_mskH0I ( 0x1 << INT_PEND_offH0I ) 363 - #define INT_PEND_mskH1I ( 0x1 << INT_PEND_offH1I ) 364 - #define INT_PEND_mskH2I ( 0x1 << INT_PEND_offH2I ) 365 - #define INT_PEND_mskH3I ( 0x1 << 
INT_PEND_offH3I ) 366 - #define INT_PEND_mskH4I ( 0x1 << INT_PEND_offH4I ) 367 - #define INT_PEND_mskH5I ( 0x1 << INT_PEND_offH5I ) 368 - #define INT_PEND_mskCIPL ( 0x1 << INT_PEND_offCIPL ) 369 - #define INT_PEND_mskSWI ( 0x1 << INT_PEND_offSWI ) 370 - 371 - /****************************************************************************** 372 - * mr0: MMU_CTL (MMU Control Register) 373 - *****************************************************************************/ 374 - #define MMU_CTL_offD 0 /* Default minimum page size */ 375 - #define MMU_CTL_offNTC0 1 /* Non-Translated Cachebility of partition 0 */ 376 - #define MMU_CTL_offNTC1 3 /* Non-Translated Cachebility of partition 1 */ 377 - #define MMU_CTL_offNTC2 5 /* Non-Translated Cachebility of partition 2 */ 378 - #define MMU_CTL_offNTC3 7 /* Non-Translated Cachebility of partition 3 */ 379 - #define MMU_CTL_offTBALCK 9 /* TLB all-lock resolution scheme */ 380 - #define MMU_CTL_offMPZIU 10 /* Multiple Page Size In Use bit */ 381 - #define MMU_CTL_offNTM0 11 /* Non-Translated VA to PA of partition 0 */ 382 - #define MMU_CTL_offNTM1 13 /* Non-Translated VA to PA of partition 1 */ 383 - #define MMU_CTL_offNTM2 15 /* Non-Translated VA to PA of partition 2 */ 384 - #define MMU_CTL_offNTM3 17 /* Non-Translated VA to PA of partition 3 */ 385 - #define MMU_CTL_offUNA 23 /* Unaligned access */ 386 - /* bit 24:31 reserved */ 387 - 388 - #define MMU_CTL_mskD ( 0x1 << MMU_CTL_offD ) 389 - #define MMU_CTL_mskNTC0 ( 0x3 << MMU_CTL_offNTC0 ) 390 - #define MMU_CTL_mskNTC1 ( 0x3 << MMU_CTL_offNTC1 ) 391 - #define MMU_CTL_mskNTC2 ( 0x3 << MMU_CTL_offNTC2 ) 392 - #define MMU_CTL_mskNTC3 ( 0x3 << MMU_CTL_offNTC3 ) 393 - #define MMU_CTL_mskTBALCK ( 0x1 << MMU_CTL_offTBALCK ) 394 - #define MMU_CTL_mskMPZIU ( 0x1 << MMU_CTL_offMPZIU ) 395 - #define MMU_CTL_mskNTM0 ( 0x3 << MMU_CTL_offNTM0 ) 396 - #define MMU_CTL_mskNTM1 ( 0x3 << MMU_CTL_offNTM1 ) 397 - #define MMU_CTL_mskNTM2 ( 0x3 << MMU_CTL_offNTM2 ) 398 - #define MMU_CTL_mskNTM3 ( 
0x3 << MMU_CTL_offNTM3 ) 399 - 400 - #define MMU_CTL_D4KB 0 401 - #define MMU_CTL_D8KB 1 402 - #define MMU_CTL_UNA ( 0x1 << MMU_CTL_offUNA ) 403 - 404 - #define MMU_CTL_CACHEABLE_NON 0 405 - #define MMU_CTL_CACHEABLE_WB 2 406 - #define MMU_CTL_CACHEABLE_WT 3 407 - 408 - /****************************************************************************** 409 - * mr1: L1_PPTB (L1 Physical Page Table Base Register) 410 - *****************************************************************************/ 411 - #define L1_PPTB_offNV 0 /* Enable Hardware Page Table Walker (HPTWK) */ 412 - /* bit 1:11 reserved */ 413 - #define L1_PPTB_offBASE 12 /* First level physical page table base address */ 414 - 415 - #define L1_PPTB_mskNV ( 0x1 << L1_PPTB_offNV ) 416 - #define L1_PPTB_mskBASE ( 0xFFFFF << L1_PPTB_offBASE ) 417 - 418 - /****************************************************************************** 419 - * mr2: TLB_VPN (TLB Access VPN Register) 420 - *****************************************************************************/ 421 - /* bit 0:11 reserved */ 422 - #define TLB_VPN_offVPN 12 /* Virtual Page Number */ 423 - 424 - #define TLB_VPN_mskVPN ( 0xFFFFF << TLB_VPN_offVPN ) 425 - 426 - /****************************************************************************** 427 - * mr3: TLB_DATA (TLB Access Data Register) 428 - *****************************************************************************/ 429 - #define TLB_DATA_offV 0 /* PTE is valid and present */ 430 - #define TLB_DATA_offM 1 /* Page read/write access privilege */ 431 - #define TLB_DATA_offD 4 /* Dirty bit */ 432 - #define TLB_DATA_offX 5 /* Executable bit */ 433 - #define TLB_DATA_offA 6 /* Access bit */ 434 - #define TLB_DATA_offG 7 /* Global page (shared across contexts) */ 435 - #define TLB_DATA_offC 8 /* Cacheability atribute */ 436 - /* bit 11:11 reserved */ 437 - #define TLB_DATA_offPPN 12 /* Phisical Page Number */ 438 - 439 - #define TLB_DATA_mskV ( 0x1 << TLB_DATA_offV ) 440 - #define TLB_DATA_mskM ( 0x7 
<< TLB_DATA_offM ) 441 - #define TLB_DATA_mskD ( 0x1 << TLB_DATA_offD ) 442 - #define TLB_DATA_mskX ( 0x1 << TLB_DATA_offX ) 443 - #define TLB_DATA_mskA ( 0x1 << TLB_DATA_offA ) 444 - #define TLB_DATA_mskG ( 0x1 << TLB_DATA_offG ) 445 - #define TLB_DATA_mskC ( 0x7 << TLB_DATA_offC ) 446 - #define TLB_DATA_mskPPN ( 0xFFFFF << TLB_DATA_offPPN ) 447 - 448 - #ifdef CONFIG_CPU_DCACHE_WRITETHROUGH 449 - #define TLB_DATA_kernel_text_attr (TLB_DATA_mskV|TLB_DATA_mskM|TLB_DATA_mskD|TLB_DATA_mskX|TLB_DATA_mskG|TLB_DATA_mskC) 450 - #else 451 - #define TLB_DATA_kernel_text_attr (TLB_DATA_mskV|TLB_DATA_mskM|TLB_DATA_mskD|TLB_DATA_mskX|TLB_DATA_mskG|(0x6 << TLB_DATA_offC)) 452 - #endif 453 - 454 - /****************************************************************************** 455 - * mr4: TLB_MISC (TLB Access Misc Register) 456 - *****************************************************************************/ 457 - #define TLB_MISC_offACC_PSZ 0 /* Page size of a PTE entry */ 458 - #define TLB_MISC_offCID 4 /* Context id */ 459 - /* bit 13:31 reserved */ 460 - 461 - #define TLB_MISC_mskACC_PSZ ( 0xF << TLB_MISC_offACC_PSZ ) 462 - #define TLB_MISC_mskCID ( 0x1FF << TLB_MISC_offCID ) 463 - 464 - /****************************************************************************** 465 - * mr5: VLPT_IDX (Virtual Linear Page Table Index Register) 466 - *****************************************************************************/ 467 - #define VLPT_IDX_offZERO 0 /* Always 0 */ 468 - #define VLPT_IDX_offEVPN 2 /* Exception Virtual Page Number */ 469 - #define VLPT_IDX_offVLPTB 22 /* Base VA of VLPT */ 470 - 471 - #define VLPT_IDX_mskZERO ( 0x3 << VLPT_IDX_offZERO ) 472 - #define VLPT_IDX_mskEVPN ( 0xFFFFF << VLPT_IDX_offEVPN ) 473 - #define VLPT_IDX_mskVLPTB ( 0x3FF << VLPT_IDX_offVLPTB ) 474 - 475 - /****************************************************************************** 476 - * mr6: ILMB (Instruction Local Memory Base Register) 477 - 
 *****************************************************************************/
#define ILMB_offIEN 0 /* Enable ILM */
#define ILMB_offILMSZ 1 /* Size of ILM */
/* bit 5:19 reserved */
#define ILMB_offIBPA 20 /* Base PA of ILM */

#define ILMB_mskIEN ( 0x1 << ILMB_offIEN )
#define ILMB_mskILMSZ ( 0xF << ILMB_offILMSZ )
#define ILMB_mskIBPA ( 0xFFF << ILMB_offIBPA )

/******************************************************************************
 * mr7: DLMB (Data Local Memory Base Register)
 *****************************************************************************/
#define DLMB_offDEN 0 /* Enable DLM */
#define DLMB_offDLMSZ 1 /* Size of DLM */
#define DLMB_offDBM 5 /* Enable Double-Buffer Mode for DLM */
#define DLMB_offDBB 6 /* Double-buffer bank which can be accessed by the processor */
/* bit 7:19 reserved */
#define DLMB_offDBPA 20 /* Base PA of DLM */

#define DLMB_mskDEN ( 0x1 << DLMB_offDEN )
#define DLMB_mskDLMSZ ( 0xF << DLMB_offDLMSZ )
#define DLMB_mskDBM ( 0x1 << DLMB_offDBM )
#define DLMB_mskDBB ( 0x1 << DLMB_offDBB )
#define DLMB_mskDBPA ( 0xFFF << DLMB_offDBPA )

/******************************************************************************
 * mr8: CACHE_CTL (Cache Control Register)
 *****************************************************************************/
#define CACHE_CTL_offIC_EN 0 /* Enable I-cache */
#define CACHE_CTL_offDC_EN 1 /* Enable D-cache */
#define CACHE_CTL_offICALCK 2 /* I-cache all-lock resolution scheme */
#define CACHE_CTL_offDCALCK 3 /* D-cache all-lock resolution scheme */
#define CACHE_CTL_offDCCWF 4 /* Enable D-cache Critical Word Forwarding */
#define CACHE_CTL_offDCPMW 5 /* Enable D-cache concurrent miss and write-back processing */
/* bit 6:31 reserved */

#define CACHE_CTL_mskIC_EN ( 0x1 << CACHE_CTL_offIC_EN )
#define CACHE_CTL_mskDC_EN ( 0x1 << CACHE_CTL_offDC_EN )
#define CACHE_CTL_mskICALCK ( 0x1 << CACHE_CTL_offICALCK )
#define CACHE_CTL_mskDCALCK ( 0x1 << CACHE_CTL_offDCALCK )
#define CACHE_CTL_mskDCCWF ( 0x1 << CACHE_CTL_offDCCWF )
#define CACHE_CTL_mskDCPMW ( 0x1 << CACHE_CTL_offDCPMW )

/******************************************************************************
 * mr9: HSMP_SADDR (High Speed Memory Port Starting Address)
 *****************************************************************************/
#define HSMP_SADDR_offEN 0 /* Enable control bit for the High Speed Memory port */
/* bit 1:19 reserved */

#define HSMP_SADDR_offRANGE 1 /* Denote the address range (only defined in HSMP v2) */
#define HSMP_SADDR_offSADDR 20 /* Starting base PA of the High Speed Memory Port region */

#define HSMP_SADDR_mskEN ( 0x1 << HSMP_SADDR_offEN )
#define HSMP_SADDR_mskRANGE ( 0xFFF << HSMP_SADDR_offRANGE )
#define HSMP_SADDR_mskSADDR ( 0xFFF << HSMP_SADDR_offSADDR )

/******************************************************************************
 * mr10: HSMP_EADDR (High Speed Memory Port Ending Address)
 *****************************************************************************/
/* bit 0:19 reserved */
#define HSMP_EADDR_offEADDR 20

#define HSMP_EADDR_mskEADDR ( 0xFFF << HSMP_EADDR_offEADDR )

/******************************************************************************
 * dr0+(n*5): BPCn (n=0-7) (Breakpoint Control Register)
 *****************************************************************************/
#define BPC_offWP 0 /* Configuration of BPAn */
#define BPC_offEL 1 /* Enable BPAn */
#define BPC_offS 2 /* Data address comparison for a store instruction */
#define BPC_offP 3 /* Compared data address is PA */
#define BPC_offC 4 /* CID value is compared with the BPCIDn register */
#define BPC_offBE0 5 /* Enable byte mask for the comparison with register */
#define BPC_offBE1 6 /* Enable byte mask for the comparison with register */
#define BPC_offBE2 7 /* Enable byte mask for the comparison with register */
#define BPC_offBE3 8 /* Enable byte mask for the comparison with register */
#define BPC_offT 9 /* Enable breakpoint Embedded Tracer triggering operation */

#define BPC_mskWP ( 0x1 << BPC_offWP )
#define BPC_mskEL ( 0x1 << BPC_offEL )
#define BPC_mskS ( 0x1 << BPC_offS )
#define BPC_mskP ( 0x1 << BPC_offP )
#define BPC_mskC ( 0x1 << BPC_offC )
#define BPC_mskBE0 ( 0x1 << BPC_offBE0 )
#define BPC_mskBE1 ( 0x1 << BPC_offBE1 )
#define BPC_mskBE2 ( 0x1 << BPC_offBE2 )
#define BPC_mskBE3 ( 0x1 << BPC_offBE3 )
#define BPC_mskT ( 0x1 << BPC_offT )

/******************************************************************************
 * dr1+(n*5): BPAn (n=0-7) (Breakpoint Address Register)
 *****************************************************************************/

/* These registers contain the breakpoint address */

/******************************************************************************
 * dr2+(n*5): BPAMn (n=0-7) (Breakpoint Address Mask Register)
 *****************************************************************************/

/* These registers contain the address comparison mask for the BPAn register */

/******************************************************************************
 * dr3+(n*5): BPVn (n=0-7) (Breakpoint Data Value Register)
 *****************************************************************************/

/* The BPVn register contains the data value that will be compared with the
 * incoming load/store data value */

/******************************************************************************
 * dr4+(n*5): BPCIDn (n=0-7) (Breakpoint Context ID Register)
 *****************************************************************************/
#define BPCID_offCID 0 /* CID that will be compared with a process's CID */
/* bit 9:31 reserved */

#define BPCID_mskCID ( 0x1FF << BPCID_offCID )

/******************************************************************************
 * dr40: EDM_CFG (EDM Configuration Register)
 *****************************************************************************/
#define EDM_CFG_offBC 0 /* Number of hardware breakpoint sets implemented */
#define EDM_CFG_offDIMU 3 /* Debug Instruction Memory Unit exists */
/* bit 4:15 reserved */
#define EDM_CFG_offVER 16 /* EDM version */

#define EDM_CFG_mskBC ( 0x7 << EDM_CFG_offBC )
#define EDM_CFG_mskDIMU ( 0x1 << EDM_CFG_offDIMU )
#define EDM_CFG_mskVER ( 0xFFFF << EDM_CFG_offVER )

/******************************************************************************
 * dr41: EDMSW (EDM Status Word)
 *****************************************************************************/
#define EDMSW_offWV 0 /* Write Valid */
#define EDMSW_offRV 1 /* Read Valid */
#define EDMSW_offDE 2 /* Debug exception has occurred for this core */
/* bit 3:31 reserved */

#define EDMSW_mskWV ( 0x1 << EDMSW_offWV )
#define EDMSW_mskRV ( 0x1 << EDMSW_offRV )
#define EDMSW_mskDE ( 0x1 << EDMSW_offDE )

/******************************************************************************
 * dr42: EDM_CTL (EDM Control Register)
 *****************************************************************************/
/* bit 0:30 reserved */
#define EDM_CTL_offV3_EDM_MODE 6 /* EDM compatibility control bit */
#define EDM_CTL_offDEH_SEL 31 /* Controls where the debug exception is directed to */

#define EDM_CTL_mskV3_EDM_MODE ( 0x1 << EDM_CTL_offV3_EDM_MODE )
#define EDM_CTL_mskDEH_SEL ( 0x1 << EDM_CTL_offDEH_SEL )

/******************************************************************************
 * dr43: EDM_DTR (EDM Data Transfer Register)
 *****************************************************************************/

/* This register is used to exchange data between the embedded EDM logic
 * and the processor core */

/******************************************************************************
 * dr44: BPMTC (Breakpoint Match Trigger Counter Register)
 *****************************************************************************/
#define BPMTC_offBPMTC 0 /* Breakpoint match trigger counter value */
/* bit 16:31 reserved */

#define BPMTC_mskBPMTC ( 0xFFFF << BPMTC_offBPMTC )

/******************************************************************************
 * dr45: DIMBR (Debug Instruction Memory Base Register)
 *****************************************************************************/
/* bit 0:11 reserved */
#define DIMBR_offDIMB 12 /* Base address of the Debug Instruction Memory (DIM) */
#define DIMBR_mskDIMB ( 0xFFFFF << DIMBR_offDIMB )

/******************************************************************************
 * dr46: TECR0 (Trigger Event Control Register 0)
 * dr47: TECR1 (Trigger Event Control Register 1)
 *****************************************************************************/
#define TECR_offBP 0 /* Controls which BP is used as a trigger source */
#define TECR_offNMI 8 /* Use NMI as a trigger source */
#define TECR_offHWINT 9 /* Corresponding interrupt is used as a trigger source */
#define TECR_offEVIC 15 /* Enable HWINT as a trigger source in EVIC mode */
#define TECR_offSYS 16 /* Enable SYSCALL instruction as a trigger source */
#define TECR_offDBG 17 /* Enable debug exception as a trigger source */
#define TECR_offMRE 18 /* Enable MMU related exception as a trigger source */
#define TECR_offE 19 /* An exception is used as a trigger source */
/* bit 20:30 reserved */
#define TECR_offL 31 /* Link/Cascade TECR0 trigger event to TECR1 trigger event */

#define TECR_mskBP ( 0xFF << TECR_offBP )
#define TECR_mskNMI ( 0x1 << TECR_offNMI )
#define TECR_mskHWINT ( 0x3F << TECR_offHWINT )
#define TECR_mskEVIC ( 0x1 << TECR_offEVIC )
#define TECR_mskSYS ( 0x1 << TECR_offSYS )
#define TECR_mskDBG ( 0x1 << TECR_offDBG )
#define TECR_mskMRE ( 0x1 << TECR_offMRE )
#define TECR_mskE ( 0x1 << TECR_offE )
#define TECR_mskL ( 0x1 << TECR_offL )

/******************************************************************************
 * pfr0-2: PFMC0-2 (Performance Counter Register 0-2)
 *****************************************************************************/

/* These registers contain the performance event counts */

/******************************************************************************
 * pfr3: PFM_CTL (Performance Counter Control Register)
 *****************************************************************************/
#define PFM_CTL_offEN0 0 /* Enable PFMC0 */
#define PFM_CTL_offEN1 1 /* Enable PFMC1 */
#define PFM_CTL_offEN2 2 /* Enable PFMC2 */
#define PFM_CTL_offIE0 3 /* Enable interrupt for PFMC0 */
#define PFM_CTL_offIE1 4 /* Enable interrupt for PFMC1 */
#define PFM_CTL_offIE2 5 /* Enable interrupt for PFMC2 */
#define PFM_CTL_offOVF0 6 /* Overflow bit of PFMC0 */
#define PFM_CTL_offOVF1 7 /* Overflow bit of PFMC1 */
#define PFM_CTL_offOVF2 8 /* Overflow bit of PFMC2 */
#define PFM_CTL_offKS0 9 /* Enable superuser mode event counting for PFMC0 */
#define PFM_CTL_offKS1 10 /* Enable superuser mode event counting for PFMC1 */
#define PFM_CTL_offKS2 11 /* Enable superuser mode event counting for PFMC2 */
#define PFM_CTL_offKU0 12 /* Enable user mode event counting for PFMC0 */
#define PFM_CTL_offKU1 13 /* Enable user mode event counting for PFMC1 */
#define PFM_CTL_offKU2 14 /* Enable user mode event counting for PFMC2 */
#define PFM_CTL_offSEL0 15 /* The event selection for PFMC0 */
#define PFM_CTL_offSEL1 16 /* The event selection for PFMC1 */
#define PFM_CTL_offSEL2 22 /* The event selection for PFMC2 */
/* bit 28:31 reserved */

#define PFM_CTL_mskEN0 ( 0x01 << PFM_CTL_offEN0 )
#define PFM_CTL_mskEN1 ( 0x01 << PFM_CTL_offEN1 )
#define PFM_CTL_mskEN2 ( 0x01 << PFM_CTL_offEN2 )
#define PFM_CTL_mskIE0 ( 0x01 << PFM_CTL_offIE0 )
#define PFM_CTL_mskIE1 ( 0x01 << PFM_CTL_offIE1 )
#define PFM_CTL_mskIE2 ( 0x01 << PFM_CTL_offIE2 )
#define PFM_CTL_mskOVF0 ( 0x01 << PFM_CTL_offOVF0 )
#define PFM_CTL_mskOVF1 ( 0x01 << PFM_CTL_offOVF1 )
#define PFM_CTL_mskOVF2 ( 0x01 << PFM_CTL_offOVF2 )
#define PFM_CTL_mskKS0 ( 0x01 << PFM_CTL_offKS0 )
#define PFM_CTL_mskKS1 ( 0x01 << PFM_CTL_offKS1 )
#define PFM_CTL_mskKS2 ( 0x01 << PFM_CTL_offKS2 )
#define PFM_CTL_mskKU0 ( 0x01 << PFM_CTL_offKU0 )
#define PFM_CTL_mskKU1 ( 0x01 << PFM_CTL_offKU1 )
#define PFM_CTL_mskKU2 ( 0x01 << PFM_CTL_offKU2 )
#define PFM_CTL_mskSEL0 ( 0x01 << PFM_CTL_offSEL0 )
#define PFM_CTL_mskSEL1 ( 0x3F << PFM_CTL_offSEL1 )
#define PFM_CTL_mskSEL2 ( 0x3F << PFM_CTL_offSEL2 )

/******************************************************************************
 * SDZ_CTL (Structure Downsizing Control Register)
 *****************************************************************************/
#define SDZ_CTL_offICDZ 0 /* I-cache downsizing control */
#define SDZ_CTL_offDCDZ 3 /* D-cache downsizing control */
#define SDZ_CTL_offMTBDZ 6 /* MTLB downsizing control */
#define SDZ_CTL_offBTBDZ 9 /* Branch Target Table downsizing control */
/* bit 12:31 reserved */
#define SDZ_CTL_mskICDZ ( 0x07 << SDZ_CTL_offICDZ )
#define SDZ_CTL_mskDCDZ ( 0x07 << SDZ_CTL_offDCDZ )
#define SDZ_CTL_mskMTBDZ ( 0x07 << SDZ_CTL_offMTBDZ )
#define SDZ_CTL_mskBTBDZ ( 0x07 << SDZ_CTL_offBTBDZ )

/******************************************************************************
 * N13MISC_CTL (N13 Miscellaneous Control Register)
 *****************************************************************************/
#define N13MISC_CTL_offBTB 0 /* Disable Branch Target Buffer */
#define N13MISC_CTL_offRTP 1 /* Disable Return Target Predictor */
#define N13MISC_CTL_offPTEPF 2 /* Disable HPTWK L2 PTE prefetch */
#define N13MISC_CTL_offSP_SHADOW_EN 4 /* Enable shadow stack pointers */
#define MISC_CTL_offHWPRE 11 /* Enable hardware prefetch */
/* bit 6, 9:31 reserved */

#define N13MISC_CTL_mskBTB ( 0x1 << N13MISC_CTL_offBTB )
#define N13MISC_CTL_mskRTP ( 0x1 << N13MISC_CTL_offRTP )
#define N13MISC_CTL_mskPTEPF ( 0x1 << N13MISC_CTL_offPTEPF )
#define N13MISC_CTL_mskSP_SHADOW_EN ( 0x1 << N13MISC_CTL_offSP_SHADOW_EN )
#define MISC_CTL_mskHWPRE_EN ( 0x1 << MISC_CTL_offHWPRE )

#ifdef CONFIG_HW_PRE
#define MISC_init (N13MISC_CTL_mskBTB | N13MISC_CTL_mskRTP | N13MISC_CTL_mskSP_SHADOW_EN | MISC_CTL_mskHWPRE_EN)
#else
#define MISC_init (N13MISC_CTL_mskBTB | N13MISC_CTL_mskRTP | N13MISC_CTL_mskSP_SHADOW_EN)
#endif

/******************************************************************************
 * PRUSR_ACC_CTL (Privileged Resource User Access Control Registers)
 *****************************************************************************/
#define PRUSR_ACC_CTL_offDMA_EN 0 /* Allow user mode access of DMA registers */
#define PRUSR_ACC_CTL_offPFM_EN 1 /* Allow user mode access of PFM registers */

#define PRUSR_ACC_CTL_mskDMA_EN ( 0x1 << PRUSR_ACC_CTL_offDMA_EN )
#define PRUSR_ACC_CTL_mskPFM_EN ( 0x1 << PRUSR_ACC_CTL_offPFM_EN )

/******************************************************************************
 * dmar0: DMA_CFG (DMA Configuration Register)
 *****************************************************************************/
#define DMA_CFG_offNCHN 0 /* The number of DMA channels implemented */
#define DMA_CFG_offUNEA 2 /* Un-aligned External Address transfer feature */
#define DMA_CFG_off2DET 3 /* 2-D Element Transfer feature */
/* bit 4:15 reserved */
#define DMA_CFG_offVER 16 /* DMA architecture and implementation version */

#define DMA_CFG_mskNCHN ( 0x3 << DMA_CFG_offNCHN )
#define DMA_CFG_mskUNEA ( 0x1 << DMA_CFG_offUNEA )
#define DMA_CFG_msk2DET ( 0x1 << DMA_CFG_off2DET )
#define DMA_CFG_mskVER ( 0xFFFF << DMA_CFG_offVER )

/******************************************************************************
 * dmar1: DMA_GCSW (DMA Global Control and Status Word Register)
 *****************************************************************************/
#define DMA_GCSW_offC0STAT 0 /* DMA channel 0 state */
#define DMA_GCSW_offC1STAT 3 /* DMA channel 1 state */
/* bit 6:11 reserved */
#define DMA_GCSW_offC0INT 12 /* DMA channel 0 generates an interrupt */
#define DMA_GCSW_offC1INT 13 /* DMA channel 1 generates an interrupt */
/* bit 14:30 reserved */
#define DMA_GCSW_offEN 31 /* Enable DMA engine */

#define DMA_GCSW_mskC0STAT ( 0x7 << DMA_GCSW_offC0STAT )
#define DMA_GCSW_mskC1STAT ( 0x7 << DMA_GCSW_offC1STAT )
#define DMA_GCSW_mskC0INT ( 0x1 << DMA_GCSW_offC0INT )
#define DMA_GCSW_mskC1INT ( 0x1 << DMA_GCSW_offC1INT )
#define DMA_GCSW_mskEN ( 0x1 << DMA_GCSW_offEN )

/******************************************************************************
 * dmar2: DMA_CHNSEL (DMA Channel Selection Register)
 *****************************************************************************/
#define DMA_CHNSEL_offCHAN 0 /* Selected channel number */
/* bit 2:31 reserved */

#define DMA_CHNSEL_mskCHAN ( 0x3 << DMA_CHNSEL_offCHAN )

/******************************************************************************
 * dmar3: DMA_ACT (DMA Action Register)
 *****************************************************************************/
#define DMA_ACT_offACMD 0 /* DMA Action Command */
/* bit 2:31 reserved */
#define DMA_ACT_mskACMD ( 0x3 << DMA_ACT_offACMD )

/******************************************************************************
 * dmar4: DMA_SETUP (DMA Setup Register)
 *****************************************************************************/
#define DMA_SETUP_offLM 0 /* Local Memory Selection */
#define DMA_SETUP_offTDIR 1 /* Transfer Direction */
#define DMA_SETUP_offTES 2 /* Transfer Element Size */
#define DMA_SETUP_offESTR 4 /* External memory transfer Stride */
#define DMA_SETUP_offCIE 16 /* Interrupt Enable on Completion */
#define DMA_SETUP_offSIE 17 /* Interrupt Enable on explicit Stop */
#define DMA_SETUP_offEIE 18 /* Interrupt Enable on Error */
#define DMA_SETUP_offUE 19 /* Enable the Un-aligned External Address */
#define DMA_SETUP_off2DE 20 /* Enable the 2-D External Transfer */
#define DMA_SETUP_offCOA 21 /* Transfer Coalescable */
/* bit 22:31 reserved */

#define DMA_SETUP_mskLM ( 0x1 << DMA_SETUP_offLM )
#define DMA_SETUP_mskTDIR ( 0x1 << DMA_SETUP_offTDIR )
#define DMA_SETUP_mskTES ( 0x3 << DMA_SETUP_offTES )
#define DMA_SETUP_mskESTR ( 0xFFF << DMA_SETUP_offESTR )
#define DMA_SETUP_mskCIE ( 0x1 << DMA_SETUP_offCIE )
#define DMA_SETUP_mskSIE ( 0x1 << DMA_SETUP_offSIE )
#define DMA_SETUP_mskEIE ( 0x1 << DMA_SETUP_offEIE )
#define DMA_SETUP_mskUE ( 0x1 << DMA_SETUP_offUE )
#define DMA_SETUP_msk2DE ( 0x1 << DMA_SETUP_off2DE )
#define DMA_SETUP_mskCOA ( 0x1 << DMA_SETUP_offCOA )

/******************************************************************************
 * dmar5: DMA_ISADDR (DMA Internal Start Address Register)
 *****************************************************************************/
#define DMA_ISADDR_offISADDR 0 /* Internal Start Address */
/* bit 20:31 reserved */
#define DMA_ISADDR_mskISADDR ( 0xFFFFF << DMA_ISADDR_offISADDR )

/******************************************************************************
 * dmar6: DMA_ESADDR (DMA External Start Address Register)
 *****************************************************************************/
/* This register holds the External Start Address */

/******************************************************************************
 * dmar7: DMA_TCNT (DMA Transfer Element Count Register)
 *****************************************************************************/
#define DMA_TCNT_offTCNT 0 /* DMA transfer element count */
/* bit 18:31 reserved */
#define DMA_TCNT_mskTCNT ( 0x3FFFF << DMA_TCNT_offTCNT )

/******************************************************************************
 * dmar8: DMA_STATUS (DMA Status Register)
 *****************************************************************************/
#define DMA_STATUS_offSTAT 0 /* DMA channel state */
#define DMA_STATUS_offSTUNA 3 /* Un-aligned error on External Stride value */
#define DMA_STATUS_offDERR 4 /* DMA Transfer Disruption Error */
#define DMA_STATUS_offEUNA 5 /* Un-aligned error on the External address */
#define DMA_STATUS_offIUNA 6 /* Un-aligned error on the Internal address */
#define DMA_STATUS_offIOOR 7 /* Out-Of-Range error on the Internal address */
#define DMA_STATUS_offEBUS 8 /* Bus Error on an External DMA transfer */
#define DMA_STATUS_offESUP 9 /* DMA setup error */
/* bit 10:31 reserved */

#define DMA_STATUS_mskSTAT ( 0x7 << DMA_STATUS_offSTAT )
#define DMA_STATUS_mskSTUNA ( 0x1 << DMA_STATUS_offSTUNA )
#define DMA_STATUS_mskDERR ( 0x1 << DMA_STATUS_offDERR )
#define DMA_STATUS_mskEUNA ( 0x1 << DMA_STATUS_offEUNA )
#define DMA_STATUS_mskIUNA ( 0x1 << DMA_STATUS_offIUNA )
#define DMA_STATUS_mskIOOR ( 0x1 << DMA_STATUS_offIOOR )
#define DMA_STATUS_mskEBUS ( 0x1 << DMA_STATUS_offEBUS )
#define DMA_STATUS_mskESUP ( 0x1 << DMA_STATUS_offESUP )

/******************************************************************************
 * dmar9: DMA_2DSET (DMA 2D Setup Register)
 *****************************************************************************/
#define DMA_2DSET_offWECNT 0 /* The Width Element Count for a 2-D region */
#define DMA_2DSET_offHTSTR 16 /* The Height Stride for a 2-D region */

#define DMA_2DSET_mskHTSTR ( 0xFFFF << DMA_2DSET_offHTSTR )
#define DMA_2DSET_mskWECNT ( 0xFFFF << DMA_2DSET_offWECNT )

/******************************************************************************
 * dmar10: DMA_2DSCTL (DMA 2D Startup Control Register)
 *****************************************************************************/
#define DMA_2DSCTL_offSTWECNT 0 /* Startup Width Element Count for a 2-D region */
/* bit 16:31 reserved */

#define DMA_2DSCTL_mskSTWECNT ( 0xFFFF << DMA_2DSCTL_offSTWECNT )

/******************************************************************************
 * fpcsr: FPCSR (Floating-Point Control Status Register)
 *****************************************************************************/
#define FPCSR_offRM 0
#define FPCSR_offIVO 2
#define FPCSR_offDBZ 3
#define FPCSR_offOVF 4
#define FPCSR_offUDF 5
#define FPCSR_offIEX 6
#define FPCSR_offIVOE 7
#define FPCSR_offDBZE 8
#define FPCSR_offOVFE 9
#define FPCSR_offUDFE 10
#define FPCSR_offIEXE 11
#define FPCSR_offDNZ 12
#define FPCSR_offIVOT 13
#define FPCSR_offDBZT 14
#define FPCSR_offOVFT 15
#define FPCSR_offUDFT 16
#define FPCSR_offIEXT 17
#define FPCSR_offDNIT 18
#define FPCSR_offRIT 19

#define FPCSR_mskRM ( 0x3 << FPCSR_offRM )
#define FPCSR_mskIVO ( 0x1 << FPCSR_offIVO )
#define FPCSR_mskDBZ ( 0x1 << FPCSR_offDBZ )
#define FPCSR_mskOVF ( 0x1 << FPCSR_offOVF )
#define FPCSR_mskUDF ( 0x1 << FPCSR_offUDF )
#define FPCSR_mskIEX ( 0x1 << FPCSR_offIEX )
#define FPCSR_mskIVOE ( 0x1 << FPCSR_offIVOE )
#define FPCSR_mskDBZE ( 0x1 << FPCSR_offDBZE )
#define FPCSR_mskOVFE ( 0x1 << FPCSR_offOVFE )
#define FPCSR_mskUDFE ( 0x1 << FPCSR_offUDFE )
#define FPCSR_mskIEXE ( 0x1 << FPCSR_offIEXE )
#define FPCSR_mskDNZ ( 0x1 << FPCSR_offDNZ )
#define FPCSR_mskIVOT ( 0x1 << FPCSR_offIVOT )
#define FPCSR_mskDBZT ( 0x1 << FPCSR_offDBZT )
#define FPCSR_mskOVFT ( 0x1 << FPCSR_offOVFT )
#define FPCSR_mskUDFT ( 0x1 << FPCSR_offUDFT )
#define FPCSR_mskIEXT ( 0x1 << FPCSR_offIEXT )
#define FPCSR_mskDNIT ( 0x1 << FPCSR_offDNIT )
#define FPCSR_mskRIT ( 0x1 << FPCSR_offRIT )
#define FPCSR_mskALL (FPCSR_mskIVO | FPCSR_mskDBZ | FPCSR_mskOVF | FPCSR_mskUDF | FPCSR_mskIEX)
#define FPCSR_mskALLE_NO_UDF_IEXE (FPCSR_mskIVOE | FPCSR_mskDBZE | FPCSR_mskOVFE)
#define FPCSR_mskALLE (FPCSR_mskIVOE | FPCSR_mskDBZE | FPCSR_mskOVFE | FPCSR_mskUDFE | FPCSR_mskIEXE)
#define FPCSR_mskALLT (FPCSR_mskIVOT | FPCSR_mskDBZT | FPCSR_mskOVFT | FPCSR_mskUDFT | FPCSR_mskIEXT | FPCSR_mskDNIT | FPCSR_mskRIT)

/******************************************************************************
 * fpcfg: FPCFG (Floating-Point Configuration Register)
 *****************************************************************************/
#define FPCFG_offSP 0
#define FPCFG_offDP 1
#define FPCFG_offFREG 2
#define FPCFG_offFMA 4
#define FPCFG_offIMVER 22
#define FPCFG_offAVER 27

#define FPCFG_mskSP ( 0x1 << FPCFG_offSP )
#define FPCFG_mskDP ( 0x1 << FPCFG_offDP )
#define FPCFG_mskFREG ( 0x3 << FPCFG_offFREG )
#define FPCFG_mskFMA ( 0x1 << FPCFG_offFMA )
#define FPCFG_mskIMVER ( 0x1F << FPCFG_offIMVER )
#define FPCFG_mskAVER ( 0x1F << FPCFG_offAVER )

/* 8 single-precision or 4 double-precision registers are available */
#define SP8_DP4_reg 0
/* 16 single-precision or 8 double-precision registers are available */
#define SP16_DP8_reg 1
/* 32 single-precision or 16 double-precision registers are available */
#define SP32_DP16_reg 2
/* 32 single-precision or 32 double-precision registers are available */
#define SP32_DP32_reg 3

/******************************************************************************
 * fucpr: FUCOP_CTL (FPU and Coprocessor Enable Control Register)
 *****************************************************************************/
#define FUCOP_CTL_offCP0EN 0
#define FUCOP_CTL_offCP1EN 1
#define FUCOP_CTL_offCP2EN 2
#define FUCOP_CTL_offCP3EN 3
#define FUCOP_CTL_offAUEN 31

#define FUCOP_CTL_mskCP0EN ( 0x1 << FUCOP_CTL_offCP0EN )
#define FUCOP_CTL_mskCP1EN ( 0x1 << FUCOP_CTL_offCP1EN )
#define FUCOP_CTL_mskCP2EN ( 0x1 << FUCOP_CTL_offCP2EN )
#define FUCOP_CTL_mskCP3EN ( 0x1 << FUCOP_CTL_offCP3EN )
#define FUCOP_CTL_mskAUEN ( 0x1 << FUCOP_CTL_offAUEN )

#endif /* __NDS32_BITFIELD_H__ */
-12
arch/nds32/include/asm/cache.h
/* SPDX-License-Identifier: GPL-2.0 */
// Copyright (C) 2005-2017 Andes Technology Corporation

#ifndef __NDS32_CACHE_H__
#define __NDS32_CACHE_H__

#define L1_CACHE_BYTES 32
#define L1_CACHE_SHIFT 5

#define ARCH_DMA_MINALIGN L1_CACHE_BYTES

#endif /* __NDS32_CACHE_H__ */
-13
arch/nds32/include/asm/cache_info.h
/* SPDX-License-Identifier: GPL-2.0 */
// Copyright (C) 2005-2017 Andes Technology Corporation

struct cache_info {
	unsigned char ways;
	unsigned char line_size;
	unsigned short sets;
	unsigned short size;
#if defined(CONFIG_CPU_CACHE_ALIASING)
	unsigned short aliasing_num;
	unsigned int aliasing_mask;
#endif
};
-53
arch/nds32/include/asm/cacheflush.h
/* SPDX-License-Identifier: GPL-2.0 */
// Copyright (C) 2005-2017 Andes Technology Corporation

#ifndef __NDS32_CACHEFLUSH_H__
#define __NDS32_CACHEFLUSH_H__

#include <linux/mm.h>

#define PG_dcache_dirty PG_arch_1

void flush_icache_range(unsigned long start, unsigned long end);
#define flush_icache_range flush_icache_range

void flush_icache_page(struct vm_area_struct *vma, struct page *page);
#define flush_icache_page flush_icache_page

#ifdef CONFIG_CPU_CACHE_ALIASING
void flush_cache_mm(struct mm_struct *mm);
void flush_cache_dup_mm(struct mm_struct *mm);
void flush_cache_range(struct vm_area_struct *vma,
		       unsigned long start, unsigned long end);
void flush_cache_page(struct vm_area_struct *vma,
		      unsigned long addr, unsigned long pfn);
void flush_cache_kmaps(void);
void flush_cache_vmap(unsigned long start, unsigned long end);
void flush_cache_vunmap(unsigned long start, unsigned long end);

#define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
void flush_dcache_page(struct page *page);
void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
		       unsigned long vaddr, void *dst, void *src, int len);
void copy_from_user_page(struct vm_area_struct *vma, struct page *page,
			 unsigned long vaddr, void *dst, void *src, int len);

#define ARCH_HAS_FLUSH_ANON_PAGE
void flush_anon_page(struct vm_area_struct *vma,
		     struct page *page, unsigned long vaddr);

#define ARCH_IMPLEMENTS_FLUSH_KERNEL_VMAP_RANGE 1
void flush_kernel_vmap_range(void *addr, int size);
void invalidate_kernel_vmap_range(void *addr, int size);
#define flush_dcache_mmap_lock(mapping) xa_lock_irq(&(mapping)->i_pages)
#define flush_dcache_mmap_unlock(mapping) xa_unlock_irq(&(mapping)->i_pages)

#else
void flush_icache_user_page(struct vm_area_struct *vma, struct page *page,
			    unsigned long addr, int len);
#define flush_icache_user_page flush_icache_user_page

#include <asm-generic/cacheflush.h>
#endif

#endif /* __NDS32_CACHEFLUSH_H__ */
-12
arch/nds32/include/asm/current.h
/* SPDX-License-Identifier: GPL-2.0 */
// Copyright (C) 2005-2017 Andes Technology Corporation

#ifndef _ASM_NDS32_CURRENT_H
#define _ASM_NDS32_CURRENT_H

#ifndef __ASSEMBLY__
register struct task_struct *current asm("$r25");
#endif /* __ASSEMBLY__ */
#define tsk $r25

#endif /* _ASM_NDS32_CURRENT_H */
-39
arch/nds32/include/asm/delay.h
/* SPDX-License-Identifier: GPL-2.0 */
// Copyright (C) 2005-2017 Andes Technology Corporation

#ifndef __NDS32_DELAY_H__
#define __NDS32_DELAY_H__

#include <asm/param.h>

/* There is no clocksource cycle counter in the CPU. */
static inline void __delay(unsigned long loops)
{
	__asm__ __volatile__(".align 2\n"
			     "1:\n"
			     "\taddi\t%0, %0, -1\n"
			     "\tbgtz\t%0, 1b\n"
			     : "=r"(loops)
			     : "0"(loops));
}

static inline void __udelay(unsigned long usecs, unsigned long lpj)
{
	usecs *= (unsigned long)(((0x8000000000000000ULL / (500000 / HZ)) +
				  0x80000000ULL) >> 32);
	usecs = (unsigned long)(((unsigned long long)usecs * lpj) >> 32);
	__delay(usecs);
}

#define udelay(usecs) __udelay((usecs), loops_per_jiffy)

/* Make sure "usecs *= ..." in udelay does not overflow. */
#if HZ >= 1000
#define MAX_UDELAY_MS 1
#elif HZ <= 200
#define MAX_UDELAY_MS 5
#else
#define MAX_UDELAY_MS (1000 / HZ)
#endif

#endif
-180
arch/nds32/include/asm/elf.h
/* SPDX-License-Identifier: GPL-2.0 */
// Copyright (C) 2005-2017 Andes Technology Corporation

#ifndef __ASMNDS32_ELF_H
#define __ASMNDS32_ELF_H

/*
 * ELF register definitions.
 */

#include <asm/ptrace.h>
#include <asm/fpu.h>
#include <linux/elf-em.h>

typedef unsigned long elf_greg_t;
typedef unsigned long elf_freg_t[3];

extern unsigned int elf_hwcap;

#define R_NDS32_NONE 0
#define R_NDS32_16_RELA 19
#define R_NDS32_32_RELA 20
#define R_NDS32_9_PCREL_RELA 22
#define R_NDS32_15_PCREL_RELA 23
#define R_NDS32_17_PCREL_RELA 24
#define R_NDS32_25_PCREL_RELA 25
#define R_NDS32_HI20_RELA 26
#define R_NDS32_LO12S3_RELA 27
#define R_NDS32_LO12S2_RELA 28
#define R_NDS32_LO12S1_RELA 29
#define R_NDS32_LO12S0_RELA 30
#define R_NDS32_SDA15S3_RELA 31
#define R_NDS32_SDA15S2_RELA 32
#define R_NDS32_SDA15S1_RELA 33
#define R_NDS32_SDA15S0_RELA 34
#define R_NDS32_GOT20 37
#define R_NDS32_25_PLTREL 38
#define R_NDS32_COPY 39
#define R_NDS32_GLOB_DAT 40
#define R_NDS32_JMP_SLOT 41
#define R_NDS32_RELATIVE 42
#define R_NDS32_GOTOFF 43
#define R_NDS32_GOTPC20 44
#define R_NDS32_GOT_HI20 45
#define R_NDS32_GOT_LO12 46
#define R_NDS32_GOTPC_HI20 47
#define R_NDS32_GOTPC_LO12 48
#define R_NDS32_GOTOFF_HI20 49
#define R_NDS32_GOTOFF_LO12 50
#define R_NDS32_INSN16 51
#define R_NDS32_LABEL 52
#define R_NDS32_LONGCALL1 53
#define R_NDS32_LONGCALL2 54
#define R_NDS32_LONGCALL3 55
#define R_NDS32_LONGJUMP1 56
#define R_NDS32_LONGJUMP2 57
#define R_NDS32_LONGJUMP3 58
#define R_NDS32_LOADSTORE 59
#define R_NDS32_9_FIXED_RELA 60
#define R_NDS32_15_FIXED_RELA 61
#define R_NDS32_17_FIXED_RELA 62
#define R_NDS32_25_FIXED_RELA 63
#define R_NDS32_PLTREL_HI20 64
#define R_NDS32_PLTREL_LO12 65
- #define R_NDS32_PLT_GOTREL_HI20 66 66 - #define R_NDS32_PLT_GOTREL_LO12 67 67 - #define R_NDS32_LO12S0_ORI_RELA 72 68 - #define R_NDS32_DWARF2_OP1_RELA 77 69 - #define R_NDS32_DWARF2_OP2_RELA 78 70 - #define R_NDS32_DWARF2_LEB_RELA 79 71 - #define R_NDS32_WORD_9_PCREL_RELA 94 72 - #define R_NDS32_LONGCALL4 107 73 - #define R_NDS32_RELA_NOP_MIX 192 74 - #define R_NDS32_RELA_NOP_MAX 255 75 - 76 - #define ELF_NGREG (sizeof (struct user_pt_regs) / sizeof(elf_greg_t)) 77 - #define ELF_CORE_COPY_REGS(dest, regs) \ 78 - *(struct user_pt_regs *)&(dest) = (regs)->user_regs; 79 - 80 - typedef elf_greg_t elf_gregset_t[ELF_NGREG]; 81 - 82 - /* Core file format: The core file is written in such a way that gdb 83 - can understand it and provide useful information to the user (under 84 - linux we use the 'trad-core' bfd). There are quite a number of 85 - obstacles to being able to view the contents of the floating point 86 - registers, and until these are solved you will not be able to view the 87 - contents of them. Actually, you can read in the core file and look at 88 - the contents of the user struct to find out what the floating point 89 - registers contain. 90 - The actual file contents are as follows: 91 - UPAGE: 1 page consisting of a user struct that tells gdb what is present 92 - in the file. Directly after this is a copy of the task_struct, which 93 - is currently not used by gdb, but it may come in useful at some point. 94 - All of the registers are stored as part of the upage. The upage should 95 - always be only one page. 96 - DATA: The data area is stored. We use current->end_text to 97 - current->brk to pick up all of the user variables, plus any memory 98 - that may have been malloced. No attempt is made to determine if a page 99 - is demand-zero or if a page is totally unused, we just cover the entire 100 - range. All of the addresses are rounded in such a way that an integral 101 - number of pages is written. 
102 - STACK: We need the stack information in order to get a meaningful 103 - backtrace. We need to write the data from (esp) to 104 - current->start_stack, so we round each of these off in order to be able 105 - to write an integer number of pages. 106 - The minimum core file size is 3 pages, or 12288 bytes. 107 - */ 108 - 109 - struct user_fp { 110 - unsigned long long fd_regs[32]; 111 - unsigned long fpcsr; 112 - }; 113 - 114 - typedef struct user_fp elf_fpregset_t; 115 - 116 - struct elf32_hdr; 117 - #define elf_check_arch(x) ((x)->e_machine == EM_NDS32) 118 - 119 - /* 120 - * These are used to set parameters in the core dumps. 121 - */ 122 - #define ELF_CLASS ELFCLASS32 123 - #ifdef __NDS32_EB__ 124 - #define ELF_DATA ELFDATA2MSB 125 - #else 126 - #define ELF_DATA ELFDATA2LSB 127 - #endif 128 - #define ELF_ARCH EM_NDS32 129 - #define ELF_EXEC_PAGESIZE PAGE_SIZE 130 - 131 - /* This is the location that an ET_DYN program is loaded if exec'ed. Typical 132 - use of this is to invoke "./ld.so someprog" to test out a new version of 133 - the loader. We need to make sure that it is out of the way of the program 134 - that it will "exec", and that there is sufficient room for the brk. */ 135 - 136 - #define ELF_ET_DYN_BASE (2 * TASK_SIZE / 3) 137 - 138 - /* When the program starts, a1 contains a pointer to a function to be 139 - registered with atexit, as per the SVR4 ABI. A value of 0 means we 140 - have no such handler. */ 141 - #define ELF_PLAT_INIT(_r, load_addr) (_r)->uregs[0] = 0 142 - 143 - /* This yields a mask that user programs can use to figure out what 144 - instruction set this cpu supports. */ 145 - 146 - #define ELF_HWCAP (elf_hwcap) 147 - 148 - #ifdef __KERNEL__ 149 - 150 - #define ELF_PLATFORM (NULL) 151 - 152 - /* Old NetWinder binaries were compiled in such a way that the iBCS 153 - heuristic always trips on them. Until these binaries become uncommon 154 - enough not to care, don't trust the `ibcs' flag here. 
In any case 155 - there is no other ELF system currently supported by iBCS. 156 - @@ Could print a warning message to encourage users to upgrade. */ 157 - #define SET_PERSONALITY(ex) set_personality(PER_LINUX) 158 - 159 - #endif 160 - 161 - 162 - #if IS_ENABLED(CONFIG_FPU) 163 - #define FPU_AUX_ENT NEW_AUX_ENT(AT_FPUCW, FPCSR_INIT) 164 - #else 165 - #define FPU_AUX_ENT NEW_AUX_ENT(AT_IGNORE, 0) 166 - #endif 167 - 168 - #define ARCH_DLINFO \ 169 - do { \ 170 - /* Optional FPU initialization */ \ 171 - FPU_AUX_ENT; \ 172 - \ 173 - NEW_AUX_ENT(AT_SYSINFO_EHDR, \ 174 - (elf_addr_t)current->mm->context.vdso); \ 175 - } while (0) 176 - #define ARCH_HAS_SETUP_ADDITIONAL_PAGES 1 177 - struct linux_binprm; 178 - int arch_setup_additional_pages(struct linux_binprm *, int); 179 - 180 - #endif
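The loader checks encoded by elf.h above (elf_check_arch, ELF_CLASS, ELF_DATA) can be sketched in userspace. This is an illustration only: struct elf32_ident and nds32_elf_ok are stand-ins for the real ELF header and the kernel's checks, and it assumes the little-endian (__NDS32_EL__) configuration; EM_NDS32 = 167 comes from <linux/elf-em.h>.

```c
/* Sketch of the identity checks the ELF loader applies for nds32
 * binaries; struct/function names here are illustrative, not kernel API. */
#include <assert.h>
#include <stdint.h>

#define EM_NDS32    167  /* from <linux/elf-em.h> */
#define ELFCLASS32  1
#define ELFDATA2LSB 1
#define ELFDATA2MSB 2

struct elf32_ident {
    uint8_t  ei_class;   /* e_ident[EI_CLASS] */
    uint8_t  ei_data;    /* e_ident[EI_DATA] */
    uint16_t e_machine;
};

/* Mirrors elf_check_arch() plus the class/data checks for a
 * little-endian nds32 kernel. */
static int nds32_elf_ok(const struct elf32_ident *h)
{
    return h->e_machine == EM_NDS32 &&
           h->ei_class  == ELFCLASS32 &&
           h->ei_data   == ELFDATA2LSB;
}
```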
-29
arch/nds32/include/asm/fixmap.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 2 - // Copyright (C) 2005-2017 Andes Technology Corporation 3 - 4 - #ifndef __ASM_NDS32_FIXMAP_H 5 - #define __ASM_NDS32_FIXMAP_H 6 - 7 - #ifdef CONFIG_HIGHMEM 8 - #include <linux/threads.h> 9 - #include <asm/kmap_size.h> 10 - #endif 11 - 12 - enum fixed_addresses { 13 - FIX_HOLE, 14 - FIX_KMAP_RESERVED, 15 - FIX_KMAP_BEGIN, 16 - #ifdef CONFIG_HIGHMEM 17 - FIX_KMAP_END = FIX_KMAP_BEGIN + (KM_MAX_IDX * NR_CPUS) - 1, 18 - #endif 19 - FIX_EARLYCON_MEM_BASE, 20 - __end_of_fixed_addresses 21 - }; 22 - #define FIXADDR_TOP ((unsigned long) (-(16 * PAGE_SIZE))) 23 - #define FIXADDR_SIZE ((__end_of_fixed_addresses) << PAGE_SHIFT) 24 - #define FIXADDR_START (FIXADDR_TOP - FIXADDR_SIZE) 25 - #define FIXMAP_PAGE_IO __pgprot(PAGE_DEVICE) 26 - void __set_fixmap(enum fixed_addresses idx, phys_addr_t phys, pgprot_t prot); 27 - 28 - #include <asm-generic/fixmap.h> 29 - #endif /* __ASM_NDS32_FIXMAP_H */
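The fixmap layout removed above is plain address arithmetic: FIXADDR_TOP sits 16 pages below the top of the address space, and each enum slot maps one page below it. A userspace check, assuming 4 KiB pages and the !CONFIG_HIGHMEM slot count of 4 (FIX_HOLE, FIX_KMAP_RESERVED, FIX_KMAP_BEGIN, FIX_EARLYCON_MEM_BASE); fix_to_virt() follows the asm-generic __fix_to_virt() formula:

```c
/* Check the fixmap address arithmetic from the removed header. */
#include <assert.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

#define FIXADDR_TOP   ((unsigned long)(-(16UL * PAGE_SIZE)))
#define NR_FIX_SLOTS  4UL  /* __end_of_fixed_addresses without CONFIG_HIGHMEM */
#define FIXADDR_SIZE  (NR_FIX_SLOTS << PAGE_SHIFT)
#define FIXADDR_START (FIXADDR_TOP - FIXADDR_SIZE)

/* asm-generic/fixmap.h: __fix_to_virt(idx) */
static unsigned long fix_to_virt(unsigned long idx)
{
    return FIXADDR_TOP - (idx << PAGE_SHIFT);
}
```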
-126
arch/nds32/include/asm/fpu.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 2 - /* Copyright (C) 2005-2018 Andes Technology Corporation */ 3 - 4 - #ifndef __ASM_NDS32_FPU_H 5 - #define __ASM_NDS32_FPU_H 6 - 7 - #if IS_ENABLED(CONFIG_FPU) 8 - #ifndef __ASSEMBLY__ 9 - #include <linux/sched/task_stack.h> 10 - #include <linux/preempt.h> 11 - #include <asm/ptrace.h> 12 - 13 - extern bool has_fpu; 14 - 15 - extern void save_fpu(struct task_struct *__tsk); 16 - extern void load_fpu(const struct fpu_struct *fpregs); 17 - extern bool do_fpu_exception(unsigned int subtype, struct pt_regs *regs); 18 - extern int do_fpuemu(struct pt_regs *regs, struct fpu_struct *fpu); 19 - 20 - #define test_tsk_fpu(regs) (regs->fucop_ctl & FUCOP_CTL_mskCP0EN) 21 - 22 - /* 23 - * Initially load the FPU with signalling NANS. This bit pattern 24 - * has the property that no matter whether considered as single or as 25 - * double precision, it still represents a signalling NAN. 26 - */ 27 - 28 - #define sNAN64 0xFFFFFFFFFFFFFFFFULL 29 - #define sNAN32 0xFFFFFFFFUL 30 - 31 - #if IS_ENABLED(CONFIG_SUPPORT_DENORMAL_ARITHMETIC) 32 - /* 33 - * Denormalized number is unsupported by nds32 FPU. Hence the operation 34 - * is treated as an underflow case when the final result is a denormalized 35 - * number. To enhance precision, underflow exception trap should be 36 - * enabled by default and the kernel will re-execute it by fpu emulator 37 - * when getting underflow exception. 
38 - */ 39 - #define FPCSR_INIT (FPCSR_mskUDFE | FPCSR_mskIEXE) 40 - #else 41 - #define FPCSR_INIT 0x0UL 42 - #endif 43 - 44 - extern const struct fpu_struct init_fpuregs; 45 - 46 - static inline void disable_ptreg_fpu(struct pt_regs *regs) 47 - { 48 - regs->fucop_ctl &= ~FUCOP_CTL_mskCP0EN; 49 - } 50 - 51 - static inline void enable_ptreg_fpu(struct pt_regs *regs) 52 - { 53 - regs->fucop_ctl |= FUCOP_CTL_mskCP0EN; 54 - } 55 - 56 - static inline void enable_fpu(void) 57 - { 58 - unsigned long fucop_ctl; 59 - 60 - fucop_ctl = __nds32__mfsr(NDS32_SR_FUCOP_CTL) | FUCOP_CTL_mskCP0EN; 61 - __nds32__mtsr(fucop_ctl, NDS32_SR_FUCOP_CTL); 62 - __nds32__isb(); 63 - } 64 - 65 - static inline void disable_fpu(void) 66 - { 67 - unsigned long fucop_ctl; 68 - 69 - fucop_ctl = __nds32__mfsr(NDS32_SR_FUCOP_CTL) & ~FUCOP_CTL_mskCP0EN; 70 - __nds32__mtsr(fucop_ctl, NDS32_SR_FUCOP_CTL); 71 - __nds32__isb(); 72 - } 73 - 74 - static inline void lose_fpu(void) 75 - { 76 - preempt_disable(); 77 - #if IS_ENABLED(CONFIG_LAZY_FPU) 78 - if (last_task_used_math == current) { 79 - last_task_used_math = NULL; 80 - #else 81 - if (test_tsk_fpu(task_pt_regs(current))) { 82 - #endif 83 - save_fpu(current); 84 - } 85 - disable_ptreg_fpu(task_pt_regs(current)); 86 - preempt_enable(); 87 - } 88 - 89 - static inline void own_fpu(void) 90 - { 91 - preempt_disable(); 92 - #if IS_ENABLED(CONFIG_LAZY_FPU) 93 - if (last_task_used_math != current) { 94 - if (last_task_used_math != NULL) 95 - save_fpu(last_task_used_math); 96 - load_fpu(&current->thread.fpu); 97 - last_task_used_math = current; 98 - } 99 - #else 100 - if (!test_tsk_fpu(task_pt_regs(current))) { 101 - load_fpu(&current->thread.fpu); 102 - } 103 - #endif 104 - enable_ptreg_fpu(task_pt_regs(current)); 105 - preempt_enable(); 106 - } 107 - 108 - #if !IS_ENABLED(CONFIG_LAZY_FPU) 109 - static inline void unlazy_fpu(struct task_struct *tsk) 110 - { 111 - preempt_disable(); 112 - if (test_tsk_fpu(task_pt_regs(tsk))) 113 - save_fpu(tsk); 114 - 
preempt_enable(); 115 - } 116 - #endif /* !CONFIG_LAZY_FPU */ 117 - static inline void clear_fpu(struct pt_regs *regs) 118 - { 119 - preempt_disable(); 120 - if (test_tsk_fpu(regs)) 121 - disable_ptreg_fpu(regs); 122 - preempt_enable(); 123 - } 124 - #endif /* CONFIG_FPU */ 125 - #endif /* __ASSEMBLY__ */ 126 - #endif /* __ASM_NDS32_FPU_H */
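The CONFIG_LAZY_FPU path of own_fpu() above defers saving FPU state until another task claims the unit. A minimal userspace model of that ownership policy, with struct task, save_fpu and load_fpu as stand-ins for the kernel objects (no preemption handling):

```c
/* Model of lazy FPU ownership: the previous owner's registers are
 * only spilled when a different task takes the FPU. */
#include <assert.h>
#include <stddef.h>

struct task { int fpu_saved; };

static struct task *last_task_used_math;

static void save_fpu(struct task *t) { t->fpu_saved = 1; }
static void load_fpu(struct task *t) { t->fpu_saved = 0; }

/* Mirrors the CONFIG_LAZY_FPU branch of own_fpu(). */
static void own_fpu(struct task *cur)
{
    if (last_task_used_math != cur) {
        if (last_task_used_math)
            save_fpu(last_task_used_math);
        load_fpu(cur);
        last_task_used_math = cur;
    }
}
```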
-44
arch/nds32/include/asm/fpuemu.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 2 - /* Copyright (C) 2005-2018 Andes Technology Corporation */ 3 - 4 - #ifndef __ARCH_NDS32_FPUEMU_H 5 - #define __ARCH_NDS32_FPUEMU_H 6 - 7 - /* 8 - * single precision 9 - */ 10 - 11 - void fadds(void *ft, void *fa, void *fb); 12 - void fsubs(void *ft, void *fa, void *fb); 13 - void fmuls(void *ft, void *fa, void *fb); 14 - void fdivs(void *ft, void *fa, void *fb); 15 - void fs2d(void *ft, void *fa); 16 - void fs2si(void *ft, void *fa); 17 - void fs2si_z(void *ft, void *fa); 18 - void fs2ui(void *ft, void *fa); 19 - void fs2ui_z(void *ft, void *fa); 20 - void fsi2s(void *ft, void *fa); 21 - void fui2s(void *ft, void *fa); 22 - void fsqrts(void *ft, void *fa); 23 - void fnegs(void *ft, void *fa); 24 - int fcmps(void *ft, void *fa, void *fb, int cop); 25 - 26 - /* 27 - * double precision 28 - */ 29 - void faddd(void *ft, void *fa, void *fb); 30 - void fsubd(void *ft, void *fa, void *fb); 31 - void fmuld(void *ft, void *fa, void *fb); 32 - void fdivd(void *ft, void *fa, void *fb); 33 - void fsqrtd(void *ft, void *fa); 34 - void fd2s(void *ft, void *fa); 35 - void fd2si(void *ft, void *fa); 36 - void fd2si_z(void *ft, void *fa); 37 - void fd2ui(void *ft, void *fa); 38 - void fd2ui_z(void *ft, void *fa); 39 - void fsi2d(void *ft, void *fa); 40 - void fui2d(void *ft, void *fa); 41 - void fnegd(void *ft, void *fa); 42 - int fcmpd(void *ft, void *fa, void *fb, int cop); 43 - 44 - #endif /* __ARCH_NDS32_FPUEMU_H */
-46
arch/nds32/include/asm/ftrace.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 2 - 3 - #ifndef __ASM_NDS32_FTRACE_H 4 - #define __ASM_NDS32_FTRACE_H 5 - 6 - #ifdef CONFIG_FUNCTION_TRACER 7 - 8 - #define HAVE_FUNCTION_GRAPH_FP_TEST 9 - 10 - #define MCOUNT_ADDR ((unsigned long)(_mcount)) 11 - /* mcount call is composed of three instructions: 12 - * sethi + ori + jral 13 - */ 14 - #define MCOUNT_INSN_SIZE 12 15 - 16 - extern void _mcount(unsigned long parent_ip); 17 - 18 - #ifdef CONFIG_DYNAMIC_FTRACE 19 - 20 - #define FTRACE_ADDR ((unsigned long)_ftrace_caller) 21 - 22 - #ifdef __NDS32_EL__ 23 - #define INSN_NOP 0x09000040 24 - #define INSN_SIZE(insn) (((insn & 0x00000080) == 0) ? 4 : 2) 25 - #define IS_SETHI(insn) ((insn & 0x000000fe) == 0x00000046) 26 - #define ENDIAN_CONVERT(insn) be32_to_cpu(insn) 27 - #else /* __NDS32_EB__ */ 28 - #define INSN_NOP 0x40000009 29 - #define INSN_SIZE(insn) (((insn & 0x80000000) == 0) ? 4 : 2) 30 - #define IS_SETHI(insn) ((insn & 0xfe000000) == 0x46000000) 31 - #define ENDIAN_CONVERT(insn) (insn) 32 - #endif 33 - 34 - extern void _ftrace_caller(unsigned long parent_ip); 35 - static inline unsigned long ftrace_call_adjust(unsigned long addr) 36 - { 37 - return addr; 38 - } 39 - struct dyn_arch_ftrace { 40 - }; 41 - 42 - #endif /* CONFIG_DYNAMIC_FTRACE */ 43 - 44 - #endif /* CONFIG_FUNCTION_TRACER */ 45 - 46 - #endif /* __ASM_NDS32_FTRACE_H */
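The __NDS32_EB__ forms of the instruction predicates above are the readable ones: nds32 encodes 32-bit instructions with the top bit clear, and sethi carries major opcode 0x46 in the top byte (the __NDS32_EL__ variants test the same bits after byte swapping). As plain functions:

```c
/* Instruction-word predicates from the removed ftrace.h (EB forms). */
#include <assert.h>
#include <stdint.h>

/* INSN_SIZE: bit 31 clear means a 4-byte instruction, else 2 bytes. */
static int insn_size(uint32_t insn)
{
    return (insn & 0x80000000u) == 0 ? 4 : 2;
}

/* IS_SETHI: major opcode 0x46 in the top byte. */
static int is_sethi(uint32_t insn)
{
    return (insn & 0xfe000000u) == 0x46000000u;
}
```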
-101
arch/nds32/include/asm/futex.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 2 - // Copyright (C) 2005-2017 Andes Technology Corporation 3 - 4 - #ifndef __NDS32_FUTEX_H__ 5 - #define __NDS32_FUTEX_H__ 6 - 7 - #include <linux/futex.h> 8 - #include <linux/uaccess.h> 9 - #include <asm/errno.h> 10 - 11 - #define __futex_atomic_ex_table(err_reg) \ 12 - " .pushsection __ex_table,\"a\"\n" \ 13 - " .align 3\n" \ 14 - " .long 1b, 4f\n" \ 15 - " .long 2b, 4f\n" \ 16 - " .popsection\n" \ 17 - " .pushsection .fixup,\"ax\"\n" \ 18 - "4: move %0, " err_reg "\n" \ 19 - " b 3b\n" \ 20 - " .popsection" 21 - 22 - #define __futex_atomic_op(insn, ret, oldval, tmp, uaddr, oparg) \ 23 - smp_mb(); \ 24 - asm volatile( \ 25 - " movi $ta, #0\n" \ 26 - "1: llw %1, [%2+$ta]\n" \ 27 - " " insn "\n" \ 28 - "2: scw %0, [%2+$ta]\n" \ 29 - " beqz %0, 1b\n" \ 30 - " movi %0, #0\n" \ 31 - "3:\n" \ 32 - __futex_atomic_ex_table("%4") \ 33 - : "=&r" (ret), "=&r" (oldval) \ 34 - : "r" (uaddr), "r" (oparg), "i" (-EFAULT) \ 35 - : "cc", "memory") 36 - static inline int 37 - futex_atomic_cmpxchg_inatomic(u32 * uval, u32 __user * uaddr, 38 - u32 oldval, u32 newval) 39 - { 40 - int ret = 0; 41 - u32 val, tmp, flags; 42 - 43 - if (!access_ok(uaddr, sizeof(u32))) 44 - return -EFAULT; 45 - 46 - smp_mb(); 47 - asm volatile (" movi $ta, #0\n" 48 - "1: llw %1, [%6 + $ta]\n" 49 - " sub %3, %1, %4\n" 50 - " cmovz %2, %5, %3\n" 51 - " cmovn %2, %1, %3\n" 52 - "2: scw %2, [%6 + $ta]\n" 53 - " beqz %2, 1b\n" 54 - "3:\n " __futex_atomic_ex_table("%7") 55 - :"+&r"(ret), "=&r"(val), "=&r"(tmp), "=&r"(flags) 56 - :"r"(oldval), "r"(newval), "r"(uaddr), "i"(-EFAULT) 57 - :"$ta", "memory"); 58 - smp_mb(); 59 - 60 - *uval = val; 61 - return ret; 62 - } 63 - 64 - static inline int 65 - arch_futex_atomic_op_inuser(int op, int oparg, int *oval, u32 __user *uaddr) 66 - { 67 - int oldval = 0, ret; 68 - 69 - if (!access_ok(uaddr, sizeof(u32))) 70 - return -EFAULT; 71 - switch (op) { 72 - case FUTEX_OP_SET: 73 - __futex_atomic_op("move %0, %3", ret, oldval, tmp, 
uaddr, 74 - oparg); 75 - break; 76 - case FUTEX_OP_ADD: 77 - __futex_atomic_op("add %0, %1, %3", ret, oldval, tmp, uaddr, 78 - oparg); 79 - break; 80 - case FUTEX_OP_OR: 81 - __futex_atomic_op("or %0, %1, %3", ret, oldval, tmp, uaddr, 82 - oparg); 83 - break; 84 - case FUTEX_OP_ANDN: 85 - __futex_atomic_op("and %0, %1, %3", ret, oldval, tmp, uaddr, 86 - ~oparg); 87 - break; 88 - case FUTEX_OP_XOR: 89 - __futex_atomic_op("xor %0, %1, %3", ret, oldval, tmp, uaddr, 90 - oparg); 91 - break; 92 - default: 93 - ret = -ENOSYS; 94 - } 95 - 96 - if (!ret) 97 - *oval = oldval; 98 - 99 - return ret; 100 - } 101 - #endif /* __NDS32_FUTEX_H__ */
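The llw/scw loop in futex_atomic_cmpxchg_inatomic() above implements a compare-and-exchange that always reports the value it found. What that loop computes, expressed with C11 atomics in userspace (futex_cmpxchg is an illustrative name; the fault-handling fixup table has no userspace analogue here):

```c
/* Userspace model of the futex compare-exchange semantics. */
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

/* Stores the value found at *uaddr into *uval; the write of newval
 * only happens when the old value matched, as in the ll/sc loop. */
static int futex_cmpxchg(_Atomic uint32_t *uaddr, uint32_t *uval,
                         uint32_t oldval, uint32_t newval)
{
    uint32_t cur = oldval;

    atomic_compare_exchange_strong(uaddr, &cur, newval);
    *uval = cur;
    return 0;  /* the kernel helper returns -EFAULT on a bad pointer */
}
```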
-65
arch/nds32/include/asm/highmem.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 2 - // Copyright (C) 2005-2017 Andes Technology Corporation 3 - 4 - #ifndef _ASM_HIGHMEM_H 5 - #define _ASM_HIGHMEM_H 6 - 7 - #include <asm/proc-fns.h> 8 - #include <asm/fixmap.h> 9 - 10 - /* 11 - * Right now we initialize only a single pte table. It can be extended 12 - * easily, subsequent pte tables have to be allocated in one physical 13 - * chunk of RAM. 14 - */ 15 - /* 16 - * Ordering is (from lower to higher memory addresses): 17 - * 18 - * high_memory 19 - * Persistent kmap area 20 - * PKMAP_BASE 21 - * fixed_addresses 22 - * FIXADDR_START 23 - * FIXADDR_TOP 24 - * Vmalloc area 25 - * VMALLOC_START 26 - * VMALLOC_END 27 - */ 28 - #define PKMAP_BASE ((FIXADDR_START - PGDIR_SIZE) & (PGDIR_MASK)) 29 - #define LAST_PKMAP PTRS_PER_PTE 30 - #define LAST_PKMAP_MASK (LAST_PKMAP - 1) 31 - #define PKMAP_NR(virt) (((virt) - (PKMAP_BASE)) >> PAGE_SHIFT) 32 - #define PKMAP_ADDR(nr) (PKMAP_BASE + ((nr) << PAGE_SHIFT)) 33 - 34 - static inline void flush_cache_kmaps(void) 35 - { 36 - cpu_dcache_wbinval_all(); 37 - } 38 - 39 - /* declarations for highmem.c */ 40 - extern unsigned long highstart_pfn, highend_pfn; 41 - 42 - extern pte_t *pkmap_page_table; 43 - 44 - extern void kmap_init(void); 45 - 46 - /* 47 - * FIXME: The below looks broken vs. a kmap_atomic() in task context which 48 - * is interrupted and another kmap_atomic() happens in interrupt context. 49 - * But what do I know about nds32. -- tglx 50 - */ 51 - #define arch_kmap_local_post_map(vaddr, pteval) \ 52 - do { \ 53 - __nds32__tlbop_inv(vaddr); \ 54 - __nds32__mtsr_dsb(vaddr, NDS32_SR_TLB_VPN); \ 55 - __nds32__tlbop_rwr(pteval); \ 56 - __nds32__isb(); \ 57 - } while (0) 58 - 59 - #define arch_kmap_local_pre_unmap(vaddr) \ 60 - do { \ 61 - __nds32__tlbop_inv(vaddr); \ 62 - __nds32__isb(); \ 63 - } while (0) 64 - 65 - #endif
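PKMAP_NR() and PKMAP_ADDR() above are inverse page-granular conversions between a persistent-kmap virtual address and its slot number. A quick check, with an assumed placeholder PKMAP_BASE (the real value derives from FIXADDR_START and PGDIR_SIZE):

```c
/* Round-trip check of the pkmap slot arithmetic. */
#include <assert.h>

#define PAGE_SHIFT 12
#define PKMAP_BASE 0xbfe00000UL  /* placeholder for illustration */

#define PKMAP_NR(virt)  (((virt) - (PKMAP_BASE)) >> PAGE_SHIFT)
#define PKMAP_ADDR(nr)  (PKMAP_BASE + ((nr) << PAGE_SHIFT))
```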
-84
arch/nds32/include/asm/io.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 2 - // Copyright (C) 2005-2017 Andes Technology Corporation 3 - 4 - #ifndef __ASM_NDS32_IO_H 5 - #define __ASM_NDS32_IO_H 6 - 7 - #include <linux/types.h> 8 - 9 - #define __raw_writeb __raw_writeb 10 - static inline void __raw_writeb(u8 val, volatile void __iomem *addr) 11 - { 12 - asm volatile("sbi %0, [%1]" : : "r" (val), "r" (addr)); 13 - } 14 - 15 - #define __raw_writew __raw_writew 16 - static inline void __raw_writew(u16 val, volatile void __iomem *addr) 17 - { 18 - asm volatile("shi %0, [%1]" : : "r" (val), "r" (addr)); 19 - } 20 - 21 - #define __raw_writel __raw_writel 22 - static inline void __raw_writel(u32 val, volatile void __iomem *addr) 23 - { 24 - asm volatile("swi %0, [%1]" : : "r" (val), "r" (addr)); 25 - } 26 - 27 - #define __raw_readb __raw_readb 28 - static inline u8 __raw_readb(const volatile void __iomem *addr) 29 - { 30 - u8 val; 31 - 32 - asm volatile("lbi %0, [%1]" : "=r" (val) : "r" (addr)); 33 - return val; 34 - } 35 - 36 - #define __raw_readw __raw_readw 37 - static inline u16 __raw_readw(const volatile void __iomem *addr) 38 - { 39 - u16 val; 40 - 41 - asm volatile("lhi %0, [%1]" : "=r" (val) : "r" (addr)); 42 - return val; 43 - } 44 - 45 - #define __raw_readl __raw_readl 46 - static inline u32 __raw_readl(const volatile void __iomem *addr) 47 - { 48 - u32 val; 49 - 50 - asm volatile("lwi %0, [%1]" : "=r" (val) : "r" (addr)); 51 - return val; 52 - } 53 - 54 - #define __iormb() rmb() 55 - #define __iowmb() wmb() 56 - 57 - /* 58 - * {read,write}{b,w,l,q}_relaxed() are like the regular version, but 59 - * are not guaranteed to provide ordering against spinlocks or memory 60 - * accesses. 
61 - */ 62 - 63 - #define readb_relaxed(c) ({ u8 __v = __raw_readb(c); __v; }) 64 - #define readw_relaxed(c) ({ u16 __v = le16_to_cpu((__force __le16)__raw_readw(c)); __v; }) 65 - #define readl_relaxed(c) ({ u32 __v = le32_to_cpu((__force __le32)__raw_readl(c)); __v; }) 66 - #define writeb_relaxed(v,c) ((void)__raw_writeb((v),(c))) 67 - #define writew_relaxed(v,c) ((void)__raw_writew((__force u16)cpu_to_le16(v),(c))) 68 - #define writel_relaxed(v,c) ((void)__raw_writel((__force u32)cpu_to_le32(v),(c))) 69 - 70 - /* 71 - * {read,write}{b,w,l,q}() access little endian memory and return result in 72 - * native endianness. 73 - */ 74 - #define readb(c) ({ u8 __v = readb_relaxed(c); __iormb(); __v; }) 75 - #define readw(c) ({ u16 __v = readw_relaxed(c); __iormb(); __v; }) 76 - #define readl(c) ({ u32 __v = readl_relaxed(c); __iormb(); __v; }) 77 - 78 - #define writeb(v,c) ({ __iowmb(); writeb_relaxed((v),(c)); }) 79 - #define writew(v,c) ({ __iowmb(); writew_relaxed((v),(c)); }) 80 - #define writel(v,c) ({ __iowmb(); writel_relaxed((v),(c)); }) 81 - 82 - #include <asm-generic/io.h> 83 - 84 - #endif /* __ASM_NDS32_IO_H */
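The io.h accessors above follow the standard layering: a raw volatile access gives readl_relaxed()/writel_relaxed(), and the ordered readl()/writel() add a barrier. A userspace sketch of that layering, where the "MMIO" is an ordinary buffer, the barrier is a compiler barrier, and the my_* names are illustrative:

```c
/* Relaxed vs. ordered MMIO accessors, modeled in userspace. */
#include <assert.h>
#include <stdint.h>

static inline uint32_t my_readl_relaxed(const volatile void *addr)
{
    return *(const volatile uint32_t *)addr;  /* __raw_readl + le32_to_cpu on LE */
}

static inline uint32_t my_readl(const volatile void *addr)
{
    uint32_t v = my_readl_relaxed(addr);
    __asm__ __volatile__("" ::: "memory");    /* stand-in for __iormb()/rmb() */
    return v;
}

static inline void my_writel(uint32_t val, volatile void *addr)
{
    __asm__ __volatile__("" ::: "memory");    /* stand-in for __iowmb()/wmb() */
    *(volatile uint32_t *)addr = val;
}
```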
-41
arch/nds32/include/asm/irqflags.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 2 - // Copyright (C) 2005-2017 Andes Technology Corporation 3 - 4 - #include <asm/nds32.h> 5 - #include <nds32_intrinsic.h> 6 - 7 - #define arch_local_irq_disable() \ 8 - GIE_DISABLE(); 9 - 10 - #define arch_local_irq_enable() \ 11 - GIE_ENABLE(); 12 - static inline unsigned long arch_local_irq_save(void) 13 - { 14 - unsigned long flags; 15 - flags = __nds32__mfsr(NDS32_SR_PSW) & PSW_mskGIE; 16 - GIE_DISABLE(); 17 - return flags; 18 - } 19 - 20 - static inline unsigned long arch_local_save_flags(void) 21 - { 22 - unsigned long flags; 23 - flags = __nds32__mfsr(NDS32_SR_PSW) & PSW_mskGIE; 24 - return flags; 25 - } 26 - 27 - static inline void arch_local_irq_restore(unsigned long flags) 28 - { 29 - if(flags) 30 - GIE_ENABLE(); 31 - } 32 - 33 - static inline int arch_irqs_disabled_flags(unsigned long flags) 34 - { 35 - return !flags; 36 - } 37 - 38 - static inline int arch_irqs_disabled(void) 39 - { 40 - return arch_irqs_disabled_flags(arch_local_save_flags()); 41 - }
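The irqflags.h helpers above implement the usual save/disable/restore discipline on the PSW.GIE bit: save returns only the enable bit, and restore re-enables only if it was set, so nested sections compose. A userspace model (the global psw and the bit position of PSW_mskGIE are assumptions for illustration):

```c
/* Model of the GIE save/restore discipline. */
#include <assert.h>

#define PSW_mskGIE 0x1UL          /* assumed bit position */

static unsigned long psw = PSW_mskGIE;  /* interrupts enabled */

static unsigned long arch_local_irq_save(void)
{
    unsigned long flags = psw & PSW_mskGIE;
    psw &= ~PSW_mskGIE;           /* GIE_DISABLE() */
    return flags;
}

static void arch_local_irq_restore(unsigned long flags)
{
    if (flags)
        psw |= PSW_mskGIE;        /* GIE_ENABLE() */
}
```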
-137
arch/nds32/include/asm/l2_cache.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 2 - // Copyright (C) 2005-2017 Andes Technology Corporation 3 - 4 - #ifndef L2_CACHE_H 5 - #define L2_CACHE_H 6 - 7 - /* CCTL_CMD_OP */ 8 - #define L2_CA_CONF_OFF 0x0 9 - #define L2_IF_CONF_OFF 0x4 10 - #define L2CC_SETUP_OFF 0x8 11 - #define L2CC_PROT_OFF 0xC 12 - #define L2CC_CTRL_OFF 0x10 13 - #define L2_INT_EN_OFF 0x20 14 - #define L2_STA_OFF 0x24 15 - #define RDERR_ADDR_OFF 0x28 16 - #define WRERR_ADDR_OFF 0x2c 17 - #define EVDPTERR_ADDR_OFF 0x30 18 - #define IMPL3ERR_ADDR_OFF 0x34 19 - #define L2_CNT0_CTRL_OFF 0x40 20 - #define L2_EVNT_CNT0_OFF 0x44 21 - #define L2_CNT1_CTRL_OFF 0x48 22 - #define L2_EVNT_CNT1_OFF 0x4c 23 - #define L2_CCTL_CMD_OFF 0x60 24 - #define L2_CCTL_STATUS_OFF 0x64 25 - #define L2_LINE_TAG_OFF 0x68 26 - #define L2_LINE_DPT_OFF 0x70 27 - 28 - #define CCTL_CMD_L2_IX_INVAL 0x0 29 - #define CCTL_CMD_L2_PA_INVAL 0x1 30 - #define CCTL_CMD_L2_IX_WB 0x2 31 - #define CCTL_CMD_L2_PA_WB 0x3 32 - #define CCTL_CMD_L2_PA_WBINVAL 0x5 33 - #define CCTL_CMD_L2_SYNC 0xa 34 - 35 - /* CCTL_CMD_TYPE */ 36 - #define CCTL_SINGLE_CMD 0 37 - #define CCTL_BLOCK_CMD 0x10 38 - #define CCTL_ALL_CMD 0x10 39 - 40 - /****************************************************************************** 41 - * L2_CA_CONF (Cache architecture configuration) 42 - *****************************************************************************/ 43 - #define L2_CA_CONF_offL2SET 0 44 - #define L2_CA_CONF_offL2WAY 4 45 - #define L2_CA_CONF_offL2CLSZ 8 46 - #define L2_CA_CONF_offL2DW 11 47 - #define L2_CA_CONF_offL2PT 14 48 - #define L2_CA_CONF_offL2VER 16 49 - 50 - #define L2_CA_CONF_mskL2SET (0xFUL << L2_CA_CONF_offL2SET) 51 - #define L2_CA_CONF_mskL2WAY (0xFUL << L2_CA_CONF_offL2WAY) 52 - #define L2_CA_CONF_mskL2CLSZ (0x7UL << L2_CA_CONF_offL2CLSZ) 53 - #define L2_CA_CONF_mskL2DW (0x7UL << L2_CA_CONF_offL2DW) 54 - #define L2_CA_CONF_mskL2PT (0x3UL << L2_CA_CONF_offL2PT) 55 - #define L2_CA_CONF_mskL2VER (0xFFFFUL << L2_CA_CONF_offL2VER) 56 
- 57 - /****************************************************************************** 58 - * L2CC_SETUP (L2CC Setup register) 59 - *****************************************************************************/ 60 - #define L2CC_SETUP_offPART 0 61 - #define L2CC_SETUP_mskPART (0x3UL << L2CC_SETUP_offPART) 62 - #define L2CC_SETUP_offDDLATC 4 63 - #define L2CC_SETUP_mskDDLATC (0x3UL << L2CC_SETUP_offDDLATC) 64 - #define L2CC_SETUP_offTDLATC 8 65 - #define L2CC_SETUP_mskTDLATC (0x3UL << L2CC_SETUP_offTDLATC) 66 - 67 - /****************************************************************************** 68 - * L2CC_PROT (L2CC Protect register) 69 - *****************************************************************************/ 70 - #define L2CC_PROT_offMRWEN 31 71 - #define L2CC_PROT_mskMRWEN (0x1UL << L2CC_PROT_offMRWEN) 72 - 73 - /****************************************************************************** 74 - * L2_CCTL_STATUS_Mn (The L2CCTL command working status for Master n) 75 - *****************************************************************************/ 76 - #define L2CC_CTRL_offEN 31 77 - #define L2CC_CTRL_mskEN (0x1UL << L2CC_CTRL_offEN) 78 - 79 - /****************************************************************************** 80 - * L2_CCTL_STATUS_Mn (The L2CCTL command working status for Master n) 81 - *****************************************************************************/ 82 - #define L2_CCTL_STATUS_offCMD_COMP 31 83 - #define L2_CCTL_STATUS_mskCMD_COMP (0x1 << L2_CCTL_STATUS_offCMD_COMP) 84 - 85 - extern void __iomem *atl2c_base; 86 - #include <linux/smp.h> 87 - #include <asm/io.h> 88 - #include <asm/bitfield.h> 89 - 90 - #define L2C_R_REG(offset) readl(atl2c_base + offset) 91 - #define L2C_W_REG(offset, value) writel(value, atl2c_base + offset) 92 - 93 - #define L2_CMD_RDY() \ 94 - do{;}while((L2C_R_REG(L2_CCTL_STATUS_OFF) & L2_CCTL_STATUS_mskCMD_COMP) == 0) 95 - 96 - static inline unsigned long L2_CACHE_SET(void) 97 - { 98 - return 64 << 
((L2C_R_REG(L2_CA_CONF_OFF) & L2_CA_CONF_mskL2SET) >> 99 - L2_CA_CONF_offL2SET); 100 - } 101 - 102 - static inline unsigned long L2_CACHE_WAY(void) 103 - { 104 - return 1 + 105 - ((L2C_R_REG(L2_CA_CONF_OFF) & L2_CA_CONF_mskL2WAY) >> 106 - L2_CA_CONF_offL2WAY); 107 - } 108 - 109 - static inline unsigned long L2_CACHE_LINE_SIZE(void) 110 - { 111 - 112 - return 4 << ((L2C_R_REG(L2_CA_CONF_OFF) & L2_CA_CONF_mskL2CLSZ) >> 113 - L2_CA_CONF_offL2CLSZ); 114 - } 115 - 116 - static inline unsigned long GET_L2CC_CTRL_CPU(unsigned long cpu) 117 - { 118 - if (cpu == smp_processor_id()) 119 - return L2C_R_REG(L2CC_CTRL_OFF); 120 - return L2C_R_REG(L2CC_CTRL_OFF + (cpu << 8)); 121 - } 122 - 123 - static inline void SET_L2CC_CTRL_CPU(unsigned long cpu, unsigned long val) 124 - { 125 - if (cpu == smp_processor_id()) 126 - L2C_W_REG(L2CC_CTRL_OFF, val); 127 - else 128 - L2C_W_REG(L2CC_CTRL_OFF + (cpu << 8), val); 129 - } 130 - 131 - static inline unsigned long GET_L2CC_STATUS_CPU(unsigned long cpu) 132 - { 133 - if (cpu == smp_processor_id()) 134 - return L2C_R_REG(L2_CCTL_STATUS_OFF); 135 - return L2C_R_REG(L2_CCTL_STATUS_OFF + (cpu << 8)); 136 - } 137 - #endif
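The L2_CACHE_SET/WAY/LINE_SIZE helpers above decode cache geometry from the L2_CA_CONF register: sets = 64 << L2SET, ways = 1 + L2WAY, line size = 4 << L2CLSZ. The same decode as pure functions of a register value, so it can be checked without hardware:

```c
/* Decode L2 cache geometry from an L2_CA_CONF value. */
#include <assert.h>

#define offL2SET  0
#define offL2WAY  4
#define offL2CLSZ 8
#define mskL2SET  (0xFUL << offL2SET)
#define mskL2WAY  (0xFUL << offL2WAY)
#define mskL2CLSZ (0x7UL << offL2CLSZ)

static unsigned long l2_sets(unsigned long conf)
{ return 64UL << ((conf & mskL2SET) >> offL2SET); }

static unsigned long l2_ways(unsigned long conf)
{ return 1 + ((conf & mskL2WAY) >> offL2WAY); }

static unsigned long l2_line_size(unsigned long conf)
{ return 4UL << ((conf & mskL2CLSZ) >> offL2CLSZ); }
```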
-11
arch/nds32/include/asm/linkage.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 2 - // Copyright (C) 2005-2017 Andes Technology Corporation 3 - 4 - #ifndef __ASM_LINKAGE_H 5 - #define __ASM_LINKAGE_H 6 - 7 - /* This file is required by include/linux/linkage.h */ 8 - #define __ALIGN .align 2 9 - #define __ALIGN_STR ".align 2" 10 - 11 - #endif
-91
arch/nds32/include/asm/memory.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 2 - // Copyright (C) 2005-2017 Andes Technology Corporation 3 - 4 - #ifndef __ASM_NDS32_MEMORY_H 5 - #define __ASM_NDS32_MEMORY_H 6 - 7 - #include <linux/compiler.h> 8 - #include <linux/sizes.h> 9 - 10 - #ifndef __ASSEMBLY__ 11 - #include <asm/page.h> 12 - #endif 13 - 14 - #ifndef PHYS_OFFSET 15 - #define PHYS_OFFSET (0x0) 16 - #endif 17 - 18 - /* 19 - * TASK_SIZE - the maximum size of a user space task. 20 - * TASK_UNMAPPED_BASE - the lower boundary of the mmap VM area 21 - */ 22 - #define TASK_SIZE ((CONFIG_PAGE_OFFSET) - (SZ_32M)) 23 - #define TASK_UNMAPPED_BASE ALIGN(TASK_SIZE / 3, SZ_32M) 24 - #define PAGE_OFFSET (CONFIG_PAGE_OFFSET) 25 - 26 - /* 27 - * Physical vs virtual RAM address space conversion. These are 28 - * private definitions which should NOT be used outside memory.h 29 - * files. Use virt_to_phys/phys_to_virt/__pa/__va instead. 30 - */ 31 - #ifndef __virt_to_phys 32 - #define __virt_to_phys(x) ((x) - PAGE_OFFSET + PHYS_OFFSET) 33 - #define __phys_to_virt(x) ((x) - PHYS_OFFSET + PAGE_OFFSET) 34 - #endif 35 - 36 - /* 37 - * The module space lives between the addresses given by TASK_SIZE 38 - * and PAGE_OFFSET - it must be within 32MB of the kernel text. 39 - */ 40 - #define MODULES_END (PAGE_OFFSET) 41 - #define MODULES_VADDR (MODULES_END - SZ_32M) 42 - 43 - #if TASK_SIZE > MODULES_VADDR 44 - #error Top of user space clashes with start of module space 45 - #endif 46 - 47 - #ifndef __ASSEMBLY__ 48 - 49 - /* 50 - * PFNs are used to describe any physical page; this means 51 - * PFN 0 == physical address 0. 52 - * 53 - * This is the PFN of the first RAM page in the kernel 54 - * direct-mapped view. We assume this is the first page 55 - * of RAM in the mem_map as well. 56 - */ 57 - #define PHYS_PFN_OFFSET (PHYS_OFFSET >> PAGE_SHIFT) 58 - 59 - /* 60 - * Drivers should NOT use these either. 
61 - */ 62 - #define __pa(x) __virt_to_phys((unsigned long)(x)) 63 - #define __va(x) ((void *)__phys_to_virt((unsigned long)(x))) 64 - 65 - /* 66 - * Conversion between a struct page and a physical address. 67 - * 68 - * Note: when converting an unknown physical address to a 69 - * struct page, the resulting pointer must be validated 70 - * using VALID_PAGE(). It must return an invalid struct page 71 - * for any physical address not corresponding to a system 72 - * RAM address. 73 - * 74 - * pfn_valid(pfn) indicates whether a PFN number is valid 75 - * 76 - * virt_to_page(k) convert a _valid_ virtual address to struct page * 77 - * virt_addr_valid(k) indicates whether a virtual address is valid 78 - */ 79 - #define ARCH_PFN_OFFSET PHYS_PFN_OFFSET 80 - #define pfn_valid(pfn) ((pfn) >= PHYS_PFN_OFFSET && (pfn) < (PHYS_PFN_OFFSET + max_mapnr)) 81 - 82 - #define virt_to_page(kaddr) (pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)) 83 - #define virt_addr_valid(kaddr) ((unsigned long)(kaddr) >= PAGE_OFFSET && (unsigned long)(kaddr) < (unsigned long)high_memory) 84 - 85 - #define page_to_phys(page) (page_to_pfn(page) << PAGE_SHIFT) 86 - 87 - #endif 88 - 89 - #include <asm-generic/memory_model.h> 90 - 91 - #endif
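The linear-map conversion in memory.h above is a fixed offset in each direction, so __pa() and __va() are exact inverses on the direct map. A check with PHYS_OFFSET = 0 and a typical 3G/1G CONFIG_PAGE_OFFSET:

```c
/* Check the direct-map virt<->phys arithmetic. */
#include <assert.h>

#define PHYS_OFFSET 0x0UL
#define PAGE_OFFSET 0xc0000000UL  /* typical CONFIG_PAGE_OFFSET */

#define __virt_to_phys(x) ((x) - PAGE_OFFSET + PHYS_OFFSET)
#define __phys_to_virt(x) ((x) - PHYS_OFFSET + PAGE_OFFSET)
```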
-12
arch/nds32/include/asm/mmu.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 2 - // Copyright (C) 2005-2017 Andes Technology Corporation 3 - 4 - #ifndef __NDS32_MMU_H 5 - #define __NDS32_MMU_H 6 - 7 - typedef struct { 8 - unsigned int id; 9 - void *vdso; 10 - } mm_context_t; 11 - 12 - #endif
-62
arch/nds32/include/asm/mmu_context.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 2 - // Copyright (C) 2005-2017 Andes Technology Corporation 3 - 4 - #ifndef __ASM_NDS32_MMU_CONTEXT_H 5 - #define __ASM_NDS32_MMU_CONTEXT_H 6 - 7 - #include <linux/spinlock.h> 8 - #include <asm/tlbflush.h> 9 - #include <asm/proc-fns.h> 10 - #include <asm-generic/mm_hooks.h> 11 - 12 - #define init_new_context init_new_context 13 - static inline int 14 - init_new_context(struct task_struct *tsk, struct mm_struct *mm) 15 - { 16 - mm->context.id = 0; 17 - return 0; 18 - } 19 - 20 - #define CID_BITS 9 21 - extern spinlock_t cid_lock; 22 - extern unsigned int cpu_last_cid; 23 - 24 - static inline void __new_context(struct mm_struct *mm) 25 - { 26 - unsigned int cid; 27 - unsigned long flags; 28 - 29 - spin_lock_irqsave(&cid_lock, flags); 30 - cid = cpu_last_cid; 31 - cpu_last_cid += 1 << TLB_MISC_offCID; 32 - if (cpu_last_cid == 0) 33 - cpu_last_cid = 1 << TLB_MISC_offCID << CID_BITS; 34 - 35 - if ((cid & TLB_MISC_mskCID) == 0) 36 - flush_tlb_all(); 37 - spin_unlock_irqrestore(&cid_lock, flags); 38 - 39 - mm->context.id = cid; 40 - } 41 - 42 - static inline void check_context(struct mm_struct *mm) 43 - { 44 - if (unlikely 45 - ((mm->context.id ^ cpu_last_cid) >> TLB_MISC_offCID >> CID_BITS)) 46 - __new_context(mm); 47 - } 48 - 49 - static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next, 50 - struct task_struct *tsk) 51 - { 52 - unsigned int cpu = smp_processor_id(); 53 - 54 - if (!cpumask_test_and_set_cpu(cpu, mm_cpumask(next)) || prev != next) { 55 - check_context(next); 56 - cpu_switch_mm(next); 57 - } 58 - } 59 - 60 - #include <asm-generic/mmu_context.h> 61 - 62 - #endif
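The context-ID allocator in mmu_context.h above keeps a CID_BITS-wide id in the low bits and uses the remaining bits as a generation counter; whenever the id field wraps to zero, every TLB entry may carry a stale id, so flush_tlb_all() runs. A model with the allocation logic copied verbatim (TLB_MISC_offCID = 0 is an assumption for illustration):

```c
/* Model of __new_context() id allocation and wrap-around flushing. */
#include <assert.h>

#define TLB_MISC_offCID 0         /* assumed field offset */
#define CID_BITS 9
#define TLB_MISC_mskCID (((1U << CID_BITS) - 1) << TLB_MISC_offCID)

static unsigned int cpu_last_cid;
static int tlb_flushes;

static unsigned int new_context(void)
{
    unsigned int cid = cpu_last_cid;

    cpu_last_cid += 1 << TLB_MISC_offCID;
    if (cpu_last_cid == 0)
        cpu_last_cid = 1 << TLB_MISC_offCID << CID_BITS;

    if ((cid & TLB_MISC_mskCID) == 0)
        tlb_flushes++;            /* stand-in for flush_tlb_all() */
    return cid;
}
```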
-82
arch/nds32/include/asm/nds32.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 2 - // Copyright (C) 2005-2017 Andes Technology Corporation 3 - 4 - #ifndef _ASM_NDS32_NDS32_H_ 5 - #define _ASM_NDS32_NDS32_H_ 6 - 7 - #include <asm/bitfield.h> 8 - #include <asm/cachectl.h> 9 - 10 - #ifndef __ASSEMBLY__ 11 - #include <linux/init.h> 12 - #include <asm/barrier.h> 13 - #include <nds32_intrinsic.h> 14 - 15 - #ifdef CONFIG_CC_OPTIMIZE_FOR_SIZE 16 - #define FP_OFFSET (-3) 17 - #else 18 - #define FP_OFFSET (-2) 19 - #endif 20 - #define LP_OFFSET (-1) 21 - 22 - extern void __init early_trap_init(void); 23 - static inline void GIE_ENABLE(void) 24 - { 25 - mb(); 26 - __nds32__gie_en(); 27 - } 28 - 29 - static inline void GIE_DISABLE(void) 30 - { 31 - mb(); 32 - __nds32__gie_dis(); 33 - } 34 - 35 - static inline unsigned long CACHE_SET(unsigned char cache) 36 - { 37 - 38 - if (cache == ICACHE) 39 - return 64 << ((__nds32__mfsr(NDS32_SR_ICM_CFG) & ICM_CFG_mskISET) >> 40 - ICM_CFG_offISET); 41 - else 42 - return 64 << ((__nds32__mfsr(NDS32_SR_DCM_CFG) & DCM_CFG_mskDSET) >> 43 - DCM_CFG_offDSET); 44 - } 45 - 46 - static inline unsigned long CACHE_WAY(unsigned char cache) 47 - { 48 - 49 - if (cache == ICACHE) 50 - return 1 + 51 - ((__nds32__mfsr(NDS32_SR_ICM_CFG) & ICM_CFG_mskIWAY) >> ICM_CFG_offIWAY); 52 - else 53 - return 1 + 54 - ((__nds32__mfsr(NDS32_SR_DCM_CFG) & DCM_CFG_mskDWAY) >> DCM_CFG_offDWAY); 55 - } 56 - 57 - static inline unsigned long CACHE_LINE_SIZE(unsigned char cache) 58 - { 59 - 60 - if (cache == ICACHE) 61 - return 8 << 62 - (((__nds32__mfsr(NDS32_SR_ICM_CFG) & ICM_CFG_mskISZ) >> ICM_CFG_offISZ) - 1); 63 - else 64 - return 8 << 65 - (((__nds32__mfsr(NDS32_SR_DCM_CFG) & DCM_CFG_mskDSZ) >> DCM_CFG_offDSZ) - 1); 66 - } 67 - 68 - #endif /* __ASSEMBLY__ */ 69 - 70 - #define IVB_BASE PHYS_OFFSET /* in user space for intr/exc/trap/break table base, 64KB aligned 71 - * We defined at the start of the physical memory */ 72 - 73 - /* dispatched sub-entry exception handler numbering */ 74 - #define RD_PROT 
0 /* read protection */ 75 - #define WRT_PROT 1 /* write protection */ 76 - #define NOEXEC 2 /* non executable */ 77 - #define PAGE_MODIFY 3 /* page modified */ 78 - #define ACC_BIT 4 /* access bit */ 79 - #define RESVED_PTE 5 /* reserved PTE attribute */ 80 - /* reserved 6 ~ 16 */ 81 - 82 - #endif /* _ASM_NDS32_NDS32_H_ */
-109
arch/nds32/include/asm/nds32_fpu_inst.h
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright (C) 2005-2018 Andes Technology Corporation */

#ifndef __NDS32_FPU_INST_H
#define __NDS32_FPU_INST_H

#define cop0_op	0x35

/*
 * COP0 field of opcodes.
 */
#define fs1_op	0x0
#define fs2_op	0x4
#define fd1_op	0x8
#define fd2_op	0xc

/*
 * FS1 opcode.
 */
enum fs1 {
	fadds_op, fsubs_op, fcpynss_op, fcpyss_op,
	fmadds_op, fmsubs_op, fcmovns_op, fcmovzs_op,
	fnmadds_op, fnmsubs_op,
	fmuls_op = 0xc, fdivs_op,
	fs1_f2op_op = 0xf
};

/*
 * FS1/F2OP opcode.
 */
enum fs1_f2 {
	fs2d_op, fsqrts_op,
	fui2s_op = 0x8, fsi2s_op = 0xc,
	fs2ui_op = 0x10, fs2ui_z_op = 0x14,
	fs2si_op = 0x18, fs2si_z_op = 0x1c
};

/*
 * FS2 opcode.
 */
enum fs2 {
	fcmpeqs_op, fcmpeqs_e_op, fcmplts_op, fcmplts_e_op,
	fcmples_op, fcmples_e_op, fcmpuns_op, fcmpuns_e_op
};

/*
 * FD1 opcode.
 */
enum fd1 {
	faddd_op, fsubd_op, fcpynsd_op, fcpysd_op,
	fmaddd_op, fmsubd_op, fcmovnd_op, fcmovzd_op,
	fnmaddd_op, fnmsubd_op,
	fmuld_op = 0xc, fdivd_op, fd1_f2op_op = 0xf
};

/*
 * FD1/F2OP opcode.
 */
enum fd1_f2 {
	fd2s_op, fsqrtd_op,
	fui2d_op = 0x8, fsi2d_op = 0xc,
	fd2ui_op = 0x10, fd2ui_z_op = 0x14,
	fd2si_op = 0x18, fd2si_z_op = 0x1c
};

/*
 * FD2 opcode.
 */
enum fd2 {
	fcmpeqd_op, fcmpeqd_e_op, fcmpltd_op, fcmpltd_e_op,
	fcmpled_op, fcmpled_e_op, fcmpund_op, fcmpund_e_op
};

#define NDS32Insn(x)	x

#define I_OPCODE_off	25
#define NDS32Insn_OPCODE(x)	(NDS32Insn(x) >> I_OPCODE_off)

#define I_OPCODE_offRt	20
#define I_OPCODE_mskRt	(0x1fUL << I_OPCODE_offRt)
#define NDS32Insn_OPCODE_Rt(x) \
	((NDS32Insn(x) & I_OPCODE_mskRt) >> I_OPCODE_offRt)

#define I_OPCODE_offRa	15
#define I_OPCODE_mskRa	(0x1fUL << I_OPCODE_offRa)
#define NDS32Insn_OPCODE_Ra(x) \
	((NDS32Insn(x) & I_OPCODE_mskRa) >> I_OPCODE_offRa)

#define I_OPCODE_offRb	10
#define I_OPCODE_mskRb	(0x1fUL << I_OPCODE_offRb)
#define NDS32Insn_OPCODE_Rb(x) \
	((NDS32Insn(x) & I_OPCODE_mskRb) >> I_OPCODE_offRb)

#define I_OPCODE_offbit1014	10
#define I_OPCODE_mskbit1014	(0x1fUL << I_OPCODE_offbit1014)
#define NDS32Insn_OPCODE_BIT1014(x) \
	((NDS32Insn(x) & I_OPCODE_mskbit1014) >> I_OPCODE_offbit1014)

#define I_OPCODE_offbit69	6
#define I_OPCODE_mskbit69	(0xfUL << I_OPCODE_offbit69)
#define NDS32Insn_OPCODE_BIT69(x) \
	((NDS32Insn(x) & I_OPCODE_mskbit69) >> I_OPCODE_offbit69)

#define I_OPCODE_offCOP0	0
#define I_OPCODE_mskCOP0	(0x3fUL << I_OPCODE_offCOP0)
#define NDS32Insn_OPCODE_COP0(x) \
	((NDS32Insn(x) & I_OPCODE_mskCOP0) >> I_OPCODE_offCOP0)

#endif /* __NDS32_FPU_INST_H */
-64
arch/nds32/include/asm/page.h
/* SPDX-License-Identifier: GPL-2.0 */
/*
 * Copyright (C) 2005-2017 Andes Technology Corporation
 */

#ifndef _ASMNDS32_PAGE_H
#define _ASMNDS32_PAGE_H

#ifdef CONFIG_ANDES_PAGE_SIZE_4KB
#define PAGE_SHIFT	12
#endif
#ifdef CONFIG_ANDES_PAGE_SIZE_8KB
#define PAGE_SHIFT	13
#endif
#include <linux/const.h>
#define PAGE_SIZE	(_AC(1,UL) << PAGE_SHIFT)
#define PAGE_MASK	(~(PAGE_SIZE-1))

#ifdef __KERNEL__

#ifndef __ASSEMBLY__

struct page;
struct vm_area_struct;
#ifdef CONFIG_CPU_CACHE_ALIASING
extern void copy_user_highpage(struct page *to, struct page *from,
			       unsigned long vaddr, struct vm_area_struct *vma);
extern void clear_user_highpage(struct page *page, unsigned long vaddr);

void copy_user_page(void *vto, void *vfrom, unsigned long vaddr,
		    struct page *to);
void clear_user_page(void *addr, unsigned long vaddr, struct page *page);
#define __HAVE_ARCH_COPY_USER_HIGHPAGE
#define clear_user_highpage	clear_user_highpage
#else
#define clear_user_page(page, vaddr, pg)	clear_page(page)
#define copy_user_page(to, from, vaddr, pg)	copy_page(to, from)
#endif

void clear_page(void *page);
void copy_page(void *to, void *from);

typedef unsigned long pte_t;
typedef unsigned long pgd_t;
typedef unsigned long pgprot_t;

#define pte_val(x)	(x)
#define pgd_val(x)	(x)
#define pgprot_val(x)	(x)

#define __pte(x)	(x)
#define __pgd(x)	(x)
#define __pgprot(x)	(x)

typedef struct page *pgtable_t;

#include <asm/memory.h>
#include <asm-generic/getorder.h>

#endif /* !__ASSEMBLY__ */

#endif /* __KERNEL__ */

#endif
-16
arch/nds32/include/asm/perf_event.h
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright (C) 2008-2018 Andes Technology Corporation */

#ifndef __ASM_PERF_EVENT_H
#define __ASM_PERF_EVENT_H

/*
 * This file is required by the perf core;
 * see tools/perf/design.txt for details.
 */
struct pt_regs;
unsigned long perf_instruction_pointer(struct pt_regs *regs);
unsigned long perf_misc_flags(struct pt_regs *regs);
#define perf_misc_flags(regs)	perf_misc_flags(regs)

#endif
-62
arch/nds32/include/asm/pgalloc.h
/* SPDX-License-Identifier: GPL-2.0 */
// Copyright (C) 2005-2017 Andes Technology Corporation

#ifndef _ASMNDS32_PGALLOC_H
#define _ASMNDS32_PGALLOC_H

#include <asm/processor.h>
#include <asm/cacheflush.h>
#include <asm/tlbflush.h>
#include <asm/proc-fns.h>

#define __HAVE_ARCH_PTE_ALLOC_ONE
#include <asm-generic/pgalloc.h>	/* for pte_{alloc,free}_one */

extern pgd_t *pgd_alloc(struct mm_struct *mm);
extern void pgd_free(struct mm_struct *mm, pgd_t * pgd);

static inline pgtable_t pte_alloc_one(struct mm_struct *mm)
{
	pgtable_t pte;

	pte = __pte_alloc_one(mm, GFP_PGTABLE_USER);
	if (pte)
		cpu_dcache_wb_page((unsigned long)page_address(pte));

	return pte;
}

/*
 * Populate the pmdp entry with a pointer to the pte.  This pmd is part
 * of the mm address space.
 *
 * Ensure that we always set both PMD entries.
 */
static inline void
pmd_populate_kernel(struct mm_struct *mm, pmd_t * pmdp, pte_t * ptep)
{
	unsigned long pte_ptr = (unsigned long)ptep;
	unsigned long pmdval;

	BUG_ON(mm != &init_mm);

	/*
	 * The pmd must be loaded with the physical
	 * address of the PTE table
	 */
	pmdval = __pa(pte_ptr) | _PAGE_KERNEL_TABLE;
	set_pmd(pmdp, __pmd(pmdval));
}

static inline void
pmd_populate(struct mm_struct *mm, pmd_t * pmdp, pgtable_t ptep)
{
	unsigned long pmdval;

	BUG_ON(mm == &init_mm);

	pmdval = page_to_pfn(ptep) << PAGE_SHIFT | _PAGE_USER_TABLE;
	set_pmd(pmdp, __pmd(pmdval));
}

#endif
-377
arch/nds32/include/asm/pgtable.h
/* SPDX-License-Identifier: GPL-2.0 */
// Copyright (C) 2005-2017 Andes Technology Corporation

#ifndef _ASMNDS32_PGTABLE_H
#define _ASMNDS32_PGTABLE_H

#include <asm-generic/pgtable-nopmd.h>
#include <linux/sizes.h>

#include <asm/memory.h>
#include <asm/nds32.h>
#ifndef __ASSEMBLY__
#include <asm/fixmap.h>
#include <nds32_intrinsic.h>
#endif

#ifdef CONFIG_ANDES_PAGE_SIZE_4KB
#define PGDIR_SHIFT	22
#define PTRS_PER_PGD	1024
#define PTRS_PER_PTE	1024
#endif

#ifdef CONFIG_ANDES_PAGE_SIZE_8KB
#define PGDIR_SHIFT	24
#define PTRS_PER_PGD	256
#define PTRS_PER_PTE	2048
#endif

#ifndef __ASSEMBLY__
extern void __pte_error(const char *file, int line, unsigned long val);
extern void __pgd_error(const char *file, int line, unsigned long val);

#define pte_ERROR(pte)	__pte_error(__FILE__, __LINE__, pte_val(pte))
#define pgd_ERROR(pgd)	__pgd_error(__FILE__, __LINE__, pgd_val(pgd))
#endif /* !__ASSEMBLY__ */

#define PMD_SIZE	(1UL << PMD_SHIFT)
#define PMD_MASK	(~(PMD_SIZE-1))
#define PGDIR_SIZE	(1UL << PGDIR_SHIFT)
#define PGDIR_MASK	(~(PGDIR_SIZE-1))

/*
 * This is the lowest virtual address we can permit any user space
 * mapping to be mapped at.  This is particularly important for
 * non-high vector CPUs.
 */
#define FIRST_USER_ADDRESS	0x8000

#ifdef CONFIG_HIGHMEM
#define CONSISTENT_BASE		((PKMAP_BASE) - (SZ_2M))
#define CONSISTENT_END		(PKMAP_BASE)
#else
#define CONSISTENT_BASE		(FIXADDR_START - SZ_2M)
#define CONSISTENT_END		(FIXADDR_START)
#endif
#define CONSISTENT_OFFSET(x)	(((unsigned long)(x) - CONSISTENT_BASE) >> PAGE_SHIFT)

#ifdef CONFIG_HIGHMEM
#ifndef __ASSEMBLY__
#include <asm/highmem.h>
#endif
#endif

#define VMALLOC_RESERVE		SZ_128M
#define VMALLOC_END		(CONSISTENT_BASE - PAGE_SIZE)
#define VMALLOC_START		((VMALLOC_END) - VMALLOC_RESERVE)
#define VMALLOC_VMADDR(x)	((unsigned long)(x))
#define MAXMEM			__pa(VMALLOC_START)
#define MAXMEM_PFN		PFN_DOWN(MAXMEM)

#define FIRST_USER_PGD_NR	0
#define USER_PTRS_PER_PGD	((TASK_SIZE/PGDIR_SIZE) + FIRST_USER_PGD_NR)

/* L2 PTE */
#define _PAGE_V			(1UL << 0)

#define _PAGE_M_XKRW		(0UL << 1)
#define _PAGE_M_UR_KR		(1UL << 1)
#define _PAGE_M_UR_KRW		(2UL << 1)
#define _PAGE_M_URW_KRW		(3UL << 1)
#define _PAGE_M_KR		(5UL << 1)
#define _PAGE_M_KRW		(7UL << 1)

#define _PAGE_D			(1UL << 4)
#define _PAGE_E			(1UL << 5)
#define _PAGE_A			(1UL << 6)
#define _PAGE_G			(1UL << 7)

#define _PAGE_C_DEV		(0UL << 8)
#define _PAGE_C_DEV_WB		(1UL << 8)
#define _PAGE_C_MEM		(2UL << 8)
#define _PAGE_C_MEM_SHRD_WB	(4UL << 8)
#define _PAGE_C_MEM_SHRD_WT	(5UL << 8)
#define _PAGE_C_MEM_WB		(6UL << 8)
#define _PAGE_C_MEM_WT		(7UL << 8)

#define _PAGE_L			(1UL << 11)

#define _HAVE_PAGE_L		(_PAGE_L)
#define _PAGE_FILE		(1UL << 1)
#define _PAGE_YOUNG		0
#define _PAGE_M_MASK		_PAGE_M_KRW
#define _PAGE_C_MASK		_PAGE_C_MEM_WT

#ifdef CONFIG_SMP
#ifdef CONFIG_CPU_DCACHE_WRITETHROUGH
#define _PAGE_CACHE_SHRD	_PAGE_C_MEM_SHRD_WT
#else
#define _PAGE_CACHE_SHRD	_PAGE_C_MEM_SHRD_WB
#endif
#else
#ifdef CONFIG_CPU_DCACHE_WRITETHROUGH
#define _PAGE_CACHE_SHRD	_PAGE_C_MEM_WT
#else
#define _PAGE_CACHE_SHRD	_PAGE_C_MEM_WB
#endif
#endif

#ifdef CONFIG_CPU_DCACHE_WRITETHROUGH
#define _PAGE_CACHE		_PAGE_C_MEM_WT
#else
#define _PAGE_CACHE		_PAGE_C_MEM_WB
#endif

#define _PAGE_IOREMAP \
	(_PAGE_V | _PAGE_M_KRW | _PAGE_D | _PAGE_G | _PAGE_C_DEV)

/*
 * + Level 1 descriptor (PMD)
 */
#define PMD_TYPE_TABLE		0

#ifndef __ASSEMBLY__

#define _PAGE_USER_TABLE	PMD_TYPE_TABLE
#define _PAGE_KERNEL_TABLE	PMD_TYPE_TABLE

#define PAGE_EXEC	__pgprot(_PAGE_V | _PAGE_M_XKRW | _PAGE_E)
#define PAGE_NONE	__pgprot(_PAGE_V | _PAGE_M_KRW | _PAGE_A)
#define PAGE_READ	__pgprot(_PAGE_V | _PAGE_M_UR_KR)
#define PAGE_RDWR	__pgprot(_PAGE_V | _PAGE_M_URW_KRW | _PAGE_D)
#define PAGE_COPY	__pgprot(_PAGE_V | _PAGE_M_UR_KR)

#define PAGE_UXKRWX_V1	__pgprot(_PAGE_V | _PAGE_M_KRW | _PAGE_D | _PAGE_E | _PAGE_G | _PAGE_CACHE_SHRD)
#define PAGE_UXKRWX_V2	__pgprot(_PAGE_V | _PAGE_M_XKRW | _PAGE_D | _PAGE_E | _PAGE_G | _PAGE_CACHE_SHRD)
#define PAGE_URXKRWX_V2	__pgprot(_PAGE_V | _PAGE_M_UR_KRW | _PAGE_D | _PAGE_E | _PAGE_G | _PAGE_CACHE_SHRD)
#define PAGE_CACHE_L1	__pgprot(_HAVE_PAGE_L | _PAGE_V | _PAGE_M_KRW | _PAGE_D | _PAGE_E | _PAGE_G | _PAGE_CACHE)
#define PAGE_MEMORY	__pgprot(_HAVE_PAGE_L | _PAGE_V | _PAGE_M_KRW | _PAGE_D | _PAGE_E | _PAGE_G | _PAGE_CACHE_SHRD)
#define PAGE_KERNEL	__pgprot(_PAGE_V | _PAGE_M_KRW | _PAGE_D | _PAGE_E | _PAGE_G | _PAGE_CACHE_SHRD)
#define PAGE_SHARED	__pgprot(_PAGE_V | _PAGE_M_URW_KRW | _PAGE_D | _PAGE_CACHE_SHRD)
#define PAGE_DEVICE	__pgprot(_PAGE_V | _PAGE_M_KRW | _PAGE_D | _PAGE_G | _PAGE_C_DEV)
#endif /* __ASSEMBLY__ */

/* xwr */
#define __P000	(PAGE_NONE | _PAGE_CACHE_SHRD)
#define __P001	(PAGE_READ | _PAGE_CACHE_SHRD)
#define __P010	(PAGE_COPY | _PAGE_CACHE_SHRD)
#define __P011	(PAGE_COPY | _PAGE_CACHE_SHRD)
#define __P100	(PAGE_EXEC | _PAGE_CACHE_SHRD)
#define __P101	(PAGE_READ | _PAGE_E | _PAGE_CACHE_SHRD)
#define __P110	(PAGE_COPY | _PAGE_E | _PAGE_CACHE_SHRD)
#define __P111	(PAGE_COPY | _PAGE_E | _PAGE_CACHE_SHRD)

#define __S000	(PAGE_NONE | _PAGE_CACHE_SHRD)
#define __S001	(PAGE_READ | _PAGE_CACHE_SHRD)
#define __S010	(PAGE_RDWR | _PAGE_CACHE_SHRD)
#define __S011	(PAGE_RDWR | _PAGE_CACHE_SHRD)
#define __S100	(PAGE_EXEC | _PAGE_CACHE_SHRD)
#define __S101	(PAGE_READ | _PAGE_E | _PAGE_CACHE_SHRD)
#define __S110	(PAGE_RDWR | _PAGE_E | _PAGE_CACHE_SHRD)
#define __S111	(PAGE_RDWR | _PAGE_E | _PAGE_CACHE_SHRD)

#ifndef __ASSEMBLY__
/*
 * ZERO_PAGE is a global shared page that is always zero: used
 * for zero-mapped memory areas etc..
 */
extern struct page *empty_zero_page;
extern void paging_init(void);
#define ZERO_PAGE(vaddr)	(empty_zero_page)

#define pte_pfn(pte)		(pte_val(pte) >> PAGE_SHIFT)
#define pfn_pte(pfn,prot)	(__pte(((pfn) << PAGE_SHIFT) | pgprot_val(prot)))

#define pte_none(pte)		!(pte_val(pte))
#define pte_clear(mm,addr,ptep)	set_pte_at((mm),(addr),(ptep), __pte(0))
#define pte_page(pte)		(pfn_to_page(pte_pfn(pte)))

static unsigned long pmd_page_vaddr(pmd_t pmd)
{
	return ((unsigned long) __va(pmd_val(pmd) & PAGE_MASK));
}

#define set_pte_at(mm,addr,ptep,pteval)	set_pte(ptep,pteval)
/*
 * Set a level 1 translation table entry, and clean it out of
 * any caches such that the MMUs can load it correctly.
 */
static inline void set_pmd(pmd_t * pmdp, pmd_t pmd)
{
	*pmdp = pmd;
#if !defined(CONFIG_CPU_DCACHE_DISABLE) && !defined(CONFIG_CPU_DCACHE_WRITETHROUGH)
	__asm__ volatile ("\n\tcctl %0, L1D_VA_WB"::"r" (pmdp):"memory");
	__nds32__msync_all();
	__nds32__dsb();
#endif
}

/*
 * Set a PTE and flush it out
 */
static inline void set_pte(pte_t * ptep, pte_t pte)
{
	*ptep = pte;
#if !defined(CONFIG_CPU_DCACHE_DISABLE) && !defined(CONFIG_CPU_DCACHE_WRITETHROUGH)
	__asm__ volatile ("\n\tcctl %0, L1D_VA_WB"::"r" (ptep):"memory");
	__nds32__msync_all();
	__nds32__dsb();
#endif
}

/*
 * The following only work if pte_present() is true.
 * Undefined behaviour if not.
 */

/*
 * pte_write: this page is writeable for user mode
 * pte_read: this page is readable for user mode
 * pte_kernel_write: this page is writeable for kernel mode
 *
 * There is no pte_kernel_read because the kernel can always read.
 */

#define pte_present(pte)	(pte_val(pte) & _PAGE_V)
#define pte_write(pte)		((pte_val(pte) & _PAGE_M_MASK) == _PAGE_M_URW_KRW)
#define pte_read(pte)		(((pte_val(pte) & _PAGE_M_MASK) == _PAGE_M_UR_KR) || \
				((pte_val(pte) & _PAGE_M_MASK) == _PAGE_M_UR_KRW) || \
				((pte_val(pte) & _PAGE_M_MASK) == _PAGE_M_URW_KRW))
#define pte_kernel_write(pte)	(((pte_val(pte) & _PAGE_M_MASK) == _PAGE_M_URW_KRW) || \
				((pte_val(pte) & _PAGE_M_MASK) == _PAGE_M_UR_KRW) || \
				((pte_val(pte) & _PAGE_M_MASK) == _PAGE_M_KRW) || \
				(((pte_val(pte) & _PAGE_M_MASK) == _PAGE_M_XKRW) && pte_exec(pte)))
#define pte_exec(pte)		(pte_val(pte) & _PAGE_E)
#define pte_dirty(pte)		(pte_val(pte) & _PAGE_D)
#define pte_young(pte)		(pte_val(pte) & _PAGE_YOUNG)

/*
 * The following only works if pte_present() is not true.
 */
#define pte_file(pte)		(pte_val(pte) & _PAGE_FILE)
#define pte_to_pgoff(x)		(pte_val(x) >> 2)
#define pgoff_to_pte(x)		__pte(((x) << 2) | _PAGE_FILE)

#define PTE_FILE_MAX_BITS	29

#define PTE_BIT_FUNC(fn,op) \
static inline pte_t pte_##fn(pte_t pte) { pte_val(pte) op; return pte; }

static inline pte_t pte_wrprotect(pte_t pte)
{
	pte_val(pte) = pte_val(pte) & ~_PAGE_M_MASK;
	pte_val(pte) = pte_val(pte) | _PAGE_M_UR_KR;
	return pte;
}

static inline pte_t pte_mkwrite(pte_t pte)
{
	pte_val(pte) = pte_val(pte) & ~_PAGE_M_MASK;
	pte_val(pte) = pte_val(pte) | _PAGE_M_URW_KRW;
	return pte;
}

PTE_BIT_FUNC(exprotect, &=~_PAGE_E);
PTE_BIT_FUNC(mkexec, |=_PAGE_E);
PTE_BIT_FUNC(mkclean, &=~_PAGE_D);
PTE_BIT_FUNC(mkdirty, |=_PAGE_D);
PTE_BIT_FUNC(mkold, &=~_PAGE_YOUNG);
PTE_BIT_FUNC(mkyoung, |=_PAGE_YOUNG);

/*
 * Mark the prot value as uncacheable and unbufferable.
 */
#define pgprot_noncached(prot)		__pgprot((pgprot_val(prot)&~_PAGE_C_MASK) | _PAGE_C_DEV)
#define pgprot_writecombine(prot)	__pgprot((pgprot_val(prot)&~_PAGE_C_MASK) | _PAGE_C_DEV_WB)

#define pmd_none(pmd)		(pmd_val(pmd)&0x1)
#define pmd_present(pmd)	(!pmd_none(pmd))
#define pmd_bad(pmd)		pmd_none(pmd)

#define copy_pmd(pmdpd,pmdps)	set_pmd((pmdpd), *(pmdps))
#define pmd_clear(pmdp)		set_pmd((pmdp), __pmd(1))

static inline pmd_t __mk_pmd(pte_t * ptep, unsigned long prot)
{
	unsigned long ptr = (unsigned long)ptep;
	pmd_t pmd;

	/*
	 * The pmd must be loaded with the physical
	 * address of the PTE table
	 */
	pmd_val(pmd) = __virt_to_phys(ptr) | prot;
	return pmd;
}

#define pmd_page(pmd)		virt_to_page(__va(pmd_val(pmd)))

/*
 * Permanent address of a page.  We never have highmem, so this is trivial.
 */
#define pages_to_mb(x)		((x) >> (20 - PAGE_SHIFT))

/*
 * Conversion functions: convert a page and protection to a page entry,
 * and a page entry and page directory to the page they refer to.
 */
#define mk_pte(page,prot)	pfn_pte(page_to_pfn(page),prot)

/*
 * The "pgd_xxx()" functions here are trivial for a folded two-level
 * setup: the pgd is never bad, and a pmd always exists (as it's folded
 * into the pgd entry)
 */
#define pgd_none(pgd)		(0)
#define pgd_bad(pgd)		(0)
#define pgd_present(pgd)	(1)
#define pgd_clear(pgdp)		do { } while (0)

#define page_pte_prot(page,prot)	mk_pte(page, prot)
#define page_pte(page)			mk_pte(page, __pgprot(0))
/*
 * L1PTE = $mr1 + ((virt >> PMD_SHIFT) << 2);
 * L2PTE = (((virt >> PAGE_SHIFT) & (PTRS_PER_PTE - 1)) << 2);
 * PPN = (phys & 0xfffff000);
 */

static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
{
	const unsigned long mask = 0xfff;
	pte_val(pte) = (pte_val(pte) & ~mask) | (pgprot_val(newprot) & mask);
	return pte;
}

extern pgd_t swapper_pg_dir[PTRS_PER_PGD];

/* Encode and decode a swap entry.
 *
 * We support up to 32GB of swap on 4k machines
 */
#define __swp_type(x)			(((x).val >> 2) & 0x7f)
#define __swp_offset(x)			((x).val >> 9)
#define __swp_entry(type,offset)	((swp_entry_t) { ((type) << 2) | ((offset) << 9) })
#define __pte_to_swp_entry(pte)		((swp_entry_t) { pte_val(pte) })
#define __swp_entry_to_pte(swp)		((pte_t) { (swp).val })

/* Needs to be defined here and not in linux/mm.h, as it is arch dependent */
#define kern_addr_valid(addr)		(1)

/*
 * We provide our own arch_get_unmapped_area to cope with VIPT caches.
 */
#define HAVE_ARCH_UNMAPPED_AREA

/*
 * remap a physical address `phys' of size `size' with page protection `prot'
 * into virtual address `from'
 */

#endif /* !__ASSEMBLY__ */

#endif /* _ASMNDS32_PGTABLE_H */
-386
arch/nds32/include/asm/pmu.h
/* SPDX-License-Identifier: GPL-2.0 */
/* Copyright (C) 2008-2018 Andes Technology Corporation */

#ifndef __ASM_PMU_H
#define __ASM_PMU_H

#include <linux/interrupt.h>
#include <linux/perf_event.h>
#include <asm/unistd.h>
#include <asm/bitfield.h>

/* Has special meaning for perf core implementation */
#define HW_OP_UNSUPPORTED		0x0
#define C(_x)				PERF_COUNT_HW_CACHE_##_x
#define CACHE_OP_UNSUPPORTED		0x0

/* Enough for both software and hardware defined events */
#define SOFTWARE_EVENT_MASK		0xFF

#define PFM_OFFSET_MAGIC_0		2	/* DO NOT START FROM 0 */
#define PFM_OFFSET_MAGIC_1		(PFM_OFFSET_MAGIC_0 + 36)
#define PFM_OFFSET_MAGIC_2		(PFM_OFFSET_MAGIC_1 + 36)

enum { PFMC0, PFMC1, PFMC2, MAX_COUNTERS };

u32 PFM_CTL_OVF[3] = { PFM_CTL_mskOVF0, PFM_CTL_mskOVF1,
		       PFM_CTL_mskOVF2 };
u32 PFM_CTL_EN[3] = { PFM_CTL_mskEN0, PFM_CTL_mskEN1,
		      PFM_CTL_mskEN2 };
u32 PFM_CTL_OFFSEL[3] = { PFM_CTL_offSEL0, PFM_CTL_offSEL1,
			  PFM_CTL_offSEL2 };
u32 PFM_CTL_IE[3] = { PFM_CTL_mskIE0, PFM_CTL_mskIE1, PFM_CTL_mskIE2 };
u32 PFM_CTL_KS[3] = { PFM_CTL_mskKS0, PFM_CTL_mskKS1, PFM_CTL_mskKS2 };
u32 PFM_CTL_KU[3] = { PFM_CTL_mskKU0, PFM_CTL_mskKU1, PFM_CTL_mskKU2 };
u32 PFM_CTL_SEL[3] = { PFM_CTL_mskSEL0, PFM_CTL_mskSEL1, PFM_CTL_mskSEL2 };
/*
 * Perf Events' indices
 */
#define NDS32_IDX_CYCLE_COUNTER		0
#define NDS32_IDX_COUNTER0		1
#define NDS32_IDX_COUNTER1		2

/* The events for a given PMU register set. */
struct pmu_hw_events {
	/*
	 * The events that are active on the PMU for the given index.
	 */
	struct perf_event *events[MAX_COUNTERS];

	/*
	 * A 1 bit for an index indicates that the counter is being used for
	 * an event.  A 0 means that the counter can be used.
	 */
	unsigned long used_mask[BITS_TO_LONGS(MAX_COUNTERS)];

	/*
	 * Hardware lock to serialize accesses to PMU registers.  Needed for
	 * the read/modify/write sequences.
	 */
	raw_spinlock_t pmu_lock;
};

struct nds32_pmu {
	struct pmu pmu;
	cpumask_t active_irqs;
	char *name;
	irqreturn_t (*handle_irq)(int irq_num, void *dev);
	void (*enable)(struct perf_event *event);
	void (*disable)(struct perf_event *event);
	int (*get_event_idx)(struct pmu_hw_events *hw_events,
			     struct perf_event *event);
	int (*set_event_filter)(struct hw_perf_event *evt,
				struct perf_event_attr *attr);
	u32 (*read_counter)(struct perf_event *event);
	void (*write_counter)(struct perf_event *event, u32 val);
	void (*start)(struct nds32_pmu *nds32_pmu);
	void (*stop)(struct nds32_pmu *nds32_pmu);
	void (*reset)(void *data);
	int (*request_irq)(struct nds32_pmu *nds32_pmu, irq_handler_t handler);
	void (*free_irq)(struct nds32_pmu *nds32_pmu);
	int (*map_event)(struct perf_event *event);
	int num_events;
	atomic_t active_events;
	u64 max_period;
	struct platform_device *plat_device;
	struct pmu_hw_events *(*get_hw_events)(void);
};

#define to_nds32_pmu(p) (container_of(p, struct nds32_pmu, pmu))

int nds32_pmu_register(struct nds32_pmu *nds32_pmu, int type);

u64 nds32_pmu_event_update(struct perf_event *event);

int nds32_pmu_event_set_period(struct perf_event *event);

/*
 * Common NDS32 SPAv3 event types
 *
 * Note: An implementation may not be able to count all of these events
 * but the encodings are considered to be `reserved' in the case that
 * they are not available.
 *
 * SEL_TOTAL_CYCLES is given an offset because zero is treated as a
 * NOT_SUPPORTED event mapping by the generic perf code; the offset
 * must be stripped again in the event-writing implementation.
 */
enum spav3_counter_0_perf_types {
	SPAV3_0_SEL_BASE = -1 + PFM_OFFSET_MAGIC_0,	/* counting symbol */
	SPAV3_0_SEL_TOTAL_CYCLES = 0 + PFM_OFFSET_MAGIC_0,
	SPAV3_0_SEL_COMPLETED_INSTRUCTION = 1 + PFM_OFFSET_MAGIC_0,
	SPAV3_0_SEL_LAST	/* counting symbol */
};

enum spav3_counter_1_perf_types {
	SPAV3_1_SEL_BASE = -1 + PFM_OFFSET_MAGIC_1,	/* counting symbol */
	SPAV3_1_SEL_TOTAL_CYCLES = 0 + PFM_OFFSET_MAGIC_1,
	SPAV3_1_SEL_COMPLETED_INSTRUCTION = 1 + PFM_OFFSET_MAGIC_1,
	SPAV3_1_SEL_CONDITIONAL_BRANCH = 2 + PFM_OFFSET_MAGIC_1,
	SPAV3_1_SEL_TAKEN_CONDITIONAL_BRANCH = 3 + PFM_OFFSET_MAGIC_1,
	SPAV3_1_SEL_PREFETCH_INSTRUCTION = 4 + PFM_OFFSET_MAGIC_1,
	SPAV3_1_SEL_RET_INST = 5 + PFM_OFFSET_MAGIC_1,
	SPAV3_1_SEL_JR_INST = 6 + PFM_OFFSET_MAGIC_1,
	SPAV3_1_SEL_JAL_JRAL_INST = 7 + PFM_OFFSET_MAGIC_1,
	SPAV3_1_SEL_NOP_INST = 8 + PFM_OFFSET_MAGIC_1,
	SPAV3_1_SEL_SCW_INST = 9 + PFM_OFFSET_MAGIC_1,
	SPAV3_1_SEL_ISB_DSB_INST = 10 + PFM_OFFSET_MAGIC_1,
	SPAV3_1_SEL_CCTL_INST = 11 + PFM_OFFSET_MAGIC_1,
	SPAV3_1_SEL_TAKEN_INTERRUPTS = 12 + PFM_OFFSET_MAGIC_1,
	SPAV3_1_SEL_LOADS_COMPLETED = 13 + PFM_OFFSET_MAGIC_1,
	SPAV3_1_SEL_UITLB_ACCESS = 14 + PFM_OFFSET_MAGIC_1,
	SPAV3_1_SEL_UDTLB_ACCESS = 15 + PFM_OFFSET_MAGIC_1,
	SPAV3_1_SEL_MTLB_ACCESS = 16 + PFM_OFFSET_MAGIC_1,
	SPAV3_1_SEL_CODE_CACHE_ACCESS = 17 + PFM_OFFSET_MAGIC_1,
	SPAV3_1_SEL_DATA_DEPENDENCY_STALL_CYCLES = 18 + PFM_OFFSET_MAGIC_1,
	SPAV3_1_SEL_DATA_CACHE_MISS_STALL_CYCLES = 19 + PFM_OFFSET_MAGIC_1,
	SPAV3_1_SEL_DATA_CACHE_ACCESS = 20 + PFM_OFFSET_MAGIC_1,
	SPAV3_1_SEL_DATA_CACHE_MISS = 21 + PFM_OFFSET_MAGIC_1,
	SPAV3_1_SEL_LOAD_DATA_CACHE_ACCESS = 22 + PFM_OFFSET_MAGIC_1,
	SPAV3_1_SEL_STORE_DATA_CACHE_ACCESS = 23 + PFM_OFFSET_MAGIC_1,
	SPAV3_1_SEL_ILM_ACCESS = 24 + PFM_OFFSET_MAGIC_1,
	SPAV3_1_SEL_LSU_BIU_CYCLES = 25 + PFM_OFFSET_MAGIC_1,
	SPAV3_1_SEL_HPTWK_BIU_CYCLES = 26 + PFM_OFFSET_MAGIC_1,
	SPAV3_1_SEL_DMA_BIU_CYCLES = 27 + PFM_OFFSET_MAGIC_1,
	SPAV3_1_SEL_CODE_CACHE_FILL_BIU_CYCLES = 28 + PFM_OFFSET_MAGIC_1,
	SPAV3_1_SEL_LEGAL_UNALIGN_DCACHE_ACCESS = 29 + PFM_OFFSET_MAGIC_1,
	SPAV3_1_SEL_PUSH25 = 30 + PFM_OFFSET_MAGIC_1,
	SPAV3_1_SEL_SYSCALLS_INST = 31 + PFM_OFFSET_MAGIC_1,
	SPAV3_1_SEL_LAST	/* counting symbol */
};

enum spav3_counter_2_perf_types {
	SPAV3_2_SEL_BASE = -1 + PFM_OFFSET_MAGIC_2,	/* counting symbol */
	SPAV3_2_SEL_TOTAL_CYCLES = 0 + PFM_OFFSET_MAGIC_2,
	SPAV3_2_SEL_COMPLETED_INSTRUCTION = 1 + PFM_OFFSET_MAGIC_2,
	SPAV3_2_SEL_CONDITIONAL_BRANCH_MISPREDICT = 2 + PFM_OFFSET_MAGIC_2,
	SPAV3_2_SEL_TAKEN_CONDITIONAL_BRANCH_MISPREDICT =
	    3 + PFM_OFFSET_MAGIC_2,
	SPAV3_2_SEL_PREFETCH_INSTRUCTION_CACHE_HIT = 4 + PFM_OFFSET_MAGIC_2,
	SPAV3_1_SEL_RET_MISPREDICT = 5 + PFM_OFFSET_MAGIC_2,
	SPAV3_1_SEL_IMMEDIATE_J_INST = 6 + PFM_OFFSET_MAGIC_2,
	SPAV3_1_SEL_MULTIPLY_INST = 7 + PFM_OFFSET_MAGIC_2,
	SPAV3_1_SEL_16_BIT_INST = 8 + PFM_OFFSET_MAGIC_2,
	SPAV3_1_SEL_FAILED_SCW_INST = 9 + PFM_OFFSET_MAGIC_2,
	SPAV3_1_SEL_LD_AFTER_ST_CONFLICT_REPLAYS = 10 + PFM_OFFSET_MAGIC_2,
	SPAV3_1_SEL_TAKEN_EXCEPTIONS = 12 + PFM_OFFSET_MAGIC_2,
	SPAV3_1_SEL_STORES_COMPLETED = 13 + PFM_OFFSET_MAGIC_2,
	SPAV3_2_SEL_UITLB_MISS = 14 + PFM_OFFSET_MAGIC_2,
	SPAV3_2_SEL_UDTLB_MISS = 15 + PFM_OFFSET_MAGIC_2,
	SPAV3_2_SEL_MTLB_MISS = 16 + PFM_OFFSET_MAGIC_2,
	SPAV3_2_SEL_CODE_CACHE_MISS = 17 + PFM_OFFSET_MAGIC_2,
	SPAV3_1_SEL_EMPTY_INST_QUEUE_STALL_CYCLES = 18 + PFM_OFFSET_MAGIC_2,
	SPAV3_1_SEL_DATA_WRITE_BACK = 19 + PFM_OFFSET_MAGIC_2,
	SPAV3_2_SEL_DATA_CACHE_MISS = 21 + PFM_OFFSET_MAGIC_2,
	SPAV3_2_SEL_LOAD_DATA_CACHE_MISS = 22 + PFM_OFFSET_MAGIC_2,
	SPAV3_2_SEL_STORE_DATA_CACHE_MISS = 23 + PFM_OFFSET_MAGIC_2,
	SPAV3_1_SEL_DLM_ACCESS = 24 + PFM_OFFSET_MAGIC_2,
	SPAV3_1_SEL_LSU_BIU_REQUEST = 25 + PFM_OFFSET_MAGIC_2,
	SPAV3_1_SEL_HPTWK_BIU_REQUEST = 26 + PFM_OFFSET_MAGIC_2,
	SPAV3_1_SEL_DMA_BIU_REQUEST = 27 + PFM_OFFSET_MAGIC_2,
	SPAV3_1_SEL_CODE_CACHE_FILL_BIU_REQUEST = 28 + PFM_OFFSET_MAGIC_2,
	SPAV3_1_SEL_EXTERNAL_EVENTS = 29 + PFM_OFFSET_MAGIC_2,
	SPAV3_1_SEL_POP25 = 30 + PFM_OFFSET_MAGIC_2,
	SPAV3_2_SEL_LAST	/* counting symbol */
};

/* Get converted event counter index */
static inline int get_converted_event_idx(unsigned long event)
{
	int idx;

	if ((event) > SPAV3_0_SEL_BASE && event < SPAV3_0_SEL_LAST) {
		idx = 0;
	} else if ((event) > SPAV3_1_SEL_BASE && event < SPAV3_1_SEL_LAST) {
		idx = 1;
	} else if ((event) > SPAV3_2_SEL_BASE && event < SPAV3_2_SEL_LAST) {
		idx = 2;
	} else {
		pr_err("GET_CONVERTED_EVENT_IDX PFM counter range error\n");
		return -EPERM;
	}

	return idx;
}

/* Get converted hardware event number */
static inline u32 get_converted_evet_hw_num(u32 event)
{
	if (event > SPAV3_0_SEL_BASE && event < SPAV3_0_SEL_LAST)
		event -= PFM_OFFSET_MAGIC_0;
	else if (event > SPAV3_1_SEL_BASE && event < SPAV3_1_SEL_LAST)
		event -= PFM_OFFSET_MAGIC_1;
	else if (event > SPAV3_2_SEL_BASE && event < SPAV3_2_SEL_LAST)
		event -= PFM_OFFSET_MAGIC_2;
	else if (event != 0)
		pr_err("GET_CONVERTED_EVENT_HW_NUM PFM counter range error\n");

	return event;
}

/*
 * NDS32 HW events mapping
 *
 * The hardware events that we support.  We do support cache operations but
 * we have harvard caches and no way to combine instruction and data
 * accesses/misses in hardware.
 */
static const unsigned int nds32_pfm_perf_map[PERF_COUNT_HW_MAX] = {
	[PERF_COUNT_HW_CPU_CYCLES] = SPAV3_0_SEL_TOTAL_CYCLES,
	[PERF_COUNT_HW_INSTRUCTIONS] = SPAV3_1_SEL_COMPLETED_INSTRUCTION,
	[PERF_COUNT_HW_CACHE_REFERENCES] = SPAV3_1_SEL_DATA_CACHE_ACCESS,
	[PERF_COUNT_HW_CACHE_MISSES] = SPAV3_2_SEL_DATA_CACHE_MISS,
	[PERF_COUNT_HW_BRANCH_INSTRUCTIONS] = HW_OP_UNSUPPORTED,
	[PERF_COUNT_HW_BRANCH_MISSES] = HW_OP_UNSUPPORTED,
	[PERF_COUNT_HW_BUS_CYCLES] = HW_OP_UNSUPPORTED,
	[PERF_COUNT_HW_STALLED_CYCLES_FRONTEND] = HW_OP_UNSUPPORTED,
	[PERF_COUNT_HW_STALLED_CYCLES_BACKEND] = HW_OP_UNSUPPORTED,
	[PERF_COUNT_HW_REF_CPU_CYCLES] = HW_OP_UNSUPPORTED
};

static const unsigned int nds32_pfm_perf_cache_map[PERF_COUNT_HW_CACHE_MAX]
						  [PERF_COUNT_HW_CACHE_OP_MAX]
						  [PERF_COUNT_HW_CACHE_RESULT_MAX] = {
	[C(L1D)] = {
		[C(OP_READ)] = {
			[C(RESULT_ACCESS)] = SPAV3_1_SEL_LOAD_DATA_CACHE_ACCESS,
			[C(RESULT_MISS)] = SPAV3_2_SEL_LOAD_DATA_CACHE_MISS,
		},
		[C(OP_WRITE)] = {
			[C(RESULT_ACCESS)] = SPAV3_1_SEL_STORE_DATA_CACHE_ACCESS,
			[C(RESULT_MISS)] = SPAV3_2_SEL_STORE_DATA_CACHE_MISS,
		},
		[C(OP_PREFETCH)] = {
			[C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED,
			[C(RESULT_MISS)] = CACHE_OP_UNSUPPORTED,
		},
	},
	[C(L1I)] = {
		[C(OP_READ)] = {
			[C(RESULT_ACCESS)] = SPAV3_1_SEL_CODE_CACHE_ACCESS,
			[C(RESULT_MISS)] = SPAV3_2_SEL_CODE_CACHE_MISS,
		},
		[C(OP_WRITE)] = {
			[C(RESULT_ACCESS)] = SPAV3_1_SEL_CODE_CACHE_ACCESS,
			[C(RESULT_MISS)] = SPAV3_2_SEL_CODE_CACHE_MISS,
		},
		[C(OP_PREFETCH)] = {
			[C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED,
			[C(RESULT_MISS)] = CACHE_OP_UNSUPPORTED,
		},
	},
	/* TODO: L2CC */
	[C(LL)] = {
		[C(OP_READ)] = {
			[C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED,
			[C(RESULT_MISS)] = CACHE_OP_UNSUPPORTED,
		},
		[C(OP_WRITE)] = {
			[C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED,
			[C(RESULT_MISS)] = CACHE_OP_UNSUPPORTED,
		},
		[C(OP_PREFETCH)] = {
			[C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED,
			[C(RESULT_MISS)] = CACHE_OP_UNSUPPORTED,
		},
	},
	/*
	 * The NDS32 PMU cannot distinguish TLB read hits/misses from write
	 * hits/misses; it only counts combined accesses and misses.  Map
	 * those combined counts to the READ slots and do as much as we can.
	 */
	[C(DTLB)] = {
		[C(OP_READ)] = {
			[C(RESULT_ACCESS)] = SPAV3_1_SEL_UDTLB_ACCESS,
			[C(RESULT_MISS)] = SPAV3_2_SEL_UDTLB_MISS,
		},
		[C(OP_WRITE)] = {
			[C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED,
			[C(RESULT_MISS)] = CACHE_OP_UNSUPPORTED,
		},
		[C(OP_PREFETCH)] = {
			[C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED,
			[C(RESULT_MISS)] = CACHE_OP_UNSUPPORTED,
		},
	},
	[C(ITLB)] = {
		[C(OP_READ)] = {
			[C(RESULT_ACCESS)] = SPAV3_1_SEL_UITLB_ACCESS,
			[C(RESULT_MISS)] = SPAV3_2_SEL_UITLB_MISS,
		},
		[C(OP_WRITE)] = {
			[C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED,
			[C(RESULT_MISS)] = CACHE_OP_UNSUPPORTED,
		},
		[C(OP_PREFETCH)] = {
			[C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED,
			[C(RESULT_MISS)] = CACHE_OP_UNSUPPORTED,
		},
	},
	[C(BPU)] = {		/* What is BPU? */
		[C(OP_READ)] = {
			[C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED,
			[C(RESULT_MISS)] = CACHE_OP_UNSUPPORTED,
		},
		[C(OP_WRITE)] = {
			[C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED,
			[C(RESULT_MISS)] = CACHE_OP_UNSUPPORTED,
		},
		[C(OP_PREFETCH)] = {
			[C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED,
			[C(RESULT_MISS)] = CACHE_OP_UNSUPPORTED,
		},
	},
	[C(NODE)] = {		/* What is NODE? */
		[C(OP_READ)] = {
			[C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED,
			[C(RESULT_MISS)] = CACHE_OP_UNSUPPORTED,
		},
		[C(OP_WRITE)] = {
			[C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED,
			[C(RESULT_MISS)] = CACHE_OP_UNSUPPORTED,
		},
		[C(OP_PREFETCH)] = {
			[C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED,
			[C(RESULT_MISS)] = CACHE_OP_UNSUPPORTED,
		},
	},
};

int nds32_pmu_map_event(struct perf_event *event,
			const unsigned int (*event_map)[PERF_COUNT_HW_MAX],
			const unsigned int (*cache_map)[PERF_COUNT_HW_CACHE_MAX]
						       [PERF_COUNT_HW_CACHE_OP_MAX]
						       [PERF_COUNT_HW_CACHE_RESULT_MAX],
			u32 raw_event_mask);

#endif /* __ASM_PMU_H */
-44
arch/nds32/include/asm/proc-fns.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 2 - // Copyright (C) 2005-2017 Andes Technology Corporation 3 - 4 - #ifndef __NDS32_PROCFNS_H__ 5 - #define __NDS32_PROCFNS_H__ 6 - 7 - #ifdef __KERNEL__ 8 - #include <asm/page.h> 9 - 10 - struct mm_struct; 11 - struct vm_area_struct; 12 - extern void cpu_proc_init(void); 13 - extern void cpu_proc_fin(void); 14 - extern void cpu_do_idle(void); 15 - extern void cpu_reset(unsigned long reset); 16 - extern void cpu_switch_mm(struct mm_struct *mm); 17 - 18 - extern void cpu_dcache_inval_all(void); 19 - extern void cpu_dcache_wbinval_all(void); 20 - extern void cpu_dcache_inval_page(unsigned long page); 21 - extern void cpu_dcache_wb_page(unsigned long page); 22 - extern void cpu_dcache_wbinval_page(unsigned long page); 23 - extern void cpu_dcache_inval_range(unsigned long start, unsigned long end); 24 - extern void cpu_dcache_wb_range(unsigned long start, unsigned long end); 25 - extern void cpu_dcache_wbinval_range(unsigned long start, unsigned long end); 26 - 27 - extern void cpu_icache_inval_all(void); 28 - extern void cpu_icache_inval_page(unsigned long page); 29 - extern void cpu_icache_inval_range(unsigned long start, unsigned long end); 30 - 31 - extern void cpu_cache_wbinval_page(unsigned long page, int flushi); 32 - extern void cpu_cache_wbinval_range(unsigned long start, 33 - unsigned long end, int flushi); 34 - extern void cpu_cache_wbinval_range_check(struct vm_area_struct *vma, 35 - unsigned long start, 36 - unsigned long end, bool flushi, 37 - bool wbd); 38 - 39 - extern void cpu_dma_wb_range(unsigned long start, unsigned long end); 40 - extern void cpu_dma_inval_range(unsigned long start, unsigned long end); 41 - extern void cpu_dma_wbinval_range(unsigned long start, unsigned long end); 42 - 43 - #endif /* __KERNEL__ */ 44 - #endif /* __NDS32_PROCFNS_H__ */
-104
arch/nds32/include/asm/processor.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 2 - // Copyright (C) 2005-2017 Andes Technology Corporation 3 - 4 - #ifndef __ASM_NDS32_PROCESSOR_H 5 - #define __ASM_NDS32_PROCESSOR_H 6 - 7 - #ifdef __KERNEL__ 8 - 9 - #include <asm/ptrace.h> 10 - #include <asm/types.h> 11 - #include <asm/sigcontext.h> 12 - 13 - #define KERNEL_STACK_SIZE PAGE_SIZE 14 - #define STACK_TOP TASK_SIZE 15 - #define STACK_TOP_MAX TASK_SIZE 16 - 17 - struct cpu_context { 18 - unsigned long r6; 19 - unsigned long r7; 20 - unsigned long r8; 21 - unsigned long r9; 22 - unsigned long r10; 23 - unsigned long r11; 24 - unsigned long r12; 25 - unsigned long r13; 26 - unsigned long r14; 27 - unsigned long fp; 28 - unsigned long pc; 29 - unsigned long sp; 30 - }; 31 - 32 - struct thread_struct { 33 - struct cpu_context cpu_context; /* cpu context */ 34 - /* fault info */ 35 - unsigned long address; 36 - unsigned long trap_no; 37 - unsigned long error_code; 38 - 39 - struct fpu_struct fpu; 40 - }; 41 - 42 - #define INIT_THREAD { } 43 - 44 - #ifdef __NDS32_EB__ 45 - #define PSW_DE PSW_mskBE 46 - #else 47 - #define PSW_DE 0x0 48 - #endif 49 - 50 - #ifdef CONFIG_WBNA 51 - #define PSW_valWBNA PSW_mskWBNA 52 - #else 53 - #define PSW_valWBNA 0x0 54 - #endif 55 - 56 - #ifdef CONFIG_HWZOL 57 - #define PSW_valINIT (PSW_CPL_ANY | PSW_mskAEN | PSW_valWBNA | PSW_mskDT | PSW_mskIT | PSW_DE | PSW_mskGIE) 58 - #else 59 - #define PSW_valINIT (PSW_CPL_ANY | PSW_valWBNA | PSW_mskDT | PSW_mskIT | PSW_DE | PSW_mskGIE) 60 - #endif 61 - 62 - #define start_thread(regs,pc,stack) \ 63 - ({ \ 64 - memzero(regs, sizeof(struct pt_regs)); \ 65 - forget_syscall(regs); \ 66 - regs->ipsw = PSW_valINIT; \ 67 - regs->ir0 = (PSW_CPL_ANY | PSW_valWBNA | PSW_mskDT | PSW_mskIT | PSW_DE | PSW_SYSTEM | PSW_INTL_1); \ 68 - regs->ipc = pc; \ 69 - regs->sp = stack; \ 70 - }) 71 - 72 - /* Forward declaration, a strange C thing */ 73 - struct task_struct; 74 - 75 - /* Free all resources held by a thread. 
*/ 76 - #define release_thread(thread) do { } while(0) 77 - #if IS_ENABLED(CONFIG_FPU) 78 - #if !IS_ENABLED(CONFIG_UNLAZU_FPU) 79 - extern struct task_struct *last_task_used_math; 80 - #endif 81 - #endif 82 - 83 - /* Prepare to copy thread state - unlazy all lazy status */ 84 - #define prepare_to_copy(tsk) do { } while (0) 85 - 86 - unsigned long __get_wchan(struct task_struct *p); 87 - 88 - #define cpu_relax() barrier() 89 - 90 - #define task_pt_regs(task) \ 91 - ((struct pt_regs *) (task_stack_page(task) + THREAD_SIZE \ 92 - - 8) - 1) 93 - 94 - /* 95 - * Create a new kernel thread 96 - */ 97 - extern int kernel_thread(int (*fn) (void *), void *arg, unsigned long flags); 98 - 99 - #define KSTK_EIP(tsk) instruction_pointer(task_pt_regs(tsk)) 100 - #define KSTK_ESP(tsk) user_stack_pointer(task_pt_regs(tsk)) 101 - 102 - #endif 103 - 104 - #endif /* __ASM_NDS32_PROCESSOR_H */
-77
arch/nds32/include/asm/ptrace.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 2 - // Copyright (C) 2005-2017 Andes Technology Corporation 3 - 4 - #ifndef __ASM_NDS32_PTRACE_H 5 - #define __ASM_NDS32_PTRACE_H 6 - 7 - #include <uapi/asm/ptrace.h> 8 - 9 - /* 10 - * If pt_regs.syscallno == NO_SYSCALL, then the thread is not executing 11 - * a syscall -- i.e., its most recent entry into the kernel from 12 - * userspace was not via syscall, or otherwise a tracer cancelled the 13 - * syscall. 14 - * 15 - * This must have the value -1, for ABI compatibility with ptrace etc. 16 - */ 17 - #define NO_SYSCALL (-1) 18 - #ifndef __ASSEMBLY__ 19 - #include <linux/types.h> 20 - 21 - struct pt_regs { 22 - union { 23 - struct user_pt_regs user_regs; 24 - struct { 25 - long uregs[26]; 26 - long fp; 27 - long gp; 28 - long lp; 29 - long sp; 30 - long ipc; 31 - #if defined(CONFIG_HWZOL) 32 - long lb; 33 - long le; 34 - long lc; 35 - #else 36 - long dummy[3]; 37 - #endif 38 - long syscallno; 39 - }; 40 - }; 41 - long orig_r0; 42 - long ir0; 43 - long ipsw; 44 - long pipsw; 45 - long pipc; 46 - long pp0; 47 - long pp1; 48 - long fucop_ctl; 49 - long osp; 50 - }; 51 - 52 - static inline bool in_syscall(struct pt_regs const *regs) 53 - { 54 - return regs->syscallno != NO_SYSCALL; 55 - } 56 - 57 - static inline void forget_syscall(struct pt_regs *regs) 58 - { 59 - regs->syscallno = NO_SYSCALL; 60 - } 61 - static inline unsigned long regs_return_value(struct pt_regs *regs) 62 - { 63 - return regs->uregs[0]; 64 - } 65 - extern void show_regs(struct pt_regs *); 66 - /* Avoid circular header include via sched.h */ 67 - struct task_struct; 68 - 69 - #define arch_has_single_step() (1) 70 - #define user_mode(regs) (((regs)->ipsw & PSW_mskPOM) == 0) 71 - #define interrupts_enabled(regs) (!!((regs)->ipsw & PSW_mskGIE)) 72 - #define user_stack_pointer(regs) ((regs)->sp) 73 - #define instruction_pointer(regs) ((regs)->ipc) 74 - #define profile_pc(regs) instruction_pointer(regs) 75 - 76 - #endif /* __ASSEMBLY__ */ 77 - #endif
-158
arch/nds32/include/asm/sfp-machine.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 2 - /* Copyright (C) 2005-2018 Andes Technology Corporation */ 3 - 4 - #include <asm/bitfield.h> 5 - 6 - #define _FP_W_TYPE_SIZE 32 7 - #define _FP_W_TYPE unsigned long 8 - #define _FP_WS_TYPE signed long 9 - #define _FP_I_TYPE long 10 - 11 - #define __ll_B ((UWtype) 1 << (W_TYPE_SIZE / 2)) 12 - #define __ll_lowpart(t) ((UWtype) (t) & (__ll_B - 1)) 13 - #define __ll_highpart(t) ((UWtype) (t) >> (W_TYPE_SIZE / 2)) 14 - 15 - #define _FP_MUL_MEAT_S(R, X, Y) \ 16 - _FP_MUL_MEAT_1_wide(_FP_WFRACBITS_S, R, X, Y, umul_ppmm) 17 - #define _FP_MUL_MEAT_D(R, X, Y) \ 18 - _FP_MUL_MEAT_2_wide(_FP_WFRACBITS_D, R, X, Y, umul_ppmm) 19 - #define _FP_MUL_MEAT_Q(R, X, Y) \ 20 - _FP_MUL_MEAT_4_wide(_FP_WFRACBITS_Q, R, X, Y, umul_ppmm) 21 - 22 - #define _FP_MUL_MEAT_DW_S(R, X, Y) \ 23 - _FP_MUL_MEAT_DW_1_wide(_FP_WFRACBITS_S, R, X, Y, umul_ppmm) 24 - #define _FP_MUL_MEAT_DW_D(R, X, Y) \ 25 - _FP_MUL_MEAT_DW_2_wide(_FP_WFRACBITS_D, R, X, Y, umul_ppmm) 26 - 27 - #define _FP_DIV_MEAT_S(R, X, Y) _FP_DIV_MEAT_1_udiv_norm(S, R, X, Y) 28 - #define _FP_DIV_MEAT_D(R, X, Y) _FP_DIV_MEAT_2_udiv(D, R, X, Y) 29 - 30 - #define _FP_NANFRAC_S ((_FP_QNANBIT_S << 1) - 1) 31 - #define _FP_NANFRAC_D ((_FP_QNANBIT_D << 1) - 1), -1 32 - #define _FP_NANFRAC_Q ((_FP_QNANBIT_Q << 1) - 1), -1, -1, -1 33 - #define _FP_NANSIGN_S 0 34 - #define _FP_NANSIGN_D 0 35 - #define _FP_NANSIGN_Q 0 36 - 37 - #define _FP_KEEPNANFRACP 1 38 - #define _FP_QNANNEGATEDP 0 39 - 40 - #define _FP_CHOOSENAN(fs, wc, R, X, Y, OP) \ 41 - do { \ 42 - if ((_FP_FRAC_HIGH_RAW_##fs(X) & _FP_QNANBIT_##fs) \ 43 - && !(_FP_FRAC_HIGH_RAW_##fs(Y) & _FP_QNANBIT_##fs)) { \ 44 - R##_s = Y##_s; \ 45 - _FP_FRAC_COPY_##wc(R, Y); \ 46 - } else { \ 47 - R##_s = X##_s; \ 48 - _FP_FRAC_COPY_##wc(R, X); \ 49 - } \ 50 - R##_c = FP_CLS_NAN; \ 51 - } while (0) 52 - 53 - #define __FPU_FPCSR (current->thread.fpu.fpcsr) 54 - 55 - /* Obtain the current rounding mode. 
*/ 56 - #define FP_ROUNDMODE \ 57 - ({ \ 58 - __FPU_FPCSR & FPCSR_mskRM; \ 59 - }) 60 - 61 - #define FP_RND_NEAREST 0 62 - #define FP_RND_PINF 1 63 - #define FP_RND_MINF 2 64 - #define FP_RND_ZERO 3 65 - 66 - #define FP_EX_INVALID FPCSR_mskIVO 67 - #define FP_EX_DIVZERO FPCSR_mskDBZ 68 - #define FP_EX_OVERFLOW FPCSR_mskOVF 69 - #define FP_EX_UNDERFLOW FPCSR_mskUDF 70 - #define FP_EX_INEXACT FPCSR_mskIEX 71 - 72 - #define SF_CEQ 2 73 - #define SF_CLT 1 74 - #define SF_CGT 3 75 - #define SF_CUN 4 76 - 77 - #include <asm/byteorder.h> 78 - 79 - #ifdef __BIG_ENDIAN__ 80 - #define __BYTE_ORDER __BIG_ENDIAN 81 - #define __LITTLE_ENDIAN 0 82 - #else 83 - #define __BYTE_ORDER __LITTLE_ENDIAN 84 - #define __BIG_ENDIAN 0 85 - #endif 86 - 87 - #define abort() do { } while (0) 88 - #define umul_ppmm(w1, w0, u, v) \ 89 - do { \ 90 - UWtype __x0, __x1, __x2, __x3; \ 91 - UHWtype __ul, __vl, __uh, __vh; \ 92 - \ 93 - __ul = __ll_lowpart(u); \ 94 - __uh = __ll_highpart(u); \ 95 - __vl = __ll_lowpart(v); \ 96 - __vh = __ll_highpart(v); \ 97 - \ 98 - __x0 = (UWtype) __ul * __vl; \ 99 - __x1 = (UWtype) __ul * __vh; \ 100 - __x2 = (UWtype) __uh * __vl; \ 101 - __x3 = (UWtype) __uh * __vh; \ 102 - \ 103 - __x1 += __ll_highpart(__x0); \ 104 - __x1 += __x2; \ 105 - if (__x1 < __x2) \ 106 - __x3 += __ll_B; \ 107 - \ 108 - (w1) = __x3 + __ll_highpart(__x1); \ 109 - (w0) = __ll_lowpart(__x1) * __ll_B + __ll_lowpart(__x0); \ 110 - } while (0) 111 - 112 - #define add_ssaaaa(sh, sl, ah, al, bh, bl) \ 113 - do { \ 114 - UWtype __x; \ 115 - __x = (al) + (bl); \ 116 - (sh) = (ah) + (bh) + (__x < (al)); \ 117 - (sl) = __x; \ 118 - } while (0) 119 - 120 - #define sub_ddmmss(sh, sl, ah, al, bh, bl) \ 121 - do { \ 122 - UWtype __x; \ 123 - __x = (al) - (bl); \ 124 - (sh) = (ah) - (bh) - (__x > (al)); \ 125 - (sl) = __x; \ 126 - } while (0) 127 - 128 - #define udiv_qrnnd(q, r, n1, n0, d) \ 129 - do { \ 130 - UWtype __d1, __d0, __q1, __q0, __r1, __r0, __m; \ 131 - __d1 = __ll_highpart(d); \ 132 - __d0 = 
__ll_lowpart(d); \ 133 - \ 134 - __r1 = (n1) % __d1; \ 135 - __q1 = (n1) / __d1; \ 136 - __m = (UWtype) __q1 * __d0; \ 137 - __r1 = __r1 * __ll_B | __ll_highpart(n0); \ 138 - if (__r1 < __m) { \ 139 - __q1--, __r1 += (d); \ 140 - if (__r1 >= (d)) \ 141 - if (__r1 < __m) \ 142 - __q1--, __r1 += (d); \ 143 - } \ 144 - __r1 -= __m; \ 145 - __r0 = __r1 % __d1; \ 146 - __q0 = __r1 / __d1; \ 147 - __m = (UWtype) __q0 * __d0; \ 148 - __r0 = __r0 * __ll_B | __ll_lowpart(n0); \ 149 - if (__r0 < __m) { \ 150 - __q0--, __r0 += (d); \ 151 - if (__r0 >= (d)) \ 152 - if (__r0 < __m) \ 153 - __q0--, __r0 += (d); \ 154 - } \ 155 - __r0 -= __m; \ 156 - (q) = (UWtype) __q1 * __ll_B | __q0; \ 157 - (r) = __r0; \ 158 - } while (0)
-19
arch/nds32/include/asm/shmparam.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 2 - // Copyright (C) 2005-2017 Andes Technology Corporation 3 - 4 - #ifndef _ASMNDS32_SHMPARAM_H 5 - #define _ASMNDS32_SHMPARAM_H 6 - 7 - /* 8 - * This should be the size of the virtually indexed cache/ways, 9 - * whichever is greater since the cache aliases every size/ways 10 - * bytes. 11 - */ 12 - #define SHMLBA (4 * SZ_8K) /* attach addr a multiple of this */ 13 - 14 - /* 15 - * Enforce SHMLBA in shmat 16 - */ 17 - #define __ARCH_FORCE_SHMLBA 18 - 19 - #endif /* _ASMNDS32_SHMPARAM_H */
-39
arch/nds32/include/asm/stacktrace.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 2 - /* Copyright (C) 2008-2018 Andes Technology Corporation */ 3 - 4 - #ifndef __ASM_STACKTRACE_H 5 - #define __ASM_STACKTRACE_H 6 - 7 - /* Kernel callchain */ 8 - struct stackframe { 9 - unsigned long fp; 10 - unsigned long sp; 11 - unsigned long lp; 12 - }; 13 - 14 - /* 15 - * struct frame_tail: User callchain 16 - * IMPORTANT: 17 - * This struct is used for call-stack walking, 18 - * the order and types matters. 19 - * Do not use array, it only stores sizeof(pointer) 20 - * 21 - * The details can refer to arch/arm/kernel/perf_event.c 22 - */ 23 - struct frame_tail { 24 - unsigned long stack_fp; 25 - unsigned long stack_lp; 26 - }; 27 - 28 - /* For User callchain with optimize for size */ 29 - struct frame_tail_opt_size { 30 - unsigned long stack_r6; 31 - unsigned long stack_fp; 32 - unsigned long stack_gp; 33 - unsigned long stack_lp; 34 - }; 35 - 36 - extern void 37 - get_real_ret_addr(unsigned long *addr, struct task_struct *tsk, int *graph); 38 - 39 - #endif /* __ASM_STACKTRACE_H */
-17
arch/nds32/include/asm/string.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 2 - // Copyright (C) 2005-2017 Andes Technology Corporation 3 - 4 - #ifndef __ASM_NDS32_STRING_H 5 - #define __ASM_NDS32_STRING_H 6 - 7 - #define __HAVE_ARCH_MEMCPY 8 - extern void *memcpy(void *, const void *, __kernel_size_t); 9 - 10 - #define __HAVE_ARCH_MEMMOVE 11 - extern void *memmove(void *, const void *, __kernel_size_t); 12 - 13 - #define __HAVE_ARCH_MEMSET 14 - extern void *memset(void *, int, __kernel_size_t); 15 - 16 - extern void *memzero(void *ptr, __kernel_size_t n); 17 - #endif
-11
arch/nds32/include/asm/suspend.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 2 - // Copyright (C) 2008-2017 Andes Technology Corporation 3 - 4 - #ifndef __ASM_NDS32_SUSPEND_H 5 - #define __ASM_NDS32_SUSPEND_H 6 - 7 - extern void suspend2ram(void); 8 - extern void cpu_resume(void); 9 - extern unsigned long wake_mask; 10 - 11 - #endif
-35
arch/nds32/include/asm/swab.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 2 - // Copyright (C) 2005-2017 Andes Technology Corporation 3 - 4 - #ifndef __NDS32_SWAB_H__ 5 - #define __NDS32_SWAB_H__ 6 - 7 - #include <linux/types.h> 8 - #include <linux/compiler.h> 9 - 10 - static __inline__ __attribute_const__ __u32 ___arch__swab32(__u32 x) 11 - { 12 - __asm__("wsbh %0, %0\n\t" /* word swap byte within halfword */ 13 - "rotri %0, %0, #16\n" 14 - :"=r"(x) 15 - :"0"(x)); 16 - return x; 17 - } 18 - 19 - static __inline__ __attribute_const__ __u16 ___arch__swab16(__u16 x) 20 - { 21 - __asm__("wsbh %0, %0\n" /* word swap byte within halfword */ 22 - :"=r"(x) 23 - :"0"(x)); 24 - return x; 25 - } 26 - 27 - #define __arch_swab32(x) ___arch__swab32(x) 28 - #define __arch_swab16(x) ___arch__swab16(x) 29 - 30 - #if !defined(__STRICT_ANSI__) || defined(__KERNEL__) 31 - #define __BYTEORDER_HAS_U64__ 32 - #define __SWAB_64_THRU_32__ 33 - #endif 34 - 35 - #endif /* __NDS32_SWAB_H__ */
-142
arch/nds32/include/asm/syscall.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 2 - // Copyright (C) 2008-2009 Red Hat, Inc. All rights reserved. 3 - // Copyright (C) 2005-2017 Andes Technology Corporation 4 - 5 - #ifndef _ASM_NDS32_SYSCALL_H 6 - #define _ASM_NDS32_SYSCALL_H 1 7 - 8 - #include <uapi/linux/audit.h> 9 - #include <linux/err.h> 10 - struct task_struct; 11 - struct pt_regs; 12 - 13 - /** 14 - * syscall_get_nr - find what system call a task is executing 15 - * @task: task of interest, must be blocked 16 - * @regs: task_pt_regs() of @task 17 - * 18 - * If @task is executing a system call or is at system call 19 - * tracing about to attempt one, returns the system call number. 20 - * If @task is not executing a system call, i.e. it's blocked 21 - * inside the kernel for a fault or signal, returns -1. 22 - * 23 - * Note this returns int even on 64-bit machines. Only 32 bits of 24 - * system call number can be meaningful. If the actual arch value 25 - * is 64 bits, this truncates to 32 bits so 0xffffffff means -1. 26 - * 27 - * It's only valid to call this when @task is known to be blocked. 28 - */ 29 - static inline int 30 - syscall_get_nr(struct task_struct *task, struct pt_regs *regs) 31 - { 32 - return regs->syscallno; 33 - } 34 - 35 - /** 36 - * syscall_rollback - roll back registers after an aborted system call 37 - * @task: task of interest, must be in system call exit tracing 38 - * @regs: task_pt_regs() of @task 39 - * 40 - * It's only valid to call this when @task is stopped for system 41 - * call exit tracing (due to TIF_SYSCALL_TRACE or TIF_SYSCALL_AUDIT), 42 - * after tracehook_report_syscall_entry() returned nonzero to prevent 43 - * the system call from taking place. 44 - * 45 - * This rolls back the register state in @regs so it's as if the 46 - * system call instruction was a no-op. The registers containing 47 - * the system call number and arguments are as they were before the 48 - * system call instruction. 
This may not be the same as what the 49 - * register state looked like at system call entry tracing. 50 - */ 51 - static inline void 52 - syscall_rollback(struct task_struct *task, struct pt_regs *regs) 53 - { 54 - regs->uregs[0] = regs->orig_r0; 55 - } 56 - 57 - /** 58 - * syscall_get_error - check result of traced system call 59 - * @task: task of interest, must be blocked 60 - * @regs: task_pt_regs() of @task 61 - * 62 - * Returns 0 if the system call succeeded, or -ERRORCODE if it failed. 63 - * 64 - * It's only valid to call this when @task is stopped for tracing on exit 65 - * from a system call, due to %TIF_SYSCALL_TRACE or %TIF_SYSCALL_AUDIT. 66 - */ 67 - static inline long 68 - syscall_get_error(struct task_struct *task, struct pt_regs *regs) 69 - { 70 - unsigned long error = regs->uregs[0]; 71 - return IS_ERR_VALUE(error) ? error : 0; 72 - } 73 - 74 - /** 75 - * syscall_get_return_value - get the return value of a traced system call 76 - * @task: task of interest, must be blocked 77 - * @regs: task_pt_regs() of @task 78 - * 79 - * Returns the return value of the successful system call. 80 - * This value is meaningless if syscall_get_error() returned nonzero. 81 - * 82 - * It's only valid to call this when @task is stopped for tracing on exit 83 - * from a system call, due to %TIF_SYSCALL_TRACE or %TIF_SYSCALL_AUDIT. 84 - */ 85 - static inline long 86 - syscall_get_return_value(struct task_struct *task, struct pt_regs *regs) 87 - { 88 - return regs->uregs[0]; 89 - } 90 - 91 - /** 92 - * syscall_set_return_value - change the return value of a traced system call 93 - * @task: task of interest, must be blocked 94 - * @regs: task_pt_regs() of @task 95 - * @error: negative error code, or zero to indicate success 96 - * @val: user return value if @error is zero 97 - * 98 - * This changes the results of the system call that user mode will see. 99 - * If @error is zero, the user sees a successful system call with a 100 - * return value of @val. 
If @error is nonzero, it's a negated errno 101 - * code; the user sees a failed system call with this errno code. 102 - * 103 - * It's only valid to call this when @task is stopped for tracing on exit 104 - * from a system call, due to %TIF_SYSCALL_TRACE or %TIF_SYSCALL_AUDIT. 105 - */ 106 - static inline void 107 - syscall_set_return_value(struct task_struct *task, struct pt_regs *regs, 108 - int error, long val) 109 - { 110 - regs->uregs[0] = (long)error ? error : val; 111 - } 112 - 113 - /** 114 - * syscall_get_arguments - extract system call parameter values 115 - * @task: task of interest, must be blocked 116 - * @regs: task_pt_regs() of @task 117 - * @args: array filled with argument values 118 - * 119 - * Fetches 6 arguments to the system call (from 0 through 5). The first 120 - * argument is stored in @args[0], and so on. 121 - * 122 - * It's only valid to call this when @task is stopped for tracing on 123 - * entry to a system call, due to %TIF_SYSCALL_TRACE or %TIF_SYSCALL_AUDIT. 124 - */ 125 - #define SYSCALL_MAX_ARGS 6 126 - static inline void 127 - syscall_get_arguments(struct task_struct *task, struct pt_regs *regs, 128 - unsigned long *args) 129 - { 130 - args[0] = regs->orig_r0; 131 - args++; 132 - memcpy(args, &regs->uregs[0] + 1, 5 * sizeof(args[0])); 133 - } 134 - 135 - static inline int 136 - syscall_get_arch(struct task_struct *task) 137 - { 138 - return IS_ENABLED(CONFIG_CPU_BIG_ENDIAN) 139 - ? AUDIT_ARCH_NDS32BE : AUDIT_ARCH_NDS32; 140 - } 141 - 142 - #endif /* _ASM_NDS32_SYSCALL_H */
-14
arch/nds32/include/asm/syscalls.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 2 - // Copyright (C) 2005-2017 Andes Technology Corporation 3 - 4 - #ifndef __ASM_NDS32_SYSCALLS_H 5 - #define __ASM_NDS32_SYSCALLS_H 6 - 7 - asmlinkage long sys_cacheflush(unsigned long addr, unsigned long len, unsigned int op); 8 - asmlinkage long sys_fadvise64_64_wrapper(int fd, int advice, loff_t offset, loff_t len); 9 - asmlinkage long sys_rt_sigreturn_wrapper(void); 10 - asmlinkage long sys_fp_udfiex_crtl(int cmd, int act); 11 - 12 - #include <asm-generic/syscalls.h> 13 - 14 - #endif /* __ASM_NDS32_SYSCALLS_H */
-72
arch/nds32/include/asm/thread_info.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 2 - // Copyright (C) 2005-2017 Andes Technology Corporation 3 - 4 - #ifndef __ASM_NDS32_THREAD_INFO_H 5 - #define __ASM_NDS32_THREAD_INFO_H 6 - 7 - #ifdef __KERNEL__ 8 - 9 - #define THREAD_SIZE_ORDER (1) 10 - #define THREAD_SIZE (PAGE_SIZE << THREAD_SIZE_ORDER) 11 - 12 - #ifndef __ASSEMBLY__ 13 - 14 - struct task_struct; 15 - 16 - #include <asm/ptrace.h> 17 - #include <asm/types.h> 18 - 19 - /* 20 - * low level task data that entry.S needs immediate access to. 21 - * __switch_to() assumes cpu_context follows immediately after cpu_domain. 22 - */ 23 - struct thread_info { 24 - unsigned long flags; /* low level flags */ 25 - __s32 preempt_count; /* 0 => preemptable, <0 => bug */ 26 - }; 27 - #define INIT_THREAD_INFO(tsk) \ 28 - { \ 29 - .preempt_count = INIT_PREEMPT_COUNT, \ 30 - } 31 - #define thread_saved_pc(tsk) ((unsigned long)(tsk->thread.cpu_context.pc)) 32 - #define thread_saved_fp(tsk) ((unsigned long)(tsk->thread.cpu_context.fp)) 33 - #endif 34 - 35 - /* 36 - * thread information flags: 37 - * TIF_SYSCALL_TRACE - syscall trace active 38 - * TIF_SIGPENDING - signal pending 39 - * TIF_NEED_RESCHED - rescheduling necessary 40 - * TIF_NOTIFY_RESUME - callback before returning to user 41 - * TIF_POLLING_NRFLAG - true if poll_idle() is polling TIF_NEED_RESCHED 42 - */ 43 - #define TIF_SIGPENDING 1 44 - #define TIF_NEED_RESCHED 2 45 - #define TIF_SINGLESTEP 3 46 - #define TIF_NOTIFY_RESUME 4 /* callback before returning to user */ 47 - #define TIF_NOTIFY_SIGNAL 5 /* signal notifications exist */ 48 - #define TIF_SYSCALL_TRACE 8 49 - #define TIF_POLLING_NRFLAG 17 50 - #define TIF_MEMDIE 18 51 - #define TIF_FREEZE 19 52 - #define TIF_RESTORE_SIGMASK 20 53 - 54 - #define _TIF_SIGPENDING (1 << TIF_SIGPENDING) 55 - #define _TIF_NEED_RESCHED (1 << TIF_NEED_RESCHED) 56 - #define _TIF_NOTIFY_RESUME (1 << TIF_NOTIFY_RESUME) 57 - #define _TIF_NOTIFY_SIGNAL (1 << TIF_NOTIFY_SIGNAL) 58 - #define _TIF_SINGLESTEP (1 << 
TIF_SINGLESTEP) 59 - #define _TIF_SYSCALL_TRACE (1 << TIF_SYSCALL_TRACE) 60 - #define _TIF_POLLING_NRFLAG (1 << TIF_POLLING_NRFLAG) 61 - #define _TIF_FREEZE (1 << TIF_FREEZE) 62 - #define _TIF_RESTORE_SIGMASK (1 << TIF_RESTORE_SIGMASK) 63 - 64 - /* 65 - * Change these and you break ASM code in entry-common.S 66 - */ 67 - #define _TIF_WORK_MASK 0x000000ff 68 - #define _TIF_WORK_SYSCALL_ENTRY (_TIF_SYSCALL_TRACE | _TIF_SINGLESTEP) 69 - #define _TIF_WORK_SYSCALL_LEAVE (_TIF_SYSCALL_TRACE | _TIF_SINGLESTEP) 70 - 71 - #endif /* __KERNEL__ */ 72 - #endif /* __ASM_NDS32_THREAD_INFO_H */
-11
arch/nds32/include/asm/tlb.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 2 - // Copyright (C) 2005-2017 Andes Technology Corporation 3 - 4 - #ifndef __ASMNDS32_TLB_H 5 - #define __ASMNDS32_TLB_H 6 - 7 - #include <asm-generic/tlb.h> 8 - 9 - #define __pte_free_tlb(tlb, pte, addr) pte_free((tlb)->mm, pte) 10 - 11 - #endif
-46
arch/nds32/include/asm/tlbflush.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 2 - // Copyright (C) 2005-2017 Andes Technology Corporation 3 - 4 - #ifndef _ASMNDS32_TLBFLUSH_H 5 - #define _ASMNDS32_TLBFLUSH_H 6 - 7 - #include <linux/spinlock.h> 8 - #include <linux/mm.h> 9 - #include <nds32_intrinsic.h> 10 - 11 - static inline void local_flush_tlb_all(void) 12 - { 13 - __nds32__tlbop_flua(); 14 - __nds32__isb(); 15 - } 16 - 17 - static inline void local_flush_tlb_mm(struct mm_struct *mm) 18 - { 19 - __nds32__tlbop_flua(); 20 - __nds32__isb(); 21 - } 22 - 23 - static inline void local_flush_tlb_kernel_range(unsigned long start, 24 - unsigned long end) 25 - { 26 - while (start < end) { 27 - __nds32__tlbop_inv(start); 28 - __nds32__isb(); 29 - start += PAGE_SIZE; 30 - } 31 - } 32 - 33 - void local_flush_tlb_range(struct vm_area_struct *vma, 34 - unsigned long start, unsigned long end); 35 - void local_flush_tlb_page(struct vm_area_struct *vma, unsigned long addr); 36 - 37 - #define flush_tlb_all local_flush_tlb_all 38 - #define flush_tlb_mm local_flush_tlb_mm 39 - #define flush_tlb_range local_flush_tlb_range 40 - #define flush_tlb_page local_flush_tlb_page 41 - #define flush_tlb_kernel_range local_flush_tlb_kernel_range 42 - 43 - void update_mmu_cache(struct vm_area_struct *vma, 44 - unsigned long address, pte_t * pte); 45 - 46 - #endif
-282
arch/nds32/include/asm/uaccess.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 2 - // Copyright (C) 2005-2017 Andes Technology Corporation 3 - 4 - #ifndef _ASMANDES_UACCESS_H 5 - #define _ASMANDES_UACCESS_H 6 - 7 - /* 8 - * User space memory access functions 9 - */ 10 - #include <linux/sched.h> 11 - #include <asm/errno.h> 12 - #include <asm/memory.h> 13 - #include <asm/types.h> 14 - #include <asm-generic/access_ok.h> 15 - 16 - #define __asmeq(x, y) ".ifnc " x "," y " ; .err ; .endif\n\t" 17 - 18 - /* 19 - * The exception table consists of pairs of addresses: the first is the 20 - * address of an instruction that is allowed to fault, and the second is 21 - * the address at which the program should continue. No registers are 22 - * modified, so it is entirely up to the continuation code to figure out 23 - * what to do. 24 - * 25 - * All the routines below use bits of fixup code that are out of line 26 - * with the main instruction path. This means when everything is well, 27 - * we don't even have to jump over them. Further, they do not intrude 28 - * on our cache or tlb entries. 29 - */ 30 - 31 - struct exception_table_entry { 32 - unsigned long insn, fixup; 33 - }; 34 - 35 - extern int fixup_exception(struct pt_regs *regs); 36 - 37 - /* 38 - * Single-value transfer routines. They automatically use the right 39 - * size if we just have the right pointer type. Note that the functions 40 - * which read from user space (*get_*) need to take care not to leak 41 - * kernel data even if the calling code is buggy and fails to check 42 - * the return value. This means zeroing out the destination variable 43 - * or buffer on error. Normally this is done out of line by the 44 - * fixup code, but there are a few places where it intrudes on the 45 - * main code path. When we only write to user space, there is no 46 - * problem. 47 - * 48 - * The "__xxx" versions of the user access functions do not verify the 49 - * address space - it must have been done previously with a separate 50 - * "access_ok()" call. 
51 - * 52 - * The "xxx_error" versions set the third argument to EFAULT if an 53 - * error occurs, and leave it unchanged on success. Note that these 54 - * versions are void (ie, don't return a value as such). 55 - */ 56 - 57 - #define get_user(x, ptr) \ 58 - ({ \ 59 - long __gu_err = 0; \ 60 - __get_user_check((x), (ptr), __gu_err); \ 61 - __gu_err; \ 62 - }) 63 - 64 - #define __get_user_error(x, ptr, err) \ 65 - ({ \ 66 - __get_user_check((x), (ptr), (err)); \ 67 - (void)0; \ 68 - }) 69 - 70 - #define __get_user(x, ptr) \ 71 - ({ \ 72 - long __gu_err = 0; \ 73 - const __typeof__(*(ptr)) __user *__p = (ptr); \ 74 - __get_user_err((x), __p, (__gu_err)); \ 75 - __gu_err; \ 76 - }) 77 - 78 - #define __get_user_check(x, ptr, err) \ 79 - ({ \ 80 - const __typeof__(*(ptr)) __user *__p = (ptr); \ 81 - might_fault(); \ 82 - if (access_ok(__p, sizeof(*__p))) { \ 83 - __get_user_err((x), __p, (err)); \ 84 - } else { \ 85 - (x) = 0; (err) = -EFAULT; \ 86 - } \ 87 - }) 88 - 89 - #define __get_user_err(x, ptr, err) \ 90 - do { \ 91 - unsigned long __gu_val; \ 92 - __chk_user_ptr(ptr); \ 93 - switch (sizeof(*(ptr))) { \ 94 - case 1: \ 95 - __get_user_asm("lbi", __gu_val, (ptr), (err)); \ 96 - break; \ 97 - case 2: \ 98 - __get_user_asm("lhi", __gu_val, (ptr), (err)); \ 99 - break; \ 100 - case 4: \ 101 - __get_user_asm("lwi", __gu_val, (ptr), (err)); \ 102 - break; \ 103 - case 8: \ 104 - __get_user_asm_dword(__gu_val, (ptr), (err)); \ 105 - break; \ 106 - default: \ 107 - BUILD_BUG(); \ 108 - break; \ 109 - } \ 110 - (x) = (__force __typeof__(*(ptr)))__gu_val; \ 111 - } while (0) 112 - 113 - #define __get_user_asm(inst, x, addr, err) \ 114 - __asm__ __volatile__ ( \ 115 - "1: "inst" %1,[%2]\n" \ 116 - "2:\n" \ 117 - " .section .fixup,\"ax\"\n" \ 118 - " .align 2\n" \ 119 - "3: move %0, %3\n" \ 120 - " move %1, #0\n" \ 121 - " b 2b\n" \ 122 - " .previous\n" \ 123 - " .section __ex_table,\"a\"\n" \ 124 - " .align 3\n" \ 125 - " .long 1b, 3b\n" \ 126 - " .previous" \ 127 - : 
"+r" (err), "=&r" (x) \ 128 - : "r" (addr), "i" (-EFAULT) \ 129 - : "cc") 130 - 131 - #ifdef __NDS32_EB__ 132 - #define __gu_reg_oper0 "%H1" 133 - #define __gu_reg_oper1 "%L1" 134 - #else 135 - #define __gu_reg_oper0 "%L1" 136 - #define __gu_reg_oper1 "%H1" 137 - #endif 138 - 139 - #define __get_user_asm_dword(x, addr, err) \ 140 - __asm__ __volatile__ ( \ 141 - "\n1:\tlwi " __gu_reg_oper0 ",[%2]\n" \ 142 - "\n2:\tlwi " __gu_reg_oper1 ",[%2+4]\n" \ 143 - "3:\n" \ 144 - " .section .fixup,\"ax\"\n" \ 145 - " .align 2\n" \ 146 - "4: move %0, %3\n" \ 147 - " b 3b\n" \ 148 - " .previous\n" \ 149 - " .section __ex_table,\"a\"\n" \ 150 - " .align 3\n" \ 151 - " .long 1b, 4b\n" \ 152 - " .long 2b, 4b\n" \ 153 - " .previous" \ 154 - : "+r"(err), "=&r"(x) \ 155 - : "r"(addr), "i"(-EFAULT) \ 156 - : "cc") 157 - 158 - #define put_user(x, ptr) \ 159 - ({ \ 160 - long __pu_err = 0; \ 161 - __put_user_check((x), (ptr), __pu_err); \ 162 - __pu_err; \ 163 - }) 164 - 165 - #define __put_user(x, ptr) \ 166 - ({ \ 167 - long __pu_err = 0; \ 168 - __typeof__(*(ptr)) __user *__p = (ptr); \ 169 - __put_user_err((x), __p, __pu_err); \ 170 - __pu_err; \ 171 - }) 172 - 173 - #define __put_user_error(x, ptr, err) \ 174 - ({ \ 175 - __put_user_err((x), (ptr), (err)); \ 176 - (void)0; \ 177 - }) 178 - 179 - #define __put_user_check(x, ptr, err) \ 180 - ({ \ 181 - __typeof__(*(ptr)) __user *__p = (ptr); \ 182 - might_fault(); \ 183 - if (access_ok(__p, sizeof(*__p))) { \ 184 - __put_user_err((x), __p, (err)); \ 185 - } else { \ 186 - (err) = -EFAULT; \ 187 - } \ 188 - }) 189 - 190 - #define __put_user_err(x, ptr, err) \ 191 - do { \ 192 - __typeof__(*(ptr)) __pu_val = (x); \ 193 - __chk_user_ptr(ptr); \ 194 - switch (sizeof(*(ptr))) { \ 195 - case 1: \ 196 - __put_user_asm("sbi", __pu_val, (ptr), (err)); \ 197 - break; \ 198 - case 2: \ 199 - __put_user_asm("shi", __pu_val, (ptr), (err)); \ 200 - break; \ 201 - case 4: \ 202 - __put_user_asm("swi", __pu_val, (ptr), (err)); \ 203 - break; \ 204 
- case 8: \ 205 - __put_user_asm_dword(__pu_val, (ptr), (err)); \ 206 - break; \ 207 - default: \ 208 - BUILD_BUG(); \ 209 - break; \ 210 - } \ 211 - } while (0) 212 - 213 - #define __put_user_asm(inst, x, addr, err) \ 214 - __asm__ __volatile__ ( \ 215 - "1: "inst" %1,[%2]\n" \ 216 - "2:\n" \ 217 - " .section .fixup,\"ax\"\n" \ 218 - " .align 2\n" \ 219 - "3: move %0, %3\n" \ 220 - " b 2b\n" \ 221 - " .previous\n" \ 222 - " .section __ex_table,\"a\"\n" \ 223 - " .align 3\n" \ 224 - " .long 1b, 3b\n" \ 225 - " .previous" \ 226 - : "+r" (err) \ 227 - : "r" (x), "r" (addr), "i" (-EFAULT) \ 228 - : "cc") 229 - 230 - #ifdef __NDS32_EB__ 231 - #define __pu_reg_oper0 "%H2" 232 - #define __pu_reg_oper1 "%L2" 233 - #else 234 - #define __pu_reg_oper0 "%L2" 235 - #define __pu_reg_oper1 "%H2" 236 - #endif 237 - 238 - #define __put_user_asm_dword(x, addr, err) \ 239 - __asm__ __volatile__ ( \ 240 - "\n1:\tswi " __pu_reg_oper0 ",[%1]\n" \ 241 - "\n2:\tswi " __pu_reg_oper1 ",[%1+4]\n" \ 242 - "3:\n" \ 243 - " .section .fixup,\"ax\"\n" \ 244 - " .align 2\n" \ 245 - "4: move %0, %3\n" \ 246 - " b 3b\n" \ 247 - " .previous\n" \ 248 - " .section __ex_table,\"a\"\n" \ 249 - " .align 3\n" \ 250 - " .long 1b, 4b\n" \ 251 - " .long 2b, 4b\n" \ 252 - " .previous" \ 253 - : "+r"(err) \ 254 - : "r"(addr), "r"(x), "i"(-EFAULT) \ 255 - : "cc") 256 - 257 - extern unsigned long __arch_clear_user(void __user * addr, unsigned long n); 258 - extern long strncpy_from_user(char *dest, const char __user * src, long count); 259 - extern __must_check long strnlen_user(const char __user * str, long n); 260 - extern unsigned long __arch_copy_from_user(void *to, const void __user * from, 261 - unsigned long n); 262 - extern unsigned long __arch_copy_to_user(void __user * to, const void *from, 263 - unsigned long n); 264 - 265 - #define raw_copy_from_user __arch_copy_from_user 266 - #define raw_copy_to_user __arch_copy_to_user 267 - 268 - #define INLINE_COPY_FROM_USER 269 - #define INLINE_COPY_TO_USER 270 
- static inline unsigned long clear_user(void __user * to, unsigned long n) 271 - { 272 - if (access_ok(to, n)) 273 - n = __arch_clear_user(to, n); 274 - return n; 275 - } 276 - 277 - static inline unsigned long __clear_user(void __user * to, unsigned long n) 278 - { 279 - return __arch_clear_user(to, n); 280 - } 281 - 282 - #endif /* _ASMNDS32_UACCESS_H */
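The removed uaccess.h above documents two invariants of the get_user() family: the access is dispatched on sizeof(*ptr) (lbi/lhi/lwi, with BUILD_BUG() for unsupported sizes), and on a fault the exception-table fixup zeroes the destination so buggy callers cannot leak kernel data. A minimal userspace model of those two rules, for illustration only; all names are hypothetical, not kernel API:

```c
#include <stddef.h>
#include <string.h>

/* Model of __get_user_err()'s behavior: the real macro selects a load
 * instruction by sizeof(*ptr); memcpy() stands in for that load here.
 * The 'fault' flag simulates the exception-table fixup path, which
 * zeroes the destination ("move %1, #0") and reports -EFAULT. */
static int model_get_user(void *dst, const void *src, size_t size, int fault)
{
	switch (size) {
	case 1: case 2: case 4: case 8:
		break;
	default:
		return -14;		/* kernel: BUILD_BUG() at compile time */
	}
	if (fault) {			/* the fixup path */
		memset(dst, 0, size);	/* never leak stale kernel data */
		return -14;		/* -EFAULT */
	}
	memcpy(dst, src, size);		/* stands in for lbi/lhi/lwi */
	return 0;
}
```

The zeroing on the fault path is the detail the header comment stresses: callers that ignore the return value still see a deterministic value, not leftover kernel memory.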
-6
arch/nds32/include/asm/unistd.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 2 - // Copyright (C) 2005-2017 Andes Technology Corporation 3 - 4 - #define __ARCH_WANT_SYS_CLONE 5 - 6 - #include <uapi/asm/unistd.h>
-24
arch/nds32/include/asm/vdso.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 2 - /* 3 - * Copyright (C) 2005-2017 Andes Technology Corporation 4 - */ 5 - 6 - #ifndef __ASM_VDSO_H 7 - #define __ASM_VDSO_H 8 - 9 - #ifdef __KERNEL__ 10 - 11 - #ifndef __ASSEMBLY__ 12 - 13 - #include <generated/vdso-offsets.h> 14 - 15 - #define VDSO_SYMBOL(base, name) \ 16 - ({ \ 17 - (unsigned long)(vdso_offset_##name + (unsigned long)(base)); \ 18 - }) 19 - 20 - #endif /* !__ASSEMBLY__ */ 21 - 22 - #endif /* __KERNEL__ */ 23 - 24 - #endif /* __ASM_VDSO_H */
-37
arch/nds32/include/asm/vdso_datapage.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 2 - // Copyright (C) 2012 ARM Limited 3 - // Copyright (C) 2005-2017 Andes Technology Corporation 4 - #ifndef __ASM_VDSO_DATAPAGE_H 5 - #define __ASM_VDSO_DATAPAGE_H 6 - 7 - #ifdef __KERNEL__ 8 - 9 - #ifndef __ASSEMBLY__ 10 - 11 - struct vdso_data { 12 - bool cycle_count_down; /* timer cycle counter decreases with time */ 13 - u32 cycle_count_offset; /* offset of timer cycle counter register */ 14 - u32 seq_count; /* sequence count - odd during updates */ 15 - u32 xtime_coarse_sec; /* coarse time */ 16 - u32 xtime_coarse_nsec; 17 - 18 - u32 wtm_clock_sec; /* wall to monotonic offset */ 19 - u32 wtm_clock_nsec; 20 - u32 xtime_clock_sec; /* CLOCK_REALTIME - seconds */ 21 - u32 cs_mult; /* clocksource multiplier */ 22 - u32 cs_shift; /* Cycle to nanosecond divisor (power of two) */ 23 - u32 hrtimer_res; /* hrtimer resolution */ 24 - 25 - u64 cs_cycle_last; /* last cycle value */ 26 - u64 cs_mask; /* clocksource mask */ 27 - 28 - u64 xtime_clock_nsec; /* CLOCK_REALTIME sub-ns base */ 29 - u32 tz_minuteswest; /* timezone info for gettimeofday(2) */ 30 - u32 tz_dsttime; 31 - }; 32 - 33 - #endif /* !__ASSEMBLY__ */ 34 - 35 - #endif /* __KERNEL__ */ 36 - 37 - #endif /* __ASM_VDSO_DATAPAGE_H */
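The vdso_data comment above ("sequence count - odd during updates") implies the classic seqcount protocol between the kernel writer and vDSO readers: a reader retries until it observes the same even sequence value before and after reading. An illustrative, single-threaded model of that read loop (real implementations also need memory barriers; all names here are hypothetical):

```c
/* Sketch of the lockless read protocol implied by seq_count:
 * the writer makes seq_count odd while updating the page, and bumps
 * it to the next even value when done. A reader that sees an odd
 * value, or a value that changed mid-read, retries. */
struct model_vdso {
	unsigned int seq_count;		/* odd during updates */
	unsigned int xtime_coarse_sec;
};

static unsigned int model_read_coarse_sec(const struct model_vdso *v)
{
	unsigned int seq, sec;

	do {
		seq = v->seq_count;	/* snapshot before the read */
		sec = v->xtime_coarse_sec;
	} while ((seq & 1) || seq != v->seq_count);
	return sec;
}
```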
-14
arch/nds32/include/asm/vdso_timer_info.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 2 - // Copyright (C) 2005-2017 Andes Technology Corporation 3 - 4 - extern struct timer_info_t timer_info; 5 - #define EMPTY_VALUE ~(0UL) 6 - #define EMPTY_TIMER_MAPPING EMPTY_VALUE 7 - #define EMPTY_REG_OFFSET EMPTY_VALUE 8 - 9 - struct timer_info_t 10 - { 11 - bool cycle_count_down; 12 - unsigned long mapping_base; 13 - unsigned long cycle_count_reg_offset; 14 - };
-9
arch/nds32/include/asm/vermagic.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 2 - // Copyright (C) 2005-2017 Andes Technology Corporation 3 - 4 - #ifndef _ASM_VERMAGIC_H 5 - #define _ASM_VERMAGIC_H 6 - 7 - #define MODULE_ARCH_VERMAGIC "NDS32v3" 8 - 9 - #endif /* _ASM_VERMAGIC_H */
-4
arch/nds32/include/asm/vmalloc.h
··· 1 - #ifndef _ASM_NDS32_VMALLOC_H 2 - #define _ASM_NDS32_VMALLOC_H 3 - 4 - #endif /* _ASM_NDS32_VMALLOC_H */
-2
arch/nds32/include/uapi/asm/Kbuild
··· 1 - # SPDX-License-Identifier: GPL-2.0 2 - generic-y += ucontext.h
-19
arch/nds32/include/uapi/asm/auxvec.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */ 2 - // Copyright (C) 2005-2017 Andes Technology Corporation 3 - 4 - #ifndef __ASM_AUXVEC_H 5 - #define __ASM_AUXVEC_H 6 - 7 - /* 8 - * This entry gives some information about the FPU initialization 9 - * performed by the kernel. 10 - */ 11 - #define AT_FPUCW 18 /* Used FPU control word. */ 12 - 13 - 14 - /* VDSO location */ 15 - #define AT_SYSINFO_EHDR 33 16 - 17 - #define AT_VECTOR_SIZE_ARCH 1 18 - 19 - #endif
-13
arch/nds32/include/uapi/asm/byteorder.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */ 2 - // Copyright (C) 2005-2017 Andes Technology Corporation 3 - 4 - #ifndef __NDS32_BYTEORDER_H__ 5 - #define __NDS32_BYTEORDER_H__ 6 - 7 - #ifdef __NDS32_EB__ 8 - #include <linux/byteorder/big_endian.h> 9 - #else 10 - #include <linux/byteorder/little_endian.h> 11 - #endif 12 - 13 - #endif /* __NDS32_BYTEORDER_H__ */
-14
arch/nds32/include/uapi/asm/cachectl.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */ 2 - // Copyright (C) 1994, 1995, 1996 by Ralf Baechle 3 - // Copyright (C) 2005-2017 Andes Technology Corporation 4 - #ifndef _ASM_CACHECTL 5 - #define _ASM_CACHECTL 6 - 7 - /* 8 - * Options for cacheflush system call 9 - */ 10 - #define ICACHE 0 /* flush instruction cache */ 11 - #define DCACHE 1 /* writeback and flush data cache */ 12 - #define BCACHE 2 /* flush instruction cache + writeback and flush data cache */ 13 - 14 - #endif /* _ASM_CACHECTL */
-16
arch/nds32/include/uapi/asm/fp_udfiex_crtl.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */ 2 - /* Copyright (C) 2005-2019 Andes Technology Corporation */ 3 - #ifndef _FP_UDF_IEX_CRTL_H 4 - #define _FP_UDF_IEX_CRTL_H 5 - 6 - /* 7 - * The cmd list of sys_fp_udfiex_crtl() 8 - */ 9 - /* Disable UDF or IEX trap based on the content of parameter act */ 10 - #define DISABLE_UDF_IEX_TRAP 0 11 - /* Enable UDF or IEX trap based on the content of parameter act */ 12 - #define ENABLE_UDF_IEX_TRAP 1 13 - /* Get current status of UDF and IEX trap */ 14 - #define GET_UDF_IEX_TRAP 2 15 - 16 - #endif /* _FP_UDF_IEX_CRTL_H */
-11
arch/nds32/include/uapi/asm/param.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */ 2 - // Copyright (C) 2005-2017 Andes Technology Corporation 3 - 4 - #ifndef __ASM_NDS32_PARAM_H 5 - #define __ASM_NDS32_PARAM_H 6 - 7 - #define EXEC_PAGESIZE 8192 8 - 9 - #include <asm-generic/param.h> 10 - 11 - #endif /* __ASM_NDS32_PARAM_H */
-25
arch/nds32/include/uapi/asm/ptrace.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */ 2 - // Copyright (C) 2005-2017 Andes Technology Corporation 3 - 4 - #ifndef __UAPI_ASM_NDS32_PTRACE_H 5 - #define __UAPI_ASM_NDS32_PTRACE_H 6 - 7 - #ifndef __ASSEMBLY__ 8 - 9 - /* 10 - * User structures for general purpose register. 11 - */ 12 - struct user_pt_regs { 13 - long uregs[26]; 14 - long fp; 15 - long gp; 16 - long lp; 17 - long sp; 18 - long ipc; 19 - long lb; 20 - long le; 21 - long lc; 22 - long syscallno; 23 - }; 24 - #endif 25 - #endif
-84
arch/nds32/include/uapi/asm/sigcontext.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */ 2 - // Copyright (C) 2005-2017 Andes Technology Corporation 3 - 4 - #ifndef _ASMNDS32_SIGCONTEXT_H 5 - #define _ASMNDS32_SIGCONTEXT_H 6 - 7 - /* 8 - * Signal context structure - contains all info to do with the state 9 - * before the signal handler was invoked. Note: only add new entries 10 - * to the end of the structure. 11 - */ 12 - struct fpu_struct { 13 - unsigned long long fd_regs[32]; 14 - unsigned long fpcsr; 15 - /* 16 - * When CONFIG_SUPPORT_DENORMAL_ARITHMETIC is defined, kernel prevents 17 - * hardware from treating the denormalized output as an underflow case 18 - * and rounding it to a normal number. Hence kernel enables the UDF and 19 - * IEX trap in the fpcsr register to step in the calculation. 20 - * However, the UDF and IEX trap enable bit in $fpcsr also lose 21 - * their use. 22 - * 23 - * UDF_IEX_trap replaces the feature of UDF and IEX trap enable bit in 24 - * $fpcsr to control the trap of underflow and inexact. The bit field 25 - * of UDF_IEX_trap is the same as $fpcsr, 10th bit is used to enable UDF 26 - * exception trapping and 11th bit is used to enable IEX exception 27 - * trapping. 28 - * 29 - * UDF_IEX_trap is only modified through fp_udfiex_crtl syscall. 30 - * Therefore, UDF_IEX_trap needn't be saved and restored in each 31 - * context switch. 
32 - */ 33 - unsigned long UDF_IEX_trap; 34 - }; 35 - 36 - struct zol_struct { 37 - unsigned long nds32_lc; /* $LC */ 38 - unsigned long nds32_le; /* $LE */ 39 - unsigned long nds32_lb; /* $LB */ 40 - }; 41 - 42 - struct sigcontext { 43 - unsigned long trap_no; 44 - unsigned long error_code; 45 - unsigned long oldmask; 46 - unsigned long nds32_r0; 47 - unsigned long nds32_r1; 48 - unsigned long nds32_r2; 49 - unsigned long nds32_r3; 50 - unsigned long nds32_r4; 51 - unsigned long nds32_r5; 52 - unsigned long nds32_r6; 53 - unsigned long nds32_r7; 54 - unsigned long nds32_r8; 55 - unsigned long nds32_r9; 56 - unsigned long nds32_r10; 57 - unsigned long nds32_r11; 58 - unsigned long nds32_r12; 59 - unsigned long nds32_r13; 60 - unsigned long nds32_r14; 61 - unsigned long nds32_r15; 62 - unsigned long nds32_r16; 63 - unsigned long nds32_r17; 64 - unsigned long nds32_r18; 65 - unsigned long nds32_r19; 66 - unsigned long nds32_r20; 67 - unsigned long nds32_r21; 68 - unsigned long nds32_r22; 69 - unsigned long nds32_r23; 70 - unsigned long nds32_r24; 71 - unsigned long nds32_r25; 72 - unsigned long nds32_fp; /* $r28 */ 73 - unsigned long nds32_gp; /* $r29 */ 74 - unsigned long nds32_lp; /* $r30 */ 75 - unsigned long nds32_sp; /* $r31 */ 76 - unsigned long nds32_ipc; 77 - unsigned long fault_address; 78 - unsigned long used_math_flag; 79 - /* FPU Registers */ 80 - struct fpu_struct fpu; 81 - struct zol_struct zol; 82 - }; 83 - 84 - #endif
-16
arch/nds32/include/uapi/asm/unistd.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */ 2 - // Copyright (C) 2005-2017 Andes Technology Corporation 3 - 4 - #define __ARCH_WANT_STAT64 5 - #define __ARCH_WANT_SYNC_FILE_RANGE2 6 - #define __ARCH_WANT_SET_GET_RLIMIT 7 - #define __ARCH_WANT_TIME32_SYSCALLS 8 - 9 - /* Use the standard ABI for syscalls */ 10 - #include <asm-generic/unistd.h> 11 - 12 - /* Additional NDS32 specific syscalls. */ 13 - #define __NR_cacheflush (__NR_arch_specific_syscall) 14 - #define __NR_fp_udfiex_crtl (__NR_arch_specific_syscall + 1) 15 - __SYSCALL(__NR_cacheflush, sys_cacheflush) 16 - __SYSCALL(__NR_fp_udfiex_crtl, sys_fp_udfiex_crtl)
-2
arch/nds32/kernel/.gitignore
··· 1 - # SPDX-License-Identifier: GPL-2.0-only 2 - vmlinux.lds
-33
arch/nds32/kernel/Makefile
··· 1 - # SPDX-License-Identifier: GPL-2.0-only 2 - # 3 - # Makefile for the linux kernel. 4 - # 5 - 6 - CPPFLAGS_vmlinux.lds := -DTEXTADDR=$(TEXTADDR) 7 - AFLAGS_head.o := -DTEXTADDR=$(TEXTADDR) 8 - # Object file lists. 9 - 10 - obj-y := ex-entry.o ex-exit.o ex-scall.o irq.o \ 11 - process.o ptrace.o setup.o signal.o \ 12 - sys_nds32.o time.o traps.o cacheinfo.o \ 13 - dma.o syscall_table.o vdso.o 14 - 15 - obj-$(CONFIG_MODULES) += nds32_ksyms.o module.o 16 - obj-$(CONFIG_STACKTRACE) += stacktrace.o 17 - obj-$(CONFIG_FPU) += fpu.o 18 - obj-$(CONFIG_OF) += devtree.o 19 - obj-$(CONFIG_CACHE_L2) += atl2c.o 20 - obj-$(CONFIG_PERF_EVENTS) += perf_event_cpu.o 21 - obj-$(CONFIG_PM) += pm.o sleep.o 22 - extra-y := head.o vmlinux.lds 23 - 24 - CFLAGS_fpu.o += -mext-fpu-sp -mext-fpu-dp 25 - 26 - 27 - obj-y += vdso/ 28 - 29 - obj-$(CONFIG_FUNCTION_TRACER) += ftrace.o 30 - 31 - ifdef CONFIG_FUNCTION_TRACER 32 - CFLAGS_REMOVE_ftrace.o = $(CC_FLAGS_FTRACE) 33 - endif
-28
arch/nds32/kernel/asm-offsets.c
··· 1 - // SPDX-License-Identifier: GPL-2.0 2 - // Copyright (C) 2005-2017 Andes Technology Corporation 3 - 4 - #include <linux/sched.h> 5 - #include <linux/sched/task_stack.h> 6 - #include <linux/kbuild.h> 7 - #include <asm/thread_info.h> 8 - #include <asm/ptrace.h> 9 - 10 - int main(void) 11 - { 12 - DEFINE(TSK_TI_FLAGS, offsetof(struct task_struct, thread_info.flags)); 13 - DEFINE(TSK_TI_PREEMPT, 14 - offsetof(struct task_struct, thread_info.preempt_count)); 15 - DEFINE(THREAD_CPU_CONTEXT, 16 - offsetof(struct task_struct, thread.cpu_context)); 17 - DEFINE(OSP_OFFSET, offsetof(struct pt_regs, osp)); 18 - DEFINE(SP_OFFSET, offsetof(struct pt_regs, sp)); 19 - DEFINE(FUCOP_CTL_OFFSET, offsetof(struct pt_regs, fucop_ctl)); 20 - DEFINE(IPSW_OFFSET, offsetof(struct pt_regs, ipsw)); 21 - DEFINE(SYSCALLNO_OFFSET, offsetof(struct pt_regs, syscallno)); 22 - DEFINE(IPC_OFFSET, offsetof(struct pt_regs, ipc)); 23 - DEFINE(R0_OFFSET, offsetof(struct pt_regs, uregs[0])); 24 - DEFINE(R15_OFFSET, offsetof(struct pt_regs, uregs[15])); 25 - DEFINE(CLOCK_REALTIME_RES, MONOTONIC_RES_NSEC); 26 - DEFINE(CLOCK_COARSE_RES, LOW_RES_NSEC); 27 - return 0; 28 - }
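The asm-offsets.c file above uses the kbuild DEFINE() trick: offsetof() turns C struct layouts into integer constants (TSK_TI_FLAGS, R0_OFFSET, ...) that the assembly entry code, which cannot parse C headers, can use. A small model of what those constants are, using a hypothetical stand-in struct rather than the real nds32 pt_regs:

```c
#include <stddef.h>

/* Model of asm-offsets.c's output: each DEFINE(NAME, offsetof(...))
 * becomes an '#define NAME <n>' in the generated asm-offsets.h.
 * model_pt_regs is illustrative; field order mimics uregs-then-
 * specials but is not the real nds32 layout. */
struct model_pt_regs {
	unsigned long uregs[26];	/* R0_OFFSET..R25 */
	unsigned long osp;		/* OSP_OFFSET */
	unsigned long sp;		/* SP_OFFSET */
};

static size_t model_r0_offset(void)  { return offsetof(struct model_pt_regs, uregs[0]); }
static size_t model_r15_offset(void) { return offsetof(struct model_pt_regs, uregs[15]); }
static size_t model_sp_offset(void)  { return offsetof(struct model_pt_regs, sp); }
```

Because all fields share one type, the offsets fall on exact multiples of sizeof(unsigned long), which is what lets the entry code address saved registers as [$sp + R15_OFFSET].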
-65
arch/nds32/kernel/atl2c.c
··· 1 - // SPDX-License-Identifier: GPL-2.0 2 - // Copyright (C) 2005-2017 Andes Technology Corporation 3 - 4 - #include <linux/compiler.h> 5 - #include <linux/of_address.h> 6 - #include <linux/of_fdt.h> 7 - #include <linux/of_platform.h> 8 - #include <asm/l2_cache.h> 9 - 10 - void __iomem *atl2c_base; 11 - static const struct of_device_id atl2c_ids[] __initconst = { 12 - {.compatible = "andestech,atl2c",}, 13 - {} 14 - }; 15 - 16 - static int __init atl2c_of_init(void) 17 - { 18 - struct device_node *np; 19 - struct resource res; 20 - unsigned long tmp = 0; 21 - unsigned long l2set, l2way, l2clsz; 22 - 23 - if (!(__nds32__mfsr(NDS32_SR_MSC_CFG) & MSC_CFG_mskL2C)) 24 - return -ENODEV; 25 - 26 - np = of_find_matching_node(NULL, atl2c_ids); 27 - if (!np) 28 - return -ENODEV; 29 - 30 - if (of_address_to_resource(np, 0, &res)) 31 - return -ENODEV; 32 - 33 - atl2c_base = ioremap(res.start, resource_size(&res)); 34 - if (!atl2c_base) 35 - return -ENOMEM; 36 - 37 - l2set = 38 - 64 << ((L2C_R_REG(L2_CA_CONF_OFF) & L2_CA_CONF_mskL2SET) >> 39 - L2_CA_CONF_offL2SET); 40 - l2way = 41 - 1 + 42 - ((L2C_R_REG(L2_CA_CONF_OFF) & L2_CA_CONF_mskL2WAY) >> 43 - L2_CA_CONF_offL2WAY); 44 - l2clsz = 45 - 4 << ((L2C_R_REG(L2_CA_CONF_OFF) & L2_CA_CONF_mskL2CLSZ) >> 46 - L2_CA_CONF_offL2CLSZ); 47 - pr_info("L2:%luKB/%luS/%luW/%luB\n", 48 - l2set * l2way * l2clsz / 1024, l2set, l2way, l2clsz); 49 - 50 - tmp = L2C_R_REG(L2CC_PROT_OFF); 51 - tmp &= ~L2CC_PROT_mskMRWEN; 52 - L2C_W_REG(L2CC_PROT_OFF, tmp); 53 - 54 - tmp = L2C_R_REG(L2CC_SETUP_OFF); 55 - tmp &= ~L2CC_SETUP_mskPART; 56 - L2C_W_REG(L2CC_SETUP_OFF, tmp); 57 - 58 - tmp = L2C_R_REG(L2CC_CTRL_OFF); 59 - tmp |= L2CC_CTRL_mskEN; 60 - L2C_W_REG(L2CC_CTRL_OFF, tmp); 61 - 62 - return 0; 63 - } 64 - 65 - subsys_initcall(atl2c_of_init);
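The geometry decode in atl2c_of_init() above computes the L2 size from three register fields: sets = 64 << field, ways = 1 + field, line size = 4 << field. A self-contained sketch of that arithmetic, taking the already-extracted field values as parameters (the real code shifts and masks L2_CA_CONF first; those bit positions are not reproduced here):

```c
/* Model of the L2 cache geometry decode: total size in bytes is
 * sets * ways * line size, each derived from a config-register
 * field exactly as in the removed atl2c.c. */
static unsigned long model_l2_bytes(unsigned long set_field,
				    unsigned long way_field,
				    unsigned long clsz_field)
{
	unsigned long l2set  = 64UL << set_field;	/* number of sets */
	unsigned long l2way  = 1UL + way_field;		/* associativity */
	unsigned long l2clsz = 4UL << clsz_field;	/* line size, bytes */

	return l2set * l2way * l2clsz;
}
```

For example, field values (4, 7, 3) decode to 1024 sets, 8 ways, 32-byte lines: a 256 KB cache, matching the "L2:%luKB/%luS/%luW/%luB" report format.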
-49
arch/nds32/kernel/cacheinfo.c
··· 1 - // SPDX-License-Identifier: GPL-2.0 2 - // Copyright (C) 2005-2017 Andes Technology Corporation 3 - 4 - #include <linux/bitops.h> 5 - #include <linux/cacheinfo.h> 6 - #include <linux/cpu.h> 7 - 8 - static void ci_leaf_init(struct cacheinfo *this_leaf, 9 - enum cache_type type, unsigned int level) 10 - { 11 - char cache_type = (type & CACHE_TYPE_INST ? ICACHE : DCACHE); 12 - 13 - this_leaf->level = level; 14 - this_leaf->type = type; 15 - this_leaf->coherency_line_size = CACHE_LINE_SIZE(cache_type); 16 - this_leaf->number_of_sets = CACHE_SET(cache_type); 17 - this_leaf->ways_of_associativity = CACHE_WAY(cache_type); 18 - this_leaf->size = this_leaf->number_of_sets * 19 - this_leaf->coherency_line_size * this_leaf->ways_of_associativity; 20 - #if defined(CONFIG_CPU_DCACHE_WRITETHROUGH) 21 - this_leaf->attributes = CACHE_WRITE_THROUGH; 22 - #else 23 - this_leaf->attributes = CACHE_WRITE_BACK; 24 - #endif 25 - } 26 - 27 - int init_cache_level(unsigned int cpu) 28 - { 29 - struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu); 30 - 31 - /* Only 1 level and I/D cache separate. */ 32 - this_cpu_ci->num_levels = 1; 33 - this_cpu_ci->num_leaves = 2; 34 - return 0; 35 - } 36 - 37 - int populate_cache_leaves(unsigned int cpu) 38 - { 39 - unsigned int level, idx; 40 - struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu); 41 - struct cacheinfo *this_leaf = this_cpu_ci->info_list; 42 - 43 - for (idx = 0, level = 1; level <= this_cpu_ci->num_levels && 44 - idx < this_cpu_ci->num_leaves; idx++, level++) { 45 - ci_leaf_init(this_leaf++, CACHE_TYPE_DATA, level); 46 - ci_leaf_init(this_leaf++, CACHE_TYPE_INST, level); 47 - } 48 - return 0; 49 - }
-19
arch/nds32/kernel/devtree.c
··· 1 - // SPDX-License-Identifier: GPL-2.0 2 - // Copyright (C) 2005-2017 Andes Technology Corporation 3 - 4 - #include <linux/bug.h> 5 - #include <linux/printk.h> 6 - #include <linux/of_fdt.h> 7 - 8 - void __init early_init_devtree(void *params) 9 - { 10 - if (!params || !early_init_dt_scan(params)) { 11 - pr_crit("\n" 12 - "Error: invalid device tree blob at (virtual address 0x%p)\n" 13 - "\nPlease check your bootloader.", params); 14 - 15 - BUG_ON(1); 16 - } 17 - 18 - dump_stack_set_arch_desc("%s (DT)", of_flat_dt_get_machine_name()); 19 - }
-82
arch/nds32/kernel/dma.c
··· 1 - // SPDX-License-Identifier: GPL-2.0 2 - // Copyright (C) 2005-2017 Andes Technology Corporation 3 - 4 - #include <linux/types.h> 5 - #include <linux/mm.h> 6 - #include <linux/dma-map-ops.h> 7 - #include <linux/cache.h> 8 - #include <linux/highmem.h> 9 - #include <asm/cacheflush.h> 10 - #include <asm/tlbflush.h> 11 - #include <asm/proc-fns.h> 12 - 13 - static inline void cache_op(phys_addr_t paddr, size_t size, 14 - void (*fn)(unsigned long start, unsigned long end)) 15 - { 16 - struct page *page = pfn_to_page(paddr >> PAGE_SHIFT); 17 - unsigned offset = paddr & ~PAGE_MASK; 18 - size_t left = size; 19 - unsigned long start; 20 - 21 - do { 22 - size_t len = left; 23 - 24 - if (PageHighMem(page)) { 25 - void *addr; 26 - 27 - if (offset + len > PAGE_SIZE) { 28 - if (offset >= PAGE_SIZE) { 29 - page += offset >> PAGE_SHIFT; 30 - offset &= ~PAGE_MASK; 31 - } 32 - len = PAGE_SIZE - offset; 33 - } 34 - 35 - addr = kmap_atomic(page); 36 - start = (unsigned long)(addr + offset); 37 - fn(start, start + len); 38 - kunmap_atomic(addr); 39 - } else { 40 - start = (unsigned long)phys_to_virt(paddr); 41 - fn(start, start + size); 42 - } 43 - offset = 0; 44 - page++; 45 - left -= len; 46 - } while (left); 47 - } 48 - 49 - void arch_sync_dma_for_device(phys_addr_t paddr, size_t size, 50 - enum dma_data_direction dir) 51 - { 52 - switch (dir) { 53 - case DMA_FROM_DEVICE: 54 - break; 55 - case DMA_TO_DEVICE: 56 - case DMA_BIDIRECTIONAL: 57 - cache_op(paddr, size, cpu_dma_wb_range); 58 - break; 59 - default: 60 - BUG(); 61 - } 62 - } 63 - 64 - void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size, 65 - enum dma_data_direction dir) 66 - { 67 - switch (dir) { 68 - case DMA_TO_DEVICE: 69 - break; 70 - case DMA_FROM_DEVICE: 71 - case DMA_BIDIRECTIONAL: 72 - cache_op(paddr, size, cpu_dma_inval_range); 73 - break; 74 - default: 75 - BUG(); 76 - } 77 - } 78 - 79 - void arch_dma_prep_coherent(struct page *page, size_t size) 80 - { 81 - cache_op(page_to_phys(page), size, 
cpu_dma_wbinval_range); 82 - }
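The two sync hooks in dma.c above encode the standard non-coherent DMA rule: before the device reads memory (TO_DEVICE), dirty CPU cache lines are written back; after the device writes memory (FROM_DEVICE), stale lines are invalidated; BIDIRECTIONAL gets both treatments across the pair. A minimal model of just that direction-to-operation mapping (the enums and return values are illustrative stand-ins for the cpu_dma_*_range calls):

```c
/* Model of arch_sync_dma_for_{device,cpu}: which cache maintenance
 * operation each DMA direction requires on a non-coherent CPU. */
enum model_dir { MODEL_TO_DEVICE, MODEL_FROM_DEVICE, MODEL_BIDIRECTIONAL };
enum model_op  { MODEL_NONE, MODEL_WRITEBACK, MODEL_INVALIDATE };

static enum model_op model_sync_for_device(enum model_dir dir)
{
	/* cpu_dma_wb_range() in the removed code */
	return dir == MODEL_FROM_DEVICE ? MODEL_NONE : MODEL_WRITEBACK;
}

static enum model_op model_sync_for_cpu(enum model_dir dir)
{
	/* cpu_dma_inval_range() in the removed code */
	return dir == MODEL_TO_DEVICE ? MODEL_NONE : MODEL_INVALIDATE;
}
```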
-177
arch/nds32/kernel/ex-entry.S
··· 1 - // SPDX-License-Identifier: GPL-2.0 2 - // Copyright (C) 2005-2017 Andes Technology Corporation 3 - 4 - #include <linux/linkage.h> 5 - #include <asm/memory.h> 6 - #include <asm/nds32.h> 7 - #include <asm/errno.h> 8 - #include <asm/asm-offsets.h> 9 - #include <asm/page.h> 10 - #include <asm/fpu.h> 11 - 12 - #ifdef CONFIG_HWZOL 13 - .macro push_zol 14 - mfusr $r14, $LB 15 - mfusr $r15, $LE 16 - mfusr $r16, $LC 17 - .endm 18 - #endif 19 - .macro skip_save_fucop_ctl 20 - #if defined(CONFIG_FPU) 21 - skip_fucop_ctl: 22 - smw.adm $p0, [$sp], $p0, #0x1 23 - j fucop_ctl_done 24 - #endif 25 - .endm 26 - 27 - .macro save_user_regs 28 - #if defined(CONFIG_FPU) 29 - sethi $p0, hi20(has_fpu) 30 - lbsi $p0, [$p0+lo12(has_fpu)] 31 - beqz $p0, skip_fucop_ctl 32 - mfsr $p0, $FUCOP_CTL 33 - smw.adm $p0, [$sp], $p0, #0x1 34 - bclr $p0, $p0, #FUCOP_CTL_offCP0EN 35 - mtsr $p0, $FUCOP_CTL 36 - fucop_ctl_done: 37 - /* move $SP to the bottom of pt_regs */ 38 - addi $sp, $sp, -FUCOP_CTL_OFFSET 39 - #else 40 - smw.adm $sp, [$sp], $sp, #0x1 41 - /* move $SP to the bottom of pt_regs */ 42 - addi $sp, $sp, -OSP_OFFSET 43 - #endif 44 - 45 - /* push $r0 ~ $r25 */ 46 - smw.bim $r0, [$sp], $r25 47 - /* push $fp, $gp, $lp */ 48 - smw.bim $sp, [$sp], $sp, #0xe 49 - 50 - mfsr $r12, $SP_USR 51 - mfsr $r13, $IPC 52 - #ifdef CONFIG_HWZOL 53 - push_zol 54 - #endif 55 - movi $r17, -1 56 - move $r18, $r0 57 - mfsr $r19, $PSW 58 - mfsr $r20, $IPSW 59 - mfsr $r21, $P_IPSW 60 - mfsr $r22, $P_IPC 61 - mfsr $r23, $P_P0 62 - mfsr $r24, $P_P1 63 - smw.bim $r12, [$sp], $r24, #0 64 - addi $sp, $sp, -FUCOP_CTL_OFFSET 65 - 66 - /* Initialize kernel space $fp */ 67 - andi $p0, $r20, #PSW_mskPOM 68 - movi $p1, #0x0 69 - cmovz $fp, $p1, $p0 70 - 71 - andi $r16, $r19, #PSW_mskINTL 72 - slti $r17, $r16, #4 73 - bnez $r17, 1f 74 - addi $r17, $r19, #-2 75 - mtsr $r17, $PSW 76 - isb 77 - 1: 78 - /* If it was superuser mode, we don't need to update $r25 */ 79 - bnez $p0, 2f 80 - la $p0, __entry_task 81 - lw $r25, 
[$p0] 82 - 2: 83 - .endm 84 - 85 - .text 86 - 87 - /* 88 - * Exception Vector 89 - */ 90 - exception_handlers: 91 - .long unhandled_exceptions !Reset/NMI 92 - .long unhandled_exceptions !TLB fill 93 - .long do_page_fault !PTE not present 94 - .long do_dispatch_tlb_misc !TLB misc 95 - .long unhandled_exceptions !TLB VLPT 96 - .long unhandled_exceptions !Machine Error 97 - .long do_debug_trap !Debug related 98 - .long do_dispatch_general !General exception 99 - .long eh_syscall !Syscall 100 - .long asm_do_IRQ !IRQ 101 - 102 - skip_save_fucop_ctl 103 - common_exception_handler: 104 - save_user_regs 105 - mfsr $p0, $ITYPE 106 - andi $p0, $p0, #ITYPE_mskVECTOR 107 - srli $p0, $p0, #ITYPE_offVECTOR 108 - andi $p1, $p0, #NDS32_VECTOR_mskNONEXCEPTION 109 - bnez $p1, 1f 110 - sethi $lp, hi20(ret_from_exception) 111 - ori $lp, $lp, lo12(ret_from_exception) 112 - sethi $p1, hi20(exception_handlers) 113 - ori $p1, $p1, lo12(exception_handlers) 114 - lw $p1, [$p1+$p0<<2] 115 - move $r0, $p0 116 - mfsr $r1, $EVA 117 - mfsr $r2, $ITYPE 118 - move $r3, $sp 119 - mfsr $r4, $OIPC 120 - /* enable gie if it is enabled in IPSW. 
*/ 121 - mfsr $r21, $PSW 122 - andi $r20, $r20, #PSW_mskGIE /* r20 is $IPSW*/ 123 - or $r21, $r21, $r20 124 - mtsr $r21, $PSW 125 - dsb 126 - jr $p1 127 - /* syscall */ 128 - 1: 129 - addi $p1, $p0, #-NDS32_VECTOR_offEXCEPTION 130 - bnez $p1, 2f 131 - sethi $lp, hi20(ret_from_exception) 132 - ori $lp, $lp, lo12(ret_from_exception) 133 - sethi $p1, hi20(exception_handlers) 134 - ori $p1, $p1, lo12(exception_handlers) 135 - lwi $p1, [$p1+#NDS32_VECTOR_offEXCEPTION<<2] 136 - jr $p1 137 - 138 - /* interrupt */ 139 - 2: 140 - #ifdef CONFIG_TRACE_IRQFLAGS 141 - jal __trace_hardirqs_off 142 - #endif 143 - move $r0, $sp 144 - sethi $lp, hi20(ret_from_intr) 145 - ori $lp, $lp, lo12(ret_from_intr) 146 - sethi $p0, hi20(exception_handlers) 147 - ori $p0, $p0, lo12(exception_handlers) 148 - lwi $p0, [$p0+#NDS32_VECTOR_offINTERRUPT<<2] 149 - jr $p0 150 - 151 - .macro EXCEPTION_VECTOR_DEBUG 152 - .align 4 153 - mfsr $p0, $EDM_CTL 154 - andi $p0, $p0, EDM_CTL_mskV3_EDM_MODE 155 - tnez $p0, SWID_RAISE_INTERRUPT_LEVEL 156 - .endm 157 - 158 - .macro EXCEPTION_VECTOR 159 - .align 4 160 - sethi $p0, hi20(common_exception_handler) 161 - ori $p0, $p0, lo12(common_exception_handler) 162 - jral.ton $p0, $p0 163 - .endm 164 - 165 - .section ".text.init", #alloc, #execinstr 166 - .global exception_vector 167 - exception_vector: 168 - .rept 6 169 - EXCEPTION_VECTOR 170 - .endr 171 - EXCEPTION_VECTOR_DEBUG 172 - .rept 121 173 - EXCEPTION_VECTOR 174 - .endr 175 - .align 4 176 - .global exception_vector_end 177 - exception_vector_end:
-193
arch/nds32/kernel/ex-exit.S
··· 1 - // SPDX-License-Identifier: GPL-2.0 2 - // Copyright (C) 2005-2017 Andes Technology Corporation 3 - 4 - #include <linux/linkage.h> 5 - #include <asm/unistd.h> 6 - #include <asm/assembler.h> 7 - #include <asm/nds32.h> 8 - #include <asm/asm-offsets.h> 9 - #include <asm/thread_info.h> 10 - #include <asm/current.h> 11 - #include <asm/fpu.h> 12 - 13 - 14 - 15 - #ifdef CONFIG_HWZOL 16 - .macro pop_zol 17 - mtusr $r14, $LB 18 - mtusr $r15, $LE 19 - mtusr $r16, $LC 20 - .endm 21 - #endif 22 - 23 - .macro restore_user_regs_first 24 - setgie.d 25 - isb 26 - #if defined(CONFIG_FPU) 27 - addi $sp, $sp, OSP_OFFSET 28 - lmw.adm $r12, [$sp], $r25, #0x0 29 - sethi $p0, hi20(has_fpu) 30 - lbsi $p0, [$p0+lo12(has_fpu)] 31 - beqz $p0, 2f 32 - mtsr $r25, $FUCOP_CTL 33 - 2: 34 - #else 35 - addi $sp, $sp, FUCOP_CTL_OFFSET 36 - lmw.adm $r12, [$sp], $r24, #0x0 37 - #endif 38 - mtsr $r12, $SP_USR 39 - mtsr $r13, $IPC 40 - #ifdef CONFIG_HWZOL 41 - pop_zol 42 - #endif 43 - mtsr $r19, $PSW 44 - mtsr $r20, $IPSW 45 - mtsr $r21, $P_IPSW 46 - mtsr $r22, $P_IPC 47 - mtsr $r23, $P_P0 48 - mtsr $r24, $P_P1 49 - lmw.adm $sp, [$sp], $sp, #0xe 50 - .endm 51 - 52 - .macro restore_user_regs_last 53 - pop $p0 54 - cmovn $sp, $p0, $p0 55 - 56 - iret 57 - nop 58 - 59 - .endm 60 - 61 - .macro restore_user_regs 62 - restore_user_regs_first 63 - lmw.adm $r0, [$sp], $r25, #0x0 64 - addi $sp, $sp, OSP_OFFSET 65 - restore_user_regs_last 66 - .endm 67 - 68 - .macro fast_restore_user_regs 69 - restore_user_regs_first 70 - lmw.adm $r1, [$sp], $r25, #0x0 71 - addi $sp, $sp, OSP_OFFSET-4 72 - restore_user_regs_last 73 - .endm 74 - 75 - #ifdef CONFIG_PREEMPTION 76 - .macro preempt_stop 77 - .endm 78 - #else 79 - .macro preempt_stop 80 - setgie.d 81 - isb 82 - .endm 83 - #define resume_kernel no_work_pending 84 - #endif 85 - 86 - ENTRY(ret_from_exception) 87 - preempt_stop 88 - ENTRY(ret_from_intr) 89 - 90 - /* 91 - * judge Kernel or user mode 92 - * 93 - */ 94 - lwi $p0, [$sp+(#IPSW_OFFSET)] ! 
Check if in nested interrupt 95 - andi $p0, $p0, #PSW_mskINTL 96 - bnez $p0, resume_kernel ! done with iret 97 - j resume_userspace 98 - 99 - 100 - /* 101 - * This is the fast syscall return path. We do as little as 102 - * possible here, and this includes saving $r0 back into the SVC 103 - * stack. 104 - * fixed: tsk - $r25, syscall # - $r7, syscall table pointer - $r8 105 - */ 106 - ENTRY(ret_fast_syscall) 107 - gie_disable 108 - lwi $r1, [tsk+#TSK_TI_FLAGS] 109 - andi $p1, $r1, #_TIF_WORK_MASK 110 - bnez $p1, fast_work_pending 111 - fast_restore_user_regs ! iret 112 - 113 - /* 114 - * Ok, we need to do extra processing, 115 - * enter the slow path returning from syscall, while pending work. 116 - */ 117 - fast_work_pending: 118 - swi $r0, [$sp+(#R0_OFFSET)] ! what is different from ret_from_exception 119 - work_pending: 120 - andi $p1, $r1, #_TIF_NEED_RESCHED 121 - bnez $p1, work_resched 122 - 123 - andi $p1, $r1, #_TIF_SIGPENDING|#_TIF_NOTIFY_RESUME|#_TIF_NOTIFY_SIGNAL 124 - beqz $p1, no_work_pending 125 - 126 - move $r0, $sp ! 'regs' 127 - gie_enable 128 - bal do_notify_resume 129 - b ret_slow_syscall 130 - work_resched: 131 - bal schedule ! path, return to user mode 132 - 133 - /* 134 - * "slow" syscall return path. 135 - */ 136 - ENTRY(resume_userspace) 137 - ENTRY(ret_slow_syscall) 138 - gie_disable 139 - lwi $p0, [$sp+(#IPSW_OFFSET)] ! Check if in nested interrupt 140 - andi $p0, $p0, #PSW_mskINTL 141 - bnez $p0, no_work_pending ! done with iret 142 - lwi $r1, [tsk+#TSK_TI_FLAGS] 143 - andi $p1, $r1, #_TIF_WORK_MASK 144 - bnez $p1, work_pending ! handle work_resched, sig_pend 145 - 146 - no_work_pending: 147 - #ifdef CONFIG_TRACE_IRQFLAGS 148 - lwi $p0, [$sp+(#IPSW_OFFSET)] 149 - andi $p0, $p0, #0x1 150 - la $r10, __trace_hardirqs_off 151 - la $r9, __trace_hardirqs_on 152 - cmovz $r9, $p0, $r10 153 - jral $r9 154 - #endif 155 - restore_user_regs ! 
return from iret 156 - 157 - 158 - /* 159 - * preemptive kernel 160 - */ 161 - #ifdef CONFIG_PREEMPTION 162 - resume_kernel: 163 - gie_disable 164 - lwi $t0, [tsk+#TSK_TI_PREEMPT] 165 - bnez $t0, no_work_pending 166 - 167 - lwi $t0, [tsk+#TSK_TI_FLAGS] 168 - andi $p1, $t0, #_TIF_NEED_RESCHED 169 - beqz $p1, no_work_pending 170 - 171 - lwi $t0, [$sp+(#IPSW_OFFSET)] ! Interrupts off? 172 - andi $t0, $t0, #1 173 - beqz $t0, no_work_pending 174 - 175 - jal preempt_schedule_irq 176 - b no_work_pending 177 - #endif 178 - 179 - /* 180 - * This is how we return from a fork. 181 - */ 182 - ENTRY(ret_from_fork) 183 - bal schedule_tail 184 - beqz $r6, 1f ! r6 stores fn for kernel thread 185 - move $r0, $r7 ! prepare kernel thread arg 186 - jral $r6 187 - 1: 188 - lwi $r1, [tsk+#TSK_TI_FLAGS] ! check for syscall tracing 189 - andi $p1, $r1, #_TIF_WORK_SYSCALL_LEAVE ! are we tracing syscalls? 190 - beqz $p1, ret_slow_syscall 191 - move $r0, $sp 192 - bal syscall_trace_leave 193 - b ret_slow_syscall
-100
arch/nds32/kernel/ex-scall.S
···
// SPDX-License-Identifier: GPL-2.0
// Copyright (C) 2005-2017 Andes Technology Corporation

#include <linux/linkage.h>
#include <asm/unistd.h>
#include <asm/assembler.h>
#include <asm/nds32.h>
#include <asm/asm-offsets.h>
#include <asm/thread_info.h>
#include <asm/current.h>

/*
 * $r0 = previous task_struct,
 * $r1 = next task_struct,
 * previous and next are guaranteed not to be the same.
 */

ENTRY(__switch_to)

	la	$p0, __entry_task
	sw	$r1, [$p0]
	addi	$p1, $r0, #THREAD_CPU_CONTEXT
	smw.bi	$r6, [$p1], $r14, #0xb		! push r6~r14, fp, lp, sp
	move	$r25, $r1
#if defined(CONFIG_FPU)
	call	_switch_fpu
#endif
	addi	$r1, $r25, #THREAD_CPU_CONTEXT
	lmw.bi	$r6, [$r1], $r14, #0xb		! pop r6~r14, fp, lp, sp
	ret


#define tbl $r8

/*
 * $r7 will be written as syscall nr
 */
	.macro	get_scno
	lwi	$r7, [$sp + R15_OFFSET]
	swi	$r7, [$sp + SYSCALLNO_OFFSET]
	.endm

	.macro	updateipc
	addi	$r17, $r13, #4			! $r13 is $IPC
	swi	$r17, [$sp + IPC_OFFSET]
	.endm

ENTRY(eh_syscall)
	updateipc

	get_scno
	gie_enable

	lwi	$p0, [tsk+#TSK_TI_FLAGS]	! check for syscall tracing

	andi	$p1, $p0, #_TIF_WORK_SYSCALL_ENTRY	! are we tracing syscalls?
	bnez	$p1, __sys_trace

	la	$lp, ret_fast_syscall		! return address
jmp_systbl:
	addi	$p1, $r7, #-__NR_syscalls	! syscall number of syscall instruction is guarded by assembler
	bgez	$p1, _SCNO_EXCEED		! call sys_* routine
	la	tbl, sys_call_table		! load syscall table pointer
	slli	$p1, $r7, #2
	add	$p1, tbl, $p1
	lwi	$p1, [$p1]
	jr	$p1				! no return

_SCNO_EXCEED:
	ori	$r0, $r7, #0
	ori	$r1, $sp, #0
	b	bad_syscall

/*
 * This is the really slow path.  We're going to be doing
 * context switches, and waiting for our parent to respond.
 */
__sys_trace:
	move	$r0, $sp
	bal	syscall_trace_enter
	move	$r7, $r0
	la	$lp, __sys_trace_return		! return address

	addi	$p1, $r7, #1
	beqz	$p1, ret_slow_syscall		! fatal signal is pending

	addi	$p1, $sp, #R0_OFFSET		! pointer to regs
	lmw.bi	$r0, [$p1], $r5			! have to reload $r0 - $r5
	b	jmp_systbl

__sys_trace_return:
	swi	$r0, [$sp+#R0_OFFSET]		! T: save returned $r0
	move	$r0, $sp			! set pt_regs for syscall_trace_leave
	bal	syscall_trace_leave
	b	ret_slow_syscall

ENTRY(sys_rt_sigreturn_wrapper)
	addi	$r0, $sp, #0
	b	sys_rt_sigreturn
ENDPROC(sys_rt_sigreturn_wrapper)
-266
arch/nds32/kernel/fpu.c
···
// SPDX-License-Identifier: GPL-2.0
// Copyright (C) 2005-2018 Andes Technology Corporation

#include <linux/sched.h>
#include <linux/signal.h>
#include <linux/sched/signal.h>
#include <asm/processor.h>
#include <asm/user.h>
#include <asm/io.h>
#include <asm/bitfield.h>
#include <asm/fpu.h>

const struct fpu_struct init_fpuregs = {
	.fd_regs = {[0 ... 31] = sNAN64},
	.fpcsr = FPCSR_INIT,
#if IS_ENABLED(CONFIG_SUPPORT_DENORMAL_ARITHMETIC)
	.UDF_IEX_trap = 0
#endif
};

void save_fpu(struct task_struct *tsk)
{
	unsigned int fpcfg, fpcsr;

	enable_fpu();
	fpcfg = ((__nds32__fmfcfg() & FPCFG_mskFREG) >> FPCFG_offFREG);
	switch (fpcfg) {
	case SP32_DP32_reg:
		asm volatile ("fsdi $fd31, [%0+0xf8]\n\t"
			      "fsdi $fd30, [%0+0xf0]\n\t"
			      "fsdi $fd29, [%0+0xe8]\n\t"
			      "fsdi $fd28, [%0+0xe0]\n\t"
			      "fsdi $fd27, [%0+0xd8]\n\t"
			      "fsdi $fd26, [%0+0xd0]\n\t"
			      "fsdi $fd25, [%0+0xc8]\n\t"
			      "fsdi $fd24, [%0+0xc0]\n\t"
			      "fsdi $fd23, [%0+0xb8]\n\t"
			      "fsdi $fd22, [%0+0xb0]\n\t"
			      "fsdi $fd21, [%0+0xa8]\n\t"
			      "fsdi $fd20, [%0+0xa0]\n\t"
			      "fsdi $fd19, [%0+0x98]\n\t"
			      "fsdi $fd18, [%0+0x90]\n\t"
			      "fsdi $fd17, [%0+0x88]\n\t"
			      "fsdi $fd16, [%0+0x80]\n\t"
			      : /* no output */
			      : "r" (&tsk->thread.fpu)
			      : "memory");
		fallthrough;
	case SP32_DP16_reg:
		asm volatile ("fsdi $fd15, [%0+0x78]\n\t"
			      "fsdi $fd14, [%0+0x70]\n\t"
			      "fsdi $fd13, [%0+0x68]\n\t"
			      "fsdi $fd12, [%0+0x60]\n\t"
			      "fsdi $fd11, [%0+0x58]\n\t"
			      "fsdi $fd10, [%0+0x50]\n\t"
			      "fsdi $fd9, [%0+0x48]\n\t"
			      "fsdi $fd8, [%0+0x40]\n\t"
			      : /* no output */
			      : "r" (&tsk->thread.fpu)
			      : "memory");
		fallthrough;
	case SP16_DP8_reg:
		asm volatile ("fsdi $fd7, [%0+0x38]\n\t"
			      "fsdi $fd6, [%0+0x30]\n\t"
			      "fsdi $fd5, [%0+0x28]\n\t"
			      "fsdi $fd4, [%0+0x20]\n\t"
			      : /* no output */
			      : "r" (&tsk->thread.fpu)
			      : "memory");
		fallthrough;
	case SP8_DP4_reg:
		asm volatile ("fsdi $fd3, [%1+0x18]\n\t"
			      "fsdi $fd2, [%1+0x10]\n\t"
			      "fsdi $fd1, [%1+0x8]\n\t"
			      "fsdi $fd0, [%1+0x0]\n\t"
			      "fmfcsr %0\n\t"
			      "swi %0, [%1+0x100]\n\t"
			      : "=&r" (fpcsr)
			      : "r"(&tsk->thread.fpu)
			      : "memory");
	}
	disable_fpu();
}

void load_fpu(const struct fpu_struct *fpregs)
{
	unsigned int fpcfg, fpcsr;

	enable_fpu();
	fpcfg = ((__nds32__fmfcfg() & FPCFG_mskFREG) >> FPCFG_offFREG);
	switch (fpcfg) {
	case SP32_DP32_reg:
		asm volatile ("fldi $fd31, [%0+0xf8]\n\t"
			      "fldi $fd30, [%0+0xf0]\n\t"
			      "fldi $fd29, [%0+0xe8]\n\t"
			      "fldi $fd28, [%0+0xe0]\n\t"
			      "fldi $fd27, [%0+0xd8]\n\t"
			      "fldi $fd26, [%0+0xd0]\n\t"
			      "fldi $fd25, [%0+0xc8]\n\t"
			      "fldi $fd24, [%0+0xc0]\n\t"
			      "fldi $fd23, [%0+0xb8]\n\t"
			      "fldi $fd22, [%0+0xb0]\n\t"
			      "fldi $fd21, [%0+0xa8]\n\t"
			      "fldi $fd20, [%0+0xa0]\n\t"
			      "fldi $fd19, [%0+0x98]\n\t"
			      "fldi $fd18, [%0+0x90]\n\t"
			      "fldi $fd17, [%0+0x88]\n\t"
			      "fldi $fd16, [%0+0x80]\n\t"
			      : /* no output */
			      : "r" (fpregs));
		fallthrough;
	case SP32_DP16_reg:
		asm volatile ("fldi $fd15, [%0+0x78]\n\t"
			      "fldi $fd14, [%0+0x70]\n\t"
			      "fldi $fd13, [%0+0x68]\n\t"
			      "fldi $fd12, [%0+0x60]\n\t"
			      "fldi $fd11, [%0+0x58]\n\t"
			      "fldi $fd10, [%0+0x50]\n\t"
			      "fldi $fd9, [%0+0x48]\n\t"
			      "fldi $fd8, [%0+0x40]\n\t"
			      : /* no output */
			      : "r" (fpregs));
		fallthrough;
	case SP16_DP8_reg:
		asm volatile ("fldi $fd7, [%0+0x38]\n\t"
			      "fldi $fd6, [%0+0x30]\n\t"
			      "fldi $fd5, [%0+0x28]\n\t"
			      "fldi $fd4, [%0+0x20]\n\t"
			      : /* no output */
			      : "r" (fpregs));
		fallthrough;
	case SP8_DP4_reg:
		asm volatile ("fldi $fd3, [%1+0x18]\n\t"
			      "fldi $fd2, [%1+0x10]\n\t"
			      "fldi $fd1, [%1+0x8]\n\t"
			      "fldi $fd0, [%1+0x0]\n\t"
			      "lwi %0, [%1+0x100]\n\t"
			      "fmtcsr %0\n\t"
			      : "=&r" (fpcsr)
			      : "r"(fpregs));
	}
	disable_fpu();
}

void store_fpu_for_suspend(void)
{
#ifdef CONFIG_LAZY_FPU
	if (last_task_used_math != NULL)
		save_fpu(last_task_used_math);
	last_task_used_math = NULL;
#else
	if (!used_math())
		return;
	unlazy_fpu(current);
#endif
	clear_fpu(task_pt_regs(current));
}

inline void do_fpu_context_switch(struct pt_regs *regs)
{
	/* Enable to use FPU. */

	if (!user_mode(regs)) {
		pr_err("BUG: FPU is used in kernel mode.\n");
		BUG();
		return;
	}

	enable_ptreg_fpu(regs);
#ifdef CONFIG_LAZY_FPU	/* Lazy FPU is used */
	if (last_task_used_math == current)
		return;
	if (last_task_used_math != NULL)
		/* Other processes fpu state, save away */
		save_fpu(last_task_used_math);
	last_task_used_math = current;
#endif
	if (used_math()) {
		load_fpu(&current->thread.fpu);
	} else {
		/* First time FPU user. */
		load_fpu(&init_fpuregs);
#if IS_ENABLED(CONFIG_SUPPORT_DENORMAL_ARITHMETIC)
		current->thread.fpu.UDF_IEX_trap = init_fpuregs.UDF_IEX_trap;
#endif
		set_used_math();
	}
}

inline void fill_sigfpe_signo(unsigned int fpcsr, int *signo)
{
	if (fpcsr & FPCSR_mskOVFT)
		*signo = FPE_FLTOVF;
#ifndef CONFIG_SUPPORT_DENORMAL_ARITHMETIC
	else if (fpcsr & FPCSR_mskUDFT)
		*signo = FPE_FLTUND;
#endif
	else if (fpcsr & FPCSR_mskIVOT)
		*signo = FPE_FLTINV;
	else if (fpcsr & FPCSR_mskDBZT)
		*signo = FPE_FLTDIV;
	else if (fpcsr & FPCSR_mskIEXT)
		*signo = FPE_FLTRES;
}

inline void handle_fpu_exception(struct pt_regs *regs)
{
	unsigned int fpcsr;
	int si_code = 0, si_signo = SIGFPE;
#if IS_ENABLED(CONFIG_SUPPORT_DENORMAL_ARITHMETIC)
	unsigned long redo_except = FPCSR_mskDNIT|FPCSR_mskUDFT|FPCSR_mskIEXT;
#else
	unsigned long redo_except = FPCSR_mskDNIT;
#endif

	lose_fpu();
	fpcsr = current->thread.fpu.fpcsr;

	if (fpcsr & redo_except) {
		si_signo = do_fpuemu(regs, &current->thread.fpu);
		fpcsr = current->thread.fpu.fpcsr;
		if (!si_signo) {
			current->thread.fpu.fpcsr &= ~(redo_except);
			goto done;
		}
	} else if (fpcsr & FPCSR_mskRIT) {
		if (!user_mode(regs))
			make_task_dead(SIGILL);
		si_signo = SIGILL;
	}

	switch (si_signo) {
	case SIGFPE:
		fill_sigfpe_signo(fpcsr, &si_code);
		break;
	case SIGILL:
		show_regs(regs);
		si_code = ILL_COPROC;
		break;
	case SIGBUS:
		si_code = BUS_ADRERR;
		break;
	default:
		break;
	}

	force_sig_fault(si_signo, si_code,
			(void __user *)instruction_pointer(regs));
done:
	own_fpu();
}

bool do_fpu_exception(unsigned int subtype, struct pt_regs *regs)
{
	int done = true;

	/* Coprocessor disabled exception */
	if (subtype == FPU_DISABLE_EXCEPTION) {
		preempt_disable();
		do_fpu_context_switch(regs);
		preempt_enable();
	}
	/* Coprocessor exception such as underflow and overflow */
	else if (subtype == FPU_EXCEPTION)
		handle_fpu_exception(regs);
	else
		done = false;
	return done;
}
-278
arch/nds32/kernel/ftrace.c
···
// SPDX-License-Identifier: GPL-2.0

#include <linux/ftrace.h>
#include <linux/uaccess.h>
#include <asm/cacheflush.h>

#ifndef CONFIG_DYNAMIC_FTRACE
extern void (*ftrace_trace_function)(unsigned long, unsigned long,
				     struct ftrace_ops*, struct ftrace_regs*);
extern void ftrace_graph_caller(void);

noinline void __naked ftrace_stub(unsigned long ip, unsigned long parent_ip,
				  struct ftrace_ops *op, struct ftrace_regs *fregs)
{
	__asm__ ("");  /* avoid being optimized as a pure function */
}

noinline void _mcount(unsigned long parent_ip)
{
	/* save all state by the compiler prologue */

	unsigned long ip = (unsigned long)__builtin_return_address(0);

	if (ftrace_trace_function != ftrace_stub)
		ftrace_trace_function(ip - MCOUNT_INSN_SIZE, parent_ip,
				      NULL, NULL);

#ifdef CONFIG_FUNCTION_GRAPH_TRACER
	if (ftrace_graph_return != (trace_func_graph_ret_t)ftrace_stub
	    || ftrace_graph_entry != ftrace_graph_entry_stub)
		ftrace_graph_caller();
#endif

	/* restore all state by the compiler epilogue */
}
EXPORT_SYMBOL(_mcount);

#else /* CONFIG_DYNAMIC_FTRACE */

noinline void __naked ftrace_stub(unsigned long ip, unsigned long parent_ip,
				  struct ftrace_ops *op, struct ftrace_regs *fregs)
{
	__asm__ ("");  /* avoid being optimized as a pure function */
}

noinline void __naked _mcount(unsigned long parent_ip)
{
	__asm__ ("");  /* avoid being optimized as a pure function */
}
EXPORT_SYMBOL(_mcount);

#define XSTR(s) STR(s)
#define STR(s) #s
void _ftrace_caller(unsigned long parent_ip)
{
	/* save all state needed by the compiler prologue */

	/*
	 * prepare arguments for real tracing function
	 * first arg  : __builtin_return_address(0) - MCOUNT_INSN_SIZE
	 * second arg : parent_ip
	 */
	__asm__ __volatile__ (
		"move $r1, %0				  \n\t"
		"addi $r0, %1, #-" XSTR(MCOUNT_INSN_SIZE) "\n\t"
		:
		: "r" (parent_ip), "r" (__builtin_return_address(0)));

	/* a placeholder for the call to a real tracing function */
	__asm__ __volatile__ (
		"ftrace_call:		\n\t"
		"nop			\n\t"
		"nop			\n\t"
		"nop			\n\t");

#ifdef CONFIG_FUNCTION_GRAPH_TRACER
	/* a placeholder for the call to ftrace_graph_caller */
	__asm__ __volatile__ (
		"ftrace_graph_call:	\n\t"
		"nop			\n\t"
		"nop			\n\t"
		"nop			\n\t");
#endif
	/* restore all state needed by the compiler epilogue */
}

static unsigned long gen_sethi_insn(unsigned long addr)
{
	unsigned long opcode = 0x46000000;
	unsigned long imm = addr >> 12;
	unsigned long rt_num = 0xf << 20;

	return ENDIAN_CONVERT(opcode | rt_num | imm);
}

static unsigned long gen_ori_insn(unsigned long addr)
{
	unsigned long opcode = 0x58000000;
	unsigned long imm = addr & 0x0000fff;
	unsigned long rt_num = 0xf << 20;
	unsigned long ra_num = 0xf << 15;

	return ENDIAN_CONVERT(opcode | rt_num | ra_num | imm);
}

static unsigned long gen_jral_insn(unsigned long addr)
{
	unsigned long opcode = 0x4a000001;
	unsigned long rt_num = 0x1e << 20;
	unsigned long rb_num = 0xf << 10;

	return ENDIAN_CONVERT(opcode | rt_num | rb_num);
}

static void ftrace_gen_call_insn(unsigned long *call_insns,
				 unsigned long addr)
{
	call_insns[0] = gen_sethi_insn(addr);	/* sethi $r15, imm20u */
	call_insns[1] = gen_ori_insn(addr);	/* ori $r15, $r15, imm15u */
	call_insns[2] = gen_jral_insn(addr);	/* jral $lp, $r15 */
}

static int __ftrace_modify_code(unsigned long pc, unsigned long *old_insn,
				unsigned long *new_insn, bool validate)
{
	unsigned long orig_insn[3];

	if (validate) {
		if (copy_from_kernel_nofault(orig_insn, (void *)pc,
					     MCOUNT_INSN_SIZE))
			return -EFAULT;
		if (memcmp(orig_insn, old_insn, MCOUNT_INSN_SIZE))
			return -EINVAL;
	}

	if (copy_to_kernel_nofault((void *)pc, new_insn, MCOUNT_INSN_SIZE))
		return -EPERM;

	return 0;
}

static int ftrace_modify_code(unsigned long pc, unsigned long *old_insn,
			      unsigned long *new_insn, bool validate)
{
	int ret;

	ret = __ftrace_modify_code(pc, old_insn, new_insn, validate);
	if (ret)
		return ret;

	flush_icache_range(pc, pc + MCOUNT_INSN_SIZE);

	return ret;
}

int ftrace_update_ftrace_func(ftrace_func_t func)
{
	unsigned long pc = (unsigned long)&ftrace_call;
	unsigned long old_insn[3] = {INSN_NOP, INSN_NOP, INSN_NOP};
	unsigned long new_insn[3] = {INSN_NOP, INSN_NOP, INSN_NOP};

	if (func != ftrace_stub)
		ftrace_gen_call_insn(new_insn, (unsigned long)func);

	return ftrace_modify_code(pc, old_insn, new_insn, false);
}

int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
{
	unsigned long pc = rec->ip;
	unsigned long nop_insn[3] = {INSN_NOP, INSN_NOP, INSN_NOP};
	unsigned long call_insn[3] = {INSN_NOP, INSN_NOP, INSN_NOP};

	ftrace_gen_call_insn(call_insn, addr);

	return ftrace_modify_code(pc, nop_insn, call_insn, true);
}

int ftrace_make_nop(struct module *mod, struct dyn_ftrace *rec,
		    unsigned long addr)
{
	unsigned long pc = rec->ip;
	unsigned long nop_insn[3] = {INSN_NOP, INSN_NOP, INSN_NOP};
	unsigned long call_insn[3] = {INSN_NOP, INSN_NOP, INSN_NOP};

	ftrace_gen_call_insn(call_insn, addr);

	return ftrace_modify_code(pc, call_insn, nop_insn, true);
}
#endif /* CONFIG_DYNAMIC_FTRACE */

#ifdef CONFIG_FUNCTION_GRAPH_TRACER
void prepare_ftrace_return(unsigned long *parent, unsigned long self_addr,
			   unsigned long frame_pointer)
{
	unsigned long return_hooker = (unsigned long)&return_to_handler;
	unsigned long old;

	if (unlikely(atomic_read(&current->tracing_graph_pause)))
		return;

	old = *parent;

	if (!function_graph_enter(old, self_addr, frame_pointer, NULL))
		*parent = return_hooker;
}

noinline void ftrace_graph_caller(void)
{
	unsigned long *parent_ip =
		(unsigned long *)(__builtin_frame_address(2) - 4);

	unsigned long selfpc =
		(unsigned long)(__builtin_return_address(1) - MCOUNT_INSN_SIZE);

	unsigned long frame_pointer =
		(unsigned long)__builtin_frame_address(3);

	prepare_ftrace_return(parent_ip, selfpc, frame_pointer);
}

extern unsigned long ftrace_return_to_handler(unsigned long frame_pointer);
void __naked return_to_handler(void)
{
	__asm__ __volatile__ (
		/* save state needed by the ABI */
		"smw.adm $r0,[$sp],$r1,#0x0  \n\t"

		/* get original return address */
		"move $r0, $fp               \n\t"
		"bal ftrace_return_to_handler\n\t"
		"move $lp, $r0               \n\t"

		/* restore state needed by the ABI */
		"lmw.bim $r0,[$sp],$r1,#0x0  \n\t");
}

#ifdef CONFIG_DYNAMIC_FTRACE
extern unsigned long ftrace_graph_call;

static int ftrace_modify_graph_caller(bool enable)
{
	unsigned long pc = (unsigned long)&ftrace_graph_call;
	unsigned long nop_insn[3] = {INSN_NOP, INSN_NOP, INSN_NOP};
	unsigned long call_insn[3] = {INSN_NOP, INSN_NOP, INSN_NOP};

	ftrace_gen_call_insn(call_insn, (unsigned long)ftrace_graph_caller);

	if (enable)
		return ftrace_modify_code(pc, nop_insn, call_insn, true);
	else
		return ftrace_modify_code(pc, call_insn, nop_insn, true);
}

int ftrace_enable_ftrace_graph_caller(void)
{
	return ftrace_modify_graph_caller(true);
}

int ftrace_disable_ftrace_graph_caller(void)
{
	return ftrace_modify_graph_caller(false);
}
#endif /* CONFIG_DYNAMIC_FTRACE */

#endif /* CONFIG_FUNCTION_GRAPH_TRACER */


#ifdef CONFIG_TRACE_IRQFLAGS
noinline void __trace_hardirqs_off(void)
{
	trace_hardirqs_off();
}
noinline void __trace_hardirqs_on(void)
{
	trace_hardirqs_on();
}
#endif /* CONFIG_TRACE_IRQFLAGS */
-197
arch/nds32/kernel/head.S
···
// SPDX-License-Identifier: GPL-2.0
// Copyright (C) 2005-2017 Andes Technology Corporation

#include <linux/linkage.h>
#include <linux/init.h>
#include <linux/pgtable.h>
#include <asm/ptrace.h>
#include <asm/asm-offsets.h>
#include <asm/page.h>
#include <linux/sizes.h>
#include <asm/thread_info.h>

#ifdef CONFIG_CPU_BIG_ENDIAN
#define OF_DT_MAGIC 0xd00dfeed
#else
#define OF_DT_MAGIC 0xedfe0dd0
#endif

	.globl	swapper_pg_dir
	.equ	swapper_pg_dir, TEXTADDR - 0x4000

/*
 * Kernel startup entry point.
 */
	.section ".head.text", "ax"
	.type	_stext, %function
ENTRY(_stext)
	setgie.d			! Disable interrupt
	isb
	/*
	 * Disable I/D-cache and enable it at a proper time
	 */
	mfsr	$r0, $mr8
	li	$r1, #~(CACHE_CTL_mskIC_EN|CACHE_CTL_mskDC_EN)
	and	$r0, $r0, $r1
	mtsr	$r0, $mr8

	/*
	 * Process device tree blob
	 */
	andi	$r0, $r2, #0x3
	li	$r10, 0
	bne	$r0, $r10, _nodtb
	lwi	$r0, [$r2]
	li	$r1, OF_DT_MAGIC
	bne	$r0, $r1, _nodtb
	move	$r10, $r2
_nodtb:

	/*
	 * Create a temporary mapping area for booting, before start_kernel
	 */
	sethi	$r4, hi20(swapper_pg_dir)
	li	$p0, (PAGE_OFFSET - PHYS_OFFSET)
	sub	$r4, $r4, $p0
	tlbop	FlushAll		! invalidate TLB
	isb
	mtsr	$r4, $L1_PPTB		! load page table pointer

#ifdef CONFIG_CPU_DCACHE_DISABLE
	#define MMU_CTL_NTCC MMU_CTL_CACHEABLE_NON
#else
	#ifdef CONFIG_CPU_DCACHE_WRITETHROUGH
		#define MMU_CTL_NTCC MMU_CTL_CACHEABLE_WT
	#else
		#define MMU_CTL_NTCC MMU_CTL_CACHEABLE_WB
	#endif
#endif

	/* set NTC cacheability, multiple page size in use */
	mfsr	$r3, $MMU_CTL
#if CONFIG_MEMORY_START >= 0xc0000000
	ori	$r3, $r3, (MMU_CTL_NTCC << MMU_CTL_offNTC3)
#elif CONFIG_MEMORY_START >= 0x80000000
	ori	$r3, $r3, (MMU_CTL_NTCC << MMU_CTL_offNTC2)
#elif CONFIG_MEMORY_START >= 0x40000000
	ori	$r3, $r3, (MMU_CTL_NTCC << MMU_CTL_offNTC1)
#else
	ori	$r3, $r3, (MMU_CTL_NTCC << MMU_CTL_offNTC0)
#endif

#ifdef CONFIG_ANDES_PAGE_SIZE_4KB
	ori	$r3, $r3, #(MMU_CTL_mskMPZIU)
#else
	ori	$r3, $r3, #(MMU_CTL_mskMPZIU|MMU_CTL_D8KB)
#endif
#ifdef CONFIG_HW_SUPPORT_UNALIGNMENT_ACCESS
	li	$r0, #MMU_CTL_UNA
	or	$r3, $r3, $r0
#endif
	mtsr	$r3, $MMU_CTL
	isb

	/* set page size and size of kernel image */
	mfsr	$r0, $MMU_CFG
	srli	$r3, $r0, MMU_CFG_offfEPSZ
	zeb	$r3, $r3
	bnez	$r3, _extra_page_size_support
#ifdef CONFIG_ANDES_PAGE_SIZE_4KB
	li	$r5, #SZ_4K		! Use 4KB page size
#else
	li	$r5, #SZ_8K		! Use 8KB page size
	li	$r3, #1
#endif
	mtsr	$r3, $TLB_MISC
	b	_image_size_check

_extra_page_size_support:		! Use EPSZ page size
	clz	$r6, $r3
	subri	$r2, $r6, #31
	li	$r3, #1
	sll	$r3, $r3, $r2
	/* MMU_CFG.EPSZ value -> meaning */
	mul	$r5, $r3, $r3
	slli	$r5, $r5, #14
	/* MMU_CFG.EPSZ -> TLB_MISC.ACC_PSZ */
	addi	$r3, $r2, #0x2
	mtsr	$r3, $TLB_MISC

_image_size_check:
	/* calculate the image maximum size accepted by TLB config */
	andi	$r6, $r0, MMU_CFG_mskTBW
	andi	$r0, $r0, MMU_CFG_mskTBS
	srli	$r6, $r6, MMU_CFG_offTBW
	srli	$r0, $r0, MMU_CFG_offTBS
	addi	$r6, $r6, #0x1		! MMU_CFG.TBW value -> meaning
	addi	$r0, $r0, #0x2		! MMU_CFG.TBS value -> meaning
	sll	$r0, $r6, $r0		! entries = k-way * n-set
	mul	$r6, $r0, $r5		! max size = entries * page size
	/* check kernel image size */
	la	$r3, (_end - PAGE_OFFSET)
	bgt	$r3, $r6, __error

	li	$r2, #(PHYS_OFFSET + TLB_DATA_kernel_text_attr)
	li	$r3, PAGE_OFFSET
	add	$r6, $r6, $r3

_tlb:
	mtsr	$r3, $TLB_VPN
	dsb
	tlbop	$r2, RWR
	isb
	add	$r3, $r3, $r5
	add	$r2, $r2, $r5
	bgt	$r6, $r3, _tlb
	mfsr	$r3, $TLB_MISC		! setup access page size
	li	$r2, #~0xf
	and	$r3, $r3, $r2
#ifdef CONFIG_ANDES_PAGE_SIZE_8KB
	ori	$r3, $r3, #0x1
#endif
	mtsr	$r3, $TLB_MISC

	mfsr	$r0, $MISC_CTL		! Enable BTB, RTP, shadow sp, and HW_PRE
	ori	$r0, $r0, #MISC_init
	mtsr	$r0, $MISC_CTL

	mfsr	$p1, $PSW
	li	$r15, #~PSW_clr		! clear WBNA|DME|IME|DT|IT|POM|INTL|GIE
	and	$p1, $p1, $r15
	ori	$p1, $p1, #PSW_init
	mtsr	$p1, $IPSW		! when iret, it will automatically enable MMU
	la	$lp, __mmap_switched
	mtsr	$lp, $IPC
	iret
	nop

	.type	__switch_data, %object
__switch_data:
	.long	__bss_start			! $r6
	.long	_end				! $r7
	.long	__atags_pointer			! $atag_pointer
	.long	init_task			! $r9, move to $r25
	.long	init_thread_union + THREAD_SIZE	! $sp


/*
 * The following fragment of code is executed with the MMU on in MMU mode,
 * and uses absolute addresses; this is not position independent.
 */
	.align
	.type	__mmap_switched, %function
__mmap_switched:
	la	$r3, __switch_data
	lmw.bim	$r6, [$r3], $r9, #0b0001
	move	$r25, $r9
	move	$fp, #0				! Clear BSS (and zero $fp)
	beq	$r7, $r6, _RRT
1:	swi.bi	$fp, [$r6], #4
	bne	$r7, $r6, 1b
	swi	$r10, [$r8]

_RRT:
	b	start_kernel

__error:
	b	__error
-9
arch/nds32/kernel/irq.c
···
// SPDX-License-Identifier: GPL-2.0
// Copyright (C) 2005-2017 Andes Technology Corporation

#include <linux/irqchip.h>

void __init init_IRQ(void)
{
	irqchip_init();
}
-278
arch/nds32/kernel/module.c
···
// SPDX-License-Identifier: GPL-2.0
// Copyright (C) 2005-2017 Andes Technology Corporation

#include <linux/module.h>
#include <linux/elf.h>
#include <linux/vmalloc.h>
#include <linux/moduleloader.h>
#include <linux/pgtable.h>

void *module_alloc(unsigned long size)
{
	return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END,
				    GFP_KERNEL, PAGE_KERNEL, 0, NUMA_NO_NODE,
				    __builtin_return_address(0));
}

void module_free(struct module *module, void *region)
{
	vfree(region);
}

int module_frob_arch_sections(Elf_Ehdr * hdr,
			      Elf_Shdr * sechdrs,
			      char *secstrings, struct module *mod)
{
	return 0;
}

void do_reloc16(unsigned int val, unsigned int *loc, unsigned int val_mask,
		unsigned int val_shift, unsigned int loc_mask,
		unsigned int partial_in_place, unsigned int swap)
{
	unsigned int tmp = 0, tmp2 = 0;

	__asm__ __volatile__("\tlhi.bi\t%0, [%2], 0\n"
			     "\tbeqz\t%3, 1f\n"
			     "\twsbh\t%0, %1\n"
			     "1:\n"
			     : "=r"(tmp) : "0"(tmp), "r"(loc), "r"(swap));

	tmp2 = tmp & loc_mask;
	if (partial_in_place) {
		tmp &= (~loc_mask);
		tmp =
		    tmp2 | ((tmp + ((val & val_mask) >> val_shift)) & val_mask);
	} else {
		tmp = tmp2 | ((val & val_mask) >> val_shift);
	}

	__asm__ __volatile__("\tbeqz\t%3, 2f\n"
			     "\twsbh\t%0, %1\n"
			     "2:\n"
			     "\tshi.bi\t%0, [%2], 0\n"
			     : "=r"(tmp) : "0"(tmp), "r"(loc), "r"(swap));
}

void do_reloc32(unsigned int val, unsigned int *loc, unsigned int val_mask,
		unsigned int val_shift, unsigned int loc_mask,
		unsigned int partial_in_place, unsigned int swap)
{
	unsigned int tmp = 0, tmp2 = 0;

	__asm__ __volatile__("\tlmw.bi\t%0, [%2], %0, 0\n"
			     "\tbeqz\t%3, 1f\n"
			     "\twsbh\t%0, %1\n"
			     "\trotri\t%0, %1, 16\n"
			     "1:\n"
			     : "=r"(tmp) : "0"(tmp), "r"(loc), "r"(swap));

	tmp2 = tmp & loc_mask;
	if (partial_in_place) {
		tmp &= (~loc_mask);
		tmp =
		    tmp2 | ((tmp + ((val & val_mask) >> val_shift)) & val_mask);
	} else {
		tmp = tmp2 | ((val & val_mask) >> val_shift);
	}

	__asm__ __volatile__("\tbeqz\t%3, 2f\n"
			     "\twsbh\t%0, %1\n"
			     "\trotri\t%0, %1, 16\n"
			     "2:\n"
			     "\tsmw.bi\t%0, [%2], %0, 0\n"
			     : "=r"(tmp) : "0"(tmp), "r"(loc), "r"(swap));
}

static inline int exceed_limit(int offset, unsigned int val_mask,
			       struct module *module, Elf32_Rela * rel,
			       unsigned int relindex, unsigned int reloc_order)
{
	int abs_off = offset < 0 ? ~offset : offset;

	if (abs_off & (~val_mask)) {
		pr_err("\n%s: relocation type %d out of range.\n"
		       "please rebuild the kernel module with gcc option \"-Wa,-mno-small-text\".\n",
		       module->name, ELF32_R_TYPE(rel->r_info));
		pr_err("section %d reloc %d offset 0x%x relative 0x%x.\n",
		       relindex, reloc_order, rel->r_offset, offset);
		return true;
	}
	return false;
}

#ifdef __NDS32_EL__
#define NEED_SWAP 1
#else
#define NEED_SWAP 0
#endif

int
apply_relocate_add(Elf32_Shdr * sechdrs, const char *strtab,
		   unsigned int symindex, unsigned int relindex,
		   struct module *module)
{
	Elf32_Shdr *symsec = sechdrs + symindex;
	Elf32_Shdr *relsec = sechdrs + relindex;
	Elf32_Shdr *dstsec = sechdrs + relsec->sh_info;
	Elf32_Rela *rel = (void *)relsec->sh_addr;
	unsigned int i;

	for (i = 0; i < relsec->sh_size / sizeof(Elf32_Rela); i++, rel++) {
		Elf32_Addr *loc;
		Elf32_Sym *sym;
		Elf32_Addr v;
		s32 offset;

		offset = ELF32_R_SYM(rel->r_info);
		if (offset < 0
		    || offset > (symsec->sh_size / sizeof(Elf32_Sym))) {
			pr_err("%s: bad relocation\n", module->name);
			pr_err("section %d reloc %d\n", relindex, i);
			return -ENOEXEC;
		}

		sym = ((Elf32_Sym *) symsec->sh_addr) + offset;

		if (rel->r_offset < 0
		    || rel->r_offset > dstsec->sh_size - sizeof(u16)) {
			pr_err("%s: out of bounds relocation\n", module->name);
			pr_err("section %d reloc %d offset 0x%0x size %d\n",
			       relindex, i, rel->r_offset, dstsec->sh_size);
			return -ENOEXEC;
		}

		loc = (Elf32_Addr *) (dstsec->sh_addr + rel->r_offset);
		v = sym->st_value + rel->r_addend;

		switch (ELF32_R_TYPE(rel->r_info)) {
		case R_NDS32_NONE:
		case R_NDS32_INSN16:
		case R_NDS32_LABEL:
		case R_NDS32_LONGCALL1:
		case R_NDS32_LONGCALL2:
		case R_NDS32_LONGCALL3:
		case R_NDS32_LONGCALL4:
		case R_NDS32_LONGJUMP1:
		case R_NDS32_LONGJUMP2:
		case R_NDS32_LONGJUMP3:
		case R_NDS32_9_FIXED_RELA:
		case R_NDS32_15_FIXED_RELA:
		case R_NDS32_17_FIXED_RELA:
		case R_NDS32_25_FIXED_RELA:
		case R_NDS32_LOADSTORE:
		case R_NDS32_DWARF2_OP1_RELA:
		case R_NDS32_DWARF2_OP2_RELA:
		case R_NDS32_DWARF2_LEB_RELA:
		case R_NDS32_RELA_NOP_MIX ... R_NDS32_RELA_NOP_MAX:
			break;

		case R_NDS32_32_RELA:
			do_reloc32(v, loc, 0xffffffff, 0, 0, 0, 0);
			break;

		case R_NDS32_HI20_RELA:
			do_reloc32(v, loc, 0xfffff000, 12, 0xfff00000, 0,
				   NEED_SWAP);
			break;

		case R_NDS32_LO12S3_RELA:
			do_reloc32(v, loc, 0x00000fff, 3, 0xfffff000, 0,
				   NEED_SWAP);
			break;

		case R_NDS32_LO12S2_RELA:
			do_reloc32(v, loc, 0x00000fff, 2, 0xfffff000, 0,
				   NEED_SWAP);
			break;

		case R_NDS32_LO12S1_RELA:
			do_reloc32(v, loc, 0x00000fff, 1, 0xfffff000, 0,
				   NEED_SWAP);
			break;

		case R_NDS32_LO12S0_RELA:
		case R_NDS32_LO12S0_ORI_RELA:
			do_reloc32(v, loc, 0x00000fff, 0, 0xfffff000, 0,
				   NEED_SWAP);
			break;

		case R_NDS32_9_PCREL_RELA:
			if (exceed_limit
			    ((v - (Elf32_Addr) loc), 0x000000ff, module, rel,
			     relindex, i))
				return -ENOEXEC;
			do_reloc16(v - (Elf32_Addr) loc, loc, 0x000001ff, 1,
				   0xffffff00, 0, NEED_SWAP);
			break;

		case R_NDS32_15_PCREL_RELA:
			if (exceed_limit
			    ((v - (Elf32_Addr) loc), 0x00003fff, module, rel,
			     relindex, i))
				return -ENOEXEC;
			do_reloc32(v - (Elf32_Addr) loc, loc, 0x00007fff, 1,
				   0xffffc000, 0, NEED_SWAP);
			break;

		case R_NDS32_17_PCREL_RELA:
			if (exceed_limit
			    ((v - (Elf32_Addr) loc), 0x0000ffff, module, rel,
			     relindex, i))
				return -ENOEXEC;
			do_reloc32(v - (Elf32_Addr) loc, loc, 0x0001ffff, 1,
				   0xffff0000, 0, NEED_SWAP);
			break;

		case R_NDS32_25_PCREL_RELA:
			if (exceed_limit
			    ((v - (Elf32_Addr) loc), 0x00ffffff, module, rel,
			     relindex, i))
				return -ENOEXEC;
			do_reloc32(v - (Elf32_Addr) loc, loc, 0x01ffffff, 1,
				   0xff000000, 0, NEED_SWAP);
			break;
		case R_NDS32_WORD_9_PCREL_RELA:
			if (exceed_limit
			    ((v - (Elf32_Addr) loc), 0x000000ff, module, rel,
			     relindex, i))
				return -ENOEXEC;
			do_reloc32(v - (Elf32_Addr) loc, loc, 0x000001ff, 1,
				   0xffffff00, 0, NEED_SWAP);
			break;

		case R_NDS32_SDA15S3_RELA:
		case R_NDS32_SDA15S2_RELA:
		case R_NDS32_SDA15S1_RELA:
		case R_NDS32_SDA15S0_RELA:
			pr_err("%s: unsupported relocation type %d.\n",
			       module->name, ELF32_R_TYPE(rel->r_info));
			pr_err
			    ("Small data section access doesn't work in the kernel space; "
			     "please rebuild the kernel module with gcc option -mcmodel=large.\n");
			pr_err("section %d reloc %d offset 0x%x size %d\n",
			       relindex, i, rel->r_offset, dstsec->sh_size);
			break;

		default:
			pr_err("%s: unsupported relocation type %d.\n",
			       module->name, ELF32_R_TYPE(rel->r_info));
			pr_err("section %d reloc %d offset 0x%x size %d\n",
			       relindex, i, rel->r_offset, dstsec->sh_size);
		}
	}
	return 0;
}

int
module_finalize(const Elf32_Ehdr * hdr, const Elf_Shdr * sechdrs,
		struct module *module)
{
	return 0;
}

void module_arch_cleanup(struct module *mod)
{
}
-25
arch/nds32/kernel/nds32_ksyms.c
··· 1 - // SPDX-License-Identifier: GPL-2.0 2 - // Copyright (C) 2005-2017 Andes Technology Corporation 3 - 4 - #include <linux/module.h> 5 - #include <linux/string.h> 6 - #include <linux/delay.h> 7 - #include <linux/in6.h> 8 - #include <linux/syscalls.h> 9 - #include <linux/uaccess.h> 10 - 11 - #include <asm/checksum.h> 12 - #include <asm/io.h> 13 - #include <asm/ftrace.h> 14 - #include <asm/proc-fns.h> 15 - 16 - /* mem functions */ 17 - EXPORT_SYMBOL(memset); 18 - EXPORT_SYMBOL(memcpy); 19 - EXPORT_SYMBOL(memmove); 20 - EXPORT_SYMBOL(memzero); 21 - 22 - /* user mem (segment) */ 23 - EXPORT_SYMBOL(__arch_copy_from_user); 24 - EXPORT_SYMBOL(__arch_copy_to_user); 25 - EXPORT_SYMBOL(__arch_clear_user);
-1500
arch/nds32/kernel/perf_event_cpu.c
··· 1 - // SPDX-License-Identifier: GPL-2.0 2 - /* 3 - * Copyright (C) 2008-2017 Andes Technology Corporation 4 - * 5 - * Reference ARMv7: Jean Pihet <jpihet@mvista.com> 6 - * 2010 (c) MontaVista Software, LLC. 7 - */ 8 - 9 - #include <linux/perf_event.h> 10 - #include <linux/bitmap.h> 11 - #include <linux/export.h> 12 - #include <linux/kernel.h> 13 - #include <linux/of.h> 14 - #include <linux/platform_device.h> 15 - #include <linux/slab.h> 16 - #include <linux/spinlock.h> 17 - #include <linux/pm_runtime.h> 18 - #include <linux/ftrace.h> 19 - #include <linux/uaccess.h> 20 - #include <linux/sched/clock.h> 21 - #include <linux/percpu-defs.h> 22 - 23 - #include <asm/pmu.h> 24 - #include <asm/irq_regs.h> 25 - #include <asm/nds32.h> 26 - #include <asm/stacktrace.h> 27 - #include <asm/perf_event.h> 28 - #include <nds32_intrinsic.h> 29 - 30 - /* Set at runtime when we know what CPU type we are. */ 31 - static struct nds32_pmu *cpu_pmu; 32 - 33 - static DEFINE_PER_CPU(struct pmu_hw_events, cpu_hw_events); 34 - static void nds32_pmu_start(struct nds32_pmu *cpu_pmu); 35 - static void nds32_pmu_stop(struct nds32_pmu *cpu_pmu); 36 - static struct platform_device_id cpu_pmu_plat_device_ids[] = { 37 - {.name = "nds32-pfm"}, 38 - {}, 39 - }; 40 - 41 - static int nds32_pmu_map_cache_event(const unsigned int (*cache_map) 42 - [PERF_COUNT_HW_CACHE_MAX] 43 - [PERF_COUNT_HW_CACHE_OP_MAX] 44 - [PERF_COUNT_HW_CACHE_RESULT_MAX], u64 config) 45 - { 46 - unsigned int cache_type, cache_op, cache_result, ret; 47 - 48 - cache_type = (config >> 0) & 0xff; 49 - if (cache_type >= PERF_COUNT_HW_CACHE_MAX) 50 - return -EINVAL; 51 - 52 - cache_op = (config >> 8) & 0xff; 53 - if (cache_op >= PERF_COUNT_HW_CACHE_OP_MAX) 54 - return -EINVAL; 55 - 56 - cache_result = (config >> 16) & 0xff; 57 - if (cache_result >= PERF_COUNT_HW_CACHE_RESULT_MAX) 58 - return -EINVAL; 59 - 60 - ret = (int)(*cache_map)[cache_type][cache_op][cache_result]; 61 - 62 - if (ret == CACHE_OP_UNSUPPORTED) 63 - return -ENOENT; 64 
- 65 - return ret; 66 - } 67 - 68 - static int 69 - nds32_pmu_map_hw_event(const unsigned int (*event_map)[PERF_COUNT_HW_MAX], 70 - u64 config) 71 - { 72 - int mapping; 73 - 74 - if (config >= PERF_COUNT_HW_MAX) 75 - return -ENOENT; 76 - 77 - mapping = (*event_map)[config]; 78 - return mapping == HW_OP_UNSUPPORTED ? -ENOENT : mapping; 79 - } 80 - 81 - static int nds32_pmu_map_raw_event(u32 raw_event_mask, u64 config) 82 - { 83 - int ev_type = (int)(config & raw_event_mask); 84 - int idx = config >> 8; 85 - 86 - switch (idx) { 87 - case 0: 88 - ev_type = PFM_OFFSET_MAGIC_0 + ev_type; 89 - if (ev_type >= SPAV3_0_SEL_LAST || ev_type <= SPAV3_0_SEL_BASE) 90 - return -ENOENT; 91 - break; 92 - case 1: 93 - ev_type = PFM_OFFSET_MAGIC_1 + ev_type; 94 - if (ev_type >= SPAV3_1_SEL_LAST || ev_type <= SPAV3_1_SEL_BASE) 95 - return -ENOENT; 96 - break; 97 - case 2: 98 - ev_type = PFM_OFFSET_MAGIC_2 + ev_type; 99 - if (ev_type >= SPAV3_2_SEL_LAST || ev_type <= SPAV3_2_SEL_BASE) 100 - return -ENOENT; 101 - break; 102 - default: 103 - return -ENOENT; 104 - } 105 - 106 - return ev_type; 107 - } 108 - 109 - int 110 - nds32_pmu_map_event(struct perf_event *event, 111 - const unsigned int (*event_map)[PERF_COUNT_HW_MAX], 112 - const unsigned int (*cache_map) 113 - [PERF_COUNT_HW_CACHE_MAX] 114 - [PERF_COUNT_HW_CACHE_OP_MAX] 115 - [PERF_COUNT_HW_CACHE_RESULT_MAX], u32 raw_event_mask) 116 - { 117 - u64 config = event->attr.config; 118 - 119 - switch (event->attr.type) { 120 - case PERF_TYPE_HARDWARE: 121 - return nds32_pmu_map_hw_event(event_map, config); 122 - case PERF_TYPE_HW_CACHE: 123 - return nds32_pmu_map_cache_event(cache_map, config); 124 - case PERF_TYPE_RAW: 125 - return nds32_pmu_map_raw_event(raw_event_mask, config); 126 - } 127 - 128 - return -ENOENT; 129 - } 130 - 131 - static int nds32_spav3_map_event(struct perf_event *event) 132 - { 133 - return nds32_pmu_map_event(event, &nds32_pfm_perf_map, 134 - &nds32_pfm_perf_cache_map, SOFTWARE_EVENT_MASK); 135 - } 136 - 137 - 
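nds32_pmu_map_cache_event above unpacks attr.config for PERF_TYPE_HW_CACHE events as three byte-wide fields: cache type in bits 0-7, operation in bits 8-15, result in bits 16-23, which is the standard perf encoding. A small Python sketch of the same decode (the *_MAX bounds shown are the values from the perf uapi headers, included here only for illustration):

```python
PERF_COUNT_HW_CACHE_MAX = 7         # L1D, L1I, LL, DTLB, ITLB, BPU, NODE
PERF_COUNT_HW_CACHE_OP_MAX = 3      # read, write, prefetch
PERF_COUNT_HW_CACHE_RESULT_MAX = 2  # access, miss

def decode_cache_config(config):
    """Unpack attr.config the way nds32_pmu_map_cache_event does:
    byte 0 = cache type, byte 1 = op, byte 2 = result."""
    cache_type = (config >> 0) & 0xff
    cache_op = (config >> 8) & 0xff
    cache_result = (config >> 16) & 0xff
    if cache_type >= PERF_COUNT_HW_CACHE_MAX:
        raise ValueError("bad cache type")
    if cache_op >= PERF_COUNT_HW_CACHE_OP_MAX:
        raise ValueError("bad cache op")
    if cache_result >= PERF_COUNT_HW_CACHE_RESULT_MAX:
        raise ValueError("bad cache result")
    return cache_type, cache_op, cache_result

# L1D read-miss: type 0, op 0, result 1
assert decode_cache_config(0x010000) == (0, 0, 1)
```

After this bounds check, the kernel code indexes the per-CPU cache_map table with the three fields and rejects combinations marked CACHE_OP_UNSUPPORTED with -ENOENT.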
static inline u32 nds32_pfm_getreset_flags(void) 138 - { 139 - /* Read overflow status */ 140 - u32 val = __nds32__mfsr(NDS32_SR_PFM_CTL); 141 - u32 old_val = val; 142 - 143 - /* Write the overflow bits to clear the status; keep all other bits 0 */ 144 - u32 ov_flag = PFM_CTL_OVF[0] | PFM_CTL_OVF[1] | PFM_CTL_OVF[2]; 145 - 146 - __nds32__mtsr(val | ov_flag, NDS32_SR_PFM_CTL); 147 - 148 - return old_val; 149 - } 150 - 151 - static inline int nds32_pfm_has_overflowed(u32 pfm) 152 - { 153 - u32 ov_flag = PFM_CTL_OVF[0] | PFM_CTL_OVF[1] | PFM_CTL_OVF[2]; 154 - 155 - return pfm & ov_flag; 156 - } 157 - 158 - static inline int nds32_pfm_counter_has_overflowed(u32 pfm, int idx) 159 - { 160 - u32 mask = 0; 161 - 162 - switch (idx) { 163 - case 0: 164 - mask = PFM_CTL_OVF[0]; 165 - break; 166 - case 1: 167 - mask = PFM_CTL_OVF[1]; 168 - break; 169 - case 2: 170 - mask = PFM_CTL_OVF[2]; 171 - break; 172 - default: 173 - pr_err("%s index wrong\n", __func__); 174 - break; 175 - } 176 - return pfm & mask; 177 - } 178 - 179 - /* 180 - * Set the next IRQ period, based on the hwc->period_left value. 
181 - * To be called with the event disabled in hw: 182 - */ 183 - int nds32_pmu_event_set_period(struct perf_event *event) 184 - { 185 - struct nds32_pmu *nds32_pmu = to_nds32_pmu(event->pmu); 186 - struct hw_perf_event *hwc = &event->hw; 187 - s64 left = local64_read(&hwc->period_left); 188 - s64 period = hwc->sample_period; 189 - int ret = 0; 190 - 191 - /* The period may have been changed by PERF_EVENT_IOC_PERIOD */ 192 - if (unlikely(period != hwc->last_period)) 193 - left = period - (hwc->last_period - left); 194 - 195 - if (unlikely(left <= -period)) { 196 - left = period; 197 - local64_set(&hwc->period_left, left); 198 - hwc->last_period = period; 199 - ret = 1; 200 - } 201 - 202 - if (unlikely(left <= 0)) { 203 - left += period; 204 - local64_set(&hwc->period_left, left); 205 - hwc->last_period = period; 206 - ret = 1; 207 - } 208 - 209 - if (left > (s64)nds32_pmu->max_period) 210 - left = nds32_pmu->max_period; 211 - 212 - /* 213 - * The hw event starts counting from this event offset, 214 - * mark it to be able to extract future "deltas": 215 - */ 216 - local64_set(&hwc->prev_count, (u64)(-left)); 217 - 218 - nds32_pmu->write_counter(event, (u64)(-left) & nds32_pmu->max_period); 219 - 220 - perf_event_update_userpage(event); 221 - 222 - return ret; 223 - } 224 - 225 - static irqreturn_t nds32_pmu_handle_irq(int irq_num, void *dev) 226 - { 227 - u32 pfm; 228 - struct perf_sample_data data; 229 - struct nds32_pmu *cpu_pmu = (struct nds32_pmu *)dev; 230 - struct pmu_hw_events *cpuc = cpu_pmu->get_hw_events(); 231 - struct pt_regs *regs; 232 - int idx; 233 - /* 234 - * Get and reset the IRQ flags 235 - */ 236 - pfm = nds32_pfm_getreset_flags(); 237 - 238 - /* 239 - * Did an overflow occur? 
240 - */ 241 - if (!nds32_pfm_has_overflowed(pfm)) 242 - return IRQ_NONE; 243 - 244 - /* 245 - * Handle the counter(s) overflow(s) 246 - */ 247 - regs = get_irq_regs(); 248 - 249 - nds32_pmu_stop(cpu_pmu); 250 - for (idx = 0; idx < cpu_pmu->num_events; ++idx) { 251 - struct perf_event *event = cpuc->events[idx]; 252 - struct hw_perf_event *hwc; 253 - 254 - /* Ignore if we don't have an event. */ 255 - if (!event) 256 - continue; 257 - 258 - /* 259 - * We have a single interrupt for all counters. Check that 260 - * each counter has overflowed before we process it. 261 - */ 262 - if (!nds32_pfm_counter_has_overflowed(pfm, idx)) 263 - continue; 264 - 265 - hwc = &event->hw; 266 - nds32_pmu_event_update(event); 267 - perf_sample_data_init(&data, 0, hwc->last_period); 268 - if (!nds32_pmu_event_set_period(event)) 269 - continue; 270 - 271 - if (perf_event_overflow(event, &data, regs)) 272 - cpu_pmu->disable(event); 273 - } 274 - nds32_pmu_start(cpu_pmu); 275 - /* 276 - * Handle the pending perf events. 277 - * 278 - * Note: this call *must* be run with interrupts disabled. For 279 - * platforms that can have the PMU interrupts raised as an NMI, this 280 - * will not work. 281 - */ 282 - irq_work_run(); 283 - 284 - return IRQ_HANDLED; 285 - } 286 - 287 - static inline int nds32_pfm_counter_valid(struct nds32_pmu *cpu_pmu, int idx) 288 - { 289 - return ((idx >= 0) && (idx < cpu_pmu->num_events)); 290 - } 291 - 292 - static inline int nds32_pfm_disable_counter(int idx) 293 - { 294 - unsigned int val = __nds32__mfsr(NDS32_SR_PFM_CTL); 295 - u32 mask = 0; 296 - 297 - mask = PFM_CTL_EN[idx]; 298 - val &= ~mask; 299 - val &= ~(PFM_CTL_OVF[0] | PFM_CTL_OVF[1] | PFM_CTL_OVF[2]); 300 - __nds32__mtsr_isb(val, NDS32_SR_PFM_CTL); 301 - return idx; 302 - } 303 - 304 - /* 305 - * Add an event filter to a given event. 
306 - */ 307 - static int nds32_pmu_set_event_filter(struct hw_perf_event *event, 308 - struct perf_event_attr *attr) 309 - { 310 - unsigned long config_base = 0; 311 - int idx = event->idx; 312 - unsigned long no_kernel_tracing = 0; 313 - unsigned long no_user_tracing = 0; 314 - /* If index is -1, do not do anything */ 315 - if (idx == -1) 316 - return 0; 317 - 318 - no_kernel_tracing = PFM_CTL_KS[idx]; 319 - no_user_tracing = PFM_CTL_KU[idx]; 320 - /* 321 - * Default: enable both kernel and user mode tracing. 322 - */ 323 - if (attr->exclude_user) 324 - config_base |= no_user_tracing; 325 - 326 - if (attr->exclude_kernel) 327 - config_base |= no_kernel_tracing; 328 - 329 - /* 330 - * Install the filter into config_base as this is used to 331 - * construct the event type. 332 - */ 333 - event->config_base |= config_base; 334 - return 0; 335 - } 336 - 337 - static inline void nds32_pfm_write_evtsel(int idx, u32 evnum) 338 - { 339 - u32 offset = 0; 340 - u32 ori_val = __nds32__mfsr(NDS32_SR_PFM_CTL); 341 - u32 ev_mask = 0; 342 - u32 no_kernel_mask = 0; 343 - u32 no_user_mask = 0; 344 - u32 val; 345 - 346 - offset = PFM_CTL_OFFSEL[idx]; 347 - /* Clear previous mode selection, and write new one */ 348 - no_kernel_mask = PFM_CTL_KS[idx]; 349 - no_user_mask = PFM_CTL_KU[idx]; 350 - ori_val &= ~no_kernel_mask; 351 - ori_val &= ~no_user_mask; 352 - if (evnum & no_kernel_mask) 353 - ori_val |= no_kernel_mask; 354 - 355 - if (evnum & no_user_mask) 356 - ori_val |= no_user_mask; 357 - 358 - /* Clear previous event selection */ 359 - ev_mask = PFM_CTL_SEL[idx]; 360 - ori_val &= ~ev_mask; 361 - evnum &= SOFTWARE_EVENT_MASK; 362 - 363 - /* undo the linear mapping */ 364 - evnum = get_converted_evet_hw_num(evnum); 365 - val = ori_val | (evnum << offset); 366 - val &= ~(PFM_CTL_OVF[0] | PFM_CTL_OVF[1] | PFM_CTL_OVF[2]); 367 - __nds32__mtsr_isb(val, NDS32_SR_PFM_CTL); 368 - } 369 - 370 - static inline int nds32_pfm_enable_counter(int idx) 371 - { 372 - unsigned int val = 
__nds32__mfsr(NDS32_SR_PFM_CTL); 373 - u32 mask = 0; 374 - 375 - mask = PFM_CTL_EN[idx]; 376 - val |= mask; 377 - val &= ~(PFM_CTL_OVF[0] | PFM_CTL_OVF[1] | PFM_CTL_OVF[2]); 378 - __nds32__mtsr_isb(val, NDS32_SR_PFM_CTL); 379 - return idx; 380 - } 381 - 382 - static inline int nds32_pfm_enable_intens(int idx) 383 - { 384 - unsigned int val = __nds32__mfsr(NDS32_SR_PFM_CTL); 385 - u32 mask = 0; 386 - 387 - mask = PFM_CTL_IE[idx]; 388 - val |= mask; 389 - val &= ~(PFM_CTL_OVF[0] | PFM_CTL_OVF[1] | PFM_CTL_OVF[2]); 390 - __nds32__mtsr_isb(val, NDS32_SR_PFM_CTL); 391 - return idx; 392 - } 393 - 394 - static inline int nds32_pfm_disable_intens(int idx) 395 - { 396 - unsigned int val = __nds32__mfsr(NDS32_SR_PFM_CTL); 397 - u32 mask = 0; 398 - 399 - mask = PFM_CTL_IE[idx]; 400 - val &= ~mask; 401 - val &= ~(PFM_CTL_OVF[0] | PFM_CTL_OVF[1] | PFM_CTL_OVF[2]); 402 - __nds32__mtsr_isb(val, NDS32_SR_PFM_CTL); 403 - return idx; 404 - } 405 - 406 - static int event_requires_mode_exclusion(struct perf_event_attr *attr) 407 - { 408 - /* Other modes NDS32 does not support */ 409 - return attr->exclude_user || attr->exclude_kernel; 410 - } 411 - 412 - static void nds32_pmu_enable_event(struct perf_event *event) 413 - { 414 - unsigned long flags; 415 - unsigned int evnum = 0; 416 - struct hw_perf_event *hwc = &event->hw; 417 - struct nds32_pmu *cpu_pmu = to_nds32_pmu(event->pmu); 418 - struct pmu_hw_events *events = cpu_pmu->get_hw_events(); 419 - int idx = hwc->idx; 420 - 421 - if (!nds32_pfm_counter_valid(cpu_pmu, idx)) { 422 - pr_err("CPU enabling wrong pfm counter IRQ enable\n"); 423 - return; 424 - } 425 - 426 - /* 427 - * Enable counter and interrupt, and set the counter to count 428 - * the event that we're interested in. 429 - */ 430 - raw_spin_lock_irqsave(&events->pmu_lock, flags); 431 - 432 - /* 433 - * Disable counter 434 - */ 435 - nds32_pfm_disable_counter(idx); 436 - 437 - /* 438 - * Check whether we need to exclude the counter from certain modes. 
439 - */ 440 - if ((!cpu_pmu->set_event_filter || 441 - cpu_pmu->set_event_filter(hwc, &event->attr)) && 442 - event_requires_mode_exclusion(&event->attr)) { 443 - pr_notice 444 - ("NDS32 performance counters do not support mode exclusion\n"); 445 - hwc->config_base = 0; 446 - } 447 - /* Write event */ 448 - evnum = hwc->config_base; 449 - nds32_pfm_write_evtsel(idx, evnum); 450 - 451 - /* 452 - * Enable interrupt for this counter 453 - */ 454 - nds32_pfm_enable_intens(idx); 455 - 456 - /* 457 - * Enable counter 458 - */ 459 - nds32_pfm_enable_counter(idx); 460 - 461 - raw_spin_unlock_irqrestore(&events->pmu_lock, flags); 462 - } 463 - 464 - static void nds32_pmu_disable_event(struct perf_event *event) 465 - { 466 - unsigned long flags; 467 - struct hw_perf_event *hwc = &event->hw; 468 - struct nds32_pmu *cpu_pmu = to_nds32_pmu(event->pmu); 469 - struct pmu_hw_events *events = cpu_pmu->get_hw_events(); 470 - int idx = hwc->idx; 471 - 472 - if (!nds32_pfm_counter_valid(cpu_pmu, idx)) { 473 - pr_err("CPU disabling wrong pfm counter IRQ enable %d\n", idx); 474 - return; 475 - } 476 - 477 - /* 478 - * Disable counter and interrupt 479 - */ 480 - raw_spin_lock_irqsave(&events->pmu_lock, flags); 481 - 482 - /* 483 - * Disable counter 484 - */ 485 - nds32_pfm_disable_counter(idx); 486 - 487 - /* 488 - * Disable interrupt for this counter 489 - */ 490 - nds32_pfm_disable_intens(idx); 491 - 492 - raw_spin_unlock_irqrestore(&events->pmu_lock, flags); 493 - } 494 - 495 - static inline u32 nds32_pmu_read_counter(struct perf_event *event) 496 - { 497 - struct nds32_pmu *cpu_pmu = to_nds32_pmu(event->pmu); 498 - struct hw_perf_event *hwc = &event->hw; 499 - int idx = hwc->idx; 500 - u32 count = 0; 501 - 502 - if (!nds32_pfm_counter_valid(cpu_pmu, idx)) { 503 - pr_err("CPU reading wrong counter %d\n", idx); 504 - } else { 505 - switch (idx) { 506 - case PFMC0: 507 - count = __nds32__mfsr(NDS32_SR_PFMC0); 508 - break; 509 - case PFMC1: 510 - count = __nds32__mfsr(NDS32_SR_PFMC1); 
511 - break; 512 - case PFMC2: 513 - count = __nds32__mfsr(NDS32_SR_PFMC2); 514 - break; 515 - default: 516 - pr_err 517 - ("%s: CPU has no performance counters %d\n", 518 - __func__, idx); 519 - } 520 - } 521 - return count; 522 - } 523 - 524 - static inline void nds32_pmu_write_counter(struct perf_event *event, u32 value) 525 - { 526 - struct nds32_pmu *cpu_pmu = to_nds32_pmu(event->pmu); 527 - struct hw_perf_event *hwc = &event->hw; 528 - int idx = hwc->idx; 529 - 530 - if (!nds32_pfm_counter_valid(cpu_pmu, idx)) { 531 - pr_err("CPU writing wrong counter %d\n", idx); 532 - } else { 533 - switch (idx) { 534 - case PFMC0: 535 - __nds32__mtsr_isb(value, NDS32_SR_PFMC0); 536 - break; 537 - case PFMC1: 538 - __nds32__mtsr_isb(value, NDS32_SR_PFMC1); 539 - break; 540 - case PFMC2: 541 - __nds32__mtsr_isb(value, NDS32_SR_PFMC2); 542 - break; 543 - default: 544 - pr_err 545 - ("%s: CPU has no performance counters %d\n", 546 - __func__, idx); 547 - } 548 - } 549 - } 550 - 551 - static int nds32_pmu_get_event_idx(struct pmu_hw_events *cpuc, 552 - struct perf_event *event) 553 - { 554 - int idx; 555 - struct hw_perf_event *hwc = &event->hw; 556 - /* 557 - * The current implementation maps cycles, instruction count and cache-miss 558 - * to specific counters. 559 - * However, more than one of the 3 counters is able to count these events. 560 - * 561 - * 562 - * SOFTWARE_EVENT_MASK is the mask for getting the event num; 563 - * this is defined by Jia-Rung, and you can change the policies. 564 - * However, do not exceed 8 bits. This is hardware specific. 565 - * The last number is SPAv3_2_SEL_LAST. 
566 - */ 567 - unsigned long evtype = hwc->config_base & SOFTWARE_EVENT_MASK; 568 - 569 - idx = get_converted_event_idx(evtype); 570 - /* 571 - * Try to get the counter for the corresponding event 572 - */ 573 - if (evtype == SPAV3_0_SEL_TOTAL_CYCLES) { 574 - if (!test_and_set_bit(idx, cpuc->used_mask)) 575 - return idx; 576 - if (!test_and_set_bit(NDS32_IDX_COUNTER0, cpuc->used_mask)) 577 - return NDS32_IDX_COUNTER0; 578 - if (!test_and_set_bit(NDS32_IDX_COUNTER1, cpuc->used_mask)) 579 - return NDS32_IDX_COUNTER1; 580 - } else if (evtype == SPAV3_1_SEL_COMPLETED_INSTRUCTION) { 581 - if (!test_and_set_bit(idx, cpuc->used_mask)) 582 - return idx; 583 - else if (!test_and_set_bit(NDS32_IDX_COUNTER1, cpuc->used_mask)) 584 - return NDS32_IDX_COUNTER1; 585 - else if (!test_and_set_bit 586 - (NDS32_IDX_CYCLE_COUNTER, cpuc->used_mask)) 587 - return NDS32_IDX_CYCLE_COUNTER; 588 - } else { 589 - if (!test_and_set_bit(idx, cpuc->used_mask)) 590 - return idx; 591 - } 592 - return -EAGAIN; 593 - } 594 - 595 - static void nds32_pmu_start(struct nds32_pmu *cpu_pmu) 596 - { 597 - unsigned long flags; 598 - unsigned int val; 599 - struct pmu_hw_events *events = cpu_pmu->get_hw_events(); 600 - 601 - raw_spin_lock_irqsave(&events->pmu_lock, flags); 602 - 603 - /* Enable all counters; NDS PFM has 3 counters */ 604 - val = __nds32__mfsr(NDS32_SR_PFM_CTL); 605 - val |= (PFM_CTL_EN[0] | PFM_CTL_EN[1] | PFM_CTL_EN[2]); 606 - val &= ~(PFM_CTL_OVF[0] | PFM_CTL_OVF[1] | PFM_CTL_OVF[2]); 607 - __nds32__mtsr_isb(val, NDS32_SR_PFM_CTL); 608 - 609 - raw_spin_unlock_irqrestore(&events->pmu_lock, flags); 610 - } 611 - 612 - static void nds32_pmu_stop(struct nds32_pmu *cpu_pmu) 613 - { 614 - unsigned long flags; 615 - unsigned int val; 616 - struct pmu_hw_events *events = cpu_pmu->get_hw_events(); 617 - 618 - raw_spin_lock_irqsave(&events->pmu_lock, flags); 619 - 620 - /* Disable all counters; NDS PFM has 3 counters */ 621 - val = __nds32__mfsr(NDS32_SR_PFM_CTL); 622 - val &= ~(PFM_CTL_EN[0] | 
PFM_CTL_EN[1] | PFM_CTL_EN[2]); 623 - val &= ~(PFM_CTL_OVF[0] | PFM_CTL_OVF[1] | PFM_CTL_OVF[2]); 624 - __nds32__mtsr_isb(val, NDS32_SR_PFM_CTL); 625 - 626 - raw_spin_unlock_irqrestore(&events->pmu_lock, flags); 627 - } 628 - 629 - static void nds32_pmu_reset(void *info) 630 - { 631 - u32 val = 0; 632 - 633 - val |= (PFM_CTL_OVF[0] | PFM_CTL_OVF[1] | PFM_CTL_OVF[2]); 634 - __nds32__mtsr(val, NDS32_SR_PFM_CTL); 635 - __nds32__mtsr(0, NDS32_SR_PFM_CTL); 636 - __nds32__mtsr(0, NDS32_SR_PFMC0); 637 - __nds32__mtsr(0, NDS32_SR_PFMC1); 638 - __nds32__mtsr(0, NDS32_SR_PFMC2); 639 - } 640 - 641 - static void nds32_pmu_init(struct nds32_pmu *cpu_pmu) 642 - { 643 - cpu_pmu->handle_irq = nds32_pmu_handle_irq; 644 - cpu_pmu->enable = nds32_pmu_enable_event; 645 - cpu_pmu->disable = nds32_pmu_disable_event; 646 - cpu_pmu->read_counter = nds32_pmu_read_counter; 647 - cpu_pmu->write_counter = nds32_pmu_write_counter; 648 - cpu_pmu->get_event_idx = nds32_pmu_get_event_idx; 649 - cpu_pmu->start = nds32_pmu_start; 650 - cpu_pmu->stop = nds32_pmu_stop; 651 - cpu_pmu->reset = nds32_pmu_reset; 652 - cpu_pmu->max_period = 0xFFFFFFFF; /* Maximum counts */ 653 - } 654 - 655 - static u32 nds32_read_num_pfm_events(void) 656 - { 657 - /* NDS32 SPAv3 PMU supports 3 counters */ 658 - return 3; 659 - } 660 - 661 - static int device_pmu_init(struct nds32_pmu *cpu_pmu) 662 - { 663 - nds32_pmu_init(cpu_pmu); 664 - /* 665 - * This name should be a device-specific name, whatever you like :) 666 - * I think "PMU" will be a good generic name. 667 - */ 668 - cpu_pmu->name = "nds32v3-pmu"; 669 - cpu_pmu->map_event = nds32_spav3_map_event; 670 - cpu_pmu->num_events = nds32_read_num_pfm_events(); 671 - cpu_pmu->set_event_filter = nds32_pmu_set_event_filter; 672 - return 0; 673 - } 674 - 675 - /* 676 - * CPU PMU identification and probing. 
677 - */ 678 - static int probe_current_pmu(struct nds32_pmu *pmu) 679 - { 680 - int ret; 681 - 682 - get_cpu(); 683 - ret = -ENODEV; 684 - /* 685 - * If there are various CPU types, each with its own PMU, 686 - * initialize with 687 - * the corresponding one 688 - */ 689 - device_pmu_init(pmu); 690 - put_cpu(); 691 - return ret; 692 - } 693 - 694 - static void nds32_pmu_enable(struct pmu *pmu) 695 - { 696 - struct nds32_pmu *nds32_pmu = to_nds32_pmu(pmu); 697 - struct pmu_hw_events *hw_events = nds32_pmu->get_hw_events(); 698 - int enabled = bitmap_weight(hw_events->used_mask, 699 - nds32_pmu->num_events); 700 - 701 - if (enabled) 702 - nds32_pmu->start(nds32_pmu); 703 - } 704 - 705 - static void nds32_pmu_disable(struct pmu *pmu) 706 - { 707 - struct nds32_pmu *nds32_pmu = to_nds32_pmu(pmu); 708 - 709 - nds32_pmu->stop(nds32_pmu); 710 - } 711 - 712 - static void nds32_pmu_release_hardware(struct nds32_pmu *nds32_pmu) 713 - { 714 - nds32_pmu->free_irq(nds32_pmu); 715 - pm_runtime_put_sync(&nds32_pmu->plat_device->dev); 716 - } 717 - 718 - static irqreturn_t nds32_pmu_dispatch_irq(int irq, void *dev) 719 - { 720 - struct nds32_pmu *nds32_pmu = (struct nds32_pmu *)dev; 721 - int ret; 722 - u64 start_clock, finish_clock; 723 - 724 - start_clock = local_clock(); 725 - ret = nds32_pmu->handle_irq(irq, dev); 726 - finish_clock = local_clock(); 727 - 728 - perf_sample_event_took(finish_clock - start_clock); 729 - return ret; 730 - } 731 - 732 - static int nds32_pmu_reserve_hardware(struct nds32_pmu *nds32_pmu) 733 - { 734 - int err; 735 - struct platform_device *pmu_device = nds32_pmu->plat_device; 736 - 737 - if (!pmu_device) 738 - return -ENODEV; 739 - 740 - pm_runtime_get_sync(&pmu_device->dev); 741 - err = nds32_pmu->request_irq(nds32_pmu, nds32_pmu_dispatch_irq); 742 - if (err) { 743 - nds32_pmu_release_hardware(nds32_pmu); 744 - return err; 745 - } 746 - 747 - return 0; 748 - } 749 - 750 - static int 751 - validate_event(struct pmu *pmu, struct pmu_hw_events *hw_events, 
752 - struct perf_event *event) 753 - { 754 - struct nds32_pmu *nds32_pmu = to_nds32_pmu(event->pmu); 755 - 756 - if (is_software_event(event)) 757 - return 1; 758 - 759 - if (event->pmu != pmu) 760 - return 0; 761 - 762 - if (event->state < PERF_EVENT_STATE_OFF) 763 - return 1; 764 - 765 - if (event->state == PERF_EVENT_STATE_OFF && !event->attr.enable_on_exec) 766 - return 1; 767 - 768 - return nds32_pmu->get_event_idx(hw_events, event) >= 0; 769 - } 770 - 771 - static int validate_group(struct perf_event *event) 772 - { 773 - struct perf_event *sibling, *leader = event->group_leader; 774 - struct pmu_hw_events fake_pmu; 775 - DECLARE_BITMAP(fake_used_mask, MAX_COUNTERS); 776 - /* 777 - * Initialize the fake PMU. We only need to populate the 778 - * used_mask for the purposes of validation. 779 - */ 780 - memset(fake_used_mask, 0, sizeof(fake_used_mask)); 781 - 782 - if (!validate_event(event->pmu, &fake_pmu, leader)) 783 - return -EINVAL; 784 - 785 - for_each_sibling_event(sibling, leader) { 786 - if (!validate_event(event->pmu, &fake_pmu, sibling)) 787 - return -EINVAL; 788 - } 789 - 790 - if (!validate_event(event->pmu, &fake_pmu, event)) 791 - return -EINVAL; 792 - 793 - return 0; 794 - } 795 - 796 - static int __hw_perf_event_init(struct perf_event *event) 797 - { 798 - struct nds32_pmu *nds32_pmu = to_nds32_pmu(event->pmu); 799 - struct hw_perf_event *hwc = &event->hw; 800 - int mapping; 801 - 802 - mapping = nds32_pmu->map_event(event); 803 - 804 - if (mapping < 0) { 805 - pr_debug("event %x:%llx not supported\n", event->attr.type, 806 - event->attr.config); 807 - return mapping; 808 - } 809 - 810 - /* 811 - * We don't assign an index until we actually place the event onto 812 - * hardware. Use -1 to signify that we haven't decided where to put it 813 - * yet. For SMP systems, each core has its own PMU so we can't do any 814 - * clever allocation or constraints checking at this point. 
815 - */ 816 - hwc->idx = -1; 817 - hwc->config_base = 0; 818 - hwc->config = 0; 819 - hwc->event_base = 0; 820 - 821 - /* 822 - * Check whether we need to exclude the counter from certain modes. 823 - */ 824 - if ((!nds32_pmu->set_event_filter || 825 - nds32_pmu->set_event_filter(hwc, &event->attr)) && 826 - event_requires_mode_exclusion(&event->attr)) { 827 - pr_debug 828 - ("NDS performance counters do not support mode exclusion\n"); 829 - return -EOPNOTSUPP; 830 - } 831 - 832 - /* 833 - * Store the event encoding into the config_base field. 834 - */ 835 - hwc->config_base |= (unsigned long)mapping; 836 - 837 - if (!hwc->sample_period) { 838 - /* 839 - * For non-sampling runs, limit the sample_period to half 840 - * of the counter width. That way, the new counter value 841 - * is far less likely to overtake the previous one unless 842 - * you have some serious IRQ latency issues. 843 - */ 844 - hwc->sample_period = nds32_pmu->max_period >> 1; 845 - hwc->last_period = hwc->sample_period; 846 - local64_set(&hwc->period_left, hwc->sample_period); 847 - } 848 - 849 - if (event->group_leader != event) { 850 - if (validate_group(event) != 0) 851 - return -EINVAL; 852 - } 853 - 854 - return 0; 855 - } 856 - 857 - static int nds32_pmu_event_init(struct perf_event *event) 858 - { 859 - struct nds32_pmu *nds32_pmu = to_nds32_pmu(event->pmu); 860 - int err = 0; 861 - atomic_t *active_events = &nds32_pmu->active_events; 862 - 863 - /* does not support taken branch sampling */ 864 - if (has_branch_stack(event)) 865 - return -EOPNOTSUPP; 866 - 867 - if (nds32_pmu->map_event(event) == -ENOENT) 868 - return -ENOENT; 869 - 870 - if (!atomic_inc_not_zero(active_events)) { 871 - if (atomic_read(active_events) == 0) { 872 - /* Register irq handler */ 873 - err = nds32_pmu_reserve_hardware(nds32_pmu); 874 - } 875 - 876 - if (!err) 877 - atomic_inc(active_events); 878 - } 879 - 880 - if (err) 881 - return err; 882 - 883 - err = __hw_perf_event_init(event); 884 - 885 - return err; 886 
- } 887 - 888 - static void nds32_start(struct perf_event *event, int flags) 889 - { 890 - struct nds32_pmu *nds32_pmu = to_nds32_pmu(event->pmu); 891 - struct hw_perf_event *hwc = &event->hw; 892 - /* 893 - * NDS pmu always has to reprogram the period, so ignore 894 - * PERF_EF_RELOAD, see the comment below. 895 - */ 896 - if (flags & PERF_EF_RELOAD) 897 - WARN_ON_ONCE(!(hwc->state & PERF_HES_UPTODATE)); 898 - 899 - hwc->state = 0; 900 - /* Set the period for the event. */ 901 - nds32_pmu_event_set_period(event); 902 - 903 - nds32_pmu->enable(event); 904 - } 905 - 906 - static int nds32_pmu_add(struct perf_event *event, int flags) 907 - { 908 - struct nds32_pmu *nds32_pmu = to_nds32_pmu(event->pmu); 909 - struct pmu_hw_events *hw_events = nds32_pmu->get_hw_events(); 910 - struct hw_perf_event *hwc = &event->hw; 911 - int idx; 912 - int err = 0; 913 - 914 - perf_pmu_disable(event->pmu); 915 - 916 - /* If we don't have a space for the counter then finish early. */ 917 - idx = nds32_pmu->get_event_idx(hw_events, event); 918 - if (idx < 0) { 919 - err = idx; 920 - goto out; 921 - } 922 - 923 - /* 924 - * If there is an event in the counter we are going to use then make 925 - * sure it is disabled. 926 - */ 927 - event->hw.idx = idx; 928 - nds32_pmu->disable(event); 929 - hw_events->events[idx] = event; 930 - 931 - hwc->state = PERF_HES_STOPPED | PERF_HES_UPTODATE; 932 - if (flags & PERF_EF_START) 933 - nds32_start(event, PERF_EF_RELOAD); 934 - 935 - /* Propagate our changes to the userspace mapping. 
*/ 936 - perf_event_update_userpage(event); 937 - 938 - out: 939 - perf_pmu_enable(event->pmu); 940 - return err; 941 - } 942 - 943 - u64 nds32_pmu_event_update(struct perf_event *event) 944 - { 945 - struct nds32_pmu *nds32_pmu = to_nds32_pmu(event->pmu); 946 - struct hw_perf_event *hwc = &event->hw; 947 - u64 delta, prev_raw_count, new_raw_count; 948 - 949 - again: 950 - prev_raw_count = local64_read(&hwc->prev_count); 951 - new_raw_count = nds32_pmu->read_counter(event); 952 - 953 - if (local64_cmpxchg(&hwc->prev_count, prev_raw_count, 954 - new_raw_count) != prev_raw_count) { 955 - goto again; 956 - } 957 - /* 958 - * Whether it overflowed or not, "unsigned subtraction" 959 - * will always get the delta 960 - */ 961 - delta = (new_raw_count - prev_raw_count) & nds32_pmu->max_period; 962 - 963 - local64_add(delta, &event->count); 964 - local64_sub(delta, &hwc->period_left); 965 - 966 - return new_raw_count; 967 - } 968 - 969 - static void nds32_stop(struct perf_event *event, int flags) 970 - { 971 - struct nds32_pmu *nds32_pmu = to_nds32_pmu(event->pmu); 972 - struct hw_perf_event *hwc = &event->hw; 973 - /* 974 - * NDS pmu always has to update the counter, so ignore 975 - * PERF_EF_UPDATE, see comments in nds32_start(). 
976 - */ 977 - if (!(hwc->state & PERF_HES_STOPPED)) { 978 - nds32_pmu->disable(event); 979 - nds32_pmu_event_update(event); 980 - hwc->state |= PERF_HES_STOPPED | PERF_HES_UPTODATE; 981 - } 982 - } 983 - 984 - static void nds32_pmu_del(struct perf_event *event, int flags) 985 - { 986 - struct nds32_pmu *nds32_pmu = to_nds32_pmu(event->pmu); 987 - struct pmu_hw_events *hw_events = nds32_pmu->get_hw_events(); 988 - struct hw_perf_event *hwc = &event->hw; 989 - int idx = hwc->idx; 990 - 991 - nds32_stop(event, PERF_EF_UPDATE); 992 - hw_events->events[idx] = NULL; 993 - clear_bit(idx, hw_events->used_mask); 994 - 995 - perf_event_update_userpage(event); 996 - } 997 - 998 - static void nds32_pmu_read(struct perf_event *event) 999 - { 1000 - nds32_pmu_event_update(event); 1001 - } 1002 - 1003 - /* Please refer to SPAv3 for more hardware specific details */ 1004 - PMU_FORMAT_ATTR(event, "config:0-63"); 1005 - 1006 - static struct attribute *nds32_arch_formats_attr[] = { 1007 - &format_attr_event.attr, 1008 - NULL, 1009 - }; 1010 - 1011 - static struct attribute_group nds32_pmu_format_group = { 1012 - .name = "format", 1013 - .attrs = nds32_arch_formats_attr, 1014 - }; 1015 - 1016 - static ssize_t nds32_pmu_cpumask_show(struct device *dev, 1017 - struct device_attribute *attr, 1018 - char *buf) 1019 - { 1020 - return 0; 1021 - } 1022 - 1023 - static DEVICE_ATTR(cpus, 0444, nds32_pmu_cpumask_show, NULL); 1024 - 1025 - static struct attribute *nds32_pmu_common_attrs[] = { 1026 - &dev_attr_cpus.attr, 1027 - NULL, 1028 - }; 1029 - 1030 - static struct attribute_group nds32_pmu_common_group = { 1031 - .attrs = nds32_pmu_common_attrs, 1032 - }; 1033 - 1034 - static const struct attribute_group *nds32_pmu_attr_groups[] = { 1035 - &nds32_pmu_format_group, 1036 - &nds32_pmu_common_group, 1037 - NULL, 1038 - }; 1039 - 1040 - static void nds32_init(struct nds32_pmu *nds32_pmu) 1041 - { 1042 - atomic_set(&nds32_pmu->active_events, 0); 1043 - 1044 - nds32_pmu->pmu = (struct pmu) { 
1045 - .pmu_enable = nds32_pmu_enable,
1046 - .pmu_disable = nds32_pmu_disable,
1047 - .attr_groups = nds32_pmu_attr_groups,
1048 - .event_init = nds32_pmu_event_init,
1049 - .add = nds32_pmu_add,
1050 - .del = nds32_pmu_del,
1051 - .start = nds32_start,
1052 - .stop = nds32_stop,
1053 - .read = nds32_pmu_read,
1054 - };
1055 - }
1056 -
1057 - int nds32_pmu_register(struct nds32_pmu *nds32_pmu, int type)
1058 - {
1059 - nds32_init(nds32_pmu);
1060 - pm_runtime_enable(&nds32_pmu->plat_device->dev);
1061 - pr_info("enabled with %s PMU driver, %d counters available\n",
1062 - nds32_pmu->name, nds32_pmu->num_events);
1063 - return perf_pmu_register(&nds32_pmu->pmu, nds32_pmu->name, type);
1064 - }
1065 -
1066 - static struct pmu_hw_events *cpu_pmu_get_cpu_events(void)
1067 - {
1068 - return this_cpu_ptr(&cpu_hw_events);
1069 - }
1070 -
1071 - static int cpu_pmu_request_irq(struct nds32_pmu *cpu_pmu, irq_handler_t handler)
1072 - {
1073 - int err, irq, irqs;
1074 - struct platform_device *pmu_device = cpu_pmu->plat_device;
1075 -
1076 - if (!pmu_device)
1077 - return -ENODEV;
1078 -
1079 - irqs = min(pmu_device->num_resources, num_possible_cpus());
1080 - if (irqs < 1) {
1081 - pr_err("no irqs for PMUs defined\n");
1082 - return -ENODEV;
1083 - }
1084 -
1085 - irq = platform_get_irq(pmu_device, 0);
1086 - err = request_irq(irq, handler, IRQF_NOBALANCING, "nds32-pfm",
1087 - cpu_pmu);
1088 - if (err) {
1089 - pr_err("unable to request IRQ%d for NDS PMU counters\n",
1090 - irq);
1091 - return err;
1092 - }
1093 - return 0;
1094 - }
1095 -
1096 - static void cpu_pmu_free_irq(struct nds32_pmu *cpu_pmu)
1097 - {
1098 - int irq;
1099 - struct platform_device *pmu_device = cpu_pmu->plat_device;
1100 -
1101 - irq = platform_get_irq(pmu_device, 0);
1102 - if (irq >= 0)
1103 - free_irq(irq, cpu_pmu);
1104 - }
1105 -
1106 - static void cpu_pmu_init(struct nds32_pmu *cpu_pmu)
1107 - {
1108 - int cpu;
1109 - struct pmu_hw_events *events = &per_cpu(cpu_hw_events, cpu);
1110 -
1111 - raw_spin_lock_init(&events->pmu_lock);
1112 -
1113 - cpu_pmu->get_hw_events = cpu_pmu_get_cpu_events;
1114 - cpu_pmu->request_irq = cpu_pmu_request_irq;
1115 - cpu_pmu->free_irq = cpu_pmu_free_irq;
1116 -
1117 - /* Ensure the PMU has sane values out of reset. */
1118 - if (cpu_pmu->reset)
1119 - on_each_cpu(cpu_pmu->reset, cpu_pmu, 1);
1120 - }
1121 -
1122 - static const struct of_device_id cpu_pmu_of_device_ids[] = {
1123 - {.compatible = "andestech,nds32v3-pmu",
1124 - .data = device_pmu_init},
1125 - {},
1126 - };
1127 -
1128 - static int cpu_pmu_device_probe(struct platform_device *pdev)
1129 - {
1130 - const struct of_device_id *of_id;
1131 - int (*init_fn)(struct nds32_pmu *nds32_pmu);
1132 - struct device_node *node = pdev->dev.of_node;
1133 - struct nds32_pmu *pmu;
1134 - int ret = -ENODEV;
1135 -
1136 - if (cpu_pmu) {
1137 - pr_notice("[perf] attempt to register multiple PMU devices!\n");
1138 - return -ENOSPC;
1139 - }
1140 -
1141 - pmu = kzalloc(sizeof(*pmu), GFP_KERNEL);
1142 - if (!pmu)
1143 - return -ENOMEM;
1144 -
1145 - of_id = of_match_node(cpu_pmu_of_device_ids, pdev->dev.of_node);
1146 - if (node && of_id) {
1147 - init_fn = of_id->data;
1148 - ret = init_fn(pmu);
1149 - } else {
1150 - ret = probe_current_pmu(pmu);
1151 - }
1152 -
1153 - if (ret) {
1154 - pr_notice("[perf] failed to probe PMU!\n");
1155 - goto out_free;
1156 - }
1157 -
1158 - cpu_pmu = pmu;
1159 - cpu_pmu->plat_device = pdev;
1160 - cpu_pmu_init(cpu_pmu);
1161 - ret = nds32_pmu_register(cpu_pmu, PERF_TYPE_RAW);
1162 -
1163 - if (!ret)
1164 - return 0;
1165 -
1166 - out_free:
1167 - pr_notice("[perf] failed to register PMU devices!\n");
1168 - kfree(pmu);
1169 - return ret;
1170 - }
1171 -
1172 - static struct platform_driver cpu_pmu_driver = {
1173 - .driver = {
1174 - .name = "nds32-pfm",
1175 - .of_match_table = cpu_pmu_of_device_ids,
1176 - },
1177 - .probe = cpu_pmu_device_probe,
1178 - .id_table = cpu_pmu_plat_device_ids,
1179 - };
1180 -
1181 - static int __init register_pmu_driver(void)
1182 - {
1183 - int err = 0;
1184 -
1185 - err = platform_driver_register(&cpu_pmu_driver);
1186 - if (err)
1187 - pr_notice("[perf] PMU initialization failed\n");
1188 - else
1189 - pr_notice("[perf] PMU initialization done\n");
1190 -
1191 - return err;
1192 - }
1193 -
1194 - device_initcall(register_pmu_driver);
1195 -
1196 - /*
1197 - * References: arch/nds32/kernel/traps.c:__dump()
1198 - * You will need to know the NDS ABI first.
1199 - */
1200 - static int unwind_frame_kernel(struct stackframe *frame)
1201 - {
1202 - int graph = 0;
1203 - #ifdef CONFIG_FRAME_POINTER
1204 - /* 0x3 means misalignment */
1205 - if (!kstack_end((void *)frame->fp) &&
1206 - !((unsigned long)frame->fp & 0x3) &&
1207 - ((unsigned long)frame->fp >= TASK_SIZE)) {
1208 - /*
1209 - * The array index is based on the ABI; the graph below
1210 - * illustrates the reasons.
1211 - * Function call procedure: "smw" and "lmw" will always
1212 - * update SP and FP for you automatically.
1213 - *
1214 - * Stack Relative Address
1215 - * | | 0
1216 - * ----
1217 - * |LP| <-- SP(before smw) <-- FP(after smw) -1
1218 - * ----
1219 - * |FP| -2
1220 - * ----
1221 - * | | <-- SP(after smw) -3
1222 - */
1223 - frame->lp = ((unsigned long *)frame->fp)[-1];
1224 - frame->fp = ((unsigned long *)frame->fp)[FP_OFFSET];
1225 - /* make sure CONFIG_FUNCTION_GRAPH_TRACER is turned on */
1226 - if (__kernel_text_address(frame->lp))
1227 - frame->lp = ftrace_graph_ret_addr
1228 - (NULL, &graph, frame->lp, NULL);
1229 -
1230 - return 0;
1231 - } else {
1232 - return -EPERM;
1233 - }
1234 - #else
1235 - /*
1236 - * You can refer to arch/nds32/kernel/traps.c:__dump()
1237 - * Treat "sp" as "fp", but the "sp" is one frame ahead of "fp".
1238 - * And, the "sp" is not always correct.
1239 - *
1240 - * Stack Relative Address
1241 - * | | 0
1242 - * ----
1243 - * |LP| <-- SP(before smw) -1
1244 - * ----
1245 - * | | <-- SP(after smw) -2
1246 - * ----
1247 - */
1248 - if (!kstack_end((void *)frame->sp)) {
1249 - frame->lp = ((unsigned long *)frame->sp)[1];
1250 - /* TODO: How to deal with the case where the value in the
1251 - * first "sp" is not correct?
1252 - */
1253 - if (__kernel_text_address(frame->lp))
1254 - frame->lp = ftrace_graph_ret_addr
1255 - (tsk, &graph, frame->lp, NULL);
1256 -
1257 - frame->sp = ((unsigned long *)frame->sp) + 1;
1258 -
1259 - return 0;
1260 - } else {
1261 - return -EPERM;
1262 - }
1263 - #endif
1264 - }
1265 -
1266 - static void notrace
1267 - walk_stackframe(struct stackframe *frame,
1268 - int (*fn_record)(struct stackframe *, void *),
1269 - void *data)
1270 - {
1271 - while (1) {
1272 - int ret;
1273 -
1274 - if (fn_record(frame, data))
1275 - break;
1276 -
1277 - ret = unwind_frame_kernel(frame);
1278 - if (ret < 0)
1279 - break;
1280 - }
1281 - }
1282 -
1283 - /*
1284 - * Gets called by walk_stackframe() for every stackframe. This will be called
1285 - * whilst unwinding the stackframe and is like a subroutine return, so we use
1286 - * the PC.
1287 - */
1288 - static int callchain_trace(struct stackframe *fr, void *data)
1289 - {
1290 - struct perf_callchain_entry_ctx *entry = data;
1291 -
1292 - perf_callchain_store(entry, fr->lp);
1293 - return 0;
1294 - }
1295 -
1296 - /*
1297 - * Get the return address for a single stackframe and return a pointer to the
1298 - * next frame tail.
1299 - */
1300 - static unsigned long
1301 - user_backtrace(struct perf_callchain_entry_ctx *entry, unsigned long fp)
1302 - {
1303 - struct frame_tail buftail;
1304 - unsigned long lp = 0;
1305 - unsigned long *user_frame_tail =
1306 - (unsigned long *)(fp - (unsigned long)sizeof(buftail));
1307 -
1308 - /* Check accessibility of one struct frame_tail beyond */
1309 - if (!access_ok(user_frame_tail, sizeof(buftail)))
1310 - return 0;
1311 - if (__copy_from_user_inatomic
1312 - (&buftail, user_frame_tail, sizeof(buftail)))
1313 - return 0;
1314 -
1315 - /*
1316 - * Refer to unwind_frame_kernel() for more illustration
1317 - */
1318 - lp = buftail.stack_lp; /* ((unsigned long *)fp)[-1] */
1319 - fp = buftail.stack_fp; /* ((unsigned long *)fp)[FP_OFFSET] */
1320 - perf_callchain_store(entry, lp);
1321 - return fp;
1322 - }
1323 -
1324 - static unsigned long
1325 - user_backtrace_opt_size(struct perf_callchain_entry_ctx *entry,
1326 - unsigned long fp)
1327 - {
1328 - struct frame_tail_opt_size buftail;
1329 - unsigned long lp = 0;
1330 -
1331 - unsigned long *user_frame_tail =
1332 - (unsigned long *)(fp - (unsigned long)sizeof(buftail));
1333 -
1334 - /* Check accessibility of one struct frame_tail beyond */
1335 - if (!access_ok(user_frame_tail, sizeof(buftail)))
1336 - return 0;
1337 - if (__copy_from_user_inatomic
1338 - (&buftail, user_frame_tail, sizeof(buftail)))
1339 - return 0;
1340 -
1341 - /*
1342 - * Refer to unwind_frame_kernel() for more illustration
1343 - */
1344 - lp = buftail.stack_lp; /* ((unsigned long *)fp)[-1] */
1345 - fp = buftail.stack_fp; /* ((unsigned long *)fp)[FP_OFFSET] */
1346 -
1347 - perf_callchain_store(entry, lp);
1348 - return fp;
1349 - }
1350 -
1351 - /*
1352 - * This will be called when the target is in user mode.
1353 - * This function will only be called when we use
1354 - * "PERF_SAMPLE_CALLCHAIN" in
1355 - * kernel/events/core.c:perf_prepare_sample()
1356 - *
1357 - * How to trigger perf_callchain_[user/kernel]:
1358 - * $ perf record -e cpu-clock --call-graph fp ./program
1359 - * $ perf report --call-graph
1360 - */
1361 - unsigned long leaf_fp;
1362 - void
1363 - perf_callchain_user(struct perf_callchain_entry_ctx *entry,
1364 - struct pt_regs *regs)
1365 - {
1366 - unsigned long fp = 0;
1367 - unsigned long gp = 0;
1368 - unsigned long lp = 0;
1369 - unsigned long sp = 0;
1370 - unsigned long *user_frame_tail;
1371 -
1372 - leaf_fp = 0;
1373 -
1374 - perf_callchain_store(entry, regs->ipc);
1375 - fp = regs->fp;
1376 - gp = regs->gp;
1377 - lp = regs->lp;
1378 - sp = regs->sp;
1379 - if (entry->nr < PERF_MAX_STACK_DEPTH &&
1380 - (unsigned long)fp && !((unsigned long)fp & 0x7) && fp > sp) {
1381 - user_frame_tail =
1382 - (unsigned long *)(fp - (unsigned long)sizeof(fp));
1383 -
1384 - if (!access_ok(user_frame_tail, sizeof(fp)))
1385 - return;
1386 -
1387 - if (__copy_from_user_inatomic
1388 - (&leaf_fp, user_frame_tail, sizeof(fp)))
1389 - return;
1390 -
1391 - if (leaf_fp == lp) {
1392 - /*
1393 - * Maybe this is a non-leaf function
1394 - * with optimize for size,
1395 - * or maybe this is the function
1396 - * with optimize for size
1397 - */
1398 - struct frame_tail buftail;
1399 -
1400 - user_frame_tail =
1401 - (unsigned long *)(fp -
1402 - (unsigned long)sizeof(buftail));
1403 -
1404 - if (!access_ok(user_frame_tail, sizeof(buftail)))
1405 - return;
1406 -
1407 - if (__copy_from_user_inatomic
1408 - (&buftail, user_frame_tail, sizeof(buftail)))
1409 - return;
1410 -
1411 - if (buftail.stack_fp == gp) {
1412 - /* non-leaf function with optimize
1413 - * for size condition
1414 - */
1415 - struct frame_tail_opt_size buftail_opt_size;
1416 -
1417 - user_frame_tail =
1418 - (unsigned long *)(fp - (unsigned long)
1419 - sizeof(buftail_opt_size));
1420 -
1421 - if (!access_ok(user_frame_tail,
1422 - sizeof(buftail_opt_size)))
1423 - return;
1424 -
1425 - if (__copy_from_user_inatomic
1426 - (&buftail_opt_size, user_frame_tail,
1427 - sizeof(buftail_opt_size)))
1428 - return;
1429 -
1430 - perf_callchain_store(entry, lp);
1431 - fp = buftail_opt_size.stack_fp;
1432 -
1433 - while ((entry->nr < PERF_MAX_STACK_DEPTH) &&
1434 - (unsigned long)fp &&
1435 - !((unsigned long)fp & 0x7) &&
1436 - fp > sp) {
1437 - sp = fp;
1438 - fp = user_backtrace_opt_size(entry, fp);
1439 - }
1440 -
1441 - } else {
1442 - /* this is the function
1443 - * without optimize for size
1444 - */
1445 - fp = buftail.stack_fp;
1446 - perf_callchain_store(entry, lp);
1447 - while ((entry->nr < PERF_MAX_STACK_DEPTH) &&
1448 - (unsigned long)fp &&
1449 - !((unsigned long)fp & 0x7) &&
1450 - fp > sp) {
1451 - sp = fp;
1452 - fp = user_backtrace(entry, fp);
1453 - }
1454 - }
1455 - } else {
1456 - /* this is a leaf function */
1457 - fp = leaf_fp;
1458 - perf_callchain_store(entry, lp);
1459 -
1460 - /* previous function callchain */
1461 - while ((entry->nr < PERF_MAX_STACK_DEPTH) &&
1462 - (unsigned long)fp &&
1463 - !((unsigned long)fp & 0x7) && fp > sp) {
1464 - sp = fp;
1465 - fp = user_backtrace(entry, fp);
1466 - }
1467 - }
1468 - return;
1469 - }
1470 - }
1471 -
1472 - /* This will be called when the target is in kernel mode */
1473 - void
1474 - perf_callchain_kernel(struct perf_callchain_entry_ctx *entry,
1475 - struct pt_regs *regs)
1476 - {
1477 - struct stackframe fr;
1478 -
1479 - fr.fp = regs->fp;
1480 - fr.lp = regs->lp;
1481 - fr.sp = regs->sp;
1482 - walk_stackframe(&fr, callchain_trace, entry);
1483 - }
1484 -
1485 - unsigned long perf_instruction_pointer(struct pt_regs *regs)
1486 - {
1487 - return instruction_pointer(regs);
1488 - }
1489 -
1490 - unsigned long perf_misc_flags(struct pt_regs *regs)
1491 - {
1492 - int misc = 0;
1493 -
1494 - if (user_mode(regs))
1495 - misc |= PERF_RECORD_MISC_USER;
1496 - else
1497 - misc |= PERF_RECORD_MISC_KERNEL;
1498 -
1499 - return misc;
1500 - }
-80
arch/nds32/kernel/pm.c
··· 1 - // SPDX-License-Identifier: GPL-2.0
2 - // Copyright (C) 2008-2017 Andes Technology Corporation
3 -
4 - #include <linux/init.h>
5 - #include <linux/suspend.h>
6 - #include <linux/device.h>
7 - #include <linux/printk.h>
8 - #include <asm/suspend.h>
9 - #include <nds32_intrinsic.h>
10 -
11 - unsigned int resume_addr;
12 - unsigned int *phy_addr_sp_tmp;
13 -
14 - static void nds32_suspend2ram(void)
15 - {
16 - pgd_t *pgdv;
17 - p4d_t *p4dv;
18 - pud_t *pudv;
19 - pmd_t *pmdv;
20 - pte_t *ptev;
21 -
22 - pgdv = (pgd_t *)__va((__nds32__mfsr(NDS32_SR_L1_PPTB) &
23 - L1_PPTB_mskBASE)) + pgd_index((unsigned int)cpu_resume);
24 -
25 - p4dv = p4d_offset(pgdv, (unsigned int)cpu_resume);
26 - pudv = pud_offset(p4dv, (unsigned int)cpu_resume);
27 - pmdv = pmd_offset(pudv, (unsigned int)cpu_resume);
28 - ptev = pte_offset_map(pmdv, (unsigned int)cpu_resume);
29 -
30 - resume_addr = ((*ptev) & TLB_DATA_mskPPN)
31 - | ((unsigned int)cpu_resume & 0x00000fff);
32 -
33 - suspend2ram();
34 - }
35 -
36 - static void nds32_suspend_cpu(void)
37 - {
38 - while (!(__nds32__mfsr(NDS32_SR_INT_PEND) & wake_mask))
39 - __asm__ volatile ("standby no_wake_grant\n\t");
40 - }
41 -
42 - static int nds32_pm_valid(suspend_state_t state)
43 - {
44 - switch (state) {
45 - case PM_SUSPEND_ON:
46 - case PM_SUSPEND_STANDBY:
47 - case PM_SUSPEND_MEM:
48 - return 1;
49 - default:
50 - return 0;
51 - }
52 - }
53 -
54 - static int nds32_pm_enter(suspend_state_t state)
55 - {
56 - pr_debug("%s:state:%d\n", __func__, state);
57 - switch (state) {
58 - case PM_SUSPEND_STANDBY:
59 - nds32_suspend_cpu();
60 - return 0;
61 - case PM_SUSPEND_MEM:
62 - nds32_suspend2ram();
63 - return 0;
64 - default:
65 - return -EINVAL;
66 - }
67 - }
68 -
69 - static const struct platform_suspend_ops nds32_pm_ops = {
70 - .valid = nds32_pm_valid,
71 - .enter = nds32_pm_enter,
72 - };
73 -
74 - static int __init nds32_pm_init(void)
75 - {
76 - pr_debug("Enter %s\n", __func__);
77 - suspend_set_ops(&nds32_pm_ops);
78 - return 0;
79 - }
80 - late_initcall(nds32_pm_init);
-256
arch/nds32/kernel/process.c
··· 1 - // SPDX-License-Identifier: GPL-2.0
2 - // Copyright (C) 2005-2017 Andes Technology Corporation
3 -
4 - #include <linux/sched.h>
5 - #include <linux/sched/debug.h>
6 - #include <linux/sched/task_stack.h>
7 - #include <linux/delay.h>
8 - #include <linux/kallsyms.h>
9 - #include <linux/uaccess.h>
10 - #include <asm/elf.h>
11 - #include <asm/proc-fns.h>
12 - #include <asm/fpu.h>
13 - #include <linux/ptrace.h>
14 - #include <linux/reboot.h>
15 -
16 - #if IS_ENABLED(CONFIG_LAZY_FPU)
17 - struct task_struct *last_task_used_math;
18 - #endif
19 -
20 - extern void setup_mm_for_reboot(char mode);
21 -
22 - extern inline void arch_reset(char mode)
23 - {
24 - if (mode == 's') {
25 - /* Use cpu handler, jump to 0 */
26 - cpu_reset(0);
27 - }
28 - }
29 -
30 - void (*pm_power_off) (void);
31 - EXPORT_SYMBOL(pm_power_off);
32 -
33 - static char reboot_mode_nds32 = 'h';
34 -
35 - int __init reboot_setup(char *str)
36 - {
37 - reboot_mode_nds32 = str[0];
38 - return 1;
39 - }
40 -
41 - static int cpub_pwroff(void)
42 - {
43 - return 0;
44 - }
45 -
46 - __setup("reboot=", reboot_setup);
47 -
48 - void machine_halt(void)
49 - {
50 - cpub_pwroff();
51 - }
52 -
53 - EXPORT_SYMBOL(machine_halt);
54 -
55 - void machine_power_off(void)
56 - {
57 - if (pm_power_off)
58 - pm_power_off();
59 - }
60 -
61 - EXPORT_SYMBOL(machine_power_off);
62 -
63 - void machine_restart(char *cmd)
64 - {
65 - /*
66 - * Clean and disable cache, and turn off interrupts
67 - */
68 - cpu_proc_fin();
69 -
70 - /*
71 - * Tell the mm system that we are going to reboot -
72 - * we may need it to insert some 1:1 mappings so that
73 - * soft boot works.
74 - */
75 - setup_mm_for_reboot(reboot_mode_nds32);
76 -
77 - /* Execute kernel restart handler call chain */
78 - do_kernel_restart(cmd);
79 -
80 - /*
81 - * Now call the architecture specific reboot code.
82 - */
83 - arch_reset(reboot_mode_nds32);
84 -
85 - /*
86 - * Whoops - the architecture was unable to reboot.
87 - * Tell the user!
88 - */
89 - mdelay(1000);
90 - pr_info("Reboot failed -- System halted\n");
91 - while (1) ;
92 - }
93 -
94 - EXPORT_SYMBOL(machine_restart);
95 -
96 - void show_regs(struct pt_regs *regs)
97 - {
98 - printk("PC is at %pS\n", (void *)instruction_pointer(regs));
99 - printk("LP is at %pS\n", (void *)regs->lp);
100 - pr_info("pc : [<%08lx>] lp : [<%08lx>] %s\n"
101 - "sp : %08lx fp : %08lx gp : %08lx\n",
102 - instruction_pointer(regs),
103 - regs->lp, print_tainted(), regs->sp, regs->fp, regs->gp);
104 - pr_info("r25: %08lx r24: %08lx\n", regs->uregs[25], regs->uregs[24]);
105 -
106 - pr_info("r23: %08lx r22: %08lx r21: %08lx r20: %08lx\n",
107 - regs->uregs[23], regs->uregs[22],
108 - regs->uregs[21], regs->uregs[20]);
109 - pr_info("r19: %08lx r18: %08lx r17: %08lx r16: %08lx\n",
110 - regs->uregs[19], regs->uregs[18],
111 - regs->uregs[17], regs->uregs[16]);
112 - pr_info("r15: %08lx r14: %08lx r13: %08lx r12: %08lx\n",
113 - regs->uregs[15], regs->uregs[14],
114 - regs->uregs[13], regs->uregs[12]);
115 - pr_info("r11: %08lx r10: %08lx r9 : %08lx r8 : %08lx\n",
116 - regs->uregs[11], regs->uregs[10],
117 - regs->uregs[9], regs->uregs[8]);
118 - pr_info("r7 : %08lx r6 : %08lx r5 : %08lx r4 : %08lx\n",
119 - regs->uregs[7], regs->uregs[6], regs->uregs[5], regs->uregs[4]);
120 - pr_info("r3 : %08lx r2 : %08lx r1 : %08lx r0 : %08lx\n",
121 - regs->uregs[3], regs->uregs[2], regs->uregs[1], regs->uregs[0]);
122 - pr_info(" IRQs o%s Segment user\n",
123 - interrupts_enabled(regs) ? "n" : "ff");
124 - }
125 -
126 - EXPORT_SYMBOL(show_regs);
127 -
128 - void exit_thread(struct task_struct *tsk)
129 - {
130 - #if defined(CONFIG_FPU) && defined(CONFIG_LAZY_FPU)
131 - if (last_task_used_math == tsk)
132 - last_task_used_math = NULL;
133 - #endif
134 - }
135 -
136 - void flush_thread(void)
137 - {
138 - #if defined(CONFIG_FPU)
139 - clear_fpu(task_pt_regs(current));
140 - clear_used_math();
141 - # ifdef CONFIG_LAZY_FPU
142 - if (last_task_used_math == current)
143 - last_task_used_math = NULL;
144 - # endif
145 - #endif
146 - }
147 -
148 - DEFINE_PER_CPU(struct task_struct *, __entry_task);
149 -
150 - asmlinkage void ret_from_fork(void) __asm__("ret_from_fork");
151 - int copy_thread(unsigned long clone_flags, unsigned long stack_start,
152 - unsigned long stk_sz, struct task_struct *p, unsigned long tls)
153 - {
154 - struct pt_regs *childregs = task_pt_regs(p);
155 -
156 - memset(&p->thread.cpu_context, 0, sizeof(struct cpu_context));
157 -
158 - if (unlikely(p->flags & (PF_KTHREAD | PF_IO_WORKER))) {
159 - memset(childregs, 0, sizeof(struct pt_regs));
160 - /* kernel thread fn */
161 - p->thread.cpu_context.r6 = stack_start;
162 - /* kernel thread argument */
163 - p->thread.cpu_context.r7 = stk_sz;
164 - } else {
165 - *childregs = *current_pt_regs();
166 - if (stack_start)
167 - childregs->sp = stack_start;
168 - /* child gets zero as return value */
169 - childregs->uregs[0] = 0;
170 - childregs->osp = 0;
171 - if (clone_flags & CLONE_SETTLS)
172 - childregs->uregs[25] = tls;
173 - }
174 - /* cpu context switching */
175 - p->thread.cpu_context.pc = (unsigned long)ret_from_fork;
176 - p->thread.cpu_context.sp = (unsigned long)childregs;
177 -
178 - #if IS_ENABLED(CONFIG_FPU)
179 - if (used_math()) {
180 - # if !IS_ENABLED(CONFIG_LAZY_FPU)
181 - unlazy_fpu(current);
182 - # else
183 - preempt_disable();
184 - if (last_task_used_math == current)
185 - save_fpu(current);
186 - preempt_enable();
187 - # endif
188 - p->thread.fpu = current->thread.fpu;
189 - clear_fpu(task_pt_regs(p));
190 - set_stopped_child_used_math(p);
191 - }
192 - #endif
193 -
194 - #ifdef CONFIG_HWZOL
195 - childregs->lb = 0;
196 - childregs->le = 0;
197 - childregs->lc = 0;
198 - #endif
199 -
200 - return 0;
201 - }
202 -
203 - #if IS_ENABLED(CONFIG_FPU)
204 - struct task_struct *_switch_fpu(struct task_struct *prev, struct task_struct *next)
205 - {
206 - #if !IS_ENABLED(CONFIG_LAZY_FPU)
207 - unlazy_fpu(prev);
208 - #endif
209 - if (!(next->flags & PF_KTHREAD))
210 - clear_fpu(task_pt_regs(next));
211 - return prev;
212 - }
213 - #endif
214 -
215 - /*
216 - * Fill in the fpe structure for a core dump...
217 - */
218 - int dump_fpu(struct pt_regs *regs, elf_fpregset_t *fpu)
219 - {
220 - int fpvalid = 0;
221 - #if IS_ENABLED(CONFIG_FPU)
222 - struct task_struct *tsk = current;
223 -
224 - fpvalid = tsk_used_math(tsk);
225 - if (fpvalid) {
226 - lose_fpu();
227 - memcpy(fpu, &tsk->thread.fpu, sizeof(*fpu));
228 - }
229 - #endif
230 - return fpvalid;
231 - }
232 -
233 - EXPORT_SYMBOL(dump_fpu);
234 -
235 - unsigned long __get_wchan(struct task_struct *p)
236 - {
237 - unsigned long fp, lr;
238 - unsigned long stack_start, stack_end;
239 - int count = 0;
240 -
241 - if (IS_ENABLED(CONFIG_FRAME_POINTER)) {
242 - stack_start = (unsigned long)end_of_stack(p);
243 - stack_end = (unsigned long)task_stack_page(p) + THREAD_SIZE;
244 -
245 - fp = thread_saved_fp(p);
246 - do {
247 - if (fp < stack_start || fp > stack_end)
248 - return 0;
249 - lr = ((unsigned long *)fp)[0];
250 - if (!in_sched_functions(lr))
251 - return lr;
252 - fp = *(unsigned long *)(fp + 4);
253 - } while (count++ < 16);
254 - }
255 - return 0;
256 - }
-118
arch/nds32/kernel/ptrace.c
··· 1 - // SPDX-License-Identifier: GPL-2.0
2 - // Copyright (C) 2005-2017 Andes Technology Corporation
3 -
4 - #include <linux/ptrace.h>
5 - #include <linux/regset.h>
6 - #include <linux/tracehook.h>
7 - #include <linux/elf.h>
8 - #include <linux/sched/task_stack.h>
9 -
10 - enum nds32_regset {
11 - REGSET_GPR,
12 - };
13 -
14 - static int gpr_get(struct task_struct *target,
15 - const struct user_regset *regset,
16 - struct membuf to)
17 - {
18 - return membuf_write(&to, &task_pt_regs(target)->user_regs,
19 - sizeof(struct user_pt_regs));
20 - }
21 -
22 - static int gpr_set(struct task_struct *target, const struct user_regset *regset,
23 - unsigned int pos, unsigned int count,
24 - const void *kbuf, const void __user *ubuf)
25 - {
26 - int err;
27 - struct user_pt_regs newregs = task_pt_regs(target)->user_regs;
28 -
29 - err = user_regset_copyin(&pos, &count, &kbuf, &ubuf, &newregs, 0, -1);
30 - if (err)
31 - return err;
32 -
33 - task_pt_regs(target)->user_regs = newregs;
34 - return 0;
35 - }
36 -
37 - static const struct user_regset nds32_regsets[] = {
38 - [REGSET_GPR] = {
39 - .core_note_type = NT_PRSTATUS,
40 - .n = sizeof(struct user_pt_regs) / sizeof(u32),
41 - .size = sizeof(elf_greg_t),
42 - .align = sizeof(elf_greg_t),
43 - .regset_get = gpr_get,
44 - .set = gpr_set}
45 - };
46 -
47 - static const struct user_regset_view nds32_user_view = {
48 - .name = "nds32",
49 - .e_machine = EM_NDS32,
50 - .regsets = nds32_regsets,
51 - .n = ARRAY_SIZE(nds32_regsets)
52 - };
53 -
54 - const struct user_regset_view *task_user_regset_view(struct task_struct *task)
55 - {
56 - return &nds32_user_view;
57 - }
58 -
59 - void ptrace_disable(struct task_struct *child)
60 - {
61 - user_disable_single_step(child);
62 - }
63 -
64 - /* do_ptrace()
65 - *
66 - * Provide ptrace defined service.
67 - */
68 - long arch_ptrace(struct task_struct *child, long request, unsigned long addr,
69 - unsigned long data)
70 - {
71 - int ret = -EIO;
72 -
73 - switch (request) {
74 - default:
75 - ret = ptrace_request(child, request, addr, data);
76 - break;
77 - }
78 -
79 - return ret;
80 - }
81 -
82 - void user_enable_single_step(struct task_struct *child)
83 - {
84 - struct pt_regs *regs;
85 - regs = task_pt_regs(child);
86 - regs->ipsw |= PSW_mskHSS;
87 - set_tsk_thread_flag(child, TIF_SINGLESTEP);
88 - }
89 -
90 - void user_disable_single_step(struct task_struct *child)
91 - {
92 - struct pt_regs *regs;
93 - regs = task_pt_regs(child);
94 - regs->ipsw &= ~PSW_mskHSS;
95 - clear_tsk_thread_flag(child, TIF_SINGLESTEP);
96 - }
97 -
98 - /* sys_trace()
99 - *
100 - * syscall trace handler.
101 - */
102 -
103 - asmlinkage int syscall_trace_enter(struct pt_regs *regs)
104 - {
105 - if (test_thread_flag(TIF_SYSCALL_TRACE)) {
106 - if (tracehook_report_syscall_entry(regs))
107 - forget_syscall(regs);
108 - }
109 - return regs->syscallno;
110 - }
111 -
112 - asmlinkage void syscall_trace_leave(struct pt_regs *regs)
113 - {
114 - int step = test_thread_flag(TIF_SINGLESTEP);
115 - if (step || test_thread_flag(TIF_SYSCALL_TRACE))
116 - tracehook_report_syscall_exit(regs, step);
117 -
118 - }
-369
arch/nds32/kernel/setup.c
··· 1 - // SPDX-License-Identifier: GPL-2.0
2 - // Copyright (C) 2005-2017 Andes Technology Corporation
3 -
4 - #include <linux/cpu.h>
5 - #include <linux/memblock.h>
6 - #include <linux/seq_file.h>
7 - #include <linux/console.h>
8 - #include <linux/screen_info.h>
9 - #include <linux/delay.h>
10 - #include <linux/dma-mapping.h>
11 - #include <linux/of_fdt.h>
12 - #include <linux/of_platform.h>
13 - #include <asm/setup.h>
14 - #include <asm/sections.h>
15 - #include <asm/proc-fns.h>
16 - #include <asm/cache_info.h>
17 - #include <asm/elf.h>
18 - #include <asm/fpu.h>
19 - #include <nds32_intrinsic.h>
20 -
21 - #define HWCAP_MFUSR_PC 0x000001
22 - #define HWCAP_EXT 0x000002
23 - #define HWCAP_EXT2 0x000004
24 - #define HWCAP_FPU 0x000008
25 - #define HWCAP_AUDIO 0x000010
26 - #define HWCAP_BASE16 0x000020
27 - #define HWCAP_STRING 0x000040
28 - #define HWCAP_REDUCED_REGS 0x000080
29 - #define HWCAP_VIDEO 0x000100
30 - #define HWCAP_ENCRYPT 0x000200
31 - #define HWCAP_EDM 0x000400
32 - #define HWCAP_LMDMA 0x000800
33 - #define HWCAP_PFM 0x001000
34 - #define HWCAP_HSMP 0x002000
35 - #define HWCAP_TRACE 0x004000
36 - #define HWCAP_DIV 0x008000
37 - #define HWCAP_MAC 0x010000
38 - #define HWCAP_L2C 0x020000
39 - #define HWCAP_FPU_DP 0x040000
40 - #define HWCAP_V2 0x080000
41 - #define HWCAP_DX_REGS 0x100000
42 - #define HWCAP_HWPRE 0x200000
43 -
44 - unsigned long cpu_id, cpu_rev, cpu_cfgid;
45 - bool has_fpu = false;
46 - char cpu_series;
47 - char *endianness = NULL;
48 -
49 - unsigned int __atags_pointer __initdata;
50 - unsigned int elf_hwcap;
51 - EXPORT_SYMBOL(elf_hwcap);
52 -
53 - /*
54 - * The following string table must stay in sync with the HWCAP_xx bitmask
55 - * defined above.
56 - */
57 - static const char *hwcap_str[] = {
58 - "mfusr_pc",
59 - "perf1",
60 - "perf2",
61 - "fpu",
62 - "audio",
63 - "16b",
64 - "string",
65 - "reduced_regs",
66 - "video",
67 - "encrypt",
68 - "edm",
69 - "lmdma",
70 - "pfm",
71 - "hsmp",
72 - "trace",
73 - "div",
74 - "mac",
75 - "l2c",
76 - "fpu_dp",
77 - "v2",
78 - "dx_regs",
79 - "hw_pre",
80 - NULL,
81 - };
82 -
83 - #ifdef CONFIG_CPU_DCACHE_WRITETHROUGH
84 - #define WRITE_METHOD "write through"
85 - #else
86 - #define WRITE_METHOD "write back"
87 - #endif
88 -
89 - struct cache_info L1_cache_info[2];
90 - static void __init dump_cpu_info(int cpu)
91 - {
92 - int i, p = 0;
93 - char str[sizeof(hwcap_str) + 16];
94 -
95 - for (i = 0; hwcap_str[i]; i++) {
96 - if (elf_hwcap & (1 << i)) {
97 - sprintf(str + p, "%s ", hwcap_str[i]);
98 - p += strlen(hwcap_str[i]) + 1;
99 - }
100 - }
101 -
102 - pr_info("CPU%d Features: %s\n", cpu, str);
103 -
104 - L1_cache_info[ICACHE].ways = CACHE_WAY(ICACHE);
105 - L1_cache_info[ICACHE].line_size = CACHE_LINE_SIZE(ICACHE);
106 - L1_cache_info[ICACHE].sets = CACHE_SET(ICACHE);
107 - L1_cache_info[ICACHE].size =
108 - L1_cache_info[ICACHE].ways * L1_cache_info[ICACHE].line_size *
109 - L1_cache_info[ICACHE].sets / 1024;
110 - pr_info("L1I:%dKB/%dS/%dW/%dB\n", L1_cache_info[ICACHE].size,
111 - L1_cache_info[ICACHE].sets, L1_cache_info[ICACHE].ways,
112 - L1_cache_info[ICACHE].line_size);
113 - L1_cache_info[DCACHE].ways = CACHE_WAY(DCACHE);
114 - L1_cache_info[DCACHE].line_size = CACHE_LINE_SIZE(DCACHE);
115 - L1_cache_info[DCACHE].sets = CACHE_SET(DCACHE);
116 - L1_cache_info[DCACHE].size =
117 - L1_cache_info[DCACHE].ways * L1_cache_info[DCACHE].line_size *
118 - L1_cache_info[DCACHE].sets / 1024;
119 - pr_info("L1D:%dKB/%dS/%dW/%dB\n", L1_cache_info[DCACHE].size,
120 - L1_cache_info[DCACHE].sets, L1_cache_info[DCACHE].ways,
121 - L1_cache_info[DCACHE].line_size);
122 - pr_info("L1 D-Cache is %s\n", WRITE_METHOD);
123 - if (L1_cache_info[DCACHE].size != L1_CACHE_BYTES)
124 - pr_crit
125 - ("The cache line size(%d) of this processor is not the same as L1_CACHE_BYTES(%d).\n",
126 - L1_cache_info[DCACHE].size, L1_CACHE_BYTES);
127 - #ifdef CONFIG_CPU_CACHE_ALIASING
128 - {
129 - int aliasing_num;
130 - aliasing_num =
131 - L1_cache_info[ICACHE].size * 1024 / PAGE_SIZE /
132 - L1_cache_info[ICACHE].ways;
133 - L1_cache_info[ICACHE].aliasing_num = aliasing_num;
134 - L1_cache_info[ICACHE].aliasing_mask =
135 - (aliasing_num - 1) << PAGE_SHIFT;
136 - aliasing_num =
137 - L1_cache_info[DCACHE].size * 1024 / PAGE_SIZE /
138 - L1_cache_info[DCACHE].ways;
139 - L1_cache_info[DCACHE].aliasing_num = aliasing_num;
140 - L1_cache_info[DCACHE].aliasing_mask =
141 - (aliasing_num - 1) << PAGE_SHIFT;
142 - }
143 - #endif
144 - #ifdef CONFIG_FPU
145 - /* Disable fpu and enable when it is used. */
146 - if (has_fpu)
147 - disable_fpu();
148 - #endif
149 - }
150 -
151 - static void __init setup_cpuinfo(void)
152 - {
153 - unsigned long tmp = 0, cpu_name;
154 -
155 - cpu_dcache_inval_all();
156 - cpu_icache_inval_all();
157 - __nds32__isb();
158 -
159 - cpu_id = (__nds32__mfsr(NDS32_SR_CPU_VER) & CPU_VER_mskCPUID) >> CPU_VER_offCPUID;
160 - cpu_name = ((cpu_id) & 0xf0) >> 4;
161 - cpu_series = cpu_name ? cpu_name - 10 + 'A' : 'N';
162 - cpu_id = cpu_id & 0xf;
163 - cpu_rev = (__nds32__mfsr(NDS32_SR_CPU_VER) & CPU_VER_mskREV) >> CPU_VER_offREV;
164 - cpu_cfgid = (__nds32__mfsr(NDS32_SR_CPU_VER) & CPU_VER_mskCFGID) >> CPU_VER_offCFGID;
165 -
166 - pr_info("CPU:%c%ld, CPU_VER 0x%08x(id %lu, rev %lu, cfg %lu)\n",
167 - cpu_series, cpu_id, __nds32__mfsr(NDS32_SR_CPU_VER), cpu_id, cpu_rev, cpu_cfgid);
168 -
169 - elf_hwcap |= HWCAP_MFUSR_PC;
170 -
171 - if (((__nds32__mfsr(NDS32_SR_MSC_CFG) & MSC_CFG_mskBASEV) >> MSC_CFG_offBASEV) == 0) {
172 - if (__nds32__mfsr(NDS32_SR_MSC_CFG) & MSC_CFG_mskDIV)
173 - elf_hwcap |= HWCAP_DIV;
174 -
175 - if ((__nds32__mfsr(NDS32_SR_MSC_CFG) & MSC_CFG_mskMAC)
176 - || (cpu_id == 12 && cpu_rev < 4))
177 - elf_hwcap |= HWCAP_MAC;
178 - } else {
179 - elf_hwcap |= HWCAP_V2;
180 - elf_hwcap |= HWCAP_DIV;
181 - elf_hwcap |= HWCAP_MAC;
182 - }
183 -
184 - if (cpu_cfgid & 0x0001)
185 - elf_hwcap |= HWCAP_EXT;
186 -
187 - if (cpu_cfgid & 0x0002)
188 - elf_hwcap |= HWCAP_BASE16;
189 -
190 - if (cpu_cfgid & 0x0004)
191 - elf_hwcap |= HWCAP_EXT2;
192 -
193 - if (cpu_cfgid & 0x0008) {
194 - elf_hwcap |= HWCAP_FPU;
195 - has_fpu = true;
196 - }
197 - if (cpu_cfgid & 0x0010)
198 - elf_hwcap |= HWCAP_STRING;
199 -
200 - if (__nds32__mfsr(NDS32_SR_MMU_CFG) & MMU_CFG_mskDE)
201 - endianness = "MSB";
202 - else
203 - endianness = "LSB";
204 -
205 - if (__nds32__mfsr(NDS32_SR_MSC_CFG) & MSC_CFG_mskEDM)
206 - elf_hwcap |= HWCAP_EDM;
207 -
208 - if (__nds32__mfsr(NDS32_SR_MSC_CFG) & MSC_CFG_mskLMDMA)
209 - elf_hwcap |= HWCAP_LMDMA;
210 -
211 - if (__nds32__mfsr(NDS32_SR_MSC_CFG) & MSC_CFG_mskPFM)
212 - elf_hwcap |= HWCAP_PFM;
213 -
214 - if (__nds32__mfsr(NDS32_SR_MSC_CFG) & MSC_CFG_mskHSMP)
215 - elf_hwcap |= HWCAP_HSMP;
216 -
217 - if (__nds32__mfsr(NDS32_SR_MSC_CFG) & MSC_CFG_mskTRACE)
218 - elf_hwcap |= HWCAP_TRACE;
219 -
220 - if (__nds32__mfsr(NDS32_SR_MSC_CFG) & MSC_CFG_mskAUDIO)
221 - elf_hwcap |= HWCAP_AUDIO;
222 -
223 - if (__nds32__mfsr(NDS32_SR_MSC_CFG) & MSC_CFG_mskL2C)
224 - elf_hwcap |= HWCAP_L2C;
225 -
226 - #ifdef CONFIG_HW_PRE
227 - if (__nds32__mfsr(NDS32_SR_MISC_CTL) & MISC_CTL_makHWPRE_EN)
228 - elf_hwcap |= HWCAP_HWPRE;
229 - #endif
230 -
231 - tmp = __nds32__mfsr(NDS32_SR_CACHE_CTL);
232 - if (!IS_ENABLED(CONFIG_CPU_DCACHE_DISABLE))
233 - tmp |= CACHE_CTL_mskDC_EN;
234 -
235 - if (!IS_ENABLED(CONFIG_CPU_ICACHE_DISABLE))
236 - tmp |= CACHE_CTL_mskIC_EN;
237 - __nds32__mtsr_isb(tmp, NDS32_SR_CACHE_CTL);
238 -
239 - dump_cpu_info(smp_processor_id());
240 - }
241 -
242 - static void __init setup_memory(void)
243 - {
244 - unsigned long ram_start_pfn;
245 - unsigned long free_ram_start_pfn;
246 - phys_addr_t memory_start, memory_end;
247 -
248 - memory_end = memory_start = 0;
249 -
250 - /* Find the main memory where the kernel lives */
251 - memory_start = memblock_start_of_DRAM();
252 - memory_end = memblock_end_of_DRAM();
253 -
254 - if (!memory_end) {
255 - panic("No memory!");
256 - }
257 -
258 - ram_start_pfn = PFN_UP(memblock_start_of_DRAM());
259 - /* free_ram_start_pfn is the first page after the kernel */
260 - free_ram_start_pfn = PFN_UP(__pa(&_end));
261 - max_pfn = PFN_DOWN(memblock_end_of_DRAM());
262 - /* it could update max_pfn */
263 - if (max_pfn - ram_start_pfn <= MAXMEM_PFN)
264 - max_low_pfn = max_pfn;
265 - else {
266 - max_low_pfn = MAXMEM_PFN + ram_start_pfn;
267 - if (!IS_ENABLED(CONFIG_HIGHMEM))
268 - max_pfn = MAXMEM_PFN + ram_start_pfn;
269 - }
270 - /* high_memory is related with VMALLOC */
271 - high_memory = (void *)__va(max_low_pfn * PAGE_SIZE);
272 - min_low_pfn = free_ram_start_pfn;
273 -
274 - /*
275 - * initialize the boot-time allocator (with low memory only).
276 - *
277 - * This makes the memory from the end of the kernel to the end of
278 - * RAM usable.
279 - */
280 - memblock_set_bottom_up(true);
281 - memblock_reserve(PFN_PHYS(ram_start_pfn), PFN_PHYS(free_ram_start_pfn - ram_start_pfn));
282 -
283 - early_init_fdt_reserve_self();
284 - early_init_fdt_scan_reserved_mem();
285 -
286 - memblock_dump_all();
287 - }
288 -
289 - void __init setup_arch(char **cmdline_p)
290 - {
291 - early_init_devtree(__atags_pointer ?
292 - phys_to_virt(__atags_pointer) : __dtb_start);
293 -
294 - setup_cpuinfo();
295 -
296 - setup_initial_init_mm(_stext, _etext, _edata, _end);
297 -
298 - /* setup bootmem allocator */
299 - setup_memory();
300 -
301 - /* paging_init() sets up the MMU and marks all pages as reserved */
302 - paging_init();
303 -
304 - /* invalidate all TLB entries because the new mapping is created */
305 - __nds32__tlbop_flua();
306 -
307 - /* use generic way to parse */
308 - parse_early_param();
309 -
310 - unflatten_and_copy_device_tree();
311 -
312 - *cmdline_p = boot_command_line;
313 - early_trap_init();
314 - }
315 -
316 - static int c_show(struct seq_file *m, void *v)
317 - {
318 - int i;
319 -
320 - seq_printf(m, "Processor\t: %c%ld (id %lu, rev %lu, cfg %lu)\n",
321 - cpu_series, cpu_id, cpu_id, cpu_rev, cpu_cfgid);
322 -
323 - seq_printf(m, "L1I\t\t: %luKB/%luS/%luW/%luB\n",
324 - CACHE_SET(ICACHE) * CACHE_WAY(ICACHE) *
325 - CACHE_LINE_SIZE(ICACHE) / 1024, CACHE_SET(ICACHE),
326 - CACHE_WAY(ICACHE), CACHE_LINE_SIZE(ICACHE));
327 -
328 - seq_printf(m, "L1D\t\t: %luKB/%luS/%luW/%luB\n",
329 - CACHE_SET(DCACHE) * CACHE_WAY(DCACHE) *
330 - CACHE_LINE_SIZE(DCACHE) / 1024, CACHE_SET(DCACHE),
331 - CACHE_WAY(DCACHE), CACHE_LINE_SIZE(DCACHE));
332 -
333 - seq_printf(m, "BogoMIPS\t: %lu.%02lu\n",
334 - loops_per_jiffy / (500000 / HZ),
335 - (loops_per_jiffy / (5000 / HZ)) % 100);
336 -
337 - /* dump out the processor features */
338 - seq_puts(m, "Features\t: ");
339 -
340 - for (i = 0; hwcap_str[i]; i++)
341 - if (elf_hwcap & (1 << i))
342 - seq_printf(m, "%s ", hwcap_str[i]);
343 -
344 - seq_puts(m, "\n\n");
345 -
346 - return 0;
347 - }
348 -
349 - static void *c_start(struct seq_file *m, loff_t *pos)
350 - {
351 - return *pos < 1 ? (void *)1 : NULL;
352 - }
353 -
354 - static void *c_next(struct seq_file *m, void *v, loff_t *pos)
355 - {
356 - ++*pos;
357 - return NULL;
358 - }
359 -
360 - static void c_stop(struct seq_file *m, void *v)
361 - {
362 - }
363 -
364 - struct seq_operations cpuinfo_op = {
365 - .start = c_start,
366 - .next = c_next,
367 - .stop = c_stop,
368 - .show = c_show
369 - };
-384
arch/nds32/kernel/signal.c
···
1 - // SPDX-License-Identifier: GPL-2.0
2 - // Copyright (C) 2005-2017 Andes Technology Corporation
3 -
4 - #include <linux/errno.h>
5 - #include <linux/signal.h>
6 - #include <linux/ptrace.h>
7 - #include <linux/personality.h>
8 - #include <linux/freezer.h>
9 - #include <linux/tracehook.h>
10 - #include <linux/uaccess.h>
11 -
12 - #include <asm/cacheflush.h>
13 - #include <asm/ucontext.h>
14 - #include <asm/unistd.h>
15 - #include <asm/fpu.h>
16 -
17 - #include <asm/ptrace.h>
18 - #include <asm/vdso.h>
19 -
20 - struct rt_sigframe {
21 - struct siginfo info;
22 - struct ucontext uc;
23 - };
24 - #if IS_ENABLED(CONFIG_FPU)
25 - static inline int restore_sigcontext_fpu(struct pt_regs *regs,
26 - struct sigcontext __user *sc)
27 - {
28 - struct task_struct *tsk = current;
29 - unsigned long used_math_flag;
30 - int ret = 0;
31 -
32 - clear_used_math();
33 - __get_user_error(used_math_flag, &sc->used_math_flag, ret);
34 -
35 - if (!used_math_flag)
36 - return 0;
37 - set_used_math();
38 -
39 - #if IS_ENABLED(CONFIG_LAZY_FPU)
40 - preempt_disable();
41 - if (current == last_task_used_math) {
42 - last_task_used_math = NULL;
43 - disable_ptreg_fpu(regs);
44 - }
45 - preempt_enable();
46 - #else
47 - clear_fpu(regs);
48 - #endif
49 -
50 - return __copy_from_user(&tsk->thread.fpu, &sc->fpu,
51 - sizeof(struct fpu_struct));
52 - }
53 -
54 - static inline int setup_sigcontext_fpu(struct pt_regs *regs,
55 - struct sigcontext __user *sc)
56 - {
57 - struct task_struct *tsk = current;
58 - int ret = 0;
59 -
60 - __put_user_error(used_math(), &sc->used_math_flag, ret);
61 -
62 - if (!used_math())
63 - return ret;
64 -
65 - preempt_disable();
66 - #if IS_ENABLED(CONFIG_LAZY_FPU)
67 - if (last_task_used_math == tsk)
68 - save_fpu(last_task_used_math);
69 - #else
70 - unlazy_fpu(tsk);
71 - #endif
72 - ret = __copy_to_user(&sc->fpu, &tsk->thread.fpu,
73 - sizeof(struct fpu_struct));
74 - preempt_enable();
75 - return ret;
76 - }
77 - #endif
78 -
79 - static int restore_sigframe(struct pt_regs *regs,
80 - struct rt_sigframe __user * sf)
81 - {
82 - sigset_t set;
83 - int err;
84 -
85 - err = __copy_from_user(&set, &sf->uc.uc_sigmask, sizeof(set));
86 - if (err == 0) {
87 - set_current_blocked(&set);
88 - }
89 -
90 - __get_user_error(regs->uregs[0], &sf->uc.uc_mcontext.nds32_r0, err);
91 - __get_user_error(regs->uregs[1], &sf->uc.uc_mcontext.nds32_r1, err);
92 - __get_user_error(regs->uregs[2], &sf->uc.uc_mcontext.nds32_r2, err);
93 - __get_user_error(regs->uregs[3], &sf->uc.uc_mcontext.nds32_r3, err);
94 - __get_user_error(regs->uregs[4], &sf->uc.uc_mcontext.nds32_r4, err);
95 - __get_user_error(regs->uregs[5], &sf->uc.uc_mcontext.nds32_r5, err);
96 - __get_user_error(regs->uregs[6], &sf->uc.uc_mcontext.nds32_r6, err);
97 - __get_user_error(regs->uregs[7], &sf->uc.uc_mcontext.nds32_r7, err);
98 - __get_user_error(regs->uregs[8], &sf->uc.uc_mcontext.nds32_r8, err);
99 - __get_user_error(regs->uregs[9], &sf->uc.uc_mcontext.nds32_r9, err);
100 - __get_user_error(regs->uregs[10], &sf->uc.uc_mcontext.nds32_r10, err);
101 - __get_user_error(regs->uregs[11], &sf->uc.uc_mcontext.nds32_r11, err);
102 - __get_user_error(regs->uregs[12], &sf->uc.uc_mcontext.nds32_r12, err);
103 - __get_user_error(regs->uregs[13], &sf->uc.uc_mcontext.nds32_r13, err);
104 - __get_user_error(regs->uregs[14], &sf->uc.uc_mcontext.nds32_r14, err);
105 - __get_user_error(regs->uregs[15], &sf->uc.uc_mcontext.nds32_r15, err);
106 - __get_user_error(regs->uregs[16], &sf->uc.uc_mcontext.nds32_r16, err);
107 - __get_user_error(regs->uregs[17], &sf->uc.uc_mcontext.nds32_r17, err);
108 - __get_user_error(regs->uregs[18], &sf->uc.uc_mcontext.nds32_r18, err);
109 - __get_user_error(regs->uregs[19], &sf->uc.uc_mcontext.nds32_r19, err);
110 - __get_user_error(regs->uregs[20], &sf->uc.uc_mcontext.nds32_r20, err);
111 - __get_user_error(regs->uregs[21], &sf->uc.uc_mcontext.nds32_r21, err);
112 - __get_user_error(regs->uregs[22], &sf->uc.uc_mcontext.nds32_r22, err);
113 - __get_user_error(regs->uregs[23], &sf->uc.uc_mcontext.nds32_r23, err);
114 - __get_user_error(regs->uregs[24], &sf->uc.uc_mcontext.nds32_r24, err);
115 - __get_user_error(regs->uregs[25], &sf->uc.uc_mcontext.nds32_r25, err);
116 -
117 - __get_user_error(regs->fp, &sf->uc.uc_mcontext.nds32_fp, err);
118 - __get_user_error(regs->gp, &sf->uc.uc_mcontext.nds32_gp, err);
119 - __get_user_error(regs->lp, &sf->uc.uc_mcontext.nds32_lp, err);
120 - __get_user_error(regs->sp, &sf->uc.uc_mcontext.nds32_sp, err);
121 - __get_user_error(regs->ipc, &sf->uc.uc_mcontext.nds32_ipc, err);
122 - #if defined(CONFIG_HWZOL)
123 - __get_user_error(regs->lc, &sf->uc.uc_mcontext.zol.nds32_lc, err);
124 - __get_user_error(regs->le, &sf->uc.uc_mcontext.zol.nds32_le, err);
125 - __get_user_error(regs->lb, &sf->uc.uc_mcontext.zol.nds32_lb, err);
126 - #endif
127 - #if IS_ENABLED(CONFIG_FPU)
128 - err |= restore_sigcontext_fpu(regs, &sf->uc.uc_mcontext);
129 - #endif
130 - /*
131 - * Avoid sys_rt_sigreturn() restarting.
132 - */
133 - forget_syscall(regs);
134 - return err;
135 - }
136 -
137 - asmlinkage long sys_rt_sigreturn(struct pt_regs *regs)
138 - {
139 - struct rt_sigframe __user *frame;
140 -
141 - /* Always make any pending restarted system calls return -EINTR */
142 - current->restart_block.fn = do_no_restart_syscall;
143 -
144 - /*
145 - * Since we stacked the signal on a 64-bit boundary,
146 - * then 'sp' should be two-word aligned here. If it's
147 - * not, then the user is trying to mess with us.
148 - */
149 - if (regs->sp & 7)
150 - goto badframe;
151 -
152 - frame = (struct rt_sigframe __user *)regs->sp;
153 -
154 - if (!access_ok(frame, sizeof(*frame)))
155 - goto badframe;
156 -
157 - if (restore_sigframe(regs, frame))
158 - goto badframe;
159 -
160 - if (restore_altstack(&frame->uc.uc_stack))
161 - goto badframe;
162 -
163 - return regs->uregs[0];
164 -
165 - badframe:
166 - force_sig(SIGSEGV);
167 - return 0;
168 - }
169 -
170 - static int
171 - setup_sigframe(struct rt_sigframe __user * sf, struct pt_regs *regs,
172 - sigset_t * set)
173 - {
174 - int err = 0;
175 -
176 - __put_user_error(regs->uregs[0], &sf->uc.uc_mcontext.nds32_r0, err);
177 - __put_user_error(regs->uregs[1], &sf->uc.uc_mcontext.nds32_r1, err);
178 - __put_user_error(regs->uregs[2], &sf->uc.uc_mcontext.nds32_r2, err);
179 - __put_user_error(regs->uregs[3], &sf->uc.uc_mcontext.nds32_r3, err);
180 - __put_user_error(regs->uregs[4], &sf->uc.uc_mcontext.nds32_r4, err);
181 - __put_user_error(regs->uregs[5], &sf->uc.uc_mcontext.nds32_r5, err);
182 - __put_user_error(regs->uregs[6], &sf->uc.uc_mcontext.nds32_r6, err);
183 - __put_user_error(regs->uregs[7], &sf->uc.uc_mcontext.nds32_r7, err);
184 - __put_user_error(regs->uregs[8], &sf->uc.uc_mcontext.nds32_r8, err);
185 - __put_user_error(regs->uregs[9], &sf->uc.uc_mcontext.nds32_r9, err);
186 - __put_user_error(regs->uregs[10], &sf->uc.uc_mcontext.nds32_r10, err);
187 - __put_user_error(regs->uregs[11], &sf->uc.uc_mcontext.nds32_r11, err);
188 - __put_user_error(regs->uregs[12], &sf->uc.uc_mcontext.nds32_r12, err);
189 - __put_user_error(regs->uregs[13], &sf->uc.uc_mcontext.nds32_r13, err);
190 - __put_user_error(regs->uregs[14], &sf->uc.uc_mcontext.nds32_r14, err);
191 - __put_user_error(regs->uregs[15], &sf->uc.uc_mcontext.nds32_r15, err);
192 - __put_user_error(regs->uregs[16], &sf->uc.uc_mcontext.nds32_r16, err);
193 - __put_user_error(regs->uregs[17], &sf->uc.uc_mcontext.nds32_r17, err);
194 - __put_user_error(regs->uregs[18], &sf->uc.uc_mcontext.nds32_r18, err);
195 - __put_user_error(regs->uregs[19], &sf->uc.uc_mcontext.nds32_r19, err);
196 - __put_user_error(regs->uregs[20], &sf->uc.uc_mcontext.nds32_r20, err);
197 -
198 - __put_user_error(regs->uregs[21], &sf->uc.uc_mcontext.nds32_r21, err);
199 - __put_user_error(regs->uregs[22], &sf->uc.uc_mcontext.nds32_r22, err);
200 - __put_user_error(regs->uregs[23], &sf->uc.uc_mcontext.nds32_r23, err);
201 - __put_user_error(regs->uregs[24], &sf->uc.uc_mcontext.nds32_r24, err);
202 - __put_user_error(regs->uregs[25], &sf->uc.uc_mcontext.nds32_r25, err);
203 - __put_user_error(regs->fp, &sf->uc.uc_mcontext.nds32_fp, err);
204 - __put_user_error(regs->gp, &sf->uc.uc_mcontext.nds32_gp, err);
205 - __put_user_error(regs->lp, &sf->uc.uc_mcontext.nds32_lp, err);
206 - __put_user_error(regs->sp, &sf->uc.uc_mcontext.nds32_sp, err);
207 - __put_user_error(regs->ipc, &sf->uc.uc_mcontext.nds32_ipc, err);
208 - #if defined(CONFIG_HWZOL)
209 - __put_user_error(regs->lc, &sf->uc.uc_mcontext.zol.nds32_lc, err);
210 - __put_user_error(regs->le, &sf->uc.uc_mcontext.zol.nds32_le, err);
211 - __put_user_error(regs->lb, &sf->uc.uc_mcontext.zol.nds32_lb, err);
212 - #endif
213 - #if IS_ENABLED(CONFIG_FPU)
214 - err |= setup_sigcontext_fpu(regs, &sf->uc.uc_mcontext);
215 - #endif
216 -
217 - __put_user_error(current->thread.trap_no, &sf->uc.uc_mcontext.trap_no,
218 - err);
219 - __put_user_error(current->thread.error_code,
220 - &sf->uc.uc_mcontext.error_code, err);
221 - __put_user_error(current->thread.address,
222 - &sf->uc.uc_mcontext.fault_address, err);
223 - __put_user_error(set->sig[0], &sf->uc.uc_mcontext.oldmask, err);
224 -
225 - err |= __copy_to_user(&sf->uc.uc_sigmask, set, sizeof(*set));
226 -
227 - return err;
228 - }
229 -
230 - static inline void __user *get_sigframe(struct ksignal *ksig,
231 - struct pt_regs *regs, int framesize)
232 - {
233 - unsigned long sp;
234 -
235 - /* Default to using normal stack */
236 - sp = regs->sp;
237 -
238 - /*
239 - * If we are on the alternate signal stack and would overflow it, don't.
240 - * Return an always-bogus address instead so we will die with SIGSEGV.
241 - */
242 - if (on_sig_stack(sp) && !likely(on_sig_stack(sp - framesize)))
243 - return (void __user __force *)(-1UL);
244 -
245 - /* This is the X/Open sanctioned signal stack switching. */
246 - sp = (sigsp(sp, ksig) - framesize);
247 -
248 - /*
249 - * nds32 mandates 8-byte alignment
250 - */
251 - sp &= ~0x7UL;
252 -
253 - return (void __user *)sp;
254 - }
255 -
256 - static int
257 - setup_return(struct pt_regs *regs, struct ksignal *ksig, void __user * frame)
258 - {
259 - unsigned long handler = (unsigned long)ksig->ka.sa.sa_handler;
260 - unsigned long retcode;
261 -
262 - retcode = VDSO_SYMBOL(current->mm->context.vdso, rt_sigtramp);
263 - regs->uregs[0] = ksig->sig;
264 - regs->sp = (unsigned long)frame;
265 - regs->lp = retcode;
266 - regs->ipc = handler;
267 -
268 - return 0;
269 - }
270 -
271 - static int
272 - setup_rt_frame(struct ksignal *ksig, sigset_t * set, struct pt_regs *regs)
273 - {
274 - struct rt_sigframe __user *frame =
275 - get_sigframe(ksig, regs, sizeof(*frame));
276 - int err = 0;
277 -
278 - if (!access_ok(frame, sizeof(*frame)))
279 - return -EFAULT;
280 -
281 - __put_user_error(0, &frame->uc.uc_flags, err);
282 - __put_user_error(NULL, &frame->uc.uc_link, err);
283 -
284 - err |= __save_altstack(&frame->uc.uc_stack, regs->sp);
285 - err |= setup_sigframe(frame, regs, set);
286 - if (err == 0) {
287 - setup_return(regs, ksig, frame);
288 - if (ksig->ka.sa.sa_flags & SA_SIGINFO) {
289 - err |= copy_siginfo_to_user(&frame->info, &ksig->info);
290 - regs->uregs[1] = (unsigned long)&frame->info;
291 - regs->uregs[2] = (unsigned long)&frame->uc;
292 - }
293 - }
294 - return err;
295 - }
296 -
297 - /*
298 - * OK, we're invoking a handler
299 - */
300 - static void handle_signal(struct ksignal *ksig, struct pt_regs *regs)
301 - {
302 - int ret;
303 - sigset_t *oldset = sigmask_to_save();
304 -
305 - if (in_syscall(regs)) {
306 - /* Avoid additional syscall restarting via ret_slow_syscall. */
307 - forget_syscall(regs);
308 -
309 - switch (regs->uregs[0]) {
310 - case -ERESTART_RESTARTBLOCK:
311 - case -ERESTARTNOHAND:
312 - regs->uregs[0] = -EINTR;
313 - break;
314 - case -ERESTARTSYS:
315 - if (!(ksig->ka.sa.sa_flags & SA_RESTART)) {
316 - regs->uregs[0] = -EINTR;
317 - break;
318 - }
319 - fallthrough;
320 - case -ERESTARTNOINTR:
321 - regs->uregs[0] = regs->orig_r0;
322 - regs->ipc -= 4;
323 - break;
324 - }
325 - }
326 - /*
327 - * Set up the stack frame
328 - */
329 - ret = setup_rt_frame(ksig, oldset, regs);
330 -
331 - signal_setup_done(ret, ksig, 0);
332 - }
333 -
334 - /*
335 - * Note that 'init' is a special process: it doesn't get signals it doesn't
336 - * want to handle. Thus you cannot kill init even with a SIGKILL even by
337 - * mistake.
338 - *
339 - * Note that we go through the signals twice: once to check the signals that
340 - * the kernel can handle, and then we build all the user-level signal handling
341 - * stack-frames in one go after that.
342 - */
343 - static void do_signal(struct pt_regs *regs)
344 - {
345 - struct ksignal ksig;
346 -
347 - if (get_signal(&ksig)) {
348 - handle_signal(&ksig, regs);
349 - return;
350 - }
351 -
352 - /*
353 - * If we were from a system call, check for system call restarting...
354 - */
355 - if (in_syscall(regs)) {
356 - /* Restart the system call - no handlers present */
357 -
358 - /* Avoid additional syscall restarting via ret_slow_syscall. */
359 - forget_syscall(regs);
360 -
361 - switch (regs->uregs[0]) {
362 - case -ERESTART_RESTARTBLOCK:
363 - regs->uregs[15] = __NR_restart_syscall;
364 - fallthrough;
365 - case -ERESTARTNOHAND:
366 - case -ERESTARTSYS:
367 - case -ERESTARTNOINTR:
368 - regs->uregs[0] = regs->orig_r0;
369 - regs->ipc -= 0x4;
370 - break;
371 - }
372 - }
373 - restore_saved_sigmask();
374 - }
375 -
376 - asmlinkage void
377 - do_notify_resume(struct pt_regs *regs, unsigned int thread_flags)
378 - {
379 - if (thread_flags & (_TIF_SIGPENDING | _TIF_NOTIFY_SIGNAL))
380 - do_signal(regs);
381 -
382 - if (thread_flags & _TIF_NOTIFY_RESUME)
383 - tracehook_notify_resume(regs);
384 - }
-131
arch/nds32/kernel/sleep.S
···
1 - /* SPDX-License-Identifier: GPL-2.0 */
2 - /* Copyright (C) 2017 Andes Technology Corporation */
3 -
4 - #include <asm/memory.h>
5 -
6 - .data
7 - .global sp_tmp
8 - sp_tmp:
9 - .long
10 -
11 - .text
12 - .globl suspend2ram
13 - .globl cpu_resume
14 -
15 - suspend2ram:
16 - pushm $r0, $r31
17 - #if defined(CONFIG_HWZOL)
18 - mfusr $r0, $lc
19 - mfusr $r1, $le
20 - mfusr $r2, $lb
21 - #endif
22 - mfsr $r3, $mr0
23 - mfsr $r4, $mr1
24 - mfsr $r5, $mr4
25 - mfsr $r6, $mr6
26 - mfsr $r7, $mr7
27 - mfsr $r8, $mr8
28 - mfsr $r9, $ir0
29 - mfsr $r10, $ir1
30 - mfsr $r11, $ir2
31 - mfsr $r12, $ir3
32 - mfsr $r13, $ir9
33 - mfsr $r14, $ir10
34 - mfsr $r15, $ir12
35 - mfsr $r16, $ir13
36 - mfsr $r17, $ir14
37 - mfsr $r18, $ir15
38 - pushm $r0, $r19
39 - #if defined(CONFIG_FPU)
40 - jal store_fpu_for_suspend
41 - #endif
42 - tlbop FlushAll
43 - isb
44 -
45 - // transfer $sp from va to pa
46 - sethi $r0, hi20(PAGE_OFFSET)
47 - ori $r0, $r0, lo12(PAGE_OFFSET)
48 - movi $r2, PHYS_OFFSET
49 - sub $r1, $sp, $r0
50 - add $r2, $r1, $r2
51 -
52 - // store pa($sp) to sp_tmp
53 - sethi $r1, hi20(sp_tmp)
54 - swi $r2, [$r1 + lo12(sp_tmp)]
55 -
56 - pushm $r16, $r25
57 - pushm $r29, $r30
58 - #ifdef CONFIG_CACHE_L2
59 - jal dcache_wb_all_level
60 - #else
61 - jal cpu_dcache_wb_all
62 - #endif
63 - popm $r29, $r30
64 - popm $r16, $r25
65 -
66 - // get wake_mask and loop in standby
67 - la $r1, wake_mask
68 - lwi $r1, [$r1]
69 - self_loop:
70 - standby wake_grant
71 - mfsr $r2, $ir15
72 - and $r2, $r1, $r2
73 - beqz $r2, self_loop
74 -
75 - // set ipc to resume address
76 - la $r1, resume_addr
77 - lwi $r1, [$r1]
78 - mtsr $r1, $ipc
79 - isb
80 -
81 - // reset psw, turn off the address translation
82 - li $r2, 0x7000a
83 - mtsr $r2, $ipsw
84 - isb
85 -
86 - iret
87 - cpu_resume:
88 - // translate the address of sp_tmp variable to pa
89 - la $r1, sp_tmp
90 - sethi $r0, hi20(PAGE_OFFSET)
91 - ori $r0, $r0, lo12(PAGE_OFFSET)
92 - movi $r2, PHYS_OFFSET
93 - sub $r1, $r1, $r0
94 - add $r1, $r1, $r2
95 -
96 - // access the sp_tmp to get stack pointer
97 - lwi $sp, [$r1]
98 -
99 - popm $r0, $r19
100 - #if defined(CONFIG_HWZOL)
101 - mtusr $r0, $lb
102 - mtusr $r1, $lc
103 - mtusr $r2, $le
104 - #endif
105 - mtsr $r3, $mr0
106 - mtsr $r4, $mr1
107 - mtsr $r5, $mr4
108 - mtsr $r6, $mr6
109 - mtsr $r7, $mr7
110 - mtsr $r8, $mr8
111 - // set original psw to ipsw
112 - mtsr $r9, $ir1
113 -
114 - mtsr $r11, $ir2
115 - mtsr $r12, $ir3
116 -
117 - // set ipc to RR
118 - la $r13, RR
119 - mtsr $r13, $ir9
120 -
121 - mtsr $r14, $ir10
122 - mtsr $r15, $ir12
123 - mtsr $r16, $ir13
124 - mtsr $r17, $ir14
125 - mtsr $r18, $ir15
126 - popm $r0, $r31
127 -
128 - isb
129 - iret
130 - RR:
131 - ret
-53
arch/nds32/kernel/stacktrace.c
···
1 - // SPDX-License-Identifier: GPL-2.0
2 - // Copyright (C) 2005-2017 Andes Technology Corporation
3 -
4 - #include <linux/sched/debug.h>
5 - #include <linux/sched/task_stack.h>
6 - #include <linux/stacktrace.h>
7 - #include <linux/ftrace.h>
8 -
9 - void save_stack_trace(struct stack_trace *trace)
10 - {
11 - save_stack_trace_tsk(current, trace);
12 - }
13 - EXPORT_SYMBOL_GPL(save_stack_trace);
14 -
15 - void save_stack_trace_tsk(struct task_struct *tsk, struct stack_trace *trace)
16 - {
17 - unsigned long *fpn;
18 - int skip = trace->skip;
19 - int savesched;
20 - int graph_idx = 0;
21 -
22 - if (tsk == current) {
23 - __asm__ __volatile__("\tori\t%0, $fp, #0\n":"=r"(fpn));
24 - savesched = 1;
25 - } else {
26 - fpn = (unsigned long *)thread_saved_fp(tsk);
27 - savesched = 0;
28 - }
29 -
30 - while (!kstack_end(fpn) && !((unsigned long)fpn & 0x3)
31 - && (fpn >= (unsigned long *)TASK_SIZE)) {
32 - unsigned long lpp, fpp;
33 -
34 - lpp = fpn[LP_OFFSET];
35 - fpp = fpn[FP_OFFSET];
36 - if (!__kernel_text_address(lpp))
37 - break;
38 - else
39 - lpp = ftrace_graph_ret_addr(tsk, &graph_idx, lpp, NULL);
40 -
41 - if (savesched || !in_sched_functions(lpp)) {
42 - if (skip) {
43 - skip--;
44 - } else {
45 - trace->entries[trace->nr_entries++] = lpp;
46 - if (trace->nr_entries >= trace->max_entries)
47 - break;
48 - }
49 - }
50 - fpn = (unsigned long *)fpp;
51 - }
52 - }
53 - EXPORT_SYMBOL_GPL(save_stack_trace_tsk);
-84
arch/nds32/kernel/sys_nds32.c
···
1 - // SPDX-License-Identifier: GPL-2.0
2 - // Copyright (C) 2005-2017 Andes Technology Corporation
3 -
4 - #include <linux/syscalls.h>
5 - #include <linux/uaccess.h>
6 -
7 - #include <asm/cachectl.h>
8 - #include <asm/proc-fns.h>
9 - #include <asm/fpu.h>
10 - #include <asm/fp_udfiex_crtl.h>
11 -
12 - SYSCALL_DEFINE6(mmap2, unsigned long, addr, unsigned long, len,
13 - unsigned long, prot, unsigned long, flags,
14 - unsigned long, fd, unsigned long, pgoff)
15 - {
16 - if (pgoff & (~PAGE_MASK >> 12))
17 - return -EINVAL;
18 -
19 - return sys_mmap_pgoff(addr, len, prot, flags, fd,
20 - pgoff >> (PAGE_SHIFT - 12));
21 - }
22 -
23 - SYSCALL_DEFINE4(fadvise64_64_wrapper,int, fd, int, advice, loff_t, offset,
24 - loff_t, len)
25 - {
26 - return sys_fadvise64_64(fd, offset, len, advice);
27 - }
28 -
29 - SYSCALL_DEFINE3(cacheflush, unsigned int, start, unsigned int, end, int, cache)
30 - {
31 - struct vm_area_struct *vma;
32 - bool flushi = true, wbd = true;
33 -
34 - vma = find_vma(current->mm, start);
35 - if (!vma)
36 - return -EFAULT;
37 - switch (cache) {
38 - case ICACHE:
39 - wbd = false;
40 - break;
41 - case DCACHE:
42 - flushi = false;
43 - break;
44 - case BCACHE:
45 - break;
46 - default:
47 - return -EINVAL;
48 - }
49 - cpu_cache_wbinval_range_check(vma, start, end, flushi, wbd);
50 -
51 - return 0;
52 - }
53 -
54 - SYSCALL_DEFINE2(fp_udfiex_crtl, unsigned int, cmd, unsigned int, act)
55 - {
56 - #if IS_ENABLED(CONFIG_SUPPORT_DENORMAL_ARITHMETIC)
57 - int old_udf_iex;
58 -
59 - if (!used_math()) {
60 - load_fpu(&init_fpuregs);
61 - current->thread.fpu.UDF_IEX_trap = init_fpuregs.UDF_IEX_trap;
62 - set_used_math();
63 - }
64 -
65 - old_udf_iex = current->thread.fpu.UDF_IEX_trap;
66 - act &= (FPCSR_mskUDFE | FPCSR_mskIEXE);
67 -
68 - switch (cmd) {
69 - case DISABLE_UDF_IEX_TRAP:
70 - current->thread.fpu.UDF_IEX_trap &= ~act;
71 - break;
72 - case ENABLE_UDF_IEX_TRAP:
73 - current->thread.fpu.UDF_IEX_trap |= act;
74 - break;
75 - case GET_UDF_IEX_TRAP:
76 - break;
77 - default:
78 - return -EINVAL;
79 - }
80 - return old_udf_iex;
81 - #else
82 - return -ENOTSUPP;
83 - #endif
84 - }
-17
arch/nds32/kernel/syscall_table.c
···
1 - // SPDX-License-Identifier: GPL-2.0
2 - // Copyright (C) 2005-2017 Andes Technology Corporation
3 -
4 - #include <linux/syscalls.h>
5 - #include <linux/signal.h>
6 - #include <linux/unistd.h>
7 - #include <asm/syscalls.h>
8 -
9 - #undef __SYSCALL
10 - #define __SYSCALL(nr, call) [nr] = (call),
11 -
12 - #define sys_rt_sigreturn sys_rt_sigreturn_wrapper
13 - #define sys_fadvise64_64 sys_fadvise64_64_wrapper
14 - void *sys_call_table[__NR_syscalls] __aligned(8192) = {
15 - [0 ... __NR_syscalls - 1] = sys_ni_syscall,
16 - #include <asm/unistd.h>
17 - };
-11
arch/nds32/kernel/time.c
···
1 - // SPDX-License-Identifier: GPL-2.0
2 - // Copyright (C) 2005-2017 Andes Technology Corporation
3 -
4 - #include <linux/clocksource.h>
5 - #include <linux/of_clk.h>
6 -
7 - void __init time_init(void)
8 - {
9 - of_clk_init(NULL);
10 - timer_probe();
11 - }
-354
arch/nds32/kernel/traps.c
···
1 - // SPDX-License-Identifier: GPL-2.0
2 - // Copyright (C) 2005-2017 Andes Technology Corporation
3 -
4 - #include <linux/module.h>
5 - #include <linux/personality.h>
6 - #include <linux/kallsyms.h>
7 - #include <linux/hardirq.h>
8 - #include <linux/kdebug.h>
9 - #include <linux/sched/task_stack.h>
10 - #include <linux/uaccess.h>
11 - #include <linux/ftrace.h>
12 -
13 - #include <asm/proc-fns.h>
14 - #include <asm/unistd.h>
15 - #include <asm/fpu.h>
16 -
17 - #include <linux/ptrace.h>
18 - #include <nds32_intrinsic.h>
19 -
20 - extern void show_pte(struct mm_struct *mm, unsigned long addr);
21 -
22 - /*
23 - * Dump out the contents of some memory nicely...
24 - */
25 - void dump_mem(const char *lvl, unsigned long bottom, unsigned long top)
26 - {
27 - unsigned long first;
28 - int i;
29 -
30 - pr_emerg("%s(0x%08lx to 0x%08lx)\n", lvl, bottom, top);
31 -
32 - for (first = bottom & ~31; first < top; first += 32) {
33 - unsigned long p;
34 - char str[sizeof(" 12345678") * 8 + 1];
35 -
36 - memset(str, ' ', sizeof(str));
37 - str[sizeof(str) - 1] = '\0';
38 -
39 - for (p = first, i = 0; i < 8 && p < top; i++, p += 4) {
40 - if (p >= bottom && p < top) {
41 - unsigned long val;
42 -
43 - if (get_kernel_nofault(val,
44 - (unsigned long *)p) == 0)
45 - sprintf(str + i * 9, " %08lx", val);
46 - else
47 - sprintf(str + i * 9, " ????????");
48 - }
49 - }
50 - pr_emerg("%s%04lx:%s\n", lvl, first & 0xffff, str);
51 - }
52 - }
53 -
54 - EXPORT_SYMBOL(dump_mem);
55 -
56 - #define LOOP_TIMES (100)
57 - static void __dump(struct task_struct *tsk, unsigned long *base_reg,
58 - const char *loglvl)
59 - {
60 - unsigned long ret_addr;
61 - int cnt = LOOP_TIMES, graph = 0;
62 - printk("%sCall Trace:\n", loglvl);
63 - if (!IS_ENABLED(CONFIG_FRAME_POINTER)) {
64 - while (!kstack_end(base_reg)) {
65 - ret_addr = *base_reg++;
66 - if (__kernel_text_address(ret_addr)) {
67 - ret_addr = ftrace_graph_ret_addr(
68 - tsk, &graph, ret_addr, NULL);
69 - print_ip_sym(loglvl, ret_addr);
70 - }
71 - if (--cnt < 0)
72 - break;
73 - }
74 - } else {
75 - while (!kstack_end((void *)base_reg) &&
76 - !((unsigned long)base_reg & 0x3) &&
77 - ((unsigned long)base_reg >= TASK_SIZE)) {
78 - unsigned long next_fp;
79 - ret_addr = base_reg[LP_OFFSET];
80 - next_fp = base_reg[FP_OFFSET];
81 - if (__kernel_text_address(ret_addr)) {
82 -
83 - ret_addr = ftrace_graph_ret_addr(
84 - tsk, &graph, ret_addr, NULL);
85 - print_ip_sym(loglvl, ret_addr);
86 - }
87 - if (--cnt < 0)
88 - break;
89 - base_reg = (unsigned long *)next_fp;
90 - }
91 - }
92 - printk("%s\n", loglvl);
93 - }
94 -
95 - void show_stack(struct task_struct *tsk, unsigned long *sp, const char *loglvl)
96 - {
97 - unsigned long *base_reg;
98 -
99 - if (!tsk)
100 - tsk = current;
101 - if (!IS_ENABLED(CONFIG_FRAME_POINTER)) {
102 - if (tsk != current)
103 - base_reg = (unsigned long *)(tsk->thread.cpu_context.sp);
104 - else
105 - __asm__ __volatile__("\tori\t%0, $sp, #0\n":"=r"(base_reg));
106 - } else {
107 - if (tsk != current)
108 - base_reg = (unsigned long *)(tsk->thread.cpu_context.fp);
109 - else
110 - __asm__ __volatile__("\tori\t%0, $fp, #0\n":"=r"(base_reg));
111 - }
112 - __dump(tsk, base_reg, loglvl);
113 - barrier();
114 - }
115 -
116 - DEFINE_SPINLOCK(die_lock);
117 -
118 - /*
119 - * This function is protected against re-entrancy.
120 - */
121 - void __noreturn die(const char *str, struct pt_regs *regs, int err)
122 - {
123 - struct task_struct *tsk = current;
124 - static int die_counter;
125 -
126 - console_verbose();
127 - spin_lock_irq(&die_lock);
128 - bust_spinlocks(1);
129 -
130 - pr_emerg("Internal error: %s: %x [#%d]\n", str, err, ++die_counter);
131 - print_modules();
132 - pr_emerg("CPU: %i\n", smp_processor_id());
133 - show_regs(regs);
134 - pr_emerg("Process %s (pid: %d, stack limit = 0x%p)\n",
135 - tsk->comm, tsk->pid, end_of_stack(tsk));
136 -
137 - if (!user_mode(regs) || in_interrupt()) {
138 - dump_mem("Stack: ", regs->sp, (regs->sp + PAGE_SIZE) & PAGE_MASK);
139 - dump_stack();
140 - }
141 -
142 - bust_spinlocks(0);
143 - spin_unlock_irq(&die_lock);
144 - make_task_dead(SIGSEGV);
145 - }
146 -
147 - EXPORT_SYMBOL(die);
148 -
149 - void die_if_kernel(const char *str, struct pt_regs *regs, int err)
150 - {
151 - if (user_mode(regs))
152 - return;
153 -
154 - die(str, regs, err);
155 - }
156 -
157 - int bad_syscall(int n, struct pt_regs *regs)
158 - {
159 - if (current->personality != PER_LINUX) {
160 - send_sig(SIGSEGV, current, 1);
161 - return regs->uregs[0];
162 - }
163 -
164 - force_sig_fault(SIGILL, ILL_ILLTRP,
165 - (void __user *)instruction_pointer(regs) - 4);
166 - die_if_kernel("Oops - bad syscall", regs, n);
167 - return regs->uregs[0];
168 - }
169 -
170 - void __pte_error(const char *file, int line, unsigned long val)
171 - {
172 - pr_emerg("%s:%d: bad pte %08lx.\n", file, line, val);
173 - }
174 -
175 - void __pmd_error(const char *file, int line, unsigned long val)
176 - {
177 - pr_emerg("%s:%d: bad pmd %08lx.\n", file, line, val);
178 - }
179 -
180 - void __pgd_error(const char *file, int line, unsigned long val)
181 - {
182 - pr_emerg("%s:%d: bad pgd %08lx.\n", file, line, val);
183 - }
184 -
185 - extern char *exception_vector, *exception_vector_end;
186 - void __init early_trap_init(void)
187 - {
188 - unsigned long ivb = 0;
189 - unsigned long base = PAGE_OFFSET;
190 -
191 - memcpy((unsigned long *)base, (unsigned long *)&exception_vector,
192 - ((unsigned long)&exception_vector_end -
193 - (unsigned long)&exception_vector));
194 - ivb = __nds32__mfsr(NDS32_SR_IVB);
195 - /* Check platform support. */
196 - if (((ivb & IVB_mskNIVIC) >> IVB_offNIVIC) < 2)
197 - panic
198 - ("IVIC mode is not allowed on the platform with interrupt controller\n");
199 - __nds32__mtsr((ivb & ~IVB_mskESZ) | (IVB_valESZ16 << IVB_offESZ) |
200 - IVB_BASE, NDS32_SR_IVB);
201 - __nds32__mtsr(INT_MASK_INITAIAL_VAL, NDS32_SR_INT_MASK);
202 -
203 - /*
204 - * 0x800 = 128 vectors * 16byte.
205 - * It should be enough to flush a page.
206 - */
207 - cpu_cache_wbinval_page(base, true);
208 - }
209 -
210 - static void send_sigtrap(struct pt_regs *regs, int error_code, int si_code)
211 - {
212 - struct task_struct *tsk = current;
213 -
214 - tsk->thread.trap_no = ENTRY_DEBUG_RELATED;
215 - tsk->thread.error_code = error_code;
216 -
217 - force_sig_fault(SIGTRAP, si_code,
218 - (void __user *)instruction_pointer(regs));
219 - }
220 -
221 - void do_debug_trap(unsigned long entry, unsigned long addr,
222 - unsigned long type, struct pt_regs *regs)
223 - {
224 - if (notify_die(DIE_OOPS, "Oops", regs, addr, type, SIGTRAP)
225 - == NOTIFY_STOP)
226 - return;
227 -
228 - if (user_mode(regs)) {
229 - /* trap_signal */
230 - send_sigtrap(regs, 0, TRAP_BRKPT);
231 - } else {
232 - /* kernel_trap */
233 - if (!fixup_exception(regs))
234 - die("unexpected kernel_trap", regs, 0);
235 - }
236 - }
237 -
238 - void unhandled_interruption(struct pt_regs *regs)
239 - {
240 - pr_emerg("unhandled_interruption\n");
241 - show_regs(regs);
242 - if (!user_mode(regs))
243 - make_task_dead(SIGKILL);
244 - force_sig(SIGKILL);
245 - }
246 -
247 - void unhandled_exceptions(unsigned long entry, unsigned long addr,
248 - unsigned long type, struct pt_regs *regs)
249 - {
250 - pr_emerg("Unhandled Exception: entry: %lx addr:%lx itype:%lx\n", entry,
251 - addr, type);
252 - show_regs(regs);
253 - if (!user_mode(regs))
254 - make_task_dead(SIGKILL);
255 - force_sig(SIGKILL);
256 - }
257 -
258 - extern int do_page_fault(unsigned long entry, unsigned long addr,
259 - unsigned int error_code, struct pt_regs *regs);
260 -
261 - /*
262 - * 2:DEF dispatch for TLB MISC exception handler
263 - */
264 -
265 - void do_dispatch_tlb_misc(unsigned long entry, unsigned long addr,
266 - unsigned long type, struct pt_regs *regs)
267 - {
268 - type = type & (ITYPE_mskINST | ITYPE_mskETYPE);
269 - if ((type & ITYPE_mskETYPE) < 5) {
270 - /* Permission exceptions */
271 - do_page_fault(entry, addr, type, regs);
272 - } else
273 - unhandled_exceptions(entry, addr, type, regs);
274 - }
275 -
276 - void do_revinsn(struct pt_regs *regs)
277 - {
278 - pr_emerg("Reserved Instruction\n");
279 - show_regs(regs);
280 - if (!user_mode(regs))
281 - make_task_dead(SIGILL);
282 - force_sig(SIGILL);
283 - }
284 -
285 - #ifdef CONFIG_ALIGNMENT_TRAP
286 - extern int unalign_access_mode;
287 - extern int do_unaligned_access(unsigned long addr, struct pt_regs *regs);
288 - #endif
289 - void do_dispatch_general(unsigned long entry, unsigned long addr,
290 - unsigned long itype, struct pt_regs *regs,
291 - unsigned long oipc)
292 - {
293 - unsigned int swid = itype >> ITYPE_offSWID;
294 - unsigned long type = itype & (ITYPE_mskINST | ITYPE_mskETYPE);
295 - if (type == ETYPE_ALIGNMENT_CHECK) {
296 - #ifdef CONFIG_ALIGNMENT_TRAP
297 - /* Alignment check */
298 - if (user_mode(regs) && unalign_access_mode) {
299 - int ret;
300 - ret = do_unaligned_access(addr, regs);
301 -
302 - if (ret == 0)
303 - return;
304 -
305 - if (ret == -EFAULT)
306 - pr_emerg
307 - ("Unhandled unaligned access exception\n");
308 - }
309 - #endif
310 - do_page_fault(entry, addr, type, regs);
311 - } else if (type == ETYPE_RESERVED_INSTRUCTION) {
312 - /* Reserved instruction */
313 - do_revinsn(regs);
314 - } else if (type == ETYPE_COPROCESSOR) {
315 - /* Coprocessor */
316 - #if IS_ENABLED(CONFIG_FPU)
317 - unsigned int fucop_exist = __nds32__mfsr(NDS32_SR_FUCOP_EXIST);
318 - unsigned int cpid = ((itype & ITYPE_mskCPID) >> ITYPE_offCPID);
319 -
320 - if ((cpid == FPU_CPID) &&
321 - (fucop_exist & FUCOP_EXIST_mskCP0ISFPU)) {
322 - unsigned int subtype = (itype & ITYPE_mskSTYPE);
323 -
324 - if (true == do_fpu_exception(subtype, regs))
325 - return;
326 - }
327 - #endif
328 - unhandled_exceptions(entry, addr, type, regs);
329 - } else if (type == ETYPE_TRAP && swid == SWID_RAISE_INTERRUPT_LEVEL) {
330 - /* trap, used on v3 EDM target debugging workaround */
331 - /*
332 - * DIPC(OIPC) is passed as parameter before
333 - * interrupt is enabled, so the DIPC will not be corrupted
334 - * even though interrupts are coming in
335 - */
336 - /*
337 - * 1. update ipc
338 - * 2. update pt_regs ipc with oipc
339 - * 3. update pt_regs ipsw (clear DEX)
340 - */
341 - __asm__ volatile ("mtsr %0, $IPC\n\t"::"r" (oipc));
342 - regs->ipc = oipc;
343 - if (regs->pipsw & PSW_mskDEX) {
344 - pr_emerg
345 - ("Nested Debug exception is possibly happened\n");
346 - pr_emerg("ipc:%08x pipc:%08x\n",
347 - (unsigned int)regs->ipc,
348 - (unsigned int)regs->pipc);
349 - }
350 - do_debug_trap(entry, addr, itype, regs);
351 - regs->ipsw &= ~PSW_mskDEX;
352 - } else
353 - unhandled_exceptions(entry, addr, type, regs);
354 - }
-231
arch/nds32/kernel/vdso.c
··· 1 - // SPDX-License-Identifier: GPL-2.0 2 - // Copyright (C) 2012 ARM Limited 3 - // Copyright (C) 2005-2017 Andes Technology Corporation 4 - 5 - #include <linux/cache.h> 6 - #include <linux/clocksource.h> 7 - #include <linux/elf.h> 8 - #include <linux/err.h> 9 - #include <linux/errno.h> 10 - #include <linux/gfp.h> 11 - #include <linux/kernel.h> 12 - #include <linux/mm.h> 13 - #include <linux/sched.h> 14 - #include <linux/signal.h> 15 - #include <linux/slab.h> 16 - #include <linux/timekeeper_internal.h> 17 - #include <linux/vmalloc.h> 18 - #include <linux/random.h> 19 - 20 - #include <asm/cacheflush.h> 21 - #include <asm/vdso.h> 22 - #include <asm/vdso_datapage.h> 23 - #include <asm/vdso_timer_info.h> 24 - #include <asm/cache_info.h> 25 - extern struct cache_info L1_cache_info[2]; 26 - extern char vdso_start[], vdso_end[]; 27 - static unsigned long vdso_pages __ro_after_init; 28 - static unsigned long timer_mapping_base; 29 - 30 - struct timer_info_t timer_info = { 31 - .cycle_count_down = true, 32 - .mapping_base = EMPTY_TIMER_MAPPING, 33 - .cycle_count_reg_offset = EMPTY_REG_OFFSET 34 - }; 35 - /* 36 - * The vDSO data page. 
37 - */ 38 - static struct page *no_pages[] = { NULL }; 39 - 40 - static union { 41 - struct vdso_data data; 42 - u8 page[PAGE_SIZE]; 43 - } vdso_data_store __page_aligned_data; 44 - struct vdso_data *vdso_data = &vdso_data_store.data; 45 - static struct vm_special_mapping vdso_spec[2] __ro_after_init = { 46 - { 47 - .name = "[vvar]", 48 - .pages = no_pages, 49 - }, 50 - { 51 - .name = "[vdso]", 52 - }, 53 - }; 54 - 55 - static void get_timer_node_info(void) 56 - { 57 - timer_mapping_base = timer_info.mapping_base; 58 - vdso_data->cycle_count_offset = 59 - timer_info.cycle_count_reg_offset; 60 - vdso_data->cycle_count_down = 61 - timer_info.cycle_count_down; 62 - } 63 - 64 - static int __init vdso_init(void) 65 - { 66 - int i; 67 - struct page **vdso_pagelist; 68 - 69 - if (memcmp(vdso_start, "\177ELF", 4)) { 70 - pr_err("vDSO is not a valid ELF object!\n"); 71 - return -EINVAL; 72 - } 73 - /* Creat a timer io mapping to get clock cycles counter */ 74 - get_timer_node_info(); 75 - 76 - vdso_pages = (vdso_end - vdso_start) >> PAGE_SHIFT; 77 - pr_info("vdso: %ld pages (%ld code @ %p, %ld data @ %p)\n", 78 - vdso_pages + 1, vdso_pages, vdso_start, 1L, vdso_data); 79 - 80 - /* Allocate the vDSO pagelist */ 81 - vdso_pagelist = kcalloc(vdso_pages, sizeof(struct page *), GFP_KERNEL); 82 - if (vdso_pagelist == NULL) 83 - return -ENOMEM; 84 - 85 - for (i = 0; i < vdso_pages; i++) 86 - vdso_pagelist[i] = virt_to_page(vdso_start + i * PAGE_SIZE); 87 - vdso_spec[1].pages = &vdso_pagelist[0]; 88 - 89 - return 0; 90 - } 91 - 92 - arch_initcall(vdso_init); 93 - 94 - unsigned long inline vdso_random_addr(unsigned long vdso_mapping_len) 95 - { 96 - unsigned long start = current->mm->mmap_base, end, offset, addr; 97 - start = PAGE_ALIGN(start); 98 - 99 - /* Round the lowest possible end address up to a PMD boundary. 
*/ 100 - end = (start + vdso_mapping_len + PMD_SIZE - 1) & PMD_MASK; 101 - if (end >= TASK_SIZE) 102 - end = TASK_SIZE; 103 - end -= vdso_mapping_len; 104 - 105 - if (end > start) { 106 - offset = get_random_int() % (((end - start) >> PAGE_SHIFT) + 1); 107 - addr = start + (offset << PAGE_SHIFT); 108 - } else { 109 - addr = start; 110 - } 111 - return addr; 112 - } 113 - 114 - int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp) 115 - { 116 - struct mm_struct *mm = current->mm; 117 - unsigned long vdso_base, vdso_text_len, vdso_mapping_len; 118 - struct vm_area_struct *vma; 119 - unsigned long addr = 0; 120 - pgprot_t prot; 121 - int ret, vvar_page_num = 2; 122 - 123 - vdso_text_len = vdso_pages << PAGE_SHIFT; 124 - 125 - if(timer_mapping_base == EMPTY_VALUE) 126 - vvar_page_num = 1; 127 - /* Be sure to map the data page */ 128 - vdso_mapping_len = vdso_text_len + vvar_page_num * PAGE_SIZE; 129 - #ifdef CONFIG_CPU_CACHE_ALIASING 130 - vdso_mapping_len += L1_cache_info[DCACHE].aliasing_num - 1; 131 - #endif 132 - 133 - if (mmap_write_lock_killable(mm)) 134 - return -EINTR; 135 - 136 - addr = vdso_random_addr(vdso_mapping_len); 137 - vdso_base = get_unmapped_area(NULL, addr, vdso_mapping_len, 0, 0); 138 - if (IS_ERR_VALUE(vdso_base)) { 139 - ret = vdso_base; 140 - goto up_fail; 141 - } 142 - 143 - #ifdef CONFIG_CPU_CACHE_ALIASING 144 - { 145 - unsigned int aliasing_mask = 146 - L1_cache_info[DCACHE].aliasing_mask; 147 - unsigned int page_colour_ofs; 148 - page_colour_ofs = ((unsigned int)vdso_data & aliasing_mask) - 149 - (vdso_base & aliasing_mask); 150 - vdso_base += page_colour_ofs & aliasing_mask; 151 - } 152 - #endif 153 - 154 - vma = _install_special_mapping(mm, vdso_base, vvar_page_num * PAGE_SIZE, 155 - VM_READ | VM_MAYREAD, &vdso_spec[0]); 156 - if (IS_ERR(vma)) { 157 - ret = PTR_ERR(vma); 158 - goto up_fail; 159 - } 160 - 161 - /*Map vdata to user space */ 162 - ret = io_remap_pfn_range(vma, vdso_base, 163 - virt_to_phys(vdso_data) >> 
PAGE_SHIFT, 164 - PAGE_SIZE, vma->vm_page_prot); 165 - if (ret) 166 - goto up_fail; 167 - 168 - /*Map timer to user space */ 169 - vdso_base += PAGE_SIZE; 170 - prot = __pgprot(_PAGE_V | _PAGE_M_UR_KR | _PAGE_D | _PAGE_C_DEV); 171 - ret = io_remap_pfn_range(vma, vdso_base, timer_mapping_base >> PAGE_SHIFT, 172 - PAGE_SIZE, prot); 173 - if (ret) 174 - goto up_fail; 175 - 176 - /*Map vdso to user space */ 177 - vdso_base += PAGE_SIZE; 178 - mm->context.vdso = (void *)vdso_base; 179 - vma = _install_special_mapping(mm, vdso_base, vdso_text_len, 180 - VM_READ | VM_EXEC | 181 - VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC, 182 - &vdso_spec[1]); 183 - if (IS_ERR(vma)) { 184 - ret = PTR_ERR(vma); 185 - goto up_fail; 186 - } 187 - 188 - mmap_write_unlock(mm); 189 - return 0; 190 - 191 - up_fail: 192 - mm->context.vdso = NULL; 193 - mmap_write_unlock(mm); 194 - return ret; 195 - } 196 - 197 - static void vdso_write_begin(struct vdso_data *vdata) 198 - { 199 - ++vdso_data->seq_count; 200 - smp_wmb(); /* Pairs with smp_rmb in vdso_read_retry */ 201 - } 202 - 203 - static void vdso_write_end(struct vdso_data *vdata) 204 - { 205 - smp_wmb(); /* Pairs with smp_rmb in vdso_read_begin */ 206 - ++vdso_data->seq_count; 207 - } 208 - 209 - void update_vsyscall(struct timekeeper *tk) 210 - { 211 - vdso_write_begin(vdso_data); 212 - vdso_data->cs_mask = tk->tkr_mono.mask; 213 - vdso_data->cs_mult = tk->tkr_mono.mult; 214 - vdso_data->cs_shift = tk->tkr_mono.shift; 215 - vdso_data->cs_cycle_last = tk->tkr_mono.cycle_last; 216 - vdso_data->wtm_clock_sec = tk->wall_to_monotonic.tv_sec; 217 - vdso_data->wtm_clock_nsec = tk->wall_to_monotonic.tv_nsec; 218 - vdso_data->xtime_clock_sec = tk->xtime_sec; 219 - vdso_data->xtime_clock_nsec = tk->tkr_mono.xtime_nsec; 220 - vdso_data->xtime_coarse_sec = tk->xtime_sec; 221 - vdso_data->xtime_coarse_nsec = tk->tkr_mono.xtime_nsec >> 222 - tk->tkr_mono.shift; 223 - vdso_data->hrtimer_res = hrtimer_resolution; 224 - vdso_write_end(vdso_data); 225 - } 226 - 
227 - void update_vsyscall_tz(void) 228 - { 229 - vdso_data->tz_minuteswest = sys_tz.tz_minuteswest; 230 - vdso_data->tz_dsttime = sys_tz.tz_dsttime; 231 - }
-2
arch/nds32/kernel/vdso/.gitignore
··· 1 - # SPDX-License-Identifier: GPL-2.0-only 2 - vdso.lds
-79
arch/nds32/kernel/vdso/Makefile
··· 1 - # SPDX-License-Identifier: GPL-2.0-only 2 - # 3 - # Building a vDSO image for nds32. 4 - # 5 - # Author: Will Deacon <will.deacon@arm.com> 6 - # Heavily based on the vDSO Makefiles for other archs. 7 - # 8 - 9 - obj-vdso := note.o datapage.o sigreturn.o gettimeofday.o 10 - 11 - # Build rules 12 - targets := $(obj-vdso) vdso.so vdso.so.dbg 13 - obj-vdso := $(addprefix $(obj)/, $(obj-vdso)) 14 - 15 - ccflags-y := -shared -fno-common -fno-builtin -nostdlib -fPIC -Wl,-shared -g \ 16 - -Wl,-soname=linux-vdso.so.1 -Wl,--hash-style=sysv 17 - 18 - # Disable gcov profiling for VDSO code 19 - GCOV_PROFILE := n 20 - 21 - 22 - obj-y += vdso.o 23 - targets += vdso.lds 24 - CPPFLAGS_vdso.lds += -P -C -U$(ARCH) 25 - 26 - # Force dependency 27 - $(obj)/vdso.o : $(obj)/vdso.so 28 - 29 - # Link rule for the .so file, .lds has to be first 30 - $(obj)/vdso.so.dbg: $(obj)/vdso.lds $(obj-vdso) FORCE 31 - $(call if_changed,vdsold) 32 - 33 - 34 - # Strip rule for the .so file 35 - $(obj)/%.so: OBJCOPYFLAGS := -S 36 - $(obj)/%.so: $(obj)/%.so.dbg FORCE 37 - $(call if_changed,objcopy) 38 - 39 - # Generate VDSO offsets using helper script 40 - gen-vdsosym := $(srctree)/$(src)/gen_vdso_offsets.sh 41 - quiet_cmd_vdsosym = VDSOSYM $@ 42 - cmd_vdsosym = $(NM) $< | $(gen-vdsosym) | LC_ALL=C sort > $@ 43 - 44 - include/generated/vdso-offsets.h: $(obj)/vdso.so.dbg FORCE 45 - $(call if_changed,vdsosym) 46 - 47 - 48 - 49 - # Assembly rules for the .S files 50 - 51 - sigreturn.o : sigreturn.S 52 - $(call if_changed_dep,vdsoas) 53 - 54 - note.o : note.S 55 - $(call if_changed_dep,vdsoas) 56 - 57 - datapage.o : datapage.S 58 - $(call if_changed_dep,vdsoas) 59 - 60 - gettimeofday.o : gettimeofday.c FORCE 61 - $(call if_changed_dep,vdsocc) 62 - 63 - # Actual build commands 64 - quiet_cmd_vdsold = VDSOL $@ 65 - cmd_vdsold = $(CC) $(c_flags) -Wl,-n -Wl,-T $(real-prereqs) -o $@ 66 - quiet_cmd_vdsoas = VDSOA $@ 67 - cmd_vdsoas = $(CC) $(a_flags) -c -o $@ $< 68 - quiet_cmd_vdsocc = VDSOC $@ 69 - 

cmd_vdsocc = $(CC) $(c_flags) -c -o $@ $< 70 - 71 - # Install commands for the unstripped file 72 - quiet_cmd_vdso_install = INSTALL $@ 73 - cmd_vdso_install = cp $(obj)/$@.dbg $(MODLIB)/vdso/$@ 74 - 75 - vdso.so: $(obj)/vdso.so.dbg 76 - @mkdir -p $(MODLIB)/vdso 77 - $(call cmd,vdso_install) 78 - 79 - vdso_install: vdso.so
-21
arch/nds32/kernel/vdso/datapage.S
··· 1 - // SPDX-License-Identifier: GPL-2.0 2 - // Copyright (C) 2005-2017 Andes Technology Corporation 3 - 4 - #include <linux/linkage.h> 5 - #include <asm/page.h> 6 - 7 - ENTRY(__get_timerpage) 8 - sethi $r0, hi20(. + PAGE_SIZE + 8) 9 - ori $r0, $r0, lo12(. + PAGE_SIZE + 4) 10 - mfusr $r1, $pc 11 - sub $r0, $r1, $r0 12 - ret 13 - ENDPROC(__get_timerpage) 14 - 15 - ENTRY(__get_datapage) 16 - sethi $r0, hi20(. + 2*PAGE_SIZE + 8) 17 - ori $r0, $r0, lo12(. + 2*PAGE_SIZE + 4) 18 - mfusr $r1, $pc 19 - sub $r0, $r1, $r0 20 - ret 21 - ENDPROC(__get_datapage)
-15
arch/nds32/kernel/vdso/gen_vdso_offsets.sh
··· 1 - #!/bin/sh 2 - 3 - # 4 - # Match symbols in the DSO that look like VDSO_*; produce a header file 5 - # of constant offsets into the shared object. 6 - # 7 - # Doing this inside the Makefile will break the $(filter-out) function, 8 - # causing Kbuild to rebuild the vdso-offsets header file every time. 9 - # 10 - # Author: Will Deacon <will.deacon@arm.com> 11 - # 12 - 13 - LC_ALL=C 14 - sed -n -e 's/^00*/0/' -e \ 15 - 's/^\([0-9a-fA-F]*\) . VDSO_\([a-zA-Z0-9_]*\)$/\#define vdso_offset_\2\t0x\1/p'
-269
arch/nds32/kernel/vdso/gettimeofday.c
··· 1 - // SPDX-License-Identifier: GPL-2.0 2 - // Copyright (C) 2005-2017 Andes Technology Corporation 3 - 4 - #include <linux/compiler.h> 5 - #include <linux/hrtimer.h> 6 - #include <linux/time.h> 7 - #include <asm/io.h> 8 - #include <asm/barrier.h> 9 - #include <asm/bug.h> 10 - #include <asm/page.h> 11 - #include <asm/unistd.h> 12 - #include <asm/vdso_datapage.h> 13 - #include <asm/vdso_timer_info.h> 14 - #include <asm/asm-offsets.h> 15 - 16 - #define X(x) #x 17 - #define Y(x) X(x) 18 - 19 - extern struct vdso_data *__get_datapage(void); 20 - extern struct vdso_data *__get_timerpage(void); 21 - 22 - static notrace unsigned int __vdso_read_begin(const struct vdso_data *vdata) 23 - { 24 - u32 seq; 25 - repeat: 26 - seq = READ_ONCE(vdata->seq_count); 27 - if (seq & 1) { 28 - cpu_relax(); 29 - goto repeat; 30 - } 31 - return seq; 32 - } 33 - 34 - static notrace unsigned int vdso_read_begin(const struct vdso_data *vdata) 35 - { 36 - unsigned int seq; 37 - 38 - seq = __vdso_read_begin(vdata); 39 - 40 - smp_rmb(); /* Pairs with smp_wmb in vdso_write_end */ 41 - return seq; 42 - } 43 - 44 - static notrace int vdso_read_retry(const struct vdso_data *vdata, u32 start) 45 - { 46 - smp_rmb(); /* Pairs with smp_wmb in vdso_write_begin */ 47 - return vdata->seq_count != start; 48 - } 49 - 50 - static notrace long clock_gettime_fallback(clockid_t _clkid, 51 - struct __kernel_old_timespec *_ts) 52 - { 53 - register struct __kernel_old_timespec *ts asm("$r1") = _ts; 54 - register clockid_t clkid asm("$r0") = _clkid; 55 - register long ret asm("$r0"); 56 - 57 - asm volatile ("movi $r15, %3\n" 58 - "syscall 0x0\n" 59 - :"=r" (ret) 60 - :"r"(clkid), "r"(ts), "i"(__NR_clock_gettime) 61 - :"$r15", "memory"); 62 - 63 - return ret; 64 - } 65 - 66 - static notrace int do_realtime_coarse(struct __kernel_old_timespec *ts, 67 - struct vdso_data *vdata) 68 - { 69 - u32 seq; 70 - 71 - do { 72 - seq = vdso_read_begin(vdata); 73 - 74 - ts->tv_sec = vdata->xtime_coarse_sec; 75 - ts->tv_nsec = 
vdata->xtime_coarse_nsec; 76 - 77 - } while (vdso_read_retry(vdata, seq)); 78 - return 0; 79 - } 80 - 81 - static notrace int do_monotonic_coarse(struct __kernel_old_timespec *ts, 82 - struct vdso_data *vdata) 83 - { 84 - u32 seq; 85 - u64 ns; 86 - 87 - do { 88 - seq = vdso_read_begin(vdata); 89 - 90 - ts->tv_sec = vdata->xtime_coarse_sec + vdata->wtm_clock_sec; 91 - ns = vdata->xtime_coarse_nsec + vdata->wtm_clock_nsec; 92 - 93 - } while (vdso_read_retry(vdata, seq)); 94 - 95 - ts->tv_sec += __iter_div_u64_rem(ns, NSEC_PER_SEC, &ns); 96 - ts->tv_nsec = ns; 97 - 98 - return 0; 99 - } 100 - 101 - static notrace inline u64 vgetsns(struct vdso_data *vdso) 102 - { 103 - u32 cycle_now; 104 - u32 cycle_delta; 105 - u32 *timer_cycle_base; 106 - 107 - timer_cycle_base = 108 - (u32 *) ((char *)__get_timerpage() + vdso->cycle_count_offset); 109 - cycle_now = readl_relaxed(timer_cycle_base); 110 - if (true == vdso->cycle_count_down) 111 - cycle_now = ~(*timer_cycle_base); 112 - cycle_delta = cycle_now - (u32) vdso->cs_cycle_last; 113 - return ((u64) cycle_delta & vdso->cs_mask) * vdso->cs_mult; 114 - } 115 - 116 - static notrace int do_realtime(struct __kernel_old_timespec *ts, struct vdso_data *vdata) 117 - { 118 - unsigned count; 119 - u64 ns; 120 - do { 121 - count = vdso_read_begin(vdata); 122 - ts->tv_sec = vdata->xtime_clock_sec; 123 - ns = vdata->xtime_clock_nsec; 124 - ns += vgetsns(vdata); 125 - ns >>= vdata->cs_shift; 126 - } while (vdso_read_retry(vdata, count)); 127 - 128 - ts->tv_sec += __iter_div_u64_rem(ns, NSEC_PER_SEC, &ns); 129 - ts->tv_nsec = ns; 130 - 131 - return 0; 132 - } 133 - 134 - static notrace int do_monotonic(struct __kernel_old_timespec *ts, struct vdso_data *vdata) 135 - { 136 - u64 ns; 137 - u32 seq; 138 - 139 - do { 140 - seq = vdso_read_begin(vdata); 141 - 142 - ts->tv_sec = vdata->xtime_clock_sec; 143 - ns = vdata->xtime_clock_nsec; 144 - ns += vgetsns(vdata); 145 - ns >>= vdata->cs_shift; 146 - 147 - ts->tv_sec += vdata->wtm_clock_sec; 148 
- ns += vdata->wtm_clock_nsec; 149 - 150 - } while (vdso_read_retry(vdata, seq)); 151 - 152 - ts->tv_sec += __iter_div_u64_rem(ns, NSEC_PER_SEC, &ns); 153 - ts->tv_nsec = ns; 154 - 155 - return 0; 156 - } 157 - 158 - notrace int __vdso_clock_gettime(clockid_t clkid, struct __kernel_old_timespec *ts) 159 - { 160 - struct vdso_data *vdata; 161 - int ret = -1; 162 - 163 - vdata = __get_datapage(); 164 - if (vdata->cycle_count_offset == EMPTY_REG_OFFSET) 165 - return clock_gettime_fallback(clkid, ts); 166 - 167 - switch (clkid) { 168 - case CLOCK_REALTIME_COARSE: 169 - ret = do_realtime_coarse(ts, vdata); 170 - break; 171 - case CLOCK_MONOTONIC_COARSE: 172 - ret = do_monotonic_coarse(ts, vdata); 173 - break; 174 - case CLOCK_REALTIME: 175 - ret = do_realtime(ts, vdata); 176 - break; 177 - case CLOCK_MONOTONIC: 178 - ret = do_monotonic(ts, vdata); 179 - break; 180 - default: 181 - break; 182 - } 183 - 184 - if (ret) 185 - ret = clock_gettime_fallback(clkid, ts); 186 - 187 - return ret; 188 - } 189 - 190 - static notrace int clock_getres_fallback(clockid_t _clk_id, 191 - struct __kernel_old_timespec *_res) 192 - { 193 - register clockid_t clk_id asm("$r0") = _clk_id; 194 - register struct __kernel_old_timespec *res asm("$r1") = _res; 195 - register int ret asm("$r0"); 196 - 197 - asm volatile ("movi $r15, %3\n" 198 - "syscall 0x0\n" 199 - :"=r" (ret) 200 - :"r"(clk_id), "r"(res), "i"(__NR_clock_getres) 201 - :"$r15", "memory"); 202 - 203 - return ret; 204 - } 205 - 206 - notrace int __vdso_clock_getres(clockid_t clk_id, struct __kernel_old_timespec *res) 207 - { 208 - struct vdso_data *vdata = __get_datapage(); 209 - 210 - if (res == NULL) 211 - return 0; 212 - switch (clk_id) { 213 - case CLOCK_REALTIME: 214 - case CLOCK_MONOTONIC: 215 - case CLOCK_MONOTONIC_RAW: 216 - res->tv_sec = 0; 217 - res->tv_nsec = vdata->hrtimer_res; 218 - break; 219 - case CLOCK_REALTIME_COARSE: 220 - case CLOCK_MONOTONIC_COARSE: 221 - res->tv_sec = 0; 222 - res->tv_nsec = CLOCK_COARSE_RES; 
223 - break; 224 - default: 225 - return clock_getres_fallback(clk_id, res); 226 - } 227 - return 0; 228 - } 229 - 230 - static notrace inline int gettimeofday_fallback(struct __kernel_old_timeval *_tv, 231 - struct timezone *_tz) 232 - { 233 - register struct __kernel_old_timeval *tv asm("$r0") = _tv; 234 - register struct timezone *tz asm("$r1") = _tz; 235 - register int ret asm("$r0"); 236 - 237 - asm volatile ("movi $r15, %3\n" 238 - "syscall 0x0\n" 239 - :"=r" (ret) 240 - :"r"(tv), "r"(tz), "i"(__NR_gettimeofday) 241 - :"$r15", "memory"); 242 - 243 - return ret; 244 - } 245 - 246 - notrace int __vdso_gettimeofday(struct __kernel_old_timeval *tv, struct timezone *tz) 247 - { 248 - struct __kernel_old_timespec ts; 249 - struct vdso_data *vdata; 250 - int ret; 251 - 252 - vdata = __get_datapage(); 253 - 254 - if (vdata->cycle_count_offset == EMPTY_REG_OFFSET) 255 - return gettimeofday_fallback(tv, tz); 256 - 257 - ret = do_realtime(&ts, vdata); 258 - 259 - if (tv) { 260 - tv->tv_sec = ts.tv_sec; 261 - tv->tv_usec = ts.tv_nsec / 1000; 262 - } 263 - if (tz) { 264 - tz->tz_minuteswest = vdata->tz_minuteswest; 265 - tz->tz_dsttime = vdata->tz_dsttime; 266 - } 267 - 268 - return ret; 269 - }
-11
arch/nds32/kernel/vdso/note.S
··· 1 - // SPDX-License-Identifier: GPL-2.0 2 - // Copyright (C) 2012 ARM Limited 3 - // Copyright (C) 2005-2017 Andes Technology Corporation 4 - 5 - #include <linux/uts.h> 6 - #include <linux/version.h> 7 - #include <linux/elfnote.h> 8 - 9 - ELFNOTE_START(Linux, 0, "a") 10 - .long LINUX_VERSION_CODE 11 - ELFNOTE_END
-19
arch/nds32/kernel/vdso/sigreturn.S
··· 1 - // SPDX-License-Identifier: GPL-2.0 2 - // Copyright (C) 2012 ARM Limited 3 - // Copyright (C) 2005-2017 Andes Technology Corporation 4 - 5 - #include <linux/linkage.h> 6 - #include <asm/unistd.h> 7 - 8 - .text 9 - 10 - ENTRY(__kernel_rt_sigreturn) 11 - .cfi_startproc 12 - movi $r15, __NR_rt_sigreturn 13 - /* 14 - * The SWID of syscall should be __NR_rt_sigreturn to synchronize 15 - * the unwinding scheme in gcc 16 - */ 17 - syscall __NR_rt_sigreturn 18 - .cfi_endproc 19 - ENDPROC(__kernel_rt_sigreturn)
-18
arch/nds32/kernel/vdso/vdso.S
··· 1 - // SPDX-License-Identifier: GPL-2.0 2 - // Copyright (C) 2012 ARM Limited 3 - // Copyright (C) 2005-2017 Andes Technology Corporation 4 - 5 - #include <linux/init.h> 6 - #include <linux/linkage.h> 7 - #include <linux/const.h> 8 - #include <asm/page.h> 9 - 10 - .globl vdso_start, vdso_end 11 - .section .rodata 12 - .balign PAGE_SIZE 13 - vdso_start: 14 - .incbin "arch/nds32/kernel/vdso/vdso.so" 15 - .balign PAGE_SIZE 16 - vdso_end: 17 - 18 - .previous
-75
arch/nds32/kernel/vdso/vdso.lds.S
··· 1 - /* 2 - * SPDX-License-Identifier: GPL-2.0 3 - * Copyright (C) 2005-2017 Andes Technology Corporation 4 - */ 5 - 6 - 7 - #include <linux/const.h> 8 - #include <asm/page.h> 9 - #include <asm/vdso.h> 10 - 11 - OUTPUT_ARCH(nds32) 12 - 13 - SECTIONS 14 - { 15 - . = SIZEOF_HEADERS; 16 - 17 - .hash : { *(.hash) } :text 18 - .gnu.hash : { *(.gnu.hash) } 19 - .dynsym : { *(.dynsym) } 20 - .dynstr : { *(.dynstr) } 21 - .gnu.version : { *(.gnu.version) } 22 - .gnu.version_d : { *(.gnu.version_d) } 23 - .gnu.version_r : { *(.gnu.version_r) } 24 - 25 - .note : { *(.note.*) } :text :note 26 - 27 - 28 - .text : { *(.text*) } :text 29 - 30 - .eh_frame_hdr : { *(.eh_frame_hdr) } :text :eh_frame_hdr 31 - .eh_frame : { KEEP (*(.eh_frame)) } :text 32 - 33 - .dynamic : { *(.dynamic) } :text :dynamic 34 - 35 - .rodata : { *(.rodata*) } :text 36 - 37 - 38 - /DISCARD/ : { 39 - *(.note.GNU-stack) 40 - *(.data .data.* .gnu.linkonce.d.* .sdata*) 41 - *(.bss .sbss .dynbss .dynsbss) 42 - } 43 - } 44 - 45 - /* 46 - * We must supply the ELF program headers explicitly to get just one 47 - * PT_LOAD segment, and set the flags explicitly to make segments read-only. 48 - */ 49 - PHDRS 50 - { 51 - text PT_LOAD FLAGS(5) FILEHDR PHDRS; /* PF_R|PF_X */ 52 - dynamic PT_DYNAMIC FLAGS(4); /* PF_R */ 53 - note PT_NOTE FLAGS(4); /* PF_R */ 54 - eh_frame_hdr PT_GNU_EH_FRAME; 55 - } 56 - 57 - /* 58 - * This controls what symbols we export from the DSO. 59 - */ 60 - VERSION 61 - { 62 - LINUX_4 { 63 - global: 64 - __kernel_rt_sigreturn; 65 - __vdso_gettimeofday; 66 - __vdso_clock_getres; 67 - __vdso_clock_gettime; 68 - local: *; 69 - }; 70 - } 71 - 72 - /* 73 - * Make the rt_sigreturn code visible to the kernel. 74 - */ 75 - VDSO_rt_sigtramp = __kernel_rt_sigreturn;
-70
arch/nds32/kernel/vmlinux.lds.S
··· 1 - // SPDX-License-Identifier: GPL-2.0 2 - // Copyright (C) 2005-2017 Andes Technology Corporation 3 - 4 - #include <asm/page.h> 5 - #include <asm/thread_info.h> 6 - #include <asm/cache.h> 7 - #include <asm/memory.h> 8 - 9 - #define LOAD_OFFSET (PAGE_OFFSET - PHYS_OFFSET) 10 - #include <asm-generic/vmlinux.lds.h> 11 - 12 - OUTPUT_ARCH(nds32) 13 - ENTRY(_stext_lma) 14 - jiffies = jiffies_64; 15 - 16 - #if defined(CONFIG_GCOV_KERNEL) 17 - #define NDS32_EXIT_KEEP(x) x 18 - #else 19 - #define NDS32_EXIT_KEEP(x) 20 - #endif 21 - 22 - SECTIONS 23 - { 24 - _stext_lma = TEXTADDR - LOAD_OFFSET; 25 - . = TEXTADDR; 26 - __init_begin = .; 27 - HEAD_TEXT_SECTION 28 - .exit.text : { 29 - NDS32_EXIT_KEEP(EXIT_TEXT) 30 - } 31 - INIT_TEXT_SECTION(PAGE_SIZE) 32 - INIT_DATA_SECTION(16) 33 - .exit.data : { 34 - NDS32_EXIT_KEEP(EXIT_DATA) 35 - } 36 - PERCPU_SECTION(L1_CACHE_BYTES) 37 - __init_end = .; 38 - 39 - . = ALIGN(PAGE_SIZE); 40 - _stext = .; 41 - /* Real text segment */ 42 - .text : AT(ADDR(.text) - LOAD_OFFSET) { 43 - _text = .; /* Text and read-only data */ 44 - TEXT_TEXT 45 - SCHED_TEXT 46 - CPUIDLE_TEXT 47 - LOCK_TEXT 48 - KPROBES_TEXT 49 - IRQENTRY_TEXT 50 - SOFTIRQENTRY_TEXT 51 - *(.fixup) 52 - } 53 - 54 - _etext = .; /* End of text and rodata section */ 55 - 56 - _sdata = .; 57 - RO_DATA(PAGE_SIZE) 58 - RW_DATA(L1_CACHE_BYTES, PAGE_SIZE, THREAD_SIZE) 59 - _edata = .; 60 - 61 - EXCEPTION_TABLE(16) 62 - BSS_SECTION(4, 4, 4) 63 - _end = .; 64 - 65 - STABS_DEBUG 66 - DWARF_DEBUG 67 - ELF_DETAILS 68 - 69 - DISCARDS 70 - }
-4
arch/nds32/lib/Makefile
··· 1 - # SPDX-License-Identifier: GPL-2.0-only 2 - lib-y := copy_page.o memcpy.o memmove.o \ 3 - memset.o memzero.o \ 4 - copy_from_user.o copy_to_user.o clear_user.o
-42
arch/nds32/lib/clear_user.S
··· 1 - // SPDX-License-Identifier: GPL-2.0 2 - // Copyright (C) 2005-2017 Andes Technology Corporation 3 - 4 - #include <linux/linkage.h> 5 - #include <asm/assembler.h> 6 - #include <asm/errno.h> 7 - 8 - /* Prototype: int __arch_clear_user(void *addr, size_t sz) 9 - * Purpose : clear some user memory 10 - * Params : addr - user memory address to clear 11 - * : sz - number of bytes to clear 12 - * Returns : number of bytes NOT cleared 13 - */ 14 - .text 15 - .align 5 16 - ENTRY(__arch_clear_user) 17 - add $r5, $r0, $r1 18 - beqz $r1, clear_exit 19 - xor $p1, $p1, $p1 ! Use $p1=0 to clear mem 20 - srli $p0, $r1, #2 ! $p0 = number of word to clear 21 - andi $r1, $r1, #3 ! Bytes less than a word to copy 22 - beqz $p0, byte_clear ! Only less than a word to clear 23 - word_clear: 24 - USER( smw.bim,$p1, [$r0], $p1) ! Clear the word 25 - addi $p0, $p0, #-1 ! Decrease word count 26 - bnez $p0, word_clear ! Continue looping to clear all words 27 - beqz $r1, clear_exit ! No left bytes to copy 28 - byte_clear: 29 - USER( sbi.bi, $p1, [$r0], #1) ! Clear the byte 30 - addi $r1, $r1, #-1 ! Decrease byte count 31 - bnez $r1, byte_clear ! Continue looping to clear all left bytes 32 - clear_exit: 33 - move $r0, $r1 ! Set return value 34 - ret 35 - 36 - .section .fixup,"ax" 37 - .align 0 38 - 9001: 39 - sub $r0, $r5, $r0 ! Bytes left to copy 40 - ret 41 - .previous 42 - ENDPROC(__arch_clear_user)
-45
arch/nds32/lib/copy_from_user.S
··· 1 - // SPDX-License-Identifier: GPL-2.0 2 - // Copyright (C) 2005-2017 Andes Technology Corporation 3 - 4 - #include <linux/linkage.h> 5 - #include <asm/assembler.h> 6 - #include <asm/errno.h> 7 - 8 - .macro lbi1 dst, addr, adj 9 - USER( lbi.bi, \dst, [\addr], \adj) 10 - .endm 11 - 12 - .macro sbi1 src, addr, adj 13 - sbi.bi \src, [\addr], \adj 14 - .endm 15 - 16 - .macro lmw1 start_reg, addr, end_reg 17 - USER( lmw.bim, \start_reg, [\addr], \end_reg) 18 - .endm 19 - 20 - .macro smw1 start_reg, addr, end_reg 21 - smw.bim \start_reg, [\addr], \end_reg 22 - .endm 23 - 24 - 25 - /* Prototype: int __arch_copy_from_user(void *to, const char *from, size_t n) 26 - * Purpose : copy a block from user memory to kernel memory 27 - * Params : to - kernel memory 28 - * : from - user memory 29 - * : n - number of bytes to copy 30 - * Returns : Number of bytes NOT copied. 31 - */ 32 - 33 - .text 34 - ENTRY(__arch_copy_from_user) 35 - add $r5, $r0, $r2 36 - #include "copy_template.S" 37 - move $r0, $r2 38 - ret 39 - .section .fixup,"ax" 40 - .align 2 41 - 9001: 42 - sub $r0, $r5, $r0 43 - ret 44 - .previous 45 - ENDPROC(__arch_copy_from_user)
-40
arch/nds32/lib/copy_page.S
··· 1 - // SPDX-License-Identifier: GPL-2.0 2 - // Copyright (C) 2005-2017 Andes Technology Corporation 3 - 4 - #include <linux/linkage.h> 5 - #include <asm/export.h> 6 - #include <asm/page.h> 7 - 8 - .text 9 - ENTRY(copy_page) 10 - pushm $r2, $r10 11 - movi $r2, PAGE_SIZE >> 5 12 - .Lcopy_loop: 13 - lmw.bim $r3, [$r1], $r10 14 - smw.bim $r3, [$r0], $r10 15 - subi45 $r2, #1 16 - bnez38 $r2, .Lcopy_loop 17 - popm $r2, $r10 18 - ret 19 - ENDPROC(copy_page) 20 - EXPORT_SYMBOL(copy_page) 21 - 22 - ENTRY(clear_page) 23 - pushm $r1, $r9 24 - movi $r1, PAGE_SIZE >> 5 25 - movi55 $r2, #0 26 - movi55 $r3, #0 27 - movi55 $r4, #0 28 - movi55 $r5, #0 29 - movi55 $r6, #0 30 - movi55 $r7, #0 31 - movi55 $r8, #0 32 - movi55 $r9, #0 33 - .Lclear_loop: 34 - smw.bim $r2, [$r0], $r9 35 - subi45 $r1, #1 36 - bnez38 $r1, .Lclear_loop 37 - popm $r1, $r9 38 - ret 39 - ENDPROC(clear_page) 40 - EXPORT_SYMBOL(clear_page)
-69
arch/nds32/lib/copy_template.S
··· 1 - // SPDX-License-Identifier: GPL-2.0 2 - // Copyright (C) 2005-2017 Andes Technology Corporation 3 - 4 - 5 - beq $r1, $r0, quit_memcpy 6 - beqz $r2, quit_memcpy 7 - srli $r3, $r2, #5 ! check if len < cache-line size 32 8 - beqz $r3, word_copy_entry 9 - andi $r4, $r0, #0x3 ! check byte-align 10 - beqz $r4, unalign_word_copy_entry 11 - 12 - addi $r4, $r4,#-4 13 - abs $r4, $r4 ! check how many un-align byte to copy 14 - sub $r2, $r2, $r4 ! update $R2 15 - 16 - unalign_byte_copy: 17 - lbi1 $r3, $r1, #1 18 - addi $r4, $r4, #-1 19 - sbi1 $r3, $r0, #1 20 - bnez $r4, unalign_byte_copy 21 - beqz $r2, quit_memcpy 22 - 23 - unalign_word_copy_entry: 24 - andi $r3, $r0, 0x1f ! check cache-line unaligncount 25 - beqz $r3, cache_copy 26 - 27 - addi $r3, $r3, #-32 28 - abs $r3, $r3 29 - sub $r2, $r2, $r3 ! update $R2 30 - 31 - unalign_word_copy: 32 - lmw1 $r4, $r1, $r4 33 - addi $r3, $r3, #-4 34 - smw1 $r4, $r0, $r4 35 - bnez $r3, unalign_word_copy 36 - beqz $r2, quit_memcpy 37 - 38 - addi $r3, $r2, #-32 ! to check $r2< cache_line , than go to word_copy 39 - bltz $r3, word_copy_entry 40 - cache_copy: 41 - srli $r3, $r2, #5 42 - beqz $r3, word_copy_entry 43 - 3: 44 - lmw1 $r17, $r1, $r24 45 - addi $r3, $r3, #-1 46 - smw1 $r17, $r0, $r24 47 - bnez $r3, 3b 48 - 49 - word_copy_entry: 50 - andi $r2, $r2, #31 51 - 52 - beqz $r2, quit_memcpy 53 - 5: 54 - srli $r3, $r2, #2 55 - beqz $r3, byte_copy 56 - word_copy: 57 - lmw1 $r4, $r1, $r4 58 - addi $r3, $r3, #-1 59 - smw1 $r4, $r0, $r4 60 - bnez $r3, word_copy 61 - andi $r2, $r2, #3 62 - beqz $r2, quit_memcpy 63 - byte_copy: 64 - lbi1 $r3, $r1, #1 65 - addi $r2, $r2, #-1 66 - 67 - sbi1 $r3, $r0, #1 68 - bnez $r2, byte_copy 69 - quit_memcpy:
-45
arch/nds32/lib/copy_to_user.S
··· 1 - // SPDX-License-Identifier: GPL-2.0 2 - // Copyright (C) 2005-2017 Andes Technology Corporation 3 - 4 - #include <linux/linkage.h> 5 - #include <asm/assembler.h> 6 - #include <asm/errno.h> 7 - 8 - .macro lbi1 dst, addr, adj 9 - lbi.bi \dst, [\addr], \adj 10 - .endm 11 - 12 - .macro sbi1 src, addr, adj 13 - USER( sbi.bi, \src, [\addr], \adj) 14 - .endm 15 - 16 - .macro lmw1 start_reg, addr, end_reg 17 - lmw.bim \start_reg, [\addr], \end_reg 18 - .endm 19 - 20 - .macro smw1 start_reg, addr, end_reg 21 - USER( smw.bim, \start_reg, [\addr], \end_reg) 22 - .endm 23 - 24 - 25 - /* Prototype: int __arch_copy_to_user(void *to, const char *from, size_t n) 26 - * Purpose : copy a block to user memory from kernel memory 27 - * Params : to - user memory 28 - * : from - kernel memory 29 - * : n - number of bytes to copy 30 - * Returns : Number of bytes NOT copied. 31 - */ 32 - 33 - .text 34 - ENTRY(__arch_copy_to_user) 35 - add $r5, $r0, $r2 36 - #include "copy_template.S" 37 - move $r0, $r2 38 - ret 39 - .section .fixup,"ax" 40 - .align 2 41 - 9001: 42 - sub $r0, $r5, $r0 43 - ret 44 - .previous 45 - ENDPROC(__arch_copy_to_user)
-30
arch/nds32/lib/memcpy.S
··· 1 - // SPDX-License-Identifier: GPL-2.0 2 - // Copyright (C) 2005-2017 Andes Technology Corporation 3 - 4 - #include <linux/linkage.h> 5 - 6 - 7 - .macro lbi1 dst, addr, adj 8 - lbi.bi \dst, [\addr], \adj 9 - .endm 10 - 11 - .macro sbi1 src, addr, adj 12 - sbi.bi \src, [\addr], \adj 13 - .endm 14 - 15 - .macro lmw1 start_reg, addr, end_reg 16 - lmw.bim \start_reg, [\addr], \end_reg 17 - .endm 18 - 19 - .macro smw1 start_reg, addr, end_reg 20 - smw.bim \start_reg, [\addr], \end_reg 21 - .endm 22 - 23 - .text 24 - ENTRY(memcpy) 25 - move $r5, $r0 26 - #include "copy_template.S" 27 - move $r0, $r5 28 - ret 29 - 30 - ENDPROC(memcpy)
-70
arch/nds32/lib/memmove.S
- // SPDX-License-Identifier: GPL-2.0
- // Copyright (C) 2005-2017 Andes Technology Corporation
-
- #include <linux/linkage.h>
-
- /*
-   void *memmove(void *dst, const void *src, int n);
-
-   dst: $r0
-   src: $r1
-   n  : $r2
-   ret: $r0 - pointer to the memory area dst.
- */
- 	.text
-
- ENTRY(memmove)
- 	move	$r5, $r0		! Set return value = dst
- 	beq	$r0, $r1, exit_memcpy	! Exit when dst = src
- 	beqz	$r2, exit_memcpy	! Exit when n = 0
- 	pushm	$t0, $t1		! Save regs
- 	srli	$p1, $r2, #2		! $p1 is how many words to copy
-
- 	! Avoid data loss when the memory areas overlap:
- 	! copy backwards when src < dst
- 	slt	$p0, $r0, $r1		! Check if $r0 < $r1
- 	beqz	$p0, do_reverse		! Branch if dst >= src
-
- 	! No reverse, dst < src
- 	andi	$r2, $r2, #3		! How many bytes are less than a word
- 	li	$t0, #1			! Copy direction for byte_cpy
- 	beqz	$p1, byte_cpy		! When n is less than a word
-
- word_cpy:
- 	lmw.bim	$p0, [$r1], $p0		! Read a word from src
- 	addi	$p1, $p1, #-1		! How many words left to copy
- 	smw.bim	$p0, [$r0], $p0		! Copy the word to dst
- 	bnez	$p1, word_cpy		! If remaining words > 0
- 	beqz	$r2, end_memcpy		! No bytes left to copy
- 	b	byte_cpy
-
- do_reverse:
- 	add	$r0, $r0, $r2		! Start from the end of $r0
- 	add	$r1, $r1, $r2		! Start from the end of $r1
- 	andi	$r2, $r2, #3		! How many bytes are less than a word
- 	li	$t0, #-1		! Copy direction for byte_cpy
- 	beqz	$p1, reverse_byte_cpy	! When n is less than a word
-
- reverse_word_cpy:
- 	lmw.adm	$p0, [$r1], $p0		! Read a word from src
- 	addi	$p1, $p1, #-1		! How many words left to copy
- 	smw.adm	$p0, [$r0], $p0		! Copy the word to dst
- 	bnez	$p1, reverse_word_cpy	! If remaining words > 0
- 	beqz	$r2, end_memcpy		! No bytes left to copy
-
- reverse_byte_cpy:
- 	addi	$r0, $r0, #-1
- 	addi	$r1, $r1, #-1
- byte_cpy:				! Less than 4 bytes to copy now
- 	lb.bi	$p0, [$r1], $t0		! Read a byte from src
- 	addi	$r2, $r2, #-1		! How many bytes left to copy
- 	sb.bi	$p0, [$r0], $t0		! Copy the byte to dst
- 	bnez	$r2, byte_cpy		! If remaining bytes > 0
-
- end_memcpy:
- 	popm	$t0, $t1
- exit_memcpy:
- 	move	$r0, $r5
- 	ret
-
- ENDPROC(memmove)
-33
arch/nds32/lib/memset.S
- // SPDX-License-Identifier: GPL-2.0
- // Copyright (C) 2005-2017 Andes Technology Corporation
-
- #include <linux/linkage.h>
-
- 	.text
- ENTRY(memset)
- 	move	$r5, $r0		! Return value
- 	beqz	$r2, end_memset		! Exit when len = 0
- 	srli	$p1, $r2, 2		! $p1 is how many words to set
- 	andi	$r2, $r2, 3		! How many bytes are less than a word
- 	beqz	$p1, byte_set		! When n is less than a word
-
- 	! set $r1 from ??????ab to abababab
- 	andi	$r1, $r1, #0x00ff	! $r1 = 000000ab
- 	slli	$p0, $r1, #8		! $p0 = 0000ab00
- 	or	$r1, $r1, $p0		! $r1 = 0000abab
- 	slli	$p0, $r1, #16		! $p0 = abab0000
- 	or	$r1, $r1, $p0		! $r1 = abababab
- word_set:
- 	addi	$p1, $p1, #-1		! How many words left to set
- 	smw.bim	$r1, [$r0], $r1		! Write the word to dst
- 	bnez	$p1, word_set		! Still words to set, continue looping
- 	beqz	$r2, end_memset		! No bytes left to set
- byte_set:				! Less than 4 bytes left to set
- 	addi	$r2, $r2, #-1		! Decrease len by 1
- 	sbi.bi	$r1, [$r0], #1		! Set the next byte to $r1
- 	bnez	$r2, byte_set		! Still bytes left to set
- end_memset:
- 	move	$r0, $r5
- 	ret
-
- ENDPROC(memset)
-18
arch/nds32/lib/memzero.S
- // SPDX-License-Identifier: GPL-2.0
- // Copyright (C) 2005-2017 Andes Technology Corporation
-
- #include <linux/linkage.h>
-
- 	.text
- ENTRY(memzero)
- 	beqz	$r1, 1f
- 	push	$lp
- 	move	$r2, $r1
- 	move	$r1, #0
- 	push	$r0
- 	bal	memset
- 	pop	$r0
- 	pop	$lp
- 1:
- 	ret
- ENDPROC(memzero)
-10
arch/nds32/math-emu/Makefile
- # SPDX-License-Identifier: GPL-2.0-only
- #
- # Makefile for the Linux/nds32 kernel FPU emulation.
- #
-
- obj-y	:= fpuemu.o \
- 	fdivd.o fmuld.o fsubd.o faddd.o fs2d.o fsqrtd.o fcmpd.o fnegs.o \
- 	fd2si.o fd2ui.o fd2siz.o fd2uiz.o fsi2d.o fui2d.o \
- 	fdivs.o fmuls.o fsubs.o fadds.o fd2s.o fsqrts.o fcmps.o fnegd.o \
- 	fs2si.o fs2ui.o fs2siz.o fs2uiz.o fsi2s.o fui2s.o
-24
arch/nds32/math-emu/faddd.c
- // SPDX-License-Identifier: GPL-2.0
- // Copyright (C) 2005-2018 Andes Technology Corporation
- #include <linux/uaccess.h>
-
- #include <asm/sfp-machine.h>
- #include <math-emu/soft-fp.h>
- #include <math-emu/double.h>
- void faddd(void *ft, void *fa, void *fb)
- {
- 	FP_DECL_D(A);
- 	FP_DECL_D(B);
- 	FP_DECL_D(R);
- 	FP_DECL_EX;
-
- 	FP_UNPACK_DP(A, fa);
- 	FP_UNPACK_DP(B, fb);
-
- 	FP_ADD_D(R, A, B);
-
- 	FP_PACK_DP(ft, R);
-
- 	__FPU_FPCSR |= FP_CUR_EXCEPTIONS;
-
- }
-24
arch/nds32/math-emu/fadds.c
- // SPDX-License-Identifier: GPL-2.0
- // Copyright (C) 2005-2018 Andes Technology Corporation
- #include <linux/uaccess.h>
-
- #include <asm/sfp-machine.h>
- #include <math-emu/soft-fp.h>
- #include <math-emu/single.h>
- void fadds(void *ft, void *fa, void *fb)
- {
- 	FP_DECL_S(A);
- 	FP_DECL_S(B);
- 	FP_DECL_S(R);
- 	FP_DECL_EX;
-
- 	FP_UNPACK_SP(A, fa);
- 	FP_UNPACK_SP(B, fb);
-
- 	FP_ADD_S(R, A, B);
-
- 	FP_PACK_SP(ft, R);
-
- 	__FPU_FPCSR |= FP_CUR_EXCEPTIONS;
-
- }
-24
arch/nds32/math-emu/fcmpd.c
- // SPDX-License-Identifier: GPL-2.0
- // Copyright (C) 2005-2018 Andes Technology Corporation
- #include <asm/sfp-machine.h>
- #include <math-emu/soft-fp.h>
- #include <math-emu/double.h>
- int fcmpd(void *ft, void *fa, void *fb, int cmpop)
- {
- 	FP_DECL_D(A);
- 	FP_DECL_D(B);
- 	FP_DECL_EX;
- 	long cmp;
-
- 	FP_UNPACK_DP(A, fa);
- 	FP_UNPACK_DP(B, fb);
-
- 	FP_CMP_D(cmp, A, B, SF_CUN);
- 	cmp += 2;
- 	if (cmp == SF_CGT)
- 		*(long *)ft = 0;
- 	else
- 		*(long *)ft = (cmp & cmpop) ? 1 : 0;
-
- 	return 0;
- }
-24
arch/nds32/math-emu/fcmps.c
- // SPDX-License-Identifier: GPL-2.0
- // Copyright (C) 2005-2018 Andes Technology Corporation
- #include <asm/sfp-machine.h>
- #include <math-emu/soft-fp.h>
- #include <math-emu/single.h>
- int fcmps(void *ft, void *fa, void *fb, int cmpop)
- {
- 	FP_DECL_S(A);
- 	FP_DECL_S(B);
- 	FP_DECL_EX;
- 	long cmp;
-
- 	FP_UNPACK_SP(A, fa);
- 	FP_UNPACK_SP(B, fb);
-
- 	FP_CMP_S(cmp, A, B, SF_CUN);
- 	cmp += 2;
- 	if (cmp == SF_CGT)
- 		*(int *)ft = 0x0;
- 	else
- 		*(int *)ft = (cmp & cmpop) ? 0x1 : 0x0;
-
- 	return 0;
- }
-22
arch/nds32/math-emu/fd2s.c
- // SPDX-License-Identifier: GPL-2.0
- // Copyright (C) 2005-2018 Andes Technology Corporation
- #include <linux/uaccess.h>
-
- #include <asm/sfp-machine.h>
- #include <math-emu/double.h>
- #include <math-emu/single.h>
- #include <math-emu/soft-fp.h>
- void fd2s(void *ft, void *fa)
- {
- 	FP_DECL_D(A);
- 	FP_DECL_S(R);
- 	FP_DECL_EX;
-
- 	FP_UNPACK_DP(A, fa);
-
- 	FP_CONV(S, D, 1, 2, R, A);
-
- 	FP_PACK_SP(ft, R);
-
- 	__FPU_FPCSR |= FP_CUR_EXCEPTIONS;
- }
-30
arch/nds32/math-emu/fd2si.c
- // SPDX-License-Identifier: GPL-2.0
- // Copyright (C) 2005-2019 Andes Technology Corporation
- #include <linux/uaccess.h>
-
- #include <asm/sfp-machine.h>
- #include <math-emu/soft-fp.h>
- #include <math-emu/double.h>
-
- void fd2si(void *ft, void *fa)
- {
- 	int r;
-
- 	FP_DECL_D(A);
- 	FP_DECL_EX;
-
- 	FP_UNPACK_DP(A, fa);
-
- 	if (A_c == FP_CLS_INF) {
- 		*(int *)ft = (A_s == 0) ? 0x7fffffff : 0x80000000;
- 		__FPU_FPCSR |= FP_EX_INVALID;
- 	} else if (A_c == FP_CLS_NAN) {
- 		*(int *)ft = 0xffffffff;
- 		__FPU_FPCSR |= FP_EX_INVALID;
- 	} else {
- 		FP_TO_INT_ROUND_D(r, A, 32, 1);
- 		__FPU_FPCSR |= FP_CUR_EXCEPTIONS;
- 		*(int *)ft = r;
- 	}
-
- }
-30
arch/nds32/math-emu/fd2siz.c
- // SPDX-License-Identifier: GPL-2.0
- // Copyright (C) 2005-2019 Andes Technology Corporation
- #include <linux/uaccess.h>
-
- #include <asm/sfp-machine.h>
- #include <math-emu/soft-fp.h>
- #include <math-emu/double.h>
-
- void fd2si_z(void *ft, void *fa)
- {
- 	int r;
-
- 	FP_DECL_D(A);
- 	FP_DECL_EX;
-
- 	FP_UNPACK_DP(A, fa);
-
- 	if (A_c == FP_CLS_INF) {
- 		*(int *)ft = (A_s == 0) ? 0x7fffffff : 0x80000000;
- 		__FPU_FPCSR |= FP_EX_INVALID;
- 	} else if (A_c == FP_CLS_NAN) {
- 		*(int *)ft = 0xffffffff;
- 		__FPU_FPCSR |= FP_EX_INVALID;
- 	} else {
- 		FP_TO_INT_D(r, A, 32, 1);
- 		__FPU_FPCSR |= FP_CUR_EXCEPTIONS;
- 		*(int *)ft = r;
- 	}
-
- }
-30
arch/nds32/math-emu/fd2ui.c
- // SPDX-License-Identifier: GPL-2.0
- // Copyright (C) 2005-2019 Andes Technology Corporation
- #include <linux/uaccess.h>
-
- #include <asm/sfp-machine.h>
- #include <math-emu/soft-fp.h>
- #include <math-emu/double.h>
-
- void fd2ui(void *ft, void *fa)
- {
- 	unsigned int r;
-
- 	FP_DECL_D(A);
- 	FP_DECL_EX;
-
- 	FP_UNPACK_DP(A, fa);
-
- 	if (A_c == FP_CLS_INF) {
- 		*(unsigned int *)ft = (A_s == 0) ? 0xffffffff : 0x00000000;
- 		__FPU_FPCSR |= FP_EX_INVALID;
- 	} else if (A_c == FP_CLS_NAN) {
- 		*(unsigned int *)ft = 0xffffffff;
- 		__FPU_FPCSR |= FP_EX_INVALID;
- 	} else {
- 		FP_TO_INT_ROUND_D(r, A, 32, 0);
- 		__FPU_FPCSR |= FP_CUR_EXCEPTIONS;
- 		*(unsigned int *)ft = r;
- 	}
-
- }
-30
arch/nds32/math-emu/fd2uiz.c
- // SPDX-License-Identifier: GPL-2.0
- // Copyright (C) 2005-2019 Andes Technology Corporation
- #include <linux/uaccess.h>
-
- #include <asm/sfp-machine.h>
- #include <math-emu/soft-fp.h>
- #include <math-emu/double.h>
-
- void fd2ui_z(void *ft, void *fa)
- {
- 	unsigned int r;
-
- 	FP_DECL_D(A);
- 	FP_DECL_EX;
-
- 	FP_UNPACK_DP(A, fa);
-
- 	if (A_c == FP_CLS_INF) {
- 		*(unsigned int *)ft = (A_s == 0) ? 0xffffffff : 0x00000000;
- 		__FPU_FPCSR |= FP_EX_INVALID;
- 	} else if (A_c == FP_CLS_NAN) {
- 		*(unsigned int *)ft = 0xffffffff;
- 		__FPU_FPCSR |= FP_EX_INVALID;
- 	} else {
- 		FP_TO_INT_D(r, A, 32, 0);
- 		__FPU_FPCSR |= FP_CUR_EXCEPTIONS;
- 		*(unsigned int *)ft = r;
- 	}
-
- }
-27
arch/nds32/math-emu/fdivd.c
- // SPDX-License-Identifier: GPL-2.0
- // Copyright (C) 2005-2018 Andes Technology Corporation
-
- #include <linux/uaccess.h>
- #include <asm/sfp-machine.h>
- #include <math-emu/soft-fp.h>
- #include <math-emu/double.h>
-
- void fdivd(void *ft, void *fa, void *fb)
- {
- 	FP_DECL_D(A);
- 	FP_DECL_D(B);
- 	FP_DECL_D(R);
- 	FP_DECL_EX;
-
- 	FP_UNPACK_DP(A, fa);
- 	FP_UNPACK_DP(B, fb);
-
- 	if (B_c == FP_CLS_ZERO && A_c != FP_CLS_ZERO)
- 		FP_SET_EXCEPTION(FP_EX_DIVZERO);
-
- 	FP_DIV_D(R, A, B);
-
- 	FP_PACK_DP(ft, R);
-
- 	__FPU_FPCSR |= FP_CUR_EXCEPTIONS;
- }
-26
arch/nds32/math-emu/fdivs.c
- // SPDX-License-Identifier: GPL-2.0
- // Copyright (C) 2005-2018 Andes Technology Corporation
- #include <linux/uaccess.h>
-
- #include <asm/sfp-machine.h>
- #include <math-emu/soft-fp.h>
- #include <math-emu/single.h>
- void fdivs(void *ft, void *fa, void *fb)
- {
- 	FP_DECL_S(A);
- 	FP_DECL_S(B);
- 	FP_DECL_S(R);
- 	FP_DECL_EX;
-
- 	FP_UNPACK_SP(A, fa);
- 	FP_UNPACK_SP(B, fb);
-
- 	if (B_c == FP_CLS_ZERO && A_c != FP_CLS_ZERO)
- 		FP_SET_EXCEPTION(FP_EX_DIVZERO);
-
- 	FP_DIV_S(R, A, B);
-
- 	FP_PACK_SP(ft, R);
-
- 	__FPU_FPCSR |= FP_CUR_EXCEPTIONS;
- }
-23
arch/nds32/math-emu/fmuld.c
- // SPDX-License-Identifier: GPL-2.0
- // Copyright (C) 2005-2018 Andes Technology Corporation
- #include <linux/uaccess.h>
-
- #include <asm/sfp-machine.h>
- #include <math-emu/soft-fp.h>
- #include <math-emu/double.h>
- void fmuld(void *ft, void *fa, void *fb)
- {
- 	FP_DECL_D(A);
- 	FP_DECL_D(B);
- 	FP_DECL_D(R);
- 	FP_DECL_EX;
-
- 	FP_UNPACK_DP(A, fa);
- 	FP_UNPACK_DP(B, fb);
-
- 	FP_MUL_D(R, A, B);
-
- 	FP_PACK_DP(ft, R);
-
- 	__FPU_FPCSR |= FP_CUR_EXCEPTIONS;
- }
-23
arch/nds32/math-emu/fmuls.c
- // SPDX-License-Identifier: GPL-2.0
- // Copyright (C) 2005-2018 Andes Technology Corporation
- #include <linux/uaccess.h>
-
- #include <asm/sfp-machine.h>
- #include <math-emu/soft-fp.h>
- #include <math-emu/single.h>
- void fmuls(void *ft, void *fa, void *fb)
- {
- 	FP_DECL_S(A);
- 	FP_DECL_S(B);
- 	FP_DECL_S(R);
- 	FP_DECL_EX;
-
- 	FP_UNPACK_SP(A, fa);
- 	FP_UNPACK_SP(B, fb);
-
- 	FP_MUL_S(R, A, B);
-
- 	FP_PACK_SP(ft, R);
-
- 	__FPU_FPCSR |= FP_CUR_EXCEPTIONS;
- }
-21
arch/nds32/math-emu/fnegd.c
- // SPDX-License-Identifier: GPL-2.0
- // Copyright (C) 2005-2018 Andes Technology Corporation
- #include <linux/uaccess.h>
-
- #include <asm/sfp-machine.h>
- #include <math-emu/soft-fp.h>
- #include <math-emu/double.h>
- void fnegd(void *ft, void *fa)
- {
- 	FP_DECL_D(A);
- 	FP_DECL_D(R);
- 	FP_DECL_EX;
-
- 	FP_UNPACK_DP(A, fa);
-
- 	FP_NEG_D(R, A);
-
- 	FP_PACK_DP(ft, R);
-
- 	__FPU_FPCSR |= FP_CUR_EXCEPTIONS;
- }
-21
arch/nds32/math-emu/fnegs.c
- // SPDX-License-Identifier: GPL-2.0
- // Copyright (C) 2005-2018 Andes Technology Corporation
- #include <linux/uaccess.h>
-
- #include <asm/sfp-machine.h>
- #include <math-emu/soft-fp.h>
- #include <math-emu/single.h>
- void fnegs(void *ft, void *fa)
- {
- 	FP_DECL_S(A);
- 	FP_DECL_S(R);
- 	FP_DECL_EX;
-
- 	FP_UNPACK_SP(A, fa);
-
- 	FP_NEG_S(R, A);
-
- 	FP_PACK_SP(ft, R);
-
- 	__FPU_FPCSR |= FP_CUR_EXCEPTIONS;
- }
-406
arch/nds32/math-emu/fpuemu.c
- // SPDX-License-Identifier: GPL-2.0
- // Copyright (C) 2005-2018 Andes Technology Corporation
-
- #include <asm/bitfield.h>
- #include <asm/uaccess.h>
- #include <asm/sfp-machine.h>
- #include <asm/fpuemu.h>
- #include <asm/nds32_fpu_inst.h>
-
- #define DPFROMREG(dp, x)	(dp = (void *)((unsigned long *)fpu_reg + 2*x))
- #ifdef __NDS32_EL__
- #define SPFROMREG(sp, x) \
- 	((sp) = (void *)((unsigned long *)fpu_reg + (x^1)))
- #else
- #define SPFROMREG(sp, x) ((sp) = (void *)((unsigned long *)fpu_reg + x))
- #endif
-
- #define DEF3OP(name, p, f1, f2) \
- void fpemu_##name##p(void *ft, void *fa, void *fb) \
- { \
- 	f1(fa, fa, fb); \
- 	f2(ft, ft, fa); \
- }
-
- #define DEF3OPNEG(name, p, f1, f2, f3) \
- void fpemu_##name##p(void *ft, void *fa, void *fb) \
- { \
- 	f1(fa, fa, fb); \
- 	f2(ft, ft, fa); \
- 	f3(ft, ft); \
- }
- DEF3OP(fmadd, s, fmuls, fadds);
- DEF3OP(fmsub, s, fmuls, fsubs);
- DEF3OP(fmadd, d, fmuld, faddd);
- DEF3OP(fmsub, d, fmuld, fsubd);
- DEF3OPNEG(fnmadd, s, fmuls, fadds, fnegs);
- DEF3OPNEG(fnmsub, s, fmuls, fsubs, fnegs);
- DEF3OPNEG(fnmadd, d, fmuld, faddd, fnegd);
- DEF3OPNEG(fnmsub, d, fmuld, fsubd, fnegd);
-
- static const unsigned char cmptab[8] = {
- 	SF_CEQ,
- 	SF_CEQ,
- 	SF_CLT,
- 	SF_CLT,
- 	SF_CLT | SF_CEQ,
- 	SF_CLT | SF_CEQ,
- 	SF_CUN,
- 	SF_CUN
- };
-
- enum ARGTYPE {
- 	S1S = 1,
- 	S2S,
- 	S1D,
- 	CS,
- 	D1D,
- 	D2D,
- 	D1S,
- 	CD
- };
- union func_t {
- 	void (*t)(void *ft, void *fa, void *fb);
- 	void (*b)(void *ft, void *fa);
- };
- /*
-  * Emulate a single FPU arithmetic instruction.
-  */
- static int fpu_emu(struct fpu_struct *fpu_reg, unsigned long insn)
- {
- 	int rfmt;		/* resulting format */
- 	union func_t func;
- 	int ftype = 0;
-
- 	switch (rfmt = NDS32Insn_OPCODE_COP0(insn)) {
- 	case fs1_op:{
- 			switch (NDS32Insn_OPCODE_BIT69(insn)) {
- 			case fadds_op:
- 				func.t = fadds;
- 				ftype = S2S;
- 				break;
- 			case fsubs_op:
- 				func.t = fsubs;
- 				ftype = S2S;
- 				break;
- 			case fmadds_op:
- 				func.t = fpemu_fmadds;
- 				ftype = S2S;
- 				break;
- 			case fmsubs_op:
- 				func.t = fpemu_fmsubs;
- 				ftype = S2S;
- 				break;
- 			case fnmadds_op:
- 				func.t = fpemu_fnmadds;
- 				ftype = S2S;
- 				break;
- 			case fnmsubs_op:
- 				func.t = fpemu_fnmsubs;
- 				ftype = S2S;
- 				break;
- 			case fmuls_op:
- 				func.t = fmuls;
- 				ftype = S2S;
- 				break;
- 			case fdivs_op:
- 				func.t = fdivs;
- 				ftype = S2S;
- 				break;
- 			case fs1_f2op_op:
- 				switch (NDS32Insn_OPCODE_BIT1014(insn)) {
- 				case fs2d_op:
- 					func.b = fs2d;
- 					ftype = S1D;
- 					break;
- 				case fs2si_op:
- 					func.b = fs2si;
- 					ftype = S1S;
- 					break;
- 				case fs2si_z_op:
- 					func.b = fs2si_z;
- 					ftype = S1S;
- 					break;
- 				case fs2ui_op:
- 					func.b = fs2ui;
- 					ftype = S1S;
- 					break;
- 				case fs2ui_z_op:
- 					func.b = fs2ui_z;
- 					ftype = S1S;
- 					break;
- 				case fsi2s_op:
- 					func.b = fsi2s;
- 					ftype = S1S;
- 					break;
- 				case fui2s_op:
- 					func.b = fui2s;
- 					ftype = S1S;
- 					break;
- 				case fsqrts_op:
- 					func.b = fsqrts;
- 					ftype = S1S;
- 					break;
- 				default:
- 					return SIGILL;
- 				}
- 				break;
- 			default:
- 				return SIGILL;
- 			}
- 			break;
- 		}
- 	case fs2_op:
- 		switch (NDS32Insn_OPCODE_BIT69(insn)) {
- 		case fcmpeqs_op:
- 		case fcmpeqs_e_op:
- 		case fcmplts_op:
- 		case fcmplts_e_op:
- 		case fcmples_op:
- 		case fcmples_e_op:
- 		case fcmpuns_op:
- 		case fcmpuns_e_op:
- 			ftype = CS;
- 			break;
-
- 		default:
- 			return SIGILL;
- 		}
- 		break;
- 	case fd1_op:{
- 			switch (NDS32Insn_OPCODE_BIT69(insn)) {
- 			case faddd_op:
- 				func.t = faddd;
- 				ftype = D2D;
- 				break;
- 			case fsubd_op:
- 				func.t = fsubd;
- 				ftype = D2D;
- 				break;
- 			case fmaddd_op:
- 				func.t = fpemu_fmaddd;
- 				ftype = D2D;
- 				break;
- 			case fmsubd_op:
- 				func.t = fpemu_fmsubd;
- 				ftype = D2D;
- 				break;
- 			case fnmaddd_op:
- 				func.t = fpemu_fnmaddd;
- 				ftype = D2D;
- 				break;
- 			case fnmsubd_op:
- 				func.t = fpemu_fnmsubd;
- 				ftype = D2D;
- 				break;
- 			case fmuld_op:
- 				func.t = fmuld;
- 				ftype = D2D;
- 				break;
- 			case fdivd_op:
- 				func.t = fdivd;
- 				ftype = D2D;
- 				break;
- 			case fd1_f2op_op:
- 				switch (NDS32Insn_OPCODE_BIT1014(insn)) {
- 				case fd2s_op:
- 					func.b = fd2s;
- 					ftype = D1S;
- 					break;
- 				case fd2si_op:
- 					func.b = fd2si;
- 					ftype = D1S;
- 					break;
- 				case fd2si_z_op:
- 					func.b = fd2si_z;
- 					ftype = D1S;
- 					break;
- 				case fd2ui_op:
- 					func.b = fd2ui;
- 					ftype = D1S;
- 					break;
- 				case fd2ui_z_op:
- 					func.b = fd2ui_z;
- 					ftype = D1S;
- 					break;
- 				case fsi2d_op:
- 					func.b = fsi2d;
- 					ftype = D1S;
- 					break;
- 				case fui2d_op:
- 					func.b = fui2d;
- 					ftype = D1S;
- 					break;
- 				case fsqrtd_op:
- 					func.b = fsqrtd;
- 					ftype = D1D;
- 					break;
- 				default:
- 					return SIGILL;
- 				}
- 				break;
- 			default:
- 				return SIGILL;
-
- 			}
- 			break;
- 		}
-
- 	case fd2_op:
- 		switch (NDS32Insn_OPCODE_BIT69(insn)) {
- 		case fcmpeqd_op:
- 		case fcmpeqd_e_op:
- 		case fcmpltd_op:
- 		case fcmpltd_e_op:
- 		case fcmpled_op:
- 		case fcmpled_e_op:
- 		case fcmpund_op:
- 		case fcmpund_e_op:
- 			ftype = CD;
- 			break;
- 		default:
- 			return SIGILL;
- 		}
- 		break;
-
- 	default:
- 		return SIGILL;
- 	}
-
- 	switch (ftype) {
-
- 	case S1S:{
- 			void *ft, *fa;
-
- 			SPFROMREG(ft, NDS32Insn_OPCODE_Rt(insn));
- 			SPFROMREG(fa, NDS32Insn_OPCODE_Ra(insn));
- 			func.b(ft, fa);
- 			break;
- 		}
- 	case S2S:{
- 			void *ft, *fa, *fb;
-
- 			SPFROMREG(ft, NDS32Insn_OPCODE_Rt(insn));
- 			SPFROMREG(fa, NDS32Insn_OPCODE_Ra(insn));
- 			SPFROMREG(fb, NDS32Insn_OPCODE_Rb(insn));
- 			func.t(ft, fa, fb);
- 			break;
- 		}
- 	case S1D:{
- 			void *ft, *fa;
-
- 			DPFROMREG(ft, NDS32Insn_OPCODE_Rt(insn));
- 			SPFROMREG(fa, NDS32Insn_OPCODE_Ra(insn));
- 			func.b(ft, fa);
- 			break;
- 		}
- 	case CS:{
- 			unsigned int cmpop = NDS32Insn_OPCODE_BIT69(insn);
- 			void *ft, *fa, *fb;
-
- 			SPFROMREG(ft, NDS32Insn_OPCODE_Rt(insn));
- 			SPFROMREG(fa, NDS32Insn_OPCODE_Ra(insn));
- 			SPFROMREG(fb, NDS32Insn_OPCODE_Rb(insn));
- 			if (cmpop < 0x8) {
- 				cmpop = cmptab[cmpop];
- 				fcmps(ft, fa, fb, cmpop);
- 			} else
- 				return SIGILL;
- 			break;
- 		}
- 	case D1D:{
- 			void *ft, *fa;
-
- 			DPFROMREG(ft, NDS32Insn_OPCODE_Rt(insn));
- 			DPFROMREG(fa, NDS32Insn_OPCODE_Ra(insn));
- 			func.b(ft, fa);
- 			break;
- 		}
- 	case D2D:{
- 			void *ft, *fa, *fb;
-
- 			DPFROMREG(ft, NDS32Insn_OPCODE_Rt(insn));
- 			DPFROMREG(fa, NDS32Insn_OPCODE_Ra(insn));
- 			DPFROMREG(fb, NDS32Insn_OPCODE_Rb(insn));
- 			func.t(ft, fa, fb);
- 			break;
- 		}
- 	case D1S:{
- 			void *ft, *fa;
-
- 			SPFROMREG(ft, NDS32Insn_OPCODE_Rt(insn));
- 			DPFROMREG(fa, NDS32Insn_OPCODE_Ra(insn));
- 			func.b(ft, fa);
- 			break;
- 		}
- 	case CD:{
- 			unsigned int cmpop = NDS32Insn_OPCODE_BIT69(insn);
- 			void *ft, *fa, *fb;
-
- 			SPFROMREG(ft, NDS32Insn_OPCODE_Rt(insn));
- 			DPFROMREG(fa, NDS32Insn_OPCODE_Ra(insn));
- 			DPFROMREG(fb, NDS32Insn_OPCODE_Rb(insn));
- 			if (cmpop < 0x8) {
- 				cmpop = cmptab[cmpop];
- 				fcmpd(ft, fa, fb, cmpop);
- 			} else
- 				return SIGILL;
- 			break;
- 		}
-
- 	default:
- 		return SIGILL;
- 	}
-
- 	/*
- 	 * If an exception is required, generate a tidy SIGFPE exception.
- 	 */
- #if IS_ENABLED(CONFIG_SUPPORT_DENORMAL_ARITHMETIC)
- 	if (((fpu_reg->fpcsr << 5) & fpu_reg->fpcsr & FPCSR_mskALLE_NO_UDF_IEXE)
- 	    || ((fpu_reg->fpcsr << 5) & (fpu_reg->UDF_IEX_trap))) {
- #else
- 	if ((fpu_reg->fpcsr << 5) & fpu_reg->fpcsr & FPCSR_mskALLE) {
- #endif
- 		return SIGFPE;
- 	}
- 	return 0;
- }
-
- int do_fpuemu(struct pt_regs *regs, struct fpu_struct *fpu)
- {
- 	unsigned long insn = 0, addr = regs->ipc;
- 	unsigned long emulpc, contpc;
- 	unsigned char *pc = (void *)&insn;
- 	char c;
- 	int i = 0, ret;
-
- 	for (i = 0; i < 4; i++) {
- 		if (__get_user(c, (unsigned char *)addr++))
- 			return SIGBUS;
- 		*pc++ = c;
- 	}
-
- 	insn = be32_to_cpu(insn);
-
- 	emulpc = regs->ipc;
- 	contpc = regs->ipc + 4;
-
- 	if (NDS32Insn_OPCODE(insn) != cop0_op)
- 		return SIGILL;
-
- 	switch (NDS32Insn_OPCODE_COP0(insn)) {
- 	case fs1_op:
- 	case fs2_op:
- 	case fd1_op:
- 	case fd2_op:
- 		{
- 			/* a real fpu computation instruction */
- 			ret = fpu_emu(fpu, insn);
- 			if (!ret)
- 				regs->ipc = contpc;
- 		}
- 		break;
-
- 	default:
- 		return SIGILL;
- 	}
-
- 	return ret;
- }
-23
arch/nds32/math-emu/fs2d.c
- // SPDX-License-Identifier: GPL-2.0
- // Copyright (C) 2005-2018 Andes Technology Corporation
-
- #include <linux/uaccess.h>
- #include <asm/sfp-machine.h>
- #include <math-emu/double.h>
- #include <math-emu/single.h>
- #include <math-emu/soft-fp.h>
-
- void fs2d(void *ft, void *fa)
- {
- 	FP_DECL_S(A);
- 	FP_DECL_D(R);
- 	FP_DECL_EX;
-
- 	FP_UNPACK_SP(A, fa);
-
- 	FP_CONV(D, S, 2, 1, R, A);
-
- 	FP_PACK_DP(ft, R);
-
- 	__FPU_FPCSR |= FP_CUR_EXCEPTIONS;
- }
-29
arch/nds32/math-emu/fs2si.c
- // SPDX-License-Identifier: GPL-2.0
- // Copyright (C) 2005-2019 Andes Technology Corporation
- #include <linux/uaccess.h>
-
- #include <asm/sfp-machine.h>
- #include <math-emu/soft-fp.h>
- #include <math-emu/single.h>
-
- void fs2si(void *ft, void *fa)
- {
- 	int r;
-
- 	FP_DECL_S(A);
- 	FP_DECL_EX;
-
- 	FP_UNPACK_SP(A, fa);
-
- 	if (A_c == FP_CLS_INF) {
- 		*(int *)ft = (A_s == 0) ? 0x7fffffff : 0x80000000;
- 		__FPU_FPCSR |= FP_EX_INVALID;
- 	} else if (A_c == FP_CLS_NAN) {
- 		*(int *)ft = 0xffffffff;
- 		__FPU_FPCSR |= FP_EX_INVALID;
- 	} else {
- 		FP_TO_INT_ROUND_S(r, A, 32, 1);
- 		__FPU_FPCSR |= FP_CUR_EXCEPTIONS;
- 		*(int *)ft = r;
- 	}
- }
-29
arch/nds32/math-emu/fs2siz.c
- // SPDX-License-Identifier: GPL-2.0
- // Copyright (C) 2005-2019 Andes Technology Corporation
- #include <linux/uaccess.h>
-
- #include <asm/sfp-machine.h>
- #include <math-emu/soft-fp.h>
- #include <math-emu/single.h>
-
- void fs2si_z(void *ft, void *fa)
- {
- 	int r;
-
- 	FP_DECL_S(A);
- 	FP_DECL_EX;
-
- 	FP_UNPACK_SP(A, fa);
-
- 	if (A_c == FP_CLS_INF) {
- 		*(int *)ft = (A_s == 0) ? 0x7fffffff : 0x80000000;
- 		__FPU_FPCSR |= FP_EX_INVALID;
- 	} else if (A_c == FP_CLS_NAN) {
- 		*(int *)ft = 0xffffffff;
- 		__FPU_FPCSR |= FP_EX_INVALID;
- 	} else {
- 		FP_TO_INT_S(r, A, 32, 1);
- 		__FPU_FPCSR |= FP_CUR_EXCEPTIONS;
- 		*(int *)ft = r;
- 	}
- }
-29
arch/nds32/math-emu/fs2ui.c
- // SPDX-License-Identifier: GPL-2.0
- // Copyright (C) 2005-2019 Andes Technology Corporation
- #include <linux/uaccess.h>
-
- #include <asm/sfp-machine.h>
- #include <math-emu/soft-fp.h>
- #include <math-emu/single.h>
-
- void fs2ui(void *ft, void *fa)
- {
- 	unsigned int r;
-
- 	FP_DECL_S(A);
- 	FP_DECL_EX;
-
- 	FP_UNPACK_SP(A, fa);
-
- 	if (A_c == FP_CLS_INF) {
- 		*(unsigned int *)ft = (A_s == 0) ? 0xffffffff : 0x00000000;
- 		__FPU_FPCSR |= FP_EX_INVALID;
- 	} else if (A_c == FP_CLS_NAN) {
- 		*(unsigned int *)ft = 0xffffffff;
- 		__FPU_FPCSR |= FP_EX_INVALID;
- 	} else {
- 		FP_TO_INT_ROUND_S(r, A, 32, 0);
- 		__FPU_FPCSR |= FP_CUR_EXCEPTIONS;
- 		*(unsigned int *)ft = r;
- 	}
- }
-30
arch/nds32/math-emu/fs2uiz.c
- // SPDX-License-Identifier: GPL-2.0
- // Copyright (C) 2005-2019 Andes Technology Corporation
- #include <linux/uaccess.h>
-
- #include <asm/sfp-machine.h>
- #include <math-emu/soft-fp.h>
- #include <math-emu/single.h>
-
- void fs2ui_z(void *ft, void *fa)
- {
- 	unsigned int r;
-
- 	FP_DECL_S(A);
- 	FP_DECL_EX;
-
- 	FP_UNPACK_SP(A, fa);
-
- 	if (A_c == FP_CLS_INF) {
- 		*(unsigned int *)ft = (A_s == 0) ? 0xffffffff : 0x00000000;
- 		__FPU_FPCSR |= FP_EX_INVALID;
- 	} else if (A_c == FP_CLS_NAN) {
- 		*(unsigned int *)ft = 0xffffffff;
- 		__FPU_FPCSR |= FP_EX_INVALID;
- 	} else {
- 		FP_TO_INT_S(r, A, 32, 0);
- 		__FPU_FPCSR |= FP_CUR_EXCEPTIONS;
- 		*(unsigned int *)ft = r;
- 	}
-
- }
-22
arch/nds32/math-emu/fsi2d.c
- // SPDX-License-Identifier: GPL-2.0
- // Copyright (C) 2005-2019 Andes Technology Corporation
- #include <linux/uaccess.h>
-
- #include <asm/sfp-machine.h>
- #include <math-emu/soft-fp.h>
- #include <math-emu/double.h>
-
- void fsi2d(void *ft, void *fa)
- {
- 	int a = *(int *)fa;
-
- 	FP_DECL_D(R);
- 	FP_DECL_EX;
-
- 	FP_FROM_INT_D(R, a, 32, int);
-
- 	FP_PACK_DP(ft, R);
-
- 	__FPU_FPCSR |= FP_CUR_EXCEPTIONS;
-
- }
-22
arch/nds32/math-emu/fsi2s.c
- // SPDX-License-Identifier: GPL-2.0
- // Copyright (C) 2005-2019 Andes Technology Corporation
- #include <linux/uaccess.h>
-
- #include <asm/sfp-machine.h>
- #include <math-emu/soft-fp.h>
- #include <math-emu/single.h>
-
- void fsi2s(void *ft, void *fa)
- {
- 	int a = *(int *)fa;
-
- 	FP_DECL_S(R);
- 	FP_DECL_EX;
-
- 	FP_FROM_INT_S(R, a, 32, int);
-
- 	FP_PACK_SP(ft, R);
-
- 	__FPU_FPCSR |= FP_CUR_EXCEPTIONS;
-
- }
-21
arch/nds32/math-emu/fsqrtd.c
- // SPDX-License-Identifier: GPL-2.0
- // Copyright (C) 2005-2018 Andes Technology Corporation
-
- #include <linux/uaccess.h>
- #include <asm/sfp-machine.h>
- #include <math-emu/soft-fp.h>
- #include <math-emu/double.h>
- void fsqrtd(void *ft, void *fa)
- {
- 	FP_DECL_D(A);
- 	FP_DECL_D(R);
- 	FP_DECL_EX;
-
- 	FP_UNPACK_DP(A, fa);
-
- 	FP_SQRT_D(R, A);
-
- 	FP_PACK_DP(ft, R);
-
- 	__FPU_FPCSR |= FP_CUR_EXCEPTIONS;
- }
-21
arch/nds32/math-emu/fsqrts.c
- // SPDX-License-Identifier: GPL-2.0
- // Copyright (C) 2005-2018 Andes Technology Corporation
-
- #include <linux/uaccess.h>
- #include <asm/sfp-machine.h>
- #include <math-emu/soft-fp.h>
- #include <math-emu/single.h>
- void fsqrts(void *ft, void *fa)
- {
- 	FP_DECL_S(A);
- 	FP_DECL_S(R);
- 	FP_DECL_EX;
-
- 	FP_UNPACK_SP(A, fa);
-
- 	FP_SQRT_S(R, A);
-
- 	FP_PACK_SP(ft, R);
-
- 	__FPU_FPCSR |= FP_CUR_EXCEPTIONS;
- }
-27
arch/nds32/math-emu/fsubd.c
- // SPDX-License-Identifier: GPL-2.0
- // Copyright (C) 2005-2018 Andes Technology Corporation
- #include <linux/uaccess.h>
-
- #include <asm/sfp-machine.h>
- #include <math-emu/soft-fp.h>
- #include <math-emu/double.h>
- void fsubd(void *ft, void *fa, void *fb)
- {
-
- 	FP_DECL_D(A);
- 	FP_DECL_D(B);
- 	FP_DECL_D(R);
- 	FP_DECL_EX;
-
- 	FP_UNPACK_DP(A, fa);
- 	FP_UNPACK_DP(B, fb);
-
- 	if (B_c != FP_CLS_NAN)
- 		B_s ^= 1;
-
- 	FP_ADD_D(R, A, B);
-
- 	FP_PACK_DP(ft, R);
-
- 	__FPU_FPCSR |= FP_CUR_EXCEPTIONS;
- }
-27
arch/nds32/math-emu/fsubs.c
- // SPDX-License-Identifier: GPL-2.0
- // Copyright (C) 2005-2018 Andes Technology Corporation
- #include <linux/uaccess.h>
-
- #include <asm/sfp-machine.h>
- #include <math-emu/soft-fp.h>
- #include <math-emu/single.h>
- void fsubs(void *ft, void *fa, void *fb)
- {
-
- 	FP_DECL_S(A);
- 	FP_DECL_S(B);
- 	FP_DECL_S(R);
- 	FP_DECL_EX;
-
- 	FP_UNPACK_SP(A, fa);
- 	FP_UNPACK_SP(B, fb);
-
- 	if (B_c != FP_CLS_NAN)
- 		B_s ^= 1;
-
- 	FP_ADD_S(R, A, B);
-
- 	FP_PACK_SP(ft, R);
-
- 	__FPU_FPCSR |= FP_CUR_EXCEPTIONS;
- }
-22
arch/nds32/math-emu/fui2d.c
- // SPDX-License-Identifier: GPL-2.0
- // Copyright (C) 2005-2019 Andes Technology Corporation
- #include <linux/uaccess.h>
-
- #include <asm/sfp-machine.h>
- #include <math-emu/soft-fp.h>
- #include <math-emu/double.h>
-
- void fui2d(void *ft, void *fa)
- {
- 	unsigned int a = *(unsigned int *)fa;
-
- 	FP_DECL_D(R);
- 	FP_DECL_EX;
-
- 	FP_FROM_INT_D(R, a, 32, int);
-
- 	FP_PACK_DP(ft, R);
-
- 	__FPU_FPCSR |= FP_CUR_EXCEPTIONS;
-
- }
-22
arch/nds32/math-emu/fui2s.c
- // SPDX-License-Identifier: GPL-2.0
- // Copyright (C) 2005-2019 Andes Technology Corporation
- #include <linux/uaccess.h>
-
- #include <asm/sfp-machine.h>
- #include <math-emu/soft-fp.h>
- #include <math-emu/single.h>
-
- void fui2s(void *ft, void *fa)
- {
- 	unsigned int a = *(unsigned int *)fa;
-
- 	FP_DECL_S(R);
- 	FP_DECL_EX;
-
- 	FP_FROM_INT_S(R, a, 32, int);
-
- 	FP_PACK_SP(ft, R);
-
- 	__FPU_FPCSR |= FP_CUR_EXCEPTIONS;
-
- }
-10
arch/nds32/mm/Makefile
- # SPDX-License-Identifier: GPL-2.0-only
- obj-y				:= extable.o tlb.o fault.o init.o mmap.o \
- 				   mm-nds32.o cacheflush.o proc.o
-
- obj-$(CONFIG_ALIGNMENT_TRAP)	+= alignment.o
-
- ifdef CONFIG_FUNCTION_TRACER
- CFLAGS_REMOVE_proc.o = $(CC_FLAGS_FTRACE)
- endif
- CFLAGS_proc.o += -fomit-frame-pointer
-575
arch/nds32/mm/alignment.c
- // SPDX-License-Identifier: GPL-2.0
- // Copyright (C) 2005-2017 Andes Technology Corporation
-
- #include <linux/proc_fs.h>
- #include <linux/uaccess.h>
- #include <linux/sysctl.h>
- #include <asm/unaligned.h>
-
- #define DEBUG(enable, tagged, ...) \
- 	do { \
- 		if (enable) { \
- 			if (tagged) \
- 				pr_warn("[ %30s() ] ", __func__); \
- 			pr_warn(__VA_ARGS__); \
- 		} \
- 	} while (0)
-
- #define RT(inst)	(((inst) >> 20) & 0x1FUL)
- #define RA(inst)	(((inst) >> 15) & 0x1FUL)
- #define RB(inst)	(((inst) >> 10) & 0x1FUL)
- #define SV(inst)	(((inst) >> 8) & 0x3UL)
- #define IMM(inst)	(((inst) >> 0) & 0x7FFFUL)
-
- #define RA3(inst)	(((inst) >> 3) & 0x7UL)
- #define RT3(inst)	(((inst) >> 6) & 0x7UL)
- #define IMM3U(inst)	(((inst) >> 0) & 0x7UL)
-
- #define RA5(inst)	(((inst) >> 0) & 0x1FUL)
- #define RT4(inst)	(((inst) >> 5) & 0xFUL)
-
- #define GET_IMMSVAL(imm_value) \
- 	(((imm_value >> 14) & 0x1) ? (imm_value - 0x8000) : imm_value)
-
- #define __get8_data(val, addr, err) \
- 	__asm__( \
- 	"1:	lbi.bi	%1, [%2], #1\n" \
- 	"2:\n" \
- 	"	.pushsection .text.fixup,\"ax\"\n" \
- 	"	.align	2\n" \
- 	"3:	movi	%0, #1\n" \
- 	"	j	2b\n" \
- 	"	.popsection\n" \
- 	"	.pushsection __ex_table,\"a\"\n" \
- 	"	.align	3\n" \
- 	"	.long	1b, 3b\n" \
- 	"	.popsection\n" \
- 	: "=r"(err), "=&r"(val), "=r"(addr) \
- 	: "0"(err), "2"(addr))
-
- #define get16_data(addr, val_ptr) \
- 	do { \
- 		unsigned int err = 0, v, a = addr; \
- 		__get8_data(v, a, err); \
- 		*val_ptr = v << 0; \
- 		__get8_data(v, a, err); \
- 		*val_ptr |= v << 8; \
- 		if (err) \
- 			goto fault; \
- 		*val_ptr = le16_to_cpu(*val_ptr); \
- 	} while (0)
-
- #define get32_data(addr, val_ptr) \
- 	do { \
- 		unsigned int err = 0, v, a = addr; \
- 		__get8_data(v, a, err); \
- 		*val_ptr = v << 0; \
- 		__get8_data(v, a, err); \
- 		*val_ptr |= v << 8; \
- 		__get8_data(v, a, err); \
- 		*val_ptr |= v << 16; \
- 		__get8_data(v, a, err); \
- 		*val_ptr |= v << 24; \
- 		if (err) \
- 			goto fault; \
- 		*val_ptr = le32_to_cpu(*val_ptr); \
- 	} while (0)
-
- #define get_data(addr, val_ptr, len) \
- 	if (len == 2) \
- 		get16_data(addr, val_ptr); \
- 	else \
- 		get32_data(addr, val_ptr);
-
- #define set16_data(addr, val) \
- 	do { \
- 		unsigned int err = 0, *ptr = addr; \
- 		val = le32_to_cpu(val); \
- 		__asm__( \
- 		"1:	sbi.bi	%2, [%1], #1\n" \
- 		"	srli	%2, %2, #8\n" \
- 		"2:	sbi	%2, [%1]\n" \
- 		"3:\n" \
- 		"	.pushsection .text.fixup,\"ax\"\n" \
- 		"	.align	2\n" \
- 		"4:	movi	%0, #1\n" \
- 		"	j	3b\n" \
- 		"	.popsection\n" \
- 		"	.pushsection __ex_table,\"a\"\n" \
- 		"	.align	3\n" \
- 		"	.long	1b, 4b\n" \
- 		"	.long	2b, 4b\n" \
- 		"	.popsection\n" \
- 		: "=r"(err), "+r"(ptr), "+r"(val) \
- 		: "0"(err)); \
- 		if (err) \
- 			goto fault; \
- 	} while (0)
-
- #define set32_data(addr, val) \
- 	do { \
- 		unsigned int err = 0, *ptr = addr; \
- 		val = le32_to_cpu(val); \
- 		__asm__( \
- 		"1:	sbi.bi	%2, [%1], #1\n" \
- 		"	srli	%2, %2, #8\n" \
- 		"2:	sbi.bi	%2, [%1], #1\n" \
- 		"	srli	%2, %2, #8\n" \
- 		"3:	sbi.bi	%2, [%1], #1\n" \
- 		"	srli	%2, %2, #8\n" \
- 		"4:	sbi	%2, [%1]\n" \
- 		"5:\n" \
- 		"	.pushsection .text.fixup,\"ax\"\n" \
- 		"	.align	2\n" \
- 		"6:	movi	%0, #1\n" \
- 		"	j	5b\n" \
- 		"	.popsection\n" \
- 		"	.pushsection __ex_table,\"a\"\n" \
- 		"	.align	3\n" \
- 		"	.long	1b, 6b\n" \
- 		"	.long	2b, 6b\n" \
- 		"	.long	3b, 6b\n" \
- 		"	.long	4b, 6b\n" \
- 		"	.popsection\n" \
- 		: "=r"(err), "+r"(ptr), "+r"(val) \
- 		: "0"(err)); \
- 		if (err) \
- 			goto fault; \
- 	} while (0)
- #define set_data(addr, val, len) \
- 	if (len == 2) \
- 		set16_data(addr, val); \
- 	else \
- 		set32_data(addr, val);
- #define NDS32_16BIT_INSTRUCTION	0x80000000
-
- extern pte_t va_present(struct mm_struct *mm, unsigned long addr);
- extern pte_t va_kernel_present(unsigned long addr);
- extern int va_readable(struct pt_regs *regs, unsigned long addr);
- extern int va_writable(struct pt_regs *regs, unsigned long addr);
-
- int unalign_access_mode = 0, unalign_access_debug = 0;
-
- static inline unsigned long *idx_to_addr(struct pt_regs *regs, int idx)
- {
- 	/* this should be consistent with ptrace.h */
- 	if (idx >= 0 && idx <= 25)	/* R0-R25 */
- 		return &regs->uregs[0] + idx;
- 	else if (idx >= 28 && idx <= 30)	/* FP, GP, LP */
- 		return &regs->fp + (idx - 28);
- 	else if (idx == 31)	/* SP */
- 		return &regs->sp;
- 	else
- 		return NULL;	/* cause a segfault */
- }
-
- static inline unsigned long get_inst(unsigned long addr)
- {
- 	return be32_to_cpu(get_unaligned((u32 *) addr));
- }
-
- static inline unsigned long sign_extend(unsigned long val, int len)
- {
- 	unsigned long ret = 0;
- 	unsigned char *s, *t;
- 	int i = 0;
-
- 	val = cpu_to_le32(val);
-
- 	s = (void *)&val;
- 	t = (void *)&ret;
-
- 	while (i++ < len)
- 		*t++ = *s++;
-
- 	if (((*(t - 1)) & 0x80) && (i < 4)) {
-
- 		while (i++ <= 4)
- 			*t++ = 0xff;
- 	}
-
- 	return le32_to_cpu(ret);
- }
-
- static inline int do_16(unsigned long inst, struct pt_regs *regs)
- {
- 	int imm, regular, load, len, addr_mode, idx_mode;
- 	unsigned long unaligned_addr, target_val, source_idx, target_idx,
- 	    shift = 0;
- 	switch ((inst >> 9) & 0x3F) {
-
- 	case 0x12:		/* LHI333 */
- 		imm = 1;
- 		regular = 1;
- 		load = 1;
- 		len = 2;
- 		addr_mode = 3;
- 		idx_mode = 3;
- 		break;
- 	case 0x10:		/* LWI333 */
- 		imm = 1;
- 		regular = 1;
- 		load = 1;
- 		len = 4;
- 		addr_mode = 3;
- 		idx_mode = 3;
- 		break;
- 	case 0x11:		/* LWI333.bi */
imm = 1; 221 - regular = 0; 222 - load = 1; 223 - len = 4; 224 - addr_mode = 3; 225 - idx_mode = 3; 226 - break; 227 - case 0x1A: /* LWI450 */ 228 - imm = 0; 229 - regular = 1; 230 - load = 1; 231 - len = 4; 232 - addr_mode = 5; 233 - idx_mode = 4; 234 - break; 235 - case 0x16: /* SHI333 */ 236 - imm = 1; 237 - regular = 1; 238 - load = 0; 239 - len = 2; 240 - addr_mode = 3; 241 - idx_mode = 3; 242 - break; 243 - case 0x14: /* SWI333 */ 244 - imm = 1; 245 - regular = 1; 246 - load = 0; 247 - len = 4; 248 - addr_mode = 3; 249 - idx_mode = 3; 250 - break; 251 - case 0x15: /* SWI333.bi */ 252 - imm = 1; 253 - regular = 0; 254 - load = 0; 255 - len = 4; 256 - addr_mode = 3; 257 - idx_mode = 3; 258 - break; 259 - case 0x1B: /* SWI450 */ 260 - imm = 0; 261 - regular = 1; 262 - load = 0; 263 - len = 4; 264 - addr_mode = 5; 265 - idx_mode = 4; 266 - break; 267 - 268 - default: 269 - return -EFAULT; 270 - } 271 - 272 - if (addr_mode == 3) { 273 - unaligned_addr = *idx_to_addr(regs, RA3(inst)); 274 - source_idx = RA3(inst); 275 - } else { 276 - unaligned_addr = *idx_to_addr(regs, RA5(inst)); 277 - source_idx = RA5(inst); 278 - } 279 - 280 - if (idx_mode == 3) 281 - target_idx = RT3(inst); 282 - else 283 - target_idx = RT4(inst); 284 - 285 - if (imm) 286 - shift = IMM3U(inst) * len; 287 - 288 - if (regular) 289 - unaligned_addr += shift; 290 - 291 - if (load) { 292 - if (!access_ok((void *)unaligned_addr, len)) 293 - return -EACCES; 294 - 295 - get_data(unaligned_addr, &target_val, len); 296 - *idx_to_addr(regs, target_idx) = target_val; 297 - } else { 298 - if (!access_ok((void *)unaligned_addr, len)) 299 - return -EACCES; 300 - target_val = *idx_to_addr(regs, target_idx); 301 - set_data((void *)unaligned_addr, target_val, len); 302 - } 303 - 304 - if (!regular) 305 - *idx_to_addr(regs, source_idx) = unaligned_addr + shift; 306 - regs->ipc += 2; 307 - 308 - return 0; 309 - fault: 310 - return -EACCES; 311 - } 312 - 313 - static inline int do_32(unsigned long inst, struct 
pt_regs *regs) 314 - { 315 - int imm, regular, load, len, sign_ext; 316 - unsigned long unaligned_addr, target_val, shift; 317 - 318 - unaligned_addr = *idx_to_addr(regs, RA(inst)); 319 - 320 - switch ((inst >> 25) << 1) { 321 - 322 - case 0x02: /* LHI */ 323 - imm = 1; 324 - regular = 1; 325 - load = 1; 326 - len = 2; 327 - sign_ext = 0; 328 - break; 329 - case 0x0A: /* LHI.bi */ 330 - imm = 1; 331 - regular = 0; 332 - load = 1; 333 - len = 2; 334 - sign_ext = 0; 335 - break; 336 - case 0x22: /* LHSI */ 337 - imm = 1; 338 - regular = 1; 339 - load = 1; 340 - len = 2; 341 - sign_ext = 1; 342 - break; 343 - case 0x2A: /* LHSI.bi */ 344 - imm = 1; 345 - regular = 0; 346 - load = 1; 347 - len = 2; 348 - sign_ext = 1; 349 - break; 350 - case 0x04: /* LWI */ 351 - imm = 1; 352 - regular = 1; 353 - load = 1; 354 - len = 4; 355 - sign_ext = 0; 356 - break; 357 - case 0x0C: /* LWI.bi */ 358 - imm = 1; 359 - regular = 0; 360 - load = 1; 361 - len = 4; 362 - sign_ext = 0; 363 - break; 364 - case 0x12: /* SHI */ 365 - imm = 1; 366 - regular = 1; 367 - load = 0; 368 - len = 2; 369 - sign_ext = 0; 370 - break; 371 - case 0x1A: /* SHI.bi */ 372 - imm = 1; 373 - regular = 0; 374 - load = 0; 375 - len = 2; 376 - sign_ext = 0; 377 - break; 378 - case 0x14: /* SWI */ 379 - imm = 1; 380 - regular = 1; 381 - load = 0; 382 - len = 4; 383 - sign_ext = 0; 384 - break; 385 - case 0x1C: /* SWI.bi */ 386 - imm = 1; 387 - regular = 0; 388 - load = 0; 389 - len = 4; 390 - sign_ext = 0; 391 - break; 392 - 393 - default: 394 - switch (inst & 0xff) { 395 - 396 - case 0x01: /* LH */ 397 - imm = 0; 398 - regular = 1; 399 - load = 1; 400 - len = 2; 401 - sign_ext = 0; 402 - break; 403 - case 0x05: /* LH.bi */ 404 - imm = 0; 405 - regular = 0; 406 - load = 1; 407 - len = 2; 408 - sign_ext = 0; 409 - break; 410 - case 0x11: /* LHS */ 411 - imm = 0; 412 - regular = 1; 413 - load = 1; 414 - len = 2; 415 - sign_ext = 1; 416 - break; 417 - case 0x15: /* LHS.bi */ 418 - imm = 0; 419 - regular = 0; 420 - 
load = 1; 421 - len = 2; 422 - sign_ext = 1; 423 - break; 424 - case 0x02: /* LW */ 425 - imm = 0; 426 - regular = 1; 427 - load = 1; 428 - len = 4; 429 - sign_ext = 0; 430 - break; 431 - case 0x06: /* LW.bi */ 432 - imm = 0; 433 - regular = 0; 434 - load = 1; 435 - len = 4; 436 - sign_ext = 0; 437 - break; 438 - case 0x09: /* SH */ 439 - imm = 0; 440 - regular = 1; 441 - load = 0; 442 - len = 2; 443 - sign_ext = 0; 444 - break; 445 - case 0x0D: /* SH.bi */ 446 - imm = 0; 447 - regular = 0; 448 - load = 0; 449 - len = 2; 450 - sign_ext = 0; 451 - break; 452 - case 0x0A: /* SW */ 453 - imm = 0; 454 - regular = 1; 455 - load = 0; 456 - len = 4; 457 - sign_ext = 0; 458 - break; 459 - case 0x0E: /* SW.bi */ 460 - imm = 0; 461 - regular = 0; 462 - load = 0; 463 - len = 4; 464 - sign_ext = 0; 465 - break; 466 - 467 - default: 468 - return -EFAULT; 469 - } 470 - } 471 - 472 - if (imm) 473 - shift = GET_IMMSVAL(IMM(inst)) * len; 474 - else 475 - shift = *idx_to_addr(regs, RB(inst)) << SV(inst); 476 - 477 - if (regular) 478 - unaligned_addr += shift; 479 - 480 - if (load) { 481 - 482 - if (!access_ok((void *)unaligned_addr, len)) 483 - return -EACCES; 484 - 485 - get_data(unaligned_addr, &target_val, len); 486 - 487 - if (sign_ext) 488 - *idx_to_addr(regs, RT(inst)) = 489 - sign_extend(target_val, len); 490 - else 491 - *idx_to_addr(regs, RT(inst)) = target_val; 492 - } else { 493 - 494 - if (!access_ok((void *)unaligned_addr, len)) 495 - return -EACCES; 496 - 497 - target_val = *idx_to_addr(regs, RT(inst)); 498 - set_data((void *)unaligned_addr, target_val, len); 499 - } 500 - 501 - if (!regular) 502 - *idx_to_addr(regs, RA(inst)) = unaligned_addr + shift; 503 - 504 - regs->ipc += 4; 505 - 506 - return 0; 507 - fault: 508 - return -EACCES; 509 - } 510 - 511 - int do_unaligned_access(unsigned long addr, struct pt_regs *regs) 512 - { 513 - unsigned long inst; 514 - int ret = -EFAULT; 515 - 516 - inst = get_inst(regs->ipc); 517 - 518 - DEBUG((unalign_access_debug > 0), 1, 519 
- "Faulting addr: 0x%08lx, pc: 0x%08lx [inst: 0x%08lx ]\n", addr, 520 - regs->ipc, inst); 521 - 522 - if (inst & NDS32_16BIT_INSTRUCTION) 523 - ret = do_16((inst >> 16) & 0xffff, regs); 524 - else 525 - ret = do_32(inst, regs); 526 - 527 - return ret; 528 - } 529 - 530 - #ifdef CONFIG_PROC_FS 531 - 532 - static struct ctl_table alignment_tbl[3] = { 533 - { 534 - .procname = "enable", 535 - .data = &unalign_access_mode, 536 - .maxlen = sizeof(unalign_access_mode), 537 - .mode = 0666, 538 - .proc_handler = &proc_dointvec 539 - } 540 - , 541 - { 542 - .procname = "debug_info", 543 - .data = &unalign_access_debug, 544 - .maxlen = sizeof(unalign_access_debug), 545 - .mode = 0644, 546 - .proc_handler = &proc_dointvec 547 - } 548 - , 549 - {} 550 - }; 551 - 552 - static struct ctl_table nds32_sysctl_table[2] = { 553 - { 554 - .procname = "unaligned_access", 555 - .mode = 0555, 556 - .child = alignment_tbl}, 557 - {} 558 - }; 559 - 560 - static struct ctl_path nds32_path[2] = { 561 - {.procname = "nds32"}, 562 - {} 563 - }; 564 - 565 - /* 566 - * Initialize nds32 alignment-correction interface 567 - */ 568 - static int __init nds32_sysctl_init(void) 569 - { 570 - register_sysctl_paths(nds32_path, nds32_sysctl_table); 571 - return 0; 572 - } 573 - 574 - __initcall(nds32_sysctl_init); 575 - #endif /* CONFIG_PROC_FS */
-338
arch/nds32/mm/cacheflush.c
··· (deleted file, 338 lines: the nds32 cache-maintenance code. It provided flush_icache_range(), flush_icache_page(), flush_icache_user_page() and update_mmu_cache(), plus, under CONFIG_CPU_CACHE_ALIASING, the aliasing()/kremap0()/kremap1() colour helpers and the flush_cache_*, copy_user_highpage(), clear_user_highpage(), flush_dcache_page(), copy_to_user_page()/copy_from_user_page(), flush_anon_page() and kernel vmap flush implementations.)
-16
arch/nds32/mm/extable.c
···
1 - // SPDX-License-Identifier: GPL-2.0
2 - // Copyright (C) 2005-2017 Andes Technology Corporation
3 -
4 - #include <linux/extable.h>
5 - #include <linux/uaccess.h>
6 -
7 - int fixup_exception(struct pt_regs *regs)
8 - {
9 - 	const struct exception_table_entry *fixup;
10 -
11 - 	fixup = search_exception_tables(instruction_pointer(regs));
12 - 	if (fixup)
13 - 		regs->ipc = fixup->fixup;
14 -
15 - 	return fixup != NULL;
16 - }
-396
arch/nds32/mm/fault.c
··· (deleted file, 396 lines: the nds32 page-fault handler. It contained the show_pte() page-table dumper and do_page_fault(), which handled vmalloc-area faults by synchronizing the faulting page table against init_mm.pgd, signalled alignment-check exceptions, avoided mmap_read_lock() deadlocks via a trylock path, mapped ENTRY_PTE_NOT_PRESENT and ENTRY_TLB_MISC error codes onto the required VM_READ/VM_WRITE/VM_EXEC permissions, and implemented the retry, out-of-memory, SIGBUS and kernel exception-table recovery paths.)
-263
arch/nds32/mm/init.c
··· (deleted file: the nds32 mm initialization code, including empty_zero_page, zone_sizes_init(), the map_ram() routine that maps all low memory through memblock-allocated PTE pages and panics if the page tables are not the hardcoded two levels, and fixedrange_init() for the fixmap and, under CONFIG_HIGHMEM, the permanent kmap range; listing truncated …)
__func__, PAGE_SIZE, PAGE_SIZE); 128 - set_pmd(pmd, __pmd(__pa(pte) + _PAGE_KERNEL_TABLE)); 129 - pkmap_page_table = pte; 130 - #endif /* CONFIG_HIGHMEM */ 131 - } 132 - 133 - /* 134 - * paging_init() sets up the page tables, initialises the zone memory 135 - * maps, and sets up the zero page, bad page and bad page tables. 136 - */ 137 - void __init paging_init(void) 138 - { 139 - int i; 140 - void *zero_page; 141 - 142 - pr_info("Setting up paging and PTEs.\n"); 143 - /* clear out the init_mm.pgd that will contain the kernel's mappings */ 144 - for (i = 0; i < PTRS_PER_PGD; i++) 145 - swapper_pg_dir[i] = __pgd(1); 146 - 147 - map_ram(); 148 - 149 - fixedrange_init(); 150 - 151 - /* allocate space for empty_zero_page */ 152 - zero_page = memblock_alloc(PAGE_SIZE, PAGE_SIZE); 153 - if (!zero_page) 154 - panic("%s: Failed to allocate %lu bytes align=0x%lx\n", 155 - __func__, PAGE_SIZE, PAGE_SIZE); 156 - zone_sizes_init(); 157 - 158 - empty_zero_page = virt_to_page(zero_page); 159 - flush_dcache_page(empty_zero_page); 160 - } 161 - 162 - static inline void __init free_highmem(void) 163 - { 164 - #ifdef CONFIG_HIGHMEM 165 - unsigned long pfn; 166 - for (pfn = PFN_UP(__pa(high_memory)); pfn < max_pfn; pfn++) { 167 - phys_addr_t paddr = (phys_addr_t) pfn << PAGE_SHIFT; 168 - if (!memblock_is_reserved(paddr)) 169 - free_highmem_page(pfn_to_page(pfn)); 170 - } 171 - #endif 172 - } 173 - 174 - static void __init set_max_mapnr_init(void) 175 - { 176 - max_mapnr = max_pfn; 177 - } 178 - 179 - /* 180 - * mem_init() marks the free areas in the mem_map and tells us how much 181 - * memory is free. This is done after various parts of the system have 182 - * claimed their memory after the kernel image. 
183 - */ 184 - void __init mem_init(void) 185 - { 186 - phys_addr_t memory_start = memblock_start_of_DRAM(); 187 - BUG_ON(!mem_map); 188 - set_max_mapnr_init(); 189 - 190 - free_highmem(); 191 - 192 - /* this will put all low memory onto the freelists */ 193 - memblock_free_all(); 194 - 195 - pr_info("virtual kernel memory layout:\n" 196 - " fixmap : 0x%08lx - 0x%08lx (%4ld kB)\n" 197 - #ifdef CONFIG_HIGHMEM 198 - " pkmap : 0x%08lx - 0x%08lx (%4ld kB)\n" 199 - #endif 200 - " consist : 0x%08lx - 0x%08lx (%4ld MB)\n" 201 - " vmalloc : 0x%08lx - 0x%08lx (%4ld MB)\n" 202 - " lowmem : 0x%08lx - 0x%08lx (%4ld MB)\n" 203 - " .init : 0x%08lx - 0x%08lx (%4ld kB)\n" 204 - " .data : 0x%08lx - 0x%08lx (%4ld kB)\n" 205 - " .text : 0x%08lx - 0x%08lx (%4ld kB)\n", 206 - FIXADDR_START, FIXADDR_TOP, (FIXADDR_TOP - FIXADDR_START) >> 10, 207 - #ifdef CONFIG_HIGHMEM 208 - PKMAP_BASE, PKMAP_BASE + LAST_PKMAP * PAGE_SIZE, 209 - (LAST_PKMAP * PAGE_SIZE) >> 10, 210 - #endif 211 - CONSISTENT_BASE, CONSISTENT_END, 212 - ((CONSISTENT_END) - (CONSISTENT_BASE)) >> 20, VMALLOC_START, 213 - (unsigned long)VMALLOC_END, (VMALLOC_END - VMALLOC_START) >> 20, 214 - (unsigned long)__va(memory_start), (unsigned long)high_memory, 215 - ((unsigned long)high_memory - 216 - (unsigned long)__va(memory_start)) >> 20, 217 - (unsigned long)&__init_begin, (unsigned long)&__init_end, 218 - ((unsigned long)&__init_end - 219 - (unsigned long)&__init_begin) >> 10, (unsigned long)&_etext, 220 - (unsigned long)&_edata, 221 - ((unsigned long)&_edata - (unsigned long)&_etext) >> 10, 222 - (unsigned long)&_text, (unsigned long)&_etext, 223 - ((unsigned long)&_etext - (unsigned long)&_text) >> 10); 224 - 225 - /* 226 - * Check boundaries twice: Some fundamental inconsistencies can 227 - * be detected at build time already. 
228 - */ 229 - #ifdef CONFIG_HIGHMEM 230 - BUILD_BUG_ON(PKMAP_BASE + LAST_PKMAP * PAGE_SIZE > FIXADDR_START); 231 - BUILD_BUG_ON((CONSISTENT_END) > PKMAP_BASE); 232 - #endif 233 - BUILD_BUG_ON(VMALLOC_END > CONSISTENT_BASE); 234 - BUILD_BUG_ON(VMALLOC_START >= VMALLOC_END); 235 - 236 - #ifdef CONFIG_HIGHMEM 237 - BUG_ON(PKMAP_BASE + LAST_PKMAP * PAGE_SIZE > FIXADDR_START); 238 - BUG_ON(CONSISTENT_END > PKMAP_BASE); 239 - #endif 240 - BUG_ON(VMALLOC_END > CONSISTENT_BASE); 241 - BUG_ON(VMALLOC_START >= VMALLOC_END); 242 - BUG_ON((unsigned long)high_memory > VMALLOC_START); 243 - 244 - return; 245 - } 246 - 247 - void __set_fixmap(enum fixed_addresses idx, 248 - phys_addr_t phys, pgprot_t flags) 249 - { 250 - unsigned long addr = __fix_to_virt(idx); 251 - pte_t *pte; 252 - 253 - BUG_ON(idx <= FIX_HOLE || idx >= __end_of_fixed_addresses); 254 - 255 - pte = (pte_t *)&fixmap_pmd_p[pte_index(addr)]; 256 - 257 - if (pgprot_val(flags)) { 258 - set_pte(pte, pfn_pte(phys >> PAGE_SHIFT, flags)); 259 - } else { 260 - pte_clear(&init_mm, addr, pte); 261 - flush_tlb_kernel_range(addr, addr + PAGE_SIZE); 262 - } 263 - }
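In the removed `map_ram()`, each outer-loop iteration allocates one page to hold `PTRS_PER_PTE` PTEs and then fills it. A small standalone sketch of the sizing arithmetic behind that loop (the constants below assume 4 KiB pages and 4-byte PTEs, which matches a 32-bit two-level layout but is an assumption here):

```c
#include <assert.h>

#define PAGE_SHIFT   12
#define PAGE_SIZE    (1UL << PAGE_SHIFT)
#define PTRS_PER_PTE 1024   /* one 4 KiB page of 4-byte PTEs */

/* One second-level table maps PTRS_PER_PTE pages; this computes how
 * many such PTE pages map_ram() would allocate for [start, end),
 * rounding up at both the page and the table level. */
static unsigned long pte_pages_needed(unsigned long start, unsigned long end)
{
    unsigned long npages = (end - start + PAGE_SIZE - 1) >> PAGE_SHIFT;

    return (npages + PTRS_PER_PTE - 1) / PTRS_PER_PTE;
}
```

So mapping 64 MiB of lowmem costs sixteen 4 KiB table pages under these assumptions, which is why allocating them eagerly at boot is cheap.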
-96
arch/nds32/mm/mm-nds32.c
··· 1 - // SPDX-License-Identifier: GPL-2.0 2 - // Copyright (C) 2005-2017 Andes Technology Corporation 3 - 4 - #include <linux/init_task.h> 5 - 6 - #define __HAVE_ARCH_PGD_FREE 7 - #include <asm/pgalloc.h> 8 - 9 - #define FIRST_KERNEL_PGD_NR (USER_PTRS_PER_PGD) 10 - 11 - /* 12 - * need to get a page for level 1 13 - */ 14 - 15 - pgd_t *pgd_alloc(struct mm_struct *mm) 16 - { 17 - pgd_t *new_pgd, *init_pgd; 18 - int i; 19 - 20 - new_pgd = (pgd_t *) __get_free_pages(GFP_KERNEL, 0); 21 - if (!new_pgd) 22 - return NULL; 23 - for (i = 0; i < PTRS_PER_PGD; i++) { 24 - (*new_pgd) = 1; 25 - new_pgd++; 26 - } 27 - new_pgd -= PTRS_PER_PGD; 28 - 29 - init_pgd = pgd_offset_k(0); 30 - 31 - memcpy(new_pgd + FIRST_KERNEL_PGD_NR, init_pgd + FIRST_KERNEL_PGD_NR, 32 - (PTRS_PER_PGD - FIRST_KERNEL_PGD_NR) * sizeof(pgd_t)); 33 - 34 - cpu_dcache_wb_range((unsigned long)new_pgd, 35 - (unsigned long)new_pgd + 36 - PTRS_PER_PGD * sizeof(pgd_t)); 37 - inc_lruvec_page_state(virt_to_page((unsigned long *)new_pgd), 38 - NR_PAGETABLE); 39 - 40 - return new_pgd; 41 - } 42 - 43 - void pgd_free(struct mm_struct *mm, pgd_t * pgd) 44 - { 45 - pmd_t *pmd; 46 - struct page *pte; 47 - 48 - if (!pgd) 49 - return; 50 - 51 - pmd = (pmd_t *) pgd; 52 - if (pmd_none(*pmd)) 53 - goto free; 54 - if (pmd_bad(*pmd)) { 55 - pmd_ERROR(*pmd); 56 - pmd_clear(pmd); 57 - goto free; 58 - } 59 - 60 - pte = pmd_page(*pmd); 61 - pmd_clear(pmd); 62 - dec_lruvec_page_state(virt_to_page((unsigned long *)pgd), NR_PAGETABLE); 63 - pte_free(mm, pte); 64 - mm_dec_nr_ptes(mm); 65 - pmd_free(mm, pmd); 66 - free: 67 - free_pages((unsigned long)pgd, 0); 68 - } 69 - 70 - /* 71 - * In order to soft-boot, we need to insert a 1:1 mapping in place of 72 - * the user-mode pages. 
This will then ensure that we have predictable 73 - * results when turning the mmu off 74 - */ 75 - void setup_mm_for_reboot(char mode) 76 - { 77 - unsigned long pmdval; 78 - pgd_t *pgd; 79 - p4d_t *p4d; 80 - pud_t *pud; 81 - pmd_t *pmd; 82 - int i; 83 - 84 - if (current->mm && current->mm->pgd) 85 - pgd = current->mm->pgd; 86 - else 87 - pgd = init_mm.pgd; 88 - 89 - for (i = 0; i < USER_PTRS_PER_PGD; i++) { 90 - pmdval = (i << PGDIR_SHIFT); 91 - p4d = p4d_offset(pgd, i << PGDIR_SHIFT); 92 - pud = pud_offset(p4d, i << PGDIR_SHIFT); 93 - pmd = pmd_offset(pud + i, i << PGDIR_SHIFT); 94 - set_pmd(pmd, __pmd(pmdval)); 95 - } 96 - }
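The removed `pgd_alloc()` marks every user slot invalid and then copies the kernel half of the master table, so each new address space shares the kernel mappings. A userspace model of that initialization (the split constants and the `1` invalid marker follow the removed code; the 2048-entry size is a hypothetical stand-in):

```c
#include <assert.h>
#include <string.h>

#define PTRS_PER_PGD        2048
#define USER_PTRS_PER_PGD   1536   /* hypothetical user/kernel split */
#define FIRST_KERNEL_PGD_NR USER_PTRS_PER_PGD

typedef unsigned long pgd_t;

/* Model of the removed pgd_alloc() body: user slots get the invalid
 * marker (1 on nds32), kernel slots are copied from the master table
 * so kernel mappings are shared by every mm. */
static void init_new_pgd(pgd_t *new_pgd, const pgd_t *init_pgd)
{
    int i;

    for (i = 0; i < PTRS_PER_PGD; i++)
        new_pgd[i] = 1;             /* invalid entry marker */
    memcpy(new_pgd + FIRST_KERNEL_PGD_NR, init_pgd + FIRST_KERNEL_PGD_NR,
           (PTRS_PER_PGD - FIRST_KERNEL_PGD_NR) * sizeof(pgd_t));
}
```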
-73
arch/nds32/mm/mmap.c
··· 1 - // SPDX-License-Identifier: GPL-2.0 2 - // Copyright (C) 2005-2017 Andes Technology Corporation 3 - 4 - #include <linux/sched.h> 5 - #include <linux/mman.h> 6 - #include <linux/shm.h> 7 - 8 - #define COLOUR_ALIGN(addr,pgoff) \ 9 - ((((addr)+SHMLBA-1)&~(SHMLBA-1)) + \ 10 - (((pgoff)<<PAGE_SHIFT) & (SHMLBA-1))) 11 - 12 - /* 13 - * We need to ensure that shared mappings are correctly aligned to 14 - * avoid aliasing issues with VIPT caches. We need to ensure that 15 - * a specific page of an object is always mapped at a multiple of 16 - * SHMLBA bytes. 17 - * 18 - * We unconditionally provide this function for all cases, however 19 - * in the VIVT case, we optimise out the alignment rules. 20 - */ 21 - unsigned long 22 - arch_get_unmapped_area(struct file *filp, unsigned long addr, 23 - unsigned long len, unsigned long pgoff, 24 - unsigned long flags) 25 - { 26 - struct mm_struct *mm = current->mm; 27 - struct vm_area_struct *vma; 28 - int do_align = 0; 29 - struct vm_unmapped_area_info info; 30 - int aliasing = 0; 31 - if(IS_ENABLED(CONFIG_CPU_CACHE_ALIASING)) 32 - aliasing = 1; 33 - 34 - /* 35 - * We only need to do colour alignment if either the I or D 36 - * caches alias. 37 - */ 38 - if (aliasing) 39 - do_align = filp || (flags & MAP_SHARED); 40 - 41 - /* 42 - * We enforce the MAP_FIXED case. 43 - */ 44 - if (flags & MAP_FIXED) { 45 - if (aliasing && flags & MAP_SHARED && 46 - (addr - (pgoff << PAGE_SHIFT)) & (SHMLBA - 1)) 47 - return -EINVAL; 48 - return addr; 49 - } 50 - 51 - if (len > TASK_SIZE) 52 - return -ENOMEM; 53 - 54 - if (addr) { 55 - if (do_align) 56 - addr = COLOUR_ALIGN(addr, pgoff); 57 - else 58 - addr = PAGE_ALIGN(addr); 59 - 60 - vma = find_vma(mm, addr); 61 - if (TASK_SIZE - len >= addr && 62 - (!vma || addr + len <= vm_start_gap(vma))) 63 - return addr; 64 - } 65 - 66 - info.flags = 0; 67 - info.length = len; 68 - info.low_limit = mm->mmap_base; 69 - info.high_limit = TASK_SIZE; 70 - info.align_mask = do_align ? 
(PAGE_MASK & (SHMLBA - 1)) : 0; 71 - info.align_offset = pgoff << PAGE_SHIFT; 72 - return vm_unmapped_area(&info); 73 - }
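The `COLOUR_ALIGN` macro in the removed mmap.c is the heart of the VIPT-aliasing avoidance: it rounds the candidate address up to an `SHMLBA` boundary and then adds the file offset's colour, so page `pgoff` of a shared object always maps at the same cache colour. Function form of the same arithmetic (the 16 KiB `SHMLBA` value is an illustrative assumption):

```c
#include <assert.h>

#define PAGE_SHIFT 12
#define SHMLBA     (4UL << PAGE_SHIFT)  /* hypothetical 16 KiB colour period */

/* Same arithmetic as COLOUR_ALIGN(addr, pgoff): round addr up to an
 * SHMLBA boundary, then offset by pgoff's colour within SHMLBA. */
static unsigned long colour_align(unsigned long addr, unsigned long pgoff)
{
    unsigned long base   = (addr + SHMLBA - 1) & ~(SHMLBA - 1UL);
    unsigned long colour = (pgoff << PAGE_SHIFT) & (SHMLBA - 1UL);

    return base + colour;
}
```

The invariant worth checking is that the result's colour depends only on `pgoff`, never on the incoming address, which is what keeps two shared mappings of the same page from landing in different cache sets.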
-536
arch/nds32/mm/proc.c
··· 1 - // SPDX-License-Identifier: GPL-2.0 2 - // Copyright (C) 2005-2017 Andes Technology Corporation 3 - 4 - #include <linux/module.h> 5 - #include <linux/sched.h> 6 - #include <linux/mm.h> 7 - #include <asm/nds32.h> 8 - #include <asm/tlbflush.h> 9 - #include <asm/cacheflush.h> 10 - #include <asm/l2_cache.h> 11 - #include <nds32_intrinsic.h> 12 - 13 - #include <asm/cache_info.h> 14 - extern struct cache_info L1_cache_info[2]; 15 - 16 - int va_kernel_present(unsigned long addr) 17 - { 18 - pmd_t *pmd; 19 - pte_t *ptep, pte; 20 - 21 - pmd = pmd_off_k(addr); 22 - if (!pmd_none(*pmd)) { 23 - ptep = pte_offset_map(pmd, addr); 24 - pte = *ptep; 25 - if (pte_present(pte)) 26 - return pte; 27 - } 28 - return 0; 29 - } 30 - 31 - pte_t va_present(struct mm_struct * mm, unsigned long addr) 32 - { 33 - pgd_t *pgd; 34 - p4d_t *p4d; 35 - pud_t *pud; 36 - pmd_t *pmd; 37 - pte_t *ptep, pte; 38 - 39 - pgd = pgd_offset(mm, addr); 40 - if (!pgd_none(*pgd)) { 41 - p4d = p4d_offset(pgd, addr); 42 - if (!p4d_none(*p4d)) { 43 - pud = pud_offset(p4d, addr); 44 - if (!pud_none(*pud)) { 45 - pmd = pmd_offset(pud, addr); 46 - if (!pmd_none(*pmd)) { 47 - ptep = pte_offset_map(pmd, addr); 48 - pte = *ptep; 49 - if (pte_present(pte)) 50 - return pte; 51 - } 52 - } 53 - } 54 - } 55 - return 0; 56 - 57 - } 58 - 59 - int va_readable(struct pt_regs *regs, unsigned long addr) 60 - { 61 - struct mm_struct *mm = current->mm; 62 - pte_t pte; 63 - int ret = 0; 64 - 65 - if (user_mode(regs)) { 66 - /* user mode */ 67 - pte = va_present(mm, addr); 68 - if (!pte && pte_read(pte)) 69 - ret = 1; 70 - } else { 71 - /* superuser mode is always readable, so we can only 72 - * check it is present or not*/ 73 - return (! 
!va_kernel_present(addr)); 74 - } 75 - return ret; 76 - } 77 - 78 - int va_writable(struct pt_regs *regs, unsigned long addr) 79 - { 80 - struct mm_struct *mm = current->mm; 81 - pte_t pte; 82 - int ret = 0; 83 - 84 - if (user_mode(regs)) { 85 - /* user mode */ 86 - pte = va_present(mm, addr); 87 - if (!pte && pte_write(pte)) 88 - ret = 1; 89 - } else { 90 - /* superuser mode */ 91 - pte = va_kernel_present(addr); 92 - if (!pte && pte_kernel_write(pte)) 93 - ret = 1; 94 - } 95 - return ret; 96 - } 97 - 98 - /* 99 - * All 100 - */ 101 - void cpu_icache_inval_all(void) 102 - { 103 - unsigned long end, line_size; 104 - 105 - line_size = L1_cache_info[ICACHE].line_size; 106 - end = 107 - line_size * L1_cache_info[ICACHE].ways * L1_cache_info[ICACHE].sets; 108 - 109 - do { 110 - end -= line_size; 111 - __asm__ volatile ("\n\tcctl %0, L1I_IX_INVAL"::"r" (end)); 112 - end -= line_size; 113 - __asm__ volatile ("\n\tcctl %0, L1I_IX_INVAL"::"r" (end)); 114 - end -= line_size; 115 - __asm__ volatile ("\n\tcctl %0, L1I_IX_INVAL"::"r" (end)); 116 - end -= line_size; 117 - __asm__ volatile ("\n\tcctl %0, L1I_IX_INVAL"::"r" (end)); 118 - } while (end > 0); 119 - __nds32__isb(); 120 - } 121 - 122 - void cpu_dcache_inval_all(void) 123 - { 124 - __nds32__cctl_l1d_invalall(); 125 - } 126 - 127 - #ifdef CONFIG_CACHE_L2 128 - void dcache_wb_all_level(void) 129 - { 130 - unsigned long flags, cmd; 131 - local_irq_save(flags); 132 - __nds32__cctl_l1d_wball_alvl(); 133 - /* Section 1: Ensure the section 2 & 3 program code execution after */ 134 - __nds32__cctlidx_read(NDS32_CCTL_L1D_IX_RWD,0); 135 - 136 - /* Section 2: Confirm the writeback all level is done in CPU and L2C */ 137 - cmd = CCTL_CMD_L2_SYNC; 138 - L2_CMD_RDY(); 139 - L2C_W_REG(L2_CCTL_CMD_OFF, cmd); 140 - L2_CMD_RDY(); 141 - 142 - /* Section 3: Writeback whole L2 cache */ 143 - cmd = CCTL_ALL_CMD | CCTL_CMD_L2_IX_WB; 144 - L2_CMD_RDY(); 145 - L2C_W_REG(L2_CCTL_CMD_OFF, cmd); 146 - L2_CMD_RDY(); 147 - __nds32__msync_all(); 148 
- local_irq_restore(flags); 149 - } 150 - EXPORT_SYMBOL(dcache_wb_all_level); 151 - #endif 152 - 153 - void cpu_dcache_wb_all(void) 154 - { 155 - __nds32__cctl_l1d_wball_one_lvl(); 156 - __nds32__cctlidx_read(NDS32_CCTL_L1D_IX_RWD,0); 157 - } 158 - 159 - void cpu_dcache_wbinval_all(void) 160 - { 161 - #ifndef CONFIG_CPU_DCACHE_WRITETHROUGH 162 - unsigned long flags; 163 - local_irq_save(flags); 164 - #endif 165 - cpu_dcache_wb_all(); 166 - cpu_dcache_inval_all(); 167 - #ifndef CONFIG_CPU_DCACHE_WRITETHROUGH 168 - local_irq_restore(flags); 169 - #endif 170 - } 171 - 172 - /* 173 - * Page 174 - */ 175 - void cpu_icache_inval_page(unsigned long start) 176 - { 177 - unsigned long line_size, end; 178 - 179 - line_size = L1_cache_info[ICACHE].line_size; 180 - end = start + PAGE_SIZE; 181 - 182 - do { 183 - end -= line_size; 184 - __asm__ volatile ("\n\tcctl %0, L1I_VA_INVAL"::"r" (end)); 185 - end -= line_size; 186 - __asm__ volatile ("\n\tcctl %0, L1I_VA_INVAL"::"r" (end)); 187 - end -= line_size; 188 - __asm__ volatile ("\n\tcctl %0, L1I_VA_INVAL"::"r" (end)); 189 - end -= line_size; 190 - __asm__ volatile ("\n\tcctl %0, L1I_VA_INVAL"::"r" (end)); 191 - } while (end != start); 192 - __nds32__isb(); 193 - } 194 - 195 - void cpu_dcache_inval_page(unsigned long start) 196 - { 197 - unsigned long line_size, end; 198 - 199 - line_size = L1_cache_info[DCACHE].line_size; 200 - end = start + PAGE_SIZE; 201 - 202 - do { 203 - end -= line_size; 204 - __asm__ volatile ("\n\tcctl %0, L1D_VA_INVAL"::"r" (end)); 205 - end -= line_size; 206 - __asm__ volatile ("\n\tcctl %0, L1D_VA_INVAL"::"r" (end)); 207 - end -= line_size; 208 - __asm__ volatile ("\n\tcctl %0, L1D_VA_INVAL"::"r" (end)); 209 - end -= line_size; 210 - __asm__ volatile ("\n\tcctl %0, L1D_VA_INVAL"::"r" (end)); 211 - } while (end != start); 212 - } 213 - 214 - void cpu_dcache_wb_page(unsigned long start) 215 - { 216 - #ifndef CONFIG_CPU_DCACHE_WRITETHROUGH 217 - unsigned long line_size, end; 218 - 219 - line_size = 
L1_cache_info[DCACHE].line_size; 220 - end = start + PAGE_SIZE; 221 - 222 - do { 223 - end -= line_size; 224 - __asm__ volatile ("\n\tcctl %0, L1D_VA_WB"::"r" (end)); 225 - end -= line_size; 226 - __asm__ volatile ("\n\tcctl %0, L1D_VA_WB"::"r" (end)); 227 - end -= line_size; 228 - __asm__ volatile ("\n\tcctl %0, L1D_VA_WB"::"r" (end)); 229 - end -= line_size; 230 - __asm__ volatile ("\n\tcctl %0, L1D_VA_WB"::"r" (end)); 231 - } while (end != start); 232 - __nds32__cctlidx_read(NDS32_CCTL_L1D_IX_RWD,0); 233 - #endif 234 - } 235 - 236 - void cpu_dcache_wbinval_page(unsigned long start) 237 - { 238 - unsigned long line_size, end; 239 - 240 - line_size = L1_cache_info[DCACHE].line_size; 241 - end = start + PAGE_SIZE; 242 - 243 - do { 244 - end -= line_size; 245 - #ifndef CONFIG_CPU_DCACHE_WRITETHROUGH 246 - __asm__ volatile ("\n\tcctl %0, L1D_VA_WB"::"r" (end)); 247 - #endif 248 - __asm__ volatile ("\n\tcctl %0, L1D_VA_INVAL"::"r" (end)); 249 - end -= line_size; 250 - #ifndef CONFIG_CPU_DCACHE_WRITETHROUGH 251 - __asm__ volatile ("\n\tcctl %0, L1D_VA_WB"::"r" (end)); 252 - #endif 253 - __asm__ volatile ("\n\tcctl %0, L1D_VA_INVAL"::"r" (end)); 254 - end -= line_size; 255 - #ifndef CONFIG_CPU_DCACHE_WRITETHROUGH 256 - __asm__ volatile ("\n\tcctl %0, L1D_VA_WB"::"r" (end)); 257 - #endif 258 - __asm__ volatile ("\n\tcctl %0, L1D_VA_INVAL"::"r" (end)); 259 - end -= line_size; 260 - #ifndef CONFIG_CPU_DCACHE_WRITETHROUGH 261 - __asm__ volatile ("\n\tcctl %0, L1D_VA_WB"::"r" (end)); 262 - #endif 263 - __asm__ volatile ("\n\tcctl %0, L1D_VA_INVAL"::"r" (end)); 264 - } while (end != start); 265 - __nds32__cctlidx_read(NDS32_CCTL_L1D_IX_RWD,0); 266 - } 267 - 268 - void cpu_cache_wbinval_page(unsigned long page, int flushi) 269 - { 270 - cpu_dcache_wbinval_page(page); 271 - if (flushi) 272 - cpu_icache_inval_page(page); 273 - } 274 - 275 - /* 276 - * Range 277 - */ 278 - void cpu_icache_inval_range(unsigned long start, unsigned long end) 279 - { 280 - unsigned long line_size; 
281 - 282 - line_size = L1_cache_info[ICACHE].line_size; 283 - 284 - while (end > start) { 285 - __asm__ volatile ("\n\tcctl %0, L1I_VA_INVAL"::"r" (start)); 286 - start += line_size; 287 - } 288 - __nds32__isb(); 289 - } 290 - 291 - void cpu_dcache_inval_range(unsigned long start, unsigned long end) 292 - { 293 - unsigned long line_size; 294 - 295 - line_size = L1_cache_info[DCACHE].line_size; 296 - 297 - while (end > start) { 298 - __asm__ volatile ("\n\tcctl %0, L1D_VA_INVAL"::"r" (start)); 299 - start += line_size; 300 - } 301 - } 302 - 303 - void cpu_dcache_wb_range(unsigned long start, unsigned long end) 304 - { 305 - #ifndef CONFIG_CPU_DCACHE_WRITETHROUGH 306 - unsigned long line_size; 307 - 308 - line_size = L1_cache_info[DCACHE].line_size; 309 - 310 - while (end > start) { 311 - __asm__ volatile ("\n\tcctl %0, L1D_VA_WB"::"r" (start)); 312 - start += line_size; 313 - } 314 - __nds32__cctlidx_read(NDS32_CCTL_L1D_IX_RWD,0); 315 - #endif 316 - } 317 - 318 - void cpu_dcache_wbinval_range(unsigned long start, unsigned long end) 319 - { 320 - unsigned long line_size; 321 - 322 - line_size = L1_cache_info[DCACHE].line_size; 323 - 324 - while (end > start) { 325 - #ifndef CONFIG_CPU_DCACHE_WRITETHROUGH 326 - __asm__ volatile ("\n\tcctl %0, L1D_VA_WB"::"r" (start)); 327 - #endif 328 - __asm__ volatile ("\n\tcctl %0, L1D_VA_INVAL"::"r" (start)); 329 - start += line_size; 330 - } 331 - __nds32__cctlidx_read(NDS32_CCTL_L1D_IX_RWD,0); 332 - } 333 - 334 - void cpu_cache_wbinval_range(unsigned long start, unsigned long end, int flushi) 335 - { 336 - unsigned long line_size, align_start, align_end; 337 - 338 - line_size = L1_cache_info[DCACHE].line_size; 339 - align_start = start & ~(line_size - 1); 340 - align_end = (end + line_size - 1) & ~(line_size - 1); 341 - cpu_dcache_wbinval_range(align_start, align_end); 342 - 343 - if (flushi) { 344 - line_size = L1_cache_info[ICACHE].line_size; 345 - align_start = start & ~(line_size - 1); 346 - align_end = (end + line_size - 
1) & ~(line_size - 1); 347 - cpu_icache_inval_range(align_start, align_end); 348 - } 349 - } 350 - 351 - void cpu_cache_wbinval_range_check(struct vm_area_struct *vma, 352 - unsigned long start, unsigned long end, 353 - bool flushi, bool wbd) 354 - { 355 - unsigned long line_size, t_start, t_end; 356 - 357 - if (!flushi && !wbd) 358 - return; 359 - line_size = L1_cache_info[DCACHE].line_size; 360 - start = start & ~(line_size - 1); 361 - end = (end + line_size - 1) & ~(line_size - 1); 362 - 363 - if ((end - start) > (8 * PAGE_SIZE)) { 364 - if (wbd) 365 - cpu_dcache_wbinval_all(); 366 - if (flushi) 367 - cpu_icache_inval_all(); 368 - return; 369 - } 370 - 371 - t_start = (start + PAGE_SIZE) & PAGE_MASK; 372 - t_end = ((end - 1) & PAGE_MASK); 373 - 374 - if ((start & PAGE_MASK) == t_end) { 375 - if (va_present(vma->vm_mm, start)) { 376 - if (wbd) 377 - cpu_dcache_wbinval_range(start, end); 378 - if (flushi) 379 - cpu_icache_inval_range(start, end); 380 - } 381 - return; 382 - } 383 - 384 - if (va_present(vma->vm_mm, start)) { 385 - if (wbd) 386 - cpu_dcache_wbinval_range(start, t_start); 387 - if (flushi) 388 - cpu_icache_inval_range(start, t_start); 389 - } 390 - 391 - if (va_present(vma->vm_mm, end - 1)) { 392 - if (wbd) 393 - cpu_dcache_wbinval_range(t_end, end); 394 - if (flushi) 395 - cpu_icache_inval_range(t_end, end); 396 - } 397 - 398 - while (t_start < t_end) { 399 - if (va_present(vma->vm_mm, t_start)) { 400 - if (wbd) 401 - cpu_dcache_wbinval_page(t_start); 402 - if (flushi) 403 - cpu_icache_inval_page(t_start); 404 - } 405 - t_start += PAGE_SIZE; 406 - } 407 - } 408 - 409 - #ifdef CONFIG_CACHE_L2 410 - static inline void cpu_l2cache_op(unsigned long start, unsigned long end, unsigned long op) 411 - { 412 - if (atl2c_base) { 413 - unsigned long p_start = __pa(start); 414 - unsigned long p_end = __pa(end); 415 - unsigned long cmd; 416 - unsigned long line_size; 417 - /* TODO Can Use PAGE Mode to optimize if range large than PAGE_SIZE */ 418 - line_size = 
L2_CACHE_LINE_SIZE(); 419 - p_start = p_start & (~(line_size - 1)); 420 - p_end = (p_end + line_size - 1) & (~(line_size - 1)); 421 - cmd = 422 - (p_start & ~(line_size - 1)) | op | 423 - CCTL_SINGLE_CMD; 424 - do { 425 - L2_CMD_RDY(); 426 - L2C_W_REG(L2_CCTL_CMD_OFF, cmd); 427 - cmd += line_size; 428 - p_start += line_size; 429 - } while (p_end > p_start); 430 - cmd = CCTL_CMD_L2_SYNC; 431 - L2_CMD_RDY(); 432 - L2C_W_REG(L2_CCTL_CMD_OFF, cmd); 433 - L2_CMD_RDY(); 434 - } 435 - } 436 - #else 437 - #define cpu_l2cache_op(start,end,op) do { } while (0) 438 - #endif 439 - /* 440 - * DMA 441 - */ 442 - void cpu_dma_wb_range(unsigned long start, unsigned long end) 443 - { 444 - unsigned long line_size; 445 - unsigned long flags; 446 - line_size = L1_cache_info[DCACHE].line_size; 447 - start = start & (~(line_size - 1)); 448 - end = (end + line_size - 1) & (~(line_size - 1)); 449 - if (unlikely(start == end)) 450 - return; 451 - 452 - local_irq_save(flags); 453 - cpu_dcache_wb_range(start, end); 454 - cpu_l2cache_op(start, end, CCTL_CMD_L2_PA_WB); 455 - __nds32__msync_all(); 456 - local_irq_restore(flags); 457 - } 458 - 459 - void cpu_dma_inval_range(unsigned long start, unsigned long end) 460 - { 461 - unsigned long line_size; 462 - unsigned long old_start = start; 463 - unsigned long old_end = end; 464 - unsigned long flags; 465 - line_size = L1_cache_info[DCACHE].line_size; 466 - start = start & (~(line_size - 1)); 467 - end = (end + line_size - 1) & (~(line_size - 1)); 468 - if (unlikely(start == end)) 469 - return; 470 - local_irq_save(flags); 471 - if (start != old_start) { 472 - cpu_dcache_wbinval_range(start, start + line_size); 473 - cpu_l2cache_op(start, start + line_size, CCTL_CMD_L2_PA_WBINVAL); 474 - } 475 - if (end != old_end) { 476 - cpu_dcache_wbinval_range(end - line_size, end); 477 - cpu_l2cache_op(end - line_size, end, CCTL_CMD_L2_PA_WBINVAL); 478 - } 479 - cpu_dcache_inval_range(start, end); 480 - cpu_l2cache_op(start, end, CCTL_CMD_L2_PA_INVAL); 481 
- __nds32__msync_all(); 482 - local_irq_restore(flags); 483 - 484 - } 485 - 486 - void cpu_dma_wbinval_range(unsigned long start, unsigned long end) 487 - { 488 - unsigned long line_size; 489 - unsigned long flags; 490 - line_size = L1_cache_info[DCACHE].line_size; 491 - start = start & (~(line_size - 1)); 492 - end = (end + line_size - 1) & (~(line_size - 1)); 493 - if (unlikely(start == end)) 494 - return; 495 - 496 - local_irq_save(flags); 497 - cpu_dcache_wbinval_range(start, end); 498 - cpu_l2cache_op(start, end, CCTL_CMD_L2_PA_WBINVAL); 499 - __nds32__msync_all(); 500 - local_irq_restore(flags); 501 - } 502 - 503 - void cpu_proc_init(void) 504 - { 505 - } 506 - 507 - void cpu_proc_fin(void) 508 - { 509 - } 510 - 511 - void cpu_do_idle(void) 512 - { 513 - __nds32__standby_no_wake_grant(); 514 - } 515 - 516 - void cpu_reset(unsigned long reset) 517 - { 518 - u32 tmp; 519 - GIE_DISABLE(); 520 - tmp = __nds32__mfsr(NDS32_SR_CACHE_CTL); 521 - tmp &= ~(CACHE_CTL_mskIC_EN | CACHE_CTL_mskDC_EN); 522 - __nds32__mtsr_isb(tmp, NDS32_SR_CACHE_CTL); 523 - cpu_dcache_wbinval_all(); 524 - cpu_icache_inval_all(); 525 - 526 - __asm__ __volatile__("jr.toff %0\n\t"::"r"(reset)); 527 - } 528 - 529 - void cpu_switch_mm(struct mm_struct *mm) 530 - { 531 - unsigned long cid; 532 - cid = __nds32__mfsr(NDS32_SR_TLB_MISC); 533 - cid = (cid & ~TLB_MISC_mskCID) | mm->context.id; 534 - __nds32__mtsr_dsb(cid, NDS32_SR_TLB_MISC); 535 - __nds32__mtsr_isb(__pa(mm->pgd), NDS32_SR_L1_PPTB); 536 - }
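Every range routine in the removed proc.c first widens `[start, end)` to whole cache lines before issuing per-line `cctl` operations. The alignment step isolated as plain C (valid for any power-of-two line size):

```c
#include <assert.h>

/* Widen [start, end) to whole cache lines, as the removed
 * cpu_cache_wbinval_range() and cpu_dma_*_range() helpers do:
 * round start down and end up to line_size (a power of two). */
static void align_to_lines(unsigned long start, unsigned long end,
                           unsigned long line_size,
                           unsigned long *astart, unsigned long *aend)
{
    *astart = start & ~(line_size - 1);
    *aend   = (end + line_size - 1) & ~(line_size - 1);
}
```

Note the DMA-invalidate path in the original additionally write-backs the partial first and last lines before invalidating, precisely because this widening would otherwise discard unrelated dirty data sharing those lines.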
-50
arch/nds32/mm/tlb.c
··· 1 - // SPDX-License-Identifier: GPL-2.0 2 - // Copyright (C) 2005-2017 Andes Technology Corporation 3 - 4 - #include <linux/spinlock_types.h> 5 - #include <linux/mm.h> 6 - #include <linux/sched.h> 7 - #include <asm/nds32.h> 8 - #include <nds32_intrinsic.h> 9 - 10 - unsigned int cpu_last_cid = { TLB_MISC_mskCID + (2 << TLB_MISC_offCID) }; 11 - 12 - DEFINE_SPINLOCK(cid_lock); 13 - 14 - void local_flush_tlb_range(struct vm_area_struct *vma, 15 - unsigned long start, unsigned long end) 16 - { 17 - unsigned long flags, ocid, ncid; 18 - 19 - if ((end - start) > 0x400000) { 20 - __nds32__tlbop_flua(); 21 - __nds32__isb(); 22 - return; 23 - } 24 - 25 - spin_lock_irqsave(&cid_lock, flags); 26 - ocid = __nds32__mfsr(NDS32_SR_TLB_MISC); 27 - ncid = (ocid & ~TLB_MISC_mskCID) | vma->vm_mm->context.id; 28 - __nds32__mtsr_dsb(ncid, NDS32_SR_TLB_MISC); 29 - while (start < end) { 30 - __nds32__tlbop_inv(start); 31 - __nds32__isb(); 32 - start += PAGE_SIZE; 33 - } 34 - __nds32__mtsr_dsb(ocid, NDS32_SR_TLB_MISC); 35 - spin_unlock_irqrestore(&cid_lock, flags); 36 - } 37 - 38 - void local_flush_tlb_page(struct vm_area_struct *vma, unsigned long addr) 39 - { 40 - unsigned long flags, ocid, ncid; 41 - 42 - spin_lock_irqsave(&cid_lock, flags); 43 - ocid = __nds32__mfsr(NDS32_SR_TLB_MISC); 44 - ncid = (ocid & ~TLB_MISC_mskCID) | vma->vm_mm->context.id; 45 - __nds32__mtsr_dsb(ncid, NDS32_SR_TLB_MISC); 46 - __nds32__tlbop_inv(addr); 47 - __nds32__isb(); 48 - __nds32__mtsr_dsb(ocid, NDS32_SR_TLB_MISC); 49 - spin_unlock_irqrestore(&cid_lock, flags); 50 - }
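The removed `local_flush_tlb_range()` bails out to a full TLB flush once the range exceeds 4 MiB, on the reasoning that thousands of per-page invalidations cost more than refilling the whole TLB. The threshold check as a standalone predicate:

```c
#include <assert.h>
#include <stdbool.h>

#define FLUSH_ALL_THRESHOLD 0x400000UL  /* 4 MiB, from local_flush_tlb_range() */

/* Past the threshold, flush everything instead of walking the range
 * page by page; below it, per-page invalidation preserves unrelated
 * TLB entries. */
static bool should_flush_all(unsigned long start, unsigned long end)
{
    return (end - start) > FLUSH_ALL_THRESHOLD;
}
```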
-9
drivers/clocksource/Kconfig
··· 617 617 Enable this option to use the Low Power controller timer 618 618 as clocksource. 619 619 620 - config ATCPIT100_TIMER 621 - bool "ATCPIT100 timer driver" 622 - depends on NDS32 || COMPILE_TEST 623 - depends on HAS_IOMEM 624 - select TIMER_OF 625 - default NDS32 626 - help 627 - This option enables support for the Andestech ATCPIT100 timers. 628 - 629 620 config RISCV_TIMER 630 621 bool "Timer for the RISC-V platform" if COMPILE_TEST 631 622 depends on GENERIC_SCHED_CLOCK && RISCV && RISCV_SBI
-1
drivers/clocksource/Makefile
··· 81 81 obj-$(CONFIG_INGENIC_TIMER) += ingenic-timer.o 82 82 obj-$(CONFIG_CLKSRC_ST_LPC) += clksrc_st_lpc.o 83 83 obj-$(CONFIG_X86_NUMACHIP) += numachip.o 84 - obj-$(CONFIG_ATCPIT100_TIMER) += timer-atcpit100.o 85 84 obj-$(CONFIG_RISCV_TIMER) += timer-riscv.o 86 85 obj-$(CONFIG_CLINT_TIMER) += timer-clint.o 87 86 obj-$(CONFIG_CSKY_MP_TIMER) += timer-mp-csky.o
-266
drivers/clocksource/timer-atcpit100.c
···
1 - // SPDX-License-Identifier: GPL-2.0
2 - // Copyright (C) 2005-2017 Andes Technology Corporation
3 - /*
4 -  * Andestech ATCPIT100 Timer Device Driver Implementation
5 -  * Rick Chen, Andes Technology Corporation <rick@andestech.com>
6 -  *
7 -  */
8 -
9 - #include <linux/irq.h>
10 - #include <linux/clocksource.h>
11 - #include <linux/clockchips.h>
12 - #include <linux/interrupt.h>
13 - #include <linux/ioport.h>
14 - #include <linux/cpufreq.h>
15 - #include <linux/sched.h>
16 - #include <linux/sched_clock.h>
17 - #include <linux/of_address.h>
18 - #include <linux/of_irq.h>
19 - #include <linux/of_platform.h>
20 - #include "timer-of.h"
21 - #ifdef CONFIG_NDS32
22 - #include <asm/vdso_timer_info.h>
23 - #endif
24 -
25 - /*
26 -  * Definition of register offsets
27 -  */
28 -
29 - /* ID and Revision Register */
30 - #define ID_REV		0x0
31 -
32 - /* Configuration Register */
33 - #define CFG		0x10
34 -
35 - /* Interrupt Enable Register */
36 - #define INT_EN		0x14
37 - #define CH_INT_EN(c, i)	((1<<i)<<(4*c))
38 - #define CH0INT0EN	0x01
39 -
40 - /* Interrupt Status Register */
41 - #define INT_STA		0x18
42 - #define CH0INT0		0x01
43 -
44 - /* Channel Enable Register */
45 - #define CH_EN		0x1C
46 - #define CH0TMR0EN	0x1
47 - #define CH1TMR0EN	0x10
48 -
49 - /* Channel 0 , 1 Control Register */
50 - #define CH0_CTL		(0x20)
51 - #define CH1_CTL		(0x20 + 0x10)
52 -
53 - /* Channel clock source , bit 3 , 0:External clock , 1:APB clock */
54 - #define APB_CLK		BIT(3)
55 -
56 - /* Channel mode , bit 0~2 */
57 - #define TMR_32		0x1
58 - #define TMR_16		0x2
59 - #define TMR_8		0x3
60 -
61 - /* Channel 0 , 1 Reload Register */
62 - #define CH0_REL		(0x24)
63 - #define CH1_REL		(0x24 + 0x10)
64 -
65 - /* Channel 0 , 1 Counter Register */
66 - #define CH0_CNT		(0x28)
67 - #define CH1_CNT		(0x28 + 0x10)
68 -
69 - #define TIMER_SYNC_TICKS	3
70 -
71 - static void atcpit100_ch1_tmr0_en(void __iomem *base)
72 - {
73 - 	writel(~0, base + CH1_REL);
74 - 	writel(APB_CLK|TMR_32, base + CH1_CTL);
75 - }
76 -
77 - static void atcpit100_ch0_tmr0_en(void __iomem *base)
78 - {
79 - 	writel(APB_CLK|TMR_32, base + CH0_CTL);
80 - }
81 -
82 - static void atcpit100_clkevt_time_setup(void __iomem *base, unsigned long delay)
83 - {
84 - 	writel(delay, base + CH0_CNT);
85 - 	writel(delay, base + CH0_REL);
86 - }
87 -
88 - static void atcpit100_timer_clear_interrupt(void __iomem *base)
89 - {
90 - 	u32 val;
91 -
92 - 	val = readl(base + INT_STA);
93 - 	writel(val | CH0INT0, base + INT_STA);
94 - }
95 -
96 - static void atcpit100_clocksource_start(void __iomem *base)
97 - {
98 - 	u32 val;
99 -
100 - 	val = readl(base + CH_EN);
101 - 	writel(val | CH1TMR0EN, base + CH_EN);
102 - }
103 -
104 - static void atcpit100_clkevt_time_start(void __iomem *base)
105 - {
106 - 	u32 val;
107 -
108 - 	val = readl(base + CH_EN);
109 - 	writel(val | CH0TMR0EN, base + CH_EN);
110 - }
111 -
112 - static void atcpit100_clkevt_time_stop(void __iomem *base)
113 - {
114 - 	u32 val;
115 -
116 - 	atcpit100_timer_clear_interrupt(base);
117 - 	val = readl(base + CH_EN);
118 - 	writel(val & ~CH0TMR0EN, base + CH_EN);
119 - }
120 -
121 - static int atcpit100_clkevt_next_event(unsigned long evt,
122 - 				       struct clock_event_device *clkevt)
123 - {
124 - 	u32 val;
125 - 	struct timer_of *to = to_timer_of(clkevt);
126 -
127 - 	val = readl(timer_of_base(to) + CH_EN);
128 - 	writel(val & ~CH0TMR0EN, timer_of_base(to) + CH_EN);
129 - 	writel(evt, timer_of_base(to) + CH0_REL);
130 - 	writel(val | CH0TMR0EN, timer_of_base(to) + CH_EN);
131 -
132 - 	return 0;
133 - }
134 -
135 - static int atcpit100_clkevt_set_periodic(struct clock_event_device *evt)
136 - {
137 - 	struct timer_of *to = to_timer_of(evt);
138 -
139 - 	atcpit100_clkevt_time_setup(timer_of_base(to), timer_of_period(to));
140 - 	atcpit100_clkevt_time_start(timer_of_base(to));
141 -
142 - 	return 0;
143 - }
144 - static int atcpit100_clkevt_shutdown(struct clock_event_device *evt)
145 - {
146 - 	struct timer_of *to = to_timer_of(evt);
147 -
148 - 	atcpit100_clkevt_time_stop(timer_of_base(to));
149 -
150 - 	return 0;
151 - }
152 - static int atcpit100_clkevt_set_oneshot(struct clock_event_device *evt)
153 - {
154 - 	struct timer_of *to = to_timer_of(evt);
155 - 	u32 val;
156 -
157 - 	writel(~0x0, timer_of_base(to) + CH0_REL);
158 - 	val = readl(timer_of_base(to) + CH_EN);
159 - 	writel(val | CH0TMR0EN, timer_of_base(to) + CH_EN);
160 -
161 - 	return 0;
162 - }
163 -
164 - static irqreturn_t atcpit100_timer_interrupt(int irq, void *dev_id)
165 - {
166 - 	struct clock_event_device *evt = (struct clock_event_device *)dev_id;
167 - 	struct timer_of *to = to_timer_of(evt);
168 -
169 - 	atcpit100_timer_clear_interrupt(timer_of_base(to));
170 -
171 - 	evt->event_handler(evt);
172 -
173 - 	return IRQ_HANDLED;
174 - }
175 -
176 - static struct timer_of to = {
177 - 	.flags = TIMER_OF_IRQ | TIMER_OF_CLOCK | TIMER_OF_BASE,
178 -
179 - 	.clkevt = {
180 - 		.name = "atcpit100_tick",
181 - 		.rating = 300,
182 - 		.features = CLOCK_EVT_FEAT_PERIODIC | CLOCK_EVT_FEAT_ONESHOT,
183 - 		.set_state_shutdown = atcpit100_clkevt_shutdown,
184 - 		.set_state_periodic = atcpit100_clkevt_set_periodic,
185 - 		.set_state_oneshot = atcpit100_clkevt_set_oneshot,
186 - 		.tick_resume = atcpit100_clkevt_shutdown,
187 - 		.set_next_event = atcpit100_clkevt_next_event,
188 - 		.cpumask = cpu_possible_mask,
189 - 	},
190 -
191 - 	.of_irq = {
192 - 		.handler = atcpit100_timer_interrupt,
193 - 		.flags = IRQF_TIMER | IRQF_IRQPOLL,
194 - 	},
195 -
196 - 	/*
197 - 	 * FIXME: we currently only support clocking using PCLK
198 - 	 * and using EXTCLK is not supported in the driver.
199 - 	 */
200 - 	.of_clk = {
201 - 		.name = "PCLK",
202 - 	}
203 - };
204 -
205 - static u64 notrace atcpit100_timer_sched_read(void)
206 - {
207 - 	return ~readl(timer_of_base(&to) + CH1_CNT);
208 - }
209 -
210 - #ifdef CONFIG_NDS32
211 - static void fill_vdso_need_info(struct device_node *node)
212 - {
213 - 	struct resource timer_res;
214 - 	of_address_to_resource(node, 0, &timer_res);
215 - 	timer_info.mapping_base = (unsigned long)timer_res.start;
216 - 	timer_info.cycle_count_down = true;
217 - 	timer_info.cycle_count_reg_offset = CH1_CNT;
218 - }
219 - #endif
220 -
221 - static int __init atcpit100_timer_init(struct device_node *node)
222 - {
223 - 	int ret;
224 - 	u32 val;
225 - 	void __iomem *base;
226 -
227 - 	ret = timer_of_init(node, &to);
228 - 	if (ret)
229 - 		return ret;
230 -
231 - 	base = timer_of_base(&to);
232 -
233 - 	sched_clock_register(atcpit100_timer_sched_read, 32,
234 - 			     timer_of_rate(&to));
235 -
236 - 	ret = clocksource_mmio_init(base + CH1_CNT,
237 - 		node->name, timer_of_rate(&to), 300, 32,
238 - 		clocksource_mmio_readl_down);
239 -
240 - 	if (ret) {
241 - 		pr_err("Failed to register clocksource\n");
242 - 		return ret;
243 - 	}
244 -
245 - 	/* clear channel 0 timer0 interrupt */
246 - 	atcpit100_timer_clear_interrupt(base);
247 -
248 - 	clockevents_config_and_register(&to.clkevt, timer_of_rate(&to),
249 - 		TIMER_SYNC_TICKS, 0xffffffff);
250 - 	atcpit100_ch0_tmr0_en(base);
251 - 	atcpit100_ch1_tmr0_en(base);
252 - 	atcpit100_clocksource_start(base);
253 - 	atcpit100_clkevt_time_start(base);
254 -
255 - 	/* Enable channel 0 timer0 interrupt */
256 - 	val = readl(base + INT_EN);
257 - 	writel(val | CH0INT0EN, base + INT_EN);
258 -
259 - #ifdef CONFIG_NDS32
260 - 	fill_vdso_need_info(node);
261 - #endif
262 -
263 - 	return ret;
264 - }
265 -
266 - TIMER_OF_DECLARE(atcpit100, "andestech,atcpit100", atcpit100_timer_init);
-1
drivers/irqchip/Makefile
···
92 92 obj-$(CONFIG_ARCH_SYNQUACER)		+= irq-sni-exiu.o
93 93 obj-$(CONFIG_MESON_IRQ_GPIO)		+= irq-meson-gpio.o
94 94 obj-$(CONFIG_GOLDFISH_PIC) 		+= irq-goldfish-pic.o
95 - obj-$(CONFIG_NDS32)			+= irq-ativic32.o
96 95 obj-$(CONFIG_QCOM_PDC)			+= qcom-pdc.o
97 96 obj-$(CONFIG_CSKY_MPINTC)		+= irq-csky-mpintc.o
98 97 obj-$(CONFIG_CSKY_APB_INTC)		+= irq-csky-apb-intc.o
-156
drivers/irqchip/irq-ativic32.c
···
1 - // SPDX-License-Identifier: GPL-2.0
2 - // Copyright (C) 2005-2017 Andes Technology Corporation
3 -
4 - #include <linux/irq.h>
5 - #include <linux/of.h>
6 - #include <linux/of_irq.h>
7 - #include <linux/of_address.h>
8 - #include <linux/hardirq.h>
9 - #include <linux/interrupt.h>
10 - #include <linux/irqdomain.h>
11 - #include <linux/irqchip.h>
12 - #include <nds32_intrinsic.h>
13 -
14 - #include <asm/irq_regs.h>
15 -
16 - unsigned long wake_mask;
17 -
18 - static void ativic32_ack_irq(struct irq_data *data)
19 - {
20 - 	__nds32__mtsr_dsb(BIT(data->hwirq), NDS32_SR_INT_PEND2);
21 - }
22 -
23 - static void ativic32_mask_irq(struct irq_data *data)
24 - {
25 - 	unsigned long int_mask2 = __nds32__mfsr(NDS32_SR_INT_MASK2);
26 - 	__nds32__mtsr_dsb(int_mask2 & (~(BIT(data->hwirq))), NDS32_SR_INT_MASK2);
27 - }
28 -
29 - static void ativic32_unmask_irq(struct irq_data *data)
30 - {
31 - 	unsigned long int_mask2 = __nds32__mfsr(NDS32_SR_INT_MASK2);
32 - 	__nds32__mtsr_dsb(int_mask2 | (BIT(data->hwirq)), NDS32_SR_INT_MASK2);
33 - }
34 -
35 - static int nointc_set_wake(struct irq_data *data, unsigned int on)
36 - {
37 - 	unsigned long int_mask = __nds32__mfsr(NDS32_SR_INT_MASK);
38 - 	static unsigned long irq_orig_bit;
39 - 	u32 bit = 1 << data->hwirq;
40 -
41 - 	if (on) {
42 - 		if (int_mask & bit)
43 - 			__assign_bit(data->hwirq, &irq_orig_bit, true);
44 - 		else
45 - 			__assign_bit(data->hwirq, &irq_orig_bit, false);
46 -
47 - 		__assign_bit(data->hwirq, &int_mask, true);
48 - 		__assign_bit(data->hwirq, &wake_mask, true);
49 -
50 - 	} else {
51 - 		if (!(irq_orig_bit & bit))
52 - 			__assign_bit(data->hwirq, &int_mask, false);
53 -
54 - 		__assign_bit(data->hwirq, &wake_mask, false);
55 - 		__assign_bit(data->hwirq, &irq_orig_bit, false);
56 - 	}
57 -
58 - 	__nds32__mtsr_dsb(int_mask, NDS32_SR_INT_MASK);
59 -
60 - 	return 0;
61 - }
62 -
63 - static struct irq_chip ativic32_chip = {
64 - 	.name = "ativic32",
65 - 	.irq_ack = ativic32_ack_irq,
66 - 	.irq_mask = ativic32_mask_irq,
67 - 	.irq_unmask = ativic32_unmask_irq,
68 - 	.irq_set_wake = nointc_set_wake,
69 - };
70 -
71 - static unsigned int __initdata nivic_map[6] = { 6, 2, 10, 16, 24, 32 };
72 -
73 - static struct irq_domain *root_domain;
74 - static int ativic32_irq_domain_map(struct irq_domain *id, unsigned int virq,
75 - 				  irq_hw_number_t hw)
76 - {
77 -
78 - 	unsigned long int_trigger_type;
79 - 	u32 type;
80 - 	struct irq_data *irq_data;
81 - 	int_trigger_type = __nds32__mfsr(NDS32_SR_INT_TRIGGER);
82 - 	irq_data = irq_get_irq_data(virq);
83 - 	if (!irq_data)
84 - 		return -EINVAL;
85 -
86 - 	if (int_trigger_type & (BIT(hw))) {
87 - 		irq_set_chip_and_handler(virq, &ativic32_chip, handle_edge_irq);
88 - 		type = IRQ_TYPE_EDGE_RISING;
89 - 	} else {
90 - 		irq_set_chip_and_handler(virq, &ativic32_chip, handle_level_irq);
91 - 		type = IRQ_TYPE_LEVEL_HIGH;
92 - 	}
93 -
94 - 	irqd_set_trigger_type(irq_data, type);
95 - 	return 0;
96 - }
97 -
98 - static const struct irq_domain_ops ativic32_ops = {
99 - 	.map = ativic32_irq_domain_map,
100 - 	.xlate = irq_domain_xlate_onecell
101 - };
102 -
103 - static irq_hw_number_t get_intr_src(void)
104 - {
105 - 	return ((__nds32__mfsr(NDS32_SR_ITYPE) & ITYPE_mskVECTOR) >> ITYPE_offVECTOR)
106 - 		- NDS32_VECTOR_offINTERRUPT;
107 - }
108 -
109 - static void ativic32_handle_irq(struct pt_regs *regs)
110 - {
111 - 	irq_hw_number_t hwirq = get_intr_src();
112 - 	generic_handle_domain_irq(root_domain, hwirq);
113 - }
114 -
115 - /*
116 -  * TODO: convert nds32 to GENERIC_IRQ_MULTI_HANDLER so that this entry logic
117 -  * can live in arch code.
118 -  */
119 - asmlinkage void asm_do_IRQ(struct pt_regs *regs)
120 - {
121 - 	struct pt_regs *old_regs;
122 -
123 - 	irq_enter();
124 - 	old_regs = set_irq_regs(regs);
125 - 	ativic32_handle_irq(regs);
126 - 	set_irq_regs(old_regs);
127 - 	irq_exit();
128 - }
129 -
130 - int __init ativic32_init_irq(struct device_node *node, struct device_node *parent)
131 - {
132 - 	unsigned long int_vec_base, nivic, nr_ints;
133 -
134 - 	if (WARN(parent, "non-root ativic32 are not supported"))
135 - 		return -EINVAL;
136 -
137 - 	int_vec_base = __nds32__mfsr(NDS32_SR_IVB);
138 -
139 - 	if (((int_vec_base & IVB_mskIVIC_VER) >> IVB_offIVIC_VER) == 0)
140 - 		panic("Unable to use atcivic32 for this cpu.\n");
141 -
142 - 	nivic = (int_vec_base & IVB_mskNIVIC) >> IVB_offNIVIC;
143 - 	if (nivic >= ARRAY_SIZE(nivic_map))
144 - 		panic("The number of input for ativic32 is not supported.\n");
145 -
146 - 	nr_ints = nivic_map[nivic];
147 -
148 - 	root_domain = irq_domain_add_linear(node, nr_ints,
149 - 			&ativic32_ops, NULL);
150 -
151 - 	if (!root_domain)
152 - 		panic("%s: unable to create IRQ domain\n", node->full_name);
153 -
154 - 	return 0;
155 - }
156 - IRQCHIP_DECLARE(ativic32, "andestech,ativic32", ativic32_init_irq);
+5 -7
drivers/net/ethernet/faraday/Kconfig
···
6 6 config NET_VENDOR_FARADAY
7 7 	bool "Faraday devices"
8 8 	default y
9 - 	depends on ARM || NDS32 || COMPILE_TEST
9 + 	depends on ARM || COMPILE_TEST
10 10 	help
11 11 	  If you have a network (Ethernet) card belonging to this class, say Y.
12 12
···
19 19
20 20 config FTMAC100
21 21 	tristate "Faraday FTMAC100 10/100 Ethernet support"
22 - 	depends on ARM || NDS32 || COMPILE_TEST
22 + 	depends on ARM || COMPILE_TEST
23 23 	depends on !64BIT || BROKEN
24 24 	select MII
25 25 	help
26 26 	  This driver supports the FTMAC100 10/100 Ethernet controller
27 - 	  from Faraday. It is used on Faraday A320, Andes AG101 and some
28 - 	  other ARM/NDS32 SoC's.
27 + 	  from Faraday. It is used on Faraday A320 and some other ARM SoC's.
29 28
30 29 config FTGMAC100
31 30 	tristate "Faraday FTGMAC100 Gigabit Ethernet support"
32 - 	depends on ARM || NDS32 || COMPILE_TEST
31 + 	depends on ARM || COMPILE_TEST
33 32 	depends on !64BIT || BROKEN
34 33 	select PHYLIB
35 34 	select MDIO_ASPEED if MACH_ASPEED_G6
36 35 	select CRC32
37 36 	help
38 37 	  This driver supports the FTGMAC100 Gigabit Ethernet controller
39 - 	  from Faraday. It is used on Faraday A369, Andes AG102 and some
40 - 	  other ARM/NDS32 SoC's.
38 + 	  from Faraday. It is used on Faraday A369 and some other ARM SoC's.
41 39
42 40 endif # NET_VENDOR_FARADAY
+1 -1
drivers/video/console/Kconfig
···
9 9 	bool "VGA text console" if EXPERT || !X86
10 10 	depends on !4xx && !PPC_8xx && !SPARC && !M68K && !PARISC && !SUPERH && \
11 11 		(!ARM || ARCH_FOOTBRIDGE || ARCH_INTEGRATOR || ARCH_NETWINDER) && \
12 - 		!ARM64 && !ARC && !MICROBLAZE && !OPENRISC && !NDS32 && !S390 && !UML
12 + 		!ARM64 && !ARC && !MICROBLAZE && !OPENRISC && !S390 && !UML
13 13 	default y
14 14 	help
15 15 	  Saying Y here will allow you to use Linux in text mode through a
-3
scripts/recordmcount.pl
···
362 362 	$mcount_regex = "^\\s*([0-9a-fA-F]+):\\sR_RISCV_CALL(_PLT)?\\s_?mcount\$";
363 363 	$type = ".quad";
364 364 	$alignment = 2;
365 - } elsif ($arch eq "nds32") {
366 - 	$mcount_regex = "^\\s*([0-9a-fA-F]+):\\s*R_NDS32_HI20_RELA\\s+_mcount\$";
367 - 	$alignment = 2;
368 365 } elsif ($arch eq "csky") {
369 366 	$mcount_regex = "^\\s*([0-9a-fA-F]+):\\s*R_CKCORE_PCREL_JSR_IMM26BY2\\s+_mcount\$";
370 367 	$alignment = 2;
-2
tools/include/asm/barrier.h
···
24 24 #include "../../arch/ia64/include/asm/barrier.h"
25 25 #elif defined(__xtensa__)
26 26 #include "../../arch/xtensa/include/asm/barrier.h"
27 - #elif defined(__nds32__)
28 - #include "../../arch/nds32/include/asm/barrier.h"
29 27 #else
30 28 #include <asm-generic/barrier.h>
31 29 #endif
-1
tools/perf/arch/nds32/Build
··· 1 - perf-y += util/
-1
tools/perf/arch/nds32/util/Build
··· 1 - perf-y += header.o
-29
tools/perf/arch/nds32/util/header.c
···
1 - // SPDX-License-Identifier: GPL-2.0
2 - // Copyright (C) 2005-2017 Andes Technology Corporation
3 -
4 - #include <stdio.h>
5 - #include <stdlib.h>
6 - #include <api/fs/fs.h>
7 - #include "header.h"
8 -
9 - #define STR_LEN 1024
10 -
11 - char *get_cpuid_str(struct perf_pmu *pmu)
12 - {
13 - 	/* In nds32, we only have one cpu */
14 - 	char *buf = NULL;
15 - 	struct cpu_map *cpus;
16 - 	const char *sysfs = sysfs__mountpoint();
17 -
18 - 	if (!sysfs || !pmu || !pmu->cpus)
19 - 		return NULL;
20 -
21 - 	buf = malloc(STR_LEN);
22 - 	if (!buf)
23 - 		return NULL;
24 -
25 - 	cpus = cpu_map__get(pmu->cpus);
26 - 	sprintf(buf, "0x%x", cpus->nr - 1);
27 - 	cpu_map__put(cpus);
28 - 	return buf;
29 - }
-4
tools/testing/selftests/vDSO/vdso_config.h
···
53 53 #if __riscv_xlen == 32
54 54 #define VDSO_32BIT		1
55 55 #endif
56 - #else /* nds32 */
57 - #define VDSO_VERSION		4
58 - #define VDSO_NAMES		1
59 - #define VDSO_32BIT		1
60 56 #endif
61 57
62 58 static const char *versions[6] = {