Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'for-linus' of git://ftp.arm.linux.org.uk/~rmk/linux-arm

Pull ARM updates from Russell King:
"Bigger items included in this update are:

- A series of updates from Arnd for ARM randconfig build failures
- Updates from Dmitry for StrongARM SA-1100 to move IRQ handling to
drivers/irqchip/
- Move the ARM SP804 timer to drivers/clocksource/
- Perf updates from Mark Rutland in preparation for moving the ARM perf
  code into drivers/ so it can be shared with ARM64.
- MCPM updates from Nicolas
- Add support for taking the platform serial number from DT
- Re-implement Keystone2 physical address space switch to conform to
architecture requirements
- Clean up ARMv7 LPAE code, which goes in hand with the Keystone2
changes.
- L2C cleanups to avoid unlocking caches when the secure-side support
  prevents us from unlocking them.
- Avoid cleaning a potentially dirty cache containing stale data on
CPU initialisation
- Add an ARM-only entry point for secondary startup (for machines that
  can only call into a Thumb kernel in ARM mode). The same is done for
  the resume entry point.
- Provide arch_irqs_disabled via asm-generic
- Enlarge ARMv7M vector table
- Always use BFD linker for VDSO, as gold doesn't accept some of the
options we need.
- Fix an incorrect use of BSYM (for Thumb symbols), and convert all
  BSYM assembler macros to "badr" (for branch address).
- Shut up compiler warnings provoked by our cmpxchg() implementation.
- Ensure bad xchg sizes fail to link"

* 'for-linus' of git://ftp.arm.linux.org.uk/~rmk/linux-arm: (75 commits)
ARM: Fix build if CLKDEV_LOOKUP is not configured
ARM: fix new BSYM() usage introduced via for-arm-soc branch
ARM: 8383/1: nommu: avoid deprecated source register on mov
ARM: 8391/1: l2c: add options to overwrite prefetching behavior
ARM: 8390/1: irqflags: Get arch_irqs_disabled from asm-generic
ARM: 8387/1: arm/mm/dma-mapping.c: Add arm_coherent_dma_mmap
ARM: 8388/1: tcm: Don't crash when TCM banks are protected by TrustZone
ARM: 8384/1: VDSO: force use of BFD linker
ARM: 8385/1: VDSO: group link options
ARM: cmpxchg: avoid warnings from macro-ized cmpxchg() implementations
ARM: remove __bad_xchg definition
ARM: 8369/1: ARMv7M: define size of vector table for Vybrid
ARM: 8382/1: clocksource: make ARM_TIMER_SP804 depend on GENERIC_SCHED_CLOCK
ARM: 8366/1: move Dual-Timer SP804 driver to drivers/clocksource
ARM: 8365/1: introduce sp804_timer_disable and remove arm_timer.h inclusion
ARM: 8364/1: fix BE32 module loading
ARM: 8360/1: add secondary_startup_arm prototype in header file
ARM: 8359/1: correct secondary_startup_arm mode
ARM: proc-v7: sanitise and document registers around errata
ARM: proc-v7: clean up MIDR access
...

+1892 -1364
+5
Documentation/devicetree/bindings/arm/l2cc.txt
···
    disable if zero.
  - arm,prefetch-offset : Override prefetch offset value. Valid values are
    0-7, 15, 23, and 31.
+ - prefetch-data : Data prefetch. Value: <0> (forcibly disable), <1>
+   (forcibly enable), property absent (retain settings set by firmware)
+ - prefetch-instr : Instruction prefetch. Value: <0> (forcibly disable),
+   <1> (forcibly enable), property absent (retain settings set by
+   firmware)

  Example:

+4
Documentation/devicetree/booting-without-of.txt
···
    name may clash with standard defined ones, you prefix them with your
    vendor name and a comma.

+   Additional properties for the root node:
+
+   - serial-number : a string representing the device's serial number
+
  b) The /cpus node

  This node is the parent of all individual CPU nodes. It doesn't
+23 -11
arch/arm/Kconfig
···
    select HARDIRQS_SW_RESEND
    select HAVE_ARCH_AUDITSYSCALL if (AEABI && !OABI_COMPAT)
    select HAVE_ARCH_BITREVERSE if (CPU_32v7M || CPU_32v7) && !CPU_32v6
-   select HAVE_ARCH_JUMP_LABEL if !XIP_KERNEL
-   select HAVE_ARCH_KGDB
+   select HAVE_ARCH_JUMP_LABEL if !XIP_KERNEL && !CPU_ENDIAN_BE32
+   select HAVE_ARCH_KGDB if !CPU_ENDIAN_BE32
    select HAVE_ARCH_SECCOMP_FILTER if (AEABI && !OABI_COMPAT)
    select HAVE_ARCH_TRACEHOOK
    select HAVE_BPF_JIT
···
    select HAVE_DMA_API_DEBUG
    select HAVE_DMA_ATTRS
    select HAVE_DMA_CONTIGUOUS if MMU
-   select HAVE_DYNAMIC_FTRACE if (!XIP_KERNEL)
+   select HAVE_DYNAMIC_FTRACE if (!XIP_KERNEL) && !CPU_ENDIAN_BE32
    select HAVE_EFFICIENT_UNALIGNED_ACCESS if (CPU_V6 || CPU_V6K || CPU_V7) && MMU
    select HAVE_FTRACE_MCOUNT_RECORD if (!XIP_KERNEL)
    select HAVE_FUNCTION_GRAPH_TRACER if (!THUMB2_KERNEL)
···
    select HAVE_KERNEL_LZMA
    select HAVE_KERNEL_LZO
    select HAVE_KERNEL_XZ
-   select HAVE_KPROBES if !XIP_KERNEL
+   select HAVE_KPROBES if !XIP_KERNEL && !CPU_ENDIAN_BE32 && !CPU_V7M
    select HAVE_KRETPROBES if (HAVE_KPROBES)
    select HAVE_MEMBLOCK
-   select HAVE_MOD_ARCH_SPECIFIC if ARM_UNWIND
+   select HAVE_MOD_ARCH_SPECIFIC
    select HAVE_OPROFILE if (HAVE_PERF_EVENTS)
    select HAVE_OPTPROBES if !THUMB2_KERNEL
    select HAVE_PERF_EVENTS
···
  config TRACE_IRQFLAGS_SUPPORT
    bool
-   default y
+   default !CPU_V7M

  config RWSEM_XCHGADD_ALGORITHM
    bool
···
  config PLAT_VERSATILE
    bool

- config ARM_TIMER_SP804
-   bool
-   select CLKSRC_MMIO
-   select CLKSRC_OF if OF
-
  source "arch/arm/firmware/Kconfig"

  source arch/arm/mm/Kconfig
···
    depends on GENERIC_CLOCKEVENTS
    depends on HAVE_SMP
    depends on MMU || ARM_MPU
+   select IRQ_WORK
    help
      This enables support for systems with more than one CPU. If you have
      a system with only one CPU, say N. If you have a system with more
···
  config ARCH_WANT_GENERAL_HUGETLB
    def_bool y

+ config ARM_MODULE_PLTS
+   bool "Use PLTs to allow module memory to spill over into vmalloc area"
+   depends on MODULES
+   help
+     Allocate PLTs when loading modules so that jumps and calls whose
+     targets are too far away for their relative offsets to be encoded
+     in the instructions themselves can be bounced via veneers in the
+     module's PLT. This allows modules to be allocated in the generic
+     vmalloc area after the dedicated module memory area has been
+     exhausted. The modules will use slightly more memory, but after
+     rounding up to page size, the actual memory footprint is usually
+     the same.
+
+     Say y if you are getting out of memory errors while loading modules
+
  source "mm/Kconfig"

  config FORCE_MAX_ZONEORDER
···
  config KEXEC
    bool "Kexec system call (EXPERIMENTAL)"
    depends on (!SMP || PM_SLEEP_SMP)
+   depends on !CPU_V7M
    help
      kexec is a system call that implements the ability to shutdown your
      current kernel, and to start another kernel. It is like a reboot
+1
arch/arm/Kconfig.debug
···
  config ARM_PTDUMP
    bool "Export kernel pagetable layout to userspace via debugfs"
    depends on DEBUG_KERNEL
+   depends on MMU
    select DEBUG_FS
    ---help---
      Say Y here if you want to show the kernel pagetable layout in a
+4
arch/arm/Makefile
···
  LDFLAGS_MODULE  += --be8
  endif

+ ifeq ($(CONFIG_ARM_MODULE_PLTS),y)
+ LDFLAGS_MODULE  += -T $(srctree)/arch/arm/kernel/module.lds
+ endif
+
  OBJCOPYFLAGS    :=-O binary -R .comment -S
  GZFLAGS         :=-9
  #KBUILD_CFLAGS  +=-pipe
+2
arch/arm/boot/compressed/Makefile
···
      lib1funcs.S ashldi3.S bswapsdi2.S $(libfdt) $(libfdt_hdrs) \
      hyp-stub.S

+ KBUILD_CFLAGS += -DDISABLE_BRANCH_PROFILING
+
  ifeq ($(CONFIG_FUNCTION_TRACER),y)
  ORIG_CFLAGS := $(KBUILD_CFLAGS)
  KBUILD_CFLAGS = $(subst -pg, , $(ORIG_CFLAGS))
+2 -2
arch/arm/boot/compressed/head.S
···
    .endr
  ARM(    mov r0, r0 )
  ARM(    b 1f )
- THUMB(  adr r12, BSYM(1f) )
+ THUMB(  badr r12, 1f )
  THUMB(  bx r12 )

    .word _magic_sig  @ Magic numbers to help the loader
···

    bl cache_clean_flush

-   adr r0, BSYM(restart)
+   badr r0, restart
    add r0, r0, r6
    mov pc, r0

-1
arch/arm/common/Makefile
···
  obj-$(CONFIG_SHARP_PARAM)       += sharpsl_param.o
  obj-$(CONFIG_SHARP_SCOOP)       += scoop.o
  obj-$(CONFIG_PCI_HOST_ITE8152)  += it8152.o
- obj-$(CONFIG_ARM_TIMER_SP804)   += timer-sp.o
  obj-$(CONFIG_MCPM)              += mcpm_head.o mcpm_entry.o mcpm_platsmp.o vlock.o
  CFLAGS_REMOVE_mcpm_entry.o      = -pg
  AFLAGS_mcpm_head.o              := -march=armv7-a
+127 -156
arch/arm/common/mcpm_entry.c
···
  #include <asm/cputype.h>
  #include <asm/suspend.h>

+ /*
+  * The public API for this code is documented in arch/arm/include/asm/mcpm.h.
+  * For a comprehensive description of the main algorithm used here, please
+  * see Documentation/arm/cluster-pm-race-avoidance.txt.
+  */
+
+ struct sync_struct mcpm_sync;
+
+ /*
+  * __mcpm_cpu_going_down: Indicates that the cpu is being torn down.
+  *    This must be called at the point of committing to teardown of a CPU.
+  *    The CPU cache (SCTRL.C bit) is expected to still be active.
+  */
+ static void __mcpm_cpu_going_down(unsigned int cpu, unsigned int cluster)
+ {
+     mcpm_sync.clusters[cluster].cpus[cpu].cpu = CPU_GOING_DOWN;
+     sync_cache_w(&mcpm_sync.clusters[cluster].cpus[cpu].cpu);
+ }
+
+ /*
+  * __mcpm_cpu_down: Indicates that cpu teardown is complete and that the
+  *    cluster can be torn down without disrupting this CPU.
+  *    To avoid deadlocks, this must be called before a CPU is powered down.
+  *    The CPU cache (SCTRL.C bit) is expected to be off.
+  *    However L2 cache might or might not be active.
+  */
+ static void __mcpm_cpu_down(unsigned int cpu, unsigned int cluster)
+ {
+     dmb();
+     mcpm_sync.clusters[cluster].cpus[cpu].cpu = CPU_DOWN;
+     sync_cache_w(&mcpm_sync.clusters[cluster].cpus[cpu].cpu);
+     sev();
+ }
+
+ /*
+  * __mcpm_outbound_leave_critical: Leave the cluster teardown critical section.
+  * @state: the final state of the cluster:
+  *     CLUSTER_UP: no destructive teardown was done and the cluster has been
+  *         restored to the previous state (CPU cache still active); or
+  *     CLUSTER_DOWN: the cluster has been torn-down, ready for power-off
+  *         (CPU cache disabled, L2 cache either enabled or disabled).
+  */
+ static void __mcpm_outbound_leave_critical(unsigned int cluster, int state)
+ {
+     dmb();
+     mcpm_sync.clusters[cluster].cluster = state;
+     sync_cache_w(&mcpm_sync.clusters[cluster].cluster);
+     sev();
+ }
+
+ /*
+  * __mcpm_outbound_enter_critical: Enter the cluster teardown critical section.
+  * This function should be called by the last man, after local CPU teardown
+  * is complete.  CPU cache expected to be active.
+  *
+  * Returns:
+  *     false: the critical section was not entered because an inbound CPU was
+  *         observed, or the cluster is already being set up;
+  *     true: the critical section was entered: it is now safe to tear down the
+  *         cluster.
+  */
+ static bool __mcpm_outbound_enter_critical(unsigned int cpu, unsigned int cluster)
+ {
+     unsigned int i;
+     struct mcpm_sync_struct *c = &mcpm_sync.clusters[cluster];
+
+     /* Warn inbound CPUs that the cluster is being torn down: */
+     c->cluster = CLUSTER_GOING_DOWN;
+     sync_cache_w(&c->cluster);
+
+     /* Back out if the inbound cluster is already in the critical region: */
+     sync_cache_r(&c->inbound);
+     if (c->inbound == INBOUND_COMING_UP)
+         goto abort;
+
+     /*
+      * Wait for all CPUs to get out of the GOING_DOWN state, so that local
+      * teardown is complete on each CPU before tearing down the cluster.
+      *
+      * If any CPU has been woken up again from the DOWN state, then we
+      * shouldn't be taking the cluster down at all: abort in that case.
+      */
+     sync_cache_r(&c->cpus);
+     for (i = 0; i < MAX_CPUS_PER_CLUSTER; i++) {
+         int cpustate;
+
+         if (i == cpu)
+             continue;
+
+         while (1) {
+             cpustate = c->cpus[i].cpu;
+             if (cpustate != CPU_GOING_DOWN)
+                 break;
+
+             wfe();
+             sync_cache_r(&c->cpus[i].cpu);
+         }
+
+         switch (cpustate) {
+         case CPU_DOWN:
+             continue;
+
+         default:
+             goto abort;
+         }
+     }
+
+     return true;
+
+ abort:
+     __mcpm_outbound_leave_critical(cluster, CLUSTER_UP);
+     return false;
+ }
+
+ static int __mcpm_cluster_state(unsigned int cluster)
+ {
+     sync_cache_r(&mcpm_sync.clusters[cluster].cluster);
+     return mcpm_sync.clusters[cluster].cluster;
+ }
+
  extern unsigned long mcpm_entry_vectors[MAX_NR_CLUSTERS][MAX_CPUS_PER_CLUSTER];

  void mcpm_set_entry_vector(unsigned cpu, unsigned cluster, void *ptr)
···
    bool cpu_is_down, cluster_is_down;
    int ret = 0;

+   pr_debug("%s: cpu %u cluster %u\n", __func__, cpu, cluster);
    if (!platform_ops)
        return -EUNATCH; /* try not to shadow power_up errors */
    might_sleep();
-
-   /* backward compatibility callback */
-   if (platform_ops->power_up)
-       return platform_ops->power_up(cpu, cluster);
-
-   pr_debug("%s: cpu %u cluster %u\n", __func__, cpu, cluster);

    /*
     * Since this is called with IRQs enabled, and no arch_spin_lock_irq
···
    bool cpu_going_down, last_man;
    phys_reset_t phys_reset;

-   if (WARN_ON_ONCE(!platform_ops))
-       return;
-   BUG_ON(!irqs_disabled());
-
-   /*
-    * Do this before calling into the power_down method,
-    * as it might not always be safe to do afterwards.
-    */
-   setup_mm_for_reboot();
-
-   /* backward compatibility callback */
-   if (platform_ops->power_down) {
-       platform_ops->power_down();
-       goto not_dead;
-   }
-
    mpidr = read_cpuid_mpidr();
    cpu = MPIDR_AFFINITY_LEVEL(mpidr, 0);
    cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
    pr_debug("%s: cpu %u cluster %u\n", __func__, cpu, cluster);
+   if (WARN_ON_ONCE(!platform_ops))
+       return;
+   BUG_ON(!irqs_disabled());
+
+   setup_mm_for_reboot();

    __mcpm_cpu_going_down(cpu, cluster);
-
    arch_spin_lock(&mcpm_lock);
    BUG_ON(__mcpm_cluster_state(cluster) != CLUSTER_UP);
···
    if (cpu_going_down)
        wfi();

- not_dead:
    /*
     * It is possible for a power_up request to happen concurrently
     * with a power_down request for the same CPU. In this case the
···
    return ret;
  }

- void mcpm_cpu_suspend(u64 expected_residency)
+ void mcpm_cpu_suspend(void)
  {
    if (WARN_ON_ONCE(!platform_ops))
        return;
-
-   /* backward compatibility callback */
-   if (platform_ops->suspend) {
-       phys_reset_t phys_reset;
-       BUG_ON(!irqs_disabled());
-       setup_mm_for_reboot();
-       platform_ops->suspend(expected_residency);
-       phys_reset = (phys_reset_t)(unsigned long)virt_to_phys(cpu_reset);
-       phys_reset(virt_to_phys(mcpm_entry_point));
-       BUG();
-   }

    /* Some platforms might have to enable special resume modes, etc. */
    if (platform_ops->cpu_suspend_prepare) {
···
    if (!platform_ops)
        return -EUNATCH;
-
-   /* backward compatibility callback */
-   if (platform_ops->powered_up) {
-       platform_ops->powered_up();
-       return 0;
-   }

    mpidr = read_cpuid_mpidr();
    cpu = MPIDR_AFFINITY_LEVEL(mpidr, 0);
···
  }

  #endif
-
- struct sync_struct mcpm_sync;
-
- /*
-  * __mcpm_cpu_going_down: Indicates that the cpu is being torn down.
-  *    This must be called at the point of committing to teardown of a CPU.
-  *    The CPU cache (SCTRL.C bit) is expected to still be active.
-  */
- void __mcpm_cpu_going_down(unsigned int cpu, unsigned int cluster)
- {
-     mcpm_sync.clusters[cluster].cpus[cpu].cpu = CPU_GOING_DOWN;
-     sync_cache_w(&mcpm_sync.clusters[cluster].cpus[cpu].cpu);
- }
-
- /*
-  * __mcpm_cpu_down: Indicates that cpu teardown is complete and that the
-  *    cluster can be torn down without disrupting this CPU.
-  *    To avoid deadlocks, this must be called before a CPU is powered down.
-  *    The CPU cache (SCTRL.C bit) is expected to be off.
-  *    However L2 cache might or might not be active.
-  */
- void __mcpm_cpu_down(unsigned int cpu, unsigned int cluster)
- {
-     dmb();
-     mcpm_sync.clusters[cluster].cpus[cpu].cpu = CPU_DOWN;
-     sync_cache_w(&mcpm_sync.clusters[cluster].cpus[cpu].cpu);
-     sev();
- }
-
- /*
-  * __mcpm_outbound_leave_critical: Leave the cluster teardown critical section.
-  * @state: the final state of the cluster:
-  *     CLUSTER_UP: no destructive teardown was done and the cluster has been
-  *         restored to the previous state (CPU cache still active); or
-  *     CLUSTER_DOWN: the cluster has been torn-down, ready for power-off
-  *         (CPU cache disabled, L2 cache either enabled or disabled).
-  */
- void __mcpm_outbound_leave_critical(unsigned int cluster, int state)
- {
-     dmb();
-     mcpm_sync.clusters[cluster].cluster = state;
-     sync_cache_w(&mcpm_sync.clusters[cluster].cluster);
-     sev();
- }
-
- /*
-  * __mcpm_outbound_enter_critical: Enter the cluster teardown critical section.
-  * This function should be called by the last man, after local CPU teardown
-  * is complete.  CPU cache expected to be active.
-  *
-  * Returns:
-  *     false: the critical section was not entered because an inbound CPU was
-  *         observed, or the cluster is already being set up;
-  *     true: the critical section was entered: it is now safe to tear down the
-  *         cluster.
-  */
- bool __mcpm_outbound_enter_critical(unsigned int cpu, unsigned int cluster)
- {
-     unsigned int i;
-     struct mcpm_sync_struct *c = &mcpm_sync.clusters[cluster];
-
-     /* Warn inbound CPUs that the cluster is being torn down: */
-     c->cluster = CLUSTER_GOING_DOWN;
-     sync_cache_w(&c->cluster);
-
-     /* Back out if the inbound cluster is already in the critical region: */
-     sync_cache_r(&c->inbound);
-     if (c->inbound == INBOUND_COMING_UP)
-         goto abort;
-
-     /*
-      * Wait for all CPUs to get out of the GOING_DOWN state, so that local
-      * teardown is complete on each CPU before tearing down the cluster.
-      *
-      * If any CPU has been woken up again from the DOWN state, then we
-      * shouldn't be taking the cluster down at all: abort in that case.
-      */
-     sync_cache_r(&c->cpus);
-     for (i = 0; i < MAX_CPUS_PER_CLUSTER; i++) {
-         int cpustate;
-
-         if (i == cpu)
-             continue;
-
-         while (1) {
-             cpustate = c->cpus[i].cpu;
-             if (cpustate != CPU_GOING_DOWN)
-                 break;
-
-             wfe();
-             sync_cache_r(&c->cpus[i].cpu);
-         }
-
-         switch (cpustate) {
-         case CPU_DOWN:
-             continue;
-
-         default:
-             goto abort;
-         }
-     }
-
-     return true;
-
- abort:
-     __mcpm_outbound_leave_critical(cluster, CLUSTER_UP);
-     return false;
- }
-
- int __mcpm_cluster_state(unsigned int cluster)
- {
-     sync_cache_r(&mcpm_sync.clusters[cluster].cluster);
-     return mcpm_sync.clusters[cluster].cluster;
- }

  extern unsigned long mcpm_power_up_setup_phys;

+1 -1
arch/arm/common/mcpm_head.S
···
  ENTRY(mcpm_entry_point)

  ARM_BE8(setend be)
- THUMB(  adr r12, BSYM(1f) )
+ THUMB(  badr r12, 1f )
  THUMB(  bx r12 )
  THUMB(  .thumb )
  1:
+9 -3
arch/arm/common/timer-sp.c drivers/clocksource/timer-sp804.c
···
  /*
-  *  linux/arch/arm/common/timer-sp.c
+  *  linux/drivers/clocksource/timer-sp.c
   *
   *  Copyright (C) 1999 - 2003 ARM Limited
   *  Copyright (C) 2000 Deep Blue Solutions Ltd
···
  #include <linux/of_irq.h>
  #include <linux/sched_clock.h>

- #include <asm/hardware/arm_timer.h>
- #include <asm/hardware/timer-sp.h>
+ #include <clocksource/timer-sp804.h>
+
+ #include "timer-sp.h"

  static long __init sp804_get_clock_rate(struct clk *clk)
  {
···
  static u64 notrace sp804_read(void)
  {
    return ~readl_relaxed(sched_clock_base + TIMER_VALUE);
+ }
+
+ void __init sp804_timer_disable(void __iomem *base)
+ {
+   writel(0, base + TIMER_CTRL);
  }

  void __init __sp804_clocksource_and_sched_clock_init(void __iomem *base,
+16 -1
arch/arm/include/asm/assembler.h
···
    .endm

  /*
+  * Assembly version of "adr rd, BSYM(sym)".  This should only be used to
+  * reference local symbols in the same assembly file which are to be
+  * resolved by the assembler.  Other usage is undefined.
+  */
+   .irp    c,,eq,ne,cs,cc,mi,pl,vs,vc,hi,ls,ge,lt,gt,le,hs,lo
+   .macro  badr\c, rd, sym
+ #ifdef CONFIG_THUMB2_KERNEL
+   adr\c   \rd, \sym + 1
+ #else
+   adr\c   \rd, \sym
+ #endif
+   .endm
+   .endr
+
+ /*
   * Get current thread_info.
   */
    .macro  get_thread_info, rd
···
  THUMB(  orr \reg , \reg , #PSR_T_BIT )
    bne 1f
    orr \reg, \reg, #PSR_A_BIT
-   adr lr, BSYM(2f)
+   badr    lr, 2f
    msr spsr_cxsf, \reg
    __MSR_ELR_HYP(14)
    __ERET
+7
arch/arm/include/asm/cacheflush.h
···
    : : : "r0","r1","r2","r3","r4","r5","r6","r7", \
        "r9","r10","lr","memory" )

+ #ifdef CONFIG_MMU
  int set_memory_ro(unsigned long addr, int numpages);
  int set_memory_rw(unsigned long addr, int numpages);
  int set_memory_x(unsigned long addr, int numpages);
  int set_memory_nx(unsigned long addr, int numpages);
+ #else
+ static inline int set_memory_ro(unsigned long addr, int numpages) { return 0; }
+ static inline int set_memory_rw(unsigned long addr, int numpages) { return 0; }
+ static inline int set_memory_x(unsigned long addr, int numpages) { return 0; }
+ static inline int set_memory_nx(unsigned long addr, int numpages) { return 0; }
+ #endif

  #ifdef CONFIG_DEBUG_RODATA
  void mark_rodata_ro(void);
+38 -29
arch/arm/include/asm/cmpxchg.h
···
        break;
  #endif
    default:
+       /* Cause a link-time error, the xchg() size is not supported */
        __bad_xchg(ptr, size), ret = 0;
        break;
    }
···
    return ret;
  }

- #define xchg(ptr,x) \
-   ((__typeof__(*(ptr)))__xchg((unsigned long)(x),(ptr),sizeof(*(ptr))))
+ #define xchg(ptr, x) ({ \
+   (__typeof__(*(ptr)))__xchg((unsigned long)(x), (ptr), \
+                              sizeof(*(ptr))); \
+ })

  #include <asm-generic/cmpxchg-local.h>

···
   * cmpxchg_local and cmpxchg64_local are atomic wrt current CPU. Always make
   * them available.
   */
- #define cmpxchg_local(ptr, o, n) \
-   ((__typeof__(*(ptr)))__cmpxchg_local_generic((ptr), (unsigned long)(o),\
-           (unsigned long)(n), sizeof(*(ptr))))
+ #define cmpxchg_local(ptr, o, n) ({ \
+   (__typeof(*ptr))__cmpxchg_local_generic((ptr), \
+                                           (unsigned long)(o), \
+                                           (unsigned long)(n), \
+                                           sizeof(*(ptr))); \
+ })
+
  #define cmpxchg64_local(ptr, o, n) __cmpxchg64_local_generic((ptr), (o), (n))

- #ifndef CONFIG_SMP
  #include <asm-generic/cmpxchg.h>
- #endif

  #else /* min ARCH >= ARMv6 */

···
    return ret;
  }

- #define cmpxchg(ptr,o,n) \
-   ((__typeof__(*(ptr)))__cmpxchg_mb((ptr), \
-                                     (unsigned long)(o), \
-                                     (unsigned long)(n), \
-                                     sizeof(*(ptr))))
+ #define cmpxchg(ptr,o,n) ({ \
+   (__typeof__(*(ptr)))__cmpxchg_mb((ptr), \
+                                    (unsigned long)(o), \
+                                    (unsigned long)(n), \
+                                    sizeof(*(ptr))); \
+ })

  static inline unsigned long __cmpxchg_local(volatile void *ptr,
                                              unsigned long old,
···
    return ret;
  }
+
+ #define cmpxchg_local(ptr, o, n) ({ \
+   (__typeof(*ptr))__cmpxchg_local((ptr), \
+                                   (unsigned long)(o), \
+                                   (unsigned long)(n), \
+                                   sizeof(*(ptr))); \
+ })

  static inline unsigned long long __cmpxchg64(unsigned long long *ptr,
                                               unsigned long long old,
···
    return oldval;
  }

+ #define cmpxchg64_relaxed(ptr, o, n) ({ \
+   (__typeof__(*(ptr)))__cmpxchg64((ptr), \
+                                   (unsigned long long)(o), \
+                                   (unsigned long long)(n)); \
+ })
+
+ #define cmpxchg64_local(ptr, o, n) cmpxchg64_relaxed((ptr), (o), (n))
+
  static inline unsigned long long __cmpxchg64_mb(unsigned long long *ptr,
                                                  unsigned long long old,
                                                  unsigned long long new)
···
    return ret;
  }

- #define cmpxchg_local(ptr,o,n) \
-   ((__typeof__(*(ptr)))__cmpxchg_local((ptr), \
-                                  (unsigned long)(o), \
-                                  (unsigned long)(n), \
-                                  sizeof(*(ptr))))
-
- #define cmpxchg64(ptr, o, n) \
-   ((__typeof__(*(ptr)))__cmpxchg64_mb((ptr), \
-                                   (unsigned long long)(o), \
-                                   (unsigned long long)(n)))
-
- #define cmpxchg64_relaxed(ptr, o, n) \
-   ((__typeof__(*(ptr)))__cmpxchg64((ptr), \
-                                   (unsigned long long)(o), \
-                                   (unsigned long long)(n)))
-
- #define cmpxchg64_local(ptr, o, n) cmpxchg64_relaxed((ptr), (o), (n))
+ #define cmpxchg64(ptr, o, n) ({ \
+   (__typeof__(*(ptr)))__cmpxchg64_mb((ptr), \
+                                      (unsigned long long)(o), \
+                                      (unsigned long long)(n)); \
+ })

  #endif /* __LINUX_ARM_ARCH__ >= 6 */

+2 -2
arch/arm/include/asm/entry-macro-multi.S
···
    @
    @ routine called with r0 = irq number, r1 = struct pt_regs *
    @
-   adrne   lr, BSYM(1b)
+   badrne  lr, 1b
    bne asm_do_IRQ

  #ifdef CONFIG_SMP
···
    ALT_SMP(test_for_ipi r0, r2, r6, lr)
    ALT_UP_B(9997f)
    movne   r1, sp
-   adrne   lr, BSYM(1b)
+   badrne  lr, 1b
    bne do_IPI
  #endif
  9997:
-5
arch/arm/include/asm/hardware/arm_timer.h drivers/clocksource/timer-sp.h
···
- #ifndef __ASM_ARM_HARDWARE_ARM_TIMER_H
- #define __ASM_ARM_HARDWARE_ARM_TIMER_H
-
  /*
   * ARM timer implementation, found in Integrator, Versatile and Realview
   * platforms.  Not all platforms support all registers and bits in these
···
  #define TIMER_RIS     0x10  /*  CVR ro */
  #define TIMER_MIS     0x14  /*  CVR ro */
  #define TIMER_BGLOAD  0x18  /*  CVR rw */
-
- #endif
+5
arch/arm/include/asm/hardware/timer-sp.h include/clocksource/timer-sp804.h
···
+ #ifndef __CLKSOURCE_TIMER_SP804_H
+ #define __CLKSOURCE_TIMER_SP804_H
+
  struct clk;

  void __sp804_clocksource_and_sched_clock_init(void __iomem *,
                                                const char *, struct clk *, int);
  void __sp804_clockevents_init(void __iomem *, unsigned int,
                                struct clk *, const char *);
+ void sp804_timer_disable(void __iomem *);

  static inline void sp804_clocksource_init(void __iomem *base, const char *name)
  {
···
    __sp804_clockevents_init(base, irq, NULL, name);

  }
+ #endif
+37 -15
arch/arm/include/asm/io.h
···

  #ifdef __KERNEL__

+ #include <linux/string.h>
  #include <linux/types.h>
  #include <linux/blk_types.h>
  #include <asm/byteorder.h>
···
  static inline void __raw_writew(u16 val, volatile void __iomem *addr)
  {
    asm volatile("strh %1, %0"
-                : "+Q" (*(volatile u16 __force *)addr)
-                : "r" (val));
+                : : "Q" (*(volatile u16 __force *)addr), "r" (val));
  }

  #define __raw_readw __raw_readw
  static inline u16 __raw_readw(const volatile void __iomem *addr)
  {
    u16 val;
-   asm volatile("ldrh %1, %0"
-                : "+Q" (*(volatile u16 __force *)addr),
-                  "=r" (val));
+   asm volatile("ldrh %0, %1"
+                : "=r" (val)
+                : "Q" (*(volatile u16 __force *)addr));
    return val;
  }
  #endif
···
  static inline void __raw_writeb(u8 val, volatile void __iomem *addr)
  {
    asm volatile("strb %1, %0"
-                : "+Qo" (*(volatile u8 __force *)addr)
-                : "r" (val));
+                : : "Qo" (*(volatile u8 __force *)addr), "r" (val));
  }

  #define __raw_writel __raw_writel
  static inline void __raw_writel(u32 val, volatile void __iomem *addr)
  {
    asm volatile("str %1, %0"
-                : "+Qo" (*(volatile u32 __force *)addr)
-                : "r" (val));
+                : : "Qo" (*(volatile u32 __force *)addr), "r" (val));
  }

  #define __raw_readb __raw_readb
  static inline u8 __raw_readb(const volatile void __iomem *addr)
  {
    u8 val;
-   asm volatile("ldrb %1, %0"
-                : "+Qo" (*(volatile u8 __force *)addr),
-                  "=r" (val));
+   asm volatile("ldrb %0, %1"
+                : "=r" (val)
+                : "Qo" (*(volatile u8 __force *)addr));
    return val;
  }
···
  static inline u32 __raw_readl(const volatile void __iomem *addr)
  {
    u32 val;
-   asm volatile("ldr %1, %0"
-                : "+Qo" (*(volatile u32 __force *)addr),
-                  "=r" (val));
+   asm volatile("ldr %0, %1"
+                : "=r" (val)
+                : "Qo" (*(volatile u32 __force *)addr));
    return val;
  }
···
  #define writesw(p,d,l)    __raw_writesw(p,d,l)
  #define writesl(p,d,l)    __raw_writesl(p,d,l)

+ #ifndef __ARMBE__
+ static inline void memset_io(volatile void __iomem *dst, unsigned c,
+   size_t count)
+ {
+   memset((void __force *)dst, c, count);
+ }
+ #define memset_io(dst,c,count) memset_io(dst,c,count)
+
+ static inline void memcpy_fromio(void *to, const volatile void __iomem *from,
+   size_t count)
+ {
+   memcpy(to, (const void __force *)from, count);
+ }
+ #define memcpy_fromio(to,from,count) memcpy_fromio(to,from,count)
+
+ static inline void memcpy_toio(volatile void __iomem *to, const void *from,
+   size_t count)
+ {
+   memcpy((void __force *)to, from, count);
+ }
+ #define memcpy_toio(to,from,count) memcpy_toio(to,from,count)
+
+ #else
  #define memset_io(c,v,l)      _memset_io(c,(v),(l))
  #define memcpy_fromio(a,c,l)  _memcpy_fromio((a),c,(l))
  #define memcpy_toio(c,a,l)    _memcpy_toio(c,(a),(l))
+ #endif

  #endif  /* readl */

+11
arch/arm/include/asm/irqflags.h
···

  #if __LINUX_ARM_ARCH__ >= 6

+ #define arch_local_irq_save arch_local_irq_save
  static inline unsigned long arch_local_irq_save(void)
  {
    unsigned long flags;
···
    return flags;
  }

+ #define arch_local_irq_enable arch_local_irq_enable
  static inline void arch_local_irq_enable(void)
  {
    asm volatile(
···
    : "memory", "cc");
  }

+ #define arch_local_irq_disable arch_local_irq_disable
  static inline void arch_local_irq_disable(void)
  {
    asm volatile(
···
  /*
   * Save the current interrupt enable state & disable IRQs
   */
+ #define arch_local_irq_save arch_local_irq_save
  static inline unsigned long arch_local_irq_save(void)
  {
    unsigned long flags, temp;
···
  /*
   * Enable IRQs
   */
+ #define arch_local_irq_enable arch_local_irq_enable
  static inline void arch_local_irq_enable(void)
  {
    unsigned long temp;
···
  /*
   * Disable IRQs
   */
+ #define arch_local_irq_disable arch_local_irq_disable
  static inline void arch_local_irq_disable(void)
  {
    unsigned long temp;
···
  /*
   * Save the current interrupt enable state.
   */
+ #define arch_local_save_flags arch_local_save_flags
  static inline unsigned long arch_local_save_flags(void)
  {
    unsigned long flags;
···
  /*
   * restore saved IRQ & FIQ state
   */
+ #define arch_local_irq_restore arch_local_irq_restore
  static inline void arch_local_irq_restore(unsigned long flags)
  {
    asm volatile(
···
    : "memory", "cc");
  }

+ #define arch_irqs_disabled_flags arch_irqs_disabled_flags
  static inline int arch_irqs_disabled_flags(unsigned long flags)
  {
    return flags & IRQMASK_I_BIT;
  }
+
+ #include <asm-generic/irqflags.h>

  #endif /* ifdef __KERNEL__ */
  #endif /* ifndef __ASM_ARM_IRQFLAGS_H */
+1 -1
arch/arm/include/asm/mach/arch.h
··· 51 51 bool (*smp_init)(void); 52 52 void (*fixup)(struct tag *, char **); 53 53 void (*dt_fixup)(void); 54 - void (*init_meminfo)(void); 54 + long long (*pv_fixup)(void); 55 55 void (*reserve)(void); /* reserve mem blocks */ 56 56 void (*map_io)(void); /* IO mapping function */ 57 57 void (*init_early)(void);
+28 -45
arch/arm/include/asm/mcpm.h
··· 137 137 /** 138 138 * mcpm_cpu_suspend - bring the calling CPU in a suspended state 139 139 * 140 - * @expected_residency: duration in microseconds the CPU is expected 141 - * to remain suspended, or 0 if unknown/infinity. 142 - * 143 - * The calling CPU is suspended. The expected residency argument is used 144 - * as a hint by the platform specific backend to implement the appropriate 145 - * sleep state level according to the knowledge it has on wake-up latency 146 - * for the given hardware. 140 + * The calling CPU is suspended. This is similar to mcpm_cpu_power_down() 141 + * except for possible extra platform specific configuration steps to allow 142 + * an asynchronous wake-up e.g. with a pending interrupt. 147 143 * 148 144 * If this CPU is found to be the "last man standing" in the cluster 149 - * then the cluster may be prepared for power-down too, if the expected 150 - * residency makes it worthwhile. 145 + * then the cluster may be prepared for power-down too. 151 146 * 152 147 * This must be called with interrupts disabled. 153 148 * ··· 152 157 * This will return if mcpm_platform_register() has not been called 153 158 * previously in which case the caller should take appropriate action. 154 159 */ 155 - void mcpm_cpu_suspend(u64 expected_residency); 160 + void mcpm_cpu_suspend(void); 156 161 157 162 /** 158 163 * mcpm_cpu_powered_up - housekeeping work after a CPU has been powered up ··· 229 234 void (*cpu_is_up)(unsigned int cpu, unsigned int cluster); 230 235 void (*cluster_is_up)(unsigned int cluster); 231 236 int (*wait_for_powerdown)(unsigned int cpu, unsigned int cluster); 232 - 233 - /* deprecated callbacks */ 234 - int (*power_up)(unsigned int cpu, unsigned int cluster); 235 - void (*power_down)(void); 236 - void (*suspend)(u64); 237 - void (*powered_up)(void); 238 237 }; 239 238 240 239 /** ··· 239 250 * An error is returned if the registration has been done previously. 
240 251 */ 241 252 int __init mcpm_platform_register(const struct mcpm_platform_ops *ops); 242 - 243 - /* Synchronisation structures for coordinating safe cluster setup/teardown: */ 244 - 245 - /* 246 - * When modifying this structure, make sure you update the MCPM_SYNC_ defines 247 - * to match. 248 - */ 249 - struct mcpm_sync_struct { 250 - /* individual CPU states */ 251 - struct { 252 - s8 cpu __aligned(__CACHE_WRITEBACK_GRANULE); 253 - } cpus[MAX_CPUS_PER_CLUSTER]; 254 - 255 - /* cluster state */ 256 - s8 cluster __aligned(__CACHE_WRITEBACK_GRANULE); 257 - 258 - /* inbound-side state */ 259 - s8 inbound __aligned(__CACHE_WRITEBACK_GRANULE); 260 - }; 261 - 262 - struct sync_struct { 263 - struct mcpm_sync_struct clusters[MAX_NR_CLUSTERS]; 264 - }; 265 - 266 - void __mcpm_cpu_going_down(unsigned int cpu, unsigned int cluster); 267 - void __mcpm_cpu_down(unsigned int cpu, unsigned int cluster); 268 - void __mcpm_outbound_leave_critical(unsigned int cluster, int state); 269 - bool __mcpm_outbound_enter_critical(unsigned int this_cpu, unsigned int cluster); 270 - int __mcpm_cluster_state(unsigned int cluster); 271 253 272 254 /** 273 255 * mcpm_sync_init - Initialize the cluster synchronization support ··· 277 317 int __init mcpm_loopback(void (*cache_disable)(void)); 278 318 279 319 void __init mcpm_smp_set_ops(void); 320 + 321 + /* 322 + * Synchronisation structures for coordinating safe cluster setup/teardown. 323 + * This is private to the MCPM core code and shared between C and assembly. 324 + * When modifying this structure, make sure you update the MCPM_SYNC_ defines 325 + * to match. 
326 + */ 327 + struct mcpm_sync_struct { 328 + /* individual CPU states */ 329 + struct { 330 + s8 cpu __aligned(__CACHE_WRITEBACK_GRANULE); 331 + } cpus[MAX_CPUS_PER_CLUSTER]; 332 + 333 + /* cluster state */ 334 + s8 cluster __aligned(__CACHE_WRITEBACK_GRANULE); 335 + 336 + /* inbound-side state */ 337 + s8 inbound __aligned(__CACHE_WRITEBACK_GRANULE); 338 + }; 339 + 340 + struct sync_struct { 341 + struct mcpm_sync_struct clusters[MAX_NR_CLUSTERS]; 342 + }; 280 343 281 344 #else 282 345
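The `mcpm_sync_struct` moved above pads every CPU's state byte out to its own cache-writeback granule, so each can be cleaned or invalidated independently without clobbering a neighbour's state. A compilable sketch of that layout, assuming a hypothetical 64-byte granule and 4 CPUs per cluster:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical granule; the kernel derives __CACHE_WRITEBACK_GRANULE
 * from the cache configuration, 64 is only an assumption here. */
#define CACHE_WRITEBACK_GRANULE 64
#define MAX_CPUS 4

struct sync_sketch {
	/* one state byte per CPU, each in its own writeback granule */
	struct {
		int8_t cpu __attribute__((aligned(CACHE_WRITEBACK_GRANULE)));
	} cpus[MAX_CPUS];

	/* cluster state, again isolated in its own granule */
	int8_t cluster __attribute__((aligned(CACHE_WRITEBACK_GRANULE)));

	/* inbound-side state */
	int8_t inbound __attribute__((aligned(CACHE_WRITEBACK_GRANULE)));
};
```

The alignment attribute forces each member into a distinct granule, which is what makes per-CPU cache maintenance on one field safe while another CPU updates its own.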
-16
arch/arm/include/asm/memory.h
··· 18 18 #include <linux/types.h> 19 19 #include <linux/sizes.h> 20 20 21 - #include <asm/cache.h> 22 - 23 21 #ifdef CONFIG_NEED_MACH_MEMORY_H 24 22 #include <mach/memory.h> 25 23 #endif ··· 129 131 */ 130 132 #define page_to_phys(page) (__pfn_to_phys(page_to_pfn(page))) 131 133 #define phys_to_page(phys) (pfn_to_page(__phys_to_pfn(phys))) 132 - 133 - /* 134 - * Minimum guaranted alignment in pgd_alloc(). The page table pointers passed 135 - * around in head.S and proc-*.S are shifted by this amount, in order to 136 - * leave spare high bits for systems with physical address extension. This 137 - * does not fully accomodate the 40-bit addressing capability of ARM LPAE, but 138 - * gives us about 38-bits or so. 139 - */ 140 - #ifdef CONFIG_ARM_LPAE 141 - #define ARCH_PGD_SHIFT L1_CACHE_SHIFT 142 - #else 143 - #define ARCH_PGD_SHIFT 0 144 - #endif 145 - #define ARCH_PGD_MASK ((1 << ARCH_PGD_SHIFT) - 1) 146 134 147 135 /* 148 136 * PLAT_PHYS_OFFSET is the offset (from zero) of the start of physical
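The `ARCH_PGD_SHIFT` scheme removed here packed a cache-line-aligned LPAE page-table address into a 32-bit register by shifting out the alignment bits, giving roughly 32 + L1_CACHE_SHIFT usable address bits (the Keystone2 rework passes a full 64-bit `pgdir` instead, as the `secondary_data` change below shows). A sketch of the retired packing, assuming L1_CACHE_SHIFT is 6:

```c
#include <assert.h>
#include <stdint.h>

/* Assumption for illustration: 64-byte L1 cache lines, i.e. shift of 6. */
#define ARCH_PGD_SHIFT 6

/* Pack a cache-line-aligned physical pgd address into one 32-bit register;
 * the low ARCH_PGD_SHIFT bits are guaranteed zero by pgd_alloc alignment. */
static uint32_t pgd_pack(uint64_t phys)
{
	return (uint32_t)(phys >> ARCH_PGD_SHIFT);
}

static uint64_t pgd_unpack(uint32_t packed)
{
	return (uint64_t)packed << ARCH_PGD_SHIFT;
}
```

With a shift of 6 this reaches about 38 bits of physical address, which is why the old comment noted it did not fully cover LPAE's 40-bit capability.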
+11 -1
arch/arm/include/asm/module.h
··· 16 16 ARM_SEC_UNLIKELY, 17 17 ARM_SEC_MAX, 18 18 }; 19 + #endif 19 20 20 21 struct mod_arch_specific { 22 + #ifdef CONFIG_ARM_UNWIND 21 23 struct unwind_table *unwind[ARM_SEC_MAX]; 22 - }; 23 24 #endif 25 + #ifdef CONFIG_ARM_MODULE_PLTS 26 + struct elf32_shdr *core_plt; 27 + struct elf32_shdr *init_plt; 28 + int core_plt_count; 29 + int init_plt_count; 30 + #endif 31 + }; 32 + 33 + u32 get_module_plt(struct module *mod, unsigned long loc, Elf32_Addr val); 24 34 25 35 /* 26 36 * Add the ARM architecture version to the version magic string
+7
arch/arm/include/asm/perf_event.h
··· 19 19 #define perf_misc_flags(regs) perf_misc_flags(regs) 20 20 #endif 21 21 22 + #define perf_arch_fetch_caller_regs(regs, __ip) { \ 23 + (regs)->ARM_pc = (__ip); \ 24 + (regs)->ARM_fp = (unsigned long) __builtin_frame_address(0); \ 25 + (regs)->ARM_sp = current_stack_pointer; \ 26 + (regs)->ARM_cpsr = SVC_MODE; \ 27 + } 28 + 22 29 #endif /* __ARM_PERF_EVENT_H__ */
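The new `perf_arch_fetch_caller_regs` macro snapshots pc, fp, sp and cpsr at the call site from compiler builtins rather than a real trap frame. A rough user-space analogue; `struct fake_regs` and the sp approximation are illustrative assumptions, not the kernel's `pt_regs`:

```c
#include <assert.h>

/* Illustrative stand-in for the register snapshot; not the kernel layout. */
struct fake_regs {
	unsigned long pc, fp, sp;
};

static struct fake_regs fetch_caller_regs(void)
{
	struct fake_regs regs;

	/* The caller's return address plays the role of the captured pc. */
	regs.pc = (unsigned long)__builtin_return_address(0);
	/* Frame pointer of the current frame, as the macro does with ARM_fp. */
	regs.fp = (unsigned long)__builtin_frame_address(0);
	/* Approximation: use the frame address where the kernel reads
	 * current_stack_pointer directly. */
	regs.sp = regs.fp;
	return regs;
}
```

The point of the kernel macro is the same: build a plausible register set cheaply so callchain unwinding can start from the perf call site without taking an exception.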
+5 -14
arch/arm/include/asm/pmu.h
··· 24 24 * interrupt and passed the address of the low level handler, 25 25 * and can be used to implement any platform specific handling 26 26 * before or after calling it. 27 - * @runtime_resume: an optional handler which will be called by the 28 - * runtime PM framework following a call to pm_runtime_get(). 29 - * Note that if pm_runtime_get() is called more than once in 30 - * succession this handler will only be called once. 31 - * @runtime_suspend: an optional handler which will be called by the 32 - * runtime PM framework following a call to pm_runtime_put(). 33 - * Note that if pm_runtime_get() is called more than once in 34 - * succession this handler will only be called following the 35 - * final call to pm_runtime_put() that actually disables the 36 - * hardware. 37 27 */ 38 28 struct arm_pmu_platdata { 39 29 irqreturn_t (*handle_irq)(int irq, void *dev, 40 30 irq_handler_t pmu_handler); 41 - int (*runtime_resume)(struct device *dev); 42 - int (*runtime_suspend)(struct device *dev); 43 31 }; 44 32 45 33 #ifdef CONFIG_HW_PERF_EVENTS ··· 80 92 struct arm_pmu { 81 93 struct pmu pmu; 82 94 cpumask_t active_irqs; 95 + cpumask_t supported_cpus; 83 96 int *irq_affinity; 84 97 char *name; 85 98 irqreturn_t (*handle_irq)(int irq_num, void *dev); ··· 110 121 }; 111 122 112 123 #define to_arm_pmu(p) (container_of(p, struct arm_pmu, pmu)) 113 - 114 - extern const struct dev_pm_ops armpmu_dev_pm_ops; 115 124 116 125 int armpmu_register(struct arm_pmu *armpmu, int type); 117 126 ··· 144 157 145 158 #define XSCALE_PMU_PROBE(_version, _fn) \ 146 159 PMU_PROBE(ARM_CPU_IMP_INTEL << 24 | _version, ARM_PMU_XSCALE_MASK, _fn) 160 + 161 + int arm_pmu_device_probe(struct platform_device *pdev, 162 + const struct of_device_id *of_table, 163 + const struct pmu_probe_info *probe_table); 147 164 148 165 #endif /* CONFIG_HW_PERF_EVENTS */ 149 166
-7
arch/arm/include/asm/proc-fns.h
··· 125 125 ttbr; \ 126 126 }) 127 127 128 - #define cpu_set_ttbr(nr, val) \ 129 - do { \ 130 - u64 ttbr = val; \ 131 - __asm__("mcrr p15, " #nr ", %Q0, %R0, c2" \ 132 - : : "r" (ttbr)); \ 133 - } while (0) 134 - 135 128 #define cpu_get_pgd() \ 136 129 ({ \ 137 130 u64 pg = cpu_get_ttbr(0); \
+2 -1
arch/arm/include/asm/smp.h
··· 61 61 struct secondary_data { 62 62 union { 63 63 unsigned long mpu_rgn_szr; 64 - unsigned long pgdir; 64 + u64 pgdir; 65 65 }; 66 66 unsigned long swapper_pg_dir; 67 67 void *stack; ··· 69 69 extern struct secondary_data secondary_data; 70 70 extern volatile int pen_release; 71 71 extern void secondary_startup(void); 72 + extern void secondary_startup_arm(void); 72 73 73 74 extern int __cpu_disable(void); 74 75
+1
arch/arm/include/asm/system_info.h
··· 17 17 18 18 /* information about the system we're running on */ 19 19 extern unsigned int system_rev; 20 + extern const char *system_serial; 20 21 extern unsigned int system_serial_low; 21 22 extern unsigned int system_serial_high; 22 23 extern unsigned int mem_fclk_21285;
-2
arch/arm/include/asm/unified.h
··· 45 45 #define THUMB(x...) x 46 46 #ifdef __ASSEMBLY__ 47 47 #define W(instr) instr.w 48 - #define BSYM(sym) sym + 1 49 48 #else 50 49 #define WASM(instr) #instr ".w" 51 50 #endif ··· 58 59 #define THUMB(x...) 59 60 #ifdef __ASSEMBLY__ 60 61 #define W(instr) instr 61 - #define BSYM(sym) sym 62 62 #else 63 63 #define WASM(instr) #instr 64 64 #endif
+4 -1
arch/arm/kernel/Makefile
··· 34 34 obj-$(CONFIG_ISA_DMA_API) += dma.o 35 35 obj-$(CONFIG_FIQ) += fiq.o fiqasm.o 36 36 obj-$(CONFIG_MODULES) += armksyms.o module.o 37 + obj-$(CONFIG_ARM_MODULE_PLTS) += module-plts.o 37 38 obj-$(CONFIG_ISA_DMA) += dma-isa.o 38 39 obj-$(CONFIG_PCI) += bios32.o isa.o 39 40 obj-$(CONFIG_ARM_CPU_SUSPEND) += sleep.o suspend.o ··· 71 70 obj-$(CONFIG_CPU_PJ4B) += pj4-cp0.o 72 71 obj-$(CONFIG_IWMMXT) += iwmmxt.o 73 72 obj-$(CONFIG_PERF_EVENTS) += perf_regs.o perf_callchain.o 74 - obj-$(CONFIG_HW_PERF_EVENTS) += perf_event.o perf_event_cpu.o 73 + obj-$(CONFIG_HW_PERF_EVENTS) += perf_event.o \ 74 + perf_event_xscale.o perf_event_v6.o \ 75 + perf_event_v7.o 75 76 CFLAGS_pj4-cp0.o := -marm 76 77 AFLAGS_iwmmxt.o := -Wa,-mcpu=iwmmxt 77 78 obj-$(CONFIG_ARM_CPU_TOPOLOGY) += topology.o
+6 -6
arch/arm/kernel/entry-armv.S
··· 40 40 #ifdef CONFIG_MULTI_IRQ_HANDLER 41 41 ldr r1, =handle_arch_irq 42 42 mov r0, sp 43 - adr lr, BSYM(9997f) 43 + badr lr, 9997f 44 44 ldr pc, [r1] 45 45 #else 46 46 arch_irq_handler_default ··· 273 273 str r4, [sp, #S_PC] 274 274 orr r0, r9, r0, lsl #16 275 275 #endif 276 - adr r9, BSYM(__und_svc_finish) 276 + badr r9, __und_svc_finish 277 277 mov r2, r4 278 278 bl call_fpe 279 279 ··· 469 469 @ instruction, or the more conventional lr if we are to treat 470 470 @ this as a real undefined instruction 471 471 @ 472 - adr r9, BSYM(ret_from_exception) 472 + badr r9, ret_from_exception 473 473 474 474 @ IRQs must be enabled before attempting to read the instruction from 475 475 @ user space since that could cause a page/translation fault if the ··· 486 486 @ r2 = PC value for the following instruction (:= regs->ARM_pc) 487 487 @ r4 = PC value for the faulting instruction 488 488 @ lr = 32-bit undefined instruction function 489 - adr lr, BSYM(__und_usr_fault_32) 489 + badr lr, __und_usr_fault_32 490 490 b call_fpe 491 491 492 492 __und_usr_thumb: ··· 522 522 add r2, r2, #2 @ r2 is PC + 2, make it PC + 4 523 523 str r2, [sp, #S_PC] @ it's a 2x16bit instr, update 524 524 orr r0, r0, r5, lsl #16 525 - adr lr, BSYM(__und_usr_fault_32) 525 + badr lr, __und_usr_fault_32 526 526 @ r0 = the two 16-bit Thumb instructions which caused the exception 527 527 @ r2 = PC value for the following Thumb instruction (:= regs->ARM_pc) 528 528 @ r4 = PC value for the first 16-bit Thumb instruction ··· 716 716 __und_usr_fault_16: 717 717 mov r1, #2 718 718 1: mov r0, sp 719 - adr lr, BSYM(ret_from_exception) 719 + badr lr, ret_from_exception 720 720 b __und_fault 721 721 ENDPROC(__und_usr_fault_32) 722 722 ENDPROC(__und_usr_fault_16)
+3 -3
arch/arm/kernel/entry-common.S
··· 90 90 bl schedule_tail 91 91 cmp r5, #0 92 92 movne r0, r4 93 - adrne lr, BSYM(1f) 93 + badrne lr, 1f 94 94 retne r5 95 95 1: get_thread_info tsk 96 96 b ret_slow_syscall ··· 198 198 bne __sys_trace 199 199 200 200 cmp scno, #NR_syscalls @ check upper syscall limit 201 - adr lr, BSYM(ret_fast_syscall) @ return address 201 + badr lr, ret_fast_syscall @ return address 202 202 ldrcc pc, [tbl, scno, lsl #2] @ call sys_* routine 203 203 204 204 add r1, sp, #S_OFF ··· 233 233 add r0, sp, #S_OFF 234 234 bl syscall_trace_enter 235 235 236 - adr lr, BSYM(__sys_trace_return) @ return address 236 + badr lr, __sys_trace_return @ return address 237 237 mov scno, r0 @ syscall number (possibly new) 238 238 add r1, sp, #S_R0 + S_OFF @ pointer to regs 239 239 cmp scno, #NR_syscalls @ check upper syscall limit
+1 -1
arch/arm/kernel/entry-ftrace.S
··· 87 87 88 88 1: mcount_get_lr r1 @ lr of instrumented func 89 89 mcount_adjust_addr r0, lr @ instrumented function 90 - adr lr, BSYM(2f) 90 + badr lr, 2f 91 91 mov pc, r2 92 92 2: mcount_exit 93 93 .endm
+9 -4
arch/arm/kernel/entry-v7m.S
··· 117 117 ENDPROC(__switch_to) 118 118 119 119 .data 120 - .align 8 120 + #if CONFIG_CPU_V7M_NUM_IRQ <= 112 121 + .align 9 122 + #else 123 + .align 10 124 + #endif 125 + 121 126 /* 122 - * Vector table (64 words => 256 bytes natural alignment) 127 + * Vector table (Natural alignment need to be ensured) 123 128 */ 124 129 ENTRY(vector_table) 125 130 .long 0 @ 0 - Reset stack pointer ··· 143 138 .long __invalid_entry @ 13 - Reserved 144 139 .long __pendsv_entry @ 14 - PendSV 145 140 .long __invalid_entry @ 15 - SysTick 146 - .rept 64 - 16 147 - .long __irq_entry @ 16..64 - External Interrupts 141 + .rept CONFIG_CPU_V7M_NUM_IRQ 142 + .long __irq_entry @ External Interrupts 148 143 .endr
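The enlarged alignment follows the v7-M rule that the vector table base must be aligned to the table's size rounded up to the next power of two, where the size is (16 system exceptions + N external IRQs) words. A sketch of the computation behind the `.align 9` / `.align 10` split above, with the architectural minimum assumed to be 128 bytes:

```c
#include <assert.h>

/*
 * Required byte alignment for an ARMv7-M vector table of num_irq external
 * interrupts: (16 + num_irq) 4-byte words, rounded up to a power of two.
 * The 128-byte floor is an assumption for illustration.
 */
static unsigned int vector_table_align(unsigned int num_irq)
{
	unsigned int bytes = (16 + num_irq) * 4;
	unsigned int align = 128;

	while (align < bytes)
		align <<= 1;
	return align;
}
```

For up to 112 IRQs the table fits in 512 bytes (`.align 9`); anything larger needs 1024-byte alignment (`.align 10`), which covers the remaining configurations.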
+11 -16
arch/arm/kernel/head-nommu.S
··· 46 46 .arm 47 47 ENTRY(stext) 48 48 49 - THUMB( adr r9, BSYM(1f) ) @ Kernel is always entered in ARM. 49 + THUMB( badr r9, 1f ) @ Kernel is always entered in ARM. 50 50 THUMB( bx r9 ) @ If this is a Thumb-2 kernel, 51 51 THUMB( .thumb ) @ switch to Thumb now. 52 52 THUMB(1: ) ··· 77 77 orr r6, r6, #(1 << MPU_RSR_EN) @ Set region enabled bit 78 78 bl __setup_mpu 79 79 #endif 80 - ldr r13, =__mmap_switched @ address to jump to after 81 - @ initialising sctlr 82 - adr lr, BSYM(1f) @ return (PIC) address 80 + 81 + badr lr, 1f @ return (PIC) address 83 82 ldr r12, [r10, #PROCINFO_INITFUNC] 84 83 add r12, r12, r10 85 84 ret r12 86 - 1: b __after_proc_init 85 + 1: bl __after_proc_init 86 + b __mmap_switched 87 87 ENDPROC(stext) 88 88 89 89 #ifdef CONFIG_SMP ··· 106 106 movs r10, r5 @ invalid processor? 107 107 beq __error_p @ yes, error 'p' 108 108 109 - adr r4, __secondary_data 110 - ldmia r4, {r7, r12} 109 + ldr r7, __secondary_data 111 110 112 111 #ifdef CONFIG_ARM_MPU 113 112 /* Use MPU region info supplied by __cpu_up */ ··· 114 115 bl __setup_mpu @ Initialize the MPU 115 116 #endif 116 117 117 - adr lr, BSYM(__after_proc_init) @ return address 118 - mov r13, r12 @ __secondary_switched address 118 + badr lr, 1f @ return (PIC) address 119 119 ldr r12, [r10, #PROCINFO_INITFUNC] 120 120 add r12, r12, r10 121 121 ret r12 122 - ENDPROC(secondary_startup) 123 - 124 - ENTRY(__secondary_switched) 125 - ldr sp, [r7, #8] @ set up the stack pointer 122 + 1: bl __after_proc_init 123 + ldr sp, [r7, #12] @ set up the stack pointer 126 124 mov fp, #0 127 125 b secondary_start_kernel 128 - ENDPROC(__secondary_switched) 126 + ENDPROC(secondary_startup) 129 127 130 128 .type __secondary_data, %object 131 129 __secondary_data: 132 130 .long secondary_data 133 - .long __secondary_switched 134 131 #endif /* CONFIG_SMP */ 135 132 136 133 /* ··· 159 164 #endif 160 165 mcr p15, 0, r0, c1, c0, 0 @ write control reg 161 166 #endif /* CONFIG_CPU_CP15 */ 162 - ret r13 167 + ret lr 163 168 
ENDPROC(__after_proc_init) 164 169 .ltorg 165 170
+36 -16
arch/arm/kernel/head.S
··· 80 80 ENTRY(stext) 81 81 ARM_BE8(setend be ) @ ensure we are in BE8 mode 82 82 83 - THUMB( adr r9, BSYM(1f) ) @ Kernel is always entered in ARM. 83 + THUMB( badr r9, 1f ) @ Kernel is always entered in ARM. 84 84 THUMB( bx r9 ) @ If this is a Thumb-2 kernel, 85 85 THUMB( .thumb ) @ switch to Thumb now. 86 86 THUMB(1: ) ··· 131 131 * The following calls CPU specific code in a position independent 132 132 * manner. See arch/arm/mm/proc-*.S for details. r10 = base of 133 133 * xxx_proc_info structure selected by __lookup_processor_type 134 - * above. On return, the CPU will be ready for the MMU to be 135 - * turned on, and r0 will hold the CPU control register value. 134 + * above. 135 + * 136 + * The processor init function will be called with: 137 + * r1 - machine type 138 + * r2 - boot data (atags/dt) pointer 139 + * r4 - translation table base (low word) 140 + * r5 - translation table base (high word, if LPAE) 141 + * r8 - translation table base 1 (pfn if LPAE) 142 + * r9 - cpuid 143 + * r13 - virtual address for __enable_mmu -> __turn_mmu_on 144 + * 145 + * On return, the CPU will be ready for the MMU to be turned on, 146 + * r0 will hold the CPU control register value, r1, r2, r4, and 147 + * r9 will be preserved. r5 will also be preserved if LPAE. 
136 148 */ 137 149 ldr r13, =__mmap_switched @ address to jump to after 138 150 @ mmu has been enabled 139 - adr lr, BSYM(1f) @ return (PIC) address 151 + badr lr, 1f @ return (PIC) address 152 + #ifdef CONFIG_ARM_LPAE 153 + mov r5, #0 @ high TTBR0 154 + mov r8, r4, lsr #12 @ TTBR1 is swapper_pg_dir pfn 155 + #else 140 156 mov r8, r4 @ set TTBR1 to swapper_pg_dir 157 + #endif 141 158 ldr r12, [r10, #PROCINFO_INITFUNC] 142 159 add r12, r12, r10 143 160 ret r12 ··· 175 158 * 176 159 * Returns: 177 160 * r0, r3, r5-r7 corrupted 178 - * r4 = page table (see ARCH_PGD_SHIFT in asm/memory.h) 161 + * r4 = physical page table address 179 162 */ 180 163 __create_page_tables: 181 164 pgtbl r4, r8 @ page table address ··· 350 333 #endif 351 334 #ifdef CONFIG_ARM_LPAE 352 335 sub r4, r4, #0x1000 @ point to the PGD table 353 - mov r4, r4, lsr #ARCH_PGD_SHIFT 354 336 #endif 355 337 ret lr 356 338 ENDPROC(__create_page_tables) ··· 362 346 363 347 #if defined(CONFIG_SMP) 364 348 .text 365 - ENTRY(secondary_startup_arm) 366 349 .arm 367 - THUMB( adr r9, BSYM(1f) ) @ Kernel is entered in ARM. 350 + ENTRY(secondary_startup_arm) 351 + THUMB( badr r9, 1f ) @ Kernel is entered in ARM. 368 352 THUMB( bx r9 ) @ If this is a Thumb-2 kernel, 369 353 THUMB( .thumb ) @ switch to Thumb now. 
370 354 THUMB(1: ) ··· 397 381 adr r4, __secondary_data 398 382 ldmia r4, {r5, r7, r12} @ address to jump to after 399 383 sub lr, r4, r5 @ mmu has been enabled 400 - ldr r4, [r7, lr] @ get secondary_data.pgdir 401 - add r7, r7, #4 402 - ldr r8, [r7, lr] @ get secondary_data.swapper_pg_dir 403 - adr lr, BSYM(__enable_mmu) @ return address 384 + add r3, r7, lr 385 + ldrd r4, [r3, #0] @ get secondary_data.pgdir 386 + ldr r8, [r3, #8] @ get secondary_data.swapper_pg_dir 387 + badr lr, __enable_mmu @ return address 404 388 mov r13, r12 @ __secondary_switched address 405 389 ldr r12, [r10, #PROCINFO_INITFUNC] 406 390 add r12, r12, r10 @ initialise processor ··· 413 397 * r6 = &secondary_data 414 398 */ 415 399 ENTRY(__secondary_switched) 416 - ldr sp, [r7, #4] @ get secondary_data.stack 400 + ldr sp, [r7, #12] @ get secondary_data.stack 417 401 mov fp, #0 418 402 b secondary_start_kernel 419 403 ENDPROC(__secondary_switched) ··· 432 416 /* 433 417 * Setup common bits before finally enabling the MMU. Essentially 434 418 * this is just loading the page table pointer and domain access 435 - * registers. 419 + * registers. All these registers need to be preserved by the 420 + * processor setup function (or set in the case of r0) 436 421 * 437 422 * r0 = cp#15 control register 438 423 * r1 = machine ID 439 424 * r2 = atags or dtb pointer 440 - * r4 = page table (see ARCH_PGD_SHIFT in asm/memory.h) 425 + * r4 = TTBR pointer (low word) 426 + * r5 = TTBR pointer (high word if LPAE) 441 427 * r9 = processor ID 442 428 * r13 = *virtual* address to jump to upon completion 443 429 */ ··· 458 440 #ifdef CONFIG_CPU_ICACHE_DISABLE 459 441 bic r0, r0, #CR_I 460 442 #endif 461 - #ifndef CONFIG_ARM_LPAE 443 + #ifdef CONFIG_ARM_LPAE 444 + mcrr p15, 0, r4, r5, c2 @ load TTBR0 445 + #else 462 446 mov r5, #(domain_val(DOMAIN_USER, DOMAIN_MANAGER) | \ 463 447 domain_val(DOMAIN_KERNEL, DOMAIN_MANAGER) | \ 464 448 domain_val(DOMAIN_TABLE, DOMAIN_MANAGER) | \
+183
arch/arm/kernel/module-plts.c
··· 1 + /* 2 + * Copyright (C) 2014 Linaro Ltd. <ard.biesheuvel@linaro.org> 3 + * 4 + * This program is free software; you can redistribute it and/or modify 5 + * it under the terms of the GNU General Public License version 2 as 6 + * published by the Free Software Foundation. 7 + */ 8 + 9 + #include <linux/elf.h> 10 + #include <linux/kernel.h> 11 + #include <linux/module.h> 12 + 13 + #include <asm/cache.h> 14 + #include <asm/opcodes.h> 15 + 16 + #define PLT_ENT_STRIDE L1_CACHE_BYTES 17 + #define PLT_ENT_COUNT (PLT_ENT_STRIDE / sizeof(u32)) 18 + #define PLT_ENT_SIZE (sizeof(struct plt_entries) / PLT_ENT_COUNT) 19 + 20 + #ifdef CONFIG_THUMB2_KERNEL 21 + #define PLT_ENT_LDR __opcode_to_mem_thumb32(0xf8dff000 | \ 22 + (PLT_ENT_STRIDE - 4)) 23 + #else 24 + #define PLT_ENT_LDR __opcode_to_mem_arm(0xe59ff000 | \ 25 + (PLT_ENT_STRIDE - 8)) 26 + #endif 27 + 28 + struct plt_entries { 29 + u32 ldr[PLT_ENT_COUNT]; 30 + u32 lit[PLT_ENT_COUNT]; 31 + }; 32 + 33 + static bool in_init(const struct module *mod, u32 addr) 34 + { 35 + return addr - (u32)mod->module_init < mod->init_size; 36 + } 37 + 38 + u32 get_module_plt(struct module *mod, unsigned long loc, Elf32_Addr val) 39 + { 40 + struct plt_entries *plt, *plt_end; 41 + int c, *count; 42 + 43 + if (in_init(mod, loc)) { 44 + plt = (void *)mod->arch.init_plt->sh_addr; 45 + plt_end = (void *)plt + mod->arch.init_plt->sh_size; 46 + count = &mod->arch.init_plt_count; 47 + } else { 48 + plt = (void *)mod->arch.core_plt->sh_addr; 49 + plt_end = (void *)plt + mod->arch.core_plt->sh_size; 50 + count = &mod->arch.core_plt_count; 51 + } 52 + 53 + /* Look for an existing entry pointing to 'val' */ 54 + for (c = *count; plt < plt_end; c -= PLT_ENT_COUNT, plt++) { 55 + int i; 56 + 57 + if (!c) { 58 + /* Populate a new set of entries */ 59 + *plt = (struct plt_entries){ 60 + { [0 ... 
PLT_ENT_COUNT - 1] = PLT_ENT_LDR, }, 61 + { val, } 62 + }; 63 + ++*count; 64 + return (u32)plt->ldr; 65 + } 66 + for (i = 0; i < PLT_ENT_COUNT; i++) { 67 + if (!plt->lit[i]) { 68 + plt->lit[i] = val; 69 + ++*count; 70 + } 71 + if (plt->lit[i] == val) 72 + return (u32)&plt->ldr[i]; 73 + } 74 + } 75 + BUG(); 76 + } 77 + 78 + static int duplicate_rel(Elf32_Addr base, const Elf32_Rel *rel, int num, 79 + u32 mask) 80 + { 81 + u32 *loc1, *loc2; 82 + int i; 83 + 84 + for (i = 0; i < num; i++) { 85 + if (rel[i].r_info != rel[num].r_info) 86 + continue; 87 + 88 + /* 89 + * Identical relocation types against identical symbols can 90 + * still result in different PLT entries if the addend in the 91 + * place is different. So resolve the target of the relocation 92 + * to compare the values. 93 + */ 94 + loc1 = (u32 *)(base + rel[i].r_offset); 95 + loc2 = (u32 *)(base + rel[num].r_offset); 96 + if (((*loc1 ^ *loc2) & mask) == 0) 97 + return 1; 98 + } 99 + return 0; 100 + } 101 + 102 + /* Count how many PLT entries we may need */ 103 + static unsigned int count_plts(Elf32_Addr base, const Elf32_Rel *rel, int num) 104 + { 105 + unsigned int ret = 0; 106 + int i; 107 + 108 + /* 109 + * Sure, this is order(n^2), but it's usually short, and not 110 + * time critical 111 + */ 112 + for (i = 0; i < num; i++) 113 + switch (ELF32_R_TYPE(rel[i].r_info)) { 114 + case R_ARM_CALL: 115 + case R_ARM_PC24: 116 + case R_ARM_JUMP24: 117 + if (!duplicate_rel(base, rel, i, 118 + __opcode_to_mem_arm(0x00ffffff))) 119 + ret++; 120 + break; 121 + #ifdef CONFIG_THUMB2_KERNEL 122 + case R_ARM_THM_CALL: 123 + case R_ARM_THM_JUMP24: 124 + if (!duplicate_rel(base, rel, i, 125 + __opcode_to_mem_thumb32(0x07ff2fff))) 126 + ret++; 127 + #endif 128 + } 129 + return ret; 130 + } 131 + 132 + int module_frob_arch_sections(Elf_Ehdr *ehdr, Elf_Shdr *sechdrs, 133 + char *secstrings, struct module *mod) 134 + { 135 + unsigned long core_plts = 0, init_plts = 0; 136 + Elf32_Shdr *s, *sechdrs_end = sechdrs + 
ehdr->e_shnum; 137 + 138 + /* 139 + * To store the PLTs, we expand the .text section for core module code 140 + * and the .init.text section for initialization code. 141 + */ 142 + for (s = sechdrs; s < sechdrs_end; ++s) 143 + if (strcmp(".core.plt", secstrings + s->sh_name) == 0) 144 + mod->arch.core_plt = s; 145 + else if (strcmp(".init.plt", secstrings + s->sh_name) == 0) 146 + mod->arch.init_plt = s; 147 + 148 + if (!mod->arch.core_plt || !mod->arch.init_plt) { 149 + pr_err("%s: sections missing\n", mod->name); 150 + return -ENOEXEC; 151 + } 152 + 153 + for (s = sechdrs + 1; s < sechdrs_end; ++s) { 154 + const Elf32_Rel *rels = (void *)ehdr + s->sh_offset; 155 + int numrels = s->sh_size / sizeof(Elf32_Rel); 156 + Elf32_Shdr *dstsec = sechdrs + s->sh_info; 157 + 158 + if (s->sh_type != SHT_REL) 159 + continue; 160 + 161 + if (strstr(secstrings + s->sh_name, ".init")) 162 + init_plts += count_plts(dstsec->sh_addr, rels, numrels); 163 + else 164 + core_plts += count_plts(dstsec->sh_addr, rels, numrels); 165 + } 166 + 167 + mod->arch.core_plt->sh_type = SHT_NOBITS; 168 + mod->arch.core_plt->sh_flags = SHF_EXECINSTR | SHF_ALLOC; 169 + mod->arch.core_plt->sh_addralign = L1_CACHE_BYTES; 170 + mod->arch.core_plt->sh_size = round_up(core_plts * PLT_ENT_SIZE, 171 + sizeof(struct plt_entries)); 172 + mod->arch.core_plt_count = 0; 173 + 174 + mod->arch.init_plt->sh_type = SHT_NOBITS; 175 + mod->arch.init_plt->sh_flags = SHF_EXECINSTR | SHF_ALLOC; 176 + mod->arch.init_plt->sh_addralign = L1_CACHE_BYTES; 177 + mod->arch.init_plt->sh_size = round_up(init_plts * PLT_ENT_SIZE, 178 + sizeof(struct plt_entries)); 179 + mod->arch.init_plt_count = 0; 180 + pr_debug("%s: core.plt=%x, init.plt=%x\n", __func__, 181 + mod->arch.core_plt->sh_size, mod->arch.init_plt->sh_size); 182 + return 0; 183 + }
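Each ARM-mode PLT veneer emitted above is the single instruction `ldr pc, [pc, #(PLT_ENT_STRIDE - 8)]` (opcode `0xe59ff000` plus the immediate): because the ARM PC reads as the instruction address plus 8, the veneer at slot N of the `ldr[]` array loads its branch target from slot N of the `lit[]` array exactly one stride later. A sketch of that address arithmetic, assuming the 64-byte stride of a 64-byte L1 cache line:

```c
#include <assert.h>
#include <stdint.h>

/* Assumption: 64-byte L1 cache lines, matching PLT_ENT_STRIDE above. */
#define PLT_ENT_STRIDE 64

/*
 * Address the veneer's literal load resolves to: the instruction's own
 * address, plus the +8 PC read bias, plus the (PLT_ENT_STRIDE - 8)
 * immediate encoded into the ldr.
 */
static uint32_t plt_literal_addr(uint32_t veneer_addr)
{
	uint32_t imm = PLT_ENT_STRIDE - 8;

	return veneer_addr + 8 + imm;
}
```

Grouping one cache line of `ldr`s followed by one cache line of literals is what lets a single immediate work for every slot in the pair of lines.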
+31 -1
arch/arm/kernel/module.c
··· 40 40 #ifdef CONFIG_MMU 41 41 void *module_alloc(unsigned long size) 42 42 { 43 - return __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END, 43 + void *p = __vmalloc_node_range(size, 1, MODULES_VADDR, MODULES_END, 44 + GFP_KERNEL, PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE, 45 + __builtin_return_address(0)); 46 + if (!IS_ENABLED(CONFIG_ARM_MODULE_PLTS) || p) 47 + return p; 48 + return __vmalloc_node_range(size, 1, VMALLOC_START, VMALLOC_END, 44 49 GFP_KERNEL, PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE, 45 50 __builtin_return_address(0)); 46 51 } ··· 115 110 offset -= 0x04000000; 116 111 117 112 offset += sym->st_value - loc; 113 + 114 + /* 115 + * Route through a PLT entry if 'offset' exceeds the 116 + * supported range. Note that 'offset + loc + 8' 117 + * contains the absolute jump target, i.e., 118 + * @sym + addend, corrected for the +8 PC bias. 119 + */ 120 + if (IS_ENABLED(CONFIG_ARM_MODULE_PLTS) && 121 + (offset <= (s32)0xfe000000 || 122 + offset >= (s32)0x02000000)) 123 + offset = get_module_plt(module, loc, 124 + offset + loc + 8) 125 + - loc - 8; 126 + 118 127 if (offset <= (s32)0xfe000000 || 119 128 offset >= (s32)0x02000000) { 120 129 pr_err("%s: section %u reloc %u sym '%s': relocation %u out of range (%#lx -> %#x)\n", ··· 221 202 if (offset & 0x01000000) 222 203 offset -= 0x02000000; 223 204 offset += sym->st_value - loc; 205 + 206 + /* 207 + * Route through a PLT entry if 'offset' exceeds the 208 + * supported range. 209 + */ 210 + if (IS_ENABLED(CONFIG_ARM_MODULE_PLTS) && 211 + (offset <= (s32)0xff000000 || 212 + offset >= (s32)0x01000000)) 213 + offset = get_module_plt(module, loc, 214 + offset + loc + 4) 215 + - loc - 4; 224 216 225 217 if (offset <= (s32)0xff000000 || 226 218 offset >= (s32)0x01000000) {
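The range tests added above express the reach of a 24-bit word-offset branch: once the PC-relative displacement reaches +/-32 MiB, the relocation must be routed through a PLT veneer. A self-contained restatement of the in-range predicate used for R_ARM_CALL/R_ARM_PC24/R_ARM_JUMP24:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * An ARM B/BL instruction carries a signed 24-bit immediate counted in
 * words, so its reach is (-0x02000000, 0x02000000) bytes.  This mirrors
 * the "offset <= (s32)0xfe000000 || offset >= (s32)0x02000000"
 * out-of-range test in apply_relocate(), inverted.
 */
static bool branch_in_range(int32_t offset)
{
	return offset > (int32_t)0xfe000000 && offset < (int32_t)0x02000000;
}
```

`0xfe000000` interpreted as a signed 32-bit value is -32 MiB, which is why the kernel can write the bound as an unsigned constant and compare after an `(s32)` cast.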
+4
arch/arm/kernel/module.lds
··· 1 + SECTIONS { 2 + .core.plt : { BYTE(0) } 3 + .init.plt : { BYTE(0) } 4 + }
+376 -34
arch/arm/kernel/perf_event.c
··· 11 11 */ 12 12 #define pr_fmt(fmt) "hw perfevents: " fmt 13 13 14 + #include <linux/bitmap.h> 15 + #include <linux/cpumask.h> 16 + #include <linux/export.h> 14 17 #include <linux/kernel.h> 18 + #include <linux/of.h> 15 19 #include <linux/platform_device.h> 16 - #include <linux/pm_runtime.h> 20 + #include <linux/slab.h> 21 + #include <linux/spinlock.h> 17 22 #include <linux/irq.h> 18 23 #include <linux/irqdesc.h> 19 24 25 + #include <asm/cputype.h> 20 26 #include <asm/irq_regs.h> 21 27 #include <asm/pmu.h> 22 28 ··· 235 229 int idx; 236 230 int err = 0; 237 231 232 + /* An event following a process won't be stopped earlier */ 233 + if (!cpumask_test_cpu(smp_processor_id(), &armpmu->supported_cpus)) 234 + return -ENOENT; 235 + 238 236 perf_pmu_disable(event->pmu); 239 237 240 238 /* If we don't have a space for the counter then finish early. */ ··· 354 344 armpmu_release_hardware(struct arm_pmu *armpmu) 355 345 { 356 346 armpmu->free_irq(armpmu); 357 - pm_runtime_put_sync(&armpmu->plat_device->dev); 358 347 } 359 348 360 349 static int 361 350 armpmu_reserve_hardware(struct arm_pmu *armpmu) 362 351 { 363 - int err; 364 - struct platform_device *pmu_device = armpmu->plat_device; 365 - 366 - if (!pmu_device) 367 - return -ENODEV; 368 - 369 - pm_runtime_get_sync(&pmu_device->dev); 370 - err = armpmu->request_irq(armpmu, armpmu_dispatch_irq); 352 + int err = armpmu->request_irq(armpmu, armpmu_dispatch_irq); 371 353 if (err) { 372 354 armpmu_release_hardware(armpmu); 373 355 return err; ··· 456 454 int err = 0; 457 455 atomic_t *active_events = &armpmu->active_events; 458 456 457 + /* 458 + * Reject CPU-affine events for CPUs that are of a different class to 459 + * that which this PMU handles. Process-following events (where 460 + * event->cpu == -1) can be migrated between CPUs, and thus we have to 461 + * reject them later (in armpmu_add) if they're scheduled on a 462 + * different class of CPU. 
463 + */ 464 + if (event->cpu != -1 && 465 + !cpumask_test_cpu(event->cpu, &armpmu->supported_cpus)) 466 + return -ENOENT; 467 + 459 468 /* does not support taken branch sampling */ 460 469 if (has_branch_stack(event)) 461 470 return -EOPNOTSUPP; ··· 502 489 struct pmu_hw_events *hw_events = this_cpu_ptr(armpmu->hw_events); 503 490 int enabled = bitmap_weight(hw_events->used_mask, armpmu->num_events); 504 491 492 + /* For task-bound events we may be called on other CPUs */ 493 + if (!cpumask_test_cpu(smp_processor_id(), &armpmu->supported_cpus)) 494 + return; 495 + 505 496 if (enabled) 506 497 armpmu->start(armpmu); 507 498 } ··· 513 496 static void armpmu_disable(struct pmu *pmu) 514 497 { 515 498 struct arm_pmu *armpmu = to_arm_pmu(pmu); 499 + 500 + /* For task-bound events we may be called on other CPUs */ 501 + if (!cpumask_test_cpu(smp_processor_id(), &armpmu->supported_cpus)) 502 + return; 503 + 516 504 armpmu->stop(armpmu); 517 505 } 518 506 519 - #ifdef CONFIG_PM 520 - static int armpmu_runtime_resume(struct device *dev) 507 + /* 508 + * In heterogeneous systems, events are specific to a particular 509 + * microarchitecture, and aren't suitable for another. Thus, only match CPUs of 510 + * the same microarchitecture. 
511 + */ 512 + static int armpmu_filter_match(struct perf_event *event) 521 513 { 522 - struct arm_pmu_platdata *plat = dev_get_platdata(dev); 523 - 524 - if (plat && plat->runtime_resume) 525 - return plat->runtime_resume(dev); 526 - 527 - return 0; 514 + struct arm_pmu *armpmu = to_arm_pmu(event->pmu); 515 + unsigned int cpu = smp_processor_id(); 516 + return cpumask_test_cpu(cpu, &armpmu->supported_cpus); 528 517 } 529 - 530 - static int armpmu_runtime_suspend(struct device *dev) 531 - { 532 - struct arm_pmu_platdata *plat = dev_get_platdata(dev); 533 - 534 - if (plat && plat->runtime_suspend) 535 - return plat->runtime_suspend(dev); 536 - 537 - return 0; 538 - } 539 - #endif 540 - 541 - const struct dev_pm_ops armpmu_dev_pm_ops = { 542 - SET_RUNTIME_PM_OPS(armpmu_runtime_suspend, armpmu_runtime_resume, NULL) 543 - }; 544 518 545 519 static void armpmu_init(struct arm_pmu *armpmu) 546 520 { ··· 547 539 .start = armpmu_start, 548 540 .stop = armpmu_stop, 549 541 .read = armpmu_read, 542 + .filter_match = armpmu_filter_match, 550 543 }; 551 544 } 552 545 553 546 int armpmu_register(struct arm_pmu *armpmu, int type) 554 547 { 555 548 armpmu_init(armpmu); 556 - pm_runtime_enable(&armpmu->plat_device->dev); 557 549 pr_info("enabled with %s PMU driver, %d counters available\n", 558 550 armpmu->name, armpmu->num_events); 559 551 return perf_pmu_register(&armpmu->pmu, armpmu->name, type); 560 552 } 561 553 554 + /* Set at runtime when we know what CPU type we are. */ 555 + static struct arm_pmu *__oprofile_cpu_pmu; 556 + 557 + /* 558 + * Despite the names, these two functions are CPU-specific and are used 559 + * by the OProfile/perf code. 
560 + */ 561 + const char *perf_pmu_name(void) 562 + { 563 + if (!__oprofile_cpu_pmu) 564 + return NULL; 565 + 566 + return __oprofile_cpu_pmu->name; 567 + } 568 + EXPORT_SYMBOL_GPL(perf_pmu_name); 569 + 570 + int perf_num_counters(void) 571 + { 572 + int max_events = 0; 573 + 574 + if (__oprofile_cpu_pmu != NULL) 575 + max_events = __oprofile_cpu_pmu->num_events; 576 + 577 + return max_events; 578 + } 579 + EXPORT_SYMBOL_GPL(perf_num_counters); 580 + 581 + static void cpu_pmu_enable_percpu_irq(void *data) 582 + { 583 + int irq = *(int *)data; 584 + 585 + enable_percpu_irq(irq, IRQ_TYPE_NONE); 586 + } 587 + 588 + static void cpu_pmu_disable_percpu_irq(void *data) 589 + { 590 + int irq = *(int *)data; 591 + 592 + disable_percpu_irq(irq); 593 + } 594 + 595 + static void cpu_pmu_free_irq(struct arm_pmu *cpu_pmu) 596 + { 597 + int i, irq, irqs; 598 + struct platform_device *pmu_device = cpu_pmu->plat_device; 599 + struct pmu_hw_events __percpu *hw_events = cpu_pmu->hw_events; 600 + 601 + irqs = min(pmu_device->num_resources, num_possible_cpus()); 602 + 603 + irq = platform_get_irq(pmu_device, 0); 604 + if (irq >= 0 && irq_is_percpu(irq)) { 605 + on_each_cpu(cpu_pmu_disable_percpu_irq, &irq, 1); 606 + free_percpu_irq(irq, &hw_events->percpu_pmu); 607 + } else { 608 + for (i = 0; i < irqs; ++i) { 609 + int cpu = i; 610 + 611 + if (cpu_pmu->irq_affinity) 612 + cpu = cpu_pmu->irq_affinity[i]; 613 + 614 + if (!cpumask_test_and_clear_cpu(cpu, &cpu_pmu->active_irqs)) 615 + continue; 616 + irq = platform_get_irq(pmu_device, i); 617 + if (irq >= 0) 618 + free_irq(irq, per_cpu_ptr(&hw_events->percpu_pmu, cpu)); 619 + } 620 + } 621 + } 622 + 623 + static int cpu_pmu_request_irq(struct arm_pmu *cpu_pmu, irq_handler_t handler) 624 + { 625 + int i, err, irq, irqs; 626 + struct platform_device *pmu_device = cpu_pmu->plat_device; 627 + struct pmu_hw_events __percpu *hw_events = cpu_pmu->hw_events; 628 + 629 + if (!pmu_device) 630 + return -ENODEV; 631 + 632 + irqs = 
min(pmu_device->num_resources, num_possible_cpus()); 633 + if (irqs < 1) { 634 + pr_warn_once("perf/ARM: No irqs for PMU defined, sampling events not supported\n"); 635 + return 0; 636 + } 637 + 638 + irq = platform_get_irq(pmu_device, 0); 639 + if (irq >= 0 && irq_is_percpu(irq)) { 640 + err = request_percpu_irq(irq, handler, "arm-pmu", 641 + &hw_events->percpu_pmu); 642 + if (err) { 643 + pr_err("unable to request IRQ%d for ARM PMU counters\n", 644 + irq); 645 + return err; 646 + } 647 + on_each_cpu(cpu_pmu_enable_percpu_irq, &irq, 1); 648 + } else { 649 + for (i = 0; i < irqs; ++i) { 650 + int cpu = i; 651 + 652 + err = 0; 653 + irq = platform_get_irq(pmu_device, i); 654 + if (irq < 0) 655 + continue; 656 + 657 + if (cpu_pmu->irq_affinity) 658 + cpu = cpu_pmu->irq_affinity[i]; 659 + 660 + /* 661 + * If we have a single PMU interrupt that we can't shift, 662 + * assume that we're running on a uniprocessor machine and 663 + * continue. Otherwise, continue without this interrupt. 664 + */ 665 + if (irq_set_affinity(irq, cpumask_of(cpu)) && irqs > 1) { 666 + pr_warn("unable to set irq affinity (irq=%d, cpu=%u)\n", 667 + irq, cpu); 668 + continue; 669 + } 670 + 671 + err = request_irq(irq, handler, 672 + IRQF_NOBALANCING | IRQF_NO_THREAD, "arm-pmu", 673 + per_cpu_ptr(&hw_events->percpu_pmu, cpu)); 674 + if (err) { 675 + pr_err("unable to request IRQ%d for ARM PMU counters\n", 676 + irq); 677 + return err; 678 + } 679 + 680 + cpumask_set_cpu(cpu, &cpu_pmu->active_irqs); 681 + } 682 + } 683 + 684 + return 0; 685 + } 686 + 687 + /* 688 + * PMU hardware loses all context when a CPU goes offline. 689 + * When a CPU is hotplugged back in, since some hardware registers are 690 + * UNKNOWN at reset, the PMU must be explicitly reset to avoid reading 691 + * junk values out of them. 
692 + */ 693 + static int cpu_pmu_notify(struct notifier_block *b, unsigned long action, 694 + void *hcpu) 695 + { 696 + int cpu = (unsigned long)hcpu; 697 + struct arm_pmu *pmu = container_of(b, struct arm_pmu, hotplug_nb); 698 + 699 + if ((action & ~CPU_TASKS_FROZEN) != CPU_STARTING) 700 + return NOTIFY_DONE; 701 + 702 + if (!cpumask_test_cpu(cpu, &pmu->supported_cpus)) 703 + return NOTIFY_DONE; 704 + 705 + if (pmu->reset) 706 + pmu->reset(pmu); 707 + else 708 + return NOTIFY_DONE; 709 + 710 + return NOTIFY_OK; 711 + } 712 + 713 + static int cpu_pmu_init(struct arm_pmu *cpu_pmu) 714 + { 715 + int err; 716 + int cpu; 717 + struct pmu_hw_events __percpu *cpu_hw_events; 718 + 719 + cpu_hw_events = alloc_percpu(struct pmu_hw_events); 720 + if (!cpu_hw_events) 721 + return -ENOMEM; 722 + 723 + cpu_pmu->hotplug_nb.notifier_call = cpu_pmu_notify; 724 + err = register_cpu_notifier(&cpu_pmu->hotplug_nb); 725 + if (err) 726 + goto out_hw_events; 727 + 728 + for_each_possible_cpu(cpu) { 729 + struct pmu_hw_events *events = per_cpu_ptr(cpu_hw_events, cpu); 730 + raw_spin_lock_init(&events->pmu_lock); 731 + events->percpu_pmu = cpu_pmu; 732 + } 733 + 734 + cpu_pmu->hw_events = cpu_hw_events; 735 + cpu_pmu->request_irq = cpu_pmu_request_irq; 736 + cpu_pmu->free_irq = cpu_pmu_free_irq; 737 + 738 + /* Ensure the PMU has sane values out of reset. 
*/ 739 + if (cpu_pmu->reset) 740 + on_each_cpu_mask(&cpu_pmu->supported_cpus, cpu_pmu->reset, 741 + cpu_pmu, 1); 742 + 743 + /* If no interrupts available, set the corresponding capability flag */ 744 + if (!platform_get_irq(cpu_pmu->plat_device, 0)) 745 + cpu_pmu->pmu.capabilities |= PERF_PMU_CAP_NO_INTERRUPT; 746 + 747 + return 0; 748 + 749 + out_hw_events: 750 + free_percpu(cpu_hw_events); 751 + return err; 752 + } 753 + 754 + static void cpu_pmu_destroy(struct arm_pmu *cpu_pmu) 755 + { 756 + unregister_cpu_notifier(&cpu_pmu->hotplug_nb); 757 + free_percpu(cpu_pmu->hw_events); 758 + } 759 + 760 + /* 761 + * CPU PMU identification and probing. 762 + */ 763 + static int probe_current_pmu(struct arm_pmu *pmu, 764 + const struct pmu_probe_info *info) 765 + { 766 + int cpu = get_cpu(); 767 + unsigned int cpuid = read_cpuid_id(); 768 + int ret = -ENODEV; 769 + 770 + pr_info("probing PMU on CPU %d\n", cpu); 771 + 772 + for (; info->init != NULL; info++) { 773 + if ((cpuid & info->mask) != info->cpuid) 774 + continue; 775 + ret = info->init(pmu); 776 + break; 777 + } 778 + 779 + put_cpu(); 780 + return ret; 781 + } 782 + 783 + static int of_pmu_irq_cfg(struct arm_pmu *pmu) 784 + { 785 + int i, irq, *irqs; 786 + struct platform_device *pdev = pmu->plat_device; 787 + 788 + /* Don't bother with PPIs; they're already affine */ 789 + irq = platform_get_irq(pdev, 0); 790 + if (irq >= 0 && irq_is_percpu(irq)) 791 + return 0; 792 + 793 + irqs = kcalloc(pdev->num_resources, sizeof(*irqs), GFP_KERNEL); 794 + if (!irqs) 795 + return -ENOMEM; 796 + 797 + for (i = 0; i < pdev->num_resources; ++i) { 798 + struct device_node *dn; 799 + int cpu; 800 + 801 + dn = of_parse_phandle(pdev->dev.of_node, "interrupt-affinity", 802 + i); 803 + if (!dn) { 804 + pr_warn("Failed to parse %s/interrupt-affinity[%d]\n", 805 + of_node_full_name(pdev->dev.of_node), i); 806 + break; 807 + } 808 + 809 + for_each_possible_cpu(cpu) 810 + if (arch_find_n_match_cpu_physical_id(dn, cpu, NULL)) 811 + break; 
812 + 813 + of_node_put(dn); 814 + if (cpu >= nr_cpu_ids) { 815 + pr_warn("Failed to find logical CPU for %s\n", 816 + dn->name); 817 + break; 818 + } 819 + 820 + irqs[i] = cpu; 821 + cpumask_set_cpu(cpu, &pmu->supported_cpus); 822 + } 823 + 824 + if (i == pdev->num_resources) { 825 + pmu->irq_affinity = irqs; 826 + } else { 827 + kfree(irqs); 828 + cpumask_setall(&pmu->supported_cpus); 829 + } 830 + 831 + return 0; 832 + } 833 + 834 + int arm_pmu_device_probe(struct platform_device *pdev, 835 + const struct of_device_id *of_table, 836 + const struct pmu_probe_info *probe_table) 837 + { 838 + const struct of_device_id *of_id; 839 + const int (*init_fn)(struct arm_pmu *); 840 + struct device_node *node = pdev->dev.of_node; 841 + struct arm_pmu *pmu; 842 + int ret = -ENODEV; 843 + 844 + pmu = kzalloc(sizeof(struct arm_pmu), GFP_KERNEL); 845 + if (!pmu) { 846 + pr_info("failed to allocate PMU device!\n"); 847 + return -ENOMEM; 848 + } 849 + 850 + if (!__oprofile_cpu_pmu) 851 + __oprofile_cpu_pmu = pmu; 852 + 853 + pmu->plat_device = pdev; 854 + 855 + if (node && (of_id = of_match_node(of_table, pdev->dev.of_node))) { 856 + init_fn = of_id->data; 857 + 858 + ret = of_pmu_irq_cfg(pmu); 859 + if (!ret) 860 + ret = init_fn(pmu); 861 + } else { 862 + ret = probe_current_pmu(pmu, probe_table); 863 + cpumask_setall(&pmu->supported_cpus); 864 + } 865 + 866 + if (ret) { 867 + pr_info("failed to probe PMU!\n"); 868 + goto out_free; 869 + } 870 + 871 + ret = cpu_pmu_init(pmu); 872 + if (ret) 873 + goto out_free; 874 + 875 + ret = armpmu_register(pmu, -1); 876 + if (ret) 877 + goto out_destroy; 878 + 879 + return 0; 880 + 881 + out_destroy: 882 + cpu_pmu_destroy(pmu); 883 + out_free: 884 + pr_info("failed to register PMU devices!\n"); 885 + kfree(pmu); 886 + return ret; 887 + }
-421
arch/arm/kernel/perf_event_cpu.c
··· 1 - /* 2 - * This program is free software; you can redistribute it and/or modify 3 - * it under the terms of the GNU General Public License version 2 as 4 - * published by the Free Software Foundation. 5 - * 6 - * This program is distributed in the hope that it will be useful, 7 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 8 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 9 - * GNU General Public License for more details. 10 - * 11 - * You should have received a copy of the GNU General Public License 12 - * along with this program; if not, write to the Free Software 13 - * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. 14 - * 15 - * Copyright (C) 2012 ARM Limited 16 - * 17 - * Author: Will Deacon <will.deacon@arm.com> 18 - */ 19 - #define pr_fmt(fmt) "CPU PMU: " fmt 20 - 21 - #include <linux/bitmap.h> 22 - #include <linux/export.h> 23 - #include <linux/kernel.h> 24 - #include <linux/of.h> 25 - #include <linux/platform_device.h> 26 - #include <linux/slab.h> 27 - #include <linux/spinlock.h> 28 - #include <linux/irq.h> 29 - #include <linux/irqdesc.h> 30 - 31 - #include <asm/cputype.h> 32 - #include <asm/irq_regs.h> 33 - #include <asm/pmu.h> 34 - 35 - /* Set at runtime when we know what CPU type we are. */ 36 - static struct arm_pmu *cpu_pmu; 37 - 38 - /* 39 - * Despite the names, these two functions are CPU-specific and are used 40 - * by the OProfile/perf code. 41 - */ 42 - const char *perf_pmu_name(void) 43 - { 44 - if (!cpu_pmu) 45 - return NULL; 46 - 47 - return cpu_pmu->name; 48 - } 49 - EXPORT_SYMBOL_GPL(perf_pmu_name); 50 - 51 - int perf_num_counters(void) 52 - { 53 - int max_events = 0; 54 - 55 - if (cpu_pmu != NULL) 56 - max_events = cpu_pmu->num_events; 57 - 58 - return max_events; 59 - } 60 - EXPORT_SYMBOL_GPL(perf_num_counters); 61 - 62 - /* Include the PMU-specific implementations. 
*/ 63 - #include "perf_event_xscale.c" 64 - #include "perf_event_v6.c" 65 - #include "perf_event_v7.c" 66 - 67 - static void cpu_pmu_enable_percpu_irq(void *data) 68 - { 69 - int irq = *(int *)data; 70 - 71 - enable_percpu_irq(irq, IRQ_TYPE_NONE); 72 - } 73 - 74 - static void cpu_pmu_disable_percpu_irq(void *data) 75 - { 76 - int irq = *(int *)data; 77 - 78 - disable_percpu_irq(irq); 79 - } 80 - 81 - static void cpu_pmu_free_irq(struct arm_pmu *cpu_pmu) 82 - { 83 - int i, irq, irqs; 84 - struct platform_device *pmu_device = cpu_pmu->plat_device; 85 - struct pmu_hw_events __percpu *hw_events = cpu_pmu->hw_events; 86 - 87 - irqs = min(pmu_device->num_resources, num_possible_cpus()); 88 - 89 - irq = platform_get_irq(pmu_device, 0); 90 - if (irq >= 0 && irq_is_percpu(irq)) { 91 - on_each_cpu(cpu_pmu_disable_percpu_irq, &irq, 1); 92 - free_percpu_irq(irq, &hw_events->percpu_pmu); 93 - } else { 94 - for (i = 0; i < irqs; ++i) { 95 - int cpu = i; 96 - 97 - if (cpu_pmu->irq_affinity) 98 - cpu = cpu_pmu->irq_affinity[i]; 99 - 100 - if (!cpumask_test_and_clear_cpu(cpu, &cpu_pmu->active_irqs)) 101 - continue; 102 - irq = platform_get_irq(pmu_device, i); 103 - if (irq >= 0) 104 - free_irq(irq, per_cpu_ptr(&hw_events->percpu_pmu, cpu)); 105 - } 106 - } 107 - } 108 - 109 - static int cpu_pmu_request_irq(struct arm_pmu *cpu_pmu, irq_handler_t handler) 110 - { 111 - int i, err, irq, irqs; 112 - struct platform_device *pmu_device = cpu_pmu->plat_device; 113 - struct pmu_hw_events __percpu *hw_events = cpu_pmu->hw_events; 114 - 115 - if (!pmu_device) 116 - return -ENODEV; 117 - 118 - irqs = min(pmu_device->num_resources, num_possible_cpus()); 119 - if (irqs < 1) { 120 - pr_warn_once("perf/ARM: No irqs for PMU defined, sampling events not supported\n"); 121 - return 0; 122 - } 123 - 124 - irq = platform_get_irq(pmu_device, 0); 125 - if (irq >= 0 && irq_is_percpu(irq)) { 126 - err = request_percpu_irq(irq, handler, "arm-pmu", 127 - &hw_events->percpu_pmu); 128 - if (err) { 129 - 
pr_err("unable to request IRQ%d for ARM PMU counters\n", 130 - irq); 131 - return err; 132 - } 133 - on_each_cpu(cpu_pmu_enable_percpu_irq, &irq, 1); 134 - } else { 135 - for (i = 0; i < irqs; ++i) { 136 - int cpu = i; 137 - 138 - err = 0; 139 - irq = platform_get_irq(pmu_device, i); 140 - if (irq < 0) 141 - continue; 142 - 143 - if (cpu_pmu->irq_affinity) 144 - cpu = cpu_pmu->irq_affinity[i]; 145 - 146 - /* 147 - * If we have a single PMU interrupt that we can't shift, 148 - * assume that we're running on a uniprocessor machine and 149 - * continue. Otherwise, continue without this interrupt. 150 - */ 151 - if (irq_set_affinity(irq, cpumask_of(cpu)) && irqs > 1) { 152 - pr_warn("unable to set irq affinity (irq=%d, cpu=%u)\n", 153 - irq, cpu); 154 - continue; 155 - } 156 - 157 - err = request_irq(irq, handler, 158 - IRQF_NOBALANCING | IRQF_NO_THREAD, "arm-pmu", 159 - per_cpu_ptr(&hw_events->percpu_pmu, cpu)); 160 - if (err) { 161 - pr_err("unable to request IRQ%d for ARM PMU counters\n", 162 - irq); 163 - return err; 164 - } 165 - 166 - cpumask_set_cpu(cpu, &cpu_pmu->active_irqs); 167 - } 168 - } 169 - 170 - return 0; 171 - } 172 - 173 - /* 174 - * PMU hardware loses all context when a CPU goes offline. 175 - * When a CPU is hotplugged back in, since some hardware registers are 176 - * UNKNOWN at reset, the PMU must be explicitly reset to avoid reading 177 - * junk values out of them. 
178 - */ 179 - static int cpu_pmu_notify(struct notifier_block *b, unsigned long action, 180 - void *hcpu) 181 - { 182 - struct arm_pmu *pmu = container_of(b, struct arm_pmu, hotplug_nb); 183 - 184 - if ((action & ~CPU_TASKS_FROZEN) != CPU_STARTING) 185 - return NOTIFY_DONE; 186 - 187 - if (pmu->reset) 188 - pmu->reset(pmu); 189 - else 190 - return NOTIFY_DONE; 191 - 192 - return NOTIFY_OK; 193 - } 194 - 195 - static int cpu_pmu_init(struct arm_pmu *cpu_pmu) 196 - { 197 - int err; 198 - int cpu; 199 - struct pmu_hw_events __percpu *cpu_hw_events; 200 - 201 - cpu_hw_events = alloc_percpu(struct pmu_hw_events); 202 - if (!cpu_hw_events) 203 - return -ENOMEM; 204 - 205 - cpu_pmu->hotplug_nb.notifier_call = cpu_pmu_notify; 206 - err = register_cpu_notifier(&cpu_pmu->hotplug_nb); 207 - if (err) 208 - goto out_hw_events; 209 - 210 - for_each_possible_cpu(cpu) { 211 - struct pmu_hw_events *events = per_cpu_ptr(cpu_hw_events, cpu); 212 - raw_spin_lock_init(&events->pmu_lock); 213 - events->percpu_pmu = cpu_pmu; 214 - } 215 - 216 - cpu_pmu->hw_events = cpu_hw_events; 217 - cpu_pmu->request_irq = cpu_pmu_request_irq; 218 - cpu_pmu->free_irq = cpu_pmu_free_irq; 219 - 220 - /* Ensure the PMU has sane values out of reset. */ 221 - if (cpu_pmu->reset) 222 - on_each_cpu(cpu_pmu->reset, cpu_pmu, 1); 223 - 224 - /* If no interrupts available, set the corresponding capability flag */ 225 - if (!platform_get_irq(cpu_pmu->plat_device, 0)) 226 - cpu_pmu->pmu.capabilities |= PERF_PMU_CAP_NO_INTERRUPT; 227 - 228 - return 0; 229 - 230 - out_hw_events: 231 - free_percpu(cpu_hw_events); 232 - return err; 233 - } 234 - 235 - static void cpu_pmu_destroy(struct arm_pmu *cpu_pmu) 236 - { 237 - unregister_cpu_notifier(&cpu_pmu->hotplug_nb); 238 - free_percpu(cpu_pmu->hw_events); 239 - } 240 - 241 - /* 242 - * PMU platform driver and devicetree bindings. 
243 - */ 244 - static const struct of_device_id cpu_pmu_of_device_ids[] = { 245 - {.compatible = "arm,cortex-a17-pmu", .data = armv7_a17_pmu_init}, 246 - {.compatible = "arm,cortex-a15-pmu", .data = armv7_a15_pmu_init}, 247 - {.compatible = "arm,cortex-a12-pmu", .data = armv7_a12_pmu_init}, 248 - {.compatible = "arm,cortex-a9-pmu", .data = armv7_a9_pmu_init}, 249 - {.compatible = "arm,cortex-a8-pmu", .data = armv7_a8_pmu_init}, 250 - {.compatible = "arm,cortex-a7-pmu", .data = armv7_a7_pmu_init}, 251 - {.compatible = "arm,cortex-a5-pmu", .data = armv7_a5_pmu_init}, 252 - {.compatible = "arm,arm11mpcore-pmu", .data = armv6mpcore_pmu_init}, 253 - {.compatible = "arm,arm1176-pmu", .data = armv6_1176_pmu_init}, 254 - {.compatible = "arm,arm1136-pmu", .data = armv6_1136_pmu_init}, 255 - {.compatible = "qcom,krait-pmu", .data = krait_pmu_init}, 256 - {.compatible = "qcom,scorpion-pmu", .data = scorpion_pmu_init}, 257 - {.compatible = "qcom,scorpion-mp-pmu", .data = scorpion_mp_pmu_init}, 258 - {}, 259 - }; 260 - 261 - static struct platform_device_id cpu_pmu_plat_device_ids[] = { 262 - {.name = "arm-pmu"}, 263 - {.name = "armv6-pmu"}, 264 - {.name = "armv7-pmu"}, 265 - {.name = "xscale-pmu"}, 266 - {}, 267 - }; 268 - 269 - static const struct pmu_probe_info pmu_probe_table[] = { 270 - ARM_PMU_PROBE(ARM_CPU_PART_ARM1136, armv6_1136_pmu_init), 271 - ARM_PMU_PROBE(ARM_CPU_PART_ARM1156, armv6_1156_pmu_init), 272 - ARM_PMU_PROBE(ARM_CPU_PART_ARM1176, armv6_1176_pmu_init), 273 - ARM_PMU_PROBE(ARM_CPU_PART_ARM11MPCORE, armv6mpcore_pmu_init), 274 - ARM_PMU_PROBE(ARM_CPU_PART_CORTEX_A8, armv7_a8_pmu_init), 275 - ARM_PMU_PROBE(ARM_CPU_PART_CORTEX_A9, armv7_a9_pmu_init), 276 - XSCALE_PMU_PROBE(ARM_CPU_XSCALE_ARCH_V1, xscale1pmu_init), 277 - XSCALE_PMU_PROBE(ARM_CPU_XSCALE_ARCH_V2, xscale2pmu_init), 278 - { /* sentinel value */ } 279 - }; 280 - 281 - /* 282 - * CPU PMU identification and probing. 
283 - */ 284 - static int probe_current_pmu(struct arm_pmu *pmu) 285 - { 286 - int cpu = get_cpu(); 287 - unsigned int cpuid = read_cpuid_id(); 288 - int ret = -ENODEV; 289 - const struct pmu_probe_info *info; 290 - 291 - pr_info("probing PMU on CPU %d\n", cpu); 292 - 293 - for (info = pmu_probe_table; info->init != NULL; info++) { 294 - if ((cpuid & info->mask) != info->cpuid) 295 - continue; 296 - ret = info->init(pmu); 297 - break; 298 - } 299 - 300 - put_cpu(); 301 - return ret; 302 - } 303 - 304 - static int of_pmu_irq_cfg(struct platform_device *pdev) 305 - { 306 - int i, irq; 307 - int *irqs; 308 - 309 - /* Don't bother with PPIs; they're already affine */ 310 - irq = platform_get_irq(pdev, 0); 311 - if (irq >= 0 && irq_is_percpu(irq)) 312 - return 0; 313 - 314 - irqs = kcalloc(pdev->num_resources, sizeof(*irqs), GFP_KERNEL); 315 - if (!irqs) 316 - return -ENOMEM; 317 - 318 - for (i = 0; i < pdev->num_resources; ++i) { 319 - struct device_node *dn; 320 - int cpu; 321 - 322 - dn = of_parse_phandle(pdev->dev.of_node, "interrupt-affinity", 323 - i); 324 - if (!dn) { 325 - pr_warn("Failed to parse %s/interrupt-affinity[%d]\n", 326 - of_node_full_name(pdev->dev.of_node), i); 327 - break; 328 - } 329 - 330 - for_each_possible_cpu(cpu) 331 - if (arch_find_n_match_cpu_physical_id(dn, cpu, NULL)) 332 - break; 333 - 334 - of_node_put(dn); 335 - if (cpu >= nr_cpu_ids) { 336 - pr_warn("Failed to find logical CPU for %s\n", 337 - dn->name); 338 - break; 339 - } 340 - 341 - irqs[i] = cpu; 342 - } 343 - 344 - if (i == pdev->num_resources) 345 - cpu_pmu->irq_affinity = irqs; 346 - else 347 - kfree(irqs); 348 - 349 - return 0; 350 - } 351 - 352 - static int cpu_pmu_device_probe(struct platform_device *pdev) 353 - { 354 - const struct of_device_id *of_id; 355 - const int (*init_fn)(struct arm_pmu *); 356 - struct device_node *node = pdev->dev.of_node; 357 - struct arm_pmu *pmu; 358 - int ret = -ENODEV; 359 - 360 - if (cpu_pmu) { 361 - pr_info("attempt to register multiple PMU 
devices!\n"); 362 - return -ENOSPC; 363 - } 364 - 365 - pmu = kzalloc(sizeof(struct arm_pmu), GFP_KERNEL); 366 - if (!pmu) { 367 - pr_info("failed to allocate PMU device!\n"); 368 - return -ENOMEM; 369 - } 370 - 371 - cpu_pmu = pmu; 372 - cpu_pmu->plat_device = pdev; 373 - 374 - if (node && (of_id = of_match_node(cpu_pmu_of_device_ids, pdev->dev.of_node))) { 375 - init_fn = of_id->data; 376 - 377 - ret = of_pmu_irq_cfg(pdev); 378 - if (!ret) 379 - ret = init_fn(pmu); 380 - } else { 381 - ret = probe_current_pmu(pmu); 382 - } 383 - 384 - if (ret) { 385 - pr_info("failed to probe PMU!\n"); 386 - goto out_free; 387 - } 388 - 389 - ret = cpu_pmu_init(cpu_pmu); 390 - if (ret) 391 - goto out_free; 392 - 393 - ret = armpmu_register(cpu_pmu, -1); 394 - if (ret) 395 - goto out_destroy; 396 - 397 - return 0; 398 - 399 - out_destroy: 400 - cpu_pmu_destroy(cpu_pmu); 401 - out_free: 402 - pr_info("failed to register PMU devices!\n"); 403 - kfree(pmu); 404 - return ret; 405 - } 406 - 407 - static struct platform_driver cpu_pmu_driver = { 408 - .driver = { 409 - .name = "arm-pmu", 410 - .pm = &armpmu_dev_pm_ops, 411 - .of_match_table = cpu_pmu_of_device_ids, 412 - }, 413 - .probe = cpu_pmu_device_probe, 414 - .id_table = cpu_pmu_plat_device_ids, 415 - }; 416 - 417 - static int __init register_pmu_driver(void) 418 - { 419 - return platform_driver_register(&cpu_pmu_driver); 420 - } 421 - device_initcall(register_pmu_driver);
+37 -14
arch/arm/kernel/perf_event_v6.c
··· 31 31 */ 32 32 33 33 #if defined(CONFIG_CPU_V6) || defined(CONFIG_CPU_V6K) 34 + 35 + #include <asm/cputype.h> 36 + #include <asm/irq_regs.h> 37 + #include <asm/pmu.h> 38 + 39 + #include <linux/of.h> 40 + #include <linux/platform_device.h> 41 + 34 42 enum armv6_perf_types { 35 43 ARMV6_PERFCTR_ICACHE_MISS = 0x0, 36 44 ARMV6_PERFCTR_IBUF_STALL = 0x1, ··· 551 543 552 544 return 0; 553 545 } 554 - #else 555 - static int armv6_1136_pmu_init(struct arm_pmu *cpu_pmu) 546 + 547 + static struct of_device_id armv6_pmu_of_device_ids[] = { 548 + {.compatible = "arm,arm11mpcore-pmu", .data = armv6mpcore_pmu_init}, 549 + {.compatible = "arm,arm1176-pmu", .data = armv6_1176_pmu_init}, 550 + {.compatible = "arm,arm1136-pmu", .data = armv6_1136_pmu_init}, 551 + { /* sentinel value */ } 552 + }; 553 + 554 + static const struct pmu_probe_info armv6_pmu_probe_table[] = { 555 + ARM_PMU_PROBE(ARM_CPU_PART_ARM1136, armv6_1136_pmu_init), 556 + ARM_PMU_PROBE(ARM_CPU_PART_ARM1156, armv6_1156_pmu_init), 557 + ARM_PMU_PROBE(ARM_CPU_PART_ARM1176, armv6_1176_pmu_init), 558 + ARM_PMU_PROBE(ARM_CPU_PART_ARM11MPCORE, armv6mpcore_pmu_init), 559 + { /* sentinel value */ } 560 + }; 561 + 562 + static int armv6_pmu_device_probe(struct platform_device *pdev) 556 563 { 557 - return -ENODEV; 564 + return arm_pmu_device_probe(pdev, armv6_pmu_of_device_ids, 565 + armv6_pmu_probe_table); 558 566 } 559 567 560 - static int armv6_1156_pmu_init(struct arm_pmu *cpu_pmu) 561 - { 562 - return -ENODEV; 563 - } 568 + static struct platform_driver armv6_pmu_driver = { 569 + .driver = { 570 + .name = "armv6-pmu", 571 + .of_match_table = armv6_pmu_of_device_ids, 572 + }, 573 + .probe = armv6_pmu_device_probe, 574 + }; 564 575 565 - static int armv6_1176_pmu_init(struct arm_pmu *cpu_pmu) 576 + static int __init register_armv6_pmu_driver(void) 566 577 { 567 - return -ENODEV; 578 + return platform_driver_register(&armv6_pmu_driver); 568 579 } 569 - 570 - static int armv6mpcore_pmu_init(struct arm_pmu *cpu_pmu) 571 - 
{ 572 - return -ENODEV; 573 - } 580 + device_initcall(register_armv6_pmu_driver); 574 581 #endif /* CONFIG_CPU_V6 || CONFIG_CPU_V6K */
+63 -68
arch/arm/kernel/perf_event_v7.c
··· 19 19 #ifdef CONFIG_CPU_V7 20 20 21 21 #include <asm/cp15.h> 22 + #include <asm/cputype.h> 23 + #include <asm/irq_regs.h> 24 + #include <asm/pmu.h> 22 25 #include <asm/vfp.h> 23 26 #include "../vfp/vfpinstr.h" 27 + 28 + #include <linux/of.h> 29 + #include <linux/platform_device.h> 24 30 25 31 /* 26 32 * Common ARMv7 event types ··· 1062 1056 cpu_pmu->max_period = (1LLU << 32) - 1; 1063 1057 }; 1064 1058 1065 - static u32 armv7_read_num_pmnc_events(void) 1059 + static void armv7_read_num_pmnc_events(void *info) 1066 1060 { 1067 - u32 nb_cnt; 1061 + int *nb_cnt = info; 1068 1062 1069 1063 /* Read the nb of CNTx counters supported from PMNC */ 1070 - nb_cnt = (armv7_pmnc_read() >> ARMV7_PMNC_N_SHIFT) & ARMV7_PMNC_N_MASK; 1064 + *nb_cnt = (armv7_pmnc_read() >> ARMV7_PMNC_N_SHIFT) & ARMV7_PMNC_N_MASK; 1071 1065 1072 - /* Add the CPU cycles counter and return */ 1073 - return nb_cnt + 1; 1066 + /* Add the CPU cycles counter */ 1067 + *nb_cnt += 1; 1068 + } 1069 + 1070 + static int armv7_probe_num_events(struct arm_pmu *arm_pmu) 1071 + { 1072 + return smp_call_function_any(&arm_pmu->supported_cpus, 1073 + armv7_read_num_pmnc_events, 1074 + &arm_pmu->num_events, 1); 1074 1075 } 1075 1076 1076 1077 static int armv7_a8_pmu_init(struct arm_pmu *cpu_pmu) ··· 1085 1072 armv7pmu_init(cpu_pmu); 1086 1073 cpu_pmu->name = "armv7_cortex_a8"; 1087 1074 cpu_pmu->map_event = armv7_a8_map_event; 1088 - cpu_pmu->num_events = armv7_read_num_pmnc_events(); 1089 - return 0; 1075 + return armv7_probe_num_events(cpu_pmu); 1090 1076 } 1091 1077 1092 1078 static int armv7_a9_pmu_init(struct arm_pmu *cpu_pmu) ··· 1093 1081 armv7pmu_init(cpu_pmu); 1094 1082 cpu_pmu->name = "armv7_cortex_a9"; 1095 1083 cpu_pmu->map_event = armv7_a9_map_event; 1096 - cpu_pmu->num_events = armv7_read_num_pmnc_events(); 1097 - return 0; 1084 + return armv7_probe_num_events(cpu_pmu); 1098 1085 } 1099 1086 1100 1087 static int armv7_a5_pmu_init(struct arm_pmu *cpu_pmu) ··· 1101 1090 armv7pmu_init(cpu_pmu); 1102 
1091 cpu_pmu->name = "armv7_cortex_a5"; 1103 1092 cpu_pmu->map_event = armv7_a5_map_event; 1104 - cpu_pmu->num_events = armv7_read_num_pmnc_events(); 1105 - return 0; 1093 + return armv7_probe_num_events(cpu_pmu); 1106 1094 } 1107 1095 1108 1096 static int armv7_a15_pmu_init(struct arm_pmu *cpu_pmu) ··· 1109 1099 armv7pmu_init(cpu_pmu); 1110 1100 cpu_pmu->name = "armv7_cortex_a15"; 1111 1101 cpu_pmu->map_event = armv7_a15_map_event; 1112 - cpu_pmu->num_events = armv7_read_num_pmnc_events(); 1113 1102 cpu_pmu->set_event_filter = armv7pmu_set_event_filter; 1114 - return 0; 1103 + return armv7_probe_num_events(cpu_pmu); 1115 1104 } 1116 1105 1117 1106 static int armv7_a7_pmu_init(struct arm_pmu *cpu_pmu) ··· 1118 1109 armv7pmu_init(cpu_pmu); 1119 1110 cpu_pmu->name = "armv7_cortex_a7"; 1120 1111 cpu_pmu->map_event = armv7_a7_map_event; 1121 - cpu_pmu->num_events = armv7_read_num_pmnc_events(); 1122 1112 cpu_pmu->set_event_filter = armv7pmu_set_event_filter; 1123 - return 0; 1113 + return armv7_probe_num_events(cpu_pmu); 1124 1114 } 1125 1115 1126 1116 static int armv7_a12_pmu_init(struct arm_pmu *cpu_pmu) ··· 1127 1119 armv7pmu_init(cpu_pmu); 1128 1120 cpu_pmu->name = "armv7_cortex_a12"; 1129 1121 cpu_pmu->map_event = armv7_a12_map_event; 1130 - cpu_pmu->num_events = armv7_read_num_pmnc_events(); 1131 1122 cpu_pmu->set_event_filter = armv7pmu_set_event_filter; 1132 - return 0; 1123 + return armv7_probe_num_events(cpu_pmu); 1133 1124 } 1134 1125 1135 1126 static int armv7_a17_pmu_init(struct arm_pmu *cpu_pmu) 1136 1127 { 1137 - armv7_a12_pmu_init(cpu_pmu); 1128 + int ret = armv7_a12_pmu_init(cpu_pmu); 1138 1129 cpu_pmu->name = "armv7_cortex_a17"; 1139 - return 0; 1130 + return ret; 1140 1131 } 1141 1132 1142 1133 /* ··· 1515 1508 cpu_pmu->map_event = krait_map_event_no_branch; 1516 1509 else 1517 1510 cpu_pmu->map_event = krait_map_event; 1518 - cpu_pmu->num_events = armv7_read_num_pmnc_events(); 1519 1511 cpu_pmu->set_event_filter = armv7pmu_set_event_filter; 1520 
1512 cpu_pmu->reset = krait_pmu_reset; 1521 1513 cpu_pmu->enable = krait_pmu_enable_event; 1522 1514 cpu_pmu->disable = krait_pmu_disable_event; 1523 1515 cpu_pmu->get_event_idx = krait_pmu_get_event_idx; 1524 1516 cpu_pmu->clear_event_idx = krait_pmu_clear_event_idx; 1525 - return 0; 1517 + return armv7_probe_num_events(cpu_pmu); 1526 1518 } 1527 1519 1528 1520 /* ··· 1839 1833 armv7pmu_init(cpu_pmu); 1840 1834 cpu_pmu->name = "armv7_scorpion"; 1841 1835 cpu_pmu->map_event = scorpion_map_event; 1842 - cpu_pmu->num_events = armv7_read_num_pmnc_events(); 1843 1836 cpu_pmu->reset = scorpion_pmu_reset; 1844 1837 cpu_pmu->enable = scorpion_pmu_enable_event; 1845 1838 cpu_pmu->disable = scorpion_pmu_disable_event; 1846 1839 cpu_pmu->get_event_idx = scorpion_pmu_get_event_idx; 1847 1840 cpu_pmu->clear_event_idx = scorpion_pmu_clear_event_idx; 1848 - return 0; 1841 + return armv7_probe_num_events(cpu_pmu); 1849 1842 } 1850 1843 1851 1844 static int scorpion_mp_pmu_init(struct arm_pmu *cpu_pmu) ··· 1852 1847 armv7pmu_init(cpu_pmu); 1853 1848 cpu_pmu->name = "armv7_scorpion_mp"; 1854 1849 cpu_pmu->map_event = scorpion_map_event; 1855 - cpu_pmu->num_events = armv7_read_num_pmnc_events(); 1856 1850 cpu_pmu->reset = scorpion_pmu_reset; 1857 1851 cpu_pmu->enable = scorpion_pmu_enable_event; 1858 1852 cpu_pmu->disable = scorpion_pmu_disable_event; 1859 1853 cpu_pmu->get_event_idx = scorpion_pmu_get_event_idx; 1860 1854 cpu_pmu->clear_event_idx = scorpion_pmu_clear_event_idx; 1861 - return 0; 1862 - } 1863 - #else 1864 - static inline int armv7_a8_pmu_init(struct arm_pmu *cpu_pmu) 1865 - { 1866 - return -ENODEV; 1855 + return armv7_probe_num_events(cpu_pmu); 1867 1856 } 1868 1857 1869 - static inline int armv7_a9_pmu_init(struct arm_pmu *cpu_pmu) 1858 + static const struct of_device_id armv7_pmu_of_device_ids[] = { 1859 + {.compatible = "arm,cortex-a17-pmu", .data = armv7_a17_pmu_init}, 1860 + {.compatible = "arm,cortex-a15-pmu", .data = armv7_a15_pmu_init}, 1861 + {.compatible = 
"arm,cortex-a12-pmu", .data = armv7_a12_pmu_init}, 1862 + {.compatible = "arm,cortex-a9-pmu", .data = armv7_a9_pmu_init}, 1863 + {.compatible = "arm,cortex-a8-pmu", .data = armv7_a8_pmu_init}, 1864 + {.compatible = "arm,cortex-a7-pmu", .data = armv7_a7_pmu_init}, 1865 + {.compatible = "arm,cortex-a5-pmu", .data = armv7_a5_pmu_init}, 1866 + {.compatible = "qcom,krait-pmu", .data = krait_pmu_init}, 1867 + {.compatible = "qcom,scorpion-pmu", .data = scorpion_pmu_init}, 1868 + {.compatible = "qcom,scorpion-mp-pmu", .data = scorpion_mp_pmu_init}, 1869 + {}, 1870 + }; 1871 + 1872 + static const struct pmu_probe_info armv7_pmu_probe_table[] = { 1873 + ARM_PMU_PROBE(ARM_CPU_PART_CORTEX_A8, armv7_a8_pmu_init), 1874 + ARM_PMU_PROBE(ARM_CPU_PART_CORTEX_A9, armv7_a9_pmu_init), 1875 + { /* sentinel value */ } 1876 + }; 1877 + 1878 + 1879 + static int armv7_pmu_device_probe(struct platform_device *pdev) 1870 1880 { 1871 - return -ENODEV; 1881 + return arm_pmu_device_probe(pdev, armv7_pmu_of_device_ids, 1882 + armv7_pmu_probe_table); 1872 1883 } 1873 1884 1874 - static inline int armv7_a5_pmu_init(struct arm_pmu *cpu_pmu) 1875 - { 1876 - return -ENODEV; 1877 - } 1885 + static struct platform_driver armv7_pmu_driver = { 1886 + .driver = { 1887 + .name = "armv7-pmu", 1888 + .of_match_table = armv7_pmu_of_device_ids, 1889 + }, 1890 + .probe = armv7_pmu_device_probe, 1891 + }; 1878 1892 1879 - static inline int armv7_a15_pmu_init(struct arm_pmu *cpu_pmu) 1893 + static int __init register_armv7_pmu_driver(void) 1880 1894 { 1881 - return -ENODEV; 1895 + return platform_driver_register(&armv7_pmu_driver); 1882 1896 } 1883 - 1884 - static inline int armv7_a7_pmu_init(struct arm_pmu *cpu_pmu) 1885 - { 1886 - return -ENODEV; 1887 - } 1888 - 1889 - static inline int armv7_a12_pmu_init(struct arm_pmu *cpu_pmu) 1890 - { 1891 - return -ENODEV; 1892 - } 1893 - 1894 - static inline int armv7_a17_pmu_init(struct arm_pmu *cpu_pmu) 1895 - { 1896 - return -ENODEV; 1897 - } 1898 - 1899 - static 
inline int krait_pmu_init(struct arm_pmu *cpu_pmu) 1900 - { 1901 - return -ENODEV; 1902 - } 1903 - 1904 - static inline int scorpion_pmu_init(struct arm_pmu *cpu_pmu) 1905 - { 1906 - return -ENODEV; 1907 - } 1908 - 1909 - static inline int scorpion_mp_pmu_init(struct arm_pmu *cpu_pmu) 1910 - { 1911 - return -ENODEV; 1912 - } 1897 + device_initcall(register_armv7_pmu_driver); 1913 1898 #endif /* CONFIG_CPU_V7 */
+27 -5
arch/arm/kernel/perf_event_xscale.c
··· 13 13 */ 14 14 15 15 #ifdef CONFIG_CPU_XSCALE 16 + 17 + #include <asm/cputype.h> 18 + #include <asm/irq_regs.h> 19 + #include <asm/pmu.h> 20 + 21 + #include <linux/of.h> 22 + #include <linux/platform_device.h> 23 + 16 24 enum xscale_perf_types { 17 25 XSCALE_PERFCTR_ICACHE_MISS = 0x00, 18 26 XSCALE_PERFCTR_ICACHE_NO_DELIVER = 0x01, ··· 748 740 749 741 return 0; 750 742 } 751 - #else 752 - static inline int xscale1pmu_init(struct arm_pmu *cpu_pmu) 743 + 744 + static const struct pmu_probe_info xscale_pmu_probe_table[] = { 745 + XSCALE_PMU_PROBE(ARM_CPU_XSCALE_ARCH_V1, xscale1pmu_init), 746 + XSCALE_PMU_PROBE(ARM_CPU_XSCALE_ARCH_V2, xscale2pmu_init), 747 + { /* sentinel value */ } 748 + }; 749 + 750 + static int xscale_pmu_device_probe(struct platform_device *pdev) 753 751 { 754 - return -ENODEV; 752 + return arm_pmu_device_probe(pdev, NULL, xscale_pmu_probe_table); 755 753 } 756 754 757 - static inline int xscale2pmu_init(struct arm_pmu *cpu_pmu) 755 + static struct platform_driver xscale_pmu_driver = { 756 + .driver = { 757 + .name = "xscale-pmu", 758 + }, 759 + .probe = xscale_pmu_device_probe, 760 + }; 761 + 762 + static int __init register_xscale_pmu_driver(void) 758 763 { 759 - return -ENODEV; 764 + return platform_driver_register(&xscale_pmu_driver); 760 765 } 766 + device_initcall(register_xscale_pmu_driver); 761 767 #endif /* CONFIG_CPU_XSCALE */
+25 -5
arch/arm/kernel/setup.c
··· 75 75 76 76 extern void init_default_cache_policy(unsigned long); 77 77 extern void paging_init(const struct machine_desc *desc); 78 - extern void early_paging_init(const struct machine_desc *, 79 - struct proc_info_list *); 78 + extern void early_paging_init(const struct machine_desc *); 80 79 extern void sanity_check_meminfo(void); 81 80 extern enum reboot_mode reboot_mode; 82 81 extern void setup_dma_zone(const struct machine_desc *desc); ··· 91 92 92 93 unsigned int system_rev; 93 94 EXPORT_SYMBOL(system_rev); 95 + 96 + const char *system_serial; 97 + EXPORT_SYMBOL(system_serial); 94 98 95 99 unsigned int system_serial_low; 96 100 EXPORT_SYMBOL(system_serial_low); ··· 841 839 842 840 static int __init init_machine_late(void) 843 841 { 842 + struct device_node *root; 843 + int ret; 844 + 844 845 if (machine_desc->init_late) 845 846 machine_desc->init_late(); 847 + 848 + root = of_find_node_by_path("/"); 849 + if (root) { 850 + ret = of_property_read_string(root, "serial-number", 851 + &system_serial); 852 + if (ret) 853 + system_serial = NULL; 854 + } 855 + 856 + if (!system_serial) 857 + system_serial = kasprintf(GFP_KERNEL, "%08x%08x", 858 + system_serial_high, 859 + system_serial_low); 860 + 846 861 return 0; 847 862 } 848 863 late_initcall(init_machine_late); ··· 955 936 956 937 parse_early_param(); 957 938 958 - early_paging_init(mdesc, lookup_processor_type(read_cpuid_id())); 939 + #ifdef CONFIG_MMU 940 + early_paging_init(mdesc); 941 + #endif 959 942 setup_dma_zone(mdesc); 960 943 sanity_check_meminfo(); 961 944 arm_memblock_init(mdesc); ··· 1130 1109 1131 1110 seq_printf(m, "Hardware\t: %s\n", machine_name); 1132 1111 seq_printf(m, "Revision\t: %04x\n", system_rev); 1133 - seq_printf(m, "Serial\t\t: %08x%08x\n", 1134 - system_serial_high, system_serial_low); 1112 + seq_printf(m, "Serial\t\t: %s\n", system_serial); 1135 1113 1136 1114 return 0; 1137 1115 }
+2 -2
arch/arm/kernel/sleep.S
··· 81 81 mov r1, r4 @ size of save block 82 82 add r0, sp, #8 @ pointer to save block 83 83 bl __cpu_suspend_save 84 - adr lr, BSYM(cpu_suspend_abort) 84 + badr lr, cpu_suspend_abort 85 85 ldmfd sp!, {r0, pc} @ call suspend fn 86 86 ENDPROC(__cpu_suspend) 87 87 .ltorg ··· 122 122 #ifdef CONFIG_MMU 123 123 .arm 124 124 ENTRY(cpu_resume_arm) 125 - THUMB( adr r9, BSYM(1f) ) @ Kernel is entered in ARM. 125 + THUMB( badr r9, 1f ) @ Kernel is entered in ARM. 126 126 THUMB( bx r9 ) @ If this is a Thumb-2 kernel, 127 127 THUMB( .thumb ) @ switch to Thumb now. 128 128 THUMB(1: )
+6 -4
arch/arm/kernel/smp.c
··· 86 86 87 87 static unsigned long get_arch_pgd(pgd_t *pgd) 88 88 { 89 - phys_addr_t pgdir = virt_to_idmap(pgd); 90 - BUG_ON(pgdir & ARCH_PGD_MASK); 91 - return pgdir >> ARCH_PGD_SHIFT; 89 + #ifdef CONFIG_ARM_LPAE 90 + return __phys_to_pfn(virt_to_phys(pgd)); 91 + #else 92 + return virt_to_phys(pgd); 93 + #endif 92 94 } 93 95 94 96 int __cpu_up(unsigned int cpu, struct task_struct *idle) ··· 110 108 #endif 111 109 112 110 #ifdef CONFIG_MMU 113 - secondary_data.pgdir = get_arch_pgd(idmap_pgd); 111 + secondary_data.pgdir = virt_to_phys(idmap_pgd); 114 112 secondary_data.swapper_pg_dir = get_arch_pgd(swapper_pg_dir); 115 113 #endif 116 114 sync_cache_w(&secondary_data);
+101 -3
arch/arm/kernel/tcm.c
··· 17 17 #include <asm/mach/map.h> 18 18 #include <asm/memory.h> 19 19 #include <asm/system_info.h> 20 + #include <asm/traps.h> 21 + 22 + #define TCMTR_FORMAT_MASK 0xe0000000U 20 23 21 24 static struct gen_pool *tcm_pool; 22 25 static bool dtcm_present; ··· 179 176 } 180 177 181 178 /* 179 + * When we are running in the non-secure world and the secure world 180 + * has not explicitly given us access to the TCM we will get an 181 + * undefined error when reading the TCM region register in the 182 + * setup_tcm_bank function (above). 183 + * 184 + * There are two variants of this register read that we need to trap, 185 + * the read for the data TCM and the read for the instruction TCM: 186 + * c0370628: ee196f11 mrc 15, 0, r6, cr9, cr1, {0} 187 + * c0370674: ee196f31 mrc 15, 0, r6, cr9, cr1, {1} 188 + * 189 + * Our undef hook mask explicitly matches all fields of the encoded 190 + * instruction other than the destination register. The mask also 191 + * only allows operand 2 to have the values 0 or 1. 192 + * 193 + * The undefined hook is defined as __init and __initdata, and therefore 194 + * must be removed before tcm_init returns. 195 + * 196 + * In this particular case (MRC with ARM condition code ALways) the 197 + * Thumb-2 and ARM instruction encoding are identical, so this hook 198 + * will work on a Thumb-2 kernel. 199 + * 200 + * See A8.8.107, DDI0406C_C ARM Architecture Reference Manual, Encoding 201 + * T1/A1 for the bit-by-bit details. 202 + * 203 + * mrc p15, 0, XX, c9, c1, 0 204 + * mrc p15, 0, XX, c9, c1, 1 205 + * | | | | | | | +---- opc2 0|1 = 000|001 206 + * | | | | | | +------- CRm 0 = 0001 207 + * | | | | | +----------- CRn 0 = 1001 208 + * | | | | +--------------- Rt ? = ???? 
209 + * | | | +------------------- opc1 0 = 000 210 + * | | +----------------------- coproc 15 = 1111 211 + * | +-------------------------- condition ALways = 1110 212 + * +----------------------------- instruction MRC = 1110 213 + * 214 + * Encoding this as per A8.8.107 of DDI0406C, Encoding T1/A1, yields: 215 + * 1111 1111 1111 1111 0000 1111 1101 1111 Required Mask 216 + * 1110 1110 0001 1001 ???? 1111 0001 0001 mrc p15, 0, XX, c9, c1, 0 217 + * 1110 1110 0001 1001 ???? 1111 0011 0001 mrc p15, 0, XX, c9, c1, 1 218 + * [ ] [ ] [ ]| [ ] [ ] [ ] [ ]| +--- CRm 219 + * | | | | | | | | +----- SBO 220 + * | | | | | | | +------- opc2 221 + * | | | | | | +----------- coproc 222 + * | | | | | +---------------- Rt 223 + * | | | | +--------------------- CRn 224 + * | | | +------------------------- SBO 225 + * | | +--------------------------- opc1 226 + * | +------------------------------- instruction 227 + * +------------------------------------ condition 228 + */ 229 + #define TCM_REGION_READ_MASK 0xffff0fdf 230 + #define TCM_REGION_READ_INSTR 0xee190f11 231 + #define DEST_REG_SHIFT 12 232 + #define DEST_REG_MASK 0xf 233 + 234 + static int __init tcm_handler(struct pt_regs *regs, unsigned int instr) 235 + { 236 + regs->uregs[(instr >> DEST_REG_SHIFT) & DEST_REG_MASK] = 0; 237 + regs->ARM_pc += 4; 238 + return 0; 239 + } 240 + 241 + static struct undef_hook tcm_hook __initdata = { 242 + .instr_mask = TCM_REGION_READ_MASK, 243 + .instr_val = TCM_REGION_READ_INSTR, 244 + .cpsr_mask = MODE_MASK, 245 + .cpsr_val = SVC_MODE, 246 + .fn = tcm_handler 247 + }; 248 + 249 + /* 182 250 * This initializes the TCM memory 183 251 */ 184 252 void __init tcm_init(void) ··· 278 204 } 279 205 280 206 tcm_status = read_cpuid_tcmstatus(); 207 + 208 + /* 209 + * This code only supports v6-compatible TCMTR implementations. 
210 + */ 211 + if (tcm_status & TCMTR_FORMAT_MASK) 212 + return; 213 + 281 214 dtcm_banks = (tcm_status >> 16) & 0x03; 282 215 itcm_banks = (tcm_status & 0x03); 216 + 217 + register_undef_hook(&tcm_hook); 283 218 284 219 /* Values greater than 2 for D/ITCM banks are "reserved" */ 285 220 if (dtcm_banks > 2) ··· 301 218 for (i = 0; i < dtcm_banks; i++) { 302 219 ret = setup_tcm_bank(0, i, dtcm_banks, &dtcm_end); 303 220 if (ret) 304 - return; 221 + goto unregister; 305 222 } 306 223 /* This means you compiled more code than fits into DTCM */ 307 224 if (dtcm_code_sz > (dtcm_end - DTCM_OFFSET)) { ··· 310 227 dtcm_code_sz, (dtcm_end - DTCM_OFFSET)); 311 228 goto no_dtcm; 312 229 } 230 + /* 231 + * This means that the DTCM sizes were 0 or the DTCM banks 232 + * were inaccessible due to TrustZone configuration. 233 + */ 234 + if (!(dtcm_end - DTCM_OFFSET)) 235 + goto no_dtcm; 313 236 dtcm_res.end = dtcm_end - 1; 314 237 request_resource(&iomem_resource, &dtcm_res); 315 238 dtcm_iomap[0].length = dtcm_end - DTCM_OFFSET; ··· 339 250 for (i = 0; i < itcm_banks; i++) { 340 251 ret = setup_tcm_bank(1, i, itcm_banks, &itcm_end); 341 252 if (ret) 342 - return; 253 + goto unregister; 343 254 } 344 255 /* This means you compiled more code than fits into ITCM */ 345 256 if (itcm_code_sz > (itcm_end - ITCM_OFFSET)) { 346 257 pr_info("CPU ITCM: %u bytes of code compiled to " 347 258 "ITCM but only %lu bytes of ITCM present\n", 348 259 itcm_code_sz, (itcm_end - ITCM_OFFSET)); 349 - return; 260 + goto unregister; 350 261 } 262 + /* 263 + * This means that the ITCM sizes were 0 or the ITCM banks 264 + * were inaccessible due to TrustZone configuration. 
265 + */ 266 + if (!(itcm_end - ITCM_OFFSET)) 267 + goto unregister; 351 268 itcm_res.end = itcm_end - 1; 352 269 request_resource(&iomem_resource, &itcm_res); 353 270 itcm_iomap[0].length = itcm_end - ITCM_OFFSET; ··· 370 275 pr_info("CPU ITCM: %u bytes of code compiled to ITCM but no " 371 276 "ITCM banks present in CPU\n", itcm_code_sz); 372 277 } 278 + 279 + unregister: 280 + unregister_undef_hook(&tcm_hook); 373 281 } 374 282 375 283 /*
-8
arch/arm/kernel/traps.c
··· 749 749 750 750 #endif 751 751 752 - void __bad_xchg(volatile void *ptr, int size) 753 - { 754 - pr_err("xchg: bad data size: pc 0x%p, ptr 0x%p, size %d\n", 755 - __builtin_return_address(0), ptr, size); 756 - BUG(); 757 - } 758 - EXPORT_SYMBOL(__bad_xchg); 759 - 760 752 /* 761 753 * A data abort trap was taken, but we did not handle the instruction. 762 754 * Try to abort the user program, or panic if it was the kernel.
+1 -1
arch/arm/kvm/interrupts.S
··· 307 307 THUMB( orr r2, r2, #PSR_T_BIT ) 308 308 msr spsr_cxsf, r2 309 309 mrs r1, ELR_hyp 310 - ldr r2, =BSYM(panic) 310 + ldr r2, =panic 311 311 msr ELR_hyp, r2 312 312 ldr r0, =\panic_str 313 313 clrex @ Clear exclusive monitor
+1 -1
arch/arm/lib/call_with_stack.S
··· 35 35 mov r2, r0 36 36 mov r0, r1 37 37 38 - adr lr, BSYM(1f) 38 + badr lr, 1f 39 39 ret r2 40 40 41 41 1: ldr lr, [sp]
+1 -7
arch/arm/mach-exynos/suspend.c
··· 311 311 312 312 if (IS_ENABLED(CONFIG_EXYNOS5420_MCPM)) { 313 313 mcpm_set_entry_vector(cpu, cluster, exynos_cpu_resume); 314 - 315 - /* 316 - * Residency value passed to mcpm_cpu_suspend back-end 317 - * has to be given clear semantics. Set to 0 as a 318 - * temporary value. 319 - */ 320 - mcpm_cpu_suspend(0); 314 + mcpm_cpu_suspend(); 321 315 } 322 316 323 317 pr_info("Failed to suspend the system\n");
+46 -85
arch/arm/mach-hisi/platmcpm.c
··· 6 6 * under the terms and conditions of the GNU General Public License, 7 7 * version 2, as published by the Free Software Foundation. 8 8 */ 9 + #include <linux/init.h> 10 + #include <linux/smp.h> 9 11 #include <linux/delay.h> 10 12 #include <linux/io.h> 11 13 #include <linux/memblock.h> ··· 15 13 16 14 #include <asm/cputype.h> 17 15 #include <asm/cp15.h> 18 - #include <asm/mcpm.h> 16 + #include <asm/cacheflush.h> 17 + #include <asm/smp.h> 18 + #include <asm/smp_plat.h> 19 19 20 20 #include "core.h" 21 21 ··· 98 94 } while (data != readl_relaxed(fabric + FAB_SF_MODE)); 99 95 } 100 96 101 - static int hip04_mcpm_power_up(unsigned int cpu, unsigned int cluster) 97 + static int hip04_boot_secondary(unsigned int l_cpu, struct task_struct *idle) 102 98 { 99 + unsigned int mpidr, cpu, cluster; 103 100 unsigned long data; 104 101 void __iomem *sys_dreq, *sys_status; 102 + 103 + mpidr = cpu_logical_map(l_cpu); 104 + cpu = MPIDR_AFFINITY_LEVEL(mpidr, 0); 105 + cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1); 105 106 106 107 if (!sysctrl) 107 108 return -ENODEV; ··· 127 118 cpu_relax(); 128 119 data = readl_relaxed(sys_status); 129 120 } while (data & CLUSTER_DEBUG_RESET_STATUS); 121 + hip04_set_snoop_filter(cluster, 1); 130 122 } 131 123 132 124 data = CORE_RESET_BIT(cpu) | NEON_RESET_BIT(cpu) | \ ··· 136 126 do { 137 127 cpu_relax(); 138 128 } while (data == readl_relaxed(sys_status)); 129 + 139 130 /* 140 131 * We may fail to power up core again without this delay. 141 132 * It's not mentioned in document. It's found by test. 
142 133 */ 143 134 udelay(20); 135 + 136 + arch_send_wakeup_ipi_mask(cpumask_of(l_cpu)); 137 + 144 138 out: 145 139 hip04_cpu_table[cluster][cpu]++; 146 140 spin_unlock_irq(&boot_lock); ··· 152 138 return 0; 153 139 } 154 140 155 - static void hip04_mcpm_power_down(void) 141 + #ifdef CONFIG_HOTPLUG_CPU 142 + static void hip04_cpu_die(unsigned int l_cpu) 156 143 { 157 144 unsigned int mpidr, cpu, cluster; 158 - bool skip_wfi = false, last_man = false; 145 + bool last_man; 159 146 160 - mpidr = read_cpuid_mpidr(); 147 + mpidr = cpu_logical_map(l_cpu); 161 148 cpu = MPIDR_AFFINITY_LEVEL(mpidr, 0); 162 149 cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1); 163 150 164 - __mcpm_cpu_going_down(cpu, cluster); 165 - 166 151 spin_lock(&boot_lock); 167 - BUG_ON(__mcpm_cluster_state(cluster) != CLUSTER_UP); 168 152 hip04_cpu_table[cluster][cpu]--; 169 153 if (hip04_cpu_table[cluster][cpu] == 1) { 170 154 /* A power_up request went ahead of us. */ 171 - skip_wfi = true; 155 + spin_unlock(&boot_lock); 156 + return; 172 157 } else if (hip04_cpu_table[cluster][cpu] > 1) { 173 158 pr_err("Cluster %d CPU%d boots multiple times\n", cluster, cpu); 174 159 BUG(); 175 160 } 176 161 177 162 last_man = hip04_cluster_is_down(cluster); 178 - if (last_man && __mcpm_outbound_enter_critical(cpu, cluster)) { 179 - spin_unlock(&boot_lock); 163 + spin_unlock(&boot_lock); 164 + if (last_man) { 180 165 /* Since it's Cortex A15, disable L2 prefetching. 
*/ 181 166 asm volatile( 182 167 "mcr p15, 1, %0, c15, c0, 3 \n\t" ··· 183 170 "dsb " 184 171 : : "r" (0x400) ); 185 172 v7_exit_coherency_flush(all); 186 - hip04_set_snoop_filter(cluster, 0); 187 - __mcpm_outbound_leave_critical(cluster, CLUSTER_DOWN); 188 173 } else { 189 - spin_unlock(&boot_lock); 190 174 v7_exit_coherency_flush(louis); 191 175 } 192 176 193 - __mcpm_cpu_down(cpu, cluster); 194 - 195 - if (!skip_wfi) 177 + for (;;) 196 178 wfi(); 197 179 } 198 180 199 - static int hip04_mcpm_wait_for_powerdown(unsigned int cpu, unsigned int cluster) 181 + static int hip04_cpu_kill(unsigned int l_cpu) 200 182 { 183 + unsigned int mpidr, cpu, cluster; 201 184 unsigned int data, tries, count; 202 - int ret = -ETIMEDOUT; 203 185 186 + mpidr = cpu_logical_map(l_cpu); 187 + cpu = MPIDR_AFFINITY_LEVEL(mpidr, 0); 188 + cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1); 204 189 BUG_ON(cluster >= HIP04_MAX_CLUSTERS || 205 190 cpu >= HIP04_MAX_CPUS_PER_CLUSTER); 206 191 207 192 count = TIMEOUT_MSEC / POLL_MSEC; 208 193 spin_lock_irq(&boot_lock); 209 194 for (tries = 0; tries < count; tries++) { 210 - if (hip04_cpu_table[cluster][cpu]) { 211 - ret = -EBUSY; 195 + if (hip04_cpu_table[cluster][cpu]) 212 196 goto err; 213 - } 214 197 cpu_relax(); 215 198 data = readl_relaxed(sysctrl + SC_CPU_RESET_STATUS(cluster)); 216 199 if (data & CORE_WFI_STATUS(cpu)) ··· 229 220 } 230 221 if (tries >= count) 231 222 goto err; 223 + if (hip04_cluster_is_down(cluster)) 224 + hip04_set_snoop_filter(cluster, 0); 232 225 spin_unlock_irq(&boot_lock); 233 - return 0; 226 + return 1; 234 227 err: 235 228 spin_unlock_irq(&boot_lock); 236 - return ret; 229 + return 0; 237 230 } 231 + #endif 238 232 239 - static void hip04_mcpm_powered_up(void) 240 - { 241 - unsigned int mpidr, cpu, cluster; 242 - 243 - mpidr = read_cpuid_mpidr(); 244 - cpu = MPIDR_AFFINITY_LEVEL(mpidr, 0); 245 - cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1); 246 - 247 - spin_lock(&boot_lock); 248 - if (!hip04_cpu_table[cluster][cpu]) 249 - 
hip04_cpu_table[cluster][cpu] = 1; 250 - spin_unlock(&boot_lock); 251 - } 252 - 253 - static void __naked hip04_mcpm_power_up_setup(unsigned int affinity_level) 254 - { 255 - asm volatile (" \n" 256 - " cmp r0, #0 \n" 257 - " bxeq lr \n" 258 - /* calculate fabric phys address */ 259 - " adr r2, 2f \n" 260 - " ldmia r2, {r1, r3} \n" 261 - " sub r0, r2, r1 \n" 262 - " ldr r2, [r0, r3] \n" 263 - /* get cluster id from MPIDR */ 264 - " mrc p15, 0, r0, c0, c0, 5 \n" 265 - " ubfx r1, r0, #8, #8 \n" 266 - /* 1 << cluster id */ 267 - " mov r0, #1 \n" 268 - " mov r3, r0, lsl r1 \n" 269 - " ldr r0, [r2, #"__stringify(FAB_SF_MODE)"] \n" 270 - " tst r0, r3 \n" 271 - " bxne lr \n" 272 - " orr r1, r0, r3 \n" 273 - " str r1, [r2, #"__stringify(FAB_SF_MODE)"] \n" 274 - "1: ldr r0, [r2, #"__stringify(FAB_SF_MODE)"] \n" 275 - " tst r0, r3 \n" 276 - " beq 1b \n" 277 - " bx lr \n" 278 - 279 - " .align 2 \n" 280 - "2: .word . \n" 281 - " .word fabric_phys_addr \n" 282 - ); 283 - } 284 - 285 - static const struct mcpm_platform_ops hip04_mcpm_ops = { 286 - .power_up = hip04_mcpm_power_up, 287 - .power_down = hip04_mcpm_power_down, 288 - .wait_for_powerdown = hip04_mcpm_wait_for_powerdown, 289 - .powered_up = hip04_mcpm_powered_up, 233 + static struct smp_operations __initdata hip04_smp_ops = { 234 + .smp_boot_secondary = hip04_boot_secondary, 235 + #ifdef CONFIG_HOTPLUG_CPU 236 + .cpu_die = hip04_cpu_die, 237 + .cpu_kill = hip04_cpu_kill, 238 + #endif 290 239 }; 291 240 292 241 static bool __init hip04_cpu_table_init(void) ··· 265 298 return true; 266 299 } 267 300 268 - static int __init hip04_mcpm_init(void) 301 + static int __init hip04_smp_init(void) 269 302 { 270 303 struct device_node *np, *np_sctl, *np_fab; 271 304 struct resource fab_res; ··· 320 353 ret = -EINVAL; 321 354 goto err_table; 322 355 } 323 - ret = mcpm_platform_register(&hip04_mcpm_ops); 324 - if (ret) { 325 - goto err_table; 326 - } 327 356 328 357 /* 329 358 * Fill the instruction address that is used after 
secondary core ··· 327 364 */ 328 365 writel_relaxed(hip04_boot_method[0], relocation); 329 366 writel_relaxed(0xa5a5a5a5, relocation + 4); /* magic number */ 330 - writel_relaxed(virt_to_phys(mcpm_entry_point), relocation + 8); 367 + writel_relaxed(virt_to_phys(secondary_startup), relocation + 8); 331 368 writel_relaxed(0, relocation + 12); 332 369 iounmap(relocation); 333 370 334 - mcpm_sync_init(hip04_mcpm_power_up_setup); 335 - mcpm_smp_set_ops(); 336 - pr_info("HiP04 MCPM initialized\n"); 371 + smp_set_ops(&hip04_smp_ops); 337 372 return ret; 338 373 err_table: 339 374 iounmap(fabric); ··· 344 383 err: 345 384 return ret; 346 385 } 347 - early_initcall(hip04_mcpm_init); 386 + early_initcall(hip04_smp_init);
-1
arch/arm/mach-integrator/integrator_ap.c
··· 37 37 #include <linux/stat.h> 38 38 #include <linux/termios.h> 39 39 40 - #include <asm/hardware/arm_timer.h> 41 40 #include <asm/setup.h> 42 41 #include <asm/param.h> /* HZ */ 43 42 #include <asm/mach-types.h>
+17 -24
arch/arm/mach-keystone/keystone.c
··· 27 27 28 28 #include "keystone.h" 29 29 30 - static struct notifier_block platform_nb; 31 30 static unsigned long keystone_dma_pfn_offset __read_mostly; 32 31 33 32 static int keystone_platform_notifier(struct notifier_block *nb, ··· 48 49 return NOTIFY_OK; 49 50 } 50 51 52 + static struct notifier_block platform_nb = { 53 + .notifier_call = keystone_platform_notifier, 54 + }; 55 + 51 56 static void __init keystone_init(void) 52 57 { 53 - keystone_pm_runtime_init(); 54 - if (platform_nb.notifier_call) 58 + if (PHYS_OFFSET >= KEYSTONE_HIGH_PHYS_START) { 59 + keystone_dma_pfn_offset = PFN_DOWN(KEYSTONE_HIGH_PHYS_START - 60 + KEYSTONE_LOW_PHYS_START); 55 61 bus_register_notifier(&platform_bus_type, &platform_nb); 62 + } 63 + keystone_pm_runtime_init(); 56 64 of_platform_populate(NULL, of_default_bus_match_table, NULL, NULL); 57 65 } 58 66 ··· 68 62 return (phys_addr_t)(x) - CONFIG_PAGE_OFFSET + KEYSTONE_LOW_PHYS_START; 69 63 } 70 64 71 - static void __init keystone_init_meminfo(void) 65 + static long long __init keystone_pv_fixup(void) 72 66 { 73 - bool lpae = IS_ENABLED(CONFIG_ARM_LPAE); 74 - bool pvpatch = IS_ENABLED(CONFIG_ARM_PATCH_PHYS_VIRT); 75 - phys_addr_t offset = PHYS_OFFSET - KEYSTONE_LOW_PHYS_START; 67 + long long offset; 76 68 phys_addr_t mem_start, mem_end; 77 69 78 70 mem_start = memblock_start_of_DRAM(); ··· 79 75 /* nothing to do if we are running out of the <32-bit space */ 80 76 if (mem_start >= KEYSTONE_LOW_PHYS_START && 81 77 mem_end <= KEYSTONE_LOW_PHYS_END) 82 - return; 83 - 84 - if (!lpae || !pvpatch) { 85 - pr_crit("Enable %s%s%s to run outside 32-bit space\n", 86 - !lpae ? __stringify(CONFIG_ARM_LPAE) : "", 87 - (!lpae && !pvpatch) ? " and " : "", 88 - !pvpatch ? 
__stringify(CONFIG_ARM_PATCH_PHYS_VIRT) : ""); 89 - } 78 + return 0; 90 79 91 80 if (mem_start < KEYSTONE_HIGH_PHYS_START || 92 81 mem_end > KEYSTONE_HIGH_PHYS_END) { 93 82 pr_crit("Invalid address space for memory (%08llx-%08llx)\n", 94 - (u64)mem_start, (u64)mem_end); 83 + (u64)mem_start, (u64)mem_end); 84 + return 0; 95 85 } 96 86 97 - offset += KEYSTONE_HIGH_PHYS_START; 98 - __pv_phys_pfn_offset = PFN_DOWN(offset); 99 - __pv_offset = (offset - PAGE_OFFSET); 87 + offset = KEYSTONE_HIGH_PHYS_START - KEYSTONE_LOW_PHYS_START; 100 88 101 89 /* Populate the arch idmap hook */ 102 90 arch_virt_to_idmap = keystone_virt_to_idmap; 103 - platform_nb.notifier_call = keystone_platform_notifier; 104 - keystone_dma_pfn_offset = PFN_DOWN(KEYSTONE_HIGH_PHYS_START - 105 - KEYSTONE_LOW_PHYS_START); 106 91 107 - pr_info("Switching to high address space at 0x%llx\n", (u64)offset); 92 + return offset; 108 93 } 109 94 110 95 static const char *const keystone_match[] __initconst = { ··· 108 115 .smp = smp_ops(keystone_smp_ops), 109 116 .init_machine = keystone_init, 110 117 .dt_compat = keystone_match, 111 - .init_meminfo = keystone_init_meminfo, 118 + .pv_fixup = keystone_pv_fixup, 112 119 MACHINE_END
-13
arch/arm/mach-keystone/platsmp.c
··· 39 39 return error; 40 40 } 41 41 42 - #ifdef CONFIG_ARM_LPAE 43 - static void __cpuinit keystone_smp_secondary_initmem(unsigned int cpu) 44 - { 45 - pgd_t *pgd0 = pgd_offset_k(0); 46 - cpu_set_ttbr(1, __pa(pgd0) + TTBR1_OFFSET); 47 - local_flush_tlb_all(); 48 - } 49 - #else 50 - static inline void __cpuinit keystone_smp_secondary_initmem(unsigned int cpu) 51 - {} 52 - #endif 53 - 54 42 struct smp_operations keystone_smp_ops __initdata = { 55 43 .smp_boot_secondary = keystone_smp_boot_secondary, 56 - .smp_secondary_init = keystone_smp_secondary_initmem, 57 44 };
-2
arch/arm/mach-nspire/nspire.c
··· 22 22 #include <asm/mach-types.h> 23 23 #include <asm/mach/map.h> 24 24 25 - #include <asm/hardware/timer-sp.h> 26 - 27 25 #include "mmio.h" 28 26 #include "clcd.h" 29 27
+6 -7
arch/arm/mach-realview/core.c
··· 35 35 #include <linux/mtd/physmap.h> 36 36 #include <linux/memblock.h> 37 37 38 + #include <clocksource/timer-sp804.h> 39 + 38 40 #include <mach/hardware.h> 39 41 #include <asm/irq.h> 40 42 #include <asm/mach-types.h> 41 - #include <asm/hardware/arm_timer.h> 42 43 #include <asm/hardware/icst.h> 43 44 44 45 #include <asm/mach/arch.h> 45 46 #include <asm/mach/irq.h> 46 47 #include <asm/mach/map.h> 47 48 48 - 49 49 #include <mach/platform.h> 50 50 #include <mach/irqs.h> 51 - #include <asm/hardware/timer-sp.h> 52 51 53 52 #include <plat/sched_clock.h> 54 53 ··· 380 381 /* 381 382 * Initialise to a known state (all timers off) 382 383 */ 383 - writel(0, timer0_va_base + TIMER_CTRL); 384 - writel(0, timer1_va_base + TIMER_CTRL); 385 - writel(0, timer2_va_base + TIMER_CTRL); 386 - writel(0, timer3_va_base + TIMER_CTRL); 384 + sp804_timer_disable(timer0_va_base); 385 + sp804_timer_disable(timer1_va_base); 386 + sp804_timer_disable(timer2_va_base); 387 + sp804_timer_disable(timer3_va_base); 387 388 388 389 sp804_clocksource_init(timer3_va_base, "timer3"); 389 390 sp804_clockevents_init(timer0_va_base, timer_irq, "timer0");
+1 -1
arch/arm/mach-sa1100/Makefile
··· 3 3 # 4 4 5 5 # Common support 6 - obj-y := clock.o generic.o irq.o #nmi-oopser.o 6 + obj-y := clock.o generic.o #nmi-oopser.o 7 7 8 8 # Specific board support 9 9 obj-$(CONFIG_SA1100_ASSABET) += assabet.o
+37
arch/arm/mach-sa1100/generic.c
··· 20 20 #include <linux/ioport.h> 21 21 #include <linux/platform_device.h> 22 22 #include <linux/reboot.h> 23 + #include <linux/irqchip/irq-sa11x0.h> 23 24 24 25 #include <video/sa1100fb.h> 26 + 27 + #include <soc/sa1100/pwer.h> 25 28 26 29 #include <asm/div64.h> 27 30 #include <asm/mach/map.h> ··· 378 375 pxa_timer_nodt_init(IRQ_OST0, io_p2v(0x90000000), 3686400); 379 376 } 380 377 378 + static struct resource irq_resource = 379 + DEFINE_RES_MEM_NAMED(0x90050000, SZ_64K, "irqs"); 380 + 381 + void __init sa1100_init_irq(void) 382 + { 383 + request_resource(&iomem_resource, &irq_resource); 384 + 385 + sa11x0_init_irq_nodt(IRQ_GPIO0_SC, irq_resource.start); 386 + 387 + sa1100_init_gpio(); 388 + } 389 + 381 390 /* 382 391 * Disable the memory bus request/grant signals on the SA1110 to 383 392 * ensure that we don't receive spurious memory requests. We set ··· 431 416 local_irq_restore(flags); 432 417 } 433 418 419 + int sa11x0_gpio_set_wake(unsigned int gpio, unsigned int on) 420 + { 421 + if (on) 422 + PWER |= BIT(gpio); 423 + else 424 + PWER &= ~BIT(gpio); 425 + 426 + return 0; 427 + } 428 + 429 + int sa11x0_sc_set_wake(unsigned int irq, unsigned int on) 430 + { 431 + if (BIT(irq) != IC_RTCAlrm) 432 + return -EINVAL; 433 + 434 + if (on) 435 + PWER |= PWER_RTC; 436 + else 437 + PWER &= ~PWER_RTC; 438 + 439 + return 0; 440 + }
+41 -44
arch/arm/mach-sa1100/irq.c → drivers/irqchip/irq-sa11x0.c
··· 1 1 /* 2 - * linux/arch/arm/mach-sa1100/irq.c 3 - * 2 + * Copyright (C) 2015 Dmitry Eremin-Solenikov 4 3 * Copyright (C) 1999-2001 Nicolas Pitre 5 4 * 6 - * Generic IRQ handling for the SA11x0, GPIO 11-27 IRQ demultiplexing. 5 + * Generic IRQ handling for the SA11x0. 7 6 * 8 7 * This program is free software; you can redistribute it and/or modify 9 8 * it under the terms of the GNU General Public License version 2 as ··· 14 15 #include <linux/io.h> 15 16 #include <linux/irq.h> 16 17 #include <linux/irqdomain.h> 17 - #include <linux/ioport.h> 18 18 #include <linux/syscore_ops.h> 19 + #include <linux/irqchip/irq-sa11x0.h> 19 20 20 - #include <mach/hardware.h> 21 - #include <mach/irqs.h> 22 - #include <asm/mach/irq.h> 21 + #include <soc/sa1100/pwer.h> 22 + 23 23 #include <asm/exception.h> 24 24 25 - #include "generic.h" 25 + #define ICIP 0x00 /* IC IRQ Pending reg. */ 26 + #define ICMR 0x04 /* IC Mask Reg. */ 27 + #define ICLR 0x08 /* IC Level Reg. */ 28 + #define ICCR 0x0C /* IC Control Reg. */ 29 + #define ICFP 0x10 /* IC FIQ Pending reg. */ 30 + #define ICPR 0x20 /* IC Pending Reg. */ 26 31 32 + static void __iomem *iobase; 27 33 28 34 /* 29 35 * We don't need to ACK IRQs on the SA1100 unless they're GPIOs ··· 36 32 */ 37 33 static void sa1100_mask_irq(struct irq_data *d) 38 34 { 39 - ICMR &= ~BIT(d->hwirq); 35 + u32 reg; 36 + 37 + reg = readl_relaxed(iobase + ICMR); 38 + reg &= ~BIT(d->hwirq); 39 + writel_relaxed(reg, iobase + ICMR); 40 40 } 41 41 42 42 static void sa1100_unmask_irq(struct irq_data *d) 43 43 { 44 - ICMR |= BIT(d->hwirq); 44 + u32 reg; 45 + 46 + reg = readl_relaxed(iobase + ICMR); 47 + reg |= BIT(d->hwirq); 48 + writel_relaxed(reg, iobase + ICMR); 45 49 } 46 50 47 - /* 48 - * Apart form GPIOs, only the RTC alarm can be a wakeup event. 
49 - */ 50 51 static int sa1100_set_wake(struct irq_data *d, unsigned int on) 51 52 { 52 - if (BIT(d->hwirq) == IC_RTCAlrm) { 53 - if (on) 54 - PWER |= PWER_RTC; 55 - else 56 - PWER &= ~PWER_RTC; 57 - return 0; 58 - } 59 - return -EINVAL; 53 + return sa11x0_sc_set_wake(d->hwirq, on); 60 54 } 61 55 62 56 static struct irq_chip sa1100_normal_chip = { ··· 75 73 return 0; 76 74 } 77 75 78 - static struct irq_domain_ops sa1100_normal_irqdomain_ops = { 76 + static const struct irq_domain_ops sa1100_normal_irqdomain_ops = { 79 77 .map = sa1100_normal_irqdomain_map, 80 78 .xlate = irq_domain_xlate_onetwocell, 81 79 }; 82 80 83 81 static struct irq_domain *sa1100_normal_irqdomain; 84 - 85 - static struct resource irq_resource = 86 - DEFINE_RES_MEM_NAMED(0x90050000, SZ_64K, "irqs"); 87 82 88 83 static struct sa1100irq_state { 89 84 unsigned int saved; ··· 94 95 struct sa1100irq_state *st = &sa1100irq_state; 95 96 96 97 st->saved = 1; 97 - st->icmr = ICMR; 98 - st->iclr = ICLR; 99 - st->iccr = ICCR; 98 + st->icmr = readl_relaxed(iobase + ICMR); 99 + st->iclr = readl_relaxed(iobase + ICLR); 100 + st->iccr = readl_relaxed(iobase + ICCR); 100 101 101 102 /* 102 103 * Disable all GPIO-based interrupts. 
103 104 */ 104 - ICMR &= ~(IC_GPIO11_27|IC_GPIO10|IC_GPIO9|IC_GPIO8|IC_GPIO7| 105 - IC_GPIO6|IC_GPIO5|IC_GPIO4|IC_GPIO3|IC_GPIO2| 106 - IC_GPIO1|IC_GPIO0); 105 + writel_relaxed(st->icmr & 0xfffff000, iobase + ICMR); 107 106 108 107 return 0; 109 108 } ··· 111 114 struct sa1100irq_state *st = &sa1100irq_state; 112 115 113 116 if (st->saved) { 114 - ICCR = st->iccr; 115 - ICLR = st->iclr; 117 + writel_relaxed(st->iccr, iobase + ICCR); 118 + writel_relaxed(st->iclr, iobase + ICLR); 116 119 117 - ICMR = st->icmr; 120 + writel_relaxed(st->icmr, iobase + ICMR); 118 121 } 119 122 } 120 123 ··· 137 140 uint32_t icip, icmr, mask; 138 141 139 142 do { 140 - icip = (ICIP); 141 - icmr = (ICMR); 143 + icip = readl_relaxed(iobase + ICIP); 144 + icmr = readl_relaxed(iobase + ICMR); 142 145 mask = icip & icmr; 143 146 144 147 if (mask == 0) ··· 149 152 } while (1); 150 153 } 151 154 152 - void __init sa1100_init_irq(void) 155 + void __init sa11x0_init_irq_nodt(int irq_start, resource_size_t io_start) 153 156 { 154 - request_resource(&iomem_resource, &irq_resource); 157 + iobase = ioremap(io_start, SZ_64K); 158 + if (WARN_ON(!iobase)) 159 + return; 155 160 156 161 /* disable all IRQs */ 157 - ICMR = 0; 162 + writel_relaxed(0, iobase + ICMR); 158 163 159 164 /* all IRQs are IRQ, not FIQ */ 160 - ICLR = 0; 165 + writel_relaxed(0, iobase + ICLR); 161 166 162 167 /* 163 168 * Whatever the doc says, this has to be set for the wait-on-irq 164 169 * instruction to work... on a SA1100 rev 9 at least. 165 170 */ 166 - ICCR = 1; 171 + writel_relaxed(1, iobase + ICCR); 167 172 168 173 sa1100_normal_irqdomain = irq_domain_add_simple(NULL, 169 - 32, IRQ_GPIO0_SC, 174 + 32, irq_start, 170 175 &sa1100_normal_irqdomain_ops, NULL); 171 176 172 177 set_handle_irq(sa1100_handle_irq); 173 - 174 - sa1100_init_gpio(); 175 178 }
+6 -6
arch/arm/mach-versatile/core.c
··· 41 41 #include <linux/bitops.h> 42 42 #include <linux/reboot.h> 43 43 44 + #include <clocksource/timer-sp804.h> 45 + 44 46 #include <asm/irq.h> 45 - #include <asm/hardware/arm_timer.h> 46 47 #include <asm/hardware/icst.h> 47 48 #include <asm/mach-types.h> 48 49 ··· 53 52 #include <asm/mach/map.h> 54 53 #include <mach/hardware.h> 55 54 #include <mach/platform.h> 56 - #include <asm/hardware/timer-sp.h> 57 55 58 56 #include <plat/sched_clock.h> 59 57 ··· 798 798 /* 799 799 * Initialise to a known state (all timers off) 800 800 */ 801 - writel(0, TIMER0_VA_BASE + TIMER_CTRL); 802 - writel(0, TIMER1_VA_BASE + TIMER_CTRL); 803 - writel(0, TIMER2_VA_BASE + TIMER_CTRL); 804 - writel(0, TIMER3_VA_BASE + TIMER_CTRL); 801 + sp804_timer_disable(TIMER0_VA_BASE); 802 + sp804_timer_disable(TIMER1_VA_BASE); 803 + sp804_timer_disable(TIMER2_VA_BASE); 804 + sp804_timer_disable(TIMER3_VA_BASE); 805 805 806 806 sp804_clocksource_init(TIMER3_VA_BASE, "timer3"); 807 807 sp804_clockevents_init(TIMER0_VA_BASE, IRQ_TIMERINT0_1, "timer0");
+22 -2
arch/arm/mm/Kconfig
··· 6 6 7 7 # ARM7TDMI 8 8 config CPU_ARM7TDMI 9 - bool "Support ARM7TDMI processor" 9 + bool 10 10 depends on !MMU 11 11 select CPU_32v4T 12 12 select CPU_ABRT_LV4T ··· 56 56 57 57 # ARM9TDMI 58 58 config CPU_ARM9TDMI 59 - bool "Support ARM9TDMI processor" 59 + bool 60 60 depends on !MMU 61 61 select CPU_32v4T 62 62 select CPU_ABRT_NOMMU ··· 604 604 This option enables or disables the use of domain switching 605 605 via the set_fs() function. 606 606 607 + config CPU_V7M_NUM_IRQ 608 + int "Number of external interrupts connected to the NVIC" 609 + depends on CPU_V7M 610 + default 90 if ARCH_STM32 611 + default 38 if ARCH_EFM32 612 + default 112 if SOC_VF610 613 + default 240 614 + help 615 + This option indicates the number of interrupts connected to the NVIC. 616 + The value can be larger than the real number of interrupts supported 617 + by the system, but must not be lower. 618 + The default value is 240, corresponding to the maximum number of 619 + interrupts supported by the NVIC on Cortex-M family. 620 + 621 + If unsure, keep default value. 622 + 607 623 # 608 624 # CPU supports 36-bit I/O 609 625 # ··· 639 623 processors without the LPA extension. 640 624 641 625 If unsure, say N. 626 + 627 + config ARM_PV_FIXUP 628 + def_bool y 629 + depends on ARM_LPAE && ARM_PATCH_PHYS_VIRT && ARCH_KEYSTONE 642 630 643 631 config ARCH_PHYS_ADDR_T_64BIT 644 632 def_bool ARM_LPAE
+3
arch/arm/mm/Makefile
··· 18 18 obj-$(CONFIG_ALIGNMENT_TRAP) += alignment.o 19 19 obj-$(CONFIG_HIGHMEM) += highmem.o 20 20 obj-$(CONFIG_HUGETLB_PAGE) += hugetlbpage.o 21 + obj-$(CONFIG_ARM_PV_FIXUP) += pv-fixup-asm.o 21 22 22 23 obj-$(CONFIG_CPU_ABRT_NOMMU) += abort-nommu.o 23 24 obj-$(CONFIG_CPU_ABRT_EV4) += abort-ev4.o ··· 55 54 obj-$(CONFIG_CPU_XSCALE) += copypage-xscale.o 56 55 obj-$(CONFIG_CPU_XSC3) += copypage-xsc3.o 57 56 obj-$(CONFIG_CPU_COPY_FA) += copypage-fa.o 57 + 58 + CFLAGS_copypage-feroceon.o := -march=armv5te 58 59 59 60 obj-$(CONFIG_CPU_TLB_V4WT) += tlb-v4.o 60 61 obj-$(CONFIG_CPU_TLB_V4WB) += tlb-v4wb.o
+79 -28
arch/arm/mm/cache-l2x0.c
··· 38 38 unsigned way_size_0; 39 39 unsigned num_lock; 40 40 void (*of_parse)(const struct device_node *, u32 *, u32 *); 41 - void (*enable)(void __iomem *, u32, unsigned); 41 + void (*enable)(void __iomem *, unsigned); 42 42 void (*fixup)(void __iomem *, u32, struct outer_cache_fns *); 43 43 void (*save)(void __iomem *); 44 44 void (*configure)(void __iomem *); 45 + void (*unlock)(void __iomem *, unsigned); 45 46 struct outer_cache_fns outer_cache; 46 47 }; 47 48 ··· 111 110 112 111 static void l2c_configure(void __iomem *base) 113 112 { 114 - if (outer_cache.configure) { 115 - outer_cache.configure(&l2x0_saved_regs); 116 - return; 117 - } 118 - 119 - if (l2x0_data->configure) 120 - l2x0_data->configure(base); 121 - 122 113 l2c_write_sec(l2x0_saved_regs.aux_ctrl, base, L2X0_AUX_CTRL); 123 114 } 124 115 ··· 118 125 * Enable the L2 cache controller. This function must only be 119 126 * called when the cache controller is known to be disabled. 120 127 */ 121 - static void l2c_enable(void __iomem *base, u32 aux, unsigned num_lock) 128 + static void l2c_enable(void __iomem *base, unsigned num_lock) 122 129 { 123 130 unsigned long flags; 124 131 125 - /* Do not touch the controller if already enabled. */ 126 - if (readl_relaxed(base + L2X0_CTRL) & L2X0_CTRL_EN) 127 - return; 132 + if (outer_cache.configure) 133 + outer_cache.configure(&l2x0_saved_regs); 134 + else 135 + l2x0_data->configure(base); 128 136 129 - l2x0_saved_regs.aux_ctrl = aux; 130 - l2c_configure(base); 131 - 132 - l2c_unlock(base, num_lock); 137 + l2x0_data->unlock(base, num_lock); 133 138 134 139 local_irq_save(flags); 135 140 __l2c_op_way(base + L2X0_INV_WAY); ··· 154 163 155 164 static void l2c_resume(void) 156 165 { 157 - l2c_enable(l2x0_base, l2x0_saved_regs.aux_ctrl, l2x0_data->num_lock); 166 + void __iomem *base = l2x0_base; 167 + 168 + /* Do not touch the controller if already enabled. 
*/ 169 + if (!(readl_relaxed(base + L2X0_CTRL) & L2X0_CTRL_EN)) 170 + l2c_enable(base, l2x0_data->num_lock); 158 171 } 159 172 160 173 /* ··· 247 252 .num_lock = 1, 248 253 .enable = l2c_enable, 249 254 .save = l2c_save, 255 + .configure = l2c_configure, 256 + .unlock = l2c_unlock, 250 257 .outer_cache = { 251 258 .inv_range = l2c210_inv_range, 252 259 .clean_range = l2c210_clean_range, ··· 388 391 raw_spin_unlock_irqrestore(&l2x0_lock, flags); 389 392 } 390 393 391 - static void l2c220_enable(void __iomem *base, u32 aux, unsigned num_lock) 394 + static void l2c220_enable(void __iomem *base, unsigned num_lock) 392 395 { 393 396 /* 394 397 * Always enable non-secure access to the lockdown registers - 395 398 * we write to them as part of the L2C enable sequence so they 396 399 * need to be accessible. 397 400 */ 398 - aux |= L220_AUX_CTRL_NS_LOCKDOWN; 401 + l2x0_saved_regs.aux_ctrl |= L220_AUX_CTRL_NS_LOCKDOWN; 399 402 400 - l2c_enable(base, aux, num_lock); 403 + l2c_enable(base, num_lock); 404 + } 405 + 406 + static void l2c220_unlock(void __iomem *base, unsigned num_lock) 407 + { 408 + if (readl_relaxed(base + L2X0_AUX_CTRL) & L220_AUX_CTRL_NS_LOCKDOWN) 409 + l2c_unlock(base, num_lock); 401 410 } 402 411 403 412 static const struct l2c_init_data l2c220_data = { ··· 412 409 .num_lock = 1, 413 410 .enable = l2c220_enable, 414 411 .save = l2c_save, 412 + .configure = l2c_configure, 413 + .unlock = l2c220_unlock, 415 414 .outer_cache = { 416 415 .inv_range = l2c220_inv_range, 417 416 .clean_range = l2c220_clean_range, ··· 574 569 { 575 570 unsigned revision; 576 571 572 + l2c_configure(base); 573 + 577 574 /* restore pl310 setup */ 578 575 l2c_write_sec(l2x0_saved_regs.tag_latency, base, 579 576 L310_TAG_LATENCY_CTRL); ··· 610 603 return NOTIFY_OK; 611 604 } 612 605 613 - static void __init l2c310_enable(void __iomem *base, u32 aux, unsigned num_lock) 606 + static void __init l2c310_enable(void __iomem *base, unsigned num_lock) 614 607 { 615 608 unsigned rev = 
readl_relaxed(base + L2X0_CACHE_ID) & L2X0_CACHE_ID_RTL_MASK; 616 609 bool cortex_a9 = read_cpuid_part() == ARM_CPU_PART_CORTEX_A9; 610 + u32 aux = l2x0_saved_regs.aux_ctrl; 617 611 618 612 if (rev >= L310_CACHE_ID_RTL_R2P0) { 619 613 if (cortex_a9) { ··· 657 649 * we write to them as part of the L2C enable sequence so they 658 650 * need to be accessible. 659 651 */ 660 - aux |= L310_AUX_CTRL_NS_LOCKDOWN; 652 + l2x0_saved_regs.aux_ctrl = aux | L310_AUX_CTRL_NS_LOCKDOWN; 661 653 662 - l2c_enable(base, aux, num_lock); 654 + l2c_enable(base, num_lock); 663 655 664 656 /* Read back resulting AUX_CTRL value as it could have been altered. */ 665 657 aux = readl_relaxed(base + L2X0_AUX_CTRL); ··· 763 755 set_auxcr(get_auxcr() | BIT(3) | BIT(2) | BIT(1)); 764 756 } 765 757 758 + static void l2c310_unlock(void __iomem *base, unsigned num_lock) 759 + { 760 + if (readl_relaxed(base + L2X0_AUX_CTRL) & L310_AUX_CTRL_NS_LOCKDOWN) 761 + l2c_unlock(base, num_lock); 762 + } 763 + 766 764 static const struct l2c_init_data l2c310_init_fns __initconst = { 767 765 .type = "L2C-310", 768 766 .way_size_0 = SZ_8K, ··· 777 763 .fixup = l2c310_fixup, 778 764 .save = l2c310_save, 779 765 .configure = l2c310_configure, 766 + .unlock = l2c310_unlock, 780 767 .outer_cache = { 781 768 .inv_range = l2c210_inv_range, 782 769 .clean_range = l2c210_clean_range, ··· 871 856 * Check if l2x0 controller is already enabled. If we are booting 872 857 * in non-secure mode accessing the below registers will fault. 
873 858 */ 874 - if (!(readl_relaxed(l2x0_base + L2X0_CTRL) & L2X0_CTRL_EN)) 875 - data->enable(l2x0_base, aux, data->num_lock); 859 + if (!(readl_relaxed(l2x0_base + L2X0_CTRL) & L2X0_CTRL_EN)) { 860 + l2x0_saved_regs.aux_ctrl = aux; 861 + 862 + data->enable(l2x0_base, data->num_lock); 863 + } 876 864 877 865 outer_cache = fns; 878 866 ··· 1084 1066 .of_parse = l2x0_of_parse, 1085 1067 .enable = l2c_enable, 1086 1068 .save = l2c_save, 1069 + .configure = l2c_configure, 1070 + .unlock = l2c_unlock, 1087 1071 .outer_cache = { 1088 1072 .inv_range = l2c210_inv_range, 1089 1073 .clean_range = l2c210_clean_range, ··· 1104 1084 .of_parse = l2x0_of_parse, 1105 1085 .enable = l2c220_enable, 1106 1086 .save = l2c_save, 1087 + .configure = l2c_configure, 1088 + .unlock = l2c220_unlock, 1107 1089 .outer_cache = { 1108 1090 .inv_range = l2c220_inv_range, 1109 1091 .clean_range = l2c220_clean_range, ··· 1221 1199 pr_err("L2C-310 OF arm,prefetch-offset property value is missing\n"); 1222 1200 } 1223 1201 1202 + ret = of_property_read_u32(np, "prefetch-data", &val); 1203 + if (ret == 0) { 1204 + if (val) 1205 + prefetch |= L310_PREFETCH_CTRL_DATA_PREFETCH; 1206 + else 1207 + prefetch &= ~L310_PREFETCH_CTRL_DATA_PREFETCH; 1208 + } else if (ret != -EINVAL) { 1209 + pr_err("L2C-310 OF prefetch-data property value is missing\n"); 1210 + } 1211 + 1212 + ret = of_property_read_u32(np, "prefetch-instr", &val); 1213 + if (ret == 0) { 1214 + if (val) 1215 + prefetch |= L310_PREFETCH_CTRL_INSTR_PREFETCH; 1216 + else 1217 + prefetch &= ~L310_PREFETCH_CTRL_INSTR_PREFETCH; 1218 + } else if (ret != -EINVAL) { 1219 + pr_err("L2C-310 OF prefetch-instr property value is missing\n"); 1220 + } 1221 + 1224 1222 l2x0_saved_regs.prefetch_ctrl = prefetch; 1225 1223 } 1226 1224 ··· 1253 1211 .fixup = l2c310_fixup, 1254 1212 .save = l2c310_save, 1255 1213 .configure = l2c310_configure, 1214 + .unlock = l2c310_unlock, 1256 1215 .outer_cache = { 1257 1216 .inv_range = l2c210_inv_range, 1258 1217 
.clean_range = l2c210_clean_range, ··· 1283 1240 .fixup = l2c310_fixup, 1284 1241 .save = l2c310_save, 1285 1242 .configure = l2c310_configure, 1243 + .unlock = l2c310_unlock, 1286 1244 .outer_cache = { 1287 1245 .inv_range = l2c210_inv_range, 1288 1246 .clean_range = l2c210_clean_range, ··· 1410 1366 * For Aurora cache in no outer mode, enable via the CP15 coprocessor 1411 1367 * broadcasting of cache commands to L2. 1412 1368 */ 1413 - static void __init aurora_enable_no_outer(void __iomem *base, u32 aux, 1369 + static void __init aurora_enable_no_outer(void __iomem *base, 1414 1370 unsigned num_lock) 1415 1371 { 1416 1372 u32 u; ··· 1421 1377 1422 1378 isb(); 1423 1379 1424 - l2c_enable(base, aux, num_lock); 1380 + l2c_enable(base, num_lock); 1425 1381 } 1426 1382 1427 1383 static void __init aurora_fixup(void __iomem *base, u32 cache_id, ··· 1460 1416 .enable = l2c_enable, 1461 1417 .fixup = aurora_fixup, 1462 1418 .save = aurora_save, 1419 + .configure = l2c_configure, 1420 + .unlock = l2c_unlock, 1463 1421 .outer_cache = { 1464 1422 .inv_range = aurora_inv_range, 1465 1423 .clean_range = aurora_clean_range, ··· 1481 1435 .enable = aurora_enable_no_outer, 1482 1436 .fixup = aurora_fixup, 1483 1437 .save = aurora_save, 1438 + .configure = l2c_configure, 1439 + .unlock = l2c_unlock, 1484 1440 .outer_cache = { 1485 1441 .resume = l2c_resume, 1486 1442 }, ··· 1633 1585 .enable = l2c310_enable, 1634 1586 .save = l2c310_save, 1635 1587 .configure = l2c310_configure, 1588 + .unlock = l2c310_unlock, 1636 1589 .outer_cache = { 1637 1590 .inv_range = bcm_inv_range, 1638 1591 .clean_range = bcm_clean_range, ··· 1657 1608 1658 1609 static void tauros3_configure(void __iomem *base) 1659 1610 { 1611 + l2c_configure(base); 1660 1612 writel_relaxed(l2x0_saved_regs.aux2_ctrl, 1661 1613 base + TAUROS3_AUX2_CTRL); 1662 1614 writel_relaxed(l2x0_saved_regs.prefetch_ctrl, ··· 1671 1621 .enable = l2c_enable, 1672 1622 .save = tauros3_save, 1673 1623 .configure = tauros3_configure, 
1624 + .unlock = l2c_unlock, 1674 1625 /* Tauros3 broadcasts L1 cache operations to L2 */ 1675 1626 .outer_cache = { 1676 1627 .resume = l2c_resume,
+25 -7
arch/arm/mm/dma-mapping.c
··· 148 148 dma_addr_t *handle, gfp_t gfp, struct dma_attrs *attrs); 149 149 static void arm_coherent_dma_free(struct device *dev, size_t size, void *cpu_addr, 150 150 dma_addr_t handle, struct dma_attrs *attrs); 151 + static int arm_coherent_dma_mmap(struct device *dev, struct vm_area_struct *vma, 152 + void *cpu_addr, dma_addr_t dma_addr, size_t size, 153 + struct dma_attrs *attrs); 151 154 152 155 struct dma_map_ops arm_coherent_dma_ops = { 153 156 .alloc = arm_coherent_dma_alloc, 154 157 .free = arm_coherent_dma_free, 155 - .mmap = arm_dma_mmap, 158 + .mmap = arm_coherent_dma_mmap, 156 159 .get_sgtable = arm_dma_get_sgtable, 157 160 .map_page = arm_coherent_dma_map_page, 158 161 .map_sg = arm_dma_map_sg, ··· 693 690 attrs, __builtin_return_address(0)); 694 691 } 695 692 696 - /* 697 - * Create userspace mapping for the DMA-coherent memory. 698 - */ 699 - int arm_dma_mmap(struct device *dev, struct vm_area_struct *vma, 693 + static int __arm_dma_mmap(struct device *dev, struct vm_area_struct *vma, 700 694 void *cpu_addr, dma_addr_t dma_addr, size_t size, 701 695 struct dma_attrs *attrs) 702 696 { ··· 703 703 unsigned long nr_pages = PAGE_ALIGN(size) >> PAGE_SHIFT; 704 704 unsigned long pfn = dma_to_pfn(dev, dma_addr); 705 705 unsigned long off = vma->vm_pgoff; 706 - 707 - vma->vm_page_prot = __get_dma_pgprot(attrs, vma->vm_page_prot); 708 706 709 707 if (dma_mmap_from_coherent(dev, vma, cpu_addr, size, &ret)) 710 708 return ret; ··· 716 718 #endif /* CONFIG_MMU */ 717 719 718 720 return ret; 721 + } 722 + 723 + /* 724 + * Create userspace mapping for the DMA-coherent memory. 
725 + */ 726 + static int arm_coherent_dma_mmap(struct device *dev, struct vm_area_struct *vma, 727 + void *cpu_addr, dma_addr_t dma_addr, size_t size, 728 + struct dma_attrs *attrs) 729 + { 730 + return __arm_dma_mmap(dev, vma, cpu_addr, dma_addr, size, attrs); 731 + } 732 + 733 + int arm_dma_mmap(struct device *dev, struct vm_area_struct *vma, 734 + void *cpu_addr, dma_addr_t dma_addr, size_t size, 735 + struct dma_attrs *attrs) 736 + { 737 + #ifdef CONFIG_MMU 738 + vma->vm_page_prot = __get_dma_pgprot(attrs, vma->vm_page_prot); 739 + #endif /* CONFIG_MMU */ 740 + return __arm_dma_mmap(dev, vma, cpu_addr, dma_addr, size, attrs); 719 741 } 720 742 721 743 /*
+64 -89
arch/arm/mm/mmu.c
··· 1387 1387 } 1388 1388 } 1389 1389 1390 - #ifdef CONFIG_ARM_LPAE 1390 + #ifdef CONFIG_ARM_PV_FIXUP 1391 + extern unsigned long __atags_pointer; 1392 + typedef void pgtables_remap(long long offset, unsigned long pgd, void *bdata); 1393 + pgtables_remap lpae_pgtables_remap_asm; 1394 + 1391 1395 /* 1392 1396 * early_paging_init() recreates boot time page table setup, allowing machines 1393 1397 * to switch over to a high (>4G) address space on LPAE systems 1394 1398 */ 1395 - void __init early_paging_init(const struct machine_desc *mdesc, 1396 - struct proc_info_list *procinfo) 1399 + void __init early_paging_init(const struct machine_desc *mdesc) 1397 1400 { 1398 - pmdval_t pmdprot = procinfo->__cpu_mm_mmu_flags; 1399 - unsigned long map_start, map_end; 1400 - pgd_t *pgd0, *pgdk; 1401 - pud_t *pud0, *pudk, *pud_start; 1402 - pmd_t *pmd0, *pmdk; 1403 - phys_addr_t phys; 1404 - int i; 1401 + pgtables_remap *lpae_pgtables_remap; 1402 + unsigned long pa_pgd; 1403 + unsigned int cr, ttbcr; 1404 + long long offset; 1405 + void *boot_data; 1405 1406 1406 - if (!(mdesc->init_meminfo)) 1407 + if (!mdesc->pv_fixup) 1407 1408 return; 1408 1409 1409 - /* remap kernel code and data */ 1410 - map_start = init_mm.start_code & PMD_MASK; 1411 - map_end = ALIGN(init_mm.brk, PMD_SIZE); 1410 + offset = mdesc->pv_fixup(); 1411 + if (offset == 0) 1412 + return; 1412 1413 1413 - /* get a handle on things... */ 1414 - pgd0 = pgd_offset_k(0); 1415 - pud_start = pud0 = pud_offset(pgd0, 0); 1416 - pmd0 = pmd_offset(pud0, 0); 1414 + /* 1415 + * Get the address of the remap function in the 1:1 identity 1416 + * mapping setup by the early page table assembly code. We 1417 + * must get this prior to the pv update. The following barrier 1418 + * ensures that this is complete before we fixup any P:V offsets. 
1419 + */ 1420 + lpae_pgtables_remap = (pgtables_remap *)(unsigned long)__pa(lpae_pgtables_remap_asm); 1421 + pa_pgd = __pa(swapper_pg_dir); 1422 + boot_data = __va(__atags_pointer); 1423 + barrier(); 1417 1424 1418 - pgdk = pgd_offset_k(map_start); 1419 - pudk = pud_offset(pgdk, map_start); 1420 - pmdk = pmd_offset(pudk, map_start); 1425 + pr_info("Switching physical address space to 0x%08llx\n", 1426 + (u64)PHYS_OFFSET + offset); 1421 1427 1422 - mdesc->init_meminfo(); 1428 + /* Re-set the phys pfn offset, and the pv offset */ 1429 + __pv_offset += offset; 1430 + __pv_phys_pfn_offset += PFN_DOWN(offset); 1423 1431 1424 1432 /* Run the patch stub to update the constants */ 1425 1433 fixup_pv_table(&__pv_table_begin, 1426 1434 (&__pv_table_end - &__pv_table_begin) << 2); 1427 1435 1428 1436 /* 1429 - * Cache cleaning operations for self-modifying code 1430 - * We should clean the entries by MVA but running a 1431 - * for loop over every pv_table entry pointer would 1432 - * just complicate the code. 1437 + * We are changing not only the virtual to physical mapping, but also 1438 + * the physical addresses used to access memory. We need to flush 1439 + * all levels of cache in the system with caching disabled to 1440 + * ensure that all data is written back, and nothing is prefetched 1441 + * into the caches. We also need to prevent the TLB walkers 1442 + * allocating into the caches too. Note that this is ARMv7 LPAE 1443 + * specific. 1433 1444 */ 1434 - flush_cache_louis(); 1435 - dsb(ishst); 1436 - isb(); 1437 - 1438 - /* 1439 - * FIXME: This code is not architecturally compliant: we modify 1440 - * the mappings in-place, indeed while they are in use by this 1441 - * very same code. This may lead to unpredictable behaviour of 1442 - * the CPU. 1443 - * 1444 - * Even modifying the mappings in a separate page table does 1445 - * not resolve this.
1446 - * 1447 - * The architecture strongly recommends that when a mapping is 1448 - * changed, that it is changed by first going via an invalid 1449 - * mapping and back to the new mapping. This is to ensure that 1450 - * no TLB conflicts (caused by the TLB having more than one TLB 1451 - * entry match a translation) can occur. However, doing that 1452 - * here will result in unmapping the code we are running. 1453 - */ 1454 - pr_warn("WARNING: unsafe modification of in-place page tables - tainting kernel\n"); 1455 - add_taint(TAINT_CPU_OUT_OF_SPEC, LOCKDEP_STILL_OK); 1456 - 1457 - /* 1458 - * Remap level 1 table. This changes the physical addresses 1459 - * used to refer to the level 2 page tables to the high 1460 - * physical address alias, leaving everything else the same. 1461 - */ 1462 - for (i = 0; i < PTRS_PER_PGD; pud0++, i++) { 1463 - set_pud(pud0, 1464 - __pud(__pa(pmd0) | PMD_TYPE_TABLE | L_PGD_SWAPPER)); 1465 - pmd0 += PTRS_PER_PMD; 1466 - } 1467 - 1468 - /* 1469 - * Remap the level 2 table, pointing the mappings at the high 1470 - * physical address alias of these pages. 1471 - */ 1472 - phys = __pa(map_start); 1473 - do { 1474 - *pmdk++ = __pmd(phys | pmdprot); 1475 - phys += PMD_SIZE; 1476 - } while (phys < map_end); 1477 - 1478 - /* 1479 - * Ensure that the above updates are flushed out of the cache. 1480 - * This is not strictly correct; on a system where the caches 1481 - * are coherent with each other, but the MMU page table walks 1482 - * may not be coherent, flush_cache_all() may be a no-op, and 1483 - * this will fail. 1484 - */ 1445 + cr = get_cr(); 1446 + set_cr(cr & ~(CR_I | CR_C)); 1447 + asm("mrc p15, 0, %0, c2, c0, 2" : "=r" (ttbcr)); 1448 + asm volatile("mcr p15, 0, %0, c2, c0, 2" 1449 + : : "r" (ttbcr & ~(3 << 8 | 3 << 10))); 1485 1450 flush_cache_all(); 1486 1451 1487 1452 /* 1488 - * Re-write the TTBR values to point them at the high physical 1489 - * alias of the page tables. 
We expect __va() will work on 1490 - * cpu_get_pgd(), which returns the value of TTBR0. 1453 + * Fixup the page tables - this must be in the idmap region as 1454 + * we need to disable the MMU to do this safely, and hence it 1455 + * needs to be assembly. It's fairly simple, as we're using the 1456 + * temporary tables setup by the initial assembly code. 1491 1457 */ 1492 - cpu_switch_mm(pgd0, &init_mm); 1493 - cpu_set_ttbr(1, __pa(pgd0) + TTBR1_OFFSET); 1458 + lpae_pgtables_remap(offset, pa_pgd, boot_data); 1494 1459 1495 - /* Finally flush any stale TLB values. */ 1496 - local_flush_bp_all(); 1497 - local_flush_tlb_all(); 1460 + /* Re-enable the caches and cacheable TLB walks */ 1461 + asm volatile("mcr p15, 0, %0, c2, c0, 2" : : "r" (ttbcr)); 1462 + set_cr(cr); 1498 1463 } 1499 1464 1500 1465 #else 1501 1466 1502 - void __init early_paging_init(const struct machine_desc *mdesc, 1503 - struct proc_info_list *procinfo) 1467 + void __init early_paging_init(const struct machine_desc *mdesc) 1504 1468 { 1505 - if (mdesc->init_meminfo) 1506 - mdesc->init_meminfo(); 1469 + long long offset; 1470 + 1471 + if (!mdesc->pv_fixup) 1472 + return; 1473 + 1474 + offset = mdesc->pv_fixup(); 1475 + if (offset == 0) 1476 + return; 1477 + 1478 + pr_crit("Physical address space modification is only to support Keystone2.\n"); 1479 + pr_crit("Please enable ARM_LPAE and ARM_PATCH_PHYS_VIRT support to use this\n"); 1480 + pr_crit("feature. Your kernel may crash now, have a good day.\n"); 1481 + add_taint(TAINT_CPU_OUT_OF_SPEC, LOCKDEP_STILL_OK); 1507 1482 } 1508 1483 1509 1484 #endif
-9
arch/arm/mm/nommu.c
··· 304 304 } 305 305 306 306 /* 307 - * early_paging_init() recreates boot time page table setup, allowing machines 308 - * to switch over to a high (>4G) address space on LPAE systems 309 - */ 310 - void __init early_paging_init(const struct machine_desc *mdesc, 311 - struct proc_info_list *procinfo) 312 - { 313 - } 314 - 315 - /* 316 307 * paging_init() sets up the page tables, initialises the zone memory 317 308 * maps, and sets up the zero page, bad page and bad page tables. 318 309 */
+7 -5
arch/arm/mm/proc-v7-2level.S
··· 36 36 * 37 37 * It is assumed that: 38 38 * - we are not using split page tables 39 + * 40 + * Note that we always need to flush BTAC/BTB if IBE is set 41 + * even on Cortex-A8 revisions not affected by 430973. 42 + * If IBE is not set, the flush BTAC/BTB won't do anything. 39 43 */ 40 44 ENTRY(cpu_ca8_switch_mm) 41 45 #ifdef CONFIG_MMU 42 46 mov r2, #0 43 - #ifdef CONFIG_ARM_ERRATA_430973 44 47 mcr p15, 0, r2, c7, c5, 6 @ flush BTAC/BTB 45 - #endif 46 48 #endif 47 49 ENTRY(cpu_v7_switch_mm) 48 50 #ifdef CONFIG_MMU ··· 150 148 * Macro for setting up the TTBRx and TTBCR registers. 151 149 * - \ttb0 and \ttb1 updated with the corresponding flags. 152 150 */ 153 - .macro v7_ttb_setup, zero, ttbr0, ttbr1, tmp 151 + .macro v7_ttb_setup, zero, ttbr0l, ttbr0h, ttbr1, tmp 154 152 mcr p15, 0, \zero, c2, c0, 2 @ TTB control register 155 - ALT_SMP(orr \ttbr0, \ttbr0, #TTB_FLAGS_SMP) 156 - ALT_UP(orr \ttbr0, \ttbr0, #TTB_FLAGS_UP) 153 + ALT_SMP(orr \ttbr0l, \ttbr0l, #TTB_FLAGS_SMP) 154 + ALT_UP(orr \ttbr0l, \ttbr0l, #TTB_FLAGS_UP) 157 155 ALT_SMP(orr \ttbr1, \ttbr1, #TTB_FLAGS_SMP) 158 156 ALT_UP(orr \ttbr1, \ttbr1, #TTB_FLAGS_UP) 159 157 mcr p15, 0, \ttbr1, c2, c0, 1 @ load TTB1
+5 -9
arch/arm/mm/proc-v7-3level.S
··· 126 126 * Macro for setting up the TTBRx and TTBCR registers. 127 127 * - \ttbr1 updated. 128 128 */ 129 - .macro v7_ttb_setup, zero, ttbr0, ttbr1, tmp 129 + .macro v7_ttb_setup, zero, ttbr0l, ttbr0h, ttbr1, tmp 130 130 ldr \tmp, =swapper_pg_dir @ swapper_pg_dir virtual address 131 - mov \tmp, \tmp, lsr #ARCH_PGD_SHIFT 132 - cmp \ttbr1, \tmp @ PHYS_OFFSET > PAGE_OFFSET? 133 - mrc p15, 0, \tmp, c2, c0, 2 @ TTB control register 131 + cmp \ttbr1, \tmp, lsr #12 @ PHYS_OFFSET > PAGE_OFFSET? 132 + mrc p15, 0, \tmp, c2, c0, 2 @ TTB control register 134 133 orr \tmp, \tmp, #TTB_EAE 135 134 ALT_SMP(orr \tmp, \tmp, #TTB_FLAGS_SMP) 136 135 ALT_UP(orr \tmp, \tmp, #TTB_FLAGS_UP) ··· 142 143 */ 143 144 orrls \tmp, \tmp, #TTBR1_SIZE @ TTBCR.T1SZ 144 145 mcr p15, 0, \tmp, c2, c0, 2 @ TTBCR 145 - mov \tmp, \ttbr1, lsr #(32 - ARCH_PGD_SHIFT) @ upper bits 146 - mov \ttbr1, \ttbr1, lsl #ARCH_PGD_SHIFT @ lower bits 146 + mov \tmp, \ttbr1, lsr #20 147 + mov \ttbr1, \ttbr1, lsl #12 147 148 addls \ttbr1, \ttbr1, #TTBR1_OFFSET 148 149 mcrr p15, 1, \ttbr1, \tmp, c2 @ load TTBR1 149 - mov \tmp, \ttbr0, lsr #(32 - ARCH_PGD_SHIFT) @ upper bits 150 - mov \ttbr0, \ttbr0, lsl #ARCH_PGD_SHIFT @ lower bits 151 - mcrr p15, 0, \ttbr0, \tmp, c2 @ load TTBR0 152 150 .endm 153 151 154 152 /*
+101 -81
arch/arm/mm/proc-v7.S
··· 252 252 * Initialise TLB, Caches, and MMU state ready to switch the MMU 253 253 * on. Return in r0 the new CP15 C1 control register setting. 254 254 * 255 + * r1, r2, r4, r5, r9, r13 must be preserved - r13 is not a stack 256 + * r4: TTBR0 (low word) 257 + * r5: TTBR0 (high word if LPAE) 258 + * r8: TTBR1 259 + * r9: Main ID register 260 + * 255 261 * This should be able to cover all ARMv7 cores. 256 262 * 257 263 * It is assumed that: ··· 284 278 mcreq p15, 0, r0, c1, c0, 1 285 279 #endif 286 280 b __v7_setup 281 + 282 + /* 283 + * Errata: 284 + * r0, r10 available for use 285 + * r1, r2, r4, r5, r9, r13: must be preserved 286 + * r3: contains MIDR rX number in bits 23-20 287 + * r6: contains MIDR rXpY as 8-bit XY number 288 + * r9: MIDR 289 + */ 290 + __ca8_errata: 291 + #if defined(CONFIG_ARM_ERRATA_430973) && !defined(CONFIG_ARCH_MULTIPLATFORM) 292 + teq r3, #0x00100000 @ only present in r1p* 293 + mrceq p15, 0, r0, c1, c0, 1 @ read aux control register 294 + orreq r0, r0, #(1 << 6) @ set IBE to 1 295 + mcreq p15, 0, r0, c1, c0, 1 @ write aux control register 296 + #endif 297 + #ifdef CONFIG_ARM_ERRATA_458693 298 + teq r6, #0x20 @ only present in r2p0 299 + mrceq p15, 0, r0, c1, c0, 1 @ read aux control register 300 + orreq r0, r0, #(1 << 5) @ set L1NEON to 1 301 + orreq r0, r0, #(1 << 9) @ set PLDNOP to 1 302 + mcreq p15, 0, r0, c1, c0, 1 @ write aux control register 303 + #endif 304 + #ifdef CONFIG_ARM_ERRATA_460075 305 + teq r6, #0x20 @ only present in r2p0 306 + mrceq p15, 1, r0, c9, c0, 2 @ read L2 cache aux ctrl register 307 + tsteq r0, #1 << 22 308 + orreq r0, r0, #(1 << 22) @ set the Write Allocate disable bit 309 + mcreq p15, 1, r0, c9, c0, 2 @ write the L2 cache aux ctrl register 310 + #endif 311 + b __errata_finish 312 + 313 + __ca9_errata: 314 + #ifdef CONFIG_ARM_ERRATA_742230 315 + cmp r6, #0x22 @ only present up to r2p2 316 + mrcle p15, 0, r0, c15, c0, 1 @ read diagnostic register 317 + orrle r0, r0, #1 << 4 @ set bit #4 318 + mcrle p15, 0, 
r0, c15, c0, 1 @ write diagnostic register 319 + #endif 320 + #ifdef CONFIG_ARM_ERRATA_742231 321 + teq r6, #0x20 @ present in r2p0 322 + teqne r6, #0x21 @ present in r2p1 323 + teqne r6, #0x22 @ present in r2p2 324 + mrceq p15, 0, r0, c15, c0, 1 @ read diagnostic register 325 + orreq r0, r0, #1 << 12 @ set bit #12 326 + orreq r0, r0, #1 << 22 @ set bit #22 327 + mcreq p15, 0, r0, c15, c0, 1 @ write diagnostic register 328 + #endif 329 + #ifdef CONFIG_ARM_ERRATA_743622 330 + teq r3, #0x00200000 @ only present in r2p* 331 + mrceq p15, 0, r0, c15, c0, 1 @ read diagnostic register 332 + orreq r0, r0, #1 << 6 @ set bit #6 333 + mcreq p15, 0, r0, c15, c0, 1 @ write diagnostic register 334 + #endif 335 + #if defined(CONFIG_ARM_ERRATA_751472) && defined(CONFIG_SMP) 336 + ALT_SMP(cmp r6, #0x30) @ present prior to r3p0 337 + ALT_UP_B(1f) 338 + mrclt p15, 0, r0, c15, c0, 1 @ read diagnostic register 339 + orrlt r0, r0, #1 << 11 @ set bit #11 340 + mcrlt p15, 0, r0, c15, c0, 1 @ write diagnostic register 341 + 1: 342 + #endif 343 + b __errata_finish 344 + 345 + __ca15_errata: 346 + #ifdef CONFIG_ARM_ERRATA_773022 347 + cmp r6, #0x4 @ only present up to r0p4 348 + mrcle p15, 0, r0, c1, c0, 1 @ read aux control register 349 + orrle r0, r0, #1 << 1 @ disable loop buffer 350 + mcrle p15, 0, r0, c1, c0, 1 @ write aux control register 351 + #endif 352 + b __errata_finish 287 353 288 354 __v7_pj4b_setup: 289 355 #ifdef CONFIG_CPU_PJ4B ··· 417 339 bl v7_invalidate_l1 418 340 ldmia r12, {r0-r5, r7, r9, r11, lr} 419 341 420 - mrc p15, 0, r0, c0, c0, 0 @ read main ID register 421 - and r10, r0, #0xff000000 @ ARM? 422 - teq r10, #0x41000000 423 - bne 3f 424 - and r5, r0, #0x00f00000 @ variant 425 - and r6, r0, #0x0000000f @ revision 426 - orr r6, r6, r5, lsr #20-4 @ combine variant and revision 427 - ubfx r0, r0, #4, #12 @ primary part number 342 + and r0, r9, #0xff000000 @ ARM? 
arch/arm/mm/proc-v7.S

+	teq	r0, #0x41000000
+	bne	__errata_finish
+	and	r3, r9, #0x00f00000		@ variant
+	and	r6, r9, #0x0000000f		@ revision
+	orr	r6, r6, r3, lsr #20-4		@ combine variant and revision
+	ubfx	r0, r9, #4, #12			@ primary part number
 
 	/* Cortex-A8 Errata */
 	ldr	r10, =0x00000c08		@ Cortex-A8 primary part number
 	teq	r0, r10
-	bne	2f
-#if defined(CONFIG_ARM_ERRATA_430973) && !defined(CONFIG_ARCH_MULTIPLATFORM)
-
-	teq	r5, #0x00100000			@ only present in r1p*
-	mrceq	p15, 0, r10, c1, c0, 1		@ read aux control register
-	orreq	r10, r10, #(1 << 6)		@ set IBE to 1
-	mcreq	p15, 0, r10, c1, c0, 1		@ write aux control register
-#endif
-#ifdef CONFIG_ARM_ERRATA_458693
-	teq	r6, #0x20			@ only present in r2p0
-	mrceq	p15, 0, r10, c1, c0, 1		@ read aux control register
-	orreq	r10, r10, #(1 << 5)		@ set L1NEON to 1
-	orreq	r10, r10, #(1 << 9)		@ set PLDNOP to 1
-	mcreq	p15, 0, r10, c1, c0, 1		@ write aux control register
-#endif
-#ifdef CONFIG_ARM_ERRATA_460075
-	teq	r6, #0x20			@ only present in r2p0
-	mrceq	p15, 1, r10, c9, c0, 2		@ read L2 cache aux ctrl register
-	tsteq	r10, #1 << 22
-	orreq	r10, r10, #(1 << 22)		@ set the Write Allocate disable bit
-	mcreq	p15, 1, r10, c9, c0, 2		@ write the L2 cache aux ctrl register
-#endif
-	b	3f
+	beq	__ca8_errata
 
 	/* Cortex-A9 Errata */
-2:	ldr	r10, =0x00000c09		@ Cortex-A9 primary part number
+	ldr	r10, =0x00000c09		@ Cortex-A9 primary part number
 	teq	r0, r10
-	bne	3f
-#ifdef CONFIG_ARM_ERRATA_742230
-	cmp	r6, #0x22			@ only present up to r2p2
-	mrcle	p15, 0, r10, c15, c0, 1		@ read diagnostic register
-	orrle	r10, r10, #1 << 4		@ set bit #4
-	mcrle	p15, 0, r10, c15, c0, 1		@ write diagnostic register
-#endif
-#ifdef CONFIG_ARM_ERRATA_742231
-	teq	r6, #0x20			@ present in r2p0
-	teqne	r6, #0x21			@ present in r2p1
-	teqne	r6, #0x22			@ present in r2p2
-	mrceq	p15, 0, r10, c15, c0, 1		@ read diagnostic register
-	orreq	r10, r10, #1 << 12		@ set bit #12
-	orreq	r10, r10, #1 << 22		@ set bit #22
-	mcreq	p15, 0, r10, c15, c0, 1		@ write diagnostic register
-#endif
-#ifdef CONFIG_ARM_ERRATA_743622
-	teq	r5, #0x00200000			@ only present in r2p*
-	mrceq	p15, 0, r10, c15, c0, 1		@ read diagnostic register
-	orreq	r10, r10, #1 << 6		@ set bit #6
-	mcreq	p15, 0, r10, c15, c0, 1		@ write diagnostic register
-#endif
-#if defined(CONFIG_ARM_ERRATA_751472) && defined(CONFIG_SMP)
-	ALT_SMP(cmp r6, #0x30)			@ present prior to r3p0
-	ALT_UP_B(1f)
-	mrclt	p15, 0, r10, c15, c0, 1		@ read diagnostic register
-	orrlt	r10, r10, #1 << 11		@ set bit #11
-	mcrlt	p15, 0, r10, c15, c0, 1		@ write diagnostic register
-1:
-#endif
+	beq	__ca9_errata
 
 	/* Cortex-A15 Errata */
-3:	ldr	r10, =0x00000c0f		@ Cortex-A15 primary part number
+	ldr	r10, =0x00000c0f		@ Cortex-A15 primary part number
 	teq	r0, r10
-	bne	4f
+	beq	__ca15_errata
 
-#ifdef CONFIG_ARM_ERRATA_773022
-	cmp	r6, #0x4			@ only present up to r0p4
-	mrcle	p15, 0, r10, c1, c0, 1		@ read aux control register
-	orrle	r10, r10, #1 << 1		@ disable loop buffer
-	mcrle	p15, 0, r10, c1, c0, 1		@ write aux control register
-#endif
-
-4:	mov	r10, #0
+__errata_finish:
+	mov	r10, #0
 	mcr	p15, 0, r10, c7, c5, 0		@ I+BTB cache invalidate
 #ifdef CONFIG_MMU
 	mcr	p15, 0, r10, c8, c7, 0		@ invalidate I + D TLBs
-	v7_ttb_setup r10, r4, r8, r5		@ TTBCR, TTBRx setup
-	ldr	r5, =PRRR			@ PRRR
+	v7_ttb_setup r10, r4, r5, r8, r3	@ TTBCR, TTBRx setup
+	ldr	r3, =PRRR			@ PRRR
 	ldr	r6, =NMRR			@ NMRR
-	mcr	p15, 0, r5, c10, c2, 0		@ write PRRR
+	mcr	p15, 0, r3, c10, c2, 0		@ write PRRR
 	mcr	p15, 0, r6, c10, c2, 1		@ write NMRR
 #endif
 	dsb					@ Complete invalidations
···
 	and	r0, r0, #(0xf << 12)		@ ThumbEE enabled field
 	teq	r0, #(1 << 12)			@ check if ThumbEE is present
 	bne	1f
-	mov	r5, #0
-	mcr	p14, 6, r5, c1, c0, 0		@ Initialize TEEHBR to 0
+	mov	r3, #0
+	mcr	p14, 6, r3, c1, c0, 0		@ Initialize TEEHBR to 0
 	mrc	p14, 6, r0, c0, c0, 0		@ load TEECR
 	orr	r0, r0, #1			@ set the 1st bit in order to
 	mcr	p14, 6, r0, c0, c0, 0		@ stop userspace TEEHBR access
1:
 #endif
-	adr	r5, v7_crval
-	ldmia	r5, {r5, r6}
+	adr	r3, v7_crval
+	ldmia	r3, {r3, r6}
 ARM_BE8(orr	r6, r6, #1 << 25)		@ big-endian page tables
 #ifdef CONFIG_SWP_EMULATE
-	orr	r5, r5, #(1 << 10)		@ set SW bit in "clear"
+	orr	r3, r3, #(1 << 10)		@ set SW bit in "clear"
 	bic	r6, r6, #(1 << 10)		@ clear it in "mmuset"
 #endif
 	mrc	p15, 0, r0, c1, c0, 0		@ read control register
-	bic	r0, r0, r5			@ clear bits them
+	bic	r0, r0, r3			@ clear bits them
 	orr	r0, r0, r6			@ set them
 THUMB(	orr	r0, r0, #1 << 30	)	@ Thumb exceptions
 	ret	lr				@ return to head.S:__ret
arch/arm/mm/proc-v7m.S (+1 -1)

···
 	str	r5, [r0, V7M_SCB_SHPR3]	@ set PendSV priority
 
 	@ SVC to run the kernel in this mode
-	adr	r1, BSYM(1f)
+	badr	r1, 1f
 	ldr	r5, [r12, #11 * 4]	@ read the SVC vector entry
 	str	r1, [r12, #11 * 4]	@ write the temporary SVC vector entry
 	mov	r6, lr			@ save LR
arch/arm/mm/pv-fixup-asm.S (+88, new file)

+/*
+ * Copyright (C) 2015 Russell King
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This assembly is required to safely remap the physical address space
+ * for Keystone 2
+ */
+#include <linux/linkage.h>
+#include <asm/asm-offsets.h>
+#include <asm/cp15.h>
+#include <asm/memory.h>
+#include <asm/pgtable.h>
+
+	.section ".idmap.text", "ax"
+
+#define L1_ORDER 3
+#define L2_ORDER 3
+
+ENTRY(lpae_pgtables_remap_asm)
+	stmfd	sp!, {r4-r8, lr}
+
+	mrc	p15, 0, r8, c1, c0, 0		@ read control reg
+	bic	ip, r8, #CR_M			@ disable caches and MMU
+	mcr	p15, 0, ip, c1, c0, 0
+	dsb
+	isb
+
+	/* Update level 2 entries covering the kernel */
+	ldr	r6, =(_end - 1)
+	add	r7, r2, #0x1000
+	add	r6, r7, r6, lsr #SECTION_SHIFT - L2_ORDER
+	add	r7, r7, #PAGE_OFFSET >> (SECTION_SHIFT - L2_ORDER)
+1:	ldrd	r4, [r7]
+	adds	r4, r4, r0
+	adc	r5, r5, r1
+	strd	r4, [r7], #1 << L2_ORDER
+	cmp	r7, r6
+	bls	1b
+
+	/* Update level 2 entries for the boot data */
+	add	r7, r2, #0x1000
+	add	r7, r7, r3, lsr #SECTION_SHIFT - L2_ORDER
+	bic	r7, r7, #(1 << L2_ORDER) - 1
+	ldrd	r4, [r7]
+	adds	r4, r4, r0
+	adc	r5, r5, r1
+	strd	r4, [r7], #1 << L2_ORDER
+	ldrd	r4, [r7]
+	adds	r4, r4, r0
+	adc	r5, r5, r1
+	strd	r4, [r7]
+
+	/* Update level 1 entries */
+	mov	r6, #4
+	mov	r7, r2
+2:	ldrd	r4, [r7]
+	adds	r4, r4, r0
+	adc	r5, r5, r1
+	strd	r4, [r7], #1 << L1_ORDER
+	subs	r6, r6, #1
+	bne	2b
+
+	mrrc	p15, 0, r4, r5, c2		@ read TTBR0
+	adds	r4, r4, r0			@ update physical address
+	adc	r5, r5, r1
+	mcrr	p15, 0, r4, r5, c2		@ write back TTBR0
+	mrrc	p15, 1, r4, r5, c2		@ read TTBR1
+	adds	r4, r4, r0			@ update physical address
+	adc	r5, r5, r1
+	mcrr	p15, 1, r4, r5, c2		@ write back TTBR1
+
+	dsb
+
+	mov	ip, #0
+	mcr	p15, 0, ip, c7, c5, 0		@ I+BTB cache invalidate
+	mcr	p15, 0, ip, c8, c7, 0		@ local_flush_tlb_all()
+	dsb
+	isb
+
+	mcr	p15, 0, r8, c1, c0, 0		@ re-enable MMU
+	dsb
+	isb
+
+	ldmfd	sp!, {r4-r8, pc}
+ENDPROC(lpae_pgtables_remap_asm)
arch/arm/vdso/Makefile (+11 -7)

···
 targets := $(obj-vdso) vdso.so vdso.so.dbg vdso.so.raw vdso.lds
 obj-vdso := $(addprefix $(obj)/, $(obj-vdso))
 
-ccflags-y := -shared -fPIC -fno-common -fno-builtin -fno-stack-protector
-ccflags-y += -nostdlib -Wl,-soname=linux-vdso.so.1 -DDISABLE_BRANCH_PROFILING
-ccflags-y += -Wl,--no-undefined $(call cc-ldoption, -Wl$(comma)--hash-style=sysv)
+ccflags-y := -fPIC -fno-common -fno-builtin -fno-stack-protector
+ccflags-y += -DDISABLE_BRANCH_PROFILING
+
+VDSO_LDFLAGS := -Wl,-Bsymbolic -Wl,--no-undefined -Wl,-soname=linux-vdso.so.1
+VDSO_LDFLAGS += -Wl,-z,max-page-size=4096 -Wl,-z,common-page-size=4096
+VDSO_LDFLAGS += -nostdlib -shared
+VDSO_LDFLAGS += $(call cc-ldoption, -Wl$(comma)--hash-style=sysv)
+VDSO_LDFLAGS += $(call cc-ldoption, -Wl$(comma)--build-id)
+VDSO_LDFLAGS += $(call cc-option, -fuse-ld=bfd)
 
 obj-$(CONFIG_VDSO) += vdso.o
 extra-$(CONFIG_VDSO) += vdso.lds
···
 
 # Actual build commands
 quiet_cmd_vdsold = VDSO    $@
-      cmd_vdsold = $(CC) $(c_flags) -Wl,-T $(filter %.lds,$^) $(filter %.o,$^) \
-                   $(call cc-ldoption, -Wl$(comma)--build-id) \
-                   -Wl,-Bsymbolic -Wl,-z,max-page-size=4096 \
-                   -Wl,-z,common-page-size=4096 -o $@
+      cmd_vdsold = $(CC) $(c_flags) $(VDSO_LDFLAGS) \
+                   -Wl,-T $(filter %.lds,$^) $(filter %.o,$^) -o $@
 
 quiet_cmd_vdsomunge = MUNGE   $@
       cmd_vdsomunge = $(objtree)/$(obj)/vdsomunge $< $@
drivers/clocksource/Kconfig (+6)

···
 	help
 	  This options enables support for the ARM global timer unit
 
+config ARM_TIMER_SP804
+	bool "Support for Dual Timer SP804 module"
+	depends on GENERIC_SCHED_CLOCK && CLKDEV_LOOKUP
+	select CLKSRC_MMIO
+	select CLKSRC_OF if OF
+
 config CLKSRC_ARM_GLOBAL_TIMER_SCHED_CLOCK
 	bool
 	depends on ARM_GLOBAL_TIMER
drivers/clocksource/Makefile (+1)

···
 obj-$(CONFIG_ARM_ARCH_TIMER)		+= arm_arch_timer.o
 obj-$(CONFIG_ARM_GLOBAL_TIMER)		+= arm_global_timer.o
 obj-$(CONFIG_ARMV7M_SYSTICK)		+= armv7m_systick.o
+obj-$(CONFIG_ARM_TIMER_SP804)		+= timer-sp804.o
 obj-$(CONFIG_CLKSRC_METAG_GENERIC)	+= metag_generic.o
 obj-$(CONFIG_ARCH_HAS_TICK_BROADCAST)	+= dummy_timer.o
 obj-$(CONFIG_ARCH_KEYSTONE)		+= timer-keystone.o
drivers/clocksource/timer-integrator-ap.c (+2 -1)

···
 #include <linux/clockchips.h>
 #include <linux/interrupt.h>
 #include <linux/sched_clock.h>
-#include <asm/hardware/arm_timer.h>
+
+#include "timer-sp.h"
 
 static void __iomem * sched_clk_base;
drivers/cpuidle/cpuidle-big_little.c (+1 -7)

···
 	unsigned int cpu = MPIDR_AFFINITY_LEVEL(mpidr, 0);
 
 	mcpm_set_entry_vector(cpu, cluster, cpu_resume);
-
-	/*
-	 * Residency value passed to mcpm_cpu_suspend back-end
-	 * has to be given clear semantics. Set to 0 as a
-	 * temporary value.
-	 */
-	mcpm_cpu_suspend(0);
+	mcpm_cpu_suspend();
 
 	/* return value != 0 means failure */
 	return 1;
drivers/irqchip/Makefile (+1)

···
 obj-$(CONFIG_ARCH_DIGICOLOR)		+= irq-digicolor.o
 obj-$(CONFIG_RENESAS_H8300H_INTC)	+= irq-renesas-h8300h.o
 obj-$(CONFIG_RENESAS_H8S_INTC)		+= irq-renesas-h8s.o
+obj-$(CONFIG_ARCH_SA1100)		+= irq-sa11x0.o
include/linux/irqchip/irq-sa11x0.h (+17, new file)

+/*
+ * Generic IRQ handling for the SA11x0.
+ *
+ * Copyright (C) 2015 Dmitry Eremin-Solenikov
+ * Copyright (C) 1999-2001 Nicolas Pitre
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef __INCLUDE_LINUX_IRQCHIP_IRQ_SA11x0_H
+#define __INCLUDE_LINUX_IRQCHIP_IRQ_SA11x0_H
+
+void __init sa11x0_init_irq_nodt(int irq_start, resource_size_t io_start);
+
+#endif
include/linux/perf_event.h (+5)

···
 	 * Free pmu-private AUX data structures
 	 */
 	void (*free_aux)		(void *aux); /* optional */
+
+	/*
+	 * Filter events for PMU-specific reasons.
+	 */
+	int (*filter_match)		(struct perf_event *event); /* optional */
 };
 
 /**
include/soc/sa1100/pwer.h (+15, new file)

+#ifndef SOC_SA1100_PWER_H
+#define SOC_SA1100_PWER_H
+
+/*
+ * Copyright (C) 2015, Dmitry Eremin-Solenikov
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+int sa11x0_gpio_set_wake(unsigned int gpio, unsigned int on);
+int sa11x0_sc_set_wake(unsigned int irq, unsigned int on);
+
+#endif
kernel/events/core.c (+7 -1)

···
 
 core_initcall(perf_workqueue_init);
 
+static inline int pmu_filter_match(struct perf_event *event)
+{
+	struct pmu *pmu = event->pmu;
+	return pmu->filter_match ? pmu->filter_match(event) : 1;
+}
+
 static inline int
 event_filter_match(struct perf_event *event)
 {
 	return (event->cpu == -1 || event->cpu == smp_processor_id())
-	    && perf_cgroup_match(event)
+	    && perf_cgroup_match(event) && pmu_filter_match(event);
 }
 
 static void