
Merge branch 'for-linus' of git://ftp.arm.linux.org.uk/~rmk/linux-arm

Pull ARM development updates from Russell King:
"Included in this update:

- moving PSCI code from ARM64/ARM to drivers/

- removal of some architecture internals from global kernel view

- addition of software-based "privileged no access" support using the
old domains register to turn off the ability for kernel
loads/stores to access userspace. Only the proper accessors will
be usable.

- addition of early fixmap support for earlycon

- re-addition (and reimplementation) of OMAP special interconnect
barrier

- removal of finish_arch_switch()

- only expose cpuX/online in sysfs if hotpluggable

- a number of code cleanups"

* 'for-linus' of git://ftp.arm.linux.org.uk/~rmk/linux-arm: (41 commits)
ARM: software-based priviledged-no-access support
ARM: entry: provide uaccess assembly macro hooks
ARM: entry: get rid of multiple macro definitions
ARM: 8421/1: smp: Collapse arch_cpu_idle_dead() into cpu_die()
ARM: uaccess: provide uaccess_save_and_enable() and uaccess_restore()
ARM: mm: improve do_ldrd_abort macro
ARM: entry: ensure that IRQs are enabled when calling syscall_trace_exit()
ARM: entry: efficiency cleanups
ARM: entry: get rid of asm_trace_hardirqs_on_cond
ARM: uaccess: simplify user access assembly
ARM: domains: remove DOMAIN_TABLE
ARM: domains: keep vectors in separate domain
ARM: domains: get rid of manager mode for user domain
ARM: domains: move initial domain setting value to asm/domains.h
ARM: domains: provide domain_mask()
ARM: domains: switch to keeping domain value in register
ARM: 8419/1: dma-mapping: harmonize definition of DMA_ERROR_CODE
ARM: 8417/1: refactor bitops functions with BIT_MASK() and BIT_WORD()
ARM: 8416/1: Feroceon: use of_iomap() to map register base
ARM: 8415/1: early fixmap support for earlycon
...

+1459 -1116
+6
Documentation/devicetree/bindings/arm/l2cc.txt
···
  67  67      disable if zero.
  68  68    - arm,prefetch-offset : Override prefetch offset value. Valid values are
  69  69      0-7, 15, 23, and 31.
      70  + - arm,shared-override : The default behavior of the pl310 cache controller with
      71  +   respect to the shareable attribute is to transform "normal memory
      72  +   non-cacheable transactions" into "cacheable no allocate" (for reads) or
      73  +   "write through no write allocate" (for writes).
      74  +   On systems where this may cause DMA buffer corruption, this property must be
      75  +   specified to indicate that such transforms are precluded.
  70  76    - prefetch-data : Data prefetch. Value: <0> (forcibly disable), <1>
  71  77      (forcibly enable), property absent (retain settings set by firmware)
  72  78    - prefetch-instr : Instruction prefetch. Value: <0> (forcibly disable),
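
As an illustration of the new binding property, a hypothetical PL310 node might look like the following (the node name, address, and cache properties here are invented for the example, not taken from any real board file):

```dts
l2-cache-controller@fffef000 {
        compatible = "arm,pl310-cache";
        reg = <0xfffef000 0x1000>;
        cache-unified;
        cache-level = <2>;
        /* Keep the PL310 from transforming normal non-cacheable
         * transactions, which corrupts DMA buffers on this system. */
        arm,shared-override;
};
```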
+9 -3
Documentation/devicetree/bindings/arm/pmu.txt
···
  26  26
  27  27   Optional properties:
  28  28
  29      - - interrupt-affinity : Valid only when using SPIs, specifies a list of phandles
  30      -                        to CPU nodes corresponding directly to the affinity of
      29  + - interrupt-affinity : When using SPIs, specifies a list of phandles to CPU
      30  +                        nodes corresponding directly to the affinity of
  31  31                          the SPIs listed in the interrupts property.
  32  32
  33      -                        This property should be present when there is more than
      33  +                        When using a PPI, specifies a list of phandles to CPU
      34  +                        nodes corresponding to the set of CPUs which have
      35  +                        a PMU of this type signalling the PPI listed in the
      36  +                        interrupts property.
      37  +
      38  +                        This property should be present when there is more than
  34  39                          a single SPI.
      40  +
  35  41
  36  42   - qcom,no-pc-write : Indicates that this PMU doesn't support the 0xc and 0xd
  37  43                        events.
+13 -2
MAINTAINERS
··· 806 806 ARM PMU PROFILING AND DEBUGGING 807 807 M: Will Deacon <will.deacon@arm.com> 808 808 S: Maintained 809 - F: arch/arm/kernel/perf_event* 809 + F: arch/arm/kernel/perf_* 810 810 F: arch/arm/oprofile/common.c 811 - F: arch/arm/include/asm/pmu.h 812 811 F: arch/arm/kernel/hw_breakpoint.c 813 812 F: arch/arm/include/asm/hw_breakpoint.h 813 + F: arch/arm/include/asm/perf_event.h 814 + F: drivers/perf/arm_pmu.c 815 + F: include/linux/perf/arm_pmu.h 814 816 815 817 ARM PORT 816 818 M: Russell King <linux@arm.linux.org.uk> ··· 8121 8119 F: include/linux/power_supply.h 8122 8120 F: drivers/power/ 8123 8121 X: drivers/power/avs/ 8122 + 8123 + POWER STATE COORDINATION INTERFACE (PSCI) 8124 + M: Mark Rutland <mark.rutland@arm.com> 8125 + M: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> 8126 + L: linux-arm-kernel@lists.infradead.org 8127 + S: Maintained 8128 + F: drivers/firmware/psci.c 8129 + F: include/linux/psci.h 8130 + F: include/uapi/linux/psci.h 8124 8131 8125 8132 PNP SUPPORT 8126 8133 M: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>
+20 -5
arch/arm/Kconfig
··· 188 188 config ARCH_HAS_BANDGAP 189 189 bool 190 190 191 + config FIX_EARLYCON_MEM 192 + def_bool y if MMU 193 + 191 194 config GENERIC_HWEIGHT 192 195 bool 193 196 default y ··· 1499 1496 config ARM_PSCI 1500 1497 bool "Support for the ARM Power State Coordination Interface (PSCI)" 1501 1498 depends on CPU_V7 1499 + select ARM_PSCI_FW 1502 1500 help 1503 1501 Say Y here if you want Linux to communicate with system firmware 1504 1502 implementing the PSCI specification for CPU-centric power ··· 1704 1700 consumed by page tables. Setting this option will allow 1705 1701 user-space 2nd level page tables to reside in high memory. 1706 1702 1707 - config HW_PERF_EVENTS 1708 - bool "Enable hardware performance counter support for perf events" 1709 - depends on PERF_EVENTS 1703 + config CPU_SW_DOMAIN_PAN 1704 + bool "Enable use of CPU domains to implement privileged no-access" 1705 + depends on MMU && !ARM_LPAE 1710 1706 default y 1711 1707 help 1712 - Enable hardware performance counter support for perf events. If 1713 - disabled, perf events will use software events only. 1708 + Increase kernel security by ensuring that normal kernel accesses 1709 + are unable to access userspace addresses. This can help prevent 1710 + use-after-free bugs becoming an exploitable privilege escalation 1711 + by ensuring that magic values (such as LIST_POISON) will always 1712 + fault when dereferenced. 1713 + 1714 + CPUs with low-vector mappings use a best-efforts implementation. 1715 + Their lower 1MB needs to remain accessible for the vectors, but 1716 + the remainder of userspace will become appropriately inaccessible. 1717 + 1718 + config HW_PERF_EVENTS 1719 + def_bool y 1720 + depends on ARM_PMU 1714 1721 1715 1722 config SYS_SUPPORTS_HUGETLBFS 1716 1723 def_bool y
+4 -8
arch/arm/common/mcpm_platsmp.c
··· 65 65 return !mcpm_wait_for_cpu_powerdown(pcpu, pcluster); 66 66 } 67 67 68 - static int mcpm_cpu_disable(unsigned int cpu) 68 + static bool mcpm_cpu_can_disable(unsigned int cpu) 69 69 { 70 - /* 71 - * We assume all CPUs may be shut down. 72 - * This would be the hook to use for eventual Secure 73 - * OS migration requests as described in the PSCI spec. 74 - */ 75 - return 0; 70 + /* We assume all CPUs may be shut down. */ 71 + return true; 76 72 } 77 73 78 74 static void mcpm_cpu_die(unsigned int cpu) ··· 88 92 .smp_secondary_init = mcpm_secondary_init, 89 93 #ifdef CONFIG_HOTPLUG_CPU 90 94 .cpu_kill = mcpm_cpu_kill, 91 - .cpu_disable = mcpm_cpu_disable, 95 + .cpu_can_disable = mcpm_cpu_can_disable, 92 96 .cpu_die = mcpm_cpu_die, 93 97 #endif 94 98 };
-1
arch/arm/include/asm/Kbuild
··· 12 12 generic-y += kdebug.h 13 13 generic-y += local.h 14 14 generic-y += local64.h 15 - generic-y += mcs_spinlock.h 16 15 generic-y += mm-arch-hooks.h 17 16 generic-y += msgbuf.h 18 17 generic-y += param.h
+60 -9
arch/arm/include/asm/assembler.h
··· 108 108 .endm 109 109 #endif 110 110 111 - .macro asm_trace_hardirqs_off 111 + .macro asm_trace_hardirqs_off, save=1 112 112 #if defined(CONFIG_TRACE_IRQFLAGS) 113 + .if \save 113 114 stmdb sp!, {r0-r3, ip, lr} 115 + .endif 114 116 bl trace_hardirqs_off 117 + .if \save 115 118 ldmia sp!, {r0-r3, ip, lr} 119 + .endif 116 120 #endif 117 121 .endm 118 122 119 - .macro asm_trace_hardirqs_on_cond, cond 123 + .macro asm_trace_hardirqs_on, cond=al, save=1 120 124 #if defined(CONFIG_TRACE_IRQFLAGS) 121 125 /* 122 126 * actually the registers should be pushed and pop'd conditionally, but 123 127 * after bl the flags are certainly clobbered 124 128 */ 129 + .if \save 125 130 stmdb sp!, {r0-r3, ip, lr} 131 + .endif 126 132 bl\cond trace_hardirqs_on 133 + .if \save 127 134 ldmia sp!, {r0-r3, ip, lr} 135 + .endif 128 136 #endif 129 137 .endm 130 138 131 - .macro asm_trace_hardirqs_on 132 - asm_trace_hardirqs_on_cond al 133 - .endm 134 - 135 - .macro disable_irq 139 + .macro disable_irq, save=1 136 140 disable_irq_notrace 137 - asm_trace_hardirqs_off 141 + asm_trace_hardirqs_off \save 138 142 .endm 139 143 140 144 .macro enable_irq ··· 177 173 178 174 .macro restore_irqs, oldcpsr 179 175 tst \oldcpsr, #PSR_I_BIT 180 - asm_trace_hardirqs_on_cond eq 176 + asm_trace_hardirqs_on cond=eq 181 177 restore_irqs_notrace \oldcpsr 182 178 .endm 183 179 ··· 447 443 sbcccs \tmp, \tmp, \limit 448 444 bcs \bad 449 445 #endif 446 + .endm 447 + 448 + .macro uaccess_disable, tmp, isb=1 449 + #ifdef CONFIG_CPU_SW_DOMAIN_PAN 450 + /* 451 + * Whenever we re-enter userspace, the domains should always be 452 + * set appropriately. 453 + */ 454 + mov \tmp, #DACR_UACCESS_DISABLE 455 + mcr p15, 0, \tmp, c3, c0, 0 @ Set domain register 456 + .if \isb 457 + instr_sync 458 + .endif 459 + #endif 460 + .endm 461 + 462 + .macro uaccess_enable, tmp, isb=1 463 + #ifdef CONFIG_CPU_SW_DOMAIN_PAN 464 + /* 465 + * Whenever we re-enter userspace, the domains should always be 466 + * set appropriately. 
467 + */ 468 + mov \tmp, #DACR_UACCESS_ENABLE 469 + mcr p15, 0, \tmp, c3, c0, 0 470 + .if \isb 471 + instr_sync 472 + .endif 473 + #endif 474 + .endm 475 + 476 + .macro uaccess_save, tmp 477 + #ifdef CONFIG_CPU_SW_DOMAIN_PAN 478 + mrc p15, 0, \tmp, c3, c0, 0 479 + str \tmp, [sp, #S_FRAME_SIZE] 480 + #endif 481 + .endm 482 + 483 + .macro uaccess_restore 484 + #ifdef CONFIG_CPU_SW_DOMAIN_PAN 485 + ldr r0, [sp, #S_FRAME_SIZE] 486 + mcr p15, 0, r0, c3, c0, 0 487 + #endif 488 + .endm 489 + 490 + .macro uaccess_save_and_disable, tmp 491 + uaccess_save \tmp 492 + uaccess_disable \tmp 450 493 .endm 451 494 452 495 .irp c,,eq,ne,cs,cc,mi,pl,vs,vc,hi,ls,ge,lt,gt,le,hs,lo
+10 -3
arch/arm/include/asm/barrier.h
··· 2 2 #define __ASM_BARRIER_H 3 3 4 4 #ifndef __ASSEMBLY__ 5 - #include <asm/outercache.h> 6 5 7 6 #define nop() __asm__ __volatile__("mov\tr0,r0\t@ nop\n\t"); 8 7 ··· 36 37 #define dmb(x) __asm__ __volatile__ ("" : : : "memory") 37 38 #endif 38 39 40 + #ifdef CONFIG_ARM_HEAVY_MB 41 + extern void (*soc_mb)(void); 42 + extern void arm_heavy_mb(void); 43 + #define __arm_heavy_mb(x...) do { dsb(x); arm_heavy_mb(); } while (0) 44 + #else 45 + #define __arm_heavy_mb(x...) dsb(x) 46 + #endif 47 + 39 48 #ifdef CONFIG_ARCH_HAS_BARRIERS 40 49 #include <mach/barriers.h> 41 50 #elif defined(CONFIG_ARM_DMA_MEM_BUFFERABLE) || defined(CONFIG_SMP) 42 - #define mb() do { dsb(); outer_sync(); } while (0) 51 + #define mb() __arm_heavy_mb() 43 52 #define rmb() dsb() 44 - #define wmb() do { dsb(st); outer_sync(); } while (0) 53 + #define wmb() __arm_heavy_mb(st) 45 54 #define dma_rmb() dmb(osh) 46 55 #define dma_wmb() dmb(oshst) 47 56 #else
+12 -12
arch/arm/include/asm/bitops.h
··· 35 35 static inline void ____atomic_set_bit(unsigned int bit, volatile unsigned long *p) 36 36 { 37 37 unsigned long flags; 38 - unsigned long mask = 1UL << (bit & 31); 38 + unsigned long mask = BIT_MASK(bit); 39 39 40 - p += bit >> 5; 40 + p += BIT_WORD(bit); 41 41 42 42 raw_local_irq_save(flags); 43 43 *p |= mask; ··· 47 47 static inline void ____atomic_clear_bit(unsigned int bit, volatile unsigned long *p) 48 48 { 49 49 unsigned long flags; 50 - unsigned long mask = 1UL << (bit & 31); 50 + unsigned long mask = BIT_MASK(bit); 51 51 52 - p += bit >> 5; 52 + p += BIT_WORD(bit); 53 53 54 54 raw_local_irq_save(flags); 55 55 *p &= ~mask; ··· 59 59 static inline void ____atomic_change_bit(unsigned int bit, volatile unsigned long *p) 60 60 { 61 61 unsigned long flags; 62 - unsigned long mask = 1UL << (bit & 31); 62 + unsigned long mask = BIT_MASK(bit); 63 63 64 - p += bit >> 5; 64 + p += BIT_WORD(bit); 65 65 66 66 raw_local_irq_save(flags); 67 67 *p ^= mask; ··· 73 73 { 74 74 unsigned long flags; 75 75 unsigned int res; 76 - unsigned long mask = 1UL << (bit & 31); 76 + unsigned long mask = BIT_MASK(bit); 77 77 78 - p += bit >> 5; 78 + p += BIT_WORD(bit); 79 79 80 80 raw_local_irq_save(flags); 81 81 res = *p; ··· 90 90 { 91 91 unsigned long flags; 92 92 unsigned int res; 93 - unsigned long mask = 1UL << (bit & 31); 93 + unsigned long mask = BIT_MASK(bit); 94 94 95 - p += bit >> 5; 95 + p += BIT_WORD(bit); 96 96 97 97 raw_local_irq_save(flags); 98 98 res = *p; ··· 107 107 { 108 108 unsigned long flags; 109 109 unsigned int res; 110 - unsigned long mask = 1UL << (bit & 31); 110 + unsigned long mask = BIT_MASK(bit); 111 111 112 - p += bit >> 5; 112 + p += BIT_WORD(bit); 113 113 114 114 raw_local_irq_save(flags); 115 115 res = *p;
+17 -4
arch/arm/include/asm/cacheflush.h
··· 140 140 * is visible to DMA, or data written by DMA to system memory is 141 141 * visible to the CPU. 142 142 */ 143 - #define dmac_map_area cpu_cache.dma_map_area 144 - #define dmac_unmap_area cpu_cache.dma_unmap_area 145 143 #define dmac_flush_range cpu_cache.dma_flush_range 146 144 147 145 #else ··· 159 161 * is visible to DMA, or data written by DMA to system memory is 160 162 * visible to the CPU. 161 163 */ 162 - extern void dmac_map_area(const void *, size_t, int); 163 - extern void dmac_unmap_area(const void *, size_t, int); 164 164 extern void dmac_flush_range(const void *, const void *); 165 165 166 166 #endif ··· 501 505 502 506 void flush_uprobe_xol_access(struct page *page, unsigned long uaddr, 503 507 void *kaddr, unsigned long len); 508 + 509 + /** 510 + * secure_flush_area - ensure coherency across the secure boundary 511 + * @addr: virtual address 512 + * @size: size of region 513 + * 514 + * Ensure that the specified area of memory is coherent across the secure 515 + * boundary from the non-secure side. This is used when calling secure 516 + * firmware where the secure firmware does not ensure coherency. 517 + */ 518 + static inline void secure_flush_area(const void *addr, size_t size) 519 + { 520 + phys_addr_t phys = __pa(addr); 521 + 522 + __cpuc_flush_dcache_area((void *)addr, size); 523 + outer_flush_range(phys, phys + size); 524 + } 504 525 505 526 #endif
+1 -1
arch/arm/include/asm/dma-mapping.h
···
  14  14   #include <xen/xen.h>
  15  15   #include <asm/xen/hypervisor.h>
  16  16
  17      - #define DMA_ERROR_CODE (~0)
      17  + #define DMA_ERROR_CODE (~(dma_addr_t)0x0)
  18  18   extern struct dma_map_ops arm_dma_ops;
  19  19   extern struct dma_map_ops arm_coherent_dma_ops;
  20  20
+43 -10
arch/arm/include/asm/domain.h
··· 34 34 */ 35 35 #ifndef CONFIG_IO_36 36 36 #define DOMAIN_KERNEL 0 37 - #define DOMAIN_TABLE 0 38 37 #define DOMAIN_USER 1 39 38 #define DOMAIN_IO 2 40 39 #else 41 40 #define DOMAIN_KERNEL 2 42 - #define DOMAIN_TABLE 2 43 41 #define DOMAIN_USER 1 44 42 #define DOMAIN_IO 0 45 43 #endif 44 + #define DOMAIN_VECTORS 3 46 45 47 46 /* 48 47 * Domain types ··· 54 55 #define DOMAIN_MANAGER 1 55 56 #endif 56 57 57 - #define domain_val(dom,type) ((type) << (2*(dom))) 58 + #define domain_mask(dom) ((3) << (2 * (dom))) 59 + #define domain_val(dom,type) ((type) << (2 * (dom))) 60 + 61 + #ifdef CONFIG_CPU_SW_DOMAIN_PAN 62 + #define DACR_INIT \ 63 + (domain_val(DOMAIN_USER, DOMAIN_NOACCESS) | \ 64 + domain_val(DOMAIN_KERNEL, DOMAIN_MANAGER) | \ 65 + domain_val(DOMAIN_IO, DOMAIN_CLIENT) | \ 66 + domain_val(DOMAIN_VECTORS, DOMAIN_CLIENT)) 67 + #else 68 + #define DACR_INIT \ 69 + (domain_val(DOMAIN_USER, DOMAIN_CLIENT) | \ 70 + domain_val(DOMAIN_KERNEL, DOMAIN_MANAGER) | \ 71 + domain_val(DOMAIN_IO, DOMAIN_CLIENT) | \ 72 + domain_val(DOMAIN_VECTORS, DOMAIN_CLIENT)) 73 + #endif 74 + 75 + #define __DACR_DEFAULT \ 76 + domain_val(DOMAIN_KERNEL, DOMAIN_CLIENT) | \ 77 + domain_val(DOMAIN_IO, DOMAIN_CLIENT) | \ 78 + domain_val(DOMAIN_VECTORS, DOMAIN_CLIENT) 79 + 80 + #define DACR_UACCESS_DISABLE \ 81 + (__DACR_DEFAULT | domain_val(DOMAIN_USER, DOMAIN_NOACCESS)) 82 + #define DACR_UACCESS_ENABLE \ 83 + (__DACR_DEFAULT | domain_val(DOMAIN_USER, DOMAIN_CLIENT)) 58 84 59 85 #ifndef __ASSEMBLY__ 60 86 61 - #ifdef CONFIG_CPU_USE_DOMAINS 87 + static inline unsigned int get_domain(void) 88 + { 89 + unsigned int domain; 90 + 91 + asm( 92 + "mrc p15, 0, %0, c3, c0 @ get domain" 93 + : "=r" (domain)); 94 + 95 + return domain; 96 + } 97 + 62 98 static inline void set_domain(unsigned val) 63 99 { 64 100 asm volatile( ··· 102 68 isb(); 103 69 } 104 70 71 + #ifdef CONFIG_CPU_USE_DOMAINS 105 72 #define modify_domain(dom,type) \ 106 73 do { \ 107 - struct thread_info *thread = current_thread_info(); \ 
108 - unsigned int domain = thread->cpu_domain; \ 109 - domain &= ~domain_val(dom, DOMAIN_MANAGER); \ 110 - thread->cpu_domain = domain | domain_val(dom, type); \ 111 - set_domain(thread->cpu_domain); \ 74 + unsigned int domain = get_domain(); \ 75 + domain &= ~domain_mask(dom); \ 76 + domain = domain | domain_val(dom, type); \ 77 + set_domain(domain); \ 112 78 } while (0) 113 79 114 80 #else 115 - static inline void set_domain(unsigned val) { } 116 81 static inline void modify_domain(unsigned dom, unsigned type) { } 117 82 #endif 118 83
+14 -1
arch/arm/include/asm/fixmap.h
··· 6 6 #define FIXADDR_TOP (FIXADDR_END - PAGE_SIZE) 7 7 8 8 #include <asm/kmap_types.h> 9 + #include <asm/pgtable.h> 9 10 10 11 enum fixed_addresses { 11 - FIX_KMAP_BEGIN, 12 + FIX_EARLYCON_MEM_BASE, 13 + __end_of_permanent_fixed_addresses, 14 + 15 + FIX_KMAP_BEGIN = __end_of_permanent_fixed_addresses, 12 16 FIX_KMAP_END = FIX_KMAP_BEGIN + (KM_TYPE_NR * NR_CPUS) - 1, 13 17 14 18 /* Support writing RO kernel text via kprobes, jump labels, etc. */ ··· 22 18 __end_of_fixed_addresses 23 19 }; 24 20 21 + #define FIXMAP_PAGE_COMMON (L_PTE_YOUNG | L_PTE_PRESENT | L_PTE_XN | L_PTE_DIRTY) 22 + 23 + #define FIXMAP_PAGE_NORMAL (FIXMAP_PAGE_COMMON | L_PTE_MT_WRITEBACK) 24 + 25 + /* Used by set_fixmap_(io|nocache), both meant for mapping a device */ 26 + #define FIXMAP_PAGE_IO (FIXMAP_PAGE_COMMON | L_PTE_MT_DEV_SHARED | L_PTE_SHARED) 27 + #define FIXMAP_PAGE_NOCACHE FIXMAP_PAGE_IO 28 + 25 29 void __set_fixmap(enum fixed_addresses idx, phys_addr_t phys, pgprot_t prot); 30 + void __init early_fixmap_init(void); 26 31 27 32 #include <asm-generic/fixmap.h> 28 33
+17 -2
arch/arm/include/asm/futex.h
··· 22 22 #ifdef CONFIG_SMP 23 23 24 24 #define __futex_atomic_op(insn, ret, oldval, tmp, uaddr, oparg) \ 25 + ({ \ 26 + unsigned int __ua_flags; \ 25 27 smp_mb(); \ 26 28 prefetchw(uaddr); \ 29 + __ua_flags = uaccess_save_and_enable(); \ 27 30 __asm__ __volatile__( \ 28 31 "1: ldrex %1, [%3]\n" \ 29 32 " " insn "\n" \ ··· 37 34 __futex_atomic_ex_table("%5") \ 38 35 : "=&r" (ret), "=&r" (oldval), "=&r" (tmp) \ 39 36 : "r" (uaddr), "r" (oparg), "Ir" (-EFAULT) \ 40 - : "cc", "memory") 37 + : "cc", "memory"); \ 38 + uaccess_restore(__ua_flags); \ 39 + }) 41 40 42 41 static inline int 43 42 futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *uaddr, 44 43 u32 oldval, u32 newval) 45 44 { 45 + unsigned int __ua_flags; 46 46 int ret; 47 47 u32 val; 48 48 ··· 55 49 smp_mb(); 56 50 /* Prefetching cannot fault */ 57 51 prefetchw(uaddr); 52 + __ua_flags = uaccess_save_and_enable(); 58 53 __asm__ __volatile__("@futex_atomic_cmpxchg_inatomic\n" 59 54 "1: ldrex %1, [%4]\n" 60 55 " teq %1, %2\n" ··· 68 61 : "=&r" (ret), "=&r" (val) 69 62 : "r" (oldval), "r" (newval), "r" (uaddr), "Ir" (-EFAULT) 70 63 : "cc", "memory"); 64 + uaccess_restore(__ua_flags); 71 65 smp_mb(); 72 66 73 67 *uval = val; ··· 81 73 #include <asm/domain.h> 82 74 83 75 #define __futex_atomic_op(insn, ret, oldval, tmp, uaddr, oparg) \ 76 + ({ \ 77 + unsigned int __ua_flags = uaccess_save_and_enable(); \ 84 78 __asm__ __volatile__( \ 85 79 "1: " TUSER(ldr) " %1, [%3]\n" \ 86 80 " " insn "\n" \ ··· 91 81 __futex_atomic_ex_table("%5") \ 92 82 : "=&r" (ret), "=&r" (oldval), "=&r" (tmp) \ 93 83 : "r" (uaddr), "r" (oparg), "Ir" (-EFAULT) \ 94 - : "cc", "memory") 84 + : "cc", "memory"); \ 85 + uaccess_restore(__ua_flags); \ 86 + }) 95 87 96 88 static inline int 97 89 futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *uaddr, 98 90 u32 oldval, u32 newval) 99 91 { 92 + unsigned int __ua_flags; 100 93 int ret = 0; 101 94 u32 val; 102 95 ··· 107 94 return -EFAULT; 108 95 109 96 preempt_disable(); 97 + __ua_flags = 
uaccess_save_and_enable(); 110 98 __asm__ __volatile__("@futex_atomic_cmpxchg_inatomic\n" 111 99 "1: " TUSER(ldr) " %1, [%4]\n" 112 100 " teq %1, %2\n" ··· 117 103 : "+r" (ret), "=&r" (val) 118 104 : "r" (oldval), "r" (newval), "r" (uaddr), "Ir" (-EFAULT) 119 105 : "cc", "memory"); 106 + uaccess_restore(__ua_flags); 120 107 121 108 *uval = val; 122 109 preempt_enable();
-2
arch/arm/include/asm/glue-cache.h
··· 158 158 #define __cpuc_coherent_user_range __glue(_CACHE,_coherent_user_range) 159 159 #define __cpuc_flush_dcache_area __glue(_CACHE,_flush_kern_dcache_area) 160 160 161 - #define dmac_map_area __glue(_CACHE,_dma_map_area) 162 - #define dmac_unmap_area __glue(_CACHE,_dma_unmap_area) 163 161 #define dmac_flush_range __glue(_CACHE,_dma_flush_range) 164 162 #endif 165 163
-17
arch/arm/include/asm/outercache.h
··· 129 129 130 130 #endif 131 131 132 - #ifdef CONFIG_OUTER_CACHE_SYNC 133 - /** 134 - * outer_sync - perform a sync point for outer cache 135 - * 136 - * Ensure that all outer cache operations are complete and any store 137 - * buffers are drained. 138 - */ 139 - static inline void outer_sync(void) 140 - { 141 - if (outer_cache.sync) 142 - outer_cache.sync(); 143 - } 144 - #else 145 - static inline void outer_sync(void) 146 - { } 147 - #endif 148 - 149 132 #endif /* __ASM_OUTERCACHE_H */
+1
arch/arm/include/asm/pgtable-2level-hwdef.h
··· 23 23 #define PMD_PXNTABLE (_AT(pmdval_t, 1) << 2) /* v7 */ 24 24 #define PMD_BIT4 (_AT(pmdval_t, 1) << 4) 25 25 #define PMD_DOMAIN(x) (_AT(pmdval_t, (x)) << 5) 26 + #define PMD_DOMAIN_MASK PMD_DOMAIN(0x0f) 26 27 #define PMD_PROTECTION (_AT(pmdval_t, 1) << 9) /* v5 */ 27 28 /* 28 29 * - section
+2 -2
arch/arm/include/asm/pmu.h include/linux/perf/arm_pmu.h
··· 30 30 irq_handler_t pmu_handler); 31 31 }; 32 32 33 - #ifdef CONFIG_HW_PERF_EVENTS 33 + #ifdef CONFIG_ARM_PMU 34 34 35 35 /* 36 36 * The ARMv7 CPU PMU supports up to 32 event counters. ··· 149 149 const struct of_device_id *of_table, 150 150 const struct pmu_probe_info *probe_table); 151 151 152 - #endif /* CONFIG_HW_PERF_EVENTS */ 152 + #endif /* CONFIG_ARM_PMU */ 153 153 154 154 #endif /* __ARM_PMU_H__ */
-23
arch/arm/include/asm/psci.h
··· 14 14 #ifndef __ASM_ARM_PSCI_H 15 15 #define __ASM_ARM_PSCI_H 16 16 17 - #define PSCI_POWER_STATE_TYPE_STANDBY 0 18 - #define PSCI_POWER_STATE_TYPE_POWER_DOWN 1 19 - 20 - struct psci_power_state { 21 - u16 id; 22 - u8 type; 23 - u8 affinity_level; 24 - }; 25 - 26 - struct psci_operations { 27 - int (*cpu_suspend)(struct psci_power_state state, 28 - unsigned long entry_point); 29 - int (*cpu_off)(struct psci_power_state state); 30 - int (*cpu_on)(unsigned long cpuid, unsigned long entry_point); 31 - int (*migrate)(unsigned long cpuid); 32 - int (*affinity_info)(unsigned long target_affinity, 33 - unsigned long lowest_affinity_level); 34 - int (*migrate_info_type)(void); 35 - }; 36 - 37 - extern struct psci_operations psci_ops; 38 17 extern struct smp_operations psci_smp_ops; 39 18 40 19 #ifdef CONFIG_ARM_PSCI 41 - int psci_init(void); 42 20 bool psci_smp_available(void); 43 21 #else 44 - static inline int psci_init(void) { return 0; } 45 22 static inline bool psci_smp_available(void) { return false; } 46 23 #endif 47 24
+1 -1
arch/arm/include/asm/smp.h
··· 74 74 extern int __cpu_disable(void); 75 75 76 76 extern void __cpu_die(unsigned int cpu); 77 - extern void cpu_die(void); 78 77 79 78 extern void arch_send_call_function_single_ipi(int cpu); 80 79 extern void arch_send_call_function_ipi_mask(const struct cpumask *mask); ··· 104 105 #ifdef CONFIG_HOTPLUG_CPU 105 106 int (*cpu_kill)(unsigned int cpu); 106 107 void (*cpu_die)(unsigned int cpu); 108 + bool (*cpu_can_disable)(unsigned int cpu); 107 109 int (*cpu_disable)(unsigned int cpu); 108 110 #endif 109 111 #endif
+9
arch/arm/include/asm/smp_plat.h
··· 107 107 extern int platform_can_secondary_boot(void); 108 108 extern int platform_can_cpu_hotplug(void); 109 109 110 + #ifdef CONFIG_HOTPLUG_CPU 111 + extern int platform_can_hotplug_cpu(unsigned int cpu); 112 + #else 113 + static inline int platform_can_hotplug_cpu(unsigned int cpu) 114 + { 115 + return 0; 116 + } 117 + #endif 118 + 110 119 #endif
+8 -15
arch/arm/include/asm/thread_info.h
··· 74 74 .flags = 0, \ 75 75 .preempt_count = INIT_PREEMPT_COUNT, \ 76 76 .addr_limit = KERNEL_DS, \ 77 - .cpu_domain = domain_val(DOMAIN_USER, DOMAIN_MANAGER) | \ 78 - domain_val(DOMAIN_KERNEL, DOMAIN_MANAGER) | \ 79 - domain_val(DOMAIN_IO, DOMAIN_CLIENT), \ 80 77 } 81 78 82 79 #define init_thread_info (init_thread_union.thread_info) ··· 133 136 134 137 /* 135 138 * thread information flags: 136 - * TIF_SYSCALL_TRACE - syscall trace active 137 - * TIF_SYSCAL_AUDIT - syscall auditing active 138 - * TIF_SIGPENDING - signal pending 139 - * TIF_NEED_RESCHED - rescheduling necessary 140 - * TIF_NOTIFY_RESUME - callback before returning to user 141 139 * TIF_USEDFPU - FPU was used by this task this quantum (SMP) 142 140 * TIF_POLLING_NRFLAG - true if poll_idle() is polling TIF_NEED_RESCHED 143 141 */ 144 - #define TIF_SIGPENDING 0 145 - #define TIF_NEED_RESCHED 1 142 + #define TIF_SIGPENDING 0 /* signal pending */ 143 + #define TIF_NEED_RESCHED 1 /* rescheduling necessary */ 146 144 #define TIF_NOTIFY_RESUME 2 /* callback before returning to user */ 147 - #define TIF_UPROBE 7 148 - #define TIF_SYSCALL_TRACE 8 149 - #define TIF_SYSCALL_AUDIT 9 150 - #define TIF_SYSCALL_TRACEPOINT 10 151 - #define TIF_SECCOMP 11 /* seccomp syscall filtering active */ 145 + #define TIF_UPROBE 3 /* breakpointed or singlestepping */ 146 + #define TIF_SYSCALL_TRACE 4 /* syscall trace active */ 147 + #define TIF_SYSCALL_AUDIT 5 /* syscall auditing active */ 148 + #define TIF_SYSCALL_TRACEPOINT 6 /* syscall tracepoint instrumentation */ 149 + #define TIF_SECCOMP 7 /* seccomp syscall filtering active */ 150 + 152 151 #define TIF_NOHZ 12 /* in adaptive nohz mode */ 153 152 #define TIF_USING_IWMMXT 17 154 153 #define TIF_MEMDIE 18 /* is terminating due to OOM killer */
+92 -40
arch/arm/include/asm/uaccess.h
··· 50 50 extern int fixup_exception(struct pt_regs *regs); 51 51 52 52 /* 53 + * These two functions allow hooking accesses to userspace to increase 54 + * system integrity by ensuring that the kernel can not inadvertantly 55 + * perform such accesses (eg, via list poison values) which could then 56 + * be exploited for priviledge escalation. 57 + */ 58 + static inline unsigned int uaccess_save_and_enable(void) 59 + { 60 + #ifdef CONFIG_CPU_SW_DOMAIN_PAN 61 + unsigned int old_domain = get_domain(); 62 + 63 + /* Set the current domain access to permit user accesses */ 64 + set_domain((old_domain & ~domain_mask(DOMAIN_USER)) | 65 + domain_val(DOMAIN_USER, DOMAIN_CLIENT)); 66 + 67 + return old_domain; 68 + #else 69 + return 0; 70 + #endif 71 + } 72 + 73 + static inline void uaccess_restore(unsigned int flags) 74 + { 75 + #ifdef CONFIG_CPU_SW_DOMAIN_PAN 76 + /* Restore the user access mask */ 77 + set_domain(flags); 78 + #endif 79 + } 80 + 81 + /* 53 82 * These two are intentionally not defined anywhere - if the kernel 54 83 * code generates any references to them, that's a bug. 
55 84 */ ··· 194 165 register typeof(x) __r2 asm("r2"); \ 195 166 register unsigned long __l asm("r1") = __limit; \ 196 167 register int __e asm("r0"); \ 168 + unsigned int __ua_flags = uaccess_save_and_enable(); \ 197 169 switch (sizeof(*(__p))) { \ 198 170 case 1: \ 199 171 if (sizeof((x)) >= 8) \ ··· 222 192 break; \ 223 193 default: __e = __get_user_bad(); break; \ 224 194 } \ 195 + uaccess_restore(__ua_flags); \ 225 196 x = (typeof(*(p))) __r2; \ 226 197 __e; \ 227 198 }) ··· 255 224 register const typeof(*(p)) __user *__p asm("r0") = __tmp_p; \ 256 225 register unsigned long __l asm("r1") = __limit; \ 257 226 register int __e asm("r0"); \ 227 + unsigned int __ua_flags = uaccess_save_and_enable(); \ 258 228 switch (sizeof(*(__p))) { \ 259 229 case 1: \ 260 230 __put_user_x(__r2, __p, __e, __l, 1); \ ··· 271 239 break; \ 272 240 default: __e = __put_user_bad(); break; \ 273 241 } \ 242 + uaccess_restore(__ua_flags); \ 274 243 __e; \ 275 244 }) 276 245 ··· 333 300 do { \ 334 301 unsigned long __gu_addr = (unsigned long)(ptr); \ 335 302 unsigned long __gu_val; \ 303 + unsigned int __ua_flags; \ 336 304 __chk_user_ptr(ptr); \ 337 305 might_fault(); \ 306 + __ua_flags = uaccess_save_and_enable(); \ 338 307 switch (sizeof(*(ptr))) { \ 339 308 case 1: __get_user_asm_byte(__gu_val, __gu_addr, err); break; \ 340 309 case 2: __get_user_asm_half(__gu_val, __gu_addr, err); break; \ 341 310 case 4: __get_user_asm_word(__gu_val, __gu_addr, err); break; \ 342 311 default: (__gu_val) = __get_user_bad(); \ 343 312 } \ 313 + uaccess_restore(__ua_flags); \ 344 314 (x) = (__typeof__(*(ptr)))__gu_val; \ 345 315 } while (0) 346 316 347 - #define __get_user_asm_byte(x, addr, err) \ 317 + #define __get_user_asm(x, addr, err, instr) \ 348 318 __asm__ __volatile__( \ 349 - "1: " TUSER(ldrb) " %1,[%2],#0\n" \ 319 + "1: " TUSER(instr) " %1, [%2], #0\n" \ 350 320 "2:\n" \ 351 321 " .pushsection .text.fixup,\"ax\"\n" \ 352 322 " .align 2\n" \ ··· 364 328 : "+r" (err), "=&r" (x) \ 365 329 : 
"r" (addr), "i" (-EFAULT) \ 366 330 : "cc") 331 + 332 + #define __get_user_asm_byte(x, addr, err) \ 333 + __get_user_asm(x, addr, err, ldrb) 367 334 368 335 #ifndef __ARMEB__ 369 336 #define __get_user_asm_half(x, __gu_addr, err) \ ··· 387 348 #endif 388 349 389 350 #define __get_user_asm_word(x, addr, err) \ 390 - __asm__ __volatile__( \ 391 - "1: " TUSER(ldr) " %1,[%2],#0\n" \ 392 - "2:\n" \ 393 - " .pushsection .text.fixup,\"ax\"\n" \ 394 - " .align 2\n" \ 395 - "3: mov %0, %3\n" \ 396 - " mov %1, #0\n" \ 397 - " b 2b\n" \ 398 - " .popsection\n" \ 399 - " .pushsection __ex_table,\"a\"\n" \ 400 - " .align 3\n" \ 401 - " .long 1b, 3b\n" \ 402 - " .popsection" \ 403 - : "+r" (err), "=&r" (x) \ 404 - : "r" (addr), "i" (-EFAULT) \ 405 - : "cc") 351 + __get_user_asm(x, addr, err, ldr) 406 352 407 353 #define __put_user(x, ptr) \ 408 354 ({ \ ··· 405 381 #define __put_user_err(x, ptr, err) \ 406 382 do { \ 407 383 unsigned long __pu_addr = (unsigned long)(ptr); \ 384 + unsigned int __ua_flags; \ 408 385 __typeof__(*(ptr)) __pu_val = (x); \ 409 386 __chk_user_ptr(ptr); \ 410 387 might_fault(); \ 388 + __ua_flags = uaccess_save_and_enable(); \ 411 389 switch (sizeof(*(ptr))) { \ 412 390 case 1: __put_user_asm_byte(__pu_val, __pu_addr, err); break; \ 413 391 case 2: __put_user_asm_half(__pu_val, __pu_addr, err); break; \ ··· 417 391 case 8: __put_user_asm_dword(__pu_val, __pu_addr, err); break; \ 418 392 default: __put_user_bad(); \ 419 393 } \ 394 + uaccess_restore(__ua_flags); \ 420 395 } while (0) 421 396 422 - #define __put_user_asm_byte(x, __pu_addr, err) \ 397 + #define __put_user_asm(x, __pu_addr, err, instr) \ 423 398 __asm__ __volatile__( \ 424 - "1: " TUSER(strb) " %1,[%2],#0\n" \ 399 + "1: " TUSER(instr) " %1, [%2], #0\n" \ 425 400 "2:\n" \ 426 401 " .pushsection .text.fixup,\"ax\"\n" \ 427 402 " .align 2\n" \ ··· 436 409 : "+r" (err) \ 437 410 : "r" (x), "r" (__pu_addr), "i" (-EFAULT) \ 438 411 : "cc") 412 + 413 + #define __put_user_asm_byte(x, __pu_addr, err) 
\ 414 + __put_user_asm(x, __pu_addr, err, strb) 439 415 440 416 #ifndef __ARMEB__ 441 417 #define __put_user_asm_half(x, __pu_addr, err) \ ··· 457 427 #endif 458 428 459 429 #define __put_user_asm_word(x, __pu_addr, err) \ 460 - __asm__ __volatile__( \ 461 - "1: " TUSER(str) " %1,[%2],#0\n" \ 462 - "2:\n" \ 463 - " .pushsection .text.fixup,\"ax\"\n" \ 464 - " .align 2\n" \ 465 - "3: mov %0, %3\n" \ 466 - " b 2b\n" \ 467 - " .popsection\n" \ 468 - " .pushsection __ex_table,\"a\"\n" \ 469 - " .align 3\n" \ 470 - " .long 1b, 3b\n" \ 471 - " .popsection" \ 472 - : "+r" (err) \ 473 - : "r" (x), "r" (__pu_addr), "i" (-EFAULT) \ 474 - : "cc") 430 + __put_user_asm(x, __pu_addr, err, str) 475 431 476 432 #ifndef __ARMEB__ 477 433 #define __reg_oper0 "%R2" ··· 490 474 491 475 492 476 #ifdef CONFIG_MMU 493 - extern unsigned long __must_check __copy_from_user(void *to, const void __user *from, unsigned long n); 494 - extern unsigned long __must_check __copy_to_user(void __user *to, const void *from, unsigned long n); 495 - extern unsigned long __must_check __copy_to_user_std(void __user *to, const void *from, unsigned long n); 496 - extern unsigned long __must_check __clear_user(void __user *addr, unsigned long n); 497 - extern unsigned long __must_check __clear_user_std(void __user *addr, unsigned long n); 477 + extern unsigned long __must_check 478 + arm_copy_from_user(void *to, const void __user *from, unsigned long n); 479 + 480 + static inline unsigned long __must_check 481 + __copy_from_user(void *to, const void __user *from, unsigned long n) 482 + { 483 + unsigned int __ua_flags = uaccess_save_and_enable(); 484 + n = arm_copy_from_user(to, from, n); 485 + uaccess_restore(__ua_flags); 486 + return n; 487 + } 488 + 489 + extern unsigned long __must_check 490 + arm_copy_to_user(void __user *to, const void *from, unsigned long n); 491 + extern unsigned long __must_check 492 + __copy_to_user_std(void __user *to, const void *from, unsigned long n); 493 + 494 + static inline 
unsigned long __must_check 495 + __copy_to_user(void __user *to, const void *from, unsigned long n) 496 + { 497 + unsigned int __ua_flags = uaccess_save_and_enable(); 498 + n = arm_copy_to_user(to, from, n); 499 + uaccess_restore(__ua_flags); 500 + return n; 501 + } 502 + 503 + extern unsigned long __must_check 504 + arm_clear_user(void __user *addr, unsigned long n); 505 + extern unsigned long __must_check 506 + __clear_user_std(void __user *addr, unsigned long n); 507 + 508 + static inline unsigned long __must_check 509 + __clear_user(void __user *addr, unsigned long n) 510 + { 511 + unsigned int __ua_flags = uaccess_save_and_enable(); 512 + n = arm_clear_user(addr, n); 513 + uaccess_restore(__ua_flags); 514 + return n; 515 + } 516 + 498 517 #else 499 518 #define __copy_from_user(to, from, n) (memcpy(to, (void __force *)from, n), 0) 500 519 #define __copy_to_user(to, from, n) (memcpy((void __force *)to, from, n), 0) ··· 562 511 return n; 563 512 } 564 513 514 + /* These are from lib/ code, and use __get_user() and friends */ 565 515 extern long strncpy_from_user(char *dest, const char __user *src, long count); 566 516 567 517 extern __must_check long strlen_user(const char __user *str);
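The uaccess.h hunk above wraps every user-copy primitive in `uaccess_save_and_enable()` / `uaccess_restore()` so the "privileged no access" window is only open for the duration of the copy. A minimal user-space model of that pattern, with `mock_dacr` standing in for the real Domain Access Control register (the names and the 0/1 encoding are illustrative, not the kernel's):

```c
#include <assert.h>
#include <string.h>

/* mock_dacr models the DACR user-domain field: 0 = user access
 * disabled (faults), 1 = access permitted. */
static unsigned int mock_dacr;

static unsigned int uaccess_save_and_enable(void)
{
    unsigned int old = mock_dacr;
    mock_dacr = 1;              /* open the user-access window */
    return old;
}

static void uaccess_restore(unsigned int flags)
{
    mock_dacr = flags;          /* put the previous state back */
}

static unsigned long arm_copy_from_user(void *to, const void *from,
                                        unsigned long n)
{
    assert(mock_dacr == 1);     /* copy must run inside the window */
    memcpy(to, from, n);
    return 0;                   /* returns bytes NOT copied */
}

/* The inline wrapper pattern from the hunk: save+enable, copy,
 * restore - so a stray kernel load/store outside this window faults. */
static unsigned long __copy_from_user(void *to, const void *from,
                                      unsigned long n)
{
    unsigned int ua_flags = uaccess_save_and_enable();
    n = arm_copy_from_user(to, from, n);
    uaccess_restore(ua_flags);
    return n;
}
```

The same shape is applied to `arm_copy_to_user()` and `arm_clear_user()` in the hunk; only the wrapped primitive changes.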
+2 -3
arch/arm/kernel/Makefile
··· 71 71 obj-$(CONFIG_CPU_PJ4B) += pj4-cp0.o 72 72 obj-$(CONFIG_IWMMXT) += iwmmxt.o 73 73 obj-$(CONFIG_PERF_EVENTS) += perf_regs.o perf_callchain.o 74 - obj-$(CONFIG_HW_PERF_EVENTS) += perf_event.o \ 75 - perf_event_xscale.o perf_event_v6.o \ 74 + obj-$(CONFIG_HW_PERF_EVENTS) += perf_event_xscale.o perf_event_v6.o \ 76 75 perf_event_v7.o 77 76 CFLAGS_pj4-cp0.o := -marm 78 77 AFLAGS_iwmmxt.o := -Wa,-mcpu=iwmmxt ··· 88 89 89 90 obj-$(CONFIG_ARM_VIRT_EXT) += hyp-stub.o 90 91 ifeq ($(CONFIG_ARM_PSCI),y) 91 - obj-y += psci.o psci-call.o 92 + obj-y += psci-call.o 92 93 obj-$(CONFIG_SMP) += psci_smp.o 93 94 endif 94 95
+3 -3
arch/arm/kernel/armksyms.c
··· 97 97 #ifdef CONFIG_MMU 98 98 EXPORT_SYMBOL(copy_page); 99 99 100 - EXPORT_SYMBOL(__copy_from_user); 101 - EXPORT_SYMBOL(__copy_to_user); 102 - EXPORT_SYMBOL(__clear_user); 100 + EXPORT_SYMBOL(arm_copy_from_user); 101 + EXPORT_SYMBOL(arm_copy_to_user); 102 + EXPORT_SYMBOL(arm_clear_user); 103 103 104 104 EXPORT_SYMBOL(__get_user_1); 105 105 EXPORT_SYMBOL(__get_user_2);
+24 -8
arch/arm/kernel/entry-armv.S
··· 149 149 #define SPFIX(code...) 150 150 #endif 151 151 152 - .macro svc_entry, stack_hole=0, trace=1 152 + .macro svc_entry, stack_hole=0, trace=1, uaccess=1 153 153 UNWIND(.fnstart ) 154 154 UNWIND(.save {r0 - pc} ) 155 - sub sp, sp, #(S_FRAME_SIZE + \stack_hole - 4) 155 + sub sp, sp, #(S_FRAME_SIZE + 8 + \stack_hole - 4) 156 156 #ifdef CONFIG_THUMB2_KERNEL 157 157 SPFIX( str r0, [sp] ) @ temporarily saved 158 158 SPFIX( mov r0, sp ) ··· 167 167 ldmia r0, {r3 - r5} 168 168 add r7, sp, #S_SP - 4 @ here for interlock avoidance 169 169 mov r6, #-1 @ "" "" "" "" 170 - add r2, sp, #(S_FRAME_SIZE + \stack_hole - 4) 170 + add r2, sp, #(S_FRAME_SIZE + 8 + \stack_hole - 4) 171 171 SPFIX( addeq r2, r2, #4 ) 172 172 str r3, [sp, #-4]! @ save the "real" r0 copied 173 173 @ from the exception stack ··· 185 185 @ 186 186 stmia r7, {r2 - r6} 187 187 188 + uaccess_save r0 189 + .if \uaccess 190 + uaccess_disable r0 191 + .endif 192 + 188 193 .if \trace 189 194 #ifdef CONFIG_TRACE_IRQFLAGS 190 195 bl trace_hardirqs_off ··· 199 194 200 195 .align 5 201 196 __dabt_svc: 202 - svc_entry 197 + svc_entry uaccess=0 203 198 mov r2, sp 204 199 dabt_helper 205 200 THUMB( ldr r5, [sp, #S_PSR] ) @ potentially updated CPSR ··· 373 368 #error "sizeof(struct pt_regs) must be a multiple of 8" 374 369 #endif 375 370 376 - .macro usr_entry, trace=1 371 + .macro usr_entry, trace=1, uaccess=1 377 372 UNWIND(.fnstart ) 378 373 UNWIND(.cantunwind ) @ don't unwind the user space 379 374 sub sp, sp, #S_FRAME_SIZE ··· 404 399 stmia r0, {r4 - r6} 405 400 ARM( stmdb r0, {sp, lr}^ ) 406 401 THUMB( store_user_sp_lr r0, r1, S_SP - S_PC ) 402 + 403 + .if \uaccess 404 + uaccess_disable ip 405 + .endif 407 406 408 407 @ Enable the alignment trap while in kernel mode 409 408 ATRAP( teq r8, r7) ··· 444 435 445 436 .align 5 446 437 __dabt_usr: 447 - usr_entry 438 + usr_entry uaccess=0 448 439 kuser_cmpxchg_check 449 440 mov r2, sp 450 441 dabt_helper ··· 467 458 468 459 .align 5 469 460 __und_usr: 470 - usr_entry 
461 + usr_entry uaccess=0 471 462 472 463 mov r2, r4 473 464 mov r3, r5 ··· 492 483 sub r4, r2, #4 @ ARM instr at LR - 4 493 484 1: ldrt r0, [r4] 494 485 ARM_BE8(rev r0, r0) @ little endian instruction 486 + 487 + uaccess_disable ip 495 488 496 489 @ r0 = 32-bit ARM instruction which caused the exception 497 490 @ r2 = PC value for the following instruction (:= regs->ARM_pc) ··· 529 518 2: ldrht r5, [r4] 530 519 ARM_BE8(rev16 r5, r5) @ little endian instruction 531 520 cmp r5, #0xe800 @ 32bit instruction if xx != 0 532 - blo __und_usr_fault_16 @ 16bit undefined instruction 521 + blo __und_usr_fault_16_pan @ 16bit undefined instruction 533 522 3: ldrht r0, [r2] 534 523 ARM_BE8(rev16 r0, r0) @ little endian instruction 524 + uaccess_disable ip 535 525 add r2, r2, #2 @ r2 is PC + 2, make it PC + 4 536 526 str r2, [sp, #S_PC] @ it's a 2x16bit instr, update 537 527 orr r0, r0, r5, lsl #16 ··· 727 715 __und_usr_fault_32: 728 716 mov r1, #4 729 717 b 1f 718 + __und_usr_fault_16_pan: 719 + uaccess_disable ip 730 720 __und_usr_fault_16: 731 721 mov r1, #2 732 722 1: mov r0, sp ··· 784 770 ldr r4, [r2, #TI_TP_VALUE] 785 771 ldr r5, [r2, #TI_TP_VALUE + 4] 786 772 #ifdef CONFIG_CPU_USE_DOMAINS 773 + mrc p15, 0, r6, c3, c0, 0 @ Get domain register 774 + str r6, [r1, #TI_CPU_DOMAIN] @ Save old domain register 787 775 ldr r6, [r2, #TI_CPU_DOMAIN] 788 776 #endif 789 777 switch_tls r1, r4, r5, r3, r7
+47 -16
arch/arm/kernel/entry-common.S
··· 24 24 25 25 26 26 .align 5 27 + #if !(IS_ENABLED(CONFIG_TRACE_IRQFLAGS) || IS_ENABLED(CONFIG_CONTEXT_TRACKING)) 27 28 /* 28 - * This is the fast syscall return path. We do as little as 29 - * possible here, and this includes saving r0 back into the SVC 30 - * stack. 29 + * This is the fast syscall return path. We do as little as possible here, 30 + * such as avoiding writing r0 to the stack. We only use this path if we 31 + * have tracing and context tracking disabled - the overheads from those 32 + * features make this path too inefficient. 31 33 */ 32 34 ret_fast_syscall: 33 35 UNWIND(.fnstart ) 34 36 UNWIND(.cantunwind ) 35 - disable_irq @ disable interrupts 37 + disable_irq_notrace @ disable interrupts 36 38 ldr r1, [tsk, #TI_FLAGS] @ re-check for syscall tracing 37 - tst r1, #_TIF_SYSCALL_WORK 38 - bne __sys_trace_return 39 - tst r1, #_TIF_WORK_MASK 39 + tst r1, #_TIF_SYSCALL_WORK | _TIF_WORK_MASK 40 40 bne fast_work_pending 41 - asm_trace_hardirqs_on 42 41 43 42 /* perform architecture specific actions before user return */ 44 43 arch_ret_to_user r1, lr 45 - ct_user_enter 46 44 47 45 restore_user_regs fast = 1, offset = S_OFF 48 46 UNWIND(.fnend ) 47 + ENDPROC(ret_fast_syscall) 49 48 50 - /* 51 - * Ok, we need to do extra processing, enter the slow path. 52 - */ 49 + /* Ok, we need to do extra processing, enter the slow path. */ 53 50 fast_work_pending: 54 51 str r0, [sp, #S_R0+S_OFF]! @ returned r0 55 - work_pending: 52 + /* fall through to work_pending */ 53 + #else 54 + /* 55 + * The "replacement" ret_fast_syscall for when tracing or context tracking 56 + * is enabled. As we will need to call out to some C functions, we save 57 + * r0 first to avoid needing to save registers around each C function call. 58 + */ 59 + ret_fast_syscall: 60 + UNWIND(.fnstart ) 61 + UNWIND(.cantunwind ) 62 + str r0, [sp, #S_R0 + S_OFF]! 
@ save returned r0 63 + disable_irq_notrace @ disable interrupts 64 + ldr r1, [tsk, #TI_FLAGS] @ re-check for syscall tracing 65 + tst r1, #_TIF_SYSCALL_WORK | _TIF_WORK_MASK 66 + beq no_work_pending 67 + UNWIND(.fnend ) 68 + ENDPROC(ret_fast_syscall) 69 + 70 + /* Slower path - fall through to work_pending */ 71 + #endif 72 + 73 + tst r1, #_TIF_SYSCALL_WORK 74 + bne __sys_trace_return_nosave 75 + slow_work_pending: 56 76 mov r0, sp @ 'regs' 57 77 mov r2, why @ 'syscall' 58 78 bl do_work_pending ··· 85 65 86 66 /* 87 67 * "slow" syscall return path. "why" tells us if this was a real syscall. 68 + * IRQs may be enabled here, so always disable them. Note that we use the 69 + * "notrace" version to avoid calling into the tracing code unnecessarily. 70 + * do_work_pending() will update this state if necessary. 88 71 */ 89 72 ENTRY(ret_to_user) 90 73 ret_slow_syscall: 91 - disable_irq @ disable interrupts 74 + disable_irq_notrace @ disable interrupts 92 75 ENTRY(ret_to_user_from_irq) 93 76 ldr r1, [tsk, #TI_FLAGS] 94 77 tst r1, #_TIF_WORK_MASK 95 - bne work_pending 78 + bne slow_work_pending 96 79 no_work_pending: 97 - asm_trace_hardirqs_on 80 + asm_trace_hardirqs_on save = 0 98 81 99 82 /* perform architecture specific actions before user return */ 100 83 arch_ret_to_user r1, lr ··· 197 174 USER( ldr scno, [lr, #-4] ) @ get SWI instruction 198 175 #endif 199 176 177 + uaccess_disable tbl 178 + 200 179 adr tbl, sys_call_table @ load syscall table pointer 201 180 202 181 #if defined(CONFIG_OABI_COMPAT) ··· 273 248 274 249 __sys_trace_return: 275 250 str r0, [sp, #S_R0 + S_OFF]! @ save returned r0 251 + mov r0, sp 252 + bl syscall_trace_exit 253 + b ret_slow_syscall 254 + 255 + __sys_trace_return_nosave: 256 + enable_irq_notrace 276 257 mov r0, sp 277 258 bl syscall_trace_exit 278 259 b ret_slow_syscall
+47 -65
arch/arm/kernel/entry-header.S
··· 196 196 msr cpsr_c, \rtemp @ switch back to the SVC mode 197 197 .endm 198 198 199 - #ifndef CONFIG_THUMB2_KERNEL 199 + 200 200 .macro svc_exit, rpsr, irq = 0 201 201 .if \irq != 0 202 202 @ IRQs already off ··· 215 215 blne trace_hardirqs_off 216 216 #endif 217 217 .endif 218 + uaccess_restore 219 + 220 + #ifndef CONFIG_THUMB2_KERNEL 221 + @ ARM mode SVC restore 218 222 msr spsr_cxsf, \rpsr 219 223 #if defined(CONFIG_CPU_V6) || defined(CONFIG_CPU_32v6K) 220 224 @ We must avoid clrex due to Cortex-A15 erratum #830321 ··· 226 222 strex r1, r2, [r0] @ clear the exclusive monitor 227 223 #endif 228 224 ldmia sp, {r0 - pc}^ @ load r0 - pc, cpsr 225 + #else 226 + @ Thumb mode SVC restore 227 + ldr lr, [sp, #S_SP] @ top of the stack 228 + ldrd r0, r1, [sp, #S_LR] @ calling lr and pc 229 + 230 + @ We must avoid clrex due to Cortex-A15 erratum #830321 231 + strex r2, r1, [sp, #S_LR] @ clear the exclusive monitor 232 + 233 + stmdb lr!, {r0, r1, \rpsr} @ calling lr and rfe context 234 + ldmia sp, {r0 - r12} 235 + mov sp, lr 236 + ldr lr, [sp], #4 237 + rfeia sp! 238 + #endif 229 239 .endm 230 240 231 241 @ ··· 259 241 @ on the stack remains correct). 
260 242 @ 261 243 .macro svc_exit_via_fiq 244 + uaccess_restore 245 + #ifndef CONFIG_THUMB2_KERNEL 246 + @ ARM mode restore 262 247 mov r0, sp 263 248 ldmib r0, {r1 - r14} @ abort is deadly from here onward (it will 264 249 @ clobber state restored below) ··· 271 250 msr spsr_cxsf, r9 272 251 ldr r0, [r0, #S_R0] 273 252 ldmia r8, {pc}^ 253 + #else 254 + @ Thumb mode restore 255 + add r0, sp, #S_R2 256 + ldr lr, [sp, #S_LR] 257 + ldr sp, [sp, #S_SP] @ abort is deadly from here onward (it will 258 + @ clobber state restored below) 259 + ldmia r0, {r2 - r12} 260 + mov r1, #FIQ_MODE | PSR_I_BIT | PSR_F_BIT 261 + msr cpsr_c, r1 262 + sub r0, #S_R2 263 + add r8, r0, #S_PC 264 + ldmia r0, {r0 - r1} 265 + rfeia r8 266 + #endif 274 267 .endm 275 268 269 + 276 270 .macro restore_user_regs, fast = 0, offset = 0 271 + uaccess_enable r1, isb=0 272 + #ifndef CONFIG_THUMB2_KERNEL 273 + @ ARM mode restore 277 274 mov r2, sp 278 275 ldr r1, [r2, #\offset + S_PSR] @ get calling cpsr 279 276 ldr lr, [r2, #\offset + S_PC]! @ get pc ··· 309 270 @ after ldm {}^ 310 271 add sp, sp, #\offset + S_FRAME_SIZE 311 272 movs pc, lr @ return & move spsr_svc into cpsr 312 - .endm 313 - 314 - #else /* CONFIG_THUMB2_KERNEL */ 315 - .macro svc_exit, rpsr, irq = 0 316 - .if \irq != 0 317 - @ IRQs already off 318 - #ifdef CONFIG_TRACE_IRQFLAGS 319 - @ The parent context IRQs must have been enabled to get here in 320 - @ the first place, so there's no point checking the PSR I bit. 
321 - bl trace_hardirqs_on 322 - #endif 323 - .else 324 - @ IRQs off again before pulling preserved data off the stack 325 - disable_irq_notrace 326 - #ifdef CONFIG_TRACE_IRQFLAGS 327 - tst \rpsr, #PSR_I_BIT 328 - bleq trace_hardirqs_on 329 - tst \rpsr, #PSR_I_BIT 330 - blne trace_hardirqs_off 331 - #endif 332 - .endif 333 - ldr lr, [sp, #S_SP] @ top of the stack 334 - ldrd r0, r1, [sp, #S_LR] @ calling lr and pc 335 - 336 - @ We must avoid clrex due to Cortex-A15 erratum #830321 337 - strex r2, r1, [sp, #S_LR] @ clear the exclusive monitor 338 - 339 - stmdb lr!, {r0, r1, \rpsr} @ calling lr and rfe context 340 - ldmia sp, {r0 - r12} 341 - mov sp, lr 342 - ldr lr, [sp], #4 343 - rfeia sp! 344 - .endm 345 - 346 - @ 347 - @ svc_exit_via_fiq - like svc_exit but switches to FIQ mode before exit 348 - @ 349 - @ For full details see non-Thumb implementation above. 350 - @ 351 - .macro svc_exit_via_fiq 352 - add r0, sp, #S_R2 353 - ldr lr, [sp, #S_LR] 354 - ldr sp, [sp, #S_SP] @ abort is deadly from here onward (it will 355 - @ clobber state restored below) 356 - ldmia r0, {r2 - r12} 357 - mov r1, #FIQ_MODE | PSR_I_BIT | PSR_F_BIT 358 - msr cpsr_c, r1 359 - sub r0, #S_R2 360 - add r8, r0, #S_PC 361 - ldmia r0, {r0 - r1} 362 - rfeia r8 363 - .endm 364 - 365 - #ifdef CONFIG_CPU_V7M 366 - /* 367 - * Note we don't need to do clrex here as clearing the local monitor is 368 - * part of each exception entry and exit sequence. 369 - */ 370 - .macro restore_user_regs, fast = 0, offset = 0 273 + #elif defined(CONFIG_CPU_V7M) 274 + @ V7M restore. 275 + @ Note that we don't need to do clrex here as clearing the local 276 + @ monitor is part of the exception entry and exit sequence. 
371 277 .if \offset 372 278 add sp, #\offset 373 279 .endif 374 280 v7m_exception_slow_exit ret_r0 = \fast 375 - .endm 376 - #else /* ifdef CONFIG_CPU_V7M */ 377 - .macro restore_user_regs, fast = 0, offset = 0 281 + #else 282 + @ Thumb mode restore 378 283 mov r2, sp 379 284 load_user_sp_lr r2, r3, \offset + S_SP @ calling sp, lr 380 285 ldr r1, [sp, #\offset + S_PSR] @ get calling cpsr ··· 336 353 .endif 337 354 add sp, sp, #S_FRAME_SIZE - S_SP 338 355 movs pc, lr @ return & move spsr_svc into cpsr 339 - .endm 340 - #endif /* ifdef CONFIG_CPU_V7M / else */ 341 356 #endif /* !CONFIG_THUMB2_KERNEL */ 357 + .endm 342 358 343 359 /* 344 360 * Context tracking subsystem. Used to instrument transitions
+1 -4
arch/arm/kernel/head.S
··· 464 464 #ifdef CONFIG_ARM_LPAE 465 465 mcrr p15, 0, r4, r5, c2 @ load TTBR0 466 466 #else 467 - mov r5, #(domain_val(DOMAIN_USER, DOMAIN_MANAGER) | \ 468 - domain_val(DOMAIN_KERNEL, DOMAIN_MANAGER) | \ 469 - domain_val(DOMAIN_TABLE, DOMAIN_MANAGER) | \ 470 - domain_val(DOMAIN_IO, DOMAIN_CLIENT)) 467 + mov r5, #DACR_INIT 471 468 mcr p15, 0, r5, c3, c0, 0 @ load domain access register 472 469 mcr p15, 0, r4, c2, c0, 0 @ load page table pointer 473 470 #endif
+1
arch/arm/kernel/irq.c
··· 39 39 #include <linux/export.h> 40 40 41 41 #include <asm/hardware/cache-l2x0.h> 42 + #include <asm/outercache.h> 42 43 #include <asm/exception.h> 43 44 #include <asm/mach/arch.h> 44 45 #include <asm/mach/irq.h>
+49 -24
arch/arm/kernel/perf_event.c → drivers/perf/arm_pmu.c
··· 15 15 #include <linux/cpumask.h> 16 16 #include <linux/export.h> 17 17 #include <linux/kernel.h> 18 - #include <linux/of.h> 18 + #include <linux/of_device.h> 19 + #include <linux/perf/arm_pmu.h> 19 20 #include <linux/platform_device.h> 20 21 #include <linux/slab.h> 21 22 #include <linux/spinlock.h> ··· 25 24 26 25 #include <asm/cputype.h> 27 26 #include <asm/irq_regs.h> 28 - #include <asm/pmu.h> 29 27 30 28 static int 31 29 armpmu_map_cache_event(const unsigned (*cache_map) ··· 790 790 791 791 static int of_pmu_irq_cfg(struct arm_pmu *pmu) 792 792 { 793 - int i, irq, *irqs; 793 + int *irqs, i = 0; 794 + bool using_spi = false; 794 795 struct platform_device *pdev = pmu->plat_device; 795 - 796 - /* Don't bother with PPIs; they're already affine */ 797 - irq = platform_get_irq(pdev, 0); 798 - if (irq >= 0 && irq_is_percpu(irq)) 799 - return 0; 800 796 801 797 irqs = kcalloc(pdev->num_resources, sizeof(*irqs), GFP_KERNEL); 802 798 if (!irqs) 803 799 return -ENOMEM; 804 800 805 - for (i = 0; i < pdev->num_resources; ++i) { 801 + do { 806 802 struct device_node *dn; 807 - int cpu; 803 + int cpu, irq; 808 804 809 - dn = of_parse_phandle(pdev->dev.of_node, "interrupt-affinity", 810 - i); 811 - if (!dn) { 812 - pr_warn("Failed to parse %s/interrupt-affinity[%d]\n", 813 - of_node_full_name(pdev->dev.of_node), i); 805 + /* See if we have an affinity entry */ 806 + dn = of_parse_phandle(pdev->dev.of_node, "interrupt-affinity", i); 807 + if (!dn) 814 808 break; 809 + 810 + /* Check the IRQ type and prohibit a mix of PPIs and SPIs */ 811 + irq = platform_get_irq(pdev, i); 812 + if (irq >= 0) { 813 + bool spi = !irq_is_percpu(irq); 814 + 815 + if (i > 0 && spi != using_spi) { 816 + pr_err("PPI/SPI IRQ type mismatch for %s!\n", 817 + dn->name); 818 + kfree(irqs); 819 + return -EINVAL; 820 + } 821 + 822 + using_spi = spi; 815 823 } 816 824 825 + /* Now look up the logical CPU number */ 817 826 for_each_possible_cpu(cpu) 818 - if (arch_find_n_match_cpu_physical_id(dn, cpu, 
NULL)) 827 + if (dn == of_cpu_device_node_get(cpu)) 819 828 break; 820 829 821 830 if (cpu >= nr_cpu_ids) { 822 831 pr_warn("Failed to find logical CPU for %s\n", 823 832 dn->name); 824 833 of_node_put(dn); 834 + cpumask_setall(&pmu->supported_cpus); 825 835 break; 826 836 } 827 837 of_node_put(dn); 828 838 829 - irqs[i] = cpu; 830 - cpumask_set_cpu(cpu, &pmu->supported_cpus); 831 - } 839 + /* For SPIs, we need to track the affinity per IRQ */ 840 + if (using_spi) { 841 + if (i >= pdev->num_resources) { 842 + of_node_put(dn); 843 + break; 844 + } 832 845 833 - if (i == pdev->num_resources) { 834 - pmu->irq_affinity = irqs; 835 - } else { 836 - kfree(irqs); 846 + irqs[i] = cpu; 847 + } 848 + 849 + /* Keep track of the CPUs containing this PMU type */ 850 + cpumask_set_cpu(cpu, &pmu->supported_cpus); 851 + of_node_put(dn); 852 + i++; 853 + } while (1); 854 + 855 + /* If we didn't manage to parse anything, claim to support all CPUs */ 856 + if (cpumask_weight(&pmu->supported_cpus) == 0) 837 857 cpumask_setall(&pmu->supported_cpus); 838 - } 858 + 859 + /* If we matched up the IRQ affinities, use them to route the SPIs */ 860 + if (using_spi && i == pdev->num_resources) 861 + pmu->irq_affinity = irqs; 862 + else 863 + kfree(irqs); 839 864 840 865 return 0; 841 866 }
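The reworked `of_pmu_irq_cfg()` above records whether the first PMU interrupt is a shared (SPI) or per-cpu (PPI) interrupt and rejects any later entry of the other kind. A distilled sketch of that consistency check, with `irq_is_percpu()` stubbed (the IRQ-number cutoff is an assumption for illustration only):

```c
#include <stdbool.h>

/* Stub: on GIC-based systems PPIs conventionally occupy the low IRQ
 * numbers; this cutoff is illustrative, not the kernel's test. */
static bool irq_is_percpu(int irq)
{
    return irq < 32;
}

/* Returns 0 if all interrupts are the same kind, -1 on a PPI/SPI mix,
 * mirroring the "prohibit a mix" logic in the hunk above. */
static int check_irq_types(const int *irqs, int n)
{
    bool using_spi = false;
    int i;

    for (i = 0; i < n; i++) {
        bool spi = !irq_is_percpu(irqs[i]);

        if (i > 0 && spi != using_spi)
            return -1;          /* mixed PPIs and SPIs: rejected */
        using_spi = spi;
    }
    return 0;
}
```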
+1 -1
arch/arm/kernel/perf_event_v6.c
··· 34 34 35 35 #include <asm/cputype.h> 36 36 #include <asm/irq_regs.h> 37 - #include <asm/pmu.h> 38 37 39 38 #include <linux/of.h> 39 + #include <linux/perf/arm_pmu.h> 40 40 #include <linux/platform_device.h> 41 41 42 42 enum armv6_perf_types {
+1 -1
arch/arm/kernel/perf_event_v7.c
··· 21 21 #include <asm/cp15.h> 22 22 #include <asm/cputype.h> 23 23 #include <asm/irq_regs.h> 24 - #include <asm/pmu.h> 25 24 #include <asm/vfp.h> 26 25 #include "../vfp/vfpinstr.h" 27 26 28 27 #include <linux/of.h> 28 + #include <linux/perf/arm_pmu.h> 29 29 #include <linux/platform_device.h> 30 30 31 31 /*
+1 -1
arch/arm/kernel/perf_event_xscale.c
··· 16 16 17 17 #include <asm/cputype.h> 18 18 #include <asm/irq_regs.h> 19 - #include <asm/pmu.h> 20 19 21 20 #include <linux/of.h> 21 + #include <linux/perf/arm_pmu.h> 22 22 #include <linux/platform_device.h> 23 23 24 24 enum xscale_perf_types {
+40 -16
arch/arm/kernel/process.c
··· 91 91 ledtrig_cpu(CPU_LED_IDLE_END); 92 92 } 93 93 94 - #ifdef CONFIG_HOTPLUG_CPU 95 - void arch_cpu_idle_dead(void) 96 - { 97 - cpu_die(); 98 - } 99 - #endif 100 - 101 94 void __show_regs(struct pt_regs *regs) 102 95 { 103 96 unsigned long flags; ··· 122 129 buf[4] = '\0'; 123 130 124 131 #ifndef CONFIG_CPU_V7M 125 - printk("Flags: %s IRQs o%s FIQs o%s Mode %s ISA %s Segment %s\n", 126 - buf, interrupts_enabled(regs) ? "n" : "ff", 127 - fast_interrupts_enabled(regs) ? "n" : "ff", 128 - processor_modes[processor_mode(regs)], 129 - isa_modes[isa_mode(regs)], 130 - get_fs() == get_ds() ? "kernel" : "user"); 132 + { 133 + unsigned int domain = get_domain(); 134 + const char *segment; 135 + 136 + #ifdef CONFIG_CPU_SW_DOMAIN_PAN 137 + /* 138 + * Get the domain register for the parent context. In user 139 + * mode, we don't save the DACR, so lets use what it should 140 + * be. For other modes, we place it after the pt_regs struct. 141 + */ 142 + if (user_mode(regs)) 143 + domain = DACR_UACCESS_ENABLE; 144 + else 145 + domain = *(unsigned int *)(regs + 1); 146 + #endif 147 + 148 + if ((domain & domain_mask(DOMAIN_USER)) == 149 + domain_val(DOMAIN_USER, DOMAIN_NOACCESS)) 150 + segment = "none"; 151 + else if (get_fs() == get_ds()) 152 + segment = "kernel"; 153 + else 154 + segment = "user"; 155 + 156 + printk("Flags: %s IRQs o%s FIQs o%s Mode %s ISA %s Segment %s\n", 157 + buf, interrupts_enabled(regs) ? "n" : "ff", 158 + fast_interrupts_enabled(regs) ? 
"n" : "ff", 159 + processor_modes[processor_mode(regs)], 160 + isa_modes[isa_mode(regs)], segment); 161 + } 131 162 #else 132 163 printk("xPSR: %08lx\n", regs->ARM_cpsr); 133 164 #endif ··· 163 146 buf[0] = '\0'; 164 147 #ifdef CONFIG_CPU_CP15_MMU 165 148 { 166 - unsigned int transbase, dac; 149 + unsigned int transbase, dac = get_domain(); 167 150 asm("mrc p15, 0, %0, c2, c0\n\t" 168 - "mrc p15, 0, %1, c3, c0\n" 169 - : "=r" (transbase), "=r" (dac)); 151 + : "=r" (transbase)); 170 152 snprintf(buf, sizeof(buf), " Table: %08x DAC: %08x", 171 153 transbase, dac); 172 154 } ··· 225 209 struct pt_regs *childregs = task_pt_regs(p); 226 210 227 211 memset(&thread->cpu_context, 0, sizeof(struct cpu_context_save)); 212 + 213 + /* 214 + * Copy the initial value of the domain access control register 215 + * from the current thread: thread->addr_limit will have been 216 + * copied from the current thread via setup_thread_stack() in 217 + * kernel/fork.c 218 + */ 219 + thread->cpu_domain = get_domain(); 228 220 229 221 if (likely(!(p->flags & PF_KTHREAD))) { 230 222 *childregs = *current_pt_regs();
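The `__show_regs()` hunk above decides whether to print `Segment: none` by testing the user domain's 2-bit DACR field against `DOMAIN_NOACCESS`. A sketch of the `domain_mask()` / `domain_val()` helpers it relies on - the 2-bit access encodings are architectural, but the domain slot number used here is an assumption for illustration:

```c
/* Architectural 2-bit domain access encodings (ARM DACR). */
#define DOMAIN_NOACCESS 0u   /* any access faults */
#define DOMAIN_CLIENT   1u   /* accesses checked against page tables */
#define DOMAIN_MANAGER  3u   /* accesses never checked */

/* Each of the 16 domains owns bits [2*dom+1 : 2*dom] of the DACR. */
#define domain_mask(dom)      (3u << ((dom) * 2))
#define domain_val(dom, type) ((unsigned int)(type) << ((dom) * 2))

#define DOMAIN_USER 1        /* assumed slot for the user domain */

/* The "Segment: none" test from the hunk: user domain == no-access? */
static int user_access_blocked(unsigned int dacr)
{
    return (dacr & domain_mask(DOMAIN_USER)) ==
           domain_val(DOMAIN_USER, DOMAIN_NOACCESS);
}
```

Keeping the whole DACR value in a register (per the `ARM: domains:` commits listed above) makes flipping this one field cheap on entry/exit paths.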
-299
arch/arm/kernel/psci.c
··· 1 - /* 2 - * This program is free software; you can redistribute it and/or modify 3 - * it under the terms of the GNU General Public License version 2 as 4 - * published by the Free Software Foundation. 5 - * 6 - * This program is distributed in the hope that it will be useful, 7 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 8 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 9 - * GNU General Public License for more details. 10 - * 11 - * Copyright (C) 2012 ARM Limited 12 - * 13 - * Author: Will Deacon <will.deacon@arm.com> 14 - */ 15 - 16 - #define pr_fmt(fmt) "psci: " fmt 17 - 18 - #include <linux/init.h> 19 - #include <linux/of.h> 20 - #include <linux/reboot.h> 21 - #include <linux/pm.h> 22 - #include <uapi/linux/psci.h> 23 - 24 - #include <asm/compiler.h> 25 - #include <asm/errno.h> 26 - #include <asm/psci.h> 27 - #include <asm/system_misc.h> 28 - 29 - struct psci_operations psci_ops; 30 - 31 - static int (*invoke_psci_fn)(u32, u32, u32, u32); 32 - typedef int (*psci_initcall_t)(const struct device_node *); 33 - 34 - asmlinkage int __invoke_psci_fn_hvc(u32, u32, u32, u32); 35 - asmlinkage int __invoke_psci_fn_smc(u32, u32, u32, u32); 36 - 37 - enum psci_function { 38 - PSCI_FN_CPU_SUSPEND, 39 - PSCI_FN_CPU_ON, 40 - PSCI_FN_CPU_OFF, 41 - PSCI_FN_MIGRATE, 42 - PSCI_FN_AFFINITY_INFO, 43 - PSCI_FN_MIGRATE_INFO_TYPE, 44 - PSCI_FN_MAX, 45 - }; 46 - 47 - static u32 psci_function_id[PSCI_FN_MAX]; 48 - 49 - static int psci_to_linux_errno(int errno) 50 - { 51 - switch (errno) { 52 - case PSCI_RET_SUCCESS: 53 - return 0; 54 - case PSCI_RET_NOT_SUPPORTED: 55 - return -EOPNOTSUPP; 56 - case PSCI_RET_INVALID_PARAMS: 57 - return -EINVAL; 58 - case PSCI_RET_DENIED: 59 - return -EPERM; 60 - }; 61 - 62 - return -EINVAL; 63 - } 64 - 65 - static u32 psci_power_state_pack(struct psci_power_state state) 66 - { 67 - return ((state.id << PSCI_0_2_POWER_STATE_ID_SHIFT) 68 - & PSCI_0_2_POWER_STATE_ID_MASK) | 69 - ((state.type << 
PSCI_0_2_POWER_STATE_TYPE_SHIFT) 70 - & PSCI_0_2_POWER_STATE_TYPE_MASK) | 71 - ((state.affinity_level << PSCI_0_2_POWER_STATE_AFFL_SHIFT) 72 - & PSCI_0_2_POWER_STATE_AFFL_MASK); 73 - } 74 - 75 - static int psci_get_version(void) 76 - { 77 - int err; 78 - 79 - err = invoke_psci_fn(PSCI_0_2_FN_PSCI_VERSION, 0, 0, 0); 80 - return err; 81 - } 82 - 83 - static int psci_cpu_suspend(struct psci_power_state state, 84 - unsigned long entry_point) 85 - { 86 - int err; 87 - u32 fn, power_state; 88 - 89 - fn = psci_function_id[PSCI_FN_CPU_SUSPEND]; 90 - power_state = psci_power_state_pack(state); 91 - err = invoke_psci_fn(fn, power_state, entry_point, 0); 92 - return psci_to_linux_errno(err); 93 - } 94 - 95 - static int psci_cpu_off(struct psci_power_state state) 96 - { 97 - int err; 98 - u32 fn, power_state; 99 - 100 - fn = psci_function_id[PSCI_FN_CPU_OFF]; 101 - power_state = psci_power_state_pack(state); 102 - err = invoke_psci_fn(fn, power_state, 0, 0); 103 - return psci_to_linux_errno(err); 104 - } 105 - 106 - static int psci_cpu_on(unsigned long cpuid, unsigned long entry_point) 107 - { 108 - int err; 109 - u32 fn; 110 - 111 - fn = psci_function_id[PSCI_FN_CPU_ON]; 112 - err = invoke_psci_fn(fn, cpuid, entry_point, 0); 113 - return psci_to_linux_errno(err); 114 - } 115 - 116 - static int psci_migrate(unsigned long cpuid) 117 - { 118 - int err; 119 - u32 fn; 120 - 121 - fn = psci_function_id[PSCI_FN_MIGRATE]; 122 - err = invoke_psci_fn(fn, cpuid, 0, 0); 123 - return psci_to_linux_errno(err); 124 - } 125 - 126 - static int psci_affinity_info(unsigned long target_affinity, 127 - unsigned long lowest_affinity_level) 128 - { 129 - int err; 130 - u32 fn; 131 - 132 - fn = psci_function_id[PSCI_FN_AFFINITY_INFO]; 133 - err = invoke_psci_fn(fn, target_affinity, lowest_affinity_level, 0); 134 - return err; 135 - } 136 - 137 - static int psci_migrate_info_type(void) 138 - { 139 - int err; 140 - u32 fn; 141 - 142 - fn = psci_function_id[PSCI_FN_MIGRATE_INFO_TYPE]; 143 - err = 
invoke_psci_fn(fn, 0, 0, 0); 144 - return err; 145 - } 146 - 147 - static int get_set_conduit_method(struct device_node *np) 148 - { 149 - const char *method; 150 - 151 - pr_info("probing for conduit method from DT.\n"); 152 - 153 - if (of_property_read_string(np, "method", &method)) { 154 - pr_warn("missing \"method\" property\n"); 155 - return -ENXIO; 156 - } 157 - 158 - if (!strcmp("hvc", method)) { 159 - invoke_psci_fn = __invoke_psci_fn_hvc; 160 - } else if (!strcmp("smc", method)) { 161 - invoke_psci_fn = __invoke_psci_fn_smc; 162 - } else { 163 - pr_warn("invalid \"method\" property: %s\n", method); 164 - return -EINVAL; 165 - } 166 - return 0; 167 - } 168 - 169 - static void psci_sys_reset(enum reboot_mode reboot_mode, const char *cmd) 170 - { 171 - invoke_psci_fn(PSCI_0_2_FN_SYSTEM_RESET, 0, 0, 0); 172 - } 173 - 174 - static void psci_sys_poweroff(void) 175 - { 176 - invoke_psci_fn(PSCI_0_2_FN_SYSTEM_OFF, 0, 0, 0); 177 - } 178 - 179 - /* 180 - * PSCI Function IDs for v0.2+ are well defined so use 181 - * standard values. 182 - */ 183 - static int psci_0_2_init(struct device_node *np) 184 - { 185 - int err, ver; 186 - 187 - err = get_set_conduit_method(np); 188 - 189 - if (err) 190 - goto out_put_node; 191 - 192 - ver = psci_get_version(); 193 - 194 - if (ver == PSCI_RET_NOT_SUPPORTED) { 195 - /* PSCI v0.2 mandates implementation of PSCI_ID_VERSION. 
*/ 196 - pr_err("PSCI firmware does not comply with the v0.2 spec.\n"); 197 - err = -EOPNOTSUPP; 198 - goto out_put_node; 199 - } else { 200 - pr_info("PSCIv%d.%d detected in firmware.\n", 201 - PSCI_VERSION_MAJOR(ver), 202 - PSCI_VERSION_MINOR(ver)); 203 - 204 - if (PSCI_VERSION_MAJOR(ver) == 0 && 205 - PSCI_VERSION_MINOR(ver) < 2) { 206 - err = -EINVAL; 207 - pr_err("Conflicting PSCI version detected.\n"); 208 - goto out_put_node; 209 - } 210 - } 211 - 212 - pr_info("Using standard PSCI v0.2 function IDs\n"); 213 - psci_function_id[PSCI_FN_CPU_SUSPEND] = PSCI_0_2_FN_CPU_SUSPEND; 214 - psci_ops.cpu_suspend = psci_cpu_suspend; 215 - 216 - psci_function_id[PSCI_FN_CPU_OFF] = PSCI_0_2_FN_CPU_OFF; 217 - psci_ops.cpu_off = psci_cpu_off; 218 - 219 - psci_function_id[PSCI_FN_CPU_ON] = PSCI_0_2_FN_CPU_ON; 220 - psci_ops.cpu_on = psci_cpu_on; 221 - 222 - psci_function_id[PSCI_FN_MIGRATE] = PSCI_0_2_FN_MIGRATE; 223 - psci_ops.migrate = psci_migrate; 224 - 225 - psci_function_id[PSCI_FN_AFFINITY_INFO] = PSCI_0_2_FN_AFFINITY_INFO; 226 - psci_ops.affinity_info = psci_affinity_info; 227 - 228 - psci_function_id[PSCI_FN_MIGRATE_INFO_TYPE] = 229 - PSCI_0_2_FN_MIGRATE_INFO_TYPE; 230 - psci_ops.migrate_info_type = psci_migrate_info_type; 231 - 232 - arm_pm_restart = psci_sys_reset; 233 - 234 - pm_power_off = psci_sys_poweroff; 235 - 236 - out_put_node: 237 - of_node_put(np); 238 - return err; 239 - } 240 - 241 - /* 242 - * PSCI < v0.2 get PSCI Function IDs via DT. 
243 - */ 244 - static int psci_0_1_init(struct device_node *np) 245 - { 246 - u32 id; 247 - int err; 248 - 249 - err = get_set_conduit_method(np); 250 - 251 - if (err) 252 - goto out_put_node; 253 - 254 - pr_info("Using PSCI v0.1 Function IDs from DT\n"); 255 - 256 - if (!of_property_read_u32(np, "cpu_suspend", &id)) { 257 - psci_function_id[PSCI_FN_CPU_SUSPEND] = id; 258 - psci_ops.cpu_suspend = psci_cpu_suspend; 259 - } 260 - 261 - if (!of_property_read_u32(np, "cpu_off", &id)) { 262 - psci_function_id[PSCI_FN_CPU_OFF] = id; 263 - psci_ops.cpu_off = psci_cpu_off; 264 - } 265 - 266 - if (!of_property_read_u32(np, "cpu_on", &id)) { 267 - psci_function_id[PSCI_FN_CPU_ON] = id; 268 - psci_ops.cpu_on = psci_cpu_on; 269 - } 270 - 271 - if (!of_property_read_u32(np, "migrate", &id)) { 272 - psci_function_id[PSCI_FN_MIGRATE] = id; 273 - psci_ops.migrate = psci_migrate; 274 - } 275 - 276 - out_put_node: 277 - of_node_put(np); 278 - return err; 279 - } 280 - 281 - static const struct of_device_id const psci_of_match[] __initconst = { 282 - { .compatible = "arm,psci", .data = psci_0_1_init}, 283 - { .compatible = "arm,psci-0.2", .data = psci_0_2_init}, 284 - {}, 285 - }; 286 - 287 - int __init psci_init(void) 288 - { 289 - struct device_node *np; 290 - const struct of_device_id *matched_np; 291 - psci_initcall_t init_fn; 292 - 293 - np = of_find_matching_node_and_match(NULL, psci_of_match, &matched_np); 294 - if (!np) 295 - return -ENODEV; 296 - 297 - init_fn = (psci_initcall_t)matched_np->data; 298 - return init_fn(np); 299 - }
+23 -8
arch/arm/kernel/psci_smp.c
··· 17 17 #include <linux/smp.h> 18 18 #include <linux/of.h> 19 19 #include <linux/delay.h> 20 + #include <linux/psci.h> 21 + 20 22 #include <uapi/linux/psci.h> 21 23 22 24 #include <asm/psci.h> ··· 53 51 { 54 52 if (psci_ops.cpu_on) 55 53 return psci_ops.cpu_on(cpu_logical_map(cpu), 56 - __pa(secondary_startup)); 54 + virt_to_idmap(&secondary_startup)); 57 55 return -ENODEV; 58 56 } 59 57 60 58 #ifdef CONFIG_HOTPLUG_CPU 59 + int psci_cpu_disable(unsigned int cpu) 60 + { 61 + /* Fail early if we don't have CPU_OFF support */ 62 + if (!psci_ops.cpu_off) 63 + return -EOPNOTSUPP; 64 + 65 + /* Trusted OS will deny CPU_OFF */ 66 + if (psci_tos_resident_on(cpu)) 67 + return -EPERM; 68 + 69 + return 0; 70 + } 71 + 61 72 void __ref psci_cpu_die(unsigned int cpu) 62 73 { 63 - const struct psci_power_state ps = { 64 - .type = PSCI_POWER_STATE_TYPE_POWER_DOWN, 65 - }; 74 + u32 state = PSCI_POWER_STATE_TYPE_POWER_DOWN << 75 + PSCI_0_2_POWER_STATE_TYPE_SHIFT; 66 76 67 - if (psci_ops.cpu_off) 68 - psci_ops.cpu_off(ps); 77 + if (psci_ops.cpu_off) 78 + psci_ops.cpu_off(state); 69 79 70 - /* We should never return */ 71 - panic("psci: cpu %d failed to shutdown\n", cpu); 80 + /* We should never return */ 81 + panic("psci: cpu %d failed to shutdown\n", cpu); 72 82 } 73 83 74 84 int __ref psci_cpu_kill(unsigned int cpu) ··· 123 109 struct smp_operations __initdata psci_smp_ops = { 124 110 .smp_boot_secondary = psci_boot_secondary, 125 111 #ifdef CONFIG_HOTPLUG_CPU 112 + .cpu_disable = psci_cpu_disable, 126 113 .cpu_die = psci_cpu_die, 127 114 .cpu_kill = psci_cpu_kill, 128 115 #endif
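With `psci_power_state_pack()` gone (the arch psci.c was removed above), the psci_smp.c hunk open-codes the CPU_OFF `power_state` argument as a shift. A sketch of the PSCI v0.2 field layout showing the two forms are equivalent - the shift/mask values mirror `uapi/linux/psci.h`, reproduced here for illustration:

```c
/* PSCI v0.2 power_state field layout (per uapi/linux/psci.h). */
#define PSCI_0_2_POWER_STATE_ID_SHIFT    0
#define PSCI_0_2_POWER_STATE_ID_MASK     0xffffu
#define PSCI_0_2_POWER_STATE_TYPE_SHIFT  16
#define PSCI_0_2_POWER_STATE_TYPE_MASK   (0x1u << 16)
#define PSCI_POWER_STATE_TYPE_POWER_DOWN 1

/* What the removed psci_power_state_pack() computed for these fields. */
static unsigned int pack_power_state(unsigned int id, unsigned int type)
{
    return ((id << PSCI_0_2_POWER_STATE_ID_SHIFT) &
            PSCI_0_2_POWER_STATE_ID_MASK) |
           ((type << PSCI_0_2_POWER_STATE_TYPE_SHIFT) &
            PSCI_0_2_POWER_STATE_TYPE_MASK);
}
```

For `psci_cpu_die()` the id is zero, so the single shift in the hunk produces the same encoding the packer would have.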
+7 -2
arch/arm/kernel/setup.c
··· 31 31 #include <linux/bug.h> 32 32 #include <linux/compiler.h> 33 33 #include <linux/sort.h> 34 + #include <linux/psci.h> 34 35 35 36 #include <asm/unified.h> 36 37 #include <asm/cp15.h> 37 38 #include <asm/cpu.h> 38 39 #include <asm/cputype.h> 39 40 #include <asm/elf.h> 41 + #include <asm/fixmap.h> 40 42 #include <asm/procinfo.h> 41 43 #include <asm/psci.h> 42 44 #include <asm/sections.h> ··· 956 954 strlcpy(cmd_line, boot_command_line, COMMAND_LINE_SIZE); 957 955 *cmdline_p = cmd_line; 958 956 957 + if (IS_ENABLED(CONFIG_FIX_EARLYCON_MEM)) 958 + early_fixmap_init(); 959 + 959 960 parse_early_param(); 960 961 961 962 #ifdef CONFIG_MMU ··· 977 972 unflatten_device_tree(); 978 973 979 974 arm_dt_init_cpu_maps(); 980 - psci_init(); 975 + psci_dt_init(); 981 976 xen_early_init(); 982 977 #ifdef CONFIG_SMP 983 978 if (is_smp()) { ··· 1020 1015 1021 1016 for_each_possible_cpu(cpu) { 1022 1017 struct cpuinfo_arm *cpuinfo = &per_cpu(cpu_data, cpu); 1023 - cpuinfo->cpu.hotpluggable = 1; 1018 + cpuinfo->cpu.hotpluggable = platform_can_hotplug_cpu(cpu); 1024 1019 register_cpu(&cpuinfo->cpu, cpu); 1025 1020 } 1026 1021
+6
arch/arm/kernel/signal.c
··· 562 562 asmlinkage int 563 563 do_work_pending(struct pt_regs *regs, unsigned int thread_flags, int syscall) 564 564 { 565 + /* 566 + * The assembly code enters us with IRQs off, but it hasn't 567 + * informed the tracing code of that for efficiency reasons. 568 + * Update the trace code with the current status. 569 + */ 570 + trace_hardirqs_off(); 565 571 do { 566 572 if (likely(thread_flags & _TIF_NEED_RESCHED)) { 567 573 schedule();
+15 -2
arch/arm/kernel/smp.c
··· 175 175 if (smp_ops.cpu_disable) 176 176 return smp_ops.cpu_disable(cpu); 177 177 178 + return 0; 179 + } 180 + 181 + int platform_can_hotplug_cpu(unsigned int cpu) 182 + { 183 + /* cpu_die must be specified to support hotplug */ 184 + if (!smp_ops.cpu_die) 185 + return 0; 186 + 187 + if (smp_ops.cpu_can_disable) 188 + return smp_ops.cpu_can_disable(cpu); 189 + 178 190 /* 179 191 * By default, allow disabling all CPUs except the first one, 180 192 * since this is special on a lot of platforms, e.g. because 181 193 * of clock tick interrupts. 182 194 */ 183 - return cpu == 0 ? -EPERM : 0; 195 + return cpu != 0; 184 196 } 197 + 185 198 /* 186 199 * __cpu_disable runs on the processor to be shutdown. 187 200 */ ··· 266 253 * of the other hotplug-cpu capable cores, so presumably coming 267 254 * out of idle fixes this. 268 255 */ 269 - void __ref cpu_die(void) 256 + void arch_cpu_idle_dead(void) 270 257 { 271 258 unsigned int cpu = smp_processor_id(); 272 259
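The new `platform_can_hotplug_cpu()` turns the old `-EPERM`-style check into a boolean: no `cpu_die` means no hotplug at all, a platform `cpu_can_disable` hook wins if present, and otherwise every CPU except CPU0 is hotpluggable. A sketch of that fallback chain (the booleans stand in for the `smp_ops` members):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-ins for the smp_ops members consulted. */
static bool have_cpu_die;
static bool have_cpu_can_disable;
static bool cpu_can_disable_result;

static int platform_can_hotplug_cpu(unsigned int cpu)
{
	/* cpu_die must be specified to support hotplug */
	if (!have_cpu_die)
		return 0;

	if (have_cpu_can_disable)
		return cpu_can_disable_result;

	/* By default, all CPUs except the first are hotpluggable. */
	return cpu != 0;
}
```

This is what lets setup.c expose `cpuX/online` in sysfs only for genuinely hotpluggable CPUs.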
+3
arch/arm/kernel/swp_emulate.c
··· 141 141 142 142 while (1) { 143 143 unsigned long temp; 144 + unsigned int __ua_flags; 144 145 146 + __ua_flags = uaccess_save_and_enable(); 145 147 if (type == TYPE_SWPB) 146 148 __user_swpb_asm(*data, address, res, temp); 147 149 else 148 150 __user_swp_asm(*data, address, res, temp); 151 + uaccess_restore(__ua_flags); 149 152 150 153 if (likely(res != -EAGAIN) || signal_pending(current)) 151 154 break;
-1
arch/arm/kernel/traps.c
··· 870 870 kuser_init(vectors_base); 871 871 872 872 flush_icache_range(vectors, vectors + PAGE_SIZE * 2); 873 - modify_domain(DOMAIN_USER, DOMAIN_CLIENT); 874 873 #else /* ifndef CONFIG_CPU_V7M */ 875 874 /* 876 875 * on V7-M there is no need to copy the vector table to a dedicated
+3 -3
arch/arm/lib/clear_user.S
··· 12 12 13 13 .text 14 14 15 - /* Prototype: int __clear_user(void *addr, size_t sz) 15 + /* Prototype: unsigned long arm_clear_user(void *addr, size_t sz) 16 16 * Purpose : clear some user memory 17 17 * Params : addr - user memory address to clear 18 18 * : sz - number of bytes to clear 19 19 * Returns : number of bytes NOT cleared 20 20 */ 21 21 ENTRY(__clear_user_std) 22 - WEAK(__clear_user) 22 + WEAK(arm_clear_user) 23 23 stmfd sp!, {r1, lr} 24 24 mov r2, #0 25 25 cmp r1, #4 ··· 44 44 USER( strnebt r2, [r0]) 45 45 mov r0, #0 46 46 ldmfd sp!, {r1, pc} 47 - ENDPROC(__clear_user) 47 + ENDPROC(arm_clear_user) 48 48 ENDPROC(__clear_user_std) 49 49 50 50 .pushsection .text.fixup,"ax"
+3 -3
arch/arm/lib/copy_from_user.S
··· 17 17 /* 18 18 * Prototype: 19 19 * 20 - * size_t __copy_from_user(void *to, const void *from, size_t n) 20 + * size_t arm_copy_from_user(void *to, const void *from, size_t n) 21 21 * 22 22 * Purpose: 23 23 * ··· 89 89 90 90 .text 91 91 92 - ENTRY(__copy_from_user) 92 + ENTRY(arm_copy_from_user) 93 93 94 94 #include "copy_template.S" 95 95 96 - ENDPROC(__copy_from_user) 96 + ENDPROC(arm_copy_from_user) 97 97 98 98 .pushsection .fixup,"ax" 99 99 .align 0
+3 -3
arch/arm/lib/copy_to_user.S
··· 17 17 /* 18 18 * Prototype: 19 19 * 20 - * size_t __copy_to_user(void *to, const void *from, size_t n) 20 + * size_t arm_copy_to_user(void *to, const void *from, size_t n) 21 21 * 22 22 * Purpose: 23 23 * ··· 93 93 .text 94 94 95 95 ENTRY(__copy_to_user_std) 96 - WEAK(__copy_to_user) 96 + WEAK(arm_copy_to_user) 97 97 98 98 #include "copy_template.S" 99 99 100 - ENDPROC(__copy_to_user) 100 + ENDPROC(arm_copy_to_user) 101 101 ENDPROC(__copy_to_user_std) 102 102 103 103 .pushsection .text.fixup,"ax"
+14
arch/arm/lib/csumpartialcopyuser.S
··· 17 17 18 18 .text 19 19 20 + #ifdef CONFIG_CPU_SW_DOMAIN_PAN 21 + .macro save_regs 22 + mrc p15, 0, ip, c3, c0, 0 23 + stmfd sp!, {r1, r2, r4 - r8, ip, lr} 24 + uaccess_enable ip 25 + .endm 26 + 27 + .macro load_regs 28 + ldmfd sp!, {r1, r2, r4 - r8, ip, lr} 29 + mcr p15, 0, ip, c3, c0, 0 30 + ret lr 31 + .endm 32 + #else 20 33 .macro save_regs 21 34 stmfd sp!, {r1, r2, r4 - r8, lr} 22 35 .endm ··· 37 24 .macro load_regs 38 25 ldmfd sp!, {r1, r2, r4 - r8, pc} 39 26 .endm 27 + #endif 40 28 41 29 .macro load1b, reg1 42 30 ldrusr \reg1, r0, 1
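The `CONFIG_CPU_SW_DOMAIN_PAN` variants of `save_regs`/`load_regs` stash the Domain Access Control Register (CP15 c3, read by the `mrc` above) around the copy so user access can be temporarily enabled. Each domain owns a 2-bit field of the DACR (field values per the ARM ARM: no-access 00, client 01, manager 11). A sketch of the field arithmetic; the helper names merely echo the kernel's `asm/domain.h`:

```c
#include <assert.h>
#include <stdint.h>

#define DOMAIN_NOACCESS 0u  /* accesses fault */
#define DOMAIN_CLIENT   1u  /* permissions are checked */
#define DOMAIN_MANAGER  3u  /* permissions are ignored */

/* Each domain owns bits [2n+1:2n] of the DACR. */
static uint32_t domain_val(unsigned int dom, uint32_t type)
{
	return type << (2 * dom);
}

static uint32_t domain_mask(unsigned int dom)
{
	return 3u << (2 * dom);
}

/* Swap one domain's access type, leaving the other fields untouched. */
static uint32_t modify_domain(uint32_t dacr, unsigned int dom, uint32_t type)
{
	return (dacr & ~domain_mask(dom)) | domain_val(dom, type);
}
```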
+2 -2
arch/arm/lib/uaccess_with_memcpy.c
··· 136 136 } 137 137 138 138 unsigned long 139 - __copy_to_user(void __user *to, const void *from, unsigned long n) 139 + arm_copy_to_user(void __user *to, const void *from, unsigned long n) 140 140 { 141 141 /* 142 142 * This test is stubbed out of the main function above to keep ··· 190 190 return n; 191 191 } 192 192 193 - unsigned long __clear_user(void __user *addr, unsigned long n) 193 + unsigned long arm_clear_user(void __user *addr, unsigned long n) 194 194 { 195 195 /* See rational for this in __copy_to_user() above. */ 196 196 if (n < 64)
+1 -1
arch/arm/mach-highbank/highbank.c
··· 28 28 #include <linux/reboot.h> 29 29 #include <linux/amba/bus.h> 30 30 #include <linux/platform_device.h> 31 + #include <linux/psci.h> 31 32 32 - #include <asm/psci.h> 33 33 #include <asm/hardware/cache-l2x0.h> 34 34 #include <asm/mach/arch.h> 35 35 #include <asm/mach/map.h>
+9 -7
arch/arm/mach-highbank/pm.c
··· 16 16 17 17 #include <linux/cpu_pm.h> 18 18 #include <linux/init.h> 19 + #include <linux/psci.h> 19 20 #include <linux/suspend.h> 20 21 21 22 #include <asm/suspend.h> 22 - #include <asm/psci.h> 23 + 24 + #include <uapi/linux/psci.h> 25 + 26 + #define HIGHBANK_SUSPEND_PARAM \ 27 + ((0 << PSCI_0_2_POWER_STATE_ID_SHIFT) | \ 28 + (1 << PSCI_0_2_POWER_STATE_AFFL_SHIFT) | \ 29 + (PSCI_POWER_STATE_TYPE_POWER_DOWN << PSCI_0_2_POWER_STATE_TYPE_SHIFT)) 23 30 24 31 static int highbank_suspend_finish(unsigned long val) 25 32 { 26 - const struct psci_power_state ps = { 27 - .type = PSCI_POWER_STATE_TYPE_POWER_DOWN, 28 - .affinity_level = 1, 29 - }; 30 - 31 - return psci_ops.cpu_suspend(ps, __pa(cpu_resume)); 33 + return psci_ops.cpu_suspend(HIGHBANK_SUSPEND_PARAM, __pa(cpu_resume)); 32 34 } 33 35 34 36 static int highbank_pm_enter(suspend_state_t state)
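Both the new `psci_cpu_die()` state and `HIGHBANK_SUSPEND_PARAM` are PSCI v0.2 `power_state` words built from shifted fields. A worked sketch of the packing, with the shift values as defined in `include/uapi/linux/psci.h`:

```c
#include <assert.h>
#include <stdint.h>

/* Field layout from include/uapi/linux/psci.h (PSCI v0.2). */
#define PSCI_0_2_POWER_STATE_ID_SHIFT    0
#define PSCI_0_2_POWER_STATE_TYPE_SHIFT  16
#define PSCI_0_2_POWER_STATE_AFFL_SHIFT  24
#define PSCI_POWER_STATE_TYPE_POWER_DOWN 1

/* Pack a PSCI v0.2 power_state parameter from its three fields. */
static uint32_t psci_power_state(uint32_t id, uint32_t type, uint32_t affl)
{
	return (id   << PSCI_0_2_POWER_STATE_ID_SHIFT)   |
	       (type << PSCI_0_2_POWER_STATE_TYPE_SHIFT) |
	       (affl << PSCI_0_2_POWER_STATE_AFFL_SHIFT);
}
```

So `psci_cpu_die()` passes 0x00010000 (power-down at affinity level 0), while highbank's suspend parameter is 0x01010000 (power-down at affinity level 1).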
+1
arch/arm/mach-mmp/pm-pxa910.c
··· 18 18 #include <linux/io.h> 19 19 #include <linux/irq.h> 20 20 #include <asm/mach-types.h> 21 + #include <asm/outercache.h> 21 22 #include <mach/hardware.h> 22 23 #include <mach/cputype.h> 23 24 #include <mach/addr-map.h>
+7
arch/arm/mach-omap2/Kconfig
··· 29 29 select HAVE_ARM_SCU if SMP 30 30 select HAVE_ARM_TWD if SMP 31 31 select OMAP_INTERCONNECT 32 + select OMAP_INTERCONNECT_BARRIER 32 33 select PL310_ERRATA_588369 if CACHE_L2X0 33 34 select PL310_ERRATA_727915 if CACHE_L2X0 34 35 select PM_OPP if PM ··· 47 46 select HAVE_ARM_TWD if SMP 48 47 select HAVE_ARM_ARCH_TIMER 49 48 select ARM_ERRATA_798181 if SMP 49 + select OMAP_INTERCONNECT_BARRIER 50 50 51 51 config SOC_AM33XX 52 52 bool "TI AM33XX" ··· 73 71 select HAVE_ARM_ARCH_TIMER 74 72 select IRQ_CROSSBAR 75 73 select ARM_ERRATA_798181 if SMP 74 + select OMAP_INTERCONNECT_BARRIER 76 75 77 76 config ARCH_OMAP2PLUS 78 77 bool ··· 95 92 help 96 93 Systems based on OMAP2, OMAP3, OMAP4 or OMAP5 97 94 95 + config OMAP_INTERCONNECT_BARRIER 96 + bool 97 + select ARM_HEAVY_MB 98 + 98 99 99 100 if ARCH_OMAP2PLUS 100 101
+1
arch/arm/mach-omap2/common.c
··· 30 30 void __init omap_reserve(void) 31 31 { 32 32 omap_secure_ram_reserve_memblock(); 33 + omap_barrier_reserve_memblock(); 33 34 }
+9
arch/arm/mach-omap2/common.h
··· 189 189 } 190 190 #endif 191 191 192 + #ifdef CONFIG_OMAP_INTERCONNECT_BARRIER 193 + void omap_barrier_reserve_memblock(void); 194 + void omap_barriers_init(void); 195 + #else 196 + static inline void omap_barrier_reserve_memblock(void) 197 + { 198 + } 199 + #endif 200 + 192 201 /* This gets called from mach-omap2/io.c, do not call this */ 193 202 void __init omap2_set_globals_tap(u32 class, void __iomem *tap); 194 203
-33
arch/arm/mach-omap2/include/mach/barriers.h
··· 1 - /* 2 - * OMAP memory barrier header. 3 - * 4 - * Copyright (C) 2011 Texas Instruments, Inc. 5 - * Santosh Shilimkar <santosh.shilimkar@ti.com> 6 - * Richard Woodruff <r-woodruff2@ti.com> 7 - * 8 - * This program is free software; you can redistribute it and/or modify 9 - * it under the terms of the GNU General Public License version 2 as 10 - * published by the Free Software Foundation. 11 - * 12 - * This program is distributed in the hope that it will be useful, 13 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 14 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 15 - * GNU General Public License for more details. 16 - * 17 - * You should have received a copy of the GNU General Public License 18 - * along with this program; if not, write to the Free Software 19 - * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA 20 - */ 21 - 22 - #ifndef __MACH_BARRIERS_H 23 - #define __MACH_BARRIERS_H 24 - 25 - #include <asm/outercache.h> 26 - 27 - extern void omap_bus_sync(void); 28 - 29 - #define rmb() dsb() 30 - #define wmb() do { dsb(); outer_sync(); omap_bus_sync(); } while (0) 31 - #define mb() wmb() 32 - 33 - #endif /* __MACH_BARRIERS_H */
+2
arch/arm/mach-omap2/io.c
··· 352 352 void __init omap4_map_io(void) 353 353 { 354 354 iotable_init(omap44xx_io_desc, ARRAY_SIZE(omap44xx_io_desc)); 355 + omap_barriers_init(); 355 356 } 356 357 #endif 357 358 ··· 360 359 void __init omap5_map_io(void) 361 360 { 362 361 iotable_init(omap54xx_io_desc, ARRAY_SIZE(omap54xx_io_desc)); 362 + omap_barriers_init(); 363 363 } 364 364 #endif 365 365
+121
arch/arm/mach-omap2/omap4-common.c
··· 51 51 52 52 #define IRQ_LOCALTIMER 29 53 53 54 + #ifdef CONFIG_OMAP_INTERCONNECT_BARRIER 55 + 56 + /* Used to implement memory barrier on DRAM path */ 57 + #define OMAP4_DRAM_BARRIER_VA 0xfe600000 58 + 59 + static void __iomem *dram_sync, *sram_sync; 60 + static phys_addr_t dram_sync_paddr; 61 + static u32 dram_sync_size; 62 + 63 + /* 64 + * The OMAP4 bus structure contains asynchronous bridges which can buffer 65 + * data writes from the MPU. These asynchronous bridges can be found on 66 + * paths between the MPU to EMIF, and the MPU to L3 interconnects. 67 + * 68 + * We need to be careful about re-ordering which can happen as a result 69 + * of different accesses being performed via different paths, and 70 + * therefore different asynchronous bridges. 71 + */ 72 + 73 + /* 74 + * OMAP4 interconnect barrier which is called for each mb() and wmb(). 75 + * This is to ensure that normal paths to DRAM (normal memory, cacheable 76 + * accesses) are properly synchronised with writes to DMA coherent memory 77 + * (normal memory, uncacheable) and device writes. 78 + * 79 + * The mb() and wmb() barriers operate only on the MPU->MA->EMIF 80 + * path, as we need to ensure that data is visible to other system 81 + * masters prior to writes to those system masters being seen. 82 + * 83 + * Note: the SRAM path is not synchronised via mb() and wmb(). 84 + */ 85 + static void omap4_mb(void) 86 + { 87 + if (dram_sync) 88 + writel_relaxed(0, dram_sync); 89 + } 90 + 91 + /* 92 + * OMAP4 Errata i688 - asynchronous bridge corruption when entering WFI. 93 + * 94 + * If data is stalled inside an asynchronous bridge because of back 95 + * pressure, it may be accepted multiple times, creating pointer 96 + * misalignment that will corrupt next transfers on that data path until 97 + * the next reset of the system. There is no recovery procedure once the 98 + * issue is hit; the path remains consistently broken.
99 + * 100 + * Async bridges can be found on paths between MPU to EMIF and MPU to L3 101 + * interconnects. 102 + * 103 + * This situation can happen only when the idle is initiated by a Master 104 + * Request Disconnection (which is triggered by software when executing WFI 105 + * on the CPU). 106 + * 107 + * The work-around for this errata needs all the initiators connected 108 + * through an async bridge to ensure that data path is properly drained 109 + * before issuing WFI. This condition will be met if one Strongly ordered 110 + * access is performed to the target right before executing the WFI. 111 + * 112 + * In the MPU case, the L3 T2ASYNC FIFO and DDR T2ASYNC FIFO need to be 113 + * drained. The IO barrier ensures that there is no synchronisation loss on 114 + * initiators operating on both interconnect ports simultaneously. 115 + * 116 + * This is a stronger version of the OMAP4 memory barrier above: it 117 + * operates not only on the MPU->MA->EMIF path but also on the MPU->OCP 118 + * path, and is necessary prior to executing a WFI.
119 + */ 120 + void omap_interconnect_sync(void) 121 + { 122 + if (dram_sync && sram_sync) { 123 + writel_relaxed(readl_relaxed(dram_sync), dram_sync); 124 + writel_relaxed(readl_relaxed(sram_sync), sram_sync); 125 + isb(); 126 + } 127 + } 128 + 129 + static int __init omap4_sram_init(void) 130 + { 131 + struct device_node *np; 132 + struct gen_pool *sram_pool; 133 + 134 + np = of_find_compatible_node(NULL, NULL, "ti,omap4-mpu"); 135 + if (!np) 136 + pr_warn("%s:Unable to allocate sram needed to handle errata I688\n", 137 + __func__); 138 + sram_pool = of_gen_pool_get(np, "sram", 0); 139 + if (!sram_pool) 140 + pr_warn("%s:Unable to get sram pool needed to handle errata I688\n", 141 + __func__); 142 + else 143 + sram_sync = (void *)gen_pool_alloc(sram_pool, PAGE_SIZE); 144 + 145 + return 0; 146 + } 147 + omap_arch_initcall(omap4_sram_init); 148 + 149 + /* Steal one page physical memory for barrier implementation */ 150 + void __init omap_barrier_reserve_memblock(void) 151 + { 152 + dram_sync_size = ALIGN(PAGE_SIZE, SZ_1M); 153 + dram_sync_paddr = arm_memblock_steal(dram_sync_size, SZ_1M); 154 + } 155 + 156 + void __init omap_barriers_init(void) 157 + { 158 + struct map_desc dram_io_desc[1]; 159 + 160 + dram_io_desc[0].virtual = OMAP4_DRAM_BARRIER_VA; 161 + dram_io_desc[0].pfn = __phys_to_pfn(dram_sync_paddr); 162 + dram_io_desc[0].length = dram_sync_size; 163 + dram_io_desc[0].type = MT_MEMORY_RW_SO; 164 + iotable_init(dram_io_desc, ARRAY_SIZE(dram_io_desc)); 165 + dram_sync = (void __iomem *) dram_io_desc[0].virtual; 166 + 167 + pr_info("OMAP4: Map %pa to %p for dram barrier\n", 168 + &dram_sync_paddr, dram_sync); 169 + 170 + soc_mb = omap4_mb; 171 + } 172 + 173 + #endif 174 + 54 175 void gic_dist_disable(void) 55 176 { 56 177 if (gic_dist_base_addr)
+3 -5
arch/arm/mach-omap2/sleep44xx.S
··· 333 333 334 334 #endif /* defined(CONFIG_SMP) && defined(CONFIG_PM) */ 335 335 336 - ENTRY(omap_bus_sync) 337 - ret lr 338 - ENDPROC(omap_bus_sync) 339 - 340 336 ENTRY(omap_do_wfi) 341 337 stmfd sp!, {lr} 338 + #ifdef CONFIG_OMAP_INTERCONNECT_BARRIER 342 339 /* Drain interconnect write buffers. */ 343 - bl omap_bus_sync 340 + bl omap_interconnect_sync 341 + #endif 344 342 345 343 /* 346 344 * Execute an ISB instruction to ensure that all of the
+1
arch/arm/mach-prima2/pm.c
··· 16 16 #include <linux/of_platform.h> 17 17 #include <linux/io.h> 18 18 #include <linux/rtc/sirfsoc_rtciobrg.h> 19 + #include <asm/outercache.h> 19 20 #include <asm/suspend.h> 20 21 #include <asm/hardware/cache-l2x0.h> 21 22
+1 -1
arch/arm/mach-shmobile/common.h
··· 13 13 extern void shmobile_smp_sleep(void); 14 14 extern void shmobile_smp_hook(unsigned int cpu, unsigned long fn, 15 15 unsigned long arg); 16 - extern int shmobile_smp_cpu_disable(unsigned int cpu); 16 + extern bool shmobile_smp_cpu_can_disable(unsigned int cpu); 17 17 extern void shmobile_boot_scu(void); 18 18 extern void shmobile_smp_scu_prepare_cpus(unsigned int max_cpus); 19 19 extern void shmobile_smp_scu_cpu_die(unsigned int cpu);
+2 -2
arch/arm/mach-shmobile/platsmp.c
··· 31 31 } 32 32 33 33 #ifdef CONFIG_HOTPLUG_CPU 34 - int shmobile_smp_cpu_disable(unsigned int cpu) 34 + bool shmobile_smp_cpu_can_disable(unsigned int cpu) 35 35 { 36 - return 0; /* Hotplug of any CPU is supported */ 36 + return true; /* Hotplug of any CPU is supported */ 37 37 } 38 38 #endif
+1 -1
arch/arm/mach-shmobile/smp-r8a7790.c
··· 64 64 .smp_prepare_cpus = r8a7790_smp_prepare_cpus, 65 65 .smp_boot_secondary = shmobile_smp_apmu_boot_secondary, 66 66 #ifdef CONFIG_HOTPLUG_CPU 67 - .cpu_disable = shmobile_smp_cpu_disable, 67 + .cpu_can_disable = shmobile_smp_cpu_can_disable, 68 68 .cpu_die = shmobile_smp_apmu_cpu_die, 69 69 .cpu_kill = shmobile_smp_apmu_cpu_kill, 70 70 #endif
+1 -1
arch/arm/mach-shmobile/smp-r8a7791.c
··· 58 58 .smp_prepare_cpus = r8a7791_smp_prepare_cpus, 59 59 .smp_boot_secondary = r8a7791_smp_boot_secondary, 60 60 #ifdef CONFIG_HOTPLUG_CPU 61 - .cpu_disable = shmobile_smp_cpu_disable, 61 + .cpu_can_disable = shmobile_smp_cpu_can_disable, 62 62 .cpu_die = shmobile_smp_apmu_cpu_die, 63 63 .cpu_kill = shmobile_smp_apmu_cpu_kill, 64 64 #endif
+1 -1
arch/arm/mach-shmobile/smp-sh73a0.c
··· 60 60 .smp_prepare_cpus = sh73a0_smp_prepare_cpus, 61 61 .smp_boot_secondary = sh73a0_boot_secondary, 62 62 #ifdef CONFIG_HOTPLUG_CPU 63 - .cpu_disable = shmobile_smp_cpu_disable, 63 + .cpu_can_disable = shmobile_smp_cpu_can_disable, 64 64 .cpu_die = shmobile_smp_scu_cpu_die, 65 65 .cpu_kill = shmobile_smp_scu_cpu_kill, 66 66 #endif
+1
arch/arm/mach-ux500/cache-l2x0.c
··· 8 8 #include <linux/of.h> 9 9 #include <linux/of_address.h> 10 10 11 + #include <asm/outercache.h> 11 12 #include <asm/hardware/cache-l2x0.h> 12 13 13 14 #include "db8500-regs.h"
+1 -1
arch/arm/mach-ux500/cpu-db8500.c
··· 20 20 #include <linux/mfd/dbx500-prcmu.h> 21 21 #include <linux/of.h> 22 22 #include <linux/of_platform.h> 23 + #include <linux/perf/arm_pmu.h> 23 24 #include <linux/regulator/machine.h> 24 25 #include <linux/random.h> 25 26 26 - #include <asm/pmu.h> 27 27 #include <asm/mach/map.h> 28 28 29 29 #include "setup.h"
+4
arch/arm/mm/Kconfig
··· 883 883 884 884 config OUTER_CACHE_SYNC 885 885 bool 886 + select ARM_HEAVY_MB 886 887 help 887 888 The outer cache has a outer_cache_fns.sync function pointer 888 889 that can be used to drain the write buffer of the outer cache. ··· 1031 1030 help 1032 1031 This option allows the use of custom mandatory barriers 1033 1032 included via the mach/barriers.h file. 1033 + 1034 + config ARM_HEAVY_MB 1035 + bool 1034 1036 1035 1037 config ARCH_SUPPORTS_BIG_ENDIAN 1036 1038 bool
+1
arch/arm/mm/abort-ev4.S
··· 19 19 mrc p15, 0, r1, c5, c0, 0 @ get FSR 20 20 mrc p15, 0, r0, c6, c0, 0 @ get FAR 21 21 ldr r3, [r4] @ read aborted ARM instruction 22 + uaccess_disable ip @ disable userspace access 22 23 bic r1, r1, #1 << 11 | 1 << 10 @ clear bits 11 and 10 of FSR 23 24 tst r3, #1 << 20 @ L = 1 -> write? 24 25 orreq r1, r1, #1 << 11 @ yes.
+3 -1
arch/arm/mm/abort-ev5t.S
··· 21 21 mrc p15, 0, r0, c6, c0, 0 @ get FAR 22 22 do_thumb_abort fsr=r1, pc=r4, psr=r5, tmp=r3 23 23 ldreq r3, [r4] @ read aborted ARM instruction 24 + uaccess_disable ip @ disable user access 24 25 bic r1, r1, #1 << 11 @ clear bits 11 of FSR 25 - do_ldrd_abort tmp=ip, insn=r3 26 + teq_ldrd tmp=ip, insn=r3 @ insn was LDRD? 27 + beq do_DataAbort @ yes 26 28 tst r3, #1 << 20 @ check write 27 29 orreq r1, r1, #1 << 11 28 30 b do_DataAbort
+3 -1
arch/arm/mm/abort-ev5tj.S
··· 24 24 bne do_DataAbort 25 25 do_thumb_abort fsr=r1, pc=r4, psr=r5, tmp=r3 26 26 ldreq r3, [r4] @ read aborted ARM instruction 27 - do_ldrd_abort tmp=ip, insn=r3 27 + uaccess_disable ip @ disable userspace access 28 + teq_ldrd tmp=ip, insn=r3 @ insn was LDRD? 29 + beq do_DataAbort @ yes 28 30 tst r3, #1 << 20 @ L = 0 -> write 29 31 orreq r1, r1, #1 << 11 @ yes. 30 32 b do_DataAbort
+5 -3
arch/arm/mm/abort-ev6.S
··· 26 26 ldr ip, =0x4107b36 27 27 mrc p15, 0, r3, c0, c0, 0 @ get processor id 28 28 teq ip, r3, lsr #4 @ r0 ARM1136? 29 - bne do_DataAbort 29 + bne 1f 30 30 tst r5, #PSR_J_BIT @ Java? 31 31 tsteq r5, #PSR_T_BIT @ Thumb? 32 - bne do_DataAbort 32 + bne 1f 33 33 bic r1, r1, #1 << 11 @ clear bit 11 of FSR 34 34 ldr r3, [r4] @ read aborted ARM instruction 35 35 ARM_BE8(rev r3, r3) 36 36 37 - do_ldrd_abort tmp=ip, insn=r3 37 + teq_ldrd tmp=ip, insn=r3 @ insn was LDRD? 38 + beq 1f @ yes 38 39 tst r3, #1 << 20 @ L = 0 -> write 39 40 orreq r1, r1, #1 << 11 @ yes. 40 41 #endif 42 + 1: uaccess_disable ip @ disable userspace access 41 43 b do_DataAbort
+1
arch/arm/mm/abort-ev7.S
··· 15 15 ENTRY(v7_early_abort) 16 16 mrc p15, 0, r1, c5, c0, 0 @ get FSR 17 17 mrc p15, 0, r0, c6, c0, 0 @ get FAR 18 + uaccess_disable ip @ disable userspace access 18 19 19 20 /* 20 21 * V6 code adjusts the returned DFSR.
+2
arch/arm/mm/abort-lv4t.S
··· 26 26 #endif 27 27 bne .data_thumb_abort 28 28 ldr r8, [r4] @ read arm instruction 29 + uaccess_disable ip @ disable userspace access 29 30 tst r8, #1 << 20 @ L = 1 -> write? 30 31 orreq r1, r1, #1 << 11 @ yes. 31 32 and r7, r8, #15 << 24 ··· 156 155 157 156 .data_thumb_abort: 158 157 ldrh r8, [r4] @ read instruction 158 + uaccess_disable ip @ disable userspace access 159 159 tst r8, #1 << 11 @ L = 1 -> write? 160 160 orreq r1, r1, #1 << 8 @ yes 161 161 and r7, r8, #15 << 12
+6 -8
arch/arm/mm/abort-macro.S
··· 13 13 tst \psr, #PSR_T_BIT 14 14 beq not_thumb 15 15 ldrh \tmp, [\pc] @ Read aborted Thumb instruction 16 + uaccess_disable ip @ disable userspace access 16 17 and \tmp, \tmp, # 0xfe00 @ Mask opcode field 17 18 cmp \tmp, # 0x5600 @ Is it ldrsb? 18 19 orreq \tmp, \tmp, #1 << 11 @ Set L-bit if yes ··· 30 29 * [7:4] == 1101 31 30 * [20] == 0 32 31 */ 33 - .macro do_ldrd_abort, tmp, insn 34 - tst \insn, #0x0e100000 @ [27:25,20] == 0 35 - bne not_ldrd 36 - and \tmp, \insn, #0x000000f0 @ [7:4] == 1101 37 - cmp \tmp, #0x000000d0 38 - beq do_DataAbort 39 - not_ldrd: 32 + .macro teq_ldrd, tmp, insn 33 + mov \tmp, #0x0e100000 34 + orr \tmp, #0x000000f0 35 + and \tmp, \insn, \tmp 36 + teq \tmp, #0x000000d0 40 37 .endm 41 -
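The rewritten `teq_ldrd` macro folds the old two-step test (`tst` then `cmp`) into one combined mask: an instruction is LDRD when bits [27:25] and [20] are zero and bits [7:4] are 1101. A C sketch of the predicate, building the mask exactly as the macro's `mov`/`orr` sequence does:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Combined mask as built by teq_ldrd:
 * bits [27:25] and [20] must be 0, bits [7:4] must be 1101 (0xd). */
static bool insn_is_ldrd(uint32_t insn)
{
	uint32_t mask = 0x0e100000u | 0x000000f0u;

	return (insn & mask) == 0x000000d0u;
}
```

The single `teq` leaves the result in the flags, so the abort handlers above can simply branch on `beq`.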
+1 -5
arch/arm/mm/cache-feroceon-l2.c
··· 368 368 struct device_node *node; 369 369 void __iomem *base; 370 370 bool l2_wt_override = false; 371 - struct resource res; 372 371 373 372 #if defined(CONFIG_CACHE_FEROCEON_L2_WRITETHROUGH) 374 373 l2_wt_override = true; ··· 375 376 376 377 node = of_find_matching_node(NULL, feroceon_ids); 377 378 if (node && of_device_is_compatible(node, "marvell,kirkwood-cache")) { 378 - if (of_address_to_resource(node, 0, &res)) 379 - return -ENODEV; 380 - 381 - base = ioremap(res.start, resource_size(&res)); 379 + base = of_iomap(node, 0); 382 380 if (!base) 383 381 return -ENOMEM; 384 382
+5
arch/arm/mm/cache-l2x0.c
··· 1171 1171 } 1172 1172 } 1173 1173 1174 + if (of_property_read_bool(np, "arm,shared-override")) { 1175 + *aux_val |= L2C_AUX_CTRL_SHARED_OVERRIDE; 1176 + *aux_mask &= ~L2C_AUX_CTRL_SHARED_OVERRIDE; 1177 + } 1178 + 1174 1179 prefetch = l2x0_saved_regs.prefetch_ctrl; 1175 1180 1176 1181 ret = of_property_read_u32(np, "arm,double-linefill", &val);
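The new `arm,shared-override` handling follows the l2x0 driver's convention for forcing a bit on: set it in `*aux_val` and clear it in `*aux_mask`, after which the driver computes the final register value as `(aux & mask) | val`. A sketch of that folding, assuming `L2C_AUX_CTRL_SHARED_OVERRIDE` is bit 22 of the PL310 auxiliary control register:

```c
#include <assert.h>
#include <stdint.h>

#define L2C_AUX_CTRL_SHARED_OVERRIDE (1u << 22) /* assumed bit position */

/* How the driver folds the DT overrides into the hardware value. */
static uint32_t l2x0_fold_aux(uint32_t hw_aux, uint32_t aux_val,
			      uint32_t aux_mask)
{
	return (hw_aux & aux_mask) | aux_val;
}
```

Clearing the bit in the mask guarantees the override wins regardless of the value the boot loader left in the register.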
+13 -9
arch/arm/mm/dma-mapping.c
··· 39 39 #include <asm/system_info.h> 40 40 #include <asm/dma-contiguous.h> 41 41 42 + #include "dma.h" 42 43 #include "mm.h" 43 44 44 45 /* ··· 649 648 size = PAGE_ALIGN(size); 650 649 want_vaddr = !dma_get_attr(DMA_ATTR_NO_KERNEL_MAPPING, attrs); 651 650 652 - if (is_coherent || nommu()) 651 + if (nommu()) 652 + addr = __alloc_simple_buffer(dev, size, gfp, &page); 653 + else if (dev_get_cma_area(dev) && (gfp & __GFP_WAIT)) 654 + addr = __alloc_from_contiguous(dev, size, prot, &page, 655 + caller, want_vaddr); 656 + else if (is_coherent) 653 657 addr = __alloc_simple_buffer(dev, size, gfp, &page); 654 658 else if (!(gfp & __GFP_WAIT)) 655 659 addr = __alloc_from_pool(size, &page); 656 660 else if (!dev_get_cma_area(dev)) 657 - addr = __alloc_remap_buffer(dev, size, gfp, prot, &page, caller, want_vaddr); 658 660 else 659 - addr = __alloc_from_contiguous(dev, size, prot, &page, caller, want_vaddr); 661 + addr = __alloc_remap_buffer(dev, size, gfp, prot, &page, 662 + caller, want_vaddr); 660 663 661 664 if (page) 662 665 *handle = pfn_to_dma(dev, page_to_pfn(page)); ··· 688 683 static void *arm_coherent_dma_alloc(struct device *dev, size_t size, 689 684 dma_addr_t *handle, gfp_t gfp, struct dma_attrs *attrs) 690 685 { 691 - pgprot_t prot = __get_dma_pgprot(attrs, PAGE_KERNEL); 692 686 void *memory; 693 687 694 688 if (dma_alloc_from_coherent(dev, size, handle, &memory)) 695 689 return memory; 696 690 697 - return __dma_alloc(dev, size, handle, gfp, prot, true, 691 + return __dma_alloc(dev, size, handle, gfp, PAGE_KERNEL, true, 698 692 attrs, __builtin_return_address(0)); 699 693 } 700 694 ··· 757 753 758 754 size = PAGE_ALIGN(size); 759 755 760 - if (is_coherent || nommu()) { 756 + if (nommu()) { 761 757 __dma_free_buffer(page, size); 762 758 } else if (!is_coherent && __free_from_pool(cpu_addr, size)) { 763 759 return; 764 760 } else if (!dev_get_cma_area(dev)) { 765 - if (want_vaddr) 761 + if (want_vaddr && !is_coherent) 766 762 __dma_free_remap(cpu_addr, size); 767 763 __dma_free_buffer(page, size); 768 764 } else {
+32
arch/arm/mm/dma.h
··· 1 + #ifndef DMA_H 2 + #define DMA_H 3 + 4 + #include <asm/glue-cache.h> 5 + 6 + #ifndef MULTI_CACHE 7 + #define dmac_map_area __glue(_CACHE,_dma_map_area) 8 + #define dmac_unmap_area __glue(_CACHE,_dma_unmap_area) 9 + 10 + /* 11 + * These are private to the dma-mapping API. Do not use directly. 12 + * Their sole purpose is to ensure that data held in the cache 13 + * is visible to DMA, or data written by DMA to system memory is 14 + * visible to the CPU. 15 + */ 16 + extern void dmac_map_area(const void *, size_t, int); 17 + extern void dmac_unmap_area(const void *, size_t, int); 18 + 19 + #else 20 + 21 + /* 22 + * These are private to the dma-mapping API. Do not use directly. 23 + * Their sole purpose is to ensure that data held in the cache 24 + * is visible to DMA, or data written by DMA to system memory is 25 + * visible to the CPU. 26 + */ 27 + #define dmac_map_area cpu_cache.dma_map_area 28 + #define dmac_unmap_area cpu_cache.dma_unmap_area 29 + 30 + #endif 31 + 32 + #endif
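The `dmac_map_area` name in the new `dma.h` is produced at preprocessing time by the kernel's `__glue` token-pasting helper: with `_CACHE` defined as, say, `v7`, it expands to `v7_dma_map_area`. A sketch of the two-level paste (the extra indirection level is what lets `_CACHE` expand before `##` fires):

```c
#include <assert.h>
#include <string.h>

#define ____glue(name, fn)	name##fn
#define __glue(name, fn)	____glue(name, fn)

#define _CACHE v7		/* assumed cache implementation name */
#define dmac_map_area __glue(_CACHE, _dma_map_area)

/* Stringify after full expansion, to observe the pasted name. */
#define __stringify_1(x)	#x
#define __stringify(x)		__stringify_1(x)
```

With a single-level `#define __glue(a, b) a##b`, the paste would produce the literal token `_CACHE_dma_map_area` instead.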
+15
arch/arm/mm/flush.c
··· 21 21 22 22 #include "mm.h" 23 23 24 + #ifdef CONFIG_ARM_HEAVY_MB 25 + void (*soc_mb)(void); 26 + 27 + void arm_heavy_mb(void) 28 + { 29 + #ifdef CONFIG_OUTER_CACHE_SYNC 30 + if (outer_cache.sync) 31 + outer_cache.sync(); 32 + #endif 33 + if (soc_mb) 34 + soc_mb(); 35 + } 36 + EXPORT_SYMBOL(arm_heavy_mb); 37 + #endif 38 + 24 39 #ifdef CONFIG_CPU_CACHE_VIPT 25 40 26 41 static void flush_pfn_alias(unsigned long pfn, unsigned long vaddr)
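`arm_heavy_mb()` chains the outer-cache sync with an optional SoC-specific hook (`soc_mb`, which the OMAP4 code above sets to `omap4_mb`). A sketch of the dispatch with counters standing in for the real callbacks (the counter functions are hypothetical, for illustration only):

```c
#include <assert.h>
#include <stddef.h>

static void (*outer_sync)(void);  /* stand-in for outer_cache.sync */
static void (*soc_mb)(void);      /* SoC-specific barrier hook */

static int outer_calls, soc_calls;
static void count_outer(void) { outer_calls++; }
static void count_soc(void)   { soc_calls++; }

/* Mirrors arm_heavy_mb(): both hooks are optional. */
static void arm_heavy_mb(void)
{
	if (outer_sync)
		outer_sync();
	if (soc_mb)
		soc_mb();
}
```

On platforms that set neither hook, the heavy barrier degenerates to a no-op beyond the CPU-level `dsb`.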
+3 -3
arch/arm/mm/highmem.c
··· 79 79 80 80 type = kmap_atomic_idx_push(); 81 81 82 - idx = type + KM_TYPE_NR * smp_processor_id(); 82 + idx = FIX_KMAP_BEGIN + type + KM_TYPE_NR * smp_processor_id(); 83 83 vaddr = __fix_to_virt(idx); 84 84 #ifdef CONFIG_DEBUG_HIGHMEM 85 85 /* ··· 106 106 107 107 if (kvaddr >= (void *)FIXADDR_START) { 108 108 type = kmap_atomic_idx(); 109 - idx = type + KM_TYPE_NR * smp_processor_id(); 109 + idx = FIX_KMAP_BEGIN + type + KM_TYPE_NR * smp_processor_id(); 110 110 111 111 if (cache_is_vivt()) 112 112 __cpuc_flush_dcache_area((void *)vaddr, PAGE_SIZE); ··· 138 138 return page_address(page); 139 139 140 140 type = kmap_atomic_idx_push(); 141 - idx = type + KM_TYPE_NR * smp_processor_id(); 141 + idx = FIX_KMAP_BEGIN + type + KM_TYPE_NR * smp_processor_id(); 142 142 vaddr = __fix_to_virt(idx); 143 143 #ifdef CONFIG_DEBUG_HIGHMEM 144 144 BUG_ON(!pte_none(get_fixmap_pte(vaddr)));
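The bug fixed in highmem.c: the per-CPU atomic-kmap slot must be offset by `FIX_KMAP_BEGIN` before being converted to a virtual address, since the fixmap now also hosts the permanent (earlycon) slots. A sketch of the index arithmetic with illustrative constants (the real values come from `asm/fixmap.h` and `asm/kmap_types.h`, and `__fix_to_virt` counts down from `FIXADDR_TOP` as in `asm-generic/fixmap.h`):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative values, not the kernel's exact ones. */
#define FIXADDR_TOP    0xffeff000u
#define PAGE_SHIFT     12
#define FIX_KMAP_BEGIN 4   /* slots below this belong to permanent fixmaps */
#define KM_TYPE_NR     16  /* kmap slots per CPU */

static uint32_t fix_to_virt(unsigned int idx)
{
	return FIXADDR_TOP - (idx << PAGE_SHIFT);
}

/* Slot index for kmap type 'type' on CPU 'cpu', as fixed in highmem.c. */
static unsigned int kmap_idx(unsigned int type, unsigned int cpu)
{
	return FIX_KMAP_BEGIN + type + KM_TYPE_NR * cpu;
}
```

Without the `FIX_KMAP_BEGIN` offset, CPU0's first atomic kmap would alias the permanent fixmap slots.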
+83 -9
arch/arm/mm/mmu.c
··· 291 291 .prot_pte = L_PTE_PRESENT | L_PTE_YOUNG | L_PTE_DIRTY | 292 292 L_PTE_RDONLY, 293 293 .prot_l1 = PMD_TYPE_TABLE, 294 - .domain = DOMAIN_USER, 294 + .domain = DOMAIN_VECTORS, 295 295 }, 296 296 [MT_HIGH_VECTORS] = { 297 297 .prot_pte = L_PTE_PRESENT | L_PTE_YOUNG | L_PTE_DIRTY | 298 298 L_PTE_USER | L_PTE_RDONLY, 299 299 .prot_l1 = PMD_TYPE_TABLE, 300 - .domain = DOMAIN_USER, 300 + .domain = DOMAIN_VECTORS, 301 301 }, 302 302 [MT_MEMORY_RWX] = { 303 303 .prot_pte = L_PTE_PRESENT | L_PTE_YOUNG | L_PTE_DIRTY, ··· 357 357 } 358 358 EXPORT_SYMBOL(get_mem_type); 359 359 360 + static pte_t *(*pte_offset_fixmap)(pmd_t *dir, unsigned long addr); 361 + 362 + static pte_t bm_pte[PTRS_PER_PTE + PTE_HWTABLE_PTRS] 363 + __aligned(PTE_HWTABLE_OFF + PTE_HWTABLE_SIZE) __initdata; 364 + 365 + static pte_t * __init pte_offset_early_fixmap(pmd_t *dir, unsigned long addr) 366 + { 367 + return &bm_pte[pte_index(addr)]; 368 + } 369 + 370 + static pte_t *pte_offset_late_fixmap(pmd_t *dir, unsigned long addr) 371 + { 372 + return pte_offset_kernel(dir, addr); 373 + } 374 + 375 + static inline pmd_t * __init fixmap_pmd(unsigned long addr) 376 + { 377 + pgd_t *pgd = pgd_offset_k(addr); 378 + pud_t *pud = pud_offset(pgd, addr); 379 + pmd_t *pmd = pmd_offset(pud, addr); 380 + 381 + return pmd; 382 + } 383 + 384 + void __init early_fixmap_init(void) 385 + { 386 + pmd_t *pmd; 387 + 388 + /* 389 + * The early fixmap range spans multiple pmds, for which 390 + * we are not prepared: 391 + */ 392 + BUILD_BUG_ON((__fix_to_virt(__end_of_permanent_fixed_addresses) >> PMD_SHIFT) 393 + != FIXADDR_TOP >> PMD_SHIFT); 394 + 395 + pmd = fixmap_pmd(FIXADDR_TOP); 396 + pmd_populate_kernel(&init_mm, pmd, bm_pte); 397 + 398 + pte_offset_fixmap = pte_offset_early_fixmap; 399 + } 400 + 360 401 /* 361 402 * To avoid TLB flush broadcasts, this uses local_flush_tlb_kernel_range(). 
362 403 * As a result, this can only be called with preemption disabled, as under ··· 406 365 void __set_fixmap(enum fixed_addresses idx, phys_addr_t phys, pgprot_t prot) 407 366 { 408 367 unsigned long vaddr = __fix_to_virt(idx); 409 - pte_t *pte = pte_offset_kernel(pmd_off_k(vaddr), vaddr); 368 + pte_t *pte = pte_offset_fixmap(pmd_off_k(vaddr), vaddr); 410 369 411 370 /* Make sure fixmap region does not exceed available allocation. */ 412 371 BUILD_BUG_ON(FIXADDR_START + (__end_of_fixed_addresses * PAGE_SIZE) > ··· 896 855 } 897 856 898 857 if ((md->type == MT_DEVICE || md->type == MT_ROM) && 899 - md->virtual >= PAGE_OFFSET && 858 + md->virtual >= PAGE_OFFSET && md->virtual < FIXADDR_START && 900 859 (md->virtual < VMALLOC_START || md->virtual >= VMALLOC_END)) { 901 860 pr_warn("BUG: mapping for 0x%08llx at 0x%08lx out of vmalloc space\n", 902 861 (long long)__pfn_to_phys((u64)md->pfn), md->virtual); ··· 1260 1219 1261 1220 /* 1262 1221 * Set up the device mappings. Since we clear out the page tables for all 1263 - * mappings above VMALLOC_START, we will remove any debug device mappings. 1264 - * This means you have to be careful how you debug this function, or any 1265 - * called function. This means you can't use any function or debugging 1266 - * method which may touch any device, otherwise the kernel _will_ crash. 1222 + * mappings above VMALLOC_START, except the early fixmap, we might remove 1223 + * debug device mappings. This means earlycon can be used to debug this 1224 + * function. Any other function or debugging method which may touch any 1225 + * device _will_ crash the kernel.
1267 1226 */ 1268 1227 static void __init devicemaps_init(const struct machine_desc *mdesc) 1269 1228 { ··· 1278 1237 1279 1238 early_trap_init(vectors); 1280 1239 1281 - for (addr = VMALLOC_START; addr; addr += PMD_SIZE) 1240 + /* 1241 + * Clear page table except top pmd used by early fixmaps 1242 + */ 1243 + for (addr = VMALLOC_START; addr < (FIXADDR_TOP & PMD_MASK); addr += PMD_SIZE) 1282 1244 pmd_clear(pmd_off_k(addr)); 1283 1245 1284 1246 /* ··· 1533 1489 1534 1490 #endif 1535 1491 1492 + static void __init early_fixmap_shutdown(void) 1493 + { 1494 + int i; 1495 + unsigned long va = fix_to_virt(__end_of_permanent_fixed_addresses - 1); 1496 + 1497 + pte_offset_fixmap = pte_offset_late_fixmap; 1498 + pmd_clear(fixmap_pmd(va)); 1499 + local_flush_tlb_kernel_page(va); 1500 + 1501 + for (i = 0; i < __end_of_permanent_fixed_addresses; i++) { 1502 + pte_t *pte; 1503 + struct map_desc map; 1504 + 1505 + map.virtual = fix_to_virt(i); 1506 + pte = pte_offset_early_fixmap(pmd_off_k(map.virtual), map.virtual); 1507 + 1508 + /* Only i/o device mappings are supported ATM */ 1509 + if (pte_none(*pte) || 1510 + (pte_val(*pte) & L_PTE_MT_MASK) != L_PTE_MT_DEV_SHARED) 1511 + continue; 1512 + 1513 + map.pfn = pte_pfn(*pte); 1514 + map.type = MT_DEVICE; 1515 + map.length = PAGE_SIZE; 1516 + 1517 + create_mapping(&map); 1518 + } 1519 + } 1520 + 1536 1521 /* 1537 1522 * paging_init() sets up the page tables, initialises the zone memory 1538 1523 * maps, and sets up the zero page, bad page and bad page tables. ··· 1575 1502 map_lowmem(); 1576 1503 memblock_set_current_limit(arm_lowmem_limit); 1577 1504 dma_contiguous_remap(); 1505 + early_fixmap_shutdown(); 1578 1506 devicemaps_init(mdesc); 1579 1507 kmap_init(); 1580 1508 tcm_init();
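The mmu.c changes above hinge on one pattern: `__set_fixmap()` always resolves PTEs through the `pte_offset_fixmap` function pointer, which starts out as `pte_offset_early_fixmap` (backed by the static `bm_pte` table) and is retargeted to `pte_offset_late_fixmap` exactly once, in `early_fixmap_shutdown()`, before `devicemaps_init()` rebuilds the mappings. A minimal userspace sketch of that switchover; the helper names and return values here are illustrative, not the kernel's:

```c
#include <assert.h>

/* Two resolvers: a boot-time one backed by a static table, and a
 * runtime one. (Stand-ins for pte_offset_early_fixmap and
 * pte_offset_late_fixmap; the integers model distinct backing stores.) */
static int early_lookup(int idx) { return 100 + idx; }
static int late_lookup(int idx)  { return 200 + idx; }

/* Like pte_offset_fixmap in the patch: callers always go through the
 * pointer, so they never need to know which phase they run in. */
static int (*lookup)(int) = early_lookup;

/* Like early_fixmap_shutdown(): retarget the pointer exactly once. */
static void fixmap_shutdown(void)
{
	lookup = late_lookup;
}
```

The design choice this buys the kernel: `__set_fixmap()` stays a single code path usable both before the real page tables exist (earlycon) and after paging is up.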
+10
arch/arm/mm/pgd.c
··· 84 84 if (!new_pte) 85 85 goto no_pte; 86 86 87 + #ifndef CONFIG_ARM_LPAE 88 + /* 89 + * Modify the PTE pointer to have the correct domain. This 90 + * needs to be the vectors domain to avoid the low vectors 91 + * being unmapped. 92 + */ 93 + pmd_val(*new_pmd) &= ~PMD_DOMAIN_MASK; 94 + pmd_val(*new_pmd) |= PMD_DOMAIN(DOMAIN_VECTORS); 95 + #endif 96 + 87 97 init_pud = pud_offset(init_pgd, 0); 88 98 init_pmd = pmd_offset(init_pud, 0); 89 99 init_pte = pte_offset_map(init_pmd, 0);
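The pgd.c hunk above installs `DOMAIN_VECTORS` into the freshly copied first-level entry with a clear-then-set sequence on the 4-bit domain field, which on the ARM short-descriptor format sits at bits [8:5]. A hedged sketch of that read-modify-write, using the `PMD_DOMAIN()`/`PMD_DOMAIN_MASK` layout as defined in this series (the `pmd_set_domain` wrapper is mine, not a kernel function):

```c
#include <assert.h>
#include <stdint.h>

/* Domain field of an ARM first-level descriptor: 4 bits at [8:5].
 * These mirror PMD_DOMAIN()/PMD_DOMAIN_MASK from asm/pgtable-hwdef.h. */
#define PMD_DOMAIN(x)   ((uint32_t)(x) << 5)
#define PMD_DOMAIN_MASK PMD_DOMAIN(0xf)

static uint32_t pmd_set_domain(uint32_t pmd, unsigned int domain)
{
	pmd &= ~PMD_DOMAIN_MASK;   /* clear whatever domain was inherited */
	pmd |= PMD_DOMAIN(domain); /* then install the wanted one */
	return pmd;
}
```

Clearing before setting matters here: the entry was copied from `init_mm`, so it may already carry a different domain, and OR-ing alone would merge the two values.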
+1
arch/arm64/Kconfig
··· 20 20 select ARM_GIC_V2M if PCI_MSI 21 21 select ARM_GIC_V3 22 22 select ARM_GIC_V3_ITS if PCI_MSI 23 + select ARM_PSCI_FW 23 24 select BUILDTIME_EXTABLE_SORT 24 25 select CLONE_BACKWARDS 25 26 select COMMON_CLK
+2 -2
arch/arm64/include/asm/acpi.h
··· 12 12 #ifndef _ASM_ACPI_H 13 13 #define _ASM_ACPI_H 14 14 15 - #include <linux/mm.h> 16 15 #include <linux/irqchip/arm-gic-acpi.h> 16 + #include <linux/mm.h> 17 + #include <linux/psci.h> 17 18 18 19 #include <asm/cputype.h> 19 - #include <asm/psci.h> 20 20 #include <asm/smp_plat.h> 21 21 22 22 /* Macros for consistency checks of the GICC subtable of MADT */
-28
arch/arm64/include/asm/psci.h
··· 1 - /* 2 - * This program is free software; you can redistribute it and/or modify 3 - * it under the terms of the GNU General Public License version 2 as 4 - * published by the Free Software Foundation. 5 - * 6 - * This program is distributed in the hope that it will be useful, 7 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 8 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 9 - * GNU General Public License for more details. 10 - * 11 - * Copyright (C) 2013 ARM Limited 12 - */ 13 - 14 - #ifndef __ASM_PSCI_H 15 - #define __ASM_PSCI_H 16 - 17 - int __init psci_dt_init(void); 18 - 19 - #ifdef CONFIG_ACPI 20 - int __init psci_acpi_init(void); 21 - bool __init acpi_psci_present(void); 22 - bool __init acpi_psci_use_hvc(void); 23 - #else 24 - static inline int psci_acpi_init(void) { return 0; } 25 - static inline bool acpi_psci_present(void) { return false; } 26 - #endif 27 - 28 - #endif /* __ASM_PSCI_H */
+2 -359
arch/arm64/kernel/psci.c
··· 18 18 #include <linux/init.h> 19 19 #include <linux/of.h> 20 20 #include <linux/smp.h> 21 - #include <linux/reboot.h> 22 - #include <linux/pm.h> 23 21 #include <linux/delay.h> 22 + #include <linux/psci.h> 24 23 #include <linux/slab.h> 24 + 25 25 #include <uapi/linux/psci.h> 26 26 27 27 #include <asm/compiler.h> 28 - #include <asm/cputype.h> 29 28 #include <asm/cpu_ops.h> 30 29 #include <asm/errno.h> 31 - #include <asm/psci.h> 32 30 #include <asm/smp_plat.h> 33 31 #include <asm/suspend.h> 34 - #include <asm/system_misc.h> 35 - 36 - #define PSCI_POWER_STATE_TYPE_STANDBY 0 37 - #define PSCI_POWER_STATE_TYPE_POWER_DOWN 1 38 32 39 33 static bool psci_power_state_loses_context(u32 state) 40 34 { ··· 44 50 return !(state & ~valid_mask); 45 51 } 46 52 47 - /* 48 - * The CPU any Trusted OS is resident on. The trusted OS may reject CPU_OFF 49 - * calls to its resident CPU, so we must avoid issuing those. We never migrate 50 - * a Trusted OS even if it claims to be capable of migration -- doing so will 51 - * require cooperation with a Trusted OS driver. 
52 - */ 53 - static int resident_cpu = -1; 54 - 55 - struct psci_operations { 56 - int (*cpu_suspend)(u32 state, unsigned long entry_point); 57 - int (*cpu_off)(u32 state); 58 - int (*cpu_on)(unsigned long cpuid, unsigned long entry_point); 59 - int (*migrate)(unsigned long cpuid); 60 - int (*affinity_info)(unsigned long target_affinity, 61 - unsigned long lowest_affinity_level); 62 - int (*migrate_info_type)(void); 63 - }; 64 - 65 - static struct psci_operations psci_ops; 66 - 67 - typedef unsigned long (psci_fn)(unsigned long, unsigned long, 68 - unsigned long, unsigned long); 69 - asmlinkage psci_fn __invoke_psci_fn_hvc; 70 - asmlinkage psci_fn __invoke_psci_fn_smc; 71 - static psci_fn *invoke_psci_fn; 72 - 73 - enum psci_function { 74 - PSCI_FN_CPU_SUSPEND, 75 - PSCI_FN_CPU_ON, 76 - PSCI_FN_CPU_OFF, 77 - PSCI_FN_MIGRATE, 78 - PSCI_FN_MAX, 79 - }; 80 - 81 53 static DEFINE_PER_CPU_READ_MOSTLY(u32 *, psci_power_state); 82 - 83 - static u32 psci_function_id[PSCI_FN_MAX]; 84 - 85 - static int psci_to_linux_errno(int errno) 86 - { 87 - switch (errno) { 88 - case PSCI_RET_SUCCESS: 89 - return 0; 90 - case PSCI_RET_NOT_SUPPORTED: 91 - return -EOPNOTSUPP; 92 - case PSCI_RET_INVALID_PARAMS: 93 - return -EINVAL; 94 - case PSCI_RET_DENIED: 95 - return -EPERM; 96 - }; 97 - 98 - return -EINVAL; 99 - } 100 - 101 - static u32 psci_get_version(void) 102 - { 103 - return invoke_psci_fn(PSCI_0_2_FN_PSCI_VERSION, 0, 0, 0); 104 - } 105 - 106 - static int psci_cpu_suspend(u32 state, unsigned long entry_point) 107 - { 108 - int err; 109 - u32 fn; 110 - 111 - fn = psci_function_id[PSCI_FN_CPU_SUSPEND]; 112 - err = invoke_psci_fn(fn, state, entry_point, 0); 113 - return psci_to_linux_errno(err); 114 - } 115 - 116 - static int psci_cpu_off(u32 state) 117 - { 118 - int err; 119 - u32 fn; 120 - 121 - fn = psci_function_id[PSCI_FN_CPU_OFF]; 122 - err = invoke_psci_fn(fn, state, 0, 0); 123 - return psci_to_linux_errno(err); 124 - } 125 - 126 - static int psci_cpu_on(unsigned long cpuid, 
unsigned long entry_point) 127 - { 128 - int err; 129 - u32 fn; 130 - 131 - fn = psci_function_id[PSCI_FN_CPU_ON]; 132 - err = invoke_psci_fn(fn, cpuid, entry_point, 0); 133 - return psci_to_linux_errno(err); 134 - } 135 - 136 - static int psci_migrate(unsigned long cpuid) 137 - { 138 - int err; 139 - u32 fn; 140 - 141 - fn = psci_function_id[PSCI_FN_MIGRATE]; 142 - err = invoke_psci_fn(fn, cpuid, 0, 0); 143 - return psci_to_linux_errno(err); 144 - } 145 - 146 - static int psci_affinity_info(unsigned long target_affinity, 147 - unsigned long lowest_affinity_level) 148 - { 149 - return invoke_psci_fn(PSCI_0_2_FN64_AFFINITY_INFO, target_affinity, 150 - lowest_affinity_level, 0); 151 - } 152 - 153 - static int psci_migrate_info_type(void) 154 - { 155 - return invoke_psci_fn(PSCI_0_2_FN_MIGRATE_INFO_TYPE, 0, 0, 0); 156 - } 157 - 158 - static unsigned long psci_migrate_info_up_cpu(void) 159 - { 160 - return invoke_psci_fn(PSCI_0_2_FN64_MIGRATE_INFO_UP_CPU, 0, 0, 0); 161 - } 162 54 163 55 static int __maybe_unused cpu_psci_cpu_init_idle(unsigned int cpu) 164 56 { ··· 110 230 return ret; 111 231 } 112 232 113 - static int get_set_conduit_method(struct device_node *np) 114 - { 115 - const char *method; 116 - 117 - pr_info("probing for conduit method from DT.\n"); 118 - 119 - if (of_property_read_string(np, "method", &method)) { 120 - pr_warn("missing \"method\" property\n"); 121 - return -ENXIO; 122 - } 123 - 124 - if (!strcmp("hvc", method)) { 125 - invoke_psci_fn = __invoke_psci_fn_hvc; 126 - } else if (!strcmp("smc", method)) { 127 - invoke_psci_fn = __invoke_psci_fn_smc; 128 - } else { 129 - pr_warn("invalid \"method\" property: %s\n", method); 130 - return -EINVAL; 131 - } 132 - return 0; 133 - } 134 - 135 - static void psci_sys_reset(enum reboot_mode reboot_mode, const char *cmd) 136 - { 137 - invoke_psci_fn(PSCI_0_2_FN_SYSTEM_RESET, 0, 0, 0); 138 - } 139 - 140 - static void psci_sys_poweroff(void) 141 - { 142 - invoke_psci_fn(PSCI_0_2_FN_SYSTEM_OFF, 0, 0, 0); 143 - 
} 144 - 145 - /* 146 - * Detect the presence of a resident Trusted OS which may cause CPU_OFF to 147 - * return DENIED (which would be fatal). 148 - */ 149 - static void __init psci_init_migrate(void) 150 - { 151 - unsigned long cpuid; 152 - int type, cpu; 153 - 154 - type = psci_ops.migrate_info_type(); 155 - 156 - if (type == PSCI_0_2_TOS_MP) { 157 - pr_info("Trusted OS migration not required\n"); 158 - return; 159 - } 160 - 161 - if (type == PSCI_RET_NOT_SUPPORTED) { 162 - pr_info("MIGRATE_INFO_TYPE not supported.\n"); 163 - return; 164 - } 165 - 166 - if (type != PSCI_0_2_TOS_UP_MIGRATE && 167 - type != PSCI_0_2_TOS_UP_NO_MIGRATE) { 168 - pr_err("MIGRATE_INFO_TYPE returned unknown type (%d)\n", type); 169 - return; 170 - } 171 - 172 - cpuid = psci_migrate_info_up_cpu(); 173 - if (cpuid & ~MPIDR_HWID_BITMASK) { 174 - pr_warn("MIGRATE_INFO_UP_CPU reported invalid physical ID (0x%lx)\n", 175 - cpuid); 176 - return; 177 - } 178 - 179 - cpu = get_logical_index(cpuid); 180 - resident_cpu = cpu >= 0 ? 
cpu : -1; 181 - 182 - pr_info("Trusted OS resident on physical CPU 0x%lx\n", cpuid); 183 - } 184 - 185 - static void __init psci_0_2_set_functions(void) 186 - { 187 - pr_info("Using standard PSCI v0.2 function IDs\n"); 188 - psci_function_id[PSCI_FN_CPU_SUSPEND] = PSCI_0_2_FN64_CPU_SUSPEND; 189 - psci_ops.cpu_suspend = psci_cpu_suspend; 190 - 191 - psci_function_id[PSCI_FN_CPU_OFF] = PSCI_0_2_FN_CPU_OFF; 192 - psci_ops.cpu_off = psci_cpu_off; 193 - 194 - psci_function_id[PSCI_FN_CPU_ON] = PSCI_0_2_FN64_CPU_ON; 195 - psci_ops.cpu_on = psci_cpu_on; 196 - 197 - psci_function_id[PSCI_FN_MIGRATE] = PSCI_0_2_FN64_MIGRATE; 198 - psci_ops.migrate = psci_migrate; 199 - 200 - psci_ops.affinity_info = psci_affinity_info; 201 - 202 - psci_ops.migrate_info_type = psci_migrate_info_type; 203 - 204 - arm_pm_restart = psci_sys_reset; 205 - 206 - pm_power_off = psci_sys_poweroff; 207 - } 208 - 209 - /* 210 - * Probe function for PSCI firmware versions >= 0.2 211 - */ 212 - static int __init psci_probe(void) 213 - { 214 - u32 ver = psci_get_version(); 215 - 216 - pr_info("PSCIv%d.%d detected in firmware.\n", 217 - PSCI_VERSION_MAJOR(ver), 218 - PSCI_VERSION_MINOR(ver)); 219 - 220 - if (PSCI_VERSION_MAJOR(ver) == 0 && PSCI_VERSION_MINOR(ver) < 2) { 221 - pr_err("Conflicting PSCI version detected.\n"); 222 - return -EINVAL; 223 - } 224 - 225 - psci_0_2_set_functions(); 226 - 227 - psci_init_migrate(); 228 - 229 - return 0; 230 - } 231 - 232 - typedef int (*psci_initcall_t)(const struct device_node *); 233 - 234 - /* 235 - * PSCI init function for PSCI versions >=0.2 236 - * 237 - * Probe based on PSCI PSCI_VERSION function 238 - */ 239 - static int __init psci_0_2_init(struct device_node *np) 240 - { 241 - int err; 242 - 243 - err = get_set_conduit_method(np); 244 - 245 - if (err) 246 - goto out_put_node; 247 - /* 248 - * Starting with v0.2, the PSCI specification introduced a call 249 - * (PSCI_VERSION) that allows probing the firmware version, so 250 - * that PSCI function IDs and 
version specific initialization 251 - * can be carried out according to the specific version reported 252 - * by firmware 253 - */ 254 - err = psci_probe(); 255 - 256 - out_put_node: 257 - of_node_put(np); 258 - return err; 259 - } 260 - 261 - /* 262 - * PSCI < v0.2 get PSCI Function IDs via DT. 263 - */ 264 - static int __init psci_0_1_init(struct device_node *np) 265 - { 266 - u32 id; 267 - int err; 268 - 269 - err = get_set_conduit_method(np); 270 - 271 - if (err) 272 - goto out_put_node; 273 - 274 - pr_info("Using PSCI v0.1 Function IDs from DT\n"); 275 - 276 - if (!of_property_read_u32(np, "cpu_suspend", &id)) { 277 - psci_function_id[PSCI_FN_CPU_SUSPEND] = id; 278 - psci_ops.cpu_suspend = psci_cpu_suspend; 279 - } 280 - 281 - if (!of_property_read_u32(np, "cpu_off", &id)) { 282 - psci_function_id[PSCI_FN_CPU_OFF] = id; 283 - psci_ops.cpu_off = psci_cpu_off; 284 - } 285 - 286 - if (!of_property_read_u32(np, "cpu_on", &id)) { 287 - psci_function_id[PSCI_FN_CPU_ON] = id; 288 - psci_ops.cpu_on = psci_cpu_on; 289 - } 290 - 291 - if (!of_property_read_u32(np, "migrate", &id)) { 292 - psci_function_id[PSCI_FN_MIGRATE] = id; 293 - psci_ops.migrate = psci_migrate; 294 - } 295 - 296 - out_put_node: 297 - of_node_put(np); 298 - return err; 299 - } 300 - 301 - static const struct of_device_id psci_of_match[] __initconst = { 302 - { .compatible = "arm,psci", .data = psci_0_1_init}, 303 - { .compatible = "arm,psci-0.2", .data = psci_0_2_init}, 304 - {}, 305 - }; 306 - 307 - int __init psci_dt_init(void) 308 - { 309 - struct device_node *np; 310 - const struct of_device_id *matched_np; 311 - psci_initcall_t init_fn; 312 - 313 - np = of_find_matching_node_and_match(NULL, psci_of_match, &matched_np); 314 - 315 - if (!np) 316 - return -ENODEV; 317 - 318 - init_fn = (psci_initcall_t)matched_np->data; 319 - return init_fn(np); 320 - } 321 - 322 - #ifdef CONFIG_ACPI 323 - /* 324 - * We use PSCI 0.2+ when ACPI is deployed on ARM64 and it's 325 - * explicitly clarified in SBBR 326 
- */ 327 - int __init psci_acpi_init(void) 328 - { 329 - if (!acpi_psci_present()) { 330 - pr_info("is not implemented in ACPI.\n"); 331 - return -EOPNOTSUPP; 332 - } 333 - 334 - pr_info("probing for conduit method from ACPI.\n"); 335 - 336 - if (acpi_psci_use_hvc()) 337 - invoke_psci_fn = __invoke_psci_fn_hvc; 338 - else 339 - invoke_psci_fn = __invoke_psci_fn_smc; 340 - 341 - return psci_probe(); 342 - } 343 - #endif 344 - 345 233 #ifdef CONFIG_SMP 346 234 347 235 static int __init cpu_psci_cpu_init(unsigned int cpu) ··· 137 489 } 138 490 139 491 #ifdef CONFIG_HOTPLUG_CPU 140 - static bool psci_tos_resident_on(int cpu) 141 - { 142 - return cpu == resident_cpu; 143 - } 144 - 145 492 static int cpu_psci_cpu_disable(unsigned int cpu) 146 493 { 147 494 /* Fail early if we don't have CPU_OFF support */
+1 -1
arch/arm64/kernel/setup.c
··· 45 45 #include <linux/of_platform.h> 46 46 #include <linux/efi.h> 47 47 #include <linux/personality.h> 48 + #include <linux/psci.h> 48 49 49 50 #include <asm/acpi.h> 50 51 #include <asm/fixmap.h> ··· 61 60 #include <asm/tlbflush.h> 62 61 #include <asm/traps.h> 63 62 #include <asm/memblock.h> 64 - #include <asm/psci.h> 65 63 #include <asm/efi.h> 66 64 #include <asm/virt.h> 67 65 #include <asm/xen/hypervisor.h>
+2
drivers/Kconfig
··· 176 176 177 177 source "drivers/mcb/Kconfig" 178 178 179 + source "drivers/perf/Kconfig" 180 + 179 181 source "drivers/ras/Kconfig" 180 182 181 183 source "drivers/thunderbolt/Kconfig"
+1
drivers/Makefile
··· 161 161 obj-$(CONFIG_FMC) += fmc/ 162 162 obj-$(CONFIG_POWERCAP) += powercap/ 163 163 obj-$(CONFIG_MCB) += mcb/ 164 + obj-$(CONFIG_PERF_EVENTS) += perf/ 164 165 obj-$(CONFIG_RAS) += ras/ 165 166 obj-$(CONFIG_THUNDERBOLT) += thunderbolt/ 166 167 obj-$(CONFIG_CORESIGHT) += hwtracing/coresight/
+10 -5
drivers/cpuidle/cpuidle-calxeda.c
··· 25 25 #include <linux/init.h> 26 26 #include <linux/mm.h> 27 27 #include <linux/platform_device.h> 28 + #include <linux/psci.h> 29 + 28 30 #include <asm/cpuidle.h> 29 31 #include <asm/suspend.h> 30 - #include <asm/psci.h> 32 + 33 + #include <uapi/linux/psci.h> 34 + 35 + #define CALXEDA_IDLE_PARAM \ 36 + ((0 << PSCI_0_2_POWER_STATE_ID_SHIFT) | \ 37 + (0 << PSCI_0_2_POWER_STATE_AFFL_SHIFT) | \ 38 + (PSCI_POWER_STATE_TYPE_POWER_DOWN << PSCI_0_2_POWER_STATE_TYPE_SHIFT)) 31 39 32 40 static int calxeda_idle_finish(unsigned long val) 33 41 { 34 - const struct psci_power_state ps = { 35 - .type = PSCI_POWER_STATE_TYPE_POWER_DOWN, 36 - }; 37 - return psci_ops.cpu_suspend(ps, __pa(cpu_resume)); 42 + return psci_ops.cpu_suspend(CALXEDA_IDLE_PARAM, __pa(cpu_resume)); 38 43 } 39 44 40 45 static int calxeda_pwrdown_idle(struct cpuidle_device *dev,
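The Calxeda change above replaces the old `struct psci_power_state` with a raw `u32` parameter, open-coding the PSCI v0.2 `power_state` encoding via `CALXEDA_IDLE_PARAM`. A small sketch of that packing; the shift values are the ones I believe `uapi/linux/psci.h` defines (state ID in the low bits, type at bit 16, affinity level at bit 24), and the `psci_power_state()` helper is illustrative, not part of the API:

```c
#include <assert.h>
#include <stdint.h>

/* Field positions of the PSCI v0.2 power_state parameter
 * (mirroring the uapi/linux/psci.h shift macros). */
#define PSCI_0_2_POWER_STATE_ID_SHIFT    0
#define PSCI_0_2_POWER_STATE_TYPE_SHIFT  16
#define PSCI_0_2_POWER_STATE_AFFL_SHIFT  24
#define PSCI_POWER_STATE_TYPE_POWER_DOWN 1

/* Pack the three fields the way CALXEDA_IDLE_PARAM does inline. */
static uint32_t psci_power_state(uint32_t id, uint32_t type, uint32_t affl)
{
	return (id   << PSCI_0_2_POWER_STATE_ID_SHIFT) |
	       (affl << PSCI_0_2_POWER_STATE_AFFL_SHIFT) |
	       (type << PSCI_0_2_POWER_STATE_TYPE_SHIFT);
}
```

With id = 0, affinity level = 0, and a power-down type, this yields the single bit at position 16 that `CALXEDA_IDLE_PARAM` evaluates to.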
+3
drivers/firmware/Kconfig
··· 5 5 6 6 menu "Firmware Drivers" 7 7 8 + config ARM_PSCI_FW 9 + bool 10 + 8 11 config EDD 9 12 tristate "BIOS Enhanced Disk Drive calls determine boot disk" 10 13 depends on X86
+1
drivers/firmware/Makefile
··· 1 1 # 2 2 # Makefile for the linux kernel. 3 3 # 4 + obj-$(CONFIG_ARM_PSCI_FW) += psci.o 4 5 obj-$(CONFIG_DMI) += dmi_scan.o 5 6 obj-$(CONFIG_DMI_SYSFS) += dmi-sysfs.o 6 7 obj-$(CONFIG_EDD) += edd.o
+382
drivers/firmware/psci.c
··· 1 + /* 2 + * This program is free software; you can redistribute it and/or modify 3 + * it under the terms of the GNU General Public License version 2 as 4 + * published by the Free Software Foundation. 5 + * 6 + * This program is distributed in the hope that it will be useful, 7 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 8 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 9 + * GNU General Public License for more details. 10 + * 11 + * Copyright (C) 2015 ARM Limited 12 + */ 13 + 14 + #define pr_fmt(fmt) "psci: " fmt 15 + 16 + #include <linux/errno.h> 17 + #include <linux/linkage.h> 18 + #include <linux/of.h> 19 + #include <linux/pm.h> 20 + #include <linux/printk.h> 21 + #include <linux/psci.h> 22 + #include <linux/reboot.h> 23 + 24 + #include <uapi/linux/psci.h> 25 + 26 + #include <asm/cputype.h> 27 + #include <asm/system_misc.h> 28 + #include <asm/smp_plat.h> 29 + 30 + /* 31 + * While a 64-bit OS can make calls with SMC32 calling conventions, for some 32 + * calls it is necessary to use SMC64 to pass or return 64-bit values. For such 33 + * calls PSCI_0_2_FN_NATIVE(x) will choose the appropriate (native-width) 34 + * function ID. 35 + */ 36 + #ifdef CONFIG_64BIT 37 + #define PSCI_0_2_FN_NATIVE(name) PSCI_0_2_FN64_##name 38 + #else 39 + #define PSCI_0_2_FN_NATIVE(name) PSCI_0_2_FN_##name 40 + #endif 41 + 42 + /* 43 + * The CPU any Trusted OS is resident on. The trusted OS may reject CPU_OFF 44 + * calls to its resident CPU, so we must avoid issuing those. We never migrate 45 + * a Trusted OS even if it claims to be capable of migration -- doing so will 46 + * require cooperation with a Trusted OS driver. 
47 + */ 48 + static int resident_cpu = -1; 49 + 50 + bool psci_tos_resident_on(int cpu) 51 + { 52 + return cpu == resident_cpu; 53 + } 54 + 55 + struct psci_operations psci_ops; 56 + 57 + typedef unsigned long (psci_fn)(unsigned long, unsigned long, 58 + unsigned long, unsigned long); 59 + asmlinkage psci_fn __invoke_psci_fn_hvc; 60 + asmlinkage psci_fn __invoke_psci_fn_smc; 61 + static psci_fn *invoke_psci_fn; 62 + 63 + enum psci_function { 64 + PSCI_FN_CPU_SUSPEND, 65 + PSCI_FN_CPU_ON, 66 + PSCI_FN_CPU_OFF, 67 + PSCI_FN_MIGRATE, 68 + PSCI_FN_MAX, 69 + }; 70 + 71 + static u32 psci_function_id[PSCI_FN_MAX]; 72 + 73 + static int psci_to_linux_errno(int errno) 74 + { 75 + switch (errno) { 76 + case PSCI_RET_SUCCESS: 77 + return 0; 78 + case PSCI_RET_NOT_SUPPORTED: 79 + return -EOPNOTSUPP; 80 + case PSCI_RET_INVALID_PARAMS: 81 + return -EINVAL; 82 + case PSCI_RET_DENIED: 83 + return -EPERM; 84 + }; 85 + 86 + return -EINVAL; 87 + } 88 + 89 + static u32 psci_get_version(void) 90 + { 91 + return invoke_psci_fn(PSCI_0_2_FN_PSCI_VERSION, 0, 0, 0); 92 + } 93 + 94 + static int psci_cpu_suspend(u32 state, unsigned long entry_point) 95 + { 96 + int err; 97 + u32 fn; 98 + 99 + fn = psci_function_id[PSCI_FN_CPU_SUSPEND]; 100 + err = invoke_psci_fn(fn, state, entry_point, 0); 101 + return psci_to_linux_errno(err); 102 + } 103 + 104 + static int psci_cpu_off(u32 state) 105 + { 106 + int err; 107 + u32 fn; 108 + 109 + fn = psci_function_id[PSCI_FN_CPU_OFF]; 110 + err = invoke_psci_fn(fn, state, 0, 0); 111 + return psci_to_linux_errno(err); 112 + } 113 + 114 + static int psci_cpu_on(unsigned long cpuid, unsigned long entry_point) 115 + { 116 + int err; 117 + u32 fn; 118 + 119 + fn = psci_function_id[PSCI_FN_CPU_ON]; 120 + err = invoke_psci_fn(fn, cpuid, entry_point, 0); 121 + return psci_to_linux_errno(err); 122 + } 123 + 124 + static int psci_migrate(unsigned long cpuid) 125 + { 126 + int err; 127 + u32 fn; 128 + 129 + fn = psci_function_id[PSCI_FN_MIGRATE]; 130 + err = 
invoke_psci_fn(fn, cpuid, 0, 0); 131 + return psci_to_linux_errno(err); 132 + } 133 + 134 + static int psci_affinity_info(unsigned long target_affinity, 135 + unsigned long lowest_affinity_level) 136 + { 137 + return invoke_psci_fn(PSCI_0_2_FN_NATIVE(AFFINITY_INFO), 138 + target_affinity, lowest_affinity_level, 0); 139 + } 140 + 141 + static int psci_migrate_info_type(void) 142 + { 143 + return invoke_psci_fn(PSCI_0_2_FN_MIGRATE_INFO_TYPE, 0, 0, 0); 144 + } 145 + 146 + static unsigned long psci_migrate_info_up_cpu(void) 147 + { 148 + return invoke_psci_fn(PSCI_0_2_FN_NATIVE(MIGRATE_INFO_UP_CPU), 149 + 0, 0, 0); 150 + } 151 + 152 + static int get_set_conduit_method(struct device_node *np) 153 + { 154 + const char *method; 155 + 156 + pr_info("probing for conduit method from DT.\n"); 157 + 158 + if (of_property_read_string(np, "method", &method)) { 159 + pr_warn("missing \"method\" property\n"); 160 + return -ENXIO; 161 + } 162 + 163 + if (!strcmp("hvc", method)) { 164 + invoke_psci_fn = __invoke_psci_fn_hvc; 165 + } else if (!strcmp("smc", method)) { 166 + invoke_psci_fn = __invoke_psci_fn_smc; 167 + } else { 168 + pr_warn("invalid \"method\" property: %s\n", method); 169 + return -EINVAL; 170 + } 171 + return 0; 172 + } 173 + 174 + static void psci_sys_reset(enum reboot_mode reboot_mode, const char *cmd) 175 + { 176 + invoke_psci_fn(PSCI_0_2_FN_SYSTEM_RESET, 0, 0, 0); 177 + } 178 + 179 + static void psci_sys_poweroff(void) 180 + { 181 + invoke_psci_fn(PSCI_0_2_FN_SYSTEM_OFF, 0, 0, 0); 182 + } 183 + 184 + /* 185 + * Detect the presence of a resident Trusted OS which may cause CPU_OFF to 186 + * return DENIED (which would be fatal). 
187 + */ 188 + static void __init psci_init_migrate(void) 189 + { 190 + unsigned long cpuid; 191 + int type, cpu = -1; 192 + 193 + type = psci_ops.migrate_info_type(); 194 + 195 + if (type == PSCI_0_2_TOS_MP) { 196 + pr_info("Trusted OS migration not required\n"); 197 + return; 198 + } 199 + 200 + if (type == PSCI_RET_NOT_SUPPORTED) { 201 + pr_info("MIGRATE_INFO_TYPE not supported.\n"); 202 + return; 203 + } 204 + 205 + if (type != PSCI_0_2_TOS_UP_MIGRATE && 206 + type != PSCI_0_2_TOS_UP_NO_MIGRATE) { 207 + pr_err("MIGRATE_INFO_TYPE returned unknown type (%d)\n", type); 208 + return; 209 + } 210 + 211 + cpuid = psci_migrate_info_up_cpu(); 212 + if (cpuid & ~MPIDR_HWID_BITMASK) { 213 + pr_warn("MIGRATE_INFO_UP_CPU reported invalid physical ID (0x%lx)\n", 214 + cpuid); 215 + return; 216 + } 217 + 218 + cpu = get_logical_index(cpuid); 219 + resident_cpu = cpu >= 0 ? cpu : -1; 220 + 221 + pr_info("Trusted OS resident on physical CPU 0x%lx\n", cpuid); 222 + } 223 + 224 + static void __init psci_0_2_set_functions(void) 225 + { 226 + pr_info("Using standard PSCI v0.2 function IDs\n"); 227 + psci_function_id[PSCI_FN_CPU_SUSPEND] = PSCI_0_2_FN_NATIVE(CPU_SUSPEND); 228 + psci_ops.cpu_suspend = psci_cpu_suspend; 229 + 230 + psci_function_id[PSCI_FN_CPU_OFF] = PSCI_0_2_FN_CPU_OFF; 231 + psci_ops.cpu_off = psci_cpu_off; 232 + 233 + psci_function_id[PSCI_FN_CPU_ON] = PSCI_0_2_FN_NATIVE(CPU_ON); 234 + psci_ops.cpu_on = psci_cpu_on; 235 + 236 + psci_function_id[PSCI_FN_MIGRATE] = PSCI_0_2_FN_NATIVE(MIGRATE); 237 + psci_ops.migrate = psci_migrate; 238 + 239 + psci_ops.affinity_info = psci_affinity_info; 240 + 241 + psci_ops.migrate_info_type = psci_migrate_info_type; 242 + 243 + arm_pm_restart = psci_sys_reset; 244 + 245 + pm_power_off = psci_sys_poweroff; 246 + } 247 + 248 + /* 249 + * Probe function for PSCI firmware versions >= 0.2 250 + */ 251 + static int __init psci_probe(void) 252 + { 253 + u32 ver = psci_get_version(); 254 + 255 + pr_info("PSCIv%d.%d detected in 
firmware.\n", 256 + PSCI_VERSION_MAJOR(ver), 257 + PSCI_VERSION_MINOR(ver)); 258 + 259 + if (PSCI_VERSION_MAJOR(ver) == 0 && PSCI_VERSION_MINOR(ver) < 2) { 260 + pr_err("Conflicting PSCI version detected.\n"); 261 + return -EINVAL; 262 + } 263 + 264 + psci_0_2_set_functions(); 265 + 266 + psci_init_migrate(); 267 + 268 + return 0; 269 + } 270 + 271 + typedef int (*psci_initcall_t)(const struct device_node *); 272 + 273 + /* 274 + * PSCI init function for PSCI versions >=0.2 275 + * 276 + * Probe based on PSCI PSCI_VERSION function 277 + */ 278 + static int __init psci_0_2_init(struct device_node *np) 279 + { 280 + int err; 281 + 282 + err = get_set_conduit_method(np); 283 + 284 + if (err) 285 + goto out_put_node; 286 + /* 287 + * Starting with v0.2, the PSCI specification introduced a call 288 + * (PSCI_VERSION) that allows probing the firmware version, so 289 + * that PSCI function IDs and version specific initialization 290 + * can be carried out according to the specific version reported 291 + * by firmware 292 + */ 293 + err = psci_probe(); 294 + 295 + out_put_node: 296 + of_node_put(np); 297 + return err; 298 + } 299 + 300 + /* 301 + * PSCI < v0.2 get PSCI Function IDs via DT. 
302 + */ 303 + static int __init psci_0_1_init(struct device_node *np) 304 + { 305 + u32 id; 306 + int err; 307 + 308 + err = get_set_conduit_method(np); 309 + 310 + if (err) 311 + goto out_put_node; 312 + 313 + pr_info("Using PSCI v0.1 Function IDs from DT\n"); 314 + 315 + if (!of_property_read_u32(np, "cpu_suspend", &id)) { 316 + psci_function_id[PSCI_FN_CPU_SUSPEND] = id; 317 + psci_ops.cpu_suspend = psci_cpu_suspend; 318 + } 319 + 320 + if (!of_property_read_u32(np, "cpu_off", &id)) { 321 + psci_function_id[PSCI_FN_CPU_OFF] = id; 322 + psci_ops.cpu_off = psci_cpu_off; 323 + } 324 + 325 + if (!of_property_read_u32(np, "cpu_on", &id)) { 326 + psci_function_id[PSCI_FN_CPU_ON] = id; 327 + psci_ops.cpu_on = psci_cpu_on; 328 + } 329 + 330 + if (!of_property_read_u32(np, "migrate", &id)) { 331 + psci_function_id[PSCI_FN_MIGRATE] = id; 332 + psci_ops.migrate = psci_migrate; 333 + } 334 + 335 + out_put_node: 336 + of_node_put(np); 337 + return err; 338 + } 339 + 340 + static const struct of_device_id const psci_of_match[] __initconst = { 341 + { .compatible = "arm,psci", .data = psci_0_1_init}, 342 + { .compatible = "arm,psci-0.2", .data = psci_0_2_init}, 343 + {}, 344 + }; 345 + 346 + int __init psci_dt_init(void) 347 + { 348 + struct device_node *np; 349 + const struct of_device_id *matched_np; 350 + psci_initcall_t init_fn; 351 + 352 + np = of_find_matching_node_and_match(NULL, psci_of_match, &matched_np); 353 + 354 + if (!np) 355 + return -ENODEV; 356 + 357 + init_fn = (psci_initcall_t)matched_np->data; 358 + return init_fn(np); 359 + } 360 + 361 + #ifdef CONFIG_ACPI 362 + /* 363 + * We use PSCI 0.2+ when ACPI is deployed on ARM64 and it's 364 + * explicitly clarified in SBBR 365 + */ 366 + int __init psci_acpi_init(void) 367 + { 368 + if (!acpi_psci_present()) { 369 + pr_info("is not implemented in ACPI.\n"); 370 + return -EOPNOTSUPP; 371 + } 372 + 373 + pr_info("probing for conduit method from ACPI.\n"); 374 + 375 + if (acpi_psci_use_hvc()) 376 + invoke_psci_fn = 
__invoke_psci_fn_hvc; 377 + else 378 + invoke_psci_fn = __invoke_psci_fn_smc; 379 + 380 + return psci_probe(); 381 + } 382 + #endif
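The consolidated driver keeps the firmware-to-Linux error translation unchanged: PSCI's negative return codes are mapped onto errno values, with everything unrecognized collapsing to `-EINVAL`. A standalone sketch of `psci_to_linux_errno()`; the `PSCI_RET_*` values are the ones I believe `uapi/linux/psci.h` assigns:

```c
#include <assert.h>
#include <errno.h>

/* PSCI v0.2 return codes (per uapi/linux/psci.h, to my knowledge). */
#define PSCI_RET_SUCCESS          0
#define PSCI_RET_NOT_SUPPORTED   -1
#define PSCI_RET_INVALID_PARAMS  -2
#define PSCI_RET_DENIED          -3

/* Same structure as the driver's translation function: known codes
 * map one-to-one, anything else becomes -EINVAL. */
static int psci_to_linux_errno(int errno_code)
{
	switch (errno_code) {
	case PSCI_RET_SUCCESS:        return 0;
	case PSCI_RET_NOT_SUPPORTED:  return -EOPNOTSUPP;
	case PSCI_RET_INVALID_PARAMS: return -EINVAL;
	case PSCI_RET_DENIED:         return -EPERM;
	}
	return -EINVAL;
}
```

Note the deliberate information loss: callers such as `cpu_psci_cpu_kill()` only ever see errno values, never raw PSCI codes, so the conduit details stay inside the driver.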
+1 -3
drivers/firmware/qcom_scm-32.c
··· 24 24 #include <linux/err.h> 25 25 #include <linux/qcom_scm.h> 26 26 27 - #include <asm/outercache.h> 28 27 #include <asm/cacheflush.h> 29 28 30 29 #include "qcom_scm.h" ··· 218 219 * Flush the command buffer so that the secure world sees 219 220 * the correct data. 220 221 */ 221 - __cpuc_flush_dcache_area((void *)cmd, cmd->len); 222 - outer_flush_range(cmd_addr, cmd_addr + cmd->len); 222 + secure_flush_area(cmd, cmd->len); 223 223 224 224 ret = smc(cmd_addr); 225 225 if (ret < 0)
+15
drivers/perf/Kconfig
··· 1 + # 2 + # Performance Monitor Drivers 3 + # 4 + 5 + menu "Performance monitor support" 6 + 7 + config ARM_PMU 8 + depends on PERF_EVENTS && ARM 9 + bool "ARM PMU framework" 10 + default y 11 + help 12 + Say y if you want to use CPU performance monitors on ARM-based 13 + systems. 14 + 15 + endmenu
+1
drivers/perf/Makefile
··· 1 + obj-$(CONFIG_ARM_PMU) += arm_pmu.o
+52
include/linux/psci.h
··· 1 + /* 2 + * This program is free software; you can redistribute it and/or modify 3 + * it under the terms of the GNU General Public License version 2 as 4 + * published by the Free Software Foundation. 5 + * 6 + * This program is distributed in the hope that it will be useful, 7 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 8 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 9 + * GNU General Public License for more details. 10 + * 11 + * Copyright (C) 2015 ARM Limited 12 + */ 13 + 14 + #ifndef __LINUX_PSCI_H 15 + #define __LINUX_PSCI_H 16 + 17 + #include <linux/init.h> 18 + #include <linux/types.h> 19 + 20 + #define PSCI_POWER_STATE_TYPE_STANDBY 0 21 + #define PSCI_POWER_STATE_TYPE_POWER_DOWN 1 22 + 23 + bool psci_tos_resident_on(int cpu); 24 + 25 + struct psci_operations { 26 + int (*cpu_suspend)(u32 state, unsigned long entry_point); 27 + int (*cpu_off)(u32 state); 28 + int (*cpu_on)(unsigned long cpuid, unsigned long entry_point); 29 + int (*migrate)(unsigned long cpuid); 30 + int (*affinity_info)(unsigned long target_affinity, 31 + unsigned long lowest_affinity_level); 32 + int (*migrate_info_type)(void); 33 + }; 34 + 35 + extern struct psci_operations psci_ops; 36 + 37 + #if defined(CONFIG_ARM_PSCI_FW) 38 + int __init psci_dt_init(void); 39 + #else 40 + static inline int psci_dt_init(void) { return 0; } 41 + #endif 42 + 43 + #if defined(CONFIG_ARM_PSCI_FW) && defined(CONFIG_ACPI) 44 + int __init psci_acpi_init(void); 45 + bool __init acpi_psci_present(void); 46 + bool __init acpi_psci_use_hvc(void); 47 + #else 48 + static inline int psci_acpi_init(void) { return 0; } 49 + static inline bool acpi_psci_present(void) { return false; } 50 + #endif 51 + 52 + #endif /* __LINUX_PSCI_H */
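With `psci_ops` now exported from `linux/psci.h`, arch code dispatches through the table and must treat a NULL hook as "no PSCI firmware present" (the hooks are only populated by `psci_dt_init()`/`psci_acpi_init()`). A trimmed-down mock of that calling convention; `boot_secondary()` and `mock_cpu_on()` are illustrative stand-ins, not kernel functions:

```c
#include <assert.h>
#include <stddef.h>

/* Cut-down copy of the ops table declared in linux/psci.h. */
struct psci_operations {
	int (*cpu_on)(unsigned long cpuid, unsigned long entry_point);
};

static struct psci_operations psci_ops; /* hooks start out NULL */

/* Pretend firmware backend: only CPU 1 exists in this mock. */
static int mock_cpu_on(unsigned long cpuid, unsigned long entry_point)
{
	(void)entry_point;
	return cpuid == 1 ? 0 : -22; /* -EINVAL for unknown CPUs */
}

/* Caller-side pattern: probe the hook before using it, the way the
 * cpu_psci ops check for missing firmware support. */
static int boot_secondary(unsigned long cpuid)
{
	if (!psci_ops.cpu_on)
		return -95; /* -EOPNOTSUPP: no PSCI firmware found */
	return psci_ops.cpu_on(cpuid, 0);
}
```

Filling the table at init time and NULL-checking at call sites is what lets the same SMP code build whether or not PSCI firmware is discovered.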