Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'for-linus' of git://ftp.arm.linux.org.uk/~rmk/linux-arm

Pull ARM updates from Russell King:
"Included in this update are both some long term fixes and some new
features.

Fixes:

- An integer overflow in the calculation of ELF_ET_DYN_BASE.

- Avoiding OOMs for high-order IOMMU allocations

- SMP requires the data cache to be enabled for synchronisation
primitives to work, so prevent the CPU_DCACHE_DISABLE option from
being visible on SMP builds.

- A bug going back 10+ years in the noMMU ARM94* CPU support code,
which corrupts registers. Found by folk getting Linux running on
their cameras.

- Versatile Express needs an errata workaround enabled for CPU
hot-unplug to work.

Features:

- Clean up the module linker by handling out-of-range relocations
separately from relocation cases we don't handle.

- Fix a long term bug in the pci_mmap_page_range() code, which we
hope won't impact userspace (we hope there are no users of the
existing broken interface).

- Don't map DMA coherent allocations when we don't have an MMU.

- Drop experimental status for SMP_ON_UP.

- Warn when DT doesn't specify ePAPR mandatory cache properties.

- Add documentation concerning how we find the start of physical
memory for AUTO_ZRELADDR kernels, detailing why we have chosen the
mask and the implications of changing it.

- Updates from Ard Biesheuvel to address some issues with large
kernels (such as allyesconfig) failing to link.

- Allow hibernation to work on modern (ARMv7) CPUs - this appears to
have never worked in the past on these CPUs.

- Enable IRQ_SHOW_LEVEL, which changes the /proc/interrupts output
format (hopefully without userspace breaking... let's hope that if
it causes someone a problem, they tell us.)

- Fix tegra-ahb DT offsets.

- Rework the ARM errata 643719 workaround (and the ARMv7
flush_cache_louis()/flush_dcache_all() code) to be more efficient,
and enable this errata workaround by default for ARMv7+SMP CPUs.
This complements the Versatile Express fix above.

- Rework the ARMv7 context code for errata 430973, so that only
Cortex-A8 CPUs are impacted by the branch target buffer flush when
this workaround is enabled. Also update the help text to indicate
that all r1p* A8 CPUs are impacted.

- Switch ARM to the generic show_mem() implementation; it conveys all
the information which we were already reporting.

- Prevent slow timer sources being used for udelay() - timers running
at less than 1MHz are not useful for this, and can cause udelay()
to return immediately, without any wait. Using such a slow timer
is silly.

- VDSO support for 32-bit ARM, mainly for gettimeofday() using the
ARM architected timer.

- Perf support for Scorpion performance monitoring units"

vdso semantic conflict fixed up as per linux-next.

* 'for-linus' of git://ftp.arm.linux.org.uk/~rmk/linux-arm: (52 commits)
ARM: update errata 430973 documentation to cover Cortex A8 r1p*
ARM: ensure delay timer has sufficient accuracy for delays
ARM: switch to use the generic show_mem() implementation
ARM: proc-v7: avoid errata 430973 workaround for non-Cortex A8 CPUs
ARM: enable ARM errata 643719 workaround by default
ARM: cache-v7: optimise test for Cortex A9 r0pX devices
ARM: cache-v7: optimise branches in v7_flush_cache_louis
ARM: cache-v7: consolidate initialisation of cache level index
ARM: cache-v7: shift CLIDR to extract appropriate field before masking
ARM: cache-v7: use movw/movt instructions
ARM: allow 16-bit instructions in ALT_UP()
ARM: proc-arm94*.S: fix setup function
ARM: vexpress: fix CPU hotplug with CT9x4 tile.
ARM: 8276/1: Make CPU_DCACHE_DISABLE depend on !SMP
ARM: 8335/1: Documentation: DT bindings: Tegra AHB: document the legacy base address
ARM: 8334/1: amba: tegra-ahb: detect and correct bogus base address
ARM: 8333/1: amba: tegra-ahb: fix register offsets in the macros
ARM: 8339/1: Enable CONFIG_GENERIC_IRQ_SHOW_LEVEL
ARM: 8338/1: kexec: Relax SMP validation to improve DT compatibility
ARM: 8337/1: mm: Do not invoke OOM for higher order IOMMU DMA allocations
...

+2429 -604
+2
Documentation/devicetree/bindings/arm/pmu.txt
···
18 18   "arm,arm11mpcore-pmu"
19 19   "arm,arm1176-pmu"
20 20   "arm,arm1136-pmu"
21    +  "qcom,scorpion-pmu"
22    +  "qcom,scorpion-mp-pmu"
21 23   "qcom,krait-pmu"
22 24  - interrupts : 1 combined interrupt or 1 per core. If the interrupt is a per-cpu
23 25    interrupt (PPI) then 1 interrupt should be specified.
+5 -2
Documentation/devicetree/bindings/arm/tegra/nvidia,tegra20-ahb.txt
···
 5  5   Tegra30, must contain "nvidia,tegra30-ahb".  Otherwise, must contain
 6  6   '"nvidia,<chip>-ahb", "nvidia,tegra30-ahb"' where <chip> is tegra124,
 7  7   tegra132, or tegra210.
 8     - - reg : Should contain 1 register ranges(address and length)
 8     + - reg : Should contain 1 register ranges(address and length). For
 9     +   Tegra20, Tegra30, and Tegra114 chips, the value must be <0x6000c004
10     +   0x10c>. For Tegra124, Tegra132 and Tegra210 chips, the value should
11     +   be be <0x6000c000 0x150>.
 9 12
10     - Example:
13     + Example (for a Tegra20 chip):
11 14   ahb: ahb@6000c004 {
12 15     compatible = "nvidia,tegra20-ahb";
13 16     reg = <0x6000c004 0x10c>; /* AHB Arbitration + Gizmo Controller */
+4 -2
arch/arm/Kconfig
··· 21 21 select GENERIC_IDLE_POLL_SETUP 22 22 select GENERIC_IRQ_PROBE 23 23 select GENERIC_IRQ_SHOW 24 + select GENERIC_IRQ_SHOW_LEVEL 24 25 select GENERIC_PCI_IOMAP 25 26 select GENERIC_SCHED_CLOCK 26 27 select GENERIC_SMP_IDLE_THREAD ··· 1064 1063 depends on CPU_V7 1065 1064 help 1066 1065 This option enables the workaround for the 430973 Cortex-A8 1067 - (r1p0..r1p2) erratum. If a code sequence containing an ARM/Thumb 1066 + r1p* erratum. If a code sequence containing an ARM/Thumb 1068 1067 interworking branch is replaced with another code sequence at the 1069 1068 same virtual address, whether due to self-modifying code or virtual 1070 1069 to physical address re-mapping, Cortex-A8 does not recover from the ··· 1133 1132 config ARM_ERRATA_643719 1134 1133 bool "ARM errata: LoUIS bit field in CLIDR register is incorrect" 1135 1134 depends on CPU_V7 && SMP 1135 + default y 1136 1136 help 1137 1137 This option enables the workaround for the 643719 Cortex-A9 (prior to 1138 1138 r1p0) erratum. On affected cores the LoUIS bit field of the CLIDR ··· 1351 1349 If you don't know what to do here, say N. 1352 1350 1353 1351 config SMP_ON_UP 1354 - bool "Allow booting SMP kernel on uniprocessor systems (EXPERIMENTAL)" 1352 + bool "Allow booting SMP kernel on uniprocessor systems" 1355 1353 depends on SMP && !XIP_KERNEL && MMU 1356 1354 default y 1357 1355 help
+9 -1
arch/arm/Makefile
··· 13 13 # Ensure linker flags are correct 14 14 LDFLAGS := 15 15 16 - LDFLAGS_vmlinux :=-p --no-undefined -X 16 + LDFLAGS_vmlinux :=-p --no-undefined -X --pic-veneer 17 17 ifeq ($(CONFIG_CPU_ENDIAN_BE8),y) 18 18 LDFLAGS_vmlinux += --be8 19 19 LDFLAGS_MODULE += --be8 ··· 264 264 core-$(CONFIG_VFP) += arch/arm/vfp/ 265 265 core-$(CONFIG_XEN) += arch/arm/xen/ 266 266 core-$(CONFIG_KVM_ARM_HOST) += arch/arm/kvm/ 267 + core-$(CONFIG_VDSO) += arch/arm/vdso/ 267 268 268 269 # If we have a machine-specific directory, then include it in the build. 269 270 core-y += arch/arm/kernel/ arch/arm/mm/ arch/arm/common/ ··· 322 321 dtbs_install: 323 322 $(Q)$(MAKE) $(dtbinst)=$(boot)/dts 324 323 324 + PHONY += vdso_install 325 + vdso_install: 326 + ifeq ($(CONFIG_VDSO),y) 327 + $(Q)$(MAKE) $(build)=arch/arm/vdso $@ 328 + endif 329 + 325 330 # We use MRPROPER_FILES and CLEAN_FILES now 326 331 archclean: 327 332 $(Q)$(MAKE) $(clean)=$(boot) ··· 352 345 echo ' Install using (your) ~/bin/$(INSTALLKERNEL) or' 353 346 echo ' (distribution) /sbin/$(INSTALLKERNEL) or' 354 347 echo ' install to $$(INSTALL_PATH) and run lilo' 348 + echo ' vdso_install - Install unstripped vdso.so to $$(INSTALL_MOD_PATH)/vdso' 355 349 endef
+45 -7
arch/arm/boot/compressed/head.S
··· 10 10 */ 11 11 #include <linux/linkage.h> 12 12 #include <asm/assembler.h> 13 + #include <asm/v7m.h> 13 14 14 - .arch armv7-a 15 + AR_CLASS( .arch armv7-a ) 16 + M_CLASS( .arch armv7-m ) 17 + 15 18 /* 16 19 * Debugging stuff 17 20 * ··· 117 114 * sort out different calling conventions 118 115 */ 119 116 .align 120 - .arm @ Always enter in ARM state 117 + /* 118 + * Always enter in ARM state for CPUs that support the ARM ISA. 119 + * As of today (2014) that's exactly the members of the A and R 120 + * classes. 121 + */ 122 + AR_CLASS( .arm ) 121 123 start: 122 124 .type start,#function 123 125 .rept 7 ··· 140 132 141 133 THUMB( .thumb ) 142 134 1: 143 - ARM_BE8( setend be ) @ go BE8 if compiled for BE8 144 - mrs r9, cpsr 135 + ARM_BE8( setend be ) @ go BE8 if compiled for BE8 136 + AR_CLASS( mrs r9, cpsr ) 145 137 #ifdef CONFIG_ARM_VIRT_EXT 146 138 bl __hyp_stub_install @ get into SVC mode, reversibly 147 139 #endif 148 140 mov r7, r1 @ save architecture ID 149 141 mov r8, r2 @ save atags pointer 150 142 143 + #ifndef CONFIG_CPU_V7M 151 144 /* 152 145 * Booting from Angel - need to enter SVC mode and disable 153 146 * FIQs/IRQs (numeric definitions from angel arm.h source). ··· 164 155 safe_svcmode_maskall r0 165 156 msr spsr_cxsf, r9 @ Save the CPU boot mode in 166 157 @ SPSR 158 + #endif 167 159 /* 168 160 * Note that some cache flushing and other stuff may 169 161 * be needed here - is there an Angel SWI call for this? ··· 178 168 .text 179 169 180 170 #ifdef CONFIG_AUTO_ZRELADDR 181 - @ determine final kernel image address 171 + /* 172 + * Find the start of physical memory. As we are executing 173 + * without the MMU on, we are in the physical address space. 174 + * We just need to get rid of any offset by aligning the 175 + * address. 
176 + * 177 + * This alignment is a balance between the requirements of 178 + * different platforms - we have chosen 128MB to allow 179 + * platforms which align the start of their physical memory 180 + * to 128MB to use this feature, while allowing the zImage 181 + * to be placed within the first 128MB of memory on other 182 + * platforms. Increasing the alignment means we place 183 + * stricter alignment requirements on the start of physical 184 + * memory, but relaxing it means that we break people who 185 + * are already placing their zImage in (eg) the top 64MB 186 + * of this range. 187 + */ 182 188 mov r4, pc 183 189 and r4, r4, #0xf8000000 190 + /* Determine final kernel image address. */ 184 191 add r4, r4, #TEXT_OFFSET 185 192 #else 186 193 ldr r4, =zreladdr ··· 837 810 call_cache_fn: adr r12, proc_types 838 811 #ifdef CONFIG_CPU_CP15 839 812 mrc p15, 0, r9, c0, c0 @ get processor ID 813 + #elif defined(CONFIG_CPU_V7M) 814 + /* 815 + * On v7-M the processor id is located in the V7M_SCB_CPUID 816 + * register, but as cache handling is IMPLEMENTATION DEFINED on 817 + * v7-M (if existant at all) we just return early here. 818 + * If V7M_SCB_CPUID were used the cpu ID functions (i.e. 819 + * __armv7_mmu_cache_{on,off,flush}) would be selected which 820 + * use cp15 registers that are not implemented on v7-M. 821 + */ 822 + bx lr 840 823 #else 841 824 ldr r9, =CONFIG_PROCESSOR_ID 842 825 #endif ··· 1347 1310 1348 1311 __enter_kernel: 1349 1312 mov r0, #0 @ must be 0 1350 - ARM( mov pc, r4 ) @ call kernel 1351 - THUMB( bx r4 ) @ entry point is always ARM 1313 + ARM( mov pc, r4 ) @ call kernel 1314 + M_CLASS( add r4, r4, #1 ) @ enter in Thumb mode for M class 1315 + THUMB( bx r4 ) @ entry point is always ARM for A/R classes 1352 1316 1353 1317 reloc_code_end: 1354 1318
-1
arch/arm/include/asm/Kbuild
···
1 1
2 2
3   - generic-y += auxvec.h
4 3   generic-y += bitsperlong.h
5 4   generic-y += cputime.h
6 5   generic-y += current.h
+3
arch/arm/include/asm/assembler.h
···
237 237   .pushsection ".alt.smp.init", "a"			;\
238 238   .long	9998b						;\
239 239 9997:	instr						;\
240     + .if . - 9997b == 2					;\
241     + 	nop						;\
242     + .endif							;\
240 243   .if . - 9997b != 4					;\
241 244   .error "ALT_UP() content must assemble to exactly 4 bytes";\
242 245   .endif							;\
+1
arch/arm/include/asm/auxvec.h
···
1 + #include <uapi/asm/auxvec.h>
+16
arch/arm/include/asm/cputype.h
···
253 253 #else
254 254 #define cpu_is_pj4()	0
255 255 #endif
256     +
257     + static inline int __attribute_const__ cpuid_feature_extract_field(u32 features,
258     + 							   int field)
259     + {
260     + 	int feature = (features >> field) & 15;
261     +
262     + 	/* feature registers are signed values */
263     + 	if (feature > 8)
264     + 		feature -= 16;
265     +
266     + 	return feature;
267     + }
268     +
269     + #define cpuid_feature_extract(reg, field) \
270     + 	cpuid_feature_extract_field(read_cpuid_ext(reg), field)
271     +
256 272 #endif
+10 -1
arch/arm/include/asm/elf.h
···
  1   1 #ifndef __ASMARM_ELF_H
  2   2 #define __ASMARM_ELF_H
  3   3
      4 + #include <asm/auxvec.h>
  4   5 #include <asm/hwcap.h>
      6 + #include <asm/vdso_datapage.h>
  5   7
  6   8 /*
  7   9  * ELF register definitions..
···
117 115    the loader.  We need to make sure that it is out of the way of the program
118 116    that it will "exec", and that there is sufficient room for the brk.  */
119 117
120     - #define ELF_ET_DYN_BASE	(2 * TASK_SIZE / 3)
    118 + #define ELF_ET_DYN_BASE	(TASK_SIZE / 3 * 2)
121 119
122 120 /* When the program starts, a1 contains a pointer to a function to be
123 121    registered with atexit, as per the SVR4 ABI.  A value of 0 means we
···
128 126 #define SET_PERSONALITY(ex)	elf_set_personality(&(ex))
129 127
130 128 #ifdef CONFIG_MMU
    129 + #ifdef CONFIG_VDSO
    130 + #define ARCH_DLINFO \
    131 + do { \
    132 + 	NEW_AUX_ENT(AT_SYSINFO_EHDR, \
    133 + 		    (elf_addr_t)current->mm->context.vdso); \
    134 + } while (0)
    135 + #endif
131 136 #define ARCH_HAS_SETUP_ADDITIONAL_PAGES 1
132 137 struct linux_binprm;
133 138 int arch_setup_additional_pages(struct linux_binprm *, int);
+1 -1
arch/arm/include/asm/futex.h
···
13 13 "	.align	3\n"					\
14 14 "	.long	1b, 4f, 2b, 4f\n"			\
15 15 "	.popsection\n"					\
16    - "	.pushsection .fixup,\"ax\"\n"			\
16    + "	.pushsection .text.fixup,\"ax\"\n"		\
17 17 "	.align	2\n"					\
18 18 "4:	mov	%0, " err_reg "\n"			\
19 19 "	b	3b\n"					\
+3
arch/arm/include/asm/mmu.h
···
11 11 #endif
12 12 	unsigned int	vmalloc_seq;
13 13 	unsigned long	sigpage;
14    + #ifdef CONFIG_VDSO
15    + 	unsigned long	vdso;
16    + #endif
14 17 } mm_context_t;
15 18
16 19 #ifdef CONFIG_CPU_HAS_ASID
+1
arch/arm/include/asm/pmu.h
···
92 92 struct arm_pmu {
93 93 	struct pmu	pmu;
94 94 	cpumask_t	active_irqs;
95    + 	int		*irq_affinity;
95 96 	char		*name;
96 97 	irqreturn_t	(*handle_irq)(int irq_num, void *dev);
97 98 	void		(*enable)(struct perf_event *event);
+1
arch/arm/include/asm/smp_plat.h
···
104 104 	return 1 << mpidr_hash.bits;
105 105 }
106 106
107     + extern int platform_can_secondary_boot(void);
107 108 extern int platform_can_cpu_hotplug(void);
108 109
109 110 #endif
+5 -5
arch/arm/include/asm/uaccess.h
··· 315 315 __asm__ __volatile__( \ 316 316 "1: " TUSER(ldrb) " %1,[%2],#0\n" \ 317 317 "2:\n" \ 318 - " .pushsection .fixup,\"ax\"\n" \ 318 + " .pushsection .text.fixup,\"ax\"\n" \ 319 319 " .align 2\n" \ 320 320 "3: mov %0, %3\n" \ 321 321 " mov %1, #0\n" \ ··· 351 351 __asm__ __volatile__( \ 352 352 "1: " TUSER(ldr) " %1,[%2],#0\n" \ 353 353 "2:\n" \ 354 - " .pushsection .fixup,\"ax\"\n" \ 354 + " .pushsection .text.fixup,\"ax\"\n" \ 355 355 " .align 2\n" \ 356 356 "3: mov %0, %3\n" \ 357 357 " mov %1, #0\n" \ ··· 397 397 __asm__ __volatile__( \ 398 398 "1: " TUSER(strb) " %1,[%2],#0\n" \ 399 399 "2:\n" \ 400 - " .pushsection .fixup,\"ax\"\n" \ 400 + " .pushsection .text.fixup,\"ax\"\n" \ 401 401 " .align 2\n" \ 402 402 "3: mov %0, %3\n" \ 403 403 " b 2b\n" \ ··· 430 430 __asm__ __volatile__( \ 431 431 "1: " TUSER(str) " %1,[%2],#0\n" \ 432 432 "2:\n" \ 433 - " .pushsection .fixup,\"ax\"\n" \ 433 + " .pushsection .text.fixup,\"ax\"\n" \ 434 434 " .align 2\n" \ 435 435 "3: mov %0, %3\n" \ 436 436 " b 2b\n" \ ··· 458 458 THUMB( "1: " TUSER(str) " " __reg_oper1 ", [%1]\n" ) \ 459 459 THUMB( "2: " TUSER(str) " " __reg_oper0 ", [%1, #4]\n" ) \ 460 460 "3:\n" \ 461 - " .pushsection .fixup,\"ax\"\n" \ 461 + " .pushsection .text.fixup,\"ax\"\n" \ 462 462 " .align 2\n" \ 463 463 "4: mov %0, %3\n" \ 464 464 " b 3b\n" \
+8
arch/arm/include/asm/unified.h
···
24 24 	.syntax unified
25 25 #endif
26 26
27    + #ifdef CONFIG_CPU_V7M
28    + #define AR_CLASS(x...)
29    + #define M_CLASS(x...)	x
30    + #else
31    + #define AR_CLASS(x...)	x
32    + #define M_CLASS(x...)
33    + #endif
34    +
27 35 #ifdef CONFIG_THUMB2_KERNEL
28 36
29 37 #if __GNUC__ < 4
+32
arch/arm/include/asm/vdso.h
···
 1 + #ifndef __ASM_VDSO_H
 2 + #define __ASM_VDSO_H
 3 +
 4 + #ifdef __KERNEL__
 5 +
 6 + #ifndef __ASSEMBLY__
 7 +
 8 + struct mm_struct;
 9 +
10 + #ifdef CONFIG_VDSO
11 +
12 + void arm_install_vdso(struct mm_struct *mm, unsigned long addr);
13 +
14 + extern char vdso_start, vdso_end;
15 +
16 + extern unsigned int vdso_total_pages;
17 +
18 + #else /* CONFIG_VDSO */
19 +
20 + static inline void arm_install_vdso(struct mm_struct *mm, unsigned long addr)
21 + {
22 + }
23 +
24 + #define vdso_total_pages 0
25 +
26 + #endif /* CONFIG_VDSO */
27 +
28 + #endif /* __ASSEMBLY__ */
29 +
30 + #endif /* __KERNEL__ */
31 +
32 + #endif /* __ASM_VDSO_H */
+60
arch/arm/include/asm/vdso_datapage.h
···
 1 + /*
 2 +  * Adapted from arm64 version.
 3 +  *
 4 +  * Copyright (C) 2012 ARM Limited
 5 +  *
 6 +  * This program is free software; you can redistribute it and/or modify
 7 +  * it under the terms of the GNU General Public License version 2 as
 8 +  * published by the Free Software Foundation.
 9 +  *
10 +  * This program is distributed in the hope that it will be useful,
11 +  * but WITHOUT ANY WARRANTY; without even the implied warranty of
12 +  * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
13 +  * GNU General Public License for more details.
14 +  *
15 +  * You should have received a copy of the GNU General Public License
16 +  * along with this program.  If not, see <http://www.gnu.org/licenses/>.
17 +  */
18 + #ifndef __ASM_VDSO_DATAPAGE_H
19 + #define __ASM_VDSO_DATAPAGE_H
20 +
21 + #ifdef __KERNEL__
22 +
23 + #ifndef __ASSEMBLY__
24 +
25 + #include <asm/page.h>
26 +
27 + /* Try to be cache-friendly on systems that don't implement the
28 +  * generic timer: fit the unconditionally updated fields in the first
29 +  * 32 bytes.
30 +  */
31 + struct vdso_data {
32 + 	u32 seq_count;		/* sequence count - odd during updates */
33 + 	u16 tk_is_cntvct;	/* fall back to syscall if false */
34 + 	u16 cs_shift;		/* clocksource shift */
35 + 	u32 xtime_coarse_sec;	/* coarse time */
36 + 	u32 xtime_coarse_nsec;
37 +
38 + 	u32 wtm_clock_sec;	/* wall to monotonic offset */
39 + 	u32 wtm_clock_nsec;
40 + 	u32 xtime_clock_sec;	/* CLOCK_REALTIME - seconds */
41 + 	u32 cs_mult;		/* clocksource multiplier */
42 +
43 + 	u64 cs_cycle_last;	/* last cycle value */
44 + 	u64 cs_mask;		/* clocksource mask */
45 +
46 + 	u64 xtime_clock_snsec;	/* CLOCK_REALTIME sub-ns base */
47 + 	u32 tz_minuteswest;	/* timezone info for gettimeofday(2) */
48 + 	u32 tz_dsttime;
49 + };
50 +
51 + union vdso_data_store {
52 + 	struct vdso_data data;
53 + 	u8 page[PAGE_SIZE];
54 + };
55 +
56 + #endif /* !__ASSEMBLY__ */
57 +
58 + #endif /* __KERNEL__ */
59 +
60 + #endif /* __ASM_VDSO_DATAPAGE_H */
+1 -1
arch/arm/include/asm/word-at-a-time.h
···
71 71 	asm(
72 72 	"1:	ldr	%0, [%2]\n"
73 73 	"2:\n"
74    - 	"	.pushsection .fixup,\"ax\"\n"
74    + 	"	.pushsection .text.fixup,\"ax\"\n"
75 75 	"	.align 2\n"
76 76 	"3:	and	%1, %2, #0x3\n"
77 77 	"	bic	%2, %2, #0x3\n"
+1
arch/arm/include/uapi/asm/Kbuild
···
1 1 # UAPI Header export list
2 2 include include/uapi/asm-generic/Kbuild.asm
3 3
4   + header-y += auxvec.h
4 5 header-y += byteorder.h
5 6 header-y += fcntl.h
6 7 header-y += hwcap.h
+7
arch/arm/include/uapi/asm/auxvec.h
···
1 + #ifndef __ASM_AUXVEC_H
2 + #define __ASM_AUXVEC_H
3 +
4 + /* VDSO location */
5 + #define AT_SYSINFO_EHDR 33
6 +
7 + #endif
+3 -2
arch/arm/kernel/Makefile
··· 16 16 # Object file lists. 17 17 18 18 obj-y := elf.o entry-common.o irq.o opcodes.o \ 19 - process.o ptrace.o return_address.o \ 19 + process.o ptrace.o reboot.o return_address.o \ 20 20 setup.o signal.o sigreturn_codes.o \ 21 21 stacktrace.o sys_arm.o time.o traps.o 22 22 ··· 75 75 CFLAGS_pj4-cp0.o := -marm 76 76 AFLAGS_iwmmxt.o := -Wa,-mcpu=iwmmxt 77 77 obj-$(CONFIG_ARM_CPU_TOPOLOGY) += topology.o 78 + obj-$(CONFIG_VDSO) += vdso.o 78 79 79 80 ifneq ($(CONFIG_ARCH_EBSA110),y) 80 81 obj-y += io.o ··· 87 86 88 87 obj-$(CONFIG_ARM_VIRT_EXT) += hyp-stub.o 89 88 ifeq ($(CONFIG_ARM_PSCI),y) 90 - obj-y += psci.o 89 + obj-y += psci.o psci-call.o 91 90 obj-$(CONFIG_SMP) += psci_smp.o 92 91 endif 93 92
+5
arch/arm/kernel/asm-offsets.c
··· 25 25 #include <asm/memory.h> 26 26 #include <asm/procinfo.h> 27 27 #include <asm/suspend.h> 28 + #include <asm/vdso_datapage.h> 28 29 #include <asm/hardware/cache-l2x0.h> 29 30 #include <linux/kbuild.h> 30 31 ··· 206 205 DEFINE(KVM_TIMER_ENABLED, offsetof(struct kvm, arch.timer.enabled)); 207 206 DEFINE(KVM_VGIC_VCTRL, offsetof(struct kvm, arch.vgic.vctrl_base)); 208 207 DEFINE(KVM_VTTBR, offsetof(struct kvm, arch.vttbr)); 208 + #endif 209 + BLANK(); 210 + #ifdef CONFIG_VDSO 211 + DEFINE(VDSO_DATA_SIZE, sizeof(union vdso_data_store)); 209 212 #endif 210 213 return 0; 211 214 }
+2 -8
arch/arm/kernel/bios32.c
··· 618 618 int pci_mmap_page_range(struct pci_dev *dev, struct vm_area_struct *vma, 619 619 enum pci_mmap_state mmap_state, int write_combine) 620 620 { 621 - struct pci_sys_data *root = dev->sysdata; 622 - unsigned long phys; 623 - 624 - if (mmap_state == pci_mmap_io) { 621 + if (mmap_state == pci_mmap_io) 625 622 return -EINVAL; 626 - } else { 627 - phys = vma->vm_pgoff + (root->mem_offset >> PAGE_SHIFT); 628 - } 629 623 630 624 /* 631 625 * Mark this as IO 632 626 */ 633 627 vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot); 634 628 635 - if (remap_pfn_range(vma, vma->vm_start, phys, 629 + if (remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff, 636 630 vma->vm_end - vma->vm_start, 637 631 vma->vm_page_prot)) 638 632 return -EAGAIN;
+1 -1
arch/arm/kernel/entry-armv.S
···
545 545 /*
546 546  * The out of line fixup for the ldrt instructions above.
547 547  */
548     - 	.pushsection .fixup, "ax"
548     + 	.pushsection .text.fixup, "ax"
549 549 	.align	2
550 550 4:	str	r4, [sp, #S_PC]			@ retry current instruction
551 551 	ret	r9
+7 -7
arch/arm/kernel/head.S
··· 138 138 @ mmu has been enabled 139 139 adr lr, BSYM(1f) @ return (PIC) address 140 140 mov r8, r4 @ set TTBR1 to swapper_pg_dir 141 - ARM( add pc, r10, #PROCINFO_INITFUNC ) 142 - THUMB( add r12, r10, #PROCINFO_INITFUNC ) 143 - THUMB( ret r12 ) 141 + ldr r12, [r10, #PROCINFO_INITFUNC] 142 + add r12, r12, r10 143 + ret r12 144 144 1: b __enable_mmu 145 145 ENDPROC(stext) 146 146 .ltorg ··· 386 386 ldr r8, [r7, lr] @ get secondary_data.swapper_pg_dir 387 387 adr lr, BSYM(__enable_mmu) @ return address 388 388 mov r13, r12 @ __secondary_switched address 389 - ARM( add pc, r10, #PROCINFO_INITFUNC ) @ initialise processor 390 - @ (return control reg) 391 - THUMB( add r12, r10, #PROCINFO_INITFUNC ) 392 - THUMB( ret r12 ) 389 + ldr r12, [r10, #PROCINFO_INITFUNC] 390 + add r12, r12, r10 @ initialise processor 391 + @ (return control reg) 392 + ret r12 393 393 ENDPROC(secondary_startup) 394 394 ENDPROC(secondary_startup_arm) 395 395
+3 -3
arch/arm/kernel/hibernate.c
··· 22 22 #include <asm/suspend.h> 23 23 #include <asm/memory.h> 24 24 #include <asm/sections.h> 25 + #include "reboot.h" 25 26 26 27 int pfn_is_nosave(unsigned long pfn) 27 28 { ··· 62 61 63 62 ret = swsusp_save(); 64 63 if (ret == 0) 65 - soft_restart(virt_to_phys(cpu_resume)); 64 + _soft_restart(virt_to_phys(cpu_resume), false); 66 65 return ret; 67 66 } 68 67 ··· 87 86 for (pbe = restore_pblist; pbe; pbe = pbe->next) 88 87 copy_page(pbe->orig_address, pbe->address); 89 88 90 - soft_restart(virt_to_phys(cpu_resume)); 89 + _soft_restart(virt_to_phys(cpu_resume), false); 91 90 } 92 91 93 92 static u64 resume_stack[PAGE_SIZE/2/sizeof(u64)] __nosavedata; ··· 100 99 */ 101 100 int swsusp_arch_resume(void) 102 101 { 103 - extern void call_with_stack(void (*fn)(void *), void *arg, void *sp); 104 102 call_with_stack(arch_restore_image, 0, 105 103 resume_stack + ARRAY_SIZE(resume_stack)); 106 104 return 0;
+2 -1
arch/arm/kernel/machine_kexec.c
···
46 46 	 * and implements CPU hotplug for the current HW. If not, we won't be
47 47 	 * able to kexec reliably, so fail the prepare operation.
48 48 	 */
49    - 	if (num_possible_cpus() > 1 && !platform_can_cpu_hotplug())
49    + 	if (num_possible_cpus() > 1 && platform_can_secondary_boot() &&
50    + 	    !platform_can_cpu_hotplug())
50 51 		return -EINVAL;
51 52
52 53 	/*
+24 -14
arch/arm/kernel/module.c
··· 98 98 case R_ARM_PC24: 99 99 case R_ARM_CALL: 100 100 case R_ARM_JUMP24: 101 + if (sym->st_value & 3) { 102 + pr_err("%s: section %u reloc %u sym '%s': unsupported interworking call (ARM -> Thumb)\n", 103 + module->name, relindex, i, symname); 104 + return -ENOEXEC; 105 + } 106 + 101 107 offset = __mem_to_opcode_arm(*(u32 *)loc); 102 108 offset = (offset & 0x00ffffff) << 2; 103 109 if (offset & 0x02000000) 104 110 offset -= 0x04000000; 105 111 106 112 offset += sym->st_value - loc; 107 - if (offset & 3 || 108 - offset <= (s32)0xfe000000 || 113 + if (offset <= (s32)0xfe000000 || 109 114 offset >= (s32)0x02000000) { 110 115 pr_err("%s: section %u reloc %u sym '%s': relocation %u out of range (%#lx -> %#x)\n", 111 116 module->name, relindex, i, symname, ··· 160 155 #ifdef CONFIG_THUMB2_KERNEL 161 156 case R_ARM_THM_CALL: 162 157 case R_ARM_THM_JUMP24: 158 + /* 159 + * For function symbols, only Thumb addresses are 160 + * allowed (no interworking). 161 + * 162 + * For non-function symbols, the destination 163 + * has no specific ARM/Thumb disposition, so 164 + * the branch is resolved under the assumption 165 + * that interworking is not required. 166 + */ 167 + if (ELF32_ST_TYPE(sym->st_info) == STT_FUNC && 168 + !(sym->st_value & 1)) { 169 + pr_err("%s: section %u reloc %u sym '%s': unsupported interworking call (Thumb -> ARM)\n", 170 + module->name, relindex, i, symname); 171 + return -ENOEXEC; 172 + } 173 + 163 174 upper = __mem_to_opcode_thumb16(*(u16 *)loc); 164 175 lower = __mem_to_opcode_thumb16(*(u16 *)(loc + 2)); 165 176 ··· 203 182 offset -= 0x02000000; 204 183 offset += sym->st_value - loc; 205 184 206 - /* 207 - * For function symbols, only Thumb addresses are 208 - * allowed (no interworking). 209 - * 210 - * For non-function symbols, the destination 211 - * has no specific ARM/Thumb disposition, so 212 - * the branch is resolved under the assumption 213 - * that interworking is not required. 
214 - */ 215 - if ((ELF32_ST_TYPE(sym->st_info) == STT_FUNC && 216 - !(offset & 1)) || 217 - offset <= (s32)0xff000000 || 185 + if (offset <= (s32)0xff000000 || 218 186 offset >= (s32)0x01000000) { 219 187 pr_err("%s: section %u reloc %u sym '%s': relocation %u out of range (%#lx -> %#x)\n", 220 188 module->name, relindex, i, symname,
+15 -6
arch/arm/kernel/perf_event.c
··· 259 259 } 260 260 261 261 static int 262 - validate_event(struct pmu_hw_events *hw_events, 263 - struct perf_event *event) 262 + validate_event(struct pmu *pmu, struct pmu_hw_events *hw_events, 263 + struct perf_event *event) 264 264 { 265 - struct arm_pmu *armpmu = to_arm_pmu(event->pmu); 265 + struct arm_pmu *armpmu; 266 266 267 267 if (is_software_event(event)) 268 268 return 1; 269 + 270 + /* 271 + * Reject groups spanning multiple HW PMUs (e.g. CPU + CCI). The 272 + * core perf code won't check that the pmu->ctx == leader->ctx 273 + * until after pmu->event_init(event). 274 + */ 275 + if (event->pmu != pmu) 276 + return 0; 269 277 270 278 if (event->state < PERF_EVENT_STATE_OFF) 271 279 return 1; ··· 281 273 if (event->state == PERF_EVENT_STATE_OFF && !event->attr.enable_on_exec) 282 274 return 1; 283 275 276 + armpmu = to_arm_pmu(event->pmu); 284 277 return armpmu->get_event_idx(hw_events, event) >= 0; 285 278 } 286 279 ··· 297 288 */ 298 289 memset(&fake_pmu.used_mask, 0, sizeof(fake_pmu.used_mask)); 299 290 300 - if (!validate_event(&fake_pmu, leader)) 291 + if (!validate_event(event->pmu, &fake_pmu, leader)) 301 292 return -EINVAL; 302 293 303 294 list_for_each_entry(sibling, &leader->sibling_list, group_entry) { 304 - if (!validate_event(&fake_pmu, sibling)) 295 + if (!validate_event(event->pmu, &fake_pmu, sibling)) 305 296 return -EINVAL; 306 297 } 307 298 308 - if (!validate_event(&fake_pmu, event)) 299 + if (!validate_event(event->pmu, &fake_pmu, event)) 309 300 return -EINVAL; 310 301 311 302 return 0;
+64 -7
arch/arm/kernel/perf_event_cpu.c
··· 92 92 free_percpu_irq(irq, &hw_events->percpu_pmu); 93 93 } else { 94 94 for (i = 0; i < irqs; ++i) { 95 - if (!cpumask_test_and_clear_cpu(i, &cpu_pmu->active_irqs)) 95 + int cpu = i; 96 + 97 + if (cpu_pmu->irq_affinity) 98 + cpu = cpu_pmu->irq_affinity[i]; 99 + 100 + if (!cpumask_test_and_clear_cpu(cpu, &cpu_pmu->active_irqs)) 96 101 continue; 97 102 irq = platform_get_irq(pmu_device, i); 98 103 if (irq >= 0) 99 - free_irq(irq, per_cpu_ptr(&hw_events->percpu_pmu, i)); 104 + free_irq(irq, per_cpu_ptr(&hw_events->percpu_pmu, cpu)); 100 105 } 101 106 } 102 107 } ··· 133 128 on_each_cpu(cpu_pmu_enable_percpu_irq, &irq, 1); 134 129 } else { 135 130 for (i = 0; i < irqs; ++i) { 131 + int cpu = i; 132 + 136 133 err = 0; 137 134 irq = platform_get_irq(pmu_device, i); 138 135 if (irq < 0) 139 136 continue; 137 + 138 + if (cpu_pmu->irq_affinity) 139 + cpu = cpu_pmu->irq_affinity[i]; 140 140 141 141 /* 142 142 * If we have a single PMU interrupt that we can't shift, 143 143 * assume that we're running on a uniprocessor machine and 144 144 * continue. Otherwise, continue without this interrupt. 
145 145 */ 146 - if (irq_set_affinity(irq, cpumask_of(i)) && irqs > 1) { 146 + if (irq_set_affinity(irq, cpumask_of(cpu)) && irqs > 1) { 147 147 pr_warn("unable to set irq affinity (irq=%d, cpu=%u)\n", 148 - irq, i); 148 + irq, cpu); 149 149 continue; 150 150 } 151 151 152 152 err = request_irq(irq, handler, 153 153 IRQF_NOBALANCING | IRQF_NO_THREAD, "arm-pmu", 154 - per_cpu_ptr(&hw_events->percpu_pmu, i)); 154 + per_cpu_ptr(&hw_events->percpu_pmu, cpu)); 155 155 if (err) { 156 156 pr_err("unable to request IRQ%d for ARM PMU counters\n", 157 157 irq); 158 158 return err; 159 159 } 160 160 161 - cpumask_set_cpu(i, &cpu_pmu->active_irqs); 161 + cpumask_set_cpu(cpu, &cpu_pmu->active_irqs); 162 162 } 163 163 } 164 164 ··· 253 243 {.compatible = "arm,arm1176-pmu", .data = armv6_1176_pmu_init}, 254 244 {.compatible = "arm,arm1136-pmu", .data = armv6_1136_pmu_init}, 255 245 {.compatible = "qcom,krait-pmu", .data = krait_pmu_init}, 246 + {.compatible = "qcom,scorpion-pmu", .data = scorpion_pmu_init}, 247 + {.compatible = "qcom,scorpion-mp-pmu", .data = scorpion_mp_pmu_init}, 256 248 {}, 257 249 }; 258 250 ··· 301 289 return ret; 302 290 } 303 291 292 + static int of_pmu_irq_cfg(struct platform_device *pdev) 293 + { 294 + int i; 295 + int *irqs = kcalloc(pdev->num_resources, sizeof(*irqs), GFP_KERNEL); 296 + 297 + if (!irqs) 298 + return -ENOMEM; 299 + 300 + for (i = 0; i < pdev->num_resources; ++i) { 301 + struct device_node *dn; 302 + int cpu; 303 + 304 + dn = of_parse_phandle(pdev->dev.of_node, "interrupt-affinity", 305 + i); 306 + if (!dn) { 307 + pr_warn("Failed to parse %s/interrupt-affinity[%d]\n", 308 + of_node_full_name(dn), i); 309 + break; 310 + } 311 + 312 + for_each_possible_cpu(cpu) 313 + if (arch_find_n_match_cpu_physical_id(dn, cpu, NULL)) 314 + break; 315 + 316 + of_node_put(dn); 317 + if (cpu >= nr_cpu_ids) { 318 + pr_warn("Failed to find logical CPU for %s\n", 319 + dn->name); 320 + break; 321 + } 322 + 323 + irqs[i] = cpu; 324 + } 325 + 326 + if (i == 
pdev->num_resources) 327 + cpu_pmu->irq_affinity = irqs; 328 + else 329 + kfree(irqs); 330 + 331 + return 0; 332 + } 333 + 304 334 static int cpu_pmu_device_probe(struct platform_device *pdev) 305 335 { 306 336 const struct of_device_id *of_id; ··· 367 313 368 314 if (node && (of_id = of_match_node(cpu_pmu_of_device_ids, pdev->dev.of_node))) { 369 315 init_fn = of_id->data; 370 - ret = init_fn(pmu); 316 + 317 + ret = of_pmu_irq_cfg(pdev); 318 + if (!ret) 319 + ret = init_fn(pmu); 371 320 } else { 372 321 ret = probe_current_pmu(pmu); 373 322 }
+466 -59
arch/arm/kernel/perf_event_v7.c
··· 140 140 KRAIT_PERFCTR_L1_DTLB_ACCESS = 0x12210, 141 141 }; 142 142 143 + /* ARMv7 Scorpion specific event types */ 144 + enum scorpion_perf_types { 145 + SCORPION_LPM0_GROUP0 = 0x4c, 146 + SCORPION_LPM1_GROUP0 = 0x50, 147 + SCORPION_LPM2_GROUP0 = 0x54, 148 + SCORPION_L2LPM_GROUP0 = 0x58, 149 + SCORPION_VLPM_GROUP0 = 0x5c, 150 + 151 + SCORPION_ICACHE_ACCESS = 0x10053, 152 + SCORPION_ICACHE_MISS = 0x10052, 153 + 154 + SCORPION_DTLB_ACCESS = 0x12013, 155 + SCORPION_DTLB_MISS = 0x12012, 156 + 157 + SCORPION_ITLB_MISS = 0x12021, 158 + }; 159 + 143 160 /* 144 161 * Cortex-A8 HW events mapping 145 162 * ··· 496 479 [C(BPU)][C(OP_READ)][C(RESULT_MISS)] = ARMV7_PERFCTR_PC_BRANCH_MIS_PRED, 497 480 [C(BPU)][C(OP_WRITE)][C(RESULT_ACCESS)] = ARMV7_PERFCTR_PC_BRANCH_PRED, 498 481 [C(BPU)][C(OP_WRITE)][C(RESULT_MISS)] = ARMV7_PERFCTR_PC_BRANCH_MIS_PRED, 482 + }; 483 + 484 + /* 485 + * Scorpion HW events mapping 486 + */ 487 + static const unsigned scorpion_perf_map[PERF_COUNT_HW_MAX] = { 488 + PERF_MAP_ALL_UNSUPPORTED, 489 + [PERF_COUNT_HW_CPU_CYCLES] = ARMV7_PERFCTR_CPU_CYCLES, 490 + [PERF_COUNT_HW_INSTRUCTIONS] = ARMV7_PERFCTR_INSTR_EXECUTED, 491 + [PERF_COUNT_HW_BRANCH_INSTRUCTIONS] = ARMV7_PERFCTR_PC_WRITE, 492 + [PERF_COUNT_HW_BRANCH_MISSES] = ARMV7_PERFCTR_PC_BRANCH_MIS_PRED, 493 + [PERF_COUNT_HW_BUS_CYCLES] = ARMV7_PERFCTR_CLOCK_CYCLES, 494 + }; 495 + 496 + static const unsigned scorpion_perf_cache_map[PERF_COUNT_HW_CACHE_MAX] 497 + [PERF_COUNT_HW_CACHE_OP_MAX] 498 + [PERF_COUNT_HW_CACHE_RESULT_MAX] = { 499 + PERF_CACHE_MAP_ALL_UNSUPPORTED, 500 + /* 501 + * The performance counters don't differentiate between read and write 502 + * accesses/misses so this isn't strictly correct, but it's the best we 503 + * can do. Writes and reads get combined. 
504 + */ 505 + [C(L1D)][C(OP_READ)][C(RESULT_ACCESS)] = ARMV7_PERFCTR_L1_DCACHE_ACCESS, 506 + [C(L1D)][C(OP_READ)][C(RESULT_MISS)] = ARMV7_PERFCTR_L1_DCACHE_REFILL, 507 + [C(L1D)][C(OP_WRITE)][C(RESULT_ACCESS)] = ARMV7_PERFCTR_L1_DCACHE_ACCESS, 508 + [C(L1D)][C(OP_WRITE)][C(RESULT_MISS)] = ARMV7_PERFCTR_L1_DCACHE_REFILL, 509 + [C(L1I)][C(OP_READ)][C(RESULT_ACCESS)] = SCORPION_ICACHE_ACCESS, 510 + [C(L1I)][C(OP_READ)][C(RESULT_MISS)] = SCORPION_ICACHE_MISS, 511 + /* 512 + * Only ITLB misses and DTLB refills are supported. If users want the 513 + * DTLB refills misses a raw counter must be used. 514 + */ 515 + [C(DTLB)][C(OP_READ)][C(RESULT_ACCESS)] = SCORPION_DTLB_ACCESS, 516 + [C(DTLB)][C(OP_READ)][C(RESULT_MISS)] = SCORPION_DTLB_MISS, 517 + [C(DTLB)][C(OP_WRITE)][C(RESULT_ACCESS)] = SCORPION_DTLB_ACCESS, 518 + [C(DTLB)][C(OP_WRITE)][C(RESULT_MISS)] = SCORPION_DTLB_MISS, 519 + [C(ITLB)][C(OP_READ)][C(RESULT_MISS)] = SCORPION_ITLB_MISS, 520 + [C(ITLB)][C(OP_WRITE)][C(RESULT_MISS)] = SCORPION_ITLB_MISS, 521 + [C(BPU)][C(OP_READ)][C(RESULT_ACCESS)] = ARMV7_PERFCTR_PC_BRANCH_PRED, 522 + [C(BPU)][C(OP_READ)][C(RESULT_MISS)] = ARMV7_PERFCTR_PC_BRANCH_MIS_PRED, 523 + [C(BPU)][C(OP_WRITE)][C(RESULT_ACCESS)] = ARMV7_PERFCTR_PC_BRANCH_PRED, 524 + [C(BPU)][C(OP_WRITE)][C(RESULT_MISS)] = ARMV7_PERFCTR_PC_BRANCH_MIS_PRED, 499 525 }; 500 526 501 527 /* ··· 1036 976 &krait_perf_cache_map, 0xFFFFF); 1037 977 } 1038 978 979 + static int scorpion_map_event(struct perf_event *event) 980 + { 981 + return armpmu_map_event(event, &scorpion_perf_map, 982 + &scorpion_perf_cache_map, 0xFFFFF); 983 + } 984 + 1039 985 static void armv7pmu_init(struct arm_pmu *cpu_pmu) 1040 986 { 1041 987 cpu_pmu->handle_irq = armv7pmu_handle_irq; ··· 1169 1103 #define KRAIT_EVENT_MASK (KRAIT_EVENT | VENUM_EVENT) 1170 1104 #define PMRESRn_EN BIT(31) 1171 1105 1106 + #define EVENT_REGION(event) (((event) >> 12) & 0xf) /* R */ 1107 + #define EVENT_GROUP(event) ((event) & 0xf) /* G */ 1108 + #define 
EVENT_CODE(event) (((event) >> 4) & 0xff) /* CC */ 1109 + #define EVENT_VENUM(event) (!!(event & VENUM_EVENT)) /* N=2 */ 1110 + #define EVENT_CPU(event) (!!(event & KRAIT_EVENT)) /* N=1 */ 1111 + 1172 1112 static u32 krait_read_pmresrn(int n) 1173 1113 { 1174 1114 u32 val; ··· 1213 1141 } 1214 1142 } 1215 1143 1216 - static u32 krait_read_vpmresr0(void) 1144 + static u32 venum_read_pmresr(void) 1217 1145 { 1218 1146 u32 val; 1219 1147 asm volatile("mrc p10, 7, %0, c11, c0, 0" : "=r" (val)); 1220 1148 return val; 1221 1149 } 1222 1150 1223 - static void krait_write_vpmresr0(u32 val) 1151 + static void venum_write_pmresr(u32 val) 1224 1152 { 1225 1153 asm volatile("mcr p10, 7, %0, c11, c0, 0" : : "r" (val)); 1226 1154 } 1227 1155 1228 - static void krait_pre_vpmresr0(u32 *venum_orig_val, u32 *fp_orig_val) 1156 + static void venum_pre_pmresr(u32 *venum_orig_val, u32 *fp_orig_val) 1229 1157 { 1230 1158 u32 venum_new_val; 1231 1159 u32 fp_new_val; ··· 1242 1170 fmxr(FPEXC, fp_new_val); 1243 1171 } 1244 1172 1245 - static void krait_post_vpmresr0(u32 venum_orig_val, u32 fp_orig_val) 1173 + static void venum_post_pmresr(u32 venum_orig_val, u32 fp_orig_val) 1246 1174 { 1247 1175 BUG_ON(preemptible()); 1248 1176 /* Restore FPEXC */ ··· 1265 1193 u32 val; 1266 1194 u32 mask; 1267 1195 u32 vval, fval; 1268 - unsigned int region; 1269 - unsigned int group; 1270 - unsigned int code; 1196 + unsigned int region = EVENT_REGION(config_base); 1197 + unsigned int group = EVENT_GROUP(config_base); 1198 + unsigned int code = EVENT_CODE(config_base); 1271 1199 unsigned int group_shift; 1272 - bool venum_event; 1273 - 1274 - venum_event = !!(config_base & VENUM_EVENT); 1275 - region = (config_base >> 12) & 0xf; 1276 - code = (config_base >> 4) & 0xff; 1277 - group = (config_base >> 0) & 0xf; 1200 + bool venum_event = EVENT_VENUM(config_base); 1278 1201 1279 1202 group_shift = group * 8; 1280 1203 mask = 0xff << group_shift; ··· 1284 1217 val |= config_base & (ARMV7_EXCLUDE_USER | 
ARMV7_EXCLUDE_PL1); 1285 1218 armv7_pmnc_write_evtsel(idx, val); 1286 1219 1287 - asm volatile("mcr p15, 0, %0, c9, c15, 0" : : "r" (0)); 1288 - 1289 1220 if (venum_event) { 1290 - krait_pre_vpmresr0(&vval, &fval); 1291 - val = krait_read_vpmresr0(); 1221 + venum_pre_pmresr(&vval, &fval); 1222 + val = venum_read_pmresr(); 1292 1223 val &= ~mask; 1293 1224 val |= code << group_shift; 1294 1225 val |= PMRESRn_EN; 1295 - krait_write_vpmresr0(val); 1296 - krait_post_vpmresr0(vval, fval); 1226 + venum_write_pmresr(val); 1227 + venum_post_pmresr(vval, fval); 1297 1228 } else { 1298 1229 val = krait_read_pmresrn(region); 1299 1230 val &= ~mask; ··· 1301 1236 } 1302 1237 } 1303 1238 1304 - static u32 krait_clear_pmresrn_group(u32 val, int group) 1239 + static u32 clear_pmresrn_group(u32 val, int group) 1305 1240 { 1306 1241 u32 mask; 1307 1242 int group_shift; ··· 1321 1256 { 1322 1257 u32 val; 1323 1258 u32 vval, fval; 1324 - unsigned int region; 1325 - unsigned int group; 1326 - bool venum_event; 1327 - 1328 - venum_event = !!(config_base & VENUM_EVENT); 1329 - region = (config_base >> 12) & 0xf; 1330 - group = (config_base >> 0) & 0xf; 1259 + unsigned int region = EVENT_REGION(config_base); 1260 + unsigned int group = EVENT_GROUP(config_base); 1261 + bool venum_event = EVENT_VENUM(config_base); 1331 1262 1332 1263 if (venum_event) { 1333 - krait_pre_vpmresr0(&vval, &fval); 1334 - val = krait_read_vpmresr0(); 1335 - val = krait_clear_pmresrn_group(val, group); 1336 - krait_write_vpmresr0(val); 1337 - krait_post_vpmresr0(vval, fval); 1264 + venum_pre_pmresr(&vval, &fval); 1265 + val = venum_read_pmresr(); 1266 + val = clear_pmresrn_group(val, group); 1267 + venum_write_pmresr(val); 1268 + venum_post_pmresr(vval, fval); 1338 1269 } else { 1339 1270 val = krait_read_pmresrn(region); 1340 - val = krait_clear_pmresrn_group(val, group); 1271 + val = clear_pmresrn_group(val, group); 1341 1272 krait_write_pmresrn(region, val); 1342 1273 } 1343 1274 } ··· 1403 1342 static void 
krait_pmu_reset(void *info) 1404 1343 { 1405 1344 u32 vval, fval; 1345 + struct arm_pmu *cpu_pmu = info; 1346 + u32 idx, nb_cnt = cpu_pmu->num_events; 1406 1347 1407 1348 armv7pmu_reset(info); 1408 1349 ··· 1413 1350 krait_write_pmresrn(1, 0); 1414 1351 krait_write_pmresrn(2, 0); 1415 1352 1416 - krait_pre_vpmresr0(&vval, &fval); 1417 - krait_write_vpmresr0(0); 1418 - krait_post_vpmresr0(vval, fval); 1353 + venum_pre_pmresr(&vval, &fval); 1354 + venum_write_pmresr(0); 1355 + venum_post_pmresr(vval, fval); 1356 + 1357 + /* Reset PMxEVNCTCR to sane default */ 1358 + for (idx = ARMV7_IDX_CYCLE_COUNTER; idx < nb_cnt; ++idx) { 1359 + armv7_pmnc_select_counter(idx); 1360 + asm volatile("mcr p15, 0, %0, c9, c15, 0" : : "r" (0)); 1361 + } 1362 + 1419 1363 } 1420 1364 1421 1365 static int krait_event_to_bit(struct perf_event *event, unsigned int region, ··· 1456 1386 { 1457 1387 int idx; 1458 1388 int bit = -1; 1459 - unsigned int prefix; 1460 - unsigned int region; 1461 - unsigned int code; 1462 - unsigned int group; 1463 - bool krait_event; 1464 1389 struct hw_perf_event *hwc = &event->hw; 1390 + unsigned int region = EVENT_REGION(hwc->config_base); 1391 + unsigned int code = EVENT_CODE(hwc->config_base); 1392 + unsigned int group = EVENT_GROUP(hwc->config_base); 1393 + bool venum_event = EVENT_VENUM(hwc->config_base); 1394 + bool krait_event = EVENT_CPU(hwc->config_base); 1465 1395 1466 - region = (hwc->config_base >> 12) & 0xf; 1467 - code = (hwc->config_base >> 4) & 0xff; 1468 - group = (hwc->config_base >> 0) & 0xf; 1469 - krait_event = !!(hwc->config_base & KRAIT_EVENT_MASK); 1470 - 1471 - if (krait_event) { 1396 + if (venum_event || krait_event) { 1472 1397 /* Ignore invalid events */ 1473 1398 if (group > 3 || region > 2) 1474 1399 return -EINVAL; 1475 - prefix = hwc->config_base & KRAIT_EVENT_MASK; 1476 - if (prefix != KRAIT_EVENT && prefix != VENUM_EVENT) 1477 - return -EINVAL; 1478 - if (prefix == VENUM_EVENT && (code & 0xe0)) 1400 + if (venum_event && (code & 
0xe0)) 1479 1401 return -EINVAL; 1480 1402 1481 1403 bit = krait_event_to_bit(event, region, group); ··· 1487 1425 { 1488 1426 int bit; 1489 1427 struct hw_perf_event *hwc = &event->hw; 1490 - unsigned int region; 1491 - unsigned int group; 1492 - bool krait_event; 1428 + unsigned int region = EVENT_REGION(hwc->config_base); 1429 + unsigned int group = EVENT_GROUP(hwc->config_base); 1430 + bool venum_event = EVENT_VENUM(hwc->config_base); 1431 + bool krait_event = EVENT_CPU(hwc->config_base); 1493 1432 1494 - region = (hwc->config_base >> 12) & 0xf; 1495 - group = (hwc->config_base >> 0) & 0xf; 1496 - krait_event = !!(hwc->config_base & KRAIT_EVENT_MASK); 1497 - 1498 - if (krait_event) { 1433 + if (venum_event || krait_event) { 1499 1434 bit = krait_event_to_bit(event, region, group); 1500 1435 clear_bit(bit, cpuc->used_mask); 1501 1436 } ··· 1515 1456 cpu_pmu->disable = krait_pmu_disable_event; 1516 1457 cpu_pmu->get_event_idx = krait_pmu_get_event_idx; 1517 1458 cpu_pmu->clear_event_idx = krait_pmu_clear_event_idx; 1459 + return 0; 1460 + } 1461 + 1462 + /* 1463 + * Scorpion Local Performance Monitor Register (LPMn) 1464 + * 1465 + * 31 30 24 16 8 0 1466 + * +--------------------------------+ 1467 + * LPM0 | EN | CC | CC | CC | CC | N = 1, R = 0 1468 + * +--------------------------------+ 1469 + * LPM1 | EN | CC | CC | CC | CC | N = 1, R = 1 1470 + * +--------------------------------+ 1471 + * LPM2 | EN | CC | CC | CC | CC | N = 1, R = 2 1472 + * +--------------------------------+ 1473 + * L2LPM | EN | CC | CC | CC | CC | N = 1, R = 3 1474 + * +--------------------------------+ 1475 + * VLPM | EN | CC | CC | CC | CC | N = 2, R = ? 
1476 + * +--------------------------------+ 1477 + * EN | G=3 | G=2 | G=1 | G=0 1478 + * 1479 + * 1480 + * Event Encoding: 1481 + * 1482 + * hwc->config_base = 0xNRCCG 1483 + * 1484 + * N = prefix, 1 for Scorpion CPU (LPMn/L2LPM), 2 for Venum VFP (VLPM) 1485 + * R = region register 1486 + * CC = class of events the group G is choosing from 1487 + * G = group or particular event 1488 + * 1489 + * Example: 0x12021 is a Scorpion CPU event in LPM2's group 1 with code 2 1490 + * 1491 + * A region (R) corresponds to a piece of the CPU (execution unit, instruction 1492 + * unit, etc.) while the event code (CC) corresponds to a particular class of 1493 + * events (interrupts for example). An event code is broken down into 1494 + * groups (G) that can be mapped into the PMU (irq, fiqs, and irq+fiqs for 1495 + * example). 1496 + */ 1497 + 1498 + static u32 scorpion_read_pmresrn(int n) 1499 + { 1500 + u32 val; 1501 + 1502 + switch (n) { 1503 + case 0: 1504 + asm volatile("mrc p15, 0, %0, c15, c0, 0" : "=r" (val)); 1505 + break; 1506 + case 1: 1507 + asm volatile("mrc p15, 1, %0, c15, c0, 0" : "=r" (val)); 1508 + break; 1509 + case 2: 1510 + asm volatile("mrc p15, 2, %0, c15, c0, 0" : "=r" (val)); 1511 + break; 1512 + case 3: 1513 + asm volatile("mrc p15, 3, %0, c15, c2, 0" : "=r" (val)); 1514 + break; 1515 + default: 1516 + BUG(); /* Should be validated in scorpion_pmu_get_event_idx() */ 1517 + } 1518 + 1519 + return val; 1520 + } 1521 + 1522 + static void scorpion_write_pmresrn(int n, u32 val) 1523 + { 1524 + switch (n) { 1525 + case 0: 1526 + asm volatile("mcr p15, 0, %0, c15, c0, 0" : : "r" (val)); 1527 + break; 1528 + case 1: 1529 + asm volatile("mcr p15, 1, %0, c15, c0, 0" : : "r" (val)); 1530 + break; 1531 + case 2: 1532 + asm volatile("mcr p15, 2, %0, c15, c0, 0" : : "r" (val)); 1533 + break; 1534 + case 3: 1535 + asm volatile("mcr p15, 3, %0, c15, c2, 0" : : "r" (val)); 1536 + break; 1537 + default: 1538 + BUG(); /* Should be validated in scorpion_pmu_get_event_idx() 
*/ 1539 + } 1540 + } 1541 + 1542 + static u32 scorpion_get_pmresrn_event(unsigned int region) 1543 + { 1544 + static const u32 pmresrn_table[] = { SCORPION_LPM0_GROUP0, 1545 + SCORPION_LPM1_GROUP0, 1546 + SCORPION_LPM2_GROUP0, 1547 + SCORPION_L2LPM_GROUP0 }; 1548 + return pmresrn_table[region]; 1549 + } 1550 + 1551 + static void scorpion_evt_setup(int idx, u32 config_base) 1552 + { 1553 + u32 val; 1554 + u32 mask; 1555 + u32 vval, fval; 1556 + unsigned int region = EVENT_REGION(config_base); 1557 + unsigned int group = EVENT_GROUP(config_base); 1558 + unsigned int code = EVENT_CODE(config_base); 1559 + unsigned int group_shift; 1560 + bool venum_event = EVENT_VENUM(config_base); 1561 + 1562 + group_shift = group * 8; 1563 + mask = 0xff << group_shift; 1564 + 1565 + /* Configure evtsel for the region and group */ 1566 + if (venum_event) 1567 + val = SCORPION_VLPM_GROUP0; 1568 + else 1569 + val = scorpion_get_pmresrn_event(region); 1570 + val += group; 1571 + /* Mix in mode-exclusion bits */ 1572 + val |= config_base & (ARMV7_EXCLUDE_USER | ARMV7_EXCLUDE_PL1); 1573 + armv7_pmnc_write_evtsel(idx, val); 1574 + 1575 + asm volatile("mcr p15, 0, %0, c9, c15, 0" : : "r" (0)); 1576 + 1577 + if (venum_event) { 1578 + venum_pre_pmresr(&vval, &fval); 1579 + val = venum_read_pmresr(); 1580 + val &= ~mask; 1581 + val |= code << group_shift; 1582 + val |= PMRESRn_EN; 1583 + venum_write_pmresr(val); 1584 + venum_post_pmresr(vval, fval); 1585 + } else { 1586 + val = scorpion_read_pmresrn(region); 1587 + val &= ~mask; 1588 + val |= code << group_shift; 1589 + val |= PMRESRn_EN; 1590 + scorpion_write_pmresrn(region, val); 1591 + } 1592 + } 1593 + 1594 + static void scorpion_clearpmu(u32 config_base) 1595 + { 1596 + u32 val; 1597 + u32 vval, fval; 1598 + unsigned int region = EVENT_REGION(config_base); 1599 + unsigned int group = EVENT_GROUP(config_base); 1600 + bool venum_event = EVENT_VENUM(config_base); 1601 + 1602 + if (venum_event) { 1603 + venum_pre_pmresr(&vval, &fval); 1604 + 
val = venum_read_pmresr(); 1605 + val = clear_pmresrn_group(val, group); 1606 + venum_write_pmresr(val); 1607 + venum_post_pmresr(vval, fval); 1608 + } else { 1609 + val = scorpion_read_pmresrn(region); 1610 + val = clear_pmresrn_group(val, group); 1611 + scorpion_write_pmresrn(region, val); 1612 + } 1613 + } 1614 + 1615 + static void scorpion_pmu_disable_event(struct perf_event *event) 1616 + { 1617 + unsigned long flags; 1618 + struct hw_perf_event *hwc = &event->hw; 1619 + int idx = hwc->idx; 1620 + struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu); 1621 + struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events); 1622 + 1623 + /* Disable counter and interrupt */ 1624 + raw_spin_lock_irqsave(&events->pmu_lock, flags); 1625 + 1626 + /* Disable counter */ 1627 + armv7_pmnc_disable_counter(idx); 1628 + 1629 + /* 1630 + * Clear pmresr code (if destined for PMNx counters) 1631 + */ 1632 + if (hwc->config_base & KRAIT_EVENT_MASK) 1633 + scorpion_clearpmu(hwc->config_base); 1634 + 1635 + /* Disable interrupt for this counter */ 1636 + armv7_pmnc_disable_intens(idx); 1637 + 1638 + raw_spin_unlock_irqrestore(&events->pmu_lock, flags); 1639 + } 1640 + 1641 + static void scorpion_pmu_enable_event(struct perf_event *event) 1642 + { 1643 + unsigned long flags; 1644 + struct hw_perf_event *hwc = &event->hw; 1645 + int idx = hwc->idx; 1646 + struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu); 1647 + struct pmu_hw_events *events = this_cpu_ptr(cpu_pmu->hw_events); 1648 + 1649 + /* 1650 + * Enable counter and interrupt, and set the counter to count 1651 + * the event that we're interested in. 1652 + */ 1653 + raw_spin_lock_irqsave(&events->pmu_lock, flags); 1654 + 1655 + /* Disable counter */ 1656 + armv7_pmnc_disable_counter(idx); 1657 + 1658 + /* 1659 + * Set event (if destined for PMNx counters) 1660 + * We don't set the event for the cycle counter because we 1661 + * don't have the ability to perform event filtering. 
1662 + */ 1663 + if (hwc->config_base & KRAIT_EVENT_MASK) 1664 + scorpion_evt_setup(idx, hwc->config_base); 1665 + else if (idx != ARMV7_IDX_CYCLE_COUNTER) 1666 + armv7_pmnc_write_evtsel(idx, hwc->config_base); 1667 + 1668 + /* Enable interrupt for this counter */ 1669 + armv7_pmnc_enable_intens(idx); 1670 + 1671 + /* Enable counter */ 1672 + armv7_pmnc_enable_counter(idx); 1673 + 1674 + raw_spin_unlock_irqrestore(&events->pmu_lock, flags); 1675 + } 1676 + 1677 + static void scorpion_pmu_reset(void *info) 1678 + { 1679 + u32 vval, fval; 1680 + struct arm_pmu *cpu_pmu = info; 1681 + u32 idx, nb_cnt = cpu_pmu->num_events; 1682 + 1683 + armv7pmu_reset(info); 1684 + 1685 + /* Clear all pmresrs */ 1686 + scorpion_write_pmresrn(0, 0); 1687 + scorpion_write_pmresrn(1, 0); 1688 + scorpion_write_pmresrn(2, 0); 1689 + scorpion_write_pmresrn(3, 0); 1690 + 1691 + venum_pre_pmresr(&vval, &fval); 1692 + venum_write_pmresr(0); 1693 + venum_post_pmresr(vval, fval); 1694 + 1695 + /* Reset PMxEVNCTCR to sane default */ 1696 + for (idx = ARMV7_IDX_CYCLE_COUNTER; idx < nb_cnt; ++idx) { 1697 + armv7_pmnc_select_counter(idx); 1698 + asm volatile("mcr p15, 0, %0, c9, c15, 0" : : "r" (0)); 1699 + } 1700 + } 1701 + 1702 + static int scorpion_event_to_bit(struct perf_event *event, unsigned int region, 1703 + unsigned int group) 1704 + { 1705 + int bit; 1706 + struct hw_perf_event *hwc = &event->hw; 1707 + struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu); 1708 + 1709 + if (hwc->config_base & VENUM_EVENT) 1710 + bit = SCORPION_VLPM_GROUP0; 1711 + else 1712 + bit = scorpion_get_pmresrn_event(region); 1713 + bit -= scorpion_get_pmresrn_event(0); 1714 + bit += group; 1715 + /* 1716 + * Lower bits are reserved for use by the counters (see 1717 + * armv7pmu_get_event_idx() for more info) 1718 + */ 1719 + bit += ARMV7_IDX_COUNTER_LAST(cpu_pmu) + 1; 1720 + 1721 + return bit; 1722 + } 1723 + 1724 + /* 1725 + * We check for column exclusion constraints here. 
1726 + * Two events cant use the same group within a pmresr register. 1727 + */ 1728 + static int scorpion_pmu_get_event_idx(struct pmu_hw_events *cpuc, 1729 + struct perf_event *event) 1730 + { 1731 + int idx; 1732 + int bit = -1; 1733 + struct hw_perf_event *hwc = &event->hw; 1734 + unsigned int region = EVENT_REGION(hwc->config_base); 1735 + unsigned int group = EVENT_GROUP(hwc->config_base); 1736 + bool venum_event = EVENT_VENUM(hwc->config_base); 1737 + bool scorpion_event = EVENT_CPU(hwc->config_base); 1738 + 1739 + if (venum_event || scorpion_event) { 1740 + /* Ignore invalid events */ 1741 + if (group > 3 || region > 3) 1742 + return -EINVAL; 1743 + 1744 + bit = scorpion_event_to_bit(event, region, group); 1745 + if (test_and_set_bit(bit, cpuc->used_mask)) 1746 + return -EAGAIN; 1747 + } 1748 + 1749 + idx = armv7pmu_get_event_idx(cpuc, event); 1750 + if (idx < 0 && bit >= 0) 1751 + clear_bit(bit, cpuc->used_mask); 1752 + 1753 + return idx; 1754 + } 1755 + 1756 + static void scorpion_pmu_clear_event_idx(struct pmu_hw_events *cpuc, 1757 + struct perf_event *event) 1758 + { 1759 + int bit; 1760 + struct hw_perf_event *hwc = &event->hw; 1761 + unsigned int region = EVENT_REGION(hwc->config_base); 1762 + unsigned int group = EVENT_GROUP(hwc->config_base); 1763 + bool venum_event = EVENT_VENUM(hwc->config_base); 1764 + bool scorpion_event = EVENT_CPU(hwc->config_base); 1765 + 1766 + if (venum_event || scorpion_event) { 1767 + bit = scorpion_event_to_bit(event, region, group); 1768 + clear_bit(bit, cpuc->used_mask); 1769 + } 1770 + } 1771 + 1772 + static int scorpion_pmu_init(struct arm_pmu *cpu_pmu) 1773 + { 1774 + armv7pmu_init(cpu_pmu); 1775 + cpu_pmu->name = "armv7_scorpion"; 1776 + cpu_pmu->map_event = scorpion_map_event; 1777 + cpu_pmu->num_events = armv7_read_num_pmnc_events(); 1778 + cpu_pmu->reset = scorpion_pmu_reset; 1779 + cpu_pmu->enable = scorpion_pmu_enable_event; 1780 + cpu_pmu->disable = scorpion_pmu_disable_event; 1781 + cpu_pmu->get_event_idx = 
scorpion_pmu_get_event_idx; 1782 + cpu_pmu->clear_event_idx = scorpion_pmu_clear_event_idx; 1783 + return 0; 1784 + } 1785 + 1786 + static int scorpion_mp_pmu_init(struct arm_pmu *cpu_pmu) 1787 + { 1788 + armv7pmu_init(cpu_pmu); 1789 + cpu_pmu->name = "armv7_scorpion_mp"; 1790 + cpu_pmu->map_event = scorpion_map_event; 1791 + cpu_pmu->num_events = armv7_read_num_pmnc_events(); 1792 + cpu_pmu->reset = scorpion_pmu_reset; 1793 + cpu_pmu->enable = scorpion_pmu_enable_event; 1794 + cpu_pmu->disable = scorpion_pmu_disable_event; 1795 + cpu_pmu->get_event_idx = scorpion_pmu_get_event_idx; 1796 + cpu_pmu->clear_event_idx = scorpion_pmu_clear_event_idx; 1518 1797 return 0; 1519 1798 } 1520 1799 #else ··· 1892 1495 } 1893 1496 1894 1497 static inline int krait_pmu_init(struct arm_pmu *cpu_pmu) 1498 + { 1499 + return -ENODEV; 1500 + } 1501 + 1502 + static inline int scorpion_pmu_init(struct arm_pmu *cpu_pmu) 1503 + { 1504 + return -ENODEV; 1505 + } 1506 + 1507 + static inline int scorpion_mp_pmu_init(struct arm_pmu *cpu_pmu) 1895 1508 { 1896 1509 return -ENODEV; 1897 1510 }
+14 -145
arch/arm/kernel/process.c
··· 17 17 #include <linux/stddef.h> 18 18 #include <linux/unistd.h> 19 19 #include <linux/user.h> 20 - #include <linux/delay.h> 21 - #include <linux/reboot.h> 22 20 #include <linux/interrupt.h> 23 21 #include <linux/kallsyms.h> 24 22 #include <linux/init.h> 25 - #include <linux/cpu.h> 26 23 #include <linux/elfcore.h> 27 24 #include <linux/pm.h> 28 25 #include <linux/tick.h> ··· 28 31 #include <linux/random.h> 29 32 #include <linux/hw_breakpoint.h> 30 33 #include <linux/leds.h> 31 - #include <linux/reboot.h> 32 34 33 - #include <asm/cacheflush.h> 34 - #include <asm/idmap.h> 35 35 #include <asm/processor.h> 36 36 #include <asm/thread_notify.h> 37 37 #include <asm/stacktrace.h> 38 38 #include <asm/system_misc.h> 39 39 #include <asm/mach/time.h> 40 40 #include <asm/tls.h> 41 + #include <asm/vdso.h> 41 42 42 43 #ifdef CONFIG_CC_STACKPROTECTOR 43 44 #include <linux/stackprotector.h> ··· 53 58 static const char *isa_modes[] __maybe_unused = { 54 59 "ARM" , "Thumb" , "Jazelle", "ThumbEE" 55 60 }; 56 - 57 - extern void call_with_stack(void (*fn)(void *), void *arg, void *sp); 58 - typedef void (*phys_reset_t)(unsigned long); 59 - 60 - /* 61 - * A temporary stack to use for CPU reset. This is static so that we 62 - * don't clobber it with the identity mapping. When running with this 63 - * stack, any references to the current task *will not work* so you 64 - * should really do as little as possible before jumping to your reset 65 - * code. 66 - */ 67 - static u64 soft_restart_stack[16]; 68 - 69 - static void __soft_restart(void *addr) 70 - { 71 - phys_reset_t phys_reset; 72 - 73 - /* Take out a flat memory mapping. */ 74 - setup_mm_for_reboot(); 75 - 76 - /* Clean and invalidate caches */ 77 - flush_cache_all(); 78 - 79 - /* Turn off caching */ 80 - cpu_proc_fin(); 81 - 82 - /* Push out any further dirty data, and ensure cache is empty */ 83 - flush_cache_all(); 84 - 85 - /* Switch to the identity mapping. 
*/ 86 - phys_reset = (phys_reset_t)(unsigned long)virt_to_phys(cpu_reset); 87 - phys_reset((unsigned long)addr); 88 - 89 - /* Should never get here. */ 90 - BUG(); 91 - } 92 - 93 - void soft_restart(unsigned long addr) 94 - { 95 - u64 *stack = soft_restart_stack + ARRAY_SIZE(soft_restart_stack); 96 - 97 - /* Disable interrupts first */ 98 - raw_local_irq_disable(); 99 - local_fiq_disable(); 100 - 101 - /* Disable the L2 if we're the last man standing. */ 102 - if (num_online_cpus() == 1) 103 - outer_disable(); 104 - 105 - /* Change to the new stack and continue with the reset. */ 106 - call_with_stack(__soft_restart, (void *)addr, (void *)stack); 107 - 108 - /* Should never get here. */ 109 - BUG(); 110 - } 111 - 112 - /* 113 - * Function pointers to optional machine specific functions 114 - */ 115 - void (*pm_power_off)(void); 116 - EXPORT_SYMBOL(pm_power_off); 117 - 118 - void (*arm_pm_restart)(enum reboot_mode reboot_mode, const char *cmd); 119 61 120 62 /* 121 63 * This is our default idle handler. ··· 97 165 cpu_die(); 98 166 } 99 167 #endif 100 - 101 - /* 102 - * Called by kexec, immediately prior to machine_kexec(). 103 - * 104 - * This must completely disable all secondary CPUs; simply causing those CPUs 105 - * to execute e.g. a RAM-based pin loop is not sufficient. This allows the 106 - * kexec'd kernel to use any and all RAM as it sees fit, without having to 107 - * avoid any code or data used by any SW CPU pin loop. The CPU hotplug 108 - * functionality embodied in disable_nonboot_cpus() to achieve this. 109 - */ 110 - void machine_shutdown(void) 111 - { 112 - disable_nonboot_cpus(); 113 - } 114 - 115 - /* 116 - * Halting simply requires that the secondary CPUs stop performing any 117 - * activity (executing tasks, handling interrupts). smp_send_stop() 118 - * achieves this. 
119 - */ 120 - void machine_halt(void) 121 - { 122 - local_irq_disable(); 123 - smp_send_stop(); 124 - 125 - local_irq_disable(); 126 - while (1); 127 - } 128 - 129 - /* 130 - * Power-off simply requires that the secondary CPUs stop performing any 131 - * activity (executing tasks, handling interrupts). smp_send_stop() 132 - * achieves this. When the system power is turned off, it will take all CPUs 133 - * with it. 134 - */ 135 - void machine_power_off(void) 136 - { 137 - local_irq_disable(); 138 - smp_send_stop(); 139 - 140 - if (pm_power_off) 141 - pm_power_off(); 142 - } 143 - 144 - /* 145 - * Restart requires that the secondary CPUs stop performing any activity 146 - * while the primary CPU resets the system. Systems with a single CPU can 147 - * use soft_restart() as their machine descriptor's .restart hook, since that 148 - * will cause the only available CPU to reset. Systems with multiple CPUs must 149 - * provide a HW restart implementation, to ensure that all CPUs reset at once. 150 - * This is required so that any code running after reset on the primary CPU 151 - * doesn't have to co-ordinate with other CPUs to ensure they aren't still 152 - * executing pre-reset code, and using RAM that the primary CPU's code wishes 153 - * to use. Implementing such co-ordination would be essentially impossible. 154 - */ 155 - void machine_restart(char *cmd) 156 - { 157 - local_irq_disable(); 158 - smp_send_stop(); 159 - 160 - if (arm_pm_restart) 161 - arm_pm_restart(reboot_mode, cmd); 162 - else 163 - do_kernel_restart(cmd); 164 - 165 - /* Give a grace period for failure to restart of 1s */ 166 - mdelay(1000); 167 - 168 - /* Whoops - the platform was unable to reboot. Tell the user! 
*/ 169 - printk("Reboot failed -- System halted\n"); 170 - local_irq_disable(); 171 - while (1); 172 - } 173 168 174 169 void __show_regs(struct pt_regs *regs) 175 170 { ··· 334 475 } 335 476 336 477 /* If possible, provide a placement hint at a random offset from the 337 - * stack for the signal page. 478 + * stack for the sigpage and vdso pages. 338 479 */ 339 480 static unsigned long sigpage_addr(const struct mm_struct *mm, 340 481 unsigned int npages) ··· 378 519 { 379 520 struct mm_struct *mm = current->mm; 380 521 struct vm_area_struct *vma; 522 + unsigned long npages; 381 523 unsigned long addr; 382 524 unsigned long hint; 383 525 int ret = 0; ··· 388 528 if (!signal_page) 389 529 return -ENOMEM; 390 530 531 + npages = 1; /* for sigpage */ 532 + npages += vdso_total_pages; 533 + 391 534 down_write(&mm->mmap_sem); 392 - hint = sigpage_addr(mm, 1); 393 - addr = get_unmapped_area(NULL, hint, PAGE_SIZE, 0, 0); 535 + hint = sigpage_addr(mm, npages); 536 + addr = get_unmapped_area(NULL, hint, npages << PAGE_SHIFT, 0, 0); 394 537 if (IS_ERR_VALUE(addr)) { 395 538 ret = addr; 396 539 goto up_fail; ··· 409 546 } 410 547 411 548 mm->context.sigpage = addr; 549 + 550 + /* Unlike the sigpage, failure to install the vdso is unlikely 551 + * to be fatal to the process, so no error check needed 552 + * here. 553 + */ 554 + arm_install_vdso(mm, addr + PAGE_SIZE); 412 555 413 556 up_fail: 414 557 up_write(&mm->mmap_sem);
+31
arch/arm/kernel/psci-call.S
··· 1 + /* 2 + * This program is free software; you can redistribute it and/or modify 3 + * it under the terms of the GNU General Public License version 2 as 4 + * published by the Free Software Foundation. 5 + * 6 + * This program is distributed in the hope that it will be useful, 7 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 8 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 9 + * GNU General Public License for more details. 10 + * 11 + * Copyright (C) 2015 ARM Limited 12 + * 13 + * Author: Mark Rutland <mark.rutland@arm.com> 14 + */ 15 + 16 + #include <linux/linkage.h> 17 + 18 + #include <asm/opcodes-sec.h> 19 + #include <asm/opcodes-virt.h> 20 + 21 + /* int __invoke_psci_fn_hvc(u32 function_id, u32 arg0, u32 arg1, u32 arg2) */ 22 + ENTRY(__invoke_psci_fn_hvc) 23 + __HVC(0) 24 + bx lr 25 + ENDPROC(__invoke_psci_fn_hvc) 26 + 27 + /* int __invoke_psci_fn_smc(u32 function_id, u32 arg0, u32 arg1, u32 arg2) */ 28 + ENTRY(__invoke_psci_fn_smc) 29 + __SMC(0) 30 + bx lr 31 + ENDPROC(__invoke_psci_fn_smc)
+3 -36
arch/arm/kernel/psci.c
··· 23 23 24 24 #include <asm/compiler.h> 25 25 #include <asm/errno.h> 26 - #include <asm/opcodes-sec.h> 27 - #include <asm/opcodes-virt.h> 28 26 #include <asm/psci.h> 29 27 #include <asm/system_misc.h> 30 28 ··· 30 32 31 33 static int (*invoke_psci_fn)(u32, u32, u32, u32); 32 34 typedef int (*psci_initcall_t)(const struct device_node *); 35 + 36 + asmlinkage int __invoke_psci_fn_hvc(u32, u32, u32, u32); 37 + asmlinkage int __invoke_psci_fn_smc(u32, u32, u32, u32); 33 38 34 39 enum psci_function { 35 40 PSCI_FN_CPU_SUSPEND, ··· 70 69 & PSCI_0_2_POWER_STATE_TYPE_MASK) | 71 70 ((state.affinity_level << PSCI_0_2_POWER_STATE_AFFL_SHIFT) 72 71 & PSCI_0_2_POWER_STATE_AFFL_MASK); 73 - } 74 - 75 - /* 76 - * The following two functions are invoked via the invoke_psci_fn pointer 77 - * and will not be inlined, allowing us to piggyback on the AAPCS. 78 - */ 79 - static noinline int __invoke_psci_fn_hvc(u32 function_id, u32 arg0, u32 arg1, 80 - u32 arg2) 81 - { 82 - asm volatile( 83 - __asmeq("%0", "r0") 84 - __asmeq("%1", "r1") 85 - __asmeq("%2", "r2") 86 - __asmeq("%3", "r3") 87 - __HVC(0) 88 - : "+r" (function_id) 89 - : "r" (arg0), "r" (arg1), "r" (arg2)); 90 - 91 - return function_id; 92 - } 93 - 94 - static noinline int __invoke_psci_fn_smc(u32 function_id, u32 arg0, u32 arg1, 95 - u32 arg2) 96 - { 97 - asm volatile( 98 - __asmeq("%0", "r0") 99 - __asmeq("%1", "r1") 100 - __asmeq("%2", "r2") 101 - __asmeq("%3", "r3") 102 - __SMC(0) 103 - : "+r" (function_id) 104 - : "r" (arg0), "r" (arg1), "r" (arg2)); 105 - 106 - return function_id; 107 72 } 108 73 109 74 static int psci_get_version(void)
+155
arch/arm/kernel/reboot.c
··· 1 + /* 2 + * Copyright (C) 1996-2000 Russell King - Converted to ARM. 3 + * Original Copyright (C) 1995 Linus Torvalds 4 + * 5 + * This program is free software; you can redistribute it and/or modify 6 + * it under the terms of the GNU General Public License version 2 as 7 + * published by the Free Software Foundation. 8 + */ 9 + #include <linux/cpu.h> 10 + #include <linux/delay.h> 11 + #include <linux/reboot.h> 12 + 13 + #include <asm/cacheflush.h> 14 + #include <asm/idmap.h> 15 + 16 + #include "reboot.h" 17 + 18 + typedef void (*phys_reset_t)(unsigned long); 19 + 20 + /* 21 + * Function pointers to optional machine specific functions 22 + */ 23 + void (*arm_pm_restart)(enum reboot_mode reboot_mode, const char *cmd); 24 + void (*pm_power_off)(void); 25 + EXPORT_SYMBOL(pm_power_off); 26 + 27 + /* 28 + * A temporary stack to use for CPU reset. This is static so that we 29 + * don't clobber it with the identity mapping. When running with this 30 + * stack, any references to the current task *will not work* so you 31 + * should really do as little as possible before jumping to your reset 32 + * code. 33 + */ 34 + static u64 soft_restart_stack[16]; 35 + 36 + static void __soft_restart(void *addr) 37 + { 38 + phys_reset_t phys_reset; 39 + 40 + /* Take out a flat memory mapping. */ 41 + setup_mm_for_reboot(); 42 + 43 + /* Clean and invalidate caches */ 44 + flush_cache_all(); 45 + 46 + /* Turn off caching */ 47 + cpu_proc_fin(); 48 + 49 + /* Push out any further dirty data, and ensure cache is empty */ 50 + flush_cache_all(); 51 + 52 + /* Switch to the identity mapping. */ 53 + phys_reset = (phys_reset_t)(unsigned long)virt_to_phys(cpu_reset); 54 + phys_reset((unsigned long)addr); 55 + 56 + /* Should never get here. 
*/ 57 + BUG(); 58 + } 59 + 60 + void _soft_restart(unsigned long addr, bool disable_l2) 61 + { 62 + u64 *stack = soft_restart_stack + ARRAY_SIZE(soft_restart_stack); 63 + 64 + /* Disable interrupts first */ 65 + raw_local_irq_disable(); 66 + local_fiq_disable(); 67 + 68 + /* Disable the L2 if we're the last man standing. */ 69 + if (disable_l2) 70 + outer_disable(); 71 + 72 + /* Change to the new stack and continue with the reset. */ 73 + call_with_stack(__soft_restart, (void *)addr, (void *)stack); 74 + 75 + /* Should never get here. */ 76 + BUG(); 77 + } 78 + 79 + void soft_restart(unsigned long addr) 80 + { 81 + _soft_restart(addr, num_online_cpus() == 1); 82 + } 83 + 84 + /* 85 + * Called by kexec, immediately prior to machine_kexec(). 86 + * 87 + * This must completely disable all secondary CPUs; simply causing those CPUs 88 + * to execute e.g. a RAM-based pin loop is not sufficient. This allows the 89 + * kexec'd kernel to use any and all RAM as it sees fit, without having to 90 + * avoid any code or data used by any SW CPU pin loop. The CPU hotplug 91 + * functionality embodied in disable_nonboot_cpus() is used to achieve this. 92 + */ 93 + void machine_shutdown(void) 94 + { 95 + disable_nonboot_cpus(); 96 + } 97 + 98 + /* 99 + * Halting simply requires that the secondary CPUs stop performing any 100 + * activity (executing tasks, handling interrupts). smp_send_stop() 101 + * achieves this. 102 + */ 103 + void machine_halt(void) 104 + { 105 + local_irq_disable(); 106 + smp_send_stop(); 107 + 108 + local_irq_disable(); 109 + while (1); 110 + } 111 + 112 + /* 113 + * Power-off simply requires that the secondary CPUs stop performing any 114 + * activity (executing tasks, handling interrupts). smp_send_stop() 115 + * achieves this. When the system power is turned off, it will take all CPUs 116 + * with it. 
117 + */ 118 + void machine_power_off(void) 119 + { 120 + local_irq_disable(); 121 + smp_send_stop(); 122 + 123 + if (pm_power_off) 124 + pm_power_off(); 125 + } 126 + 127 + /* 128 + * Restart requires that the secondary CPUs stop performing any activity 129 + * while the primary CPU resets the system. Systems with a single CPU can 130 + * use soft_restart() as their machine descriptor's .restart hook, since that 131 + * will cause the only available CPU to reset. Systems with multiple CPUs must 132 + * provide a HW restart implementation, to ensure that all CPUs reset at once. 133 + * This is required so that any code running after reset on the primary CPU 134 + * doesn't have to co-ordinate with other CPUs to ensure they aren't still 135 + * executing pre-reset code, and using RAM that the primary CPU's code wishes 136 + * to use. Implementing such co-ordination would be essentially impossible. 137 + */ 138 + void machine_restart(char *cmd) 139 + { 140 + local_irq_disable(); 141 + smp_send_stop(); 142 + 143 + if (arm_pm_restart) 144 + arm_pm_restart(reboot_mode, cmd); 145 + else 146 + do_kernel_restart(cmd); 147 + 148 + /* Give a grace period for failure to restart of 1s */ 149 + mdelay(1000); 150 + 151 + /* Whoops - the platform was unable to reboot. Tell the user! */ 152 + printk("Reboot failed -- System halted\n"); 153 + local_irq_disable(); 154 + while (1); 155 + }
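One detail of the new reboot.c worth calling out: `soft_restart()` only disables the outer (L2) cache when this is the last CPU standing, because another still-online CPU may have live data in L2. A small illustrative model of that policy decision, with hypothetical names standing in for the kernel helpers:

```c
/* Illustrative model of soft_restart()'s L2 policy: the outer cache
 * may only be turned off once every other CPU is offline. */
static int l2_enabled = 1;

static void outer_disable_model(void)
{
    l2_enabled = 0;
}

static void soft_restart_model(int num_online_cpus)
{
    /* disable_l2 == "we're the last man standing" */
    if (num_online_cpus == 1)
        outer_disable_model();
    /* ...then switch to the private static stack via
     * call_with_stack() and jump to the reset code. */
}
```

The private `soft_restart_stack` exists for the same reason: once the identity mapping is in place, references through `current` no longer work, so the reset path must be self-sufficient.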
+7
arch/arm/kernel/reboot.h
··· 1 + #ifndef REBOOT_H 2 + #define REBOOT_H 3 + 4 + extern void call_with_stack(void (*fn)(void *), void *arg, void *sp); 5 + extern void _soft_restart(unsigned long addr, bool disable_l2); 6 + 7 + #endif
+1 -3
arch/arm/kernel/return_address.c
··· 56 56 return NULL; 57 57 } 58 58 59 - #else /* if defined(CONFIG_FRAME_POINTER) && !defined(CONFIG_ARM_UNWIND) */ 60 - 61 - #endif /* if defined(CONFIG_FRAME_POINTER) && !defined(CONFIG_ARM_UNWIND) / else */ 59 + #endif /* if defined(CONFIG_FRAME_POINTER) && !defined(CONFIG_ARM_UNWIND) */ 62 60 63 61 EXPORT_SYMBOL_GPL(return_address);
+31 -13
arch/arm/kernel/setup.c
··· 372 372 373 373 static void __init cpuid_init_hwcaps(void) 374 374 { 375 - unsigned int divide_instrs, vmsa; 375 + int block; 376 + u32 isar5; 376 377 377 378 if (cpu_architecture() < CPU_ARCH_ARMv7) 378 379 return; 379 380 380 - divide_instrs = (read_cpuid_ext(CPUID_EXT_ISAR0) & 0x0f000000) >> 24; 381 - 382 - switch (divide_instrs) { 383 - case 2: 381 + block = cpuid_feature_extract(CPUID_EXT_ISAR0, 24); 382 + if (block >= 2) 384 383 elf_hwcap |= HWCAP_IDIVA; 385 - case 1: 384 + if (block >= 1) 386 385 elf_hwcap |= HWCAP_IDIVT; 387 - } 388 386 389 387 /* LPAE implies atomic ldrd/strd instructions */ 390 - vmsa = (read_cpuid_ext(CPUID_EXT_MMFR0) & 0xf) >> 0; 391 - if (vmsa >= 5) 388 + block = cpuid_feature_extract(CPUID_EXT_MMFR0, 0); 389 + if (block >= 5) 392 390 elf_hwcap |= HWCAP_LPAE; 391 + 392 + /* check for supported v8 Crypto instructions */ 393 + isar5 = read_cpuid_ext(CPUID_EXT_ISAR5); 394 + 395 + block = cpuid_feature_extract_field(isar5, 4); 396 + if (block >= 2) 397 + elf_hwcap2 |= HWCAP2_PMULL; 398 + if (block >= 1) 399 + elf_hwcap2 |= HWCAP2_AES; 400 + 401 + block = cpuid_feature_extract_field(isar5, 8); 402 + if (block >= 1) 403 + elf_hwcap2 |= HWCAP2_SHA1; 404 + 405 + block = cpuid_feature_extract_field(isar5, 12); 406 + if (block >= 1) 407 + elf_hwcap2 |= HWCAP2_SHA2; 408 + 409 + block = cpuid_feature_extract_field(isar5, 16); 410 + if (block >= 1) 411 + elf_hwcap2 |= HWCAP2_CRC32; 393 412 } 394 413 395 414 static void __init elf_hwcap_fixup(void) 396 415 { 397 416 unsigned id = read_cpuid_id(); 398 - unsigned sync_prim; 399 417 400 418 /* 401 419 * HWCAP_TLS is available only on 1136 r1p0 and later, ··· 434 416 * avoid advertising SWP; it may not be atomic with 435 417 * multiprocessing cores. 
436 418 */ 437 - sync_prim = ((read_cpuid_ext(CPUID_EXT_ISAR3) >> 8) & 0xf0) | 438 - ((read_cpuid_ext(CPUID_EXT_ISAR4) >> 20) & 0x0f); 439 - if (sync_prim >= 0x13) 419 + if (cpuid_feature_extract(CPUID_EXT_ISAR3, 12) > 1 || 420 + (cpuid_feature_extract(CPUID_EXT_ISAR3, 12) == 1 && 421 + cpuid_feature_extract(CPUID_EXT_ISAR3, 20) >= 3)) 440 422 elf_hwcap &= ~HWCAP_SWP; 441 423 } 442 424
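The setup.c hunk converts open-coded mask-and-shift reads of the ID registers into `cpuid_feature_extract()` calls. The point of the helper is that ID register fields are 4 bits wide and signed: the architected "feature removed" encoding 0xF must compare as negative, not as a high feature level, which is why the results feed `>=` threshold checks. A C model of the field extraction (assumed to match the helper's behaviour, not copied from it):

```c
#include <stdint.h>

/* Model of cpuid_feature_extract_field(): pull the 4-bit field that
 * starts at bit 'field' out of an ID register and sign-extend it, so
 * the "feature removed" encoding 0xF reads back as -1. */
static int feature_extract_field(uint32_t reg, int field)
{
    return (int32_t)(reg << (28 - field)) >> 28;
}
```

With this, the hwcap logic becomes cumulative threshold tests, e.g. ISAR0[27:24] >= 1 implies Thumb integer divide (HWCAP_IDIVT) and >= 2 additionally implies ARM divide (HWCAP_IDIVA).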
+6 -9
arch/arm/kernel/sleep.S
··· 116 116 ldmfd sp!, {r4 - r11, pc} 117 117 ENDPROC(cpu_resume_after_mmu) 118 118 119 - /* 120 - * Note: Yes, part of the following code is located into the .data section. 121 - * This is to allow sleep_save_sp to be accessed with a relative load 122 - * while we can't rely on any MMU translation. We could have put 123 - * sleep_save_sp in the .text section as well, but some setups might 124 - * insist on it to be truly read-only. 125 - */ 126 - .data 119 + .text 127 120 .align 128 121 ENTRY(cpu_resume) 129 122 ARM_BE8(setend be) @ ensure we are in BE mode ··· 138 145 compute_mpidr_hash r1, r4, r5, r6, r0, r3 139 146 1: 140 147 adr r0, _sleep_save_sp 148 + ldr r2, [r0] 149 + add r0, r0, r2 141 150 ldr r0, [r0, #SLEEP_SAVE_SP_PHYS] 142 151 ldr r0, [r0, r1, lsl #2] 143 152 ··· 151 156 ENDPROC(cpu_resume) 152 157 153 158 .align 2 159 + _sleep_save_sp: 160 + .long sleep_save_sp - . 154 161 mpidr_hash_ptr: 155 162 .long mpidr_hash - . @ mpidr_hash struct offset 156 163 164 + .data 157 165 .type sleep_save_sp, #object 158 166 ENTRY(sleep_save_sp) 159 - _sleep_save_sp: 160 167 .space SLEEP_SAVE_SP_SZ @ struct sleep_save_sp
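The sleep.S change moves the resume code back to `.text` by storing a self-relative offset (`.long sleep_save_sp - .`) instead of keeping the pointer in writable memory: the code loads the offset PC-relatively, adds it to the offset word's own address, and lands on the `.data` symbol without needing MMU translation or an absolute address. The same pattern can be modelled in C with explicit pointer arithmetic (illustrative only):

```c
#include <stdint.h>

/* Model of the ".long sym - ." idiom: the stored word holds the
 * distance from its own address to the target, so the target can be
 * recovered using only the anchor's (PC-relative) address. */
static intptr_t make_relative(const void *anchor, const void *target)
{
    return (intptr_t)((const char *)target - (const char *)anchor);
}

static void *resolve_relative(const void *anchor, intptr_t offset)
{
    return (char *)(intptr_t)(const char *)anchor + offset;
}
```

In the assembly, `adr r0, _sleep_save_sp` plays the role of taking the anchor's address, and the `ldr`/`add` pair performs the resolve step.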
+5
arch/arm/kernel/smp.c
··· 145 145 smp_ops.smp_init_cpus(); 146 146 } 147 147 148 + int platform_can_secondary_boot(void) 149 + { 150 + return !!smp_ops.smp_boot_secondary; 151 + } 152 + 148 153 int platform_can_cpu_hotplug(void) 149 154 { 150 155 #ifdef CONFIG_HOTPLUG_CPU
+1 -1
arch/arm/kernel/swp_emulate.c
··· 42 42 " cmp %0, #0\n" \ 43 43 " movne %0, %4\n" \ 44 44 "2:\n" \ 45 - " .section .fixup,\"ax\"\n" \ 45 + " .section .text.fixup,\"ax\"\n" \ 46 46 " .align 2\n" \ 47 47 "3: mov %0, %5\n" \ 48 48 " b 2b\n" \
+337
arch/arm/kernel/vdso.c
··· 1 + /* 2 + * Adapted from arm64 version. 3 + * 4 + * Copyright (C) 2012 ARM Limited 5 + * Copyright (C) 2015 Mentor Graphics Corporation. 6 + * 7 + * This program is free software; you can redistribute it and/or modify 8 + * it under the terms of the GNU General Public License version 2 as 9 + * published by the Free Software Foundation. 10 + * 11 + * This program is distributed in the hope that it will be useful, 12 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 13 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 14 + * GNU General Public License for more details. 15 + * 16 + * You should have received a copy of the GNU General Public License 17 + * along with this program. If not, see <http://www.gnu.org/licenses/>. 18 + */ 19 + 20 + #include <linux/elf.h> 21 + #include <linux/err.h> 22 + #include <linux/kernel.h> 23 + #include <linux/mm.h> 24 + #include <linux/of.h> 25 + #include <linux/printk.h> 26 + #include <linux/slab.h> 27 + #include <linux/timekeeper_internal.h> 28 + #include <linux/vmalloc.h> 29 + #include <asm/arch_timer.h> 30 + #include <asm/barrier.h> 31 + #include <asm/cacheflush.h> 32 + #include <asm/page.h> 33 + #include <asm/vdso.h> 34 + #include <asm/vdso_datapage.h> 35 + #include <clocksource/arm_arch_timer.h> 36 + 37 + #define MAX_SYMNAME 64 38 + 39 + static struct page **vdso_text_pagelist; 40 + 41 + /* Total number of pages needed for the data and text portions of the VDSO. */ 42 + unsigned int vdso_total_pages __read_mostly; 43 + 44 + /* 45 + * The VDSO data page. 
46 + */ 47 + static union vdso_data_store vdso_data_store __page_aligned_data; 48 + static struct vdso_data *vdso_data = &vdso_data_store.data; 49 + 50 + static struct page *vdso_data_page; 51 + static struct vm_special_mapping vdso_data_mapping = { 52 + .name = "[vvar]", 53 + .pages = &vdso_data_page, 54 + }; 55 + 56 + static struct vm_special_mapping vdso_text_mapping = { 57 + .name = "[vdso]", 58 + }; 59 + 60 + struct elfinfo { 61 + Elf32_Ehdr *hdr; /* ptr to ELF */ 62 + Elf32_Sym *dynsym; /* ptr to .dynsym section */ 63 + unsigned long dynsymsize; /* size of .dynsym section */ 64 + char *dynstr; /* ptr to .dynstr section */ 65 + }; 66 + 67 + /* Cached result of boot-time check for whether the arch timer exists, 68 + * and if so, whether the virtual counter is useable. 69 + */ 70 + static bool cntvct_ok __read_mostly; 71 + 72 + static bool __init cntvct_functional(void) 73 + { 74 + struct device_node *np; 75 + bool ret = false; 76 + 77 + if (!IS_ENABLED(CONFIG_ARM_ARCH_TIMER)) 78 + goto out; 79 + 80 + /* The arm_arch_timer core should export 81 + * arch_timer_use_virtual or similar so we don't have to do 82 + * this. 
83 + */ 84 + np = of_find_compatible_node(NULL, NULL, "arm,armv7-timer"); 85 + if (!np) 86 + goto out_put; 87 + 88 + if (of_property_read_bool(np, "arm,cpu-registers-not-fw-configured")) 89 + goto out_put; 90 + 91 + ret = true; 92 + 93 + out_put: 94 + of_node_put(np); 95 + out: 96 + return ret; 97 + } 98 + 99 + static void * __init find_section(Elf32_Ehdr *ehdr, const char *name, 100 + unsigned long *size) 101 + { 102 + Elf32_Shdr *sechdrs; 103 + unsigned int i; 104 + char *secnames; 105 + 106 + /* Grab section headers and strings so we can tell who is who */ 107 + sechdrs = (void *)ehdr + ehdr->e_shoff; 108 + secnames = (void *)ehdr + sechdrs[ehdr->e_shstrndx].sh_offset; 109 + 110 + /* Find the section they want */ 111 + for (i = 1; i < ehdr->e_shnum; i++) { 112 + if (strcmp(secnames + sechdrs[i].sh_name, name) == 0) { 113 + if (size) 114 + *size = sechdrs[i].sh_size; 115 + return (void *)ehdr + sechdrs[i].sh_offset; 116 + } 117 + } 118 + 119 + if (size) 120 + *size = 0; 121 + return NULL; 122 + } 123 + 124 + static Elf32_Sym * __init find_symbol(struct elfinfo *lib, const char *symname) 125 + { 126 + unsigned int i; 127 + 128 + for (i = 0; i < (lib->dynsymsize / sizeof(Elf32_Sym)); i++) { 129 + char name[MAX_SYMNAME], *c; 130 + 131 + if (lib->dynsym[i].st_name == 0) 132 + continue; 133 + strlcpy(name, lib->dynstr + lib->dynsym[i].st_name, 134 + MAX_SYMNAME); 135 + c = strchr(name, '@'); 136 + if (c) 137 + *c = 0; 138 + if (strcmp(symname, name) == 0) 139 + return &lib->dynsym[i]; 140 + } 141 + return NULL; 142 + } 143 + 144 + static void __init vdso_nullpatch_one(struct elfinfo *lib, const char *symname) 145 + { 146 + Elf32_Sym *sym; 147 + 148 + sym = find_symbol(lib, symname); 149 + if (!sym) 150 + return; 151 + 152 + sym->st_name = 0; 153 + } 154 + 155 + static void __init patch_vdso(void *ehdr) 156 + { 157 + struct elfinfo einfo; 158 + 159 + einfo = (struct elfinfo) { 160 + .hdr = ehdr, 161 + }; 162 + 163 + einfo.dynsym = find_section(einfo.hdr, ".dynsym", 
&einfo.dynsymsize); 164 + einfo.dynstr = find_section(einfo.hdr, ".dynstr", NULL); 165 + 166 + /* If the virtual counter is absent or non-functional we don't 167 + * want programs to incur the slight additional overhead of 168 + * dispatching through the VDSO only to fall back to syscalls. 169 + */ 170 + if (!cntvct_ok) { 171 + vdso_nullpatch_one(&einfo, "__vdso_gettimeofday"); 172 + vdso_nullpatch_one(&einfo, "__vdso_clock_gettime"); 173 + } 174 + } 175 + 176 + static int __init vdso_init(void) 177 + { 178 + unsigned int text_pages; 179 + int i; 180 + 181 + if (memcmp(&vdso_start, "\177ELF", 4)) { 182 + pr_err("VDSO is not a valid ELF object!\n"); 183 + return -ENOEXEC; 184 + } 185 + 186 + text_pages = (&vdso_end - &vdso_start) >> PAGE_SHIFT; 187 + pr_debug("vdso: %i text pages at base %p\n", text_pages, &vdso_start); 188 + 189 + /* Allocate the VDSO text pagelist */ 190 + vdso_text_pagelist = kcalloc(text_pages, sizeof(struct page *), 191 + GFP_KERNEL); 192 + if (vdso_text_pagelist == NULL) 193 + return -ENOMEM; 194 + 195 + /* Grab the VDSO data page. */ 196 + vdso_data_page = virt_to_page(vdso_data); 197 + 198 + /* Grab the VDSO text pages. */ 199 + for (i = 0; i < text_pages; i++) { 200 + struct page *page; 201 + 202 + page = virt_to_page(&vdso_start + i * PAGE_SIZE); 203 + vdso_text_pagelist[i] = page; 204 + } 205 + 206 + vdso_text_mapping.pages = vdso_text_pagelist; 207 + 208 + vdso_total_pages = 1; /* for the data/vvar page */ 209 + vdso_total_pages += text_pages; 210 + 211 + cntvct_ok = cntvct_functional(); 212 + 213 + patch_vdso(&vdso_start); 214 + 215 + return 0; 216 + } 217 + arch_initcall(vdso_init); 218 + 219 + static int install_vvar(struct mm_struct *mm, unsigned long addr) 220 + { 221 + struct vm_area_struct *vma; 222 + 223 + vma = _install_special_mapping(mm, addr, PAGE_SIZE, 224 + VM_READ | VM_MAYREAD, 225 + &vdso_data_mapping); 226 + 227 + return IS_ERR(vma) ? 
PTR_ERR(vma) : 0; 228 + } 229 + 230 + /* assumes mmap_sem is write-locked */ 231 + void arm_install_vdso(struct mm_struct *mm, unsigned long addr) 232 + { 233 + struct vm_area_struct *vma; 234 + unsigned long len; 235 + 236 + mm->context.vdso = 0; 237 + 238 + if (vdso_text_pagelist == NULL) 239 + return; 240 + 241 + if (install_vvar(mm, addr)) 242 + return; 243 + 244 + /* Account for vvar page. */ 245 + addr += PAGE_SIZE; 246 + len = (vdso_total_pages - 1) << PAGE_SHIFT; 247 + 248 + vma = _install_special_mapping(mm, addr, len, 249 + VM_READ | VM_EXEC | VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC, 250 + &vdso_text_mapping); 251 + 252 + if (!IS_ERR(vma)) 253 + mm->context.vdso = addr; 254 + } 255 + 256 + static void vdso_write_begin(struct vdso_data *vdata) 257 + { 258 + ++vdso_data->seq_count; 259 + smp_wmb(); /* Pairs with smp_rmb in vdso_read_retry */ 260 + } 261 + 262 + static void vdso_write_end(struct vdso_data *vdata) 263 + { 264 + smp_wmb(); /* Pairs with smp_rmb in vdso_read_begin */ 265 + ++vdso_data->seq_count; 266 + } 267 + 268 + static bool tk_is_cntvct(const struct timekeeper *tk) 269 + { 270 + if (!IS_ENABLED(CONFIG_ARM_ARCH_TIMER)) 271 + return false; 272 + 273 + if (strcmp(tk->tkr_mono.clock->name, "arch_sys_counter") != 0) 274 + return false; 275 + 276 + return true; 277 + } 278 + 279 + /** 280 + * update_vsyscall - update the vdso data page 281 + * 282 + * Increment the sequence counter, making it odd, indicating to 283 + * userspace that an update is in progress. Update the fields used 284 + * for coarse clocks and, if the architected system timer is in use, 285 + * the fields used for high precision clocks. Increment the sequence 286 + * counter again, making it even, indicating to userspace that the 287 + * update is finished. 288 + * 289 + * Userspace is expected to sample seq_count before reading any other 290 + * fields from the data page. If seq_count is odd, userspace is 291 + * expected to wait until it becomes even. 
After copying data from 292 + * the page, userspace must sample seq_count again; if it has changed 293 + * from its previous value, userspace must retry the whole sequence. 294 + * 295 + * Calls to update_vsyscall are serialized by the timekeeping core. 296 + */ 297 + void update_vsyscall(struct timekeeper *tk) 298 + { 299 + struct timespec xtime_coarse; 300 + struct timespec64 *wtm = &tk->wall_to_monotonic; 301 + 302 + if (!cntvct_ok) { 303 + /* The entry points have been zeroed, so there is no 304 + * point in updating the data page. 305 + */ 306 + return; 307 + } 308 + 309 + vdso_write_begin(vdso_data); 310 + 311 + xtime_coarse = __current_kernel_time(); 312 + vdso_data->tk_is_cntvct = tk_is_cntvct(tk); 313 + vdso_data->xtime_coarse_sec = xtime_coarse.tv_sec; 314 + vdso_data->xtime_coarse_nsec = xtime_coarse.tv_nsec; 315 + vdso_data->wtm_clock_sec = wtm->tv_sec; 316 + vdso_data->wtm_clock_nsec = wtm->tv_nsec; 317 + 318 + if (vdso_data->tk_is_cntvct) { 319 + vdso_data->cs_cycle_last = tk->tkr_mono.cycle_last; 320 + vdso_data->xtime_clock_sec = tk->xtime_sec; 321 + vdso_data->xtime_clock_snsec = tk->tkr_mono.xtime_nsec; 322 + vdso_data->cs_mult = tk->tkr_mono.mult; 323 + vdso_data->cs_shift = tk->tkr_mono.shift; 324 + vdso_data->cs_mask = tk->tkr_mono.mask; 325 + } 326 + 327 + vdso_write_end(vdso_data); 328 + 329 + flush_dcache_page(virt_to_page(vdso_data)); 330 + } 331 + 332 + void update_vsyscall_tz(void) 333 + { 334 + vdso_data->tz_minuteswest = sys_tz.tz_minuteswest; 335 + vdso_data->tz_dsttime = sys_tz.tz_dsttime; 336 + flush_dcache_page(virt_to_page(vdso_data)); 337 + }
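The `update_vsyscall()` comment above describes the writer half of a sequence-count protocol; the userspace reader half, which the VDSO fast paths implement, can be sketched as follows. This is a single-threaded control-flow sketch with an assumed data layout, so the `smp_rmb()` barriers that a real reader pairs with the writer's `smp_wmb()` are omitted:

```c
#include <stdint.h>

struct vdso_data_model {
    uint32_t seq_count;
    uint64_t xtime_sec;   /* one illustrative payload field */
};

/* Reader side of the seqcount protocol: wait out an in-progress
 * update (odd count), copy the payload, then retry the whole read
 * if the count changed underneath us. */
static uint64_t read_xtime_sec(const struct vdso_data_model *d)
{
    uint32_t seq;
    uint64_t sec;

    do {
        do {
            seq = d->seq_count;
        } while (seq & 1);          /* odd: writer mid-update */
        sec = d->xtime_sec;
    } while (d->seq_count != seq);  /* changed: retry the copy */

    return sec;
}
```

Readers never block writers, and a torn read is impossible because any overlap with an update forces a retry.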
+2 -5
arch/arm/kernel/vmlinux.lds.S
··· 74 74 ARM_EXIT_DISCARD(EXIT_DATA) 75 75 EXIT_CALL 76 76 #ifndef CONFIG_MMU 77 - *(.fixup) 77 + *(.text.fixup) 78 78 *(__ex_table) 79 79 #endif 80 80 #ifndef CONFIG_SMP_ON_UP ··· 100 100 101 101 .text : { /* Real text segment */ 102 102 _stext = .; /* Text and read-only data */ 103 + IDMAP_TEXT 103 104 __exception_text_start = .; 104 105 *(.exception.text) 105 106 __exception_text_end = .; ··· 109 108 SCHED_TEXT 110 109 LOCK_TEXT 111 110 KPROBES_TEXT 112 - IDMAP_TEXT 113 - #ifdef CONFIG_MMU 114 - *(.fixup) 115 - #endif 116 111 *(.gnu.warning) 117 112 *(.glue_7) 118 113 *(.glue_7t)
+1 -1
arch/arm/lib/clear_user.S
··· 47 47 ENDPROC(__clear_user) 48 48 ENDPROC(__clear_user_std) 49 49 50 - .pushsection .fixup,"ax" 50 + .pushsection .text.fixup,"ax" 51 51 .align 0 52 52 9001: ldmfd sp!, {r0, pc} 53 53 .popsection
+1 -1
arch/arm/lib/copy_to_user.S
··· 100 100 ENDPROC(__copy_to_user) 101 101 ENDPROC(__copy_to_user_std) 102 102 103 - .pushsection .fixup,"ax" 103 + .pushsection .text.fixup,"ax" 104 104 .align 0 105 105 copy_abort_preamble 106 106 ldmfd sp!, {r1, r2, r3}
+1 -1
arch/arm/lib/csumpartialcopyuser.S
··· 68 68 * so properly, we would have to add in whatever registers were loaded before 69 69 * the fault, which, with the current asm above is not predictable. 70 70 */ 71 - .pushsection .fixup,"ax" 71 + .pushsection .text.fixup,"ax" 72 72 .align 4 73 73 9001: mov r4, #-EFAULT 74 74 ldr r5, [sp, #8*4] @ *err_ptr
+6
arch/arm/lib/delay.c
··· 83 83 NSEC_PER_SEC, 3600); 84 84 res = cyc_to_ns(1ULL, new_mult, new_shift); 85 85 86 + if (res > 1000) { 87 + pr_err("Ignoring delay timer %ps, which has insufficient resolution of %lluns\n", 88 + timer, res); 89 + return; 90 + } 91 + 86 92 if (!delay_calibrated && (!delay_res || (res < delay_res))) { 87 93 pr_info("Switching to timer-based delay loop, resolution %lluns\n", res); 88 94 delay_timer = timer;
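The delay.c hunk rejects registered delay timers whose single-cycle resolution exceeds 1000 ns, computed via the usual `cyc_to_ns()` fixed-point conversion. A sketch of both pieces, using model functions rather than the kernel's own helpers:

```c
#include <stdint.h>

/* cyc_to_ns()-style fixed-point conversion: ns = (cycles * mult) >> shift,
 * where mult/shift are precomputed from the timer frequency. */
static uint64_t cyc_to_ns_model(uint64_t cyc, uint32_t mult, uint32_t shift)
{
    return (cyc * mult) >> shift;
}

/* The new check: a delay timer is only usable if one cycle
 * corresponds to at most 1000 ns (i.e. it ticks at >= 1 MHz). */
static int delay_timer_usable(uint32_t mult, uint32_t shift)
{
    return cyc_to_ns_model(1, mult, shift) <= 1000;
}
```

A coarser timer would make short `udelay()` calls wildly inaccurate, which is why such timers are now skipped with an error message rather than silently adopted.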
+16 -15
arch/arm/mach-exynos/sleep.S
··· 23 23 #define CPU_MASK 0xff0ffff0 24 24 #define CPU_CORTEX_A9 0x410fc090 25 25 26 - /* 27 - * The following code is located into the .data section. This is to 28 - * allow l2x0_regs_phys to be accessed with a relative load while we 29 - * can't rely on any MMU translation. We could have put l2x0_regs_phys 30 - * in the .text section as well, but some setups might insist on it to 31 - * be truly read-only. (Reference from: arch/arm/kernel/sleep.S) 32 - */ 33 - .data 26 + .text 34 27 .align 35 28 36 29 /* ··· 62 69 cmp r0, r1 63 70 bne skip_cp15 64 71 65 - adr r0, cp15_save_power 72 + adr r0, _cp15_save_power 66 73 ldr r1, [r0] 67 - adr r0, cp15_save_diag 74 + ldr r1, [r0, r1] 75 + adr r0, _cp15_save_diag 68 76 ldr r2, [r0] 77 + ldr r2, [r0, r2] 69 78 mov r0, #SMC_CMD_C15RESUME 70 79 dsb 71 80 smc #0 ··· 113 118 skip_cp15: 114 119 b cpu_resume 115 120 ENDPROC(exynos_cpu_resume_ns) 121 + 122 + .align 123 + _cp15_save_power: 124 + .long cp15_save_power - . 125 + _cp15_save_diag: 126 + .long cp15_save_diag - . 127 + #ifdef CONFIG_CACHE_L2X0 128 + 1: .long l2x0_saved_regs - . 129 + #endif /* CONFIG_CACHE_L2X0 */ 130 + 131 + .data 116 132 .globl cp15_save_diag 117 133 cp15_save_diag: 118 134 .long 0 @ cp15 diagnostic 119 135 .globl cp15_save_power 120 136 cp15_save_power: 121 137 .long 0 @ cp15 power control 122 - 123 - #ifdef CONFIG_CACHE_L2X0 124 - .align 125 - 1: .long l2x0_saved_regs - . 126 - #endif /* CONFIG_CACHE_L2X0 */
+1 -1
arch/arm/mach-s5pv210/sleep.S
··· 14 14 15 15 #include <linux/linkage.h> 16 16 17 - .data 17 + .text 18 18 .align 19 19 20 20 /*
+1
arch/arm/mach-vexpress/Kconfig
··· 42 42 config ARCH_VEXPRESS_CORTEX_A5_A9_ERRATA 43 43 bool "Enable A5 and A9 only errata work-arounds" 44 44 default y 45 + select ARM_ERRATA_643719 if SMP 45 46 select ARM_ERRATA_720789 46 47 select PL310_ERRATA_753970 if CACHE_L2X0 47 48 help
+15 -1
arch/arm/mm/Kconfig
··· 738 738 739 739 config CPU_DCACHE_DISABLE 740 740 bool "Disable D-Cache (C-bit)" 741 - depends on CPU_CP15 741 + depends on CPU_CP15 && !SMP 742 742 help 743 743 Say Y here to disable the processor data cache. Unless 744 744 you have a reason not to or are unsure, say N. ··· 824 824 825 825 Say N here only if you are absolutely certain that you do not 826 826 need these helpers; otherwise, the safe option is to say Y. 827 + 828 + config VDSO 829 + bool "Enable VDSO for acceleration of some system calls" 830 + depends on AEABI && MMU 831 + default y if ARM_ARCH_TIMER 832 + select GENERIC_TIME_VSYSCALL 833 + help 834 + Place in the process address space an ELF shared object 835 + providing fast implementations of gettimeofday and 836 + clock_gettime. Systems that implement the ARM architected 837 + timer will receive maximum benefit. 838 + 839 + You must have glibc 2.22 or later for programs to seamlessly 840 + take advantage of this. 827 841 828 842 config DMA_CACHE_RWFO 829 843 bool "Enable read/write for ownership DMA cache maintenance"
+3 -3
arch/arm/mm/alignment.c
··· 201 201 THUMB( "1: "ins" %1, [%2]\n" ) \ 202 202 THUMB( " add %2, %2, #1\n" ) \ 203 203 "2:\n" \ 204 - " .pushsection .fixup,\"ax\"\n" \ 204 + " .pushsection .text.fixup,\"ax\"\n" \ 205 205 " .align 2\n" \ 206 206 "3: mov %0, #1\n" \ 207 207 " b 2b\n" \ ··· 261 261 " mov %1, %1, "NEXT_BYTE"\n" \ 262 262 "2: "ins" %1, [%2]\n" \ 263 263 "3:\n" \ 264 - " .pushsection .fixup,\"ax\"\n" \ 264 + " .pushsection .text.fixup,\"ax\"\n" \ 265 265 " .align 2\n" \ 266 266 "4: mov %0, #1\n" \ 267 267 " b 3b\n" \ ··· 301 301 " mov %1, %1, "NEXT_BYTE"\n" \ 302 302 "4: "ins" %1, [%2]\n" \ 303 303 "5:\n" \ 304 - " .pushsection .fixup,\"ax\"\n" \ 304 + " .pushsection .text.fixup,\"ax\"\n" \ 305 305 " .align 2\n" \ 306 306 "6: mov %0, #1\n" \ 307 307 " b 5b\n" \
+7
arch/arm/mm/cache-l2x0.c
··· 1647 1647 struct device_node *np; 1648 1648 struct resource res; 1649 1649 u32 cache_id, old_aux; 1650 + u32 cache_level = 2; 1650 1651 1651 1652 np = of_find_matching_node(NULL, l2x0_ids); 1652 1653 if (!np) ··· 1679 1678 /* All L2 caches are unified, so this property should be specified */ 1680 1679 if (!of_property_read_bool(np, "cache-unified")) 1681 1680 pr_err("L2C: device tree omits to specify unified cache\n"); 1681 + 1682 + if (of_property_read_u32(np, "cache-level", &cache_level)) 1683 + pr_err("L2C: device tree omits to specify cache-level\n"); 1684 + 1685 + if (cache_level != 2) 1686 + pr_err("L2C: device tree specifies invalid cache level\n"); 1682 1687 1683 1688 /* Read back current (default) hardware configuration */ 1684 1689 if (data->save)
+19 -19
arch/arm/mm/cache-v7.S
··· 36 36 mcr p15, 2, r0, c0, c0, 0 37 37 mrc p15, 1, r0, c0, c0, 0 38 38 39 - ldr r1, =0x7fff 39 + movw r1, #0x7fff 40 40 and r2, r1, r0, lsr #13 41 41 42 - ldr r1, =0x3ff 42 + movw r1, #0x3ff 43 43 44 44 and r3, r1, r0, lsr #3 @ NumWays - 1 45 45 add r2, r2, #1 @ NumSets ··· 90 90 ENTRY(v7_flush_dcache_louis) 91 91 dmb @ ensure ordering with previous memory accesses 92 92 mrc p15, 1, r0, c0, c0, 1 @ read clidr, r0 = clidr 93 - ALT_SMP(ands r3, r0, #(7 << 21)) @ extract LoUIS from clidr 94 - ALT_UP(ands r3, r0, #(7 << 27)) @ extract LoUU from clidr 93 + ALT_SMP(mov r3, r0, lsr #20) @ move LoUIS into position 94 + ALT_UP( mov r3, r0, lsr #26) @ move LoUU into position 95 + ands r3, r3, #7 << 1 @ extract LoU*2 field from clidr 96 + bne start_flush_levels @ LoU != 0, start flushing 95 97 #ifdef CONFIG_ARM_ERRATA_643719 96 - ALT_SMP(mrceq p15, 0, r2, c0, c0, 0) @ read main ID register 97 - ALT_UP(reteq lr) @ LoUU is zero, so nothing to do 98 - ldreq r1, =0x410fc090 @ ID of ARM Cortex A9 r0p? 99 - biceq r2, r2, #0x0000000f @ clear minor revision number 100 - teqeq r2, r1 @ test for errata affected core and if so... 101 - orreqs r3, #(1 << 21) @ fix LoUIS value (and set flags state to 'ne') 98 + ALT_SMP(mrc p15, 0, r2, c0, c0, 0) @ read main ID register 99 + ALT_UP( ret lr) @ LoUU is zero, so nothing to do 100 + movw r1, #:lower16:(0x410fc090 >> 4) @ ID of ARM Cortex A9 r0p? 101 + movt r1, #:upper16:(0x410fc090 >> 4) 102 + teq r1, r2, lsr #4 @ test for errata affected core and if so... 
103 + moveq r3, #1 << 1 @ fix LoUIS value 104 + beq start_flush_levels @ start flushing cache levels 102 105 #endif 103 - ALT_SMP(mov r3, r3, lsr #20) @ r3 = LoUIS * 2 104 - ALT_UP(mov r3, r3, lsr #26) @ r3 = LoUU * 2 105 - reteq lr @ return if level == 0 106 - mov r10, #0 @ r10 (starting level) = 0 107 - b flush_levels @ start flushing cache levels 106 + ret lr 108 107 ENDPROC(v7_flush_dcache_louis) 109 108 110 109 /* ··· 118 119 ENTRY(v7_flush_dcache_all) 119 120 dmb @ ensure ordering with previous memory accesses 120 121 mrc p15, 1, r0, c0, c0, 1 @ read clidr 121 - ands r3, r0, #0x7000000 @ extract loc from clidr 122 - mov r3, r3, lsr #23 @ left align loc bit field 122 + mov r3, r0, lsr #23 @ move LoC into position 123 + ands r3, r3, #7 << 1 @ extract LoC*2 from clidr 123 124 beq finished @ if loc is 0, then no need to clean 125 + start_flush_levels: 124 126 mov r10, #0 @ start clean at cache level 0 125 127 flush_levels: 126 128 add r2, r10, r10, lsr #1 @ work out 3x current cache level ··· 140 140 #endif 141 141 and r2, r1, #7 @ extract the length of the cache lines 142 142 add r2, r2, #4 @ add 4 (line length offset) 143 - ldr r4, =0x3ff 143 + movw r4, #0x3ff 144 144 ands r4, r4, r1, lsr #3 @ find maximum number on the way size 145 145 clz r5, r4 @ find bit position of way size increment 146 - ldr r7, =0x7fff 146 + movw r7, #0x7fff 147 147 ands r7, r7, r1, lsr #13 @ extract max number of the index size 148 148 loop1: 149 149 mov r9, r7 @ create working copy of max index
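The cache-v7.S rework extracts the CLIDR level fields pre-doubled (field * 2), because the set/way flush loop steps r10 two at a time per cache level; shifting so the field lands at bit 1 and masking with `7 << 1` yields that doubled value in one `ands`. A C model of the two extractions (CLIDR keeps LoUIS in bits [23:21] and LoC in bits [26:24]):

```c
#include <stdint.h>

/* Model of the SMP path in v7_flush_dcache_louis: shift CLIDR so
 * LoUIS (bits [23:21]) lands at bit 1, mask to get LoUIS * 2. */
static unsigned int clidr_louis_times_2(uint32_t clidr)
{
    return (clidr >> 20) & (7u << 1);
}

/* Model of v7_flush_dcache_all: same trick for LoC (bits [26:24]),
 * giving LoC * 2 for the start_flush_levels loop counter. */
static unsigned int clidr_loc_times_2(uint32_t clidr)
{
    return (clidr >> 23) & (7u << 1);
}
```

The shared `start_flush_levels` label then lets the errata-643719 path simply force the doubled value to `1 << 1` (flush to level 1) and branch into the common loop.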
+73 -43
arch/arm/mm/dma-mapping.c
··· 289 289 290 290 static void *__alloc_from_contiguous(struct device *dev, size_t size, 291 291 pgprot_t prot, struct page **ret_page, 292 - const void *caller); 292 + const void *caller, bool want_vaddr); 293 293 294 294 static void *__alloc_remap_buffer(struct device *dev, size_t size, gfp_t gfp, 295 295 pgprot_t prot, struct page **ret_page, 296 - const void *caller); 296 + const void *caller, bool want_vaddr); 297 297 298 298 static void * 299 299 __dma_alloc_remap(struct page *page, size_t size, gfp_t gfp, pgprot_t prot, ··· 357 357 358 358 if (dev_get_cma_area(NULL)) 359 359 ptr = __alloc_from_contiguous(NULL, atomic_pool_size, prot, 360 - &page, atomic_pool_init); 360 + &page, atomic_pool_init, true); 361 361 else 362 362 ptr = __alloc_remap_buffer(NULL, atomic_pool_size, gfp, prot, 363 - &page, atomic_pool_init); 363 + &page, atomic_pool_init, true); 364 364 if (ptr) { 365 365 int ret; 366 366 ··· 467 467 468 468 static void *__alloc_remap_buffer(struct device *dev, size_t size, gfp_t gfp, 469 469 pgprot_t prot, struct page **ret_page, 470 - const void *caller) 470 + const void *caller, bool want_vaddr) 471 471 { 472 472 struct page *page; 473 - void *ptr; 473 + void *ptr = NULL; 474 474 page = __dma_alloc_buffer(dev, size, gfp); 475 475 if (!page) 476 476 return NULL; 477 + if (!want_vaddr) 478 + goto out; 477 479 478 480 ptr = __dma_alloc_remap(page, size, gfp, prot, caller); 479 481 if (!ptr) { ··· 483 481 return NULL; 484 482 } 485 483 484 + out: 486 485 *ret_page = page; 487 486 return ptr; 488 487 } ··· 526 523 527 524 static void *__alloc_from_contiguous(struct device *dev, size_t size, 528 525 pgprot_t prot, struct page **ret_page, 529 - const void *caller) 526 + const void *caller, bool want_vaddr) 530 527 { 531 528 unsigned long order = get_order(size); 532 529 size_t count = size >> PAGE_SHIFT; 533 530 struct page *page; 534 - void *ptr; 531 + void *ptr = NULL; 535 532 536 533 page = dma_alloc_from_contiguous(dev, count, order); 537 534 if 
(!page) 538 535 return NULL; 539 536 540 537 __dma_clear_buffer(page, size); 538 + 539 + if (!want_vaddr) 540 + goto out; 541 541 542 542 if (PageHighMem(page)) { 543 543 ptr = __dma_alloc_remap(page, size, GFP_KERNEL, prot, caller); ··· 552 546 __dma_remap(page, size, prot); 553 547 ptr = page_address(page); 554 548 } 549 + 550 + out: 555 551 *ret_page = page; 556 552 return ptr; 557 553 } 558 554 559 555 static void __free_from_contiguous(struct device *dev, struct page *page, 560 - void *cpu_addr, size_t size) 556 + void *cpu_addr, size_t size, bool want_vaddr) 561 557 { 562 - if (PageHighMem(page)) 563 - __dma_free_remap(cpu_addr, size); 564 - else 565 - __dma_remap(page, size, PAGE_KERNEL); 558 + if (want_vaddr) { 559 + if (PageHighMem(page)) 560 + __dma_free_remap(cpu_addr, size); 561 + else 562 + __dma_remap(page, size, PAGE_KERNEL); 563 + } 566 564 dma_release_from_contiguous(dev, page, size >> PAGE_SHIFT); 567 565 } 568 566 ··· 584 574 585 575 #define nommu() 1 586 576 587 - #define __get_dma_pgprot(attrs, prot) __pgprot(0) 588 - #define __alloc_remap_buffer(dev, size, gfp, prot, ret, c) NULL 577 + #define __get_dma_pgprot(attrs, prot) __pgprot(0) 578 + #define __alloc_remap_buffer(dev, size, gfp, prot, ret, c, wv) NULL 589 579 #define __alloc_from_pool(size, ret_page) NULL 590 - #define __alloc_from_contiguous(dev, size, prot, ret, c) NULL 580 + #define __alloc_from_contiguous(dev, size, prot, ret, c, wv) NULL 591 581 #define __free_from_pool(cpu_addr, size) 0 592 - #define __free_from_contiguous(dev, page, cpu_addr, size) do { } while (0) 582 + #define __free_from_contiguous(dev, page, cpu_addr, size, wv) do { } while (0) 593 583 #define __dma_free_remap(cpu_addr, size) do { } while (0) 594 584 595 585 #endif /* CONFIG_MMU */ ··· 609 599 610 600 611 601 static void *__dma_alloc(struct device *dev, size_t size, dma_addr_t *handle, 612 - gfp_t gfp, pgprot_t prot, bool is_coherent, const void *caller) 602 + gfp_t gfp, pgprot_t prot, bool is_coherent, 603 + 
struct dma_attrs *attrs, const void *caller) 613 604 { 614 605 u64 mask = get_coherent_dma_mask(dev); 615 606 struct page *page = NULL; 616 607 void *addr; 608 + bool want_vaddr; 617 609 618 610 #ifdef CONFIG_DMA_API_DEBUG 619 611 u64 limit = (mask + 1) & ~mask; ··· 643 631 644 632 *handle = DMA_ERROR_CODE; 645 633 size = PAGE_ALIGN(size); 634 + want_vaddr = !dma_get_attr(DMA_ATTR_NO_KERNEL_MAPPING, attrs); 646 635 647 636 if (is_coherent || nommu()) 648 637 addr = __alloc_simple_buffer(dev, size, gfp, &page); 649 638 else if (!(gfp & __GFP_WAIT)) 650 639 addr = __alloc_from_pool(size, &page); 651 640 else if (!dev_get_cma_area(dev)) 652 - addr = __alloc_remap_buffer(dev, size, gfp, prot, &page, caller); 641 + addr = __alloc_remap_buffer(dev, size, gfp, prot, &page, caller, want_vaddr); 653 642 else 654 - addr = __alloc_from_contiguous(dev, size, prot, &page, caller); 643 + addr = __alloc_from_contiguous(dev, size, prot, &page, caller, want_vaddr); 655 644 656 - if (addr) 645 + if (page) 657 646 *handle = pfn_to_dma(dev, page_to_pfn(page)); 658 647 659 - return addr; 648 + return want_vaddr ? 
addr : page; 660 649 } 661 650 662 651 /* ··· 674 661 return memory; 675 662 676 663 return __dma_alloc(dev, size, handle, gfp, prot, false, 677 - __builtin_return_address(0)); 664 + attrs, __builtin_return_address(0)); 678 665 } 679 666 680 667 static void *arm_coherent_dma_alloc(struct device *dev, size_t size, ··· 687 674 return memory; 688 675 689 676 return __dma_alloc(dev, size, handle, gfp, prot, true, 690 - __builtin_return_address(0)); 677 + attrs, __builtin_return_address(0)); 691 678 } 692 679 693 680 /* ··· 728 715 bool is_coherent) 729 716 { 730 717 struct page *page = pfn_to_page(dma_to_pfn(dev, handle)); 718 + bool want_vaddr = !dma_get_attr(DMA_ATTR_NO_KERNEL_MAPPING, attrs); 731 719 732 720 if (dma_release_from_coherent(dev, get_order(size), cpu_addr)) 733 721 return; ··· 740 726 } else if (__free_from_pool(cpu_addr, size)) { 741 727 return; 742 728 } else if (!dev_get_cma_area(dev)) { 743 - __dma_free_remap(cpu_addr, size); 729 + if (want_vaddr) 730 + __dma_free_remap(cpu_addr, size); 744 731 __dma_free_buffer(page, size); 745 732 } else { 746 733 /* 747 734 * Non-atomic allocations cannot be freed with IRQs disabled 748 735 */ 749 736 WARN_ON(irqs_disabled()); 750 - __free_from_contiguous(dev, page, cpu_addr, size); 737 + __free_from_contiguous(dev, page, cpu_addr, size, want_vaddr); 751 738 } 752 739 } 753 740 ··· 1150 1135 gfp |= __GFP_NOWARN | __GFP_HIGHMEM; 1151 1136 1152 1137 while (count) { 1153 - int j, order = __fls(count); 1138 + int j, order; 1154 1139 1155 - pages[i] = alloc_pages(gfp, order); 1156 - while (!pages[i] && order) 1157 - pages[i] = alloc_pages(gfp, --order); 1158 - if (!pages[i]) 1159 - goto error; 1140 + for (order = __fls(count); order > 0; --order) { 1141 + /* 1142 + * We do not want OOM killer to be invoked as long 1143 + * as we can fall back to single pages, so we force 1144 + * __GFP_NORETRY for orders higher than zero. 
1145 + */ 1146 + pages[i] = alloc_pages(gfp | __GFP_NORETRY, order); 1147 + if (pages[i]) 1148 + break; 1149 + } 1150 + 1151 + if (!pages[i]) { 1152 + /* 1153 + * Fall back to single page allocation. 1154 + * Might invoke OOM killer as last resort. 1155 + */ 1156 + pages[i] = alloc_pages(gfp, 0); 1157 + if (!pages[i]) 1158 + goto error; 1159 + } 1160 1160 1161 1161 if (order) { 1162 1162 split_page(pages[i], order); ··· 1236 1206 static dma_addr_t 1237 1207 __iommu_create_mapping(struct device *dev, struct page **pages, size_t size) 1238 1208 { 1239 - struct dma_iommu_mapping *mapping = dev->archdata.mapping; 1209 + struct dma_iommu_mapping *mapping = to_dma_iommu_mapping(dev); 1240 1210 unsigned int count = PAGE_ALIGN(size) >> PAGE_SHIFT; 1241 1211 dma_addr_t dma_addr, iova; 1242 1212 int i, ret = DMA_ERROR_CODE; ··· 1272 1242 1273 1243 static int __iommu_remove_mapping(struct device *dev, dma_addr_t iova, size_t size) 1274 1244 { 1275 - struct dma_iommu_mapping *mapping = dev->archdata.mapping; 1245 + struct dma_iommu_mapping *mapping = to_dma_iommu_mapping(dev); 1276 1246 1277 1247 /* 1278 1248 * add optional in-page offset from iova to size and align ··· 1487 1457 enum dma_data_direction dir, struct dma_attrs *attrs, 1488 1458 bool is_coherent) 1489 1459 { 1490 - struct dma_iommu_mapping *mapping = dev->archdata.mapping; 1460 + struct dma_iommu_mapping *mapping = to_dma_iommu_mapping(dev); 1491 1461 dma_addr_t iova, iova_base; 1492 1462 int ret = 0; 1493 1463 unsigned int count; ··· 1708 1678 unsigned long offset, size_t size, enum dma_data_direction dir, 1709 1679 struct dma_attrs *attrs) 1710 1680 { 1711 - struct dma_iommu_mapping *mapping = dev->archdata.mapping; 1681 + struct dma_iommu_mapping *mapping = to_dma_iommu_mapping(dev); 1712 1682 dma_addr_t dma_addr; 1713 1683 int ret, prot, len = PAGE_ALIGN(size + offset); 1714 1684 ··· 1761 1731 size_t size, enum dma_data_direction dir, 1762 1732 struct dma_attrs *attrs) 1763 1733 { 1764 - struct 
dma_iommu_mapping *mapping = dev->archdata.mapping; 1734 + struct dma_iommu_mapping *mapping = to_dma_iommu_mapping(dev); 1765 1735 dma_addr_t iova = handle & PAGE_MASK; 1766 1736 int offset = handle & ~PAGE_MASK; 1767 1737 int len = PAGE_ALIGN(size + offset); ··· 1786 1756 size_t size, enum dma_data_direction dir, 1787 1757 struct dma_attrs *attrs) 1788 1758 { 1789 - struct dma_iommu_mapping *mapping = dev->archdata.mapping; 1759 + struct dma_iommu_mapping *mapping = to_dma_iommu_mapping(dev); 1790 1760 dma_addr_t iova = handle & PAGE_MASK; 1791 1761 struct page *page = phys_to_page(iommu_iova_to_phys(mapping->domain, iova)); 1792 1762 int offset = handle & ~PAGE_MASK; ··· 1805 1775 static void arm_iommu_sync_single_for_cpu(struct device *dev, 1806 1776 dma_addr_t handle, size_t size, enum dma_data_direction dir) 1807 1777 { 1808 - struct dma_iommu_mapping *mapping = dev->archdata.mapping; 1778 + struct dma_iommu_mapping *mapping = to_dma_iommu_mapping(dev); 1809 1779 dma_addr_t iova = handle & PAGE_MASK; 1810 1780 struct page *page = phys_to_page(iommu_iova_to_phys(mapping->domain, iova)); 1811 1781 unsigned int offset = handle & ~PAGE_MASK; ··· 1819 1789 static void arm_iommu_sync_single_for_device(struct device *dev, 1820 1790 dma_addr_t handle, size_t size, enum dma_data_direction dir) 1821 1791 { 1822 - struct dma_iommu_mapping *mapping = dev->archdata.mapping; 1792 + struct dma_iommu_mapping *mapping = to_dma_iommu_mapping(dev); 1823 1793 dma_addr_t iova = handle & PAGE_MASK; 1824 1794 struct page *page = phys_to_page(iommu_iova_to_phys(mapping->domain, iova)); 1825 1795 unsigned int offset = handle & ~PAGE_MASK; ··· 1980 1950 return err; 1981 1951 1982 1952 kref_get(&mapping->kref); 1983 - dev->archdata.mapping = mapping; 1953 + to_dma_iommu_mapping(dev) = mapping; 1984 1954 1985 1955 pr_debug("Attached IOMMU controller to %s device.\n", dev_name(dev)); 1986 1956 return 0; ··· 2025 1995 2026 1996 iommu_detach_device(mapping->domain, dev); 2027 1997 
kref_put(&mapping->kref, release_iommu_mapping); 2028 - dev->archdata.mapping = NULL; 1998 + to_dma_iommu_mapping(dev) = NULL; 2029 1999 2030 2000 pr_debug("Detached IOMMU controller from %s device.\n", dev_name(dev)); 2031 2001 } ··· 2083 2053 2084 2054 static void arm_teardown_iommu_dma_ops(struct device *dev) 2085 2055 { 2086 - struct dma_iommu_mapping *mapping = dev->archdata.mapping; 2056 + struct dma_iommu_mapping *mapping = to_dma_iommu_mapping(dev); 2087 2057 2088 2058 if (!mapping) 2089 2059 return;
-49
arch/arm/mm/init.c
··· 86 86 87 87 __tagtable(ATAG_INITRD2, parse_tag_initrd2); 88 88 89 - /* 90 - * This keeps memory configuration data used by a couple memory 91 - * initialization functions, as well as show_mem() for the skipping 92 - * of holes in the memory map. It is populated by arm_add_memory(). 93 - */ 94 - void show_mem(unsigned int filter) 95 - { 96 - int free = 0, total = 0, reserved = 0; 97 - int shared = 0, cached = 0, slab = 0; 98 - struct memblock_region *reg; 99 - 100 - printk("Mem-info:\n"); 101 - show_free_areas(filter); 102 - 103 - for_each_memblock (memory, reg) { 104 - unsigned int pfn1, pfn2; 105 - struct page *page, *end; 106 - 107 - pfn1 = memblock_region_memory_base_pfn(reg); 108 - pfn2 = memblock_region_memory_end_pfn(reg); 109 - 110 - page = pfn_to_page(pfn1); 111 - end = pfn_to_page(pfn2 - 1) + 1; 112 - 113 - do { 114 - total++; 115 - if (PageReserved(page)) 116 - reserved++; 117 - else if (PageSwapCache(page)) 118 - cached++; 119 - else if (PageSlab(page)) 120 - slab++; 121 - else if (!page_count(page)) 122 - free++; 123 - else 124 - shared += page_count(page) - 1; 125 - pfn1++; 126 - page = pfn_to_page(pfn1); 127 - } while (pfn1 < pfn2); 128 - } 129 - 130 - printk("%d pages of RAM\n", total); 131 - printk("%d free pages\n", free); 132 - printk("%d reserved pages\n", reserved); 133 - printk("%d slab pages\n", slab); 134 - printk("%d pages shared\n", shared); 135 - printk("%d pages swap cached\n", cached); 136 - } 137 - 138 89 static void __init find_limits(unsigned long *min, unsigned long *max_low, 139 90 unsigned long *max_high) 140 91 {
+2 -2
arch/arm/mm/proc-arm1020.S
··· 507 507 508 508 .align 509 509 510 - .section ".proc.info.init", #alloc, #execinstr 510 + .section ".proc.info.init", #alloc 511 511 512 512 .type __arm1020_proc_info,#object 513 513 __arm1020_proc_info: ··· 519 519 .long PMD_TYPE_SECT | \ 520 520 PMD_SECT_AP_WRITE | \ 521 521 PMD_SECT_AP_READ 522 - b __arm1020_setup 522 + initfn __arm1020_setup, __arm1020_proc_info 523 523 .long cpu_arch_name 524 524 .long cpu_elf_name 525 525 .long HWCAP_SWP | HWCAP_HALF | HWCAP_THUMB
+2 -2
arch/arm/mm/proc-arm1020e.S
··· 465 465 466 466 .align 467 467 468 - .section ".proc.info.init", #alloc, #execinstr 468 + .section ".proc.info.init", #alloc 469 469 470 470 .type __arm1020e_proc_info,#object 471 471 __arm1020e_proc_info: ··· 479 479 PMD_BIT4 | \ 480 480 PMD_SECT_AP_WRITE | \ 481 481 PMD_SECT_AP_READ 482 - b __arm1020e_setup 482 + initfn __arm1020e_setup, __arm1020e_proc_info 483 483 .long cpu_arch_name 484 484 .long cpu_elf_name 485 485 .long HWCAP_SWP | HWCAP_HALF | HWCAP_THUMB | HWCAP_EDSP
+2 -2
arch/arm/mm/proc-arm1022.S
··· 448 448 449 449 .align 450 450 451 - .section ".proc.info.init", #alloc, #execinstr 451 + .section ".proc.info.init", #alloc 452 452 453 453 .type __arm1022_proc_info,#object 454 454 __arm1022_proc_info: ··· 462 462 PMD_BIT4 | \ 463 463 PMD_SECT_AP_WRITE | \ 464 464 PMD_SECT_AP_READ 465 - b __arm1022_setup 465 + initfn __arm1022_setup, __arm1022_proc_info 466 466 .long cpu_arch_name 467 467 .long cpu_elf_name 468 468 .long HWCAP_SWP | HWCAP_HALF | HWCAP_THUMB | HWCAP_EDSP
+2 -2
arch/arm/mm/proc-arm1026.S
··· 442 442 string cpu_arm1026_name, "ARM1026EJ-S" 443 443 .align 444 444 445 - .section ".proc.info.init", #alloc, #execinstr 445 + .section ".proc.info.init", #alloc 446 446 447 447 .type __arm1026_proc_info,#object 448 448 __arm1026_proc_info: ··· 456 456 PMD_BIT4 | \ 457 457 PMD_SECT_AP_WRITE | \ 458 458 PMD_SECT_AP_READ 459 - b __arm1026_setup 459 + initfn __arm1026_setup, __arm1026_proc_info 460 460 .long cpu_arch_name 461 461 .long cpu_elf_name 462 462 .long HWCAP_SWP|HWCAP_HALF|HWCAP_THUMB|HWCAP_FAST_MULT|HWCAP_EDSP|HWCAP_JAVA
+2 -2
arch/arm/mm/proc-arm720.S
··· 186 186 * See <asm/procinfo.h> for a definition of this structure. 187 187 */ 188 188 189 - .section ".proc.info.init", #alloc, #execinstr 189 + .section ".proc.info.init", #alloc 190 190 191 191 .macro arm720_proc_info name:req, cpu_val:req, cpu_mask:req, cpu_name:req, cpu_flush:req 192 192 .type __\name\()_proc_info,#object ··· 203 203 PMD_BIT4 | \ 204 204 PMD_SECT_AP_WRITE | \ 205 205 PMD_SECT_AP_READ 206 - b \cpu_flush @ cpu_flush 206 + initfn \cpu_flush, __\name\()_proc_info @ cpu_flush 207 207 .long cpu_arch_name @ arch_name 208 208 .long cpu_elf_name @ elf_name 209 209 .long HWCAP_SWP | HWCAP_HALF | HWCAP_THUMB @ elf_hwcap
+2 -2
arch/arm/mm/proc-arm740.S
··· 132 132 133 133 .align 134 134 135 - .section ".proc.info.init", #alloc, #execinstr 135 + .section ".proc.info.init", #alloc 136 136 .type __arm740_proc_info,#object 137 137 __arm740_proc_info: 138 138 .long 0x41807400 139 139 .long 0xfffffff0 140 140 .long 0 141 141 .long 0 142 - b __arm740_setup 142 + initfn __arm740_setup, __arm740_proc_info 143 143 .long cpu_arch_name 144 144 .long cpu_elf_name 145 145 .long HWCAP_SWP | HWCAP_HALF | HWCAP_THUMB | HWCAP_26BIT
+2 -2
arch/arm/mm/proc-arm7tdmi.S
··· 76 76 77 77 .align 78 78 79 - .section ".proc.info.init", #alloc, #execinstr 79 + .section ".proc.info.init", #alloc 80 80 81 81 .macro arm7tdmi_proc_info name:req, cpu_val:req, cpu_mask:req, cpu_name:req, \ 82 82 extra_hwcaps=0 ··· 86 86 .long \cpu_mask 87 87 .long 0 88 88 .long 0 89 - b __arm7tdmi_setup 89 + initfn __arm7tdmi_setup, __\name\()_proc_info 90 90 .long cpu_arch_name 91 91 .long cpu_elf_name 92 92 .long HWCAP_SWP | HWCAP_26BIT | ( \extra_hwcaps )
+2 -2
arch/arm/mm/proc-arm920.S
··· 448 448 449 449 .align 450 450 451 - .section ".proc.info.init", #alloc, #execinstr 451 + .section ".proc.info.init", #alloc 452 452 453 453 .type __arm920_proc_info,#object 454 454 __arm920_proc_info: ··· 464 464 PMD_BIT4 | \ 465 465 PMD_SECT_AP_WRITE | \ 466 466 PMD_SECT_AP_READ 467 - b __arm920_setup 467 + initfn __arm920_setup, __arm920_proc_info 468 468 .long cpu_arch_name 469 469 .long cpu_elf_name 470 470 .long HWCAP_SWP | HWCAP_HALF | HWCAP_THUMB
+2 -2
arch/arm/mm/proc-arm922.S
··· 426 426 427 427 .align 428 428 429 - .section ".proc.info.init", #alloc, #execinstr 429 + .section ".proc.info.init", #alloc 430 430 431 431 .type __arm922_proc_info,#object 432 432 __arm922_proc_info: ··· 442 442 PMD_BIT4 | \ 443 443 PMD_SECT_AP_WRITE | \ 444 444 PMD_SECT_AP_READ 445 - b __arm922_setup 445 + initfn __arm922_setup, __arm922_proc_info 446 446 .long cpu_arch_name 447 447 .long cpu_elf_name 448 448 .long HWCAP_SWP | HWCAP_HALF | HWCAP_THUMB
+2 -2
arch/arm/mm/proc-arm925.S
··· 494 494 495 495 .align 496 496 497 - .section ".proc.info.init", #alloc, #execinstr 497 + .section ".proc.info.init", #alloc 498 498 499 499 .macro arm925_proc_info name:req, cpu_val:req, cpu_mask:req, cpu_name:req, cache 500 500 .type __\name\()_proc_info,#object ··· 510 510 PMD_BIT4 | \ 511 511 PMD_SECT_AP_WRITE | \ 512 512 PMD_SECT_AP_READ 513 - b __arm925_setup 513 + initfn __arm925_setup, __\name\()_proc_info 514 514 .long cpu_arch_name 515 515 .long cpu_elf_name 516 516 .long HWCAP_SWP | HWCAP_HALF | HWCAP_THUMB
+2 -2
arch/arm/mm/proc-arm926.S
··· 474 474 475 475 .align 476 476 477 - .section ".proc.info.init", #alloc, #execinstr 477 + .section ".proc.info.init", #alloc 478 478 479 479 .type __arm926_proc_info,#object 480 480 __arm926_proc_info: ··· 490 490 PMD_BIT4 | \ 491 491 PMD_SECT_AP_WRITE | \ 492 492 PMD_SECT_AP_READ 493 - b __arm926_setup 493 + initfn __arm926_setup, __arm926_proc_info 494 494 .long cpu_arch_name 495 495 .long cpu_elf_name 496 496 .long HWCAP_SWP|HWCAP_HALF|HWCAP_THUMB|HWCAP_FAST_MULT|HWCAP_EDSP|HWCAP_JAVA
+10 -20
arch/arm/mm/proc-arm940.S
··· 297 297 mcr p15, 0, r0, c6, c0, 1 298 298 299 299 ldr r0, =(CONFIG_DRAM_BASE & 0xFFFFF000) @ base[31:12] of RAM 300 - ldr r1, =(CONFIG_DRAM_SIZE >> 12) @ size of RAM (must be >= 4KB) 301 - mov r2, #10 @ 11 is the minimum (4KB) 302 - 1: add r2, r2, #1 @ area size *= 2 303 - mov r1, r1, lsr #1 304 - bne 1b @ count not zero r-shift 305 - orr r0, r0, r2, lsl #1 @ the area register value 306 - orr r0, r0, #1 @ set enable bit 307 - mcr p15, 0, r0, c6, c1, 0 @ set area 1, RAM 308 - mcr p15, 0, r0, c6, c1, 1 300 + ldr r7, =CONFIG_DRAM_SIZE >> 12 @ size of RAM (must be >= 4KB) 301 + pr_val r3, r0, r7, #1 302 + mcr p15, 0, r3, c6, c1, 0 @ set area 1, RAM 303 + mcr p15, 0, r3, c6, c1, 1 309 304 310 305 ldr r0, =(CONFIG_FLASH_MEM_BASE & 0xFFFFF000) @ base[31:12] of FLASH 311 - ldr r1, =(CONFIG_FLASH_SIZE >> 12) @ size of FLASH (must be >= 4KB) 312 - mov r2, #10 @ 11 is the minimum (4KB) 313 - 1: add r2, r2, #1 @ area size *= 2 314 - mov r1, r1, lsr #1 315 - bne 1b @ count not zero r-shift 316 - orr r0, r0, r2, lsl #1 @ the area register value 317 - orr r0, r0, #1 @ set enable bit 318 - mcr p15, 0, r0, c6, c2, 0 @ set area 2, ROM/FLASH 319 - mcr p15, 0, r0, c6, c2, 1 306 + ldr r7, =CONFIG_FLASH_SIZE @ size of FLASH (must be >= 4KB) 307 + pr_val r3, r0, r6, #1 308 + mcr p15, 0, r3, c6, c2, 0 @ set area 2, ROM/FLASH 309 + mcr p15, 0, r3, c6, c2, 1 320 310 321 311 mov r0, #0x06 322 312 mcr p15, 0, r0, c2, c0, 0 @ Region 1&2 cacheable ··· 344 354 345 355 .align 346 356 347 - .section ".proc.info.init", #alloc, #execinstr 357 + .section ".proc.info.init", #alloc 348 358 349 359 .type __arm940_proc_info,#object 350 360 __arm940_proc_info: 351 361 .long 0x41009400 352 362 .long 0xff00fff0 353 363 .long 0 354 - b __arm940_setup 364 + initfn __arm940_setup, __arm940_proc_info 355 365 .long cpu_arch_name 356 366 .long cpu_elf_name 357 367 .long HWCAP_SWP | HWCAP_HALF | HWCAP_THUMB
+8 -18
arch/arm/mm/proc-arm946.S
··· 343 343 mcr p15, 0, r0, c6, c0, 0 @ set region 0, default 344 344 345 345 ldr r0, =(CONFIG_DRAM_BASE & 0xFFFFF000) @ base[31:12] of RAM 346 - ldr r1, =(CONFIG_DRAM_SIZE >> 12) @ size of RAM (must be >= 4KB) 347 - mov r2, #10 @ 11 is the minimum (4KB) 348 - 1: add r2, r2, #1 @ area size *= 2 349 - mov r1, r1, lsr #1 350 - bne 1b @ count not zero r-shift 351 - orr r0, r0, r2, lsl #1 @ the region register value 352 - orr r0, r0, #1 @ set enable bit 353 - mcr p15, 0, r0, c6, c1, 0 @ set region 1, RAM 346 + ldr r7, =CONFIG_DRAM_SIZE @ size of RAM (must be >= 4KB) 347 + pr_val r3, r0, r7, #1 348 + mcr p15, 0, r3, c6, c1, 0 354 349 355 350 ldr r0, =(CONFIG_FLASH_MEM_BASE & 0xFFFFF000) @ base[31:12] of FLASH 356 - ldr r1, =(CONFIG_FLASH_SIZE >> 12) @ size of FLASH (must be >= 4KB) 357 - mov r2, #10 @ 11 is the minimum (4KB) 358 - 1: add r2, r2, #1 @ area size *= 2 359 - mov r1, r1, lsr #1 360 - bne 1b @ count not zero r-shift 361 - orr r0, r0, r2, lsl #1 @ the region register value 362 - orr r0, r0, #1 @ set enable bit 363 - mcr p15, 0, r0, c6, c2, 0 @ set region 2, ROM/FLASH 351 + ldr r7, =CONFIG_FLASH_SIZE @ size of FLASH (must be >= 4KB) 352 + pr_val r3, r0, r7, #1 353 + mcr p15, 0, r3, c6, c2, 0 364 354 365 355 mov r0, #0x06 366 356 mcr p15, 0, r0, c2, c0, 0 @ region 1,2 d-cacheable ··· 399 409 400 410 .align 401 411 402 - .section ".proc.info.init", #alloc, #execinstr 412 + .section ".proc.info.init", #alloc 403 413 .type __arm946_proc_info,#object 404 414 __arm946_proc_info: 405 415 .long 0x41009460 406 416 .long 0xff00fff0 407 417 .long 0 408 418 .long 0 409 - b __arm946_setup 419 + initfn __arm946_setup, __arm946_proc_info 410 420 .long cpu_arch_name 411 421 .long cpu_elf_name 412 422 .long HWCAP_SWP | HWCAP_HALF | HWCAP_THUMB
+2 -2
arch/arm/mm/proc-arm9tdmi.S
··· 70 70 71 71 .align 72 72 73 - .section ".proc.info.init", #alloc, #execinstr 73 + .section ".proc.info.init", #alloc 74 74 75 75 .macro arm9tdmi_proc_info name:req, cpu_val:req, cpu_mask:req, cpu_name:req 76 76 .type __\name\()_proc_info, #object ··· 79 79 .long \cpu_mask 80 80 .long 0 81 81 .long 0 82 - b __arm9tdmi_setup 82 + initfn __arm9tdmi_setup, __\name\()_proc_info 83 83 .long cpu_arch_name 84 84 .long cpu_elf_name 85 85 .long HWCAP_SWP | HWCAP_THUMB | HWCAP_26BIT
+2 -2
arch/arm/mm/proc-fa526.S
··· 190 190 191 191 .align 192 192 193 - .section ".proc.info.init", #alloc, #execinstr 193 + .section ".proc.info.init", #alloc 194 194 195 195 .type __fa526_proc_info,#object 196 196 __fa526_proc_info: ··· 206 206 PMD_BIT4 | \ 207 207 PMD_SECT_AP_WRITE | \ 208 208 PMD_SECT_AP_READ 209 - b __fa526_setup 209 + initfn __fa526_setup, __fa526_proc_info 210 210 .long cpu_arch_name 211 211 .long cpu_elf_name 212 212 .long HWCAP_SWP | HWCAP_HALF
+3 -2
arch/arm/mm/proc-feroceon.S
··· 584 584 585 585 .align 586 586 587 - .section ".proc.info.init", #alloc, #execinstr 587 + .section ".proc.info.init", #alloc 588 588 589 589 .macro feroceon_proc_info name:req, cpu_val:req, cpu_mask:req, cpu_name:req, cache:req 590 590 .type __\name\()_proc_info,#object ··· 601 601 PMD_BIT4 | \ 602 602 PMD_SECT_AP_WRITE | \ 603 603 PMD_SECT_AP_READ 604 - b __feroceon_setup 604 + initfn __feroceon_setup, __\name\()_proc_info 605 + .long __feroceon_setup 605 606 .long cpu_arch_name 606 607 .long cpu_elf_name 607 608 .long HWCAP_SWP|HWCAP_HALF|HWCAP_THUMB|HWCAP_FAST_MULT|HWCAP_EDSP
+28
arch/arm/mm/proc-macros.S
··· 331 331 .globl \x 332 332 .equ \x, \y 333 333 .endm 334 + 335 + .macro initfn, func, base 336 + .long \func - \base 337 + .endm 338 + 339 + /* 340 + * Macro to calculate the log2 size for the protection region 341 + * registers. This calculates rd = log2(size) - 1. tmp must 342 + * not be the same register as rd. 343 + */ 344 + .macro pr_sz, rd, size, tmp 345 + mov \tmp, \size, lsr #12 346 + mov \rd, #11 347 + 1: movs \tmp, \tmp, lsr #1 348 + addne \rd, \rd, #1 349 + bne 1b 350 + .endm 351 + 352 + /* 353 + * Macro to generate a protection region register value 354 + * given a pre-masked address, size, and enable bit. 355 + * Corrupts size. 356 + */ 357 + .macro pr_val, dest, addr, size, enable 358 + pr_sz \dest, \size, \size @ calculate log2(size) - 1 359 + orr \dest, \addr, \dest, lsl #1 @ mask in the region size 360 + orr \dest, \dest, \enable 361 + .endm
+2 -2
arch/arm/mm/proc-mohawk.S
··· 427 427 428 428 .align 429 429 430 - .section ".proc.info.init", #alloc, #execinstr 430 + .section ".proc.info.init", #alloc 431 431 432 432 .type __88sv331x_proc_info,#object 433 433 __88sv331x_proc_info: ··· 443 443 PMD_BIT4 | \ 444 444 PMD_SECT_AP_WRITE | \ 445 445 PMD_SECT_AP_READ 446 - b __mohawk_setup 446 + initfn __mohawk_setup, __88sv331x_proc_info 447 447 .long cpu_arch_name 448 448 .long cpu_elf_name 449 449 .long HWCAP_SWP|HWCAP_HALF|HWCAP_THUMB|HWCAP_FAST_MULT|HWCAP_EDSP
+2 -2
arch/arm/mm/proc-sa110.S
··· 199 199 200 200 .align 201 201 202 - .section ".proc.info.init", #alloc, #execinstr 202 + .section ".proc.info.init", #alloc 203 203 204 204 .type __sa110_proc_info,#object 205 205 __sa110_proc_info: ··· 213 213 .long PMD_TYPE_SECT | \ 214 214 PMD_SECT_AP_WRITE | \ 215 215 PMD_SECT_AP_READ 216 - b __sa110_setup 216 + initfn __sa110_setup, __sa110_proc_info 217 217 .long cpu_arch_name 218 218 .long cpu_elf_name 219 219 .long HWCAP_SWP | HWCAP_HALF | HWCAP_26BIT | HWCAP_FAST_MULT
+2 -2
arch/arm/mm/proc-sa1100.S
··· 242 242 243 243 .align 244 244 245 - .section ".proc.info.init", #alloc, #execinstr 245 + .section ".proc.info.init", #alloc 246 246 247 247 .macro sa1100_proc_info name:req, cpu_val:req, cpu_mask:req, cpu_name:req 248 248 .type __\name\()_proc_info,#object ··· 257 257 .long PMD_TYPE_SECT | \ 258 258 PMD_SECT_AP_WRITE | \ 259 259 PMD_SECT_AP_READ 260 - b __sa1100_setup 260 + initfn __sa1100_setup, __\name\()_proc_info 261 261 .long cpu_arch_name 262 262 .long cpu_elf_name 263 263 .long HWCAP_SWP | HWCAP_HALF | HWCAP_26BIT | HWCAP_FAST_MULT
+2 -2
arch/arm/mm/proc-v6.S
··· 264 264 string cpu_elf_name, "v6" 265 265 .align 266 266 267 - .section ".proc.info.init", #alloc, #execinstr 267 + .section ".proc.info.init", #alloc 268 268 269 269 /* 270 270 * Match any ARMv6 processor core. ··· 287 287 PMD_SECT_XN | \ 288 288 PMD_SECT_AP_WRITE | \ 289 289 PMD_SECT_AP_READ 290 - b __v6_setup 290 + initfn __v6_setup, __v6_proc_info 291 291 .long cpu_arch_name 292 292 .long cpu_elf_name 293 293 /* See also feat_v6_fixup() for HWCAP_TLS */
+8 -4
arch/arm/mm/proc-v7-2level.S
··· 37 37 * It is assumed that: 38 38 * - we are not using split page tables 39 39 */ 40 - ENTRY(cpu_v7_switch_mm) 40 + ENTRY(cpu_ca8_switch_mm) 41 41 #ifdef CONFIG_MMU 42 42 mov r2, #0 43 - mmid r1, r1 @ get mm->context.id 44 - ALT_SMP(orr r0, r0, #TTB_FLAGS_SMP) 45 - ALT_UP(orr r0, r0, #TTB_FLAGS_UP) 46 43 #ifdef CONFIG_ARM_ERRATA_430973 47 44 mcr p15, 0, r2, c7, c5, 6 @ flush BTAC/BTB 48 45 #endif 46 + #endif 47 + ENTRY(cpu_v7_switch_mm) 48 + #ifdef CONFIG_MMU 49 + mmid r1, r1 @ get mm->context.id 50 + ALT_SMP(orr r0, r0, #TTB_FLAGS_SMP) 51 + ALT_UP(orr r0, r0, #TTB_FLAGS_UP) 49 52 #ifdef CONFIG_PID_IN_CONTEXTIDR 50 53 mrc p15, 0, r2, c13, c0, 1 @ read current context ID 51 54 lsr r2, r2, #8 @ extract the PID ··· 64 61 #endif 65 62 bx lr 66 63 ENDPROC(cpu_v7_switch_mm) 64 + ENDPROC(cpu_ca8_switch_mm) 67 65 68 66 /* 69 67 * cpu_v7_set_pte_ext(ptep, pte)
+42 -14
arch/arm/mm/proc-v7.S
··· 153 153 #endif 154 154 155 155 /* 156 + * Cortex-A8 157 + */ 158 + globl_equ cpu_ca8_proc_init, cpu_v7_proc_init 159 + globl_equ cpu_ca8_proc_fin, cpu_v7_proc_fin 160 + globl_equ cpu_ca8_reset, cpu_v7_reset 161 + globl_equ cpu_ca8_do_idle, cpu_v7_do_idle 162 + globl_equ cpu_ca8_dcache_clean_area, cpu_v7_dcache_clean_area 163 + globl_equ cpu_ca8_set_pte_ext, cpu_v7_set_pte_ext 164 + globl_equ cpu_ca8_suspend_size, cpu_v7_suspend_size 165 + #ifdef CONFIG_ARM_CPU_SUSPEND 166 + globl_equ cpu_ca8_do_suspend, cpu_v7_do_suspend 167 + globl_equ cpu_ca8_do_resume, cpu_v7_do_resume 168 + #endif 169 + 170 + /* 156 171 * Cortex-A9 processor functions 157 172 */ 158 173 globl_equ cpu_ca9mp_proc_init, cpu_v7_proc_init ··· 466 451 467 452 @ define struct processor (see <asm/proc-fns.h> and proc-macros.S) 468 453 define_processor_functions v7, dabort=v7_early_abort, pabort=v7_pabort, suspend=1 454 + #ifndef CONFIG_ARM_LPAE 455 + define_processor_functions ca8, dabort=v7_early_abort, pabort=v7_pabort, suspend=1 469 456 define_processor_functions ca9mp, dabort=v7_early_abort, pabort=v7_pabort, suspend=1 457 + #endif 470 458 #ifdef CONFIG_CPU_PJ4B 471 459 define_processor_functions pj4b, dabort=v7_early_abort, pabort=v7_pabort, suspend=1 472 460 #endif ··· 480 462 string cpu_elf_name, "v7" 481 463 .align 482 464 483 - .section ".proc.info.init", #alloc, #execinstr 465 + .section ".proc.info.init", #alloc 484 466 485 467 /* 486 468 * Standard v7 proc info content 487 469 */ 488 - .macro __v7_proc initfunc, mm_mmuflags = 0, io_mmuflags = 0, hwcaps = 0, proc_fns = v7_processor_functions 470 + .macro __v7_proc name, initfunc, mm_mmuflags = 0, io_mmuflags = 0, hwcaps = 0, proc_fns = v7_processor_functions 489 471 ALT_SMP(.long PMD_TYPE_SECT | PMD_SECT_AP_WRITE | PMD_SECT_AP_READ | \ 490 472 PMD_SECT_AF | PMD_FLAGS_SMP | \mm_mmuflags) 491 473 ALT_UP(.long PMD_TYPE_SECT | PMD_SECT_AP_WRITE | PMD_SECT_AP_READ | \ 492 474 PMD_SECT_AF | PMD_FLAGS_UP | \mm_mmuflags) 493 475 .long 
PMD_TYPE_SECT | PMD_SECT_AP_WRITE | \ 494 476 PMD_SECT_AP_READ | PMD_SECT_AF | \io_mmuflags 495 - W(b) \initfunc 477 + initfn \initfunc, \name 496 478 .long cpu_arch_name 497 479 .long cpu_elf_name 498 480 .long HWCAP_SWP | HWCAP_HALF | HWCAP_THUMB | HWCAP_FAST_MULT | \ ··· 512 494 __v7_ca5mp_proc_info: 513 495 .long 0x410fc050 514 496 .long 0xff0ffff0 515 - __v7_proc __v7_ca5mp_setup 497 + __v7_proc __v7_ca5mp_proc_info, __v7_ca5mp_setup 516 498 .size __v7_ca5mp_proc_info, . - __v7_ca5mp_proc_info 517 499 518 500 /* ··· 522 504 __v7_ca9mp_proc_info: 523 505 .long 0x410fc090 524 506 .long 0xff0ffff0 525 - __v7_proc __v7_ca9mp_setup, proc_fns = ca9mp_processor_functions 507 + __v7_proc __v7_ca9mp_proc_info, __v7_ca9mp_setup, proc_fns = ca9mp_processor_functions 526 508 .size __v7_ca9mp_proc_info, . - __v7_ca9mp_proc_info 509 + 510 + /* 511 + * ARM Ltd. Cortex A8 processor. 512 + */ 513 + .type __v7_ca8_proc_info, #object 514 + __v7_ca8_proc_info: 515 + .long 0x410fc080 516 + .long 0xff0ffff0 517 + __v7_proc __v7_ca8_proc_info, __v7_setup, proc_fns = ca8_processor_functions 518 + .size __v7_ca8_proc_info, . - __v7_ca8_proc_info 527 519 528 520 #endif /* CONFIG_ARM_LPAE */ 529 521 ··· 545 517 __v7_pj4b_proc_info: 546 518 .long 0x560f5800 547 519 .long 0xff0fff00 548 - __v7_proc __v7_pj4b_setup, proc_fns = pj4b_processor_functions 520 + __v7_proc __v7_pj4b_proc_info, __v7_pj4b_setup, proc_fns = pj4b_processor_functions 549 521 .size __v7_pj4b_proc_info, . - __v7_pj4b_proc_info 550 522 #endif 551 523 ··· 556 528 __v7_cr7mp_proc_info: 557 529 .long 0x410fc170 558 530 .long 0xff0ffff0 559 - __v7_proc __v7_cr7mp_setup 531 + __v7_proc __v7_cr7mp_proc_info, __v7_cr7mp_setup 560 532 .size __v7_cr7mp_proc_info, . - __v7_cr7mp_proc_info 561 533 562 534 /* ··· 566 538 __v7_ca7mp_proc_info: 567 539 .long 0x410fc070 568 540 .long 0xff0ffff0 569 - __v7_proc __v7_ca7mp_setup 541 + __v7_proc __v7_ca7mp_proc_info, __v7_ca7mp_setup 570 542 .size __v7_ca7mp_proc_info, . 
- __v7_ca7mp_proc_info 571 543 572 544 /* ··· 576 548 __v7_ca12mp_proc_info: 577 549 .long 0x410fc0d0 578 550 .long 0xff0ffff0 579 - __v7_proc __v7_ca12mp_setup 551 + __v7_proc __v7_ca12mp_proc_info, __v7_ca12mp_setup 580 552 .size __v7_ca12mp_proc_info, . - __v7_ca12mp_proc_info 581 553 582 554 /* ··· 586 558 __v7_ca15mp_proc_info: 587 559 .long 0x410fc0f0 588 560 .long 0xff0ffff0 589 - __v7_proc __v7_ca15mp_setup 561 + __v7_proc __v7_ca15mp_proc_info, __v7_ca15mp_setup 590 562 .size __v7_ca15mp_proc_info, . - __v7_ca15mp_proc_info 591 563 592 564 /* ··· 596 568 __v7_b15mp_proc_info: 597 569 .long 0x420f00f0 598 570 .long 0xff0ffff0 599 - __v7_proc __v7_b15mp_setup 571 + __v7_proc __v7_b15mp_proc_info, __v7_b15mp_setup 600 572 .size __v7_b15mp_proc_info, . - __v7_b15mp_proc_info 601 573 602 574 /* ··· 606 578 __v7_ca17mp_proc_info: 607 579 .long 0x410fc0e0 608 580 .long 0xff0ffff0 609 - __v7_proc __v7_ca17mp_setup 581 + __v7_proc __v7_ca17mp_proc_info, __v7_ca17mp_setup 610 582 .size __v7_ca17mp_proc_info, . - __v7_ca17mp_proc_info 611 583 612 584 /* ··· 622 594 * do support them. They also don't indicate support for fused multiply 623 595 * instructions even though they actually do support them. 624 596 */ 625 - __v7_proc __v7_setup, hwcaps = HWCAP_IDIV | HWCAP_VFPv4 597 + __v7_proc __krait_proc_info, __v7_setup, hwcaps = HWCAP_IDIV | HWCAP_VFPv4 626 598 .size __krait_proc_info, . - __krait_proc_info 627 599 628 600 /* ··· 632 604 __v7_proc_info: 633 605 .long 0x000f0000 @ Required ID value 634 606 .long 0x000f0000 @ Mask for ID 635 - __v7_proc __v7_setup 607 + __v7_proc __v7_proc_info, __v7_setup 636 608 .size __v7_proc_info, . - __v7_proc_info
+2 -2
arch/arm/mm/proc-v7m.S
··· 135 135 string cpu_elf_name "v7m" 136 136 string cpu_v7m_name "ARMv7-M" 137 137 138 - .section ".proc.info.init", #alloc, #execinstr 138 + .section ".proc.info.init", #alloc 139 139 140 140 /* 141 141 * Match any ARMv7-M processor core. ··· 146 146 .long 0x000f0000 @ Mask for ID 147 147 .long 0 @ proc_info_list.__cpu_mm_mmu_flags 148 148 .long 0 @ proc_info_list.__cpu_io_mmu_flags 149 - b __v7m_setup @ proc_info_list.__cpu_flush 149 + initfn __v7m_setup, __v7m_proc_info @ proc_info_list.__cpu_flush 150 150 .long cpu_arch_name 151 151 .long cpu_elf_name 152 152 .long HWCAP_HALF|HWCAP_THUMB|HWCAP_FAST_MULT
+2 -2
arch/arm/mm/proc-xsc3.S
··· 499 499 500 500 .align 501 501 502 - .section ".proc.info.init", #alloc, #execinstr 502 + .section ".proc.info.init", #alloc 503 503 504 504 .macro xsc3_proc_info name:req, cpu_val:req, cpu_mask:req 505 505 .type __\name\()_proc_info,#object ··· 514 514 .long PMD_TYPE_SECT | \ 515 515 PMD_SECT_AP_WRITE | \ 516 516 PMD_SECT_AP_READ 517 - b __xsc3_setup 517 + initfn __xsc3_setup, __\name\()_proc_info 518 518 .long cpu_arch_name 519 519 .long cpu_elf_name 520 520 .long HWCAP_SWP|HWCAP_HALF|HWCAP_THUMB|HWCAP_FAST_MULT|HWCAP_EDSP
+2 -2
arch/arm/mm/proc-xscale.S
··· 612 612 613 613 .align 614 614 615 - .section ".proc.info.init", #alloc, #execinstr 615 + .section ".proc.info.init", #alloc 616 616 617 617 .macro xscale_proc_info name:req, cpu_val:req, cpu_mask:req, cpu_name:req, cache 618 618 .type __\name\()_proc_info,#object ··· 627 627 .long PMD_TYPE_SECT | \ 628 628 PMD_SECT_AP_WRITE | \ 629 629 PMD_SECT_AP_READ 630 - b __xscale_setup 630 + initfn __xscale_setup, __\name\()_proc_info 631 631 .long cpu_arch_name 632 632 .long cpu_elf_name 633 633 .long HWCAP_SWP|HWCAP_HALF|HWCAP_THUMB|HWCAP_FAST_MULT|HWCAP_EDSP
+1 -1
arch/arm/nwfpe/entry.S
··· 113 113 @ to fault. Emit the appropriate exception gunk to fix things up. 114 114 @ ??? For some reason, faults can happen at .Lx2 even with a 115 115 @ plain LDR instruction. Weird, but it seems harmless. 116 - .pushsection .fixup,"ax" 116 + .pushsection .text.fixup,"ax" 117 117 .align 2 118 118 .Lfix: ret r9 @ let the user eat segfaults 119 119 .popsection
+1
arch/arm/vdso/.gitignore
··· 1 + vdso.lds
+74
arch/arm/vdso/Makefile
··· 1 + hostprogs-y := vdsomunge 2 + 3 + obj-vdso := vgettimeofday.o datapage.o 4 + 5 + # Build rules 6 + targets := $(obj-vdso) vdso.so vdso.so.dbg vdso.so.raw vdso.lds 7 + obj-vdso := $(addprefix $(obj)/, $(obj-vdso)) 8 + 9 + ccflags-y := -shared -fPIC -fno-common -fno-builtin -fno-stack-protector 10 + ccflags-y += -nostdlib -Wl,-soname=linux-vdso.so.1 -DDISABLE_BRANCH_PROFILING 11 + ccflags-y += -Wl,--no-undefined $(call cc-ldoption, -Wl$(comma)--hash-style=sysv) 12 + 13 + obj-y += vdso.o 14 + extra-y += vdso.lds 15 + CPPFLAGS_vdso.lds += -P -C -U$(ARCH) 16 + 17 + CFLAGS_REMOVE_vdso.o = -pg 18 + 19 + # Force -O2 to avoid libgcc dependencies 20 + CFLAGS_REMOVE_vgettimeofday.o = -pg -Os 21 + CFLAGS_vgettimeofday.o = -O2 22 + 23 + # Disable gcov profiling for VDSO code 24 + GCOV_PROFILE := n 25 + 26 + # Force dependency 27 + $(obj)/vdso.o : $(obj)/vdso.so 28 + 29 + # Link rule for the .so file 30 + $(obj)/vdso.so.raw: $(src)/vdso.lds $(obj-vdso) FORCE 31 + $(call if_changed,vdsold) 32 + 33 + $(obj)/vdso.so.dbg: $(obj)/vdso.so.raw $(obj)/vdsomunge FORCE 34 + $(call if_changed,vdsomunge) 35 + 36 + # Strip rule for the .so file 37 + $(obj)/%.so: OBJCOPYFLAGS := -S 38 + $(obj)/%.so: $(obj)/%.so.dbg FORCE 39 + $(call if_changed,objcopy) 40 + 41 + # Actual build commands 42 + quiet_cmd_vdsold = VDSO $@ 43 + cmd_vdsold = $(CC) $(c_flags) -Wl,-T $(filter %.lds,$^) $(filter %.o,$^) \ 44 + $(call cc-ldoption, -Wl$(comma)--build-id) \ 45 + -Wl,-Bsymbolic -Wl,-z,max-page-size=4096 \ 46 + -Wl,-z,common-page-size=4096 -o $@ 47 + 48 + quiet_cmd_vdsomunge = MUNGE $@ 49 + cmd_vdsomunge = $(objtree)/$(obj)/vdsomunge $< $@ 50 + 51 + # 52 + # Install the unstripped copy of vdso.so.dbg. If our toolchain 53 + # supports build-id, install .build-id links as well. 54 + # 55 + # Cribbed from arch/x86/vdso/Makefile. 56 + # 57 + quiet_cmd_vdso_install = INSTALL $< 58 + define cmd_vdso_install 59 + cp $< "$(MODLIB)/vdso/vdso.so"; \ 60 + if readelf -n $< | grep -q 'Build ID'; then \ 61 + buildid=`readelf -n $< |grep 'Build ID' |sed -e 's/^.*Build ID: \(.*\)$$/\1/'`; \ 62 + first=`echo $$buildid | cut -b-2`; \ 63 + last=`echo $$buildid | cut -b3-`; \ 64 + mkdir -p "$(MODLIB)/vdso/.build-id/$$first"; \ 65 + ln -sf "../../vdso.so" "$(MODLIB)/vdso/.build-id/$$first/$$last.debug"; \ 66 + fi 67 + endef 68 + 69 + $(MODLIB)/vdso: FORCE 70 + @mkdir -p $(MODLIB)/vdso 71 + 72 + PHONY += vdso_install 73 + vdso_install: $(obj)/vdso.so.dbg $(MODLIB)/vdso FORCE 74 + $(call cmd,vdso_install)
+15
arch/arm/vdso/datapage.S
··· 1 + #include <linux/linkage.h> 2 + #include <asm/asm-offsets.h> 3 + 4 + .align 2 5 + .L_vdso_data_ptr: 6 + .long _start - . - VDSO_DATA_SIZE 7 + 8 + ENTRY(__get_datapage) 9 + .fnstart 10 + adr r0, .L_vdso_data_ptr 11 + ldr r1, [r0] 12 + add r0, r0, r1 13 + bx lr 14 + .fnend 15 + ENDPROC(__get_datapage)
+35
arch/arm/vdso/vdso.S
··· 1 + /* 2 + * Adapted from arm64 version. 3 + * 4 + * Copyright (C) 2012 ARM Limited 5 + * 6 + * This program is free software; you can redistribute it and/or modify 7 + * it under the terms of the GNU General Public License version 2 as 8 + * published by the Free Software Foundation. 9 + * 10 + * This program is distributed in the hope that it will be useful, 11 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 12 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 13 + * GNU General Public License for more details. 14 + * 15 + * You should have received a copy of the GNU General Public License 16 + * along with this program. If not, see <http://www.gnu.org/licenses/>. 17 + * 18 + * Author: Will Deacon <will.deacon@arm.com> 19 + */ 20 + 21 + #include <linux/init.h> 22 + #include <linux/linkage.h> 23 + #include <linux/const.h> 24 + #include <asm/page.h> 25 + 26 + __PAGE_ALIGNED_DATA 27 + 28 + .globl vdso_start, vdso_end 29 + .balign PAGE_SIZE 30 + vdso_start: 31 + .incbin "arch/arm/vdso/vdso.so" 32 + .balign PAGE_SIZE 33 + vdso_end: 34 + 35 + .previous
+87
arch/arm/vdso/vdso.lds.S
··· 1 + /* 2 + * Adapted from arm64 version. 3 + * 4 + * GNU linker script for the VDSO library. 5 + * 6 + * Copyright (C) 2012 ARM Limited 7 + * 8 + * This program is free software; you can redistribute it and/or modify 9 + * it under the terms of the GNU General Public License version 2 as 10 + * published by the Free Software Foundation. 11 + * 12 + * This program is distributed in the hope that it will be useful, 13 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 14 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 15 + * GNU General Public License for more details. 16 + * 17 + * You should have received a copy of the GNU General Public License 18 + * along with this program. If not, see <http://www.gnu.org/licenses/>. 19 + * 20 + * Author: Will Deacon <will.deacon@arm.com> 21 + * Heavily based on the vDSO linker scripts for other archs. 22 + */ 23 + 24 + #include <linux/const.h> 25 + #include <asm/page.h> 26 + #include <asm/vdso.h> 27 + 28 + OUTPUT_FORMAT("elf32-littlearm", "elf32-bigarm", "elf32-littlearm") 29 + OUTPUT_ARCH(arm) 30 + 31 + SECTIONS 32 + { 33 + PROVIDE(_start = .); 34 + 35 + . = SIZEOF_HEADERS; 36 + 37 + .hash : { *(.hash) } :text 38 + .gnu.hash : { *(.gnu.hash) } 39 + .dynsym : { *(.dynsym) } 40 + .dynstr : { *(.dynstr) } 41 + .gnu.version : { *(.gnu.version) } 42 + .gnu.version_d : { *(.gnu.version_d) } 43 + .gnu.version_r : { *(.gnu.version_r) } 44 + 45 + .note : { *(.note.*) } :text :note 46 + 47 + 48 + .eh_frame_hdr : { *(.eh_frame_hdr) } :text :eh_frame_hdr 49 + .eh_frame : { KEEP (*(.eh_frame)) } :text 50 + 51 + .dynamic : { *(.dynamic) } :text :dynamic 52 + 53 + .rodata : { *(.rodata*) } :text 54 + 55 + .text : { *(.text*) } :text =0xe7f001f2 56 + 57 + .got : { *(.got) } 58 + .rel.plt : { *(.rel.plt) } 59 + 60 + /DISCARD/ : { 61 + *(.note.GNU-stack) 62 + *(.data .data.* .gnu.linkonce.d.* .sdata*) 63 + *(.bss .sbss .dynbss .dynsbss) 64 + } 65 + } 66 + 67 + /* 68 + * We must supply the ELF program headers explicitly to get just one 69 + * PT_LOAD segment, and set the flags explicitly to make segments read-only. 70 + */ 71 + PHDRS 72 + { 73 + text PT_LOAD FLAGS(5) FILEHDR PHDRS; /* PF_R|PF_X */ 74 + dynamic PT_DYNAMIC FLAGS(4); /* PF_R */ 75 + note PT_NOTE FLAGS(4); /* PF_R */ 76 + eh_frame_hdr PT_GNU_EH_FRAME; 77 + } 78 + 79 + VERSION 80 + { 81 + LINUX_2.6 { 82 + global: 83 + __vdso_clock_gettime; 84 + __vdso_gettimeofday; 85 + local: *; 86 + }; 87 + }
+201
arch/arm/vdso/vdsomunge.c
··· 1 + /* 2 + * Copyright 2015 Mentor Graphics Corporation. 3 + * 4 + * This program is free software; you can redistribute it and/or 5 + * modify it under the terms of the GNU General Public License 6 + * as published by the Free Software Foundation; version 2 of the 7 + * License. 8 + * 9 + * This program is distributed in the hope that it will be useful, but 10 + * WITHOUT ANY WARRANTY; without even the implied warranty of 11 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU 12 + * General Public License for more details. 13 + * 14 + * You should have received a copy of the GNU General Public License 15 + * along with this program. If not, see <http://www.gnu.org/licenses/>. 16 + * 17 + * 18 + * vdsomunge - Host program which produces a shared object 19 + * architecturally specified to be usable by both soft- and hard-float 20 + * programs. 21 + * 22 + * The Procedure Call Standard for the ARM Architecture (ARM IHI 23 + * 0042E) says: 24 + * 25 + * 6.4.1 VFP and Base Standard Compatibility 26 + * 27 + * Code compiled for the VFP calling standard is compatible with 28 + * the base standard (and vice-versa) if no floating-point or 29 + * containerized vector arguments or results are used. 30 + * 31 + * And ELF for the ARM Architecture (ARM IHI 0044E) (Table 4-2) says: 32 + * 33 + * If both EF_ARM_ABI_FLOAT_XXXX bits are clear, conformance to the 34 + * base procedure-call standard is implied. 35 + * 36 + * The VDSO is built with -msoft-float, as with the rest of the ARM 37 + * kernel, and uses no floating point arguments or results. The build 38 + * process will produce a shared object that may or may not have the 39 + * EF_ARM_ABI_FLOAT_SOFT flag set (it seems to depend on the binutils 40 + * version; binutils starting with 2.24 appears to set it). The 41 + * EF_ARM_ABI_FLOAT_HARD flag should definitely not be set, and this 42 + * program will error out if it is. 43 + * 44 + * If the soft-float flag is set, this program clears it. That's all 45 + * it does. 46 + */ 47 + 48 + #define _GNU_SOURCE 49 + 50 + #include <byteswap.h> 51 + #include <elf.h> 52 + #include <errno.h> 53 + #include <error.h> 54 + #include <fcntl.h> 55 + #include <stdbool.h> 56 + #include <stdio.h> 57 + #include <stdlib.h> 58 + #include <string.h> 59 + #include <sys/mman.h> 60 + #include <sys/stat.h> 61 + #include <sys/types.h> 62 + #include <unistd.h> 63 + 64 + #if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__ 65 + #define HOST_ORDER ELFDATA2LSB 66 + #elif __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__ 67 + #define HOST_ORDER ELFDATA2MSB 68 + #endif 69 + 70 + /* Some of the ELF constants we'd like to use were added to <elf.h> 71 + * relatively recently. 72 + */ 73 + #ifndef EF_ARM_EABI_VER5 74 + #define EF_ARM_EABI_VER5 0x05000000 75 + #endif 76 + 77 + #ifndef EF_ARM_ABI_FLOAT_SOFT 78 + #define EF_ARM_ABI_FLOAT_SOFT 0x200 79 + #endif 80 + 81 + #ifndef EF_ARM_ABI_FLOAT_HARD 82 + #define EF_ARM_ABI_FLOAT_HARD 0x400 83 + #endif 84 + 85 + static const char *outfile; 86 + 87 + static void cleanup(void) 88 + { 89 + if (error_message_count > 0 && outfile != NULL) 90 + unlink(outfile); 91 + } 92 + 93 + static Elf32_Word read_elf_word(Elf32_Word word, bool swap) 94 + { 95 + return swap ? bswap_32(word) : word; 96 + } 97 + 98 + static Elf32_Half read_elf_half(Elf32_Half half, bool swap) 99 + { 100 + return swap ? bswap_16(half) : half; 101 + } 102 + 103 + static void write_elf_word(Elf32_Word val, Elf32_Word *dst, bool swap) 104 + { 105 + *dst = swap ? bswap_32(val) : val; 106 + } 107 + 108 + int main(int argc, char **argv) 109 + { 110 + const Elf32_Ehdr *inhdr; 111 + bool clear_soft_float; 112 + const char *infile; 113 + Elf32_Word e_flags; 114 + const void *inbuf; 115 + struct stat stat; 116 + void *outbuf; 117 + bool swap; 118 + int outfd; 119 + int infd; 120 + 121 + atexit(cleanup); 122 + 123 + if (argc != 3) 124 + error(EXIT_FAILURE, 0, "Usage: %s [infile] [outfile]", argv[0]); 125 + 126 + infile = argv[1]; 127 + outfile = argv[2]; 128 + 129 + infd = open(infile, O_RDONLY); 130 + if (infd < 0) 131 + error(EXIT_FAILURE, errno, "Cannot open %s", infile); 132 + 133 + if (fstat(infd, &stat) != 0) 134 + error(EXIT_FAILURE, errno, "Failed stat for %s", infile); 135 + 136 + inbuf = mmap(NULL, stat.st_size, PROT_READ, MAP_PRIVATE, infd, 0); 137 + if (inbuf == MAP_FAILED) 138 + error(EXIT_FAILURE, errno, "Failed to map %s", infile); 139 + 140 + close(infd); 141 + 142 + inhdr = inbuf; 143 + 144 + if (memcmp(&inhdr->e_ident, ELFMAG, SELFMAG) != 0) 145 + error(EXIT_FAILURE, 0, "Not an ELF file"); 146 + 147 + if (inhdr->e_ident[EI_CLASS] != ELFCLASS32) 148 + error(EXIT_FAILURE, 0, "Unsupported ELF class"); 149 + 150 + swap = inhdr->e_ident[EI_DATA] != HOST_ORDER; 151 + 152 + if (read_elf_half(inhdr->e_type, swap) != ET_DYN) 153 + error(EXIT_FAILURE, 0, "Not a shared object"); 154 + 155 + if (read_elf_half(inhdr->e_machine, swap) != EM_ARM) { 156 + error(EXIT_FAILURE, 0, "Unsupported architecture %#x", 157 + inhdr->e_machine); 158 + } 159 + 160 + e_flags = read_elf_word(inhdr->e_flags, swap); 161 + 162 + if (EF_ARM_EABI_VERSION(e_flags) != EF_ARM_EABI_VER5) { 163 + error(EXIT_FAILURE, 0, "Unsupported EABI version %#x", 164 + EF_ARM_EABI_VERSION(e_flags)); 165 + } 166 + 167 + if (e_flags & EF_ARM_ABI_FLOAT_HARD) 168 + error(EXIT_FAILURE, 0, 169 + "Unexpected hard-float flag set in e_flags"); 170 + 171 + clear_soft_float = !!(e_flags & EF_ARM_ABI_FLOAT_SOFT); 172 + 173 + outfd = open(outfile, O_RDWR | O_CREAT | O_TRUNC, S_IRUSR | S_IWUSR); 174 + if (outfd < 0) 175 + error(EXIT_FAILURE, errno, "Cannot open %s", outfile); 176 + 177 + if (ftruncate(outfd, stat.st_size) != 0) 178 + error(EXIT_FAILURE, errno, "Cannot truncate %s", outfile); 179 + 180 + outbuf = mmap(NULL, stat.st_size, PROT_READ | PROT_WRITE, MAP_SHARED, 181 + outfd, 0); 182 + if (outbuf == MAP_FAILED) 183 + error(EXIT_FAILURE, errno, "Failed to map %s", outfile); 184 + 185 + close(outfd); 186 + 187 + memcpy(outbuf, inbuf, stat.st_size); 188 + 189 + if (clear_soft_float) { 190 + Elf32_Ehdr *outhdr; 191 + 192 + outhdr = outbuf; 193 + e_flags &= ~EF_ARM_ABI_FLOAT_SOFT; 194 + write_elf_word(e_flags, &outhdr->e_flags, swap); 195 + } 196 + 197 + if (msync(outbuf, stat.st_size, MS_SYNC) != 0) 198 + error(EXIT_FAILURE, errno, "Failed to sync %s", outfile); 199 + 200 + return EXIT_SUCCESS; 201 + }
+282
arch/arm/vdso/vgettimeofday.c
··· 1 + /* 2 + * Copyright 2015 Mentor Graphics Corporation. 3 + * 4 + * This program is free software; you can redistribute it and/or 5 + * modify it under the terms of the GNU General Public License 6 + * as published by the Free Software Foundation; version 2 of the 7 + * License. 8 + * 9 + * This program is distributed in the hope that it will be useful, but 10 + * WITHOUT ANY WARRANTY; without even the implied warranty of 11 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU 12 + * General Public License for more details. 13 + * 14 + * You should have received a copy of the GNU General Public License 15 + * along with this program. If not, see <http://www.gnu.org/licenses/>. 16 + */ 17 + 18 + #include <linux/compiler.h> 19 + #include <linux/hrtimer.h> 20 + #include <linux/time.h> 21 + #include <asm/arch_timer.h> 22 + #include <asm/barrier.h> 23 + #include <asm/bug.h> 24 + #include <asm/page.h> 25 + #include <asm/unistd.h> 26 + #include <asm/vdso_datapage.h> 27 + 28 + #ifndef CONFIG_AEABI 29 + #error This code depends on AEABI system call conventions 30 + #endif 31 + 32 + extern struct vdso_data *__get_datapage(void); 33 + 34 + static notrace u32 __vdso_read_begin(const struct vdso_data *vdata) 35 + { 36 + u32 seq; 37 + repeat: 38 + seq = ACCESS_ONCE(vdata->seq_count); 39 + if (seq & 1) { 40 + cpu_relax(); 41 + goto repeat; 42 + } 43 + return seq; 44 + } 45 + 46 + static notrace u32 vdso_read_begin(const struct vdso_data *vdata) 47 + { 48 + u32 seq; 49 + 50 + seq = __vdso_read_begin(vdata); 51 + 52 + smp_rmb(); /* Pairs with smp_wmb in vdso_write_end */ 53 + return seq; 54 + } 55 + 56 + static notrace int vdso_read_retry(const struct vdso_data *vdata, u32 start) 57 + { 58 + smp_rmb(); /* Pairs with smp_wmb in vdso_write_begin */ 59 + return vdata->seq_count != start; 60 + } 61 + 62 + static notrace long clock_gettime_fallback(clockid_t _clkid, 63 + struct timespec *_ts) 64 + { 65 + register struct timespec *ts asm("r1") = _ts; 66 + register clockid_t clkid asm("r0") = _clkid; 67 + register long ret asm ("r0"); 68 + register long nr asm("r7") = __NR_clock_gettime; 69 + 70 + asm volatile( 71 + " swi #0\n" 72 + : "=r" (ret) 73 + : "r" (clkid), "r" (ts), "r" (nr) 74 + : "memory"); 75 + 76 + return ret; 77 + } 78 + 79 + static notrace int do_realtime_coarse(struct timespec *ts, 80 + struct vdso_data *vdata) 81 + { 82 + u32 seq; 83 + 84 + do { 85 + seq = vdso_read_begin(vdata); 86 + 87 + ts->tv_sec = vdata->xtime_coarse_sec; 88 + ts->tv_nsec = vdata->xtime_coarse_nsec; 89 + 90 + } while (vdso_read_retry(vdata, seq)); 91 + 92 + return 0; 93 + } 94 + 95 + static notrace int do_monotonic_coarse(struct timespec *ts, 96 + struct vdso_data *vdata) 97 + { 98 + struct timespec tomono; 99 + u32 seq; 100 + 101 + do { 102 + seq = vdso_read_begin(vdata); 103 + 104 + ts->tv_sec = vdata->xtime_coarse_sec; 105 + ts->tv_nsec = vdata->xtime_coarse_nsec; 106 + 107 + tomono.tv_sec = vdata->wtm_clock_sec; 108 + tomono.tv_nsec = vdata->wtm_clock_nsec; 109 + 110 + } while (vdso_read_retry(vdata, seq)); 111 + 112 + ts->tv_sec += tomono.tv_sec; 113 + timespec_add_ns(ts, tomono.tv_nsec); 114 + 115 + return 0; 116 + } 117 + 118 + #ifdef CONFIG_ARM_ARCH_TIMER 119 + 120 + static notrace u64 get_ns(struct vdso_data *vdata) 121 + { 122 + u64 cycle_delta; 123 + u64 cycle_now; 124 + u64 nsec; 125 + 126 + cycle_now = arch_counter_get_cntvct(); 127 + 128 + cycle_delta = (cycle_now - vdata->cs_cycle_last) & vdata->cs_mask; 129 + 130 + nsec = (cycle_delta * vdata->cs_mult) + vdata->xtime_clock_snsec; 131 + nsec >>= vdata->cs_shift; 132 + 133 + return nsec; 134 + } 135 + 136 + static notrace int do_realtime(struct timespec *ts, struct vdso_data *vdata) 137 + { 138 + u64 nsecs; 139 + u32 seq; 140 + 141 + do { 142 + seq = vdso_read_begin(vdata); 143 + 144 + if (!vdata->tk_is_cntvct) 145 + return -1; 146 + 147 + ts->tv_sec = vdata->xtime_clock_sec; 148 + nsecs = get_ns(vdata); 149 + 150 + } while (vdso_read_retry(vdata, seq)); 151 + 152 + ts->tv_nsec = 0; 153 + timespec_add_ns(ts, nsecs); 154 + 155 + return 0; 156 + } 157 + 158 + static notrace int do_monotonic(struct timespec *ts, struct vdso_data *vdata) 159 + { 160 + struct timespec tomono; 161 + u64 nsecs; 162 + u32 seq; 163 + 164 + do { 165 + seq = vdso_read_begin(vdata); 166 + 167 + if (!vdata->tk_is_cntvct) 168 + return -1; 169 + 170 + ts->tv_sec = vdata->xtime_clock_sec; 171 + nsecs = get_ns(vdata); 172 + 173 + tomono.tv_sec = vdata->wtm_clock_sec; 174 + tomono.tv_nsec = vdata->wtm_clock_nsec; 175 + 176 + } while (vdso_read_retry(vdata, seq)); 177 + 178 + ts->tv_sec += tomono.tv_sec; 179 + ts->tv_nsec = 0; 180 + timespec_add_ns(ts, nsecs + tomono.tv_nsec); 181 + 182 + return 0; 183 + } 184 + 185 + #else /* CONFIG_ARM_ARCH_TIMER */ 186 + 187 + static notrace int do_realtime(struct timespec *ts, struct vdso_data *vdata) 188 + { 189 + return -1; 190 + } 191 + 192 + static notrace int do_monotonic(struct timespec *ts, struct vdso_data *vdata) 193 + { 194 + return -1; 195 + } 196 + 197 + #endif /* CONFIG_ARM_ARCH_TIMER */ 198 + 199 + notrace int __vdso_clock_gettime(clockid_t clkid, struct timespec *ts) 200 + { 201 + struct vdso_data *vdata; 202 + int ret = -1; 203 + 204 + vdata = __get_datapage(); 205 + 206 + switch (clkid) { 207 + case CLOCK_REALTIME_COARSE: 208 + ret = do_realtime_coarse(ts, vdata); 209 + break; 210 + case CLOCK_MONOTONIC_COARSE: 211 + ret = do_monotonic_coarse(ts, vdata); 212 + break; 213 + case CLOCK_REALTIME: 214 + ret = do_realtime(ts, vdata); 215 + break; 216 + case CLOCK_MONOTONIC: 217 + ret = do_monotonic(ts, vdata); 218 + break; 219 + default: 220 + break; 221 + } 222 + 223 + if (ret) 224 + ret = clock_gettime_fallback(clkid, ts); 225 + 226 + return ret; 227 + } 228 + 229 + static notrace long gettimeofday_fallback(struct timeval *_tv, 230 + struct timezone *_tz) 231 + { 232 + register struct timezone *tz asm("r1") = _tz; 233 + register struct timeval *tv asm("r0") = _tv; 234 + register long ret asm ("r0"); 235 + register long nr asm("r7") = __NR_gettimeofday; 236 + 237 + asm volatile( 238 + " swi #0\n" 239 + : "=r" (ret) 240 + : "r" (tv), "r" (tz), "r" (nr) 241 + : "memory"); 242 + 243 + return ret; 244 + } 245 + 246 + notrace int __vdso_gettimeofday(struct timeval *tv, struct timezone *tz) 247 + { 248 + struct timespec ts; 249 + struct vdso_data *vdata; 250 + int ret; 251 + 252 + vdata = __get_datapage(); 253 + 254 + ret = do_realtime(&ts, vdata); 255 + if (ret) 256 + return gettimeofday_fallback(tv, tz); 257 + 258 + if (tv) { 259 + tv->tv_sec = ts.tv_sec; 260 + tv->tv_usec = ts.tv_nsec / 1000; 261 + } 262 + if (tz) { 263 + tz->tz_minuteswest = vdata->tz_minuteswest; 264 + tz->tz_dsttime = vdata->tz_dsttime; 265 + } 266 + 267 + return ret; 268 + } 269 + 270 + /* Avoid unresolved references emitted by GCC */ 271 + 272 + void __aeabi_unwind_cpp_pr0(void) 273 + { 274 + } 275 + 276 + void __aeabi_unwind_cpp_pr1(void) 277 + { 278 + } 279 + 280 + void __aeabi_unwind_cpp_pr2(void) 281 + { 282 + }
+49 -29
drivers/amba/tegra-ahb.c
··· 25 25 #include <linux/module.h> 26 26 #include <linux/platform_device.h> 27 27 #include <linux/io.h> 28 + #include <linux/of.h> 28 29 29 30 #include <soc/tegra/ahb.h> 30 31 31 32 #define DRV_NAME "tegra-ahb" 32 33 33 - #define AHB_ARBITRATION_DISABLE 0x00 34 - #define AHB_ARBITRATION_PRIORITY_CTRL 0x04 34 + #define AHB_ARBITRATION_DISABLE 0x04 35 + #define AHB_ARBITRATION_PRIORITY_CTRL 0x08 35 36 #define AHB_PRIORITY_WEIGHT(x) (((x) & 0x7) << 29) 36 37 #define PRIORITY_SELECT_USB BIT(6) 37 38 #define PRIORITY_SELECT_USB2 BIT(18) 38 39 #define PRIORITY_SELECT_USB3 BIT(17) 39 40 40 - #define AHB_GIZMO_AHB_MEM 0x0c 41 + #define AHB_GIZMO_AHB_MEM 0x10 41 42 #define ENB_FAST_REARBITRATE BIT(2) 42 43 #define DONT_SPLIT_AHB_WR BIT(7) 43 44 44 - #define AHB_GIZMO_APB_DMA 0x10 45 - #define AHB_GIZMO_IDE 0x18 46 - #define AHB_GIZMO_USB 0x1c 47 - #define AHB_GIZMO_AHB_XBAR_BRIDGE 0x20 48 - #define AHB_GIZMO_CPU_AHB_BRIDGE 0x24 49 - #define AHB_GIZMO_COP_AHB_BRIDGE 0x28 50 - #define AHB_GIZMO_XBAR_APB_CTLR 0x2c 51 - #define AHB_GIZMO_VCP_AHB_BRIDGE 0x30 52 - #define AHB_GIZMO_NAND 0x3c 53 - #define AHB_GIZMO_SDMMC4 0x44 54 - #define AHB_GIZMO_XIO 0x48 55 - #define AHB_GIZMO_BSEV 0x60 56 - #define AHB_GIZMO_BSEA 0x70 57 - #define AHB_GIZMO_NOR 0x74 58 - #define AHB_GIZMO_USB2 0x78 59 - #define AHB_GIZMO_USB3 0x7c 45 + #define AHB_GIZMO_APB_DMA 0x14 46 + #define AHB_GIZMO_IDE 0x1c 47 + #define AHB_GIZMO_USB 0x20 48 + #define AHB_GIZMO_AHB_XBAR_BRIDGE 0x24 49 + #define AHB_GIZMO_CPU_AHB_BRIDGE 0x28 50 + #define AHB_GIZMO_COP_AHB_BRIDGE 0x2c 51 + #define AHB_GIZMO_XBAR_APB_CTLR 0x30 52 + #define AHB_GIZMO_VCP_AHB_BRIDGE 0x34 53 + #define AHB_GIZMO_NAND 0x40 54 + #define AHB_GIZMO_SDMMC4 0x48 55 + #define AHB_GIZMO_XIO 0x4c 56 + #define AHB_GIZMO_BSEV 0x64 57 + #define AHB_GIZMO_BSEA 0x74 58 + #define AHB_GIZMO_NOR 0x78 59 + #define AHB_GIZMO_USB2 0x7c 60 + #define AHB_GIZMO_USB3 0x80 60 61 #define IMMEDIATE BIT(18) 61 62 62 - #define AHB_GIZMO_SDMMC1 0x80 63 - #define AHB_GIZMO_SDMMC2 0x84 64 - #define AHB_GIZMO_SDMMC3 0x88 65 - #define AHB_MEM_PREFETCH_CFG_X 0xd8 66 - #define AHB_ARBITRATION_XBAR_CTRL 0xdc 67 - #define AHB_MEM_PREFETCH_CFG3 0xe0 68 - #define AHB_MEM_PREFETCH_CFG4 0xe4 69 - #define AHB_MEM_PREFETCH_CFG1 0xec 70 - #define AHB_MEM_PREFETCH_CFG2 0xf0 63 + #define AHB_GIZMO_SDMMC1 0x84 64 + #define AHB_GIZMO_SDMMC2 0x88 65 + #define AHB_GIZMO_SDMMC3 0x8c 66 + #define AHB_MEM_PREFETCH_CFG_X 0xdc 67 + #define AHB_ARBITRATION_XBAR_CTRL 0xe0 68 + #define AHB_MEM_PREFETCH_CFG3 0xe4 69 + #define AHB_MEM_PREFETCH_CFG4 0xe8 70 + #define AHB_MEM_PREFETCH_CFG1 0xf0 71 + #define AHB_MEM_PREFETCH_CFG2 0xf4 71 72 #define PREFETCH_ENB BIT(31) 72 73 #define MST_ID(x) (((x) & 0x1f) << 26) 73 74 #define AHBDMA_MST_ID MST_ID(5) ··· 78 77 #define ADDR_BNDRY(x) (((x) & 0xf) << 21) 79 78 #define INACTIVITY_TIMEOUT(x) (((x) & 0xffff) << 0) 80 79 81 - #define AHB_ARBITRATION_AHB_MEM_WRQUE_MST_ID 0xf8 80 + #define AHB_ARBITRATION_AHB_MEM_WRQUE_MST_ID 0xfc 82 81 83 82 #define AHB_ARBITRATION_XBAR_CTRL_SMMU_INIT_DONE BIT(17) 83 + 84 + /* 85 + * INCORRECT_BASE_ADDR_LOW_BYTE: Legacy kernel DT files for Tegra SoCs 86 + * prior to Tegra124 generally use a physical base address ending in 87 + * 0x4 for the AHB IP block. According to the TRM, the low byte 88 + * should be 0x0. During device probing, this macro is used to detect 89 + * whether the passed-in physical address is incorrect, and if so, to 90 + * correct it. 91 + */ 92 + #define INCORRECT_BASE_ADDR_LOW_BYTE 0x4 84 93 85 94 static struct platform_driver tegra_ahb_driver; 86 95 ··· 268 257 return -ENOMEM; 269 258 270 259 res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 260 + 261 + /* Correct the IP block base address if necessary */ 262 + if (res && 263 + (res->start & INCORRECT_BASE_ADDR_LOW_BYTE) == 264 + INCORRECT_BASE_ADDR_LOW_BYTE) { 265 + dev_warn(&pdev->dev, "incorrect AHB base address in DT data - enabling workaround\n"); 266 + res->start -= INCORRECT_BASE_ADDR_LOW_BYTE; 267 + } 268 + 271 269 ahb->regs = devm_ioremap_resource(&pdev->dev, res); 272 270 if (IS_ERR(ahb->regs)) 273 271 return PTR_ERR(ahb->regs);
+1 -1
include/asm-generic/vmlinux.lds.h
··· 405 405 #define TEXT_TEXT \ 406 406 ALIGN_FUNCTION(); \ 407 407 *(.text.hot) \ 408 - *(.text) \ 408 + *(.text .text.fixup) \ 409 409 *(.ref.text) \ 410 410 MEM_KEEP(init.text) \ 411 411 MEM_KEEP(exit.text) \