
Merge branch 'devel-stable' of http://ftp.arm.linux.org.uk/pub/linux/arm/kernel/git-cur/linux-2.6-arm

* 'devel-stable' of http://ftp.arm.linux.org.uk/pub/linux/arm/kernel/git-cur/linux-2.6-arm: (178 commits)
ARM: 7139/1: fix compilation with CONFIG_ARM_ATAG_DTB_COMPAT and large TEXT_OFFSET
ARM: gic, local timers: use the request_percpu_irq() interface
ARM: gic: consolidate PPI handling
ARM: switch from NO_MACH_MEMORY_H to NEED_MACH_MEMORY_H
ARM: mach-s5p64x0: remove mach/memory.h
ARM: mach-s3c64xx: remove mach/memory.h
ARM: plat-mxc: remove mach/memory.h
ARM: mach-prima2: remove mach/memory.h
ARM: mach-zynq: remove mach/memory.h
ARM: mach-bcmring: remove mach/memory.h
ARM: mach-davinci: remove mach/memory.h
ARM: mach-pxa: remove mach/memory.h
ARM: mach-ixp4xx: remove mach/memory.h
ARM: mach-h720x: remove mach/memory.h
ARM: mach-vt8500: remove mach/memory.h
ARM: mach-s5pc100: remove mach/memory.h
ARM: mach-tegra: remove mach/memory.h
ARM: plat-tcc: remove mach/memory.h
ARM: mach-mmp: remove mach/memory.h
ARM: mach-cns3xxx: remove mach/memory.h
...

Fix up mostly pretty trivial conflicts in:
- arch/arm/Kconfig
- arch/arm/include/asm/localtimer.h
- arch/arm/kernel/Makefile
- arch/arm/mach-shmobile/board-ap4evb.c
- arch/arm/mach-u300/core.c
- arch/arm/mm/dma-mapping.c
- arch/arm/mm/proc-v7.S
- arch/arm/plat-omap/Kconfig
largely due to some CONFIG option renaming (i.e. CONFIG_PM_SLEEP ->
CONFIG_ARM_CPU_SUSPEND for the arm-specific suspend code, etc.) and the
addition of NEED_MACH_MEMORY_H next to HAVE_IDE.

+7579 -2900
+63
arch/arm/Kconfig
··· 29 29 select HAVE_GENERIC_HARDIRQS 30 30 select HAVE_SPARSE_IRQ 31 31 select GENERIC_IRQ_SHOW 32 + select CPU_PM if (SUSPEND || CPU_IDLE) 32 33 help 33 34 The ARM series is a line of low-power-consumption RISC chip designs 34 35 licensed by ARM Ltd and targeted at embedded applications and ··· 212 211 this feature (eg, building a kernel for a single machine) and 213 212 you need to shrink the kernel to the minimal size. 214 213 214 + config NEED_MACH_MEMORY_H 215 + bool 216 + help 217 + Select this when mach/memory.h is required to provide special 218 + definitions for this platform. The need for mach/memory.h should 219 + be avoided when possible. 220 + 221 + config PHYS_OFFSET 222 + hex "Physical address of main memory" 223 + depends on !ARM_PATCH_PHYS_VIRT && !NEED_MACH_MEMORY_H 224 + help 225 + Please provide the physical address corresponding to the 226 + location of main memory in your system. 215 227 216 228 config GENERIC_BUG 217 229 def_bool y ··· 261 247 select GENERIC_CLOCKEVENTS 262 248 select PLAT_VERSATILE 263 249 select PLAT_VERSATILE_FPGA_IRQ 250 + select NEED_MACH_MEMORY_H 264 251 help 265 252 Support for ARM's Integrator platform. 266 253 ··· 277 262 select PLAT_VERSATILE_CLCD 278 263 select ARM_TIMER_SP804 279 264 select GPIO_PL061 if GPIOLIB 265 + select NEED_MACH_MEMORY_H 280 266 help 281 267 This enables support for ARM Ltd RealView boards. 282 268 ··· 338 322 bool "Cirrus Logic CLPS711x/EP721x-based" 339 323 select CPU_ARM720T 340 324 select ARCH_USES_GETTIMEOFFSET 325 + select NEED_MACH_MEMORY_H 341 326 help 342 327 Support for Cirrus Logic 711x/721x based boards. 343 328 ··· 378 361 select ISA 379 362 select NO_IOPORT 380 363 select ARCH_USES_GETTIMEOFFSET 364 + select NEED_MACH_MEMORY_H 381 365 help 382 366 This is an evaluation board for the StrongARM processor available 383 367 from Digital. 
It has limited hardware on-board, including an ··· 394 376 select ARCH_REQUIRE_GPIOLIB 395 377 select ARCH_HAS_HOLES_MEMORYMODEL 396 378 select ARCH_USES_GETTIMEOFFSET 379 + select NEED_MACH_MEMORY_H 397 380 help 398 381 This enables support for the Cirrus EP93xx series of CPUs. 399 382 ··· 404 385 select FOOTBRIDGE 405 386 select GENERIC_CLOCKEVENTS 406 387 select HAVE_IDE 388 + select NEED_MACH_MEMORY_H 407 389 help 408 390 Support for systems based on the DC21285 companion chip 409 391 ("FootBridge"), such as the Simtec CATS and the Rebel NetWinder. ··· 454 434 select PCI 455 435 select ARCH_SUPPORTS_MSI 456 436 select VMSPLIT_1G 437 + select NEED_MACH_MEMORY_H 457 438 help 458 439 Support for Intel's IOP13XX (XScale) family of processors. 459 440 ··· 485 464 select CPU_XSC3 486 465 select PCI 487 466 select ARCH_USES_GETTIMEOFFSET 467 + select NEED_MACH_MEMORY_H 488 468 help 489 469 Support for Intel's IXP23xx (XScale) family of processors. 490 470 ··· 495 473 select CPU_XSCALE 496 474 select PCI 497 475 select ARCH_USES_GETTIMEOFFSET 476 + select NEED_MACH_MEMORY_H 498 477 help 499 478 Support for Intel's IXP2400/2800 (XScale) family of processors. 500 479 ··· 588 565 select CPU_ARM922T 589 566 select ARCH_REQUIRE_GPIOLIB 590 567 select ARCH_USES_GETTIMEOFFSET 568 + select NEED_MACH_MEMORY_H 591 569 help 592 570 Support for Micrel/Kendin KS8695 "Centaur" (ARM922T) based 593 571 System-on-Chip devices. ··· 681 657 select SPARSE_IRQ 682 658 select MULTI_IRQ_HANDLER 683 659 select PM_GENERIC_DOMAINS if PM 660 + select NEED_MACH_MEMORY_H 684 661 help 685 662 Support for Renesas's SH-Mobile and R-Mobile ARM platforms. 686 663 ··· 697 672 select ARCH_SPARSEMEM_ENABLE 698 673 select ARCH_USES_GETTIMEOFFSET 699 674 select HAVE_IDE 675 + select NEED_MACH_MEMORY_H 700 676 help 701 677 On the Acorn Risc-PC, Linux can support the internal IDE disk and 702 678 CD-ROM interface, serial and parallel port, and the floppy drive. 
··· 717 691 select TICK_ONESHOT 718 692 select ARCH_REQUIRE_GPIOLIB 719 693 select HAVE_IDE 694 + select NEED_MACH_MEMORY_H 720 695 help 721 696 Support for StrongARM 11x0 based boards. 722 697 ··· 809 782 select HAVE_S3C2410_I2C if I2C 810 783 select HAVE_S3C_RTC if RTC_CLASS 811 784 select HAVE_S3C2410_WATCHDOG if WATCHDOG 785 + select NEED_MACH_MEMORY_H 812 786 help 813 787 Samsung S5PV210/S5PC110 series based systems 814 788 ··· 826 798 select HAVE_S3C_RTC if RTC_CLASS 827 799 select HAVE_S3C2410_I2C if I2C 828 800 select HAVE_S3C2410_WATCHDOG if WATCHDOG 801 + select NEED_MACH_MEMORY_H 829 802 help 830 803 Samsung EXYNOS4 series based systems 831 804 ··· 838 809 select ZONE_DMA 839 810 select PCI 840 811 select ARCH_USES_GETTIMEOFFSET 812 + select NEED_MACH_MEMORY_H 841 813 help 842 814 Support for the StrongARM based Digital DNARD machine, also known 843 815 as "Shark" (<http://www.shark-linux.de/shark.html>). ··· 867 837 select HAVE_MACH_CLKDEV 868 838 select GENERIC_GPIO 869 839 select ARCH_REQUIRE_GPIOLIB 840 + select NEED_MACH_MEMORY_H 870 841 help 871 842 Support for ST-Ericsson U300 series mobile platforms. 872 843 ··· 1865 1834 Load image from SDHI hardware block 1866 1835 1867 1836 endchoice 1837 + 1838 + config ARM_APPENDED_DTB 1839 + bool "Use appended device tree blob to zImage (EXPERIMENTAL)" 1840 + depends on OF && !ZBOOT_ROM && EXPERIMENTAL 1841 + help 1842 + With this option, the boot code will look for a device tree binary 1843 + (DTB) appended to zImage 1844 + (e.g. cat zImage <filename>.dtb > zImage_w_dtb). 1845 + 1846 + This is meant as a backward compatibility convenience for those 1847 + systems with a bootloader that can't be upgraded to accommodate 1848 + the documented boot protocol using a device tree. 
1849 + 1850 + Beware that there is very little in terms of protection against 1851 + this option being confused by leftover garbage in memory that might 1852 + look like a DTB header after a reboot if no actual DTB is appended 1853 + to zImage. Do not leave this option active in a production kernel 1854 + if you don't intend to always append a DTB. Proper passing of the 1855 + location into r2 of a bootloader provided DTB is always preferable 1856 + to this option. 1857 + 1858 + config ARM_ATAG_DTB_COMPAT 1859 + bool "Supplement the appended DTB with traditional ATAG information" 1860 + depends on ARM_APPENDED_DTB 1861 + help 1862 + Some old bootloaders can't be updated to a DTB capable one, yet 1863 + they provide ATAGs with memory configuration, the ramdisk address, 1864 + the kernel cmdline string, etc. Such information is dynamically 1865 + provided by the bootloader and can't always be stored in a static 1866 + DTB. To allow a device tree enabled kernel to be used with such 1867 + bootloaders, this option allows zImage to extract the information 1868 + from the ATAG list and store it at run time into the appended DTB. 1868 1869 1869 1870 config CMDLINE 1870 1871 string "Default kernel command string"
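[Editor's note] The append step in the ARM_APPENDED_DTB help text above is a plain byte-level concatenation; a minimal sketch, with placeholder file names and dummy contents standing in for a real zImage and compiled DTB:

```shell
# Stand-ins for a real zImage and compiled device tree blob.
printf 'ZIMAGE-BYTES' > zImage
printf 'DTB-BYTES' > board.dtb

# The actual append step from the help text: plain concatenation.
cat zImage board.dtb > zImage_w_dtb

# The decompressor finds the DTB by checking for the FDT magic
# (0xd00dfeed, stored big-endian) at _edata, i.e. right after the image.
wc -c < zImage_w_dtb   # 21
```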
+6
arch/arm/Kconfig.debug
··· 158 158 The uncompressor code port configuration is now handled 159 159 by CONFIG_S3C_LOWLEVEL_UART_PORT. 160 160 161 + config ARM_KPROBES_TEST 162 + tristate "Kprobes test module" 163 + depends on KPROBES && MODULES 164 + help 165 + Perform tests of kprobes API and instruction set simulation. 166 + 161 167 endmenu
+9
arch/arm/boot/compressed/.gitignore
··· 5 5 piggy.lzma 6 6 vmlinux 7 7 vmlinux.lds 8 + 9 + # borrowed libfdt files 10 + fdt.c 11 + fdt.h 12 + fdt_ro.c 13 + fdt_rw.c 14 + fdt_wip.c 15 + libfdt.h 16 + libfdt_internal.h
+28 -4
arch/arm/boot/compressed/Makefile
··· 26 26 OBJS += misc.o decompress.o 27 27 FONTC = $(srctree)/drivers/video/console/font_acorn_8x8.c 28 28 29 + # string library code (-Os is enforced to keep it much smaller) 30 + OBJS += string.o 31 + CFLAGS_string.o := -Os 32 + 29 33 # 30 34 # Architecture dependencies 31 35 # ··· 93 89 suffix_$(CONFIG_KERNEL_LZO) = lzo 94 90 suffix_$(CONFIG_KERNEL_LZMA) = lzma 95 91 92 + # Borrowed libfdt files for the ATAG compatibility mode 93 + 94 + libfdt := fdt_rw.c fdt_ro.c fdt_wip.c fdt.c 95 + libfdt_hdrs := fdt.h libfdt.h libfdt_internal.h 96 + 97 + libfdt_objs := $(addsuffix .o, $(basename $(libfdt))) 98 + 99 + $(addprefix $(obj)/,$(libfdt) $(libfdt_hdrs)): $(obj)/%: $(srctree)/scripts/dtc/libfdt/% 100 + $(call cmd,shipped) 101 + 102 + $(addprefix $(obj)/,$(libfdt_objs) atags_to_fdt.o): \ 103 + $(addprefix $(obj)/,$(libfdt_hdrs)) 104 + 105 + ifeq ($(CONFIG_ARM_ATAG_DTB_COMPAT),y) 106 + OBJS += $(libfdt_objs) atags_to_fdt.o 107 + endif 108 + 96 109 targets := vmlinux vmlinux.lds \ 97 110 piggy.$(suffix_y) piggy.$(suffix_y).o \ 98 - font.o font.c head.o misc.o $(OBJS) 111 + lib1funcs.o lib1funcs.S font.o font.c head.o misc.o $(OBJS) 99 112 100 113 # Make sure files are removed during clean 101 - extra-y += piggy.gzip piggy.lzo piggy.lzma lib1funcs.S 114 + extra-y += piggy.gzip piggy.lzo piggy.lzma lib1funcs.S $(libfdt) $(libfdt_hdrs) 102 115 103 116 ifeq ($(CONFIG_FUNCTION_TRACER),y) 104 117 ORIG_CFLAGS := $(KBUILD_CFLAGS) 105 118 KBUILD_CFLAGS = $(subst -pg, , $(ORIG_CFLAGS)) 106 119 endif 107 120 108 - ccflags-y := -fpic -fno-builtin 121 + ccflags-y := -fpic -fno-builtin -I$(obj) 109 122 asflags-y := -Wa,-march=all 110 123 124 + # Supply kernel BSS size to the decompressor via a linker symbol. 125 + KBSS_SZ = $(shell size $(obj)/../../../../vmlinux | awk 'END{print $$3}') 126 + LDFLAGS_vmlinux = --defsym _kernel_bss_size=$(KBSS_SZ) 111 127 # Supply ZRELADDR to the decompressor via a linker symbol. 
112 128 ifneq ($(CONFIG_AUTO_ZRELADDR),y) 113 129 LDFLAGS_vmlinux += --defsym zreladdr=$(ZRELADDR) ··· 147 123 # For __aeabi_uidivmod 148 124 lib1funcs = $(obj)/lib1funcs.o 149 125 150 - $(obj)/lib1funcs.S: $(srctree)/arch/$(SRCARCH)/lib/lib1funcs.S FORCE 126 + $(obj)/lib1funcs.S: $(srctree)/arch/$(SRCARCH)/lib/lib1funcs.S 151 127 $(call cmd,shipped) 152 128 153 129 # We need to prevent any GOTOFF relocs being used with references
+97
arch/arm/boot/compressed/atags_to_fdt.c
··· 1 + #include <asm/setup.h> 2 + #include <libfdt.h> 3 + 4 + static int node_offset(void *fdt, const char *node_path) 5 + { 6 + int offset = fdt_path_offset(fdt, node_path); 7 + if (offset == -FDT_ERR_NOTFOUND) 8 + offset = fdt_add_subnode(fdt, 0, node_path); 9 + return offset; 10 + } 11 + 12 + static int setprop(void *fdt, const char *node_path, const char *property, 13 + uint32_t *val_array, int size) 14 + { 15 + int offset = node_offset(fdt, node_path); 16 + if (offset < 0) 17 + return offset; 18 + return fdt_setprop(fdt, offset, property, val_array, size); 19 + } 20 + 21 + static int setprop_string(void *fdt, const char *node_path, 22 + const char *property, const char *string) 23 + { 24 + int offset = node_offset(fdt, node_path); 25 + if (offset < 0) 26 + return offset; 27 + return fdt_setprop_string(fdt, offset, property, string); 28 + } 29 + 30 + static int setprop_cell(void *fdt, const char *node_path, 31 + const char *property, uint32_t val) 32 + { 33 + int offset = node_offset(fdt, node_path); 34 + if (offset < 0) 35 + return offset; 36 + return fdt_setprop_cell(fdt, offset, property, val); 37 + } 38 + 39 + /* 40 + * Convert and fold provided ATAGs into the provided FDT. 
41 + * 42 + * Return values: 43 + * = 0 -> pretend success 44 + * = 1 -> bad ATAG (may retry with another possible ATAG pointer) 45 + * < 0 -> error from libfdt 46 + */ 47 + int atags_to_fdt(void *atag_list, void *fdt, int total_space) 48 + { 49 + struct tag *atag = atag_list; 50 + uint32_t mem_reg_property[2 * NR_BANKS]; 51 + int memcount = 0; 52 + int ret; 53 + 54 + /* make sure we've got an aligned pointer */ 55 + if ((u32)atag_list & 0x3) 56 + return 1; 57 + 58 + /* if we get a DTB here we're done already */ 59 + if (*(u32 *)atag_list == fdt32_to_cpu(FDT_MAGIC)) 60 + return 0; 61 + 62 + /* validate the ATAG */ 63 + if (atag->hdr.tag != ATAG_CORE || 64 + (atag->hdr.size != tag_size(tag_core) && 65 + atag->hdr.size != 2)) 66 + return 1; 67 + 68 + /* let's give it all the room it could need */ 69 + ret = fdt_open_into(fdt, fdt, total_space); 70 + if (ret < 0) 71 + return ret; 72 + 73 + for_each_tag(atag, atag_list) { 74 + if (atag->hdr.tag == ATAG_CMDLINE) { 75 + setprop_string(fdt, "/chosen", "bootargs", 76 + atag->u.cmdline.cmdline); 77 + } else if (atag->hdr.tag == ATAG_MEM) { 78 + if (memcount >= sizeof(mem_reg_property)/4) 79 + continue; 80 + mem_reg_property[memcount++] = cpu_to_fdt32(atag->u.mem.start); 81 + mem_reg_property[memcount++] = cpu_to_fdt32(atag->u.mem.size); 82 + } else if (atag->hdr.tag == ATAG_INITRD2) { 83 + uint32_t initrd_start, initrd_size; 84 + initrd_start = atag->u.initrd.start; 85 + initrd_size = atag->u.initrd.size; 86 + setprop_cell(fdt, "/chosen", "linux,initrd-start", 87 + initrd_start); 88 + setprop_cell(fdt, "/chosen", "linux,initrd-end", 89 + initrd_start + initrd_size); 90 + } 91 + } 92 + 93 + if (memcount) 94 + setprop(fdt, "/memory", "reg", mem_reg_property, 4*memcount); 95 + 96 + return fdt_pack(fdt); 97 + }
+115 -7
arch/arm/boot/compressed/head.S
··· 216 216 mov r10, r6 217 217 #endif 218 218 219 + mov r5, #0 @ init dtb size to 0 220 + #ifdef CONFIG_ARM_APPENDED_DTB 221 + /* 222 + * r0 = delta 223 + * r2 = BSS start 224 + * r3 = BSS end 225 + * r4 = final kernel address 226 + * r5 = appended dtb size (still unknown) 227 + * r6 = _edata 228 + * r7 = architecture ID 229 + * r8 = atags/device tree pointer 230 + * r9 = size of decompressed image 231 + * r10 = end of this image, including bss/stack/malloc space if non XIP 232 + * r11 = GOT start 233 + * r12 = GOT end 234 + * sp = stack pointer 235 + * 236 + * if there are device trees (dtb) appended to zImage, advance r10 so that the 237 + * dtb data will get relocated along with the kernel if necessary. 238 + */ 239 + 240 + ldr lr, [r6, #0] 241 + #ifndef __ARMEB__ 242 + ldr r1, =0xedfe0dd0 @ sig is 0xd00dfeed big endian 243 + #else 244 + ldr r1, =0xd00dfeed 245 + #endif 246 + cmp lr, r1 247 + bne dtb_check_done @ not found 248 + 249 + #ifdef CONFIG_ARM_ATAG_DTB_COMPAT 250 + /* 251 + * OK... Let's do some funky business here. 252 + * If we do have a DTB appended to zImage, and we do have 253 + * an ATAG list around, we want the latter to be translated 254 + * and folded into the former here. To be on the safe side, 255 + * let's temporarily move the stack away into the malloc 256 + * area. No GOT fixup has occurred yet, but none of the 257 + * code we're about to call uses any global variable. 258 + */ 259 + add sp, sp, #0x10000 260 + stmfd sp!, {r0-r3, ip, lr} 261 + mov r0, r8 262 + mov r1, r6 263 + sub r2, sp, r6 264 + bl atags_to_fdt 265 + 266 + /* 267 + * If the returned value is 1, there is no ATAG at the location 268 + * pointed to by r8. Try the typical 0x100 offset from start 269 + * of RAM and hope for the best. 
270 + */ 271 + cmp r0, #1 272 + sub r0, r4, #TEXT_OFFSET 273 + add r0, r0, #0x100 274 + mov r1, r6 275 + sub r2, sp, r6 276 + blne atags_to_fdt 277 + 278 + ldmfd sp!, {r0-r3, ip, lr} 279 + sub sp, sp, #0x10000 280 + #endif 281 + 282 + mov r8, r6 @ use the appended device tree 283 + 284 + /* 285 + * Make sure that the DTB doesn't end up in the final 286 + * kernel's .bss area. To do so, we adjust the decompressed 287 + * kernel size to compensate if that .bss size is larger 288 + * than the relocated code. 289 + */ 290 + ldr r5, =_kernel_bss_size 291 + adr r1, wont_overwrite 292 + sub r1, r6, r1 293 + subs r1, r5, r1 294 + addhi r9, r9, r1 295 + 296 + /* Get the dtb's size */ 297 + ldr r5, [r6, #4] 298 + #ifndef __ARMEB__ 299 + /* convert r5 (dtb size) to little endian */ 300 + eor r1, r5, r5, ror #16 301 + bic r1, r1, #0x00ff0000 302 + mov r5, r5, ror #8 303 + eor r5, r5, r1, lsr #8 304 + #endif 305 + 306 + /* preserve 64-bit alignment */ 307 + add r5, r5, #7 308 + bic r5, r5, #7 309 + 310 + /* relocate some pointers past the appended dtb */ 311 + add r6, r6, r5 312 + add r10, r10, r5 313 + add sp, sp, r5 314 + dtb_check_done: 315 + #endif 316 + 219 317 /* 220 318 * Check to see if we will overwrite ourselves. 
221 319 * r4 = final kernel address ··· 321 223 * r10 = end of this image, including bss/stack/malloc space if non XIP 322 224 * We basically want: 323 225 * r4 - 16k page directory >= r10 -> OK 324 - * r4 + image length <= current position (pc) -> OK 226 + * r4 + image length <= address of wont_overwrite -> OK 325 227 */ 326 228 add r10, r10, #16384 327 229 cmp r4, r10 328 230 bhs wont_overwrite 329 231 add r10, r4, r9 330 - ARM( cmp r10, pc ) 331 - THUMB( mov lr, pc ) 332 - THUMB( cmp r10, lr ) 232 + adr r9, wont_overwrite 233 + cmp r10, r9 333 234 bls wont_overwrite 334 235 335 236 /* ··· 382 285 * r2 = BSS start 383 286 * r3 = BSS end 384 287 * r4 = kernel execution address 288 + * r5 = appended dtb size (0 if not present) 385 289 * r7 = architecture ID 386 290 * r8 = atags pointer 387 291 * r11 = GOT start 388 292 * r12 = GOT end 389 293 * sp = stack pointer 390 294 */ 391 - teq r0, #0 295 + orrs r1, r0, r5 392 296 beq not_relocated 297 + 393 298 add r11, r11, r0 394 299 add r12, r12, r0 395 300 ··· 406 307 407 308 /* 408 309 * Relocate all entries in the GOT table. 310 + * Bump bss entries to _edata + dtb size 409 311 */ 410 312 1: ldr r1, [r11, #0] @ relocate entries in the GOT 411 - add r1, r1, r0 @ table. This fixes up the 412 - str r1, [r11], #4 @ C references. 313 + add r1, r1, r0 @ This fixes up C references 314 + cmp r1, r2 @ if entry >= bss_start && 315 + cmphs r3, r1 @ bss_end > entry 316 + addhi r1, r1, r5 @ entry += dtb size 317 + str r1, [r11], #4 @ next entry 413 318 cmp r11, r12 414 319 blo 1b 320 + 321 + /* bump our bss pointers too */ 322 + add r2, r2, r5 323 + add r3, r3, r5 324 + 415 325 #else 416 326 417 327 /*
+15
arch/arm/boot/compressed/libfdt_env.h
··· 1 + #ifndef _ARM_LIBFDT_ENV_H 2 + #define _ARM_LIBFDT_ENV_H 3 + 4 + #include <linux/types.h> 5 + #include <linux/string.h> 6 + #include <asm/byteorder.h> 7 + 8 + #define fdt16_to_cpu(x) be16_to_cpu(x) 9 + #define cpu_to_fdt16(x) cpu_to_be16(x) 10 + #define fdt32_to_cpu(x) be32_to_cpu(x) 11 + #define cpu_to_fdt32(x) cpu_to_be32(x) 12 + #define fdt64_to_cpu(x) be64_to_cpu(x) 13 + #define cpu_to_fdt64(x) cpu_to_be64(x) 14 + 15 + #endif
+1 -41
arch/arm/boot/compressed/misc.c
··· 18 18 19 19 unsigned int __machine_arch_type; 20 20 21 - #define _LINUX_STRING_H_ 22 - 23 21 #include <linux/compiler.h> /* for inline */ 24 - #include <linux/types.h> /* for size_t */ 25 - #include <linux/stddef.h> /* for NULL */ 22 + #include <linux/types.h> 26 23 #include <linux/linkage.h> 27 - #include <asm/string.h> 28 - 29 24 30 25 static void putstr(const char *ptr); 31 26 extern void error(char *x); ··· 94 99 } 95 100 96 101 flush(); 97 - } 98 - 99 - 100 - void *memcpy(void *__dest, __const void *__src, size_t __n) 101 - { 102 - int i = 0; 103 - unsigned char *d = (unsigned char *)__dest, *s = (unsigned char *)__src; 104 - 105 - for (i = __n >> 3; i > 0; i--) { 106 - *d++ = *s++; 107 - *d++ = *s++; 108 - *d++ = *s++; 109 - *d++ = *s++; 110 - *d++ = *s++; 111 - *d++ = *s++; 112 - *d++ = *s++; 113 - *d++ = *s++; 114 - } 115 - 116 - if (__n & 1 << 2) { 117 - *d++ = *s++; 118 - *d++ = *s++; 119 - *d++ = *s++; 120 - *d++ = *s++; 121 - } 122 - 123 - if (__n & 1 << 1) { 124 - *d++ = *s++; 125 - *d++ = *s++; 126 - } 127 - 128 - if (__n & 1) 129 - *d++ = *s++; 130 - 131 - return __dest; 132 102 } 133 103 134 104 /*
+127
arch/arm/boot/compressed/string.c
··· 1 + /* 2 + * arch/arm/boot/compressed/string.c 3 + * 4 + * Small subset of simple string routines 5 + */ 6 + 7 + #include <linux/string.h> 8 + 9 + void *memcpy(void *__dest, __const void *__src, size_t __n) 10 + { 11 + int i = 0; 12 + unsigned char *d = (unsigned char *)__dest, *s = (unsigned char *)__src; 13 + 14 + for (i = __n >> 3; i > 0; i--) { 15 + *d++ = *s++; 16 + *d++ = *s++; 17 + *d++ = *s++; 18 + *d++ = *s++; 19 + *d++ = *s++; 20 + *d++ = *s++; 21 + *d++ = *s++; 22 + *d++ = *s++; 23 + } 24 + 25 + if (__n & 1 << 2) { 26 + *d++ = *s++; 27 + *d++ = *s++; 28 + *d++ = *s++; 29 + *d++ = *s++; 30 + } 31 + 32 + if (__n & 1 << 1) { 33 + *d++ = *s++; 34 + *d++ = *s++; 35 + } 36 + 37 + if (__n & 1) 38 + *d++ = *s++; 39 + 40 + return __dest; 41 + } 42 + 43 + void *memmove(void *__dest, __const void *__src, size_t count) 44 + { 45 + unsigned char *d = __dest; 46 + const unsigned char *s = __src; 47 + 48 + if (__dest == __src) 49 + return __dest; 50 + 51 + if (__dest < __src) 52 + return memcpy(__dest, __src, count); 53 + 54 + while (count--) 55 + d[count] = s[count]; 56 + return __dest; 57 + } 58 + 59 + size_t strlen(const char *s) 60 + { 61 + const char *sc = s; 62 + 63 + while (*sc != '\0') 64 + sc++; 65 + return sc - s; 66 + } 67 + 68 + int memcmp(const void *cs, const void *ct, size_t count) 69 + { 70 + const unsigned char *su1 = cs, *su2 = ct, *end = su1 + count; 71 + int res = 0; 72 + 73 + while (su1 < end) { 74 + res = *su1++ - *su2++; 75 + if (res) 76 + break; 77 + } 78 + return res; 79 + } 80 + 81 + int strcmp(const char *cs, const char *ct) 82 + { 83 + unsigned char c1, c2; 84 + int res = 0; 85 + 86 + do { 87 + c1 = *cs++; 88 + c2 = *ct++; 89 + res = c1 - c2; 90 + if (res) 91 + break; 92 + } while (c1); 93 + return res; 94 + } 95 + 96 + void *memchr(const void *s, int c, size_t count) 97 + { 98 + const unsigned char *p = s; 99 + 100 + while (count--) 101 + if ((unsigned char)c == *p++) 102 + return (void *)(p - 1); 103 + return NULL; 104 + } 105 + 106 + 
char *strchr(const char *s, int c) 107 + { 108 + while (*s != (char)c) 109 + if (*s++ == '\0') 110 + return NULL; 111 + return (char *)s; 112 + } 113 + 114 + #undef memset 115 + 116 + void *memset(void *s, int c, size_t count) 117 + { 118 + char *xs = s; 119 + while (count--) 120 + *xs++ = c; 121 + return s; 122 + } 123 + 124 + void __memzero(void *s, size_t count) 125 + { 126 + memset(s, 0, count); 127 + }
+4
arch/arm/boot/compressed/vmlinux.lds.in
··· 51 51 _got_start = .; 52 52 .got : { *(.got) } 53 53 _got_end = .; 54 + 55 + /* ensure the zImage file size is always a multiple of 64 bits */ 56 + /* (without a dummy byte, ld just ignores the empty section) */ 57 + .pad : { BYTE(0); . = ALIGN(8); } 54 58 _edata = .; 55 59 56 60 . = BSS_START;
+220 -11
arch/arm/common/gic.c
··· 26 26 #include <linux/kernel.h> 27 27 #include <linux/list.h> 28 28 #include <linux/smp.h> 29 + #include <linux/cpu_pm.h> 29 30 #include <linux/cpumask.h> 30 31 #include <linux/io.h> 32 + #include <linux/interrupt.h> 33 + #include <linux/percpu.h> 34 + #include <linux/slab.h> 31 35 32 36 #include <asm/irq.h> 33 37 #include <asm/mach/irq.h> ··· 266 262 u32 cpumask; 267 263 void __iomem *base = gic->dist_base; 268 264 u32 cpu = 0; 265 + u32 nrppis = 0, ppi_base = 0; 269 266 270 267 #ifdef CONFIG_SMP 271 268 cpu = cpu_logical_map(smp_processor_id()); ··· 286 281 gic_irqs = (gic_irqs + 1) * 32; 287 282 if (gic_irqs > 1020) 288 283 gic_irqs = 1020; 284 + 285 + gic->gic_irqs = gic_irqs; 286 + 287 + /* 288 + * Nobody would be insane enough to use PPIs on a secondary 289 + * GIC, right? 290 + */ 291 + if (gic == &gic_data[0]) { 292 + nrppis = (32 - irq_start) & 31; 293 + 294 + /* The GIC only supports up to 16 PPIs. */ 295 + if (nrppis > 16) 296 + BUG(); 297 + 298 + ppi_base = gic->irq_offset + 32 - nrppis; 299 + } 300 + 301 + pr_info("Configuring GIC with %d sources (%d PPIs)\n", 302 + gic_irqs, (gic == &gic_data[0]) ? nrppis : 0); 289 303 290 304 /* 291 305 * Set all global interrupts to be level triggered, active low. ··· 341 317 /* 342 318 * Setup the Linux IRQ subsystem. 
343 319 */ 344 - for (i = irq_start; i < irq_limit; i++) { 320 + for (i = 0; i < nrppis; i++) { 321 + int ppi = i + ppi_base; 322 + 323 + irq_set_percpu_devid(ppi); 324 + irq_set_chip_and_handler(ppi, &gic_chip, 325 + handle_percpu_devid_irq); 326 + irq_set_chip_data(ppi, gic); 327 + set_irq_flags(ppi, IRQF_VALID | IRQF_NOAUTOEN); 328 + } 329 + 330 + for (i = irq_start + nrppis; i < irq_limit; i++) { 345 331 irq_set_chip_and_handler(i, &gic_chip, handle_fasteoi_irq); 346 332 irq_set_chip_data(i, gic); 347 333 set_irq_flags(i, IRQF_VALID | IRQF_PROBE); ··· 383 349 writel_relaxed(1, base + GIC_CPU_CTRL); 384 350 } 385 351 352 + #ifdef CONFIG_CPU_PM 353 + /* 354 + * Saves the GIC distributor registers during suspend or idle. Must be called 355 + * with interrupts disabled but before powering down the GIC. After calling 356 + * this function, no interrupts will be delivered by the GIC, and another 357 + * platform-specific wakeup source must be enabled. 358 + */ 359 + static void gic_dist_save(unsigned int gic_nr) 360 + { 361 + unsigned int gic_irqs; 362 + void __iomem *dist_base; 363 + int i; 364 + 365 + if (gic_nr >= MAX_GIC_NR) 366 + BUG(); 367 + 368 + gic_irqs = gic_data[gic_nr].gic_irqs; 369 + dist_base = gic_data[gic_nr].dist_base; 370 + 371 + if (!dist_base) 372 + return; 373 + 374 + for (i = 0; i < DIV_ROUND_UP(gic_irqs, 16); i++) 375 + gic_data[gic_nr].saved_spi_conf[i] = 376 + readl_relaxed(dist_base + GIC_DIST_CONFIG + i * 4); 377 + 378 + for (i = 0; i < DIV_ROUND_UP(gic_irqs, 4); i++) 379 + gic_data[gic_nr].saved_spi_target[i] = 380 + readl_relaxed(dist_base + GIC_DIST_TARGET + i * 4); 381 + 382 + for (i = 0; i < DIV_ROUND_UP(gic_irqs, 32); i++) 383 + gic_data[gic_nr].saved_spi_enable[i] = 384 + readl_relaxed(dist_base + GIC_DIST_ENABLE_SET + i * 4); 385 + } 386 + 387 + /* 388 + * Restores the GIC distributor registers during resume or when coming out of 389 + * idle. Must be called before enabling interrupts. 
If a level interrupt 390 + * that occurred while the GIC was suspended is still present, it will be 391 + * handled normally, but any edge interrupts that occurred will not be seen by 392 + * the GIC and need to be handled by the platform-specific wakeup source. 393 + */ 394 + static void gic_dist_restore(unsigned int gic_nr) 395 + { 396 + unsigned int gic_irqs; 397 + unsigned int i; 398 + void __iomem *dist_base; 399 + 400 + if (gic_nr >= MAX_GIC_NR) 401 + BUG(); 402 + 403 + gic_irqs = gic_data[gic_nr].gic_irqs; 404 + dist_base = gic_data[gic_nr].dist_base; 405 + 406 + if (!dist_base) 407 + return; 408 + 409 + writel_relaxed(0, dist_base + GIC_DIST_CTRL); 410 + 411 + for (i = 0; i < DIV_ROUND_UP(gic_irqs, 16); i++) 412 + writel_relaxed(gic_data[gic_nr].saved_spi_conf[i], 413 + dist_base + GIC_DIST_CONFIG + i * 4); 414 + 415 + for (i = 0; i < DIV_ROUND_UP(gic_irqs, 4); i++) 416 + writel_relaxed(0xa0a0a0a0, 417 + dist_base + GIC_DIST_PRI + i * 4); 418 + 419 + for (i = 0; i < DIV_ROUND_UP(gic_irqs, 4); i++) 420 + writel_relaxed(gic_data[gic_nr].saved_spi_target[i], 421 + dist_base + GIC_DIST_TARGET + i * 4); 422 + 423 + for (i = 0; i < DIV_ROUND_UP(gic_irqs, 32); i++) 424 + writel_relaxed(gic_data[gic_nr].saved_spi_enable[i], 425 + dist_base + GIC_DIST_ENABLE_SET + i * 4); 426 + 427 + writel_relaxed(1, dist_base + GIC_DIST_CTRL); 428 + } 429 + 430 + static void gic_cpu_save(unsigned int gic_nr) 431 + { 432 + int i; 433 + u32 *ptr; 434 + void __iomem *dist_base; 435 + void __iomem *cpu_base; 436 + 437 + if (gic_nr >= MAX_GIC_NR) 438 + BUG(); 439 + 440 + dist_base = gic_data[gic_nr].dist_base; 441 + cpu_base = gic_data[gic_nr].cpu_base; 442 + 443 + if (!dist_base || !cpu_base) 444 + return; 445 + 446 + ptr = __this_cpu_ptr(gic_data[gic_nr].saved_ppi_enable); 447 + for (i = 0; i < DIV_ROUND_UP(32, 32); i++) 448 + ptr[i] = readl_relaxed(dist_base + GIC_DIST_ENABLE_SET + i * 4); 449 + 450 + ptr = __this_cpu_ptr(gic_data[gic_nr].saved_ppi_conf); 451 + for (i = 0; i < 
DIV_ROUND_UP(32, 16); i++) 452 + ptr[i] = readl_relaxed(dist_base + GIC_DIST_CONFIG + i * 4); 453 + 454 + } 455 + 456 + static void gic_cpu_restore(unsigned int gic_nr) 457 + { 458 + int i; 459 + u32 *ptr; 460 + void __iomem *dist_base; 461 + void __iomem *cpu_base; 462 + 463 + if (gic_nr >= MAX_GIC_NR) 464 + BUG(); 465 + 466 + dist_base = gic_data[gic_nr].dist_base; 467 + cpu_base = gic_data[gic_nr].cpu_base; 468 + 469 + if (!dist_base || !cpu_base) 470 + return; 471 + 472 + ptr = __this_cpu_ptr(gic_data[gic_nr].saved_ppi_enable); 473 + for (i = 0; i < DIV_ROUND_UP(32, 32); i++) 474 + writel_relaxed(ptr[i], dist_base + GIC_DIST_ENABLE_SET + i * 4); 475 + 476 + ptr = __this_cpu_ptr(gic_data[gic_nr].saved_ppi_conf); 477 + for (i = 0; i < DIV_ROUND_UP(32, 16); i++) 478 + writel_relaxed(ptr[i], dist_base + GIC_DIST_CONFIG + i * 4); 479 + 480 + for (i = 0; i < DIV_ROUND_UP(32, 4); i++) 481 + writel_relaxed(0xa0a0a0a0, dist_base + GIC_DIST_PRI + i * 4); 482 + 483 + writel_relaxed(0xf0, cpu_base + GIC_CPU_PRIMASK); 484 + writel_relaxed(1, cpu_base + GIC_CPU_CTRL); 485 + } 486 + 487 + static int gic_notifier(struct notifier_block *self, unsigned long cmd, void *v) 488 + { 489 + int i; 490 + 491 + for (i = 0; i < MAX_GIC_NR; i++) { 492 + switch (cmd) { 493 + case CPU_PM_ENTER: 494 + gic_cpu_save(i); 495 + break; 496 + case CPU_PM_ENTER_FAILED: 497 + case CPU_PM_EXIT: 498 + gic_cpu_restore(i); 499 + break; 500 + case CPU_CLUSTER_PM_ENTER: 501 + gic_dist_save(i); 502 + break; 503 + case CPU_CLUSTER_PM_ENTER_FAILED: 504 + case CPU_CLUSTER_PM_EXIT: 505 + gic_dist_restore(i); 506 + break; 507 + } 508 + } 509 + 510 + return NOTIFY_OK; 511 + } 512 + 513 + static struct notifier_block gic_notifier_block = { 514 + .notifier_call = gic_notifier, 515 + }; 516 + 517 + static void __init gic_pm_init(struct gic_chip_data *gic) 518 + { 519 + gic->saved_ppi_enable = __alloc_percpu(DIV_ROUND_UP(32, 32) * 4, 520 + sizeof(u32)); 521 + BUG_ON(!gic->saved_ppi_enable); 522 + 523 + 
gic->saved_ppi_conf = __alloc_percpu(DIV_ROUND_UP(32, 16) * 4, 524 + sizeof(u32)); 525 + BUG_ON(!gic->saved_ppi_conf); 526 + 527 + cpu_pm_register_notifier(&gic_notifier_block); 528 + } 529 + #else 530 + static void __init gic_pm_init(struct gic_chip_data *gic) 531 + { 532 + } 533 + #endif 534 + 386 535 void __init gic_init(unsigned int gic_nr, unsigned int irq_start, 387 536 void __iomem *dist_base, void __iomem *cpu_base) 388 537 { ··· 581 364 if (gic_nr == 0) 582 365 gic_cpu_base_addr = cpu_base; 583 366 367 + gic_chip.flags |= gic_arch_extn.flags; 584 368 gic_dist_init(gic, irq_start); 585 369 gic_cpu_init(gic); 370 + gic_pm_init(gic); 586 371 } 587 372 588 373 void __cpuinit gic_secondary_init(unsigned int gic_nr) ··· 592 373 BUG_ON(gic_nr >= MAX_GIC_NR); 593 374 594 375 gic_cpu_init(&gic_data[gic_nr]); 595 - } 596 - 597 - void __cpuinit gic_enable_ppi(unsigned int irq) 598 - { 599 - unsigned long flags; 600 - 601 - local_irq_save(flags); 602 - irq_set_status_flags(irq, IRQ_NOPROBE); 603 - gic_unmask_irq(irq_get_irq_data(irq)); 604 - local_irq_restore(flags); 605 376 } 606 377 607 378 #ifdef CONFIG_SMP
+7
arch/arm/include/asm/dma-mapping.h
··· 205 205 int dma_mmap_writecombine(struct device *, struct vm_area_struct *, 206 206 void *, dma_addr_t, size_t); 207 207 208 + /* 209 + * This can be called during boot to increase the size of the consistent 210 + * DMA region above its default value of 2MB. It must be called before the 211 + * memory allocator is initialised, i.e. before any core_initcall. 212 + */ 213 + extern void __init init_consistent_dma_size(unsigned long size); 214 + 208 215 209 216 #ifdef CONFIG_DMABOUNCE 210 217 /*
-7
arch/arm/include/asm/entry-macro-multi.S
··· 25 25 movne r1, sp 26 26 adrne lr, BSYM(1b) 27 27 bne do_IPI 28 - 29 - #ifdef CONFIG_LOCAL_TIMERS 30 - test_for_ltirq r0, r2, r6, lr 31 - movne r0, sp 32 - adrne lr, BSYM(1b) 33 - bne do_local_timer 34 - #endif 35 28 #endif 36 29 9997: 37 30 .endm
-3
arch/arm/include/asm/hardirq.h
··· 9 9 10 10 typedef struct { 11 11 unsigned int __softirq_pending; 12 - #ifdef CONFIG_LOCAL_TIMERS 13 - unsigned int local_timer_irqs; 14 - #endif 15 12 #ifdef CONFIG_SMP 16 13 unsigned int ipi_irqs[NR_IPI]; 17 14 #endif
+2 -17
arch/arm/include/asm/hardware/entry-macro-gic.S
··· 22 22 * interrupt controller spec. To wit: 23 23 * 24 24 * Interrupts 0-15 are IPI 25 - * 16-28 are reserved 26 - * 29-31 are local. We allow 30 to be used for the watchdog. 25 + * 16-31 are local. We allow 30 to be used for the watchdog. 27 26 * 32-1020 are global 28 27 * 1021-1022 are reserved 29 28 * 1023 is "spurious" (no interrupt) 30 - * 31 - * For now, we ignore all local interrupts so only return an interrupt if it's 32 - * between 30 and 1020. The test_for_ipi routine below will pick up on IPIs. 33 29 * 34 30 * A simple read from the controller will tell us the number of the highest 35 31 * priority enabled interrupt. We then just need to check whether it is in the ··· 39 43 40 44 ldr \tmp, =1021 41 45 bic \irqnr, \irqstat, #0x1c00 42 - cmp \irqnr, #29 46 + cmp \irqnr, #15 43 47 cmpcc \irqnr, \irqnr 44 48 cmpne \irqnr, \tmp 45 49 cmpcs \irqnr, \irqnr ··· 57 61 cmp \irqnr, #16 58 62 strcc \irqstat, [\base, #GIC_CPU_EOI] 59 63 cmpcs \irqnr, \irqnr 60 - .endm 61 - 62 - /* As above, this assumes that irqstat and base are preserved.. */ 63 - 64 - .macro test_for_ltirq, irqnr, irqstat, base, tmp 65 - bic \irqnr, \irqstat, #0x1c00 66 - mov \tmp, #0 67 - cmp \irqnr, #29 68 - moveq \tmp, #1 69 - streq \irqstat, [\base, #GIC_CPU_EOI] 70 - cmp \tmp, #0 71 64 .endm
+8 -1
arch/arm/include/asm/hardware/gic.h
··· 40 40 void gic_secondary_init(unsigned int); 41 41 void gic_cascade_irq(unsigned int gic_nr, unsigned int irq); 42 42 void gic_raise_softirq(const struct cpumask *mask, unsigned int irq); 43 - void gic_enable_ppi(unsigned int); 44 43 45 44 struct gic_chip_data { 46 45 unsigned int irq_offset; 47 46 void __iomem *dist_base; 48 47 void __iomem *cpu_base; 48 + #ifdef CONFIG_CPU_PM 49 + u32 saved_spi_enable[DIV_ROUND_UP(1020, 32)]; 50 + u32 saved_spi_conf[DIV_ROUND_UP(1020, 16)]; 51 + u32 saved_spi_target[DIV_ROUND_UP(1020, 4)]; 52 + u32 __percpu *saved_ppi_enable; 53 + u32 __percpu *saved_ppi_conf; 54 + #endif 55 + unsigned int gic_irqs; 49 56 }; 50 57 #endif 51 58
+2
arch/arm/include/asm/hw_breakpoint.h
··· 50 50 #define ARM_DEBUG_ARCH_V6_1 2 51 51 #define ARM_DEBUG_ARCH_V7_ECP14 3 52 52 #define ARM_DEBUG_ARCH_V7_MM 4 53 + #define ARM_DEBUG_ARCH_V7_1 5 53 54 54 55 /* Breakpoint */ 55 56 #define ARM_BREAKPOINT_EXECUTE 0 ··· 58 57 /* Watchpoints */ 59 58 #define ARM_BREAKPOINT_LOAD 1 60 59 #define ARM_BREAKPOINT_STORE 2 60 + #define ARM_FSR_ACCESS_MASK (1 << 11) 61 61 62 62 /* Privilege Levels */ 63 63 #define ARM_BREAKPOINT_PRIV 1
+8 -14
arch/arm/include/asm/localtimer.h
··· 11 11 #define __ASM_ARM_LOCALTIMER_H 12 12 13 13 #include <linux/errno.h> 14 + #include <linux/interrupt.h> 14 15 15 16 struct clock_event_device; 16 17 ··· 20 19 */ 21 20 void percpu_timer_setup(void); 22 21 23 - /* 24 - * Called from assembly, this is the local timer IRQ handler 25 - */ 26 - asmlinkage void do_local_timer(struct pt_regs *); 27 - 28 - /* 29 - * Called from C code 30 - */ 31 - void handle_local_timer(struct pt_regs *); 32 - 33 22 #ifdef CONFIG_LOCAL_TIMERS 34 23 35 24 #ifdef CONFIG_HAVE_ARM_TWD 36 25 37 26 #include "smp_twd.h" 38 27 39 - #define local_timer_ack() twd_timer_ack() 28 + #define local_timer_stop(c) twd_timer_stop((c)) 40 29 41 30 #else 42 31 43 32 /* 44 - * Platform provides this to acknowledge a local timer IRQ. 45 - * Returns true if the local timer IRQ is to be processed. 33 + * Stop the local timer 46 34 */ 47 - int local_timer_ack(void); 35 + void local_timer_stop(struct clock_event_device *); 48 36 49 37 #endif 50 38 ··· 47 57 static inline int local_timer_setup(struct clock_event_device *evt) 48 58 { 49 59 return -ENXIO; 60 + } 61 + 62 + static inline void local_timer_stop(struct clock_event_device *evt) 63 + { 50 64 } 51 65 #endif 52 66
+1 -1
arch/arm/include/asm/mach/arch.h
··· 17 17 struct machine_desc { 18 18 unsigned int nr; /* architecture number */ 19 19 const char *name; /* architecture name */ 20 - unsigned long boot_params; /* tagged list */ 20 + unsigned long atag_offset; /* tagged list (relative) */ 21 21 const char **dt_compat; /* array of device tree 22 22 * 'compatible' strings */ 23 23
+1
arch/arm/include/asm/mach/map.h
··· 29 29 #define MT_MEMORY_NONCACHED 11 30 30 #define MT_MEMORY_DTCM 12 31 31 #define MT_MEMORY_ITCM 13 32 + #define MT_MEMORY_SO 14 32 33 33 34 #ifdef CONFIG_MMU 34 35 extern void iotable_init(struct map_desc *, int);
+8 -10
arch/arm/include/asm/memory.h
··· 16 16 #include <linux/compiler.h> 17 17 #include <linux/const.h> 18 18 #include <linux/types.h> 19 - #include <mach/memory.h> 20 19 #include <asm/sizes.h> 20 + 21 + #ifdef CONFIG_NEED_MACH_MEMORY_H 22 + #include <mach/memory.h> 23 + #endif 21 24 22 25 /* 23 26 * Allow for constants defined here to be used from assembly code ··· 80 77 */ 81 78 #define IOREMAP_MAX_ORDER 24 82 79 83 - /* 84 - * Size of DMA-consistent memory region. Must be multiple of 2M, 85 - * between 2MB and 14MB inclusive. 86 - */ 87 - #ifndef CONSISTENT_DMA_SIZE 88 - #define CONSISTENT_DMA_SIZE SZ_2M 89 - #endif 90 - 91 80 #define CONSISTENT_END (0xffe00000UL) 92 - #define CONSISTENT_BASE (CONSISTENT_END - CONSISTENT_DMA_SIZE) 93 81 94 82 #else /* CONFIG_MMU */ 95 83 ··· 187 193 #endif 188 194 189 195 #ifndef PHYS_OFFSET 196 + #ifdef PLAT_PHYS_OFFSET 190 197 #define PHYS_OFFSET PLAT_PHYS_OFFSET 198 + #else 199 + #define PHYS_OFFSET UL(CONFIG_PHYS_OFFSET) 200 + #endif 191 201 #endif 192 202 193 203 /*
+3
arch/arm/include/asm/pgtable.h
··· 101 101 #define pgprot_writecombine(prot) \ 102 102 __pgprot_modify(prot, L_PTE_MT_MASK, L_PTE_MT_BUFFERABLE) 103 103 104 + #define pgprot_stronglyordered(prot) \ 105 + __pgprot_modify(prot, L_PTE_MT_MASK, L_PTE_MT_UNCACHED) 106 + 104 107 #ifdef CONFIG_ARM_DMA_MEM_BUFFERABLE 105 108 #define pgprot_dmacoherent(prot) \ 106 109 __pgprot_modify(prot, L_PTE_MT_MASK, L_PTE_MT_BUFFERABLE | L_PTE_XN)
+74 -19
arch/arm/include/asm/pmu.h
··· 13 13 #define __ARM_PMU_H__ 14 14 15 15 #include <linux/interrupt.h> 16 + #include <linux/perf_event.h> 16 17 18 + /* 19 + * Types of PMUs that can be accessed directly and require mutual 20 + * exclusion between profiling tools. 21 + */ 17 22 enum arm_pmu_type { 18 23 ARM_PMU_DEVICE_CPU = 0, 19 24 ARM_NUM_PMU_DEVICES, ··· 42 37 * reserve_pmu() - reserve the hardware performance counters 43 38 * 44 39 * Reserve the hardware performance counters in the system for exclusive use. 45 - * The platform_device for the system is returned on success, ERR_PTR() 46 - * encoded error on failure. 40 + * Returns 0 on success or -EBUSY if the lock is already held. 47 41 */ 48 - extern struct platform_device * 42 + extern int 49 43 reserve_pmu(enum arm_pmu_type type); 50 44 51 45 /** 52 46 * release_pmu() - Relinquish control of the performance counters 53 47 * 54 48 * Release the performance counters and allow someone else to use them. 55 - * Callers must have disabled the counters and released IRQs before calling 56 - * this. The platform_device returned from reserve_pmu() must be passed as 57 - * a cookie. 58 49 */ 59 - extern int 50 + extern void 60 51 release_pmu(enum arm_pmu_type type); 61 52 62 53 /** ··· 69 68 70 69 #include <linux/err.h> 71 70 72 - static inline struct platform_device * 71 + static inline int 73 72 reserve_pmu(enum arm_pmu_type type) 74 73 { 75 - return ERR_PTR(-ENODEV); 76 - } 77 - 78 - static inline int 79 - release_pmu(enum arm_pmu_type type) 80 - { 81 74 return -ENODEV; 82 75 } 83 76 84 - static inline int 85 - init_pmu(enum arm_pmu_type type) 86 - { 87 - return -ENODEV; 88 - } 77 + static inline void 78 + release_pmu(enum arm_pmu_type type) { } 89 79 90 80 #endif /* CONFIG_CPU_HAS_PMU */ 81 + 82 + #ifdef CONFIG_HW_PERF_EVENTS 83 + 84 + /* The events for a given PMU register set. */ 85 + struct pmu_hw_events { 86 + /* 87 + * The events that are active on the PMU for the given index. 
88 + */ 89 + struct perf_event **events; 90 + 91 + /* 92 + * A 1 bit for an index indicates that the counter is being used for 93 + * an event. A 0 means that the counter can be used. 94 + */ 95 + unsigned long *used_mask; 96 + 97 + /* 98 + * Hardware lock to serialize accesses to PMU registers. Needed for the 99 + * read/modify/write sequences. 100 + */ 101 + raw_spinlock_t pmu_lock; 102 + }; 103 + 104 + struct arm_pmu { 105 + struct pmu pmu; 106 + enum arm_perf_pmu_ids id; 107 + enum arm_pmu_type type; 108 + cpumask_t active_irqs; 109 + const char *name; 110 + irqreturn_t (*handle_irq)(int irq_num, void *dev); 111 + void (*enable)(struct hw_perf_event *evt, int idx); 112 + void (*disable)(struct hw_perf_event *evt, int idx); 113 + int (*get_event_idx)(struct pmu_hw_events *hw_events, 114 + struct hw_perf_event *hwc); 115 + int (*set_event_filter)(struct hw_perf_event *evt, 116 + struct perf_event_attr *attr); 117 + u32 (*read_counter)(int idx); 118 + void (*write_counter)(int idx, u32 val); 119 + void (*start)(void); 120 + void (*stop)(void); 121 + void (*reset)(void *); 122 + int (*map_event)(struct perf_event *event); 123 + int num_events; 124 + atomic_t active_events; 125 + struct mutex reserve_mutex; 126 + u64 max_period; 127 + struct platform_device *plat_device; 128 + struct pmu_hw_events *(*get_hw_events)(void); 129 + }; 130 + 131 + #define to_arm_pmu(p) (container_of(p, struct arm_pmu, pmu)) 132 + 133 + int __init armpmu_register(struct arm_pmu *armpmu, char *name, int type); 134 + 135 + u64 armpmu_event_update(struct perf_event *event, 136 + struct hw_perf_event *hwc, 137 + int idx, int overflow); 138 + 139 + int armpmu_event_set_period(struct perf_event *event, 140 + struct hw_perf_event *hwc, 141 + int idx); 142 + 143 + #endif /* CONFIG_HW_PERF_EVENTS */ 91 144 92 145 #endif /* __ARM_PMU_H__ */
+8
arch/arm/include/asm/proc-fns.h
··· 81 81 extern void cpu_do_switch_mm(unsigned long pgd_phys, struct mm_struct *mm); 82 82 extern void cpu_set_pte_ext(pte_t *ptep, pte_t pte, unsigned int ext); 83 83 extern void cpu_reset(unsigned long addr) __attribute__((noreturn)); 84 + 85 + /* These three are private to arch/arm/kernel/suspend.c */ 86 + extern void cpu_do_suspend(void *); 87 + extern void cpu_do_resume(void *); 84 88 #else 85 89 #define cpu_proc_init processor._proc_init 86 90 #define cpu_proc_fin processor._proc_fin ··· 93 89 #define cpu_dcache_clean_area processor.dcache_clean_area 94 90 #define cpu_set_pte_ext processor.set_pte_ext 95 91 #define cpu_do_switch_mm processor.switch_mm 92 + 93 + /* These three are private to arch/arm/kernel/suspend.c */ 94 + #define cpu_do_suspend processor.do_suspend 95 + #define cpu_do_resume processor.do_resume 96 96 #endif 97 97 98 98 extern void cpu_resume(void);
-5
arch/arm/include/asm/smp.h
··· 99 99 extern void arch_send_call_function_single_ipi(int cpu); 100 100 extern void arch_send_call_function_ipi_mask(const struct cpumask *mask); 101 101 102 - /* 103 - * show local interrupt info 104 - */ 105 - extern void show_local_irqs(struct seq_file *, int); 106 - 107 102 #endif /* ifndef __ASM_ARM_SMP_H */
+1 -1
arch/arm/include/asm/smp_twd.h
··· 22 22 23 23 extern void __iomem *twd_base; 24 24 25 - int twd_timer_ack(void); 26 25 void twd_timer_setup(struct clock_event_device *); 26 + void twd_timer_stop(struct clock_event_device *); 27 27 28 28 #endif
+1 -16
arch/arm/include/asm/suspend.h
··· 1 1 #ifndef __ASM_ARM_SUSPEND_H 2 2 #define __ASM_ARM_SUSPEND_H 3 3 4 - #include <asm/memory.h> 5 - #include <asm/tlbflush.h> 6 - 7 4 extern void cpu_resume(void); 8 - 9 - /* 10 - * Hide the first two arguments to __cpu_suspend - these are an implementation 11 - * detail which platform code shouldn't have to know about. 12 - */ 13 - static inline int cpu_suspend(unsigned long arg, int (*fn)(unsigned long)) 14 - { 15 - extern int __cpu_suspend(int, long, unsigned long, 16 - int (*)(unsigned long)); 17 - int ret = __cpu_suspend(0, PHYS_OFFSET - PAGE_OFFSET, arg, fn); 18 - flush_tlb_all(); 19 - return ret; 20 - } 5 + extern int cpu_suspend(unsigned long, int (*)(unsigned long)); 21 6 22 7 #endif
+8 -1
arch/arm/kernel/Makefile
··· 29 29 obj-$(CONFIG_ARTHUR) += arthur.o 30 30 obj-$(CONFIG_ISA_DMA) += dma-isa.o 31 31 obj-$(CONFIG_PCI) += bios32.o isa.o 32 - obj-$(CONFIG_ARM_CPU_SUSPEND) += sleep.o 32 + obj-$(CONFIG_ARM_CPU_SUSPEND) += sleep.o suspend.o 33 33 obj-$(CONFIG_HAVE_SCHED_CLOCK) += sched_clock.o 34 34 obj-$(CONFIG_SMP) += smp.o smp_tlb.o 35 35 obj-$(CONFIG_HAVE_ARM_SCU) += smp_scu.o ··· 42 42 obj-$(CONFIG_KPROBES) += kprobes-thumb.o 43 43 else 44 44 obj-$(CONFIG_KPROBES) += kprobes-arm.o 45 + endif 46 + obj-$(CONFIG_ARM_KPROBES_TEST) += test-kprobes.o 47 + test-kprobes-objs := kprobes-test.o 48 + ifdef CONFIG_THUMB2_KERNEL 49 + test-kprobes-objs += kprobes-test-thumb.o 50 + else 51 + test-kprobes-objs += kprobes-test-arm.o 45 52 endif 46 53 obj-$(CONFIG_ATAGS_PROC) += atags.o 47 54 obj-$(CONFIG_OABI_COMPAT) += sys_oabi-compat.o
+2 -2
arch/arm/kernel/debug.S
··· 22 22 #if defined(CONFIG_DEBUG_ICEDCC) 23 23 @@ debug using ARM EmbeddedICE DCC channel 24 24 25 - .macro addruart, rp, rv 25 + .macro addruart, rp, rv, tmp 26 26 .endm 27 27 28 28 #if defined(CONFIG_CPU_V6) || defined(CONFIG_CPU_V6K) || defined(CONFIG_CPU_V7) ··· 106 106 107 107 #ifdef CONFIG_MMU 108 108 .macro addruart_current, rx, tmp1, tmp2 109 - addruart \tmp1, \tmp2 109 + addruart \tmp1, \tmp2, \rx 110 110 mrc p15, 0, \rx, c1, c0 111 111 tst \rx, #1 112 112 moveq \rx, \tmp1
+2 -2
arch/arm/kernel/head.S
··· 99 99 sub r4, r3, r4 @ (PHYS_OFFSET - PAGE_OFFSET) 100 100 add r8, r8, r4 @ PHYS_OFFSET 101 101 #else 102 - ldr r8, =PLAT_PHYS_OFFSET 102 + ldr r8, =PHYS_OFFSET @ always constant in this case 103 103 #endif 104 104 105 105 /* ··· 238 238 * This allows debug messages to be output 239 239 * via a serial console before paging_init. 240 240 */ 241 - addruart r7, r3 241 + addruart r7, r3, r0 242 242 243 243 mov r3, r3, lsr #SECTION_SHIFT 244 244 mov r3, r3, lsl #PMD_ORDER
+172 -107
arch/arm/kernel/hw_breakpoint.c
··· 45 45 46 46 /* Number of BRP/WRP registers on this CPU. */ 47 47 static int core_num_brps; 48 - static int core_num_reserved_brps; 49 48 static int core_num_wrps; 50 49 51 50 /* Debug architecture version. */ ··· 136 137 u32 didr; 137 138 138 139 /* Do we implement the extended CPUID interface? */ 139 - if (WARN_ONCE((((read_cpuid_id() >> 16) & 0xf) != 0xf), 140 - "CPUID feature registers not supported. " 141 - "Assuming v6 debug is present.\n")) 140 + if (((read_cpuid_id() >> 16) & 0xf) != 0xf) { 141 + pr_warning("CPUID feature registers not supported. " 142 + "Assuming v6 debug is present.\n"); 142 143 return ARM_DEBUG_ARCH_V6; 144 + } 143 145 144 146 ARM_DBG_READ(c0, 0, didr); 145 147 return (didr >> 16) & 0xf; ··· 154 154 static int debug_arch_supported(void) 155 155 { 156 156 u8 arch = get_debug_arch(); 157 - return arch >= ARM_DEBUG_ARCH_V6 && arch <= ARM_DEBUG_ARCH_V7_ECP14; 157 + 158 + /* We don't support the memory-mapped interface. */ 159 + return (arch >= ARM_DEBUG_ARCH_V6 && arch <= ARM_DEBUG_ARCH_V7_ECP14) || 160 + arch >= ARM_DEBUG_ARCH_V7_1; 158 161 } 159 162 160 - /* Determine number of BRP register available. */ 163 + /* Determine number of WRP registers available. */ 164 + static int get_num_wrp_resources(void) 165 + { 166 + u32 didr; 167 + ARM_DBG_READ(c0, 0, didr); 168 + return ((didr >> 28) & 0xf) + 1; 169 + } 170 + 171 + /* Determine number of BRP registers available. */ 161 172 static int get_num_brp_resources(void) 162 173 { 163 174 u32 didr; ··· 187 176 static int get_num_wrps(void) 188 177 { 189 178 /* 190 - * FIXME: When a watchpoint fires, the only way to work out which 191 - * watchpoint it was is by disassembling the faulting instruction 192 - * and working out the address of the memory access. 179 + * On debug architectures prior to 7.1, when a watchpoint fires, the 180 + * only way to work out which watchpoint it was is by disassembling 181 + * the faulting instruction and working out the address of the memory 182 + * access. 
193 183 * 194 184 * Furthermore, we can only do this if the watchpoint was precise 195 185 * since imprecise watchpoints prevent us from calculating register ··· 204 192 * [the ARM ARM states that the DFAR is UNKNOWN, but experience shows 205 193 * that it is set on some implementations]. 206 194 */ 195 + if (get_debug_arch() < ARM_DEBUG_ARCH_V7_1) 196 + return 1; 207 197 208 - #if 0 209 - int wrps; 210 - u32 didr; 211 - ARM_DBG_READ(c0, 0, didr); 212 - wrps = ((didr >> 28) & 0xf) + 1; 213 - #endif 214 - int wrps = 1; 215 - 216 - if (core_has_mismatch_brps() && wrps >= get_num_brp_resources()) 217 - wrps = get_num_brp_resources() - 1; 218 - 219 - return wrps; 220 - } 221 - 222 - /* We reserve one breakpoint for each watchpoint. */ 223 - static int get_num_reserved_brps(void) 224 - { 225 - if (core_has_mismatch_brps()) 226 - return get_num_wrps(); 227 - return 0; 198 + return get_num_wrp_resources(); 228 199 } 229 200 230 201 /* Determine number of usable BRPs available. */ 231 202 static int get_num_brps(void) 232 203 { 233 204 int brps = get_num_brp_resources(); 234 - if (core_has_mismatch_brps()) 235 - brps -= get_num_reserved_brps(); 236 - return brps; 205 + return core_has_mismatch_brps() ? brps - 1 : brps; 237 206 } 238 207 239 208 /* ··· 232 239 233 240 /* Ensure that halting mode is disabled. */ 234 241 if (WARN_ONCE(dscr & ARM_DSCR_HDBGEN, 235 - "halting debug mode enabled. Unable to access hardware resources.\n")) { 242 + "halting debug mode enabled. 
Unable to access hardware resources.\n")) { 236 243 ret = -EPERM; 237 244 goto out; 238 245 } ··· 248 255 ARM_DBG_WRITE(c1, 0, (dscr | ARM_DSCR_MDBGEN)); 249 256 break; 250 257 case ARM_DEBUG_ARCH_V7_ECP14: 258 + case ARM_DEBUG_ARCH_V7_1: 251 259 ARM_DBG_WRITE(c2, 2, (dscr | ARM_DSCR_MDBGEN)); 252 260 break; 253 261 default: ··· 340 346 val_base = ARM_BASE_BVR; 341 347 slots = (struct perf_event **)__get_cpu_var(bp_on_reg); 342 348 max_slots = core_num_brps; 343 - if (info->step_ctrl.enabled) { 344 - /* Override the breakpoint data with the step data. */ 345 - addr = info->trigger & ~0x3; 346 - ctrl = encode_ctrl_reg(info->step_ctrl); 347 - } 348 349 } else { 349 350 /* Watchpoint */ 350 - if (info->step_ctrl.enabled) { 351 - /* Install into the reserved breakpoint region. */ 352 - ctrl_base = ARM_BASE_BCR + core_num_brps; 353 - val_base = ARM_BASE_BVR + core_num_brps; 354 - /* Override the watchpoint data with the step data. */ 355 - addr = info->trigger & ~0x3; 356 - ctrl = encode_ctrl_reg(info->step_ctrl); 357 - } else { 358 - ctrl_base = ARM_BASE_WCR; 359 - val_base = ARM_BASE_WVR; 360 - } 351 + ctrl_base = ARM_BASE_WCR; 352 + val_base = ARM_BASE_WVR; 361 353 slots = (struct perf_event **)__get_cpu_var(wp_on_reg); 362 354 max_slots = core_num_wrps; 363 355 } ··· 360 380 if (WARN_ONCE(i == max_slots, "Can't find any breakpoint slot\n")) { 361 381 ret = -EBUSY; 362 382 goto out; 383 + } 384 + 385 + /* Override the breakpoint data with the step data. */ 386 + if (info->step_ctrl.enabled) { 387 + addr = info->trigger & ~0x3; 388 + ctrl = encode_ctrl_reg(info->step_ctrl); 389 + if (info->ctrl.type != ARM_BREAKPOINT_EXECUTE) { 390 + i = 0; 391 + ctrl_base = ARM_BASE_BCR + core_num_brps; 392 + val_base = ARM_BASE_BVR + core_num_brps; 393 + } 363 394 } 364 395 365 396 /* Setup the address register. 
*/ ··· 396 405 max_slots = core_num_brps; 397 406 } else { 398 407 /* Watchpoint */ 399 - if (info->step_ctrl.enabled) 400 - base = ARM_BASE_BCR + core_num_brps; 401 - else 402 - base = ARM_BASE_WCR; 408 + base = ARM_BASE_WCR; 403 409 slots = (struct perf_event **)__get_cpu_var(wp_on_reg); 404 410 max_slots = core_num_wrps; 405 411 } ··· 413 425 414 426 if (WARN_ONCE(i == max_slots, "Can't find any breakpoint slot\n")) 415 427 return; 428 + 429 + /* Ensure that we disable the mismatch breakpoint. */ 430 + if (info->ctrl.type != ARM_BREAKPOINT_EXECUTE && 431 + info->step_ctrl.enabled) { 432 + i = 0; 433 + base = ARM_BASE_BCR + core_num_brps; 434 + } 416 435 417 436 /* Reset the control register. */ 418 437 write_wb_reg(base + i, 0); ··· 627 632 * we can use the mismatch feature as a poor-man's hardware 628 633 * single-step, but this only works for per-task breakpoints. 629 634 */ 630 - if (WARN_ONCE(!bp->overflow_handler && 631 - (arch_check_bp_in_kernelspace(bp) || !core_has_mismatch_brps() 632 - || !bp->hw.bp_target), 633 - "overflow handler required but none found\n")) { 635 + if (!bp->overflow_handler && (arch_check_bp_in_kernelspace(bp) || 636 + !core_has_mismatch_brps() || !bp->hw.bp_target)) { 637 + pr_warning("overflow handler required but none found\n"); 634 638 ret = -EINVAL; 635 639 } 636 640 out: ··· 660 666 arch_install_hw_breakpoint(bp); 661 667 } 662 668 663 - static void watchpoint_handler(unsigned long unknown, struct pt_regs *regs) 669 + static void watchpoint_handler(unsigned long addr, unsigned int fsr, 670 + struct pt_regs *regs) 664 671 { 665 - int i; 672 + int i, access; 673 + u32 val, ctrl_reg, alignment_mask; 666 674 struct perf_event *wp, **slots; 667 675 struct arch_hw_breakpoint *info; 676 + struct arch_hw_breakpoint_ctrl ctrl; 668 677 669 678 slots = (struct perf_event **)__get_cpu_var(wp_on_reg); 670 - 671 - /* Without a disassembler, we can only handle 1 watchpoint. 
*/ 672 - BUG_ON(core_num_wrps > 1); 673 679 674 680 for (i = 0; i < core_num_wrps; ++i) { 675 681 rcu_read_lock(); 676 682 677 683 wp = slots[i]; 678 684 679 - if (wp == NULL) { 680 - rcu_read_unlock(); 681 - continue; 685 + if (wp == NULL) 686 + goto unlock; 687 + 688 + info = counter_arch_bp(wp); 689 + /* 690 + * The DFAR is an unknown value on debug architectures prior 691 + * to 7.1. Since we only allow a single watchpoint on these 692 + * older CPUs, we can set the trigger to the lowest possible 693 + * faulting address. 694 + */ 695 + if (debug_arch < ARM_DEBUG_ARCH_V7_1) { 696 + BUG_ON(i > 0); 697 + info->trigger = wp->attr.bp_addr; 698 + } else { 699 + if (info->ctrl.len == ARM_BREAKPOINT_LEN_8) 700 + alignment_mask = 0x7; 701 + else 702 + alignment_mask = 0x3; 703 + 704 + /* Check if the watchpoint value matches. */ 705 + val = read_wb_reg(ARM_BASE_WVR + i); 706 + if (val != (addr & ~alignment_mask)) 707 + goto unlock; 708 + 709 + /* Possible match, check the byte address select. */ 710 + ctrl_reg = read_wb_reg(ARM_BASE_WCR + i); 711 + decode_ctrl_reg(ctrl_reg, &ctrl); 712 + if (!((1 << (addr & alignment_mask)) & ctrl.len)) 713 + goto unlock; 714 + 715 + /* Check that the access type matches. */ 716 + access = (fsr & ARM_FSR_ACCESS_MASK) ? HW_BREAKPOINT_W : 717 + HW_BREAKPOINT_R; 718 + if (!(access & hw_breakpoint_type(wp))) 719 + goto unlock; 720 + 721 + /* We have a winner. */ 722 + info->trigger = addr; 682 723 } 683 724 684 - /* 685 - * The DFAR is an unknown value. Since we only allow a 686 - * single watchpoint, we can set the trigger to the lowest 687 - * possible faulting address. 
688 - */ 689 - info = counter_arch_bp(wp); 690 - info->trigger = wp->attr.bp_addr; 691 725 pr_debug("watchpoint fired: address = 0x%x\n", info->trigger); 692 726 perf_bp_event(wp, regs); 693 727 ··· 727 705 if (!wp->overflow_handler) 728 706 enable_single_step(wp, instruction_pointer(regs)); 729 707 708 + unlock: 730 709 rcu_read_unlock(); 731 710 } 732 711 } ··· 740 717 741 718 slots = (struct perf_event **)__get_cpu_var(wp_on_reg); 742 719 743 - for (i = 0; i < core_num_reserved_brps; ++i) { 720 + for (i = 0; i < core_num_wrps; ++i) { 744 721 rcu_read_lock(); 745 722 746 723 wp = slots[i]; ··· 843 820 case ARM_ENTRY_ASYNC_WATCHPOINT: 844 821 WARN(1, "Asynchronous watchpoint exception taken. Debugging results may be unreliable\n"); 845 822 case ARM_ENTRY_SYNC_WATCHPOINT: 846 - watchpoint_handler(addr, regs); 823 + watchpoint_handler(addr, fsr, regs); 847 824 break; 848 825 default: 849 826 ret = 1; /* Unhandled fault. */ ··· 857 834 /* 858 835 * One-time initialisation. 859 836 */ 860 - static void reset_ctrl_regs(void *info) 837 + static cpumask_t debug_err_mask; 838 + 839 + static int debug_reg_trap(struct pt_regs *regs, unsigned int instr) 861 840 { 862 - int i, cpu = smp_processor_id(); 841 + int cpu = smp_processor_id(); 842 + 843 + pr_warning("Debug register access (0x%x) caused undefined instruction on CPU %d\n", 844 + instr, cpu); 845 + 846 + /* Set the error flag for this CPU and skip the faulting instruction. 
*/ 847 + cpumask_set_cpu(cpu, &debug_err_mask); 848 + instruction_pointer(regs) += 4; 849 + return 0; 850 + } 851 + 852 + static struct undef_hook debug_reg_hook = { 853 + .instr_mask = 0x0fe80f10, 854 + .instr_val = 0x0e000e10, 855 + .fn = debug_reg_trap, 856 + }; 857 + 858 + static void reset_ctrl_regs(void *unused) 859 + { 860 + int i, raw_num_brps, err = 0, cpu = smp_processor_id(); 863 861 u32 dbg_power; 864 - cpumask_t *cpumask = info; 865 862 866 863 /* 867 864 * v7 debug contains save and restore registers so that debug state ··· 891 848 * Access Register to avoid taking undefined instruction exceptions 892 849 * later on. 893 850 */ 894 - if (debug_arch >= ARM_DEBUG_ARCH_V7_ECP14) { 851 + switch (debug_arch) { 852 + case ARM_DEBUG_ARCH_V6: 853 + case ARM_DEBUG_ARCH_V6_1: 854 + /* ARMv6 cores just need to reset the registers. */ 855 + goto reset_regs; 856 + case ARM_DEBUG_ARCH_V7_ECP14: 895 857 /* 896 858 * Ensure sticky power-down is clear (i.e. debug logic is 897 859 * powered up). 898 860 */ 899 861 asm volatile("mrc p14, 0, %0, c1, c5, 4" : "=r" (dbg_power)); 900 - if ((dbg_power & 0x1) == 0) { 901 - pr_warning("CPU %d debug is powered down!\n", cpu); 902 - cpumask_or(cpumask, cpumask, cpumask_of(cpu)); 903 - return; 904 - } 905 - 862 + if ((dbg_power & 0x1) == 0) 863 + err = -EPERM; 864 + break; 865 + case ARM_DEBUG_ARCH_V7_1: 906 866 /* 907 - * Unconditionally clear the lock by writing a value 908 - * other than 0xC5ACCE55 to the access register. 867 + * Ensure the OS double lock is clear. 909 868 */ 910 - asm volatile("mcr p14, 0, %0, c1, c0, 4" : : "r" (0)); 911 - isb(); 912 - 913 - /* 914 - * Clear any configured vector-catch events before 915 - * enabling monitor mode. 
916 - */ 917 - asm volatile("mcr p14, 0, %0, c0, c7, 0" : : "r" (0)); 918 - isb(); 869 + asm volatile("mrc p14, 0, %0, c1, c3, 4" : "=r" (dbg_power)); 870 + if ((dbg_power & 0x1) == 1) 871 + err = -EPERM; 872 + break; 919 873 } 920 874 875 + if (err) { 876 + pr_warning("CPU %d debug is powered down!\n", cpu); 877 + cpumask_or(&debug_err_mask, &debug_err_mask, cpumask_of(cpu)); 878 + return; 879 + } 880 + 881 + /* 882 + * Unconditionally clear the lock by writing a value 883 + * other than 0xC5ACCE55 to the access register. 884 + */ 885 + asm volatile("mcr p14, 0, %0, c1, c0, 4" : : "r" (0)); 886 + isb(); 887 + 888 + /* 889 + * Clear any configured vector-catch events before 890 + * enabling monitor mode. 891 + */ 892 + asm volatile("mcr p14, 0, %0, c0, c7, 0" : : "r" (0)); 893 + isb(); 894 + 895 + reset_regs: 921 896 if (enable_monitor_mode()) 922 897 return; 923 898 924 899 /* We must also reset any reserved registers. */ 925 - for (i = 0; i < core_num_brps + core_num_reserved_brps; ++i) { 900 + raw_num_brps = get_num_brp_resources(); 901 + for (i = 0; i < raw_num_brps; ++i) { 926 902 write_wb_reg(ARM_BASE_BCR + i, 0UL); 927 903 write_wb_reg(ARM_BASE_BVR + i, 0UL); 928 904 } ··· 957 895 { 958 896 if (action == CPU_ONLINE) 959 897 smp_call_function_single((int)cpu, reset_ctrl_regs, NULL, 1); 898 + 960 899 return NOTIFY_OK; 961 900 } 962 901 ··· 968 905 static int __init arch_hw_breakpoint_init(void) 969 906 { 970 907 u32 dscr; 971 - cpumask_t cpumask = { CPU_BITS_NONE }; 972 908 973 909 debug_arch = get_debug_arch(); 974 910 ··· 978 916 979 917 /* Determine how many BRPs/WRPs are available. 
*/ 980 918 core_num_brps = get_num_brps(); 981 - core_num_reserved_brps = get_num_reserved_brps(); 982 919 core_num_wrps = get_num_wrps(); 983 920 984 - pr_info("found %d breakpoint and %d watchpoint registers.\n", 985 - core_num_brps + core_num_reserved_brps, core_num_wrps); 986 - 987 - if (core_num_reserved_brps) 988 - pr_info("%d breakpoint(s) reserved for watchpoint " 989 - "single-step.\n", core_num_reserved_brps); 921 + /* 922 + * We need to tread carefully here because DBGSWENABLE may be 923 + * driven low on this core and there isn't an architected way to 924 + * determine that. 925 + */ 926 + register_undef_hook(&debug_reg_hook); 990 927 991 928 /* 992 929 * Reset the breakpoint resources. We assume that a halting 993 930 * debugger will leave the world in a nice state for us. 994 931 */ 995 - on_each_cpu(reset_ctrl_regs, &cpumask, 1); 996 - if (!cpumask_empty(&cpumask)) { 932 + on_each_cpu(reset_ctrl_regs, NULL, 1); 933 + unregister_undef_hook(&debug_reg_hook); 934 + if (!cpumask_empty(&debug_err_mask)) { 997 935 core_num_brps = 0; 998 - core_num_reserved_brps = 0; 999 936 core_num_wrps = 0; 1000 937 return 0; 1001 938 } 939 + 940 + pr_info("found %d " "%s" "breakpoint and %d watchpoint registers.\n", 941 + core_num_brps, core_has_mismatch_brps() ? "(+1 reserved) " : 942 + "", core_num_wrps); 1002 943 1003 944 ARM_DBG_READ(c1, 0, dscr); 1004 945 if (dscr & ARM_DSCR_HDBGEN) {
-3
arch/arm/kernel/irq.c
··· 59 59 #ifdef CONFIG_SMP 60 60 show_ipi_list(p, prec); 61 61 #endif 62 - #ifdef CONFIG_LOCAL_TIMERS 63 - show_local_irqs(p, prec); 64 - #endif 65 62 seq_printf(p, "%*s: %10lu\n", prec, "Err", irq_err_count); 66 63 return 0; 67 64 }
+4
arch/arm/kernel/kprobes-arm.c
··· 60 60 61 61 #include <linux/kernel.h> 62 62 #include <linux/kprobes.h> 63 + #include <linux/module.h> 63 64 64 65 #include "kprobes.h" 65 66 ··· 972 971 973 972 DECODE_END 974 973 }; 974 + #ifdef CONFIG_ARM_KPROBES_TEST_MODULE 975 + EXPORT_SYMBOL_GPL(kprobe_decode_arm_table); 976 + #endif 975 977 976 978 static void __kprobes arm_singlestep(struct kprobe *p, struct pt_regs *regs) 977 979 {
+1323
arch/arm/kernel/kprobes-test-arm.c
/*
 * arch/arm/kernel/kprobes-test-arm.c
 *
 * Copyright (C) 2011 Jon Medhurst <tixy@yxit.co.uk>.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 */

#include <linux/kernel.h>
#include <linux/module.h>

#include "kprobes-test.h"


#define TEST_ISA "32"

#define TEST_ARM_TO_THUMB_INTERWORK_R(code1, reg, val, code2)	\
	TESTCASE_START(code1 #reg code2)			\
	TEST_ARG_REG(reg, val)					\
	TEST_ARG_REG(14, 99f)					\
	TEST_ARG_END("")					\
	"50:	nop			\n\t"			\
	"1:	"code1 #reg code2"	\n\t"			\
	"	bx	lr		\n\t"			\
	".thumb				\n\t"			\
	"3:	adr	lr, 2f		\n\t"			\
	"	bx	lr		\n\t"			\
	".arm				\n\t"			\
	"2:	nop			\n\t"			\
	TESTCASE_END

#define TEST_ARM_TO_THUMB_INTERWORK_P(code1, reg, val, code2)	\
	TESTCASE_START(code1 #reg code2)			\
	TEST_ARG_PTR(reg, val)					\
	TEST_ARG_REG(14, 99f)					\
	TEST_ARG_MEM(15, 3f+1)					\
	TEST_ARG_END("")					\
	"50:	nop			\n\t"			\
	"1:	"code1 #reg code2"	\n\t"			\
	"	bx	lr		\n\t"			\
	".thumb				\n\t"			\
	"3:	adr	lr, 2f		\n\t"			\
	"	bx	lr		\n\t"			\
	".arm				\n\t"			\
	"2:	nop			\n\t"			\
	TESTCASE_END


void kprobe_arm_test_cases(void)
{
	kprobe_test_flags = 0;

	TEST_GROUP("Data-processing (register), (register-shifted register), (immediate)")

#define _DATA_PROCESSING_DNM(op,s,val)	\
	TEST_RR(  op "eq" s " r0,  r",1, VAL1,", r",2, val, "")	\
	TEST_RR(  op "ne" s " r1,  r",1, VAL1,", r",2, val, ", lsl #3")	\
	TEST_RR(  op "cs" s " r2,  r",3, VAL1,", r",2, val, ", lsr #4")	\
	TEST_RR(  op "cc" s " r3,  r",3, VAL1,", r",2, val, ", asr #5")	\
	TEST_RR(  op "mi" s " r4,  r",5, VAL1,", r",2, N(val),", asr #6")	\
	TEST_RR(  op "pl" s " r5,  r",5, VAL1,", r",2, val, ", ror #7")	\
	TEST_RR(  op "vs" s " r6,  r",7, VAL1,", r",2, val, ", rrx")	\
	TEST_R(   op "vc" s " r6,  r",7, VAL1,", pc, lsl #3")	\
	TEST_R(   op "vc" s " r6,  r",7, VAL1,", sp, lsr #4")	\
	TEST_R(   op "vc" s " r6,  pc, r",7, VAL1,", asr #5")	\
	TEST_R(   op "vc" s " r6,  sp, r",7, VAL1,", ror #6")	\
	TEST_RRR( op "hi" s " r8,  r",9, VAL1,", r",14,val, ", lsl r",0, 3,"")	\
	TEST_RRR( op "ls" s " r9,  r",9, VAL1,", r",14,val, ", lsr r",7, 4,"")	\
	TEST_RRR( op "ge" s " r10, r",11,VAL1,", r",14,val, ", asr r",7, 5,"")	\
	TEST_RRR( op "lt" s " r11, r",11,VAL1,", r",14,N(val),", asr r",7, 6,"")	\
	TEST_RR(  op "gt" s " r12, r13"       ", r",14,val, ", ror r",14,7,"")	\
	TEST_RR(  op "le" s " r14, r",0, val, ", r13"       ", lsl r",14,8,"")	\
	TEST_RR(  op s      " r12, pc"        ", r",14,val, ", ror r",14,7,"")	\
	TEST_RR(  op s      " r14, r",0, val, ", pc"        ", lsl r",14,8,"")	\
	TEST_R(   op "eq" s " r0,  r",11,VAL1,", #0xf5")	\
	TEST_R(   op "ne" s " r11, r",0, VAL1,", #0xf5000000")	\
	TEST_R(   op s      " r7,  r",8, VAL2,", #0x000af000")	\
	TEST(     op s      " r4,  pc"        ", #0x00005a00")

#define DATA_PROCESSING_DNM(op,val)		\
	_DATA_PROCESSING_DNM(op,"",val)		\
	_DATA_PROCESSING_DNM(op,"s",val)

#define DATA_PROCESSING_NM(op,val)	\
	TEST_RR(  op "ne r",1, VAL1,", r",2, val, "")	\
	TEST_RR(  op "eq r",1, VAL1,", r",2, val, ", lsl #3")	\
	TEST_RR(  op "cc r",3, VAL1,", r",2, val, ", lsr #4")	\
	TEST_RR(  op "cs r",3, VAL1,", r",2, val, ", asr #5")	\
	TEST_RR(  op "pl r",5, VAL1,", r",2, N(val),", asr #6")	\
	TEST_RR(  op "mi r",5, VAL1,", r",2, val, ", ror #7")	\
	TEST_RR(  op "vc r",7, VAL1,", r",2, val, ", rrx")	\
	TEST_R(   op "vs r",7, VAL1,", pc, lsl #3")	\
	TEST_R(   op "vs r",7, VAL1,", sp, lsr #4")	\
	TEST_R(   op "vs pc, r",7, VAL1,", asr #5")	\
	TEST_R(   op "vs sp, r",7, VAL1,", ror #6")	\
	TEST_RRR( op "ls r",9, VAL1,", r",14,val, ", lsl r",0, 3,"")	\
	TEST_RRR( op "hi r",9, VAL1,", r",14,val, ", lsr r",7, 4,"")	\
	TEST_RRR( op "lt r",11,VAL1,", r",14,val, ", asr r",7, 5,"")	\
	TEST_RRR( op "ge r",11,VAL1,", r",14,N(val),", asr r",7, 6,"")	\
	TEST_RR(  op "le r13"       ", r",14,val, ", ror r",14,7,"")	\
	TEST_RR(  op "gt r",0, val, ", r13"       ", lsl r",14,8,"")	\
	TEST_RR(  op "   pc"        ", r",14,val, ", ror r",14,7,"")	\
	TEST_RR(  op "   r",0, val, ", pc"        ", lsl r",14,8,"")	\
	TEST_R(   op "eq r",11,VAL1,", #0xf5")	\
	TEST_R(   op "ne r",0, VAL1,", #0xf5000000")	\
	TEST_R(   op "   r",8, VAL2,", #0x000af000")

#define _DATA_PROCESSING_DM(op,s,val)	\
	TEST_R(   op "eq" s " r0,  r",1, val, "")	\
	TEST_R(   op "ne" s " r1,  r",1, val, ", lsl #3")	\
	TEST_R(   op "cs" s " r2,  r",3, val, ", lsr #4")	\
	TEST_R(   op "cc" s " r3,  r",3, val, ", asr #5")	\
	TEST_R(   op "mi" s " r4,  r",5, N(val),", asr #6")	\
	TEST_R(   op "pl" s " r5,  r",5, val, ", ror #7")	\
	TEST_R(   op "vs" s " r6,  r",10,val, ", rrx")	\
	TEST(     op "vs" s " r7,  pc, lsl #3")	\
	TEST(     op "vs" s " r7,  sp, lsr #4")	\
	TEST_RR(  op "vc" s " r8,  r",7, val, ", lsl r",0, 3,"")	\
	TEST_RR(  op "hi" s " r9,  r",9, val, ", lsr r",7, 4,"")	\
	TEST_RR(  op "ls" s " r10, r",9, val, ", asr r",7, 5,"")	\
	TEST_RR(  op "ge" s " r11, r",11,N(val),", asr r",7, 6,"")	\
	TEST_RR(  op "lt" s " r12, r",11,val, ", ror r",14,7,"")	\
	TEST_R(   op "gt" s " r14, r13"       ", lsl r",14,8,"")	\
	TEST_R(   op "le" s " r14, pc"        ", lsl r",14,8,"")	\
	TEST(     op "eq" s " r0,  #0xf5")	\
	TEST(     op "ne" s " r11, #0xf5000000")	\
	TEST(     op s      " r7,  #0x000af000")	\
	TEST(     op s      " r4,  #0x00005a00")

#define DATA_PROCESSING_DM(op,val)		\
	_DATA_PROCESSING_DM(op,"",val)		\
	_DATA_PROCESSING_DM(op,"s",val)

	DATA_PROCESSING_DNM("and",0xf00f00ff)
	DATA_PROCESSING_DNM("eor",0xf00f00ff)
	DATA_PROCESSING_DNM("sub",VAL2)
	DATA_PROCESSING_DNM("rsb",VAL2)
	DATA_PROCESSING_DNM("add",VAL2)
	DATA_PROCESSING_DNM("adc",VAL2)
	DATA_PROCESSING_DNM("sbc",VAL2)
	DATA_PROCESSING_DNM("rsc",VAL2)
	DATA_PROCESSING_NM("tst",0xf00f00ff)
	DATA_PROCESSING_NM("teq",0xf00f00ff)
	DATA_PROCESSING_NM("cmp",VAL2)
	DATA_PROCESSING_NM("cmn",VAL2)
	DATA_PROCESSING_DNM("orr",0xf00f00ff)
	DATA_PROCESSING_DM("mov",VAL2)
	DATA_PROCESSING_DNM("bic",0xf00f00ff)
	DATA_PROCESSING_DM("mvn",VAL2)

	TEST("mov ip, sp") /* This has special case emulation code */

	TEST_SUPPORTED("mov pc, #0x1000");
	TEST_SUPPORTED("mov sp, #0x1000");
	TEST_SUPPORTED("cmp pc, #0x1000");
	TEST_SUPPORTED("cmp sp, #0x1000");

	/* Data-processing with PC as shift */
	TEST_UNSUPPORTED(".word 0xe15c0f1e	@ cmp	r12, r14, asl pc")
	TEST_UNSUPPORTED(".word 0xe1a0cf1e	@ mov	r12, r14, asl pc")
	TEST_UNSUPPORTED(".word 0xe08caf1e	@ add	r10, r12, r14, asl pc")

	/* Data-processing with PC as target and S bit set */
	TEST_UNSUPPORTED("movs	pc, r1")
	TEST_UNSUPPORTED("movs	pc, r1, lsl r2")
	TEST_UNSUPPORTED("movs	pc, #0x10000")
	TEST_UNSUPPORTED("adds	pc, lr, r1")
	TEST_UNSUPPORTED("adds	pc, lr, r1, lsl r2")
	TEST_UNSUPPORTED("adds	pc, lr, #4")

	/* Data-processing with SP as target */
	TEST("add	sp, sp, #16")
	TEST("sub	sp, sp, #8")
	TEST("bic	sp, sp, #0x20")
	TEST("orr	sp, sp, #0x20")
	TEST_PR( "add	sp, r",10,0,", r",11,4,"")
	TEST_PRR("add	sp, r",10,0,", r",11,4,", asl r",12,1,"")
	TEST_P(  "mov	sp, r",10,0,"")
	TEST_PR( "mov	sp, r",10,0,", asl r",12,0,"")

	/* Data-processing with PC as target */
	TEST_BF(   "add	pc, pc, #2f-1b-8")
	TEST_BF_R( "add	pc, pc, r",14,2f-1f-8,"")
	TEST_BF_R( "add	pc, r",14,2f-1f-8,", pc")
	TEST_BF_R( "mov	pc, r",0,2f,"")
	TEST_BF_RR("mov	pc, r",0,2f,", asl r",1,0,"")
	TEST_BB(   "sub	pc, pc, #1b-2b+8")
#if __LINUX_ARM_ARCH__ >= 6
	TEST_BB(   "sub	pc, pc, #1b-2b+8-2") /* UNPREDICTABLE before ARMv6 */
#endif
	TEST_BB_R( "sub	pc, pc, r",14, 1f-2f+8,"")
	TEST_BB_R( "rsb	pc, r",14,1f-2f+8,", pc")
	TEST_RR(   "add	pc, pc, r",10,-2,", asl r",11,1,"")
#ifdef CONFIG_THUMB2_KERNEL
	TEST_ARM_TO_THUMB_INTERWORK_R("add	pc, pc, r",0,3f-1f-8+1,"")
	TEST_ARM_TO_THUMB_INTERWORK_R("sub	pc, r",0,3f+8+1,", #8")
#endif
	TEST_GROUP("Miscellaneous instructions")

	TEST("mrs	r0, cpsr")
	TEST("mrspl	r7, cpsr")
	TEST("mrs	r14, cpsr")
	TEST_UNSUPPORTED(".word 0xe10ff000	@ mrs r15, cpsr")
	TEST_UNSUPPORTED("mrs	r0, spsr")
	TEST_UNSUPPORTED("mrs	lr, spsr")

	TEST_UNSUPPORTED("msr	cpsr, r0")
	TEST_UNSUPPORTED("msr	cpsr_f, lr")
	TEST_UNSUPPORTED("msr	spsr, r0")

	TEST_BF_R("bx	r",0,2f,"")
	TEST_BB_R("bx	r",7,2f,"")
	TEST_BF_R("bxeq	r",14,2f,"")

	TEST_R("clz	r0, r",0, 0x0,"")
	TEST_R("clzeq	r7, r",14,0x1,"")
	TEST_R("clz	lr, r",7, 0xffffffff,"")
	TEST(  "clz	r4, sp")
	TEST_UNSUPPORTED(".word 0x016fff10	@ clz pc, r0")
	TEST_UNSUPPORTED(".word 0x016f0f1f	@ clz r0, pc")

#if __LINUX_ARM_ARCH__ >= 6
	TEST_UNSUPPORTED("bxj	r0")
#endif

	TEST_BF_R("blx	r",0,2f,"")
	TEST_BB_R("blx	r",7,2f,"")
	TEST_BF_R("blxeq	r",14,2f,"")
	TEST_UNSUPPORTED(".word 0x0120003f	@ blx pc")

	TEST_RR(   "qadd	r0, r",1, VAL1,", r",2, VAL2,"")
	TEST_RR(   "qaddvs	lr, r",9, VAL2,", r",8, VAL1,"")
	TEST_R(    "qadd	lr, r",9, VAL2,", r13")
	TEST_RR(   "qsub	r0, r",1, VAL1,", r",2, VAL2,"")
	TEST_RR(   "qsubvs	lr, r",9, VAL2,", r",8, VAL1,"")
	TEST_R(    "qsub	lr, r",9, VAL2,", r13")
	TEST_RR(   "qdadd	r0, r",1, VAL1,", r",2, VAL2,"")
	TEST_RR(   "qdaddvs	lr, r",9, VAL2,", r",8, VAL1,"")
	TEST_R(    "qdadd	lr, r",9, VAL2,", r13")
	TEST_RR(   "qdsub	r0, r",1, VAL1,", r",2, VAL2,"")
	TEST_RR(   "qdsubvs	lr, r",9, VAL2,", r",8, VAL1,"")
	TEST_R(    "qdsub	lr, r",9, VAL2,", r13")
	TEST_UNSUPPORTED(".word 0xe101f050	@ qadd pc, r0, r1")
	TEST_UNSUPPORTED(".word 0xe121f050	@ qsub pc, r0, r1")
	TEST_UNSUPPORTED(".word 0xe141f050	@ qdadd pc, r0, r1")
	TEST_UNSUPPORTED(".word 0xe161f050	@ qdsub pc, r0, r1")
	TEST_UNSUPPORTED(".word 0xe16f2050	@ qdsub r2, r0, pc")
	TEST_UNSUPPORTED(".word 0xe161205f	@ qdsub r2, pc, r1")

	TEST_UNSUPPORTED("bkpt	0xffff")
	TEST_UNSUPPORTED("bkpt	0x0000")

	TEST_UNSUPPORTED(".word 0xe1600070	@ smc #0")

	TEST_GROUP("Halfword multiply and multiply-accumulate")

	TEST_RRR(    "smlabb	r0, r",1, VAL1,", r",2, VAL2,", r",3, VAL3,"")
	TEST_RRR(    "smlabbge	r7, r",8, VAL3,", r",9, VAL1,", r",10, VAL2,"")
	TEST_RR(     "smlabb	lr, r",1, VAL2,", r",2, VAL3,", r13")
	TEST_UNSUPPORTED(".word 0xe10f3281	@ smlabb pc, r1, r2, r3")
	TEST_RRR(    "smlatb	r0, r",1, VAL1,", r",2, VAL2,", r",3, VAL3,"")
	TEST_RRR(    "smlatbge	r7, r",8, VAL3,", r",9, VAL1,", r",10, VAL2,"")
	TEST_RR(     "smlatb	lr, r",1, VAL2,", r",2, VAL3,", r13")
	TEST_UNSUPPORTED(".word 0xe10f32a1	@ smlatb pc, r1, r2, r3")
	TEST_RRR(    "smlabt	r0, r",1, VAL1,", r",2, VAL2,", r",3, VAL3,"")
	TEST_RRR(    "smlabtge	r7, r",8, VAL3,", r",9, VAL1,", r",10, VAL2,"")
	TEST_RR(     "smlabt	lr, r",1, VAL2,", r",2, VAL3,", r13")
	TEST_UNSUPPORTED(".word 0xe10f32c1	@ smlabt pc, r1, r2, r3")
	TEST_RRR(    "smlatt	r0, r",1, VAL1,", r",2, VAL2,", r",3, VAL3,"")
	TEST_RRR(    "smlattge	r7, r",8, VAL3,", r",9, VAL1,", r",10, VAL2,"")
	TEST_RR(     "smlatt	lr, r",1, VAL2,", r",2, VAL3,", r13")
	TEST_UNSUPPORTED(".word 0xe10f32e1	@ smlatt pc, r1, r2, r3")

	TEST_RRR(    "smlawb	r0, r",1, VAL1,", r",2, VAL2,", r",3, VAL3,"")
	TEST_RRR(    "smlawbge	r7, r",8, VAL3,", r",9, VAL1,", r",10, VAL2,"")
	TEST_RR(     "smlawb	lr, r",1, VAL2,", r",2, VAL3,", r13")
	TEST_UNSUPPORTED(".word 0xe12f3281	@ smlawb pc, r1, r2, r3")
	TEST_RRR(    "smlawt	r0, r",1, VAL1,", r",2, VAL2,", r",3, VAL3,"")
	TEST_RRR(    "smlawtge	r7, r",8, VAL3,", r",9, VAL1,", r",10, VAL2,"")
	TEST_RR(     "smlawt	lr, r",1, VAL2,", r",2, VAL3,", r13")
	TEST_UNSUPPORTED(".word 0xe12f32c1	@ smlawt pc, r1, r2, r3")
	TEST_UNSUPPORTED(".word 0xe12032cf	@ smlawt r0, pc, r2, r3")
	TEST_UNSUPPORTED(".word 0xe1203fc1	@ smlawt r0, r1, pc, r3")
	TEST_UNSUPPORTED(".word 0xe120f2c1	@ smlawt r0, r1, r2, pc")

	TEST_RR(    "smulwb	r0, r",1, VAL1,", r",2, VAL2,"")
	TEST_RR(    "smulwbge	r7, r",8, VAL3,", r",9, VAL1,"")
	TEST_R(     "smulwb	lr, r",1, VAL2,", r13")
	TEST_UNSUPPORTED(".word 0xe12f02a1	@ smulwb pc, r1, r2")
	TEST_RR(    "smulwt	r0, r",1, VAL1,", r",2, VAL2,"")
	TEST_RR(    "smulwtge	r7, r",8, VAL3,", r",9, VAL1,"")
	TEST_R(     "smulwt	lr, r",1, VAL2,", r13")
	TEST_UNSUPPORTED(".word 0xe12f02e1	@ smulwt pc, r1, r2")

	TEST_RRRR(  "smlalbb	r",0, VAL1,", r",1, VAL2,", r",2, VAL3,", r",3, VAL4)
	TEST_RRRR(  "smlalbble	r",8, VAL4,", r",9, VAL1,", r",10,VAL2,", r",11,VAL3)
	TEST_RRR(   "smlalbb	r",14,VAL3,", r",7, VAL4,", r",5, VAL1,", r13")
	TEST_UNSUPPORTED(".word 0xe14f1382	@ smlalbb pc, r1, r2, r3")
	TEST_UNSUPPORTED(".word 0xe141f382	@ smlalbb r1, pc, r2, r3")
	TEST_RRRR(  "smlaltb	r",0, VAL1,", r",1, VAL2,", r",2, VAL3,", r",3, VAL4)
	TEST_RRRR(  "smlaltble	r",8, VAL4,", r",9, VAL1,", r",10,VAL2,", r",11,VAL3)
	TEST_RRR(   "smlaltb	r",14,VAL3,", r",7, VAL4,", r",5, VAL1,", r13")
	TEST_UNSUPPORTED(".word 0xe14f13a2	@ smlaltb pc, r1, r2, r3")
	TEST_UNSUPPORTED(".word 0xe141f3a2	@ smlaltb r1, pc, r2, r3")
	TEST_RRRR(  "smlalbt	r",0, VAL1,", r",1, VAL2,", r",2, VAL3,", r",3, VAL4)
	TEST_RRRR(  "smlalbtle	r",8, VAL4,", r",9, VAL1,", r",10,VAL2,", r",11,VAL3)
	TEST_RRR(   "smlalbt	r",14,VAL3,", r",7, VAL4,", r",5, VAL1,", r13")
	TEST_UNSUPPORTED(".word 0xe14f13c2	@ smlalbt pc, r1, r2, r3")
	TEST_UNSUPPORTED(".word 0xe141f3c2	@ smlalbt r1, pc, r2, r3")
	TEST_RRRR(  "smlaltt	r",0, VAL1,", r",1, VAL2,", r",2, VAL3,", r",3, VAL4)
	TEST_RRRR(  "smlalttle	r",8, VAL4,", r",9, VAL1,", r",10,VAL2,", r",11,VAL3)
	TEST_RRR(   "smlaltt	r",14,VAL3,", r",7, VAL4,", r",5, VAL1,", r13")
	TEST_UNSUPPORTED(".word 0xe14f13e2	@ smlaltt pc, r1, r2, r3")
	TEST_UNSUPPORTED(".word 0xe140f3e2	@ smlaltt r0, pc, r2, r3")
	TEST_UNSUPPORTED(".word 0xe14013ef	@ smlaltt r0, r1, pc, r3")
	TEST_UNSUPPORTED(".word 0xe1401fe2	@ smlaltt r0, r1, r2, pc")

	TEST_RR(    "smulbb	r0, r",1, VAL1,", r",2, VAL2,"")
	TEST_RR(    "smulbbge	r7, r",8, VAL3,", r",9, VAL1,"")
	TEST_R(     "smulbb	lr, r",1, VAL2,", r13")
	TEST_UNSUPPORTED(".word 0xe16f0281	@ smulbb pc, r1, r2")
	TEST_RR(    "smultb	r0, r",1, VAL1,", r",2, VAL2,"")
	TEST_RR(    "smultbge	r7, r",8, VAL3,", r",9, VAL1,"")
	TEST_R(     "smultb	lr, r",1, VAL2,", r13")
	TEST_UNSUPPORTED(".word 0xe16f02a1	@ smultb pc, r1, r2")
	TEST_RR(    "smulbt	r0, r",1, VAL1,", r",2, VAL2,"")
	TEST_RR(    "smulbtge	r7, r",8, VAL3,", r",9, VAL1,"")
	TEST_R(     "smulbt	lr, r",1, VAL2,", r13")
	TEST_UNSUPPORTED(".word 0xe16f02c1	@ smulbt pc, r1, r2")
	TEST_RR(    "smultt	r0, r",1, VAL1,", r",2, VAL2,"")
	TEST_RR(    "smulttge	r7, r",8, VAL3,", r",9, VAL1,"")
	TEST_R(     "smultt	lr, r",1, VAL2,", r13")
	TEST_UNSUPPORTED(".word 0xe16f02e1	@ smultt pc, r1, r2")
	TEST_UNSUPPORTED(".word 0xe16002ef	@ smultt r0, pc, r2")
	TEST_UNSUPPORTED(".word 0xe1600fe1	@ smultt r0, r1, pc")

	TEST_GROUP("Multiply and multiply-accumulate")

	TEST_RR(    "mul	r0, r",1, VAL1,", r",2, VAL2,"")
	TEST_RR(    "mulls	r7, r",8, VAL2,", r",9, VAL2,"")
	TEST_R(     "mul	lr, r",4, VAL3,", r13")
	TEST_UNSUPPORTED(".word 0xe00f0291	@ mul pc, r1, r2")
	TEST_UNSUPPORTED(".word 0xe000029f	@ mul r0, pc, r2")
	TEST_UNSUPPORTED(".word 0xe0000f91	@ mul r0, r1, pc")
	TEST_RR(    "muls	r0, r",1, VAL1,", r",2, VAL2,"")
	TEST_RR(    "mullss	r7, r",8, VAL2,", r",9, VAL2,"")
	TEST_R(     "muls	lr, r",4, VAL3,", r13")
	TEST_UNSUPPORTED(".word 0xe01f0291	@ muls pc, r1, r2")

	TEST_RRR(   "mla	r0, r",1, VAL1,", r",2, VAL2,", r",3, VAL3,"")
	TEST_RRR(   "mlahi	r7, r",8, VAL3,", r",9, VAL1,", r",10, VAL2,"")
	TEST_RR(    "mla	lr, r",1, VAL2,", r",2, VAL3,", r13")
	TEST_UNSUPPORTED(".word 0xe02f3291	@ mla pc, r1, r2, r3")
	TEST_RRR(   "mlas	r0, r",1, VAL1,", r",2, VAL2,", r",3, VAL3,"")
	TEST_RRR(   "mlahis	r7, r",8, VAL3,", r",9, VAL1,", r",10, VAL2,"")
	TEST_RR(    "mlas	lr, r",1, VAL2,", r",2, VAL3,", r13")
	TEST_UNSUPPORTED(".word 0xe03f3291	@ mlas pc, r1, r2, r3")

#if __LINUX_ARM_ARCH__ >= 6
	TEST_RR(    "umaal	r0, r1, r",2, VAL1,", r",3, VAL2,"")
	TEST_RR(    "umaalls	r7, r8, r",9, VAL2,", r",10, VAL1,"")
	TEST_R(     "umaal	lr, r12, r",11,VAL3,", r13")
	TEST_UNSUPPORTED(".word 0xe041f392	@ umaal pc, r1, r2, r3")
	TEST_UNSUPPORTED(".word 0xe04f0392	@ umaal r0, pc, r2, r3")
	TEST_UNSUPPORTED(".word 0xe0500090	@ undef")
	TEST_UNSUPPORTED(".word 0xe05fff9f	@ undef")

	TEST_RRR(   "mls	r0, r",1, VAL1,", r",2, VAL2,", r",3, VAL3,"")
	TEST_RRR(   "mlshi	r7, r",8, VAL3,", r",9, VAL1,", r",10, VAL2,"")
	TEST_RR(    "mls	lr, r",1, VAL2,", r",2, VAL3,", r13")
	TEST_UNSUPPORTED(".word 0xe06f3291	@ mls pc, r1, r2, r3")
	TEST_UNSUPPORTED(".word 0xe060329f	@ mls r0, pc, r2, r3")
	TEST_UNSUPPORTED(".word 0xe0603f91	@ mls r0, r1, pc, r3")
	TEST_UNSUPPORTED(".word 0xe060f291	@ mls r0, r1, r2, pc")
#endif

	TEST_UNSUPPORTED(".word 0xe0700090	@ undef")
	TEST_UNSUPPORTED(".word 0xe07fff9f	@ undef")

	TEST_RR(    "umull	r0, r1, r",2, VAL1,", r",3, VAL2,"")
	TEST_RR(    "umullls	r7, r8, r",9, VAL2,", r",10, VAL1,"")
	TEST_R(     "umull	lr, r12, r",11,VAL3,", r13")
	TEST_UNSUPPORTED(".word 0xe081f392	@ umull pc, r1, r2, r3")
	TEST_UNSUPPORTED(".word 0xe08f1392	@ umull r1, pc, r2, r3")
	TEST_RR(    "umulls	r0, r1, r",2, VAL1,", r",3, VAL2,"")
	TEST_RR(    "umulllss	r7, r8, r",9, VAL2,", r",10, VAL1,"")
	TEST_R(     "umulls	lr, r12, r",11,VAL3,", r13")
	TEST_UNSUPPORTED(".word 0xe091f392	@ umulls pc, r1, r2, r3")
	TEST_UNSUPPORTED(".word 0xe09f1392	@ umulls r1, pc, r2, r3")

	TEST_RRRR(  "umlal	r",0, VAL1,", r",1, VAL2,", r",2, VAL3,", r",3, VAL4)
	TEST_RRRR(  "umlalle	r",8, VAL4,", r",9, VAL1,", r",10,VAL2,", r",11,VAL3)
	TEST_RRR(   "umlal	r",14,VAL3,", r",7, VAL4,", r",5, VAL1,", r13")
	TEST_UNSUPPORTED(".word 0xe0af1392	@ umlal pc, r1, r2, r3")
	TEST_UNSUPPORTED(".word 0xe0a1f392	@ umlal r1, pc, r2, r3")
	TEST_RRRR(  "umlals	r",0, VAL1,", r",1, VAL2,", r",2, VAL3,", r",3, VAL4)
	TEST_RRRR(  "umlalles	r",8, VAL4,", r",9, VAL1,", r",10,VAL2,", r",11,VAL3)
	TEST_RRR(   "umlals	r",14,VAL3,", r",7, VAL4,", r",5, VAL1,", r13")
	TEST_UNSUPPORTED(".word 0xe0bf1392	@ umlals pc, r1, r2, r3")
	TEST_UNSUPPORTED(".word 0xe0b1f392	@ umlals r1, pc, r2, r3")

	TEST_RR(    "smull	r0, r1, r",2, VAL1,", r",3, VAL2,"")
	TEST_RR(    "smullls	r7, r8, r",9, VAL2,", r",10, VAL1,"")
	TEST_R(     "smull	lr, r12, r",11,VAL3,", r13")
	TEST_UNSUPPORTED(".word 0xe0c1f392	@ smull pc, r1, r2, r3")
	TEST_UNSUPPORTED(".word 0xe0cf1392	@ smull r1, pc, r2, r3")
	TEST_RR(    "smulls	r0, r1, r",2, VAL1,", r",3, VAL2,"")
	TEST_RR(    "smulllss	r7, r8, r",9, VAL2,", r",10, VAL1,"")
	TEST_R(     "smulls	lr, r12, r",11,VAL3,", r13")
	TEST_UNSUPPORTED(".word 0xe0d1f392	@ smulls pc, r1, r2, r3")
	TEST_UNSUPPORTED(".word 0xe0df1392	@ smulls r1, pc, r2, r3")

	TEST_RRRR(  "smlal	r",0, VAL1,", r",1, VAL2,", r",2, VAL3,", r",3, VAL4)
	TEST_RRRR(  "smlalle	r",8, VAL4,", r",9, VAL1,", r",10,VAL2,", r",11,VAL3)
	TEST_RRR(   "smlal	r",14,VAL3,", r",7, VAL4,", r",5, VAL1,", r13")
	TEST_UNSUPPORTED(".word 0xe0ef1392	@ smlal pc, r1, r2, r3")
	TEST_UNSUPPORTED(".word 0xe0e1f392	@ smlal r1, pc, r2, r3")
	TEST_RRRR(  "smlals	r",0, VAL1,", r",1, VAL2,", r",2, VAL3,", r",3, VAL4)
	TEST_RRRR(  "smlalles	r",8, VAL4,", r",9, VAL1,", r",10,VAL2,", r",11,VAL3)
	TEST_RRR(   "smlals	r",14,VAL3,", r",7, VAL4,", r",5, VAL1,", r13")
	TEST_UNSUPPORTED(".word 0xe0ff1392	@ smlals pc, r1, r2, r3")
	TEST_UNSUPPORTED(".word 0xe0f0f392	@ smlals r0, pc, r2, r3")
	TEST_UNSUPPORTED(".word 0xe0f0139f	@ smlals r0, r1, pc, r3")
	TEST_UNSUPPORTED(".word 0xe0f01f92	@ smlals r0, r1, r2, pc")

	TEST_GROUP("Synchronization primitives")

	/*
	 * Use hard coded constants for SWP instructions to avoid warnings
	 * about deprecated instructions.
	 */
	TEST_RP( ".word 0xe108e097 @ swp	lr, r",7,VAL2,", [r",8,0,"]")
	TEST_R(  ".word 0x610d0091 @ swpvs	r0, r",1,VAL1,", [sp]")
	TEST_RP( ".word 0xe10cd09e @ swp	sp, r",14,VAL2,", [r",12,13*4,"]")
	TEST_UNSUPPORTED(".word 0xe102f091	@ swp pc, r1, [r2]")
	TEST_UNSUPPORTED(".word 0xe102009f	@ swp r0, pc, [r2]")
	TEST_UNSUPPORTED(".word 0xe10f0091	@ swp r0, r1, [pc]")
	TEST_RP( ".word 0xe148e097 @ swpb	lr, r",7,VAL2,", [r",8,0,"]")
	TEST_R(  ".word 0x614d0091 @ swpvsb	r0, r",1,VAL1,", [sp]")
	TEST_UNSUPPORTED(".word 0xe142f091	@ swpb pc, r1, [r2]")

	TEST_UNSUPPORTED(".word 0xe1100090") /* Unallocated space */
	TEST_UNSUPPORTED(".word 0xe1200090") /* Unallocated space */
	TEST_UNSUPPORTED(".word 0xe1300090") /* Unallocated space */
	TEST_UNSUPPORTED(".word 0xe1500090") /* Unallocated space */
	TEST_UNSUPPORTED(".word 0xe1600090") /* Unallocated space */
	TEST_UNSUPPORTED(".word 0xe1700090") /* Unallocated space */
#if __LINUX_ARM_ARCH__ >= 6
	TEST_UNSUPPORTED("ldrex	r2, [sp]")
	TEST_UNSUPPORTED("strexd	r0, r2, r3, [sp]")
	TEST_UNSUPPORTED("ldrexd	r2, r3, [sp]")
	TEST_UNSUPPORTED("strexb	r0, r2, [sp]")
	TEST_UNSUPPORTED("ldrexb	r2, [sp]")
	TEST_UNSUPPORTED("strexh	r0, r2, [sp]")
	TEST_UNSUPPORTED("ldrexh	r2, [sp]")
#endif
	TEST_GROUP("Extra load/store instructions")

	TEST_RPR(  "strh	r",0, VAL1,", [r",1, 48,", -r",2, 24,"]")
	TEST_RPR(  "streqh	r",14,VAL2,", [r",13,0, ", r",12, 48,"]")
	TEST_RPR(  "strh	r",1, VAL1,", [r",2, 24,", r",3,  48,"]!")
	TEST_RPR(  "strneh	r",12,VAL2,", [r",11,48,", -r",10,24,"]!")
	TEST_RPR(  "strh	r",2, VAL1,", [r",3, 24,"], r",4, 48,"")
	TEST_RPR(  "strh	r",10,VAL2,", [r",9, 48,"], -r",11,24,"")
	TEST_UNSUPPORTED(".word 0xe1afc0ba	@ strh r12, [pc, r10]!")
	TEST_UNSUPPORTED(".word 0xe089f0bb	@ strh pc, [r9], r11")
	TEST_UNSUPPORTED(".word 0xe089a0bf	@ strh r10, [r9], pc")

	TEST_PR(   "ldrh	r0, [r",0,  48,", -r",2, 24,"]")
	TEST_PR(   "ldrcsh	r14, [r",13,0, ", r",12, 48,"]")
	TEST_PR(   "ldrh	r1, [r",2,  24,", r",3,  48,"]!")
	TEST_PR(   "ldrcch	r12, [r",11,48,", -r",10,24,"]!")
	TEST_PR(   "ldrh	r2, [r",3,  24,"], r",4, 48,"")
	TEST_PR(   "ldrh	r10, [r",9, 48,"], -r",11,24,"")
	TEST_UNSUPPORTED(".word 0xe1bfc0ba	@ ldrh r12, [pc, r10]!")
	TEST_UNSUPPORTED(".word 0xe099f0bb	@ ldrh pc, [r9], r11")
	TEST_UNSUPPORTED(".word 0xe099a0bf	@ ldrh r10, [r9], pc")

	TEST_RP(   "strh	r",0, VAL1,", [r",1, 24,", #-2]")
	TEST_RP(   "strmih	r",14,VAL2,", [r",13,0, ", #2]")
	TEST_RP(   "strh	r",1, VAL1,", [r",2, 24,", #4]!")
	TEST_RP(   "strplh	r",12,VAL2,", [r",11,24,", #-4]!")
	TEST_RP(   "strh	r",2, VAL1,", [r",3, 24,"], #48")
	TEST_RP(   "strh	r",10,VAL2,", [r",9, 64,"], #-48")
	TEST_UNSUPPORTED(".word 0xe1efc3b0	@ strh r12, [pc, #48]!")
	TEST_UNSUPPORTED(".word 0xe0c9f3b0	@ strh pc, [r9], #48")

	TEST_P(    "ldrh	r0, [r",0,  24,", #-2]")
	TEST_P(    "ldrvsh	r14, [r",13,0, ", #2]")
	TEST_P(    "ldrh	r1, [r",2,  24,", #4]!")
	TEST_P(    "ldrvch	r12, [r",11,24,", #-4]!")
	TEST_P(    "ldrh	r2, [r",3,  24,"], #48")
	TEST_P(    "ldrh	r10, [r",9, 64,"], #-48")
	TEST(      "ldrh	r0, [pc, #0]")
	TEST_UNSUPPORTED(".word 0xe1ffc3b0	@ ldrh r12, [pc, #48]!")
	TEST_UNSUPPORTED(".word 0xe0d9f3b0	@ ldrh pc, [r9], #48")

	TEST_PR(   "ldrsb	r0, [r",0,  48,", -r",2, 24,"]")
	TEST_PR(   "ldrhisb	r14, [r",13,0,", r",12,  48,"]")
	TEST_PR(   "ldrsb	r1, [r",2,  24,", r",3,  48,"]!")
	TEST_PR(   "ldrlssb	r12, [r",11,48,", -r",10,24,"]!")
	TEST_PR(   "ldrsb	r2, [r",3,  24,"], r",4, 48,"")
	TEST_PR(   "ldrsb	r10, [r",9, 48,"], -r",11,24,"")
	TEST_UNSUPPORTED(".word 0xe1bfc0da	@ ldrsb r12, [pc, r10]!")
	TEST_UNSUPPORTED(".word 0xe099f0db	@ ldrsb pc, [r9], r11")

	TEST_P(    "ldrsb	r0, [r",0,  24,", #-1]")
	TEST_P(    "ldrgesb	r14, [r",13,0, ", #1]")
	TEST_P(    "ldrsb	r1, [r",2,  24,", #4]!")
	TEST_P(    "ldrltsb	r12, [r",11,24,", #-4]!")
	TEST_P(    "ldrsb	r2, [r",3,  24,"], #48")
	TEST_P(    "ldrsb	r10, [r",9, 64,"], #-48")
	TEST(      "ldrsb	r0, [pc, #0]")
	TEST_UNSUPPORTED(".word 0xe1ffc3d0	@ ldrsb r12, [pc, #48]!")
	TEST_UNSUPPORTED(".word 0xe0d9f3d0	@ ldrsb pc, [r9], #48")

	TEST_PR(   "ldrsh	r0, [r",0,  48,", -r",2, 24,"]")
	TEST_PR(   "ldrgtsh	r14, [r",13,0, ", r",12, 48,"]")
	TEST_PR(   "ldrsh	r1, [r",2,  24,", r",3,  48,"]!")
	TEST_PR(   "ldrlesh	r12, [r",11,48,", -r",10,24,"]!")
	TEST_PR(   "ldrsh	r2, [r",3,  24,"], r",4, 48,"")
	TEST_PR(   "ldrsh	r10, [r",9, 48,"], -r",11,24,"")
	TEST_UNSUPPORTED(".word 0xe1bfc0fa	@ ldrsh r12, [pc, r10]!")
	TEST_UNSUPPORTED(".word 0xe099f0fb	@ ldrsh pc, [r9], r11")

	TEST_P(    "ldrsh	r0, [r",0,  24,", #-1]")
	TEST_P(    "ldreqsh	r14, [r",13,0 ,", #1]")
	TEST_P(    "ldrsh	r1, [r",2,  24,", #4]!")
	TEST_P(    "ldrnesh	r12, [r",11,24,", #-4]!")
	TEST_P(    "ldrsh	r2, [r",3,  24,"], #48")
	TEST_P(    "ldrsh	r10, [r",9, 64,"], #-48")
	TEST(      "ldrsh	r0, [pc, #0]")
	TEST_UNSUPPORTED(".word 0xe1ffc3f0	@ ldrsh r12, [pc, #48]!")
	TEST_UNSUPPORTED(".word 0xe0d9f3f0	@ ldrsh pc, [r9], #48")

#if __LINUX_ARM_ARCH__ >= 7
	TEST_UNSUPPORTED("strht	r1, [r2], r3")
	TEST_UNSUPPORTED("ldrht	r1, [r2], r3")
	TEST_UNSUPPORTED("strht	r1, [r2], #48")
	TEST_UNSUPPORTED("ldrht	r1, [r2], #48")
	TEST_UNSUPPORTED("ldrsbt	r1, [r2], r3")
	TEST_UNSUPPORTED("ldrsbt	r1, [r2], #48")
	TEST_UNSUPPORTED("ldrsht	r1, [r2], r3")
	TEST_UNSUPPORTED("ldrsht	r1, [r2], #48")
#endif

	TEST_RPR(  "strd	r",0, VAL1,", [r",1, 48,", -r",2,24,"]")
	TEST_RPR(  "strccd	r",8, VAL2,", [r",13,0, ", r",12,48,"]")
	TEST_RPR(  "strd	r",4, VAL1,", [r",2, 24,", r",3, 48,"]!")
	TEST_RPR(  "strcsd	r",12,VAL2,", [r",11,48,", -r",10,24,"]!")
	TEST_RPR(  "strd	r",2, VAL1,", [r",3, 24,"], r",4,48,"")
	TEST_RPR(  "strd	r",10,VAL2,", [r",9, 48,"], -r",7,24,"")
	TEST_UNSUPPORTED(".word 0xe1afc0fa	@ strd r12, [pc, r10]!")

	TEST_PR(   "ldrd	r0, [r",0, 48,", -r",2,24,"]")
	TEST_PR(   "ldrmid	r8, [r",13,0, ", r",12,48,"]")
	TEST_PR(   "ldrd	r4, [r",2, 24,", r",3, 48,"]!")
	TEST_PR(   "ldrpld	r6, [r",11,48,", -r",10,24,"]!")
	TEST_PR(   "ldrd	r2, [r",5, 24,"], r",4,48,"")
	TEST_PR(   "ldrd	r10, [r",9,48,"], -r",7,24,"")
	TEST_UNSUPPORTED(".word 0xe1afc0da	@ ldrd r12, [pc, r10]!")
	TEST_UNSUPPORTED(".word 0xe089f0db	@ ldrd pc, [r9], r11")
	TEST_UNSUPPORTED(".word 0xe089e0db	@ ldrd lr, [r9], r11")
	TEST_UNSUPPORTED(".word 0xe089c0df	@ ldrd r12, [r9], pc")

	TEST_RP(   "strd	r",0, VAL1,", [r",1, 24,", #-8]")
	TEST_RP(   "strvsd	r",8, VAL2,", [r",13,0, ", #8]")
	TEST_RP(   "strd	r",4, VAL1,", [r",2, 24,", #16]!")
	TEST_RP(   "strvcd	r",12,VAL2,", [r",11,24,", #-16]!")
	TEST_RP(   "strd	r",2, VAL1,", [r",4, 24,"], #48")
	TEST_RP(   "strd	r",10,VAL2,", [r",9, 64,"], #-48")
	TEST_UNSUPPORTED(".word 0xe1efc3f0	@ strd r12, [pc, #48]!")

	TEST_P(    "ldrd	r0, [r",0, 24,", #-8]")
	TEST_P(    "ldrhid	r8, [r",13,0, ", #8]")
	TEST_P(    "ldrd	r4, [r",2, 24,", #16]!")
	TEST_P(    "ldrlsd	r6, [r",11,24,", #-16]!")
	TEST_P(    "ldrd	r2, [r",5, 24,"], #48")
	TEST_P(    "ldrd	r10, [r",9,6,"], #-48")
	TEST_UNSUPPORTED(".word 0xe1efc3d0	@ ldrd r12, [pc, #48]!")
	TEST_UNSUPPORTED(".word 0xe0c9f3d0	@ ldrd pc, [r9], #48")
	TEST_UNSUPPORTED(".word 0xe0c9e3d0	@ ldrd lr, [r9], #48")

	TEST_GROUP("Miscellaneous")

#if __LINUX_ARM_ARCH__ >= 7
	TEST("movw	r0, #0")
	TEST("movw	r0, #0xffff")
	TEST("movw	lr, #0xffff")
	TEST_UNSUPPORTED(".word 0xe300f000	@ movw pc, #0")
	TEST_R("movt	r",0, VAL1,", #0")
	TEST_R("movt	r",0, VAL2,", #0xffff")
	TEST_R("movt	r",14,VAL1,", #0xffff")
	TEST_UNSUPPORTED(".word 0xe340f000	@ movt pc, #0")
#endif

	TEST_UNSUPPORTED("msr	cpsr, 0x13")
	TEST_UNSUPPORTED("msr	cpsr_f, 0xf0000000")
	TEST_UNSUPPORTED("msr	spsr, 0x13")

#if __LINUX_ARM_ARCH__ >= 7
	TEST_SUPPORTED("yield")
	TEST("sev")
	TEST("nop")
	TEST("wfi")
	TEST_SUPPORTED("wfe")
	TEST_UNSUPPORTED("dbg #0")
#endif

	TEST_GROUP("Load/store word and unsigned byte")

#define LOAD_STORE(byte)	\
	TEST_RP( "str"byte"	r",0, VAL1,", [r",1, 24,", #-2]")	\
	TEST_RP( "str"byte"	r",14,VAL2,", [r",13,0, ", #2]")	\
	TEST_RP( "str"byte"	r",1, VAL1,", [r",2, 24,", #4]!")	\
	TEST_RP( "str"byte"	r",12,VAL2,", [r",11,24,", #-4]!")	\
	TEST_RP( "str"byte"	r",2, VAL1,", [r",3, 24,"], #48")	\
	TEST_RP( "str"byte"	r",10,VAL2,", [r",9, 64,"], #-48")	\
	TEST_RPR("str"byte"	r",0, VAL1,", [r",1, 48,", -r",2, 24,"]")	\
	TEST_RPR("str"byte"	r",14,VAL2,", [r",13,0, ", r",12, 48,"]")	\
	TEST_RPR("str"byte"	r",1, VAL1,", [r",2, 24,", r",3,  48,"]!")	\
	TEST_RPR("str"byte"	r",12,VAL2,", [r",11,48,", -r",10,24,"]!")	\
	TEST_RPR("str"byte"	r",2, VAL1,", [r",3, 24,"], r",4, 48,"")	\
	TEST_RPR("str"byte"	r",10,VAL2,", [r",9, 48,"], -r",11,24,"")	\
	TEST_RPR("str"byte"	r",0, VAL1,", [r",1, 24,", r",2,  32,", asl #1]")	\
	TEST_RPR("str"byte"	r",14,VAL2,", [r",13,0, ", r",12, 32,", lsr #2]")	\
	TEST_RPR("str"byte"	r",1, VAL1,", [r",2, 24,", r",3,  32,", asr #3]!")	\
	TEST_RPR("str"byte"	r",12,VAL2,", [r",11,24,", r",10, 4,", ror #31]!")	\
	TEST_P(  "ldr"byte"	r0, [r",0,  24,", #-2]")	\
	TEST_P(  "ldr"byte"	r14, [r",13,0, ", #2]")	\
	TEST_P(  "ldr"byte"	r1, [r",2,  24,", #4]!")	\
	TEST_P(  "ldr"byte"	r12, [r",11,24,", #-4]!")	\
	TEST_P(  "ldr"byte"	r2, [r",3,  24,"], #48")	\
	TEST_P(  "ldr"byte"	r10, [r",9, 64,"], #-48")	\
	TEST_PR( "ldr"byte"	r0, [r",0,  48,", -r",2, 24,"]")	\
	TEST_PR( "ldr"byte"	r14, [r",13,0, ", r",12, 48,"]")	\
	TEST_PR( "ldr"byte"	r1, [r",2,  24,", r",3,  48,"]!")	\
	TEST_PR( "ldr"byte"	r12, [r",11,48,", -r",10,24,"]!")	\
	TEST_PR( "ldr"byte"	r2, [r",3,  24,"], r",4, 48,"")	\
	TEST_PR( "ldr"byte"	r10, [r",9, 48,"], -r",11,24,"")	\
	TEST_PR( "ldr"byte"	r0, [r",0,  24,", r",2,  32,", asl #1]")	\
	TEST_PR( "ldr"byte"	r14, [r",13,0, ", r",12, 32,", lsr #2]")	\
	TEST_PR( "ldr"byte"	r1, [r",2,  24,", r",3,  32,", asr #3]!")	\
	TEST_PR( "ldr"byte"	r12, [r",11,24,", r",10, 4,", ror #31]!")	\
	TEST(    "ldr"byte"	r0, [pc, #0]")	\
	TEST_R(  "ldr"byte"	r12, [pc, r",14,0,"]")

	LOAD_STORE("")
	TEST_P(   "str	pc, [r",0,0,", #15*4]")
	TEST_R(   "str	pc, [sp, r",2,15*4,"]")
	TEST_BF(  "ldr	pc, [sp, #15*4]")
	TEST_BF_R("ldr	pc, [sp, r",2,15*4,"]")

	TEST_P(   "str	sp, [r",0,0,", #13*4]")
	TEST_R(   "str	sp, [sp, r",2,13*4,"]")
	TEST_BF(  "ldr	sp, [sp, #13*4]")
	TEST_BF_R("ldr	sp, [sp, r",2,13*4,"]")

#ifdef CONFIG_THUMB2_KERNEL
	TEST_ARM_TO_THUMB_INTERWORK_P("ldr	pc, [r",0,0,", #15*4]")
#endif
	TEST_UNSUPPORTED(".word 0xe5af6008	@ str r6, [pc, #8]!")
	TEST_UNSUPPORTED(".word 0xe7af6008	@ str r6, [pc, r8]!")
	TEST_UNSUPPORTED(".word 0xe5bf6008	@ ldr r6, [pc, #8]!")
	TEST_UNSUPPORTED(".word 0xe7bf6008	@ ldr r6, [pc, r8]!")
	TEST_UNSUPPORTED(".word 0xe788600f	@ str r6, [r8, pc]")
	TEST_UNSUPPORTED(".word 0xe798600f	@ ldr r6, [r8, pc]")

	LOAD_STORE("b")
	TEST_UNSUPPORTED(".word 0xe5f7f008	@ ldrb pc, [r7, #8]!")
	TEST_UNSUPPORTED(".word 0xe7f7f008	@ ldrb pc, [r7, r8]!")
	TEST_UNSUPPORTED(".word 0xe5ef6008	@ strb r6, [pc, #8]!")
	TEST_UNSUPPORTED(".word 0xe7ef6008	@ strb r6, [pc, r3]!")
	TEST_UNSUPPORTED(".word 0xe5ff6008	@ ldrb r6, [pc, #8]!")
	TEST_UNSUPPORTED(".word 0xe7ff6008	@ ldrb r6, [pc, r3]!")

	TEST_UNSUPPORTED("ldrt	r0, [r1], #4")
	TEST_UNSUPPORTED("ldrt	r1, [r2], r3")
	TEST_UNSUPPORTED("strt	r2, [r3], #4")
	TEST_UNSUPPORTED("strt	r3, [r4], r5")
	TEST_UNSUPPORTED("ldrbt	r4, [r5], #4")
	TEST_UNSUPPORTED("ldrbt	r5, [r6], r7")
	TEST_UNSUPPORTED("strbt	r6, [r7], #4")
	TEST_UNSUPPORTED("strbt	r7, [r8], r9")

#if __LINUX_ARM_ARCH__ >= 7
	TEST_GROUP("Parallel addition and subtraction, signed")

	TEST_UNSUPPORTED(".word 0xe6000010") /* Unallocated space */
	TEST_UNSUPPORTED(".word 0xe60fffff") /* Unallocated space */

	TEST_RR(    "sadd16	r0, r",0,  HH1,", r",1, HH2,"")
	TEST_RR(    "sadd16	r14, r",12,HH2,", r",10,HH1,"")
	TEST_UNSUPPORTED(".word 0xe61cff1a	@ sadd16	pc, r12, r10")
	TEST_RR(    "sasx	r0, r",0,  HH1,", r",1, HH2,"")
	TEST_RR(    "sasx	r14, r",12,HH2,", r",10,HH1,"")
	TEST_UNSUPPORTED(".word 0xe61cff3a	@ sasx	pc, r12, r10")
	TEST_RR(    "ssax	r0, r",0,  HH1,", r",1, HH2,"")
	TEST_RR(    "ssax	r14, r",12,HH2,", r",10,HH1,"")
	TEST_UNSUPPORTED(".word 0xe61cff5a	@ ssax	pc, r12, r10")
	TEST_RR(    "ssub16	r0, r",0,  HH1,", r",1, HH2,"")
	TEST_RR(    "ssub16	r14, r",12,HH2,", r",10,HH1,"")
	TEST_UNSUPPORTED(".word 0xe61cff7a	@ ssub16	pc, r12, r10")
	TEST_RR(    "sadd8	r0, r",0,  HH1,", r",1, HH2,"")
	TEST_RR(    "sadd8	r14, r",12,HH2,", r",10,HH1,"")
	TEST_UNSUPPORTED(".word 0xe61cff9a	@ sadd8	pc, r12, r10")
	TEST_UNSUPPORTED(".word 0xe61000b0") /* Unallocated space */
	TEST_UNSUPPORTED(".word 0xe61fffbf") /* Unallocated space */
	TEST_UNSUPPORTED(".word 0xe61000d0") /* Unallocated space */
	TEST_UNSUPPORTED(".word 0xe61fffdf") /* Unallocated space */
	TEST_RR(    "ssub8	r0, r",0,  HH1,", r",1, HH2,"")
	TEST_RR(    "ssub8	r14, r",12,HH2,", r",10,HH1,"")
	TEST_UNSUPPORTED(".word 0xe61cfffa	@ ssub8	pc, r12, r10")

	TEST_RR(    "qadd16	r0, r",0,  HH1,", r",1, HH2,"")
	TEST_RR(    "qadd16	r14, r",12,HH2,", r",10,HH1,"")
	TEST_UNSUPPORTED(".word 0xe62cff1a	@ qadd16	pc, r12, r10")
	TEST_RR(    "qasx	r0, r",0,  HH1,", r",1, HH2,"")
	TEST_RR(    "qasx	r14, r",12,HH2,", r",10,HH1,"")
	TEST_UNSUPPORTED(".word 0xe62cff3a	@ qasx	pc, r12, r10")
	TEST_RR(    "qsax	r0, r",0,  HH1,", r",1, HH2,"")
	TEST_RR(    "qsax	r14, r",12,HH2,", r",10,HH1,"")
	TEST_UNSUPPORTED(".word 0xe62cff5a	@ qsax	pc, r12, r10")
	TEST_RR(    "qsub16	r0, r",0,  HH1,", r",1, HH2,"")
	TEST_RR(    "qsub16	r14, r",12,HH2,", r",10,HH1,"")
	TEST_UNSUPPORTED(".word 0xe62cff7a	@ qsub16	pc, r12, r10")
	TEST_RR(    "qadd8	r0, r",0,  HH1,", r",1, HH2,"")
	TEST_RR(    "qadd8	r14, r",12,HH2,", r",10,HH1,"")
	TEST_UNSUPPORTED(".word 0xe62cff9a	@ qadd8	pc, r12, r10")
	TEST_UNSUPPORTED(".word 0xe62000b0") /* Unallocated space */
	TEST_UNSUPPORTED(".word 0xe62fffbf") /* Unallocated space */
	TEST_UNSUPPORTED(".word 0xe62000d0") /* Unallocated space */
	TEST_UNSUPPORTED(".word 0xe62fffdf") /* Unallocated space */
	TEST_RR(    "qsub8	r0, r",0,  HH1,", r",1, HH2,"")
	TEST_RR(    "qsub8	r14, r",12,HH2,", r",10,HH1,"")
	TEST_UNSUPPORTED(".word 0xe62cfffa	@ qsub8	pc, r12, r10")

	TEST_RR(    "shadd16	r0, r",0,  HH1,", r",1, HH2,"")
	TEST_RR(    "shadd16	r14, r",12,HH2,", r",10,HH1,"")
	TEST_UNSUPPORTED(".word 0xe63cff1a	@ shadd16	pc, r12, r10")
	TEST_RR(    "shasx	r0, r",0,  HH1,", r",1, HH2,"")
	TEST_RR(    "shasx	r14, r",12,HH2,", r",10,HH1,"")
	TEST_UNSUPPORTED(".word 0xe63cff3a	@ shasx	pc, r12, r10")
	TEST_RR(    "shsax	r0, r",0,  HH1,", r",1, HH2,"")
	TEST_RR(    "shsax	r14, r",12,HH2,", r",10,HH1,"")
	TEST_UNSUPPORTED(".word 0xe63cff5a	@ shsax	pc, r12, r10")
	TEST_RR(    "shsub16	r0, r",0,  HH1,", r",1, HH2,"")
	TEST_RR(    "shsub16	r14, r",12,HH2,", r",10,HH1,"")
	TEST_UNSUPPORTED(".word 0xe63cff7a	@ shsub16	pc, r12, r10")
	TEST_RR(    "shadd8	r0, r",0,  HH1,", r",1, HH2,"")
	TEST_RR(    "shadd8	r14, r",12,HH2,", r",10,HH1,"")
	TEST_UNSUPPORTED(".word 0xe63cff9a	@ shadd8	pc, r12, r10")
	TEST_UNSUPPORTED(".word 0xe63000b0") /* Unallocated space */
	TEST_UNSUPPORTED(".word 0xe63fffbf") /* Unallocated space */
	TEST_UNSUPPORTED(".word 0xe63000d0") /* Unallocated space */
	TEST_UNSUPPORTED(".word 0xe63fffdf") /* Unallocated space */
	TEST_RR(    "shsub8	r0, r",0,  HH1,", r",1, HH2,"")
	TEST_RR(    "shsub8	r14, r",12,HH2,", r",10,HH1,"")
	TEST_UNSUPPORTED(".word 0xe63cfffa	@ shsub8	pc, r12, r10")

	TEST_GROUP("Parallel addition and subtraction, unsigned")

	TEST_UNSUPPORTED(".word 0xe6400010") /* Unallocated space */
	TEST_UNSUPPORTED(".word 0xe64fffff") /* Unallocated space */

	TEST_RR(    "uadd16	r0, r",0,  HH1,", r",1, HH2,"")
	TEST_RR(    "uadd16	r14, r",12,HH2,", r",10,HH1,"")
	TEST_UNSUPPORTED(".word 0xe65cff1a	@ uadd16	pc, r12, r10")
	TEST_RR(    "uasx	r0, r",0,  HH1,", r",1, HH2,"")
	TEST_RR(    "uasx	r14, r",12,HH2,", r",10,HH1,"")
	TEST_UNSUPPORTED(".word 0xe65cff3a	@ uasx	pc, r12, r10")
	TEST_RR(    "usax	r0, r",0,  HH1,", r",1, HH2,"")
	TEST_RR(    "usax	r14, r",12,HH2,", r",10,HH1,"")
	TEST_UNSUPPORTED(".word 0xe65cff5a	@ usax	pc, r12, r10")
	TEST_RR(    "usub16	r0, r",0,  HH1,", r",1, HH2,"")
	TEST_RR(    "usub16	r14, r",12,HH2,", r",10,HH1,"")
	TEST_UNSUPPORTED(".word 0xe65cff7a	@ usub16	pc, r12, r10")
	TEST_RR(    "uadd8	r0, r",0,  HH1,", r",1, HH2,"")
	TEST_RR(    "uadd8	r14, r",12,HH2,", r",10,HH1,"")
	TEST_UNSUPPORTED(".word 0xe65cff9a	@ uadd8	pc, r12, r10")
	TEST_UNSUPPORTED(".word 0xe65000b0") /* Unallocated space */
	TEST_UNSUPPORTED(".word 0xe65fffbf") /* Unallocated space */
	TEST_UNSUPPORTED(".word 0xe65000d0") /* Unallocated space */
	TEST_UNSUPPORTED(".word 0xe65fffdf") /* Unallocated space */
	TEST_RR(    "usub8	r0, r",0,  HH1,", r",1, HH2,"")
	TEST_RR(    "usub8	r14, r",12,HH2,", r",10,HH1,"")
	TEST_UNSUPPORTED(".word 0xe65cfffa	@ usub8	pc, r12, r10")

	TEST_RR(    "uqadd16	r0, r",0,  HH1,", r",1, HH2,"")
	TEST_RR(    "uqadd16	r14, r",12,HH2,", r",10,HH1,"")
	TEST_UNSUPPORTED(".word 0xe66cff1a	@ uqadd16	pc, r12, r10")
	TEST_RR(    "uqasx	r0, r",0,  HH1,", r",1, HH2,"")
	TEST_RR(    "uqasx	r14, r",12,HH2,", r",10,HH1,"")
	TEST_UNSUPPORTED(".word 0xe66cff3a	@ uqasx	pc, r12, r10")
	TEST_RR(    "uqsax	r0, r",0,  HH1,", r",1, HH2,"")
	TEST_RR(    "uqsax	r14, r",12,HH2,", r",10,HH1,"")
	TEST_UNSUPPORTED(".word 0xe66cff5a	@ uqsax	pc, r12, r10")
	TEST_RR(    "uqsub16	r0, r",0,  HH1,", r",1, HH2,"")
	TEST_RR(    "uqsub16	r14, r",12,HH2,", r",10,HH1,"")
	TEST_UNSUPPORTED(".word 0xe66cff7a	@ uqsub16	pc, r12, r10")
	TEST_RR(    "uqadd8	r0, r",0,  HH1,", r",1, HH2,"")
	TEST_RR(    "uqadd8	r14, r",12,HH2,", r",10,HH1,"")
	TEST_UNSUPPORTED(".word 0xe66cff9a	@ uqadd8	pc, r12, r10")
	TEST_UNSUPPORTED(".word 0xe66000b0") /* Unallocated space */
	TEST_UNSUPPORTED(".word 0xe66fffbf") /* Unallocated space */
	TEST_UNSUPPORTED(".word 0xe66000d0") /* Unallocated space */
	TEST_UNSUPPORTED(".word 0xe66fffdf") /* Unallocated space */
	TEST_RR(    "uqsub8	r0, r",0,  HH1,", r",1, HH2,"")
	TEST_RR(    "uqsub8	r14, r",12,HH2,", r",10,HH1,"")
	TEST_UNSUPPORTED(".word 0xe66cfffa	@ uqsub8	pc, r12, r10")

	TEST_RR(    "uhadd16	r0, r",0,  HH1,",
r",1, HH2,"") 815 + TEST_RR( "uhadd16 r14, r",12,HH2,", r",10,HH1,"") 816 + TEST_UNSUPPORTED(".word 0xe67cff1a @ uhadd16 pc, r12, r10") 817 + TEST_RR( "uhasx r0, r",0, HH1,", r",1, HH2,"") 818 + TEST_RR( "uhasx r14, r",12,HH2,", r",10,HH1,"") 819 + TEST_UNSUPPORTED(".word 0xe67cff3a @ uhasx pc, r12, r10") 820 + TEST_RR( "uhsax r0, r",0, HH1,", r",1, HH2,"") 821 + TEST_RR( "uhsax r14, r",12,HH2,", r",10,HH1,"") 822 + TEST_UNSUPPORTED(".word 0xe67cff5a @ uhsax pc, r12, r10") 823 + TEST_RR( "uhsub16 r0, r",0, HH1,", r",1, HH2,"") 824 + TEST_RR( "uhsub16 r14, r",12,HH2,", r",10,HH1,"") 825 + TEST_UNSUPPORTED(".word 0xe67cff7a @ uhsub16 pc, r12, r10") 826 + TEST_RR( "uhadd8 r0, r",0, HH1,", r",1, HH2,"") 827 + TEST_RR( "uhadd8 r14, r",12,HH2,", r",10,HH1,"") 828 + TEST_UNSUPPORTED(".word 0xe67cff9a @ uhadd8 pc, r12, r10") 829 + TEST_UNSUPPORTED(".word 0xe67000b0") /* Unallocated space */ 830 + TEST_UNSUPPORTED(".word 0xe67fffbf") /* Unallocated space */ 831 + TEST_UNSUPPORTED(".word 0xe67000d0") /* Unallocated space */ 832 + TEST_UNSUPPORTED(".word 0xe67fffdf") /* Unallocated space */ 833 + TEST_RR( "uhsub8 r0, r",0, HH1,", r",1, HH2,"") 834 + TEST_RR( "uhsub8 r14, r",12,HH2,", r",10,HH1,"") 835 + TEST_UNSUPPORTED(".word 0xe67cfffa @ uhsub8 pc, r12, r10") 836 + TEST_UNSUPPORTED(".word 0xe67feffa @ uhsub8 r14, pc, r10") 837 + TEST_UNSUPPORTED(".word 0xe67cefff @ uhsub8 r14, r12, pc") 838 + #endif /* __LINUX_ARM_ARCH__ >= 7 */ 839 + 840 + #if __LINUX_ARM_ARCH__ >= 6 841 + TEST_GROUP("Packing, unpacking, saturation, and reversal") 842 + 843 + TEST_RR( "pkhbt r0, r",0, HH1,", r",1, HH2,"") 844 + TEST_RR( "pkhbt r14,r",12, HH1,", r",10,HH2,", lsl #2") 845 + TEST_UNSUPPORTED(".word 0xe68cf11a @ pkhbt pc, r12, r10, lsl #2") 846 + TEST_RR( "pkhtb r0, r",0, HH1,", r",1, HH2,"") 847 + TEST_RR( "pkhtb r14,r",12, HH1,", r",10,HH2,", asr #2") 848 + TEST_UNSUPPORTED(".word 0xe68cf15a @ pkhtb pc, r12, r10, asr #2") 849 + TEST_UNSUPPORTED(".word 0xe68fe15a @ pkhtb r14, pc, r10, asr 
#2") 850 + TEST_UNSUPPORTED(".word 0xe68ce15f @ pkhtb r14, r12, pc, asr #2") 851 + TEST_UNSUPPORTED(".word 0xe6900010") /* Unallocated space */ 852 + TEST_UNSUPPORTED(".word 0xe69fffdf") /* Unallocated space */ 853 + 854 + TEST_R( "ssat r0, #24, r",0, VAL1,"") 855 + TEST_R( "ssat r14, #24, r",12, VAL2,"") 856 + TEST_R( "ssat r0, #24, r",0, VAL1,", lsl #8") 857 + TEST_R( "ssat r14, #24, r",12, VAL2,", asr #8") 858 + TEST_UNSUPPORTED(".word 0xe6b7f01c @ ssat pc, #24, r12") 859 + 860 + TEST_R( "usat r0, #24, r",0, VAL1,"") 861 + TEST_R( "usat r14, #24, r",12, VAL2,"") 862 + TEST_R( "usat r0, #24, r",0, VAL1,", lsl #8") 863 + TEST_R( "usat r14, #24, r",12, VAL2,", asr #8") 864 + TEST_UNSUPPORTED(".word 0xe6f7f01c @ usat pc, #24, r12") 865 + 866 + TEST_RR( "sxtab16 r0, r",0, HH1,", r",1, HH2,"") 867 + TEST_RR( "sxtab16 r14,r",12, HH2,", r",10,HH1,", ror #8") 868 + TEST_R( "sxtb16 r8, r",7, HH1,"") 869 + TEST_UNSUPPORTED(".word 0xe68cf47a @ sxtab16 pc,r12, r10, ror #8") 870 + 871 + TEST_RR( "sel r0, r",0, VAL1,", r",1, VAL2,"") 872 + TEST_RR( "sel r14, r",12,VAL1,", r",10, VAL2,"") 873 + TEST_UNSUPPORTED(".word 0xe68cffba @ sel pc, r12, r10") 874 + TEST_UNSUPPORTED(".word 0xe68fefba @ sel r14, pc, r10") 875 + TEST_UNSUPPORTED(".word 0xe68cefbf @ sel r14, r12, pc") 876 + 877 + TEST_R( "ssat16 r0, #12, r",0, HH1,"") 878 + TEST_R( "ssat16 r14, #12, r",12, HH2,"") 879 + TEST_UNSUPPORTED(".word 0xe6abff3c @ ssat16 pc, #12, r12") 880 + 881 + TEST_RR( "sxtab r0, r",0, HH1,", r",1, HH2,"") 882 + TEST_RR( "sxtab r14,r",12, HH2,", r",10,HH1,", ror #8") 883 + TEST_R( "sxtb r8, r",7, HH1,"") 884 + TEST_UNSUPPORTED(".word 0xe6acf47a @ sxtab pc,r12, r10, ror #8") 885 + 886 + TEST_R( "rev r0, r",0, VAL1,"") 887 + TEST_R( "rev r14, r",12, VAL2,"") 888 + TEST_UNSUPPORTED(".word 0xe6bfff3c @ rev pc, r12") 889 + 890 + TEST_RR( "sxtah r0, r",0, HH1,", r",1, HH2,"") 891 + TEST_RR( "sxtah r14,r",12, HH2,", r",10,HH1,", ror #8") 892 + TEST_R( "sxth r8, r",7, HH1,"") 893 + 
TEST_UNSUPPORTED(".word 0xe6bcf47a @ sxtah pc,r12, r10, ror #8")
894 +
895 + TEST_R( "rev16 r0, r",0, VAL1,"")
896 + TEST_R( "rev16 r14, r",12, VAL2,"")
897 + TEST_UNSUPPORTED(".word 0xe6bfffbc @ rev16 pc, r12")
898 +
899 + TEST_RR( "uxtab16 r0, r",0, HH1,", r",1, HH2,"")
900 + TEST_RR( "uxtab16 r14,r",12, HH2,", r",10,HH1,", ror #8")
901 + TEST_R( "uxtb16 r8, r",7, HH1,"")
902 + TEST_UNSUPPORTED(".word 0xe6ccf47a @ uxtab16 pc,r12, r10, ror #8")
903 +
904 + TEST_R( "usat16 r0, #12, r",0, HH1,"")
905 + TEST_R( "usat16 r14, #12, r",12, HH2,"")
906 + TEST_UNSUPPORTED(".word 0xe6ecff3c @ usat16 pc, #12, r12")
907 + TEST_UNSUPPORTED(".word 0xe6ecef3f @ usat16 r14, #12, pc")
908 +
909 + TEST_RR( "uxtab r0, r",0, HH1,", r",1, HH2,"")
910 + TEST_RR( "uxtab r14,r",12, HH2,", r",10,HH1,", ror #8")
911 + TEST_R( "uxtb r8, r",7, HH1,"")
912 + TEST_UNSUPPORTED(".word 0xe6ecf47a @ uxtab pc,r12, r10, ror #8")
913 +
914 + #if __LINUX_ARM_ARCH__ >= 7
915 + TEST_R( "rbit r0, r",0, VAL1,"")
916 + TEST_R( "rbit r14, r",12, VAL2,"")
917 + TEST_UNSUPPORTED(".word 0xe6ffff3c @ rbit pc, r12")
918 + #endif
919 +
920 + TEST_RR( "uxtah r0, r",0, HH1,", r",1, HH2,"")
921 + TEST_RR( "uxtah r14,r",12, HH2,", r",10,HH1,", ror #8")
922 + TEST_R( "uxth r8, r",7, HH1,"")
923 + TEST_UNSUPPORTED(".word 0xe6fff077 @ uxth pc, r7")
924 + TEST_UNSUPPORTED(".word 0xe6ff807f @ uxth r8, pc")
925 + TEST_UNSUPPORTED(".word 0xe6fcf47a @ uxtah pc, r12, r10, ror #8")
926 + TEST_UNSUPPORTED(".word 0xe6fce47f @ uxtah r14, r12, pc, ror #8")
927 +
928 + TEST_R( "revsh r0, r",0, VAL1,"")
929 + TEST_R( "revsh r14, r",12, VAL2,"")
930 + TEST_UNSUPPORTED(".word 0xe6ffff3c @ revsh pc, r12")
931 + TEST_UNSUPPORTED(".word 0xe6ffef3f @ revsh r14, pc")
932 +
933 + TEST_UNSUPPORTED(".word 0xe6900070") /* Unallocated space */
934 + TEST_UNSUPPORTED(".word 0xe69fff7f") /* Unallocated space */
935 +
936 + TEST_UNSUPPORTED(".word 0xe6d00070") /* Unallocated space */
937 + TEST_UNSUPPORTED(".word 0xe6dfff7f") /* Unallocated space */
938 + #endif /* __LINUX_ARM_ARCH__ >= 6 */
939 +
940 + #if __LINUX_ARM_ARCH__ >= 6
941 + TEST_GROUP("Signed multiplies")
942 +
943 + TEST_RRR( "smlad r0, r",0, HH1,", r",1, HH2,", r",2, VAL1,"")
944 + TEST_RRR( "smlad r14, r",12,HH2,", r",10,HH1,", r",8, VAL2,"")
945 + TEST_UNSUPPORTED(".word 0xe70f8a1c @ smlad pc, r12, r10, r8")
946 + TEST_RRR( "smladx r0, r",0, HH1,", r",1, HH2,", r",2, VAL1,"")
947 + TEST_RRR( "smladx r14, r",12,HH2,", r",10,HH1,", r",8, VAL2,"")
948 + TEST_UNSUPPORTED(".word 0xe70f8a3c @ smladx pc, r12, r10, r8")
949 +
950 + TEST_RR( "smuad r0, r",0, HH1,", r",1, HH2,"")
951 + TEST_RR( "smuad r14, r",12,HH2,", r",10,HH1,"")
952 + TEST_UNSUPPORTED(".word 0xe70ffa1c @ smuad pc, r12, r10")
953 + TEST_RR( "smuadx r0, r",0, HH1,", r",1, HH2,"")
954 + TEST_RR( "smuadx r14, r",12,HH2,", r",10,HH1,"")
955 + TEST_UNSUPPORTED(".word 0xe70ffa3c @ smuadx pc, r12, r10")
956 +
957 + TEST_RRR( "smlsd r0, r",0, HH1,", r",1, HH2,", r",2, VAL1,"")
958 + TEST_RRR( "smlsd r14, r",12,HH2,", r",10,HH1,", r",8, VAL2,"")
959 + TEST_UNSUPPORTED(".word 0xe70f8a5c @ smlsd pc, r12, r10, r8")
960 + TEST_RRR( "smlsdx r0, r",0, HH1,", r",1, HH2,", r",2, VAL1,"")
961 + TEST_RRR( "smlsdx r14, r",12,HH2,", r",10,HH1,", r",8, VAL2,"")
962 + TEST_UNSUPPORTED(".word 0xe70f8a7c @ smlsdx pc, r12, r10, r8")
963 +
964 + TEST_RR( "smusd r0, r",0, HH1,", r",1, HH2,"")
965 + TEST_RR( "smusd r14, r",12,HH2,", r",10,HH1,"")
966 + TEST_UNSUPPORTED(".word 0xe70ffa5c @ smusd pc, r12, r10")
967 + TEST_RR( "smusdx r0, r",0, HH1,", r",1, HH2,"")
968 + TEST_RR( "smusdx r14, r",12,HH2,", r",10,HH1,"")
969 + TEST_UNSUPPORTED(".word 0xe70ffa7c @ smusdx pc, r12, r10")
970 +
971 + TEST_RRRR( "smlald r",0, VAL1,", r",1, VAL2, ", r",0, HH1,", r",1, HH2)
972 + TEST_RRRR( "smlald r",11,VAL2,", r",10,VAL1, ", r",9, HH2,", r",8, HH1)
973 + TEST_UNSUPPORTED(".word 0xe74af819 @ smlald pc, r10, r9, r8")
974 + TEST_UNSUPPORTED(".word 0xe74fb819 @ smlald r11, pc, r9, r8")
975 + TEST_UNSUPPORTED(".word 0xe74ab81f @ smlald r11, r10, pc, r8")
976 + TEST_UNSUPPORTED(".word 0xe74abf19 @ smlald r11, r10, r9, pc")
977 +
978 + TEST_RRRR( "smlaldx r",0, VAL1,", r",1, VAL2, ", r",0, HH1,", r",1, HH2)
979 + TEST_RRRR( "smlaldx r",11,VAL2,", r",10,VAL1, ", r",9, HH2,", r",8, HH1)
980 + TEST_UNSUPPORTED(".word 0xe74af839 @ smlaldx pc, r10, r9, r8")
981 + TEST_UNSUPPORTED(".word 0xe74fb839 @ smlaldx r11, pc, r9, r8")
982 +
983 + TEST_RRR( "smmla r0, r",0, VAL1,", r",1, VAL2,", r",2, VAL1,"")
984 + TEST_RRR( "smmla r14, r",12,VAL2,", r",10,VAL1,", r",8, VAL2,"")
985 + TEST_UNSUPPORTED(".word 0xe75f8a1c @ smmla pc, r12, r10, r8")
986 + TEST_RRR( "smmlar r0, r",0, VAL1,", r",1, VAL2,", r",2, VAL1,"")
987 + TEST_RRR( "smmlar r14, r",12,VAL2,", r",10,VAL1,", r",8, VAL2,"")
988 + TEST_UNSUPPORTED(".word 0xe75f8a3c @ smmlar pc, r12, r10, r8")
989 +
990 + TEST_RR( "smmul r0, r",0, VAL1,", r",1, VAL2,"")
991 + TEST_RR( "smmul r14, r",12,VAL2,", r",10,VAL1,"")
992 + TEST_UNSUPPORTED(".word 0xe75ffa1c @ smmul pc, r12, r10")
993 + TEST_RR( "smmulr r0, r",0, VAL1,", r",1, VAL2,"")
994 + TEST_RR( "smmulr r14, r",12,VAL2,", r",10,VAL1,"")
995 + TEST_UNSUPPORTED(".word 0xe75ffa3c @ smmulr pc, r12, r10")
996 +
997 + TEST_RRR( "smmls r0, r",0, VAL1,", r",1, VAL2,", r",2, VAL1,"")
998 + TEST_RRR( "smmls r14, r",12,VAL2,", r",10,VAL1,", r",8, VAL2,"")
999 + TEST_UNSUPPORTED(".word 0xe75f8adc @ smmls pc, r12, r10, r8")
1000 + TEST_RRR( "smmlsr r0, r",0, VAL1,", r",1, VAL2,", r",2, VAL1,"")
1001 + TEST_RRR( "smmlsr r14, r",12,VAL2,", r",10,VAL1,", r",8, VAL2,"")
1002 + TEST_UNSUPPORTED(".word 0xe75f8afc @ smmlsr pc, r12, r10, r8")
1003 + TEST_UNSUPPORTED(".word 0xe75e8aff @ smmlsr r14, pc, r10, r8")
1004 + TEST_UNSUPPORTED(".word 0xe75e8ffc @ smmlsr r14, r12, pc, r8")
1005 + TEST_UNSUPPORTED(".word 0xe75efafc @ smmlsr r14, r12, r10, pc")
1006 +
1007 + TEST_RR( "usad8 r0, r",0, VAL1,", r",1, VAL2,"")
1008 + TEST_RR( "usad8 r14, r",12,VAL2,", r",10,VAL1,"")
1009 + TEST_UNSUPPORTED(".word 0xe75ffa1c @ usad8 pc, r12, r10")
1010 + TEST_UNSUPPORTED(".word 0xe75efa1f @ usad8 r14, pc, r10")
1011 + TEST_UNSUPPORTED(".word 0xe75eff1c @ usad8 r14, r12, pc")
1012 +
1013 + TEST_RRR( "usada8 r0, r",0, VAL1,", r",1, VAL2,", r",2, VAL3,"")
1014 + TEST_RRR( "usada8 r14, r",12,VAL2,", r",10,VAL1,", r",8, VAL3,"")
1015 + TEST_UNSUPPORTED(".word 0xe78f8a1c @ usada8 pc, r12, r10, r8")
1016 + TEST_UNSUPPORTED(".word 0xe78e8a1f @ usada8 r14, pc, r10, r8")
1017 + TEST_UNSUPPORTED(".word 0xe78e8f1c @ usada8 r14, r12, pc, r8")
1018 + #endif /* __LINUX_ARM_ARCH__ >= 6 */
1019 +
1020 + #if __LINUX_ARM_ARCH__ >= 7
1021 + TEST_GROUP("Bit Field")
1022 +
1023 + TEST_R( "sbfx r0, r",0 , VAL1,", #0, #31")
1024 + TEST_R( "sbfxeq r14, r",12, VAL2,", #8, #16")
1025 + TEST_R( "sbfx r4, r",10, VAL1,", #16, #15")
1026 + TEST_UNSUPPORTED(".word 0xe7aff45c @ sbfx pc, r12, #8, #16")
1027 +
1028 + TEST_R( "ubfx r0, r",0 , VAL1,", #0, #31")
1029 + TEST_R( "ubfxcs r14, r",12, VAL2,", #8, #16")
1030 + TEST_R( "ubfx r4, r",10, VAL1,", #16, #15")
1031 + TEST_UNSUPPORTED(".word 0xe7eff45c @ ubfx pc, r12, #8, #16")
1032 + TEST_UNSUPPORTED(".word 0xe7efc45f @ ubfx r12, pc, #8, #16")
1033 +
1034 + TEST_R( "bfc r",0, VAL1,", #4, #20")
1035 + TEST_R( "bfcvs r",14,VAL2,", #4, #20")
1036 + TEST_R( "bfc r",7, VAL1,", #0, #31")
1037 + TEST_R( "bfc r",8, VAL2,", #0, #31")
1038 + TEST_UNSUPPORTED(".word 0xe7def01f @ bfc pc, #0, #31");
1039 +
1040 + TEST_RR( "bfi r",0, VAL1,", r",0 , VAL2,", #0, #31")
1041 + TEST_RR( "bfipl r",12,VAL1,", r",14 , VAL2,", #4, #20")
1042 + TEST_UNSUPPORTED(".word 0xe7d7f21e @ bfi pc, r14, #4, #20")
1043 +
1044 + TEST_UNSUPPORTED(".word 0x07f000f0") /* Permanently UNDEFINED */
1045 + TEST_UNSUPPORTED(".word 0x07ffffff") /* Permanently UNDEFINED */
1046 + #endif /* __LINUX_ARM_ARCH__ >= 7 */
1047 +
1048 + TEST_GROUP("Branch, branch with link, and block data transfer")
1049 +
1050 + TEST_P( "stmda r",0, 16*4,", {r0}")
1051 + TEST_P( "stmeqda r",4, 16*4,", {r0-r15}")
1052 + TEST_P( "stmneda r",8, 16*4,"!, {r8-r15}")
1053 + TEST_P( "stmda r",12,16*4,"!, {r1,r3,r5,r7,r8-r11,r14}")
1054 + TEST_P( "stmda r",13,0, "!, {pc}")
1055 +
1056 + TEST_P( "ldmda r",0, 16*4,", {r0}")
1057 + TEST_BF_P("ldmcsda r",4, 15*4,", {r0-r15}")
1058 + TEST_BF_P("ldmccda r",7, 15*4,"!, {r8-r15}")
1059 + TEST_P( "ldmda r",12,16*4,"!, {r1,r3,r5,r7,r8-r11,r14}")
1060 + TEST_BF_P("ldmda r",14,15*4,"!, {pc}")
1061 +
1062 + TEST_P( "stmia r",0, 16*4,", {r0}")
1063 + TEST_P( "stmmiia r",4, 16*4,", {r0-r15}")
1064 + TEST_P( "stmplia r",8, 16*4,"!, {r8-r15}")
1065 + TEST_P( "stmia r",12,16*4,"!, {r1,r3,r5,r7,r8-r11,r14}")
1066 + TEST_P( "stmia r",14,0, "!, {pc}")
1067 +
1068 + TEST_P( "ldmia r",0, 16*4,", {r0}")
1069 + TEST_BF_P("ldmvsia r",4, 0, ", {r0-r15}")
1070 + TEST_BF_P("ldmvcia r",7, 8*4, "!, {r8-r15}")
1071 + TEST_P( "ldmia r",12,16*4,"!, {r1,r3,r5,r7,r8-r11,r14}")
1072 + TEST_BF_P("ldmia r",14,15*4,"!, {pc}")
1073 +
1074 + TEST_P( "stmdb r",0, 16*4,", {r0}")
1075 + TEST_P( "stmhidb r",4, 16*4,", {r0-r15}")
1076 + TEST_P( "stmlsdb r",8, 16*4,"!, {r8-r15}")
1077 + TEST_P( "stmdb r",12,16*4,"!, {r1,r3,r5,r7,r8-r11,r14}")
1078 + TEST_P( "stmdb r",13,4, "!, {pc}")
1079 +
1080 + TEST_P( "ldmdb r",0, 16*4,", {r0}")
1081 + TEST_BF_P("ldmgedb r",4, 16*4,", {r0-r15}")
1082 + TEST_BF_P("ldmltdb r",7, 16*4,"!, {r8-r15}")
1083 + TEST_P( "ldmdb r",12,16*4,"!, {r1,r3,r5,r7,r8-r11,r14}")
1084 + TEST_BF_P("ldmdb r",14,16*4,"!, {pc}")
1085 +
1086 + TEST_P( "stmib r",0, 16*4,", {r0}")
1087 + TEST_P( "stmgtib r",4, 16*4,", {r0-r15}")
1088 + TEST_P( "stmleib r",8, 16*4,"!, {r8-r15}")
1089 + TEST_P( "stmib r",12,16*4,"!, {r1,r3,r5,r7,r8-r11,r14}")
1090 + TEST_P( "stmib r",13,-4, "!, {pc}")
1091 +
1092 + TEST_P( "ldmib r",0, 16*4,", {r0}")
1093 + TEST_BF_P("ldmeqib r",4, -4,", {r0-r15}")
1094 + TEST_BF_P("ldmneib r",7, 7*4,"!, {r8-r15}")
1095 + TEST_P( "ldmib r",12,16*4,"!, {r1,r3,r5,r7,r8-r11,r14}")
1096 + TEST_BF_P("ldmib r",14,14*4,"!, {pc}")
1097 +
1098 + TEST_P( "stmdb r",13,16*4,"!, {r3-r12,lr}")
1099 + TEST_P( "stmeqdb r",13,16*4,"!, {r3-r12}")
1100 + TEST_P( "stmnedb r",2, 16*4,", {r3-r12,lr}")
1101 + TEST_P( "stmdb r",13,16*4,"!, {r2-r12,lr}")
1102 + TEST_P( "stmdb r",0, 16*4,", {r0-r12}")
1103 + TEST_P( "stmdb r",0, 16*4,", {r0-r12,lr}")
1104 +
1105 + TEST_BF_P("ldmia r",13,5*4, "!, {r3-r12,pc}")
1106 + TEST_P( "ldmccia r",13,5*4, "!, {r3-r12}")
1107 + TEST_BF_P("ldmcsia r",2, 5*4, "!, {r3-r12,pc}")
1108 + TEST_BF_P("ldmia r",13,4*4, "!, {r2-r12,pc}")
1109 + TEST_P( "ldmia r",0, 16*4,", {r0-r12}")
1110 + TEST_P( "ldmia r",0, 16*4,", {r0-r12,lr}")
1111 +
1112 + #ifdef CONFIG_THUMB2_KERNEL
1113 + TEST_ARM_TO_THUMB_INTERWORK_P("ldmplia r",0,15*4,", {pc}")
1114 + TEST_ARM_TO_THUMB_INTERWORK_P("ldmmiia r",13,0,", {r0-r15}")
1115 + #endif
1116 + TEST_BF("b 2f")
1117 + TEST_BF("bl 2f")
1118 + TEST_BB("b 2b")
1119 + TEST_BB("bl 2b")
1120 +
1121 + TEST_BF("beq 2f")
1122 + TEST_BF("bleq 2f")
1123 + TEST_BB("bne 2b")
1124 + TEST_BB("blne 2b")
1125 +
1126 + TEST_BF("bgt 2f")
1127 + TEST_BF("blgt 2f")
1128 + TEST_BB("blt 2b")
1129 + TEST_BB("bllt 2b")
1130 +
1131 + TEST_GROUP("Supervisor Call, and coprocessor instructions")
1132 +
1133 + /*
1134 +  * We can't really test these by executing them, so all
1135 +  * we can do is check that probes are, or are not allowed.
1136 +  * At the moment none are allowed...
1137 +  */
1138 + #define TEST_COPROCESSOR(code) TEST_UNSUPPORTED(code)
1139 +
1140 + #define COPROCESSOR_INSTRUCTIONS_ST_LD(two,cc) \
1141 + TEST_COPROCESSOR("stc"two" 0, cr0, [r13, #4]") \
1142 + TEST_COPROCESSOR("stc"two" 0, cr0, [r13, #-4]") \
1143 + TEST_COPROCESSOR("stc"two" 0, cr0, [r13, #4]!") \
1144 + TEST_COPROCESSOR("stc"two" 0, cr0, [r13, #-4]!") \
1145 + TEST_COPROCESSOR("stc"two" 0, cr0, [r13], #4") \
1146 + TEST_COPROCESSOR("stc"two" 0, cr0, [r13], #-4") \
1147 + TEST_COPROCESSOR("stc"two" 0, cr0, [r13], {1}") \
1148 + TEST_COPROCESSOR("stc"two"l 0, cr0, [r13, #4]") \
1149 + TEST_COPROCESSOR("stc"two"l 0, cr0, [r13, #-4]") \
1150 + TEST_COPROCESSOR("stc"two"l 0, cr0, [r13, #4]!") \
1151 + TEST_COPROCESSOR("stc"two"l 0, cr0, [r13, #-4]!") \
1152 + TEST_COPROCESSOR("stc"two"l 0, cr0, [r13], #4") \
1153 + TEST_COPROCESSOR("stc"two"l 0, cr0, [r13], #-4") \
1154 + TEST_COPROCESSOR("stc"two"l 0, cr0, [r13], {1}") \
1155 + TEST_COPROCESSOR("ldc"two" 0, cr0, [r13, #4]") \
1156 + TEST_COPROCESSOR("ldc"two" 0, cr0, [r13, #-4]") \
1157 + TEST_COPROCESSOR("ldc"two" 0, cr0, [r13, #4]!") \
1158 + TEST_COPROCESSOR("ldc"two" 0, cr0, [r13, #-4]!") \
1159 + TEST_COPROCESSOR("ldc"two" 0, cr0, [r13], #4") \
1160 + TEST_COPROCESSOR("ldc"two" 0, cr0, [r13], #-4") \
1161 + TEST_COPROCESSOR("ldc"two" 0, cr0, [r13], {1}") \
1162 + TEST_COPROCESSOR("ldc"two"l 0, cr0, [r13, #4]") \
1163 + TEST_COPROCESSOR("ldc"two"l 0, cr0, [r13, #-4]") \
1164 + TEST_COPROCESSOR("ldc"two"l 0, cr0, [r13, #4]!") \
1165 + TEST_COPROCESSOR("ldc"two"l 0, cr0, [r13, #-4]!") \
1166 + TEST_COPROCESSOR("ldc"two"l 0, cr0, [r13], #4") \
1167 + TEST_COPROCESSOR("ldc"two"l 0, cr0, [r13], #-4") \
1168 + TEST_COPROCESSOR("ldc"two"l 0, cr0, [r13], {1}") \
1169 + \
1170 + TEST_COPROCESSOR( "stc"two" 0, cr0, [r15, #4]") \
1171 + TEST_COPROCESSOR( "stc"two" 0, cr0, [r15, #-4]") \
1172 + TEST_UNSUPPORTED(".word 0x"cc"daf0001 @ stc"two" 0, cr0, [r15, #4]!") \
1173 + TEST_UNSUPPORTED(".word 0x"cc"d2f0001 @ stc"two" 0, cr0, [r15, #-4]!") \
1174 + TEST_UNSUPPORTED(".word 0x"cc"caf0001 @ stc"two" 0, cr0, [r15], #4") \
1175 + TEST_UNSUPPORTED(".word 0x"cc"c2f0001 @ stc"two" 0, cr0, [r15], #-4") \
1176 + TEST_COPROCESSOR( "stc"two" 0, cr0, [r15], {1}") \
1177 + TEST_COPROCESSOR( "stc"two"l 0, cr0, [r15, #4]") \
1178 + TEST_COPROCESSOR( "stc"two"l 0, cr0, [r15, #-4]") \
1179 + TEST_UNSUPPORTED(".word 0x"cc"def0001 @ stc"two"l 0, cr0, [r15, #4]!") \
1180 + TEST_UNSUPPORTED(".word 0x"cc"d6f0001 @ stc"two"l 0, cr0, [r15, #-4]!") \
1181 + TEST_UNSUPPORTED(".word 0x"cc"cef0001 @ stc"two"l 0, cr0, [r15], #4") \
1182 + TEST_UNSUPPORTED(".word 0x"cc"c6f0001 @ stc"two"l 0, cr0, [r15], #-4") \
1183 + TEST_COPROCESSOR( "stc"two"l 0, cr0, [r15], {1}") \
1184 + TEST_COPROCESSOR( "ldc"two" 0, cr0, [r15, #4]") \
1185 + TEST_COPROCESSOR( "ldc"two" 0, cr0, [r15, #-4]") \
1186 + TEST_UNSUPPORTED(".word 0x"cc"dbf0001 @ ldc"two" 0, cr0, [r15, #4]!") \
1187 + TEST_UNSUPPORTED(".word 0x"cc"d3f0001 @ ldc"two" 0, cr0, [r15, #-4]!") \
1188 + TEST_UNSUPPORTED(".word 0x"cc"cbf0001 @ ldc"two" 0, cr0, [r15], #4") \
1189 + TEST_UNSUPPORTED(".word 0x"cc"c3f0001 @ ldc"two" 0, cr0, [r15], #-4") \
1190 + TEST_COPROCESSOR( "ldc"two" 0, cr0, [r15], {1}") \
1191 + TEST_COPROCESSOR( "ldc"two"l 0, cr0, [r15, #4]") \
1192 + TEST_COPROCESSOR( "ldc"two"l 0, cr0, [r15, #-4]") \
1193 + TEST_UNSUPPORTED(".word 0x"cc"dff0001 @ ldc"two"l 0, cr0, [r15, #4]!") \
1194 + TEST_UNSUPPORTED(".word 0x"cc"d7f0001 @ ldc"two"l 0, cr0, [r15, #-4]!") \
1195 + TEST_UNSUPPORTED(".word 0x"cc"cff0001 @ ldc"two"l 0, cr0, [r15], #4") \
1196 + TEST_UNSUPPORTED(".word 0x"cc"c7f0001 @ ldc"two"l 0, cr0, [r15], #-4") \
1197 + TEST_COPROCESSOR( "ldc"two"l 0, cr0, [r15], {1}")
1198 +
1199 + #define COPROCESSOR_INSTRUCTIONS_MC_MR(two,cc) \
1200 + \
1201 + TEST_COPROCESSOR( "mcrr"two" 0, 15, r0, r14, cr0") \
1202 + TEST_COPROCESSOR( "mcrr"two" 15, 0, r14, r0, cr15") \
1203 + TEST_UNSUPPORTED(".word 0x"cc"c4f00f0 @ mcrr"two" 0, 15, r0, r15, cr0") \
TEST_UNSUPPORTED(".word 0x"cc"c40ff0f @ mcrr"two" 15, 0, r15, r0, cr15") \
1205 + TEST_COPROCESSOR( "mrrc"two" 0, 15, r0, r14, cr0") \
1206 + TEST_COPROCESSOR( "mrrc"two" 15, 0, r14, r0, cr15") \
1207 + TEST_UNSUPPORTED(".word 0x"cc"c5f00f0 @ mrrc"two" 0, 15, r0, r15, cr0") \
1208 + TEST_UNSUPPORTED(".word 0x"cc"c50ff0f @ mrrc"two" 15, 0, r15, r0, cr15") \
1209 + TEST_COPROCESSOR( "cdp"two" 15, 15, cr15, cr15, cr15, 7") \
1210 + TEST_COPROCESSOR( "cdp"two" 0, 0, cr0, cr0, cr0, 0") \
1211 + TEST_COPROCESSOR( "mcr"two" 15, 7, r15, cr15, cr15, 7") \
1212 + TEST_COPROCESSOR( "mcr"two" 0, 0, r0, cr0, cr0, 0") \
1213 + TEST_COPROCESSOR( "mrc"two" 15, 7, r15, cr15, cr15, 7") \
1214 + TEST_COPROCESSOR( "mrc"two" 0, 0, r0, cr0, cr0, 0")
1215 +
1216 + COPROCESSOR_INSTRUCTIONS_ST_LD("","e")
1217 + COPROCESSOR_INSTRUCTIONS_MC_MR("","e")
1218 + TEST_UNSUPPORTED("svc 0")
1219 + TEST_UNSUPPORTED("svc 0xffffff")
1220 +
1221 + TEST_UNSUPPORTED("svc 0")
1222 +
1223 + TEST_GROUP("Unconditional instruction")
1224 +
1225 + #if __LINUX_ARM_ARCH__ >= 6
1226 + TEST_UNSUPPORTED("srsda sp, 0x13")
1227 + TEST_UNSUPPORTED("srsdb sp, 0x13")
1228 + TEST_UNSUPPORTED("srsia sp, 0x13")
1229 + TEST_UNSUPPORTED("srsib sp, 0x13")
1230 + TEST_UNSUPPORTED("srsda sp!, 0x13")
1231 + TEST_UNSUPPORTED("srsdb sp!, 0x13")
1232 + TEST_UNSUPPORTED("srsia sp!, 0x13")
1233 + TEST_UNSUPPORTED("srsib sp!, 0x13")
1234 +
1235 + TEST_UNSUPPORTED("rfeda sp")
1236 + TEST_UNSUPPORTED("rfedb sp")
1237 + TEST_UNSUPPORTED("rfeia sp")
1238 + TEST_UNSUPPORTED("rfeib sp")
1239 + TEST_UNSUPPORTED("rfeda sp!")
1240 + TEST_UNSUPPORTED("rfedb sp!")
1241 + TEST_UNSUPPORTED("rfeia sp!")
1242 + TEST_UNSUPPORTED("rfeib sp!")
1243 + TEST_UNSUPPORTED(".word 0xf81d0a00 @ rfeda pc")
1244 + TEST_UNSUPPORTED(".word 0xf91d0a00 @ rfedb pc")
1245 + TEST_UNSUPPORTED(".word 0xf89d0a00 @ rfeia pc")
1246 + TEST_UNSUPPORTED(".word 0xf99d0a00 @ rfeib pc")
1247 + TEST_UNSUPPORTED(".word 0xf83d0a00 @ rfeda pc!")
1248 + TEST_UNSUPPORTED(".word 0xf93d0a00 @ rfedb pc!")
1249 + TEST_UNSUPPORTED(".word 0xf8bd0a00 @ rfeia pc!")
1250 + TEST_UNSUPPORTED(".word 0xf9bd0a00 @ rfeib pc!")
1251 + #endif /* __LINUX_ARM_ARCH__ >= 6 */
1252 +
1253 + #if __LINUX_ARM_ARCH__ >= 6
1254 + TEST_X( "blx __dummy_thumb_subroutine_even",
1255 + 	".thumb				\n\t"
1256 + 	".space 4			\n\t"
1257 + 	".type __dummy_thumb_subroutine_even, %%function \n\t"
1258 + 	"__dummy_thumb_subroutine_even:	\n\t"
1259 + 	"mov	r0, pc			\n\t"
1260 + 	"bx	lr			\n\t"
1261 + 	".arm				\n\t"
1262 + )
1263 + TEST( "blx __dummy_thumb_subroutine_even")
1264 +
1265 + TEST_X( "blx __dummy_thumb_subroutine_odd",
1266 + 	".thumb				\n\t"
1267 + 	".space 2			\n\t"
1268 + 	".type __dummy_thumb_subroutine_odd, %%function \n\t"
1269 + 	"__dummy_thumb_subroutine_odd:	\n\t"
1270 + 	"mov	r0, pc			\n\t"
1271 + 	"bx	lr			\n\t"
1272 + 	".arm				\n\t"
1273 + )
1274 + TEST( "blx __dummy_thumb_subroutine_odd")
1275 + #endif /* __LINUX_ARM_ARCH__ >= 6 */
1276 +
1277 + COPROCESSOR_INSTRUCTIONS_ST_LD("2","f")
1278 + #if __LINUX_ARM_ARCH__ >= 6
1279 + COPROCESSOR_INSTRUCTIONS_MC_MR("2","f")
1280 + #endif
1281 +
1282 + TEST_GROUP("Miscellaneous instructions, memory hints, and Advanced SIMD instructions")
1283 +
1284 + #if __LINUX_ARM_ARCH__ >= 6
1285 + TEST_UNSUPPORTED("cps 0x13")
1286 + TEST_UNSUPPORTED("cpsie i")
1287 + TEST_UNSUPPORTED("cpsid i")
1288 + TEST_UNSUPPORTED("cpsie i,0x13")
1289 + TEST_UNSUPPORTED("cpsid i,0x13")
1290 + TEST_UNSUPPORTED("setend le")
1291 + TEST_UNSUPPORTED("setend be")
1292 + #endif
1293 +
1294 + #if __LINUX_ARM_ARCH__ >= 7
1295 + TEST_P("pli [r",0,0b,", #16]")
1296 + TEST( "pli [pc, #0]")
1297 + TEST_RR("pli [r",12,0b,", r",0, 16,"]")
1298 + TEST_RR("pli [r",0, 0b,", -r",12,16,", lsl #4]")
1299 + #endif
1300 +
1301 + #if __LINUX_ARM_ARCH__ >= 5
1302 + TEST_P("pld [r",0,32,", #-16]")
1303 + TEST( "pld [pc, #0]")
1304 + TEST_PR("pld [r",7, 24, ", r",0, 16,"]")
1305 + TEST_PR("pld [r",8, 24, ", -r",12,16,", lsl #4]")
1306 + #endif
1307 +
1308 + #if __LINUX_ARM_ARCH__ >= 7
TEST_SUPPORTED( ".word 0xf590f000 @ pldw [r0, #0]")
1310 + TEST_SUPPORTED( ".word 0xf797f000 @ pldw	[r7, r0]")
1311 + TEST_SUPPORTED( ".word 0xf798f18c @ pldw	[r8, r12, lsl #3]");
1312 + #endif
1313 +
1314 + #if __LINUX_ARM_ARCH__ >= 7
1315 + TEST_UNSUPPORTED("clrex")
1316 + TEST_UNSUPPORTED("dsb")
1317 + TEST_UNSUPPORTED("dmb")
1318 + TEST_UNSUPPORTED("isb")
1319 + #endif
1320 +
1321 + verbose("\n");
1322 + }
1323 +
+1187
arch/arm/kernel/kprobes-test-thumb.c
···
1 + /*
2 +  * arch/arm/kernel/kprobes-test-thumb.c
3 +  *
4 +  * Copyright (C) 2011 Jon Medhurst <tixy@yxit.co.uk>.
5 +  *
6 +  * This program is free software; you can redistribute it and/or modify
7 +  * it under the terms of the GNU General Public License version 2 as
8 +  * published by the Free Software Foundation.
9 +  */
10 +
11 + #include <linux/kernel.h>
12 + #include <linux/module.h>
13 +
14 + #include "kprobes-test.h"
15 +
16 +
17 + #define TEST_ISA "16"
18 +
19 + #define DONT_TEST_IN_ITBLOCK(tests) \
20 + 	kprobe_test_flags |= TEST_FLAG_NO_ITBLOCK; \
21 + 	tests \
22 + 	kprobe_test_flags &= ~TEST_FLAG_NO_ITBLOCK;
23 +
24 + #define CONDITION_INSTRUCTIONS(cc_pos, tests) \
25 + 	kprobe_test_cc_position = cc_pos; \
26 + 	DONT_TEST_IN_ITBLOCK(tests) \
27 + 	kprobe_test_cc_position = 0;
28 +
29 + #define TEST_ITBLOCK(code) \
30 + 	kprobe_test_flags |= TEST_FLAG_FULL_ITBLOCK; \
31 + 	TESTCASE_START(code) \
32 + 	TEST_ARG_END("") \
33 + 	"50: nop \n\t" \
34 + 	"1: "code" \n\t" \
35 + 	" mov r1, #0x11 \n\t" \
36 + 	" mov r2, #0x22 \n\t" \
37 + 	" mov r3, #0x33 \n\t" \
38 + 	"2: nop \n\t" \
39 + 	TESTCASE_END \
40 + 	kprobe_test_flags &= ~TEST_FLAG_FULL_ITBLOCK;
41 +
42 + #define TEST_THUMB_TO_ARM_INTERWORK_P(code1, reg, val, code2) \
43 + 	TESTCASE_START(code1 #reg code2) \
44 + 	TEST_ARG_PTR(reg, val) \
45 + 	TEST_ARG_REG(14, 99f+1) \
46 + 	TEST_ARG_MEM(15, 3f) \
47 + 	TEST_ARG_END("") \
48 + 	" nop \n\t" /* To align 1f */ \
49 + 	"50: nop \n\t" \
50 + 	"1: "code1 #reg code2" \n\t" \
51 + 	" bx lr \n\t" \
52 + 	".arm \n\t" \
53 + 	"3: adr lr, 2f+1 \n\t" \
54 + 	" bx lr \n\t" \
55 + 	".thumb \n\t" \
56 + 	"2: nop \n\t" \
57 + 	TESTCASE_END
58 +
59 +
60 + void kprobe_thumb16_test_cases(void)
61 + {
62 + kprobe_test_flags = TEST_FLAG_NARROW_INSTR;
63 +
64 + TEST_GROUP("Shift (immediate), add, subtract, move, and compare")
65 +
66 + TEST_R( "lsls r7, r",0,VAL1,", #5")
67 + TEST_R( "lsls r0, r",7,VAL2,", #11")
68 + TEST_R( "lsrs r7, r",0,VAL1,", #5")
69 + TEST_R( "lsrs r0, r",7,VAL2,", #11")
TEST_R( "asrs r7, r",0,VAL1,", #5")
71 + TEST_R( "asrs r0, r",7,VAL2,", #11")
72 + TEST_RR( "adds r2, r",0,VAL1,", r",7,VAL2,"")
73 + TEST_RR( "adds r5, r",7,VAL2,", r",0,VAL2,"")
74 + TEST_RR( "subs r2, r",0,VAL1,", r",7,VAL2,"")
75 + TEST_RR( "subs r5, r",7,VAL2,", r",0,VAL2,"")
76 + TEST_R( "adds r7, r",0,VAL1,", #5")
77 + TEST_R( "adds r0, r",7,VAL2,", #2")
78 + TEST_R( "subs r7, r",0,VAL1,", #5")
79 + TEST_R( "subs r0, r",7,VAL2,", #2")
80 + TEST( "movs.n r0, #0x5f")
81 + TEST( "movs.n r7, #0xa0")
82 + TEST_R( "cmp.n r",0,0x5e, ", #0x5f")
83 + TEST_R( "cmp.n r",5,0x15f,", #0x5f")
84 + TEST_R( "cmp.n r",7,0xa0, ", #0xa0")
85 + TEST_R( "adds.n r",0,VAL1,", #0x5f")
86 + TEST_R( "adds.n r",7,VAL2,", #0xa0")
87 + TEST_R( "subs.n r",0,VAL1,", #0x5f")
88 + TEST_R( "subs.n r",7,VAL2,", #0xa0")
89 +
90 + TEST_GROUP("16-bit Thumb data-processing instructions")
91 +
92 + #define DATA_PROCESSING16(op,val) \
93 + 	TEST_RR( op" r",0,VAL1,", r",7,val,"") \
94 + 	TEST_RR( op" r",7,VAL2,", r",0,val,"")
95 +
96 + DATA_PROCESSING16("ands",0xf00f00ff)
97 + DATA_PROCESSING16("eors",0xf00f00ff)
98 + DATA_PROCESSING16("lsls",11)
99 + DATA_PROCESSING16("lsrs",11)
100 + DATA_PROCESSING16("asrs",11)
101 + DATA_PROCESSING16("adcs",VAL2)
102 + DATA_PROCESSING16("sbcs",VAL2)
103 + DATA_PROCESSING16("rors",11)
104 + DATA_PROCESSING16("tst",0xf00f00ff)
105 + TEST_R("rsbs r",0,VAL1,", #0")
106 + TEST_R("rsbs r",7,VAL2,", #0")
107 + DATA_PROCESSING16("cmp",0xf00f00ff)
108 + DATA_PROCESSING16("cmn",0xf00f00ff)
109 + DATA_PROCESSING16("orrs",0xf00f00ff)
110 + DATA_PROCESSING16("muls",VAL2)
111 + DATA_PROCESSING16("bics",0xf00f00ff)
112 + DATA_PROCESSING16("mvns",VAL2)
113 +
114 + TEST_GROUP("Special data instructions and branch and exchange")
115 +
116 + TEST_RR( "add r",0, VAL1,", r",7,VAL2,"")
117 + TEST_RR( "add r",3, VAL2,", r",8,VAL3,"")
118 + TEST_RR( "add r",8, VAL3,", r",0,VAL1,"")
119 + TEST_R( "add sp" ", r",8,-8, "")
120 + TEST_R( "add r",14,VAL1,", pc")
121 + TEST_BF_R("add pc" ", r",0,2f-1f-8,"")
122 + TEST_UNSUPPORTED(".short 0x44ff @ add pc, pc")
123 +
124 + TEST_RR( "cmp r",3,VAL1,", r",8,VAL2,"")
125 + TEST_RR( "cmp r",8,VAL2,", r",0,VAL1,"")
126 + TEST_R( "cmp sp" ", r",8,-8, "")
127 +
128 + TEST_R( "mov r0, r",7,VAL2,"")
129 + TEST_R( "mov r3, r",8,VAL3,"")
130 + TEST_R( "mov r8, r",0,VAL1,"")
131 + TEST_P( "mov sp, r",8,-8, "")
132 + TEST( "mov lr, pc")
133 + TEST_BF_R("mov pc, r",0,2f, "")
134 +
135 + TEST_BF_R("bx r",0, 2f+1,"")
136 + TEST_BF_R("bx r",14,2f+1,"")
137 + TESTCASE_START("bx pc")
138 + TEST_ARG_REG(14, 99f+1)
139 + TEST_ARG_END("")
140 + 	" nop \n\t" /* To align the bx pc*/
141 + 	"50: nop \n\t"
142 + 	"1: bx pc \n\t"
143 + 	" bx lr \n\t"
144 + 	".arm \n\t"
145 + 	" adr lr, 2f+1 \n\t"
146 + 	" bx lr \n\t"
147 + 	".thumb \n\t"
148 + 	"2: nop \n\t"
149 + TESTCASE_END
150 +
151 + TEST_BF_R("blx r",0, 2f+1,"")
152 + TEST_BB_R("blx r",14,2f+1,"")
153 + TEST_UNSUPPORTED(".short 0x47f8 @ blx pc")
154 +
155 + TEST_GROUP("Load from Literal Pool")
156 +
157 + TEST_X( "ldr r0, 3f",
158 + 	".align \n\t"
159 + 	"3: .word "__stringify(VAL1))
160 + TEST_X( "ldr r7, 3f",
161 + 	".space 128 \n\t"
162 + 	".align \n\t"
163 + 	"3: .word "__stringify(VAL2))
164 +
165 + TEST_GROUP("16-bit Thumb Load/store instructions")
166 +
167 + TEST_RPR("str r",0, VAL1,", [r",1, 24,", r",2, 48,"]")
168 + TEST_RPR("str r",7, VAL2,", [r",6, 24,", r",5, 48,"]")
169 + TEST_RPR("strh r",0, VAL1,", [r",1, 24,", r",2, 48,"]")
170 + TEST_RPR("strh r",7, VAL2,", [r",6, 24,", r",5, 48,"]")
171 + TEST_RPR("strb r",0, VAL1,", [r",1, 24,", r",2, 48,"]")
172 + TEST_RPR("strb r",7, VAL2,", [r",6, 24,", r",5, 48,"]")
173 + TEST_PR( "ldrsb r0, [r",1, 24,", r",2, 48,"]")
174 + TEST_PR( "ldrsb r7, [r",6, 24,", r",5, 50,"]")
175 + TEST_PR( "ldr r0, [r",1, 24,", r",2, 48,"]")
176 + TEST_PR( "ldr r7, [r",6, 24,", r",5, 48,"]")
177 + TEST_PR( "ldrh r0, [r",1, 24,", r",2, 48,"]")
178 + TEST_PR( "ldrh r7, [r",6, 24,", r",5, 50,"]")
179 + TEST_PR( "ldrb r0, [r",1, 24,", r",2, 48,"]")
TEST_PR( "ldrb r7, [r",6, 24,", r",5, 50,"]") 181 + TEST_PR( "ldrsh r0, [r",1, 24,", r",2, 48,"]") 182 + TEST_PR( "ldrsh r7, [r",6, 24,", r",5, 50,"]") 183 + 184 + TEST_RP("str r",0, VAL1,", [r",1, 24,", #120]") 185 + TEST_RP("str r",7, VAL2,", [r",6, 24,", #120]") 186 + TEST_P( "ldr r0, [r",1, 24,", #120]") 187 + TEST_P( "ldr r7, [r",6, 24,", #120]") 188 + TEST_RP("strb r",0, VAL1,", [r",1, 24,", #30]") 189 + TEST_RP("strb r",7, VAL2,", [r",6, 24,", #30]") 190 + TEST_P( "ldrb r0, [r",1, 24,", #30]") 191 + TEST_P( "ldrb r7, [r",6, 24,", #30]") 192 + TEST_RP("strh r",0, VAL1,", [r",1, 24,", #60]") 193 + TEST_RP("strh r",7, VAL2,", [r",6, 24,", #60]") 194 + TEST_P( "ldrh r0, [r",1, 24,", #60]") 195 + TEST_P( "ldrh r7, [r",6, 24,", #60]") 196 + 197 + TEST_R( "str r",0, VAL1,", [sp, #0]") 198 + TEST_R( "str r",7, VAL2,", [sp, #160]") 199 + TEST( "ldr r0, [sp, #0]") 200 + TEST( "ldr r7, [sp, #160]") 201 + 202 + TEST_RP("str r",0, VAL1,", [r",0, 24,"]") 203 + TEST_P( "ldr r0, [r",0, 24,"]") 204 + 205 + TEST_GROUP("Generate PC-/SP-relative address") 206 + 207 + TEST("add r0, pc, #4") 208 + TEST("add r7, pc, #1020") 209 + TEST("add r0, sp, #4") 210 + TEST("add r7, sp, #1020") 211 + 212 + TEST_GROUP("Miscellaneous 16-bit instructions") 213 + 214 + TEST_UNSUPPORTED( "cpsie i") 215 + TEST_UNSUPPORTED( "cpsid i") 216 + TEST_UNSUPPORTED( "setend le") 217 + TEST_UNSUPPORTED( "setend be") 218 + 219 + TEST("add sp, #"__stringify(TEST_MEMORY_SIZE)) /* Assumes TEST_MEMORY_SIZE < 0x400 */ 220 + TEST("sub sp, #0x7f*4") 221 + 222 + DONT_TEST_IN_ITBLOCK( 223 + TEST_BF_R( "cbnz r",0,0, ", 2f") 224 + TEST_BF_R( "cbz r",2,-1,", 2f") 225 + TEST_BF_RX( "cbnz r",4,1, ", 2f",0x20) 226 + TEST_BF_RX( "cbz r",7,0, ", 2f",0x40) 227 + ) 228 + TEST_R("sxth r0, r",7, HH1,"") 229 + TEST_R("sxth r7, r",0, HH2,"") 230 + TEST_R("sxtb r0, r",7, HH1,"") 231 + TEST_R("sxtb r7, r",0, HH2,"") 232 + TEST_R("uxth r0, r",7, HH1,"") 233 + TEST_R("uxth r7, r",0, HH2,"") 234 + TEST_R("uxtb r0, r",7, HH1,"") 235 + 
TEST_R("uxtb r7, r",0, HH2,"") 236 + TEST_R("rev r0, r",7, VAL1,"") 237 + TEST_R("rev r7, r",0, VAL2,"") 238 + TEST_R("rev16 r0, r",7, VAL1,"") 239 + TEST_R("rev16 r7, r",0, VAL2,"") 240 + TEST_UNSUPPORTED(".short 0xba80") 241 + TEST_UNSUPPORTED(".short 0xbabf") 242 + TEST_R("revsh r0, r",7, VAL1,"") 243 + TEST_R("revsh r7, r",0, VAL2,"") 244 + 245 + #define TEST_POPPC(code, offset) \ 246 + TESTCASE_START(code) \ 247 + TEST_ARG_PTR(13, offset) \ 248 + TEST_ARG_END("") \ 249 + TEST_BRANCH_F(code,0) \ 250 + TESTCASE_END 251 + 252 + TEST("push {r0}") 253 + TEST("push {r7}") 254 + TEST("push {r14}") 255 + TEST("push {r0-r7,r14}") 256 + TEST("push {r0,r2,r4,r6,r14}") 257 + TEST("push {r1,r3,r5,r7}") 258 + TEST("pop {r0}") 259 + TEST("pop {r7}") 260 + TEST("pop {r0,r2,r4,r6}") 261 + TEST_POPPC("pop {pc}",15*4) 262 + TEST_POPPC("pop {r0-r7,pc}",7*4) 263 + TEST_POPPC("pop {r1,r3,r5,r7,pc}",11*4) 264 + TEST_THUMB_TO_ARM_INTERWORK_P("pop {pc} @ ",13,15*4,"") 265 + TEST_THUMB_TO_ARM_INTERWORK_P("pop {r0-r7,pc} @ ",13,7*4,"") 266 + 267 + TEST_UNSUPPORTED("bkpt.n 0") 268 + TEST_UNSUPPORTED("bkpt.n 255") 269 + 270 + TEST_SUPPORTED("yield") 271 + TEST("sev") 272 + TEST("nop") 273 + TEST("wfi") 274 + TEST_SUPPORTED("wfe") 275 + TEST_UNSUPPORTED(".short 0xbf50") /* Unassigned hints */ 276 + TEST_UNSUPPORTED(".short 0xbff0") /* Unassigned hints */ 277 + 278 + #define TEST_IT(code, code2) \ 279 + TESTCASE_START(code) \ 280 + TEST_ARG_END("") \ 281 + "50: nop \n\t" \ 282 + "1: "code" \n\t" \ 283 + " "code2" \n\t" \ 284 + "2: nop \n\t" \ 285 + TESTCASE_END 286 + 287 + DONT_TEST_IN_ITBLOCK( 288 + TEST_IT("it eq","moveq r0,#0") 289 + TEST_IT("it vc","movvc r0,#0") 290 + TEST_IT("it le","movle r0,#0") 291 + TEST_IT("ite eq","moveq r0,#0\n\t movne r1,#1") 292 + TEST_IT("itet vc","movvc r0,#0\n\t movvs r1,#1\n\t movvc r2,#2") 293 + TEST_IT("itete le","movle r0,#0\n\t movgt r1,#1\n\t movle r2,#2\n\t movgt r3,#3") 294 + TEST_IT("itttt le","movle r0,#0\n\t movle r1,#1\n\t movle r2,#2\n\t movle 
r3,#3") 295 + TEST_IT("iteee le","movle r0,#0\n\t movgt r1,#1\n\t movgt r2,#2\n\t movgt r3,#3") 296 + ) 297 + 298 + TEST_GROUP("Load and store multiple") 299 + 300 + TEST_P("ldmia r",4, 16*4,"!, {r0,r7}") 301 + TEST_P("ldmia r",7, 16*4,"!, {r0-r6}") 302 + TEST_P("stmia r",4, 16*4,"!, {r0,r7}") 303 + TEST_P("stmia r",0, 16*4,"!, {r0-r7}") 304 + 305 + TEST_GROUP("Conditional branch and Supervisor Call instructions") 306 + 307 + CONDITION_INSTRUCTIONS(8, 308 + TEST_BF("beq 2f") 309 + TEST_BB("bne 2b") 310 + TEST_BF("bgt 2f") 311 + TEST_BB("blt 2b") 312 + ) 313 + TEST_UNSUPPORTED(".short 0xde00") 314 + TEST_UNSUPPORTED(".short 0xdeff") 315 + TEST_UNSUPPORTED("svc #0x00") 316 + TEST_UNSUPPORTED("svc #0xff") 317 + 318 + TEST_GROUP("Unconditional branch") 319 + 320 + TEST_BF( "b 2f") 321 + TEST_BB( "b 2b") 322 + TEST_BF_X("b 2f", 0x400) 323 + TEST_BB_X("b 2b", 0x400) 324 + 325 + TEST_GROUP("Testing instructions in IT blocks") 326 + 327 + TEST_ITBLOCK("subs.n r0, r0") 328 + 329 + verbose("\n"); 330 + } 331 + 332 + 333 + void kprobe_thumb32_test_cases(void) 334 + { 335 + kprobe_test_flags = 0; 336 + 337 + TEST_GROUP("Load/store multiple") 338 + 339 + TEST_UNSUPPORTED("rfedb sp") 340 + TEST_UNSUPPORTED("rfeia sp") 341 + TEST_UNSUPPORTED("rfedb sp!") 342 + TEST_UNSUPPORTED("rfeia sp!") 343 + 344 + TEST_P( "stmia r",0, 16*4,", {r0,r8}") 345 + TEST_P( "stmia r",4, 16*4,", {r0-r12,r14}") 346 + TEST_P( "stmia r",7, 16*4,"!, {r8-r12,r14}") 347 + TEST_P( "stmia r",12,16*4,"!, {r1,r3,r5,r7,r8-r11,r14}") 348 + 349 + TEST_P( "ldmia r",0, 16*4,", {r0,r8}") 350 + TEST_P( "ldmia r",4, 0, ", {r0-r12,r14}") 351 + TEST_BF_P("ldmia r",5, 8*4, "!, {r6-r12,r15}") 352 + TEST_P( "ldmia r",12,16*4,"!, {r1,r3,r5,r7,r8-r11,r14}") 353 + TEST_BF_P("ldmia r",14,14*4,"!, {r4,pc}") 354 + 355 + TEST_P( "stmdb r",0, 16*4,", {r0,r8}") 356 + TEST_P( "stmdb r",4, 16*4,", {r0-r12,r14}") 357 + TEST_P( "stmdb r",5, 16*4,"!, {r8-r12,r14}") 358 + TEST_P( "stmdb r",12,16*4,"!, {r1,r3,r5,r7,r8-r11,r14}") 359 + 360 
+ TEST_P( "ldmdb r",0, 16*4,", {r0,r8}") 361 + TEST_P( "ldmdb r",4, 16*4,", {r0-r12,r14}") 362 + TEST_BF_P("ldmdb r",5, 16*4,"!, {r6-r12,r15}") 363 + TEST_P( "ldmdb r",12,16*4,"!, {r1,r3,r5,r7,r8-r11,r14}") 364 + TEST_BF_P("ldmdb r",14,16*4,"!, {r4,pc}") 365 + 366 + TEST_P( "stmdb r",13,16*4,"!, {r3-r12,lr}") 367 + TEST_P( "stmdb r",13,16*4,"!, {r3-r12}") 368 + TEST_P( "stmdb r",2, 16*4,", {r3-r12,lr}") 369 + TEST_P( "stmdb r",13,16*4,"!, {r2-r12,lr}") 370 + TEST_P( "stmdb r",0, 16*4,", {r0-r12}") 371 + TEST_P( "stmdb r",0, 16*4,", {r0-r12,lr}") 372 + 373 + TEST_BF_P("ldmia r",13,5*4, "!, {r3-r12,pc}") 374 + TEST_P( "ldmia r",13,5*4, "!, {r3-r12}") 375 + TEST_BF_P("ldmia r",2, 5*4, "!, {r3-r12,pc}") 376 + TEST_BF_P("ldmia r",13,4*4, "!, {r2-r12,pc}") 377 + TEST_P( "ldmia r",0, 16*4,", {r0-r12}") 378 + TEST_P( "ldmia r",0, 16*4,", {r0-r12,lr}") 379 + 380 + TEST_THUMB_TO_ARM_INTERWORK_P("ldmia r",0,14*4,", {r12,pc}") 381 + TEST_THUMB_TO_ARM_INTERWORK_P("ldmia r",13,2*4,", {r0-r12,pc}") 382 + 383 + TEST_UNSUPPORTED(".short 0xe88f,0x0101 @ stmia pc, {r0,r8}") 384 + TEST_UNSUPPORTED(".short 0xe92f,0x5f00 @ stmdb pc!, {r8-r12,r14}") 385 + TEST_UNSUPPORTED(".short 0xe8bd,0xc000 @ ldmia r13!, {r14,pc}") 386 + TEST_UNSUPPORTED(".short 0xe93e,0xc000 @ ldmdb r14!, {r14,pc}") 387 + TEST_UNSUPPORTED(".short 0xe8a7,0x3f00 @ stmia r7!, {r8-r12,sp}") 388 + TEST_UNSUPPORTED(".short 0xe8a7,0x9f00 @ stmia r7!, {r8-r12,pc}") 389 + TEST_UNSUPPORTED(".short 0xe93e,0x2010 @ ldmdb r14!, {r4,sp}") 390 + 391 + TEST_GROUP("Load/store double or exclusive, table branch") 392 + 393 + TEST_P( "ldrd r0, r1, [r",1, 24,", #-16]") 394 + TEST( "ldrd r12, r14, [sp, #16]") 395 + TEST_P( "ldrd r1, r0, [r",7, 24,", #-16]!") 396 + TEST( "ldrd r14, r12, [sp, #16]!") 397 + TEST_P( "ldrd r1, r0, [r",7, 24,"], #16") 398 + TEST( "ldrd r7, r8, [sp], #-16") 399 + 400 + TEST_X( "ldrd r12, r14, 3f", 401 + ".align 3 \n\t" 402 + "3: .word "__stringify(VAL1)" \n\t" 403 + " .word "__stringify(VAL2)) 404 + 405 + 
TEST_UNSUPPORTED(".short 0xe9ff,0xec04 @ ldrd r14, r12, [pc, #16]!") 406 + TEST_UNSUPPORTED(".short 0xe8ff,0xec04 @ ldrd r14, r12, [pc], #16") 407 + TEST_UNSUPPORTED(".short 0xe9d4,0xd800 @ ldrd sp, r8, [r4]") 408 + TEST_UNSUPPORTED(".short 0xe9d4,0xf800 @ ldrd pc, r8, [r4]") 409 + TEST_UNSUPPORTED(".short 0xe9d4,0x7d00 @ ldrd r7, sp, [r4]") 410 + TEST_UNSUPPORTED(".short 0xe9d4,0x7f00 @ ldrd r7, pc, [r4]") 411 + 412 + TEST_RRP("strd r",0, VAL1,", r",1, VAL2,", [r",1, 24,", #-16]") 413 + TEST_RR( "strd r",12,VAL2,", r",14,VAL1,", [sp, #16]") 414 + TEST_RRP("strd r",1, VAL1,", r",0, VAL2,", [r",7, 24,", #-16]!") 415 + TEST_RR( "strd r",14,VAL2,", r",12,VAL1,", [sp, #16]!") 416 + TEST_RRP("strd r",1, VAL1,", r",0, VAL2,", [r",7, 24,"], #16") 417 + TEST_RR( "strd r",7, VAL2,", r",8, VAL1,", [sp], #-16") 418 + TEST_UNSUPPORTED(".short 0xe9ef,0xec04 @ strd r14, r12, [pc, #16]!") 419 + TEST_UNSUPPORTED(".short 0xe8ef,0xec04 @ strd r14, r12, [pc], #16") 420 + 421 + TEST_RX("tbb [pc, r",0, (9f-(1f+4)),"]", 422 + "9: \n\t" 423 + ".byte (2f-1b-4)>>1 \n\t" 424 + ".byte (3f-1b-4)>>1 \n\t" 425 + "3: mvn r0, r0 \n\t" 426 + "2: nop \n\t") 427 + 428 + TEST_RX("tbb [pc, r",4, (9f-(1f+4)+1),"]", 429 + "9: \n\t" 430 + ".byte (2f-1b-4)>>1 \n\t" 431 + ".byte (3f-1b-4)>>1 \n\t" 432 + "3: mvn r0, r0 \n\t" 433 + "2: nop \n\t") 434 + 435 + TEST_RRX("tbb [r",1,9f,", r",2,0,"]", 436 + "9: \n\t" 437 + ".byte (2f-1b-4)>>1 \n\t" 438 + ".byte (3f-1b-4)>>1 \n\t" 439 + "3: mvn r0, r0 \n\t" 440 + "2: nop \n\t") 441 + 442 + TEST_RX("tbh [pc, r",7, (9f-(1f+4))>>1,"]", 443 + "9: \n\t" 444 + ".short (2f-1b-4)>>1 \n\t" 445 + ".short (3f-1b-4)>>1 \n\t" 446 + "3: mvn r0, r0 \n\t" 447 + "2: nop \n\t") 448 + 449 + TEST_RX("tbh [pc, r",12, ((9f-(1f+4))>>1)+1,"]", 450 + "9: \n\t" 451 + ".short (2f-1b-4)>>1 \n\t" 452 + ".short (3f-1b-4)>>1 \n\t" 453 + "3: mvn r0, r0 \n\t" 454 + "2: nop \n\t") 455 + 456 + TEST_RRX("tbh [r",1,9f, ", r",14,1,"]", 457 + "9: \n\t" 458 + ".short (2f-1b-4)>>1 \n\t" 459 + ".short 
(3f-1b-4)>>1 \n\t" 460 + "3: mvn r0, r0 \n\t" 461 + "2: nop \n\t") 462 + 463 + TEST_UNSUPPORTED(".short 0xe8d1,0xf01f @ tbh [r1, pc]") 464 + TEST_UNSUPPORTED(".short 0xe8d1,0xf01d @ tbh [r1, sp]") 465 + TEST_UNSUPPORTED(".short 0xe8dd,0xf012 @ tbh [sp, r2]") 466 + 467 + TEST_UNSUPPORTED("strexb r0, r1, [r2]") 468 + TEST_UNSUPPORTED("strexh r0, r1, [r2]") 469 + TEST_UNSUPPORTED("strexd r0, r1, [r2]") 470 + TEST_UNSUPPORTED("ldrexb r0, [r1]") 471 + TEST_UNSUPPORTED("ldrexh r0, [r1]") 472 + TEST_UNSUPPORTED("ldrexd r0, [r1]") 473 + 474 + TEST_GROUP("Data-processing (shifted register) and (modified immediate)") 475 + 476 + #define _DATA_PROCESSING32_DNM(op,s,val) \ 477 + TEST_RR(op s".w r0, r",1, VAL1,", r",2, val, "") \ 478 + TEST_RR(op s" r1, r",1, VAL1,", r",2, val, ", lsl #3") \ 479 + TEST_RR(op s" r2, r",3, VAL1,", r",2, val, ", lsr #4") \ 480 + TEST_RR(op s" r3, r",3, VAL1,", r",2, val, ", asr #5") \ 481 + TEST_RR(op s" r4, r",5, VAL1,", r",2, N(val),", asr #6") \ 482 + TEST_RR(op s" r5, r",5, VAL1,", r",2, val, ", ror #7") \ 483 + TEST_RR(op s" r8, r",9, VAL1,", r",10,val, ", rrx") \ 484 + TEST_R( op s" r0, r",11,VAL1,", #0x00010001") \ 485 + TEST_R( op s" r11, r",0, VAL1,", #0xf5000000") \ 486 + TEST_R( op s" r7, r",8, VAL2,", #0x000af000") 487 + 488 + #define DATA_PROCESSING32_DNM(op,val) \ 489 + _DATA_PROCESSING32_DNM(op,"",val) \ 490 + _DATA_PROCESSING32_DNM(op,"s",val) 491 + 492 + #define DATA_PROCESSING32_NM(op,val) \ 493 + TEST_RR(op".w r",1, VAL1,", r",2, val, "") \ 494 + TEST_RR(op" r",1, VAL1,", r",2, val, ", lsl #3") \ 495 + TEST_RR(op" r",3, VAL1,", r",2, val, ", lsr #4") \ 496 + TEST_RR(op" r",3, VAL1,", r",2, val, ", asr #5") \ 497 + TEST_RR(op" r",5, VAL1,", r",2, N(val),", asr #6") \ 498 + TEST_RR(op" r",5, VAL1,", r",2, val, ", ror #7") \ 499 + TEST_RR(op" r",9, VAL1,", r",10,val, ", rrx") \ 500 + TEST_R( op" r",11,VAL1,", #0x00010001") \ 501 + TEST_R( op" r",0, VAL1,", #0xf5000000") \ 502 + TEST_R( op" r",8, VAL2,", #0x000af000") 503 + 504 + 
#define _DATA_PROCESSING32_DM(op,s,val) \ 505 + TEST_R( op s".w r0, r",14, val, "") \ 506 + TEST_R( op s" r1, r",12, val, ", lsl #3") \ 507 + TEST_R( op s" r2, r",11, val, ", lsr #4") \ 508 + TEST_R( op s" r3, r",10, val, ", asr #5") \ 509 + TEST_R( op s" r4, r",9, N(val),", asr #6") \ 510 + TEST_R( op s" r5, r",8, val, ", ror #7") \ 511 + TEST_R( op s" r8, r",7,val, ", rrx") \ 512 + TEST( op s" r0, #0x00010001") \ 513 + TEST( op s" r11, #0xf5000000") \ 514 + TEST( op s" r7, #0x000af000") \ 515 + TEST( op s" r4, #0x00005a00") 516 + 517 + #define DATA_PROCESSING32_DM(op,val) \ 518 + _DATA_PROCESSING32_DM(op,"",val) \ 519 + _DATA_PROCESSING32_DM(op,"s",val) 520 + 521 + DATA_PROCESSING32_DNM("and",0xf00f00ff) 522 + DATA_PROCESSING32_NM("tst",0xf00f00ff) 523 + DATA_PROCESSING32_DNM("bic",0xf00f00ff) 524 + DATA_PROCESSING32_DNM("orr",0xf00f00ff) 525 + DATA_PROCESSING32_DM("mov",VAL2) 526 + DATA_PROCESSING32_DNM("orn",0xf00f00ff) 527 + DATA_PROCESSING32_DM("mvn",VAL2) 528 + DATA_PROCESSING32_DNM("eor",0xf00f00ff) 529 + DATA_PROCESSING32_NM("teq",0xf00f00ff) 530 + DATA_PROCESSING32_DNM("add",VAL2) 531 + DATA_PROCESSING32_NM("cmn",VAL2) 532 + DATA_PROCESSING32_DNM("adc",VAL2) 533 + DATA_PROCESSING32_DNM("sbc",VAL2) 534 + DATA_PROCESSING32_DNM("sub",VAL2) 535 + DATA_PROCESSING32_NM("cmp",VAL2) 536 + DATA_PROCESSING32_DNM("rsb",VAL2) 537 + 538 + TEST_RR("pkhbt r0, r",0, HH1,", r",1, HH2,"") 539 + TEST_RR("pkhbt r14,r",12, HH1,", r",10,HH2,", lsl #2") 540 + TEST_RR("pkhtb r0, r",0, HH1,", r",1, HH2,"") 541 + TEST_RR("pkhtb r14,r",12, HH1,", r",10,HH2,", asr #2") 542 + 543 + TEST_UNSUPPORTED(".short 0xea17,0x0f0d @ tst.w r7, sp") 544 + TEST_UNSUPPORTED(".short 0xea17,0x0f0f @ tst.w r7, pc") 545 + TEST_UNSUPPORTED(".short 0xea1d,0x0f07 @ tst.w sp, r7") 546 + TEST_UNSUPPORTED(".short 0xea1f,0x0f07 @ tst.w pc, r7") 547 + TEST_UNSUPPORTED(".short 0xf01d,0x1f08 @ tst sp, #0x00080008") 548 + TEST_UNSUPPORTED(".short 0xf01f,0x1f08 @ tst pc, #0x00080008") 549 + 550 + 
TEST_UNSUPPORTED(".short 0xea97,0x0f0d @ teq.w r7, sp") 551 + TEST_UNSUPPORTED(".short 0xea97,0x0f0f @ teq.w r7, pc") 552 + TEST_UNSUPPORTED(".short 0xea9d,0x0f07 @ teq.w sp, r7") 553 + TEST_UNSUPPORTED(".short 0xea9f,0x0f07 @ teq.w pc, r7") 554 + TEST_UNSUPPORTED(".short 0xf09d,0x1f08 @ teq sp, #0x00080008") 555 + TEST_UNSUPPORTED(".short 0xf09f,0x1f08 @ teq pc, #0x00080008") 556 + 557 + TEST_UNSUPPORTED(".short 0xeb17,0x0f0d @ cmn.w r7, sp") 558 + TEST_UNSUPPORTED(".short 0xeb17,0x0f0f @ cmn.w r7, pc") 559 + TEST_P("cmn.w sp, r",7,0,"") 560 + TEST_UNSUPPORTED(".short 0xeb1f,0x0f07 @ cmn.w pc, r7") 561 + TEST( "cmn sp, #0x00080008") 562 + TEST_UNSUPPORTED(".short 0xf11f,0x1f08 @ cmn pc, #0x00080008") 563 + 564 + TEST_UNSUPPORTED(".short 0xebb7,0x0f0d @ cmp.w r7, sp") 565 + TEST_UNSUPPORTED(".short 0xebb7,0x0f0f @ cmp.w r7, pc") 566 + TEST_P("cmp.w sp, r",7,0,"") 567 + TEST_UNSUPPORTED(".short 0xebbf,0x0f07 @ cmp.w pc, r7") 568 + TEST( "cmp sp, #0x00080008") 569 + TEST_UNSUPPORTED(".short 0xf1bf,0x1f08 @ cmp pc, #0x00080008") 570 + 571 + TEST_UNSUPPORTED(".short 0xea5f,0x070d @ movs.w r7, sp") 572 + TEST_UNSUPPORTED(".short 0xea5f,0x070f @ movs.w r7, pc") 573 + TEST_UNSUPPORTED(".short 0xea5f,0x0d07 @ movs.w sp, r7") 574 + TEST_UNSUPPORTED(".short 0xea4f,0x0f07 @ mov.w pc, r7") 575 + TEST_UNSUPPORTED(".short 0xf04f,0x1d08 @ mov sp, #0x00080008") 576 + TEST_UNSUPPORTED(".short 0xf04f,0x1f08 @ mov pc, #0x00080008") 577 + 578 + TEST_R("add.w r0, sp, r",1, 4,"") 579 + TEST_R("adds r0, sp, r",1, 4,", asl #3") 580 + TEST_R("add r0, sp, r",1, 4,", asl #4") 581 + TEST_R("add r0, sp, r",1, 16,", ror #1") 582 + TEST_R("add.w sp, sp, r",1, 4,"") 583 + TEST_R("add sp, sp, r",1, 4,", asl #3") 584 + TEST_UNSUPPORTED(".short 0xeb0d,0x1d01 @ add sp, sp, r1, asl #4") 585 + TEST_UNSUPPORTED(".short 0xeb0d,0x0d71 @ add sp, sp, r1, ror #1") 586 + TEST( "add.w r0, sp, #24") 587 + TEST( "add.w sp, sp, #24") 588 + TEST_UNSUPPORTED(".short 0xeb0d,0x0f01 @ add pc, sp, r1") 589 + 
TEST_UNSUPPORTED(".short 0xeb0d,0x000f @ add r0, sp, pc") 590 + TEST_UNSUPPORTED(".short 0xeb0d,0x000d @ add r0, sp, sp") 591 + TEST_UNSUPPORTED(".short 0xeb0d,0x0d0f @ add sp, sp, pc") 592 + TEST_UNSUPPORTED(".short 0xeb0d,0x0d0d @ add sp, sp, sp") 593 + 594 + TEST_R("sub.w r0, sp, r",1, 4,"") 595 + TEST_R("subs r0, sp, r",1, 4,", asl #3") 596 + TEST_R("sub r0, sp, r",1, 4,", asl #4") 597 + TEST_R("sub r0, sp, r",1, 16,", ror #1") 598 + TEST_R("sub.w sp, sp, r",1, 4,"") 599 + TEST_R("sub sp, sp, r",1, 4,", asl #3") 600 + TEST_UNSUPPORTED(".short 0xebad,0x1d01 @ sub sp, sp, r1, asl #4") 601 + TEST_UNSUPPORTED(".short 0xebad,0x0d71 @ sub sp, sp, r1, ror #1") 602 + TEST_UNSUPPORTED(".short 0xebad,0x0f01 @ sub pc, sp, r1") 603 + TEST( "sub.w r0, sp, #24") 604 + TEST( "sub.w sp, sp, #24") 605 + 606 + TEST_UNSUPPORTED(".short 0xea02,0x010f @ and r1, r2, pc") 607 + TEST_UNSUPPORTED(".short 0xea0f,0x0103 @ and r1, pc, r3") 608 + TEST_UNSUPPORTED(".short 0xea02,0x0f03 @ and pc, r2, r3") 609 + TEST_UNSUPPORTED(".short 0xea02,0x010d @ and r1, r2, sp") 610 + TEST_UNSUPPORTED(".short 0xea0d,0x0103 @ and r1, sp, r3") 611 + TEST_UNSUPPORTED(".short 0xea02,0x0d03 @ and sp, r2, r3") 612 + TEST_UNSUPPORTED(".short 0xf00d,0x1108 @ and r1, sp, #0x00080008") 613 + TEST_UNSUPPORTED(".short 0xf00f,0x1108 @ and r1, pc, #0x00080008") 614 + TEST_UNSUPPORTED(".short 0xf002,0x1d08 @ and sp, r8, #0x00080008") 615 + TEST_UNSUPPORTED(".short 0xf002,0x1f08 @ and pc, r8, #0x00080008") 616 + 617 + TEST_UNSUPPORTED(".short 0xeb02,0x010f @ add r1, r2, pc") 618 + TEST_UNSUPPORTED(".short 0xeb0f,0x0103 @ add r1, pc, r3") 619 + TEST_UNSUPPORTED(".short 0xeb02,0x0f03 @ add pc, r2, r3") 620 + TEST_UNSUPPORTED(".short 0xeb02,0x010d @ add r1, r2, sp") 621 + TEST_SUPPORTED( ".short 0xeb0d,0x0103 @ add r1, sp, r3") 622 + TEST_UNSUPPORTED(".short 0xeb02,0x0d03 @ add sp, r2, r3") 623 + TEST_SUPPORTED( ".short 0xf10d,0x1108 @ add r1, sp, #0x00080008") 624 + TEST_UNSUPPORTED(".short 0xf10d,0x1f08 @ add pc, sp, 
#0x00080008") 625 + TEST_UNSUPPORTED(".short 0xf10f,0x1108 @ add r1, pc, #0x00080008") 626 + TEST_UNSUPPORTED(".short 0xf102,0x1d08 @ add sp, r8, #0x00080008") 627 + TEST_UNSUPPORTED(".short 0xf102,0x1f08 @ add pc, r8, #0x00080008") 628 + 629 + TEST_UNSUPPORTED(".short 0xeaa0,0x0000") 630 + TEST_UNSUPPORTED(".short 0xeaf0,0x0000") 631 + TEST_UNSUPPORTED(".short 0xeb20,0x0000") 632 + TEST_UNSUPPORTED(".short 0xeb80,0x0000") 633 + TEST_UNSUPPORTED(".short 0xebe0,0x0000") 634 + 635 + TEST_UNSUPPORTED(".short 0xf0a0,0x0000") 636 + TEST_UNSUPPORTED(".short 0xf0c0,0x0000") 637 + TEST_UNSUPPORTED(".short 0xf0f0,0x0000") 638 + TEST_UNSUPPORTED(".short 0xf120,0x0000") 639 + TEST_UNSUPPORTED(".short 0xf180,0x0000") 640 + TEST_UNSUPPORTED(".short 0xf1e0,0x0000") 641 + 642 + TEST_GROUP("Coprocessor instructions") 643 + 644 + TEST_UNSUPPORTED(".short 0xec00,0x0000") 645 + TEST_UNSUPPORTED(".short 0xeff0,0x0000") 646 + TEST_UNSUPPORTED(".short 0xfc00,0x0000") 647 + TEST_UNSUPPORTED(".short 0xfff0,0x0000") 648 + 649 + TEST_GROUP("Data-processing (plain binary immediate)") 650 + 651 + TEST_R("addw r0, r",1, VAL1,", #0x123") 652 + TEST( "addw r14, sp, #0xf5a") 653 + TEST( "addw sp, sp, #0x20") 654 + TEST( "addw r7, pc, #0x888") 655 + TEST_UNSUPPORTED(".short 0xf20f,0x1f20 @ addw pc, pc, #0x120") 656 + TEST_UNSUPPORTED(".short 0xf20d,0x1f20 @ addw pc, sp, #0x120") 657 + TEST_UNSUPPORTED(".short 0xf20f,0x1d20 @ addw sp, pc, #0x120") 658 + TEST_UNSUPPORTED(".short 0xf200,0x1d20 @ addw sp, r0, #0x120") 659 + 660 + TEST_R("subw r0, r",1, VAL1,", #0x123") 661 + TEST( "subw r14, sp, #0xf5a") 662 + TEST( "subw sp, sp, #0x20") 663 + TEST( "subw r7, pc, #0x888") 664 + TEST_UNSUPPORTED(".short 0xf2af,0x1f20 @ subw pc, pc, #0x120") 665 + TEST_UNSUPPORTED(".short 0xf2ad,0x1f20 @ subw pc, sp, #0x120") 666 + TEST_UNSUPPORTED(".short 0xf2af,0x1d20 @ subw sp, pc, #0x120") 667 + TEST_UNSUPPORTED(".short 0xf2a0,0x1d20 @ subw sp, r0, #0x120") 668 + 669 + TEST("movw r0, #0") 670 + TEST("movw r0, 
#0xffff") 671 + TEST("movw lr, #0xffff") 672 + TEST_UNSUPPORTED(".short 0xf240,0x0d00 @ movw sp, #0") 673 + TEST_UNSUPPORTED(".short 0xf240,0x0f00 @ movw pc, #0") 674 + 675 + TEST_R("movt r",0, VAL1,", #0") 676 + TEST_R("movt r",0, VAL2,", #0xffff") 677 + TEST_R("movt r",14,VAL1,", #0xffff") 678 + TEST_UNSUPPORTED(".short 0xf2c0,0x0d00 @ movt sp, #0") 679 + TEST_UNSUPPORTED(".short 0xf2c0,0x0f00 @ movt pc, #0") 680 + 681 + TEST_R( "ssat r0, #24, r",0, VAL1,"") 682 + TEST_R( "ssat r14, #24, r",12, VAL2,"") 683 + TEST_R( "ssat r0, #24, r",0, VAL1,", lsl #8") 684 + TEST_R( "ssat r14, #24, r",12, VAL2,", asr #8") 685 + TEST_UNSUPPORTED(".short 0xf30c,0x0d17 @ ssat sp, #24, r12") 686 + TEST_UNSUPPORTED(".short 0xf30c,0x0f17 @ ssat pc, #24, r12") 687 + TEST_UNSUPPORTED(".short 0xf30d,0x0c17 @ ssat r12, #24, sp") 688 + TEST_UNSUPPORTED(".short 0xf30f,0x0c17 @ ssat r12, #24, pc") 689 + 690 + TEST_R( "usat r0, #24, r",0, VAL1,"") 691 + TEST_R( "usat r14, #24, r",12, VAL2,"") 692 + TEST_R( "usat r0, #24, r",0, VAL1,", lsl #8") 693 + TEST_R( "usat r14, #24, r",12, VAL2,", asr #8") 694 + TEST_UNSUPPORTED(".short 0xf38c,0x0d17 @ usat sp, #24, r12") 695 + TEST_UNSUPPORTED(".short 0xf38c,0x0f17 @ usat pc, #24, r12") 696 + TEST_UNSUPPORTED(".short 0xf38d,0x0c17 @ usat r12, #24, sp") 697 + TEST_UNSUPPORTED(".short 0xf38f,0x0c17 @ usat r12, #24, pc") 698 + 699 + TEST_R( "ssat16 r0, #12, r",0, HH1,"") 700 + TEST_R( "ssat16 r14, #12, r",12, HH2,"") 701 + TEST_UNSUPPORTED(".short 0xf32c,0x0d0b @ ssat16 sp, #12, r12") 702 + TEST_UNSUPPORTED(".short 0xf32c,0x0f0b @ ssat16 pc, #12, r12") 703 + TEST_UNSUPPORTED(".short 0xf32d,0x0c0b @ ssat16 r12, #12, sp") 704 + TEST_UNSUPPORTED(".short 0xf32f,0x0c0b @ ssat16 r12, #12, pc") 705 + 706 + TEST_R( "usat16 r0, #12, r",0, HH1,"") 707 + TEST_R( "usat16 r14, #12, r",12, HH2,"") 708 + TEST_UNSUPPORTED(".short 0xf3ac,0x0d0b @ usat16 sp, #12, r12") 709 + TEST_UNSUPPORTED(".short 0xf3ac,0x0f0b @ usat16 pc, #12, r12") 710 + TEST_UNSUPPORTED(".short 
0xf3ad,0x0c0b @ usat16 r12, #12, sp") 711 + TEST_UNSUPPORTED(".short 0xf3af,0x0c0b @ usat16 r12, #12, pc") 712 + 713 + TEST_R( "sbfx r0, r",0 , VAL1,", #0, #31") 714 + TEST_R( "sbfx r14, r",12, VAL2,", #8, #16") 715 + TEST_R( "sbfx r4, r",10, VAL1,", #16, #15") 716 + TEST_UNSUPPORTED(".short 0xf34c,0x2d0f @ sbfx sp, r12, #8, #16") 717 + TEST_UNSUPPORTED(".short 0xf34c,0x2f0f @ sbfx pc, r12, #8, #16") 718 + TEST_UNSUPPORTED(".short 0xf34d,0x2c0f @ sbfx r12, sp, #8, #16") 719 + TEST_UNSUPPORTED(".short 0xf34f,0x2c0f @ sbfx r12, pc, #8, #16") 720 + 721 + TEST_R( "ubfx r0, r",0 , VAL1,", #0, #31") 722 + TEST_R( "ubfx r14, r",12, VAL2,", #8, #16") 723 + TEST_R( "ubfx r4, r",10, VAL1,", #16, #15") 724 + TEST_UNSUPPORTED(".short 0xf3cc,0x2d0f @ ubfx sp, r12, #8, #16") 725 + TEST_UNSUPPORTED(".short 0xf3cc,0x2f0f @ ubfx pc, r12, #8, #16") 726 + TEST_UNSUPPORTED(".short 0xf3cd,0x2c0f @ ubfx r12, sp, #8, #16") 727 + TEST_UNSUPPORTED(".short 0xf3cf,0x2c0f @ ubfx r12, pc, #8, #16") 728 + 729 + TEST_R( "bfc r",0, VAL1,", #4, #20") 730 + TEST_R( "bfc r",14,VAL2,", #4, #20") 731 + TEST_R( "bfc r",7, VAL1,", #0, #31") 732 + TEST_R( "bfc r",8, VAL2,", #0, #31") 733 + TEST_UNSUPPORTED(".short 0xf36f,0x0d1e @ bfc sp, #0, #31") 734 + TEST_UNSUPPORTED(".short 0xf36f,0x0f1e @ bfc pc, #0, #31") 735 + 736 + TEST_RR( "bfi r",0, VAL1,", r",0 , VAL2,", #0, #31") 737 + TEST_RR( "bfi r",12,VAL1,", r",14 , VAL2,", #4, #20") 738 + TEST_UNSUPPORTED(".short 0xf36e,0x1d17 @ bfi sp, r14, #4, #20") 739 + TEST_UNSUPPORTED(".short 0xf36e,0x1f17 @ bfi pc, r14, #4, #20") 740 + TEST_UNSUPPORTED(".short 0xf36d,0x1e17 @ bfi r14, sp, #4, #20") 741 + 742 + TEST_GROUP("Branches and miscellaneous control") 743 + 744 + CONDITION_INSTRUCTIONS(22, 745 + TEST_BF("beq.w 2f") 746 + TEST_BB("bne.w 2b") 747 + TEST_BF("bgt.w 2f") 748 + TEST_BB("blt.w 2b") 749 + TEST_BF_X("bpl.w 2f",0x1000) 750 + ) 751 + 752 + TEST_UNSUPPORTED("msr cpsr, r0") 753 + TEST_UNSUPPORTED("msr cpsr_f, r1") 754 + TEST_UNSUPPORTED("msr spsr, r2") 
755 + 756 + TEST_UNSUPPORTED("cpsie.w i") 757 + TEST_UNSUPPORTED("cpsid.w i") 758 + TEST_UNSUPPORTED("cps 0x13") 759 + 760 + TEST_SUPPORTED("yield.w") 761 + TEST("sev.w") 762 + TEST("nop.w") 763 + TEST("wfi.w") 764 + TEST_SUPPORTED("wfe.w") 765 + TEST_UNSUPPORTED("dbg.w #0") 766 + 767 + TEST_UNSUPPORTED("clrex") 768 + TEST_UNSUPPORTED("dsb") 769 + TEST_UNSUPPORTED("dmb") 770 + TEST_UNSUPPORTED("isb") 771 + 772 + TEST_UNSUPPORTED("bxj r0") 773 + 774 + TEST_UNSUPPORTED("subs pc, lr, #4") 775 + 776 + TEST("mrs r0, cpsr") 777 + TEST("mrs r14, cpsr") 778 + TEST_UNSUPPORTED(".short 0xf3ef,0x8d00 @ mrs sp, spsr") 779 + TEST_UNSUPPORTED(".short 0xf3ef,0x8f00 @ mrs pc, spsr") 780 + TEST_UNSUPPORTED("mrs r0, spsr") 781 + TEST_UNSUPPORTED("mrs lr, spsr") 782 + 783 + TEST_UNSUPPORTED(".short 0xf7f0,0x8000 @ smc #0") 784 + 785 + TEST_UNSUPPORTED(".short 0xf7f0,0xa000 @ undefined") 786 + 787 + TEST_BF( "b.w 2f") 788 + TEST_BB( "b.w 2b") 789 + TEST_BF_X("b.w 2f", 0x1000) 790 + 791 + TEST_BF( "bl.w 2f") 792 + TEST_BB( "bl.w 2b") 793 + TEST_BB_X("bl.w 2b", 0x1000) 794 + 795 + TEST_X( "blx __dummy_arm_subroutine", 796 + ".arm \n\t" 797 + ".align \n\t" 798 + ".type __dummy_arm_subroutine, %%function \n\t" 799 + "__dummy_arm_subroutine: \n\t" 800 + "mov r0, pc \n\t" 801 + "bx lr \n\t" 802 + ".thumb \n\t" 803 + ) 804 + TEST( "blx __dummy_arm_subroutine") 805 + 806 + TEST_GROUP("Store single data item") 807 + 808 + #define SINGLE_STORE(size) \ 809 + TEST_RP( "str"size" r",0, VAL1,", [r",11,-1024,", #1024]") \ 810 + TEST_RP( "str"size" r",14,VAL2,", [r",1, -1024,", #1080]") \ 811 + TEST_RP( "str"size" r",0, VAL1,", [r",11,256, ", #-120]") \ 812 + TEST_RP( "str"size" r",14,VAL2,", [r",1, 256, ", #-128]") \ 813 + TEST_RP( "str"size" r",0, VAL1,", [r",11,24, "], #120") \ 814 + TEST_RP( "str"size" r",14,VAL2,", [r",1, 24, "], #128") \ 815 + TEST_RP( "str"size" r",0, VAL1,", [r",11,24, "], #-120") \ 816 + TEST_RP( "str"size" r",14,VAL2,", [r",1, 24, "], #-128") \ 817 + TEST_RP( "str"size" 
r",0, VAL1,", [r",11,24, ", #120]!") \ 818 + TEST_RP( "str"size" r",14,VAL2,", [r",1, 24, ", #128]!") \ 819 + TEST_RP( "str"size" r",0, VAL1,", [r",11,256, ", #-120]!") \ 820 + TEST_RP( "str"size" r",14,VAL2,", [r",1, 256, ", #-128]!") \ 821 + TEST_RPR("str"size".w r",0, VAL1,", [r",1, 0,", r",2, 4,"]") \ 822 + TEST_RPR("str"size" r",14,VAL2,", [r",10,0,", r",11,4,", lsl #1]") \ 823 + TEST_R( "str"size".w r",7, VAL1,", [sp, #24]") \ 824 + TEST_RP( "str"size".w r",0, VAL2,", [r",0,0, "]") \ 825 + TEST_UNSUPPORTED("str"size"t r0, [r1, #4]") 826 + 827 + SINGLE_STORE("b") 828 + SINGLE_STORE("h") 829 + SINGLE_STORE("") 830 + 831 + TEST("str sp, [sp]") 832 + TEST_UNSUPPORTED(".short 0xf8cf,0xe000 @ str r14, [pc]") 833 + TEST_UNSUPPORTED(".short 0xf8ce,0xf000 @ str pc, [r14]") 834 + 835 + TEST_GROUP("Advanced SIMD element or structure load/store instructions") 836 + 837 + TEST_UNSUPPORTED(".short 0xf900,0x0000") 838 + TEST_UNSUPPORTED(".short 0xf92f,0xffff") 839 + TEST_UNSUPPORTED(".short 0xf980,0x0000") 840 + TEST_UNSUPPORTED(".short 0xf9ef,0xffff") 841 + 842 + TEST_GROUP("Load single data item and memory hints") 843 + 844 + #define SINGLE_LOAD(size) \ 845 + TEST_P( "ldr"size" r0, [r",11,-1024, ", #1024]") \ 846 + TEST_P( "ldr"size" r14, [r",1, -1024,", #1080]") \ 847 + TEST_P( "ldr"size" r0, [r",11,256, ", #-120]") \ 848 + TEST_P( "ldr"size" r14, [r",1, 256, ", #-128]") \ 849 + TEST_P( "ldr"size" r0, [r",11,24, "], #120") \ 850 + TEST_P( "ldr"size" r14, [r",1, 24, "], #128") \ 851 + TEST_P( "ldr"size" r0, [r",11,24, "], #-120") \ 852 + TEST_P( "ldr"size" r14, [r",1,24, "], #-128") \ 853 + TEST_P( "ldr"size" r0, [r",11,24, ", #120]!") \ 854 + TEST_P( "ldr"size" r14, [r",1, 24, ", #128]!") \ 855 + TEST_P( "ldr"size" r0, [r",11,256, ", #-120]!") \ 856 + TEST_P( "ldr"size" r14, [r",1, 256, ", #-128]!") \ 857 + TEST_PR("ldr"size".w r0, [r",1, 0,", r",2, 4,"]") \ 858 + TEST_PR("ldr"size" r14, [r",10,0,", r",11,4,", lsl #1]") \ 859 + TEST_X( "ldr"size".w r0, 3f", \ 860 + 
".align 3 \n\t" \ 861 + "3: .word "__stringify(VAL1)) \ 862 + TEST_X( "ldr"size".w r14, 3f", \ 863 + ".align 3 \n\t" \ 864 + "3: .word "__stringify(VAL2)) \ 865 + TEST( "ldr"size".w r7, 3b") \ 866 + TEST( "ldr"size".w r7, [sp, #24]") \ 867 + TEST_P( "ldr"size".w r0, [r",0,0, "]") \ 868 + TEST_UNSUPPORTED("ldr"size"t r0, [r1, #4]") 869 + 870 + SINGLE_LOAD("b") 871 + SINGLE_LOAD("sb") 872 + SINGLE_LOAD("h") 873 + SINGLE_LOAD("sh") 874 + SINGLE_LOAD("") 875 + 876 + TEST_BF_P("ldr pc, [r",14, 15*4,"]") 877 + TEST_P( "ldr sp, [r",14, 13*4,"]") 878 + TEST_BF_R("ldr pc, [sp, r",14, 15*4,"]") 879 + TEST_R( "ldr sp, [sp, r",14, 13*4,"]") 880 + TEST_THUMB_TO_ARM_INTERWORK_P("ldr pc, [r",0,0,", #15*4]") 881 + TEST_SUPPORTED("ldr sp, 99f") 882 + TEST_SUPPORTED("ldr pc, 99f") 883 + 884 + TEST_UNSUPPORTED(".short 0xf854,0x700d @ ldr r7, [r4, sp]") 885 + TEST_UNSUPPORTED(".short 0xf854,0x700f @ ldr r7, [r4, pc]") 886 + TEST_UNSUPPORTED(".short 0xf814,0x700d @ ldrb r7, [r4, sp]") 887 + TEST_UNSUPPORTED(".short 0xf814,0x700f @ ldrb r7, [r4, pc]") 888 + TEST_UNSUPPORTED(".short 0xf89f,0xd004 @ ldrb sp, 99f") 889 + TEST_UNSUPPORTED(".short 0xf814,0xd008 @ ldrb sp, [r4, r8]") 890 + TEST_UNSUPPORTED(".short 0xf894,0xd000 @ ldrb sp, [r4]") 891 + 892 + TEST_UNSUPPORTED(".short 0xf860,0x0000") /* Unallocated space */ 893 + TEST_UNSUPPORTED(".short 0xf9ff,0xffff") /* Unallocated space */ 894 + TEST_UNSUPPORTED(".short 0xf950,0x0000") /* Unallocated space */ 895 + TEST_UNSUPPORTED(".short 0xf95f,0xffff") /* Unallocated space */ 896 + TEST_UNSUPPORTED(".short 0xf800,0x0800") /* Unallocated space */ 897 + TEST_UNSUPPORTED(".short 0xf97f,0xfaff") /* Unallocated space */ 898 + 899 + TEST( "pli [pc, #4]") 900 + TEST( "pli [pc, #-4]") 901 + TEST( "pld [pc, #4]") 902 + TEST( "pld [pc, #-4]") 903 + 904 + TEST_P( "pld [r",0,-1024,", #1024]") 905 + TEST( ".short 0xf8b0,0xf400 @ pldw [r0, #1024]") 906 + TEST_P( "pli [r",4, 0b,", #1024]") 907 + TEST_P( "pld [r",7, 120,", #-120]") 908 + TEST( ".short 
0xf837,0xfc78 @ pldw [r7, #-120]") 909 + TEST_P( "pli [r",11,120,", #-120]") 910 + TEST( "pld [sp, #0]") 911 + 912 + TEST_PR("pld [r",7, 24, ", r",0, 16,"]") 913 + TEST_PR("pld [r",8, 24, ", r",12,16,", lsl #3]") 914 + TEST_SUPPORTED(".short 0xf837,0xf000 @ pldw [r7, r0]") 915 + TEST_SUPPORTED(".short 0xf838,0xf03c @ pldw [r8, r12, lsl #3]"); 916 + TEST_RR("pli [r",12,0b,", r",0, 16,"]") 917 + TEST_RR("pli [r",0, 0b,", r",12,16,", lsl #3]") 918 + TEST_R( "pld [sp, r",1, 16,"]") 919 + TEST_UNSUPPORTED(".short 0xf817,0xf00d @pld [r7, sp]") 920 + TEST_UNSUPPORTED(".short 0xf817,0xf00f @pld [r7, pc]") 921 + 922 + TEST_GROUP("Data-processing (register)") 923 + 924 + #define SHIFTS32(op) \ 925 + TEST_RR(op" r0, r",1, VAL1,", r",2, 3, "") \ 926 + TEST_RR(op" r14, r",12,VAL2,", r",11,10,"") 927 + 928 + SHIFTS32("lsl") 929 + SHIFTS32("lsls") 930 + SHIFTS32("lsr") 931 + SHIFTS32("lsrs") 932 + SHIFTS32("asr") 933 + SHIFTS32("asrs") 934 + SHIFTS32("ror") 935 + SHIFTS32("rors") 936 + 937 + TEST_UNSUPPORTED(".short 0xfa01,0xff02 @ lsl pc, r1, r2") 938 + TEST_UNSUPPORTED(".short 0xfa01,0xfd02 @ lsl sp, r1, r2") 939 + TEST_UNSUPPORTED(".short 0xfa0f,0xf002 @ lsl r0, pc, r2") 940 + TEST_UNSUPPORTED(".short 0xfa0d,0xf002 @ lsl r0, sp, r2") 941 + TEST_UNSUPPORTED(".short 0xfa01,0xf00f @ lsl r0, r1, pc") 942 + TEST_UNSUPPORTED(".short 0xfa01,0xf00d @ lsl r0, r1, sp") 943 + 944 + TEST_RR( "sxtah r0, r",0, HH1,", r",1, HH2,"") 945 + TEST_RR( "sxtah r14,r",12, HH2,", r",10,HH1,", ror #8") 946 + TEST_R( "sxth r8, r",7, HH1,"") 947 + 948 + TEST_UNSUPPORTED(".short 0xfa0f,0xff87 @ sxth pc, r7"); 949 + TEST_UNSUPPORTED(".short 0xfa0f,0xfd87 @ sxth sp, r7"); 950 + TEST_UNSUPPORTED(".short 0xfa0f,0xf88f @ sxth r8, pc"); 951 + TEST_UNSUPPORTED(".short 0xfa0f,0xf88d @ sxth r8, sp"); 952 + 953 + TEST_RR( "uxtah r0, r",0, HH1,", r",1, HH2,"") 954 + TEST_RR( "uxtah r14,r",12, HH2,", r",10,HH1,", ror #8") 955 + TEST_R( "uxth r8, r",7, HH1,"") 956 + 957 + TEST_RR( "sxtab16 r0, r",0, HH1,", r",1, 
HH2,"") 958 + TEST_RR( "sxtab16 r14,r",12, HH2,", r",10,HH1,", ror #8") 959 + TEST_R( "sxtb16 r8, r",7, HH1,"") 960 + 961 + TEST_RR( "uxtab16 r0, r",0, HH1,", r",1, HH2,"") 962 + TEST_RR( "uxtab16 r14,r",12, HH2,", r",10,HH1,", ror #8") 963 + TEST_R( "uxtb16 r8, r",7, HH1,"") 964 + 965 + TEST_RR( "sxtab r0, r",0, HH1,", r",1, HH2,"") 966 + TEST_RR( "sxtab r14,r",12, HH2,", r",10,HH1,", ror #8") 967 + TEST_R( "sxtb r8, r",7, HH1,"") 968 + 969 + TEST_RR( "uxtab r0, r",0, HH1,", r",1, HH2,"") 970 + TEST_RR( "uxtab r14,r",12, HH2,", r",10,HH1,", ror #8") 971 + TEST_R( "uxtb r8, r",7, HH1,"") 972 + 973 + TEST_UNSUPPORTED(".short 0xfa60,0x00f0") 974 + TEST_UNSUPPORTED(".short 0xfa7f,0xffff") 975 + 976 + #define PARALLEL_ADD_SUB(op) \ 977 + TEST_RR( op"add16 r0, r",0, HH1,", r",1, HH2,"") \ 978 + TEST_RR( op"add16 r14, r",12,HH2,", r",10,HH1,"") \ 979 + TEST_RR( op"asx r0, r",0, HH1,", r",1, HH2,"") \ 980 + TEST_RR( op"asx r14, r",12,HH2,", r",10,HH1,"") \ 981 + TEST_RR( op"sax r0, r",0, HH1,", r",1, HH2,"") \ 982 + TEST_RR( op"sax r14, r",12,HH2,", r",10,HH1,"") \ 983 + TEST_RR( op"sub16 r0, r",0, HH1,", r",1, HH2,"") \ 984 + TEST_RR( op"sub16 r14, r",12,HH2,", r",10,HH1,"") \ 985 + TEST_RR( op"add8 r0, r",0, HH1,", r",1, HH2,"") \ 986 + TEST_RR( op"add8 r14, r",12,HH2,", r",10,HH1,"") \ 987 + TEST_RR( op"sub8 r0, r",0, HH1,", r",1, HH2,"") \ 988 + TEST_RR( op"sub8 r14, r",12,HH2,", r",10,HH1,"") 989 + 990 + TEST_GROUP("Parallel addition and subtraction, signed") 991 + 992 + PARALLEL_ADD_SUB("s") 993 + PARALLEL_ADD_SUB("q") 994 + PARALLEL_ADD_SUB("sh") 995 + 996 + TEST_GROUP("Parallel addition and subtraction, unsigned") 997 + 998 + PARALLEL_ADD_SUB("u") 999 + PARALLEL_ADD_SUB("uq") 1000 + PARALLEL_ADD_SUB("uh") 1001 + 1002 + TEST_GROUP("Miscellaneous operations") 1003 + 1004 + TEST_RR("qadd r0, r",1, VAL1,", r",2, VAL2,"") 1005 + TEST_RR("qadd lr, r",9, VAL2,", r",8, VAL1,"") 1006 + TEST_RR("qsub r0, r",1, VAL1,", r",2, VAL2,"") 1007 + TEST_RR("qsub lr, r",9, VAL2,", 
r",8, VAL1,"") 1008 + TEST_RR("qdadd r0, r",1, VAL1,", r",2, VAL2,"") 1009 + TEST_RR("qdadd lr, r",9, VAL2,", r",8, VAL1,"") 1010 + TEST_RR("qdsub r0, r",1, VAL1,", r",2, VAL2,"") 1011 + TEST_RR("qdsub lr, r",9, VAL2,", r",8, VAL1,"") 1012 + 1013 + TEST_R("rev.w r0, r",0, VAL1,"") 1014 + TEST_R("rev r14, r",12, VAL2,"") 1015 + TEST_R("rev16.w r0, r",0, VAL1,"") 1016 + TEST_R("rev16 r14, r",12, VAL2,"") 1017 + TEST_R("rbit r0, r",0, VAL1,"") 1018 + TEST_R("rbit r14, r",12, VAL2,"") 1019 + TEST_R("revsh.w r0, r",0, VAL1,"") 1020 + TEST_R("revsh r14, r",12, VAL2,"") 1021 + 1022 + TEST_UNSUPPORTED(".short 0xfa9c,0xff8c @ rev pc, r12"); 1023 + TEST_UNSUPPORTED(".short 0xfa9c,0xfd8c @ rev sp, r12"); 1024 + TEST_UNSUPPORTED(".short 0xfa9f,0xfe8f @ rev r14, pc"); 1025 + TEST_UNSUPPORTED(".short 0xfa9d,0xfe8d @ rev r14, sp"); 1026 + 1027 + TEST_RR("sel r0, r",0, VAL1,", r",1, VAL2,"") 1028 + TEST_RR("sel r14, r",12,VAL1,", r",10, VAL2,"") 1029 + 1030 + TEST_R("clz r0, r",0, 0x0,"") 1031 + TEST_R("clz r7, r",14,0x1,"") 1032 + TEST_R("clz lr, r",7, 0xffffffff,"") 1033 + 1034 + TEST_UNSUPPORTED(".short 0xfa80,0xf030") /* Unallocated space */ 1035 + TEST_UNSUPPORTED(".short 0xfaff,0xff7f") /* Unallocated space */ 1036 + TEST_UNSUPPORTED(".short 0xfab0,0xf000") /* Unallocated space */ 1037 + TEST_UNSUPPORTED(".short 0xfaff,0xff7f") /* Unallocated space */ 1038 + 1039 + TEST_GROUP("Multiply, multiply accumulate, and absolute difference operations") 1040 + 1041 + TEST_RR( "mul r0, r",1, VAL1,", r",2, VAL2,"") 1042 + TEST_RR( "mul r7, r",8, VAL2,", r",9, VAL2,"") 1043 + TEST_UNSUPPORTED(".short 0xfb08,0xff09 @ mul pc, r8, r9") 1044 + TEST_UNSUPPORTED(".short 0xfb08,0xfd09 @ mul sp, r8, r9") 1045 + TEST_UNSUPPORTED(".short 0xfb0f,0xf709 @ mul r7, pc, r9") 1046 + TEST_UNSUPPORTED(".short 0xfb0d,0xf709 @ mul r7, sp, r9") 1047 + TEST_UNSUPPORTED(".short 0xfb08,0xf70f @ mul r7, r8, pc") 1048 + TEST_UNSUPPORTED(".short 0xfb08,0xf70d @ mul r7, r8, sp") 1049 + 1050 + TEST_RRR( "mla r0, 
r",1, VAL1,", r",2, VAL2,", r",3, VAL3,"") 1051 + TEST_RRR( "mla r7, r",8, VAL3,", r",9, VAL1,", r",10, VAL2,"") 1052 + TEST_UNSUPPORTED(".short 0xfb08,0xaf09 @ mla pc, r8, r9, r10"); 1053 + TEST_UNSUPPORTED(".short 0xfb08,0xad09 @ mla sp, r8, r9, r10"); 1054 + TEST_UNSUPPORTED(".short 0xfb0f,0xa709 @ mla r7, pc, r9, r10"); 1055 + TEST_UNSUPPORTED(".short 0xfb0d,0xa709 @ mla r7, sp, r9, r10"); 1056 + TEST_UNSUPPORTED(".short 0xfb08,0xa70f @ mla r7, r8, pc, r10"); 1057 + TEST_UNSUPPORTED(".short 0xfb08,0xa70d @ mla r7, r8, sp, r10"); 1058 + TEST_UNSUPPORTED(".short 0xfb08,0xd709 @ mla r7, r8, r9, sp"); 1059 + 1060 + TEST_RRR( "mls r0, r",1, VAL1,", r",2, VAL2,", r",3, VAL3,"") 1061 + TEST_RRR( "mls r7, r",8, VAL3,", r",9, VAL1,", r",10, VAL2,"") 1062 + 1063 + TEST_RRR( "smlabb r0, r",1, VAL1,", r",2, VAL2,", r",3, VAL3,"") 1064 + TEST_RRR( "smlabb r7, r",8, VAL3,", r",9, VAL1,", r",10, VAL2,"") 1065 + TEST_RRR( "smlatb r0, r",1, VAL1,", r",2, VAL2,", r",3, VAL3,"") 1066 + TEST_RRR( "smlatb r7, r",8, VAL3,", r",9, VAL1,", r",10, VAL2,"") 1067 + TEST_RRR( "smlabt r0, r",1, VAL1,", r",2, VAL2,", r",3, VAL3,"") 1068 + TEST_RRR( "smlabt r7, r",8, VAL3,", r",9, VAL1,", r",10, VAL2,"") 1069 + TEST_RRR( "smlatt r0, r",1, VAL1,", r",2, VAL2,", r",3, VAL3,"") 1070 + TEST_RRR( "smlatt r7, r",8, VAL3,", r",9, VAL1,", r",10, VAL2,"") 1071 + TEST_RR( "smulbb r0, r",1, VAL1,", r",2, VAL2,"") 1072 + TEST_RR( "smulbb r7, r",8, VAL3,", r",9, VAL1,"") 1073 + TEST_RR( "smultb r0, r",1, VAL1,", r",2, VAL2,"") 1074 + TEST_RR( "smultb r7, r",8, VAL3,", r",9, VAL1,"") 1075 + TEST_RR( "smulbt r0, r",1, VAL1,", r",2, VAL2,"") 1076 + TEST_RR( "smulbt r7, r",8, VAL3,", r",9, VAL1,"") 1077 + TEST_RR( "smultt r0, r",1, VAL1,", r",2, VAL2,"") 1078 + TEST_RR( "smultt r7, r",8, VAL3,", r",9, VAL1,"") 1079 + 1080 + TEST_RRR( "smlad r0, r",0, HH1,", r",1, HH2,", r",2, VAL1,"") 1081 + TEST_RRR( "smlad r14, r",12,HH2,", r",10,HH1,", r",8, VAL2,"") 1082 + TEST_RRR( "smladx r0, r",0, HH1,", r",1, HH2,", 
r",2, VAL1,"") 1083 + TEST_RRR( "smladx r14, r",12,HH2,", r",10,HH1,", r",8, VAL2,"") 1084 + TEST_RR( "smuad r0, r",0, HH1,", r",1, HH2,"") 1085 + TEST_RR( "smuad r14, r",12,HH2,", r",10,HH1,"") 1086 + TEST_RR( "smuadx r0, r",0, HH1,", r",1, HH2,"") 1087 + TEST_RR( "smuadx r14, r",12,HH2,", r",10,HH1,"") 1088 + 1089 + TEST_RRR( "smlawb r0, r",1, VAL1,", r",2, VAL2,", r",3, VAL3,"") 1090 + TEST_RRR( "smlawb r7, r",8, VAL3,", r",9, VAL1,", r",10, VAL2,"") 1091 + TEST_RRR( "smlawt r0, r",1, VAL1,", r",2, VAL2,", r",3, VAL3,"") 1092 + TEST_RRR( "smlawt r7, r",8, VAL3,", r",9, VAL1,", r",10, VAL2,"") 1093 + TEST_RR( "smulwb r0, r",1, VAL1,", r",2, VAL2,"") 1094 + TEST_RR( "smulwb r7, r",8, VAL3,", r",9, VAL1,"") 1095 + TEST_RR( "smulwt r0, r",1, VAL1,", r",2, VAL2,"") 1096 + TEST_RR( "smulwt r7, r",8, VAL3,", r",9, VAL1,"") 1097 + 1098 + TEST_RRR( "smlsd r0, r",0, HH1,", r",1, HH2,", r",2, VAL1,"") 1099 + TEST_RRR( "smlsd r14, r",12,HH2,", r",10,HH1,", r",8, VAL2,"") 1100 + TEST_RRR( "smlsdx r0, r",0, HH1,", r",1, HH2,", r",2, VAL1,"") 1101 + TEST_RRR( "smlsdx r14, r",12,HH2,", r",10,HH1,", r",8, VAL2,"") 1102 + TEST_RR( "smusd r0, r",0, HH1,", r",1, HH2,"") 1103 + TEST_RR( "smusd r14, r",12,HH2,", r",10,HH1,"") 1104 + TEST_RR( "smusdx r0, r",0, HH1,", r",1, HH2,"") 1105 + TEST_RR( "smusdx r14, r",12,HH2,", r",10,HH1,"") 1106 + 1107 + TEST_RRR( "smmla r0, r",0, VAL1,", r",1, VAL2,", r",2, VAL1,"") 1108 + TEST_RRR( "smmla r14, r",12,VAL2,", r",10,VAL1,", r",8, VAL2,"") 1109 + TEST_RRR( "smmlar r0, r",0, VAL1,", r",1, VAL2,", r",2, VAL1,"") 1110 + TEST_RRR( "smmlar r14, r",12,VAL2,", r",10,VAL1,", r",8, VAL2,"") 1111 + TEST_RR( "smmul r0, r",0, VAL1,", r",1, VAL2,"") 1112 + TEST_RR( "smmul r14, r",12,VAL2,", r",10,VAL1,"") 1113 + TEST_RR( "smmulr r0, r",0, VAL1,", r",1, VAL2,"") 1114 + TEST_RR( "smmulr r14, r",12,VAL2,", r",10,VAL1,"") 1115 + 1116 + TEST_RRR( "smmls r0, r",0, VAL1,", r",1, VAL2,", r",2, VAL1,"") 1117 + TEST_RRR( "smmls r14, r",12,VAL2,", r",10,VAL1,", 
r",8, VAL2,"") 1118 + TEST_RRR( "smmlsr r0, r",0, VAL1,", r",1, VAL2,", r",2, VAL1,"") 1119 + TEST_RRR( "smmlsr r14, r",12,VAL2,", r",10,VAL1,", r",8, VAL2,"") 1120 + 1121 + TEST_RRR( "usada8 r0, r",0, VAL1,", r",1, VAL2,", r",2, VAL3,"") 1122 + TEST_RRR( "usada8 r14, r",12,VAL2,", r",10,VAL1,", r",8, VAL3,"") 1123 + TEST_RR( "usad8 r0, r",0, VAL1,", r",1, VAL2,"") 1124 + TEST_RR( "usad8 r14, r",12,VAL2,", r",10,VAL1,"") 1125 + 1126 + TEST_UNSUPPORTED(".short 0xfb00,0xf010") /* Unallocated space */ 1127 + TEST_UNSUPPORTED(".short 0xfb0f,0xff1f") /* Unallocated space */ 1128 + TEST_UNSUPPORTED(".short 0xfb70,0xf010") /* Unallocated space */ 1129 + TEST_UNSUPPORTED(".short 0xfb7f,0xff1f") /* Unallocated space */ 1130 + TEST_UNSUPPORTED(".short 0xfb70,0x0010") /* Unallocated space */ 1131 + TEST_UNSUPPORTED(".short 0xfb7f,0xff1f") /* Unallocated space */ 1132 + 1133 + TEST_GROUP("Long multiply, long multiply accumulate, and divide") 1134 + 1135 + TEST_RR( "smull r0, r1, r",2, VAL1,", r",3, VAL2,"") 1136 + TEST_RR( "smull r7, r8, r",9, VAL2,", r",10, VAL1,"") 1137 + TEST_UNSUPPORTED(".short 0xfb89,0xf80a @ smull pc, r8, r9, r10"); 1138 + TEST_UNSUPPORTED(".short 0xfb89,0xd80a @ smull sp, r8, r9, r10"); 1139 + TEST_UNSUPPORTED(".short 0xfb89,0x7f0a @ smull r7, pc, r9, r10"); 1140 + TEST_UNSUPPORTED(".short 0xfb89,0x7d0a @ smull r7, sp, r9, r10"); 1141 + TEST_UNSUPPORTED(".short 0xfb8f,0x780a @ smull r7, r8, pc, r10"); 1142 + TEST_UNSUPPORTED(".short 0xfb8d,0x780a @ smull r7, r8, sp, r10"); 1143 + TEST_UNSUPPORTED(".short 0xfb89,0x780f @ smull r7, r8, r9, pc"); 1144 + TEST_UNSUPPORTED(".short 0xfb89,0x780d @ smull r7, r8, r9, sp"); 1145 + 1146 + TEST_RR( "umull r0, r1, r",2, VAL1,", r",3, VAL2,"") 1147 + TEST_RR( "umull r7, r8, r",9, VAL2,", r",10, VAL1,"") 1148 + 1149 + TEST_RRRR( "smlal r",0, VAL1,", r",1, VAL2,", r",2, VAL3,", r",3, VAL4) 1150 + TEST_RRRR( "smlal r",8, VAL4,", r",9, VAL1,", r",10,VAL2,", r",11,VAL3) 1151 + 1152 + TEST_RRRR( "smlalbb r",0, VAL1,", 
r",1, VAL2,", r",2, VAL3,", r",3, VAL4) 1153 + TEST_RRRR( "smlalbb r",8, VAL4,", r",9, VAL1,", r",10,VAL2,", r",11,VAL3) 1154 + TEST_RRRR( "smlalbt r",0, VAL1,", r",1, VAL2,", r",2, VAL3,", r",3, VAL4) 1155 + TEST_RRRR( "smlalbt r",8, VAL4,", r",9, VAL1,", r",10,VAL2,", r",11,VAL3) 1156 + TEST_RRRR( "smlaltb r",0, VAL1,", r",1, VAL2,", r",2, VAL3,", r",3, VAL4) 1157 + TEST_RRRR( "smlaltb r",8, VAL4,", r",9, VAL1,", r",10,VAL2,", r",11,VAL3) 1158 + TEST_RRRR( "smlaltt r",0, VAL1,", r",1, VAL2,", r",2, VAL3,", r",3, VAL4) 1159 + TEST_RRRR( "smlaltt r",8, VAL4,", r",9, VAL1,", r",10,VAL2,", r",11,VAL3) 1160 + 1161 + TEST_RRRR( "smlald r",0, VAL1,", r",1, VAL2, ", r",0, HH1,", r",1, HH2) 1162 + TEST_RRRR( "smlald r",11,VAL2,", r",10,VAL1, ", r",9, HH2,", r",8, HH1) 1163 + TEST_RRRR( "smlaldx r",0, VAL1,", r",1, VAL2, ", r",0, HH1,", r",1, HH2) 1164 + TEST_RRRR( "smlaldx r",11,VAL2,", r",10,VAL1, ", r",9, HH2,", r",8, HH1) 1165 + 1166 + TEST_RRRR( "smlsld r",0, VAL1,", r",1, VAL2, ", r",0, HH1,", r",1, HH2) 1167 + TEST_RRRR( "smlsld r",11,VAL2,", r",10,VAL1, ", r",9, HH2,", r",8, HH1) 1168 + TEST_RRRR( "smlsldx r",0, VAL1,", r",1, VAL2, ", r",0, HH1,", r",1, HH2) 1169 + TEST_RRRR( "smlsldx r",11,VAL2,", r",10,VAL1, ", r",9, HH2,", r",8, HH1) 1170 + 1171 + TEST_RRRR( "umlal r",0, VAL1,", r",1, VAL2,", r",2, VAL3,", r",3, VAL4) 1172 + TEST_RRRR( "umlal r",8, VAL4,", r",9, VAL1,", r",10,VAL2,", r",11,VAL3) 1173 + TEST_RRRR( "umaal r",0, VAL1,", r",1, VAL2,", r",2, VAL3,", r",3, VAL4) 1174 + TEST_RRRR( "umaal r",8, VAL4,", r",9, VAL1,", r",10,VAL2,", r",11,VAL3) 1175 + 1176 + TEST_GROUP("Coprocessor instructions") 1177 + 1178 + TEST_UNSUPPORTED(".short 0xfc00,0x0000") 1179 + TEST_UNSUPPORTED(".short 0xffff,0xffff") 1180 + 1181 + TEST_GROUP("Testing instructions in IT blocks") 1182 + 1183 + TEST_ITBLOCK("sub.w r0, r0") 1184 + 1185 + verbose("\n"); 1186 + } 1187 +
+1748
arch/arm/kernel/kprobes-test.c
··· 1 + /* 2 + * arch/arm/kernel/kprobes-test.c 3 + * 4 + * Copyright (C) 2011 Jon Medhurst <tixy@yxit.co.uk>. 5 + * 6 + * This program is free software; you can redistribute it and/or modify 7 + * it under the terms of the GNU General Public License version 2 as 8 + * published by the Free Software Foundation. 9 + */ 10 + 11 + /* 12 + * This file contains test code for ARM kprobes. 13 + * 14 + * The top level function run_all_tests() executes tests for all of the 15 + * supported instruction sets: ARM, 16-bit Thumb, and 32-bit Thumb. These tests 16 + * fall into two categories: run_api_tests() checks basic functionality of the 17 + * kprobes API, and run_test_cases() is a comprehensive test for kprobes 18 + * instruction decoding and simulation. 19 + * 20 + * run_test_cases() first checks the kprobes decoding table for self consistency 21 + * (using table_test()) then executes a series of test cases for each of the CPU 22 + * instruction forms. coverage_start() and coverage_end() are used to verify 23 + * that these test cases cover all of the possible combinations of instructions 24 + * described by the kprobes decoding tables. 25 + * 26 + * The individual test cases are in kprobes-test-arm.c and kprobes-test-thumb.c 27 + * which use the macros defined in kprobes-test.h. The rest of this 28 + * documentation will describe the operation of the framework used by these 29 + * test cases. 30 + */ 31 + 32 + /* 33 + * TESTING METHODOLOGY 34 + * ------------------- 35 + * 36 + * The methodology used to test an ARM instruction 'test_insn' is to use 37 + * inline assembler like: 38 + * 39 + * test_before: nop 40 + * test_case: test_insn 41 + * test_after: nop 42 + * 43 + * When the test case is run a kprobe is placed on each nop. The 44 + * post-handler of the test_before probe is used to modify the saved CPU 45 + * register context to that which we require for the test case. 
The 46 + * pre-handler of the test_after probe saves a copy of the CPU 47 + * register context. In this way we can execute test_insn with a specific 48 + * register context and see the results afterwards. 49 + * 50 + * To actually test the kprobes instruction emulation we perform the above 51 + * step a second time but with an additional kprobe on the test_case 52 + * instruction itself. If the emulation is accurate then the results seen 53 + * by the test_after probe will be identical to the first run which didn't 54 + * have a probe on test_case. 55 + * 56 + * Each test case is run several times with a variety of variations in the 57 + * flags value stored in CPSR, and for Thumb code, different ITState. 58 + * 59 + * For instructions which can modify PC, a second test_after probe is used 60 + * like this: 61 + * 62 + * test_before: nop 63 + * test_case: test_insn 64 + * test_after: nop 65 + * b test_done 66 + * test_after2: nop 67 + * test_done: 68 + * 69 + * The test case is constructed such that test_insn branches to 70 + * test_after2, or, if testing a conditional instruction, it may just 71 + * continue to test_after. The probes inserted at both locations let us 72 + * determine which happened. A similar approach is used for testing 73 + * backwards branches... 74 + * 75 + * b test_before 76 + * b test_done @ helps to cope with off-by-one branches 77 + * test_after2: nop 78 + * b test_done 79 + * test_before: nop 80 + * test_case: test_insn 81 + * test_after: nop 82 + * test_done: 83 + * 84 + * The macros used to generate the assembler instructions described above 85 + * are TEST_INSTRUCTION, TEST_BRANCH_F (branch forwards) and TEST_BRANCH_B 86 + * (branch backwards). In these, the local labels numbered 1, 50, 2 and 87 + * 99 represent: test_before, test_case, test_after2 and test_done. 88 + * 89 + * FRAMEWORK 90 + * --------- 91 + * 92 + * Each test case is wrapped between the pair of macros TESTCASE_START and 93 + * TESTCASE_END. 
As well as performing the inline assembler boilerplate, 94 + * these call out to the kprobes_test_case_start() and 95 + * kprobes_test_case_end() functions which drive the execution of the test 96 + * case. The specific arguments to use for each test case are stored as 97 + * inline data constructed using the various TEST_ARG_* macros. Putting 98 + * this all together, a simple test case may look like: 99 + * 100 + * TESTCASE_START("Testing mov r0, r7") 101 + * TEST_ARG_REG(7, 0x12345678) // Set r7=0x12345678 102 + * TEST_ARG_END("") 103 + * TEST_INSTRUCTION("mov r0, r7") 104 + * TESTCASE_END 105 + * 106 + * Note, in practice the single convenience macro TEST_R would be used for this 107 + * instead. 108 + * 109 + * The above would expand to assembler looking something like: 110 + * 111 + * @ TESTCASE_START 112 + * bl __kprobes_test_case_start 113 + * @ start of inline data... 114 + * .ascii "mov r0, r7" @ text title for test case 115 + * .byte 0 116 + * .align 2 117 + * 118 + * @ TEST_ARG_REG 119 + * .byte ARG_TYPE_REG 120 + * .byte 7 121 + * .short 0 122 + * .word 0x12345678 123 + * 124 + * @ TEST_ARG_END 125 + * .byte ARG_TYPE_END 126 + * .byte TEST_ISA @ flags, including ISA being tested 127 + * .short 50f-0f @ offset of 'test_before' 128 + * .short 2f-0f @ offset of 'test_after2' (if relevant) 129 + * .short 99f-0f @ offset of 'test_done' 130 + * @ start of test case code... 131 + * 0: 132 + * .code TEST_ISA @ switch to ISA being tested 133 + * 134 + * @ TEST_INSTRUCTION 135 + * 50: nop @ location for 'test_before' probe 136 + * 1: mov r0, r7 @ the test case instruction 'test_insn' 137 + * nop @ location for 'test_after' probe 138 + * 139 + * @ TESTCASE_END 140 + * 2: 141 + * 99: bl __kprobes_test_case_end_##TEST_ISA 142 + * .code NORMAL_ISA 143 + * 144 + * When the above is executed the following happens... 
145 + * 146 + * __kprobes_test_case_start() is an assembler wrapper which sets up space 147 + * for a stack buffer and calls the C function kprobes_test_case_start(). 148 + * This C function will do some initial processing of the inline data and 149 + * setup some global state. It then inserts the test_before and test_after 150 + * kprobes and returns a value which causes the assembler wrapper to jump 151 + * to the start of the test case code, (local label '0'). 152 + * 153 + * When the test case code executes, the test_before probe will be hit and 154 + * test_before_post_handler will call setup_test_context(). This fills the 155 + * stack buffer and CPU registers with a test pattern and then processes 156 + * the test case arguments. In our example there is one TEST_ARG_REG which 157 + * indicates that R7 should be loaded with the value 0x12345678. 158 + * 159 + * When the test_before probe ends, the test case continues and executes 160 + * the "mov r0, r7" instruction. It then hits the test_after probe and the 161 + * pre-handler for this (test_after_pre_handler) will save a copy of the 162 + * CPU register context. This should now have R0 holding the same value as 163 + * R7. 164 + * 165 + * Finally we get to the call to __kprobes_test_case_end_{32,16}. This is 166 + * an assembler wrapper which switches back to the ISA used by the test 167 + * code and calls the C function kprobes_test_case_end(). 168 + * 169 + * For each run through the test case, test_case_run_count is incremented 170 + * by one. For even runs, kprobes_test_case_end() saves a copy of the 171 + * register and stack buffer contents from the test case just run. It then 172 + * inserts a kprobe on the test case instruction 'test_insn' and returns a 173 + * value to cause the test case code to be re-run. 
174 + * 175 + * For odd numbered runs, kprobes_test_case_end() compares the register and 176 + * stack buffer contents to those that were saved on the previous even 177 + * numbered run (the one without the kprobe on test_insn). These should be 178 + * the same if the kprobe instruction simulation routine is correct. 179 + * 180 + * The pair of test case runs is repeated with different combinations of 181 + * flag values in CPSR and, for Thumb, different ITState. This is 182 + * controlled by test_context_cpsr(). 183 + * 184 + * BUILDING TEST CASES 185 + * ------------------- 186 + * 187 + * 188 + * As an aid to building test cases, the stack buffer is initialised with 189 + * some special values: 190 + * 191 + * [SP+13*4] Contains SP+120. This can be used to test instructions 192 + * which load a value into SP. 193 + * 194 + * [SP+15*4] When testing branching instructions using TEST_BRANCH_{F,B}, 195 + * this holds the target address of the branch, 'test_after2'. 196 + * This can be used to test instructions which load a PC value 197 + * from memory. 
198 + */ 199 + 200 + #include <linux/kernel.h> 201 + #include <linux/module.h> 202 + #include <linux/slab.h> 203 + #include <linux/kprobes.h> 204 + 205 + #include "kprobes.h" 206 + #include "kprobes-test.h" 207 + 208 + 209 + #define BENCHMARKING 1 210 + 211 + 212 + /* 213 + * Test basic API 214 + */ 215 + 216 + static bool test_regs_ok; 217 + static int test_func_instance; 218 + static int pre_handler_called; 219 + static int post_handler_called; 220 + static int jprobe_func_called; 221 + static int kretprobe_handler_called; 222 + 223 + #define FUNC_ARG1 0x12345678 224 + #define FUNC_ARG2 0xabcdef 225 + 226 + 227 + #ifndef CONFIG_THUMB2_KERNEL 228 + 229 + long arm_func(long r0, long r1); 230 + 231 + static void __used __naked __arm_kprobes_test_func(void) 232 + { 233 + __asm__ __volatile__ ( 234 + ".arm \n\t" 235 + ".type arm_func, %%function \n\t" 236 + "arm_func: \n\t" 237 + "adds r0, r0, r1 \n\t" 238 + "bx lr \n\t" 239 + ".code "NORMAL_ISA /* Back to Thumb if necessary */ 240 + : : : "r0", "r1", "cc" 241 + ); 242 + } 243 + 244 + #else /* CONFIG_THUMB2_KERNEL */ 245 + 246 + long thumb16_func(long r0, long r1); 247 + long thumb32even_func(long r0, long r1); 248 + long thumb32odd_func(long r0, long r1); 249 + 250 + static void __used __naked __thumb_kprobes_test_funcs(void) 251 + { 252 + __asm__ __volatile__ ( 253 + ".type thumb16_func, %%function \n\t" 254 + "thumb16_func: \n\t" 255 + "adds.n r0, r0, r1 \n\t" 256 + "bx lr \n\t" 257 + 258 + ".align \n\t" 259 + ".type thumb32even_func, %%function \n\t" 260 + "thumb32even_func: \n\t" 261 + "adds.w r0, r0, r1 \n\t" 262 + "bx lr \n\t" 263 + 264 + ".align \n\t" 265 + "nop.n \n\t" 266 + ".type thumb32odd_func, %%function \n\t" 267 + "thumb32odd_func: \n\t" 268 + "adds.w r0, r0, r1 \n\t" 269 + "bx lr \n\t" 270 + 271 + : : : "r0", "r1", "cc" 272 + ); 273 + } 274 + 275 + #endif /* CONFIG_THUMB2_KERNEL */ 276 + 277 + 278 + static int call_test_func(long (*func)(long, long), bool check_test_regs) 279 + { 280 + long ret; 281 + 
282 + ++test_func_instance; 283 + test_regs_ok = false; 284 + 285 + ret = (*func)(FUNC_ARG1, FUNC_ARG2); 286 + if (ret != FUNC_ARG1 + FUNC_ARG2) { 287 + pr_err("FAIL: call_test_func: func returned %lx\n", ret); 288 + return false; 289 + } 290 + 291 + if (check_test_regs && !test_regs_ok) { 292 + pr_err("FAIL: test regs not OK\n"); 293 + return false; 294 + } 295 + 296 + return true; 297 + } 298 + 299 + static int __kprobes pre_handler(struct kprobe *p, struct pt_regs *regs) 300 + { 301 + pre_handler_called = test_func_instance; 302 + if (regs->ARM_r0 == FUNC_ARG1 && regs->ARM_r1 == FUNC_ARG2) 303 + test_regs_ok = true; 304 + return 0; 305 + } 306 + 307 + static void __kprobes post_handler(struct kprobe *p, struct pt_regs *regs, 308 + unsigned long flags) 309 + { 310 + post_handler_called = test_func_instance; 311 + if (regs->ARM_r0 != FUNC_ARG1 + FUNC_ARG2 || regs->ARM_r1 != FUNC_ARG2) 312 + test_regs_ok = false; 313 + } 314 + 315 + static struct kprobe the_kprobe = { 316 + .addr = 0, 317 + .pre_handler = pre_handler, 318 + .post_handler = post_handler 319 + }; 320 + 321 + static int test_kprobe(long (*func)(long, long)) 322 + { 323 + int ret; 324 + 325 + the_kprobe.addr = (kprobe_opcode_t *)func; 326 + ret = register_kprobe(&the_kprobe); 327 + if (ret < 0) { 328 + pr_err("FAIL: register_kprobe failed with %d\n", ret); 329 + return ret; 330 + } 331 + 332 + ret = call_test_func(func, true); 333 + 334 + unregister_kprobe(&the_kprobe); 335 + the_kprobe.flags = 0; /* Clear disable flag to allow reuse */ 336 + 337 + if (!ret) 338 + return -EINVAL; 339 + if (pre_handler_called != test_func_instance) { 340 + pr_err("FAIL: kprobe pre_handler not called\n"); 341 + return -EINVAL; 342 + } 343 + if (post_handler_called != test_func_instance) { 344 + pr_err("FAIL: kprobe post_handler not called\n"); 345 + return -EINVAL; 346 + } 347 + if (!call_test_func(func, false)) 348 + return -EINVAL; 349 + if (pre_handler_called == test_func_instance || 350 + post_handler_called == 
test_func_instance) { 351 + pr_err("FAIL: probe called after unregistering\n"); 352 + return -EINVAL; 353 + } 354 + 355 + return 0; 356 + } 357 + 358 + static void __kprobes jprobe_func(long r0, long r1) 359 + { 360 + jprobe_func_called = test_func_instance; 361 + if (r0 == FUNC_ARG1 && r1 == FUNC_ARG2) 362 + test_regs_ok = true; 363 + jprobe_return(); 364 + } 365 + 366 + static struct jprobe the_jprobe = { 367 + .entry = jprobe_func, 368 + }; 369 + 370 + static int test_jprobe(long (*func)(long, long)) 371 + { 372 + int ret; 373 + 374 + the_jprobe.kp.addr = (kprobe_opcode_t *)func; 375 + ret = register_jprobe(&the_jprobe); 376 + if (ret < 0) { 377 + pr_err("FAIL: register_jprobe failed with %d\n", ret); 378 + return ret; 379 + } 380 + 381 + ret = call_test_func(func, true); 382 + 383 + unregister_jprobe(&the_jprobe); 384 + the_jprobe.kp.flags = 0; /* Clear disable flag to allow reuse */ 385 + 386 + if (!ret) 387 + return -EINVAL; 388 + if (jprobe_func_called != test_func_instance) { 389 + pr_err("FAIL: jprobe handler function not called\n"); 390 + return -EINVAL; 391 + } 392 + if (!call_test_func(func, false)) 393 + return -EINVAL; 394 + if (jprobe_func_called == test_func_instance) { 395 + pr_err("FAIL: probe called after unregistering\n"); 396 + return -EINVAL; 397 + } 398 + 399 + return 0; 400 + } 401 + 402 + static int __kprobes 403 + kretprobe_handler(struct kretprobe_instance *ri, struct pt_regs *regs) 404 + { 405 + kretprobe_handler_called = test_func_instance; 406 + if (regs_return_value(regs) == FUNC_ARG1 + FUNC_ARG2) 407 + test_regs_ok = true; 408 + return 0; 409 + } 410 + 411 + static struct kretprobe the_kretprobe = { 412 + .handler = kretprobe_handler, 413 + }; 414 + 415 + static int test_kretprobe(long (*func)(long, long)) 416 + { 417 + int ret; 418 + 419 + the_kretprobe.kp.addr = (kprobe_opcode_t *)func; 420 + ret = register_kretprobe(&the_kretprobe); 421 + if (ret < 0) { 422 + pr_err("FAIL: register_kretprobe failed with %d\n", ret); 423 + return 
ret; 424 + } 425 + 426 + ret = call_test_func(func, true); 427 + 428 + unregister_kretprobe(&the_kretprobe); 429 + the_kretprobe.kp.flags = 0; /* Clear disable flag to allow reuse */ 430 + 431 + if (!ret) 432 + return -EINVAL; 433 + if (kretprobe_handler_called != test_func_instance) { 434 + pr_err("FAIL: kretprobe handler not called\n"); 435 + return -EINVAL; 436 + } 437 + if (!call_test_func(func, false)) 438 + return -EINVAL; 439 + if (kretprobe_handler_called == test_func_instance) { 440 + pr_err("FAIL: kretprobe called after unregistering\n"); 441 + return -EINVAL; 442 + } 443 + 444 + return 0; 445 + } 446 + 447 + static int run_api_tests(long (*func)(long, long)) 448 + { 449 + int ret; 450 + 451 + pr_info(" kprobe\n"); 452 + ret = test_kprobe(func); 453 + if (ret < 0) 454 + return ret; 455 + 456 + pr_info(" jprobe\n"); 457 + ret = test_jprobe(func); 458 + if (ret < 0) 459 + return ret; 460 + 461 + pr_info(" kretprobe\n"); 462 + ret = test_kretprobe(func); 463 + if (ret < 0) 464 + return ret; 465 + 466 + return 0; 467 + } 468 + 469 + 470 + /* 471 + * Benchmarking 472 + */ 473 + 474 + #if BENCHMARKING 475 + 476 + static void __naked benchmark_nop(void) 477 + { 478 + __asm__ __volatile__ ( 479 + "nop \n\t" 480 + "bx lr" 481 + ); 482 + } 483 + 484 + #ifdef CONFIG_THUMB2_KERNEL 485 + #define wide ".w" 486 + #else 487 + #define wide 488 + #endif 489 + 490 + static void __naked benchmark_pushpop1(void) 491 + { 492 + __asm__ __volatile__ ( 493 + "stmdb"wide" sp!, {r3-r11,lr} \n\t" 494 + "ldmia"wide" sp!, {r3-r11,pc}" 495 + ); 496 + } 497 + 498 + static void __naked benchmark_pushpop2(void) 499 + { 500 + __asm__ __volatile__ ( 501 + "stmdb"wide" sp!, {r0-r8,lr} \n\t" 502 + "ldmia"wide" sp!, {r0-r8,pc}" 503 + ); 504 + } 505 + 506 + static void __naked benchmark_pushpop3(void) 507 + { 508 + __asm__ __volatile__ ( 509 + "stmdb"wide" sp!, {r4,lr} \n\t" 510 + "ldmia"wide" sp!, {r4,pc}" 511 + ); 512 + } 513 + 514 + static void __naked benchmark_pushpop4(void) 515 + { 516 + 
__asm__ __volatile__ ( 517 + "stmdb"wide" sp!, {r0,lr} \n\t" 518 + "ldmia"wide" sp!, {r0,pc}" 519 + ); 520 + } 521 + 522 + 523 + #ifdef CONFIG_THUMB2_KERNEL 524 + 525 + static void __naked benchmark_pushpop_thumb(void) 526 + { 527 + __asm__ __volatile__ ( 528 + "push.n {r0-r7,lr} \n\t" 529 + "pop.n {r0-r7,pc}" 530 + ); 531 + } 532 + 533 + #endif 534 + 535 + static int __kprobes 536 + benchmark_pre_handler(struct kprobe *p, struct pt_regs *regs) 537 + { 538 + return 0; 539 + } 540 + 541 + static int benchmark(void(*fn)(void)) 542 + { 543 + unsigned n, i, t, t0; 544 + 545 + for (n = 1000; ; n *= 2) { 546 + t0 = sched_clock(); 547 + for (i = n; i > 0; --i) 548 + fn(); 549 + t = sched_clock() - t0; 550 + if (t >= 250000000) 551 + break; /* Stop once we took more than 0.25 seconds */ 552 + } 553 + return t / n; /* Time for one iteration in nanoseconds */ 554 + }; 555 + 556 + static int kprobe_benchmark(void(*fn)(void), unsigned offset) 557 + { 558 + struct kprobe k = { 559 + .addr = (kprobe_opcode_t *)((uintptr_t)fn + offset), 560 + .pre_handler = benchmark_pre_handler, 561 + }; 562 + 563 + int ret = register_kprobe(&k); 564 + if (ret < 0) { 565 + pr_err("FAIL: register_kprobe failed with %d\n", ret); 566 + return ret; 567 + } 568 + 569 + ret = benchmark(fn); 570 + 571 + unregister_kprobe(&k); 572 + return ret; 573 + }; 574 + 575 + struct benchmarks { 576 + void (*fn)(void); 577 + unsigned offset; 578 + const char *title; 579 + }; 580 + 581 + static int run_benchmarks(void) 582 + { 583 + int ret; 584 + struct benchmarks list[] = { 585 + {&benchmark_nop, 0, "nop"}, 586 + /* 587 + * benchmark_pushpop{1,3} will have the optimised 588 + * instruction emulation, whilst benchmark_pushpop{2,4} will 589 + * be the equivalent unoptimised instructions. 
590 + */ 591 + {&benchmark_pushpop1, 0, "stmdb sp!, {r3-r11,lr}"}, 592 + {&benchmark_pushpop1, 4, "ldmia sp!, {r3-r11,pc}"}, 593 + {&benchmark_pushpop2, 0, "stmdb sp!, {r0-r8,lr}"}, 594 + {&benchmark_pushpop2, 4, "ldmia sp!, {r0-r8,pc}"}, 595 + {&benchmark_pushpop3, 0, "stmdb sp!, {r4,lr}"}, 596 + {&benchmark_pushpop3, 4, "ldmia sp!, {r4,pc}"}, 597 + {&benchmark_pushpop4, 0, "stmdb sp!, {r0,lr}"}, 598 + {&benchmark_pushpop4, 4, "ldmia sp!, {r0,pc}"}, 599 + #ifdef CONFIG_THUMB2_KERNEL 600 + {&benchmark_pushpop_thumb, 0, "push.n {r0-r7,lr}"}, 601 + {&benchmark_pushpop_thumb, 2, "pop.n {r0-r7,pc}"}, 602 + #endif 603 + {0} 604 + }; 605 + 606 + struct benchmarks *b; 607 + for (b = list; b->fn; ++b) { 608 + ret = kprobe_benchmark(b->fn, b->offset); 609 + if (ret < 0) 610 + return ret; 611 + pr_info(" %dns for kprobe %s\n", ret, b->title); 612 + } 613 + 614 + pr_info("\n"); 615 + return 0; 616 + } 617 + 618 + #endif /* BENCHMARKING */ 619 + 620 + 621 + /* 622 + * Decoding table self-consistency tests 623 + */ 624 + 625 + static const int decode_struct_sizes[NUM_DECODE_TYPES] = { 626 + [DECODE_TYPE_TABLE] = sizeof(struct decode_table), 627 + [DECODE_TYPE_CUSTOM] = sizeof(struct decode_custom), 628 + [DECODE_TYPE_SIMULATE] = sizeof(struct decode_simulate), 629 + [DECODE_TYPE_EMULATE] = sizeof(struct decode_emulate), 630 + [DECODE_TYPE_OR] = sizeof(struct decode_or), 631 + [DECODE_TYPE_REJECT] = sizeof(struct decode_reject) 632 + }; 633 + 634 + static int table_iter(const union decode_item *table, 635 + int (*fn)(const struct decode_header *, void *), 636 + void *args) 637 + { 638 + const struct decode_header *h = (struct decode_header *)table; 639 + int result; 640 + 641 + for (;;) { 642 + enum decode_type type = h->type_regs.bits & DECODE_TYPE_MASK; 643 + 644 + if (type == DECODE_TYPE_END) 645 + return 0; 646 + 647 + result = fn(h, args); 648 + if (result) 649 + return result; 650 + 651 + h = (struct decode_header *) 652 + ((uintptr_t)h + decode_struct_sizes[type]); 653 + 
654 + } 655 + } 656 + 657 + static int table_test_fail(const struct decode_header *h, const char* message) 658 + { 659 + 660 + pr_err("FAIL: kprobes test failure \"%s\" (mask %08x, value %08x)\n", 661 + message, h->mask.bits, h->value.bits); 662 + return -EINVAL; 663 + } 664 + 665 + struct table_test_args { 666 + const union decode_item *root_table; 667 + u32 parent_mask; 668 + u32 parent_value; 669 + }; 670 + 671 + static int table_test_fn(const struct decode_header *h, void *args) 672 + { 673 + struct table_test_args *a = (struct table_test_args *)args; 674 + enum decode_type type = h->type_regs.bits & DECODE_TYPE_MASK; 675 + 676 + if (h->value.bits & ~h->mask.bits) 677 + return table_test_fail(h, "Match value has bits not in mask"); 678 + 679 + if ((h->mask.bits & a->parent_mask) != a->parent_mask) 680 + return table_test_fail(h, "Mask has bits not in parent mask"); 681 + 682 + if ((h->value.bits ^ a->parent_value) & a->parent_mask) 683 + return table_test_fail(h, "Value is inconsistent with parent"); 684 + 685 + if (type == DECODE_TYPE_TABLE) { 686 + struct decode_table *d = (struct decode_table *)h; 687 + struct table_test_args args2 = *a; 688 + args2.parent_mask = h->mask.bits; 689 + args2.parent_value = h->value.bits; 690 + return table_iter(d->table.table, table_test_fn, &args2); 691 + } 692 + 693 + return 0; 694 + } 695 + 696 + static int table_test(const union decode_item *table) 697 + { 698 + struct table_test_args args = { 699 + .root_table = table, 700 + .parent_mask = 0, 701 + .parent_value = 0 702 + }; 703 + return table_iter(args.root_table, table_test_fn, &args); 704 + } 705 + 706 + 707 + /* 708 + * Decoding table test coverage analysis 709 + * 710 + * coverage_start() builds a coverage_table which contains a list of 711 + * coverage_entry's to match each entry in the specified kprobes instruction 712 + * decoding table. 713 + * 714 + * When test cases are run, coverage_add() is called to process each case. 
715 + * This looks up the corresponding entry in the coverage_table and sets it as 716 + * being matched, as well as clearing the regs flag appropriate for the test. 717 + * 718 + * After all test cases have been run, coverage_end() is called to check that 719 + * all entries in coverage_table have been matched and that all regs flags are 720 + * cleared. I.e. that all possible combinations of instructions described by 721 + * the kprobes decoding tables have had a test case executed for them. 722 + */ 723 + 724 + bool coverage_fail; 725 + 726 + #define MAX_COVERAGE_ENTRIES 256 727 + 728 + struct coverage_entry { 729 + const struct decode_header *header; 730 + unsigned regs; 731 + unsigned nesting; 732 + char matched; 733 + }; 734 + 735 + struct coverage_table { 736 + struct coverage_entry *base; 737 + unsigned num_entries; 738 + unsigned nesting; 739 + }; 740 + 741 + struct coverage_table coverage; 742 + 743 + #define COVERAGE_ANY_REG (1<<0) 744 + #define COVERAGE_SP (1<<1) 745 + #define COVERAGE_PC (1<<2) 746 + #define COVERAGE_PCWB (1<<3) 747 + 748 + static const char coverage_register_lookup[16] = { 749 + [REG_TYPE_ANY] = COVERAGE_ANY_REG | COVERAGE_SP | COVERAGE_PC, 750 + [REG_TYPE_SAMEAS16] = COVERAGE_ANY_REG, 751 + [REG_TYPE_SP] = COVERAGE_SP, 752 + [REG_TYPE_PC] = COVERAGE_PC, 753 + [REG_TYPE_NOSP] = COVERAGE_ANY_REG | COVERAGE_SP, 754 + [REG_TYPE_NOSPPC] = COVERAGE_ANY_REG | COVERAGE_SP | COVERAGE_PC, 755 + [REG_TYPE_NOPC] = COVERAGE_ANY_REG | COVERAGE_PC, 756 + [REG_TYPE_NOPCWB] = COVERAGE_ANY_REG | COVERAGE_PC | COVERAGE_PCWB, 757 + [REG_TYPE_NOPCX] = COVERAGE_ANY_REG, 758 + [REG_TYPE_NOSPPCX] = COVERAGE_ANY_REG | COVERAGE_SP, 759 + }; 760 + 761 + unsigned coverage_start_registers(const struct decode_header *h) 762 + { 763 + unsigned regs = 0; 764 + int i; 765 + for (i = 0; i < 20; i += 4) { 766 + int r = (h->type_regs.bits >> (DECODE_TYPE_BITS + i)) & 0xf; 767 + regs |= coverage_register_lookup[r] << i; 768 + } 769 + return regs; 770 + } 771 + 772 + 
static int coverage_start_fn(const struct decode_header *h, void *args) 773 + { 774 + struct coverage_table *coverage = (struct coverage_table *)args; 775 + enum decode_type type = h->type_regs.bits & DECODE_TYPE_MASK; 776 + struct coverage_entry *entry = coverage->base + coverage->num_entries; 777 + 778 + if (coverage->num_entries == MAX_COVERAGE_ENTRIES - 1) { 779 + pr_err("FAIL: Out of space for test coverage data"); 780 + return -ENOMEM; 781 + } 782 + 783 + ++coverage->num_entries; 784 + 785 + entry->header = h; 786 + entry->regs = coverage_start_registers(h); 787 + entry->nesting = coverage->nesting; 788 + entry->matched = false; 789 + 790 + if (type == DECODE_TYPE_TABLE) { 791 + struct decode_table *d = (struct decode_table *)h; 792 + int ret; 793 + ++coverage->nesting; 794 + ret = table_iter(d->table.table, coverage_start_fn, coverage); 795 + --coverage->nesting; 796 + return ret; 797 + } 798 + 799 + return 0; 800 + } 801 + 802 + static int coverage_start(const union decode_item *table) 803 + { 804 + coverage.base = kmalloc(MAX_COVERAGE_ENTRIES * 805 + sizeof(struct coverage_entry), GFP_KERNEL); 806 + coverage.num_entries = 0; 807 + coverage.nesting = 0; 808 + return table_iter(table, coverage_start_fn, &coverage); 809 + } 810 + 811 + static void 812 + coverage_add_registers(struct coverage_entry *entry, kprobe_opcode_t insn) 813 + { 814 + int regs = entry->header->type_regs.bits >> DECODE_TYPE_BITS; 815 + int i; 816 + for (i = 0; i < 20; i += 4) { 817 + enum decode_reg_type reg_type = (regs >> i) & 0xf; 818 + int reg = (insn >> i) & 0xf; 819 + int flag; 820 + 821 + if (!reg_type) 822 + continue; 823 + 824 + if (reg == 13) 825 + flag = COVERAGE_SP; 826 + else if (reg == 15) 827 + flag = COVERAGE_PC; 828 + else 829 + flag = COVERAGE_ANY_REG; 830 + entry->regs &= ~(flag << i); 831 + 832 + switch (reg_type) { 833 + 834 + case REG_TYPE_NONE: 835 + case REG_TYPE_ANY: 836 + case REG_TYPE_SAMEAS16: 837 + break; 838 + 839 + case REG_TYPE_SP: 840 + if (reg != 13) 841 
+ return; 842 + break; 843 + 844 + case REG_TYPE_PC: 845 + if (reg != 15) 846 + return; 847 + break; 848 + 849 + case REG_TYPE_NOSP: 850 + if (reg == 13) 851 + return; 852 + break; 853 + 854 + case REG_TYPE_NOSPPC: 855 + case REG_TYPE_NOSPPCX: 856 + if (reg == 13 || reg == 15) 857 + return; 858 + break; 859 + 860 + case REG_TYPE_NOPCWB: 861 + if (!is_writeback(insn)) 862 + break; 863 + if (reg == 15) { 864 + entry->regs &= ~(COVERAGE_PCWB << i); 865 + return; 866 + } 867 + break; 868 + 869 + case REG_TYPE_NOPC: 870 + case REG_TYPE_NOPCX: 871 + if (reg == 15) 872 + return; 873 + break; 874 + } 875 + 876 + } 877 + } 878 + 879 + static void coverage_add(kprobe_opcode_t insn) 880 + { 881 + struct coverage_entry *entry = coverage.base; 882 + struct coverage_entry *end = coverage.base + coverage.num_entries; 883 + bool matched = false; 884 + unsigned nesting = 0; 885 + 886 + for (; entry < end; ++entry) { 887 + const struct decode_header *h = entry->header; 888 + enum decode_type type = h->type_regs.bits & DECODE_TYPE_MASK; 889 + 890 + if (entry->nesting > nesting) 891 + continue; /* Skip sub-table we didn't match */ 892 + 893 + if (entry->nesting < nesting) 894 + break; /* End of sub-table we were scanning */ 895 + 896 + if (!matched) { 897 + if ((insn & h->mask.bits) != h->value.bits) 898 + continue; 899 + entry->matched = true; 900 + } 901 + 902 + switch (type) { 903 + 904 + case DECODE_TYPE_TABLE: 905 + ++nesting; 906 + break; 907 + 908 + case DECODE_TYPE_CUSTOM: 909 + case DECODE_TYPE_SIMULATE: 910 + case DECODE_TYPE_EMULATE: 911 + coverage_add_registers(entry, insn); 912 + return; 913 + 914 + case DECODE_TYPE_OR: 915 + matched = true; 916 + break; 917 + 918 + case DECODE_TYPE_REJECT: 919 + default: 920 + return; 921 + } 922 + 923 + } 924 + } 925 + 926 + static void coverage_end(void) 927 + { 928 + struct coverage_entry *entry = coverage.base; 929 + struct coverage_entry *end = coverage.base + coverage.num_entries; 930 + 931 + for (; entry < end; ++entry) { 932 + 
u32 mask = entry->header->mask.bits; 933 + u32 value = entry->header->value.bits; 934 + 935 + if (entry->regs) { 936 + pr_err("FAIL: Register test coverage missing for %08x %08x (%05x)\n", 937 + mask, value, entry->regs); 938 + coverage_fail = true; 939 + } 940 + if (!entry->matched) { 941 + pr_err("FAIL: Test coverage entry missing for %08x %08x\n", 942 + mask, value); 943 + coverage_fail = true; 944 + } 945 + } 946 + 947 + kfree(coverage.base); 948 + } 949 + 950 + 951 + /* 952 + * Framework for instruction set test cases 953 + */ 954 + 955 + void __naked __kprobes_test_case_start(void) 956 + { 957 + __asm__ __volatile__ ( 958 + "stmdb sp!, {r4-r11} \n\t" 959 + "sub sp, sp, #"__stringify(TEST_MEMORY_SIZE)"\n\t" 960 + "bic r0, lr, #1 @ r0 = inline title string \n\t" 961 + "mov r1, sp \n\t" 962 + "bl kprobes_test_case_start \n\t" 963 + "bx r0 \n\t" 964 + ); 965 + } 966 + 967 + #ifndef CONFIG_THUMB2_KERNEL 968 + 969 + void __naked __kprobes_test_case_end_32(void) 970 + { 971 + __asm__ __volatile__ ( 972 + "mov r4, lr \n\t" 973 + "bl kprobes_test_case_end \n\t" 974 + "cmp r0, #0 \n\t" 975 + "movne pc, r0 \n\t" 976 + "mov r0, r4 \n\t" 977 + "add sp, sp, #"__stringify(TEST_MEMORY_SIZE)"\n\t" 978 + "ldmia sp!, {r4-r11} \n\t" 979 + "mov pc, r0 \n\t" 980 + ); 981 + } 982 + 983 + #else /* CONFIG_THUMB2_KERNEL */ 984 + 985 + void __naked __kprobes_test_case_end_16(void) 986 + { 987 + __asm__ __volatile__ ( 988 + "mov r4, lr \n\t" 989 + "bl kprobes_test_case_end \n\t" 990 + "cmp r0, #0 \n\t" 991 + "bxne r0 \n\t" 992 + "mov r0, r4 \n\t" 993 + "add sp, sp, #"__stringify(TEST_MEMORY_SIZE)"\n\t" 994 + "ldmia sp!, {r4-r11} \n\t" 995 + "bx r0 \n\t" 996 + ); 997 + } 998 + 999 + void __naked __kprobes_test_case_end_32(void) 1000 + { 1001 + __asm__ __volatile__ ( 1002 + ".arm \n\t" 1003 + "orr lr, lr, #1 @ will return to Thumb code \n\t" 1004 + "ldr pc, 1f \n\t" 1005 + "1: \n\t" 1006 + ".word __kprobes_test_case_end_16 \n\t" 1007 + ); 1008 + } 1009 + 1010 + #endif 1011 + 1012 + 1013 + 
int kprobe_test_flags; 1014 + int kprobe_test_cc_position; 1015 + 1016 + static int test_try_count; 1017 + static int test_pass_count; 1018 + static int test_fail_count; 1019 + 1020 + static struct pt_regs initial_regs; 1021 + static struct pt_regs expected_regs; 1022 + static struct pt_regs result_regs; 1023 + 1024 + static u32 expected_memory[TEST_MEMORY_SIZE/sizeof(u32)]; 1025 + 1026 + static const char *current_title; 1027 + static struct test_arg *current_args; 1028 + static u32 *current_stack; 1029 + static uintptr_t current_branch_target; 1030 + 1031 + static uintptr_t current_code_start; 1032 + static kprobe_opcode_t current_instruction; 1033 + 1034 + 1035 + #define TEST_CASE_PASSED -1 1036 + #define TEST_CASE_FAILED -2 1037 + 1038 + static int test_case_run_count; 1039 + static bool test_case_is_thumb; 1040 + static int test_instance; 1041 + 1042 + /* 1043 + * We ignore the state of the imprecise abort disable flag (CPSR.A) because this 1044 + * can change randomly as the kernel doesn't take care to preserve or initialise 1045 + * this across context switches. Also, with Security Extensions, the flag may 1046 + * not be under control of the kernel; for this reason we ignore the state of 1047 + * the FIQ disable flag CPSR.F as well.
1048 + */ 1049 + #define PSR_IGNORE_BITS (PSR_A_BIT | PSR_F_BIT) 1050 + 1051 + static unsigned long test_check_cc(int cc, unsigned long cpsr) 1052 + { 1053 + unsigned long temp; 1054 + 1055 + switch (cc) { 1056 + case 0x0: /* eq */ 1057 + return cpsr & PSR_Z_BIT; 1058 + 1059 + case 0x1: /* ne */ 1060 + return (~cpsr) & PSR_Z_BIT; 1061 + 1062 + case 0x2: /* cs */ 1063 + return cpsr & PSR_C_BIT; 1064 + 1065 + case 0x3: /* cc */ 1066 + return (~cpsr) & PSR_C_BIT; 1067 + 1068 + case 0x4: /* mi */ 1069 + return cpsr & PSR_N_BIT; 1070 + 1071 + case 0x5: /* pl */ 1072 + return (~cpsr) & PSR_N_BIT; 1073 + 1074 + case 0x6: /* vs */ 1075 + return cpsr & PSR_V_BIT; 1076 + 1077 + case 0x7: /* vc */ 1078 + return (~cpsr) & PSR_V_BIT; 1079 + 1080 + case 0x8: /* hi */ 1081 + cpsr &= ~(cpsr >> 1); /* PSR_C_BIT &= ~PSR_Z_BIT */ 1082 + return cpsr & PSR_C_BIT; 1083 + 1084 + case 0x9: /* ls */ 1085 + cpsr &= ~(cpsr >> 1); /* PSR_C_BIT &= ~PSR_Z_BIT */ 1086 + return (~cpsr) & PSR_C_BIT; 1087 + 1088 + case 0xa: /* ge */ 1089 + cpsr ^= (cpsr << 3); /* PSR_N_BIT ^= PSR_V_BIT */ 1090 + return (~cpsr) & PSR_N_BIT; 1091 + 1092 + case 0xb: /* lt */ 1093 + cpsr ^= (cpsr << 3); /* PSR_N_BIT ^= PSR_V_BIT */ 1094 + return cpsr & PSR_N_BIT; 1095 + 1096 + case 0xc: /* gt */ 1097 + temp = cpsr ^ (cpsr << 3); /* PSR_N_BIT ^= PSR_V_BIT */ 1098 + temp |= (cpsr << 1); /* PSR_N_BIT |= PSR_Z_BIT */ 1099 + return (~temp) & PSR_N_BIT; 1100 + 1101 + case 0xd: /* le */ 1102 + temp = cpsr ^ (cpsr << 3); /* PSR_N_BIT ^= PSR_V_BIT */ 1103 + temp |= (cpsr << 1); /* PSR_N_BIT |= PSR_Z_BIT */ 1104 + return temp & PSR_N_BIT; 1105 + 1106 + case 0xe: /* al */ 1107 + case 0xf: /* unconditional */ 1108 + return true; 1109 + } 1110 + BUG(); 1111 + return false; 1112 + } 1113 + 1114 + static int is_last_scenario; 1115 + static int probe_should_run; /* 0 = no, 1 = yes, -1 = unknown */ 1116 + static int memory_needs_checking; 1117 + 1118 + static unsigned long test_context_cpsr(int scenario) 1119 + { 1120 + unsigned long 
cpsr; 1121 + 1122 + probe_should_run = 1; 1123 + 1124 + /* Default case is that we cycle through 16 combinations of flags */ 1125 + cpsr = (scenario & 0xf) << 28; /* N,Z,C,V flags */ 1126 + cpsr |= (scenario & 0xf) << 16; /* GE flags */ 1127 + cpsr |= (scenario & 0x1) << 27; /* Toggle Q flag */ 1128 + 1129 + if (!test_case_is_thumb) { 1130 + /* Testing ARM code */ 1131 + probe_should_run = test_check_cc(current_instruction >> 28, cpsr) != 0; 1132 + if (scenario == 15) 1133 + is_last_scenario = true; 1134 + 1135 + } else if (kprobe_test_flags & TEST_FLAG_NO_ITBLOCK) { 1136 + /* Testing Thumb code without setting ITSTATE */ 1137 + if (kprobe_test_cc_position) { 1138 + int cc = (current_instruction >> kprobe_test_cc_position) & 0xf; 1139 + probe_should_run = test_check_cc(cc, cpsr) != 0; 1140 + } 1141 + 1142 + if (scenario == 15) 1143 + is_last_scenario = true; 1144 + 1145 + } else if (kprobe_test_flags & TEST_FLAG_FULL_ITBLOCK) { 1146 + /* Testing Thumb code with all combinations of ITSTATE */ 1147 + unsigned x = (scenario >> 4); 1148 + unsigned cond_base = x % 7; /* ITSTATE<7:5> */ 1149 + unsigned mask = x / 7 + 2; /* ITSTATE<4:0>, bits reversed */ 1150 + 1151 + if (mask > 0x1f) { 1152 + /* Finish by testing state from instruction 'itt al' */ 1153 + cond_base = 7; 1154 + mask = 0x4; 1155 + if ((scenario & 0xf) == 0xf) 1156 + is_last_scenario = true; 1157 + } 1158 + 1159 + cpsr |= cond_base << 13; /* ITSTATE<7:5> */ 1160 + cpsr |= (mask & 0x1) << 12; /* ITSTATE<4> */ 1161 + cpsr |= (mask & 0x2) << 10; /* ITSTATE<3> */ 1162 + cpsr |= (mask & 0x4) << 8; /* ITSTATE<2> */ 1163 + cpsr |= (mask & 0x8) << 23; /* ITSTATE<1> */ 1164 + cpsr |= (mask & 0x10) << 21; /* ITSTATE<0> */ 1165 + 1166 + probe_should_run = test_check_cc((cpsr >> 12) & 0xf, cpsr) != 0; 1167 + 1168 + } else { 1169 + /* Testing Thumb code with several combinations of ITSTATE */ 1170 + switch (scenario) { 1171 + case 16: /* Clear NZCV flags and 'it eq' state (false as Z=0) */ 1172 + cpsr = 0x00000800; 1173 
+ probe_should_run = 0; 1174 + break; 1175 + case 17: /* Set NZCV flags and 'it vc' state (false as V=1) */ 1176 + cpsr = 0xf0007800; 1177 + probe_should_run = 0; 1178 + break; 1179 + case 18: /* Clear NZCV flags and 'it ls' state (true as C=0) */ 1180 + cpsr = 0x00009800; 1181 + break; 1182 + case 19: /* Set NZCV flags and 'it cs' state (true as C=1) */ 1183 + cpsr = 0xf0002800; 1184 + is_last_scenario = true; 1185 + break; 1186 + } 1187 + } 1188 + 1189 + return cpsr; 1190 + } 1191 + 1192 + static void setup_test_context(struct pt_regs *regs) 1193 + { 1194 + int scenario = test_case_run_count>>1; 1195 + unsigned long val; 1196 + struct test_arg *args; 1197 + int i; 1198 + 1199 + is_last_scenario = false; 1200 + memory_needs_checking = false; 1201 + 1202 + /* Initialise test memory on stack */ 1203 + val = (scenario & 1) ? VALM : ~VALM; 1204 + for (i = 0; i < TEST_MEMORY_SIZE / sizeof(current_stack[0]); ++i) 1205 + current_stack[i] = val + (i << 8); 1206 + /* Put target of branch on stack for tests which load PC from memory */ 1207 + if (current_branch_target) 1208 + current_stack[15] = current_branch_target; 1209 + /* Put a value for SP on stack for tests which load SP from memory */ 1210 + current_stack[13] = (u32)current_stack + 120; 1211 + 1212 + /* Initialise register values to their default state */ 1213 + val = (scenario & 2) ? 
VALR : ~VALR; 1214 + for (i = 0; i < 13; ++i) 1215 + regs->uregs[i] = val ^ (i << 8); 1216 + regs->ARM_lr = val ^ (14 << 8); 1217 + regs->ARM_cpsr &= ~(APSR_MASK | PSR_IT_MASK); 1218 + regs->ARM_cpsr |= test_context_cpsr(scenario); 1219 + 1220 + /* Perform testcase specific register setup */ 1221 + args = current_args; 1222 + for (; args[0].type != ARG_TYPE_END; ++args) 1223 + switch (args[0].type) { 1224 + case ARG_TYPE_REG: { 1225 + struct test_arg_regptr *arg = 1226 + (struct test_arg_regptr *)args; 1227 + regs->uregs[arg->reg] = arg->val; 1228 + break; 1229 + } 1230 + case ARG_TYPE_PTR: { 1231 + struct test_arg_regptr *arg = 1232 + (struct test_arg_regptr *)args; 1233 + regs->uregs[arg->reg] = 1234 + (unsigned long)current_stack + arg->val; 1235 + memory_needs_checking = true; 1236 + break; 1237 + } 1238 + case ARG_TYPE_MEM: { 1239 + struct test_arg_mem *arg = (struct test_arg_mem *)args; 1240 + current_stack[arg->index] = arg->val; 1241 + break; 1242 + } 1243 + default: 1244 + break; 1245 + } 1246 + } 1247 + 1248 + struct test_probe { 1249 + struct kprobe kprobe; 1250 + bool registered; 1251 + int hit; 1252 + }; 1253 + 1254 + static void unregister_test_probe(struct test_probe *probe) 1255 + { 1256 + if (probe->registered) { 1257 + unregister_kprobe(&probe->kprobe); 1258 + probe->kprobe.flags = 0; /* Clear disable flag to allow reuse */ 1259 + } 1260 + probe->registered = false; 1261 + } 1262 + 1263 + static int register_test_probe(struct test_probe *probe) 1264 + { 1265 + int ret; 1266 + 1267 + if (probe->registered) 1268 + BUG(); 1269 + 1270 + ret = register_kprobe(&probe->kprobe); 1271 + if (ret >= 0) { 1272 + probe->registered = true; 1273 + probe->hit = -1; 1274 + } 1275 + return ret; 1276 + } 1277 + 1278 + static int __kprobes 1279 + test_before_pre_handler(struct kprobe *p, struct pt_regs *regs) 1280 + { 1281 + container_of(p, struct test_probe, kprobe)->hit = test_instance; 1282 + return 0; 1283 + } 1284 + 1285 + static void __kprobes 1286 + 
test_before_post_handler(struct kprobe *p, struct pt_regs *regs, 1287 + unsigned long flags) 1288 + { 1289 + setup_test_context(regs); 1290 + initial_regs = *regs; 1291 + initial_regs.ARM_cpsr &= ~PSR_IGNORE_BITS; 1292 + } 1293 + 1294 + static int __kprobes 1295 + test_case_pre_handler(struct kprobe *p, struct pt_regs *regs) 1296 + { 1297 + container_of(p, struct test_probe, kprobe)->hit = test_instance; 1298 + return 0; 1299 + } 1300 + 1301 + static int __kprobes 1302 + test_after_pre_handler(struct kprobe *p, struct pt_regs *regs) 1303 + { 1304 + if (container_of(p, struct test_probe, kprobe)->hit == test_instance) 1305 + return 0; /* Already run for this test instance */ 1306 + 1307 + result_regs = *regs; 1308 + result_regs.ARM_cpsr &= ~PSR_IGNORE_BITS; 1309 + 1310 + /* Undo any changes done to SP by the test case */ 1311 + regs->ARM_sp = (unsigned long)current_stack; 1312 + 1313 + container_of(p, struct test_probe, kprobe)->hit = test_instance; 1314 + return 0; 1315 + } 1316 + 1317 + static struct test_probe test_before_probe = { 1318 + .kprobe.pre_handler = test_before_pre_handler, 1319 + .kprobe.post_handler = test_before_post_handler, 1320 + }; 1321 + 1322 + static struct test_probe test_case_probe = { 1323 + .kprobe.pre_handler = test_case_pre_handler, 1324 + }; 1325 + 1326 + static struct test_probe test_after_probe = { 1327 + .kprobe.pre_handler = test_after_pre_handler, 1328 + }; 1329 + 1330 + static struct test_probe test_after2_probe = { 1331 + .kprobe.pre_handler = test_after_pre_handler, 1332 + }; 1333 + 1334 + static void test_case_cleanup(void) 1335 + { 1336 + unregister_test_probe(&test_before_probe); 1337 + unregister_test_probe(&test_case_probe); 1338 + unregister_test_probe(&test_after_probe); 1339 + unregister_test_probe(&test_after2_probe); 1340 + } 1341 + 1342 + static void print_registers(struct pt_regs *regs) 1343 + { 1344 + pr_err("r0 %08lx | r1 %08lx | r2 %08lx | r3 %08lx\n", 1345 + regs->ARM_r0, regs->ARM_r1, regs->ARM_r2, 
regs->ARM_r3); 1346 + pr_err("r4 %08lx | r5 %08lx | r6 %08lx | r7 %08lx\n", 1347 + regs->ARM_r4, regs->ARM_r5, regs->ARM_r6, regs->ARM_r7); 1348 + pr_err("r8 %08lx | r9 %08lx | r10 %08lx | r11 %08lx\n", 1349 + regs->ARM_r8, regs->ARM_r9, regs->ARM_r10, regs->ARM_fp); 1350 + pr_err("r12 %08lx | sp %08lx | lr %08lx | pc %08lx\n", 1351 + regs->ARM_ip, regs->ARM_sp, regs->ARM_lr, regs->ARM_pc); 1352 + pr_err("cpsr %08lx\n", regs->ARM_cpsr); 1353 + } 1354 + 1355 + static void print_memory(u32 *mem, size_t size) 1356 + { 1357 + int i; 1358 + for (i = 0; i < size / sizeof(u32); i += 4) 1359 + pr_err("%08x %08x %08x %08x\n", mem[i], mem[i+1], 1360 + mem[i+2], mem[i+3]); 1361 + } 1362 + 1363 + static size_t expected_memory_size(u32 *sp) 1364 + { 1365 + size_t size = sizeof(expected_memory); 1366 + int offset = (uintptr_t)sp - (uintptr_t)current_stack; 1367 + if (offset > 0) 1368 + size -= offset; 1369 + return size; 1370 + } 1371 + 1372 + static void test_case_failed(const char *message) 1373 + { 1374 + test_case_cleanup(); 1375 + 1376 + pr_err("FAIL: %s\n", message); 1377 + pr_err("FAIL: Test %s\n", current_title); 1378 + pr_err("FAIL: Scenario %d\n", test_case_run_count >> 1); 1379 + } 1380 + 1381 + static unsigned long next_instruction(unsigned long pc) 1382 + { 1383 + #ifdef CONFIG_THUMB2_KERNEL 1384 + if ((pc & 1) && !is_wide_instruction(*(u16 *)(pc - 1))) 1385 + return pc + 2; 1386 + else 1387 + #endif 1388 + return pc + 4; 1389 + } 1390 + 1391 + static uintptr_t __used kprobes_test_case_start(const char *title, void *stack) 1392 + { 1393 + struct test_arg *args; 1394 + struct test_arg_end *end_arg; 1395 + unsigned long test_code; 1396 + 1397 + args = (struct test_arg *)PTR_ALIGN(title + strlen(title) + 1, 4); 1398 + 1399 + current_title = title; 1400 + current_args = args; 1401 + current_stack = stack; 1402 + 1403 + ++test_try_count; 1404 + 1405 + while (args->type != ARG_TYPE_END) 1406 + ++args; 1407 + end_arg = (struct test_arg_end *)args; 1408 + 1409 + test_code = 
(unsigned long)(args + 1); /* Code starts after args */ 1410 + 1411 + test_case_is_thumb = end_arg->flags & ARG_FLAG_THUMB; 1412 + if (test_case_is_thumb) 1413 + test_code |= 1; 1414 + 1415 + current_code_start = test_code; 1416 + 1417 + current_branch_target = 0; 1418 + if (end_arg->branch_offset != end_arg->end_offset) 1419 + current_branch_target = test_code + end_arg->branch_offset; 1420 + 1421 + test_code += end_arg->code_offset; 1422 + test_before_probe.kprobe.addr = (kprobe_opcode_t *)test_code; 1423 + 1424 + test_code = next_instruction(test_code); 1425 + test_case_probe.kprobe.addr = (kprobe_opcode_t *)test_code; 1426 + 1427 + if (test_case_is_thumb) { 1428 + u16 *p = (u16 *)(test_code & ~1); 1429 + current_instruction = p[0]; 1430 + if (is_wide_instruction(current_instruction)) { 1431 + current_instruction <<= 16; 1432 + current_instruction |= p[1]; 1433 + } 1434 + } else { 1435 + current_instruction = *(u32 *)test_code; 1436 + } 1437 + 1438 + if (current_title[0] == '.') 1439 + verbose("%s\n", current_title); 1440 + else 1441 + verbose("%s\t@ %0*x\n", current_title, 1442 + test_case_is_thumb ? 
4 : 8, 1443 + current_instruction); 1444 + 1445 + test_code = next_instruction(test_code); 1446 + test_after_probe.kprobe.addr = (kprobe_opcode_t *)test_code; 1447 + 1448 + if (kprobe_test_flags & TEST_FLAG_NARROW_INSTR) { 1449 + if (!test_case_is_thumb || 1450 + is_wide_instruction(current_instruction)) { 1451 + test_case_failed("expected 16-bit instruction"); 1452 + goto fail; 1453 + } 1454 + } else { 1455 + if (test_case_is_thumb && 1456 + !is_wide_instruction(current_instruction)) { 1457 + test_case_failed("expected 32-bit instruction"); 1458 + goto fail; 1459 + } 1460 + } 1461 + 1462 + coverage_add(current_instruction); 1463 + 1464 + if (end_arg->flags & ARG_FLAG_UNSUPPORTED) { 1465 + if (register_test_probe(&test_case_probe) < 0) 1466 + goto pass; 1467 + test_case_failed("registered probe for unsupported instruction"); 1468 + goto fail; 1469 + } 1470 + 1471 + if (end_arg->flags & ARG_FLAG_SUPPORTED) { 1472 + if (register_test_probe(&test_case_probe) >= 0) 1473 + goto pass; 1474 + test_case_failed("couldn't register probe for supported instruction"); 1475 + goto fail; 1476 + } 1477 + 1478 + if (register_test_probe(&test_before_probe) < 0) { 1479 + test_case_failed("register test_before_probe failed"); 1480 + goto fail; 1481 + } 1482 + if (register_test_probe(&test_after_probe) < 0) { 1483 + test_case_failed("register test_after_probe failed"); 1484 + goto fail; 1485 + } 1486 + if (current_branch_target) { 1487 + test_after2_probe.kprobe.addr = 1488 + (kprobe_opcode_t *)current_branch_target; 1489 + if (register_test_probe(&test_after2_probe) < 0) { 1490 + test_case_failed("register test_after2_probe failed"); 1491 + goto fail; 1492 + } 1493 + } 1494 + 1495 + /* Start first run of test case */ 1496 + test_case_run_count = 0; 1497 + ++test_instance; 1498 + return current_code_start; 1499 + pass: 1500 + test_case_run_count = TEST_CASE_PASSED; 1501 + return (uintptr_t)test_after_probe.kprobe.addr; 1502 + fail: 1503 + test_case_run_count = TEST_CASE_FAILED; 1504 + 
return (uintptr_t)test_after_probe.kprobe.addr; 1505 + } 1506 + 1507 + static bool check_test_results(void) 1508 + { 1509 + size_t mem_size = 0; 1510 + u32 *mem = 0; 1511 + 1512 + if (memcmp(&expected_regs, &result_regs, sizeof(expected_regs))) { 1513 + test_case_failed("registers differ"); 1514 + goto fail; 1515 + } 1516 + 1517 + if (memory_needs_checking) { 1518 + mem = (u32 *)result_regs.ARM_sp; 1519 + mem_size = expected_memory_size(mem); 1520 + if (memcmp(expected_memory, mem, mem_size)) { 1521 + test_case_failed("test memory differs"); 1522 + goto fail; 1523 + } 1524 + } 1525 + 1526 + return true; 1527 + 1528 + fail: 1529 + pr_err("initial_regs:\n"); 1530 + print_registers(&initial_regs); 1531 + pr_err("expected_regs:\n"); 1532 + print_registers(&expected_regs); 1533 + pr_err("result_regs:\n"); 1534 + print_registers(&result_regs); 1535 + 1536 + if (mem) { 1537 + pr_err("current_stack=%p\n", current_stack); 1538 + pr_err("expected_memory:\n"); 1539 + print_memory(expected_memory, mem_size); 1540 + pr_err("result_memory:\n"); 1541 + print_memory(mem, mem_size); 1542 + } 1543 + 1544 + return false; 1545 + } 1546 + 1547 + static uintptr_t __used kprobes_test_case_end(void) 1548 + { 1549 + if (test_case_run_count < 0) { 1550 + if (test_case_run_count == TEST_CASE_PASSED) 1551 + /* kprobes_test_case_start did all the needed testing */ 1552 + goto pass; 1553 + else 1554 + /* kprobes_test_case_start failed */ 1555 + goto fail; 1556 + } 1557 + 1558 + if (test_before_probe.hit != test_instance) { 1559 + test_case_failed("test_before_handler not run"); 1560 + goto fail; 1561 + } 1562 + 1563 + if (test_after_probe.hit != test_instance && 1564 + test_after2_probe.hit != test_instance) { 1565 + test_case_failed("test_after_handler not run"); 1566 + goto fail; 1567 + } 1568 + 1569 + /* 1570 + * Even numbered test runs ran without a probe on the test case so 1571 + * we can gather reference results. The subsequent odd numbered run 1572 + * will have the probe inserted. 
1573 + */ 1574 + if ((test_case_run_count & 1) == 0) { 1575 + /* Save results from run without probe */ 1576 + u32 *mem = (u32 *)result_regs.ARM_sp; 1577 + expected_regs = result_regs; 1578 + memcpy(expected_memory, mem, expected_memory_size(mem)); 1579 + 1580 + /* Insert probe onto test case instruction */ 1581 + if (register_test_probe(&test_case_probe) < 0) { 1582 + test_case_failed("register test_case_probe failed"); 1583 + goto fail; 1584 + } 1585 + } else { 1586 + /* Check probe ran as expected */ 1587 + if (probe_should_run == 1) { 1588 + if (test_case_probe.hit != test_instance) { 1589 + test_case_failed("test_case_handler not run"); 1590 + goto fail; 1591 + } 1592 + } else if (probe_should_run == 0) { 1593 + if (test_case_probe.hit == test_instance) { 1594 + test_case_failed("test_case_handler ran"); 1595 + goto fail; 1596 + } 1597 + } 1598 + 1599 + /* Remove probe for any subsequent reference run */ 1600 + unregister_test_probe(&test_case_probe); 1601 + 1602 + if (!check_test_results()) 1603 + goto fail; 1604 + 1605 + if (is_last_scenario) 1606 + goto pass; 1607 + } 1608 + 1609 + /* Do next test run */ 1610 + ++test_case_run_count; 1611 + ++test_instance; 1612 + return current_code_start; 1613 + fail: 1614 + ++test_fail_count; 1615 + goto end; 1616 + pass: 1617 + ++test_pass_count; 1618 + end: 1619 + test_case_cleanup(); 1620 + return 0; 1621 + } 1622 + 1623 + 1624 + /* 1625 + * Top level test functions 1626 + */ 1627 + 1628 + static int run_test_cases(void (*tests)(void), const union decode_item *table) 1629 + { 1630 + int ret; 1631 + 1632 + pr_info(" Check decoding tables\n"); 1633 + ret = table_test(table); 1634 + if (ret) 1635 + return ret; 1636 + 1637 + pr_info(" Run test cases\n"); 1638 + ret = coverage_start(table); 1639 + if (ret) 1640 + return ret; 1641 + 1642 + tests(); 1643 + 1644 + coverage_end(); 1645 + return 0; 1646 + } 1647 + 1648 + 1649 + static int __init run_all_tests(void) 1650 + { 1651 + int ret = 0; 1652 + 1653 + pr_info("Beginning
kprobe tests...\n"); 1654 + 1655 + #ifndef CONFIG_THUMB2_KERNEL 1656 + 1657 + pr_info("Probe ARM code\n"); 1658 + ret = run_api_tests(arm_func); 1659 + if (ret) 1660 + goto out; 1661 + 1662 + pr_info("ARM instruction simulation\n"); 1663 + ret = run_test_cases(kprobe_arm_test_cases, kprobe_decode_arm_table); 1664 + if (ret) 1665 + goto out; 1666 + 1667 + #else /* CONFIG_THUMB2_KERNEL */ 1668 + 1669 + pr_info("Probe 16-bit Thumb code\n"); 1670 + ret = run_api_tests(thumb16_func); 1671 + if (ret) 1672 + goto out; 1673 + 1674 + pr_info("Probe 32-bit Thumb code, even halfword\n"); 1675 + ret = run_api_tests(thumb32even_func); 1676 + if (ret) 1677 + goto out; 1678 + 1679 + pr_info("Probe 32-bit Thumb code, odd halfword\n"); 1680 + ret = run_api_tests(thumb32odd_func); 1681 + if (ret) 1682 + goto out; 1683 + 1684 + pr_info("16-bit Thumb instruction simulation\n"); 1685 + ret = run_test_cases(kprobe_thumb16_test_cases, 1686 + kprobe_decode_thumb16_table); 1687 + if (ret) 1688 + goto out; 1689 + 1690 + pr_info("32-bit Thumb instruction simulation\n"); 1691 + ret = run_test_cases(kprobe_thumb32_test_cases, 1692 + kprobe_decode_thumb32_table); 1693 + if (ret) 1694 + goto out; 1695 + #endif 1696 + 1697 + pr_info("Total instruction simulation tests=%d, pass=%d fail=%d\n", 1698 + test_try_count, test_pass_count, test_fail_count); 1699 + if (test_fail_count) { 1700 + ret = -EINVAL; 1701 + goto out; 1702 + } 1703 + 1704 + #if BENCHMARKING 1705 + pr_info("Benchmarks\n"); 1706 + ret = run_benchmarks(); 1707 + if (ret) 1708 + goto out; 1709 + #endif 1710 + 1711 + #if __LINUX_ARM_ARCH__ >= 7 1712 + /* We are able to run all test cases so coverage should be complete */ 1713 + if (coverage_fail) { 1714 + pr_err("FAIL: Test coverage checks failed\n"); 1715 + ret = -EINVAL; 1716 + goto out; 1717 + } 1718 + #endif 1719 + 1720 + out: 1721 + if (ret == 0) 1722 + pr_info("Finished kprobe tests OK\n"); 1723 + else 1724 + pr_err("kprobe tests failed\n"); 1725 + 1726 + return ret; 1727 + } 1728 
+ 1729 + 1730 + /* 1731 + * Module setup 1732 + */ 1733 + 1734 + #ifdef MODULE 1735 + 1736 + static void __exit kprobe_test_exit(void) 1737 + { 1738 + } 1739 + 1740 + module_init(run_all_tests) 1741 + module_exit(kprobe_test_exit) 1742 + MODULE_LICENSE("GPL"); 1743 + 1744 + #else /* !MODULE */ 1745 + 1746 + late_initcall(run_all_tests); 1747 + 1748 + #endif
+392
arch/arm/kernel/kprobes-test.h
··· 1 + /* 2 + * arch/arm/kernel/kprobes-test.h 3 + * 4 + * Copyright (C) 2011 Jon Medhurst <tixy@yxit.co.uk>. 5 + * 6 + * This program is free software; you can redistribute it and/or modify 7 + * it under the terms of the GNU General Public License version 2 as 8 + * published by the Free Software Foundation. 9 + */ 10 + 11 + #define VERBOSE 0 /* Set to '1' for more logging of test cases */ 12 + 13 + #ifdef CONFIG_THUMB2_KERNEL 14 + #define NORMAL_ISA "16" 15 + #else 16 + #define NORMAL_ISA "32" 17 + #endif 18 + 19 + 20 + /* Flags used in kprobe_test_flags */ 21 + #define TEST_FLAG_NO_ITBLOCK (1<<0) 22 + #define TEST_FLAG_FULL_ITBLOCK (1<<1) 23 + #define TEST_FLAG_NARROW_INSTR (1<<2) 24 + 25 + extern int kprobe_test_flags; 26 + extern int kprobe_test_cc_position; 27 + 28 + 29 + #define TEST_MEMORY_SIZE 256 30 + 31 + 32 + /* 33 + * Test case structures. 34 + * 35 + * The arguments given to test cases can be one of three types. 36 + * 37 + * ARG_TYPE_REG 38 + * Load a register with the given value. 39 + * 40 + * ARG_TYPE_PTR 41 + * Load a register with a pointer into the stack buffer (SP + given value). 42 + * 43 + * ARG_TYPE_MEM 44 + * Store the given value into the stack buffer at [SP+index]. 
45 + * 46 + */ 47 + 48 + #define ARG_TYPE_END 0 49 + #define ARG_TYPE_REG 1 50 + #define ARG_TYPE_PTR 2 51 + #define ARG_TYPE_MEM 3 52 + 53 + #define ARG_FLAG_UNSUPPORTED 0x01 54 + #define ARG_FLAG_SUPPORTED 0x02 55 + #define ARG_FLAG_THUMB 0x10 /* Must be 16 so TEST_ISA can be used */ 56 + #define ARG_FLAG_ARM 0x20 /* Must be 32 so TEST_ISA can be used */ 57 + 58 + struct test_arg { 59 + u8 type; /* ARG_TYPE_x */ 60 + u8 _padding[7]; 61 + }; 62 + 63 + struct test_arg_regptr { 64 + u8 type; /* ARG_TYPE_REG or ARG_TYPE_PTR */ 65 + u8 reg; 66 + u8 _padding[2]; 67 + u32 val; 68 + }; 69 + 70 + struct test_arg_mem { 71 + u8 type; /* ARG_TYPE_MEM */ 72 + u8 index; 73 + u8 _padding[2]; 74 + u32 val; 75 + }; 76 + 77 + struct test_arg_end { 78 + u8 type; /* ARG_TYPE_END */ 79 + u8 flags; /* ARG_FLAG_x */ 80 + u16 code_offset; 81 + u16 branch_offset; 82 + u16 end_offset; 83 + }; 84 + 85 + 86 + /* 87 + * Building blocks for test cases. 88 + * 89 + * Each test case is wrapped between TESTCASE_START and TESTCASE_END. 90 + * 91 + * To specify arguments for a test case the TEST_ARG_{REG,PTR,MEM} macros are 92 + * used followed by a terminating TEST_ARG_END. 93 + * 94 + * After this, the instruction to be tested is defined with TEST_INSTRUCTION. 95 + * Or for branches, TEST_BRANCH_B and TEST_BRANCH_F (branch forwards/backwards). 96 + * 97 + * Some specific test cases may make use of other custom constructs. 98 + */ 99 + 100 + #if VERBOSE 101 + #define verbose(fmt, ...) pr_info(fmt, ##__VA_ARGS__) 102 + #else 103 + #define verbose(fmt, ...) 104 + #endif 105 + 106 + #define TEST_GROUP(title) \ 107 + verbose("\n"); \ 108 + verbose(title"\n"); \ 109 + verbose("---------------------------------------------------------\n"); 110 + 111 + #define TESTCASE_START(title) \ 112 + __asm__ __volatile__ ( \ 113 + "bl __kprobes_test_case_start \n\t" \ 114 + /* don't use .asciz here as 'title' may be */ \ 115 + /* multiple strings to be concatenated. 
*/ \ 116 + ".ascii "#title" \n\t" \ 117 + ".byte 0 \n\t" \ 118 + ".align 2 \n\t" 119 + 120 + #define TEST_ARG_REG(reg, val) \ 121 + ".byte "__stringify(ARG_TYPE_REG)" \n\t" \ 122 + ".byte "#reg" \n\t" \ 123 + ".short 0 \n\t" \ 124 + ".word "#val" \n\t" 125 + 126 + #define TEST_ARG_PTR(reg, val) \ 127 + ".byte "__stringify(ARG_TYPE_PTR)" \n\t" \ 128 + ".byte "#reg" \n\t" \ 129 + ".short 0 \n\t" \ 130 + ".word "#val" \n\t" 131 + 132 + #define TEST_ARG_MEM(index, val) \ 133 + ".byte "__stringify(ARG_TYPE_MEM)" \n\t" \ 134 + ".byte "#index" \n\t" \ 135 + ".short 0 \n\t" \ 136 + ".word "#val" \n\t" 137 + 138 + #define TEST_ARG_END(flags) \ 139 + ".byte "__stringify(ARG_TYPE_END)" \n\t" \ 140 + ".byte "TEST_ISA flags" \n\t" \ 141 + ".short 50f-0f \n\t" \ 142 + ".short 2f-0f \n\t" \ 143 + ".short 99f-0f \n\t" \ 144 + ".code "TEST_ISA" \n\t" \ 145 + "0: \n\t" 146 + 147 + #define TEST_INSTRUCTION(instruction) \ 148 + "50: nop \n\t" \ 149 + "1: "instruction" \n\t" \ 150 + " nop \n\t" 151 + 152 + #define TEST_BRANCH_F(instruction, xtra_dist) \ 153 + TEST_INSTRUCTION(instruction) \ 154 + ".if "#xtra_dist" \n\t" \ 155 + " b 99f \n\t" \ 156 + ".space "#xtra_dist" \n\t" \ 157 + ".endif \n\t" \ 158 + " b 99f \n\t" \ 159 + "2: nop \n\t" 160 + 161 + #define TEST_BRANCH_B(instruction, xtra_dist) \ 162 + " b 50f \n\t" \ 163 + " b 99f \n\t" \ 164 + "2: nop \n\t" \ 165 + " b 99f \n\t" \ 166 + ".if "#xtra_dist" \n\t" \ 167 + ".space "#xtra_dist" \n\t" \ 168 + ".endif \n\t" \ 169 + TEST_INSTRUCTION(instruction) 170 + 171 + #define TESTCASE_END \ 172 + "2: \n\t" \ 173 + "99: \n\t" \ 174 + " bl __kprobes_test_case_end_"TEST_ISA" \n\t" \ 175 + ".code "NORMAL_ISA" \n\t" \ 176 + : : \ 177 + : "r0", "r1", "r2", "r3", "ip", "lr", "memory", "cc" \ 178 + ); 179 + 180 + 181 + /* 182 + * Macros to define test cases. 183 + * 184 + * Those of the form TEST_{R,P,M}* can be used to define test cases 185 + * which take combinations of the three basic types of arguments. E.g. 
186 + * 187 + * TEST_R One register argument 188 + * TEST_RR Two register arguments 189 + * TEST_RPR A register, a pointer, then a register argument 190 + * 191 + * For testing instructions which may branch, there are macros TEST_BF_* 192 + * and TEST_BB_* for branching forwards and backwards. 193 + * 194 + * TEST_SUPPORTED and TEST_UNSUPPORTED don't cause the code to be executed, 195 + * they just verify that a kprobe is or is not allowed on the given instruction. 196 + */ 197 + 198 + #define TEST(code) \ 199 + TESTCASE_START(code) \ 200 + TEST_ARG_END("") \ 201 + TEST_INSTRUCTION(code) \ 202 + TESTCASE_END 203 + 204 + #define TEST_UNSUPPORTED(code) \ 205 + TESTCASE_START(code) \ 206 + TEST_ARG_END("|"__stringify(ARG_FLAG_UNSUPPORTED)) \ 207 + TEST_INSTRUCTION(code) \ 208 + TESTCASE_END 209 + 210 + #define TEST_SUPPORTED(code) \ 211 + TESTCASE_START(code) \ 212 + TEST_ARG_END("|"__stringify(ARG_FLAG_SUPPORTED)) \ 213 + TEST_INSTRUCTION(code) \ 214 + TESTCASE_END 215 + 216 + #define TEST_R(code1, reg, val, code2) \ 217 + TESTCASE_START(code1 #reg code2) \ 218 + TEST_ARG_REG(reg, val) \ 219 + TEST_ARG_END("") \ 220 + TEST_INSTRUCTION(code1 #reg code2) \ 221 + TESTCASE_END 222 + 223 + #define TEST_RR(code1, reg1, val1, code2, reg2, val2, code3) \ 224 + TESTCASE_START(code1 #reg1 code2 #reg2 code3) \ 225 + TEST_ARG_REG(reg1, val1) \ 226 + TEST_ARG_REG(reg2, val2) \ 227 + TEST_ARG_END("") \ 228 + TEST_INSTRUCTION(code1 #reg1 code2 #reg2 code3) \ 229 + TESTCASE_END 230 + 231 + #define TEST_RRR(code1, reg1, val1, code2, reg2, val2, code3, reg3, val3, code4)\ 232 + TESTCASE_START(code1 #reg1 code2 #reg2 code3 #reg3 code4) \ 233 + TEST_ARG_REG(reg1, val1) \ 234 + TEST_ARG_REG(reg2, val2) \ 235 + TEST_ARG_REG(reg3, val3) \ 236 + TEST_ARG_END("") \ 237 + TEST_INSTRUCTION(code1 #reg1 code2 #reg2 code3 #reg3 code4) \ 238 + TESTCASE_END 239 + 240 + #define TEST_RRRR(code1, reg1, val1, code2, reg2, val2, code3, reg3, val3, code4, reg4, val4) \ 241 + TESTCASE_START(code1 #reg1
code2 #reg2 code3 #reg3 code4 #reg4) \ 242 + TEST_ARG_REG(reg1, val1) \ 243 + TEST_ARG_REG(reg2, val2) \ 244 + TEST_ARG_REG(reg3, val3) \ 245 + TEST_ARG_REG(reg4, val4) \ 246 + TEST_ARG_END("") \ 247 + TEST_INSTRUCTION(code1 #reg1 code2 #reg2 code3 #reg3 code4 #reg4) \ 248 + TESTCASE_END 249 + 250 + #define TEST_P(code1, reg1, val1, code2) \ 251 + TESTCASE_START(code1 #reg1 code2) \ 252 + TEST_ARG_PTR(reg1, val1) \ 253 + TEST_ARG_END("") \ 254 + TEST_INSTRUCTION(code1 #reg1 code2) \ 255 + TESTCASE_END 256 + 257 + #define TEST_PR(code1, reg1, val1, code2, reg2, val2, code3) \ 258 + TESTCASE_START(code1 #reg1 code2 #reg2 code3) \ 259 + TEST_ARG_PTR(reg1, val1) \ 260 + TEST_ARG_REG(reg2, val2) \ 261 + TEST_ARG_END("") \ 262 + TEST_INSTRUCTION(code1 #reg1 code2 #reg2 code3) \ 263 + TESTCASE_END 264 + 265 + #define TEST_RP(code1, reg1, val1, code2, reg2, val2, code3) \ 266 + TESTCASE_START(code1 #reg1 code2 #reg2 code3) \ 267 + TEST_ARG_REG(reg1, val1) \ 268 + TEST_ARG_PTR(reg2, val2) \ 269 + TEST_ARG_END("") \ 270 + TEST_INSTRUCTION(code1 #reg1 code2 #reg2 code3) \ 271 + TESTCASE_END 272 + 273 + #define TEST_PRR(code1, reg1, val1, code2, reg2, val2, code3, reg3, val3, code4)\ 274 + TESTCASE_START(code1 #reg1 code2 #reg2 code3 #reg3 code4) \ 275 + TEST_ARG_PTR(reg1, val1) \ 276 + TEST_ARG_REG(reg2, val2) \ 277 + TEST_ARG_REG(reg3, val3) \ 278 + TEST_ARG_END("") \ 279 + TEST_INSTRUCTION(code1 #reg1 code2 #reg2 code3 #reg3 code4) \ 280 + TESTCASE_END 281 + 282 + #define TEST_RPR(code1, reg1, val1, code2, reg2, val2, code3, reg3, val3, code4)\ 283 + TESTCASE_START(code1 #reg1 code2 #reg2 code3 #reg3 code4) \ 284 + TEST_ARG_REG(reg1, val1) \ 285 + TEST_ARG_PTR(reg2, val2) \ 286 + TEST_ARG_REG(reg3, val3) \ 287 + TEST_ARG_END("") \ 288 + TEST_INSTRUCTION(code1 #reg1 code2 #reg2 code3 #reg3 code4) \ 289 + TESTCASE_END 290 + 291 + #define TEST_RRP(code1, reg1, val1, code2, reg2, val2, code3, reg3, val3, code4)\ 292 + TESTCASE_START(code1 #reg1 code2 #reg2 code3 #reg3 code4) \ 
293 + TEST_ARG_REG(reg1, val1) \ 294 + TEST_ARG_REG(reg2, val2) \ 295 + TEST_ARG_PTR(reg3, val3) \ 296 + TEST_ARG_END("") \ 297 + TEST_INSTRUCTION(code1 #reg1 code2 #reg2 code3 #reg3 code4) \ 298 + TESTCASE_END 299 + 300 + #define TEST_BF_P(code1, reg1, val1, code2) \ 301 + TESTCASE_START(code1 #reg1 code2) \ 302 + TEST_ARG_PTR(reg1, val1) \ 303 + TEST_ARG_END("") \ 304 + TEST_BRANCH_F(code1 #reg1 code2, 0) \ 305 + TESTCASE_END 306 + 307 + #define TEST_BF_X(code, xtra_dist) \ 308 + TESTCASE_START(code) \ 309 + TEST_ARG_END("") \ 310 + TEST_BRANCH_F(code, xtra_dist) \ 311 + TESTCASE_END 312 + 313 + #define TEST_BB_X(code, xtra_dist) \ 314 + TESTCASE_START(code) \ 315 + TEST_ARG_END("") \ 316 + TEST_BRANCH_B(code, xtra_dist) \ 317 + TESTCASE_END 318 + 319 + #define TEST_BF_RX(code1, reg, val, code2, xtra_dist) \ 320 + TESTCASE_START(code1 #reg code2) \ 321 + TEST_ARG_REG(reg, val) \ 322 + TEST_ARG_END("") \ 323 + TEST_BRANCH_F(code1 #reg code2, xtra_dist) \ 324 + TESTCASE_END 325 + 326 + #define TEST_BB_RX(code1, reg, val, code2, xtra_dist) \ 327 + TESTCASE_START(code1 #reg code2) \ 328 + TEST_ARG_REG(reg, val) \ 329 + TEST_ARG_END("") \ 330 + TEST_BRANCH_B(code1 #reg code2, xtra_dist) \ 331 + TESTCASE_END 332 + 333 + #define TEST_BF(code) TEST_BF_X(code, 0) 334 + #define TEST_BB(code) TEST_BB_X(code, 0) 335 + 336 + #define TEST_BF_R(code1, reg, val, code2) TEST_BF_RX(code1, reg, val, code2, 0) 337 + #define TEST_BB_R(code1, reg, val, code2) TEST_BB_RX(code1, reg, val, code2, 0) 338 + 339 + #define TEST_BF_RR(code1, reg1, val1, code2, reg2, val2, code3) \ 340 + TESTCASE_START(code1 #reg1 code2 #reg2 code3) \ 341 + TEST_ARG_REG(reg1, val1) \ 342 + TEST_ARG_REG(reg2, val2) \ 343 + TEST_ARG_END("") \ 344 + TEST_BRANCH_F(code1 #reg1 code2 #reg2 code3, 0) \ 345 + TESTCASE_END 346 + 347 + #define TEST_X(code, codex) \ 348 + TESTCASE_START(code) \ 349 + TEST_ARG_END("") \ 350 + TEST_INSTRUCTION(code) \ 351 + " b 99f \n\t" \ 352 + " "codex" \n\t" \ 353 + TESTCASE_END 354 + 
355 + #define TEST_RX(code1, reg, val, code2, codex) \ 356 + TESTCASE_START(code1 #reg code2) \ 357 + TEST_ARG_REG(reg, val) \ 358 + TEST_ARG_END("") \ 359 + TEST_INSTRUCTION(code1 __stringify(reg) code2) \ 360 + " b 99f \n\t" \ 361 + " "codex" \n\t" \ 362 + TESTCASE_END 363 + 364 + #define TEST_RRX(code1, reg1, val1, code2, reg2, val2, code3, codex) \ 365 + TESTCASE_START(code1 #reg1 code2 #reg2 code3) \ 366 + TEST_ARG_REG(reg1, val1) \ 367 + TEST_ARG_REG(reg2, val2) \ 368 + TEST_ARG_END("") \ 369 + TEST_INSTRUCTION(code1 __stringify(reg1) code2 __stringify(reg2) code3) \ 370 + " b 99f \n\t" \ 371 + " "codex" \n\t" \ 372 + TESTCASE_END 373 + 374 + 375 + /* Various values used in test cases... */ 376 + #define N(val) (val ^ 0xffffffff) 377 + #define VAL1 0x12345678 378 + #define VAL2 N(VAL1) 379 + #define VAL3 0xa5f801 380 + #define VAL4 N(VAL3) 381 + #define VALM 0x456789ab 382 + #define VALR 0xdeaddead 383 + #define HH1 0x0123fecb 384 + #define HH2 0xa9874567 385 + 386 + 387 + #ifdef CONFIG_THUMB2_KERNEL 388 + void kprobe_thumb16_test_cases(void); 389 + void kprobe_thumb32_test_cases(void); 390 + #else 391 + void kprobe_arm_test_cases(void); 392 + #endif
+7
arch/arm/kernel/kprobes-thumb.c
··· 10 10 11 11 #include <linux/kernel.h> 12 12 #include <linux/kprobes.h> 13 + #include <linux/module.h> 13 14 14 15 #include "kprobes.h" 15 16 ··· 944 943 */ 945 944 DECODE_END 946 945 }; 946 + #ifdef CONFIG_ARM_KPROBES_TEST_MODULE 947 + EXPORT_SYMBOL_GPL(kprobe_decode_thumb32_table); 948 + #endif 947 949 948 950 static void __kprobes 949 951 t16_simulate_bxblx(struct kprobe *p, struct pt_regs *regs) ··· 1427 1423 1428 1424 DECODE_END 1429 1425 }; 1426 + #ifdef CONFIG_ARM_KPROBES_TEST_MODULE 1427 + EXPORT_SYMBOL_GPL(kprobe_decode_thumb16_table); 1428 + #endif 1430 1429 1431 1430 static unsigned long __kprobes thumb_check_cc(unsigned long cpsr) 1432 1431 {
+8
arch/arm/kernel/kprobes.h
··· 413 413 DECODE_HEADER(DECODE_TYPE_REJECT, _mask, _value, 0) 414 414 415 415 416 + #ifdef CONFIG_THUMB2_KERNEL 417 + extern const union decode_item kprobe_decode_thumb16_table[]; 418 + extern const union decode_item kprobe_decode_thumb32_table[]; 419 + #else 420 + extern const union decode_item kprobe_decode_arm_table[]; 421 + #endif 422 + 423 + 416 424 int kprobe_decode_insn(kprobe_opcode_t insn, struct arch_specific_insn *asi, 417 425 const union decode_item *table, bool thumb16); 418 426
+259 -226
arch/arm/kernel/perf_event.c
··· 12 12 */ 13 13 #define pr_fmt(fmt) "hw perfevents: " fmt 14 14 15 + #include <linux/bitmap.h> 15 16 #include <linux/interrupt.h> 16 17 #include <linux/kernel.h> 17 18 #include <linux/module.h> ··· 27 26 #include <asm/pmu.h> 28 27 #include <asm/stacktrace.h> 29 28 30 - static struct platform_device *pmu_device; 31 - 32 29 /* 33 - * Hardware lock to serialize accesses to PMU registers. Needed for the 34 - * read/modify/write sequences. 35 - */ 36 - static DEFINE_RAW_SPINLOCK(pmu_lock); 37 - 38 - /* 39 - * ARMv6 supports a maximum of 3 events, starting from index 1. If we add 30 + * ARMv6 supports a maximum of 3 events, starting from index 0. If we add 40 31 * another platform that supports more, we need to increase this to be the 41 32 * largest of all platforms. 42 33 * ··· 36 43 * cycle counter CCNT + 31 events counters CNT0..30. 37 44 * Cortex-A8 has 1+4 counters, Cortex-A9 has 1+6 counters. 38 45 */ 39 - #define ARMPMU_MAX_HWEVENTS 33 46 + #define ARMPMU_MAX_HWEVENTS 32 40 47 41 - /* The events for a given CPU. */ 42 - struct cpu_hw_events { 43 - /* 44 - * The events that are active on the CPU for the given index. Index 0 45 - * is reserved. 46 - */ 47 - struct perf_event *events[ARMPMU_MAX_HWEVENTS]; 48 + static DEFINE_PER_CPU(struct perf_event * [ARMPMU_MAX_HWEVENTS], hw_events); 49 + static DEFINE_PER_CPU(unsigned long [BITS_TO_LONGS(ARMPMU_MAX_HWEVENTS)], used_mask); 50 + static DEFINE_PER_CPU(struct pmu_hw_events, cpu_hw_events); 48 51 49 - /* 50 - * A 1 bit for an index indicates that the counter is being used for 51 - * an event. A 0 means that the counter can be used. 52 - */ 53 - unsigned long used_mask[BITS_TO_LONGS(ARMPMU_MAX_HWEVENTS)]; 54 - 55 - /* 56 - * A 1 bit for an index indicates that the counter is actively being 57 - * used. 
58 - */ 59 - unsigned long active_mask[BITS_TO_LONGS(ARMPMU_MAX_HWEVENTS)]; 60 - }; 61 - static DEFINE_PER_CPU(struct cpu_hw_events, cpu_hw_events); 62 - 63 - struct arm_pmu { 64 - enum arm_perf_pmu_ids id; 65 - const char *name; 66 - irqreturn_t (*handle_irq)(int irq_num, void *dev); 67 - void (*enable)(struct hw_perf_event *evt, int idx); 68 - void (*disable)(struct hw_perf_event *evt, int idx); 69 - int (*get_event_idx)(struct cpu_hw_events *cpuc, 70 - struct hw_perf_event *hwc); 71 - u32 (*read_counter)(int idx); 72 - void (*write_counter)(int idx, u32 val); 73 - void (*start)(void); 74 - void (*stop)(void); 75 - void (*reset)(void *); 76 - const unsigned (*cache_map)[PERF_COUNT_HW_CACHE_MAX] 77 - [PERF_COUNT_HW_CACHE_OP_MAX] 78 - [PERF_COUNT_HW_CACHE_RESULT_MAX]; 79 - const unsigned (*event_map)[PERF_COUNT_HW_MAX]; 80 - u32 raw_event_mask; 81 - int num_events; 82 - u64 max_period; 83 - }; 52 + #define to_arm_pmu(p) (container_of(p, struct arm_pmu, pmu)) 84 53 85 54 /* Set at runtime when we know what CPU type we are. 
*/ 86 - static const struct arm_pmu *armpmu; 55 + static struct arm_pmu *cpu_pmu; 87 56 88 57 enum arm_perf_pmu_ids 89 58 armpmu_get_pmu_id(void) 90 59 { 91 60 int id = -ENODEV; 92 61 93 - if (armpmu != NULL) 94 - id = armpmu->id; 62 + if (cpu_pmu != NULL) 63 + id = cpu_pmu->id; 95 64 96 65 return id; 97 66 } ··· 64 109 { 65 110 int max_events = 0; 66 111 67 - if (armpmu != NULL) 68 - max_events = armpmu->num_events; 112 + if (cpu_pmu != NULL) 113 + max_events = cpu_pmu->num_events; 69 114 70 115 return max_events; 71 116 } ··· 85 130 #define CACHE_OP_UNSUPPORTED 0xFFFF 86 131 87 132 static int 88 - armpmu_map_cache_event(u64 config) 133 + armpmu_map_cache_event(const unsigned (*cache_map) 134 + [PERF_COUNT_HW_CACHE_MAX] 135 + [PERF_COUNT_HW_CACHE_OP_MAX] 136 + [PERF_COUNT_HW_CACHE_RESULT_MAX], 137 + u64 config) 89 138 { 90 139 unsigned int cache_type, cache_op, cache_result, ret; 91 140 ··· 105 146 if (cache_result >= PERF_COUNT_HW_CACHE_RESULT_MAX) 106 147 return -EINVAL; 107 148 108 - ret = (int)(*armpmu->cache_map)[cache_type][cache_op][cache_result]; 149 + ret = (int)(*cache_map)[cache_type][cache_op][cache_result]; 109 150 110 151 if (ret == CACHE_OP_UNSUPPORTED) 111 152 return -ENOENT; ··· 114 155 } 115 156 116 157 static int 117 - armpmu_map_event(u64 config) 158 + armpmu_map_event(const unsigned (*event_map)[PERF_COUNT_HW_MAX], u64 config) 118 159 { 119 - int mapping = (*armpmu->event_map)[config]; 120 - return mapping == HW_OP_UNSUPPORTED ? -EOPNOTSUPP : mapping; 160 + int mapping = (*event_map)[config]; 161 + return mapping == HW_OP_UNSUPPORTED ? 
-ENOENT : mapping; 121 162 } 122 163 123 164 static int 124 - armpmu_map_raw_event(u64 config) 165 + armpmu_map_raw_event(u32 raw_event_mask, u64 config) 125 166 { 126 - return (int)(config & armpmu->raw_event_mask); 167 + return (int)(config & raw_event_mask); 127 168 } 128 169 129 - static int 170 + static int map_cpu_event(struct perf_event *event, 171 + const unsigned (*event_map)[PERF_COUNT_HW_MAX], 172 + const unsigned (*cache_map) 173 + [PERF_COUNT_HW_CACHE_MAX] 174 + [PERF_COUNT_HW_CACHE_OP_MAX] 175 + [PERF_COUNT_HW_CACHE_RESULT_MAX], 176 + u32 raw_event_mask) 177 + { 178 + u64 config = event->attr.config; 179 + 180 + switch (event->attr.type) { 181 + case PERF_TYPE_HARDWARE: 182 + return armpmu_map_event(event_map, config); 183 + case PERF_TYPE_HW_CACHE: 184 + return armpmu_map_cache_event(cache_map, config); 185 + case PERF_TYPE_RAW: 186 + return armpmu_map_raw_event(raw_event_mask, config); 187 + } 188 + 189 + return -ENOENT; 190 + } 191 + 192 + int 130 193 armpmu_event_set_period(struct perf_event *event, 131 194 struct hw_perf_event *hwc, 132 195 int idx) 133 196 { 197 + struct arm_pmu *armpmu = to_arm_pmu(event->pmu); 134 198 s64 left = local64_read(&hwc->period_left); 135 199 s64 period = hwc->sample_period; 136 200 int ret = 0; ··· 184 202 return ret; 185 203 } 186 204 187 - static u64 205 + u64 188 206 armpmu_event_update(struct perf_event *event, 189 207 struct hw_perf_event *hwc, 190 208 int idx, int overflow) 191 209 { 210 + struct arm_pmu *armpmu = to_arm_pmu(event->pmu); 192 211 u64 delta, prev_raw_count, new_raw_count; 193 212 194 213 again: ··· 229 246 static void 230 247 armpmu_stop(struct perf_event *event, int flags) 231 248 { 249 + struct arm_pmu *armpmu = to_arm_pmu(event->pmu); 232 250 struct hw_perf_event *hwc = &event->hw; 233 - 234 - if (!armpmu) 235 - return; 236 251 237 252 /* 238 253 * ARM pmu always has to update the counter, so ignore ··· 247 266 static void 248 267 armpmu_start(struct perf_event *event, int flags) 249 268 { 
269 + struct arm_pmu *armpmu = to_arm_pmu(event->pmu); 250 270 struct hw_perf_event *hwc = &event->hw; 251 - 252 - if (!armpmu) 253 - return; 254 271 255 272 /* 256 273 * ARM pmu always has to reprogram the period, so ignore ··· 272 293 static void 273 294 armpmu_del(struct perf_event *event, int flags) 274 295 { 275 - struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events); 296 + struct arm_pmu *armpmu = to_arm_pmu(event->pmu); 297 + struct pmu_hw_events *hw_events = armpmu->get_hw_events(); 276 298 struct hw_perf_event *hwc = &event->hw; 277 299 int idx = hwc->idx; 278 300 279 301 WARN_ON(idx < 0); 280 302 281 - clear_bit(idx, cpuc->active_mask); 282 303 armpmu_stop(event, PERF_EF_UPDATE); 283 - cpuc->events[idx] = NULL; 284 - clear_bit(idx, cpuc->used_mask); 304 + hw_events->events[idx] = NULL; 305 + clear_bit(idx, hw_events->used_mask); 285 306 286 307 perf_event_update_userpage(event); 287 308 } ··· 289 310 static int 290 311 armpmu_add(struct perf_event *event, int flags) 291 312 { 292 - struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events); 313 + struct arm_pmu *armpmu = to_arm_pmu(event->pmu); 314 + struct pmu_hw_events *hw_events = armpmu->get_hw_events(); 293 315 struct hw_perf_event *hwc = &event->hw; 294 316 int idx; 295 317 int err = 0; ··· 298 318 perf_pmu_disable(event->pmu); 299 319 300 320 /* If we don't have a space for the counter then finish early. 
*/ 301 - idx = armpmu->get_event_idx(cpuc, hwc); 321 + idx = armpmu->get_event_idx(hw_events, hwc); 302 322 if (idx < 0) { 303 323 err = idx; 304 324 goto out; ··· 310 330 */ 311 331 event->hw.idx = idx; 312 332 armpmu->disable(hwc, idx); 313 - cpuc->events[idx] = event; 314 - set_bit(idx, cpuc->active_mask); 333 + hw_events->events[idx] = event; 315 334 316 335 hwc->state = PERF_HES_STOPPED | PERF_HES_UPTODATE; 317 336 if (flags & PERF_EF_START) ··· 324 345 return err; 325 346 } 326 347 327 - static struct pmu pmu; 328 - 329 348 static int 330 - validate_event(struct cpu_hw_events *cpuc, 349 + validate_event(struct pmu_hw_events *hw_events, 331 350 struct perf_event *event) 332 351 { 352 + struct arm_pmu *armpmu = to_arm_pmu(event->pmu); 333 353 struct hw_perf_event fake_event = event->hw; 354 + struct pmu *leader_pmu = event->group_leader->pmu; 334 355 335 - if (event->pmu != &pmu || event->state <= PERF_EVENT_STATE_OFF) 356 + if (event->pmu != leader_pmu || event->state <= PERF_EVENT_STATE_OFF) 336 357 return 1; 337 358 338 - return armpmu->get_event_idx(cpuc, &fake_event) >= 0; 359 + return armpmu->get_event_idx(hw_events, &fake_event) >= 0; 339 360 } 340 361 341 362 static int 342 363 validate_group(struct perf_event *event) 343 364 { 344 365 struct perf_event *sibling, *leader = event->group_leader; 345 - struct cpu_hw_events fake_pmu; 366 + struct pmu_hw_events fake_pmu; 346 367 347 368 memset(&fake_pmu, 0, sizeof(fake_pmu)); 348 369 ··· 362 383 363 384 static irqreturn_t armpmu_platform_irq(int irq, void *dev) 364 385 { 365 - struct arm_pmu_platdata *plat = dev_get_platdata(&pmu_device->dev); 386 + struct arm_pmu *armpmu = (struct arm_pmu *) dev; 387 + struct platform_device *plat_device = armpmu->plat_device; 388 + struct arm_pmu_platdata *plat = dev_get_platdata(&plat_device->dev); 366 389 367 390 return plat->handle_irq(irq, dev, armpmu->handle_irq); 368 391 } 369 392 393 + static void 394 + armpmu_release_hardware(struct arm_pmu *armpmu) 395 + { 396 + 
int i, irq, irqs; 397 + struct platform_device *pmu_device = armpmu->plat_device; 398 + 399 + irqs = min(pmu_device->num_resources, num_possible_cpus()); 400 + 401 + for (i = 0; i < irqs; ++i) { 402 + if (!cpumask_test_and_clear_cpu(i, &armpmu->active_irqs)) 403 + continue; 404 + irq = platform_get_irq(pmu_device, i); 405 + if (irq >= 0) 406 + free_irq(irq, armpmu); 407 + } 408 + 409 + release_pmu(armpmu->type); 410 + } 411 + 370 412 static int 371 - armpmu_reserve_hardware(void) 413 + armpmu_reserve_hardware(struct arm_pmu *armpmu) 372 414 { 373 415 struct arm_pmu_platdata *plat; 374 416 irq_handler_t handle_irq; 375 - int i, err = -ENODEV, irq; 417 + int i, err, irq, irqs; 418 + struct platform_device *pmu_device = armpmu->plat_device; 376 419 377 - pmu_device = reserve_pmu(ARM_PMU_DEVICE_CPU); 378 - if (IS_ERR(pmu_device)) { 420 + err = reserve_pmu(armpmu->type); 421 + if (err) { 379 422 pr_warning("unable to reserve pmu\n"); 380 - return PTR_ERR(pmu_device); 423 + return err; 381 424 } 382 - 383 - init_pmu(ARM_PMU_DEVICE_CPU); 384 425 385 426 plat = dev_get_platdata(&pmu_device->dev); 386 427 if (plat && plat->handle_irq) ··· 408 409 else 409 410 handle_irq = armpmu->handle_irq; 410 411 411 - if (pmu_device->num_resources < 1) { 412 + irqs = min(pmu_device->num_resources, num_possible_cpus()); 413 + if (irqs < 1) { 412 414 pr_err("no irqs for PMUs defined\n"); 413 415 return -ENODEV; 414 416 } 415 417 416 - for (i = 0; i < pmu_device->num_resources; ++i) { 418 + for (i = 0; i < irqs; ++i) { 419 + err = 0; 417 420 irq = platform_get_irq(pmu_device, i); 418 421 if (irq < 0) 419 422 continue; 420 423 424 + /* 425 + * If we have a single PMU interrupt that we can't shift, 426 + * assume that we're running on a uniprocessor machine and 427 + * continue. Otherwise, continue without this interrupt. 
428 + */ 429 + if (irq_set_affinity(irq, cpumask_of(i)) && irqs > 1) { 430 + pr_warning("unable to set irq affinity (irq=%d, cpu=%u)\n", 431 + irq, i); 432 + continue; 433 + } 434 + 421 435 err = request_irq(irq, handle_irq, 422 436 IRQF_DISABLED | IRQF_NOBALANCING, 423 - "armpmu", NULL); 437 + "arm-pmu", armpmu); 424 438 if (err) { 425 - pr_warning("unable to request IRQ%d for ARM perf " 426 - "counters\n", irq); 427 - break; 439 + pr_err("unable to request IRQ%d for ARM PMU counters\n", 440 + irq); 441 + armpmu_release_hardware(armpmu); 442 + return err; 428 443 } 444 + 445 + cpumask_set_cpu(i, &armpmu->active_irqs); 429 446 } 430 447 431 - if (err) { 432 - for (i = i - 1; i >= 0; --i) { 433 - irq = platform_get_irq(pmu_device, i); 434 - if (irq >= 0) 435 - free_irq(irq, NULL); 436 - } 437 - release_pmu(ARM_PMU_DEVICE_CPU); 438 - pmu_device = NULL; 439 - } 440 - 441 - return err; 448 + return 0; 442 449 } 443 - 444 - static void 445 - armpmu_release_hardware(void) 446 - { 447 - int i, irq; 448 - 449 - for (i = pmu_device->num_resources - 1; i >= 0; --i) { 450 - irq = platform_get_irq(pmu_device, i); 451 - if (irq >= 0) 452 - free_irq(irq, NULL); 453 - } 454 - armpmu->stop(); 455 - 456 - release_pmu(ARM_PMU_DEVICE_CPU); 457 - pmu_device = NULL; 458 - } 459 - 460 - static atomic_t active_events = ATOMIC_INIT(0); 461 - static DEFINE_MUTEX(pmu_reserve_mutex); 462 450 463 451 static void 464 452 hw_perf_event_destroy(struct perf_event *event) 465 453 { 466 - if (atomic_dec_and_mutex_lock(&active_events, &pmu_reserve_mutex)) { 467 - armpmu_release_hardware(); 468 - mutex_unlock(&pmu_reserve_mutex); 454 + struct arm_pmu *armpmu = to_arm_pmu(event->pmu); 455 + atomic_t *active_events = &armpmu->active_events; 456 + struct mutex *pmu_reserve_mutex = &armpmu->reserve_mutex; 457 + 458 + if (atomic_dec_and_mutex_lock(active_events, pmu_reserve_mutex)) { 459 + armpmu_release_hardware(armpmu); 460 + mutex_unlock(pmu_reserve_mutex); 469 461 } 462 + } 463 + 464 + static int 465 
+ event_requires_mode_exclusion(struct perf_event_attr *attr) 466 + { 467 + return attr->exclude_idle || attr->exclude_user || 468 + attr->exclude_kernel || attr->exclude_hv; 470 469 } 471 470 472 471 static int 473 472 __hw_perf_event_init(struct perf_event *event) 474 473 { 474 + struct arm_pmu *armpmu = to_arm_pmu(event->pmu); 475 475 struct hw_perf_event *hwc = &event->hw; 476 476 int mapping, err; 477 477 478 - /* Decode the generic type into an ARM event identifier. */ 479 - if (PERF_TYPE_HARDWARE == event->attr.type) { 480 - mapping = armpmu_map_event(event->attr.config); 481 - } else if (PERF_TYPE_HW_CACHE == event->attr.type) { 482 - mapping = armpmu_map_cache_event(event->attr.config); 483 - } else if (PERF_TYPE_RAW == event->attr.type) { 484 - mapping = armpmu_map_raw_event(event->attr.config); 485 - } else { 486 - pr_debug("event type %x not supported\n", event->attr.type); 487 - return -EOPNOTSUPP; 488 - } 478 + mapping = armpmu->map_event(event); 489 479 490 480 if (mapping < 0) { 491 481 pr_debug("event %x:%llx not supported\n", event->attr.type, ··· 483 495 } 484 496 485 497 /* 486 - * Check whether we need to exclude the counter from certain modes. 487 - * The ARM performance counters are on all of the time so if someone 488 - * has asked us for some excludes then we have to fail. 498 + * We don't assign an index until we actually place the event onto 499 + * hardware. Use -1 to signify that we haven't decided where to put it 500 + * yet. For SMP systems, each core has it's own PMU so we can't do any 501 + * clever allocation or constraints checking at this point. 489 502 */ 490 - if (event->attr.exclude_kernel || event->attr.exclude_user || 491 - event->attr.exclude_hv || event->attr.exclude_idle) { 503 + hwc->idx = -1; 504 + hwc->config_base = 0; 505 + hwc->config = 0; 506 + hwc->event_base = 0; 507 + 508 + /* 509 + * Check whether we need to exclude the counter from certain modes. 
510 + */ 511 + if ((!armpmu->set_event_filter || 512 + armpmu->set_event_filter(hwc, &event->attr)) && 513 + event_requires_mode_exclusion(&event->attr)) { 492 514 pr_debug("ARM performance counters do not support " 493 515 "mode exclusion\n"); 494 516 return -EPERM; 495 517 } 496 518 497 519 /* 498 - * We don't assign an index until we actually place the event onto 499 - * hardware. Use -1 to signify that we haven't decided where to put it 500 - * yet. For SMP systems, each core has it's own PMU so we can't do any 501 - * clever allocation or constraints checking at this point. 520 + * Store the event encoding into the config_base field. 502 521 */ 503 - hwc->idx = -1; 504 - 505 - /* 506 - * Store the event encoding into the config_base field. config and 507 - * event_base are unused as the only 2 things we need to know are 508 - * the event mapping and the counter to use. The counter to use is 509 - * also the indx and the config_base is the event type. 510 - */ 511 - hwc->config_base = (unsigned long)mapping; 512 - hwc->config = 0; 513 - hwc->event_base = 0; 522 + hwc->config_base |= (unsigned long)mapping; 514 523 515 524 if (!hwc->sample_period) { 516 525 hwc->sample_period = armpmu->max_period; ··· 527 542 528 543 static int armpmu_event_init(struct perf_event *event) 529 544 { 545 + struct arm_pmu *armpmu = to_arm_pmu(event->pmu); 530 546 int err = 0; 547 + atomic_t *active_events = &armpmu->active_events; 531 548 532 - switch (event->attr.type) { 533 - case PERF_TYPE_RAW: 534 - case PERF_TYPE_HARDWARE: 535 - case PERF_TYPE_HW_CACHE: 536 - break; 537 - 538 - default: 549 + if (armpmu->map_event(event) == -ENOENT) 539 550 return -ENOENT; 540 - } 541 - 542 - if (!armpmu) 543 - return -ENODEV; 544 551 545 552 event->destroy = hw_perf_event_destroy; 546 553 547 - if (!atomic_inc_not_zero(&active_events)) { 548 - mutex_lock(&pmu_reserve_mutex); 549 - if (atomic_read(&active_events) == 0) { 550 - err = armpmu_reserve_hardware(); 551 - } 554 + if 
(!atomic_inc_not_zero(active_events)) { 555 + mutex_lock(&armpmu->reserve_mutex); 556 + if (atomic_read(active_events) == 0) 557 + err = armpmu_reserve_hardware(armpmu); 552 558 553 559 if (!err) 554 - atomic_inc(&active_events); 555 - mutex_unlock(&pmu_reserve_mutex); 560 + atomic_inc(active_events); 561 + mutex_unlock(&armpmu->reserve_mutex); 556 562 } 557 563 558 564 if (err) ··· 558 582 559 583 static void armpmu_enable(struct pmu *pmu) 560 584 { 561 - /* Enable all of the perf events on hardware. */ 562 - int idx, enabled = 0; 563 - struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events); 564 - 565 - if (!armpmu) 566 - return; 567 - 568 - for (idx = 0; idx <= armpmu->num_events; ++idx) { 569 - struct perf_event *event = cpuc->events[idx]; 570 - 571 - if (!event) 572 - continue; 573 - 574 - armpmu->enable(&event->hw, idx); 575 - enabled = 1; 576 - } 585 + struct arm_pmu *armpmu = to_arm_pmu(pmu); 586 + struct pmu_hw_events *hw_events = armpmu->get_hw_events(); 587 + int enabled = bitmap_weight(hw_events->used_mask, armpmu->num_events); 577 588 578 589 if (enabled) 579 590 armpmu->start(); ··· 568 605 569 606 static void armpmu_disable(struct pmu *pmu) 570 607 { 571 - if (armpmu) 572 - armpmu->stop(); 608 + struct arm_pmu *armpmu = to_arm_pmu(pmu); 609 + armpmu->stop(); 573 610 } 574 611 575 - static struct pmu pmu = { 576 - .pmu_enable = armpmu_enable, 577 - .pmu_disable = armpmu_disable, 578 - .event_init = armpmu_event_init, 579 - .add = armpmu_add, 580 - .del = armpmu_del, 581 - .start = armpmu_start, 582 - .stop = armpmu_stop, 583 - .read = armpmu_read, 584 - }; 612 + static void __init armpmu_init(struct arm_pmu *armpmu) 613 + { 614 + atomic_set(&armpmu->active_events, 0); 615 + mutex_init(&armpmu->reserve_mutex); 616 + 617 + armpmu->pmu = (struct pmu) { 618 + .pmu_enable = armpmu_enable, 619 + .pmu_disable = armpmu_disable, 620 + .event_init = armpmu_event_init, 621 + .add = armpmu_add, 622 + .del = armpmu_del, 623 + .start = armpmu_start, 624 + .stop 
= armpmu_stop, 625 + .read = armpmu_read, 626 + }; 627 + } 628 + 629 + int __init armpmu_register(struct arm_pmu *armpmu, char *name, int type) 630 + { 631 + armpmu_init(armpmu); 632 + return perf_pmu_register(&armpmu->pmu, name, type); 633 + } 585 634 586 635 /* Include the PMU-specific implementations. */ 587 636 #include "perf_event_xscale.c" ··· 605 630 * This requires SMP to be available, so exists as a separate initcall. 606 631 */ 607 632 static int __init 608 - armpmu_reset(void) 633 + cpu_pmu_reset(void) 609 634 { 610 - if (armpmu && armpmu->reset) 611 - return on_each_cpu(armpmu->reset, NULL, 1); 635 + if (cpu_pmu && cpu_pmu->reset) 636 + return on_each_cpu(cpu_pmu->reset, NULL, 1); 612 637 return 0; 613 638 } 614 - arch_initcall(armpmu_reset); 639 + arch_initcall(cpu_pmu_reset); 615 640 641 + /* 642 + * PMU platform driver and devicetree bindings. 643 + */ 644 + static struct of_device_id armpmu_of_device_ids[] = { 645 + {.compatible = "arm,cortex-a9-pmu"}, 646 + {.compatible = "arm,cortex-a8-pmu"}, 647 + {.compatible = "arm,arm1136-pmu"}, 648 + {.compatible = "arm,arm1176-pmu"}, 649 + {}, 650 + }; 651 + 652 + static struct platform_device_id armpmu_plat_device_ids[] = { 653 + {.name = "arm-pmu"}, 654 + {}, 655 + }; 656 + 657 + static int __devinit armpmu_device_probe(struct platform_device *pdev) 658 + { 659 + cpu_pmu->plat_device = pdev; 660 + return 0; 661 + } 662 + 663 + static struct platform_driver armpmu_driver = { 664 + .driver = { 665 + .name = "arm-pmu", 666 + .of_match_table = armpmu_of_device_ids, 667 + }, 668 + .probe = armpmu_device_probe, 669 + .id_table = armpmu_plat_device_ids, 670 + }; 671 + 672 + static int __init register_pmu_driver(void) 673 + { 674 + return platform_driver_register(&armpmu_driver); 675 + } 676 + device_initcall(register_pmu_driver); 677 + 678 + static struct pmu_hw_events *armpmu_get_cpu_events(void) 679 + { 680 + return &__get_cpu_var(cpu_hw_events); 681 + } 682 + 683 + static void __init cpu_pmu_init(struct 
arm_pmu *armpmu) 684 + { 685 + int cpu; 686 + for_each_possible_cpu(cpu) { 687 + struct pmu_hw_events *events = &per_cpu(cpu_hw_events, cpu); 688 + events->events = per_cpu(hw_events, cpu); 689 + events->used_mask = per_cpu(used_mask, cpu); 690 + raw_spin_lock_init(&events->pmu_lock); 691 + } 692 + armpmu->get_hw_events = armpmu_get_cpu_events; 693 + armpmu->type = ARM_PMU_DEVICE_CPU; 694 + } 695 + 696 + /* 697 + * CPU PMU identification and registration. 698 + */ 616 699 static int __init 617 700 init_hw_perf_events(void) 618 701 { ··· 684 651 case 0xB360: /* ARM1136 */ 685 652 case 0xB560: /* ARM1156 */ 686 653 case 0xB760: /* ARM1176 */ 687 - armpmu = armv6pmu_init(); 654 + cpu_pmu = armv6pmu_init(); 688 655 break; 689 656 case 0xB020: /* ARM11mpcore */ 690 - armpmu = armv6mpcore_pmu_init(); 657 + cpu_pmu = armv6mpcore_pmu_init(); 691 658 break; 692 659 case 0xC080: /* Cortex-A8 */ 693 - armpmu = armv7_a8_pmu_init(); 660 + cpu_pmu = armv7_a8_pmu_init(); 694 661 break; 695 662 case 0xC090: /* Cortex-A9 */ 696 - armpmu = armv7_a9_pmu_init(); 663 + cpu_pmu = armv7_a9_pmu_init(); 697 664 break; 698 665 case 0xC050: /* Cortex-A5 */ 699 - armpmu = armv7_a5_pmu_init(); 666 + cpu_pmu = armv7_a5_pmu_init(); 700 667 break; 701 668 case 0xC0F0: /* Cortex-A15 */ 702 - armpmu = armv7_a15_pmu_init(); 669 + cpu_pmu = armv7_a15_pmu_init(); 703 670 break; 704 671 } 705 672 /* Intel CPUs [xscale]. 
*/ ··· 707 674 part_number = (cpuid >> 13) & 0x7; 708 675 switch (part_number) { 709 676 case 1: 710 - armpmu = xscale1pmu_init(); 677 + cpu_pmu = xscale1pmu_init(); 711 678 break; 712 679 case 2: 713 - armpmu = xscale2pmu_init(); 680 + cpu_pmu = xscale2pmu_init(); 714 681 break; 715 682 } 716 683 } 717 684 718 - if (armpmu) { 685 + if (cpu_pmu) { 719 686 pr_info("enabled with %s PMU driver, %d counters available\n", 720 - armpmu->name, armpmu->num_events); 687 + cpu_pmu->name, cpu_pmu->num_events); 688 + cpu_pmu_init(cpu_pmu); 689 + armpmu_register(cpu_pmu, "cpu", PERF_TYPE_RAW); 721 690 } else { 722 691 pr_info("no hardware support available\n"); 723 692 } 724 - 725 - perf_pmu_register(&pmu, "cpu", PERF_TYPE_RAW); 726 693 727 694 return 0; 728 695 }
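The hunk above renames the global `armpmu` to `cpu_pmu` but keeps the same CPUID part-number dispatch when picking a PMU backend. A stand-alone sketch of that dispatch, with the part numbers and names copied from the diff (the function itself is illustrative, not kernel code):

```c
#include <stddef.h>

/* Map an ARM CPUID part number to the PMU driver name that
 * init_hw_perf_events() would select. Returns NULL when no
 * hardware support is available, mirroring the fall-through
 * pr_info() path in the diff. */
const char *arm_part_to_pmu(unsigned int part_number)
{
	switch (part_number) {
	case 0xB360: /* ARM1136 */
	case 0xB560: /* ARM1156 */
	case 0xB760: /* ARM1176 */
		return "v6";
	case 0xB020: /* ARM11mpcore */
		return "v6mpcore";
	case 0xC080: /* Cortex-A8 */
		return "ARMv7 Cortex-A8";
	case 0xC090: /* Cortex-A9 */
		return "ARMv7 Cortex-A9";
	case 0xC050: /* Cortex-A5 */
		return "ARMv7 Cortex-A5";
	case 0xC0F0: /* Cortex-A15 */
		return "ARMv7 Cortex-A15";
	default:
		return NULL; /* "no hardware support available" */
	}
}
```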
+59 -28
arch/arm/kernel/perf_event_v6.c
··· 54 54 }; 55 55 56 56 enum armv6_counters { 57 - ARMV6_CYCLE_COUNTER = 1, 57 + ARMV6_CYCLE_COUNTER = 0, 58 58 ARMV6_COUNTER0, 59 59 ARMV6_COUNTER1, 60 60 }; ··· 433 433 int idx) 434 434 { 435 435 unsigned long val, mask, evt, flags; 436 + struct pmu_hw_events *events = cpu_pmu->get_hw_events(); 436 437 437 438 if (ARMV6_CYCLE_COUNTER == idx) { 438 439 mask = 0; ··· 455 454 * Mask out the current event and set the counter to count the event 456 455 * that we're interested in. 457 456 */ 458 - raw_spin_lock_irqsave(&pmu_lock, flags); 457 + raw_spin_lock_irqsave(&events->pmu_lock, flags); 459 458 val = armv6_pmcr_read(); 460 459 val &= ~mask; 461 460 val |= evt; 462 461 armv6_pmcr_write(val); 463 - raw_spin_unlock_irqrestore(&pmu_lock, flags); 462 + raw_spin_unlock_irqrestore(&events->pmu_lock, flags); 463 + } 464 + 465 + static int counter_is_active(unsigned long pmcr, int idx) 466 + { 467 + unsigned long mask = 0; 468 + if (idx == ARMV6_CYCLE_COUNTER) 469 + mask = ARMV6_PMCR_CCOUNT_IEN; 470 + else if (idx == ARMV6_COUNTER0) 471 + mask = ARMV6_PMCR_COUNT0_IEN; 472 + else if (idx == ARMV6_COUNTER1) 473 + mask = ARMV6_PMCR_COUNT1_IEN; 474 + 475 + if (mask) 476 + return pmcr & mask; 477 + 478 + WARN_ONCE(1, "invalid counter number (%d)\n", idx); 479 + return 0; 464 480 } 465 481 466 482 static irqreturn_t ··· 486 468 { 487 469 unsigned long pmcr = armv6_pmcr_read(); 488 470 struct perf_sample_data data; 489 - struct cpu_hw_events *cpuc; 471 + struct pmu_hw_events *cpuc; 490 472 struct pt_regs *regs; 491 473 int idx; 492 474 ··· 505 487 perf_sample_data_init(&data, 0); 506 488 507 489 cpuc = &__get_cpu_var(cpu_hw_events); 508 - for (idx = 0; idx <= armpmu->num_events; ++idx) { 490 + for (idx = 0; idx < cpu_pmu->num_events; ++idx) { 509 491 struct perf_event *event = cpuc->events[idx]; 510 492 struct hw_perf_event *hwc; 511 493 512 - if (!test_bit(idx, cpuc->active_mask)) 494 + if (!counter_is_active(pmcr, idx)) 513 495 continue; 514 496 515 497 /* ··· 526 508 
continue; 527 509 528 510 if (perf_event_overflow(event, &data, regs)) 529 - armpmu->disable(hwc, idx); 511 + cpu_pmu->disable(hwc, idx); 530 512 } 531 513 532 514 /* ··· 545 527 armv6pmu_start(void) 546 528 { 547 529 unsigned long flags, val; 530 + struct pmu_hw_events *events = cpu_pmu->get_hw_events(); 548 531 549 - raw_spin_lock_irqsave(&pmu_lock, flags); 532 + raw_spin_lock_irqsave(&events->pmu_lock, flags); 550 533 val = armv6_pmcr_read(); 551 534 val |= ARMV6_PMCR_ENABLE; 552 535 armv6_pmcr_write(val); 553 - raw_spin_unlock_irqrestore(&pmu_lock, flags); 536 + raw_spin_unlock_irqrestore(&events->pmu_lock, flags); 554 537 } 555 538 556 539 static void 557 540 armv6pmu_stop(void) 558 541 { 559 542 unsigned long flags, val; 543 + struct pmu_hw_events *events = cpu_pmu->get_hw_events(); 560 544 561 - raw_spin_lock_irqsave(&pmu_lock, flags); 545 + raw_spin_lock_irqsave(&events->pmu_lock, flags); 562 546 val = armv6_pmcr_read(); 563 547 val &= ~ARMV6_PMCR_ENABLE; 564 548 armv6_pmcr_write(val); 565 - raw_spin_unlock_irqrestore(&pmu_lock, flags); 549 + raw_spin_unlock_irqrestore(&events->pmu_lock, flags); 566 550 } 567 551 568 552 static int 569 - armv6pmu_get_event_idx(struct cpu_hw_events *cpuc, 553 + armv6pmu_get_event_idx(struct pmu_hw_events *cpuc, 570 554 struct hw_perf_event *event) 571 555 { 572 556 /* Always place a cycle counter into the cycle counter. */ ··· 598 578 int idx) 599 579 { 600 580 unsigned long val, mask, evt, flags; 581 + struct pmu_hw_events *events = cpu_pmu->get_hw_events(); 601 582 602 583 if (ARMV6_CYCLE_COUNTER == idx) { 603 584 mask = ARMV6_PMCR_CCOUNT_IEN; ··· 619 598 * of ETM bus signal assertion cycles. The external reporting should 620 599 * be disabled and so this should never increment. 
621 600 */ 622 - raw_spin_lock_irqsave(&pmu_lock, flags); 601 + raw_spin_lock_irqsave(&events->pmu_lock, flags); 623 602 val = armv6_pmcr_read(); 624 603 val &= ~mask; 625 604 val |= evt; 626 605 armv6_pmcr_write(val); 627 - raw_spin_unlock_irqrestore(&pmu_lock, flags); 606 + raw_spin_unlock_irqrestore(&events->pmu_lock, flags); 628 607 } 629 608 630 609 static void ··· 632 611 int idx) 633 612 { 634 613 unsigned long val, mask, flags, evt = 0; 614 + struct pmu_hw_events *events = cpu_pmu->get_hw_events(); 635 615 636 616 if (ARMV6_CYCLE_COUNTER == idx) { 637 617 mask = ARMV6_PMCR_CCOUNT_IEN; ··· 649 627 * Unlike UP ARMv6, we don't have a way of stopping the counters. We 650 628 * simply disable the interrupt reporting. 651 629 */ 652 - raw_spin_lock_irqsave(&pmu_lock, flags); 630 + raw_spin_lock_irqsave(&events->pmu_lock, flags); 653 631 val = armv6_pmcr_read(); 654 632 val &= ~mask; 655 633 val |= evt; 656 634 armv6_pmcr_write(val); 657 - raw_spin_unlock_irqrestore(&pmu_lock, flags); 635 + raw_spin_unlock_irqrestore(&events->pmu_lock, flags); 658 636 } 659 637 660 - static const struct arm_pmu armv6pmu = { 638 + static int armv6_map_event(struct perf_event *event) 639 + { 640 + return map_cpu_event(event, &armv6_perf_map, 641 + &armv6_perf_cache_map, 0xFF); 642 + } 643 + 644 + static struct arm_pmu armv6pmu = { 661 645 .id = ARM_PERF_PMU_ID_V6, 662 646 .name = "v6", 663 647 .handle_irq = armv6pmu_handle_irq, ··· 674 646 .get_event_idx = armv6pmu_get_event_idx, 675 647 .start = armv6pmu_start, 676 648 .stop = armv6pmu_stop, 677 - .cache_map = &armv6_perf_cache_map, 678 - .event_map = &armv6_perf_map, 679 - .raw_event_mask = 0xFF, 649 + .map_event = armv6_map_event, 680 650 .num_events = 3, 681 651 .max_period = (1LLU << 32) - 1, 682 652 }; 683 653 684 - static const struct arm_pmu *__init armv6pmu_init(void) 654 + static struct arm_pmu *__init armv6pmu_init(void) 685 655 { 686 656 return &armv6pmu; 687 657 } ··· 691 665 * disable the interrupt reporting and update 
the event. When unthrottling we 692 666 * reset the period and enable the interrupt reporting. 693 667 */ 694 - static const struct arm_pmu armv6mpcore_pmu = { 668 + 669 + static int armv6mpcore_map_event(struct perf_event *event) 670 + { 671 + return map_cpu_event(event, &armv6mpcore_perf_map, 672 + &armv6mpcore_perf_cache_map, 0xFF); 673 + } 674 + 675 + static struct arm_pmu armv6mpcore_pmu = { 695 676 .id = ARM_PERF_PMU_ID_V6MP, 696 677 .name = "v6mpcore", 697 678 .handle_irq = armv6pmu_handle_irq, ··· 709 676 .get_event_idx = armv6pmu_get_event_idx, 710 677 .start = armv6pmu_start, 711 678 .stop = armv6pmu_stop, 712 - .cache_map = &armv6mpcore_perf_cache_map, 713 - .event_map = &armv6mpcore_perf_map, 714 - .raw_event_mask = 0xFF, 679 + .map_event = armv6mpcore_map_event, 715 680 .num_events = 3, 716 681 .max_period = (1LLU << 32) - 1, 717 682 }; 718 683 719 - static const struct arm_pmu *__init armv6mpcore_pmu_init(void) 684 + static struct arm_pmu *__init armv6mpcore_pmu_init(void) 720 685 { 721 686 return &armv6mpcore_pmu; 722 687 } 723 688 #else 724 - static const struct arm_pmu *__init armv6pmu_init(void) 689 + static struct arm_pmu *__init armv6pmu_init(void) 725 690 { 726 691 return NULL; 727 692 } 728 693 729 - static const struct arm_pmu *__init armv6mpcore_pmu_init(void) 694 + static struct arm_pmu *__init armv6mpcore_pmu_init(void) 730 695 { 731 696 return NULL; 732 697 }
+201 -204
arch/arm/kernel/perf_event_v7.c
··· 17 17 */ 18 18 19 19 #ifdef CONFIG_CPU_V7 20 + 21 + static struct arm_pmu armv7pmu; 22 + 20 23 /* 21 24 * Common ARMv7 event types 22 25 * ··· 679 676 }; 680 677 681 678 /* 682 - * Perf Events counters 679 + * Perf Events' indices 683 680 */ 684 - enum armv7_counters { 685 - ARMV7_CYCLE_COUNTER = 1, /* Cycle counter */ 686 - ARMV7_COUNTER0 = 2, /* First event counter */ 687 - }; 681 + #define ARMV7_IDX_CYCLE_COUNTER 0 682 + #define ARMV7_IDX_COUNTER0 1 683 + #define ARMV7_IDX_COUNTER_LAST (ARMV7_IDX_CYCLE_COUNTER + cpu_pmu->num_events - 1) 688 684 689 - /* 690 - * The cycle counter is ARMV7_CYCLE_COUNTER. 691 - * The first event counter is ARMV7_COUNTER0. 692 - * The last event counter is (ARMV7_COUNTER0 + armpmu->num_events - 1). 693 - */ 694 - #define ARMV7_COUNTER_LAST (ARMV7_COUNTER0 + armpmu->num_events - 1) 685 + #define ARMV7_MAX_COUNTERS 32 686 + #define ARMV7_COUNTER_MASK (ARMV7_MAX_COUNTERS - 1) 695 687 696 688 /* 697 689 * ARMv7 low level PMNC access 698 690 */ 691 + 692 + /* 693 + * Perf Event to low level counters mapping 694 + */ 695 + #define ARMV7_IDX_TO_COUNTER(x) \ 696 + (((x) - ARMV7_IDX_COUNTER0) & ARMV7_COUNTER_MASK) 699 697 700 698 /* 701 699 * Per-CPU PMNC: config reg ··· 712 708 #define ARMV7_PMNC_MASK 0x3f /* Mask for writable bits */ 713 709 714 710 /* 715 - * Available counters 716 - */ 717 - #define ARMV7_CNT0 0 /* First event counter */ 718 - #define ARMV7_CCNT 31 /* Cycle counter */ 719 - 720 - /* Perf Event to low level counters mapping */ 721 - #define ARMV7_EVENT_CNT_TO_CNTx (ARMV7_COUNTER0 - ARMV7_CNT0) 722 - 723 - /* 724 - * CNTENS: counters enable reg 725 - */ 726 - #define ARMV7_CNTENS_P(idx) (1 << (idx - ARMV7_EVENT_CNT_TO_CNTx)) 727 - #define ARMV7_CNTENS_C (1 << ARMV7_CCNT) 728 - 729 - /* 730 - * CNTENC: counters disable reg 731 - */ 732 - #define ARMV7_CNTENC_P(idx) (1 << (idx - ARMV7_EVENT_CNT_TO_CNTx)) 733 - #define ARMV7_CNTENC_C (1 << ARMV7_CCNT) 734 - 735 - /* 736 - * INTENS: counters overflow interrupt enable reg 
737 - */ 738 - #define ARMV7_INTENS_P(idx) (1 << (idx - ARMV7_EVENT_CNT_TO_CNTx)) 739 - #define ARMV7_INTENS_C (1 << ARMV7_CCNT) 740 - 741 - /* 742 - * INTENC: counters overflow interrupt disable reg 743 - */ 744 - #define ARMV7_INTENC_P(idx) (1 << (idx - ARMV7_EVENT_CNT_TO_CNTx)) 745 - #define ARMV7_INTENC_C (1 << ARMV7_CCNT) 746 - 747 - /* 748 - * EVTSEL: Event selection reg 749 - */ 750 - #define ARMV7_EVTSEL_MASK 0xff /* Mask for writable bits */ 751 - 752 - /* 753 - * SELECT: Counter selection reg 754 - */ 755 - #define ARMV7_SELECT_MASK 0x1f /* Mask for writable bits */ 756 - 757 - /* 758 711 * FLAG: counters overflow flag status reg 759 712 */ 760 - #define ARMV7_FLAG_P(idx) (1 << (idx - ARMV7_EVENT_CNT_TO_CNTx)) 761 - #define ARMV7_FLAG_C (1 << ARMV7_CCNT) 762 713 #define ARMV7_FLAG_MASK 0xffffffff /* Mask for writable bits */ 763 714 #define ARMV7_OVERFLOWED_MASK ARMV7_FLAG_MASK 764 715 765 - static inline unsigned long armv7_pmnc_read(void) 716 + /* 717 + * PMXEVTYPER: Event selection reg 718 + */ 719 + #define ARMV7_EVTYPE_MASK 0xc00000ff /* Mask for writable bits */ 720 + #define ARMV7_EVTYPE_EVENT 0xff /* Mask for EVENT bits */ 721 + 722 + /* 723 + * Event filters for PMUv2 724 + */ 725 + #define ARMV7_EXCLUDE_PL1 (1 << 31) 726 + #define ARMV7_EXCLUDE_USER (1 << 30) 727 + #define ARMV7_INCLUDE_HYP (1 << 27) 728 + 729 + static inline u32 armv7_pmnc_read(void) 766 730 { 767 731 u32 val; 768 732 asm volatile("mrc p15, 0, %0, c9, c12, 0" : "=r"(val)); 769 733 return val; 770 734 } 771 735 772 - static inline void armv7_pmnc_write(unsigned long val) 736 + static inline void armv7_pmnc_write(u32 val) 773 737 { 774 738 val &= ARMV7_PMNC_MASK; 775 739 isb(); 776 740 asm volatile("mcr p15, 0, %0, c9, c12, 0" : : "r"(val)); 777 741 } 778 742 779 - static inline int armv7_pmnc_has_overflowed(unsigned long pmnc) 743 + static inline int armv7_pmnc_has_overflowed(u32 pmnc) 780 744 { 781 745 return pmnc & ARMV7_OVERFLOWED_MASK; 782 746 } 783 747 784 - static inline 
int armv7_pmnc_counter_has_overflowed(unsigned long pmnc, 785 - enum armv7_counters counter) 748 + static inline int armv7_pmnc_counter_valid(int idx) 749 + { 750 + return idx >= ARMV7_IDX_CYCLE_COUNTER && idx <= ARMV7_IDX_COUNTER_LAST; 751 + } 752 + 753 + static inline int armv7_pmnc_counter_has_overflowed(u32 pmnc, int idx) 786 754 { 787 755 int ret = 0; 756 + u32 counter; 788 757 789 - if (counter == ARMV7_CYCLE_COUNTER) 790 - ret = pmnc & ARMV7_FLAG_C; 791 - else if ((counter >= ARMV7_COUNTER0) && (counter <= ARMV7_COUNTER_LAST)) 792 - ret = pmnc & ARMV7_FLAG_P(counter); 793 - else 758 + if (!armv7_pmnc_counter_valid(idx)) { 794 759 pr_err("CPU%u checking wrong counter %d overflow status\n", 795 - smp_processor_id(), counter); 760 + smp_processor_id(), idx); 761 + } else { 762 + counter = ARMV7_IDX_TO_COUNTER(idx); 763 + ret = pmnc & BIT(counter); 764 + } 796 765 797 766 return ret; 798 767 } 799 768 800 - static inline int armv7_pmnc_select_counter(unsigned int idx) 769 + static inline int armv7_pmnc_select_counter(int idx) 801 770 { 802 - u32 val; 771 + u32 counter; 803 772 804 - if ((idx < ARMV7_COUNTER0) || (idx > ARMV7_COUNTER_LAST)) { 805 - pr_err("CPU%u selecting wrong PMNC counter" 806 - " %d\n", smp_processor_id(), idx); 807 - return -1; 773 + if (!armv7_pmnc_counter_valid(idx)) { 774 + pr_err("CPU%u selecting wrong PMNC counter %d\n", 775 + smp_processor_id(), idx); 776 + return -EINVAL; 808 777 } 809 778 810 - val = (idx - ARMV7_EVENT_CNT_TO_CNTx) & ARMV7_SELECT_MASK; 811 - asm volatile("mcr p15, 0, %0, c9, c12, 5" : : "r" (val)); 779 + counter = ARMV7_IDX_TO_COUNTER(idx); 780 + asm volatile("mcr p15, 0, %0, c9, c12, 5" : : "r" (counter)); 812 781 isb(); 813 782 814 783 return idx; ··· 789 812 790 813 static inline u32 armv7pmu_read_counter(int idx) 791 814 { 792 - unsigned long value = 0; 815 + u32 value = 0; 793 816 794 - if (idx == ARMV7_CYCLE_COUNTER) 795 - asm volatile("mrc p15, 0, %0, c9, c13, 0" : "=r" (value)); 796 - else if ((idx >= 
ARMV7_COUNTER0) && (idx <= ARMV7_COUNTER_LAST)) { 797 - if (armv7_pmnc_select_counter(idx) == idx) 798 - asm volatile("mrc p15, 0, %0, c9, c13, 2" 799 - : "=r" (value)); 800 - } else 817 + if (!armv7_pmnc_counter_valid(idx)) 801 818 pr_err("CPU%u reading wrong counter %d\n", 802 819 smp_processor_id(), idx); 820 + else if (idx == ARMV7_IDX_CYCLE_COUNTER) 821 + asm volatile("mrc p15, 0, %0, c9, c13, 0" : "=r" (value)); 822 + else if (armv7_pmnc_select_counter(idx) == idx) 823 + asm volatile("mrc p15, 0, %0, c9, c13, 2" : "=r" (value)); 803 824 804 825 return value; 805 826 } 806 827 807 828 static inline void armv7pmu_write_counter(int idx, u32 value) 808 829 { 809 - if (idx == ARMV7_CYCLE_COUNTER) 810 - asm volatile("mcr p15, 0, %0, c9, c13, 0" : : "r" (value)); 811 - else if ((idx >= ARMV7_COUNTER0) && (idx <= ARMV7_COUNTER_LAST)) { 812 - if (armv7_pmnc_select_counter(idx) == idx) 813 - asm volatile("mcr p15, 0, %0, c9, c13, 2" 814 - : : "r" (value)); 815 - } else 830 + if (!armv7_pmnc_counter_valid(idx)) 816 831 pr_err("CPU%u writing wrong counter %d\n", 817 832 smp_processor_id(), idx); 833 + else if (idx == ARMV7_IDX_CYCLE_COUNTER) 834 + asm volatile("mcr p15, 0, %0, c9, c13, 0" : : "r" (value)); 835 + else if (armv7_pmnc_select_counter(idx) == idx) 836 + asm volatile("mcr p15, 0, %0, c9, c13, 2" : : "r" (value)); 818 837 } 819 838 820 - static inline void armv7_pmnc_write_evtsel(unsigned int idx, u32 val) 839 + static inline void armv7_pmnc_write_evtsel(int idx, u32 val) 821 840 { 822 841 if (armv7_pmnc_select_counter(idx) == idx) { 823 - val &= ARMV7_EVTSEL_MASK; 842 + val &= ARMV7_EVTYPE_MASK; 824 843 asm volatile("mcr p15, 0, %0, c9, c13, 1" : : "r" (val)); 825 844 } 826 845 } 827 846 828 - static inline u32 armv7_pmnc_enable_counter(unsigned int idx) 847 + static inline int armv7_pmnc_enable_counter(int idx) 829 848 { 830 - u32 val; 849 + u32 counter; 831 850 832 - if ((idx != ARMV7_CYCLE_COUNTER) && 833 - ((idx < ARMV7_COUNTER0) || (idx > 
ARMV7_COUNTER_LAST))) { 834 - pr_err("CPU%u enabling wrong PMNC counter" 835 - " %d\n", smp_processor_id(), idx); 836 - return -1; 851 + if (!armv7_pmnc_counter_valid(idx)) { 852 + pr_err("CPU%u enabling wrong PMNC counter %d\n", 853 + smp_processor_id(), idx); 854 + return -EINVAL; 837 855 } 838 856 839 - if (idx == ARMV7_CYCLE_COUNTER) 840 - val = ARMV7_CNTENS_C; 841 - else 842 - val = ARMV7_CNTENS_P(idx); 843 - 844 - asm volatile("mcr p15, 0, %0, c9, c12, 1" : : "r" (val)); 845 - 857 + counter = ARMV7_IDX_TO_COUNTER(idx); 858 + asm volatile("mcr p15, 0, %0, c9, c12, 1" : : "r" (BIT(counter))); 846 859 return idx; 847 860 } 848 861 849 - static inline u32 armv7_pmnc_disable_counter(unsigned int idx) 862 + static inline int armv7_pmnc_disable_counter(int idx) 850 863 { 851 - u32 val; 864 + u32 counter; 852 865 853 - 854 - if ((idx != ARMV7_CYCLE_COUNTER) && 855 - ((idx < ARMV7_COUNTER0) || (idx > ARMV7_COUNTER_LAST))) { 856 - pr_err("CPU%u disabling wrong PMNC counter" 857 - " %d\n", smp_processor_id(), idx); 858 - return -1; 866 + if (!armv7_pmnc_counter_valid(idx)) { 867 + pr_err("CPU%u disabling wrong PMNC counter %d\n", 868 + smp_processor_id(), idx); 869 + return -EINVAL; 859 870 } 860 871 861 - if (idx == ARMV7_CYCLE_COUNTER) 862 - val = ARMV7_CNTENC_C; 863 - else 864 - val = ARMV7_CNTENC_P(idx); 865 - 866 - asm volatile("mcr p15, 0, %0, c9, c12, 2" : : "r" (val)); 867 - 872 + counter = ARMV7_IDX_TO_COUNTER(idx); 873 + asm volatile("mcr p15, 0, %0, c9, c12, 2" : : "r" (BIT(counter))); 868 874 return idx; 869 875 } 870 876 871 - static inline u32 armv7_pmnc_enable_intens(unsigned int idx) 877 + static inline int armv7_pmnc_enable_intens(int idx) 872 878 { 873 - u32 val; 879 + u32 counter; 874 880 875 - if ((idx != ARMV7_CYCLE_COUNTER) && 876 - ((idx < ARMV7_COUNTER0) || (idx > ARMV7_COUNTER_LAST))) { 877 - pr_err("CPU%u enabling wrong PMNC counter" 878 - " interrupt enable %d\n", smp_processor_id(), idx); 879 - return -1; 881 + if 
(!armv7_pmnc_counter_valid(idx)) { 882 + pr_err("CPU%u enabling wrong PMNC counter IRQ enable %d\n", 883 + smp_processor_id(), idx); 884 + return -EINVAL; 880 885 } 881 886 882 - if (idx == ARMV7_CYCLE_COUNTER) 883 - val = ARMV7_INTENS_C; 884 - else 885 - val = ARMV7_INTENS_P(idx); 886 - 887 - asm volatile("mcr p15, 0, %0, c9, c14, 1" : : "r" (val)); 888 - 887 + counter = ARMV7_IDX_TO_COUNTER(idx); 888 + asm volatile("mcr p15, 0, %0, c9, c14, 1" : : "r" (BIT(counter))); 889 889 return idx; 890 890 } 891 891 892 - static inline u32 armv7_pmnc_disable_intens(unsigned int idx) 892 + static inline int armv7_pmnc_disable_intens(int idx) 893 893 { 894 - u32 val; 894 + u32 counter; 895 895 896 - if ((idx != ARMV7_CYCLE_COUNTER) && 897 - ((idx < ARMV7_COUNTER0) || (idx > ARMV7_COUNTER_LAST))) { 898 - pr_err("CPU%u disabling wrong PMNC counter" 899 - " interrupt enable %d\n", smp_processor_id(), idx); 900 - return -1; 896 + if (!armv7_pmnc_counter_valid(idx)) { 897 + pr_err("CPU%u disabling wrong PMNC counter IRQ enable %d\n", 898 + smp_processor_id(), idx); 899 + return -EINVAL; 901 900 } 902 901 903 - if (idx == ARMV7_CYCLE_COUNTER) 904 - val = ARMV7_INTENC_C; 905 - else 906 - val = ARMV7_INTENC_P(idx); 907 - 908 - asm volatile("mcr p15, 0, %0, c9, c14, 2" : : "r" (val)); 909 - 902 + counter = ARMV7_IDX_TO_COUNTER(idx); 903 + asm volatile("mcr p15, 0, %0, c9, c14, 2" : : "r" (BIT(counter))); 910 904 return idx; 911 905 } 912 906 ··· 921 973 asm volatile("mrc p15, 0, %0, c9, c13, 0" : "=r" (val)); 922 974 printk(KERN_INFO "CCNT =0x%08x\n", val); 923 975 924 - for (cnt = ARMV7_COUNTER0; cnt < ARMV7_COUNTER_LAST; cnt++) { 976 + for (cnt = ARMV7_IDX_COUNTER0; cnt <= ARMV7_IDX_COUNTER_LAST; cnt++) { 925 977 armv7_pmnc_select_counter(cnt); 926 978 asm volatile("mrc p15, 0, %0, c9, c13, 2" : "=r" (val)); 927 979 printk(KERN_INFO "CNT[%d] count =0x%08x\n", 928 - cnt-ARMV7_EVENT_CNT_TO_CNTx, val); 980 + ARMV7_IDX_TO_COUNTER(cnt), val); 929 981 asm volatile("mrc p15, 0, %0, c9, 
c13, 1" : "=r" (val)); 930 982 printk(KERN_INFO "CNT[%d] evtsel=0x%08x\n", 931 - cnt-ARMV7_EVENT_CNT_TO_CNTx, val); 983 + ARMV7_IDX_TO_COUNTER(cnt), val); 932 984 } 933 985 } 934 986 #endif ··· 936 988 static void armv7pmu_enable_event(struct hw_perf_event *hwc, int idx) 937 989 { 938 990 unsigned long flags; 991 + struct pmu_hw_events *events = cpu_pmu->get_hw_events(); 939 992 940 993 /* 941 994 * Enable counter and interrupt, and set the counter to count 942 995 * the event that we're interested in. 943 996 */ 944 - raw_spin_lock_irqsave(&pmu_lock, flags); 997 + raw_spin_lock_irqsave(&events->pmu_lock, flags); 945 998 946 999 /* 947 1000 * Disable counter ··· 951 1002 952 1003 /* 953 1004 * Set event (if destined for PMNx counters) 954 - * We don't need to set the event if it's a cycle count 1005 + * We only need to set the event for the cycle counter if we 1006 + * have the ability to perform event filtering. 955 1007 */ 956 - if (idx != ARMV7_CYCLE_COUNTER) 1008 + if (armv7pmu.set_event_filter || idx != ARMV7_IDX_CYCLE_COUNTER) 957 1009 armv7_pmnc_write_evtsel(idx, hwc->config_base); 958 1010 959 1011 /* ··· 967 1017 */ 968 1018 armv7_pmnc_enable_counter(idx); 969 1019 970 - raw_spin_unlock_irqrestore(&pmu_lock, flags); 1020 + raw_spin_unlock_irqrestore(&events->pmu_lock, flags); 971 1021 } 972 1022 973 1023 static void armv7pmu_disable_event(struct hw_perf_event *hwc, int idx) 974 1024 { 975 1025 unsigned long flags; 1026 + struct pmu_hw_events *events = cpu_pmu->get_hw_events(); 976 1027 977 1028 /* 978 1029 * Disable counter and interrupt 979 1030 */ 980 - raw_spin_lock_irqsave(&pmu_lock, flags); 1031 + raw_spin_lock_irqsave(&events->pmu_lock, flags); 981 1032 982 1033 /* 983 1034 * Disable counter ··· 990 1039 */ 991 1040 armv7_pmnc_disable_intens(idx); 992 1041 993 - raw_spin_unlock_irqrestore(&pmu_lock, flags); 1042 + raw_spin_unlock_irqrestore(&events->pmu_lock, flags); 994 1043 } 995 1044 996 1045 static irqreturn_t armv7pmu_handle_irq(int irq_num, 
void *dev) 997 1046 { 998 - unsigned long pmnc; 1047 + u32 pmnc; 999 1048 struct perf_sample_data data; 1000 - struct cpu_hw_events *cpuc; 1049 + struct pmu_hw_events *cpuc; 1001 1050 struct pt_regs *regs; 1002 1051 int idx; 1003 1052 ··· 1020 1069 perf_sample_data_init(&data, 0); 1021 1070 1022 1071 cpuc = &__get_cpu_var(cpu_hw_events); 1023 - for (idx = 0; idx <= armpmu->num_events; ++idx) { 1072 + for (idx = 0; idx < cpu_pmu->num_events; ++idx) { 1024 1073 struct perf_event *event = cpuc->events[idx]; 1025 1074 struct hw_perf_event *hwc; 1026 - 1027 - if (!test_bit(idx, cpuc->active_mask)) 1028 - continue; 1029 1075 1030 1076 /* 1031 1077 * We have a single interrupt for all counters. Check that ··· 1038 1090 continue; 1039 1091 1040 1092 if (perf_event_overflow(event, &data, regs)) 1041 - armpmu->disable(hwc, idx); 1093 + cpu_pmu->disable(hwc, idx); 1042 1094 } 1043 1095 1044 1096 /* ··· 1056 1108 static void armv7pmu_start(void) 1057 1109 { 1058 1110 unsigned long flags; 1111 + struct pmu_hw_events *events = cpu_pmu->get_hw_events(); 1059 1112 1060 - raw_spin_lock_irqsave(&pmu_lock, flags); 1113 + raw_spin_lock_irqsave(&events->pmu_lock, flags); 1061 1114 /* Enable all counters */ 1062 1115 armv7_pmnc_write(armv7_pmnc_read() | ARMV7_PMNC_E); 1063 - raw_spin_unlock_irqrestore(&pmu_lock, flags); 1116 + raw_spin_unlock_irqrestore(&events->pmu_lock, flags); 1064 1117 } 1065 1118 1066 1119 static void armv7pmu_stop(void) 1067 1120 { 1068 1121 unsigned long flags; 1122 + struct pmu_hw_events *events = cpu_pmu->get_hw_events(); 1069 1123 1070 - raw_spin_lock_irqsave(&pmu_lock, flags); 1124 + raw_spin_lock_irqsave(&events->pmu_lock, flags); 1071 1125 /* Disable all counters */ 1072 1126 armv7_pmnc_write(armv7_pmnc_read() & ~ARMV7_PMNC_E); 1073 - raw_spin_unlock_irqrestore(&pmu_lock, flags); 1127 + raw_spin_unlock_irqrestore(&events->pmu_lock, flags); 1074 1128 } 1075 1129 1076 - static int armv7pmu_get_event_idx(struct cpu_hw_events *cpuc, 1130 + static int 
armv7pmu_get_event_idx(struct pmu_hw_events *cpuc, 1077 1131 struct hw_perf_event *event) 1078 1132 { 1079 1133 int idx; 1134 + unsigned long evtype = event->config_base & ARMV7_EVTYPE_EVENT; 1080 1135 1081 1136 /* Always place a cycle counter into the cycle counter. */ 1082 - if (event->config_base == ARMV7_PERFCTR_CPU_CYCLES) { 1083 - if (test_and_set_bit(ARMV7_CYCLE_COUNTER, cpuc->used_mask)) 1137 + if (evtype == ARMV7_PERFCTR_CPU_CYCLES) { 1138 + if (test_and_set_bit(ARMV7_IDX_CYCLE_COUNTER, cpuc->used_mask)) 1084 1139 return -EAGAIN; 1085 1140 1086 - return ARMV7_CYCLE_COUNTER; 1087 - } else { 1088 - /* 1089 - * For anything other than a cycle counter, try and use 1090 - * the events counters 1091 - */ 1092 - for (idx = ARMV7_COUNTER0; idx <= armpmu->num_events; ++idx) { 1093 - if (!test_and_set_bit(idx, cpuc->used_mask)) 1094 - return idx; 1095 - } 1096 - 1097 - /* The counters are all in use. */ 1098 - return -EAGAIN; 1141 + return ARMV7_IDX_CYCLE_COUNTER; 1099 1142 } 1143 + 1144 + /* 1145 + * For anything other than a cycle counter, try and use 1146 + * the events counters 1147 + */ 1148 + for (idx = ARMV7_IDX_COUNTER0; idx < cpu_pmu->num_events; ++idx) { 1149 + if (!test_and_set_bit(idx, cpuc->used_mask)) 1150 + return idx; 1151 + } 1152 + 1153 + /* The counters are all in use. */ 1154 + return -EAGAIN; 1155 + } 1156 + 1157 + /* 1158 + * Add an event filter to a given event. This will only work for PMUv2 PMUs. 
1159 + */ 1160 + static int armv7pmu_set_event_filter(struct hw_perf_event *event, 1161 + struct perf_event_attr *attr) 1162 + { 1163 + unsigned long config_base = 0; 1164 + 1165 + if (attr->exclude_idle) 1166 + return -EPERM; 1167 + if (attr->exclude_user) 1168 + config_base |= ARMV7_EXCLUDE_USER; 1169 + if (attr->exclude_kernel) 1170 + config_base |= ARMV7_EXCLUDE_PL1; 1171 + if (!attr->exclude_hv) 1172 + config_base |= ARMV7_INCLUDE_HYP; 1173 + 1174 + /* 1175 + * Install the filter into config_base as this is used to 1176 + * construct the event type. 1177 + */ 1178 + event->config_base = config_base; 1179 + 1180 + return 0; 1100 1181 } 1101 1182 1102 1183 static void armv7pmu_reset(void *info) 1103 1184 { 1104 - u32 idx, nb_cnt = armpmu->num_events; 1185 + u32 idx, nb_cnt = cpu_pmu->num_events; 1105 1186 1106 1187 /* The counter and interrupt enable registers are unknown at reset. */ 1107 - for (idx = 1; idx < nb_cnt; ++idx) 1188 + for (idx = ARMV7_IDX_CYCLE_COUNTER; idx < nb_cnt; ++idx) 1108 1189 armv7pmu_disable_event(NULL, idx); 1109 1190 1110 1191 /* Initialize & Reset PMNC: C and P bits */ 1111 1192 armv7_pmnc_write(ARMV7_PMNC_P | ARMV7_PMNC_C); 1193 + } 1194 + 1195 + static int armv7_a8_map_event(struct perf_event *event) 1196 + { 1197 + return map_cpu_event(event, &armv7_a8_perf_map, 1198 + &armv7_a8_perf_cache_map, 0xFF); 1199 + } 1200 + 1201 + static int armv7_a9_map_event(struct perf_event *event) 1202 + { 1203 + return map_cpu_event(event, &armv7_a9_perf_map, 1204 + &armv7_a9_perf_cache_map, 0xFF); 1205 + } 1206 + 1207 + static int armv7_a5_map_event(struct perf_event *event) 1208 + { 1209 + return map_cpu_event(event, &armv7_a5_perf_map, 1210 + &armv7_a5_perf_cache_map, 0xFF); 1211 + } 1212 + 1213 + static int armv7_a15_map_event(struct perf_event *event) 1214 + { 1215 + return map_cpu_event(event, &armv7_a15_perf_map, 1216 + &armv7_a15_perf_cache_map, 0xFF); 1112 1217 } 1113 1218 1114 1219 static struct arm_pmu armv7pmu = { ··· 1174 1173 .start = 
armv7pmu_start, 1175 1174 .stop = armv7pmu_stop, 1176 1175 .reset = armv7pmu_reset, 1177 - .raw_event_mask = 0xFF, 1178 1176 .max_period = (1LLU << 32) - 1, 1179 1177 }; 1180 1178 ··· 1188 1188 return nb_cnt + 1; 1189 1189 } 1190 1190 1191 - static const struct arm_pmu *__init armv7_a8_pmu_init(void) 1191 + static struct arm_pmu *__init armv7_a8_pmu_init(void) 1192 1192 { 1193 1193 armv7pmu.id = ARM_PERF_PMU_ID_CA8; 1194 1194 armv7pmu.name = "ARMv7 Cortex-A8"; 1195 - armv7pmu.cache_map = &armv7_a8_perf_cache_map; 1196 - armv7pmu.event_map = &armv7_a8_perf_map; 1195 + armv7pmu.map_event = armv7_a8_map_event; 1197 1196 armv7pmu.num_events = armv7_read_num_pmnc_events(); 1198 1197 return &armv7pmu; 1199 1198 } 1200 1199 1201 - static const struct arm_pmu *__init armv7_a9_pmu_init(void) 1200 + static struct arm_pmu *__init armv7_a9_pmu_init(void) 1202 1201 { 1203 1202 armv7pmu.id = ARM_PERF_PMU_ID_CA9; 1204 1203 armv7pmu.name = "ARMv7 Cortex-A9"; 1205 - armv7pmu.cache_map = &armv7_a9_perf_cache_map; 1206 - armv7pmu.event_map = &armv7_a9_perf_map; 1204 + armv7pmu.map_event = armv7_a9_map_event; 1207 1205 armv7pmu.num_events = armv7_read_num_pmnc_events(); 1208 1206 return &armv7pmu; 1209 1207 } 1210 1208 1211 - static const struct arm_pmu *__init armv7_a5_pmu_init(void) 1209 + static struct arm_pmu *__init armv7_a5_pmu_init(void) 1212 1210 { 1213 1211 armv7pmu.id = ARM_PERF_PMU_ID_CA5; 1214 1212 armv7pmu.name = "ARMv7 Cortex-A5"; 1215 - armv7pmu.cache_map = &armv7_a5_perf_cache_map; 1216 - armv7pmu.event_map = &armv7_a5_perf_map; 1213 + armv7pmu.map_event = armv7_a5_map_event; 1217 1214 armv7pmu.num_events = armv7_read_num_pmnc_events(); 1218 1215 return &armv7pmu; 1219 1216 } 1220 1217 1221 - static const struct arm_pmu *__init armv7_a15_pmu_init(void) 1218 + static struct arm_pmu *__init armv7_a15_pmu_init(void) 1222 1219 { 1223 1220 armv7pmu.id = ARM_PERF_PMU_ID_CA15; 1224 1221 armv7pmu.name = "ARMv7 Cortex-A15"; 1225 - armv7pmu.cache_map = &armv7_a15_perf_cache_map; 
1226 - armv7pmu.event_map = &armv7_a15_perf_map; 1222 + armv7pmu.map_event = armv7_a15_map_event; 1227 1223 armv7pmu.num_events = armv7_read_num_pmnc_events(); 1224 + armv7pmu.set_event_filter = armv7pmu_set_event_filter; 1228 1225 return &armv7pmu; 1229 1226 } 1230 1227 #else 1231 - static const struct arm_pmu *__init armv7_a8_pmu_init(void) 1228 + static struct arm_pmu *__init armv7_a8_pmu_init(void) 1232 1229 { 1233 1230 return NULL; 1234 1231 } 1235 1232 1236 - static const struct arm_pmu *__init armv7_a9_pmu_init(void) 1233 + static struct arm_pmu *__init armv7_a9_pmu_init(void) 1237 1234 { 1238 1235 return NULL; 1239 1236 } 1240 1237 1241 - static const struct arm_pmu *__init armv7_a5_pmu_init(void) 1238 + static struct arm_pmu *__init armv7_a5_pmu_init(void) 1242 1239 { 1243 1240 return NULL; 1244 1241 } 1245 1242 1246 - static const struct arm_pmu *__init armv7_a15_pmu_init(void) 1243 + static struct arm_pmu *__init armv7_a15_pmu_init(void) 1247 1244 { 1248 1245 return NULL; 1249 1246 }
+47 -43
arch/arm/kernel/perf_event_xscale.c
··· 40 40 }; 41 41 42 42 enum xscale_counters { 43 - XSCALE_CYCLE_COUNTER = 1, 43 + XSCALE_CYCLE_COUNTER = 0, 44 44 XSCALE_COUNTER0, 45 45 XSCALE_COUNTER1, 46 46 XSCALE_COUNTER2, ··· 222 222 { 223 223 unsigned long pmnc; 224 224 struct perf_sample_data data; 225 - struct cpu_hw_events *cpuc; 225 + struct pmu_hw_events *cpuc; 226 226 struct pt_regs *regs; 227 227 int idx; 228 228 ··· 249 249 perf_sample_data_init(&data, 0); 250 250 251 251 cpuc = &__get_cpu_var(cpu_hw_events); 252 - for (idx = 0; idx <= armpmu->num_events; ++idx) { 252 + for (idx = 0; idx < cpu_pmu->num_events; ++idx) { 253 253 struct perf_event *event = cpuc->events[idx]; 254 254 struct hw_perf_event *hwc; 255 - 256 - if (!test_bit(idx, cpuc->active_mask)) 257 - continue; 258 255 259 256 if (!xscale1_pmnc_counter_has_overflowed(pmnc, idx)) 260 257 continue; ··· 263 266 continue; 264 267 265 268 if (perf_event_overflow(event, &data, regs)) 266 - armpmu->disable(hwc, idx); 269 + cpu_pmu->disable(hwc, idx); 267 270 } 268 271 269 272 irq_work_run(); ··· 281 284 xscale1pmu_enable_event(struct hw_perf_event *hwc, int idx) 282 285 { 283 286 unsigned long val, mask, evt, flags; 287 + struct pmu_hw_events *events = cpu_pmu->get_hw_events(); 284 288 285 289 switch (idx) { 286 290 case XSCALE_CYCLE_COUNTER: ··· 303 305 return; 304 306 } 305 307 306 - raw_spin_lock_irqsave(&pmu_lock, flags); 308 + raw_spin_lock_irqsave(&events->pmu_lock, flags); 307 309 val = xscale1pmu_read_pmnc(); 308 310 val &= ~mask; 309 311 val |= evt; 310 312 xscale1pmu_write_pmnc(val); 311 - raw_spin_unlock_irqrestore(&pmu_lock, flags); 313 + raw_spin_unlock_irqrestore(&events->pmu_lock, flags); 312 314 } 313 315 314 316 static void 315 317 xscale1pmu_disable_event(struct hw_perf_event *hwc, int idx) 316 318 { 317 319 unsigned long val, mask, evt, flags; 320 + struct pmu_hw_events *events = cpu_pmu->get_hw_events(); 318 321 319 322 switch (idx) { 320 323 case XSCALE_CYCLE_COUNTER: ··· 335 336 return; 336 337 } 337 338 338 - 
raw_spin_lock_irqsave(&pmu_lock, flags); 339 + raw_spin_lock_irqsave(&events->pmu_lock, flags); 339 340 val = xscale1pmu_read_pmnc(); 340 341 val &= ~mask; 341 342 val |= evt; 342 343 xscale1pmu_write_pmnc(val); 343 - raw_spin_unlock_irqrestore(&pmu_lock, flags); 344 + raw_spin_unlock_irqrestore(&events->pmu_lock, flags); 344 345 } 345 346 346 347 static int 347 - xscale1pmu_get_event_idx(struct cpu_hw_events *cpuc, 348 + xscale1pmu_get_event_idx(struct pmu_hw_events *cpuc, 348 349 struct hw_perf_event *event) 349 350 { 350 351 if (XSCALE_PERFCTR_CCNT == event->config_base) { ··· 367 368 xscale1pmu_start(void) 368 369 { 369 370 unsigned long flags, val; 371 + struct pmu_hw_events *events = cpu_pmu->get_hw_events(); 370 372 371 - raw_spin_lock_irqsave(&pmu_lock, flags); 373 + raw_spin_lock_irqsave(&events->pmu_lock, flags); 372 374 val = xscale1pmu_read_pmnc(); 373 375 val |= XSCALE_PMU_ENABLE; 374 376 xscale1pmu_write_pmnc(val); 375 - raw_spin_unlock_irqrestore(&pmu_lock, flags); 377 + raw_spin_unlock_irqrestore(&events->pmu_lock, flags); 376 378 } 377 379 378 380 static void 379 381 xscale1pmu_stop(void) 380 382 { 381 383 unsigned long flags, val; 384 + struct pmu_hw_events *events = cpu_pmu->get_hw_events(); 382 385 383 - raw_spin_lock_irqsave(&pmu_lock, flags); 386 + raw_spin_lock_irqsave(&events->pmu_lock, flags); 384 387 val = xscale1pmu_read_pmnc(); 385 388 val &= ~XSCALE_PMU_ENABLE; 386 389 xscale1pmu_write_pmnc(val); 387 - raw_spin_unlock_irqrestore(&pmu_lock, flags); 390 + raw_spin_unlock_irqrestore(&events->pmu_lock, flags); 388 391 } 389 392 390 393 static inline u32 ··· 425 424 } 426 425 } 427 426 428 - static const struct arm_pmu xscale1pmu = { 427 + static int xscale_map_event(struct perf_event *event) 428 + { 429 + return map_cpu_event(event, &xscale_perf_map, 430 + &xscale_perf_cache_map, 0xFF); 431 + } 432 + 433 + static struct arm_pmu xscale1pmu = { 429 434 .id = ARM_PERF_PMU_ID_XSCALE1, 430 435 .name = "xscale1", 431 436 .handle_irq = 
xscale1pmu_handle_irq, ··· 442 435 .get_event_idx = xscale1pmu_get_event_idx, 443 436 .start = xscale1pmu_start, 444 437 .stop = xscale1pmu_stop, 445 - .cache_map = &xscale_perf_cache_map, 446 - .event_map = &xscale_perf_map, 447 - .raw_event_mask = 0xFF, 438 + .map_event = xscale_map_event, 448 439 .num_events = 3, 449 440 .max_period = (1LLU << 32) - 1, 450 441 }; 451 442 452 - static const struct arm_pmu *__init xscale1pmu_init(void) 443 + static struct arm_pmu *__init xscale1pmu_init(void) 453 444 { 454 445 return &xscale1pmu; 455 446 } ··· 565 560 { 566 561 unsigned long pmnc, of_flags; 567 562 struct perf_sample_data data; 568 - struct cpu_hw_events *cpuc; 563 + struct pmu_hw_events *cpuc; 569 564 struct pt_regs *regs; 570 565 int idx; 571 566 ··· 586 581 perf_sample_data_init(&data, 0); 587 582 588 583 cpuc = &__get_cpu_var(cpu_hw_events); 589 - for (idx = 0; idx <= armpmu->num_events; ++idx) { 584 + for (idx = 0; idx < cpu_pmu->num_events; ++idx) { 590 585 struct perf_event *event = cpuc->events[idx]; 591 586 struct hw_perf_event *hwc; 592 - 593 - if (!test_bit(idx, cpuc->active_mask)) 594 - continue; 595 587 596 588 if (!xscale2_pmnc_counter_has_overflowed(pmnc, idx)) 597 589 continue; ··· 600 598 continue; 601 599 602 600 if (perf_event_overflow(event, &data, regs)) 603 - armpmu->disable(hwc, idx); 601 + cpu_pmu->disable(hwc, idx); 604 602 } 605 603 606 604 irq_work_run(); ··· 618 616 xscale2pmu_enable_event(struct hw_perf_event *hwc, int idx) 619 617 { 620 618 unsigned long flags, ien, evtsel; 619 + struct pmu_hw_events *events = cpu_pmu->get_hw_events(); 621 620 622 621 ien = xscale2pmu_read_int_enable(); 623 622 evtsel = xscale2pmu_read_event_select(); ··· 652 649 return; 653 650 } 654 651 655 - raw_spin_lock_irqsave(&pmu_lock, flags); 652 + raw_spin_lock_irqsave(&events->pmu_lock, flags); 656 653 xscale2pmu_write_event_select(evtsel); 657 654 xscale2pmu_write_int_enable(ien); 658 - raw_spin_unlock_irqrestore(&pmu_lock, flags); 655 + 
raw_spin_unlock_irqrestore(&events->pmu_lock, flags); 659 656 } 660 657 661 658 static void 662 659 xscale2pmu_disable_event(struct hw_perf_event *hwc, int idx) 663 660 { 664 661 unsigned long flags, ien, evtsel; 662 + struct pmu_hw_events *events = cpu_pmu->get_hw_events(); 665 663 666 664 ien = xscale2pmu_read_int_enable(); 667 665 evtsel = xscale2pmu_read_event_select(); ··· 696 692 return; 697 693 } 698 694 699 - raw_spin_lock_irqsave(&pmu_lock, flags); 695 + raw_spin_lock_irqsave(&events->pmu_lock, flags); 700 696 xscale2pmu_write_event_select(evtsel); 701 697 xscale2pmu_write_int_enable(ien); 702 - raw_spin_unlock_irqrestore(&pmu_lock, flags); 698 + raw_spin_unlock_irqrestore(&events->pmu_lock, flags); 703 699 } 704 700 705 701 static int 706 - xscale2pmu_get_event_idx(struct cpu_hw_events *cpuc, 702 + xscale2pmu_get_event_idx(struct pmu_hw_events *cpuc, 707 703 struct hw_perf_event *event) 708 704 { 709 705 int idx = xscale1pmu_get_event_idx(cpuc, event); ··· 722 718 xscale2pmu_start(void) 723 719 { 724 720 unsigned long flags, val; 721 + struct pmu_hw_events *events = cpu_pmu->get_hw_events(); 725 722 726 - raw_spin_lock_irqsave(&pmu_lock, flags); 723 + raw_spin_lock_irqsave(&events->pmu_lock, flags); 727 724 val = xscale2pmu_read_pmnc() & ~XSCALE_PMU_CNT64; 728 725 val |= XSCALE_PMU_ENABLE; 729 726 xscale2pmu_write_pmnc(val); 730 - raw_spin_unlock_irqrestore(&pmu_lock, flags); 727 + raw_spin_unlock_irqrestore(&events->pmu_lock, flags); 731 728 } 732 729 733 730 static void 734 731 xscale2pmu_stop(void) 735 732 { 736 733 unsigned long flags, val; 734 + struct pmu_hw_events *events = cpu_pmu->get_hw_events(); 737 735 738 - raw_spin_lock_irqsave(&pmu_lock, flags); 736 + raw_spin_lock_irqsave(&events->pmu_lock, flags); 739 737 val = xscale2pmu_read_pmnc(); 740 738 val &= ~XSCALE_PMU_ENABLE; 741 739 xscale2pmu_write_pmnc(val); 742 - raw_spin_unlock_irqrestore(&pmu_lock, flags); 740 + raw_spin_unlock_irqrestore(&events->pmu_lock, flags); 743 741 } 744 742 745 
743 static inline u32 ··· 792 786 } 793 787 } 794 788 795 - static const struct arm_pmu xscale2pmu = { 789 + static struct arm_pmu xscale2pmu = { 796 790 .id = ARM_PERF_PMU_ID_XSCALE2, 797 791 .name = "xscale2", 798 792 .handle_irq = xscale2pmu_handle_irq, ··· 803 797 .get_event_idx = xscale2pmu_get_event_idx, 804 798 .start = xscale2pmu_start, 805 799 .stop = xscale2pmu_stop, 806 - .cache_map = &xscale_perf_cache_map, 807 - .event_map = &xscale_perf_map, 808 - .raw_event_mask = 0xFF, 800 + .map_event = xscale_map_event, 809 801 .num_events = 5, 810 802 .max_period = (1LLU << 32) - 1, 811 803 }; 812 804 813 - static const struct arm_pmu *__init xscale2pmu_init(void) 805 + static struct arm_pmu *__init xscale2pmu_init(void) 814 806 { 815 807 return &xscale2pmu; 816 808 } 817 809 #else 818 - static const struct arm_pmu *__init xscale1pmu_init(void) 810 + static struct arm_pmu *__init xscale1pmu_init(void) 819 811 { 820 812 return NULL; 821 813 } 822 814 823 - static const struct arm_pmu *__init xscale2pmu_init(void) 815 + static struct arm_pmu *__init xscale2pmu_init(void) 824 816 { 825 817 return NULL; 826 818 }
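The xscale hunks above change `XSCALE_CYCLE_COUNTER` from 1 to 0 and the overflow-scan loop from `idx <= num_events` to `idx < num_events`, making the counter indices zero-based and fixing an off-by-one. A minimal userspace sketch of the corrected iteration (names and the bitmask helper are hypothetical, not kernel code):

```c
#include <assert.h>

enum xscale_counters {
	XSCALE_CYCLE_COUNTER = 0,	/* was 1 before this change */
	XSCALE_COUNTER0,
	XSCALE_COUNTER1,
	XSCALE_COUNTER2,
};

#define NUM_EVENTS 3	/* xscale1: cycle counter + two event counters */

/* Hypothetical stand-in for the IRQ handler's scan: 'overflowed' has
 * bit idx set when counter idx overflowed. */
static int scan_overflows(unsigned int overflowed)
{
	int idx, hits = 0;

	/* 'idx < NUM_EVENTS' (not '<='): with zero-based indices the
	 * valid counters are 0 .. NUM_EVENTS - 1. */
	for (idx = 0; idx < NUM_EVENTS; ++idx)
		if (overflowed & (1u << idx))
			hits++;
	return hits;
}
```
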
+8 -174
arch/arm/kernel/pmu.c
··· 10 10 * 11 11 */ 12 12 13 - #define pr_fmt(fmt) "PMU: " fmt 14 - 15 - #include <linux/cpumask.h> 16 13 #include <linux/err.h> 17 - #include <linux/interrupt.h> 18 14 #include <linux/kernel.h> 19 15 #include <linux/module.h> 20 - #include <linux/of_device.h> 21 - #include <linux/platform_device.h> 22 16 23 17 #include <asm/pmu.h> 24 18 25 - static volatile long pmu_lock; 19 + /* 20 + * PMU locking to ensure mutual exclusion between different subsystems. 21 + */ 22 + static unsigned long pmu_lock[BITS_TO_LONGS(ARM_NUM_PMU_DEVICES)]; 26 23 27 - static struct platform_device *pmu_devices[ARM_NUM_PMU_DEVICES]; 28 - 29 - static int __devinit pmu_register(struct platform_device *pdev, 30 - enum arm_pmu_type type) 31 - { 32 - if (type < 0 || type >= ARM_NUM_PMU_DEVICES) { 33 - pr_warning("received registration request for unknown " 34 - "PMU device type %d\n", type); 35 - return -EINVAL; 36 - } 37 - 38 - if (pmu_devices[type]) { 39 - pr_warning("rejecting duplicate registration of PMU device " 40 - "type %d.", type); 41 - return -ENOSPC; 42 - } 43 - 44 - pr_info("registered new PMU device of type %d\n", type); 45 - pmu_devices[type] = pdev; 46 - return 0; 47 - } 48 - 49 - #define OF_MATCH_PMU(_name, _type) { \ 50 - .compatible = _name, \ 51 - .data = (void *)_type, \ 52 - } 53 - 54 - #define OF_MATCH_CPU(name) OF_MATCH_PMU(name, ARM_PMU_DEVICE_CPU) 55 - 56 - static struct of_device_id armpmu_of_device_ids[] = { 57 - OF_MATCH_CPU("arm,cortex-a9-pmu"), 58 - OF_MATCH_CPU("arm,cortex-a8-pmu"), 59 - OF_MATCH_CPU("arm,arm1136-pmu"), 60 - OF_MATCH_CPU("arm,arm1176-pmu"), 61 - {}, 62 - }; 63 - 64 - #define PLAT_MATCH_PMU(_name, _type) { \ 65 - .name = _name, \ 66 - .driver_data = _type, \ 67 - } 68 - 69 - #define PLAT_MATCH_CPU(_name) PLAT_MATCH_PMU(_name, ARM_PMU_DEVICE_CPU) 70 - 71 - static struct platform_device_id armpmu_plat_device_ids[] = { 72 - PLAT_MATCH_CPU("arm-pmu"), 73 - {}, 74 - }; 75 - 76 - enum arm_pmu_type armpmu_device_type(struct platform_device *pdev) 77 - { 
78 - const struct of_device_id *of_id; 79 - const struct platform_device_id *pdev_id; 80 - 81 - /* provided by of_device_id table */ 82 - if (pdev->dev.of_node) { 83 - of_id = of_match_device(armpmu_of_device_ids, &pdev->dev); 84 - BUG_ON(!of_id); 85 - return (enum arm_pmu_type)of_id->data; 86 - } 87 - 88 - /* Provided by platform_device_id table */ 89 - pdev_id = platform_get_device_id(pdev); 90 - BUG_ON(!pdev_id); 91 - return pdev_id->driver_data; 92 - } 93 - 94 - static int __devinit armpmu_device_probe(struct platform_device *pdev) 95 - { 96 - return pmu_register(pdev, armpmu_device_type(pdev)); 97 - } 98 - 99 - static struct platform_driver armpmu_driver = { 100 - .driver = { 101 - .name = "arm-pmu", 102 - .of_match_table = armpmu_of_device_ids, 103 - }, 104 - .probe = armpmu_device_probe, 105 - .id_table = armpmu_plat_device_ids, 106 - }; 107 - 108 - static int __init register_pmu_driver(void) 109 - { 110 - return platform_driver_register(&armpmu_driver); 111 - } 112 - device_initcall(register_pmu_driver); 113 - 114 - struct platform_device * 24 + int 115 25 reserve_pmu(enum arm_pmu_type type) 116 26 { 117 - struct platform_device *pdev; 118 - 119 - if (test_and_set_bit_lock(type, &pmu_lock)) { 120 - pdev = ERR_PTR(-EBUSY); 121 - } else if (pmu_devices[type] == NULL) { 122 - clear_bit_unlock(type, &pmu_lock); 123 - pdev = ERR_PTR(-ENODEV); 124 - } else { 125 - pdev = pmu_devices[type]; 126 - } 127 - 128 - return pdev; 27 + return test_and_set_bit_lock(type, pmu_lock) ? 
-EBUSY : 0; 129 28 } 130 29 EXPORT_SYMBOL_GPL(reserve_pmu); 131 30 132 - int 31 + void 133 32 release_pmu(enum arm_pmu_type type) 134 33 { 135 - if (WARN_ON(!pmu_devices[type])) 136 - return -EINVAL; 137 - clear_bit_unlock(type, &pmu_lock); 138 - return 0; 34 + clear_bit_unlock(type, pmu_lock); 139 35 } 140 - EXPORT_SYMBOL_GPL(release_pmu); 141 - 142 - static int 143 - set_irq_affinity(int irq, 144 - unsigned int cpu) 145 - { 146 - #ifdef CONFIG_SMP 147 - int err = irq_set_affinity(irq, cpumask_of(cpu)); 148 - if (err) 149 - pr_warning("unable to set irq affinity (irq=%d, cpu=%u)\n", 150 - irq, cpu); 151 - return err; 152 - #else 153 - return -EINVAL; 154 - #endif 155 - } 156 - 157 - static int 158 - init_cpu_pmu(void) 159 - { 160 - int i, irqs, err = 0; 161 - struct platform_device *pdev = pmu_devices[ARM_PMU_DEVICE_CPU]; 162 - 163 - if (!pdev) 164 - return -ENODEV; 165 - 166 - irqs = pdev->num_resources; 167 - 168 - /* 169 - * If we have a single PMU interrupt that we can't shift, assume that 170 - * we're running on a uniprocessor machine and continue. 171 - */ 172 - if (irqs == 1 && !irq_can_set_affinity(platform_get_irq(pdev, 0))) 173 - return 0; 174 - 175 - for (i = 0; i < irqs; ++i) { 176 - err = set_irq_affinity(platform_get_irq(pdev, i), i); 177 - if (err) 178 - break; 179 - } 180 - 181 - return err; 182 - } 183 - 184 - int 185 - init_pmu(enum arm_pmu_type type) 186 - { 187 - int err = 0; 188 - 189 - switch (type) { 190 - case ARM_PMU_DEVICE_CPU: 191 - err = init_cpu_pmu(); 192 - break; 193 - default: 194 - pr_warning("attempt to initialise PMU of unknown " 195 - "type %d\n", type); 196 - err = -EINVAL; 197 - } 198 - 199 - return err; 200 - } 201 - EXPORT_SYMBOL_GPL(init_pmu);
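The rewritten pmu.c drops the platform-device registry entirely and reduces PMU reservation to one bit per PMU type in a bitmask. A hedged userspace sketch of the same idea, with GCC `__atomic` builtins standing in for the kernel's `test_and_set_bit_lock()`/`clear_bit_unlock()` (the reservation logic mirrors the diff; the atomic primitives are swapped for portability):

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

enum arm_pmu_type { ARM_PMU_DEVICE_CPU = 0, ARM_NUM_PMU_DEVICES };

static unsigned long pmu_lock[1];	/* BITS_TO_LONGS(ARM_NUM_PMU_DEVICES) */

/* Userspace stand-in for test_and_set_bit_lock(): returns the old bit. */
static bool test_and_set_bit(int nr, unsigned long *addr)
{
	unsigned long mask = 1UL << nr;

	return __atomic_fetch_or(addr, mask, __ATOMIC_ACQUIRE) & mask;
}

/* Userspace stand-in for clear_bit_unlock(). */
static void clear_bit(int nr, unsigned long *addr)
{
	__atomic_fetch_and(addr, ~(1UL << nr), __ATOMIC_RELEASE);
}

int reserve_pmu(enum arm_pmu_type type)
{
	return test_and_set_bit(type, pmu_lock) ? -EBUSY : 0;
}

void release_pmu(enum arm_pmu_type type)
{
	clear_bit(type, pmu_lock);
}
```

A second reservation of the same type fails with -EBUSY until the first holder releases it, which is the whole mutual-exclusion contract the old 170-line version implemented.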
+2 -19
arch/arm/kernel/setup.c
··· 849 849 850 850 if (__atags_pointer) 851 851 tags = phys_to_virt(__atags_pointer); 852 - else if (mdesc->boot_params) { 853 - #ifdef CONFIG_MMU 854 - /* 855 - * We still are executing with a minimal MMU mapping created 856 - * with the presumption that the machine default for this 857 - * is located in the first MB of RAM. Anything else will 858 - * fault and silently hang the kernel at this point. 859 - */ 860 - if (mdesc->boot_params < PHYS_OFFSET || 861 - mdesc->boot_params >= PHYS_OFFSET + SZ_1M) { 862 - printk(KERN_WARNING 863 - "Default boot params at physical 0x%08lx out of reach\n", 864 - mdesc->boot_params); 865 - } else 866 - #endif 867 - { 868 - tags = phys_to_virt(mdesc->boot_params); 869 - } 870 - } 852 + else if (mdesc->atag_offset) 853 + tags = (void *)(PAGE_OFFSET + mdesc->atag_offset); 871 854 872 855 #if defined(CONFIG_DEPRECATED_PARAM_STRUCT) 873 856 /*
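The setup.c hunk derives the ATAG pointer from a machine-relative `atag_offset` instead of an absolute `boot_params` virtual address, so the machine record no longer encodes the physical RAM base (this is what drives the many `boot_params` → `atag_offset` conversions in the board files below). A sketch of the arithmetic, with an illustrative `PAGE_OFFSET` value:

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_OFFSET 0xc0000000UL	/* typical ARM mapping; illustrative */

/* tags = (void *)(PAGE_OFFSET + mdesc->atag_offset) */
static uintptr_t atags_virt(unsigned long atag_offset)
{
	return PAGE_OFFSET + atag_offset;
}
```

So a board that previously hard-coded `.boot_params = 0xc0000100` now just says `.atag_offset = 0x100`.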
+28 -59
arch/arm/kernel/sleep.S
··· 8 8 .text 9 9 10 10 /* 11 - * Save CPU state for a suspend 12 - * r1 = v:p offset 13 - * r2 = suspend function arg0 14 - * r3 = suspend function 11 + * Save CPU state for a suspend. This saves the CPU general purpose 12 + * registers, and allocates space on the kernel stack to save the CPU 13 + * specific registers and some other data for resume. 14 + * r0 = suspend function arg0 15 + * r1 = suspend function 15 16 */ 16 17 ENTRY(__cpu_suspend) 17 18 stmfd sp!, {r4 - r11, lr} 18 19 #ifdef MULTI_CPU 19 20 ldr r10, =processor 20 - ldr r5, [r10, #CPU_SLEEP_SIZE] @ size of CPU sleep state 21 - ldr ip, [r10, #CPU_DO_RESUME] @ virtual resume function 21 + ldr r4, [r10, #CPU_SLEEP_SIZE] @ size of CPU sleep state 22 22 #else 23 - ldr r5, =cpu_suspend_size 24 - ldr ip, =cpu_do_resume 23 + ldr r4, =cpu_suspend_size 25 24 #endif 26 - mov r6, sp @ current virtual SP 27 - sub sp, sp, r5 @ allocate CPU state on stack 28 - mov r0, sp @ save pointer to CPU save block 29 - add ip, ip, r1 @ convert resume fn to phys 30 - stmfd sp!, {r1, r6, ip} @ save v:p, virt SP, phys resume fn 31 - ldr r5, =sleep_save_sp 32 - add r6, sp, r1 @ convert SP to phys 33 - stmfd sp!, {r2, r3} @ save suspend func arg and pointer 25 + mov r5, sp @ current virtual SP 26 + add r4, r4, #12 @ Space for pgd, virt sp, phys resume fn 27 + sub sp, sp, r4 @ allocate CPU state on stack 28 + stmfd sp!, {r0, r1} @ save suspend func arg and pointer 29 + add r0, sp, #8 @ save pointer to save block 30 + mov r1, r4 @ size of save block 31 + mov r2, r5 @ virtual SP 32 + ldr r3, =sleep_save_sp 34 33 #ifdef CONFIG_SMP 35 34 ALT_SMP(mrc p15, 0, lr, c0, c0, 5) 36 35 ALT_UP(mov lr, #0) 37 36 and lr, lr, #15 38 - str r6, [r5, lr, lsl #2] @ save phys SP 39 - #else 40 - str r6, [r5] @ save phys SP 37 + add r3, r3, lr, lsl #2 41 38 #endif 42 - #ifdef MULTI_CPU 43 - mov lr, pc 44 - ldr pc, [r10, #CPU_DO_SUSPEND] @ save CPU state 45 - #else 46 - bl cpu_do_suspend 47 - #endif 48 - 49 - @ flush data cache 50 - #ifdef MULTI_CACHE 51 
- ldr r10, =cpu_cache 52 - mov lr, pc 53 - ldr pc, [r10, #CACHE_FLUSH_KERN_ALL] 54 - #else 55 - bl __cpuc_flush_kern_all 56 - #endif 39 + bl __cpu_suspend_save 57 40 adr lr, BSYM(cpu_suspend_abort) 58 41 ldmfd sp!, {r0, pc} @ call suspend fn 59 42 ENDPROC(__cpu_suspend) 60 43 .ltorg 61 44 62 45 cpu_suspend_abort: 63 - ldmia sp!, {r1 - r3} @ pop v:p, virt SP, phys resume fn 46 + ldmia sp!, {r1 - r3} @ pop phys pgd, virt SP, phys resume fn 47 + teq r0, #0 48 + moveq r0, #1 @ force non-zero value 64 49 mov sp, r2 65 50 ldmfd sp!, {r4 - r11, pc} 66 51 ENDPROC(cpu_suspend_abort) 67 52 68 53 /* 69 54 * r0 = control register value 70 - * r1 = v:p offset (preserved by cpu_do_resume) 71 - * r2 = phys page table base 72 - * r3 = L1 section flags 73 55 */ 74 - ENTRY(cpu_resume_mmu) 75 - adr r4, cpu_resume_turn_mmu_on 76 - mov r4, r4, lsr #20 77 - orr r3, r3, r4, lsl #20 78 - ldr r5, [r2, r4, lsl #2] @ save old mapping 79 - str r3, [r2, r4, lsl #2] @ setup 1:1 mapping for mmu code 80 - sub r2, r2, r1 81 - ldr r3, =cpu_resume_after_mmu 82 - bic r1, r0, #CR_C @ ensure D-cache is disabled 83 - b cpu_resume_turn_mmu_on 84 - ENDPROC(cpu_resume_mmu) 85 - .ltorg 86 56 .align 5 87 - cpu_resume_turn_mmu_on: 88 - mcr p15, 0, r1, c1, c0, 0 @ turn on MMU, I-cache, etc 89 - mrc p15, 0, r1, c0, c0, 0 @ read id reg 90 - mov r1, r1 91 - mov r1, r1 57 + ENTRY(cpu_resume_mmu) 58 + ldr r3, =cpu_resume_after_mmu 59 + mcr p15, 0, r0, c1, c0, 0 @ turn on MMU, I-cache, etc 60 + mrc p15, 0, r0, c0, c0, 0 @ read id reg 61 + mov r0, r0 62 + mov r0, r0 92 63 mov pc, r3 @ jump to virtual address 93 - ENDPROC(cpu_resume_turn_mmu_on) 64 + ENDPROC(cpu_resume_mmu) 94 65 cpu_resume_after_mmu: 95 - str r5, [r2, r4, lsl #2] @ restore old mapping 96 - mcr p15, 0, r0, c1, c0, 0 @ turn on D-cache 97 66 bl cpu_init @ restore the und/abt/irq banked regs 98 67 mov r0, #0 @ return zero on success 99 68 ldmfd sp!, {r4 - r11, pc} ··· 88 119 ldr r0, sleep_save_sp @ stack phys addr 89 120 #endif 90 121 setmode PSR_I_BIT | 
PSR_F_BIT | SVC_MODE, r1 @ set SVC, irqs off 91 - @ load v:p, stack, resume fn 122 + @ load phys pgd, stack, resume fn 92 123 ARM( ldmia r0!, {r1, sp, pc} ) 93 124 THUMB( ldmia r0!, {r1, r2, r3} ) 94 125 THUMB( mov sp, r2 )
+1 -37
arch/arm/kernel/smp.c
··· 460 460 for (i = 0; i < NR_IPI; i++) 461 461 sum += __get_irq_stat(cpu, ipi_irqs[i]); 462 462 463 - #ifdef CONFIG_LOCAL_TIMERS 464 - sum += __get_irq_stat(cpu, local_timer_irqs); 465 - #endif 466 - 467 463 return sum; 468 464 } 469 465 ··· 475 479 evt->event_handler(evt); 476 480 irq_exit(); 477 481 } 478 - 479 - #ifdef CONFIG_LOCAL_TIMERS 480 - asmlinkage void __exception_irq_entry do_local_timer(struct pt_regs *regs) 481 - { 482 - handle_local_timer(regs); 483 - } 484 - 485 - void handle_local_timer(struct pt_regs *regs) 486 - { 487 - struct pt_regs *old_regs = set_irq_regs(regs); 488 - int cpu = smp_processor_id(); 489 - 490 - if (local_timer_ack()) { 491 - __inc_irq_stat(cpu, local_timer_irqs); 492 - ipi_timer(); 493 - } 494 - 495 - set_irq_regs(old_regs); 496 - } 497 - 498 - void show_local_irqs(struct seq_file *p, int prec) 499 - { 500 - unsigned int cpu; 501 - 502 - seq_printf(p, "%*s: ", prec, "LOC"); 503 - 504 - for_each_present_cpu(cpu) 505 - seq_printf(p, "%10u ", __get_irq_stat(cpu, local_timer_irqs)); 506 - 507 - seq_printf(p, " Local timer interrupts\n"); 508 - } 509 - #endif 510 482 511 483 #ifdef CONFIG_GENERIC_CLOCKEVENTS_BROADCAST 512 484 static void smp_timer_broadcast(const struct cpumask *mask) ··· 526 562 unsigned int cpu = smp_processor_id(); 527 563 struct clock_event_device *evt = &per_cpu(percpu_clockevent, cpu); 528 564 529 - evt->set_mode(CLOCK_EVT_MODE_UNUSED, evt); 565 + local_timer_stop(evt); 530 566 } 531 567 #endif 532 568
+45 -2
arch/arm/kernel/smp_twd.c
··· 19 19 #include <linux/io.h> 20 20 21 21 #include <asm/smp_twd.h> 22 + #include <asm/localtimer.h> 22 23 #include <asm/hardware/gic.h> 23 24 24 25 /* set up by the platform code */ 25 26 void __iomem *twd_base; 26 27 27 28 static unsigned long twd_timer_rate; 29 + 30 + static struct clock_event_device __percpu **twd_evt; 28 31 29 32 static void twd_set_mode(enum clock_event_mode mode, 30 33 struct clock_event_device *clk) ··· 83 80 return 0; 84 81 } 85 82 83 + void twd_timer_stop(struct clock_event_device *clk) 84 + { 85 + twd_set_mode(CLOCK_EVT_MODE_UNUSED, clk); 86 + disable_percpu_irq(clk->irq); 87 + } 88 + 86 89 static void __cpuinit twd_calibrate_rate(void) 87 90 { 88 91 unsigned long count; ··· 128 119 } 129 120 } 130 121 122 + static irqreturn_t twd_handler(int irq, void *dev_id) 123 + { 124 + struct clock_event_device *evt = *(struct clock_event_device **)dev_id; 125 + 126 + if (twd_timer_ack()) { 127 + evt->event_handler(evt); 128 + return IRQ_HANDLED; 129 + } 130 + 131 + return IRQ_NONE; 132 + } 133 + 131 134 /* 132 135 * Setup the local clock events for a CPU. 
133 136 */ 134 137 void __cpuinit twd_timer_setup(struct clock_event_device *clk) 135 138 { 139 + struct clock_event_device **this_cpu_clk; 140 + 141 + if (!twd_evt) { 142 + int err; 143 + 144 + twd_evt = alloc_percpu(struct clock_event_device *); 145 + if (!twd_evt) { 146 + pr_err("twd: can't allocate memory\n"); 147 + return; 148 + } 149 + 150 + err = request_percpu_irq(clk->irq, twd_handler, 151 + "twd", twd_evt); 152 + if (err) { 153 + pr_err("twd: can't register interrupt %d (%d)\n", 154 + clk->irq, err); 155 + return; 156 + } 157 + } 158 + 136 159 twd_calibrate_rate(); 137 160 138 161 clk->name = "local_timer"; ··· 178 137 clk->max_delta_ns = clockevent_delta2ns(0xffffffff, clk); 179 138 clk->min_delta_ns = clockevent_delta2ns(0xf, clk); 180 139 140 + this_cpu_clk = __this_cpu_ptr(twd_evt); 141 + *this_cpu_clk = clk; 142 + 181 143 clockevents_register_device(clk); 182 144 183 - /* Make sure our local interrupt controller has this enabled */ 184 - gic_enable_ppi(clk->irq); 145 + enable_percpu_irq(clk->irq, 0); 185 146 }
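smp_twd.c now keeps one `clock_event_device` pointer per CPU (`twd_evt`) so the shared `request_percpu_irq()` handler can find its own device through `dev_id`. A userspace sketch of that lookup pattern, with a plain array standing in for `alloc_percpu()`/`__this_cpu_ptr()` (all names here mirror the diff but the mechanics are simplified):

```c
#include <assert.h>
#include <stddef.h>

#define NR_CPUS 4

struct clock_event_device { int irq; int events_handled; };

/* Stand-in for the alloc_percpu'd pointer table (twd_evt). */
static struct clock_event_device *twd_evt[NR_CPUS];

/* Mirror of twd_timer_setup()'s registration step for one CPU. */
static void setup_cpu(int cpu, struct clock_event_device *clk)
{
	twd_evt[cpu] = clk;
}

/* Mirror of twd_handler(): dev_id resolves to this CPU's own slot. */
static int handler(struct clock_event_device **dev_id)
{
	struct clock_event_device *evt = *dev_id;

	if (!evt)
		return 0;	/* IRQ_NONE */
	evt->events_handled++;
	return 1;	/* IRQ_HANDLED */
}
```
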
+72
arch/arm/kernel/suspend.c
··· 1 + #include <linux/init.h> 2 + 3 + #include <asm/pgalloc.h> 4 + #include <asm/pgtable.h> 5 + #include <asm/memory.h> 6 + #include <asm/suspend.h> 7 + #include <asm/tlbflush.h> 8 + 9 + static pgd_t *suspend_pgd; 10 + 11 + extern int __cpu_suspend(unsigned long, int (*)(unsigned long)); 12 + extern void cpu_resume_mmu(void); 13 + 14 + /* 15 + * This is called by __cpu_suspend() to save the state, and do whatever 16 + * flushing is required to ensure that when the CPU goes to sleep we have 17 + * the necessary data available when the caches are not searched. 18 + */ 19 + void __cpu_suspend_save(u32 *ptr, u32 ptrsz, u32 sp, u32 *save_ptr) 20 + { 21 + *save_ptr = virt_to_phys(ptr); 22 + 23 + /* This must correspond to the LDM in cpu_resume() assembly */ 24 + *ptr++ = virt_to_phys(suspend_pgd); 25 + *ptr++ = sp; 26 + *ptr++ = virt_to_phys(cpu_do_resume); 27 + 28 + cpu_do_suspend(ptr); 29 + 30 + flush_cache_all(); 31 + outer_clean_range(*save_ptr, *save_ptr + ptrsz); 32 + outer_clean_range(virt_to_phys(save_ptr), 33 + virt_to_phys(save_ptr) + sizeof(*save_ptr)); 34 + } 35 + 36 + /* 37 + * Hide the first two arguments to __cpu_suspend - these are an implementation 38 + * detail which platform code shouldn't have to know about. 39 + */ 40 + int cpu_suspend(unsigned long arg, int (*fn)(unsigned long)) 41 + { 42 + struct mm_struct *mm = current->active_mm; 43 + int ret; 44 + 45 + if (!suspend_pgd) 46 + return -EINVAL; 47 + 48 + /* 49 + * Provide a temporary page table with an identity mapping for 50 + * the MMU-enable code, required for resuming. On successful 51 + * resume (indicated by a zero return code), we need to switch 52 + * back to the correct page tables. 
53 + */ 54 + ret = __cpu_suspend(arg, fn); 55 + if (ret == 0) { 56 + cpu_switch_mm(mm->pgd, mm); 57 + local_flush_tlb_all(); 58 + } 59 + 60 + return ret; 61 + } 62 + 63 + static int __init cpu_suspend_init(void) 64 + { 65 + suspend_pgd = pgd_alloc(&init_mm); 66 + if (suspend_pgd) { 67 + unsigned long addr = virt_to_phys(cpu_resume_mmu); 68 + identity_mapping_add(suspend_pgd, addr, addr + SECTION_SIZE); 69 + } 70 + return suspend_pgd ? 0 : -ENOMEM; 71 + } 72 + core_initcall(cpu_suspend_init);
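The new suspend.c lays the resume context out as three words — physical pgd, virtual SP, physical resume function — which the resume path pops with a single `ldmia r0!, {r1, sp, pc}`, so the store order must match the LDM register order exactly. A sketch of that layout invariant (addresses are dummy values; `virt_to_phys` translation is omitted):

```c
#include <assert.h>
#include <stdint.h>

/* Order must match the LDM in cpu_resume():
 * ldmia r0!, {r1, sp, pc} -> pgd, stack pointer, resume fn. */
static void save_block(uint32_t *ptr, uint32_t pgd_phys,
		       uint32_t sp, uint32_t resume_phys)
{
	ptr[0] = pgd_phys;
	ptr[1] = sp;
	ptr[2] = resume_phys;
}
```

This is also why `__cpu_suspend` in sleep.S now adds `#12` to the save-block size: three extra 32-bit words ahead of the CPU-specific state.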
+2
arch/arm/mach-at91/at91sam9g45.c
··· 12 12 13 13 #include <linux/module.h> 14 14 #include <linux/pm.h> 15 + #include <linux/dma-mapping.h> 15 16 16 17 #include <asm/irq.h> 17 18 #include <asm/mach/arch.h> ··· 320 319 static void __init at91sam9g45_map_io(void) 321 320 { 322 321 at91_init_sram(0, AT91SAM9G45_SRAM_BASE, AT91SAM9G45_SRAM_SIZE); 322 + init_consistent_dma_size(SZ_4M); 323 323 } 324 324 325 325 static void __init at91sam9g45_initialize(void)
-2
arch/arm/mach-at91/include/mach/at91sam9g45.h
··· 128 128 #define AT91SAM9G45_EHCI_BASE 0x00800000 /* USB Host controller (EHCI) */ 129 129 #define AT91SAM9G45_VDEC_BASE 0x00900000 /* Video Decoder Controller */ 130 130 131 - #define CONSISTENT_DMA_SIZE SZ_4M 132 - 133 131 /* 134 132 * DMA peripheral identifiers 135 133 * for hardware handshaking interface
+1 -1
arch/arm/mach-at91/include/mach/debug-macro.S
··· 14 14 #include <mach/hardware.h> 15 15 #include <mach/at91_dbgu.h> 16 16 17 - .macro addruart, rp, rv 17 + .macro addruart, rp, rv, tmp 18 18 ldr \rp, =(AT91_BASE_SYS + AT91_DBGU) @ System peripherals (phys address) 19 19 ldr \rv, =(AT91_VA_BASE_SYS + AT91_DBGU) @ System peripherals (virt address) 20 20 .endm
+1 -2
arch/arm/mach-bcmring/include/mach/hardware.h
··· 22 22 #define __ASM_ARCH_HARDWARE_H 23 23 24 24 #include <asm/sizes.h> 25 - #include <mach/memory.h> 26 25 #include <cfg_global.h> 27 26 #include <mach/csp/mm_io.h> 28 27 ··· 30 31 * *_SIZE is the size of the region 31 32 * *_BASE is the virtual address 32 33 */ 33 - #define RAM_START PLAT_PHYS_OFFSET 34 + #define RAM_START PHYS_OFFSET 34 35 35 36 #define RAM_SIZE (CFG_GLOBAL_RAM_SIZE-CFG_GLOBAL_RAM_SIZE_RESERVED) 36 37 #define RAM_BASE PAGE_OFFSET
-33
arch/arm/mach-bcmring/include/mach/memory.h
··· 1 - /***************************************************************************** 2 - * Copyright 2005 - 2008 Broadcom Corporation. All rights reserved. 3 - * 4 - * Unless you and Broadcom execute a separate written software license 5 - * agreement governing use of this software, this software is licensed to you 6 - * under the terms of the GNU General Public License version 2, available at 7 - * http://www.broadcom.com/licenses/GPLv2.php (the "GPL"). 8 - * 9 - * Notwithstanding the above, under no circumstances may you combine this 10 - * software in any way with any other Broadcom software provided under a 11 - * license other than the GPL, without Broadcom's express prior written 12 - * consent. 13 - *****************************************************************************/ 14 - 15 - #ifndef __ASM_ARCH_MEMORY_H 16 - #define __ASM_ARCH_MEMORY_H 17 - 18 - #include <cfg_global.h> 19 - 20 - /* 21 - * Physical vs virtual RAM address space conversion. These are 22 - * private definitions which should NOT be used outside memory.h 23 - * files. Use virt_to_phys/phys_to_virt/__pa/__va instead. 24 - */ 25 - 26 - #define PLAT_PHYS_OFFSET CFG_GLOBAL_RAM_BASE 27 - 28 - /* 29 - * Maximum DMA memory allowed is 14M 30 - */ 31 - #define CONSISTENT_DMA_SIZE (SZ_16M - SZ_2M) 32 - 33 - #endif
+3
arch/arm/mach-bcmring/mm.c
··· 13 13 *****************************************************************************/ 14 14 15 15 #include <linux/platform_device.h> 16 + #include <linux/dma-mapping.h> 16 17 #include <asm/mach/map.h> 17 18 18 19 #include <mach/hardware.h> ··· 54 53 { 55 54 56 55 iotable_init(bcmring_io_desc, ARRAY_SIZE(bcmring_io_desc)); 56 + /* Maximum DMA memory allowed is 14M */ 57 + init_consistent_dma_size(14 << 20); 57 58 }
+1 -1
arch/arm/mach-clps711x/autcpu12.c
··· 64 64 65 65 MACHINE_START(AUTCPU12, "autronix autcpu12") 66 66 /* Maintainer: Thomas Gleixner */ 67 - .boot_params = 0xc0020000, 67 + .atag_offset = 0x20000, 68 68 .map_io = autcpu12_map_io, 69 69 .init_irq = clps711x_init_irq, 70 70 .timer = &clps711x_timer,
+1 -1
arch/arm/mach-clps711x/cdb89712.c
··· 55 55 56 56 MACHINE_START(CDB89712, "Cirrus-CDB89712") 57 57 /* Maintainer: Ray Lehtiniemi */ 58 - .boot_params = 0xc0000100, 58 + .atag_offset = 0x100, 59 59 .map_io = cdb89712_map_io, 60 60 .init_irq = clps711x_init_irq, 61 61 .timer = &clps711x_timer,
+1 -1
arch/arm/mach-clps711x/ceiva.c
··· 56 56 57 57 MACHINE_START(CEIVA, "CEIVA/Polaroid Photo MAX Digital Picture Frame") 58 58 /* Maintainer: Rob Scott */ 59 - .boot_params = 0xc0000100, 59 + .atag_offset = 0x100, 60 60 .map_io = ceiva_map_io, 61 61 .init_irq = clps711x_init_irq, 62 62 .timer = &clps711x_timer,
+1 -1
arch/arm/mach-clps711x/clep7312.c
··· 36 36 37 37 MACHINE_START(CLEP7212, "Cirrus Logic 7212/7312") 38 38 /* Maintainer: Nobody */ 39 - .boot_params = 0xc0000100, 39 + .atag_offset = 0x0100, 40 40 .fixup = fixup_clep7312, 41 41 .map_io = clps711x_map_io, 42 42 .init_irq = clps711x_init_irq,
+1 -1
arch/arm/mach-clps711x/edb7211-arch.c
··· 56 56 57 57 MACHINE_START(EDB7211, "CL-EDB7211 (EP7211 eval board)") 58 58 /* Maintainer: Jon McClintock */ 59 - .boot_params = 0xc0020100, /* 0xc0000000 - 0xc001ffff can be video RAM */ 59 + .atag_offset = 0x20100, /* 0xc0000000 - 0xc001ffff can be video RAM */ 60 60 .fixup = fixup_edb7211, 61 61 .map_io = edb7211_map_io, 62 62 .reserve = edb7211_reserve,
-1
arch/arm/mach-clps711x/fortunet.c
··· 74 74 75 75 MACHINE_START(FORTUNET, "ARM-FortuNet") 76 76 /* Maintainer: FortuNet Inc. */ 77 - .boot_params = 0x00000000, 78 77 .fixup = fortunet_fixup, 79 78 .map_io = clps711x_map_io, 80 79 .init_irq = clps711x_init_irq,
+1 -1
arch/arm/mach-clps711x/include/mach/debug-macro.S
··· 14 14 #include <mach/hardware.h> 15 15 #include <asm/hardware/clps7111.h> 16 16 17 - .macro addruart, rp, rv 17 + .macro addruart, rp, rv, tmp 18 18 #ifndef CONFIG_DEBUG_CLPS711X_UART2 19 19 mov \rp, #0x0000 @ UART1 20 20 #else
+1 -1
arch/arm/mach-clps711x/p720t.c
··· 88 88 89 89 MACHINE_START(P720T, "ARM-Prospector720T") 90 90 /* Maintainer: ARM Ltd/Deep Blue Solutions Ltd */ 91 - .boot_params = 0xc0000100, 91 + .atag_offset = 0x100, 92 92 .fixup = fixup_p720t, 93 93 .map_io = p720t_map_io, 94 94 .init_irq = clps711x_init_irq,
+1 -1
arch/arm/mach-cns3xxx/cns3420vb.c
··· 197 197 } 198 198 199 199 MACHINE_START(CNS3420VB, "Cavium Networks CNS3420 Validation Board") 200 - .boot_params = 0x00000100, 200 + .atag_offset = 0x100, 201 201 .map_io = cns3420_map_io, 202 202 .init_irq = cns3xxx_init_irq, 203 203 .timer = &cns3xxx_timer,
+1 -1
arch/arm/mach-cns3xxx/include/mach/debug-macro.S
··· 10 10 * published by the Free Software Foundation. 11 11 */ 12 12 13 - .macro addruart,rp,rv 13 + .macro addruart,rp,rv,tmp 14 14 mov \rp, #0x00009000 15 15 orr \rv, \rp, #0xf0000000 @ virtual base 16 16 orr \rp, \rp, #0x10000000
-26
arch/arm/mach-cns3xxx/include/mach/memory.h
··· 1 - /* 2 - * Copyright 2003 ARM Limited 3 - * Copyright 2008 Cavium Networks 4 - * 5 - * This file is free software; you can redistribute it and/or modify 6 - * it under the terms of the GNU General Public License, Version 2, as 7 - * published by the Free Software Foundation. 8 - */ 9 - 10 - #ifndef __MACH_MEMORY_H 11 - #define __MACH_MEMORY_H 12 - 13 - /* 14 - * Physical DRAM offset. 15 - */ 16 - #define PLAT_PHYS_OFFSET UL(0x00000000) 17 - 18 - #define __phys_to_bus(x) ((x) + PHYS_OFFSET) 19 - #define __bus_to_phys(x) ((x) - PHYS_OFFSET) 20 - 21 - #define __virt_to_bus(v) __phys_to_bus(__virt_to_phys(v)) 22 - #define __bus_to_virt(b) __phys_to_virt(__bus_to_phys(b)) 23 - #define __pfn_to_bus(p) __phys_to_bus(__pfn_to_phys(p)) 24 - #define __bus_to_pfn(b) __phys_to_pfn(__bus_to_phys(b)) 25 - 26 - #endif
+1 -1
arch/arm/mach-davinci/board-da830-evm.c
··· 676 676 } 677 677 678 678 MACHINE_START(DAVINCI_DA830_EVM, "DaVinci DA830/OMAP-L137/AM17x EVM") 679 - .boot_params = (DA8XX_DDR_BASE + 0x100), 679 + .atag_offset = 0x100, 680 680 .map_io = da830_evm_map_io, 681 681 .init_irq = cp_intc_init, 682 682 .timer = &davinci_timer,
+1 -1
arch/arm/mach-davinci/board-da850-evm.c
··· 1291 1291 } 1292 1292 1293 1293 MACHINE_START(DAVINCI_DA850_EVM, "DaVinci DA850/OMAP-L138/AM18x EVM") 1294 - .boot_params = (DA8XX_DDR_BASE + 0x100), 1294 + .atag_offset = 0x100, 1295 1295 .map_io = da850_evm_map_io, 1296 1296 .init_irq = cp_intc_init, 1297 1297 .timer = &davinci_timer,
+1 -1
arch/arm/mach-davinci/board-dm355-evm.c
··· 351 351 } 352 352 353 353 MACHINE_START(DAVINCI_DM355_EVM, "DaVinci DM355 EVM") 354 - .boot_params = (0x80000100), 354 + .atag_offset = 0x100, 355 355 .map_io = dm355_evm_map_io, 356 356 .init_irq = davinci_irq_init, 357 357 .timer = &davinci_timer,
+1 -1
arch/arm/mach-davinci/board-dm355-leopard.c
··· 270 270 } 271 271 272 272 MACHINE_START(DM355_LEOPARD, "DaVinci DM355 leopard") 273 - .boot_params = (0x80000100), 273 + .atag_offset = 0x100, 274 274 .map_io = dm355_leopard_map_io, 275 275 .init_irq = davinci_irq_init, 276 276 .timer = &davinci_timer,
+1 -1
arch/arm/mach-davinci/board-dm365-evm.c
··· 612 612 } 613 613 614 614 MACHINE_START(DAVINCI_DM365_EVM, "DaVinci DM365 EVM") 615 - .boot_params = (0x80000100), 615 + .atag_offset = 0x100, 616 616 .map_io = dm365_evm_map_io, 617 617 .init_irq = davinci_irq_init, 618 618 .timer = &davinci_timer,
+1 -1
arch/arm/mach-davinci/board-dm644x-evm.c
··· 712 712 713 713 MACHINE_START(DAVINCI_EVM, "DaVinci DM644x EVM") 714 714 /* Maintainer: MontaVista Software <source@mvista.com> */ 715 - .boot_params = (DAVINCI_DDR_BASE + 0x100), 715 + .atag_offset = 0x100, 716 716 .map_io = davinci_evm_map_io, 717 717 .init_irq = davinci_irq_init, 718 718 .timer = &davinci_timer,
+2 -2
arch/arm/mach-davinci/board-dm646x-evm.c
··· 792 792 } 793 793 794 794 MACHINE_START(DAVINCI_DM6467_EVM, "DaVinci DM646x EVM") 795 - .boot_params = (0x80000100), 795 + .atag_offset = 0x100, 796 796 .map_io = davinci_map_io, 797 797 .init_irq = davinci_irq_init, 798 798 .timer = &davinci_timer, ··· 801 801 MACHINE_END 802 802 803 803 MACHINE_START(DAVINCI_DM6467TEVM, "DaVinci DM6467T EVM") 804 - .boot_params = (0x80000100), 804 + .atag_offset = 0x100, 805 805 .map_io = davinci_map_io, 806 806 .init_irq = davinci_irq_init, 807 807 .timer = &davinci_timer,
+1 -1
arch/arm/mach-davinci/board-mityomapl138.c
··· 566 566 } 567 567 568 568 MACHINE_START(MITYOMAPL138, "MityDSP-L138/MityARM-1808") 569 - .boot_params = (DA8XX_DDR_BASE + 0x100), 569 + .atag_offset = 0x100, 570 570 .map_io = mityomapl138_map_io, 571 571 .init_irq = cp_intc_init, 572 572 .timer = &davinci_timer,
+1 -1
arch/arm/mach-davinci/board-neuros-osd2.c
··· 272 272 273 273 MACHINE_START(NEUROS_OSD2, "Neuros OSD2") 274 274 /* Maintainer: Neuros Technologies <neuros@groups.google.com> */ 275 - .boot_params = (DAVINCI_DDR_BASE + 0x100), 275 + .atag_offset = 0x100, 276 276 .map_io = davinci_ntosd2_map_io, 277 277 .init_irq = davinci_irq_init, 278 278 .timer = &davinci_timer,
+1 -1
arch/arm/mach-davinci/board-omapl138-hawk.c
··· 338 338 } 339 339 340 340 MACHINE_START(OMAPL138_HAWKBOARD, "AM18x/OMAP-L138 Hawkboard") 341 - .boot_params = (DA8XX_DDR_BASE + 0x100), 341 + .atag_offset = 0x100, 342 342 .map_io = omapl138_hawk_map_io, 343 343 .init_irq = cp_intc_init, 344 344 .timer = &davinci_timer,
+1 -1
arch/arm/mach-davinci/board-sffsdr.c
··· 151 151 152 152 MACHINE_START(SFFSDR, "Lyrtech SFFSDR") 153 153 /* Maintainer: Hugo Villeneuve hugo.villeneuve@lyrtech.com */ 154 - .boot_params = (DAVINCI_DDR_BASE + 0x100), 154 + .atag_offset = 0x100, 155 155 .map_io = davinci_sffsdr_map_io, 156 156 .init_irq = davinci_irq_init, 157 157 .timer = &davinci_timer,
+1 -1
arch/arm/mach-davinci/board-tnetv107x-evm.c
··· 277 277 #endif 278 278 279 279 MACHINE_START(TNETV107X, "TNETV107X EVM") 280 - .boot_params = (TNETV107X_DDR_BASE + 0x100), 280 + .atag_offset = 0x100, 281 281 .map_io = tnetv107x_init, 282 282 .init_irq = cp_intc_init, 283 283 .timer = &davinci_timer,
+3
arch/arm/mach-davinci/common.c
··· 12 12 #include <linux/io.h> 13 13 #include <linux/etherdevice.h> 14 14 #include <linux/davinci_emac.h> 15 + #include <linux/dma-mapping.h> 15 16 16 17 #include <asm/tlb.h> 17 18 #include <asm/mach/map.h> ··· 86 85 if (davinci_soc_info.io_desc && (davinci_soc_info.io_desc_num > 0)) 87 86 iotable_init(davinci_soc_info.io_desc, 88 87 davinci_soc_info.io_desc_num); 88 + 89 + init_consistent_dma_size(14 << 20); 89 90 90 91 /* 91 92 * Normally devicemaps_init() would flush caches and tlb after
+1 -1
arch/arm/mach-davinci/cpuidle.c
··· 19 19 #include <asm/proc-fns.h> 20 20 21 21 #include <mach/cpuidle.h> 22 - #include <mach/memory.h> 22 + #include <mach/ddr2.h> 23 23 24 24 #define DAVINCI_CPUIDLE_MAX_STATES 2 25 25
+4
arch/arm/mach-davinci/include/mach/ddr2.h
··· 1 + #define DDR2_SDRCR_OFFSET 0xc 2 + #define DDR2_SRPD_BIT (1 << 23) 3 + #define DDR2_MCLKSTOPEN_BIT (1 << 30) 4 + #define DDR2_LPMODEN_BIT (1 << 31)
+23 -29
arch/arm/mach-davinci/include/mach/debug-macro.S
··· 18 18 19 19 #include <linux/serial_reg.h> 20 20 21 - #include <asm/memory.h> 22 - 23 21 #include <mach/serial.h> 24 22 25 23 #define UART_SHIFT 2 26 - 27 - #define davinci_uart_v2p(x) ((x) - PAGE_OFFSET + PLAT_PHYS_OFFSET) 28 - #define davinci_uart_p2v(x) ((x) - PLAT_PHYS_OFFSET + PAGE_OFFSET) 29 24 30 25 .pushsection .data 31 26 davinci_uart_phys: .word 0 32 27 davinci_uart_virt: .word 0 33 28 .popsection 34 29 35 - .macro addruart, rp, rv 30 + .macro addruart, rp, rv, tmp 36 31 37 32 /* Use davinci_uart_phys/virt if already configured */ 38 - 10: mrc p15, 0, \rp, c1, c0 39 - tst \rp, #1 @ MMU enabled? 40 - ldreq \rp, =davinci_uart_v2p(davinci_uart_phys) 41 - ldrne \rp, =davinci_uart_phys 42 - add \rv, \rp, #4 @ davinci_uart_virt 43 - ldr \rp, [\rp, #0] 44 - ldr \rv, [\rv, #0] 33 + 10: adr \rp, 99f @ get effective addr of 99f 34 + ldr \rv, [\rp] @ get absolute addr of 99f 35 + sub \rv, \rv, \rp @ offset between the two 36 + ldr \rp, [\rp, #4] @ abs addr of davinci_uart_phys 37 + sub \tmp, \rp, \rv @ make it effective 38 + ldr \rp, [\tmp, #0] @ davinci_uart_phys 39 + ldr \rv, [\tmp, #4] @ davinci_uart_virt 45 40 cmp \rp, #0 @ is port configured? 46 41 cmpne \rv, #0 47 - bne 100f @ already configured 42 + bne 100f @ already configured 48 43 49 44 /* Check the debug UART address set in uncompress.h */ 50 - mrc p15, 0, \rp, c1, c0 51 - tst \rp, #1 @ MMU enabled?
45 + and \rp, pc, #0xff000000 46 + ldr \rv, =DAVINCI_UART_INFO_OFS 47 + add \rp, \rp, \rv 52 48 53 49 /* Copy uart phys address from decompressor uart info */ 54 - ldreq \rv, =davinci_uart_v2p(davinci_uart_phys) 55 - ldrne \rv, =davinci_uart_phys 56 - ldreq \rp, =DAVINCI_UART_INFO 57 - ldrne \rp, =davinci_uart_p2v(DAVINCI_UART_INFO) 58 - ldr \rp, [\rp, #0] 59 - str \rp, [\rv] 50 + ldr \rv, [\rp, #0] 51 + str \rv, [\tmp, #0] 60 52 61 53 /* Copy uart virt address from decompressor uart info */ 62 - ldreq \rv, =davinci_uart_v2p(davinci_uart_virt) 63 - ldrne \rv, =davinci_uart_virt 64 - ldreq \rp, =DAVINCI_UART_INFO 65 - ldrne \rp, =davinci_uart_p2v(DAVINCI_UART_INFO) 66 - ldr \rp, [\rp, #4] 67 - str \rp, [\rv] 54 + ldr \rv, [\rp, #4] 55 + str \rv, [\tmp, #4] 68 56 69 57 b 10b 70 - 99: 58 + 59 + .align 60 + 99: .word . 61 + .word davinci_uart_phys 62 + .ltorg 63 + 64 + 100: 71 65 .endm 72 66 73 67 .macro senduart,rd,rx
-44
arch/arm/mach-davinci/include/mach/memory.h
··· 1 - /* 2 - * DaVinci memory space definitions 3 - * 4 - * Author: Kevin Hilman, MontaVista Software, Inc. <source@mvista.com> 5 - * 6 - * 2007 (c) MontaVista Software, Inc. This file is licensed under 7 - * the terms of the GNU General Public License version 2. This program 8 - * is licensed "as is" without any warranty of any kind, whether express 9 - * or implied. 10 - */ 11 - #ifndef __ASM_ARCH_MEMORY_H 12 - #define __ASM_ARCH_MEMORY_H 13 - 14 - /************************************************************************** 15 - * Included Files 16 - **************************************************************************/ 17 - #include <asm/page.h> 18 - #include <asm/sizes.h> 19 - 20 - /************************************************************************** 21 - * Definitions 22 - **************************************************************************/ 23 - #define DAVINCI_DDR_BASE 0x80000000 24 - #define DA8XX_DDR_BASE 0xc0000000 25 - 26 - #if defined(CONFIG_ARCH_DAVINCI_DA8XX) && defined(CONFIG_ARCH_DAVINCI_DMx) 27 - #error Cannot enable DaVinci and DA8XX platforms concurrently 28 - #elif defined(CONFIG_ARCH_DAVINCI_DA8XX) 29 - #define PLAT_PHYS_OFFSET DA8XX_DDR_BASE 30 - #else 31 - #define PLAT_PHYS_OFFSET DAVINCI_DDR_BASE 32 - #endif 33 - 34 - #define DDR2_SDRCR_OFFSET 0xc 35 - #define DDR2_SRPD_BIT BIT(23) 36 - #define DDR2_MCLKSTOPEN_BIT BIT(30) 37 - #define DDR2_LPMODEN_BIT BIT(31) 38 - 39 - /* 40 - * Increase size of DMA-consistent memory region 41 - */ 42 - #define CONSISTENT_DMA_SIZE (14<<20) 43 - 44 - #endif /* __ASM_ARCH_MEMORY_H */
+2 -1
arch/arm/mach-davinci/include/mach/serial.h
··· 21 21 * macros in debug-macro.S. 22 22 * 23 23 * This area sits just below the page tables (see arch/arm/kernel/head.S). 24 + * We define it as a relative offset from start of usable RAM. 24 25 */ 25 - #define DAVINCI_UART_INFO (PLAT_PHYS_OFFSET + 0x3ff8) 26 + #define DAVINCI_UART_INFO_OFS 0x3ff8 26 27 27 28 #define DAVINCI_UART0_BASE (IO_PHYS + 0x20000) 28 29 #define DAVINCI_UART1_BASE (IO_PHYS + 0x20400)
+6 -1
arch/arm/mach-davinci/include/mach/uncompress.h
··· 43 43 44 44 static inline void set_uart_info(u32 phys, void * __iomem virt) 45 45 { 46 - u32 *uart_info = (u32 *)(DAVINCI_UART_INFO); 46 + /* 47 + * Get the address of some .bss variable and round it down 48 + * a la CONFIG_AUTO_ZRELADDR. 49 + */ 50 + u32 ram_start = (u32)&uart & 0xf8000000; 51 + u32 *uart_info = (u32 *)(ram_start + DAVINCI_UART_INFO_OFS); 47 52 48 53 uart = (u32 *)phys; 49 54 uart_info[0] = phys;
+1 -1
arch/arm/mach-davinci/sleep.S
··· 22 22 #include <linux/linkage.h> 23 23 #include <asm/assembler.h> 24 24 #include <mach/psc.h> 25 - #include <mach/memory.h> 25 + #include <mach/ddr2.h> 26 26 27 27 #include "clock.h" 28 28
+1 -1
arch/arm/mach-dove/cm-a510.c
··· 87 87 } 88 88 89 89 MACHINE_START(CM_A510, "Compulab CM-A510 Board") 90 - .boot_params = 0x00000100, 90 + .atag_offset = 0x100, 91 91 .init_machine = cm_a510_init, 92 92 .map_io = dove_map_io, 93 93 .init_early = dove_init_early,
+1 -1
arch/arm/mach-dove/dove-db-setup.c
··· 94 94 } 95 95 96 96 MACHINE_START(DOVE_DB, "Marvell DB-MV88AP510-BP Development Board") 97 - .boot_params = 0x00000100, 97 + .atag_offset = 0x100, 98 98 .init_machine = dove_db_init, 99 99 .map_io = dove_map_io, 100 100 .init_early = dove_init_early,
+1 -1
arch/arm/mach-dove/include/mach/debug-macro.S
··· 8 8 9 9 #include <mach/bridge-regs.h> 10 10 11 - .macro addruart, rp, rv 11 + .macro addruart, rp, rv, tmp 12 12 ldr \rp, =DOVE_SB_REGS_PHYS_BASE 13 13 ldr \rv, =DOVE_SB_REGS_VIRT_BASE 14 14 orr \rp, \rp, #0x00012000
-10
arch/arm/mach-dove/include/mach/memory.h
··· 1 - /* 2 - * arch/arm/mach-dove/include/mach/memory.h 3 - */ 4 - 5 - #ifndef __ASM_ARCH_MEMORY_H 6 - #define __ASM_ARCH_MEMORY_H 7 - 8 - #define PLAT_PHYS_OFFSET UL(0x00000000) 9 - 10 - #endif
+1 -1
arch/arm/mach-ebsa110/core.c
··· 280 280 281 281 MACHINE_START(EBSA110, "EBSA110") 282 282 /* Maintainer: Russell King */ 283 - .boot_params = 0x00000400, 283 + .atag_offset = 0x400, 284 284 .reserve_lp0 = 1, 285 285 .reserve_lp2 = 1, 286 286 .soft_reboot = 1,
+1 -1
arch/arm/mach-ebsa110/include/mach/debug-macro.S
··· 11 11 * 12 12 **/ 13 13 14 - .macro addruart, rp, rv 14 + .macro addruart, rp, rv, tmp 15 15 mov \rp, #0xf0000000 16 16 orr \rp, \rp, #0x00000be0 17 17 mov \rv, \rp
+1 -1
arch/arm/mach-ep93xx/adssphere.c
··· 33 33 34 34 MACHINE_START(ADSSPHERE, "ADS Sphere board") 35 35 /* Maintainer: Lennert Buytenhek <buytenh@wantstofly.org> */ 36 - .boot_params = EP93XX_SDCE3_PHYS_BASE_SYNC + 0x100, 36 + .atag_offset = 0x100, 37 37 .map_io = ep93xx_map_io, 38 38 .init_irq = ep93xx_init_irq, 39 39 .timer = &ep93xx_timer,
+8 -8
arch/arm/mach-ep93xx/edb93xx.c
··· 241 241 #ifdef CONFIG_MACH_EDB9301 242 242 MACHINE_START(EDB9301, "Cirrus Logic EDB9301 Evaluation Board") 243 243 /* Maintainer: H Hartley Sweeten <hsweeten@visionengravers.com> */ 244 - .boot_params = EP93XX_SDCE3_PHYS_BASE_SYNC + 0x100, 244 + .atag_offset = 0x100, 245 245 .map_io = ep93xx_map_io, 246 246 .init_irq = ep93xx_init_irq, 247 247 .timer = &ep93xx_timer, ··· 252 252 #ifdef CONFIG_MACH_EDB9302 253 253 MACHINE_START(EDB9302, "Cirrus Logic EDB9302 Evaluation Board") 254 254 /* Maintainer: George Kashperko <george@chas.com.ua> */ 255 - .boot_params = EP93XX_SDCE3_PHYS_BASE_SYNC + 0x100, 255 + .atag_offset = 0x100, 256 256 .map_io = ep93xx_map_io, 257 257 .init_irq = ep93xx_init_irq, 258 258 .timer = &ep93xx_timer, ··· 263 263 #ifdef CONFIG_MACH_EDB9302A 264 264 MACHINE_START(EDB9302A, "Cirrus Logic EDB9302A Evaluation Board") 265 265 /* Maintainer: Lennert Buytenhek <buytenh@wantstofly.org> */ 266 - .boot_params = EP93XX_SDCE0_PHYS_BASE + 0x100, 266 + .atag_offset = 0x100, 267 267 .map_io = ep93xx_map_io, 268 268 .init_irq = ep93xx_init_irq, 269 269 .timer = &ep93xx_timer, ··· 274 274 #ifdef CONFIG_MACH_EDB9307 275 275 MACHINE_START(EDB9307, "Cirrus Logic EDB9307 Evaluation Board") 276 276 /* Maintainer: Herbert Valerio Riedel <hvr@gnu.org> */ 277 - .boot_params = EP93XX_SDCE3_PHYS_BASE_SYNC + 0x100, 277 + .atag_offset = 0x100, 278 278 .map_io = ep93xx_map_io, 279 279 .init_irq = ep93xx_init_irq, 280 280 .timer = &ep93xx_timer, ··· 285 285 #ifdef CONFIG_MACH_EDB9307A 286 286 MACHINE_START(EDB9307A, "Cirrus Logic EDB9307A Evaluation Board") 287 287 /* Maintainer: H Hartley Sweeten <hsweeten@visionengravers.com> */ 288 - .boot_params = EP93XX_SDCE0_PHYS_BASE + 0x100, 288 + .atag_offset = 0x100, 289 289 .map_io = ep93xx_map_io, 290 290 .init_irq = ep93xx_init_irq, 291 291 .timer = &ep93xx_timer, ··· 296 296 #ifdef CONFIG_MACH_EDB9312 297 297 MACHINE_START(EDB9312, "Cirrus Logic EDB9312 Evaluation Board") 298 298 /* Maintainer: Toufeeq Hussain 
<toufeeq_hussain@infosys.com> */ 299 - .boot_params = EP93XX_SDCE3_PHYS_BASE_SYNC + 0x100, 299 + .atag_offset = 0x100, 300 300 .map_io = ep93xx_map_io, 301 301 .init_irq = ep93xx_init_irq, 302 302 .timer = &ep93xx_timer, ··· 307 307 #ifdef CONFIG_MACH_EDB9315 308 308 MACHINE_START(EDB9315, "Cirrus Logic EDB9315 Evaluation Board") 309 309 /* Maintainer: Lennert Buytenhek <buytenh@wantstofly.org> */ 310 - .boot_params = EP93XX_SDCE3_PHYS_BASE_SYNC + 0x100, 310 + .atag_offset = 0x100, 311 311 .map_io = ep93xx_map_io, 312 312 .init_irq = ep93xx_init_irq, 313 313 .timer = &ep93xx_timer, ··· 318 318 #ifdef CONFIG_MACH_EDB9315A 319 319 MACHINE_START(EDB9315A, "Cirrus Logic EDB9315A Evaluation Board") 320 320 /* Maintainer: Lennert Buytenhek <buytenh@wantstofly.org> */ 321 - .boot_params = EP93XX_SDCE0_PHYS_BASE + 0x100, 321 + .atag_offset = 0x100, 322 322 .map_io = ep93xx_map_io, 323 323 .init_irq = ep93xx_init_irq, 324 324 .timer = &ep93xx_timer,
+1 -1
arch/arm/mach-ep93xx/gesbc9312.c
··· 33 33 34 34 MACHINE_START(GESBC9312, "Glomation GESBC-9312-sx") 35 35 /* Maintainer: Lennert Buytenhek <buytenh@wantstofly.org> */ 36 - .boot_params = EP93XX_SDCE3_PHYS_BASE_SYNC + 0x100, 36 + .atag_offset = 0x100, 37 37 .map_io = ep93xx_map_io, 38 38 .init_irq = ep93xx_init_irq, 39 39 .timer = &ep93xx_timer,
+1 -1
arch/arm/mach-ep93xx/include/mach/debug-macro.S
··· 11 11 */ 12 12 #include <mach/ep93xx-regs.h> 13 13 14 - .macro addruart, rp, rv 14 + .macro addruart, rp, rv, tmp 15 15 ldr \rp, =EP93XX_APB_PHYS_BASE @ Physical base 16 16 ldr \rv, =EP93XX_APB_VIRT_BASE @ virtual base 17 17 orr \rp, \rp, #0x000c0000
+4 -4
arch/arm/mach-ep93xx/micro9.c
··· 77 77 #ifdef CONFIG_MACH_MICRO9H 78 78 MACHINE_START(MICRO9, "Contec Micro9-High") 79 79 /* Maintainer: Hubert Feurstein <hubert.feurstein@contec.at> */ 80 - .boot_params = EP93XX_SDCE3_PHYS_BASE_SYNC + 0x100, 80 + .atag_offset = 0x100, 81 81 .map_io = ep93xx_map_io, 82 82 .init_irq = ep93xx_init_irq, 83 83 .timer = &ep93xx_timer, ··· 88 88 #ifdef CONFIG_MACH_MICRO9M 89 89 MACHINE_START(MICRO9M, "Contec Micro9-Mid") 90 90 /* Maintainer: Hubert Feurstein <hubert.feurstein@contec.at> */ 91 - .boot_params = EP93XX_SDCE3_PHYS_BASE_ASYNC + 0x100, 91 + .atag_offset = 0x100, 92 92 .map_io = ep93xx_map_io, 93 93 .init_irq = ep93xx_init_irq, 94 94 .timer = &ep93xx_timer, ··· 99 99 #ifdef CONFIG_MACH_MICRO9L 100 100 MACHINE_START(MICRO9L, "Contec Micro9-Lite") 101 101 /* Maintainer: Hubert Feurstein <hubert.feurstein@contec.at> */ 102 - .boot_params = EP93XX_SDCE3_PHYS_BASE_SYNC + 0x100, 102 + .atag_offset = 0x100, 103 103 .map_io = ep93xx_map_io, 104 104 .init_irq = ep93xx_init_irq, 105 105 .timer = &ep93xx_timer, ··· 110 110 #ifdef CONFIG_MACH_MICRO9S 111 111 MACHINE_START(MICRO9S, "Contec Micro9-Slim") 112 112 /* Maintainer: Hubert Feurstein <hubert.feurstein@contec.at> */ 113 - .boot_params = EP93XX_SDCE3_PHYS_BASE_ASYNC + 0x100, 113 + .atag_offset = 0x100, 114 114 .map_io = ep93xx_map_io, 115 115 .init_irq = ep93xx_init_irq, 116 116 .timer = &ep93xx_timer,
+2 -2
arch/arm/mach-ep93xx/simone.c
··· 65 65 } 66 66 67 67 MACHINE_START(SIM_ONE, "Simplemachines Sim.One Board") 68 - /* Maintainer: Ryan Mallon */ 69 - .boot_params = EP93XX_SDCE0_PHYS_BASE + 0x100, 68 + /* Maintainer: Ryan Mallon */ 69 + .atag_offset = 0x100, 70 70 .map_io = ep93xx_map_io, 71 71 .init_irq = ep93xx_init_irq, 72 72 .timer = &ep93xx_timer,
+1 -1
arch/arm/mach-ep93xx/snappercl15.c
··· 163 163 164 164 MACHINE_START(SNAPPER_CL15, "Bluewater Systems Snapper CL15") 165 165 /* Maintainer: Ryan Mallon */ 166 - .boot_params = EP93XX_SDCE0_PHYS_BASE + 0x100, 166 + .atag_offset = 0x100, 167 167 .map_io = ep93xx_map_io, 168 168 .init_irq = ep93xx_init_irq, 169 169 .timer = &ep93xx_timer,
+1 -1
arch/arm/mach-ep93xx/ts72xx.c
··· 257 257 258 258 MACHINE_START(TS72XX, "Technologic Systems TS-72xx SBC") 259 259 /* Maintainer: Lennert Buytenhek <buytenh@wantstofly.org> */ 260 - .boot_params = EP93XX_SDCE3_PHYS_BASE_SYNC + 0x100, 260 + .atag_offset = 0x100, 261 261 .map_io = ts72xx_map_io, 262 262 .init_irq = ep93xx_init_irq, 263 263 .timer = &ep93xx_timer,
+1 -1
arch/arm/mach-exynos4/include/mach/debug-macro.S
··· 20 20 * aligned and add in the offset when we load the value here. 21 21 */ 22 22 23 - .macro addruart, rp, rv 23 + .macro addruart, rp, rv, tmp 24 24 ldr \rp, = S3C_PA_UART 25 25 ldr \rv, = S3C_VA_UART 26 26 #if CONFIG_DEBUG_S3C_UART != 0
+1 -6
arch/arm/mach-exynos4/include/mach/entry-macro.S
··· 55 55 56 56 bic \irqnr, \irqstat, #0x1c00 57 57 58 - cmp \irqnr, #29 58 + cmp \irqnr, #15 59 59 cmpcc \irqnr, \irqnr 60 60 cmpne \irqnr, \tmp 61 61 cmpcs \irqnr, \irqnr ··· 75 75 cmp \irqnr, #16 76 76 strcc \irqstat, [\base, #GIC_CPU_EOI] 77 77 cmpcs \irqnr, \irqnr 78 - .endm 79 - 80 - /* As above, this assumes that irqstat and base are preserved.. */ 81 - 82 - .macro test_for_ltirq, irqnr, irqstat, base, tmp 83 78 .endm
+1 -1
arch/arm/mach-exynos4/mach-armlex4210.c
··· 207 207 208 208 MACHINE_START(ARMLEX4210, "ARMLEX4210") 209 209 /* Maintainer: Alim Akhtar <alim.akhtar@samsung.com> */ 210 - .boot_params = S5P_PA_SDRAM + 0x100, 210 + .atag_offset = 0x100, 211 211 .init_irq = exynos4_init_irq, 212 212 .map_io = armlex4210_map_io, 213 213 .init_machine = armlex4210_machine_init,
+1 -1
arch/arm/mach-exynos4/mach-nuri.c
··· 1152 1152 1153 1153 MACHINE_START(NURI, "NURI") 1154 1154 /* Maintainer: Kyungmin Park <kyungmin.park@samsung.com> */ 1155 - .boot_params = S5P_PA_SDRAM + 0x100, 1155 + .atag_offset = 0x100, 1156 1156 .init_irq = exynos4_init_irq, 1157 1157 .map_io = nuri_map_io, 1158 1158 .init_machine = nuri_machine_init,
+1 -1
arch/arm/mach-exynos4/mach-smdkc210.c
··· 301 301 302 302 MACHINE_START(SMDKC210, "SMDKC210") 303 303 /* Maintainer: Kukjin Kim <kgene.kim@samsung.com> */ 304 - .boot_params = S5P_PA_SDRAM + 0x100, 304 + .atag_offset = 0x100, 305 305 .init_irq = exynos4_init_irq, 306 306 .map_io = smdkc210_map_io, 307 307 .init_machine = smdkc210_machine_init,
+1 -1
arch/arm/mach-exynos4/mach-smdkv310.c
··· 255 255 MACHINE_START(SMDKV310, "SMDKV310") 256 256 /* Maintainer: Kukjin Kim <kgene.kim@samsung.com> */ 257 257 /* Maintainer: Changhwan Youn <chaos.youn@samsung.com> */ 258 - .boot_params = S5P_PA_SDRAM + 0x100, 258 + .atag_offset = 0x100, 259 259 .init_irq = exynos4_init_irq, 260 260 .map_io = smdkv310_map_io, 261 261 .init_machine = smdkv310_machine_init,
+1 -1
arch/arm/mach-exynos4/mach-universal_c210.c
··· 762 762 763 763 MACHINE_START(UNIVERSAL_C210, "UNIVERSAL_C210") 764 764 /* Maintainer: Kyungmin Park <kyungmin.park@samsung.com> */ 765 - .boot_params = S5P_PA_SDRAM + 0x100, 765 + .atag_offset = 0x100, 766 766 .init_irq = exynos4_init_irq, 767 767 .map_io = universal_map_io, 768 768 .init_machine = universal_machine_init,
+5 -2
arch/arm/mach-exynos4/mct.c
··· 386 386 387 387 if (cpu == 0) { 388 388 mct_tick0_event_irq.dev_id = &mct_tick[cpu]; 389 + evt->irq = IRQ_MCT_L0; 389 390 setup_irq(IRQ_MCT_L0, &mct_tick0_event_irq); 390 391 } else { 391 392 mct_tick1_event_irq.dev_id = &mct_tick[cpu]; 393 + evt->irq = IRQ_MCT_L1; 392 394 setup_irq(IRQ_MCT_L1, &mct_tick1_event_irq); 393 395 irq_set_affinity(IRQ_MCT_L1, cpumask_of(1)); 394 396 } ··· 404 402 return 0; 405 403 } 406 404 407 - int local_timer_ack(void) 405 + void local_timer_stop(struct clock_event_device *evt) 408 406 { 409 - return 0; 407 + evt->set_mode(CLOCK_EVT_MODE_UNUSED, evt); 408 + disable_irq(evt->irq); 410 409 } 411 410 412 411 #endif /* CONFIG_LOCAL_TIMERS */
+1 -1
arch/arm/mach-footbridge/cats-hw.c
··· 85 85 86 86 MACHINE_START(CATS, "Chalice-CATS") 87 87 /* Maintainer: Philip Blundell */ 88 - .boot_params = 0x00000100, 88 + .atag_offset = 0x100, 89 89 .soft_reboot = 1, 90 90 .fixup = fixup_cats, 91 91 .map_io = footbridge_map_io,
+1 -1
arch/arm/mach-footbridge/ebsa285.c
··· 15 15 16 16 MACHINE_START(EBSA285, "EBSA285") 17 17 /* Maintainer: Russell King */ 18 - .boot_params = 0x00000100, 18 + .atag_offset = 0x100, 19 19 .video_start = 0x000a0000, 20 20 .video_end = 0x000bffff, 21 21 .map_io = footbridge_map_io,
+2 -2
arch/arm/mach-footbridge/include/mach/debug-macro.S
··· 15 15 16 16 #ifndef CONFIG_DEBUG_DC21285_PORT 17 17 /* For NetWinder debugging */ 18 - .macro addruart, rp, rv 18 + .macro addruart, rp, rv, tmp 19 19 mov \rp, #0x000003f8 20 20 orr \rv, \rp, #0xff000000 @ virtual 21 21 orr \rp, \rp, #0x7c000000 @ physical ··· 31 31 .equ dc21285_high, ARMCSR_BASE & 0xff000000 32 32 .equ dc21285_low, ARMCSR_BASE & 0x00ffffff 33 33 34 - .macro addruart, rp, rv 34 + .macro addruart, rp, rv, tmp 35 35 .if dc21285_low 36 36 mov \rp, #dc21285_low 37 37 .else
+1 -1
arch/arm/mach-footbridge/netwinder-hw.c
··· 647 647 648 648 MACHINE_START(NETWINDER, "Rebel-NetWinder") 649 649 /* Maintainer: Russell King/Rebel.com */ 650 - .boot_params = 0x00000100, 650 + .atag_offset = 0x100, 651 651 .video_start = 0x000a0000, 652 652 .video_end = 0x000bffff, 653 653 .reserve_lp0 = 1,
+1 -1
arch/arm/mach-footbridge/personal.c
··· 15 15 16 16 MACHINE_START(PERSONAL_SERVER, "Compaq-PersonalServer") 17 17 /* Maintainer: Jamey Hicks / George France */ 18 - .boot_params = 0x00000100, 18 + .atag_offset = 0x100, 19 19 .map_io = footbridge_map_io, 20 20 .init_irq = footbridge_init_irq, 21 21 .timer = &footbridge_timer,
+1 -1
arch/arm/mach-gemini/board-nas4220b.c
··· 102 102 } 103 103 104 104 MACHINE_START(NAS4220B, "Raidsonic NAS IB-4220-B") 105 - .boot_params = 0x100, 105 + .atag_offset = 0x100, 106 106 .map_io = gemini_map_io, 107 107 .init_irq = gemini_init_irq, 108 108 .timer = &ib4220b_timer,
+1 -1
arch/arm/mach-gemini/board-rut1xx.c
··· 86 86 } 87 87 88 88 MACHINE_START(RUT100, "Teltonika RUT100") 89 - .boot_params = 0x100, 89 + .atag_offset = 0x100, 90 90 .map_io = gemini_map_io, 91 91 .init_irq = gemini_init_irq, 92 92 .timer = &rut1xx_timer,
+1 -1
arch/arm/mach-gemini/board-wbd111.c
··· 129 129 } 130 130 131 131 MACHINE_START(WBD111, "Wiliboard WBD-111") 132 - .boot_params = 0x100, 132 + .atag_offset = 0x100, 133 133 .map_io = gemini_map_io, 134 134 .init_irq = gemini_init_irq, 135 135 .timer = &wbd111_timer,
+1 -1
arch/arm/mach-gemini/board-wbd222.c
··· 129 129 } 130 130 131 131 MACHINE_START(WBD222, "Wiliboard WBD-222") 132 - .boot_params = 0x100, 132 + .atag_offset = 0x100, 133 133 .map_io = gemini_map_io, 134 134 .init_irq = gemini_init_irq, 135 135 .timer = &wbd222_timer,
+1 -1
arch/arm/mach-gemini/include/mach/debug-macro.S
··· 11 11 */ 12 12 #include <mach/hardware.h> 13 13 14 - .macro addruart, rp, rv 14 + .macro addruart, rp, rv, tmp 15 15 ldr \rp, =GEMINI_UART_BASE @ physical 16 16 ldr \rv, =IO_ADDRESS(GEMINI_UART_BASE) @ virtual 17 17 .endm
-19
arch/arm/mach-gemini/include/mach/memory.h
··· 1 - /* 2 - * Copyright (C) 2001-2006 Storlink, Corp. 3 - * Copyright (C) 2008-2009 Paulius Zaleckas <paulius.zaleckas@teltonika.lt> 4 - * 5 - * This program is free software; you can redistribute it and/or modify 6 - * it under the terms of the GNU General Public License as published by 7 - * the Free Software Foundation; either version 2 of the License, or 8 - * (at your option) any later version. 9 - */ 10 - #ifndef __MACH_MEMORY_H 11 - #define __MACH_MEMORY_H 12 - 13 - #ifdef CONFIG_GEMINI_MEM_SWAP 14 - # define PLAT_PHYS_OFFSET UL(0x00000000) 15 - #else 16 - # define PLAT_PHYS_OFFSET UL(0x10000000) 17 - #endif 18 - 19 - #endif /* __MACH_MEMORY_H */
+1 -1
arch/arm/mach-h720x/h7201-eval.c
··· 29 29 30 30 MACHINE_START(H7201, "Hynix GMS30C7201") 31 31 /* Maintainer: Robert Schwebel, Pengutronix */ 32 - .boot_params = 0xc0001000, 32 + .atag_offset = 0x1000, 33 33 .map_io = h720x_map_io, 34 34 .init_irq = h720x_init_irq, 35 35 .timer = &h7201_timer,
+1 -1
arch/arm/mach-h720x/h7202-eval.c
··· 71 71 72 72 MACHINE_START(H7202, "Hynix HMS30C7202") 73 73 /* Maintainer: Robert Schwebel, Pengutronix */ 74 - .boot_params = 0x40000100, 74 + .atag_offset = 0x100, 75 75 .map_io = h720x_map_io, 76 76 .init_irq = h7202_init_irq, 77 77 .timer = &h7202_timer,
+1 -1
arch/arm/mach-h720x/include/mach/debug-macro.S
··· 16 16 .equ io_virt, IO_VIRT 17 17 .equ io_phys, IO_PHYS 18 18 19 - .macro addruart, rp, rv 19 + .macro addruart, rp, rv, tmp 20 20 mov \rp, #0x00020000 @ UART1 21 21 add \rv, \rp, #io_virt @ virtual address 22 22 add \rp, \rp, #io_phys @ physical base address
-11
arch/arm/mach-h720x/include/mach/memory.h
··· 1 - /* 2 - * arch/arm/mach-h720x/include/mach/memory.h 3 - * 4 - * Copyright (c) 2000 Jungjun Kim 5 - * 6 - */ 7 - #ifndef __ASM_ARCH_MEMORY_H 8 - #define __ASM_ARCH_MEMORY_H 9 - 10 - #define PLAT_PHYS_OFFSET UL(0x40000000) 11 - #endif
+1 -1
arch/arm/mach-imx/mach-armadillo5x0.c
··· 558 558 559 559 MACHINE_START(ARMADILLO5X0, "Armadillo-500") 560 560 /* Maintainer: Alberto Panizzo */ 561 - .boot_params = MX3x_PHYS_OFFSET + 0x100, 561 + .atag_offset = 0x100, 562 562 .map_io = mx31_map_io, 563 563 .init_early = imx31_init_early, 564 564 .init_irq = mx31_init_irq,
+1 -1
arch/arm/mach-imx/mach-cpuimx27.c
··· 311 311 }; 312 312 313 313 MACHINE_START(EUKREA_CPUIMX27, "EUKREA CPUIMX27") 314 - .boot_params = MX27_PHYS_OFFSET + 0x100, 314 + .atag_offset = 0x100, 315 315 .map_io = mx27_map_io, 316 316 .init_early = imx27_init_early, 317 317 .init_irq = mx27_init_irq,
+1 -1
arch/arm/mach-imx/mach-cpuimx35.c
··· 194 194 195 195 MACHINE_START(EUKREA_CPUIMX35SD, "Eukrea CPUIMX35") 196 196 /* Maintainer: Eukrea Electromatique */ 197 - .boot_params = MX3x_PHYS_OFFSET + 0x100, 197 + .atag_offset = 0x100, 198 198 .map_io = mx35_map_io, 199 199 .init_early = imx35_init_early, 200 200 .init_irq = mx35_init_irq,
+1 -1
arch/arm/mach-imx/mach-eukrea_cpuimx25.c
··· 163 163 164 164 MACHINE_START(EUKREA_CPUIMX25SD, "Eukrea CPUIMX25") 165 165 /* Maintainer: Eukrea Electromatique */ 166 - .boot_params = MX25_PHYS_OFFSET + 0x100, 166 + .atag_offset = 0x100, 167 167 .map_io = mx25_map_io, 168 168 .init_early = imx25_init_early, 169 169 .init_irq = mx25_init_irq,
+1 -1
arch/arm/mach-imx/mach-imx27_visstrim_m10.c
··· 275 275 }; 276 276 277 277 MACHINE_START(IMX27_VISSTRIM_M10, "Vista Silicon Visstrim_M10") 278 - .boot_params = MX27_PHYS_OFFSET + 0x100, 278 + .atag_offset = 0x100, 279 279 .map_io = mx27_map_io, 280 280 .init_early = imx27_init_early, 281 281 .init_irq = mx27_init_irq,
+1 -1
arch/arm/mach-imx/mach-imx27ipcam.c
··· 71 71 72 72 MACHINE_START(IMX27IPCAM, "Freescale IMX27IPCAM") 73 73 /* maintainer: Freescale Semiconductor, Inc. */ 74 - .boot_params = MX27_PHYS_OFFSET + 0x100, 74 + .atag_offset = 0x100, 75 75 .map_io = mx27_map_io, 76 76 .init_early = imx27_init_early, 77 77 .init_irq = mx27_init_irq,
+1 -1
arch/arm/mach-imx/mach-imx27lite.c
··· 77 77 }; 78 78 79 79 MACHINE_START(IMX27LITE, "LogicPD i.MX27LITE") 80 - .boot_params = MX27_PHYS_OFFSET + 0x100, 80 + .atag_offset = 0x100, 81 81 .map_io = mx27_map_io, 82 82 .init_early = imx27_init_early, 83 83 .init_irq = mx27_init_irq,
+1 -1
arch/arm/mach-imx/mach-kzm_arm11_01.c
··· 271 271 }; 272 272 273 273 MACHINE_START(KZM_ARM11_01, "Kyoto Microcomputer Co., Ltd. KZM-ARM11-01") 274 - .boot_params = MX3x_PHYS_OFFSET + 0x100, 274 + .atag_offset = 0x100, 275 275 .map_io = kzm_map_io, 276 276 .init_early = imx31_init_early, 277 277 .init_irq = mx31_init_irq,
+2 -2
arch/arm/mach-imx/mach-mx1ads.c
··· 145 145 146 146 MACHINE_START(MX1ADS, "Freescale MX1ADS") 147 147 /* Maintainer: Sascha Hauer, Pengutronix */ 148 - .boot_params = MX1_PHYS_OFFSET + 0x100, 148 + .atag_offset = 0x100, 149 149 .map_io = mx1_map_io, 150 150 .init_early = imx1_init_early, 151 151 .init_irq = mx1_init_irq, ··· 154 154 MACHINE_END 155 155 156 156 MACHINE_START(MXLADS, "Freescale MXLADS") 157 - .boot_params = MX1_PHYS_OFFSET + 0x100, 157 + .atag_offset = 0x100, 158 158 .map_io = mx1_map_io, 159 159 .init_early = imx1_init_early, 160 160 .init_irq = mx1_init_irq,
+1 -1
arch/arm/mach-imx/mach-mx21ads.c
··· 305 305 306 306 MACHINE_START(MX21ADS, "Freescale i.MX21ADS") 307 307 /* maintainer: Freescale Semiconductor, Inc. */ 308 - .boot_params = MX21_PHYS_OFFSET + 0x100, 308 + .atag_offset = 0x100, 309 309 .map_io = mx21ads_map_io, 310 310 .init_early = imx21_init_early, 311 311 .init_irq = mx21_init_irq,
+1 -1
arch/arm/mach-imx/mach-mx25_3ds.c
··· 253 253 254 254 MACHINE_START(MX25_3DS, "Freescale MX25PDK (3DS)") 255 255 /* Maintainer: Freescale Semiconductor, Inc. */ 256 - .boot_params = MX25_PHYS_OFFSET + 0x100, 256 + .atag_offset = 0x100, 257 257 .map_io = mx25_map_io, 258 258 .init_early = imx25_init_early, 259 259 .init_irq = mx25_init_irq,
+1 -1
arch/arm/mach-imx/mach-mx27_3ds.c
··· 421 421 422 422 MACHINE_START(MX27_3DS, "Freescale MX27PDK") 423 423 /* maintainer: Freescale Semiconductor, Inc. */ 424 - .boot_params = MX27_PHYS_OFFSET + 0x100, 424 + .atag_offset = 0x100, 425 425 .map_io = mx27_map_io, 426 426 .init_early = imx27_init_early, 427 427 .init_irq = mx27_init_irq,
+1 -1
arch/arm/mach-imx/mach-mx27ads.c
··· 344 344 345 345 MACHINE_START(MX27ADS, "Freescale i.MX27ADS") 346 346 /* maintainer: Freescale Semiconductor, Inc. */ 347 - .boot_params = MX27_PHYS_OFFSET + 0x100, 347 + .atag_offset = 0x100, 348 348 .map_io = mx27ads_map_io, 349 349 .init_early = imx27_init_early, 350 350 .init_irq = mx27_init_irq,
+1 -1
arch/arm/mach-imx/mach-mx31_3ds.c
··· 764 764 765 765 MACHINE_START(MX31_3DS, "Freescale MX31PDK (3DS)") 766 766 /* Maintainer: Freescale Semiconductor, Inc. */ 767 - .boot_params = MX3x_PHYS_OFFSET + 0x100, 767 + .atag_offset = 0x100, 768 768 .map_io = mx31_map_io, 769 769 .init_early = imx31_init_early, 770 770 .init_irq = mx31_init_irq,
+1 -1
arch/arm/mach-imx/mach-mx31ads.c
··· 535 535 536 536 MACHINE_START(MX31ADS, "Freescale MX31ADS") 537 537 /* Maintainer: Freescale Semiconductor, Inc. */ 538 - .boot_params = MX3x_PHYS_OFFSET + 0x100, 538 + .atag_offset = 0x100, 539 539 .map_io = mx31ads_map_io, 540 540 .init_early = imx31_init_early, 541 541 .init_irq = mx31ads_init_irq,
+1 -1
arch/arm/mach-imx/mach-mx31lilly.c
··· 295 295 }; 296 296 297 297 MACHINE_START(LILLY1131, "INCO startec LILLY-1131") 298 - .boot_params = MX3x_PHYS_OFFSET + 0x100, 298 + .atag_offset = 0x100, 299 299 .map_io = mx31_map_io, 300 300 .init_early = imx31_init_early, 301 301 .init_irq = mx31_init_irq,
+1 -1
arch/arm/mach-imx/mach-mx31lite.c
··· 280 280 281 281 MACHINE_START(MX31LITE, "LogicPD i.MX31 SOM") 282 282 /* Maintainer: Freescale Semiconductor, Inc. */ 283 - .boot_params = MX3x_PHYS_OFFSET + 0x100, 283 + .atag_offset = 0x100, 284 284 .map_io = mx31lite_map_io, 285 285 .init_early = imx31_init_early, 286 286 .init_irq = mx31_init_irq,
+1 -1
arch/arm/mach-imx/mach-mx31moboard.c
··· 567 567 568 568 MACHINE_START(MX31MOBOARD, "EPFL Mobots mx31moboard") 569 569 /* Maintainer: Valentin Longchamp, EPFL Mobots group */ 570 - .boot_params = MX3x_PHYS_OFFSET + 0x100, 570 + .atag_offset = 0x100, 571 571 .reserve = mx31moboard_reserve, 572 572 .map_io = mx31_map_io, 573 573 .init_early = imx31_init_early,
+1 -1
arch/arm/mach-imx/mach-mx35_3ds.c
··· 217 217 218 218 MACHINE_START(MX35_3DS, "Freescale MX35PDK") 219 219 /* Maintainer: Freescale Semiconductor, Inc */ 220 - .boot_params = MX3x_PHYS_OFFSET + 0x100, 220 + .atag_offset = 0x100, 221 221 .map_io = mx35_map_io, 222 222 .init_early = imx35_init_early, 223 223 .init_irq = mx35_init_irq,
+1 -1
arch/arm/mach-imx/mach-mxt_td60.c
··· 267 267 268 268 MACHINE_START(MXT_TD60, "Maxtrack i-MXT TD60") 269 269 /* maintainer: Maxtrack Industrial */ 270 - .boot_params = MX27_PHYS_OFFSET + 0x100, 270 + .atag_offset = 0x100, 271 271 .map_io = mx27_map_io, 272 272 .init_early = imx27_init_early, 273 273 .init_irq = mx27_init_irq,
+1 -1
arch/arm/mach-imx/mach-pca100.c
··· 435 435 }; 436 436 437 437 MACHINE_START(PCA100, "phyCARD-i.MX27") 438 - .boot_params = MX27_PHYS_OFFSET + 0x100, 438 + .atag_offset = 0x100, 439 439 .map_io = mx27_map_io, 440 440 .init_early = imx27_init_early, 441 441 .init_irq = mx27_init_irq,
+1 -1
arch/arm/mach-imx/mach-pcm037.c
··· 688 688 689 689 MACHINE_START(PCM037, "Phytec Phycore pcm037") 690 690 /* Maintainer: Pengutronix */ 691 - .boot_params = MX3x_PHYS_OFFSET + 0x100, 691 + .atag_offset = 0x100, 692 692 .reserve = pcm037_reserve, 693 693 .map_io = mx31_map_io, 694 694 .init_early = imx31_init_early,
+1 -1
arch/arm/mach-imx/mach-pcm038.c
··· 349 349 }; 350 350 351 351 MACHINE_START(PCM038, "phyCORE-i.MX27") 352 - .boot_params = MX27_PHYS_OFFSET + 0x100, 352 + .atag_offset = 0x100, 353 353 .map_io = mx27_map_io, 354 354 .init_early = imx27_init_early, 355 355 .init_irq = mx27_init_irq,
+1 -1
arch/arm/mach-imx/mach-pcm043.c
··· 418 418 419 419 MACHINE_START(PCM043, "Phytec Phycore pcm043") 420 420 /* Maintainer: Pengutronix */ 421 - .boot_params = MX3x_PHYS_OFFSET + 0x100, 421 + .atag_offset = 0x100, 422 422 .map_io = mx35_map_io, 423 423 .init_early = imx35_init_early, 424 424 .init_irq = mx35_init_irq,
+1 -1
arch/arm/mach-imx/mach-qong.c
··· 262 262 263 263 MACHINE_START(QONG, "Dave/DENX QongEVB-LITE") 264 264 /* Maintainer: DENX Software Engineering GmbH */ 265 - .boot_params = MX3x_PHYS_OFFSET + 0x100, 265 + .atag_offset = 0x100, 266 266 .map_io = mx31_map_io, 267 267 .init_early = imx31_init_early, 268 268 .init_irq = mx31_init_irq,
+1 -1
arch/arm/mach-imx/mach-scb9328.c
··· 137 137 138 138 MACHINE_START(SCB9328, "Synertronixx scb9328") 139 139 /* Sascha Hauer */ 140 - .boot_params = 0x08000100, 140 + .atag_offset = 0x100, 141 141 .map_io = mx1_map_io, 142 142 .init_early = imx1_init_early, 143 143 .init_irq = mx1_init_irq,
+1 -1
arch/arm/mach-integrator/include/mach/debug-macro.S
··· 11 11 * 12 12 */ 13 13 14 - .macro addruart, rp, rv 14 + .macro addruart, rp, rv, tmp 15 15 mov \rp, #0x16000000 @ physical base address 16 16 mov \rv, #0xf0000000 @ virtual base 17 17 add \rv, \rv, #0x16000000 >> 4
+1 -1
arch/arm/mach-integrator/integrator_ap.c
··· 465 465 466 466 MACHINE_START(INTEGRATOR, "ARM-Integrator") 467 467 /* Maintainer: ARM Ltd/Deep Blue Solutions Ltd */ 468 - .boot_params = 0x00000100, 468 + .atag_offset = 0x100, 469 469 .reserve = integrator_reserve, 470 470 .map_io = ap_map_io, 471 471 .init_early = integrator_init_early,
+1 -1
arch/arm/mach-integrator/integrator_cp.c
··· 492 492 493 493 MACHINE_START(CINTEGRATOR, "ARM-IntegratorCP") 494 494 /* Maintainer: ARM Ltd/Deep Blue Solutions Ltd */ 495 - .boot_params = 0x00000100, 495 + .atag_offset = 0x100, 496 496 .reserve = integrator_reserve, 497 497 .map_io = intcp_map_io, 498 498 .init_early = intcp_init_early,
+1 -1
arch/arm/mach-iop13xx/include/mach/debug-macro.S
··· 11 11 * published by the Free Software Foundation. 12 12 */ 13 13 14 - .macro addruart, rp, rv 14 + .macro addruart, rp, rv, tmp 15 15 mov \rp, #0x00002300 16 16 orr \rp, \rp, #0x00000040 17 17 orr \rv, \rp, #0xfe000000 @ virtual
+1 -1
arch/arm/mach-iop13xx/iq81340mc.c
··· 91 91 92 92 MACHINE_START(IQ81340MC, "Intel IQ81340MC") 93 93 /* Maintainer: Dan Williams <dan.j.williams@intel.com> */ 94 - .boot_params = 0x00000100, 94 + .atag_offset = 0x100, 95 95 .map_io = iop13xx_map_io, 96 96 .init_irq = iop13xx_init_irq, 97 97 .timer = &iq81340mc_timer,
+1 -1
arch/arm/mach-iop13xx/iq81340sc.c
··· 93 93 94 94 MACHINE_START(IQ81340SC, "Intel IQ81340SC") 95 95 /* Maintainer: Dan Williams <dan.j.williams@intel.com> */ 96 - .boot_params = 0x00000100, 96 + .atag_offset = 0x100, 97 97 .map_io = iop13xx_map_io, 98 98 .init_irq = iop13xx_init_irq, 99 99 .timer = &iq81340sc_timer,
+1 -1
arch/arm/mach-iop32x/em7210.c
··· 203 203 } 204 204 205 205 MACHINE_START(EM7210, "Lanner EM7210") 206 - .boot_params = 0xa0000100, 206 + .atag_offset = 0x100, 207 207 .map_io = em7210_map_io, 208 208 .init_irq = iop32x_init_irq, 209 209 .timer = &em7210_timer,
+1 -1
arch/arm/mach-iop32x/glantank.c
··· 207 207 208 208 MACHINE_START(GLANTANK, "GLAN Tank") 209 209 /* Maintainer: Lennert Buytenhek <buytenh@wantstofly.org> */ 210 - .boot_params = 0xa0000100, 210 + .atag_offset = 0x100, 211 211 .map_io = glantank_map_io, 212 212 .init_irq = iop32x_init_irq, 213 213 .timer = &glantank_timer,
+1 -1
arch/arm/mach-iop32x/include/mach/debug-macro.S
··· 11 11 * published by the Free Software Foundation. 12 12 */ 13 13 14 - .macro addruart, rp, rv 14 + .macro addruart, rp, rv, tmp 15 15 mov \rp, #0xfe000000 @ physical as well as virtual 16 16 orr \rp, \rp, #0x00800000 @ location of the UART 17 17 mov \rv, \rp
-13
arch/arm/mach-iop32x/include/mach/memory.h
··· 1 - /* 2 - * arch/arm/mach-iop32x/include/mach/memory.h 3 - */ 4 - 5 - #ifndef __MEMORY_H 6 - #define __MEMORY_H 7 - 8 - /* 9 - * Physical DRAM offset. 10 - */ 11 - #define PLAT_PHYS_OFFSET UL(0xa0000000) 12 - 13 - #endif
+2 -2
arch/arm/mach-iop32x/iq31244.c
··· 313 313 314 314 MACHINE_START(IQ31244, "Intel IQ31244") 315 315 /* Maintainer: Intel Corp. */ 316 - .boot_params = 0xa0000100, 316 + .atag_offset = 0x100, 317 317 .map_io = iq31244_map_io, 318 318 .init_irq = iop32x_init_irq, 319 319 .timer = &iq31244_timer, ··· 327 327 */ 328 328 MACHINE_START(EP80219, "Intel EP80219") 329 329 /* Maintainer: Intel Corp. */ 330 - .boot_params = 0xa0000100, 330 + .atag_offset = 0x100, 331 331 .map_io = iq31244_map_io, 332 332 .init_irq = iop32x_init_irq, 333 333 .timer = &iq31244_timer,
+1 -1
arch/arm/mach-iop32x/iq80321.c
··· 186 186 187 187 MACHINE_START(IQ80321, "Intel IQ80321") 188 188 /* Maintainer: Intel Corp. */ 189 - .boot_params = 0xa0000100, 189 + .atag_offset = 0x100, 190 190 .map_io = iq80321_map_io, 191 191 .init_irq = iop32x_init_irq, 192 192 .timer = &iq80321_timer,
+1 -1
arch/arm/mach-iop32x/n2100.c
··· 327 327 328 328 MACHINE_START(N2100, "Thecus N2100") 329 329 /* Maintainer: Lennert Buytenhek <buytenh@wantstofly.org> */ 330 - .boot_params = 0xa0000100, 330 + .atag_offset = 0x100, 331 331 .map_io = n2100_map_io, 332 332 .init_irq = iop32x_init_irq, 333 333 .timer = &n2100_timer,
+1 -1
arch/arm/mach-iop33x/include/mach/debug-macro.S
··· 11 11 * published by the Free Software Foundation. 12 12 */ 13 13 14 - .macro addruart, rp, rv 14 + .macro addruart, rp, rv, tmp 15 15 mov \rp, #0x00ff0000 16 16 orr \rp, \rp, #0x0000f700 17 17 orr \rv, #0xfe000000 @ virtual
-13
arch/arm/mach-iop33x/include/mach/memory.h
··· 1 - /* 2 - * arch/arm/mach-iop33x/include/mach/memory.h 3 - */ 4 - 5 - #ifndef __MEMORY_H 6 - #define __MEMORY_H 7 - 8 - /* 9 - * Physical DRAM offset. 10 - */ 11 - #define PLAT_PHYS_OFFSET UL(0x00000000) 12 - 13 - #endif
+1 -1
arch/arm/mach-iop33x/iq80331.c
··· 141 141 142 142 MACHINE_START(IQ80331, "Intel IQ80331") 143 143 /* Maintainer: Intel Corp. */ 144 - .boot_params = 0x00000100, 144 + .atag_offset = 0x100, 145 145 .map_io = iop3xx_map_io, 146 146 .init_irq = iop33x_init_irq, 147 147 .timer = &iq80331_timer,
+1 -1
arch/arm/mach-iop33x/iq80332.c
··· 141 141 142 142 MACHINE_START(IQ80332, "Intel IQ80332") 143 143 /* Maintainer: Intel Corp. */ 144 - .boot_params = 0x00000100, 144 + .atag_offset = 0x100, 145 145 .map_io = iop3xx_map_io, 146 146 .init_irq = iop33x_init_irq, 147 147 .timer = &iq80332_timer,
+1 -1
arch/arm/mach-ixp2000/enp2611.c
··· 254 254 255 255 MACHINE_START(ENP2611, "Radisys ENP-2611 PCI network processor board") 256 256 /* Maintainer: Lennert Buytenhek <buytenh@wantstofly.org> */ 257 - .boot_params = 0x00000100, 257 + .atag_offset = 0x100, 258 258 .map_io = enp2611_map_io, 259 259 .init_irq = ixp2000_init_irq, 260 260 .timer = &enp2611_timer,
+1 -1
arch/arm/mach-ixp2000/include/mach/debug-macro.S
··· 11 11 * 12 12 */ 13 13 14 - .macro addruart, rp, rv 14 + .macro addruart, rp, rv, tmp 15 15 mov \rp, #0x00030000 16 16 #ifdef __ARMEB__ 17 17 orr \rp, \rp, #0x00000003
+1 -1
arch/arm/mach-ixp2000/ixdp2400.c
··· 171 171 172 172 MACHINE_START(IXDP2400, "Intel IXDP2400 Development Platform") 173 173 /* Maintainer: MontaVista Software, Inc. */ 174 - .boot_params = 0x00000100, 174 + .atag_offset = 0x100, 175 175 .map_io = ixdp2x00_map_io, 176 176 .init_irq = ixdp2400_init_irq, 177 177 .timer = &ixdp2400_timer,
+1 -1
arch/arm/mach-ixp2000/ixdp2800.c
··· 286 286 287 287 MACHINE_START(IXDP2800, "Intel IXDP2800 Development Platform") 288 288 /* Maintainer: MontaVista Software, Inc. */ 289 - .boot_params = 0x00000100, 289 + .atag_offset = 0x100, 290 290 .map_io = ixdp2x00_map_io, 291 291 .init_irq = ixdp2800_init_irq, 292 292 .timer = &ixdp2800_timer,
+3 -3
arch/arm/mach-ixp2000/ixdp2x01.c
··· 417 417 #ifdef CONFIG_ARCH_IXDP2401 418 418 MACHINE_START(IXDP2401, "Intel IXDP2401 Development Platform") 419 419 /* Maintainer: MontaVista Software, Inc. */ 420 - .boot_params = 0x00000100, 420 + .atag_offset = 0x100, 421 421 .map_io = ixdp2x01_map_io, 422 422 .init_irq = ixdp2x01_init_irq, 423 423 .timer = &ixdp2x01_timer, ··· 428 428 #ifdef CONFIG_ARCH_IXDP2801 429 429 MACHINE_START(IXDP2801, "Intel IXDP2801 Development Platform") 430 430 /* Maintainer: MontaVista Software, Inc. */ 431 - .boot_params = 0x00000100, 431 + .atag_offset = 0x100, 432 432 .map_io = ixdp2x01_map_io, 433 433 .init_irq = ixdp2x01_init_irq, 434 434 .timer = &ixdp2x01_timer, ··· 441 441 */ 442 442 MACHINE_START(IXDP28X5, "Intel IXDP2805/2855 Development Platform") 443 443 /* Maintainer: MontaVista Software, Inc. */ 444 - .boot_params = 0x00000100, 444 + .atag_offset = 0x100, 445 445 .map_io = ixdp2x01_map_io, 446 446 .init_irq = ixdp2x01_init_irq, 447 447 .timer = &ixdp2x01_timer,
+1 -1
arch/arm/mach-ixp23xx/espresso.c
··· 88 88 .map_io = ixp23xx_map_io, 89 89 .init_irq = ixp23xx_init_irq, 90 90 .timer = &ixp23xx_timer, 91 - .boot_params = 0x00000100, 91 + .atag_offset = 0x100, 92 92 .init_machine = espresso_init, 93 93 MACHINE_END
+1 -1
arch/arm/mach-ixp23xx/include/mach/debug-macro.S
··· 12 12 */ 13 13 #include <mach/ixp23xx.h> 14 14 15 - .macro addruart, rp, rv 15 + .macro addruart, rp, rv, tmp 16 16 ldr \rp, =IXP23XX_PERIPHERAL_PHYS @ physical 17 17 ldr \rv, =IXP23XX_PERIPHERAL_VIRT @ virtual 18 18 #ifdef __ARMEB__
+1 -1
arch/arm/mach-ixp23xx/ixdp2351.c
··· 331 331 .map_io = ixdp2351_map_io, 332 332 .init_irq = ixdp2351_init_irq, 333 333 .timer = &ixp23xx_timer, 334 - .boot_params = 0x00000100, 334 + .atag_offset = 0x100, 335 335 .init_machine = ixdp2351_init, 336 336 MACHINE_END
+1 -1
arch/arm/mach-ixp23xx/roadrunner.c
··· 175 175 .map_io = ixp23xx_map_io, 176 176 .init_irq = ixp23xx_init_irq, 177 177 .timer = &ixp23xx_timer, 178 - .boot_params = 0x00000100, 178 + .atag_offset = 0x100, 179 179 .init_machine = roadrunner_init, 180 180 MACHINE_END
+2 -2
arch/arm/mach-ixp4xx/avila-setup.c
··· 167 167 .map_io = ixp4xx_map_io, 168 168 .init_irq = ixp4xx_init_irq, 169 169 .timer = &ixp4xx_timer, 170 - .boot_params = 0x0100, 170 + .atag_offset = 0x100, 171 171 .init_machine = avila_init, 172 172 #if defined(CONFIG_PCI) 173 173 .dma_zone_size = SZ_64M, ··· 185 185 .map_io = ixp4xx_map_io, 186 186 .init_irq = ixp4xx_init_irq, 187 187 .timer = &ixp4xx_timer, 188 - .boot_params = 0x0100, 188 + .atag_offset = 0x100, 189 189 .init_machine = avila_init, 190 190 #if defined(CONFIG_PCI) 191 191 .dma_zone_size = SZ_64M,
+2 -2
arch/arm/mach-ixp4xx/coyote-setup.c
··· 112 112 .map_io = ixp4xx_map_io, 113 113 .init_irq = ixp4xx_init_irq, 114 114 .timer = &ixp4xx_timer, 115 - .boot_params = 0x0100, 115 + .atag_offset = 0x100, 116 116 .init_machine = coyote_init, 117 117 #if defined(CONFIG_PCI) 118 118 .dma_zone_size = SZ_64M, ··· 130 130 .map_io = ixp4xx_map_io, 131 131 .init_irq = ixp4xx_init_irq, 132 132 .timer = &ixp4xx_timer, 133 - .boot_params = 0x0100, 133 + .atag_offset = 0x100, 134 134 .init_machine = coyote_init, 135 135 MACHINE_END 136 136 #endif
+1 -1
arch/arm/mach-ixp4xx/dsmg600-setup.c
··· 278 278 279 279 MACHINE_START(DSMG600, "D-Link DSM-G600 RevA") 280 280 /* Maintainer: www.nslu2-linux.org */ 281 - .boot_params = 0x00000100, 281 + .atag_offset = 0x100, 282 282 .map_io = ixp4xx_map_io, 283 283 .init_irq = ixp4xx_init_irq, 284 284 .timer = &dsmg600_timer,
+1 -1
arch/arm/mach-ixp4xx/fsg-setup.c
··· 272 272 .map_io = ixp4xx_map_io, 273 273 .init_irq = ixp4xx_init_irq, 274 274 .timer = &ixp4xx_timer, 275 - .boot_params = 0x0100, 275 + .atag_offset = 0x100, 276 276 .init_machine = fsg_init, 277 277 #if defined(CONFIG_PCI) 278 278 .dma_zone_size = SZ_64M,
+1 -1
arch/arm/mach-ixp4xx/gateway7001-setup.c
··· 99 99 .map_io = ixp4xx_map_io, 100 100 .init_irq = ixp4xx_init_irq, 101 101 .timer = &ixp4xx_timer, 102 - .boot_params = 0x0100, 102 + .atag_offset = 0x100, 103 103 .init_machine = gateway7001_init, 104 104 #if defined(CONFIG_PCI) 105 105 .dma_zone_size = SZ_64M,
+1 -1
arch/arm/mach-ixp4xx/goramo_mlr.c
··· 499 499 .map_io = ixp4xx_map_io, 500 500 .init_irq = ixp4xx_init_irq, 501 501 .timer = &ixp4xx_timer, 502 - .boot_params = 0x0100, 502 + .atag_offset = 0x100, 503 503 .init_machine = gmlr_init, 504 504 #if defined(CONFIG_PCI) 505 505 .dma_zone_size = SZ_64M,
+1 -1
arch/arm/mach-ixp4xx/gtwx5715-setup.c
··· 167 167 .map_io = ixp4xx_map_io, 168 168 .init_irq = ixp4xx_init_irq, 169 169 .timer = &ixp4xx_timer, 170 - .boot_params = 0x0100, 170 + .atag_offset = 0x100, 171 171 .init_machine = gtwx5715_init, 172 172 #if defined(CONFIG_PCI) 173 173 .dma_zone_size = SZ_64M,
+1 -1
arch/arm/mach-ixp4xx/include/mach/debug-macro.S
··· 10 10 * published by the Free Software Foundation. 11 11 */ 12 12 13 - .macro addruart, rp, rv 13 + .macro addruart, rp, rv, tmp 14 14 #ifdef __ARMEB__ 15 15 mov \rp, #3 @ Uart regs are at offset of 3 if 16 16 @ byte writes used - Big Endian.
-17
arch/arm/mach-ixp4xx/include/mach/memory.h
··· 1 - /* 2 - * arch/arm/mach-ixp4xx/include/mach/memory.h 3 - * 4 - * Copyright (c) 2001-2004 MontaVista Software, Inc. 5 - */ 6 - 7 - #ifndef __ASM_ARCH_MEMORY_H 8 - #define __ASM_ARCH_MEMORY_H 9 - 10 - #include <asm/sizes.h> 11 - 12 - /* 13 - * Physical DRAM offset. 14 - */ 15 - #define PLAT_PHYS_OFFSET UL(0x00000000) 16 - 17 - #endif
+4 -4
arch/arm/mach-ixp4xx/ixdp425-setup.c
··· 256 256 .map_io = ixp4xx_map_io, 257 257 .init_irq = ixp4xx_init_irq, 258 258 .timer = &ixp4xx_timer, 259 - .boot_params = 0x0100, 259 + .atag_offset = 0x100, 260 260 .init_machine = ixdp425_init, 261 261 #if defined(CONFIG_PCI) 262 262 .dma_zone_size = SZ_64M, ··· 270 270 .map_io = ixp4xx_map_io, 271 271 .init_irq = ixp4xx_init_irq, 272 272 .timer = &ixp4xx_timer, 273 - .boot_params = 0x0100, 273 + .atag_offset = 0x100, 274 274 .init_machine = ixdp425_init, 275 275 #if defined(CONFIG_PCI) 276 276 .dma_zone_size = SZ_64M, ··· 284 284 .map_io = ixp4xx_map_io, 285 285 .init_irq = ixp4xx_init_irq, 286 286 .timer = &ixp4xx_timer, 287 - .boot_params = 0x0100, 287 + .atag_offset = 0x100, 288 288 .init_machine = ixdp425_init, 289 289 #if defined(CONFIG_PCI) 290 290 .dma_zone_size = SZ_64M, ··· 298 298 .map_io = ixp4xx_map_io, 299 299 .init_irq = ixp4xx_init_irq, 300 300 .timer = &ixp4xx_timer, 301 - .boot_params = 0x0100, 301 + .atag_offset = 0x100, 302 302 .init_machine = ixdp425_init, 303 303 #if defined(CONFIG_PCI) 304 304 .dma_zone_size = SZ_64M,
+1 -1
arch/arm/mach-ixp4xx/nas100d-setup.c
··· 313 313 314 314 MACHINE_START(NAS100D, "Iomega NAS 100d") 315 315 /* Maintainer: www.nslu2-linux.org */ 316 - .boot_params = 0x00000100, 316 + .atag_offset = 0x100, 317 317 .map_io = ixp4xx_map_io, 318 318 .init_irq = ixp4xx_init_irq, 319 319 .timer = &ixp4xx_timer,
+1 -1
arch/arm/mach-ixp4xx/nslu2-setup.c
··· 299 299 300 300 MACHINE_START(NSLU2, "Linksys NSLU2") 301 301 /* Maintainer: www.nslu2-linux.org */ 302 - .boot_params = 0x00000100, 302 + .atag_offset = 0x100, 303 303 .map_io = ixp4xx_map_io, 304 304 .init_irq = ixp4xx_init_irq, 305 305 .timer = &nslu2_timer,
+1 -1
arch/arm/mach-ixp4xx/vulcan-setup.c
··· 239 239 .map_io = ixp4xx_map_io, 240 240 .init_irq = ixp4xx_init_irq, 241 241 .timer = &ixp4xx_timer, 242 - .boot_params = 0x0100, 242 + .atag_offset = 0x100, 243 243 .init_machine = vulcan_init, 244 244 #if defined(CONFIG_PCI) 245 245 .dma_zone_size = SZ_64M,
+1 -1
arch/arm/mach-ixp4xx/wg302v2-setup.c
··· 100 100 .map_io = ixp4xx_map_io, 101 101 .init_irq = ixp4xx_init_irq, 102 102 .timer = &ixp4xx_timer, 103 - .boot_params = 0x0100, 103 + .atag_offset = 0x100, 104 104 .init_machine = wg302v2_init, 105 105 #if defined(CONFIG_PCI) 106 106 .dma_zone_size = SZ_64M,
+1 -1
arch/arm/mach-kirkwood/d2net_v2-setup.c
··· 221 221 } 222 222 223 223 MACHINE_START(D2NET_V2, "LaCie d2 Network v2") 224 - .boot_params = 0x00000100, 224 + .atag_offset = 0x100, 225 225 .init_machine = d2net_v2_init, 226 226 .map_io = kirkwood_map_io, 227 227 .init_early = kirkwood_init_early,
+1 -1
arch/arm/mach-kirkwood/db88f6281-bp-setup.c
··· 97 97 98 98 MACHINE_START(DB88F6281_BP, "Marvell DB-88F6281-BP Development Board") 99 99 /* Maintainer: Saeed Bishara <saeed@marvell.com> */ 100 - .boot_params = 0x00000100, 100 + .atag_offset = 0x100, 101 101 .init_machine = db88f6281_init, 102 102 .map_io = kirkwood_map_io, 103 103 .init_early = kirkwood_init_early,
+1 -1
arch/arm/mach-kirkwood/dockstar-setup.c
··· 102 102 } 103 103 104 104 MACHINE_START(DOCKSTAR, "Seagate FreeAgent DockStar") 105 - .boot_params = 0x00000100, 105 + .atag_offset = 0x100, 106 106 .init_machine = dockstar_init, 107 107 .map_io = kirkwood_map_io, 108 108 .init_early = kirkwood_init_early,
+1 -1
arch/arm/mach-kirkwood/guruplug-setup.c
··· 121 121 122 122 MACHINE_START(GURUPLUG, "Marvell GuruPlug Reference Board") 123 123 /* Maintainer: Siddarth Gore <gores@marvell.com> */ 124 - .boot_params = 0x00000100, 124 + .atag_offset = 0x100, 125 125 .init_machine = guruplug_init, 126 126 .map_io = kirkwood_map_io, 127 127 .init_early = kirkwood_init_early,
+1 -1
arch/arm/mach-kirkwood/include/mach/debug-macro.S
··· 8 8 9 9 #include <mach/bridge-regs.h> 10 10 11 - .macro addruart, rp, rv 11 + .macro addruart, rp, rv, tmp 12 12 ldr \rp, =KIRKWOOD_REGS_PHYS_BASE 13 13 ldr \rv, =KIRKWOOD_REGS_VIRT_BASE 14 14 orr \rp, \rp, #0x00012000
-10
arch/arm/mach-kirkwood/include/mach/memory.h
··· 1 - /* 2 - * arch/arm/mach-kirkwood/include/mach/memory.h 3 - */ 4 - 5 - #ifndef __ASM_ARCH_MEMORY_H 6 - #define __ASM_ARCH_MEMORY_H 7 - 8 - #define PLAT_PHYS_OFFSET UL(0x00000000) 9 - 10 - #endif
+1 -1
arch/arm/mach-kirkwood/mv88f6281gtw_ge-setup.c
··· 163 163 164 164 MACHINE_START(MV88F6281GTW_GE, "Marvell 88F6281 GTW GE Board") 165 165 /* Maintainer: Lennert Buytenhek <buytenh@marvell.com> */ 166 - .boot_params = 0x00000100, 166 + .atag_offset = 0x100, 167 167 .init_machine = mv88f6281gtw_ge_init, 168 168 .map_io = kirkwood_map_io, 169 169 .init_early = kirkwood_init_early,
+3 -3
arch/arm/mach-kirkwood/netspace_v2-setup.c
··· 258 258 259 259 #ifdef CONFIG_MACH_NETSPACE_V2 260 260 MACHINE_START(NETSPACE_V2, "LaCie Network Space v2") 261 - .boot_params = 0x00000100, 261 + .atag_offset = 0x100, 262 262 .init_machine = netspace_v2_init, 263 263 .map_io = kirkwood_map_io, 264 264 .init_early = kirkwood_init_early, ··· 269 269 270 270 #ifdef CONFIG_MACH_INETSPACE_V2 271 271 MACHINE_START(INETSPACE_V2, "LaCie Internet Space v2") 272 - .boot_params = 0x00000100, 272 + .atag_offset = 0x100, 273 273 .init_machine = netspace_v2_init, 274 274 .map_io = kirkwood_map_io, 275 275 .init_early = kirkwood_init_early, ··· 280 280 281 281 #ifdef CONFIG_MACH_NETSPACE_MAX_V2 282 282 MACHINE_START(NETSPACE_MAX_V2, "LaCie Network Space Max v2") 283 - .boot_params = 0x00000100, 283 + .atag_offset = 0x100, 284 284 .init_machine = netspace_v2_init, 285 285 .map_io = kirkwood_map_io, 286 286 .init_early = kirkwood_init_early,
+2 -2
arch/arm/mach-kirkwood/netxbig_v2-setup.c
··· 399 399 400 400 #ifdef CONFIG_MACH_NET2BIG_V2 401 401 MACHINE_START(NET2BIG_V2, "LaCie 2Big Network v2") 402 - .boot_params = 0x00000100, 402 + .atag_offset = 0x100, 403 403 .init_machine = netxbig_v2_init, 404 404 .map_io = kirkwood_map_io, 405 405 .init_early = kirkwood_init_early, ··· 410 410 411 411 #ifdef CONFIG_MACH_NET5BIG_V2 412 412 MACHINE_START(NET5BIG_V2, "LaCie 5Big Network v2") 413 - .boot_params = 0x00000100, 413 + .atag_offset = 0x100, 414 414 .init_machine = netxbig_v2_init, 415 415 .map_io = kirkwood_map_io, 416 416 .init_early = kirkwood_init_early,
+3 -3
arch/arm/mach-kirkwood/openrd-setup.c
··· 214 214 #ifdef CONFIG_MACH_OPENRD_BASE 215 215 MACHINE_START(OPENRD_BASE, "Marvell OpenRD Base Board") 216 216 /* Maintainer: Dhaval Vasa <dhaval.vasa@einfochips.com> */ 217 - .boot_params = 0x00000100, 217 + .atag_offset = 0x100, 218 218 .init_machine = openrd_init, 219 219 .map_io = kirkwood_map_io, 220 220 .init_early = kirkwood_init_early, ··· 226 226 #ifdef CONFIG_MACH_OPENRD_CLIENT 227 227 MACHINE_START(OPENRD_CLIENT, "Marvell OpenRD Client Board") 228 228 /* Maintainer: Dhaval Vasa <dhaval.vasa@einfochips.com> */ 229 - .boot_params = 0x00000100, 229 + .atag_offset = 0x100, 230 230 .init_machine = openrd_init, 231 231 .map_io = kirkwood_map_io, 232 232 .init_early = kirkwood_init_early, ··· 238 238 #ifdef CONFIG_MACH_OPENRD_ULTIMATE 239 239 MACHINE_START(OPENRD_ULTIMATE, "Marvell OpenRD Ultimate Board") 240 240 /* Maintainer: Dhaval Vasa <dhaval.vasa@einfochips.com> */ 241 - .boot_params = 0x00000100, 241 + .atag_offset = 0x100, 242 242 .init_machine = openrd_init, 243 243 .map_io = kirkwood_map_io, 244 244 .init_early = kirkwood_init_early,
+1 -1
arch/arm/mach-kirkwood/rd88f6192-nas-setup.c
··· 79 79 80 80 MACHINE_START(RD88F6192_NAS, "Marvell RD-88F6192-NAS Development Board") 81 81 /* Maintainer: Saeed Bishara <saeed@marvell.com> */ 82 - .boot_params = 0x00000100, 82 + .atag_offset = 0x100, 83 83 .init_machine = rd88f6192_init, 84 84 .map_io = kirkwood_map_io, 85 85 .init_early = kirkwood_init_early,
+1 -1
arch/arm/mach-kirkwood/rd88f6281-setup.c
··· 115 115 116 116 MACHINE_START(RD88F6281, "Marvell RD-88F6281 Reference Board") 117 117 /* Maintainer: Saeed Bishara <saeed@marvell.com> */ 118 - .boot_params = 0x00000100, 118 + .atag_offset = 0x100, 119 119 .init_machine = rd88f6281_init, 120 120 .map_io = kirkwood_map_io, 121 121 .init_early = kirkwood_init_early,
+2 -2
arch/arm/mach-kirkwood/sheevaplug-setup.c
··· 138 138 #ifdef CONFIG_MACH_SHEEVAPLUG 139 139 MACHINE_START(SHEEVAPLUG, "Marvell SheevaPlug Reference Board") 140 140 /* Maintainer: shadi Ammouri <shadi@marvell.com> */ 141 - .boot_params = 0x00000100, 141 + .atag_offset = 0x100, 142 142 .init_machine = sheevaplug_init, 143 143 .map_io = kirkwood_map_io, 144 144 .init_early = kirkwood_init_early, ··· 149 149 150 150 #ifdef CONFIG_MACH_ESATA_SHEEVAPLUG 151 151 MACHINE_START(ESATA_SHEEVAPLUG, "Marvell eSATA SheevaPlug Reference Board") 152 - .boot_params = 0x00000100, 152 + .atag_offset = 0x100, 153 153 .init_machine = sheevaplug_init, 154 154 .map_io = kirkwood_map_io, 155 155 .init_early = kirkwood_init_early,
+1 -1
arch/arm/mach-kirkwood/t5325-setup.c
··· 201 201 202 202 MACHINE_START(T5325, "HP t5325 Thin Client") 203 203 /* Maintainer: Martin Michlmayr <tbm@cyrius.com> */ 204 - .boot_params = 0x00000100, 204 + .atag_offset = 0x100, 205 205 .init_machine = hp_t5325_init, 206 206 .map_io = kirkwood_map_io, 207 207 .init_early = kirkwood_init_early,
+1 -1
arch/arm/mach-kirkwood/ts219-setup.c
··· 132 132 133 133 MACHINE_START(TS219, "QNAP TS-119/TS-219") 134 134 /* Maintainer: Martin Michlmayr <tbm@cyrius.com> */ 135 - .boot_params = 0x00000100, 135 + .atag_offset = 0x100, 136 136 .init_machine = qnap_ts219_init, 137 137 .map_io = kirkwood_map_io, 138 138 .init_early = kirkwood_init_early,
+1 -1
arch/arm/mach-kirkwood/ts41x-setup.c
··· 176 176 177 177 MACHINE_START(TS41X, "QNAP TS-41x") 178 178 /* Maintainer: Martin Michlmayr <tbm@cyrius.com> */ 179 - .boot_params = 0x00000100, 179 + .atag_offset = 0x100, 180 180 .init_machine = qnap_ts41x_init, 181 181 .map_io = kirkwood_map_io, 182 182 .init_early = kirkwood_init_early,
+1 -1
arch/arm/mach-ks8695/board-acs5k.c
··· 223 223 224 224 MACHINE_START(ACS5K, "Brivo Systems LLC ACS-5000 Master board") 225 225 /* Maintainer: Simtec Electronics. */ 226 - .boot_params = KS8695_SDRAM_PA + 0x100, 226 + .atag_offset = 0x100, 227 227 .map_io = ks8695_map_io, 228 228 .init_irq = ks8695_init_irq, 229 229 .init_machine = acs5k_init,
+1 -1
arch/arm/mach-ks8695/board-dsm320.c
··· 121 121 122 122 MACHINE_START(DSM320, "D-Link DSM-320 Wireless Media Player") 123 123 /* Maintainer: Simtec Electronics. */ 124 - .boot_params = KS8695_SDRAM_PA + 0x100, 124 + .atag_offset = 0x100, 125 125 .map_io = ks8695_map_io, 126 126 .init_irq = ks8695_init_irq, 127 127 .init_machine = dsm320_init,
+1 -1
arch/arm/mach-ks8695/board-micrel.c
··· 53 53 54 54 MACHINE_START(KS8695, "KS8695 Centaur Development Board") 55 55 /* Maintainer: Micrel Semiconductor Inc. */ 56 - .boot_params = KS8695_SDRAM_PA + 0x100, 56 + .atag_offset = 0x100, 57 57 .map_io = ks8695_map_io, 58 58 .init_irq = ks8695_init_irq, 59 59 .init_machine = micrel_init,
+1 -1
arch/arm/mach-ks8695/include/mach/debug-macro.S
··· 14 14 #include <mach/hardware.h> 15 15 #include <mach/regs-uart.h> 16 16 17 - .macro addruart, rp, rv 17 + .macro addruart, rp, rv, tmp 18 18 ldr \rp, =KS8695_UART_PA @ physical base address 19 19 ldr \rv, =KS8695_UART_VA @ virtual base address 20 20 .endm
+1 -1
arch/arm/mach-l7200/include/mach/debug-macro.S
··· 14 14 .equ io_virt, IO_BASE 15 15 .equ io_phys, IO_START 16 16 17 - .macro addruart, rp, rv 17 + .macro addruart, rp, rv, tmp 18 18 mov \rp, #0x00044000 @ UART1 19 19 @ mov \rp, #0x00045000 @ UART2 20 20 add \rv, \rp, #io_virt @ virtual address
+1 -1
arch/arm/mach-lpc32xx/include/mach/debug-macro.S
··· 20 20 * Debug output is hardcoded to standard UART 5 21 21 */ 22 22 23 - .macro addruart, rp, rv 23 + .macro addruart, rp, rv, tmp 24 24 ldreq \rp, =0x40090000 25 25 ldrne \rv, =0xF4090000 26 26 .endm
-27
arch/arm/mach-lpc32xx/include/mach/memory.h
··· 1 - /* 2 - * arch/arm/mach-lpc32xx/include/mach/memory.h 3 - * 4 - * Author: Kevin Wells <kevin.wells@nxp.com> 5 - * 6 - * Copyright (C) 2010 NXP Semiconductors 7 - * 8 - * This program is free software; you can redistribute it and/or modify 9 - * it under the terms of the GNU General Public License as published by 10 - * the Free Software Foundation; either version 2 of the License, or 11 - * (at your option) any later version. 12 - * 13 - * This program is distributed in the hope that it will be useful, 14 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 15 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 16 - * GNU General Public License for more details. 17 - */ 18 - 19 - #ifndef __ASM_ARCH_MEMORY_H 20 - #define __ASM_ARCH_MEMORY_H 21 - 22 - /* 23 - * Physical DRAM offset of bank 0 24 - */ 25 - #define PLAT_PHYS_OFFSET UL(0x80000000) 26 - 27 - #endif
+1 -1
arch/arm/mach-lpc32xx/phy3250.c
··· 383 383 384 384 MACHINE_START(PHY3250, "Phytec 3250 board with the LPC3250 Microcontroller") 385 385 /* Maintainer: Kevin Wells, NXP Semiconductors */ 386 - .boot_params = 0x80000100, 386 + .atag_offset = 0x100, 387 387 .map_io = lpc32xx_map_io, 388 388 .init_irq = lpc32xx_init_irq, 389 389 .timer = &lpc32xx_timer,
+1 -1
arch/arm/mach-mmp/include/mach/debug-macro.S
··· 11 11 12 12 #include <mach/addr-map.h> 13 13 14 - .macro addruart, rp, rv 14 + .macro addruart, rp, rv, tmp 15 15 ldr \rp, =APB_PHYS_BASE @ physical 16 16 ldr \rv, =APB_VIRT_BASE @ virtual 17 17 orr \rp, \rp, #0x00017000
-14
arch/arm/mach-mmp/include/mach/memory.h
··· 1 - /* 2 - * linux/arch/arm/mach-mmp/include/mach/memory.h 3 - * 4 - * This program is free software; you can redistribute it and/or modify 5 - * it under the terms of the GNU General Public License version 2 as 6 - * published by the Free Software Foundation. 7 - */ 8 - 9 - #ifndef __ASM_MACH_MEMORY_H 10 - #define __ASM_MACH_MEMORY_H 11 - 12 - #define PLAT_PHYS_OFFSET UL(0x00000000) 13 - 14 - #endif /* __ASM_MACH_MEMORY_H */
+1 -1
arch/arm/mach-msm/board-halibut.c
··· 93 93 } 94 94 95 95 MACHINE_START(HALIBUT, "Halibut Board (QCT SURF7200A)") 96 - .boot_params = 0x10000100, 96 + .atag_offset = 0x100, 97 97 .fixup = halibut_fixup, 98 98 .map_io = halibut_map_io, 99 99 .init_irq = halibut_init_irq,
+1 -1
arch/arm/mach-msm/board-mahimahi.c
··· 74 74 extern struct sys_timer msm_timer; 75 75 76 76 MACHINE_START(MAHIMAHI, "mahimahi") 77 - .boot_params = 0x20000100, 77 + .atag_offset = 0x100, 78 78 .fixup = mahimahi_fixup, 79 79 .map_io = mahimahi_map_io, 80 80 .init_irq = msm_init_irq,
+4 -4
arch/arm/mach-msm/board-msm7x27.c
··· 129 129 } 130 130 131 131 MACHINE_START(MSM7X27_SURF, "QCT MSM7x27 SURF") 132 - .boot_params = PLAT_PHYS_OFFSET + 0x100, 132 + .atag_offset = 0x100, 133 133 .map_io = msm7x2x_map_io, 134 134 .init_irq = msm7x2x_init_irq, 135 135 .init_machine = msm7x2x_init, ··· 137 137 MACHINE_END 138 138 139 139 MACHINE_START(MSM7X27_FFA, "QCT MSM7x27 FFA") 140 - .boot_params = PLAT_PHYS_OFFSET + 0x100, 140 + .atag_offset = 0x100, 141 141 .map_io = msm7x2x_map_io, 142 142 .init_irq = msm7x2x_init_irq, 143 143 .init_machine = msm7x2x_init, ··· 145 145 MACHINE_END 146 146 147 147 MACHINE_START(MSM7X25_SURF, "QCT MSM7x25 SURF") 148 - .boot_params = PLAT_PHYS_OFFSET + 0x100, 148 + .atag_offset = 0x100, 149 149 .map_io = msm7x2x_map_io, 150 150 .init_irq = msm7x2x_init_irq, 151 151 .init_machine = msm7x2x_init, ··· 153 153 MACHINE_END 154 154 155 155 MACHINE_START(MSM7X25_FFA, "QCT MSM7x25 FFA") 156 - .boot_params = PLAT_PHYS_OFFSET + 0x100, 156 + .atag_offset = 0x100, 157 157 .map_io = msm7x2x_map_io, 158 158 .init_irq = msm7x2x_init_irq, 159 159 .init_machine = msm7x2x_init,
+3 -3
arch/arm/mach-msm/board-msm7x30.c
··· 121 121 } 122 122 123 123 MACHINE_START(MSM7X30_SURF, "QCT MSM7X30 SURF") 124 - .boot_params = PLAT_PHYS_OFFSET + 0x100, 124 + .atag_offset = 0x100, 125 125 .fixup = msm7x30_fixup, 126 126 .reserve = msm7x30_reserve, 127 127 .map_io = msm7x30_map_io, ··· 131 131 MACHINE_END 132 132 133 133 MACHINE_START(MSM7X30_FFA, "QCT MSM7X30 FFA") 134 - .boot_params = PLAT_PHYS_OFFSET + 0x100, 134 + .atag_offset = 0x100, 135 135 .fixup = msm7x30_fixup, 136 136 .reserve = msm7x30_reserve, 137 137 .map_io = msm7x30_map_io, ··· 141 141 MACHINE_END 142 142 143 143 MACHINE_START(MSM7X30_FLUID, "QCT MSM7X30 FLUID") 144 - .boot_params = PLAT_PHYS_OFFSET + 0x100, 144 + .atag_offset = 0x100, 145 145 .fixup = msm7x30_fixup, 146 146 .reserve = msm7x30_reserve, 147 147 .map_io = msm7x30_map_io,
-11
arch/arm/mach-msm/board-msm8x60.c
··· 53 53 54 54 static void __init msm8x60_init_irq(void) 55 55 { 56 - unsigned int i; 57 - 58 56 gic_init(0, GIC_PPI_START, MSM_QGIC_DIST_BASE, 59 57 (void *)MSM_QGIC_CPU_BASE); 60 58 ··· 64 66 */ 65 67 if (!machine_is_msm8x60_sim()) 66 68 writel(0x0000FFFF, MSM_QGIC_DIST_BASE + GIC_DIST_ENABLE_SET); 67 - 68 - /* FIXME: Not installing AVS_SVICINT and AVS_SVICINTSWDONE yet 69 - * as they are configured as level, which does not play nice with 70 - * handle_percpu_irq. 71 - */ 72 - for (i = GIC_PPI_START; i < GIC_SPI_START; i++) { 73 - if (i != AVS_SVICINT && i != AVS_SVICINTSWDONE) 74 - irq_set_handler(i, handle_percpu_irq); 75 - } 76 69 } 77 70 78 71 static void __init msm8x60_init(void)
+2 -2
arch/arm/mach-msm/board-qsd8x50.c
··· 192 192 } 193 193 194 194 MACHINE_START(QSD8X50_SURF, "QCT QSD8X50 SURF") 195 - .boot_params = PLAT_PHYS_OFFSET + 0x100, 195 + .atag_offset = 0x100, 196 196 .map_io = qsd8x50_map_io, 197 197 .init_irq = qsd8x50_init_irq, 198 198 .init_machine = qsd8x50_init, ··· 200 200 MACHINE_END 201 201 202 202 MACHINE_START(QSD8X50A_ST1_5, "QCT QSD8X50A ST1.5") 203 - .boot_params = PLAT_PHYS_OFFSET + 0x100, 203 + .atag_offset = 0x100, 204 204 .map_io = qsd8x50_map_io, 205 205 .init_irq = qsd8x50_init_irq, 206 206 .init_machine = qsd8x50_init,
+1 -1
arch/arm/mach-msm/board-sapphire.c
··· 104 104 105 105 MACHINE_START(SAPPHIRE, "sapphire") 106 106 /* Maintainer: Brian Swetland <swetland@google.com> */ 107 - .boot_params = PLAT_PHYS_OFFSET + 0x100, 107 + .atag_offset = 0x100, 108 108 .fixup = sapphire_fixup, 109 109 .map_io = sapphire_map_io, 110 110 .init_irq = sapphire_init_irq,
+1 -1
arch/arm/mach-msm/board-trout.c
··· 93 93 } 94 94 95 95 MACHINE_START(TROUT, "HTC Dream") 96 - .boot_params = 0x10000100, 96 + .atag_offset = 0x100, 97 97 .fixup = trout_fixup, 98 98 .map_io = trout_map_io, 99 99 .init_irq = trout_init_irq,
+2 -2
arch/arm/mach-msm/include/mach/debug-macro.S
··· 20 20 #include <mach/msm_iomap.h> 21 21 22 22 #if defined(CONFIG_HAS_MSM_DEBUG_UART_PHYS) && !defined(CONFIG_MSM_DEBUG_UART_NONE) 23 - .macro addruart, rp, rv 23 + .macro addruart, rp, rv, tmp 24 24 ldr \rp, =MSM_DEBUG_UART_PHYS 25 25 ldr \rv, =MSM_DEBUG_UART_BASE 26 26 .endm ··· 37 37 beq 1001b 38 38 .endm 39 39 #else 40 - .macro addruart, rp, rv 40 + .macro addruart, rp, rv, tmp 41 41 mov \rv, #0xff000000 42 42 orr \rv, \rv, #0x00f00000 43 43 .endm
+1 -72
arch/arm/mach-msm/include/mach/entry-macro-qgic.S
··· 8 8 * warranty of any kind, whether express or implied. 9 9 */ 10 10 11 - #include <mach/hardware.h> 12 - #include <asm/hardware/gic.h> 11 + #include <asm/hardware/entry-macro-gic.S> 13 12 14 13 .macro disable_fiq 15 14 .endm 16 15 17 - .macro get_irqnr_preamble, base, tmp 18 - ldr \base, =gic_cpu_base_addr 19 - ldr \base, [\base] 20 - .endm 21 - 22 16 .macro arch_ret_to_user, tmp1, tmp2 23 - .endm 24 - 25 - /* 26 - * The interrupt numbering scheme is defined in the 27 - * interrupt controller spec. To wit: 28 - * 29 - * Migrated the code from ARM MP port to be more consistent 30 - * with interrupt processing , the following still holds true 31 - * however, all interrupts are treated the same regardless of 32 - * if they are local IPI or PPI 33 - * 34 - * Interrupts 0-15 are IPI 35 - * 16-31 are PPI 36 - * (16-18 are the timers) 37 - * 32-1020 are global 38 - * 1021-1022 are reserved 39 - * 1023 is "spurious" (no interrupt) 40 - * 41 - * A simple read from the controller will tell us the number of the 42 - * highest priority enabled interrupt. We then just need to check 43 - * whether it is in the valid range for an IRQ (0-1020 inclusive). 44 - * 45 - * Base ARM code assumes that the local (private) peripheral interrupts 46 - * are not valid, we treat them differently, in that the privates are 47 - * handled like normal shared interrupts with the exception that only 48 - * one processor can register the interrupt and the handler must be 49 - * the same for all processors. 50 - */ 51 - 52 - .macro get_irqnr_and_base, irqnr, irqstat, base, tmp 53 - 54 - ldr \irqstat, [\base, #GIC_CPU_INTACK] /* bits 12-10 =srcCPU, 55 - 9-0 =int # */ 56 - 57 - bic \irqnr, \irqstat, #0x1c00 @mask src 58 - cmp \irqnr, #15 59 - ldr \tmp, =1021 60 - cmpcc \irqnr, \irqnr 61 - cmpne \irqnr, \tmp 62 - cmpcs \irqnr, \irqnr 63 - 64 - .endm 65 - 66 - /* We assume that irqstat (the raw value of the IRQ acknowledge 67 - * register) is preserved from the macro above. 
68 - * If there is an IPI, we immediately signal end of interrupt on the 69 - * controller, since this requires the original irqstat value which 70 - * we won't easily be able to recreate later. 71 - */ 72 - .macro test_for_ipi, irqnr, irqstat, base, tmp 73 - bic \irqnr, \irqstat, #0x1c00 74 - cmp \irqnr, #16 75 - strcc \irqstat, [\base, #GIC_CPU_EOI] 76 - cmpcs \irqnr, \irqnr 77 - .endm 78 - 79 - /* As above, this assumes that irqstat and base are preserved.. */ 80 - 81 - .macro test_for_ltirq, irqnr, irqstat, base, tmp 82 - bic \irqnr, \irqstat, #0x1c00 83 - mov \tmp, #0 84 - cmp \irqnr, #16 85 - moveq \tmp, #1 86 - streq \irqstat, [\base, #GIC_CPU_EOI] 87 - cmp \tmp, #0 88 17 .endm
-35
arch/arm/mach-msm/include/mach/memory.h
··· 1 - /* arch/arm/mach-msm/include/mach/memory.h 2 - * 3 - * Copyright (C) 2007 Google, Inc. 4 - * 5 - * This software is licensed under the terms of the GNU General Public 6 - * License version 2, as published by the Free Software Foundation, and 7 - * may be copied, distributed, and modified under those terms. 8 - * 9 - * This program is distributed in the hope that it will be useful, 10 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 11 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 12 - * GNU General Public License for more details. 13 - * 14 - */ 15 - 16 - #ifndef __ASM_ARCH_MEMORY_H 17 - #define __ASM_ARCH_MEMORY_H 18 - 19 - /* physical offset of RAM */ 20 - #if defined(CONFIG_ARCH_QSD8X50) && defined(CONFIG_MSM_SOC_REV_A) 21 - #define PLAT_PHYS_OFFSET UL(0x00000000) 22 - #elif defined(CONFIG_ARCH_QSD8X50) 23 - #define PLAT_PHYS_OFFSET UL(0x20000000) 24 - #elif defined(CONFIG_ARCH_MSM7X30) 25 - #define PLAT_PHYS_OFFSET UL(0x00000000) 26 - #elif defined(CONFIG_ARCH_MSM8X60) 27 - #define PLAT_PHYS_OFFSET UL(0x40000000) 28 - #elif defined(CONFIG_ARCH_MSM8960) 29 - #define PLAT_PHYS_OFFSET UL(0x40000000) 30 - #else 31 - #define PLAT_PHYS_OFFSET UL(0x10000000) 32 - #endif 33 - 34 - #endif 35 -
+40 -29
arch/arm/mach-msm/timer.c
··· 71 71 struct msm_clock { 72 72 struct clock_event_device clockevent; 73 73 struct clocksource clocksource; 74 - struct irqaction irq; 74 + unsigned int irq; 75 75 void __iomem *regbase; 76 76 uint32_t freq; 77 77 uint32_t shift; 78 78 void __iomem *global_counter; 79 79 void __iomem *local_counter; 80 + union { 81 + struct clock_event_device *evt; 82 + struct clock_event_device __percpu **percpu_evt; 83 + }; 80 84 }; 81 85 82 86 enum { ··· 91 87 92 88 93 89 static struct msm_clock msm_clocks[]; 94 - static struct clock_event_device *local_clock_event; 95 90 96 91 static irqreturn_t msm_timer_interrupt(int irq, void *dev_id) 97 92 { 98 - struct clock_event_device *evt = dev_id; 99 - if (smp_processor_id() != 0) 100 - evt = local_clock_event; 93 + struct clock_event_device *evt = *(struct clock_event_device **)dev_id; 101 94 if (evt->event_handler == NULL) 102 95 return IRQ_HANDLED; 103 96 evt->event_handler(evt); ··· 172 171 .mask = CLOCKSOURCE_MASK(32), 173 172 .flags = CLOCK_SOURCE_IS_CONTINUOUS, 174 173 }, 175 - .irq = { 176 - .name = "gp_timer", 177 - .flags = IRQF_DISABLED | IRQF_TIMER | IRQF_TRIGGER_RISING, 178 - .handler = msm_timer_interrupt, 179 - .dev_id = &msm_clocks[0].clockevent, 180 - .irq = INT_GP_TIMER_EXP 181 - }, 174 + .irq = INT_GP_TIMER_EXP, 182 175 .freq = GPT_HZ, 183 176 }, 184 177 [MSM_CLOCK_DGT] = { ··· 191 196 .mask = CLOCKSOURCE_MASK((32 - MSM_DGT_SHIFT)), 192 197 .flags = CLOCK_SOURCE_IS_CONTINUOUS, 193 198 }, 194 - .irq = { 195 - .name = "dg_timer", 196 - .flags = IRQF_DISABLED | IRQF_TIMER | IRQF_TRIGGER_RISING, 197 - .handler = msm_timer_interrupt, 198 - .dev_id = &msm_clocks[1].clockevent, 199 - .irq = INT_DEBUG_TIMER_EXP 200 - }, 199 + .irq = INT_DEBUG_TIMER_EXP, 201 200 .freq = DGT_HZ >> MSM_DGT_SHIFT, 202 201 .shift = MSM_DGT_SHIFT, 203 202 } ··· 250 261 printk(KERN_ERR "msm_timer_init: clocksource_register " 251 262 "failed for %s\n", cs->name); 252 263 253 - res = setup_irq(clock->irq.irq, &clock->irq);
264 + ce->irq = clock->irq; 265 + if (cpu_is_msm8x60() || cpu_is_msm8960()) { 266 + clock->percpu_evt = alloc_percpu(struct clock_event_device *); 267 + if (!clock->percpu_evt) { 268 + pr_err("msm_timer_init: memory allocation " 269 + "failed for %s\n", ce->name); 270 + continue; 271 + } 272 + 273 + *__this_cpu_ptr(clock->percpu_evt) = ce; 274 + res = request_percpu_irq(ce->irq, msm_timer_interrupt, 275 + ce->name, clock->percpu_evt); 276 + if (!res) 277 + enable_percpu_irq(ce->irq, 0); 278 + } else { 279 + clock->evt = ce; 280 + res = request_irq(ce->irq, msm_timer_interrupt, 281 + IRQF_TIMER | IRQF_NOBALANCING | IRQF_TRIGGER_RISING, 282 + ce->name, &clock->evt); 283 + } 284 + 254 285 if (res) 255 - printk(KERN_ERR "msm_timer_init: setup_irq " 256 - "failed for %s\n", cs->name); 286 + pr_err("msm_timer_init: request_irq failed for %s\n", 287 + ce->name); 257 288 258 289 clockevents_register_device(ce); 259 290 } ··· 282 273 #ifdef CONFIG_SMP 283 274 int __cpuinit local_timer_setup(struct clock_event_device *evt) 284 275 { 276 + static bool local_timer_inited; 285 277 struct msm_clock *clock = &msm_clocks[MSM_GLOBAL_TIMER]; 286 278 287 279 /* Use existing clock_event for cpu 0 */ ··· 291 281 292 282 writel(DGT_CLK_CTL_DIV_4, MSM_TMR_BASE + DGT_CLK_CTL); 293 283 294 - if (!local_clock_event) { 284 + if (!local_timer_inited) { 295 285 writel(0, clock->regbase + TIMER_ENABLE); 296 286 writel(0, clock->regbase + TIMER_CLEAR); 297 287 writel(~0, clock->regbase + TIMER_MATCH_VAL); 288 + local_timer_inited = true; 298 289 } 299 - evt->irq = clock->irq.irq; 290 + evt->irq = clock->irq; 300 291 evt->name = "local_timer"; 301 292 evt->features = CLOCK_EVT_FEAT_ONESHOT; 302 293 evt->rating = clock->clockevent.rating; ··· 309 298 clockevent_delta2ns(0xf0000000 >> clock->shift, evt); 310 299 evt->min_delta_ns = clockevent_delta2ns(4, evt); 311 300 312 - local_clock_event = evt; 313 - 314 - gic_enable_ppi(clock->irq.irq); 301 + *__this_cpu_ptr(clock->percpu_evt) = evt;
302 + enable_percpu_irq(evt->irq, 0); 315 303 316 304 clockevents_register_device(evt); 317 305 return 0; 318 306 } 319 307 320 - inline int local_timer_ack(void) 308 + void local_timer_stop(struct clock_event_device *evt) 321 309 { 322 - return 1; 310 + evt->set_mode(CLOCK_EVT_MODE_UNUSED, evt); 311 + disable_percpu_irq(evt->irq); 323 312 } 324 313 325 314 #endif
+1 -1
arch/arm/mach-mv78xx0/buffalo-wxl-setup.c
··· 145 145 146 146 MACHINE_START(TERASTATION_WXL, "Buffalo Nas WXL") 147 147 /* Maintainer: Sebastien Requiem <sebastien@requiem.fr> */ 148 - .boot_params = 0x00000100, 148 + .atag_offset = 0x100, 149 149 .init_machine = wxl_init, 150 150 .map_io = mv78xx0_map_io, 151 151 .init_early = mv78xx0_init_early,
+1 -1
arch/arm/mach-mv78xx0/db78x00-bp-setup.c
··· 93 93 94 94 MACHINE_START(DB78X00_BP, "Marvell DB-78x00-BP Development Board") 95 95 /* Maintainer: Lennert Buytenhek <buytenh@marvell.com> */ 96 - .boot_params = 0x00000100, 96 + .atag_offset = 0x100, 97 97 .init_machine = db78x00_init, 98 98 .map_io = mv78xx0_map_io, 99 99 .init_early = mv78xx0_init_early,
+1 -1
arch/arm/mach-mv78xx0/include/mach/debug-macro.S
··· 8 8 9 9 #include <mach/mv78xx0.h> 10 10 11 - .macro addruart, rp, rv 11 + .macro addruart, rp, rv, tmp 12 12 ldr \rp, =MV78XX0_REGS_PHYS_BASE 13 13 ldr \rv, =MV78XX0_REGS_VIRT_BASE 14 14 orr \rp, \rp, #0x00012000
-10
arch/arm/mach-mv78xx0/include/mach/memory.h
··· 1 - /* 2 - * arch/arm/mach-mv78xx0/include/mach/memory.h 3 - */ 4 - 5 - #ifndef __ASM_ARCH_MEMORY_H 6 - #define __ASM_ARCH_MEMORY_H 7 - 8 - #define PLAT_PHYS_OFFSET UL(0x00000000) 9 - 10 - #endif
+1 -1
arch/arm/mach-mv78xx0/rd78x00-masa-setup.c
··· 78 78 79 79 MACHINE_START(RD78X00_MASA, "Marvell RD-78x00-MASA Development Board") 80 80 /* Maintainer: Lennert Buytenhek <buytenh@marvell.com> */ 81 - .boot_params = 0x00000100, 81 + .atag_offset = 0x100, 82 82 .init_machine = rd78x00_masa_init, 83 83 .map_io = mv78xx0_map_io, 84 84 .init_early = mv78xx0_init_early,
+1 -1
arch/arm/mach-mx5/board-cpuimx51.c
··· 293 293 294 294 MACHINE_START(EUKREA_CPUIMX51, "Eukrea CPUIMX51 Module") 295 295 /* Maintainer: Eric Bénard <eric@eukrea.com> */ 296 - .boot_params = MX51_PHYS_OFFSET + 0x100, 296 + .atag_offset = 0x100, 297 297 .map_io = mx51_map_io, 298 298 .init_early = imx51_init_early, 299 299 .init_irq = mx51_init_irq,
+1 -1
arch/arm/mach-mx5/board-cpuimx51sd.c
··· 331 331 332 332 MACHINE_START(EUKREA_CPUIMX51SD, "Eukrea CPUIMX51SD") 333 333 /* Maintainer: Eric Bénard <eric@eukrea.com> */ 334 - .boot_params = MX51_PHYS_OFFSET + 0x100, 334 + .atag_offset = 0x100, 335 335 .map_io = mx51_map_io, 336 336 .init_early = imx51_init_early, 337 337 .init_irq = mx51_init_irq,
+1 -1
arch/arm/mach-mx5/board-mx51_3ds.c
··· 169 169 170 170 MACHINE_START(MX51_3DS, "Freescale MX51 3-Stack Board") 171 171 /* Maintainer: Freescale Semiconductor, Inc. */ 172 - .boot_params = MX51_PHYS_OFFSET + 0x100, 172 + .atag_offset = 0x100, 173 173 .map_io = mx51_map_io, 174 174 .init_early = imx51_init_early, 175 175 .init_irq = mx51_init_irq,
+1 -1
arch/arm/mach-mx5/board-mx51_babbage.c
··· 416 416 417 417 MACHINE_START(MX51_BABBAGE, "Freescale MX51 Babbage Board") 418 418 /* Maintainer: Amit Kucheria <amit.kucheria@canonical.com> */ 419 - .boot_params = MX51_PHYS_OFFSET + 0x100, 419 + .atag_offset = 0x100, 420 420 .map_io = mx51_map_io, 421 421 .init_early = imx51_init_early, 422 422 .init_irq = mx51_init_irq,
+1 -1
arch/arm/mach-mx5/board-mx51_efikamx.c
··· 280 280 281 281 MACHINE_START(MX51_EFIKAMX, "Genesi EfikaMX nettop") 282 282 /* Maintainer: Amit Kucheria <amit.kucheria@linaro.org> */ 283 - .boot_params = MX51_PHYS_OFFSET + 0x100, 283 + .atag_offset = 0x100, 284 284 .map_io = mx51_map_io, 285 285 .init_early = imx51_init_early, 286 286 .init_irq = mx51_init_irq,
+1 -1
arch/arm/mach-mx5/board-mx51_efikasb.c
··· 266 266 }; 267 267 268 268 MACHINE_START(MX51_EFIKASB, "Genesi Efika Smartbook") 269 - .boot_params = MX51_PHYS_OFFSET + 0x100, 269 + .atag_offset = 0x100, 270 270 .map_io = mx51_map_io, 271 271 .init_early = imx51_init_early, 272 272 .init_irq = mx51_init_irq,
+1 -1
arch/arm/mach-mxs/include/mach/debug-macro.S
··· 30 30 31 31 #define UART_VADDR MXS_IO_ADDRESS(UART_PADDR) 32 32 33 - .macro addruart, rp, rv 33 + .macro addruart, rp, rv, tmp 34 34 ldr \rp, =UART_PADDR @ physical 35 35 ldr \rv, =UART_VADDR @ virtual 36 36 .endm
-24
arch/arm/mach-mxs/include/mach/memory.h
··· 1 - /* 2 - * Copyright (C) 2009-2010 Freescale Semiconductor, Inc. All Rights Reserved. 3 - * 4 - * This program is free software; you can redistribute it and/or modify 5 - * it under the terms of the GNU General Public License as published by 6 - * the Free Software Foundation; either version 2 of the License, or 7 - * (at your option) any later version. 8 - * 9 - * This program is distributed in the hope that it will be useful, 10 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 11 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 12 - * GNU General Public License for more details. 13 - * 14 - * You should have received a copy of the GNU General Public License along 15 - * with this program; if not, write to the Free Software Foundation, Inc., 16 - * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA. 17 - */ 18 - 19 - #ifndef __MACH_MXS_MEMORY_H__ 20 - #define __MACH_MXS_MEMORY_H__ 21 - 22 - #define PHYS_OFFSET UL(0x40000000) 23 - 24 - #endif /* __MACH_MXS_MEMORY_H__ */
+1 -1
arch/arm/mach-netx/include/mach/debug-macro.S
··· 13 13 14 14 #include "hardware.h" 15 15 16 - .macro addruart, rp, rv 16 + .macro addruart, rp, rv, tmp 17 17 mov \rp, #0x00000a00 18 18 orr \rv, \rp, #io_p2v(0x00100000) @ virtual 19 19 orr \rp, \rp, #0x00100000 @ physical
-26
arch/arm/mach-netx/include/mach/memory.h
··· 1 - /* 2 - * arch/arm/mach-netx/include/mach/memory.h 3 - * 4 - * Copyright (C) 2005 Sascha Hauer <s.hauer@pengutronix.de>, Pengutronix 5 - * 6 - * This program is free software; you can redistribute it and/or modify 7 - * it under the terms of the GNU General Public License version 2 8 - * as published by the Free Software Foundation. 9 - * 10 - * This program is distributed in the hope that it will be useful, 11 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 12 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 13 - * GNU General Public License for more details. 14 - * 15 - * You should have received a copy of the GNU General Public License 16 - * along with this program; if not, write to the Free Software 17 - * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA 18 - */ 19 - 20 - #ifndef __ASM_ARCH_MEMORY_H 21 - #define __ASM_ARCH_MEMORY_H 22 - 23 - #define PLAT_PHYS_OFFSET UL(0x80000000) 24 - 25 - #endif 26 -
+1 -1
arch/arm/mach-netx/nxdb500.c
··· 200 200 } 201 201 202 202 MACHINE_START(NXDB500, "Hilscher nxdb500") 203 - .boot_params = 0x80000100, 203 + .atag_offset = 0x100, 204 204 .map_io = netx_map_io, 205 205 .init_irq = netx_init_irq, 206 206 .timer = &netx_timer,
+1 -1
arch/arm/mach-netx/nxdkn.c
··· 93 93 } 94 94 95 95 MACHINE_START(NXDKN, "Hilscher nxdkn") 96 - .boot_params = 0x80000100, 96 + .atag_offset = 0x100, 97 97 .map_io = netx_map_io, 98 98 .init_irq = netx_init_irq, 99 99 .timer = &netx_timer,
+1 -1
arch/arm/mach-netx/nxeb500hmi.c
··· 177 177 } 178 178 179 179 MACHINE_START(NXEB500HMI, "Hilscher nxeb500hmi") 180 - .boot_params = 0x80000100, 180 + .atag_offset = 0x100, 181 181 .map_io = netx_map_io, 182 182 .init_irq = netx_init_irq, 183 183 .timer = &netx_timer,
+1 -1
arch/arm/mach-nomadik/board-nhk8815.c
··· 277 277 278 278 MACHINE_START(NOMADIK, "NHK8815") 279 279 /* Maintainer: ST MicroElectronics */ 280 - .boot_params = 0x100, 280 + .atag_offset = 0x100, 281 281 .map_io = cpu8815_map_io, 282 282 .init_irq = cpu8815_init_irq, 283 283 .timer = &nomadik_timer,
+1 -1
arch/arm/mach-nomadik/include/mach/debug-macro.S
··· 10 10 * 11 11 */ 12 12 13 - .macro addruart, rp, rv 13 + .macro addruart, rp, rv, tmp 14 14 mov \rp, #0x00100000 15 15 add \rp, \rp, #0x000fb000 16 16 add \rv, \rp, #0xf0000000 @ virtual base
-28
arch/arm/mach-nomadik/include/mach/memory.h
··· 1 - /* 2 - * mach-nomadik/include/mach/memory.h 3 - * 4 - * Copyright (C) 1999 ARM Limited 5 - * 6 - * This program is free software; you can redistribute it and/or modify 7 - * it under the terms of the GNU General Public License as published by 8 - * the Free Software Foundation; either version 2 of the License, or 9 - * (at your option) any later version. 10 - * 11 - * This program is distributed in the hope that it will be useful, 12 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 13 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 14 - * GNU General Public License for more details. 15 - * 16 - * You should have received a copy of the GNU General Public License 17 - * along with this program; if not, write to the Free Software 18 - * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA 19 - */ 20 - #ifndef __ASM_ARCH_MEMORY_H 21 - #define __ASM_ARCH_MEMORY_H 22 - 23 - /* 24 - * Physical DRAM offset. 25 - */ 26 - #define PLAT_PHYS_OFFSET UL(0x00000000) 27 - 28 - #endif
-21
arch/arm/mach-nuc93x/include/mach/memory.h
··· 1 - /* 2 - * arch/arm/mach-nuc93x/include/mach/memory.h 3 - * 4 - * Copyright (c) 2008 Nuvoton technology corporation 5 - * All rights reserved. 6 - * 7 - * Wan ZongShun <mcuos.com@gmail.com> 8 - * 9 - * This program is free software; you can redistribute it and/or modify 10 - * it under the terms of the GNU General Public License as published by 11 - * the Free Software Foundation; either version 2 of the License, or 12 - * (at your option) any later version. 13 - * 14 - */ 15 - 16 - #ifndef __ASM_ARCH_MEMORY_H 17 - #define __ASM_ARCH_MEMORY_H 18 - 19 - #define PLAT_PHYS_OFFSET UL(0x00000000) 20 - 21 - #endif
-1
arch/arm/mach-nuc93x/mach-nuc932evb.c
··· 35 35 36 36 MACHINE_START(NUC932EVB, "NUC932EVB") 37 37 /* Maintainer: Wan ZongShun */ 38 - .boot_params = 0, 39 38 .map_io = nuc932evb_map_io, 40 39 .init_irq = nuc93x_init_irq, 41 40 .init_machine = nuc932evb_init,
+1 -1
arch/arm/mach-omap1/board-ams-delta.c
··· 385 385 386 386 MACHINE_START(AMS_DELTA, "Amstrad E3 (Delta)") 387 387 /* Maintainer: Jonathan McDowell <noodles@earth.li> */ 388 - .boot_params = 0x10000100, 388 + .atag_offset = 0x100, 389 389 .map_io = ams_delta_map_io, 390 390 .reserve = omap_reserve, 391 391 .init_irq = ams_delta_init_irq,
+1 -1
arch/arm/mach-omap1/board-fsample.c
··· 388 388 389 389 MACHINE_START(OMAP_FSAMPLE, "OMAP730 F-Sample") 390 390 /* Maintainer: Brian Swetland <swetland@google.com> */ 391 - .boot_params = 0x10000100, 391 + .atag_offset = 0x100, 392 392 .map_io = omap_fsample_map_io, 393 393 .reserve = omap_reserve, 394 394 .init_irq = omap_fsample_init_irq,
+1 -1
arch/arm/mach-omap1/board-generic.c
··· 93 93 94 94 MACHINE_START(OMAP_GENERIC, "Generic OMAP1510/1610/1710") 95 95 /* Maintainer: Tony Lindgren <tony@atomide.com> */ 96 - .boot_params = 0x10000100, 96 + .atag_offset = 0x100, 97 97 .map_io = omap_generic_map_io, 98 98 .reserve = omap_reserve, 99 99 .init_irq = omap_generic_init_irq,
+1 -1
arch/arm/mach-omap1/board-h2.c
··· 460 460 461 461 MACHINE_START(OMAP_H2, "TI-H2") 462 462 /* Maintainer: Imre Deak <imre.deak@nokia.com> */ 463 - .boot_params = 0x10000100, 463 + .atag_offset = 0x100, 464 464 .map_io = h2_map_io, 465 465 .reserve = omap_reserve, 466 466 .init_irq = h2_init_irq,
+1 -1
arch/arm/mach-omap1/board-h3.c
··· 448 448 449 449 MACHINE_START(OMAP_H3, "TI OMAP1710 H3 board") 450 450 /* Maintainer: Texas Instruments, Inc. */ 451 - .boot_params = 0x10000100, 451 + .atag_offset = 0x100, 452 452 .map_io = h3_map_io, 453 453 .reserve = omap_reserve, 454 454 .init_irq = h3_init_irq,
+1 -1
arch/arm/mach-omap1/board-htcherald.c
··· 610 610 MACHINE_START(HERALD, "HTC Herald") 611 611 /* Maintainer: Cory Maccarrone <darkstar6262@gmail.com> */ 612 612 /* Maintainer: wing-linux.sourceforge.net */ 613 - .boot_params = 0x10000100, 613 + .atag_offset = 0x100, 614 614 .map_io = htcherald_map_io, 615 615 .reserve = omap_reserve, 616 616 .init_irq = htcherald_init_irq,
+1 -1
arch/arm/mach-omap1/board-innovator.c
··· 458 458 459 459 MACHINE_START(OMAP_INNOVATOR, "TI-Innovator") 460 460 /* Maintainer: MontaVista Software, Inc. */ 461 - .boot_params = 0x10000100, 461 + .atag_offset = 0x100, 462 462 .map_io = innovator_map_io, 463 463 .reserve = omap_reserve, 464 464 .init_irq = innovator_init_irq,
+1 -1
arch/arm/mach-omap1/board-nokia770.c
··· 263 263 } 264 264 265 265 MACHINE_START(NOKIA770, "Nokia 770") 266 - .boot_params = 0x10000100, 266 + .atag_offset = 0x100, 267 267 .map_io = omap_nokia770_map_io, 268 268 .reserve = omap_reserve, 269 269 .init_irq = omap_nokia770_init_irq,
+1 -1
arch/arm/mach-omap1/board-osk.c
··· 582 582 583 583 MACHINE_START(OMAP_OSK, "TI-OSK") 584 584 /* Maintainer: Dirk Behme <dirk.behme@de.bosch.com> */ 585 - .boot_params = 0x10000100, 585 + .atag_offset = 0x100, 586 586 .map_io = osk_map_io, 587 587 .reserve = omap_reserve, 588 588 .init_irq = osk_init_irq,
+1 -1
arch/arm/mach-omap1/board-palmte.c
··· 274 274 } 275 275 276 276 MACHINE_START(OMAP_PALMTE, "OMAP310 based Palm Tungsten E") 277 - .boot_params = 0x10000100, 277 + .atag_offset = 0x100, 278 278 .map_io = omap_palmte_map_io, 279 279 .reserve = omap_reserve, 280 280 .init_irq = omap_palmte_init_irq,
+1 -1
arch/arm/mach-omap1/board-palmtt.c
··· 321 321 } 322 322 323 323 MACHINE_START(OMAP_PALMTT, "OMAP1510 based Palm Tungsten|T") 324 - .boot_params = 0x10000100, 324 + .atag_offset = 0x100, 325 325 .map_io = omap_palmtt_map_io, 326 326 .reserve = omap_reserve, 327 327 .init_irq = omap_palmtt_init_irq,
+1 -1
arch/arm/mach-omap1/board-palmz71.c
··· 341 341 } 342 342 343 343 MACHINE_START(OMAP_PALMZ71, "OMAP310 based Palm Zire71") 344 - .boot_params = 0x10000100, 344 + .atag_offset = 0x100, 345 345 .map_io = omap_palmz71_map_io, 346 346 .reserve = omap_reserve, 347 347 .init_irq = omap_palmz71_init_irq,
+1 -1
arch/arm/mach-omap1/board-perseus2.c
··· 349 349 350 350 MACHINE_START(OMAP_PERSEUS2, "OMAP730 Perseus2") 351 351 /* Maintainer: Kevin Hilman <kjh@hilman.org> */ 352 - .boot_params = 0x10000100, 352 + .atag_offset = 0x100, 353 353 .map_io = omap_perseus2_map_io, 354 354 .reserve = omap_reserve, 355 355 .init_irq = omap_perseus2_init_irq,
+1 -1
arch/arm/mach-omap1/board-sx1.c
··· 420 420 } 421 421 422 422 MACHINE_START(SX1, "OMAP310 based Siemens SX1") 423 - .boot_params = 0x10000100, 423 + .atag_offset = 0x100, 424 424 .map_io = omap_sx1_map_io, 425 425 .reserve = omap_reserve, 426 426 .init_irq = omap_sx1_init_irq,
+1 -1
arch/arm/mach-omap1/board-voiceblue.c
··· 301 301 302 302 MACHINE_START(VOICEBLUE, "VoiceBlue OMAP5910") 303 303 /* Maintainer: Ladislav Michl <michl@2n.cz> */ 304 - .boot_params = 0x10000100, 304 + .atag_offset = 0x100, 305 305 .map_io = voiceblue_map_io, 306 306 .reserve = omap_reserve, 307 307 .init_irq = voiceblue_init_irq,
+21 -27
arch/arm/mach-omap1/include/mach/debug-macro.S
··· 13 13 14 14 #include <linux/serial_reg.h> 15 15 16 - #include <asm/memory.h> 17 - 18 16 #include <plat/serial.h> 19 - 20 - #define omap_uart_v2p(x) ((x) - PAGE_OFFSET + PLAT_PHYS_OFFSET) 21 - #define omap_uart_p2v(x) ((x) - PLAT_PHYS_OFFSET + PAGE_OFFSET) 22 17 23 18 .pushsection .data 24 19 omap_uart_phys: .word 0x0 ··· 26 31 * the desired UART phys and virt addresses temporarily into 27 32 * the omap_uart_phys and omap_uart_virt above. 28 33 */ 29 - .macro addruart, rp, rv 34 + .macro addruart, rp, rv, tmp 30 35 31 36 /* Use omap_uart_phys/virt if already configured */ 32 - 9: mrc p15, 0, \rp, c1, c0 33 - tst \rp, #1 @ MMU enabled? 34 - ldreq \rp, =omap_uart_v2p(omap_uart_phys) @ MMU disabled 35 - ldrne \rp, =omap_uart_phys @ MMU enabled 36 - add \rv, \rp, #4 @ omap_uart_virt 37 - ldr \rp, [\rp, #0] 38 - ldr \rv, [\rv, #0] 37 + 9: adr \rp, 99f @ get effective addr of 99f 38 + ldr \rv, [\rp] @ get absolute addr of 99f 39 + sub \rv, \rv, \rp @ offset between the two 40 + ldr \rp, [\rp, #4] @ abs addr of omap_uart_phys 41 + sub \tmp, \rp, \rv @ make it effective 42 + ldr \rp, [\tmp, #0] @ omap_uart_phys 43 + ldr \rv, [\tmp, #4] @ omap_uart_virt 39 44 cmp \rp, #0 @ is port configured? 40 45 cmpne \rv, #0 41 - bne 99f @ already configured 46 + bne 100f @ already configured 42 47 43 48 /* Check the debug UART configuration set in uncompress.h */ 44 - mrc p15, 0, \rp, c1, c0 45 - tst \rp, #1 @ MMU enabled? 46 - ldreq \rp, =OMAP_UART_INFO @ MMU not enabled 47 - ldrne \rp, =omap_uart_p2v(OMAP_UART_INFO) @ MMU enabled 48 - ldr \rp, [\rp, #0] 49 + and \rp, pc, #0xff000000 50 + ldr \rv, =OMAP_UART_INFO_OFS 51 + ldr \rp, [\rp, \rv] 49 52 50 53 /* Select the UART to use based on the UART1 scratchpad value */ 51 54 10: cmp \rp, #0 @ no port configured? ··· 67 74 68 75 /* Store both phys and virt address for the uart */ 69 76 98: add \rp, \rp, #0xff000000 @ phys base 70 - mrc p15, 0, \rv, c1, c0 71 - tst \rv, #1 @ MMU enabled? 
72 - ldreq \rv, =omap_uart_v2p(omap_uart_phys) @ MMU disabled 73 - ldrne \rv, =omap_uart_phys @ MMU enabled 74 - str \rp, [\rv, #0] 77 + str \rp, [\tmp, #0] @ omap_uart_phys 75 78 sub \rp, \rp, #0xff000000 @ phys base 76 79 add \rp, \rp, #0xfe000000 @ virt base 77 - add \rv, \rv, #4 @ omap_uart_lsr 78 - str \rp, [\rv, #0] 80 + str \rp, [\tmp, #4] @ omap_uart_virt 79 81 b 9b 80 - 99: 82 + 83 + .align 84 + 99: .word . 85 + .word omap_uart_phys 86 + .ltorg 87 + 88 + 100: 81 89 .endm 82 90 83 91 .macro senduart,rd,rx
+52 -1
arch/arm/mach-omap1/include/mach/memory.h
··· 2 2 * arch/arm/mach-omap1/include/mach/memory.h 3 3 */ 4 4 5 - #include <plat/memory.h> 5 + #ifndef __ASM_ARCH_MEMORY_H 6 + #define __ASM_ARCH_MEMORY_H 7 + 8 + /* 9 + * Physical DRAM offset. 10 + */ 11 + #define PLAT_PHYS_OFFSET UL(0x10000000) 12 + 13 + /* 14 + * Bus address is physical address, except for OMAP-1510 Local Bus. 15 + * OMAP-1510 bus address is translated into a Local Bus address if the 16 + * OMAP bus type is lbus. We do the address translation based on the 17 + * device overriding the defaults used in the dma-mapping API. 18 + * Note that the is_lbus_device() test is not very efficient on 1510 19 + * because of the strncmp(). 20 + */ 21 + #ifdef CONFIG_ARCH_OMAP15XX 22 + 23 + /* 24 + * OMAP-1510 Local Bus address offset 25 + */ 26 + #define OMAP1510_LB_OFFSET UL(0x30000000) 27 + 28 + #define virt_to_lbus(x) ((x) - PAGE_OFFSET + OMAP1510_LB_OFFSET) 29 + #define lbus_to_virt(x) ((x) - OMAP1510_LB_OFFSET + PAGE_OFFSET) 30 + #define is_lbus_device(dev) (cpu_is_omap15xx() && dev && (strncmp(dev_name(dev), "ohci", 4) == 0)) 31 + 32 + #define __arch_pfn_to_dma(dev, pfn) \ 33 + ({ dma_addr_t __dma = __pfn_to_phys(pfn); \ 34 + if (is_lbus_device(dev)) \ 35 + __dma = __dma - PHYS_OFFSET + OMAP1510_LB_OFFSET; \ 36 + __dma; }) 37 + 38 + #define __arch_dma_to_pfn(dev, addr) \ 39 + ({ dma_addr_t __dma = addr; \ 40 + if (is_lbus_device(dev)) \ 41 + __dma += PHYS_OFFSET - OMAP1510_LB_OFFSET; \ 42 + __phys_to_pfn(__dma); \ 43 + }) 44 + 45 + #define __arch_dma_to_virt(dev, addr) ({ (void *) (is_lbus_device(dev) ? \ 46 + lbus_to_virt(addr) : \ 47 + __phys_to_virt(addr)); }) 48 + 49 + #define __arch_virt_to_dma(dev, addr) ({ unsigned long __addr = (unsigned long)(addr); \ 50 + (dma_addr_t) (is_lbus_device(dev) ? \ 51 + virt_to_lbus(__addr) : \ 52 + __virt_to_phys(__addr)); }) 53 + 54 + #endif /* CONFIG_ARCH_OMAP15XX */ 55 + 56 + #endif
+1
arch/arm/mach-omap1/io.c
··· 121 121 #endif 122 122 123 123 omap_sram_init(); 124 + omap_init_consistent_dma_size(); 124 125 } 125 126 126 127 /*
+1 -1
arch/arm/mach-omap2/board-2430sdp.c
··· 257 257 258 258 MACHINE_START(OMAP_2430SDP, "OMAP2430 sdp2430 board") 259 259 /* Maintainer: Syed Khasim - Texas Instruments Inc */ 260 - .boot_params = 0x80000100, 260 + .atag_offset = 0x100, 261 261 .reserve = omap_reserve, 262 262 .map_io = omap_2430sdp_map_io, 263 263 .init_early = omap_2430sdp_init_early,
+1 -1
arch/arm/mach-omap2/board-3430sdp.c
··· 729 729 730 730 MACHINE_START(OMAP_3430SDP, "OMAP3430 3430SDP board") 731 731 /* Maintainer: Syed Khasim - Texas Instruments Inc */ 732 - .boot_params = 0x80000100, 732 + .atag_offset = 0x100, 733 733 .reserve = omap_reserve, 734 734 .map_io = omap3_map_io, 735 735 .init_early = omap_3430sdp_init_early,
+1 -1
arch/arm/mach-omap2/board-3630sdp.c
··· 215 215 } 216 216 217 217 MACHINE_START(OMAP_3630SDP, "OMAP 3630SDP board") 218 - .boot_params = 0x80000100, 218 + .atag_offset = 0x100, 219 219 .reserve = omap_reserve, 220 220 .map_io = omap3_map_io, 221 221 .init_early = omap_sdp_init_early,
+1 -1
arch/arm/mach-omap2/board-4430sdp.c
··· 838 838 839 839 MACHINE_START(OMAP_4430SDP, "OMAP4430 4430SDP board") 840 840 /* Maintainer: Santosh Shilimkar - Texas Instruments Inc */ 841 - .boot_params = 0x80000100, 841 + .atag_offset = 0x100, 842 842 .reserve = omap_reserve, 843 843 .map_io = omap_4430sdp_map_io, 844 844 .init_early = omap_4430sdp_init_early,
+1 -1
arch/arm/mach-omap2/board-am3517crane.c
··· 98 98 } 99 99 100 100 MACHINE_START(CRANEBOARD, "AM3517/05 CRANEBOARD") 101 - .boot_params = 0x80000100, 101 + .atag_offset = 0x100, 102 102 .reserve = omap_reserve, 103 103 .map_io = omap3_map_io, 104 104 .init_early = am3517_crane_init_early,
+1 -1
arch/arm/mach-omap2/board-am3517evm.c
··· 490 490 } 491 491 492 492 MACHINE_START(OMAP3517EVM, "OMAP3517/AM3517 EVM") 493 - .boot_params = 0x80000100, 493 + .atag_offset = 0x100, 494 494 .reserve = omap_reserve, 495 495 .map_io = omap3_map_io, 496 496 .init_early = am3517_evm_init_early,
+1 -1
arch/arm/mach-omap2/board-apollon.c
··· 350 350 351 351 MACHINE_START(OMAP_APOLLON, "OMAP24xx Apollon") 352 352 /* Maintainer: Kyungmin Park <kyungmin.park@samsung.com> */ 353 - .boot_params = 0x80000100, 353 + .atag_offset = 0x100, 354 354 .reserve = omap_reserve, 355 355 .map_io = omap_apollon_map_io, 356 356 .init_early = omap_apollon_init_early,
+2 -2
arch/arm/mach-omap2/board-cm-t35.c
··· 634 634 } 635 635 636 636 MACHINE_START(CM_T35, "Compulab CM-T35") 637 - .boot_params = 0x80000100, 637 + .atag_offset = 0x100, 638 638 .reserve = omap_reserve, 639 639 .map_io = omap3_map_io, 640 640 .init_early = cm_t35_init_early, ··· 644 644 MACHINE_END 645 645 646 646 MACHINE_START(CM_T3730, "Compulab CM-T3730") 647 - .boot_params = 0x80000100, 647 + .atag_offset = 0x100, 648 648 .reserve = omap_reserve, 649 649 .map_io = omap3_map_io, 650 650 .init_early = cm_t35_init_early,
+1 -1
arch/arm/mach-omap2/board-cm-t3517.c
··· 299 299 } 300 300 301 301 MACHINE_START(CM_T3517, "Compulab CM-T3517") 302 - .boot_params = 0x80000100, 302 + .atag_offset = 0x100, 303 303 .reserve = omap_reserve, 304 304 .map_io = omap3_map_io, 305 305 .init_early = cm_t3517_init_early,
+1 -1
arch/arm/mach-omap2/board-devkit8000.c
··· 667 667 } 668 668 669 669 MACHINE_START(DEVKIT8000, "OMAP3 Devkit8000") 670 - .boot_params = 0x80000100, 670 + .atag_offset = 0x100, 671 671 .reserve = omap_reserve, 672 672 .map_io = omap3_map_io, 673 673 .init_early = devkit8000_init_early,
+1 -1
arch/arm/mach-omap2/board-generic.c
··· 65 65 /* XXX This machine entry name should be updated */ 66 66 MACHINE_START(OMAP_GENERIC, "Generic OMAP24xx") 67 67 /* Maintainer: Paul Mundt <paul.mundt@nokia.com> */ 68 - .boot_params = 0x80000100, 68 + .atag_offset = 0x100, 69 69 .reserve = omap_reserve, 70 70 .map_io = omap_generic_map_io, 71 71 .init_early = omap_generic_init_early,
+1 -1
arch/arm/mach-omap2/board-h4.c
··· 381 381 382 382 MACHINE_START(OMAP_H4, "OMAP2420 H4 board") 383 383 /* Maintainer: Paul Mundt <paul.mundt@nokia.com> */ 384 - .boot_params = 0x80000100, 384 + .atag_offset = 0x100, 385 385 .reserve = omap_reserve, 386 386 .map_io = omap_h4_map_io, 387 387 .init_early = omap_h4_init_early,
+2 -2
arch/arm/mach-omap2/board-igep0020.c
··· 672 672 } 673 673 674 674 MACHINE_START(IGEP0020, "IGEP v2 board") 675 - .boot_params = 0x80000100, 675 + .atag_offset = 0x100, 676 676 .reserve = omap_reserve, 677 677 .map_io = omap3_map_io, 678 678 .init_early = igep_init_early, ··· 682 682 MACHINE_END 683 683 684 684 MACHINE_START(IGEP0030, "IGEP OMAP3 module") 685 - .boot_params = 0x80000100, 685 + .atag_offset = 0x100, 686 686 .reserve = omap_reserve, 687 687 .map_io = omap3_map_io, 688 688 .init_early = igep_init_early,
+1 -1
arch/arm/mach-omap2/board-ldp.c
··· 332 332 } 333 333 334 334 MACHINE_START(OMAP_LDP, "OMAP LDP board") 335 - .boot_params = 0x80000100, 335 + .atag_offset = 0x100, 336 336 .reserve = omap_reserve, 337 337 .map_io = omap3_map_io, 338 338 .init_early = omap_ldp_init_early,
+3 -3
arch/arm/mach-omap2/board-n8x0.c
··· 695 695 } 696 696 697 697 MACHINE_START(NOKIA_N800, "Nokia N800") 698 - .boot_params = 0x80000100, 698 + .atag_offset = 0x100, 699 699 .reserve = omap_reserve, 700 700 .map_io = n8x0_map_io, 701 701 .init_early = n8x0_init_early, ··· 705 705 MACHINE_END 706 706 707 707 MACHINE_START(NOKIA_N810, "Nokia N810") 708 - .boot_params = 0x80000100, 708 + .atag_offset = 0x100, 709 709 .reserve = omap_reserve, 710 710 .map_io = n8x0_map_io, 711 711 .init_early = n8x0_init_early, ··· 715 715 MACHINE_END 716 716 717 717 MACHINE_START(NOKIA_N810_WIMAX, "Nokia N810 WiMAX") 718 - .boot_params = 0x80000100, 718 + .atag_offset = 0x100, 719 719 .reserve = omap_reserve, 720 720 .map_io = n8x0_map_io, 721 721 .init_early = n8x0_init_early,
+1 -1
arch/arm/mach-omap2/board-omap3beagle.c
··· 557 557 558 558 MACHINE_START(OMAP3_BEAGLE, "OMAP3 Beagle Board") 559 559 /* Maintainer: Syed Mohammed Khasim - http://beagleboard.org */ 560 - .boot_params = 0x80000100, 560 + .atag_offset = 0x100, 561 561 .reserve = omap_reserve, 562 562 .map_io = omap3_map_io, 563 563 .init_early = omap3_beagle_init_early,
+1 -1
arch/arm/mach-omap2/board-omap3evm.c
··· 681 681 682 682 MACHINE_START(OMAP3EVM, "OMAP3 EVM") 683 683 /* Maintainer: Syed Mohammed Khasim - Texas Instruments */ 684 - .boot_params = 0x80000100, 684 + .atag_offset = 0x100, 685 685 .reserve = omap_reserve, 686 686 .map_io = omap3_map_io, 687 687 .init_early = omap3_evm_init_early,
+2 -2
arch/arm/mach-omap2/board-omap3logic.c
··· 209 209 } 210 210 211 211 MACHINE_START(OMAP3_TORPEDO, "Logic OMAP3 Torpedo board") 212 - .boot_params = 0x80000100, 212 + .atag_offset = 0x100, 213 213 .map_io = omap3_map_io, 214 214 .init_early = omap3logic_init_early, 215 215 .init_irq = omap3_init_irq, ··· 218 218 MACHINE_END 219 219 220 220 MACHINE_START(OMAP3530_LV_SOM, "OMAP Logic 3530 LV SOM board") 221 - .boot_params = 0x80000100, 221 + .atag_offset = 0x100, 222 222 .map_io = omap3_map_io, 223 223 .init_early = omap3logic_init_early, 224 224 .init_irq = omap3_init_irq,
+1 -1
arch/arm/mach-omap2/board-omap3pandora.c
··· 606 606 } 607 607 608 608 MACHINE_START(OMAP3_PANDORA, "Pandora Handheld Console") 609 - .boot_params = 0x80000100, 609 + .atag_offset = 0x100, 610 610 .reserve = omap_reserve, 611 611 .map_io = omap3_map_io, 612 612 .init_early = omap3pandora_init_early,
+1 -1
arch/arm/mach-omap2/board-omap3stalker.c
··· 494 494 495 495 MACHINE_START(SBC3530, "OMAP3 STALKER") 496 496 /* Maintainer: Jason Lam -lzg@ema-tech.com */ 497 - .boot_params = 0x80000100, 497 + .atag_offset = 0x100, 498 498 .map_io = omap3_map_io, 499 499 .init_early = omap3_stalker_init_early, 500 500 .init_irq = omap3_stalker_init_irq,
+1 -1
arch/arm/mach-omap2/board-omap3touchbook.c
··· 404 404 405 405 MACHINE_START(TOUCHBOOK, "OMAP3 touchbook Board") 406 406 /* Maintainer: Gregoire Gentil - http://www.alwaysinnovating.com */ 407 - .boot_params = 0x80000100, 407 + .atag_offset = 0x100, 408 408 .reserve = omap_reserve, 409 409 .map_io = omap3_map_io, 410 410 .init_early = omap3_touchbook_init_early,
+1 -1
arch/arm/mach-omap2/board-omap4panda.c
··· 583 583 584 584 MACHINE_START(OMAP4_PANDA, "OMAP4 Panda board") 585 585 /* Maintainer: David Anders - Texas Instruments Inc */ 586 - .boot_params = 0x80000100, 586 + .atag_offset = 0x100, 587 587 .reserve = omap_reserve, 588 588 .map_io = omap4_panda_map_io, 589 589 .init_early = omap4_panda_init_early,
+1 -1
arch/arm/mach-omap2/board-overo.c
··· 561 561 } 562 562 563 563 MACHINE_START(OVERO, "Gumstix Overo") 564 - .boot_params = 0x80000100, 564 + .atag_offset = 0x100, 565 565 .reserve = omap_reserve, 566 566 .map_io = omap3_map_io, 567 567 .init_early = overo_init_early,
+1 -1
arch/arm/mach-omap2/board-rm680.c
··· 153 153 } 154 154 155 155 MACHINE_START(NOKIA_RM680, "Nokia RM-680 board") 156 - .boot_params = 0x80000100, 156 + .atag_offset = 0x100, 157 157 .reserve = omap_reserve, 158 158 .map_io = rm680_map_io, 159 159 .init_early = rm680_init_early,
+1 -1
arch/arm/mach-omap2/board-rx51.c
··· 156 156 157 157 MACHINE_START(NOKIA_RX51, "Nokia RX-51 board") 158 158 /* Maintainer: Lauri Leukkunen <lauri.leukkunen@nokia.com> */ 159 - .boot_params = 0x80000100, 159 + .atag_offset = 0x100, 160 160 .reserve = rx51_reserve, 161 161 .map_io = rx51_map_io, 162 162 .init_early = rx51_init_early,
+1 -1
arch/arm/mach-omap2/board-ti8168evm.c
··· 48 48 49 49 MACHINE_START(TI8168EVM, "ti8168evm") 50 50 /* Maintainer: Texas Instruments */ 51 - .boot_params = 0x80000100, 51 + .atag_offset = 0x100, 52 52 .map_io = ti8168_evm_map_io, 53 53 .init_early = ti8168_init_early, 54 54 .init_irq = ti816x_init_irq,
+2 -2
arch/arm/mach-omap2/board-zoom.c
··· 133 133 } 134 134 135 135 MACHINE_START(OMAP_ZOOM2, "OMAP Zoom2 board") 136 - .boot_params = 0x80000100, 136 + .atag_offset = 0x100, 137 137 .reserve = omap_reserve, 138 138 .map_io = omap3_map_io, 139 139 .init_early = omap_zoom_init_early, ··· 143 143 MACHINE_END 144 144 145 145 MACHINE_START(OMAP_ZOOM3, "OMAP Zoom3 board") 146 - .boot_params = 0x80000100, 146 + .atag_offset = 0x100, 147 147 .reserve = omap_reserve, 148 148 .map_io = omap3_map_io, 149 149 .init_early = omap_zoom_init_early,
+36 -45
arch/arm/mach-omap2/include/mach/debug-macro.S
··· 13 13 14 14 #include <linux/serial_reg.h> 15 15 16 - #include <asm/memory.h> 17 - 18 16 #include <plat/serial.h> 19 17 20 18 #define UART_OFFSET(addr) ((addr) & 0x00ffffff) 21 - 22 - #define omap_uart_v2p(x) ((x) - PAGE_OFFSET + PLAT_PHYS_OFFSET) 23 - #define omap_uart_p2v(x) ((x) - PLAT_PHYS_OFFSET + PAGE_OFFSET) 24 19 25 20 .pushsection .data 26 21 omap_uart_phys: .word 0 ··· 29 34 * the desired UART phys and virt addresses temporarily into 30 35 * the omap_uart_phys and omap_uart_virt above. 31 36 */ 32 - .macro addruart, rp, rv 37 + .macro addruart, rp, rv, tmp 33 38 34 39 /* Use omap_uart_phys/virt if already configured */ 35 - 10: mrc p15, 0, \rp, c1, c0 36 - tst \rp, #1 @ MMU enabled? 37 - ldreq \rp, =omap_uart_v2p(omap_uart_phys) @ MMU disabled 38 - ldrne \rp, =omap_uart_phys @ MMU enabled 39 - add \rv, \rp, #4 @ omap_uart_virt 40 - ldr \rp, [\rp, #0] 41 - ldr \rv, [\rv, #0] 40 + 10: adr \rp, 99f @ get effective addr of 99f 41 + ldr \rv, [\rp] @ get absolute addr of 99f 42 + sub \rv, \rv, \rp @ offset between the two 43 + ldr \rp, [\rp, #4] @ abs addr of omap_uart_phys 44 + sub \tmp, \rp, \rv @ make it effective 45 + ldr \rp, [\tmp, #0] @ omap_uart_phys 46 + ldr \rv, [\tmp, #4] @ omap_uart_virt 42 47 cmp \rp, #0 @ is port configured? 43 48 cmpne \rv, #0 44 - bne 99f @ already configured 49 + bne 100f @ already configured 45 50 46 51 /* Check the debug UART configuration set in uncompress.h */ 47 - mrc p15, 0, \rp, c1, c0 48 - tst \rp, #1 @ MMU enabled? 49 - ldreq \rp, =OMAP_UART_INFO @ MMU not enabled 50 - ldrne \rp, =omap_uart_p2v(OMAP_UART_INFO) @ MMU enabled 51 - ldr \rp, [\rp, #0] 52 + mov \rp, pc 53 + ldr \rv, =OMAP_UART_INFO_OFS 54 + and \rp, \rp, #0xff000000 55 + ldr \rp, [\rp, \rv] 52 56 53 57 /* Select the UART to use based on the UART1 scratchpad value */ 54 58 cmp \rp, #0 @ no port configured? 
··· 100 106 b 98f 101 107 83: mov \rp, #UART_OFFSET(TI816X_UART3_BASE) 102 108 b 98f 109 + 103 110 95: ldr \rp, =ZOOM_UART_BASE 104 - mrc p15, 0, \rv, c1, c0 105 - tst \rv, #1 @ MMU enabled? 106 - ldreq \rv, =omap_uart_v2p(omap_uart_phys) @ MMU disabled 107 - ldrne \rv, =omap_uart_phys @ MMU enabled 108 - str \rp, [\rv, #0] 111 + str \rp, [\tmp, #0] @ omap_uart_phys 109 112 ldr \rp, =ZOOM_UART_VIRT 110 - add \rv, \rv, #4 @ omap_uart_virt 111 - str \rp, [\rv, #0] 113 + str \rp, [\tmp, #4] @ omap_uart_virt 112 114 mov \rp, #(UART_LSR << ZOOM_PORT_SHIFT) 113 - add \rv, \rv, #4 @ omap_uart_lsr 114 - str \rp, [\rv, #0] 115 + str \rp, [\tmp, #8] @ omap_uart_lsr 115 116 b 10b 116 117 117 118 /* Store both phys and virt address for the uart */ 118 119 98: add \rp, \rp, #0x48000000 @ phys base 119 - mrc p15, 0, \rv, c1, c0 120 - tst \rv, #1 @ MMU enabled? 121 - ldreq \rv, =omap_uart_v2p(omap_uart_phys) @ MMU disabled 122 - ldrne \rv, =omap_uart_phys @ MMU enabled 123 - str \rp, [\rv, #0] 120 + str \rp, [\tmp, #0] @ omap_uart_phys 124 121 sub \rp, \rp, #0x48000000 @ phys base 125 122 add \rp, \rp, #0xfa000000 @ virt base 126 - add \rv, \rv, #4 @ omap_uart_virt 127 - str \rp, [\rv, #0] 123 + str \rp, [\tmp, #4] @ omap_uart_virt 128 124 mov \rp, #(UART_LSR << OMAP_PORT_SHIFT) 129 - add \rv, \rv, #4 @ omap_uart_lsr 130 - str \rp, [\rv, #0] 125 + str \rp, [\tmp, #8] @ omap_uart_lsr 131 126 132 127 b 10b 133 - 99: 128 + 129 + .align 130 + 99: .word . 
131 + .word omap_uart_phys 132 + .ltorg 133 + 134 + 100: /* Pass the UART_LSR reg address */ 135 + ldr \tmp, [\tmp, #8] @ omap_uart_lsr 136 + add \rp, \rp, \tmp 137 + add \rv, \rv, \tmp 134 138 .endm 135 139 136 140 .macro senduart,rd,rx 137 - strb \rd, [\rx] 141 + orr \rd, \rd, \rx, lsl #24 @ preserve LSR reg offset 142 + bic \rx, \rx, #0xff @ get base (THR) reg address 143 + strb \rd, [\rx] @ send lower byte of rd 144 + orr \rx, \rx, \rd, lsr #24 @ restore original rx (LSR) 145 + bic \rd, \rd, #(0xff << 24) @ restore original rd 138 146 .endm 139 147 140 148 .macro busyuart,rd,rx 141 - 1001: mrc p15, 0, \rd, c1, c0 142 - tst \rd, #1 @ MMU enabled? 143 - ldreq \rd, =omap_uart_v2p(omap_uart_lsr) @ MMU disabled 144 - ldrne \rd, =omap_uart_lsr @ MMU enabled 145 - ldr \rd, [\rd, #0] 146 - ldrb \rd, [\rx, \rd] 149 + 1001: ldrb \rd, [\rx] @ rx contains UART_LSR address 147 150 and \rd, \rd, #(UART_LSR_TEMT | UART_LSR_THRE) 148 151 teq \rd, #(UART_LSR_TEMT | UART_LSR_THRE) 149 152 bne 1001b
+1 -13
arch/arm/mach-omap2/include/mach/entry-macro.S
··· 78 78 4401: ldr \irqstat, [\base, #GIC_CPU_INTACK] 79 79 ldr \tmp, =1021 80 80 bic \irqnr, \irqstat, #0x1c00 81 - cmp \irqnr, #29 81 + cmp \irqnr, #15 82 82 cmpcc \irqnr, \irqnr 83 83 cmpne \irqnr, \tmp 84 84 cmpcs \irqnr, \irqnr ··· 100 100 strcc \irqstat, [\base, #GIC_CPU_EOI] 101 101 it cs 102 102 cmpcs \irqnr, \irqnr 103 - .endm 104 - 105 - /* As above, this assumes that irqstat and base are preserved */ 106 - 107 - .macro test_for_ltirq, irqnr, irqstat, base, tmp 108 - bic \irqnr, \irqstat, #0x1c00 109 - mov \tmp, #0 110 - cmp \irqnr, #29 111 - itt eq 112 - moveq \tmp, #1 113 - streq \irqstat, [\base, #GIC_CPU_EOI] 114 - cmp \tmp, #0 115 103 .endm 116 104 #endif /* CONFIG_SMP */ 117 105
-5
arch/arm/mach-omap2/include/mach/memory.h
··· 1 - /* 2 - * arch/arm/mach-omap2/include/mach/memory.h 3 - */ 4 - 5 - #include <plat/memory.h>
+1 -1
arch/arm/mach-omap2/io.c
··· 16 16 * it under the terms of the GNU General Public License version 2 as 17 17 * published by the Free Software Foundation. 18 18 */ 19 - 20 19 #include <linux/module.h> 21 20 #include <linux/kernel.h> 22 21 #include <linux/init.h> ··· 249 250 250 251 omap2_check_revision(); 251 252 omap_sram_init(); 253 + omap_init_consistent_dma_size(); 252 254 } 253 255 254 256 #ifdef CONFIG_SOC_OMAP2420
+2 -2
arch/arm/mach-orion5x/d2net-setup.c
··· 336 336 337 337 #ifdef CONFIG_MACH_D2NET 338 338 MACHINE_START(D2NET, "LaCie d2 Network") 339 - .boot_params = 0x00000100, 339 + .atag_offset = 0x100, 340 340 .init_machine = d2net_init, 341 341 .map_io = orion5x_map_io, 342 342 .init_early = orion5x_init_early, ··· 348 348 349 349 #ifdef CONFIG_MACH_BIGDISK 350 350 MACHINE_START(BIGDISK, "LaCie Big Disk Network") 351 - .boot_params = 0x00000100, 351 + .atag_offset = 0x100, 352 352 .init_machine = d2net_init, 353 353 .map_io = orion5x_map_io, 354 354 .init_early = orion5x_init_early,
+1 -1
arch/arm/mach-orion5x/db88f5281-setup.c
··· 358 358 359 359 MACHINE_START(DB88F5281, "Marvell Orion-2 Development Board") 360 360 /* Maintainer: Tzachi Perelstein <tzachi@marvell.com> */ 361 - .boot_params = 0x00000100, 361 + .atag_offset = 0x100, 362 362 .init_machine = db88f5281_init, 363 363 .map_io = orion5x_map_io, 364 364 .init_early = orion5x_init_early,
+1 -1
arch/arm/mach-orion5x/dns323-setup.c
··· 729 729 /* Warning: D-Link uses a wrong mach-type (=526) in their bootloader */ 730 730 MACHINE_START(DNS323, "D-Link DNS-323") 731 731 /* Maintainer: Herbert Valerio Riedel <hvr@gnu.org> */ 732 - .boot_params = 0x00000100, 732 + .atag_offset = 0x100, 733 733 .init_machine = dns323_init, 734 734 .map_io = orion5x_map_io, 735 735 .init_early = orion5x_init_early,
+1 -1
arch/arm/mach-orion5x/edmini_v2-setup.c
··· 251 251 /* Warning: LaCie use a wrong mach-type (0x20e=526) in their bootloader. */ 252 252 MACHINE_START(EDMINI_V2, "LaCie Ethernet Disk mini V2") 253 253 /* Maintainer: Christopher Moore <moore@free.fr> */ 254 - .boot_params = 0x00000100, 254 + .atag_offset = 0x100, 255 255 .init_machine = edmini_v2_init, 256 256 .map_io = orion5x_map_io, 257 257 .init_early = orion5x_init_early,
+1 -1
arch/arm/mach-orion5x/include/mach/debug-macro.S
··· 10 10 11 11 #include <mach/orion5x.h> 12 12 13 - .macro addruart, rp, rv 13 + .macro addruart, rp, rv, tmp 14 14 ldr \rp, =ORION5X_REGS_PHYS_BASE 15 15 ldr \rv, =ORION5X_REGS_VIRT_BASE 16 16 orr \rp, \rp, #0x00012000
-12
arch/arm/mach-orion5x/include/mach/memory.h
··· 1 - /* 2 - * arch/arm/mach-orion5x/include/mach/memory.h 3 - * 4 - * Marvell Orion memory definitions 5 - */ 6 - 7 - #ifndef __ASM_ARCH_MEMORY_H 8 - #define __ASM_ARCH_MEMORY_H 9 - 10 - #define PLAT_PHYS_OFFSET UL(0x00000000) 11 - 12 - #endif
+2 -2
arch/arm/mach-orion5x/kurobox_pro-setup.c
··· 379 379 #ifdef CONFIG_MACH_KUROBOX_PRO 380 380 MACHINE_START(KUROBOX_PRO, "Buffalo/Revogear Kurobox Pro") 381 381 /* Maintainer: Ronen Shitrit <rshitrit@marvell.com> */ 382 - .boot_params = 0x00000100, 382 + .atag_offset = 0x100, 383 383 .init_machine = kurobox_pro_init, 384 384 .map_io = orion5x_map_io, 385 385 .init_early = orion5x_init_early, ··· 392 392 #ifdef CONFIG_MACH_LINKSTATION_PRO 393 393 MACHINE_START(LINKSTATION_PRO, "Buffalo Linkstation Pro/Live") 394 394 /* Maintainer: Byron Bradley <byron.bbradley@gmail.com> */ 395 - .boot_params = 0x00000100, 395 + .atag_offset = 0x100, 396 396 .init_machine = kurobox_pro_init, 397 397 .map_io = orion5x_map_io, 398 398 .init_early = orion5x_init_early,
+1 -1
arch/arm/mach-orion5x/ls-chl-setup.c
··· 318 318 319 319 MACHINE_START(LINKSTATION_LSCHL, "Buffalo Linkstation LiveV3 (LS-CHL)") 320 320 /* Maintainer: Ash Hughes <ashley.hughes@blueyonder.co.uk> */ 321 - .boot_params = 0x00000100, 321 + .atag_offset = 0x100, 322 322 .init_machine = lschl_init, 323 323 .map_io = orion5x_map_io, 324 324 .init_early = orion5x_init_early,
+1 -1
arch/arm/mach-orion5x/ls_hgl-setup.c
··· 265 265 266 266 MACHINE_START(LINKSTATION_LS_HGL, "Buffalo Linkstation LS-HGL") 267 267 /* Maintainer: Zhu Qingsen <zhuqs@cn.fujistu.com> */ 268 - .boot_params = 0x00000100, 268 + .atag_offset = 0x100, 269 269 .init_machine = ls_hgl_init, 270 270 .map_io = orion5x_map_io, 271 271 .init_early = orion5x_init_early,
+1 -1
arch/arm/mach-orion5x/lsmini-setup.c
··· 267 267 #ifdef CONFIG_MACH_LINKSTATION_MINI 268 268 MACHINE_START(LINKSTATION_MINI, "Buffalo Linkstation Mini") 269 269 /* Maintainer: Alexey Kopytko <alexey@kopytko.ru> */ 270 - .boot_params = 0x00000100, 270 + .atag_offset = 0x100, 271 271 .init_machine = lsmini_init, 272 272 .map_io = orion5x_map_io, 273 273 .init_early = orion5x_init_early,
+1 -1
arch/arm/mach-orion5x/mss2-setup.c
··· 261 261 262 262 MACHINE_START(MSS2, "Maxtor Shared Storage II") 263 263 /* Maintainer: Sylver Bruneau <sylver.bruneau@googlemail.com> */ 264 - .boot_params = 0x00000100, 264 + .atag_offset = 0x100, 265 265 .init_machine = mss2_init, 266 266 .map_io = orion5x_map_io, 267 267 .init_early = orion5x_init_early,
+1 -1
arch/arm/mach-orion5x/mv2120-setup.c
··· 228 228 /* Warning: HP uses a wrong mach-type (=526) in their bootloader */ 229 229 MACHINE_START(MV2120, "HP Media Vault mv2120") 230 230 /* Maintainer: Martin Michlmayr <tbm@cyrius.com> */ 231 - .boot_params = 0x00000100, 231 + .atag_offset = 0x100, 232 232 .init_machine = mv2120_init, 233 233 .map_io = orion5x_map_io, 234 234 .init_early = orion5x_init_early,
+1 -1
arch/arm/mach-orion5x/net2big-setup.c
··· 419 419 420 420 /* Warning: LaCie use a wrong mach-type (0x20e=526) in their bootloader. */ 421 421 MACHINE_START(NET2BIG, "LaCie 2Big Network") 422 - .boot_params = 0x00000100, 422 + .atag_offset = 0x100, 423 423 .init_machine = net2big_init, 424 424 .map_io = orion5x_map_io, 425 425 .init_early = orion5x_init_early,
+1 -1
arch/arm/mach-orion5x/rd88f5181l-fxo-setup.c
··· 168 168 169 169 MACHINE_START(RD88F5181L_FXO, "Marvell Orion-VoIP FXO Reference Design") 170 170 /* Maintainer: Nicolas Pitre <nico@marvell.com> */ 171 - .boot_params = 0x00000100, 171 + .atag_offset = 0x100, 172 172 .init_machine = rd88f5181l_fxo_init, 173 173 .map_io = orion5x_map_io, 174 174 .init_early = orion5x_init_early,
+1 -1
arch/arm/mach-orion5x/rd88f5181l-ge-setup.c
··· 180 180 181 181 MACHINE_START(RD88F5181L_GE, "Marvell Orion-VoIP GE Reference Design") 182 182 /* Maintainer: Lennert Buytenhek <buytenh@marvell.com> */ 183 - .boot_params = 0x00000100, 183 + .atag_offset = 0x100, 184 184 .init_machine = rd88f5181l_ge_init, 185 185 .map_io = orion5x_map_io, 186 186 .init_early = orion5x_init_early,
+1 -1
arch/arm/mach-orion5x/rd88f5182-setup.c
··· 305 305 306 306 MACHINE_START(RD88F5182, "Marvell Orion-NAS Reference Design") 307 307 /* Maintainer: Ronen Shitrit <rshitrit@marvell.com> */ 308 - .boot_params = 0x00000100, 308 + .atag_offset = 0x100, 309 309 .init_machine = rd88f5182_init, 310 310 .map_io = orion5x_map_io, 311 311 .init_early = orion5x_init_early,
+1 -1
arch/arm/mach-orion5x/rd88f6183ap-ge-setup.c
··· 121 121 122 122 MACHINE_START(RD88F6183AP_GE, "Marvell Orion-1-90 AP GE Reference Design") 123 123 /* Maintainer: Lennert Buytenhek <buytenh@marvell.com> */ 124 - .boot_params = 0x00000100, 124 + .atag_offset = 0x100, 125 125 .init_machine = rd88f6183ap_ge_init, 126 126 .map_io = orion5x_map_io, 127 127 .init_early = orion5x_init_early,
+1 -1
arch/arm/mach-orion5x/terastation_pro2-setup.c
··· 357 357 358 358 MACHINE_START(TERASTATION_PRO2, "Buffalo Terastation Pro II/Live") 359 359 /* Maintainer: Sylver Bruneau <sylver.bruneau@googlemail.com> */ 360 - .boot_params = 0x00000100, 360 + .atag_offset = 0x100, 361 361 .init_machine = tsp2_init, 362 362 .map_io = orion5x_map_io, 363 363 .init_early = orion5x_init_early,
+1 -1
arch/arm/mach-orion5x/ts209-setup.c
··· 322 322 323 323 MACHINE_START(TS209, "QNAP TS-109/TS-209") 324 324 /* Maintainer: Byron Bradley <byron.bbradley@gmail.com> */ 325 - .boot_params = 0x00000100, 325 + .atag_offset = 0x100, 326 326 .init_machine = qnap_ts209_init, 327 327 .map_io = orion5x_map_io, 328 328 .init_early = orion5x_init_early,
+1 -1
arch/arm/mach-orion5x/ts409-setup.c
··· 311 311 312 312 MACHINE_START(TS409, "QNAP TS-409") 313 313 /* Maintainer: Sylver Bruneau <sylver.bruneau@gmail.com> */ 314 - .boot_params = 0x00000100, 314 + .atag_offset = 0x100, 315 315 .init_machine = qnap_ts409_init, 316 316 .map_io = orion5x_map_io, 317 317 .init_early = orion5x_init_early,
+1 -1
arch/arm/mach-orion5x/ts78xx-setup.c
··· 621 621 622 622 MACHINE_START(TS78XX, "Technologic Systems TS-78xx SBC") 623 623 /* Maintainer: Alexander Clouter <alex@digriz.org.uk> */ 624 - .boot_params = 0x00000100, 624 + .atag_offset = 0x100, 625 625 .init_machine = ts78xx_init, 626 626 .map_io = ts78xx_map_io, 627 627 .init_early = orion5x_init_early,
+1 -1
arch/arm/mach-orion5x/wnr854t-setup.c
··· 172 172 173 173 MACHINE_START(WNR854T, "Netgear WNR854T") 174 174 /* Maintainer: Imre Kaloz <kaloz@openwrt.org> */ 175 - .boot_params = 0x00000100, 175 + .atag_offset = 0x100, 176 176 .init_machine = wnr854t_init, 177 177 .map_io = orion5x_map_io, 178 178 .init_early = orion5x_init_early,
+1 -1
arch/arm/mach-orion5x/wrt350n-v2-setup.c
··· 260 260 261 261 MACHINE_START(WRT350N_V2, "Linksys WRT350N v2") 262 262 /* Maintainer: Lennert Buytenhek <buytenh@marvell.com> */ 263 - .boot_params = 0x00000100, 263 + .atag_offset = 0x100, 264 264 .init_machine = wrt350n_v2_init, 265 265 .map_io = orion5x_map_io, 266 266 .init_early = orion5x_init_early,
+1 -1
arch/arm/mach-pnx4008/core.c
··· 264 264 265 265 MACHINE_START(PNX4008, "Philips PNX4008") 266 266 /* Maintainer: MontaVista Software Inc. */ 267 - .boot_params = 0x80000100, 267 + .atag_offset = 0x100, 268 268 .map_io = pnx4008_map_io, 269 269 .init_irq = pnx4008_init_irq, 270 270 .init_machine = pnx4008_init,
+1 -1
arch/arm/mach-pnx4008/include/mach/debug-macro.S
··· 11 11 * 12 12 */ 13 13 14 - .macro addruart, rp, rv 14 + .macro addruart, rp, rv, tmp 15 15 mov \rp, #0x00090000 16 16 add \rv, \rp, #0xf4000000 @ virtual 17 17 add \rp, \rp, #0x40000000 @ physical
-21
arch/arm/mach-pnx4008/include/mach/memory.h
··· 1 - /* 2 - * arch/arm/mach-pnx4008/include/mach/memory.h 3 - * 4 - * Copyright (c) 2005 Philips Semiconductors 5 - * Copyright (c) 2005 MontaVista Software, Inc. 6 - * 7 - * This program is free software; you can redistribute it and/or modify it 8 - * under the terms of the GNU General Public License as published by the 9 - * Free Software Foundation; either version 2 of the License, or (at your 10 - * option) any later version. 11 - */ 12 - 13 - #ifndef __ASM_ARCH_MEMORY_H 14 - #define __ASM_ARCH_MEMORY_H 15 - 16 - /* 17 - * Physical DRAM offset. 18 - */ 19 - #define PLAT_PHYS_OFFSET UL(0x80000000) 20 - 21 - #endif
+1 -1
arch/arm/mach-prima2/include/mach/debug-macro.S
··· 9 9 #include <mach/hardware.h> 10 10 #include <mach/uart.h> 11 11 12 - .macro addruart, rp, rv 12 + .macro addruart, rp, rv, tmp 13 13 ldr \rp, =SIRFSOC_UART1_PA_BASE @ physical 14 14 ldr \rv, =SIRFSOC_UART1_VA_BASE @ virtual 15 15 .endm
-21
arch/arm/mach-prima2/include/mach/memory.h
··· 1 - /* 2 - * arch/arm/mach-prima2/include/mach/memory.h 3 - * 4 - * Copyright (c) 2010 – 2011 Cambridge Silicon Radio Limited, a CSR plc group company. 5 - * 6 - * Licensed under GPLv2 or later. 7 - */ 8 - 9 - #ifndef __ASM_ARCH_MEMORY_H 10 - #define __ASM_ARCH_MEMORY_H 11 - 12 - #define PLAT_PHYS_OFFSET UL(0x00000000) 13 - 14 - /* 15 - * Restrict DMA-able region to workaround silicon limitation. 16 - * The limitation restricts buffers available for DMA to SD/MMC 17 - * hardware to be below 256MB 18 - */ 19 - #define ARM_DMA_ZONE_SIZE (SZ_256M) 20 - 21 - #endif
+2 -3
arch/arm/mach-prima2/l2x0.c
··· 13 13 #include <linux/of.h> 14 14 #include <linux/of_address.h> 15 15 #include <asm/hardware/cache-l2x0.h> 16 - #include <mach/memory.h> 17 16 18 17 #define L2X0_ADDR_FILTERING_START 0xC00 19 18 #define L2X0_ADDR_FILTERING_END 0xC04 ··· 40 41 /* 41 42 * set the physical memory windows L2 cache will cover 42 43 */ 43 - writel_relaxed(PLAT_PHYS_OFFSET + 1024 * 1024 * 1024, 44 + writel_relaxed(PHYS_OFFSET + 1024 * 1024 * 1024, 44 45 sirfsoc_l2x_base + L2X0_ADDR_FILTERING_END); 45 - writel_relaxed(PLAT_PHYS_OFFSET | 0x1, 46 + writel_relaxed(PHYS_OFFSET | 0x1, 46 47 sirfsoc_l2x_base + L2X0_ADDR_FILTERING_START); 47 48 48 49 writel_relaxed(0,
+2 -1
arch/arm/mach-prima2/prima2.c
··· 31 31 32 32 MACHINE_START(PRIMA2_EVB, "prima2cb") 33 33 /* Maintainer: Barry Song <baohua.song@csr.com> */ 34 - .boot_params = 0x00000100, 34 + .atag_offset = 0x100, 35 35 .init_early = sirfsoc_of_clk_init, 36 36 .map_io = sirfsoc_map_lluart, 37 37 .init_irq = sirfsoc_of_irq_init, 38 38 .timer = &sirfsoc_timer, 39 + .dma_zone_size = SZ_256M, 39 40 .init_machine = sirfsoc_mach_init, 40 41 .dt_compat = prima2cb_dt_match, 41 42 MACHINE_END
+1 -1
arch/arm/mach-pxa/balloon3.c
··· 828 828 .handle_irq = pxa27x_handle_irq, 829 829 .timer = &pxa_timer, 830 830 .init_machine = balloon3_init, 831 - .boot_params = PLAT_PHYS_OFFSET + 0x100, 831 + .atag_offset = 0x100, 832 832 MACHINE_END
+1 -1
arch/arm/mach-pxa/capc7117.c
··· 148 148 149 149 MACHINE_START(CAPC7117, 150 150 "Embedian CAPC-7117 evaluation kit based on the MXM-8x10 CoM") 151 - .boot_params = 0xa0000100, 151 + .atag_offset = 0x100, 152 152 .map_io = pxa3xx_map_io, 153 153 .init_irq = pxa3xx_init_irq, 154 154 .handle_irq = pxa3xx_handle_irq,
+1 -1
arch/arm/mach-pxa/cm-x2xx.c
··· 513 513 #endif 514 514 515 515 MACHINE_START(ARMCORE, "Compulab CM-X2XX") 516 - .boot_params = 0xa0000100, 516 + .atag_offset = 0x100, 517 517 .map_io = cmx2xx_map_io, 518 518 .nr_irqs = CMX2XX_NR_IRQS, 519 519 .init_irq = cmx2xx_init_irq,
+1 -1
arch/arm/mach-pxa/cm-x300.c
··· 852 852 } 853 853 854 854 MACHINE_START(CM_X300, "CM-X300 module") 855 - .boot_params = 0xa0000100, 855 + .atag_offset = 0x100, 856 856 .map_io = pxa3xx_map_io, 857 857 .init_irq = pxa3xx_init_irq, 858 858 .handle_irq = pxa3xx_handle_irq,
+2 -2
arch/arm/mach-pxa/colibri-pxa270.c
··· 306 306 } 307 307 308 308 MACHINE_START(COLIBRI, "Toradex Colibri PXA270") 309 - .boot_params = COLIBRI_SDRAM_BASE + 0x100, 309 + .atag_offset = 0x100, 310 310 .init_machine = colibri_pxa270_init, 311 311 .map_io = pxa27x_map_io, 312 312 .init_irq = pxa27x_init_irq, ··· 315 315 MACHINE_END 316 316 317 317 MACHINE_START(INCOME, "Income s.r.o. SH-Dmaster PXA270 SBC") 318 - .boot_params = 0xa0000100, 318 + .atag_offset = 0x100, 319 319 .init_machine = colibri_pxa270_income_init, 320 320 .map_io = pxa27x_map_io, 321 321 .init_irq = pxa27x_init_irq,
+1 -1
arch/arm/mach-pxa/colibri-pxa300.c
··· 183 183 } 184 184 185 185 MACHINE_START(COLIBRI300, "Toradex Colibri PXA300") 186 - .boot_params = COLIBRI_SDRAM_BASE + 0x100, 186 + .atag_offset = 0x100, 187 187 .init_machine = colibri_pxa300_init, 188 188 .map_io = pxa3xx_map_io, 189 189 .init_irq = pxa3xx_init_irq,
+1 -1
arch/arm/mach-pxa/colibri-pxa320.c
··· 253 253 } 254 254 255 255 MACHINE_START(COLIBRI320, "Toradex Colibri PXA320") 256 - .boot_params = COLIBRI_SDRAM_BASE + 0x100, 256 + .atag_offset = 0x100, 257 257 .init_machine = colibri_pxa320_init, 258 258 .map_io = pxa3xx_map_io, 259 259 .init_irq = pxa3xx_init_irq,
+1 -1
arch/arm/mach-pxa/csb726.c
··· 272 272 } 273 273 274 274 MACHINE_START(CSB726, "Cogent CSB726") 275 - .boot_params = 0xa0000100, 275 + .atag_offset = 0x100, 276 276 .map_io = pxa27x_map_io, 277 277 .init_irq = pxa27x_init_irq, 278 278 .handle_irq = pxa27x_handle_irq,
+2 -2
arch/arm/mach-pxa/em-x270.c
··· 1299 1299 } 1300 1300 1301 1301 MACHINE_START(EM_X270, "Compulab EM-X270") 1302 - .boot_params = 0xa0000100, 1302 + .atag_offset = 0x100, 1303 1303 .map_io = pxa27x_map_io, 1304 1304 .init_irq = pxa27x_init_irq, 1305 1305 .handle_irq = pxa27x_handle_irq, ··· 1308 1308 MACHINE_END 1309 1309 1310 1310 MACHINE_START(EXEDA, "Compulab eXeda") 1311 - .boot_params = 0xa0000100, 1311 + .atag_offset = 0x100, 1312 1312 .map_io = pxa27x_map_io, 1313 1313 .init_irq = pxa27x_init_irq, 1314 1314 .handle_irq = pxa27x_handle_irq,
+6 -6
arch/arm/mach-pxa/eseries.c
··· 188 188 189 189 MACHINE_START(E330, "Toshiba e330") 190 190 /* Maintainer: Ian Molton (spyro@f2s.com) */ 191 - .boot_params = 0xa0000100, 191 + .atag_offset = 0x100, 192 192 .map_io = pxa25x_map_io, 193 193 .nr_irqs = ESERIES_NR_IRQS, 194 194 .init_irq = pxa25x_init_irq, ··· 238 238 239 239 MACHINE_START(E350, "Toshiba e350") 240 240 /* Maintainer: Ian Molton (spyro@f2s.com) */ 241 - .boot_params = 0xa0000100, 241 + .atag_offset = 0x100, 242 242 .map_io = pxa25x_map_io, 243 243 .nr_irqs = ESERIES_NR_IRQS, 244 244 .init_irq = pxa25x_init_irq, ··· 361 361 362 362 MACHINE_START(E400, "Toshiba e400") 363 363 /* Maintainer: Ian Molton (spyro@f2s.com) */ 364 - .boot_params = 0xa0000100, 364 + .atag_offset = 0x100, 365 365 .map_io = pxa25x_map_io, 366 366 .nr_irqs = ESERIES_NR_IRQS, 367 367 .init_irq = pxa25x_init_irq, ··· 550 550 551 551 MACHINE_START(E740, "Toshiba e740") 552 552 /* Maintainer: Ian Molton (spyro@f2s.com) */ 553 - .boot_params = 0xa0000100, 553 + .atag_offset = 0x100, 554 554 .map_io = pxa25x_map_io, 555 555 .nr_irqs = ESERIES_NR_IRQS, 556 556 .init_irq = pxa25x_init_irq, ··· 742 742 743 743 MACHINE_START(E750, "Toshiba e750") 744 744 /* Maintainer: Ian Molton (spyro@f2s.com) */ 745 - .boot_params = 0xa0000100, 745 + .atag_offset = 0x100, 746 746 .map_io = pxa25x_map_io, 747 747 .nr_irqs = ESERIES_NR_IRQS, 748 748 .init_irq = pxa25x_init_irq, ··· 947 947 948 948 MACHINE_START(E800, "Toshiba e800") 949 949 /* Maintainer: Ian Molton (spyro@f2s.com) */ 950 - .boot_params = 0xa0000100, 950 + .atag_offset = 0x100, 951 951 .map_io = pxa25x_map_io, 952 952 .nr_irqs = ESERIES_NR_IRQS, 953 953 .init_irq = pxa25x_init_irq,
+6 -6
arch/arm/mach-pxa/ezx.c
··· 797 797 } 798 798 799 799 MACHINE_START(EZX_A780, "Motorola EZX A780") 800 - .boot_params = 0xa0000100, 800 + .atag_offset = 0x100, 801 801 .map_io = pxa27x_map_io, 802 802 .nr_irqs = EZX_NR_IRQS, 803 803 .init_irq = pxa27x_init_irq, ··· 863 863 } 864 864 865 865 MACHINE_START(EZX_E680, "Motorola EZX E680") 866 - .boot_params = 0xa0000100, 866 + .atag_offset = 0x100, 867 867 .map_io = pxa27x_map_io, 868 868 .nr_irqs = EZX_NR_IRQS, 869 869 .init_irq = pxa27x_init_irq, ··· 929 929 } 930 930 931 931 MACHINE_START(EZX_A1200, "Motorola EZX A1200") 932 - .boot_params = 0xa0000100, 932 + .atag_offset = 0x100, 933 933 .map_io = pxa27x_map_io, 934 934 .nr_irqs = EZX_NR_IRQS, 935 935 .init_irq = pxa27x_init_irq, ··· 1120 1120 } 1121 1121 1122 1122 MACHINE_START(EZX_A910, "Motorola EZX A910") 1123 - .boot_params = 0xa0000100, 1123 + .atag_offset = 0x100, 1124 1124 .map_io = pxa27x_map_io, 1125 1125 .nr_irqs = EZX_NR_IRQS, 1126 1126 .init_irq = pxa27x_init_irq, ··· 1186 1186 } 1187 1187 1188 1188 MACHINE_START(EZX_E6, "Motorola EZX E6") 1189 - .boot_params = 0xa0000100, 1189 + .atag_offset = 0x100, 1190 1190 .map_io = pxa27x_map_io, 1191 1191 .nr_irqs = EZX_NR_IRQS, 1192 1192 .init_irq = pxa27x_init_irq, ··· 1226 1226 } 1227 1227 1228 1228 MACHINE_START(EZX_E2, "Motorola EZX E2") 1229 - .boot_params = 0xa0000100, 1229 + .atag_offset = 0x100, 1230 1230 .map_io = pxa27x_map_io, 1231 1231 .nr_irqs = EZX_NR_IRQS, 1232 1232 .init_irq = pxa27x_init_irq,
+1 -1
arch/arm/mach-pxa/gumstix.c
··· 233 233 } 234 234 235 235 MACHINE_START(GUMSTIX, "Gumstix") 236 - .boot_params = 0xa0000100, /* match u-boot bi_boot_params */ 236 + .atag_offset = 0x100, /* match u-boot bi_boot_params */ 237 237 .map_io = pxa25x_map_io, 238 238 .init_irq = pxa25x_init_irq, 239 239 .handle_irq = pxa25x_handle_irq,
+1 -1
arch/arm/mach-pxa/h5000.c
··· 203 203 } 204 204 205 205 MACHINE_START(H5400, "HP iPAQ H5000") 206 - .boot_params = 0xa0000100, 206 + .atag_offset = 0x100, 207 207 .map_io = pxa25x_map_io, 208 208 .init_irq = pxa25x_init_irq, 209 209 .handle_irq = pxa25x_handle_irq,
+1 -1
arch/arm/mach-pxa/himalaya.c
··· 158 158 159 159 160 160 MACHINE_START(HIMALAYA, "HTC Himalaya") 161 - .boot_params = 0xa0000100, 161 + .atag_offset = 0x100, 162 162 .map_io = pxa25x_map_io, 163 163 .init_irq = pxa25x_init_irq, 164 164 .handle_irq = pxa25x_handle_irq,
+1 -1
arch/arm/mach-pxa/hx4700.c
··· 838 838 } 839 839 840 840 MACHINE_START(H4700, "HP iPAQ HX4700") 841 - .boot_params = 0xa0000100, 841 + .atag_offset = 0x100, 842 842 .map_io = pxa27x_map_io, 843 843 .nr_irqs = HX4700_NR_IRQS, 844 844 .init_irq = pxa27x_init_irq,
+1 -1
arch/arm/mach-pxa/icontrol.c
··· 191 191 } 192 192 193 193 MACHINE_START(ICONTROL, "iControl/SafeTcam boards using Embedian MXM-8x10 CoM") 194 - .boot_params = 0xa0000100, 194 + .atag_offset = 0x100, 195 195 .map_io = pxa3xx_map_io, 196 196 .init_irq = pxa3xx_init_irq, 197 197 .handle_irq = pxa3xx_handle_irq,
+1 -1
arch/arm/mach-pxa/include/mach/debug-macro.S
··· 13 13 14 14 #include "hardware.h" 15 15 16 - .macro addruart, rp, rv 16 + .macro addruart, rp, rv, tmp 17 17 mov \rp, #0x00100000 18 18 orr \rv, \rp, #io_p2v(0x40000000) @ virtual 19 19 orr \rp, \rp, #0x40000000 @ physical
-20
arch/arm/mach-pxa/include/mach/memory.h
··· 1 - /* 2 - * arch/arm/mach-pxa/include/mach/memory.h 3 - * 4 - * Author: Nicolas Pitre 5 - * Copyright: (C) 2001 MontaVista Software Inc. 6 - * 7 - * This program is free software; you can redistribute it and/or modify 8 - * it under the terms of the GNU General Public License version 2 as 9 - * published by the Free Software Foundation. 10 - */ 11 - 12 - #ifndef __ASM_ARCH_MEMORY_H 13 - #define __ASM_ARCH_MEMORY_H 14 - 15 - /* 16 - * Physical DRAM offset. 17 - */ 18 - #define PLAT_PHYS_OFFSET UL(0xa0000000) 19 - 20 - #endif
+1 -1
arch/arm/mach-pxa/littleton.c
··· 437 437 } 438 438 439 439 MACHINE_START(LITTLETON, "Marvell Form Factor Development Platform (aka Littleton)") 440 - .boot_params = 0xa0000100, 440 + .atag_offset = 0x100, 441 441 .map_io = pxa3xx_map_io, 442 442 .nr_irqs = LITTLETON_NR_IRQS, 443 443 .init_irq = pxa3xx_init_irq,
+1 -1
arch/arm/mach-pxa/lpd270.c
··· 498 498 499 499 MACHINE_START(LOGICPD_PXA270, "LogicPD PXA270 Card Engine") 500 500 /* Maintainer: Peter Barada */ 501 - .boot_params = 0xa0000100, 501 + .atag_offset = 0x100, 502 502 .map_io = lpd270_map_io, 503 503 .nr_irqs = LPD270_NR_IRQS, 504 504 .init_irq = lpd270_init_irq,
+1 -1
arch/arm/mach-pxa/magician.c
··· 753 753 754 754 755 755 MACHINE_START(MAGICIAN, "HTC Magician") 756 - .boot_params = 0xa0000100, 756 + .atag_offset = 0x100, 757 757 .map_io = pxa27x_map_io, 758 758 .nr_irqs = MAGICIAN_NR_IRQS, 759 759 .init_irq = pxa27x_init_irq,
+1 -1
arch/arm/mach-pxa/mainstone.c
··· 615 615 616 616 MACHINE_START(MAINSTONE, "Intel HCDDBBVA0 Development Platform (aka Mainstone)") 617 617 /* Maintainer: MontaVista Software Inc. */ 618 - .boot_params = 0xa0000100, /* BLOB boot parameter setting */ 618 + .atag_offset = 0x100, /* BLOB boot parameter setting */ 619 619 .map_io = mainstone_map_io, 620 620 .nr_irqs = MAINSTONE_NR_IRQS, 621 621 .init_irq = mainstone_init_irq,
+1 -1
arch/arm/mach-pxa/mioa701.c
··· 751 751 } 752 752 753 753 MACHINE_START(MIOA701, "MIO A701") 754 - .boot_params = 0xa0000100, 754 + .atag_offset = 0x100, 755 755 .map_io = &pxa27x_map_io, 756 756 .init_irq = &pxa27x_init_irq, 757 757 .handle_irq = &pxa27x_handle_irq,
+1 -1
arch/arm/mach-pxa/mp900.c
··· 92 92 93 93 /* Maintainer - Michael Petchkovsky <mkpetch@internode.on.net> */ 94 94 MACHINE_START(NEC_MP900, "MobilePro900/C") 95 - .boot_params = 0xa0220100, 95 + .atag_offset = 0x220100, 96 96 .timer = &pxa_timer, 97 97 .map_io = pxa25x_map_io, 98 98 .init_irq = pxa25x_init_irq,
+1 -1
arch/arm/mach-pxa/palmld.c
··· 342 342 } 343 343 344 344 MACHINE_START(PALMLD, "Palm LifeDrive") 345 - .boot_params = 0xa0000100, 345 + .atag_offset = 0x100, 346 346 .map_io = palmld_map_io, 347 347 .init_irq = pxa27x_init_irq, 348 348 .handle_irq = pxa27x_handle_irq,
+1 -1
arch/arm/mach-pxa/palmt5.c
··· 202 202 } 203 203 204 204 MACHINE_START(PALMT5, "Palm Tungsten|T5") 205 - .boot_params = 0xa0000100, 205 + .atag_offset = 0x100, 206 206 .map_io = pxa27x_map_io, 207 207 .reserve = palmt5_reserve, 208 208 .init_irq = pxa27x_init_irq,
+1 -1
arch/arm/mach-pxa/palmtc.c
··· 537 537 }; 538 538 539 539 MACHINE_START(PALMTC, "Palm Tungsten|C") 540 - .boot_params = 0xa0000100, 540 + .atag_offset = 0x100, 541 541 .map_io = pxa25x_map_io, 542 542 .init_irq = pxa25x_init_irq, 543 543 .handle_irq = pxa25x_handle_irq,
+1 -1
arch/arm/mach-pxa/palmte2.c
··· 356 356 } 357 357 358 358 MACHINE_START(PALMTE2, "Palm Tungsten|E2") 359 - .boot_params = 0xa0000100, 359 + .atag_offset = 0x100, 360 360 .map_io = pxa25x_map_io, 361 361 .init_irq = pxa25x_init_irq, 362 362 .handle_irq = pxa25x_handle_irq,
+2 -2
arch/arm/mach-pxa/palmtreo.c
··· 440 440 } 441 441 442 442 MACHINE_START(TREO680, "Palm Treo 680") 443 - .boot_params = 0xa0000100, 443 + .atag_offset = 0x100, 444 444 .map_io = pxa27x_map_io, 445 445 .reserve = treo_reserve, 446 446 .init_irq = pxa27x_init_irq, ··· 450 450 MACHINE_END 451 451 452 452 MACHINE_START(CENTRO, "Palm Centro 685") 453 - .boot_params = 0xa0000100, 453 + .atag_offset = 0x100, 454 454 .map_io = pxa27x_map_io, 455 455 .reserve = treo_reserve, 456 456 .init_irq = pxa27x_init_irq,
+1 -1
arch/arm/mach-pxa/palmtx.c
··· 364 364 } 365 365 366 366 MACHINE_START(PALMTX, "Palm T|X") 367 - .boot_params = 0xa0000100, 367 + .atag_offset = 0x100, 368 368 .map_io = palmtx_map_io, 369 369 .init_irq = pxa27x_init_irq, 370 370 .handle_irq = pxa27x_handle_irq,
+1 -1
arch/arm/mach-pxa/palmz72.c
··· 399 399 } 400 400 401 401 MACHINE_START(PALMZ72, "Palm Zire72") 402 - .boot_params = 0xa0000100, 402 + .atag_offset = 0x100, 403 403 .map_io = pxa27x_map_io, 404 404 .init_irq = pxa27x_init_irq, 405 405 .handle_irq = pxa27x_handle_irq,
+1 -1
arch/arm/mach-pxa/pcm027.c
··· 258 258 259 259 MACHINE_START(PCM027, "Phytec Messtechnik GmbH phyCORE-PXA270") 260 260 /* Maintainer: Pengutronix */ 261 - .boot_params = 0xa0000100, 261 + .atag_offset = 0x100, 262 262 .map_io = pcm027_map_io, 263 263 .nr_irqs = PCM027_NR_IRQS, 264 264 .init_irq = pxa27x_init_irq,
+3 -3
arch/arm/mach-pxa/raumfeld.c
··· 1086 1086 1087 1087 #ifdef CONFIG_MACH_RAUMFELD_RC 1088 1088 MACHINE_START(RAUMFELD_RC, "Raumfeld Controller") 1089 - .boot_params = RAUMFELD_SDRAM_BASE + 0x100, 1089 + .atag_offset = 0x100, 1090 1090 .init_machine = raumfeld_controller_init, 1091 1091 .map_io = pxa3xx_map_io, 1092 1092 .init_irq = pxa3xx_init_irq, ··· 1097 1097 1098 1098 #ifdef CONFIG_MACH_RAUMFELD_CONNECTOR 1099 1099 MACHINE_START(RAUMFELD_CONNECTOR, "Raumfeld Connector") 1100 - .boot_params = RAUMFELD_SDRAM_BASE + 0x100, 1100 + .atag_offset = 0x100, 1101 1101 .init_machine = raumfeld_connector_init, 1102 1102 .map_io = pxa3xx_map_io, 1103 1103 .init_irq = pxa3xx_init_irq, ··· 1108 1108 1109 1109 #ifdef CONFIG_MACH_RAUMFELD_SPEAKER 1110 1110 MACHINE_START(RAUMFELD_SPEAKER, "Raumfeld Speaker") 1111 - .boot_params = RAUMFELD_SDRAM_BASE + 0x100, 1111 + .atag_offset = 0x100, 1112 1112 .init_machine = raumfeld_speaker_init, 1113 1113 .map_io = pxa3xx_map_io, 1114 1114 .init_irq = pxa3xx_init_irq,
+1 -1
arch/arm/mach-pxa/saar.c
··· 596 596 597 597 MACHINE_START(SAAR, "PXA930 Handheld Platform (aka SAAR)") 598 598 /* Maintainer: Eric Miao <eric.miao@marvell.com> */ 599 - .boot_params = 0xa0000100, 599 + .atag_offset = 0x100, 600 600 .map_io = pxa3xx_map_io, 601 601 .init_irq = pxa3xx_init_irq, 602 602 .handle_irq = pxa3xx_handle_irq,
+1 -1
arch/arm/mach-pxa/saarb.c
··· 103 103 } 104 104 105 105 MACHINE_START(SAARB, "PXA955 Handheld Platform (aka SAARB)") 106 - .boot_params = 0xa0000100, 106 + .atag_offset = 0x100, 107 107 .map_io = pxa3xx_map_io, 108 108 .nr_irqs = SAARB_NR_IRQS, 109 109 .init_irq = pxa95x_init_irq,
+2 -2
arch/arm/mach-pxa/stargate2.c
··· 1004 1004 .handle_irq = pxa27x_handle_irq, 1005 1005 .timer = &pxa_timer, 1006 1006 .init_machine = imote2_init, 1007 - .boot_params = 0xA0000100, 1007 + .atag_offset = 0x100, 1008 1008 MACHINE_END 1009 1009 #endif 1010 1010 ··· 1016 1016 .handle_irq = pxa27x_handle_irq, 1017 1017 .timer = &pxa_timer, 1018 1018 .init_machine = stargate2_init, 1019 - .boot_params = 0xA0000100, 1019 + .atag_offset = 0x100, 1020 1020 MACHINE_END 1021 1021 #endif
+1 -1
arch/arm/mach-pxa/tavorevb.c
··· 489 489 490 490 MACHINE_START(TAVOREVB, "PXA930 Evaluation Board (aka TavorEVB)") 491 491 /* Maintainer: Eric Miao <eric.miao@marvell.com> */ 492 - .boot_params = 0xa0000100, 492 + .atag_offset = 0x100, 493 493 .map_io = pxa3xx_map_io, 494 494 .init_irq = pxa3xx_init_irq, 495 495 .handle_irq = pxa3xx_handle_irq,
+1 -1
arch/arm/mach-pxa/tavorevb3.c
··· 125 125 } 126 126 127 127 MACHINE_START(TAVOREVB3, "PXA950 Evaluation Board (aka TavorEVB3)") 128 - .boot_params = 0xa0000100, 128 + .atag_offset = 0x100, 129 129 .map_io = pxa3xx_map_io, 130 130 .nr_irqs = TAVOREVB3_NR_IRQS, 131 131 .init_irq = pxa3xx_init_irq,
+2 -2
arch/arm/mach-pxa/trizeps4.c
··· 554 554 555 555 MACHINE_START(TRIZEPS4, "Keith und Koep Trizeps IV module") 556 556 /* MAINTAINER("Jürgen Schindele") */ 557 - .boot_params = TRIZEPS4_SDRAM_BASE + 0x100, 557 + .atag_offset = 0x100, 558 558 .init_machine = trizeps4_init, 559 559 .map_io = trizeps4_map_io, 560 560 .init_irq = pxa27x_init_irq, ··· 564 564 565 565 MACHINE_START(TRIZEPS4WL, "Keith und Koep Trizeps IV-WL module") 566 566 /* MAINTAINER("Jürgen Schindele") */ 567 - .boot_params = TRIZEPS4_SDRAM_BASE + 0x100, 567 + .atag_offset = 0x100, 568 568 .init_machine = trizeps4_init, 569 569 .map_io = trizeps4_map_io, 570 570 .init_irq = pxa27x_init_irq,
+1 -1
arch/arm/mach-pxa/viper.c
··· 992 992 993 993 MACHINE_START(VIPER, "Arcom/Eurotech VIPER SBC") 994 994 /* Maintainer: Marc Zyngier <maz@misterjones.org> */ 995 - .boot_params = 0xa0000100, 995 + .atag_offset = 0x100, 996 996 .map_io = viper_map_io, 997 997 .init_irq = viper_init_irq, 998 998 .handle_irq = pxa25x_handle_irq,
+1 -1
arch/arm/mach-pxa/vpac270.c
··· 716 716 } 717 717 718 718 MACHINE_START(VPAC270, "Voipac PXA270") 719 - .boot_params = 0xa0000100, 719 + .atag_offset = 0x100, 720 720 .map_io = pxa27x_map_io, 721 721 .init_irq = pxa27x_init_irq, 722 722 .handle_irq = pxa27x_handle_irq,
+1 -1
arch/arm/mach-pxa/xcep.c
··· 179 179 } 180 180 181 181 MACHINE_START(XCEP, "Iskratel XCEP") 182 - .boot_params = 0xa0000100, 182 + .atag_offset = 0x100, 183 183 .init_machine = xcep_init, 184 184 .map_io = pxa25x_map_io, 185 185 .init_irq = pxa25x_init_irq,
+2 -2
arch/arm/mach-pxa/z2.c
··· 686 686 */ 687 687 PSPR = 0x0; 688 688 local_irq_disable(); 689 - pxa27x_cpu_suspend(PWRMODE_DEEPSLEEP, PLAT_PHYS_OFFSET - PAGE_OFFSET); 689 + pxa27x_cpu_suspend(PWRMODE_DEEPSLEEP, PHYS_OFFSET - PAGE_OFFSET); 690 690 } 691 691 #else 692 692 #define z2_power_off NULL ··· 718 718 } 719 719 720 720 MACHINE_START(ZIPIT2, "Zipit Z2") 721 - .boot_params = 0xa0000100, 721 + .atag_offset = 0x100, 722 722 .map_io = pxa27x_map_io, 723 723 .init_irq = pxa27x_init_irq, 724 724 .handle_irq = pxa27x_handle_irq,
+1 -1
arch/arm/mach-pxa/zeus.c
··· 904 904 905 905 MACHINE_START(ARCOM_ZEUS, "Arcom/Eurotech ZEUS") 906 906 /* Maintainer: Marc Zyngier <maz@misterjones.org> */ 907 - .boot_params = 0xa0000100, 907 + .atag_offset = 0x100, 908 908 .map_io = zeus_map_io, 909 909 .nr_irqs = ZEUS_NR_IRQS, 910 910 .init_irq = zeus_init_irq,
+1 -1
arch/arm/mach-pxa/zylonite.c
··· 422 422 } 423 423 424 424 MACHINE_START(ZYLONITE, "PXA3xx Platform Development Kit (aka Zylonite)") 425 - .boot_params = 0xa0000100, 425 + .atag_offset = 0x100, 426 426 .map_io = pxa3xx_map_io, 427 427 .nr_irqs = ZYLONITE_NR_IRQS, 428 428 .init_irq = pxa3xx_init_irq,
+1 -1
arch/arm/mach-realview/include/mach/debug-macro.S
··· 33 33 #error "Unknown RealView platform" 34 34 #endif 35 35 36 - .macro addruart, rp, rv 36 + .macro addruart, rp, rv, tmp 37 37 mov \rp, #DEBUG_LL_UART_OFFSET 38 38 orr \rv, \rp, #0xfb000000 @ virtual base 39 39 orr \rp, \rp, #0x10000000 @ physical base
+1 -1
arch/arm/mach-realview/realview_eb.c
··· 463 463 464 464 MACHINE_START(REALVIEW_EB, "ARM-RealView EB") 465 465 /* Maintainer: ARM Ltd/Deep Blue Solutions Ltd */ 466 - .boot_params = PLAT_PHYS_OFFSET + 0x00000100, 466 + .atag_offset = 0x100, 467 467 .fixup = realview_fixup, 468 468 .map_io = realview_eb_map_io, 469 469 .init_early = realview_init_early,
+1 -1
arch/arm/mach-realview/realview_pb1176.c
··· 386 386 387 387 MACHINE_START(REALVIEW_PB1176, "ARM-RealView PB1176") 388 388 /* Maintainer: ARM Ltd/Deep Blue Solutions Ltd */ 389 - .boot_params = PLAT_PHYS_OFFSET + 0x00000100, 389 + .atag_offset = 0x100, 390 390 .fixup = realview_pb1176_fixup, 391 391 .map_io = realview_pb1176_map_io, 392 392 .init_early = realview_init_early,
+1 -1
arch/arm/mach-realview/realview_pb11mp.c
··· 360 360 361 361 MACHINE_START(REALVIEW_PB11MP, "ARM-RealView PB11MPCore") 362 362 /* Maintainer: ARM Ltd/Deep Blue Solutions Ltd */ 363 - .boot_params = PLAT_PHYS_OFFSET + 0x00000100, 363 + .atag_offset = 0x100, 364 364 .fixup = realview_fixup, 365 365 .map_io = realview_pb11mp_map_io, 366 366 .init_early = realview_init_early,
+1 -1
arch/arm/mach-realview/realview_pba8.c
··· 310 310 311 311 MACHINE_START(REALVIEW_PBA8, "ARM-RealView PB-A8") 312 312 /* Maintainer: ARM Ltd/Deep Blue Solutions Ltd */ 313 - .boot_params = PLAT_PHYS_OFFSET + 0x00000100, 313 + .atag_offset = 0x100, 314 314 .fixup = realview_fixup, 315 315 .map_io = realview_pba8_map_io, 316 316 .init_early = realview_init_early,
+1 -1
arch/arm/mach-realview/realview_pbx.c
··· 393 393 394 394 MACHINE_START(REALVIEW_PBX, "ARM-RealView PBX") 395 395 /* Maintainer: ARM Ltd/Deep Blue Solutions Ltd */ 396 - .boot_params = PLAT_PHYS_OFFSET + 0x00000100, 396 + .atag_offset = 0x100, 397 397 .fixup = realview_pbx_fixup, 398 398 .map_io = realview_pbx_map_io, 399 399 .init_early = realview_init_early,
+1 -1
arch/arm/mach-rpc/include/mach/debug-macro.S
··· 11 11 * 12 12 */ 13 13 14 - .macro addruart, rp, rv 14 + .macro addruart, rp, rv, tmp 15 15 mov \rp, #0x00010000 16 16 orr \rp, \rp, #0x00000fe0 17 17 orr \rv, \rp, #0xe0000000 @ virtual
+1 -1
arch/arm/mach-rpc/riscpc.c
··· 218 218 219 219 MACHINE_START(RISCPC, "Acorn-RiscPC") 220 220 /* Maintainer: Russell King */ 221 - .boot_params = 0x10000100, 221 + .atag_offset = 0x100, 222 222 .reserve_lp0 = 1, 223 223 .reserve_lp1 = 1, 224 224 .map_io = rpc_map_io,
-20
arch/arm/mach-s3c2400/include/mach/memory.h
··· 1 - /* arch/arm/mach-s3c2400/include/mach/memory.h 2 - * from arch/arm/mach-rpc/include/mach/memory.h 3 - * 4 - * Copyright 2007 Simtec Electronics 5 - * http://armlinux.simtec.co.uk/ 6 - * Ben Dooks <ben@simtec.co.uk> 7 - * 8 - * Copyright (C) 1996,1997,1998 Russell King. 9 - * 10 - * This program is free software; you can redistribute it and/or modify 11 - * it under the terms of the GNU General Public License version 2 as 12 - * published by the Free Software Foundation. 13 - */ 14 - 15 - #ifndef __ASM_ARCH_MEMORY_H 16 - #define __ASM_ARCH_MEMORY_H 17 - 18 - #define PLAT_PHYS_OFFSET UL(0x0C000000) 19 - 20 - #endif
+1 -1
arch/arm/mach-s3c2410/include/mach/debug-macro.S
··· 19 19 #define S3C2410_UART1_OFF (0x4000) 20 20 #define SHIFT_2440TXF (14-9) 21 21 22 - .macro addruart, rp, rv 22 + .macro addruart, rp, rv, tmp 23 23 ldr \rp, = S3C24XX_PA_UART 24 24 ldr \rv, = S3C24XX_VA_UART 25 25 #if CONFIG_DEBUG_S3C_UART != 0
-16
arch/arm/mach-s3c2410/include/mach/memory.h
··· 1 - /* arch/arm/mach-s3c2410/include/mach/memory.h 2 - * from arch/arm/mach-rpc/include/mach/memory.h 3 - * 4 - * Copyright (C) 1996,1997,1998 Russell King. 5 - * 6 - * This program is free software; you can redistribute it and/or modify 7 - * it under the terms of the GNU General Public License version 2 as 8 - * published by the Free Software Foundation. 9 - */ 10 - 11 - #ifndef __ASM_ARCH_MEMORY_H 12 - #define __ASM_ARCH_MEMORY_H 13 - 14 - #define PLAT_PHYS_OFFSET UL(0x30000000) 15 - 16 - #endif
+1 -1
arch/arm/mach-s3c2410/mach-amlm5900.c
··· 236 236 } 237 237 238 238 MACHINE_START(AML_M5900, "AML_M5900") 239 - .boot_params = S3C2410_SDRAM_PA + 0x100, 239 + .atag_offset = 0x100, 240 240 .map_io = amlm5900_map_io, 241 241 .init_irq = s3c24xx_init_irq, 242 242 .init_machine = amlm5900_init,
+1 -1
arch/arm/mach-s3c2410/mach-bast.c
··· 657 657 658 658 MACHINE_START(BAST, "Simtec-BAST") 659 659 /* Maintainer: Ben Dooks <ben@simtec.co.uk> */ 660 - .boot_params = S3C2410_SDRAM_PA + 0x100, 660 + .atag_offset = 0x100, 661 661 .map_io = bast_map_io, 662 662 .init_irq = s3c24xx_init_irq, 663 663 .init_machine = bast_init,
+1 -1
arch/arm/mach-s3c2410/mach-h1940.c
··· 744 744 745 745 MACHINE_START(H1940, "IPAQ-H1940") 746 746 /* Maintainer: Ben Dooks <ben-linux@fluff.org> */ 747 - .boot_params = S3C2410_SDRAM_PA + 0x100, 747 + .atag_offset = 0x100, 748 748 .map_io = h1940_map_io, 749 749 .reserve = h1940_reserve, 750 750 .init_irq = h1940_init_irq,
+2 -2
arch/arm/mach-s3c2410/mach-n30.c
··· 586 586 /* Maintainer: Christer Weinigel <christer@weinigel.se>, 587 587 Ben Dooks <ben-linux@fluff.org> 588 588 */ 589 - .boot_params = S3C2410_SDRAM_PA + 0x100, 589 + .atag_offset = 0x100, 590 590 .timer = &s3c24xx_timer, 591 591 .init_machine = n30_init, 592 592 .init_irq = s3c24xx_init_irq, ··· 596 596 MACHINE_START(N35, "Acer-N35") 597 597 /* Maintainer: Christer Weinigel <christer@weinigel.se> 598 598 */ 599 - .boot_params = S3C2410_SDRAM_PA + 0x100, 599 + .atag_offset = 0x100, 600 600 .timer = &s3c24xx_timer, 601 601 .init_machine = n30_init, 602 602 .init_irq = s3c24xx_init_irq,
+1 -1
arch/arm/mach-s3c2410/mach-otom.c
··· 116 116 117 117 MACHINE_START(OTOM, "Nex Vision - Otom 1.1") 118 118 /* Maintainer: Guillaume GOURAT <guillaume.gourat@nexvision.tv> */ 119 - .boot_params = S3C2410_SDRAM_PA + 0x100, 119 + .atag_offset = 0x100, 120 120 .map_io = otom11_map_io, 121 121 .init_machine = otom11_init, 122 122 .init_irq = s3c24xx_init_irq,
+1 -1
arch/arm/mach-s3c2410/mach-qt2410.c
··· 344 344 } 345 345 346 346 MACHINE_START(QT2410, "QT2410") 347 - .boot_params = S3C2410_SDRAM_PA + 0x100, 347 + .atag_offset = 0x100, 348 348 .map_io = qt2410_map_io, 349 349 .init_irq = s3c24xx_init_irq, 350 350 .init_machine = qt2410_machine_init,
+1 -1
arch/arm/mach-s3c2410/mach-smdk2410.c
··· 111 111 MACHINE_START(SMDK2410, "SMDK2410") /* @TODO: request a new identifier and switch 112 112 * to SMDK2410 */ 113 113 /* Maintainer: Jonas Dietsche */ 114 - .boot_params = S3C2410_SDRAM_PA + 0x100, 114 + .atag_offset = 0x100, 115 115 .map_io = smdk2410_map_io, 116 116 .init_irq = s3c24xx_init_irq, 117 117 .init_machine = smdk2410_init,
+1 -1
arch/arm/mach-s3c2410/mach-tct_hammer.c
··· 146 146 } 147 147 148 148 MACHINE_START(TCT_HAMMER, "TCT_HAMMER") 149 - .boot_params = S3C2410_SDRAM_PA + 0x100, 149 + .atag_offset = 0x100, 150 150 .map_io = tct_hammer_map_io, 151 151 .init_irq = s3c24xx_init_irq, 152 152 .init_machine = tct_hammer_init,
+1 -1
arch/arm/mach-s3c2410/mach-vr1000.c
··· 400 400 401 401 MACHINE_START(VR1000, "Thorcom-VR1000") 402 402 /* Maintainer: Ben Dooks <ben@simtec.co.uk> */ 403 - .boot_params = S3C2410_SDRAM_PA + 0x100, 403 + .atag_offset = 0x100, 404 404 .map_io = vr1000_map_io, 405 405 .init_machine = vr1000_init, 406 406 .init_irq = s3c24xx_init_irq,
+1 -1
arch/arm/mach-s3c2412/mach-jive.c
··· 655 655 656 656 MACHINE_START(JIVE, "JIVE") 657 657 /* Maintainer: Ben Dooks <ben-linux@fluff.org> */ 658 - .boot_params = S3C2410_SDRAM_PA + 0x100, 658 + .atag_offset = 0x100, 659 659 660 660 .init_irq = s3c24xx_init_irq, 661 661 .map_io = jive_map_io,
+3 -3
arch/arm/mach-s3c2412/mach-smdk2413.c
··· 127 127 128 128 MACHINE_START(S3C2413, "S3C2413") 129 129 /* Maintainer: Ben Dooks <ben-linux@fluff.org> */ 130 - .boot_params = S3C2410_SDRAM_PA + 0x100, 130 + .atag_offset = 0x100, 131 131 132 132 .fixup = smdk2413_fixup, 133 133 .init_irq = s3c24xx_init_irq, ··· 138 138 139 139 MACHINE_START(SMDK2412, "SMDK2412") 140 140 /* Maintainer: Ben Dooks <ben-linux@fluff.org> */ 141 - .boot_params = S3C2410_SDRAM_PA + 0x100, 141 + .atag_offset = 0x100, 142 142 143 143 .fixup = smdk2413_fixup, 144 144 .init_irq = s3c24xx_init_irq, ··· 149 149 150 150 MACHINE_START(SMDK2413, "SMDK2413") 151 151 /* Maintainer: Ben Dooks <ben-linux@fluff.org> */ 152 - .boot_params = S3C2410_SDRAM_PA + 0x100, 152 + .atag_offset = 0x100, 153 153 154 154 .fixup = smdk2413_fixup, 155 155 .init_irq = s3c24xx_init_irq,
+1 -1
arch/arm/mach-s3c2412/mach-vstms.c
··· 155 155 } 156 156 157 157 MACHINE_START(VSTMS, "VSTMS") 158 - .boot_params = S3C2410_SDRAM_PA + 0x100, 158 + .atag_offset = 0x100, 159 159 160 160 .fixup = vstms_fixup, 161 161 .init_irq = s3c24xx_init_irq,
+1 -1
arch/arm/mach-s3c2416/mach-smdk2416.c
··· 245 245 246 246 MACHINE_START(SMDK2416, "SMDK2416") 247 247 /* Maintainer: Yauhen Kharuzhy <jekhor@gmail.com> */ 248 - .boot_params = S3C2410_SDRAM_PA + 0x100, 248 + .atag_offset = 0x100, 249 249 250 250 .init_irq = s3c24xx_init_irq, 251 251 .map_io = smdk2416_map_io,
+1 -1
arch/arm/mach-s3c2440/mach-anubis.c
··· 498 498 499 499 MACHINE_START(ANUBIS, "Simtec-Anubis") 500 500 /* Maintainer: Ben Dooks <ben@simtec.co.uk> */ 501 - .boot_params = S3C2410_SDRAM_PA + 0x100, 501 + .atag_offset = 0x100, 502 502 .map_io = anubis_map_io, 503 503 .init_machine = anubis_init, 504 504 .init_irq = s3c24xx_init_irq,
+1 -1
arch/arm/mach-s3c2440/mach-at2440evb.c
··· 233 233 234 234 235 235 MACHINE_START(AT2440EVB, "AT2440EVB") 236 - .boot_params = S3C2410_SDRAM_PA + 0x100, 236 + .atag_offset = 0x100, 237 237 .map_io = at2440evb_map_io, 238 238 .init_machine = at2440evb_init, 239 239 .init_irq = s3c24xx_init_irq,
+1 -1
arch/arm/mach-s3c2440/mach-gta02.c
··· 595 595 596 596 MACHINE_START(NEO1973_GTA02, "GTA02") 597 597 /* Maintainer: Nelson Castillo <arhuaco@freaks-unidos.net> */ 598 - .boot_params = S3C2410_SDRAM_PA + 0x100, 598 + .atag_offset = 0x100, 599 599 .map_io = gta02_map_io, 600 600 .init_irq = s3c24xx_init_irq, 601 601 .init_machine = gta02_machine_init,
+1 -1
arch/arm/mach-s3c2440/mach-mini2440.c
··· 676 676 677 677 MACHINE_START(MINI2440, "MINI2440") 678 678 /* Maintainer: Michel Pollet <buserror@gmail.com> */ 679 - .boot_params = S3C2410_SDRAM_PA + 0x100, 679 + .atag_offset = 0x100, 680 680 .map_io = mini2440_map_io, 681 681 .init_machine = mini2440_init, 682 682 .init_irq = s3c24xx_init_irq,
+1 -1
arch/arm/mach-s3c2440/mach-nexcoder.c
··· 151 151 152 152 MACHINE_START(NEXCODER_2440, "NexVision - Nexcoder 2440") 153 153 /* Maintainer: Guillaume GOURAT <guillaume.gourat@nexvision.tv> */ 154 - .boot_params = S3C2410_SDRAM_PA + 0x100, 154 + .atag_offset = 0x100, 155 155 .map_io = nexcoder_map_io, 156 156 .init_machine = nexcoder_init, 157 157 .init_irq = s3c24xx_init_irq,
+1 -1
arch/arm/mach-s3c2440/mach-osiris.c
··· 447 447 448 448 MACHINE_START(OSIRIS, "Simtec-OSIRIS") 449 449 /* Maintainer: Ben Dooks <ben@simtec.co.uk> */ 450 - .boot_params = S3C2410_SDRAM_PA + 0x100, 450 + .atag_offset = 0x100, 451 451 .map_io = osiris_map_io, 452 452 .init_irq = s3c24xx_init_irq, 453 453 .init_machine = osiris_init,
+1 -1
arch/arm/mach-s3c2440/mach-rx1950.c
··· 825 825 826 826 MACHINE_START(RX1950, "HP iPAQ RX1950") 827 827 /* Maintainers: Vasily Khoruzhick */ 828 - .boot_params = S3C2410_SDRAM_PA + 0x100, 828 + .atag_offset = 0x100, 829 829 .map_io = rx1950_map_io, 830 830 .reserve = rx1950_reserve, 831 831 .init_irq = s3c24xx_init_irq,
+1 -1
arch/arm/mach-s3c2440/mach-rx3715.c
··· 218 218 219 219 MACHINE_START(RX3715, "IPAQ-RX3715") 220 220 /* Maintainer: Ben Dooks <ben-linux@fluff.org> */ 221 - .boot_params = S3C2410_SDRAM_PA + 0x100, 221 + .atag_offset = 0x100, 222 222 .map_io = rx3715_map_io, 223 223 .reserve = rx3715_reserve, 224 224 .init_irq = rx3715_init_irq,
+1 -1
arch/arm/mach-s3c2440/mach-smdk2440.c
··· 175 175 176 176 MACHINE_START(S3C2440, "SMDK2440") 177 177 /* Maintainer: Ben Dooks <ben-linux@fluff.org> */ 178 - .boot_params = S3C2410_SDRAM_PA + 0x100, 178 + .atag_offset = 0x100, 179 179 180 180 .init_irq = s3c24xx_init_irq, 181 181 .map_io = smdk2440_map_io,
+1 -1
arch/arm/mach-s3c2443/mach-smdk2443.c
··· 139 139 140 140 MACHINE_START(SMDK2443, "SMDK2443") 141 141 /* Maintainer: Ben Dooks <ben-linux@fluff.org> */ 142 - .boot_params = S3C2410_SDRAM_PA + 0x100, 142 + .atag_offset = 0x100, 143 143 144 144 .init_irq = s3c24xx_init_irq, 145 145 .map_io = smdk2443_map_io,
+2
arch/arm/mach-s3c64xx/cpu.c
··· 20 20 #include <linux/serial_core.h> 21 21 #include <linux/platform_device.h> 22 22 #include <linux/io.h> 23 + #include <linux/dma-mapping.h> 23 24 24 25 #include <mach/hardware.h> 25 26 #include <mach/map.h> ··· 146 145 /* initialise the io descriptors we need for initialisation */ 147 146 iotable_init(s3c_iodesc, ARRAY_SIZE(s3c_iodesc)); 148 147 iotable_init(mach_desc, size); 148 + init_consistent_dma_size(SZ_8M); 149 149 150 150 idcode = __raw_readl(S3C_VA_SYS + 0x118); 151 151 if (!idcode) {
+1 -1
arch/arm/mach-s3c64xx/include/mach/debug-macro.S
··· 21 21 * aligned and add in the offset when we load the value here. 22 22 */ 23 23 24 - .macro addruart, rp, rv 24 + .macro addruart, rp, rv, tmp 25 25 ldr \rp, = S3C_PA_UART 26 26 ldr \rv, = (S3C_VA_UART + S3C_PA_UART & 0xfffff) 27 27 #if CONFIG_DEBUG_S3C_UART != 0
-20
arch/arm/mach-s3c64xx/include/mach/memory.h
··· 1 - /* arch/arm/mach-s3c6400/include/mach/memory.h 2 - * 3 - * Copyright 2008 Openmoko, Inc. 4 - * Copyright 2008 Simtec Electronics 5 - * Ben Dooks <ben@simtec.co.uk> 6 - * http://armlinux.simtec.co.uk/ 7 - * 8 - * This program is free software; you can redistribute it and/or modify 9 - * it under the terms of the GNU General Public License version 2 as 10 - * published by the Free Software Foundation. 11 - */ 12 - 13 - #ifndef __ASM_ARCH_MEMORY_H 14 - #define __ASM_ARCH_MEMORY_H 15 - 16 - #define PLAT_PHYS_OFFSET UL(0x50000000) 17 - 18 - #define CONSISTENT_DMA_SIZE SZ_8M 19 - 20 - #endif
+1 -1
arch/arm/mach-s3c64xx/mach-anw6410.c
··· 233 233 234 234 MACHINE_START(ANW6410, "A&W6410") 235 235 /* Maintainer: Kwangwoo Lee <kwangwoo.lee@gmail.com> */ 236 - .boot_params = S3C64XX_PA_SDRAM + 0x100, 236 + .atag_offset = 0x100, 237 237 238 238 .init_irq = s3c6410_init_irq, 239 239 .map_io = anw6410_map_io,
+1 -1
arch/arm/mach-s3c64xx/mach-crag6410.c
··· 766 766 767 767 MACHINE_START(WLF_CRAGG_6410, "Wolfson Cragganmore 6410") 768 768 /* Maintainer: Mark Brown <broonie@opensource.wolfsonmicro.com> */ 769 - .boot_params = S3C64XX_PA_SDRAM + 0x100, 769 + .atag_offset = 0x100, 770 770 .init_irq = s3c6410_init_irq, 771 771 .map_io = crag6410_map_io, 772 772 .init_machine = crag6410_machine_init,
+1 -1
arch/arm/mach-s3c64xx/mach-hmt.c
··· 265 265 266 266 MACHINE_START(HMT, "Airgoo-HMT") 267 267 /* Maintainer: Peter Korsgaard <jacmet@sunsite.dk> */ 268 - .boot_params = S3C64XX_PA_SDRAM + 0x100, 268 + .atag_offset = 0x100, 269 269 .init_irq = s3c6410_init_irq, 270 270 .map_io = hmt_map_io, 271 271 .init_machine = hmt_machine_init,
+1 -1
arch/arm/mach-s3c64xx/mach-mini6410.c
··· 349 349 350 350 MACHINE_START(MINI6410, "MINI6410") 351 351 /* Maintainer: Darius Augulis <augulis.darius@gmail.com> */ 352 - .boot_params = S3C64XX_PA_SDRAM + 0x100, 352 + .atag_offset = 0x100, 353 353 .init_irq = s3c6410_init_irq, 354 354 .map_io = mini6410_map_io, 355 355 .init_machine = mini6410_machine_init,
+1 -1
arch/arm/mach-s3c64xx/mach-ncp.c
··· 97 97 98 98 MACHINE_START(NCP, "NCP") 99 99 /* Maintainer: Samsung Electronics */ 100 - .boot_params = S3C64XX_PA_SDRAM + 0x100, 100 + .atag_offset = 0x100, 101 101 .init_irq = s3c6410_init_irq, 102 102 .map_io = ncp_map_io, 103 103 .init_machine = ncp_machine_init,
+1 -1
arch/arm/mach-s3c64xx/mach-real6410.c
··· 329 329 330 330 MACHINE_START(REAL6410, "REAL6410") 331 331 /* Maintainer: Darius Augulis <augulis.darius@gmail.com> */ 332 - .boot_params = S3C64XX_PA_SDRAM + 0x100, 332 + .atag_offset = 0x100, 333 333 334 334 .init_irq = s3c6410_init_irq, 335 335 .map_io = real6410_map_io,
+1 -1
arch/arm/mach-s3c64xx/mach-smartq5.c
··· 146 146 147 147 MACHINE_START(SMARTQ5, "SmartQ 5") 148 148 /* Maintainer: Maurus Cuelenaere <mcuelenaere AT gmail DOT com> */ 149 - .boot_params = S3C64XX_PA_SDRAM + 0x100, 149 + .atag_offset = 0x100, 150 150 .init_irq = s3c6410_init_irq, 151 151 .map_io = smartq_map_io, 152 152 .init_machine = smartq5_machine_init,
+1 -1
arch/arm/mach-s3c64xx/mach-smartq7.c
··· 162 162 163 163 MACHINE_START(SMARTQ7, "SmartQ 7") 164 164 /* Maintainer: Maurus Cuelenaere <mcuelenaere AT gmail DOT com> */ 165 - .boot_params = S3C64XX_PA_SDRAM + 0x100, 165 + .atag_offset = 0x100, 166 166 .init_irq = s3c6410_init_irq, 167 167 .map_io = smartq_map_io, 168 168 .init_machine = smartq7_machine_init,
+1 -1
arch/arm/mach-s3c64xx/mach-smdk6400.c
··· 85 85 86 86 MACHINE_START(SMDK6400, "SMDK6400") 87 87 /* Maintainer: Ben Dooks <ben-linux@fluff.org> */ 88 - .boot_params = S3C64XX_PA_SDRAM + 0x100, 88 + .atag_offset = 0x100, 89 89 90 90 .init_irq = s3c6400_init_irq, 91 91 .map_io = smdk6400_map_io,
+1 -1
arch/arm/mach-s3c64xx/mach-smdk6410.c
··· 703 703 704 704 MACHINE_START(SMDK6410, "SMDK6410") 705 705 /* Maintainer: Ben Dooks <ben-linux@fluff.org> */ 706 - .boot_params = S3C64XX_PA_SDRAM + 0x100, 706 + .atag_offset = 0x100, 707 707 708 708 .init_irq = s3c6410_init_irq, 709 709 .map_io = smdk6410_map_io,
+3
arch/arm/mach-s5p64x0/cpu.c
··· 20 20 #include <linux/serial_core.h> 21 21 #include <linux/platform_device.h> 22 22 #include <linux/sched.h> 23 + #include <linux/dma-mapping.h> 23 24 24 25 #include <asm/mach/arch.h> 25 26 #include <asm/mach/map.h> ··· 112 111 113 112 iotable_init(s5p64x0_iodesc, ARRAY_SIZE(s5p64x0_iodesc)); 114 113 iotable_init(s5p6440_iodesc, ARRAY_SIZE(s5p6440_iodesc)); 114 + init_consistent_dma_size(SZ_8M); 115 115 } 116 116 117 117 void __init s5p6450_map_io(void) ··· 122 120 123 121 iotable_init(s5p64x0_iodesc, ARRAY_SIZE(s5p64x0_iodesc)); 124 122 iotable_init(s5p6450_iodesc, ARRAY_SIZE(s5p6450_iodesc)); 123 + init_consistent_dma_size(SZ_8M); 125 124 } 126 125 127 126 /*
+1 -1
arch/arm/mach-s5p64x0/include/mach/debug-macro.S
··· 15 15 16 16 #include <plat/regs-serial.h> 17 17 18 - .macro addruart, rp, rv 18 + .macro addruart, rp, rv, tmp 19 19 mov \rp, #0xE0000000 20 20 orr \rp, \rp, #0x00100000 21 21 ldr \rp, [\rp, #0x118 ]
-19
arch/arm/mach-s5p64x0/include/mach/memory.h
··· 1 - /* linux/arch/arm/mach-s5p64x0/include/mach/memory.h 2 - * 3 - * Copyright (c) 2009-2010 Samsung Electronics Co., Ltd. 4 - * http://www.samsung.com 5 - * 6 - * S5P64X0 - Memory definitions 7 - * 8 - * This program is free software; you can redistribute it and/or modify 9 - * it under the terms of the GNU General Public License version 2 as 10 - * published by the Free Software Foundation. 11 - */ 12 - 13 - #ifndef __ASM_ARCH_MEMORY_H 14 - #define __ASM_ARCH_MEMORY_H __FILE__ 15 - 16 - #define PLAT_PHYS_OFFSET UL(0x20000000) 17 - #define CONSISTENT_DMA_SIZE SZ_8M 18 - 19 - #endif /* __ASM_ARCH_MEMORY_H */
+1 -1
arch/arm/mach-s5p64x0/mach-smdk6440.c
··· 171 171 172 172 MACHINE_START(SMDK6440, "SMDK6440") 173 173 /* Maintainer: Kukjin Kim <kgene.kim@samsung.com> */ 174 - .boot_params = S5P64X0_PA_SDRAM + 0x100, 174 + .atag_offset = 0x100, 175 175 176 176 .init_irq = s5p6440_init_irq, 177 177 .map_io = smdk6440_map_io,
+1 -1
arch/arm/mach-s5p64x0/mach-smdk6450.c
··· 190 190 191 191 MACHINE_START(SMDK6450, "SMDK6450") 192 192 /* Maintainer: Kukjin Kim <kgene.kim@samsung.com> */ 193 - .boot_params = S5P64X0_PA_SDRAM + 0x100, 193 + .atag_offset = 0x100, 194 194 195 195 .init_irq = s5p6450_init_irq, 196 196 .map_io = smdk6450_map_io,
+1 -1
arch/arm/mach-s5pc100/include/mach/debug-macro.S
··· 22 22 * aligned and add in the offset when we load the value here. 23 23 */ 24 24 25 - .macro addruart, rp, rv 25 + .macro addruart, rp, rv, tmp 26 26 ldr \rp, = S3C_PA_UART 27 27 ldr \rv, = S3C_VA_UART 28 28 #if CONFIG_DEBUG_S3C_UART != 0
-18
arch/arm/mach-s5pc100/include/mach/memory.h
··· 1 - /* arch/arm/mach-s5pc100/include/mach/memory.h 2 - * 3 - * Copyright 2008 Samsung Electronics Co. 4 - * Byungho Min <bhmin@samsung.com> 5 - * 6 - * Based on mach-s3c6400/include/mach/memory.h 7 - * 8 - * This program is free software; you can redistribute it and/or modify 9 - * it under the terms of the GNU General Public License version 2 as 10 - * published by the Free Software Foundation. 11 - */ 12 - 13 - #ifndef __ASM_ARCH_MEMORY_H 14 - #define __ASM_ARCH_MEMORY_H 15 - 16 - #define PLAT_PHYS_OFFSET UL(0x20000000) 17 - 18 - #endif
+1 -1
arch/arm/mach-s5pc100/mach-smdkc100.c
··· 254 254 255 255 MACHINE_START(SMDKC100, "SMDKC100") 256 256 /* Maintainer: Byungho Min <bhmin@samsung.com> */ 257 - .boot_params = S5P_PA_SDRAM + 0x100, 257 + .atag_offset = 0x100, 258 258 .init_irq = s5pc100_init_irq, 259 259 .map_io = smdkc100_map_io, 260 260 .init_machine = smdkc100_machine_init,
+2
arch/arm/mach-s5pv210/cpu.c
··· 20 20 #include <linux/sysdev.h> 21 21 #include <linux/platform_device.h> 22 22 #include <linux/sched.h> 23 + #include <linux/dma-mapping.h> 23 24 24 25 #include <asm/mach/arch.h> 25 26 #include <asm/mach/map.h> ··· 120 119 void __init s5pv210_map_io(void) 121 120 { 122 121 iotable_init(s5pv210_iodesc, ARRAY_SIZE(s5pv210_iodesc)); 122 + init_consistent_dma_size(14 << 20); 123 123 124 124 /* initialise device information early */ 125 125 s5pv210_default_sdhci0();
+1 -1
arch/arm/mach-s5pv210/include/mach/debug-macro.S
··· 21 21 * aligned and add in the offset when we load the value here. 22 22 */ 23 23 24 - .macro addruart, rp, rv 24 + .macro addruart, rp, rv, tmp 25 25 ldr \rp, = S3C_PA_UART 26 26 ldr \rv, = S3C_VA_UART 27 27 #if CONFIG_DEBUG_S3C_UART != 0
-1
arch/arm/mach-s5pv210/include/mach/memory.h
··· 14 14 #define __ASM_ARCH_MEMORY_H 15 15 16 16 #define PLAT_PHYS_OFFSET UL(0x20000000) 17 - #define CONSISTENT_DMA_SIZE (SZ_8M + SZ_4M + SZ_2M) 18 17 19 18 /* 20 19 * Sparsemem support
+1 -1
arch/arm/mach-s5pv210/mach-aquila.c
··· 678 678 /* Maintainers: 679 679 Marek Szyprowski <m.szyprowski@samsung.com> 680 680 Kyungmin Park <kyungmin.park@samsung.com> */ 681 - .boot_params = S5P_PA_SDRAM + 0x100, 681 + .atag_offset = 0x100, 682 682 .init_irq = s5pv210_init_irq, 683 683 .map_io = aquila_map_io, 684 684 .init_machine = aquila_machine_init,
+1 -1
arch/arm/mach-s5pv210/mach-goni.c
··· 897 897 898 898 MACHINE_START(GONI, "GONI") 899 899 /* Maintainers: Kyungmin Park <kyungmin.park@samsung.com> */ 900 - .boot_params = S5P_PA_SDRAM + 0x100, 900 + .atag_offset = 0x100, 901 901 .init_irq = s5pv210_init_irq, 902 902 .map_io = goni_map_io, 903 903 .init_machine = goni_machine_init,
+1 -1
arch/arm/mach-s5pv210/mach-smdkc110.c
··· 136 136 137 137 MACHINE_START(SMDKC110, "SMDKC110") 138 138 /* Maintainer: Kukjin Kim <kgene.kim@samsung.com> */ 139 - .boot_params = S5P_PA_SDRAM + 0x100, 139 + .atag_offset = 0x100, 140 140 .init_irq = s5pv210_init_irq, 141 141 .map_io = smdkc110_map_io, 142 142 .init_machine = smdkc110_machine_init,
+1 -1
arch/arm/mach-s5pv210/mach-smdkv210.c
··· 319 319 320 320 MACHINE_START(SMDKV210, "SMDKV210") 321 321 /* Maintainer: Kukjin Kim <kgene.kim@samsung.com> */ 322 - .boot_params = S5P_PA_SDRAM + 0x100, 322 + .atag_offset = 0x100, 323 323 .init_irq = s5pv210_init_irq, 324 324 .map_io = smdkv210_map_io, 325 325 .init_machine = smdkv210_machine_init,
+1 -1
arch/arm/mach-s5pv210/mach-torbreck.c
··· 125 125 126 126 MACHINE_START(TORBRECK, "TORBRECK") 127 127 /* Maintainer: Hyunchul Ko <ghcstop@gmail.com> */ 128 - .boot_params = S5P_PA_SDRAM + 0x100, 128 + .atag_offset = 0x100, 129 129 .init_irq = s5pv210_init_irq, 130 130 .map_io = torbreck_map_io, 131 131 .init_machine = torbreck_machine_init,
+1 -1
arch/arm/mach-sa1100/assabet.c
··· 446 446 447 447 448 448 MACHINE_START(ASSABET, "Intel-Assabet") 449 - .boot_params = 0xc0000100, 449 + .atag_offset = 0x100, 450 450 .fixup = fixup_assabet, 451 451 .map_io = assabet_map_io, 452 452 .init_irq = sa1100_init_irq,
+1 -1
arch/arm/mach-sa1100/badge4.c
··· 302 302 } 303 303 304 304 MACHINE_START(BADGE4, "Hewlett-Packard Laboratories BadgePAD 4") 305 - .boot_params = 0xc0000100, 305 + .atag_offset = 0x100, 306 306 .map_io = badge4_map_io, 307 307 .init_irq = sa1100_init_irq, 308 308 .timer = &sa1100_timer,
+1 -1
arch/arm/mach-sa1100/h3100.c
··· 84 84 } 85 85 86 86 MACHINE_START(H3100, "Compaq iPAQ H3100") 87 - .boot_params = 0xc0000100, 87 + .atag_offset = 0x100, 88 88 .map_io = h3100_map_io, 89 89 .init_irq = sa1100_init_irq, 90 90 .timer = &sa1100_timer,
+1 -1
arch/arm/mach-sa1100/h3600.c
··· 125 125 } 126 126 127 127 MACHINE_START(H3600, "Compaq iPAQ H3600") 128 - .boot_params = 0xc0000100, 128 + .atag_offset = 0x100, 129 129 .map_io = h3600_map_io, 130 130 .init_irq = sa1100_init_irq, 131 131 .timer = &sa1100_timer,
+1 -1
arch/arm/mach-sa1100/hackkit.c
··· 195 195 */ 196 196 197 197 MACHINE_START(HACKKIT, "HackKit Cpu Board") 198 - .boot_params = 0xc0000100, 198 + .atag_offset = 0x100, 199 199 .map_io = hackkit_map_io, 200 200 .init_irq = sa1100_init_irq, 201 201 .timer = &sa1100_timer,
+1 -1
arch/arm/mach-sa1100/include/mach/debug-macro.S
··· 12 12 */ 13 13 #include <mach/hardware.h> 14 14 15 - .macro addruart, rp, rv 15 + .macro addruart, rp, rv, tmp 16 16 mrc p15, 0, \rp, c1, c0 17 17 tst \rp, #1 @ MMU enabled? 18 18 moveq \rp, #0x80000000 @ physical base address
+1 -1
arch/arm/mach-sa1100/jornada720.c
··· 364 364 365 365 MACHINE_START(JORNADA720, "HP Jornada 720") 366 366 /* Maintainer: Kristoffer Ericson <Kristoffer.Ericson@gmail.com> */ 367 - .boot_params = 0xc0000100, 367 + .atag_offset = 0x100, 368 368 .map_io = jornada720_map_io, 369 369 .init_irq = sa1100_init_irq, 370 370 .timer = &sa1100_timer,
+1 -1
arch/arm/mach-sa1100/lart.c
··· 61 61 } 62 62 63 63 MACHINE_START(LART, "LART") 64 - .boot_params = 0xc0000100, 64 + .atag_offset = 0x100, 65 65 .map_io = lart_map_io, 66 66 .init_irq = sa1100_init_irq, 67 67 .init_machine = lart_init,
+1 -1
arch/arm/mach-sa1100/nanoengine.c
··· 111 111 } 112 112 113 113 MACHINE_START(NANOENGINE, "BSE nanoEngine") 114 - .boot_params = 0xc0000000, 114 + .atag_offset = 0x100, 115 115 .map_io = nanoengine_map_io, 116 116 .init_irq = sa1100_init_irq, 117 117 .timer = &sa1100_timer,
+1 -1
arch/arm/mach-sa1100/shannon.c
··· 82 82 } 83 83 84 84 MACHINE_START(SHANNON, "Shannon (AKA: Tuxscreen)") 85 - .boot_params = 0xc0000100, 85 + .atag_offset = 0x100, 86 86 .map_io = shannon_map_io, 87 87 .init_irq = sa1100_init_irq, 88 88 .timer = &sa1100_timer,
+1 -1
arch/arm/mach-sa1100/simpad.c
··· 392 392 393 393 MACHINE_START(SIMPAD, "Simpad") 394 394 /* Maintainer: Holger Freyther */ 395 - .boot_params = 0xc0000100, 395 + .atag_offset = 0x100, 396 396 .map_io = simpad_map_io, 397 397 .init_irq = sa1100_init_irq, 398 398 .timer = &sa1100_timer,
+1 -1
arch/arm/mach-shark/core.c
··· 152 152 153 153 MACHINE_START(SHARK, "Shark") 154 154 /* Maintainer: Alexander Schulz */ 155 - .boot_params = 0x08003000, 155 + .atag_offset = 0x3000, 156 156 .map_io = shark_map_io, 157 157 .init_irq = shark_init_irq, 158 158 .timer = &shark_timer,
+1 -1
arch/arm/mach-shark/include/mach/debug-macro.S
··· 11 11 * 12 12 */ 13 13 14 - .macro addruart, rp, rv 14 + .macro addruart, rp, rv, tmp 15 15 mov \rp, #0xe0000000 16 16 orr \rp, \rp, #0x000003f8 17 17 mov \rv, \rp
+3
arch/arm/mach-shmobile/board-ag5evm.c
··· 37 37 #include <linux/mmc/sh_mobile_sdhi.h> 38 38 #include <linux/mfd/tmio.h> 39 39 #include <linux/sh_clk.h> 40 + #include <linux/dma-mapping.h> 40 41 #include <video/sh_mobile_lcdc.h> 41 42 #include <video/sh_mipi_dsi.h> 42 43 #include <sound/sh_fsi.h> ··· 448 447 static void __init ag5evm_map_io(void) 449 448 { 450 449 iotable_init(ag5evm_io_desc, ARRAY_SIZE(ag5evm_io_desc)); 450 + /* DMA memory at 0xf6000000 - 0xffdfffff */ 451 + init_consistent_dma_size(158 << 20); 451 452 452 453 /* setup early devices and console here as well */ 453 454 sh73a0_add_early_devices();
+3
arch/arm/mach-shmobile/board-ap4evb.c
··· 43 43 #include <linux/input/sh_keysc.h> 44 44 #include <linux/usb/r8a66597.h> 45 45 #include <linux/pm_clock.h> 46 + #include <linux/dma-mapping.h> 46 47 47 48 #include <media/sh_mobile_ceu.h> 48 49 #include <media/sh_mobile_csi2.h> ··· 1172 1171 static void __init ap4evb_map_io(void) 1173 1172 { 1174 1173 iotable_init(ap4evb_io_desc, ARRAY_SIZE(ap4evb_io_desc)); 1174 + /* DMA memory at 0xf6000000 - 0xffdfffff */ 1175 + init_consistent_dma_size(158 << 20); 1175 1176 1176 1177 /* setup early devices and console here as well */ 1177 1178 sh7372_add_early_devices();
+3
arch/arm/mach-shmobile/board-g3evm.c
··· 32 32 #include <linux/gpio.h> 33 33 #include <linux/input.h> 34 34 #include <linux/input/sh_keysc.h> 35 + #include <linux/dma-mapping.h> 35 36 #include <mach/sh7367.h> 36 37 #include <mach/common.h> 37 38 #include <asm/mach-types.h> ··· 261 260 static void __init g3evm_map_io(void) 262 261 { 263 262 iotable_init(g3evm_io_desc, ARRAY_SIZE(g3evm_io_desc)); 263 + /* DMA memory at 0xf6000000 - 0xffdfffff */ 264 + init_consistent_dma_size(158 << 20); 264 265 265 266 /* setup early devices and console here as well */ 266 267 sh7367_add_early_devices();
+3
arch/arm/mach-shmobile/board-g4evm.c
··· 33 33 #include <linux/mmc/host.h> 34 34 #include <linux/mmc/sh_mobile_sdhi.h> 35 35 #include <linux/gpio.h> 36 + #include <linux/dma-mapping.h> 36 37 #include <mach/sh7377.h> 37 38 #include <mach/common.h> 38 39 #include <asm/mach-types.h> ··· 275 274 static void __init g4evm_map_io(void) 276 275 { 277 276 iotable_init(g4evm_io_desc, ARRAY_SIZE(g4evm_io_desc)); 277 + /* DMA memory at 0xf6000000 - 0xffdfffff */ 278 + init_consistent_dma_size(158 << 20); 278 279 279 280 /* setup early devices and console here as well */ 280 281 sh7377_add_early_devices();
+3
arch/arm/mach-shmobile/board-mackerel.c
··· 45 45 #include <linux/tca6416_keypad.h> 46 46 #include <linux/usb/r8a66597.h> 47 47 #include <linux/usb/renesas_usbhs.h> 48 + #include <linux/dma-mapping.h> 48 49 49 50 #include <video/sh_mobile_hdmi.h> 50 51 #include <video/sh_mobile_lcdc.h> ··· 1383 1382 static void __init mackerel_map_io(void) 1384 1383 { 1385 1384 iotable_init(mackerel_io_desc, ARRAY_SIZE(mackerel_io_desc)); 1385 + /* DMA memory at 0xf6000000 - 0xffdfffff */ 1386 + init_consistent_dma_size(158 << 20); 1386 1387 1387 1388 /* setup early devices and console here as well */ 1388 1389 sh7372_add_early_devices();
-3
arch/arm/mach-shmobile/entry-intc.S
··· 51 51 .macro test_for_ipi, irqnr, irqstat, base, tmp 52 52 .endm 53 53 54 - .macro test_for_ltirq, irqnr, irqstat, base, tmp 55 - .endm 56 - 57 54 arch_irq_handler shmobile_handle_irq_intc
-3
arch/arm/mach-shmobile/include/mach/entry-macro.S
··· 27 27 .macro test_for_ipi, irqnr, irqstat, base, tmp 28 28 .endm 29 29 30 - .macro test_for_ltirq, irqnr, irqstat, base, tmp 31 - .endm 32 - 33 30 .macro arch_ret_to_user, tmp1, tmp2 34 31 .endm
-3
arch/arm/mach-shmobile/include/mach/memory.h
··· 4 4 #define PLAT_PHYS_OFFSET UL(CONFIG_MEMORY_START) 5 5 #define MEM_SIZE UL(CONFIG_MEMORY_SIZE) 6 6 7 - /* DMA memory at 0xf6000000 - 0xffdfffff */ 8 - #define CONSISTENT_DMA_SIZE (158 << 20) 9 - 10 7 #endif /* __ASM_MACH_MEMORY_H */
-19
arch/arm/mach-spear3xx/include/mach/memory.h
··· 1 - /* 2 - * arch/arm/mach-spear3xx/include/mach/memory.h 3 - * 4 - * Memory map for SPEAr3xx machine family 5 - * 6 - * Copyright (C) 2009 ST Microelectronics 7 - * Viresh Kumar<viresh.kumar@st.com> 8 - * 9 - * This file is licensed under the terms of the GNU General Public 10 - * License version 2. This program is licensed "as is" without any 11 - * warranty of any kind, whether express or implied. 12 - */ 13 - 14 - #ifndef __MACH_MEMORY_H 15 - #define __MACH_MEMORY_H 16 - 17 - #include <plat/memory.h> 18 - 19 - #endif /* __MACH_MEMORY_H */
+1 -1
arch/arm/mach-spear3xx/spear300_evb.c
··· 64 64 } 65 65 66 66 MACHINE_START(SPEAR300, "ST-SPEAR300-EVB") 67 - .boot_params = 0x00000100, 67 + .atag_offset = 0x100, 68 68 .map_io = spear3xx_map_io, 69 69 .init_irq = spear3xx_init_irq, 70 70 .timer = &spear3xx_timer,
+1 -1
arch/arm/mach-spear3xx/spear310_evb.c
··· 70 70 } 71 71 72 72 MACHINE_START(SPEAR310, "ST-SPEAR310-EVB") 73 - .boot_params = 0x00000100, 73 + .atag_offset = 0x100, 74 74 .map_io = spear3xx_map_io, 75 75 .init_irq = spear3xx_init_irq, 76 76 .timer = &spear3xx_timer,
+1 -1
arch/arm/mach-spear3xx/spear320_evb.c
··· 68 68 } 69 69 70 70 MACHINE_START(SPEAR320, "ST-SPEAR320-EVB") 71 - .boot_params = 0x00000100, 71 + .atag_offset = 0x100, 72 72 .map_io = spear3xx_map_io, 73 73 .init_irq = spear3xx_init_irq, 74 74 .timer = &spear3xx_timer,
-19
arch/arm/mach-spear6xx/include/mach/memory.h
··· 1 - /* 2 - * arch/arm/mach-spear6xx/include/mach/memory.h 3 - * 4 - * Memory map for SPEAr6xx machine family 5 - * 6 - * Copyright (C) 2009 ST Microelectronics 7 - * Rajeev Kumar<rajeev-dlh.kumar@st.com> 8 - * 9 - * This file is licensed under the terms of the GNU General Public 10 - * License version 2. This program is licensed "as is" without any 11 - * warranty of any kind, whether express or implied. 12 - */ 13 - 14 - #ifndef __MACH_MEMORY_H 15 - #define __MACH_MEMORY_H 16 - 17 - #include <plat/memory.h> 18 - 19 - #endif /* __MACH_MEMORY_H */
+1 -1
arch/arm/mach-spear6xx/spear600_evb.c
··· 43 43 } 44 44 45 45 MACHINE_START(SPEAR600, "ST-SPEAR600-EVB") 46 - .boot_params = 0x00000100, 46 + .atag_offset = 0x100, 47 47 .map_io = spear6xx_map_io, 48 48 .init_irq = spear6xx_init_irq, 49 49 .timer = &spear6xx_timer,
+1 -1
arch/arm/mach-tcc8k/board-tcc8000-sdk.c
··· 73 73 } 74 74 75 75 MACHINE_START(TCC8000_SDK, "Telechips TCC8000-SDK Demo Board") 76 - .boot_params = PLAT_PHYS_OFFSET + 0x00000100, 76 + .atag_offset = 0x100, 77 77 .map_io = tcc8k_map_io, 78 78 .init_irq = tcc8k_init_irq, 79 79 .init_machine = tcc8k_init,
+1 -1
arch/arm/mach-tegra/board-harmony.c
··· 179 179 } 180 180 181 181 MACHINE_START(HARMONY, "harmony") 182 - .boot_params = 0x00000100, 182 + .atag_offset = 0x100, 183 183 .fixup = tegra_harmony_fixup, 184 184 .map_io = tegra_map_common_io, 185 185 .init_early = tegra_init_early,
+1 -1
arch/arm/mach-tegra/board-paz00.c
··· 127 127 } 128 128 129 129 MACHINE_START(PAZ00, "Toshiba AC100 / Dynabook AZ") 130 - .boot_params = 0x00000100, 130 + .atag_offset = 0x100, 131 131 .fixup = tegra_paz00_fixup, 132 132 .map_io = tegra_map_common_io, 133 133 .init_early = tegra_init_early,
+3 -3
arch/arm/mach-tegra/board-seaboard.c
··· 201 201 202 202 203 203 MACHINE_START(SEABOARD, "seaboard") 204 - .boot_params = 0x00000100, 204 + .atag_offset = 0x100, 205 205 .map_io = tegra_map_common_io, 206 206 .init_early = tegra_init_early, 207 207 .init_irq = tegra_init_irq, ··· 210 210 MACHINE_END 211 211 212 212 MACHINE_START(KAEN, "kaen") 213 - .boot_params = 0x00000100, 213 + .atag_offset = 0x100, 214 214 .map_io = tegra_map_common_io, 215 215 .init_early = tegra_init_early, 216 216 .init_irq = tegra_init_irq, ··· 219 219 MACHINE_END 220 220 221 221 MACHINE_START(WARIO, "wario") 222 - .boot_params = 0x00000100, 222 + .atag_offset = 0x100, 223 223 .map_io = tegra_map_common_io, 224 224 .init_early = tegra_init_early, 225 225 .init_irq = tegra_init_irq,
+1 -1
arch/arm/mach-tegra/board-trimslice.c
··· 171 171 } 172 172 173 173 MACHINE_START(TRIMSLICE, "trimslice") 174 - .boot_params = 0x00000100, 174 + .atag_offset = 0x100, 175 175 .fixup = tegra_trimslice_fixup, 176 176 .map_io = tegra_map_common_io, 177 177 .init_early = tegra_init_early,
+1 -1
arch/arm/mach-tegra/include/mach/debug-macro.S
··· 21 21 #include <mach/io.h> 22 22 #include <mach/iomap.h> 23 23 24 - .macro addruart, rp, rv 24 + .macro addruart, rp, rv, tmp 25 25 ldr \rp, =IO_APB_PHYS @ physical 26 26 ldr \rv, =IO_APB_VIRT @ virtual 27 27 orr \rp, \rp, #(TEGRA_DEBUG_UART_BASE & 0xFF)
-28
arch/arm/mach-tegra/include/mach/memory.h
··· 1 - /* 2 - * arch/arm/mach-tegra/include/mach/memory.h 3 - * 4 - * Copyright (C) 2010 Google, Inc. 5 - * 6 - * Author: 7 - * Colin Cross <ccross@google.com> 8 - * Erik Gilling <konkers@google.com> 9 - * 10 - * This software is licensed under the terms of the GNU General Public 11 - * License version 2, as published by the Free Software Foundation, and 12 - * may be copied, distributed, and modified under those terms. 13 - * 14 - * This program is distributed in the hope that it will be useful, 15 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 16 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 17 - * GNU General Public License for more details. 18 - * 19 - */ 20 - 21 - #ifndef __MACH_TEGRA_MEMORY_H 22 - #define __MACH_TEGRA_MEMORY_H 23 - 24 - /* physical offset of RAM */ 25 - #define PLAT_PHYS_OFFSET UL(0) 26 - 27 - #endif 28 -
+3
arch/arm/mach-u300/core.c
··· 27 27 #include <linux/mtd/fsmc.h> 28 28 #include <linux/pinctrl/machine.h> 29 29 #include <linux/pinctrl/pinmux.h> 30 + #include <linux/dma-mapping.h> 30 31 31 32 #include <asm/types.h> 32 33 #include <asm/setup.h> ··· 96 95 void __init u300_map_io(void) 97 96 { 98 97 iotable_init(u300_io_desc, ARRAY_SIZE(u300_io_desc)); 98 + /* We enable a real big DMA buffer if need be. */ 99 + init_consistent_dma_size(SZ_4M); 99 100 } 100 101 101 102 /*
+1 -1
arch/arm/mach-u300/include/mach/debug-macro.S
··· 10 10 */ 11 11 #include <mach/hardware.h> 12 12 13 - .macro addruart, rp, rv 13 + .macro addruart, rp, rv, tmp 14 14 /* If we move the address using MMU, use this. */ 15 15 ldr \rp, = U300_SLOW_PER_PHYS_BASE @ MMU off, physical address 16 16 ldr \rv, = U300_SLOW_PER_VIRT_BASE @ MMU on, virtual address
+4 -9
arch/arm/mach-u300/include/mach/memory.h
··· 16 16 #ifdef CONFIG_MACH_U300_DUAL_RAM 17 17 18 18 #define PLAT_PHYS_OFFSET UL(0x48000000) 19 - #define BOOT_PARAMS_OFFSET (PHYS_OFFSET + 0x100) 19 + #define BOOT_PARAMS_OFFSET 0x100 20 20 21 21 #else 22 22 ··· 24 24 #define PLAT_PHYS_OFFSET (0x28000000 + \ 25 25 (CONFIG_MACH_U300_ACCESS_MEM_SIZE - \ 26 26 (CONFIG_MACH_U300_ACCESS_MEM_SIZE & 1))*1024*1024) 27 + #define BOOT_PARAMS_OFFSET (0x100 + \ 28 + (CONFIG_MACH_U300_ACCESS_MEM_SIZE & 1)*1024*1024*2) 27 29 #else 28 30 #define PLAT_PHYS_OFFSET (0x28000000 + \ 29 31 (CONFIG_MACH_U300_ACCESS_MEM_SIZE + \ 30 32 (CONFIG_MACH_U300_ACCESS_MEM_SIZE & 1))*1024*1024) 33 + #define BOOT_PARAMS_OFFSET 0x100 31 34 #endif 32 - #define BOOT_PARAMS_OFFSET (0x28000000 + \ 33 - (CONFIG_MACH_U300_ACCESS_MEM_SIZE + \ 34 - (CONFIG_MACH_U300_ACCESS_MEM_SIZE & 1))*1024*1024 + 0x100) 35 35 #endif 36 - 37 - /* 38 - * We enable a real big DMA buffer if need be. 39 - */ 40 - #define CONSISTENT_DMA_SIZE SZ_4M 41 36 42 37 #endif
+1 -1
arch/arm/mach-u300/u300.c
··· 61 61 62 62 MACHINE_START(U300, MACH_U300_STRING) 63 63 /* Maintainer: Linus Walleij <linus.walleij@stericsson.com> */ 64 - .boot_params = BOOT_PARAMS_OFFSET, 64 + .atag_offset = BOOT_PARAMS_OFFSET, 65 65 .map_io = u300_map_io, 66 66 .reserve = u300_reserve, 67 67 .init_irq = u300_init_irq,
+3 -3
arch/arm/mach-ux500/board-mop500.c
··· 646 646 647 647 MACHINE_START(U8500, "ST-Ericsson MOP500 platform") 648 648 /* Maintainer: Srinidhi Kasagar <srinidhi.kasagar@stericsson.com> */ 649 - .boot_params = 0x100, 649 + .atag_offset = 0x100, 650 650 .map_io = u8500_map_io, 651 651 .init_irq = ux500_init_irq, 652 652 /* we re-use nomadik timer here */ ··· 655 655 MACHINE_END 656 656 657 657 MACHINE_START(HREFV60, "ST-Ericsson U8500 Platform HREFv60+") 658 - .boot_params = 0x100, 658 + .atag_offset = 0x100, 659 659 .map_io = u8500_map_io, 660 660 .init_irq = ux500_init_irq, 661 661 .timer = &ux500_timer, ··· 663 663 MACHINE_END 664 664 665 665 MACHINE_START(SNOWBALL, "Calao Systems Snowball platform") 666 - .boot_params = 0x100, 666 + .atag_offset = 0x100, 667 667 .map_io = u8500_map_io, 668 668 .init_irq = ux500_init_irq, 669 669 /* we re-use nomadik timer here */
+1 -1
arch/arm/mach-ux500/board-u5500.c
··· 118 118 } 119 119 120 120 MACHINE_START(U5500, "ST-Ericsson U5500 Platform") 121 - .boot_params = 0x00000100, 121 + .atag_offset = 0x100, 122 122 .map_io = u5500_map_io, 123 123 .init_irq = ux500_init_irq, 124 124 .timer = &ux500_timer,
+1 -1
arch/arm/mach-ux500/include/mach/debug-macro.S
··· 35 35 #define UX500_UART(n) __UX500_UART(n) 36 36 #define UART_BASE UX500_UART(CONFIG_UX500_DEBUG_UART) 37 37 38 - .macro addruart, rp, rv 38 + .macro addruart, rp, rv, tmp 39 39 ldr \rp, =UART_BASE @ no, physical address 40 40 ldr \rv, =IO_ADDRESS(UART_BASE) @ yes, virtual address 41 41 .endm
-18
arch/arm/mach-ux500/include/mach/memory.h
··· 1 - /* 2 - * Copyright (C) 2009 ST-Ericsson 3 - * 4 - * This program is free software; you can redistribute it and/or modify 5 - * it under the terms of the GNU General Public License as published by 6 - * the Free Software Foundation; either version 2 of the License, or 7 - * (at your option) any later version. 8 - */ 9 - #ifndef __ASM_ARCH_MEMORY_H 10 - #define __ASM_ARCH_MEMORY_H 11 - 12 - /* 13 - * Physical DRAM offset. 14 - */ 15 - #define PLAT_PHYS_OFFSET UL(0x00000000) 16 - #define BUS_OFFSET UL(0x00000000) 17 - 18 - #endif
+1 -1
arch/arm/mach-versatile/include/mach/debug-macro.S
··· 11 11 * 12 12 */ 13 13 14 - .macro addruart, rp, rv 14 + .macro addruart, rp, rv, tmp 15 15 mov \rp, #0x001F0000 16 16 orr \rp, \rp, #0x00001000 17 17 orr \rv, \rp, #0xf1000000 @ virtual base
-28
arch/arm/mach-versatile/include/mach/memory.h
··· 1 - /* 2 - * arch/arm/mach-versatile/include/mach/memory.h 3 - * 4 - * Copyright (C) 2003 ARM Limited 5 - * 6 - * This program is free software; you can redistribute it and/or modify 7 - * it under the terms of the GNU General Public License as published by 8 - * the Free Software Foundation; either version 2 of the License, or 9 - * (at your option) any later version. 10 - * 11 - * This program is distributed in the hope that it will be useful, 12 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 13 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 14 - * GNU General Public License for more details. 15 - * 16 - * You should have received a copy of the GNU General Public License 17 - * along with this program; if not, write to the Free Software 18 - * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA 19 - */ 20 - #ifndef __ASM_ARCH_MEMORY_H 21 - #define __ASM_ARCH_MEMORY_H 22 - 23 - /* 24 - * Physical DRAM offset. 25 - */ 26 - #define PLAT_PHYS_OFFSET UL(0x00000000) 27 - 28 - #endif
+1 -1
arch/arm/mach-versatile/versatile_ab.c
··· 35 35 36 36 MACHINE_START(VERSATILE_AB, "ARM-Versatile AB") 37 37 /* Maintainer: ARM Ltd/Deep Blue Solutions Ltd */ 38 - .boot_params = 0x00000100, 38 + .atag_offset = 0x100, 39 39 .map_io = versatile_map_io, 40 40 .init_early = versatile_init_early, 41 41 .init_irq = versatile_init_irq,
+1 -1
arch/arm/mach-versatile/versatile_pb.c
··· 103 103 104 104 MACHINE_START(VERSATILE_PB, "ARM-Versatile PB") 105 105 /* Maintainer: ARM Ltd/Deep Blue Solutions Ltd */ 106 - .boot_params = 0x00000100, 106 + .atag_offset = 0x100, 107 107 .map_io = versatile_map_io, 108 108 .init_early = versatile_init_early, 109 109 .init_irq = versatile_init_irq,
+1 -1
arch/arm/mach-vexpress/include/mach/debug-macro.S
··· 12 12 13 13 #define DEBUG_LL_UART_OFFSET 0x00009000 14 14 15 - .macro addruart,rp,rv 15 + .macro addruart,rp,rv,tmp 16 16 mov \rp, #DEBUG_LL_UART_OFFSET 17 17 orr \rv, \rp, #0xf8000000 @ virtual base 18 18 orr \rp, \rp, #0x10000000 @ physical base
-25
arch/arm/mach-vexpress/include/mach/memory.h
··· 1 - /* 2 - * arch/arm/mach-vexpress/include/mach/memory.h 3 - * 4 - * Copyright (C) 2003 ARM Limited 5 - * 6 - * This program is free software; you can redistribute it and/or modify 7 - * it under the terms of the GNU General Public License as published by 8 - * the Free Software Foundation; either version 2 of the License, or 9 - * (at your option) any later version. 10 - * 11 - * This program is distributed in the hope that it will be useful, 12 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 13 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 14 - * GNU General Public License for more details. 15 - * 16 - * You should have received a copy of the GNU General Public License 17 - * along with this program; if not, write to the Free Software 18 - * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA 19 - */ 20 - #ifndef __ASM_ARCH_MEMORY_H 21 - #define __ASM_ARCH_MEMORY_H 22 - 23 - #define PLAT_PHYS_OFFSET UL(0x60000000) 24 - 25 - #endif
+1 -1
arch/arm/mach-vexpress/v2m.c
··· 443 443 } 444 444 445 445 MACHINE_START(VEXPRESS, "ARM-Versatile Express") 446 - .boot_params = PLAT_PHYS_OFFSET + 0x00000100, 446 + .atag_offset = 0x100, 447 447 .map_io = v2m_map_io, 448 448 .init_early = v2m_init_early, 449 449 .init_irq = v2m_init_irq,
+1 -1
arch/arm/mach-vt8500/bv07.c
··· 68 68 } 69 69 70 70 MACHINE_START(BV07, "Benign BV07 Mini Netbook") 71 - .boot_params = 0x00000100, 71 + .atag_offset = 0x100, 72 72 .reserve = vt8500_reserve_mem, 73 73 .map_io = vt8500_map_io, 74 74 .init_irq = vt8500_init_irq,
+1 -1
arch/arm/mach-vt8500/include/mach/debug-macro.S
··· 11 11 * 12 12 */ 13 13 14 - .macro addruart, rp, rv 14 + .macro addruart, rp, rv, tmp 15 15 mov \rp, #0x00200000 16 16 orr \rv, \rp, #0xf8000000 17 17 orr \rp, \rp, #0xd8000000
-28
arch/arm/mach-vt8500/include/mach/memory.h
··· 1 - /* 2 - * arch/arm/mach-vt8500/include/mach/memory.h 3 - * 4 - * Copyright (C) 2003 ARM Limited 5 - * 6 - * This program is free software; you can redistribute it and/or modify 7 - * it under the terms of the GNU General Public License as published by 8 - * the Free Software Foundation; either version 2 of the License, or 9 - * (at your option) any later version. 10 - * 11 - * This program is distributed in the hope that it will be useful, 12 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 13 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 14 - * GNU General Public License for more details. 15 - * 16 - * You should have received a copy of the GNU General Public License 17 - * along with this program; if not, write to the Free Software 18 - * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA 19 - */ 20 - #ifndef __ASM_ARCH_MEMORY_H 21 - #define __ASM_ARCH_MEMORY_H 22 - 23 - /* 24 - * Physical DRAM offset. 25 - */ 26 - #define PHYS_OFFSET UL(0x00000000) 27 - 28 - #endif
+1 -1
arch/arm/mach-vt8500/wm8505_7in.c
··· 68 68 } 69 69 70 70 MACHINE_START(WM8505_7IN_NETBOOK, "WM8505 7-inch generic netbook") 71 - .boot_params = 0x00000100, 71 + .atag_offset = 0x100, 72 72 .reserve = wm8505_reserve_mem, 73 73 .map_io = wm8505_map_io, 74 74 .init_irq = wm8505_init_irq,
-23
arch/arm/mach-w90x900/include/mach/memory.h
··· 1 - /* 2 - * arch/arm/mach-w90x900/include/mach/memory.h 3 - * 4 - * Copyright (c) 2008 Nuvoton technology corporation 5 - * All rights reserved. 6 - * 7 - * Wan ZongShun <mcuos.com@gmail.com> 8 - * 9 - * Based on arch/arm/mach-s3c2410/include/mach/memory.h 10 - * 11 - * This program is free software; you can redistribute it and/or modify 12 - * it under the terms of the GNU General Public License as published by 13 - * the Free Software Foundation; either version 2 of the License, or 14 - * (at your option) any later version. 15 - * 16 - */ 17 - 18 - #ifndef __ASM_ARCH_MEMORY_H 19 - #define __ASM_ARCH_MEMORY_H 20 - 21 - #define PLAT_PHYS_OFFSET UL(0x00000000) 22 - 23 - #endif
-1
arch/arm/mach-w90x900/mach-nuc910evb.c
··· 34 34 35 35 MACHINE_START(W90P910EVB, "W90P910EVB") 36 36 /* Maintainer: Wan ZongShun */ 37 - .boot_params = 0, 38 37 .map_io = nuc910evb_map_io, 39 38 .init_irq = nuc900_init_irq, 40 39 .init_machine = nuc910evb_init,
-1
arch/arm/mach-w90x900/mach-nuc950evb.c
··· 37 37 38 38 MACHINE_START(W90P950EVB, "W90P950EVB") 39 39 /* Maintainer: Wan ZongShun */ 40 - .boot_params = 0, 41 40 .map_io = nuc950evb_map_io, 42 41 .init_irq = nuc900_init_irq, 43 42 .init_machine = nuc950evb_init,
-1
arch/arm/mach-w90x900/mach-nuc960evb.c
··· 34 34 35 35 MACHINE_START(W90N960EVB, "W90N960EVB") 36 36 /* Maintainer: Wan ZongShun */ 37 - .boot_params = 0, 38 37 .map_io = nuc960evb_map_io, 39 38 .init_irq = nuc900_init_irq, 40 39 .init_machine = nuc960evb_init,
+1 -1
arch/arm/mach-zynq/include/mach/debug-macro.S
··· 17 17 #include <mach/zynq_soc.h> 18 18 #include <mach/uart.h> 19 19 20 - .macro addruart, rp, rv 20 + .macro addruart, rp, rv, tmp 21 21 ldr \rp, =LL_UART_PADDR @ physical 22 22 ldr \rv, =LL_UART_VADDR @ virtual 23 23 .endm
-22
arch/arm/mach-zynq/include/mach/memory.h
··· 1 - /* arch/arm/mach-zynq/include/mach/memory.h 2 - * 3 - * Copyright (C) 2011 Xilinx 4 - * 5 - * This software is licensed under the terms of the GNU General Public 6 - * License version 2, as published by the Free Software Foundation, and 7 - * may be copied, distributed, and modified under those terms. 8 - * 9 - * This program is distributed in the hope that it will be useful, 10 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 11 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 12 - * GNU General Public License for more details. 13 - */ 14 - 15 - #ifndef __MACH_MEMORY_H__ 16 - #define __MACH_MEMORY_H__ 17 - 18 - #include <asm/sizes.h> 19 - 20 - #define PLAT_PHYS_OFFSET UL(0x0) 21 - 22 - #endif
+33 -11
arch/arm/mm/dma-mapping.c
··· 18 18 #include <linux/device.h> 19 19 #include <linux/dma-mapping.h> 20 20 #include <linux/highmem.h> 21 + #include <linux/slab.h> 21 22 22 23 #include <asm/memory.h> 23 24 #include <asm/highmem.h> 24 25 #include <asm/cacheflush.h> 25 26 #include <asm/tlbflush.h> 26 27 #include <asm/sizes.h> 28 + #include <asm/mach/arch.h> 27 29 28 30 #include "mm.h" 29 31 ··· 119 117 } 120 118 121 119 #ifdef CONFIG_MMU 122 - /* Sanity check size */ 123 - #if (CONSISTENT_DMA_SIZE % SZ_2M) 124 - #error "CONSISTENT_DMA_SIZE must be multiple of 2MiB" 125 - #endif 126 120 127 - #define CONSISTENT_OFFSET(x) (((unsigned long)(x) - CONSISTENT_BASE) >> PAGE_SHIFT) 128 - #define CONSISTENT_PTE_INDEX(x) (((unsigned long)(x) - CONSISTENT_BASE) >> PMD_SHIFT) 129 - #define NUM_CONSISTENT_PTES (CONSISTENT_DMA_SIZE >> PMD_SHIFT) 121 + #define CONSISTENT_OFFSET(x) (((unsigned long)(x) - consistent_base) >> PAGE_SHIFT) 122 + #define CONSISTENT_PTE_INDEX(x) (((unsigned long)(x) - consistent_base) >> PMD_SHIFT) 130 123 131 124 /* 132 125 * These are the page tables (2MB each) covering uncached, DMA consistent allocations 133 126 */ 134 - static pte_t *consistent_pte[NUM_CONSISTENT_PTES]; 127 + static pte_t **consistent_pte; 128 + 129 + #define DEFAULT_CONSISTENT_DMA_SIZE SZ_2M 130 + 131 + unsigned long consistent_base = CONSISTENT_END - DEFAULT_CONSISTENT_DMA_SIZE; 132 + 133 + void __init init_consistent_dma_size(unsigned long size) 134 + { 135 + unsigned long base = CONSISTENT_END - ALIGN(size, SZ_2M); 136 + 137 + BUG_ON(consistent_pte); /* Check we're called before DMA region init */ 138 + BUG_ON(base < VMALLOC_END); 139 + 140 + /* Grow region to accommodate specified size */ 141 + if (base < consistent_base) 142 + consistent_base = base; 143 + } 135 144 136 145 #include "vmregion.h" 137 146 138 147 static struct arm_vmregion_head consistent_head = { 139 148 .vm_lock = __SPIN_LOCK_UNLOCKED(&consistent_head.vm_lock), 140 149 .vm_list = LIST_HEAD_INIT(consistent_head.vm_list), 141 - .vm_start = CONSISTENT_BASE, 142 150 .vm_end = CONSISTENT_END, 143 151 }; 144 152 ··· 167 155 pmd_t *pmd; 168 156 pte_t *pte; 169 157 int i = 0; 170 - u32 base = CONSISTENT_BASE; 158 + unsigned long base = consistent_base; 159 + unsigned long num_ptes = (CONSISTENT_END - base) >> PGDIR_SHIFT; 160 + 161 + consistent_pte = kmalloc(num_ptes * sizeof(pte_t), GFP_KERNEL); 162 + if (!consistent_pte) { 163 + pr_err("%s: no memory\n", __func__); 164 + return -ENOMEM; 165 + } 166 + 167 + pr_debug("DMA memory: 0x%08lx - 0x%08lx:\n", base, CONSISTENT_END); 168 + consistent_head.vm_start = base; 171 169 172 170 do { 173 171 pgd = pgd_offset(&init_mm, base); ··· 220 198 size_t align; 221 199 int bit; 222 200 223 - if (!consistent_pte[0]) { 201 + if (!consistent_pte) { 224 202 printk(KERN_ERR "%s: not initialised\n", __func__); 225 203 dump_stack(); 226 204 return NULL;
-9
arch/arm/mm/init.c
··· 660 660 " ITCM : 0x%08lx - 0x%08lx (%4ld kB)\n" 661 661 #endif 662 662 " fixmap : 0x%08lx - 0x%08lx (%4ld kB)\n" 663 - #ifdef CONFIG_MMU 664 - " DMA : 0x%08lx - 0x%08lx (%4ld MB)\n" 665 - #endif 666 663 " vmalloc : 0x%08lx - 0x%08lx (%4ld MB)\n" 667 664 " lowmem : 0x%08lx - 0x%08lx (%4ld MB)\n" 668 665 #ifdef CONFIG_HIGHMEM ··· 678 681 MLK(ITCM_OFFSET, (unsigned long) itcm_end), 679 682 #endif 680 683 MLK(FIXADDR_START, FIXADDR_TOP), 681 - #ifdef CONFIG_MMU 682 - MLM(CONSISTENT_BASE, CONSISTENT_END), 683 - #endif 684 684 MLM(VMALLOC_START, VMALLOC_END), 685 685 MLM(PAGE_OFFSET, (unsigned long)high_memory), 686 686 #ifdef CONFIG_HIGHMEM ··· 700 706 * be detected at build time already. 701 707 */ 702 708 #ifdef CONFIG_MMU 703 - BUILD_BUG_ON(VMALLOC_END > CONSISTENT_BASE); 704 - BUG_ON(VMALLOC_END > CONSISTENT_BASE); 705 - 706 709 BUILD_BUG_ON(TASK_SIZE > MODULES_VADDR); 707 710 BUG_ON(TASK_SIZE > MODULES_VADDR); 708 711 #endif
+8
arch/arm/mm/mmu.c
··· 273 273 .prot_l1 = PMD_TYPE_TABLE, 274 274 .domain = DOMAIN_KERNEL, 275 275 }, 276 + [MT_MEMORY_SO] = { 277 + .prot_pte = L_PTE_PRESENT | L_PTE_YOUNG | L_PTE_DIRTY | 278 + L_PTE_MT_UNCACHED, 279 + .prot_l1 = PMD_TYPE_TABLE, 280 + .prot_sect = PMD_TYPE_SECT | PMD_SECT_AP_WRITE | PMD_SECT_S | 281 + PMD_SECT_UNCACHED | PMD_SECT_XN, 282 + .domain = DOMAIN_KERNEL, 283 + }, 276 284 }; 277 285 278 286 const struct mem_type *get_mem_type(unsigned int type)
+8 -13
arch/arm/mm/proc-arm920.S
··· 379 379 380 380 /* Suspend/resume support: taken from arch/arm/plat-s3c24xx/sleep.S */ 381 381 .globl cpu_arm920_suspend_size 382 - .equ cpu_arm920_suspend_size, 4 * 4 382 + .equ cpu_arm920_suspend_size, 4 * 3 383 383 #ifdef CONFIG_PM_SLEEP 384 384 ENTRY(cpu_arm920_do_suspend) 385 - stmfd sp!, {r4 - r7, lr} 385 + stmfd sp!, {r4 - r6, lr} 386 386 mrc p15, 0, r4, c13, c0, 0 @ PID 387 387 mrc p15, 0, r5, c3, c0, 0 @ Domain ID 388 - mrc p15, 0, r6, c2, c0, 0 @ TTB address 389 - mrc p15, 0, r7, c1, c0, 0 @ Control register 390 - stmia r0, {r4 - r7} 391 - ldmfd sp!, {r4 - r7, pc} 388 + mrc p15, 0, r6, c1, c0, 0 @ Control register 389 + stmia r0, {r4 - r6} 390 + ldmfd sp!, {r4 - r6, pc} 392 391 ENDPROC(cpu_arm920_do_suspend) 393 392 394 393 ENTRY(cpu_arm920_do_resume) 395 394 mov ip, #0 396 395 mcr p15, 0, ip, c8, c7, 0 @ invalidate I+D TLBs 397 396 mcr p15, 0, ip, c7, c7, 0 @ invalidate I+D caches 398 - ldmia r0, {r4 - r7} 397 + ldmia r0, {r4 - r6} 399 398 mcr p15, 0, r4, c13, c0, 0 @ PID 400 399 mcr p15, 0, r5, c3, c0, 0 @ Domain ID 401 - mcr p15, 0, r6, c2, c0, 0 @ TTB address 402 - mov r0, r7 @ control register 403 - mov r2, r6, lsr #14 @ get TTB0 base 404 - mov r2, r2, lsl #14 405 - ldr r3, =PMD_TYPE_SECT | PMD_SECT_BUFFERABLE | \ 406 - PMD_SECT_CACHEABLE | PMD_BIT4 | PMD_SECT_AP_WRITE 400 + mcr p15, 0, r1, c2, c0, 0 @ TTB address 401 + mov r0, r6 @ control register 407 402 b cpu_resume_mmu 408 403 ENDPROC(cpu_arm920_do_resume) 409 404 #endif
+8 -13
arch/arm/mm/proc-arm926.S
··· 394 394 395 395 /* Suspend/resume support: taken from arch/arm/plat-s3c24xx/sleep.S */ 396 396 .globl cpu_arm926_suspend_size 397 - .equ cpu_arm926_suspend_size, 4 * 4 397 + .equ cpu_arm926_suspend_size, 4 * 3 398 398 #ifdef CONFIG_PM_SLEEP 399 399 ENTRY(cpu_arm926_do_suspend) 400 - stmfd sp!, {r4 - r7, lr} 400 + stmfd sp!, {r4 - r6, lr} 401 401 mrc p15, 0, r4, c13, c0, 0 @ PID 402 402 mrc p15, 0, r5, c3, c0, 0 @ Domain ID 403 - mrc p15, 0, r6, c2, c0, 0 @ TTB address 404 - mrc p15, 0, r7, c1, c0, 0 @ Control register 405 - stmia r0, {r4 - r7} 406 - ldmfd sp!, {r4 - r7, pc} 403 + mrc p15, 0, r6, c1, c0, 0 @ Control register 404 + stmia r0, {r4 - r6} 405 + ldmfd sp!, {r4 - r6, pc} 407 406 ENDPROC(cpu_arm926_do_suspend) 408 407 409 408 ENTRY(cpu_arm926_do_resume) 410 409 mov ip, #0 411 410 mcr p15, 0, ip, c8, c7, 0 @ invalidate I+D TLBs 412 411 mcr p15, 0, ip, c7, c7, 0 @ invalidate I+D caches 413 - ldmia r0, {r4 - r7} 412 + ldmia r0, {r4 - r6} 414 413 mcr p15, 0, r4, c13, c0, 0 @ PID 415 414 mcr p15, 0, r5, c3, c0, 0 @ Domain ID 416 - mcr p15, 0, r6, c2, c0, 0 @ TTB address 417 - mov r0, r7 @ control register 418 - mov r2, r6, lsr #14 @ get TTB0 base 419 - mov r2, r2, lsl #14 420 - ldr r3, =PMD_TYPE_SECT | PMD_SECT_BUFFERABLE | \ 421 - PMD_SECT_CACHEABLE | PMD_BIT4 | PMD_SECT_AP_WRITE 415 + mcr p15, 0, r1, c2, c0, 0 @ TTB address 416 + mov r0, r6 @ control register 422 417 b cpu_resume_mmu 423 418 ENDPROC(cpu_arm926_do_resume) 424 419 #endif
+10 -15
arch/arm/mm/proc-sa1100.S
··· 168 168 mov pc, lr 169 169 170 170 .globl cpu_sa1100_suspend_size 171 - .equ cpu_sa1100_suspend_size, 4*4 171 + .equ cpu_sa1100_suspend_size, 4 * 3 172 172 #ifdef CONFIG_PM_SLEEP 173 173 ENTRY(cpu_sa1100_do_suspend) 174 - stmfd sp!, {r4 - r7, lr} 174 + stmfd sp!, {r4 - r6, lr} 175 175 mrc p15, 0, r4, c3, c0, 0 @ domain ID 176 - mrc p15, 0, r5, c2, c0, 0 @ translation table base addr 177 - mrc p15, 0, r6, c13, c0, 0 @ PID 178 - mrc p15, 0, r7, c1, c0, 0 @ control reg 179 - stmia r0, {r4 - r7} @ store cp regs 180 - ldmfd sp!, {r4 - r7, pc} 176 + mrc p15, 0, r5, c13, c0, 0 @ PID 177 + mrc p15, 0, r6, c1, c0, 0 @ control reg 178 + stmia r0, {r4 - r6} @ store cp regs 179 + ldmfd sp!, {r4 - r6, pc} 181 180 ENDPROC(cpu_sa1100_do_suspend) 182 181 183 182 ENTRY(cpu_sa1100_do_resume) 184 - ldmia r0, {r4 - r7} @ load cp regs 183 + ldmia r0, {r4 - r6} @ load cp regs 185 184 mov ip, #0 186 185 mcr p15, 0, ip, c8, c7, 0 @ flush I+D TLBs 187 186 mcr p15, 0, ip, c7, c7, 0 @ flush I&D cache ··· 188 189 mcr p15, 0, ip, c9, c0, 5 @ allow user space to use RB 189 190 190 191 mcr p15, 0, r4, c3, c0, 0 @ domain ID 191 - mcr p15, 0, r5, c2, c0, 0 @ translation table base addr 192 - mcr p15, 0, r6, c13, c0, 0 @ PID 193 - mov r0, r7 @ control register 194 - mov r2, r5, lsr #14 @ get TTB0 base 195 - mov r2, r2, lsl #14 196 - ldr r3, =PMD_TYPE_SECT | PMD_SECT_BUFFERABLE | \ 197 - PMD_SECT_CACHEABLE | PMD_SECT_AP_WRITE 192 + mcr p15, 0, r1, c2, c0, 0 @ translation table base addr 193 + mcr p15, 0, r5, c13, c0, 0 @ PID 194 + mov r0, r6 @ control register 198 195 b cpu_resume_mmu 199 196 ENDPROC(cpu_sa1100_do_resume) 200 197 #endif
+19 -25
arch/arm/mm/proc-v6.S
··· 128 128 129 129 /* Suspend/resume support: taken from arch/arm/mach-s3c64xx/sleep.S */ 130 130 .globl cpu_v6_suspend_size 131 - .equ cpu_v6_suspend_size, 4 * 8 131 + .equ cpu_v6_suspend_size, 4 * 6 132 132 #ifdef CONFIG_PM_SLEEP 133 133 ENTRY(cpu_v6_do_suspend) 134 - stmfd sp!, {r4 - r11, lr} 134 + stmfd sp!, {r4 - r9, lr} 135 135 mrc p15, 0, r4, c13, c0, 0 @ FCSE/PID 136 - mrc p15, 0, r5, c13, c0, 1 @ Context ID 137 - mrc p15, 0, r6, c3, c0, 0 @ Domain ID 138 - mrc p15, 0, r7, c2, c0, 0 @ Translation table base 0 139 - mrc p15, 0, r8, c2, c0, 1 @ Translation table base 1 140 - mrc p15, 0, r9, c1, c0, 1 @ auxiliary control register 141 - mrc p15, 0, r10, c1, c0, 2 @ co-processor access control 142 - mrc p15, 0, r11, c1, c0, 0 @ control register 143 - stmia r0, {r4 - r11} 144 - ldmfd sp!, {r4- r11, pc} 136 + mrc p15, 0, r5, c3, c0, 0 @ Domain ID 137 + mrc p15, 0, r6, c2, c0, 1 @ Translation table base 1 138 + mrc p15, 0, r7, c1, c0, 1 @ auxiliary control register 139 + mrc p15, 0, r8, c1, c0, 2 @ co-processor access control 140 + mrc p15, 0, r9, c1, c0, 0 @ control register 141 + stmia r0, {r4 - r9} 142 + ldmfd sp!, {r4- r9, pc} 145 143 ENDPROC(cpu_v6_do_suspend) 146 144 147 145 ENTRY(cpu_v6_do_resume) ··· 148 150 mcr p15, 0, ip, c7, c5, 0 @ invalidate I cache 149 151 mcr p15, 0, ip, c7, c15, 0 @ clean+invalidate cache 150 152 mcr p15, 0, ip, c7, c10, 4 @ drain write buffer 151 - ldmia r0, {r4 - r11} 153 + mcr p15, 0, ip, c13, c0, 1 @ set reserved context ID 154 + ldmia r0, {r4 - r9} 152 155 mcr p15, 0, r4, c13, c0, 0 @ FCSE/PID 153 - mcr p15, 0, r5, c13, c0, 1 @ Context ID 154 - mcr p15, 0, r6, c3, c0, 0 @ Domain ID 155 - mcr p15, 0, r7, c2, c0, 0 @ Translation table base 0 156 - mcr p15, 0, r8, c2, c0, 1 @ Translation table base 1 157 - mcr p15, 0, r9, c1, c0, 1 @ auxiliary control register 158 - mcr p15, 0, r10, c1, c0, 2 @ co-processor access control 156 + mcr p15, 0, r5, c3, c0, 0 @ Domain ID 157 + ALT_SMP(orr r1, r1, #TTB_FLAGS_SMP) 158 + ALT_UP(orr r1, r1, #TTB_FLAGS_UP)
159 + mcr p15, 0, r1, c2, c0, 0 @ Translation table base 0 160 + mcr p15, 0, r6, c2, c0, 1 @ Translation table base 1 161 + mcr p15, 0, r7, c1, c0, 1 @ auxiliary control register 162 + mcr p15, 0, r8, c1, c0, 2 @ co-processor access control 159 163 mcr p15, 0, ip, c2, c0, 2 @ TTB control register 160 164 mcr p15, 0, ip, c7, c5, 4 @ ISB 161 - mov r0, r11 @ control register 162 - mov r2, r7, lsr #14 @ get TTB0 base 163 - mov r2, r2, lsl #14 164 - ldr r3, cpu_resume_l1_flags 165 + mov r0, r9 @ control register 165 166 b cpu_resume_mmu 166 167 ENDPROC(cpu_v6_do_resume) 167 - cpu_resume_l1_flags: 168 - ALT_SMP(.long PMD_TYPE_SECT | PMD_SECT_AP_WRITE | PMD_FLAGS_SMP) 169 - ALT_UP(.long PMD_TYPE_SECT | PMD_SECT_AP_WRITE | PMD_FLAGS_UP) 170 168 #endif 171 169 172 170 string cpu_v6_name, "ARMv6-compatible processor"
+22 -28
arch/arm/mm/proc-v7.S
··· 217 217 218 218 /* Suspend/resume support: derived from arch/arm/mach-s5pv210/sleep.S */ 219 219 .globl cpu_v7_suspend_size 220 - .equ cpu_v7_suspend_size, 4 * 9 220 + .equ cpu_v7_suspend_size, 4 * 7 221 221 #ifdef CONFIG_ARM_CPU_SUSPEND 222 222 ENTRY(cpu_v7_do_suspend) 223 - stmfd sp!, {r4 - r11, lr} 223 + stmfd sp!, {r4 - r10, lr} 224 224 mrc p15, 0, r4, c13, c0, 0 @ FCSE/PID 225 - mrc p15, 0, r5, c13, c0, 1 @ Context ID 226 - mrc p15, 0, r6, c13, c0, 3 @ User r/o thread ID 227 - stmia r0!, {r4 - r6} 225 + mrc p15, 0, r5, c13, c0, 3 @ User r/o thread ID 226 + stmia r0!, {r4 - r5} 228 227 mrc p15, 0, r6, c3, c0, 0 @ Domain ID 229 - mrc p15, 0, r7, c2, c0, 0 @ TTB 0 230 - mrc p15, 0, r8, c2, c0, 1 @ TTB 1 231 - mrc p15, 0, r9, c1, c0, 0 @ Control register 232 - mrc p15, 0, r10, c1, c0, 1 @ Auxiliary control register 233 - mrc p15, 0, r11, c1, c0, 2 @ Co-processor access control 234 - stmia r0, {r6 - r11} 235 - ldmfd sp!, {r4 - r11, pc} 228 + mrc p15, 0, r7, c2, c0, 1 @ TTB 1 229 + mrc p15, 0, r8, c1, c0, 0 @ Control register 230 + mrc p15, 0, r9, c1, c0, 1 @ Auxiliary control register 231 + mrc p15, 0, r10, c1, c0, 2 @ Co-processor access control 232 + stmia r0, {r6 - r10} 233 + ldmfd sp!, {r4 - r10, pc} 236 234 ENDPROC(cpu_v7_do_suspend) 237 235 238 236 ENTRY(cpu_v7_do_resume) 239 237 mov ip, #0 240 238 mcr p15, 0, ip, c8, c7, 0 @ invalidate TLBs 241 239 mcr p15, 0, ip, c7, c5, 0 @ invalidate I cache 242 - ldmia r0!, {r4 - r6} 240 + mcr p15, 0, ip, c13, c0, 1 @ set reserved context ID 241 + ldmia r0!, {r4 - r5} 243 242 mcr p15, 0, r4, c13, c0, 0 @ FCSE/PID 244 - mcr p15, 0, r5, c13, c0, 1 @ Context ID 245 - mcr p15, 0, r6, c13, c0, 3 @ User r/o thread ID 246 - ldmia r0, {r6 - r11} 243 + mcr p15, 0, r5, c13, c0, 3 @ User r/o thread ID 244 + ldmia r0, {r6 - r10} 247 245 mcr p15, 0, r6, c3, c0, 0 @ Domain ID 248 - mcr p15, 0, r7, c2, c0, 0 @ TTB 0 249 - mcr p15, 0, r8, c2, c0, 1 @ TTB 1 246 + ALT_SMP(orr r1, r1, #TTB_FLAGS_SMP) 247 + ALT_UP(orr r1, r1, #TTB_FLAGS_UP)
248 + mcr p15, 0, r1, c2, c0, 0 @ TTB 0 249 + mcr p15, 0, r7, c2, c0, 1 @ TTB 1 250 250 mcr p15, 0, ip, c2, c0, 2 @ TTB control register 251 251 mrc p15, 0, r4, c1, c0, 1 @ Read Auxiliary control register 252 - teq r4, r10 @ Is it already set? 253 - mcrne p15, 0, r10, c1, c0, 1 @ No, so write it 254 - mcr p15, 0, r11, c1, c0, 2 @ Co-processor access control 252 + teq r4, r9 @ Is it already set? 253 + mcrne p15, 0, r9, c1, c0, 1 @ No, so write it 254 + mcr p15, 0, r10, c1, c0, 2 @ Co-processor access control 255 255 ldr r4, =PRRR @ PRRR 256 256 ldr r5, =NMRR @ NMRR 257 257 mcr p15, 0, r4, c10, c2, 0 @ write PRRR 258 258 mcr p15, 0, r5, c10, c2, 1 @ write NMRR 259 259 isb 260 260 dsb 261 - mov r0, r9 @ control register 262 - mov r2, r7, lsr #14 @ get TTB0 base 263 - mov r2, r2, lsl #14 264 - ldr r3, cpu_resume_l1_flags 261 + mov r0, r8 @ control register 265 262 b cpu_resume_mmu 266 263 ENDPROC(cpu_v7_do_resume) 267 - cpu_resume_l1_flags: 268 - ALT_SMP(.long PMD_TYPE_SECT | PMD_SECT_AP_WRITE | PMD_FLAGS_SMP) 269 - ALT_UP(.long PMD_TYPE_SECT | PMD_SECT_AP_WRITE | PMD_FLAGS_UP) 270 264 #endif 271 265 272 266 __CPUINIT
+11 -17
arch/arm/mm/proc-xsc3.S
··· 406 406 .align 407 407 408 408 .globl cpu_xsc3_suspend_size 409 - .equ cpu_xsc3_suspend_size, 4 * 7 409 + .equ cpu_xsc3_suspend_size, 4 * 6 410 410 #ifdef CONFIG_PM_SLEEP 411 411 ENTRY(cpu_xsc3_do_suspend) 412 - stmfd sp!, {r4 - r10, lr} 412 + stmfd sp!, {r4 - r9, lr} 413 413 mrc p14, 0, r4, c6, c0, 0 @ clock configuration, for turbo mode 414 414 mrc p15, 0, r5, c15, c1, 0 @ CP access reg 415 415 mrc p15, 0, r6, c13, c0, 0 @ PID 416 416 mrc p15, 0, r7, c3, c0, 0 @ domain ID 417 - mrc p15, 0, r8, c2, c0, 0 @ translation table base addr 418 - mrc p15, 0, r9, c1, c0, 1 @ auxiliary control reg 419 - mrc p15, 0, r10, c1, c0, 0 @ control reg 417 + mrc p15, 0, r8, c1, c0, 1 @ auxiliary control reg 418 + mrc p15, 0, r9, c1, c0, 0 @ control reg 420 419 bic r4, r4, #2 @ clear frequency change bit 421 - stmia r0, {r4 - r10} @ store cp regs 422 - ldmia sp!, {r4 - r10, pc} 420 + stmia r0, {r4 - r9} @ store cp regs 421 + ldmia sp!, {r4 - r9, pc} 423 422 ENDPROC(cpu_xsc3_do_suspend) 424 423 425 424 ENTRY(cpu_xsc3_do_resume) 426 - ldmia r0, {r4 - r10} @ load cp regs 425 + ldmia r0, {r4 - r9} @ load cp regs 427 426 mov ip, #0 428 427 mcr p15, 0, ip, c7, c7, 0 @ invalidate I & D caches, BTB 429 428 mcr p15, 0, ip, c7, c10, 4 @ drain write (&fill) buffer ··· 432 433 mcr p15, 0, r5, c15, c1, 0 @ CP access reg 433 434 mcr p15, 0, r6, c13, c0, 0 @ PID 434 435 mcr p15, 0, r7, c3, c0, 0 @ domain ID 435 - mcr p15, 0, r8, c2, c0, 0 @ translation table base addr 436 - mcr p15, 0, r9, c1, c0, 1 @ auxiliary control reg 437 - 438 - @ temporarily map resume_turn_on_mmu into the page table, 439 - @ otherwise prefetch abort occurs after MMU is turned on 440 - mov r0, r10 @ control register 441 - mov r2, r8, lsr #14 @ get TTB0 base 442 - mov r2, r2, lsl #14 443 - ldr r3, =0x542e @ section flags 436 + orr r1, r1, #0x18 @ cache the page table in L2 437 + mcr p15, 0, r1, c2, c0, 0 @ translation table base addr 438 + mcr p15, 0, r8, c1, c0, 1 @ auxiliary control reg 439 + mov r0, r9 @ control register
444 440 b cpu_resume_mmu 445 441 ENDPROC(cpu_xsc3_do_resume) 446 442 #endif
+10 -15
arch/arm/mm/proc-xscale.S
··· 520 520 .align 521 521 522 522 .globl cpu_xscale_suspend_size 523 - .equ cpu_xscale_suspend_size, 4 * 7 523 + .equ cpu_xscale_suspend_size, 4 * 6 524 524 #ifdef CONFIG_PM_SLEEP 525 525 ENTRY(cpu_xscale_do_suspend) 526 - stmfd sp!, {r4 - r10, lr} 526 + stmfd sp!, {r4 - r9, lr} 527 527 mrc p14, 0, r4, c6, c0, 0 @ clock configuration, for turbo mode 528 528 mrc p15, 0, r5, c15, c1, 0 @ CP access reg 529 529 mrc p15, 0, r6, c13, c0, 0 @ PID 530 530 mrc p15, 0, r7, c3, c0, 0 @ domain ID 531 - mrc p15, 0, r8, c2, c0, 0 @ translation table base addr 532 - mrc p15, 0, r9, c1, c1, 0 @ auxiliary control reg 533 - mrc p15, 0, r10, c1, c0, 0 @ control reg 531 + mrc p15, 0, r8, c1, c1, 0 @ auxiliary control reg 532 + mrc p15, 0, r9, c1, c0, 0 @ control reg 534 533 bic r4, r4, #2 @ clear frequency change bit 535 - stmia r0, {r4 - r10} @ store cp regs 536 - ldmfd sp!, {r4 - r10, pc} 534 + stmia r0, {r4 - r9} @ store cp regs 535 + ldmfd sp!, {r4 - r9, pc} 537 536 ENDPROC(cpu_xscale_do_suspend) 538 537 539 538 ENTRY(cpu_xscale_do_resume) 540 - ldmia r0, {r4 - r10} @ load cp regs 539 + ldmia r0, {r4 - r9} @ load cp regs 541 540 mov ip, #0 542 541 mcr p15, 0, ip, c8, c7, 0 @ invalidate I & D TLBs 543 542 mcr p15, 0, ip, c7, c7, 0 @ invalidate I & D caches, BTB ··· 544 545 mcr p15, 0, r5, c15, c1, 0 @ CP access reg 545 546 mcr p15, 0, r6, c13, c0, 0 @ PID 546 547 mcr p15, 0, r7, c3, c0, 0 @ domain ID 547 - mcr p15, 0, r8, c2, c0, 0 @ translation table base addr 548 - mcr p15, 0, r9, c1, c1, 0 @ auxiliary control reg 549 - mov r0, r10 @ control register 550 - mov r2, r8, lsr #14 @ get TTB0 base 551 - mov r2, r2, lsl #14 552 - ldr r3, =PMD_TYPE_SECT | PMD_SECT_BUFFERABLE | \ 553 - PMD_SECT_CACHEABLE | PMD_SECT_AP_WRITE 548 + mcr p15, 0, r1, c2, c0, 0 @ translation table base addr 549 + mcr p15, 0, r8, c1, c1, 0 @ auxiliary control reg 550 + mov r0, r9 @ control register 554 551 b cpu_resume_mmu 555 552 ENDPROC(cpu_xscale_do_resume) 556 553 #endif
+1 -1
arch/arm/plat-mxc/include/mach/debug-macro.S
··· 54 54 55 55 #define UART_VADDR IMX_IO_ADDRESS(UART_PADDR) 56 56 57 - .macro addruart, rp, rv 57 + .macro addruart, rp, rv, tmp 58 58 ldr \rp, =UART_PADDR @ physical 59 59 ldr \rv, =UART_VADDR @ virtual 60 60 .endm
-58
arch/arm/plat-mxc/include/mach/memory.h
··· 1 - /* 2 - * Copyright 2004-2007 Freescale Semiconductor, Inc. All Rights Reserved. 3 - */ 4 - 5 - /* 6 - * This program is free software; you can redistribute it and/or modify 7 - * it under the terms of the GNU General Public License version 2 as 8 - * published by the Free Software Foundation. 9 - */ 10 - 11 - #ifndef __ASM_ARCH_MXC_MEMORY_H__ 12 - #define __ASM_ARCH_MXC_MEMORY_H__ 13 - 14 - #define MX1_PHYS_OFFSET UL(0x08000000) 15 - #define MX21_PHYS_OFFSET UL(0xc0000000) 16 - #define MX25_PHYS_OFFSET UL(0x80000000) 17 - #define MX27_PHYS_OFFSET UL(0xa0000000) 18 - #define MX3x_PHYS_OFFSET UL(0x80000000) 19 - #define MX50_PHYS_OFFSET UL(0x70000000) 20 - #define MX51_PHYS_OFFSET UL(0x90000000) 21 - #define MX53_PHYS_OFFSET UL(0x70000000) 22 - 23 - #if !defined(CONFIG_RUNTIME_PHYS_OFFSET) 24 - # if defined CONFIG_ARCH_MX1 25 - # define PLAT_PHYS_OFFSET MX1_PHYS_OFFSET 26 - # elif defined CONFIG_MACH_MX21 27 - # define PLAT_PHYS_OFFSET MX21_PHYS_OFFSET 28 - # elif defined CONFIG_ARCH_MX25 29 - # define PLAT_PHYS_OFFSET MX25_PHYS_OFFSET 30 - # elif defined CONFIG_MACH_MX27 31 - # define PLAT_PHYS_OFFSET MX27_PHYS_OFFSET 32 - # elif defined CONFIG_ARCH_MX3 33 - # define PLAT_PHYS_OFFSET MX3x_PHYS_OFFSET 34 - # elif defined CONFIG_ARCH_MX50 35 - # define PLAT_PHYS_OFFSET MX50_PHYS_OFFSET 36 - # elif defined CONFIG_ARCH_MX51 37 - # define PLAT_PHYS_OFFSET MX51_PHYS_OFFSET 38 - # elif defined CONFIG_ARCH_MX53 39 - # define PLAT_PHYS_OFFSET MX53_PHYS_OFFSET 40 - # endif 41 - #endif 42 - 43 - #if defined(CONFIG_MX3_VIDEO) 44 - /* 45 - * Increase size of DMA-consistent memory region. 46 - * This is required for mx3 camera driver to capture at least two QXGA frames. 47 - */ 48 - #define CONSISTENT_DMA_SIZE SZ_8M 49 - 50 - #elif defined(CONFIG_MX1_VIDEO) || defined(CONFIG_VIDEO_MX2_HOSTSUPPORT) 51 - /* 52 - * Increase size of DMA-consistent memory region. 53 - * This is required for i.MX camera driver to capture at least four VGA frames. 
54 - */ 55 - #define CONSISTENT_DMA_SIZE SZ_4M 56 - #endif /* CONFIG_MX1_VIDEO || CONFIG_VIDEO_MX2_HOSTSUPPORT */ 57 - 58 - #endif /* __ASM_ARCH_MXC_MEMORY_H__ */
+1
arch/arm/plat-omap/Kconfig
··· 15 15 select CLKSRC_MMIO 16 16 select GENERIC_IRQ_CHIP 17 17 select HAVE_IDE 18 + select NEED_MACH_MEMORY_H 18 19 help 19 20 "Systems based on omap7xx, omap15xx or omap16xx" 20 21
+2
arch/arm/plat-omap/include/plat/io.h
··· 309 309 void __iomem *omap_ioremap(unsigned long phys, size_t size, unsigned int type); 310 310 void omap_iounmap(volatile void __iomem *addr); 311 311 312 + extern void __init omap_init_consistent_dma_size(void); 313 + 312 314 #endif 313 315 314 316 #endif
-102
arch/arm/plat-omap/include/plat/memory.h
··· 1 - /* 2 - * arch/arm/plat-omap/include/mach/memory.h 3 - * 4 - * Memory map for OMAP-1510 and 1610 5 - * 6 - * Copyright (C) 2000 RidgeRun, Inc. 7 - * Author: Greg Lonnon <glonnon@ridgerun.com> 8 - * 9 - * This file was derived from arch/arm/mach-intergrator/include/mach/memory.h 10 - * Copyright (C) 1999 ARM Limited 11 - * 12 - * This program is free software; you can redistribute it and/or modify it 13 - * under the terms of the GNU General Public License as published by the 14 - * Free Software Foundation; either version 2 of the License, or (at your 15 - * option) any later version. 16 - * 17 - * THIS SOFTWARE IS PROVIDED ``AS IS'' AND ANY EXPRESS OR IMPLIED 18 - * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF 19 - * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN 20 - * NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, 21 - * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT 22 - * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF 23 - * USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON 24 - * ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT 25 - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF 26 - * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 27 - * 28 - * You should have received a copy of the GNU General Public License along 29 - * with this program; if not, write to the Free Software Foundation, Inc., 30 - * 675 Mass Ave, Cambridge, MA 02139, USA. 31 - */ 32 - 33 - #ifndef __ASM_ARCH_MEMORY_H 34 - #define __ASM_ARCH_MEMORY_H 35 - 36 - /* 37 - * Physical DRAM offset. 38 - */ 39 - #if defined(CONFIG_ARCH_OMAP1) 40 - #define PLAT_PHYS_OFFSET UL(0x10000000) 41 - #else 42 - #define PLAT_PHYS_OFFSET UL(0x80000000) 43 - #endif 44 - 45 - /* 46 - * Bus address is physical address, except for OMAP-1510 Local Bus. 
47 - * OMAP-1510 bus address is translated into a Local Bus address if the 48 - * OMAP bus type is lbus. We do the address translation based on the 49 - * device overriding the defaults used in the dma-mapping API. 50 - * Note that the is_lbus_device() test is not very efficient on 1510 51 - * because of the strncmp(). 52 - */ 53 - #ifdef CONFIG_ARCH_OMAP15XX 54 - 55 - /* 56 - * OMAP-1510 Local Bus address offset 57 - */ 58 - #define OMAP1510_LB_OFFSET UL(0x30000000) 59 - 60 - #define virt_to_lbus(x) ((x) - PAGE_OFFSET + OMAP1510_LB_OFFSET) 61 - #define lbus_to_virt(x) ((x) - OMAP1510_LB_OFFSET + PAGE_OFFSET) 62 - #define is_lbus_device(dev) (cpu_is_omap15xx() && dev && (strncmp(dev_name(dev), "ohci", 4) == 0)) 63 - 64 - #define __arch_pfn_to_dma(dev, pfn) \ 65 - ({ dma_addr_t __dma = __pfn_to_phys(pfn); \ 66 - if (is_lbus_device(dev)) \ 67 - __dma = __dma - PHYS_OFFSET + OMAP1510_LB_OFFSET; \ 68 - __dma; }) 69 - 70 - #define __arch_dma_to_pfn(dev, addr) \ 71 - ({ dma_addr_t __dma = addr; \ 72 - if (is_lbus_device(dev)) \ 73 - __dma += PHYS_OFFSET - OMAP1510_LB_OFFSET; \ 74 - __phys_to_pfn(__dma); \ 75 - }) 76 - 77 - #define __arch_dma_to_virt(dev, addr) ({ (void *) (is_lbus_device(dev) ? \ 78 - lbus_to_virt(addr) : \ 79 - __phys_to_virt(addr)); }) 80 - 81 - #define __arch_virt_to_dma(dev, addr) ({ unsigned long __addr = (unsigned long)(addr); \ 82 - (dma_addr_t) (is_lbus_device(dev) ? \ 83 - virt_to_lbus(__addr) : \ 84 - __virt_to_phys(__addr)); }) 85 - 86 - #endif /* CONFIG_ARCH_OMAP15XX */ 87 - 88 - /* Override the ARM default */ 89 - #ifdef CONFIG_FB_OMAP_CONSISTENT_DMA_SIZE 90 - 91 - #if (CONFIG_FB_OMAP_CONSISTENT_DMA_SIZE == 0) 92 - #undef CONFIG_FB_OMAP_CONSISTENT_DMA_SIZE 93 - #define CONFIG_FB_OMAP_CONSISTENT_DMA_SIZE 2 94 - #endif 95 - 96 - #define CONSISTENT_DMA_SIZE \ 97 - (((CONFIG_FB_OMAP_CONSISTENT_DMA_SIZE + 1) & ~1) * 1024 * 1024) 98 - 99 - #endif 100 - 101 - #endif 102 -
+3 -3
arch/arm/plat-omap/include/plat/serial.h
··· 16 16 #include <linux/init.h> 17 17 18 18 /* 19 - * Memory entry used for the DEBUG_LL UART configuration. See also 20 - * uncompress.h and debug-macro.S. 19 + * Memory entry used for the DEBUG_LL UART configuration, relative to 20 + * start of RAM. See also uncompress.h and debug-macro.S. 21 21 * 22 22 * Note that using a memory location for storing the UART configuration 23 23 * has at least two limitations: ··· 27 27 * 2. We assume printascii is called at least once before paging_init, 28 28 * and addruart has a chance to read OMAP_UART_INFO 29 29 */ 30 - #define OMAP_UART_INFO (PLAT_PHYS_OFFSET + 0x3ffc) 30 + #define OMAP_UART_INFO_OFS 0x3ffc 31 31 32 32 /* OMAP1 serial ports */ 33 33 #define OMAP1_UART1_BASE 0xfffb0000
+7 -1
arch/arm/plat-omap/include/plat/uncompress.h
··· 36 36 */ 37 37 static void set_omap_uart_info(unsigned char port) 38 38 { 39 - *(volatile u32 *)OMAP_UART_INFO = port; 39 + /* 40 + * Get address of some .bss variable and round it down 41 + * a la CONFIG_AUTO_ZRELADDR. 42 + */ 43 + u32 ram_start = (u32)&uart_shift & 0xf8000000; 44 + u32 *uart_info = (u32 *)(ram_start + OMAP_UART_INFO_OFS); 45 + *uart_info = port; 40 46 } 41 47 42 48 static void putc(int c)
+8
arch/arm/plat-omap/io.c
··· 12 12 #include <linux/module.h> 13 13 #include <linux/io.h> 14 14 #include <linux/mm.h> 15 + #include <linux/dma-mapping.h> 15 16 16 17 #include <plat/omap7xx.h> 17 18 #include <plat/omap1510.h> ··· 140 139 __iounmap(addr); 141 140 } 142 141 EXPORT_SYMBOL(omap_iounmap); 142 + 143 + void __init omap_init_consistent_dma_size(void) 144 + { 145 + #ifdef CONFIG_FB_OMAP_CONSISTENT_DMA_SIZE 146 + init_consistent_dma_size(CONFIG_FB_OMAP_CONSISTENT_DMA_SIZE << 20); 147 + #endif 148 + }
+1 -1
arch/arm/plat-spear/include/plat/debug-macro.S
··· 14 14 #include <linux/amba/serial.h> 15 15 #include <mach/hardware.h> 16 16 17 - .macro addruart, rp, rv 17 + .macro addruart, rp, rv, tmp 18 18 mov \rp, #SPEAR_DBG_UART_BASE @ Physical base 19 19 mov \rv, #VA_SPEAR_DBG_UART_BASE @ Virtual base 20 20 .endm
-20
arch/arm/plat-spear/include/plat/memory.h
··· 1 - /* 2 - * arch/arm/plat-spear/include/plat/memory.h 3 - * 4 - * Memory map for SPEAr platform 5 - * 6 - * Copyright (C) 2009 ST Microelectronics 7 - * Viresh Kumar<viresh.kumar@st.com> 8 - * 9 - * This file is licensed under the terms of the GNU General Public 10 - * License version 2. This program is licensed "as is" without any 11 - * warranty of any kind, whether express or implied. 12 - */ 13 - 14 - #ifndef __PLAT_MEMORY_H 15 - #define __PLAT_MEMORY_H 16 - 17 - /* Physical DRAM offset */ 18 - #define PLAT_PHYS_OFFSET UL(0x00000000) 19 - 20 - #endif /* __PLAT_MEMORY_H */
+1 -1
arch/arm/plat-tcc/include/mach/debug-macro.S
··· 9 9 * 10 10 */ 11 11 12 - .macro addruart, rp, rv 12 + .macro addruart, rp, rv, tmp 13 13 moveq \rp, #0x90000000 @ physical base address 14 14 movne \rv, #0xF1000000 @ virtual base 15 15 orr \rp, \rp, #0x00007000 @ UART0
-18
arch/arm/plat-tcc/include/mach/memory.h
··· 1 - /* 2 - * Copyright (C) 1999 ARM Limited 3 - * Copyright (C) 2000 RidgeRun, Inc. 4 - * Copyright (C) 2008-2009 Telechips 5 - * Copyright (C) 2010 Hans J. Koch <hjk@linutronix.de> 6 - * 7 - * Licensed under the terms of the GPL v2. 8 - */ 9 - 10 - #ifndef __ASM_ARCH_MEMORY_H 11 - #define __ASM_ARCH_MEMORY_H 12 - 13 - /* 14 - * Physical DRAM offset. 15 - */ 16 - #define PLAT_PHYS_OFFSET UL(0x20000000) 17 - 18 - #endif
+22 -9
arch/arm/vfp/vfpmodule.c
··· 11 11 #include <linux/module.h> 12 12 #include <linux/types.h> 13 13 #include <linux/cpu.h> 14 + #include <linux/cpu_pm.h> 14 15 #include <linux/kernel.h> 15 16 #include <linux/notifier.h> 16 17 #include <linux/signal.h> ··· 69 68 /* 70 69 * Force a reload of the VFP context from the thread structure. We do 71 70 * this by ensuring that access to the VFP hardware is disabled, and 72 - * clear last_VFP_context. Must be called from non-preemptible context. 71 + * clear vfp_current_hw_state. Must be called from non-preemptible context. 73 72 */ 74 73 static void vfp_force_reload(unsigned int cpu, struct thread_info *thread) 75 74 { ··· 437 436 set_copro_access(access | CPACC_FULL(10) | CPACC_FULL(11)); 438 437 } 439 438 440 - #ifdef CONFIG_PM 441 - #include <linux/syscore_ops.h> 442 - 439 + #ifdef CONFIG_CPU_PM 443 440 static int vfp_pm_suspend(void) 444 441 { 445 442 struct thread_info *ti = current_thread_info(); ··· 467 468 fmxr(FPEXC, fmrx(FPEXC) & ~FPEXC_EN); 468 469 } 469 470 470 - static struct syscore_ops vfp_pm_syscore_ops = { 471 - .suspend = vfp_pm_suspend, 472 - .resume = vfp_pm_resume, 471 + static int vfp_cpu_pm_notifier(struct notifier_block *self, unsigned long cmd, 472 + void *v) 473 + { 474 + switch (cmd) { 475 + case CPU_PM_ENTER: 476 + vfp_pm_suspend(); 477 + break; 478 + case CPU_PM_ENTER_FAILED: 479 + case CPU_PM_EXIT: 480 + vfp_pm_resume(); 481 + break; 482 + } 483 + return NOTIFY_OK; 484 + } 485 + 486 + static struct notifier_block vfp_cpu_pm_notifier_block = { 487 + .notifier_call = vfp_cpu_pm_notifier, 473 488 }; 474 489 475 490 static void vfp_pm_init(void) 476 491 { 477 - register_syscore_ops(&vfp_pm_syscore_ops); 492 + cpu_pm_register_notifier(&vfp_cpu_pm_notifier_block); 478 493 } 479 494 480 495 #else 481 496 static inline void vfp_pm_init(void) { } 482 - #endif /* CONFIG_PM */ 497 + #endif /* CONFIG_CPU_PM */ 483 498 484 499 /* 485 500 * Ensure that the VFP state stored in 'thread->vfpstate' is up to date
-6
drivers/usb/musb/musb_debugfs.c
··· 41 41 #include <linux/debugfs.h> 42 42 #include <linux/seq_file.h> 43 43 44 - #ifdef CONFIG_ARM 45 - #include <mach/hardware.h> 46 - #include <mach/memory.h> 47 - #include <asm/mach-types.h> 48 - #endif 49 - 50 44 #include <asm/uaccess.h> 51 45 52 46 #include "musb_core.h"
+109
include/linux/cpu_pm.h
··· 1 + /* 2 + * Copyright (C) 2011 Google, Inc. 3 + * 4 + * Author: 5 + * Colin Cross <ccross@android.com> 6 + * 7 + * This software is licensed under the terms of the GNU General Public 8 + * License version 2, as published by the Free Software Foundation, and 9 + * may be copied, distributed, and modified under those terms. 10 + * 11 + * This program is distributed in the hope that it will be useful, 12 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 13 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 14 + * GNU General Public License for more details. 15 + * 16 + */ 17 + 18 + #ifndef _LINUX_CPU_PM_H 19 + #define _LINUX_CPU_PM_H 20 + 21 + #include <linux/kernel.h> 22 + #include <linux/notifier.h> 23 + 24 + /* 25 + * When a CPU goes to a low power state that turns off power to the CPU's 26 + * power domain, the contents of some blocks (floating point coprocessors, 27 + * interrupt controllers, caches, timers) in the same power domain can 28 + * be lost. The cpu_pm notifiers provide a method for platform idle, suspend, 29 + * and hotplug implementations to notify the drivers for these blocks that 30 + * they may be reset. 31 + * 32 + * All cpu_pm notifications must be called with interrupts disabled. 33 + * 34 + * The notifications are split into two classes: CPU notifications and CPU 35 + * cluster notifications. 36 + * 37 + * CPU notifications apply to a single CPU and must be called on the affected 38 + * CPU. They are used to save per-cpu context for affected blocks. 39 + * 40 + * CPU cluster notifications apply to all CPUs in a single power domain. They 41 + * are used to save any global context for affected blocks, and must be called 42 + * after all the CPUs in the power domain have been notified of the low power 43 + * state. 
44 + */ 45 + 46 + /* 47 + * Event codes passed as unsigned long val to notifier calls 48 + */ 49 + enum cpu_pm_event { 50 + /* A single cpu is entering a low power state */ 51 + CPU_PM_ENTER, 52 + 53 + /* A single cpu failed to enter a low power state */ 54 + CPU_PM_ENTER_FAILED, 55 + 56 + /* A single cpu is exiting a low power state */ 57 + CPU_PM_EXIT, 58 + 59 + /* A cpu power domain is entering a low power state */ 60 + CPU_CLUSTER_PM_ENTER, 61 + 62 + /* A cpu power domain failed to enter a low power state */ 63 + CPU_CLUSTER_PM_ENTER_FAILED, 64 + 65 + /* A cpu power domain is exiting a low power state */ 66 + CPU_CLUSTER_PM_EXIT, 67 + }; 68 + 69 + #ifdef CONFIG_CPU_PM 70 + int cpu_pm_register_notifier(struct notifier_block *nb); 71 + int cpu_pm_unregister_notifier(struct notifier_block *nb); 72 + int cpu_pm_enter(void); 73 + int cpu_pm_exit(void); 74 + int cpu_cluster_pm_enter(void); 75 + int cpu_cluster_pm_exit(void); 76 + 77 + #else 78 + 79 + static inline int cpu_pm_register_notifier(struct notifier_block *nb) 80 + { 81 + return 0; 82 + } 83 + 84 + static inline int cpu_pm_unregister_notifier(struct notifier_block *nb) 85 + { 86 + return 0; 87 + } 88 + 89 + static inline int cpu_pm_enter(void) 90 + { 91 + return 0; 92 + } 93 + 94 + static inline int cpu_pm_exit(void) 95 + { 96 + return 0; 97 + } 98 + 99 + static inline int cpu_cluster_pm_enter(void) 100 + { 101 + return 0; 102 + } 103 + 104 + static inline int cpu_cluster_pm_exit(void) 105 + { 106 + return 0; 107 + } 108 + #endif 109 + #endif
+1
kernel/Makefile
··· 101 101 obj-$(CONFIG_TRACEPOINTS) += trace/ 102 102 obj-$(CONFIG_SMP) += sched_cpupri.o 103 103 obj-$(CONFIG_IRQ_WORK) += irq_work.o 104 + obj-$(CONFIG_CPU_PM) += cpu_pm.o 104 105 105 106 obj-$(CONFIG_PERF_EVENTS) += events/ 106 107
+233
kernel/cpu_pm.c
··· 1 + /* 2 + * Copyright (C) 2011 Google, Inc. 3 + * 4 + * Author: 5 + * Colin Cross <ccross@android.com> 6 + * 7 + * This software is licensed under the terms of the GNU General Public 8 + * License version 2, as published by the Free Software Foundation, and 9 + * may be copied, distributed, and modified under those terms. 10 + * 11 + * This program is distributed in the hope that it will be useful, 12 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 13 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 14 + * GNU General Public License for more details. 15 + * 16 + */ 17 + 18 + #include <linux/kernel.h> 19 + #include <linux/cpu_pm.h> 20 + #include <linux/module.h> 21 + #include <linux/notifier.h> 22 + #include <linux/spinlock.h> 23 + #include <linux/syscore_ops.h> 24 + 25 + static DEFINE_RWLOCK(cpu_pm_notifier_lock); 26 + static RAW_NOTIFIER_HEAD(cpu_pm_notifier_chain); 27 + 28 + static int cpu_pm_notify(enum cpu_pm_event event, int nr_to_call, int *nr_calls) 29 + { 30 + int ret; 31 + 32 + ret = __raw_notifier_call_chain(&cpu_pm_notifier_chain, event, NULL, 33 + nr_to_call, nr_calls); 34 + 35 + return notifier_to_errno(ret); 36 + } 37 + 38 + /** 39 + * cpu_pm_register_notifier - register a driver with cpu_pm 40 + * @nb: notifier block to register 41 + * 42 + * Add a driver to a list of drivers that are notified about 43 + * CPU and CPU cluster low power entry and exit. 44 + * 45 + * This function may sleep, and has the same return conditions as 46 + * raw_notifier_chain_register. 
47 + */ 48 + int cpu_pm_register_notifier(struct notifier_block *nb) 49 + { 50 + unsigned long flags; 51 + int ret; 52 + 53 + write_lock_irqsave(&cpu_pm_notifier_lock, flags); 54 + ret = raw_notifier_chain_register(&cpu_pm_notifier_chain, nb); 55 + write_unlock_irqrestore(&cpu_pm_notifier_lock, flags); 56 + 57 + return ret; 58 + } 59 + EXPORT_SYMBOL_GPL(cpu_pm_register_notifier); 60 + 61 + /** 62 + * cpu_pm_unregister_notifier - unregister a driver with cpu_pm 63 + * @nb: notifier block to be unregistered 64 + * 65 + * Remove a driver from the CPU PM notifier list. 66 + * 67 + * This function has the same return conditions as 68 + * raw_notifier_chain_unregister. 69 + */ 70 + int cpu_pm_unregister_notifier(struct notifier_block *nb) 71 + { 72 + unsigned long flags; 73 + int ret; 74 + 75 + write_lock_irqsave(&cpu_pm_notifier_lock, flags); 76 + ret = raw_notifier_chain_unregister(&cpu_pm_notifier_chain, nb); 77 + write_unlock_irqrestore(&cpu_pm_notifier_lock, flags); 78 + 79 + return ret; 80 + } 81 + EXPORT_SYMBOL_GPL(cpu_pm_unregister_notifier); 82 + 83 + /** 84 + * cpu_pm_enter - CPU low power entry notifier 85 + * 86 + * Notifies listeners that a single CPU is entering a low power state that may 87 + * cause some blocks in the same power domain as the cpu to reset. 88 + * 89 + * Must be called on the affected CPU with interrupts disabled. The platform is 90 + * responsible for ensuring that cpu_pm_enter is not called twice on the same 91 + * CPU before cpu_pm_exit is called. Notified drivers can include the VFP 92 + * co-processor, the interrupt controller and its PM extensions, and local CPU 93 + * timer context save/restore, which must not be interrupted. Hence it 94 + * must be called with interrupts disabled. 95 + * 96 + * Return conditions are the same as __raw_notifier_call_chain.
97 + */ 98 + int cpu_pm_enter(void) 99 + { 100 + int nr_calls; 101 + int ret = 0; 102 + 103 + read_lock(&cpu_pm_notifier_lock); 104 + ret = cpu_pm_notify(CPU_PM_ENTER, -1, &nr_calls); 105 + if (ret) 106 + /* 107 + * Inform the listeners notified earlier (nr_calls - 1 of them) 108 + * that CPU PM entry failed, so they can undo their preparations. 109 + */ 110 + cpu_pm_notify(CPU_PM_ENTER_FAILED, nr_calls - 1, NULL); 111 + read_unlock(&cpu_pm_notifier_lock); 112 + 113 + return ret; 114 + } 115 + EXPORT_SYMBOL_GPL(cpu_pm_enter); 116 + 117 + /** 118 + * cpu_pm_exit - CPU low power exit notifier 119 + * 120 + * Notifies listeners that a single CPU is exiting a low power state that may 121 + * have caused some blocks in the same power domain as the cpu to reset. 122 + * 123 + * Notified drivers can include the VFP co-processor, the interrupt controller 124 + * and its PM extensions, and local CPU timer context save/restore, which 125 + * must not be interrupted. Hence it must be called with interrupts disabled. 126 + * 127 + * Return conditions are the same as __raw_notifier_call_chain. 128 + */ 129 + int cpu_pm_exit(void) 130 + { 131 + int ret; 132 + 133 + read_lock(&cpu_pm_notifier_lock); 134 + ret = cpu_pm_notify(CPU_PM_EXIT, -1, NULL); 135 + read_unlock(&cpu_pm_notifier_lock); 136 + 137 + return ret; 138 + } 139 + EXPORT_SYMBOL_GPL(cpu_pm_exit); 140 + 141 + /** 142 + * cpu_cluster_pm_enter - CPU cluster low power entry notifier 143 + * 144 + * Notifies listeners that all cpus in a power domain are entering a low power 145 + * state that may cause some blocks in the same power domain to reset. 146 + * 147 + * Must be called after cpu_pm_enter has been called on all cpus in the power 148 + * domain, and before cpu_pm_exit has been called on any cpu in the power 149 + * domain. Notified drivers can include the VFP co-processor, the interrupt 150 + * controller and its PM extensions, and local CPU timer context save/restore, 151 + * which must not be interrupted.
Hence it must be called with interrupts disabled. 152 + * 155 + * Return conditions are the same as __raw_notifier_call_chain. 156 + */ 157 + int cpu_cluster_pm_enter(void) 158 + { 159 + int nr_calls; 160 + int ret = 0; 161 + 162 + read_lock(&cpu_pm_notifier_lock); 163 + ret = cpu_pm_notify(CPU_CLUSTER_PM_ENTER, -1, &nr_calls); 164 + if (ret) 165 + /* 166 + * Inform the listeners notified earlier (nr_calls - 1 of them) that 167 + * CPU cluster PM entry failed, so they can undo their preparations. 168 + */ 169 + cpu_pm_notify(CPU_CLUSTER_PM_ENTER_FAILED, nr_calls - 1, NULL); 170 + read_unlock(&cpu_pm_notifier_lock); 171 + 172 + return ret; 173 + } 174 + EXPORT_SYMBOL_GPL(cpu_cluster_pm_enter); 175 + 176 + /** 177 + * cpu_cluster_pm_exit - CPU cluster low power exit notifier 178 + * 179 + * Notifies listeners that all cpus in a power domain are exiting from a 180 + * low power state that may have caused some blocks in the same power domain 181 + * to reset. 182 + * 183 + * Must be called after cpu_cluster_pm_enter has been called for the power 184 + * domain, and before cpu_pm_exit has been called on any cpu in the power 185 + * domain. Notified drivers can include the VFP co-processor, the interrupt 186 + * controller and its PM extensions, and local CPU timer context save/restore, 187 + * which must not be interrupted. Hence it must be called with interrupts disabled. 188 + * 189 + * Return conditions are the same as __raw_notifier_call_chain.
190 + */ 191 + int cpu_cluster_pm_exit(void) 192 + { 193 + int ret; 194 + 195 + read_lock(&cpu_pm_notifier_lock); 196 + ret = cpu_pm_notify(CPU_CLUSTER_PM_EXIT, -1, NULL); 197 + read_unlock(&cpu_pm_notifier_lock); 198 + 199 + return ret; 200 + } 201 + EXPORT_SYMBOL_GPL(cpu_cluster_pm_exit); 202 + 203 + #ifdef CONFIG_PM 204 + static int cpu_pm_suspend(void) 205 + { 206 + int ret; 207 + 208 + ret = cpu_pm_enter(); 209 + if (ret) 210 + return ret; 211 + 212 + ret = cpu_cluster_pm_enter(); 213 + return ret; 214 + } 215 + 216 + static void cpu_pm_resume(void) 217 + { 218 + cpu_cluster_pm_exit(); 219 + cpu_pm_exit(); 220 + } 221 + 222 + static struct syscore_ops cpu_pm_syscore_ops = { 223 + .suspend = cpu_pm_suspend, 224 + .resume = cpu_pm_resume, 225 + }; 226 + 227 + static int cpu_pm_init(void) 228 + { 229 + register_syscore_ops(&cpu_pm_syscore_ops); 230 + return 0; 231 + } 232 + core_initcall(cpu_pm_init); 233 + #endif
+4
kernel/power/Kconfig
··· 239 239 config PM_GENERIC_DOMAINS_RUNTIME 240 240 def_bool y 241 241 depends on PM_RUNTIME && PM_GENERIC_DOMAINS 242 + 243 + config CPU_PM 244 + bool 245 + depends on SUSPEND || CPU_IDLE