Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'for-linus' of git://ftp.arm.linux.org.uk/~rmk/linux-arm

Pull ARM updates from Russell King:

- clang assembly fixes from Ard

- optimisations and cleanups for Aurora L2 cache support

- efficient L2 cache support for secure monitor API on Exynos SoCs

- debug menu cleanup from Daniel Thompson to allow better behaviour for
multiplatform kernels

- StrongARM SA11x0 conversion to irq domains, and pxa_timer

- kprobes updates for older ARM CPUs

- move probes support out of arch/arm/kernel to arch/arm/probes

- add inline asm support for the rbit (reverse bits) instruction

- provide an ARM mode secondary CPU entry point (for Qualcomm CPUs)

- remove the unused ARMv3 user access code

- add driver_override support to AMBA Primecell bus

* 'for-linus' of git://ftp.arm.linux.org.uk/~rmk/linux-arm: (55 commits)
ARM: 8256/1: driver coamba: add device binding path 'driver_override'
ARM: 8301/1: qcom: Use secondary_startup_arm()
ARM: 8302/1: Add a secondary_startup that assumes ARM mode
ARM: 8300/1: teach __asmeq that r11 == fp and r12 == ip
ARM: kprobes: Fix compilation error caused by superfluous '*'
ARM: 8297/1: cache-l2x0: optimize aurora range operations
ARM: 8296/1: cache-l2x0: clean up aurora cache handling
ARM: 8284/1: sa1100: clear RCSR_SMR on resume
ARM: 8283/1: sa1100: collie: clear PWER register on machine init
ARM: 8282/1: sa1100: use handle_domain_irq
ARM: 8281/1: sa1100: move GPIO-related IRQ code to gpio driver
ARM: 8280/1: sa1100: switch to irq_domain_add_simple()
ARM: 8279/1: sa1100: merge both GPIO irqdomains
ARM: 8278/1: sa1100: split irq handling for low GPIOs
ARM: 8291/1: replace magic number with PAGE_SHIFT macro in fixup_pv code
ARM: 8290/1: decompressor: fix a wrong comment
ARM: 8286/1: mm: Fix dma_contiguous_reserve comment
ARM: 8248/1: pm: remove outdated comment
ARM: 8274/1: Fix DEBUG_LL for multi-platform kernels (without PL01X)
ARM: 8273/1: Seperate DEBUG_UART_PHYS from DEBUG_LL on EP93XX
...

+2188 -1479
+20
Documentation/ABI/testing/sysfs-bus-amba
···
+ What:		/sys/bus/amba/devices/.../driver_override
+ Date:		September 2014
+ Contact:	Antonios Motakis <a.motakis@virtualopensystems.com>
+ Description:
+ 		This file allows the driver for a device to be specified which
+ 		will override standard OF, ACPI, ID table, and name matching.
+ 		When specified, only a driver with a name matching the value
+ 		written to driver_override will have an opportunity to bind to
+ 		the device. The override is specified by writing a string to the
+ 		driver_override file (echo vfio-amba > driver_override) and may
+ 		be cleared with an empty string (echo > driver_override).
+ 		This returns the device to standard matching rules binding.
+ 		Writing to driver_override does not automatically unbind the
+ 		device from its current driver or make any attempt to
+ 		automatically load the specified driver. If no driver with a
+ 		matching name is currently loaded in the kernel, the device will
+ 		not bind to any driver. This also allows devices to opt-out of
+ 		driver binding using a driver_override name such as "none".
+ 		Only a single driver may be specified in the override, there is
+ 		no support for parsing delimiters.
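The override workflow described in the ABI text above comes down to two writes. A minimal shell sketch, with a scratch file standing in for the real sysfs attribute (whose path would be /sys/bus/amba/devices/<dev>/driver_override; the device and driver names are illustrative):

```shell
# Sketch of the driver_override usage described above. A temp file
# stands in for the real sysfs attribute; on a live system you would
# write to /sys/bus/amba/devices/<dev>/driver_override instead.
attr=$(mktemp -d)/driver_override

echo vfio-amba > "$attr"   # only a driver named "vfio-amba" may now bind
cat "$attr"                # -> vfio-amba
echo > "$attr"             # clear: device returns to standard matching
```

Note that, as the ABI text says, writing the override does not unbind the current driver or load the named one; it only constrains future match attempts.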
+10
Documentation/devicetree/bindings/arm/l2cc.txt
···
    - cache-id-part: cache id part number to be used if it is not present
      on hardware
    - wt-override: If present then L2 is forced to Write through mode
+   - arm,double-linefill : Override double linefill enable setting. Enable if
+     non-zero, disable if zero.
+   - arm,double-linefill-incr : Override double linefill on INCR read. Enable
+     if non-zero, disable if zero.
+   - arm,double-linefill-wrap : Override double linefill on WRAP read. Enable
+     if non-zero, disable if zero.
+   - arm,prefetch-drop : Override prefetch drop enable setting. Enable if non-zero,
+     disable if zero.
+   - arm,prefetch-offset : Override prefetch offset value. Valid values are
+     0-7, 15, 23, and 31.
  
  Example:
  
+2
arch/arm/Kconfig
···
  	select HANDLE_DOMAIN_IRQ
  	select HARDIRQS_SW_RESEND
  	select HAVE_ARCH_AUDITSYSCALL if (AEABI && !OABI_COMPAT)
+ 	select HAVE_ARCH_BITREVERSE if (CPU_32v7M || CPU_32v7) && !CPU_32v6
  	select HAVE_ARCH_JUMP_LABEL if !XIP_KERNEL
  	select HAVE_ARCH_KGDB
  	select HAVE_ARCH_SECCOMP_FILTER if (AEABI && !OABI_COMPAT)
···
  	select HAVE_MEMBLOCK
  	select HAVE_MOD_ARCH_SPECIFIC if ARM_UNWIND
  	select HAVE_OPROFILE if (HAVE_PERF_EVENTS)
+ 	select HAVE_OPTPROBES if !THUMB2_KERNEL
  	select HAVE_PERF_EVENTS
  	select HAVE_PERF_REGS
  	select HAVE_PERF_USER_STACK_DUMP
+81 -20
arch/arm/Kconfig.debug
···
  	  Say Y here if you want the debug print routines to direct
  	  their output to UART1 serial port on KEYSTONE2 devices.
  
+ config DEBUG_KS8695_UART
+ 	bool "KS8695 Debug UART"
+ 	depends on ARCH_KS8695
+ 	help
+ 	  Say Y here if you want kernel low-level debugging support
+ 	  on KS8695.
+ 
  config DEBUG_MESON_UARTAO
  	bool "Kernel low-level debugging via Meson6 UARTAO"
  	depends on ARCH_MESON
···
  	  Say Y here if you want kernel low-level debugging support
  	  on Vybrid based platforms.
  
+ config DEBUG_NETX_UART
+ 	bool "Kernel low-level debugging messages via NetX UART"
+ 	depends on ARCH_NETX
+ 	help
+ 	  Say Y here if you want kernel low-level debugging support
+ 	  on Hilscher NetX based platforms.
+ 
  config DEBUG_NOMADIK_UART
  	bool "Kernel low-level debugging messages via NOMADIK UART"
  	depends on ARCH_NOMADIK
···
  	help
  	  Say Y here if you want kernel low-level debugging support
  	  on TI-NSPIRE CX models.
+ 
+ config DEBUG_OMAP1UART1
+ 	bool "Kernel low-level debugging via OMAP1 UART1"
+ 	depends on ARCH_OMAP1
+ 	select DEBUG_UART_8250
+ 	help
+ 	  Say Y here if you want kernel low-level debugging support
+ 	  on OMAP1 based platforms (except OMAP730) on the UART1.
+ 
+ config DEBUG_OMAP1UART2
+ 	bool "Kernel low-level debugging via OMAP1 UART2"
+ 	depends on ARCH_OMAP1
+ 	select DEBUG_UART_8250
+ 	help
+ 	  Say Y here if you want kernel low-level debugging support
+ 	  on OMAP1 based platforms (except OMAP730) on the UART2.
+ 
+ config DEBUG_OMAP1UART3
+ 	bool "Kernel low-level debugging via OMAP1 UART3"
+ 	depends on ARCH_OMAP1
+ 	select DEBUG_UART_8250
+ 	help
+ 	  Say Y here if you want kernel low-level debugging support
+ 	  on OMAP1 based platforms (except OMAP730) on the UART3.
  
  config DEBUG_OMAP2UART1
  	bool "OMAP2/3/4 UART1 (omap2/3 sdp boards and some omap3 boards)"
···
  	bool "Kernel low-level debugging messages via OMAP4/5 UART4"
  	depends on ARCH_OMAP2PLUS
  	select DEBUG_OMAP2PLUS_UART
+ 
+ config DEBUG_OMAP7XXUART1
+ 	bool "Kernel low-level debugging via OMAP730 UART1"
+ 	depends on ARCH_OMAP730
+ 	select DEBUG_UART_8250
+ 	help
+ 	  Say Y here if you want kernel low-level debugging support
+ 	  on OMAP730 based platforms on the UART1.
+ 
+ config DEBUG_OMAP7XXUART2
+ 	bool "Kernel low-level debugging via OMAP730 UART2"
+ 	depends on ARCH_OMAP730
+ 	select DEBUG_UART_8250
+ 	help
+ 	  Say Y here if you want kernel low-level debugging support
+ 	  on OMAP730 based platforms on the UART2.
+ 
+ config DEBUG_OMAP7XXUART3
+ 	bool "Kernel low-level debugging via OMAP730 UART3"
+ 	depends on ARCH_OMAP730
+ 	select DEBUG_UART_8250
+ 	help
+ 	  Say Y here if you want kernel low-level debugging support
+ 	  on OMAP730 based platforms on the UART3.
  
  config DEBUG_TI81XXUART1
  	bool "Kernel low-level debugging messages via TI81XX UART1 (ti8148evm)"
···
  	  This option selects UART0 on VIA/Wondermedia System-on-a-chip
  	  devices, including VT8500, WM8505, WM8650 and WM8850.
  
- config DEBUG_LL_UART_NONE
- 	bool "No low-level debugging UART"
- 	depends on !ARCH_MULTIPLATFORM
- 	help
- 	  Say Y here if your platform doesn't provide a UART option
- 	  above. This relies on your platform choosing the right UART
- 	  definition internally in order for low-level debugging to
- 	  work.
- 
  config DEBUG_ICEDCC
  	bool "Kernel low-level debugging via EmbeddedICE DCC channel"
  	help
···
  		DEBUG_IMX6Q_UART || \
  		DEBUG_IMX6SL_UART || \
  		DEBUG_IMX6SX_UART
+ 	default "debug/ks8695.S" if DEBUG_KS8695_UART
  	default "debug/msm.S" if DEBUG_MSM_UART || DEBUG_QCOM_UARTDM
+ 	default "debug/netx.S" if DEBUG_NETX_UART
  	default "debug/omap2plus.S" if DEBUG_OMAP2PLUS_UART
  	default "debug/renesas-scif.S" if DEBUG_R7S72100_SCIF2
  	default "debug/renesas-scif.S" if DEBUG_RCAR_GEN1_SCIF0
···
  
  # Compatibility options for PL01x
  config DEBUG_UART_PL01X
- 	def_bool ARCH_EP93XX || \
- 		ARCH_INTEGRATOR || \
- 		ARCH_SPEAR3XX || \
- 		ARCH_SPEAR6XX || \
- 		ARCH_SPEAR13XX || \
- 		ARCH_VERSATILE
+ 	bool
  
  # Compatibility options for 8250
  config DEBUG_UART_8250
···
  
  config DEBUG_UART_PHYS
  	hex "Physical base address of debug UART"
+ 	default 0x00100a00 if DEBUG_NETX_UART
  	default 0x01c20000 if DEBUG_DAVINCI_DMx_UART0
  	default 0x01c28000 if DEBUG_SUNXI_UART0
  	default 0x01c28400 if DEBUG_SUNXI_UART1
···
  		DEBUG_S3C2410_UART2)
  	default 0x78000000 if DEBUG_CNS3XXX
  	default 0x7c0003f8 if FOOTBRIDGE
- 	default 0x78000000 if DEBUG_CNS3XXX
  	default 0x80010000 if DEBUG_ASM9260_UART
  	default 0x80070000 if DEBUG_IMX23_UART
  	default 0x80074000 if DEBUG_IMX28_UART
···
  	default 0xffe40000 if DEBUG_RCAR_GEN1_SCIF0
  	default 0xffe42000 if DEBUG_RCAR_GEN1_SCIF2
  	default 0xfff36000 if DEBUG_HIGHBANK_UART
+ 	default 0xfffb0000 if DEBUG_OMAP1UART1 || DEBUG_OMAP7XXUART1
+ 	default 0xfffb0800 if DEBUG_OMAP1UART2 || DEBUG_OMAP7XXUART2
+ 	default 0xfffb9800 if DEBUG_OMAP1UART3 || DEBUG_OMAP7XXUART3
  	default 0xfffe8600 if DEBUG_UART_BCM63XX
  	default 0xfffff700 if ARCH_IOP33X
- 	depends on DEBUG_LL_UART_8250 || DEBUG_LL_UART_PL01X || \
+ 	depends on ARCH_EP93XX || \
+ 		DEBUG_LL_UART_8250 || DEBUG_LL_UART_PL01X || \
  		DEBUG_LL_UART_EFM32 || \
  		DEBUG_UART_8250 || DEBUG_UART_PL01X || DEBUG_MESON_UARTAO || \
- 		DEBUG_MSM_UART || DEBUG_QCOM_UARTDM || DEBUG_R7S72100_SCIF2 || \
+ 		DEBUG_MSM_UART || DEBUG_NETX_UART || \
+ 		DEBUG_QCOM_UARTDM || DEBUG_R7S72100_SCIF2 || \
  		DEBUG_RCAR_GEN1_SCIF0 || DEBUG_RCAR_GEN1_SCIF2 || \
  		DEBUG_RCAR_GEN2_SCIF0 || DEBUG_RCAR_GEN2_SCIF2 || \
  		DEBUG_RMOBILE_SCIFA0 || DEBUG_RMOBILE_SCIFA1 || \
···
  
  config DEBUG_UART_VIRT
  	hex "Virtual base address of debug UART"
+ 	default 0xe0000a00 if DEBUG_NETX_UART
  	default 0xe0010fe0 if ARCH_RPC
  	default 0xe1000000 if DEBUG_MSM_UART
  	default 0xf0000be0 if ARCH_EBSA110
···
  	default 0xfef00000 if ARCH_IXP4XX && !CPU_BIG_ENDIAN
  	default 0xfef00003 if ARCH_IXP4XX && CPU_BIG_ENDIAN
  	default 0xfef36000 if DEBUG_HIGHBANK_UART
+ 	default 0xfefb0000 if DEBUG_OMAP1UART1 || DEBUG_OMAP7XXUART1
+ 	default 0xfefb0800 if DEBUG_OMAP1UART2 || DEBUG_OMAP7XXUART2
+ 	default 0xfefb9800 if DEBUG_OMAP1UART3 || DEBUG_OMAP7XXUART3
  	default 0xfefff700 if ARCH_IOP33X
  	default 0xff003000 if DEBUG_U300_UART
  	default DEBUG_UART_PHYS if !MMU
  	depends on DEBUG_LL_UART_8250 || DEBUG_LL_UART_PL01X || \
  		DEBUG_UART_8250 || DEBUG_UART_PL01X || DEBUG_MESON_UARTAO || \
- 		DEBUG_MSM_UART || DEBUG_QCOM_UARTDM || DEBUG_S3C24XX_UART || \
+ 		DEBUG_MSM_UART || DEBUG_NETX_UART || \
+ 		DEBUG_QCOM_UARTDM || DEBUG_S3C24XX_UART || \
  		DEBUG_UART_BCM63XX || DEBUG_ASM9260_UART
  
  config DEBUG_UART_8250_SHIFT
  	int "Register offset shift for the 8250 debug UART"
  	depends on DEBUG_LL_UART_8250 || DEBUG_UART_8250
- 	default 0 if FOOTBRIDGE || ARCH_IOP32X || DEBUG_BCM_5301X
+ 	default 0 if FOOTBRIDGE || ARCH_IOP32X || DEBUG_BCM_5301X || \
+ 		DEBUG_OMAP7XXUART1 || DEBUG_OMAP7XXUART2 || DEBUG_OMAP7XXUART3
  	default 2
  
  config DEBUG_UART_8250_WORD
+1
arch/arm/Makefile
···
  
  # If we have a machine-specific directory, then include it in the build.
  core-y				+= arch/arm/kernel/ arch/arm/mm/ arch/arm/common/
+ core-y				+= arch/arm/probes/
  core-y				+= arch/arm/net/
  core-y				+= arch/arm/crypto/
  core-y				+= arch/arm/firmware/
+1 -1
arch/arm/boot/compressed/head.S
···
  
  		/*
  		 * Set up a page table only if it won't overwrite ourself.
- 		 * That means r4 < pc && r4 - 16k page directory > &_end.
+ 		 * That means r4 < pc || r4 - 16k page directory > &_end.
  		 * Given that r4 > &_end is most unfrequent, we add a rough
  		 * additional 1MB of room for a possible appended DTB.
  		 */
+9
arch/arm/boot/dts/exynos4210.dtsi
···
  		reg = <0x10023CA0 0x20>;
  	};
  
+ 	l2c: l2-cache-controller@10502000 {
+ 		compatible = "arm,pl310-cache";
+ 		reg = <0x10502000 0x1000>;
+ 		cache-unified;
+ 		cache-level = <2>;
+ 		arm,tag-latency = <2 2 1>;
+ 		arm,data-latency = <2 2 1>;
+ 	};
+ 
  	gic: interrupt-controller@10490000 {
  		cpu-offset = <0x8000>;
  	};
+14
arch/arm/boot/dts/exynos4x12.dtsi
···
  		reg = <0x10023CA0 0x20>;
  	};
  
+ 	l2c: l2-cache-controller@10502000 {
+ 		compatible = "arm,pl310-cache";
+ 		reg = <0x10502000 0x1000>;
+ 		cache-unified;
+ 		cache-level = <2>;
+ 		arm,tag-latency = <2 2 1>;
+ 		arm,data-latency = <3 2 1>;
+ 		arm,double-linefill = <1>;
+ 		arm,double-linefill-incr = <0>;
+ 		arm,double-linefill-wrap = <1>;
+ 		arm,prefetch-drop = <1>;
+ 		arm,prefetch-offset = <7>;
+ 	};
+ 
  	clock: clock-controller@10030000 {
  		compatible = "samsung,exynos4412-clock";
  		reg = <0x10030000 0x20000>;
+1
arch/arm/configs/iop32x_defconfig
···
  CONFIG_DEBUG_KERNEL=y
  CONFIG_DEBUG_USER=y
  CONFIG_DEBUG_LL=y
+ CONFIG_DEBUG_LL_UART_8250=y
  CONFIG_KEYS=y
  CONFIG_KEYS_DEBUG_PROC_KEYS=y
  CONFIG_CRYPTO_NULL=y
+1
arch/arm/configs/iop33x_defconfig
···
  # CONFIG_RCU_CPU_STALL_DETECTOR is not set
  CONFIG_DEBUG_USER=y
  CONFIG_DEBUG_LL=y
+ CONFIG_DEBUG_LL_UART_8250=y
  # CONFIG_CRYPTO_ANSI_CPRNG is not set
  # CONFIG_CRC32 is not set
+1
arch/arm/configs/ixp4xx_defconfig
···
  CONFIG_DEBUG_KERNEL=y
  CONFIG_DEBUG_ERRORS=y
  CONFIG_DEBUG_LL=y
+ CONFIG_DEBUG_LL_UART_8250=y
+1
arch/arm/configs/lpc32xx_defconfig
···
  # CONFIG_FTRACE is not set
  # CONFIG_ARM_UNWIND is not set
  CONFIG_DEBUG_LL=y
+ CONFIG_DEBUG_LL_UART_8250=y
  CONFIG_EARLY_PRINTK=y
  CONFIG_CRYPTO_ANSI_CPRNG=y
  # CONFIG_CRYPTO_HW is not set
+1
arch/arm/configs/mv78xx0_defconfig
···
  CONFIG_DEBUG_USER=y
  CONFIG_DEBUG_ERRORS=y
  CONFIG_DEBUG_LL=y
+ CONFIG_DEBUG_LL_UART_8250=y
  CONFIG_CRYPTO_CBC=m
  CONFIG_CRYPTO_ECB=m
  CONFIG_CRYPTO_PCBC=m
+1
arch/arm/configs/orion5x_defconfig
···
  # CONFIG_FTRACE is not set
  CONFIG_DEBUG_USER=y
  CONFIG_DEBUG_LL=y
+ CONFIG_DEBUG_LL_UART_8250=y
  CONFIG_CRYPTO_CBC=m
  CONFIG_CRYPTO_ECB=m
  CONFIG_CRYPTO_PCBC=m
+1
arch/arm/configs/rpc_defconfig
···
  CONFIG_DEBUG_USER=y
  CONFIG_DEBUG_ERRORS=y
  CONFIG_DEBUG_LL=y
+ CONFIG_DEBUG_LL_UART_8250=y
+20
arch/arm/include/asm/bitrev.h
···
+ #ifndef __ASM_BITREV_H
+ #define __ASM_BITREV_H
+ 
+ static __always_inline __attribute_const__ u32 __arch_bitrev32(u32 x)
+ {
+ 	__asm__ ("rbit %0, %1" : "=r" (x) : "r" (x));
+ 	return x;
+ }
+ 
+ static __always_inline __attribute_const__ u16 __arch_bitrev16(u16 x)
+ {
+ 	return __arch_bitrev32((u32)x) >> 16;
+ }
+ 
+ static __always_inline __attribute_const__ u8 __arch_bitrev8(u8 x)
+ {
+ 	return __arch_bitrev32((u32)x) >> 24;
+ }
+ 
+ #endif
+14 -1
arch/arm/include/asm/compiler.h
···
   * This string is meant to be concatenated with the inline asm string and
   * will cause compilation to stop on mismatch.
   * (for details, see gcc PR 15089)
+  * For compatibility with clang, we have to specifically take the equivalence
+  * of 'r11' <-> 'fp' and 'r12' <-> 'ip' into account as well.
   */
- #define __asmeq(x, y)  ".ifnc " x "," y " ; .err ; .endif\n\t"
+ #define __asmeq(x, y) \
+ 	".ifnc " x "," y "; " \
+ 	  ".ifnc " x y ",fpr11; " \
+ 	  ".ifnc " x y ",r11fp; " \
+ 	  ".ifnc " x y ",ipr12; " \
+ 	  ".ifnc " x y ",r12ip; " \
+ 	    ".err; " \
+ 	  ".endif; " \
+ 	  ".endif; " \
+ 	  ".endif; " \
+ 	  ".endif; " \
+ 	".endif\n\t"
  
  
  #endif /* __ASM_ARM_COMPILER_H */
+32 -1
arch/arm/include/asm/kprobes.h
···
  
  #define __ARCH_WANT_KPROBES_INSN_SLOT
  #define MAX_INSN_SIZE 2
- #define MAX_STACK_SIZE 64	/* 32 would probably be OK */
  
  #define flush_insn_slot(p)		do { } while (0)
  #define kretprobe_blacklist_size	0
···
  int kprobe_exceptions_notify(struct notifier_block *self,
  			     unsigned long val, void *data);
  
+ /* optinsn template addresses */
+ extern __visible kprobe_opcode_t optprobe_template_entry;
+ extern __visible kprobe_opcode_t optprobe_template_val;
+ extern __visible kprobe_opcode_t optprobe_template_call;
+ extern __visible kprobe_opcode_t optprobe_template_end;
+ extern __visible kprobe_opcode_t optprobe_template_sub_sp;
+ extern __visible kprobe_opcode_t optprobe_template_add_sp;
+ extern __visible kprobe_opcode_t optprobe_template_restore_begin;
+ extern __visible kprobe_opcode_t optprobe_template_restore_orig_insn;
+ extern __visible kprobe_opcode_t optprobe_template_restore_end;
+ 
+ #define MAX_OPTIMIZED_LENGTH	4
+ #define MAX_OPTINSN_SIZE \
+ 	((unsigned long)&optprobe_template_end - \
+ 	 (unsigned long)&optprobe_template_entry)
+ #define RELATIVEJUMP_SIZE	4
+ 
+ struct arch_optimized_insn {
+ 	/*
+ 	 * copy of the original instructions.
+ 	 * Different from x86, ARM kprobe_opcode_t is u32.
+ 	 */
+ #define MAX_COPIED_INSN	DIV_ROUND_UP(RELATIVEJUMP_SIZE, sizeof(kprobe_opcode_t))
+ 	kprobe_opcode_t copied_insn[MAX_COPIED_INSN];
+ 	/* detour code buffer */
+ 	kprobe_opcode_t *insn;
+ 	/*
+ 	 * We always copy one instruction on ARM,
+ 	 * so size will always be 4, and unlike x86, there is no
+ 	 * need for a size field.
+ 	 */
+ };
  
  #endif /* _ARM_KPROBES_H */
+3
arch/arm/include/asm/outercache.h
···
  
  #include <linux/types.h>
  
+ struct l2x0_regs;
+ 
  struct outer_cache_fns {
  	void (*inv_range)(unsigned long, unsigned long);
  	void (*clean_range)(unsigned long, unsigned long);
···
  
  	/* This is an ARM L2C thing */
  	void (*write_sec)(unsigned long, unsigned);
+ 	void (*configure)(const struct l2x0_regs *);
  };
  
  extern struct outer_cache_fns outer_cache;
+15
arch/arm/include/asm/probes.h
···
  #ifndef _ASM_PROBES_H
  #define _ASM_PROBES_H
  
+ #ifndef __ASSEMBLY__
+ 
  typedef u32 probes_opcode_t;
  
  struct arch_probes_insn;
···
  	probes_check_cc			*insn_check_cc;
  	probes_insn_singlestep_t	*insn_singlestep;
  	probes_insn_fn_t		*insn_fn;
+ 	int stack_space;
+ 	unsigned long register_usage_flags;
+ 	bool kprobe_direct_exec;
  };
+ 
+ #endif /* __ASSEMBLY__ */
+ 
+ /*
+  * We assume one instruction can consume at most 64 bytes stack, which is
+  * 'push {r0-r15}'. Instructions consume more or unknown stack space like
+  * 'str r0, [sp, #-80]' and 'str r0, [sp, r1]' should be prohibit to probe.
+  * Both kprobe and jprobe use this macro.
+  */
+ #define MAX_STACK_SIZE			64
  
  #endif
+2 -14
arch/arm/kernel/Makefile
···
  obj-$(CONFIG_FUNCTION_GRAPH_TRACER)	+= ftrace.o insn.o
  obj-$(CONFIG_JUMP_LABEL)	+= jump_label.o insn.o patch.o
  obj-$(CONFIG_KEXEC)		+= machine_kexec.o relocate_kernel.o
- obj-$(CONFIG_UPROBES)		+= probes.o probes-arm.o uprobes.o uprobes-arm.o
- obj-$(CONFIG_KPROBES)		+= probes.o kprobes.o kprobes-common.o patch.o
- ifdef CONFIG_THUMB2_KERNEL
- obj-$(CONFIG_KPROBES)		+= kprobes-thumb.o probes-thumb.o
- else
- obj-$(CONFIG_KPROBES)		+= kprobes-arm.o probes-arm.o
- endif
- obj-$(CONFIG_ARM_KPROBES_TEST)	+= test-kprobes.o
- test-kprobes-objs		:= kprobes-test.o
- ifdef CONFIG_THUMB2_KERNEL
- test-kprobes-objs		+= kprobes-test-thumb.o
- else
- test-kprobes-objs		+= kprobes-test-arm.o
- endif
+ # Main staffs in KPROBES are in arch/arm/probes/ .
+ obj-$(CONFIG_KPROBES)		+= patch.o insn.o
  obj-$(CONFIG_OABI_COMPAT)	+= sys_oabi-compat.o
  obj-$(CONFIG_ARM_THUMBEE)	+= thumbee.o
  obj-$(CONFIG_KGDB)		+= kgdb.o patch.o
+2 -1
arch/arm/kernel/entry-armv.S
···
  
  #include "entry-header.S"
  #include <asm/entry-macro-multi.S>
+ #include <asm/probes.h>
  
  /*
   * Interrupt handling.
···
  	@ If a kprobe is about to simulate a "stmdb sp..." instruction,
  	@ it obviously needs free stack space which then will belong to
  	@ the saved context.
- 	svc_entry 64
+ 	svc_entry MAX_STACK_SIZE
  #else
  	svc_entry
  #endif
+1 -2
arch/arm/kernel/ftrace.c
···
  #include <asm/cacheflush.h>
  #include <asm/opcodes.h>
  #include <asm/ftrace.h>
- 
- #include "insn.h"
+ #include <asm/insn.h>
  
  #ifdef CONFIG_THUMB2_KERNEL
  #define	NOP		0xf85deb04	/* pop.w {lr} */
+8 -1
arch/arm/kernel/head.S
···
  
  #if defined(CONFIG_SMP)
  	.text
+ ENTRY(secondary_startup_arm)
+ 	.arm
+  THUMB(	adr	r9, BSYM(1f)	)	@ Kernel is entered in ARM.
+  THUMB(	bx	r9		)	@ If this is a Thumb-2 kernel,
+  THUMB(	.thumb			)	@ switch to Thumb now.
+  THUMB(1:			)
  ENTRY(secondary_startup)
  	/*
  	 * Common entry point for secondary CPUs.
···
   THUMB(	add	r12, r10, #PROCINFO_INITFUNC	)
   THUMB(	ret	r12				)
  ENDPROC(secondary_startup)
+ ENDPROC(secondary_startup_arm)
  
  	/*
  	 * r6  = &secondary_data
···
  	add	r5, r5, r3	@ adjust table end address
  	add	r6, r6, r3	@ adjust __pv_phys_pfn_offset address
  	add	r7, r7, r3	@ adjust __pv_offset address
- 	mov	r0, r8, lsr #12		@ convert to PFN
+ 	mov	r0, r8, lsr #PAGE_SHIFT	@ convert to PFN
  	str	r0, [r6]	@ save computed PHYS_OFFSET to __pv_phys_pfn_offset
  	strcc	ip, [r7, #HIGH_OFFSET]	@ save to __pv_offset high bits
  	mov	r6, r3, lsr #24	@ constant for add/sub instructions
arch/arm/kernel/insn.h → arch/arm/include/asm/insn.h
+2 -1
arch/arm/kernel/irq.c
···
  
  	if (IS_ENABLED(CONFIG_OF) && IS_ENABLED(CONFIG_CACHE_L2X0) &&
  	    (machine_desc->l2c_aux_mask || machine_desc->l2c_aux_val)) {
- 		outer_cache.write_sec = machine_desc->l2c_write_sec;
+ 		if (!outer_cache.write_sec)
+ 			outer_cache.write_sec = machine_desc->l2c_write_sec;
  		ret = l2x0_of_init(machine_desc->l2c_aux_val,
  				   machine_desc->l2c_aux_mask);
  		if (ret)
+2 -3
arch/arm/kernel/jump_label.c
···
  #include <linux/kernel.h>
  #include <linux/jump_label.h>
- 
- #include "insn.h"
- #include "patch.h"
+ #include <asm/patch.h>
+ #include <asm/insn.h>
  
  #ifdef HAVE_JUMP_LABEL
  
+1 -2
arch/arm/kernel/kgdb.c
···
  #include <linux/kgdb.h>
  #include <linux/uaccess.h>
  
+ #include <asm/patch.h>
  #include <asm/traps.h>
- 
- #include "patch.h"
  
  struct dbg_reg_def_t dbg_reg_def[DBG_MAX_REG_NUM] =
  {
+6 -5
arch/arm/kernel/kprobes-arm.c → arch/arm/probes/kprobes/actions-arm.c
···
  /*
-  * arch/arm/kernel/kprobes-decode.c
+  * arch/arm/probes/kprobes/actions-arm.c
   *
   * Copyright (C) 2006, 2007 Motorola Inc.
   *
···
  #include <linux/kprobes.h>
  #include <linux/ptrace.h>
  
- #include "kprobes.h"
- #include "probes-arm.h"
+ #include "../decode-arm.h"
+ #include "core.h"
+ #include "checkers.h"
  
  #if __LINUX_ARM_ARCH__ >= 6
  #define BLX(reg)	"blx	"reg"		\n\t"
···
  }
  
  const union decode_action kprobes_arm_actions[NUM_PROBES_ARM_ACTIONS] = {
- 	[PROBES_EMULATE_NONE] = {.handler = probes_emulate_none},
- 	[PROBES_SIMULATE_NOP] = {.handler = probes_simulate_nop},
  	[PROBES_PRELOAD_IMM] = {.handler = probes_simulate_nop},
  	[PROBES_PRELOAD_REG] = {.handler = probes_simulate_nop},
  	[PROBES_BRANCH_IMM] = {.handler = simulate_blx1},
···
  	[PROBES_BRANCH] = {.handler = simulate_bbl},
  	[PROBES_LDMSTM] = {.decoder = kprobe_decode_ldmstm}
  };
+ 
+ const struct decode_checker *kprobes_arm_checkers[] = {arm_stack_checker, arm_regs_checker, NULL};
+2 -2
arch/arm/kernel/kprobes-common.c → arch/arm/probes/kprobes/actions-common.c
···
  /*
-  * arch/arm/kernel/kprobes-common.c
+  * arch/arm/probes/kprobes/actions-common.c
   *
   * Copyright (C) 2011 Jon Medhurst <tixy@yxit.co.uk>.
   *
···
  #include <linux/kprobes.h>
  #include <asm/opcodes.h>
  
- #include "kprobes.h"
+ #include "core.h"
  
  
  static void __kprobes simulate_ldm1stm1(probes_opcode_t insn,
+30 -10
arch/arm/kernel/kprobes-test-arm.c → arch/arm/probes/kprobes/test-arm.c
···
  #include <linux/module.h>
  #include <asm/system_info.h>
  #include <asm/opcodes.h>
+ #include <asm/probes.h>
  
- #include "kprobes-test.h"
+ #include "test-core.h"
  
  
  #define TEST_ISA "32"
···
  #endif
  	TEST_GROUP("Miscellaneous instructions")
  
- 	TEST("mrs	r0, cpsr")
- 	TEST("mrspl	r7, cpsr")
- 	TEST("mrs	r14, cpsr")
+ 	TEST_RMASKED("mrs	r",0,~PSR_IGNORE_BITS,", cpsr")
+ 	TEST_RMASKED("mrspl	r",7,~PSR_IGNORE_BITS,", cpsr")
+ 	TEST_RMASKED("mrs	r",14,~PSR_IGNORE_BITS,", cpsr")
  	TEST_UNSUPPORTED(__inst_arm(0xe10ff000) "	@ mrs r15, cpsr")
  	TEST_UNSUPPORTED("mrs	r0, spsr")
  	TEST_UNSUPPORTED("mrs	lr, spsr")
···
  	TEST_UNSUPPORTED("msr	cpsr_f, lr")
  	TEST_UNSUPPORTED("msr	spsr, r0")
  
+ #if __LINUX_ARM_ARCH__ >= 5 || \
+ 	(__LINUX_ARM_ARCH__ == 4 && !defined(CONFIG_CPU_32v4))
  	TEST_BF_R("bx	r",0,2f,"")
  	TEST_BB_R("bx	r",7,2f,"")
  	TEST_BF_R("bxeq	r",14,2f,"")
+ #endif
  
  #if __LINUX_ARM_ARCH__ >= 5
  	TEST_R("clz	r0, r",0, 0x0,"")
···
  	TEST_GROUP("Extra load/store instructions")
  
  	TEST_RPR(  "strh	r",0, VAL1,", [r",1, 48,", -r",2, 24,"]")
- 	TEST_RPR(  "streqh	r",14,VAL2,", [r",13,0, ", r",12, 48,"]")
+ 	TEST_RPR(  "streqh	r",14,VAL2,", [r",11,0, ", r",12, 48,"]")
+ 	TEST_UNSUPPORTED(  "streqh	r14, [r13, r12]")
+ 	TEST_UNSUPPORTED(  "streqh	r14, [r12, r13]")
  	TEST_RPR(  "strh	r",1, VAL1,", [r",2, 24,", r",3, 48,"]!")
  	TEST_RPR(  "strneh	r",12,VAL2,", [r",11,48,", -r",10,24,"]!")
  	TEST_RPR(  "strh	r",2, VAL1,", [r",3, 24,"], r",4, 48,"")
···
  	TEST_RP(   "strplh	r",12,VAL2,", [r",11,24,", #-4]!")
  	TEST_RP(   "strh	r",2, VAL1,", [r",3, 24,"], #48")
  	TEST_RP(   "strh	r",10,VAL2,", [r",9, 64,"], #-48")
+ 	TEST_RP(   "strh	r",3, VAL1,", [r",13,TEST_MEMORY_SIZE,", #-"__stringify(MAX_STACK_SIZE)"]!")
+ 	TEST_UNSUPPORTED("strh r3, [r13, #-"__stringify(MAX_STACK_SIZE)"-8]!")
+ 	TEST_RP(   "strh	r",4, VAL1,", [r",14,TEST_MEMORY_SIZE,", #-"__stringify(MAX_STACK_SIZE)"-8]!")
  	TEST_UNSUPPORTED(__inst_arm(0xe1efc3b0) "	@ strh r12, [pc, #48]!")
  	TEST_UNSUPPORTED(__inst_arm(0xe0c9f3b0) "	@ strh pc, [r9], #48")
···
  
  #if __LINUX_ARM_ARCH__ >= 5
  	TEST_RPR(  "strd	r",0, VAL1,", [r",1, 48,", -r",2,24,"]")
- 	TEST_RPR(  "strccd	r",8, VAL2,", [r",13,0, ", r",12,48,"]")
+ 	TEST_RPR(  "strccd	r",8, VAL2,", [r",11,0, ", r",12,48,"]")
+ 	TEST_UNSUPPORTED(  "strccd	r8, [r13, r12]")
+ 	TEST_UNSUPPORTED(  "strccd	r8, [r12, r13]")
  	TEST_RPR(  "strd	r",4, VAL1,", [r",2, 24,", r",3, 48,"]!")
  	TEST_RPR(  "strcsd	r",12,VAL2,", [r",11,48,", -r",10,24,"]!")
  	TEST_RPR(  "strd	r",2, VAL1,", [r",5, 24,"], r",4,48,"")
···
  	TEST_RP(   "strvcd	r",12,VAL2,", [r",11,24,", #-16]!")
  	TEST_RP(   "strd	r",2, VAL1,", [r",4, 24,"], #48")
  	TEST_RP(   "strd	r",10,VAL2,", [r",9, 64,"], #-48")
+ 	TEST_RP(   "strd	r",6, VAL1,", [r",13,TEST_MEMORY_SIZE,", #-"__stringify(MAX_STACK_SIZE)"]!")
+ 	TEST_UNSUPPORTED("strd r6, [r13, #-"__stringify(MAX_STACK_SIZE)"-8]!")
+ 	TEST_RP(   "strd	r",4, VAL1,", [r",12,TEST_MEMORY_SIZE,", #-"__stringify(MAX_STACK_SIZE)"-8]!")
  	TEST_UNSUPPORTED(__inst_arm(0xe1efc3f0) "	@ strd r12, [pc, #48]!")
  
  	TEST_P(   "ldrd	r0, [r",0, 24,", #-8]")
···
  	TEST_RP( "str"byte"	r",12,VAL2,", [r",11,24,", #-4]!") \
  	TEST_RP( "str"byte"	r",2, VAL1,", [r",3, 24,"], #48") \
  	TEST_RP( "str"byte"	r",10,VAL2,", [r",9, 64,"], #-48") \
+ 	TEST_RP( "str"byte"	r",3, VAL1,", [r",13,TEST_MEMORY_SIZE,", #-"__stringify(MAX_STACK_SIZE)"]!") \
+ 	TEST_UNSUPPORTED("str"byte" r3, [r13, #-"__stringify(MAX_STACK_SIZE)"-8]!") \
+ 	TEST_RP( "str"byte"	r",4, VAL1,", [r",10,TEST_MEMORY_SIZE,", #-"__stringify(MAX_STACK_SIZE)"-8]!") \
  	TEST_RPR("str"byte"	r",0, VAL1,", [r",1, 48,", -r",2, 24,"]") \
- 	TEST_RPR("str"byte"	r",14,VAL2,", [r",13,0, ", r",12, 48,"]") \
+ 	TEST_RPR("str"byte"	r",14,VAL2,", [r",11,0, ", r",12, 48,"]") \
+ 	TEST_UNSUPPORTED("str"byte" r14, [r13, r12]") \
+ 	TEST_UNSUPPORTED("str"byte" r14, [r12, r13]") \
  	TEST_RPR("str"byte"	r",1, VAL1,", [r",2, 24,", r",3, 48,"]!") \
  	TEST_RPR("str"byte"	r",12,VAL2,", [r",11,48,", -r",10,24,"]!") \
  	TEST_RPR("str"byte"	r",2, VAL1,", [r",3, 24,"], r",4, 48,"") \
  	TEST_RPR("str"byte"	r",10,VAL2,", [r",9, 48,"], -r",11,24,"") \
  	TEST_RPR("str"byte"	r",0, VAL1,", [r",1, 24,", r",2, 32,", asl #1]") \
- 	TEST_RPR("str"byte"	r",14,VAL2,", [r",13,0, ", r",12, 32,", lsr #2]") \
+ 	TEST_RPR("str"byte"	r",14,VAL2,", [r",11,0, ", r",12, 32,", lsr #2]") \
+ 	TEST_UNSUPPORTED("str"byte" r14, [r13, r12, lsr #2]") \
  	TEST_RPR("str"byte"	r",1, VAL1,", [r",2, 24,", r",3, 32,", asr #3]!") \
  	TEST_RPR("str"byte"	r",12,VAL2,", [r",11,24,", r",10, 4,", ror #31]!") \
  	TEST_P(  "ldr"byte"	r0, [r",0, 24,", #-2]") \
···
  
  	LOAD_STORE("")
  	TEST_P(   "str	pc, [r",0,0,", #15*4]")
- 	TEST_R(   "str	pc, [sp, r",2,15*4,"]")
+ 	TEST_UNSUPPORTED(   "str	pc, [sp, r2]")
  	TEST_BF(  "ldr	pc, [sp, #15*4]")
  	TEST_BF_R("ldr	pc, [sp, r",2,15*4,"]")
  
  	TEST_P(   "str	sp, [r",0,0,", #13*4]")
- 	TEST_R(   "str	sp, [sp, r",2,13*4,"]")
+ 	TEST_UNSUPPORTED(   "str	sp, [sp, r2]")
  	TEST_BF(  "ldr	sp, [sp, #13*4]")
  	TEST_BF_R("ldr	sp, [sp, r",2,13*4,"]")
  
+16 -4
arch/arm/kernel/kprobes-test-thumb.c arch/arm/probes/kprobes/test-thumb.c
··· 1 1 /* 2 - * arch/arm/kernel/kprobes-test-thumb.c 2 + * arch/arm/probes/kprobes/test-thumb.c 3 3 * 4 4 * Copyright (C) 2011 Jon Medhurst <tixy@yxit.co.uk>. 5 5 * ··· 11 11 #include <linux/kernel.h> 12 12 #include <linux/module.h> 13 13 #include <asm/opcodes.h> 14 + #include <asm/probes.h> 14 15 15 - #include "kprobes-test.h" 16 + #include "test-core.h" 16 17 17 18 18 19 #define TEST_ISA "16" ··· 417 416 TEST_RR( "strd r",14,VAL2,", r",12,VAL1,", [sp, #16]!") 418 417 TEST_RRP("strd r",1, VAL1,", r",0, VAL2,", [r",7, 24,"], #16") 419 418 TEST_RR( "strd r",7, VAL2,", r",8, VAL1,", [sp], #-16") 419 + TEST_RRP("strd r",6, VAL1,", r",7, VAL2,", [r",13, TEST_MEMORY_SIZE,", #-"__stringify(MAX_STACK_SIZE)"]!") 420 + TEST_UNSUPPORTED("strd r6, r7, [r13, #-"__stringify(MAX_STACK_SIZE)"-8]!") 421 + TEST_RRP("strd r",4, VAL1,", r",5, VAL2,", [r",14, TEST_MEMORY_SIZE,", #-"__stringify(MAX_STACK_SIZE)"-8]!") 420 422 TEST_UNSUPPORTED(__inst_thumb32(0xe9efec04) " @ strd r14, r12, [pc, #16]!") 421 423 TEST_UNSUPPORTED(__inst_thumb32(0xe8efec04) " @ strd r14, r12, [pc], #16") 422 424 ··· 778 774 779 775 TEST_UNSUPPORTED("subs pc, lr, #4") 780 776 781 - TEST("mrs r0, cpsr") 782 - TEST("mrs r14, cpsr") 777 + TEST_RMASKED("mrs r",0,~PSR_IGNORE_BITS,", cpsr") 778 + TEST_RMASKED("mrs r",14,~PSR_IGNORE_BITS,", cpsr") 783 779 TEST_UNSUPPORTED(__inst_thumb32(0xf3ef8d00) " @ mrs sp, spsr") 784 780 TEST_UNSUPPORTED(__inst_thumb32(0xf3ef8f00) " @ mrs pc, spsr") 785 781 TEST_UNSUPPORTED("mrs r0, spsr") ··· 825 821 TEST_RP( "str"size" r",14,VAL2,", [r",1, 256, ", #-128]!") \ 826 822 TEST_RPR("str"size".w r",0, VAL1,", [r",1, 0,", r",2, 4,"]") \ 827 823 TEST_RPR("str"size" r",14,VAL2,", [r",10,0,", r",11,4,", lsl #1]") \ 824 + TEST_UNSUPPORTED("str"size" r0, [r13, r1]") \ 828 825 TEST_R( "str"size".w r",7, VAL1,", [sp, #24]") \ 829 826 TEST_RP( "str"size".w r",0, VAL2,", [r",0,0, "]") \ 827 + TEST_RP( "str"size" r",6, VAL1,", [r",13, TEST_MEMORY_SIZE,", #-"__stringify(MAX_STACK_SIZE)"]!") \ 
828 + TEST_UNSUPPORTED("str"size" r6, [r13, #-"__stringify(MAX_STACK_SIZE)"-8]!") \ 829 + TEST_RP( "str"size" r",4, VAL2,", [r",12, TEST_MEMORY_SIZE,", #-"__stringify(MAX_STACK_SIZE)"-8]!") \ 830 830 TEST_UNSUPPORTED("str"size"t r0, [r1, #4]") 831 831 832 832 SINGLE_STORE("b") 833 833 SINGLE_STORE("h") 834 834 SINGLE_STORE("") 835 + 836 + TEST_UNSUPPORTED(__inst_thumb32(0xf801000d) " @ strb r0, [r1, r13]") 837 + TEST_UNSUPPORTED(__inst_thumb32(0xf821000d) " @ strh r0, [r1, r13]") 838 + TEST_UNSUPPORTED(__inst_thumb32(0xf841000d) " @ str r0, [r1, r13]") 835 839 836 840 TEST("str sp, [sp]") 837 841 TEST_UNSUPPORTED(__inst_thumb32(0xf8cfe000) " @ str r14, [pc]")
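The new stack-boundary test cases splice the numeric value of MAX_STACK_SIZE into the inline-asm strings with __stringify(). A minimal userspace sketch of that double-expansion idiom (the constant value 64 here is illustrative, not taken from this diff):

```c
#include <assert.h>
#include <string.h>

/* Two-level expansion: the outer macro expands its argument first, so a
 * symbolic constant like MAX_STACK_SIZE becomes digits before the inner
 * macro stringifies it.  This mirrors the kernel's __stringify(). */
#define __stringify_1(x) #x
#define __stringify(x)   __stringify_1(x)

#define MAX_STACK_SIZE 64	/* illustrative value */

/* Adjacent string literals concatenate, building the instruction text
 * the same way the TEST_RP()/TEST_UNSUPPORTED() cases above do. */
static const char *test_insn =
	"strd r6, r7, [r13, #-" __stringify(MAX_STACK_SIZE) "-8]!";
```

Without the two-level indirection, #MAX_STACK_SIZE would stringify the macro name itself rather than its value.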
+30 -16
arch/arm/kernel/kprobes-test.c arch/arm/probes/kprobes/test-core.c
··· 209 209 #include <linux/bug.h> 210 210 #include <asm/opcodes.h> 211 211 212 - #include "kprobes.h" 213 - #include "probes-arm.h" 214 - #include "probes-thumb.h" 215 - #include "kprobes-test.h" 212 + #include "core.h" 213 + #include "test-core.h" 214 + #include "../decode-arm.h" 215 + #include "../decode-thumb.h" 216 216 217 217 218 218 #define BENCHMARKING 1 ··· 236 236 237 237 #ifndef CONFIG_THUMB2_KERNEL 238 238 239 + #define RET(reg) "mov pc, "#reg 240 + 239 241 long arm_func(long r0, long r1); 240 242 241 243 static void __used __naked __arm_kprobes_test_func(void) ··· 247 245 ".type arm_func, %%function \n\t" 248 246 "arm_func: \n\t" 249 247 "adds r0, r0, r1 \n\t" 250 - "bx lr \n\t" 248 + "mov pc, lr \n\t" 251 249 ".code "NORMAL_ISA /* Back to Thumb if necessary */ 252 250 : : : "r0", "r1", "cc" 253 251 ); 254 252 } 255 253 256 254 #else /* CONFIG_THUMB2_KERNEL */ 255 + 256 + #define RET(reg) "bx "#reg 257 257 258 258 long thumb16_func(long r0, long r1); 259 259 long thumb32even_func(long r0, long r1); ··· 498 494 { 499 495 __asm__ __volatile__ ( 500 496 "nop \n\t" 501 - "bx lr" 497 + RET(lr)" \n\t" 502 498 ); 503 499 } 504 500 ··· 981 977 "bic r0, lr, #1 @ r0 = inline data \n\t" 982 978 "mov r1, sp \n\t" 983 979 "bl kprobes_test_case_start \n\t" 984 - "bx r0 \n\t" 980 + RET(r0)" \n\t" 985 981 ); 986 982 } 987 983 ··· 1059 1055 static int test_case_run_count; 1060 1056 static bool test_case_is_thumb; 1061 1057 static int test_instance; 1062 - 1063 - /* 1064 - * We ignore the state of the imprecise abort disable flag (CPSR.A) because this 1065 - * can change randomly as the kernel doesn't take care to preserve or initialise 1066 - * this across context switches. Also, with Security Extentions, the flag may 1067 - * not be under control of the kernel; for this reason we ignore the state of 1068 - * the FIQ disable flag CPSR.F as well. 
1069 - */ 1070 - #define PSR_IGNORE_BITS (PSR_A_BIT | PSR_F_BIT) 1071 1058 1072 1059 static unsigned long test_check_cc(int cc, unsigned long cpsr) 1073 1060 { ··· 1191 1196 regs->uregs[arg->reg] = 1192 1197 (unsigned long)current_stack + arg->val; 1193 1198 memory_needs_checking = true; 1199 + /* 1200 + * Test memory at an address below SP is in danger of 1201 + * being altered by an interrupt occurring and pushing 1202 + * data onto the stack. Disable interrupts to stop this. 1203 + */ 1204 + if (arg->reg == 13) 1205 + regs->ARM_cpsr |= PSR_I_BIT; 1194 1206 break; 1195 1207 } 1196 1208 case ARG_TYPE_MEM: { ··· 1266 1264 static int __kprobes 1267 1265 test_after_pre_handler(struct kprobe *p, struct pt_regs *regs) 1268 1266 { 1267 + struct test_arg *args; 1268 + 1269 1269 if (container_of(p, struct test_probe, kprobe)->hit == test_instance) 1270 1270 return 0; /* Already run for this test instance */ 1271 1271 1272 1272 result_regs = *regs; 1273 + 1274 + /* Mask out results which are indeterminate */ 1273 1275 result_regs.ARM_cpsr &= ~PSR_IGNORE_BITS; 1276 + for (args = current_args; args[0].type != ARG_TYPE_END; ++args) 1277 + if (args[0].type == ARG_TYPE_REG_MASKED) { 1278 + struct test_arg_regptr *arg = 1279 + (struct test_arg_regptr *)args; 1280 + result_regs.uregs[arg->reg] &= arg->val; 1281 + } 1274 1282 1275 1283 /* Undo any changes done to SP by the test case */ 1276 1284 regs->ARM_sp = (unsigned long)current_stack; 1285 + /* Enable interrupts in case setup_test_context disabled them */ 1286 + regs->ARM_cpsr &= ~PSR_I_BIT; 1277 1287 1278 1288 container_of(p, struct test_probe, kprobe)->hit = test_instance; 1279 1289 return 0;
+29 -6
arch/arm/kernel/kprobes-test.h arch/arm/probes/kprobes/test-core.h
··· 1 1 /* 2 - * arch/arm/kernel/kprobes-test.h 2 + * arch/arm/probes/kprobes/test-core.h 3 3 * 4 4 * Copyright (C) 2011 Jon Medhurst <tixy@yxit.co.uk>. 5 5 * ··· 45 45 * 46 46 */ 47 47 48 - #define ARG_TYPE_END 0 49 - #define ARG_TYPE_REG 1 50 - #define ARG_TYPE_PTR 2 51 - #define ARG_TYPE_MEM 3 48 + #define ARG_TYPE_END 0 49 + #define ARG_TYPE_REG 1 50 + #define ARG_TYPE_PTR 2 51 + #define ARG_TYPE_MEM 3 52 + #define ARG_TYPE_REG_MASKED 4 52 53 53 54 #define ARG_FLAG_UNSUPPORTED 0x01 54 55 #define ARG_FLAG_SUPPORTED 0x02 ··· 62 61 }; 63 62 64 63 struct test_arg_regptr { 65 - u8 type; /* ARG_TYPE_REG or ARG_TYPE_PTR */ 64 + u8 type; /* ARG_TYPE_REG or ARG_TYPE_PTR or ARG_TYPE_REG_MASKED */ 66 65 u8 reg; 67 66 u8 _padding[2]; 68 67 u32 val; ··· 136 135 #define TEST_ARG_MEM(index, val) \ 137 136 ".byte "__stringify(ARG_TYPE_MEM)" \n\t" \ 138 137 ".byte "#index" \n\t" \ 138 + ".short 0 \n\t" \ 139 + ".word "#val" \n\t" 140 + 141 + #define TEST_ARG_REG_MASKED(reg, val) \ 142 + ".byte "__stringify(ARG_TYPE_REG_MASKED)" \n\t" \ 143 + ".byte "#reg" \n\t" \ 139 144 ".short 0 \n\t" \ 140 145 ".word "#val" \n\t" 141 146 ··· 401 394 " b 99f \n\t" \ 402 395 " "codex" \n\t" \ 403 396 TESTCASE_END 397 + 398 + #define TEST_RMASKED(code1, reg, mask, code2) \ 399 + TESTCASE_START(code1 #reg code2) \ 400 + TEST_ARG_REG_MASKED(reg, mask) \ 401 + TEST_ARG_END("") \ 402 + TEST_INSTRUCTION(code1 #reg code2) \ 403 + TESTCASE_END 404 + 405 + /* 406 + * We ignore the state of the imprecise abort disable flag (CPSR.A) because this 407 + * can change randomly as the kernel doesn't take care to preserve or initialise 408 + * this across context switches. Also, with Security Extensions, the flag may 409 + * not be under control of the kernel; for this reason we ignore the state of 410 + * the FIQ disable flag CPSR.F as well. 411 + */ 412 + #define PSR_IGNORE_BITS (PSR_A_BIT | PSR_F_BIT) 404 413 405 414 406 415 /*
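The TEST_RMASKED()/PSR_IGNORE_BITS machinery above boils down to comparing register state with the indeterminate bits masked out on both sides. A standalone sketch of that comparison (bit positions taken from the ARM CPSR layout: F is bit 6, A is bit 8):

```c
#include <assert.h>
#include <stdint.h>

/* CPSR bit positions matching PSR_F_BIT/PSR_A_BIT in the kernel:
 * F (FIQ disable) is bit 6, A (imprecise abort disable) is bit 8. */
#define PSR_F_BIT 0x00000040u
#define PSR_A_BIT 0x00000100u
#define PSR_IGNORE_BITS (PSR_A_BIT | PSR_F_BIT)

/* Compare two CPSR snapshots with the indeterminate bits cleared on
 * both sides -- the same idea test_after_pre_handler() applies to
 * result_regs.ARM_cpsr before results are compared. */
static int cpsr_matches(uint32_t a, uint32_t b)
{
	return (a & ~PSR_IGNORE_BITS) == (b & ~PSR_IGNORE_BITS);
}
```

ARG_TYPE_REG_MASKED generalises this: any register, not just the CPSR, can have a per-test mask applied before comparison.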
+7 -3
arch/arm/kernel/kprobes-thumb.c arch/arm/probes/kprobes/actions-thumb.c
··· 1 1 /* 2 - * arch/arm/kernel/kprobes-thumb.c 2 + * arch/arm/probes/kprobes/actions-thumb.c 3 3 * 4 4 * Copyright (C) 2011 Jon Medhurst <tixy@yxit.co.uk>. 5 5 * ··· 13 13 #include <linux/ptrace.h> 14 14 #include <linux/kprobes.h> 15 15 16 - #include "kprobes.h" 17 - #include "probes-thumb.h" 16 + #include "../decode-thumb.h" 17 + #include "core.h" 18 + #include "checkers.h" 18 19 19 20 /* These emulation encodings are functionally equivalent... */ 20 21 #define t32_emulate_rd8rn16rm0ra12_noflags \ ··· 665 664 [PROBES_T32_MUL_ADD_LONG] = { 666 665 .handler = t32_emulate_rdlo12rdhi8rn16rm0_noflags}, 667 666 }; 667 + 668 + const struct decode_checker *kprobes_t32_checkers[] = {t32_stack_checker, NULL}; 669 + const struct decode_checker *kprobes_t16_checkers[] = {t16_stack_checker, NULL};
+37 -12
arch/arm/kernel/kprobes.c arch/arm/probes/kprobes/core.c
··· 30 30 #include <asm/cacheflush.h> 31 31 #include <linux/percpu.h> 32 32 #include <linux/bug.h> 33 + #include <asm/patch.h> 33 34 34 - #include "kprobes.h" 35 - #include "probes-arm.h" 36 - #include "probes-thumb.h" 37 - #include "patch.h" 35 + #include "../decode-arm.h" 36 + #include "../decode-thumb.h" 37 + #include "core.h" 38 38 39 39 #define MIN_STACK_SIZE(addr) \ 40 40 min((unsigned long)MAX_STACK_SIZE, \ ··· 61 61 kprobe_decode_insn_t *decode_insn; 62 62 const union decode_action *actions; 63 63 int is; 64 + const struct decode_checker **checkers; 64 65 65 66 if (in_exception_text(addr)) 66 67 return -EINVAL; ··· 75 74 insn = __opcode_thumb32_compose(insn, inst2); 76 75 decode_insn = thumb32_probes_decode_insn; 77 76 actions = kprobes_t32_actions; 77 + checkers = kprobes_t32_checkers; 78 78 } else { 79 79 decode_insn = thumb16_probes_decode_insn; 80 80 actions = kprobes_t16_actions; 81 + checkers = kprobes_t16_checkers; 81 82 } 82 83 #else /* !CONFIG_THUMB2_KERNEL */ 83 84 thumb = false; ··· 88 85 insn = __mem_to_opcode_arm(*p->addr); 89 86 decode_insn = arm_probes_decode_insn; 90 87 actions = kprobes_arm_actions; 88 + checkers = kprobes_arm_checkers; 91 89 #endif 92 90 93 91 p->opcode = insn; 94 92 p->ainsn.insn = tmp_insn; 95 93 96 - switch ((*decode_insn)(insn, &p->ainsn, true, actions)) { 94 + switch ((*decode_insn)(insn, &p->ainsn, true, actions, checkers)) { 97 95 case INSN_REJECTED: /* not supported */ 98 96 return -EINVAL; 99 97 ··· 114 110 p->ainsn.insn = NULL; 115 111 break; 116 112 } 113 + 114 + /* 115 + * Never instrument insn like 'str r0, [sp, +/-r1]'. Also, insn likes 116 + * 'str r0, [sp, #-68]' should also be prohibited. 117 + * See __und_svc. 118 + */ 119 + if ((p->ainsn.stack_space < 0) || 120 + (p->ainsn.stack_space > MAX_STACK_SIZE)) 121 + return -EINVAL; 117 122 118 123 return 0; 119 124 } ··· 163 150 * memory. It is also needed to atomically set the two half-words of a 32-bit 164 151 * Thumb breakpoint. 
165 152 */ 166 - int __kprobes __arch_disarm_kprobe(void *p) 153 + struct patch { 154 + void *addr; 155 + unsigned int insn; 156 + }; 157 + 158 + static int __kprobes_remove_breakpoint(void *data) 167 159 { 168 - struct kprobe *kp = p; 169 - void *addr = (void *)((uintptr_t)kp->addr & ~1); 170 - 171 - __patch_text(addr, kp->opcode); 172 - 160 + struct patch *p = data; 161 + __patch_text(p->addr, p->insn); 173 162 return 0; 163 + } 164 + 165 + void __kprobes kprobes_remove_breakpoint(void *addr, unsigned int insn) 166 + { 167 + struct patch p = { 168 + .addr = addr, 169 + .insn = insn, 170 + }; 171 + stop_machine(__kprobes_remove_breakpoint, &p, cpu_online_mask); 174 172 } 175 173 176 174 void __kprobes arch_disarm_kprobe(struct kprobe *p) 177 175 { 178 - stop_machine(__arch_disarm_kprobe, p, cpu_online_mask); 176 + kprobes_remove_breakpoint((void *)((uintptr_t)p->addr & ~1), 177 + p->opcode); 179 178 } 180 179 181 180 void __kprobes arch_remove_kprobe(struct kprobe *p)
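The refactored disarm path above packs the address/opcode pair into a struct and hands it to a stop_machine() callback, so no CPU can execute the instruction mid-rewrite. A userspace sketch of the same pattern, with the cross-CPU stop replaced by a direct call:

```c
#include <assert.h>

/* The address/opcode pair travels to the patching callback in one
 * struct, as in kprobes_remove_breakpoint() above. */
struct patch {
	unsigned int *addr;
	unsigned int insn;
};

static int remove_breakpoint_cb(void *data)
{
	struct patch *p = data;

	*p->addr = p->insn;	/* stands in for __patch_text() */
	return 0;
}

static void remove_breakpoint(unsigned int *addr, unsigned int insn)
{
	struct patch p = { .addr = addr, .insn = insn };

	/* kernel: stop_machine(remove_breakpoint_cb, &p, cpu_online_mask) */
	remove_breakpoint_cb(&p);
}
```

Exporting the wrapper rather than the callback keeps the stop_machine() plumbing private to core.c.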
+9 -3
arch/arm/kernel/kprobes.h arch/arm/probes/kprobes/core.h
··· 19 19 #ifndef _ARM_KERNEL_KPROBES_H 20 20 #define _ARM_KERNEL_KPROBES_H 21 21 22 - #include "probes.h" 22 + #include <asm/kprobes.h> 23 + #include "../decode.h" 23 24 24 25 /* 25 26 * These undefined instructions must be unique and ··· 30 29 #define KPROBE_THUMB16_BREAKPOINT_INSTRUCTION 0xde18 31 30 #define KPROBE_THUMB32_BREAKPOINT_INSTRUCTION 0xf7f0a018 32 31 32 + extern void kprobes_remove_breakpoint(void *addr, unsigned int insn); 33 + 33 34 enum probes_insn __kprobes 34 35 kprobe_decode_ldmstm(kprobe_opcode_t insn, struct arch_probes_insn *asi, 35 36 const struct decode_header *h); ··· 39 36 typedef enum probes_insn (kprobe_decode_insn_t)(probes_opcode_t, 40 37 struct arch_probes_insn *, 41 38 bool, 42 - const union decode_action *); 39 + const union decode_action *, 40 + const struct decode_checker *[]); 43 41 44 42 #ifdef CONFIG_THUMB2_KERNEL 45 43 46 44 extern const union decode_action kprobes_t32_actions[]; 47 45 extern const union decode_action kprobes_t16_actions[]; 48 - 46 + extern const struct decode_checker *kprobes_t32_checkers[]; 47 + extern const struct decode_checker *kprobes_t16_checkers[]; 49 48 #else /* !CONFIG_THUMB2_KERNEL */ 50 49 51 50 extern const union decode_action kprobes_arm_actions[]; 51 + extern const struct decode_checker *kprobes_arm_checkers[]; 52 52 53 53 #endif 54 54
+1 -2
arch/arm/kernel/patch.c
··· 8 8 #include <asm/fixmap.h> 9 9 #include <asm/smp_plat.h> 10 10 #include <asm/opcodes.h> 11 - 12 - #include "patch.h" 11 + #include <asm/patch.h> 13 12 14 13 struct patch { 15 14 void *addr;
arch/arm/kernel/patch.h arch/arm/include/asm/patch.h
+10 -8
arch/arm/kernel/probes-arm.c arch/arm/probes/decode-arm.c
··· 1 1 /* 2 - * arch/arm/kernel/probes-arm.c 2 + * 3 + * arch/arm/probes/decode-arm.c 3 4 * 4 5 * Some code moved here from arch/arm/kernel/kprobes-arm.c 5 6 * ··· 21 20 #include <linux/stddef.h> 22 21 #include <linux/ptrace.h> 23 22 24 - #include "probes.h" 25 - #include "probes-arm.h" 23 + #include "decode.h" 24 + #include "decode-arm.h" 26 25 27 26 #define sign_extend(x, signbit) ((x) | (0 - ((x) & (1 << (signbit))))) 28 27 ··· 370 369 371 370 /* MOVW cccc 0011 0000 xxxx xxxx xxxx xxxx xxxx */ 372 371 /* MOVT cccc 0011 0100 xxxx xxxx xxxx xxxx xxxx */ 373 - DECODE_EMULATEX (0x0fb00000, 0x03000000, PROBES_DATA_PROCESSING_IMM, 372 + DECODE_EMULATEX (0x0fb00000, 0x03000000, PROBES_MOV_HALFWORD, 374 373 REGS(0, NOPC, 0, 0, 0)), 375 374 376 375 /* YIELD cccc 0011 0010 0000 xxxx xxxx 0000 0001 */ 377 376 DECODE_OR (0x0fff00ff, 0x03200001), 378 377 /* SEV cccc 0011 0010 0000 xxxx xxxx 0000 0100 */ 379 - DECODE_EMULATE (0x0fff00ff, 0x03200004, PROBES_EMULATE_NONE), 378 + DECODE_EMULATE (0x0fff00ff, 0x03200004, PROBES_SEV), 380 379 /* NOP cccc 0011 0010 0000 xxxx xxxx 0000 0000 */ 381 380 /* WFE cccc 0011 0010 0000 xxxx xxxx 0000 0010 */ 382 381 /* WFI cccc 0011 0010 0000 xxxx xxxx 0000 0011 */ 383 - DECODE_SIMULATE (0x0fff00fc, 0x03200000, PROBES_SIMULATE_NOP), 382 + DECODE_SIMULATE (0x0fff00fc, 0x03200000, PROBES_WFE), 384 383 /* DBG cccc 0011 0010 0000 xxxx xxxx ffff xxxx */ 385 384 /* unallocated hints cccc 0011 0010 0000 xxxx xxxx xxxx xxxx */ 386 385 /* MSR (immediate) cccc 0011 0x10 xxxx xxxx xxxx xxxx xxxx */ ··· 726 725 */ 727 726 enum probes_insn __kprobes 728 727 arm_probes_decode_insn(probes_opcode_t insn, struct arch_probes_insn *asi, 729 - bool emulate, const union decode_action *actions) 728 + bool emulate, const union decode_action *actions, 729 + const struct decode_checker *checkers[]) 730 730 { 731 731 asi->insn_singlestep = arm_singlestep; 732 732 asi->insn_check_cc = probes_condition_checks[insn>>28]; 733 733 return probes_decode_insn(insn, asi, 
probes_decode_arm_table, false, 734 - emulate, actions); 734 + emulate, actions, checkers); 735 735 }
+5 -4
arch/arm/kernel/probes-arm.h arch/arm/probes/decode-arm.h
··· 1 1 /* 2 - * arch/arm/kernel/probes-arm.h 2 + * arch/arm/probes/decode-arm.h 3 3 * 4 4 * Copyright 2013 Linaro Ltd. 5 5 * Written by: David A. Long ··· 15 15 #ifndef _ARM_KERNEL_PROBES_ARM_H 16 16 #define _ARM_KERNEL_PROBES_ARM_H 17 17 18 + #include "decode.h" 19 + 18 20 enum probes_arm_action { 19 - PROBES_EMULATE_NONE, 20 - PROBES_SIMULATE_NOP, 21 21 PROBES_PRELOAD_IMM, 22 22 PROBES_PRELOAD_REG, 23 23 PROBES_BRANCH_IMM, ··· 68 68 69 69 enum probes_insn arm_probes_decode_insn(probes_opcode_t, 70 70 struct arch_probes_insn *, bool emulate, 71 - const union decode_action *actions); 71 + const union decode_action *actions, 72 + const struct decode_checker *checkers[]); 72 73 73 74 #endif
+9 -7
arch/arm/kernel/probes-thumb.c arch/arm/probes/decode-thumb.c
··· 1 1 /* 2 - * arch/arm/kernel/probes-thumb.c 2 + * arch/arm/probes/decode-thumb.c 3 3 * 4 4 * Copyright (C) 2011 Jon Medhurst <tixy@yxit.co.uk>. 5 5 * ··· 12 12 #include <linux/kernel.h> 13 13 #include <linux/module.h> 14 14 15 - #include "probes.h" 16 - #include "probes-thumb.h" 15 + #include "decode.h" 16 + #include "decode-thumb.h" 17 17 18 18 19 19 static const union decode_item t32_table_1110_100x_x0xx[] = { ··· 863 863 864 864 enum probes_insn __kprobes 865 865 thumb16_probes_decode_insn(probes_opcode_t insn, struct arch_probes_insn *asi, 866 - bool emulate, const union decode_action *actions) 866 + bool emulate, const union decode_action *actions, 867 + const struct decode_checker *checkers[]) 867 868 { 868 869 asi->insn_singlestep = thumb16_singlestep; 869 870 asi->insn_check_cc = thumb_check_cc; 870 871 return probes_decode_insn(insn, asi, probes_decode_thumb16_table, true, 871 - emulate, actions); 872 + emulate, actions, checkers); 872 873 } 873 874 874 875 enum probes_insn __kprobes 875 876 thumb32_probes_decode_insn(probes_opcode_t insn, struct arch_probes_insn *asi, 876 - bool emulate, const union decode_action *actions) 877 + bool emulate, const union decode_action *actions, 878 + const struct decode_checker *checkers[]) 877 879 { 878 880 asi->insn_singlestep = thumb32_singlestep; 879 881 asi->insn_check_cc = thumb_check_cc; 880 882 return probes_decode_insn(insn, asi, probes_decode_thumb32_table, true, 881 - emulate, actions); 883 + emulate, actions, checkers); 882 884 }
+7 -3
arch/arm/kernel/probes-thumb.h arch/arm/probes/decode-thumb.h
··· 1 1 /* 2 - * arch/arm/kernel/probes-thumb.h 2 + * arch/arm/probes/decode-thumb.h 3 3 * 4 4 * Copyright 2013 Linaro Ltd. 5 5 * Written by: David A. Long ··· 14 14 15 15 #ifndef _ARM_KERNEL_PROBES_THUMB_H 16 16 #define _ARM_KERNEL_PROBES_THUMB_H 17 + 18 + #include "decode.h" 17 19 18 20 /* 19 21 * True if current instruction is in an IT block. ··· 91 89 92 90 enum probes_insn __kprobes 93 91 thumb16_probes_decode_insn(probes_opcode_t insn, struct arch_probes_insn *asi, 94 - bool emulate, const union decode_action *actions); 92 + bool emulate, const union decode_action *actions, 93 + const struct decode_checker *checkers[]); 95 94 enum probes_insn __kprobes 96 95 thumb32_probes_decode_insn(probes_opcode_t insn, struct arch_probes_insn *asi, 97 - bool emulate, const union decode_action *actions); 96 + bool emulate, const union decode_action *actions, 97 + const struct decode_checker *checkers[]); 98 98 99 99 #endif
+73 -8
arch/arm/kernel/probes.c arch/arm/probes/decode.c
··· 1 1 /* 2 - * arch/arm/kernel/probes.c 2 + * arch/arm/probes/decode.c 3 3 * 4 4 * Copyright (C) 2011 Jon Medhurst <tixy@yxit.co.uk>. 5 5 * ··· 17 17 #include <asm/ptrace.h> 18 18 #include <linux/bug.h> 19 19 20 - #include "probes.h" 20 + #include "decode.h" 21 21 22 22 23 23 #ifndef find_str_pc_offset ··· 342 342 [DECODE_TYPE_REJECT] = sizeof(struct decode_reject) 343 343 }; 344 344 345 + static int run_checkers(const struct decode_checker *checkers[], 346 + int action, probes_opcode_t insn, 347 + struct arch_probes_insn *asi, 348 + const struct decode_header *h) 349 + { 350 + const struct decode_checker **p; 351 + 352 + if (!checkers) 353 + return INSN_GOOD; 354 + 355 + p = checkers; 356 + while (*p != NULL) { 357 + int retval; 358 + probes_check_t *checker_func = (*p)[action].checker; 359 + 360 + retval = INSN_GOOD; 361 + if (checker_func) 362 + retval = checker_func(insn, asi, h); 363 + if (retval == INSN_REJECTED) 364 + return retval; 365 + p++; 366 + } 367 + return INSN_GOOD; 368 + } 369 + 345 370 /* 346 371 * probes_decode_insn operates on data tables in order to decode an ARM 347 372 * architecture instruction onto which a kprobe has been placed. ··· 413 388 int __kprobes 414 389 probes_decode_insn(probes_opcode_t insn, struct arch_probes_insn *asi, 415 390 const union decode_item *table, bool thumb, 416 - bool emulate, const union decode_action *actions) 391 + bool emulate, const union decode_action *actions, 392 + const struct decode_checker *checkers[]) 417 393 { 418 394 const struct decode_header *h = (struct decode_header *)table; 419 395 const struct decode_header *next; 420 396 bool matched = false; 397 + /* 398 + * @insn can be modified by decode_regs. Save its original 399 + * value for checkers. 400 + */ 401 + probes_opcode_t origin_insn = insn; 402 + 403 + /* 404 + * stack_space is initialized to 0 here. 
Checker functions 405 + * should update its value if they find this is a stack store 406 + * instruction: positive value means bytes of stack usage, 407 + * negative value means unable to determine stack usage 408 + * statically. For instructions that don't store to the stack, 409 + * checkers do nothing. 410 + */ 411 + asi->stack_space = 0; 412 + 413 + /* 414 + * Similarly to stack_space, register_usage_flags is filled by 415 + * checkers. Its default value is set to ~0, which is 'all 416 + * registers are used', to prevent any potential optimization. 417 + */ 418 + asi->register_usage_flags = ~0UL; 421 419 422 420 if (emulate) 423 421 insn = prepare_emulated_insn(insn, asi, thumb); ··· 470 422 } 471 423 472 424 case DECODE_TYPE_CUSTOM: { 425 + int err; 473 426 struct decode_custom *d = (struct decode_custom *)h; 474 - return actions[d->decoder.action].decoder(insn, asi, h); 427 + int action = d->decoder.action; 428 + 429 + err = run_checkers(checkers, action, origin_insn, asi, h); 430 + if (err == INSN_REJECTED) 431 + return INSN_REJECTED; 432 + return actions[action].decoder(insn, asi, h); 475 433 } 476 434 477 435 case DECODE_TYPE_SIMULATE: { 436 + int err; 478 437 struct decode_simulate *d = (struct decode_simulate *)h; 479 - asi->insn_handler = actions[d->handler.action].handler; 438 + int action = d->handler.action; 439 + 440 + err = run_checkers(checkers, action, origin_insn, asi, h); 441 + if (err == INSN_REJECTED) 442 + return INSN_REJECTED; 443 + asi->insn_handler = actions[action].handler; 480 444 return INSN_GOOD_NO_SLOT; 481 445 } 482 446 483 447 case DECODE_TYPE_EMULATE: { 448 + int err; 484 449 struct decode_emulate *d = (struct decode_emulate *)h; 450 + int action = d->handler.action; 451 + 452 + err = run_checkers(checkers, action, origin_insn, asi, h); 453 + if (err == INSN_REJECTED) 454 + return INSN_REJECTED; 485 455 486 456 if (!emulate) 487 - return actions[d->handler.action].decoder(insn, 488 - asi, h); 457 + return
actions[action].decoder(insn, asi, h); 489 458 490 - asi->insn_handler = actions[d->handler.action].handler; 459 + asi->insn_handler = actions[action].handler; 491 460 set_emulated_insn(insn, asi, thumb); 492 461 return INSN_GOOD; 493 462 }
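run_checkers() above walks a NULL-terminated array of per-action checker tables and lets the first rejection win; passing a NULL array (as uprobes does) skips checking entirely. A simplified standalone model of that control flow (names here are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stddef.h>

/* Toy model of the checker chain: a NULL-terminated array of checker
 * tables indexed by action; the first INSN_REJECTED wins, and a NULL
 * chain means "no checking". */
enum { INSN_GOOD, INSN_REJECTED };

typedef int checker_fn(unsigned int insn);
struct checker { checker_fn *checker; };

static int run_checker_chain(const struct checker **chain, int action,
			     unsigned int insn)
{
	const struct checker **p;

	if (!chain)
		return INSN_GOOD;
	for (p = chain; *p != NULL; p++) {
		checker_fn *fn = (*p)[action].checker;

		if (fn && fn(insn) == INSN_REJECTED)
			return INSN_REJECTED;
	}
	return INSN_GOOD;
}

/* One example table for action 0, rejecting odd encodings. */
static int reject_odd(unsigned int insn)
{
	return (insn & 1) ? INSN_REJECTED : INSN_GOOD;
}
static const struct checker demo_table[] = { { reject_odd } };
static const struct checker *demo_chain[] = { demo_table, NULL };
```

Running the checkers against origin_insn rather than the possibly rewritten insn is what makes saving the original opcode necessary.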
+11 -2
arch/arm/kernel/probes.h arch/arm/probes/decode.h
··· 1 1 /* 2 - * arch/arm/kernel/probes.h 2 + * arch/arm/probes/decode.h 3 3 * 4 4 * Copyright (C) 2011 Jon Medhurst <tixy@yxit.co.uk>. 5 5 * ··· 314 314 probes_custom_decode_t *decoder; 315 315 }; 316 316 317 + typedef enum probes_insn (probes_check_t)(probes_opcode_t, 318 + struct arch_probes_insn *, 319 + const struct decode_header *); 320 + 321 + struct decode_checker { 322 + probes_check_t *checker; 323 + }; 324 + 317 325 #define DECODE_END \ 318 326 {.bits = DECODE_TYPE_END} 319 327 ··· 410 402 int __kprobes 411 403 probes_decode_insn(probes_opcode_t insn, struct arch_probes_insn *asi, 412 404 const union decode_item *table, bool thumb, bool emulate, 413 - const union decode_action *actions); 405 + const union decode_action *actions, 406 + const struct decode_checker **checkers); 414 407 415 408 #endif
-4
arch/arm/kernel/suspend.c
··· 14 14 extern void cpu_resume_mmu(void); 15 15 16 16 #ifdef CONFIG_MMU 17 - /* 18 - * Hide the first two arguments to __cpu_suspend - these are an implementation 19 - * detail which platform code shouldn't have to know about. 20 - */ 21 17 int cpu_suspend(unsigned long arg, int (*fn)(unsigned long)) 22 18 { 23 19 struct mm_struct *mm = current->active_mm;
+3 -5
arch/arm/kernel/uprobes-arm.c arch/arm/probes/uprobes/actions-arm.c
··· 13 13 #include <linux/uprobes.h> 14 14 #include <linux/module.h> 15 15 16 - #include "probes.h" 17 - #include "probes-arm.h" 18 - #include "uprobes.h" 16 + #include "../decode.h" 17 + #include "../decode-arm.h" 18 + #include "core.h" 19 19 20 20 static int uprobes_substitute_pc(unsigned long *pinsn, u32 oregs) 21 21 { ··· 195 195 } 196 196 197 197 const union decode_action uprobes_probes_actions[] = { 198 - [PROBES_EMULATE_NONE] = {.handler = probes_simulate_nop}, 199 - [PROBES_SIMULATE_NOP] = {.handler = probes_simulate_nop}, 200 198 [PROBES_PRELOAD_IMM] = {.handler = probes_simulate_nop}, 201 199 [PROBES_PRELOAD_REG] = {.handler = probes_simulate_nop}, 202 200 [PROBES_BRANCH_IMM] = {.handler = simulate_blx1},
+4 -4
arch/arm/kernel/uprobes.c arch/arm/probes/uprobes/core.c
··· 17 17 #include <asm/opcodes.h> 18 18 #include <asm/traps.h> 19 19 20 - #include "probes.h" 21 - #include "probes-arm.h" 22 - #include "uprobes.h" 20 + #include "../decode.h" 21 + #include "../decode-arm.h" 22 + #include "core.h" 23 23 24 24 #define UPROBE_TRAP_NR UINT_MAX 25 25 ··· 88 88 auprobe->ixol[1] = __opcode_to_mem_arm(UPROBE_SS_ARM_INSN); 89 89 90 90 ret = arm_probes_decode_insn(insn, &auprobe->asi, false, 91 - uprobes_probes_actions); 91 + uprobes_probes_actions, NULL); 92 92 switch (ret) { 93 93 case INSN_REJECTED: 94 94 return -EINVAL;
arch/arm/kernel/uprobes.h arch/arm/probes/uprobes/core.h
+2 -13
arch/arm/lib/Makefile
··· 15 15 io-readsb.o io-writesb.o io-readsl.o io-writesl.o \ 16 16 call_with_stack.o bswapsdi2.o 17 17 18 - mmu-y := clear_user.o copy_page.o getuser.o putuser.o 19 - 20 - # the code in uaccess.S is not preemption safe and 21 - # probably faster on ARMv3 only 22 - ifeq ($(CONFIG_PREEMPT),y) 23 - mmu-y += copy_from_user.o copy_to_user.o 24 - else 25 - ifneq ($(CONFIG_CPU_32v3),y) 26 - mmu-y += copy_from_user.o copy_to_user.o 27 - else 28 - mmu-y += uaccess.o 29 - endif 30 - endif 18 + mmu-y := clear_user.o copy_page.o getuser.o putuser.o \ 19 + copy_from_user.o copy_to_user.o 31 20 32 21 # using lib_ here won't override already available weak symbols 33 22 obj-$(CONFIG_UACCESS_WITH_MEMCPY) += uaccess_with_memcpy.o
-564
arch/arm/lib/uaccess.S
··· 1 - /* 2 - * linux/arch/arm/lib/uaccess.S 3 - * 4 - * Copyright (C) 1995, 1996,1997,1998 Russell King 5 - * 6 - * This program is free software; you can redistribute it and/or modify 7 - * it under the terms of the GNU General Public License version 2 as 8 - * published by the Free Software Foundation. 9 - * 10 - * Routines to block copy data to/from user memory 11 - * These are highly optimised both for the 4k page size 12 - * and for various alignments. 13 - */ 14 - #include <linux/linkage.h> 15 - #include <asm/assembler.h> 16 - #include <asm/errno.h> 17 - #include <asm/domain.h> 18 - 19 - .text 20 - 21 - #define PAGE_SHIFT 12 22 - 23 - /* Prototype: int __copy_to_user(void *to, const char *from, size_t n) 24 - * Purpose : copy a block to user memory from kernel memory 25 - * Params : to - user memory 26 - * : from - kernel memory 27 - * : n - number of bytes to copy 28 - * Returns : Number of bytes NOT copied. 29 - */ 30 - 31 - .Lc2u_dest_not_aligned: 32 - rsb ip, ip, #4 33 - cmp ip, #2 34 - ldrb r3, [r1], #1 35 - USER( TUSER( strb) r3, [r0], #1) @ May fault 36 - ldrgeb r3, [r1], #1 37 - USER( TUSER( strgeb) r3, [r0], #1) @ May fault 38 - ldrgtb r3, [r1], #1 39 - USER( TUSER( strgtb) r3, [r0], #1) @ May fault 40 - sub r2, r2, ip 41 - b .Lc2u_dest_aligned 42 - 43 - ENTRY(__copy_to_user) 44 - stmfd sp!, {r2, r4 - r7, lr} 45 - cmp r2, #4 46 - blt .Lc2u_not_enough 47 - ands ip, r0, #3 48 - bne .Lc2u_dest_not_aligned 49 - .Lc2u_dest_aligned: 50 - 51 - ands ip, r1, #3 52 - bne .Lc2u_src_not_aligned 53 - /* 54 - * Seeing as there has to be at least 8 bytes to copy, we can 55 - * copy one word, and force a user-mode page fault... 
56 - */ 57 - 58 - .Lc2u_0fupi: subs r2, r2, #4 59 - addmi ip, r2, #4 60 - bmi .Lc2u_0nowords 61 - ldr r3, [r1], #4 62 - USER( TUSER( str) r3, [r0], #4) @ May fault 63 - mov ip, r0, lsl #32 - PAGE_SHIFT @ On each page, use a ld/st??t instruction 64 - rsb ip, ip, #0 65 - movs ip, ip, lsr #32 - PAGE_SHIFT 66 - beq .Lc2u_0fupi 67 - /* 68 - * ip = max no. of bytes to copy before needing another "strt" insn 69 - */ 70 - cmp r2, ip 71 - movlt ip, r2 72 - sub r2, r2, ip 73 - subs ip, ip, #32 74 - blt .Lc2u_0rem8lp 75 - 76 - .Lc2u_0cpy8lp: ldmia r1!, {r3 - r6} 77 - stmia r0!, {r3 - r6} @ Shouldnt fault 78 - ldmia r1!, {r3 - r6} 79 - subs ip, ip, #32 80 - stmia r0!, {r3 - r6} @ Shouldnt fault 81 - bpl .Lc2u_0cpy8lp 82 - 83 - .Lc2u_0rem8lp: cmn ip, #16 84 - ldmgeia r1!, {r3 - r6} 85 - stmgeia r0!, {r3 - r6} @ Shouldnt fault 86 - tst ip, #8 87 - ldmneia r1!, {r3 - r4} 88 - stmneia r0!, {r3 - r4} @ Shouldnt fault 89 - tst ip, #4 90 - ldrne r3, [r1], #4 91 - TUSER( strne) r3, [r0], #4 @ Shouldnt fault 92 - ands ip, ip, #3 93 - beq .Lc2u_0fupi 94 - .Lc2u_0nowords: teq ip, #0 95 - beq .Lc2u_finished 96 - .Lc2u_nowords: cmp ip, #2 97 - ldrb r3, [r1], #1 98 - USER( TUSER( strb) r3, [r0], #1) @ May fault 99 - ldrgeb r3, [r1], #1 100 - USER( TUSER( strgeb) r3, [r0], #1) @ May fault 101 - ldrgtb r3, [r1], #1 102 - USER( TUSER( strgtb) r3, [r0], #1) @ May fault 103 - b .Lc2u_finished 104 - 105 - .Lc2u_not_enough: 106 - movs ip, r2 107 - bne .Lc2u_nowords 108 - .Lc2u_finished: mov r0, #0 109 - ldmfd sp!, {r2, r4 - r7, pc} 110 - 111 - .Lc2u_src_not_aligned: 112 - bic r1, r1, #3 113 - ldr r7, [r1], #4 114 - cmp ip, #2 115 - bgt .Lc2u_3fupi 116 - beq .Lc2u_2fupi 117 - .Lc2u_1fupi: subs r2, r2, #4 118 - addmi ip, r2, #4 119 - bmi .Lc2u_1nowords 120 - mov r3, r7, lspull #8 121 - ldr r7, [r1], #4 122 - orr r3, r3, r7, lspush #24 123 - USER( TUSER( str) r3, [r0], #4) @ May fault 124 - mov ip, r0, lsl #32 - PAGE_SHIFT 125 - rsb ip, ip, #0 126 - movs ip, ip, lsr #32 - PAGE_SHIFT 127 - beq 
-		beq	.Lc2u_1fupi
-		cmp	r2, ip
-		movlt	ip, r2
-		sub	r2, r2, ip
-		subs	ip, ip, #16
-		blt	.Lc2u_1rem8lp
-
- .Lc2u_1cpy8lp:	mov	r3, r7, lspull #8
-		ldmia	r1!, {r4 - r7}
-		subs	ip, ip, #16
-		orr	r3, r3, r4, lspush #24
-		mov	r4, r4, lspull #8
-		orr	r4, r4, r5, lspush #24
-		mov	r5, r5, lspull #8
-		orr	r5, r5, r6, lspush #24
-		mov	r6, r6, lspull #8
-		orr	r6, r6, r7, lspush #24
-		stmia	r0!, {r3 - r6}			@ Shouldnt fault
-		bpl	.Lc2u_1cpy8lp
-
- .Lc2u_1rem8lp:	tst	ip, #8
-		movne	r3, r7, lspull #8
-		ldmneia	r1!, {r4, r7}
-		orrne	r3, r3, r4, lspush #24
-		movne	r4, r4, lspull #8
-		orrne	r4, r4, r7, lspush #24
-		stmneia	r0!, {r3 - r4}			@ Shouldnt fault
-		tst	ip, #4
-		movne	r3, r7, lspull #8
-		ldrne	r7, [r1], #4
-		orrne	r3, r3, r7, lspush #24
- TUSER(		strne)	r3, [r0], #4			@ Shouldnt fault
-		ands	ip, ip, #3
-		beq	.Lc2u_1fupi
- .Lc2u_1nowords:	mov	r3, r7, get_byte_1
-		teq	ip, #0
-		beq	.Lc2u_finished
-		cmp	ip, #2
- USER(	TUSER(	strb)	r3, [r0], #1)		@ May fault
-		movge	r3, r7, get_byte_2
- USER(	TUSER(	strgeb)	r3, [r0], #1)		@ May fault
-		movgt	r3, r7, get_byte_3
- USER(	TUSER(	strgtb)	r3, [r0], #1)		@ May fault
-		b	.Lc2u_finished
-
- .Lc2u_2fupi:	subs	r2, r2, #4
-		addmi	ip, r2, #4
-		bmi	.Lc2u_2nowords
-		mov	r3, r7, lspull #16
-		ldr	r7, [r1], #4
-		orr	r3, r3, r7, lspush #16
- USER(	TUSER(	str)	r3, [r0], #4)		@ May fault
-		mov	ip, r0, lsl #32 - PAGE_SHIFT
-		rsb	ip, ip, #0
-		movs	ip, ip, lsr #32 - PAGE_SHIFT
-		beq	.Lc2u_2fupi
-		cmp	r2, ip
-		movlt	ip, r2
-		sub	r2, r2, ip
-		subs	ip, ip, #16
-		blt	.Lc2u_2rem8lp
-
- .Lc2u_2cpy8lp:	mov	r3, r7, lspull #16
-		ldmia	r1!, {r4 - r7}
-		subs	ip, ip, #16
-		orr	r3, r3, r4, lspush #16
-		mov	r4, r4, lspull #16
-		orr	r4, r4, r5, lspush #16
-		mov	r5, r5, lspull #16
-		orr	r5, r5, r6, lspush #16
-		mov	r6, r6, lspull #16
-		orr	r6, r6, r7, lspush #16
-		stmia	r0!, {r3 - r6}			@ Shouldnt fault
-		bpl	.Lc2u_2cpy8lp
-
- .Lc2u_2rem8lp:	tst	ip, #8
-		movne	r3, r7, lspull #16
-		ldmneia	r1!, {r4, r7}
-		orrne	r3, r3, r4, lspush #16
-		movne	r4, r4, lspull #16
-		orrne	r4, r4, r7, lspush #16
-		stmneia	r0!, {r3 - r4}			@ Shouldnt fault
-		tst	ip, #4
-		movne	r3, r7, lspull #16
-		ldrne	r7, [r1], #4
-		orrne	r3, r3, r7, lspush #16
- TUSER(		strne)	r3, [r0], #4			@ Shouldnt fault
-		ands	ip, ip, #3
-		beq	.Lc2u_2fupi
- .Lc2u_2nowords:	mov	r3, r7, get_byte_2
-		teq	ip, #0
-		beq	.Lc2u_finished
-		cmp	ip, #2
- USER(	TUSER(	strb)	r3, [r0], #1)		@ May fault
-		movge	r3, r7, get_byte_3
- USER(	TUSER(	strgeb)	r3, [r0], #1)		@ May fault
-		ldrgtb	r3, [r1], #0
- USER(	TUSER(	strgtb)	r3, [r0], #1)		@ May fault
-		b	.Lc2u_finished
-
- .Lc2u_3fupi:	subs	r2, r2, #4
-		addmi	ip, r2, #4
-		bmi	.Lc2u_3nowords
-		mov	r3, r7, lspull #24
-		ldr	r7, [r1], #4
-		orr	r3, r3, r7, lspush #8
- USER(	TUSER(	str)	r3, [r0], #4)		@ May fault
-		mov	ip, r0, lsl #32 - PAGE_SHIFT
-		rsb	ip, ip, #0
-		movs	ip, ip, lsr #32 - PAGE_SHIFT
-		beq	.Lc2u_3fupi
-		cmp	r2, ip
-		movlt	ip, r2
-		sub	r2, r2, ip
-		subs	ip, ip, #16
-		blt	.Lc2u_3rem8lp
-
- .Lc2u_3cpy8lp:	mov	r3, r7, lspull #24
-		ldmia	r1!, {r4 - r7}
-		subs	ip, ip, #16
-		orr	r3, r3, r4, lspush #8
-		mov	r4, r4, lspull #24
-		orr	r4, r4, r5, lspush #8
-		mov	r5, r5, lspull #24
-		orr	r5, r5, r6, lspush #8
-		mov	r6, r6, lspull #24
-		orr	r6, r6, r7, lspush #8
-		stmia	r0!, {r3 - r6}			@ Shouldnt fault
-		bpl	.Lc2u_3cpy8lp
-
- .Lc2u_3rem8lp:	tst	ip, #8
-		movne	r3, r7, lspull #24
-		ldmneia	r1!, {r4, r7}
-		orrne	r3, r3, r4, lspush #8
-		movne	r4, r4, lspull #24
-		orrne	r4, r4, r7, lspush #8
-		stmneia	r0!, {r3 - r4}			@ Shouldnt fault
-		tst	ip, #4
-		movne	r3, r7, lspull #24
-		ldrne	r7, [r1], #4
-		orrne	r3, r3, r7, lspush #8
- TUSER(		strne)	r3, [r0], #4			@ Shouldnt fault
-		ands	ip, ip, #3
-		beq	.Lc2u_3fupi
- .Lc2u_3nowords:	mov	r3, r7, get_byte_3
-		teq	ip, #0
-		beq	.Lc2u_finished
-		cmp	ip, #2
- USER(	TUSER(	strb)	r3, [r0], #1)		@ May fault
-		ldrgeb	r3, [r1], #1
- USER(	TUSER(	strgeb)	r3, [r0], #1)		@ May fault
-		ldrgtb	r3, [r1], #0
- USER(	TUSER(	strgtb)	r3, [r0], #1)		@ May fault
-		b	.Lc2u_finished
- ENDPROC(__copy_to_user)
-
-		.pushsection .fixup,"ax"
-		.align	0
- 9001:		ldmfd	sp!, {r0, r4 - r7, pc}
-		.popsection
-
- /* Prototype: unsigned long __copy_from_user(void *to,const void *from,unsigned long n);
-  * Purpose  : copy a block from user memory to kernel memory
-  * Params   : to   - kernel memory
-  *          : from - user memory
-  *          : n    - number of bytes to copy
-  * Returns  : Number of bytes NOT copied.
-  */
- .Lcfu_dest_not_aligned:
-		rsb	ip, ip, #4
-		cmp	ip, #2
- USER(	TUSER(	ldrb)	r3, [r1], #1)		@ May fault
-		strb	r3, [r0], #1
- USER(	TUSER(	ldrgeb)	r3, [r1], #1)		@ May fault
-		strgeb	r3, [r0], #1
- USER(	TUSER(	ldrgtb)	r3, [r1], #1)		@ May fault
-		strgtb	r3, [r0], #1
-		sub	r2, r2, ip
-		b	.Lcfu_dest_aligned
-
- ENTRY(__copy_from_user)
-		stmfd	sp!, {r0, r2, r4 - r7, lr}
-		cmp	r2, #4
-		blt	.Lcfu_not_enough
-		ands	ip, r0, #3
-		bne	.Lcfu_dest_not_aligned
- .Lcfu_dest_aligned:
-		ands	ip, r1, #3
-		bne	.Lcfu_src_not_aligned
-
- /*
-  * Seeing as there has to be at least 8 bytes to copy, we can
-  * copy one word, and force a user-mode page fault...
-  */
-
- .Lcfu_0fupi:	subs	r2, r2, #4
-		addmi	ip, r2, #4
-		bmi	.Lcfu_0nowords
- USER(	TUSER(	ldr)	r3, [r1], #4)
-		str	r3, [r0], #4
-		mov	ip, r1, lsl #32 - PAGE_SHIFT	@ On each page, use a ld/st??t instruction
-		rsb	ip, ip, #0
-		movs	ip, ip, lsr #32 - PAGE_SHIFT
-		beq	.Lcfu_0fupi
- /*
-  * ip = max no. of bytes to copy before needing another "strt" insn
-  */
-		cmp	r2, ip
-		movlt	ip, r2
-		sub	r2, r2, ip
-		subs	ip, ip, #32
-		blt	.Lcfu_0rem8lp
-
- .Lcfu_0cpy8lp:	ldmia	r1!, {r3 - r6}			@ Shouldnt fault
-		stmia	r0!, {r3 - r6}
-		ldmia	r1!, {r3 - r6}			@ Shouldnt fault
-		subs	ip, ip, #32
-		stmia	r0!, {r3 - r6}
-		bpl	.Lcfu_0cpy8lp
-
- .Lcfu_0rem8lp:	cmn	ip, #16
-		ldmgeia	r1!, {r3 - r6}			@ Shouldnt fault
-		stmgeia	r0!, {r3 - r6}
-		tst	ip, #8
-		ldmneia	r1!, {r3 - r4}			@ Shouldnt fault
-		stmneia	r0!, {r3 - r4}
-		tst	ip, #4
- TUSER(		ldrne) r3, [r1], #4			@ Shouldnt fault
-		strne	r3, [r0], #4
-		ands	ip, ip, #3
-		beq	.Lcfu_0fupi
- .Lcfu_0nowords:	teq	ip, #0
-		beq	.Lcfu_finished
- .Lcfu_nowords:	cmp	ip, #2
- USER(	TUSER(	ldrb)	r3, [r1], #1)		@ May fault
-		strb	r3, [r0], #1
- USER(	TUSER(	ldrgeb)	r3, [r1], #1)		@ May fault
-		strgeb	r3, [r0], #1
- USER(	TUSER(	ldrgtb)	r3, [r1], #1)		@ May fault
-		strgtb	r3, [r0], #1
-		b	.Lcfu_finished
-
- .Lcfu_not_enough:
-		movs	ip, r2
-		bne	.Lcfu_nowords
- .Lcfu_finished:	mov	r0, #0
-		add	sp, sp, #8
-		ldmfd	sp!, {r4 - r7, pc}
-
- .Lcfu_src_not_aligned:
-		bic	r1, r1, #3
- USER(	TUSER(	ldr)	r7, [r1], #4)		@ May fault
-		cmp	ip, #2
-		bgt	.Lcfu_3fupi
-		beq	.Lcfu_2fupi
- .Lcfu_1fupi:	subs	r2, r2, #4
-		addmi	ip, r2, #4
-		bmi	.Lcfu_1nowords
-		mov	r3, r7, lspull #8
- USER(	TUSER(	ldr)	r7, [r1], #4)		@ May fault
-		orr	r3, r3, r7, lspush #24
-		str	r3, [r0], #4
-		mov	ip, r1, lsl #32 - PAGE_SHIFT
-		rsb	ip, ip, #0
-		movs	ip, ip, lsr #32 - PAGE_SHIFT
-		beq	.Lcfu_1fupi
-		cmp	r2, ip
-		movlt	ip, r2
-		sub	r2, r2, ip
-		subs	ip, ip, #16
-		blt	.Lcfu_1rem8lp
-
- .Lcfu_1cpy8lp:	mov	r3, r7, lspull #8
-		ldmia	r1!, {r4 - r7}			@ Shouldnt fault
-		subs	ip, ip, #16
-		orr	r3, r3, r4, lspush #24
-		mov	r4, r4, lspull #8
-		orr	r4, r4, r5, lspush #24
-		mov	r5, r5, lspull #8
-		orr	r5, r5, r6, lspush #24
-		mov	r6, r6, lspull #8
-		orr	r6, r6, r7, lspush #24
-		stmia	r0!, {r3 - r6}
-		bpl	.Lcfu_1cpy8lp
-
- .Lcfu_1rem8lp:	tst	ip, #8
-		movne	r3, r7, lspull #8
-		ldmneia	r1!, {r4, r7}			@ Shouldnt fault
-		orrne	r3, r3, r4, lspush #24
-		movne	r4, r4, lspull #8
-		orrne	r4, r4, r7, lspush #24
-		stmneia	r0!, {r3 - r4}
-		tst	ip, #4
-		movne	r3, r7, lspull #8
- USER(	TUSER(	ldrne) r7, [r1], #4)		@ May fault
-		orrne	r3, r3, r7, lspush #24
-		strne	r3, [r0], #4
-		ands	ip, ip, #3
-		beq	.Lcfu_1fupi
- .Lcfu_1nowords:	mov	r3, r7, get_byte_1
-		teq	ip, #0
-		beq	.Lcfu_finished
-		cmp	ip, #2
-		strb	r3, [r0], #1
-		movge	r3, r7, get_byte_2
-		strgeb	r3, [r0], #1
-		movgt	r3, r7, get_byte_3
-		strgtb	r3, [r0], #1
-		b	.Lcfu_finished
-
- .Lcfu_2fupi:	subs	r2, r2, #4
-		addmi	ip, r2, #4
-		bmi	.Lcfu_2nowords
-		mov	r3, r7, lspull #16
- USER(	TUSER(	ldr)	r7, [r1], #4)		@ May fault
-		orr	r3, r3, r7, lspush #16
-		str	r3, [r0], #4
-		mov	ip, r1, lsl #32 - PAGE_SHIFT
-		rsb	ip, ip, #0
-		movs	ip, ip, lsr #32 - PAGE_SHIFT
-		beq	.Lcfu_2fupi
-		cmp	r2, ip
-		movlt	ip, r2
-		sub	r2, r2, ip
-		subs	ip, ip, #16
-		blt	.Lcfu_2rem8lp
-
-
- .Lcfu_2cpy8lp:	mov	r3, r7, lspull #16
-		ldmia	r1!, {r4 - r7}			@ Shouldnt fault
-		subs	ip, ip, #16
-		orr	r3, r3, r4, lspush #16
-		mov	r4, r4, lspull #16
-		orr	r4, r4, r5, lspush #16
-		mov	r5, r5, lspull #16
-		orr	r5, r5, r6, lspush #16
-		mov	r6, r6, lspull #16
-		orr	r6, r6, r7, lspush #16
-		stmia	r0!, {r3 - r6}
-		bpl	.Lcfu_2cpy8lp
-
- .Lcfu_2rem8lp:	tst	ip, #8
-		movne	r3, r7, lspull #16
-		ldmneia	r1!, {r4, r7}			@ Shouldnt fault
-		orrne	r3, r3, r4, lspush #16
-		movne	r4, r4, lspull #16
-		orrne	r4, r4, r7, lspush #16
-		stmneia	r0!, {r3 - r4}
-		tst	ip, #4
-		movne	r3, r7, lspull #16
- USER(	TUSER(	ldrne) r7, [r1], #4)		@ May fault
-		orrne	r3, r3, r7, lspush #16
-		strne	r3, [r0], #4
-		ands	ip, ip, #3
-		beq	.Lcfu_2fupi
- .Lcfu_2nowords:	mov	r3, r7, get_byte_2
-		teq	ip, #0
-		beq	.Lcfu_finished
-		cmp	ip, #2
-		strb	r3, [r0], #1
-		movge	r3, r7, get_byte_3
-		strgeb	r3, [r0], #1
- USER(	TUSER(	ldrgtb) r3, [r1], #0)		@ May fault
-		strgtb	r3, [r0], #1
-		b	.Lcfu_finished
-
- .Lcfu_3fupi:	subs	r2, r2, #4
-		addmi	ip, r2, #4
-		bmi	.Lcfu_3nowords
-		mov	r3, r7, lspull #24
- USER(	TUSER(	ldr)	r7, [r1], #4)		@ May fault
-		orr	r3, r3, r7, lspush #8
-		str	r3, [r0], #4
-		mov	ip, r1, lsl #32 - PAGE_SHIFT
-		rsb	ip, ip, #0
-		movs	ip, ip, lsr #32 - PAGE_SHIFT
-		beq	.Lcfu_3fupi
-		cmp	r2, ip
-		movlt	ip, r2
-		sub	r2, r2, ip
-		subs	ip, ip, #16
-		blt	.Lcfu_3rem8lp
-
- .Lcfu_3cpy8lp:	mov	r3, r7, lspull #24
-		ldmia	r1!, {r4 - r7}			@ Shouldnt fault
-		orr	r3, r3, r4, lspush #8
-		mov	r4, r4, lspull #24
-		orr	r4, r4, r5, lspush #8
-		mov	r5, r5, lspull #24
-		orr	r5, r5, r6, lspush #8
-		mov	r6, r6, lspull #24
-		orr	r6, r6, r7, lspush #8
-		stmia	r0!, {r3 - r6}
-		subs	ip, ip, #16
-		bpl	.Lcfu_3cpy8lp
-
- .Lcfu_3rem8lp:	tst	ip, #8
-		movne	r3, r7, lspull #24
-		ldmneia	r1!, {r4, r7}			@ Shouldnt fault
-		orrne	r3, r3, r4, lspush #8
-		movne	r4, r4, lspull #24
-		orrne	r4, r4, r7, lspush #8
-		stmneia	r0!, {r3 - r4}
-		tst	ip, #4
-		movne	r3, r7, lspull #24
- USER(	TUSER(	ldrne) r7, [r1], #4)		@ May fault
-		orrne	r3, r3, r7, lspush #8
-		strne	r3, [r0], #4
-		ands	ip, ip, #3
-		beq	.Lcfu_3fupi
- .Lcfu_3nowords:	mov	r3, r7, get_byte_3
-		teq	ip, #0
-		beq	.Lcfu_finished
-		cmp	ip, #2
-		strb	r3, [r0], #1
- USER(	TUSER(	ldrgeb) r3, [r1], #1)		@ May fault
-		strgeb	r3, [r0], #1
- USER(	TUSER(	ldrgtb) r3, [r1], #1)		@ May fault
-		strgtb	r3, [r0], #1
-		b	.Lcfu_finished
- ENDPROC(__copy_from_user)
-
-		.pushsection .fixup,"ax"
-		.align	0
- /*
-  * We took an exception.  r0 contains a pointer to
-  * the byte not copied.
-  */
- 9001:		ldr	r2, [sp], #4		@ void *to
-		sub	r2, r0, r2		@ bytes copied
-		ldr	r1, [sp], #4		@ unsigned long count
-		subs	r4, r1, r2		@ bytes left to copy
-		movne	r1, r4
-		blne	__memzero
-		mov	r0, r4
-		ldmfd	sp!, {r4 - r7, pc}
-		.popsection
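The deleted fixup handler above implements the `__copy_from_user` contract: on a fault it computes how many bytes made it across, zeroes the uncopied tail of the kernel buffer (the `blne __memzero`), and returns the number of bytes NOT copied. A minimal userspace model of that contract, assuming a hypothetical `fault_at` parameter in place of a real page fault:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/*
 * Model of the __copy_from_user fixup contract: if the copy faults
 * after 'fault_at' bytes, the remaining destination bytes are zeroed
 * (the deleted handler's call to __memzero) and that count is
 * returned.  'fault_at' is an illustrative stand-in for a page fault.
 */
static size_t model_copy_from_user(void *to, const void *from, size_t n,
				   size_t fault_at)
{
	size_t copied = fault_at < n ? fault_at : n;
	size_t left = n - copied;

	memcpy(to, from, copied);
	if (left)
		memset((char *)to + copied, 0, left);	/* __memzero */
	return left;					/* bytes NOT copied */
}
```

The zeroing matters because callers often pass the buffer on even when the copy partially failed; the fixup guarantees no stale kernel data leaks through the tail.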
+50
arch/arm/mach-exynos/firmware.c
···
  #include <asm/cacheflush.h>
  #include <asm/cputype.h>
  #include <asm/firmware.h>
+ #include <asm/hardware/cache-l2x0.h>
  #include <asm/suspend.h>

  #include <mach/map.h>
···
  	.resume		= IS_ENABLED(CONFIG_EXYNOS_CPU_SUSPEND) ? exynos_resume : NULL,
  };

+ static void exynos_l2_write_sec(unsigned long val, unsigned reg)
+ {
+ 	static int l2cache_enabled;
+
+ 	switch (reg) {
+ 	case L2X0_CTRL:
+ 		if (val & L2X0_CTRL_EN) {
+ 			/*
+ 			 * Before the cache can be enabled, due to firmware
+ 			 * design, SMC_CMD_L2X0INVALL must be called.
+ 			 */
+ 			if (!l2cache_enabled) {
+ 				exynos_smc(SMC_CMD_L2X0INVALL, 0, 0, 0);
+ 				l2cache_enabled = 1;
+ 			}
+ 		} else {
+ 			l2cache_enabled = 0;
+ 		}
+ 		exynos_smc(SMC_CMD_L2X0CTRL, val, 0, 0);
+ 		break;
+
+ 	case L2X0_DEBUG_CTRL:
+ 		exynos_smc(SMC_CMD_L2X0DEBUG, val, 0, 0);
+ 		break;
+
+ 	default:
+ 		WARN_ONCE(1, "%s: ignoring write to reg 0x%x\n", __func__, reg);
+ 	}
+ }
+
+ static void exynos_l2_configure(const struct l2x0_regs *regs)
+ {
+ 	exynos_smc(SMC_CMD_L2X0SETUP1, regs->tag_latency, regs->data_latency,
+ 		   regs->prefetch_ctrl);
+ 	exynos_smc(SMC_CMD_L2X0SETUP2, regs->pwr_ctrl, regs->aux_ctrl, 0);
+ }
+
  void __init exynos_firmware_init(void)
  {
  	struct device_node *nd;
···
  	pr_info("Running under secure firmware.\n");

  	register_firmware_ops(&exynos_firmware_ops);
+
+ 	/*
+ 	 * Exynos 4 SoCs (based on Cortex A9 and equipped with L2C-310),
+ 	 * running under secure firmware, require certain registers of L2
+ 	 * cache controller to be written in secure mode. Here .write_sec
+ 	 * callback is provided to perform necessary SMC calls.
+ 	 */
+ 	if (IS_ENABLED(CONFIG_CACHE_L2X0) &&
+ 	    read_cpuid_part() == ARM_CPU_PART_CORTEX_A9) {
+ 		outer_cache.write_sec = exynos_l2_write_sec;
+ 		outer_cache.configure = exynos_l2_configure;
+ 	}
  }
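The new `exynos_l2_write_sec()` keeps a static flag so the firmware-mandated `SMC_CMD_L2X0INVALL` is issued exactly once per enable transition, never on a redundant re-enable. A standalone C sketch of that state machine, with a recording stub in place of `exynos_smc()` (the command numbers and recorder here are illustrative, not the kernel's):

```c
#include <assert.h>

/* Illustrative stand-ins for the SMC command numbers and the enable bit. */
enum { SMC_L2X0INVALL = 1, SMC_L2X0CTRL = 2 };
#define CTRL_EN 1

static int smc_log[8], smc_count;

static void fake_smc(int cmd)
{
	smc_log[smc_count++] = cmd;	/* record instead of trapping */
}

/* Same shape as exynos_l2_write_sec()'s L2X0_CTRL case. */
static void l2_ctrl_write(unsigned long val)
{
	static int l2cache_enabled;

	if (val & CTRL_EN) {
		if (!l2cache_enabled) {	/* invalidate before first enable */
			fake_smc(SMC_L2X0INVALL);
			l2cache_enabled = 1;
		}
	} else {
		l2cache_enabled = 0;
	}
	fake_smc(SMC_L2X0CTRL);
}
```

The point of the flag is ordering: the invalidate must land before the enable write reaches the firmware, and must not be repeated while the cache stays enabled.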
+46
arch/arm/mach-exynos/sleep.S
···
  */

  #include <linux/linkage.h>
+ #include <asm/asm-offsets.h>
+ #include <asm/hardware/cache-l2x0.h>
  #include "smc.h"

  #define CPU_MASK	0xff0ffff0
···
  	mov	r0, #SMC_CMD_C15RESUME
  	dsb
  	smc	#0
+ #ifdef CONFIG_CACHE_L2X0
+ 	adr	r0, 1f
+ 	ldr	r2, [r0]
+ 	add	r0, r2, r0
+
+ 	/* Check that the address has been initialised. */
+ 	ldr	r1, [r0, #L2X0_R_PHY_BASE]
+ 	teq	r1, #0
+ 	beq	skip_l2x0
+
+ 	/* Check if controller has been enabled. */
+ 	ldr	r2, [r1, #L2X0_CTRL]
+ 	tst	r2, #0x1
+ 	bne	skip_l2x0
+
+ 	ldr	r1, [r0, #L2X0_R_TAG_LATENCY]
+ 	ldr	r2, [r0, #L2X0_R_DATA_LATENCY]
+ 	ldr	r3, [r0, #L2X0_R_PREFETCH_CTRL]
+ 	mov	r0, #SMC_CMD_L2X0SETUP1
+ 	smc	#0
+
+ 	/* Reload saved regs pointer because smc corrupts registers. */
+ 	adr	r0, 1f
+ 	ldr	r2, [r0]
+ 	add	r0, r2, r0
+
+ 	ldr	r1, [r0, #L2X0_R_PWR_CTRL]
+ 	ldr	r2, [r0, #L2X0_R_AUX_CTRL]
+ 	mov	r0, #SMC_CMD_L2X0SETUP2
+ 	smc	#0
+
+ 	mov	r0, #SMC_CMD_L2X0INVALL
+ 	smc	#0
+
+ 	mov	r1, #1
+ 	mov	r0, #SMC_CMD_L2X0CTRL
+ 	smc	#0
+ skip_l2x0:
+ #endif /* CONFIG_CACHE_L2X0 */
  skip_cp15:
  	b	cpu_resume
  ENDPROC(exynos_cpu_resume_ns)
···
  	.globl cp15_save_power
  cp15_save_power:
  	.long	0	@ cp15 power control
+
+ #ifdef CONFIG_CACHE_L2X0
+ 	.align
+ 1:	.long	l2x0_saved_regs - .
+ #endif /* CONFIG_CACHE_L2X0 */
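The resume path above locates `l2x0_saved_regs` before the MMU is on by storing a PC-relative offset (`.long l2x0_saved_regs - .`) and adding the literal's own run-time address back (`adr r0, 1f; ldr r2, [r0]; add r0, r2, r0`). A C model of that relocation-free addressing, using plain integers for addresses (the `link_*`/`load_bias` names are illustrative):

```c
#include <assert.h>
#include <stdint.h>

/*
 * link_literal/link_target are hypothetical link-time addresses;
 * load_bias models the difference between where the code runs and
 * where it was linked.  The stored word is target - literal, so
 * literal's run-time address plus the word recovers the target
 * wherever the image actually sits.
 */
static uintptr_t resolve(uintptr_t link_literal, uintptr_t link_target,
			 intptr_t load_bias)
{
	intptr_t word = (intptr_t)(link_target - link_literal); /* .long t - . */
	uintptr_t literal_runtime = link_literal + load_bias;   /* adr */

	return literal_runtime + word;                          /* add */
}
```

This is why the pointer must be reloaded after each `smc`: only the literal's address survives relocation, not any cached register copy.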
+7 -3
arch/arm/mach-ks8695/include/mach/debug-macro.S arch/arm/include/debug/ks8695.S
···
  /*
- * arch/arm/mach-ks8695/include/mach/debug-macro.S
+ * arch/arm/include/debug/ks8695.S
  *
  * Copyright (C) 2006 Ben Dooks <ben@simtec.co.uk>
  * Copyright (C) 2006 Simtec Electronics
···
  * published by the Free Software Foundation.
  */

- #include <mach/hardware.h>
- #include <mach/regs-uart.h>
+ #define KS8695_UART_PA		0x03ffe000
+ #define KS8695_UART_VA		0xf00fe000
+ #define KS8695_URTH		(0x04)
+ #define KS8695_URLS		(0x14)
+ #define URLS_URTE		(1 << 6)
+ #define URLS_URTHRE		(1 << 5)

  	.macro	addruart, rp, rv, tmp
  	ldr	\rp, =KS8695_UART_PA	@ physical base address
+11 -11
arch/arm/mach-netx/include/mach/debug-macro.S arch/arm/include/debug/netx.S
···
- /* arch/arm/mach-netx/include/mach/debug-macro.S
- *
+ /*
  * Debugging macro include header
  *
  * Copyright (C) 1994-1999 Russell King
···
  *
  */

- #include "hardware.h"
+ #define UART_DATA		0
+ #define UART_FLAG		0x18
+ #define UART_FLAG_BUSY		(1 << 3)

  	.macro	addruart, rp, rv, tmp
- 	mov	\rp, #0x00000a00
- 	orr	\rv, \rp, #io_p2v(0x00100000)	@ virtual
- 	orr	\rp, \rp, #0x00100000		@ physical
+ 	ldr	\rp, =CONFIG_DEBUG_UART_PHYS
+ 	ldr	\rv, =CONFIG_DEBUG_UART_VIRT
  	.endm

  	.macro	senduart,rd,rx
- 	str	\rd, [\rx, #0]
+ 	str	\rd, [\rx, #UART_DATA]
  	.endm

  	.macro	busyuart,rd,rx
- 1002:	ldr	\rd, [\rx, #0x18]
- 	tst	\rd, #(1 << 3)
+ 1002:	ldr	\rd, [\rx, #UART_FLAG]
+ 	tst	\rd, #UART_FLAG_BUSY
  	bne	1002b
  	.endm

  	.macro	waituart,rd,rx
- 1001:	ldr	\rd, [\rx, #0x18]
- 	tst	\rd, #(1 << 3)
+ 1001:	ldr	\rd, [\rx, #UART_FLAG]
+ 	tst	\rd, #UART_FLAG_BUSY
  	bne	1001b
  	.endm
-101
arch/arm/mach-omap1/include/mach/debug-macro.S
···
- /* arch/arm/mach-omap1/include/mach/debug-macro.S
- *
- * Debugging macro include header
- *
- * Copyright (C) 1994-1999 Russell King
- * Moved from linux/arch/arm/kernel/debug.S by Ben Dooks
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- *
- */
-
- #include <linux/serial_reg.h>
-
- #include "serial.h"
-
- 		.pushsection .data
- omap_uart_phys:	.word	0x0
- omap_uart_virt:	.word	0x0
- 		.popsection
-
- /*
-  * Note that this code won't work if the bootloader passes
-  * a wrong machine ID number in r1. To debug, just hardcode
-  * the desired UART phys and virt addresses temporarily into
-  * the omap_uart_phys and omap_uart_virt above.
-  */
- 		.macro	addruart, rp, rv, tmp
-
- 		/* Use omap_uart_phys/virt if already configured */
- 9:		adr	\rp, 99f		@ get effective addr of 99f
- 		ldr	\rv, [\rp]		@ get absolute addr of 99f
- 		sub	\rv, \rv, \rp		@ offset between the two
- 		ldr	\rp, [\rp, #4]		@ abs addr of omap_uart_phys
- 		sub	\tmp, \rp, \rv		@ make it effective
- 		ldr	\rp, [\tmp, #0]		@ omap_uart_phys
- 		ldr	\rv, [\tmp, #4]		@ omap_uart_virt
- 		cmp	\rp, #0			@ is port configured?
- 		cmpne	\rv, #0
- 		bne	100f			@ already configured
-
- 		/* Check the debug UART configuration set in uncompress.h */
- 		and	\rp, pc, #0xff000000
- 		ldr	\rv, =OMAP_UART_INFO_OFS
- 		ldr	\rp, [\rp, \rv]
-
- 		/* Select the UART to use based on the UART1 scratchpad value */
- 10:		cmp	\rp, #0			@ no port configured?
- 		beq	11f			@ if none, try to use UART1
- 		cmp	\rp, #OMAP1UART1
- 		beq	11f			@ configure OMAP1UART1
- 		cmp	\rp, #OMAP1UART2
- 		beq	12f			@ configure OMAP1UART2
- 		cmp	\rp, #OMAP1UART3
- 		beq	13f			@ configure OMAP2UART3
-
- 		/* Configure the UART offset from the phys/virt base */
- 11:		mov	\rp, #0x00fb0000	@ OMAP1UART1
- 		b	98f
- 12:		mov	\rp, #0x00fb0000	@ OMAP1UART1
- 		orr	\rp, \rp, #0x00000800	@ OMAP1UART2
- 		b	98f
- 13:		mov	\rp, #0x00fb0000	@ OMAP1UART1
- 		orr	\rp, \rp, #0x00000800	@ OMAP1UART2
- 		orr	\rp, \rp, #0x00009000	@ OMAP1UART3
-
- 		/* Store both phys and virt address for the uart */
- 98:		add	\rp, \rp, #0xff000000	@ phys base
- 		str	\rp, [\tmp, #0]		@ omap_uart_phys
- 		sub	\rp, \rp, #0xff000000	@ phys base
- 		add	\rp, \rp, #0xfe000000	@ virt base
- 		str	\rp, [\tmp, #4]		@ omap_uart_virt
- 		b	9b
-
- 		.align
- 99:		.word	.
- 		.word	omap_uart_phys
- 		.ltorg
-
- 100:
- 		.endm
-
- 		.macro	senduart,rd,rx
- 		strb	\rd, [\rx]
- 		.endm
-
- 		.macro	busyuart,rd,rx
- 1001:		ldrb	\rd, [\rx, #(UART_LSR << OMAP_PORT_SHIFT)]
- 		and	\rd, \rd, #(UART_LSR_TEMT | UART_LSR_THRE)
- 		teq	\rd, #(UART_LSR_TEMT | UART_LSR_THRE)
- 		beq	1002f
- 		ldrb	\rd, [\rx, #(UART_LSR << OMAP7XX_PORT_SHIFT)]
- 		and	\rd, \rd, #(UART_LSR_TEMT | UART_LSR_THRE)
- 		teq	\rd, #(UART_LSR_TEMT | UART_LSR_THRE)
- 		bne	1001b
- 1002:
- 		.endm
-
- 		.macro	waituart,rd,rx
- 		.endm
+6
arch/arm/mach-omap2/board-generic.c
···
  };

  DT_MACHINE_START(OMAP4_DT, "Generic OMAP4 (Flattened Device Tree)")
+ 	.l2c_aux_val	= OMAP_L2C_AUX_CTRL,
+ 	.l2c_aux_mask	= 0xcf9fffff,
+ 	.l2c_write_sec	= omap4_l2c310_write_sec,
  	.reserve	= omap_reserve,
  	.smp		= smp_ops(omap4_smp_ops),
  	.map_io		= omap4_map_io,
···
  };

  DT_MACHINE_START(AM43_DT, "Generic AM43 (Flattened Device Tree)")
+ 	.l2c_aux_val	= OMAP_L2C_AUX_CTRL,
+ 	.l2c_aux_mask	= 0xcf9fffff,
+ 	.l2c_write_sec	= omap4_l2c310_write_sec,
  	.map_io		= am33xx_map_io,
  	.init_early	= am43xx_init_early,
  	.init_late	= am43xx_init_late,
+8
arch/arm/mach-omap2/common.h
···
  #include <linux/irqchip/irq-omap-intc.h>

  #include <asm/proc-fns.h>
+ #include <asm/hardware/cache-l2x0.h>

  #include "i2c.h"
  #include "serial.h"
···
  extern void omap4_local_timer_init(void);
  #ifdef CONFIG_CACHE_L2X0
  int omap_l2_cache_init(void);
+ #define OMAP_L2C_AUX_CTRL	(L2C_AUX_CTRL_SHARED_OVERRIDE | \
+ 				 L310_AUX_CTRL_DATA_PREFETCH | \
+ 				 L310_AUX_CTRL_INSTR_PREFETCH)
+ void omap4_l2c310_write_sec(unsigned long val, unsigned reg);
  #else
  static inline int omap_l2_cache_init(void)
  {
  	return 0;
  }
+
+ #define OMAP_L2C_AUX_CTRL	0
+ #define omap4_l2c310_write_sec	NULL
  #endif
  extern void omap5_realtime_timer_init(void);
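`OMAP_L2C_AUX_CTRL` ORs three PL310 auxiliary-control bits that the machine descriptor now passes alongside the `0xcf9fffff` mask. Assuming the usual L2C-310 bit positions (shared-attribute override at bit 22, data and instruction prefetch at bits 28 and 29 — check `asm/hardware/cache-l2x0.h` before relying on these), the value and its relationship to the mask can be checked arithmetically:

```c
#include <assert.h>

/* Assumed bit positions, mirroring asm/hardware/cache-l2x0.h. */
#define L2C_AUX_CTRL_SHARED_OVERRIDE	(1u << 22)
#define L310_AUX_CTRL_DATA_PREFETCH	(1u << 28)
#define L310_AUX_CTRL_INSTR_PREFETCH	(1u << 29)

#define OMAP_L2C_AUX_CTRL	(L2C_AUX_CTRL_SHARED_OVERRIDE | \
				 L310_AUX_CTRL_DATA_PREFETCH | \
				 L310_AUX_CTRL_INSTR_PREFETCH)
```

The mask's cleared bits (`~0xcf9fffff`) cover exactly the positions the platform wants to own, so every configured bit survives the `(read & mask) | val` merge the L2C core performs.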
+1 -15
arch/arm/mach-omap2/omap4-common.c
···
  	return l2cache_base;
  }

- static void omap4_l2c310_write_sec(unsigned long val, unsigned reg)
+ void omap4_l2c310_write_sec(unsigned long val, unsigned reg)
  {
  	unsigned smc_op;
···

  int __init omap_l2_cache_init(void)
  {
- 	u32 aux_ctrl;
-
  	/* Static mapping, never released */
  	l2cache_base = ioremap(OMAP44XX_L2CACHE_BASE, SZ_4K);
  	if (WARN_ON(!l2cache_base))
  		return -ENOMEM;
-
- 	/* 16-way associativity, parity disabled, way size - 64KB (es2.0 +) */
- 	aux_ctrl = L2C_AUX_CTRL_SHARED_OVERRIDE |
- 		   L310_AUX_CTRL_DATA_PREFETCH |
- 		   L310_AUX_CTRL_INSTR_PREFETCH;
-
- 	outer_cache.write_sec = omap4_l2c310_write_sec;
- 	if (of_have_populated_dt())
- 		l2x0_of_init(aux_ctrl, 0xcf9fffff);
- 	else
- 		l2x0_init(l2cache_base, aux_ctrl, 0xcf9fffff);
-
  	return 0;
  }
  #endif
+2 -2
arch/arm/mach-qcom/platsmp.c
···
  #define APCS_SAW2_VCTL		0x14
  #define APCS_SAW2_2_VCTL	0x1c

- extern void secondary_startup(void);
+ extern void secondary_startup_arm(void);

  static DEFINE_SPINLOCK(boot_lock);
···
  		flags |= cold_boot_flags[map];
  	}

- 	if (scm_set_boot_addr(virt_to_phys(secondary_startup), flags)) {
+ 	if (scm_set_boot_addr(virt_to_phys(secondary_startup_arm), flags)) {
  		for_each_present_cpu(cpu) {
  			if (cpu == smp_processor_id())
  				continue;
+1 -1
arch/arm/mach-sa1100/Makefile
···
  #

  # Common support
- obj-y := clock.o generic.o irq.o time.o #nmi-oopser.o
+ obj-y := clock.o generic.o irq.o #nmi-oopser.o

  # Specific board support
  obj-$(CONFIG_SA1100_ASSABET)	+= assabet.o
+12
arch/arm/mach-sa1100/clock.c
···

  static DEFINE_CLK(cpu, &clk_cpu_ops);

+ static unsigned long clk_36864_get_rate(struct clk *clk)
+ {
+ 	return 3686400;
+ }
+
+ static struct clkops clk_36864_ops = {
+ 	.get_rate	= clk_36864_get_rate,
+ };
+
+ static DEFINE_CLK(36864, &clk_36864_ops);
+
  static struct clk_lookup sa11xx_clkregs[] = {
  	CLKDEV_INIT("sa1111.0", NULL, &clk_gpio27),
  	CLKDEV_INIT("sa1100-rtc", NULL, NULL),
···
  	CLKDEV_INIT("sa11x0-pcmcia", NULL, &clk_cpu),
  	/* sa1111 names devices using internal offsets, PCMCIA is at 0x1800 */
  	CLKDEV_INIT("1800", NULL, &clk_cpu),
+ 	CLKDEV_INIT(NULL, "OSTIMER0", &clk_36864),
  };

  static int __init sa11xx_clk_init(void)
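The new `clk_36864` is a fixed 3.6864 MHz clock whose only op is `get_rate`; registering it under the "OSTIMER0" connection id lets the shared pxa_timer driver look the rate up through the generic clk API instead of hard-coding it. A minimal sketch of that ops-table pattern (the struct shapes are simplified stand-ins for the mach-sa1100 ones):

```c
#include <assert.h>

struct clk;

struct clkops {
	unsigned long (*get_rate)(struct clk *clk);
};

struct clk {
	const struct clkops *ops;
};

/* Fixed-rate clock: the 3.6864 MHz oscillator feeding OSTIMER0. */
static unsigned long clk_36864_get_rate(struct clk *clk)
{
	(void)clk;
	return 3686400;
}

static const struct clkops clk_36864_ops = {
	.get_rate = clk_36864_get_rate,
};

static struct clk clk_36864 = { .ops = &clk_36864_ops };

/* Consumer-side helper, modelling clk_get_rate() dispatch. */
static unsigned long clk_get_rate_model(struct clk *clk)
{
	return clk->ops && clk->ops->get_rate ? clk->ops->get_rate(clk) : 0;
}
```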
+1 -2
arch/arm/mach-sa1100/collie.c
···
  	PPC_LDD6 | PPC_LDD7 | PPC_L_PCLK | PPC_L_LCLK | PPC_L_FCLK | PPC_L_BIAS |
  	PPC_TXD1 | PPC_TXD2 | PPC_TXD3 | PPC_TXD4 | PPC_SCLK | PPC_SFRM;

- 	PWER = _COLLIE_GPIO_AC_IN | _COLLIE_GPIO_CO | _COLLIE_GPIO_ON_KEY |
- 		_COLLIE_GPIO_WAKEUP | _COLLIE_GPIO_nREMOCON_INT | PWER_RTC;
+ 	PWER = 0;

  	PGSR = _COLLIE_GPIO_nREMOCON_ON;
+6
arch/arm/mach-sa1100/generic.c
···
  #include <mach/irqs.h>

  #include "generic.h"
+ #include <clocksource/pxa.h>

  unsigned int reset_status;
  EXPORT_SYMBOL(reset_status);
···
  void __init sa1100_map_io(void)
  {
  	iotable_init(standard_io_desc, ARRAY_SIZE(standard_io_desc));
+ }
+
+ void __init sa1100_timer_init(void)
+ {
+ 	pxa_timer_nodt_init(IRQ_OST0, io_p2v(0x90000000), 3686400);
  }

  /*
+42 -31
arch/arm/mach-sa1100/include/mach/irqs.h
···
  *
  * 2001/11/14 RMK	Cleaned up and standardised a lot of the IRQs.
  */

- #define IRQ_GPIO0		1
- #define IRQ_GPIO1		2
- #define IRQ_GPIO2		3
- #define IRQ_GPIO3		4
- #define IRQ_GPIO4		5
- #define IRQ_GPIO5		6
- #define IRQ_GPIO6		7
- #define IRQ_GPIO7		8
- #define IRQ_GPIO8		9
- #define IRQ_GPIO9		10
- #define IRQ_GPIO10		11
+ #define IRQ_GPIO0_SC		1
+ #define IRQ_GPIO1_SC		2
+ #define IRQ_GPIO2_SC		3
+ #define IRQ_GPIO3_SC		4
+ #define IRQ_GPIO4_SC		5
+ #define IRQ_GPIO5_SC		6
+ #define IRQ_GPIO6_SC		7
+ #define IRQ_GPIO7_SC		8
+ #define IRQ_GPIO8_SC		9
+ #define IRQ_GPIO9_SC		10
+ #define IRQ_GPIO10_SC		11
  #define IRQ_GPIO11_27		12
  #define IRQ_LCD		13	/* LCD controller           */
  #define IRQ_Ser0UDC		14	/* Ser. port 0 UDC          */
···
  #define IRQ_RTC1Hz		31	/* RTC 1 Hz clock           */
  #define IRQ_RTCAlrm		32	/* RTC Alarm                */

- #define IRQ_GPIO11		33
- #define IRQ_GPIO12		34
- #define IRQ_GPIO13		35
- #define IRQ_GPIO14		36
- #define IRQ_GPIO15		37
- #define IRQ_GPIO16		38
- #define IRQ_GPIO17		39
- #define IRQ_GPIO18		40
- #define IRQ_GPIO19		41
- #define IRQ_GPIO20		42
- #define IRQ_GPIO21		43
- #define IRQ_GPIO22		44
- #define IRQ_GPIO23		45
- #define IRQ_GPIO24		46
- #define IRQ_GPIO25		47
- #define IRQ_GPIO26		48
- #define IRQ_GPIO27		49
+ #define IRQ_GPIO0		33
+ #define IRQ_GPIO1		34
+ #define IRQ_GPIO2		35
+ #define IRQ_GPIO3		36
+ #define IRQ_GPIO4		37
+ #define IRQ_GPIO5		38
+ #define IRQ_GPIO6		39
+ #define IRQ_GPIO7		40
+ #define IRQ_GPIO8		41
+ #define IRQ_GPIO9		42
+ #define IRQ_GPIO10		43
+ #define IRQ_GPIO11		44
+ #define IRQ_GPIO12		45
+ #define IRQ_GPIO13		46
+ #define IRQ_GPIO14		47
+ #define IRQ_GPIO15		48
+ #define IRQ_GPIO16		49
+ #define IRQ_GPIO17		50
+ #define IRQ_GPIO18		51
+ #define IRQ_GPIO19		52
+ #define IRQ_GPIO20		53
+ #define IRQ_GPIO21		54
+ #define IRQ_GPIO22		55
+ #define IRQ_GPIO23		56
+ #define IRQ_GPIO24		57
+ #define IRQ_GPIO25		58
+ #define IRQ_GPIO26		59
+ #define IRQ_GPIO27		60

  /*
  * The next 16 interrupts are for board specific purposes.  Since
  * the kernel can only run on one machine at a time, we can re-use
  * these.  If you need more, increase IRQ_BOARD_END, but keep it
- * within sensible limits.  IRQs 49 to 64 are available.
+ * within sensible limits.  IRQs 61 to 76 are available.
  */
- #define IRQ_BOARD_START		50
- #define IRQ_BOARD_END		66
+ #define IRQ_BOARD_START		61
+ #define IRQ_BOARD_END		77

  /*
  * Figure out the MAX IRQ number.
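After the renumbering, the 27 GPIO IRQs form one contiguous block starting at 33, directly after the 32 system-controller sources (now the `_SC` names), and the 16-entry board range moves up to follow it. The relationships can be sanity-checked arithmetically:

```c
#include <assert.h>

/* New values from the hunk above. */
#define IRQ_GPIO0		33
#define IRQ_GPIO27		60
#define IRQ_BOARD_START		61
#define IRQ_BOARD_END		77
```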
+4 -199
arch/arm/mach-sa1100/irq.c
···

  static struct irq_domain *sa1100_normal_irqdomain;

- /*
-  * SA1100 GPIO edge detection for IRQs:
-  * IRQs are generated on Falling-Edge, Rising-Edge, or both.
-  * Use this instead of directly setting GRER/GFER.
-  */
- static int GPIO_IRQ_rising_edge;
- static int GPIO_IRQ_falling_edge;
- static int GPIO_IRQ_mask = (1 << 11) - 1;
-
- static int sa1100_gpio_type(struct irq_data *d, unsigned int type)
- {
- 	unsigned int mask;
-
- 	mask = BIT(d->hwirq);
-
- 	if (type == IRQ_TYPE_PROBE) {
- 		if ((GPIO_IRQ_rising_edge | GPIO_IRQ_falling_edge) & mask)
- 			return 0;
- 		type = IRQ_TYPE_EDGE_RISING | IRQ_TYPE_EDGE_FALLING;
- 	}
-
- 	if (type & IRQ_TYPE_EDGE_RISING) {
- 		GPIO_IRQ_rising_edge |= mask;
- 	} else
- 		GPIO_IRQ_rising_edge &= ~mask;
- 	if (type & IRQ_TYPE_EDGE_FALLING) {
- 		GPIO_IRQ_falling_edge |= mask;
- 	} else
- 		GPIO_IRQ_falling_edge &= ~mask;
-
- 	GRER = GPIO_IRQ_rising_edge & GPIO_IRQ_mask;
- 	GFER = GPIO_IRQ_falling_edge & GPIO_IRQ_mask;
-
- 	return 0;
- }
-
- /*
-  * GPIO IRQs must be acknowledged.
-  */
- static void sa1100_gpio_ack(struct irq_data *d)
- {
- 	GEDR = BIT(d->hwirq);
- }
-
- static int sa1100_gpio_wake(struct irq_data *d, unsigned int on)
- {
- 	if (on)
- 		PWER |= BIT(d->hwirq);
- 	else
- 		PWER &= ~BIT(d->hwirq);
- 	return 0;
- }
-
- /*
-  * This is for IRQs from 0 to 10.
-  */
- static struct irq_chip sa1100_low_gpio_chip = {
- 	.name		= "GPIO-l",
- 	.irq_ack	= sa1100_gpio_ack,
- 	.irq_mask	= sa1100_mask_irq,
- 	.irq_unmask	= sa1100_unmask_irq,
- 	.irq_set_type	= sa1100_gpio_type,
- 	.irq_set_wake	= sa1100_gpio_wake,
- };
-
- static int sa1100_low_gpio_irqdomain_map(struct irq_domain *d,
- 		unsigned int irq, irq_hw_number_t hwirq)
- {
- 	irq_set_chip_and_handler(irq, &sa1100_low_gpio_chip,
- 				 handle_edge_irq);
- 	set_irq_flags(irq, IRQF_VALID | IRQF_PROBE);
-
- 	return 0;
- }
-
- static struct irq_domain_ops sa1100_low_gpio_irqdomain_ops = {
- 	.map = sa1100_low_gpio_irqdomain_map,
- 	.xlate = irq_domain_xlate_onetwocell,
- };
-
- static struct irq_domain *sa1100_low_gpio_irqdomain;
-
- /*
-  * IRQ11 (GPIO11 through 27) handler.  We enter here with the
-  * irq_controller_lock held, and IRQs disabled.  Decode the IRQ
-  * and call the handler.
-  */
- static void
- sa1100_high_gpio_handler(unsigned int irq, struct irq_desc *desc)
- {
- 	unsigned int mask;
-
- 	mask = GEDR & 0xfffff800;
- 	do {
- 		/*
- 		 * clear down all currently active IRQ sources.
- 		 * We will be processing them all.
- 		 */
- 		GEDR = mask;
-
- 		irq = IRQ_GPIO11;
- 		mask >>= 11;
- 		do {
- 			if (mask & 1)
- 				generic_handle_irq(irq);
- 			mask >>= 1;
- 			irq++;
- 		} while (mask);
-
- 		mask = GEDR & 0xfffff800;
- 	} while (mask);
- }
-
- /*
-  * Like GPIO0 to 10, GPIO11-27 IRQs need to be handled specially.
-  * In addition, the IRQs are all collected up into one bit in the
-  * interrupt controller registers.
-  */
- static void sa1100_high_gpio_mask(struct irq_data *d)
- {
- 	unsigned int mask = BIT(d->hwirq);
-
- 	GPIO_IRQ_mask &= ~mask;
-
- 	GRER &= ~mask;
- 	GFER &= ~mask;
- }
-
- static void sa1100_high_gpio_unmask(struct irq_data *d)
- {
- 	unsigned int mask = BIT(d->hwirq);
-
- 	GPIO_IRQ_mask |= mask;
-
- 	GRER = GPIO_IRQ_rising_edge & GPIO_IRQ_mask;
- 	GFER = GPIO_IRQ_falling_edge & GPIO_IRQ_mask;
- }
-
- static struct irq_chip sa1100_high_gpio_chip = {
- 	.name		= "GPIO-h",
- 	.irq_ack	= sa1100_gpio_ack,
- 	.irq_mask	= sa1100_high_gpio_mask,
- 	.irq_unmask	= sa1100_high_gpio_unmask,
- 	.irq_set_type	= sa1100_gpio_type,
- 	.irq_set_wake	= sa1100_gpio_wake,
- };
-
- static int sa1100_high_gpio_irqdomain_map(struct irq_domain *d,
- 		unsigned int irq, irq_hw_number_t hwirq)
- {
- 	irq_set_chip_and_handler(irq, &sa1100_high_gpio_chip,
- 				 handle_edge_irq);
- 	set_irq_flags(irq, IRQF_VALID | IRQF_PROBE);
-
- 	return 0;
- }
-
- static struct irq_domain_ops sa1100_high_gpio_irqdomain_ops = {
- 	.map = sa1100_high_gpio_irqdomain_map,
- 	.xlate = irq_domain_xlate_onetwocell,
- };
-
- static struct irq_domain *sa1100_high_gpio_irqdomain;
-
  static struct resource irq_resource =
  	DEFINE_RES_MEM_NAMED(0x90050000, SZ_64K, "irqs");
···
  		IC_GPIO6|IC_GPIO5|IC_GPIO4|IC_GPIO3|IC_GPIO2|
  		IC_GPIO1|IC_GPIO0);

- 	/*
- 	 * Set the appropriate edges for wakeup.
- 	 */
- 	GRER = PWER & GPIO_IRQ_rising_edge;
- 	GFER = PWER & GPIO_IRQ_falling_edge;
-
- 	/*
- 	 * Clear any pending GPIO interrupts.
- 	 */
- 	GEDR = GEDR;
-
  	return 0;
  }
···
  	if (st->saved) {
  		ICCR = st->iccr;
  		ICLR = st->iclr;
-
- 		GRER = GPIO_IRQ_rising_edge & GPIO_IRQ_mask;
- 		GFER = GPIO_IRQ_falling_edge & GPIO_IRQ_mask;

  		ICMR = st->icmr;
  	}
···
  		if (mask == 0)
  			break;

- 		handle_IRQ(ffs(mask) - 1 + IRQ_GPIO0, regs);
+ 		handle_domain_irq(sa1100_normal_irqdomain,
+ 				ffs(mask) - 1, regs);
  	} while (1);
  }
···
  	/* all IRQs are IRQ, not FIQ */
  	ICLR = 0;

- 	/* clear all GPIO edge detects */
- 	GFER = 0;
- 	GRER = 0;
- 	GEDR = -1;

  	/*
  	 * Whatever the doc says, this has to be set for the wait-on-irq
  	 * instruction to work... on a SA1100 rev 9 at least.
  	 */
  	ICCR = 1;

- 	sa1100_low_gpio_irqdomain = irq_domain_add_legacy(NULL,
- 			11, IRQ_GPIO0, 0,
- 			&sa1100_low_gpio_irqdomain_ops, NULL);
-
- 	sa1100_normal_irqdomain = irq_domain_add_legacy(NULL,
- 			21, IRQ_GPIO11_27, 11,
+ 	sa1100_normal_irqdomain = irq_domain_add_simple(NULL,
+ 			32, IRQ_GPIO0_SC,
  			&sa1100_normal_irqdomain_ops, NULL);
-
- 	sa1100_high_gpio_irqdomain = irq_domain_add_legacy(NULL,
- 			17, IRQ_GPIO11, 11,
- 			&sa1100_high_gpio_irqdomain_ops, NULL);
-
- 	/*
- 	 * Install handler for GPIO 11-27 edge detect interrupts
- 	 */
- 	irq_set_chained_handler(IRQ_GPIO11_27, sa1100_high_gpio_handler);

  	set_handle_irq(sa1100_handle_irq);
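The retained dispatch loop scans the pending mask and hands the lowest set bit to the domain via `handle_domain_irq(..., ffs(mask) - 1, ...)`, repeating until the mask drains. A C model of that lowest-first decode order, with a recorder standing in for the domain dispatch:

```c
#include <assert.h>

/* Portable ffs: 1-based index of the least-significant set bit, 0 if none. */
static int ffs_model(unsigned int x)
{
	int i;

	if (!x)
		return 0;
	for (i = 1; !(x & 1); i++)
		x >>= 1;
	return i;
}

/* Dispatch pending hwirqs lowest-first, as the handler's loop does. */
static int decode_pending(unsigned int mask, int *out)
{
	int n = 0;

	while (mask) {
		int hwirq = ffs_model(mask) - 1;

		out[n++] = hwirq;	/* stands in for handle_domain_irq() */
		mask &= ~(1u << hwirq);
	}
	return n;
}
```

Passing the raw hardware bit number (rather than a Linux IRQ number) is what lets the single 32-source irqdomain do the hwirq-to-virq translation.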
+1
arch/arm/mach-sa1100/pm.c
···
  	/*
  	 * Ensure not to come back here if it wasn't intended
  	 */
+ 	RCSR = RCSR_SMR;
  	PSPR = 0;

  	/*
-139
arch/arm/mach-sa1100/time.c
···
-/*
- * linux/arch/arm/mach-sa1100/time.c
- *
- * Copyright (C) 1998 Deborah Wallach.
- * Twiddles (C) 1999 Hugo Fiennes <hugo@empeg.com>
- *
- * 2000/03/29 (C) Nicolas Pitre <nico@fluxnic.net>
- *	Rewritten: big cleanup, much simpler, better HZ accuracy.
- *
- */
-#include <linux/init.h>
-#include <linux/kernel.h>
-#include <linux/errno.h>
-#include <linux/interrupt.h>
-#include <linux/irq.h>
-#include <linux/timex.h>
-#include <linux/clockchips.h>
-#include <linux/sched_clock.h>
-
-#include <asm/mach/time.h>
-#include <mach/hardware.h>
-#include <mach/irqs.h>
-
-#define SA1100_CLOCK_FREQ 3686400
-#define SA1100_LATCH DIV_ROUND_CLOSEST(SA1100_CLOCK_FREQ, HZ)
-
-static u64 notrace sa1100_read_sched_clock(void)
-{
-	return readl_relaxed(OSCR);
-}
-
-#define MIN_OSCR_DELTA 2
-
-static irqreturn_t sa1100_ost0_interrupt(int irq, void *dev_id)
-{
-	struct clock_event_device *c = dev_id;
-
-	/* Disarm the compare/match, signal the event. */
-	writel_relaxed(readl_relaxed(OIER) & ~OIER_E0, OIER);
-	writel_relaxed(OSSR_M0, OSSR);
-	c->event_handler(c);
-
-	return IRQ_HANDLED;
-}
-
-static int
-sa1100_osmr0_set_next_event(unsigned long delta, struct clock_event_device *c)
-{
-	unsigned long next, oscr;
-
-	writel_relaxed(readl_relaxed(OIER) | OIER_E0, OIER);
-	next = readl_relaxed(OSCR) + delta;
-	writel_relaxed(next, OSMR0);
-	oscr = readl_relaxed(OSCR);
-
-	return (signed)(next - oscr) <= MIN_OSCR_DELTA ? -ETIME : 0;
-}
-
-static void
-sa1100_osmr0_set_mode(enum clock_event_mode mode, struct clock_event_device *c)
-{
-	switch (mode) {
-	case CLOCK_EVT_MODE_ONESHOT:
-	case CLOCK_EVT_MODE_UNUSED:
-	case CLOCK_EVT_MODE_SHUTDOWN:
-		writel_relaxed(readl_relaxed(OIER) & ~OIER_E0, OIER);
-		writel_relaxed(OSSR_M0, OSSR);
-		break;
-
-	case CLOCK_EVT_MODE_RESUME:
-	case CLOCK_EVT_MODE_PERIODIC:
-		break;
-	}
-}
-
-#ifdef CONFIG_PM
-unsigned long osmr[4], oier;
-
-static void sa1100_timer_suspend(struct clock_event_device *cedev)
-{
-	osmr[0] = readl_relaxed(OSMR0);
-	osmr[1] = readl_relaxed(OSMR1);
-	osmr[2] = readl_relaxed(OSMR2);
-	osmr[3] = readl_relaxed(OSMR3);
-	oier = readl_relaxed(OIER);
-}
-
-static void sa1100_timer_resume(struct clock_event_device *cedev)
-{
-	writel_relaxed(0x0f, OSSR);
-	writel_relaxed(osmr[0], OSMR0);
-	writel_relaxed(osmr[1], OSMR1);
-	writel_relaxed(osmr[2], OSMR2);
-	writel_relaxed(osmr[3], OSMR3);
-	writel_relaxed(oier, OIER);
-
-	/*
-	 * OSMR0 is the system timer: make sure OSCR is sufficiently behind
-	 */
-	writel_relaxed(OSMR0 - SA1100_LATCH, OSCR);
-}
-#else
-#define sa1100_timer_suspend NULL
-#define sa1100_timer_resume NULL
-#endif
-
-static struct clock_event_device ckevt_sa1100_osmr0 = {
-	.name		= "osmr0",
-	.features	= CLOCK_EVT_FEAT_ONESHOT,
-	.rating		= 200,
-	.set_next_event	= sa1100_osmr0_set_next_event,
-	.set_mode	= sa1100_osmr0_set_mode,
-	.suspend	= sa1100_timer_suspend,
-	.resume		= sa1100_timer_resume,
-};
-
-static struct irqaction sa1100_timer_irq = {
-	.name		= "ost0",
-	.flags		= IRQF_TIMER | IRQF_IRQPOLL,
-	.handler	= sa1100_ost0_interrupt,
-	.dev_id		= &ckevt_sa1100_osmr0,
-};
-
-void __init sa1100_timer_init(void)
-{
-	writel_relaxed(0, OIER);
-	writel_relaxed(OSSR_M0 | OSSR_M1 | OSSR_M2 | OSSR_M3, OSSR);
-
-	sched_clock_register(sa1100_read_sched_clock, 32, 3686400);
-
-	ckevt_sa1100_osmr0.cpumask = cpumask_of(0);
-
-	setup_irq(IRQ_OST0, &sa1100_timer_irq);
-
-	clocksource_mmio_init(OSCR, "oscr", SA1100_CLOCK_FREQ, 200, 32,
-		clocksource_mmio_readl_up);
-	clockevents_config_and_register(&ckevt_sa1100_osmr0, 3686400,
-		MIN_OSCR_DELTA * 2, 0x7fffffff);
-}
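The deleted sa1100_osmr0_set_next_event() decided whether the match register was programmed too close to (or already behind) the free-running counter with `(signed)(next - oscr) <= MIN_OSCR_DELTA`. Casting the unsigned 32-bit difference to signed keeps the comparison correct even when the counter wraps past 0xffffffff. A standalone sketch of just that check:

```c
#include <stdint.h>

#define MIN_OSCR_DELTA 2

/* Return nonzero when the programmed match value 'next' is too close
 * to (or behind) the current counter 'oscr', i.e. the event may have
 * been missed and -ETIME should be returned.  The signed cast makes
 * the test wrap-safe on a 32-bit free-running counter. */
static int match_missed(uint32_t next, uint32_t oscr)
{
	return (int32_t)(next - oscr) <= MIN_OSCR_DELTA;
}
```

Note the wrap case: a match just after 0 is still "in the future" relative to a counter near 0xffffffff, because the subtraction wraps to a small positive signed value.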
+231 -210
arch/arm/mm/cache-l2x0.c
···
 	void (*enable)(void __iomem *, u32, unsigned);
 	void (*fixup)(void __iomem *, u32, struct outer_cache_fns *);
 	void (*save)(void __iomem *);
+	void (*configure)(void __iomem *);
 	struct outer_cache_fns outer_cache;
 };
 
 #define CACHE_LINE_SIZE 32
 
 static void __iomem *l2x0_base;
+static const struct l2c_init_data *l2x0_data;
 static DEFINE_RAW_SPINLOCK(l2x0_lock);
 static u32 l2x0_way_mask;	/* Bitmask of active ways */
 static u32 l2x0_size;
···
 	}
 }
 
+static void l2c_configure(void __iomem *base)
+{
+	if (outer_cache.configure) {
+		outer_cache.configure(&l2x0_saved_regs);
+		return;
+	}
+
+	if (l2x0_data->configure)
+		l2x0_data->configure(base);
+
+	l2c_write_sec(l2x0_saved_regs.aux_ctrl, base, L2X0_AUX_CTRL);
+}
+
 /*
  * Enable the L2 cache controller.  This function must only be
  * called when the cache controller is known to be disabled.
···
 {
 	unsigned long flags;
 
-	l2c_write_sec(aux, base, L2X0_AUX_CTRL);
+	/* Do not touch the controller if already enabled. */
+	if (readl_relaxed(base + L2X0_CTRL) & L2X0_CTRL_EN)
+		return;
+
+	l2x0_saved_regs.aux_ctrl = aux;
+	l2c_configure(base);
 
 	l2c_unlock(base, num_lock);
···
 	dsb(st);
 }
 
-#ifdef CONFIG_CACHE_PL310
-static inline void cache_wait(void __iomem *reg, unsigned long mask)
-{
-	/* cache operations by line are atomic on PL310 */
-}
-#else
-#define cache_wait	l2c_wait_mask
-#endif
-
-static inline void cache_sync(void)
-{
-	void __iomem *base = l2x0_base;
-
-	writel_relaxed(0, base + sync_reg_offset);
-	cache_wait(base + L2X0_CACHE_SYNC, 1);
-}
-
-#if defined(CONFIG_PL310_ERRATA_588369) || defined(CONFIG_PL310_ERRATA_727915)
-static inline void debug_writel(unsigned long val)
-{
-	l2c_set_debug(l2x0_base, val);
-}
-#else
-/* Optimised out for non-errata case */
-static inline void debug_writel(unsigned long val)
-{
-}
-#endif
-
-static void l2x0_cache_sync(void)
-{
-	unsigned long flags;
-
-	raw_spin_lock_irqsave(&l2x0_lock, flags);
-	cache_sync();
-	raw_spin_unlock_irqrestore(&l2x0_lock, flags);
-}
-
-static void __l2x0_flush_all(void)
-{
-	debug_writel(0x03);
-	__l2c_op_way(l2x0_base + L2X0_CLEAN_INV_WAY);
-	cache_sync();
-	debug_writel(0x00);
-}
-
-static void l2x0_flush_all(void)
-{
-	unsigned long flags;
-
-	/* clean all ways */
-	raw_spin_lock_irqsave(&l2x0_lock, flags);
-	__l2x0_flush_all();
-	raw_spin_unlock_irqrestore(&l2x0_lock, flags);
-}
-
-static void l2x0_disable(void)
-{
-	unsigned long flags;
-
-	raw_spin_lock_irqsave(&l2x0_lock, flags);
-	__l2x0_flush_all();
-	l2c_write_sec(0, l2x0_base, L2X0_CTRL);
-	dsb(st);
-	raw_spin_unlock_irqrestore(&l2x0_lock, flags);
-}
-
 static void l2c_save(void __iomem *base)
 {
 	l2x0_saved_regs.aux_ctrl = readl_relaxed(l2x0_base + L2X0_AUX_CTRL);
+}
+
+static void l2c_resume(void)
+{
+	l2c_enable(l2x0_base, l2x0_saved_regs.aux_ctrl, l2x0_data->num_lock);
 }
 
 /*
···
 	__l2c210_cache_sync(l2x0_base);
 }
 
-static void l2c210_resume(void)
-{
-	void __iomem *base = l2x0_base;
-
-	if (!(readl_relaxed(base + L2X0_CTRL) & L2X0_CTRL_EN))
-		l2c_enable(base, l2x0_saved_regs.aux_ctrl, 1);
-}
-
 static const struct l2c_init_data l2c210_data __initconst = {
 	.type = "L2C-210",
 	.way_size_0 = SZ_8K,
···
 		.flush_all = l2c210_flush_all,
 		.disable = l2c_disable,
 		.sync = l2c210_sync,
-		.resume = l2c210_resume,
+		.resume = l2c_resume,
 	},
 };
 
···
 		.flush_all = l2c220_flush_all,
 		.disable = l2c_disable,
 		.sync = l2c220_sync,
-		.resume = l2c210_resume,
+		.resume = l2c_resume,
 	},
 };
 
···
 			      L310_POWER_CTRL);
 }
 
-static void l2c310_resume(void)
+static void l2c310_configure(void __iomem *base)
 {
-	void __iomem *base = l2x0_base;
+	unsigned revision;
 
-	if (!(readl_relaxed(base + L2X0_CTRL) & L2X0_CTRL_EN)) {
-		unsigned revision;
+	/* restore pl310 setup */
+	l2c_write_sec(l2x0_saved_regs.tag_latency, base,
+		      L310_TAG_LATENCY_CTRL);
+	l2c_write_sec(l2x0_saved_regs.data_latency, base,
+		      L310_DATA_LATENCY_CTRL);
+	l2c_write_sec(l2x0_saved_regs.filter_end, base,
+		      L310_ADDR_FILTER_END);
+	l2c_write_sec(l2x0_saved_regs.filter_start, base,
+		      L310_ADDR_FILTER_START);
 
-		/* restore pl310 setup */
-		writel_relaxed(l2x0_saved_regs.tag_latency,
-			       base + L310_TAG_LATENCY_CTRL);
-		writel_relaxed(l2x0_saved_regs.data_latency,
-			       base + L310_DATA_LATENCY_CTRL);
-		writel_relaxed(l2x0_saved_regs.filter_end,
-			       base + L310_ADDR_FILTER_END);
-		writel_relaxed(l2x0_saved_regs.filter_start,
-			       base + L310_ADDR_FILTER_START);
+	revision = readl_relaxed(base + L2X0_CACHE_ID) &
+				 L2X0_CACHE_ID_RTL_MASK;
 
-		revision = readl_relaxed(base + L2X0_CACHE_ID) &
-				L2X0_CACHE_ID_RTL_MASK;
-
-		if (revision >= L310_CACHE_ID_RTL_R2P0)
-			l2c_write_sec(l2x0_saved_regs.prefetch_ctrl, base,
-				      L310_PREFETCH_CTRL);
-		if (revision >= L310_CACHE_ID_RTL_R3P0)
-			l2c_write_sec(l2x0_saved_regs.pwr_ctrl, base,
-				      L310_POWER_CTRL);
-
-		l2c_enable(base, l2x0_saved_regs.aux_ctrl, 8);
-
-		/* Re-enable full-line-of-zeros for Cortex-A9 */
-		if (l2x0_saved_regs.aux_ctrl & L310_AUX_CTRL_FULL_LINE_ZERO)
-			set_auxcr(get_auxcr() | BIT(3) | BIT(2) | BIT(1));
-	}
+	if (revision >= L310_CACHE_ID_RTL_R2P0)
+		l2c_write_sec(l2x0_saved_regs.prefetch_ctrl, base,
+			      L310_PREFETCH_CTRL);
+	if (revision >= L310_CACHE_ID_RTL_R3P0)
+		l2c_write_sec(l2x0_saved_regs.pwr_ctrl, base,
+			      L310_POWER_CTRL);
 }
 
 static int l2c310_cpu_enable_flz(struct notifier_block *nb, unsigned long act, void *data)
···
 		aux &= ~(L310_AUX_CTRL_FULL_LINE_ZERO | L310_AUX_CTRL_EARLY_BRESP);
 	}
 
+	/* r3p0 or later has power control register */
+	if (rev >= L310_CACHE_ID_RTL_R3P0)
+		l2x0_saved_regs.pwr_ctrl = L310_DYNAMIC_CLK_GATING_EN |
+					   L310_STNDBY_MODE_EN;
+
+	/*
+	 * Always enable non-secure access to the lockdown registers -
+	 * we write to them as part of the L2C enable sequence so they
+	 * need to be accessible.
+	 */
+	aux |= L310_AUX_CTRL_NS_LOCKDOWN;
+
+	l2c_enable(base, aux, num_lock);
+
+	/* Read back resulting AUX_CTRL value as it could have been altered. */
+	aux = readl_relaxed(base + L2X0_AUX_CTRL);
+
 	if (aux & (L310_AUX_CTRL_DATA_PREFETCH | L310_AUX_CTRL_INSTR_PREFETCH)) {
 		u32 prefetch = readl_relaxed(base + L310_PREFETCH_CTRL);
 
···
 	if (rev >= L310_CACHE_ID_RTL_R3P0) {
 		u32 power_ctrl;
 
-		l2c_write_sec(L310_DYNAMIC_CLK_GATING_EN | L310_STNDBY_MODE_EN,
-			      base, L310_POWER_CTRL);
 		power_ctrl = readl_relaxed(base + L310_POWER_CTRL);
 		pr_info("L2C-310 dynamic clock gating %sabled, standby mode %sabled\n",
 			power_ctrl & L310_DYNAMIC_CLK_GATING_EN ? "en" : "dis",
 			power_ctrl & L310_STNDBY_MODE_EN ? "en" : "dis");
 	}
-
-	/*
-	 * Always enable non-secure access to the lockdown registers -
-	 * we write to them as part of the L2C enable sequence so they
-	 * need to be accessible.
-	 */
-	aux |= L310_AUX_CTRL_NS_LOCKDOWN;
-
-	l2c_enable(base, aux, num_lock);
 
 	if (aux & L310_AUX_CTRL_FULL_LINE_ZERO) {
 		set_auxcr(get_auxcr() | BIT(3) | BIT(2) | BIT(1));
···
 
 	if (revision >= L310_CACHE_ID_RTL_R3P0 &&
 	    revision < L310_CACHE_ID_RTL_R3P2) {
-		u32 val = readl_relaxed(base + L310_PREFETCH_CTRL);
+		u32 val = l2x0_saved_regs.prefetch_ctrl;
 		/* I don't think bit23 is required here... but iMX6 does so */
 		if (val & (BIT(30) | BIT(23))) {
 			val &= ~(BIT(30) | BIT(23));
-			l2c_write_sec(val, base, L310_PREFETCH_CTRL);
+			l2x0_saved_regs.prefetch_ctrl = val;
 			errata[n++] = "752271";
 		}
 	}
···
 		l2c_disable();
 }
 
+static void l2c310_resume(void)
+{
+	l2c_resume();
+
+	/* Re-enable full-line-of-zeros for Cortex-A9 */
+	if (l2x0_saved_regs.aux_ctrl & L310_AUX_CTRL_FULL_LINE_ZERO)
+		set_auxcr(get_auxcr() | BIT(3) | BIT(2) | BIT(1));
+}
+
 static const struct l2c_init_data l2c310_init_fns __initconst = {
 	.type = "L2C-310",
 	.way_size_0 = SZ_8K,
···
 	.enable = l2c310_enable,
 	.fixup = l2c310_fixup,
 	.save = l2c310_save,
+	.configure = l2c310_configure,
 	.outer_cache = {
 		.inv_range = l2c210_inv_range,
 		.clean_range = l2c210_clean_range,
···
 	},
 };
 
-static void __init __l2c_init(const struct l2c_init_data *data,
-			      u32 aux_val, u32 aux_mask, u32 cache_id)
+static int __init __l2c_init(const struct l2c_init_data *data,
+			     u32 aux_val, u32 aux_mask, u32 cache_id)
 {
 	struct outer_cache_fns fns;
 	unsigned way_size_bits, ways;
 	u32 aux, old_aux;
+
+	/*
+	 * Save the pointer globally so that callbacks which do not receive
+	 * context from callers can access the structure.
+	 */
+	l2x0_data = kmemdup(data, sizeof(*data), GFP_KERNEL);
+	if (!l2x0_data)
+		return -ENOMEM;
 
 	/*
 	 * Sanity check the aux values.  aux_mask is the bits we preserve
···
 
 	fns = data->outer_cache;
 	fns.write_sec = outer_cache.write_sec;
+	fns.configure = outer_cache.configure;
 	if (data->fixup)
 		data->fixup(l2x0_base, cache_id, &fns);
 
···
 		data->type, ways, l2x0_size >> 10);
 	pr_info("%s: CACHE_ID 0x%08x, AUX_CTRL 0x%08x\n",
 		data->type, cache_id, aux);
+
+	return 0;
 }
 
 void __init l2x0_init(void __iomem *base, u32 aux_val, u32 aux_mask)
···
 		data = &l2c310_init_fns;
 		break;
 	}
+
+	/* Read back current (default) hardware configuration */
+	if (data->save)
+		data->save(l2x0_base);
 
 	__l2c_init(data, aux_val, aux_mask, cache_id);
 }
···
 		.flush_all = l2c210_flush_all,
 		.disable = l2c_disable,
 		.sync = l2c210_sync,
-		.resume = l2c210_resume,
+		.resume = l2c_resume,
 	},
 };
 
···
 		.flush_all = l2c220_flush_all,
 		.disable = l2c_disable,
 		.sync = l2c220_sync,
-		.resume = l2c210_resume,
+		.resume = l2c_resume,
 	},
 };
 
···
 	u32 tag[3] = { 0, 0, 0 };
 	u32 filter[2] = { 0, 0 };
 	u32 assoc;
+	u32 prefetch;
+	u32 val;
 	int ret;
 
 	of_property_read_u32_array(np, "arm,tag-latency", tag, ARRAY_SIZE(tag));
 	if (tag[0] && tag[1] && tag[2])
-		writel_relaxed(
+		l2x0_saved_regs.tag_latency =
 			L310_LATENCY_CTRL_RD(tag[0] - 1) |
 			L310_LATENCY_CTRL_WR(tag[1] - 1) |
-			L310_LATENCY_CTRL_SETUP(tag[2] - 1),
-			l2x0_base + L310_TAG_LATENCY_CTRL);
+			L310_LATENCY_CTRL_SETUP(tag[2] - 1);
 
 	of_property_read_u32_array(np, "arm,data-latency",
 				   data, ARRAY_SIZE(data));
 	if (data[0] && data[1] && data[2])
-		writel_relaxed(
+		l2x0_saved_regs.data_latency =
 			L310_LATENCY_CTRL_RD(data[0] - 1) |
 			L310_LATENCY_CTRL_WR(data[1] - 1) |
-			L310_LATENCY_CTRL_SETUP(data[2] - 1),
-			l2x0_base + L310_DATA_LATENCY_CTRL);
+			L310_LATENCY_CTRL_SETUP(data[2] - 1);
 
 	of_property_read_u32_array(np, "arm,filter-ranges",
 				   filter, ARRAY_SIZE(filter));
 	if (filter[1]) {
-		writel_relaxed(ALIGN(filter[0] + filter[1], SZ_1M),
-			       l2x0_base + L310_ADDR_FILTER_END);
-		writel_relaxed((filter[0] & ~(SZ_1M - 1)) | L310_ADDR_FILTER_EN,
-			       l2x0_base + L310_ADDR_FILTER_START);
+		l2x0_saved_regs.filter_end =
+			ALIGN(filter[0] + filter[1], SZ_1M);
+		l2x0_saved_regs.filter_start = (filter[0] & ~(SZ_1M - 1))
+			| L310_ADDR_FILTER_EN;
 	}
 
 	ret = l2x0_cache_size_of_parse(np, aux_val, aux_mask, &assoc, SZ_512K);
···
 			assoc);
 		break;
 	}
+
+	prefetch = l2x0_saved_regs.prefetch_ctrl;
+
+	ret = of_property_read_u32(np, "arm,double-linefill", &val);
+	if (ret == 0) {
+		if (val)
+			prefetch |= L310_PREFETCH_CTRL_DBL_LINEFILL;
+		else
+			prefetch &= ~L310_PREFETCH_CTRL_DBL_LINEFILL;
+	} else if (ret != -EINVAL) {
+		pr_err("L2C-310 OF arm,double-linefill property value is missing\n");
+	}
+
+	ret = of_property_read_u32(np, "arm,double-linefill-incr", &val);
+	if (ret == 0) {
+		if (val)
+			prefetch |= L310_PREFETCH_CTRL_DBL_LINEFILL_INCR;
+		else
+			prefetch &= ~L310_PREFETCH_CTRL_DBL_LINEFILL_INCR;
+	} else if (ret != -EINVAL) {
+		pr_err("L2C-310 OF arm,double-linefill-incr property value is missing\n");
+	}
+
+	ret = of_property_read_u32(np, "arm,double-linefill-wrap", &val);
+	if (ret == 0) {
+		if (!val)
+			prefetch |= L310_PREFETCH_CTRL_DBL_LINEFILL_WRAP;
+		else
+			prefetch &= ~L310_PREFETCH_CTRL_DBL_LINEFILL_WRAP;
+	} else if (ret != -EINVAL) {
+		pr_err("L2C-310 OF arm,double-linefill-wrap property value is missing\n");
+	}
+
+	ret = of_property_read_u32(np, "arm,prefetch-drop", &val);
+	if (ret == 0) {
+		if (val)
+			prefetch |= L310_PREFETCH_CTRL_PREFETCH_DROP;
+		else
+			prefetch &= ~L310_PREFETCH_CTRL_PREFETCH_DROP;
+	} else if (ret != -EINVAL) {
+		pr_err("L2C-310 OF arm,prefetch-drop property value is missing\n");
+	}
+
+	ret = of_property_read_u32(np, "arm,prefetch-offset", &val);
+	if (ret == 0) {
+		prefetch &= ~L310_PREFETCH_CTRL_OFFSET_MASK;
+		prefetch |= val & L310_PREFETCH_CTRL_OFFSET_MASK;
+	} else if (ret != -EINVAL) {
+		pr_err("L2C-310 OF arm,prefetch-offset property value is missing\n");
+	}
+
+	l2x0_saved_regs.prefetch_ctrl = prefetch;
 }
 
 static const struct l2c_init_data of_l2c310_data __initconst = {
···
 	.enable = l2c310_enable,
 	.fixup = l2c310_fixup,
 	.save = l2c310_save,
+	.configure = l2c310_configure,
 	.outer_cache = {
 		.inv_range = l2c210_inv_range,
 		.clean_range = l2c210_clean_range,
···
 	.enable = l2c310_enable,
 	.fixup = l2c310_fixup,
 	.save = l2c310_save,
+	.configure = l2c310_configure,
 	.outer_cache = {
 		.inv_range = l2c210_inv_range,
 		.clean_range = l2c210_clean_range,
···
  * noninclusive, while the hardware cache range operations use
  * inclusive start and end addresses.
  */
-static unsigned long calc_range_end(unsigned long start, unsigned long end)
+static unsigned long aurora_range_end(unsigned long start, unsigned long end)
 {
 	/*
 	 * Limit the number of cache lines processed at once,
···
 	return end;
 }
 
-/*
- * Make sure 'start' and 'end' reference the same page, as L2 is PIPT
- * and range operations only do a TLB lookup on the start address.
- */
 static void aurora_pa_range(unsigned long start, unsigned long end,
-			unsigned long offset)
+			    unsigned long offset)
 {
+	void __iomem *base = l2x0_base;
+	unsigned long range_end;
 	unsigned long flags;
 
-	raw_spin_lock_irqsave(&l2x0_lock, flags);
-	writel_relaxed(start, l2x0_base + AURORA_RANGE_BASE_ADDR_REG);
-	writel_relaxed(end, l2x0_base + offset);
-	raw_spin_unlock_irqrestore(&l2x0_lock, flags);
-
-	cache_sync();
-}
-
-static void aurora_inv_range(unsigned long start, unsigned long end)
-{
 	/*
 	 * round start and end adresses up to cache line size
 	 */
···
 	end = ALIGN(end, CACHE_LINE_SIZE);
 
 	/*
-	 * Invalidate all full cache lines between 'start' and 'end'.
+	 * perform operation on all full cache lines between 'start' and 'end'
 	 */
 	while (start < end) {
-		unsigned long range_end = calc_range_end(start, end);
-		aurora_pa_range(start, range_end - CACHE_LINE_SIZE,
-				AURORA_INVAL_RANGE_REG);
+		range_end = aurora_range_end(start, end);
+
+		raw_spin_lock_irqsave(&l2x0_lock, flags);
+		writel_relaxed(start, base + AURORA_RANGE_BASE_ADDR_REG);
+		writel_relaxed(range_end - CACHE_LINE_SIZE, base + offset);
+		raw_spin_unlock_irqrestore(&l2x0_lock, flags);
+
+		writel_relaxed(0, base + AURORA_SYNC_REG);
 		start = range_end;
 	}
+}
+
+static void aurora_inv_range(unsigned long start, unsigned long end)
+{
+	aurora_pa_range(start, end, AURORA_INVAL_RANGE_REG);
 }
 
 static void aurora_clean_range(unsigned long start, unsigned long end)
···
 	 * If L2 is forced to WT, the L2 will always be clean and we
 	 * don't need to do anything here.
 	 */
-	if (!l2_wt_override) {
-		start &= ~(CACHE_LINE_SIZE - 1);
-		end = ALIGN(end, CACHE_LINE_SIZE);
-		while (start != end) {
-			unsigned long range_end = calc_range_end(start, end);
-			aurora_pa_range(start, range_end - CACHE_LINE_SIZE,
-					AURORA_CLEAN_RANGE_REG);
-			start = range_end;
-		}
-	}
+	if (!l2_wt_override)
+		aurora_pa_range(start, end, AURORA_CLEAN_RANGE_REG);
 }
 
 static void aurora_flush_range(unsigned long start, unsigned long end)
 {
-	start &= ~(CACHE_LINE_SIZE - 1);
-	end = ALIGN(end, CACHE_LINE_SIZE);
-	while (start != end) {
-		unsigned long range_end = calc_range_end(start, end);
-		/*
-		 * If L2 is forced to WT, the L2 will always be clean and we
-		 * just need to invalidate.
-		 */
-		if (l2_wt_override)
-			aurora_pa_range(start, range_end - CACHE_LINE_SIZE,
-					AURORA_INVAL_RANGE_REG);
-		else
-			aurora_pa_range(start, range_end - CACHE_LINE_SIZE,
-					AURORA_FLUSH_RANGE_REG);
-		start = range_end;
-	}
+	if (l2_wt_override)
+		aurora_pa_range(start, end, AURORA_INVAL_RANGE_REG);
+	else
+		aurora_pa_range(start, end, AURORA_FLUSH_RANGE_REG);
+}
+
+static void aurora_flush_all(void)
+{
+	void __iomem *base = l2x0_base;
+	unsigned long flags;
+
+	/* clean all ways */
+	raw_spin_lock_irqsave(&l2x0_lock, flags);
+	__l2c_op_way(base + L2X0_CLEAN_INV_WAY);
+	raw_spin_unlock_irqrestore(&l2x0_lock, flags);
+
+	writel_relaxed(0, base + AURORA_SYNC_REG);
+}
+
+static void aurora_cache_sync(void)
+{
+	writel_relaxed(0, l2x0_base + AURORA_SYNC_REG);
+}
+
+static void aurora_disable(void)
+{
+	void __iomem *base = l2x0_base;
+	unsigned long flags;
+
+	raw_spin_lock_irqsave(&l2x0_lock, flags);
+	__l2c_op_way(base + L2X0_CLEAN_INV_WAY);
+	writel_relaxed(0, base + AURORA_SYNC_REG);
+	l2c_write_sec(0, base, L2X0_CTRL);
+	dsb(st);
+	raw_spin_unlock_irqrestore(&l2x0_lock, flags);
 }
 
 static void aurora_save(void __iomem *base)
 {
 	l2x0_saved_regs.ctrl = readl_relaxed(base + L2X0_CTRL);
 	l2x0_saved_regs.aux_ctrl = readl_relaxed(base + L2X0_AUX_CTRL);
-}
-
-static void aurora_resume(void)
-{
-	void __iomem *base = l2x0_base;
-
-	if (!(readl(base + L2X0_CTRL) & L2X0_CTRL_EN)) {
-		writel_relaxed(l2x0_saved_regs.aux_ctrl, base + L2X0_AUX_CTRL);
-		writel_relaxed(l2x0_saved_regs.ctrl, base + L2X0_CTRL);
-	}
 }
 
 /*
···
 		.inv_range = aurora_inv_range,
 		.clean_range = aurora_clean_range,
 		.flush_range = aurora_flush_range,
-		.flush_all = l2x0_flush_all,
-		.disable = l2x0_disable,
-		.sync = l2x0_cache_sync,
-		.resume = aurora_resume,
+		.flush_all = aurora_flush_all,
+		.disable = aurora_disable,
+		.sync = aurora_cache_sync,
+		.resume = l2c_resume,
 	},
 };
 
···
 	.fixup = aurora_fixup,
 	.save = aurora_save,
 	.outer_cache = {
-		.resume = aurora_resume,
+		.resume = l2c_resume,
 	},
 };
 
···
 	.of_parse = l2c310_of_parse,
 	.enable = l2c310_enable,
 	.save = l2c310_save,
+	.configure = l2c310_configure,
 	.outer_cache = {
 		.inv_range = bcm_inv_range,
 		.clean_range = bcm_clean_range,
···
 		readl_relaxed(base + L310_PREFETCH_CTRL);
 }
 
-static void tauros3_resume(void)
+static void tauros3_configure(void __iomem *base)
 {
-	void __iomem *base = l2x0_base;
-
-	if (!(readl_relaxed(base + L2X0_CTRL) & L2X0_CTRL_EN)) {
-		writel_relaxed(l2x0_saved_regs.aux2_ctrl,
-			       base + TAUROS3_AUX2_CTRL);
-		writel_relaxed(l2x0_saved_regs.prefetch_ctrl,
-			       base + L310_PREFETCH_CTRL);
-
-		l2c_enable(base, l2x0_saved_regs.aux_ctrl, 8);
-	}
+	writel_relaxed(l2x0_saved_regs.aux2_ctrl,
+		       base + TAUROS3_AUX2_CTRL);
+	writel_relaxed(l2x0_saved_regs.prefetch_ctrl,
+		       base + L310_PREFETCH_CTRL);
 }
 
 static const struct l2c_init_data of_tauros3_data __initconst = {
···
 	.num_lock = 8,
 	.enable = l2c_enable,
 	.save = tauros3_save,
+	.configure = tauros3_configure,
 	/* Tauros3 broadcasts L1 cache operations to L2 */
 	.outer_cache = {
-		.resume = tauros3_resume,
+		.resume = l2c_resume,
 	},
 };
 
···
 	if (!of_property_read_bool(np, "cache-unified"))
 		pr_err("L2C: device tree omits to specify unified cache\n");
 
+	/* Read back current (default) hardware configuration */
+	if (data->save)
+		data->save(l2x0_base);
+
 	/* L2 configuration can only be changed if the cache is disabled */
 	if (!(readl_relaxed(l2x0_base + L2X0_CTRL) & L2X0_CTRL_EN))
 		if (data->of_parse)
···
 	else
 		cache_id = readl_relaxed(l2x0_base + L2X0_CACHE_ID);
 
-	__l2c_init(data, aux_val, aux_mask, cache_id);
-
-	return 0;
+	return __l2c_init(data, aux_val, aux_mask, cache_id);
 }
 #endif
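The new l2c310_of_parse() code above handles each prefetch-related property three ways: present with a value (apply it), absent (`-EINVAL`, keep the hardware default), or present but valueless (warn). A standalone sketch of that pattern for one property, with `ret` and `val` standing in for the result of of_property_read_u32():

```c
#include <stdint.h>

#define L310_PREFETCH_CTRL_DBL_LINEFILL (1u << 30)

/* Apply the "arm,double-linefill" decision to a saved prefetch-control
 * value.  ret == 0: property present, honour val; any other ret
 * (including -EINVAL for "absent"): leave the saved default alone. */
static uint32_t apply_dbl_linefill(uint32_t prefetch, int ret, uint32_t val)
{
	if (ret == 0) {
		if (val)
			prefetch |= L310_PREFETCH_CTRL_DBL_LINEFILL;
		else
			prefetch &= ~L310_PREFETCH_CTRL_DBL_LINEFILL;
	}
	/* absent property: hardware/default configuration is preserved */
	return prefetch;
}
```

The point of the saved-register indirection in the diff is that these decisions are now staged into l2x0_saved_regs and written out later by l2c310_configure(), instead of being poked into the registers during parsing.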
+1 -4
arch/arm/mm/init.c
···
 
 	early_init_fdt_scan_reserved_mem();
 
-	/*
-	 * reserve memory for DMA contigouos allocations,
-	 * must come from DMA area inside low memory
-	 */
+	/* reserve memory for DMA contiguous allocations */
 	dma_contiguous_reserve(arm_dma_limit);
 
 	arm_memblock_steal_permitted = false;
+7
arch/arm/probes/Makefile
···
+obj-$(CONFIG_UPROBES)		+= decode.o decode-arm.o uprobes/
+obj-$(CONFIG_KPROBES)		+= decode.o kprobes/
+ifdef CONFIG_THUMB2_KERNEL
+obj-$(CONFIG_KPROBES)		+= decode-thumb.o
+else
+obj-$(CONFIG_KPROBES)		+= decode-arm.o
+endif
+12
arch/arm/probes/kprobes/Makefile
···
+obj-$(CONFIG_KPROBES)		+= core.o actions-common.o checkers-common.o
+obj-$(CONFIG_ARM_KPROBES_TEST)	+= test-kprobes.o
+test-kprobes-objs		:= test-core.o
+
+ifdef CONFIG_THUMB2_KERNEL
+obj-$(CONFIG_KPROBES)		+= actions-thumb.o checkers-thumb.o
+test-kprobes-objs		+= test-thumb.o
+else
+obj-$(CONFIG_KPROBES)		+= actions-arm.o checkers-arm.o
+obj-$(CONFIG_OPTPROBES)		+= opt-arm.o
+test-kprobes-objs		+= test-arm.o
+endif
+192
arch/arm/probes/kprobes/checkers-arm.c
··· 1 + /* 2 + * arch/arm/probes/kprobes/checkers-arm.c 3 + * 4 + * Copyright (C) 2014 Huawei Inc. 5 + * 6 + * This program is free software; you can redistribute it and/or modify 7 + * it under the terms of the GNU General Public License version 2 as 8 + * published by the Free Software Foundation. 9 + * 10 + * This program is distributed in the hope that it will be useful, 11 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 12 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU 13 + * General Public License for more details. 14 + */ 15 + 16 + #include <linux/kernel.h> 17 + #include "../decode.h" 18 + #include "../decode-arm.h" 19 + #include "checkers.h" 20 + 21 + static enum probes_insn __kprobes arm_check_stack(probes_opcode_t insn, 22 + struct arch_probes_insn *asi, 23 + const struct decode_header *h) 24 + { 25 + /* 26 + * PROBES_LDRSTRD, PROBES_LDMSTM, PROBES_STORE, 27 + * PROBES_STORE_EXTRA may get here. Simply mark all normal 28 + * insns as STACK_USE_NONE. 29 + */ 30 + static const union decode_item table[] = { 31 + /* 32 + * 'STR{,D,B,H}, Rt, [Rn, Rm]' should be marked as UNKNOWN 33 + * if Rn or Rm is SP. 34 + * x 35 + * STR (register) cccc 011x x0x0 xxxx xxxx xxxx xxxx xxxx 36 + * STRB (register) cccc 011x x1x0 xxxx xxxx xxxx xxxx xxxx 37 + */ 38 + DECODE_OR (0x0e10000f, 0x0600000d), 39 + DECODE_OR (0x0e1f0000, 0x060d0000), 40 + 41 + /* 42 + * x 43 + * STRD (register) cccc 000x x0x0 xxxx xxxx xxxx 1111 xxxx 44 + * STRH (register) cccc 000x x0x0 xxxx xxxx xxxx 1011 xxxx 45 + */ 46 + DECODE_OR (0x0e5000bf, 0x000000bd), 47 + DECODE_CUSTOM (0x0e5f00b0, 0x000d00b0, STACK_USE_UNKNOWN), 48 + 49 + /* 50 + * For PROBES_LDMSTM, only stmdx sp, [...] need to examine 51 + * 52 + * Bit B/A (bit 24) encodes arithmetic operation order. 1 means 53 + * before, 0 means after. 54 + * Bit I/D (bit 23) encodes arithmetic operation. 1 means 55 + * increment, 0 means decrement. 
56 + * 57 + * So: 58 + * B I 59 + * / / 60 + * A D | Rn | 61 + * STMDX SP, [...] cccc 100x 00x0 xxxx xxxx xxxx xxxx xxxx 62 + */ 63 + DECODE_CUSTOM (0x0edf0000, 0x080d0000, STACK_USE_STMDX), 64 + 65 + /* P U W | Rn | Rt | imm12 |*/ 66 + /* STR (immediate) cccc 010x x0x0 1101 xxxx xxxx xxxx xxxx */ 67 + /* STRB (immediate) cccc 010x x1x0 1101 xxxx xxxx xxxx xxxx */ 68 + /* P U W | Rn | Rt |imm4| |imm4|*/ 69 + /* STRD (immediate) cccc 000x x1x0 1101 xxxx xxxx 1111 xxxx */ 70 + /* STRH (immediate) cccc 000x x1x0 1101 xxxx xxxx 1011 xxxx */ 71 + /* 72 + * index = (P == '1'); add = (U == '1'). 73 + * Above insns with: 74 + * index == 0 (str{,d,h} rx, [sp], #+/-imm) or 75 + * add == 1 (str{,d,h} rx, [sp, #+<imm>]) 76 + * should be STACK_USE_NONE. 77 + * Only str{,b,d,h} rx,[sp,#-n] (P == 1 and U == 0) are 78 + * required to be examined. 79 + */ 80 + /* STR{,B} Rt,[SP,#-n] cccc 0101 0xx0 1101 xxxx xxxx xxxx xxxx */ 81 + DECODE_CUSTOM (0x0f9f0000, 0x050d0000, STACK_USE_FIXED_XXX), 82 + 83 + /* STR{D,H} Rt,[SP,#-n] cccc 0001 01x0 1101 xxxx xxxx 1x11 xxxx */ 84 + DECODE_CUSTOM (0x0fdf00b0, 0x014d00b0, STACK_USE_FIXED_X0X), 85 + 86 + /* fall through */ 87 + DECODE_CUSTOM (0, 0, STACK_USE_NONE), 88 + DECODE_END 89 + }; 90 + 91 + return probes_decode_insn(insn, asi, table, false, false, stack_check_actions, NULL); 92 + } 93 + 94 + const struct decode_checker arm_stack_checker[NUM_PROBES_ARM_ACTIONS] = { 95 + [PROBES_LDRSTRD] = {.checker = arm_check_stack}, 96 + [PROBES_STORE_EXTRA] = {.checker = arm_check_stack}, 97 + [PROBES_STORE] = {.checker = arm_check_stack}, 98 + [PROBES_LDMSTM] = {.checker = arm_check_stack}, 99 + }; 100 + 101 + static enum probes_insn __kprobes arm_check_regs_nouse(probes_opcode_t insn, 102 + struct arch_probes_insn *asi, 103 + const struct decode_header *h) 104 + { 105 + asi->register_usage_flags = 0; 106 + return INSN_GOOD; 107 + } 108 + 109 + static enum probes_insn arm_check_regs_normal(probes_opcode_t insn, 110 + struct arch_probes_insn *asi, 111 + 
const struct decode_header *h) 112 + { 113 + u32 regs = h->type_regs.bits >> DECODE_TYPE_BITS; 114 + int i; 115 + 116 + asi->register_usage_flags = 0; 117 + for (i = 0; i < 5; regs >>= 4, insn >>= 4, i++) 118 + if (regs & 0xf) 119 + asi->register_usage_flags |= 1 << (insn & 0xf); 120 + 121 + return INSN_GOOD; 122 + } 123 + 124 + 125 + static enum probes_insn arm_check_regs_ldmstm(probes_opcode_t insn, 126 + struct arch_probes_insn *asi, 127 + const struct decode_header *h) 128 + { 129 + unsigned int reglist = insn & 0xffff; 130 + unsigned int rn = (insn >> 16) & 0xf; 131 + asi->register_usage_flags = reglist | (1 << rn); 132 + return INSN_GOOD; 133 + } 134 + 135 + static enum probes_insn arm_check_regs_mov_ip_sp(probes_opcode_t insn, 136 + struct arch_probes_insn *asi, 137 + const struct decode_header *h) 138 + { 139 + /* Instruction is 'mov ip, sp' i.e. 'mov r12, r13' */ 140 + asi->register_usage_flags = (1 << 12) | (1<< 13); 141 + return INSN_GOOD; 142 + } 143 + 144 + /* 145 + * | Rn |Rt/d| | Rm | 146 + * LDRD (register) cccc 000x x0x0 xxxx xxxx xxxx 1101 xxxx 147 + * STRD (register) cccc 000x x0x0 xxxx xxxx xxxx 1111 xxxx 148 + * | Rn |Rt/d| |imm4L| 149 + * LDRD (immediate) cccc 000x x1x0 xxxx xxxx xxxx 1101 xxxx 150 + * STRD (immediate) cccc 000x x1x0 xxxx xxxx xxxx 1111 xxxx 151 + * 152 + * Such instructions access Rt/d and its next register, so different 153 + * from others, a specific checker is required to handle this extra 154 + * implicit register usage. 
155 + */ 156 + static enum probes_insn arm_check_regs_ldrdstrd(probes_opcode_t insn, 157 + struct arch_probes_insn *asi, 158 + const struct decode_header *h) 159 + { 160 + int rdt = (insn >> 12) & 0xf; 161 + arm_check_regs_normal(insn, asi, h); 162 + asi->register_usage_flags |= 1 << (rdt + 1); 163 + return INSN_GOOD; 164 + } 165 + 166 + 167 + const struct decode_checker arm_regs_checker[NUM_PROBES_ARM_ACTIONS] = { 168 + [PROBES_MRS] = {.checker = arm_check_regs_normal}, 169 + [PROBES_SATURATING_ARITHMETIC] = {.checker = arm_check_regs_normal}, 170 + [PROBES_MUL1] = {.checker = arm_check_regs_normal}, 171 + [PROBES_MUL2] = {.checker = arm_check_regs_normal}, 172 + [PROBES_MUL_ADD_LONG] = {.checker = arm_check_regs_normal}, 173 + [PROBES_MUL_ADD] = {.checker = arm_check_regs_normal}, 174 + [PROBES_LOAD] = {.checker = arm_check_regs_normal}, 175 + [PROBES_LOAD_EXTRA] = {.checker = arm_check_regs_normal}, 176 + [PROBES_STORE] = {.checker = arm_check_regs_normal}, 177 + [PROBES_STORE_EXTRA] = {.checker = arm_check_regs_normal}, 178 + [PROBES_DATA_PROCESSING_REG] = {.checker = arm_check_regs_normal}, 179 + [PROBES_DATA_PROCESSING_IMM] = {.checker = arm_check_regs_normal}, 180 + [PROBES_SEV] = {.checker = arm_check_regs_nouse}, 181 + [PROBES_WFE] = {.checker = arm_check_regs_nouse}, 182 + [PROBES_SATURATE] = {.checker = arm_check_regs_normal}, 183 + [PROBES_REV] = {.checker = arm_check_regs_normal}, 184 + [PROBES_MMI] = {.checker = arm_check_regs_normal}, 185 + [PROBES_PACK] = {.checker = arm_check_regs_normal}, 186 + [PROBES_EXTEND] = {.checker = arm_check_regs_normal}, 187 + [PROBES_EXTEND_ADD] = {.checker = arm_check_regs_normal}, 188 + [PROBES_BITFIELD] = {.checker = arm_check_regs_normal}, 189 + [PROBES_LDMSTM] = {.checker = arm_check_regs_ldmstm}, 190 + [PROBES_MOV_IP_SP] = {.checker = arm_check_regs_mov_ip_sp}, 191 + [PROBES_LDRSTRD] = {.checker = arm_check_regs_ldrdstrd}, 192 + };
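The register checkers above all reduce to the nibble scan in arm_check_regs_normal: the decode table records which 4-bit fields of the instruction hold register numbers, and the checker sets one bit per register used. A minimal userspace sketch of that scan (reg_usage_flags and the mask value are illustrative, not kernel names):

```c
#include <stdint.h>

/*
 * Mirror of arm_check_regs_normal: 'regs' flags which of the five low
 * nibbles of 'insn' encode a register number; for each flagged nibble,
 * set the bit of the register it names.
 */
static uint32_t reg_usage_flags(uint32_t insn, uint32_t regs)
{
	uint32_t flags = 0;
	int i;

	for (i = 0; i < 5; regs >>= 4, insn >>= 4, i++)
		if (regs & 0xf)
			flags |= 1u << (insn & 0xf);
	return flags;
}
```

For 'mov r12, r13' (0xe1a0c00d) with nibbles 0 (Rm) and 3 (Rd) flagged, this yields (1 << 12) | (1 << 13), matching what the hard-coded arm_check_regs_mov_ip_sp returns.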
+101
arch/arm/probes/kprobes/checkers-common.c
··· 1 + /* 2 + * arch/arm/probes/kprobes/checkers-common.c 3 + * 4 + * Copyright (C) 2014 Huawei Inc. 5 + * 6 + * This program is free software; you can redistribute it and/or modify 7 + * it under the terms of the GNU General Public License version 2 as 8 + * published by the Free Software Foundation. 9 + * 10 + * This program is distributed in the hope that it will be useful, 11 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 12 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU 13 + * General Public License for more details. 14 + */ 15 + 16 + #include <linux/kernel.h> 17 + #include "../decode.h" 18 + #include "../decode-arm.h" 19 + #include "checkers.h" 20 + 21 + enum probes_insn checker_stack_use_none(probes_opcode_t insn, 22 + struct arch_probes_insn *asi, 23 + const struct decode_header *h) 24 + { 25 + asi->stack_space = 0; 26 + return INSN_GOOD_NO_SLOT; 27 + } 28 + 29 + enum probes_insn checker_stack_use_unknown(probes_opcode_t insn, 30 + struct arch_probes_insn *asi, 31 + const struct decode_header *h) 32 + { 33 + asi->stack_space = -1; 34 + return INSN_GOOD_NO_SLOT; 35 + } 36 + 37 + #ifdef CONFIG_THUMB2_KERNEL 38 + enum probes_insn checker_stack_use_imm_0xx(probes_opcode_t insn, 39 + struct arch_probes_insn *asi, 40 + const struct decode_header *h) 41 + { 42 + int imm = insn & 0xff; 43 + asi->stack_space = imm; 44 + return INSN_GOOD_NO_SLOT; 45 + } 46 + 47 + /* 48 + * Unlike other insns, which use imm8 directly, the real addressing 49 + * offset of STRD in T32 encoding is imm8 * 4. See the ARM ARM description. 
50 + */ 51 + enum probes_insn checker_stack_use_t32strd(probes_opcode_t insn, 52 + struct arch_probes_insn *asi, 53 + const struct decode_header *h) 54 + { 55 + int imm = insn & 0xff; 56 + asi->stack_space = imm << 2; 57 + return INSN_GOOD_NO_SLOT; 58 + } 59 + #else 60 + enum probes_insn checker_stack_use_imm_x0x(probes_opcode_t insn, 61 + struct arch_probes_insn *asi, 62 + const struct decode_header *h) 63 + { 64 + int imm = ((insn & 0xf00) >> 4) + (insn & 0xf); 65 + asi->stack_space = imm; 66 + return INSN_GOOD_NO_SLOT; 67 + } 68 + #endif 69 + 70 + enum probes_insn checker_stack_use_imm_xxx(probes_opcode_t insn, 71 + struct arch_probes_insn *asi, 72 + const struct decode_header *h) 73 + { 74 + int imm = insn & 0xfff; 75 + asi->stack_space = imm; 76 + return INSN_GOOD_NO_SLOT; 77 + } 78 + 79 + enum probes_insn checker_stack_use_stmdx(probes_opcode_t insn, 80 + struct arch_probes_insn *asi, 81 + const struct decode_header *h) 82 + { 83 + unsigned int reglist = insn & 0xffff; 84 + int pbit = insn & (1 << 24); 85 + asi->stack_space = (hweight32(reglist) - (!pbit ? 1 : 0)) * 4; 86 + 87 + return INSN_GOOD_NO_SLOT; 88 + } 89 + 90 + const union decode_action stack_check_actions[] = { 91 + [STACK_USE_NONE] = {.decoder = checker_stack_use_none}, 92 + [STACK_USE_UNKNOWN] = {.decoder = checker_stack_use_unknown}, 93 + #ifdef CONFIG_THUMB2_KERNEL 94 + [STACK_USE_FIXED_0XX] = {.decoder = checker_stack_use_imm_0xx}, 95 + [STACK_USE_T32STRD] = {.decoder = checker_stack_use_t32strd}, 96 + #else 97 + [STACK_USE_FIXED_X0X] = {.decoder = checker_stack_use_imm_x0x}, 98 + #endif 99 + [STACK_USE_FIXED_XXX] = {.decoder = checker_stack_use_imm_xxx}, 100 + [STACK_USE_STMDX] = {.decoder = checker_stack_use_stmdx}, 101 + };
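Two of the ARM-mode helpers above are easy to sanity-check in isolation: the x0x checker reassembles the split imm4H:imm4L offset used by A32 STR{D,H}, and the STMDX checker charges four bytes per listed register, one fewer when the P bit is clear (assuming the kernel's accounting: the decrement-after form stores its first word at the address SP already points to). A userspace sketch with illustrative function names:

```c
#include <stdint.h>

/* checker_stack_use_imm_x0x: A32 STR{D,H} split their 8-bit offset
 * into imm4H (bits 11-8) and imm4L (bits 3-0). */
static int strdh_imm(uint32_t insn)
{
	return ((insn & 0xf00) >> 4) + (insn & 0xf);
}

/* checker_stack_use_stmdx: 4 bytes per listed register; with P clear
 * (decrement-after), the first word lands at [SP] itself, so one slot
 * fewer lies below the stack pointer. */
static int stmdx_stack_space(uint32_t insn)
{
	unsigned int reglist = insn & 0xffff;
	int pbit = insn & (1 << 24);

	return (__builtin_popcount(reglist) - (!pbit ? 1 : 0)) * 4;
}
```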
+110
arch/arm/probes/kprobes/checkers-thumb.c
··· 1 + /* 2 + * arch/arm/probes/kprobes/checkers-thumb.c 3 + * 4 + * Copyright (C) 2014 Huawei Inc. 5 + * 6 + * This program is free software; you can redistribute it and/or modify 7 + * it under the terms of the GNU General Public License version 2 as 8 + * published by the Free Software Foundation. 9 + * 10 + * This program is distributed in the hope that it will be useful, 11 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 12 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU 13 + * General Public License for more details. 14 + */ 15 + 16 + #include <linux/kernel.h> 17 + #include "../decode.h" 18 + #include "../decode-thumb.h" 19 + #include "checkers.h" 20 + 21 + static enum probes_insn __kprobes t32_check_stack(probes_opcode_t insn, 22 + struct arch_probes_insn *asi, 23 + const struct decode_header *h) 24 + { 25 + /* 26 + * PROBES_T32_LDMSTM, PROBES_T32_LDRDSTRD and PROBES_T32_LDRSTR 27 + * may get here. Simply mark all normal insns as STACK_USE_NONE. 28 + */ 29 + static const union decode_item table[] = { 30 + 31 + /* 32 + * First, filter out all ldr insns to make our life easier. 33 + * The following load insns may come here: 34 + * LDM, LDRD, LDR. 35 + * In T32 encoding, bit 20 is enough for distinguishing 36 + * load and store. All load insns have this bit set, while 37 + * all store insns have this bit clear. 38 + */ 39 + DECODE_CUSTOM (0x00100000, 0x00100000, STACK_USE_NONE), 40 + 41 + /* 42 + * Mark all 'STR{,B,H} Rt, [Rn, Rm]' as STACK_USE_UNKNOWN 43 + * if Rn or Rm is SP. T32 doesn't encode STRD. 44 + */ 45 + /* xx | Rn | Rt | | Rm |*/ 46 + /* STR (register) 1111 1000 0100 xxxx xxxx 0000 00xx xxxx */ 47 + /* STRB (register) 1111 1000 0000 xxxx xxxx 0000 00xx xxxx */ 48 + /* STRH (register) 1111 1000 0010 xxxx xxxx 0000 00xx xxxx */ 49 + /* INVALID INSN 1111 1000 0110 xxxx xxxx 0000 00xx xxxx */ 50 + /* By introducing the INVALID INSN, bits 21 and 22 can be ignored. 
*/ 51 + DECODE_OR (0xff9f0fc0, 0xf80d0000), 52 + DECODE_CUSTOM (0xff900fcf, 0xf800000d, STACK_USE_UNKNOWN), 53 + 54 + 55 + /* xx | Rn | Rt | PUW| imm8 |*/ 56 + /* STR (imm 8) 1111 1000 0100 1101 xxxx 110x xxxx xxxx */ 57 + /* STRB (imm 8) 1111 1000 0000 1101 xxxx 110x xxxx xxxx */ 58 + /* STRH (imm 8) 1111 1000 0010 1101 xxxx 110x xxxx xxxx */ 59 + /* INVALID INSN 1111 1000 0110 1101 xxxx 110x xxxx xxxx */ 60 + /* Only consider U == 0 and P == 1: strx rx, [sp, #-<imm>] */ 61 + DECODE_CUSTOM (0xff9f0e00, 0xf80d0c00, STACK_USE_FIXED_0XX), 62 + 63 + /* For STR{,B,H} (imm 12), offset is always positive, so ignore them. */ 64 + 65 + /* P U W | Rn | Rt | Rt2| imm8 |*/ 66 + /* STRD (immediate) 1110 1001 01x0 1101 xxxx xxxx xxxx xxxx */ 67 + /* 68 + * Only consider U == 0 and P == 1. 69 + * Also note that STRD in T32 encoding is special: 70 + * imm = ZeroExtend(imm8:'00', 32) 71 + */ 72 + DECODE_CUSTOM (0xffdf0000, 0xe94d0000, STACK_USE_T32STRD), 73 + 74 + /* | Rn | */ 75 + /* STMDB 1110 1001 00x0 1101 xxxx xxxx xxxx xxxx */ 76 + DECODE_CUSTOM (0xffdf0000, 0xe90d0000, STACK_USE_STMDX), 77 + 78 + /* fall through */ 79 + DECODE_CUSTOM (0, 0, STACK_USE_NONE), 80 + DECODE_END 81 + }; 82 + 83 + return probes_decode_insn(insn, asi, table, false, false, stack_check_actions, NULL); 84 + } 85 + 86 + const struct decode_checker t32_stack_checker[NUM_PROBES_T32_ACTIONS] = { 87 + [PROBES_T32_LDMSTM] = {.checker = t32_check_stack}, 88 + [PROBES_T32_LDRDSTRD] = {.checker = t32_check_stack}, 89 + [PROBES_T32_LDRSTR] = {.checker = t32_check_stack}, 90 + }; 91 + 92 + /* 93 + * See following comments. This insn must be 'push'. 
94 + */ 95 + static enum probes_insn __kprobes t16_check_stack(probes_opcode_t insn, 96 + struct arch_probes_insn *asi, 97 + const struct decode_header *h) 98 + { 99 + unsigned int reglist = insn & 0x1ff; 100 + asi->stack_space = hweight32(reglist) * 4; 101 + return INSN_GOOD; 102 + } 103 + 104 + /* 105 + * T16 encoding is simple: only the 'push' insn can need extra stack space. 106 + * Other insns, like str, can only use r0-r7 as Rn. 107 + */ 108 + const struct decode_checker t16_stack_checker[NUM_PROBES_T16_ACTIONS] = { 109 + [PROBES_T16_PUSH] = {.checker = t16_check_stack}, 110 + };
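The T16 case really is that small: 'push' is the only T16 store that needs extra stack, its register list is the low 9 bits (r0-r7 plus LR as bit 8), and each listed register costs four bytes. As a standalone sketch (the function name is illustrative):

```c
#include <stdint.h>

/* t16_check_stack: a T16 'push' lists r0-r7 and LR in bits 8:0, and
 * each pushed register consumes 4 bytes of stack. */
static int t16_push_stack_space(uint16_t insn)
{
	unsigned int reglist = insn & 0x1ff;

	return __builtin_popcount(reglist) * 4;
}
```

For example, 'push {r4-r7, lr}' (0xb5f0) lists five registers and thus needs 20 bytes.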
+55
arch/arm/probes/kprobes/checkers.h
··· 1 + /* 2 + * arch/arm/probes/kprobes/checkers.h 3 + * 4 + * Copyright (C) 2014 Huawei Inc. 5 + * 6 + * This program is free software; you can redistribute it and/or modify 7 + * it under the terms of the GNU General Public License version 2 as 8 + * published by the Free Software Foundation. 9 + * 10 + * This program is distributed in the hope that it will be useful, 11 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 12 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU 13 + * General Public License for more details. 14 + */ 15 + #ifndef _ARM_KERNEL_PROBES_CHECKERS_H 16 + #define _ARM_KERNEL_PROBES_CHECKERS_H 17 + 18 + #include <linux/kernel.h> 19 + #include <linux/types.h> 20 + #include "../decode.h" 21 + 22 + extern probes_check_t checker_stack_use_none; 23 + extern probes_check_t checker_stack_use_unknown; 24 + #ifdef CONFIG_THUMB2_KERNEL 25 + extern probes_check_t checker_stack_use_imm_0xx; 26 + #else 27 + extern probes_check_t checker_stack_use_imm_x0x; 28 + #endif 29 + extern probes_check_t checker_stack_use_imm_xxx; 30 + extern probes_check_t checker_stack_use_stmdx; 31 + 32 + enum { 33 + STACK_USE_NONE, 34 + STACK_USE_UNKNOWN, 35 + #ifdef CONFIG_THUMB2_KERNEL 36 + STACK_USE_FIXED_0XX, 37 + STACK_USE_T32STRD, 38 + #else 39 + STACK_USE_FIXED_X0X, 40 + #endif 41 + STACK_USE_FIXED_XXX, 42 + STACK_USE_STMDX, 43 + NUM_STACK_USE_TYPES 44 + }; 45 + 46 + extern const union decode_action stack_check_actions[]; 47 + 48 + #ifndef CONFIG_THUMB2_KERNEL 49 + extern const struct decode_checker arm_stack_checker[]; 50 + extern const struct decode_checker arm_regs_checker[]; 51 + #else 52 + #endif 53 + extern const struct decode_checker t32_stack_checker[]; 54 + extern const struct decode_checker t16_stack_checker[]; 55 + #endif
+370
arch/arm/probes/kprobes/opt-arm.c
··· 1 + /* 2 + * Kernel Probes Jump Optimization (Optprobes) 3 + * 4 + * This program is free software; you can redistribute it and/or modify 5 + * it under the terms of the GNU General Public License as published by 6 + * the Free Software Foundation; either version 2 of the License, or 7 + * (at your option) any later version. 8 + * 9 + * This program is distributed in the hope that it will be useful, 10 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 11 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 12 + * GNU General Public License for more details. 13 + * 14 + * You should have received a copy of the GNU General Public License 15 + * along with this program; if not, write to the Free Software 16 + * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. 17 + * 18 + * Copyright (C) IBM Corporation, 2002, 2004 19 + * Copyright (C) Hitachi Ltd., 2012 20 + * Copyright (C) Huawei Inc., 2014 21 + */ 22 + 23 + #include <linux/kprobes.h> 24 + #include <linux/jump_label.h> 25 + #include <asm/kprobes.h> 26 + #include <asm/cacheflush.h> 27 + /* for arm_gen_branch */ 28 + #include <asm/insn.h> 29 + /* for patch_text */ 30 + #include <asm/patch.h> 31 + 32 + #include "core.h" 33 + 34 + /* 35 + * See register_usage_flags. If the probed instruction doesn't use PC, 36 + * we can copy it into template and have it executed directly without 37 + * simulation or emulation. 38 + */ 39 + #define ARM_REG_PC 15 40 + #define can_kprobe_direct_exec(m) (!test_bit(ARM_REG_PC, &(m))) 41 + 42 + /* 43 + * NOTE: the first sub and add instruction will be modified according 44 + * to the stack cost of the instruction. 
45 + */ 46 + asm ( 47 + ".global optprobe_template_entry\n" 48 + "optprobe_template_entry:\n" 49 + ".global optprobe_template_sub_sp\n" 50 + "optprobe_template_sub_sp:" 51 + " sub sp, sp, #0xff\n" 52 + " stmia sp, {r0 - r14} \n" 53 + ".global optprobe_template_add_sp\n" 54 + "optprobe_template_add_sp:" 55 + " add r3, sp, #0xff\n" 56 + " str r3, [sp, #52]\n" 57 + " mrs r4, cpsr\n" 58 + " str r4, [sp, #64]\n" 59 + " mov r1, sp\n" 60 + " ldr r0, 1f\n" 61 + " ldr r2, 2f\n" 62 + /* 63 + * AEABI requires an 8-byte aligned stack. If 64 + * SP % 8 != 0 (SP % 4 == 0 should be ensured), 65 + * allocate more bytes here. 66 + */ 67 + " and r4, sp, #4\n" 68 + " sub sp, sp, r4\n" 69 + #if __LINUX_ARM_ARCH__ >= 5 70 + " blx r2\n" 71 + #else 72 + " mov lr, pc\n" 73 + " mov pc, r2\n" 74 + #endif 75 + " add sp, sp, r4\n" 76 + " ldr r1, [sp, #64]\n" 77 + " tst r1, #"__stringify(PSR_T_BIT)"\n" 78 + " ldrne r2, [sp, #60]\n" 79 + " orrne r2, #1\n" 80 + " strne r2, [sp, #60] @ set bit0 of PC for thumb\n" 81 + " msr cpsr_cxsf, r1\n" 82 + ".global optprobe_template_restore_begin\n" 83 + "optprobe_template_restore_begin:\n" 84 + " ldmia sp, {r0 - r15}\n" 85 + ".global optprobe_template_restore_orig_insn\n" 86 + "optprobe_template_restore_orig_insn:\n" 87 + " nop\n" 88 + ".global optprobe_template_restore_end\n" 89 + "optprobe_template_restore_end:\n" 90 + " nop\n" 91 + ".global optprobe_template_val\n" 92 + "optprobe_template_val:\n" 93 + "1: .long 0\n" 94 + ".global optprobe_template_call\n" 95 + "optprobe_template_call:\n" 96 + "2: .long 0\n" 97 + ".global optprobe_template_end\n" 98 + "optprobe_template_end:\n"); 99 + 100 + #define TMPL_VAL_IDX \ 101 + ((unsigned long *)&optprobe_template_val - (unsigned long *)&optprobe_template_entry) 102 + #define TMPL_CALL_IDX \ 103 + ((unsigned long *)&optprobe_template_call - (unsigned long *)&optprobe_template_entry) 104 + #define TMPL_END_IDX \ 105 + ((unsigned long *)&optprobe_template_end - (unsigned long *)&optprobe_template_entry) 106 + 
#define TMPL_ADD_SP \ 107 + ((unsigned long *)&optprobe_template_add_sp - (unsigned long *)&optprobe_template_entry) 108 + #define TMPL_SUB_SP \ 109 + ((unsigned long *)&optprobe_template_sub_sp - (unsigned long *)&optprobe_template_entry) 110 + #define TMPL_RESTORE_BEGIN \ 111 + ((unsigned long *)&optprobe_template_restore_begin - (unsigned long *)&optprobe_template_entry) 112 + #define TMPL_RESTORE_ORIGN_INSN \ 113 + ((unsigned long *)&optprobe_template_restore_orig_insn - (unsigned long *)&optprobe_template_entry) 114 + #define TMPL_RESTORE_END \ 115 + ((unsigned long *)&optprobe_template_restore_end - (unsigned long *)&optprobe_template_entry) 116 + 117 + /* 118 + * ARM can always optimize an instruction when using ARM ISA, except 119 + * instructions like 'str r0, [sp, r1]' which store to the stack but 120 + * whose stack space consumption cannot be determined statically. 121 + */ 122 + int arch_prepared_optinsn(struct arch_optimized_insn *optinsn) 123 + { 124 + return optinsn->insn != NULL; 125 + } 126 + 127 + /* 128 + * In ARM ISA, kprobe opt always replaces one instruction (4 bytes 129 + * aligned and 4 bytes long). It is impossible to encounter another 130 + * kprobe in the address range. So always return 0. 131 + */ 132 + int arch_check_optimized_kprobe(struct optimized_kprobe *op) 133 + { 134 + return 0; 135 + } 136 + 137 + /* Caller must ensure addr & 3 == 0 */ 138 + static int can_optimize(struct kprobe *kp) 139 + { 140 + if (kp->ainsn.stack_space < 0) 141 + return 0; 142 + /* 143 + * 255 is the biggest imm that can be used in 'sub r0, r0, #<imm>'. 144 + * Numbers larger than 255 need special encoding. 
145 + */ 146 + if (kp->ainsn.stack_space > 255 - sizeof(struct pt_regs)) 147 + return 0; 148 + return 1; 149 + } 150 + 151 + /* Free optimized instruction slot */ 152 + static void 153 + __arch_remove_optimized_kprobe(struct optimized_kprobe *op, int dirty) 154 + { 155 + if (op->optinsn.insn) { 156 + free_optinsn_slot(op->optinsn.insn, dirty); 157 + op->optinsn.insn = NULL; 158 + } 159 + } 160 + 161 + extern void kprobe_handler(struct pt_regs *regs); 162 + 163 + static void 164 + optimized_callback(struct optimized_kprobe *op, struct pt_regs *regs) 165 + { 166 + unsigned long flags; 167 + struct kprobe *p = &op->kp; 168 + struct kprobe_ctlblk *kcb = get_kprobe_ctlblk(); 169 + 170 + /* Save skipped registers */ 171 + regs->ARM_pc = (unsigned long)op->kp.addr; 172 + regs->ARM_ORIG_r0 = ~0UL; 173 + 174 + local_irq_save(flags); 175 + 176 + if (kprobe_running()) { 177 + kprobes_inc_nmissed_count(&op->kp); 178 + } else { 179 + __this_cpu_write(current_kprobe, &op->kp); 180 + kcb->kprobe_status = KPROBE_HIT_ACTIVE; 181 + opt_pre_handler(&op->kp, regs); 182 + __this_cpu_write(current_kprobe, NULL); 183 + } 184 + 185 + /* 186 + * We singlestep the replaced instruction only when it can't be 187 + * executed directly during restore. 188 + */ 189 + if (!p->ainsn.kprobe_direct_exec) 190 + op->kp.ainsn.insn_singlestep(p->opcode, &p->ainsn, regs); 191 + 192 + local_irq_restore(flags); 193 + } 194 + 195 + int arch_prepare_optimized_kprobe(struct optimized_kprobe *op, struct kprobe *orig) 196 + { 197 + kprobe_opcode_t *code; 198 + unsigned long rel_chk; 199 + unsigned long val; 200 + unsigned long stack_protect = sizeof(struct pt_regs); 201 + 202 + if (!can_optimize(orig)) 203 + return -EILSEQ; 204 + 205 + code = get_optinsn_slot(); 206 + if (!code) 207 + return -ENOMEM; 208 + 209 + /* 210 + * Verify that the address gap is in the 32MiB range, because this uses 211 + * a relative jump. 212 + * 213 + * kprobe opt uses a 'b' instruction to branch to optinsn.insn. 
214 + * According to the ARM manual, the branch instruction is: 215 + * 216 + * 31 28 27 24 23 0 217 + * +------+---+---+---+---+----------------+ 218 + * | cond | 1 | 0 | 1 | 0 | imm24 | 219 + * +------+---+---+---+---+----------------+ 220 + * 221 + * imm24 is a signed 24-bit integer. The real branch offset is computed 222 + * by: imm32 = SignExtend(imm24:'00', 32); 223 + * 224 + * So the maximum forward branch should be: 225 + * (0x007fffff << 2) = 0x01fffffc = 0x1fffffc 226 + * The maximum backward branch should be: 227 + * (0xff800000 << 2) = 0xfe000000 = -0x2000000 228 + * 229 + * We can simply check (rel & 0xfe000003): 230 + * if rel is positive, (rel & 0xfe000000) should be 0 231 + * if rel is negative, (rel & 0xfe000000) should be 0xfe000000 232 + * the last '3' is used for alignment checking. 233 + */ 234 + rel_chk = (unsigned long)((long)code - 235 + (long)orig->addr + 8) & 0xfe000003; 236 + 237 + if ((rel_chk != 0) && (rel_chk != 0xfe000000)) { 238 + /* 239 + * Unlike x86, we free the code buffer directly instead of 240 + * calling __arch_remove_optimized_kprobe() because 241 + * we have not filled in any field of op. 242 + */ 243 + free_optinsn_slot(code, 0); 244 + return -ERANGE; 245 + } 246 + 247 + /* Copy arch-dep-instance from template. */ 248 + memcpy(code, &optprobe_template_entry, 249 + TMPL_END_IDX * sizeof(kprobe_opcode_t)); 250 + 251 + /* Adjust buffer according to instruction. */ 252 + BUG_ON(orig->ainsn.stack_space < 0); 253 + 254 + stack_protect += orig->ainsn.stack_space; 255 + 256 + /* Should have been filtered by can_optimize(). 
*/ 257 + BUG_ON(stack_protect > 255); 258 + 259 + /* Create a 'sub sp, sp, #<stack_protect>' */ 260 + code[TMPL_SUB_SP] = __opcode_to_mem_arm(0xe24dd000 | stack_protect); 261 + /* Create an 'add r3, sp, #<stack_protect>' */ 262 + code[TMPL_ADD_SP] = __opcode_to_mem_arm(0xe28d3000 | stack_protect); 263 + 264 + /* Set probe information */ 265 + val = (unsigned long)op; 266 + code[TMPL_VAL_IDX] = val; 267 + 268 + /* Set probe function call */ 269 + val = (unsigned long)optimized_callback; 270 + code[TMPL_CALL_IDX] = val; 271 + 272 + /* If possible, copy insn and have it executed during restore */ 273 + orig->ainsn.kprobe_direct_exec = false; 274 + if (can_kprobe_direct_exec(orig->ainsn.register_usage_flags)) { 275 + kprobe_opcode_t final_branch = arm_gen_branch( 276 + (unsigned long)(&code[TMPL_RESTORE_END]), 277 + (unsigned long)(op->kp.addr) + 4); 278 + if (final_branch != 0) { 279 + /* 280 + * Replace original 'ldmia sp, {r0 - r15}' with 281 + * 'ldmia sp, {r0 - r14}', restoring all registers except pc. 282 + */ 283 + code[TMPL_RESTORE_BEGIN] = __opcode_to_mem_arm(0xe89d7fff); 284 + 285 + /* The original probed instruction */ 286 + code[TMPL_RESTORE_ORIGN_INSN] = __opcode_to_mem_arm(orig->opcode); 287 + 288 + /* Jump back to next instruction */ 289 + code[TMPL_RESTORE_END] = __opcode_to_mem_arm(final_branch); 290 + orig->ainsn.kprobe_direct_exec = true; 291 + } 292 + } 293 + 294 + flush_icache_range((unsigned long)code, 295 + (unsigned long)(&code[TMPL_END_IDX])); 296 + 297 + /* Setting op->optinsn.insn marks the probe as prepared. 
*/ 298 + op->optinsn.insn = code; 299 + return 0; 300 + } 301 + 302 + void __kprobes arch_optimize_kprobes(struct list_head *oplist) 303 + { 304 + struct optimized_kprobe *op, *tmp; 305 + 306 + list_for_each_entry_safe(op, tmp, oplist, list) { 307 + unsigned long insn; 308 + WARN_ON(kprobe_disabled(&op->kp)); 309 + 310 + /* 311 + * Back up the instructions which will be replaced 312 + * by the jump address 313 + */ 314 + memcpy(op->optinsn.copied_insn, op->kp.addr, 315 + RELATIVEJUMP_SIZE); 316 + 317 + insn = arm_gen_branch((unsigned long)op->kp.addr, 318 + (unsigned long)op->optinsn.insn); 319 + BUG_ON(insn == 0); 320 + 321 + /* 322 + * Make it a conditional branch if the replaced insn 323 + * is conditional 324 + */ 325 + insn = (__mem_to_opcode_arm( 326 + op->optinsn.copied_insn[0]) & 0xf0000000) | 327 + (insn & 0x0fffffff); 328 + 329 + /* 330 + * Similar to __arch_disarm_kprobe, operations which 331 + * remove breakpoints must be wrapped by stop_machine 332 + * to avoid races. 333 + */ 334 + kprobes_remove_breakpoint(op->kp.addr, insn); 335 + 336 + list_del_init(&op->list); 337 + } 338 + } 339 + 340 + void arch_unoptimize_kprobe(struct optimized_kprobe *op) 341 + { 342 + arch_arm_kprobe(&op->kp); 343 + } 344 + 345 + /* 346 + * Recover original instructions and breakpoints from relative jumps. 347 + * Caller must hold kprobe_mutex. 
348 + */ 349 + void arch_unoptimize_kprobes(struct list_head *oplist, 350 + struct list_head *done_list) 351 + { 352 + struct optimized_kprobe *op, *tmp; 353 + 354 + list_for_each_entry_safe(op, tmp, oplist, list) { 355 + arch_unoptimize_kprobe(op); 356 + list_move(&op->list, done_list); 357 + } 358 + } 359 + 360 + int arch_within_optimized_kprobe(struct optimized_kprobe *op, 361 + unsigned long addr) 362 + { 363 + return ((unsigned long)op->kp.addr <= addr && 364 + (unsigned long)op->kp.addr + RELATIVEJUMP_SIZE > addr); 365 + } 366 + 367 + void arch_remove_optimized_kprobe(struct optimized_kprobe *op) 368 + { 369 + __arch_remove_optimized_kprobe(op, 1); 370 + }
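The ±32MiB reachability check in arch_prepare_optimized_kprobe() is self-contained enough to exercise in isolation. This sketch mirrors the kernel's expression, including the +8 pipeline adjustment and the 0xfe000003 mask that folds the range and word-alignment checks together (branch_in_range is an illustrative name):

```c
/* A 'b' instruction carries a signed imm24 scaled by 4, giving a reach
 * of [-0x2000000, 0x1fffffc]. Masking with 0xfe000003 leaves 0 for an
 * aligned, in-range positive offset and 0xfe000000 for a negative one;
 * any other value means out of range or misaligned. */
static int branch_in_range(unsigned long dest, unsigned long src)
{
	unsigned long rel_chk = (unsigned long)((long)dest -
			(long)src + 8) & 0xfe000003;

	return rel_chk == 0 || rel_chk == 0xfe000000;
}
```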
+1
arch/arm/probes/uprobes/Makefile
··· 1 + obj-$(CONFIG_UPROBES) += core.o actions-arm.o
+1
arch/arm64/Kconfig
··· 39 39 select HARDIRQS_SW_RESEND 40 40 select HAVE_ALIGNED_STRUCT_PAGE if SLUB 41 41 select HAVE_ARCH_AUDITSYSCALL 42 + select HAVE_ARCH_BITREVERSE 42 43 select HAVE_ARCH_JUMP_LABEL 43 44 select HAVE_ARCH_KGDB 44 45 select HAVE_ARCH_SECCOMP_FILTER
+19
arch/arm64/include/asm/bitrev.h
··· 1 + #ifndef __ASM_BITREV_H 2 + #define __ASM_BITREV_H 3 + static __always_inline __attribute_const__ u32 __arch_bitrev32(u32 x) 4 + { 5 + __asm__ ("rbit %w0, %w1" : "=r" (x) : "r" (x)); 6 + return x; 7 + } 8 + 9 + static __always_inline __attribute_const__ u16 __arch_bitrev16(u16 x) 10 + { 11 + return __arch_bitrev32((u32)x) >> 16; 12 + } 13 + 14 + static __always_inline __attribute_const__ u8 __arch_bitrev8(u8 x) 15 + { 16 + return __arch_bitrev32((u32)x) >> 24; 17 + } 18 + 19 + #endif
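On arm64 the whole reversal is a single rbit instruction; architectures without HAVE_ARCH_BITREVERSE fall back to the shift-and-mask swap network this series adds to include/linux/bitrev.h. A portable sketch of that network (the _sw names are illustrative), which also shows why __arch_bitrev16() can simply shift the 32-bit result:

```c
#include <stdint.h>

/* Same swap network as the kernel's __constant_bitrev32() fallback:
 * swap halves, then bytes, nibbles, bit pairs, and single bits. */
static uint32_t bitrev32_sw(uint32_t x)
{
	x = (x >> 16) | (x << 16);
	x = ((x & 0xff00ff00u) >> 8) | ((x & 0x00ff00ffu) << 8);
	x = ((x & 0xf0f0f0f0u) >> 4) | ((x & 0x0f0f0f0fu) << 4);
	x = ((x & 0xccccccccu) >> 2) | ((x & 0x33333333u) << 2);
	x = ((x & 0xaaaaaaaau) >> 1) | ((x & 0x55555555u) << 1);
	return x;
}

/* The narrowing trick from __arch_bitrev16(): reversing a zero-extended
 * u16 as a u32 leaves the result in the top half. */
static uint16_t bitrev16_sw(uint16_t x)
{
	return bitrev32_sw(x) >> 16;
}
```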
+2 -1
arch/x86/kernel/kprobes/opt.c
··· 322 322 * Target instructions MUST be relocatable (checked inside) 323 323 * This is called when new aggr(opt)probe is allocated or reused. 324 324 */ 325 - int arch_prepare_optimized_kprobe(struct optimized_kprobe *op) 325 + int arch_prepare_optimized_kprobe(struct optimized_kprobe *op, 326 + struct kprobe *__unused) 326 327 { 327 328 u8 *buf; 328 329 int ret;
+47
drivers/amba/bus.c
··· 18 18 #include <linux/pm_domain.h> 19 19 #include <linux/amba/bus.h> 20 20 #include <linux/sizes.h> 21 + #include <linux/limits.h> 21 22 22 23 #include <asm/irq.h> 23 24 ··· 44 43 struct amba_device *pcdev = to_amba_device(dev); 45 44 struct amba_driver *pcdrv = to_amba_driver(drv); 46 45 46 + /* When driver_override is set, only bind to the matching driver */ 47 + if (pcdev->driver_override) 48 + return !strcmp(pcdev->driver_override, drv->name); 49 + 47 50 return amba_lookup(pcdrv->id_table, pcdev) != NULL; 48 51 } 49 52 ··· 62 57 63 58 retval = add_uevent_var(env, "MODALIAS=amba:d%08X", pcdev->periphid); 64 59 return retval; 60 + } 61 + 62 + static ssize_t driver_override_show(struct device *_dev, 63 + struct device_attribute *attr, char *buf) 64 + { 65 + struct amba_device *dev = to_amba_device(_dev); 66 + 67 + if (!dev->driver_override) 68 + return 0; 69 + 70 + return sprintf(buf, "%s\n", dev->driver_override); 71 + } 72 + 73 + static ssize_t driver_override_store(struct device *_dev, 74 + struct device_attribute *attr, 75 + const char *buf, size_t count) 76 + { 77 + struct amba_device *dev = to_amba_device(_dev); 78 + char *driver_override, *old = dev->driver_override, *cp; 79 + 80 + if (count > PATH_MAX) 81 + return -EINVAL; 82 + 83 + driver_override = kstrndup(buf, count, GFP_KERNEL); 84 + if (!driver_override) 85 + return -ENOMEM; 86 + 87 + cp = strchr(driver_override, '\n'); 88 + if (cp) 89 + *cp = '\0'; 90 + 91 + if (strlen(driver_override)) { 92 + dev->driver_override = driver_override; 93 + } else { 94 + kfree(driver_override); 95 + dev->driver_override = NULL; 96 + } 97 + 98 + kfree(old); 99 + 100 + return count; 65 101 } 66 102 67 103 #define amba_attr_func(name,fmt,arg...) \ ··· 127 81 static struct device_attribute amba_dev_attrs[] = { 128 82 __ATTR_RO(id), 129 83 __ATTR_RO(resource), 84 + __ATTR_RW(driver_override), 130 85 __ATTR_NULL, 131 86 }; 132 87
+7
drivers/clocksource/Kconfig
··· 229 229 depends on MIPS_GIC 230 230 select CLKSRC_OF 231 231 232 + config CLKSRC_PXA 233 + def_bool y if ARCH_PXA || ARCH_SA1100 234 + select CLKSRC_OF if USE_OF 235 + help 236 + This enables OST0 support available on PXA and SA-11x0 237 + platforms. 238 + 232 239 endmenu
+1 -1
drivers/clocksource/Makefile
··· 21 21 obj-$(CONFIG_ARCH_MARCO) += timer-marco.o 22 22 obj-$(CONFIG_ARCH_MOXART) += moxart_timer.o 23 23 obj-$(CONFIG_ARCH_MXS) += mxs_timer.o 24 - obj-$(CONFIG_ARCH_PXA) += pxa_timer.o 24 + obj-$(CONFIG_CLKSRC_PXA) += pxa_timer.o 25 25 obj-$(CONFIG_ARCH_PRIMA2) += timer-prima2.o 26 26 obj-$(CONFIG_ARCH_U300) += timer-u300.o 27 27 obj-$(CONFIG_SUN4I_TIMER) += sun4i_timer.o
+198 -1
drivers/gpio/gpio-sa1100.c
··· 11 11 #include <linux/init.h> 12 12 #include <linux/module.h> 13 13 #include <linux/io.h> 14 + #include <linux/syscore_ops.h> 14 15 #include <mach/hardware.h> 15 16 #include <mach/irqs.h> 16 17 ··· 51 50 52 51 static int sa1100_to_irq(struct gpio_chip *chip, unsigned offset) 53 52 { 54 - return offset < 11 ? (IRQ_GPIO0 + offset) : (IRQ_GPIO11 - 11 + offset); 53 + return IRQ_GPIO0 + offset; 55 54 } 56 55 57 56 static struct gpio_chip sa1100_gpio_chip = { ··· 65 64 .ngpio = GPIO_MAX + 1, 66 65 }; 67 66 67 + /* 68 + * SA1100 GPIO edge detection for IRQs: 69 + * IRQs are generated on Falling-Edge, Rising-Edge, or both. 70 + * Use this instead of directly setting GRER/GFER. 71 + */ 72 + static int GPIO_IRQ_rising_edge; 73 + static int GPIO_IRQ_falling_edge; 74 + static int GPIO_IRQ_mask; 75 + 76 + static int sa1100_gpio_type(struct irq_data *d, unsigned int type) 77 + { 78 + unsigned int mask; 79 + 80 + mask = BIT(d->hwirq); 81 + 82 + if (type == IRQ_TYPE_PROBE) { 83 + if ((GPIO_IRQ_rising_edge | GPIO_IRQ_falling_edge) & mask) 84 + return 0; 85 + type = IRQ_TYPE_EDGE_RISING | IRQ_TYPE_EDGE_FALLING; 86 + } 87 + 88 + if (type & IRQ_TYPE_EDGE_RISING) 89 + GPIO_IRQ_rising_edge |= mask; 90 + else 91 + GPIO_IRQ_rising_edge &= ~mask; 92 + if (type & IRQ_TYPE_EDGE_FALLING) 93 + GPIO_IRQ_falling_edge |= mask; 94 + else 95 + GPIO_IRQ_falling_edge &= ~mask; 96 + 97 + GRER = GPIO_IRQ_rising_edge & GPIO_IRQ_mask; 98 + GFER = GPIO_IRQ_falling_edge & GPIO_IRQ_mask; 99 + 100 + return 0; 101 + } 102 + 103 + /* 104 + * GPIO IRQs must be acknowledged. 
+ */
+static void sa1100_gpio_ack(struct irq_data *d)
+{
+	GEDR = BIT(d->hwirq);
+}
+
+static void sa1100_gpio_mask(struct irq_data *d)
+{
+	unsigned int mask = BIT(d->hwirq);
+
+	GPIO_IRQ_mask &= ~mask;
+
+	GRER &= ~mask;
+	GFER &= ~mask;
+}
+
+static void sa1100_gpio_unmask(struct irq_data *d)
+{
+	unsigned int mask = BIT(d->hwirq);
+
+	GPIO_IRQ_mask |= mask;
+
+	GRER = GPIO_IRQ_rising_edge & GPIO_IRQ_mask;
+	GFER = GPIO_IRQ_falling_edge & GPIO_IRQ_mask;
+}
+
+static int sa1100_gpio_wake(struct irq_data *d, unsigned int on)
+{
+	if (on)
+		PWER |= BIT(d->hwirq);
+	else
+		PWER &= ~BIT(d->hwirq);
+	return 0;
+}
+
+/*
+ * This is for GPIO IRQs
+ */
+static struct irq_chip sa1100_gpio_irq_chip = {
+	.name		= "GPIO",
+	.irq_ack	= sa1100_gpio_ack,
+	.irq_mask	= sa1100_gpio_mask,
+	.irq_unmask	= sa1100_gpio_unmask,
+	.irq_set_type	= sa1100_gpio_type,
+	.irq_set_wake	= sa1100_gpio_wake,
+};
+
+static int sa1100_gpio_irqdomain_map(struct irq_domain *d,
+		unsigned int irq, irq_hw_number_t hwirq)
+{
+	irq_set_chip_and_handler(irq, &sa1100_gpio_irq_chip,
+				 handle_edge_irq);
+	set_irq_flags(irq, IRQF_VALID | IRQF_PROBE);
+
+	return 0;
+}
+
+static struct irq_domain_ops sa1100_gpio_irqdomain_ops = {
+	.map = sa1100_gpio_irqdomain_map,
+	.xlate = irq_domain_xlate_onetwocell,
+};
+
+static struct irq_domain *sa1100_gpio_irqdomain;
+
+/*
+ * IRQ 0-11 (GPIO) handler.  We enter here with the
+ * irq_controller_lock held, and IRQs disabled.  Decode the IRQ
+ * and call the handler.
+ */
+static void
+sa1100_gpio_handler(unsigned int irq, struct irq_desc *desc)
+{
+	unsigned int mask;
+
+	mask = GEDR;
+	do {
+		/*
+		 * clear down all currently active IRQ sources.
+		 * We will be processing them all.
+		 */
+		GEDR = mask;
+
+		irq = IRQ_GPIO0;
+		do {
+			if (mask & 1)
+				generic_handle_irq(irq);
+			mask >>= 1;
+			irq++;
+		} while (mask);
+
+		mask = GEDR;
+	} while (mask);
+}
+
+static int sa1100_gpio_suspend(void)
+{
+	/*
+	 * Set the appropriate edges for wakeup.
+	 */
+	GRER = PWER & GPIO_IRQ_rising_edge;
+	GFER = PWER & GPIO_IRQ_falling_edge;
+
+	/*
+	 * Clear any pending GPIO interrupts.
+	 */
+	GEDR = GEDR;
+
+	return 0;
+}
+
+static void sa1100_gpio_resume(void)
+{
+	GRER = GPIO_IRQ_rising_edge & GPIO_IRQ_mask;
+	GFER = GPIO_IRQ_falling_edge & GPIO_IRQ_mask;
+}
+
+static struct syscore_ops sa1100_gpio_syscore_ops = {
+	.suspend = sa1100_gpio_suspend,
+	.resume = sa1100_gpio_resume,
+};
+
+static int __init sa1100_gpio_init_devicefs(void)
+{
+	register_syscore_ops(&sa1100_gpio_syscore_ops);
+	return 0;
+}
+
+device_initcall(sa1100_gpio_init_devicefs);
+
 void __init sa1100_init_gpio(void)
 {
+	/* clear all GPIO edge detects */
+	GFER = 0;
+	GRER = 0;
+	GEDR = -1;
+
 	gpiochip_add(&sa1100_gpio_chip);
+
+	sa1100_gpio_irqdomain = irq_domain_add_simple(NULL,
+			28, IRQ_GPIO0,
+			&sa1100_gpio_irqdomain_ops, NULL);
+
+	/*
+	 * Install handlers for GPIO 0-10 edge detect interrupts
+	 */
+	irq_set_chained_handler(IRQ_GPIO0_SC, sa1100_gpio_handler);
+	irq_set_chained_handler(IRQ_GPIO1_SC, sa1100_gpio_handler);
+	irq_set_chained_handler(IRQ_GPIO2_SC, sa1100_gpio_handler);
+	irq_set_chained_handler(IRQ_GPIO3_SC, sa1100_gpio_handler);
+	irq_set_chained_handler(IRQ_GPIO4_SC, sa1100_gpio_handler);
+	irq_set_chained_handler(IRQ_GPIO5_SC, sa1100_gpio_handler);
+	irq_set_chained_handler(IRQ_GPIO6_SC, sa1100_gpio_handler);
+	irq_set_chained_handler(IRQ_GPIO7_SC, sa1100_gpio_handler);
+	irq_set_chained_handler(IRQ_GPIO8_SC, sa1100_gpio_handler);
+	irq_set_chained_handler(IRQ_GPIO9_SC, sa1100_gpio_handler);
+	irq_set_chained_handler(IRQ_GPIO10_SC, sa1100_gpio_handler);
+	/*
+	 * Install handler for GPIO 11-27 edge detect interrupts
+	 */
+	irq_set_chained_handler(IRQ_GPIO11_27, sa1100_gpio_handler);
+
 }
+9 -4
include/linux/amba/bus.h
···
 	struct clk *pclk;
 	unsigned int periphid;
 	unsigned int irq[AMBA_NR_IRQS];
+	char *driver_override;
 };
 
 struct amba_driver {
···
 int amba_request_regions(struct amba_device *, const char *);
 void amba_release_regions(struct amba_device *);
 
-#define amba_pclk_enable(d)	\
-	(IS_ERR((d)->pclk) ? 0 : clk_enable((d)->pclk))
+static inline int amba_pclk_enable(struct amba_device *dev)
+{
+	return clk_enable(dev->pclk);
+}
 
-#define amba_pclk_disable(d)	\
-	do { if (!IS_ERR((d)->pclk)) clk_disable((d)->pclk); } while (0)
+static inline void amba_pclk_disable(struct amba_device *dev)
+{
+	clk_disable(dev->pclk);
+}
 
 static inline int amba_pclk_prepare(struct amba_device *dev)
 {
+73 -4
include/linux/bitrev.h
···
 
 #include <linux/types.h>
 
-extern u8 const byte_rev_table[256];
+#ifdef CONFIG_HAVE_ARCH_BITREVERSE
+#include <asm/bitrev.h>
 
-static inline u8 bitrev8(u8 byte)
+#define __bitrev32 __arch_bitrev32
+#define __bitrev16 __arch_bitrev16
+#define __bitrev8 __arch_bitrev8
+
+#else
+extern u8 const byte_rev_table[256];
+static inline u8 __bitrev8(u8 byte)
 {
 	return byte_rev_table[byte];
 }
 
-extern u16 bitrev16(u16 in);
-extern u32 bitrev32(u32 in);
+static inline u16 __bitrev16(u16 x)
+{
+	return (__bitrev8(x & 0xff) << 8) | __bitrev8(x >> 8);
+}
 
+static inline u32 __bitrev32(u32 x)
+{
+	return (__bitrev16(x & 0xffff) << 16) | __bitrev16(x >> 16);
+}
+
+#endif /* CONFIG_HAVE_ARCH_BITREVERSE */
+
+#define __constant_bitrev32(x)	\
+({					\
+	u32 __x = x;			\
+	__x = (__x >> 16) | (__x << 16);	\
+	__x = ((__x & (u32)0xFF00FF00UL) >> 8) | ((__x & (u32)0x00FF00FFUL) << 8);	\
+	__x = ((__x & (u32)0xF0F0F0F0UL) >> 4) | ((__x & (u32)0x0F0F0F0FUL) << 4);	\
+	__x = ((__x & (u32)0xCCCCCCCCUL) >> 2) | ((__x & (u32)0x33333333UL) << 2);	\
+	__x = ((__x & (u32)0xAAAAAAAAUL) >> 1) | ((__x & (u32)0x55555555UL) << 1);	\
+	__x;								\
+})
+
+#define __constant_bitrev16(x)	\
+({					\
+	u16 __x = x;			\
+	__x = (__x >> 8) | (__x << 8);	\
+	__x = ((__x & (u16)0xF0F0U) >> 4) | ((__x & (u16)0x0F0FU) << 4);	\
+	__x = ((__x & (u16)0xCCCCU) >> 2) | ((__x & (u16)0x3333U) << 2);	\
+	__x = ((__x & (u16)0xAAAAU) >> 1) | ((__x & (u16)0x5555U) << 1);	\
+	__x;								\
+})
+
+#define __constant_bitrev8(x)	\
+({					\
+	u8 __x = x;			\
+	__x = (__x >> 4) | (__x << 4);	\
+	__x = ((__x & (u8)0xCCU) >> 2) | ((__x & (u8)0x33U) << 2);	\
+	__x = ((__x & (u8)0xAAU) >> 1) | ((__x & (u8)0x55U) << 1);	\
+	__x;								\
+})
+
+#define bitrev32(x)	\
+({			\
+	u32 __x = x;	\
+	__builtin_constant_p(__x) ?	\
+	__constant_bitrev32(__x) :	\
+	__bitrev32(__x);		\
+})
+
+#define bitrev16(x)	\
+({			\
+	u16 __x = x;	\
+	__builtin_constant_p(__x) ?	\
+	__constant_bitrev16(__x) :	\
+	__bitrev16(__x);		\
+})
+
+#define bitrev8(x)	\
+({			\
+	u8 __x = x;	\
+	__builtin_constant_p(__x) ?	\
+	__constant_bitrev8(__x) :	\
+	__bitrev8(__x);			\
+})
 #endif /* _LINUX_BITREV_H */
+2 -1
include/linux/kprobes.h
···
 /* Architecture dependent functions for direct jump optimization */
 extern int arch_prepared_optinsn(struct arch_optimized_insn *optinsn);
 extern int arch_check_optimized_kprobe(struct optimized_kprobe *op);
-extern int arch_prepare_optimized_kprobe(struct optimized_kprobe *op);
+extern int arch_prepare_optimized_kprobe(struct optimized_kprobe *op,
+					 struct kprobe *orig);
 extern void arch_remove_optimized_kprobe(struct optimized_kprobe *op);
 extern void arch_optimize_kprobes(struct list_head *oplist);
 extern void arch_unoptimize_kprobes(struct list_head *oplist,
+2 -2
kernel/kprobes.c
···
 	struct optimized_kprobe *op;
 
 	op = container_of(p, struct optimized_kprobe, kp);
-	arch_prepare_optimized_kprobe(op);
+	arch_prepare_optimized_kprobe(op, p);
 }
 
 /* Allocate new optimized_kprobe and try to prepare optimized instructions */
···
 
 	INIT_LIST_HEAD(&op->list);
 	op->kp.addr = p->addr;
-	arch_prepare_optimized_kprobe(op);
+	arch_prepare_optimized_kprobe(op, p);
 
 	return &op->kp;
 }
+9
lib/Kconfig
···
 config BITREVERSE
 	tristate
 
+config HAVE_ARCH_BITREVERSE
+	boolean
+	default n
+	depends on BITREVERSE
+	help
+	  This option provides an config for the architecture which have instruction
+	  can do bitreverse operation, we use the hardware instruction if the architecture
+	  have this capability.
+
 config RATIONAL
 	boolean
 
+2 -15
lib/bitrev.c
···
+#ifndef CONFIG_HAVE_ARCH_BITREVERSE
 #include <linux/types.h>
 #include <linux/module.h>
 #include <linux/bitrev.h>
···
 };
 EXPORT_SYMBOL_GPL(byte_rev_table);
 
-u16 bitrev16(u16 x)
-{
-	return (bitrev8(x & 0xff) << 8) | bitrev8(x >> 8);
-}
-EXPORT_SYMBOL(bitrev16);
-
-/**
- * bitrev32 - reverse the order of bits in a u32 value
- * @x: value to be bit-reversed
- */
-u32 bitrev32(u32 x)
-{
-	return (bitrev16(x & 0xffff) << 16) | bitrev16(x >> 16);
-}
-EXPORT_SYMBOL(bitrev32);
+#endif /* CONFIG_HAVE_ARCH_BITREVERSE */