Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux

Pull arm64 updates from Catalin Marinas:
- eBPF JIT compiler for arm64
- CPU suspend backend for PSCI (firmware interface) with standard idle
states defined in DT (generic idle driver to be merged via a
different tree)
- Support for CONFIG_DEBUG_SET_MODULE_RONX
- Support for unmapped cpu-release-addr (outside kernel linear mapping)
- set_arch_dma_coherent_ops() implemented and bus notifiers removed
- EFI_STUB improvements when base of DRAM is occupied
- Fix typos in KGDB macros
- Clean-up to (partially) allow kernel building with LLVM
- Other clean-ups (extern keyword, phys_addr_t usage)

* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (51 commits)
arm64: Remove unneeded extern keyword
ARM64: make of_device_ids const
arm64: Use phys_addr_t type for physical address
aarch64: filter $x from kallsyms
arm64: Use DMA_ERROR_CODE to denote failed allocation
arm64: Fix typos in KGDB macros
arm64: insn: Add return statements after BUG_ON()
arm64: debug: don't re-enable debug exceptions on return from el1_dbg
Revert "arm64: dmi: Add SMBIOS/DMI support"
arm64: Implement set_arch_dma_coherent_ops() to replace bus notifiers
of: amba: use of_dma_configure for AMBA devices
arm64: dmi: Add SMBIOS/DMI support
arm64: Correct ftrace calls to aarch64_insn_gen_branch_imm()
arm64:mm: initialize max_mapnr using function set_max_mapnr
setup: Move unmask of async interrupts after possible earlycon setup
arm64: LLVMLinux: Fix inline arm64 assembly for use with clang
arm64: pageattr: Correctly adjust unaligned start addresses
net: bpf: arm64: fix module memory leak when JIT image build fails
arm64: add PSCI CPU_SUSPEND based cpu_suspend support
arm64: kernel: introduce cpu_init_idle CPU operation
...

+2979 -189
+8
Documentation/devicetree/bindings/arm/cpus.txt
···
 		Value type: <phandle>
 		Definition: Specifies the ACC[2] node associated with this CPU.

+	- cpu-idle-states
+		Usage: Optional
+		Value type: <prop-encoded-array>
+		Definition:
+			# List of phandles to idle state nodes supported
+			  by this cpu [3].

 Example 1 (dual-cluster big.LITTLE system 32-bit):
···
 --
 [1] arm/msm/qcom,saw2.txt
 [2] arm/msm/qcom,kpss-acc.txt
+[3] ARM Linux kernel documentation - idle states bindings
+    Documentation/devicetree/bindings/arm/idle-states.txt
+679
Documentation/devicetree/bindings/arm/idle-states.txt
==========================================
ARM idle states binding description
==========================================

==========================================
1 - Introduction
==========================================

ARM systems contain HW capable of managing power consumption dynamically,
where cores can be put in different low-power states (ranging from simple
wfi to power gating) according to OS PM policies. The CPU states
representing the range of dynamic idle states that a processor can enter at
run-time can be specified through device tree bindings representing the
parameters required to enter/exit specific idle states on a given processor.

According to the Server Base System Architecture document (SBSA, [3]), the
power states an ARM CPU can be put into are identified by the following list:

- Running
- Idle_standby
- Idle_retention
- Sleep
- Off

The power states described in the SBSA document define the basic CPU states on
top of which ARM platforms implement power management schemes that allow an OS
PM implementation to put the processor in different idle states (which include
states listed above; the "off" state is not an idle state since it does not
have wake-up capabilities, hence it is not considered in this document).

Idle state parameters (e.g. entry latency) are platform specific and need to
be characterized with bindings that provide the required information to OS PM
code so that it can build the required tables and use them at runtime.

The device tree binding definition for ARM idle states is the subject of this
document.
===========================================
2 - idle-states definitions
===========================================

Idle states are characterized for a specific system through a set of
timing and energy related properties that underline the HW behaviour
triggered upon idle states entry and exit.

The following diagram depicts the CPU execution phases and related timing
properties required to enter and exit an idle state:

..__[EXEC]__|__[PREP]__|__[ENTRY]__|__[IDLE]__|__[EXIT]__|__[EXEC]__..
	    |          |           |          |          |

	    |<------ entry ------->|
	    |       latency        |
					      |<- exit ->|
					      | latency  |
	    |<-------- min-residency -------->|
		       |<------- wakeup-latency ------->|

	Diagram 1: CPU idle state execution phases

EXEC:	Normal CPU execution.

PREP:	Preparation phase before committing the hardware to idle mode
	like cache flushing. This is abortable on pending wake-up
	event conditions. The abort latency is assumed to be negligible
	(i.e. less than the ENTRY + EXIT duration). If aborted, CPU
	goes back to EXEC. This phase is optional. If not abortable,
	this should be included in the ENTRY phase instead.

ENTRY:	The hardware is committed to idle mode. This period must run
	to completion up to IDLE before anything else can happen.

IDLE:	This is the actual energy-saving idle period. This may last
	between 0 and infinite time, until a wake-up event occurs.

EXIT:	Period during which the CPU is brought back to operational
	mode (EXEC).

entry-latency: Worst case latency required to enter the idle state. The
exit-latency may be guaranteed only after entry-latency has passed.

min-residency: Minimum period, including preparation and entry, for a given
idle state to be worthwhile energywise.
wakeup-latency: Maximum delay between the signaling of a wake-up event and the
CPU being able to execute normal code again. If not specified, this is assumed
to be entry-latency + exit-latency.

These timing parameters can be used by an OS in different circumstances.

An idle CPU requires the expected min-residency time to select the most
appropriate idle state based on the expected expiry time of the next IRQ
(i.e. wake-up) that causes the CPU to return to the EXEC phase.

An operating system scheduler may need to compute the shortest wake-up delay
for CPUs in the system by detecting how long it will take to get a CPU out
of an idle state, e.g.:

	wakeup-delay = exit-latency + max(entry-latency - (now - entry-timestamp), 0)

In other words, the scheduler can make its scheduling decision by selecting
(e.g. waking up) the CPU with the shortest wake-up latency.
The wake-up latency must take into account the entry latency if that period
has not expired. The abortable nature of the PREP period can be ignored
if it cannot be relied upon (e.g. the PREP deadline may occur much sooner than
the worst case since it depends on the CPU operating conditions, i.e. caches
state).

An OS has to reliably probe the wakeup-latency since some devices require
latency constraint guarantees to work properly, so the OS has to detect the
worst case wake-up latency it can incur if a CPU is allowed to enter an
idle state, and possibly to prevent that to guarantee reliable device
functioning.

The min-residency time parameter deserves further explanation since it is
expressed in time units but must factor in energy consumption coefficients.
The energy consumption of a cpu when it enters a power state can be roughly
characterised by the following graph:

	       |
	       |
	       |
	   e   |
	   n   |                                   /---
	   e   |                             /------
	   r   |                       /------
	   g   |                 /-----
	   y   |           /------
	       |       ----
	       |      /|
	       |     / |
	       |    /  |
	       |   /   |
	       |  /    |
	       | /     |
	       |/      |
	  -----|-------+----------------------------------
	      0|       1                          time(ms)

	Graph 1: Energy vs time example

The graph is split in two parts delimited by time 1ms on the X-axis.
The graph curve with X-axis values = { x | 0 < x < 1ms } has a steep slope
and denotes the energy costs incurred whilst entering and leaving the idle
state.
The graph curve in the area delimited by X-axis values = { x | x > 1ms } has
a shallower slope and essentially represents the energy consumption of the
idle state.

min-residency is defined for a given idle state as the minimum expected
residency time for a state (inclusive of preparation and entry) after
which choosing that state becomes the most energy efficient option. A good
way to visualise this is by taking the same graph above and comparing some
states' energy consumption plots.
For the sake of simplicity, let's consider a system with two idle states,
IDLE1 and IDLE2:

	       |
	       |
	       |
	       |                                      /-- IDLE1
	   e   |                                  /---
	   n   |                              /----
	   e   |                          /---
	   r   |                      /-----/--------- IDLE2
	   g   |                  /-------/---------
	   y   |    ------------      /---|
	       |   /             /----    |
	       |  /          /---         |
	       | /       /----            |
	       |/    /---                 |
	       | ---                      |
	       |/                         |
	       |                          |           time
	  -----/--------------------------+------------------------
	       |IDLE1-energy < IDLE2-energy | IDLE2-energy < IDLE1-energy
					  |
				IDLE2-min-residency

	Graph 2: idle states min-residency example

In graph 2 above, which takes into account idle states entry/exit energy
costs, it is clear that if the idle state residency time (i.e. time till next
wake-up IRQ) is less than IDLE2-min-residency, IDLE1 is the better idle state
choice energywise.

This is mainly down to the fact that IDLE1 entry/exit energy costs are lower
than IDLE2's.

However, the lower power consumption (i.e. shallower energy curve slope) of
idle state IDLE2 implies that after a suitable time, IDLE2 becomes more energy
efficient.

The time at which IDLE2 becomes more energy efficient than IDLE1 (and other
shallower states in a system with multiple idle states) is defined as
IDLE2-min-residency and corresponds to the time when energy consumption of
IDLE1 and IDLE2 states breaks even.

The definitions provided in this section underpin the idle states
properties specification that is the subject of the following sections.
===========================================
3 - idle-states node
===========================================

ARM processor idle states are defined within the idle-states node, which is
a direct child of the cpus node [1] and provides a container where the
processor idle states, defined as device tree nodes, are listed.

- idle-states node

	Usage: Optional - On ARM systems, it is a container of processor idle
	       states nodes. If the system does not provide CPU power
	       management capabilities, or the processor just supports
	       idle_standby, an idle-states node is not required.

	Description: idle-states node is a container node, where its
		     subnodes describe the CPU idle states.

	Node name must be "idle-states".

	The idle-states node's parent node must be the cpus node.

	The idle-states node's child nodes can be:

	- one or more state nodes

	Any other configuration is considered invalid.

	An idle-states node defines the following properties:

	- entry-method
		Value type: <stringlist>
		Usage and definition depend on ARM architecture version.
			# On ARM v8 64-bit this property is required and must
			  be one of:
			   - "psci" (see bindings in [2])
			# On ARM 32-bit systems this property is optional

The nodes describing the idle states (state) can only be defined within the
idle-states node; any other configuration is considered invalid and therefore
must be ignored.
===========================================
4 - state node
===========================================

A state node represents an idle state description and must be defined as
follows:

- state node

	Description: must be a child of the idle-states node

	The state node name shall follow standard device tree naming
	rules ([5], 2.2.1 "Node names"); in particular, state nodes which
	are siblings within a single common parent must be given a unique
	name.

	The idle state entered by executing the wfi instruction (idle_standby
	SBSA, [3][4]) is considered standard on all ARM platforms and
	therefore must not be listed.

	With the definitions provided above, the following list represents
	the valid properties for a state node:

	- compatible
		Usage: Required
		Value type: <stringlist>
		Definition: Must be "arm,idle-state".

	- local-timer-stop
		Usage: See definition
		Value type: <none>
		Definition: if present, the CPU local timer control logic is
			    lost on state entry, otherwise it is retained.

	- entry-latency-us
		Usage: Required
		Value type: <prop-encoded-array>
		Definition: u32 value representing worst case latency in
			    microseconds required to enter the idle state.
			    The exit-latency-us duration may be guaranteed
			    only after entry-latency-us has passed.

	- exit-latency-us
		Usage: Required
		Value type: <prop-encoded-array>
		Definition: u32 value representing worst case latency
			    in microseconds required to exit the idle state.
	- min-residency-us
		Usage: Required
		Value type: <prop-encoded-array>
		Definition: u32 value representing minimum residency duration
			    in microseconds, inclusive of preparation and
			    entry, for this idle state to be considered
			    worthwhile energy wise (refer to section 2 of
			    this document for a complete description).

	- wakeup-latency-us
		Usage: Optional
		Value type: <prop-encoded-array>
		Definition: u32 value representing maximum delay between the
			    signaling of a wake-up event and the CPU being
			    able to execute normal code again. If omitted,
			    this is assumed to be equal to:

				entry-latency-us + exit-latency-us

			    It is important to supply this value on systems
			    where the duration of the PREP phase (see diagram
			    1, section 2) is non-negligible. In such systems
			    entry-latency-us + exit-latency-us will exceed
			    wakeup-latency-us by this duration.

In addition to the properties listed above, a state node may require
additional properties specific to the entry-method defined in the
idle-states node; please refer to the entry-method bindings
documentation for property definitions.
===========================================
5 - Examples
===========================================

Example 1 (ARM 64-bit, 16-cpu system, PSCI enable-method):

cpus {
	#size-cells = <0>;
	#address-cells = <2>;

	CPU0: cpu@0 {
		device_type = "cpu";
		compatible = "arm,cortex-a57";
		reg = <0x0 0x0>;
		enable-method = "psci";
		cpu-idle-states = <&CPU_RETENTION_0_0 &CPU_SLEEP_0_0
				   &CLUSTER_RETENTION_0 &CLUSTER_SLEEP_0>;
	};

	CPU1: cpu@1 {
		device_type = "cpu";
		compatible = "arm,cortex-a57";
		reg = <0x0 0x1>;
		enable-method = "psci";
		cpu-idle-states = <&CPU_RETENTION_0_0 &CPU_SLEEP_0_0
				   &CLUSTER_RETENTION_0 &CLUSTER_SLEEP_0>;
	};

	CPU2: cpu@100 {
		device_type = "cpu";
		compatible = "arm,cortex-a57";
		reg = <0x0 0x100>;
		enable-method = "psci";
		cpu-idle-states = <&CPU_RETENTION_0_0 &CPU_SLEEP_0_0
				   &CLUSTER_RETENTION_0 &CLUSTER_SLEEP_0>;
	};

	CPU3: cpu@101 {
		device_type = "cpu";
		compatible = "arm,cortex-a57";
		reg = <0x0 0x101>;
		enable-method = "psci";
		cpu-idle-states = <&CPU_RETENTION_0_0 &CPU_SLEEP_0_0
				   &CLUSTER_RETENTION_0 &CLUSTER_SLEEP_0>;
	};

	CPU4: cpu@10000 {
		device_type = "cpu";
		compatible = "arm,cortex-a57";
		reg = <0x0 0x10000>;
		enable-method = "psci";
		cpu-idle-states = <&CPU_RETENTION_0_0 &CPU_SLEEP_0_0
				   &CLUSTER_RETENTION_0 &CLUSTER_SLEEP_0>;
	};

	CPU5: cpu@10001 {
		device_type = "cpu";
		compatible = "arm,cortex-a57";
		reg = <0x0 0x10001>;
		enable-method = "psci";
		cpu-idle-states = <&CPU_RETENTION_0_0 &CPU_SLEEP_0_0
				   &CLUSTER_RETENTION_0 &CLUSTER_SLEEP_0>;
	};

	CPU6: cpu@10100 {
		device_type = "cpu";
		compatible = "arm,cortex-a57";
		reg = <0x0 0x10100>;
		enable-method = "psci";
		cpu-idle-states = <&CPU_RETENTION_0_0 &CPU_SLEEP_0_0
				   &CLUSTER_RETENTION_0 &CLUSTER_SLEEP_0>;
	};

	CPU7: cpu@10101 {
		device_type = "cpu";
		compatible = "arm,cortex-a57";
		reg = <0x0 0x10101>;
		enable-method = "psci";
		cpu-idle-states = <&CPU_RETENTION_0_0 &CPU_SLEEP_0_0
				   &CLUSTER_RETENTION_0 &CLUSTER_SLEEP_0>;
	};

	CPU8: cpu@100000000 {
		device_type = "cpu";
		compatible = "arm,cortex-a53";
		reg = <0x1 0x0>;
		enable-method = "psci";
		cpu-idle-states = <&CPU_RETENTION_1_0 &CPU_SLEEP_1_0
				   &CLUSTER_RETENTION_1 &CLUSTER_SLEEP_1>;
	};

	CPU9: cpu@100000001 {
		device_type = "cpu";
		compatible = "arm,cortex-a53";
		reg = <0x1 0x1>;
		enable-method = "psci";
		cpu-idle-states = <&CPU_RETENTION_1_0 &CPU_SLEEP_1_0
				   &CLUSTER_RETENTION_1 &CLUSTER_SLEEP_1>;
	};

	CPU10: cpu@100000100 {
		device_type = "cpu";
		compatible = "arm,cortex-a53";
		reg = <0x1 0x100>;
		enable-method = "psci";
		cpu-idle-states = <&CPU_RETENTION_1_0 &CPU_SLEEP_1_0
				   &CLUSTER_RETENTION_1 &CLUSTER_SLEEP_1>;
	};

	CPU11: cpu@100000101 {
		device_type = "cpu";
		compatible = "arm,cortex-a53";
		reg = <0x1 0x101>;
		enable-method = "psci";
		cpu-idle-states = <&CPU_RETENTION_1_0 &CPU_SLEEP_1_0
				   &CLUSTER_RETENTION_1 &CLUSTER_SLEEP_1>;
	};

	CPU12: cpu@100010000 {
		device_type = "cpu";
		compatible = "arm,cortex-a53";
		reg = <0x1 0x10000>;
		enable-method = "psci";
		cpu-idle-states = <&CPU_RETENTION_1_0 &CPU_SLEEP_1_0
				   &CLUSTER_RETENTION_1 &CLUSTER_SLEEP_1>;
	};

	CPU13: cpu@100010001 {
		device_type = "cpu";
		compatible = "arm,cortex-a53";
		reg = <0x1 0x10001>;
		enable-method = "psci";
		cpu-idle-states = <&CPU_RETENTION_1_0 &CPU_SLEEP_1_0
				   &CLUSTER_RETENTION_1 &CLUSTER_SLEEP_1>;
	};

	CPU14: cpu@100010100 {
		device_type = "cpu";
		compatible = "arm,cortex-a53";
		reg = <0x1 0x10100>;
		enable-method = "psci";
		cpu-idle-states = <&CPU_RETENTION_1_0 &CPU_SLEEP_1_0
				   &CLUSTER_RETENTION_1 &CLUSTER_SLEEP_1>;
	};

	CPU15: cpu@100010101 {
		device_type = "cpu";
		compatible = "arm,cortex-a53";
		reg = <0x1 0x10101>;
		enable-method = "psci";
		cpu-idle-states = <&CPU_RETENTION_1_0 &CPU_SLEEP_1_0
				   &CLUSTER_RETENTION_1 &CLUSTER_SLEEP_1>;
	};

	idle-states {
		entry-method = "arm,psci";

		CPU_RETENTION_0_0: cpu-retention-0-0 {
			compatible = "arm,idle-state";
			arm,psci-suspend-param = <0x0010000>;
			entry-latency-us = <20>;
			exit-latency-us = <40>;
			min-residency-us = <80>;
		};

		CLUSTER_RETENTION_0: cluster-retention-0 {
			compatible = "arm,idle-state";
			local-timer-stop;
			arm,psci-suspend-param = <0x1010000>;
			entry-latency-us = <50>;
			exit-latency-us = <100>;
			min-residency-us = <250>;
			wakeup-latency-us = <130>;
		};

		CPU_SLEEP_0_0: cpu-sleep-0-0 {
			compatible = "arm,idle-state";
			local-timer-stop;
			arm,psci-suspend-param = <0x0010000>;
			entry-latency-us = <250>;
			exit-latency-us = <500>;
			min-residency-us = <950>;
		};

		CLUSTER_SLEEP_0: cluster-sleep-0 {
			compatible = "arm,idle-state";
			local-timer-stop;
			arm,psci-suspend-param = <0x1010000>;
			entry-latency-us = <600>;
			exit-latency-us = <1100>;
			min-residency-us = <2700>;
			wakeup-latency-us = <1500>;
		};

		CPU_RETENTION_1_0: cpu-retention-1-0 {
			compatible = "arm,idle-state";
			arm,psci-suspend-param = <0x0010000>;
			entry-latency-us = <20>;
			exit-latency-us = <40>;
			min-residency-us = <90>;
		};

		CLUSTER_RETENTION_1: cluster-retention-1 {
			compatible = "arm,idle-state";
			local-timer-stop;
			arm,psci-suspend-param = <0x1010000>;
			entry-latency-us = <50>;
			exit-latency-us = <100>;
			min-residency-us = <270>;
			wakeup-latency-us = <100>;
		};

		CPU_SLEEP_1_0: cpu-sleep-1-0 {
			compatible = "arm,idle-state";
			local-timer-stop;
			arm,psci-suspend-param = <0x0010000>;
			entry-latency-us = <70>;
			exit-latency-us = <100>;
			min-residency-us = <300>;
			wakeup-latency-us = <150>;
		};

		CLUSTER_SLEEP_1: cluster-sleep-1 {
			compatible = "arm,idle-state";
			local-timer-stop;
			arm,psci-suspend-param = <0x1010000>;
			entry-latency-us = <500>;
			exit-latency-us = <1200>;
			min-residency-us = <3500>;
			wakeup-latency-us = <1300>;
		};
	};

};

Example 2 (ARM 32-bit, 8-cpu system, two clusters):

cpus {
	#size-cells = <0>;
	#address-cells = <1>;

	CPU0: cpu@0 {
		device_type = "cpu";
		compatible = "arm,cortex-a15";
		reg = <0x0>;
		cpu-idle-states = <&CPU_SLEEP_0_0 &CLUSTER_SLEEP_0>;
	};

	CPU1: cpu@1 {
		device_type = "cpu";
		compatible = "arm,cortex-a15";
		reg = <0x1>;
		cpu-idle-states = <&CPU_SLEEP_0_0 &CLUSTER_SLEEP_0>;
	};

	CPU2: cpu@2 {
		device_type = "cpu";
		compatible = "arm,cortex-a15";
		reg = <0x2>;
		cpu-idle-states = <&CPU_SLEEP_0_0 &CLUSTER_SLEEP_0>;
	};

	CPU3: cpu@3 {
		device_type = "cpu";
		compatible = "arm,cortex-a15";
		reg = <0x3>;
		cpu-idle-states = <&CPU_SLEEP_0_0 &CLUSTER_SLEEP_0>;
	};

	CPU4: cpu@100 {
		device_type = "cpu";
		compatible = "arm,cortex-a7";
		reg = <0x100>;
		cpu-idle-states = <&CPU_SLEEP_1_0 &CLUSTER_SLEEP_1>;
	};

	CPU5: cpu@101 {
		device_type = "cpu";
		compatible = "arm,cortex-a7";
		reg = <0x101>;
		cpu-idle-states = <&CPU_SLEEP_1_0 &CLUSTER_SLEEP_1>;
	};

	CPU6: cpu@102 {
		device_type = "cpu";
		compatible = "arm,cortex-a7";
		reg = <0x102>;
		cpu-idle-states = <&CPU_SLEEP_1_0 &CLUSTER_SLEEP_1>;
	};

	CPU7: cpu@103 {
		device_type = "cpu";
		compatible = "arm,cortex-a7";
		reg = <0x103>;
		cpu-idle-states = <&CPU_SLEEP_1_0 &CLUSTER_SLEEP_1>;
	};

	idle-states {
		CPU_SLEEP_0_0: cpu-sleep-0-0 {
			compatible = "arm,idle-state";
			local-timer-stop;
			entry-latency-us = <200>;
			exit-latency-us = <100>;
			min-residency-us = <400>;
			wakeup-latency-us = <250>;
		};

		CLUSTER_SLEEP_0: cluster-sleep-0 {
			compatible = "arm,idle-state";
			local-timer-stop;
			entry-latency-us = <500>;
			exit-latency-us = <1500>;
			min-residency-us = <2500>;
			wakeup-latency-us = <1700>;
		};

		CPU_SLEEP_1_0: cpu-sleep-1-0 {
			compatible = "arm,idle-state";
			local-timer-stop;
			entry-latency-us = <300>;
			exit-latency-us = <500>;
			min-residency-us = <900>;
			wakeup-latency-us = <600>;
		};

		CLUSTER_SLEEP_1: cluster-sleep-1 {
			compatible = "arm,idle-state";
			local-timer-stop;
			entry-latency-us = <800>;
			exit-latency-us = <2000>;
			min-residency-us = <6500>;
			wakeup-latency-us = <2300>;
		};
	};

};

===========================================
6 - References
===========================================

[1] ARM Linux Kernel documentation - CPUs bindings
    Documentation/devicetree/bindings/arm/cpus.txt

[2] ARM Linux Kernel documentation - PSCI bindings
    Documentation/devicetree/bindings/arm/psci.txt

[3] ARM Server Base System Architecture (SBSA)
    http://infocenter.arm.com/help/index.jsp

[4] ARM Architecture Reference Manuals
    http://infocenter.arm.com/help/index.jsp

[5] ePAPR standard
    https://www.power.org/documentation/epapr-version-1-1/
+13 -1
Documentation/devicetree/bindings/arm/psci.txt
··· 50 50 51 51 - migrate : Function ID for MIGRATE operation 52 52 53 + Device tree nodes that require usage of PSCI CPU_SUSPEND function (ie idle 54 + state nodes, as per bindings in [1]) must specify the following properties: 55 + 56 + - arm,psci-suspend-param 57 + Usage: Required for state nodes[1] if the corresponding 58 + idle-states node entry-method property is set 59 + to "psci". 60 + Value type: <u32> 61 + Definition: power_state parameter to pass to the PSCI 62 + suspend call. 53 63 54 64 Example: 55 65 ··· 73 63 cpu_on = <0x95c10002>; 74 64 migrate = <0x95c10003>; 75 65 }; 76 - 77 66 78 67 Case 2: PSCI v0.2 only 79 68 ··· 97 88 98 89 ... 99 90 }; 91 + 92 + [1] Kernel documentation - ARM idle states bindings 93 + Documentation/devicetree/bindings/arm/idle-states.txt
+3 -3
Documentation/networking/filter.txt
···
 ------------

 The Linux kernel has a built-in BPF JIT compiler for x86_64, SPARC, PowerPC,
-ARM, MIPS and s390 and can be enabled through CONFIG_BPF_JIT. The JIT compiler
-is transparently invoked for each attached filter from user space or for
-internal kernel users if it has been previously enabled by root:
+ARM, ARM64, MIPS and s390 and can be enabled through CONFIG_BPF_JIT. The JIT
+compiler is transparently invoked for each attached filter from user space
+or for internal kernel users if it has been previously enabled by root:

   echo 1 > /proc/sys/net/core/bpf_jit_enable

+4 -3
arch/arm64/Kconfig
···
 	select HAVE_ARCH_JUMP_LABEL
 	select HAVE_ARCH_KGDB
 	select HAVE_ARCH_TRACEHOOK
+	select HAVE_BPF_JIT
 	select HAVE_C_RECORDMCOUNT
 	select HAVE_CC_STACKPROTECTOR
 	select HAVE_DEBUG_BUGVERBOSE
···
 	  places. If unsure say N here.

 config NR_CPUS
-	int "Maximum number of CPUs (2-32)"
-	range 2 32
+	int "Maximum number of CPUs (2-64)"
+	range 2 64
 	depends on SMP
 	# These have to remain sorted largest to smallest
-	default "8"
+	default "64"

 config HOTPLUG_CPU
 	bool "Support for hot-pluggable CPUs"
+11
arch/arm64/Kconfig.debug
···
 	  of TEXT_OFFSET and platforms must not require a specific
 	  value.

+config DEBUG_SET_MODULE_RONX
+	bool "Set loadable kernel module data as NX and text as RO"
+	depends on MODULES
+	help
+	  This option helps catch unintended modifications to loadable
+	  kernel module's text and read-only data. It also prevents execution
+	  of module data. Such protection may interfere with run-time code
+	  patching and dynamic kernel tracing - and they might also protect
+	  against certain classes of kernel exploits.
+	  If in doubt, say "N".
+
 endmenu
+1
arch/arm64/Makefile
···
 export	TEXT_OFFSET GZFLAGS

 core-y		+= arch/arm64/kernel/ arch/arm64/mm/
+core-$(CONFIG_NET) += arch/arm64/net/
 core-$(CONFIG_KVM) += arch/arm64/kvm/
 core-$(CONFIG_XEN) += arch/arm64/xen/
 core-$(CONFIG_CRYPTO) += arch/arm64/crypto/
+4
arch/arm64/include/asm/cacheflush.h
···
 {
 }

+int set_memory_ro(unsigned long addr, int numpages);
+int set_memory_rw(unsigned long addr, int numpages);
+int set_memory_x(unsigned long addr, int numpages);
+int set_memory_nx(unsigned long addr, int numpages);
 #endif
+20
arch/arm64/include/asm/cachetype.h
···
 extern unsigned long __icache_flags;

+#define CCSIDR_EL1_LINESIZE_MASK	0x7
+#define CCSIDR_EL1_LINESIZE(x)		((x) & CCSIDR_EL1_LINESIZE_MASK)
+
+#define CCSIDR_EL1_NUMSETS_SHIFT	13
+#define CCSIDR_EL1_NUMSETS_MASK		(0x7fff << CCSIDR_EL1_NUMSETS_SHIFT)
+#define CCSIDR_EL1_NUMSETS(x) \
+	(((x) & CCSIDR_EL1_NUMSETS_MASK) >> CCSIDR_EL1_NUMSETS_SHIFT)
+
+extern u64 __attribute_const__ icache_get_ccsidr(void);
+
+static inline int icache_get_linesize(void)
+{
+	return 16 << CCSIDR_EL1_LINESIZE(icache_get_ccsidr());
+}
+
+static inline int icache_get_numsets(void)
+{
+	return 1 + CCSIDR_EL1_NUMSETS(icache_get_ccsidr());
+}
+
 /*
  * Whilst the D-side always behaves as PIPT on AArch64, aliasing is
  * permitted in the I-cache.
+5 -2
arch/arm64/include/asm/cpu_ops.h
···
  *		enable-method property.
  * @cpu_init:	Reads any data necessary for a specific enable-method from the
  *		devicetree, for a given cpu node and proposed logical id.
+ * @cpu_init_idle: Reads any data necessary to initialize CPU idle states from
+ *		devicetree, for a given cpu node and proposed logical id.
  * @cpu_prepare: Early one-time preparation step for a cpu. If there is a
  *		mechanism for doing so, tests whether it is possible to boot
  *		the given CPU.
···
 struct cpu_operations {
 	const char	*name;
 	int		(*cpu_init)(struct device_node *, unsigned int);
+	int		(*cpu_init_idle)(struct device_node *, unsigned int);
 	int		(*cpu_prepare)(unsigned int);
 	int		(*cpu_boot)(unsigned int);
 	void		(*cpu_postboot)(void);
···
 };

 extern const struct cpu_operations *cpu_ops[NR_CPUS];
-extern int __init cpu_read_ops(struct device_node *dn, int cpu);
-extern void __init cpu_read_bootcpu_ops(void);
+int __init cpu_read_ops(struct device_node *dn, int cpu);
+void __init cpu_read_bootcpu_ops(void);

 #endif /* ifndef __ASM_CPU_OPS_H */
+13
arch/arm64/include/asm/cpuidle.h
#ifndef __ASM_CPUIDLE_H
#define __ASM_CPUIDLE_H

#ifdef CONFIG_CPU_IDLE
extern int cpu_init_idle(unsigned int cpu);
#else
static inline int cpu_init_idle(unsigned int cpu)
{
	return -EOPNOTSUPP;
}
#endif

#endif
+19 -11
arch/arm64/include/asm/debug-monitors.h
···
 /*
  * #imm16 values used for BRK instruction generation
  * Allowed values for kgbd are 0x400 - 0x7ff
+ * 0x100: for triggering a fault on purpose (reserved)
  * 0x400: for dynamic BRK instruction
  * 0x401: for compile time BRK instruction
  */
-#define KGDB_DYN_DGB_BRK_IMM		0x400
-#define KDBG_COMPILED_DBG_BRK_IMM	0x401
+#define FAULT_BRK_IMM			0x100
+#define KGDB_DYN_DBG_BRK_IMM		0x400
+#define KGDB_COMPILED_DBG_BRK_IMM	0x401

 /*
  * BRK instruction encoding
···
 #define AARCH64_BREAK_MON	0xd4200000

 /*
+ * BRK instruction for provoking a fault on purpose
+ * Unlike kgdb, #imm16 value with unallocated handler is used for faulting.
+ */
+#define AARCH64_BREAK_FAULT	(AARCH64_BREAK_MON | (FAULT_BRK_IMM << 5))
+
+/*
  * Extract byte from BRK instruction
  */
-#define KGDB_DYN_DGB_BRK_INS_BYTE(x) \
+#define KGDB_DYN_DBG_BRK_INS_BYTE(x) \
 	((((AARCH64_BREAK_MON) & 0xffe0001f) >> (x * 8)) & 0xff)

 /*
  * Extract byte from BRK #imm16
  */
-#define KGBD_DYN_DGB_BRK_IMM_BYTE(x) \
-	(((((KGDB_DYN_DGB_BRK_IMM) & 0xffff) << 5) >> (x * 8)) & 0xff)
+#define KGBD_DYN_DBG_BRK_IMM_BYTE(x) \
+	(((((KGDB_DYN_DBG_BRK_IMM) & 0xffff) << 5) >> (x * 8)) & 0xff)

-#define KGDB_DYN_DGB_BRK_BYTE(x) \
-	(KGDB_DYN_DGB_BRK_INS_BYTE(x) | KGBD_DYN_DGB_BRK_IMM_BYTE(x))
+#define KGDB_DYN_DBG_BRK_BYTE(x) \
+	(KGDB_DYN_DBG_BRK_INS_BYTE(x) | KGBD_DYN_DBG_BRK_IMM_BYTE(x))

-#define  KGDB_DYN_BRK_INS_BYTE0  KGDB_DYN_DGB_BRK_BYTE(0)
-#define  KGDB_DYN_BRK_INS_BYTE1  KGDB_DYN_DGB_BRK_BYTE(1)
-#define  KGDB_DYN_BRK_INS_BYTE2  KGDB_DYN_DGB_BRK_BYTE(2)
-#define  KGDB_DYN_BRK_INS_BYTE3  KGDB_DYN_DGB_BRK_BYTE(3)
+#define  KGDB_DYN_BRK_INS_BYTE0  KGDB_DYN_DBG_BRK_BYTE(0)
+#define  KGDB_DYN_BRK_INS_BYTE1  KGDB_DYN_DBG_BRK_BYTE(1)
+#define  KGDB_DYN_BRK_INS_BYTE2  KGDB_DYN_DBG_BRK_BYTE(2)
+#define  KGDB_DYN_BRK_INS_BYTE3  KGDB_DYN_DBG_BRK_BYTE(3)

 #define CACHE_FLUSH_IS_SAFE		1
+7
arch/arm64/include/asm/dma-mapping.h
··· 52 52 dev->archdata.dma_ops = ops; 53 53 } 54 54 55 + static inline int set_arch_dma_coherent_ops(struct device *dev) 56 + { 57 + set_dma_ops(dev, &coherent_swiotlb_dma_ops); 58 + return 0; 59 + } 60 + #define set_arch_dma_coherent_ops set_arch_dma_coherent_ops 61 + 55 62 #include <asm-generic/dma-mapping-common.h> 56 63 57 64 static inline dma_addr_t phys_to_dma(struct device *dev, phys_addr_t paddr)
+249
arch/arm64/include/asm/insn.h
··· 2 2 * Copyright (C) 2013 Huawei Ltd. 3 3 * Author: Jiang Liu <liuj97@gmail.com> 4 4 * 5 + * Copyright (C) 2014 Zi Shen Lim <zlim.lnx@gmail.com> 6 + * 5 7 * This program is free software; you can redistribute it and/or modify 6 8 * it under the terms of the GNU General Public License version 2 as 7 9 * published by the Free Software Foundation. ··· 66 64 AARCH64_INSN_IMM_14, 67 65 AARCH64_INSN_IMM_12, 68 66 AARCH64_INSN_IMM_9, 67 + AARCH64_INSN_IMM_7, 68 + AARCH64_INSN_IMM_6, 69 + AARCH64_INSN_IMM_S, 70 + AARCH64_INSN_IMM_R, 69 71 AARCH64_INSN_IMM_MAX 72 + }; 73 + 74 + enum aarch64_insn_register_type { 75 + AARCH64_INSN_REGTYPE_RT, 76 + AARCH64_INSN_REGTYPE_RN, 77 + AARCH64_INSN_REGTYPE_RT2, 78 + AARCH64_INSN_REGTYPE_RM, 79 + AARCH64_INSN_REGTYPE_RD, 80 + AARCH64_INSN_REGTYPE_RA, 81 + }; 82 + 83 + enum aarch64_insn_register { 84 + AARCH64_INSN_REG_0 = 0, 85 + AARCH64_INSN_REG_1 = 1, 86 + AARCH64_INSN_REG_2 = 2, 87 + AARCH64_INSN_REG_3 = 3, 88 + AARCH64_INSN_REG_4 = 4, 89 + AARCH64_INSN_REG_5 = 5, 90 + AARCH64_INSN_REG_6 = 6, 91 + AARCH64_INSN_REG_7 = 7, 92 + AARCH64_INSN_REG_8 = 8, 93 + AARCH64_INSN_REG_9 = 9, 94 + AARCH64_INSN_REG_10 = 10, 95 + AARCH64_INSN_REG_11 = 11, 96 + AARCH64_INSN_REG_12 = 12, 97 + AARCH64_INSN_REG_13 = 13, 98 + AARCH64_INSN_REG_14 = 14, 99 + AARCH64_INSN_REG_15 = 15, 100 + AARCH64_INSN_REG_16 = 16, 101 + AARCH64_INSN_REG_17 = 17, 102 + AARCH64_INSN_REG_18 = 18, 103 + AARCH64_INSN_REG_19 = 19, 104 + AARCH64_INSN_REG_20 = 20, 105 + AARCH64_INSN_REG_21 = 21, 106 + AARCH64_INSN_REG_22 = 22, 107 + AARCH64_INSN_REG_23 = 23, 108 + AARCH64_INSN_REG_24 = 24, 109 + AARCH64_INSN_REG_25 = 25, 110 + AARCH64_INSN_REG_26 = 26, 111 + AARCH64_INSN_REG_27 = 27, 112 + AARCH64_INSN_REG_28 = 28, 113 + AARCH64_INSN_REG_29 = 29, 114 + AARCH64_INSN_REG_FP = 29, /* Frame pointer */ 115 + AARCH64_INSN_REG_30 = 30, 116 + AARCH64_INSN_REG_LR = 30, /* Link register */ 117 + AARCH64_INSN_REG_ZR = 31, /* Zero: as source register */ 118 + AARCH64_INSN_REG_SP = 31 /* Stack pointer: as load/store base reg */ 119 + }; 120 + 121 + enum aarch64_insn_variant { 122 + AARCH64_INSN_VARIANT_32BIT, 123 + AARCH64_INSN_VARIANT_64BIT 124 + }; 125 + 126 + enum aarch64_insn_condition { 127 + AARCH64_INSN_COND_EQ = 0x0, /* == */ 128 + AARCH64_INSN_COND_NE = 0x1, /* != */ 129 + AARCH64_INSN_COND_CS = 0x2, /* unsigned >= */ 130 + AARCH64_INSN_COND_CC = 0x3, /* unsigned < */ 131 + AARCH64_INSN_COND_MI = 0x4, /* < 0 */ 132 + AARCH64_INSN_COND_PL = 0x5, /* >= 0 */ 133 + AARCH64_INSN_COND_VS = 0x6, /* overflow */ 134 + AARCH64_INSN_COND_VC = 0x7, /* no overflow */ 135 + AARCH64_INSN_COND_HI = 0x8, /* unsigned > */ 136 + AARCH64_INSN_COND_LS = 0x9, /* unsigned <= */ 137 + AARCH64_INSN_COND_GE = 0xa, /* signed >= */ 138 + AARCH64_INSN_COND_LT = 0xb, /* signed < */ 139 + AARCH64_INSN_COND_GT = 0xc, /* signed > */ 140 + AARCH64_INSN_COND_LE = 0xd, /* signed <= */ 141 + AARCH64_INSN_COND_AL = 0xe, /* always */ 70 142 }; 71 143 72 144 enum aarch64_insn_branch_type { 73 145 AARCH64_INSN_BRANCH_NOLINK, 74 146 AARCH64_INSN_BRANCH_LINK, 147 + AARCH64_INSN_BRANCH_RETURN, 148 + AARCH64_INSN_BRANCH_COMP_ZERO, 149 + AARCH64_INSN_BRANCH_COMP_NONZERO, 150 + }; 151 + 152 + enum aarch64_insn_size_type { 153 + AARCH64_INSN_SIZE_8, 154 + AARCH64_INSN_SIZE_16, 155 + AARCH64_INSN_SIZE_32, 156 + AARCH64_INSN_SIZE_64, 157 + }; 158 + 159 + enum aarch64_insn_ldst_type { 160 + AARCH64_INSN_LDST_LOAD_REG_OFFSET, 161 + AARCH64_INSN_LDST_STORE_REG_OFFSET, 162 + AARCH64_INSN_LDST_LOAD_PAIR_PRE_INDEX, 163 + AARCH64_INSN_LDST_STORE_PAIR_PRE_INDEX, 164 + AARCH64_INSN_LDST_LOAD_PAIR_POST_INDEX, 165 + AARCH64_INSN_LDST_STORE_PAIR_POST_INDEX, 166 + }; 167 + 168 + enum aarch64_insn_adsb_type { 169 + AARCH64_INSN_ADSB_ADD, 170 + AARCH64_INSN_ADSB_SUB, 171 + AARCH64_INSN_ADSB_ADD_SETFLAGS, 172 + AARCH64_INSN_ADSB_SUB_SETFLAGS 173 + }; 174 + 175 + enum aarch64_insn_movewide_type { 176 + AARCH64_INSN_MOVEWIDE_ZERO, 177 + AARCH64_INSN_MOVEWIDE_KEEP, 178 + AARCH64_INSN_MOVEWIDE_INVERSE 179 + 
}; 180 + 181 + enum aarch64_insn_bitfield_type { 182 + AARCH64_INSN_BITFIELD_MOVE, 183 + AARCH64_INSN_BITFIELD_MOVE_UNSIGNED, 184 + AARCH64_INSN_BITFIELD_MOVE_SIGNED 185 + }; 186 + 187 + enum aarch64_insn_data1_type { 188 + AARCH64_INSN_DATA1_REVERSE_16, 189 + AARCH64_INSN_DATA1_REVERSE_32, 190 + AARCH64_INSN_DATA1_REVERSE_64, 191 + }; 192 + 193 + enum aarch64_insn_data2_type { 194 + AARCH64_INSN_DATA2_UDIV, 195 + AARCH64_INSN_DATA2_SDIV, 196 + AARCH64_INSN_DATA2_LSLV, 197 + AARCH64_INSN_DATA2_LSRV, 198 + AARCH64_INSN_DATA2_ASRV, 199 + AARCH64_INSN_DATA2_RORV, 200 + }; 201 + 202 + enum aarch64_insn_data3_type { 203 + AARCH64_INSN_DATA3_MADD, 204 + AARCH64_INSN_DATA3_MSUB, 205 + }; 206 + 207 + enum aarch64_insn_logic_type { 208 + AARCH64_INSN_LOGIC_AND, 209 + AARCH64_INSN_LOGIC_BIC, 210 + AARCH64_INSN_LOGIC_ORR, 211 + AARCH64_INSN_LOGIC_ORN, 212 + AARCH64_INSN_LOGIC_EOR, 213 + AARCH64_INSN_LOGIC_EON, 214 + AARCH64_INSN_LOGIC_AND_SETFLAGS, 215 + AARCH64_INSN_LOGIC_BIC_SETFLAGS 75 216 }; 76 217 77 218 #define __AARCH64_INSN_FUNCS(abbr, mask, val) \ ··· 223 78 static __always_inline u32 aarch64_insn_get_##abbr##_value(void) \ 224 79 { return (val); } 225 80 81 + __AARCH64_INSN_FUNCS(str_reg, 0x3FE0EC00, 0x38206800) 82 + __AARCH64_INSN_FUNCS(ldr_reg, 0x3FE0EC00, 0x38606800) 83 + __AARCH64_INSN_FUNCS(stp_post, 0x7FC00000, 0x28800000) 84 + __AARCH64_INSN_FUNCS(ldp_post, 0x7FC00000, 0x28C00000) 85 + __AARCH64_INSN_FUNCS(stp_pre, 0x7FC00000, 0x29800000) 86 + __AARCH64_INSN_FUNCS(ldp_pre, 0x7FC00000, 0x29C00000) 87 + __AARCH64_INSN_FUNCS(add_imm, 0x7F000000, 0x11000000) 88 + __AARCH64_INSN_FUNCS(adds_imm, 0x7F000000, 0x31000000) 89 + __AARCH64_INSN_FUNCS(sub_imm, 0x7F000000, 0x51000000) 90 + __AARCH64_INSN_FUNCS(subs_imm, 0x7F000000, 0x71000000) 91 + __AARCH64_INSN_FUNCS(movn, 0x7F800000, 0x12800000) 92 + __AARCH64_INSN_FUNCS(sbfm, 0x7F800000, 0x13000000) 93 + __AARCH64_INSN_FUNCS(bfm, 0x7F800000, 0x33000000) 94 + __AARCH64_INSN_FUNCS(movz, 0x7F800000, 0x52800000) 95 + 
__AARCH64_INSN_FUNCS(ubfm, 0x7F800000, 0x53000000) 96 + __AARCH64_INSN_FUNCS(movk, 0x7F800000, 0x72800000) 97 + __AARCH64_INSN_FUNCS(add, 0x7F200000, 0x0B000000) 98 + __AARCH64_INSN_FUNCS(adds, 0x7F200000, 0x2B000000) 99 + __AARCH64_INSN_FUNCS(sub, 0x7F200000, 0x4B000000) 100 + __AARCH64_INSN_FUNCS(subs, 0x7F200000, 0x6B000000) 101 + __AARCH64_INSN_FUNCS(madd, 0x7FE08000, 0x1B000000) 102 + __AARCH64_INSN_FUNCS(msub, 0x7FE08000, 0x1B008000) 103 + __AARCH64_INSN_FUNCS(udiv, 0x7FE0FC00, 0x1AC00800) 104 + __AARCH64_INSN_FUNCS(sdiv, 0x7FE0FC00, 0x1AC00C00) 105 + __AARCH64_INSN_FUNCS(lslv, 0x7FE0FC00, 0x1AC02000) 106 + __AARCH64_INSN_FUNCS(lsrv, 0x7FE0FC00, 0x1AC02400) 107 + __AARCH64_INSN_FUNCS(asrv, 0x7FE0FC00, 0x1AC02800) 108 + __AARCH64_INSN_FUNCS(rorv, 0x7FE0FC00, 0x1AC02C00) 109 + __AARCH64_INSN_FUNCS(rev16, 0x7FFFFC00, 0x5AC00400) 110 + __AARCH64_INSN_FUNCS(rev32, 0x7FFFFC00, 0x5AC00800) 111 + __AARCH64_INSN_FUNCS(rev64, 0x7FFFFC00, 0x5AC00C00) 112 + __AARCH64_INSN_FUNCS(and, 0x7F200000, 0x0A000000) 113 + __AARCH64_INSN_FUNCS(bic, 0x7F200000, 0x0A200000) 114 + __AARCH64_INSN_FUNCS(orr, 0x7F200000, 0x2A000000) 115 + __AARCH64_INSN_FUNCS(orn, 0x7F200000, 0x2A200000) 116 + __AARCH64_INSN_FUNCS(eor, 0x7F200000, 0x4A000000) 117 + __AARCH64_INSN_FUNCS(eon, 0x7F200000, 0x4A200000) 118 + __AARCH64_INSN_FUNCS(ands, 0x7F200000, 0x6A000000) 119 + __AARCH64_INSN_FUNCS(bics, 0x7F200000, 0x6A200000) 226 120 __AARCH64_INSN_FUNCS(b, 0xFC000000, 0x14000000) 227 121 __AARCH64_INSN_FUNCS(bl, 0xFC000000, 0x94000000) 122 + __AARCH64_INSN_FUNCS(cbz, 0xFE000000, 0x34000000) 123 + __AARCH64_INSN_FUNCS(cbnz, 0xFE000000, 0x35000000) 124 + __AARCH64_INSN_FUNCS(bcond, 0xFF000010, 0x54000000) 228 125 __AARCH64_INSN_FUNCS(svc, 0xFFE0001F, 0xD4000001) 229 126 __AARCH64_INSN_FUNCS(hvc, 0xFFE0001F, 0xD4000002) 230 127 __AARCH64_INSN_FUNCS(smc, 0xFFE0001F, 0xD4000003) 231 128 __AARCH64_INSN_FUNCS(brk, 0xFFE0001F, 0xD4200000) 232 129 __AARCH64_INSN_FUNCS(hint, 0xFFFFF01F, 0xD503201F) 130 + 
__AARCH64_INSN_FUNCS(br, 0xFFFFFC1F, 0xD61F0000) 131 + __AARCH64_INSN_FUNCS(blr, 0xFFFFFC1F, 0xD63F0000) 132 + __AARCH64_INSN_FUNCS(ret, 0xFFFFFC1F, 0xD65F0000) 233 133 234 134 #undef __AARCH64_INSN_FUNCS 235 135 ··· 287 97 u32 insn, u64 imm); 288 98 u32 aarch64_insn_gen_branch_imm(unsigned long pc, unsigned long addr, 289 99 enum aarch64_insn_branch_type type); 100 + u32 aarch64_insn_gen_comp_branch_imm(unsigned long pc, unsigned long addr, 101 + enum aarch64_insn_register reg, 102 + enum aarch64_insn_variant variant, 103 + enum aarch64_insn_branch_type type); 104 + u32 aarch64_insn_gen_cond_branch_imm(unsigned long pc, unsigned long addr, 105 + enum aarch64_insn_condition cond); 290 106 u32 aarch64_insn_gen_hint(enum aarch64_insn_hint_op op); 291 107 u32 aarch64_insn_gen_nop(void); 108 + u32 aarch64_insn_gen_branch_reg(enum aarch64_insn_register reg, 109 + enum aarch64_insn_branch_type type); 110 + u32 aarch64_insn_gen_load_store_reg(enum aarch64_insn_register reg, 111 + enum aarch64_insn_register base, 112 + enum aarch64_insn_register offset, 113 + enum aarch64_insn_size_type size, 114 + enum aarch64_insn_ldst_type type); 115 + u32 aarch64_insn_gen_load_store_pair(enum aarch64_insn_register reg1, 116 + enum aarch64_insn_register reg2, 117 + enum aarch64_insn_register base, 118 + int offset, 119 + enum aarch64_insn_variant variant, 120 + enum aarch64_insn_ldst_type type); 121 + u32 aarch64_insn_gen_add_sub_imm(enum aarch64_insn_register dst, 122 + enum aarch64_insn_register src, 123 + int imm, enum aarch64_insn_variant variant, 124 + enum aarch64_insn_adsb_type type); 125 + u32 aarch64_insn_gen_bitfield(enum aarch64_insn_register dst, 126 + enum aarch64_insn_register src, 127 + int immr, int imms, 128 + enum aarch64_insn_variant variant, 129 + enum aarch64_insn_bitfield_type type); 130 + u32 aarch64_insn_gen_movewide(enum aarch64_insn_register dst, 131 + int imm, int shift, 132 + enum aarch64_insn_variant variant, 133 + enum aarch64_insn_movewide_type type); 134 
+ u32 aarch64_insn_gen_add_sub_shifted_reg(enum aarch64_insn_register dst, 135 + enum aarch64_insn_register src, 136 + enum aarch64_insn_register reg, 137 + int shift, 138 + enum aarch64_insn_variant variant, 139 + enum aarch64_insn_adsb_type type); 140 + u32 aarch64_insn_gen_data1(enum aarch64_insn_register dst, 141 + enum aarch64_insn_register src, 142 + enum aarch64_insn_variant variant, 143 + enum aarch64_insn_data1_type type); 144 + u32 aarch64_insn_gen_data2(enum aarch64_insn_register dst, 145 + enum aarch64_insn_register src, 146 + enum aarch64_insn_register reg, 147 + enum aarch64_insn_variant variant, 148 + enum aarch64_insn_data2_type type); 149 + u32 aarch64_insn_gen_data3(enum aarch64_insn_register dst, 150 + enum aarch64_insn_register src, 151 + enum aarch64_insn_register reg1, 152 + enum aarch64_insn_register reg2, 153 + enum aarch64_insn_variant variant, 154 + enum aarch64_insn_data3_type type); 155 + u32 aarch64_insn_gen_logical_shifted_reg(enum aarch64_insn_register dst, 156 + enum aarch64_insn_register src, 157 + enum aarch64_insn_register reg, 158 + int shift, 159 + enum aarch64_insn_variant variant, 160 + enum aarch64_insn_logic_type type); 292 161 293 162 bool aarch64_insn_hotpatch_safe(u32 old_insn, u32 new_insn); 294 163
+1 -1
arch/arm64/include/asm/io.h
··· 243 243 * (PHYS_OFFSET and PHYS_MASK taken into account). 244 244 */ 245 245 #define ARCH_HAS_VALID_PHYS_ADDR_RANGE 246 - extern int valid_phys_addr_range(unsigned long addr, size_t size); 246 + extern int valid_phys_addr_range(phys_addr_t addr, size_t size); 247 247 extern int valid_mmap_phys_addr_range(unsigned long pfn, size_t size); 248 248 249 249 extern int devmem_is_allowed(unsigned long pfn);
+1 -1
arch/arm64/include/asm/kgdb.h
··· 29 29 30 30 static inline void arch_kgdb_breakpoint(void) 31 31 { 32 - asm ("brk %0" : : "I" (KDBG_COMPILED_DBG_BRK_IMM)); 32 + asm ("brk %0" : : "I" (KGDB_COMPILED_DBG_BRK_IMM)); 33 33 } 34 34 35 35 extern void kgdb_handle_bus_error(void);
+2 -2
arch/arm64/include/asm/percpu.h
··· 26 26 static inline unsigned long __my_cpu_offset(void) 27 27 { 28 28 unsigned long off; 29 - register unsigned long *sp asm ("sp"); 30 29 31 30 /* 32 31 * We want to allow caching the value, so avoid using volatile and 33 32 * instead use a fake stack read to hazard against barrier(). 34 33 */ 35 - asm("mrs %0, tpidr_el1" : "=r" (off) : "Q" (*sp)); 34 + asm("mrs %0, tpidr_el1" : "=r" (off) : 35 + "Q" (*(const unsigned long *)current_stack_pointer)); 36 36 37 37 return off; 38 38 }
+19 -14
arch/arm64/include/asm/pgtable.h
··· 149 149 #define pte_valid_not_user(pte) \ 150 150 ((pte_val(pte) & (PTE_VALID | PTE_USER)) == PTE_VALID) 151 151 152 + static inline pte_t clear_pte_bit(pte_t pte, pgprot_t prot) 153 + { 154 + pte_val(pte) &= ~pgprot_val(prot); 155 + return pte; 156 + } 157 + 158 + static inline pte_t set_pte_bit(pte_t pte, pgprot_t prot) 159 + { 160 + pte_val(pte) |= pgprot_val(prot); 161 + return pte; 162 + } 163 + 152 164 static inline pte_t pte_wrprotect(pte_t pte) 153 165 { 154 - pte_val(pte) &= ~PTE_WRITE; 155 - return pte; 166 + return clear_pte_bit(pte, __pgprot(PTE_WRITE)); 156 167 } 157 168 158 169 static inline pte_t pte_mkwrite(pte_t pte) 159 170 { 160 - pte_val(pte) |= PTE_WRITE; 161 - return pte; 171 + return set_pte_bit(pte, __pgprot(PTE_WRITE)); 162 172 } 163 173 164 174 static inline pte_t pte_mkclean(pte_t pte) 165 175 { 166 - pte_val(pte) &= ~PTE_DIRTY; 167 - return pte; 176 + return clear_pte_bit(pte, __pgprot(PTE_DIRTY)); 168 177 } 169 178 170 179 static inline pte_t pte_mkdirty(pte_t pte) 171 180 { 172 - pte_val(pte) |= PTE_DIRTY; 173 - return pte; 181 + return set_pte_bit(pte, __pgprot(PTE_DIRTY)); 174 182 } 175 183 176 184 static inline pte_t pte_mkold(pte_t pte) 177 185 { 178 - pte_val(pte) &= ~PTE_AF; 179 - return pte; 186 + return clear_pte_bit(pte, __pgprot(PTE_AF)); 180 187 } 181 188 182 189 static inline pte_t pte_mkyoung(pte_t pte) 183 190 { 184 - pte_val(pte) |= PTE_AF; 185 - return pte; 191 + return set_pte_bit(pte, __pgprot(PTE_AF)); 186 192 } 187 193 188 194 static inline pte_t pte_mkspecial(pte_t pte) 189 195 { 190 - pte_val(pte) |= PTE_SPECIAL; 191 - return pte; 196 + return set_pte_bit(pte, __pgprot(PTE_SPECIAL)); 192 197 } 193 198 194 199 static inline void set_pte(pte_t *ptep, pte_t pte)
+2
arch/arm64/include/asm/proc-fns.h
··· 32 32 extern void cpu_do_idle(void); 33 33 extern void cpu_do_switch_mm(unsigned long pgd_phys, struct mm_struct *mm); 34 34 extern void cpu_reset(unsigned long addr) __attribute__((noreturn)); 35 + void cpu_soft_restart(phys_addr_t cpu_reset, 36 + unsigned long addr) __attribute__((noreturn)); 35 37 extern void cpu_do_suspend(struct cpu_suspend_ctx *ptr); 36 38 extern u64 cpu_do_resume(phys_addr_t ptr, u64 idmap_ttbr); 37 39
+1
arch/arm64/include/asm/suspend.h
··· 21 21 phys_addr_t save_ptr_stash_phys; 22 22 }; 23 23 24 + extern int __cpu_suspend(unsigned long arg, int (*fn)(unsigned long)); 24 25 extern void cpu_resume(void); 25 26 extern int cpu_suspend(unsigned long); 26 27
+7 -2
arch/arm64/include/asm/thread_info.h
··· 69 69 #define init_stack (init_thread_union.stack) 70 70 71 71 /* 72 + * how to get the current stack pointer from C 73 + */ 74 + register unsigned long current_stack_pointer asm ("sp"); 75 + 76 + /* 72 77 * how to get the thread information struct from C 73 78 */ 74 79 static inline struct thread_info *current_thread_info(void) __attribute_const__; 75 80 76 81 static inline struct thread_info *current_thread_info(void) 77 82 { 78 - register unsigned long sp asm ("sp"); 79 - return (struct thread_info *)(sp & ~(THREAD_SIZE - 1)); 83 + return (struct thread_info *) 84 + (current_stack_pointer & ~(THREAD_SIZE - 1)); 80 85 } 81 86 82 87 #define thread_saved_pc(tsk) \
+1
arch/arm64/kernel/Makefile
··· 26 26 arm64-obj-$(CONFIG_HW_PERF_EVENTS) += perf_event.o 27 27 arm64-obj-$(CONFIG_HAVE_HW_BREAKPOINT) += hw_breakpoint.o 28 28 arm64-obj-$(CONFIG_ARM64_CPU_SUSPEND) += sleep.o suspend.o 29 + arm64-obj-$(CONFIG_CPU_IDLE) += cpuidle.o 29 30 arm64-obj-$(CONFIG_JUMP_LABEL) += jump_label.o 30 31 arm64-obj-$(CONFIG_KGDB) += kgdb.o 31 32 arm64-obj-$(CONFIG_EFI) += efi.o efi-stub.o efi-entry.o
+31
arch/arm64/kernel/cpuidle.c
··· 1 + /* 2 + * ARM64 CPU idle arch support 3 + * 4 + * Copyright (C) 2014 ARM Ltd. 5 + * Author: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> 6 + * 7 + * This program is free software; you can redistribute it and/or modify 8 + * it under the terms of the GNU General Public License version 2 as 9 + * published by the Free Software Foundation. 10 + */ 11 + 12 + #include <linux/of.h> 13 + #include <linux/of_device.h> 14 + 15 + #include <asm/cpuidle.h> 16 + #include <asm/cpu_ops.h> 17 + 18 + int cpu_init_idle(unsigned int cpu) 19 + { 20 + int ret = -EOPNOTSUPP; 21 + struct device_node *cpu_node = of_cpu_device_node_get(cpu); 22 + 23 + if (!cpu_node) 24 + return -ENODEV; 25 + 26 + if (cpu_ops[cpu] && cpu_ops[cpu]->cpu_init_idle) 27 + ret = cpu_ops[cpu]->cpu_init_idle(cpu_node, cpu); 28 + 29 + of_node_put(cpu_node); 30 + return ret; 31 + }
+26 -2
arch/arm64/kernel/cpuinfo.c
··· 20 20 #include <asm/cputype.h> 21 21 22 22 #include <linux/bitops.h> 23 + #include <linux/bug.h> 23 24 #include <linux/init.h> 24 25 #include <linux/kernel.h> 26 + #include <linux/preempt.h> 25 27 #include <linux/printk.h> 26 28 #include <linux/smp.h> 27 29 ··· 49 47 unsigned int cpu = smp_processor_id(); 50 48 u32 l1ip = CTR_L1IP(info->reg_ctr); 51 49 52 - if (l1ip != ICACHE_POLICY_PIPT) 53 - set_bit(ICACHEF_ALIASING, &__icache_flags); 50 + if (l1ip != ICACHE_POLICY_PIPT) { 51 + /* 52 + * VIPT caches are non-aliasing if the VA always equals the PA 53 + * in all bit positions that are covered by the index. This is 54 + * the case if the size of a way (# of sets * line size) does 55 + * not exceed PAGE_SIZE. 56 + */ 57 + u32 waysize = icache_get_numsets() * icache_get_linesize(); 58 + 59 + if (l1ip != ICACHE_POLICY_VIPT || waysize > PAGE_SIZE) 60 + set_bit(ICACHEF_ALIASING, &__icache_flags); 61 + } 54 62 if (l1ip == ICACHE_POLICY_AIVIVT) 55 63 set_bit(ICACHEF_AIVIVT, &__icache_flags); 56 64 ··· 201 189 __cpuinfo_store_cpu(info); 202 190 203 191 boot_cpu_data = *info; 192 + } 193 + 194 + u64 __attribute_const__ icache_get_ccsidr(void) 195 + { 196 + u64 ccsidr; 197 + 198 + WARN_ON(preemptible()); 199 + 200 + /* Select L1 I-cache and read its size ID register */ 201 + asm("msr csselr_el1, %1; isb; mrs %0, ccsidr_el1" 202 + : "=r"(ccsidr) : "r"(1L)); 203 + return ccsidr; 204 204 }
+6 -10
arch/arm64/kernel/efi-stub.c
··· 28 28 kernel_size = _edata - _text; 29 29 if (*image_addr != (dram_base + TEXT_OFFSET)) { 30 30 kernel_memsize = kernel_size + (_end - _edata); 31 - status = efi_relocate_kernel(sys_table, image_addr, 32 - kernel_size, kernel_memsize, 33 - dram_base + TEXT_OFFSET, 34 - PAGE_SIZE); 31 + status = efi_low_alloc(sys_table, kernel_memsize + TEXT_OFFSET, 32 + SZ_2M, reserve_addr); 35 33 if (status != EFI_SUCCESS) { 36 34 pr_efi_err(sys_table, "Failed to relocate kernel\n"); 37 35 return status; 38 36 } 39 - if (*image_addr != (dram_base + TEXT_OFFSET)) { 40 - pr_efi_err(sys_table, "Failed to alloc kernel memory\n"); 41 - efi_free(sys_table, kernel_memsize, *image_addr); 42 - return EFI_LOAD_ERROR; 43 - } 44 - *image_size = kernel_memsize; 37 + memcpy((void *)*reserve_addr + TEXT_OFFSET, (void *)*image_addr, 38 + kernel_size); 39 + *image_addr = *reserve_addr + TEXT_OFFSET; 40 + *reserve_size = kernel_memsize + TEXT_OFFSET; 45 41 } 46 42 47 43
-1
arch/arm64/kernel/entry.S
··· 324 324 mrs x0, far_el1 325 325 mov x2, sp // struct pt_regs 326 326 bl do_debug_exception 327 - enable_dbg 328 327 kernel_exit 1 329 328 el1_inv: 330 329 // TODO: add support for undefined instructions in kernel mode
+6 -4
arch/arm64/kernel/ftrace.c
··· 58 58 u32 new; 59 59 60 60 pc = (unsigned long)&ftrace_call; 61 - new = aarch64_insn_gen_branch_imm(pc, (unsigned long)func, true); 61 + new = aarch64_insn_gen_branch_imm(pc, (unsigned long)func, 62 + AARCH64_INSN_BRANCH_LINK); 62 63 63 64 return ftrace_modify_code(pc, 0, new, false); 64 65 } ··· 73 72 u32 old, new; 74 73 75 74 old = aarch64_insn_gen_nop(); 76 - new = aarch64_insn_gen_branch_imm(pc, addr, true); 75 + new = aarch64_insn_gen_branch_imm(pc, addr, AARCH64_INSN_BRANCH_LINK); 77 76 78 77 return ftrace_modify_code(pc, old, new, true); 79 78 } ··· 87 86 unsigned long pc = rec->ip; 88 87 u32 old, new; 89 88 90 - old = aarch64_insn_gen_branch_imm(pc, addr, true); 89 + old = aarch64_insn_gen_branch_imm(pc, addr, AARCH64_INSN_BRANCH_LINK); 91 90 new = aarch64_insn_gen_nop(); 92 91 93 92 return ftrace_modify_code(pc, old, new, true); ··· 155 154 u32 branch, nop; 156 155 157 156 branch = aarch64_insn_gen_branch_imm(pc, 158 - (unsigned long)ftrace_graph_caller, false); 157 + (unsigned long)ftrace_graph_caller, 158 + AARCH64_INSN_BRANCH_LINK); 159 159 nop = aarch64_insn_gen_nop(); 160 160 161 161 if (enable)
+3 -3
arch/arm64/kernel/head.S
··· 151 151 .short 0x20b // PE32+ format 152 152 .byte 0x02 // MajorLinkerVersion 153 153 .byte 0x14 // MinorLinkerVersion 154 - .long _edata - stext // SizeOfCode 154 + .long _end - stext // SizeOfCode 155 155 .long 0 // SizeOfInitializedData 156 156 .long 0 // SizeOfUninitializedData 157 157 .long efi_stub_entry - efi_head // AddressOfEntryPoint ··· 169 169 .short 0 // MinorSubsystemVersion 170 170 .long 0 // Win32VersionValue 171 171 172 - .long _edata - efi_head // SizeOfImage 172 + .long _end - efi_head // SizeOfImage 173 173 174 174 // Everything before the kernel image is considered part of the header 175 175 .long stext - efi_head // SizeOfHeaders ··· 216 216 .byte 0 217 217 .byte 0 218 218 .byte 0 // end of 0 padding of section name 219 - .long _edata - stext // VirtualSize 219 + .long _end - stext // VirtualSize 220 220 .long stext - efi_head // VirtualAddress 221 221 .long _edata - stext // SizeOfRawData 222 222 .long stext - efi_head // PointerToRawData
+664 -7
arch/arm64/kernel/insn.c
··· 2 2 * Copyright (C) 2013 Huawei Ltd. 3 3 * Author: Jiang Liu <liuj97@gmail.com> 4 4 * 5 + * Copyright (C) 2014 Zi Shen Lim <zlim.lnx@gmail.com> 6 + * 5 7 * This program is free software; you can redistribute it and/or modify 6 8 * it under the terms of the GNU General Public License version 2 as 7 9 * published by the Free Software Foundation. ··· 22 20 #include <linux/smp.h> 23 21 #include <linux/stop_machine.h> 24 22 #include <linux/uaccess.h> 23 + 25 24 #include <asm/cacheflush.h> 25 + #include <asm/debug-monitors.h> 26 26 #include <asm/insn.h> 27 + 28 + #define AARCH64_INSN_SF_BIT BIT(31) 29 + #define AARCH64_INSN_N_BIT BIT(22) 27 30 28 31 static int aarch64_insn_encoding_class[] = { 29 32 AARCH64_INSN_CLS_UNKNOWN, ··· 258 251 mask = BIT(9) - 1; 259 252 shift = 12; 260 253 break; 254 + case AARCH64_INSN_IMM_7: 255 + mask = BIT(7) - 1; 256 + shift = 15; 257 + break; 258 + case AARCH64_INSN_IMM_6: 259 + case AARCH64_INSN_IMM_S: 260 + mask = BIT(6) - 1; 261 + shift = 10; 262 + break; 263 + case AARCH64_INSN_IMM_R: 264 + mask = BIT(6) - 1; 265 + shift = 16; 266 + break; 261 267 default: 262 268 pr_err("aarch64_insn_encode_immediate: unknown immediate encoding %d\n", 263 269 type); ··· 284 264 return insn; 285 265 } 286 266 287 - u32 __kprobes aarch64_insn_gen_branch_imm(unsigned long pc, unsigned long addr, 288 - enum aarch64_insn_branch_type type) 267 + static u32 aarch64_insn_encode_register(enum aarch64_insn_register_type type, 268 + u32 insn, 269 + enum aarch64_insn_register reg) 289 270 { 290 - u32 insn; 271 + int shift; 272 + 273 + if (reg < AARCH64_INSN_REG_0 || reg > AARCH64_INSN_REG_SP) { 274 + pr_err("%s: unknown register encoding %d\n", __func__, reg); 275 + return 0; 276 + } 277 + 278 + switch (type) { 279 + case AARCH64_INSN_REGTYPE_RT: 280 + case AARCH64_INSN_REGTYPE_RD: 281 + shift = 0; 282 + break; 283 + case AARCH64_INSN_REGTYPE_RN: 284 + shift = 5; 285 + break; 286 + case AARCH64_INSN_REGTYPE_RT2: 287 + case AARCH64_INSN_REGTYPE_RA: 288 + 
shift = 10; 289 + break; 290 + case AARCH64_INSN_REGTYPE_RM: 291 + shift = 16; 292 + break; 293 + default: 294 + pr_err("%s: unknown register type encoding %d\n", __func__, 295 + type); 296 + return 0; 297 + } 298 + 299 + insn &= ~(GENMASK(4, 0) << shift); 300 + insn |= reg << shift; 301 + 302 + return insn; 303 + } 304 + 305 + static u32 aarch64_insn_encode_ldst_size(enum aarch64_insn_size_type type, 306 + u32 insn) 307 + { 308 + u32 size; 309 + 310 + switch (type) { 311 + case AARCH64_INSN_SIZE_8: 312 + size = 0; 313 + break; 314 + case AARCH64_INSN_SIZE_16: 315 + size = 1; 316 + break; 317 + case AARCH64_INSN_SIZE_32: 318 + size = 2; 319 + break; 320 + case AARCH64_INSN_SIZE_64: 321 + size = 3; 322 + break; 323 + default: 324 + pr_err("%s: unknown size encoding %d\n", __func__, type); 325 + return 0; 326 + } 327 + 328 + insn &= ~GENMASK(31, 30); 329 + insn |= size << 30; 330 + 331 + return insn; 332 + } 333 + 334 + static inline long branch_imm_common(unsigned long pc, unsigned long addr, 335 + long range) 336 + { 291 337 long offset; 292 338 293 339 /* ··· 362 276 */ 363 277 BUG_ON((pc & 0x3) || (addr & 0x3)); 364 278 279 + offset = ((long)addr - (long)pc); 280 + BUG_ON(offset < -range || offset >= range); 281 + 282 + return offset; 283 + } 284 + 285 + u32 __kprobes aarch64_insn_gen_branch_imm(unsigned long pc, unsigned long addr, 286 + enum aarch64_insn_branch_type type) 287 + { 288 + u32 insn; 289 + long offset; 290 + 365 291 /* 366 292 * B/BL support [-128M, 128M) offset 367 293 * ARM64 virtual address arrangement guarantees all kernel and module 368 294 * texts are within +/-128M. 
369 295 */ 370 - offset = ((long)addr - (long)pc); 371 - BUG_ON(offset < -SZ_128M || offset >= SZ_128M); 296 + offset = branch_imm_common(pc, addr, SZ_128M); 372 297 373 - if (type == AARCH64_INSN_BRANCH_LINK) 298 + switch (type) { 299 + case AARCH64_INSN_BRANCH_LINK: 374 300 insn = aarch64_insn_get_bl_value(); 375 - else 301 + break; 302 + case AARCH64_INSN_BRANCH_NOLINK: 376 303 insn = aarch64_insn_get_b_value(); 304 + break; 305 + default: 306 + BUG_ON(1); 307 + return AARCH64_BREAK_FAULT; 308 + } 377 309 378 310 return aarch64_insn_encode_immediate(AARCH64_INSN_IMM_26, insn, 311 + offset >> 2); 312 + } 313 + 314 + u32 aarch64_insn_gen_comp_branch_imm(unsigned long pc, unsigned long addr, 315 + enum aarch64_insn_register reg, 316 + enum aarch64_insn_variant variant, 317 + enum aarch64_insn_branch_type type) 318 + { 319 + u32 insn; 320 + long offset; 321 + 322 + offset = branch_imm_common(pc, addr, SZ_1M); 323 + 324 + switch (type) { 325 + case AARCH64_INSN_BRANCH_COMP_ZERO: 326 + insn = aarch64_insn_get_cbz_value(); 327 + break; 328 + case AARCH64_INSN_BRANCH_COMP_NONZERO: 329 + insn = aarch64_insn_get_cbnz_value(); 330 + break; 331 + default: 332 + BUG_ON(1); 333 + return AARCH64_BREAK_FAULT; 334 + } 335 + 336 + switch (variant) { 337 + case AARCH64_INSN_VARIANT_32BIT: 338 + break; 339 + case AARCH64_INSN_VARIANT_64BIT: 340 + insn |= AARCH64_INSN_SF_BIT; 341 + break; 342 + default: 343 + BUG_ON(1); 344 + return AARCH64_BREAK_FAULT; 345 + } 346 + 347 + insn = aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RT, insn, reg); 348 + 349 + return aarch64_insn_encode_immediate(AARCH64_INSN_IMM_19, insn, 350 + offset >> 2); 351 + } 352 + 353 + u32 aarch64_insn_gen_cond_branch_imm(unsigned long pc, unsigned long addr, 354 + enum aarch64_insn_condition cond) 355 + { 356 + u32 insn; 357 + long offset; 358 + 359 + offset = branch_imm_common(pc, addr, SZ_1M); 360 + 361 + insn = aarch64_insn_get_bcond_value(); 362 + 363 + BUG_ON(cond < AARCH64_INSN_COND_EQ || cond > AARCH64_INSN_COND_AL); 364 + insn |= cond; 365 + 366 + return aarch64_insn_encode_immediate(AARCH64_INSN_IMM_19, insn, 379 367 offset >> 2); 380 368 } 381 369 ··· 461 301 u32 __kprobes aarch64_insn_gen_nop(void) 462 302 { 463 303 return aarch64_insn_gen_hint(AARCH64_INSN_HINT_NOP); 304 + } 305 + 306 + u32 aarch64_insn_gen_branch_reg(enum aarch64_insn_register reg, 307 + enum aarch64_insn_branch_type type) 308 + { 309 + u32 insn; 310 + 311 + switch (type) { 312 + case AARCH64_INSN_BRANCH_NOLINK: 313 + insn = aarch64_insn_get_br_value(); 314 + break; 315 + case AARCH64_INSN_BRANCH_LINK: 316 + insn = aarch64_insn_get_blr_value(); 317 + break; 318 + case AARCH64_INSN_BRANCH_RETURN: 319 + insn = aarch64_insn_get_ret_value(); 320 + break; 321 + default: 322 + BUG_ON(1); 323 + return AARCH64_BREAK_FAULT; 324 + } 325 + 326 + return aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RN, insn, reg); 327 + } 328 + 329 + u32 aarch64_insn_gen_load_store_reg(enum aarch64_insn_register reg, 330 + enum aarch64_insn_register base, 331 + enum aarch64_insn_register offset, 332 + enum aarch64_insn_size_type size, 333 + enum aarch64_insn_ldst_type type) 334 + { 335 + u32 insn; 336 + 337 + switch (type) { 338 + case AARCH64_INSN_LDST_LOAD_REG_OFFSET: 339 + insn = aarch64_insn_get_ldr_reg_value(); 340 + break; 341 + case AARCH64_INSN_LDST_STORE_REG_OFFSET: 342 + insn = aarch64_insn_get_str_reg_value(); 343 + break; 344 + default: 345 + BUG_ON(1); 346 + return AARCH64_BREAK_FAULT; 347 + } 348 + 349 + insn = aarch64_insn_encode_ldst_size(size, insn); 350 + 351 + insn = aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RT, insn, reg); 352 + 353 + insn = aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RN, insn, 354 + base); 355 + 356 + return aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RM, insn, 357 + offset); 358 + } 359 + 360 + u32 aarch64_insn_gen_load_store_pair(enum aarch64_insn_register reg1, 361 + enum aarch64_insn_register reg2, 362 + enum aarch64_insn_register base, 363 + int offset, 364 + enum aarch64_insn_variant variant, 365 + enum aarch64_insn_ldst_type type) 366 + { 367 + u32 insn; 368 + int shift; 369 + 370 + switch (type) { 371 + case AARCH64_INSN_LDST_LOAD_PAIR_PRE_INDEX: 372 + insn = aarch64_insn_get_ldp_pre_value(); 373 + break; 374 + case AARCH64_INSN_LDST_STORE_PAIR_PRE_INDEX: 375 + insn = aarch64_insn_get_stp_pre_value(); 376 + break; 377 + case AARCH64_INSN_LDST_LOAD_PAIR_POST_INDEX: 378 + insn = aarch64_insn_get_ldp_post_value(); 379 + break; 380 + case AARCH64_INSN_LDST_STORE_PAIR_POST_INDEX: 381 + insn = aarch64_insn_get_stp_post_value(); 382 + break; 383 + default: 384 + BUG_ON(1); 385 + return AARCH64_BREAK_FAULT; 386 + } 387 + 388 + switch (variant) { 389 + case AARCH64_INSN_VARIANT_32BIT: 390 + /* offset must be multiples of 4 in the range [-256, 252] */ 391 + BUG_ON(offset & 0x3); 392 + BUG_ON(offset < -256 || offset > 252); 393 + shift = 2; 394 + break; 395 + case AARCH64_INSN_VARIANT_64BIT: 396 + /* offset must be multiples of 8 in the range [-512, 504] */ 397 + BUG_ON(offset & 0x7); 398 + BUG_ON(offset < -512 || offset > 504); 399 + shift = 3; 400 + insn |= AARCH64_INSN_SF_BIT; 401 + break; 402 + default: 403 + BUG_ON(1); 404 + return AARCH64_BREAK_FAULT; 405 + } 406 + 407 + insn = aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RT, insn, 408 + reg1); 409 + 410 + insn = aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RT2, insn, 411 + reg2); 412 + 413 + insn = aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RN, insn, 414 + base); 415 + 416 + return aarch64_insn_encode_immediate(AARCH64_INSN_IMM_7, insn, 417 + offset >> shift); 418 + } 419 + 420 + u32 aarch64_insn_gen_add_sub_imm(enum aarch64_insn_register dst, 421 + enum aarch64_insn_register src, 422 + int imm, enum aarch64_insn_variant variant, 423 + enum aarch64_insn_adsb_type type) 424 + { 425 + u32 insn; 426 + 427 + switch (type) { 428 + case AARCH64_INSN_ADSB_ADD: 429 + insn = aarch64_insn_get_add_imm_value(); 430 + break; 431 + case AARCH64_INSN_ADSB_SUB: 432 + insn = aarch64_insn_get_sub_imm_value(); 433 + break; 434 + case AARCH64_INSN_ADSB_ADD_SETFLAGS: 435 + insn = aarch64_insn_get_adds_imm_value(); 436 + break; 437 + case AARCH64_INSN_ADSB_SUB_SETFLAGS: 438 + insn = aarch64_insn_get_subs_imm_value(); 439 + break; 440 + default: 441 + BUG_ON(1); 442 + return AARCH64_BREAK_FAULT; 443 + } 444 + 445 + switch (variant) { 446 + case AARCH64_INSN_VARIANT_32BIT: 447 + break; 448 + case AARCH64_INSN_VARIANT_64BIT: 449 + insn |= AARCH64_INSN_SF_BIT; 450 + break; 451 + default: 452 + BUG_ON(1); 453 + return AARCH64_BREAK_FAULT; 454 + } 455 + 456 + BUG_ON(imm & ~(SZ_4K - 1)); 457 + 458 + insn = aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RD, insn, dst); 459 + 460 + insn = aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RN, insn, src); 461 + 462 + return aarch64_insn_encode_immediate(AARCH64_INSN_IMM_12, insn, imm); 463 + } 464 + 465 + u32 aarch64_insn_gen_bitfield(enum aarch64_insn_register dst, 466 + enum aarch64_insn_register src, 467 + int immr, int imms, 468 + enum aarch64_insn_variant variant, 469 + enum aarch64_insn_bitfield_type type) 470 + { 471 + u32 insn; 472 + u32 mask; 473 + 474 + switch (type) { 475 + case AARCH64_INSN_BITFIELD_MOVE: 476 + insn = aarch64_insn_get_bfm_value(); 477 + break; 478 + case AARCH64_INSN_BITFIELD_MOVE_UNSIGNED: 479 + insn = aarch64_insn_get_ubfm_value(); 480 + break; 481 + case AARCH64_INSN_BITFIELD_MOVE_SIGNED: 482 + insn = aarch64_insn_get_sbfm_value(); 483 + break; 484 + default: 485 + BUG_ON(1); 486 + return AARCH64_BREAK_FAULT; 487 + } 488 + 489 + switch (variant) { 490 + case AARCH64_INSN_VARIANT_32BIT: 491 + mask = GENMASK(4, 0); 492 + break; 493 + case AARCH64_INSN_VARIANT_64BIT: 494 + insn |= AARCH64_INSN_SF_BIT | AARCH64_INSN_N_BIT; 495 + mask = GENMASK(5, 0); 496 + break; 497 + default: 498 + BUG_ON(1); 499 + return AARCH64_BREAK_FAULT; 500 + } 501 + 502 + BUG_ON(immr & ~mask); 503 + BUG_ON(imms & ~mask); 504 + 505 + insn = aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RD, insn, dst); 506 + 507 + insn = aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RN, insn, src); 508 + 509 + insn = aarch64_insn_encode_immediate(AARCH64_INSN_IMM_R, insn, immr); 510 + 511 + return aarch64_insn_encode_immediate(AARCH64_INSN_IMM_S, insn, imms); 512 + } 513 + 514 + u32 aarch64_insn_gen_movewide(enum aarch64_insn_register dst, 515 + int imm, int shift, 516 + enum aarch64_insn_variant variant, 517 + enum aarch64_insn_movewide_type type) 518 + { 519 + u32 insn; 520 + 521 + switch (type) { 522 + case AARCH64_INSN_MOVEWIDE_ZERO: 523 + insn = aarch64_insn_get_movz_value(); 524 + break; 525 + case AARCH64_INSN_MOVEWIDE_KEEP: 526 + insn = aarch64_insn_get_movk_value(); 527 + break; 528 + case AARCH64_INSN_MOVEWIDE_INVERSE: 529 + insn = aarch64_insn_get_movn_value(); 530 + break; 531 + default: 532 + BUG_ON(1); 533 + return AARCH64_BREAK_FAULT; 534 + } 535 + 536 + BUG_ON(imm & ~(SZ_64K - 1)); 537 + 538 + switch (variant) { 539 + case AARCH64_INSN_VARIANT_32BIT: 540 + BUG_ON(shift != 0 && shift != 16); 541 + break; 542 + case AARCH64_INSN_VARIANT_64BIT: 543 + insn |= AARCH64_INSN_SF_BIT; 544 + BUG_ON(shift != 0 && shift != 16 && shift != 32 && 545 + shift != 48); 546 + break; 547 + default: 548 + BUG_ON(1); 549 + return AARCH64_BREAK_FAULT; 550 + } 551 + 552 + insn |= (shift >> 4) << 21; 553 + 554 + insn = aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RD, insn, dst); 555 + 556 + return aarch64_insn_encode_immediate(AARCH64_INSN_IMM_16, insn, imm); 557 + } 558 + 559 + u32 aarch64_insn_gen_add_sub_shifted_reg(enum aarch64_insn_register dst, 560 + enum aarch64_insn_register src, 561 + enum aarch64_insn_register reg, 562 + int shift, 563 + enum aarch64_insn_variant variant, 564 + enum aarch64_insn_adsb_type type) 565 + { 566 + u32 insn; 567 + 568 + switch (type) { 569 + case AARCH64_INSN_ADSB_ADD: 570 + insn = aarch64_insn_get_add_value(); 571 + break; 572 + case AARCH64_INSN_ADSB_SUB: 573 + insn =
aarch64_insn_get_sub_value(); 574 + break; 575 + case AARCH64_INSN_ADSB_ADD_SETFLAGS: 576 + insn = aarch64_insn_get_adds_value(); 577 + break; 578 + case AARCH64_INSN_ADSB_SUB_SETFLAGS: 579 + insn = aarch64_insn_get_subs_value(); 580 + break; 581 + default: 582 + BUG_ON(1); 583 + return AARCH64_BREAK_FAULT; 584 + } 585 + 586 + switch (variant) { 587 + case AARCH64_INSN_VARIANT_32BIT: 588 + BUG_ON(shift & ~(SZ_32 - 1)); 589 + break; 590 + case AARCH64_INSN_VARIANT_64BIT: 591 + insn |= AARCH64_INSN_SF_BIT; 592 + BUG_ON(shift & ~(SZ_64 - 1)); 593 + break; 594 + default: 595 + BUG_ON(1); 596 + return AARCH64_BREAK_FAULT; 597 + } 598 + 599 + 600 + insn = aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RD, insn, dst); 601 + 602 + insn = aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RN, insn, src); 603 + 604 + insn = aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RM, insn, reg); 605 + 606 + return aarch64_insn_encode_immediate(AARCH64_INSN_IMM_6, insn, shift); 607 + } 608 + 609 + u32 aarch64_insn_gen_data1(enum aarch64_insn_register dst, 610 + enum aarch64_insn_register src, 611 + enum aarch64_insn_variant variant, 612 + enum aarch64_insn_data1_type type) 613 + { 614 + u32 insn; 615 + 616 + switch (type) { 617 + case AARCH64_INSN_DATA1_REVERSE_16: 618 + insn = aarch64_insn_get_rev16_value(); 619 + break; 620 + case AARCH64_INSN_DATA1_REVERSE_32: 621 + insn = aarch64_insn_get_rev32_value(); 622 + break; 623 + case AARCH64_INSN_DATA1_REVERSE_64: 624 + BUG_ON(variant != AARCH64_INSN_VARIANT_64BIT); 625 + insn = aarch64_insn_get_rev64_value(); 626 + break; 627 + default: 628 + BUG_ON(1); 629 + return AARCH64_BREAK_FAULT; 630 + } 631 + 632 + switch (variant) { 633 + case AARCH64_INSN_VARIANT_32BIT: 634 + break; 635 + case AARCH64_INSN_VARIANT_64BIT: 636 + insn |= AARCH64_INSN_SF_BIT; 637 + break; 638 + default: 639 + BUG_ON(1); 640 + return AARCH64_BREAK_FAULT; 641 + } 642 + 643 + insn = aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RD, insn, dst); 644 + 645 + 
return aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RN, insn, src); 646 + } 647 + 648 + u32 aarch64_insn_gen_data2(enum aarch64_insn_register dst, 649 + enum aarch64_insn_register src, 650 + enum aarch64_insn_register reg, 651 + enum aarch64_insn_variant variant, 652 + enum aarch64_insn_data2_type type) 653 + { 654 + u32 insn; 655 + 656 + switch (type) { 657 + case AARCH64_INSN_DATA2_UDIV: 658 + insn = aarch64_insn_get_udiv_value(); 659 + break; 660 + case AARCH64_INSN_DATA2_SDIV: 661 + insn = aarch64_insn_get_sdiv_value(); 662 + break; 663 + case AARCH64_INSN_DATA2_LSLV: 664 + insn = aarch64_insn_get_lslv_value(); 665 + break; 666 + case AARCH64_INSN_DATA2_LSRV: 667 + insn = aarch64_insn_get_lsrv_value(); 668 + break; 669 + case AARCH64_INSN_DATA2_ASRV: 670 + insn = aarch64_insn_get_asrv_value(); 671 + break; 672 + case AARCH64_INSN_DATA2_RORV: 673 + insn = aarch64_insn_get_rorv_value(); 674 + break; 675 + default: 676 + BUG_ON(1); 677 + return AARCH64_BREAK_FAULT; 678 + } 679 + 680 + switch (variant) { 681 + case AARCH64_INSN_VARIANT_32BIT: 682 + break; 683 + case AARCH64_INSN_VARIANT_64BIT: 684 + insn |= AARCH64_INSN_SF_BIT; 685 + break; 686 + default: 687 + BUG_ON(1); 688 + return AARCH64_BREAK_FAULT; 689 + } 690 + 691 + insn = aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RD, insn, dst); 692 + 693 + insn = aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RN, insn, src); 694 + 695 + return aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RM, insn, reg); 696 + } 697 + 698 + u32 aarch64_insn_gen_data3(enum aarch64_insn_register dst, 699 + enum aarch64_insn_register src, 700 + enum aarch64_insn_register reg1, 701 + enum aarch64_insn_register reg2, 702 + enum aarch64_insn_variant variant, 703 + enum aarch64_insn_data3_type type) 704 + { 705 + u32 insn; 706 + 707 + switch (type) { 708 + case AARCH64_INSN_DATA3_MADD: 709 + insn = aarch64_insn_get_madd_value(); 710 + break; 711 + case AARCH64_INSN_DATA3_MSUB: 712 + insn = aarch64_insn_get_msub_value(); 713 
+ break; 714 + default: 715 + BUG_ON(1); 716 + return AARCH64_BREAK_FAULT; 717 + } 718 + 719 + switch (variant) { 720 + case AARCH64_INSN_VARIANT_32BIT: 721 + break; 722 + case AARCH64_INSN_VARIANT_64BIT: 723 + insn |= AARCH64_INSN_SF_BIT; 724 + break; 725 + default: 726 + BUG_ON(1); 727 + return AARCH64_BREAK_FAULT; 728 + } 729 + 730 + insn = aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RD, insn, dst); 731 + 732 + insn = aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RA, insn, src); 733 + 734 + insn = aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RN, insn, 735 + reg1); 736 + 737 + return aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RM, insn, 738 + reg2); 739 + } 740 + 741 + u32 aarch64_insn_gen_logical_shifted_reg(enum aarch64_insn_register dst, 742 + enum aarch64_insn_register src, 743 + enum aarch64_insn_register reg, 744 + int shift, 745 + enum aarch64_insn_variant variant, 746 + enum aarch64_insn_logic_type type) 747 + { 748 + u32 insn; 749 + 750 + switch (type) { 751 + case AARCH64_INSN_LOGIC_AND: 752 + insn = aarch64_insn_get_and_value(); 753 + break; 754 + case AARCH64_INSN_LOGIC_BIC: 755 + insn = aarch64_insn_get_bic_value(); 756 + break; 757 + case AARCH64_INSN_LOGIC_ORR: 758 + insn = aarch64_insn_get_orr_value(); 759 + break; 760 + case AARCH64_INSN_LOGIC_ORN: 761 + insn = aarch64_insn_get_orn_value(); 762 + break; 763 + case AARCH64_INSN_LOGIC_EOR: 764 + insn = aarch64_insn_get_eor_value(); 765 + break; 766 + case AARCH64_INSN_LOGIC_EON: 767 + insn = aarch64_insn_get_eon_value(); 768 + break; 769 + case AARCH64_INSN_LOGIC_AND_SETFLAGS: 770 + insn = aarch64_insn_get_ands_value(); 771 + break; 772 + case AARCH64_INSN_LOGIC_BIC_SETFLAGS: 773 + insn = aarch64_insn_get_bics_value(); 774 + break; 775 + default: 776 + BUG_ON(1); 777 + return AARCH64_BREAK_FAULT; 778 + } 779 + 780 + switch (variant) { 781 + case AARCH64_INSN_VARIANT_32BIT: 782 + BUG_ON(shift & ~(SZ_32 - 1)); 783 + break; 784 + case AARCH64_INSN_VARIANT_64BIT: 785 + insn |= 
AARCH64_INSN_SF_BIT; 786 + BUG_ON(shift & ~(SZ_64 - 1)); 787 + break; 788 + default: 789 + BUG_ON(1); 790 + return AARCH64_BREAK_FAULT; 791 + } 792 + 793 + 794 + insn = aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RD, insn, dst); 795 + 796 + insn = aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RN, insn, src); 797 + 798 + insn = aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RM, insn, reg); 799 + 800 + return aarch64_insn_encode_immediate(AARCH64_INSN_IMM_6, insn, shift); 464 801 }
+2 -2
arch/arm64/kernel/kgdb.c
···
 static struct break_hook kgdb_brkpt_hook = {
 	.esr_mask	= 0xffffffff,
-	.esr_val	= DBG_ESR_VAL_BRK(KGDB_DYN_DGB_BRK_IMM),
+	.esr_val	= DBG_ESR_VAL_BRK(KGDB_DYN_DBG_BRK_IMM),
 	.fn		= kgdb_brk_fn
 };
 
 static struct break_hook kgdb_compiled_brkpt_hook = {
 	.esr_mask	= 0xffffffff,
-	.esr_val	= DBG_ESR_VAL_BRK(KDBG_COMPILED_DBG_BRK_IMM),
+	.esr_val	= DBG_ESR_VAL_BRK(KGDB_COMPILED_DBG_BRK_IMM),
 	.fn		= kgdb_compiled_brk_fn
 };
+1 -1
arch/arm64/kernel/perf_event.c
···
 /*
  * PMU platform driver and devicetree bindings.
  */
-static struct of_device_id armpmu_of_device_ids[] = {
+static const struct of_device_id armpmu_of_device_ids[] = {
 	{.compatible = "arm,armv8-pmuv3"},
 	{},
 };
+2 -28
arch/arm64/kernel/process.c
···
 EXPORT_SYMBOL(__stack_chk_guard);
 #endif
 
-static void setup_restart(void)
-{
-	/*
-	 * Tell the mm system that we are going to reboot -
-	 * we may need it to insert some 1:1 mappings so that
-	 * soft boot works.
-	 */
-	setup_mm_for_reboot();
-
-	/* Clean and invalidate caches */
-	flush_cache_all();
-
-	/* Turn D-cache off */
-	cpu_cache_off();
-
-	/* Push out any further dirty data, and ensure cache is empty */
-	flush_cache_all();
-}
-
 void soft_restart(unsigned long addr)
 {
-	typedef void (*phys_reset_t)(unsigned long);
-	phys_reset_t phys_reset;
-
-	setup_restart();
-
-	/* Switch to the identity mapping */
-	phys_reset = (phys_reset_t)virt_to_phys(cpu_reset);
-	phys_reset(addr);
-
+	setup_mm_for_reboot();
+	cpu_soft_restart(virt_to_phys(cpu_reset), addr);
 	/* Should never get here */
 	BUG();
 }
+104
arch/arm64/kernel/psci.c
···
 #include <linux/reboot.h>
 #include <linux/pm.h>
 #include <linux/delay.h>
+#include <linux/slab.h>
 #include <uapi/linux/psci.h>
 
 #include <asm/compiler.h>
···
 #include <asm/errno.h>
 #include <asm/psci.h>
 #include <asm/smp_plat.h>
+#include <asm/suspend.h>
 #include <asm/system_misc.h>
 
 #define PSCI_POWER_STATE_TYPE_STANDBY		0
···
 	PSCI_FN_MAX,
 };
 
+static DEFINE_PER_CPU_READ_MOSTLY(struct psci_power_state *, psci_power_state);
+
 static u32 psci_function_id[PSCI_FN_MAX];
 
 static int psci_to_linux_errno(int errno)
···
 		& PSCI_0_2_POWER_STATE_TYPE_MASK) |
 		((state.affinity_level << PSCI_0_2_POWER_STATE_AFFL_SHIFT)
 		& PSCI_0_2_POWER_STATE_AFFL_MASK);
+}
+
+static void psci_power_state_unpack(u32 power_state,
+				    struct psci_power_state *state)
+{
+	state->id = (power_state & PSCI_0_2_POWER_STATE_ID_MASK) >>
+			PSCI_0_2_POWER_STATE_ID_SHIFT;
+	state->type = (power_state & PSCI_0_2_POWER_STATE_TYPE_MASK) >>
+			PSCI_0_2_POWER_STATE_TYPE_SHIFT;
+	state->affinity_level =
+			(power_state & PSCI_0_2_POWER_STATE_AFFL_MASK) >>
+			PSCI_0_2_POWER_STATE_AFFL_SHIFT;
 }
 
 /*
···
 	fn = psci_function_id[PSCI_FN_MIGRATE_INFO_TYPE];
 	err = invoke_psci_fn(fn, 0, 0, 0);
 	return err;
+}
+
+static int __maybe_unused cpu_psci_cpu_init_idle(struct device_node *cpu_node,
+						 unsigned int cpu)
+{
+	int i, ret, count = 0;
+	struct psci_power_state *psci_states;
+	struct device_node *state_node;
+
+	/*
+	 * If the PSCI cpu_suspend function hook has not been initialized
+	 * idle states must not be enabled, so bail out
+	 */
+	if (!psci_ops.cpu_suspend)
+		return -EOPNOTSUPP;
+
+	/* Count idle states */
+	while ((state_node = of_parse_phandle(cpu_node, "cpu-idle-states",
+					      count))) {
+		count++;
+		of_node_put(state_node);
+	}
+
+	if (!count)
+		return -ENODEV;
+
+	psci_states = kcalloc(count, sizeof(*psci_states), GFP_KERNEL);
+	if (!psci_states)
+		return -ENOMEM;
+
+	for (i = 0; i < count; i++) {
+		u32 psci_power_state;
+
+		state_node = of_parse_phandle(cpu_node, "cpu-idle-states", i);
+
+		ret = of_property_read_u32(state_node,
+					   "arm,psci-suspend-param",
+					   &psci_power_state);
+		if (ret) {
+			pr_warn(" * %s missing arm,psci-suspend-param property\n",
+				state_node->full_name);
+			of_node_put(state_node);
+			goto free_mem;
+		}
+
+		of_node_put(state_node);
+		pr_debug("psci-power-state %#x index %d\n", psci_power_state,
+			 i);
+		psci_power_state_unpack(psci_power_state, &psci_states[i]);
+	}
+	/* Idle states parsed correctly, initialize per-cpu pointer */
+	per_cpu(psci_power_state, cpu) = psci_states;
+	return 0;
+
+free_mem:
+	kfree(psci_states);
+	return ret;
 }
 
 static int get_set_conduit_method(struct device_node *np)
···
 #endif
 #endif
 
+static int psci_suspend_finisher(unsigned long index)
+{
+	struct psci_power_state *state = __get_cpu_var(psci_power_state);
+
+	return psci_ops.cpu_suspend(state[index - 1],
+				    virt_to_phys(cpu_resume));
+}
+
+static int __maybe_unused cpu_psci_cpu_suspend(unsigned long index)
+{
+	int ret;
+	struct psci_power_state *state = __get_cpu_var(psci_power_state);
+	/*
+	 * idle state index 0 corresponds to wfi, should never be called
+	 * from the cpu_suspend operations
+	 */
+	if (WARN_ON_ONCE(!index))
+		return -EINVAL;
+
+	if (state->type == PSCI_POWER_STATE_TYPE_STANDBY)
+		ret = psci_ops.cpu_suspend(state[index - 1], 0);
+	else
+		ret = __cpu_suspend(index, psci_suspend_finisher);
+
+	return ret;
+}
+
 const struct cpu_operations cpu_psci_ops = {
 	.name		= "psci",
+#ifdef CONFIG_CPU_IDLE
+	.cpu_init_idle	= cpu_psci_cpu_init_idle,
+	.cpu_suspend	= cpu_psci_cpu_suspend,
+#endif
 #ifdef CONFIG_SMP
 	.cpu_init	= cpu_psci_cpu_init,
 	.cpu_prepare	= cpu_psci_cpu_prepare,
+1 -2
arch/arm64/kernel/return_address.c
···
 {
 	struct return_address_data data;
 	struct stackframe frame;
-	register unsigned long current_sp asm ("sp");
 
 	data.level = level + 2;
 	data.addr = NULL;
 
 	frame.fp = (unsigned long)__builtin_frame_address(0);
-	frame.sp = current_sp;
+	frame.sp = current_stack_pointer;
 	frame.pc = (unsigned long)return_address; /* dummy */
 
 	walk_stackframe(&frame, save_return_addr, &data);
+6 -5
arch/arm64/kernel/setup.c
···
 void __init setup_arch(char **cmdline_p)
 {
-	/*
-	 * Unmask asynchronous aborts early to catch possible system errors.
-	 */
-	local_async_enable();
-
 	setup_processor();
 
 	setup_machine_fdt(__fdt_pointer);
···
 	early_ioremap_init();
 
 	parse_early_param();
+
+	/*
+	 *  Unmask asynchronous aborts after bringing up possible earlycon.
+	 * (Report possible System Errors once we can report this occurred)
+	 */
+	local_async_enable();
 
 	efi_init();
 	arm64_memblock_init();
+34 -13
arch/arm64/kernel/sleep.S
···
 	orr	\dst, \dst, \mask		// dst|=(aff3>>rs3)
 	.endm
 /*
- * Save CPU state for a suspend. This saves callee registers, and allocates
- * space on the kernel stack to save the CPU specific registers + some
- * other data for resume.
+ * Save CPU state for a suspend and execute the suspend finisher.
+ * On success it will return 0 through cpu_resume - ie through a CPU
+ * soft/hard reboot from the reset vector.
+ * On failure it returns the suspend finisher return value or force
+ * -EOPNOTSUPP if the finisher erroneously returns 0 (the suspend finisher
+ * is not allowed to return, if it does this must be considered failure).
+ * It saves callee registers, and allocates space on the kernel stack
+ * to save the CPU specific registers + some other data for resume.
  *
  * x0 = suspend finisher argument
+ * x1 = suspend finisher function pointer
  */
-ENTRY(__cpu_suspend)
+ENTRY(__cpu_suspend_enter)
 	stp	x29, lr, [sp, #-96]!
 	stp	x19, x20, [sp,#16]
 	stp	x21, x22, [sp,#32]
 	stp	x23, x24, [sp,#48]
 	stp	x25, x26, [sp,#64]
 	stp	x27, x28, [sp,#80]
+	/*
+	 * Stash suspend finisher and its argument in x20 and x19
+	 */
+	mov	x19, x0
+	mov	x20, x1
 	mov	x2, sp
 	sub	sp, sp, #CPU_SUSPEND_SZ	// allocate cpu_suspend_ctx
-	mov	x1, sp
+	mov	x0, sp
 	/*
-	 * x1 now points to struct cpu_suspend_ctx allocated on the stack
+	 * x0 now points to struct cpu_suspend_ctx allocated on the stack
 	 */
-	str	x2, [x1, #CPU_CTX_SP]
-	ldr	x2, =sleep_save_sp
-	ldr	x2, [x2, #SLEEP_SAVE_SP_VIRT]
+	str	x2, [x0, #CPU_CTX_SP]
+	ldr	x1, =sleep_save_sp
+	ldr	x1, [x1, #SLEEP_SAVE_SP_VIRT]
 #ifdef CONFIG_SMP
 	mrs	x7, mpidr_el1
 	ldr	x9, =mpidr_hash
···
 	ldp	w3, w4, [x9, #MPIDR_HASH_SHIFTS]
 	ldp	w5, w6, [x9, #(MPIDR_HASH_SHIFTS + 8)]
 	compute_mpidr_hash x8, x3, x4, x5, x6, x7, x10
-	add	x2, x2, x8, lsl #3
+	add	x1, x1, x8, lsl #3
 #endif
-	bl	__cpu_suspend_finisher
+	bl	__cpu_suspend_save
+	/*
+	 * Grab suspend finisher in x20 and its argument in x19
+	 */
+	mov	x0, x19
+	mov	x1, x20
+	/*
+	 * We are ready for power down, fire off the suspend finisher
+	 * in x1, with argument in x0
+	 */
+	blr	x1
 	/*
-	 * Never gets here, unless suspend fails.
+	 * Never gets here, unless suspend finisher fails.
 	 * Successful cpu_suspend should return from cpu_resume, returning
 	 * through this code path is considered an error
 	 * If the return value is set to 0 force x0 = -EOPNOTSUPP
···
 	ldp	x27, x28, [sp, #80]
 	ldp	x29, lr, [sp], #96
 	ret
-ENDPROC(__cpu_suspend)
+ENDPROC(__cpu_suspend_enter)
 	.ltorg
 
 /*
+17 -5
arch/arm64/kernel/smp_spin_table.c
···
 #include <linux/init.h>
 #include <linux/of.h>
 #include <linux/smp.h>
+#include <linux/types.h>
 
 #include <asm/cacheflush.h>
 #include <asm/cpu_ops.h>
···
 static int smp_spin_table_cpu_prepare(unsigned int cpu)
 {
-	void **release_addr;
+	__le64 __iomem *release_addr;
 
 	if (!cpu_release_addr[cpu])
 		return -ENODEV;
 
-	release_addr = __va(cpu_release_addr[cpu]);
+	/*
+	 * The cpu-release-addr may or may not be inside the linear mapping.
+	 * As ioremap_cache will either give us a new mapping or reuse the
+	 * existing linear mapping, we can use it to cover both cases. In
+	 * either case the memory will be MT_NORMAL.
+	 */
+	release_addr = ioremap_cache(cpu_release_addr[cpu],
+				     sizeof(*release_addr));
+	if (!release_addr)
+		return -ENOMEM;
 
 	/*
 	 * We write the release address as LE regardless of the native
···
 	 * boot-loader's endianess before jumping. This is mandated by
 	 * the boot protocol.
 	 */
-	release_addr[0] = (void *) cpu_to_le64(__pa(secondary_holding_pen));
-
-	__flush_dcache_area(release_addr, sizeof(release_addr[0]));
+	writeq_relaxed(__pa(secondary_holding_pen), release_addr);
+	__flush_dcache_area((__force void *)release_addr,
+			    sizeof(*release_addr));
 
 	/*
 	 * Send an event to wake up the secondary CPU.
 	 */
 	sev();
+
+	iounmap(release_addr);
 
 	return 0;
 }
+1 -2
arch/arm64/kernel/stacktrace.c
···
 		frame.sp = thread_saved_sp(tsk);
 		frame.pc = thread_saved_pc(tsk);
 	} else {
-		register unsigned long current_sp asm("sp");
 		data.no_sched_functions = 0;
 		frame.fp = (unsigned long)__builtin_frame_address(0);
-		frame.sp = current_sp;
+		frame.sp = current_stack_pointer;
 		frame.pc = (unsigned long)save_stack_trace_tsk;
 	}
 
+29 -19
arch/arm64/kernel/suspend.c
···
 #include <asm/suspend.h>
 #include <asm/tlbflush.h>
 
-extern int __cpu_suspend(unsigned long);
+extern int __cpu_suspend_enter(unsigned long arg, int (*fn)(unsigned long));
 /*
- * This is called by __cpu_suspend() to save the state, and do whatever
+ * This is called by __cpu_suspend_enter() to save the state, and do whatever
  * flushing is required to ensure that when the CPU goes to sleep we have
  * the necessary data available when the caches are not searched.
  *
- * @arg: Argument to pass to suspend operations
- * @ptr: CPU context virtual address
- * @save_ptr: address of the location where the context physical address
- *            must be saved
+ * ptr: CPU context virtual address
+ * save_ptr: address of the location where the context physical address
+ *           must be saved
  */
-int __cpu_suspend_finisher(unsigned long arg, struct cpu_suspend_ctx *ptr,
-			   phys_addr_t *save_ptr)
+void notrace __cpu_suspend_save(struct cpu_suspend_ctx *ptr,
+				phys_addr_t *save_ptr)
 {
-	int cpu = smp_processor_id();
-
 	*save_ptr = virt_to_phys(ptr);
 
 	cpu_do_suspend(ptr);
···
 	 */
 	__flush_dcache_area(ptr, sizeof(*ptr));
 	__flush_dcache_area(save_ptr, sizeof(*save_ptr));
-
-	return cpu_ops[cpu]->cpu_suspend(arg);
 }
 
 /*
···
 }
 
 /**
- * cpu_suspend
+ * cpu_suspend() - function to enter a low-power state
+ * @arg: argument to pass to CPU suspend operations
  *
- * @arg: argument to pass to the finisher function
+ * Return: 0 on success, -EOPNOTSUPP if CPU suspend hook not initialized, CPU
+ * operations back-end error code otherwise.
  */
 int cpu_suspend(unsigned long arg)
 {
-	struct mm_struct *mm = current->active_mm;
-	int ret, cpu = smp_processor_id();
-	unsigned long flags;
+	int cpu = smp_processor_id();
 
 	/*
 	 * If cpu_ops have not been registered or suspend
···
 	 */
 	if (!cpu_ops[cpu] || !cpu_ops[cpu]->cpu_suspend)
 		return -EOPNOTSUPP;
+	return cpu_ops[cpu]->cpu_suspend(arg);
+}
+
+/*
+ * __cpu_suspend
+ *
+ * arg: argument to pass to the finisher function
+ * fn: finisher function pointer
+ *
+ */
+int __cpu_suspend(unsigned long arg, int (*fn)(unsigned long))
+{
+	struct mm_struct *mm = current->active_mm;
+	int ret;
+	unsigned long flags;
 
 	/*
 	 * From this point debug exceptions are disabled to prevent
···
 	 * page tables, so that the thread address space is properly
 	 * set-up on function return.
 	 */
-	ret = __cpu_suspend(arg);
+	ret = __cpu_suspend_enter(arg, fn);
 	if (ret == 0) {
 		cpu_switch_mm(mm->pgd, mm);
 		flush_tlb_all();
···
 		 * Restore per-cpu offset before any kernel
 		 * subsystem relying on it has a chance to run.
 		 */
-		set_my_cpu_offset(per_cpu_offset(cpu));
+		set_my_cpu_offset(per_cpu_offset(smp_processor_id()));
 
 		/*
 		 * Restore HW breakpoint registers to sane values
+1 -2
arch/arm64/kernel/traps.c
···
 static void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk)
 {
 	struct stackframe frame;
-	const register unsigned long current_sp asm ("sp");
 
 	pr_debug("%s(regs = %p tsk = %p)\n", __func__, regs, tsk);
 
···
 		frame.pc = regs->pc;
 	} else if (tsk == current) {
 		frame.fp = (unsigned long)__builtin_frame_address(0);
-		frame.sp = current_sp;
+		frame.sp = current_stack_pointer;
 		frame.pc = (unsigned long)dump_backtrace;
 	} else {
 		/*
+1 -1
arch/arm64/mm/Makefile
···
 obj-y				:= dma-mapping.o extable.o fault.o init.o \
 				   cache.o copypage.o flush.o \
 				   ioremap.o mmap.o pgd.o mmu.o \
-				   context.o proc.o
+				   context.o proc.o pageattr.o
 obj-$(CONFIG_HUGETLB_PAGE)	+= hugetlbpage.o
+1 -32
arch/arm64/mm/dma-mapping.c
···
 #include <linux/slab.h>
 #include <linux/dma-mapping.h>
 #include <linux/dma-contiguous.h>
-#include <linux/of.h>
-#include <linux/platform_device.h>
 #include <linux/vmalloc.h>
 #include <linux/swiotlb.h>
-#include <linux/amba/bus.h>
 
 #include <asm/cacheflush.h>
···
 no_map:
 	__dma_free_coherent(dev, size, ptr, *dma_handle, attrs);
 no_mem:
-	*dma_handle = ~0;
+	*dma_handle = DMA_ERROR_CODE;
 	return NULL;
 }
···
 };
 EXPORT_SYMBOL(coherent_swiotlb_dma_ops);
 
-static int dma_bus_notifier(struct notifier_block *nb,
-			    unsigned long event, void *_dev)
-{
-	struct device *dev = _dev;
-
-	if (event != BUS_NOTIFY_ADD_DEVICE)
-		return NOTIFY_DONE;
-
-	if (of_property_read_bool(dev->of_node, "dma-coherent"))
-		set_dma_ops(dev, &coherent_swiotlb_dma_ops);
-
-	return NOTIFY_OK;
-}
-
-static struct notifier_block platform_bus_nb = {
-	.notifier_call = dma_bus_notifier,
-};
-
-static struct notifier_block amba_bus_nb = {
-	.notifier_call = dma_bus_notifier,
-};
-
 extern int swiotlb_late_init_with_default_size(size_t default_size);
 
 static int __init swiotlb_late_init(void)
 {
 	size_t swiotlb_size = min(SZ_64M, MAX_ORDER_NR_PAGES << PAGE_SHIFT);
-
-	/*
-	 * These must be registered before of_platform_populate().
-	 */
-	bus_register_notifier(&platform_bus_type, &platform_bus_nb);
-	bus_register_notifier(&amba_bustype, &amba_bus_nb);
 
 	dma_ops = &noncoherent_swiotlb_dma_ops;
 
+1 -1
arch/arm64/mm/init.c
···
  */
 void __init mem_init(void)
 {
-	max_mapnr   = pfn_to_page(max_pfn + PHYS_PFN_OFFSET) - mem_map;
+	set_max_mapnr(pfn_to_page(max_pfn) - mem_map);
 
 #ifndef CONFIG_SPARSEMEM_VMEMMAP
 	free_unused_memmap();
+1 -1
arch/arm64/mm/mmap.c
···
  * You really shouldn't be using read() or write() on /dev/mem.  This might go
  * away in the future.
  */
-int valid_phys_addr_range(unsigned long addr, size_t size)
+int valid_phys_addr_range(phys_addr_t addr, size_t size)
 {
 	if (addr < PHYS_OFFSET)
 		return 0;
+1 -1
arch/arm64/mm/mmu.c
···
 	 */
 	asm volatile(
 	"	mrs	%0, mair_el1\n"
-	"	bfi	%0, %1, #%2, #8\n"
+	"	bfi	%0, %1, %2, #8\n"
 	"	msr	mair_el1, %0\n"
 	"	isb\n"
 	: "=&r" (tmp)
+97
arch/arm64/mm/pageattr.c
···
+/*
+ * Copyright (c) 2014, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/module.h>
+#include <linux/sched.h>
+
+#include <asm/pgtable.h>
+#include <asm/tlbflush.h>
+
+struct page_change_data {
+	pgprot_t set_mask;
+	pgprot_t clear_mask;
+};
+
+static int change_page_range(pte_t *ptep, pgtable_t token, unsigned long addr,
+			void *data)
+{
+	struct page_change_data *cdata = data;
+	pte_t pte = *ptep;
+
+	pte = clear_pte_bit(pte, cdata->clear_mask);
+	pte = set_pte_bit(pte, cdata->set_mask);
+
+	set_pte(ptep, pte);
+	return 0;
+}
+
+static int change_memory_common(unsigned long addr, int numpages,
+				pgprot_t set_mask, pgprot_t clear_mask)
+{
+	unsigned long start = addr;
+	unsigned long size = PAGE_SIZE*numpages;
+	unsigned long end = start + size;
+	int ret;
+	struct page_change_data data;
+
+	if (!IS_ALIGNED(addr, PAGE_SIZE)) {
+		start &= PAGE_MASK;
+		end = start + size;
+		WARN_ON_ONCE(1);
+	}
+
+	if (!is_module_address(start) || !is_module_address(end - 1))
+		return -EINVAL;
+
+	data.set_mask = set_mask;
+	data.clear_mask = clear_mask;
+
+	ret = apply_to_page_range(&init_mm, start, size, change_page_range,
+					&data);
+
+	flush_tlb_kernel_range(start, end);
+	return ret;
+}
+
+int set_memory_ro(unsigned long addr, int numpages)
+{
+	return change_memory_common(addr, numpages,
+					__pgprot(PTE_RDONLY),
+					__pgprot(PTE_WRITE));
+}
+EXPORT_SYMBOL_GPL(set_memory_ro);
+
+int set_memory_rw(unsigned long addr, int numpages)
+{
+	return change_memory_common(addr, numpages,
+					__pgprot(PTE_WRITE),
+					__pgprot(PTE_RDONLY));
+}
+EXPORT_SYMBOL_GPL(set_memory_rw);
+
+int set_memory_nx(unsigned long addr, int numpages)
+{
+	return change_memory_common(addr, numpages,
+					__pgprot(PTE_PXN),
+					__pgprot(0));
+}
+EXPORT_SYMBOL_GPL(set_memory_nx);
+
+int set_memory_x(unsigned long addr, int numpages)
+{
+	return change_memory_common(addr, numpages,
+					__pgprot(0),
+					__pgprot(PTE_PXN));
+}
+EXPORT_SYMBOL_GPL(set_memory_x);
+15
arch/arm64/mm/proc.S
···
 	ret	x0
 ENDPROC(cpu_reset)
 
+ENTRY(cpu_soft_restart)
+	/* Save address of cpu_reset() and reset address */
+	mov	x19, x0
+	mov	x20, x1
+
+	/* Turn D-cache off */
+	bl	cpu_cache_off
+
+	/* Push out all dirty data, and ensure cache is empty */
+	bl	flush_cache_all
+
+	mov	x0, x20
+	ret	x19
+ENDPROC(cpu_soft_restart)
+
 /*
  *	cpu_do_idle()
  *
+4
arch/arm64/net/Makefile
··· 1 + # 2 + # ARM64 networking code 3 + # 4 + obj-$(CONFIG_BPF_JIT) += bpf_jit_comp.o
+169
arch/arm64/net/bpf_jit.h
··· 1 + /* 2 + * BPF JIT compiler for ARM64 3 + * 4 + * Copyright (C) 2014 Zi Shen Lim <zlim.lnx@gmail.com> 5 + * 6 + * This program is free software; you can redistribute it and/or modify 7 + * it under the terms of the GNU General Public License version 2 as 8 + * published by the Free Software Foundation. 9 + * 10 + * This program is distributed in the hope that it will be useful, 11 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 12 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 13 + * GNU General Public License for more details. 14 + * 15 + * You should have received a copy of the GNU General Public License 16 + * along with this program. If not, see <http://www.gnu.org/licenses/>. 17 + */ 18 + #ifndef _BPF_JIT_H 19 + #define _BPF_JIT_H 20 + 21 + #include <asm/insn.h> 22 + 23 + /* 5-bit Register Operand */ 24 + #define A64_R(x) AARCH64_INSN_REG_##x 25 + #define A64_FP AARCH64_INSN_REG_FP 26 + #define A64_LR AARCH64_INSN_REG_LR 27 + #define A64_ZR AARCH64_INSN_REG_ZR 28 + #define A64_SP AARCH64_INSN_REG_SP 29 + 30 + #define A64_VARIANT(sf) \ 31 + ((sf) ? 
AARCH64_INSN_VARIANT_64BIT : AARCH64_INSN_VARIANT_32BIT) 32 + 33 + /* Compare & branch (immediate) */ 34 + #define A64_COMP_BRANCH(sf, Rt, offset, type) \ 35 + aarch64_insn_gen_comp_branch_imm(0, offset, Rt, A64_VARIANT(sf), \ 36 + AARCH64_INSN_BRANCH_COMP_##type) 37 + #define A64_CBZ(sf, Rt, imm19) A64_COMP_BRANCH(sf, Rt, (imm19) << 2, ZERO) 38 + 39 + /* Conditional branch (immediate) */ 40 + #define A64_COND_BRANCH(cond, offset) \ 41 + aarch64_insn_gen_cond_branch_imm(0, offset, cond) 42 + #define A64_COND_EQ AARCH64_INSN_COND_EQ /* == */ 43 + #define A64_COND_NE AARCH64_INSN_COND_NE /* != */ 44 + #define A64_COND_CS AARCH64_INSN_COND_CS /* unsigned >= */ 45 + #define A64_COND_HI AARCH64_INSN_COND_HI /* unsigned > */ 46 + #define A64_COND_GE AARCH64_INSN_COND_GE /* signed >= */ 47 + #define A64_COND_GT AARCH64_INSN_COND_GT /* signed > */ 48 + #define A64_B_(cond, imm19) A64_COND_BRANCH(cond, (imm19) << 2) 49 + 50 + /* Unconditional branch (immediate) */ 51 + #define A64_BRANCH(offset, type) aarch64_insn_gen_branch_imm(0, offset, \ 52 + AARCH64_INSN_BRANCH_##type) 53 + #define A64_B(imm26) A64_BRANCH((imm26) << 2, NOLINK) 54 + #define A64_BL(imm26) A64_BRANCH((imm26) << 2, LINK) 55 + 56 + /* Unconditional branch (register) */ 57 + #define A64_BLR(Rn) aarch64_insn_gen_branch_reg(Rn, AARCH64_INSN_BRANCH_LINK) 58 + #define A64_RET(Rn) aarch64_insn_gen_branch_reg(Rn, AARCH64_INSN_BRANCH_RETURN) 59 + 60 + /* Load/store register (register offset) */ 61 + #define A64_LS_REG(Rt, Rn, Rm, size, type) \ 62 + aarch64_insn_gen_load_store_reg(Rt, Rn, Rm, \ 63 + AARCH64_INSN_SIZE_##size, \ 64 + AARCH64_INSN_LDST_##type##_REG_OFFSET) 65 + #define A64_STRB(Wt, Xn, Xm) A64_LS_REG(Wt, Xn, Xm, 8, STORE) 66 + #define A64_LDRB(Wt, Xn, Xm) A64_LS_REG(Wt, Xn, Xm, 8, LOAD) 67 + #define A64_STRH(Wt, Xn, Xm) A64_LS_REG(Wt, Xn, Xm, 16, STORE) 68 + #define A64_LDRH(Wt, Xn, Xm) A64_LS_REG(Wt, Xn, Xm, 16, LOAD) 69 + #define A64_STR32(Wt, Xn, Xm) A64_LS_REG(Wt, Xn, Xm, 32, STORE) 70 + #define 
A64_LDR32(Wt, Xn, Xm) A64_LS_REG(Wt, Xn, Xm, 32, LOAD) 71 + #define A64_STR64(Xt, Xn, Xm) A64_LS_REG(Xt, Xn, Xm, 64, STORE) 72 + #define A64_LDR64(Xt, Xn, Xm) A64_LS_REG(Xt, Xn, Xm, 64, LOAD) 73 + 74 + /* Load/store register pair */ 75 + #define A64_LS_PAIR(Rt, Rt2, Rn, offset, ls, type) \ 76 + aarch64_insn_gen_load_store_pair(Rt, Rt2, Rn, offset, \ 77 + AARCH64_INSN_VARIANT_64BIT, \ 78 + AARCH64_INSN_LDST_##ls##_PAIR_##type) 79 + /* Rn -= 16; Rn[0] = Rt; Rn[8] = Rt2; */ 80 + #define A64_PUSH(Rt, Rt2, Rn) A64_LS_PAIR(Rt, Rt2, Rn, -16, STORE, PRE_INDEX) 81 + /* Rt = Rn[0]; Rt2 = Rn[8]; Rn += 16; */ 82 + #define A64_POP(Rt, Rt2, Rn) A64_LS_PAIR(Rt, Rt2, Rn, 16, LOAD, POST_INDEX) 83 + 84 + /* Add/subtract (immediate) */ 85 + #define A64_ADDSUB_IMM(sf, Rd, Rn, imm12, type) \ 86 + aarch64_insn_gen_add_sub_imm(Rd, Rn, imm12, \ 87 + A64_VARIANT(sf), AARCH64_INSN_ADSB_##type) 88 + /* Rd = Rn OP imm12 */ 89 + #define A64_ADD_I(sf, Rd, Rn, imm12) A64_ADDSUB_IMM(sf, Rd, Rn, imm12, ADD) 90 + #define A64_SUB_I(sf, Rd, Rn, imm12) A64_ADDSUB_IMM(sf, Rd, Rn, imm12, SUB) 91 + /* Rd = Rn */ 92 + #define A64_MOV(sf, Rd, Rn) A64_ADD_I(sf, Rd, Rn, 0) 93 + 94 + /* Bitfield move */ 95 + #define A64_BITFIELD(sf, Rd, Rn, immr, imms, type) \ 96 + aarch64_insn_gen_bitfield(Rd, Rn, immr, imms, \ 97 + A64_VARIANT(sf), AARCH64_INSN_BITFIELD_MOVE_##type) 98 + /* Signed, with sign replication to left and zeros to right */ 99 + #define A64_SBFM(sf, Rd, Rn, ir, is) A64_BITFIELD(sf, Rd, Rn, ir, is, SIGNED) 100 + /* Unsigned, with zeros to left and right */ 101 + #define A64_UBFM(sf, Rd, Rn, ir, is) A64_BITFIELD(sf, Rd, Rn, ir, is, UNSIGNED) 102 + 103 + /* Rd = Rn << shift */ 104 + #define A64_LSL(sf, Rd, Rn, shift) ({ \ 105 + int sz = (sf) ? 64 : 32; \ 106 + A64_UBFM(sf, Rd, Rn, (unsigned)-(shift) % sz, sz - 1 - (shift)); \ 107 + }) 108 + /* Rd = Rn >> shift */ 109 + #define A64_LSR(sf, Rd, Rn, shift) A64_UBFM(sf, Rd, Rn, shift, (sf) ? 
63 : 31) 110 + /* Rd = Rn >> shift; signed */ 111 + #define A64_ASR(sf, Rd, Rn, shift) A64_SBFM(sf, Rd, Rn, shift, (sf) ? 63 : 31) 112 + 113 + /* Move wide (immediate) */ 114 + #define A64_MOVEW(sf, Rd, imm16, shift, type) \ 115 + aarch64_insn_gen_movewide(Rd, imm16, shift, \ 116 + A64_VARIANT(sf), AARCH64_INSN_MOVEWIDE_##type) 117 + /* Rd = Zeros (for MOVZ); 118 + * Rd |= imm16 << shift (where shift is {0, 16, 32, 48}); 119 + * Rd = ~Rd; (for MOVN); */ 120 + #define A64_MOVN(sf, Rd, imm16, shift) A64_MOVEW(sf, Rd, imm16, shift, INVERSE) 121 + #define A64_MOVZ(sf, Rd, imm16, shift) A64_MOVEW(sf, Rd, imm16, shift, ZERO) 122 + #define A64_MOVK(sf, Rd, imm16, shift) A64_MOVEW(sf, Rd, imm16, shift, KEEP) 123 + 124 + /* Add/subtract (shifted register) */ 125 + #define A64_ADDSUB_SREG(sf, Rd, Rn, Rm, type) \ 126 + aarch64_insn_gen_add_sub_shifted_reg(Rd, Rn, Rm, 0, \ 127 + A64_VARIANT(sf), AARCH64_INSN_ADSB_##type) 128 + /* Rd = Rn OP Rm */ 129 + #define A64_ADD(sf, Rd, Rn, Rm) A64_ADDSUB_SREG(sf, Rd, Rn, Rm, ADD) 130 + #define A64_SUB(sf, Rd, Rn, Rm) A64_ADDSUB_SREG(sf, Rd, Rn, Rm, SUB) 131 + #define A64_SUBS(sf, Rd, Rn, Rm) A64_ADDSUB_SREG(sf, Rd, Rn, Rm, SUB_SETFLAGS) 132 + /* Rd = -Rm */ 133 + #define A64_NEG(sf, Rd, Rm) A64_SUB(sf, Rd, A64_ZR, Rm) 134 + /* Rn - Rm; set condition flags */ 135 + #define A64_CMP(sf, Rn, Rm) A64_SUBS(sf, A64_ZR, Rn, Rm) 136 + 137 + /* Data-processing (1 source) */ 138 + #define A64_DATA1(sf, Rd, Rn, type) aarch64_insn_gen_data1(Rd, Rn, \ 139 + A64_VARIANT(sf), AARCH64_INSN_DATA1_##type) 140 + /* Rd = BSWAPx(Rn) */ 141 + #define A64_REV16(sf, Rd, Rn) A64_DATA1(sf, Rd, Rn, REVERSE_16) 142 + #define A64_REV32(sf, Rd, Rn) A64_DATA1(sf, Rd, Rn, REVERSE_32) 143 + #define A64_REV64(Rd, Rn) A64_DATA1(1, Rd, Rn, REVERSE_64) 144 + 145 + /* Data-processing (2 source) */ 146 + /* Rd = Rn OP Rm */ 147 + #define A64_UDIV(sf, Rd, Rn, Rm) aarch64_insn_gen_data2(Rd, Rn, Rm, \ 148 + A64_VARIANT(sf), AARCH64_INSN_DATA2_UDIV) 149 + 150 + /* Data-processing 
(3 source) */ 151 + /* Rd = Ra + Rn * Rm */ 152 + #define A64_MADD(sf, Rd, Ra, Rn, Rm) aarch64_insn_gen_data3(Rd, Ra, Rn, Rm, \ 153 + A64_VARIANT(sf), AARCH64_INSN_DATA3_MADD) 154 + /* Rd = Rn * Rm */ 155 + #define A64_MUL(sf, Rd, Rn, Rm) A64_MADD(sf, Rd, A64_ZR, Rn, Rm) 156 + 157 + /* Logical (shifted register) */ 158 + #define A64_LOGIC_SREG(sf, Rd, Rn, Rm, type) \ 159 + aarch64_insn_gen_logical_shifted_reg(Rd, Rn, Rm, 0, \ 160 + A64_VARIANT(sf), AARCH64_INSN_LOGIC_##type) 161 + /* Rd = Rn OP Rm */ 162 + #define A64_AND(sf, Rd, Rn, Rm) A64_LOGIC_SREG(sf, Rd, Rn, Rm, AND) 163 + #define A64_ORR(sf, Rd, Rn, Rm) A64_LOGIC_SREG(sf, Rd, Rn, Rm, ORR) 164 + #define A64_EOR(sf, Rd, Rn, Rm) A64_LOGIC_SREG(sf, Rd, Rn, Rm, EOR) 165 + #define A64_ANDS(sf, Rd, Rn, Rm) A64_LOGIC_SREG(sf, Rd, Rn, Rm, AND_SETFLAGS) 166 + /* Rn & Rm; set condition flags */ 167 + #define A64_TST(sf, Rn, Rm) A64_ANDS(sf, A64_ZR, Rn, Rm) 168 + 169 + #endif /* _BPF_JIT_H */
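The `A64_LSL` macro above encodes a logical shift left as a `UBFM` bitfield move, computing `immr = (-shift) mod sz` and `imms = sz - 1 - shift`. That operand arithmetic can be sanity-checked with a simplified C model of the `imms < immr` case of `UBFM` (the only case LSL exercises); `ubfm64` here is a sketch for verification, not the instruction's full definition:

```c
#include <stdint.h>

/* Simplified UBFM model for imms < immr (the LSL encoding): a field of
 * width (imms + 1) taken from the low bits of src is placed at bit
 * position (sz - immr) of an otherwise-zero 64-bit result. */
static uint64_t ubfm64(uint64_t src, unsigned immr, unsigned imms)
{
	unsigned width = imms + 1;
	uint64_t field = (width < 64) ? (src & ((1ULL << width) - 1)) : src;

	return field << ((64 - immr) % 64);
}

/* Operand computation from A64_LSL(1, Rd, Rn, shift) in bpf_jit.h:
 * immr = (-shift) mod 64, imms = 63 - shift. */
static uint64_t lsl_via_ubfm(uint64_t src, unsigned shift)
{
	unsigned immr = (64 - shift) % 64;
	unsigned imms = 63 - shift;

	return ubfm64(src, immr, imms);
}
```

For any `shift` in 0..63 the result matches a plain `src << shift`, which is why the JIT can express LSL, LSR and ASR as the two bitfield-move primitives `A64_UBFM`/`A64_SBFM`.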
+679
arch/arm64/net/bpf_jit_comp.c
··· 1 + /* 2 + * BPF JIT compiler for ARM64 3 + * 4 + * Copyright (C) 2014 Zi Shen Lim <zlim.lnx@gmail.com> 5 + * 6 + * This program is free software; you can redistribute it and/or modify 7 + * it under the terms of the GNU General Public License version 2 as 8 + * published by the Free Software Foundation. 9 + * 10 + * This program is distributed in the hope that it will be useful, 11 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 12 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 13 + * GNU General Public License for more details. 14 + * 15 + * You should have received a copy of the GNU General Public License 16 + * along with this program. If not, see <http://www.gnu.org/licenses/>. 17 + */ 18 + 19 + #define pr_fmt(fmt) "bpf_jit: " fmt 20 + 21 + #include <linux/filter.h> 22 + #include <linux/moduleloader.h> 23 + #include <linux/printk.h> 24 + #include <linux/skbuff.h> 25 + #include <linux/slab.h> 26 + #include <asm/byteorder.h> 27 + #include <asm/cacheflush.h> 28 + 29 + #include "bpf_jit.h" 30 + 31 + int bpf_jit_enable __read_mostly; 32 + 33 + #define TMP_REG_1 (MAX_BPF_REG + 0) 34 + #define TMP_REG_2 (MAX_BPF_REG + 1) 35 + 36 + /* Map BPF registers to A64 registers */ 37 + static const int bpf2a64[] = { 38 + /* return value from in-kernel function, and exit value from eBPF */ 39 + [BPF_REG_0] = A64_R(7), 40 + /* arguments from eBPF program to in-kernel function */ 41 + [BPF_REG_1] = A64_R(0), 42 + [BPF_REG_2] = A64_R(1), 43 + [BPF_REG_3] = A64_R(2), 44 + [BPF_REG_4] = A64_R(3), 45 + [BPF_REG_5] = A64_R(4), 46 + /* callee saved registers that in-kernel function will preserve */ 47 + [BPF_REG_6] = A64_R(19), 48 + [BPF_REG_7] = A64_R(20), 49 + [BPF_REG_8] = A64_R(21), 50 + [BPF_REG_9] = A64_R(22), 51 + /* read-only frame pointer to access stack */ 52 + [BPF_REG_FP] = A64_FP, 53 + /* temporary register for internal BPF JIT */ 54 + [TMP_REG_1] = A64_R(23), 55 + [TMP_REG_2] = A64_R(24), 56 + }; 57 + 58 + struct jit_ctx { 59 + 
const struct bpf_prog *prog; 60 + int idx; 61 + int tmp_used; 62 + int body_offset; 63 + int *offset; 64 + u32 *image; 65 + }; 66 + 67 + static inline void emit(const u32 insn, struct jit_ctx *ctx) 68 + { 69 + if (ctx->image != NULL) 70 + ctx->image[ctx->idx] = cpu_to_le32(insn); 71 + 72 + ctx->idx++; 73 + } 74 + 75 + static inline void emit_a64_mov_i64(const int reg, const u64 val, 76 + struct jit_ctx *ctx) 77 + { 78 + u64 tmp = val; 79 + int shift = 0; 80 + 81 + emit(A64_MOVZ(1, reg, tmp & 0xffff, shift), ctx); 82 + tmp >>= 16; 83 + shift += 16; 84 + while (tmp) { 85 + if (tmp & 0xffff) 86 + emit(A64_MOVK(1, reg, tmp & 0xffff, shift), ctx); 87 + tmp >>= 16; 88 + shift += 16; 89 + } 90 + } 91 + 92 + static inline void emit_a64_mov_i(const int is64, const int reg, 93 + const s32 val, struct jit_ctx *ctx) 94 + { 95 + u16 hi = val >> 16; 96 + u16 lo = val & 0xffff; 97 + 98 + if (hi & 0x8000) { 99 + if (hi == 0xffff) { 100 + emit(A64_MOVN(is64, reg, (u16)~lo, 0), ctx); 101 + } else { 102 + emit(A64_MOVN(is64, reg, (u16)~hi, 16), ctx); 103 + emit(A64_MOVK(is64, reg, lo, 0), ctx); 104 + } 105 + } else { 106 + emit(A64_MOVZ(is64, reg, lo, 0), ctx); 107 + if (hi) 108 + emit(A64_MOVK(is64, reg, hi, 16), ctx); 109 + } 110 + } 111 + 112 + static inline int bpf2a64_offset(int bpf_to, int bpf_from, 113 + const struct jit_ctx *ctx) 114 + { 115 + int to = ctx->offset[bpf_to + 1]; 116 + /* -1 to account for the Branch instruction */ 117 + int from = ctx->offset[bpf_from + 1] - 1; 118 + 119 + return to - from; 120 + } 121 + 122 + static inline int epilogue_offset(const struct jit_ctx *ctx) 123 + { 124 + int to = ctx->offset[ctx->prog->len - 1]; 125 + int from = ctx->idx - ctx->body_offset; 126 + 127 + return to - from; 128 + } 129 + 130 + /* Stack must be multiples of 16B */ 131 + #define STACK_ALIGN(sz) (((sz) + 15) & ~15) 132 + 133 + static void build_prologue(struct jit_ctx *ctx) 134 + { 135 + const u8 r6 = bpf2a64[BPF_REG_6]; 136 + const u8 r7 = bpf2a64[BPF_REG_7]; 137 + const 
u8 r8 = bpf2a64[BPF_REG_8]; 138 + const u8 r9 = bpf2a64[BPF_REG_9]; 139 + const u8 fp = bpf2a64[BPF_REG_FP]; 140 + const u8 ra = bpf2a64[BPF_REG_A]; 141 + const u8 rx = bpf2a64[BPF_REG_X]; 142 + const u8 tmp1 = bpf2a64[TMP_REG_1]; 143 + const u8 tmp2 = bpf2a64[TMP_REG_2]; 144 + int stack_size = MAX_BPF_STACK; 145 + 146 + stack_size += 4; /* extra for skb_copy_bits buffer */ 147 + stack_size = STACK_ALIGN(stack_size); 148 + 149 + /* Save callee-saved register */ 150 + emit(A64_PUSH(r6, r7, A64_SP), ctx); 151 + emit(A64_PUSH(r8, r9, A64_SP), ctx); 152 + if (ctx->tmp_used) 153 + emit(A64_PUSH(tmp1, tmp2, A64_SP), ctx); 154 + 155 + /* Set up BPF stack */ 156 + emit(A64_SUB_I(1, A64_SP, A64_SP, stack_size), ctx); 157 + 158 + /* Set up frame pointer */ 159 + emit(A64_MOV(1, fp, A64_SP), ctx); 160 + 161 + /* Clear registers A and X */ 162 + emit_a64_mov_i64(ra, 0, ctx); 163 + emit_a64_mov_i64(rx, 0, ctx); 164 + } 165 + 166 + static void build_epilogue(struct jit_ctx *ctx) 167 + { 168 + const u8 r0 = bpf2a64[BPF_REG_0]; 169 + const u8 r6 = bpf2a64[BPF_REG_6]; 170 + const u8 r7 = bpf2a64[BPF_REG_7]; 171 + const u8 r8 = bpf2a64[BPF_REG_8]; 172 + const u8 r9 = bpf2a64[BPF_REG_9]; 173 + const u8 fp = bpf2a64[BPF_REG_FP]; 174 + const u8 tmp1 = bpf2a64[TMP_REG_1]; 175 + const u8 tmp2 = bpf2a64[TMP_REG_2]; 176 + int stack_size = MAX_BPF_STACK; 177 + 178 + stack_size += 4; /* extra for skb_copy_bits buffer */ 179 + stack_size = STACK_ALIGN(stack_size); 180 + 181 + /* We're done with BPF stack */ 182 + emit(A64_ADD_I(1, A64_SP, A64_SP, stack_size), ctx); 183 + 184 + /* Restore callee-saved register */ 185 + if (ctx->tmp_used) 186 + emit(A64_POP(tmp1, tmp2, A64_SP), ctx); 187 + emit(A64_POP(r8, r9, A64_SP), ctx); 188 + emit(A64_POP(r6, r7, A64_SP), ctx); 189 + 190 + /* Restore frame pointer */ 191 + emit(A64_MOV(1, fp, A64_SP), ctx); 192 + 193 + /* Set return value */ 194 + emit(A64_MOV(1, A64_R(0), r0), ctx); 195 + 196 + emit(A64_RET(A64_LR), ctx); 197 + } 198 + 199 + static int 
build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx) 200 + { 201 + const u8 code = insn->code; 202 + const u8 dst = bpf2a64[insn->dst_reg]; 203 + const u8 src = bpf2a64[insn->src_reg]; 204 + const u8 tmp = bpf2a64[TMP_REG_1]; 205 + const u8 tmp2 = bpf2a64[TMP_REG_2]; 206 + const s16 off = insn->off; 207 + const s32 imm = insn->imm; 208 + const int i = insn - ctx->prog->insnsi; 209 + const bool is64 = BPF_CLASS(code) == BPF_ALU64; 210 + u8 jmp_cond; 211 + s32 jmp_offset; 212 + 213 + switch (code) { 214 + /* dst = src */ 215 + case BPF_ALU | BPF_MOV | BPF_X: 216 + case BPF_ALU64 | BPF_MOV | BPF_X: 217 + emit(A64_MOV(is64, dst, src), ctx); 218 + break; 219 + /* dst = dst OP src */ 220 + case BPF_ALU | BPF_ADD | BPF_X: 221 + case BPF_ALU64 | BPF_ADD | BPF_X: 222 + emit(A64_ADD(is64, dst, dst, src), ctx); 223 + break; 224 + case BPF_ALU | BPF_SUB | BPF_X: 225 + case BPF_ALU64 | BPF_SUB | BPF_X: 226 + emit(A64_SUB(is64, dst, dst, src), ctx); 227 + break; 228 + case BPF_ALU | BPF_AND | BPF_X: 229 + case BPF_ALU64 | BPF_AND | BPF_X: 230 + emit(A64_AND(is64, dst, dst, src), ctx); 231 + break; 232 + case BPF_ALU | BPF_OR | BPF_X: 233 + case BPF_ALU64 | BPF_OR | BPF_X: 234 + emit(A64_ORR(is64, dst, dst, src), ctx); 235 + break; 236 + case BPF_ALU | BPF_XOR | BPF_X: 237 + case BPF_ALU64 | BPF_XOR | BPF_X: 238 + emit(A64_EOR(is64, dst, dst, src), ctx); 239 + break; 240 + case BPF_ALU | BPF_MUL | BPF_X: 241 + case BPF_ALU64 | BPF_MUL | BPF_X: 242 + emit(A64_MUL(is64, dst, dst, src), ctx); 243 + break; 244 + case BPF_ALU | BPF_DIV | BPF_X: 245 + case BPF_ALU64 | BPF_DIV | BPF_X: 246 + emit(A64_UDIV(is64, dst, dst, src), ctx); 247 + break; 248 + case BPF_ALU | BPF_MOD | BPF_X: 249 + case BPF_ALU64 | BPF_MOD | BPF_X: 250 + ctx->tmp_used = 1; 251 + emit(A64_UDIV(is64, tmp, dst, src), ctx); 252 + emit(A64_MUL(is64, tmp, tmp, src), ctx); 253 + emit(A64_SUB(is64, dst, dst, tmp), ctx); 254 + break; 255 + /* dst = -dst */ 256 + case BPF_ALU | BPF_NEG: 257 + case BPF_ALU64 | 
BPF_NEG: 258 + emit(A64_NEG(is64, dst, dst), ctx); 259 + break; 260 + /* dst = BSWAP##imm(dst) */ 261 + case BPF_ALU | BPF_END | BPF_FROM_LE: 262 + case BPF_ALU | BPF_END | BPF_FROM_BE: 263 + #ifdef CONFIG_CPU_BIG_ENDIAN 264 + if (BPF_SRC(code) == BPF_FROM_BE) 265 + break; 266 + #else /* !CONFIG_CPU_BIG_ENDIAN */ 267 + if (BPF_SRC(code) == BPF_FROM_LE) 268 + break; 269 + #endif 270 + switch (imm) { 271 + case 16: 272 + emit(A64_REV16(is64, dst, dst), ctx); 273 + break; 274 + case 32: 275 + emit(A64_REV32(is64, dst, dst), ctx); 276 + break; 277 + case 64: 278 + emit(A64_REV64(dst, dst), ctx); 279 + break; 280 + } 281 + break; 282 + /* dst = imm */ 283 + case BPF_ALU | BPF_MOV | BPF_K: 284 + case BPF_ALU64 | BPF_MOV | BPF_K: 285 + emit_a64_mov_i(is64, dst, imm, ctx); 286 + break; 287 + /* dst = dst OP imm */ 288 + case BPF_ALU | BPF_ADD | BPF_K: 289 + case BPF_ALU64 | BPF_ADD | BPF_K: 290 + ctx->tmp_used = 1; 291 + emit_a64_mov_i(is64, tmp, imm, ctx); 292 + emit(A64_ADD(is64, dst, dst, tmp), ctx); 293 + break; 294 + case BPF_ALU | BPF_SUB | BPF_K: 295 + case BPF_ALU64 | BPF_SUB | BPF_K: 296 + ctx->tmp_used = 1; 297 + emit_a64_mov_i(is64, tmp, imm, ctx); 298 + emit(A64_SUB(is64, dst, dst, tmp), ctx); 299 + break; 300 + case BPF_ALU | BPF_AND | BPF_K: 301 + case BPF_ALU64 | BPF_AND | BPF_K: 302 + ctx->tmp_used = 1; 303 + emit_a64_mov_i(is64, tmp, imm, ctx); 304 + emit(A64_AND(is64, dst, dst, tmp), ctx); 305 + break; 306 + case BPF_ALU | BPF_OR | BPF_K: 307 + case BPF_ALU64 | BPF_OR | BPF_K: 308 + ctx->tmp_used = 1; 309 + emit_a64_mov_i(is64, tmp, imm, ctx); 310 + emit(A64_ORR(is64, dst, dst, tmp), ctx); 311 + break; 312 + case BPF_ALU | BPF_XOR | BPF_K: 313 + case BPF_ALU64 | BPF_XOR | BPF_K: 314 + ctx->tmp_used = 1; 315 + emit_a64_mov_i(is64, tmp, imm, ctx); 316 + emit(A64_EOR(is64, dst, dst, tmp), ctx); 317 + break; 318 + case BPF_ALU | BPF_MUL | BPF_K: 319 + case BPF_ALU64 | BPF_MUL | BPF_K: 320 + ctx->tmp_used = 1; 321 + emit_a64_mov_i(is64, tmp, imm, ctx); 322 + 
emit(A64_MUL(is64, dst, dst, tmp), ctx); 323 + break; 324 + case BPF_ALU | BPF_DIV | BPF_K: 325 + case BPF_ALU64 | BPF_DIV | BPF_K: 326 + ctx->tmp_used = 1; 327 + emit_a64_mov_i(is64, tmp, imm, ctx); 328 + emit(A64_UDIV(is64, dst, dst, tmp), ctx); 329 + break; 330 + case BPF_ALU | BPF_MOD | BPF_K: 331 + case BPF_ALU64 | BPF_MOD | BPF_K: 332 + ctx->tmp_used = 1; 333 + emit_a64_mov_i(is64, tmp2, imm, ctx); 334 + emit(A64_UDIV(is64, tmp, dst, tmp2), ctx); 335 + emit(A64_MUL(is64, tmp, tmp, tmp2), ctx); 336 + emit(A64_SUB(is64, dst, dst, tmp), ctx); 337 + break; 338 + case BPF_ALU | BPF_LSH | BPF_K: 339 + case BPF_ALU64 | BPF_LSH | BPF_K: 340 + emit(A64_LSL(is64, dst, dst, imm), ctx); 341 + break; 342 + case BPF_ALU | BPF_RSH | BPF_K: 343 + case BPF_ALU64 | BPF_RSH | BPF_K: 344 + emit(A64_LSR(is64, dst, dst, imm), ctx); 345 + break; 346 + case BPF_ALU | BPF_ARSH | BPF_K: 347 + case BPF_ALU64 | BPF_ARSH | BPF_K: 348 + emit(A64_ASR(is64, dst, dst, imm), ctx); 349 + break; 350 + 351 + #define check_imm(bits, imm) do { \ 352 + if ((((imm) > 0) && ((imm) >> (bits))) || \ 353 + (((imm) < 0) && (~(imm) >> (bits)))) { \ 354 + pr_info("[%2d] imm=%d(0x%x) out of range\n", \ 355 + i, imm, imm); \ 356 + return -EINVAL; \ 357 + } \ 358 + } while (0) 359 + #define check_imm19(imm) check_imm(19, imm) 360 + #define check_imm26(imm) check_imm(26, imm) 361 + 362 + /* JUMP off */ 363 + case BPF_JMP | BPF_JA: 364 + jmp_offset = bpf2a64_offset(i + off, i, ctx); 365 + check_imm26(jmp_offset); 366 + emit(A64_B(jmp_offset), ctx); 367 + break; 368 + /* IF (dst COND src) JUMP off */ 369 + case BPF_JMP | BPF_JEQ | BPF_X: 370 + case BPF_JMP | BPF_JGT | BPF_X: 371 + case BPF_JMP | BPF_JGE | BPF_X: 372 + case BPF_JMP | BPF_JNE | BPF_X: 373 + case BPF_JMP | BPF_JSGT | BPF_X: 374 + case BPF_JMP | BPF_JSGE | BPF_X: 375 + emit(A64_CMP(1, dst, src), ctx); 376 + emit_cond_jmp: 377 + jmp_offset = bpf2a64_offset(i + off, i, ctx); 378 + check_imm19(jmp_offset); 379 + switch (BPF_OP(code)) { 380 + case 
BPF_JEQ: 381 + jmp_cond = A64_COND_EQ; 382 + break; 383 + case BPF_JGT: 384 + jmp_cond = A64_COND_HI; 385 + break; 386 + case BPF_JGE: 387 + jmp_cond = A64_COND_CS; 388 + break; 389 + case BPF_JNE: 390 + jmp_cond = A64_COND_NE; 391 + break; 392 + case BPF_JSGT: 393 + jmp_cond = A64_COND_GT; 394 + break; 395 + case BPF_JSGE: 396 + jmp_cond = A64_COND_GE; 397 + break; 398 + default: 399 + return -EFAULT; 400 + } 401 + emit(A64_B_(jmp_cond, jmp_offset), ctx); 402 + break; 403 + case BPF_JMP | BPF_JSET | BPF_X: 404 + emit(A64_TST(1, dst, src), ctx); 405 + goto emit_cond_jmp; 406 + /* IF (dst COND imm) JUMP off */ 407 + case BPF_JMP | BPF_JEQ | BPF_K: 408 + case BPF_JMP | BPF_JGT | BPF_K: 409 + case BPF_JMP | BPF_JGE | BPF_K: 410 + case BPF_JMP | BPF_JNE | BPF_K: 411 + case BPF_JMP | BPF_JSGT | BPF_K: 412 + case BPF_JMP | BPF_JSGE | BPF_K: 413 + ctx->tmp_used = 1; 414 + emit_a64_mov_i(1, tmp, imm, ctx); 415 + emit(A64_CMP(1, dst, tmp), ctx); 416 + goto emit_cond_jmp; 417 + case BPF_JMP | BPF_JSET | BPF_K: 418 + ctx->tmp_used = 1; 419 + emit_a64_mov_i(1, tmp, imm, ctx); 420 + emit(A64_TST(1, dst, tmp), ctx); 421 + goto emit_cond_jmp; 422 + /* function call */ 423 + case BPF_JMP | BPF_CALL: 424 + { 425 + const u8 r0 = bpf2a64[BPF_REG_0]; 426 + const u64 func = (u64)__bpf_call_base + imm; 427 + 428 + ctx->tmp_used = 1; 429 + emit_a64_mov_i64(tmp, func, ctx); 430 + emit(A64_PUSH(A64_FP, A64_LR, A64_SP), ctx); 431 + emit(A64_MOV(1, A64_FP, A64_SP), ctx); 432 + emit(A64_BLR(tmp), ctx); 433 + emit(A64_MOV(1, r0, A64_R(0)), ctx); 434 + emit(A64_POP(A64_FP, A64_LR, A64_SP), ctx); 435 + break; 436 + } 437 + /* function return */ 438 + case BPF_JMP | BPF_EXIT: 439 + if (i == ctx->prog->len - 1) 440 + break; 441 + jmp_offset = epilogue_offset(ctx); 442 + check_imm26(jmp_offset); 443 + emit(A64_B(jmp_offset), ctx); 444 + break; 445 + 446 + /* LDX: dst = *(size *)(src + off) */ 447 + case BPF_LDX | BPF_MEM | BPF_W: 448 + case BPF_LDX | BPF_MEM | BPF_H: 449 + case BPF_LDX | BPF_MEM | 
BPF_B: 450 + case BPF_LDX | BPF_MEM | BPF_DW: 451 + ctx->tmp_used = 1; 452 + emit_a64_mov_i(1, tmp, off, ctx); 453 + switch (BPF_SIZE(code)) { 454 + case BPF_W: 455 + emit(A64_LDR32(dst, src, tmp), ctx); 456 + break; 457 + case BPF_H: 458 + emit(A64_LDRH(dst, src, tmp), ctx); 459 + break; 460 + case BPF_B: 461 + emit(A64_LDRB(dst, src, tmp), ctx); 462 + break; 463 + case BPF_DW: 464 + emit(A64_LDR64(dst, src, tmp), ctx); 465 + break; 466 + } 467 + break; 468 + 469 + /* ST: *(size *)(dst + off) = imm */ 470 + case BPF_ST | BPF_MEM | BPF_W: 471 + case BPF_ST | BPF_MEM | BPF_H: 472 + case BPF_ST | BPF_MEM | BPF_B: 473 + case BPF_ST | BPF_MEM | BPF_DW: 474 + goto notyet; 475 + 476 + /* STX: *(size *)(dst + off) = src */ 477 + case BPF_STX | BPF_MEM | BPF_W: 478 + case BPF_STX | BPF_MEM | BPF_H: 479 + case BPF_STX | BPF_MEM | BPF_B: 480 + case BPF_STX | BPF_MEM | BPF_DW: 481 + ctx->tmp_used = 1; 482 + emit_a64_mov_i(1, tmp, off, ctx); 483 + switch (BPF_SIZE(code)) { 484 + case BPF_W: 485 + emit(A64_STR32(src, dst, tmp), ctx); 486 + break; 487 + case BPF_H: 488 + emit(A64_STRH(src, dst, tmp), ctx); 489 + break; 490 + case BPF_B: 491 + emit(A64_STRB(src, dst, tmp), ctx); 492 + break; 493 + case BPF_DW: 494 + emit(A64_STR64(src, dst, tmp), ctx); 495 + break; 496 + } 497 + break; 498 + /* STX XADD: lock *(u32 *)(dst + off) += src */ 499 + case BPF_STX | BPF_XADD | BPF_W: 500 + /* STX XADD: lock *(u64 *)(dst + off) += src */ 501 + case BPF_STX | BPF_XADD | BPF_DW: 502 + goto notyet; 503 + 504 + /* R0 = ntohx(*(size *)(((struct sk_buff *)R6)->data + imm)) */ 505 + case BPF_LD | BPF_ABS | BPF_W: 506 + case BPF_LD | BPF_ABS | BPF_H: 507 + case BPF_LD | BPF_ABS | BPF_B: 508 + /* R0 = ntohx(*(size *)(((struct sk_buff *)R6)->data + src + imm)) */ 509 + case BPF_LD | BPF_IND | BPF_W: 510 + case BPF_LD | BPF_IND | BPF_H: 511 + case BPF_LD | BPF_IND | BPF_B: 512 + { 513 + const u8 r0 = bpf2a64[BPF_REG_0]; /* r0 = return value */ 514 + const u8 r6 = bpf2a64[BPF_REG_6]; /* r6 = pointer 
to sk_buff */ 515 + const u8 fp = bpf2a64[BPF_REG_FP]; 516 + const u8 r1 = bpf2a64[BPF_REG_1]; /* r1: struct sk_buff *skb */ 517 + const u8 r2 = bpf2a64[BPF_REG_2]; /* r2: int k */ 518 + const u8 r3 = bpf2a64[BPF_REG_3]; /* r3: unsigned int size */ 519 + const u8 r4 = bpf2a64[BPF_REG_4]; /* r4: void *buffer */ 520 + const u8 r5 = bpf2a64[BPF_REG_5]; /* r5: void *(*func)(...) */ 521 + int size; 522 + 523 + emit(A64_MOV(1, r1, r6), ctx); 524 + emit_a64_mov_i(0, r2, imm, ctx); 525 + if (BPF_MODE(code) == BPF_IND) 526 + emit(A64_ADD(0, r2, r2, src), ctx); 527 + switch (BPF_SIZE(code)) { 528 + case BPF_W: 529 + size = 4; 530 + break; 531 + case BPF_H: 532 + size = 2; 533 + break; 534 + case BPF_B: 535 + size = 1; 536 + break; 537 + default: 538 + return -EINVAL; 539 + } 540 + emit_a64_mov_i64(r3, size, ctx); 541 + emit(A64_ADD_I(1, r4, fp, MAX_BPF_STACK), ctx); 542 + emit_a64_mov_i64(r5, (unsigned long)bpf_load_pointer, ctx); 543 + emit(A64_PUSH(A64_FP, A64_LR, A64_SP), ctx); 544 + emit(A64_MOV(1, A64_FP, A64_SP), ctx); 545 + emit(A64_BLR(r5), ctx); 546 + emit(A64_MOV(1, r0, A64_R(0)), ctx); 547 + emit(A64_POP(A64_FP, A64_LR, A64_SP), ctx); 548 + 549 + jmp_offset = epilogue_offset(ctx); 550 + check_imm19(jmp_offset); 551 + emit(A64_CBZ(1, r0, jmp_offset), ctx); 552 + emit(A64_MOV(1, r5, r0), ctx); 553 + switch (BPF_SIZE(code)) { 554 + case BPF_W: 555 + emit(A64_LDR32(r0, r5, A64_ZR), ctx); 556 + #ifndef CONFIG_CPU_BIG_ENDIAN 557 + emit(A64_REV32(0, r0, r0), ctx); 558 + #endif 559 + break; 560 + case BPF_H: 561 + emit(A64_LDRH(r0, r5, A64_ZR), ctx); 562 + #ifndef CONFIG_CPU_BIG_ENDIAN 563 + emit(A64_REV16(0, r0, r0), ctx); 564 + #endif 565 + break; 566 + case BPF_B: 567 + emit(A64_LDRB(r0, r5, A64_ZR), ctx); 568 + break; 569 + } 570 + break; 571 + } 572 + notyet: 573 + pr_info_once("*** NOT YET: opcode %02x ***\n", code); 574 + return -EFAULT; 575 + 576 + default: 577 + pr_err_once("unknown opcode %02x\n", code); 578 + return -EINVAL; 579 + } 580 + 581 + return 0; 582 + 
} 583 + 584 + static int build_body(struct jit_ctx *ctx) 585 + { 586 + const struct bpf_prog *prog = ctx->prog; 587 + int i; 588 + 589 + for (i = 0; i < prog->len; i++) { 590 + const struct bpf_insn *insn = &prog->insnsi[i]; 591 + int ret; 592 + 593 + if (ctx->image == NULL) 594 + ctx->offset[i] = ctx->idx; 595 + 596 + ret = build_insn(insn, ctx); 597 + if (ret) 598 + return ret; 599 + } 600 + 601 + return 0; 602 + } 603 + 604 + static inline void bpf_flush_icache(void *start, void *end) 605 + { 606 + flush_icache_range((unsigned long)start, (unsigned long)end); 607 + } 608 + 609 + void bpf_jit_compile(struct bpf_prog *prog) 610 + { 611 + /* Nothing to do here. We support Internal BPF. */ 612 + } 613 + 614 + void bpf_int_jit_compile(struct bpf_prog *prog) 615 + { 616 + struct jit_ctx ctx; 617 + int image_size; 618 + 619 + if (!bpf_jit_enable) 620 + return; 621 + 622 + if (!prog || !prog->len) 623 + return; 624 + 625 + memset(&ctx, 0, sizeof(ctx)); 626 + ctx.prog = prog; 627 + 628 + ctx.offset = kcalloc(prog->len, sizeof(int), GFP_KERNEL); 629 + if (ctx.offset == NULL) 630 + return; 631 + 632 + /* 1. Initial fake pass to compute ctx->idx. */ 633 + 634 + /* Fake pass to fill in ctx->offset. */ 635 + if (build_body(&ctx)) 636 + goto out; 637 + 638 + build_prologue(&ctx); 639 + 640 + build_epilogue(&ctx); 641 + 642 + /* Now we know the actual image size. */ 643 + image_size = sizeof(u32) * ctx.idx; 644 + ctx.image = module_alloc(image_size); 645 + if (unlikely(ctx.image == NULL)) 646 + goto out; 647 + 648 + /* 2. Now, the actual pass. */ 649 + 650 + ctx.idx = 0; 651 + build_prologue(&ctx); 652 + 653 + ctx.body_offset = ctx.idx; 654 + if (build_body(&ctx)) { 655 + module_free(NULL, ctx.image); 656 + goto out; 657 + } 658 + 659 + build_epilogue(&ctx); 660 + 661 + /* And we're done. 
*/ 662 + if (bpf_jit_enable > 1) 663 + bpf_jit_dump(prog->len, image_size, 2, ctx.image); 664 + 665 + bpf_flush_icache(ctx.image, ctx.image + ctx.idx); 666 + prog->bpf_func = (void *)ctx.image; 667 + prog->jited = 1; 668 + 669 + out: 670 + kfree(ctx.offset); 671 + } 672 + 673 + void bpf_jit_free(struct bpf_prog *prog) 674 + { 675 + if (prog->jited) 676 + module_free(NULL, prog->bpf_func); 677 + 678 + kfree(prog); 679 + }
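`bpf_int_jit_compile()` above runs two passes: a sizing pass with `ctx.image == NULL`, where `emit()` only advances `ctx->idx` (and `build_body()` records per-instruction offsets for branch resolution), then `module_alloc()` of exactly `sizeof(u32) * ctx.idx` bytes, then a second pass that writes the instructions. The structure can be distilled into a small standalone sketch (the opcodes are illustrative constants, not generated by the real JIT):

```c
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

struct sketch_ctx {
	int idx;
	uint32_t *image;	/* NULL during the sizing pass */
};

/* Mirrors the kernel's emit(): counts when image is NULL, writes otherwise. */
static void sketch_emit(uint32_t insn, struct sketch_ctx *ctx)
{
	if (ctx->image != NULL)
		ctx->image[ctx->idx] = insn;
	ctx->idx++;
}

static void sketch_body(struct sketch_ctx *ctx)
{
	sketch_emit(0xd503201f, ctx);	/* AArch64 NOP encoding */
	sketch_emit(0xd65f03c0, ctx);	/* AArch64 RET encoding */
}

/* Pass 1 computes the image size; pass 2 fills the allocated buffer.
 * Returns the instruction count on success, -1 on failure. */
static int size_then_emit(void)
{
	struct sketch_ctx ctx = { 0, NULL };
	int n, ok;

	sketch_body(&ctx);			/* pass 1: just count */
	n = ctx.idx;

	ctx.image = calloc(n, sizeof(uint32_t));
	if (ctx.image == NULL)
		return -1;
	ctx.idx = 0;
	sketch_body(&ctx);			/* pass 2: actually write */

	ok = (ctx.idx == n && ctx.image[1] == 0xd65f03c0);
	free(ctx.image);
	return ok ? n : -1;
}
```

Both passes must emit identical instruction streams for the sizing to hold, which is why the second-pass failure path (the `module_free(NULL, ctx.image)` fixed by "net: bpf: arm64: fix module memory leak when JIT image build fails") matters: a mismatch aborts the build rather than overrunning the image.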
+3 -4
drivers/of/platform.c
··· 160 160 * can use Platform bus notifier and handle BUS_NOTIFY_ADD_DEVICE event 161 161 * to fix up DMA configuration. 162 162 */ 163 - static void of_dma_configure(struct platform_device *pdev) 163 + static void of_dma_configure(struct device *dev) 164 164 { 165 165 u64 dma_addr, paddr, size; 166 166 int ret; 167 - struct device *dev = &pdev->dev; 168 167 169 168 /* 170 169 * Set default dma-mask to 32 bit. Drivers are expected to setup ··· 228 229 if (!dev) 229 230 goto err_clear_flag; 230 231 231 - of_dma_configure(dev); 232 + of_dma_configure(&dev->dev); 232 233 dev->dev.bus = &platform_bus_type; 233 234 dev->dev.platform_data = platform_data; 234 235 ··· 290 291 } 291 292 292 293 /* setup generic device info */ 293 - dev->dev.coherent_dma_mask = ~0; 294 294 dev->dev.of_node = of_node_get(node); 295 295 dev->dev.parent = parent; 296 296 dev->dev.platform_data = platform_data; ··· 297 299 dev_set_name(&dev->dev, "%s", bus_id); 298 300 else 299 301 of_device_make_bus_id(&dev->dev); 302 + of_dma_configure(&dev->dev); 300 303 301 304 /* Allow the HW Peripheral ID to be overridden */ 302 305 prop = of_get_property(node, "arm,primecell-periphid", NULL);
+1 -1
kernel/module.c
··· 3388 3388 { 3389 3389 if (str[0] == '.' && str[1] == 'L') 3390 3390 return true; 3391 - return str[0] == '$' && strchr("atd", str[1]) 3391 + return str[0] == '$' && strchr("axtd", str[1]) 3392 3392 && (str[2] == '\0' || str[2] == '.'); 3393 3393 } 3394 3394
+1 -1
scripts/kallsyms.c
··· 84 84 */ 85 85 static inline int is_arm_mapping_symbol(const char *str) 86 86 { 87 - return str[0] == '$' && strchr("atd", str[1]) 87 + return str[0] == '$' && strchr("axtd", str[1]) 88 88 && (str[2] == '\0' || str[2] == '.'); 89 89 } 90 90
+1 -1
scripts/mod/modpost.c
··· 1147 1147 1148 1148 static inline int is_arm_mapping_symbol(const char *str) 1149 1149 { 1150 - return str[0] == '$' && strchr("atd", str[1]) 1150 + return str[0] == '$' && strchr("axtd", str[1]) 1151 1151 && (str[2] == '\0' || str[2] == '.'); 1152 1152 } 1153 1153
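The three one-line hunks above (kernel/module.c, scripts/kallsyms.c, scripts/mod/modpost.c) all extend the same mapping-symbol filter so that AArch64's `$x` markers are skipped like ARM's `$a`/`$t`/`$d` ("aarch64: filter $x from kallsyms"). The shared predicate can be exercised standalone:

```c
#include <string.h>

/* The updated filter: a mapping symbol is "$a", "$x", "$t" or "$d",
 * either bare or followed by a ".suffix" (e.g. "$x.123"). */
static int is_arm_mapping_symbol(const char *str)
{
	return str[0] == '$' && strchr("axtd", str[1])
		&& (str[2] == '\0' || str[2] == '.');
}
```

Note that the kernel/module.c version additionally filters compiler-local `.L` labels before applying this check; the `strchr("axtd", ...)` core is identical in all three copies.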