Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'powerpc-3.19-1' of git://git.kernel.org/pub/scm/linux/kernel/git/mpe/linux

Pull powerpc updates from Michael Ellerman:
"Some nice cleanups like removing bootmem, and removal of
__get_cpu_var().

There is one patch to mm/gup.c. This is the generic GUP
implementation, but is only used by us and arm(64). We have an ack
from Steve Capper, and although we didn't get an ack from Andrew he
told us to take the patch through the powerpc tree.

There's one cxl patch. This is in drivers/misc, but Greg said he was
happy for us to manage fixes for it.

There is an infrastructure patch to support an IPMI driver for OPAL.

There is also an RTC driver for OPAL. We weren't able to get any
response from the RTC maintainer, Alessandro Zummo, so in the end we
just merged the driver.

The usual batch of Freescale updates from Scott"

* tag 'powerpc-3.19-1' of git://git.kernel.org/pub/scm/linux/kernel/git/mpe/linux: (101 commits)
powerpc/powernv: Return to cpu offline loop when finished in KVM guest
powerpc/book3s: Fix partial invalidation of TLBs in MCE code.
powerpc/mm: don't do tlbie for updatepp request with NO HPTE fault
powerpc/xmon: Cleanup the breakpoint flags
powerpc/xmon: Enable HW instruction breakpoint on POWER8
powerpc/mm/thp: Use tlbiel if possible
powerpc/mm/thp: Remove code duplication
powerpc/mm/hugetlb: Sanity check gigantic hugepage count
powerpc/oprofile: Disable pagefaults during user stack read
powerpc/mm: Check for matching hpte without taking hpte lock
powerpc: Drop useless warning in eeh_init()
powerpc/powernv: Cleanup unused MCE definitions/declarations.
powerpc/eeh: Dump PHB diag-data early
powerpc/eeh: Recover EEH error on ownership change for BCM5719
powerpc/eeh: Set EEH_PE_RESET on PE reset
powerpc/eeh: Refactor eeh_reset_pe()
powerpc: Remove more traces of bootmem
powerpc/pseries: Initialise nvram_pstore_info's buf_lock
cxl: Name interrupts in /proc/interrupt
cxl: Return error to PSL if IRQ demultiplexing fails & print clearer warning
...

+3244 -2175
+12 -2
Documentation/devicetree/bindings/clock/qoriq-clock.txt
···
  It takes parent's clock-frequency as its clock.
  * "fsl,qoriq-sysclk-2.0": for input system clock (v2.0).
  It takes parent's clock-frequency as its clock.
+ * "fsl,qoriq-platform-pll-1.0" for the platform PLL clock (v1.0)
+ * "fsl,qoriq-platform-pll-2.0" for the platform PLL clock (v2.0)
  - #clock-cells: From common clock binding. The number of cells in a
  clock-specifier. Should be <0> for "fsl,qoriq-sysclk-[1,2].0"
  clocks, or <1> for "fsl,qoriq-core-pll-[1,2].0" clocks.
···
  		clock-names = "pll0", "pll0-div2", "pll1", "pll1-div2";
  		clock-output-names = "cmux1";
  	};
+
+ 	platform-pll: platform-pll@c00 {
+ 		#clock-cells = <1>;
+ 		reg = <0xc00 0x4>;
+ 		compatible = "fsl,qoriq-platform-pll-1.0";
+ 		clocks = <&sysclk>;
+ 		clock-output-names = "platform-pll", "platform-pll-div2";
+ 	};
  };
- }
+ };

Example for clock consumer:

···
  	clocks = <&mux0>;
  	...
  };
- }
+ };
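To make the new platform-PLL outputs concrete, here is a sketch of a consumer node referencing output 1 (the half-rate "platform-pll-div2" clock). The node and cell value mirror the qman.txt example added elsewhere in this same series; the `platform_pll` label is assumed to point at a node like the one added above:

```dts
/* Sketch: consumer of the platform PLL's div2 output.
 * Assumes a platform_pll label on a "fsl,qoriq-platform-pll-1.0" node. */
qman: qman@318000 {
	compatible = "fsl,qman";
	reg = <0x318000 0x1000>;
	clocks = <&platform_pll 1>;	/* cell 1 selects "platform-pll-div2" */
};
```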
+534
Documentation/devicetree/bindings/powerpc/fsl/fman.txt
=============================================================================
Freescale Frame Manager Device Bindings

CONTENTS
    - FMan Node
    - FMan Port Node
    - FMan MURAM Node
    - FMan dTSEC/XGEC/mEMAC Node
    - FMan IEEE 1588 Node
    - Example

=============================================================================
FMan Node

DESCRIPTION

Due to the fact that the FMan is an aggregation of sub-engines (ports, MACs,
etc.) the FMan node will have child nodes for each of them.

PROPERTIES

- compatible
    Usage: required
    Value type: <stringlist>
    Definition: Must include "fsl,fman"
    FMan version can be determined via FM_IP_REV_1 register in the
    FMan block. The offset is 0xc4 from the beginning of the
    Frame Processing Manager memory map (0xc3000 from the
    beginning of the FMan node).

- cell-index
    Usage: required
    Value type: <u32>
    Definition: Specifies the index of the FMan unit.

    The cell-index value may be used by the SoC, to identify the
    FMan unit in the SoC memory map.
    In the table below there's
    a description of the cell-index use in each SoC:

    - P1023:
    register[bit]            FMan unit  cell-index
    ============================================================
    DEVDISR[1]               1          0

    - P2041, P3041, P4080, P5020, P5040:
    register[bit]            FMan unit  cell-index
    ============================================================
    DCFG_DEVDISR2[6]         1          0
    DCFG_DEVDISR2[14]        2          1
    (Second FM available only in P4080 and P5040)

    - B4860, T1040, T2080, T4240:
    register[bit]            FMan unit  cell-index
    ============================================================
    DCFG_CCSR_DEVDISR2[24]   1          0
    DCFG_CCSR_DEVDISR2[25]   2          1
    (Second FM available only in T4240)

    DEVDISR, DCFG_DEVDISR2 and DCFG_CCSR_DEVDISR2 are located in
    the specific SoC "Device Configuration/Pin Control" Memory
    Map.

- reg
    Usage: required
    Value type: <prop-encoded-array>
    Definition: A standard property. Specifies the offset of the
    following configuration registers:
    - BMI configuration registers.
    - QMI configuration registers.
    - DMA configuration registers.
    - FPM configuration registers.
    - FMan controller configuration registers.

- ranges
    Usage: required
    Value type: <prop-encoded-array>
    Definition: A standard property.

- clocks
    Usage: required
    Value type: <prop-encoded-array>
    Definition: phandle for the fman input clock.

- clock-names
    Usage: required
    Value type: <stringlist>
    Definition: "fmanclk" for the fman input clock.

- interrupts
    Usage: required
    Value type: <prop-encoded-array>
    Definition: A pair of IRQs are specified in this property.
    The first element is associated with the event interrupts and
    the second element is associated with the error interrupts.

- fsl,qman-channel-range
    Usage: required
    Value type: <prop-encoded-array>
    Definition: Specifies the range of the available dedicated
    channels in the FMan. The first cell specifies the beginning
    of the range and the second cell specifies the number of
    channels.
    Further information available at:
    "Work Queue (WQ) Channel Assignments in the QMan" section
    in DPAA Reference Manual.

- fsl,qman
- fsl,bman
    Usage: required
    Definition: See soc/fsl/qman.txt and soc/fsl/bman.txt

=============================================================================
FMan MURAM Node

DESCRIPTION

FMan Internal memory - shared between all the FMan modules.
It contains data structures that are common and written to or read by
the modules.
FMan internal memory is split into the following parts:
    Packet buffering (Tx/Rx FIFOs)
    Frames internal context

PROPERTIES

- compatible
    Usage: required
    Value type: <stringlist>
    Definition: Must include "fsl,fman-muram"

- ranges
    Usage: required
    Value type: <prop-encoded-array>
    Definition: A standard property.
    Specifies the multi-user memory offset and the size within
    the FMan.

EXAMPLE

muram@0 {
    compatible = "fsl,fman-muram";
    ranges = <0 0x000000 0x28000>;
};

=============================================================================
FMan Port Node

DESCRIPTION

The Frame Manager (FMan) supports several types of hardware ports:
    Ethernet receiver (RX)
    Ethernet transmitter (TX)
    Offline/Host command (O/H)

PROPERTIES

- compatible
    Usage: required
    Value type: <stringlist>
    Definition: A standard property.
    Must include one of the following:
    - "fsl,fman-v2-port-oh" for FManV2 OH ports
    - "fsl,fman-v2-port-rx" for FManV2 RX ports
    - "fsl,fman-v2-port-tx" for FManV2 TX ports
    - "fsl,fman-v3-port-oh" for FManV3 OH ports
    - "fsl,fman-v3-port-rx" for FManV3 RX ports
    - "fsl,fman-v3-port-tx" for FManV3 TX ports

- cell-index
    Usage: required
    Value type: <u32>
    Definition: Specifies the hardware port id.
    Each hardware port on the FMan has its own hardware PortID.
    Super set of all hardware Port IDs available at FMan Reference
    Manual under "FMan Hardware Ports in Freescale Devices" table.

    Each hardware port is assigned a 4KB, port-specific page in
    the FMan hardware port memory region (which is part of the
    FMan memory map). The first 4 KB in the FMan hardware ports
    memory region is used for what are called common registers.
    The subsequent 63 4KB pages are allocated to the hardware
    ports.
    The page of a specific port is determined by the cell-index.

- reg
    Usage: required
    Value type: <prop-encoded-array>
    Definition: There is one reg region describing the port
    configuration registers.

EXAMPLE

port@a8000 {
    cell-index = <0x28>;
    compatible = "fsl,fman-v2-port-tx";
    reg = <0xa8000 0x1000>;
};

port@88000 {
    cell-index = <0x8>;
    compatible = "fsl,fman-v2-port-rx";
    reg = <0x88000 0x1000>;
};

port@81000 {
    cell-index = <0x1>;
    compatible = "fsl,fman-v2-port-oh";
    reg = <0x81000 0x1000>;
};

=============================================================================
FMan dTSEC/XGEC/mEMAC Node

DESCRIPTION

mEMAC/dTSEC/XGEC are the Ethernet network interfaces

PROPERTIES

- compatible
    Usage: required
    Value type: <stringlist>
    Definition: A standard property.
    Must include one of the following:
    - "fsl,fman-dtsec" for dTSEC MAC
    - "fsl,fman-xgec" for XGEC MAC
    - "fsl,fman-memac" for mEMAC MAC

- cell-index
    Usage: required
    Value type: <u32>
    Definition: Specifies the MAC id.

    The cell-index value may be used by the FMan or the SoC, to
    identify the MAC unit in the FMan (or SoC) memory map.
    The two tables below describe the use of cell-index:
    the first describes its use by the FMan, the second
    describes its use by the SoC:

    1. FMan Registers

    FManV2:
    register[bit]           MAC          cell-index
    ============================================================
    FM_EPI[16]              XGEC         8
    FM_EPI[16+n]            dTSECn       n-1
    FM_NPI[11+n]            dTSECn       n-1
    n = 1,..,5

    FManV3:
    register[bit]           MAC          cell-index
    ============================================================
    FM_EPI[16+n]            mEMACn       n-1
    FM_EPI[25]              mEMAC10      9

    FM_NPI[11+n]            mEMACn       n-1
    FM_NPI[10]              mEMAC10      9
    FM_NPI[11]              mEMAC9       8
    n = 1,..8

    FM_EPI and FM_NPI are located in the FMan memory map.

    2. SoC registers:

    - P2041, P3041, P4080, P5020, P5040:
    register[bit]                FMan    MAC      cell
                                 Unit             index
    ============================================================
    DCFG_DEVDISR2[7]             1       XGEC     8
    DCFG_DEVDISR2[7+n]           1       dTSECn   n-1
    DCFG_DEVDISR2[15]            2       XGEC     8
    DCFG_DEVDISR2[15+n]          2       dTSECn   n-1
    n = 1,..5

    - T1040, T2080, T4240, B4860:
    register[bit]                FMan    MAC      cell
                                 Unit             index
    ============================================================
    DCFG_CCSR_DEVDISR2[n-1]      1       mEMACn   n-1
    DCFG_CCSR_DEVDISR2[11+n]     2       mEMACn   n-1
    n = 1,..6,9,10

    DEVDISR, DCFG_DEVDISR2 and DCFG_CCSR_DEVDISR2 are located in
    the specific SoC "Device Configuration/Pin Control" Memory
    Map.

- reg
    Usage: required
    Value type: <prop-encoded-array>
    Definition: A standard property.

- fsl,fman-ports
    Usage: required
    Value type: <prop-encoded-array>
    Definition: An array of two phandles - the first is the
    FMan RX port and the second is the TX port used by this
    MAC.

- ptp-timer
    Usage: required
    Value type: <phandle>
    Definition: A phandle for the IEEE 1588 timer.

EXAMPLE

fman1_tx28: port@a8000 {
    cell-index = <0x28>;
    compatible = "fsl,fman-v2-port-tx";
    reg = <0xa8000 0x1000>;
};

fman1_rx8: port@88000 {
    cell-index = <0x8>;
    compatible = "fsl,fman-v2-port-rx";
    reg = <0x88000 0x1000>;
};

ptp-timer: ptp_timer@fe000 {
    compatible = "fsl,fman-ptp-timer";
    reg = <0xfe000 0x1000>;
};

ethernet@e0000 {
    compatible = "fsl,fman-dtsec";
    cell-index = <0>;
    reg = <0xe0000 0x1000>;
    fsl,fman-ports = <&fman1_rx8 &fman1_tx28>;
    ptp-timer = <&ptp-timer>;
};

=============================================================================
FMan IEEE 1588 Node

DESCRIPTION

The FMan interface to support IEEE 1588

PROPERTIES

- compatible
    Usage: required
    Value type: <stringlist>
    Definition: A standard property.
    Must include "fsl,fman-ptp-timer".

- reg
    Usage: required
    Value type: <prop-encoded-array>
    Definition: A standard property.

EXAMPLE

ptp-timer@fe000 {
    compatible = "fsl,fman-ptp-timer";
    reg = <0xfe000 0x1000>;
};

=============================================================================
Example

fman@400000 {
    #address-cells = <1>;
    #size-cells = <1>;
    cell-index = <1>;
    compatible = "fsl,fman";
    ranges = <0 0x400000 0x100000>;
    reg = <0x400000 0x100000>;
    clocks = <&fman_clk>;
    clock-names = "fmanclk";
    interrupts = <
        96 2 0 0
        16 2 1 1>;
    fsl,qman-channel-range = <0x40 0xc>;

    muram@0 {
        compatible = "fsl,fman-muram";
        reg = <0x0 0x28000>;
    };

    port@81000 {
        cell-index = <1>;
        compatible = "fsl,fman-v2-port-oh";
        reg = <0x81000 0x1000>;
    };

    port@82000 {
        cell-index = <2>;
        compatible = "fsl,fman-v2-port-oh";
        reg = <0x82000 0x1000>;
    };

    port@83000 {
        cell-index = <3>;
        compatible = "fsl,fman-v2-port-oh";
        reg = <0x83000 0x1000>;
    };

    port@84000 {
        cell-index = <4>;
        compatible = "fsl,fman-v2-port-oh";
        reg = <0x84000 0x1000>;
    };

    port@85000 {
        cell-index = <5>;
        compatible = "fsl,fman-v2-port-oh";
        reg = <0x85000 0x1000>;
    };

    port@86000 {
        cell-index = <6>;
        compatible = "fsl,fman-v2-port-oh";
        reg = <0x86000 0x1000>;
    };

    fman1_rx_0x8: port@88000 {
        cell-index = <0x8>;
        compatible = "fsl,fman-v2-port-rx";
        reg = <0x88000 0x1000>;
    };

    fman1_rx_0x9: port@89000 {
        cell-index = <0x9>;
        compatible = "fsl,fman-v2-port-rx";
        reg = <0x89000 0x1000>;
    };

    fman1_rx_0xa: port@8a000 {
        cell-index = <0xa>;
        compatible = "fsl,fman-v2-port-rx";
        reg = <0x8a000 0x1000>;
    };

    fman1_rx_0xb: port@8b000 {
        cell-index = <0xb>;
        compatible = "fsl,fman-v2-port-rx";
        reg = <0x8b000 0x1000>;
    };

    fman1_rx_0xc: port@8c000 {
        cell-index = <0xc>;
        compatible = "fsl,fman-v2-port-rx";
        reg = <0x8c000 0x1000>;
    };

    fman1_rx_0x10: port@90000 {
        cell-index = <0x10>;
        compatible = "fsl,fman-v2-port-rx";
        reg = <0x90000 0x1000>;
    };

    fman1_tx_0x28: port@a8000 {
        cell-index = <0x28>;
        compatible = "fsl,fman-v2-port-tx";
        reg = <0xa8000 0x1000>;
    };

    fman1_tx_0x29: port@a9000 {
        cell-index = <0x29>;
        compatible = "fsl,fman-v2-port-tx";
        reg = <0xa9000 0x1000>;
    };

    fman1_tx_0x2a: port@aa000 {
        cell-index = <0x2a>;
        compatible = "fsl,fman-v2-port-tx";
        reg = <0xaa000 0x1000>;
    };

    fman1_tx_0x2b: port@ab000 {
        cell-index = <0x2b>;
        compatible = "fsl,fman-v2-port-tx";
        reg = <0xab000 0x1000>;
    };

    fman1_tx_0x2c: port@ac000 {
        cell-index = <0x2c>;
        compatible = "fsl,fman-v2-port-tx";
        reg = <0xac000 0x1000>;
    };

    fman1_tx_0x30: port@b0000 {
        cell-index = <0x30>;
        compatible = "fsl,fman-v2-port-tx";
        reg = <0xb0000 0x1000>;
    };

    ethernet@e0000 {
        compatible = "fsl,fman-dtsec";
        cell-index = <0>;
        reg = <0xe0000 0x1000>;
        fsl,fman-ports = <&fman1_rx_0x8 &fman1_tx_0x28>;
    };

    ethernet@e2000 {
        compatible = "fsl,fman-dtsec";
        cell-index = <1>;
        reg = <0xe2000 0x1000>;
        fsl,fman-ports = <&fman1_rx_0x9 &fman1_tx_0x29>;
    };

    ethernet@e4000 {
        compatible = "fsl,fman-dtsec";
        cell-index = <2>;
        reg = <0xe4000 0x1000>;
        fsl,fman-ports = <&fman1_rx_0xa &fman1_tx_0x2a>;
    };

    ethernet@e6000 {
        compatible = "fsl,fman-dtsec";
        cell-index = <3>;
        reg = <0xe6000 0x1000>;
        fsl,fman-ports = <&fman1_rx_0xb &fman1_tx_0x2b>;
    };

    ethernet@e8000 {
        compatible = "fsl,fman-dtsec";
        cell-index = <4>;
        reg = <0xe8000 0x1000>;
        fsl,fman-ports = <&fman1_rx_0xc &fman1_tx_0x2c>;
    };

    ethernet@f0000 {
        cell-index = <8>;
        compatible = "fsl,fman-xgec";
        reg = <0xf0000 0x1000>;
        fsl,fman-ports = <&fman1_rx_0x10 &fman1_tx_0x30>;
    };

    ptp-timer@fe000 {
        compatible = "fsl,fman-ptp-timer";
        reg = <0xfe000 0x1000>;
    };
};
+16
Documentation/devicetree/bindings/rtc/rtc-opal.txt
IBM OPAL real-time clock
------------------------

Required properties:
- compatible: Should be "ibm,opal-rtc"

Optional properties:
- has-tpo: Decides if the wakeup is supported or not.

Example:
	rtc {
		compatible = "ibm,opal-rtc";
		has-tpo;
		phandle = <0x10000029>;
		linux,phandle = <0x10000029>;
	};
+56
Documentation/devicetree/bindings/soc/fsl/bman-portals.txt
QorIQ DPAA Buffer Manager Portals Device Tree Binding

Copyright (C) 2008 - 2014 Freescale Semiconductor Inc.

CONTENTS

    - BMan Portal
    - Example

BMan Portal Node

Portals are memory mapped interfaces to BMan that allow low-latency, lock-less
interaction by software running on processor cores, accelerators and network
interfaces with the BMan

PROPERTIES

- compatible
    Usage: Required
    Value type: <stringlist>
    Definition: Must include "fsl,bman-portal-<hardware revision>"
    May include "fsl,<SoC>-bman-portal" or "fsl,bman-portal"

- reg
    Usage: Required
    Value type: <prop-encoded-array>
    Definition: Two regions. The first is the cache-enabled region of
    the portal. The second is the cache-inhibited region of
    the portal

- interrupts
    Usage: Required
    Value type: <prop-encoded-array>
    Definition: Standard property

EXAMPLE

The example below shows a (P4080) BMan portals container/bus node with two portals

    bman-portals@ff4000000 {
        #address-cells = <1>;
        #size-cells = <1>;
        compatible = "simple-bus";
        ranges = <0 0xf 0xf4000000 0x200000>;

        bman-portal@0 {
            compatible = "fsl,bman-portal-1.0.0", "fsl,bman-portal";
            reg = <0x0 0x4000>, <0x100000 0x1000>;
            interrupts = <105 2 0 0>;
        };
        bman-portal@4000 {
            compatible = "fsl,bman-portal-1.0.0", "fsl,bman-portal";
            reg = <0x4000 0x4000>, <0x101000 0x1000>;
            interrupts = <107 2 0 0>;
        };
    };
+125
Documentation/devicetree/bindings/soc/fsl/bman.txt
QorIQ DPAA Buffer Manager Device Tree Bindings

Copyright (C) 2008 - 2014 Freescale Semiconductor Inc.

CONTENTS

    - BMan Node
    - BMan Private Memory Node
    - Example

BMan Node

The Buffer Manager is part of the Data-Path Acceleration Architecture (DPAA).
BMan supports hardware allocation and deallocation of buffers belonging to pools
originally created by software with configurable depletion thresholds. This
binding covers the CCSR space programming model

PROPERTIES

- compatible
    Usage: Required
    Value type: <stringlist>
    Definition: Must include "fsl,bman"
    May include "fsl,<SoC>-bman"

- reg
    Usage: Required
    Value type: <prop-encoded-array>
    Definition: Registers region within the CCSR address space

The BMan revision information is located in the BMAN_IP_REV_1/2 registers which
are located at offsets 0xbf8 and 0xbfc

- interrupts
    Usage: Required
    Value type: <prop-encoded-array>
    Definition: Standard property. The error interrupt

- fsl,liodn
    Usage: See pamu.txt
    Value type: <prop-encoded-array>
    Definition: PAMU property used for static LIODN assignment

- fsl,iommu-parent
    Usage: See pamu.txt
    Value type: <phandle>
    Definition: PAMU property used for dynamic LIODN assignment

For additional details about the PAMU/LIODN binding(s) see pamu.txt

Devices connected to a BMan instance via Direct Connect Portals (DCP) must link
to the respective BMan instance

- fsl,bman
    Usage: Required
    Value type: <prop-encoded-array>
    Description: List of phandle and DCP index pairs, to the BMan instance
    to which this device is connected via the DCP

BMan Private Memory Node

BMan requires a contiguous range of physical memory used for the backing store
for BMan Free Buffer Proxy Records (FBPR). This memory is reserved/allocated as
a node under the /reserved-memory node

The BMan FBPR memory node must be named "bman-fbpr"

PROPERTIES

- compatible
    Usage: required
    Value type: <stringlist>
    Definition: Must include "fsl,bman-fbpr"

The following constraints are relevant to the FBPR private memory:
    - The size must be 2^(size + 1), with size = 11..33. That is 4 KiB to
      16 GiB
    - The alignment must be a multiple of the memory size

The size of the FBPR must be chosen by observing the hardware features configured
via the Reset Configuration Word (RCW) and that are relevant to a specific board
(e.g. number of MAC(s) pinned-out, number of offline/host command FMan ports,
etc.). The size configured in the DT must reflect the hardware capabilities and
not the specific needs of an application

For additional details about reserved memory regions see reserved-memory.txt

EXAMPLE

The example below shows a BMan FBPR dynamic allocation memory node

    reserved-memory {
        #address-cells = <2>;
        #size-cells = <2>;
        ranges;

        bman_fbpr: bman-fbpr {
            compatible = "fsl,bman-fbpr";
            alloc-ranges = <0 0 0xf 0xffffffff>;
            size = <0 0x1000000>;
            alignment = <0 0x1000000>;
        };
    };

The example below shows a (P4080) BMan CCSR-space node

    crypto@300000 {
        ...
        fsl,bman = <&bman, 2>;
        ...
    };

    bman: bman@31a000 {
        compatible = "fsl,bman";
        reg = <0x31a000 0x1000>;
        interrupts = <16 2 1 2>;
        fsl,liodn = <0x17>;
        memory-region = <&bman_fbpr>;
    };

    fman@400000 {
        ...
        fsl,bman = <&bman, 0>;
        ...
    };
+154
Documentation/devicetree/bindings/soc/fsl/qman-portals.txt
QorIQ DPAA Queue Manager Portals Device Tree Binding

Copyright (C) 2008 - 2014 Freescale Semiconductor Inc.

CONTENTS

    - QMan Portal
    - QMan Pool Channel
    - Example

QMan Portal Node

Portals are memory mapped interfaces to QMan that allow low-latency, lock-less
interaction by software running on processor cores, accelerators and network
interfaces with the QMan

PROPERTIES

- compatible
    Usage: Required
    Value type: <stringlist>
    Definition: Must include "fsl,qman-portal-<hardware revision>"
    May include "fsl,<SoC>-qman-portal" or "fsl,qman-portal"

- reg
    Usage: Required
    Value type: <prop-encoded-array>
    Definition: Two regions. The first is the cache-enabled region of
    the portal. The second is the cache-inhibited region of
    the portal

- interrupts
    Usage: Required
    Value type: <prop-encoded-array>
    Definition: Standard property

- fsl,liodn
    Usage: See pamu.txt
    Value type: <prop-encoded-array>
    Definition: Two LIODN(s). DQRR LIODN (DLIODN) and Frame LIODN
    (FLIODN)

- fsl,iommu-parent
    Usage: See pamu.txt
    Value type: <phandle>
    Definition: PAMU property used for dynamic LIODN assignment

For additional details about the PAMU/LIODN binding(s) see pamu.txt

- fsl,qman-channel-id
    Usage: Required
    Value type: <u32>
    Definition: The hardware index of the channel. This can also be
    determined by dividing any of the channel's 8 work queue
    IDs by 8

In addition to these properties the qman-portals should have sub-nodes to
represent the HW devices/portals that are connected to the software portal
described here

The currently supported sub-nodes are:
    * fman0
    * fman1
    * pme
    * crypto

These subnodes should have the following properties:

- fsl,liodn
    Usage: See pamu.txt
    Value type: <prop-encoded-array>
    Definition: PAMU property used for static LIODN assignment

- fsl,iommu-parent
    Usage: See pamu.txt
    Value type: <phandle>
    Definition: PAMU property used for dynamic LIODN assignment

- dev-handle
    Usage: Required
    Value type: <phandle>
    Definition: The phandle to the particular hardware device that this
    portal is connected to.

DPAA QMan Pool Channel Nodes

Pool Channels are defined with the following properties.

PROPERTIES

- compatible
    Usage: Required
    Value type: <stringlist>
    Definition: Must include "fsl,qman-pool-channel"
    May include "fsl,<SoC>-qman-pool-channel"

- fsl,qman-channel-id
    Usage: Required
    Value type: <u32>
    Definition: The hardware index of the channel. This can also be
    determined by dividing any of the channel's 8 work queue
    IDs by 8

EXAMPLE

The example below shows a (P4080) QMan portals container/bus node with two portals

    qman-portals@ff4200000 {
        #address-cells = <1>;
        #size-cells = <1>;
        compatible = "simple-bus";
        ranges = <0 0xf 0xf4200000 0x200000>;

        qman-portal@0 {
            compatible = "fsl,qman-portal-1.2.0", "fsl,qman-portal";
            reg = <0 0x4000>, <0x100000 0x1000>;
            interrupts = <104 2 0 0>;
            fsl,liodn = <1 2>;
            fsl,qman-channel-id = <0>;

            fman0 {
                fsl,liodn = <0x21>;
                dev-handle = <&fman0>;
            };
            fman1 {
                fsl,liodn = <0xa1>;
                dev-handle = <&fman1>;
            };
            crypto {
                fsl,liodn = <0x41 0x66>;
                dev-handle = <&crypto>;
            };
        };
        qman-portal@4000 {
            compatible = "fsl,qman-portal-1.2.0", "fsl,qman-portal";
            reg = <0x4000 0x4000>, <0x101000 0x1000>;
            interrupts = <106 2 0 0>;
            fsl,liodn = <3 4>;
            fsl,qman-channel-id = <1>;

            fman0 {
                fsl,liodn = <0x22>;
                dev-handle = <&fman0>;
            };
            fman1 {
                fsl,liodn = <0xa2>;
                dev-handle = <&fman1>;
            };
            crypto {
                fsl,liodn = <0x42 0x67>;
                dev-handle = <&crypto>;
            };
        };
    };
+165
Documentation/devicetree/bindings/soc/fsl/qman.txt
QorIQ DPAA Queue Manager Device Tree Binding

Copyright (C) 2008 - 2014 Freescale Semiconductor Inc.

CONTENTS

    - QMan Node
    - QMan Private Memory Nodes
    - Example

QMan Node

The Queue Manager is part of the Data-Path Acceleration Architecture (DPAA).
QMan supports queuing and QoS scheduling of frames to CPUs, network interfaces
and DPAA logic modules, and maintains packet ordering within flows. Besides
providing flow-level queuing, it is also responsible for congestion management
functions such as RED/WRED, congestion notifications and tail discards. This
binding covers the CCSR space programming model

PROPERTIES

- compatible
    Usage: Required
    Value type: <stringlist>
    Definition: Must include "fsl,qman"
    May include "fsl,<SoC>-qman"

- reg
    Usage: Required
    Value type: <prop-encoded-array>
    Definition: Registers region within the CCSR address space

The QMan revision information is located in the QMAN_IP_REV_1/2 registers which
are located at offsets 0xbf8 and 0xbfc

- interrupts
    Usage: Required
    Value type: <prop-encoded-array>
    Definition: Standard property. The error interrupt

- fsl,liodn
    Usage: See pamu.txt
    Value type: <prop-encoded-array>
    Definition: PAMU property used for static LIODN assignment

- fsl,iommu-parent
    Usage: See pamu.txt
    Value type: <phandle>
    Definition: PAMU property used for dynamic LIODN assignment

For additional details about the PAMU/LIODN binding(s) see pamu.txt

- clocks
    Usage: See clock-bindings.txt and qoriq-clock.txt
    Value type: <prop-encoded-array>
    Definition: Reference input clock. Its frequency is half of the
    platform clock

Devices connected to a QMan instance via Direct Connect Portals (DCP) must link
to the respective QMan instance

- fsl,qman
    Usage: Required
    Value type: <prop-encoded-array>
    Description: List of phandle and DCP index pairs, to the QMan instance
    to which this device is connected via the DCP

QMan Private Memory Nodes

QMan requires two contiguous ranges of physical memory used for the backing
store for QMan Frame Queue Descriptors (FQD) and Packed Frame Descriptor
Records (PFDR). This memory is reserved/allocated as nodes under the
/reserved-memory node

The QMan FQD memory node must be named "qman-fqd"

PROPERTIES

- compatible
    Usage: required
    Value type: <stringlist>
    Definition: Must include "fsl,qman-fqd"

The QMan PFDR memory node must be named "qman-pfdr"

PROPERTIES

- compatible
    Usage: required
    Value type: <stringlist>
    Definition: Must include "fsl,qman-pfdr"

The following constraints are relevant to the FQD and PFDR private memory:
    - The size must be 2^(size + 1), with size = 11..29. That is 4 KiB to
      1 GiB
    - The alignment must be a multiple of the memory size

The size of the FQD and PFDR must be chosen by observing the hardware features
configured via the Reset Configuration Word (RCW) and that are relevant to a
specific board (e.g. number of MAC(s) pinned-out, number of offline/host command
FMan ports, etc.). The size configured in the DT must reflect the hardware
capabilities and not the specific needs of an application

For additional details about reserved memory regions see reserved-memory.txt

EXAMPLE

The example below shows QMan FQD and PFDR dynamic allocation memory nodes

    reserved-memory {
        #address-cells = <2>;
        #size-cells = <2>;
        ranges;

        qman_fqd: qman-fqd {
            compatible = "fsl,qman-fqd";
            alloc-ranges = <0 0 0xf 0xffffffff>;
            size = <0 0x400000>;
            alignment = <0 0x400000>;
        };
        qman_pfdr: qman-pfdr {
            compatible = "fsl,qman-pfdr";
            alloc-ranges = <0 0 0xf 0xffffffff>;
            size = <0 0x2000000>;
            alignment = <0 0x2000000>;
        };
    };

The example below shows a (P4080) QMan CCSR-space node

    clockgen: global-utilities@e1000 {
        ...
        sysclk: sysclk {
            ...
        };
        ...
        platform_pll: platform-pll@c00 {
            #clock-cells = <1>;
            reg = <0xc00 0x4>;
            compatible = "fsl,qoriq-platform-pll-1.0";
            clocks = <&sysclk>;
            clock-output-names = "platform-pll", "platform-pll-div2";
        };
        ...
    };

    crypto@300000 {
        ...
        fsl,qman = <&qman, 2>;
        ...
    };

    qman: qman@318000 {
        compatible = "fsl,qman";
        reg = <0x318000 0x1000>;
        interrupts = <16 2 1 3>;
        fsl,liodn = <0x16>;
        memory-region = <&qman_fqd &qman_pfdr>;
        clocks = <&platform_pll 1>;
    };

    fman@400000 {
        ...
        fsl,qman = <&qman, 0>;
        ...
    };
+4 -1
arch/powerpc/Kconfig
··· 88 88 select ARCH_MIGHT_HAVE_PC_PARPORT 89 89 select ARCH_MIGHT_HAVE_PC_SERIO 90 90 select BINFMT_ELF 91 + select ARCH_BINFMT_ELF_RANDOMIZE_PIE 91 92 select OF 92 93 select OF_EARLY_FLATTREE 93 94 select OF_RESERVED_MEM ··· 149 148 select HAVE_ARCH_AUDITSYSCALL 150 149 select ARCH_SUPPORTS_ATOMIC_RMW 151 150 select DCACHE_WORD_ACCESS if PPC64 && CPU_LITTLE_ENDIAN 151 + select NO_BOOTMEM 152 + select HAVE_GENERIC_RCU_GUP 152 153 153 154 config GENERIC_CSUM 154 155 def_bool CPU_LITTLE_ENDIAN ··· 552 549 bool "4k page size" 553 550 554 551 config PPC_16K_PAGES 555 - bool "16k page size" if 44x 552 + bool "16k page size" if 44x || PPC_8xx 556 553 557 554 config PPC_64K_PAGES 558 555 bool "64k page size" if 44x || PPC_STD_MMU_64 || PPC_BOOK3E_64
+2 -2
arch/powerpc/boot/dts/b4860emu.dts
··· 193 193 fsl,liodn-bits = <12>; 194 194 }; 195 195 196 - clockgen: global-utilities@e1000 { 196 + /include/ "fsl/qoriq-clockgen2.dtsi" 197 + global-utilities@e1000 { 197 198 compatible = "fsl,b4-clockgen", "fsl,qoriq-clockgen-2.0"; 198 - reg = <0xe1000 0x1000>; 199 199 }; 200 200 201 201 /include/ "fsl/qoriq-dma-0.dtsi"
+23
arch/powerpc/boot/dts/b4qds.dtsi
··· 152 152 reg = <0x68>; 153 153 }; 154 154 }; 155 + 156 + i2c@2 { 157 + #address-cells = <1>; 158 + #size-cells = <0>; 159 + reg = <0x2>; 160 + 161 + ina220@40 { 162 + compatible = "ti,ina220"; 163 + reg = <0x40>; 164 + shunt-resistor = <1000>; 165 + }; 166 + }; 167 + 168 + i2c@3 { 169 + #address-cells = <1>; 170 + #size-cells = <0>; 171 + reg = <0x3>; 172 + 173 + adt7461@4c { 174 + compatible = "adi,adt7461"; 175 + reg = <0x4c>; 176 + }; 177 + }; 155 178 }; 156 179 }; 157 180
-50
arch/powerpc/boot/dts/bsc9131rdb.dtsi
··· 40 40 compatible = "fsl,ifc-nand"; 41 41 reg = <0x0 0x0 0x4000>; 42 42 43 - partition@0 { 44 - /* This location must not be altered */ 45 - /* 3MB for u-boot Bootloader Image */ 46 - reg = <0x0 0x00300000>; 47 - label = "NAND U-Boot Image"; 48 - read-only; 49 - }; 50 - 51 - partition@300000 { 52 - /* 1MB for DTB Image */ 53 - reg = <0x00300000 0x00100000>; 54 - label = "NAND DTB Image"; 55 - }; 56 - 57 - partition@400000 { 58 - /* 8MB for Linux Kernel Image */ 59 - reg = <0x00400000 0x00800000>; 60 - label = "NAND Linux Kernel Image"; 61 - }; 62 - 63 - partition@c00000 { 64 - /* Rest space for Root file System Image */ 65 - reg = <0x00c00000 0x07400000>; 66 - label = "NAND RFS Image"; 67 - }; 68 43 }; 69 44 }; 70 45 ··· 56 81 compatible = "spansion,s25sl12801"; 57 82 reg = <0>; 58 83 spi-max-frequency = <50000000>; 59 - 60 - /* 512KB for u-boot Bootloader Image */ 61 - partition@0 { 62 - reg = <0x0 0x00080000>; 63 - label = "SPI Flash U-Boot Image"; 64 - read-only; 65 - }; 66 - 67 - /* 512KB for DTB Image */ 68 - partition@80000 { 69 - reg = <0x00080000 0x00080000>; 70 - label = "SPI Flash DTB Image"; 71 - }; 72 - 73 - /* 4MB for Linux Kernel Image */ 74 - partition@100000 { 75 - reg = <0x00100000 0x00400000>; 76 - label = "SPI Flash Kernel Image"; 77 - }; 78 - 79 - /*11MB for RFS Image */ 80 - partition@500000 { 81 - reg = <0x00500000 0x00B00000>; 82 - label = "SPI Flash RFS Image"; 83 - }; 84 84 85 85 }; 86 86 };
+2 -26
arch/powerpc/boot/dts/fsl/b4420si-post.dtsi
··· 80 80 compatible = "fsl,b4420-device-config", "fsl,qoriq-device-config-2.0"; 81 81 }; 82 82 83 - clockgen: global-utilities@e1000 { 83 + /include/ "qoriq-clockgen2.dtsi" 84 + global-utilities@e1000 { 84 85 compatible = "fsl,b4420-clockgen", "fsl,qoriq-clockgen-2.0"; 85 - ranges = <0x0 0xe1000 0x1000>; 86 - #address-cells = <1>; 87 - #size-cells = <1>; 88 - 89 - sysclk: sysclk { 90 - #clock-cells = <0>; 91 - compatible = "fsl,qoriq-sysclk-2.0"; 92 - clock-output-names = "sysclk"; 93 - }; 94 - 95 - pll0: pll0@800 { 96 - #clock-cells = <1>; 97 - reg = <0x800 0x4>; 98 - compatible = "fsl,qoriq-core-pll-2.0"; 99 - clocks = <&sysclk>; 100 - clock-output-names = "pll0", "pll0-div2", "pll0-div4"; 101 - }; 102 - 103 - pll1: pll1@820 { 104 - #clock-cells = <1>; 105 - reg = <0x820 0x4>; 106 - compatible = "fsl,qoriq-core-pll-2.0"; 107 - clocks = <&sysclk>; 108 - clock-output-names = "pll1", "pll1-div2", "pll1-div4"; 109 - }; 110 86 111 87 mux0: mux0@0 { 112 88 #clock-cells = <0>;
+2 -26
arch/powerpc/boot/dts/fsl/b4860si-post.dtsi
··· 124 124 compatible = "fsl,b4860-device-config", "fsl,qoriq-device-config-2.0"; 125 125 }; 126 126 127 - clockgen: global-utilities@e1000 { 127 + /include/ "qoriq-clockgen2.dtsi" 128 + global-utilities@e1000 { 128 129 compatible = "fsl,b4860-clockgen", "fsl,qoriq-clockgen-2.0"; 129 - ranges = <0x0 0xe1000 0x1000>; 130 - #address-cells = <1>; 131 - #size-cells = <1>; 132 - 133 - sysclk: sysclk { 134 - #clock-cells = <0>; 135 - compatible = "fsl,qoriq-sysclk-2.0"; 136 - clock-output-names = "sysclk"; 137 - }; 138 - 139 - pll0: pll0@800 { 140 - #clock-cells = <1>; 141 - reg = <0x800 0x4>; 142 - compatible = "fsl,qoriq-core-pll-2.0"; 143 - clocks = <&sysclk>; 144 - clock-output-names = "pll0", "pll0-div2", "pll0-div4"; 145 - }; 146 - 147 - pll1: pll1@820 { 148 - #clock-cells = <1>; 149 - reg = <0x820 0x4>; 150 - compatible = "fsl,qoriq-core-pll-2.0"; 151 - clocks = <&sysclk>; 152 - clock-output-names = "pll1", "pll1-div2", "pll1-div4"; 153 - }; 154 130 155 131 mux0: mux0@0 { 156 132 #clock-cells = <0>;
+2 -46
arch/powerpc/boot/dts/fsl/p2041si-post.dtsi
··· 305 305 #sleep-cells = <2>; 306 306 }; 307 307 308 - clockgen: global-utilities@e1000 { 308 + /include/ "qoriq-clockgen1.dtsi" 309 + global-utilities@e1000 { 309 310 compatible = "fsl,p2041-clockgen", "fsl,qoriq-clockgen-1.0"; 310 - ranges = <0x0 0xe1000 0x1000>; 311 - reg = <0xe1000 0x1000>; 312 - clock-frequency = <0>; 313 - #address-cells = <1>; 314 - #size-cells = <1>; 315 - 316 - sysclk: sysclk { 317 - #clock-cells = <0>; 318 - compatible = "fsl,qoriq-sysclk-1.0"; 319 - clock-output-names = "sysclk"; 320 - }; 321 - 322 - pll0: pll0@800 { 323 - #clock-cells = <1>; 324 - reg = <0x800 0x4>; 325 - compatible = "fsl,qoriq-core-pll-1.0"; 326 - clocks = <&sysclk>; 327 - clock-output-names = "pll0", "pll0-div2"; 328 - }; 329 - 330 - pll1: pll1@820 { 331 - #clock-cells = <1>; 332 - reg = <0x820 0x4>; 333 - compatible = "fsl,qoriq-core-pll-1.0"; 334 - clocks = <&sysclk>; 335 - clock-output-names = "pll1", "pll1-div2"; 336 - }; 337 - 338 - mux0: mux0@0 { 339 - #clock-cells = <0>; 340 - reg = <0x0 0x4>; 341 - compatible = "fsl,qoriq-core-mux-1.0"; 342 - clocks = <&pll0 0>, <&pll0 1>, <&pll1 0>, <&pll1 1>; 343 - clock-names = "pll0", "pll0-div2", "pll1", "pll1-div2"; 344 - clock-output-names = "cmux0"; 345 - }; 346 - 347 - mux1: mux1@20 { 348 - #clock-cells = <0>; 349 - reg = <0x20 0x4>; 350 - compatible = "fsl,qoriq-core-mux-1.0"; 351 - clocks = <&pll0 0>, <&pll0 1>, <&pll1 0>, <&pll1 1>; 352 - clock-names = "pll0", "pll0-div2", "pll1", "pll1-div2"; 353 - clock-output-names = "cmux1"; 354 - }; 355 311 356 312 mux2: mux2@40 { 357 313 #clock-cells = <0>;
+2 -46
arch/powerpc/boot/dts/fsl/p3041si-post.dtsi
··· 332 332 #sleep-cells = <2>; 333 333 }; 334 334 335 - clockgen: global-utilities@e1000 { 335 + /include/ "qoriq-clockgen1.dtsi" 336 + global-utilities@e1000 { 336 337 compatible = "fsl,p3041-clockgen", "fsl,qoriq-clockgen-1.0"; 337 - ranges = <0x0 0xe1000 0x1000>; 338 - reg = <0xe1000 0x1000>; 339 - clock-frequency = <0>; 340 - #address-cells = <1>; 341 - #size-cells = <1>; 342 - 343 - sysclk: sysclk { 344 - #clock-cells = <0>; 345 - compatible = "fsl,qoriq-sysclk-1.0"; 346 - clock-output-names = "sysclk"; 347 - }; 348 - 349 - pll0: pll0@800 { 350 - #clock-cells = <1>; 351 - reg = <0x800 0x4>; 352 - compatible = "fsl,qoriq-core-pll-1.0"; 353 - clocks = <&sysclk>; 354 - clock-output-names = "pll0", "pll0-div2"; 355 - }; 356 - 357 - pll1: pll1@820 { 358 - #clock-cells = <1>; 359 - reg = <0x820 0x4>; 360 - compatible = "fsl,qoriq-core-pll-1.0"; 361 - clocks = <&sysclk>; 362 - clock-output-names = "pll1", "pll1-div2"; 363 - }; 364 - 365 - mux0: mux0@0 { 366 - #clock-cells = <0>; 367 - reg = <0x0 0x4>; 368 - compatible = "fsl,qoriq-core-mux-1.0"; 369 - clocks = <&pll0 0>, <&pll0 1>, <&pll1 0>, <&pll1 1>; 370 - clock-names = "pll0", "pll0-div2", "pll1", "pll1-div2"; 371 - clock-output-names = "cmux0"; 372 - }; 373 - 374 - mux1: mux1@20 { 375 - #clock-cells = <0>; 376 - reg = <0x20 0x4>; 377 - compatible = "fsl,qoriq-core-mux-1.0"; 378 - clocks = <&pll0 0>, <&pll0 1>, <&pll1 0>, <&pll1 1>; 379 - clock-names = "pll0", "pll0-div2", "pll1", "pll1-div2"; 380 - clock-output-names = "cmux1"; 381 - }; 382 338 383 339 mux2: mux2@40 { 384 340 #clock-cells = <0>;
+2 -46
arch/powerpc/boot/dts/fsl/p4080si-post.dtsi
··· 352 352 #sleep-cells = <2>; 353 353 }; 354 354 355 - clockgen: global-utilities@e1000 { 355 + /include/ "qoriq-clockgen1.dtsi" 356 + global-utilities@e1000 { 356 357 compatible = "fsl,p4080-clockgen", "fsl,qoriq-clockgen-1.0"; 357 - ranges = <0x0 0xe1000 0x1000>; 358 - reg = <0xe1000 0x1000>; 359 - clock-frequency = <0>; 360 - #address-cells = <1>; 361 - #size-cells = <1>; 362 - 363 - sysclk: sysclk { 364 - #clock-cells = <0>; 365 - compatible = "fsl,qoriq-sysclk-1.0"; 366 - clock-output-names = "sysclk"; 367 - }; 368 - 369 - pll0: pll0@800 { 370 - #clock-cells = <1>; 371 - reg = <0x800 0x4>; 372 - compatible = "fsl,qoriq-core-pll-1.0"; 373 - clocks = <&sysclk>; 374 - clock-output-names = "pll0", "pll0-div2"; 375 - }; 376 - 377 - pll1: pll1@820 { 378 - #clock-cells = <1>; 379 - reg = <0x820 0x4>; 380 - compatible = "fsl,qoriq-core-pll-1.0"; 381 - clocks = <&sysclk>; 382 - clock-output-names = "pll1", "pll1-div2"; 383 - }; 384 358 385 359 pll2: pll2@840 { 386 360 #clock-cells = <1>; ··· 370 396 compatible = "fsl,qoriq-core-pll-1.0"; 371 397 clocks = <&sysclk>; 372 398 clock-output-names = "pll3", "pll3-div2"; 373 - }; 374 - 375 - mux0: mux0@0 { 376 - #clock-cells = <0>; 377 - reg = <0x0 0x4>; 378 - compatible = "fsl,qoriq-core-mux-1.0"; 379 - clocks = <&pll0 0>, <&pll0 1>, <&pll1 0>, <&pll1 1>; 380 - clock-names = "pll0", "pll0-div2", "pll1", "pll1-div2"; 381 - clock-output-names = "cmux0"; 382 - }; 383 - 384 - mux1: mux1@20 { 385 - #clock-cells = <0>; 386 - reg = <0x20 0x4>; 387 - compatible = "fsl,qoriq-core-mux-1.0"; 388 - clocks = <&pll0 0>, <&pll0 1>, <&pll1 0>, <&pll1 1>; 389 - clock-names = "pll0", "pll0-div2", "pll1", "pll1-div2"; 390 - clock-output-names = "cmux1"; 391 399 }; 392 400 393 401 mux2: mux2@40 {
+2 -46
arch/powerpc/boot/dts/fsl/p5020si-post.dtsi
··· 337 337 #sleep-cells = <2>; 338 338 }; 339 339 340 - clockgen: global-utilities@e1000 { 340 + /include/ "qoriq-clockgen1.dtsi" 341 + global-utilities@e1000 { 341 342 compatible = "fsl,p5020-clockgen", "fsl,qoriq-clockgen-1.0"; 342 - ranges = <0x0 0xe1000 0x1000>; 343 - reg = <0xe1000 0x1000>; 344 - clock-frequency = <0>; 345 - #address-cells = <1>; 346 - #size-cells = <1>; 347 - 348 - sysclk: sysclk { 349 - #clock-cells = <0>; 350 - compatible = "fsl,qoriq-sysclk-1.0"; 351 - clock-output-names = "sysclk"; 352 - }; 353 - 354 - pll0: pll0@800 { 355 - #clock-cells = <1>; 356 - reg = <0x800 0x4>; 357 - compatible = "fsl,qoriq-core-pll-1.0"; 358 - clocks = <&sysclk>; 359 - clock-output-names = "pll0", "pll0-div2"; 360 - }; 361 - 362 - pll1: pll1@820 { 363 - #clock-cells = <1>; 364 - reg = <0x820 0x4>; 365 - compatible = "fsl,qoriq-core-pll-1.0"; 366 - clocks = <&sysclk>; 367 - clock-output-names = "pll1", "pll1-div2"; 368 - }; 369 - 370 - mux0: mux0@0 { 371 - #clock-cells = <0>; 372 - reg = <0x0 0x4>; 373 - compatible = "fsl,qoriq-core-mux-1.0"; 374 - clocks = <&pll0 0>, <&pll0 1>, <&pll1 0>, <&pll1 1>; 375 - clock-names = "pll0", "pll0-div2", "pll1", "pll1-div2"; 376 - clock-output-names = "cmux0"; 377 - }; 378 - 379 - mux1: mux1@20 { 380 - #clock-cells = <0>; 381 - reg = <0x20 0x4>; 382 - compatible = "fsl,qoriq-core-mux-1.0"; 383 - clocks = <&pll0 0>, <&pll0 1>, <&pll1 0>, <&pll1 1>; 384 - clock-names = "pll0", "pll0-div2", "pll1", "pll1-div2"; 385 - clock-output-names = "cmux1"; 386 - }; 387 343 }; 388 344 389 345 rcpm: global-utilities@e2000 {
+2 -46
arch/powerpc/boot/dts/fsl/p5040si-post.dtsi
··· 297 297 #sleep-cells = <2>; 298 298 }; 299 299 300 - clockgen: global-utilities@e1000 { 300 + /include/ "qoriq-clockgen1.dtsi" 301 + global-utilities@e1000 { 301 302 compatible = "fsl,p5040-clockgen", "fsl,qoriq-clockgen-1.0"; 302 - ranges = <0x0 0xe1000 0x1000>; 303 - reg = <0xe1000 0x1000>; 304 - clock-frequency = <0>; 305 - #address-cells = <1>; 306 - #size-cells = <1>; 307 - 308 - sysclk: sysclk { 309 - #clock-cells = <0>; 310 - compatible = "fsl,qoriq-sysclk-1.0"; 311 - clock-output-names = "sysclk"; 312 - }; 313 - 314 - pll0: pll0@800 { 315 - #clock-cells = <1>; 316 - reg = <0x800 0x4>; 317 - compatible = "fsl,qoriq-core-pll-1.0"; 318 - clocks = <&sysclk>; 319 - clock-output-names = "pll0", "pll0-div2"; 320 - }; 321 - 322 - pll1: pll1@820 { 323 - #clock-cells = <1>; 324 - reg = <0x820 0x4>; 325 - compatible = "fsl,qoriq-core-pll-1.0"; 326 - clocks = <&sysclk>; 327 - clock-output-names = "pll1", "pll1-div2"; 328 - }; 329 - 330 - mux0: mux0@0 { 331 - #clock-cells = <0>; 332 - reg = <0x0 0x4>; 333 - compatible = "fsl,qoriq-core-mux-1.0"; 334 - clocks = <&pll0 0>, <&pll0 1>, <&pll1 0>, <&pll1 1>; 335 - clock-names = "pll0", "pll0-div2", "pll1", "pll1-div2"; 336 - clock-output-names = "cmux0"; 337 - }; 338 - 339 - mux1: mux1@20 { 340 - #clock-cells = <0>; 341 - reg = <0x20 0x4>; 342 - compatible = "fsl,qoriq-core-mux-1.0"; 343 - clocks = <&pll0 0>, <&pll0 1>, <&pll1 0>, <&pll1 1>; 344 - clock-names = "pll0", "pll0-div2", "pll1", "pll1-div2"; 345 - clock-output-names = "cmux1"; 346 - }; 347 303 348 304 mux2: mux2@40 { 349 305 #clock-cells = <0>;
+85
arch/powerpc/boot/dts/fsl/qoriq-clockgen1.dtsi
··· 1 + /* 2 + * QorIQ clock control device tree stub [ controller @ offset 0xe1000 ] 3 + * 4 + * Copyright 2014 Freescale Semiconductor Inc. 5 + * 6 + * Redistribution and use in source and binary forms, with or without 7 + * modification, are permitted provided that the following conditions are met: 8 + * * Redistributions of source code must retain the above copyright 9 + * notice, this list of conditions and the following disclaimer. 10 + * * Redistributions in binary form must reproduce the above copyright 11 + * notice, this list of conditions and the following disclaimer in the 12 + * documentation and/or other materials provided with the distribution. 13 + * * Neither the name of Freescale Semiconductor nor the 14 + * names of its contributors may be used to endorse or promote products 15 + * derived from this software without specific prior written permission. 16 + * 17 + * 18 + * ALTERNATIVELY, this software may be distributed under the terms of the 19 + * GNU General Public License ("GPL") as published by the Free Software 20 + * Foundation, either version 2 of that License or (at your option) any 21 + * later version. 22 + * 23 + * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY 24 + * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED 25 + * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE 26 + * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY 27 + * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES 28 + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; 29 + * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND 30 + * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT 31 + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS 32 + * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
33 + */ 34 + 35 + global-utilities@e1000 { 36 + compatible = "fsl,qoriq-clockgen-1.0"; 37 + ranges = <0x0 0xe1000 0x1000>; 38 + reg = <0xe1000 0x1000>; 39 + clock-frequency = <0>; 40 + #address-cells = <1>; 41 + #size-cells = <1>; 42 + 43 + sysclk: sysclk { 44 + #clock-cells = <0>; 45 + compatible = "fsl,qoriq-sysclk-1.0", "fixed-clock"; 46 + clock-output-names = "sysclk"; 47 + }; 48 + pll0: pll0@800 { 49 + #clock-cells = <1>; 50 + reg = <0x800 0x4>; 51 + compatible = "fsl,qoriq-core-pll-1.0"; 52 + clocks = <&sysclk>; 53 + clock-output-names = "pll0", "pll0-div2"; 54 + }; 55 + pll1: pll1@820 { 56 + #clock-cells = <1>; 57 + reg = <0x820 0x4>; 58 + compatible = "fsl,qoriq-core-pll-1.0"; 59 + clocks = <&sysclk>; 60 + clock-output-names = "pll1", "pll1-div2"; 61 + }; 62 + mux0: mux0@0 { 63 + #clock-cells = <0>; 64 + reg = <0x0 0x4>; 65 + compatible = "fsl,qoriq-core-mux-1.0"; 66 + clocks = <&pll0 0>, <&pll0 1>, <&pll1 0>, <&pll1 1>; 67 + clock-names = "pll0", "pll0-div2", "pll1", "pll1-div2"; 68 + clock-output-names = "cmux0"; 69 + }; 70 + mux1: mux1@20 { 71 + #clock-cells = <0>; 72 + reg = <0x20 0x4>; 73 + compatible = "fsl,qoriq-core-mux-1.0"; 74 + clocks = <&pll0 0>, <&pll0 1>, <&pll1 0>, <&pll1 1>; 75 + clock-names = "pll0", "pll0-div2", "pll1", "pll1-div2"; 76 + clock-output-names = "cmux1"; 77 + }; 78 + platform_pll: platform-pll@c00 { 79 + #clock-cells = <1>; 80 + reg = <0xc00 0x4>; 81 + compatible = "fsl,qoriq-platform-pll-1.0"; 82 + clocks = <&sysclk>; 83 + clock-output-names = "platform-pll", "platform-pll-div2"; 84 + }; 85 + };
+68
arch/powerpc/boot/dts/fsl/qoriq-clockgen2.dtsi
··· 1 + /* 2 + * QorIQ clock control device tree stub [ controller @ offset 0xe1000 ] 3 + * 4 + * Copyright 2014 Freescale Semiconductor Inc. 5 + * 6 + * Redistribution and use in source and binary forms, with or without 7 + * modification, are permitted provided that the following conditions are met: 8 + * * Redistributions of source code must retain the above copyright 9 + * notice, this list of conditions and the following disclaimer. 10 + * * Redistributions in binary form must reproduce the above copyright 11 + * notice, this list of conditions and the following disclaimer in the 12 + * documentation and/or other materials provided with the distribution. 13 + * * Neither the name of Freescale Semiconductor nor the 14 + * names of its contributors may be used to endorse or promote products 15 + * derived from this software without specific prior written permission. 16 + * 17 + * 18 + * ALTERNATIVELY, this software may be distributed under the terms of the 19 + * GNU General Public License ("GPL") as published by the Free Software 20 + * Foundation, either version 2 of that License or (at your option) any 21 + * later version. 22 + * 23 + * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY 24 + * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED 25 + * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE 26 + * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY 27 + * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES 28 + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; 29 + * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND 30 + * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT 31 + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS 32 + * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
33 + */ 34 + 35 + global-utilities@e1000 { 36 + compatible = "fsl,qoriq-clockgen-2.0"; 37 + ranges = <0x0 0xe1000 0x1000>; 38 + reg = <0xe1000 0x1000>; 39 + #address-cells = <1>; 40 + #size-cells = <1>; 41 + 42 + sysclk: sysclk { 43 + #clock-cells = <0>; 44 + compatible = "fsl,qoriq-sysclk-2.0", "fixed-clock"; 45 + clock-output-names = "sysclk"; 46 + }; 47 + pll0: pll0@800 { 48 + #clock-cells = <1>; 49 + reg = <0x800 0x4>; 50 + compatible = "fsl,qoriq-core-pll-2.0"; 51 + clocks = <&sysclk>; 52 + clock-output-names = "pll0", "pll0-div2", "pll0-div4"; 53 + }; 54 + pll1: pll1@820 { 55 + #clock-cells = <1>; 56 + reg = <0x820 0x4>; 57 + compatible = "fsl,qoriq-core-pll-2.0"; 58 + clocks = <&sysclk>; 59 + clock-output-names = "pll1", "pll1-div2", "pll1-div4"; 60 + }; 61 + platform_pll: platform-pll@c00 { 62 + #clock-cells = <1>; 63 + reg = <0xc00 0x4>; 64 + compatible = "fsl,qoriq-platform-pll-2.0"; 65 + clocks = <&sysclk>; 66 + clock-output-names = "platform-pll", "platform-pll-div2"; 67 + }; 68 + };
+2 -28
arch/powerpc/boot/dts/fsl/t1040si-post.dtsi
··· 281 281 fsl,liodn-bits = <12>; 282 282 }; 283 283 284 - clockgen: global-utilities@e1000 { 284 + /include/ "qoriq-clockgen2.dtsi" 285 + global-utilities@e1000 { 285 286 compatible = "fsl,t1040-clockgen", "fsl,qoriq-clockgen-2.0"; 286 - ranges = <0x0 0xe1000 0x1000>; 287 - reg = <0xe1000 0x1000>; 288 - #address-cells = <1>; 289 - #size-cells = <1>; 290 - 291 - sysclk: sysclk { 292 - #clock-cells = <0>; 293 - compatible = "fsl,qoriq-sysclk-2.0"; 294 - clock-output-names = "sysclk", "fixed-clock"; 295 - }; 296 - 297 - 298 - pll0: pll0@800 { 299 - #clock-cells = <1>; 300 - reg = <0x800 4>; 301 - compatible = "fsl,qoriq-core-pll-2.0"; 302 - clocks = <&sysclk>; 303 - clock-output-names = "pll0", "pll0-div2", "pll0-div4"; 304 - }; 305 - 306 - pll1: pll1@820 { 307 - #clock-cells = <1>; 308 - reg = <0x820 4>; 309 - compatible = "fsl,qoriq-core-pll-2.0"; 310 - clocks = <&sysclk>; 311 - clock-output-names = "pll1", "pll1-div2", "pll1-div4"; 312 - }; 313 287 314 288 mux0: mux0@0 { 315 289 #clock-cells = <0>;
+2 -27
arch/powerpc/boot/dts/fsl/t2081si-post.dtsi
··· 305 305 fsl,liodn-bits = <12>; 306 306 }; 307 307 308 - clockgen: global-utilities@e1000 { 308 + /include/ "qoriq-clockgen2.dtsi" 309 + global-utilities@e1000 { 309 310 compatible = "fsl,t2080-clockgen", "fsl,qoriq-clockgen-2.0"; 310 - ranges = <0x0 0xe1000 0x1000>; 311 - reg = <0xe1000 0x1000>; 312 - #address-cells = <1>; 313 - #size-cells = <1>; 314 - 315 - sysclk: sysclk { 316 - #clock-cells = <0>; 317 - compatible = "fsl,qoriq-sysclk-2.0"; 318 - clock-output-names = "sysclk", "fixed-clock"; 319 - }; 320 - 321 - pll0: pll0@800 { 322 - #clock-cells = <1>; 323 - reg = <0x800 4>; 324 - compatible = "fsl,qoriq-core-pll-2.0"; 325 - clocks = <&sysclk>; 326 - clock-output-names = "pll0", "pll0-div2", "pll0-div4"; 327 - }; 328 - 329 - pll1: pll1@820 { 330 - #clock-cells = <1>; 331 - reg = <0x820 4>; 332 - compatible = "fsl,qoriq-core-pll-2.0"; 333 - clocks = <&sysclk>; 334 - clock-output-names = "pll1", "pll1-div2", "pll1-div4"; 335 - }; 336 311 337 312 mux0: mux0@0 { 338 313 #clock-cells = <0>;
+2 -27
arch/powerpc/boot/dts/fsl/t4240si-post.dtsi
··· 368 368 fsl,liodn-bits = <12>; 369 369 }; 370 370 371 - clockgen: global-utilities@e1000 { 371 + /include/ "qoriq-clockgen2.dtsi" 372 + global-utilities@e1000 { 372 373 compatible = "fsl,t4240-clockgen", "fsl,qoriq-clockgen-2.0"; 373 - ranges = <0x0 0xe1000 0x1000>; 374 - reg = <0xe1000 0x1000>; 375 - #address-cells = <1>; 376 - #size-cells = <1>; 377 - 378 - sysclk: sysclk { 379 - #clock-cells = <0>; 380 - compatible = "fsl,qoriq-sysclk-2.0"; 381 - clock-output-names = "sysclk"; 382 - }; 383 - 384 - pll0: pll0@800 { 385 - #clock-cells = <1>; 386 - reg = <0x800 0x4>; 387 - compatible = "fsl,qoriq-core-pll-2.0"; 388 - clocks = <&sysclk>; 389 - clock-output-names = "pll0", "pll0-div2", "pll0-div4"; 390 - }; 391 - 392 - pll1: pll1@820 { 393 - #clock-cells = <1>; 394 - reg = <0x820 0x4>; 395 - compatible = "fsl,qoriq-core-pll-2.0"; 396 - clocks = <&sysclk>; 397 - clock-output-names = "pll1", "pll1-div2", "pll1-div4"; 398 - }; 399 374 400 375 pll2: pll2@840 { 401 376 #clock-cells = <1>;
+20
arch/powerpc/boot/dts/p3041ds.dts
··· 98 98 reg = <0x68>; 99 99 interrupts = <0x1 0x1 0 0>; 100 100 }; 101 + ina220@40 { 102 + compatible = "ti,ina220"; 103 + reg = <0x40>; 104 + shunt-resistor = <1000>; 105 + }; 106 + ina220@41 { 107 + compatible = "ti,ina220"; 108 + reg = <0x41>; 109 + shunt-resistor = <1000>; 110 + }; 111 + ina220@44 { 112 + compatible = "ti,ina220"; 113 + reg = <0x44>; 114 + shunt-resistor = <1000>; 115 + }; 116 + ina220@45 { 117 + compatible = "ti,ina220"; 118 + reg = <0x45>; 119 + shunt-resistor = <1000>; 120 + }; 101 121 adt7461@4c { 102 122 compatible = "adi,adt7461"; 103 123 reg = <0x4c>;
+20
arch/powerpc/boot/dts/p5020ds.dts
··· 98 98 reg = <0x68>; 99 99 interrupts = <0x1 0x1 0 0>; 100 100 }; 101 + ina220@40 { 102 + compatible = "ti,ina220"; 103 + reg = <0x40>; 104 + shunt-resistor = <1000>; 105 + }; 106 + ina220@41 { 107 + compatible = "ti,ina220"; 108 + reg = <0x41>; 109 + shunt-resistor = <1000>; 110 + }; 111 + ina220@44 { 112 + compatible = "ti,ina220"; 113 + reg = <0x44>; 114 + shunt-resistor = <1000>; 115 + }; 116 + ina220@45 { 117 + compatible = "ti,ina220"; 118 + reg = <0x45>; 119 + shunt-resistor = <1000>; 120 + }; 101 121 adt7461@4c { 102 122 compatible = "adi,adt7461"; 103 123 reg = <0x4c>;
+20
arch/powerpc/boot/dts/p5040ds.dts
··· 95 95 reg = <0x68>; 96 96 interrupts = <0x1 0x1 0 0>; 97 97 }; 98 + ina220@40 { 99 + compatible = "ti,ina220"; 100 + reg = <0x40>; 101 + shunt-resistor = <1000>; 102 + }; 103 + ina220@41 { 104 + compatible = "ti,ina220"; 105 + reg = <0x41>; 106 + shunt-resistor = <1000>; 107 + }; 108 + ina220@44 { 109 + compatible = "ti,ina220"; 110 + reg = <0x44>; 111 + shunt-resistor = <1000>; 112 + }; 113 + ina220@45 { 114 + compatible = "ti,ina220"; 115 + reg = <0x45>; 116 + shunt-resistor = <1000>; 117 + }; 98 118 adt7461@4c { 99 119 compatible = "adi,adt7461"; 100 120 reg = <0x4c>;
+7
arch/powerpc/boot/dts/t104xrdb.dtsi
··· 83 83 }; 84 84 }; 85 85 86 + i2c@118000 { 87 + adt7461@4c { 88 + compatible = "adi,adt7461"; 89 + reg = <0x4c>; 90 + }; 91 + }; 92 + 86 93 i2c@118100 { 87 94 pca9546@77 { 88 95 compatible = "nxp,pca9546";
+11
arch/powerpc/boot/dts/t208xqds.dtsi
··· 169 169 shunt-resistor = <1000>; 170 170 }; 171 171 }; 172 + 173 + i2c@3 { 174 + #address-cells = <1>; 175 + #size-cells = <0>; 176 + reg = <0x3>; 177 + 178 + adt7461@4c { 179 + compatible = "adi,adt7461"; 180 + reg = <0x4c>; 181 + }; 182 + }; 172 183 }; 173 184 }; 174 185
+2 -2
arch/powerpc/boot/dts/t4240emu.dts
··· 250 250 fsl,liodn-bits = <12>; 251 251 }; 252 252 253 - clockgen: global-utilities@e1000 { 253 + /include/ "fsl/qoriq-clockgen2.dtsi" 254 + global-utilities@e1000 { 254 255 compatible = "fsl,t4240-clockgen", "fsl,qoriq-clockgen-2.0"; 255 - reg = <0xe1000 0x1000>; 256 256 }; 257 257 258 258 /include/ "fsl/qoriq-dma-0.dtsi"
+13 -2
arch/powerpc/boot/main.c
··· 144 144 145 145 static void prep_cmdline(void *chosen) 146 146 { 147 + unsigned int getline_timeout = 5000; 148 + int v; 149 + int n; 150 + 151 + /* Wait-for-input time */ 152 + n = getprop(chosen, "linux,cmdline-timeout", &v, sizeof(v)); 153 + if (n == sizeof(v)) 154 + getline_timeout = v; 155 + 147 156 if (cmdline[0] == '\0') 148 157 getprop(chosen, "bootargs", cmdline, BOOT_COMMAND_LINE_SIZE-1); 149 158 150 159 printf("\n\rLinux/PowerPC load: %s", cmdline); 160 + 151 161 /* If possible, edit the command line */ 152 - if (console_ops.edit_cmdline) 153 - console_ops.edit_cmdline(cmdline, BOOT_COMMAND_LINE_SIZE); 162 + if (console_ops.edit_cmdline && getline_timeout) 163 + console_ops.edit_cmdline(cmdline, BOOT_COMMAND_LINE_SIZE, getline_timeout); 164 + 154 165 printf("\n\r"); 155 166 156 167 /* Put the command line back into the devtree for the kernel */
+1 -1
arch/powerpc/boot/ops.h
··· 58 58 struct console_ops { 59 59 int (*open)(void); 60 60 void (*write)(const char *buf, int len); 61 - void (*edit_cmdline)(char *buf, int len); 61 + void (*edit_cmdline)(char *buf, int len, unsigned int getline_timeout); 62 62 void (*close)(void); 63 63 void *data; 64 64 };
+3 -3
arch/powerpc/boot/serial.c
··· 33 33 scdp->putc(*buf++); 34 34 } 35 35 36 - static void serial_edit_cmdline(char *buf, int len) 36 + static void serial_edit_cmdline(char *buf, int len, unsigned int timeout) 37 37 { 38 38 int timer = 0, count; 39 39 char ch, *cp; ··· 44 44 cp = &buf[count]; 45 45 count++; 46 46 47 - while (timer++ < 5*1000) { 47 + do { 48 48 if (scdp->tstc()) { 49 49 while (((ch = scdp->getc()) != '\n') && (ch != '\r')) { 50 50 /* Test for backspace/delete */ ··· 70 70 break; /* Exit 'timer' loop */ 71 71 } 72 72 udelay(1000); /* 1 msec */ 73 - } 73 + } while (timer++ < timeout); 74 74 *cp = 0; 75 75 } 76 76
+1
arch/powerpc/configs/corenet32_smp_defconfig
··· 144 144 CONFIG_RTC_DRV_DS3232=y 145 145 CONFIG_UIO=y 146 146 CONFIG_STAGING=y 147 + CONFIG_MEMORY=y 147 148 CONFIG_VIRT_DRIVERS=y 148 149 CONFIG_FSL_HV_MANAGER=y 149 150 CONFIG_EXT2_FS=y
+1
arch/powerpc/configs/corenet64_smp_defconfig
··· 118 118 CONFIG_VIRT_DRIVERS=y 119 119 CONFIG_FSL_HV_MANAGER=y 120 120 CONFIG_FSL_CORENET_CF=y 121 + CONFIG_MEMORY=y 121 122 CONFIG_EXT2_FS=y 122 123 CONFIG_EXT3_FS=y 123 124 CONFIG_ISO9660_FS=m
+1
arch/powerpc/configs/mpc85xx_defconfig
··· 215 215 CONFIG_RTC_DRV_CMOS=y 216 216 CONFIG_DMADEVICES=y 217 217 CONFIG_FSL_DMA=y 218 + CONFIG_MEMORY=y 218 219 # CONFIG_NET_DMA is not set 219 220 CONFIG_EXT2_FS=y 220 221 CONFIG_EXT3_FS=y
+1
arch/powerpc/configs/mpc85xx_smp_defconfig
··· 216 216 CONFIG_RTC_DRV_CMOS=y 217 217 CONFIG_DMADEVICES=y 218 218 CONFIG_FSL_DMA=y 219 + CONFIG_MEMORY=y 219 220 # CONFIG_NET_DMA is not set 220 221 CONFIG_EXT2_FS=y 221 222 CONFIG_EXT3_FS=y
+3 -3
arch/powerpc/include/asm/bitops.h
··· 14 14 * 15 15 * The bitop functions are defined to work on unsigned longs, so for a 16 16 * ppc64 system the bits end up numbered: 17 - * |63..............0|127............64|191...........128|255...........196| 17 + * |63..............0|127............64|191...........128|255...........192| 18 18 * and on ppc32: 19 - * |31.....0|63....31|95....64|127...96|159..128|191..160|223..192|255..224| 19 + * |31.....0|63....32|95....64|127...96|159..128|191..160|223..192|255..224| 20 20 * 21 21 * There are a few little-endian macros used mostly for filesystem 22 22 * bitmaps, these work on similar bit arrays layouts, but ··· 213 213 return __ilog2(x & -x); 214 214 } 215 215 216 - static __inline__ int __ffs(unsigned long x) 216 + static __inline__ unsigned long __ffs(unsigned long x) 217 217 { 218 218 return __ilog2(x & -x); 219 219 }
+3 -7
arch/powerpc/include/asm/cputable.h
··· 448 448 CPU_FTR_PURR | CPU_FTR_REAL_LE | CPU_FTR_DABRX) 449 449 #define CPU_FTRS_COMPATIBLE (CPU_FTR_USE_TB | CPU_FTR_PPCAS_ARCH_V2) 450 450 451 - #define CPU_FTRS_A2 (CPU_FTR_USE_TB | CPU_FTR_SMT | CPU_FTR_DBELL | \ 452 - CPU_FTR_NOEXECUTE | CPU_FTR_NODSISRALIGN | \ 453 - CPU_FTR_ICSWX | CPU_FTR_DABRX ) 454 - 455 451 #ifdef __powerpc64__ 456 452 #ifdef CONFIG_PPC_BOOK3E 457 - #define CPU_FTRS_POSSIBLE (CPU_FTRS_E6500 | CPU_FTRS_E5500 | CPU_FTRS_A2) 453 + #define CPU_FTRS_POSSIBLE (CPU_FTRS_E6500 | CPU_FTRS_E5500) 458 454 #else 459 455 #define CPU_FTRS_POSSIBLE \ 460 456 (CPU_FTRS_POWER4 | CPU_FTRS_PPC970 | CPU_FTRS_POWER5 | \ ··· 501 505 502 506 #ifdef __powerpc64__ 503 507 #ifdef CONFIG_PPC_BOOK3E 504 - #define CPU_FTRS_ALWAYS (CPU_FTRS_E6500 & CPU_FTRS_E5500 & CPU_FTRS_A2) 508 + #define CPU_FTRS_ALWAYS (CPU_FTRS_E6500 & CPU_FTRS_E5500) 505 509 #else 506 510 #define CPU_FTRS_ALWAYS \ 507 511 (CPU_FTRS_POWER4 & CPU_FTRS_PPC970 & CPU_FTRS_POWER5 & \ 508 512 CPU_FTRS_POWER6 & CPU_FTRS_POWER7 & CPU_FTRS_CELL & \ 509 513 CPU_FTRS_PA6T & CPU_FTRS_POWER8 & CPU_FTRS_POWER8E & \ 510 - CPU_FTRS_POWER8_DD1 & CPU_FTRS_POSSIBLE) 514 + CPU_FTRS_POWER8_DD1 & ~CPU_FTR_HVMODE & CPU_FTRS_POSSIBLE) 511 515 #endif 512 516 #else 513 517 enum {
+2
arch/powerpc/include/asm/eeh.h
··· 39 39 #define EEH_PROBE_MODE_DEV 0x04 /* From PCI device */ 40 40 #define EEH_PROBE_MODE_DEVTREE 0x08 /* From device tree */ 41 41 #define EEH_ENABLE_IO_FOR_LOG 0x10 /* Enable IO for log */ 42 + #define EEH_EARLY_DUMP_LOG 0x20 /* Dump log immediately */ 42 43 43 44 /* 44 45 * Delay for PE reset, all in ms ··· 73 72 #define EEH_PE_ISOLATED (1 << 0) /* Isolated PE */ 74 73 #define EEH_PE_RECOVERING (1 << 1) /* Recovering PE */ 75 74 #define EEH_PE_CFG_BLOCKED (1 << 2) /* Block config access */ 75 + #define EEH_PE_RESET (1 << 3) /* PE reset in progress */ 76 76 77 77 #define EEH_PE_KEEP (1 << 8) /* Keep PE on hotplug */ 78 78 #define EEH_PE_CFG_RESTRICTED (1 << 9) /* Block config on error */
+1 -2
arch/powerpc/include/asm/elf.h
··· 28 28 the loader. We need to make sure that it is out of the way of the program 29 29 that it will "exec", and that there is sufficient room for the brk. */ 30 30 31 - extern unsigned long randomize_et_dyn(unsigned long base); 32 - #define ELF_ET_DYN_BASE (randomize_et_dyn(0x20000000)) 31 + #define ELF_ET_DYN_BASE 0x20000000 33 32 34 33 #define ELF_CORE_EFLAGS (is_elf2_task() ? 2 : 0) 35 34
+4 -1
arch/powerpc/include/asm/fsl_guts.h
··· 68 68 u8 res0b4[0xc0 - 0xb4]; 69 69 __be32 iovselsr; /* 0x.00c0 - I/O voltage select status register 70 70 Called 'elbcvselcr' on 86xx SOCs */ 71 - u8 res0c4[0x224 - 0xc4]; 71 + u8 res0c4[0x100 - 0xc4]; 72 + __be32 rcwsr[16]; /* 0x.0100 - Reset Control Word Status registers 73 + There are 16 registers */ 74 + u8 res140[0x224 - 0x140]; 72 75 __be32 iodelay1; /* 0x.0224 - IO delay control register 1 */ 73 76 __be32 iodelay2; /* 0x.0228 - IO delay control register 2 */ 74 77 u8 res22c[0x604 - 0x22c];
+6 -1
arch/powerpc/include/asm/hardirq.h
··· 21 21 22 22 #define __ARCH_IRQ_STAT 23 23 24 - #define local_softirq_pending() __get_cpu_var(irq_stat).__softirq_pending 24 + #define local_softirq_pending() __this_cpu_read(irq_stat.__softirq_pending) 25 + 26 + #define __ARCH_SET_SOFTIRQ_PENDING 27 + 28 + #define set_softirq_pending(x) __this_cpu_write(irq_stat.__softirq_pending, (x)) 29 + #define or_softirq_pending(x) __this_cpu_or(irq_stat.__softirq_pending, (x)) 25 30 26 31 static inline void ack_bad_irq(unsigned int irq) 27 32 {
+4 -4
arch/powerpc/include/asm/hugetlb.h
··· 48 48 #endif /* CONFIG_PPC_BOOK3S_64 */ 49 49 50 50 51 - static inline pte_t *hugepte_offset(hugepd_t *hpdp, unsigned long addr, 51 + static inline pte_t *hugepte_offset(hugepd_t hpd, unsigned long addr, 52 52 unsigned pdshift) 53 53 { 54 54 /* ··· 58 58 */ 59 59 unsigned long idx = 0; 60 60 61 - pte_t *dir = hugepd_page(*hpdp); 61 + pte_t *dir = hugepd_page(hpd); 62 62 #ifndef CONFIG_PPC_FSL_BOOK3E 63 - idx = (addr & ((1UL << pdshift) - 1)) >> hugepd_shift(*hpdp); 63 + idx = (addr & ((1UL << pdshift) - 1)) >> hugepd_shift(hpd); 64 64 #endif 65 65 66 66 return dir + idx; ··· 193 193 } 194 194 195 195 #define hugepd_shift(x) 0 196 - static inline pte_t *hugepte_offset(hugepd_t *hpdp, unsigned long addr, 196 + static inline pte_t *hugepte_offset(hugepd_t hpd, unsigned long addr, 197 197 unsigned pdshift) 198 198 { 199 199 return 0;
-3
arch/powerpc/include/asm/io.h
··· 855 855 856 856 #define clrsetbits_8(addr, clear, set) clrsetbits(8, addr, clear, set) 857 857 858 - void __iomem *devm_ioremap_prot(struct device *dev, resource_size_t offset, 859 - size_t size, unsigned long flags); 860 - 861 858 #endif /* __KERNEL__ */ 862 859 863 860 #endif /* _ASM_POWERPC_IO_H */
+2 -17
arch/powerpc/include/asm/machdep.h
··· 42 42 unsigned long newpp, 43 43 unsigned long vpn, 44 44 int bpsize, int apsize, 45 - int ssize, int local); 45 + int ssize, unsigned long flags); 46 46 void (*hpte_updateboltedpp)(unsigned long newpp, 47 47 unsigned long ea, 48 48 int psize, int ssize); ··· 60 60 void (*hugepage_invalidate)(unsigned long vsid, 61 61 unsigned long addr, 62 62 unsigned char *hpte_slot_array, 63 - int psize, int ssize); 63 + int psize, int ssize, int local); 64 64 /* special for kexec, to be called in real mode, linear mapping is 65 65 * destroyed as well */ 66 66 void (*hpte_clear_all)(void); ··· 142 142 #endif 143 143 144 144 void (*restart)(char *cmd); 145 - void (*power_off)(void); 146 145 void (*halt)(void); 147 146 void (*panic)(char *str); 148 147 void (*cpu_die)(void); ··· 291 292 #ifdef CONFIG_ARCH_RANDOM 292 293 int (*get_random_long)(unsigned long *v); 293 294 #endif 294 - 295 - #ifdef CONFIG_MEMORY_HOTREMOVE 296 - int (*remove_memory)(u64, u64); 297 - #endif 298 295 }; 299 296 300 297 extern void e500_idle(void); ··· 337 342 extern sys_ctrler_t sys_ctrler; 338 343 339 344 #endif /* CONFIG_PPC_PMAC */ 340 - 341 - 342 - /* Functions to produce codes on the leds. 343 - * The SRC code should be unique for the message category and should 344 - * be limited to the lower 24 bits (the upper 8 are set by these funcs), 345 - * and (for boot & dump) should be sorted numerically in the order 346 - * the events occur. 347 - */ 348 - /* Print a boot progress message. */ 349 - void ppc64_boot_msg(unsigned int src, const char *msg); 350 345 351 346 static inline void log_error(char *buf, unsigned int err_type, int fatal) 352 347 {
+2
arch/powerpc/include/asm/mmu-8xx.h
··· 56 56 * additional information from the MI_EPN, and MI_TWC registers. 57 57 */ 58 58 #define SPRN_MI_RPN 790 59 + #define MI_SPS16K 0x00000008 /* Small page size (0 = 4k, 1 = 16k) */ 59 60 60 61 /* Define an RPN value for mapping kernel memory to large virtual 61 62 * pages for boot initialization. This has real page number of 0, ··· 130 129 * additional information from the MD_EPN, and MD_TWC registers. 131 130 */ 132 131 #define SPRN_MD_RPN 798 132 + #define MD_SPS16K 0x00000008 /* Small page size (0 = 4k, 1 = 16k) */ 133 133 134 134 /* This is a temporary storage register that could be used to save 135 135 * a processor working register during a tablewalk.
+14 -8
arch/powerpc/include/asm/mmu-hash64.h
··· 316 316 return hash & 0x7fffffffffUL; 317 317 } 318 318 319 + #define HPTE_LOCAL_UPDATE 0x1 320 + #define HPTE_NOHPTE_UPDATE 0x2 321 + 319 322 extern int __hash_page_4K(unsigned long ea, unsigned long access, 320 323 unsigned long vsid, pte_t *ptep, unsigned long trap, 321 - unsigned int local, int ssize, int subpage_prot); 324 + unsigned long flags, int ssize, int subpage_prot); 322 325 extern int __hash_page_64K(unsigned long ea, unsigned long access, 323 326 unsigned long vsid, pte_t *ptep, unsigned long trap, 324 - unsigned int local, int ssize); 327 + unsigned long flags, int ssize); 325 328 struct mm_struct; 326 329 unsigned int hash_page_do_lazy_icache(unsigned int pp, pte_t pte, int trap); 327 - extern int hash_page_mm(struct mm_struct *mm, unsigned long ea, unsigned long access, unsigned long trap); 328 - extern int hash_page(unsigned long ea, unsigned long access, unsigned long trap); 330 + extern int hash_page_mm(struct mm_struct *mm, unsigned long ea, 331 + unsigned long access, unsigned long trap, 332 + unsigned long flags); 333 + extern int hash_page(unsigned long ea, unsigned long access, unsigned long trap, 334 + unsigned long dsisr); 329 335 int __hash_page_huge(unsigned long ea, unsigned long access, unsigned long vsid, 330 - pte_t *ptep, unsigned long trap, int local, int ssize, 331 - unsigned int shift, unsigned int mmu_psize); 336 + pte_t *ptep, unsigned long trap, unsigned long flags, 337 + int ssize, unsigned int shift, unsigned int mmu_psize); 332 338 #ifdef CONFIG_TRANSPARENT_HUGEPAGE 333 339 extern int __hash_page_thp(unsigned long ea, unsigned long access, 334 340 unsigned long vsid, pmd_t *pmdp, unsigned long trap, 335 - int local, int ssize, unsigned int psize); 341 + unsigned long flags, int ssize, unsigned int psize); 336 342 #else 337 343 static inline int __hash_page_thp(unsigned long ea, unsigned long access, 338 344 unsigned long vsid, pmd_t *pmdp, 339 - unsigned long trap, int local, 345 + unsigned long trap, unsigned long flags, 340 346 int ssize, unsigned int psize) 341 347 { 342 348 BUG();
+19 -103
arch/powerpc/include/asm/opal.h
··· 154 154 #define OPAL_HANDLE_HMI 98 155 155 #define OPAL_REGISTER_DUMP_REGION 101 156 156 #define OPAL_UNREGISTER_DUMP_REGION 102 157 + #define OPAL_WRITE_TPO 103 158 + #define OPAL_READ_TPO 104 159 + #define OPAL_IPMI_SEND 107 160 + #define OPAL_IPMI_RECV 108 157 161 158 162 #ifndef __ASSEMBLY__ 159 163 ··· 288 284 OPAL_MSG_TYPE_MAX, 289 285 }; 290 286 291 - /* Machine check related definitions */ 292 - enum OpalMCE_Version { 293 - OpalMCE_V1 = 1, 294 - }; 295 - 296 - enum OpalMCE_Severity { 297 - OpalMCE_SEV_NO_ERROR = 0, 298 - OpalMCE_SEV_WARNING = 1, 299 - OpalMCE_SEV_ERROR_SYNC = 2, 300 - OpalMCE_SEV_FATAL = 3, 301 - }; 302 - 303 - enum OpalMCE_Disposition { 304 - OpalMCE_DISPOSITION_RECOVERED = 0, 305 - OpalMCE_DISPOSITION_NOT_RECOVERED = 1, 306 - }; 307 - 308 - enum OpalMCE_Initiator { 309 - OpalMCE_INITIATOR_UNKNOWN = 0, 310 - OpalMCE_INITIATOR_CPU = 1, 311 - }; 312 - 313 - enum OpalMCE_ErrorType { 314 - OpalMCE_ERROR_TYPE_UNKNOWN = 0, 315 - OpalMCE_ERROR_TYPE_UE = 1, 316 - OpalMCE_ERROR_TYPE_SLB = 2, 317 - OpalMCE_ERROR_TYPE_ERAT = 3, 318 - OpalMCE_ERROR_TYPE_TLB = 4, 319 - }; 320 - 321 - enum OpalMCE_UeErrorType { 322 - OpalMCE_UE_ERROR_INDETERMINATE = 0, 323 - OpalMCE_UE_ERROR_IFETCH = 1, 324 - OpalMCE_UE_ERROR_PAGE_TABLE_WALK_IFETCH = 2, 325 - OpalMCE_UE_ERROR_LOAD_STORE = 3, 326 - OpalMCE_UE_ERROR_PAGE_TABLE_WALK_LOAD_STORE = 4, 327 - }; 328 - 329 - enum OpalMCE_SlbErrorType { 330 - OpalMCE_SLB_ERROR_INDETERMINATE = 0, 331 - OpalMCE_SLB_ERROR_PARITY = 1, 332 - OpalMCE_SLB_ERROR_MULTIHIT = 2, 333 - }; 334 - 335 - enum OpalMCE_EratErrorType { 336 - OpalMCE_ERAT_ERROR_INDETERMINATE = 0, 337 - OpalMCE_ERAT_ERROR_PARITY = 1, 338 - OpalMCE_ERAT_ERROR_MULTIHIT = 2, 339 - }; 340 - 341 - enum OpalMCE_TlbErrorType { 342 - OpalMCE_TLB_ERROR_INDETERMINATE = 0, 343 - OpalMCE_TLB_ERROR_PARITY = 1, 344 - OpalMCE_TLB_ERROR_MULTIHIT = 2, 345 - }; 346 - 347 287 enum OpalThreadStatus { 348 288 OPAL_THREAD_INACTIVE = 0x0, 349 289 OPAL_THREAD_STARTED = 0x1, ··· 400 452 
__be64 params[8]; 401 453 }; 402 454 403 - struct opal_machine_check_event { 404 - enum OpalMCE_Version version:8; /* 0x00 */ 405 - uint8_t in_use; /* 0x01 */ 406 - enum OpalMCE_Severity severity:8; /* 0x02 */ 407 - enum OpalMCE_Initiator initiator:8; /* 0x03 */ 408 - enum OpalMCE_ErrorType error_type:8; /* 0x04 */ 409 - enum OpalMCE_Disposition disposition:8; /* 0x05 */ 410 - uint8_t reserved_1[2]; /* 0x06 */ 411 - uint64_t gpr3; /* 0x08 */ 412 - uint64_t srr0; /* 0x10 */ 413 - uint64_t srr1; /* 0x18 */ 414 - union { /* 0x20 */ 415 - struct { 416 - enum OpalMCE_UeErrorType ue_error_type:8; 417 - uint8_t effective_address_provided; 418 - uint8_t physical_address_provided; 419 - uint8_t reserved_1[5]; 420 - uint64_t effective_address; 421 - uint64_t physical_address; 422 - uint8_t reserved_2[8]; 423 - } ue_error; 455 + enum { 456 + OPAL_IPMI_MSG_FORMAT_VERSION_1 = 1, 457 + }; 424 458 425 - struct { 426 - enum OpalMCE_SlbErrorType slb_error_type:8; 427 - uint8_t effective_address_provided; 428 - uint8_t reserved_1[6]; 429 - uint64_t effective_address; 430 - uint8_t reserved_2[16]; 431 - } slb_error; 432 - 433 - struct { 434 - enum OpalMCE_EratErrorType erat_error_type:8; 435 - uint8_t effective_address_provided; 436 - uint8_t reserved_1[6]; 437 - uint64_t effective_address; 438 - uint8_t reserved_2[16]; 439 - } erat_error; 440 - 441 - struct { 442 - enum OpalMCE_TlbErrorType tlb_error_type:8; 443 - uint8_t effective_address_provided; 444 - uint8_t reserved_1[6]; 445 - uint64_t effective_address; 446 - uint8_t reserved_2[16]; 447 - } tlb_error; 448 - } u; 459 + struct opal_ipmi_msg { 460 + uint8_t version; 461 + uint8_t netfn; 462 + uint8_t cmd; 463 + uint8_t data[]; 449 464 }; 450 465 451 466 /* FSP memory errors handling */ ··· 730 819 __be64 *hour_minute_second_millisecond); 731 820 int64_t opal_rtc_write(uint32_t year_month_day, 732 821 uint64_t hour_minute_second_millisecond); 822 + int64_t opal_tpo_read(uint64_t token, __be32 *year_mon_day, __be32 *hour_min); 
823 + int64_t opal_tpo_write(uint64_t token, uint32_t year_mon_day, 824 + uint32_t hour_min); 733 825 int64_t opal_cec_power_down(uint64_t request); 734 826 int64_t opal_cec_reboot(void); 735 827 int64_t opal_read_nvram(uint64_t buffer, uint64_t size, uint64_t offset); ··· 877 963 int64_t opal_register_dump_region(uint32_t id, uint64_t start, uint64_t end); 878 964 int64_t opal_unregister_dump_region(uint32_t id); 879 965 int64_t opal_pci_set_phb_cxl_mode(uint64_t phb_id, uint64_t mode, uint64_t pe_number); 966 + int64_t opal_ipmi_send(uint64_t interface, struct opal_ipmi_msg *msg, 967 + uint64_t msg_len); 968 + int64_t opal_ipmi_recv(uint64_t interface, struct opal_ipmi_msg *msg, 969 + uint64_t *msg_len); 880 970 881 971 /* Internal functions */ 882 972 extern int early_init_dt_scan_opal(unsigned long node, const char *uname, ··· 910 992 extern int opal_get_sensor_data(u32 sensor_hndl, u32 *sensor_data); 911 993 912 994 struct rtc_time; 913 - extern int opal_set_rtc_time(struct rtc_time *tm); 914 - extern void opal_get_rtc_time(struct rtc_time *tm); 915 995 extern unsigned long opal_get_boot_time(void); 916 996 extern void opal_nvram_init(void); 917 997 extern void opal_flash_init(void);
-7
arch/powerpc/include/asm/paca.h
··· 42 42 #define get_slb_shadow() (get_paca()->slb_shadow_ptr) 43 43 44 44 struct task_struct; 45 - struct opal_machine_check_event; 46 45 47 46 /* 48 47 * Defines the layout of the paca. ··· 152 153 u64 tm_scratch; /* TM scratch area for reclaim */ 153 154 #endif 154 155 155 - #ifdef CONFIG_PPC_POWERNV 156 - /* Pointer to OPAL machine check event structure set by the 157 - * early exception handler for use by high level C handler 158 - */ 159 - struct opal_machine_check_event *opal_mc_evt; 160 - #endif 161 156 #ifdef CONFIG_PPC_BOOK3S_64 162 157 /* Exclusive emergency stack pointer for machine check exception. */ 163 158 void *mc_emergency_sp;
+3 -1
arch/powerpc/include/asm/page.h
··· 379 379 } 380 380 #endif 381 381 382 - #define is_hugepd(pdep) (hugepd_ok(*((hugepd_t *)(pdep)))) 382 + #define is_hugepd(hpd) (hugepd_ok(hpd)) 383 + #define pgd_huge pgd_huge 383 384 int pgd_huge(pgd_t pgd); 384 385 #else /* CONFIG_HUGETLB_PAGE */ 385 386 #define is_hugepd(pdep) 0 386 387 #define pgd_huge(pgd) 0 387 388 #endif /* CONFIG_HUGETLB_PAGE */ 389 + #define __hugepd(x) ((hugepd_t) { (x) }) 388 390 389 391 struct page; 390 392 extern void clear_user_page(void *page, unsigned long vaddr, struct page *pg);
+20
arch/powerpc/include/asm/pgtable-ppc32.h
··· 170 170 #ifdef PTE_ATOMIC_UPDATES 171 171 unsigned long old, tmp; 172 172 173 + #ifdef CONFIG_PPC_8xx 174 + unsigned long tmp2; 175 + 176 + __asm__ __volatile__("\ 177 + 1: lwarx %0,0,%4\n\ 178 + andc %1,%0,%5\n\ 179 + or %1,%1,%6\n\ 180 + /* 0x200 == Extended encoding, bit 22 */ \ 181 + /* Bit 22 has to be 1 if neither _PAGE_USER nor _PAGE_RW are set */ \ 182 + rlwimi %1,%1,32-2,0x200\n /* get _PAGE_USER */ \ 183 + rlwinm %3,%1,32-1,0x200\n /* get _PAGE_RW */ \ 184 + or %1,%3,%1\n\ 185 + xori %1,%1,0x200\n" 186 + " stwcx. %1,0,%4\n\ 187 + bne- 1b" 188 + : "=&r" (old), "=&r" (tmp), "=m" (*p), "=&r" (tmp2) 189 + : "r" (p), "r" (clr), "r" (set), "m" (*p) 190 + : "cc" ); 191 + #else /* CONFIG_PPC_8xx */ 173 192 __asm__ __volatile__("\ 174 193 1: lwarx %0,0,%3\n\ 175 194 andc %1,%0,%4\n\ ··· 199 180 : "=&r" (old), "=&r" (tmp), "=m" (*p) 200 181 : "r" (p), "r" (clr), "r" (set), "m" (*p) 201 182 : "cc" ); 183 + #endif /* CONFIG_PPC_8xx */ 202 184 #else /* PTE_ATOMIC_UPDATES */ 203 185 unsigned long old = pte_val(*p); 204 186 *p = __pte((old & ~clr) | set);
+15 -1
arch/powerpc/include/asm/pgtable-ppc64-4k.h
··· 57 57 #define pgd_present(pgd) (pgd_val(pgd) != 0) 58 58 #define pgd_clear(pgdp) (pgd_val(*(pgdp)) = 0) 59 59 #define pgd_page_vaddr(pgd) (pgd_val(pgd) & ~PGD_MASKED_BITS) 60 - #define pgd_page(pgd) virt_to_page(pgd_page_vaddr(pgd)) 60 + 61 + #ifndef __ASSEMBLY__ 62 + 63 + static inline pte_t pgd_pte(pgd_t pgd) 64 + { 65 + return __pte(pgd_val(pgd)); 66 + } 67 + 68 + static inline pgd_t pte_pgd(pte_t pte) 69 + { 70 + return __pgd(pte_val(pte)); 71 + } 72 + extern struct page *pgd_page(pgd_t pgd); 73 + 74 + #endif /* !__ASSEMBLY__ */ 61 75 62 76 #define pud_offset(pgdp, addr) \ 63 77 (((pud_t *) pgd_page_vaddr(*(pgdp))) + \
+3
arch/powerpc/include/asm/pgtable-ppc64-64k.h
··· 38 38 /* Bits to mask out from a PGD/PUD to get to the PMD page */ 39 39 #define PUD_MASKED_BITS 0x1ff 40 40 41 + #define pgd_pte(pgd) (pud_pte(((pud_t){ pgd }))) 42 + #define pte_pgd(pte) ((pgd_t)pte_pud(pte)) 43 + 41 44 #endif /* _ASM_POWERPC_PGTABLE_PPC64_64K_H */
+38 -14
arch/powerpc/include/asm/pgtable-ppc64.h
··· 152 152 #define pmd_none(pmd) (!pmd_val(pmd)) 153 153 #define pmd_bad(pmd) (!is_kernel_addr(pmd_val(pmd)) \ 154 154 || (pmd_val(pmd) & PMD_BAD_BITS)) 155 - #define pmd_present(pmd) (pmd_val(pmd) != 0) 155 + #define pmd_present(pmd) (!pmd_none(pmd)) 156 156 #define pmd_clear(pmdp) (pmd_val(*(pmdp)) = 0) 157 157 #define pmd_page_vaddr(pmd) (pmd_val(pmd) & ~PMD_MASKED_BITS) 158 158 extern struct page *pmd_page(pmd_t pmd); ··· 164 164 #define pud_present(pud) (pud_val(pud) != 0) 165 165 #define pud_clear(pudp) (pud_val(*(pudp)) = 0) 166 166 #define pud_page_vaddr(pud) (pud_val(pud) & ~PUD_MASKED_BITS) 167 - #define pud_page(pud) virt_to_page(pud_page_vaddr(pud)) 168 167 168 + extern struct page *pud_page(pud_t pud); 169 + 170 + static inline pte_t pud_pte(pud_t pud) 171 + { 172 + return __pte(pud_val(pud)); 173 + } 174 + 175 + static inline pud_t pte_pud(pte_t pte) 176 + { 177 + return __pud(pte_val(pte)); 178 + } 179 + #define pud_write(pud) pte_write(pud_pte(pud)) 169 180 #define pgd_set(pgdp, pudp) ({pgd_val(*(pgdp)) = (unsigned long)(pudp);}) 181 + #define pgd_write(pgd) pte_write(pgd_pte(pgd)) 170 182 171 183 /* 172 184 * Find an entry in a page-table-directory. We combine the address region ··· 434 422 pmd_t *pmdp, pmd_t pmd); 435 423 extern void update_mmu_cache_pmd(struct vm_area_struct *vma, unsigned long addr, 436 424 pmd_t *pmd); 437 - 425 + /* 426 + * 427 + * For core kernel code by design pmd_trans_huge is never run on any hugetlbfs 428 + * page. The hugetlbfs page table walking and mangling paths are totally 429 + * separated from the core VM paths and they're differentiated by 430 + * VM_HUGETLB being set on vm_flags well before any pmd_trans_huge could run. 431 + * 432 + * pmd_trans_huge() is defined as false at build time if 433 + * CONFIG_TRANSPARENT_HUGEPAGE=n to optimize away code blocks at build 434 + * time in such case. 435 + * 436 + * For ppc64 we need to differentiate explicit hugepages from THP, because 437 + * for THP we also track the subpage details at the pmd level. We don't do 438 + * that for explicit huge pages. 439 + * 440 + */ 438 441 static inline int pmd_trans_huge(pmd_t pmd) 439 442 { 440 443 /* 441 444 * leaf pte for huge page, bottom two bits != 00 442 445 */ 443 446 return (pmd_val(pmd) & 0x3) && (pmd_val(pmd) & _PAGE_THP_HUGE); 444 - } 445 - 446 - static inline int pmd_large(pmd_t pmd) 447 - { 448 - /* 449 - * leaf pte for huge page, bottom two bits != 00 450 - */ 451 - if (pmd_trans_huge(pmd)) 452 - return pmd_val(pmd) & _PAGE_PRESENT; 453 - return 0; 454 447 } 455 448 456 449 static inline int pmd_trans_splitting(pmd_t pmd) ··· 467 450 468 451 extern int has_transparent_hugepage(void); 469 452 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */ 453 + 454 + static inline int pmd_large(pmd_t pmd) 455 + { 456 + /* 457 + * leaf pte for huge page, bottom two bits != 00 458 + */ 459 + return ((pmd_val(pmd) & 0x3) != 0x0); 460 + } 470 461 471 462 static inline pte_t pmd_pte(pmd_t pmd) 472 463 { ··· 601 576 */ 602 577 return true; 603 578 } 604 - 605 579 #endif /* __ASSEMBLY__ */ 606 580 #endif /* _ASM_POWERPC_PGTABLE_PPC64_H_ */
+2 -4
arch/powerpc/include/asm/pgtable.h
··· 274 274 */ 275 275 extern void update_mmu_cache(struct vm_area_struct *, unsigned long, pte_t *); 276 276 277 - extern int gup_hugepd(hugepd_t *hugepd, unsigned pdshift, unsigned long addr, 278 - unsigned long end, int write, struct page **pages, int *nr); 279 - 280 277 extern int gup_hugepte(pte_t *ptep, unsigned long sz, unsigned long addr, 281 - unsigned long end, int write, struct page **pages, int *nr); 278 + unsigned long end, int write, 279 + struct page **pages, int *nr); 282 280 #ifndef CONFIG_TRANSPARENT_HUGEPAGE 283 281 #define pmd_large(pmd) 0 284 282 #define has_transparent_hugepage() 0
+1 -1
arch/powerpc/include/asm/processor.h
··· 451 451 enum idle_boot_override {IDLE_NO_OVERRIDE = 0, IDLE_POWERSAVE_OFF}; 452 452 453 453 extern int powersave_nap; /* set if nap mode can be used in idle loop */ 454 - extern void power7_nap(int check_irq); 454 + extern unsigned long power7_nap(int check_irq); 455 455 extern void power7_sleep(void); 456 456 extern void flush_instruction_cache(void); 457 457 extern void hard_reset_now(void);
+5 -2
arch/powerpc/include/asm/pte-8xx.h
··· 48 48 */ 49 49 #define _PAGE_RW 0x0400 /* lsb PP bits, inverted in HW */ 50 50 #define _PAGE_USER 0x0800 /* msb PP bits */ 51 + /* set when neither _PAGE_USER nor _PAGE_RW are set */ 52 + #define _PAGE_KNLRO 0x0200 51 53 52 54 #define _PMD_PRESENT 0x0001 53 55 #define _PMD_BAD 0x0ff0 54 56 #define _PMD_PAGE_MASK 0x000c 55 57 #define _PMD_PAGE_8M 0x000c 56 58 57 - #define _PTE_NONE_MASK _PAGE_ACCESSED 59 + #define _PTE_NONE_MASK _PAGE_KNLRO 58 60 59 61 /* Until my rework is finished, 8xx still needs atomic PTE updates */ 60 62 #define PTE_ATOMIC_UPDATES 1 61 63 62 64 /* We need to add _PAGE_SHARED to kernel pages */ 63 - #define _PAGE_KERNEL_RO (_PAGE_SHARED) 65 + #define _PAGE_KERNEL_RO (_PAGE_SHARED | _PAGE_KNLRO) 66 + #define _PAGE_KERNEL_ROX (_PAGE_EXEC | _PAGE_KNLRO) 64 67 #define _PAGE_KERNEL_RW (_PAGE_DIRTY | _PAGE_RW | _PAGE_HWWRITE) 65 68 66 69 #endif /* __KERNEL__ */
+1 -2
arch/powerpc/include/asm/setup.h
··· 8 8 9 9 extern unsigned int rtas_data; 10 10 extern int mem_init_done; /* set on boot once kmalloc can be called */ 11 - extern int init_bootmem_done; /* set once bootmem is available */ 12 11 extern unsigned long long memory_limit; 13 12 extern unsigned long klimit; 14 13 extern void *zalloc_maybe_bootmem(size_t size, gfp_t mask); ··· 23 24 #define PTRRELOC(x) ((typeof(x)) add_reloc_offset((unsigned long)(x))) 24 25 25 26 void check_for_initrd(void); 26 - void do_init_bootmem(void); 27 + void initmem_init(void); 27 28 void setup_panic(void); 28 29 #define ARCH_PANIC_TIMEOUT 180 29 30
+2 -3
arch/powerpc/include/asm/thread_info.h
··· 71 71 #define THREAD_SIZE_ORDER (THREAD_SHIFT - PAGE_SHIFT) 72 72 73 73 /* how to get the thread information struct from C */ 74 + register unsigned long __current_r1 asm("r1"); 74 75 static inline struct thread_info *current_thread_info(void) 75 76 { 76 - register unsigned long sp asm("r1"); 77 - 78 77 /* gcc4, at least, is smart enough to turn this into a single 79 78 * rlwinm for ppc32 and clrrdi for ppc64 */ 80 - return (struct thread_info *)(sp & ~(THREAD_SIZE-1)); 79 + return (struct thread_info *)(__current_r1 & ~(THREAD_SIZE-1)); 81 80 } 82 81 83 82 #endif /* __ASSEMBLY__ */
+6 -4
arch/powerpc/include/asm/tlbflush.h
··· 107 107 108 108 static inline void arch_enter_lazy_mmu_mode(void) 109 109 { 110 - struct ppc64_tlb_batch *batch = &__get_cpu_var(ppc64_tlb_batch); 110 + struct ppc64_tlb_batch *batch = this_cpu_ptr(&ppc64_tlb_batch); 111 111 112 112 batch->active = 1; 113 113 } 114 114 115 115 static inline void arch_leave_lazy_mmu_mode(void) 116 116 { 117 - struct ppc64_tlb_batch *batch = &__get_cpu_var(ppc64_tlb_batch); 117 + struct ppc64_tlb_batch *batch = this_cpu_ptr(&ppc64_tlb_batch); 118 118 119 119 if (batch->index) 120 120 __flush_tlb_pending(batch); ··· 125 125 126 126 127 127 extern void flush_hash_page(unsigned long vpn, real_pte_t pte, int psize, 128 - int ssize, int local); 128 + int ssize, unsigned long flags); 129 129 extern void flush_hash_range(unsigned long number, int local); 130 - 130 + extern void flush_hash_hugepage(unsigned long vsid, unsigned long addr, 131 + pmd_t *pmdp, unsigned int psize, int ssize, 132 + unsigned long flags); 131 133 132 134 static inline void local_flush_tlb_mm(struct mm_struct *mm) 133 135 {
+1 -3
arch/powerpc/include/asm/vga.h
··· 38 38 39 39 #endif /* !CONFIG_VGA_CONSOLE && !CONFIG_MDA_CONSOLE */ 40 40 41 - extern unsigned long vgacon_remap_base; 42 - 43 41 #ifdef __powerpc64__ 44 42 #define VGA_MAP_MEM(x,s) ((unsigned long) ioremap((x), s)) 45 43 #else 46 - #define VGA_MAP_MEM(x,s) (x + vgacon_remap_base) 44 + #define VGA_MAP_MEM(x,s) (x) 47 45 #endif 48 46 49 47 #define vga_readb(x) (*(x))
+4 -4
arch/powerpc/include/asm/xics.h
··· 98 98 99 99 static inline void xics_push_cppr(unsigned int vec) 100 100 { 101 - struct xics_cppr *os_cppr = &__get_cpu_var(xics_cppr); 101 + struct xics_cppr *os_cppr = this_cpu_ptr(&xics_cppr); 102 102 103 103 if (WARN_ON(os_cppr->index >= MAX_NUM_PRIORITIES - 1)) 104 104 return; ··· 111 111 112 112 static inline unsigned char xics_pop_cppr(void) 113 113 { 114 - struct xics_cppr *os_cppr = &__get_cpu_var(xics_cppr); 114 + struct xics_cppr *os_cppr = this_cpu_ptr(&xics_cppr); 115 115 116 116 if (WARN_ON(os_cppr->index < 1)) 117 117 return LOWEST_PRIORITY; ··· 121 121 122 122 static inline void xics_set_base_cppr(unsigned char cppr) 123 123 { 124 - struct xics_cppr *os_cppr = &__get_cpu_var(xics_cppr); 124 + struct xics_cppr *os_cppr = this_cpu_ptr(&xics_cppr); 125 125 126 126 /* we only really want to set the priority when there's 127 127 * just one cppr value on the stack ··· 133 133 134 134 static inline unsigned char xics_cppr_top(void) 135 135 { 136 - struct xics_cppr *os_cppr = &__get_cpu_var(xics_cppr); 136 + struct xics_cppr *os_cppr = this_cpu_ptr(&xics_cppr); 137 137 138 138 return os_cppr->stack[os_cppr->index]; 139 139 }
+1 -1
arch/powerpc/kernel/align.c
··· 908 908 flush_fp_to_thread(current); 909 909 } 910 910 911 - if ((nb == 16)) { 911 + if (nb == 16) { 912 912 if (flags & F) { 913 913 /* Special case for 16-byte FP loads and stores */ 914 914 PPC_WARN_ALIGNMENT(fp_pair, regs);
-7
arch/powerpc/kernel/asm-offsets.c
··· 726 726 arch.timing_last_enter.tv32.tbl)); 727 727 #endif 728 728 729 - #ifdef CONFIG_PPC_POWERNV 730 - DEFINE(OPAL_MC_GPR3, offsetof(struct opal_machine_check_event, gpr3)); 731 - DEFINE(OPAL_MC_SRR0, offsetof(struct opal_machine_check_event, srr0)); 732 - DEFINE(OPAL_MC_SRR1, offsetof(struct opal_machine_check_event, srr1)); 733 - DEFINE(PACA_OPAL_MC_EVT, offsetof(struct paca_struct, opal_mc_evt)); 734 - #endif 735 - 736 729 return 0; 737 730 }
-1
arch/powerpc/kernel/crash_dump.c
··· 12 12 #undef DEBUG 13 13 14 14 #include <linux/crash_dump.h> 15 - #include <linux/bootmem.h> 16 15 #include <linux/io.h> 17 16 #include <linux/memblock.h> 18 17 #include <asm/code-patching.h>
+1 -1
arch/powerpc/kernel/dbell.c
··· 41 41 42 42 may_hard_irq_enable(); 43 43 44 - __get_cpu_var(irq_stat).doorbell_irqs++; 44 + __this_cpu_inc(irq_stat.doorbell_irqs); 45 45 46 46 smp_ipi_demux(); 47 47
+28 -17
arch/powerpc/kernel/eeh.c
··· 143 143 { 144 144 if (!strcmp(str, "off")) 145 145 eeh_add_flag(EEH_FORCE_DISABLED); 146 + else if (!strcmp(str, "early_log")) 147 + eeh_add_flag(EEH_EARLY_DUMP_LOG); 146 148 147 149 return 1; 148 150 } ··· 760 758 int eeh_reset_pe(struct eeh_pe *pe) 761 759 { 762 760 int flags = (EEH_STATE_MMIO_ACTIVE | EEH_STATE_DMA_ACTIVE); 763 - int i, rc; 761 + int i, state, ret; 762 + 763 + /* Mark as reset and block config space */ 764 + eeh_pe_state_mark(pe, EEH_PE_RESET | EEH_PE_CFG_BLOCKED); 764 765 765 766 /* Take three shots at resetting the bus */ 766 - for (i=0; i<3; i++) { 767 + for (i = 0; i < 3; i++) { 767 768 eeh_reset_pe_once(pe); 768 769 769 770 /* 770 771 * EEH_PE_ISOLATED is expected to be removed after 771 772 * BAR restore. 772 773 */ 773 - rc = eeh_ops->wait_state(pe, PCI_BUS_RESET_WAIT_MSEC); 774 - if ((rc & flags) == flags) 775 - return 0; 776 - 777 - if (rc < 0) { 778 - pr_err("%s: Unrecoverable slot failure on PHB#%d-PE#%x", 779 - __func__, pe->phb->global_number, pe->addr); 780 - return -1; 774 + state = eeh_ops->wait_state(pe, PCI_BUS_RESET_WAIT_MSEC); 775 + if ((state & flags) == flags) { 776 + ret = 0; 777 + goto out; 781 778 } 782 - pr_err("EEH: bus reset %d failed on PHB#%d-PE#%x, rc=%d\n", 783 - i+1, pe->phb->global_number, pe->addr, rc); 779 + 780 + if (state < 0) { 781 + pr_warn("%s: Unrecoverable slot failure on PHB#%d-PE#%x", 782 + __func__, pe->phb->global_number, pe->addr); 783 + ret = -ENOTRECOVERABLE; 784 + goto out; 785 + } 786 + 787 + /* We might run out of credits */ 788 + ret = -EIO; 789 + pr_warn("%s: Failure %d resetting PHB#%x-PE#%x (%d)\n", 790 + __func__, state, pe->phb->global_number, pe->addr, (i + 1)); 784 791 } 785 792 786 - return -1; 793 + out: 794 + eeh_pe_state_clear(pe, EEH_PE_RESET | EEH_PE_CFG_BLOCKED); 795 + return ret; 787 796 } 788 797 789 798 /** ··· 933 920 pr_warn("%s: Platform EEH operation not found\n", 934 921 __func__); 935 922 return -EEXIST; 936 - } else if ((ret = eeh_ops->init())) { 937 - pr_warn("%s: Failed to call platform init function (%d)\n", 938 - __func__, ret); 923 + } else if ((ret = eeh_ops->init())) 939 924 return ret; 940 - } 941 925 942 926 /* Initialize EEH event */ 943 927 ret = eeh_event_init(); ··· 1219 1209 static struct pci_device_id eeh_reset_ids[] = { 1220 1210 { PCI_DEVICE(0x19a2, 0x0710) }, /* Emulex, BE */ 1221 1211 { PCI_DEVICE(0x10df, 0xe220) }, /* Emulex, Lancer */ 1212 + { PCI_DEVICE(0x14e4, 0x1657) }, /* Broadcom BCM5719 */ 1222 1213 { 0 } 1223 1214 }; 1224 1215
+2 -8
arch/powerpc/kernel/eeh_driver.c
··· 528 528 eeh_pe_dev_traverse(pe, eeh_report_error, &result); 529 529 530 530 /* Issue reset */ 531 - eeh_pe_state_mark(pe, EEH_PE_CFG_BLOCKED); 532 531 ret = eeh_reset_pe(pe); 533 532 if (ret) { 534 - eeh_pe_state_clear(pe, EEH_PE_RECOVERING | EEH_PE_CFG_BLOCKED); 533 + eeh_pe_state_clear(pe, EEH_PE_RECOVERING); 535 534 return ret; 536 535 } 537 - eeh_pe_state_clear(pe, EEH_PE_CFG_BLOCKED); 538 536 539 537 /* Unfreeze the PE */ 540 538 ret = eeh_clear_pe_frozen_state(pe, true); ··· 599 601 * config accesses. So we prefer to block them. However, controlled 600 602 * PCI config accesses initiated from EEH itself are allowed. 601 603 */ 602 - eeh_pe_state_mark(pe, EEH_PE_CFG_BLOCKED); 603 604 rc = eeh_reset_pe(pe); 604 - if (rc) { 605 - eeh_pe_state_clear(pe, EEH_PE_CFG_BLOCKED); 605 + if (rc) 606 606 return rc; 607 - } 608 607 609 608 pci_lock_rescan_remove(); 610 609 611 610 /* Restore PE */ 612 611 eeh_ops->configure_bridge(pe); 613 612 eeh_pe_restore_bars(pe); 614 - eeh_pe_state_clear(pe, EEH_PE_CFG_BLOCKED); 615 613 616 614 /* Clear frozen state */ 617 615 rc = eeh_clear_pe_frozen_state(pe, false);
+9 -3
arch/powerpc/kernel/entry_32.S
··· 1424 1424 lwz r4, 44(r1) 1425 1425 subi r4, r4, MCOUNT_INSN_SIZE 1426 1426 1427 - /* get the parent address */ 1428 - addi r3, r1, 52 1427 + /* Grab the LR out of the caller stack frame */ 1428 + lwz r3,52(r1) 1429 1429 1430 1430 bl prepare_ftrace_return 1431 1431 nop 1432 + 1433 + /* 1434 + * prepare_ftrace_return gives us the address we divert to. 1435 + * Change the LR in the callers stack frame to this. 1436 + */ 1437 + stw r3,52(r1) 1432 1438 1433 1439 MCOUNT_RESTORE_FRAME 1434 1440 /* old link register ends up in ctr reg */ ··· 1463 1457 blr 1464 1458 #endif /* CONFIG_FUNCTION_GRAPH_TRACER */ 1465 1459 1466 - #endif /* CONFIG_MCOUNT */ 1460 + #endif /* CONFIG_FUNCTION_TRACER */
+10 -25
arch/powerpc/kernel/entry_64.S
··· 1227 1227 ld r4, 128(r1) 1228 1228 subi r4, r4, MCOUNT_INSN_SIZE 1229 1229 1230 - /* get the parent address */ 1230 + /* Grab the LR out of the caller stack frame */ 1231 1231 ld r11, 112(r1) 1232 - addi r3, r11, 16 1232 + ld r3, 16(r11) 1233 1233 1234 1234 bl prepare_ftrace_return 1235 1235 nop 1236 + 1237 + /* 1238 + * prepare_ftrace_return gives us the address we divert to. 1239 + * Change the LR in the callers stack frame to this. 1240 + */ 1241 + ld r11, 112(r1) 1242 + std r3, 16(r11) 1236 1243 1237 1244 ld r0, 128(r1) 1238 1245 mtlr r0 ··· 1247 1240 blr 1248 1241 1249 1242 _GLOBAL(return_to_handler) 1250 - /* need to save return values */ 1251 - std r4, -24(r1) 1252 - std r3, -16(r1) 1253 - std r31, -8(r1) 1254 - mr r31, r1 1255 - stdu r1, -112(r1) 1256 - 1257 - bl ftrace_return_to_handler 1258 - nop 1259 - 1260 - /* return value has real return address */ 1261 - mtlr r3 1262 - 1263 - ld r1, 0(r1) 1264 - ld r4, -24(r1) 1265 - ld r3, -16(r1) 1266 - ld r31, -8(r1) 1267 - 1268 - /* Jump back to real return address */ 1269 - blr 1270 - 1271 - _GLOBAL(mod_return_to_handler) 1272 1243 /* need to save return values */ 1273 1244 std r4, -32(r1) 1274 1245 std r3, -24(r1) ··· 1257 1272 stdu r1, -112(r1) 1258 1273 1259 1274 /* 1260 - * We are in a module using the module's TOC. 1275 + * We might be called from a module. 1261 1276 * Switch to our TOC to run inside the core kernel. 1262 1277 */ 1263 1278 ld r2, PACATOC(r13)
+16 -18
arch/powerpc/kernel/exceptions-64s.S
··· 131 131 1: 132 132 #endif 133 133 134 + /* Return SRR1 from power7_nap() */ 135 + mfspr r3,SPRN_SRR1 134 136 beq cr1,2f 135 137 b power7_wakeup_noloss 136 138 2: b power7_wakeup_loss ··· 294 292 . = 0xc00 295 293 .globl system_call_pSeries 296 294 system_call_pSeries: 297 - HMT_MEDIUM 295 + /* 296 + * If CONFIG_KVM_BOOK3S_64_HANDLER is set, save the PPR (on systems 297 + * that support it) before changing to HMT_MEDIUM. That allows the KVM 298 + * code to save that value into the guest state (it is the guest's PPR 299 + * value). Otherwise just change to HMT_MEDIUM as userspace has 300 + * already saved the PPR. 301 + */ 298 302 #ifdef CONFIG_KVM_BOOK3S_64_HANDLER 299 303 SET_SCRATCH0(r13) 300 304 GET_PACA(r13) 301 305 std r9,PACA_EXGEN+EX_R9(r13) 306 + OPT_GET_SPR(r9, SPRN_PPR, CPU_FTR_HAS_PPR); 307 + HMT_MEDIUM; 302 308 std r10,PACA_EXGEN+EX_R10(r13) 309 + OPT_SAVE_REG_TO_PACA(PACA_EXGEN+EX_PPR, r9, CPU_FTR_HAS_PPR); 303 310 mfcr r9 304 311 KVMTEST(0xc00) 305 312 GET_SCRATCH0(r13) 313 + #else 314 + HMT_MEDIUM; 306 315 #endif 307 316 SYSCALL_PSERIES_1 308 317 SYSCALL_PSERIES_2_RFID ··· 1314 1301 EXCEPTION_PROLOG_0(PACA_EXGEN) 1315 1302 b hmi_exception_hv 1316 1303 1317 - #ifdef CONFIG_PPC_POWERNV 1318 - _GLOBAL(opal_mc_secondary_handler) 1319 - HMT_MEDIUM_PPR_DISCARD 1320 - SET_SCRATCH0(r13) 1321 - GET_PACA(r13) 1322 - clrldi r3,r3,2 1323 - tovirt(r3,r3) 1324 - std r3,PACA_OPAL_MC_EVT(r13) 1325 - ld r13,OPAL_MC_SRR0(r3) 1326 - mtspr SPRN_SRR0,r13 1327 - ld r13,OPAL_MC_SRR1(r3) 1328 - mtspr SPRN_SRR1,r13 1329 - ld r3,OPAL_MC_GPR3(r3) 1330 - GET_SCRATCH0(r13) 1331 - b machine_check_pSeries 1332 - #endif /* CONFIG_PPC_POWERNV */ 1333 - 1334 1304 1335 1305 #define MACHINE_CHECK_HANDLER_WINDUP \ 1336 1306 /* Clear MSR_RI before setting SRR0 and SRR1. */\ ··· 1567 1571 * r3 contains the faulting address 1568 1572 * r4 contains the required access permissions 1569 1573 * r5 contains the trap number 1574 + * r6 contains dsisr 1570 1575 * 1571 1576 * at return r3 = 0 for success, 1 for page fault, negative for error 1572 1577 */ 1578 + ld r6,_DSISR(r1) 1573 1579 bl hash_page /* build HPTE if possible */ 1574 1580 cmpdi r3,0 /* see if hash_page succeeded */ 1575 1581
+15 -58
arch/powerpc/kernel/ftrace.c
··· 510 510 } 511 511 #endif /* CONFIG_DYNAMIC_FTRACE */ 512 512 513 - #ifdef CONFIG_PPC64 514 - extern void mod_return_to_handler(void); 515 - #endif 516 - 517 513 /* 518 514 * Hook the return address and push it in the stack of return addrs 519 - * in current thread info. 515 + * in current thread info. Return the address we want to divert to. 520 516 */ 521 - void prepare_ftrace_return(unsigned long *parent, unsigned long self_addr) 517 + unsigned long prepare_ftrace_return(unsigned long parent, unsigned long ip) 522 518 { 523 - unsigned long old; 524 - int faulted; 525 519 struct ftrace_graph_ent trace; 526 - unsigned long return_hooker = (unsigned long)&return_to_handler; 520 + unsigned long return_hooker; 527 521 528 522 if (unlikely(ftrace_graph_is_dead())) 529 - return; 523 + goto out; 530 524 531 525 if (unlikely(atomic_read(&current->tracing_graph_pause))) 532 - return; 526 + goto out; 533 527 534 - #ifdef CONFIG_PPC64 535 - /* non core kernel code needs to save and restore the TOC */ 536 - if (REGION_ID(self_addr) != KERNEL_REGION_ID) 537 - return_hooker = (unsigned long)&mod_return_to_handler; 538 - #endif 528 + return_hooker = ppc_function_entry(return_to_handler); 539 529 540 - return_hooker = ppc_function_entry((void *)return_hooker); 541 - 542 - /* 543 - * Protect against fault, even if it shouldn't 544 - * happen. This tool is too much intrusive to 545 - * ignore such a protection. 
546 - */ 547 - asm volatile( 548 - "1: " PPC_LL "%[old], 0(%[parent])\n" 549 - "2: " PPC_STL "%[return_hooker], 0(%[parent])\n" 550 - " li %[faulted], 0\n" 551 - "3:\n" 552 - 553 - ".section .fixup, \"ax\"\n" 554 - "4: li %[faulted], 1\n" 555 - " b 3b\n" 556 - ".previous\n" 557 - 558 - ".section __ex_table,\"a\"\n" 559 - PPC_LONG_ALIGN "\n" 560 - PPC_LONG "1b,4b\n" 561 - PPC_LONG "2b,4b\n" 562 - ".previous" 563 - 564 - : [old] "=&r" (old), [faulted] "=r" (faulted) 565 - : [parent] "r" (parent), [return_hooker] "r" (return_hooker) 566 - : "memory" 567 - ); 568 - 569 - if (unlikely(faulted)) { 570 - ftrace_graph_stop(); 571 - WARN_ON(1); 572 - return; 573 - } 574 - 575 - trace.func = self_addr; 530 + trace.func = ip; 576 531 trace.depth = current->curr_ret_stack + 1; 577 532 578 533 /* Only trace if the calling function expects to */ 579 - if (!ftrace_graph_entry(&trace)) { 580 - *parent = old; 581 - return; 582 - } 534 + if (!ftrace_graph_entry(&trace)) 535 + goto out; 583 536 584 - if (ftrace_push_return_trace(old, self_addr, &trace.depth, 0) == -EBUSY) 585 - *parent = old; 537 + if (ftrace_push_return_trace(parent, ip, &trace.depth, 0) == -EBUSY) 538 + goto out; 539 + 540 + parent = return_hooker; 541 + out: 542 + return parent; 586 543 } 587 544 #endif /* CONFIG_FUNCTION_GRAPH_TRACER */ 588 545
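The rework above drops the fault-protected store through `parent`: `prepare_ftrace_return` now takes the saved return address by value and hands back the address to divert to, and the assembly stub writes it into the caller's LR slot itself. A minimal user-space sketch of that hook shape (all names below are stand-ins, not kernel symbols):

```c
#include <assert.h>

/* Shadow return stack, as in the function-graph tracer: remember the
 * real return address, divert the return to a trampoline, and pop the
 * real address back when the trampoline fires. */

#define STACK_DEPTH 16

static unsigned long ret_stack[STACK_DEPTH];
static int ret_top = -1;

/* Stands in for the address of return_to_handler. */
static const unsigned long return_hooker = 0xdeadbeefUL;

/* Returns the address the stub should place in the caller's LR slot:
 * either the trampoline, or the untouched original on overflow. */
static unsigned long prepare_return(unsigned long parent, unsigned long ip)
{
    (void)ip;                          /* the real hook also records ip */
    if (ret_top + 1 >= STACK_DEPTH)
        return parent;                 /* shadow stack full: don't divert */
    ret_stack[++ret_top] = parent;     /* remember the real return address */
    return return_hooker;
}

/* Called from the trampoline: recover the real return address. */
static unsigned long pop_return(void)
{
    return ret_stack[ret_top--];
}
```

Returning the divert target (instead of writing through a pointer into the caller's frame) is what lets the C code lose the inline-asm fixup table: the store now happens in the stub, which already holds the frame pointer.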
+122 -108
arch/powerpc/kernel/head_8xx.S
··· 33 33 34 34 /* Macro to make the code more readable. */ 35 35 #ifdef CONFIG_8xx_CPU6 36 - #define DO_8xx_CPU6(val, reg) \ 37 - li reg, val; \ 38 - stw reg, 12(r0); \ 39 - lwz reg, 12(r0); 36 + #define SPRN_MI_TWC_ADDR 0x2b80 37 + #define SPRN_MI_RPN_ADDR 0x2d80 38 + #define SPRN_MD_TWC_ADDR 0x3b80 39 + #define SPRN_MD_RPN_ADDR 0x3d80 40 + 41 + #define MTSPR_CPU6(spr, reg, treg) \ 42 + li treg, spr##_ADDR; \ 43 + stw treg, 12(r0); \ 44 + lwz treg, 12(r0); \ 45 + mtspr spr, reg 40 46 #else 41 - #define DO_8xx_CPU6(val, reg) 47 + #define MTSPR_CPU6(spr, reg, treg) \ 48 + mtspr spr, reg 42 49 #endif 50 + 51 + /* 52 + * Value for the bits that have fixed value in RPN entries. 53 + * Also used for tagging DAR for DTLBerror. 54 + */ 55 + #ifdef CONFIG_PPC_16K_PAGES 56 + #define RPN_PATTERN (0x00f0 | MD_SPS16K) 57 + #else 58 + #define RPN_PATTERN 0x00f0 59 + #endif 60 + 43 61 __HEAD 44 62 _ENTRY(_stext); 45 63 _ENTRY(_start); ··· 82 64 * entry into each of the instruction and data TLBs to map the first 83 65 * 8M 1:1. I also mapped an additional I/O space 1:1 so we can get to 84 66 * the "internal" processor registers before MMU_init is called. 85 - * 86 - * The TLB code currently contains a major hack. Since I use the condition 87 - * code register, I have to save and restore it. I am out of registers, so 88 - * I just store it in memory location 0 (the TLB handlers are not reentrant). 89 - * To avoid making any decisions, I need to use the "segment" valid bit 90 - * in the first level table, but that would require many changes to the 91 - * Linux page directory/table functions that I don't want to do right now. 
92 67 * 93 68 * -- Dan 94 69 */ ··· 222 211 EXCEPTION_PROLOG 223 212 mfspr r4,SPRN_DAR 224 213 stw r4,_DAR(r11) 225 - li r5,0x00f0 214 + li r5,RPN_PATTERN 226 215 mtspr SPRN_DAR,r5 /* Tag DAR, to be used in DTLB Error */ 227 216 mfspr r5,SPRN_DSISR 228 217 stw r5,_DSISR(r11) ··· 230 219 EXC_XFER_STD(0x200, machine_check_exception) 231 220 232 221 /* Data access exception. 233 - * This is "never generated" by the MPC8xx. We jump to it for other 234 - * translation errors. 222 + * This is "never generated" by the MPC8xx. 235 223 */ 236 224 . = 0x300 237 225 DataAccess: 238 - EXCEPTION_PROLOG 239 - mfspr r10,SPRN_DSISR 240 - stw r10,_DSISR(r11) 241 - mr r5,r10 242 - mfspr r4,SPRN_DAR 243 - li r10,0x00f0 244 - mtspr SPRN_DAR,r10 /* Tag DAR, to be used in DTLB Error */ 245 - EXC_XFER_LITE(0x300, handle_page_fault) 246 226 247 227 /* Instruction access exception. 248 - * This is "never generated" by the MPC8xx. We jump to it for other 249 - * translation errors. 228 + * This is "never generated" by the MPC8xx. 250 229 */ 251 230 . = 0x400 252 231 InstructionAccess: 253 - EXCEPTION_PROLOG 254 - mr r4,r12 255 - mr r5,r9 256 - EXC_XFER_LITE(0x400, handle_page_fault) 257 232 258 233 /* External interrupt */ 259 234 EXCEPTION(0x500, HardwareInterrupt, do_IRQ, EXC_XFER_LITE) ··· 250 253 EXCEPTION_PROLOG 251 254 mfspr r4,SPRN_DAR 252 255 stw r4,_DAR(r11) 253 - li r5,0x00f0 256 + li r5,RPN_PATTERN 254 257 mtspr SPRN_DAR,r5 /* Tag DAR, to be used in DTLB Error */ 255 258 mfspr r5,SPRN_DSISR 256 259 stw r5,_DSISR(r11) ··· 289 292 . = 0x1100 290 293 /* 291 294 * For the MPC8xx, this is a software tablewalk to load the instruction 292 - * TLB. It is modelled after the example in the Motorola manual. The task 293 - * switch loads the M_TWB register with the pointer to the first level table. 295 + * TLB. The task switch loads the M_TW register with the pointer to the first 296 + * level table. 
294 297 * If we discover there is no second level table (value is zero) or if there 295 298 * is an invalid pte, we load that into the TLB, which causes another fault 296 299 * into the TLB Error interrupt where we can handle such problems. ··· 299 302 */ 300 303 InstructionTLBMiss: 301 304 #ifdef CONFIG_8xx_CPU6 302 - stw r3, 8(r0) 305 + mtspr SPRN_DAR, r3 303 306 #endif 304 307 EXCEPTION_PROLOG_0 305 308 mtspr SPRN_SPRG_SCRATCH2, r10 306 309 mfspr r10, SPRN_SRR0 /* Get effective address of fault */ 307 310 #ifdef CONFIG_8xx_CPU15 308 - addi r11, r10, 0x1000 311 + addi r11, r10, PAGE_SIZE 309 312 tlbie r11 310 - addi r11, r10, -0x1000 313 + addi r11, r10, -PAGE_SIZE 311 314 tlbie r11 312 315 #endif 313 - DO_8xx_CPU6(0x3780, r3) 314 - mtspr SPRN_MD_EPN, r10 /* Have to use MD_EPN for walk, MI_EPN can't */ 315 - mfspr r10, SPRN_M_TWB /* Get level 1 table entry address */ 316 316 317 317 /* If we are faulting a kernel address, we have to use the 318 318 * kernel page tables. ··· 317 323 #ifdef CONFIG_MODULES 318 324 /* Only modules will cause ITLB Misses as we always 319 325 * pin the first 8MB of kernel memory */ 320 - andi. r11, r10, 0x0800 /* Address >= 0x80000000 */ 326 + andis. r11, r10, 0x8000 /* Address >= 0x80000000 */ 327 + #endif 328 + mfspr r11, SPRN_M_TW /* Get level 1 table base address */ 329 + #ifdef CONFIG_MODULES 321 330 beq 3f 322 - lis r11, swapper_pg_dir@h 323 - ori r11, r11, swapper_pg_dir@l 324 - rlwimi r10, r11, 0, 2, 19 331 + lis r11, (swapper_pg_dir-PAGE_OFFSET)@h 332 + ori r11, r11, (swapper_pg_dir-PAGE_OFFSET)@l 325 333 3: 326 334 #endif 327 - lwz r11, 0(r10) /* Get the level 1 entry */ 335 + /* Extract level 1 index */ 336 + rlwinm r10, r10, 32 - ((PAGE_SHIFT - 2) << 1), (PAGE_SHIFT - 2) << 1, 29 337 + lwzx r11, r10, r11 /* Get the level 1 entry */ 328 338 rlwinm. 
r10, r11,0,0,19 /* Extract page descriptor page address */ 329 339 beq 2f /* If zero, don't try to find a pte */ 330 340 331 341 /* We have a pte table, so load the MI_TWC with the attributes 332 342 * for this "segment." 333 343 */ 334 - ori r11,r11,1 /* Set valid bit */ 335 - DO_8xx_CPU6(0x2b80, r3) 336 - mtspr SPRN_MI_TWC, r11 /* Set segment attributes */ 337 - DO_8xx_CPU6(0x3b80, r3) 338 - mtspr SPRN_MD_TWC, r11 /* Load pte table base address */ 339 - mfspr r11, SPRN_MD_TWC /* ....and get the pte address */ 340 - lwz r10, 0(r11) /* Get the pte */ 344 + MTSPR_CPU6(SPRN_MI_TWC, r11, r3) /* Set segment attributes */ 345 + mfspr r11, SPRN_SRR0 /* Get effective address of fault */ 346 + /* Extract level 2 index */ 347 + rlwinm r11, r11, 32 - (PAGE_SHIFT - 2), 32 - PAGE_SHIFT, 29 348 + lwzx r10, r10, r11 /* Get the pte */ 341 349 342 350 #ifdef CONFIG_SWAP 343 351 andi. r11, r10, _PAGE_ACCESSED | _PAGE_PRESENT 344 352 cmpwi cr0, r11, _PAGE_ACCESSED | _PAGE_PRESENT 353 + li r11, RPN_PATTERN 345 354 bne- cr0, 2f 355 + #else 356 + li r11, RPN_PATTERN 346 357 #endif 347 358 /* The Linux PTE won't go exactly into the MMU TLB. 348 359 * Software indicator bits 21 and 28 must be clear. ··· 355 356 * set. All other Linux PTE bits control the behavior 356 357 * of the MMU. 
357 358 */ 358 - li r11, 0x00f0 359 359 rlwimi r10, r11, 0, 0x07f8 /* Set 24-27, clear 21-23,28 */ 360 - DO_8xx_CPU6(0x2d80, r3) 361 - mtspr SPRN_MI_RPN, r10 /* Update TLB entry */ 360 + MTSPR_CPU6(SPRN_MI_RPN, r10, r3) /* Update TLB entry */ 362 361 363 362 /* Restore registers */ 364 363 #ifdef CONFIG_8xx_CPU6 365 - lwz r3, 8(r0) 364 + mfspr r3, SPRN_DAR 365 + mtspr SPRN_DAR, r11 /* Tag DAR */ 366 366 #endif 367 367 mfspr r10, SPRN_SPRG_SCRATCH2 368 368 EXCEPTION_EPILOG_0 369 369 rfi 370 370 2: 371 - mfspr r11, SPRN_SRR1 371 + mfspr r10, SPRN_SRR1 372 372 /* clear all error bits as TLB Miss 373 373 * sets a few unconditionally 374 374 */ 375 - rlwinm r11, r11, 0, 0xffff 376 - mtspr SPRN_SRR1, r11 375 + rlwinm r10, r10, 0, 0xffff 376 + mtspr SPRN_SRR1, r10 377 377 378 378 /* Restore registers */ 379 379 #ifdef CONFIG_8xx_CPU6 380 - lwz r3, 8(r0) 380 + mfspr r3, SPRN_DAR 381 + mtspr SPRN_DAR, r11 /* Tag DAR */ 381 382 #endif 382 383 mfspr r10, SPRN_SPRG_SCRATCH2 383 - EXCEPTION_EPILOG_0 384 - b InstructionAccess 384 + b InstructionTLBError1 385 385 386 386 . = 0x1200 387 387 DataStoreTLBMiss: 388 388 #ifdef CONFIG_8xx_CPU6 389 - stw r3, 8(r0) 389 + mtspr SPRN_DAR, r3 390 390 #endif 391 391 EXCEPTION_PROLOG_0 392 392 mtspr SPRN_SPRG_SCRATCH2, r10 393 - mfspr r10, SPRN_M_TWB /* Get level 1 table entry address */ 393 + mfspr r10, SPRN_MD_EPN 394 394 395 395 /* If we are faulting a kernel address, we have to use the 396 396 * kernel page tables. 397 397 */ 398 - andi. r11, r10, 0x0800 398 + andis. 
r11, r10, 0x8000 399 + mfspr r11, SPRN_M_TW /* Get level 1 table base address */ 399 400 beq 3f 400 - lis r11, swapper_pg_dir@h 401 - ori r11, r11, swapper_pg_dir@l 402 - rlwimi r10, r11, 0, 2, 19 401 + lis r11, (swapper_pg_dir-PAGE_OFFSET)@h 402 + ori r11, r11, (swapper_pg_dir-PAGE_OFFSET)@l 403 403 3: 404 - lwz r11, 0(r10) /* Get the level 1 entry */ 404 + /* Extract level 1 index */ 405 + rlwinm r10, r10, 32 - ((PAGE_SHIFT - 2) << 1), (PAGE_SHIFT - 2) << 1, 29 406 + lwzx r11, r10, r11 /* Get the level 1 entry */ 405 407 rlwinm. r10, r11,0,0,19 /* Extract page descriptor page address */ 406 408 beq 2f /* If zero, don't try to find a pte */ 407 409 408 410 /* We have a pte table, so load fetch the pte from the table. 409 411 */ 410 - ori r11, r11, 1 /* Set valid bit in physical L2 page */ 411 - DO_8xx_CPU6(0x3b80, r3) 412 - mtspr SPRN_MD_TWC, r11 /* Load pte table base address */ 413 - mfspr r10, SPRN_MD_TWC /* ....and get the pte address */ 412 + mfspr r10, SPRN_MD_EPN /* Get address of fault */ 413 + /* Extract level 2 index */ 414 + rlwinm r10, r10, 32 - (PAGE_SHIFT - 2), 32 - PAGE_SHIFT, 29 415 + rlwimi r10, r11, 0, 0, 32 - PAGE_SHIFT - 1 /* Add level 2 base */ 414 416 lwz r10, 0(r10) /* Get the pte */ 415 417 416 418 /* Insert the Guarded flag into the TWC from the Linux PTE. ··· 425 425 * It is bit 25 in the Linux PTE and bit 30 in the TWC 426 426 */ 427 427 rlwimi r11, r10, 32-5, 30, 30 428 - DO_8xx_CPU6(0x3b80, r3) 429 - mtspr SPRN_MD_TWC, r11 428 + MTSPR_CPU6(SPRN_MD_TWC, r11, r3) 430 429 431 430 /* Both _PAGE_ACCESSED and _PAGE_PRESENT has to be set. 
432 431 * We also need to know if the insn is a load/store, so: ··· 441 442 and r11, r11, r10 442 443 rlwimi r10, r11, 0, _PAGE_PRESENT 443 444 #endif 444 - /* Honour kernel RO, User NA */ 445 - /* 0x200 == Extended encoding, bit 22 */ 446 - rlwimi r10, r10, 32-2, 0x200 /* Copy USER to bit 22, 0x200 */ 447 - /* r11 = (r10 & _PAGE_RW) >> 1 */ 448 - rlwinm r11, r10, 32-1, 0x200 449 - or r10, r11, r10 450 - /* invert RW and 0x200 bits */ 451 - xori r10, r10, _PAGE_RW | 0x200 445 + /* invert RW */ 446 + xori r10, r10, _PAGE_RW 452 447 453 448 /* The Linux PTE won't go exactly into the MMU TLB. 454 449 * Software indicator bits 22 and 28 must be clear. ··· 450 457 * set. All other Linux PTE bits control the behavior 451 458 * of the MMU. 452 459 */ 453 - 2: li r11, 0x00f0 460 + 2: li r11, RPN_PATTERN 454 461 rlwimi r10, r11, 0, 24, 28 /* Set 24-27, clear 28 */ 455 - DO_8xx_CPU6(0x3d80, r3) 456 - mtspr SPRN_MD_RPN, r10 /* Update TLB entry */ 462 + MTSPR_CPU6(SPRN_MD_RPN, r10, r3) /* Update TLB entry */ 457 463 458 464 /* Restore registers */ 459 465 #ifdef CONFIG_8xx_CPU6 460 - lwz r3, 8(r0) 466 + mfspr r3, SPRN_DAR 461 467 #endif 462 468 mtspr SPRN_DAR, r11 /* Tag DAR */ 463 469 mfspr r10, SPRN_SPRG_SCRATCH2 ··· 469 477 */ 470 478 . = 0x1300 471 479 InstructionTLBError: 472 - b InstructionAccess 480 + EXCEPTION_PROLOG_0 481 + InstructionTLBError1: 482 + EXCEPTION_PROLOG_1 483 + EXCEPTION_PROLOG_2 484 + mr r4,r12 485 + mr r5,r9 486 + andis. r10,r5,0x4000 487 + beq+ 1f 488 + tlbie r4 489 + /* 0x400 is InstructionAccess exception, needed by bad_page_fault() */ 490 + 1: EXC_XFER_LITE(0x400, handle_page_fault) 473 491 474 492 /* This is the data TLB error on the MPC8xx. This could be due to 475 493 * many reasons, including a dirty update to a pte. We bail out to ··· 490 488 EXCEPTION_PROLOG_0 491 489 492 490 mfspr r11, SPRN_DAR 493 - cmpwi cr0, r11, 0x00f0 491 + cmpwi cr0, r11, RPN_PATTERN 494 492 beq- FixupDAR /* must be a buggy dcbX, icbi insn. 
*/ 495 493 DARFixed:/* Return from dcbx instruction bug workaround */ 496 - EXCEPTION_EPILOG_0 497 - b DataAccess 494 + EXCEPTION_PROLOG_1 495 + EXCEPTION_PROLOG_2 496 + mfspr r5,SPRN_DSISR 497 + stw r5,_DSISR(r11) 498 + mfspr r4,SPRN_DAR 499 + andis. r10,r5,0x4000 500 + beq+ 1f 501 + tlbie r4 502 + 1: li r10,RPN_PATTERN 503 + mtspr SPRN_DAR,r10 /* Tag DAR, to be used in DTLB Error */ 504 + /* 0x300 is DataAccess exception, needed by bad_page_fault() */ 505 + EXC_XFER_LITE(0x300, handle_page_fault) 498 506 499 507 EXCEPTION(0x1500, Trap_15, unknown_exception, EXC_XFER_EE) 500 508 EXCEPTION(0x1600, Trap_16, unknown_exception, EXC_XFER_EE) ··· 533 521 #define NO_SELF_MODIFYING_CODE 534 522 FixupDAR:/* Entry point for dcbx workaround. */ 535 523 #ifdef CONFIG_8xx_CPU6 536 - stw r3, 8(r0) 524 + mtspr SPRN_DAR, r3 537 525 #endif 538 526 mtspr SPRN_SPRG_SCRATCH2, r10 539 527 /* fetch instruction from memory. */ 540 528 mfspr r10, SPRN_SRR0 541 529 andis. r11, r10, 0x8000 /* Address >= 0x80000000 */ 542 - DO_8xx_CPU6(0x3780, r3) 543 - mtspr SPRN_MD_EPN, r10 544 - mfspr r11, SPRN_M_TWB /* Get level 1 table entry address */ 530 + mfspr r11, SPRN_M_TW /* Get level 1 table base address */ 545 531 beq- 3f /* Branch if user space */ 546 532 lis r11, (swapper_pg_dir-PAGE_OFFSET)@h 547 533 ori r11, r11, (swapper_pg_dir-PAGE_OFFSET)@l 548 - rlwimi r11, r10, 32-20, 0xffc /* r11 = r11&~0xffc|(r10>>20)&0xffc */ 549 - 3: lwz r11, 0(r11) /* Get the level 1 entry */ 550 - DO_8xx_CPU6(0x3b80, r3) 551 - mtspr SPRN_MD_TWC, r11 /* Load pte table base address */ 552 - mfspr r11, SPRN_MD_TWC /* ....and get the pte address */ 553 - lwz r11, 0(r11) /* Get the pte */ 534 + /* Extract level 1 index */ 535 + 3: rlwinm r10, r10, 32 - ((PAGE_SHIFT - 2) << 1), (PAGE_SHIFT - 2) << 1, 29 536 + lwzx r11, r10, r11 /* Get the level 1 entry */ 537 + rlwinm r10, r11,0,0,19 /* Extract page descriptor page address */ 538 + mfspr r11, SPRN_SRR0 /* Get effective address of fault */ 539 + /* Extract level 2 
index */ 540 + rlwinm r11, r11, 32 - (PAGE_SHIFT - 2), 32 - PAGE_SHIFT, 29 541 + lwzx r11, r10, r11 /* Get the pte */ 554 542 #ifdef CONFIG_8xx_CPU6 555 - lwz r3, 8(r0) /* restore r3 from memory */ 543 + mfspr r3, SPRN_DAR 556 544 #endif 557 545 /* concat physical page address(r11) and page offset(r10) */ 558 - rlwimi r11, r10, 0, 20, 31 546 + mfspr r10, SPRN_SRR0 547 + rlwimi r11, r10, 0, 32 - PAGE_SHIFT, 31 559 548 lwz r11,0(r11) 560 549 /* Check if it really is a dcbx instruction. */ 561 550 /* dcbt and dcbtst does not generate DTLB Misses/Errors, ··· 711 698 #ifdef CONFIG_8xx_CPU6 712 699 lis r4, cpu6_errata_word@h 713 700 ori r4, r4, cpu6_errata_word@l 714 - li r3, 0x3980 701 + li r3, 0x3f80 715 702 stw r3, 12(r4) 716 703 lwz r3, 12(r4) 717 704 #endif 718 - mtspr SPRN_M_TWB, r6 705 + mtspr SPRN_M_TW, r6 719 706 lis r4,2f@h 720 707 ori r4,r4,2f@l 721 708 tophys(r4,r4) ··· 889 876 lis r6, cpu6_errata_word@h 890 877 ori r6, r6, cpu6_errata_word@l 891 878 tophys (r4, r4) 892 - li r7, 0x3980 879 + li r7, 0x3f80 893 880 stw r7, 12(r6) 894 881 lwz r7, 12(r6) 895 - mtspr SPRN_M_TWB, r4 /* Update MMU base address */ 882 + mtspr SPRN_M_TW, r4 /* Update MMU base address */ 896 883 li r7, 0x3380 897 884 stw r7, 12(r6) 898 885 lwz r7, 12(r6) ··· 900 887 #else 901 888 mtspr SPRN_M_CASID,r3 /* Update context */ 902 889 tophys (r4, r4) 903 - mtspr SPRN_M_TWB, r4 /* and pgd */ 890 + mtspr SPRN_M_TW, r4 /* and pgd */ 904 891 #endif 905 892 SYNC 906 893 blr ··· 932 919 .globl sdata 933 920 sdata: 934 921 .globl empty_zero_page 922 + .align PAGE_SHIFT 935 923 empty_zero_page: 936 - .space 4096 924 + .space PAGE_SIZE 937 925 938 926 .globl swapper_pg_dir 939 927 swapper_pg_dir: 940 - .space 4096 928 + .space PGD_TABLE_SIZE 941 929 942 930 /* Room for two PTE table poiners, usually the kernel and current user 943 931 * pointer to their respective root page table (pgdir).
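The `rlwinm`/`lwzx` pairs above derive both table offsets straight from the faulting address. For the 4K-page case (`PAGE_SHIFT` = 12, a 1024-entry directory — an assumption of this sketch, not stated in the hunk) the index arithmetic reduces to plain shifts and masks; the handlers compute the same indices pre-scaled by 4, i.e. as word offsets into the tables:

```c
#include <assert.h>
#include <stdint.h>

/* Two-level software tablewalk split of a 32-bit address, 4K pages:
 * 10-bit level-1 index | 10-bit level-2 index | 12-bit page offset. */

#define PAGE_SHIFT 12

static uint32_t l1_index(uint32_t addr)
{
    return addr >> 22;                         /* bits 31..22 */
}

static uint32_t l2_index(uint32_t addr)
{
    return (addr >> PAGE_SHIFT) & 0x3ff;       /* bits 21..12 */
}

static uint32_t page_offset(uint32_t addr)
{
    return addr & ((1u << PAGE_SHIFT) - 1);    /* bits 11..0 */
}
```

The rotate-and-mask encodings in the hunk (`32 - ((PAGE_SHIFT - 2) << 1)` and `32 - (PAGE_SHIFT - 2)`) are these same extractions with the index left-shifted by 2, so the result can feed `lwzx` directly as a byte offset.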
+3 -3
arch/powerpc/kernel/hw_breakpoint.c
··· 63 63 int arch_install_hw_breakpoint(struct perf_event *bp) 64 64 { 65 65 struct arch_hw_breakpoint *info = counter_arch_bp(bp); 66 - struct perf_event **slot = &__get_cpu_var(bp_per_reg); 66 + struct perf_event **slot = this_cpu_ptr(&bp_per_reg); 67 67 68 68 *slot = bp; 69 69 ··· 88 88 */ 89 89 void arch_uninstall_hw_breakpoint(struct perf_event *bp) 90 90 { 91 - struct perf_event **slot = &__get_cpu_var(bp_per_reg); 91 + struct perf_event **slot = this_cpu_ptr(&bp_per_reg); 92 92 93 93 if (*slot != bp) { 94 94 WARN_ONCE(1, "Can't find the breakpoint"); ··· 226 226 */ 227 227 rcu_read_lock(); 228 228 229 - bp = __get_cpu_var(bp_per_reg); 229 + bp = __this_cpu_read(bp_per_reg); 230 230 if (!bp) 231 231 goto out; 232 232 info = counter_arch_bp(bp);
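The accessor conversion above swaps `&__get_cpu_var(x)` for `this_cpu_ptr(&x)` and bare reads for `__this_cpu_read(x)`. A toy single-address-space model of the two accessor shapes, using an ordinary array indexed by a fake current-CPU id (these are illustrative stand-ins, not the kernel macros):

```c
#include <assert.h>
#include <stddef.h>

#define NR_CPUS 4

static int cur_cpu;                    /* pretend smp_processor_id() */
static void *bp_per_cpu[NR_CPUS];      /* pretend DEFINE_PER_CPU slot */

/* Models this_cpu_ptr(&bp_per_reg): a pointer to this CPU's slot. */
static void **slot_ptr(void)
{
    return &bp_per_cpu[cur_cpu];
}

/* Models __this_cpu_read(bp_per_reg): the slot's value directly. */
static void *slot_read(void)
{
    return bp_per_cpu[cur_cpu];
}

/* Install through the pointer on cpu 0, then check that another cpu
 * sees only its own (empty) slot. Returns 1 on success. */
static int demo(void)
{
    *slot_ptr() = &cur_cpu;
    if (slot_read() != (void *)&cur_cpu)
        return 0;
    cur_cpu = 1;
    return slot_read() == NULL;
}
```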
+10 -2
arch/powerpc/kernel/idle_power7.S
··· 212 212 mtspr SPRN_SRR0,r5 213 213 rfid 214 214 215 + /* 216 + * R3 here contains the value that will be returned to the caller 217 + * of power7_nap. 218 + */ 215 219 _GLOBAL(power7_wakeup_loss) 216 220 ld r1,PACAR1(r13) 217 221 BEGIN_FTR_SECTION ··· 223 219 END_FTR_SECTION_IFSET(CPU_FTR_HVMODE) 224 220 REST_NVGPRS(r1) 225 221 REST_GPR(2, r1) 226 - ld r3,_CCR(r1) 222 + ld r6,_CCR(r1) 227 223 ld r4,_MSR(r1) 228 224 ld r5,_NIP(r1) 229 225 addi r1,r1,INT_FRAME_SIZE 230 - mtcr r3 226 + mtcr r6 231 227 mtspr SPRN_SRR1,r4 232 228 mtspr SPRN_SRR0,r5 233 229 rfid 234 230 231 + /* 232 + * R3 here contains the value that will be returned to the caller 233 + * of power7_nap. 234 + */ 235 235 _GLOBAL(power7_wakeup_noloss) 236 236 lbz r0,PACA_NAPSTATELOST(r13) 237 237 cmpwi r0,0
+1 -1
arch/powerpc/kernel/iommu.c
··· 208 208 * We don't need to disable preemption here because any CPU can 209 209 * safely use any IOMMU pool. 210 210 */ 211 - pool_nr = __raw_get_cpu_var(iommu_pool_hash) & (tbl->nr_pools - 1); 211 + pool_nr = __this_cpu_read(iommu_pool_hash) & (tbl->nr_pools - 1); 212 212 213 213 if (largealloc) 214 214 pool = &(tbl->large_pool);
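The pool pick above, `hash & (tbl->nr_pools - 1)`, relies on the pool count being a power of two so that the mask is a cheap modulo. A one-line sketch of that idiom:

```c
#include <assert.h>

/* Equivalent to hash % nr_pools when nr_pools is a power of two,
 * without a divide on the hot allocation path. */
static unsigned int pick_pool(unsigned int hash, unsigned int nr_pools)
{
    return hash & (nr_pools - 1);
}
```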
+2 -3
arch/powerpc/kernel/irq.c
··· 50 50 #include <linux/list.h> 51 51 #include <linux/radix-tree.h> 52 52 #include <linux/mutex.h> 53 - #include <linux/bootmem.h> 54 53 #include <linux/pci.h> 55 54 #include <linux/debugfs.h> 56 55 #include <linux/of.h> ··· 113 114 static inline notrace int decrementer_check_overflow(void) 114 115 { 115 116 u64 now = get_tb_or_rtc(); 116 - u64 *next_tb = &__get_cpu_var(decrementers_next_tb); 117 + u64 *next_tb = this_cpu_ptr(&decrementers_next_tb); 117 118 118 119 return now >= *next_tb; 119 120 } ··· 498 499 499 500 /* And finally process it */ 500 501 if (unlikely(irq == NO_IRQ)) 501 - __get_cpu_var(irq_stat).spurious_irqs++; 502 + __this_cpu_inc(irq_stat.spurious_irqs); 502 503 else 503 504 generic_handle_irq(irq); 504 505
+1 -1
arch/powerpc/kernel/kgdb.c
··· 155 155 { 156 156 struct thread_info *thread_info, *exception_thread_info; 157 157 struct thread_info *backup_current_thread_info = 158 - &__get_cpu_var(kgdb_thread_info); 158 + this_cpu_ptr(&kgdb_thread_info); 159 159 160 160 if (user_mode(regs)) 161 161 return 0;
+3 -3
arch/powerpc/kernel/kprobes.c
··· 119 119 120 120 static void __kprobes restore_previous_kprobe(struct kprobe_ctlblk *kcb) 121 121 { 122 - __get_cpu_var(current_kprobe) = kcb->prev_kprobe.kp; 122 + __this_cpu_write(current_kprobe, kcb->prev_kprobe.kp); 123 123 kcb->kprobe_status = kcb->prev_kprobe.status; 124 124 kcb->kprobe_saved_msr = kcb->prev_kprobe.saved_msr; 125 125 } ··· 127 127 static void __kprobes set_current_kprobe(struct kprobe *p, struct pt_regs *regs, 128 128 struct kprobe_ctlblk *kcb) 129 129 { 130 - __get_cpu_var(current_kprobe) = p; 130 + __this_cpu_write(current_kprobe, p); 131 131 kcb->kprobe_saved_msr = regs->msr; 132 132 } 133 133 ··· 192 192 ret = 1; 193 193 goto no_kprobe; 194 194 } 195 - p = __get_cpu_var(current_kprobe); 195 + p = __this_cpu_read(current_kprobe); 196 196 if (p->break_handler && p->break_handler(p, regs)) { 197 197 goto ss_probe; 198 198 }
+12 -12
arch/powerpc/kernel/mce.c
··· 73 73 uint64_t nip, uint64_t addr) 74 74 { 75 75 uint64_t srr1; 76 - int index = __get_cpu_var(mce_nest_count)++; 77 - struct machine_check_event *mce = &__get_cpu_var(mce_event[index]); 76 + int index = __this_cpu_inc_return(mce_nest_count); 77 + struct machine_check_event *mce = this_cpu_ptr(&mce_event[index]); 78 78 79 79 /* 80 80 * Return if we don't have enough space to log mce event. ··· 143 143 */ 144 144 int get_mce_event(struct machine_check_event *mce, bool release) 145 145 { 146 - int index = __get_cpu_var(mce_nest_count) - 1; 146 + int index = __this_cpu_read(mce_nest_count) - 1; 147 147 struct machine_check_event *mc_evt; 148 148 int ret = 0; 149 149 ··· 153 153 154 154 /* Check if we have MCE info to process. */ 155 155 if (index < MAX_MC_EVT) { 156 - mc_evt = &__get_cpu_var(mce_event[index]); 156 + mc_evt = this_cpu_ptr(&mce_event[index]); 157 157 /* Copy the event structure and release the original */ 158 158 if (mce) 159 159 *mce = *mc_evt; ··· 163 163 } 164 164 /* Decrement the count to free the slot. */ 165 165 if (release) 166 - __get_cpu_var(mce_nest_count)--; 166 + __this_cpu_dec(mce_nest_count); 167 167 168 168 return ret; 169 169 } ··· 184 184 if (!get_mce_event(&evt, MCE_EVENT_RELEASE)) 185 185 return; 186 186 187 - index = __get_cpu_var(mce_queue_count)++; 187 + index = __this_cpu_inc_return(mce_queue_count); 188 188 /* If queue is full, just return for now. */ 189 189 if (index >= MAX_MC_EVT) { 190 - __get_cpu_var(mce_queue_count)--; 190 + __this_cpu_dec(mce_queue_count); 191 191 return; 192 192 } 193 - __get_cpu_var(mce_event_queue[index]) = evt; 193 + memcpy(this_cpu_ptr(&mce_event_queue[index]), &evt, sizeof(evt)); 194 194 195 195 /* Queue irq work to process this event later. */ 196 196 irq_work_queue(&mce_event_process_work); ··· 208 208 * For now just print it to console. 209 209 * TODO: log this error event to FSP or nvram. 
210 210 */ 211 - while (__get_cpu_var(mce_queue_count) > 0) { 212 - index = __get_cpu_var(mce_queue_count) - 1; 211 + while (__this_cpu_read(mce_queue_count) > 0) { 212 + index = __this_cpu_read(mce_queue_count) - 1; 213 213 machine_check_print_event_info( 214 - &__get_cpu_var(mce_event_queue[index])); 215 - __get_cpu_var(mce_queue_count)--; 214 + this_cpu_ptr(&mce_event_queue[index])); 215 + __this_cpu_dec(mce_queue_count); 216 216 } 217 217 } 218 218
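Worth keeping in mind when reading the conversion above: a post-increment like `__get_cpu_var(mce_nest_count)++` yields the value *before* the increment, while `__this_cpu_inc_return()` yields the value *after* it, so for the same starting state the two differ by one. A plain single-threaded sketch of the two shapes (stand-ins, not the per-cpu macros):

```c
#include <assert.h>

static int counter;

/* Models __get_cpu_var(counter)++ : returns the old value. */
static int get_and_inc(void)
{
    return counter++;
}

/* Models __this_cpu_inc_return(counter) : returns the new value. */
static int inc_return(void)
{
    return ++counter;
}
```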
+2 -2
arch/powerpc/kernel/mce_power.c
··· 79 79 } 80 80 if (dsisr & P7_DSISR_MC_TLB_MULTIHIT_MFTLB) { 81 81 if (cur_cpu_spec && cur_cpu_spec->flush_tlb) 82 - cur_cpu_spec->flush_tlb(TLBIEL_INVAL_PAGE); 82 + cur_cpu_spec->flush_tlb(TLBIEL_INVAL_SET); 83 83 /* reset error bits */ 84 84 dsisr &= ~P7_DSISR_MC_TLB_MULTIHIT_MFTLB; 85 85 } ··· 110 110 break; 111 111 case P7_SRR1_MC_IFETCH_TLB_MULTIHIT: 112 112 if (cur_cpu_spec && cur_cpu_spec->flush_tlb) { 113 - cur_cpu_spec->flush_tlb(TLBIEL_INVAL_PAGE); 113 + cur_cpu_spec->flush_tlb(TLBIEL_INVAL_SET); 114 114 handled = 1; 115 115 } 116 116 break;
+1 -2
arch/powerpc/kernel/pci-common.c
··· 20 20 #include <linux/pci.h> 21 21 #include <linux/string.h> 22 22 #include <linux/init.h> 23 - #include <linux/bootmem.h> 24 23 #include <linux/delay.h> 25 24 #include <linux/export.h> 26 25 #include <linux/of_address.h> ··· 1463 1464 res = &hose->io_resource; 1464 1465 1465 1466 if (!res->flags) { 1466 - printk(KERN_WARNING "PCI: I/O resource not set for host" 1467 + pr_info("PCI: I/O resource not set for host" 1467 1468 " bridge %s (domain %d)\n", 1468 1469 hose->dn->full_name, hose->global_number); 1469 1470 } else {
+1 -3
arch/powerpc/kernel/pci_32.c
··· 199 199 struct property* of_prop; 200 200 struct device_node *dn; 201 201 202 - of_prop = (struct property*) alloc_bootmem(sizeof(struct property) + 256); 203 - if (!of_prop) 204 - return; 202 + of_prop = memblock_virt_alloc(sizeof(struct property) + 256, 0); 205 203 dn = of_find_node_by_path("/"); 206 204 if (dn) { 207 205 memset(of_prop, -1, sizeof(struct property) + 256);
-1
arch/powerpc/kernel/pci_64.c
··· 17 17 #include <linux/pci.h> 18 18 #include <linux/string.h> 19 19 #include <linux/init.h> 20 - #include <linux/bootmem.h> 21 20 #include <linux/export.h> 22 21 #include <linux/mm.h> 23 22 #include <linux/list.h>
+8 -28
arch/powerpc/kernel/process.c
··· 37 37 #include <linux/personality.h> 38 38 #include <linux/random.h> 39 39 #include <linux/hw_breakpoint.h> 40 + #include <linux/uaccess.h> 40 41 41 42 #include <asm/pgtable.h> 42 - #include <asm/uaccess.h> 43 43 #include <asm/io.h> 44 44 #include <asm/processor.h> 45 45 #include <asm/mmu.h> ··· 499 499 500 500 void __set_breakpoint(struct arch_hw_breakpoint *brk) 501 501 { 502 - __get_cpu_var(current_brk) = *brk; 502 + memcpy(this_cpu_ptr(&current_brk), brk, sizeof(*brk)); 503 503 504 504 if (cpu_has_feature(CPU_FTR_DAWR)) 505 505 set_dawr(brk); ··· 842 842 * schedule DABR 843 843 */ 844 844 #ifndef CONFIG_HAVE_HW_BREAKPOINT 845 - if (unlikely(!hw_brk_match(&__get_cpu_var(current_brk), &new->thread.hw_brk))) 845 + if (unlikely(!hw_brk_match(this_cpu_ptr(&current_brk), &new->thread.hw_brk))) 846 846 __set_breakpoint(&new->thread.hw_brk); 847 847 #endif /* CONFIG_HAVE_HW_BREAKPOINT */ 848 848 #endif ··· 856 856 * Collect processor utilization data per process 857 857 */ 858 858 if (firmware_has_feature(FW_FEATURE_SPLPAR)) { 859 - struct cpu_usage *cu = &__get_cpu_var(cpu_usage_array); 859 + struct cpu_usage *cu = this_cpu_ptr(&cpu_usage_array); 860 860 long unsigned start_tb, current_tb; 861 861 start_tb = old_thread->start_tb; 862 862 cu->current_tb = current_tb = mfspr(SPRN_PURR); ··· 866 866 #endif /* CONFIG_PPC64 */ 867 867 868 868 #ifdef CONFIG_PPC_BOOK3S_64 869 - batch = &__get_cpu_var(ppc64_tlb_batch); 869 + batch = this_cpu_ptr(&ppc64_tlb_batch); 870 870 if (batch->active) { 871 871 current_thread_info()->local_flags |= _TLF_LAZY_MMU; 872 872 if (batch->index) ··· 889 889 #ifdef CONFIG_PPC_BOOK3S_64 890 890 if (current_thread_info()->local_flags & _TLF_LAZY_MMU) { 891 891 current_thread_info()->local_flags &= ~_TLF_LAZY_MMU; 892 - batch = &__get_cpu_var(ppc64_tlb_batch); 892 + batch = this_cpu_ptr(&ppc64_tlb_batch); 893 893 batch->active = 1; 894 894 } 895 895 #endif /* CONFIG_PPC_BOOK3S_64 */ ··· 921 921 pc = (unsigned long)phys_to_virt(pc); 922 922 
#endif 923 923 924 - /* We use __get_user here *only* to avoid an OOPS on a 925 - * bad address because the pc *should* only be a 926 - * kernel address. 927 - */ 928 924 if (!__kernel_text_address(pc) || 929 - __get_user(instr, (unsigned int __user *)pc)) { 925 + probe_kernel_address((unsigned int __user *)pc, instr)) { 930 926 printk(KERN_CONT "XXXXXXXX "); 931 927 } else { 932 928 if (regs->nip == pc) ··· 1527 1531 int curr_frame = current->curr_ret_stack; 1528 1532 extern void return_to_handler(void); 1529 1533 unsigned long rth = (unsigned long)return_to_handler; 1530 - unsigned long mrth = -1; 1531 - #ifdef CONFIG_PPC64 1532 - extern void mod_return_to_handler(void); 1533 - rth = *(unsigned long *)rth; 1534 - mrth = (unsigned long)mod_return_to_handler; 1535 - mrth = *(unsigned long *)mrth; 1536 - #endif 1537 1534 #endif 1538 1535 1539 1536 sp = (unsigned long) stack; ··· 1551 1562 if (!firstframe || ip != lr) { 1552 1563 printk("["REG"] ["REG"] %pS", sp, ip, (void *)ip); 1553 1564 #ifdef CONFIG_FUNCTION_GRAPH_TRACER 1554 - if ((ip == rth || ip == mrth) && curr_frame >= 0) { 1565 + if ((ip == rth) && curr_frame >= 0) { 1555 1566 printk(" (%pS)", 1556 1567 (void *)current->ret_stack[curr_frame].ret); 1557 1568 curr_frame--; ··· 1654 1665 return ret; 1655 1666 } 1656 1667 1657 - unsigned long randomize_et_dyn(unsigned long base) 1658 - { 1659 - unsigned long ret = PAGE_ALIGN(base + brk_rnd()); 1660 - 1661 - if (ret < base) 1662 - return base; 1663 - 1664 - return ret; 1665 - }
+7 -4
arch/powerpc/kernel/prom.c
··· 160 160 {CPU_FTR_NODSISRALIGN, 0, 0, 1, 1, 1}, 161 161 {0, MMU_FTR_CI_LARGE_PAGE, 0, 1, 2, 0}, 162 162 {CPU_FTR_REAL_LE, PPC_FEATURE_TRUE_LE, 5, 0, 0}, 163 + /* 164 + * If the kernel doesn't support TM (ie. CONFIG_PPC_TRANSACTIONAL_MEM=n), 165 + * we don't want to turn on CPU_FTR_TM here, so we use CPU_FTR_TM_COMP 166 + * which is 0 if the kernel doesn't support TM. 167 + */ 168 + {CPU_FTR_TM_COMP, 0, 0, 22, 0, 0}, 163 169 }; 164 170 165 171 static void __init scan_features(unsigned long node, const unsigned char *ftrs, ··· 702 696 reserve_crashkernel(); 703 697 early_reserve_mem(); 704 698 705 - /* 706 - * Ensure that total memory size is page-aligned, because otherwise 707 - * mark_bootmem() gets upset. 708 - */ 699 + /* Ensure that total memory size is page-aligned. */ 709 700 limit = ALIGN(memory_limit ?: memblock_phys_mem_size(), PAGE_SIZE); 710 701 memblock_enforce_memory_limit(limit); 711 702
+9 -11
arch/powerpc/kernel/rtas-proc.c
··· 113 113 #define SENSOR_PREFIX "ibm,sensor-" 114 114 #define cel_to_fahr(x) ((x*9/5)+32) 115 115 116 - 117 - /* Globals */ 118 - static struct rtas_sensors sensors; 119 - static struct device_node *rtas_node = NULL; 120 - static unsigned long power_on_time = 0; /* Save the time the user set */ 121 - static char progress_led[MAX_LINELENGTH]; 122 - 123 - static unsigned long rtas_tone_frequency = 1000; 124 - static unsigned long rtas_tone_volume = 0; 125 - 126 - /* ****************STRUCTS******************************************* */ 127 116 struct individual_sensor { 128 117 unsigned int token; 129 118 unsigned int quant; ··· 122 133 struct individual_sensor sensor[MAX_SENSORS]; 123 134 unsigned int quant; 124 135 }; 136 + 137 + /* Globals */ 138 + static struct rtas_sensors sensors; 139 + static struct device_node *rtas_node = NULL; 140 + static unsigned long power_on_time = 0; /* Save the time the user set */ 141 + static char progress_led[MAX_LINELENGTH]; 142 + 143 + static unsigned long rtas_tone_frequency = 1000; 144 + static unsigned long rtas_tone_volume = 0; 125 145 126 146 /* ****************************************************************** */ 127 147 /* Declarations */
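The hunk above only reorders declarations: defining `static struct rtas_sensors sensors;` requires the complete struct type to be visible first, so the globals move below the struct bodies. A minimal illustration of the rule:

```c
#include <assert.h>

/* A variable definition needs a complete type at that point; a
 * forward declaration alone would not be enough here. */
struct sensor {
    unsigned int token;
    unsigned int quant;
};

static struct sensor first = { 3, 1 };  /* legal: struct sensor is complete */
```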
+2 -2
arch/powerpc/kernel/rtas.c
··· 1091 1091 } 1092 1092 1093 1093 /* 1094 - * Call early during boot, before mem init or bootmem, to retrieve the RTAS 1095 - * informations from the device-tree and allocate the RMO buffer for userland 1094 + * Call early during boot, before mem init, to retrieve the RTAS 1095 + * information from the device-tree and allocate the RMO buffer for userland 1096 1096 * accesses. 1097 1097 */ 1098 1098 void __init rtas_initialize(void)
-1
arch/powerpc/kernel/rtas_pci.c
··· 26 26 #include <linux/pci.h> 27 27 #include <linux/string.h> 28 28 #include <linux/init.h> 29 - #include <linux/bootmem.h> 30 29 31 30 #include <asm/io.h> 32 31 #include <asm/pgtable.h>
+3 -3
arch/powerpc/kernel/setup-common.c
··· 139 139 void machine_power_off(void) 140 140 { 141 141 machine_shutdown(); 142 - if (ppc_md.power_off) 143 - ppc_md.power_off(); 142 + if (pm_power_off) 143 + pm_power_off(); 144 144 #ifdef CONFIG_SMP 145 145 smp_send_stop(); 146 146 #endif ··· 151 151 /* Used by the G5 thermal driver */ 152 152 EXPORT_SYMBOL_GPL(machine_power_off); 153 153 154 - void (*pm_power_off)(void) = machine_power_off; 154 + void (*pm_power_off)(void); 155 155 EXPORT_SYMBOL_GPL(pm_power_off); 156 156 157 157 void machine_halt(void)
+2 -9
arch/powerpc/kernel/setup_32.c
··· 11 11 #include <linux/delay.h> 12 12 #include <linux/initrd.h> 13 13 #include <linux/tty.h> 14 - #include <linux/bootmem.h> 15 14 #include <linux/seq_file.h> 16 15 #include <linux/root_dev.h> 17 16 #include <linux/cpu.h> ··· 51 52 unsigned long ISA_DMA_THRESHOLD; 52 53 unsigned int DMA_MODE_READ; 53 54 unsigned int DMA_MODE_WRITE; 54 - 55 - #ifdef CONFIG_VGA_CONSOLE 56 - unsigned long vgacon_remap_base; 57 - EXPORT_SYMBOL(vgacon_remap_base); 58 - #endif 59 55 60 56 /* 61 57 * These are used in binfmt_elf.c to put aux entries on the stack ··· 305 311 306 312 irqstack_early_init(); 307 313 308 - /* set up the bootmem stuff with available memory */ 309 - do_init_bootmem(); 310 - if ( ppc_md.progress ) ppc_md.progress("setup_arch: bootmem", 0x3eab); 314 + initmem_init(); 315 + if ( ppc_md.progress ) ppc_md.progress("setup_arch: initmem", 0x3eab); 311 316 312 317 #ifdef CONFIG_DUMMY_CONSOLE 313 318 conswitchp = &dummy_con;
+2 -33
arch/powerpc/kernel/setup_64.c
··· 660 660 } 661 661 662 662 /* 663 - * Called into from start_kernel this initializes bootmem, which is used 663 + * Called into from start_kernel this initializes memblock, which is used 664 664 * to manage page allocation until mem_init is called. 665 665 */ 666 666 void __init setup_arch(char **cmdline_p) 667 667 { 668 - ppc64_boot_msg(0x12, "Setup Arch"); 669 - 670 668 *cmdline_p = boot_command_line; 671 669 672 670 /* ··· 689 691 exc_lvl_early_init(); 690 692 emergency_stack_init(); 691 693 692 - /* set up the bootmem stuff with available memory */ 693 - do_init_bootmem(); 694 - sparse_init(); 694 + initmem_init(); 695 695 696 696 #ifdef CONFIG_DUMMY_CONSOLE 697 697 conswitchp = &dummy_con; ··· 707 711 if ((unsigned long)_stext & 0xffff) 708 712 panic("Kernelbase not 64K-aligned (0x%lx)!\n", 709 713 (unsigned long)_stext); 710 - 711 - ppc64_boot_msg(0x15, "Setup Done"); 712 - } 713 - 714 - 715 - /* ToDo: do something useful if ppc_md is not yet setup. */ 716 - #define PPC64_LINUX_FUNCTION 0x0f000000 717 - #define PPC64_IPL_MESSAGE 0xc0000000 718 - #define PPC64_TERM_MESSAGE 0xb0000000 719 - 720 - static void ppc64_do_msg(unsigned int src, const char *msg) 721 - { 722 - if (ppc_md.progress) { 723 - char buf[128]; 724 - 725 - sprintf(buf, "%08X\n", src); 726 - ppc_md.progress(buf, 0); 727 - snprintf(buf, 128, "%s", msg); 728 - ppc_md.progress(buf, 0); 729 - } 730 - } 731 - 732 - /* Print a boot progress message. */ 733 - void ppc64_boot_msg(unsigned int src, const char *msg) 734 - { 735 - ppc64_do_msg(PPC64_LINUX_FUNCTION|PPC64_IPL_MESSAGE|src, msg); 736 - printk("[boot]%04x %s\n", src, msg); 737 714 } 738 715 739 716 #ifdef CONFIG_SMP
+3 -3
arch/powerpc/kernel/smp.c
··· 243 243 244 244 irqreturn_t smp_ipi_demux(void) 245 245 { 246 - struct cpu_messages *info = &__get_cpu_var(ipi_message); 246 + struct cpu_messages *info = this_cpu_ptr(&ipi_message); 247 247 unsigned int all; 248 248 249 249 mb(); /* order any irq clear */ ··· 442 442 idle_task_exit(); 443 443 cpu = smp_processor_id(); 444 444 printk(KERN_DEBUG "CPU%d offline\n", cpu); 445 - __get_cpu_var(cpu_state) = CPU_DEAD; 445 + __this_cpu_write(cpu_state, CPU_DEAD); 446 446 smp_wmb(); 447 - while (__get_cpu_var(cpu_state) != CPU_UP_PREPARE) 447 + while (__this_cpu_read(cpu_state) != CPU_UP_PREPARE) 448 448 cpu_relax(); 449 449 } 450 450
+2 -2
arch/powerpc/kernel/sysfs.c
··· 394 394 ppc_set_pmu_inuse(1); 395 395 396 396 /* Only need to enable them once */ 397 - if (__get_cpu_var(pmcs_enabled)) 397 + if (__this_cpu_read(pmcs_enabled)) 398 398 return; 399 399 400 - __get_cpu_var(pmcs_enabled) = 1; 400 + __this_cpu_write(pmcs_enabled, 1); 401 401 402 402 if (ppc_md.enable_pmcs) 403 403 ppc_md.enable_pmcs();
+12 -11
arch/powerpc/kernel/time.c
··· 458 458 459 459 DEFINE_PER_CPU(u8, irq_work_pending); 460 460 461 - #define set_irq_work_pending_flag() __get_cpu_var(irq_work_pending) = 1 462 - #define test_irq_work_pending() __get_cpu_var(irq_work_pending) 463 - #define clear_irq_work_pending() __get_cpu_var(irq_work_pending) = 0 461 + #define set_irq_work_pending_flag() __this_cpu_write(irq_work_pending, 1) 462 + #define test_irq_work_pending() __this_cpu_read(irq_work_pending) 463 + #define clear_irq_work_pending() __this_cpu_write(irq_work_pending, 0) 464 464 465 465 #endif /* 32 vs 64 bit */ 466 466 ··· 482 482 static void __timer_interrupt(void) 483 483 { 484 484 struct pt_regs *regs = get_irq_regs(); 485 - u64 *next_tb = &__get_cpu_var(decrementers_next_tb); 486 - struct clock_event_device *evt = &__get_cpu_var(decrementers); 485 + u64 *next_tb = this_cpu_ptr(&decrementers_next_tb); 486 + struct clock_event_device *evt = this_cpu_ptr(&decrementers); 487 487 u64 now; 488 488 489 489 trace_timer_interrupt_entry(regs); ··· 498 498 *next_tb = ~(u64)0; 499 499 if (evt->event_handler) 500 500 evt->event_handler(evt); 501 - __get_cpu_var(irq_stat).timer_irqs_event++; 501 + __this_cpu_inc(irq_stat.timer_irqs_event); 502 502 } else { 503 503 now = *next_tb - now; 504 504 if (now <= DECREMENTER_MAX) ··· 506 506 /* We may have raced with new irq work */ 507 507 if (test_irq_work_pending()) 508 508 set_dec(1); 509 - __get_cpu_var(irq_stat).timer_irqs_others++; 509 + __this_cpu_inc(irq_stat.timer_irqs_others); 510 510 } 511 511 512 512 #ifdef CONFIG_PPC64 513 513 /* collect purr register values often, for accurate calculations */ 514 514 if (firmware_has_feature(FW_FEATURE_SPLPAR)) { 515 - struct cpu_usage *cu = &__get_cpu_var(cpu_usage_array); 515 + struct cpu_usage *cu = this_cpu_ptr(&cpu_usage_array); 516 516 cu->current_tb = mfspr(SPRN_PURR); 517 517 } 518 518 #endif ··· 527 527 void timer_interrupt(struct pt_regs * regs) 528 528 { 529 529 struct pt_regs *old_regs; 530 - u64 *next_tb = 
&__get_cpu_var(decrementers_next_tb); 530 + u64 *next_tb = this_cpu_ptr(&decrementers_next_tb); 531 531 532 532 /* Ensure a positive value is written to the decrementer, or else 533 533 * some CPUs will continue to take decrementer exceptions. ··· 813 813 static int decrementer_set_next_event(unsigned long evt, 814 814 struct clock_event_device *dev) 815 815 { 816 - __get_cpu_var(decrementers_next_tb) = get_tb_or_rtc() + evt; 816 + __this_cpu_write(decrementers_next_tb, get_tb_or_rtc() + evt); 817 817 set_dec(evt); 818 818 819 819 /* We may have raced with new irq work */ ··· 833 833 /* Interrupt handler for the timer broadcast IPI */ 834 834 void tick_broadcast_ipi_handler(void) 835 835 { 836 - u64 *next_tb = &__get_cpu_var(decrementers_next_tb); 836 + u64 *next_tb = this_cpu_ptr(&decrementers_next_tb); 837 837 838 838 *next_tb = get_tb_or_rtc(); 839 839 __timer_interrupt(); ··· 989 989 990 990 tm->tm_wday = day % 7; 991 991 } 992 + EXPORT_SYMBOL_GPL(GregorianDay); 992 993 993 994 void to_tm(int tim, struct rtc_time * tm) 994 995 {
+4 -4
arch/powerpc/kernel/traps.c
··· 295 295 { 296 296 long handled = 0; 297 297 298 - __get_cpu_var(irq_stat).mce_exceptions++; 298 + __this_cpu_inc(irq_stat.mce_exceptions); 299 299 300 300 if (cur_cpu_spec && cur_cpu_spec->machine_check_early) 301 301 handled = cur_cpu_spec->machine_check_early(regs); ··· 304 304 305 305 long hmi_exception_realmode(struct pt_regs *regs) 306 306 { 307 - __get_cpu_var(irq_stat).hmi_exceptions++; 307 + __this_cpu_inc(irq_stat.hmi_exceptions); 308 308 309 309 if (ppc_md.hmi_exception_early) 310 310 ppc_md.hmi_exception_early(regs); ··· 700 700 enum ctx_state prev_state = exception_enter(); 701 701 int recover = 0; 702 702 703 - __get_cpu_var(irq_stat).mce_exceptions++; 703 + __this_cpu_inc(irq_stat.mce_exceptions); 704 704 705 705 /* See if any machine dependent calls. In theory, we would want 706 706 * to call the CPU first, and call the ppc_md. one if the CPU ··· 1519 1519 1520 1520 void performance_monitor_exception(struct pt_regs *regs) 1521 1521 { 1522 - __get_cpu_var(irq_stat).pmu_irqs++; 1522 + __this_cpu_inc(irq_stat.pmu_irqs); 1523 1523 1524 1524 perf_irq(regs); 1525 1525 }
+5 -1
arch/powerpc/kernel/udbg_16550.c
··· 69 69 70 70 static int udbg_uart_getc_poll(void) 71 71 { 72 - if (!udbg_uart_in || !(udbg_uart_in(UART_LSR) & LSR_DR)) 72 + if (!udbg_uart_in) 73 + return -1; 74 + 75 + if (!(udbg_uart_in(UART_LSR) & LSR_DR)) 73 76 return udbg_uart_in(UART_RBR); 77 + 74 78 return -1; 75 79 } 76 80
-1
arch/powerpc/kernel/vdso.c
··· 20 20 #include <linux/user.h> 21 21 #include <linux/elf.h> 22 22 #include <linux/security.h> 23 - #include <linux/bootmem.h> 24 23 #include <linux/memblock.h> 25 24 26 25 #include <asm/pgtable.h>
+1 -2
arch/powerpc/kvm/book3s_hv_builtin.c
··· 12 12 #include <linux/export.h> 13 13 #include <linux/sched.h> 14 14 #include <linux/spinlock.h> 15 - #include <linux/bootmem.h> 16 15 #include <linux/init.h> 17 16 #include <linux/memblock.h> 18 17 #include <linux/sizes.h> ··· 153 154 * kvm_cma_reserve() - reserve area for kvm hash pagetable 154 155 * 155 156 * This function reserves memory from early allocator. It should be 156 - * called by arch specific code once the early allocator (memblock or bootmem) 157 + * called by arch specific code once the memblock allocator 157 158 * has been activated and all other subsystems have already allocated/reserved 158 159 * memory. 159 160 */
+37 -17
arch/powerpc/kvm/book3s_hv_rmhandlers.S
··· 201 201 bge kvm_novcpu_exit /* another thread already exiting */ 202 202 li r3, NAPPING_NOVCPU 203 203 stb r3, HSTATE_NAPPING(r13) 204 - li r3, 1 205 - stb r3, HSTATE_HWTHREAD_REQ(r13) 206 204 207 205 b kvm_do_nap 208 206 ··· 291 293 /* if we have no vcpu to run, go back to sleep */ 292 294 beq kvm_no_guest 293 295 296 + kvm_secondary_got_guest: 297 + 294 298 /* Set HSTATE_DSCR(r13) to something sensible */ 295 299 ld r6, PACA_DSCR(r13) 296 300 std r6, HSTATE_DSCR(r13) ··· 318 318 stwcx. r3, 0, r4 319 319 bne 51b 320 320 321 + /* 322 + * At this point we have finished executing in the guest. 323 + * We need to wait for hwthread_req to become zero, since 324 + * we may not turn on the MMU while hwthread_req is non-zero. 325 + * While waiting we also need to check if we get given a vcpu to run. 326 + */ 321 327 kvm_no_guest: 322 - li r0, KVM_HWTHREAD_IN_NAP 328 + lbz r3, HSTATE_HWTHREAD_REQ(r13) 329 + cmpwi r3, 0 330 + bne 53f 331 + HMT_MEDIUM 332 + li r0, KVM_HWTHREAD_IN_KERNEL 323 333 stb r0, HSTATE_HWTHREAD_STATE(r13) 324 - kvm_do_nap: 325 - /* Clear the runlatch bit before napping */ 326 - mfspr r2, SPRN_CTRLF 327 - clrrdi r2, r2, 1 328 - mtspr SPRN_CTRLT, r2 329 - 334 + /* need to recheck hwthread_req after a barrier, to avoid race */ 335 + sync 336 + lbz r3, HSTATE_HWTHREAD_REQ(r13) 337 + cmpwi r3, 0 338 + bne 54f 339 + /* 340 + * We jump to power7_wakeup_loss, which will return to the caller 341 + * of power7_nap in the powernv cpu offline loop. The value we 342 + * put in r3 becomes the return value for power7_nap. 343 + */ 330 344 li r3, LPCR_PECE0 331 345 mfspr r4, SPRN_LPCR 332 346 rlwimi r4, r3, 0, LPCR_PECE0 | LPCR_PECE1 333 347 mtspr SPRN_LPCR, r4 334 - isync 335 - std r0, HSTATE_SCRATCH0(r13) 336 - ptesync 337 - ld r0, HSTATE_SCRATCH0(r13) 338 - 1: cmpd r0, r0 339 - bne 1b 340 - nap 341 - b . 
348 + li r3, 0 349 + b power7_wakeup_loss 350 + 351 + 53: HMT_LOW 352 + ld r4, HSTATE_KVM_VCPU(r13) 353 + cmpdi r4, 0 354 + beq kvm_no_guest 355 + HMT_MEDIUM 356 + b kvm_secondary_got_guest 357 + 358 + 54: li r0, KVM_HWTHREAD_IN_KVM 359 + stb r0, HSTATE_HWTHREAD_STATE(r13) 360 + b kvm_no_guest 342 361 343 362 /****************************************************************************** 344 363 * * ··· 2191 2172 * occurs, with PECE1, PECE0 and PECEDP set in LPCR. Also clear the 2192 2173 * runlatch bit before napping. 2193 2174 */ 2175 + kvm_do_nap: 2194 2176 mfspr r2, SPRN_CTRLF 2195 2177 clrrdi r2, r2, 1 2196 2178 mtspr SPRN_CTRLT, r2
+7 -7
arch/powerpc/kvm/e500.c
··· 76 76 unsigned long sid; 77 77 int ret = -1; 78 78 79 - sid = ++(__get_cpu_var(pcpu_last_used_sid)); 79 + sid = __this_cpu_inc_return(pcpu_last_used_sid); 80 80 if (sid < NUM_TIDS) { 81 - __get_cpu_var(pcpu_sids).entry[sid] = entry; 81 + __this_cpu_write(pcpu_sids.entry[sid], entry); 82 82 entry->val = sid; 83 - entry->pentry = &__get_cpu_var(pcpu_sids).entry[sid]; 83 + entry->pentry = this_cpu_ptr(&pcpu_sids.entry[sid]); 84 84 ret = sid; 85 85 } 86 86 ··· 108 108 static inline int local_sid_lookup(struct id *entry) 109 109 { 110 110 if (entry && entry->val != 0 && 111 - __get_cpu_var(pcpu_sids).entry[entry->val] == entry && 112 - entry->pentry == &__get_cpu_var(pcpu_sids).entry[entry->val]) 111 + __this_cpu_read(pcpu_sids.entry[entry->val]) == entry && 112 + entry->pentry == this_cpu_ptr(&pcpu_sids.entry[entry->val])) 113 113 return entry->val; 114 114 return -1; 115 115 } ··· 117 117 /* Invalidate all id mappings on local core -- call with preempt disabled */ 118 118 static inline void local_sid_destroy_all(void) 119 119 { 120 - __get_cpu_var(pcpu_last_used_sid) = 0; 121 - memset(&__get_cpu_var(pcpu_sids), 0, sizeof(__get_cpu_var(pcpu_sids))); 120 + __this_cpu_write(pcpu_last_used_sid, 0); 121 + memset(this_cpu_ptr(&pcpu_sids), 0, sizeof(pcpu_sids)); 122 122 } 123 123 124 124 static void *kvmppc_e500_id_table_alloc(struct kvmppc_vcpu_e500 *vcpu_e500)
+2 -2
arch/powerpc/kvm/e500mc.c
··· 144 144 mtspr(SPRN_GESR, vcpu->arch.shared->esr); 145 145 146 146 if (vcpu->arch.oldpir != mfspr(SPRN_PIR) || 147 - __get_cpu_var(last_vcpu_of_lpid)[get_lpid(vcpu)] != vcpu) { 147 + __this_cpu_read(last_vcpu_of_lpid[get_lpid(vcpu)]) != vcpu) { 148 148 kvmppc_e500_tlbil_all(vcpu_e500); 149 - __get_cpu_var(last_vcpu_of_lpid)[get_lpid(vcpu)] = vcpu; 149 + __this_cpu_write(last_vcpu_of_lpid[get_lpid(vcpu)], vcpu); 150 150 } 151 151 } 152 152
-1
arch/powerpc/lib/Makefile
··· 12 12 obj-y := string.o alloc.o \ 13 13 crtsavres.o ppc_ksyms.o 14 14 obj-$(CONFIG_PPC32) += div64.o copy_32.o 15 - obj-$(CONFIG_HAS_IOMEM) += devres.o 16 15 17 16 obj-$(CONFIG_PPC64) += copypage_64.o copyuser_64.o \ 18 17 usercopy_64.o mem_64.o string.o \
+1 -3
arch/powerpc/lib/alloc.c
··· 13 13 if (mem_init_done) 14 14 p = kzalloc(size, mask); 15 15 else { 16 - p = alloc_bootmem(size); 17 - if (p) 18 - memset(p, 0, size); 16 + p = memblock_virt_alloc(size, 0); 19 17 } 20 18 return p; 21 19 }
-43
arch/powerpc/lib/devres.c
··· 1 - /* 2 - * Copyright (C) 2008 Freescale Semiconductor, Inc. 3 - * 4 - * This program is free software; you can redistribute it and/or 5 - * modify it under the terms of the GNU General Public License 6 - * as published by the Free Software Foundation; either version 7 - * 2 of the License, or (at your option) any later version. 8 - */ 9 - 10 - #include <linux/device.h> /* devres_*(), devm_ioremap_release() */ 11 - #include <linux/gfp.h> 12 - #include <linux/io.h> /* ioremap_prot() */ 13 - #include <linux/export.h> /* EXPORT_SYMBOL() */ 14 - 15 - /** 16 - * devm_ioremap_prot - Managed ioremap_prot() 17 - * @dev: Generic device to remap IO address for 18 - * @offset: BUS offset to map 19 - * @size: Size of map 20 - * @flags: Page flags 21 - * 22 - * Managed ioremap_prot(). Map is automatically unmapped on driver 23 - * detach. 24 - */ 25 - void __iomem *devm_ioremap_prot(struct device *dev, resource_size_t offset, 26 - size_t size, unsigned long flags) 27 - { 28 - void __iomem **ptr, *addr; 29 - 30 - ptr = devres_alloc(devm_ioremap_release, sizeof(*ptr), GFP_KERNEL); 31 - if (!ptr) 32 - return NULL; 33 - 34 - addr = ioremap_prot(offset, size, flags); 35 - if (addr) { 36 - *ptr = addr; 37 - devres_add(dev, ptr); 38 - } else 39 - devres_free(ptr); 40 - 41 - return addr; 42 - } 43 - EXPORT_SYMBOL(devm_ioremap_prot);
+4 -2
arch/powerpc/lib/sstep.c
··· 1865 1865 } 1866 1866 goto ldst_done; 1867 1867 1868 + #ifdef CONFIG_PPC_FPU 1868 1869 case LOAD_FP: 1869 1870 if (regs->msr & MSR_LE) 1870 1871 return 0; ··· 1874 1873 else 1875 1874 err = do_fp_load(op.reg, do_lfd, op.ea, size, regs); 1876 1875 goto ldst_done; 1877 - 1876 + #endif 1878 1877 #ifdef CONFIG_ALTIVEC 1879 1878 case LOAD_VMX: 1880 1879 if (regs->msr & MSR_LE) ··· 1920 1919 err = write_mem(op.val, op.ea, size, regs); 1921 1920 goto ldst_done; 1922 1921 1922 + #ifdef CONFIG_PPC_FPU 1923 1923 case STORE_FP: 1924 1924 if (regs->msr & MSR_LE) 1925 1925 return 0; ··· 1929 1927 else 1930 1928 err = do_fp_store(op.reg, do_stfd, op.ea, size, regs); 1931 1929 goto ldst_done; 1932 - 1930 + #endif 1933 1931 #ifdef CONFIG_ALTIVEC 1934 1932 case STORE_VMX: 1935 1933 if (regs->msr & MSR_LE)
+1 -1
arch/powerpc/mm/Makefile
··· 6 6 7 7 ccflags-$(CONFIG_PPC64) := $(NO_MINIMAL_TOC) 8 8 9 - obj-y := fault.o mem.o pgtable.o gup.o mmap.o \ 9 + obj-y := fault.o mem.o pgtable.o mmap.o \ 10 10 init_$(CONFIG_WORD_SIZE).o \ 11 11 pgtable_$(CONFIG_WORD_SIZE).o 12 12 obj-$(CONFIG_PPC_MMU_NOHASH) += mmu_context_nohash.o tlb_nohash.o \
-7
arch/powerpc/mm/fault.c
··· 43 43 #include <asm/tlbflush.h> 44 44 #include <asm/siginfo.h> 45 45 #include <asm/debug.h> 46 - #include <mm/mmu_decl.h> 47 46 48 47 #include "icswx.h" 49 48 ··· 379 380 goto bad_area; 380 381 #endif /* CONFIG_6xx */ 381 382 #if defined(CONFIG_8xx) 382 - /* 8xx sometimes need to load a invalid/non-present TLBs. 383 - * These must be invalidated separately as linux mm don't. 384 - */ 385 - if (error_code & 0x40000000) /* no translation? */ 386 - _tlbil_va(address, 0, 0, 0); 387 - 388 383 /* The MPC8xx seems to always set 0x80000000, which is 389 384 * "undefined". Of those that can be set, this is the only 390 385 * one which seems bad.
-235
arch/powerpc/mm/gup.c
··· 1 - /* 2 - * Lockless get_user_pages_fast for powerpc 3 - * 4 - * Copyright (C) 2008 Nick Piggin 5 - * Copyright (C) 2008 Novell Inc. 6 - */ 7 - #undef DEBUG 8 - 9 - #include <linux/sched.h> 10 - #include <linux/mm.h> 11 - #include <linux/hugetlb.h> 12 - #include <linux/vmstat.h> 13 - #include <linux/pagemap.h> 14 - #include <linux/rwsem.h> 15 - #include <asm/pgtable.h> 16 - 17 - #ifdef __HAVE_ARCH_PTE_SPECIAL 18 - 19 - /* 20 - * The performance critical leaf functions are made noinline otherwise gcc 21 - * inlines everything into a single function which results in too much 22 - * register pressure. 23 - */ 24 - static noinline int gup_pte_range(pmd_t pmd, unsigned long addr, 25 - unsigned long end, int write, struct page **pages, int *nr) 26 - { 27 - unsigned long mask, result; 28 - pte_t *ptep; 29 - 30 - result = _PAGE_PRESENT|_PAGE_USER; 31 - if (write) 32 - result |= _PAGE_RW; 33 - mask = result | _PAGE_SPECIAL; 34 - 35 - ptep = pte_offset_kernel(&pmd, addr); 36 - do { 37 - pte_t pte = ACCESS_ONCE(*ptep); 38 - struct page *page; 39 - /* 40 - * Similar to the PMD case, NUMA hinting must take slow path 41 - */ 42 - if (pte_numa(pte)) 43 - return 0; 44 - 45 - if ((pte_val(pte) & mask) != result) 46 - return 0; 47 - VM_BUG_ON(!pfn_valid(pte_pfn(pte))); 48 - page = pte_page(pte); 49 - if (!page_cache_get_speculative(page)) 50 - return 0; 51 - if (unlikely(pte_val(pte) != pte_val(*ptep))) { 52 - put_page(page); 53 - return 0; 54 - } 55 - pages[*nr] = page; 56 - (*nr)++; 57 - 58 - } while (ptep++, addr += PAGE_SIZE, addr != end); 59 - 60 - return 1; 61 - } 62 - 63 - static int gup_pmd_range(pud_t pud, unsigned long addr, unsigned long end, 64 - int write, struct page **pages, int *nr) 65 - { 66 - unsigned long next; 67 - pmd_t *pmdp; 68 - 69 - pmdp = pmd_offset(&pud, addr); 70 - do { 71 - pmd_t pmd = ACCESS_ONCE(*pmdp); 72 - 73 - next = pmd_addr_end(addr, end); 74 - /* 75 - * If we find a splitting transparent hugepage we 76 - * return zero. 
That will result in taking the slow 77 - * path which will call wait_split_huge_page() 78 - * if the pmd is still in splitting state 79 - */ 80 - if (pmd_none(pmd) || pmd_trans_splitting(pmd)) 81 - return 0; 82 - if (pmd_huge(pmd) || pmd_large(pmd)) { 83 - /* 84 - * NUMA hinting faults need to be handled in the GUP 85 - * slowpath for accounting purposes and so that they 86 - * can be serialised against THP migration. 87 - */ 88 - if (pmd_numa(pmd)) 89 - return 0; 90 - 91 - if (!gup_hugepte((pte_t *)pmdp, PMD_SIZE, addr, next, 92 - write, pages, nr)) 93 - return 0; 94 - } else if (is_hugepd(pmdp)) { 95 - if (!gup_hugepd((hugepd_t *)pmdp, PMD_SHIFT, 96 - addr, next, write, pages, nr)) 97 - return 0; 98 - } else if (!gup_pte_range(pmd, addr, next, write, pages, nr)) 99 - return 0; 100 - } while (pmdp++, addr = next, addr != end); 101 - 102 - return 1; 103 - } 104 - 105 - static int gup_pud_range(pgd_t pgd, unsigned long addr, unsigned long end, 106 - int write, struct page **pages, int *nr) 107 - { 108 - unsigned long next; 109 - pud_t *pudp; 110 - 111 - pudp = pud_offset(&pgd, addr); 112 - do { 113 - pud_t pud = ACCESS_ONCE(*pudp); 114 - 115 - next = pud_addr_end(addr, end); 116 - if (pud_none(pud)) 117 - return 0; 118 - if (pud_huge(pud)) { 119 - if (!gup_hugepte((pte_t *)pudp, PUD_SIZE, addr, next, 120 - write, pages, nr)) 121 - return 0; 122 - } else if (is_hugepd(pudp)) { 123 - if (!gup_hugepd((hugepd_t *)pudp, PUD_SHIFT, 124 - addr, next, write, pages, nr)) 125 - return 0; 126 - } else if (!gup_pmd_range(pud, addr, next, write, pages, nr)) 127 - return 0; 128 - } while (pudp++, addr = next, addr != end); 129 - 130 - return 1; 131 - } 132 - 133 - int __get_user_pages_fast(unsigned long start, int nr_pages, int write, 134 - struct page **pages) 135 - { 136 - struct mm_struct *mm = current->mm; 137 - unsigned long addr, len, end; 138 - unsigned long next; 139 - unsigned long flags; 140 - pgd_t *pgdp; 141 - int nr = 0; 142 - 143 - pr_devel("%s(%lx,%x,%s)\n", 
__func__, start, nr_pages, write ? "write" : "read"); 144 - 145 - start &= PAGE_MASK; 146 - addr = start; 147 - len = (unsigned long) nr_pages << PAGE_SHIFT; 148 - end = start + len; 149 - 150 - if (unlikely(!access_ok(write ? VERIFY_WRITE : VERIFY_READ, 151 - start, len))) 152 - return 0; 153 - 154 - pr_devel(" aligned: %lx .. %lx\n", start, end); 155 - 156 - /* 157 - * XXX: batch / limit 'nr', to avoid large irq off latency 158 - * needs some instrumenting to determine the common sizes used by 159 - * important workloads (eg. DB2), and whether limiting the batch size 160 - * will decrease performance. 161 - * 162 - * It seems like we're in the clear for the moment. Direct-IO is 163 - * the main guy that batches up lots of get_user_pages, and even 164 - * they are limited to 64-at-a-time which is not so many. 165 - */ 166 - /* 167 - * This doesn't prevent pagetable teardown, but does prevent 168 - * the pagetables from being freed on powerpc. 169 - * 170 - * So long as we atomically load page table pointers versus teardown, 171 - * we can follow the address down to the the page and take a ref on it. 
172 - */ 173 - local_irq_save(flags); 174 - 175 - pgdp = pgd_offset(mm, addr); 176 - do { 177 - pgd_t pgd = ACCESS_ONCE(*pgdp); 178 - 179 - pr_devel(" %016lx: normal pgd %p\n", addr, 180 - (void *)pgd_val(pgd)); 181 - next = pgd_addr_end(addr, end); 182 - if (pgd_none(pgd)) 183 - break; 184 - if (pgd_huge(pgd)) { 185 - if (!gup_hugepte((pte_t *)pgdp, PGDIR_SIZE, addr, next, 186 - write, pages, &nr)) 187 - break; 188 - } else if (is_hugepd(pgdp)) { 189 - if (!gup_hugepd((hugepd_t *)pgdp, PGDIR_SHIFT, 190 - addr, next, write, pages, &nr)) 191 - break; 192 - } else if (!gup_pud_range(pgd, addr, next, write, pages, &nr)) 193 - break; 194 - } while (pgdp++, addr = next, addr != end); 195 - 196 - local_irq_restore(flags); 197 - 198 - return nr; 199 - } 200 - 201 - int get_user_pages_fast(unsigned long start, int nr_pages, int write, 202 - struct page **pages) 203 - { 204 - struct mm_struct *mm = current->mm; 205 - int nr, ret; 206 - 207 - start &= PAGE_MASK; 208 - nr = __get_user_pages_fast(start, nr_pages, write, pages); 209 - ret = nr; 210 - 211 - if (nr < nr_pages) { 212 - pr_devel(" slow path ! nr = %d\n", nr); 213 - 214 - /* Try to get the remaining pages with get_user_pages */ 215 - start += nr << PAGE_SHIFT; 216 - pages += nr; 217 - 218 - down_read(&mm->mmap_sem); 219 - ret = get_user_pages(current, mm, start, 220 - nr_pages - nr, write, 0, pages, NULL); 221 - up_read(&mm->mmap_sem); 222 - 223 - /* Have to be a bit careful with return values */ 224 - if (nr > 0) { 225 - if (ret < 0) 226 - ret = nr; 227 - else 228 - ret += nr; 229 - } 230 - } 231 - 232 - return ret; 233 - } 234 - 235 - #endif /* __HAVE_ARCH_PTE_SPECIAL */
+10 -9
arch/powerpc/mm/hash_low_64.S
··· 46 46 47 47 /* 48 48 * _hash_page_4K(unsigned long ea, unsigned long access, unsigned long vsid, 49 - * pte_t *ptep, unsigned long trap, int local, int ssize) 49 + * pte_t *ptep, unsigned long trap, unsigned long flags, 50 + * int ssize) 50 51 * 51 52 * Adds a 4K page to the hash table in a segment of 4K pages only 52 53 */ ··· 299 298 li r6,MMU_PAGE_4K /* base page size */ 300 299 li r7,MMU_PAGE_4K /* actual page size */ 301 300 ld r8,STK_PARAM(R9)(r1) /* segment size */ 302 - ld r9,STK_PARAM(R8)(r1) /* get "local" param */ 301 + ld r9,STK_PARAM(R8)(r1) /* get "flags" param */ 303 302 .globl htab_call_hpte_updatepp 304 303 htab_call_hpte_updatepp: 305 304 bl . /* Patched by htab_finish_init() */ ··· 339 338 *****************************************************************************/ 340 339 341 340 /* _hash_page_4K(unsigned long ea, unsigned long access, unsigned long vsid, 342 - * pte_t *ptep, unsigned long trap, int local, int ssize, 343 - * int subpg_prot) 341 + * pte_t *ptep, unsigned long trap, unsigned long flags, 342 + * int ssize, int subpg_prot) 344 343 */ 345 344 346 345 /* ··· 515 514 andis. r0,r31,_PAGE_4K_PFN@h 516 515 srdi r5,r31,PTE_RPN_SHIFT 517 516 bne- htab_special_pfn 518 - sldi r5,r5,PAGE_SHIFT-HW_PAGE_SHIFT 517 + sldi r5,r5,PAGE_FACTOR 519 518 add r5,r5,r25 520 519 htab_special_pfn: 521 520 sldi r5,r5,HW_PAGE_SHIFT ··· 545 544 andis. 
r0,r31,_PAGE_4K_PFN@h 546 545 srdi r5,r31,PTE_RPN_SHIFT 547 546 bne- 3f 548 - sldi r5,r5,PAGE_SHIFT-HW_PAGE_SHIFT 547 + sldi r5,r5,PAGE_FACTOR 549 548 add r5,r5,r25 550 549 3: sldi r5,r5,HW_PAGE_SHIFT 551 550 ··· 595 594 li r5,0 /* PTE.hidx */ 596 595 li r6,MMU_PAGE_64K /* psize */ 597 596 ld r7,STK_PARAM(R9)(r1) /* ssize */ 598 - ld r8,STK_PARAM(R8)(r1) /* local */ 597 + ld r8,STK_PARAM(R8)(r1) /* flags */ 599 598 bl flush_hash_page 600 599 /* Clear out _PAGE_HPTE_SUB bits in the new linux PTE */ 601 600 lis r0,_PAGE_HPTE_SUB@h ··· 667 666 li r6,MMU_PAGE_4K /* base page size */ 668 667 li r7,MMU_PAGE_4K /* actual page size */ 669 668 ld r8,STK_PARAM(R9)(r1) /* segment size */ 670 - ld r9,STK_PARAM(R8)(r1) /* get "local" param */ 669 + ld r9,STK_PARAM(R8)(r1) /* get "flags" param */ 671 670 .globl htab_call_hpte_updatepp 672 671 htab_call_hpte_updatepp: 673 672 bl . /* patched by htab_finish_init() */ ··· 963 962 li r6,MMU_PAGE_64K /* base page size */ 964 963 li r7,MMU_PAGE_64K /* actual page size */ 965 964 ld r8,STK_PARAM(R9)(r1) /* segment size */ 966 - ld r9,STK_PARAM(R8)(r1) /* get "local" param */ 965 + ld r9,STK_PARAM(R8)(r1) /* get "flags" param */ 967 966 .globl ht64_call_hpte_updatepp 968 967 ht64_call_hpte_updatepp: 969 968 bl . /* patched by htab_finish_init() */
+27 -14
arch/powerpc/mm/hash_native_64.c
··· 283 283 284 284 static long native_hpte_updatepp(unsigned long slot, unsigned long newpp, 285 285 unsigned long vpn, int bpsize, 286 - int apsize, int ssize, int local) 286 + int apsize, int ssize, unsigned long flags) 287 287 { 288 288 struct hash_pte *hptep = htab_address + slot; 289 289 unsigned long hpte_v, want_v; 290 - int ret = 0; 290 + int ret = 0, local = 0; 291 291 292 292 want_v = hpte_encode_avpn(vpn, bpsize, ssize); 293 293 294 294 DBG_LOW(" update(vpn=%016lx, avpnv=%016lx, group=%lx, newpp=%lx)", 295 295 vpn, want_v & HPTE_V_AVPN, slot, newpp); 296 - 297 - native_lock_hpte(hptep); 298 296 299 297 hpte_v = be64_to_cpu(hptep->v); 300 298 /* ··· 306 308 DBG_LOW(" -> miss\n"); 307 309 ret = -1; 308 310 } else { 309 - DBG_LOW(" -> hit\n"); 310 - /* Update the HPTE */ 311 - hptep->r = cpu_to_be64((be64_to_cpu(hptep->r) & ~(HPTE_R_PP | HPTE_R_N)) | 312 - (newpp & (HPTE_R_PP | HPTE_R_N | HPTE_R_C))); 311 + native_lock_hpte(hptep); 312 + /* recheck with locks held */ 313 + hpte_v = be64_to_cpu(hptep->v); 314 + if (unlikely(!HPTE_V_COMPARE(hpte_v, want_v) || 315 + !(hpte_v & HPTE_V_VALID))) { 316 + ret = -1; 317 + } else { 318 + DBG_LOW(" -> hit\n"); 319 + /* Update the HPTE */ 320 + hptep->r = cpu_to_be64((be64_to_cpu(hptep->r) & 321 + ~(HPTE_R_PP | HPTE_R_N)) | 322 + (newpp & (HPTE_R_PP | HPTE_R_N | 323 + HPTE_R_C))); 324 + } 325 + native_unlock_hpte(hptep); 313 326 } 314 - native_unlock_hpte(hptep); 315 327 316 - /* Ensure it is out of the tlb too. 
*/ 317 - tlbie(vpn, bpsize, apsize, ssize, local); 328 + if (flags & HPTE_LOCAL_UPDATE) 329 + local = 1; 330 + /* 331 + * Ensure it is out of the tlb too if it is not a nohpte fault 332 + */ 333 + if (!(flags & HPTE_NOHPTE_UPDATE)) 334 + tlbie(vpn, bpsize, apsize, ssize, local); 318 335 319 336 return ret; 320 337 } ··· 432 419 static void native_hugepage_invalidate(unsigned long vsid, 433 420 unsigned long addr, 434 421 unsigned char *hpte_slot_array, 435 - int psize, int ssize) 422 + int psize, int ssize, int local) 436 423 { 437 424 int i; 438 425 struct hash_pte *hptep; ··· 478 465 * instruction compares entry_VA in tlb with the VA specified 479 466 * here 480 467 */ 481 - tlbie(vpn, psize, actual_psize, ssize, 0); 468 + tlbie(vpn, psize, actual_psize, ssize, local); 482 469 } 483 470 local_irq_restore(flags); 484 471 } ··· 642 629 unsigned long want_v; 643 630 unsigned long flags; 644 631 real_pte_t pte; 645 - struct ppc64_tlb_batch *batch = &__get_cpu_var(ppc64_tlb_batch); 632 + struct ppc64_tlb_batch *batch = this_cpu_ptr(&ppc64_tlb_batch); 646 633 unsigned long psize = batch->psize; 647 634 int ssize = batch->ssize; 648 635 int i;
+98 -16
arch/powerpc/mm/hash_utils_64.c
··· 989 989 * -1 - critical hash insertion error 990 990 * -2 - access not permitted by subpage protection mechanism 991 991 */ 992 - int hash_page_mm(struct mm_struct *mm, unsigned long ea, unsigned long access, unsigned long trap) 992 + int hash_page_mm(struct mm_struct *mm, unsigned long ea, 993 + unsigned long access, unsigned long trap, 994 + unsigned long flags) 993 995 { 994 996 enum ctx_state prev_state = exception_enter(); 995 997 pgd_t *pgdir; ··· 999 997 pte_t *ptep; 1000 998 unsigned hugeshift; 1001 999 const struct cpumask *tmp; 1002 - int rc, user_region = 0, local = 0; 1000 + int rc, user_region = 0; 1003 1001 int psize, ssize; 1004 1002 1005 1003 DBG_LOW("hash_page(ea=%016lx, access=%lx, trap=%lx\n", ··· 1051 1049 /* Check CPU locality */ 1052 1050 tmp = cpumask_of(smp_processor_id()); 1053 1051 if (user_region && cpumask_equal(mm_cpumask(mm), tmp)) 1054 - local = 1; 1052 + flags |= HPTE_LOCAL_UPDATE; 1055 1053 1056 1054 #ifndef CONFIG_PPC_64K_PAGES 1057 1055 /* If we use 4K pages and our psize is not 4K, then we might ··· 1088 1086 if (hugeshift) { 1089 1087 if (pmd_trans_huge(*(pmd_t *)ptep)) 1090 1088 rc = __hash_page_thp(ea, access, vsid, (pmd_t *)ptep, 1091 - trap, local, ssize, psize); 1089 + trap, flags, ssize, psize); 1092 1090 #ifdef CONFIG_HUGETLB_PAGE 1093 1091 else 1094 1092 rc = __hash_page_huge(ea, access, vsid, ptep, trap, 1095 - local, ssize, hugeshift, psize); 1093 + flags, ssize, hugeshift, psize); 1096 1094 #else 1097 1095 else { 1098 1096 /* ··· 1151 1149 1152 1150 #ifdef CONFIG_PPC_HAS_HASH_64K 1153 1151 if (psize == MMU_PAGE_64K) 1154 - rc = __hash_page_64K(ea, access, vsid, ptep, trap, local, ssize); 1152 + rc = __hash_page_64K(ea, access, vsid, ptep, trap, 1153 + flags, ssize); 1155 1154 else 1156 1155 #endif /* CONFIG_PPC_HAS_HASH_64K */ 1157 1156 { ··· 1161 1158 rc = -2; 1162 1159 else 1163 1160 rc = __hash_page_4K(ea, access, vsid, ptep, trap, 1164 - local, ssize, spp); 1161 + flags, ssize, spp); 1165 1162 } 1166 1163 1167 
1164 /* Dump some info in case of hash insertion failure, they should ··· 1184 1181 } 1185 1182 EXPORT_SYMBOL_GPL(hash_page_mm); 1186 1183 1187 - int hash_page(unsigned long ea, unsigned long access, unsigned long trap) 1184 + int hash_page(unsigned long ea, unsigned long access, unsigned long trap, 1185 + unsigned long dsisr) 1188 1186 { 1187 + unsigned long flags = 0; 1189 1188 struct mm_struct *mm = current->mm; 1190 1189 1191 1190 if (REGION_ID(ea) == VMALLOC_REGION_ID) 1192 1191 mm = &init_mm; 1193 1192 1194 - return hash_page_mm(mm, ea, access, trap); 1193 + if (dsisr & DSISR_NOHPTE) 1194 + flags |= HPTE_NOHPTE_UPDATE; 1195 + 1196 + return hash_page_mm(mm, ea, access, trap, flags); 1195 1197 } 1196 1198 EXPORT_SYMBOL_GPL(hash_page); 1197 1199 ··· 1208 1200 pgd_t *pgdir; 1209 1201 pte_t *ptep; 1210 1202 unsigned long flags; 1211 - int rc, ssize, local = 0; 1203 + int rc, ssize, update_flags = 0; 1212 1204 1213 1205 BUG_ON(REGION_ID(ea) != USER_REGION_ID); 1214 1206 ··· 1259 1251 1260 1252 /* Is that local to this CPU ? */ 1261 1253 if (cpumask_equal(mm_cpumask(mm), cpumask_of(smp_processor_id()))) 1262 - local = 1; 1254 + update_flags |= HPTE_LOCAL_UPDATE; 1263 1255 1264 1256 /* Hash it in */ 1265 1257 #ifdef CONFIG_PPC_HAS_HASH_64K 1266 1258 if (mm->context.user_psize == MMU_PAGE_64K) 1267 - rc = __hash_page_64K(ea, access, vsid, ptep, trap, local, ssize); 1259 + rc = __hash_page_64K(ea, access, vsid, ptep, trap, 1260 + update_flags, ssize); 1268 1261 else 1269 1262 #endif /* CONFIG_PPC_HAS_HASH_64K */ 1270 - rc = __hash_page_4K(ea, access, vsid, ptep, trap, local, ssize, 1271 - subpage_protection(mm, ea)); 1263 + rc = __hash_page_4K(ea, access, vsid, ptep, trap, update_flags, 1264 + ssize, subpage_protection(mm, ea)); 1272 1265 1273 1266 /* Dump some info in case of hash insertion failure, they should 1274 1267 * never happen so it is really useful to know if/when they do ··· 1287 1278 * do not forget to update the assembly call site ! 
1288 1279 */ 1289 1280 void flush_hash_page(unsigned long vpn, real_pte_t pte, int psize, int ssize, 1290 - int local) 1281 + unsigned long flags) 1291 1282 { 1292 1283 unsigned long hash, index, shift, hidx, slot; 1284 + int local = flags & HPTE_LOCAL_UPDATE; 1293 1285 1294 1286 DBG_LOW("flush_hash_page(vpn=%016lx)\n", vpn); 1295 1287 pte_iterate_hashed_subpages(pte, psize, vpn, index, shift) { ··· 1325 1315 #endif 1326 1316 } 1327 1317 1318 + #ifdef CONFIG_TRANSPARENT_HUGEPAGE 1319 + void flush_hash_hugepage(unsigned long vsid, unsigned long addr, 1320 + pmd_t *pmdp, unsigned int psize, int ssize, 1321 + unsigned long flags) 1322 + { 1323 + int i, max_hpte_count, valid; 1324 + unsigned long s_addr; 1325 + unsigned char *hpte_slot_array; 1326 + unsigned long hidx, shift, vpn, hash, slot; 1327 + int local = flags & HPTE_LOCAL_UPDATE; 1328 + 1329 + s_addr = addr & HPAGE_PMD_MASK; 1330 + hpte_slot_array = get_hpte_slot_array(pmdp); 1331 + /* 1332 + * IF we try to do a HUGE PTE update after a withdraw is done. 1333 + * we will find the below NULL. 
This happens when we do 1334 + * split_huge_page_pmd 1335 + */ 1336 + if (!hpte_slot_array) 1337 + return; 1338 + 1339 + if (ppc_md.hugepage_invalidate) { 1340 + ppc_md.hugepage_invalidate(vsid, s_addr, hpte_slot_array, 1341 + psize, ssize, local); 1342 + goto tm_abort; 1343 + } 1344 + /* 1345 + * No bluk hpte removal support, invalidate each entry 1346 + */ 1347 + shift = mmu_psize_defs[psize].shift; 1348 + max_hpte_count = HPAGE_PMD_SIZE >> shift; 1349 + for (i = 0; i < max_hpte_count; i++) { 1350 + /* 1351 + * 8 bits per each hpte entries 1352 + * 000| [ secondary group (one bit) | hidx (3 bits) | valid bit] 1353 + */ 1354 + valid = hpte_valid(hpte_slot_array, i); 1355 + if (!valid) 1356 + continue; 1357 + hidx = hpte_hash_index(hpte_slot_array, i); 1358 + 1359 + /* get the vpn */ 1360 + addr = s_addr + (i * (1ul << shift)); 1361 + vpn = hpt_vpn(addr, vsid, ssize); 1362 + hash = hpt_hash(vpn, shift, ssize); 1363 + if (hidx & _PTEIDX_SECONDARY) 1364 + hash = ~hash; 1365 + 1366 + slot = (hash & htab_hash_mask) * HPTES_PER_GROUP; 1367 + slot += hidx & _PTEIDX_GROUP_IX; 1368 + ppc_md.hpte_invalidate(slot, vpn, psize, 1369 + MMU_PAGE_16M, ssize, local); 1370 + } 1371 + tm_abort: 1372 + #ifdef CONFIG_PPC_TRANSACTIONAL_MEM 1373 + /* Transactions are not aborted by tlbiel, only tlbie. 1374 + * Without, syncing a page back to a block device w/ PIO could pick up 1375 + * transactional data (bad!) so we force an abort here. Before the 1376 + * sync the page will be made read-only, which will flush_hash_page. 1377 + * BIG ISSUE here: if the kernel uses a page from userspace without 1378 + * unmapping it first, it may see the speculated version. 
1379 + */ 1380 + if (local && cpu_has_feature(CPU_FTR_TM) && 1381 + current->thread.regs && 1382 + MSR_TM_ACTIVE(current->thread.regs->msr)) { 1383 + tm_enable(); 1384 + tm_abort(TM_CAUSE_TLBI); 1385 + } 1386 + #endif 1387 + } 1388 + #endif /* CONFIG_TRANSPARENT_HUGEPAGE */ 1389 + 1328 1390 void flush_hash_range(unsigned long number, int local) 1329 1391 { 1330 1392 if (ppc_md.flush_hash_range) ··· 1404 1322 else { 1405 1323 int i; 1406 1324 struct ppc64_tlb_batch *batch = 1407 - &__get_cpu_var(ppc64_tlb_batch); 1325 + this_cpu_ptr(&ppc64_tlb_batch); 1408 1326 1409 1327 for (i = 0; i < number; i++) 1410 1328 flush_hash_page(batch->vpn[i], batch->pte[i],
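The hash_utils.c hunks above replace a bare `int local` with an `unsigned long flags` bitmask, so that locality (HPTE_LOCAL_UPDATE) and the no-HPTE hint from DSISR (HPTE_NOHPTE_UPDATE) travel through the fault path in one parameter. A minimal sketch of that pattern follows; the flag values and helper names are illustrative, not the kernel's actual definitions:

```c
#include <assert.h>

/* Illustrative flag values only -- the real definitions live in the
 * powerpc MMU headers, not here. */
#define HPTE_LOCAL_UPDATE	0x1UL	/* invalidation may stay CPU-local */
#define HPTE_NOHPTE_UPDATE	0x2UL	/* fault reported no existing HPTE */

/* Caller-side pattern, as in hash_page(): translate a status-register
 * bit into a flag, then OR in locality once it is known. */
static unsigned long build_flags(int dsisr_nohpte, int mm_is_local)
{
	unsigned long flags = 0;

	if (dsisr_nohpte)
		flags |= HPTE_NOHPTE_UPDATE;
	if (mm_is_local)
		flags |= HPTE_LOCAL_UPDATE;
	return flags;
}

/* Callee-side pattern, as in flush_hash_page(), which now extracts
 * "int local = flags & HPTE_LOCAL_UPDATE;". */
static int use_local_invalidate(unsigned long flags)
{
	return (flags & HPTE_LOCAL_UPDATE) != 0;
}
```

The advantage over the old `int local` is that new per-fault state (here the no-HPTE hint) rides along without another parameter at every call site.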
+5 -55
arch/powerpc/mm/hugepage-hash64.c
··· 18 18 #include <linux/mm.h> 19 19 #include <asm/machdep.h> 20 20 21 - static void invalidate_old_hpte(unsigned long vsid, unsigned long addr, 22 - pmd_t *pmdp, unsigned int psize, int ssize) 23 - { 24 - int i, max_hpte_count, valid; 25 - unsigned long s_addr; 26 - unsigned char *hpte_slot_array; 27 - unsigned long hidx, shift, vpn, hash, slot; 28 - 29 - s_addr = addr & HPAGE_PMD_MASK; 30 - hpte_slot_array = get_hpte_slot_array(pmdp); 31 - /* 32 - * IF we try to do a HUGE PTE update after a withdraw is done. 33 - * we will find the below NULL. This happens when we do 34 - * split_huge_page_pmd 35 - */ 36 - if (!hpte_slot_array) 37 - return; 38 - 39 - if (ppc_md.hugepage_invalidate) 40 - return ppc_md.hugepage_invalidate(vsid, s_addr, hpte_slot_array, 41 - psize, ssize); 42 - /* 43 - * No bluk hpte removal support, invalidate each entry 44 - */ 45 - shift = mmu_psize_defs[psize].shift; 46 - max_hpte_count = HPAGE_PMD_SIZE >> shift; 47 - for (i = 0; i < max_hpte_count; i++) { 48 - /* 49 - * 8 bits per each hpte entries 50 - * 000| [ secondary group (one bit) | hidx (3 bits) | valid bit] 51 - */ 52 - valid = hpte_valid(hpte_slot_array, i); 53 - if (!valid) 54 - continue; 55 - hidx = hpte_hash_index(hpte_slot_array, i); 56 - 57 - /* get the vpn */ 58 - addr = s_addr + (i * (1ul << shift)); 59 - vpn = hpt_vpn(addr, vsid, ssize); 60 - hash = hpt_hash(vpn, shift, ssize); 61 - if (hidx & _PTEIDX_SECONDARY) 62 - hash = ~hash; 63 - 64 - slot = (hash & htab_hash_mask) * HPTES_PER_GROUP; 65 - slot += hidx & _PTEIDX_GROUP_IX; 66 - ppc_md.hpte_invalidate(slot, vpn, psize, 67 - MMU_PAGE_16M, ssize, 0); 68 - } 69 - } 70 - 71 - 72 21 int __hash_page_thp(unsigned long ea, unsigned long access, unsigned long vsid, 73 - pmd_t *pmdp, unsigned long trap, int local, int ssize, 74 - unsigned int psize) 22 + pmd_t *pmdp, unsigned long trap, unsigned long flags, 23 + int ssize, unsigned int psize) 75 24 { 76 25 unsigned int index, valid; 77 26 unsigned char *hpte_slot_array; ··· 94 145 * 
hash page table entries. 95 146 */ 96 147 if ((old_pmd & _PAGE_HASHPTE) && !(old_pmd & _PAGE_COMBO)) 97 - invalidate_old_hpte(vsid, ea, pmdp, MMU_PAGE_64K, ssize); 148 + flush_hash_hugepage(vsid, ea, pmdp, MMU_PAGE_64K, 149 + ssize, flags); 98 150 } 99 151 100 152 valid = hpte_valid(hpte_slot_array, index); ··· 108 158 slot += hidx & _PTEIDX_GROUP_IX; 109 159 110 160 ret = ppc_md.hpte_updatepp(slot, rflags, vpn, 111 - psize, lpsize, ssize, local); 161 + psize, lpsize, ssize, flags); 112 162 /* 113 163 * We failed to update, try to insert a new entry. 114 164 */
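Both the deleted invalidate_old_hpte() and the new flush_hash_hugepage() walk the per-hugepage slot array whose comment reads "8 bits per each hpte entries: [secondary group (one bit) | hidx (3 bits) | valid bit]". A small sketch of how such a byte decodes, mirroring the shape of the kernel's hpte_valid()/hpte_hash_index() helpers (the exact bit layout here is taken from that comment and should be treated as illustrative):

```c
#include <assert.h>

/* One slot-array byte per sub-page HPTE: bit 0 is the valid bit, and
 * the bits above it hold the hash index, including the secondary-group
 * bit. Decoding is a mask and a shift. */
static unsigned int slot_valid(unsigned char b)
{
	return b & 0x1;			/* valid bit */
}

static unsigned int slot_hash_index(unsigned char b)
{
	return b >> 1;			/* secondary bit + 3-bit hidx */
}
```

The invalidation loop above then recomputes vpn and hash per sub-page and flips the hash when the secondary bit (_PTEIDX_SECONDARY) is set.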
+3 -3
arch/powerpc/mm/hugetlbpage-book3e.c
··· 33 33 34 34 ncams = mfspr(SPRN_TLB1CFG) & TLBnCFG_N_ENTRY; 35 35 36 - index = __get_cpu_var(next_tlbcam_idx); 36 + index = this_cpu_read(next_tlbcam_idx); 37 37 38 38 /* Just round-robin the entries and wrap when we hit the end */ 39 39 if (unlikely(index == ncams - 1)) 40 - __get_cpu_var(next_tlbcam_idx) = tlbcam_index; 40 + __this_cpu_write(next_tlbcam_idx, tlbcam_index); 41 41 else 42 - __get_cpu_var(next_tlbcam_idx)++; 42 + __this_cpu_inc(next_tlbcam_idx); 43 43 44 44 return index; 45 45 }
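The hugetlbpage-book3e.c hunk converts `__get_cpu_var()` uses into the dedicated `this_cpu_read()` / `__this_cpu_write()` / `__this_cpu_inc()` accessors. The round-robin logic itself is unchanged; here it is extracted with a plain variable standing in for the per-CPU `next_tlbcam_idx`, so the wrap behaviour is easy to check (a sketch, not the kernel code):

```c
#include <assert.h>

/* next_idx plays the role of the per-CPU counter; the comments mark
 * which this_cpu_* accessor each step corresponds to above. */
static unsigned int next_tlbcam(unsigned int *next_idx,
				unsigned int ncams,
				unsigned int tlbcam_index)
{
	unsigned int index = *next_idx;		/* this_cpu_read()    */

	if (index == ncams - 1)			/* wrap at the end    */
		*next_idx = tlbcam_index;	/* __this_cpu_write() */
	else
		(*next_idx)++;			/* __this_cpu_inc()   */
	return index;
}
```

The point of the conversion is that the `this_cpu_*` operations can use single per-CPU instructions where the architecture provides them, instead of computing the per-CPU address and then dereferencing it.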
+3 -3
arch/powerpc/mm/hugetlbpage-hash64.c
··· 19 19 unsigned long vflags, int psize, int ssize); 20 20 21 21 int __hash_page_huge(unsigned long ea, unsigned long access, unsigned long vsid, 22 - pte_t *ptep, unsigned long trap, int local, int ssize, 23 - unsigned int shift, unsigned int mmu_psize) 22 + pte_t *ptep, unsigned long trap, unsigned long flags, 23 + int ssize, unsigned int shift, unsigned int mmu_psize) 24 24 { 25 25 unsigned long vpn; 26 26 unsigned long old_pte, new_pte; ··· 81 81 slot += (old_pte & _PAGE_F_GIX) >> 12; 82 82 83 83 if (ppc_md.hpte_updatepp(slot, rflags, vpn, mmu_psize, 84 - mmu_psize, ssize, local) == -1) 84 + mmu_psize, ssize, flags) == -1) 85 85 old_pte &= ~_PAGE_HPTEFLAGS; 86 86 } 87 87
+26 -25
arch/powerpc/mm/hugetlbpage.c
··· 62 62 /* 63 63 * We have PGD_INDEX_SIZ = 12 and PTE_INDEX_SIZE = 8, so that we can have 64 64 * 16GB hugepage pte in PGD and 16MB hugepage pte at PMD; 65 + * 66 + * Defined in such a way that we can optimize away code block at build time 67 + * if CONFIG_HUGETLB_PAGE=n. 65 68 */ 66 69 int pmd_huge(pmd_t pmd) 67 70 { ··· 233 230 if (hugepd_none(*hpdp) && __hugepte_alloc(mm, hpdp, addr, pdshift, pshift)) 234 231 return NULL; 235 232 236 - return hugepte_offset(hpdp, addr, pdshift); 233 + return hugepte_offset(*hpdp, addr, pdshift); 237 234 } 238 235 239 236 #else ··· 273 270 if (hugepd_none(*hpdp) && __hugepte_alloc(mm, hpdp, addr, pdshift, pshift)) 274 271 return NULL; 275 272 276 - return hugepte_offset(hpdp, addr, pdshift); 273 + return hugepte_offset(*hpdp, addr, pdshift); 277 274 } 278 275 #endif 279 276 280 277 #ifdef CONFIG_PPC_FSL_BOOK3E 281 278 /* Build list of addresses of gigantic pages. This function is used in early 282 - * boot before the buddy or bootmem allocator is setup. 279 + * boot before the buddy allocator is setup. 
283 280 */ 284 281 void add_gpage(u64 addr, u64 page_size, unsigned long number_of_pages) 285 282 { ··· 315 312 * If gpages can be in highmem we can't use the trick of storing the 316 313 * data structure in the page; allocate space for this 317 314 */ 318 - m = alloc_bootmem(sizeof(struct huge_bootmem_page)); 315 + m = memblock_virt_alloc(sizeof(struct huge_bootmem_page), 0); 319 316 m->phys = gpage_freearray[idx].gpage_list[--nr_gpages]; 320 317 #else 321 318 m = phys_to_virt(gpage_freearray[idx].gpage_list[--nr_gpages]); ··· 355 352 if (size != 0) { 356 353 if (sscanf(val, "%lu", &npages) <= 0) 357 354 npages = 0; 355 + if (npages > MAX_NUMBER_GPAGES) { 356 + pr_warn("MMU: %lu pages requested for page " 357 + "size %llu KB, limiting to " 358 + __stringify(MAX_NUMBER_GPAGES) "\n", 359 + npages, size / 1024); 360 + npages = MAX_NUMBER_GPAGES; 361 + } 358 362 gpage_npages[shift_to_mmu_psize(__ffs(size))] = npages; 359 363 size = 0; 360 364 } ··· 409 399 #else /* !PPC_FSL_BOOK3E */ 410 400 411 401 /* Build list of addresses of gigantic pages. This function is used in early 412 - * boot before the buddy or bootmem allocator is setup. 402 + * boot before the buddy allocator is setup. 413 403 */ 414 404 void add_gpage(u64 addr, u64 page_size, unsigned long number_of_pages) 415 405 { ··· 472 462 { 473 463 struct hugepd_freelist **batchp; 474 464 475 - batchp = &get_cpu_var(hugepd_freelist_cur); 465 + batchp = this_cpu_ptr(&hugepd_freelist_cur); 476 466 477 467 if (atomic_read(&tlb->mm->mm_users) < 2 || 478 468 cpumask_equal(mm_cpumask(tlb->mm), ··· 546 536 do { 547 537 pmd = pmd_offset(pud, addr); 548 538 next = pmd_addr_end(addr, end); 549 - if (!is_hugepd(pmd)) { 539 + if (!is_hugepd(__hugepd(pmd_val(*pmd)))) { 550 540 /* 551 541 * if it is not hugepd pointer, we should already find 552 542 * it cleared. 
··· 595 585 do { 596 586 pud = pud_offset(pgd, addr); 597 587 next = pud_addr_end(addr, end); 598 - if (!is_hugepd(pud)) { 588 + if (!is_hugepd(__hugepd(pud_val(*pud)))) { 599 589 if (pud_none_or_clear_bad(pud)) 600 590 continue; 601 591 hugetlb_free_pmd_range(tlb, pud, addr, next, floor, ··· 661 651 do { 662 652 next = pgd_addr_end(addr, end); 663 653 pgd = pgd_offset(tlb->mm, addr); 664 - if (!is_hugepd(pgd)) { 654 + if (!is_hugepd(__hugepd(pgd_val(*pgd)))) { 665 655 if (pgd_none_or_clear_bad(pgd)) 666 656 continue; 667 657 hugetlb_free_pud_range(tlb, pgd, addr, next, floor, ceiling); ··· 721 711 return (__boundary - 1 < end - 1) ? __boundary : end; 722 712 } 723 713 724 - int gup_hugepd(hugepd_t *hugepd, unsigned pdshift, 725 - unsigned long addr, unsigned long end, 726 - int write, struct page **pages, int *nr) 714 + int gup_huge_pd(hugepd_t hugepd, unsigned long addr, unsigned pdshift, 715 + unsigned long end, int write, struct page **pages, int *nr) 727 716 { 728 717 pte_t *ptep; 729 - unsigned long sz = 1UL << hugepd_shift(*hugepd); 718 + unsigned long sz = 1UL << hugepd_shift(hugepd); 730 719 unsigned long next; 731 720 732 721 ptep = hugepte_offset(hugepd, addr, pdshift); ··· 968 959 else if (pgd_huge(pgd)) { 969 960 ret_pte = (pte_t *) pgdp; 970 961 goto out; 971 - } else if (is_hugepd(&pgd)) 962 + } else if (is_hugepd(__hugepd(pgd_val(pgd)))) 972 963 hpdp = (hugepd_t *)&pgd; 973 964 else { 974 965 /* ··· 985 976 else if (pud_huge(pud)) { 986 977 ret_pte = (pte_t *) pudp; 987 978 goto out; 988 - } else if (is_hugepd(&pud)) 979 + } else if (is_hugepd(__hugepd(pud_val(pud)))) 989 980 hpdp = (hugepd_t *)&pud; 990 981 else { 991 982 pdshift = PMD_SHIFT; ··· 1006 997 if (pmd_huge(pmd) || pmd_large(pmd)) { 1007 998 ret_pte = (pte_t *) pmdp; 1008 999 goto out; 1009 - } else if (is_hugepd(&pmd)) 1000 + } else if (is_hugepd(__hugepd(pmd_val(pmd)))) 1010 1001 hpdp = (hugepd_t *)&pmd; 1011 1002 else 1012 1003 return pte_offset_kernel(&pmd, ea); ··· 1015 1006 if 
(!hpdp) 1016 1007 return NULL; 1017 1008 1018 - ret_pte = hugepte_offset(hpdp, ea, pdshift); 1009 + ret_pte = hugepte_offset(*hpdp, ea, pdshift); 1019 1010 pdshift = hugepd_shift(*hpdp); 1020 1011 out: 1021 1012 if (shift) ··· 1044 1035 1045 1036 if ((pte_val(pte) & mask) != mask) 1046 1037 return 0; 1047 - 1048 - #ifdef CONFIG_TRANSPARENT_HUGEPAGE 1049 - /* 1050 - * check for splitting here 1051 - */ 1052 - if (pmd_trans_splitting(pte_pmd(pte))) 1053 - return 0; 1054 - #endif 1055 1038 1056 1039 /* hugepages are never "special" */ 1057 1040 VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
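Throughout hugetlbpage.c the checks change from `is_hugepd(pmd)` on a pointer to `is_hugepd(__hugepd(pmd_val(*pmd)))` on a wrapped value. A toy sketch of that wrapper pattern follows; the struct, the marker bit, and both function bodies are invented for illustration (the real test depends on the page-table format), only the shape matches:

```c
#include <assert.h>

/* Wrapping the raw value in a distinct struct type means is_hugepd()
 * can only ever be handed something that went through __hugepd(),
 * rather than an arbitrary pointer. */
typedef struct {
	unsigned long pd;
} hugepd_t;

#define SKETCH_HUGEPD_BIT	0x1UL	/* assumed marker bit, not real */

static hugepd_t __hugepd(unsigned long val)
{
	hugepd_t h = { .pd = val };
	return h;
}

static int is_hugepd(hugepd_t h)
{
	return (h.pd & SKETCH_HUGEPD_BIT) != 0;
}
```

The same motivation explains `hugepte_offset(*hpdp, ...)` taking the value rather than the pointer: the type system, not the caller's discipline, guarantees what the helper receives.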
-10
arch/powerpc/mm/init_32.c
··· 26 26 #include <linux/mm.h> 27 27 #include <linux/stddef.h> 28 28 #include <linux/init.h> 29 - #include <linux/bootmem.h> 30 29 #include <linux/highmem.h> 31 30 #include <linux/initrd.h> 32 31 #include <linux/pagemap.h> ··· 192 193 193 194 /* Shortly after that, the entire linear mapping will be available */ 194 195 memblock_set_current_limit(lowmem_end_addr); 195 - } 196 - 197 - /* This is only called until mem_init is done. */ 198 - void __init *early_get_page(void) 199 - { 200 - if (init_bootmem_done) 201 - return alloc_bootmem_pages(PAGE_SIZE); 202 - else 203 - return __va(memblock_alloc(PAGE_SIZE, PAGE_SIZE)); 204 196 } 205 197 206 198 #ifdef CONFIG_8xx /* No 8xx specific .c file to put that in ... */
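With `early_get_page()` gone, early page-table code calls `memblock_alloc(PAGE_SIZE, PAGE_SIZE)` directly (as the pgtable_32.c hunk further down shows) instead of branching on `init_bootmem_done`. A toy bump allocator sketching what `memblock_alloc(size, align)` provides to early boot code; this is entirely illustrative, as the real memblock tracks regions and reservations rather than a single cursor:

```c
#include <assert.h>

/* Carve an aligned block out of a boot-time region: round the cursor
 * up to the alignment, hand back that base, advance past the block. */
static unsigned long sketch_memblock_alloc(unsigned long *cursor,
					   unsigned long size,
					   unsigned long align)
{
	unsigned long base = (*cursor + align - 1) & ~(align - 1);

	*cursor = base + size;
	return base;	/* physical address of the allocation */
}
```

Because there is only one early allocator left, the `if (init_bootmem_done)` split that `early_get_page()` existed to hide simply disappears.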
-1
arch/powerpc/mm/init_64.c
··· 34 34 #include <linux/vmalloc.h> 35 35 #include <linux/init.h> 36 36 #include <linux/delay.h> 37 - #include <linux/bootmem.h> 38 37 #include <linux/highmem.h> 39 38 #include <linux/idr.h> 40 39 #include <linux/nodemask.h>
+15 -62
arch/powerpc/mm/mem.c
··· 35 35 #include <linux/memblock.h> 36 36 #include <linux/hugetlb.h> 37 37 #include <linux/slab.h> 38 + #include <linux/vmalloc.h> 38 39 39 40 #include <asm/pgalloc.h> 40 41 #include <asm/prom.h> ··· 61 60 #define CPU_FTR_NOEXECUTE 0 62 61 #endif 63 62 64 - int init_bootmem_done; 65 63 int mem_init_done; 66 64 unsigned long long memory_limit; 67 65 ··· 144 144 145 145 zone = page_zone(pfn_to_page(start_pfn)); 146 146 ret = __remove_pages(zone, start_pfn, nr_pages); 147 - if (!ret && (ppc_md.remove_memory)) 148 - ret = ppc_md.remove_memory(start, size); 147 + if (ret) 148 + return ret; 149 + 150 + /* Remove htab bolted mappings for this section of memory */ 151 + start = (unsigned long)__va(start); 152 + ret = remove_section_mapping(start, start + size); 153 + 154 + /* Ensure all vmalloc mappings are flushed in case they also 155 + * hit that section of memory 156 + */ 157 + vm_unmap_aliases(); 149 158 150 159 return ret; 151 160 } ··· 189 180 } 190 181 EXPORT_SYMBOL_GPL(walk_system_ram_range); 191 182 192 - /* 193 - * Initialize the bootmem system and give it all the memory we 194 - * have available. If we are using highmem, we only put the 195 - * lowmem into the bootmem system. 196 - */ 197 183 #ifndef CONFIG_NEED_MULTIPLE_NODES 198 - void __init do_init_bootmem(void) 184 + void __init initmem_init(void) 199 185 { 200 - unsigned long start, bootmap_pages; 201 - unsigned long total_pages; 202 - struct memblock_region *reg; 203 - int boot_mapsize; 204 - 205 186 max_low_pfn = max_pfn = memblock_end_of_DRAM() >> PAGE_SHIFT; 206 - total_pages = (memblock_end_of_DRAM() - memstart_addr) >> PAGE_SHIFT; 187 + min_low_pfn = MEMORY_START >> PAGE_SHIFT; 207 188 #ifdef CONFIG_HIGHMEM 208 - total_pages = total_lowmem >> PAGE_SHIFT; 209 189 max_low_pfn = lowmem_end_addr >> PAGE_SHIFT; 210 190 #endif 211 - 212 - /* 213 - * Find an area to use for the bootmem bitmap. Calculate the size of 214 - * bitmap required as (Total Memory) / PAGE_SIZE / BITS_PER_BYTE. 
215 - * Add 1 additional page in case the address isn't page-aligned. 216 - */ 217 - bootmap_pages = bootmem_bootmap_pages(total_pages); 218 - 219 - start = memblock_alloc(bootmap_pages << PAGE_SHIFT, PAGE_SIZE); 220 - 221 - min_low_pfn = MEMORY_START >> PAGE_SHIFT; 222 - boot_mapsize = init_bootmem_node(NODE_DATA(0), start >> PAGE_SHIFT, min_low_pfn, max_low_pfn); 223 191 224 192 /* Place all memblock_regions in the same node and merge contiguous 225 193 * memblock_regions 226 194 */ 227 195 memblock_set_node(0, (phys_addr_t)ULLONG_MAX, &memblock.memory, 0); 228 196 229 - /* Add all physical memory to the bootmem map, mark each area 230 - * present. 231 - */ 232 - #ifdef CONFIG_HIGHMEM 233 - free_bootmem_with_active_regions(0, lowmem_end_addr >> PAGE_SHIFT); 234 - 235 - /* reserve the sections we're already using */ 236 - for_each_memblock(reserved, reg) { 237 - unsigned long top = reg->base + reg->size - 1; 238 - if (top < lowmem_end_addr) 239 - reserve_bootmem(reg->base, reg->size, BOOTMEM_DEFAULT); 240 - else if (reg->base < lowmem_end_addr) { 241 - unsigned long trunc_size = lowmem_end_addr - reg->base; 242 - reserve_bootmem(reg->base, trunc_size, BOOTMEM_DEFAULT); 243 - } 244 - } 245 - #else 246 - free_bootmem_with_active_regions(0, max_pfn); 247 - 248 - /* reserve the sections we're already using */ 249 - for_each_memblock(reserved, reg) 250 - reserve_bootmem(reg->base, reg->size, BOOTMEM_DEFAULT); 251 - #endif 252 197 /* XXX need to clip this if using highmem? 
*/ 253 198 sparse_memory_present_with_active_regions(0); 254 - 255 - init_bootmem_done = 1; 199 + sparse_init(); 256 200 } 257 201 258 202 /* mark pages that don't exist as nosave */ ··· 321 359 mark_nonram_nosave(); 322 360 } 323 361 324 - static void __init register_page_bootmem_info(void) 325 - { 326 - int i; 327 - 328 - for_each_online_node(i) 329 - register_page_bootmem_info_node(NODE_DATA(i)); 330 - } 331 - 332 362 void __init mem_init(void) 333 363 { 334 364 /* ··· 333 379 swiotlb_init(0); 334 380 #endif 335 381 336 - register_page_bootmem_info(); 337 382 high_memory = (void *) __va(max_low_pfn * PAGE_SIZE); 338 383 set_max_mapnr(max_pfn); 339 384 free_all_bootmem();
+4 -4
arch/powerpc/mm/mmu_context_nohash.c
··· 421 421 /* 422 422 * Allocate the maps used by context management 423 423 */ 424 - context_map = alloc_bootmem(CTX_MAP_SIZE); 425 - context_mm = alloc_bootmem(sizeof(void *) * (last_context + 1)); 424 + context_map = memblock_virt_alloc(CTX_MAP_SIZE, 0); 425 + context_mm = memblock_virt_alloc(sizeof(void *) * (last_context + 1), 0); 426 426 #ifndef CONFIG_SMP 427 - stale_map[0] = alloc_bootmem(CTX_MAP_SIZE); 427 + stale_map[0] = memblock_virt_alloc(CTX_MAP_SIZE, 0); 428 428 #else 429 - stale_map[boot_cpuid] = alloc_bootmem(CTX_MAP_SIZE); 429 + stale_map[boot_cpuid] = memblock_virt_alloc(CTX_MAP_SIZE, 0); 430 430 431 431 register_cpu_notifier(&mmu_context_cpu_nb); 432 432 #endif
+31 -185
arch/powerpc/mm/numa.c
··· 134 134 return 0; 135 135 } 136 136 137 - /* 138 - * get_node_active_region - Return active region containing pfn 139 - * Active range returned is empty if none found. 140 - * @pfn: The page to return the region for 141 - * @node_ar: Returned set to the active region containing @pfn 142 - */ 143 - static void __init get_node_active_region(unsigned long pfn, 144 - struct node_active_region *node_ar) 145 - { 146 - unsigned long start_pfn, end_pfn; 147 - int i, nid; 148 - 149 - for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, &nid) { 150 - if (pfn >= start_pfn && pfn < end_pfn) { 151 - node_ar->nid = nid; 152 - node_ar->start_pfn = start_pfn; 153 - node_ar->end_pfn = end_pfn; 154 - break; 155 - } 156 - } 157 - } 158 - 159 137 static void reset_numa_cpu_lookup_table(void) 160 138 { 161 139 unsigned int cpu; ··· 906 928 } 907 929 } 908 930 909 - /* 910 - * Allocate some memory, satisfying the memblock or bootmem allocator where 911 - * required. nid is the preferred node and end is the physical address of 912 - * the highest address in the node. 913 - * 914 - * Returns the virtual address of the memory. 915 - */ 916 - static void __init *careful_zallocation(int nid, unsigned long size, 917 - unsigned long align, 918 - unsigned long end_pfn) 919 - { 920 - void *ret; 921 - int new_nid; 922 - unsigned long ret_paddr; 923 - 924 - ret_paddr = __memblock_alloc_base(size, align, end_pfn << PAGE_SHIFT); 925 - 926 - /* retry over all memory */ 927 - if (!ret_paddr) 928 - ret_paddr = __memblock_alloc_base(size, align, memblock_end_of_DRAM()); 929 - 930 - if (!ret_paddr) 931 - panic("numa.c: cannot allocate %lu bytes for node %d", 932 - size, nid); 933 - 934 - ret = __va(ret_paddr); 935 - 936 - /* 937 - * We initialize the nodes in numeric order: 0, 1, 2... 938 - * and hand over control from the MEMBLOCK allocator to the 939 - * bootmem allocator. 
If this function is called for 940 - * node 5, then we know that all nodes <5 are using the 941 - * bootmem allocator instead of the MEMBLOCK allocator. 942 - * 943 - * So, check the nid from which this allocation came 944 - * and double check to see if we need to use bootmem 945 - * instead of the MEMBLOCK. We don't free the MEMBLOCK memory 946 - * since it would be useless. 947 - */ 948 - new_nid = early_pfn_to_nid(ret_paddr >> PAGE_SHIFT); 949 - if (new_nid < nid) { 950 - ret = __alloc_bootmem_node(NODE_DATA(new_nid), 951 - size, align, 0); 952 - 953 - dbg("alloc_bootmem %p %lx\n", ret, size); 954 - } 955 - 956 - memset(ret, 0, size); 957 - return ret; 958 - } 959 - 960 931 static struct notifier_block ppc64_numa_nb = { 961 932 .notifier_call = cpu_numa_callback, 962 933 .priority = 1 /* Must run before sched domains notifier. */ 963 934 }; 964 935 965 - static void __init mark_reserved_regions_for_nid(int nid) 936 + /* Initialize NODE_DATA for a node on the local memory */ 937 + static void __init setup_node_data(int nid, u64 start_pfn, u64 end_pfn) 966 938 { 967 - struct pglist_data *node = NODE_DATA(nid); 968 - struct memblock_region *reg; 939 + u64 spanned_pages = end_pfn - start_pfn; 940 + const size_t nd_size = roundup(sizeof(pg_data_t), SMP_CACHE_BYTES); 941 + u64 nd_pa; 942 + void *nd; 943 + int tnid; 969 944 970 - for_each_memblock(reserved, reg) { 971 - unsigned long physbase = reg->base; 972 - unsigned long size = reg->size; 973 - unsigned long start_pfn = physbase >> PAGE_SHIFT; 974 - unsigned long end_pfn = PFN_UP(physbase + size); 975 - struct node_active_region node_ar; 976 - unsigned long node_end_pfn = pgdat_end_pfn(node); 945 + if (spanned_pages) 946 + pr_info("Initmem setup node %d [mem %#010Lx-%#010Lx]\n", 947 + nid, start_pfn << PAGE_SHIFT, 948 + (end_pfn << PAGE_SHIFT) - 1); 949 + else 950 + pr_info("Initmem setup node %d\n", nid); 977 951 978 - /* 979 - * Check to make sure that this memblock.reserved area is 980 - * within the bounds of 
the node that we care about. 981 - * Checking the nid of the start and end points is not 982 - * sufficient because the reserved area could span the 983 - * entire node. 984 - */ 985 - if (end_pfn <= node->node_start_pfn || 986 - start_pfn >= node_end_pfn) 987 - continue; 952 + nd_pa = memblock_alloc_try_nid(nd_size, SMP_CACHE_BYTES, nid); 953 + nd = __va(nd_pa); 988 954 989 - get_node_active_region(start_pfn, &node_ar); 990 - while (start_pfn < end_pfn && 991 - node_ar.start_pfn < node_ar.end_pfn) { 992 - unsigned long reserve_size = size; 993 - /* 994 - * if reserved region extends past active region 995 - * then trim size to active region 996 - */ 997 - if (end_pfn > node_ar.end_pfn) 998 - reserve_size = (node_ar.end_pfn << PAGE_SHIFT) 999 - - physbase; 1000 - /* 1001 - * Only worry about *this* node, others may not 1002 - * yet have valid NODE_DATA(). 1003 - */ 1004 - if (node_ar.nid == nid) { 1005 - dbg("reserve_bootmem %lx %lx nid=%d\n", 1006 - physbase, reserve_size, node_ar.nid); 1007 - reserve_bootmem_node(NODE_DATA(node_ar.nid), 1008 - physbase, reserve_size, 1009 - BOOTMEM_DEFAULT); 1010 - } 1011 - /* 1012 - * if reserved region is contained in the active region 1013 - * then done. 
1014 - */ 1015 - if (end_pfn <= node_ar.end_pfn) 1016 - break; 955 + /* report and initialize */ 956 + pr_info(" NODE_DATA [mem %#010Lx-%#010Lx]\n", 957 + nd_pa, nd_pa + nd_size - 1); 958 + tnid = early_pfn_to_nid(nd_pa >> PAGE_SHIFT); 959 + if (tnid != nid) 960 + pr_info(" NODE_DATA(%d) on node %d\n", nid, tnid); 1017 961 1018 - /* 1019 - * reserved region extends past the active region 1020 - * get next active region that contains this 1021 - * reserved region 1022 - */ 1023 - start_pfn = node_ar.end_pfn; 1024 - physbase = start_pfn << PAGE_SHIFT; 1025 - size = size - reserve_size; 1026 - get_node_active_region(start_pfn, &node_ar); 1027 - } 1028 - } 962 + node_data[nid] = nd; 963 + memset(NODE_DATA(nid), 0, sizeof(pg_data_t)); 964 + NODE_DATA(nid)->node_id = nid; 965 + NODE_DATA(nid)->node_start_pfn = start_pfn; 966 + NODE_DATA(nid)->node_spanned_pages = spanned_pages; 1029 967 } 1030 968 1031 - 1032 - void __init do_init_bootmem(void) 969 + void __init initmem_init(void) 1033 970 { 1034 971 int nid, cpu; 1035 972 1036 - min_low_pfn = 0; 1037 973 max_low_pfn = memblock_end_of_DRAM() >> PAGE_SHIFT; 1038 974 max_pfn = max_low_pfn; 1039 975 ··· 956 1064 else 957 1065 dump_numa_memory_topology(); 958 1066 1067 + memblock_dump_all(); 1068 + 959 1069 for_each_online_node(nid) { 960 1070 unsigned long start_pfn, end_pfn; 961 - void *bootmem_vaddr; 962 - unsigned long bootmap_pages; 963 1071 964 1072 get_pfn_range_for_nid(nid, &start_pfn, &end_pfn); 965 - 966 - /* 967 - * Allocate the node structure node local if possible 968 - * 969 - * Be careful moving this around, as it relies on all 970 - * previous nodes' bootmem to be initialized and have 971 - * all reserved areas marked. 
972 - */ 973 - NODE_DATA(nid) = careful_zallocation(nid, 974 - sizeof(struct pglist_data), 975 - SMP_CACHE_BYTES, end_pfn); 976 - 977 - dbg("node %d\n", nid); 978 - dbg("NODE_DATA() = %p\n", NODE_DATA(nid)); 979 - 980 - NODE_DATA(nid)->bdata = &bootmem_node_data[nid]; 981 - NODE_DATA(nid)->node_start_pfn = start_pfn; 982 - NODE_DATA(nid)->node_spanned_pages = end_pfn - start_pfn; 983 - 984 - if (NODE_DATA(nid)->node_spanned_pages == 0) 985 - continue; 986 - 987 - dbg("start_paddr = %lx\n", start_pfn << PAGE_SHIFT); 988 - dbg("end_paddr = %lx\n", end_pfn << PAGE_SHIFT); 989 - 990 - bootmap_pages = bootmem_bootmap_pages(end_pfn - start_pfn); 991 - bootmem_vaddr = careful_zallocation(nid, 992 - bootmap_pages << PAGE_SHIFT, 993 - PAGE_SIZE, end_pfn); 994 - 995 - dbg("bootmap_vaddr = %p\n", bootmem_vaddr); 996 - 997 - init_bootmem_node(NODE_DATA(nid), 998 - __pa(bootmem_vaddr) >> PAGE_SHIFT, 999 - start_pfn, end_pfn); 1000 - 1001 - free_bootmem_with_active_regions(nid, end_pfn); 1002 - /* 1003 - * Be very careful about moving this around. Future 1004 - * calls to careful_zallocation() depend on this getting 1005 - * done correctly. 1006 - */ 1007 - mark_reserved_regions_for_nid(nid); 1073 + setup_node_data(nid, start_pfn, end_pfn); 1008 1074 sparse_memory_present_with_active_regions(nid); 1009 1075 } 1010 1076 1011 - init_bootmem_done = 1; 1077 + sparse_init(); 1012 1078 1013 - /* 1014 - * Now bootmem is initialised we can create the node to cpumask 1015 - * lookup tables and setup the cpu callback to populate them. 1016 - */ 1017 1079 setup_node_to_cpumask_map(); 1018 1080 1019 1081 reset_numa_cpu_lookup_table();
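The new setup_node_data() in numa.c leans on two small calculations: rounding `sizeof(pg_data_t)` up to a cache-line multiple before allocating it near its node, and turning a pfn range into the inclusive byte range it prints. A sketch with assumed constants (a 12-bit page shift and 128-byte cache line; real powerpc configs vary):

```c
#include <assert.h>

#define SKETCH_PAGE_SHIFT	12
#define SKETCH_CACHE_BYTES	128UL

/* roundup(sizeof(pg_data_t), SMP_CACHE_BYTES) in the hunk above. */
static unsigned long sketch_roundup(unsigned long x, unsigned long to)
{
	return ((x + to - 1) / to) * to;
}

/* "(end_pfn << PAGE_SHIFT) - 1": last byte covered by the node. */
static unsigned long long span_end(unsigned long long end_pfn)
{
	return (end_pfn << SKETCH_PAGE_SHIFT) - 1;
}
```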
+1 -2
arch/powerpc/mm/pgtable_32.c
··· 100 100 { 101 101 pte_t *pte; 102 102 extern int mem_init_done; 103 - extern void *early_get_page(void); 104 103 105 104 if (mem_init_done) { 106 105 pte = (pte_t *)__get_free_page(GFP_KERNEL|__GFP_REPEAT|__GFP_ZERO); 107 106 } else { 108 - pte = (pte_t *)early_get_page(); 107 + pte = __va(memblock_alloc(PAGE_SIZE, PAGE_SIZE)); 109 108 if (pte) 110 109 clear_page(pte); 111 110 }
+35 -67
arch/powerpc/mm/pgtable_64.c
··· 33 33 #include <linux/swap.h> 34 34 #include <linux/stddef.h> 35 35 #include <linux/vmalloc.h> 36 - #include <linux/bootmem.h> 37 36 #include <linux/memblock.h> 38 37 #include <linux/slab.h> 38 + #include <linux/hugetlb.h> 39 39 40 40 #include <asm/pgalloc.h> 41 41 #include <asm/page.h> ··· 51 51 #include <asm/cputable.h> 52 52 #include <asm/sections.h> 53 53 #include <asm/firmware.h> 54 + #include <asm/dma.h> 54 55 55 56 #include "mmu_decl.h" 56 57 ··· 76 75 { 77 76 void *pt; 78 77 79 - if (init_bootmem_done) 80 - pt = __alloc_bootmem(size, size, __pa(MAX_DMA_ADDRESS)); 81 - else 82 - pt = __va(memblock_alloc_base(size, size, 83 - __pa(MAX_DMA_ADDRESS))); 78 + pt = __va(memblock_alloc_base(size, size, __pa(MAX_DMA_ADDRESS))); 84 79 memset(pt, 0, size); 85 80 86 81 return pt; ··· 110 113 __pgprot(flags))); 111 114 } else { 112 115 #ifdef CONFIG_PPC_MMU_NOHASH 113 - /* Warning ! This will blow up if bootmem is not initialized 114 - * which our ppc64 code is keen to do that, we'll need to 115 - * fix it and/or be more careful 116 - */ 117 116 pgdp = pgd_offset_k(ea); 118 117 #ifdef PUD_TABLE_SIZE 119 118 if (pgd_none(*pgdp)) { ··· 345 352 EXPORT_SYMBOL(__iounmap); 346 353 EXPORT_SYMBOL(__iounmap_at); 347 354 355 + #ifndef __PAGETABLE_PUD_FOLDED 356 + /* 4 level page table */ 357 + struct page *pgd_page(pgd_t pgd) 358 + { 359 + if (pgd_huge(pgd)) 360 + return pte_page(pgd_pte(pgd)); 361 + return virt_to_page(pgd_page_vaddr(pgd)); 362 + } 363 + #endif 364 + 365 + struct page *pud_page(pud_t pud) 366 + { 367 + if (pud_huge(pud)) 368 + return pte_page(pud_pte(pud)); 369 + return virt_to_page(pud_page_vaddr(pud)); 370 + } 371 + 348 372 /* 349 373 * For hugepage we have pfn in the pmd, we use PTE_RPN_SHIFT bits for flags 350 374 * For PTE page, we have a PTE_FRAG_SIZE (4K) aligned virtual address. 
351 375 */ 352 376 struct page *pmd_page(pmd_t pmd) 353 377 { 354 - #ifdef CONFIG_TRANSPARENT_HUGEPAGE 355 - if (pmd_trans_huge(pmd)) 378 + if (pmd_trans_huge(pmd) || pmd_huge(pmd)) 356 379 return pfn_to_page(pmd_pfn(pmd)); 357 - #endif 358 380 return virt_to_page(pmd_page_vaddr(pmd)); 359 381 } 360 382 ··· 739 731 void hpte_do_hugepage_flush(struct mm_struct *mm, unsigned long addr, 740 732 pmd_t *pmdp, unsigned long old_pmd) 741 733 { 742 - int ssize, i; 743 - unsigned long s_addr; 744 - int max_hpte_count; 745 - unsigned int psize, valid; 746 - unsigned char *hpte_slot_array; 747 - unsigned long hidx, vpn, vsid, hash, shift, slot; 748 - 749 - /* 750 - * Flush all the hptes mapping this hugepage 751 - */ 752 - s_addr = addr & HPAGE_PMD_MASK; 753 - hpte_slot_array = get_hpte_slot_array(pmdp); 754 - /* 755 - * IF we try to do a HUGE PTE update after a withdraw is done. 756 - * we will find the below NULL. This happens when we do 757 - * split_huge_page_pmd 758 - */ 759 - if (!hpte_slot_array) 760 - return; 734 + int ssize; 735 + unsigned int psize; 736 + unsigned long vsid; 737 + unsigned long flags = 0; 738 + const struct cpumask *tmp; 761 739 762 740 /* get the base page size,vsid and segment size */ 763 741 #ifdef CONFIG_DEBUG_VM 764 - psize = get_slice_psize(mm, s_addr); 742 + psize = get_slice_psize(mm, addr); 765 743 BUG_ON(psize == MMU_PAGE_16M); 766 744 #endif 767 745 if (old_pmd & _PAGE_COMBO) ··· 755 761 else 756 762 psize = MMU_PAGE_64K; 757 763 758 - if (!is_kernel_addr(s_addr)) { 759 - ssize = user_segment_size(s_addr); 760 - vsid = get_vsid(mm->context.id, s_addr, ssize); 764 + if (!is_kernel_addr(addr)) { 765 + ssize = user_segment_size(addr); 766 + vsid = get_vsid(mm->context.id, addr, ssize); 761 767 WARN_ON(vsid == 0); 762 768 } else { 763 - vsid = get_kernel_vsid(s_addr, mmu_kernel_ssize); 769 + vsid = get_kernel_vsid(addr, mmu_kernel_ssize); 764 770 ssize = mmu_kernel_ssize; 765 771 } 766 772 767 - if (ppc_md.hugepage_invalidate) 768 - return 
ppc_md.hugepage_invalidate(vsid, s_addr, 769 - hpte_slot_array, 770 - psize, ssize); 771 - /* 772 - * No bluk hpte removal support, invalidate each entry 773 - */ 774 - shift = mmu_psize_defs[psize].shift; 775 - max_hpte_count = HPAGE_PMD_SIZE >> shift; 776 - for (i = 0; i < max_hpte_count; i++) { 777 - /* 778 - * 8 bits per each hpte entries 779 - * 000| [ secondary group (one bit) | hidx (3 bits) | valid bit] 780 - */ 781 - valid = hpte_valid(hpte_slot_array, i); 782 - if (!valid) 783 - continue; 784 - hidx = hpte_hash_index(hpte_slot_array, i); 773 + tmp = cpumask_of(smp_processor_id()); 774 + if (cpumask_equal(mm_cpumask(mm), tmp)) 775 + flags |= HPTE_LOCAL_UPDATE; 785 776 786 - /* get the vpn */ 787 - addr = s_addr + (i * (1ul << shift)); 788 - vpn = hpt_vpn(addr, vsid, ssize); 789 - hash = hpt_hash(vpn, shift, ssize); 790 - if (hidx & _PTEIDX_SECONDARY) 791 - hash = ~hash; 792 - 793 - slot = (hash & htab_hash_mask) * HPTES_PER_GROUP; 794 - slot += hidx & _PTEIDX_GROUP_IX; 795 - ppc_md.hpte_invalidate(slot, vpn, psize, 796 - MMU_PAGE_16M, ssize, 0); 797 - } 777 + return flush_hash_hugepage(vsid, addr, pmdp, psize, ssize, flags); 798 778 } 799 779 800 780 static pmd_t pmd_set_protbits(pmd_t pmd, pgprot_t pgprot)
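The rewrite above collapses the per-HPTE invalidation loop into a single flush_hash_hugepage() call, and decides locality by comparing mm_cpumask(mm) against the current CPU's mask. A minimal user-space sketch of just that decision — the flag value and the bitmask representation of mm_cpumask() are stand-ins, not the kernel's:

```c
#include <stdint.h>

#define HPTE_LOCAL_UPDATE 0x1UL	/* stand-in value for the kernel flag */

/* Model mm_cpumask() as a plain bitmask of every CPU this mm ran on. */
static inline unsigned long flush_flags(uint64_t mm_mask, int this_cpu)
{
	unsigned long flags = 0;

	/* If the only CPU that ever ran this mm is the current one, no
	 * other CPU can hold stale translations: flush locally (tlbiel). */
	if (mm_mask == (1ULL << this_cpu))
		flags |= HPTE_LOCAL_UPDATE;
	return flags;
}
```

This mirrors the `cpumask_equal(mm_cpumask(mm), tmp)` test in the new hpte_do_hugepage_flush() body.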
+4 -2
arch/powerpc/oprofile/backtrace.c
··· 10 10 #include <linux/oprofile.h> 11 11 #include <linux/sched.h> 12 12 #include <asm/processor.h> 13 - #include <asm/uaccess.h> 13 + #include <linux/uaccess.h> 14 14 #include <asm/compat.h> 15 15 #include <asm/oprofile_impl.h> 16 16 ··· 105 105 first_frame = 0; 106 106 } 107 107 } else { 108 + pagefault_disable(); 108 109 #ifdef CONFIG_PPC64 109 110 if (!is_32bit_task()) { 110 111 while (depth--) { ··· 114 113 break; 115 114 first_frame = 0; 116 115 } 117 - 116 + pagefault_enable(); 118 117 return; 119 118 } 120 119 #endif ··· 125 124 break; 126 125 first_frame = 0; 127 126 } 127 + pagefault_enable(); 128 128 } 129 129 }
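The backtrace fix brackets the user-stack walk with pagefault_disable()/pagefault_enable() so a faulting read fails fast instead of sleeping in the fault handler — note the 64-bit branch's early `return` needed its own pagefault_enable(). A toy sketch of the balanced-counter discipline (all names are hypothetical suffixed stand-ins):

```c
#include <assert.h>

/* Per-thread pagefault-disable depth, as a plain counter for illustration. */
static int pagefault_depth;

static void pagefault_disable_sketch(void) { pagefault_depth++; }

static void pagefault_enable_sketch(void)
{
	assert(pagefault_depth > 0);	/* every disable needs an enable */
	pagefault_depth--;
}

/* A read that would fault must fail fast while the counter is nonzero. */
static int probe_read_sketch(int would_fault)
{
	if (pagefault_depth && would_fault)
		return -1;	/* refuse instead of sleeping */
	return 0;
}
```

The point of the counter model: every exit path from the region, including early returns, must re-enable pagefaults exactly once.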
+11 -11
arch/powerpc/perf/core-book3s.c
··· 339 339 340 340 static void power_pmu_bhrb_enable(struct perf_event *event) 341 341 { 342 - struct cpu_hw_events *cpuhw = &__get_cpu_var(cpu_hw_events); 342 + struct cpu_hw_events *cpuhw = this_cpu_ptr(&cpu_hw_events); 343 343 344 344 if (!ppmu->bhrb_nr) 345 345 return; ··· 354 354 355 355 static void power_pmu_bhrb_disable(struct perf_event *event) 356 356 { 357 - struct cpu_hw_events *cpuhw = &__get_cpu_var(cpu_hw_events); 357 + struct cpu_hw_events *cpuhw = this_cpu_ptr(&cpu_hw_events); 358 358 359 359 if (!ppmu->bhrb_nr) 360 360 return; ··· 1144 1144 if (!ppmu) 1145 1145 return; 1146 1146 local_irq_save(flags); 1147 - cpuhw = &__get_cpu_var(cpu_hw_events); 1147 + cpuhw = this_cpu_ptr(&cpu_hw_events); 1148 1148 1149 1149 if (!cpuhw->disabled) { 1150 1150 /* ··· 1211 1211 return; 1212 1212 local_irq_save(flags); 1213 1213 1214 - cpuhw = &__get_cpu_var(cpu_hw_events); 1214 + cpuhw = this_cpu_ptr(&cpu_hw_events); 1215 1215 if (!cpuhw->disabled) 1216 1216 goto out; 1217 1217 ··· 1403 1403 * Add the event to the list (if there is room) 1404 1404 * and check whether the total set is still feasible. 
1405 1405 */ 1406 - cpuhw = &__get_cpu_var(cpu_hw_events); 1406 + cpuhw = this_cpu_ptr(&cpu_hw_events); 1407 1407 n0 = cpuhw->n_events; 1408 1408 if (n0 >= ppmu->n_counter) 1409 1409 goto out; ··· 1469 1469 1470 1470 power_pmu_read(event); 1471 1471 1472 - cpuhw = &__get_cpu_var(cpu_hw_events); 1472 + cpuhw = this_cpu_ptr(&cpu_hw_events); 1473 1473 for (i = 0; i < cpuhw->n_events; ++i) { 1474 1474 if (event == cpuhw->event[i]) { 1475 1475 while (++i < cpuhw->n_events) { ··· 1575 1575 */ 1576 1576 static void power_pmu_start_txn(struct pmu *pmu) 1577 1577 { 1578 - struct cpu_hw_events *cpuhw = &__get_cpu_var(cpu_hw_events); 1578 + struct cpu_hw_events *cpuhw = this_cpu_ptr(&cpu_hw_events); 1579 1579 1580 1580 perf_pmu_disable(pmu); 1581 1581 cpuhw->group_flag |= PERF_EVENT_TXN; ··· 1589 1589 */ 1590 1590 static void power_pmu_cancel_txn(struct pmu *pmu) 1591 1591 { 1592 - struct cpu_hw_events *cpuhw = &__get_cpu_var(cpu_hw_events); 1592 + struct cpu_hw_events *cpuhw = this_cpu_ptr(&cpu_hw_events); 1593 1593 1594 1594 cpuhw->group_flag &= ~PERF_EVENT_TXN; 1595 1595 perf_pmu_enable(pmu); ··· 1607 1607 1608 1608 if (!ppmu) 1609 1609 return -EAGAIN; 1610 - cpuhw = &__get_cpu_var(cpu_hw_events); 1610 + cpuhw = this_cpu_ptr(&cpu_hw_events); 1611 1611 n = cpuhw->n_events; 1612 1612 if (check_excludes(cpuhw->event, cpuhw->flags, 0, n)) 1613 1613 return -EAGAIN; ··· 1964 1964 1965 1965 if (event->attr.sample_type & PERF_SAMPLE_BRANCH_STACK) { 1966 1966 struct cpu_hw_events *cpuhw; 1967 - cpuhw = &__get_cpu_var(cpu_hw_events); 1967 + cpuhw = this_cpu_ptr(&cpu_hw_events); 1968 1968 power_pmu_bhrb_read(cpuhw); 1969 1969 data.br_stack = &cpuhw->bhrb_stack; 1970 1970 } ··· 2037 2037 static void perf_event_interrupt(struct pt_regs *regs) 2038 2038 { 2039 2039 int i, j; 2040 - struct cpu_hw_events *cpuhw = &__get_cpu_var(cpu_hw_events); 2040 + struct cpu_hw_events *cpuhw = this_cpu_ptr(&cpu_hw_events); 2041 2041 struct perf_event *event; 2042 2042 unsigned long val[8]; 2043 2043 
int found, active;
+3 -3
arch/powerpc/perf/core-fsl-emb.c
··· 210 210 unsigned long flags; 211 211 212 212 local_irq_save(flags); 213 - cpuhw = &__get_cpu_var(cpu_hw_events); 213 + cpuhw = this_cpu_ptr(&cpu_hw_events); 214 214 215 215 if (!cpuhw->disabled) { 216 216 cpuhw->disabled = 1; ··· 249 249 unsigned long flags; 250 250 251 251 local_irq_save(flags); 252 - cpuhw = &__get_cpu_var(cpu_hw_events); 252 + cpuhw = this_cpu_ptr(&cpu_hw_events); 253 253 if (!cpuhw->disabled) 254 254 goto out; 255 255 ··· 653 653 static void perf_event_interrupt(struct pt_regs *regs) 654 654 { 655 655 int i; 656 - struct cpu_hw_events *cpuhw = &__get_cpu_var(cpu_hw_events); 656 + struct cpu_hw_events *cpuhw = this_cpu_ptr(&cpu_hw_events); 657 657 struct perf_event *event; 658 658 unsigned long val; 659 659 int found = 0;
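Both perf files above are part of the tree-wide `__get_cpu_var()` removal: `&__get_cpu_var(var)` becomes `this_cpu_ptr(&var)` — the same address, spelled through the generic this-cpu accessor that takes the variable's address rather than an lvalue. A user-space model using a per-CPU array in place of the real per-cpu offset (all `_sketch` names and `NR_CPUS_SKETCH` are illustrative):

```c
struct cpu_hw_events { int n_events; };

#define NR_CPUS_SKETCH 4
static struct cpu_hw_events cpu_hw_events[NR_CPUS_SKETCH];
static int current_cpu;	/* stands in for the per-cpu offset register */

/* Old style: the macro yields an lvalue, the caller takes its address. */
#define get_cpu_var_sketch(var) ((var)[current_cpu])

/* New style: the caller passes the address, the macro yields a pointer. */
#define this_cpu_ptr_sketch(ptr) (&(*(ptr))[current_cpu])
```

Passing the address lets the generic implementation compute the offset once, which is why the conversion is mechanical: `&__get_cpu_var(cpu_hw_events)` and `this_cpu_ptr(&cpu_hw_events)` denote the same per-CPU slot.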
+1 -1
arch/powerpc/platforms/44x/ppc476.c
··· 94 94 { 95 95 avr_i2c_client = client; 96 96 ppc_md.restart = avr_reset_system; 97 - ppc_md.power_off = avr_power_off_system; 97 + pm_power_off = avr_power_off_system; 98 98 return 0; 99 99 } 100 100
+4 -5
arch/powerpc/platforms/512x/mpc512x_shared.c
··· 18 18 #include <linux/irq.h> 19 19 #include <linux/of_platform.h> 20 20 #include <linux/fsl-diu-fb.h> 21 - #include <linux/bootmem.h> 21 + #include <linux/memblock.h> 22 22 #include <sysdev/fsl_soc.h> 23 23 24 24 #include <asm/cacheflush.h> ··· 297 297 * and so negatively affect boot time. Instead we reserve the 298 298 * already configured frame buffer area so that it won't be 299 299 * destroyed. The starting address of the area to reserve and 300 - * also it's length is passed to reserve_bootmem(). It will be 300 + * also it's length is passed to memblock_reserve(). It will be 301 301 * freed later on first open of fbdev, when splash image is not 302 302 * needed any more. 303 303 */ 304 304 if (diu_shared_fb.in_use) { 305 - ret = reserve_bootmem(diu_shared_fb.fb_phys, 306 - diu_shared_fb.fb_len, 307 - BOOTMEM_EXCLUSIVE); 305 + ret = memblock_reserve(diu_shared_fb.fb_phys, 306 + diu_shared_fb.fb_len); 308 307 if (ret) { 309 308 pr_err("%s: reserve bootmem failed\n", __func__); 310 309 diu_shared_fb.in_use = false;
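With bootmem gone, the shared-framebuffer reservation above goes through memblock_reserve(base, size), which records the range so the early allocator will not hand it out. A toy reservation list showing the call shape and return convention — this is an illustration, not the kernel's implementation:

```c
#include <stdint.h>

#define MAX_RESV 8
static struct { uint64_t base, size; } resv[MAX_RESV];
static int nr_resv;

/* Record [base, base+size) as reserved; 0 on success, like memblock_reserve(). */
static int memblock_reserve_sketch(uint64_t base, uint64_t size)
{
	if (nr_resv >= MAX_RESV)
		return -1;	/* table full */
	resv[nr_resv].base = base;
	resv[nr_resv].size = size;
	nr_resv++;
	return 0;
}

/* Would an allocation at addr collide with a reserved region? */
static int is_reserved_sketch(uint64_t addr)
{
	for (int i = 0; i < nr_resv; i++)
		if (addr >= resv[i].base && addr < resv[i].base + resv[i].size)
			return 1;
	return 0;
}
```

Unlike reserve_bootmem() with BOOTMEM_EXCLUSIVE, memblock_reserve() takes just the physical base and length, which is why the three-argument call collapses to two in the diff.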
+2 -1
arch/powerpc/platforms/52xx/efika.c
··· 212 212 DMA_MODE_READ = 0x44; 213 213 DMA_MODE_WRITE = 0x48; 214 214 215 + pm_power_off = rtas_power_off; 216 + 215 217 return 1; 216 218 } 217 219 ··· 227 225 .init_IRQ = mpc52xx_init_irq, 228 226 .get_irq = mpc52xx_get_irq, 229 227 .restart = rtas_restart, 230 - .power_off = rtas_power_off, 231 228 .halt = rtas_halt, 232 229 .set_rtc_time = rtas_set_rtc_time, 233 230 .get_rtc_time = rtas_get_rtc_time,
+4 -4
arch/powerpc/platforms/83xx/mcu_mpc8349emitx.c
··· 167 167 if (ret) 168 168 goto err; 169 169 170 - /* XXX: this is potentially racy, but there is no lock for ppc_md */ 171 - if (!ppc_md.power_off) { 170 + /* XXX: this is potentially racy, but there is no lock for pm_power_off */ 171 + if (!pm_power_off) { 172 172 glob_mcu = mcu; 173 - ppc_md.power_off = mcu_power_off; 173 + pm_power_off = mcu_power_off; 174 174 dev_info(&client->dev, "will provide power-off service\n"); 175 175 } 176 176 ··· 197 197 device_remove_file(&client->dev, &dev_attr_status); 198 198 199 199 if (glob_mcu == mcu) { 200 - ppc_md.power_off = NULL; 200 + pm_power_off = NULL; 201 201 glob_mcu = NULL; 202 202 } 203 203
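This driver, like the platform patches around it, moves its handler from ppc_md.power_off to the kernel-wide pm_power_off function pointer, keeping the unlocked "only claim if unset" check that the XXX comment flags as racy. A minimal sketch of the registration pattern (the `_sketch` names are illustrative):

```c
#include <stddef.h>

static void (*pm_power_off_sketch)(void);	/* kernel-wide hook, initially NULL */
static int powered_off;

static void mcu_power_off_sketch(void) { powered_off = 1; }

/* Claim the power-off hook only if nobody else has (unlocked, hence the
 * potential race the driver's comment mentions). */
static int register_power_off(void (*fn)(void))
{
	if (pm_power_off_sketch)
		return -1;	/* someone already provides power-off */
	pm_power_off_sketch = fn;
	return 0;
}

/* What the generic machine_power_off() path does with the pointer. */
static void machine_power_off_sketch(void)
{
	if (pm_power_off_sketch)
		pm_power_off_sketch();
}
```

The attraction of the generic pointer is that drivers can set and clear it at probe/remove time without touching the arch-specific ppc_md structure.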
+1 -1
arch/powerpc/platforms/85xx/corenet_generic.c
··· 170 170 171 171 ppc_md.get_irq = ehv_pic_get_irq; 172 172 ppc_md.restart = fsl_hv_restart; 173 - ppc_md.power_off = fsl_hv_halt; 173 + pm_power_off = fsl_hv_halt; 174 174 ppc_md.halt = fsl_hv_halt; 175 175 #ifdef CONFIG_SMP 176 176 /*
+2 -2
arch/powerpc/platforms/85xx/sgy_cts1000.c
··· 120 120 121 121 /* Register our halt function */ 122 122 ppc_md.halt = gpio_halt_cb; 123 - ppc_md.power_off = gpio_halt_cb; 123 + pm_power_off = gpio_halt_cb; 124 124 125 125 printk(KERN_INFO "gpio-halt: registered GPIO %d (%d trigger, %d" 126 126 " irq).\n", gpio, trigger, irq); ··· 137 137 free_irq(irq, halt_node); 138 138 139 139 ppc_md.halt = NULL; 140 - ppc_md.power_off = NULL; 140 + pm_power_off = NULL; 141 141 142 142 gpio_free(gpio); 143 143
-4
arch/powerpc/platforms/8xx/Kconfig
··· 1 - config FADS 2 - bool 3 - 4 1 config CPM1 5 2 bool 6 3 select CPM ··· 10 13 11 14 config MPC8XXFADS 12 15 bool "FADS" 13 - select FADS 14 16 15 17 config MPC86XADS 16 18 bool "MPC86XADS"
+2 -2
arch/powerpc/platforms/cell/beat_htab.c
··· 186 186 unsigned long newpp, 187 187 unsigned long vpn, 188 188 int psize, int apsize, 189 - int ssize, int local) 189 + int ssize, unsigned long flags) 190 190 { 191 191 unsigned long lpar_rc; 192 192 u64 dummy0, dummy1; ··· 369 369 unsigned long newpp, 370 370 unsigned long vpn, 371 371 int psize, int apsize, 372 - int ssize, int local) 372 + int ssize, unsigned long flags) 373 373 { 374 374 unsigned long lpar_rc; 375 375 unsigned long want_v;
+3 -3
arch/powerpc/platforms/cell/celleb_pci.c
··· 29 29 #include <linux/pci.h> 30 30 #include <linux/string.h> 31 31 #include <linux/init.h> 32 - #include <linux/bootmem.h> 32 + #include <linux/memblock.h> 33 33 #include <linux/pci_regs.h> 34 34 #include <linux/of.h> 35 35 #include <linux/of_device.h> ··· 401 401 } else { 402 402 if (config && *config) { 403 403 size = 256; 404 - free_bootmem(__pa(*config), size); 404 + memblock_free(__pa(*config), size); 405 405 } 406 406 if (res && *res) { 407 407 size = sizeof(struct celleb_pci_resource); 408 - free_bootmem(__pa(*res), size); 408 + memblock_free(__pa(*res), size); 409 409 } 410 410 } 411 411
-1
arch/powerpc/platforms/cell/celleb_scc_epci.c
··· 25 25 #include <linux/pci.h> 26 26 #include <linux/init.h> 27 27 #include <linux/pci_regs.h> 28 - #include <linux/bootmem.h> 29 28 30 29 #include <asm/io.h> 31 30 #include <asm/irq.h>
-1
arch/powerpc/platforms/cell/celleb_scc_pciex.c
··· 25 25 #include <linux/string.h> 26 26 #include <linux/slab.h> 27 27 #include <linux/init.h> 28 - #include <linux/bootmem.h> 29 28 #include <linux/delay.h> 30 29 #include <linux/interrupt.h> 31 30
+2 -2
arch/powerpc/platforms/cell/celleb_setup.c
··· 142 142 powerpc_firmware_features |= FW_FEATURE_CELLEB_ALWAYS 143 143 | FW_FEATURE_BEAT | FW_FEATURE_LPAR; 144 144 hpte_init_beat_v3(); 145 + pm_power_off = beat_power_off; 145 146 146 147 return 1; 147 148 } ··· 191 190 192 191 powerpc_firmware_features |= FW_FEATURE_CELLEB_ALWAYS; 193 192 hpte_init_native(); 193 + pm_power_off = rtas_power_off; 194 194 195 195 return 1; 196 196 } ··· 206 204 .setup_arch = celleb_setup_arch_beat, 207 205 .show_cpuinfo = celleb_show_cpuinfo, 208 206 .restart = beat_restart, 209 - .power_off = beat_power_off, 210 207 .halt = beat_halt, 211 208 .get_rtc_time = beat_get_rtc_time, 212 209 .set_rtc_time = beat_set_rtc_time, ··· 231 230 .setup_arch = celleb_setup_arch_native, 232 231 .show_cpuinfo = celleb_show_cpuinfo, 233 232 .restart = rtas_restart, 234 - .power_off = rtas_power_off, 235 233 .halt = rtas_halt, 236 234 .get_boot_time = rtas_get_boot_time, 237 235 .get_rtc_time = rtas_get_rtc_time,
+3 -3
arch/powerpc/platforms/cell/interrupt.c
··· 82 82 83 83 static void iic_eoi(struct irq_data *d) 84 84 { 85 - struct iic *iic = &__get_cpu_var(cpu_iic); 85 + struct iic *iic = this_cpu_ptr(&cpu_iic); 86 86 out_be64(&iic->regs->prio, iic->eoi_stack[--iic->eoi_ptr]); 87 87 BUG_ON(iic->eoi_ptr < 0); 88 88 } ··· 148 148 struct iic *iic; 149 149 unsigned int virq; 150 150 151 - iic = &__get_cpu_var(cpu_iic); 151 + iic = this_cpu_ptr(&cpu_iic); 152 152 *(unsigned long *) &pending = 153 153 in_be64((u64 __iomem *) &iic->regs->pending_destr); 154 154 if (!(pending.flags & CBE_IIC_IRQ_VALID)) ··· 163 163 164 164 void iic_setup_cpu(void) 165 165 { 166 - out_be64(&__get_cpu_var(cpu_iic).regs->prio, 0xff); 166 + out_be64(&this_cpu_ptr(&cpu_iic)->regs->prio, 0xff); 167 167 } 168 168 169 169 u8 iic_get_target_id(int cpu)
+1 -1
arch/powerpc/platforms/cell/qpace_setup.c
··· 127 127 return 0; 128 128 129 129 hpte_init_native(); 130 + pm_power_off = rtas_power_off; 130 131 131 132 return 1; 132 133 } ··· 138 137 .setup_arch = qpace_setup_arch, 139 138 .show_cpuinfo = qpace_show_cpuinfo, 140 139 .restart = rtas_restart, 141 - .power_off = rtas_power_off, 142 140 .halt = rtas_halt, 143 141 .get_boot_time = rtas_get_boot_time, 144 142 .get_rtc_time = rtas_get_rtc_time,
+1 -1
arch/powerpc/platforms/cell/setup.c
··· 259 259 return 0; 260 260 261 261 hpte_init_native(); 262 + pm_power_off = rtas_power_off; 262 263 263 264 return 1; 264 265 } ··· 270 269 .setup_arch = cell_setup_arch, 271 270 .show_cpuinfo = cell_show_cpuinfo, 272 271 .restart = rtas_restart, 273 - .power_off = rtas_power_off, 274 272 .halt = rtas_halt, 275 273 .get_boot_time = rtas_get_boot_time, 276 274 .get_rtc_time = rtas_get_rtc_time,
+3 -2
arch/powerpc/platforms/cell/spu_base.c
··· 181 181 return 0; 182 182 } 183 183 184 - extern int hash_page(unsigned long ea, unsigned long access, unsigned long trap); //XXX 184 + extern int hash_page(unsigned long ea, unsigned long access, 185 + unsigned long trap, unsigned long dsisr); //XXX 185 186 static int __spu_trap_data_map(struct spu *spu, unsigned long ea, u64 dsisr) 186 187 { 187 188 int ret; ··· 197 196 (REGION_ID(ea) != USER_REGION_ID)) { 198 197 199 198 spin_unlock(&spu->register_lock); 200 - ret = hash_page(ea, _PAGE_PRESENT, 0x300); 199 + ret = hash_page(ea, _PAGE_PRESENT, 0x300, dsisr); 201 200 spin_lock(&spu->register_lock); 202 201 203 202 if (!ret) {
+1 -1
arch/powerpc/platforms/cell/spufs/fault.c
··· 144 144 access = (_PAGE_PRESENT | _PAGE_USER); 145 145 access |= (dsisr & MFC_DSISR_ACCESS_PUT) ? _PAGE_RW : 0UL; 146 146 local_irq_save(flags); 147 - ret = hash_page(ea, access, 0x300); 147 + ret = hash_page(ea, access, 0x300, dsisr); 148 148 local_irq_restore(flags); 149 149 150 150 /* hashing failed, so try the actual fault handler */
+2 -1
arch/powerpc/platforms/chrp/setup.c
··· 585 585 DMA_MODE_READ = 0x44; 586 586 DMA_MODE_WRITE = 0x48; 587 587 588 + pm_power_off = rtas_power_off; 589 + 588 590 return 1; 589 591 } 590 592 ··· 599 597 .show_cpuinfo = chrp_show_cpuinfo, 600 598 .init_IRQ = chrp_init_IRQ, 601 599 .restart = rtas_restart, 602 - .power_off = rtas_power_off, 603 600 .halt = rtas_halt, 604 601 .time_init = chrp_time_init, 605 602 .set_rtc_time = chrp_set_rtc_time,
+2 -1
arch/powerpc/platforms/embedded6xx/gamecube.c
··· 67 67 if (!of_flat_dt_is_compatible(dt_root, "nintendo,gamecube")) 68 68 return 0; 69 69 70 + pm_power_off = gamecube_power_off; 71 + 70 72 return 1; 71 73 } 72 74 ··· 82 80 .probe = gamecube_probe, 83 81 .init_early = gamecube_init_early, 84 82 .restart = gamecube_restart, 85 - .power_off = gamecube_power_off, 86 83 .halt = gamecube_halt, 87 84 .init_IRQ = flipper_pic_probe, 88 85 .get_irq = flipper_pic_get_irq,
+3 -1
arch/powerpc/platforms/embedded6xx/linkstation.c
··· 147 147 148 148 if (!of_flat_dt_is_compatible(root, "linkstation")) 149 149 return 0; 150 + 151 + pm_power_off = linkstation_power_off; 152 + 150 153 return 1; 151 154 } 152 155 ··· 161 158 .show_cpuinfo = linkstation_show_cpuinfo, 162 159 .get_irq = mpic_get_irq, 163 160 .restart = linkstation_restart, 164 - .power_off = linkstation_power_off, 165 161 .halt = linkstation_halt, 166 162 .calibrate_decr = generic_calibrate_decr, 167 163 };
+3 -3
arch/powerpc/platforms/embedded6xx/usbgecko_udbg.c
··· 247 247 np = of_find_compatible_node(NULL, NULL, "nintendo,flipper-exi"); 248 248 if (!np) { 249 249 udbg_printf("%s: EXI node not found\n", __func__); 250 - goto done; 250 + goto out; 251 251 } 252 252 253 253 exi_io_base = ug_udbg_setup_exi_io_base(np); ··· 267 267 } 268 268 269 269 done: 270 - if (np) 271 - of_node_put(np); 270 + of_node_put(np); 271 + out: 272 272 return; 273 273 } 274 274
+2 -1
arch/powerpc/platforms/embedded6xx/wii.c
··· 211 211 if (!of_flat_dt_is_compatible(dt_root, "nintendo,wii")) 212 212 return 0; 213 213 214 + pm_power_off = wii_power_off; 215 + 214 216 return 1; 215 217 } 216 218 ··· 228 226 .init_early = wii_init_early, 229 227 .setup_arch = wii_setup_arch, 230 228 .restart = wii_restart, 231 - .power_off = wii_power_off, 232 229 .halt = wii_halt, 233 230 .init_IRQ = wii_pic_probe, 234 231 .get_irq = flipper_pic_get_irq,
-1
arch/powerpc/platforms/maple/pci.c
··· 15 15 #include <linux/delay.h> 16 16 #include <linux/string.h> 17 17 #include <linux/init.h> 18 - #include <linux/bootmem.h> 19 18 #include <linux/irq.h> 20 19 21 20 #include <asm/sections.h>
+2 -2
arch/powerpc/platforms/maple/setup.c
··· 169 169 if (rtas_service_present("system-reboot") && 170 170 rtas_service_present("power-off")) { 171 171 ppc_md.restart = rtas_restart; 172 - ppc_md.power_off = rtas_power_off; 172 + pm_power_off = rtas_power_off; 173 173 ppc_md.halt = rtas_halt; 174 174 } 175 175 } ··· 312 312 alloc_dart_table(); 313 313 314 314 hpte_init_native(); 315 + pm_power_off = maple_power_off; 315 316 316 317 return 1; 317 318 } ··· 326 325 .pci_irq_fixup = maple_pci_irq_fixup, 327 326 .pci_get_legacy_ide_irq = maple_pci_get_legacy_ide_irq, 328 327 .restart = maple_restart, 329 - .power_off = maple_power_off, 330 328 .halt = maple_halt, 331 329 .get_boot_time = maple_get_boot_time, 332 330 .set_rtc_time = maple_set_rtc_time,
+1 -5
arch/powerpc/platforms/powermac/nvram.c
··· 513 513 printk(KERN_ERR "nvram: no address\n"); 514 514 return -EINVAL; 515 515 } 516 - nvram_image = alloc_bootmem(NVRAM_SIZE); 517 - if (nvram_image == NULL) { 518 - printk(KERN_ERR "nvram: can't allocate ram image\n"); 519 - return -ENOMEM; 520 - } 516 + nvram_image = memblock_virt_alloc(NVRAM_SIZE, 0); 521 517 nvram_data = ioremap(addr, NVRAM_SIZE*2); 522 518 nvram_naddrs = 1; /* Make sure we get the correct case */ 523 519
-1
arch/powerpc/platforms/powermac/pci.c
··· 15 15 #include <linux/delay.h> 16 16 #include <linux/string.h> 17 17 #include <linux/init.h> 18 - #include <linux/bootmem.h> 19 18 #include <linux/irq.h> 20 19 #include <linux/of_pci.h> 21 20
+2 -1
arch/powerpc/platforms/powermac/setup.c
··· 632 632 smu_cmdbuf_abs = memblock_alloc_base(4096, 4096, 0x80000000UL); 633 633 #endif /* CONFIG_PMAC_SMU */ 634 634 635 + pm_power_off = pmac_power_off; 636 + 635 637 return 1; 636 638 } 637 639 ··· 665 663 .get_irq = NULL, /* changed later */ 666 664 .pci_irq_fixup = pmac_pci_irq_fixup, 667 665 .restart = pmac_restart, 668 - .power_off = pmac_power_off, 669 666 .halt = pmac_halt, 670 667 .time_init = pmac_time_init, 671 668 .get_boot_time = pmac_get_boot_time,
+13 -3
arch/powerpc/platforms/powernv/eeh-ioda.c
··· 11 11 * (at your option) any later version. 12 12 */ 13 13 14 - #include <linux/bootmem.h> 15 14 #include <linux/debugfs.h> 16 15 #include <linux/delay.h> 17 16 #include <linux/io.h> ··· 353 354 } else if (!(pe->state & EEH_PE_ISOLATED)) { 354 355 eeh_pe_state_mark(pe, EEH_PE_ISOLATED); 355 356 ioda_eeh_phb_diag(pe); 357 + 358 + if (eeh_has_flag(EEH_EARLY_DUMP_LOG)) 359 + pnv_pci_dump_phb_diag_data(pe->phb, pe->data); 356 360 } 357 361 358 362 return result; ··· 375 373 * moving forward, we have to return operational 376 374 * state during PE reset. 377 375 */ 378 - if (pe->state & EEH_PE_CFG_BLOCKED) { 376 + if (pe->state & EEH_PE_RESET) { 379 377 result = (EEH_STATE_MMIO_ACTIVE | 380 378 EEH_STATE_DMA_ACTIVE | 381 379 EEH_STATE_MMIO_ENABLED | ··· 454 452 455 453 eeh_pe_state_mark(pe, EEH_PE_ISOLATED); 456 454 ioda_eeh_phb_diag(pe); 455 + 456 + if (eeh_has_flag(EEH_EARLY_DUMP_LOG)) 457 + pnv_pci_dump_phb_diag_data(pe->phb, pe->data); 457 458 } 458 459 459 460 return result; ··· 736 731 static int ioda_eeh_get_log(struct eeh_pe *pe, int severity, 737 732 char *drv_log, unsigned long len) 738 733 { 739 - pnv_pci_dump_phb_diag_data(pe->phb, pe->data); 734 + if (!eeh_has_flag(EEH_EARLY_DUMP_LOG)) 735 + pnv_pci_dump_phb_diag_data(pe->phb, pe->data); 740 736 741 737 return 0; 742 738 } ··· 1093 1087 !((*pe)->state & EEH_PE_ISOLATED)) { 1094 1088 eeh_pe_state_mark(*pe, EEH_PE_ISOLATED); 1095 1089 ioda_eeh_phb_diag(*pe); 1090 + 1091 + if (eeh_has_flag(EEH_EARLY_DUMP_LOG)) 1092 + pnv_pci_dump_phb_diag_data((*pe)->phb, 1093 + (*pe)->data); 1096 1094 } 1097 1095 1098 1096 /*
+3
arch/powerpc/platforms/powernv/opal-async.c
··· 71 71 72 72 return token; 73 73 } 74 + EXPORT_SYMBOL_GPL(opal_async_get_token_interruptible); 74 75 75 76 int __opal_async_release_token(int token) 76 77 { ··· 103 102 104 103 return 0; 105 104 } 105 + EXPORT_SYMBOL_GPL(opal_async_release_token); 106 106 107 107 int opal_async_wait_response(uint64_t token, struct opal_msg *msg) 108 108 { ··· 122 120 123 121 return 0; 124 122 } 123 + EXPORT_SYMBOL_GPL(opal_async_wait_response); 125 124 126 125 static int opal_async_comp_event(struct notifier_block *nb, 127 126 unsigned long msg_type, void *msg)
+20 -47
arch/powerpc/platforms/powernv/opal-rtc.c
··· 15 15 #include <linux/bcd.h> 16 16 #include <linux/rtc.h> 17 17 #include <linux/delay.h> 18 + #include <linux/platform_device.h> 19 + #include <linux/of_platform.h> 18 20 19 21 #include <asm/opal.h> 20 22 #include <asm/firmware.h> ··· 45 43 long rc = OPAL_BUSY; 46 44 47 45 if (!opal_check_token(OPAL_RTC_READ)) 48 - goto out; 46 + return 0; 49 47 50 48 while (rc == OPAL_BUSY || rc == OPAL_BUSY_EVENT) { 51 49 rc = opal_rtc_read(&__y_m_d, &__h_m_s_ms); ··· 55 53 mdelay(10); 56 54 } 57 55 if (rc != OPAL_SUCCESS) 58 - goto out; 56 + return 0; 59 57 60 58 y_m_d = be32_to_cpu(__y_m_d); 61 59 h_m_s_ms = be64_to_cpu(__h_m_s_ms); 62 60 opal_to_tm(y_m_d, h_m_s_ms, &tm); 63 61 return mktime(tm.tm_year + 1900, tm.tm_mon + 1, tm.tm_mday, 64 62 tm.tm_hour, tm.tm_min, tm.tm_sec); 65 - out: 66 - ppc_md.get_rtc_time = NULL; 67 - ppc_md.set_rtc_time = NULL; 68 - return 0; 69 63 } 70 64 71 - void opal_get_rtc_time(struct rtc_time *tm) 65 + static __init int opal_time_init(void) 72 66 { 73 - long rc = OPAL_BUSY; 74 - u32 y_m_d; 75 - u64 h_m_s_ms; 76 - __be32 __y_m_d; 77 - __be64 __h_m_s_ms; 67 + struct platform_device *pdev; 68 + struct device_node *rtc; 78 69 79 - while (rc == OPAL_BUSY || rc == OPAL_BUSY_EVENT) { 80 - rc = opal_rtc_read(&__y_m_d, &__h_m_s_ms); 81 - if (rc == OPAL_BUSY_EVENT) 82 - opal_poll_events(NULL); 70 + rtc = of_find_node_by_path("/ibm,opal/rtc"); 71 + if (rtc) { 72 + pdev = of_platform_device_create(rtc, "opal-rtc", NULL); 73 + of_node_put(rtc); 74 + } else { 75 + if (opal_check_token(OPAL_RTC_READ) || 76 + opal_check_token(OPAL_READ_TPO)) 77 + pdev = platform_device_register_simple("opal-rtc", -1, 78 + NULL, 0); 83 79 else 84 - mdelay(10); 80 + return -ENODEV; 85 81 } 86 - if (rc != OPAL_SUCCESS) 87 - return; 88 - y_m_d = be32_to_cpu(__y_m_d); 89 - h_m_s_ms = be64_to_cpu(__h_m_s_ms); 90 - opal_to_tm(y_m_d, h_m_s_ms, tm); 82 + 83 + return PTR_ERR_OR_ZERO(pdev); 91 84 } 92 - 93 - int opal_set_rtc_time(struct rtc_time *tm) 94 - { 95 - long rc = OPAL_BUSY; 96 
- u32 y_m_d = 0; 97 - u64 h_m_s_ms = 0; 98 - 99 - y_m_d |= ((u32)bin2bcd((tm->tm_year + 1900) / 100)) << 24; 100 - y_m_d |= ((u32)bin2bcd((tm->tm_year + 1900) % 100)) << 16; 101 - y_m_d |= ((u32)bin2bcd((tm->tm_mon + 1))) << 8; 102 - y_m_d |= ((u32)bin2bcd(tm->tm_mday)); 103 - 104 - h_m_s_ms |= ((u64)bin2bcd(tm->tm_hour)) << 56; 105 - h_m_s_ms |= ((u64)bin2bcd(tm->tm_min)) << 48; 106 - h_m_s_ms |= ((u64)bin2bcd(tm->tm_sec)) << 40; 107 - 108 - while (rc == OPAL_BUSY || rc == OPAL_BUSY_EVENT) { 109 - rc = opal_rtc_write(y_m_d, h_m_s_ms); 110 - if (rc == OPAL_BUSY_EVENT) 111 - opal_poll_events(NULL); 112 - else 113 - mdelay(10); 114 - } 115 - return rc == OPAL_SUCCESS ? 0 : -EIO; 116 - } 85 + machine_subsys_initcall(powernv, opal_time_init);
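The new opal_time_init() returns PTR_ERR_OR_ZERO(pdev): 0 for a valid pointer, or the encoded errno for an ERR_PTR. A user-space re-derivation of that encoding, simplified from the scheme in include/linux/err.h (the `_sketch` suffixes are mine):

```c
#define MAX_ERRNO 4095

/* Errors are encoded as pointers into the top MAX_ERRNO bytes of the
 * address space, which no real allocation can occupy. */
static inline void *ERR_PTR_sketch(long err) { return (void *)err; }
static inline long PTR_ERR_sketch(const void *p) { return (long)p; }

static inline int IS_ERR_sketch(const void *p)
{
	return (unsigned long)p >= (unsigned long)-MAX_ERRNO;
}

/* 0 for a good pointer, the negative errno for an error pointer. */
static inline long PTR_ERR_OR_ZERO_sketch(const void *p)
{
	return IS_ERR_sketch(p) ? PTR_ERR_sketch(p) : 0;
}
```

This is why the initcall can return the result of of_platform_device_create()/platform_device_register_simple() directly: a successful registration maps to 0 without a separate NULL check.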
+2 -2
arch/powerpc/platforms/powernv/opal-tracepoints.c
··· 48 48 49 49 local_irq_save(flags); 50 50 51 - depth = &__get_cpu_var(opal_trace_depth); 51 + depth = this_cpu_ptr(&opal_trace_depth); 52 52 53 53 if (*depth) 54 54 goto out; ··· 69 69 70 70 local_irq_save(flags); 71 71 72 - depth = &__get_cpu_var(opal_trace_depth); 72 + depth = this_cpu_ptr(&opal_trace_depth); 73 73 74 74 if (*depth) 75 75 goto out;
+5 -1
arch/powerpc/platforms/powernv/opal-wrappers.S
··· 18 18 .section ".text" 19 19 20 20 #ifdef CONFIG_TRACEPOINTS 21 - #ifdef CONFIG_JUMP_LABEL 21 + #ifdef HAVE_JUMP_LABEL 22 22 #define OPAL_BRANCH(LABEL) \ 23 23 ARCH_STATIC_BRANCH(LABEL, opal_tracepoint_key) 24 24 #else ··· 250 250 OPAL_CALL(opal_register_dump_region, OPAL_REGISTER_DUMP_REGION); 251 251 OPAL_CALL(opal_unregister_dump_region, OPAL_UNREGISTER_DUMP_REGION); 252 252 OPAL_CALL(opal_pci_set_phb_cxl_mode, OPAL_PCI_SET_PHB_CXL_MODE); 253 + OPAL_CALL(opal_tpo_write, OPAL_WRITE_TPO); 254 + OPAL_CALL(opal_tpo_read, OPAL_READ_TPO); 255 + OPAL_CALL(opal_ipmi_send, OPAL_IPMI_SEND); 256 + OPAL_CALL(opal_ipmi_recv, OPAL_IPMI_RECV);
+20 -1
arch/powerpc/platforms/powernv/opal.c
··· 50 50 51 51 struct device_node *opal_node; 52 52 static DEFINE_SPINLOCK(opal_write_lock); 53 - extern u64 opal_mc_secondary_handler[]; 54 53 static unsigned int *opal_irqs; 55 54 static unsigned int opal_irq_count; 56 55 static ATOMIC_NOTIFIER_HEAD(opal_notifier_head); ··· 643 644 pr_warn("DUMP: Failed to register kernel log buffer. " 644 645 "rc = %d\n", rc); 645 646 } 647 + 648 + static void opal_ipmi_init(struct device_node *opal_node) 649 + { 650 + struct device_node *np; 651 + 652 + for_each_child_of_node(opal_node, np) 653 + if (of_device_is_compatible(np, "ibm,opal-ipmi")) 654 + of_platform_device_create(np, NULL, NULL); 655 + } 656 + 646 657 static int __init opal_init(void) 647 658 { 648 659 struct device_node *np, *consoles; ··· 716 707 opal_msglog_init(); 717 708 } 718 709 710 + opal_ipmi_init(opal_node); 711 + 719 712 return 0; 720 713 } 721 714 machine_subsys_initcall(powernv, opal_init); ··· 753 742 754 743 /* Export this so that test modules can use it */ 755 744 EXPORT_SYMBOL_GPL(opal_invalid_call); 745 + EXPORT_SYMBOL_GPL(opal_ipmi_send); 746 + EXPORT_SYMBOL_GPL(opal_ipmi_recv); 756 747 757 748 /* Convert a region of vmalloc memory to an opal sg list */ 758 749 struct opal_sg_list *opal_vmalloc_to_sg_list(void *vmalloc_addr, ··· 818 805 sg = NULL; 819 806 } 820 807 } 808 + 809 + EXPORT_SYMBOL_GPL(opal_poll_events); 810 + EXPORT_SYMBOL_GPL(opal_rtc_read); 811 + EXPORT_SYMBOL_GPL(opal_rtc_write); 812 + EXPORT_SYMBOL_GPL(opal_tpo_read); 813 + EXPORT_SYMBOL_GPL(opal_tpo_write);
+158 -59
arch/powerpc/platforms/powernv/pci-ioda.c
··· 91 91 (IORESOURCE_MEM_64 | IORESOURCE_PREFETCH)); 92 92 } 93 93 94 + static void pnv_ioda_reserve_pe(struct pnv_phb *phb, int pe_no) 95 + { 96 + if (!(pe_no >= 0 && pe_no < phb->ioda.total_pe)) { 97 + pr_warn("%s: Invalid PE %d on PHB#%x\n", 98 + __func__, pe_no, phb->hose->global_number); 99 + return; 100 + } 101 + 102 + if (test_and_set_bit(pe_no, phb->ioda.pe_alloc)) { 103 + pr_warn("%s: PE %d was assigned on PHB#%x\n", 104 + __func__, pe_no, phb->hose->global_number); 105 + return; 106 + } 107 + 108 + phb->ioda.pe_array[pe_no].phb = phb; 109 + phb->ioda.pe_array[pe_no].pe_number = pe_no; 110 + } 111 + 94 112 static int pnv_ioda_alloc_pe(struct pnv_phb *phb) 95 113 { 96 114 unsigned long pe; ··· 190 172 return -EIO; 191 173 } 192 174 193 - static void pnv_ioda2_alloc_m64_pe(struct pnv_phb *phb) 175 + static void pnv_ioda2_reserve_m64_pe(struct pnv_phb *phb) 194 176 { 195 177 resource_size_t sgsz = phb->ioda.m64_segsize; 196 178 struct pci_dev *pdev; ··· 203 185 * instead of root bus. 
204 186 */ 205 187 list_for_each_entry(pdev, &phb->hose->bus->devices, bus_list) { 206 - for (i = PCI_BRIDGE_RESOURCES; 207 - i <= PCI_BRIDGE_RESOURCE_END; i++) { 208 - r = &pdev->resource[i]; 188 + for (i = 0; i < PCI_BRIDGE_RESOURCE_NUM; i++) { 189 + r = &pdev->resource[PCI_BRIDGE_RESOURCES + i]; 209 190 if (!r->parent || 210 191 !pnv_pci_is_mem_pref_64(r->flags)) 211 192 continue; 212 193 213 194 base = (r->start - phb->ioda.m64_base) / sgsz; 214 195 for (step = 0; step < resource_size(r) / sgsz; step++) 215 - set_bit(base + step, phb->ioda.pe_alloc); 196 + pnv_ioda_reserve_pe(phb, base + step); 216 197 } 217 198 } 218 199 } ··· 304 287 while ((i = find_next_bit(pe_alloc, phb->ioda.total_pe, i + 1)) < 305 288 phb->ioda.total_pe) { 306 289 pe = &phb->ioda.pe_array[i]; 307 - pe->phb = phb; 308 - pe->pe_number = i; 309 290 310 291 if (!master_pe) { 311 292 pe->flags |= PNV_IODA_PE_MASTER; ··· 328 313 const u32 *r; 329 314 u64 pci_addr; 330 315 316 + /* FIXME: Support M64 for P7IOC */ 317 + if (phb->type != PNV_PHB_IODA2) { 318 + pr_info(" Not support M64 window\n"); 319 + return; 320 + } 321 + 331 322 if (!firmware_has_feature(FW_FEATURE_OPALv3)) { 332 323 pr_info(" Firmware too old to support M64 window\n"); 333 324 return; ··· 343 322 if (!r) { 344 323 pr_info(" No <ibm,opal-m64-window> on %s\n", 345 324 dn->full_name); 346 - return; 347 - } 348 - 349 - /* FIXME: Support M64 for P7IOC */ 350 - if (phb->type != PNV_PHB_IODA2) { 351 - pr_info(" Not support M64 window\n"); 352 325 return; 353 326 } 354 327 ··· 360 345 /* Use last M64 BAR to cover M64 window */ 361 346 phb->ioda.m64_bar_idx = 15; 362 347 phb->init_m64 = pnv_ioda2_init_m64; 363 - phb->alloc_m64_pe = pnv_ioda2_alloc_m64_pe; 348 + phb->reserve_m64_pe = pnv_ioda2_reserve_m64_pe; 364 349 phb->pick_m64_pe = pnv_ioda2_pick_m64_pe; 365 350 } 366 351 ··· 373 358 /* Fetch master PE */ 374 359 if (pe->flags & PNV_IODA_PE_SLAVE) { 375 360 pe = pe->master; 376 - WARN_ON(!pe || !(pe->flags & PNV_IODA_PE_MASTER)); 
361 + if (WARN_ON(!pe || !(pe->flags & PNV_IODA_PE_MASTER))) 362 + return; 363 + 377 364 pe_no = pe->pe_number; 378 365 } 379 366 ··· 524 507 } 525 508 #endif /* CONFIG_PCI_MSI */ 526 509 510 + static int pnv_ioda_set_one_peltv(struct pnv_phb *phb, 511 + struct pnv_ioda_pe *parent, 512 + struct pnv_ioda_pe *child, 513 + bool is_add) 514 + { 515 + const char *desc = is_add ? "adding" : "removing"; 516 + uint8_t op = is_add ? OPAL_ADD_PE_TO_DOMAIN : 517 + OPAL_REMOVE_PE_FROM_DOMAIN; 518 + struct pnv_ioda_pe *slave; 519 + long rc; 520 + 521 + /* Parent PE affects child PE */ 522 + rc = opal_pci_set_peltv(phb->opal_id, parent->pe_number, 523 + child->pe_number, op); 524 + if (rc != OPAL_SUCCESS) { 525 + pe_warn(child, "OPAL error %ld %s to parent PELTV\n", 526 + rc, desc); 527 + return -ENXIO; 528 + } 529 + 530 + if (!(child->flags & PNV_IODA_PE_MASTER)) 531 + return 0; 532 + 533 + /* Compound case: parent PE affects slave PEs */ 534 + list_for_each_entry(slave, &child->slaves, list) { 535 + rc = opal_pci_set_peltv(phb->opal_id, parent->pe_number, 536 + slave->pe_number, op); 537 + if (rc != OPAL_SUCCESS) { 538 + pe_warn(slave, "OPAL error %ld %s to parent PELTV\n", 539 + rc, desc); 540 + return -ENXIO; 541 + } 542 + } 543 + 544 + return 0; 545 + } 546 + 547 + static int pnv_ioda_set_peltv(struct pnv_phb *phb, 548 + struct pnv_ioda_pe *pe, 549 + bool is_add) 550 + { 551 + struct pnv_ioda_pe *slave; 552 + struct pci_dev *pdev; 553 + int ret; 554 + 555 + /* 556 + * Clear PE frozen state. If it's master PE, we need 557 + * clear slave PE frozen state as well. 558 + */ 559 + if (is_add) { 560 + opal_pci_eeh_freeze_clear(phb->opal_id, pe->pe_number, 561 + OPAL_EEH_ACTION_CLEAR_FREEZE_ALL); 562 + if (pe->flags & PNV_IODA_PE_MASTER) { 563 + list_for_each_entry(slave, &pe->slaves, list) 564 + opal_pci_eeh_freeze_clear(phb->opal_id, 565 + slave->pe_number, 566 + OPAL_EEH_ACTION_CLEAR_FREEZE_ALL); 567 + } 568 + } 569 + 570 + /* 571 + * Associate PE in PELT. 
We need add the PE into the 572 + * corresponding PELT-V as well. Otherwise, the error 573 + * originated from the PE might contribute to other 574 + * PEs. 575 + */ 576 + ret = pnv_ioda_set_one_peltv(phb, pe, pe, is_add); 577 + if (ret) 578 + return ret; 579 + 580 + /* For compound PEs, any one affects all of them */ 581 + if (pe->flags & PNV_IODA_PE_MASTER) { 582 + list_for_each_entry(slave, &pe->slaves, list) { 583 + ret = pnv_ioda_set_one_peltv(phb, slave, pe, is_add); 584 + if (ret) 585 + return ret; 586 + } 587 + } 588 + 589 + if (pe->flags & (PNV_IODA_PE_BUS_ALL | PNV_IODA_PE_BUS)) 590 + pdev = pe->pbus->self; 591 + else 592 + pdev = pe->pdev->bus->self; 593 + while (pdev) { 594 + struct pci_dn *pdn = pci_get_pdn(pdev); 595 + struct pnv_ioda_pe *parent; 596 + 597 + if (pdn && pdn->pe_number != IODA_INVALID_PE) { 598 + parent = &phb->ioda.pe_array[pdn->pe_number]; 599 + ret = pnv_ioda_set_one_peltv(phb, parent, pe, is_add); 600 + if (ret) 601 + return ret; 602 + } 603 + 604 + pdev = pdev->bus->self; 605 + } 606 + 607 + return 0; 608 + } 609 + 527 610 static int pnv_ioda_configure_pe(struct pnv_phb *phb, struct pnv_ioda_pe *pe) 528 611 { 529 612 struct pci_dev *parent; ··· 678 561 return -ENXIO; 679 562 } 680 563 681 - rc = opal_pci_set_peltv(phb->opal_id, pe->pe_number, 682 - pe->pe_number, OPAL_ADD_PE_TO_DOMAIN); 683 - if (rc) 684 - pe_warn(pe, "OPAL error %d adding self to PELTV\n", rc); 685 - opal_pci_eeh_freeze_clear(phb->opal_id, pe->pe_number, 686 - OPAL_EEH_ACTION_CLEAR_FREEZE_ALL); 564 + /* Configure PELTV */ 565 + pnv_ioda_set_peltv(phb, pe, true); 687 566 688 - /* Add to all parents PELT-V */ 689 - while (parent) { 690 - struct pci_dn *pdn = pci_get_pdn(parent); 691 - if (pdn && pdn->pe_number != IODA_INVALID_PE) { 692 - rc = opal_pci_set_peltv(phb->opal_id, pdn->pe_number, 693 - pe->pe_number, OPAL_ADD_PE_TO_DOMAIN); 694 - /* XXX What to do in case of error ? 
*/ 695 - } 696 - parent = parent->bus->self; 697 - } 698 567 /* Setup reverse map */ 699 568 for (rid = pe->rid; rid < rid_end; rid++) 700 569 phb->ioda.pe_rmap[rid] = pe->pe_number; 701 570 702 571 /* Setup one MVTs on IODA1 */ 703 - if (phb->type == PNV_PHB_IODA1) { 704 - pe->mve_number = pe->pe_number; 705 - rc = opal_pci_set_mve(phb->opal_id, pe->mve_number, 706 - pe->pe_number); 572 + if (phb->type != PNV_PHB_IODA1) { 573 + pe->mve_number = 0; 574 + goto out; 575 + } 576 + 577 + pe->mve_number = pe->pe_number; 578 + rc = opal_pci_set_mve(phb->opal_id, pe->mve_number, pe->pe_number); 579 + if (rc != OPAL_SUCCESS) { 580 + pe_err(pe, "OPAL error %ld setting up MVE %d\n", 581 + rc, pe->mve_number); 582 + pe->mve_number = -1; 583 + } else { 584 + rc = opal_pci_set_mve_enable(phb->opal_id, 585 + pe->mve_number, OPAL_ENABLE_MVE); 707 586 if (rc) { 708 - pe_err(pe, "OPAL error %ld setting up MVE %d\n", 587 + pe_err(pe, "OPAL error %ld enabling MVE %d\n", 709 588 rc, pe->mve_number); 710 589 pe->mve_number = -1; 711 - } else { 712 - rc = opal_pci_set_mve_enable(phb->opal_id, 713 - pe->mve_number, OPAL_ENABLE_MVE); 714 - if (rc) { 715 - pe_err(pe, "OPAL error %ld enabling MVE %d\n", 716 - rc, pe->mve_number); 717 - pe->mve_number = -1; 718 - } 719 590 } 720 - } else if (phb->type == PNV_PHB_IODA2) 721 - pe->mve_number = 0; 591 + } 722 592 593 + out: 723 594 return 0; 724 595 } 725 596 ··· 942 837 phb = hose->private_data; 943 838 944 839 /* M64 layout might affect PE allocation */ 945 - if (phb->alloc_m64_pe) 946 - phb->alloc_m64_pe(phb); 840 + if (phb->reserve_m64_pe) 841 + phb->reserve_m64_pe(phb); 947 842 948 843 pnv_ioda_setup_PEs(hose->bus); 949 844 } ··· 1939 1834 phb_id = be64_to_cpup(prop64); 1940 1835 pr_debug(" PHB-ID : 0x%016llx\n", phb_id); 1941 1836 1942 - phb = alloc_bootmem(sizeof(struct pnv_phb)); 1943 - if (!phb) { 1944 - pr_err(" Out of memory !\n"); 1945 - return; 1946 - } 1837 + phb = memblock_virt_alloc(sizeof(struct pnv_phb), 0); 1947 1838 1948 
1839 /* Allocate PCI controller */ 1949 - memset(phb, 0, sizeof(struct pnv_phb)); 1950 1840 phb->hose = hose = pcibios_alloc_controller(np); 1951 1841 if (!phb->hose) { 1952 1842 pr_err(" Can't allocate PCI controller for %s\n", 1953 1843 np->full_name); 1954 - free_bootmem((unsigned long)phb, sizeof(struct pnv_phb)); 1844 + memblock_free(__pa(phb), sizeof(struct pnv_phb)); 1955 1845 return; 1956 1846 } 1957 1847 ··· 2013 1913 } 2014 1914 pemap_off = size; 2015 1915 size += phb->ioda.total_pe * sizeof(struct pnv_ioda_pe); 2016 - aux = alloc_bootmem(size); 2017 - memset(aux, 0, size); 1916 + aux = memblock_virt_alloc(size, 0); 2018 1917 phb->ioda.pe_alloc = aux; 2019 1918 phb->ioda.m32_segmap = aux + m32map_off; 2020 1919 if (phb->type == PNV_PHB_IODA1) ··· 2098 1999 ioda_eeh_phb_reset(hose, EEH_RESET_DEACTIVATE); 2099 2000 } 2100 2001 2101 - /* Configure M64 window */ 2102 - if (phb->init_m64 && phb->init_m64(phb)) 2002 + /* Remove M64 resource if we can't configure it successfully */ 2003 + if (!phb->init_m64 || phb->init_m64(phb)) 2103 2004 hose->mem_resources[1].flags = 0; 2104 2005 } 2105 2006
+20 -24
arch/powerpc/platforms/powernv/pci-p5ioc2.c
··· 122 122 return; 123 123 } 124 124 125 - phb = alloc_bootmem(sizeof(struct pnv_phb)); 126 - if (phb) { 127 - memset(phb, 0, sizeof(struct pnv_phb)); 128 - phb->hose = pcibios_alloc_controller(np); 129 - } 130 - if (!phb || !phb->hose) { 125 + phb = memblock_virt_alloc(sizeof(struct pnv_phb), 0); 126 + phb->hose = pcibios_alloc_controller(np); 127 + if (!phb->hose) { 131 128 pr_err(" Failed to allocate PCI controller\n"); 132 129 return; 133 130 } ··· 193 196 hub_id = be64_to_cpup(prop64); 194 197 pr_info(" HUB-ID : 0x%016llx\n", hub_id); 195 198 199 + /* Count child PHBs and calculate TCE space per PHB */ 200 + for_each_child_of_node(np, phbn) { 201 + if (of_device_is_compatible(phbn, "ibm,p5ioc2-pcix") || 202 + of_device_is_compatible(phbn, "ibm,p5ioc2-pciex")) 203 + phb_count++; 204 + } 205 + 206 + if (phb_count <= 0) { 207 + pr_info(" No PHBs for Hub %s\n", np->full_name); 208 + return; 209 + } 210 + 211 + tce_per_phb = __rounddown_pow_of_two(P5IOC2_TCE_MEMORY / phb_count); 212 + pr_info(" Allocating %lld MB of TCE memory per PHB\n", 213 + tce_per_phb >> 20); 214 + 196 215 /* Currently allocate 16M of TCE memory for every Hub 197 216 * 198 217 * XXX TODO: Make it chip local if possible 199 218 */ 200 - tce_mem = __alloc_bootmem(P5IOC2_TCE_MEMORY, P5IOC2_TCE_MEMORY, 201 - __pa(MAX_DMA_ADDRESS)); 202 - if (!tce_mem) { 203 - pr_err(" Failed to allocate TCE Memory !\n"); 204 - return; 205 - } 219 + tce_mem = memblock_virt_alloc(P5IOC2_TCE_MEMORY, P5IOC2_TCE_MEMORY); 206 220 pr_debug(" TCE : 0x%016lx..0x%016lx\n", 207 221 __pa(tce_mem), __pa(tce_mem) + P5IOC2_TCE_MEMORY - 1); 208 222 rc = opal_pci_set_hub_tce_memory(hub_id, __pa(tce_mem), ··· 222 214 pr_err(" Failed to allocate TCE memory, OPAL error %lld\n", rc); 223 215 return; 224 216 } 225 - 226 - /* Count child PHBs */ 227 - for_each_child_of_node(np, phbn) { 228 - if (of_device_is_compatible(phbn, "ibm,p5ioc2-pcix") || 229 - of_device_is_compatible(phbn, "ibm,p5ioc2-pciex")) 230 - phb_count++; 231 - } 232 - 
233 - /* Calculate how much TCE space we can give per PHB */ 234 - tce_per_phb = __rounddown_pow_of_two(P5IOC2_TCE_MEMORY / phb_count); 235 - pr_info(" Allocating %lld MB of TCE memory per PHB\n", 236 - tce_per_phb >> 20); 237 217 238 218 /* Initialize PHBs */ 239 219 for_each_child_of_node(np, phbn) {
-1
arch/powerpc/platforms/powernv/pci.c
··· 16 16 #include <linux/delay.h> 17 17 #include <linux/string.h> 18 18 #include <linux/init.h> 19 - #include <linux/bootmem.h> 20 19 #include <linux/irq.h> 21 20 #include <linux/io.h> 22 21 #include <linux/msi.h>
+1 -1
arch/powerpc/platforms/powernv/pci.h
··· 130 130 u32 (*bdfn_to_pe)(struct pnv_phb *phb, struct pci_bus *bus, u32 devfn); 131 131 void (*shutdown)(struct pnv_phb *phb); 132 132 int (*init_m64)(struct pnv_phb *phb); 133 - void (*alloc_m64_pe)(struct pnv_phb *phb); 133 + void (*reserve_m64_pe)(struct pnv_phb *phb); 134 134 int (*pick_m64_pe)(struct pnv_phb *phb, struct pci_bus *bus, int all); 135 135 int (*get_pe_state)(struct pnv_phb *phb, int pe_no); 136 136 void (*freeze_pe)(struct pnv_phb *phb, int pe_no);
+2 -4
arch/powerpc/platforms/powernv/setup.c
··· 265 265 static void __init pnv_setup_machdep_opal(void) 266 266 { 267 267 ppc_md.get_boot_time = opal_get_boot_time; 268 - ppc_md.get_rtc_time = opal_get_rtc_time; 269 - ppc_md.set_rtc_time = opal_set_rtc_time; 270 268 ppc_md.restart = pnv_restart; 271 - ppc_md.power_off = pnv_power_off; 269 + pm_power_off = pnv_power_off; 272 270 ppc_md.halt = pnv_halt; 273 271 ppc_md.machine_check_exception = opal_machine_check; 274 272 ppc_md.mce_check_early_recovery = opal_mce_check_early_recovery; ··· 283 285 ppc_md.set_rtc_time = rtas_set_rtc_time; 284 286 } 285 287 ppc_md.restart = rtas_restart; 286 - ppc_md.power_off = rtas_power_off; 288 + pm_power_off = rtas_power_off; 287 289 ppc_md.halt = rtas_halt; 288 290 } 289 291 #endif /* CONFIG_PPC_POWERNV_RTAS */
+18 -5
arch/powerpc/platforms/powernv/smp.c
··· 149 149 static void pnv_smp_cpu_kill_self(void) 150 150 { 151 151 unsigned int cpu; 152 + unsigned long srr1; 152 153 153 154 /* Standard hot unplug procedure */ 154 155 local_irq_disable(); ··· 166 165 mtspr(SPRN_LPCR, mfspr(SPRN_LPCR) & ~(u64)LPCR_PECE1); 167 166 while (!generic_check_cpu_restart(cpu)) { 168 167 ppc64_runlatch_off(); 169 - power7_nap(1); 168 + srr1 = power7_nap(1); 170 169 ppc64_runlatch_on(); 171 170 172 - /* Clear the IPI that woke us up */ 173 - icp_native_flush_interrupt(); 174 - local_paca->irq_happened &= PACA_IRQ_HARD_DIS; 175 - mb(); 171 + /* 172 + * If the SRR1 value indicates that we woke up due to 173 + * an external interrupt, then clear the interrupt. 174 + * We clear the interrupt before checking for the 175 + * reason, so as to avoid a race where we wake up for 176 + * some other reason, find nothing and clear the interrupt 177 + * just as some other cpu is sending us an interrupt. 178 + * If we returned from power7_nap as a result of 179 + * having finished executing in a KVM guest, then srr1 180 + * contains 0. 181 + */ 182 + if ((srr1 & SRR1_WAKEMASK) == SRR1_WAKEEE) { 183 + icp_native_flush_interrupt(); 184 + local_paca->irq_happened &= PACA_IRQ_HARD_DIS; 185 + smp_mb(); 186 + } 176 187 177 188 if (cpu_core_split_required()) 178 189 continue;
+1 -1
arch/powerpc/platforms/ps3/htab.c
··· 110 110 111 111 static long ps3_hpte_updatepp(unsigned long slot, unsigned long newpp, 112 112 unsigned long vpn, int psize, int apsize, 113 - int ssize, int local) 113 + int ssize, unsigned long inv_flags) 114 114 { 115 115 int result; 116 116 u64 hpte_v, want_v, hpte_rs;
+1 -1
arch/powerpc/platforms/ps3/interrupt.c
··· 711 711 712 712 static unsigned int ps3_get_irq(void) 713 713 { 714 - struct ps3_private *pd = &__get_cpu_var(ps3_private); 714 + struct ps3_private *pd = this_cpu_ptr(&ps3_private); 715 715 u64 x = (pd->bmp.status & pd->bmp.mask); 716 716 unsigned int plug; 717 717
+2 -7
arch/powerpc/platforms/ps3/setup.c
··· 125 125 if (!p->size) 126 126 return; 127 127 128 - p->address = __alloc_bootmem(p->size, p->align, __pa(MAX_DMA_ADDRESS)); 129 - if (!p->address) { 130 - printk(KERN_ERR "%s: Cannot allocate %s\n", __func__, 131 - p->name); 132 - return; 133 - } 128 + p->address = memblock_virt_alloc(p->size, p->align); 134 129 135 130 printk(KERN_INFO "%s: %lu bytes at %p\n", p->name, p->size, 136 131 p->address); ··· 243 248 ps3_mm_init(); 244 249 ps3_mm_vas_create(&htab_size); 245 250 ps3_hpte_init(htab_size); 251 + pm_power_off = ps3_power_off; 246 252 247 253 DBG(" <- %s:%d\n", __func__, __LINE__); 248 254 return 1; ··· 274 278 .calibrate_decr = ps3_calibrate_decr, 275 279 .progress = ps3_progress, 276 280 .restart = ps3_restart, 277 - .power_off = ps3_power_off, 278 281 .halt = ps3_halt, 279 282 #if defined(CONFIG_KEXEC) 280 283 .kexec_cpu_down = ps3_kexec_cpu_down,
+1 -1
arch/powerpc/platforms/pseries/dtl.c
··· 75 75 */ 76 76 static void consume_dtle(struct dtl_entry *dtle, u64 index) 77 77 { 78 - struct dtl_ring *dtlr = &__get_cpu_var(dtl_rings); 78 + struct dtl_ring *dtlr = this_cpu_ptr(&dtl_rings); 79 79 struct dtl_entry *wp = dtlr->write_ptr; 80 80 struct lppaca *vpa = local_paca->lppaca_ptr; 81 81
-21
arch/powerpc/platforms/pseries/hotplug-memory.c
··· 12 12 #include <linux/of.h> 13 13 #include <linux/of_address.h> 14 14 #include <linux/memblock.h> 15 - #include <linux/vmalloc.h> 16 15 #include <linux/memory.h> 17 16 #include <linux/memory_hotplug.h> 18 17 ··· 65 66 } 66 67 67 68 #ifdef CONFIG_MEMORY_HOTREMOVE 68 - static int pseries_remove_memory(u64 start, u64 size) 69 - { 70 - int ret; 71 - 72 - /* Remove htab bolted mappings for this section of memory */ 73 - start = (unsigned long)__va(start); 74 - ret = remove_section_mapping(start, start + size); 75 - 76 - /* Ensure all vmalloc mappings are flushed in case they also 77 - * hit that section of memory 78 - */ 79 - vm_unmap_aliases(); 80 - 81 - return ret; 82 - } 83 - 84 69 static int pseries_remove_memblock(unsigned long base, unsigned int memblock_size) 85 70 { 86 71 unsigned long block_sz, start_pfn; ··· 243 260 { 244 261 if (firmware_has_feature(FW_FEATURE_LPAR)) 245 262 of_reconfig_notifier_register(&pseries_mem_nb); 246 - 247 - #ifdef CONFIG_MEMORY_HOTREMOVE 248 - ppc_md.remove_memory = pseries_remove_memory; 249 - #endif 250 263 251 264 return 0; 252 265 }
+2 -2
arch/powerpc/platforms/pseries/hvCall.S
··· 18 18 19 19 #ifdef CONFIG_TRACEPOINTS 20 20 21 - #ifndef CONFIG_JUMP_LABEL 21 + #ifndef HAVE_JUMP_LABEL 22 22 .section ".toc","aw" 23 23 24 24 .globl hcall_tracepoint_refcount ··· 78 78 mr r5,BUFREG; \ 79 79 __HCALL_INST_POSTCALL 80 80 81 - #ifdef CONFIG_JUMP_LABEL 81 + #ifdef HAVE_JUMP_LABEL 82 82 #define HCALL_BRANCH(LABEL) \ 83 83 ARCH_STATIC_BRANCH(LABEL, hcall_tracepoint_key) 84 84 #else
+2 -2
arch/powerpc/platforms/pseries/hvCall_inst.c
··· 110 110 if (opcode > MAX_HCALL_OPCODE) 111 111 return; 112 112 113 - h = &__get_cpu_var(hcall_stats)[opcode / 4]; 113 + h = this_cpu_ptr(&hcall_stats[opcode / 4]); 114 114 h->tb_start = mftb(); 115 115 h->purr_start = mfspr(SPRN_PURR); 116 116 } ··· 123 123 if (opcode > MAX_HCALL_OPCODE) 124 124 return; 125 125 126 - h = &__get_cpu_var(hcall_stats)[opcode / 4]; 126 + h = this_cpu_ptr(&hcall_stats[opcode / 4]); 127 127 h->num_calls++; 128 128 h->tb_total += mftb() - h->tb_start; 129 129 h->purr_total += mfspr(SPRN_PURR) - h->purr_start;
+5 -6
arch/powerpc/platforms/pseries/iommu.c
··· 199 199 200 200 local_irq_save(flags); /* to protect tcep and the page behind it */ 201 201 202 - tcep = __get_cpu_var(tce_page); 202 + tcep = __this_cpu_read(tce_page); 203 203 204 204 /* This is safe to do since interrupts are off when we're called 205 205 * from iommu_alloc{,_sg}() ··· 212 212 return tce_build_pSeriesLP(tbl, tcenum, npages, uaddr, 213 213 direction, attrs); 214 214 } 215 - __get_cpu_var(tce_page) = tcep; 215 + __this_cpu_write(tce_page, tcep); 216 216 } 217 217 218 218 rpn = __pa(uaddr) >> TCE_SHIFT; ··· 398 398 long l, limit; 399 399 400 400 local_irq_disable(); /* to protect tcep and the page behind it */ 401 - tcep = __get_cpu_var(tce_page); 401 + tcep = __this_cpu_read(tce_page); 402 402 403 403 if (!tcep) { 404 404 tcep = (__be64 *)__get_free_page(GFP_ATOMIC); ··· 406 406 local_irq_enable(); 407 407 return -ENOMEM; 408 408 } 409 - __get_cpu_var(tce_page) = tcep; 409 + __this_cpu_write(tce_page, tcep); 410 410 } 411 411 412 412 proto_tce = TCE_PCI_READ | TCE_PCI_WRITE; ··· 574 574 while (isa_dn && isa_dn != dn) 575 575 isa_dn = isa_dn->parent; 576 576 577 - if (isa_dn_orig) 578 - of_node_put(isa_dn_orig); 577 + of_node_put(isa_dn_orig); 579 578 580 579 /* Count number of direct PCI children of the PHB. */ 581 580 for (children = 0, tmp = dn->child; tmp; tmp = tmp->sibling)
+5 -5
arch/powerpc/platforms/pseries/lpar.c
··· 284 284 unsigned long newpp, 285 285 unsigned long vpn, 286 286 int psize, int apsize, 287 - int ssize, int local) 287 + int ssize, unsigned long inv_flags) 288 288 { 289 289 unsigned long lpar_rc; 290 290 unsigned long flags = (newpp & 7) | H_AVPN; ··· 442 442 static void pSeries_lpar_hugepage_invalidate(unsigned long vsid, 443 443 unsigned long addr, 444 444 unsigned char *hpte_slot_array, 445 - int psize, int ssize) 445 + int psize, int ssize, int local) 446 446 { 447 447 int i, index = 0; 448 448 unsigned long s_addr = addr; ··· 515 515 unsigned long vpn; 516 516 unsigned long i, pix, rc; 517 517 unsigned long flags = 0; 518 - struct ppc64_tlb_batch *batch = &__get_cpu_var(ppc64_tlb_batch); 518 + struct ppc64_tlb_batch *batch = this_cpu_ptr(&ppc64_tlb_batch); 519 519 int lock_tlbie = !mmu_has_feature(MMU_FTR_LOCKLESS_TLBIE); 520 520 unsigned long param[9]; 521 521 unsigned long hash, index, shift, hidx, slot; ··· 705 705 706 706 local_irq_save(flags); 707 707 708 - depth = &__get_cpu_var(hcall_trace_depth); 708 + depth = this_cpu_ptr(&hcall_trace_depth); 709 709 710 710 if (*depth) 711 711 goto out; ··· 730 730 731 731 local_irq_save(flags); 732 732 733 - depth = &__get_cpu_var(hcall_trace_depth); 733 + depth = this_cpu_ptr(&hcall_trace_depth); 734 734 735 735 if (*depth) 736 736 goto out;
+2
arch/powerpc/platforms/pseries/nvram.c
··· 715 715 nvram_pstore_info.buf = oops_data; 716 716 nvram_pstore_info.bufsize = oops_data_sz; 717 717 718 + spin_lock_init(&nvram_pstore_info.buf_lock); 719 + 718 720 rc = pstore_register(&nvram_pstore_info); 719 721 if (rc != 0) 720 722 pr_err("nvram: pstore_register() failed, defaults to "
+1 -1
arch/powerpc/platforms/pseries/pci.c
··· 134 134 of_node_put(pdn); 135 135 136 136 if (rc) { 137 - pr_err("no ibm,pcie-link-speed-stats property\n"); 137 + pr_debug("no ibm,pcie-link-speed-stats property\n"); 138 138 return 0; 139 139 } 140 140
+2 -2
arch/powerpc/platforms/pseries/ras.c
··· 302 302 /* If it isn't an extended log we can use the per cpu 64bit buffer */ 303 303 h = (struct rtas_error_log *)&savep[1]; 304 304 if (!rtas_error_extended(h)) { 305 - memcpy(&__get_cpu_var(mce_data_buf), h, sizeof(__u64)); 306 - errhdr = (struct rtas_error_log *)&__get_cpu_var(mce_data_buf); 305 + memcpy(this_cpu_ptr(&mce_data_buf), h, sizeof(__u64)); 306 + errhdr = (struct rtas_error_log *)this_cpu_ptr(&mce_data_buf); 307 307 } else { 308 308 int len, error_log_length; 309 309
+35 -30
arch/powerpc/platforms/pseries/setup.c
··· 500 500 501 501 if (firmware_has_feature(FW_FEATURE_SET_MODE)) { 502 502 long rc; 503 - if ((rc = pSeries_enable_reloc_on_exc()) != H_SUCCESS) { 503 + 504 + rc = pSeries_enable_reloc_on_exc(); 505 + if (rc == H_P2) { 506 + pr_info("Relocation on exceptions not supported\n"); 507 + } else if (rc != H_SUCCESS) { 504 508 pr_warn("Unable to enable relocation on exceptions: " 505 509 "%ld\n", rc); 506 510 } ··· 664 660 pr_debug(" <- pSeries_init_early()\n"); 665 661 } 666 662 663 + /** 664 + * pseries_power_off - tell firmware about how to power off the system. 665 + * 666 + * This function calls either the power-off rtas token in normal cases 667 + * or the ibm,power-off-ups token (if present & requested) in case of 668 + * a power failure. If power-off token is used, power on will only be 669 + * possible with power button press. If ibm,power-off-ups token is used 670 + * it will allow auto poweron after power is restored. 671 + */ 672 + static void pseries_power_off(void) 673 + { 674 + int rc; 675 + int rtas_poweroff_ups_token = rtas_token("ibm,power-off-ups"); 676 + 677 + if (rtas_flash_term_hook) 678 + rtas_flash_term_hook(SYS_POWER_OFF); 679 + 680 + if (rtas_poweron_auto == 0 || 681 + rtas_poweroff_ups_token == RTAS_UNKNOWN_SERVICE) { 682 + rc = rtas_call(rtas_token("power-off"), 2, 1, NULL, -1, -1); 683 + printk(KERN_INFO "RTAS power-off returned %d\n", rc); 684 + } else { 685 + rc = rtas_call(rtas_poweroff_ups_token, 0, 1, NULL); 686 + printk(KERN_INFO "RTAS ibm,power-off-ups returned %d\n", rc); 687 + } 688 + for (;;); 689 + } 690 + 667 691 /* 668 692 * Called very early, MMU is off, device-tree isn't unflattened 669 693 */ ··· 774 742 else 775 743 hpte_init_native(); 776 744 745 + pm_power_off = pseries_power_off; 746 + 777 747 pr_debug("Machine is%s LPAR !\n", 778 748 (powerpc_firmware_features & FW_FEATURE_LPAR) ? 
"" : " not"); 779 749 ··· 787 753 if (firmware_has_feature(FW_FEATURE_LPAR)) 788 754 return PCI_PROBE_DEVTREE; 789 755 return PCI_PROBE_NORMAL; 790 - } 791 - 792 - /** 793 - * pSeries_power_off - tell firmware about how to power off the system. 794 - * 795 - * This function calls either the power-off rtas token in normal cases 796 - * or the ibm,power-off-ups token (if present & requested) in case of 797 - * a power failure. If power-off token is used, power on will only be 798 - * possible with power button press. If ibm,power-off-ups token is used 799 - * it will allow auto poweron after power is restored. 800 - */ 801 - static void pSeries_power_off(void) 802 - { 803 - int rc; 804 - int rtas_poweroff_ups_token = rtas_token("ibm,power-off-ups"); 805 - 806 - if (rtas_flash_term_hook) 807 - rtas_flash_term_hook(SYS_POWER_OFF); 808 - 809 - if (rtas_poweron_auto == 0 || 810 - rtas_poweroff_ups_token == RTAS_UNKNOWN_SERVICE) { 811 - rc = rtas_call(rtas_token("power-off"), 2, 1, NULL, -1, -1); 812 - printk(KERN_INFO "RTAS power-off returned %d\n", rc); 813 - } else { 814 - rc = rtas_call(rtas_poweroff_ups_token, 0, 1, NULL); 815 - printk(KERN_INFO "RTAS ibm,power-off-ups returned %d\n", rc); 816 - } 817 - for (;;); 818 756 } 819 757 820 758 #ifndef CONFIG_PCI ··· 803 797 .pcibios_fixup = pSeries_final_fixup, 804 798 .pci_probe_mode = pSeries_pci_probe_mode, 805 799 .restart = rtas_restart, 806 - .power_off = pSeries_power_off, 807 800 .halt = rtas_halt, 808 801 .panic = rtas_os_term, 809 802 .get_boot_time = rtas_get_boot_time,
-1
arch/powerpc/sysdev/fsl_msi.c
··· 13 13 * 14 14 */ 15 15 #include <linux/irq.h> 16 - #include <linux/bootmem.h> 17 16 #include <linux/msi.h> 18 17 #include <linux/pci.h> 19 18 #include <linux/slab.h>
+1 -2
arch/powerpc/sysdev/fsl_pci.c
··· 23 23 #include <linux/string.h> 24 24 #include <linux/init.h> 25 25 #include <linux/interrupt.h> 26 - #include <linux/bootmem.h> 27 26 #include <linux/memblock.h> 28 27 #include <linux/log2.h> 29 28 #include <linux/slab.h> ··· 151 152 flags |= 0x10000000; /* enable relaxed ordering */ 152 153 153 154 for (i = 0; size > 0; i++) { 154 - unsigned int bits = min(ilog2(size), 155 + unsigned int bits = min_t(u32, ilog2(size), 155 156 __ffs(pci_addr | phys_addr)); 156 157 157 158 if (index + i >= 5)
+104
arch/powerpc/sysdev/fsl_rio.c
··· 58 58 #define RIO_ISR_AACR 0x10120 59 59 #define RIO_ISR_AACR_AA 0x1 /* Accept All ID */ 60 60 61 + #define RIWTAR_TRAD_VAL_SHIFT 12 62 + #define RIWTAR_TRAD_MASK 0x00FFFFFF 63 + #define RIWBAR_BADD_VAL_SHIFT 12 64 + #define RIWBAR_BADD_MASK 0x003FFFFF 65 + #define RIWAR_ENABLE 0x80000000 66 + #define RIWAR_TGINT_LOCAL 0x00F00000 67 + #define RIWAR_RDTYP_NO_SNOOP 0x00040000 68 + #define RIWAR_RDTYP_SNOOP 0x00050000 69 + #define RIWAR_WRTYP_NO_SNOOP 0x00004000 70 + #define RIWAR_WRTYP_SNOOP 0x00005000 71 + #define RIWAR_WRTYP_ALLOC 0x00006000 72 + #define RIWAR_SIZE_MASK 0x0000003F 73 + 61 74 #define __fsl_read_rio_config(x, addr, err, op) \ 62 75 __asm__ __volatile__( \ 63 76 "1: "op" %1,0(%2)\n" \ ··· 279 266 return 0; 280 267 } 281 268 269 + static void fsl_rio_inbound_mem_init(struct rio_priv *priv) 270 + { 271 + int i; 272 + 273 + /* close inbound windows */ 274 + for (i = 0; i < RIO_INB_ATMU_COUNT; i++) 275 + out_be32(&priv->inb_atmu_regs[i].riwar, 0); 276 + } 277 + 278 + int fsl_map_inb_mem(struct rio_mport *mport, dma_addr_t lstart, 279 + u64 rstart, u32 size, u32 flags) 280 + { 281 + struct rio_priv *priv = mport->priv; 282 + u32 base_size; 283 + unsigned int base_size_log; 284 + u64 win_start, win_end; 285 + u32 riwar; 286 + int i; 287 + 288 + if ((size & (size - 1)) != 0) 289 + return -EINVAL; 290 + 291 + base_size_log = ilog2(size); 292 + base_size = 1 << base_size_log; 293 + 294 + /* check if addresses are aligned with the window size */ 295 + if (lstart & (base_size - 1)) 296 + return -EINVAL; 297 + if (rstart & (base_size - 1)) 298 + return -EINVAL; 299 + 300 + /* check for conflicting ranges */ 301 + for (i = 0; i < RIO_INB_ATMU_COUNT; i++) { 302 + riwar = in_be32(&priv->inb_atmu_regs[i].riwar); 303 + if ((riwar & RIWAR_ENABLE) == 0) 304 + continue; 305 + win_start = ((u64)(in_be32(&priv->inb_atmu_regs[i].riwbar) & RIWBAR_BADD_MASK)) 306 + << RIWBAR_BADD_VAL_SHIFT; 307 + win_end = win_start + ((1 << ((riwar & RIWAR_SIZE_MASK) + 1)) - 1); 308 + if 
(rstart < win_end && (rstart + size) > win_start) 309 + return -EINVAL; 310 + } 311 + 312 + /* find unused atmu */ 313 + for (i = 0; i < RIO_INB_ATMU_COUNT; i++) { 314 + riwar = in_be32(&priv->inb_atmu_regs[i].riwar); 315 + if ((riwar & RIWAR_ENABLE) == 0) 316 + break; 317 + } 318 + if (i >= RIO_INB_ATMU_COUNT) 319 + return -ENOMEM; 320 + 321 + out_be32(&priv->inb_atmu_regs[i].riwtar, lstart >> RIWTAR_TRAD_VAL_SHIFT); 322 + out_be32(&priv->inb_atmu_regs[i].riwbar, rstart >> RIWBAR_BADD_VAL_SHIFT); 323 + out_be32(&priv->inb_atmu_regs[i].riwar, RIWAR_ENABLE | RIWAR_TGINT_LOCAL | 324 + RIWAR_RDTYP_SNOOP | RIWAR_WRTYP_SNOOP | (base_size_log - 1)); 325 + 326 + return 0; 327 + } 328 + 329 + void fsl_unmap_inb_mem(struct rio_mport *mport, dma_addr_t lstart) 330 + { 331 + u32 win_start_shift, base_start_shift; 332 + struct rio_priv *priv = mport->priv; 333 + u32 riwar, riwtar; 334 + int i; 335 + 336 + /* skip default window */ 337 + base_start_shift = lstart >> RIWTAR_TRAD_VAL_SHIFT; 338 + for (i = 0; i < RIO_INB_ATMU_COUNT; i++) { 339 + riwar = in_be32(&priv->inb_atmu_regs[i].riwar); 340 + if ((riwar & RIWAR_ENABLE) == 0) 341 + continue; 342 + 343 + riwtar = in_be32(&priv->inb_atmu_regs[i].riwtar); 344 + win_start_shift = riwtar & RIWTAR_TRAD_MASK; 345 + if (win_start_shift == base_start_shift) { 346 + out_be32(&priv->inb_atmu_regs[i].riwar, riwar & ~RIWAR_ENABLE); 347 + return; 348 + } 349 + } 350 + } 351 + 282 352 void fsl_rio_port_error_handler(int offset) 283 353 { 284 354 /*XXX: Error recovery is not implemented, we just clear errors */ ··· 485 389 ops->add_outb_message = fsl_add_outb_message; 486 390 ops->add_inb_buffer = fsl_add_inb_buffer; 487 391 ops->get_inb_message = fsl_get_inb_message; 392 + ops->map_inb = fsl_map_inb_mem; 393 + ops->unmap_inb = fsl_unmap_inb_mem; 488 394 489 395 rmu_node = of_parse_phandle(dev->dev.of_node, "fsl,srio-rmu-handle", 0); 490 396 if (!rmu_node) { ··· 700 602 RIO_ATMU_REGS_PORT2_OFFSET)); 701 603 702 604 priv->maint_atmu_regs = 
priv->atmu_regs + 1; 605 + priv->inb_atmu_regs = (struct rio_inb_atmu_regs __iomem *) 606 + (priv->regs_win + 607 + ((i == 0) ? RIO_INB_ATMU_REGS_PORT1_OFFSET : 608 + RIO_INB_ATMU_REGS_PORT2_OFFSET)); 609 + 703 610 704 611 /* Set to receive any dist ID for serial RapidIO controller. */ 705 612 if (port->phy_type == RIO_PHY_SERIAL) ··· 723 620 rio_law_start = range_start; 724 621 725 622 fsl_rio_setup_rmu(port, rmu_np[i]); 623 + fsl_rio_inbound_mem_init(priv); 726 624 727 625 dbell->mport[i] = port; 728 626
+13
arch/powerpc/sysdev/fsl_rio.h
··· 50 50 #define RIO_S_DBELL_REGS_OFFSET 0x13400 51 51 #define RIO_S_PW_REGS_OFFSET 0x134e0 52 52 #define RIO_ATMU_REGS_DBELL_OFFSET 0x10C40 53 + #define RIO_INB_ATMU_REGS_PORT1_OFFSET 0x10d60 54 + #define RIO_INB_ATMU_REGS_PORT2_OFFSET 0x10f60 53 55 54 56 #define MAX_MSG_UNIT_NUM 2 55 57 #define MAX_PORT_NUM 4 58 + #define RIO_INB_ATMU_COUNT 4 56 59 57 60 struct rio_atmu_regs { 58 61 u32 rowtar; ··· 64 61 u32 pad1; 65 62 u32 rowar; 66 63 u32 pad2[3]; 64 + }; 65 + 66 + struct rio_inb_atmu_regs { 67 + u32 riwtar; 68 + u32 pad1; 69 + u32 riwbar; 70 + u32 pad2; 71 + u32 riwar; 72 + u32 pad3[3]; 67 73 }; 68 74 69 75 struct rio_dbell_ring { ··· 111 99 void __iomem *regs_win; 112 100 struct rio_atmu_regs __iomem *atmu_regs; 113 101 struct rio_atmu_regs __iomem *maint_atmu_regs; 102 + struct rio_inb_atmu_regs __iomem *inb_atmu_regs; 114 103 void __iomem *maint_win; 115 104 void *rmm_handle; /* RapidIO message manager(unit) Handle */ 116 105 };
+2 -3
arch/powerpc/sysdev/fsl_soc.c
··· 197 197 if (!rstcr && ppc_md.restart == fsl_rstcr_restart) 198 198 printk(KERN_ERR "No RSTCR register, warm reboot won't work\n"); 199 199 200 - if (np) 201 - of_node_put(np); 200 + of_node_put(np); 202 201 203 202 return 0; 204 203 } ··· 237 238 /* 238 239 * Halt the current partition 239 240 * 240 - * This function should be assigned to the ppc_md.power_off and ppc_md.halt 241 + * This function should be assigned to the pm_power_off and ppc_md.halt 241 242 * function pointers, to shut down the partition when we're running under 242 243 * the Freescale hypervisor. 243 244 */
-1
arch/powerpc/sysdev/ipic.c
··· 20 20 #include <linux/signal.h> 21 21 #include <linux/syscore_ops.h> 22 22 #include <linux/device.h> 23 - #include <linux/bootmem.h> 24 23 #include <linux/spinlock.h> 25 24 #include <linux/fsl_devices.h> 26 25 #include <asm/irq.h>
+1 -2
arch/powerpc/sysdev/mpc5xxx_clocks.c
··· 26 26 of_node_put(node); 27 27 node = np; 28 28 } 29 - if (node) 30 - of_node_put(node); 29 + of_node_put(node); 31 30 32 31 return p_bus_freq ? *p_bus_freq : 0; 33 32 }
-1
arch/powerpc/sysdev/mpic.c
··· 24 24 #include <linux/irq.h> 25 25 #include <linux/smp.h> 26 26 #include <linux/interrupt.h> 27 - #include <linux/bootmem.h> 28 27 #include <linux/spinlock.h> 29 28 #include <linux/pci.h> 30 29 #include <linux/slab.h>
-1
arch/powerpc/sysdev/mpic_pasemi_msi.c
··· 16 16 #undef DEBUG 17 17 18 18 #include <linux/irq.h> 19 - #include <linux/bootmem.h> 20 19 #include <linux/msi.h> 21 20 #include <asm/mpic.h> 22 21 #include <asm/prom.h>
-1
arch/powerpc/sysdev/mpic_u3msi.c
··· 10 10 */ 11 11 12 12 #include <linux/irq.h> 13 - #include <linux/bootmem.h> 14 13 #include <linux/msi.h> 15 14 #include <asm/mpic.h> 16 15 #include <asm/prom.h>
+4 -4
arch/powerpc/sysdev/ppc4xx_cpm.c
··· 281 281 printk(KERN_ERR "cpm: could not parse dcr property for %s\n", 282 282 np->full_name); 283 283 ret = -EINVAL; 284 - goto out; 284 + goto node_put; 285 285 } 286 286 287 287 cpm.dcr_host = dcr_map(np, dcr_base, dcr_len); ··· 290 290 printk(KERN_ERR "cpm: failed to map dcr property for %s\n", 291 291 np->full_name); 292 292 ret = -EINVAL; 293 - goto out; 293 + goto node_put; 294 294 } 295 295 296 296 /* All 4xx SoCs with a CPM controller have one of two ··· 330 330 331 331 if (cpm.standby || cpm.suspend) 332 332 suspend_set_ops(&cpm_suspend_ops); 333 + node_put: 334 + of_node_put(np); 333 335 out: 334 - if (np) 335 - of_node_put(np); 336 336 return ret; 337 337 } 338 338
-1
arch/powerpc/sysdev/ppc4xx_msi.c
··· 22 22 */ 23 23 24 24 #include <linux/irq.h> 25 - #include <linux/bootmem.h> 26 25 #include <linux/pci.h> 27 26 #include <linux/msi.h> 28 27 #include <linux/of_platform.h>
-1
arch/powerpc/sysdev/ppc4xx_pci.c
··· 22 22 #include <linux/pci.h> 23 23 #include <linux/init.h> 24 24 #include <linux/of.h> 25 - #include <linux/bootmem.h> 26 25 #include <linux/delay.h> 27 26 #include <linux/slab.h> 28 27
-1
arch/powerpc/sysdev/qe_lib/qe.c
··· 22 22 #include <linux/spinlock.h> 23 23 #include <linux/mm.h> 24 24 #include <linux/interrupt.h> 25 - #include <linux/bootmem.h> 26 25 #include <linux/module.h> 27 26 #include <linux/delay.h> 28 27 #include <linux/ioport.h>
-1
arch/powerpc/sysdev/qe_lib/qe_ic.c
··· 23 23 #include <linux/sched.h> 24 24 #include <linux/signal.h> 25 25 #include <linux/device.h> 26 - #include <linux/bootmem.h> 27 26 #include <linux/spinlock.h> 28 27 #include <asm/irq.h> 29 28 #include <asm/io.h>
-1
arch/powerpc/sysdev/uic.c
··· 19 19 #include <linux/sched.h> 20 20 #include <linux/signal.h> 21 21 #include <linux/device.h> 22 - #include <linux/bootmem.h> 23 22 #include <linux/spinlock.h> 24 23 #include <linux/irq.h> 25 24 #include <linux/interrupt.h>
+1 -1
arch/powerpc/sysdev/xics/xics-common.c
··· 155 155 156 156 void xics_teardown_cpu(void) 157 157 { 158 - struct xics_cppr *os_cppr = &__get_cpu_var(xics_cppr); 158 + struct xics_cppr *os_cppr = this_cpu_ptr(&xics_cppr); 159 159 160 160 /* 161 161 * we have to reset the cppr index to 0 because we're
+63 -19
arch/powerpc/xmon/xmon.c
··· 51 51 #include <asm/paca.h> 52 52 #endif 53 53 54 + #if defined(CONFIG_PPC_SPLPAR) 55 + #include <asm/plpar_wrappers.h> 56 + #else 57 + static inline long plapr_set_ciabr(unsigned long ciabr) {return 0; }; 58 + #endif 59 + 54 60 #include "nonstdio.h" 55 61 #include "dis-asm.h" 56 62 ··· 94 88 }; 95 89 96 90 /* Bits in bpt.enabled */ 97 - #define BP_IABR_TE 1 /* IABR translation enabled */ 98 - #define BP_IABR 2 99 - #define BP_TRAP 8 100 - #define BP_DABR 0x10 91 + #define BP_CIABR 1 92 + #define BP_TRAP 2 93 + #define BP_DABR 4 101 94 102 95 #define NBPTS 256 103 96 static struct bpt bpts[NBPTS]; ··· 273 268 static inline void cinval(void *p) 274 269 { 275 270 asm volatile ("dcbi 0,%0; icbi 0,%0" : : "r" (p)); 271 + } 272 + 273 + /** 274 + * write_ciabr() - write the CIABR SPR 275 + * @ciabr: The value to write. 276 + * 277 + * This function writes a value to the CIABR register either directly 278 + * through the mtspr instruction if the kernel is in HV privilege mode, or 279 + * calls a hypervisor function to achieve the same in case the kernel 280 + * is in supervisor privilege mode. 281 + */ 282 + static void write_ciabr(unsigned long ciabr) 283 + { 284 + if (!cpu_has_feature(CPU_FTR_ARCH_207S)) 285 + return; 286 + 287 + if (cpu_has_feature(CPU_FTR_HVMODE)) { 288 + mtspr(SPRN_CIABR, ciabr); 289 + return; 290 + } 291 + plapr_set_ciabr(ciabr); 292 + } 293 + 294 + /** 295 + * set_ciabr() - set the CIABR 296 + * @addr: The value to set. 297 + * 298 + * This function sets the correct privilege value into the HW 299 + * breakpoint address before writing it to the CIABR register. 
300 + */ 301 + static void set_ciabr(unsigned long addr) 302 + { 303 + addr &= ~CIABR_PRIV; 304 + 305 + if (cpu_has_feature(CPU_FTR_HVMODE)) 306 + addr |= CIABR_PRIV_HYPER; 307 + else 308 + addr |= CIABR_PRIV_SUPER; 309 + write_ciabr(addr); 276 310 } 277 311 278 312 /* ··· 771 727 772 728 bp = bpts; 773 729 for (i = 0; i < NBPTS; ++i, ++bp) { 774 - if ((bp->enabled & (BP_TRAP|BP_IABR)) == 0) 730 + if ((bp->enabled & (BP_TRAP|BP_CIABR)) == 0) 775 731 continue; 776 732 if (mread(bp->address, &bp->instr[0], 4) != 4) { 777 733 printf("Couldn't read instruction at %lx, " ··· 786 742 continue; 787 743 } 788 744 store_inst(&bp->instr[0]); 789 - if (bp->enabled & BP_IABR) 745 + if (bp->enabled & BP_CIABR) 790 746 continue; 791 747 if (mwrite(bp->address, &bpinstr, 4) != 4) { 792 748 printf("Couldn't write instruction at %lx, " ··· 808 764 brk.len = 8; 809 765 __set_breakpoint(&brk); 810 766 } 811 - if (iabr && cpu_has_feature(CPU_FTR_IABR)) 812 - mtspr(SPRN_IABR, iabr->address 813 - | (iabr->enabled & (BP_IABR|BP_IABR_TE))); 767 + 768 + if (iabr) 769 + set_ciabr(iabr->address); 814 770 } 815 771 816 772 static void remove_bpts(void) ··· 821 777 822 778 bp = bpts; 823 779 for (i = 0; i < NBPTS; ++i, ++bp) { 824 - if ((bp->enabled & (BP_TRAP|BP_IABR)) != BP_TRAP) 780 + if ((bp->enabled & (BP_TRAP|BP_CIABR)) != BP_TRAP) 825 781 continue; 826 782 if (mread(bp->address, &instr, 4) == 4 827 783 && instr == bpinstr ··· 836 792 static void remove_cpu_bpts(void) 837 793 { 838 794 hw_breakpoint_disable(); 839 - if (cpu_has_feature(CPU_FTR_IABR)) 840 - mtspr(SPRN_IABR, 0); 795 + write_ciabr(0); 841 796 } 842 797 843 798 /* Command interpreting routine */ ··· 950 907 case 'u': 951 908 dump_segments(); 952 909 break; 953 - #elif defined(CONFIG_4xx) 910 + #elif defined(CONFIG_44x) 954 911 case 'u': 955 912 dump_tlb_44x(); 956 913 break; ··· 1024 981 else if (cmd == 'h') 1025 982 ppc_md.halt(); 1026 983 else if (cmd == 'p') 1027 - ppc_md.power_off(); 984 + if (pm_power_off) 985 + 
pm_power_off(); 1028 986 } 1029 987 1030 988 static int cpu_cmd(void) ··· 1171 1127 "b <addr> [cnt] set breakpoint at given instr addr\n" 1172 1128 "bc clear all breakpoints\n" 1173 1129 "bc <n/addr> clear breakpoint number n or at addr\n" 1174 - "bi <addr> [cnt] set hardware instr breakpoint (POWER3/RS64 only)\n" 1130 + "bi <addr> [cnt] set hardware instr breakpoint (POWER8 only)\n" 1175 1131 "bd <addr> [cnt] set hardware data breakpoint\n" 1176 1132 ""; 1177 1133 ··· 1210 1166 break; 1211 1167 1212 1168 case 'i': /* bi - hardware instr breakpoint */ 1213 - if (!cpu_has_feature(CPU_FTR_IABR)) { 1169 + if (!cpu_has_feature(CPU_FTR_ARCH_207S)) { 1214 1170 printf("Hardware instruction breakpoint " 1215 1171 "not supported on this cpu\n"); 1216 1172 break; 1217 1173 } 1218 1174 if (iabr) { 1219 - iabr->enabled &= ~(BP_IABR | BP_IABR_TE); 1175 + iabr->enabled &= ~BP_CIABR; 1220 1176 iabr = NULL; 1221 1177 } 1222 1178 if (!scanhex(&a)) ··· 1225 1181 break; 1226 1182 bp = new_breakpoint(a); 1227 1183 if (bp != NULL) { 1228 - bp->enabled |= BP_IABR | BP_IABR_TE; 1184 + bp->enabled |= BP_CIABR; 1229 1185 iabr = bp; 1230 1186 } 1231 1187 break; ··· 1282 1238 if (!bp->enabled) 1283 1239 continue; 1284 1240 printf("%2x %s ", BP_NUM(bp), 1285 - (bp->enabled & BP_IABR)? "inst": "trap"); 1241 + (bp->enabled & BP_CIABR) ? "inst": "trap"); 1286 1242 xmon_print_symbol(bp->address, " ", "\n"); 1287 1243 } 1288 1244 break;
+11 -4
drivers/misc/cxl/cxl.h
··· 336 336 struct cxl_afu { 337 337 irq_hw_number_t psl_hwirq; 338 338 irq_hw_number_t serr_hwirq; 339 + char *err_irq_name; 340 + char *psl_irq_name; 339 341 unsigned int serr_virq; 340 342 void __iomem *p1n_mmio; 341 343 void __iomem *p2n_mmio; ··· 381 379 bool enabled; 382 380 }; 383 381 382 + 383 + struct cxl_irq_name { 384 + struct list_head list; 385 + char *name; 386 + }; 387 + 384 388 /* 385 389 * This is a cxl context. If the PSL is in dedicated mode, there will be one 386 390 * of these per AFU. If in AFU directed there can be lots of these. ··· 411 403 412 404 unsigned long *irq_bitmap; /* Accessed from IRQ context */ 413 405 struct cxl_irq_ranges irqs; 406 + struct list_head irq_names; 414 407 u64 fault_addr; 415 408 u64 fault_dsisr; 416 409 u64 afu_err; ··· 453 444 struct dentry *trace; 454 445 struct dentry *psl_err_chk; 455 446 struct dentry *debugfs; 447 + char *irq_name; 456 448 struct bin_attribute cxl_attr; 457 449 int adapter_num; 458 450 int user_irqs; ··· 573 563 int cxl_afu_deactivate_mode(struct cxl_afu *afu); 574 564 int cxl_afu_select_best_mode(struct cxl_afu *afu); 575 565 576 - unsigned int cxl_map_irq(struct cxl *adapter, irq_hw_number_t hwirq, 577 - irq_handler_t handler, void *cookie); 578 - void cxl_unmap_irq(unsigned int virq, void *cookie); 579 566 int cxl_register_psl_irq(struct cxl_afu *afu); 580 567 void cxl_release_psl_irq(struct cxl_afu *afu); 581 568 int cxl_register_psl_err_irq(struct cxl *adapter); ··· 619 612 u64 amr); 620 613 int cxl_detach_process(struct cxl_context *ctx); 621 614 622 - int cxl_get_irq(struct cxl_context *ctx, struct cxl_irq_info *info); 615 + int cxl_get_irq(struct cxl_afu *afu, struct cxl_irq_info *info); 623 616 int cxl_ack_irq(struct cxl_context *ctx, u64 tfc, u64 psl_reset_mask); 624 617 625 618 int cxl_check_error(struct cxl_afu *afu);
+6 -2
drivers/misc/cxl/fault.c
··· 133 133 { 134 134 unsigned flt = 0; 135 135 int result; 136 - unsigned long access, flags; 136 + unsigned long access, flags, inv_flags = 0; 137 137 138 138 if ((result = copro_handle_mm_fault(mm, dar, dsisr, &flt))) { 139 139 pr_devel("copro_handle_mm_fault failed: %#x\n", result); ··· 149 149 access |= _PAGE_RW; 150 150 if ((!ctx->kernel) || ~(dar & (1ULL << 63))) 151 151 access |= _PAGE_USER; 152 + 153 + if (dsisr & DSISR_NOHPTE) 154 + inv_flags |= HPTE_NOHPTE_UPDATE; 155 + 152 156 local_irq_save(flags); 153 - hash_page_mm(mm, dar, access, 0x300); 157 + hash_page_mm(mm, dar, access, 0x300, inv_flags); 154 158 local_irq_restore(flags); 155 159 156 160 pr_devel("Page fault successfully handled for pe: %i!\n", ctx->pe);
+117 -27
drivers/misc/cxl/irq.c
··· 92 92 return IRQ_HANDLED; 93 93 } 94 94 95 - static irqreturn_t cxl_irq(int irq, void *data) 95 + static irqreturn_t cxl_irq(int irq, void *data, struct cxl_irq_info *irq_info) 96 96 { 97 97 struct cxl_context *ctx = data; 98 - struct cxl_irq_info irq_info; 99 98 u64 dsisr, dar; 100 - int result; 101 99 102 - if ((result = cxl_get_irq(ctx, &irq_info))) { 103 - WARN(1, "Unable to get CXL IRQ Info: %i\n", result); 104 - return IRQ_HANDLED; 105 - } 106 - 107 - dsisr = irq_info.dsisr; 108 - dar = irq_info.dar; 100 + dsisr = irq_info->dsisr; 101 + dar = irq_info->dar; 109 102 110 103 pr_devel("CXL interrupt %i for afu pe: %i DSISR: %#llx DAR: %#llx\n", irq, ctx->pe, dsisr, dar); 111 104 ··· 142 149 if (dsisr & CXL_PSL_DSISR_An_UR) 143 150 pr_devel("CXL interrupt: AURP PTE not found\n"); 144 151 if (dsisr & CXL_PSL_DSISR_An_PE) 145 - return handle_psl_slice_error(ctx, dsisr, irq_info.errstat); 152 + return handle_psl_slice_error(ctx, dsisr, irq_info->errstat); 146 153 if (dsisr & CXL_PSL_DSISR_An_AE) { 147 - pr_devel("CXL interrupt: AFU Error %.llx\n", irq_info.afu_err); 154 + pr_devel("CXL interrupt: AFU Error %.llx\n", irq_info->afu_err); 148 155 149 156 if (ctx->pending_afu_err) { 150 157 /* ··· 156 163 */ 157 164 dev_err_ratelimited(&ctx->afu->dev, "CXL AFU Error " 158 165 "undelivered to pe %i: %.llx\n", 159 - ctx->pe, irq_info.afu_err); 166 + ctx->pe, irq_info->afu_err); 160 167 } else { 161 168 spin_lock(&ctx->lock); 162 - ctx->afu_err = irq_info.afu_err; 169 + ctx->afu_err = irq_info->afu_err; 163 170 ctx->pending_afu_err = 1; 164 171 spin_unlock(&ctx->lock); 165 172 ··· 175 182 return IRQ_HANDLED; 176 183 } 177 184 185 + static irqreturn_t fail_psl_irq(struct cxl_afu *afu, struct cxl_irq_info *irq_info) 186 + { 187 + if (irq_info->dsisr & CXL_PSL_DSISR_TRANS) 188 + cxl_p2n_write(afu, CXL_PSL_TFC_An, CXL_PSL_TFC_An_AE); 189 + else 190 + cxl_p2n_write(afu, CXL_PSL_TFC_An, CXL_PSL_TFC_An_A); 191 + 192 + return IRQ_HANDLED; 193 + } 194 + 178 195 static 
irqreturn_t cxl_irq_multiplexed(int irq, void *data) 179 196 { 180 197 struct cxl_afu *afu = data; 181 198 struct cxl_context *ctx; 199 + struct cxl_irq_info irq_info; 182 200 int ph = cxl_p2n_read(afu, CXL_PSL_PEHandle_An) & 0xffff; 183 201 int ret; 202 + 203 + if ((ret = cxl_get_irq(afu, &irq_info))) { 204 + WARN(1, "Unable to get CXL IRQ Info: %i\n", ret); 205 + return fail_psl_irq(afu, &irq_info); 206 + } 184 207 185 208 rcu_read_lock(); 186 209 ctx = idr_find(&afu->contexts_idr, ph); 187 210 if (ctx) { 188 - ret = cxl_irq(irq, ctx); 211 + ret = cxl_irq(irq, ctx, &irq_info); 189 212 rcu_read_unlock(); 190 213 return ret; 191 214 } 192 215 rcu_read_unlock(); 193 216 194 - WARN(1, "Unable to demultiplex CXL PSL IRQ\n"); 195 - return IRQ_HANDLED; 217 + WARN(1, "Unable to demultiplex CXL PSL IRQ for PE %i DSISR %.16llx DAR" 218 + " %.16llx\n(Possible AFU HW issue - was a term/remove acked" 219 + " with outstanding transactions?)\n", ph, irq_info.dsisr, 220 + irq_info.dar); 221 + return fail_psl_irq(afu, &irq_info); 196 222 } 197 223 198 224 static irqreturn_t cxl_irq_afu(int irq, void *data) ··· 255 243 } 256 244 257 245 unsigned int cxl_map_irq(struct cxl *adapter, irq_hw_number_t hwirq, 258 - irq_handler_t handler, void *cookie) 246 + irq_handler_t handler, void *cookie, const char *name) 259 247 { 260 248 unsigned int virq; 261 249 int result; ··· 271 259 272 260 pr_devel("hwirq %#lx mapped to virq %u\n", hwirq, virq); 273 261 274 - result = request_irq(virq, handler, 0, "cxl", cookie); 262 + result = request_irq(virq, handler, 0, name, cookie); 275 263 if (result) { 276 264 dev_warn(&adapter->dev, "cxl_map_irq: request_irq failed: %i\n", result); 277 265 return 0; ··· 290 278 irq_handler_t handler, 291 279 void *cookie, 292 280 irq_hw_number_t *dest_hwirq, 293 - unsigned int *dest_virq) 281 + unsigned int *dest_virq, 282 + const char *name) 294 283 { 295 284 int hwirq, virq; 296 285 297 286 if ((hwirq = cxl_alloc_one_irq(adapter)) < 0) 298 287 return hwirq; 299 
288 300 - if (!(virq = cxl_map_irq(adapter, hwirq, handler, cookie))) 289 + if (!(virq = cxl_map_irq(adapter, hwirq, handler, cookie, name))) 301 290 goto err; 302 291 303 292 *dest_hwirq = hwirq; ··· 315 302 { 316 303 int rc; 317 304 305 + adapter->irq_name = kasprintf(GFP_KERNEL, "cxl-%s-err", 306 + dev_name(&adapter->dev)); 307 + if (!adapter->irq_name) 308 + return -ENOMEM; 309 + 318 310 if ((rc = cxl_register_one_irq(adapter, cxl_irq_err, adapter, 319 311 &adapter->err_hwirq, 320 - &adapter->err_virq))) 312 + &adapter->err_virq, 313 + adapter->irq_name))) { 314 + kfree(adapter->irq_name); 315 + adapter->irq_name = NULL; 321 316 return rc; 317 + } 322 318 323 319 cxl_p1_write(adapter, CXL_PSL_ErrIVTE, adapter->err_hwirq & 0xffff); 324 320 ··· 339 317 cxl_p1_write(adapter, CXL_PSL_ErrIVTE, 0x0000000000000000); 340 318 cxl_unmap_irq(adapter->err_virq, adapter); 341 319 cxl_release_one_irq(adapter, adapter->err_hwirq); 320 + kfree(adapter->irq_name); 342 321 } 343 322 344 323 int cxl_register_serr_irq(struct cxl_afu *afu) ··· 347 324 u64 serr; 348 325 int rc; 349 326 327 + afu->err_irq_name = kasprintf(GFP_KERNEL, "cxl-%s-err", 328 + dev_name(&afu->dev)); 329 + if (!afu->err_irq_name) 330 + return -ENOMEM; 331 + 350 332 if ((rc = cxl_register_one_irq(afu->adapter, cxl_slice_irq_err, afu, 351 333 &afu->serr_hwirq, 352 - &afu->serr_virq))) 334 + &afu->serr_virq, afu->err_irq_name))) { 335 + kfree(afu->err_irq_name); 336 + afu->err_irq_name = NULL; 353 337 return rc; 338 + } 354 339 355 340 serr = cxl_p1n_read(afu, CXL_PSL_SERR_An); 356 341 serr = (serr & 0x00ffffffffff0000ULL) | (afu->serr_hwirq & 0xffff); ··· 372 341 cxl_p1n_write(afu, CXL_PSL_SERR_An, 0x0000000000000000); 373 342 cxl_unmap_irq(afu->serr_virq, afu); 374 343 cxl_release_one_irq(afu->adapter, afu->serr_hwirq); 344 + kfree(afu->err_irq_name); 375 345 } 376 346 377 347 int cxl_register_psl_irq(struct cxl_afu *afu) 378 348 { 379 - return cxl_register_one_irq(afu->adapter, cxl_irq_multiplexed, afu, 380 - 
&afu->psl_hwirq, &afu->psl_virq); 349 + int rc; 350 + 351 + afu->psl_irq_name = kasprintf(GFP_KERNEL, "cxl-%s", 352 + dev_name(&afu->dev)); 353 + if (!afu->psl_irq_name) 354 + return -ENOMEM; 355 + 356 + if ((rc = cxl_register_one_irq(afu->adapter, cxl_irq_multiplexed, afu, 357 + &afu->psl_hwirq, &afu->psl_virq, 358 + afu->psl_irq_name))) { 359 + kfree(afu->psl_irq_name); 360 + afu->psl_irq_name = NULL; 361 + } 362 + return rc; 381 363 } 382 364 383 365 void cxl_release_psl_irq(struct cxl_afu *afu) 384 366 { 385 367 cxl_unmap_irq(afu->psl_virq, afu); 386 368 cxl_release_one_irq(afu->adapter, afu->psl_hwirq); 369 + kfree(afu->psl_irq_name); 370 + } 371 + 372 + void afu_irq_name_free(struct cxl_context *ctx) 373 + { 374 + struct cxl_irq_name *irq_name, *tmp; 375 + 376 + list_for_each_entry_safe(irq_name, tmp, &ctx->irq_names, list) { 377 + kfree(irq_name->name); 378 + list_del(&irq_name->list); 379 + kfree(irq_name); 380 + } 387 381 } 388 382 389 383 int afu_register_irqs(struct cxl_context *ctx, u32 count) 390 384 { 391 385 irq_hw_number_t hwirq; 392 - int rc, r, i; 386 + int rc, r, i, j = 1; 387 + struct cxl_irq_name *irq_name; 393 388 394 389 if ((rc = cxl_alloc_irq_ranges(&ctx->irqs, ctx->afu->adapter, count))) 395 390 return rc; ··· 429 372 sizeof(*ctx->irq_bitmap), GFP_KERNEL); 430 373 if (!ctx->irq_bitmap) 431 374 return -ENOMEM; 375 + 376 + /* 377 + * Allocate names first. If any fail, bail out before allocating 378 + * actual hardware IRQs. 
379 + */ 380 + INIT_LIST_HEAD(&ctx->irq_names); 381 + for (r = 1; r < CXL_IRQ_RANGES; r++) { 382 + for (i = 0; i < ctx->irqs.range[r]; hwirq++, i++) { 383 + irq_name = kmalloc(sizeof(struct cxl_irq_name), 384 + GFP_KERNEL); 385 + if (!irq_name) 386 + goto out; 387 + irq_name->name = kasprintf(GFP_KERNEL, "cxl-%s-pe%i-%i", 388 + dev_name(&ctx->afu->dev), 389 + ctx->pe, j); 390 + if (!irq_name->name) { 391 + kfree(irq_name); 392 + goto out; 393 + } 394 + /* Add to tail so the next loop gets the correct order */ 395 + list_add_tail(&irq_name->list, &ctx->irq_names); 396 + j++; 397 + } 398 + } 399 + 400 + /* We've allocated all memory now, so let's do the irq allocations */ 401 + irq_name = list_first_entry(&ctx->irq_names, struct cxl_irq_name, list); 432 402 for (r = 1; r < CXL_IRQ_RANGES; r++) { 433 403 hwirq = ctx->irqs.offset[r]; 434 404 for (i = 0; i < ctx->irqs.range[r]; hwirq++, i++) { 435 405 cxl_map_irq(ctx->afu->adapter, hwirq, 436 - cxl_irq_afu, ctx); 406 + cxl_irq_afu, ctx, irq_name->name); 407 + irq_name = list_next_entry(irq_name, list); 437 408 } 438 409 } 439 410 440 411 return 0; 412 + 413 + out: 414 + afu_irq_name_free(ctx); 415 + return -ENOMEM; 441 416 } 442 417 443 418 void afu_release_irqs(struct cxl_context *ctx) ··· 487 398 } 488 399 } 489 400 401 + afu_irq_name_free(ctx); 490 402 cxl_release_irq_ranges(&ctx->irqs, ctx->afu->adapter); 491 403 }
+7 -7
drivers/misc/cxl/native.c
··· 637 637 return detach_process_native_afu_directed(ctx); 638 638 } 639 639 640 - int cxl_get_irq(struct cxl_context *ctx, struct cxl_irq_info *info) 640 + int cxl_get_irq(struct cxl_afu *afu, struct cxl_irq_info *info) 641 641 { 642 642 u64 pidtid; 643 643 644 - info->dsisr = cxl_p2n_read(ctx->afu, CXL_PSL_DSISR_An); 645 - info->dar = cxl_p2n_read(ctx->afu, CXL_PSL_DAR_An); 646 - info->dsr = cxl_p2n_read(ctx->afu, CXL_PSL_DSR_An); 647 - pidtid = cxl_p2n_read(ctx->afu, CXL_PSL_PID_TID_An); 644 + info->dsisr = cxl_p2n_read(afu, CXL_PSL_DSISR_An); 645 + info->dar = cxl_p2n_read(afu, CXL_PSL_DAR_An); 646 + info->dsr = cxl_p2n_read(afu, CXL_PSL_DSR_An); 647 + pidtid = cxl_p2n_read(afu, CXL_PSL_PID_TID_An); 648 648 info->pid = pidtid >> 32; 649 649 info->tid = pidtid & 0xffffffff; 650 - info->afu_err = cxl_p2n_read(ctx->afu, CXL_AFU_ERR_An); 651 - info->errstat = cxl_p2n_read(ctx->afu, CXL_PSL_ErrStat_An); 650 + info->afu_err = cxl_p2n_read(afu, CXL_AFU_ERR_An); 651 + info->errstat = cxl_p2n_read(afu, CXL_PSL_ErrStat_An); 652 652 653 653 return 0; 654 654 }
+11
drivers/rtc/Kconfig
··· 987 987 If you say yes here you get support for the RTC subsystem of the 988 988 NUC910/NUC920 used in embedded systems. 989 989 990 + config RTC_DRV_OPAL 991 + tristate "IBM OPAL RTC driver" 992 + depends on PPC_POWERNV 993 + default y 994 + help 995 + If you say yes here you get support for the PowerNV platform RTC 996 + driver based on OPAL interfaces. 997 + 998 + This driver can also be built as a module. If so, the module 999 + will be called rtc-opal. 1000 + 990 1001 comment "on-CPU RTC drivers" 991 1002 992 1003 config RTC_DRV_DAVINCI
+1
drivers/rtc/Makefile
··· 92 92 obj-$(CONFIG_RTC_DRV_MPC5121) += rtc-mpc5121.o 93 93 obj-$(CONFIG_RTC_DRV_MV) += rtc-mv.o 94 94 obj-$(CONFIG_RTC_DRV_NUC900) += rtc-nuc900.o 95 + obj-$(CONFIG_RTC_DRV_OPAL) += rtc-opal.o 95 96 obj-$(CONFIG_RTC_DRV_OMAP) += rtc-omap.o 96 97 obj-$(CONFIG_RTC_DRV_PALMAS) += rtc-palmas.o 97 98 obj-$(CONFIG_RTC_DRV_PCAP) += rtc-pcap.o
+261
drivers/rtc/rtc-opal.c
··· 1 + /* 2 + * IBM OPAL RTC driver 3 + * Copyright (C) 2014 IBM 4 + * 5 + * This program is free software; you can redistribute it and/or modify 6 + * it under the terms of the GNU General Public License as published by 7 + * the Free Software Foundation; either version 2 of the License, or 8 + * (at your option) any later version. 9 + * 10 + * This program is distributed in the hope that it will be useful, 11 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 12 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 13 + * GNU General Public License for more details. 14 + * 15 + * You should have received a copy of the GNU General Public License 16 + * along with this program. 17 + */ 18 + 19 + #define DRVNAME "rtc-opal" 20 + #define pr_fmt(fmt) DRVNAME ": " fmt 21 + 22 + #include <linux/module.h> 23 + #include <linux/err.h> 24 + #include <linux/rtc.h> 25 + #include <linux/delay.h> 26 + #include <linux/bcd.h> 27 + #include <linux/platform_device.h> 28 + #include <linux/of.h> 29 + #include <asm/opal.h> 30 + #include <asm/firmware.h> 31 + 32 + static void opal_to_tm(u32 y_m_d, u64 h_m_s_ms, struct rtc_time *tm) 33 + { 34 + tm->tm_year = ((bcd2bin(y_m_d >> 24) * 100) + 35 + bcd2bin((y_m_d >> 16) & 0xff)) - 1900; 36 + tm->tm_mon = bcd2bin((y_m_d >> 8) & 0xff) - 1; 37 + tm->tm_mday = bcd2bin(y_m_d & 0xff); 38 + tm->tm_hour = bcd2bin((h_m_s_ms >> 56) & 0xff); 39 + tm->tm_min = bcd2bin((h_m_s_ms >> 48) & 0xff); 40 + tm->tm_sec = bcd2bin((h_m_s_ms >> 40) & 0xff); 41 + 42 + GregorianDay(tm); 43 + } 44 + 45 + static void tm_to_opal(struct rtc_time *tm, u32 *y_m_d, u64 *h_m_s_ms) 46 + { 47 + *y_m_d |= ((u32)bin2bcd((tm->tm_year + 1900) / 100)) << 24; 48 + *y_m_d |= ((u32)bin2bcd((tm->tm_year + 1900) % 100)) << 16; 49 + *y_m_d |= ((u32)bin2bcd((tm->tm_mon + 1))) << 8; 50 + *y_m_d |= ((u32)bin2bcd(tm->tm_mday)); 51 + 52 + *h_m_s_ms |= ((u64)bin2bcd(tm->tm_hour)) << 56; 53 + *h_m_s_ms |= ((u64)bin2bcd(tm->tm_min)) << 48; 54 + *h_m_s_ms |= 
((u64)bin2bcd(tm->tm_sec)) << 40; 55 + } 56 + 57 + static int opal_get_rtc_time(struct device *dev, struct rtc_time *tm) 58 + { 59 + long rc = OPAL_BUSY; 60 + u32 y_m_d; 61 + u64 h_m_s_ms; 62 + __be32 __y_m_d; 63 + __be64 __h_m_s_ms; 64 + 65 + while (rc == OPAL_BUSY || rc == OPAL_BUSY_EVENT) { 66 + rc = opal_rtc_read(&__y_m_d, &__h_m_s_ms); 67 + if (rc == OPAL_BUSY_EVENT) 68 + opal_poll_events(NULL); 69 + else 70 + msleep(10); 71 + } 72 + 73 + if (rc != OPAL_SUCCESS) 74 + return -EIO; 75 + 76 + y_m_d = be32_to_cpu(__y_m_d); 77 + h_m_s_ms = be64_to_cpu(__h_m_s_ms); 78 + opal_to_tm(y_m_d, h_m_s_ms, tm); 79 + 80 + return 0; 81 + } 82 + 83 + static int opal_set_rtc_time(struct device *dev, struct rtc_time *tm) 84 + { 85 + long rc = OPAL_BUSY; 86 + u32 y_m_d = 0; 87 + u64 h_m_s_ms = 0; 88 + 89 + tm_to_opal(tm, &y_m_d, &h_m_s_ms); 90 + while (rc == OPAL_BUSY || rc == OPAL_BUSY_EVENT) { 91 + rc = opal_rtc_write(y_m_d, h_m_s_ms); 92 + if (rc == OPAL_BUSY_EVENT) 93 + opal_poll_events(NULL); 94 + else 95 + msleep(10); 96 + } 97 + 98 + return rc == OPAL_SUCCESS ? 0 : -EIO; 99 + } 100 + 101 + /* 102 + * TPO Timed Power-On 103 + * 104 + * TPO get/set OPAL calls care about the hour and min and to make it consistent 105 + * with the rtc utility time conversion functions, we use the 'u64' to store 106 + * its value and perform bit shift by 32 before use.. 
107 + */ 108 + static int opal_get_tpo_time(struct device *dev, struct rtc_wkalrm *alarm) 109 + { 110 + __be32 __y_m_d, __h_m; 111 + struct opal_msg msg; 112 + int rc, token; 113 + u64 h_m_s_ms; 114 + u32 y_m_d; 115 + 116 + token = opal_async_get_token_interruptible(); 117 + if (token < 0) { 118 + if (token != -ERESTARTSYS) 119 + pr_err("Failed to get the async token\n"); 120 + 121 + return token; 122 + } 123 + 124 + rc = opal_tpo_read(token, &__y_m_d, &__h_m); 125 + if (rc != OPAL_ASYNC_COMPLETION) { 126 + rc = -EIO; 127 + goto exit; 128 + } 129 + 130 + rc = opal_async_wait_response(token, &msg); 131 + if (rc) { 132 + rc = -EIO; 133 + goto exit; 134 + } 135 + 136 + rc = be64_to_cpu(msg.params[1]); 137 + if (rc != OPAL_SUCCESS) { 138 + rc = -EIO; 139 + goto exit; 140 + } 141 + 142 + y_m_d = be32_to_cpu(__y_m_d); 143 + h_m_s_ms = ((u64)be32_to_cpu(__h_m) << 32); 144 + opal_to_tm(y_m_d, h_m_s_ms, &alarm->time); 145 + 146 + exit: 147 + opal_async_release_token(token); 148 + return rc; 149 + } 150 + 151 + /* Set Timed Power-On */ 152 + static int opal_set_tpo_time(struct device *dev, struct rtc_wkalrm *alarm) 153 + { 154 + u64 h_m_s_ms = 0, token; 155 + struct opal_msg msg; 156 + u32 y_m_d = 0; 157 + int rc; 158 + 159 + tm_to_opal(&alarm->time, &y_m_d, &h_m_s_ms); 160 + 161 + token = opal_async_get_token_interruptible(); 162 + if (token < 0) { 163 + if (token != -ERESTARTSYS) 164 + pr_err("Failed to get the async token\n"); 165 + 166 + return token; 167 + } 168 + 169 + /* TPO, we care about hour and minute */ 170 + rc = opal_tpo_write(token, y_m_d, 171 + (u32)((h_m_s_ms >> 32) & 0xffff0000)); 172 + if (rc != OPAL_ASYNC_COMPLETION) { 173 + rc = -EIO; 174 + goto exit; 175 + } 176 + 177 + rc = opal_async_wait_response(token, &msg); 178 + if (rc) { 179 + rc = -EIO; 180 + goto exit; 181 + } 182 + 183 + rc = be64_to_cpu(msg.params[1]); 184 + if (rc != OPAL_SUCCESS) 185 + rc = -EIO; 186 + 187 + exit: 188 + opal_async_release_token(token); 189 + return rc; 190 + } 191 + 192 + 
static const struct rtc_class_ops opal_rtc_ops = { 193 + .read_time = opal_get_rtc_time, 194 + .set_time = opal_set_rtc_time, 195 + .read_alarm = opal_get_tpo_time, 196 + .set_alarm = opal_set_tpo_time, 197 + }; 198 + 199 + static int opal_rtc_probe(struct platform_device *pdev) 200 + { 201 + struct rtc_device *rtc; 202 + 203 + if (pdev->dev.of_node && of_get_property(pdev->dev.of_node, "has-tpo", 204 + NULL)) 205 + device_set_wakeup_capable(&pdev->dev, true); 206 + 207 + rtc = devm_rtc_device_register(&pdev->dev, DRVNAME, &opal_rtc_ops, 208 + THIS_MODULE); 209 + if (IS_ERR(rtc)) 210 + return PTR_ERR(rtc); 211 + 212 + rtc->uie_unsupported = 1; 213 + 214 + return 0; 215 + } 216 + 217 + static const struct of_device_id opal_rtc_match[] = { 218 + { 219 + .compatible = "ibm,opal-rtc", 220 + }, 221 + { } 222 + }; 223 + MODULE_DEVICE_TABLE(of, opal_rtc_match); 224 + 225 + static const struct platform_device_id opal_rtc_driver_ids[] = { 226 + { 227 + .name = "opal-rtc", 228 + }, 229 + { } 230 + }; 231 + MODULE_DEVICE_TABLE(platform, opal_rtc_driver_ids); 232 + 233 + static struct platform_driver opal_rtc_driver = { 234 + .probe = opal_rtc_probe, 235 + .id_table = opal_rtc_driver_ids, 236 + .driver = { 237 + .name = DRVNAME, 238 + .owner = THIS_MODULE, 239 + .of_match_table = opal_rtc_match, 240 + }, 241 + }; 242 + 243 + static int __init opal_rtc_init(void) 244 + { 245 + if (!firmware_has_feature(FW_FEATURE_OPAL)) 246 + return -ENODEV; 247 + 248 + return platform_driver_register(&opal_rtc_driver); 249 + } 250 + 251 + static void __exit opal_rtc_exit(void) 252 + { 253 + platform_driver_unregister(&opal_rtc_driver); 254 + } 255 + 256 + MODULE_AUTHOR("Neelesh Gupta <neelegup@linux.vnet.ibm.com>"); 257 + MODULE_DESCRIPTION("IBM OPAL RTC driver"); 258 + MODULE_LICENSE("GPL"); 259 + 260 + module_init(opal_rtc_init); 261 + module_exit(opal_rtc_exit);
+46
include/linux/hugetlb.h
··· 175 175 } 176 176 177 177 #endif /* !CONFIG_HUGETLB_PAGE */ 178 + /* 179 + * Hugepages at the page global directory. If an arch supports 180 + * hugepages at the pgd level, it needs to define this. 181 + */ 182 + #ifndef pgd_huge 183 + #define pgd_huge(x) 0 184 + #endif 185 + 186 + #ifndef pgd_write 187 + static inline int pgd_write(pgd_t pgd) 188 + { 189 + BUG(); 190 + return 0; 191 + } 192 + #endif 193 + 194 + #ifndef pud_write 195 + static inline int pud_write(pud_t pud) 196 + { 197 + BUG(); 198 + return 0; 199 + } 200 + #endif 201 + 202 + #ifndef is_hugepd 203 + /* 204 + * Some architectures require a hugepage directory format that is 205 + * needed to support multiple hugepage sizes. For example 206 + * a4fe3ce76 "powerpc/mm: Allow more flexible layouts for hugepage pagetables" 207 + * introduced the same on powerpc. This allows for a more flexible hugepage 208 + * pagetable layout. 209 + */ 210 + typedef struct { unsigned long pd; } hugepd_t; 211 + #define is_hugepd(hugepd) (0) 212 + #define __hugepd(x) ((hugepd_t) { (x) }) 213 + static inline int gup_huge_pd(hugepd_t hugepd, unsigned long addr, 214 + unsigned pdshift, unsigned long end, 215 + int write, struct page **pages, int *nr) 216 + { 217 + return 0; 218 + } 219 + #else 220 + extern int gup_huge_pd(hugepd_t hugepd, unsigned long addr, 221 + unsigned pdshift, unsigned long end, 222 + int write, struct page **pages, int *nr); 223 + #endif 178 224 179 225 #define HUGETLB_ANON_FILE "anon_hugepage" 180 226
+73 -8
mm/gup.c
··· 3 3 #include <linux/err.h> 4 4 #include <linux/spinlock.h> 5 5 6 - #include <linux/hugetlb.h> 7 6 #include <linux/mm.h> 8 7 #include <linux/pagemap.h> 9 8 #include <linux/rmap.h> ··· 11 12 12 13 #include <linux/sched.h> 13 14 #include <linux/rwsem.h> 15 + #include <linux/hugetlb.h> 14 16 #include <asm/pgtable.h> 15 17 16 18 #include "internal.h" ··· 875 875 return 1; 876 876 } 877 877 878 + static int gup_huge_pgd(pgd_t orig, pgd_t *pgdp, unsigned long addr, 879 + unsigned long end, int write, 880 + struct page **pages, int *nr) 881 + { 882 + int refs; 883 + struct page *head, *page, *tail; 884 + 885 + if (write && !pgd_write(orig)) 886 + return 0; 887 + 888 + refs = 0; 889 + head = pgd_page(orig); 890 + page = head + ((addr & ~PGDIR_MASK) >> PAGE_SHIFT); 891 + tail = page; 892 + do { 893 + VM_BUG_ON_PAGE(compound_head(page) != head, page); 894 + pages[*nr] = page; 895 + (*nr)++; 896 + page++; 897 + refs++; 898 + } while (addr += PAGE_SIZE, addr != end); 899 + 900 + if (!page_cache_add_speculative(head, refs)) { 901 + *nr -= refs; 902 + return 0; 903 + } 904 + 905 + if (unlikely(pgd_val(orig) != pgd_val(*pgdp))) { 906 + *nr -= refs; 907 + while (refs--) 908 + put_page(head); 909 + return 0; 910 + } 911 + 912 + while (refs--) { 913 + if (PageTail(tail)) 914 + get_huge_page_tail(tail); 915 + tail++; 916 + } 917 + 918 + return 1; 919 + } 920 + 878 921 static int gup_pmd_range(pud_t pud, unsigned long addr, unsigned long end, 879 922 int write, struct page **pages, int *nr) 880 923 { ··· 945 902 pages, nr)) 946 903 return 0; 947 904 905 + } else if (unlikely(is_hugepd(__hugepd(pmd_val(pmd))))) { 906 + /* 907 + * architecture have different format for hugetlbfs 908 + * pmd format and THP pmd format 909 + */ 910 + if (!gup_huge_pd(__hugepd(pmd_val(pmd)), addr, 911 + PMD_SHIFT, next, write, pages, nr)) 912 + return 0; 948 913 } else if (!gup_pte_range(pmd, addr, next, write, pages, nr)) 949 914 return 0; 950 915 } while (pmdp++, addr = next, addr != end); ··· 960 909 
return 1; 961 910 } 962 911 963 - static int gup_pud_range(pgd_t *pgdp, unsigned long addr, unsigned long end, 964 - int write, struct page **pages, int *nr) 912 + static int gup_pud_range(pgd_t pgd, unsigned long addr, unsigned long end, 913 + int write, struct page **pages, int *nr) 965 914 { 966 915 unsigned long next; 967 916 pud_t *pudp; 968 917 969 - pudp = pud_offset(pgdp, addr); 918 + pudp = pud_offset(&pgd, addr); 970 919 do { 971 920 pud_t pud = ACCESS_ONCE(*pudp); 972 921 973 922 next = pud_addr_end(addr, end); 974 923 if (pud_none(pud)) 975 924 return 0; 976 - if (pud_huge(pud)) { 925 + if (unlikely(pud_huge(pud))) { 977 926 if (!gup_huge_pud(pud, pudp, addr, next, write, 978 - pages, nr)) 927 + pages, nr)) 928 + return 0; 929 + } else if (unlikely(is_hugepd(__hugepd(pud_val(pud))))) { 930 + if (!gup_huge_pd(__hugepd(pud_val(pud)), addr, 931 + PUD_SHIFT, next, write, pages, nr)) 979 932 return 0; 980 933 } else if (!gup_pmd_range(pud, addr, next, write, pages, nr)) 981 934 return 0; ··· 1025 970 local_irq_save(flags); 1026 971 pgdp = pgd_offset(mm, addr); 1027 972 do { 973 + pgd_t pgd = ACCESS_ONCE(*pgdp); 974 + 1028 975 next = pgd_addr_end(addr, end); 1029 - if (pgd_none(*pgdp)) 976 + if (pgd_none(pgd)) 1030 977 break; 1031 - else if (!gup_pud_range(pgdp, addr, next, write, pages, &nr)) 978 + if (unlikely(pgd_huge(pgd))) { 979 + if (!gup_huge_pgd(pgd, pgdp, addr, next, write, 980 + pages, &nr)) 981 + break; 982 + } else if (unlikely(is_hugepd(__hugepd(pgd_val(pgd))))) { 983 + if (!gup_huge_pd(__hugepd(pgd_val(pgd)), addr, 984 + PGDIR_SHIFT, next, write, pages, &nr)) 985 + break; 986 + } else if (!gup_pud_range(pgd, addr, next, write, pages, &nr)) 1032 987 break; 1033 988 } while (pgdp++, addr = next, addr != end); 1034 989 local_irq_restore(flags);