Merge branch 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/benh/powerpc

* 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/benh/powerpc: (72 commits)
powerpc/pseries: Fix build of topology stuff without CONFIG_NUMA
powerpc/pseries: Fix VPHN build errors on non-SMP systems
powerpc/83xx: add mpc8308_p1m DMA controller device-tree node
powerpc/83xx: add DMA controller to mpc8308 device-tree node
powerpc/512x: try to free dma descriptors in case of allocation failure
powerpc/512x: add MPC8308 dma support
powerpc/512x: fix the hanged dma transfer issue
powerpc/512x: scatter/gather dma fix
powerpc/powermac: Make auto-loading of therm_pm72 possible
of/address: Use propper endianess in get_flags
powerpc/pci: Use printf extension %pR for struct resource
powerpc: Remove unnecessary casts of void ptr
powerpc: Disable VPHN polling during a suspend operation
powerpc/pseries: Poll VPA for topology changes and update NUMA maps
powerpc: iommu: Add device name to iommu error printks
powerpc: Record vma->phys_addr in ioremap()
powerpc: Update compat_arch_ptrace
powerpc: Fix PPC_PTRACE_SETHWDEBUG on PPC_BOOK3S
powerpc/time: printk time stamp init not correct
powerpc: Minor cleanups for machdep.h
...

+2071 -712
+8
Documentation/kernel-parameters.txt
··· 403 bttv.pll= See Documentation/video4linux/bttv/Insmod-options 404 bttv.tuner= and Documentation/video4linux/bttv/CARDLIST 405 406 c101= [NET] Moxa C101 synchronous serial card 407 408 cachesize= [BUGS=X86-32] Override level 2 CPU cache size detection. ··· 1493 1494 mtdparts= [MTD] 1495 See drivers/mtd/cmdlinepart.c. 1496 1497 onenand.bdry= [HW,MTD] Flex-OneNAND Boundary Configuration 1498
··· 403 bttv.pll= See Documentation/video4linux/bttv/Insmod-options 404 bttv.tuner= and Documentation/video4linux/bttv/CARDLIST 405 406 + bulk_remove=off [PPC] This parameter disables the use of the pSeries 407 + firmware feature for flushing multiple hpte entries 408 + at a time. 409 + 410 c101= [NET] Moxa C101 synchronous serial card 411 412 cachesize= [BUGS=X86-32] Override level 2 CPU cache size detection. ··· 1489 1490 mtdparts= [MTD] 1491 See drivers/mtd/cmdlinepart.c. 1492 + 1493 + multitce=off [PPC] This parameter disables the use of the pSeries 1494 + firmware feature for updating multiple TCE entries 1495 + at a time. 1496 1497 onenand.bdry= [HW,MTD] Flex-OneNAND Boundary Configuration 1498
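
Both flags are parsed very early in boot by the pseries platform code, which simply clears the corresponding firmware feature bit. A minimal sketch of such a handler, assuming the usual __setup() pattern (handler shown for bulk_remove; names illustrative):

    #include <linux/init.h>
    #include <linux/string.h>
    #include <asm/firmware.h>

    static int __init disable_bulk_remove(char *str)
    {
            /* "bulk_remove=off" clears the feature bit, so hpte flushes
             * fall back to one H_REMOVE call per entry. */
            if (strcmp(str, "off") == 0)
                    powerpc_firmware_features &= ~FW_FEATURE_BULK_REMOVE;
            return 1;
    }
    __setup("bulk_remove=", disable_bulk_remove);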
+2 -2
Documentation/powerpc/booting-without-of.txt
··· 131 point and the way a new platform should be added to the kernel. The 132 legacy iSeries platform breaks those rules as it predates this scheme, 133 but no new board support will be accepted in the main tree that 134 - doesn't follows them properly. In addition, since the advent of the 135 arch/powerpc merged architecture for ppc32 and ppc64, new 32-bit 136 platforms and 32-bit platforms which move into arch/powerpc will be 137 required to use these rules as well. ··· 1025 1026 WARNING: This version is still in early development stage; the 1027 resulting device-tree "blobs" have not yet been validated with the 1028 - kernel. The current generated bloc lacks a useful reserve map (it will 1029 be fixed to generate an empty one, it's up to the bootloader to fill 1030 it up) among others. The error handling needs work, bugs are lurking, 1031 etc...
··· 131 point and the way a new platform should be added to the kernel. The 132 legacy iSeries platform breaks those rules as it predates this scheme, 133 but no new board support will be accepted in the main tree that 134 + doesn't follow them properly. In addition, since the advent of the 135 arch/powerpc merged architecture for ppc32 and ppc64, new 32-bit 136 platforms and 32-bit platforms which move into arch/powerpc will be 137 required to use these rules as well. ··· 1025 1026 WARNING: This version is still in early development stage; the 1027 resulting device-tree "blobs" have not yet been validated with the 1028 + kernel. The current generated block lacks a useful reserve map (it will 1029 be fixed to generate an empty one, it's up to the bootloader to fill 1030 it up) among others. The error handling needs work, bugs are lurking, 1031 etc...
+52
Documentation/powerpc/dts-bindings/4xx/cpm.txt
···
··· 1 + PPC4xx Clock Power Management (CPM) node 2 + 3 + Required properties: 4 + - compatible : compatible list, currently only "ibm,cpm" 5 + - dcr-access-method : "native" 6 + - dcr-reg : < DCR register range > 7 + 8 + Optional properties: 9 + - er-offset : All 4xx SoCs with a CPM controller have 10 + one of two different orders for the CPM 11 + registers. Some have the CPM registers 12 + in the following order (ER,FR,SR). The 13 + others have them in the following order 14 + (SR,ER,FR). For the second case set 15 + er-offset = <1>. 16 + - unused-units : specifier consists of one cell. For each 17 + bit set in the cell, the corresponding bit 18 + in CPM will be set to turn off unused 19 + devices. 20 + - idle-doze : specifier consists of one cell. For each 21 + bit set in the cell, the corresponding bit 22 + in CPM will be set to turn off unused 23 + devices. This is usually just CPM[CPU]. 24 + - standby : specifier consists of one cell. For each 25 + bit set in the cell, the corresponding bit 26 + in CPM will be set on standby and 27 + restored on resume. 28 + - suspend : specifier consists of one cell. For each 29 + bit set in the cell, the corresponding bit 30 + in CPM will be set on suspend (mem) and 31 + restored on resume. Note, for standby 32 + and suspend the corresponding bits can 33 + be different or the same. Usually for 34 + standby only class 2 and 3 units are set. 35 + However, the interface does not care. 36 + If they are the same, additional 37 + power saving will be seen if support 38 + is available to put the DDR in self 39 + refresh mode, along with any other power 40 + saving techniques for the specific SoC. 41 + 42 + Example: 43 + CPM0: cpm { 44 + compatible = "ibm,cpm"; 45 + dcr-access-method = "native"; 46 + dcr-reg = <0x160 0x003>; 47 + er-offset = <0>; 48 + unused-units = <0x00000100>; 49 + idle-doze = <0x02000000>; 50 + standby = <0xfeff0000>; 51 + suspend = <0xfeff791d>; 52 + };
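
On the driver side each of these mask properties is a single 32-bit cell; a hedged sketch of a probe-time lookup (helper name illustrative, error handling trimmed):

    #include <linux/of.h>

    static u32 cpm_get_mask(struct device_node *np, const char *prop)
    {
            const __be32 *val = of_get_property(np, prop, NULL);

            return val ? be32_to_cpup(val) : 0;  /* absent => nothing set */
    }

    /* e.g.: np = of_find_compatible_node(NULL, NULL, "ibm,cpm");
     *       standby_mask = cpm_get_mask(np, "standby"); */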
+13 -3
arch/powerpc/Kconfig
··· 20 config ARCH_PHYS_ADDR_T_64BIT 21 def_bool PPC64 || PHYS_64BIT 22 23 config MMU 24 bool 25 default y ··· 212 config ARCH_SUSPEND_POSSIBLE 213 def_bool y 214 depends on ADB_PMU || PPC_EFIKA || PPC_LITE5200 || PPC_83xx || \ 215 - PPC_85xx || PPC_86xx || PPC_PSERIES 216 217 config PPC_DCR_NATIVE 218 bool ··· 598 599 If unsure, leave blank 600 601 - if !44x || BROKEN 602 config ARCH_WANTS_FREEZER_CONTROL 603 def_bool y 604 depends on ADB_PMU 605 606 source kernel/power/Kconfig 607 - endif 608 609 config SECCOMP 610 bool "Enable seccomp to safely compute untrusted bytecode" ··· 682 help 683 Freescale MPC85xx/MPC86xx power management controller support 684 (suspend/resume). For MPC83xx see platforms/83xx/suspend.c 685 686 config 4xx_SOC 687 bool
··· 20 config ARCH_PHYS_ADDR_T_64BIT 21 def_bool PPC64 || PHYS_64BIT 22 23 + config ARCH_DMA_ADDR_T_64BIT 24 + def_bool ARCH_PHYS_ADDR_T_64BIT 25 + 26 config MMU 27 bool 28 default y ··· 209 config ARCH_SUSPEND_POSSIBLE 210 def_bool y 211 depends on ADB_PMU || PPC_EFIKA || PPC_LITE5200 || PPC_83xx || \ 212 + PPC_85xx || PPC_86xx || PPC_PSERIES || 44x || 40x 213 214 config PPC_DCR_NATIVE 215 bool ··· 595 596 If unsure, leave blank 597 598 config ARCH_WANTS_FREEZER_CONTROL 599 def_bool y 600 depends on ADB_PMU 601 602 source kernel/power/Kconfig 603 604 config SECCOMP 605 bool "Enable seccomp to safely compute untrusted bytecode" ··· 681 help 682 Freescale MPC85xx/MPC86xx power management controller support 683 (suspend/resume). For MPC83xx see platforms/83xx/suspend.c 684 + 685 + config PPC4xx_CPM 686 + bool 687 + default y 688 + depends on SUSPEND && (44x || 40x) 689 + help 690 + PPC4xx Clock Power Management (CPM) support (suspend/resume). 691 + It also enables support for two different idle states (idle-wait 692 + and idle-doze). 693 694 config 4xx_SOC 695 bool
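
ARCH_DMA_ADDR_T_64BIT is the symbol include/linux/types.h keys the width of dma_addr_t off, so DMA addresses now track the physical address width selected above:

    /* include/linux/types.h */
    #ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT
    typedef u64 dma_addr_t;
    #else
    typedef u32 dma_addr_t;
    #endif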
+9 -22
arch/powerpc/boot/dts/canyonlands.dts
··· 105 dcr-reg = <0x00c 0x002>; 106 }; 107 108 L2C0: l2c { 109 compatible = "ibm,l2-cache-460ex", "ibm,l2-cache"; 110 dcr-reg = <0x020 0x008 /* Internal SRAM DCR's */ ··· 277 current-speed = <0>; /* Filled in by U-Boot */ 278 interrupt-parent = <&UIC0>; 279 interrupts = <0x1 0x4>; 280 - }; 281 - 282 - UART2: serial@ef600500 { 283 - device_type = "serial"; 284 - compatible = "ns16550"; 285 - reg = <0xef600500 0x00000008>; 286 - virtual-reg = <0xef600500>; 287 - clock-frequency = <0>; /* Filled in by U-Boot */ 288 - current-speed = <0>; /* Filled in by U-Boot */ 289 - interrupt-parent = <&UIC1>; 290 - interrupts = <28 0x4>; 291 - }; 292 - 293 - UART3: serial@ef600600 { 294 - device_type = "serial"; 295 - compatible = "ns16550"; 296 - reg = <0xef600600 0x00000008>; 297 - virtual-reg = <0xef600600>; 298 - clock-frequency = <0>; /* Filled in by U-Boot */ 299 - current-speed = <0>; /* Filled in by U-Boot */ 300 - interrupt-parent = <&UIC1>; 301 - interrupts = <29 0x4>; 302 }; 303 304 IIC0: i2c@ef600700 {
··· 105 dcr-reg = <0x00c 0x002>; 106 }; 107 108 + CPM0: cpm { 109 + compatible = "ibm,cpm"; 110 + dcr-access-method = "native"; 111 + dcr-reg = <0x160 0x003>; 112 + unused-units = <0x00000100>; 113 + idle-doze = <0x02000000>; 114 + standby = <0xfeff791d>; 115 + }; 116 + 117 L2C0: l2c { 118 compatible = "ibm,l2-cache-460ex", "ibm,l2-cache"; 119 dcr-reg = <0x020 0x008 /* Internal SRAM DCR's */ ··· 268 current-speed = <0>; /* Filled in by U-Boot */ 269 interrupt-parent = <&UIC0>; 270 interrupts = <0x1 0x4>; 271 }; 272 273 IIC0: i2c@ef600700 {
+9
arch/powerpc/boot/dts/kilauea.dts
··· 82 interrupt-parent = <&UIC0>; 83 }; 84 85 plb { 86 compatible = "ibm,plb-405ex", "ibm,plb4"; 87 #address-cells = <1>;
··· 82 interrupt-parent = <&UIC0>; 83 }; 84 85 + CPM0: cpm { 86 + compatible = "ibm,cpm"; 87 + dcr-access-method = "native"; 88 + dcr-reg = <0x0b0 0x003>; 89 + unused-units = <0x00000000>; 90 + idle-doze = <0x02000000>; 91 + standby = <0xe3e74800>; 92 + }; 93 + 94 plb { 95 compatible = "ibm,plb-405ex", "ibm,plb4"; 96 #address-cells = <1>;
+8
arch/powerpc/boot/dts/mpc8308_p1m.dts
··· 297 interrupt-parent = < &ipic >; 298 }; 299 300 }; 301 302 pci0: pcie@e0009000 {
··· 297 interrupt-parent = < &ipic >; 298 }; 299 300 + dma@2c000 { 301 + compatible = "fsl,mpc8308-dma", "fsl,mpc5121-dma"; 302 + reg = <0x2c000 0x1800>; 303 + interrupts = <3 0x8 304 + 94 0x8>; 305 + interrupt-parent = < &ipic >; 306 + }; 307 + 308 }; 309 310 pci0: pcie@e0009000 {
+8
arch/powerpc/boot/dts/mpc8308rdb.dts
··· 265 interrupt-parent = < &ipic >; 266 }; 267 268 }; 269 270 pci0: pcie@e0009000 {
··· 265 interrupt-parent = < &ipic >; 266 }; 267 268 + dma@2c000 { 269 + compatible = "fsl,mpc8308-dma", "fsl,mpc5121-dma"; 270 + reg = <0x2c000 0x1800>; 271 + interrupts = <3 0x8 272 + 94 0x8>; 273 + interrupt-parent = < &ipic >; 274 + }; 275 + 276 }; 277 278 pci0: pcie@e0009000 {
+5
arch/powerpc/configs/40x/kilauea_defconfig
··· 12 CONFIG_MODULE_UNLOAD=y 13 # CONFIG_BLK_DEV_BSG is not set 14 CONFIG_KILAUEA=y 15 # CONFIG_WALNUT is not set 16 CONFIG_SPARSE_IRQ=y 17 CONFIG_PCI=y ··· 44 CONFIG_MTD_NAND=y 45 CONFIG_MTD_NAND_NDFC=y 46 CONFIG_PROC_DEVICETREE=y 47 CONFIG_BLK_DEV_RAM=y 48 CONFIG_BLK_DEV_RAM_SIZE=35000 49 # CONFIG_MISC_DEVICES is not set
··· 12 CONFIG_MODULE_UNLOAD=y 13 # CONFIG_BLK_DEV_BSG is not set 14 CONFIG_KILAUEA=y 15 + CONFIG_NO_HZ=y 16 + CONFIG_HIGH_RES_TIMERS=y 17 # CONFIG_WALNUT is not set 18 CONFIG_SPARSE_IRQ=y 19 CONFIG_PCI=y ··· 42 CONFIG_MTD_NAND=y 43 CONFIG_MTD_NAND_NDFC=y 44 CONFIG_PROC_DEVICETREE=y 45 + CONFIG_PM=y 46 + CONFIG_SUSPEND=y 47 + CONFIG_PPC4xx_CPM=y 48 CONFIG_BLK_DEV_RAM=y 49 CONFIG_BLK_DEV_RAM_SIZE=35000 50 # CONFIG_MISC_DEVICES is not set
+3
arch/powerpc/configs/44x/canyonlands_defconfig
··· 42 CONFIG_MTD_NAND=y 43 CONFIG_MTD_NAND_NDFC=y 44 CONFIG_PROC_DEVICETREE=y 45 CONFIG_BLK_DEV_RAM=y 46 CONFIG_BLK_DEV_RAM_SIZE=35000 47 # CONFIG_MISC_DEVICES is not set
··· 42 CONFIG_MTD_NAND=y 43 CONFIG_MTD_NAND_NDFC=y 44 CONFIG_PROC_DEVICETREE=y 45 + CONFIG_PM=y 46 + CONFIG_SUSPEND=y 47 + CONFIG_PPC4xx_CPM=y 48 CONFIG_BLK_DEV_RAM=y 49 CONFIG_BLK_DEV_RAM_SIZE=35000 50 # CONFIG_MISC_DEVICES is not set
+9
arch/powerpc/include/asm/bitops.h
··· 267 #include <asm-generic/bitops/fls64.h> 268 #endif /* __powerpc64__ */ 269 270 #include <asm-generic/bitops/hweight.h> 271 #include <asm-generic/bitops/find.h> 272 273 /* Little-endian versions */
··· 267 #include <asm-generic/bitops/fls64.h> 268 #endif /* __powerpc64__ */ 269 270 + #ifdef CONFIG_PPC64 271 + unsigned int __arch_hweight8(unsigned int w); 272 + unsigned int __arch_hweight16(unsigned int w); 273 + unsigned int __arch_hweight32(unsigned int w); 274 + unsigned long __arch_hweight64(__u64 w); 275 + #include <asm-generic/bitops/const_hweight.h> 276 + #else 277 #include <asm-generic/bitops/hweight.h> 278 + #endif 279 + 280 #include <asm-generic/bitops/find.h> 281 282 /* Little-endian versions */
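
With const_hweight.h in the include chain, constant arguments are folded at compile time and only runtime values reach the new out-of-line routines; a small illustration (demo function hypothetical):

    #include <linux/bitops.h>

    static int hweight_demo(unsigned int v)
    {
            int a = hweight32(0xff); /* constant: folds to 8 at build time */
            int b = hweight32(v);    /* runtime: calls __arch_hweight32()
                                      * on CONFIG_PPC64 */
            return a + b;
    }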
+6 -3
arch/powerpc/include/asm/cputable.h
··· 199 #define CPU_FTR_UNALIGNED_LD_STD LONG_ASM_CONST(0x0080000000000000) 200 #define CPU_FTR_ASYM_SMT LONG_ASM_CONST(0x0100000000000000) 201 #define CPU_FTR_STCX_CHECKS_ADDRESS LONG_ASM_CONST(0x0200000000000000) 202 203 #ifndef __ASSEMBLY__ 204 ··· 405 CPU_FTR_PPCAS_ARCH_V2 | CPU_FTR_CTRL | \ 406 CPU_FTR_MMCRA | CPU_FTR_SMT | \ 407 CPU_FTR_COHERENT_ICACHE | CPU_FTR_LOCKLESS_TLBIE | \ 408 - CPU_FTR_PURR | CPU_FTR_STCX_CHECKS_ADDRESS) 409 #define CPU_FTRS_POWER6 (CPU_FTR_USE_TB | CPU_FTR_LWSYNC | \ 410 CPU_FTR_PPCAS_ARCH_V2 | CPU_FTR_CTRL | \ 411 CPU_FTR_MMCRA | CPU_FTR_SMT | \ 412 CPU_FTR_COHERENT_ICACHE | CPU_FTR_LOCKLESS_TLBIE | \ 413 CPU_FTR_PURR | CPU_FTR_SPURR | CPU_FTR_REAL_LE | \ 414 CPU_FTR_DSCR | CPU_FTR_UNALIGNED_LD_STD | \ 415 - CPU_FTR_STCX_CHECKS_ADDRESS) 416 #define CPU_FTRS_POWER7 (CPU_FTR_USE_TB | CPU_FTR_LWSYNC | \ 417 CPU_FTR_PPCAS_ARCH_V2 | CPU_FTR_CTRL | \ 418 CPU_FTR_MMCRA | CPU_FTR_SMT | \ 419 CPU_FTR_COHERENT_ICACHE | CPU_FTR_LOCKLESS_TLBIE | \ 420 CPU_FTR_PURR | CPU_FTR_SPURR | CPU_FTR_REAL_LE | \ 421 CPU_FTR_DSCR | CPU_FTR_SAO | CPU_FTR_ASYM_SMT | \ 422 - CPU_FTR_STCX_CHECKS_ADDRESS) 423 #define CPU_FTRS_CELL (CPU_FTR_USE_TB | CPU_FTR_LWSYNC | \ 424 CPU_FTR_PPCAS_ARCH_V2 | CPU_FTR_CTRL | \ 425 CPU_FTR_ALTIVEC_COMP | CPU_FTR_MMCRA | CPU_FTR_SMT | \
··· 199 #define CPU_FTR_UNALIGNED_LD_STD LONG_ASM_CONST(0x0080000000000000) 200 #define CPU_FTR_ASYM_SMT LONG_ASM_CONST(0x0100000000000000) 201 #define CPU_FTR_STCX_CHECKS_ADDRESS LONG_ASM_CONST(0x0200000000000000) 202 + #define CPU_FTR_POPCNTB LONG_ASM_CONST(0x0400000000000000) 203 + #define CPU_FTR_POPCNTD LONG_ASM_CONST(0x0800000000000000) 204 205 #ifndef __ASSEMBLY__ 206 ··· 403 CPU_FTR_PPCAS_ARCH_V2 | CPU_FTR_CTRL | \ 404 CPU_FTR_MMCRA | CPU_FTR_SMT | \ 405 CPU_FTR_COHERENT_ICACHE | CPU_FTR_LOCKLESS_TLBIE | \ 406 + CPU_FTR_PURR | CPU_FTR_STCX_CHECKS_ADDRESS | \ 407 + CPU_FTR_POPCNTB) 408 #define CPU_FTRS_POWER6 (CPU_FTR_USE_TB | CPU_FTR_LWSYNC | \ 409 CPU_FTR_PPCAS_ARCH_V2 | CPU_FTR_CTRL | \ 410 CPU_FTR_MMCRA | CPU_FTR_SMT | \ 411 CPU_FTR_COHERENT_ICACHE | CPU_FTR_LOCKLESS_TLBIE | \ 412 CPU_FTR_PURR | CPU_FTR_SPURR | CPU_FTR_REAL_LE | \ 413 CPU_FTR_DSCR | CPU_FTR_UNALIGNED_LD_STD | \ 414 + CPU_FTR_STCX_CHECKS_ADDRESS | CPU_FTR_POPCNTB) 415 #define CPU_FTRS_POWER7 (CPU_FTR_USE_TB | CPU_FTR_LWSYNC | \ 416 CPU_FTR_PPCAS_ARCH_V2 | CPU_FTR_CTRL | \ 417 CPU_FTR_MMCRA | CPU_FTR_SMT | \ 418 CPU_FTR_COHERENT_ICACHE | CPU_FTR_LOCKLESS_TLBIE | \ 419 CPU_FTR_PURR | CPU_FTR_SPURR | CPU_FTR_REAL_LE | \ 420 CPU_FTR_DSCR | CPU_FTR_SAO | CPU_FTR_ASYM_SMT | \ 421 + CPU_FTR_STCX_CHECKS_ADDRESS | CPU_FTR_POPCNTB | CPU_FTR_POPCNTD) 422 #define CPU_FTRS_CELL (CPU_FTR_USE_TB | CPU_FTR_LWSYNC | \ 423 CPU_FTR_PPCAS_ARCH_V2 | CPU_FTR_CTRL | \ 424 CPU_FTR_ALTIVEC_COMP | CPU_FTR_MMCRA | CPU_FTR_SMT | \
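
Callers can test the new bits with cpu_has_feature(); a hedged sketch of runtime dispatch (in the tree the selection is done with feature fixups inside the hweight routines rather than an explicit branch like this):

    #include <asm/cputable.h>
    #include <linux/bitops.h>

    static unsigned int popcount_long(unsigned long w)
    {
            if (cpu_has_feature(CPU_FTR_POPCNTB))
                    return __arch_hweight64(w); /* popcntb-backed */
            return hweight_long(w);             /* generic fallback */
    }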
+9 -6
arch/powerpc/include/asm/cputhreads.h
··· 61 return cpu_thread_mask_to_cores(cpu_online_map); 62 } 63 64 - static inline int cpu_thread_to_core(int cpu) 65 - { 66 - return cpu >> threads_shift; 67 - } 68 69 static inline int cpu_thread_in_core(int cpu) 70 { 71 return cpu & (threads_per_core - 1); 72 } 73 74 - static inline int cpu_first_thread_in_core(int cpu) 75 { 76 return cpu & ~(threads_per_core - 1); 77 } 78 79 - static inline int cpu_last_thread_in_core(int cpu) 80 { 81 return cpu | (threads_per_core - 1); 82 }
··· 61 return cpu_thread_mask_to_cores(cpu_online_map); 62 } 63 64 + #ifdef CONFIG_SMP 65 + int cpu_core_index_of_thread(int cpu); 66 + int cpu_first_thread_of_core(int core); 67 + #else 68 + static inline int cpu_core_index_of_thread(int cpu) { return cpu; } 69 + static inline int cpu_first_thread_of_core(int core) { return core; } 70 + #endif 71 72 static inline int cpu_thread_in_core(int cpu) 73 { 74 return cpu & (threads_per_core - 1); 75 } 76 77 + static inline int cpu_first_thread_sibling(int cpu) 78 { 79 return cpu & ~(threads_per_core - 1); 80 } 81 82 + static inline int cpu_last_thread_sibling(int cpu) 83 { 84 return cpu | (threads_per_core - 1); 85 }
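
Worked through for threads_per_core = 4 (threads_shift = 2) and cpu = 6, the renamed sibling helpers give:

    cpu_thread_in_core(6)       == (6 & 3)  == 2
    cpu_first_thread_sibling(6) == (6 & ~3) == 4
    cpu_last_thread_sibling(6)  == (6 | 3)  == 7

The core-index helpers become out-of-line functions under CONFIG_SMP because mapping a thread to its core's index can no longer be assumed to be a simple shift.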
+6
arch/powerpc/include/asm/device.h
··· 9 struct dma_map_ops; 10 struct device_node; 11 12 struct dev_archdata { 13 /* DMA operations on that device */ 14 struct dma_map_ops *dma_ops;
··· 9 struct dma_map_ops; 10 struct device_node; 11 12 + /* 13 + * Arch extensions to struct device. 14 + * 15 + * When adding fields, consider macio_add_one_device in 16 + * drivers/macintosh/macio_asic.c 17 + */ 18 struct dev_archdata { 19 /* DMA operations on that device */ 20 struct dma_map_ops *dma_ops;
+2 -1
arch/powerpc/include/asm/firmware.h
··· 46 #define FW_FEATURE_PS3_LV1 ASM_CONST(0x0000000000800000) 47 #define FW_FEATURE_BEAT ASM_CONST(0x0000000001000000) 48 #define FW_FEATURE_CMO ASM_CONST(0x0000000002000000) 49 50 #ifndef __ASSEMBLY__ 51 ··· 60 FW_FEATURE_VIO | FW_FEATURE_RDMA | FW_FEATURE_LLAN | 61 FW_FEATURE_BULK_REMOVE | FW_FEATURE_XDABR | 62 FW_FEATURE_MULTITCE | FW_FEATURE_SPLPAR | FW_FEATURE_LPAR | 63 - FW_FEATURE_CMO, 64 FW_FEATURE_PSERIES_ALWAYS = 0, 65 FW_FEATURE_ISERIES_POSSIBLE = FW_FEATURE_ISERIES | FW_FEATURE_LPAR, 66 FW_FEATURE_ISERIES_ALWAYS = FW_FEATURE_ISERIES | FW_FEATURE_LPAR,
··· 46 #define FW_FEATURE_PS3_LV1 ASM_CONST(0x0000000000800000) 47 #define FW_FEATURE_BEAT ASM_CONST(0x0000000001000000) 48 #define FW_FEATURE_CMO ASM_CONST(0x0000000002000000) 49 + #define FW_FEATURE_VPHN ASM_CONST(0x0000000004000000) 50 51 #ifndef __ASSEMBLY__ 52 ··· 59 FW_FEATURE_VIO | FW_FEATURE_RDMA | FW_FEATURE_LLAN | 60 FW_FEATURE_BULK_REMOVE | FW_FEATURE_XDABR | 61 FW_FEATURE_MULTITCE | FW_FEATURE_SPLPAR | FW_FEATURE_LPAR | 62 + FW_FEATURE_CMO | FW_FEATURE_VPHN, 63 FW_FEATURE_PSERIES_ALWAYS = 0, 64 FW_FEATURE_ISERIES_POSSIBLE = FW_FEATURE_ISERIES | FW_FEATURE_LPAR, 65 FW_FEATURE_ISERIES_ALWAYS = FW_FEATURE_ISERIES | FW_FEATURE_LPAR,
+3 -1
arch/powerpc/include/asm/hvcall.h
··· 232 #define H_GET_EM_PARMS 0x2B8 233 #define H_SET_MPP 0x2D0 234 #define H_GET_MPP 0x2D4 235 - #define MAX_HCALL_OPCODE H_GET_MPP 236 237 #ifndef __ASSEMBLY__ 238
··· 232 #define H_GET_EM_PARMS 0x2B8 233 #define H_SET_MPP 0x2D0 234 #define H_GET_MPP 0x2D4 235 + #define H_HOME_NODE_ASSOCIATIVITY 0x2EC 236 + #define H_BEST_ENERGY 0x2F4 237 + #define MAX_HCALL_OPCODE H_BEST_ENERGY 238 239 #ifndef __ASSEMBLY__ 240
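
The new opcode is issued like any other pSeries hcall; a sketch using the nine-register return convention (the flags/cpu arguments and retbuf layout here are illustrative, see PAPR for the real H_HOME_NODE_ASSOCIATIVITY semantics):

    #include <asm/hvcall.h>

    static long vphn_query(unsigned long flags, unsigned long cpu)
    {
            unsigned long retbuf[PLPAR_HCALL9_BUFSIZE];

            /* associativity data comes back packed in retbuf[] */
            return plpar_hcall9(H_HOME_NODE_ASSOCIATIVITY, retbuf,
                                flags, cpu);
    }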
+4 -1
arch/powerpc/include/asm/lppaca.h
··· 62 volatile u32 dyn_pir; // Dynamic ProcIdReg value x20-x23 63 u32 dsei_data; // DSEI data x24-x27 64 u64 sprg3; // SPRG3 value x28-x2F 65 - u8 reserved3[80]; // Reserved x30-x7F 66 67 //============================================================================= 68 // CACHE_LINE_2 0x0080 - 0x00FF Contains local read-write data
··· 62 volatile u32 dyn_pir; // Dynamic ProcIdReg value x20-x23 63 u32 dsei_data; // DSEI data x24-x27 64 u64 sprg3; // SPRG3 value x28-x2F 65 + u8 reserved3[40]; // Reserved x30-x57 66 + volatile u8 vphn_assoc_counts[8]; // Virtual processor home node 67 + // associativity change counters x58-x5F 68 + u8 reserved4[32]; // Reserved x60-x7F 69 70 //============================================================================= 71 // CACHE_LINE_2 0x0080 - 0x00FF Contains local read-write data
+1 -5
arch/powerpc/include/asm/machdep.h
··· 27 struct rtc_time; 28 struct file; 29 struct pci_controller; 30 - #ifdef CONFIG_KEXEC 31 struct kimage; 32 - #endif 33 34 #ifdef CONFIG_SMP 35 struct smp_ops_t { ··· 70 int psize, int ssize); 71 void (*flush_hash_range)(unsigned long number, int local); 72 73 - /* special for kexec, to be called in real mode, linar mapping is 74 * destroyed as well */ 75 void (*hpte_clear_all)(void); 76 ··· 321 extern sys_ctrler_t sys_ctrler; 322 323 #endif /* CONFIG_PPC_PMAC */ 324 - 325 - extern void setup_pci_ptrs(void); 326 327 #ifdef CONFIG_SMP 328 /* Poor default implementations */
··· 27 struct rtc_time; 28 struct file; 29 struct pci_controller; 30 struct kimage; 31 32 #ifdef CONFIG_SMP 33 struct smp_ops_t { ··· 72 int psize, int ssize); 73 void (*flush_hash_range)(unsigned long number, int local); 74 75 + /* special for kexec, to be called in real mode, linear mapping is 76 * destroyed as well */ 77 void (*hpte_clear_all)(void); 78 ··· 323 extern sys_ctrler_t sys_ctrler; 324 325 #endif /* CONFIG_PPC_PMAC */ 326 327 #ifdef CONFIG_SMP 328 /* Poor default implementations */
+5
arch/powerpc/include/asm/mmzone.h
··· 33 extern cpumask_var_t node_to_cpumask_map[]; 34 #ifdef CONFIG_MEMORY_HOTPLUG 35 extern unsigned long max_pfn; 36 #endif 37 38 /* ··· 45 #define node_start_pfn(nid) (NODE_DATA(nid)->node_start_pfn) 46 #define node_end_pfn(nid) (NODE_DATA(nid)->node_end_pfn) 47 48 #endif /* CONFIG_NEED_MULTIPLE_NODES */ 49 50 #endif /* __KERNEL__ */
··· 33 extern cpumask_var_t node_to_cpumask_map[]; 34 #ifdef CONFIG_MEMORY_HOTPLUG 35 extern unsigned long max_pfn; 36 + u64 memory_hotplug_max(void); 37 + #else 38 + #define memory_hotplug_max() memblock_end_of_DRAM() 39 #endif 40 41 /* ··· 42 #define node_start_pfn(nid) (NODE_DATA(nid)->node_start_pfn) 43 #define node_end_pfn(nid) (NODE_DATA(nid)->node_end_pfn) 44 45 + #else 46 + #define memory_hotplug_max() memblock_end_of_DRAM() 47 #endif /* CONFIG_NEED_MULTIPLE_NODES */ 48 49 #endif /* __KERNEL__ */
+11 -41
arch/powerpc/include/asm/nvram.h
··· 10 #ifndef _ASM_POWERPC_NVRAM_H 11 #define _ASM_POWERPC_NVRAM_H 12 13 - #include <linux/errno.h> 14 - 15 - #define NVRW_CNT 0x20 16 - #define NVRAM_HEADER_LEN 16 /* sizeof(struct nvram_header) */ 17 - #define NVRAM_BLOCK_LEN 16 18 - #define NVRAM_MAX_REQ (2080/NVRAM_BLOCK_LEN) 19 - #define NVRAM_MIN_REQ (1056/NVRAM_BLOCK_LEN) 20 - 21 - #define NVRAM_AS0 0x74 22 - #define NVRAM_AS1 0x75 23 - #define NVRAM_DATA 0x77 24 - 25 - 26 - /* RTC Offsets */ 27 - 28 - #define MOTO_RTC_SECONDS 0x1FF9 29 - #define MOTO_RTC_MINUTES 0x1FFA 30 - #define MOTO_RTC_HOURS 0x1FFB 31 - #define MOTO_RTC_DAY_OF_WEEK 0x1FFC 32 - #define MOTO_RTC_DAY_OF_MONTH 0x1FFD 33 - #define MOTO_RTC_MONTH 0x1FFE 34 - #define MOTO_RTC_YEAR 0x1FFF 35 - #define MOTO_RTC_CONTROLA 0x1FF8 36 - #define MOTO_RTC_CONTROLB 0x1FF9 37 - 38 #define NVRAM_SIG_SP 0x02 /* support processor */ 39 #define NVRAM_SIG_OF 0x50 /* open firmware config */ 40 #define NVRAM_SIG_FW 0x51 /* general firmware */ ··· 25 #define NVRAM_SIG_OS 0xa0 /* OS defined */ 26 #define NVRAM_SIG_PANIC 0xa1 /* Apple OSX "panic" */ 27 28 - /* If change this size, then change the size of NVNAME_LEN */ 29 - struct nvram_header { 30 - unsigned char signature; 31 - unsigned char checksum; 32 - unsigned short length; 33 - char name[12]; 34 - }; 35 - 36 #ifdef __KERNEL__ 37 38 #include <linux/list.h> 39 40 - struct nvram_partition { 41 - struct list_head partition; 42 - struct nvram_header header; 43 - unsigned int index; 44 - }; 45 - 46 - 47 extern int nvram_write_error_log(char * buff, int length, 48 unsigned int err_type, unsigned int err_seq); 49 extern int nvram_read_error_log(char * buff, int length, 50 unsigned int * err_type, unsigned int *err_seq); 51 extern int nvram_clear_error_log(void); 52 - 53 extern int pSeries_nvram_init(void); 54 55 #ifdef CONFIG_MMIO_NVRAM 56 extern int mmio_nvram_init(void); ··· 47 return -ENODEV; 48 } 49 #endif 50 51 #endif /* __KERNEL__ */ 52
··· 10 #ifndef _ASM_POWERPC_NVRAM_H 11 #define _ASM_POWERPC_NVRAM_H 12 13 + /* Signatures for nvram partitions */ 14 #define NVRAM_SIG_SP 0x02 /* support processor */ 15 #define NVRAM_SIG_OF 0x50 /* open firmware config */ 16 #define NVRAM_SIG_FW 0x51 /* general firmware */ ··· 49 #define NVRAM_SIG_OS 0xa0 /* OS defined */ 50 #define NVRAM_SIG_PANIC 0xa1 /* Apple OSX "panic" */ 51 52 #ifdef __KERNEL__ 53 54 + #include <linux/errno.h> 55 #include <linux/list.h> 56 57 + #ifdef CONFIG_PPC_PSERIES 58 extern int nvram_write_error_log(char * buff, int length, 59 unsigned int err_type, unsigned int err_seq); 60 extern int nvram_read_error_log(char * buff, int length, 61 unsigned int * err_type, unsigned int *err_seq); 62 extern int nvram_clear_error_log(void); 63 extern int pSeries_nvram_init(void); 64 + #endif /* CONFIG_PPC_PSERIES */ 65 66 #ifdef CONFIG_MMIO_NVRAM 67 extern int mmio_nvram_init(void); ··· 84 return -ENODEV; 85 } 86 #endif 87 + 88 + extern int __init nvram_scan_partitions(void); 89 + extern loff_t nvram_create_partition(const char *name, int sig, 90 + int req_size, int min_size); 91 + extern int nvram_remove_partition(const char *name, int sig); 92 + extern int nvram_get_partition_size(loff_t data_index); 93 + extern loff_t nvram_find_partition(const char *name, int sig, int *out_size); 94 95 #endif /* __KERNEL__ */ 96
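
A typical user of the exported partition API scans once at boot, then looks up or creates its partition; a sketch with illustrative name and sizes (2080/1056 mirror the old NVRAM_MAX_REQ/NVRAM_MIN_REQ byte counts):

    static int __init example_nvram_setup(void)
    {
            loff_t data;
            int rc, size;

            rc = nvram_scan_partitions();
            if (rc)
                    return rc;
            data = nvram_find_partition("ppc64,linux", NVRAM_SIG_OS, &size);
            if (data == 0) /* not found: carve one out of free space */
                    data = nvram_create_partition("ppc64,linux",
                                                  NVRAM_SIG_OS, 2080, 1056);
            return (data > 0) ? 0 : (int)data;
    }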
+8
arch/powerpc/include/asm/ppc-opcode.h
··· 36 #define PPC_INST_NOP 0x60000000 37 #define PPC_INST_POPCNTB 0x7c0000f4 38 #define PPC_INST_POPCNTB_MASK 0xfc0007fe 39 #define PPC_INST_RFCI 0x4c000066 40 #define PPC_INST_RFDI 0x4c00004e 41 #define PPC_INST_RFMCI 0x4c00004c ··· 90 __PPC_RB(b) | __PPC_EH(eh)) 91 #define PPC_MSGSND(b) stringify_in_c(.long PPC_INST_MSGSND | \ 92 __PPC_RB(b)) 93 #define PPC_RFCI stringify_in_c(.long PPC_INST_RFCI) 94 #define PPC_RFDI stringify_in_c(.long PPC_INST_RFDI) 95 #define PPC_RFMCI stringify_in_c(.long PPC_INST_RFMCI)
··· 36 #define PPC_INST_NOP 0x60000000 37 #define PPC_INST_POPCNTB 0x7c0000f4 38 #define PPC_INST_POPCNTB_MASK 0xfc0007fe 39 + #define PPC_INST_POPCNTD 0x7c0003f4 40 + #define PPC_INST_POPCNTW 0x7c0002f4 41 #define PPC_INST_RFCI 0x4c000066 42 #define PPC_INST_RFDI 0x4c00004e 43 #define PPC_INST_RFMCI 0x4c00004c ··· 88 __PPC_RB(b) | __PPC_EH(eh)) 89 #define PPC_MSGSND(b) stringify_in_c(.long PPC_INST_MSGSND | \ 90 __PPC_RB(b)) 91 + #define PPC_POPCNTB(a, s) stringify_in_c(.long PPC_INST_POPCNTB | \ 92 + __PPC_RA(a) | __PPC_RS(s)) 93 + #define PPC_POPCNTD(a, s) stringify_in_c(.long PPC_INST_POPCNTD | \ 94 + __PPC_RA(a) | __PPC_RS(s)) 95 + #define PPC_POPCNTW(a, s) stringify_in_c(.long PPC_INST_POPCNTW | \ 96 + __PPC_RA(a) | __PPC_RS(s)) 97 #define PPC_RFCI stringify_in_c(.long PPC_INST_RFCI) 98 #define PPC_RFDI stringify_in_c(.long PPC_INST_RFDI) 99 #define PPC_RFMCI stringify_in_c(.long PPC_INST_RFMCI)
-2
arch/powerpc/include/asm/processor.h
··· 122 TASK_UNMAPPED_BASE_USER32 : TASK_UNMAPPED_BASE_USER64 ) 123 #endif 124 125 - #ifdef __KERNEL__ 126 #ifdef __powerpc64__ 127 128 #define STACK_TOP_USER64 TASK_SIZE_USER64 ··· 138 #define STACK_TOP_MAX STACK_TOP 139 140 #endif /* __powerpc64__ */ 141 - #endif /* __KERNEL__ */ 142 143 typedef struct { 144 unsigned long seg;
··· 122 TASK_UNMAPPED_BASE_USER32 : TASK_UNMAPPED_BASE_USER64 ) 123 #endif 124 125 #ifdef __powerpc64__ 126 127 #define STACK_TOP_USER64 TASK_SIZE_USER64 ··· 139 #define STACK_TOP_MAX STACK_TOP 140 141 #endif /* __powerpc64__ */ 142 143 typedef struct { 144 unsigned long seg;
+14 -1
arch/powerpc/include/asm/topology.h
··· 106 int nid) 107 { 108 } 109 - 110 #endif /* CONFIG_NUMA */ 111 112 #include <asm-generic/topology.h> 113
··· 106 int nid) 107 { 108 } 109 #endif /* CONFIG_NUMA */ 110 + 111 + #if defined(CONFIG_NUMA) && defined(CONFIG_PPC_SPLPAR) 112 + extern int start_topology_update(void); 113 + extern int stop_topology_update(void); 114 + #else 115 + static inline int start_topology_update(void) 116 + { 117 + return 0; 118 + } 119 + static inline int stop_topology_update(void) 120 + { 121 + return 0; 122 + } 123 + #endif /* CONFIG_NUMA && CONFIG_PPC_SPLPAR */ 124 125 #include <asm-generic/topology.h> 126
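
The expected pairing is around a suspend cycle: polling stops before the platform suspends and restarts on resume (hook names below are hypothetical; the real callers sit in the pseries suspend path):

    #include <linux/suspend.h>
    #include <asm/topology.h>

    static int example_suspend_begin(suspend_state_t state)
    {
            return stop_topology_update(); /* quiesce VPHN polling */
    }

    static void example_resume_end(void)
    {
            start_topology_update();       /* re-arm NUMA map updates */
    }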
-2
arch/powerpc/include/asm/vdso_datapage.h
··· 116 117 #endif /* CONFIG_PPC64 */ 118 119 - #ifdef __KERNEL__ 120 extern struct vdso_data *vdso_data; 121 - #endif 122 123 #endif /* __ASSEMBLY__ */ 124
··· 116 117 #endif /* CONFIG_PPC64 */ 118 119 extern struct vdso_data *vdso_data; 120 121 #endif /* __ASSEMBLY__ */ 122
+4 -5
arch/powerpc/kernel/Makefile
··· 29 obj-y := cputable.o ptrace.o syscalls.o \ 30 irq.o align.o signal_32.o pmc.o vdso.o \ 31 init_task.o process.o systbl.o idle.o \ 32 - signal.o sysfs.o cacheinfo.o 33 - obj-y += vdso32/ 34 obj-$(CONFIG_PPC64) += setup_64.o sys_ppc32.o \ 35 signal_64.o ptrace32.o \ 36 paca.o nvram_64.o firmware.o ··· 82 extra-$(CONFIG_8xx) := head_8xx.o 83 extra-y += vmlinux.lds 84 85 - obj-y += time.o prom.o traps.o setup-common.o \ 86 - udbg.o misc.o io.o dma.o \ 87 - misc_$(CONFIG_WORD_SIZE).o 88 obj-$(CONFIG_PPC32) += entry_32.o setup_32.o 89 obj-$(CONFIG_PPC64) += dma-iommu.o iommu.o 90 obj-$(CONFIG_KGDB) += kgdb.o
··· 29 obj-y := cputable.o ptrace.o syscalls.o \ 30 irq.o align.o signal_32.o pmc.o vdso.o \ 31 init_task.o process.o systbl.o idle.o \ 32 + signal.o sysfs.o cacheinfo.o time.o \ 33 + prom.o traps.o setup-common.o \ 34 + udbg.o misc.o io.o dma.o \ 35 + misc_$(CONFIG_WORD_SIZE).o vdso32/ 36 obj-$(CONFIG_PPC64) += setup_64.o sys_ppc32.o \ 37 signal_64.o ptrace32.o \ 38 paca.o nvram_64.o firmware.o ··· 80 extra-$(CONFIG_8xx) := head_8xx.o 81 extra-y += vmlinux.lds 82 83 obj-$(CONFIG_PPC32) += entry_32.o setup_32.o 84 obj-$(CONFIG_PPC64) += dma-iommu.o iommu.o 85 obj-$(CONFIG_KGDB) += kgdb.o
-1
arch/powerpc/kernel/asm-offsets.c
··· 209 DEFINE(RTASENTRY, offsetof(struct rtas_t, entry)); 210 211 /* Interrupt register frame */ 212 - DEFINE(STACK_FRAME_OVERHEAD, STACK_FRAME_OVERHEAD); 213 DEFINE(INT_FRAME_SIZE, STACK_INT_FRAME_SIZE); 214 DEFINE(SWITCH_FRAME_SIZE, STACK_FRAME_OVERHEAD + sizeof(struct pt_regs)); 215 #ifdef CONFIG_PPC64
··· 209 DEFINE(RTASENTRY, offsetof(struct rtas_t, entry)); 210 211 /* Interrupt register frame */ 212 DEFINE(INT_FRAME_SIZE, STACK_INT_FRAME_SIZE); 213 DEFINE(SWITCH_FRAME_SIZE, STACK_FRAME_OVERHEAD + sizeof(struct pt_regs)); 214 #ifdef CONFIG_PPC64
+16 -6
arch/powerpc/kernel/cputable.c
··· 457 .dcache_bsize = 128, 458 .num_pmcs = 6, 459 .pmc_type = PPC_PMC_IBM, 460 - .cpu_setup = __setup_cpu_power7, 461 - .cpu_restore = __restore_cpu_power7, 462 .oprofile_cpu_type = "ppc64/power7", 463 .oprofile_type = PPC_OPROFILE_POWER4, 464 - .oprofile_mmcra_sihv = POWER6_MMCRA_SIHV, 465 - .oprofile_mmcra_sipr = POWER6_MMCRA_SIPR, 466 - .oprofile_mmcra_clear = POWER6_MMCRA_THRM | 467 - POWER6_MMCRA_OTHER, 468 .platform = "power7", 469 }, 470 { /* Cell Broadband Engine */ 471 .pvr_mask = 0xffff0000,
··· 457 .dcache_bsize = 128, 458 .num_pmcs = 6, 459 .pmc_type = PPC_PMC_IBM, 460 .oprofile_cpu_type = "ppc64/power7", 461 .oprofile_type = PPC_OPROFILE_POWER4, 462 .platform = "power7", 463 + }, 464 + { /* Power7+ */ 465 + .pvr_mask = 0xffff0000, 466 + .pvr_value = 0x004A0000, 467 + .cpu_name = "POWER7+ (raw)", 468 + .cpu_features = CPU_FTRS_POWER7, 469 + .cpu_user_features = COMMON_USER_POWER7, 470 + .mmu_features = MMU_FTR_HPTE_TABLE | 471 + MMU_FTR_TLBIE_206, 472 + .icache_bsize = 128, 473 + .dcache_bsize = 128, 474 + .num_pmcs = 6, 475 + .pmc_type = PPC_PMC_IBM, 476 + .oprofile_cpu_type = "ppc64/power7", 477 + .oprofile_type = PPC_OPROFILE_POWER4, 478 + .platform = "power7+", 479 }, 480 { /* Cell Broadband Engine */ 481 .pvr_mask = 0xffff0000,
+33
arch/powerpc/kernel/crash_dump.c
··· 19 #include <asm/prom.h> 20 #include <asm/firmware.h> 21 #include <asm/uaccess.h> 22 23 #ifdef DEBUG 24 #include <asm/udbg.h> ··· 142 143 return csize; 144 }
··· 19 #include <asm/prom.h> 20 #include <asm/firmware.h> 21 #include <asm/uaccess.h> 22 + #include <asm/rtas.h> 23 24 #ifdef DEBUG 25 #include <asm/udbg.h> ··· 141 142 return csize; 143 } 144 + 145 + #ifdef CONFIG_PPC_RTAS 146 + /* 147 + * The crashkernel region will almost always overlap the RTAS region, so 148 + * we have to be careful when shrinking the crashkernel region. 149 + */ 150 + void crash_free_reserved_phys_range(unsigned long begin, unsigned long end) 151 + { 152 + unsigned long addr; 153 + const u32 *basep, *sizep; 154 + unsigned int rtas_start = 0, rtas_end = 0; 155 + 156 + basep = of_get_property(rtas.dev, "linux,rtas-base", NULL); 157 + sizep = of_get_property(rtas.dev, "rtas-size", NULL); 158 + 159 + if (basep && sizep) { 160 + rtas_start = *basep; 161 + rtas_end = *basep + *sizep; 162 + } 163 + 164 + for (addr = begin; addr < end; addr += PAGE_SIZE) { 165 + /* Does this page overlap with the RTAS region? */ 166 + if (addr <= rtas_end && ((addr + PAGE_SIZE) > rtas_start)) 167 + continue; 168 + 169 + ClearPageReserved(pfn_to_page(addr >> PAGE_SHIFT)); 170 + init_page_count(pfn_to_page(addr >> PAGE_SHIFT)); 171 + free_page((unsigned long)__va(addr)); 172 + totalram_pages++; 173 + } 174 + } 175 + #endif
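
To make the skip test concrete (values illustrative): with rtas_start = 0x3000, rtas_end = 0x5000 and 4 KiB pages, the page at 0x2000 is freed (0x2000 + 0x1000 is not greater than 0x3000), every page from 0x3000 up to and including 0x5000 is skipped (the end bound is tested inclusively, erring toward preserving RTAS), and freeing resumes at 0x6000.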
+1 -1
arch/powerpc/kernel/dma-iommu.c
··· 19 dma_addr_t *dma_handle, gfp_t flag) 20 { 21 return iommu_alloc_coherent(dev, get_iommu_table_base(dev), size, 22 - dma_handle, device_to_mask(dev), flag, 23 dev_to_node(dev)); 24 } 25
··· 19 dma_addr_t *dma_handle, gfp_t flag) 20 { 21 return iommu_alloc_coherent(dev, get_iommu_table_base(dev), size, 22 + dma_handle, dev->coherent_dma_mask, flag, 23 dev_to_node(dev)); 24 } 25
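
The distinction matters because a device's streaming and coherent DMA masks may legitimately differ, and coherent allocations must honor the latter; drivers set the two independently (probe function hypothetical):

    #include <linux/dma-mapping.h>

    static int example_probe(struct device *dev)
    {
            int rc = dma_set_mask(dev, DMA_BIT_MASK(64));  /* map/unmap */

            if (rc == 0)  /* dma_alloc_coherent() buffers */
                    rc = dma_set_coherent_mask(dev, DMA_BIT_MASK(32));
            return rc;
    }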
+1
arch/powerpc/kernel/entry_32.S
··· 31 #include <asm/asm-offsets.h> 32 #include <asm/unistd.h> 33 #include <asm/ftrace.h> 34 35 #undef SHOW_SYSCALLS 36 #undef SHOW_SYSCALLS_TASK
··· 31 #include <asm/asm-offsets.h> 32 #include <asm/unistd.h> 33 #include <asm/ftrace.h> 34 + #include <asm/ptrace.h> 35 36 #undef SHOW_SYSCALLS 37 #undef SHOW_SYSCALLS_TASK
+1
arch/powerpc/kernel/exceptions-64s.S
··· 13 */ 14 15 #include <asm/exception-64s.h> 16 17 /* 18 * We layout physical memory as follows:
··· 13 */ 14 15 #include <asm/exception-64s.h> 16 + #include <asm/ptrace.h> 17 18 /* 19 * We layout physical memory as follows:
+1
arch/powerpc/kernel/fpu.S
··· 23 #include <asm/thread_info.h> 24 #include <asm/ppc_asm.h> 25 #include <asm/asm-offsets.h> 26 27 #ifdef CONFIG_VSX 28 #define REST_32FPVSRS(n,c,base) \
··· 23 #include <asm/thread_info.h> 24 #include <asm/ppc_asm.h> 25 #include <asm/asm-offsets.h> 26 + #include <asm/ptrace.h> 27 28 #ifdef CONFIG_VSX 29 #define REST_32FPVSRS(n,c,base) \
+1
arch/powerpc/kernel/head_40x.S
··· 40 #include <asm/thread_info.h> 41 #include <asm/ppc_asm.h> 42 #include <asm/asm-offsets.h> 43 44 /* As with the other PowerPC ports, it is expected that when code 45 * execution begins here, the following registers contain valid, yet
··· 40 #include <asm/thread_info.h> 41 #include <asm/ppc_asm.h> 42 #include <asm/asm-offsets.h> 43 + #include <asm/ptrace.h> 44 45 /* As with the other PowerPC ports, it is expected that when code 46 * execution begins here, the following registers contain valid, yet
+1
arch/powerpc/kernel/head_44x.S
··· 37 #include <asm/thread_info.h> 38 #include <asm/ppc_asm.h> 39 #include <asm/asm-offsets.h> 40 #include <asm/synch.h> 41 #include "head_booke.h" 42
··· 37 #include <asm/thread_info.h> 38 #include <asm/ppc_asm.h> 39 #include <asm/asm-offsets.h> 40 + #include <asm/ptrace.h> 41 #include <asm/synch.h> 42 #include "head_booke.h" 43
+3 -4
arch/powerpc/kernel/head_64.S
··· 38 #include <asm/page_64.h> 39 #include <asm/irqflags.h> 40 #include <asm/kvm_book3s_asm.h> 41 42 /* The physical memory is layed out such that the secondary processor 43 * spin code sits at 0x0000...0x00ff. On server, the vectors follow ··· 97 .llong hvReleaseData-KERNELBASE 98 #endif /* CONFIG_PPC_ISERIES */ 99 100 - #ifdef CONFIG_CRASH_DUMP 101 /* This flag is set to 1 by a loader if the kernel should run 102 * at the loaded address instead of the linked address. This 103 * is used by kexec-tools to keep the the kdump kernel in the ··· 385 /* process relocations for the final address of the kernel */ 386 lis r25,PAGE_OFFSET@highest /* compute virtual base of kernel */ 387 sldi r25,r25,32 388 - #ifdef CONFIG_CRASH_DUMP 389 lwz r7,__run_at_load-_stext(r26) 390 - cmplwi cr0,r7,1 /* kdump kernel ? - stay where we are */ 391 bne 1f 392 add r25,r25,r26 393 - #endif 394 1: mr r3,r25 395 bl .relocate 396 #endif
··· 38 #include <asm/page_64.h> 39 #include <asm/irqflags.h> 40 #include <asm/kvm_book3s_asm.h> 41 + #include <asm/ptrace.h> 42 43 /* The physical memory is layed out such that the secondary processor 44 * spin code sits at 0x0000...0x00ff. On server, the vectors follow ··· 96 .llong hvReleaseData-KERNELBASE 97 #endif /* CONFIG_PPC_ISERIES */ 98 99 + #ifdef CONFIG_RELOCATABLE 100 /* This flag is set to 1 by a loader if the kernel should run 101 * at the loaded address instead of the linked address. This 102 * is used by kexec-tools to keep the the kdump kernel in the ··· 384 /* process relocations for the final address of the kernel */ 385 lis r25,PAGE_OFFSET@highest /* compute virtual base of kernel */ 386 sldi r25,r25,32 387 lwz r7,__run_at_load-_stext(r26) 388 + cmplwi cr0,r7,1 /* flagged to stay where we are ? */ 389 bne 1f 390 add r25,r25,r26 391 1: mr r3,r25 392 bl .relocate 393 #endif
+1
arch/powerpc/kernel/head_8xx.S
··· 29 #include <asm/thread_info.h> 30 #include <asm/ppc_asm.h> 31 #include <asm/asm-offsets.h> 32 33 /* Macro to make the code more readable. */ 34 #ifdef CONFIG_8xx_CPU6
··· 29 #include <asm/thread_info.h> 30 #include <asm/ppc_asm.h> 31 #include <asm/asm-offsets.h> 32 + #include <asm/ptrace.h> 33 34 /* Macro to make the code more readable. */ 35 #ifdef CONFIG_8xx_CPU6
+1
arch/powerpc/kernel/head_fsl_booke.S
··· 41 #include <asm/ppc_asm.h> 42 #include <asm/asm-offsets.h> 43 #include <asm/cache.h> 44 #include "head_booke.h" 45 46 /* As with the other PowerPC ports, it is expected that when code
··· 41 #include <asm/ppc_asm.h> 42 #include <asm/asm-offsets.h> 43 #include <asm/cache.h> 44 + #include <asm/ptrace.h> 45 #include "head_booke.h" 46 47 /* As with the other PowerPC ports, it is expected that when code
+8 -6
arch/powerpc/kernel/iommu.c
··· 311 /* Handle failure */ 312 if (unlikely(entry == DMA_ERROR_CODE)) { 313 if (printk_ratelimit()) 314 - printk(KERN_INFO "iommu_alloc failed, tbl %p vaddr %lx" 315 - " npages %lx\n", tbl, vaddr, npages); 316 goto failure; 317 } 318 ··· 580 attrs); 581 if (dma_handle == DMA_ERROR_CODE) { 582 if (printk_ratelimit()) { 583 - printk(KERN_INFO "iommu_alloc failed, " 584 - "tbl %p vaddr %p npages %d\n", 585 - tbl, vaddr, npages); 586 } 587 } else 588 dma_handle |= (uaddr & ~IOMMU_PAGE_MASK); ··· 628 * the tce tables. 629 */ 630 if (order >= IOMAP_MAX_ORDER) { 631 - printk("iommu_alloc_consistent size too large: 0x%lx\n", size); 632 return NULL; 633 } 634
··· 311 /* Handle failure */ 312 if (unlikely(entry == DMA_ERROR_CODE)) { 313 if (printk_ratelimit()) 314 + dev_info(dev, "iommu_alloc failed, tbl %p " 315 + "vaddr %lx npages %lu\n", tbl, vaddr, 316 + npages); 317 goto failure; 318 } 319 ··· 579 attrs); 580 if (dma_handle == DMA_ERROR_CODE) { 581 if (printk_ratelimit()) { 582 + dev_info(dev, "iommu_alloc failed, tbl %p " 583 + "vaddr %p npages %d\n", tbl, vaddr, 584 + npages); 585 } 586 } else 587 dma_handle |= (uaddr & ~IOMMU_PAGE_MASK); ··· 627 * the tce tables. 628 */ 629 if (order >= IOMAP_MAX_ORDER) { 630 + dev_info(dev, "iommu_alloc_consistent size too large: 0x%lx\n", 631 + size); 632 return NULL; 633 } 634
-5
arch/powerpc/kernel/misc.S
··· 122 mtlr r0 123 mr r3,r4 124 blr 125 - 126 - _GLOBAL(__setup_cpu_power7) 127 - _GLOBAL(__restore_cpu_power7) 128 - /* place holder */ 129 - blr
··· 122 mtlr r0 123 mr r3,r4 124 blr
+1
arch/powerpc/kernel/misc_32.S
··· 30 #include <asm/processor.h> 31 #include <asm/kexec.h> 32 #include <asm/bug.h> 33 34 .text 35
··· 30 #include <asm/processor.h> 31 #include <asm/kexec.h> 32 #include <asm/bug.h> 33 + #include <asm/ptrace.h> 34 35 .text 36
+1
arch/powerpc/kernel/misc_64.S
··· 25 #include <asm/cputable.h> 26 #include <asm/thread_info.h> 27 #include <asm/kexec.h> 28 29 .text 30
··· 25 #include <asm/cputable.h> 26 #include <asm/thread_info.h> 27 #include <asm/kexec.h> 28 + #include <asm/ptrace.h> 29 30 .text 31
+164 -325
arch/powerpc/kernel/nvram_64.c
··· 34 35 #undef DEBUG_NVRAM 36 37 - static struct nvram_partition * nvram_part; 38 - static long nvram_error_log_index = -1; 39 - static long nvram_error_log_size = 0; 40 41 - struct err_log_info { 42 - int error_type; 43 - unsigned int seq_num; 44 }; 45 46 static loff_t dev_nvram_llseek(struct file *file, loff_t offset, int origin) 47 { ··· 197 #ifdef DEBUG_NVRAM 198 static void __init nvram_print_partitions(char * label) 199 { 200 - struct list_head * p; 201 struct nvram_partition * tmp_part; 202 203 printk(KERN_WARNING "--------%s---------\n", label); 204 printk(KERN_WARNING "indx\t\tsig\tchks\tlen\tname\n"); 205 - list_for_each(p, &nvram_part->partition) { 206 - tmp_part = list_entry(p, struct nvram_partition, partition); 207 - printk(KERN_WARNING "%4d \t%02x\t%02x\t%d\t%s\n", 208 tmp_part->index, tmp_part->header.signature, 209 tmp_part->header.checksum, tmp_part->header.length, 210 tmp_part->header.name); ··· 237 return c_sum; 238 } 239 240 - static int __init nvram_remove_os_partition(void) 241 { 242 - struct list_head *i; 243 - struct list_head *j; 244 - struct nvram_partition * part; 245 - struct nvram_partition * cur_part; 246 int rc; 247 248 - list_for_each(i, &nvram_part->partition) { 249 - part = list_entry(i, struct nvram_partition, partition); 250 - if (part->header.signature != NVRAM_SIG_OS) 251 continue; 252 - 253 - /* Make os partition a free partition */ 254 part->header.signature = NVRAM_SIG_FREE; 255 - sprintf(part->header.name, "wwwwwwwwwwww"); 256 part->header.checksum = nvram_checksum(&part->header); 257 - 258 - /* Merge contiguous free partitions backwards */ 259 - list_for_each_prev(j, &part->partition) { 260 - cur_part = list_entry(j, struct nvram_partition, partition); 261 - if (cur_part == nvram_part || cur_part->header.signature != NVRAM_SIG_FREE) { 262 - break; 263 - } 264 - 265 - part->header.length += cur_part->header.length; 266 - part->header.checksum = nvram_checksum(&part->header); 267 - part->index = cur_part->index; 268 - 269 - list_del(&cur_part->partition); 270 - kfree(cur_part); 271 - j = &part->partition; /* fixup our loop */ 272 - } 273 - 274 - /* Merge contiguous free partitions forwards */ 275 - list_for_each(j, &part->partition) { 276 - cur_part = list_entry(j, struct nvram_partition, partition); 277 - if (cur_part == nvram_part || cur_part->header.signature != NVRAM_SIG_FREE) { 278 - break; 279 - } 280 - 281 - part->header.length += cur_part->header.length; 282 - part->header.checksum = nvram_checksum(&part->header); 283 - 284 - list_del(&cur_part->partition); 285 - kfree(cur_part); 286 - j = &part->partition; /* fixup our loop */ 287 - } 288 - 289 rc = nvram_write_header(part); 290 if (rc <= 0) { 291 - printk(KERN_ERR "nvram_remove_os_partition: nvram_write failed (%d)\n", rc); 292 return rc; 293 } 294 295 } 296 297 return 0; 298 } 299 300 - /* nvram_create_os_partition 301 * 302 - * Create a OS linux partition to buffer error logs. 303 - * Will create a partition starting at the first free 304 - * space found if space has enough room. 305 */ 306 - static int __init nvram_create_os_partition(void) 307 { 308 struct nvram_partition *part; 309 struct nvram_partition *new_part; 310 struct nvram_partition *free_part = NULL; 311 - int seq_init[2] = { 0, 0 }; 312 loff_t tmp_index; 313 long size = 0; 314 int rc; 315 - 316 /* Find a free partition that will give us the maximum needed size 317 If can't find one that will give us the minimum size needed */ 318 - list_for_each_entry(part, &nvram_part->partition, partition) { 319 if (part->header.signature != NVRAM_SIG_FREE) 320 continue; 321 322 - if (part->header.length >= NVRAM_MAX_REQ) { 323 - size = NVRAM_MAX_REQ; 324 free_part = part; 325 break; 326 } 327 - if (!size && part->header.length >= NVRAM_MIN_REQ) { 328 - size = NVRAM_MIN_REQ; 329 free_part = part; 330 } 331 } ··· 353 /* Create our OS partition */ 354 new_part = kmalloc(sizeof(*new_part), GFP_KERNEL); 355 if (!new_part) { 356 - printk(KERN_ERR "nvram_create_os_partition: kmalloc failed\n"); 357 return -ENOMEM; 358 } 359 360 new_part->index = free_part->index; 361 - new_part->header.signature = NVRAM_SIG_OS; 362 new_part->header.length = size; 363 - strcpy(new_part->header.name, "ppc64,linux"); 364 new_part->header.checksum = nvram_checksum(&new_part->header); 365 366 rc = nvram_write_header(new_part); 367 if (rc <= 0) { 368 - printk(KERN_ERR "nvram_create_os_partition: nvram_write_header " 369 - "failed (%d)\n", rc); 370 - return rc; 371 - } 372 - 373 - /* make sure and initialize to zero the sequence number and the error 374 - type logged */ 375 - tmp_index = new_part->index + NVRAM_HEADER_LEN; 376 - rc = ppc_md.nvram_write((char *)&seq_init, sizeof(seq_init), &tmp_index); 377 - if (rc <= 0) { 378 - printk(KERN_ERR "nvram_create_os_partition: nvram_write " 379 "failed (%d)\n", rc); 380 return rc; 381 } 382 - 383 - nvram_error_log_index = new_part->index + NVRAM_HEADER_LEN; 384 - nvram_error_log_size = ((part->header.length - 1) * 385 - NVRAM_BLOCK_LEN) - sizeof(struct err_log_info); 386 - 387 list_add_tail(&new_part->partition, &free_part->partition); 388 389 - if (free_part->header.length <= size) { 390 list_del(&free_part->partition); 391 kfree(free_part); 392 - return 0; 393 } 394 395 - /* Adjust the partition we stole the space from */ 396 - free_part->index += size * NVRAM_BLOCK_LEN; 397 - free_part->header.length -= size; 398 - free_part->header.checksum = nvram_checksum(&free_part->header); 399 - 400 - rc = nvram_write_header(free_part); 401 - if (rc <= 0) { 402 - printk(KERN_ERR "nvram_create_os_partition: nvram_write_header " 403 - "failed (%d)\n", rc); 404 - return rc; 405 - } 406 - 407 - return 0; 408 - } 409 - 410 - 411 - /* nvram_setup_partition 412 - * 413 - * This will setup the partition we need for buffering the 414 - * error logs and cleanup partitions if needed. 415 - * 416 - * The general strategy is the following: 417 - * 1.) If there is ppc64,linux partition large enough then use it. 418 - * 2.) If there is not a ppc64,linux partition large enough, search 419 - * for a free partition that is large enough. 420 - * 3.) If there is not a free partition large enough remove 421 - * _all_ OS partitions and consolidate the space. 422 - * 4.) Will first try getting a chunk that will satisfy the maximum 423 - * error log size (NVRAM_MAX_REQ). 424 - * 5.) If the max chunk cannot be allocated then try finding a chunk 425 - * that will satisfy the minum needed (NVRAM_MIN_REQ). 426 - */ 427 - static int __init nvram_setup_partition(void) 428 - { 429 - struct list_head * p; 430 - struct nvram_partition * part; 431 - int rc; 432 - 433 - /* For now, we don't do any of this on pmac, until I 434 - * have figured out if it's worth killing some unused stuffs 435 - * in our nvram, as Apple defined partitions use pretty much 436 - * all of the space 437 - */ 438 - if (machine_is(powermac)) 439 - return -ENOSPC; 440 - 441 - /* see if we have an OS partition that meets our needs. 442 - will try getting the max we need. If not we'll delete 443 - partitions and try again. */ 444 - list_for_each(p, &nvram_part->partition) { 445 - part = list_entry(p, struct nvram_partition, partition); 446 - if (part->header.signature != NVRAM_SIG_OS) 447 - continue; 448 - 449 - if (strcmp(part->header.name, "ppc64,linux")) 450 - continue; 451 - 452 - if (part->header.length >= NVRAM_MIN_REQ) { 453 - /* found our partition */ 454 - nvram_error_log_index = part->index + NVRAM_HEADER_LEN; 455 - nvram_error_log_size = ((part->header.length - 1) * 456 - NVRAM_BLOCK_LEN) - sizeof(struct err_log_info); 457 - return 0; 458 } 459 } 460 461 - /* try creating a partition with the free space we have */ 462 - rc = nvram_create_os_partition(); 463 - if (!rc) { 464 - return 0; 465 - } 466 - 467 - /* need to free up some space */ 468 - rc = nvram_remove_os_partition(); 469 - if (rc) { 470 - return rc; 471 - } 472 473 - /* create a partition in this new space */ 474 - rc = nvram_create_os_partition(); 475 - if (rc) { 476 - printk(KERN_ERR "nvram_create_os_partition: Could not find a " 477 - "NVRAM partition large enough\n"); 478 - return rc; 479 } 480 - 481 - return 0; 482 - } 483 484 485 - static int __init nvram_scan_partitions(void) 486 { 487 loff_t cur_index = 0; 488 struct nvram_header phead; ··· 451 int total_size; 452 int err; 453 454 - if (ppc_md.nvram_size == NULL) 455 return -ENODEV; 456 total_size = ppc_md.nvram_size(); 457 ··· 498 499 memcpy(&tmp_part->header, &phead, NVRAM_HEADER_LEN); 500 tmp_part->index = cur_index; 501 - list_add_tail(&tmp_part->partition, &nvram_part->partition); 502 503 cur_index += phead.length * NVRAM_BLOCK_LEN; 504 } 505 err = 0; 506 507 out: 508 kfree(header); ··· 515 516 static int __init nvram_init(void) 517 { 518 - int error; 519 int rc; 520 521 if (ppc_md.nvram_size == NULL || ppc_md.nvram_size() <= 0) 522 return -ENODEV; 523 ··· 528 return rc; 529 } 530 531 - /* initialize our anchor for the nvram partition list */ 532 - nvram_part = kmalloc(sizeof(struct nvram_partition), GFP_KERNEL); 533 - if (!nvram_part) { 534 - printk(KERN_ERR "nvram_init: Failed kmalloc\n"); 535 - return -ENOMEM; 536 - } 537 - INIT_LIST_HEAD(&nvram_part->partition); 538 - 539 - /* Get all the NVRAM partitions */ 540 - error = nvram_scan_partitions(); 541 - if (error) { 542 - printk(KERN_ERR "nvram_init: Failed nvram_scan_partitions\n"); 543 - return error; 544 - } 545 - 546 - if(nvram_setup_partition()) 547 - printk(KERN_WARNING "nvram_init: Could not find nvram partition" 548 - " for nvram buffered error logging.\n"); 549 - 550 - #ifdef DEBUG_NVRAM 551 - nvram_print_partitions("NVRAM Partitions"); 552 - #endif 553 - 554 return rc; 555 } 556 ··· 535 { 536 misc_deregister( &nvram_dev ); 537 } 538 - 539 - 540 - #ifdef CONFIG_PPC_PSERIES 541 - 542 - /* nvram_write_error_log 543 - * 544 - * We need to buffer the error logs into nvram to ensure that we have 545 - * the failure information to decode. If we have a severe error there 546 - * is no way to guarantee that the OS or the machine is in a state to 547 - * get back to user land and write the error to disk. For example if 548 - * the SCSI device driver causes a Machine Check by writing to a bad 549 - * IO address, there is no way of guaranteeing that the device driver 550 - * is in any state that is would also be able to write the error data 551 - * captured to disk, thus we buffer it in NVRAM for analysis on the 552 - * next boot. 553 - * 554 - * In NVRAM the partition containing the error log buffer will looks like: 555 - * Header (in bytes): 556 - * +-----------+----------+--------+------------+------------------+ 557 - * | signature | checksum | length | name | data | 558 - * |0 |1 |2 3|4 15|16 length-1| 559 - * +-----------+----------+--------+------------+------------------+ 560 - * 561 - * The 'data' section would look like (in bytes): 562 - * +--------------+------------+-----------------------------------+ 563 - * | event_logged | sequence # | error log | 564 - * |0 3|4 7|8 nvram_error_log_size-1| 565 - * +--------------+------------+-----------------------------------+ 566 - * 567 - * event_logged: 0 if event has not been logged to syslog, 1 if it has 568 - * sequence #: The unique sequence # for each event. (until it wraps) 569 - * error log: The error log from event_scan 570 - */ 571 - int nvram_write_error_log(char * buff, int length, 572 - unsigned int err_type, unsigned int error_log_cnt) 573 - { 574 - int rc; 575 - loff_t tmp_index; 576 - struct err_log_info info; 577 - 578 - if (nvram_error_log_index == -1) { 579 - return -ESPIPE; 580 - } 581 - 582 - if (length > nvram_error_log_size) { 583 - length = nvram_error_log_size; 584 - } 585 - 586 - info.error_type = err_type; 587 - info.seq_num = error_log_cnt; 588 - 589 - tmp_index = nvram_error_log_index; 590 - 591 - rc = ppc_md.nvram_write((char *)&info, sizeof(struct err_log_info), &tmp_index); 592 - if (rc <= 0) { 593 - printk(KERN_ERR "nvram_write_error_log: Failed nvram_write (%d)\n", rc); 594 - return rc; 595 - } 596 - 597 - rc = ppc_md.nvram_write(buff, length, &tmp_index); 598 - if (rc <= 0) { 599 - printk(KERN_ERR "nvram_write_error_log: Failed nvram_write (%d)\n", rc); 600 - return rc; 601 - } 602 - 603 - return 0; 604 - } 605 - 606 - /* nvram_read_error_log 607 - * 608 - * Reads nvram for error log for at most 'length' 609 - */ 610 - int nvram_read_error_log(char * buff, int length, 611 - unsigned int * err_type, unsigned int * error_log_cnt) 612 - { 613 - int rc; 614 - loff_t tmp_index; 615 - struct err_log_info info; 616 - 617 - if (nvram_error_log_index == -1) 618 - return -1; 619 - 620 - if (length > nvram_error_log_size) 621 - length = nvram_error_log_size; 622 - 623 - tmp_index = nvram_error_log_index; 624 - 625 - rc = ppc_md.nvram_read((char *)&info, sizeof(struct err_log_info), &tmp_index); 626 - if (rc <= 0) { 627 - printk(KERN_ERR "nvram_read_error_log: Failed nvram_read (%d)\n", rc); 628 - return rc; 629 - } 630 - 631 - rc = ppc_md.nvram_read(buff, length, &tmp_index); 632 - if (rc <= 0) { 633 - printk(KERN_ERR "nvram_read_error_log: Failed nvram_read (%d)\n", rc); 634 - return rc; 635 - } 636 - 637 - *error_log_cnt = info.seq_num; 638 - *err_type = info.error_type; 639 - 640 - return 0; 641 - } 642 - 643 - /* This doesn't actually zero anything, but it sets the event_logged 644 - * word to tell that this event is safely in syslog. 645 - */ 646 - int nvram_clear_error_log(void) 647 - { 648 - loff_t tmp_index; 649 - int clear_word = ERR_FLAG_ALREADY_LOGGED; 650 - int rc; 651 - 652 - if (nvram_error_log_index == -1) 653 - return -1; 654 - 655 - tmp_index = nvram_error_log_index; 656 - 657 - rc = ppc_md.nvram_write((char *)&clear_word, sizeof(int), &tmp_index); 658 - if (rc <= 0) { 659 - printk(KERN_ERR "nvram_clear_error_log: Failed nvram_write (%d)\n", rc); 660 - return rc; 661 - } 662 - 663 - return 0; 664 - } 665 - 666 - #endif /* CONFIG_PPC_PSERIES */ 667 668 module_init(nvram_init); 669 module_exit(nvram_cleanup);
··· 34 35 #undef DEBUG_NVRAM 36 37 + #define NVRAM_HEADER_LEN sizeof(struct nvram_header) 38 + #define NVRAM_BLOCK_LEN NVRAM_HEADER_LEN 39 40 + /* If change this size, then change the size of NVNAME_LEN */ 41 + struct nvram_header { 42 + unsigned char signature; 43 + unsigned char checksum; 44 + unsigned short length; 45 + /* Terminating null required only for names < 12 chars. */ 46 + char name[12]; 47 }; 48 + 49 + struct nvram_partition { 50 + struct list_head partition; 51 + struct nvram_header header; 52 + unsigned int index; 53 + }; 54 + 55 + static LIST_HEAD(nvram_partitions); 56 57 static loff_t dev_nvram_llseek(struct file *file, loff_t offset, int origin) 58 { ··· 186 #ifdef DEBUG_NVRAM 187 static void __init nvram_print_partitions(char * label) 188 { 189 struct nvram_partition * tmp_part; 190 191 printk(KERN_WARNING "--------%s---------\n", label); 192 printk(KERN_WARNING "indx\t\tsig\tchks\tlen\tname\n"); 193 + list_for_each_entry(tmp_part, &nvram_partitions, partition) { 194 + printk(KERN_WARNING "%4d \t%02x\t%02x\t%d\t%12s\n", 195 tmp_part->index, tmp_part->header.signature, 196 tmp_part->header.checksum, tmp_part->header.length, 197 tmp_part->header.name); ··· 228 return c_sum; 229 } 230 231 + /** 232 + * nvram_remove_partition - Remove one or more partitions in nvram 233 + * @name: name of the partition to remove, or NULL for a 234 + * signature only match 235 + * @sig: signature of the partition(s) to remove 236 + */ 237 + 238 + int __init nvram_remove_partition(const char *name, int sig) 239 { 240 + struct nvram_partition *part, *prev, *tmp; 241 int rc; 242 243 + list_for_each_entry(part, &nvram_partitions, partition) { 244 + if (part->header.signature != sig) 245 continue; 246 + if (name && strncmp(name, part->header.name, 12)) 247 + continue; 248 + 249 + /* Make partition a free partition */ 250 part->header.signature = NVRAM_SIG_FREE; 251 + strncpy(part->header.name, "wwwwwwwwwwww", 12); 252 part->header.checksum = nvram_checksum(&part->header); 253 rc = nvram_write_header(part); 254 if (rc <= 0) { 255 + printk(KERN_ERR "nvram_remove_partition: nvram_write failed (%d)\n", rc); 256 return rc; 257 } 258 + } 259 260 + /* Merge contiguous ones */ 261 + prev = NULL; 262 + list_for_each_entry_safe(part, tmp, &nvram_partitions, partition) { 263 + if (part->header.signature != NVRAM_SIG_FREE) { 264 + prev = NULL; 265 + continue; 266 + } 267 + if (prev) { 268 + prev->header.length += part->header.length; 269 + prev->header.checksum = nvram_checksum(&part->header); 270 + rc = nvram_write_header(part); 271 + if (rc <= 0) { 272 + printk(KERN_ERR "nvram_remove_partition: nvram_write failed (%d)\n", rc); 273 + return rc; 274 + } 275 + list_del(&part->partition); 276 + kfree(part); 277 + } else 278 + prev = part; 279 } 280 281 return 0; 282 } 283 284 + /** 285 + * nvram_create_partition - Create a partition in nvram 286 + * @name: name of the partition to create 287 + * @sig: signature of the partition to create 288 + * @req_size: size of data to allocate in bytes 289 + * @min_size: minimum acceptable size (0 means req_size) 290 * 291 + * Returns a negative error code or a positive nvram index 292 + * of the beginning of the data area of the newly created 293 + * partition. If you provided a min_size smaller than req_size 294 + * you need to query for the actual size yourself after the 295 + * call using nvram_partition_get_size(). 
296 */ 297 + loff_t __init nvram_create_partition(const char *name, int sig, 298 + int req_size, int min_size) 299 { 300 struct nvram_partition *part; 301 struct nvram_partition *new_part; 302 struct nvram_partition *free_part = NULL; 303 + static char nv_init_vals[16]; 304 loff_t tmp_index; 305 long size = 0; 306 int rc; 307 + 308 + /* Convert sizes from bytes to blocks */ 309 + req_size = _ALIGN_UP(req_size, NVRAM_BLOCK_LEN) / NVRAM_BLOCK_LEN; 310 + min_size = _ALIGN_UP(min_size, NVRAM_BLOCK_LEN) / NVRAM_BLOCK_LEN; 311 + 312 + /* If no minimum size specified, make it the same as the 313 + * requested size 314 + */ 315 + if (min_size == 0) 316 + min_size = req_size; 317 + if (min_size > req_size) 318 + return -EINVAL; 319 + 320 + /* Now add one block to each for the header */ 321 + req_size += 1; 322 + min_size += 1; 323 + 324 /* Find a free partition that will give us the maximum size needed; 325 if we can't, take the largest one that still meets the minimum size needed */ 326 + list_for_each_entry(part, &nvram_partitions, partition) { 327 if (part->header.signature != NVRAM_SIG_FREE) 328 continue; 329 330 + if (part->header.length >= req_size) { 331 + size = req_size; 332 free_part = part; 333 break; 334 } 335 + if (part->header.length > size && 336 + part->header.length >= min_size) { 337 + size = part->header.length; 338 free_part = part; 339 } 340 } ··· 326 /* Create our OS partition */ 327 new_part = kmalloc(sizeof(*new_part), GFP_KERNEL); 328 if (!new_part) { 329 + pr_err("nvram_create_partition: kmalloc failed\n"); 330 return -ENOMEM; 331 } 332 333 new_part->index = free_part->index; 334 + new_part->header.signature = sig; 335 new_part->header.length = size; 336 + strncpy(new_part->header.name, name, 12); 337 new_part->header.checksum = nvram_checksum(&new_part->header); 338 339 rc = nvram_write_header(new_part); 340 if (rc <= 0) { 341 + pr_err("nvram_create_partition: nvram_write_header " 342 "failed (%d)\n", rc); 343 return rc; 344 } 345 list_add_tail(&new_part->partition, &free_part->partition); 346 347 + /* Adjust or remove the partition we stole the space from */ 348 + if (free_part->header.length > size) { 349 + free_part->index += size * NVRAM_BLOCK_LEN; 350 + free_part->header.length -= size; 351 + free_part->header.checksum = nvram_checksum(&free_part->header); 352 + rc = nvram_write_header(free_part); 353 + if (rc <= 0) { 354 + pr_err("nvram_create_partition: nvram_write_header " 355 + "failed (%d)\n", rc); 356 + return rc; 357 + } 358 + } else { 359 list_del(&free_part->partition); 360 kfree(free_part); 361 } 362 363 + /* Clear the new partition */ 364 + for (tmp_index = new_part->index + NVRAM_HEADER_LEN; 365 + tmp_index < ((size - 1) * NVRAM_BLOCK_LEN); 366 + tmp_index += NVRAM_BLOCK_LEN) { 367 + rc = ppc_md.nvram_write(nv_init_vals, NVRAM_BLOCK_LEN, &tmp_index); 368 + if (rc <= 0) { 369 + pr_err("nvram_create_partition: nvram_write failed (%d)\n", rc); 370 + return rc; 371 } 372 } 373 374 + return new_part->index + NVRAM_HEADER_LEN; 375 + } 376 + 377 + /** 378 + * nvram_get_partition_size - Get the data size of an nvram partition 379 + * @data_index: This is the offset of the start of the data of 380 + * the partition. The same value that is returned by 381 + * nvram_create_partition(). 
382 + */ 383 + int nvram_get_partition_size(loff_t data_index) 384 + { 385 + struct nvram_partition *part; 386 387 + list_for_each_entry(part, &nvram_partitions, partition) { 388 + if (part->index + NVRAM_HEADER_LEN == data_index) 389 + return (part->header.length - 1) * NVRAM_BLOCK_LEN; 390 } 391 + return -1; 392 } 393 394 395 + /** 396 + * nvram_find_partition - Find an nvram partition by signature and name 397 + * @name: Name of the partition or NULL for any name 398 + * @sig: Signature to test against 399 + * @out_size: if non-NULL, returns the size of the data part of the partition 400 + */ 401 + loff_t nvram_find_partition(const char *name, int sig, int *out_size) 402 + { 403 + struct nvram_partition *p; 404 + 405 + list_for_each_entry(p, &nvram_partitions, partition) { 406 + if (p->header.signature == sig && 407 + (!name || !strncmp(p->header.name, name, 12))) { 408 + if (out_size) 409 + *out_size = (p->header.length - 1) * 410 + NVRAM_BLOCK_LEN; 411 + return p->index + NVRAM_HEADER_LEN; 412 + } 413 + } 414 + return 0; 415 + } 416 + 417 + int __init nvram_scan_partitions(void) 418 { 419 loff_t cur_index = 0; 420 struct nvram_header phead; ··· 465 int total_size; 466 int err; 467 468 + if (ppc_md.nvram_size == NULL || ppc_md.nvram_size() <= 0) 469 return -ENODEV; 470 total_size = ppc_md.nvram_size(); 471 ··· 512 513 memcpy(&tmp_part->header, &phead, NVRAM_HEADER_LEN); 514 tmp_part->index = cur_index; 515 + list_add_tail(&tmp_part->partition, &nvram_partitions); 516 517 cur_index += phead.length * NVRAM_BLOCK_LEN; 518 } 519 err = 0; 520 + 521 + #ifdef DEBUG_NVRAM 522 + nvram_print_partitions("NVRAM Partitions"); 523 + #endif 524 525 out: 526 kfree(header); ··· 525 526 static int __init nvram_init(void) 527 { 528 int rc; 529 530 + BUILD_BUG_ON(NVRAM_BLOCK_LEN != 16); 531 + 532 if (ppc_md.nvram_size == NULL || ppc_md.nvram_size() <= 0) 533 return -ENODEV; 534 ··· 537 return rc; 538 } 539 540 return rc; 541 } 542 ··· 567 { 568 misc_deregister( &nvram_dev ); 569 } 570 571 module_init(nvram_init); 572 module_exit(nvram_cleanup);
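
Taken together, nvram_scan_partitions(), nvram_find_partition(), nvram_create_partition() and nvram_get_partition_size() give platform code a small allocator over the nvram partition table. A minimal sketch of a caller, using only the API above (the "my-log" name and the sizes are invented for illustration, not part of the patch):

    static int __init example_nvram_setup(void)
    {
            loff_t data;
            int size;

            nvram_scan_partitions();
            /* Reuse an existing partition if one is already there... */
            data = nvram_find_partition("my-log", NVRAM_SIG_OS, &size);
            if (!data) {
                    /* ...otherwise ask for 1024 data bytes, accepting 512 */
                    data = nvram_create_partition("my-log", NVRAM_SIG_OS,
                                                  1024, 512);
                    if (data < 0)
                            return (int)data;
                    size = nvram_get_partition_size(data);
            }
            pr_info("example: nvram data at %d, %d bytes\n", (int)data, size);
            return 0;
    }
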
+1 -2
arch/powerpc/kernel/pci_64.c
··· 193 hose->io_resource.start += io_virt_offset; 194 hose->io_resource.end += io_virt_offset; 195 196 - pr_debug(" hose->io_resource=0x%016llx...0x%016llx\n", 197 - hose->io_resource.start, hose->io_resource.end); 198 199 return 0; 200 }
··· 193 hose->io_resource.start += io_virt_offset; 194 hose->io_resource.end += io_virt_offset; 195 196 + pr_debug(" hose->io_resource=%pR\n", &hose->io_resource); 197 198 return 0; 199 }
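
The %pR extension replaces the hand-rolled 0x%016llx pair and also prints the resource type. Illustrative use, not from the patch (the exact field widths in the output are up to vsprintf):

    struct resource res = { .start = 0x1000, .end = 0x1fff,
                            .flags = IORESOURCE_IO };
    printk(KERN_DEBUG "res=%pR\n", &res); /* something like res=[io 0x1000-0x1fff] */
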
+7
arch/powerpc/kernel/ppc_ksyms.c
··· 186 EXPORT_SYMBOL(__mfdcr); 187 #endif 188 EXPORT_SYMBOL(empty_zero_page);
··· 186 EXPORT_SYMBOL(__mfdcr); 187 #endif 188 EXPORT_SYMBOL(empty_zero_page); 189 + 190 + #ifdef CONFIG_PPC64 191 + EXPORT_SYMBOL(__arch_hweight8); 192 + EXPORT_SYMBOL(__arch_hweight16); 193 + EXPORT_SYMBOL(__arch_hweight32); 194 + EXPORT_SYMBOL(__arch_hweight64); 195 + #endif
+1
arch/powerpc/kernel/ppc_save_regs.S
··· 11 #include <asm/processor.h> 12 #include <asm/ppc_asm.h> 13 #include <asm/asm-offsets.h> 14 15 /* 16 * Grab the register values as they are now.
··· 11 #include <asm/processor.h> 12 #include <asm/ppc_asm.h> 13 #include <asm/asm-offsets.h> 14 + #include <asm/ptrace.h> 15 16 /* 17 * Grab the register values as they are now.
+16 -6
arch/powerpc/kernel/ptrace.c
··· 1316 static long ppc_set_hwdebug(struct task_struct *child, 1317 struct ppc_hw_breakpoint *bp_info) 1318 { 1319 if (bp_info->version != 1) 1320 return -ENOTSUPP; 1321 #ifdef CONFIG_PPC_ADV_DEBUG_REGS ··· 1357 /* 1358 * We only support one data breakpoint 1359 */ 1360 - if (((bp_info->trigger_type & PPC_BREAKPOINT_TRIGGER_RW) == 0) || 1361 - ((bp_info->trigger_type & ~PPC_BREAKPOINT_TRIGGER_RW) != 0) || 1362 - (bp_info->trigger_type != PPC_BREAKPOINT_TRIGGER_WRITE) || 1363 - (bp_info->addr_mode != PPC_BREAKPOINT_MODE_EXACT) || 1364 - (bp_info->condition_mode != PPC_BREAKPOINT_CONDITION_NONE)) 1365 return -EINVAL; 1366 1367 if (child->thread.dabr) ··· 1369 if ((unsigned long)bp_info->addr >= TASK_SIZE) 1370 return -EIO; 1371 1372 - child->thread.dabr = (unsigned long)bp_info->addr; 1373 1374 return 1; 1375 #endif /* !CONFIG_PPC_ADV_DEBUG_DVCS */
··· 1316 static long ppc_set_hwdebug(struct task_struct *child, 1317 struct ppc_hw_breakpoint *bp_info) 1318 { 1319 + #ifndef CONFIG_PPC_ADV_DEBUG_REGS 1320 + unsigned long dabr; 1321 + #endif 1322 + 1323 if (bp_info->version != 1) 1324 return -ENOTSUPP; 1325 #ifdef CONFIG_PPC_ADV_DEBUG_REGS ··· 1353 /* 1354 * We only support one data breakpoint 1355 */ 1356 + if ((bp_info->trigger_type & PPC_BREAKPOINT_TRIGGER_RW) == 0 || 1357 + (bp_info->trigger_type & ~PPC_BREAKPOINT_TRIGGER_RW) != 0 || 1358 + bp_info->addr_mode != PPC_BREAKPOINT_MODE_EXACT || 1359 + bp_info->condition_mode != PPC_BREAKPOINT_CONDITION_NONE) 1360 return -EINVAL; 1361 1362 if (child->thread.dabr) ··· 1366 if ((unsigned long)bp_info->addr >= TASK_SIZE) 1367 return -EIO; 1368 1369 + dabr = (unsigned long)bp_info->addr & ~7UL; 1370 + dabr |= DABR_TRANSLATION; 1371 + if (bp_info->trigger_type & PPC_BREAKPOINT_TRIGGER_READ) 1372 + dabr |= DABR_DATA_READ; 1373 + if (bp_info->trigger_type & PPC_BREAKPOINT_TRIGGER_WRITE) 1374 + dabr |= DABR_DATA_WRITE; 1375 + 1376 + child->thread.dabr = dabr; 1377 1378 return 1; 1379 #endif /* !CONFIG_PPC_ADV_DEBUG_DVCS */
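
With the Book3S branch corrected, a debugger can request read, write, or read-write watchpoints instead of write-only ones. A userspace sketch of the call this enables (illustrative; error handling omitted, struct fields per asm/ptrace.h):

    #include <sys/types.h>
    #include <sys/ptrace.h>
    #include <asm/ptrace.h>

    static long set_rw_watchpoint(pid_t child, void *addr)
    {
            struct ppc_hw_breakpoint bp = {
                    .version        = 1,
                    .trigger_type   = PPC_BREAKPOINT_TRIGGER_READ |
                                      PPC_BREAKPOINT_TRIGGER_WRITE,
                    .addr_mode      = PPC_BREAKPOINT_MODE_EXACT,
                    .condition_mode = PPC_BREAKPOINT_CONDITION_NONE,
                    .addr           = (__u64)addr,
            };
            /* returns a positive breakpoint handle on success */
            return ptrace(PPC_PTRACE_SETHWDEBUG, child, 0, &bp);
    }
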
+7
arch/powerpc/kernel/ptrace32.c
··· 280 /* We only support one DABR and no IABRS at the moment */ 281 if (addr > 0) 282 break; 283 ret = put_user(child->thread.dabr, (u32 __user *)data); 284 break; 285 } 286 ··· 316 case PTRACE_SET_DEBUGREG: 317 case PTRACE_SYSCALL: 318 case PTRACE_CONT: 319 ret = arch_ptrace(child, request, addr, data); 320 break; 321
··· 280 /* We only support one DABR and no IABRS at the moment */ 281 if (addr > 0) 282 break; 283 + #ifdef CONFIG_PPC_ADV_DEBUG_REGS 284 + ret = put_user(child->thread.dac1, (u32 __user *)data); 285 + #else 286 ret = put_user(child->thread.dabr, (u32 __user *)data); 287 + #endif 288 break; 289 } 290 ··· 312 case PTRACE_SET_DEBUGREG: 313 case PTRACE_SYSCALL: 314 case PTRACE_CONT: 315 + case PPC_PTRACE_GETHWDBGINFO: 316 + case PPC_PTRACE_SETHWDEBUG: 317 + case PPC_PTRACE_DELHWDEBUG: 318 ret = arch_ptrace(child, request, addr, data); 319 break; 320
+3
arch/powerpc/kernel/rtas.c
··· 41 #include <asm/atomic.h> 42 #include <asm/time.h> 43 #include <asm/mmu.h> 44 45 struct rtas_t rtas = { 46 .lock = __ARCH_SPIN_LOCK_UNLOCKED ··· 714 int cpu; 715 716 slb_set_size(SLB_MIN_SIZE); 717 printk(KERN_DEBUG "calling ibm,suspend-me on cpu %i\n", smp_processor_id()); 718 719 while (rc == H_MULTI_THREADS_ACTIVE && !atomic_read(&data->done) && ··· 730 rc = atomic_read(&data->error); 731 732 atomic_set(&data->error, rc); 733 734 if (wake_when_done) { 735 atomic_set(&data->done, 1);
··· 41 #include <asm/atomic.h> 42 #include <asm/time.h> 43 #include <asm/mmu.h> 44 + #include <asm/topology.h> 45 46 struct rtas_t rtas = { 47 .lock = __ARCH_SPIN_LOCK_UNLOCKED ··· 713 int cpu; 714 715 slb_set_size(SLB_MIN_SIZE); 716 + stop_topology_update(); 717 printk(KERN_DEBUG "calling ibm,suspend-me on cpu %i\n", smp_processor_id()); 718 719 while (rc == H_MULTI_THREADS_ACTIVE && !atomic_read(&data->done) && ··· 728 rc = atomic_read(&data->error); 729 730 atomic_set(&data->error, rc); 731 + start_topology_update(); 732 733 if (wake_when_done) { 734 atomic_set(&data->done, 1);
+2 -2
arch/powerpc/kernel/setup_64.c
··· 437 unsigned int i; 438 439 /* 440 - * interrupt stacks must be under 256MB, we cannot afford to take 441 - * SLB misses on them. 442 */ 443 for_each_possible_cpu(i) { 444 softirq_ctx[i] = (struct thread_info *)
··· 437 unsigned int i; 438 439 /* 440 + * Interrupt stacks must be in the first segment since we 441 + * cannot afford to take SLB misses on them. 442 */ 443 for_each_possible_cpu(i) { 444 softirq_ctx[i] = (struct thread_info *)
+16 -3
arch/powerpc/kernel/smp.c
··· 466 return id; 467 } 468 469 - /* Must be called when no change can occur to cpu_present_mask, 470 * i.e. during cpu online or offline. 471 */ 472 static struct device_node *cpu_to_l2cache(int cpu) ··· 527 notify_cpu_starting(cpu); 528 set_cpu_online(cpu, true); 529 /* Update sibling maps */ 530 - base = cpu_first_thread_in_core(cpu); 531 for (i = 0; i < threads_per_core; i++) { 532 if (cpu_is_offline(base + i)) 533 continue; ··· 613 return err; 614 615 /* Update sibling maps */ 616 - base = cpu_first_thread_in_core(cpu); 617 for (i = 0; i < threads_per_core; i++) { 618 cpumask_clear_cpu(cpu, cpu_sibling_mask(base + i)); 619 cpumask_clear_cpu(base + i, cpu_sibling_mask(cpu));
··· 466 return id; 467 } 468 469 + /* Helper routines for cpu to core mapping */ 470 + int cpu_core_index_of_thread(int cpu) 471 + { 472 + return cpu >> threads_shift; 473 + } 474 + EXPORT_SYMBOL_GPL(cpu_core_index_of_thread); 475 + 476 + int cpu_first_thread_of_core(int core) 477 + { 478 + return core << threads_shift; 479 + } 480 + EXPORT_SYMBOL_GPL(cpu_first_thread_of_core); 481 + 482 + /* Must be called when no change can occur to cpu_present_map, 483 * i.e. during cpu online or offline. 484 */ 485 static struct device_node *cpu_to_l2cache(int cpu) ··· 514 notify_cpu_starting(cpu); 515 set_cpu_online(cpu, true); 516 /* Update sibling maps */ 517 + base = cpu_first_thread_sibling(cpu); 518 for (i = 0; i < threads_per_core; i++) { 519 if (cpu_is_offline(base + i)) 520 continue; ··· 600 return err; 601 602 /* Update sibling maps */ 603 + base = cpu_first_thread_sibling(cpu); 604 for (i = 0; i < threads_per_core; i++) { 605 cpumask_clear_cpu(cpu, cpu_sibling_mask(base + i)); 606 cpumask_clear_cpu(base + i, cpu_sibling_mask(cpu));
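
A worked example of the new helpers, assuming threads_per_core = 4 (so threads_shift = 2): logical cpu 6 is thread 2 of core 1, and

    cpu_core_index_of_thread(6)  == 6 >> 2 == 1
    cpu_first_thread_of_core(1)  == 1 << 2 == 4
    cpu_first_thread_sibling(6)  == 6 & ~3 == 4

which is why the sibling-map loops above start from cpu_first_thread_sibling(cpu) and walk threads_per_core entries.
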
+1 -1
arch/powerpc/kernel/time.c
··· 155 156 static u64 tb_to_ns_scale __read_mostly; 157 static unsigned tb_to_ns_shift __read_mostly; 158 - static unsigned long boot_tb __read_mostly; 159 160 extern struct timezone sys_tz; 161 static long timezone_offset;
··· 155 156 static u64 tb_to_ns_scale __read_mostly; 157 static unsigned tb_to_ns_shift __read_mostly; 158 + static u64 boot_tb __read_mostly; 159 160 extern struct timezone sys_tz; 161 static long timezone_offset;
+1
arch/powerpc/kernel/vector.S
··· 5 #include <asm/cputable.h> 6 #include <asm/thread_info.h> 7 #include <asm/page.h> 8 9 /* 10 * load_up_altivec(unused, unused, tsk)
··· 5 #include <asm/cputable.h> 6 #include <asm/thread_info.h> 7 #include <asm/page.h> 8 + #include <asm/ptrace.h> 9 10 /* 11 * load_up_altivec(unused, unused, tsk)
+12 -3
arch/powerpc/kernel/vio.c
··· 600 vio_cmo_dealloc(viodev, alloc_size); 601 } 602 603 struct dma_map_ops vio_dma_mapping_ops = { 604 .alloc_coherent = vio_dma_iommu_alloc_coherent, 605 .free_coherent = vio_dma_iommu_free_coherent, ··· 612 .unmap_sg = vio_dma_iommu_unmap_sg, 613 .map_page = vio_dma_iommu_map_page, 614 .unmap_page = vio_dma_iommu_unmap_page, 615 616 }; 617 ··· 864 865 static void vio_cmo_set_dma_ops(struct vio_dev *viodev) 866 { 867 - vio_dma_mapping_ops.dma_supported = dma_iommu_ops.dma_supported; 868 - viodev->dev.archdata.dma_ops = &vio_dma_mapping_ops; 869 } 870 871 /** ··· 1249 if (firmware_has_feature(FW_FEATURE_CMO)) 1250 vio_cmo_set_dma_ops(viodev); 1251 else 1252 - viodev->dev.archdata.dma_ops = &dma_iommu_ops; 1253 set_iommu_table_base(&viodev->dev, vio_build_iommu_table(viodev)); 1254 set_dev_node(&viodev->dev, of_node_to_nid(of_node)); 1255 ··· 1257 viodev->dev.parent = &vio_bus_device.dev; 1258 viodev->dev.bus = &vio_bus_type; 1259 viodev->dev.release = vio_dev_release; 1260 1261 /* register with generic device framework */ 1262 if (device_register(&viodev->dev)) {
··· 600 vio_cmo_dealloc(viodev, alloc_size); 601 } 602 603 + static int vio_dma_iommu_dma_supported(struct device *dev, u64 mask) 604 + { 605 + return dma_iommu_ops.dma_supported(dev, mask); 606 + } 607 + 608 struct dma_map_ops vio_dma_mapping_ops = { 609 .alloc_coherent = vio_dma_iommu_alloc_coherent, 610 .free_coherent = vio_dma_iommu_free_coherent, ··· 607 .unmap_sg = vio_dma_iommu_unmap_sg, 608 .map_page = vio_dma_iommu_map_page, 609 .unmap_page = vio_dma_iommu_unmap_page, 610 + .dma_supported = vio_dma_iommu_dma_supported, 611 612 }; 613 ··· 858 859 static void vio_cmo_set_dma_ops(struct vio_dev *viodev) 860 { 861 + set_dma_ops(&viodev->dev, &vio_dma_mapping_ops); 862 } 863 864 /** ··· 1244 if (firmware_has_feature(FW_FEATURE_CMO)) 1245 vio_cmo_set_dma_ops(viodev); 1246 else 1247 + set_dma_ops(&viodev->dev, &dma_iommu_ops); 1248 set_iommu_table_base(&viodev->dev, vio_build_iommu_table(viodev)); 1249 set_dev_node(&viodev->dev, of_node_to_nid(of_node)); 1250 ··· 1252 viodev->dev.parent = &vio_bus_device.dev; 1253 viodev->dev.bus = &vio_bus_type; 1254 viodev->dev.release = vio_dev_release; 1255 + /* needed to ensure proper operation of coherent allocations 1256 + * later, in case driver doesn't set it explicitly */ 1257 + dma_set_mask(&viodev->dev, DMA_BIT_MASK(64)); 1258 + dma_set_coherent_mask(&viodev->dev, DMA_BIT_MASK(64)); 1259 1260 /* register with generic device framework */ 1261 if (device_register(&viodev->dev)) {
+1 -1
arch/powerpc/lib/Makefile
··· 16 17 obj-$(CONFIG_PPC64) += copypage_64.o copyuser_64.o \ 18 memcpy_64.o usercopy_64.o mem_64.o string.o \ 19 - checksum_wrappers_64.o 20 obj-$(CONFIG_XMON) += sstep.o ldstfp.o 21 obj-$(CONFIG_KPROBES) += sstep.o ldstfp.o 22 obj-$(CONFIG_HAVE_HW_BREAKPOINT) += sstep.o ldstfp.o
··· 16 17 obj-$(CONFIG_PPC64) += copypage_64.o copyuser_64.o \ 18 memcpy_64.o usercopy_64.o mem_64.o string.o \ 19 + checksum_wrappers_64.o hweight_64.o 20 obj-$(CONFIG_XMON) += sstep.o ldstfp.o 21 obj-$(CONFIG_KPROBES) += sstep.o ldstfp.o 22 obj-$(CONFIG_HAVE_HW_BREAKPOINT) += sstep.o ldstfp.o
+110
arch/powerpc/lib/hweight_64.S
···
··· 1 + /* 2 + * This program is free software; you can redistribute it and/or modify 3 + * it under the terms of the GNU General Public License as published by 4 + * the Free Software Foundation; either version 2 of the License, or 5 + * (at your option) any later version. 6 + * 7 + * This program is distributed in the hope that it will be useful, 8 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 9 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 10 + * GNU General Public License for more details. 11 + * 12 + * You should have received a copy of the GNU General Public License 13 + * along with this program; if not, write to the Free Software 14 + * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. 15 + * 16 + * Copyright (C) IBM Corporation, 2010 17 + * 18 + * Author: Anton Blanchard <anton@au.ibm.com> 19 + */ 20 + #include <asm/processor.h> 21 + #include <asm/ppc_asm.h> 22 + 23 + /* Note: This code relies on -mminimal-toc */ 24 + 25 + _GLOBAL(__arch_hweight8) 26 + BEGIN_FTR_SECTION 27 + b .__sw_hweight8 28 + nop 29 + nop 30 + FTR_SECTION_ELSE 31 + PPC_POPCNTB(r3,r3) 32 + clrldi r3,r3,64-8 33 + blr 34 + ALT_FTR_SECTION_END_IFCLR(CPU_FTR_POPCNTB) 35 + 36 + _GLOBAL(__arch_hweight16) 37 + BEGIN_FTR_SECTION 38 + b .__sw_hweight16 39 + nop 40 + nop 41 + nop 42 + nop 43 + FTR_SECTION_ELSE 44 + BEGIN_FTR_SECTION_NESTED(50) 45 + PPC_POPCNTB(r3,r3) 46 + srdi r4,r3,8 47 + add r3,r4,r3 48 + clrldi r3,r3,64-8 49 + blr 50 + FTR_SECTION_ELSE_NESTED(50) 51 + clrlwi r3,r3,16 52 + PPC_POPCNTW(r3,r3) 53 + clrldi r3,r3,64-8 54 + blr 55 + ALT_FTR_SECTION_END_NESTED_IFCLR(CPU_FTR_POPCNTD, 50) 56 + ALT_FTR_SECTION_END_IFCLR(CPU_FTR_POPCNTB) 57 + 58 + _GLOBAL(__arch_hweight32) 59 + BEGIN_FTR_SECTION 60 + b .__sw_hweight32 61 + nop 62 + nop 63 + nop 64 + nop 65 + nop 66 + nop 67 + FTR_SECTION_ELSE 68 + BEGIN_FTR_SECTION_NESTED(51) 69 + PPC_POPCNTB(r3,r3) 70 + srdi r4,r3,16 71 + add r3,r4,r3 72 + srdi r4,r3,8 73 + add r3,r4,r3 74 + clrldi r3,r3,64-8 75 + blr 76 + FTR_SECTION_ELSE_NESTED(51) 77 + PPC_POPCNTW(r3,r3) 78 + clrldi r3,r3,64-8 79 + blr 80 + ALT_FTR_SECTION_END_NESTED_IFCLR(CPU_FTR_POPCNTD, 51) 81 + ALT_FTR_SECTION_END_IFCLR(CPU_FTR_POPCNTB) 82 + 83 + _GLOBAL(__arch_hweight64) 84 + BEGIN_FTR_SECTION 85 + b .__sw_hweight64 86 + nop 87 + nop 88 + nop 89 + nop 90 + nop 91 + nop 92 + nop 93 + nop 94 + FTR_SECTION_ELSE 95 + BEGIN_FTR_SECTION_NESTED(52) 96 + PPC_POPCNTB(r3,r3) 97 + srdi r4,r3,32 98 + add r3,r4,r3 99 + srdi r4,r3,16 100 + add r3,r4,r3 101 + srdi r4,r3,8 102 + add r3,r4,r3 103 + clrldi r3,r3,64-8 104 + blr 105 + FTR_SECTION_ELSE_NESTED(52) 106 + PPC_POPCNTD(r3,r3) 107 + clrldi r3,r3,64-8 108 + blr 109 + ALT_FTR_SECTION_END_NESTED_IFCLR(CPU_FTR_POPCNTD, 52) 110 + ALT_FTR_SECTION_END_IFCLR(CPU_FTR_POPCNTB)
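
Callers are unaffected by the assembly above: generic code keeps using hweight8()/hweight64() and friends, and the feature-fixup machinery patches in either the __sw_hweight* fallback or the POPCNTB/POPCNTD path at boot, depending on CPU features. For instance (illustrative):

    #include <linux/bitops.h>

    u64 mask = 0xf0f0f0f0f0f0f0f0ULL;
    unsigned int bits = hweight64(mask); /* 32, whichever path is patched in */
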
+1 -1
arch/powerpc/mm/hash_utils_64.c
··· 1070 unsigned long access, unsigned long trap) 1071 { 1072 unsigned long vsid; 1073 - void *pgdir; 1074 pte_t *ptep; 1075 unsigned long flags; 1076 int rc, ssize, local = 0;
··· 1070 unsigned long access, unsigned long trap) 1071 { 1072 unsigned long vsid; 1073 + pgd_t *pgdir; 1074 pte_t *ptep; 1075 unsigned long flags; 1076 int rc, ssize, local = 0;
+6 -6
arch/powerpc/mm/mmu_context_nohash.c
··· 111 * a core map instead but this will do for now. 112 */ 113 for_each_cpu(cpu, mm_cpumask(mm)) { 114 - for (i = cpu_first_thread_in_core(cpu); 115 - i <= cpu_last_thread_in_core(cpu); i++) 116 __set_bit(id, stale_map[i]); 117 cpu = i - 1; 118 } ··· 264 */ 265 if (test_bit(id, stale_map[cpu])) { 266 pr_hardcont(" | stale flush %d [%d..%d]", 267 - id, cpu_first_thread_in_core(cpu), 268 - cpu_last_thread_in_core(cpu)); 269 270 local_flush_tlb_mm(next); 271 272 /* XXX This clear should ultimately be part of local_flush_tlb_mm */ 273 - for (i = cpu_first_thread_in_core(cpu); 274 - i <= cpu_last_thread_in_core(cpu); i++) { 275 __clear_bit(id, stale_map[i]); 276 } 277 }
··· 111 * a core map instead but this will do for now. 112 */ 113 for_each_cpu(cpu, mm_cpumask(mm)) { 114 + for (i = cpu_first_thread_sibling(cpu); 115 + i <= cpu_last_thread_sibling(cpu); i++) 116 __set_bit(id, stale_map[i]); 117 cpu = i - 1; 118 } ··· 264 */ 265 if (test_bit(id, stale_map[cpu])) { 266 pr_hardcont(" | stale flush %d [%d..%d]", 267 + id, cpu_first_thread_sibling(cpu), 268 + cpu_last_thread_sibling(cpu)); 269 270 local_flush_tlb_mm(next); 271 272 /* XXX This clear should ultimately be part of local_flush_tlb_mm */ 273 + for (i = cpu_first_thread_sibling(cpu); 274 + i <= cpu_last_thread_sibling(cpu); i++) { 275 __clear_bit(id, stale_map[i]); 276 } 277 }
+298 -13
arch/powerpc/mm/numa.c
··· 20 #include <linux/memblock.h> 21 #include <linux/of.h> 22 #include <linux/pfn.h> 23 #include <asm/sparsemem.h> 24 #include <asm/prom.h> 25 #include <asm/system.h> 26 #include <asm/smp.h> 27 28 static int numa_enabled = 1; 29 ··· 168 work_with_active_regions(nid, get_active_region_work_fn, node_ar); 169 } 170 171 - static void __cpuinit map_cpu_to_node(int cpu, int node) 172 { 173 numa_cpu_lookup_table[cpu] = node; 174 ··· 178 cpumask_set_cpu(cpu, node_to_cpumask_map[node]); 179 } 180 181 - #ifdef CONFIG_HOTPLUG_CPU 182 static void unmap_cpu_from_node(unsigned long cpu) 183 { 184 int node = numa_cpu_lookup_table[cpu]; ··· 192 cpu, node); 193 } 194 } 195 - #endif /* CONFIG_HOTPLUG_CPU */ 196 197 /* must hold reference to node during call */ 198 static const int *of_get_associativity(struct device_node *dev) ··· 251 /* Returns nid in the range [0..MAX_NUMNODES-1], or -1 if no useful numa 252 * info is found. 253 */ 254 - static int of_node_to_nid_single(struct device_node *device) 255 { 256 int nid = -1; 257 - const unsigned int *tmp; 258 259 if (min_common_depth == -1) 260 goto out; 261 262 - tmp = of_get_associativity(device); 263 - if (!tmp) 264 - goto out; 265 - 266 - if (tmp[0] >= min_common_depth) 267 - nid = tmp[min_common_depth]; 268 269 /* POWER4 LPAR uses 0xffff as invalid node */ 270 if (nid == 0xffff || nid >= MAX_NUMNODES) 271 nid = -1; 272 273 - if (nid > 0 && tmp[0] >= distance_ref_points_depth) 274 - initialize_distance_lookup_table(nid, tmp); 275 276 out: 277 return nid; 278 } 279 ··· 1261 return nid; 1262 } 1263 1264 #endif /* CONFIG_MEMORY_HOTPLUG */
··· 20 #include <linux/memblock.h> 21 #include <linux/of.h> 22 #include <linux/pfn.h> 23 + #include <linux/cpuset.h> 24 + #include <linux/node.h> 25 #include <asm/sparsemem.h> 26 #include <asm/prom.h> 27 #include <asm/system.h> 28 #include <asm/smp.h> 29 + #include <asm/firmware.h> 30 + #include <asm/paca.h> 31 + #include <asm/hvcall.h> 32 33 static int numa_enabled = 1; 34 ··· 163 work_with_active_regions(nid, get_active_region_work_fn, node_ar); 164 } 165 166 + static void map_cpu_to_node(int cpu, int node) 167 { 168 numa_cpu_lookup_table[cpu] = node; 169 ··· 173 cpumask_set_cpu(cpu, node_to_cpumask_map[node]); 174 } 175 176 + #if defined(CONFIG_HOTPLUG_CPU) || defined(CONFIG_PPC_SPLPAR) 177 static void unmap_cpu_from_node(unsigned long cpu) 178 { 179 int node = numa_cpu_lookup_table[cpu]; ··· 187 cpu, node); 188 } 189 } 190 + #endif /* CONFIG_HOTPLUG_CPU || CONFIG_PPC_SPLPAR */ 191 192 /* must hold reference to node during call */ 193 static const int *of_get_associativity(struct device_node *dev) ··· 246 /* Returns nid in the range [0..MAX_NUMNODES-1], or -1 if no useful numa 247 * info is found. 248 */ 249 + static int associativity_to_nid(const unsigned int *associativity) 250 { 251 int nid = -1; 252 253 if (min_common_depth == -1) 254 goto out; 255 256 + if (associativity[0] >= min_common_depth) 257 + nid = associativity[min_common_depth]; 258 259 /* POWER4 LPAR uses 0xffff as invalid node */ 260 if (nid == 0xffff || nid >= MAX_NUMNODES) 261 nid = -1; 262 263 + if (nid > 0 && associativity[0] >= distance_ref_points_depth) 264 + initialize_distance_lookup_table(nid, associativity); 265 266 out: 267 + return nid; 268 + } 269 + 270 + /* Returns the nid associated with the given device tree node, 271 + * or -1 if not found. 272 + */ 273 + static int of_node_to_nid_single(struct device_node *device) 274 + { 275 + int nid = -1; 276 + const unsigned int *tmp; 277 + 278 + tmp = of_get_associativity(device); 279 + if (tmp) 280 + nid = associativity_to_nid(tmp); 281 return nid; 282 } 283 ··· 1247 return nid; 1248 } 1249 1250 + static u64 hot_add_drconf_memory_max(void) 1251 + { 1252 + struct device_node *memory = NULL; 1253 + unsigned int drconf_cell_cnt = 0; 1254 + u64 lmb_size = 0; 1255 + const u32 *dm = 0; 1256 + 1257 + memory = of_find_node_by_path("/ibm,dynamic-reconfiguration-memory"); 1258 + if (memory) { 1259 + drconf_cell_cnt = of_get_drconf_memory(memory, &dm); 1260 + lmb_size = of_get_lmb_size(memory); 1261 + of_node_put(memory); 1262 + } 1263 + return lmb_size * drconf_cell_cnt; 1264 + } 1265 + 1266 + /* 1267 + * memory_hotplug_max - return max address of memory that may be added 1268 + * 1269 + * This is currently only used on systems that support drconfig memory 1270 + * hotplug. 1271 + */ 1272 + u64 memory_hotplug_max(void) 1273 + { 1274 + return max(hot_add_drconf_memory_max(), memblock_end_of_DRAM()); 1275 + } 1276 #endif /* CONFIG_MEMORY_HOTPLUG */ 1277 + 1278 + /* Virtual Processor Home Node (VPHN) support */ 1279 + #ifdef CONFIG_PPC_SPLPAR 1280 + #define VPHN_NR_CHANGE_CTRS (8) 1281 + static u8 vphn_cpu_change_counts[NR_CPUS][VPHN_NR_CHANGE_CTRS]; 1282 + static cpumask_t cpu_associativity_changes_mask; 1283 + static int vphn_enabled; 1284 + static void set_topology_timer(void); 1285 + 1286 + /* 1287 + * Store the current values of the associativity change counters in the 1288 + * hypervisor. 
1289 + */ 1290 + static void setup_cpu_associativity_change_counters(void) 1291 + { 1292 + int cpu = 0; 1293 + 1294 + for_each_possible_cpu(cpu) { 1295 + int i = 0; 1296 + u8 *counts = vphn_cpu_change_counts[cpu]; 1297 + volatile u8 *hypervisor_counts = lppaca[cpu].vphn_assoc_counts; 1298 + 1299 + for (i = 0; i < VPHN_NR_CHANGE_CTRS; i++) { 1300 + counts[i] = hypervisor_counts[i]; 1301 + } 1302 + } 1303 + } 1304 + 1305 + /* 1306 + * The hypervisor maintains a set of 8 associativity change counters in 1307 + * the VPA of each cpu that correspond to the associativity levels in the 1308 + * ibm,associativity-reference-points property. When an associativity 1309 + * level changes, the corresponding counter is incremented. 1310 + * 1311 + * Set a bit in cpu_associativity_changes_mask for each cpu whose home 1312 + * node associativity levels have changed. 1313 + * 1314 + * Returns the number of cpus with unhandled associativity changes. 1315 + */ 1316 + static int update_cpu_associativity_changes_mask(void) 1317 + { 1318 + int cpu = 0, nr_cpus = 0; 1319 + cpumask_t *changes = &cpu_associativity_changes_mask; 1320 + 1321 + cpumask_clear(changes); 1322 + 1323 + for_each_possible_cpu(cpu) { 1324 + int i, changed = 0; 1325 + u8 *counts = vphn_cpu_change_counts[cpu]; 1326 + volatile u8 *hypervisor_counts = lppaca[cpu].vphn_assoc_counts; 1327 + 1328 + for (i = 0; i < VPHN_NR_CHANGE_CTRS; i++) { 1329 + if (hypervisor_counts[i] > counts[i]) { 1330 + counts[i] = hypervisor_counts[i]; 1331 + changed = 1; 1332 + } 1333 + } 1334 + if (changed) { 1335 + cpumask_set_cpu(cpu, changes); 1336 + nr_cpus++; 1337 + } 1338 + } 1339 + 1340 + return nr_cpus; 1341 + } 1342 + 1343 + /* 6 64-bit registers unpacked into 12 32-bit associativity values */ 1344 + #define VPHN_ASSOC_BUFSIZE (6*sizeof(u64)/sizeof(u32)) 1345 + 1346 + /* 1347 + * Convert the associativity domain numbers returned from the hypervisor 1348 + * to the sequence they would appear in the ibm,associativity property. 1349 + */ 1350 + static int vphn_unpack_associativity(const long *packed, unsigned int *unpacked) 1351 + { 1352 + int i = 0; 1353 + int nr_assoc_doms = 0; 1354 + const u16 *field = (const u16*) packed; 1355 + 1356 + #define VPHN_FIELD_UNUSED (0xffff) 1357 + #define VPHN_FIELD_MSB (0x8000) 1358 + #define VPHN_FIELD_MASK (~VPHN_FIELD_MSB) 1359 + 1360 + for (i = 0; i < VPHN_ASSOC_BUFSIZE; i++) { 1361 + if (*field == VPHN_FIELD_UNUSED) { 1362 + /* All significant fields processed, and remaining 1363 + * fields contain the reserved value of all 1's. 1364 + * Just store them. 1365 + */ 1366 + unpacked[i] = *((u32*)field); 1367 + field += 2; 1368 + } 1369 + else if (*field & VPHN_FIELD_MSB) { 1370 + /* Data is in the lower 15 bits of this field */ 1371 + unpacked[i] = *field & VPHN_FIELD_MASK; 1372 + field++; 1373 + nr_assoc_doms++; 1374 + } 1375 + else { 1376 + /* Data is in the lower 15 bits of this field 1377 + * concatenated with the next 16 bit field 1378 + */ 1379 + unpacked[i] = *((u32*)field); 1380 + field += 2; 1381 + nr_assoc_doms++; 1382 + } 1383 + } 1384 + 1385 + return nr_assoc_doms; 1386 + } 1387 + 1388 + /* 1389 + * Retrieve the new associativity information for a virtual processor's 1390 + * home node. 
1391 + */ 1392 + static long hcall_vphn(unsigned long cpu, unsigned int *associativity) 1393 + { 1394 + long rc = 0; 1395 + long retbuf[PLPAR_HCALL9_BUFSIZE] = {0}; 1396 + u64 flags = 1; 1397 + int hwcpu = get_hard_smp_processor_id(cpu); 1398 + 1399 + rc = plpar_hcall9(H_HOME_NODE_ASSOCIATIVITY, retbuf, flags, hwcpu); 1400 + vphn_unpack_associativity(retbuf, associativity); 1401 + 1402 + return rc; 1403 + } 1404 + 1405 + static long vphn_get_associativity(unsigned long cpu, 1406 + unsigned int *associativity) 1407 + { 1408 + long rc = 0; 1409 + 1410 + rc = hcall_vphn(cpu, associativity); 1411 + 1412 + switch (rc) { 1413 + case H_FUNCTION: 1414 + printk(KERN_INFO 1415 + "VPHN is not supported. Disabling polling...\n"); 1416 + stop_topology_update(); 1417 + break; 1418 + case H_HARDWARE: 1419 + printk(KERN_ERR 1420 + "hcall_vphn() experienced a hardware fault " 1421 + "preventing VPHN. Disabling polling...\n"); 1422 + stop_topology_update(); 1423 + } 1424 + 1425 + return rc; 1426 + } 1427 + 1428 + /* 1429 + * Update the node maps and sysfs entries for each cpu whose home node 1430 + * has changed. 1431 + */ 1432 + int arch_update_cpu_topology(void) 1433 + { 1434 + int cpu = 0, nid = 0, old_nid = 0; 1435 + unsigned int associativity[VPHN_ASSOC_BUFSIZE] = {0}; 1436 + struct sys_device *sysdev = NULL; 1437 + 1438 + for_each_cpu_mask(cpu, cpu_associativity_changes_mask) { 1439 + vphn_get_associativity(cpu, associativity); 1440 + nid = associativity_to_nid(associativity); 1441 + 1442 + if (nid < 0 || !node_online(nid)) 1443 + nid = first_online_node; 1444 + 1445 + old_nid = numa_cpu_lookup_table[cpu]; 1446 + 1447 + /* Disable hotplug while we update the cpu 1448 + * masks and sysfs. 1449 + */ 1450 + get_online_cpus(); 1451 + unregister_cpu_under_node(cpu, old_nid); 1452 + unmap_cpu_from_node(cpu); 1453 + map_cpu_to_node(cpu, nid); 1454 + register_cpu_under_node(cpu, nid); 1455 + put_online_cpus(); 1456 + 1457 + sysdev = get_cpu_sysdev(cpu); 1458 + if (sysdev) 1459 + kobject_uevent(&sysdev->kobj, KOBJ_CHANGE); 1460 + } 1461 + 1462 + return 1; 1463 + } 1464 + 1465 + static void topology_work_fn(struct work_struct *work) 1466 + { 1467 + rebuild_sched_domains(); 1468 + } 1469 + static DECLARE_WORK(topology_work, topology_work_fn); 1470 + 1471 + void topology_schedule_update(void) 1472 + { 1473 + schedule_work(&topology_work); 1474 + } 1475 + 1476 + static void topology_timer_fn(unsigned long ignored) 1477 + { 1478 + if (!vphn_enabled) 1479 + return; 1480 + if (update_cpu_associativity_changes_mask() > 0) 1481 + topology_schedule_update(); 1482 + set_topology_timer(); 1483 + } 1484 + static struct timer_list topology_timer = 1485 + TIMER_INITIALIZER(topology_timer_fn, 0, 0); 1486 + 1487 + static void set_topology_timer(void) 1488 + { 1489 + topology_timer.data = 0; 1490 + topology_timer.expires = jiffies + 60 * HZ; 1491 + add_timer(&topology_timer); 1492 + } 1493 + 1494 + /* 1495 + * Start polling for VPHN associativity changes. 1496 + */ 1497 + int start_topology_update(void) 1498 + { 1499 + int rc = 0; 1500 + 1501 + if (firmware_has_feature(FW_FEATURE_VPHN)) { 1502 + vphn_enabled = 1; 1503 + setup_cpu_associativity_change_counters(); 1504 + init_timer_deferrable(&topology_timer); 1505 + set_topology_timer(); 1506 + rc = 1; 1507 + } 1508 + 1509 + return rc; 1510 + } 1511 + __initcall(start_topology_update); 1512 + 1513 + /* 1514 + * Disable polling for VPHN associativity changes. 
1515 + */ 1516 + int stop_topology_update(void) 1517 + { 1518 + vphn_enabled = 0; 1519 + return del_timer_sync(&topology_timer); 1520 + } 1521 + #endif /* CONFIG_PPC_SPLPAR */
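
A worked example of the packed format handled by vphn_unpack_associativity() (values invented): a 16-bit field of 0x8002 has its MSB set, so it is a complete domain number, 0x0002 after masking off the MSB. A field with the MSB clear, say 0x0001 followed by 0x0005, is the first half of a 32-bit domain number and unpacks to 0x00010005. A field of 0xffff marks this and all remaining fields as unused; they are stored as-is.
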
+2 -1
arch/powerpc/mm/pgtable_32.c
··· 78 79 /* pgdir take page or two with 4K pages and a page fraction otherwise */ 80 #ifndef CONFIG_PPC_4K_PAGES 81 - ret = (pgd_t *)kzalloc(1 << PGDIR_ORDER, GFP_KERNEL); 82 #else 83 ret = (pgd_t *)__get_free_pages(GFP_KERNEL|__GFP_ZERO, 84 PGDIR_ORDER - PAGE_SHIFT); ··· 230 area = get_vm_area_caller(size, VM_IOREMAP, caller); 231 if (area == 0) 232 return NULL; 233 v = (unsigned long) area->addr; 234 } else { 235 v = (ioremap_bot -= size);
··· 78 79 /* pgdir take page or two with 4K pages and a page fraction otherwise */ 80 #ifndef CONFIG_PPC_4K_PAGES 81 + ret = kzalloc(1 << PGDIR_ORDER, GFP_KERNEL); 82 #else 83 ret = (pgd_t *)__get_free_pages(GFP_KERNEL|__GFP_ZERO, 84 PGDIR_ORDER - PAGE_SHIFT); ··· 230 area = get_vm_area_caller(size, VM_IOREMAP, caller); 231 if (area == 0) 232 return NULL; 233 + area->phys_addr = p; 234 v = (unsigned long) area->addr; 235 } else { 236 v = (ioremap_bot -= size);
+2
arch/powerpc/mm/pgtable_64.c
··· 223 caller); 224 if (area == NULL) 225 return NULL; 226 ret = __ioremap_at(paligned, area->addr, size, flags); 227 if (!ret) 228 vunmap(area->addr);
··· 223 caller); 224 if (area == NULL) 225 return NULL; 226 + 227 + area->phys_addr = paligned; 228 ret = __ioremap_at(paligned, area->addr, size, flags); 229 if (!ret) 230 vunmap(area->addr);
+4 -1
arch/powerpc/platforms/44x/Makefile
··· 1 - obj-$(CONFIG_44x) := misc_44x.o idle.o 2 obj-$(CONFIG_PPC44x_SIMPLE) += ppc44x_simple.o 3 obj-$(CONFIG_EBONY) += ebony.o 4 obj-$(CONFIG_SAM440EP) += sam440ep.o
··· 1 + obj-$(CONFIG_44x) += misc_44x.o 2 + ifneq ($(CONFIG_PPC4xx_CPM),y) 3 + obj-$(CONFIG_44x) += idle.o 4 + endif 5 obj-$(CONFIG_PPC44x_SIMPLE) += ppc44x_simple.o 6 obj-$(CONFIG_EBONY) += ebony.o 7 obj-$(CONFIG_SAM440EP) += sam440ep.o
+4 -3
arch/powerpc/platforms/Kconfig
··· 313 source "arch/powerpc/sysdev/bestcomm/Kconfig" 314 315 config MPC8xxx_GPIO 316 - bool "MPC8xxx GPIO support" 317 - depends on PPC_MPC831x || PPC_MPC834x || PPC_MPC837x || FSL_SOC_BOOKE || PPC_86xx 318 select GENERIC_GPIO 319 select ARCH_REQUIRE_GPIOLIB 320 help 321 Say Y here if you're going to use hardware that connects to the 322 - MPC831x/834x/837x/8572/8610 GPIOs. 323 324 config SIMPLE_GPIO 325 bool "Support for simple, memory-mapped GPIO controllers"
··· 313 source "arch/powerpc/sysdev/bestcomm/Kconfig" 314 315 config MPC8xxx_GPIO 316 + bool "MPC512x/MPC8xxx GPIO support" 317 + depends on PPC_MPC512x || PPC_MPC831x || PPC_MPC834x || PPC_MPC837x || \ 318 + FSL_SOC_BOOKE || PPC_86xx 319 select GENERIC_GPIO 320 select ARCH_REQUIRE_GPIOLIB 321 help 322 Say Y here if you're going to use hardware that connects to the 323 + MPC512x/831x/834x/837x/8572/8610 GPIOs. 324 325 config SIMPLE_GPIO 326 bool "Support for simple, memory-mapped GPIO controllers"
+1 -2
arch/powerpc/platforms/cell/beat_iommu.c
··· 76 77 static void celleb_dma_dev_setup(struct device *dev) 78 { 79 - dev->archdata.dma_ops = get_pci_dma_ops(); 80 set_dma_offset(dev, celleb_dma_direct_offset); 81 } 82 ··· 106 static int __init celleb_init_iommu(void) 107 { 108 celleb_init_direct_mapping(); 109 - set_pci_dma_ops(&dma_direct_ops); 110 ppc_md.pci_dma_dev_setup = celleb_pci_dma_dev_setup; 111 bus_register_notifier(&platform_bus_type, &celleb_of_bus_notifier); 112
··· 76 77 static void celleb_dma_dev_setup(struct device *dev) 78 { 79 + set_dma_ops(dev, &dma_direct_ops); 80 set_dma_offset(dev, celleb_dma_direct_offset); 81 } 82 ··· 106 static int __init celleb_init_iommu(void) 107 { 108 celleb_init_direct_mapping(); 109 ppc_md.pci_dma_dev_setup = celleb_pci_dma_dev_setup; 110 bus_register_notifier(&platform_bus_type, &celleb_of_bus_notifier); 111
+1 -2
arch/powerpc/platforms/cell/spufs/lscsa_alloc.c
··· 36 struct spu_lscsa *lscsa; 37 unsigned char *p; 38 39 - lscsa = vmalloc(sizeof(struct spu_lscsa)); 40 if (!lscsa) 41 return -ENOMEM; 42 - memset(lscsa, 0, sizeof(struct spu_lscsa)); 43 csa->lscsa = lscsa; 44 45 /* Set LS pages reserved to allow for user-space mapping. */
··· 36 struct spu_lscsa *lscsa; 37 unsigned char *p; 38 39 + lscsa = vzalloc(sizeof(struct spu_lscsa)); 40 if (!lscsa) 41 return -ENOMEM; 42 csa->lscsa = lscsa; 43 44 /* Set LS pages reserved to allow for user-space mapping. */
+4
arch/powerpc/platforms/chrp/time.c
··· 29 30 extern spinlock_t rtc_lock; 31 32 static int nvram_as1 = NVRAM_AS1; 33 static int nvram_as0 = NVRAM_AS0; 34 static int nvram_data = NVRAM_DATA;
··· 29 30 extern spinlock_t rtc_lock; 31 32 + #define NVRAM_AS0 0x74 33 + #define NVRAM_AS1 0x75 34 + #define NVRAM_DATA 0x77 35 + 36 static int nvram_as1 = NVRAM_AS1; 37 static int nvram_as0 = NVRAM_AS0; 38 static int nvram_data = NVRAM_DATA;
-62
arch/powerpc/platforms/iseries/mf.c
··· 1045 .write = mf_side_proc_write, 1046 }; 1047 1048 - #if 0 1049 - static void mf_getSrcHistory(char *buffer, int size) 1050 - { 1051 - struct IplTypeReturnStuff return_stuff; 1052 - struct pending_event *ev = new_pending_event(); 1053 - int rc = 0; 1054 - char *pages[4]; 1055 - 1056 - pages[0] = kmalloc(4096, GFP_ATOMIC); 1057 - pages[1] = kmalloc(4096, GFP_ATOMIC); 1058 - pages[2] = kmalloc(4096, GFP_ATOMIC); 1059 - pages[3] = kmalloc(4096, GFP_ATOMIC); 1060 - if ((ev == NULL) || (pages[0] == NULL) || (pages[1] == NULL) 1061 - || (pages[2] == NULL) || (pages[3] == NULL)) 1062 - return -ENOMEM; 1063 - 1064 - return_stuff.xType = 0; 1065 - return_stuff.xRc = 0; 1066 - return_stuff.xDone = 0; 1067 - ev->event.hp_lp_event.xSubtype = 6; 1068 - ev->event.hp_lp_event.x.xSubtypeData = 1069 - subtype_data('M', 'F', 'V', 'I'); 1070 - ev->event.data.vsp_cmd.xEvent = &return_stuff; 1071 - ev->event.data.vsp_cmd.cmd = 4; 1072 - ev->event.data.vsp_cmd.lp_index = HvLpConfig_getLpIndex(); 1073 - ev->event.data.vsp_cmd.result_code = 0xFF; 1074 - ev->event.data.vsp_cmd.reserved = 0; 1075 - ev->event.data.vsp_cmd.sub_data.page[0] = iseries_hv_addr(pages[0]); 1076 - ev->event.data.vsp_cmd.sub_data.page[1] = iseries_hv_addr(pages[1]); 1077 - ev->event.data.vsp_cmd.sub_data.page[2] = iseries_hv_addr(pages[2]); 1078 - ev->event.data.vsp_cmd.sub_data.page[3] = iseries_hv_addr(pages[3]); 1079 - mb(); 1080 - if (signal_event(ev) != 0) 1081 - return; 1082 - 1083 - while (return_stuff.xDone != 1) 1084 - udelay(10); 1085 - if (return_stuff.xRc == 0) 1086 - memcpy(buffer, pages[0], size); 1087 - kfree(pages[0]); 1088 - kfree(pages[1]); 1089 - kfree(pages[2]); 1090 - kfree(pages[3]); 1091 - } 1092 - #endif 1093 - 1094 static int mf_src_proc_show(struct seq_file *m, void *v) 1095 { 1096 - #if 0 1097 - int len; 1098 - 1099 - mf_getSrcHistory(page, count); 1100 - len = count; 1101 - len -= off; 1102 - if (len < count) { 1103 - *eof = 1; 1104 - if (len <= 0) 1105 - return 0; 1106 - } else 1107 - len = count; 1108 - *start = page + off; 1109 - return len; 1110 - #else 1111 return 0; 1112 - #endif 1113 } 1114 1115 static int mf_src_proc_open(struct inode *inode, struct file *file)
··· 1045 .write = mf_side_proc_write, 1046 }; 1047 1048 static int mf_src_proc_show(struct seq_file *m, void *v) 1049 { 1050 return 0; 1051 } 1052 1053 static int mf_src_proc_open(struct inode *inode, struct file *file)
+1 -18
arch/powerpc/platforms/pasemi/iommu.c
··· 156 157 static void pci_dma_bus_setup_pasemi(struct pci_bus *bus) 158 { 159 - struct device_node *dn; 160 - 161 pr_debug("pci_dma_bus_setup, bus %p, bus->self %p\n", bus, bus->self); 162 163 if (!iommu_table_iobmap_inited) { 164 iommu_table_iobmap_inited = 1; 165 iommu_table_iobmap_setup(); 166 } 167 - 168 - dn = pci_bus_to_OF_node(bus); 169 - 170 - if (dn) 171 - PCI_DN(dn)->iommu_table = &iommu_table_iobmap; 172 - 173 } 174 175 ··· 183 184 set_iommu_table_base(&dev->dev, &iommu_table_iobmap); 185 } 186 - 187 - static void pci_dma_bus_setup_null(struct pci_bus *b) { } 188 - static void pci_dma_dev_setup_null(struct pci_dev *d) { } 189 190 int __init iob_init(struct device_node *dn) 191 { ··· 240 iommu_off = of_chosen && 241 of_get_property(of_chosen, "linux,iommu-off", NULL); 242 #endif 243 - if (iommu_off) { 244 - /* Direct I/O, IOMMU off */ 245 - ppc_md.pci_dma_dev_setup = pci_dma_dev_setup_null; 246 - ppc_md.pci_dma_bus_setup = pci_dma_bus_setup_null; 247 - set_pci_dma_ops(&dma_direct_ops); 248 - 249 return; 250 - } 251 252 iob_init(NULL); 253
··· 156 157 static void pci_dma_bus_setup_pasemi(struct pci_bus *bus) 158 { 159 pr_debug("pci_dma_bus_setup, bus %p, bus->self %p\n", bus, bus->self); 160 161 if (!iommu_table_iobmap_inited) { 162 iommu_table_iobmap_inited = 1; 163 iommu_table_iobmap_setup(); 164 } 165 } 166 167 ··· 191 192 set_iommu_table_base(&dev->dev, &iommu_table_iobmap); 193 } 194 195 int __init iob_init(struct device_node *dn) 196 { ··· 251 iommu_off = of_chosen && 252 of_get_property(of_chosen, "linux,iommu-off", NULL); 253 #endif 254 + if (iommu_off) 255 return; 256 257 iob_init(NULL); 258
+9
arch/powerpc/platforms/powermac/setup.c
··· 506 of_platform_device_create(np, "smu", NULL); 507 of_node_put(np); 508 } 509 510 return 0; 511 }
··· 506 of_platform_device_create(np, "smu", NULL); 507 of_node_put(np); 508 } 509 + np = of_find_node_by_type(NULL, "fcu"); 510 + if (np == NULL) { 511 + /* Some machines have strangely broken device-tree */ 512 + np = of_find_node_by_path("/u3@0,f8000000/i2c@f8001000/fan@15e"); 513 + } 514 + if (np) { 515 + of_platform_device_create(np, "temperature", NULL); 516 + of_node_put(np); 517 + } 518 519 return 0; 520 }
+10
arch/powerpc/platforms/pseries/Kconfig
··· 33 depends on PCI_MSI && EEH 34 default y 35 36 config SCANLOG 37 tristate "Scanlog dump interface" 38 depends on RTAS_PROC && PPC_PSERIES
··· 33 depends on PCI_MSI && EEH 34 default y 35 36 + config PSERIES_ENERGY 37 + tristate "pSeries energy management capabilities driver" 38 + depends on PPC_PSERIES 39 + default y 40 + help 41 + Provides an interface to platform energy management capabilities 42 + on supported PSERIES platforms. 43 + Provides: /sys/devices/system/cpu/pseries_(de)activation_hint_list 44 + and /sys/devices/system/cpu/cpuN/pseries_(de)activation_hint 45 + 46 config SCANLOG 47 tristate "Scanlog dump interface" 48 depends on RTAS_PROC && PPC_PSERIES
+1
arch/powerpc/platforms/pseries/Makefile
··· 11 obj-$(CONFIG_KEXEC) += kexec.o 12 obj-$(CONFIG_PCI) += pci.o pci_dlpar.o 13 obj-$(CONFIG_PSERIES_MSI) += msi.o 14 15 obj-$(CONFIG_HOTPLUG_CPU) += hotplug-cpu.o 16 obj-$(CONFIG_MEMORY_HOTPLUG) += hotplug-memory.o
··· 11 obj-$(CONFIG_KEXEC) += kexec.o 12 obj-$(CONFIG_PCI) += pci.o pci_dlpar.o 13 obj-$(CONFIG_PSERIES_MSI) += msi.o 14 + obj-$(CONFIG_PSERIES_ENERGY) += pseries_energy.o 15 16 obj-$(CONFIG_HOTPLUG_CPU) += hotplug-cpu.o 17 obj-$(CONFIG_MEMORY_HOTPLUG) += hotplug-memory.o
+1
arch/powerpc/platforms/pseries/firmware.c
··· 55 {FW_FEATURE_XDABR, "hcall-xdabr"}, 56 {FW_FEATURE_MULTITCE, "hcall-multi-tce"}, 57 {FW_FEATURE_SPLPAR, "hcall-splpar"}, 58 }; 59 60 /* Build up the firmware features bitmask using the contents of
··· 55 {FW_FEATURE_XDABR, "hcall-xdabr"}, 56 {FW_FEATURE_MULTITCE, "hcall-multi-tce"}, 57 {FW_FEATURE_SPLPAR, "hcall-splpar"}, 58 + {FW_FEATURE_VPHN, "hcall-vphn"}, 59 }; 60 61 /* Build up the firmware features bitmask using the contents of
+1
arch/powerpc/platforms/pseries/hvCall.S
··· 11 #include <asm/processor.h> 12 #include <asm/ppc_asm.h> 13 #include <asm/asm-offsets.h> 14 15 #define STK_PARM(i) (48 + ((i)-3)*8) 16
··· 11 #include <asm/processor.h> 12 #include <asm/ppc_asm.h> 13 #include <asm/asm-offsets.h> 14 + #include <asm/ptrace.h> 15 16 #define STK_PARM(i) (48 + ((i)-3)*8) 17
+21 -28
arch/powerpc/platforms/pseries/iommu.c
··· 140 return ret; 141 } 142 143 - static DEFINE_PER_CPU(u64 *, tce_page) = NULL; 144 145 static int tce_buildmulti_pSeriesLP(struct iommu_table *tbl, long tcenum, 146 long npages, unsigned long uaddr, ··· 323 static void iommu_table_setparms_lpar(struct pci_controller *phb, 324 struct device_node *dn, 325 struct iommu_table *tbl, 326 - const void *dma_window, 327 - int bussubno) 328 { 329 unsigned long offset, size; 330 331 - tbl->it_busno = bussubno; 332 of_parse_dma_window(dn, dma_window, &tbl->it_index, &offset, &size); 333 334 tbl->it_base = 0; 335 tbl->it_blocksize = 16; 336 tbl->it_type = TCE_PCI; ··· 449 if (!ppci->iommu_table) { 450 tbl = kzalloc_node(sizeof(struct iommu_table), GFP_KERNEL, 451 ppci->phb->node); 452 - iommu_table_setparms_lpar(ppci->phb, pdn, tbl, dma_window, 453 - bus->number); 454 ppci->iommu_table = iommu_init_table(tbl, ppci->phb->node); 455 pr_debug(" created table: %p\n", ppci->iommu_table); 456 } 457 - 458 - if (pdn != dn) 459 - PCI_DN(dn)->iommu_table = ppci->iommu_table; 460 } 461 462 ··· 528 } 529 pr_debug(" parent is %s\n", pdn->full_name); 530 531 - /* Check for parent == NULL so we don't try to setup the empty EADS 532 - * slots on POWER4 machines. 533 - */ 534 - if (dma_window == NULL || pdn->parent == NULL) { 535 - pr_debug(" no dma window for device, linking to parent\n"); 536 - set_iommu_table_base(&dev->dev, PCI_DN(pdn)->iommu_table); 537 - return; 538 - } 539 - 540 pci = PCI_DN(pdn); 541 if (!pci->iommu_table) { 542 tbl = kzalloc_node(sizeof(struct iommu_table), GFP_KERNEL, 543 pci->phb->node); 544 - iommu_table_setparms_lpar(pci->phb, pdn, tbl, dma_window, 545 - pci->phb->bus->number); 546 pci->iommu_table = iommu_init_table(tbl, pci->phb->node); 547 pr_debug(" created table: %p\n", pci->iommu_table); 548 } else { ··· 556 557 switch (action) { 558 case PSERIES_RECONFIG_REMOVE: 559 - if (pci && pci->iommu_table && 560 - of_get_property(np, "ibm,dma-window", NULL)) 561 iommu_free_table(pci->iommu_table, np->full_name); 562 break; 563 default: ··· 573 /* These are called very early. */ 574 void iommu_init_early_pSeries(void) 575 { 576 - if (of_chosen && of_get_property(of_chosen, "linux,iommu-off", NULL)) { 577 - /* Direct I/O, IOMMU off */ 578 - ppc_md.pci_dma_dev_setup = NULL; 579 - ppc_md.pci_dma_bus_setup = NULL; 580 - set_pci_dma_ops(&dma_direct_ops); 581 return; 582 - } 583 584 if (firmware_has_feature(FW_FEATURE_LPAR)) { 585 if (firmware_has_feature(FW_FEATURE_MULTITCE)) { ··· 601 set_pci_dma_ops(&dma_iommu_ops); 602 } 603
··· 140 return ret; 141 } 142 143 + static DEFINE_PER_CPU(u64 *, tce_page); 144 145 static int tce_buildmulti_pSeriesLP(struct iommu_table *tbl, long tcenum, 146 long npages, unsigned long uaddr, ··· 323 static void iommu_table_setparms_lpar(struct pci_controller *phb, 324 struct device_node *dn, 325 struct iommu_table *tbl, 326 + const void *dma_window) 327 { 328 unsigned long offset, size; 329 330 of_parse_dma_window(dn, dma_window, &tbl->it_index, &offset, &size); 331 332 + tbl->it_busno = phb->bus->number; 333 tbl->it_base = 0; 334 tbl->it_blocksize = 16; 335 tbl->it_type = TCE_PCI; ··· 450 if (!ppci->iommu_table) { 451 tbl = kzalloc_node(sizeof(struct iommu_table), GFP_KERNEL, 452 ppci->phb->node); 453 + iommu_table_setparms_lpar(ppci->phb, pdn, tbl, dma_window); 454 ppci->iommu_table = iommu_init_table(tbl, ppci->phb->node); 455 pr_debug(" created table: %p\n", ppci->iommu_table); 456 } 457 } 458 459 ··· 533 } 534 pr_debug(" parent is %s\n", pdn->full_name); 535 536 pci = PCI_DN(pdn); 537 if (!pci->iommu_table) { 538 tbl = kzalloc_node(sizeof(struct iommu_table), GFP_KERNEL, 539 pci->phb->node); 540 + iommu_table_setparms_lpar(pci->phb, pdn, tbl, dma_window); 541 pci->iommu_table = iommu_init_table(tbl, pci->phb->node); 542 pr_debug(" created table: %p\n", pci->iommu_table); 543 } else { ··· 571 572 switch (action) { 573 case PSERIES_RECONFIG_REMOVE: 574 + if (pci && pci->iommu_table) 575 iommu_free_table(pci->iommu_table, np->full_name); 576 break; 577 default: ··· 589 /* These are called very early. */ 590 void iommu_init_early_pSeries(void) 591 { 592 + if (of_chosen && of_get_property(of_chosen, "linux,iommu-off", NULL)) 593 return; 594 595 if (firmware_has_feature(FW_FEATURE_LPAR)) { 596 if (firmware_has_feature(FW_FEATURE_MULTITCE)) { ··· 622 set_pci_dma_ops(&dma_iommu_ops); 623 } 624 625 + static int __init disable_multitce(char *str) 626 + { 627 + if (strcmp(str, "off") == 0 && 628 + firmware_has_feature(FW_FEATURE_LPAR) && 629 + firmware_has_feature(FW_FEATURE_MULTITCE)) { 630 + printk(KERN_INFO "Disabling MULTITCE firmware feature\n"); 631 + ppc_md.tce_build = tce_build_pSeriesLP; 632 + ppc_md.tce_free = tce_free_pSeriesLP; 633 + powerpc_firmware_features &= ~FW_FEATURE_MULTITCE; 634 + } 635 + return 1; 636 + } 637 + 638 + __setup("multitce=", disable_multitce);
+12
arch/powerpc/platforms/pseries/lpar.c
··· 627 spin_unlock_irqrestore(&pSeries_lpar_tlbie_lock, flags); 628 } 629 630 void __init hpte_init_lpar(void) 631 { 632 ppc_md.hpte_invalidate = pSeries_lpar_hpte_invalidate;
··· 627 spin_unlock_irqrestore(&pSeries_lpar_tlbie_lock, flags); 628 } 629 630 + static int __init disable_bulk_remove(char *str) 631 + { 632 + if (strcmp(str, "off") == 0 && 633 + firmware_has_feature(FW_FEATURE_BULK_REMOVE)) { 634 + printk(KERN_INFO "Disabling BULK_REMOVE firmware feature\n"); 635 + powerpc_firmware_features &= ~FW_FEATURE_BULK_REMOVE; 636 + } 637 + return 1; 638 + } 639 + 640 + __setup("bulk_remove=", disable_bulk_remove); 641 + 642 void __init hpte_init_lpar(void) 643 { 644 ppc_md.hpte_invalidate = pSeries_lpar_hpte_invalidate;
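
Both escape hatches are wired up with __setup(), so they take effect straight from the kernel command line with no module involvement, e.g. (illustrative):

    bulk_remove=off multitce=off
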
+205
arch/powerpc/platforms/pseries/nvram.c
··· 22 #include <asm/prom.h> 23 #include <asm/machdep.h> 24 25 static unsigned int nvram_size; 26 static int nvram_fetch, nvram_store; 27 static char nvram_buf[NVRW_CNT]; /* assume this is in the first 4GB */ 28 static DEFINE_SPINLOCK(nvram_lock); 29 30 31 static ssize_t pSeries_nvram_read(char *buf, size_t count, loff_t *index) 32 { ··· 132 { 133 return nvram_size ? nvram_size : -ENODEV; 134 } 135 136 int __init pSeries_nvram_init(void) 137 {
··· 22 #include <asm/prom.h> 23 #include <asm/machdep.h> 24 25 + /* Max bytes to read/write in one go */ 26 + #define NVRW_CNT 0x20 27 + 28 static unsigned int nvram_size; 29 static int nvram_fetch, nvram_store; 30 static char nvram_buf[NVRW_CNT]; /* assume this is in the first 4GB */ 31 static DEFINE_SPINLOCK(nvram_lock); 32 33 + static long nvram_error_log_index = -1; 34 + static long nvram_error_log_size = 0; 35 + 36 + struct err_log_info { 37 + int error_type; 38 + unsigned int seq_num; 39 + }; 40 + #define NVRAM_MAX_REQ 2079 41 + #define NVRAM_MIN_REQ 1055 42 + 43 + #define NVRAM_LOG_PART_NAME "ibm,rtas-log" 44 45 static ssize_t pSeries_nvram_read(char *buf, size_t count, loff_t *index) 46 { ··· 118 { 119 return nvram_size ? nvram_size : -ENODEV; 120 } 121 + 122 + 123 + /* nvram_write_error_log 124 + * 125 + * We need to buffer the error logs into nvram to ensure that we have 126 + * the failure information to decode. If we have a severe error there 127 + * is no way to guarantee that the OS or the machine is in a state to 128 + * get back to user land and write the error to disk. For example if 129 + * the SCSI device driver causes a Machine Check by writing to a bad 130 + * IO address, there is no way of guaranteeing that the device driver 131 + * is in any state that would also be able to write the error data 132 + * captured to disk, thus we buffer it in NVRAM for analysis on the 133 + * next boot. 134 + * 135 + * In NVRAM the partition containing the error log buffer will look like: 136 + * Header (in bytes): 137 + * +-----------+----------+--------+------------+------------------+ 138 + * | signature | checksum | length | name | data | 139 + * |0 |1 |2 3|4 15|16 length-1| 140 + * +-----------+----------+--------+------------+------------------+ 141 + * 142 + * The 'data' section would look like (in bytes): 143 + * +--------------+------------+-----------------------------------+ 144 + * | event_logged | sequence # | error log | 145 + * |0 3|4 7|8 nvram_error_log_size-1| 146 + * +--------------+------------+-----------------------------------+ 147 + * 148 + * event_logged: 0 if event has not been logged to syslog, 1 if it has 149 + * sequence #: The unique sequence # for each event (until it wraps). 
150 + * error log: The error log from event_scan 151 + */ 152 + int nvram_write_error_log(char * buff, int length, 153 + unsigned int err_type, unsigned int error_log_cnt) 154 + { 155 + int rc; 156 + loff_t tmp_index; 157 + struct err_log_info info; 158 + 159 + if (nvram_error_log_index == -1) { 160 + return -ESPIPE; 161 + } 162 + 163 + if (length > nvram_error_log_size) { 164 + length = nvram_error_log_size; 165 + } 166 + 167 + info.error_type = err_type; 168 + info.seq_num = error_log_cnt; 169 + 170 + tmp_index = nvram_error_log_index; 171 + 172 + rc = ppc_md.nvram_write((char *)&info, sizeof(struct err_log_info), &tmp_index); 173 + if (rc <= 0) { 174 + printk(KERN_ERR "nvram_write_error_log: Failed nvram_write (%d)\n", rc); 175 + return rc; 176 + } 177 + 178 + rc = ppc_md.nvram_write(buff, length, &tmp_index); 179 + if (rc <= 0) { 180 + printk(KERN_ERR "nvram_write_error_log: Failed nvram_write (%d)\n", rc); 181 + return rc; 182 + } 183 + 184 + return 0; 185 + } 186 + 187 + /* nvram_read_error_log 188 + * 189 + * Reads the error log from nvram, at most 'length' bytes 190 + */ 191 + int nvram_read_error_log(char * buff, int length, 192 + unsigned int * err_type, unsigned int * error_log_cnt) 193 + { 194 + int rc; 195 + loff_t tmp_index; 196 + struct err_log_info info; 197 + 198 + if (nvram_error_log_index == -1) 199 + return -1; 200 + 201 + if (length > nvram_error_log_size) 202 + length = nvram_error_log_size; 203 + 204 + tmp_index = nvram_error_log_index; 205 + 206 + rc = ppc_md.nvram_read((char *)&info, sizeof(struct err_log_info), &tmp_index); 207 + if (rc <= 0) { 208 + printk(KERN_ERR "nvram_read_error_log: Failed nvram_read (%d)\n", rc); 209 + return rc; 210 + } 211 + 212 + rc = ppc_md.nvram_read(buff, length, &tmp_index); 213 + if (rc <= 0) { 214 + printk(KERN_ERR "nvram_read_error_log: Failed nvram_read (%d)\n", rc); 215 + return rc; 216 + } 217 + 218 + *error_log_cnt = info.seq_num; 219 + *err_type = info.error_type; 220 + 221 + return 0; 222 + } 223 + 224 + /* This doesn't actually zero anything, but it sets the event_logged 225 + * word to record that this event is safely in syslog. 226 + */ 227 + int nvram_clear_error_log(void) 228 + { 229 + loff_t tmp_index; 230 + int clear_word = ERR_FLAG_ALREADY_LOGGED; 231 + int rc; 232 + 233 + if (nvram_error_log_index == -1) 234 + return -1; 235 + 236 + tmp_index = nvram_error_log_index; 237 + 238 + rc = ppc_md.nvram_write((char *)&clear_word, sizeof(int), &tmp_index); 239 + if (rc <= 0) { 240 + printk(KERN_ERR "nvram_clear_error_log: Failed nvram_write (%d)\n", rc); 241 + return rc; 242 + } 243 + 244 + return 0; 245 + } 246 + 247 + /* pseries_nvram_init_log_partition 248 + * 249 + * This will set up the partition we need for buffering the 250 + * error logs and cleanup partitions if needed. 251 + * 252 + * The general strategy is the following: 253 + * 1.) If there is a log partition large enough then use it. 254 + * 2.) If there is none large enough, search 255 + * for a free partition that is large enough. 256 + * 3.) If there is not a free partition large enough, remove 257 + * _all_ OS partitions and consolidate the space. 258 + * 4.) Will first try getting a chunk that will satisfy the maximum 259 + * error log size (NVRAM_MAX_REQ). 260 + * 5.) If the max chunk cannot be allocated then try finding a chunk 261 + * that will satisfy the minimum needed (NVRAM_MIN_REQ). 
262 + */ 263 + static int __init pseries_nvram_init_log_partition(void) 264 + { 265 + loff_t p; 266 + int size; 267 + 268 + /* Scan nvram for partitions */ 269 + nvram_scan_partitions(); 270 + 271 + /* Look for ours */ 272 + p = nvram_find_partition(NVRAM_LOG_PART_NAME, NVRAM_SIG_OS, &size); 273 + 274 + /* Found one but too small, remove it */ 275 + if (p && size < NVRAM_MIN_REQ) { 276 + pr_info("nvram: Found too small "NVRAM_LOG_PART_NAME" partition" 277 + ", removing it..."); 278 + nvram_remove_partition(NVRAM_LOG_PART_NAME, NVRAM_SIG_OS); 279 + p = 0; 280 + } 281 + 282 + /* Create one if we didn't find one */ 283 + if (!p) { 284 + p = nvram_create_partition(NVRAM_LOG_PART_NAME, NVRAM_SIG_OS, 285 + NVRAM_MAX_REQ, NVRAM_MIN_REQ); 286 + /* No room for it, try to get rid of any OS partition 287 + * and try again 288 + */ 289 + if (p == -ENOSPC) { 290 + pr_info("nvram: No room to create "NVRAM_LOG_PART_NAME 291 + " partition, deleting all OS partitions..."); 292 + nvram_remove_partition(NULL, NVRAM_SIG_OS); 293 + p = nvram_create_partition(NVRAM_LOG_PART_NAME, 294 + NVRAM_SIG_OS, NVRAM_MAX_REQ, 295 + NVRAM_MIN_REQ); 296 + } 297 + } 298 + 299 + if (p <= 0) { 300 + pr_err("nvram: Failed to find or create "NVRAM_LOG_PART_NAME 301 + " partition, err %d\n", (int)p); 302 + return 0; 303 + } 304 + 305 + nvram_error_log_index = p; 306 + nvram_error_log_size = nvram_get_partition_size(p) - 307 + sizeof(struct err_log_info); 308 + 309 + return 0; 310 + } 311 + machine_arch_initcall(pseries, pseries_nvram_init_log_partition); 312 313 int __init pSeries_nvram_init(void) 314 {
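
A worked sizing example, following the 16-byte block arithmetic in nvram_64.c above: a request for NVRAM_MAX_REQ = 2079 bytes rounds up to 130 data blocks plus one header block, i.e. 2096 bytes of nvram. nvram_get_partition_size() then reports 130 * 16 = 2080 data bytes, of which sizeof(struct err_log_info) = 8 go to the error-type and sequence-number words, leaving nvram_error_log_size = 2072 bytes for the log itself.
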
+326
arch/powerpc/platforms/pseries/pseries_energy.c
···
··· 1 + /* 2 + * POWER platform energy management driver 3 + * Copyright (C) 2010 IBM Corporation 4 + * 5 + * This program is free software; you can redistribute it and/or 6 + * modify it under the terms of the GNU General Public License 7 + * version 2 as published by the Free Software Foundation. 8 + * 9 + * This pseries platform device driver provides access to 10 + * platform energy management capabilities. 11 + */ 12 + 13 + #include <linux/module.h> 14 + #include <linux/types.h> 15 + #include <linux/errno.h> 16 + #include <linux/init.h> 17 + #include <linux/seq_file.h> 18 + #include <linux/sysdev.h> 19 + #include <linux/cpu.h> 20 + #include <linux/of.h> 21 + #include <asm/cputhreads.h> 22 + #include <asm/page.h> 23 + #include <asm/hvcall.h> 24 + 25 + 26 + #define MODULE_VERS "1.0" 27 + #define MODULE_NAME "pseries_energy" 28 + 29 + /* Driver flags */ 30 + 31 + static int sysfs_entries; 32 + 33 + /* Helper routines */ 34 + 35 + /* 36 + * Routine to detect firmware support for hcall 37 + * return 1 if H_BEST_ENERGY is supported 38 + * else return 0 39 + */ 40 + 41 + static int check_for_h_best_energy(void) 42 + { 43 + struct device_node *rtas = NULL; 44 + const char *hypertas, *s; 45 + int length; 46 + int rc = 0; 47 + 48 + rtas = of_find_node_by_path("/rtas"); 49 + if (!rtas) 50 + return 0; 51 + 52 + hypertas = of_get_property(rtas, "ibm,hypertas-functions", &length); 53 + if (!hypertas) { 54 + of_node_put(rtas); 55 + return 0; 56 + } 57 + 58 + /* hypertas will have a list of strings with hcall names */ 59 + for (s = hypertas; s < hypertas + length; s += strlen(s) + 1) { 60 + if (!strncmp("hcall-best-energy-1", s, 19)) { 61 + rc = 1; /* Found the string */ 62 + break; 63 + } 64 + } 65 + of_node_put(rtas); 66 + return rc; 67 + } 68 + 69 + /* Helper routines to convert between drc_index and cpu numbers */ 70 + 71 + static u32 cpu_to_drc_index(int cpu) 72 + { 73 + struct device_node *dn = NULL; 74 + const int *indexes; 75 + int i; 76 + int rc = 1; 77 + u32 ret = 0; 78 + 79 + dn = of_find_node_by_path("/cpus"); 80 + if (dn == NULL) 81 + goto err; 82 + indexes = of_get_property(dn, "ibm,drc-indexes", NULL); 83 + if (indexes == NULL) 84 + goto err_of_node_put; 85 + /* Convert logical cpu number to core number */ 86 + i = cpu_core_index_of_thread(cpu); 87 + /* 88 + * The first element indexes[0] is the number of drc_indexes 89 + * returned in the list. Hence i+1 will get the drc_index 90 + * corresponding to core number i. 91 + */ 92 + WARN_ON(i > indexes[0]); 93 + ret = indexes[i + 1]; 94 + rc = 0; 95 + 96 + err_of_node_put: 97 + of_node_put(dn); 98 + err: 99 + if (rc) 100 + printk(KERN_WARNING "cpu_to_drc_index(%d) failed", cpu); 101 + return ret; 102 + } 103 + 104 + static int drc_index_to_cpu(u32 drc_index) 105 + { 106 + struct device_node *dn = NULL; 107 + const int *indexes; 108 + int i, cpu = 0; 109 + int rc = 1; 110 + 111 + dn = of_find_node_by_path("/cpus"); 112 + if (dn == NULL) 113 + goto err; 114 + indexes = of_get_property(dn, "ibm,drc-indexes", NULL); 115 + if (indexes == NULL) 116 + goto err_of_node_put; 117 + /* 118 + * First element in the array is the number of drc_indexes 119 + * returned. 
Search through the list to find the matching 120 + * drc_index and get the core number 121 + */ 122 + for (i = 0; i < indexes[0]; i++) { 123 + if (indexes[i + 1] == drc_index) 124 + break; 125 + } 126 + /* Convert core number to logical cpu number */ 127 + cpu = cpu_first_thread_of_core(i); 128 + rc = 0; 129 + 130 + err_of_node_put: 131 + of_node_put(dn); 132 + err: 133 + if (rc) 134 + printk(KERN_WARNING "drc_index_to_cpu(%d) failed", drc_index); 135 + return cpu; 136 + } 137 + 138 + /* 139 + * pseries hypervisor call H_BEST_ENERGY provides hints to OS on 140 + * preferred logical cpus to activate or deactivate for optimized 141 + * energy consumption. 142 + */ 143 + 144 + #define FLAGS_MODE1 0x004E200000080E01 145 + #define FLAGS_MODE2 0x004E200000080401 146 + #define FLAGS_ACTIVATE 0x100 147 + 148 + static ssize_t get_best_energy_list(char *page, int activate) 149 + { 150 + int rc, cnt, i, cpu; 151 + unsigned long retbuf[PLPAR_HCALL9_BUFSIZE]; 152 + unsigned long flags = 0; 153 + u32 *buf_page; 154 + char *s = page; 155 + 156 + buf_page = (u32 *) get_zeroed_page(GFP_KERNEL); 157 + if (!buf_page) 158 + return -ENOMEM; 159 + 160 + flags = FLAGS_MODE1; 161 + if (activate) 162 + flags |= FLAGS_ACTIVATE; 163 + 164 + rc = plpar_hcall9(H_BEST_ENERGY, retbuf, flags, 0, __pa(buf_page), 165 + 0, 0, 0, 0, 0, 0); 166 + if (rc != H_SUCCESS) { 167 + free_page((unsigned long) buf_page); 168 + return -EINVAL; 169 + } 170 + 171 + cnt = retbuf[0]; 172 + for (i = 0; i < cnt; i++) { 173 + cpu = drc_index_to_cpu(buf_page[2*i+1]); 174 + if ((cpu_online(cpu) && !activate) || 175 + (!cpu_online(cpu) && activate)) 176 + s += sprintf(s, "%d,", cpu); 177 + } 178 + if (s > page) { /* Something to show */ 179 + s--; /* Suppress last comma */ 180 + s += sprintf(s, "\n"); 181 + } 182 + 183 + free_page((unsigned long) buf_page); 184 + return s-page; 185 + } 186 + 187 + static ssize_t get_best_energy_data(struct sys_device *dev, 188 + char *page, int activate) 189 + { 190 + int rc; 191 + unsigned long retbuf[PLPAR_HCALL9_BUFSIZE]; 192 + unsigned long flags = 0; 193 + 194 + flags = FLAGS_MODE2; 195 + if (activate) 196 + flags |= FLAGS_ACTIVATE; 197 + 198 + rc = plpar_hcall9(H_BEST_ENERGY, retbuf, flags, 199 + cpu_to_drc_index(dev->id), 200 + 0, 0, 0, 0, 0, 0, 0); 201 + 202 + if (rc != H_SUCCESS) 203 + return -EINVAL; 204 + 205 + return sprintf(page, "%lu\n", retbuf[1] >> 32); 206 + } 207 + 208 + /* Wrapper functions */ 209 + 210 + static ssize_t cpu_activate_hint_list_show(struct sysdev_class *class, 211 + struct sysdev_class_attribute *attr, char *page) 212 + { 213 + return get_best_energy_list(page, 1); 214 + } 215 + 216 + static ssize_t cpu_deactivate_hint_list_show(struct sysdev_class *class, 217 + struct sysdev_class_attribute *attr, char *page) 218 + { 219 + return get_best_energy_list(page, 0); 220 + } 221 + 222 + static ssize_t percpu_activate_hint_show(struct sys_device *dev, 223 + struct sysdev_attribute *attr, char *page) 224 + { 225 + return get_best_energy_data(dev, page, 1); 226 + } 227 + 228 + static ssize_t percpu_deactivate_hint_show(struct sys_device *dev, 229 + struct sysdev_attribute *attr, char *page) 230 + { 231 + return get_best_energy_data(dev, page, 0); 232 + } 233 + 234 + /* 235 + * Create sysfs interface: 236 + * /sys/devices/system/cpu/pseries_activate_hint_list 237 + * /sys/devices/system/cpu/pseries_deactivate_hint_list 238 + * Comma separated list of cpus to activate or deactivate 239 + * /sys/devices/system/cpu/cpuN/pseries_activate_hint 240 + * 
/sys/devices/system/cpu/cpuN/pseries_deactivate_hint 241 + * Per-cpu value of the hint 242 + */ 243 + 244 + struct sysdev_class_attribute attr_cpu_activate_hint_list = 245 + _SYSDEV_CLASS_ATTR(pseries_activate_hint_list, 0444, 246 + cpu_activate_hint_list_show, NULL); 247 + 248 + struct sysdev_class_attribute attr_cpu_deactivate_hint_list = 249 + _SYSDEV_CLASS_ATTR(pseries_deactivate_hint_list, 0444, 250 + cpu_deactivate_hint_list_show, NULL); 251 + 252 + struct sysdev_attribute attr_percpu_activate_hint = 253 + _SYSDEV_ATTR(pseries_activate_hint, 0444, 254 + percpu_activate_hint_show, NULL); 255 + 256 + struct sysdev_attribute attr_percpu_deactivate_hint = 257 + _SYSDEV_ATTR(pseries_deactivate_hint, 0444, 258 + percpu_deactivate_hint_show, NULL); 259 + 260 + static int __init pseries_energy_init(void) 261 + { 262 + int cpu, err; 263 + struct sys_device *cpu_sys_dev; 264 + 265 + if (!check_for_h_best_energy()) { 266 + printk(KERN_INFO "Hypercall H_BEST_ENERGY not supported\n"); 267 + return 0; 268 + } 269 + /* Create the sysfs files */ 270 + err = sysfs_create_file(&cpu_sysdev_class.kset.kobj, 271 + &attr_cpu_activate_hint_list.attr); 272 + if (!err) 273 + err = sysfs_create_file(&cpu_sysdev_class.kset.kobj, 274 + &attr_cpu_deactivate_hint_list.attr); 275 + 276 + if (err) 277 + return err; 278 + for_each_possible_cpu(cpu) { 279 + cpu_sys_dev = get_cpu_sysdev(cpu); 280 + err = sysfs_create_file(&cpu_sys_dev->kobj, 281 + &attr_percpu_activate_hint.attr); 282 + if (err) 283 + break; 284 + err = sysfs_create_file(&cpu_sys_dev->kobj, 285 + &attr_percpu_deactivate_hint.attr); 286 + if (err) 287 + break; 288 + } 289 + 290 + if (err) 291 + return err; 292 + 293 + sysfs_entries = 1; /* Removed entries on cleanup */ 294 + return 0; 295 + 296 + } 297 + 298 + static void __exit pseries_energy_cleanup(void) 299 + { 300 + int cpu; 301 + struct sys_device *cpu_sys_dev; 302 + 303 + if (!sysfs_entries) 304 + return; 305 + 306 + /* Remove the sysfs files */ 307 + sysfs_remove_file(&cpu_sysdev_class.kset.kobj, 308 + &attr_cpu_activate_hint_list.attr); 309 + 310 + sysfs_remove_file(&cpu_sysdev_class.kset.kobj, 311 + &attr_cpu_deactivate_hint_list.attr); 312 + 313 + for_each_possible_cpu(cpu) { 314 + cpu_sys_dev = get_cpu_sysdev(cpu); 315 + sysfs_remove_file(&cpu_sys_dev->kobj, 316 + &attr_percpu_activate_hint.attr); 317 + sysfs_remove_file(&cpu_sys_dev->kobj, 318 + &attr_percpu_deactivate_hint.attr); 319 + } 320 + } 321 + 322 + module_init(pseries_energy_init); 323 + module_exit(pseries_energy_cleanup); 324 + MODULE_DESCRIPTION("Driver for pSeries platform energy management"); 325 + MODULE_AUTHOR("Vaidyanathan Srinivasan"); 326 + MODULE_LICENSE("GPL");
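The new attributes are plain text files, so a trivial userspace consumer suffices. A sketch (the path is the one listed in the comment above; the program itself is hypothetical):

    #include <stdio.h>

    int main(void)
    {
        char line[256];
        FILE *f = fopen("/sys/devices/system/cpu/pseries_activate_hint_list", "r");

        if (!f)
            return 1;    /* H_BEST_ENERGY unsupported, files not created */
        if (fgets(line, sizeof(line), f))
            printf("cpus to activate: %s", line);    /* comma separated */
        fclose(f);
        return 0;
    }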
+1
arch/powerpc/sysdev/Makefile
··· 41 ifeq ($(CONFIG_PCI),y) 42 obj-$(CONFIG_4xx) += ppc4xx_pci.o 43 endif 44 obj-$(CONFIG_PPC4xx_GPIO) += ppc4xx_gpio.o 45 46 obj-$(CONFIG_CPM) += cpm_common.o
··· 41 ifeq ($(CONFIG_PCI),y) 42 obj-$(CONFIG_4xx) += ppc4xx_pci.o 43 endif 44 + obj-$(CONFIG_PPC4xx_CPM) += ppc4xx_cpm.o 45 obj-$(CONFIG_PPC4xx_GPIO) += ppc4xx_gpio.o 46 47 obj-$(CONFIG_CPM) += cpm_common.o
+1 -8
arch/powerpc/sysdev/dart_iommu.c
··· 312 313 static void pci_dma_bus_setup_dart(struct pci_bus *bus) 314 { 315 - struct device_node *dn; 316 - 317 if (!iommu_table_dart_inited) { 318 iommu_table_dart_inited = 1; 319 iommu_table_dart_setup(); 320 } 321 - 322 - dn = pci_bus_to_OF_node(bus); 323 - 324 - if (dn) 325 - PCI_DN(dn)->iommu_table = &iommu_table_dart; 326 } 327 328 static bool dart_device_on_pcie(struct device *dev) ··· 366 if (dn == NULL) { 367 dn = of_find_compatible_node(NULL, "dart", "u4-dart"); 368 if (dn == NULL) 369 - goto bail; 370 dart_is_u4 = 1; 371 } 372
··· 312 313 static void pci_dma_bus_setup_dart(struct pci_bus *bus) 314 { 315 if (!iommu_table_dart_inited) { 316 iommu_table_dart_inited = 1; 317 iommu_table_dart_setup(); 318 } 319 } 320 321 static bool dart_device_on_pcie(struct device *dev) ··· 373 if (dn == NULL) { 374 dn = of_find_compatible_node(NULL, "dart", "u4-dart"); 375 if (dn == NULL) 376 + return; /* use default direct_dma_ops */ 377 dart_is_u4 = 1; 378 } 379
+67 -8
arch/powerpc/sysdev/mpc8xxx_gpio.c
··· 1 /* 2 - * GPIOs on MPC8349/8572/8610 and compatible 3 * 4 * Copyright (C) 2008 Peter Korsgaard <jacmet@sunsite.dk> 5 * ··· 26 #define GPIO_IER 0x0c 27 #define GPIO_IMR 0x10 28 #define GPIO_ICR 0x14 29 30 struct mpc8xxx_gpio_chip { 31 struct of_mm_gpio_chip mm_gc; ··· 38 */ 39 u32 data; 40 struct irq_host *irq; 41 }; 42 43 static inline u32 mpc8xxx_gpio2mask(unsigned int gpio) ··· 217 return 0; 218 } 219 220 static struct irq_chip mpc8xxx_irq_chip = { 221 .name = "mpc8xxx-gpio", 222 .unmask = mpc8xxx_irq_unmask, ··· 273 static int mpc8xxx_gpio_irq_map(struct irq_host *h, unsigned int virq, 274 irq_hw_number_t hw) 275 { 276 set_irq_chip_data(virq, h->host_data); 277 set_irq_chip_and_handler(virq, &mpc8xxx_irq_chip, handle_level_irq); 278 set_irq_type(virq, IRQ_TYPE_NONE); ··· 305 .xlate = mpc8xxx_gpio_irq_xlate, 306 }; 307 308 static void __init mpc8xxx_add_controller(struct device_node *np) 309 { 310 struct mpc8xxx_gpio_chip *mpc8xxx_gc; 311 struct of_mm_gpio_chip *mm_gc; 312 struct gpio_chip *gc; 313 unsigned hwirq; 314 int ret; 315 ··· 358 if (!mpc8xxx_gc->irq) 359 goto skip_irq; 360 361 mpc8xxx_gc->irq->host_data = mpc8xxx_gc; 362 363 /* ack and mask all irqs */ ··· 386 { 387 struct device_node *np; 388 389 - for_each_compatible_node(np, NULL, "fsl,mpc8349-gpio") 390 - mpc8xxx_add_controller(np); 391 - 392 - for_each_compatible_node(np, NULL, "fsl,mpc8572-gpio") 393 - mpc8xxx_add_controller(np); 394 - 395 - for_each_compatible_node(np, NULL, "fsl,mpc8610-gpio") 396 mpc8xxx_add_controller(np); 397 398 for_each_compatible_node(np, NULL, "fsl,qoriq-gpio")
··· 1 /* 2 + * GPIOs on MPC512x/8349/8572/8610 and compatible 3 * 4 * Copyright (C) 2008 Peter Korsgaard <jacmet@sunsite.dk> 5 * ··· 26 #define GPIO_IER 0x0c 27 #define GPIO_IMR 0x10 28 #define GPIO_ICR 0x14 29 + #define GPIO_ICR2 0x18 30 31 struct mpc8xxx_gpio_chip { 32 struct of_mm_gpio_chip mm_gc; ··· 37 */ 38 u32 data; 39 struct irq_host *irq; 40 + void *of_dev_id_data; 41 }; 42 43 static inline u32 mpc8xxx_gpio2mask(unsigned int gpio) ··· 215 return 0; 216 } 217 218 + static int mpc512x_irq_set_type(unsigned int virq, unsigned int flow_type) 219 + { 220 + struct mpc8xxx_gpio_chip *mpc8xxx_gc = get_irq_chip_data(virq); 221 + struct of_mm_gpio_chip *mm = &mpc8xxx_gc->mm_gc; 222 + unsigned long gpio = virq_to_hw(virq); 223 + void __iomem *reg; 224 + unsigned int shift; 225 + unsigned long flags; 226 + 227 + if (gpio < 16) { 228 + reg = mm->regs + GPIO_ICR; 229 + shift = (15 - gpio) * 2; 230 + } else { 231 + reg = mm->regs + GPIO_ICR2; 232 + shift = (15 - (gpio % 16)) * 2; 233 + } 234 + 235 + switch (flow_type) { 236 + case IRQ_TYPE_EDGE_FALLING: 237 + case IRQ_TYPE_LEVEL_LOW: 238 + spin_lock_irqsave(&mpc8xxx_gc->lock, flags); 239 + clrsetbits_be32(reg, 3 << shift, 2 << shift); 240 + spin_unlock_irqrestore(&mpc8xxx_gc->lock, flags); 241 + break; 242 + 243 + case IRQ_TYPE_EDGE_RISING: 244 + case IRQ_TYPE_LEVEL_HIGH: 245 + spin_lock_irqsave(&mpc8xxx_gc->lock, flags); 246 + clrsetbits_be32(reg, 3 << shift, 1 << shift); 247 + spin_unlock_irqrestore(&mpc8xxx_gc->lock, flags); 248 + break; 249 + 250 + case IRQ_TYPE_EDGE_BOTH: 251 + spin_lock_irqsave(&mpc8xxx_gc->lock, flags); 252 + clrbits32(reg, 3 << shift); 253 + spin_unlock_irqrestore(&mpc8xxx_gc->lock, flags); 254 + break; 255 + 256 + default: 257 + return -EINVAL; 258 + } 259 + 260 + return 0; 261 + } 262 + 263 static struct irq_chip mpc8xxx_irq_chip = { 264 .name = "mpc8xxx-gpio", 265 .unmask = mpc8xxx_irq_unmask, ··· 226 static int mpc8xxx_gpio_irq_map(struct irq_host *h, unsigned int virq, 227 irq_hw_number_t hw) 228 { 229 + struct mpc8xxx_gpio_chip *mpc8xxx_gc = h->host_data; 230 + 231 + if (mpc8xxx_gc->of_dev_id_data) 232 + mpc8xxx_irq_chip.set_type = mpc8xxx_gc->of_dev_id_data; 233 + 234 set_irq_chip_data(virq, h->host_data); 235 set_irq_chip_and_handler(virq, &mpc8xxx_irq_chip, handle_level_irq); 236 set_irq_type(virq, IRQ_TYPE_NONE); ··· 253 .xlate = mpc8xxx_gpio_irq_xlate, 254 }; 255 256 + static struct of_device_id mpc8xxx_gpio_ids[] __initdata = { 257 + { .compatible = "fsl,mpc8349-gpio", }, 258 + { .compatible = "fsl,mpc8572-gpio", }, 259 + { .compatible = "fsl,mpc8610-gpio", }, 260 + { .compatible = "fsl,mpc5121-gpio", .data = mpc512x_irq_set_type, }, 261 + {} 262 + }; 263 + 264 static void __init mpc8xxx_add_controller(struct device_node *np) 265 { 266 struct mpc8xxx_gpio_chip *mpc8xxx_gc; 267 struct of_mm_gpio_chip *mm_gc; 268 struct gpio_chip *gc; 269 + const struct of_device_id *id; 270 unsigned hwirq; 271 int ret; 272 ··· 297 if (!mpc8xxx_gc->irq) 298 goto skip_irq; 299 300 + id = of_match_node(mpc8xxx_gpio_ids, np); 301 + if (id) 302 + mpc8xxx_gc->of_dev_id_data = id->data; 303 + 304 mpc8xxx_gc->irq->host_data = mpc8xxx_gc; 305 306 /* ack and mask all irqs */ ··· 321 { 322 struct device_node *np; 323 324 + for_each_matching_node(np, mpc8xxx_gpio_ids) 325 mpc8xxx_add_controller(np); 326 327 for_each_compatible_node(np, NULL, "fsl,qoriq-gpio")
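The notable pattern here is dispatching an irq_chip callback from the match table: each of_device_id entry may carry a .data pointer, and of_match_node() returns the entry that matched, so only "fsl,mpc5121-gpio" nodes pick up mpc512x_irq_set_type. A stripped-down sketch of the idiom; apart from of_device_id and of_match_node(), all names below are invented:

    #include <linux/irq.h>
    #include <linux/of.h>

    static int chip_b_set_type(unsigned int virq, unsigned int flow_type);
    static struct irq_chip example_irq_chip;

    static struct of_device_id example_ids[] = {
        { .compatible = "vendor,chip-a", },    /* no set_type override */
        { .compatible = "vendor,chip-b", .data = chip_b_set_type, },
        {}
    };

    static void example_probe(struct device_node *np)
    {
        const struct of_device_id *id = of_match_node(example_ids, np);

        /* .data is NULL for chip-a, the per-chip hook for chip-b */
        if (id && id->data)
            example_irq_chip.set_type = id->data;
    }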
+346
arch/powerpc/sysdev/ppc4xx_cpm.c
···
··· 1 + /* 2 + * PowerPC 4xx Clock and Power Management 3 + * 4 + * Copyright (C) 2010, Applied Micro Circuits Corporation 5 + * Victor Gallardo (vgallardo@apm.com) 6 + * 7 + * Based on arch/powerpc/platforms/44x/idle.c: 8 + * Jerone Young <jyoung5@us.ibm.com> 9 + * Copyright 2008 IBM Corp. 10 + * 11 + * Based on arch/powerpc/sysdev/fsl_pmc.c: 12 + * Anton Vorontsov <avorontsov@ru.mvista.com> 13 + * Copyright 2009 MontaVista Software, Inc. 14 + * 15 + * See file CREDITS for list of people who contributed to this 16 + * project. 17 + * 18 + * This program is free software; you can redistribute it and/or 19 + * modify it under the terms of the GNU General Public License as 20 + * published by the Free Software Foundation; either version 2 of 21 + * the License, or (at your option) any later version. 22 + * 23 + * This program is distributed in the hope that it will be useful, 24 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 25 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 26 + * GNU General Public License for more details. 27 + * 28 + * You should have received a copy of the GNU General Public License 29 + * along with this program; if not, write to the Free Software 30 + * Foundation, Inc., 59 Temple Place, Suite 330, Boston, 31 + * MA 02111-1307 USA 32 + */ 33 + 34 + #include <linux/kernel.h> 35 + #include <linux/of_platform.h> 36 + #include <linux/sysfs.h> 37 + #include <linux/cpu.h> 38 + #include <linux/suspend.h> 39 + #include <asm/dcr.h> 40 + #include <asm/dcr-native.h> 41 + #include <asm/machdep.h> 42 + 43 + #define CPM_ER 0 44 + #define CPM_FR 1 45 + #define CPM_SR 2 46 + 47 + #define CPM_IDLE_WAIT 0 48 + #define CPM_IDLE_DOZE 1 49 + 50 + struct cpm { 51 + dcr_host_t dcr_host; 52 + unsigned int dcr_offset[3]; 53 + unsigned int powersave_off; 54 + unsigned int unused; 55 + unsigned int idle_doze; 56 + unsigned int standby; 57 + unsigned int suspend; 58 + }; 59 + 60 + static struct cpm cpm; 61 + 62 + struct cpm_idle_mode { 63 + unsigned int enabled; 64 + const char *name; 65 + }; 66 + 67 + static struct cpm_idle_mode idle_mode[] = { 68 + [CPM_IDLE_WAIT] = { 1, "wait" }, /* default */ 69 + [CPM_IDLE_DOZE] = { 0, "doze" }, 70 + }; 71 + 72 + static unsigned int cpm_set(unsigned int cpm_reg, unsigned int mask) 73 + { 74 + unsigned int value; 75 + 76 + /* CPM controller supports 3 different types of sleep interface 77 + * known as class 1, 2 and 3. For class 1 units, they are 78 + * unconditionally put to sleep when the corresponding CPM bit is 79 + * set. For class 2 and 3 units this is not the case; if they can be 80 + * put to sleep, they will. Here we do not verify, we just 81 + * set them and expect them to eventually go off when they can. 
82 + */ 83 + value = dcr_read(cpm.dcr_host, cpm.dcr_offset[cpm_reg]); 84 + dcr_write(cpm.dcr_host, cpm.dcr_offset[cpm_reg], value | mask); 85 + 86 + /* return old state, to restore later if needed */ 87 + return value; 88 + } 89 + 90 + static void cpm_idle_wait(void) 91 + { 92 + unsigned long msr_save; 93 + 94 + /* save off initial state */ 95 + msr_save = mfmsr(); 96 + /* sync required when CPM0_ER[CPU] is set */ 97 + mb(); 98 + /* set wait state MSR */ 99 + mtmsr(msr_save|MSR_WE|MSR_EE|MSR_CE|MSR_DE); 100 + isync(); 101 + /* return to initial state */ 102 + mtmsr(msr_save); 103 + isync(); 104 + } 105 + 106 + static void cpm_idle_sleep(unsigned int mask) 107 + { 108 + unsigned int er_save; 109 + 110 + /* update CPM_ER state */ 111 + er_save = cpm_set(CPM_ER, mask); 112 + 113 + /* go to wait state so that CPM0_ER[CPU] can take effect */ 114 + cpm_idle_wait(); 115 + 116 + /* restore CPM_ER state */ 117 + dcr_write(cpm.dcr_host, cpm.dcr_offset[CPM_ER], er_save); 118 + } 119 + 120 + static void cpm_idle_doze(void) 121 + { 122 + cpm_idle_sleep(cpm.idle_doze); 123 + } 124 + 125 + static void cpm_idle_config(int mode) 126 + { 127 + int i; 128 + 129 + if (idle_mode[mode].enabled) 130 + return; 131 + 132 + for (i = 0; i < ARRAY_SIZE(idle_mode); i++) 133 + idle_mode[i].enabled = 0; 134 + 135 + idle_mode[mode].enabled = 1; 136 + } 137 + 138 + static ssize_t cpm_idle_show(struct kobject *kobj, 139 + struct kobj_attribute *attr, char *buf) 140 + { 141 + char *s = buf; 142 + int i; 143 + 144 + for (i = 0; i < ARRAY_SIZE(idle_mode); i++) { 145 + if (idle_mode[i].enabled) 146 + s += sprintf(s, "[%s] ", idle_mode[i].name); 147 + else 148 + s += sprintf(s, "%s ", idle_mode[i].name); 149 + } 150 + 151 + *(s-1) = '\n'; /* convert the last space to a newline */ 152 + 153 + return s - buf; 154 + } 155 + 156 + static ssize_t cpm_idle_store(struct kobject *kobj, 157 + struct kobj_attribute *attr, 158 + const char *buf, size_t n) 159 + { 160 + int i; 161 + char *p; 162 + int len; 163 + 164 + p = memchr(buf, '\n', n); 165 + len = p ? 
p - buf : n; 166 + 167 + for (i = 0; i < ARRAY_SIZE(idle_mode); i++) { 168 + if (strncmp(buf, idle_mode[i].name, len) == 0) { 169 + cpm_idle_config(i); 170 + return n; 171 + } 172 + } 173 + 174 + return -EINVAL; 175 + } 176 + 177 + static struct kobj_attribute cpm_idle_attr = 178 + __ATTR(idle, 0644, cpm_idle_show, cpm_idle_store); 179 + 180 + static void cpm_idle_config_sysfs(void) 181 + { 182 + struct sys_device *sys_dev; 183 + unsigned long ret; 184 + 185 + sys_dev = get_cpu_sysdev(0); 186 + 187 + ret = sysfs_create_file(&sys_dev->kobj, 188 + &cpm_idle_attr.attr); 189 + if (ret) 190 + printk(KERN_WARNING 191 + "cpm: failed to create idle sysfs entry\n"); 192 + } 193 + 194 + static void cpm_idle(void) 195 + { 196 + if (idle_mode[CPM_IDLE_DOZE].enabled) 197 + cpm_idle_doze(); 198 + else 199 + cpm_idle_wait(); 200 + } 201 + 202 + static int cpm_suspend_valid(suspend_state_t state) 203 + { 204 + switch (state) { 205 + case PM_SUSPEND_STANDBY: 206 + return !!cpm.standby; 207 + case PM_SUSPEND_MEM: 208 + return !!cpm.suspend; 209 + default: 210 + return 0; 211 + } 212 + } 213 + 214 + static void cpm_suspend_standby(unsigned int mask) 215 + { 216 + unsigned long tcr_save; 217 + 218 + /* disable decrement interrupt */ 219 + tcr_save = mfspr(SPRN_TCR); 220 + mtspr(SPRN_TCR, tcr_save & ~TCR_DIE); 221 + 222 + /* go to sleep state */ 223 + cpm_idle_sleep(mask); 224 + 225 + /* restore decrement interrupt */ 226 + mtspr(SPRN_TCR, tcr_save); 227 + } 228 + 229 + static int cpm_suspend_enter(suspend_state_t state) 230 + { 231 + switch (state) { 232 + case PM_SUSPEND_STANDBY: 233 + cpm_suspend_standby(cpm.standby); 234 + break; 235 + case PM_SUSPEND_MEM: 236 + cpm_suspend_standby(cpm.suspend); 237 + break; 238 + } 239 + 240 + return 0; 241 + } 242 + 243 + static struct platform_suspend_ops cpm_suspend_ops = { 244 + .valid = cpm_suspend_valid, 245 + .enter = cpm_suspend_enter, 246 + }; 247 + 248 + static int cpm_get_uint_property(struct device_node *np, 249 + const char *name) 250 + { 251 + int len; 252 + const unsigned int *prop = of_get_property(np, name, &len); 253 + 254 + if (prop == NULL || len < sizeof(u32)) 255 + return 0; 256 + 257 + return *prop; 258 + } 259 + 260 + static int __init cpm_init(void) 261 + { 262 + struct device_node *np; 263 + int dcr_base, dcr_len; 264 + int ret = 0; 265 + 266 + if (!cpm.powersave_off) { 267 + cpm_idle_config(CPM_IDLE_WAIT); 268 + ppc_md.power_save = &cpm_idle; 269 + } 270 + 271 + np = of_find_compatible_node(NULL, NULL, "ibm,cpm"); 272 + if (!np) { 273 + ret = -EINVAL; 274 + goto out; 275 + } 276 + 277 + dcr_base = dcr_resource_start(np, 0); 278 + dcr_len = dcr_resource_len(np, 0); 279 + 280 + if (dcr_base == 0 || dcr_len == 0) { 281 + printk(KERN_ERR "cpm: could not parse dcr property for %s\n", 282 + np->full_name); 283 + ret = -EINVAL; 284 + goto out; 285 + } 286 + 287 + cpm.dcr_host = dcr_map(np, dcr_base, dcr_len); 288 + 289 + if (!DCR_MAP_OK(cpm.dcr_host)) { 290 + printk(KERN_ERR "cpm: failed to map dcr property for %s\n", 291 + np->full_name); 292 + ret = -EINVAL; 293 + goto out; 294 + } 295 + 296 + /* All 4xx SoCs with a CPM controller have one of two 297 + * different orders for the CPM registers. Some have the 298 + * CPM registers in the following order (ER,FR,SR). The 299 + * others have them in the following order (SR,ER,FR). 
300 + */ 301 + 302 + if (cpm_get_uint_property(np, "er-offset") == 0) { 303 + cpm.dcr_offset[CPM_ER] = 0; 304 + cpm.dcr_offset[CPM_FR] = 1; 305 + cpm.dcr_offset[CPM_SR] = 2; 306 + } else { 307 + cpm.dcr_offset[CPM_ER] = 1; 308 + cpm.dcr_offset[CPM_FR] = 2; 309 + cpm.dcr_offset[CPM_SR] = 0; 310 + } 311 + 312 + /* Now let's see what IPs to turn off for the following modes */ 313 + 314 + cpm.unused = cpm_get_uint_property(np, "unused-units"); 315 + cpm.idle_doze = cpm_get_uint_property(np, "idle-doze"); 316 + cpm.standby = cpm_get_uint_property(np, "standby"); 317 + cpm.suspend = cpm_get_uint_property(np, "suspend"); 318 + 319 + /* If some IPs are unused let's turn them off now */ 320 + 321 + if (cpm.unused) { 322 + cpm_set(CPM_ER, cpm.unused); 323 + cpm_set(CPM_FR, cpm.unused); 324 + } 325 + 326 + /* Now let's export interfaces */ 327 + 328 + if (!cpm.powersave_off && cpm.idle_doze) 329 + cpm_idle_config_sysfs(); 330 + 331 + if (cpm.standby || cpm.suspend) 332 + suspend_set_ops(&cpm_suspend_ops); 333 + out: 334 + if (np) 335 + of_node_put(np); 336 + return ret; 337 + } 338 + 339 + late_initcall(cpm_init); 340 + 341 + static int __init cpm_powersave_off(char *arg) 342 + { 343 + cpm.powersave_off = 1; 344 + return 0; 345 + } 346 + __setup("powersave=off", cpm_powersave_off);
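Since cpm_idle_config_sysfs() attaches the "idle" attribute to cpu0's sysdev kobject, the mode should be switchable from userspace. A speculative example, assuming the attribute surfaces as /sys/devices/system/cpu/cpu0/idle:

    #include <stdio.h>

    int main(void)
    {
        /* Reading this file lists the modes with the active one in
         * brackets, e.g. "[wait] doze", per cpm_idle_show() above. */
        FILE *f = fopen("/sys/devices/system/cpu/cpu0/idle", "w");

        if (!f)
            return 1;
        fputs("doze\n", f);    /* cpm_idle_store() trims the newline */
        fclose(f);
        return 0;
    }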
+4 -4
arch/powerpc/sysdev/tsi108_dev.c
··· 84 memset(&tsi_eth_data, 0, sizeof(tsi_eth_data)); 85 86 ret = of_address_to_resource(np, 0, &r[0]); 87 - DBG("%s: name:start->end = %s:0x%lx-> 0x%lx\n", 88 - __func__,r[0].name, r[0].start, r[0].end); 89 if (ret) 90 goto err; 91 ··· 93 r[1].start = irq_of_parse_and_map(np, 0); 94 r[1].end = irq_of_parse_and_map(np, 0); 95 r[1].flags = IORESOURCE_IRQ; 96 - DBG("%s: name:start->end = %s:0x%lx-> 0x%lx\n", 97 - __func__,r[1].name, r[1].start, r[1].end); 98 99 tsi_eth_dev = 100 platform_device_register_simple("tsi-ethernet", i++, &r[0],
··· 84 memset(&tsi_eth_data, 0, sizeof(tsi_eth_data)); 85 86 ret = of_address_to_resource(np, 0, &r[0]); 87 + DBG("%s: name:start->end = %s:%pR\n", 88 + __func__, r[0].name, &r[0]); 89 if (ret) 90 goto err; 91 ··· 93 r[1].start = irq_of_parse_and_map(np, 0); 94 r[1].end = irq_of_parse_and_map(np, 0); 95 r[1].flags = IORESOURCE_IRQ; 96 + DBG("%s: name:start->end = %s:%pR\n", 97 + __func__, r[1].name, &r[1]); 98 99 tsi_eth_dev = 100 platform_device_register_simple("tsi-ethernet", i++, &r[0],
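%pR is the printk extension for struct resource (see the "Use printf extension %pR" commit in this merge): it takes a pointer to the resource and prints its range and type in one go, replacing the hand-rolled 0x%lx pairs. A minimal sketch with made-up addresses:

    #include <linux/ioport.h>
    #include <linux/kernel.h>

    static void example_pR(void)
    {
        struct resource r = {
            .start = 0xe0004000,
            .end   = 0xe0005fff,
            .flags = IORESOURCE_MEM,
        };

        /* Prints something like: mapped at [mem 0xe0004000-0xe0005fff] */
        printk(KERN_DEBUG "mapped at %pR\n", &r);
    }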
+1 -1
drivers/char/hvc_vio.c
··· 39 40 #include "hvc_console.h" 41 42 - char hvc_driver_name[] = "hvc_console"; 43 44 static struct vio_device_id hvc_driver_table[] __devinitdata = { 45 {"serial", "hvterm1"},
··· 39 40 #include "hvc_console.h" 41 42 + static const char hvc_driver_name[] = "hvc_console"; 43 44 static struct vio_device_id hvc_driver_table[] __devinitdata = { 45 {"serial", "hvterm1"},
+1 -1
drivers/dma/Kconfig
··· 109 110 config MPC512X_DMA 111 tristate "Freescale MPC512x built-in DMA engine support" 112 - depends on PPC_MPC512x 113 select DMA_ENGINE 114 ---help--- 115 Enable support for the Freescale MPC512x built-in DMA engine.
··· 109 110 config MPC512X_DMA 111 tristate "Freescale MPC512x built-in DMA engine support" 112 + depends on PPC_MPC512x || PPC_MPC831x 113 select DMA_ENGINE 114 ---help--- 115 Enable support for the Freescale MPC512x built-in DMA engine.
+118 -61
drivers/dma/mpc512x_dma.c
··· 1 /* 2 * Copyright (C) Freescale Semicondutor, Inc. 2007, 2008. 3 * Copyright (C) Semihalf 2009 4 * 5 * Written by Piotr Ziecik <kosmo@semihalf.com>. Hardware description 6 * (defines, structures and comments) was taken from MPC5121 DMA driver ··· 71 #define MPC_DMA_DMAES_SBE (1 << 1) 72 #define MPC_DMA_DMAES_DBE (1 << 0) 73 74 #define MPC_DMA_TSIZE_1 0x00 75 #define MPC_DMA_TSIZE_2 0x01 76 #define MPC_DMA_TSIZE_4 0x02 ··· 107 /* 0x30 */ 108 u32 dmahrsh; /* DMA hw request status high(ch63~32) */ 109 u32 dmahrsl; /* DMA hardware request status low(ch31~0) */ 110 - u32 dmaihsa; /* DMA interrupt high select AXE(ch63~32) */ 111 u32 dmailsa; /* DMA interrupt low select AXE(ch31~0) */ 112 /* 0x40 ~ 0xff */ 113 u32 reserve0[48]; /* Reserved */ ··· 201 struct mpc_dma_regs __iomem *regs; 202 struct mpc_dma_tcd __iomem *tcd; 203 int irq; 204 uint error_status; 205 206 /* Lock for error_status field in this structure */ 207 spinlock_t error_status_lock; ··· 260 prev = mdesc; 261 } 262 263 - prev->tcd->start = 0; 264 prev->tcd->int_maj = 1; 265 266 /* Send first descriptor in chain into hardware */ 267 memcpy_toio(&mdma->tcd[cid], first->tcd, sizeof(struct mpc_dma_tcd)); 268 out_8(&mdma->regs->dmassrt, cid); 269 } 270 ··· 283 mchan = &mdma->channels[ch + off]; 284 285 spin_lock(&mchan->lock); 286 287 /* Check error status */ 288 if (es & (1 << ch)) ··· 315 spin_unlock(&mdma->error_status_lock); 316 317 /* Handle interrupt on each channel */ 318 - mpc_dma_irq_process(mdma, in_be32(&mdma->regs->dmainth), 319 in_be32(&mdma->regs->dmaerrh), 32); 320 mpc_dma_irq_process(mdma, in_be32(&mdma->regs->dmaintl), 321 in_be32(&mdma->regs->dmaerrl), 0); 322 - 323 - /* Ack interrupt on all channels */ 324 - out_be32(&mdma->regs->dmainth, 0xFFFFFFFF); 325 - out_be32(&mdma->regs->dmaintl, 0xFFFFFFFF); 326 - out_be32(&mdma->regs->dmaerrh, 0xFFFFFFFF); 327 - out_be32(&mdma->regs->dmaerrl, 0xFFFFFFFF); 328 329 /* Schedule tasklet */ 330 tasklet_schedule(&mdma->tasklet); ··· 328 return IRQ_HANDLED; 329 } 330 331 - /* DMA Tasklet */ 332 - static void mpc_dma_tasklet(unsigned long data) 333 { 334 - struct mpc_dma *mdma = (void *)data; 335 dma_cookie_t last_cookie = 0; 336 struct mpc_dma_chan *mchan; 337 struct mpc_dma_desc *mdesc; 338 struct dma_async_tx_descriptor *desc; 339 unsigned long flags; 340 LIST_HEAD(list); 341 - uint es; 342 int i; 343 344 spin_lock_irqsave(&mdma->error_status_lock, flags); 345 es = mdma->error_status; ··· 415 dev_err(mdma->dma.dev, "- Destination Bus Error\n"); 416 } 417 418 - for (i = 0; i < mdma->dma.chancnt; i++) { 419 - mchan = &mdma->channels[i]; 420 - 421 - /* Get all completed descriptors */ 422 - spin_lock_irqsave(&mchan->lock, flags); 423 - if (!list_empty(&mchan->completed)) 424 - list_splice_tail_init(&mchan->completed, &list); 425 - spin_unlock_irqrestore(&mchan->lock, flags); 426 - 427 - if (list_empty(&list)) 428 - continue; 429 - 430 - /* Execute callbacks and run dependencies */ 431 - list_for_each_entry(mdesc, &list, node) { 432 - desc = &mdesc->desc; 433 - 434 - if (desc->callback) 435 - desc->callback(desc->callback_param); 436 - 437 - last_cookie = desc->cookie; 438 - dma_run_dependencies(desc); 439 - } 440 - 441 - /* Free descriptors */ 442 - spin_lock_irqsave(&mchan->lock, flags); 443 - list_splice_tail_init(&list, &mchan->free); 444 - mchan->completed_cookie = last_cookie; 445 - spin_unlock_irqrestore(&mchan->lock, flags); 446 - } 447 } 448 449 /* Submit descriptor to hardware */ ··· 580 mpc_dma_prep_memcpy(struct dma_chan *chan, dma_addr_t dst, dma_addr_t src, 581 
size_t len, unsigned long flags) 582 { 583 struct mpc_dma_chan *mchan = dma_chan_to_mpc_dma_chan(chan); 584 struct mpc_dma_desc *mdesc = NULL; 585 struct mpc_dma_tcd *tcd; ··· 595 } 596 spin_unlock_irqrestore(&mchan->lock, iflags); 597 598 - if (!mdesc) 599 return NULL; 600 601 mdesc->error = 0; 602 tcd = mdesc->tcd; ··· 612 tcd->dsize = MPC_DMA_TSIZE_32; 613 tcd->soff = 32; 614 tcd->doff = 32; 615 - } else if (IS_ALIGNED(src | dst | len, 16)) { 616 tcd->ssize = MPC_DMA_TSIZE_16; 617 tcd->dsize = MPC_DMA_TSIZE_16; 618 tcd->soff = 16; ··· 673 return -EINVAL; 674 } 675 676 retval = of_address_to_resource(dn, 0, &res); 677 if (retval) { 678 dev_err(dev, "Error parsing memory region!\n"); ··· 712 return -EINVAL; 713 } 714 715 spin_lock_init(&mdma->error_status_lock); 716 717 dma = &mdma->dma; 718 dma->dev = dev; 719 - dma->chancnt = MPC_DMA_CHANNELS; 720 dma->device_alloc_chan_resources = mpc_dma_alloc_chan_resources; 721 dma->device_free_chan_resources = mpc_dma_free_chan_resources; 722 dma->device_issue_pending = mpc_dma_issue_pending; ··· 764 * - Round-robin group arbitration, 765 * - Round-robin channel arbitration. 766 */ 767 - out_be32(&mdma->regs->dmacr, MPC_DMA_DMACR_EDCG | 768 - MPC_DMA_DMACR_ERGA | MPC_DMA_DMACR_ERCA); 769 770 - /* Disable hardware DMA requests */ 771 - out_be32(&mdma->regs->dmaerqh, 0); 772 - out_be32(&mdma->regs->dmaerql, 0); 773 774 - /* Disable error interrupts */ 775 - out_be32(&mdma->regs->dmaeeih, 0); 776 - out_be32(&mdma->regs->dmaeeil, 0); 777 778 - /* Clear interrupts status */ 779 - out_be32(&mdma->regs->dmainth, 0xFFFFFFFF); 780 - out_be32(&mdma->regs->dmaintl, 0xFFFFFFFF); 781 - out_be32(&mdma->regs->dmaerrh, 0xFFFFFFFF); 782 - out_be32(&mdma->regs->dmaerrl, 0xFFFFFFFF); 783 784 - /* Route interrupts to IPIC */ 785 - out_be32(&mdma->regs->dmaihsa, 0); 786 - out_be32(&mdma->regs->dmailsa, 0); 787 788 /* Register DMA engine */ 789 dev_set_drvdata(dev, mdma);
··· 1 /* 2 * Copyright (C) Freescale Semicondutor, Inc. 2007, 2008. 3 * Copyright (C) Semihalf 2009 4 + * Copyright (C) Ilya Yanok, Emcraft Systems 2010 5 * 6 * Written by Piotr Ziecik <kosmo@semihalf.com>. Hardware description 7 * (defines, structures and comments) was taken from MPC5121 DMA driver ··· 70 #define MPC_DMA_DMAES_SBE (1 << 1) 71 #define MPC_DMA_DMAES_DBE (1 << 0) 72 73 + #define MPC_DMA_DMAGPOR_SNOOP_ENABLE (1 << 6) 74 + 75 #define MPC_DMA_TSIZE_1 0x00 76 #define MPC_DMA_TSIZE_2 0x01 77 #define MPC_DMA_TSIZE_4 0x02 ··· 104 /* 0x30 */ 105 u32 dmahrsh; /* DMA hw request status high(ch63~32) */ 106 u32 dmahrsl; /* DMA hardware request status low(ch31~0) */ 107 + union { 108 + u32 dmaihsa; /* DMA interrupt high select AXE(ch63~32) */ 109 + u32 dmagpor; /* (General purpose register on MPC8308) */ 110 + }; 111 u32 dmailsa; /* DMA interrupt low select AXE(ch31~0) */ 112 /* 0x40 ~ 0xff */ 113 u32 reserve0[48]; /* Reserved */ ··· 195 struct mpc_dma_regs __iomem *regs; 196 struct mpc_dma_tcd __iomem *tcd; 197 int irq; 198 + int irq2; 199 uint error_status; 200 + int is_mpc8308; 201 202 /* Lock for error_status field in this structure */ 203 spinlock_t error_status_lock; ··· 252 prev = mdesc; 253 } 254 255 prev->tcd->int_maj = 1; 256 257 /* Send first descriptor in chain into hardware */ 258 memcpy_toio(&mdma->tcd[cid], first->tcd, sizeof(struct mpc_dma_tcd)); 259 + 260 + if (first != prev) 261 + mdma->tcd[cid].e_sg = 1; 262 out_8(&mdma->regs->dmassrt, cid); 263 } 264 ··· 273 mchan = &mdma->channels[ch + off]; 274 275 spin_lock(&mchan->lock); 276 + 277 + out_8(&mdma->regs->dmacint, ch + off); 278 + out_8(&mdma->regs->dmacerr, ch + off); 279 280 /* Check error status */ 281 if (es & (1 << ch)) ··· 302 spin_unlock(&mdma->error_status_lock); 303 304 /* Handle interrupt on each channel */ 305 + if (mdma->dma.chancnt > 32) { 306 + mpc_dma_irq_process(mdma, in_be32(&mdma->regs->dmainth), 307 in_be32(&mdma->regs->dmaerrh), 32); 308 + } 309 mpc_dma_irq_process(mdma, in_be32(&mdma->regs->dmaintl), 310 in_be32(&mdma->regs->dmaerrl), 0); 311 312 /* Schedule tasklet */ 313 tasklet_schedule(&mdma->tasklet); ··· 319 return IRQ_HANDLED; 320 } 321 322 + /* process completed descriptors */ 323 + static void mpc_dma_process_completed(struct mpc_dma *mdma) 324 { 325 dma_cookie_t last_cookie = 0; 326 struct mpc_dma_chan *mchan; 327 struct mpc_dma_desc *mdesc; 328 struct dma_async_tx_descriptor *desc; 329 unsigned long flags; 330 LIST_HEAD(list); 331 int i; 332 + 333 + for (i = 0; i < mdma->dma.chancnt; i++) { 334 + mchan = &mdma->channels[i]; 335 + 336 + /* Get all completed descriptors */ 337 + spin_lock_irqsave(&mchan->lock, flags); 338 + if (!list_empty(&mchan->completed)) 339 + list_splice_tail_init(&mchan->completed, &list); 340 + spin_unlock_irqrestore(&mchan->lock, flags); 341 + 342 + if (list_empty(&list)) 343 + continue; 344 + 345 + /* Execute callbacks and run dependencies */ 346 + list_for_each_entry(mdesc, &list, node) { 347 + desc = &mdesc->desc; 348 + 349 + if (desc->callback) 350 + desc->callback(desc->callback_param); 351 + 352 + last_cookie = desc->cookie; 353 + dma_run_dependencies(desc); 354 + } 355 + 356 + /* Free descriptors */ 357 + spin_lock_irqsave(&mchan->lock, flags); 358 + list_splice_tail_init(&list, &mchan->free); 359 + mchan->completed_cookie = last_cookie; 360 + spin_unlock_irqrestore(&mchan->lock, flags); 361 + } 362 + } 363 + 364 + /* DMA Tasklet */ 365 + static void mpc_dma_tasklet(unsigned long data) 366 + { 367 + struct mpc_dma *mdma = (void *)data; 368 + unsigned long 
flags; 369 + uint es; 370 371 spin_lock_irqsave(&mdma->error_status_lock, flags); 372 es = mdma->error_status; ··· 370 dev_err(mdma->dma.dev, "- Destination Bus Error\n"); 371 } 372 373 + mpc_dma_process_completed(mdma); 374 } 375 376 /* Submit descriptor to hardware */ ··· 563 mpc_dma_prep_memcpy(struct dma_chan *chan, dma_addr_t dst, dma_addr_t src, 564 size_t len, unsigned long flags) 565 { 566 + struct mpc_dma *mdma = dma_chan_to_mpc_dma(chan); 567 struct mpc_dma_chan *mchan = dma_chan_to_mpc_dma_chan(chan); 568 struct mpc_dma_desc *mdesc = NULL; 569 struct mpc_dma_tcd *tcd; ··· 577 } 578 spin_unlock_irqrestore(&mchan->lock, iflags); 579 580 + if (!mdesc) { 581 + /* try to free completed descriptors */ 582 + mpc_dma_process_completed(mdma); 583 return NULL; 584 + } 585 586 mdesc->error = 0; 587 tcd = mdesc->tcd; ··· 591 tcd->dsize = MPC_DMA_TSIZE_32; 592 tcd->soff = 32; 593 tcd->doff = 32; 594 + } else if (!mdma->is_mpc8308 && IS_ALIGNED(src | dst | len, 16)) { 595 + /* MPC8308 doesn't support 16 byte transfers */ 596 tcd->ssize = MPC_DMA_TSIZE_16; 597 tcd->dsize = MPC_DMA_TSIZE_16; 598 tcd->soff = 16; ··· 651 return -EINVAL; 652 } 653 654 + if (of_device_is_compatible(dn, "fsl,mpc8308-dma")) { 655 + mdma->is_mpc8308 = 1; 656 + mdma->irq2 = irq_of_parse_and_map(dn, 1); 657 + if (mdma->irq2 == NO_IRQ) { 658 + dev_err(dev, "Error mapping IRQ!\n"); 659 + return -EINVAL; 660 + } 661 + } 662 + 663 retval = of_address_to_resource(dn, 0, &res); 664 if (retval) { 665 dev_err(dev, "Error parsing memory region!\n"); ··· 681 return -EINVAL; 682 } 683 684 + if (mdma->is_mpc8308) { 685 + retval = devm_request_irq(dev, mdma->irq2, &mpc_dma_irq, 0, 686 + DRV_NAME, mdma); 687 + if (retval) { 688 + dev_err(dev, "Error requesting IRQ2!\n"); 689 + return -EINVAL; 690 + } 691 + } 692 + 693 spin_lock_init(&mdma->error_status_lock); 694 695 dma = &mdma->dma; 696 dma->dev = dev; 697 + if (!mdma->is_mpc8308) 698 + dma->chancnt = MPC_DMA_CHANNELS; 699 + else 700 + dma->chancnt = 16; /* MPC8308 DMA has only 16 channels */ 701 dma->device_alloc_chan_resources = mpc_dma_alloc_chan_resources; 702 dma->device_free_chan_resources = mpc_dma_free_chan_resources; 703 dma->device_issue_pending = mpc_dma_issue_pending; ··· 721 * - Round-robin group arbitration, 722 * - Round-robin channel arbitration. 
723 */ 724 + if (!mdma->is_mpc8308) { 725 + out_be32(&mdma->regs->dmacr, MPC_DMA_DMACR_EDCG | 726 + MPC_DMA_DMACR_ERGA | MPC_DMA_DMACR_ERCA); 727 728 + /* Disable hardware DMA requests */ 729 + out_be32(&mdma->regs->dmaerqh, 0); 730 + out_be32(&mdma->regs->dmaerql, 0); 731 732 + /* Disable error interrupts */ 733 + out_be32(&mdma->regs->dmaeeih, 0); 734 + out_be32(&mdma->regs->dmaeeil, 0); 735 736 + /* Clear interrupts status */ 737 + out_be32(&mdma->regs->dmainth, 0xFFFFFFFF); 738 + out_be32(&mdma->regs->dmaintl, 0xFFFFFFFF); 739 + out_be32(&mdma->regs->dmaerrh, 0xFFFFFFFF); 740 + out_be32(&mdma->regs->dmaerrl, 0xFFFFFFFF); 741 742 + /* Route interrupts to IPIC */ 743 + out_be32(&mdma->regs->dmaihsa, 0); 744 + out_be32(&mdma->regs->dmailsa, 0); 745 + } else { 746 + /* MPC8308 has 16 channels and lacks some registers */ 747 + out_be32(&mdma->regs->dmacr, MPC_DMA_DMACR_ERCA); 748 + 749 + /* enable snooping */ 750 + out_be32(&mdma->regs->dmagpor, MPC_DMA_DMAGPOR_SNOOP_ENABLE); 751 + /* Disable error interrupts */ 752 + out_be32(&mdma->regs->dmaeeil, 0); 753 + 754 + /* Clear interrupts status */ 755 + out_be32(&mdma->regs->dmaintl, 0xFFFF); 756 + out_be32(&mdma->regs->dmaerrl, 0xFFFF); 757 + } 758 759 /* Register DMA engine */ 760 dev_set_drvdata(dev, mdma);
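For reference, a rough sketch of exercising this engine through the generic dmaengine memcpy API; the function and its dma_addr_t arguments are hypothetical (assume they came from a prior dma_map_single()), and completion handling is elided:

    #include <linux/dmaengine.h>

    static void example_dma_memcpy(dma_addr_t dst, dma_addr_t src, size_t len)
    {
        dma_cap_mask_t mask;
        struct dma_chan *chan;
        struct dma_async_tx_descriptor *tx;

        dma_cap_zero(mask);
        dma_cap_set(DMA_MEMCPY, mask);

        /* Any channel advertising memcpy will do, e.g. one of the above */
        chan = dma_request_channel(mask, NULL, NULL);
        if (!chan)
            return;

        tx = chan->device->device_prep_dma_memcpy(chan, dst, src, len, 0);
        if (tx) {
            tx->tx_submit(tx);    /* returns a dma_cookie_t */
            chan->device->device_issue_pending(chan);
        }

        /* A real user would wait for completion before releasing */
        dma_release_channel(chan);
    }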
+3 -4
drivers/macintosh/macio_asic.c
··· 387 /* Set the DMA ops to the ones from the PCI device, this could be 388 * fishy if we didn't know that on PowerMac it's always direct ops 389 * or iommu ops that will work fine 390 */ 391 - dev->ofdev.dev.archdata.dma_ops = 392 - chip->lbus.pdev->dev.archdata.dma_ops; 393 - dev->ofdev.dev.archdata.dma_data = 394 - chip->lbus.pdev->dev.archdata.dma_data; 395 #endif /* CONFIG_PCI */ 396 397 #ifdef DEBUG
··· 387 /* Set the DMA ops to the ones from the PCI device, this could be 388 * fishy if we didn't know that on PowerMac it's always direct ops 389 * or iommu ops that will work fine 390 + * 391 + * To get all the fields, copy all archdata 392 */ 393 + dev->ofdev.dev.archdata = chip->lbus.pdev->dev.archdata; 394 #endif /* CONFIG_PCI */ 395 396 #ifdef DEBUG
+5 -25
drivers/macintosh/therm_pm72.c
··· 2213 static int fcu_of_probe(struct platform_device* dev, const struct of_device_id *match) 2214 { 2215 state = state_detached; 2216 2217 /* Lookup the fans in the device tree */ 2218 fcu_lookup_fans(dev->dev.of_node); ··· 2238 }, 2239 {}, 2240 }; 2241 2242 static struct of_platform_driver fcu_of_platform_driver = 2243 { ··· 2256 */ 2257 static int __init therm_pm72_init(void) 2258 { 2259 - struct device_node *np; 2260 - 2261 rackmac = of_machine_is_compatible("RackMac3,1"); 2262 2263 if (!of_machine_is_compatible("PowerMac7,2") && ··· 2263 !rackmac) 2264 return -ENODEV; 2265 2266 - printk(KERN_INFO "PowerMac G5 Thermal control driver %s\n", VERSION); 2267 - 2268 - np = of_find_node_by_type(NULL, "fcu"); 2269 - if (np == NULL) { 2270 - /* Some machines have strangely broken device-tree */ 2271 - np = of_find_node_by_path("/u3@0,f8000000/i2c@f8001000/fan@15e"); 2272 - if (np == NULL) { 2273 - printk(KERN_ERR "Can't find FCU in device-tree !\n"); 2274 - return -ENODEV; 2275 - } 2276 - } 2277 - of_dev = of_platform_device_create(np, "temperature", NULL); 2278 - if (of_dev == NULL) { 2279 - printk(KERN_ERR "Can't register FCU platform device !\n"); 2280 - return -ENODEV; 2281 - } 2282 - 2283 - of_register_platform_driver(&fcu_of_platform_driver); 2284 - 2285 - return 0; 2286 } 2287 2288 static void __exit therm_pm72_exit(void) 2289 { 2290 of_unregister_platform_driver(&fcu_of_platform_driver); 2291 - 2292 - if (of_dev) 2293 - of_device_unregister(of_dev); 2294 } 2295 2296 module_init(therm_pm72_init);
··· 2213 static int fcu_of_probe(struct platform_device* dev, const struct of_device_id *match) 2214 { 2215 state = state_detached; 2216 + of_dev = dev; 2217 + 2218 + dev_info(&dev->dev, "PowerMac G5 Thermal control driver %s\n", VERSION); 2219 2220 /* Lookup the fans in the device tree */ 2221 fcu_lookup_fans(dev->dev.of_node); ··· 2235 }, 2236 {}, 2237 }; 2238 + MODULE_DEVICE_TABLE(of, fcu_match); 2239 2240 static struct of_platform_driver fcu_of_platform_driver = 2241 { ··· 2252 */ 2253 static int __init therm_pm72_init(void) 2254 { 2255 rackmac = of_machine_is_compatible("RackMac3,1"); 2256 2257 if (!of_machine_is_compatible("PowerMac7,2") && ··· 2261 !rackmac) 2262 return -ENODEV; 2263 2264 + return of_register_platform_driver(&fcu_of_platform_driver); 2265 } 2266 2267 static void __exit therm_pm72_exit(void) 2268 { 2269 of_unregister_platform_driver(&fcu_of_platform_driver); 2270 } 2271 2272 module_init(therm_pm72_init);
+1 -1
drivers/ps3/Makefile
··· 1 obj-$(CONFIG_PS3_VUART) += ps3-vuart.o 2 obj-$(CONFIG_PS3_PS3AV) += ps3av_mod.o 3 - ps3av_mod-objs += ps3av.o ps3av_cmd.o 4 obj-$(CONFIG_PPC_PS3) += sys-manager-core.o 5 obj-$(CONFIG_PS3_SYS_MANAGER) += ps3-sys-manager.o 6 obj-$(CONFIG_PS3_STORAGE) += ps3stor_lib.o
··· 1 obj-$(CONFIG_PS3_VUART) += ps3-vuart.o 2 obj-$(CONFIG_PS3_PS3AV) += ps3av_mod.o 3 + ps3av_mod-y := ps3av.o ps3av_cmd.o 4 obj-$(CONFIG_PPC_PS3) += sys-manager-core.o 5 obj-$(CONFIG_PS3_SYS_MANAGER) += ps3-sys-manager.o 6 obj-$(CONFIG_PS3_STORAGE) += ps3stor_lib.o
+2 -1
drivers/rtc/rtc-cmos.c
··· 687 #if defined(CONFIG_ATARI) 688 address_space = 64; 689 #elif defined(__i386__) || defined(__x86_64__) || defined(__arm__) \ 690 - || defined(__sparc__) || defined(__mips__) 691 address_space = 128; 692 #else 693 #warning Assuming 128 bytes of RTC+NVRAM address space, not 64 bytes.
··· 687 #if defined(CONFIG_ATARI) 688 address_space = 64; 689 #elif defined(__i386__) || defined(__x86_64__) || defined(__arm__) \ 690 + || defined(__sparc__) || defined(__mips__) \ 691 + || defined(__powerpc__) 692 address_space = 128; 693 #else 694 #warning Assuming 128 bytes of RTC+NVRAM address space, not 64 bytes.