Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'powerpc-5.19-1' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux

Pull powerpc updates from Michael Ellerman:

- Convert to the generic mmap support (ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT)

- Add support for outline-only KASAN with 64-bit Radix MMU (P9 or later)

- Increase SIGSTKSZ and MINSIGSTKSZ and add support for AT_MINSIGSTKSZ

- Enable the DAWR (Data Address Watchpoint) on POWER9 DD2.3 or later

- Drop support for system call instruction emulation

- Many other small features and fixes

Thanks to Alexey Kardashevskiy, Alistair Popple, Andy Shevchenko, Bagas
Sanjaya, Bjorn Helgaas, Bo Liu, Chen Huang, Christophe Leroy, Colin Ian
King, Daniel Axtens, Dwaipayan Ray, Fabiano Rosas, Finn Thain, Frank
Rowand, Fuqian Huang, Guilherme G. Piccoli, Hangyu Hua, Haowen Bai,
Haren Myneni, Hari Bathini, He Ying, Jason Wang, Jiapeng Chong, Jing
Yangyang, Joel Stanley, Julia Lawall, Kajol Jain, Kevin Hao, Krzysztof
Kozlowski, Laurent Dufour, Lv Ruyi, Madhavan Srinivasan, Magali Lemes,
Miaoqian Lin, Minghao Chi, Nathan Chancellor, Naveen N. Rao, Nicholas
Piggin, Oliver O'Halloran, Oscar Salvador, Pali Rohár, Paul Mackerras,
Peng Wu, Qing Wang, Randy Dunlap, Reza Arbab, Russell Currey, Sohaib
Mohamed, Vaibhav Jain, Vasant Hegde, Wang Qing, Wang Wensheng, Xiang
wangx, Xiaomeng Tong, Xu Wang, Yang Guang, Yang Li, Ye Bin, YueHaibing,
Yu Kuai, Zheng Bin, Zou Wei, and Zucheng Zheng.

* tag 'powerpc-5.19-1' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux: (200 commits)
powerpc/64: Include cache.h directly in paca.h
powerpc/64s: Only set HAVE_ARCH_UNMAPPED_AREA when CONFIG_PPC_64S_HASH_MMU is set
powerpc/xics: Include missing header
powerpc/powernv/pci: Drop VF MPS fixup
powerpc/fsl_book3e: Don't set rodata RO too early
powerpc/microwatt: Add mmu bits to device tree
powerpc/powernv/flash: Check OPAL flash calls exist before using
powerpc/powermac: constify device_node in of_irq_parse_oldworld()
powerpc/powermac: add missing g5_phy_disable_cpu1() declaration
selftests/powerpc/pmu: fix spelling mistake "mis-match" -> "mismatch"
powerpc: Enable the DAWR on POWER9 DD2.3 and above
powerpc/64s: Add CPU_FTRS_POWER10 to ALWAYS mask
powerpc/64s: Add CPU_FTRS_POWER9_DD2_2 to CPU_FTRS_ALWAYS mask
powerpc: Fix all occurences of "the the"
selftests/powerpc/pmu/ebb: remove fixed_instruction.S
powerpc/platforms/83xx: Use of_device_get_match_data()
powerpc/eeh: Drop redundant spinlock initialization
powerpc/iommu: Add missing of_node_put in iommu_init_early_dart
powerpc/pseries/vas: Call misc_deregister if sysfs init fails
powerpc/papr_scm: Fix leaking nvdimm_events_map elements
...

+3407 -4600
+2 -2
Documentation/ABI/testing/sysfs-class-cxl
··· 103 103 Date: September 2014 104 104 Contact: linuxppc-dev@lists.ozlabs.org 105 105 Description: read only 106 - Decimal value of the the lowest version of the userspace API 107 - this this kernel supports. 106 + Decimal value of the lowest version of the userspace API 107 + this kernel supports. 108 108 Users: https://github.com/ibm-capi/libcxl 109 109 110 110
-20
Documentation/devicetree/bindings/powerpc/fsl/cache_sram.txt
··· 1 - * Freescale PQ3 and QorIQ based Cache SRAM 2 - 3 - Freescale's mpc85xx and some QorIQ platforms provide an 4 - option of configuring a part of (or full) cache memory 5 - as SRAM. This cache SRAM representation in the device 6 - tree should be done as under:- 7 - 8 - Required properties: 9 - 10 - - compatible : should be "fsl,p2020-cache-sram" 11 - - fsl,cache-sram-ctlr-handle : points to the L2 controller 12 - - reg : offset and length of the cache-sram. 13 - 14 - Example: 15 - 16 - cache-sram@fff00000 { 17 - fsl,cache-sram-ctlr-handle = <&L2>; 18 - reg = <0 0xfff00000 0 0x10000>; 19 - compatible = "fsl,p2020-cache-sram"; 20 - };
+16 -8
Documentation/powerpc/dawr-power9.rst
··· 2 2 DAWR issues on POWER9 3 3 ===================== 4 4 5 - On POWER9 the Data Address Watchpoint Register (DAWR) can cause a checkstop 6 - if it points to cache inhibited (CI) memory. Currently Linux has no way to 7 - distinguish CI memory when configuring the DAWR, so (for now) the DAWR is 8 - disabled by this commit:: 5 + On older POWER9 processors, the Data Address Watchpoint Register (DAWR) can 6 + cause a checkstop if it points to cache inhibited (CI) memory. Currently Linux 7 + has no way to distinguish CI memory when configuring the DAWR, so on affected 8 + systems, the DAWR is disabled. 9 9 10 - commit 9654153158d3e0684a1bdb76dbababdb7111d5a0 11 - Author: Michael Neuling <mikey@neuling.org> 12 - Date: Tue Mar 27 15:37:24 2018 +1100 13 - powerpc: Disable DAWR in the base POWER9 CPU features 10 + Affected processor revisions 11 + ============================ 12 + 13 + This issue is only present on processors prior to v2.3. The revision can be 14 + found in /proc/cpuinfo:: 15 + 16 + processor : 0 17 + cpu : POWER9, altivec supported 18 + clock : 3800.000000MHz 19 + revision : 2.3 (pvr 004e 1203) 20 + 21 + On a system with the issue, the DAWR is disabled as detailed below. 14 22 15 23 Technical Details: 16 24 ==================
+58
Documentation/powerpc/kasan.txt
··· 1 + KASAN is supported on powerpc on 32-bit and Radix 64-bit only. 2 + 3 + 32 bit support 4 + ============== 5 + 6 + KASAN is supported on both hash and nohash MMUs on 32-bit. 7 + 8 + The shadow area sits at the top of the kernel virtual memory space above the 9 + fixmap area and occupies one eighth of the total kernel virtual memory space. 10 + 11 + Instrumentation of the vmalloc area is optional, unless built with modules, 12 + in which case it is required. 13 + 14 + 64 bit support 15 + ============== 16 + 17 + Currently, only the radix MMU is supported. There have been versions for hash 18 + and Book3E processors floating around on the mailing list, but nothing has been 19 + merged. 20 + 21 + KASAN support on Book3S is a bit tricky to get right: 22 + 23 + - It would be good to support inline instrumentation so as to be able to catch 24 + stack issues that cannot be caught with outline mode. 25 + 26 + - Inline instrumentation requires a fixed offset. 27 + 28 + - Book3S runs code with translations off ("real mode") during boot, including a 29 + lot of generic device-tree parsing code which is used to determine MMU 30 + features. 31 + 32 + - Some code - most notably a lot of KVM code - also runs with translations off 33 + after boot. 34 + 35 + - Therefore any offset has to point to memory that is valid with 36 + translations on or off. 37 + 38 + One approach is just to give up on inline instrumentation. This way boot-time 39 + checks can be delayed until after the MMU is set is up, and we can just not 40 + instrument any code that runs with translations off after booting. This is the 41 + current approach. 42 + 43 + To avoid this limitiation, the KASAN shadow would have to be placed inside the 44 + linear mapping, using the same high-bits trick we use for the rest of the linear 45 + mapping. This is tricky: 46 + 47 + - We'd like to place it near the start of physical memory. In theory we can do 48 + this at run-time based on how much physical memory we have, but this requires 49 + being able to arbitrarily relocate the kernel, which is basically the tricky 50 + part of KASLR. Not being game to implement both tricky things at once, this 51 + is hopefully something we can revisit once we get KASLR for Book3S. 52 + 53 + - Alternatively, we can place the shadow at the _end_ of memory, but this 54 + requires knowing how much contiguous physical memory a system has _at compile 55 + time_. This is a big hammer, and has some unfortunate consequences: inablity 56 + to handle discontiguous physical memory, total failure to boot on machines 57 + with less memory than specified, and that machines with more memory than 58 + specified can't use it. This was deemed unacceptable.
-2
arch/Kconfig
··· 1019 1019 depends on !IA64_PAGE_SIZE_64KB 1020 1020 depends on !PAGE_SIZE_64KB 1021 1021 depends on !PARISC_PAGE_SIZE_64KB 1022 - depends on !PPC_64K_PAGES 1023 1022 depends on PAGE_SIZE_LESS_THAN_256KB 1024 1023 1025 1024 config PAGE_SIZE_LESS_THAN_256KB 1026 1025 def_bool y 1027 - depends on !PPC_256K_PAGES 1028 1026 depends on !PAGE_SIZE_256KB 1029 1027 1030 1028 # This allows to use a set of generic functions to determine mmap base
+2 -2
arch/arm64/include/asm/processor.h
··· 92 92 #endif /* CONFIG_COMPAT */ 93 93 94 94 #ifndef CONFIG_ARM64_FORCE_52BIT 95 - #define arch_get_mmap_end(addr) ((addr > DEFAULT_MAP_WINDOW) ? TASK_SIZE :\ 96 - DEFAULT_MAP_WINDOW) 95 + #define arch_get_mmap_end(addr, len, flags) \ 96 + (((addr) > DEFAULT_MAP_WINDOW) ? TASK_SIZE : DEFAULT_MAP_WINDOW) 97 97 98 98 #define arch_get_mmap_base(addr, base) ((addr > DEFAULT_MAP_WINDOW) ? \ 99 99 base + TASK_SIZE - DEFAULT_MAP_WINDOW :\
+22 -3
arch/powerpc/Kconfig
··· 109 109 # Please keep this list sorted alphabetically. 110 110 # 111 111 select ARCH_32BIT_OFF_T if PPC32 112 + select ARCH_DISABLE_KASAN_INLINE if PPC_RADIX_MMU 112 113 select ARCH_ENABLE_MEMORY_HOTPLUG 113 114 select ARCH_ENABLE_MEMORY_HOTREMOVE 114 115 select ARCH_HAS_COPY_MC if PPC64 ··· 119 118 select ARCH_HAS_DEBUG_WX if STRICT_KERNEL_RWX 120 119 select ARCH_HAS_DEVMEM_IS_ALLOWED 121 120 select ARCH_HAS_DMA_MAP_DIRECT if PPC_PSERIES 122 - select ARCH_HAS_ELF_RANDOMIZE 123 121 select ARCH_HAS_FORTIFY_SOURCE 124 122 select ARCH_HAS_GCOV_PROFILE_ALL 125 123 select ARCH_HAS_HUGEPD if HUGETLB_PAGE ··· 155 155 select ARCH_USE_MEMTEST 156 156 select ARCH_USE_QUEUED_RWLOCKS if PPC_QUEUED_SPINLOCKS 157 157 select ARCH_USE_QUEUED_SPINLOCKS if PPC_QUEUED_SPINLOCKS 158 + select ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT 158 159 select ARCH_WANT_IPC_PARSE_VERSION 159 160 select ARCH_WANT_IRQS_OFF_ACTIVATE_MM 160 161 select ARCH_WANT_LD_ORPHAN_WARN 161 162 select ARCH_WANTS_MODULES_DATA_IN_VMALLOC if PPC_BOOK3S_32 || PPC_8xx 163 + select ARCH_WANTS_NO_INSTR 162 164 select ARCH_WEAK_RELEASE_ACQUIRE 163 165 select BINFMT_ELF 164 166 select BUILDTIME_TABLE_SORT ··· 192 190 select HAVE_ARCH_JUMP_LABEL 193 191 select HAVE_ARCH_JUMP_LABEL_RELATIVE 194 192 select HAVE_ARCH_KASAN if PPC32 && PPC_PAGE_SHIFT <= 14 195 - select HAVE_ARCH_KASAN_VMALLOC if PPC32 && PPC_PAGE_SHIFT <= 14 193 + select HAVE_ARCH_KASAN if PPC_RADIX_MMU 194 + select HAVE_ARCH_KASAN_VMALLOC if HAVE_ARCH_KASAN 196 195 select HAVE_ARCH_KFENCE if PPC_BOOK3S_32 || PPC_8xx || 40x 197 196 select HAVE_ARCH_KGDB 198 197 select HAVE_ARCH_MMAP_RND_BITS ··· 213 210 select HAVE_EFFICIENT_UNALIGNED_ACCESS if !(CPU_LITTLE_ENDIAN && POWER7_CPU) 214 211 select HAVE_FAST_GUP 215 212 select HAVE_FTRACE_MCOUNT_RECORD 216 - select HAVE_FUNCTION_DESCRIPTORS if PPC64 && !CPU_LITTLE_ENDIAN 213 + select HAVE_FUNCTION_DESCRIPTORS if PPC64_ELF_ABI_V1 217 214 select HAVE_FUNCTION_ERROR_INJECTION 218 215 select HAVE_FUNCTION_GRAPH_TRACER 219 216 select HAVE_FUNCTION_TRACER ··· 762 759 definition from 0x10000 to 0x40000 in older versions. 763 760 764 761 endchoice 762 + 763 + config PAGE_SIZE_4KB 764 + def_bool y 765 + depends on PPC_4K_PAGES 766 + 767 + config PAGE_SIZE_16KB 768 + def_bool y 769 + depends on PPC_16K_PAGES 770 + 771 + config PAGE_SIZE_64KB 772 + def_bool y 773 + depends on PPC_64K_PAGES 774 + 775 + config PAGE_SIZE_256KB 776 + def_bool y 777 + depends on PPC_256K_PAGES 765 778 766 779 config PPC_PAGE_SHIFT 767 780 int
+2 -1
arch/powerpc/Kconfig.debug
··· 374 374 config KASAN_SHADOW_OFFSET 375 375 hex 376 376 depends on KASAN 377 - default 0xe0000000 377 + default 0xe0000000 if PPC32 378 + default 0xa80e000000000000 if PPC64
+6 -6
arch/powerpc/Makefile
··· 89 89 90 90 ifdef CONFIG_PPC64 91 91 ifndef CONFIG_CC_IS_CLANG 92 - cflags-$(CONFIG_CPU_BIG_ENDIAN) += $(call cc-option,-mabi=elfv1) 93 - cflags-$(CONFIG_CPU_BIG_ENDIAN) += $(call cc-option,-mcall-aixdesc) 94 - aflags-$(CONFIG_CPU_BIG_ENDIAN) += $(call cc-option,-mabi=elfv1) 95 - aflags-$(CONFIG_CPU_LITTLE_ENDIAN) += -mabi=elfv2 92 + cflags-$(CONFIG_PPC64_ELF_ABI_V1) += $(call cc-option,-mabi=elfv1) 93 + cflags-$(CONFIG_PPC64_ELF_ABI_V1) += $(call cc-option,-mcall-aixdesc) 94 + aflags-$(CONFIG_PPC64_ELF_ABI_V1) += $(call cc-option,-mabi=elfv1) 95 + aflags-$(CONFIG_PPC64_ELF_ABI_V2) += -mabi=elfv2 96 96 endif 97 97 endif 98 98 ··· 141 141 142 142 CFLAGS-$(CONFIG_PPC64) := $(call cc-option,-mtraceback=no) 143 143 ifndef CONFIG_CC_IS_CLANG 144 - ifdef CONFIG_CPU_LITTLE_ENDIAN 144 + ifdef CONFIG_PPC64_ELF_ABI_V2 145 145 CFLAGS-$(CONFIG_PPC64) += $(call cc-option,-mabi=elfv2,$(call cc-option,-mcall-aixdesc)) 146 146 AFLAGS-$(CONFIG_PPC64) += $(call cc-option,-mabi=elfv2) 147 147 else ··· 213 213 ifdef CONFIG_CPU_BIG_ENDIAN 214 214 CHECKFLAGS += -D__BIG_ENDIAN__ 215 215 else 216 - CHECKFLAGS += -D__LITTLE_ENDIAN__ -D_CALL_ELF=2 216 + CHECKFLAGS += -D__LITTLE_ENDIAN__ 217 217 endif 218 218 219 219 ifdef CONFIG_476FPE_ERR46
+8 -2
arch/powerpc/boot/Makefile
··· 38 38 $(LINUXINCLUDE) 39 39 40 40 ifdef CONFIG_PPC64_BOOT_WRAPPER 41 - BOOTCFLAGS += -m64 41 + ifdef CONFIG_CPU_LITTLE_ENDIAN 42 + BOOTCFLAGS += -m64 -mcpu=powerpc64le 42 43 else 43 - BOOTCFLAGS += -m32 44 + BOOTCFLAGS += -m64 -mcpu=powerpc64 45 + endif 46 + else 47 + BOOTCFLAGS += -m32 -mcpu=powerpc 44 48 endif 45 49 46 50 BOOTCFLAGS += -isystem $(shell $(BOOTCC) -print-file-name=include) ··· 53 49 BOOTCFLAGS += -mbig-endian 54 50 else 55 51 BOOTCFLAGS += -mlittle-endian 52 + endif 53 + ifdef CONFIG_PPC64_ELF_ABI_V2 56 54 BOOTCFLAGS += $(call cc-option,-mabi=elfv2) 57 55 endif 58 56
+29 -16
arch/powerpc/boot/crt0.S
··· 8 8 #include "ppc_asm.h" 9 9 10 10 RELA = 7 11 - RELACOUNT = 0x6ffffff9 11 + RELASZ = 8 12 + RELAENT = 9 12 13 13 14 .data 14 15 /* A procedure descriptor used when booting this as a COFF file. ··· 76 75 bne 11f 77 76 lwz r9,4(r12) /* get RELA pointer in r9 */ 78 77 b 12f 79 - 11: addis r8,r8,(-RELACOUNT)@ha 80 - cmpwi r8,RELACOUNT@l 78 + 11: cmpwi r8,RELASZ 79 + bne .Lcheck_for_relaent 80 + lwz r0,4(r12) /* get RELASZ value in r0 */ 81 + b 12f 82 + .Lcheck_for_relaent: 83 + cmpwi r8,RELAENT 81 84 bne 12f 82 - lwz r0,4(r12) /* get RELACOUNT value in r0 */ 85 + lwz r14,4(r12) /* get RELAENT value in r14 */ 83 86 12: addi r12,r12,8 84 87 b 9b 85 88 86 89 /* The relocation section contains a list of relocations. 87 90 * We now do the R_PPC_RELATIVE ones, which point to words 88 - * which need to be initialized with addend + offset. 89 - * The R_PPC_RELATIVE ones come first and there are RELACOUNT 90 - * of them. */ 91 + * which need to be initialized with addend + offset */ 91 92 10: /* skip relocation if we don't have both */ 92 93 cmpwi r0,0 93 94 beq 3f 94 95 cmpwi r9,0 95 96 beq 3f 97 + cmpwi r14,0 98 + beq 3f 96 99 97 100 add r9,r9,r11 /* Relocate RELA pointer */ 101 + divwu r0,r0,r14 /* RELASZ / RELAENT */ 98 102 mtctr r0 99 103 2: lbz r0,4+3(r9) /* ELF32_R_INFO(reloc->r_info) */ 100 104 cmpwi r0,22 /* R_PPC_RELATIVE */ 101 - bne 3f 105 + bne .Lnext 102 106 lwz r12,0(r9) /* reloc->r_offset */ 103 107 lwz r0,8(r9) /* reloc->r_addend */ 104 108 add r0,r0,r11 105 109 stwx r0,r11,r12 106 - addi r9,r9,12 110 + .Lnext: add r9,r9,r14 107 111 bdnz 2b 108 112 109 113 /* Do a cache flush for our text, in case the loader didn't */ ··· 166 160 bne 10f 167 161 ld r13,8(r11) /* get RELA pointer in r13 */ 168 162 b 11f 169 - 10: addis r12,r12,(-RELACOUNT)@ha 170 - cmpdi r12,RELACOUNT@l 171 - bne 11f 172 - ld r8,8(r11) /* get RELACOUNT value in r8 */ 163 + 10: cmpwi r12,RELASZ 164 + bne .Lcheck_for_relaent 165 + lwz r8,8(r11) /* get RELASZ pointer in r8 */ 166 + b 11f 167 + .Lcheck_for_relaent: 168 + cmpwi r12,RELAENT 169 + bne 11f 170 + lwz r14,8(r11) /* get RELAENT pointer in r14 */ 173 171 11: addi r11,r11,16 174 172 b 9b 175 173 12: 176 - cmpdi r13,0 /* check we have both RELA and RELACOUNT */ 174 + cmpdi r13,0 /* check we have both RELA, RELASZ, RELAENT*/ 177 175 cmpdi cr1,r8,0 178 176 beq 3f 179 177 beq cr1,3f 178 + cmpdi r14,0 179 + beq 3f 180 180 181 181 /* Calcuate the runtime offset. */ 182 182 subf r13,r13,r9 183 183 184 184 /* Run through the list of relocations and process the 185 185 * R_PPC64_RELATIVE ones. */ 186 + divdu r8,r8,r14 /* RELASZ / RELAENT */ 186 187 mtctr r8 187 188 13: ld r0,8(r9) /* ELF64_R_TYPE(reloc->r_info) */ 188 189 cmpdi r0,22 /* R_PPC64_RELATIVE */ 189 - bne 3f 190 + bne .Lnext 190 191 ld r12,0(r9) /* reloc->r_offset */ 191 192 ld r0,16(r9) /* reloc->r_addend */ 192 193 add r0,r0,r13 193 194 stdx r0,r13,r12 194 - addi r9,r9,24 195 + .Lnext: add r9,r9,r14 195 196 bdnz 13b 196 197 197 198 /* Do a cache flush for our text, in case the loader didn't */
+1 -1
arch/powerpc/boot/cuboot-hotfoot.c
··· 70 70 71 71 printf("Fixing devtree for 4M Flash\n"); 72 72 73 - /* First fix up the base addresse */ 73 + /* First fix up the base address */ 74 74 getprop(devp, "reg", regs, sizeof(regs)); 75 75 regs[0] = 0; 76 76 regs[1] = 0xffc00000;
+5
arch/powerpc/boot/dts/fsl/p2020si-post.dtsi
··· 198 198 reg = <0xe0000 0x1000>; 199 199 fsl,has-rstcr; 200 200 }; 201 + 202 + pmc: power@e0070 { 203 + compatible = "fsl,mpc8548-pmc"; 204 + reg = <0xe0070 0x20>; 205 + }; 201 206 };
+2
arch/powerpc/boot/dts/microwatt.dts
··· 90 90 64-bit; 91 91 d-cache-size = <0x1000>; 92 92 ibm,chip-id = <0>; 93 + ibm,mmu-lpid-bits = <12>; 94 + ibm,mmu-pid-bits = <20>; 93 95 }; 94 96 }; 95 97
-6
arch/powerpc/boot/ops.h
··· 200 200 __dt_fixup_mac_addresses(0, __VA_ARGS__, NULL) 201 201 202 202 203 - static inline void *find_node_by_linuxphandle(const u32 linuxphandle) 204 - { 205 - return find_node_by_prop_value(NULL, "linux,phandle", 206 - (char *)&linuxphandle, sizeof(u32)); 207 - } 208 - 209 203 static inline char *get_path(const void *phandle, char *buf, int len) 210 204 { 211 205 if (dt_ops.get_path)
+1 -1
arch/powerpc/boot/wrapper
··· 162 162 fi 163 163 ;; 164 164 --no-gzip) 165 - # a "feature" of the the wrapper script is that it can be used outside 165 + # a "feature" of the wrapper script is that it can be used outside 166 166 # the kernel tree. So keeping this around for backwards compatibility. 167 167 compression= 168 168 uboot_comp=none
+1 -1
arch/powerpc/crypto/aes-spe-glue.c
··· 404 404 405 405 /* 406 406 * Algorithm definitions. Disabling alignment (cra_alignmask=0) was chosen 407 - * because the e500 platform can handle unaligned reads/writes very efficently. 407 + * because the e500 platform can handle unaligned reads/writes very efficiently. 408 408 * This improves IPsec thoughput by another few percent. Additionally we assume 409 409 * that AES context is always aligned to at least 8 bytes because it is created 410 410 * with kmalloc() in the crypto infrastructure
+4
arch/powerpc/include/asm/book3s/64/hash.h
··· 18 18 #include <asm/book3s/64/hash-4k.h> 19 19 #endif 20 20 21 + #define H_PTRS_PER_PTE (1 << H_PTE_INDEX_SIZE) 22 + #define H_PTRS_PER_PMD (1 << H_PMD_INDEX_SIZE) 23 + #define H_PTRS_PER_PUD (1 << H_PUD_INDEX_SIZE) 24 + 21 25 /* Bits to set in a PMD/PUD/PGD entry valid bit*/ 22 26 #define HASH_PMD_VAL_BITS (0x8000000000000000UL) 23 27 #define HASH_PUD_VAL_BITS (0x8000000000000000UL)
-4
arch/powerpc/include/asm/book3s/64/hugetlb.h
··· 8 8 */ 9 9 void radix__flush_hugetlb_page(struct vm_area_struct *vma, unsigned long vmaddr); 10 10 void radix__local_flush_hugetlb_page(struct vm_area_struct *vma, unsigned long vmaddr); 11 - extern unsigned long 12 - radix__hugetlb_get_unmapped_area(struct file *file, unsigned long addr, 13 - unsigned long len, unsigned long pgoff, 14 - unsigned long flags); 15 11 16 12 extern void radix__huge_ptep_modify_prot_commit(struct vm_area_struct *vma, 17 13 unsigned long addr, pte_t *ptep,
+1
arch/powerpc/include/asm/book3s/64/mmu-hash.h
··· 18 18 * complete pgtable.h but only a portion of it. 19 19 */ 20 20 #include <asm/book3s/64/pgtable.h> 21 + #include <asm/book3s/64/slice.h> 21 22 #include <asm/task_size_64.h> 22 23 #include <asm/cpu_has_feature.h> 23 24
-6
arch/powerpc/include/asm/book3s/64/mmu.h
··· 4 4 5 5 #include <asm/page.h> 6 6 7 - #ifdef CONFIG_HUGETLB_PAGE 8 - #define HAVE_ARCH_HUGETLB_UNMAPPED_AREA 9 - #endif 10 - #define HAVE_ARCH_UNMAPPED_AREA 11 - #define HAVE_ARCH_UNMAPPED_AREA_TOPDOWN 12 - 13 7 #ifndef __ASSEMBLY__ 14 8 /* 15 9 * Page size definition
+3
arch/powerpc/include/asm/book3s/64/pgtable.h
··· 231 231 #define PTRS_PER_PUD (1 << PUD_INDEX_SIZE) 232 232 #define PTRS_PER_PGD (1 << PGD_INDEX_SIZE) 233 233 234 + #define MAX_PTRS_PER_PTE ((H_PTRS_PER_PTE > R_PTRS_PER_PTE) ? H_PTRS_PER_PTE : R_PTRS_PER_PTE) 235 + #define MAX_PTRS_PER_PMD ((H_PTRS_PER_PMD > R_PTRS_PER_PMD) ? H_PTRS_PER_PMD : R_PTRS_PER_PMD) 236 + #define MAX_PTRS_PER_PUD ((H_PTRS_PER_PUD > R_PTRS_PER_PUD) ? H_PTRS_PER_PUD : R_PTRS_PER_PUD) 234 237 #define MAX_PTRS_PER_PGD (1 << (H_PGD_INDEX_SIZE > RADIX_PGD_INDEX_SIZE ? \ 235 238 H_PGD_INDEX_SIZE : RADIX_PGD_INDEX_SIZE)) 236 239
+9 -3
arch/powerpc/include/asm/book3s/64/radix.h
··· 35 35 #define RADIX_PMD_SHIFT (PAGE_SHIFT + RADIX_PTE_INDEX_SIZE) 36 36 #define RADIX_PUD_SHIFT (RADIX_PMD_SHIFT + RADIX_PMD_INDEX_SIZE) 37 37 #define RADIX_PGD_SHIFT (RADIX_PUD_SHIFT + RADIX_PUD_INDEX_SIZE) 38 + 39 + #define R_PTRS_PER_PTE (1 << RADIX_PTE_INDEX_SIZE) 40 + #define R_PTRS_PER_PMD (1 << RADIX_PMD_INDEX_SIZE) 41 + #define R_PTRS_PER_PUD (1 << RADIX_PUD_INDEX_SIZE) 42 + 38 43 /* 39 44 * Size of EA range mapped by our pagetables. 40 45 */ ··· 73 68 * 74 69 * 75 70 * 3rd quadrant expanded: 76 - * +------------------------------+ 71 + * +------------------------------+ Highest address (0xc010000000000000) 72 + * +------------------------------+ KASAN shadow end (0xc00fc00000000000) 77 73 * | | 78 74 * | | 79 - * | | 80 - * +------------------------------+ Kernel vmemmap end (0xc010000000000000) 75 + * +------------------------------+ Kernel vmemmap end/shadow start (0xc00e000000000000) 81 76 * | | 82 77 * | 512TB | 83 78 * | | ··· 96 91 * +------------------------------+ Kernel linear (0xc.....) 97 92 */ 98 93 94 + /* For the sizes of the shadow area, see kasan.h */ 99 95 100 96 /* 101 97 * If we store section details in page->flags we can't increase the MAX_PHYSMEM_BITS
+26
arch/powerpc/include/asm/book3s/64/slice.h
··· 2 2 #ifndef _ASM_POWERPC_BOOK3S_64_SLICE_H 3 3 #define _ASM_POWERPC_BOOK3S_64_SLICE_H 4 4 5 + #ifndef __ASSEMBLY__ 6 + 7 + #ifdef CONFIG_PPC_64S_HASH_MMU 8 + #ifdef CONFIG_HUGETLB_PAGE 9 + #define HAVE_ARCH_HUGETLB_UNMAPPED_AREA 10 + #endif 11 + #define HAVE_ARCH_UNMAPPED_AREA 12 + #define HAVE_ARCH_UNMAPPED_AREA_TOPDOWN 13 + #endif 14 + 5 15 #define SLICE_LOW_SHIFT 28 6 16 #define SLICE_LOW_TOP (0x100000000ul) 7 17 #define SLICE_NUM_LOW (SLICE_LOW_TOP >> SLICE_LOW_SHIFT) ··· 22 12 #define GET_HIGH_SLICE_INDEX(addr) ((addr) >> SLICE_HIGH_SHIFT) 23 13 24 14 #define SLB_ADDR_LIMIT_DEFAULT DEFAULT_MAP_WINDOW_USER64 15 + 16 + struct mm_struct; 17 + 18 + unsigned long slice_get_unmapped_area(unsigned long addr, unsigned long len, 19 + unsigned long flags, unsigned int psize, 20 + int topdown); 21 + 22 + unsigned int get_slice_psize(struct mm_struct *mm, unsigned long addr); 23 + 24 + void slice_set_range_psize(struct mm_struct *mm, unsigned long start, 25 + unsigned long len, unsigned int psize); 26 + 27 + void slice_init_new_context_exec(struct mm_struct *mm); 28 + void slice_setup_new_exec(void); 29 + 30 + #endif /* __ASSEMBLY__ */ 25 31 26 32 #endif /* _ASM_POWERPC_BOOK3S_64_SLICE_H */
+12 -12
arch/powerpc/include/asm/checksum.h
··· 38 38 */ 39 39 static inline __sum16 csum_fold(__wsum sum) 40 40 { 41 - unsigned int tmp; 41 + u32 tmp = (__force u32)sum; 42 42 43 - /* swap the two 16-bit halves of sum */ 44 - __asm__("rlwinm %0,%1,16,0,31" : "=r" (tmp) : "r" (sum)); 45 - /* if there is a carry from adding the two 16-bit halves, 46 - it will carry from the lower half into the upper half, 47 - giving us the correct sum in the upper half. */ 48 - return (__force __sum16)(~((__force u32)sum + tmp) >> 16); 43 + /* 44 + * swap the two 16-bit halves of sum 45 + * if there is a carry from adding the two 16-bit halves, 46 + * it will carry from the lower half into the upper half, 47 + * giving us the correct sum in the upper half. 48 + */ 49 + return (__force __sum16)(~(tmp + rol32(tmp, 16)) >> 16); 49 50 } 50 51 51 52 static inline u32 from64to32(u64 x) ··· 96 95 { 97 96 #ifdef __powerpc64__ 98 97 u64 res = (__force u64)csum; 99 - #endif 98 + 99 + res += (__force u64)addend; 100 + return (__force __wsum)((u32)res + (res >> 32)); 101 + #else 100 102 if (__builtin_constant_p(csum) && csum == 0) 101 103 return addend; 102 104 if (__builtin_constant_p(addend) && addend == 0) 103 105 return csum; 104 106 105 - #ifdef __powerpc64__ 106 - res += (__force u64)addend; 107 - return (__force __wsum)((u32)res + (res >> 32)); 108 - #else 109 107 asm("addc %0,%0,%1;" 110 108 "addze %0,%0;" 111 109 : "+r" (csum) : "r" (addend) : "xer");
+55 -12
arch/powerpc/include/asm/code-patching.h
··· 22 22 #define BRANCH_SET_LINK 0x1 23 23 #define BRANCH_ABSOLUTE 0x2 24 24 25 - bool is_offset_in_branch_range(long offset); 26 - bool is_offset_in_cond_branch_range(long offset); 27 - int create_branch(ppc_inst_t *instr, const u32 *addr, 28 - unsigned long target, int flags); 25 + DECLARE_STATIC_KEY_FALSE(init_mem_is_free); 26 + 27 + /* 28 + * Powerpc branch instruction is : 29 + * 30 + * 0 6 30 31 31 + * +---------+----------------+---+---+ 32 + * | opcode | LI |AA |LK | 33 + * +---------+----------------+---+---+ 34 + * Where AA = 0 and LK = 0 35 + * 36 + * LI is a signed 24 bits integer. The real branch offset is computed 37 + * by: imm32 = SignExtend(LI:'0b00', 32); 38 + * 39 + * So the maximum forward branch should be: 40 + * (0x007fffff << 2) = 0x01fffffc = 0x1fffffc 41 + * The maximum backward branch should be: 42 + * (0xff800000 << 2) = 0xfe000000 = -0x2000000 43 + */ 44 + static inline bool is_offset_in_branch_range(long offset) 45 + { 46 + return (offset >= -0x2000000 && offset <= 0x1fffffc && !(offset & 0x3)); 47 + } 48 + 49 + static inline bool is_offset_in_cond_branch_range(long offset) 50 + { 51 + return offset >= -0x8000 && offset <= 0x7fff && !(offset & 0x3); 52 + } 53 + 54 + static inline int create_branch(ppc_inst_t *instr, const u32 *addr, 55 + unsigned long target, int flags) 56 + { 57 + long offset; 58 + 59 + *instr = ppc_inst(0); 60 + offset = target; 61 + if (! (flags & BRANCH_ABSOLUTE)) 62 + offset = offset - (unsigned long)addr; 63 + 64 + /* Check we can represent the target in the instruction format */ 65 + if (!is_offset_in_branch_range(offset)) 66 + return 1; 67 + 68 + /* Mask out the flags and target, so they don't step on each other. */ 69 + *instr = ppc_inst(0x48000000 | (flags & 0x3) | (offset & 0x03FFFFFC)); 70 + 71 + return 0; 72 + } 73 + 29 74 int create_cond_branch(ppc_inst_t *instr, const u32 *addr, 30 75 unsigned long target, int flags); 31 76 int patch_branch(u32 *addr, unsigned long target, int flags); ··· 132 87 133 88 static inline unsigned long ppc_function_entry(void *func) 134 89 { 135 - #ifdef PPC64_ELF_ABI_v2 90 + #ifdef CONFIG_PPC64_ELF_ABI_V2 136 91 u32 *insn = func; 137 92 138 93 /* ··· 157 112 return (unsigned long)(insn + 2); 158 113 else 159 114 return (unsigned long)func; 160 - #elif defined(PPC64_ELF_ABI_v1) 115 + #elif defined(CONFIG_PPC64_ELF_ABI_V1) 161 116 /* 162 117 * On PPC64 ABIv1 the function pointer actually points to the 163 118 * function's descriptor. The first entry in the descriptor is the ··· 171 126 172 127 static inline unsigned long ppc_global_function_entry(void *func) 173 128 { 174 - #ifdef PPC64_ELF_ABI_v2 129 + #ifdef CONFIG_PPC64_ELF_ABI_V2 175 130 /* PPC64 ABIv2 the global entry point is at the address */ 176 131 return (unsigned long)func; 177 132 #else ··· 188 143 static inline unsigned long ppc_kallsyms_lookup_name(const char *name) 189 144 { 190 145 unsigned long addr; 191 - #ifdef PPC64_ELF_ABI_v1 146 + #ifdef CONFIG_PPC64_ELF_ABI_V1 192 147 /* check for dot variant */ 193 148 char dot_name[1 + KSYM_NAME_LEN]; 194 149 bool dot_appended = false; ··· 209 164 if (!addr && dot_appended) 210 165 /* Let's try the original non-dot symbol lookup */ 211 166 addr = kallsyms_lookup_name(name); 212 - #elif defined(PPC64_ELF_ABI_v2) 167 + #elif defined(CONFIG_PPC64_ELF_ABI_V2) 213 168 addr = kallsyms_lookup_name(name); 214 169 if (addr) 215 170 addr = ppc_function_entry((void *)addr); ··· 219 174 return addr; 220 175 } 221 176 222 - #ifdef CONFIG_PPC64 223 177 /* 224 178 * Some instruction encodings commonly used in dynamic ftracing 225 179 * and function live patching. 226 180 */ 227 181 228 182 /* This must match the definition of STK_GOT in <asm/ppc_asm.h> */ 229 - #ifdef PPC64_ELF_ABI_v2 183 + #ifdef CONFIG_PPC64_ELF_ABI_V2 230 184 #define R2_STACK_OFFSET 24 231 185 #else 232 186 #define R2_STACK_OFFSET 40 ··· 235 191 236 192 /* usually preceded by a mflr r0 */ 237 193 #define PPC_INST_STD_LR PPC_RAW_STD(_R0, _R1, PPC_LR_STKOFF) 238 - #endif /* CONFIG_PPC64 */ 239 194 240 195 #endif /* _ASM_POWERPC_CODE_PATCHING_H */
+12 -4
arch/powerpc/include/asm/cputable.h
··· 440 440 #define CPU_FTRS_POWER9_DD2_2 (CPU_FTRS_POWER9 | CPU_FTR_POWER9_DD2_1 | \ 441 441 CPU_FTR_P9_TM_HV_ASSIST | \ 442 442 CPU_FTR_P9_TM_XER_SO_BUG) 443 + #define CPU_FTRS_POWER9_DD2_3 (CPU_FTRS_POWER9 | CPU_FTR_POWER9_DD2_1 | \ 444 + CPU_FTR_P9_TM_HV_ASSIST | \ 445 + CPU_FTR_P9_TM_XER_SO_BUG | \ 446 + CPU_FTR_DAWR) 443 447 #define CPU_FTRS_POWER10 (CPU_FTR_LWSYNC | \ 444 448 CPU_FTR_PPCAS_ARCH_V2 | CPU_FTR_CTRL | CPU_FTR_ARCH_206 |\ 445 449 CPU_FTR_MMCRA | CPU_FTR_SMT | \ ··· 473 469 #define CPU_FTRS_POSSIBLE \ 474 470 (CPU_FTRS_POWER7 | CPU_FTRS_POWER8E | CPU_FTRS_POWER8 | \ 475 471 CPU_FTR_ALTIVEC_COMP | CPU_FTR_VSX_COMP | CPU_FTRS_POWER9 | \ 476 - CPU_FTRS_POWER9_DD2_1 | CPU_FTRS_POWER9_DD2_2 | CPU_FTRS_POWER10) 472 + CPU_FTRS_POWER9_DD2_1 | CPU_FTRS_POWER9_DD2_2 | \ 473 + CPU_FTRS_POWER9_DD2_3 | CPU_FTRS_POWER10) 477 474 #else 478 475 #define CPU_FTRS_POSSIBLE \ 479 476 (CPU_FTRS_PPC970 | CPU_FTRS_POWER5 | \ 480 477 CPU_FTRS_POWER6 | CPU_FTRS_POWER7 | CPU_FTRS_POWER8E | \ 481 478 CPU_FTRS_POWER8 | CPU_FTRS_CELL | CPU_FTRS_PA6T | \ 482 479 CPU_FTR_VSX_COMP | CPU_FTR_ALTIVEC_COMP | CPU_FTRS_POWER9 | \ 483 - CPU_FTRS_POWER9_DD2_1 | CPU_FTRS_POWER9_DD2_2 | CPU_FTRS_POWER10) 480 + CPU_FTRS_POWER9_DD2_1 | CPU_FTRS_POWER9_DD2_2 | \ 481 + CPU_FTRS_POWER9_DD2_3 | CPU_FTRS_POWER10) 484 482 #endif /* CONFIG_CPU_LITTLE_ENDIAN */ 485 483 #endif 486 484 #else ··· 547 541 #define CPU_FTRS_ALWAYS \ 548 542 (CPU_FTRS_POSSIBLE & ~CPU_FTR_HVMODE & CPU_FTRS_POWER7 & \ 549 543 CPU_FTRS_POWER8E & CPU_FTRS_POWER8 & CPU_FTRS_POWER9 & \ 550 - CPU_FTRS_POWER9_DD2_1 & CPU_FTRS_DT_CPU_BASE) 544 + CPU_FTRS_POWER9_DD2_1 & CPU_FTRS_POWER9_DD2_2 & \ 545 + CPU_FTRS_POWER10 & CPU_FTRS_DT_CPU_BASE) 551 546 #else 552 547 #define CPU_FTRS_ALWAYS \ 553 548 (CPU_FTRS_PPC970 & CPU_FTRS_POWER5 & \ 554 549 CPU_FTRS_POWER6 & CPU_FTRS_POWER7 & CPU_FTRS_CELL & \ 555 550 CPU_FTRS_PA6T & CPU_FTRS_POWER8 & CPU_FTRS_POWER8E & \ 556 551 ~CPU_FTR_HVMODE & CPU_FTRS_POSSIBLE & CPU_FTRS_POWER9 & \ 557 - CPU_FTRS_POWER9_DD2_1 & CPU_FTRS_DT_CPU_BASE) 552 + CPU_FTRS_POWER9_DD2_1 & CPU_FTRS_POWER9_DD2_2 & \ 553 + CPU_FTRS_POWER10 & CPU_FTRS_DT_CPU_BASE) 558 554 #endif /* CONFIG_CPU_LITTLE_ENDIAN */ 559 555 #endif 560 556 #else
+3
arch/powerpc/include/asm/drmem.h
··· 23 23 u64 lmb_size; 24 24 }; 25 25 26 + struct device_node; 27 + struct property; 28 + 26 29 extern struct drmem_lmb_info *drmem_info; 27 30 28 31 static inline struct drmem_lmb *drmem_lmb_next(struct drmem_lmb *lmb,
-6
arch/powerpc/include/asm/eeh.h
··· 333 333 334 334 static inline void eeh_show_enabled(void) { } 335 335 336 - static inline void eeh_dev_phb_init_dynamic(struct pci_controller *phb) { } 337 - 338 336 static inline int eeh_check_failure(const volatile void __iomem *token) 339 337 { 340 338 return 0; ··· 352 354 #endif /* CONFIG_EEH */ 353 355 354 356 #if defined(CONFIG_PPC_PSERIES) && defined(CONFIG_EEH) 355 - void pseries_eeh_init_edev(struct pci_dn *pdn); 356 357 void pseries_eeh_init_edev_recursive(struct pci_dn *pdn); 357 - #else 358 - static inline void pseries_eeh_add_device_early(struct pci_dn *pdn) { } 359 - static inline void pseries_eeh_add_device_tree_early(struct pci_dn *pdn) { } 360 358 #endif 361 359 362 360 #ifdef CONFIG_PPC64
+13 -1
arch/powerpc/include/asm/elf.h
··· 160 160 * even if DLINFO_ARCH_ITEMS goes to zero or is undefined. 161 161 * update AT_VECTOR_SIZE_ARCH if the number of NEW_AUX_ENT entries changes 162 162 */ 163 - #define ARCH_DLINFO \ 163 + #define COMMON_ARCH_DLINFO \ 164 164 do { \ 165 165 /* Handle glibc compatibility. */ \ 166 166 NEW_AUX_ENT(AT_IGNOREPPC, AT_IGNOREPPC); \ ··· 171 171 NEW_AUX_ENT(AT_UCACHEBSIZE, 0); \ 172 172 VDSO_AUX_ENT(AT_SYSINFO_EHDR, (unsigned long)current->mm->context.vdso);\ 173 173 ARCH_DLINFO_CACHE_GEOMETRY; \ 174 + } while (0) 175 + 176 + #define ARCH_DLINFO \ 177 + do { \ 178 + COMMON_ARCH_DLINFO; \ 179 + NEW_AUX_ENT(AT_MINSIGSTKSZ, get_min_sigframe_size()); \ 180 + } while (0) 181 + 182 + #define COMPAT_ARCH_DLINFO \ 183 + do { \ 184 + COMMON_ARCH_DLINFO; \ 185 + NEW_AUX_ENT(AT_MINSIGSTKSZ, get_min_sigframe_size_compat()); \ 174 186 } while (0) 175 187 176 188 /* Relocate the kernel image to @final_address */
+1 -1
arch/powerpc/include/asm/fadump-internal.h
··· 50 50 u64 elfcorehdr_addr; 51 51 u32 crashing_cpu; 52 52 struct pt_regs regs; 53 - struct cpumask online_mask; 53 + struct cpumask cpu_mask; 54 54 }; 55 55 56 56 struct fadump_memory_range {
-35
arch/powerpc/include/asm/fsl_85xx_cache_sram.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0-or-later */ 2 - /* 3 - * Copyright 2009 Freescale Semiconductor, Inc. 4 - * 5 - * Cache SRAM handling for QorIQ platform 6 - * 7 - * Author: Vivek Mahajan <vivek.mahajan@freescale.com> 8 - 9 - * This file is derived from the original work done 10 - * by Sylvain Munaut for the Bestcomm SRAM allocator. 11 - */ 12 - 13 - #ifndef __ASM_POWERPC_FSL_85XX_CACHE_SRAM_H__ 14 - #define __ASM_POWERPC_FSL_85XX_CACHE_SRAM_H__ 15 - 16 - #include <asm/rheap.h> 17 - #include <linux/spinlock.h> 18 - 19 - /* 20 - * Cache-SRAM 21 - */ 22 - 23 - struct mpc85xx_cache_sram { 24 - phys_addr_t base_phys; 25 - void *base_virt; 26 - unsigned int size; 27 - rh_info_t *rh; 28 - spinlock_t lock; 29 - }; 30 - 31 - extern void mpc85xx_cache_sram_free(void *ptr); 32 - extern void *mpc85xx_cache_sram_alloc(unsigned int size, 33 - phys_addr_t *phys, unsigned int align); 34 - 35 - #endif /* __AMS_POWERPC_FSL_85XX_CACHE_SRAM_H__ */
+5 -3
arch/powerpc/include/asm/ftrace.h
··· 64 64 * those. 65 65 */ 66 66 #define ARCH_HAS_SYSCALL_MATCH_SYM_NAME 67 - #ifdef PPC64_ELF_ABI_v1 67 + #ifdef CONFIG_PPC64_ELF_ABI_V1 68 68 static inline bool arch_syscall_match_sym_name(const char *sym, const char *name) 69 69 { 70 70 /* We need to skip past the initial dot, and the __se_sys alias */ ··· 83 83 (!strncmp(sym, "ppc32_", 6) && !strcmp(sym + 6, name + 4)) || 84 84 (!strncmp(sym, "ppc64_", 6) && !strcmp(sym + 6, name + 4)); 85 85 } 86 - #endif /* PPC64_ELF_ABI_v1 */ 86 + #endif /* CONFIG_PPC64_ELF_ABI_V1 */ 87 87 #endif /* CONFIG_FTRACE_SYSCALLS */ 88 88 89 - #ifdef CONFIG_PPC64 89 + #if defined(CONFIG_PPC64) && defined(CONFIG_FUNCTION_TRACER) 90 90 #include <asm/paca.h> 91 91 92 92 static inline void this_cpu_disable_ftrace(void) ··· 110 110 return get_paca()->ftrace_enabled; 111 111 } 112 112 113 + void ftrace_free_init_tramp(void); 113 114 #else /* CONFIG_PPC64 */ 114 115 static inline void this_cpu_disable_ftrace(void) { } 115 116 static inline void this_cpu_enable_ftrace(void) { } 116 117 static inline void this_cpu_set_ftrace_enabled(u8 ftrace_enabled) { } 117 118 static inline u8 this_cpu_get_ftrace_enabled(void) { return 1; } 119 + static inline void ftrace_free_init_tramp(void) { } 118 120 #endif /* CONFIG_PPC64 */ 119 121 #endif /* !__ASSEMBLY__ */ 120 122
+1 -1
arch/powerpc/include/asm/hugetlb.h
··· 24 24 unsigned long addr, 25 25 unsigned long len) 26 26 { 27 - if (IS_ENABLED(CONFIG_PPC_MM_SLICES) && !radix_enabled()) 27 + if (IS_ENABLED(CONFIG_PPC_64S_HASH_MMU) && !radix_enabled()) 28 28 return slice_is_hugepage_only_range(mm, addr, len); 29 29 return 0; 30 30 }
+9 -4
arch/powerpc/include/asm/inst.h
··· 158 158 __str; \ 159 159 }) 160 160 161 - static inline int copy_inst_from_kernel_nofault(ppc_inst_t *inst, u32 *src) 161 + static inline int __copy_inst_from_kernel_nofault(ppc_inst_t *inst, u32 *src) 162 162 { 163 163 unsigned int val, suffix; 164 - 165 - if (unlikely(!is_kernel_addr((unsigned long)src))) 166 - return -ERANGE; 167 164 168 165 /* See https://github.com/ClangBuiltLinux/linux/issues/1521 */ 169 166 #if defined(CONFIG_CC_IS_CLANG) && CONFIG_CLANG_VERSION < 140000 ··· 176 179 return 0; 177 180 Efault: 178 181 return -EFAULT; 182 + } 183 + 184 + static inline int copy_inst_from_kernel_nofault(ppc_inst_t *inst, u32 *src) 185 + { 186 + if (unlikely(!is_kernel_addr((unsigned long)src))) 187 + return -ERANGE; 188 + 189 + return __copy_inst_from_kernel_nofault(inst, src); 179 190 } 180 191 181 192 #endif /* _ASM_POWERPC_INST_H */
+40 -12
arch/powerpc/include/asm/interrupt.h
··· 324 324 } 325 325 #endif 326 326 327 + /* If data relocations are enabled, it's safe to use nmi_enter() */ 328 + if (mfmsr() & MSR_DR) { 329 + nmi_enter(); 330 + return; 331 + } 332 + 327 333 /* 328 - * Do not use nmi_enter() for pseries hash guest taking a real-mode 334 + * But do not use nmi_enter() for pseries hash guest taking a real-mode 329 335 * NMI because not everything it touches is within the RMA limit. 330 336 */ 331 - if (!IS_ENABLED(CONFIG_PPC_BOOK3S_64) || 332 - !firmware_has_feature(FW_FEATURE_LPAR) || 333 - radix_enabled() || (mfmsr() & MSR_DR)) 334 - nmi_enter(); 337 + if (IS_ENABLED(CONFIG_PPC_BOOK3S_64) && 338 + firmware_has_feature(FW_FEATURE_LPAR) && 339 + !radix_enabled()) 340 + return; 341 + 342 + /* 343 + * Likewise, don't use it if we have some form of instrumentation (like 344 + * KASAN shadow) that is not safe to access in real mode (even on radix) 345 + */ 346 + if (IS_ENABLED(CONFIG_KASAN)) 347 + return; 348 + 349 + /* Otherwise, it should be safe to call it */ 350 + nmi_enter(); 335 351 } 336 352 337 353 static inline void interrupt_nmi_exit_prepare(struct pt_regs *regs, struct interrupt_nmi_state *state) 338 354 { 339 - if (!IS_ENABLED(CONFIG_PPC_BOOK3S_64) || 340 - !firmware_has_feature(FW_FEATURE_LPAR) || 341 - radix_enabled() || (mfmsr() & MSR_DR)) 355 + if (mfmsr() & MSR_DR) { 356 + // nmi_exit if relocations are on 342 357 nmi_exit(); 358 + } else if (IS_ENABLED(CONFIG_PPC_BOOK3S_64) && 359 + firmware_has_feature(FW_FEATURE_LPAR) && 360 + !radix_enabled()) { 361 + // no nmi_exit for a pseries hash guest taking a real mode exception 362 + } else if (IS_ENABLED(CONFIG_KASAN)) { 363 + // no nmi_exit for KASAN in real mode 364 + } else { 365 + nmi_exit(); 366 + } 343 367 344 368 /* 345 369 * nmi does not call nap_adjust_return because nmi should not create ··· 431 407 * Specific handlers may have additional restrictions. 
432 408 */ 433 409 #define DEFINE_INTERRUPT_HANDLER_RAW(func) \ 434 - static __always_inline long ____##func(struct pt_regs *regs); \ 410 + static __always_inline __no_sanitize_address __no_kcsan long \ 411 + ____##func(struct pt_regs *regs); \ 435 412 \ 436 413 interrupt_handler long func(struct pt_regs *regs) \ 437 414 { \ ··· 446 421 } \ 447 422 NOKPROBE_SYMBOL(func); \ 448 423 \ 449 - static __always_inline long ____##func(struct pt_regs *regs) 424 + static __always_inline __no_sanitize_address __no_kcsan long \ 425 + ____##func(struct pt_regs *regs) 450 426 451 427 /** 452 428 * DECLARE_INTERRUPT_HANDLER - Declare synchronous interrupt handler function ··· 567 541 * body with a pair of curly brackets. 568 542 */ 569 543 #define DEFINE_INTERRUPT_HANDLER_NMI(func) \ 570 - static __always_inline long ____##func(struct pt_regs *regs); \ 544 + static __always_inline __no_sanitize_address __no_kcsan long \ 545 + ____##func(struct pt_regs *regs); \ 571 546 \ 572 547 interrupt_handler long func(struct pt_regs *regs) \ 573 548 { \ ··· 585 558 } \ 586 559 NOKPROBE_SYMBOL(func); \ 587 560 \ 588 - static __always_inline long ____##func(struct pt_regs *regs) 561 + static __always_inline __no_sanitize_address __no_kcsan long \ 562 + ____##func(struct pt_regs *regs) 589 563 590 564 591 565 /* Interrupt handlers */
-2
arch/powerpc/include/asm/io.h
··· 38 38 #define SIO_CONFIG_RA 0x398 39 39 #define SIO_CONFIG_RD 0x399 40 40 41 - #define SLOW_DOWN_IO 42 - 43 41 /* 32 bits uses slightly different variables for the various IO 44 42 * bases. Most of this file only uses _IO_BASE though which we 45 43 * define properly based on the platform
+2 -4
arch/powerpc/include/asm/iommu.h
··· 51 51 int (*xchg_no_kill)(struct iommu_table *tbl, 52 52 long index, 53 53 unsigned long *hpa, 54 - enum dma_data_direction *direction, 55 - bool realmode); 54 + enum dma_data_direction *direction); 56 55 57 56 void (*tce_kill)(struct iommu_table *tbl, 58 57 unsigned long index, 59 - unsigned long pages, 60 - bool realmode); 58 + unsigned long pages); 61 59 62 60 __be64 *(*useraddrptr)(struct iommu_table *tbl, long index, bool alloc); 63 61 #endif
+22
arch/powerpc/include/asm/kasan.h
··· 30 30 31 31 #define KASAN_SHADOW_OFFSET ASM_CONST(CONFIG_KASAN_SHADOW_OFFSET) 32 32 33 + #ifdef CONFIG_PPC32 33 34 #define KASAN_SHADOW_END (-(-KASAN_SHADOW_START >> KASAN_SHADOW_SCALE_SHIFT)) 35 + #elif defined(CONFIG_PPC_BOOK3S_64) 36 + /* 37 + * The shadow ends before the highest accessible address 38 + * because we don't need a shadow for the shadow. Instead: 39 + * c00e000000000000 << 3 + a80e000000000000 = c00fc00000000000 40 + */ 41 + #define KASAN_SHADOW_END 0xc00fc00000000000UL 42 + #endif 34 43 35 44 #ifdef CONFIG_KASAN 45 + #ifdef CONFIG_PPC_BOOK3S_64 46 + DECLARE_STATIC_KEY_FALSE(powerpc_kasan_enabled_key); 47 + 48 + static __always_inline bool kasan_arch_is_ready(void) 49 + { 50 + if (static_branch_likely(&powerpc_kasan_enabled_key)) 51 + return true; 52 + return false; 53 + } 54 + 55 + #define kasan_arch_is_ready kasan_arch_is_ready 56 + #endif 57 + 36 58 void kasan_early_init(void); 37 59 void kasan_mmu_init(void); 38 60 void kasan_init(void);
-1
arch/powerpc/include/asm/kup.h
··· 52 52 return false; 53 53 } 54 54 55 - static inline void __kuap_assert_locked(void) { } 56 55 static inline void __kuap_lock(void) { } 57 56 static inline void __kuap_save_and_lock(struct pt_regs *regs) { } 58 57 static inline void kuap_user_restore(struct pt_regs *regs) { }
-3
arch/powerpc/include/asm/kvm_book3s_asm.h
··· 14 14 #define XICS_MFRR 0xc 15 15 #define XICS_IPI 2 /* interrupt source # for IPIs */ 16 16 17 - /* LPIDs we support with this build -- runtime limit may be lower */ 18 - #define KVMPPC_NR_LPIDS (LPID_RSVD + 1) 19 - 20 17 /* Maximum number of threads per physical core */ 21 18 #define MAX_SMT_THREADS 8 22 19
+7 -3
arch/powerpc/include/asm/kvm_host.h
··· 36 36 #ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE 37 37 #include <asm/kvm_book3s_asm.h> /* for MAX_SMT_THREADS */ 38 38 #define KVM_MAX_VCPU_IDS (MAX_SMT_THREADS * KVM_MAX_VCORES) 39 - #define KVM_MAX_NESTED_GUESTS KVMPPC_NR_LPIDS 39 + 40 + /* 41 + * Limit the nested partition table to 4096 entries (because that's what 42 + * hardware supports). Both guest and host use this value. 43 + */ 44 + #define KVM_MAX_NESTED_GUESTS_SHIFT 12 40 45 41 46 #else 42 47 #define KVM_MAX_VCPU_IDS KVM_MAX_VCPUS ··· 332 327 struct list_head uvmem_pfns; 333 328 struct mutex mmu_setup_lock; /* nests inside vcpu mutexes */ 334 329 u64 l1_ptcr; 335 - int max_nested_lpid; 336 - struct kvm_nested_guest *nested_guests[KVM_MAX_NESTED_GUESTS]; 330 + struct idr kvm_nested_guest_idr; 337 331 /* This array can grow quite large, keep it at the end */ 338 332 struct kvmppc_vcore *vcores[KVM_MAX_VCORES]; 339 333 #endif
+2 -12
arch/powerpc/include/asm/kvm_ppc.h
··· 177 177 178 178 extern long kvm_vm_ioctl_create_spapr_tce(struct kvm *kvm, 179 179 struct kvm_create_spapr_tce_64 *args); 180 - extern struct kvmppc_spapr_tce_table *kvmppc_find_table( 181 - struct kvm *kvm, unsigned long liobn); 182 180 #define kvmppc_ioba_validate(stt, ioba, npages) \ 183 181 (iommu_tce_check_ioba((stt)->page_shift, (stt)->offset, \ 184 182 (stt)->size, (ioba), (npages)) ? \ ··· 683 685 int level, bool line_status); 684 686 extern void kvmppc_xive_push_vcpu(struct kvm_vcpu *vcpu); 685 687 extern void kvmppc_xive_pull_vcpu(struct kvm_vcpu *vcpu); 686 - extern void kvmppc_xive_rearm_escalation(struct kvm_vcpu *vcpu); 688 + extern bool kvmppc_xive_rearm_escalation(struct kvm_vcpu *vcpu); 687 689 688 690 static inline int kvmppc_xive_enabled(struct kvm_vcpu *vcpu) 689 691 { ··· 721 723 int level, bool line_status) { return -ENODEV; } 722 724 static inline void kvmppc_xive_push_vcpu(struct kvm_vcpu *vcpu) { } 723 725 static inline void kvmppc_xive_pull_vcpu(struct kvm_vcpu *vcpu) { } 724 - static inline void kvmppc_xive_rearm_escalation(struct kvm_vcpu *vcpu) { } 726 + static inline bool kvmppc_xive_rearm_escalation(struct kvm_vcpu *vcpu) { return true; } 725 727 726 728 static inline int kvmppc_xive_enabled(struct kvm_vcpu *vcpu) 727 729 { return 0; } ··· 787 789 unsigned long dest, unsigned long src); 788 790 long kvmppc_hpte_hv_fault(struct kvm_vcpu *vcpu, unsigned long addr, 789 791 unsigned long slb_v, unsigned int status, bool data); 790 - unsigned long kvmppc_rm_h_xirr(struct kvm_vcpu *vcpu); 791 - unsigned long kvmppc_rm_h_xirr_x(struct kvm_vcpu *vcpu); 792 - unsigned long kvmppc_rm_h_ipoll(struct kvm_vcpu *vcpu, unsigned long server); 793 - int kvmppc_rm_h_ipi(struct kvm_vcpu *vcpu, unsigned long server, 794 - unsigned long mfrr); 795 - int kvmppc_rm_h_cppr(struct kvm_vcpu *vcpu, unsigned long cppr); 796 - int kvmppc_rm_h_eoi(struct kvm_vcpu *vcpu, unsigned long xirr); 797 792 void kvmppc_guest_entry_inject_int(struct kvm_vcpu *vcpu); 798 
793 799 794 /* ··· 868 877 struct kvm_dirty_tlb *cfg); 869 878 870 879 long kvmppc_alloc_lpid(void); 871 - void kvmppc_claim_lpid(long lpid); 872 880 void kvmppc_free_lpid(long lpid); 873 881 void kvmppc_init_lpid(unsigned long nr_lpids); 874 882
+1 -1
arch/powerpc/include/asm/linkage.h
··· 4 4 5 5 #include <asm/types.h> 6 6 7 - #ifdef PPC64_ELF_ABI_v1 7 + #ifdef CONFIG_PPC64_ELF_ABI_V1 8 8 #define cond_syscall(x) \ 9 9 asm ("\t.weak " #x "\n\t.set " #x ", sys_ni_syscall\n" \ 10 10 "\t.weak ." #x "\n\t.set ." #x ", .sys_ni_syscall\n")
-5
arch/powerpc/include/asm/mmu_context.h
··· 34 34 extern void mm_iommu_cleanup(struct mm_struct *mm); 35 35 extern struct mm_iommu_table_group_mem_t *mm_iommu_lookup(struct mm_struct *mm, 36 36 unsigned long ua, unsigned long size); 37 - extern struct mm_iommu_table_group_mem_t *mm_iommu_lookup_rm( 38 - struct mm_struct *mm, unsigned long ua, unsigned long size); 39 37 extern struct mm_iommu_table_group_mem_t *mm_iommu_get(struct mm_struct *mm, 40 38 unsigned long ua, unsigned long entries); 41 39 extern long mm_iommu_ua_to_hpa(struct mm_iommu_table_group_mem_t *mem, 42 40 unsigned long ua, unsigned int pageshift, unsigned long *hpa); 43 - extern long mm_iommu_ua_to_hpa_rm(struct mm_iommu_table_group_mem_t *mem, 44 - unsigned long ua, unsigned int pageshift, unsigned long *hpa); 45 - extern void mm_iommu_ua_mark_dirty_rm(struct mm_struct *mm, unsigned long ua); 46 41 extern bool mm_iommu_is_devmem(struct mm_struct *mm, unsigned long hpa, 47 42 unsigned int pageshift, unsigned long *size); 48 43 extern long mm_iommu_mapped_inc(struct mm_iommu_table_group_mem_t *mem);
-2
arch/powerpc/include/asm/module.h
··· 41 41 42 42 #ifdef CONFIG_DYNAMIC_FTRACE 43 43 unsigned long tramp; 44 - #ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS 45 44 unsigned long tramp_regs; 46 - #endif 47 45 #endif 48 46 49 47 /* List of BUG addresses, source line numbers and filenames */
+11 -1
arch/powerpc/include/asm/nohash/tlbflush.h
··· 30 30 31 31 extern void flush_tlb_range(struct vm_area_struct *vma, unsigned long start, 32 32 unsigned long end); 33 - extern void flush_tlb_kernel_range(unsigned long start, unsigned long end); 34 33 35 34 #ifdef CONFIG_PPC_8xx 36 35 static inline void local_flush_tlb_mm(struct mm_struct *mm) ··· 44 45 { 45 46 asm volatile ("tlbie %0; sync" : : "r" (vmaddr) : "memory"); 46 47 } 48 + 49 + static inline void flush_tlb_kernel_range(unsigned long start, unsigned long end) 50 + { 51 + start &= PAGE_MASK; 52 + 53 + if (end - start <= PAGE_SIZE) 54 + asm volatile ("tlbie %0; sync" : : "r" (start) : "memory"); 55 + else 56 + asm volatile ("sync; tlbia; isync" : : : "memory"); 57 + } 47 58 #else 59 + extern void flush_tlb_kernel_range(unsigned long start, unsigned long end); 48 60 extern void local_flush_tlb_mm(struct mm_struct *mm); 49 61 extern void local_flush_tlb_page(struct vm_area_struct *vma, unsigned long vmaddr); 50 62
+1 -7
arch/powerpc/include/asm/paca.h
··· 12 12 13 13 #ifdef CONFIG_PPC64 14 14 15 + #include <linux/cache.h> 15 16 #include <linux/string.h> 16 17 #include <asm/types.h> 17 18 #include <asm/lppaca.h> ··· 153 152 struct tlb_core_data tcd; 154 153 #endif /* CONFIG_PPC_BOOK3E */ 155 154 156 - #ifdef CONFIG_PPC_BOOK3S 157 155 #ifdef CONFIG_PPC_64S_HASH_MMU 158 - #ifdef CONFIG_PPC_MM_SLICES 159 156 unsigned char mm_ctx_low_slices_psize[BITS_PER_LONG / BITS_PER_BYTE]; 160 157 unsigned char mm_ctx_high_slices_psize[SLICE_ARRAY_SIZE]; 161 - #else 162 - u16 mm_ctx_user_psize; 163 - u16 mm_ctx_sllp; 164 - #endif 165 - #endif 166 158 #endif 167 159 168 160 /*
+5 -3
arch/powerpc/include/asm/page.h
··· 216 216 #define __pa(x) ((phys_addr_t)(unsigned long)(x) - VIRT_PHYS_OFFSET) 217 217 #else 218 218 #ifdef CONFIG_PPC64 219 + 220 + #define VIRTUAL_WARN_ON(x) WARN_ON(IS_ENABLED(CONFIG_DEBUG_VIRTUAL) && (x)) 221 + 219 222 /* 220 223 * gcc miscompiles (unsigned long)(&static_var) - PAGE_OFFSET 221 224 * with -mcmodel=medium, so we use & and | instead of - and + on 64-bit. ··· 226 223 */ 227 224 #define __va(x) \ 228 225 ({ \ 229 - VIRTUAL_BUG_ON((unsigned long)(x) >= PAGE_OFFSET); \ 226 + VIRTUAL_WARN_ON((unsigned long)(x) >= PAGE_OFFSET); \ 230 227 (void *)(unsigned long)((phys_addr_t)(x) | PAGE_OFFSET); \ 231 228 }) 232 229 233 230 #define __pa(x) \ 234 231 ({ \ 235 - VIRTUAL_BUG_ON((unsigned long)(x) < PAGE_OFFSET); \ 232 + VIRTUAL_WARN_ON((unsigned long)(x) < PAGE_OFFSET); \ 236 233 (unsigned long)(x) & 0x0fffffffffffffffUL; \ 237 234 }) 238 235 ··· 336 333 337 334 #include <asm-generic/memory_model.h> 338 335 #endif /* __ASSEMBLY__ */ 339 - #include <asm/slice.h> 340 336 341 337 #endif /* _ASM_POWERPC_PAGE_H */
+1 -1
arch/powerpc/include/asm/parport.h
··· 11 11 #define _ASM_POWERPC_PARPORT_H 12 12 #ifdef __KERNEL__ 13 13 14 - #include <asm/prom.h> 14 + #include <linux/of_irq.h> 15 15 16 16 static int parport_pc_find_nonpci_ports (int autoirq, int autodma) 17 17 {
+2 -12
arch/powerpc/include/asm/pci-bridge.h
··· 170 170 return bus->sysdata; 171 171 } 172 172 173 - #ifndef CONFIG_PPC64 174 - 175 173 extern int pci_device_from_OF_node(struct device_node *node, 176 174 u8 *bus, u8 *devfn); 175 + #ifndef CONFIG_PPC64 176 + 177 177 extern void pci_create_OF_bus_map(void); 178 178 179 179 #else /* CONFIG_PPC64 */ ··· 234 234 struct pci_dn *add_sriov_vf_pdns(struct pci_dev *pdev); 235 235 void remove_sriov_vf_pdns(struct pci_dev *pdev); 236 236 #endif 237 - 238 - static inline int pci_device_from_OF_node(struct device_node *np, 239 - u8 *bus, u8 *devfn) 240 - { 241 - if (!PCI_DN(np)) 242 - return -ENODEV; 243 - *bus = PCI_DN(np)->busno; 244 - *devfn = PCI_DN(np)->devfn; 245 - return 0; 246 - } 247 237 248 238 #if defined(CONFIG_EEH) 249 239 static inline struct eeh_dev *pdn_to_eeh_dev(struct pci_dn *pdn)
+1
arch/powerpc/include/asm/pnv-pci.h
··· 9 9 #include <linux/pci.h> 10 10 #include <linux/pci_hotplug.h> 11 11 #include <linux/irq.h> 12 + #include <linux/of.h> 12 13 #include <misc/cxl-base.h> 13 14 #include <asm/opal-api.h> 14 15
+59 -50
arch/powerpc/include/asm/ppc-opcode.h
··· 127 127 128 128 129 129 /* opcode and xopcode for instructions */ 130 - #define OP_TRAP 3 131 - #define OP_TRAP_64 2 130 + #define OP_PREFIX 1 131 + #define OP_TRAP_64 2 132 + #define OP_TRAP 3 133 + #define OP_SC 17 134 + #define OP_19 19 135 + #define OP_31 31 136 + #define OP_LWZ 32 137 + #define OP_LWZU 33 138 + #define OP_LBZ 34 139 + #define OP_LBZU 35 140 + #define OP_STW 36 141 + #define OP_STWU 37 142 + #define OP_STB 38 143 + #define OP_STBU 39 144 + #define OP_LHZ 40 145 + #define OP_LHZU 41 146 + #define OP_LHA 42 147 + #define OP_LHAU 43 148 + #define OP_STH 44 149 + #define OP_STHU 45 150 + #define OP_LMW 46 151 + #define OP_STMW 47 152 + #define OP_LFS 48 153 + #define OP_LFSU 49 154 + #define OP_LFD 50 155 + #define OP_LFDU 51 156 + #define OP_STFS 52 157 + #define OP_STFSU 53 158 + #define OP_STFD 54 159 + #define OP_STFDU 55 160 + #define OP_LQ 56 161 + #define OP_LD 58 162 + #define OP_STD 62 163 + 164 + #define OP_19_XOP_RFID 18 165 + #define OP_19_XOP_RFMCI 38 166 + #define OP_19_XOP_RFDI 39 167 + #define OP_19_XOP_RFI 50 168 + #define OP_19_XOP_RFCI 51 169 + #define OP_19_XOP_RFSCV 82 170 + #define OP_19_XOP_HRFID 274 171 + #define OP_19_XOP_URFID 306 172 + #define OP_19_XOP_STOP 370 173 + #define OP_19_XOP_DOZE 402 174 + #define OP_19_XOP_NAP 434 175 + #define OP_19_XOP_SLEEP 466 176 + #define OP_19_XOP_RVWINKLE 498 132 177 133 178 #define OP_31_XOP_TRAP 4 134 179 #define OP_31_XOP_LDX 21 ··· 195 150 #define OP_31_XOP_LHZUX 311 196 151 #define OP_31_XOP_MSGSNDP 142 197 152 #define OP_31_XOP_MSGCLRP 174 153 + #define OP_31_XOP_MTMSR 146 154 + #define OP_31_XOP_MTMSRD 178 198 155 #define OP_31_XOP_TLBIE 306 199 156 #define OP_31_XOP_MFSPR 339 200 157 #define OP_31_XOP_LWAX 341 ··· 255 208 /* VMX Vector Store Instructions */ 256 209 #define OP_31_XOP_STVX 231 257 210 258 - /* Prefixed Instructions */ 259 - #define OP_PREFIX 1 260 - 261 - #define OP_31 31 262 - #define OP_LWZ 32 263 - #define OP_STFS 52 264 - #define OP_STFSU 53 265 - #define 
OP_STFD 54 266 - #define OP_STFDU 55 267 - #define OP_LD 58 268 - #define OP_LWZU 33 269 - #define OP_LBZ 34 270 - #define OP_LBZU 35 271 - #define OP_STW 36 272 - #define OP_STWU 37 273 - #define OP_STD 62 274 - #define OP_STB 38 275 - #define OP_STBU 39 276 - #define OP_LHZ 40 277 - #define OP_LHZU 41 278 - #define OP_LHA 42 279 - #define OP_LHAU 43 280 - #define OP_STH 44 281 - #define OP_STHU 45 282 - #define OP_LMW 46 283 - #define OP_STMW 47 284 - #define OP_LFS 48 285 - #define OP_LFSU 49 286 - #define OP_LFD 50 287 - #define OP_LFDU 51 288 - #define OP_STFS 52 289 - #define OP_STFSU 53 290 - #define OP_STFD 54 291 - #define OP_STFDU 55 292 - #define OP_LQ 56 293 - 294 211 /* sorted alphabetically */ 295 212 #define PPC_INST_BCCTR_FLUSH 0x4c400420 296 213 #define PPC_INST_COPY 0x7c20060c ··· 296 285 #define PPC_INST_TRECHKPT 0x7c0007dd 297 286 #define PPC_INST_TRECLAIM 0x7c00075d 298 287 #define PPC_INST_TSR 0x7c0005dd 299 - #define PPC_INST_LD 0xe8000000 300 - #define PPC_INST_STD 0xf8000000 301 - #define PPC_INST_ADDIS 0x3c000000 302 - #define PPC_INST_ADD 0x7c000214 303 - #define PPC_INST_DIVD 0x7c0003d2 304 - #define PPC_INST_BRANCH 0x48000000 305 - #define PPC_INST_BL 0x48000001 306 288 #define PPC_INST_BRANCH_COND 0x40800000 307 289 308 290 /* Prefixes */ ··· 355 351 #define PPC_HA(v) PPC_HI((v) + 0x8000) 356 352 #define PPC_HIGHER(v) (((v) >> 32) & 0xffff) 357 353 #define PPC_HIGHEST(v) (((v) >> 48) & 0xffff) 354 + 355 + /* LI Field */ 356 + #define PPC_LI_MASK 0x03fffffc 357 + #define PPC_LI(v) ((v) & PPC_LI_MASK) 358 358 359 359 /* 360 360 * Only use the larx hint bit on 64bit CPUs. 
e500v1/v2 based CPUs will treat a ··· 468 460 (0x100000c7 | ___PPC_RT(vrt) | ___PPC_RA(vra) | ___PPC_RB(vrb) | __PPC_RC21) 469 461 #define PPC_RAW_VCMPEQUB_RC(vrt, vra, vrb) \ 470 462 (0x10000006 | ___PPC_RT(vrt) | ___PPC_RA(vra) | ___PPC_RB(vrb) | __PPC_RC21) 471 - #define PPC_RAW_LD(r, base, i) (PPC_INST_LD | ___PPC_RT(r) | ___PPC_RA(base) | IMM_DS(i)) 463 + #define PPC_RAW_LD(r, base, i) (0xe8000000 | ___PPC_RT(r) | ___PPC_RA(base) | IMM_DS(i)) 472 464 #define PPC_RAW_LWZ(r, base, i) (0x80000000 | ___PPC_RT(r) | ___PPC_RA(base) | IMM_L(i)) 473 465 #define PPC_RAW_LWZX(t, a, b) (0x7c00002e | ___PPC_RT(t) | ___PPC_RA(a) | ___PPC_RB(b)) 474 - #define PPC_RAW_STD(r, base, i) (PPC_INST_STD | ___PPC_RS(r) | ___PPC_RA(base) | IMM_DS(i)) 466 + #define PPC_RAW_STD(r, base, i) (0xf8000000 | ___PPC_RS(r) | ___PPC_RA(base) | IMM_DS(i)) 475 467 #define PPC_RAW_STDCX(s, a, b) (0x7c0001ad | ___PPC_RS(s) | ___PPC_RA(a) | ___PPC_RB(b)) 476 468 #define PPC_RAW_LFSX(t, a, b) (0x7c00042e | ___PPC_RT(t) | ___PPC_RA(a) | ___PPC_RB(b)) 477 469 #define PPC_RAW_STFSX(s, a, b) (0x7c00052e | ___PPC_RS(s) | ___PPC_RA(a) | ___PPC_RB(b)) ··· 482 474 #define PPC_RAW_ADDE(t, a, b) (0x7c000114 | ___PPC_RT(t) | ___PPC_RA(a) | ___PPC_RB(b)) 483 475 #define PPC_RAW_ADDZE(t, a) (0x7c000194 | ___PPC_RT(t) | ___PPC_RA(a)) 484 476 #define PPC_RAW_ADDME(t, a) (0x7c0001d4 | ___PPC_RT(t) | ___PPC_RA(a)) 485 - #define PPC_RAW_ADD(t, a, b) (PPC_INST_ADD | ___PPC_RT(t) | ___PPC_RA(a) | ___PPC_RB(b)) 486 - #define PPC_RAW_ADD_DOT(t, a, b) (PPC_INST_ADD | ___PPC_RT(t) | ___PPC_RA(a) | ___PPC_RB(b) | 0x1) 477 + #define PPC_RAW_ADD(t, a, b) (0x7c000214 | ___PPC_RT(t) | ___PPC_RA(a) | ___PPC_RB(b)) 478 + #define PPC_RAW_ADD_DOT(t, a, b) (0x7c000214 | ___PPC_RT(t) | ___PPC_RA(a) | ___PPC_RB(b) | 0x1) 487 479 #define PPC_RAW_ADDC(t, a, b) (0x7c000014 | ___PPC_RT(t) | ___PPC_RA(a) | ___PPC_RB(b)) 488 480 #define PPC_RAW_ADDC_DOT(t, a, b) (0x7c000014 | ___PPC_RT(t) | ___PPC_RA(a) | ___PPC_RB(b) | 0x1) 489 481 
#define PPC_RAW_NOP() PPC_RAW_ORI(0, 0, 0) ··· 579 571 #define PPC_RAW_MTSPR(spr, d) (0x7c0003a6 | ___PPC_RS(d) | __PPC_SPR(spr)) 580 572 #define PPC_RAW_EIEIO() (0x7c0006ac) 581 573 582 - #define PPC_RAW_BRANCH(addr) (PPC_INST_BRANCH | ((addr) & 0x03fffffc)) 574 + #define PPC_RAW_BRANCH(offset) (0x48000000 | PPC_LI(offset)) 575 + #define PPC_RAW_BL(offset) (0x48000001 | PPC_LI(offset)) 583 576 584 577 /* Deal with instructions that older assemblers aren't aware of */ 585 578 #define PPC_BCCTR_FLUSH stringify_in_c(.long PPC_INST_BCCTR_FLUSH)
+2 -2
arch/powerpc/include/asm/ppc_asm.h
··· 149 149 #define __STK_REG(i) (112 + ((i)-14)*8) 150 150 #define STK_REG(i) __STK_REG(__REG_##i) 151 151 152 - #ifdef PPC64_ELF_ABI_v2 152 + #ifdef CONFIG_PPC64_ELF_ABI_V2 153 153 #define STK_GOT 24 154 154 #define __STK_PARAM(i) (32 + ((i)-3)*8) 155 155 #else ··· 158 158 #endif 159 159 #define STK_PARAM(i) __STK_PARAM(__REG_##i) 160 160 161 - #ifdef PPC64_ELF_ABI_v2 161 + #ifdef CONFIG_PPC64_ELF_ABI_V2 162 162 163 163 #define _GLOBAL(name) \ 164 164 .align 2 ; \
+36
arch/powerpc/include/asm/probes.h
··· 8 8 * Copyright IBM Corporation, 2012 9 9 */ 10 10 #include <linux/types.h> 11 + #include <asm/disassemble.h> 11 12 12 13 typedef u32 ppc_opcode_t; 13 14 #define BREAKPOINT_INSTRUCTION 0x7fe00008 /* trap */ ··· 31 30 #else 32 31 #define MSR_SINGLESTEP (MSR_SE) 33 32 #endif 33 + 34 + static inline bool can_single_step(u32 inst) 35 + { 36 + switch (get_op(inst)) { 37 + case OP_TRAP_64: return false; 38 + case OP_TRAP: return false; 39 + case OP_SC: return false; 40 + case OP_19: 41 + switch (get_xop(inst)) { 42 + case OP_19_XOP_RFID: return false; 43 + case OP_19_XOP_RFMCI: return false; 44 + case OP_19_XOP_RFDI: return false; 45 + case OP_19_XOP_RFI: return false; 46 + case OP_19_XOP_RFCI: return false; 47 + case OP_19_XOP_RFSCV: return false; 48 + case OP_19_XOP_HRFID: return false; 49 + case OP_19_XOP_URFID: return false; 50 + case OP_19_XOP_STOP: return false; 51 + case OP_19_XOP_DOZE: return false; 52 + case OP_19_XOP_NAP: return false; 53 + case OP_19_XOP_SLEEP: return false; 54 + case OP_19_XOP_RVWINKLE: return false; 55 + } 56 + break; 57 + case OP_31: 58 + switch (get_xop(inst)) { 59 + case OP_31_XOP_TRAP: return false; 60 + case OP_31_XOP_TRAP_64: return false; 61 + case OP_31_XOP_MTMSR: return false; 62 + case OP_31_XOP_MTMSRD: return false; 63 + } 64 + break; 65 + } 66 + return true; 67 + } 34 68 35 69 /* Enable single stepping for the current task */ 36 70 static inline void enable_single_step(struct pt_regs *regs)
-2
arch/powerpc/include/asm/processor.h
··· 392 392 393 393 #define spin_lock_prefetch(x) prefetchw(x) 394 394 395 - #define HAVE_ARCH_PICK_MMAP_LAYOUT 396 - 397 395 /* asm stubs */ 398 396 extern unsigned long isa300_idle_stop_noloss(unsigned long psscr_val); 399 397 extern unsigned long isa300_idle_stop_mayloss(unsigned long psscr_val);
+1 -1
arch/powerpc/include/asm/ptrace.h
··· 120 120 STACK_FRAME_OVERHEAD + KERNEL_REDZONE_SIZE) 121 121 #define STACK_FRAME_MARKER 12 122 122 123 - #ifdef PPC64_ELF_ABI_v2 123 + #ifdef CONFIG_PPC64_ELF_ABI_V2 124 124 #define STACK_FRAME_MIN_SIZE 32 125 125 #else 126 126 #define STACK_FRAME_MIN_SIZE STACK_FRAME_OVERHEAD
-3
arch/powerpc/include/asm/reg.h
··· 417 417 #define FSCR_DSCR __MASK(FSCR_DSCR_LG) 418 418 #define FSCR_INTR_CAUSE (ASM_CONST(0xFF) << 56) /* interrupt cause */ 419 419 #define SPRN_HFSCR 0xbe /* HV=1 Facility Status & Control Register */ 420 - #define HFSCR_PREFIX __MASK(FSCR_PREFIX_LG) 421 420 #define HFSCR_MSGP __MASK(FSCR_MSGP_LG) 422 421 #define HFSCR_TAR __MASK(FSCR_TAR_LG) 423 422 #define HFSCR_EBB __MASK(FSCR_EBB_LG) ··· 473 474 #ifndef SPRN_LPID 474 475 #define SPRN_LPID 0x13F /* Logical Partition Identifier */ 475 476 #endif 476 - #define LPID_RSVD_POWER7 0x3ff /* Reserved LPID for partn switching */ 477 - #define LPID_RSVD 0xfff /* Reserved LPID for partn switching */ 478 477 #define SPRN_HMER 0x150 /* Hypervisor maintenance exception reg */ 479 478 #define HMER_DEBUG_TRIG (1ul << (63 - 17)) /* Debug trigger */ 480 479 #define SPRN_HMEER 0x151 /* Hyp maintenance exception enable reg */
+5
arch/powerpc/include/asm/signal.h
··· 9 9 struct pt_regs; 10 10 void do_notify_resume(struct pt_regs *regs, unsigned long thread_info_flags); 11 11 12 + unsigned long get_min_sigframe_size_32(void); 13 + unsigned long get_min_sigframe_size_64(void); 14 + unsigned long get_min_sigframe_size(void); 15 + unsigned long get_min_sigframe_size_compat(void); 16 + 12 17 #endif /* _ASM_POWERPC_SIGNAL_H */
-46
arch/powerpc/include/asm/slice.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 2 - #ifndef _ASM_POWERPC_SLICE_H 3 - #define _ASM_POWERPC_SLICE_H 4 - 5 - #ifdef CONFIG_PPC_BOOK3S_64 6 - #include <asm/book3s/64/slice.h> 7 - #endif 8 - 9 - #ifndef __ASSEMBLY__ 10 - 11 - struct mm_struct; 12 - 13 - #ifdef CONFIG_PPC_MM_SLICES 14 - 15 - #ifdef CONFIG_HUGETLB_PAGE 16 - #define HAVE_ARCH_HUGETLB_UNMAPPED_AREA 17 - #endif 18 - #define HAVE_ARCH_UNMAPPED_AREA 19 - #define HAVE_ARCH_UNMAPPED_AREA_TOPDOWN 20 - 21 - unsigned long slice_get_unmapped_area(unsigned long addr, unsigned long len, 22 - unsigned long flags, unsigned int psize, 23 - int topdown); 24 - 25 - unsigned int get_slice_psize(struct mm_struct *mm, unsigned long addr); 26 - 27 - void slice_set_range_psize(struct mm_struct *mm, unsigned long start, 28 - unsigned long len, unsigned int psize); 29 - 30 - void slice_init_new_context_exec(struct mm_struct *mm); 31 - void slice_setup_new_exec(void); 32 - 33 - #else /* CONFIG_PPC_MM_SLICES */ 34 - 35 - static inline void slice_init_new_context_exec(struct mm_struct *mm) {} 36 - 37 - static inline unsigned int get_slice_psize(struct mm_struct *mm, unsigned long addr) 38 - { 39 - return 0; 40 - } 41 - 42 - #endif /* CONFIG_PPC_MM_SLICES */ 43 - 44 - #endif /* __ASSEMBLY__ */ 45 - 46 - #endif /* _ASM_POWERPC_SLICE_H */
-2
arch/powerpc/include/asm/smp.h
··· 189 189 #define smp_setup_cpu_maps() 190 190 #define thread_group_shares_l2 0 191 191 #define thread_group_shares_l3 0 192 - static inline void inhibit_secondary_onlining(void) {} 193 - static inline void uninhibit_secondary_onlining(void) {} 194 192 static inline const struct cpumask *cpu_sibling_mask(int cpu) 195 193 { 196 194 return cpumask_of(cpu);
+2
arch/powerpc/include/asm/svm.h
··· 10 10 11 11 #ifdef CONFIG_PPC_SVM 12 12 13 + #include <asm/reg.h> 14 + 13 15 static inline bool is_secure_guest(void) 14 16 { 15 17 return mfmsr() & MSR_S;
+9
arch/powerpc/include/asm/switch_to.h
··· 62 62 #else 63 63 static inline void save_altivec(struct task_struct *t) { } 64 64 static inline void __giveup_altivec(struct task_struct *t) { } 65 + static inline void enable_kernel_altivec(void) 66 + { 67 + BUILD_BUG(); 68 + } 69 + 70 + static inline void disable_kernel_altivec(void) 71 + { 72 + BUILD_BUG(); 73 + } 65 74 #endif 66 75 67 76 #ifdef CONFIG_VSX
+8
arch/powerpc/include/asm/task_size_64.h
··· 72 72 #define STACK_TOP_MAX TASK_SIZE_USER64 73 73 #define STACK_TOP (is_32bit_task() ? STACK_TOP_USER32 : STACK_TOP_USER64) 74 74 75 + #define arch_get_mmap_base(addr, base) \ 76 + (((addr) > DEFAULT_MAP_WINDOW) ? (base) + TASK_SIZE - DEFAULT_MAP_WINDOW : (base)) 77 + 78 + #define arch_get_mmap_end(addr, len, flags) \ 79 + (((addr) > DEFAULT_MAP_WINDOW) || \ 80 + (((flags) & MAP_FIXED) && ((addr) + (len) > DEFAULT_MAP_WINDOW)) ? TASK_SIZE : \ 81 + DEFAULT_MAP_WINDOW) 82 + 75 83 #endif /* _ASM_POWERPC_TASK_SIZE_64_H */
+1
arch/powerpc/include/asm/time.h
··· 24 24 extern unsigned long tb_ticks_per_usec; 25 25 extern unsigned long tb_ticks_per_sec; 26 26 extern struct clock_event_device decrementer_clockevent; 27 + extern u64 decrementer_max; 27 28 28 29 29 30 extern void generic_calibrate_decr(void);
+2 -6
arch/powerpc/include/asm/topology.h
··· 111 111 #endif /* CONFIG_NUMA */ 112 112 113 113 #if defined(CONFIG_NUMA) && defined(CONFIG_PPC_SPLPAR) 114 - extern int find_and_online_cpu_nid(int cpu); 114 + void find_and_update_cpu_nid(int cpu); 115 115 extern int cpu_to_coregroup_id(int cpu); 116 116 #else 117 - static inline int find_and_online_cpu_nid(int cpu) 118 - { 119 - return 0; 120 - } 121 - 117 + static inline void find_and_update_cpu_nid(int cpu) {} 122 118 static inline int cpu_to_coregroup_id(int cpu) 123 119 { 124 120 #ifdef CONFIG_SMP
-8
arch/powerpc/include/asm/types.h
··· 11 11 12 12 #include <uapi/asm/types.h> 13 13 14 - #ifdef __powerpc64__ 15 - #if defined(_CALL_ELF) && _CALL_ELF == 2 16 - #define PPC64_ELF_ABI_v2 1 17 - #else 18 - #define PPC64_ELF_ABI_v1 1 19 - #endif 20 - #endif /* __powerpc64__ */ 21 - 22 14 #ifndef __ASSEMBLY__ 23 15 24 16 typedef __vector128 vector128;
+1 -1
arch/powerpc/include/asm/vas.h
··· 126 126 * Receive window attributes specified by the (in-kernel) owner of window. 127 127 */ 128 128 struct vas_rx_win_attr { 129 - void *rx_fifo; 129 + u64 rx_fifo; 130 130 int rx_fifo_size; 131 131 int wcreds_max; 132 132
+3 -1
arch/powerpc/include/uapi/asm/auxvec.h
··· 48 48 #define AT_L3_CACHESIZE 46 49 49 #define AT_L3_CACHEGEOMETRY 47 50 50 51 - #define AT_VECTOR_SIZE_ARCH 14 /* entries in ARCH_DLINFO */ 51 + #define AT_MINSIGSTKSZ 51 /* stack needed for signal delivery */ 52 + 53 + #define AT_VECTOR_SIZE_ARCH 15 /* entries in ARCH_DLINFO */ 52 54 53 55 #endif
+5
arch/powerpc/include/uapi/asm/signal.h
··· 62 62 63 63 #define SA_RESTORER 0x04000000U 64 64 65 + #ifdef __powerpc64__ 66 + #define MINSIGSTKSZ 8192 67 + #define SIGSTKSZ 32768 68 + #else 65 69 #define MINSIGSTKSZ 2048 66 70 #define SIGSTKSZ 8192 71 + #endif 67 72 68 73 #include <asm-generic/signal-defs.h> 69 74
+12 -1
arch/powerpc/kernel/Makefile
··· 33 33 KASAN_SANITIZE_cputable.o := n 34 34 KASAN_SANITIZE_prom_init.o := n 35 35 KASAN_SANITIZE_btext.o := n 36 + KASAN_SANITIZE_paca.o := n 37 + KASAN_SANITIZE_setup_64.o := n 38 + KASAN_SANITIZE_mce.o := n 39 + KASAN_SANITIZE_mce_power.o := n 40 + 41 + # we have to be particularly careful in ppc64 to exclude code that 42 + # runs with translations off, as we cannot access the shadow with 43 + # translations off. However, ppc32 can sanitize this. 44 + ifdef CONFIG_PPC64 45 + KASAN_SANITIZE_traps.o := n 46 + endif 36 47 37 48 ifdef CONFIG_KASAN 38 49 CFLAGS_early_32.o += -DDISABLE_BRANCH_PROFILING ··· 79 68 procfs-y := proc_powerpc.o 80 69 obj-$(CONFIG_PROC_FS) += $(procfs-y) 81 70 rtaspci-$(CONFIG_PPC64)-$(CONFIG_PCI) := rtas_pci.o 82 - obj-$(CONFIG_PPC_RTAS) += rtas.o rtas-rtc.o $(rtaspci-y-y) 71 + obj-$(CONFIG_PPC_RTAS) += rtas_entry.o rtas.o rtas-rtc.o $(rtaspci-y-y) 83 72 obj-$(CONFIG_PPC_RTAS_DAEMON) += rtasd.o 84 73 obj-$(CONFIG_RTAS_FLASH) += rtas_flash.o 85 74 obj-$(CONFIG_RTAS_PROC) += rtas-proc.o
+2 -3
arch/powerpc/kernel/btext.c
··· 10 10 #include <linux/export.h> 11 11 #include <linux/memblock.h> 12 12 #include <linux/pgtable.h> 13 + #include <linux/of.h> 13 14 14 15 #include <asm/sections.h> 15 - #include <asm/prom.h> 16 16 #include <asm/btext.h> 17 17 #include <asm/page.h> 18 18 #include <asm/mmu.h> ··· 45 45 46 46 static unsigned char vga_font[cmapsz]; 47 47 48 - int boot_text_mapped __force_data = 0; 49 - int force_printk_to_btext = 0; 48 + static int boot_text_mapped __force_data; 50 49 51 50 extern void rmci_on(void); 52 51 extern void rmci_off(void);
-1
arch/powerpc/kernel/cacheinfo.c
··· 18 18 #include <linux/of.h> 19 19 #include <linux/percpu.h> 20 20 #include <linux/slab.h> 21 - #include <asm/prom.h> 22 21 #include <asm/cputhreads.h> 23 22 #include <asm/smp.h> 24 23
+23 -5
arch/powerpc/kernel/cputable.c
··· 12 12 #include <linux/init.h> 13 13 #include <linux/export.h> 14 14 #include <linux/jump_label.h> 15 + #include <linux/of.h> 15 16 16 17 #include <asm/cputable.h> 17 - #include <asm/prom.h> /* for PTRRELOC on ARCH=ppc */ 18 18 #include <asm/mce.h> 19 19 #include <asm/mmu.h> 20 20 #include <asm/setup.h> ··· 487 487 .machine_check_early = __machine_check_early_realmode_p9, 488 488 .platform = "power9", 489 489 }, 490 - { /* Power9 DD2.2 or later */ 490 + { /* Power9 DD2.2 */ 491 + .pvr_mask = 0xffffefff, 492 + .pvr_value = 0x004e0202, 493 + .cpu_name = "POWER9 (raw)", 494 + .cpu_features = CPU_FTRS_POWER9_DD2_2, 495 + .cpu_user_features = COMMON_USER_POWER9, 496 + .cpu_user_features2 = COMMON_USER2_POWER9, 497 + .mmu_features = MMU_FTRS_POWER9, 498 + .icache_bsize = 128, 499 + .dcache_bsize = 128, 500 + .num_pmcs = 6, 501 + .pmc_type = PPC_PMC_IBM, 502 + .oprofile_cpu_type = "ppc64/power9", 503 + .cpu_setup = __setup_cpu_power9, 504 + .cpu_restore = __restore_cpu_power9, 505 + .machine_check_early = __machine_check_early_realmode_p9, 506 + .platform = "power9", 507 + }, 508 + { /* Power9 DD2.3 or later */ 491 509 .pvr_mask = 0xffff0000, 492 510 .pvr_value = 0x004e0000, 493 511 .cpu_name = "POWER9 (raw)", 494 - .cpu_features = CPU_FTRS_POWER9_DD2_2, 512 + .cpu_features = CPU_FTRS_POWER9_DD2_3, 495 513 .cpu_user_features = COMMON_USER_POWER9, 496 514 .cpu_user_features2 = COMMON_USER2_POWER9, 497 515 .mmu_features = MMU_FTRS_POWER9, ··· 2043 2025 * oprofile_cpu_type already has a value, then we are 2044 2026 * possibly overriding a real PVR with a logical one, 2045 2027 * and, in that case, keep the current value for 2046 - * oprofile_cpu_type. Futhermore, let's ensure that the 2028 + * oprofile_cpu_type. Furthermore, let's ensure that the 2047 2029 * fix for the PMAO bug is enabled on compatibility mode. 2048 2030 */ 2049 2031 if (old.oprofile_cpu_type != NULL) { ··· 2137 2119 struct static_key_true mmu_feature_keys[NUM_MMU_FTR_KEYS] = { 2138 2120 [0 ... NUM_MMU_FTR_KEYS - 1] = STATIC_KEY_TRUE_INIT 2139 2121 }; 2140 - EXPORT_SYMBOL_GPL(mmu_feature_keys); 2122 + EXPORT_SYMBOL(mmu_feature_keys); 2141 2123 2142 2124 void __init mmu_feature_keys_init(void) 2143 2125 {
+1 -1
arch/powerpc/kernel/crash_dump.c
··· 12 12 #include <linux/crash_dump.h> 13 13 #include <linux/io.h> 14 14 #include <linux/memblock.h> 15 + #include <linux/of.h> 15 16 #include <asm/code-patching.h> 16 17 #include <asm/kdump.h> 17 - #include <asm/prom.h> 18 18 #include <asm/firmware.h> 19 19 #include <linux/uio.h> 20 20 #include <asm/rtas.h>
+1 -1
arch/powerpc/kernel/dawr.c
··· 27 27 dawrx |= (brk->type & (HW_BRK_TYPE_PRIV_ALL)) >> 3; 28 28 /* 29 29 * DAWR length is stored in field MDR bits 48:53. Matches range in 30 - * doublewords (64 bits) baised by -1 eg. 0b000000=1DW and 30 + * doublewords (64 bits) biased by -1 eg. 0b000000=1DW and 31 31 * 0b111111=64DW. 32 32 * brk->hw_len is in bytes. 33 33 * This aligns up to double word size, shifts and does the bias.
+8 -2
arch/powerpc/kernel/dt_cpu_ftrs.c
··· 10 10 #include <linux/jump_label.h> 11 11 #include <linux/libfdt.h> 12 12 #include <linux/memblock.h> 13 + #include <linux/of_fdt.h> 13 14 #include <linux/printk.h> 14 15 #include <linux/sched.h> 15 16 #include <linux/string.h> ··· 20 19 #include <asm/dt_cpu_ftrs.h> 21 20 #include <asm/mce.h> 22 21 #include <asm/mmu.h> 23 - #include <asm/prom.h> 24 22 #include <asm/setup.h> 25 23 26 24 ··· 774 774 if ((version & 0xffffefff) == 0x004e0200) { 775 775 /* DD2.0 has no feature flag */ 776 776 cur_cpu_spec->cpu_features |= CPU_FTR_P9_RADIX_PREFETCH_BUG; 777 + cur_cpu_spec->cpu_features &= ~(CPU_FTR_DAWR); 777 778 } else if ((version & 0xffffefff) == 0x004e0201) { 778 779 cur_cpu_spec->cpu_features |= CPU_FTR_POWER9_DD2_1; 779 780 cur_cpu_spec->cpu_features |= CPU_FTR_P9_RADIX_PREFETCH_BUG; 781 + cur_cpu_spec->cpu_features &= ~(CPU_FTR_DAWR); 780 782 } else if ((version & 0xffffefff) == 0x004e0202) { 783 + cur_cpu_spec->cpu_features |= CPU_FTR_P9_TM_HV_ASSIST; 784 + cur_cpu_spec->cpu_features |= CPU_FTR_P9_TM_XER_SO_BUG; 785 + cur_cpu_spec->cpu_features |= CPU_FTR_POWER9_DD2_1; 786 + cur_cpu_spec->cpu_features &= ~(CPU_FTR_DAWR); 787 + } else if ((version & 0xffffefff) == 0x004e0203) { 781 788 cur_cpu_spec->cpu_features |= CPU_FTR_P9_TM_HV_ASSIST; 782 789 cur_cpu_spec->cpu_features |= CPU_FTR_P9_TM_XER_SO_BUG; 783 790 cur_cpu_spec->cpu_features |= CPU_FTR_POWER9_DD2_1; ··· 794 787 } 795 788 796 789 if ((version & 0xffff0000) == 0x004e0000) { 797 - cur_cpu_spec->cpu_features &= ~(CPU_FTR_DAWR); 798 790 cur_cpu_spec->cpu_features |= CPU_FTR_P9_TIDR; 799 791 } 800 792
+2 -2
arch/powerpc/kernel/eeh.c
··· 1329 1329 1330 1330 /* 1331 1331 * EEH functionality could possibly be disabled, just 1332 - * return error for the case. And the EEH functinality 1332 + * return error for the case. And the EEH functionality 1333 1333 * isn't expected to be disabled on one specific PE. 1334 1334 */ 1335 1335 switch (option) { ··· 1804 1804 * PE freeze. Using the in_8() accessor skips the eeh detection hook 1805 1805 * so the freeze hook so the EEH Detection machinery won't be 1806 1806 * triggered here. This is to match the usual behaviour of EEH 1807 - * where the HW will asyncronously freeze a PE and it's up to 1807 + * where the HW will asynchronously freeze a PE and it's up to 1808 1808 * the kernel to notice and deal with it. 1809 1809 * 1810 1810 * 3. Turn Memory space back on. This is more important for VFs
-1
arch/powerpc/kernel/eeh_driver.c
··· 16 16 #include <asm/eeh_event.h> 17 17 #include <asm/ppc-pci.h> 18 18 #include <asm/pci-bridge.h> 19 - #include <asm/prom.h> 20 19 #include <asm/rtas.h> 21 20 22 21 struct eeh_rmv_data {
+1 -1
arch/powerpc/kernel/eeh_event.c
··· 143 143 int eeh_send_failure_event(struct eeh_pe *pe) 144 144 { 145 145 /* 146 - * If we've manually supressed recovery events via debugfs 146 + * If we've manually suppressed recovery events via debugfs 147 147 * then just drop it on the floor. 148 148 */ 149 149 if (eeh_debugfs_no_recover) {
+2 -1
arch/powerpc/kernel/eeh_pe.c
··· 13 13 #include <linux/export.h> 14 14 #include <linux/gfp.h> 15 15 #include <linux/kernel.h> 16 + #include <linux/of.h> 16 17 #include <linux/pci.h> 17 18 #include <linux/string.h> 18 19 ··· 302 301 * @new_pe_parent. 303 302 * 304 303 * If @new_pe_parent is NULL then the new PE will be inserted under 305 - * directly under the the PHB. 304 + * directly under the PHB. 306 305 */ 307 306 int eeh_pe_tree_insert(struct eeh_dev *edev, struct eeh_pe *new_pe_parent) 308 307 {
+1
arch/powerpc/kernel/eeh_sysfs.c
··· 6 6 * 7 7 * Send comments and feedback to Linas Vepstas <linas@austin.ibm.com> 8 8 */ 9 + #include <linux/of.h> 9 10 #include <linux/pci.h> 10 11 #include <linux/stat.h> 11 12 #include <asm/ppc-pci.h>
-49
arch/powerpc/kernel/entry_32.S
··· 555 555 _ASM_NOKPROBE_SYMBOL(ret_from_mcheck_exc) 556 556 #endif /* CONFIG_BOOKE */ 557 557 #endif /* !(CONFIG_4xx || CONFIG_BOOKE) */ 558 - 559 - /* 560 - * PROM code for specific machines follows. Put it 561 - * here so it's easy to add arch-specific sections later. 562 - * -- Cort 563 - */ 564 - #ifdef CONFIG_PPC_RTAS 565 - /* 566 - * On CHRP, the Run-Time Abstraction Services (RTAS) have to be 567 - * called with the MMU off. 568 - */ 569 - _GLOBAL(enter_rtas) 570 - stwu r1,-INT_FRAME_SIZE(r1) 571 - mflr r0 572 - stw r0,INT_FRAME_SIZE+4(r1) 573 - LOAD_REG_ADDR(r4, rtas) 574 - lis r6,1f@ha /* physical return address for rtas */ 575 - addi r6,r6,1f@l 576 - tophys(r6,r6) 577 - lwz r8,RTASENTRY(r4) 578 - lwz r4,RTASBASE(r4) 579 - mfmsr r9 580 - stw r9,8(r1) 581 - LOAD_REG_IMMEDIATE(r0,MSR_KERNEL) 582 - mtmsr r0 /* disable interrupts so SRR0/1 don't get trashed */ 583 - li r9,MSR_KERNEL & ~(MSR_IR|MSR_DR) 584 - mtlr r6 585 - stw r1, THREAD + RTAS_SP(r2) 586 - mtspr SPRN_SRR0,r8 587 - mtspr SPRN_SRR1,r9 588 - rfi 589 - 1: 590 - lis r8, 1f@h 591 - ori r8, r8, 1f@l 592 - LOAD_REG_IMMEDIATE(r9,MSR_KERNEL) 593 - mtspr SPRN_SRR0,r8 594 - mtspr SPRN_SRR1,r9 595 - rfi /* Reactivate MMU translation */ 596 - 1: 597 - lwz r8,INT_FRAME_SIZE+4(r1) /* get return address */ 598 - lwz r9,8(r1) /* original msr value */ 599 - addi r1,r1,INT_FRAME_SIZE 600 - li r0,0 601 - stw r0, THREAD + RTAS_SP(r2) 602 - mtlr r8 603 - mtmsr r9 604 - blr /* return to caller */ 605 - _ASM_NOKPROBE_SYMBOL(enter_rtas) 606 - #endif /* CONFIG_PPC_RTAS */
-150
arch/powerpc/kernel/entry_64.S
··· 264 264 addi r1,r1,SWITCH_FRAME_SIZE 265 265 blr 266 266 267 - #ifdef CONFIG_PPC_RTAS 268 - /* 269 - * On CHRP, the Run-Time Abstraction Services (RTAS) have to be 270 - * called with the MMU off. 271 - * 272 - * In addition, we need to be in 32b mode, at least for now. 273 - * 274 - * Note: r3 is an input parameter to rtas, so don't trash it... 275 - */ 276 - _GLOBAL(enter_rtas) 277 - mflr r0 278 - std r0,16(r1) 279 - stdu r1,-SWITCH_FRAME_SIZE(r1) /* Save SP and create stack space. */ 280 - 281 - /* Because RTAS is running in 32b mode, it clobbers the high order half 282 - * of all registers that it saves. We therefore save those registers 283 - * RTAS might touch to the stack. (r0, r3-r13 are caller saved) 284 - */ 285 - SAVE_GPR(2, r1) /* Save the TOC */ 286 - SAVE_GPR(13, r1) /* Save paca */ 287 - SAVE_NVGPRS(r1) /* Save the non-volatiles */ 288 - 289 - mfcr r4 290 - std r4,_CCR(r1) 291 - mfctr r5 292 - std r5,_CTR(r1) 293 - mfspr r6,SPRN_XER 294 - std r6,_XER(r1) 295 - mfdar r7 296 - std r7,_DAR(r1) 297 - mfdsisr r8 298 - std r8,_DSISR(r1) 299 - 300 - /* Temporary workaround to clear CR until RTAS can be modified to 301 - * ignore all bits. 302 - */ 303 - li r0,0 304 - mtcr r0 305 - 306 - #ifdef CONFIG_BUG 307 - /* There is no way it is acceptable to get here with interrupts enabled, 308 - * check it with the asm equivalent of WARN_ON 309 - */ 310 - lbz r0,PACAIRQSOFTMASK(r13) 311 - 1: tdeqi r0,IRQS_ENABLED 312 - EMIT_WARN_ENTRY 1b,__FILE__,__LINE__,BUGFLAG_WARNING 313 - #endif 314 - 315 - /* Hard-disable interrupts */ 316 - mfmsr r6 317 - rldicl r7,r6,48,1 318 - rotldi r7,r7,16 319 - mtmsrd r7,1 320 - 321 - /* Unfortunately, the stack pointer and the MSR are also clobbered, 322 - * so they are saved in the PACA which allows us to restore 323 - * our original state after RTAS returns. 324 - */ 325 - std r1,PACAR1(r13) 326 - std r6,PACASAVEDMSR(r13) 327 - 328 - /* Setup our real return addr */ 329 - LOAD_REG_ADDR(r4,rtas_return_loc) 330 - clrldi r4,r4,2 /* convert to realmode address */ 331 - mtlr r4 332 - 333 - li r0,0 334 - ori r0,r0,MSR_EE|MSR_SE|MSR_BE|MSR_RI 335 - andc r0,r6,r0 336 - 337 - li r9,1 338 - rldicr r9,r9,MSR_SF_LG,(63-MSR_SF_LG) 339 - ori r9,r9,MSR_IR|MSR_DR|MSR_FE0|MSR_FE1|MSR_FP|MSR_RI|MSR_LE 340 - andc r6,r0,r9 341 - 342 - __enter_rtas: 343 - sync /* disable interrupts so SRR0/1 */ 344 - mtmsrd r0 /* don't get trashed */ 345 - 346 - LOAD_REG_ADDR(r4, rtas) 347 - ld r5,RTASENTRY(r4) /* get the rtas->entry value */ 348 - ld r4,RTASBASE(r4) /* get the rtas->base value */ 349 - 350 - mtspr SPRN_SRR0,r5 351 - mtspr SPRN_SRR1,r6 352 - RFI_TO_KERNEL 353 - b . /* prevent speculative execution */ 354 - 355 - rtas_return_loc: 356 - FIXUP_ENDIAN 357 - 358 - /* 359 - * Clear RI and set SF before anything. 360 - */ 361 - mfmsr r6 362 - li r0,MSR_RI 363 - andc r6,r6,r0 364 - sldi r0,r0,(MSR_SF_LG - MSR_RI_LG) 365 - or r6,r6,r0 366 - sync 367 - mtmsrd r6 368 - 369 - /* relocation is off at this point */ 370 - GET_PACA(r4) 371 - clrldi r4,r4,2 /* convert to realmode address */ 372 - 373 - bcl 20,31,$+4 374 - 0: mflr r3 375 - ld r3,(1f-0b)(r3) /* get &rtas_restore_regs */ 376 - 377 - ld r1,PACAR1(r4) /* Restore our SP */ 378 - ld r4,PACASAVEDMSR(r4) /* Restore our MSR */ 379 - 380 - mtspr SPRN_SRR0,r3 381 - mtspr SPRN_SRR1,r4 382 - RFI_TO_KERNEL 383 - b . /* prevent speculative execution */ 384 - _ASM_NOKPROBE_SYMBOL(__enter_rtas) 385 - _ASM_NOKPROBE_SYMBOL(rtas_return_loc) 386 - 387 - .align 3 388 - 1: .8byte rtas_restore_regs 389 - 390 - rtas_restore_regs: 391 - /* relocation is on at this point */ 392 - REST_GPR(2, r1) /* Restore the TOC */ 393 - REST_GPR(13, r1) /* Restore paca */ 394 - REST_NVGPRS(r1) /* Restore the non-volatiles */ 395 - 396 - GET_PACA(r13) 397 - 398 - ld r4,_CCR(r1) 399 - mtcr r4 400 - ld r5,_CTR(r1) 401 - mtctr r5 402 - ld r6,_XER(r1) 403 - mtspr SPRN_XER,r6 404 - ld r7,_DAR(r1) 405 - mtdar r7 406 - ld r8,_DSISR(r1) 407 - mtdsisr r8 408 - 409 - addi r1,r1,SWITCH_FRAME_SIZE /* Unstack our frame */ 410 - ld r0,16(r1) /* get return address */ 411 - 412 - mtlr r0 413 - blr /* return to caller */ 414 - 415 - #endif /* CONFIG_PPC_RTAS */ 416 - 417 267 _GLOBAL(enter_prom) 418 268 mflr r0 419 269 std r0,16(r1)
+33 -19
arch/powerpc/kernel/fadump.c
··· 25 25 #include <linux/cma.h> 26 26 #include <linux/hugetlb.h> 27 27 #include <linux/debugfs.h> 28 + #include <linux/of.h> 29 + #include <linux/of_fdt.h> 28 30 29 31 #include <asm/page.h> 30 - #include <asm/prom.h> 31 32 #include <asm/fadump.h> 32 33 #include <asm/fadump-internal.h> 33 34 #include <asm/setup.h> ··· 74 73 * The total size of fadump reserved memory covers for boot memory size 75 74 * + cpu data size + hpte size and metadata. 76 75 * Initialize only the area equivalent to boot memory size for CMA use. 77 - * The reamining portion of fadump reserved memory will be not given 78 - * to CMA and pages for thoes will stay reserved. boot memory size is 76 + * The remaining portion of fadump reserved memory will be not given 77 + * to CMA and pages for those will stay reserved. boot memory size is 79 78 * aligned per CMA requirement to satisy cma_init_reserved_mem() call. 80 79 * But for some reason even if it fails we still have the memory reservation 81 80 * with us and we can still continue doing fadump. ··· 366 365 367 366 size += fw_dump.cpu_state_data_size; 368 367 size += fw_dump.hpte_region_size; 368 + /* 369 + * Account for pagesize alignment of boot memory area destination address. 370 + * This faciliates in mmap reading of first kernel's memory. 371 + */ 372 + size = PAGE_ALIGN(size); 369 373 size += fw_dump.boot_memory_size; 370 374 size += sizeof(struct fadump_crash_info_header); 371 375 size += sizeof(struct elfhdr); /* ELF core header.*/ ··· 734 728 else 735 729 ppc_save_regs(&fdh->regs); 736 730 737 - fdh->online_mask = *cpu_online_mask; 731 + fdh->cpu_mask = *cpu_online_mask; 738 732 739 733 /* 740 734 * If we came in via system reset, wait a while for the secondary ··· 873 867 sizeof(struct fadump_memory_range)); 874 868 return 0; 875 869 } 876 - 877 870 static inline int fadump_add_mem_range(struct fadump_mrange_info *mrange_info, 878 871 u64 base, u64 end) 879 872 { ··· 891 886 start = mem_ranges[mrange_info->mem_range_cnt - 1].base; 892 887 size = mem_ranges[mrange_info->mem_range_cnt - 1].size; 893 888 894 - if ((start + size) == base) 889 + /* 890 + * Boot memory area needs separate PT_LOAD segment(s) as it 891 + * is moved to a different location at the time of crash. 892 + * So, fold only if the region is not boot memory area. 893 + */ 894 + if ((start + size) == base && start >= fw_dump.boot_mem_top) 895 895 is_adjacent = true; 896 896 } 897 897 if (!is_adjacent) { ··· 978 968 elf->e_entry = 0; 979 969 elf->e_phoff = sizeof(struct elfhdr); 980 970 elf->e_shoff = 0; 981 - #if defined(_CALL_ELF) 982 - elf->e_flags = _CALL_ELF; 983 - #else 984 - elf->e_flags = 0; 985 - #endif 971 + 972 + if (IS_ENABLED(CONFIG_PPC64_ELF_ABI_V2)) 973 + elf->e_flags = 2; 974 + else if (IS_ENABLED(CONFIG_PPC64_ELF_ABI_V1)) 975 + elf->e_flags = 1; 976 + else 977 + elf->e_flags = 0; 978 + 986 979 elf->e_ehsize = sizeof(struct elfhdr); 987 980 elf->e_phentsize = sizeof(struct elf_phdr); 988 981 elf->e_phnum = 0; ··· 1177 1164 fdh->elfcorehdr_addr = addr; 1178 1165 /* We will set the crashing cpu id in crash_fadump() during crash. */ 1179 1166 fdh->crashing_cpu = FADUMP_CPU_UNKNOWN; 1167 + /* 1168 + * When LPAR is terminated by PYHP, ensure all possible CPUs' 1169 + * register data is processed while exporting the vmcore. 1170 + */ 1171 + fdh->cpu_mask = *cpu_possible_mask; 1180 1172 1181 1173 return addr; 1182 1174 } ··· 1289 1271 static void sort_and_merge_mem_ranges(struct fadump_mrange_info *mrange_info) 1290 1272 { 1291 1273 struct fadump_memory_range *mem_ranges; 1292 - struct fadump_memory_range tmp_range; 1293 1274 u64 base, size; 1294 1275 int i, j, idx; 1295 1276 ··· 1303 1286 if (mem_ranges[idx].base > mem_ranges[j].base) 1304 1287 idx = j; 1305 1288 } 1306 - if (idx != i) { 1307 - tmp_range = mem_ranges[idx]; 1308 - mem_ranges[idx] = mem_ranges[i]; 1309 - mem_ranges[i] = tmp_range; 1310 - } 1289 + if (idx != i) 1290 + swap(mem_ranges[idx], mem_ranges[i]) 1311 1291 } 1312 1292 1313 1293 /* Merge adjacent reserved ranges */ ··· 1675 1661 } 1676 1662 /* 1677 1663 * Use subsys_initcall_sync() here because there is dependency with 1678 - * crash_save_vmcoreinfo_init(), which mush run first to ensure vmcoreinfo initialization 1679 - * is done before regisering with f/w. 1664 + * crash_save_vmcoreinfo_init(), which must run first to ensure vmcoreinfo initialization 1665 + * is done before registering with f/w. 1680 1666 */ 1681 1667 subsys_initcall_sync(setup_fadump); 1682 1668 #else /* !CONFIG_PRESERVE_FA_DUMP */
+2 -2
arch/powerpc/kernel/head_64.S
··· 111 111 #ifdef CONFIG_RELOCATABLE 112 112 /* This flag is set to 1 by a loader if the kernel should run 113 113 * at the loaded address instead of the linked address. This 114 - * is used by kexec-tools to keep the the kdump kernel in the 114 + * is used by kexec-tools to keep the kdump kernel in the 115 115 * crash_kernel region. The loader is responsible for 116 116 * observing the alignment requirement. 117 117 */ ··· 435 435 ld r12,CPU_SPEC_RESTORE(r23) 436 436 cmpdi 0,r12,0 437 437 beq 3f 438 - #ifdef PPC64_ELF_ABI_v1 438 + #ifdef CONFIG_PPC64_ELF_ABI_V1 439 439 ld r12,0(r12) 440 440 #endif 441 441 mtctr r12
+1 -1
arch/powerpc/kernel/idle.c
··· 37 37 { 38 38 ppc_md.power_save = NULL; 39 39 cpuidle_disable = IDLE_POWERSAVE_OFF; 40 - return 0; 40 + return 1; 41 41 } 42 42 __setup("powersave=off", powersave_off); 43 43
+1 -11
arch/powerpc/kernel/interrupt_64.S
··· 219 219 */ 220 220 system_call_vectored sigill 0x7ff0 221 221 222 - 223 - /* 224 - * Entered via kernel return set up by kernel/sstep.c, must match entry regs 225 - */ 226 - .globl system_call_vectored_emulate 227 - system_call_vectored_emulate: 228 - _ASM_NOKPROBE_SYMBOL(system_call_vectored_emulate) 229 - li r10,IRQS_ALL_DISABLED 230 - stb r10,PACAIRQSOFTMASK(r13) 231 - b system_call_vectored_common 232 222 #endif /* CONFIG_PPC_BOOK3S */ 233 223 234 224 .balign IFETCH_ALIGN_BYTES ··· 711 721 REST_NVGPRS(r1) 712 722 mtctr r14 713 723 mr r3,r15 714 - #ifdef PPC64_ELF_ABI_v2 724 + #ifdef CONFIG_PPC64_ELF_ABI_V2 715 725 mr r12,r14 716 726 #endif 717 727 bctrl
+2 -3
arch/powerpc/kernel/iommu.c
··· 27 27 #include <linux/sched.h> 28 28 #include <linux/debugfs.h> 29 29 #include <asm/io.h> 30 - #include <asm/prom.h> 31 30 #include <asm/iommu.h> 32 31 #include <asm/pci-bridge.h> 33 32 #include <asm/machdep.h> ··· 1064 1065 long ret; 1065 1066 unsigned long size = 0; 1066 1067 1067 - ret = tbl->it_ops->xchg_no_kill(tbl, entry, hpa, direction, false); 1068 + ret = tbl->it_ops->xchg_no_kill(tbl, entry, hpa, direction); 1068 1069 if (!ret && ((*direction == DMA_FROM_DEVICE) || 1069 1070 (*direction == DMA_BIDIRECTIONAL)) && 1070 1071 !mm_iommu_is_devmem(mm, *hpa, tbl->it_page_shift, ··· 1079 1080 unsigned long entry, unsigned long pages) 1080 1081 { 1081 1082 if (tbl->it_ops->tce_kill) 1082 - tbl->it_ops->tce_kill(tbl, entry, pages, false); 1083 + tbl->it_ops->tce_kill(tbl, entry, pages); 1083 1084 } 1084 1085 EXPORT_SYMBOL_GPL(iommu_tce_kill); 1085 1086
+7 -79
arch/powerpc/kernel/irq.c
··· 52 52 #include <linux/of_irq.h> 53 53 #include <linux/vmalloc.h> 54 54 #include <linux/pgtable.h> 55 + #include <linux/static_call.h> 55 56 56 57 #include <linux/uaccess.h> 57 58 #include <asm/interrupt.h> 58 59 #include <asm/io.h> 59 60 #include <asm/irq.h> 60 61 #include <asm/cache.h> 61 - #include <asm/prom.h> 62 62 #include <asm/ptrace.h> 63 63 #include <asm/machdep.h> 64 64 #include <asm/udbg.h> ··· 217 217 #define replay_soft_interrupts_irqrestore() replay_soft_interrupts() 218 218 #endif 219 219 220 - #ifdef CONFIG_CC_HAS_ASM_GOTO 221 220 notrace void arch_local_irq_restore(unsigned long mask) 222 221 { 223 222 unsigned char irq_happened; ··· 312 313 __hard_irq_enable(); 313 314 preempt_enable(); 314 315 } 315 - #else 316 - notrace void arch_local_irq_restore(unsigned long mask) 317 - { 318 - unsigned char irq_happened; 319 - 320 - /* Write the new soft-enabled value */ 321 - irq_soft_mask_set(mask); 322 - if (mask) 323 - return; 324 - 325 - if (IS_ENABLED(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG)) 326 - WARN_ON_ONCE(in_nmi() || in_hardirq()); 327 - 328 - /* 329 - * From this point onward, we can take interrupts, preempt, 330 - * etc... unless we got hard-disabled. We check if an event 331 - * happened. If none happened, we know we can just return. 332 - * 333 - * We may have preempted before the check below, in which case 334 - * we are checking the "new" CPU instead of the old one. This 335 - * is only a problem if an event happened on the "old" CPU. 336 - * 337 - * External interrupt events will have caused interrupts to 338 - * be hard-disabled, so there is no problem, we 339 - * cannot have preempted. 340 - */ 341 - irq_happened = get_irq_happened(); 342 - if (!irq_happened) { 343 - if (IS_ENABLED(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG)) 344 - WARN_ON_ONCE(!(mfmsr() & MSR_EE)); 345 - return; 346 - } 347 - 348 - /* We need to hard disable to replay. */ 349 - if (!(irq_happened & PACA_IRQ_HARD_DIS)) { 350 - if (IS_ENABLED(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG)) 351 - WARN_ON_ONCE(!(mfmsr() & MSR_EE)); 352 - __hard_irq_disable(); 353 - local_paca->irq_happened |= PACA_IRQ_HARD_DIS; 354 - } else { 355 - /* 356 - * We should already be hard disabled here. We had bugs 357 - * where that wasn't the case so let's dbl check it and 358 - * warn if we are wrong. Only do that when IRQ tracing 359 - * is enabled as mfmsr() can be costly. 360 - */ 361 - if (IS_ENABLED(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG)) { 362 - if (WARN_ON_ONCE(mfmsr() & MSR_EE)) 363 - __hard_irq_disable(); 364 - } 365 - 366 - if (irq_happened == PACA_IRQ_HARD_DIS) { 367 - local_paca->irq_happened = 0; 368 - __hard_irq_enable(); 369 - return; 370 - } 371 - } 372 - 373 - /* 374 - * Disable preempt here, so that the below preempt_enable will 375 - * perform resched if required (a replayed interrupt may set 376 - * need_resched). 377 - */ 378 - preempt_disable(); 379 - irq_soft_mask_set(IRQS_ALL_DISABLED); 380 - trace_hardirqs_off(); 381 - 382 - replay_soft_interrupts_irqrestore(); 383 - local_paca->irq_happened = 0; 384 - 385 - trace_hardirqs_on(); 386 - irq_soft_mask_set(IRQS_ENABLED); 387 - __hard_irq_enable(); 388 - preempt_enable(); 389 - } 390 - #endif 391 316 EXPORT_SYMBOL(arch_local_irq_restore); 392 317 393 318 /* ··· 653 730 ); 654 731 } 655 732 733 + DEFINE_STATIC_CALL_RET0(ppc_get_irq, *ppc_md.get_irq); 734 + 656 735 void __do_irq(struct pt_regs *regs) 657 736 { 658 737 unsigned int irq; ··· 666 741 * 667 742 * This will typically lower the interrupt line to the CPU 668 743 */ 669 - irq = ppc_md.get_irq(); 744 + irq = static_call(ppc_get_irq)(); 670 745 671 746 /* We can hard enable interrupts now to allow perf interrupts */ 672 747 if (should_hard_irq_enable()) ··· 734 809 735 810 if (ppc_md.init_IRQ) 736 811 ppc_md.init_IRQ(); 812 + 813 + if (!WARN_ON(!ppc_md.get_irq)) 814 + static_call_update(ppc_get_irq, ppc_md.get_irq); 737 815 } 738 816 739 817 #ifdef CONFIG_BOOKE_OR_40x
+1 -1
arch/powerpc/kernel/isa-bridge.c
··· 18 18 #include <linux/init.h> 19 19 #include <linux/mm.h> 20 20 #include <linux/notifier.h> 21 + #include <linux/of_address.h> 21 22 #include <linux/vmalloc.h> 22 23 23 24 #include <asm/processor.h> 24 25 #include <asm/io.h> 25 - #include <asm/prom.h> 26 26 #include <asm/pci-bridge.h> 27 27 #include <asm/machdep.h> 28 28 #include <asm/ppc-pci.h>
+5 -5
arch/powerpc/kernel/kprobes.c
··· 45 45 { 46 46 kprobe_opcode_t *addr = NULL; 47 47 48 - #ifdef PPC64_ELF_ABI_v2 48 + #ifdef CONFIG_PPC64_ELF_ABI_V2 49 49 /* PPC64 ABIv2 needs local entry point */ 50 50 addr = (kprobe_opcode_t *)kallsyms_lookup_name(name); 51 51 if (addr && !offset) { ··· 63 63 #endif 64 64 addr = (kprobe_opcode_t *)ppc_function_entry(addr); 65 65 } 66 - #elif defined(PPC64_ELF_ABI_v1) 66 + #elif defined(CONFIG_PPC64_ELF_ABI_V1) 67 67 /* 68 68 * 64bit powerpc ABIv1 uses function descriptors: 69 69 * - Check for the dot variant of the symbol first. ··· 107 107 108 108 static bool arch_kprobe_on_func_entry(unsigned long offset) 109 109 { 110 - #ifdef PPC64_ELF_ABI_v2 110 + #ifdef CONFIG_PPC64_ELF_ABI_V2 111 111 #ifdef CONFIG_KPROBES_ON_FTRACE 112 112 return offset <= 16; 113 113 #else ··· 150 150 if ((unsigned long)p->addr & 0x03) { 151 151 printk("Attempt to register kprobe at an unaligned address\n"); 152 152 ret = -EINVAL; 153 - } else if (IS_MTMSRD(insn) || IS_RFID(insn)) { 154 - printk("Cannot register a kprobe on mtmsr[d]/rfi[d]\n"); 153 + } else if (!can_single_step(ppc_inst_val(insn))) { 154 + printk("Cannot register a kprobe on instructions that can't be single stepped\n"); 155 155 ret = -EINVAL; 156 156 } else if ((unsigned long)p->addr & ~PAGE_MASK && 157 157 ppc_inst_prefixed(ppc_inst_read(p->addr - 1))) {
+1 -1
arch/powerpc/kernel/legacy_serial.c
··· 7 7 #include <linux/pci.h> 8 8 #include <linux/of_address.h> 9 9 #include <linux/of_device.h> 10 + #include <linux/of_irq.h> 10 11 #include <linux/serial_reg.h> 11 12 #include <asm/io.h> 12 13 #include <asm/mmu.h> 13 - #include <asm/prom.h> 14 14 #include <asm/serial.h> 15 15 #include <asm/udbg.h> 16 16 #include <asm/pci-bridge.h>
+1 -1
arch/powerpc/kernel/misc_64.S
··· 454 454 beq 1f 455 455 456 456 /* clear out hardware hash page table and tlb */ 457 - #ifdef PPC64_ELF_ABI_v1 457 + #ifdef CONFIG_PPC64_ELF_ABI_V1 458 458 ld r12,0(r27) /* deref function descriptor */ 459 459 #else 460 460 mr r12,r27
+2 -2
arch/powerpc/kernel/module.c
··· 64 64 (void *)sect->sh_addr + sect->sh_size); 65 65 #endif /* CONFIG_PPC64 */ 66 66 67 - #ifdef PPC64_ELF_ABI_v1 67 + #ifdef CONFIG_PPC64_ELF_ABI_V1 68 68 sect = find_section(hdr, sechdrs, ".opd"); 69 69 if (sect != NULL) { 70 70 me->arch.start_opd = sect->sh_addr; 71 71 me->arch.end_opd = sect->sh_addr + sect->sh_size; 72 72 } 73 - #endif /* PPC64_ELF_ABI_v1 */ 73 + #endif /* CONFIG_PPC64_ELF_ABI_V1 */ 74 74 75 75 #ifdef CONFIG_PPC_BARRIER_NOSPEC 76 76 sect = find_section(hdr, sechdrs, "__spec_barrier_fixup");
+23 -17
arch/powerpc/kernel/module_32.c
··· 99 99 100 100 /* Sort the relocation information based on a symbol and 101 101 * addend key. This is a stable O(n*log n) complexity 102 - * alogrithm but it will reduce the complexity of 102 + * algorithm but it will reduce the complexity of 103 103 * count_relocs() to linear complexity O(n) 104 104 */ 105 105 sort((void *)hdr + sechdrs[i].sh_offset, ··· 256 256 value, (uint32_t)location); 257 257 pr_debug("Location before: %08X.\n", 258 258 *(uint32_t *)location); 259 - value = (*(uint32_t *)location & ~0x03fffffc) 260 - | ((value - (uint32_t)location) 261 - & 0x03fffffc); 259 + value = (*(uint32_t *)location & ~PPC_LI_MASK) | 260 + PPC_LI(value - (uint32_t)location); 262 261 263 262 if (patch_instruction(location, ppc_inst(value))) 264 263 return -EFAULT; ··· 265 266 pr_debug("Location after: %08X.\n", 266 267 *(uint32_t *)location); 267 268 pr_debug("ie. jump to %08X+%08X = %08X\n", 268 - *(uint32_t *)location & 0x03fffffc, 269 - (uint32_t)location, 270 - (*(uint32_t *)location & 0x03fffffc) 271 - + (uint32_t)location); 269 + *(uint32_t *)PPC_LI((uint32_t)location), (uint32_t)location, 270 + (*(uint32_t *)PPC_LI((uint32_t)location)) + (uint32_t)location); 272 271 break; 273 272 274 273 case R_PPC_REL32: ··· 286 289 } 287 290 288 291 #ifdef CONFIG_DYNAMIC_FTRACE 289 - int module_trampoline_target(struct module *mod, unsigned long addr, 290 - unsigned long *target) 292 + notrace int module_trampoline_target(struct module *mod, unsigned long addr, 293 + unsigned long *target) 291 294 { 292 - unsigned int jmp[4]; 295 + ppc_inst_t jmp[4]; 293 296 294 297 /* Find where the trampoline jumps to */ 295 - if (copy_from_kernel_nofault(jmp, (void *)addr, sizeof(jmp))) 298 + if (copy_inst_from_kernel_nofault(jmp, (void *)addr)) 299 + return -EFAULT; 300 + if (__copy_inst_from_kernel_nofault(jmp + 1, (void *)addr + 4)) 301 + return -EFAULT; 302 + if (__copy_inst_from_kernel_nofault(jmp + 2, (void *)addr + 8)) 303 + return -EFAULT; 304 + if 
(__copy_inst_from_kernel_nofault(jmp + 3, (void *)addr + 12)) 296 305 return -EFAULT; 297 306 298 307 /* verify that this is what we expect it to be */ 299 - if ((jmp[0] & 0xffff0000) != PPC_RAW_LIS(_R12, 0) || 300 - (jmp[1] & 0xffff0000) != PPC_RAW_ADDI(_R12, _R12, 0) || 301 - jmp[2] != PPC_RAW_MTCTR(_R12) || 302 - jmp[3] != PPC_RAW_BCTR()) 308 + if ((ppc_inst_val(jmp[0]) & 0xffff0000) != PPC_RAW_LIS(_R12, 0)) 309 + return -EINVAL; 310 + if ((ppc_inst_val(jmp[1]) & 0xffff0000) != PPC_RAW_ADDI(_R12, _R12, 0)) 311 + return -EINVAL; 312 + if (ppc_inst_val(jmp[2]) != PPC_RAW_MTCTR(_R12)) 313 + return -EINVAL; 314 + if (ppc_inst_val(jmp[3]) != PPC_RAW_BCTR()) 303 315 return -EINVAL; 304 316 305 - addr = (jmp[1] & 0xffff) | ((jmp[0] & 0xffff) << 16); 317 + addr = (ppc_inst_val(jmp[1]) & 0xffff) | ((ppc_inst_val(jmp[0]) & 0xffff) << 16); 306 318 if (addr & 0x8000) 307 319 addr -= 0x10000; 308 320
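The `R_PPC_ADDR24` hunk above replaces the open-coded `0x03fffffc` mask with the `PPC_LI()` / `PPC_LI_MASK` helpers when splicing a relative displacement into an existing branch instruction. A minimal user-space sketch of that masking — the macro values follow the PowerPC I-form encoding, but `patch_rel24` is an illustrative name, not kernel API:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-ins for the kernel's PPC_LI_MASK / PPC_LI macros:
 * the LI field of a PowerPC I-form branch sits in bits 6-29 of the
 * instruction word (mask 0x03fffffc); bits 30-31 are the AA/LK flags. */
#define PPC_LI_MASK 0x03fffffcu
#define PPC_LI(v)   ((v) & PPC_LI_MASK)

/* Splice a new relative displacement into an existing branch, keeping
 * the opcode and AA/LK bits -- the pattern used in the hunk above. */
static uint32_t patch_rel24(uint32_t insn, uint32_t location, uint32_t value)
{
	return (insn & ~PPC_LI_MASK) | PPC_LI(value - location);
}
```

Because only the LI field is rewritten, a `bl` stays a `bl` (LK bit preserved) regardless of the new target.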
+5 -6
arch/powerpc/kernel/module_64.c
··· 31 31 this, and makes other things simpler. Anton? 32 32 --RR. */ 33 33 34 - #ifdef PPC64_ELF_ABI_v2 34 + #ifdef CONFIG_PPC64_ELF_ABI_V2 35 35 36 36 static func_desc_t func_desc(unsigned long addr) 37 37 { ··· 122 122 /* Save current r2 value in magic place on the stack. */ 123 123 PPC_RAW_STD(_R2, _R1, R2_STACK_OFFSET), 124 124 PPC_RAW_LD(_R12, _R11, 32), 125 - #ifdef PPC64_ELF_ABI_v1 125 + #ifdef CONFIG_PPC64_ELF_ABI_V1 126 126 /* Set up new r2 from function descriptor */ 127 127 PPC_RAW_LD(_R2, _R11, 40), 128 128 #endif ··· 194 194 195 195 /* Sort the relocation information based on a symbol and 196 196 * addend key. This is a stable O(n*log n) complexity 197 - * alogrithm but it will reduce the complexity of 197 + * algorithm but it will reduce the complexity of 198 198 * count_relocs() to linear complexity O(n) 199 199 */ 200 200 sort((void *)sechdrs[i].sh_addr, ··· 361 361 entry->jump[1] |= PPC_HA(reladdr); 362 362 entry->jump[2] |= PPC_LO(reladdr); 363 363 364 - /* Eventhough we don't use funcdata in the stub, it's needed elsewhere. */ 364 + /* Even though we don't use funcdata in the stub, it's needed elsewhere. */ 365 365 entry->funcdata = func_desc(addr); 366 366 entry->magic = STUB_MAGIC; 367 367 ··· 653 653 } 654 654 655 655 /* Only replace bits 2 through 26 */ 656 - value = (*(uint32_t *)location & ~0x03fffffc) 657 - | (value & 0x03fffffc); 656 + value = (*(uint32_t *)location & ~PPC_LI_MASK) | PPC_LI(value); 658 657 659 658 if (patch_instruction((u32 *)location, ppc_inst(value))) 660 659 return -EFAULT;
+1 -1
arch/powerpc/kernel/nvram_64.c
··· 19 19 #include <linux/pstore.h> 20 20 #include <linux/zlib.h> 21 21 #include <linux/uaccess.h> 22 + #include <linux/of.h> 22 23 #include <asm/nvram.h> 23 24 #include <asm/rtas.h> 24 - #include <asm/prom.h> 25 25 #include <asm/machdep.h> 26 26 27 27 #undef DEBUG_NVRAM
-5
arch/powerpc/kernel/paca.c
··· 344 344 { 345 345 mm_context_t *context = &mm->context; 346 346 347 - #ifdef CONFIG_PPC_MM_SLICES 348 347 VM_BUG_ON(!mm_ctx_slb_addr_limit(context)); 349 348 memcpy(&get_paca()->mm_ctx_low_slices_psize, mm_ctx_low_slices(context), 350 349 LOW_SLICE_ARRAY_SZ); 351 350 memcpy(&get_paca()->mm_ctx_high_slices_psize, mm_ctx_high_slices(context), 352 351 TASK_SLICE_ARRAY_SZ(context)); 353 - #else /* CONFIG_PPC_MM_SLICES */ 354 - get_paca()->mm_ctx_user_psize = context->user_psize; 355 - get_paca()->mm_ctx_sllp = context->sllp; 356 - #endif 357 352 } 358 353 #endif /* CONFIG_PPC_64S_HASH_MMU */
+3 -3
arch/powerpc/kernel/pci-common.c
··· 30 30 #include <linux/vgaarb.h> 31 31 #include <linux/numa.h> 32 32 #include <linux/msi.h> 33 + #include <linux/irqdomain.h> 33 34 34 35 #include <asm/processor.h> 35 36 #include <asm/io.h> 36 - #include <asm/prom.h> 37 37 #include <asm/pci-bridge.h> 38 38 #include <asm/byteorder.h> 39 39 #include <asm/machdep.h> ··· 42 42 43 43 #include "../../../drivers/pci/pci.h" 44 44 45 - /* hose_spinlock protects accesses to the the phb_bitmap. */ 45 + /* hose_spinlock protects accesses to the phb_bitmap. */ 46 46 static DEFINE_SPINLOCK(hose_spinlock); 47 47 LIST_HEAD(hose_list); 48 48 ··· 1688 1688 static void fixup_hide_host_resource_fsl(struct pci_dev *dev) 1689 1689 { 1690 1690 int i, class = dev->class >> 8; 1691 - /* When configured as agent, programing interface = 1 */ 1691 + /* When configured as agent, programming interface = 1 */ 1692 1692 int prog_if = dev->class & 0xf; 1693 1693 1694 1694 if ((class == PCI_CLASS_PROCESSOR_POWERPC ||
+1
arch/powerpc/kernel/pci-hotplug.c
··· 12 12 13 13 #include <linux/pci.h> 14 14 #include <linux/export.h> 15 + #include <linux/of.h> 15 16 #include <asm/pci-bridge.h> 16 17 #include <asm/ppc-pci.h> 17 18 #include <asm/firmware.h>
-1
arch/powerpc/kernel/pci_32.c
··· 21 21 22 22 #include <asm/processor.h> 23 23 #include <asm/io.h> 24 - #include <asm/prom.h> 25 24 #include <asm/sections.h> 26 25 #include <asm/pci-bridge.h> 27 26 #include <asm/ppc-pci.h>
+10 -1
arch/powerpc/kernel/pci_64.c
··· 19 19 #include <linux/syscalls.h> 20 20 #include <linux/irq.h> 21 21 #include <linux/vmalloc.h> 22 + #include <linux/of.h> 22 23 23 24 #include <asm/processor.h> 24 25 #include <asm/io.h> 25 - #include <asm/prom.h> 26 26 #include <asm/pci-bridge.h> 27 27 #include <asm/byteorder.h> 28 28 #include <asm/machdep.h> ··· 285 285 } 286 286 EXPORT_SYMBOL(pcibus_to_node); 287 287 #endif 288 + 289 + int pci_device_from_OF_node(struct device_node *np, u8 *bus, u8 *devfn) 290 + { 291 + if (!PCI_DN(np)) 292 + return -ENODEV; 293 + *bus = PCI_DN(np)->busno; 294 + *devfn = PCI_DN(np)->devfn; 295 + return 0; 296 + }
+1 -1
arch/powerpc/kernel/pci_dn.c
··· 12 12 #include <linux/export.h> 13 13 #include <linux/init.h> 14 14 #include <linux/gfp.h> 15 + #include <linux/of.h> 15 16 16 17 #include <asm/io.h> 17 - #include <asm/prom.h> 18 18 #include <asm/pci-bridge.h> 19 19 #include <asm/ppc-pci.h> 20 20 #include <asm/firmware.h>
+2 -2
arch/powerpc/kernel/pci_of_scan.c
··· 13 13 14 14 #include <linux/pci.h> 15 15 #include <linux/export.h> 16 + #include <linux/of.h> 16 17 #include <asm/pci-bridge.h> 17 - #include <asm/prom.h> 18 18 19 19 /** 20 20 * get_int_prop - Decode a u32 from a device tree property ··· 244 244 * @dev: pci_dev structure for the bridge 245 245 * 246 246 * of_scan_bus() calls this routine for each PCI bridge that it finds, and 247 - * this routine in turn call of_scan_bus() recusively to scan for more child 247 + * this routine in turn call of_scan_bus() recursively to scan for more child 248 248 * devices. 249 249 */ 250 250 void of_scan_pci_bridge(struct pci_dev *dev)
+1 -1
arch/powerpc/kernel/proc_powerpc.c
··· 7 7 #include <linux/mm.h> 8 8 #include <linux/proc_fs.h> 9 9 #include <linux/kernel.h> 10 + #include <linux/of.h> 10 11 11 12 #include <asm/machdep.h> 12 13 #include <asm/vdso_datapage.h> 13 14 #include <asm/rtas.h> 14 15 #include <linux/uaccess.h> 15 - #include <asm/prom.h> 16 16 17 17 #ifdef CONFIG_PPC64 18 18
+2 -44
arch/powerpc/kernel/process.c
··· 34 34 #include <linux/ftrace.h> 35 35 #include <linux/kernel_stat.h> 36 36 #include <linux/personality.h> 37 - #include <linux/random.h> 38 37 #include <linux/hw_breakpoint.h> 39 38 #include <linux/uaccess.h> 40 - #include <linux/elf-randomize.h> 41 39 #include <linux/pkeys.h> 42 40 #include <linux/seq_buf.h> 43 41 ··· 43 45 #include <asm/io.h> 44 46 #include <asm/processor.h> 45 47 #include <asm/mmu.h> 46 - #include <asm/prom.h> 47 48 #include <asm/machdep.h> 48 49 #include <asm/time.h> 49 50 #include <asm/runlatch.h> ··· 304 307 unsigned long msr = tsk->thread.regs->msr; 305 308 306 309 /* 307 - * We should never be ssetting MSR_VSX without also setting 310 + * We should never be setting MSR_VSX without also setting 308 311 * MSR_FP and MSR_VEC 309 312 */ 310 313 WARN_ON((msr & MSR_VSX) && !((msr & MSR_FP) && (msr & MSR_VEC))); ··· 642 645 return; 643 646 } 644 647 645 - /* Otherwise findout which DAWR caused exception and disable it. */ 648 + /* Otherwise find out which DAWR caused exception and disable it. */ 646 649 wp_get_instr_detail(regs, &instr, &type, &size, &ea); 647 650 648 651 for (i = 0; i < nr_wp_slots(); i++) { ··· 2310 2313 sp -= get_random_int() & ~PAGE_MASK; 2311 2314 return sp & ~0xf; 2312 2315 } 2313 - 2314 - static inline unsigned long brk_rnd(void) 2315 - { 2316 - unsigned long rnd = 0; 2317 - 2318 - /* 8MB for 32bit, 1GB for 64bit */ 2319 - if (is_32bit_task()) 2320 - rnd = (get_random_long() % (1UL<<(23-PAGE_SHIFT))); 2321 - else 2322 - rnd = (get_random_long() % (1UL<<(30-PAGE_SHIFT))); 2323 - 2324 - return rnd << PAGE_SHIFT; 2325 - } 2326 - 2327 - unsigned long arch_randomize_brk(struct mm_struct *mm) 2328 - { 2329 - unsigned long base = mm->brk; 2330 - unsigned long ret; 2331 - 2332 - #ifdef CONFIG_PPC_BOOK3S_64 2333 - /* 2334 - * If we are using 1TB segments and we are allowed to randomise 2335 - * the heap, we can put it above 1TB so it is backed by a 1TB 2336 - * segment. 
Otherwise the heap will be in the bottom 1TB 2337 - * which always uses 256MB segments and this may result in a 2338 - * performance penalty. 2339 - */ 2340 - if (!radix_enabled() && !is_32bit_task() && (mmu_highuser_ssize == MMU_SEGSIZE_1T)) 2341 - base = max_t(unsigned long, mm->brk, 1UL << SID_SHIFT_1T); 2342 - #endif 2343 - 2344 - ret = PAGE_ALIGN(base + brk_rnd()); 2345 - 2346 - if (ret < mm->brk) 2347 - return mm->brk; 2348 - 2349 - return ret; 2350 - } 2351 -
-1
arch/powerpc/kernel/prom.c
··· 31 31 #include <linux/cpu.h> 32 32 #include <linux/pgtable.h> 33 33 34 - #include <asm/prom.h> 35 34 #include <asm/rtas.h> 36 35 #include <asm/page.h> 37 36 #include <asm/processor.h>
+3 -1
arch/powerpc/kernel/prom_init.c
··· 28 28 #include <linux/bitops.h> 29 29 #include <linux/pgtable.h> 30 30 #include <linux/printk.h> 31 + #include <linux/of.h> 32 + #include <linux/of_fdt.h> 31 33 #include <asm/prom.h> 32 34 #include <asm/rtas.h> 33 35 #include <asm/page.h> ··· 3418 3416 * 3419 3417 * PowerMacs use a different mechanism to spin CPUs 3420 3418 * 3421 - * (This must be done after instanciating RTAS) 3419 + * (This must be done after instantiating RTAS) 3422 3420 */ 3423 3421 if (of_platform != PLATFORM_POWERMAC) 3424 3422 prom_hold_cpus();
+1 -1
arch/powerpc/kernel/ptrace/ptrace-view.c
··· 174 174 175 175 /* 176 176 * softe copies paca->irq_soft_mask variable state. Since irq_soft_mask is 177 - * no more used as a flag, lets force usr to alway see the softe value as 1 177 + * no more used as a flag, lets force usr to always see the softe value as 1 178 178 * which means interrupts are not soft disabled. 179 179 */ 180 180 if (IS_ENABLED(CONFIG_PPC64) && regno == PT_SOFTE) {
-6
arch/powerpc/kernel/ptrace/ptrace.c
··· 444 444 * real registers. 445 445 */ 446 446 BUILD_BUG_ON(PT_DSCR < sizeof(struct user_pt_regs) / sizeof(unsigned long)); 447 - 448 - #ifdef PPC64_ELF_ABI_v1 449 - BUILD_BUG_ON(!IS_ENABLED(CONFIG_HAVE_FUNCTION_DESCRIPTORS)); 450 - #else 451 - BUILD_BUG_ON(IS_ENABLED(CONFIG_HAVE_FUNCTION_DESCRIPTORS)); 452 - #endif 453 447 }
+2 -7
arch/powerpc/kernel/rtas-proc.c
··· 24 24 #include <linux/seq_file.h> 25 25 #include <linux/bitops.h> 26 26 #include <linux/rtc.h> 27 + #include <linux/of.h> 27 28 28 29 #include <linux/uaccess.h> 29 30 #include <asm/processor.h> 30 31 #include <asm/io.h> 31 - #include <asm/prom.h> 32 32 #include <asm/rtas.h> 33 33 #include <asm/machdep.h> /* for ppc_md */ 34 34 #include <asm/time.h> ··· 259 259 static int parse_number(const char __user *p, size_t count, u64 *val) 260 260 { 261 261 char buf[40]; 262 - char *end; 263 262 264 263 if (count > 39) 265 264 return -EINVAL; ··· 268 269 269 270 buf[count] = 0; 270 271 271 - *val = simple_strtoull(buf, &end, 10); 272 - if (*end && *end != '\n') 273 - return -EINVAL; 274 - 275 - return 0; 272 + return kstrtoull(buf, 10, val); 276 273 } 277 274 278 275 /* ****************************************************************** */
-1
arch/powerpc/kernel/rtas-rtc.c
··· 6 6 #include <linux/rtc.h> 7 7 #include <linux/delay.h> 8 8 #include <linux/ratelimit.h> 9 - #include <asm/prom.h> 10 9 #include <asm/rtas.h> 11 10 #include <asm/time.h> 12 11
+20 -1
arch/powerpc/kernel/rtas.c
··· 24 24 #include <linux/slab.h> 25 25 #include <linux/reboot.h> 26 26 #include <linux/syscalls.h> 27 + #include <linux/of.h> 28 + #include <linux/of_fdt.h> 27 29 28 30 #include <asm/interrupt.h> 29 - #include <asm/prom.h> 30 31 #include <asm/rtas.h> 31 32 #include <asm/hvcall.h> 32 33 #include <asm/machdep.h> ··· 50 49 51 50 static inline void do_enter_rtas(unsigned long args) 52 51 { 52 + unsigned long msr; 53 + 54 + /* 55 + * Make sure MSR[RI] is currently enabled as it will be forced later 56 + * in enter_rtas. 57 + */ 58 + msr = mfmsr(); 59 + BUG_ON(!(msr & MSR_RI)); 60 + 61 + BUG_ON(!irqs_disabled()); 62 + 63 + hard_irq_disable(); /* Ensure MSR[EE] is disabled on PPC64 */ 64 + 53 65 enter_rtas(args); 54 66 55 67 srr_regs_clobbered(); /* rtas uses SRRs, invalidate */ ··· 475 461 476 462 if (!rtas.entry || token == RTAS_UNKNOWN_SERVICE) 477 463 return -1; 464 + 465 + if ((mfmsr() & (MSR_IR|MSR_DR)) != (MSR_IR|MSR_DR)) { 466 + WARN_ON_ONCE(1); 467 + return -1; 468 + } 478 469 479 470 s = lock_rtas(); 480 471
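The new guards in `do_enter_rtas()` and `rtas_call()` above check MSR state before entering firmware: MSR[RI] must be set, and both translation bits MSR[IR] and MSR[DR] must be on. A sketch of the translation check — the bit positions below follow the 64-bit Power ISA but should be treated as assumptions of this example, not values copied from the kernel headers:

```c
#include <assert.h>
#include <stdint.h>

/* Assumed 64-bit PowerPC MSR bit positions (illustrative). */
#define MSR_RI (1ull << 1)  /* recoverable interrupt */
#define MSR_DR (1ull << 4)  /* data relocation */
#define MSR_IR (1ull << 5)  /* instruction relocation */

/* Mirrors the new rtas_call() guard: refuse the call unless both
 * translation bits are enabled in the supplied MSR value. */
static int translation_enabled(uint64_t msr)
{
	return (msr & (MSR_IR | MSR_DR)) == (MSR_IR | MSR_DR);
}
```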
+172
arch/powerpc/kernel/rtas_entry.S
··· 1 + /* SPDX-License-Identifier: GPL-2.0-or-later */ 2 + 3 + #include <asm/asm-offsets.h> 4 + #include <asm/bug.h> 5 + #include <asm/page.h> 6 + #include <asm/ppc_asm.h> 7 + 8 + /* 9 + * RTAS is called with MSR IR, DR, EE disabled, and LR in the return address. 10 + * 11 + * Note: r3 is an input parameter to rtas, so don't trash it... 12 + */ 13 + 14 + #ifdef CONFIG_PPC32 15 + _GLOBAL(enter_rtas) 16 + stwu r1,-INT_FRAME_SIZE(r1) 17 + mflr r0 18 + stw r0,INT_FRAME_SIZE+4(r1) 19 + LOAD_REG_ADDR(r4, rtas) 20 + lis r6,1f@ha /* physical return address for rtas */ 21 + addi r6,r6,1f@l 22 + tophys(r6,r6) 23 + lwz r8,RTASENTRY(r4) 24 + lwz r4,RTASBASE(r4) 25 + mfmsr r9 26 + stw r9,8(r1) 27 + li r9,MSR_KERNEL & ~(MSR_IR|MSR_DR) 28 + mtlr r6 29 + stw r1, THREAD + RTAS_SP(r2) 30 + mtspr SPRN_SRR0,r8 31 + mtspr SPRN_SRR1,r9 32 + rfi 33 + 1: 34 + lis r8, 1f@h 35 + ori r8, r8, 1f@l 36 + LOAD_REG_IMMEDIATE(r9,MSR_KERNEL) 37 + mtspr SPRN_SRR0,r8 38 + mtspr SPRN_SRR1,r9 39 + rfi /* Reactivate MMU translation */ 40 + 1: 41 + lwz r8,INT_FRAME_SIZE+4(r1) /* get return address */ 42 + lwz r9,8(r1) /* original msr value */ 43 + addi r1,r1,INT_FRAME_SIZE 44 + li r0,0 45 + stw r0, THREAD + RTAS_SP(r2) 46 + mtlr r8 47 + mtmsr r9 48 + blr /* return to caller */ 49 + _ASM_NOKPROBE_SYMBOL(enter_rtas) 50 + 51 + #else /* CONFIG_PPC32 */ 52 + #include <asm/exception-64s.h> 53 + 54 + /* 55 + * 32-bit rtas on 64-bit machines has the additional problem that RTAS may 56 + * not preserve the upper parts of registers it uses. 57 + */ 58 + _GLOBAL(enter_rtas) 59 + mflr r0 60 + std r0,16(r1) 61 + stdu r1,-SWITCH_FRAME_SIZE(r1) /* Save SP and create stack space. */ 62 + 63 + /* Because RTAS is running in 32b mode, it clobbers the high order half 64 + * of all registers that it saves. We therefore save those registers 65 + * RTAS might touch to the stack. 
(r0, r3-r12 are caller saved) 66 + */ 67 + SAVE_GPR(2, r1) /* Save the TOC */ 68 + SAVE_NVGPRS(r1) /* Save the non-volatiles */ 69 + 70 + mfcr r4 71 + std r4,_CCR(r1) 72 + mfctr r5 73 + std r5,_CTR(r1) 74 + mfspr r6,SPRN_XER 75 + std r6,_XER(r1) 76 + mfdar r7 77 + std r7,_DAR(r1) 78 + mfdsisr r8 79 + std r8,_DSISR(r1) 80 + 81 + /* Temporary workaround to clear CR until RTAS can be modified to 82 + * ignore all bits. 83 + */ 84 + li r0,0 85 + mtcr r0 86 + 87 + mfmsr r6 88 + 89 + /* Unfortunately, the stack pointer and the MSR are also clobbered, 90 + * so they are saved in the PACA which allows us to restore 91 + * our original state after RTAS returns. 92 + */ 93 + std r1,PACAR1(r13) 94 + std r6,PACASAVEDMSR(r13) 95 + 96 + /* Setup our real return addr */ 97 + LOAD_REG_ADDR(r4,rtas_return_loc) 98 + clrldi r4,r4,2 /* convert to realmode address */ 99 + mtlr r4 100 + 101 + __enter_rtas: 102 + LOAD_REG_ADDR(r4, rtas) 103 + ld r5,RTASENTRY(r4) /* get the rtas->entry value */ 104 + ld r4,RTASBASE(r4) /* get the rtas->base value */ 105 + 106 + /* 107 + * RTAS runs in 32-bit big endian real mode, but leave MSR[RI] on as we 108 + * may hit NMI (SRESET or MCE) while in RTAS. RTAS should disable RI in 109 + * its critical regions (as specified in PAPR+ section 7.2.1). MSR[S] 110 + * is not impacted by RFI_TO_KERNEL (only urfid can unset it). So if 111 + * MSR[S] is set, it will remain when entering RTAS. 112 + */ 113 + LOAD_REG_IMMEDIATE(r6, MSR_ME | MSR_RI) 114 + 115 + li r0,0 116 + mtmsrd r0,1 /* disable RI before using SRR0/1 */ 117 + 118 + mtspr SPRN_SRR0,r5 119 + mtspr SPRN_SRR1,r6 120 + RFI_TO_KERNEL 121 + b . /* prevent speculative execution */ 122 + rtas_return_loc: 123 + FIXUP_ENDIAN 124 + 125 + /* Set SF before anything. 
*/ 126 + LOAD_REG_IMMEDIATE(r6, MSR_KERNEL & ~(MSR_IR|MSR_DR)) 127 + mtmsrd r6 128 + 129 + /* relocation is off at this point */ 130 + GET_PACA(r13) 131 + 132 + bcl 20,31,$+4 133 + 0: mflr r3 134 + ld r3,(1f-0b)(r3) /* get &rtas_restore_regs */ 135 + 136 + ld r1,PACAR1(r13) /* Restore our SP */ 137 + ld r4,PACASAVEDMSR(r13) /* Restore our MSR */ 138 + 139 + mtspr SPRN_SRR0,r3 140 + mtspr SPRN_SRR1,r4 141 + RFI_TO_KERNEL 142 + b . /* prevent speculative execution */ 143 + _ASM_NOKPROBE_SYMBOL(enter_rtas) 144 + _ASM_NOKPROBE_SYMBOL(__enter_rtas) 145 + _ASM_NOKPROBE_SYMBOL(rtas_return_loc) 146 + 147 + .align 3 148 + 1: .8byte rtas_restore_regs 149 + 150 + rtas_restore_regs: 151 + /* relocation is on at this point */ 152 + REST_GPR(2, r1) /* Restore the TOC */ 153 + REST_NVGPRS(r1) /* Restore the non-volatiles */ 154 + 155 + ld r4,_CCR(r1) 156 + mtcr r4 157 + ld r5,_CTR(r1) 158 + mtctr r5 159 + ld r6,_XER(r1) 160 + mtspr SPRN_XER,r6 161 + ld r7,_DAR(r1) 162 + mtdar r7 163 + ld r8,_DSISR(r1) 164 + mtdsisr r8 165 + 166 + addi r1,r1,SWITCH_FRAME_SIZE /* Unstack our frame */ 167 + ld r0,16(r1) /* get return address */ 168 + 169 + mtlr r0 170 + blr /* return to caller */ 171 + 172 + #endif /* CONFIG_PPC32 */
+1 -1
arch/powerpc/kernel/rtas_flash.c
··· 120 120 /* 121 121 * Local copy of the flash block list. 122 122 * 123 - * The rtas_firmware_flash_list varable will be 123 + * The rtas_firmware_flash_list variable will be 124 124 * set once the data is fully read. 125 125 * 126 126 * For convenience as we build the list we use virtual addrs,
+2 -1
arch/powerpc/kernel/rtas_pci.c
··· 14 14 #include <linux/string.h> 15 15 #include <linux/init.h> 16 16 #include <linux/pgtable.h> 17 + #include <linux/of_address.h> 18 + #include <linux/of_fdt.h> 17 19 18 20 #include <asm/io.h> 19 21 #include <asm/irq.h> 20 - #include <asm/prom.h> 21 22 #include <asm/machdep.h> 22 23 #include <asm/pci-bridge.h> 23 24 #include <asm/iommu.h>
-1
arch/powerpc/kernel/rtasd.c
··· 22 22 #include <linux/uaccess.h> 23 23 #include <asm/io.h> 24 24 #include <asm/rtas.h> 25 - #include <asm/prom.h> 26 25 #include <asm/nvram.h> 27 26 #include <linux/atomic.h> 28 27 #include <asm/machdep.h>
+57 -22
arch/powerpc/kernel/setup-common.c
··· 23 23 #include <linux/console.h> 24 24 #include <linux/screen_info.h> 25 25 #include <linux/root_dev.h> 26 - #include <linux/notifier.h> 27 26 #include <linux/cpu.h> 28 27 #include <linux/unistd.h> 29 28 #include <linux/serial.h> 30 29 #include <linux/serial_8250.h> 31 30 #include <linux/percpu.h> 32 31 #include <linux/memblock.h> 32 + #include <linux/of_irq.h> 33 + #include <linux/of_fdt.h> 33 34 #include <linux/of_platform.h> 34 35 #include <linux/hugetlb.h> 35 36 #include <linux/pgtable.h> 36 37 #include <asm/io.h> 37 38 #include <asm/paca.h> 38 - #include <asm/prom.h> 39 39 #include <asm/processor.h> 40 40 #include <asm/vdso_datapage.h> 41 41 #include <asm/smp.h> ··· 279 279 proc_freq / 1000000, proc_freq % 1000000); 280 280 281 281 /* If we are a Freescale core do a simple check so 282 - * we dont have to keep adding cases in the future */ 282 + * we don't have to keep adding cases in the future */ 283 283 if (PVR_VER(pvr) & 0x8000) { 284 284 switch (PVR_VER(pvr)) { 285 285 case 0x8000: /* 7441/7450/7451, Voyager */ ··· 680 680 } 681 681 EXPORT_SYMBOL(check_legacy_ioport); 682 682 683 - static int ppc_panic_event(struct notifier_block *this, 684 - unsigned long event, void *ptr) 683 + /* 684 + * Panic notifiers setup 685 + * 686 + * We have 3 notifiers for powerpc, each one from a different "nature": 687 + * 688 + * - ppc_panic_fadump_handler() is a hypervisor notifier, which hard-disables 689 + * IRQs and deal with the Firmware-Assisted dump, when it is configured; 690 + * should run early in the panic path. 691 + * 692 + * - dump_kernel_offset() is an informative notifier, just showing the KASLR 693 + * offset if we have RANDOMIZE_BASE set. 694 + * 695 + * - ppc_panic_platform_handler() is a low-level handler that's registered 696 + * only if the platform wishes to perform final actions in the panic path, 697 + * hence it should run late and might not even return. Currently, only 698 + * pseries and ps3 platforms register callbacks. 
699 + */ 700 + static int ppc_panic_fadump_handler(struct notifier_block *this, 701 + unsigned long event, void *ptr) 685 702 { 686 703 /* 687 704 * panic does a local_irq_disable, but we really ··· 708 691 709 692 /* 710 693 * If firmware-assisted dump has been registered then trigger 711 - * firmware-assisted dump and let firmware handle everything else. 694 + * its callback and let the firmware handles everything else. 712 695 */ 713 696 crash_fadump(NULL, ptr); 714 - if (ppc_md.panic) 715 - ppc_md.panic(ptr); /* May not return */ 697 + 716 698 return NOTIFY_DONE; 717 699 } 718 700 719 - static struct notifier_block ppc_panic_block = { 720 - .notifier_call = ppc_panic_event, 721 - .priority = INT_MIN /* may not return; must be done last */ 722 - }; 723 - 724 - /* 725 - * Dump out kernel offset information on panic. 726 - */ 727 701 static int dump_kernel_offset(struct notifier_block *self, unsigned long v, 728 702 void *p) 729 703 { 730 704 pr_emerg("Kernel Offset: 0x%lx from 0x%lx\n", 731 705 kaslr_offset(), KERNELBASE); 732 706 733 - return 0; 707 + return NOTIFY_DONE; 734 708 } 735 709 710 + static int ppc_panic_platform_handler(struct notifier_block *this, 711 + unsigned long event, void *ptr) 712 + { 713 + /* 714 + * This handler is only registered if we have a panic callback 715 + * on ppc_md, hence NULL check is not needed. 716 + * Also, it may not return, so it runs really late on panic path. 
717 + */ 718 + ppc_md.panic(ptr); 719 + 720 + return NOTIFY_DONE; 721 + } 722 + 723 + static struct notifier_block ppc_fadump_block = { 724 + .notifier_call = ppc_panic_fadump_handler, 725 + .priority = INT_MAX, /* run early, to notify the firmware ASAP */ 726 + }; 727 + 736 728 static struct notifier_block kernel_offset_notifier = { 737 - .notifier_call = dump_kernel_offset 729 + .notifier_call = dump_kernel_offset, 730 + }; 731 + 732 + static struct notifier_block ppc_panic_block = { 733 + .notifier_call = ppc_panic_platform_handler, 734 + .priority = INT_MIN, /* may not return; must be done last */ 738 735 }; 739 736 740 737 void __init setup_panic(void) 741 738 { 739 + /* Hard-disables IRQs + deal with FW-assisted dump (fadump) */ 740 + atomic_notifier_chain_register(&panic_notifier_list, 741 + &ppc_fadump_block); 742 + 742 743 if (IS_ENABLED(CONFIG_RANDOMIZE_BASE) && kaslr_offset() > 0) 743 744 atomic_notifier_chain_register(&panic_notifier_list, 744 745 &kernel_offset_notifier); 745 746 746 - /* PPC64 always does a hard irq disable in its panic handler */ 747 - if (!IS_ENABLED(CONFIG_PPC64) && !ppc_md.panic) 748 - return; 749 - atomic_notifier_chain_register(&panic_notifier_list, &ppc_panic_block); 747 + /* Low-level platform-specific routines that should run on panic */ 748 + if (ppc_md.panic) 749 + atomic_notifier_chain_register(&panic_notifier_list, 750 + &ppc_panic_block); 750 751 } 751 752 752 753 #ifdef CONFIG_CHECK_CACHE_COHERENCY
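The rework above splits one panic handler into three notifiers ordered by priority: the fadump handler at `INT_MAX` runs first, the KASLR-offset dump in between, and the platform handler at `INT_MIN` runs last (and may not return). A toy model of priority-ordered registration — all names here are invented for illustration; the real kernel uses `atomic_notifier_chain_register()`:

```c
#include <assert.h>
#include <limits.h>
#include <stddef.h>

/* Toy notifier chain kept sorted by descending priority, so an entry
 * registered with INT_MAX fires first and INT_MIN fires last. */
struct toy_notifier {
	int (*call)(void *data);
	int priority;
	struct toy_notifier *next;
};

static void toy_register(struct toy_notifier **head, struct toy_notifier *n)
{
	/* Walk past entries with priority >= ours, then link in. */
	while (*head && (*head)->priority >= n->priority)
		head = &(*head)->next;
	n->next = *head;
	*head = n;
}
```

Registration order is irrelevant with this scheme; only the priorities decide who sees the panic first.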
+2 -1
arch/powerpc/kernel/setup_32.c
··· 20 20 #include <linux/export.h> 21 21 #include <linux/nvram.h> 22 22 #include <linux/pgtable.h> 23 + #include <linux/of_fdt.h> 24 + #include <linux/irq.h> 23 25 24 26 #include <asm/io.h> 25 - #include <asm/prom.h> 26 27 #include <asm/processor.h> 27 28 #include <asm/setup.h> 28 29 #include <asm/smp.h>
+2 -1
arch/powerpc/kernel/setup_64.c
··· 31 31 #include <linux/memory.h> 32 32 #include <linux/nmi.h> 33 33 #include <linux/pgtable.h> 34 + #include <linux/of.h> 35 + #include <linux/of_fdt.h> 34 36 35 37 #include <asm/kvm_guest.h> 36 38 #include <asm/io.h> 37 39 #include <asm/kdump.h> 38 - #include <asm/prom.h> 39 40 #include <asm/processor.h> 40 41 #include <asm/smp.h> 41 42 #include <asm/elf.h>
+15
arch/powerpc/kernel/signal.c
··· 141 141 142 142 int show_unhandled_signals = 1; 143 143 144 + unsigned long get_min_sigframe_size(void) 145 + { 146 + if (IS_ENABLED(CONFIG_PPC64)) 147 + return get_min_sigframe_size_64(); 148 + else 149 + return get_min_sigframe_size_32(); 150 + } 151 + 152 + #ifdef CONFIG_COMPAT 153 + unsigned long get_min_sigframe_size_compat(void) 154 + { 155 + return get_min_sigframe_size_32(); 156 + } 157 + #endif 158 + 144 159 /* 145 160 * Allocate space for the signal frame 146 161 */
+6
arch/powerpc/kernel/signal_32.c
··· 233 233 int abigap[56]; 234 234 }; 235 235 236 + unsigned long get_min_sigframe_size_32(void) 237 + { 238 + return max(sizeof(struct rt_sigframe) + __SIGNAL_FRAMESIZE + 16, 239 + sizeof(struct sigframe) + __SIGNAL_FRAMESIZE); 240 + } 241 + 236 242 /* 237 243 * Save the current user registers on the user stack. 238 244 * We only save the altivec/spe registers if the process has used
+6 -1
arch/powerpc/kernel/signal_64.c
··· 66 66 char abigap[USER_REDZONE_SIZE]; 67 67 } __attribute__ ((aligned (16))); 68 68 69 + unsigned long get_min_sigframe_size_64(void) 70 + { 71 + return sizeof(struct rt_sigframe) + __SIGNAL_FRAMESIZE; 72 + } 73 + 69 74 /* 70 75 * This computes a quad word aligned pointer inside the vmx_reserve array 71 76 * element. For historical reasons sigcontext might not be quad word aligned, ··· 128 123 #endif 129 124 struct pt_regs *regs = tsk->thread.regs; 130 125 unsigned long msr = regs->msr; 131 - /* Force usr to alway see softe as 1 (interrupts enabled) */ 126 + /* Force usr to always see softe as 1 (interrupts enabled) */ 132 127 unsigned long softe = 0x1; 133 128 134 129 BUG_ON(tsk != current);
+13 -14
arch/powerpc/kernel/smp.c
··· 43 43 #include <asm/kvm_ppc.h> 44 44 #include <asm/dbell.h> 45 45 #include <asm/page.h> 46 - #include <asm/prom.h> 47 46 #include <asm/smp.h> 48 47 #include <asm/time.h> 49 48 #include <asm/machdep.h> ··· 411 412 static bool nmi_ipi_busy = false; 412 413 static void (*nmi_ipi_function)(struct pt_regs *) = NULL; 413 414 414 - static void nmi_ipi_lock_start(unsigned long *flags) 415 + noinstr static void nmi_ipi_lock_start(unsigned long *flags) 415 416 { 416 417 raw_local_irq_save(*flags); 417 418 hard_irq_disable(); 418 - while (atomic_cmpxchg(&__nmi_ipi_lock, 0, 1) == 1) { 419 + while (arch_atomic_cmpxchg(&__nmi_ipi_lock, 0, 1) == 1) { 419 420 raw_local_irq_restore(*flags); 420 - spin_until_cond(atomic_read(&__nmi_ipi_lock) == 0); 421 + spin_until_cond(arch_atomic_read(&__nmi_ipi_lock) == 0); 421 422 raw_local_irq_save(*flags); 422 423 hard_irq_disable(); 423 424 } 424 425 } 425 426 426 - static void nmi_ipi_lock(void) 427 + noinstr static void nmi_ipi_lock(void) 427 428 { 428 - while (atomic_cmpxchg(&__nmi_ipi_lock, 0, 1) == 1) 429 - spin_until_cond(atomic_read(&__nmi_ipi_lock) == 0); 429 + while (arch_atomic_cmpxchg(&__nmi_ipi_lock, 0, 1) == 1) 430 + spin_until_cond(arch_atomic_read(&__nmi_ipi_lock) == 0); 430 431 } 431 432 432 - static void nmi_ipi_unlock(void) 433 + noinstr static void nmi_ipi_unlock(void) 433 434 { 434 435 smp_mb(); 435 - WARN_ON(atomic_read(&__nmi_ipi_lock) != 1); 436 - atomic_set(&__nmi_ipi_lock, 0); 436 + WARN_ON(arch_atomic_read(&__nmi_ipi_lock) != 1); 437 + arch_atomic_set(&__nmi_ipi_lock, 0); 437 438 } 438 439 439 - static void nmi_ipi_unlock_end(unsigned long *flags) 440 + noinstr static void nmi_ipi_unlock_end(unsigned long *flags) 440 441 { 441 442 nmi_ipi_unlock(); 442 443 raw_local_irq_restore(*flags); ··· 445 446 /* 446 447 * Platform NMI handler calls this to ack 447 448 */ 448 - int smp_handle_nmi_ipi(struct pt_regs *regs) 449 + noinstr int smp_handle_nmi_ipi(struct pt_regs *regs) 449 450 { 450 451 void (*fn)(struct pt_regs 
*) = NULL; 451 452 unsigned long flags; ··· 874 875 * @tg : The thread-group structure of the CPU node which @cpu belongs 875 876 * to. 876 877 * 877 - * Returns the index to tg->thread_list that points to the the start 878 + * Returns the index to tg->thread_list that points to the start 878 879 * of the thread_group that @cpu belongs to. 879 880 * 880 881 * Returns -1 if cpu doesn't belong to any of the groups pointed to by ··· 1101 1102 DBG("smp_prepare_cpus\n"); 1102 1103 1103 1104 /* 1104 - * setup_cpu may need to be called on the boot cpu. We havent 1105 + * setup_cpu may need to be called on the boot cpu. We haven't 1105 1106 * spun any cpus up but lets be paranoid. 1106 1107 */ 1107 1108 BUG_ON(boot_cpuid != smp_processor_id());
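The `nmi_ipi_lock` helpers above spin on an atomic compare-and-exchange from 0 to 1, and the hunk switches them to `noinstr` plus the `arch_atomic_*` variants so no instrumentation runs inside an NMI. The same acquire/release pattern sketched with C11 atomics — illustrative only, not the kernel's implementation:

```c
#include <assert.h>
#include <stdatomic.h>

/* A successful compare-and-exchange from 0 to 1 takes the lock; a
 * plain store of 0 releases it. The kernel version must additionally
 * stay instrumentation-free (noinstr, arch_atomic_*). */
static atomic_int toy_nmi_lock;

static void toy_nmi_ipi_lock(void)
{
	int expected = 0;

	/* A weak CAS may fail spuriously; reset `expected` and retry. */
	while (!atomic_compare_exchange_weak(&toy_nmi_lock, &expected, 1))
		expected = 0;
}

static void toy_nmi_ipi_unlock(void)
{
	atomic_store(&toy_nmi_lock, 0);
}
```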
+1 -1
arch/powerpc/kernel/syscalls.c
··· 73 73 int 74 74 ppc_select(int n, fd_set __user *inp, fd_set __user *outp, fd_set __user *exp, struct __kernel_old_timeval __user *tvp) 75 75 { 76 - if ( (unsigned long)n >= 4096 ) 76 + if ((unsigned long)n >= 4096) 77 77 return sys_old_select((void __user *)n); 78 78 79 79 return sys_select(n, inp, outp, exp, tvp);
+1 -1
arch/powerpc/kernel/sysfs.c
··· 9 9 #include <linux/nodemask.h> 10 10 #include <linux/cpumask.h> 11 11 #include <linux/notifier.h> 12 + #include <linux/of.h> 12 13 13 14 #include <asm/current.h> 14 15 #include <asm/processor.h> 15 16 #include <asm/cputable.h> 16 17 #include <asm/hvcall.h> 17 - #include <asm/prom.h> 18 18 #include <asm/machdep.h> 19 19 #include <asm/smp.h> 20 20 #include <asm/pmc.h>
+7 -8
arch/powerpc/kernel/time.c
··· 54 54 #include <linux/of_clk.h> 55 55 #include <linux/suspend.h> 56 56 #include <linux/processor.h> 57 - #include <asm/trace.h> 57 + #include <linux/mc146818rtc.h> 58 + #include <linux/platform_device.h> 58 59 60 + #include <asm/trace.h> 59 61 #include <asm/interrupt.h> 60 62 #include <asm/io.h> 61 63 #include <asm/nvram.h> ··· 65 63 #include <asm/machdep.h> 66 64 #include <linux/uaccess.h> 67 65 #include <asm/time.h> 68 - #include <asm/prom.h> 69 66 #include <asm/irq.h> 70 67 #include <asm/div64.h> 71 68 #include <asm/smp.h> ··· 157 156 u64 __cputime_usec_factor; 158 157 EXPORT_SYMBOL(__cputime_usec_factor); 159 158 160 - #ifdef CONFIG_PPC_SPLPAR 161 - void (*dtl_consumer)(struct dtl_entry *, u64); 162 - #endif 163 - 164 159 static void calc_cputime_factors(void) 165 160 { 166 161 struct div_result res; ··· 181 184 #ifdef CONFIG_PPC_SPLPAR 182 185 183 186 #include <asm/dtl.h> 187 + 188 + void (*dtl_consumer)(struct dtl_entry *, u64); 184 189 185 190 /* 186 191 * Scan the dispatch trace log and count up the stolen time. ··· 828 829 static int first = 1; 829 830 830 831 ts->tv_nsec = 0; 831 - /* XXX this is a litle fragile but will work okay in the short term */ 832 + /* XXX this is a little fragile but will work okay in the short term */ 832 833 if (first) { 833 834 first = 0; 834 835 if (ppc_md.time_init) ··· 973 974 */ 974 975 start_cpu_decrementer(); 975 976 976 - /* FIME: Should make unrelatred change to move snapshot_timebase 977 + /* FIME: Should make unrelated change to move snapshot_timebase 977 978 * call here ! */ 978 979 register_decrementer_clockevent(smp_processor_id()); 979 980 }
+1 -4
arch/powerpc/kernel/trace/Makefile
··· 14 14 else 15 15 obj64-$(CONFIG_FUNCTION_TRACER) += ftrace_64_pg.o 16 16 endif 17 - obj-$(CONFIG_FUNCTION_TRACER) += ftrace_low.o 18 - obj-$(CONFIG_DYNAMIC_FTRACE) += ftrace.o 19 - obj-$(CONFIG_FUNCTION_GRAPH_TRACER) += ftrace.o 20 - obj-$(CONFIG_FTRACE_SYSCALLS) += ftrace.o 17 + obj-$(CONFIG_FUNCTION_TRACER) += ftrace_low.o ftrace.o 21 18 obj-$(CONFIG_TRACING) += trace_clock.o 22 19 23 20 obj-$(CONFIG_PPC64) += $(obj64-y)
+121 -262
arch/powerpc/kernel/trace/ftrace.c
··· 28 28 #include <asm/syscall.h> 29 29 #include <asm/inst.h> 30 30 31 - 32 - #ifdef CONFIG_DYNAMIC_FTRACE 33 - 34 31 /* 35 32 * We generally only have a single long_branch tramp and at most 2 or 3 plt 36 33 * tramps generated. But, we don't use the plt tramps currently. We also allot ··· 45 48 addr = ppc_function_entry((void *)addr); 46 49 47 50 /* if (link) set op to 'bl' else 'b' */ 48 - create_branch(&op, (u32 *)ip, addr, link ? 1 : 0); 51 + create_branch(&op, (u32 *)ip, addr, link ? BRANCH_SET_LINK : 0); 49 52 50 53 return op; 51 54 } 52 55 53 - static int 56 + static inline int 54 57 ftrace_modify_code(unsigned long ip, ppc_inst_t old, ppc_inst_t new) 55 58 { 56 59 ppc_inst_t replaced; ··· 75 78 } 76 79 77 80 /* replace the text with the new text */ 78 - if (patch_instruction((u32 *)ip, new)) 79 - return -EPERM; 80 - 81 - return 0; 81 + return patch_instruction((u32 *)ip, new); 82 82 } 83 83 84 84 /* ··· 83 89 */ 84 90 static int test_24bit_addr(unsigned long ip, unsigned long addr) 85 91 { 86 - ppc_inst_t op; 87 92 addr = ppc_function_entry((void *)addr); 88 93 89 - /* use the create_branch to verify that this offset can be branched */ 90 - return create_branch(&op, (u32 *)ip, addr, 0) == 0; 94 + return is_offset_in_branch_range(addr - ip); 91 95 } 92 96 93 97 static int is_bl_op(ppc_inst_t op) 94 98 { 95 - return (ppc_inst_val(op) & 0xfc000003) == 0x48000001; 99 + return (ppc_inst_val(op) & ~PPC_LI_MASK) == PPC_RAW_BL(0); 96 100 } 97 101 98 102 static int is_b_op(ppc_inst_t op) 99 103 { 100 - return (ppc_inst_val(op) & 0xfc000003) == 0x48000000; 104 + return (ppc_inst_val(op) & ~PPC_LI_MASK) == PPC_RAW_BRANCH(0); 101 105 } 102 106 103 107 static unsigned long find_bl_target(unsigned long ip, ppc_inst_t op) 104 108 { 105 109 int offset; 106 110 107 - offset = (ppc_inst_val(op) & 0x03fffffc); 111 + offset = PPC_LI(ppc_inst_val(op)); 108 112 /* make it signed */ 109 113 if (offset & 0x02000000) 110 114 offset |= 0xfe000000; ··· 111 119 } 112 120 113 121 
#ifdef CONFIG_MODULES 114 - #ifdef CONFIG_PPC64 115 122 static int 116 123 __ftrace_make_nop(struct module *mod, 117 124 struct dyn_ftrace *rec, unsigned long addr) ··· 150 159 return -EINVAL; 151 160 } 152 161 153 - #ifdef CONFIG_MPROFILE_KERNEL 154 - /* When using -mkernel_profile there is no load to jump over */ 155 - pop = ppc_inst(PPC_RAW_NOP()); 162 + if (IS_ENABLED(CONFIG_MPROFILE_KERNEL)) { 163 + if (copy_inst_from_kernel_nofault(&op, (void *)(ip - 4))) { 164 + pr_err("Fetching instruction at %lx failed.\n", ip - 4); 165 + return -EFAULT; 166 + } 156 167 157 - if (copy_inst_from_kernel_nofault(&op, (void *)(ip - 4))) { 158 - pr_err("Fetching instruction at %lx failed.\n", ip - 4); 159 - return -EFAULT; 168 + /* We expect either a mflr r0, or a std r0, LRSAVE(r1) */ 169 + if (!ppc_inst_equal(op, ppc_inst(PPC_RAW_MFLR(_R0))) && 170 + !ppc_inst_equal(op, ppc_inst(PPC_INST_STD_LR))) { 171 + pr_err("Unexpected instruction %s around bl _mcount\n", 172 + ppc_inst_as_str(op)); 173 + return -EINVAL; 174 + } 175 + } else if (IS_ENABLED(CONFIG_PPC64)) { 176 + /* 177 + * Check what is in the next instruction. We can see ld r2,40(r1), but 178 + * on first pass after boot we will see mflr r0. 179 + */ 180 + if (copy_inst_from_kernel_nofault(&op, (void *)(ip + 4))) { 181 + pr_err("Fetching op failed.\n"); 182 + return -EFAULT; 183 + } 184 + 185 + if (!ppc_inst_equal(op, ppc_inst(PPC_INST_LD_TOC))) { 186 + pr_err("Expected %08lx found %s\n", PPC_INST_LD_TOC, ppc_inst_as_str(op)); 187 + return -EINVAL; 188 + } 160 189 } 161 190 162 - /* We expect either a mflr r0, or a std r0, LRSAVE(r1) */ 163 - if (!ppc_inst_equal(op, ppc_inst(PPC_RAW_MFLR(_R0))) && 164 - !ppc_inst_equal(op, ppc_inst(PPC_INST_STD_LR))) { 165 - pr_err("Unexpected instruction %s around bl _mcount\n", 166 - ppc_inst_as_str(op)); 167 - return -EINVAL; 168 - } 169 - #else 170 191 /* 171 - * Our original call site looks like: 192 + * When using -mprofile-kernel or PPC32 there is no load to jump over. 
193 + * 194 + * Otherwise our original call site looks like: 172 195 * 173 196 * bl <tramp> 174 197 * ld r2,XX(r1) ··· 194 189 * 195 190 * Use a b +8 to jump over the load. 196 191 */ 197 - 198 - pop = ppc_inst(PPC_INST_BRANCH | 8); /* b +8 */ 199 - 200 - /* 201 - * Check what is in the next instruction. We can see ld r2,40(r1), but 202 - * on first pass after boot we will see mflr r0. 203 - */ 204 - if (copy_inst_from_kernel_nofault(&op, (void *)(ip + 4))) { 205 - pr_err("Fetching op failed.\n"); 206 - return -EFAULT; 207 - } 208 - 209 - if (!ppc_inst_equal(op, ppc_inst(PPC_INST_LD_TOC))) { 210 - pr_err("Expected %08lx found %s\n", PPC_INST_LD_TOC, ppc_inst_as_str(op)); 211 - return -EINVAL; 212 - } 213 - #endif /* CONFIG_MPROFILE_KERNEL */ 192 + if (IS_ENABLED(CONFIG_MPROFILE_KERNEL) || IS_ENABLED(CONFIG_PPC32)) 193 + pop = ppc_inst(PPC_RAW_NOP()); 194 + else 195 + pop = ppc_inst(PPC_RAW_BRANCH(8)); /* b +8 */ 214 196 215 197 if (patch_instruction((u32 *)ip, pop)) { 216 198 pr_err("Patching NOP failed.\n"); ··· 206 214 207 215 return 0; 208 216 } 209 - 210 - #else /* !PPC64 */ 211 - static int 212 - __ftrace_make_nop(struct module *mod, 213 - struct dyn_ftrace *rec, unsigned long addr) 217 + #else 218 + static int __ftrace_make_nop(struct module *mod, struct dyn_ftrace *rec, unsigned long addr) 214 219 { 215 - ppc_inst_t op; 216 - unsigned long ip = rec->ip; 217 - unsigned long tramp, ptr; 218 - 219 - if (copy_from_kernel_nofault(&op, (void *)ip, MCOUNT_INSN_SIZE)) 220 - return -EFAULT; 221 - 222 - /* Make sure that that this is still a 24bit jump */ 223 - if (!is_bl_op(op)) { 224 - pr_err("Not expected bl: opcode is %s\n", ppc_inst_as_str(op)); 225 - return -EINVAL; 226 - } 227 - 228 - /* lets find where the pointer goes */ 229 - tramp = find_bl_target(ip, op); 230 - 231 - /* Find where the trampoline jumps to */ 232 - if (module_trampoline_target(mod, tramp, &ptr)) { 233 - pr_err("Failed to get trampoline target\n"); 234 - return -EFAULT; 235 - } 236 - 237 - if 
(ptr != addr) { 238 - pr_err("Trampoline location %08lx does not match addr\n", 239 - tramp); 240 - return -EINVAL; 241 - } 242 - 243 - op = ppc_inst(PPC_RAW_NOP()); 244 - 245 - if (patch_instruction((u32 *)ip, op)) 246 - return -EPERM; 247 - 248 220 return 0; 249 221 } 250 - #endif /* PPC64 */ 251 222 #endif /* CONFIG_MODULES */ 252 223 253 224 static unsigned long find_ftrace_tramp(unsigned long ip) 254 225 { 255 226 int i; 256 - ppc_inst_t instr; 257 227 258 228 /* 259 229 * We have the compiler generated long_branch tramps at the end ··· 224 270 for (i = NUM_FTRACE_TRAMPS - 1; i >= 0; i--) 225 271 if (!ftrace_tramps[i]) 226 272 continue; 227 - else if (create_branch(&instr, (void *)ip, 228 - ftrace_tramps[i], 0) == 0) 273 + else if (is_offset_in_branch_range(ftrace_tramps[i] - ip)) 229 274 return ftrace_tramps[i]; 230 275 231 276 return 0; ··· 254 301 int i; 255 302 ppc_inst_t op; 256 303 unsigned long ptr; 257 - ppc_inst_t instr; 258 - static unsigned long ftrace_plt_tramps[NUM_FTRACE_TRAMPS]; 259 304 260 305 /* Is this a known long jump tramp? */ 261 306 for (i = 0; i < NUM_FTRACE_TRAMPS; i++) 262 - if (!ftrace_tramps[i]) 263 - break; 264 - else if (ftrace_tramps[i] == tramp) 307 + if (ftrace_tramps[i] == tramp) 265 308 return 0; 266 - 267 - /* Is this a known plt tramp? 
*/ 268 - for (i = 0; i < NUM_FTRACE_TRAMPS; i++) 269 - if (!ftrace_plt_tramps[i]) 270 - break; 271 - else if (ftrace_plt_tramps[i] == tramp) 272 - return -1; 273 309 274 310 /* New trampoline -- read where this goes */ 275 311 if (copy_inst_from_kernel_nofault(&op, (void *)tramp)) { ··· 281 339 } 282 340 283 341 /* Let's re-write the tramp to go to ftrace_[regs_]caller */ 284 - #ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS 285 - ptr = ppc_global_function_entry((void *)ftrace_regs_caller); 286 - #else 287 - ptr = ppc_global_function_entry((void *)ftrace_caller); 288 - #endif 289 - if (create_branch(&instr, (void *)tramp, ptr, 0)) { 290 - pr_debug("%ps is not reachable from existing mcount tramp\n", 291 - (void *)ptr); 292 - return -1; 293 - } 342 + if (IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_REGS)) 343 + ptr = ppc_global_function_entry((void *)ftrace_regs_caller); 344 + else 345 + ptr = ppc_global_function_entry((void *)ftrace_caller); 294 346 295 347 if (patch_branch((u32 *)tramp, ptr, 0)) { 296 348 pr_debug("REL24 out of range!\n"); ··· 354 418 old = ftrace_call_replace(ip, addr, 1); 355 419 new = ppc_inst(PPC_RAW_NOP()); 356 420 return ftrace_modify_code(ip, old, new); 357 - } else if (core_kernel_text(ip)) 421 + } else if (core_kernel_text(ip)) { 358 422 return __ftrace_make_nop_kernel(rec, addr); 423 + } else if (!IS_ENABLED(CONFIG_MODULES)) { 424 + return -EINVAL; 425 + } 359 426 360 - #ifdef CONFIG_MODULES 361 427 /* 362 428 * Out of range jumps are called from modules. 363 429 * We should either already have a pointer to the module ··· 382 444 mod = rec->arch.mod; 383 445 384 446 return __ftrace_make_nop(mod, rec, addr); 385 - #else 386 - /* We should not get here without modules */ 387 - return -EINVAL; 388 - #endif /* CONFIG_MODULES */ 389 447 } 390 448 391 449 #ifdef CONFIG_MODULES 392 - #ifdef CONFIG_PPC64 393 450 /* 394 451 * Examine the existing instructions for __ftrace_make_call. 
395 452 * They should effectively be a NOP, and follow formal constraints, 396 453 * depending on the ABI. Return false if they don't. 397 454 */ 398 - #ifndef CONFIG_MPROFILE_KERNEL 399 - static int 400 - expected_nop_sequence(void *ip, ppc_inst_t op0, ppc_inst_t op1) 455 + static bool expected_nop_sequence(void *ip, ppc_inst_t op0, ppc_inst_t op1) 401 456 { 402 - /* 403 - * We expect to see: 404 - * 405 - * b +8 406 - * ld r2,XX(r1) 407 - * 408 - * The load offset is different depending on the ABI. For simplicity 409 - * just mask it out when doing the compare. 410 - */ 411 - if (!ppc_inst_equal(op0, ppc_inst(0x48000008)) || 412 - (ppc_inst_val(op1) & 0xffff0000) != 0xe8410000) 413 - return 0; 414 - return 1; 457 + if (IS_ENABLED(CONFIG_PPC64_ELF_ABI_V1)) 458 + return ppc_inst_equal(op0, ppc_inst(PPC_RAW_BRANCH(8))) && 459 + ppc_inst_equal(op1, ppc_inst(PPC_INST_LD_TOC)); 460 + else 461 + return ppc_inst_equal(op0, ppc_inst(PPC_RAW_NOP())); 415 462 } 416 - #else 417 - static int 418 - expected_nop_sequence(void *ip, ppc_inst_t op0, ppc_inst_t op1) 419 - { 420 - /* look for patched "NOP" on ppc64 with -mprofile-kernel */ 421 - if (!ppc_inst_equal(op0, ppc_inst(PPC_RAW_NOP()))) 422 - return 0; 423 - return 1; 424 - } 425 - #endif 426 463 427 464 static int 428 465 __ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr) 429 466 { 430 467 ppc_inst_t op[2]; 431 - ppc_inst_t instr; 432 468 void *ip = (void *)rec->ip; 433 469 unsigned long entry, ptr, tramp; 434 470 struct module *mod = rec->arch.mod; ··· 411 499 if (copy_inst_from_kernel_nofault(op, ip)) 412 500 return -EFAULT; 413 501 414 - if (copy_inst_from_kernel_nofault(op + 1, ip + 4)) 502 + if (IS_ENABLED(CONFIG_PPC64_ELF_ABI_V1) && 503 + copy_inst_from_kernel_nofault(op + 1, ip + 4)) 415 504 return -EFAULT; 416 505 417 506 if (!expected_nop_sequence(ip, op[0], op[1])) { ··· 422 509 } 423 510 424 511 /* If we never set up ftrace trampoline(s), then bail */ 425 - #ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS 426 
- if (!mod->arch.tramp || !mod->arch.tramp_regs) { 427 - #else 428 - if (!mod->arch.tramp) { 429 - #endif 512 + if (!mod->arch.tramp || 513 + (IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_REGS) && !mod->arch.tramp_regs)) { 430 514 pr_err("No ftrace trampoline\n"); 431 515 return -EINVAL; 432 516 } 433 517 434 - #ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS 435 - if (rec->flags & FTRACE_FL_REGS) 518 + if (IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_REGS) && rec->flags & FTRACE_FL_REGS) 436 519 tramp = mod->arch.tramp_regs; 437 520 else 438 - #endif 439 521 tramp = mod->arch.tramp; 440 522 441 523 if (module_trampoline_target(mod, tramp, &ptr)) { ··· 447 539 return -EINVAL; 448 540 } 449 541 450 - /* Ensure branch is within 24 bits */ 451 - if (create_branch(&instr, ip, tramp, BRANCH_SET_LINK)) { 452 - pr_err("Branch out of range\n"); 453 - return -EINVAL; 454 - } 455 - 456 542 if (patch_branch(ip, tramp, BRANCH_SET_LINK)) { 457 543 pr_err("REL24 out of range!\n"); 458 544 return -EINVAL; ··· 454 552 455 553 return 0; 456 554 } 457 - 458 - #else /* !CONFIG_PPC64: */ 459 - static int 460 - __ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr) 461 - { 462 - int err; 463 - ppc_inst_t op; 464 - u32 *ip = (u32 *)rec->ip; 465 - struct module *mod = rec->arch.mod; 466 - unsigned long tramp; 467 - 468 - /* read where this goes */ 469 - if (copy_inst_from_kernel_nofault(&op, ip)) 470 - return -EFAULT; 471 - 472 - /* It should be pointing to a nop */ 473 - if (!ppc_inst_equal(op, ppc_inst(PPC_RAW_NOP()))) { 474 - pr_err("Expected NOP but have %s\n", ppc_inst_as_str(op)); 475 - return -EINVAL; 476 - } 477 - 478 - /* If we never set up a trampoline to ftrace_caller, then bail */ 479 - #ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS 480 - if (!mod->arch.tramp || !mod->arch.tramp_regs) { 481 555 #else 482 - if (!mod->arch.tramp) { 483 - #endif 484 - pr_err("No ftrace trampoline\n"); 485 - return -EINVAL; 486 - } 487 - 488 - #ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS 489 - if (rec->flags & FTRACE_FL_REGS) 
490 - tramp = mod->arch.tramp_regs; 491 - else 492 - #endif 493 - tramp = mod->arch.tramp; 494 - /* create the branch to the trampoline */ 495 - err = create_branch(&op, ip, tramp, BRANCH_SET_LINK); 496 - if (err) { 497 - pr_err("REL24 out of range!\n"); 498 - return -EINVAL; 499 - } 500 - 501 - pr_devel("write to %lx\n", rec->ip); 502 - 503 - if (patch_instruction(ip, op)) 504 - return -EPERM; 505 - 556 + static int __ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr) 557 + { 506 558 return 0; 507 559 } 508 - #endif /* CONFIG_PPC64 */ 509 560 #endif /* CONFIG_MODULES */ 510 561 511 562 static int __ftrace_make_call_kernel(struct dyn_ftrace *rec, unsigned long addr) ··· 471 616 entry = ppc_global_function_entry((void *)ftrace_caller); 472 617 ptr = ppc_global_function_entry((void *)addr); 473 618 474 - if (ptr != entry) { 475 - #ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS 619 + if (ptr != entry && IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_REGS)) 476 620 entry = ppc_global_function_entry((void *)ftrace_regs_caller); 477 - if (ptr != entry) { 478 - #endif 479 - pr_err("Unknown ftrace addr to patch: %ps\n", (void *)ptr); 480 - return -EINVAL; 481 - #ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS 482 - } 483 - #endif 621 + 622 + if (ptr != entry) { 623 + pr_err("Unknown ftrace addr to patch: %ps\n", (void *)ptr); 624 + return -EINVAL; 484 625 } 485 626 486 627 /* Make sure we have a nop */ ··· 519 668 old = ppc_inst(PPC_RAW_NOP()); 520 669 new = ftrace_call_replace(ip, addr, 1); 521 670 return ftrace_modify_code(ip, old, new); 522 - } else if (core_kernel_text(ip)) 671 + } else if (core_kernel_text(ip)) { 523 672 return __ftrace_make_call_kernel(rec, addr); 673 + } else if (!IS_ENABLED(CONFIG_MODULES)) { 674 + /* We should not get here without modules */ 675 + return -EINVAL; 676 + } 524 677 525 - #ifdef CONFIG_MODULES 526 678 /* 527 679 * Out of range jumps are called from modules. 
528 680 * Being that we are converting from nop, it had better ··· 537 683 } 538 684 539 685 return __ftrace_make_call(rec, addr); 540 - #else 541 - /* We should not get here without modules */ 542 - return -EINVAL; 543 - #endif /* CONFIG_MODULES */ 544 686 } 545 687 546 688 #ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS ··· 620 770 return -EINVAL; 621 771 } 622 772 623 - /* Ensure branch is within 24 bits */ 624 - if (create_branch(&op, (u32 *)ip, tramp, BRANCH_SET_LINK)) { 625 - pr_err("Branch out of range\n"); 626 - return -EINVAL; 627 - } 628 - 629 773 if (patch_branch((u32 *)ip, tramp, BRANCH_SET_LINK)) { 630 774 pr_err("REL24 out of range!\n"); 631 775 return -EINVAL; 632 776 } 633 777 778 + return 0; 779 + } 780 + #else 781 + static int __ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr, unsigned long addr) 782 + { 634 783 return 0; 635 784 } 636 785 #endif ··· 656 807 * variant, so there is nothing to do here 657 808 */ 658 809 return 0; 810 + } else if (!IS_ENABLED(CONFIG_MODULES)) { 811 + /* We should not get here without modules */ 812 + return -EINVAL; 659 813 } 660 814 661 - #ifdef CONFIG_MODULES 662 815 /* 663 816 * Out of range jumps are called from modules. 
664 817 */ ··· 670 819 } 671 820 672 821 return __ftrace_modify_call(rec, old_addr, addr); 673 - #else 674 - /* We should not get here without modules */ 675 - return -EINVAL; 676 - #endif /* CONFIG_MODULES */ 677 822 } 678 823 #endif 679 824 ··· 683 836 new = ftrace_call_replace(ip, (unsigned long)func, 1); 684 837 ret = ftrace_modify_code(ip, old, new); 685 838 686 - #ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS 687 839 /* Also update the regs callback function */ 688 - if (!ret) { 840 + if (IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_REGS) && !ret) { 689 841 ip = (unsigned long)(&ftrace_regs_call); 690 842 old = ppc_inst_read((u32 *)&ftrace_regs_call); 691 843 new = ftrace_call_replace(ip, (unsigned long)func, 1); 692 844 ret = ftrace_modify_code(ip, old, new); 693 845 } 694 - #endif 695 846 696 847 return ret; 697 848 } ··· 708 863 709 864 extern unsigned int ftrace_tramp_text[], ftrace_tramp_init[]; 710 865 866 + void ftrace_free_init_tramp(void) 867 + { 868 + int i; 869 + 870 + for (i = 0; i < NUM_FTRACE_TRAMPS && ftrace_tramps[i]; i++) 871 + if (ftrace_tramps[i] == (unsigned long)ftrace_tramp_init) { 872 + ftrace_tramps[i] = 0; 873 + return; 874 + } 875 + } 876 + 711 877 int __init ftrace_dyn_arch_init(void) 712 878 { 713 879 int i; 714 880 unsigned int *tramp[] = { ftrace_tramp_text, ftrace_tramp_init }; 715 881 u32 stub_insns[] = { 716 - 0xe98d0000 | PACATOC, /* ld r12,PACATOC(r13) */ 717 - 0x3d8c0000, /* addis r12,r12,<high> */ 718 - 0x398c0000, /* addi r12,r12,<low> */ 719 - 0x7d8903a6, /* mtctr r12 */ 720 - 0x4e800420, /* bctr */ 882 + PPC_RAW_LD(_R12, _R13, PACATOC), 883 + PPC_RAW_ADDIS(_R12, _R12, 0), 884 + PPC_RAW_ADDI(_R12, _R12, 0), 885 + PPC_RAW_MTCTR(_R12), 886 + PPC_RAW_BCTR() 721 887 }; 722 - #ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS 723 - unsigned long addr = ppc_global_function_entry((void *)ftrace_regs_caller); 724 - #else 725 - unsigned long addr = ppc_global_function_entry((void *)ftrace_caller); 726 - #endif 727 - long reladdr = addr - kernel_toc_addr(); 
888 + unsigned long addr; 889 + long reladdr; 728 890 729 - if (reladdr > 0x7FFFFFFF || reladdr < -(0x80000000L)) { 891 + if (IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_REGS)) 892 + addr = ppc_global_function_entry((void *)ftrace_regs_caller); 893 + else 894 + addr = ppc_global_function_entry((void *)ftrace_caller); 895 + 896 + reladdr = addr - kernel_toc_addr(); 897 + 898 + if (reladdr >= SZ_2G || reladdr < -(long)SZ_2G) { 730 899 pr_err("Address of %ps out of range of kernel_toc.\n", 731 900 (void *)addr); 732 901 return -1; ··· 755 896 756 897 return 0; 757 898 } 758 - #else 759 - int __init ftrace_dyn_arch_init(void) 760 - { 761 - return 0; 762 - } 763 899 #endif 764 - #endif /* CONFIG_DYNAMIC_FTRACE */ 765 900 766 901 #ifdef CONFIG_FUNCTION_GRAPH_TRACER 767 902 ··· 792 939 * Hook the return address and push it in the stack of return addrs 793 940 * in current thread info. Return the address we want to divert to. 794 941 */ 795 - unsigned long prepare_ftrace_return(unsigned long parent, unsigned long ip, 796 - unsigned long sp) 942 + static unsigned long 943 + __prepare_ftrace_return(unsigned long parent, unsigned long ip, unsigned long sp) 797 944 { 798 945 unsigned long return_hooker; 799 946 int bit; ··· 822 969 void ftrace_graph_func(unsigned long ip, unsigned long parent_ip, 823 970 struct ftrace_ops *op, struct ftrace_regs *fregs) 824 971 { 825 - fregs->regs.link = prepare_ftrace_return(parent_ip, ip, fregs->regs.gpr[1]); 972 + fregs->regs.link = __prepare_ftrace_return(parent_ip, ip, fregs->regs.gpr[1]); 973 + } 974 + #else 975 + unsigned long prepare_ftrace_return(unsigned long parent, unsigned long ip, 976 + unsigned long sp) 977 + { 978 + return __prepare_ftrace_return(parent, ip, sp); 826 979 } 827 980 #endif 828 981 #endif /* CONFIG_FUNCTION_GRAPH_TRACER */ 829 982 830 - #ifdef PPC64_ELF_ABI_v1 983 + #ifdef CONFIG_PPC64_ELF_ABI_V1 831 984 char *arch_ftrace_match_adjust(char *str, const char *search) 832 985 { 833 986 if (str[0] == '.' 
&& search[0] != '.') ··· 841 982 else 842 983 return str; 843 984 } 844 - #endif /* PPC64_ELF_ABI_v1 */ 985 + #endif /* CONFIG_PPC64_ELF_ABI_V1 */
+4 -2
arch/powerpc/kernel/traps.c
··· 393 393 * Builds that do not support KVM could take this second option to increase 394 394 * the recoverability of NMIs. 395 395 */ 396 - void hv_nmi_check_nonrecoverable(struct pt_regs *regs) 396 + noinstr void hv_nmi_check_nonrecoverable(struct pt_regs *regs) 397 397 { 398 398 #ifdef CONFIG_PPC_POWERNV 399 399 unsigned long kbase = (unsigned long)_stext; ··· 433 433 return; 434 434 435 435 nonrecoverable: 436 - regs_set_unrecoverable(regs); 436 + regs->msr &= ~MSR_RI; 437 + local_paca->hsrr_valid = 0; 438 + local_paca->srr_valid = 0; 437 439 #endif 438 440 } 439 441 DEFINE_INTERRUPT_HANDLER_NMI(system_reset_exception)
+5
arch/powerpc/kernel/uprobes.c
··· 48 48 return -EINVAL; 49 49 } 50 50 51 + if (!can_single_step(ppc_inst_val(ppc_inst_read(auprobe->insn)))) { 52 + pr_info_ratelimited("Cannot register a uprobe on instructions that can't be single stepped\n"); 53 + return -ENOTSUPP; 54 + } 55 + 51 56 return 0; 52 57 } 53 58
-1
arch/powerpc/kernel/vdso.c
··· 25 25 #include <asm/processor.h> 26 26 #include <asm/mmu.h> 27 27 #include <asm/mmu_context.h> 28 - #include <asm/prom.h> 29 28 #include <asm/machdep.h> 30 29 #include <asm/cputable.h> 31 30 #include <asm/sections.h>
+1
arch/powerpc/kernel/vdso/Makefile
··· 48 48 KASAN_SANITIZE := n 49 49 50 50 ccflags-y := -shared -fno-common -fno-builtin -nostdlib -Wl,--hash-style=both 51 + ccflags-$(CONFIG_LD_IS_LLD) += $(call cc-option,--ld-path=$(LD),-fuse-ld=lld) 51 52 52 53 CC32FLAGS := -Wl,-soname=linux-vdso32.so.1 -m32 53 54 AS32FLAGS := -D__VDSO32__ -s
-1
arch/powerpc/kernel/vdso/vdso32.lds.S
··· 13 13 OUTPUT_FORMAT("elf32-powerpc", "elf32-powerpc", "elf32-powerpc") 14 14 #endif 15 15 OUTPUT_ARCH(powerpc:common) 16 - ENTRY(_start) 17 16 18 17 SECTIONS 19 18 {
-1
arch/powerpc/kernel/vdso/vdso64.lds.S
··· 13 13 OUTPUT_FORMAT("elf64-powerpc", "elf64-powerpc", "elf64-powerpc") 14 14 #endif 15 15 OUTPUT_ARCH(powerpc:common64) 16 - ENTRY(_start) 17 16 18 17 SECTIONS 19 18 {
+1 -1
arch/powerpc/kernel/watchdog.c
··· 56 56 * solved by also having a SMP watchdog where all CPUs check all other 57 57 * CPUs heartbeat. 58 58 * 59 - * The SMP checker can detect lockups on other CPUs. A gobal "pending" 59 + * The SMP checker can detect lockups on other CPUs. A global "pending" 60 60 * cpumask is kept, containing all CPUs which enable the watchdog. Each 61 61 * CPU clears their pending bit in their heartbeat timer. When the bitmask 62 62 * becomes empty, the last CPU to clear its pending bit updates a global
+2
arch/powerpc/kexec/Makefile
··· 13 13 GCOV_PROFILE_core_$(BITS).o := n 14 14 KCOV_INSTRUMENT_core_$(BITS).o := n 15 15 UBSAN_SANITIZE_core_$(BITS).o := n 16 + KASAN_SANITIZE_core.o := n 17 + KASAN_SANITIZE_core_$(BITS).o := n
-1
arch/powerpc/kexec/core.c
··· 18 18 #include <asm/kdump.h> 19 19 #include <asm/machdep.h> 20 20 #include <asm/pgalloc.h> 21 - #include <asm/prom.h> 22 21 #include <asm/sections.h> 23 22 24 23 void machine_kexec_mask_interrupts(void) {
+2 -2
arch/powerpc/kexec/core_64.c
··· 16 16 #include <linux/kernel.h> 17 17 #include <linux/cpu.h> 18 18 #include <linux/hardirq.h> 19 + #include <linux/of.h> 19 20 20 21 #include <asm/page.h> 21 22 #include <asm/current.h> ··· 26 25 #include <asm/paca.h> 27 26 #include <asm/mmu.h> 28 27 #include <asm/sections.h> /* _end */ 29 - #include <asm/prom.h> 30 28 #include <asm/smp.h> 31 29 #include <asm/hw_breakpoint.h> 32 30 #include <asm/svm.h> ··· 406 406 if (!node) 407 407 return -ENODEV; 408 408 409 - /* remove any stale propertys so ours can be found */ 409 + /* remove any stale properties so ours can be found */ 410 410 of_remove_property(node, of_find_property(node, htab_base_prop.name, NULL)); 411 411 of_remove_property(node, of_find_property(node, htab_size_prop.name, NULL)); 412 412
-1
arch/powerpc/kexec/crash.c
··· 20 20 #include <asm/processor.h> 21 21 #include <asm/machdep.h> 22 22 #include <asm/kexec.h> 23 - #include <asm/prom.h> 24 23 #include <asm/smp.h> 25 24 #include <asm/setjmp.h> 26 25 #include <asm/debug.h>
+6 -4
arch/powerpc/kvm/Makefile
··· 37 37 e500_emulate.o 38 38 kvm-objs-$(CONFIG_KVM_E500MC) := $(kvm-e500mc-objs) 39 39 40 - kvm-book3s_64-builtin-objs-$(CONFIG_SPAPR_TCE_IOMMU) := \ 41 - book3s_64_vio_hv.o 42 - 43 40 kvm-pr-y := \ 44 41 fpu.o \ 45 42 emulate.o \ ··· 73 76 book3s_hv_tm.o 74 77 75 78 kvm-book3s_64-builtin-xics-objs-$(CONFIG_KVM_XICS) := \ 76 - book3s_hv_rm_xics.o book3s_hv_rm_xive.o 79 + book3s_hv_rm_xics.o 77 80 78 81 kvm-book3s_64-builtin-tm-objs-$(CONFIG_PPC_TRANSACTIONAL_MEM) += \ 79 82 book3s_hv_tm_builtin.o ··· 131 134 obj-$(CONFIG_KVM_BOOK3S_64_HV) += kvm-hv.o 132 135 133 136 obj-y += $(kvm-book3s_64-builtin-objs-y) 137 + 138 + # KVM does a lot in real-mode, and 64-bit Book3S KASAN doesn't support that 139 + ifdef CONFIG_PPC_BOOK3S_64 140 + KASAN_SANITIZE := n 141 + endif
+1 -1
arch/powerpc/kvm/book3s_64_entry.S
··· 124 124 125 125 /* 126 126 * "Skip" interrupts are part of a trick KVM uses a with hash guests to load 127 - * the faulting instruction in guest memory from the the hypervisor without 127 + * the faulting instruction in guest memory from the hypervisor without 128 128 * walking page tables. 129 129 * 130 130 * When the guest takes a fault that requires the hypervisor to load the
+25 -17
arch/powerpc/kvm/book3s_64_mmu_hv.c
··· 58 58 /* Possible values and their usage: 59 59 * <0 an error occurred during allocation, 60 60 * -EBUSY allocation is in the progress, 61 - * 0 allocation made successfuly. 61 + * 0 allocation made successfully. 62 62 */ 63 63 int error; 64 64 ··· 256 256 257 257 int kvmppc_mmu_hv_init(void) 258 258 { 259 - unsigned long host_lpid, rsvd_lpid; 259 + unsigned long nr_lpids; 260 260 261 261 if (!mmu_has_feature(MMU_FTR_LOCKLESS_TLBIE)) 262 262 return -EINVAL; 263 263 264 - host_lpid = 0; 265 - if (cpu_has_feature(CPU_FTR_HVMODE)) 266 - host_lpid = mfspr(SPRN_LPID); 264 + if (cpu_has_feature(CPU_FTR_HVMODE)) { 265 + if (WARN_ON(mfspr(SPRN_LPID) != 0)) 266 + return -EINVAL; 267 + nr_lpids = 1UL << mmu_lpid_bits; 268 + } else { 269 + nr_lpids = 1UL << KVM_MAX_NESTED_GUESTS_SHIFT; 270 + } 267 271 268 - /* POWER8 and above have 12-bit LPIDs (10-bit in POWER7) */ 269 - if (cpu_has_feature(CPU_FTR_ARCH_207S)) 270 - rsvd_lpid = LPID_RSVD; 271 - else 272 - rsvd_lpid = LPID_RSVD_POWER7; 272 + if (!cpu_has_feature(CPU_FTR_ARCH_300)) { 273 + /* POWER7 has 10-bit LPIDs, POWER8 has 12-bit LPIDs */ 274 + if (cpu_has_feature(CPU_FTR_ARCH_207S)) 275 + WARN_ON(nr_lpids != 1UL << 12); 276 + else 277 + WARN_ON(nr_lpids != 1UL << 10); 273 278 274 - kvmppc_init_lpid(rsvd_lpid + 1); 279 + /* 280 + * Reserve the last implemented LPID use in partition 281 + * switching for POWER7 and POWER8. 
282 + */ 283 + nr_lpids -= 1; 284 + } 275 285 276 - kvmppc_claim_lpid(host_lpid); 277 - /* rsvd_lpid is reserved for use in partition switching */ 278 - kvmppc_claim_lpid(rsvd_lpid); 286 + kvmppc_init_lpid(nr_lpids); 279 287 280 288 return 0; 281 289 } ··· 887 879 struct revmap_entry *rev = kvm->arch.hpt.rev; 888 880 unsigned long head, i, j; 889 881 __be64 *hptep; 890 - int ret = 0; 882 + bool ret = false; 891 883 unsigned long *rmapp; 892 884 893 885 rmapp = &memslot->arch.rmap[gfn - memslot->base_gfn]; ··· 895 887 lock_rmap(rmapp); 896 888 if (*rmapp & KVMPPC_RMAP_REFERENCED) { 897 889 *rmapp &= ~KVMPPC_RMAP_REFERENCED; 898 - ret = 1; 890 + ret = true; 899 891 } 900 892 if (!(*rmapp & KVMPPC_RMAP_PRESENT)) { 901 893 unlock_rmap(rmapp); ··· 927 919 rev[i].guest_rpte |= HPTE_R_R; 928 920 note_hpte_modification(kvm, &rev[i]); 929 921 } 930 - ret = 1; 922 + ret = true; 931 923 } 932 924 __unlock_hpte(hptep, be64_to_cpu(hptep[0])); 933 925 } while ((i = j) != head);
+43
arch/powerpc/kvm/book3s_64_vio.c
··· 32 32 #include <asm/tce.h> 33 33 #include <asm/mmu_context.h> 34 34 35 + static struct kvmppc_spapr_tce_table *kvmppc_find_table(struct kvm *kvm, 36 + unsigned long liobn) 37 + { 38 + struct kvmppc_spapr_tce_table *stt; 39 + 40 + list_for_each_entry_lockless(stt, &kvm->arch.spapr_tce_tables, list) 41 + if (stt->liobn == liobn) 42 + return stt; 43 + 44 + return NULL; 45 + } 46 + 35 47 static unsigned long kvmppc_tce_pages(unsigned long iommu_pages) 36 48 { 37 49 return ALIGN(iommu_pages * sizeof(u64), PAGE_SIZE) / PAGE_SIZE; ··· 765 753 return ret; 766 754 } 767 755 EXPORT_SYMBOL_GPL(kvmppc_h_stuff_tce); 756 + 757 + long kvmppc_h_get_tce(struct kvm_vcpu *vcpu, unsigned long liobn, 758 + unsigned long ioba) 759 + { 760 + struct kvmppc_spapr_tce_table *stt; 761 + long ret; 762 + unsigned long idx; 763 + struct page *page; 764 + u64 *tbl; 765 + 766 + stt = kvmppc_find_table(vcpu->kvm, liobn); 767 + if (!stt) 768 + return H_TOO_HARD; 769 + 770 + ret = kvmppc_ioba_validate(stt, ioba, 1); 771 + if (ret != H_SUCCESS) 772 + return ret; 773 + 774 + idx = (ioba >> stt->page_shift) - stt->offset; 775 + page = stt->pages[idx / TCES_PER_PAGE]; 776 + if (!page) { 777 + vcpu->arch.regs.gpr[4] = 0; 778 + return H_SUCCESS; 779 + } 780 + tbl = (u64 *)page_address(page); 781 + 782 + vcpu->arch.regs.gpr[4] = tbl[idx % TCES_PER_PAGE]; 783 + 784 + return H_SUCCESS; 785 + } 786 + EXPORT_SYMBOL_GPL(kvmppc_h_get_tce);
-672
arch/powerpc/kvm/book3s_64_vio_hv.c
··· 1 - // SPDX-License-Identifier: GPL-2.0-only 2 - /* 3 - * 4 - * Copyright 2010 Paul Mackerras, IBM Corp. <paulus@au1.ibm.com> 5 - * Copyright 2011 David Gibson, IBM Corporation <dwg@au1.ibm.com> 6 - * Copyright 2016 Alexey Kardashevskiy, IBM Corporation <aik@au1.ibm.com> 7 - */ 8 - 9 - #include <linux/types.h> 10 - #include <linux/string.h> 11 - #include <linux/kvm.h> 12 - #include <linux/kvm_host.h> 13 - #include <linux/highmem.h> 14 - #include <linux/gfp.h> 15 - #include <linux/slab.h> 16 - #include <linux/hugetlb.h> 17 - #include <linux/list.h> 18 - #include <linux/stringify.h> 19 - 20 - #include <asm/kvm_ppc.h> 21 - #include <asm/kvm_book3s.h> 22 - #include <asm/book3s/64/mmu-hash.h> 23 - #include <asm/mmu_context.h> 24 - #include <asm/hvcall.h> 25 - #include <asm/synch.h> 26 - #include <asm/ppc-opcode.h> 27 - #include <asm/udbg.h> 28 - #include <asm/iommu.h> 29 - #include <asm/tce.h> 30 - #include <asm/pte-walk.h> 31 - 32 - #ifdef CONFIG_BUG 33 - 34 - #define WARN_ON_ONCE_RM(condition) ({ \ 35 - static bool __section(".data.unlikely") __warned; \ 36 - int __ret_warn_once = !!(condition); \ 37 - \ 38 - if (unlikely(__ret_warn_once && !__warned)) { \ 39 - __warned = true; \ 40 - pr_err("WARN_ON_ONCE_RM: (%s) at %s:%u\n", \ 41 - __stringify(condition), \ 42 - __func__, __LINE__); \ 43 - dump_stack(); \ 44 - } \ 45 - unlikely(__ret_warn_once); \ 46 - }) 47 - 48 - #else 49 - 50 - #define WARN_ON_ONCE_RM(condition) ({ \ 51 - int __ret_warn_on = !!(condition); \ 52 - unlikely(__ret_warn_on); \ 53 - }) 54 - 55 - #endif 56 - 57 - /* 58 - * Finds a TCE table descriptor by LIOBN. 
-  *
-  * WARNING: This will be called in real or virtual mode on HV KVM and virtual
-  * mode on PR KVM
-  */
- struct kvmppc_spapr_tce_table *kvmppc_find_table(struct kvm *kvm,
- 		unsigned long liobn)
- {
- 	struct kvmppc_spapr_tce_table *stt;
- 
- 	list_for_each_entry_lockless(stt, &kvm->arch.spapr_tce_tables, list)
- 		if (stt->liobn == liobn)
- 			return stt;
- 
- 	return NULL;
- }
- EXPORT_SYMBOL_GPL(kvmppc_find_table);
- 
- #ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
- static long kvmppc_rm_tce_to_ua(struct kvm *kvm,
- 		unsigned long tce, unsigned long *ua)
- {
- 	unsigned long gfn = tce >> PAGE_SHIFT;
- 	struct kvm_memory_slot *memslot;
- 
- 	memslot = __gfn_to_memslot(kvm_memslots_raw(kvm), gfn);
- 	if (!memslot)
- 		return -EINVAL;
- 
- 	*ua = __gfn_to_hva_memslot(memslot, gfn) |
- 		(tce & ~(PAGE_MASK | TCE_PCI_READ | TCE_PCI_WRITE));
- 
- 	return 0;
- }
- 
- /*
-  * Validates TCE address.
-  * At the moment flags and page mask are validated.
-  * As the host kernel does not access those addresses (just puts them
-  * to the table and user space is supposed to process them), we can skip
-  * checking other things (such as TCE is a guest RAM address or the page
-  * was actually allocated).
-  */
- static long kvmppc_rm_tce_validate(struct kvmppc_spapr_tce_table *stt,
- 		unsigned long tce)
- {
- 	unsigned long gpa = tce & ~(TCE_PCI_READ | TCE_PCI_WRITE);
- 	enum dma_data_direction dir = iommu_tce_direction(tce);
- 	struct kvmppc_spapr_tce_iommu_table *stit;
- 	unsigned long ua = 0;
- 
- 	/* Allow userspace to poison TCE table */
- 	if (dir == DMA_NONE)
- 		return H_SUCCESS;
- 
- 	if (iommu_tce_check_gpa(stt->page_shift, gpa))
- 		return H_PARAMETER;
- 
- 	if (kvmppc_rm_tce_to_ua(stt->kvm, tce, &ua))
- 		return H_TOO_HARD;
- 
- 	list_for_each_entry_lockless(stit, &stt->iommu_tables, next) {
- 		unsigned long hpa = 0;
- 		struct mm_iommu_table_group_mem_t *mem;
- 		long shift = stit->tbl->it_page_shift;
- 
- 		mem = mm_iommu_lookup_rm(stt->kvm->mm, ua, 1ULL << shift);
- 		if (!mem)
- 			return H_TOO_HARD;
- 
- 		if (mm_iommu_ua_to_hpa_rm(mem, ua, shift, &hpa))
- 			return H_TOO_HARD;
- 	}
- 
- 	return H_SUCCESS;
- }
- 
- /* Note on the use of page_address() in real mode,
-  *
-  * It is safe to use page_address() in real mode on ppc64 because
-  * page_address() is always defined as lowmem_page_address()
-  * which returns __va(PFN_PHYS(page_to_pfn(page))) which is arithmetic
-  * operation and does not access page struct.
-  *
-  * Theoretically page_address() could be defined different
-  * but either WANT_PAGE_VIRTUAL or HASHED_PAGE_VIRTUAL
-  * would have to be enabled.
-  * WANT_PAGE_VIRTUAL is never enabled on ppc32/ppc64,
-  * HASHED_PAGE_VIRTUAL could be enabled for ppc32 only and only
-  * if CONFIG_HIGHMEM is defined. As CONFIG_SPARSEMEM_VMEMMAP
-  * is not expected to be enabled on ppc32, page_address()
-  * is safe for ppc32 as well.
-  *
-  * WARNING: This will be called in real-mode on HV KVM and virtual
-  * mode on PR KVM
-  */
- static u64 *kvmppc_page_address(struct page *page)
- {
- #if defined(HASHED_PAGE_VIRTUAL) || defined(WANT_PAGE_VIRTUAL)
- #error TODO: fix to avoid page_address() here
- #endif
- 	return (u64 *) page_address(page);
- }
- 
- /*
-  * Handles TCE requests for emulated devices.
-  * Puts guest TCE values to the table and expects user space to convert them.
-  * Cannot fail so kvmppc_rm_tce_validate must be called before it.
-  */
- static void kvmppc_rm_tce_put(struct kvmppc_spapr_tce_table *stt,
- 		unsigned long idx, unsigned long tce)
- {
- 	struct page *page;
- 	u64 *tbl;
- 
- 	idx -= stt->offset;
- 	page = stt->pages[idx / TCES_PER_PAGE];
- 	/*
- 	 * kvmppc_rm_ioba_validate() allows pages not be allocated if TCE is
- 	 * being cleared, otherwise it returns H_TOO_HARD and we skip this.
- 	 */
- 	if (!page) {
- 		WARN_ON_ONCE_RM(tce != 0);
- 		return;
- 	}
- 	tbl = kvmppc_page_address(page);
- 
- 	tbl[idx % TCES_PER_PAGE] = tce;
- }
- 
- /*
-  * TCEs pages are allocated in kvmppc_rm_tce_put() which won't be able to do so
-  * in real mode.
-  * Check if kvmppc_rm_tce_put() can succeed in real mode, i.e. a TCEs page is
-  * allocated or not required (when clearing a tce entry).
-  */
- static long kvmppc_rm_ioba_validate(struct kvmppc_spapr_tce_table *stt,
- 		unsigned long ioba, unsigned long npages, bool clearing)
- {
- 	unsigned long i, idx, sttpage, sttpages;
- 	unsigned long ret = kvmppc_ioba_validate(stt, ioba, npages);
- 
- 	if (ret)
- 		return ret;
- 	/*
- 	 * clearing==true says kvmppc_rm_tce_put won't be allocating pages
- 	 * for empty tces.
- 	 */
- 	if (clearing)
- 		return H_SUCCESS;
- 
- 	idx = (ioba >> stt->page_shift) - stt->offset;
- 	sttpage = idx / TCES_PER_PAGE;
- 	sttpages = ALIGN(idx % TCES_PER_PAGE + npages, TCES_PER_PAGE) /
- 			TCES_PER_PAGE;
- 	for (i = sttpage; i < sttpage + sttpages; ++i)
- 		if (!stt->pages[i])
- 			return H_TOO_HARD;
- 
- 	return H_SUCCESS;
- }
- 
- static long iommu_tce_xchg_no_kill_rm(struct mm_struct *mm,
- 		struct iommu_table *tbl,
- 		unsigned long entry, unsigned long *hpa,
- 		enum dma_data_direction *direction)
- {
- 	long ret;
- 
- 	ret = tbl->it_ops->xchg_no_kill(tbl, entry, hpa, direction, true);
- 
- 	if (!ret && ((*direction == DMA_FROM_DEVICE) ||
- 			(*direction == DMA_BIDIRECTIONAL))) {
- 		__be64 *pua = IOMMU_TABLE_USERSPACE_ENTRY_RO(tbl, entry);
- 		/*
- 		 * kvmppc_rm_tce_iommu_do_map() updates the UA cache after
- 		 * calling this so we still get here a valid UA.
- 		 */
- 		if (pua && *pua)
- 			mm_iommu_ua_mark_dirty_rm(mm, be64_to_cpu(*pua));
- 	}
- 
- 	return ret;
- }
- 
- static void iommu_tce_kill_rm(struct iommu_table *tbl,
- 		unsigned long entry, unsigned long pages)
- {
- 	if (tbl->it_ops->tce_kill)
- 		tbl->it_ops->tce_kill(tbl, entry, pages, true);
- }
- 
- static void kvmppc_rm_clear_tce(struct kvm *kvm, struct kvmppc_spapr_tce_table *stt,
- 		struct iommu_table *tbl, unsigned long entry)
- {
- 	unsigned long i;
- 	unsigned long subpages = 1ULL << (stt->page_shift - tbl->it_page_shift);
- 	unsigned long io_entry = entry << (stt->page_shift - tbl->it_page_shift);
- 
- 	for (i = 0; i < subpages; ++i) {
- 		unsigned long hpa = 0;
- 		enum dma_data_direction dir = DMA_NONE;
- 
- 		iommu_tce_xchg_no_kill_rm(kvm->mm, tbl, io_entry + i, &hpa, &dir);
- 	}
- }
- 
- static long kvmppc_rm_tce_iommu_mapped_dec(struct kvm *kvm,
- 		struct iommu_table *tbl, unsigned long entry)
- {
- 	struct mm_iommu_table_group_mem_t *mem = NULL;
- 	const unsigned long pgsize = 1ULL << tbl->it_page_shift;
- 	__be64 *pua = IOMMU_TABLE_USERSPACE_ENTRY_RO(tbl, entry);
- 
- 	if (!pua)
- 		/* it_userspace allocation might be delayed */
- 		return H_TOO_HARD;
- 
- 	mem = mm_iommu_lookup_rm(kvm->mm, be64_to_cpu(*pua), pgsize);
- 	if (!mem)
- 		return H_TOO_HARD;
- 
- 	mm_iommu_mapped_dec(mem);
- 
- 	*pua = cpu_to_be64(0);
- 
- 	return H_SUCCESS;
- }
- 
- static long kvmppc_rm_tce_iommu_do_unmap(struct kvm *kvm,
- 		struct iommu_table *tbl, unsigned long entry)
- {
- 	enum dma_data_direction dir = DMA_NONE;
- 	unsigned long hpa = 0;
- 	long ret;
- 
- 	if (iommu_tce_xchg_no_kill_rm(kvm->mm, tbl, entry, &hpa, &dir))
- 		/*
- 		 * real mode xchg can fail if struct page crosses
- 		 * a page boundary
- 		 */
- 		return H_TOO_HARD;
- 
- 	if (dir == DMA_NONE)
- 		return H_SUCCESS;
- 
- 	ret = kvmppc_rm_tce_iommu_mapped_dec(kvm, tbl, entry);
- 	if (ret)
- 		iommu_tce_xchg_no_kill_rm(kvm->mm, tbl, entry, &hpa, &dir);
- 
- 	return ret;
- }
- 
- static long kvmppc_rm_tce_iommu_unmap(struct kvm *kvm,
- 		struct kvmppc_spapr_tce_table *stt, struct iommu_table *tbl,
- 		unsigned long entry)
- {
- 	unsigned long i, ret = H_SUCCESS;
- 	unsigned long subpages = 1ULL << (stt->page_shift - tbl->it_page_shift);
- 	unsigned long io_entry = entry * subpages;
- 
- 	for (i = 0; i < subpages; ++i) {
- 		ret = kvmppc_rm_tce_iommu_do_unmap(kvm, tbl, io_entry + i);
- 		if (ret != H_SUCCESS)
- 			break;
- 	}
- 
- 	iommu_tce_kill_rm(tbl, io_entry, subpages);
- 
- 	return ret;
- }
- 
- static long kvmppc_rm_tce_iommu_do_map(struct kvm *kvm, struct iommu_table *tbl,
- 		unsigned long entry, unsigned long ua,
- 		enum dma_data_direction dir)
- {
- 	long ret;
- 	unsigned long hpa = 0;
- 	__be64 *pua = IOMMU_TABLE_USERSPACE_ENTRY_RO(tbl, entry);
- 	struct mm_iommu_table_group_mem_t *mem;
- 
- 	if (!pua)
- 		/* it_userspace allocation might be delayed */
- 		return H_TOO_HARD;
- 
- 	mem = mm_iommu_lookup_rm(kvm->mm, ua, 1ULL << tbl->it_page_shift);
- 	if (!mem)
- 		return H_TOO_HARD;
- 
- 	if (WARN_ON_ONCE_RM(mm_iommu_ua_to_hpa_rm(mem, ua, tbl->it_page_shift,
- 			&hpa)))
- 		return H_TOO_HARD;
- 
- 	if (WARN_ON_ONCE_RM(mm_iommu_mapped_inc(mem)))
- 		return H_TOO_HARD;
- 
- 	ret = iommu_tce_xchg_no_kill_rm(kvm->mm, tbl, entry, &hpa, &dir);
- 	if (ret) {
- 		mm_iommu_mapped_dec(mem);
- 		/*
- 		 * real mode xchg can fail if struct page crosses
- 		 * a page boundary
- 		 */
- 		return H_TOO_HARD;
- 	}
- 
- 	if (dir != DMA_NONE)
- 		kvmppc_rm_tce_iommu_mapped_dec(kvm, tbl, entry);
- 
- 	*pua = cpu_to_be64(ua);
- 
- 	return 0;
- }
- 
- static long kvmppc_rm_tce_iommu_map(struct kvm *kvm,
- 		struct kvmppc_spapr_tce_table *stt, struct iommu_table *tbl,
- 		unsigned long entry, unsigned long ua,
- 		enum dma_data_direction dir)
- {
- 	unsigned long i, pgoff, ret = H_SUCCESS;
- 	unsigned long subpages = 1ULL << (stt->page_shift - tbl->it_page_shift);
- 	unsigned long io_entry = entry * subpages;
- 
- 	for (i = 0, pgoff = 0; i < subpages;
- 			++i, pgoff += IOMMU_PAGE_SIZE(tbl)) {
- 
- 		ret = kvmppc_rm_tce_iommu_do_map(kvm, tbl,
- 				io_entry + i, ua + pgoff, dir);
- 		if (ret != H_SUCCESS)
- 			break;
- 	}
- 
- 	iommu_tce_kill_rm(tbl, io_entry, subpages);
- 
- 	return ret;
- }
- 
- long kvmppc_rm_h_put_tce(struct kvm_vcpu *vcpu, unsigned long liobn,
- 		unsigned long ioba, unsigned long tce)
- {
- 	struct kvmppc_spapr_tce_table *stt;
- 	long ret;
- 	struct kvmppc_spapr_tce_iommu_table *stit;
- 	unsigned long entry, ua = 0;
- 	enum dma_data_direction dir;
- 
- 	/* udbg_printf("H_PUT_TCE(): liobn=0x%lx ioba=0x%lx, tce=0x%lx\n", */
- 	/* 	liobn, ioba, tce); */
- 
- 	stt = kvmppc_find_table(vcpu->kvm, liobn);
- 	if (!stt)
- 		return H_TOO_HARD;
- 
- 	ret = kvmppc_rm_ioba_validate(stt, ioba, 1, tce == 0);
- 	if (ret != H_SUCCESS)
- 		return ret;
- 
- 	ret = kvmppc_rm_tce_validate(stt, tce);
- 	if (ret != H_SUCCESS)
- 		return ret;
- 
- 	dir = iommu_tce_direction(tce);
- 	if ((dir != DMA_NONE) && kvmppc_rm_tce_to_ua(vcpu->kvm, tce, &ua))
- 		return H_PARAMETER;
- 
- 	entry = ioba >> stt->page_shift;
- 
- 	list_for_each_entry_lockless(stit, &stt->iommu_tables, next) {
- 		if (dir == DMA_NONE)
- 			ret = kvmppc_rm_tce_iommu_unmap(vcpu->kvm, stt,
- 					stit->tbl, entry);
- 		else
- 			ret = kvmppc_rm_tce_iommu_map(vcpu->kvm, stt,
- 					stit->tbl, entry, ua, dir);
- 
- 		if (ret != H_SUCCESS) {
- 			kvmppc_rm_clear_tce(vcpu->kvm, stt, stit->tbl, entry);
- 			return ret;
- 		}
- 	}
- 
- 	kvmppc_rm_tce_put(stt, entry, tce);
- 
- 	return H_SUCCESS;
- }
- 
- static long kvmppc_rm_ua_to_hpa(struct kvm_vcpu *vcpu, unsigned long mmu_seq,
- 		unsigned long ua, unsigned long *phpa)
- {
- 	pte_t *ptep, pte;
- 	unsigned shift = 0;
- 
- 	/*
- 	 * Called in real mode with MSR_EE = 0. We are safe here.
- 	 * It is ok to do the lookup with arch.pgdir here, because
- 	 * we are doing this on secondary cpus and current task there
- 	 * is not the hypervisor. Also this is safe against THP in the
- 	 * host, because an IPI to primary thread will wait for the secondary
- 	 * to exit which will agains result in the below page table walk
- 	 * to finish.
- 	 */
- 	/* an rmap lock won't make it safe. because that just ensure hash
- 	 * page table entries are removed with rmap lock held. After that
- 	 * mmu notifier returns and we go ahead and removing ptes from Qemu page table.
- 	 */
- 	ptep = find_kvm_host_pte(vcpu->kvm, mmu_seq, ua, &shift);
- 	if (!ptep)
- 		return -ENXIO;
- 
- 	pte = READ_ONCE(*ptep);
- 	if (!pte_present(pte))
- 		return -ENXIO;
- 
- 	if (!shift)
- 		shift = PAGE_SHIFT;
- 
- 	/* Avoid handling anything potentially complicated in realmode */
- 	if (shift > PAGE_SHIFT)
- 		return -EAGAIN;
- 
- 	if (!pte_young(pte))
- 		return -EAGAIN;
- 
- 	*phpa = (pte_pfn(pte) << PAGE_SHIFT) | (ua & ((1ULL << shift) - 1)) |
- 			(ua & ~PAGE_MASK);
- 
- 	return 0;
- }
- 
- long kvmppc_rm_h_put_tce_indirect(struct kvm_vcpu *vcpu,
- 		unsigned long liobn, unsigned long ioba,
- 		unsigned long tce_list, unsigned long npages)
- {
- 	struct kvm *kvm = vcpu->kvm;
- 	struct kvmppc_spapr_tce_table *stt;
- 	long i, ret = H_SUCCESS;
- 	unsigned long tces, entry, ua = 0;
- 	unsigned long mmu_seq;
- 	bool prereg = false;
- 	struct kvmppc_spapr_tce_iommu_table *stit;
- 
- 	/*
- 	 * used to check for invalidations in progress
- 	 */
- 	mmu_seq = kvm->mmu_notifier_seq;
- 	smp_rmb();
- 
- 	stt = kvmppc_find_table(vcpu->kvm, liobn);
- 	if (!stt)
- 		return H_TOO_HARD;
- 
- 	entry = ioba >> stt->page_shift;
- 	/*
- 	 * The spec says that the maximum size of the list is 512 TCEs
- 	 * so the whole table addressed resides in 4K page
- 	 */
- 	if (npages > 512)
- 		return H_PARAMETER;
- 
- 	if (tce_list & (SZ_4K - 1))
- 		return H_PARAMETER;
- 
- 	ret = kvmppc_rm_ioba_validate(stt, ioba, npages, false);
- 	if (ret != H_SUCCESS)
- 		return ret;
- 
- 	if (mm_iommu_preregistered(vcpu->kvm->mm)) {
- 		/*
- 		 * We get here if guest memory was pre-registered which
- 		 * is normally VFIO case and gpa->hpa translation does not
- 		 * depend on hpt.
- 		 */
- 		struct mm_iommu_table_group_mem_t *mem;
- 
- 		if (kvmppc_rm_tce_to_ua(vcpu->kvm, tce_list, &ua))
- 			return H_TOO_HARD;
- 
- 		mem = mm_iommu_lookup_rm(vcpu->kvm->mm, ua, IOMMU_PAGE_SIZE_4K);
- 		if (mem)
- 			prereg = mm_iommu_ua_to_hpa_rm(mem, ua,
- 					IOMMU_PAGE_SHIFT_4K, &tces) == 0;
- 	}
- 
- 	if (!prereg) {
- 		/*
- 		 * This is usually a case of a guest with emulated devices only
- 		 * when TCE list is not in preregistered memory.
- 		 * We do not require memory to be preregistered in this case
- 		 * so lock rmap and do __find_linux_pte_or_hugepte().
- 		 */
- 		if (kvmppc_rm_tce_to_ua(vcpu->kvm, tce_list, &ua))
- 			return H_TOO_HARD;
- 
- 		arch_spin_lock(&kvm->mmu_lock.rlock.raw_lock);
- 		if (kvmppc_rm_ua_to_hpa(vcpu, mmu_seq, ua, &tces)) {
- 			ret = H_TOO_HARD;
- 			goto unlock_exit;
- 		}
- 	}
- 
- 	for (i = 0; i < npages; ++i) {
- 		unsigned long tce = be64_to_cpu(((u64 *)tces)[i]);
- 
- 		ret = kvmppc_rm_tce_validate(stt, tce);
- 		if (ret != H_SUCCESS)
- 			goto unlock_exit;
- 	}
- 
- 	for (i = 0; i < npages; ++i) {
- 		unsigned long tce = be64_to_cpu(((u64 *)tces)[i]);
- 
- 		ua = 0;
- 		if (kvmppc_rm_tce_to_ua(vcpu->kvm, tce, &ua)) {
- 			ret = H_PARAMETER;
- 			goto unlock_exit;
- 		}
- 
- 		list_for_each_entry_lockless(stit, &stt->iommu_tables, next) {
- 			ret = kvmppc_rm_tce_iommu_map(vcpu->kvm, stt,
- 					stit->tbl, entry + i, ua,
- 					iommu_tce_direction(tce));
- 
- 			if (ret != H_SUCCESS) {
- 				kvmppc_rm_clear_tce(vcpu->kvm, stt, stit->tbl,
- 						entry + i);
- 				goto unlock_exit;
- 			}
- 		}
- 
- 		kvmppc_rm_tce_put(stt, entry + i, tce);
- 	}
- 
- unlock_exit:
- 	if (!prereg)
- 		arch_spin_unlock(&kvm->mmu_lock.rlock.raw_lock);
- 	return ret;
- }
- 
- long kvmppc_rm_h_stuff_tce(struct kvm_vcpu *vcpu,
- 		unsigned long liobn, unsigned long ioba,
- 		unsigned long tce_value, unsigned long npages)
- {
- 	struct kvmppc_spapr_tce_table *stt;
- 	long i, ret;
- 	struct kvmppc_spapr_tce_iommu_table *stit;
- 
- 	stt = kvmppc_find_table(vcpu->kvm, liobn);
- 	if (!stt)
- 		return H_TOO_HARD;
- 
- 	ret = kvmppc_rm_ioba_validate(stt, ioba, npages, tce_value == 0);
- 	if (ret != H_SUCCESS)
- 		return ret;
- 
- 	/* Check permission bits only to allow userspace poison TCE for debug */
- 	if (tce_value & (TCE_PCI_WRITE | TCE_PCI_READ))
- 		return H_PARAMETER;
- 
- 	list_for_each_entry_lockless(stit, &stt->iommu_tables, next) {
- 		unsigned long entry = ioba >> stt->page_shift;
- 
- 		for (i = 0; i < npages; ++i) {
- 			ret = kvmppc_rm_tce_iommu_unmap(vcpu->kvm, stt,
- 					stit->tbl, entry + i);
- 
- 			if (ret == H_SUCCESS)
- 				continue;
- 
- 			if (ret == H_TOO_HARD)
- 				return ret;
- 
- 			WARN_ON_ONCE_RM(1);
- 			kvmppc_rm_clear_tce(vcpu->kvm, stt, stit->tbl, entry + i);
- 		}
- 	}
- 
- 	for (i = 0; i < npages; ++i, ioba += (1ULL << stt->page_shift))
- 		kvmppc_rm_tce_put(stt, ioba >> stt->page_shift, tce_value);
- 
- 	return ret;
- }
- 
- /* This can be called in either virtual mode or real mode */
- long kvmppc_h_get_tce(struct kvm_vcpu *vcpu, unsigned long liobn,
- 		unsigned long ioba)
- {
- 	struct kvmppc_spapr_tce_table *stt;
- 	long ret;
- 	unsigned long idx;
- 	struct page *page;
- 	u64 *tbl;
- 
- 	stt = kvmppc_find_table(vcpu->kvm, liobn);
- 	if (!stt)
- 		return H_TOO_HARD;
- 
- 	ret = kvmppc_ioba_validate(stt, ioba, 1);
- 	if (ret != H_SUCCESS)
- 		return ret;
- 
- 	idx = (ioba >> stt->page_shift) - stt->offset;
- 	page = stt->pages[idx / TCES_PER_PAGE];
- 	if (!page) {
- 		vcpu->arch.regs.gpr[4] = 0;
- 		return H_SUCCESS;
- 	}
- 	tbl = (u64 *)page_address(page);
- 
- 	vcpu->arch.regs.gpr[4] = tbl[idx % TCES_PER_PAGE];
- 
- 	return H_SUCCESS;
- }
- EXPORT_SYMBOL_GPL(kvmppc_h_get_tce);
- 
- #endif /* KVM_BOOK3S_HV_POSSIBLE */
+1 -1
arch/powerpc/kvm/book3s_emulate.c
···
 
 	/*
 	 * add rules to fit in ISA specification regarding TM
-	 * state transistion in TM disable/Suspended state,
+	 * state transition in TM disable/Suspended state,
 	 * and target TM state is TM inactive(00) state. (the
 	 * change should be suppressed).
 	 */
+60 -14
arch/powerpc/kvm/book3s_hv.c
···
 #include <linux/module.h>
 #include <linux/compiler.h>
 #include <linux/of.h>
+#include <linux/irqdomain.h>
 
 #include <asm/ftrace.h>
 #include <asm/reg.h>
···
 	case H_CONFER:
 	case H_REGISTER_VPA:
 	case H_SET_MODE:
+#ifdef CONFIG_SPAPR_TCE_IOMMU
+	case H_GET_TCE:
+	case H_PUT_TCE:
+	case H_PUT_TCE_INDIRECT:
+	case H_STUFF_TCE:
+#endif
 	case H_LOGICAL_CI_LOAD:
 	case H_LOGICAL_CI_STORE:
 #ifdef CONFIG_KVM_XICS
···
 	 * to trap and then we emulate them.
 	 */
 	vcpu->arch.hfscr = HFSCR_TAR | HFSCR_EBB | HFSCR_PM | HFSCR_BHRB |
-		HFSCR_DSCR | HFSCR_VECVSX | HFSCR_FP | HFSCR_PREFIX;
+		HFSCR_DSCR | HFSCR_VECVSX | HFSCR_FP;
 	if (cpu_has_feature(CPU_FTR_HVMODE)) {
 		vcpu->arch.hfscr &= mfspr(SPRN_HFSCR);
 #ifdef CONFIG_PPC_TRANSACTIONAL_MEM
···
 
 	kvmhv_save_hv_regs(vcpu, &hvregs);
 	hvregs.lpcr = lpcr;
+	hvregs.amor = ~0;
 	vcpu->arch.regs.msr = vcpu->arch.shregs.msr;
 	hvregs.version = HV_GUEST_STATE_VERSION;
 	if (vcpu->arch.nested) {
···
 static int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
 				unsigned long lpcr, u64 *tb)
 {
+	struct kvm *kvm = vcpu->kvm;
+	struct kvm_nested_guest *nested = vcpu->arch.nested;
 	u64 next_timer;
 	int trap;
 
···
 		trap = kvmhv_vcpu_entry_p9_nested(vcpu, time_limit, lpcr, tb);
 
 		/* H_CEDE has to be handled now, not later */
-		if (trap == BOOK3S_INTERRUPT_SYSCALL && !vcpu->arch.nested &&
+		if (trap == BOOK3S_INTERRUPT_SYSCALL && !nested &&
 		    kvmppc_get_gpr(vcpu, 3) == H_CEDE) {
 			kvmppc_cede(vcpu);
 			kvmppc_set_gpr(vcpu, 3, 0);
 			trap = 0;
 		}
 
-	} else {
-		struct kvm *kvm = vcpu->kvm;
+	} else if (nested) {
+		__this_cpu_write(cpu_in_guest, kvm);
+		trap = kvmhv_vcpu_entry_p9(vcpu, time_limit, lpcr, tb);
+		__this_cpu_write(cpu_in_guest, NULL);
 
+	} else {
 		kvmppc_xive_push_vcpu(vcpu);
 
 		__this_cpu_write(cpu_in_guest, kvm);
 		trap = kvmhv_vcpu_entry_p9(vcpu, time_limit, lpcr, tb);
 		__this_cpu_write(cpu_in_guest, NULL);
 
-		if (trap == BOOK3S_INTERRUPT_SYSCALL && !vcpu->arch.nested &&
+		if (trap == BOOK3S_INTERRUPT_SYSCALL &&
 		    !(vcpu->arch.shregs.msr & MSR_PR)) {
 			unsigned long req = kvmppc_get_gpr(vcpu, 3);
 
-			/* H_CEDE has to be handled now, not later */
+			/*
+			 * XIVE rearm and XICS hcalls must be handled
+			 * before xive context is pulled (is this
+			 * true?)
+			 */
 			if (req == H_CEDE) {
+				/* H_CEDE has to be handled now */
 				kvmppc_cede(vcpu);
-				kvmppc_xive_rearm_escalation(vcpu); /* may un-cede */
+				if (!kvmppc_xive_rearm_escalation(vcpu)) {
+					/*
+					 * Pending escalation so abort
+					 * the cede.
+					 */
+					vcpu->arch.ceded = 0;
+				}
 				kvmppc_set_gpr(vcpu, 3, 0);
 				trap = 0;
 
-				/* XICS hcalls must be handled before xive is pulled */
+			} else if (req == H_ENTER_NESTED) {
+				/*
+				 * L2 should not run with the L1
+				 * context so rearm and pull it.
+				 */
+				if (!kvmppc_xive_rearm_escalation(vcpu)) {
+					/*
+					 * Pending escalation so abort
+					 * H_ENTER_NESTED.
+					 */
+					kvmppc_set_gpr(vcpu, 3, 0);
+					trap = 0;
+				}
+
 			} else if (hcall_is_xics(req)) {
 				int ret;
···
 		start_wait = ktime_get();
 
 		vc->vcore_state = VCORE_SLEEPING;
-		trace_kvmppc_vcore_blocked(vc, 0);
+		trace_kvmppc_vcore_blocked(vc->runner, 0);
 		spin_unlock(&vc->lock);
 		schedule();
 		finish_rcuwait(&vc->wait);
 		spin_lock(&vc->lock);
 		vc->vcore_state = VCORE_INACTIVE;
-		trace_kvmppc_vcore_blocked(vc, 1);
+		trace_kvmppc_vcore_blocked(vc->runner, 1);
 		++vc->runner->stat.halt_successful_wait;
 
 		cur = ktime_get();
···
 
 	if (!nested) {
 		kvmppc_core_prepare_to_enter(vcpu);
-		if (test_bit(BOOK3S_IRQPRIO_EXTERNAL,
-			     &vcpu->arch.pending_exceptions))
+		if (vcpu->arch.shregs.msr & MSR_EE) {
+			if (xive_interrupt_pending(vcpu))
+				kvmppc_inject_interrupt_hv(vcpu,
+						BOOK3S_INTERRUPT_EXTERNAL, 0);
+		} else if (test_bit(BOOK3S_IRQPRIO_EXTERNAL,
+			     &vcpu->arch.pending_exceptions)) {
 			lpcr |= LPCR_MER;
+		}
 	} else if (vcpu->arch.pending_exceptions ||
 		   vcpu->arch.doorbell_request ||
 		   xive_interrupt_pending(vcpu)) {
···
 		if (kvmppc_vcpu_check_block(vcpu))
 			break;
 
-		trace_kvmppc_vcore_blocked(vc, 0);
+		trace_kvmppc_vcore_blocked(vcpu, 0);
 		schedule();
-		trace_kvmppc_vcore_blocked(vc, 1);
+		trace_kvmppc_vcore_blocked(vcpu, 1);
 	}
 	finish_rcuwait(wait);
 }
···
 	kvm->arch.host_lpcr = lpcr = mfspr(SPRN_LPCR);
 	lpcr &= LPCR_PECE | LPCR_LPES;
 	} else {
+		/*
+		 * The L2 LPES mode will be set by the L0 according to whether
+		 * or not it needs to take external interrupts in HV mode.
+		 */
 		lpcr = 0;
 	}
 	lpcr |= (4UL << LPCR_DPFD_SH) | LPCR_HDICE |
-64
arch/powerpc/kvm/book3s_hv_builtin.c
···
 	return kvmppc_check_passthru(xisr, xirr, again);
 }
 
-#ifdef CONFIG_KVM_XICS
-unsigned long kvmppc_rm_h_xirr(struct kvm_vcpu *vcpu)
-{
-	if (!kvmppc_xics_enabled(vcpu))
-		return H_TOO_HARD;
-	if (xics_on_xive())
-		return xive_rm_h_xirr(vcpu);
-	else
-		return xics_rm_h_xirr(vcpu);
-}
-
-unsigned long kvmppc_rm_h_xirr_x(struct kvm_vcpu *vcpu)
-{
-	if (!kvmppc_xics_enabled(vcpu))
-		return H_TOO_HARD;
-	vcpu->arch.regs.gpr[5] = get_tb();
-	if (xics_on_xive())
-		return xive_rm_h_xirr(vcpu);
-	else
-		return xics_rm_h_xirr(vcpu);
-}
-
-unsigned long kvmppc_rm_h_ipoll(struct kvm_vcpu *vcpu, unsigned long server)
-{
-	if (!kvmppc_xics_enabled(vcpu))
-		return H_TOO_HARD;
-	if (xics_on_xive())
-		return xive_rm_h_ipoll(vcpu, server);
-	else
-		return H_TOO_HARD;
-}
-
-int kvmppc_rm_h_ipi(struct kvm_vcpu *vcpu, unsigned long server,
-		    unsigned long mfrr)
-{
-	if (!kvmppc_xics_enabled(vcpu))
-		return H_TOO_HARD;
-	if (xics_on_xive())
-		return xive_rm_h_ipi(vcpu, server, mfrr);
-	else
-		return xics_rm_h_ipi(vcpu, server, mfrr);
-}
-
-int kvmppc_rm_h_cppr(struct kvm_vcpu *vcpu, unsigned long cppr)
-{
-	if (!kvmppc_xics_enabled(vcpu))
-		return H_TOO_HARD;
-	if (xics_on_xive())
-		return xive_rm_h_cppr(vcpu, cppr);
-	else
-		return xics_rm_h_cppr(vcpu, cppr);
-}
-
-int kvmppc_rm_h_eoi(struct kvm_vcpu *vcpu, unsigned long xirr)
-{
-	if (!kvmppc_xics_enabled(vcpu))
-		return H_TOO_HARD;
-	if (xics_on_xive())
-		return xive_rm_h_eoi(vcpu, xirr);
-	else
-		return xics_rm_h_eoi(vcpu, xirr);
-}
-#endif /* CONFIG_KVM_XICS */
-
 void kvmppc_bad_interrupt(struct pt_regs *regs)
 {
 	/*
+70 -67
arch/powerpc/kvm/book3s_hv_nested.c
··· 261 261 /* 262 262 * Don't let L1 change LPCR bits for the L2 except these: 263 263 */ 264 - mask = LPCR_DPFD | LPCR_ILE | LPCR_TC | LPCR_AIL | LPCR_LD | 265 - LPCR_LPES | LPCR_MER; 264 + mask = LPCR_DPFD | LPCR_ILE | LPCR_TC | LPCR_AIL | LPCR_LD | LPCR_MER; 266 265 267 266 /* 268 267 * Additional filtering is required depending on hardware ··· 438 439 if (!radix_enabled()) 439 440 return -ENODEV; 440 441 441 - /* find log base 2 of KVMPPC_NR_LPIDS, rounding up */ 442 - ptb_order = __ilog2(KVMPPC_NR_LPIDS - 1) + 1; 443 - if (ptb_order < 8) 444 - ptb_order = 8; 442 + /* Partition table entry is 1<<4 bytes in size, hence the 4. */ 443 + ptb_order = KVM_MAX_NESTED_GUESTS_SHIFT + 4; 444 + /* Minimum partition table size is 1<<12 bytes */ 445 + if (ptb_order < 12) 446 + ptb_order = 12; 445 447 pseries_partition_tb = kmalloc(sizeof(struct patb_entry) << ptb_order, 446 448 GFP_KERNEL); 447 449 if (!pseries_partition_tb) { ··· 450 450 return -ENOMEM; 451 451 } 452 452 453 - ptcr = __pa(pseries_partition_tb) | (ptb_order - 8); 453 + ptcr = __pa(pseries_partition_tb) | (ptb_order - 12); 454 454 rc = plpar_hcall_norets(H_SET_PARTITION_TABLE, ptcr); 455 455 if (rc != H_SUCCESS) { 456 456 pr_err("kvm-hv: Parent hypervisor does not support nesting (rc=%ld)\n", ··· 521 521 kvmhv_set_ptbl_entry(gp->shadow_lpid, dw0, gp->process_table); 522 522 } 523 523 524 - void kvmhv_vm_nested_init(struct kvm *kvm) 525 - { 526 - kvm->arch.max_nested_lpid = -1; 527 - } 528 - 529 524 /* 530 525 * Handle the H_SET_PARTITION_TABLE hcall. 531 526 * r4 = guest real address of partition table + log_2(size) - 12 ··· 534 539 long ret = H_SUCCESS; 535 540 536 541 srcu_idx = srcu_read_lock(&kvm->srcu); 537 - /* 538 - * Limit the partition table to 4096 entries (because that's what 539 - * hardware supports), and check the base address. 540 - */ 541 - if ((ptcr & PRTS_MASK) > 12 - 8 || 542 + /* Check partition size and base address. 
*/ 543 + if ((ptcr & PRTS_MASK) + 12 - 4 > KVM_MAX_NESTED_GUESTS_SHIFT || 542 544 !kvm_is_visible_gfn(vcpu->kvm, (ptcr & PRTB_MASK) >> PAGE_SHIFT)) 543 545 ret = H_PARAMETER; 544 546 srcu_read_unlock(&kvm->srcu, srcu_idx); 545 547 if (ret == H_SUCCESS) 546 548 kvm->arch.l1_ptcr = ptcr; 549 + 547 550 return ret; 548 551 } 549 552 ··· 637 644 638 645 ret = -EFAULT; 639 646 ptbl_addr = (kvm->arch.l1_ptcr & PRTB_MASK) + (gp->l1_lpid << 4); 640 - if (gp->l1_lpid < (1ul << ((kvm->arch.l1_ptcr & PRTS_MASK) + 8))) { 647 + if (gp->l1_lpid < (1ul << ((kvm->arch.l1_ptcr & PRTS_MASK) + 12 - 4))) { 641 648 int srcu_idx = srcu_read_lock(&kvm->srcu); 642 649 ret = kvm_read_guest(kvm, ptbl_addr, 643 650 &ptbl_entry, sizeof(ptbl_entry)); ··· 651 658 gp->process_table = be64_to_cpu(ptbl_entry.patb1); 652 659 } 653 660 kvmhv_set_nested_ptbl(gp); 661 + } 662 + 663 + void kvmhv_vm_nested_init(struct kvm *kvm) 664 + { 665 + idr_init(&kvm->arch.kvm_nested_guest_idr); 666 + } 667 + 668 + static struct kvm_nested_guest *__find_nested(struct kvm *kvm, int lpid) 669 + { 670 + return idr_find(&kvm->arch.kvm_nested_guest_idr, lpid); 671 + } 672 + 673 + static bool __prealloc_nested(struct kvm *kvm, int lpid) 674 + { 675 + if (idr_alloc(&kvm->arch.kvm_nested_guest_idr, 676 + NULL, lpid, lpid + 1, GFP_KERNEL) != lpid) 677 + return false; 678 + return true; 679 + } 680 + 681 + static void __add_nested(struct kvm *kvm, int lpid, struct kvm_nested_guest *gp) 682 + { 683 + if (idr_replace(&kvm->arch.kvm_nested_guest_idr, gp, lpid)) 684 + WARN_ON(1); 685 + } 686 + 687 + static void __remove_nested(struct kvm *kvm, int lpid) 688 + { 689 + idr_remove(&kvm->arch.kvm_nested_guest_idr, lpid); 654 690 } 655 691 656 692 static struct kvm_nested_guest *kvmhv_alloc_nested(struct kvm *kvm, unsigned int lpid) ··· 742 720 long ref; 743 721 744 722 spin_lock(&kvm->mmu_lock); 745 - if (gp == kvm->arch.nested_guests[lpid]) { 746 - kvm->arch.nested_guests[lpid] = NULL; 747 - if (lpid == kvm->arch.max_nested_lpid) { 
748 - while (--lpid >= 0 && !kvm->arch.nested_guests[lpid]) 749 - ; 750 - kvm->arch.max_nested_lpid = lpid; 751 - } 723 + if (gp == __find_nested(kvm, lpid)) { 724 + __remove_nested(kvm, lpid); 752 725 --gp->refcnt; 753 726 } 754 727 ref = gp->refcnt; ··· 760 743 */ 761 744 void kvmhv_release_all_nested(struct kvm *kvm) 762 745 { 763 - int i; 746 + int lpid; 764 747 struct kvm_nested_guest *gp; 765 748 struct kvm_nested_guest *freelist = NULL; 766 749 struct kvm_memory_slot *memslot; 767 750 int srcu_idx, bkt; 768 751 769 752 spin_lock(&kvm->mmu_lock); 770 - for (i = 0; i <= kvm->arch.max_nested_lpid; i++) { 771 - gp = kvm->arch.nested_guests[i]; 772 - if (!gp) 773 - continue; 774 - kvm->arch.nested_guests[i] = NULL; 753 + idr_for_each_entry(&kvm->arch.kvm_nested_guest_idr, gp, lpid) { 754 + __remove_nested(kvm, lpid); 775 755 if (--gp->refcnt == 0) { 776 756 gp->next = freelist; 777 757 freelist = gp; 778 758 } 779 759 } 780 - kvm->arch.max_nested_lpid = -1; 760 + idr_destroy(&kvm->arch.kvm_nested_guest_idr); 761 + /* idr is empty and may be reused at this point */ 781 762 spin_unlock(&kvm->mmu_lock); 782 763 while ((gp = freelist) != NULL) { 783 764 freelist = gp->next; ··· 807 792 { 808 793 struct kvm_nested_guest *gp, *newgp; 809 794 810 - if (l1_lpid >= KVM_MAX_NESTED_GUESTS || 811 - l1_lpid >= (1ul << ((kvm->arch.l1_ptcr & PRTS_MASK) + 12 - 4))) 795 + if (l1_lpid >= (1ul << ((kvm->arch.l1_ptcr & PRTS_MASK) + 12 - 4))) 812 796 return NULL; 813 797 814 798 spin_lock(&kvm->mmu_lock); 815 - gp = kvm->arch.nested_guests[l1_lpid]; 799 + gp = __find_nested(kvm, l1_lpid); 816 800 if (gp) 817 801 ++gp->refcnt; 818 802 spin_unlock(&kvm->mmu_lock); ··· 822 808 newgp = kvmhv_alloc_nested(kvm, l1_lpid); 823 809 if (!newgp) 824 810 return NULL; 811 + 812 + if (!__prealloc_nested(kvm, l1_lpid)) { 813 + kvmhv_release_nested(newgp); 814 + return NULL; 815 + } 816 + 825 817 spin_lock(&kvm->mmu_lock); 826 - if (kvm->arch.nested_guests[l1_lpid]) { 827 - /* someone else beat us 
to it */ 828 - gp = kvm->arch.nested_guests[l1_lpid]; 829 - } else { 830 - kvm->arch.nested_guests[l1_lpid] = newgp; 818 + gp = __find_nested(kvm, l1_lpid); 819 + if (!gp) { 820 + __add_nested(kvm, l1_lpid, newgp); 831 821 ++newgp->refcnt; 832 822 gp = newgp; 833 823 newgp = NULL; 834 - if (l1_lpid > kvm->arch.max_nested_lpid) 835 - kvm->arch.max_nested_lpid = l1_lpid; 836 824 } 837 825 ++gp->refcnt; 838 826 spin_unlock(&kvm->mmu_lock); ··· 857 841 kvmhv_release_nested(gp); 858 842 } 859 843 860 - static struct kvm_nested_guest *kvmhv_find_nested(struct kvm *kvm, int lpid) 861 - { 862 - if (lpid > kvm->arch.max_nested_lpid) 863 - return NULL; 864 - return kvm->arch.nested_guests[lpid]; 865 - } 866 - 867 844 pte_t *find_kvm_nested_guest_pte(struct kvm *kvm, unsigned long lpid, 868 845 unsigned long ea, unsigned *hshift) 869 846 { 870 847 struct kvm_nested_guest *gp; 871 848 pte_t *pte; 872 849 873 - gp = kvmhv_find_nested(kvm, lpid); 850 + gp = __find_nested(kvm, lpid); 874 851 if (!gp) 875 852 return NULL; 876 853 ··· 969 960 970 961 gpa = n_rmap & RMAP_NESTED_GPA_MASK; 971 962 lpid = (n_rmap & RMAP_NESTED_LPID_MASK) >> RMAP_NESTED_LPID_SHIFT; 972 - gp = kvmhv_find_nested(kvm, lpid); 963 + gp = __find_nested(kvm, lpid); 973 964 if (!gp) 974 965 return; 975 966 ··· 1161 1152 { 1162 1153 struct kvm *kvm = vcpu->kvm; 1163 1154 struct kvm_nested_guest *gp; 1164 - int i; 1155 + int lpid; 1165 1156 1166 1157 spin_lock(&kvm->mmu_lock); 1167 - for (i = 0; i <= kvm->arch.max_nested_lpid; i++) { 1168 - gp = kvm->arch.nested_guests[i]; 1169 - if (gp) { 1170 - spin_unlock(&kvm->mmu_lock); 1171 - kvmhv_emulate_tlbie_lpid(vcpu, gp, ric); 1172 - spin_lock(&kvm->mmu_lock); 1173 - } 1158 + idr_for_each_entry(&kvm->arch.kvm_nested_guest_idr, gp, lpid) { 1159 + spin_unlock(&kvm->mmu_lock); 1160 + kvmhv_emulate_tlbie_lpid(vcpu, gp, ric); 1161 + spin_lock(&kvm->mmu_lock); 1174 1162 } 1175 1163 spin_unlock(&kvm->mmu_lock); 1176 1164 } ··· 1319 1313 * H_ENTER_NESTED call. 
Since we can't differentiate this case from 1320 1314 * the invalid case, we ignore such flush requests and return success. 1321 1315 */ 1322 - if (!kvmhv_find_nested(vcpu->kvm, lpid)) 1316 + if (!__find_nested(vcpu->kvm, lpid)) 1323 1317 return H_SUCCESS; 1324 1318 1325 1319 /* ··· 1663 1657 1664 1658 int kvmhv_nested_next_lpid(struct kvm *kvm, int lpid) 1665 1659 { 1666 - int ret = -1; 1660 + int ret = lpid + 1; 1667 1661 1668 1662 spin_lock(&kvm->mmu_lock); 1669 - while (++lpid <= kvm->arch.max_nested_lpid) { 1670 - if (kvm->arch.nested_guests[lpid]) { 1671 - ret = lpid; 1672 - break; 1673 - } 1674 - } 1663 + if (!idr_get_next(&kvm->arch.kvm_nested_guest_idr, &ret)) 1664 + ret = -1; 1675 1665 spin_unlock(&kvm->mmu_lock); 1666 + 1676 1667 return ret; 1677 1668 }
+12 -5
arch/powerpc/kvm/book3s_hv_p9_entry.c
··· 379 379 { 380 380 /* 381 381 * current->thread.xxx registers must all be restored to host 382 - * values before a potential context switch, othrewise the context 382 + * values before a potential context switch, otherwise the context 383 383 * switch itself will overwrite current->thread.xxx with the values 384 384 * from the guest SPRs. 385 385 */ ··· 539 539 { 540 540 struct kvm_nested_guest *nested = vcpu->arch.nested; 541 541 u32 lpid; 542 + u32 pid; 542 543 543 544 lpid = nested ? nested->shadow_lpid : kvm->arch.lpid; 545 + pid = vcpu->arch.pid; 544 546 545 547 /* 546 548 * Prior memory accesses to host PID Q3 must be completed before we ··· 553 551 isync(); 554 552 mtspr(SPRN_LPID, lpid); 555 553 mtspr(SPRN_LPCR, lpcr); 556 - mtspr(SPRN_PID, vcpu->arch.pid); 554 + mtspr(SPRN_PID, pid); 557 555 /* 558 556 * isync not required here because we are HRFID'ing to guest before 559 557 * any guest context access, which is context synchronising. ··· 563 561 static void switch_mmu_to_guest_hpt(struct kvm *kvm, struct kvm_vcpu *vcpu, u64 lpcr) 564 562 { 565 563 u32 lpid; 564 + u32 pid; 566 565 int i; 567 566 568 567 lpid = kvm->arch.lpid; 568 + pid = vcpu->arch.pid; 569 569 570 570 /* 571 571 * See switch_mmu_to_guest_radix. 
ptesync should not be required here ··· 578 574 isync(); 579 575 mtspr(SPRN_LPID, lpid); 580 576 mtspr(SPRN_LPCR, lpcr); 581 - mtspr(SPRN_PID, vcpu->arch.pid); 577 + mtspr(SPRN_PID, pid); 582 578 583 579 for (i = 0; i < vcpu->arch.slb_max; i++) 584 580 mtslb(vcpu->arch.slb[i].orige, vcpu->arch.slb[i].origv); ··· 589 585 590 586 static void switch_mmu_to_host(struct kvm *kvm, u32 pid) 591 587 { 588 + u32 lpid = kvm->arch.host_lpid; 589 + u64 lpcr = kvm->arch.host_lpcr; 590 + 592 591 /* 593 592 * The guest has exited, so guest MMU context is no longer being 594 593 * non-speculatively accessed, but a hwsync is needed before the ··· 601 594 asm volatile("hwsync" ::: "memory"); 602 595 isync(); 603 596 mtspr(SPRN_PID, pid); 604 - mtspr(SPRN_LPID, kvm->arch.host_lpid); 605 - mtspr(SPRN_LPCR, kvm->arch.host_lpcr); 597 + mtspr(SPRN_LPID, lpid); 598 + mtspr(SPRN_LPCR, lpcr); 606 599 /* 607 600 * isync is not required after the switch, because mtmsrd with L=0 608 601 * is performed after this switch, which is context synchronising.
+6 -1
arch/powerpc/kvm/book3s_hv_rm_xics.c
··· 479 479 } 480 480 } 481 481 482 + unsigned long xics_rm_h_xirr_x(struct kvm_vcpu *vcpu) 483 + { 484 + vcpu->arch.regs.gpr[5] = get_tb(); 485 + return xics_rm_h_xirr(vcpu); 486 + } 482 487 483 488 unsigned long xics_rm_h_xirr(struct kvm_vcpu *vcpu) 484 489 { ··· 888 883 889 884 /* --- Non-real mode XICS-related built-in routines --- */ 890 885 891 - /** 886 + /* 892 887 * Host Operations poked by RM KVM 893 888 */ 894 889 static void rm_host_ipi_action(int action, void *data)
-46
arch/powerpc/kvm/book3s_hv_rm_xive.c
··· 1 - // SPDX-License-Identifier: GPL-2.0 2 - #include <linux/kernel.h> 3 - #include <linux/kvm_host.h> 4 - #include <linux/err.h> 5 - #include <linux/kernel_stat.h> 6 - #include <linux/pgtable.h> 7 - 8 - #include <asm/kvm_book3s.h> 9 - #include <asm/kvm_ppc.h> 10 - #include <asm/hvcall.h> 11 - #include <asm/xics.h> 12 - #include <asm/debug.h> 13 - #include <asm/synch.h> 14 - #include <asm/cputhreads.h> 15 - #include <asm/ppc-opcode.h> 16 - #include <asm/pnv-pci.h> 17 - #include <asm/opal.h> 18 - #include <asm/smp.h> 19 - #include <asm/xive.h> 20 - #include <asm/xive-regs.h> 21 - 22 - #include "book3s_xive.h" 23 - 24 - /* XXX */ 25 - #include <asm/udbg.h> 26 - //#define DBG(fmt...) udbg_printf(fmt) 27 - #define DBG(fmt...) do { } while(0) 28 - 29 - static inline void __iomem *get_tima_phys(void) 30 - { 31 - return local_paca->kvm_hstate.xive_tima_phys; 32 - } 33 - 34 - #undef XIVE_RUNTIME_CHECKS 35 - #define X_PFX xive_rm_ 36 - #define X_STATIC 37 - #define X_STAT_PFX stat_rm_ 38 - #define __x_tima get_tima_phys() 39 - #define __x_eoi_page(xd) ((void __iomem *)((xd)->eoi_page)) 40 - #define __x_trig_page(xd) ((void __iomem *)((xd)->trig_page)) 41 - #define __x_writeb __raw_rm_writeb 42 - #define __x_readw __raw_rm_readw 43 - #define __x_readq __raw_rm_readq 44 - #define __x_writeq __raw_rm_writeq 45 - 46 - #include "book3s_xive_template.c"
+14 -16
arch/powerpc/kvm/book3s_hv_rmhandlers.S
··· 51 51 #define STACK_SLOT_FSCR (SFS-96) 52 52 53 53 /* 54 + * Use the last LPID (all implemented LPID bits = 1) for partition switching. 55 + * This is reserved in the LPID allocator. POWER7 only implements 0x3ff, but 56 + * we write 0xfff into the LPID SPR anyway, which seems to work and just 57 + * ignores the top bits. 58 + */ 59 + #define LPID_RSVD 0xfff 60 + 61 + /* 54 62 * Call kvmppc_hv_entry in real mode. 55 63 * Must be called with interrupts hard-disabled. 56 64 * ··· 1792 1784 .long DOTSYM(kvmppc_h_clear_mod) - hcall_real_table 1793 1785 .long DOTSYM(kvmppc_h_clear_ref) - hcall_real_table 1794 1786 .long DOTSYM(kvmppc_h_protect) - hcall_real_table 1795 - #ifdef CONFIG_SPAPR_TCE_IOMMU 1796 - .long DOTSYM(kvmppc_h_get_tce) - hcall_real_table 1797 - .long DOTSYM(kvmppc_rm_h_put_tce) - hcall_real_table 1798 - #else 1799 1787 .long 0 /* 0x1c */ 1800 1788 .long 0 /* 0x20 */ 1801 - #endif 1802 1789 .long 0 /* 0x24 - H_SET_SPRG0 */ 1803 1790 .long DOTSYM(kvmppc_h_set_dabr) - hcall_real_table 1804 1791 .long DOTSYM(kvmppc_rm_h_page_init) - hcall_real_table ··· 1811 1808 .long 0 /* 0x5c */ 1812 1809 .long 0 /* 0x60 */ 1813 1810 #ifdef CONFIG_KVM_XICS 1814 - .long DOTSYM(kvmppc_rm_h_eoi) - hcall_real_table 1815 - .long DOTSYM(kvmppc_rm_h_cppr) - hcall_real_table 1816 - .long DOTSYM(kvmppc_rm_h_ipi) - hcall_real_table 1817 - .long DOTSYM(kvmppc_rm_h_ipoll) - hcall_real_table 1818 - .long DOTSYM(kvmppc_rm_h_xirr) - hcall_real_table 1811 + .long DOTSYM(xics_rm_h_eoi) - hcall_real_table 1812 + .long DOTSYM(xics_rm_h_cppr) - hcall_real_table 1813 + .long DOTSYM(xics_rm_h_ipi) - hcall_real_table 1814 + .long 0 /* 0x70 - H_IPOLL */ 1815 + .long DOTSYM(xics_rm_h_xirr) - hcall_real_table 1819 1816 #else 1820 1817 .long 0 /* 0x64 - H_EOI */ 1821 1818 .long 0 /* 0x68 - H_CPPR */ ··· 1871 1868 .long 0 /* 0x12c */ 1872 1869 .long 0 /* 0x130 */ 1873 1870 .long DOTSYM(kvmppc_h_set_xdabr) - hcall_real_table 1874 - #ifdef CONFIG_SPAPR_TCE_IOMMU 1875 - .long 
DOTSYM(kvmppc_rm_h_stuff_tce) - hcall_real_table 1876 - .long DOTSYM(kvmppc_rm_h_put_tce_indirect) - hcall_real_table 1877 - #else 1878 1871 .long 0 /* 0x138 */ 1879 1872 .long 0 /* 0x13c */ 1880 - #endif 1881 1873 .long 0 /* 0x140 */ 1882 1874 .long 0 /* 0x144 */ 1883 1875 .long 0 /* 0x148 */ ··· 1985 1987 .long 0 /* 0x2f4 */ 1986 1988 .long 0 /* 0x2f8 */ 1987 1989 #ifdef CONFIG_KVM_XICS 1988 - .long DOTSYM(kvmppc_rm_h_xirr_x) - hcall_real_table 1990 + .long DOTSYM(xics_rm_h_xirr_x) - hcall_real_table 1989 1991 #else 1990 1992 .long 0 /* 0x2fc - H_XIRR_X*/ 1991 1993 #endif
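The `hcall_real_table` slots edited above are stored as `.long <handler> - hcall_real_table`, i.e. 32-bit offsets from the table base rather than absolute addresses, with `.long 0` marking an unimplemented hcall. A hedged C sketch of that position-independent dispatch pattern (handler names and return values here are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stdint.h>

static long h_enter(void)  { return 100; }
static long h_remove(void) { return 200; }

typedef long (*hcall_fn)(void);

/* Offsets from the table's own address; 0 = unimplemented, matching the
 * ".long 0" slots in hcall_real_table. */
static intptr_t table[3];

static void init_table(void)
{
	table[0] = (intptr_t)h_enter  - (intptr_t)table; /* hcall 0x04 */
	table[1] = 0;                                    /* hcall 0x08 */
	table[2] = (intptr_t)h_remove - (intptr_t)table; /* hcall 0x0c */
}

/* Index = (hcall - 4) / 4, mirroring how the asm scales the hcall number
 * before indexing the table. */
static long do_hcall(unsigned int hcall)
{
	intptr_t off = table[(hcall - 4) / 4];

	if (!off)
		return -1; /* not implemented */
	return ((hcall_fn)((intptr_t)table + off))();
}
```

Storing offsets keeps the table valid wherever the image is loaded, which is why zeroing a slot (as this patch does for the dropped real-mode TCE and H_IPOLL handlers) is the whole story of disabling an hcall.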
+6 -4
arch/powerpc/kvm/book3s_hv_uvmem.c
··· 120 120 * content is un-encrypted. 121 121 * 122 122 * (c) Normal - The GFN is a normal. The GFN is associated with 123 - * a normal VM. The contents of the GFN is accesible to 123 + * a normal VM. The contents of the GFN is accessible to 124 124 * the Hypervisor. Its content is never encrypted. 125 125 * 126 126 * States of a VM. ··· 361 361 static bool kvmppc_next_nontransitioned_gfn(const struct kvm_memory_slot *memslot, 362 362 struct kvm *kvm, unsigned long *gfn) 363 363 { 364 - struct kvmppc_uvmem_slot *p; 364 + struct kvmppc_uvmem_slot *p = NULL, *iter; 365 365 bool ret = false; 366 366 unsigned long i; 367 367 368 - list_for_each_entry(p, &kvm->arch.uvmem_pfns, list) 369 - if (*gfn >= p->base_pfn && *gfn < p->base_pfn + p->nr_pfns) 368 + list_for_each_entry(iter, &kvm->arch.uvmem_pfns, list) 369 + if (*gfn >= iter->base_pfn && *gfn < iter->base_pfn + iter->nr_pfns) { 370 + p = iter; 370 371 break; 372 + } 371 373 if (!p) 372 374 return ret; 373 375 /*
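The `kvmppc_next_nontransitioned_gfn()` change above is the common hardening of `list_for_each_entry()` call sites: after a full traversal the iterator variable holds a head-derived non-entry, so the match is now published through a separate pointer (`p`) that stays NULL on a miss. A minimal sketch of the same pattern with a plain singly linked list (types and names are illustrative):

```c
#include <assert.h>
#include <stddef.h>

struct uvmem_slot {
	unsigned long base_pfn, nr_pfns;
	struct uvmem_slot *next;
};

/* Walk with 'iter' but only publish a confirmed match through 'p', so a
 * full traversal without a hit returns NULL instead of whatever value
 * the iterator ends up holding. */
static struct uvmem_slot *find_slot(struct uvmem_slot *head, unsigned long gfn)
{
	struct uvmem_slot *p = NULL, *iter;

	for (iter = head; iter; iter = iter->next) {
		if (gfn >= iter->base_pfn && gfn < iter->base_pfn + iter->nr_pfns) {
			p = iter;
			break;
		}
	}
	return p;
}
```

In the original code the `if (!p)` check after the loop could never be true in the miss case, because `p` then pointed at the list head's containing (non-)entry rather than NULL; splitting the roles of the two pointers makes the check meaningful.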
+1 -1
arch/powerpc/kvm/book3s_interrupts.S
··· 15 15 #include <asm/asm-compat.h> 16 16 17 17 #if defined(CONFIG_PPC_BOOK3S_64) 18 - #ifdef PPC64_ELF_ABI_v2 18 + #ifdef CONFIG_PPC64_ELF_ABI_V2 19 19 #define FUNC(name) name 20 20 #else 21 21 #define FUNC(name) GLUE(.,name)
+1 -1
arch/powerpc/kvm/book3s_pr.c
··· 1287 1287 1288 1288 /* Get last sc for papr */ 1289 1289 if (vcpu->arch.papr_enabled) { 1290 - /* The sc instuction points SRR0 to the next inst */ 1290 + /* The sc instruction points SRR0 to the next inst */ 1291 1291 emul = kvmppc_get_last_inst(vcpu, INST_SC, &last_sc); 1292 1292 if (emul != EMULATE_DONE) { 1293 1293 kvmppc_set_pc(vcpu, kvmppc_get_pc(vcpu) - 4);
+6
arch/powerpc/kvm/book3s_pr_papr.c
··· 433 433 case H_REMOVE: 434 434 case H_PROTECT: 435 435 case H_BULK_REMOVE: 436 + #ifdef CONFIG_SPAPR_TCE_IOMMU 437 + case H_GET_TCE: 436 438 case H_PUT_TCE: 437 439 case H_PUT_TCE_INDIRECT: 438 440 case H_STUFF_TCE: 441 + #endif 439 442 case H_CEDE: 440 443 case H_LOGICAL_CI_LOAD: 441 444 case H_LOGICAL_CI_STORE: ··· 467 464 H_REMOVE, 468 465 H_PROTECT, 469 466 H_BULK_REMOVE, 467 + #ifdef CONFIG_SPAPR_TCE_IOMMU 468 + H_GET_TCE, 470 469 H_PUT_TCE, 470 + #endif 471 471 H_CEDE, 472 472 H_SET_MODE, 473 473 #ifdef CONFIG_KVM_XICS
+1 -1
arch/powerpc/kvm/book3s_rmhandlers.S
··· 26 26 27 27 #if defined(CONFIG_PPC_BOOK3S_64) 28 28 29 - #ifdef PPC64_ELF_ABI_v2 29 + #ifdef CONFIG_PPC64_ELF_ABI_V2 30 30 #define FUNC(name) name 31 31 #else 32 32 #define FUNC(name) GLUE(.,name)
+1 -1
arch/powerpc/kvm/book3s_xics.c
··· 462 462 * new guy. We cannot assume that the rejected interrupt is less 463 463 * favored than the new one, and thus doesn't need to be delivered, 464 464 * because by the time we exit icp_try_to_deliver() the target 465 - * processor may well have alrady consumed & completed it, and thus 465 + * processor may well have already consumed & completed it, and thus 466 466 * the rejected interrupt might actually be already acceptable. 467 467 */ 468 468 if (icp_try_to_deliver(icp, new_irq, state->priority, &reject)) {
+630 -25
arch/powerpc/kvm/book3s_xive.c
··· 30 30 31 31 #include "book3s_xive.h" 32 32 33 - 34 - /* 35 - * Virtual mode variants of the hcalls for use on radix/radix 36 - * with AIL. They require the VCPU's VP to be "pushed" 37 - * 38 - * We still instantiate them here because we use some of the 39 - * generated utility functions as well in this file. 40 - */ 41 - #define XIVE_RUNTIME_CHECKS 42 - #define X_PFX xive_vm_ 43 - #define X_STATIC static 44 - #define X_STAT_PFX stat_vm_ 45 - #define __x_tima xive_tima 46 33 #define __x_eoi_page(xd) ((void __iomem *)((xd)->eoi_mmio)) 47 34 #define __x_trig_page(xd) ((void __iomem *)((xd)->trig_mmio)) 48 - #define __x_writeb __raw_writeb 49 - #define __x_readw __raw_readw 50 - #define __x_readq __raw_readq 51 - #define __x_writeq __raw_writeq 52 35 53 - #include "book3s_xive_template.c" 36 + /* Dummy interrupt used when taking interrupts out of a queue in H_CPPR */ 37 + #define XICS_DUMMY 1 38 + 39 + static void xive_vm_ack_pending(struct kvmppc_xive_vcpu *xc) 40 + { 41 + u8 cppr; 42 + u16 ack; 43 + 44 + /* 45 + * Ensure any previous store to CPPR is ordered vs. 46 + * the subsequent loads from PIPR or ACK. 47 + */ 48 + eieio(); 49 + 50 + /* Perform the acknowledge OS to register cycle. */ 51 + ack = be16_to_cpu(__raw_readw(xive_tima + TM_SPC_ACK_OS_REG)); 52 + 53 + /* Synchronize subsequent queue accesses */ 54 + mb(); 55 + 56 + /* XXX Check grouping level */ 57 + 58 + /* Anything ? */ 59 + if (!((ack >> 8) & TM_QW1_NSR_EO)) 60 + return; 61 + 62 + /* Grab CPPR of the most favored pending interrupt */ 63 + cppr = ack & 0xff; 64 + if (cppr < 8) 65 + xc->pending |= 1 << cppr; 66 + 67 + /* Check consistency */ 68 + if (cppr >= xc->hw_cppr) 69 + pr_warn("KVM-XIVE: CPU %d odd ack CPPR, got %d at %d\n", 70 + smp_processor_id(), cppr, xc->hw_cppr); 71 + 72 + /* 73 + * Update our image of the HW CPPR. We don't yet modify 74 + * xc->cppr, this will be done as we scan for interrupts 75 + * in the queues. 
76 + */ 77 + xc->hw_cppr = cppr; 78 + } 79 + 80 + static u8 xive_vm_esb_load(struct xive_irq_data *xd, u32 offset) 81 + { 82 + u64 val; 83 + 84 + if (offset == XIVE_ESB_SET_PQ_10 && xd->flags & XIVE_IRQ_FLAG_STORE_EOI) 85 + offset |= XIVE_ESB_LD_ST_MO; 86 + 87 + val = __raw_readq(__x_eoi_page(xd) + offset); 88 + #ifdef __LITTLE_ENDIAN__ 89 + val >>= 64-8; 90 + #endif 91 + return (u8)val; 92 + } 93 + 94 + 95 + static void xive_vm_source_eoi(u32 hw_irq, struct xive_irq_data *xd) 96 + { 97 + /* If the XIVE supports the new "store EOI" facility, use it */ 98 + if (xd->flags & XIVE_IRQ_FLAG_STORE_EOI) 99 + __raw_writeq(0, __x_eoi_page(xd) + XIVE_ESB_STORE_EOI); 100 + else if (xd->flags & XIVE_IRQ_FLAG_LSI) { 101 + /* 102 + * For LSIs the HW EOI cycle is used rather than PQ bits, 103 + * as they are automatically re-triggered in HW when still 104 + * pending. 105 + */ 106 + __raw_readq(__x_eoi_page(xd) + XIVE_ESB_LOAD_EOI); 107 + } else { 108 + uint64_t eoi_val; 109 + 110 + /* 111 + * Otherwise for EOI, we use the special MMIO that does 112 + * a clear of both P and Q and returns the old Q, 113 + * except for LSIs where we use the "EOI cycle" special 114 + * load.
115 + * 116 + * This allows us to then do a re-trigger if Q was set 117 + * rather than synthetizing an interrupt in software 118 + */ 119 + eoi_val = xive_vm_esb_load(xd, XIVE_ESB_SET_PQ_00); 120 + 121 + /* Re-trigger if needed */ 122 + if ((eoi_val & 1) && __x_trig_page(xd)) 123 + __raw_writeq(0, __x_trig_page(xd)); 124 + } 125 + } 126 + 127 + enum { 128 + scan_fetch, 129 + scan_poll, 130 + scan_eoi, 131 + }; 132 + 133 + static u32 xive_vm_scan_interrupts(struct kvmppc_xive_vcpu *xc, 134 + u8 pending, int scan_type) 135 + { 136 + u32 hirq = 0; 137 + u8 prio = 0xff; 138 + 139 + /* Find highest pending priority */ 140 + while ((xc->mfrr != 0xff || pending != 0) && hirq == 0) { 141 + struct xive_q *q; 142 + u32 idx, toggle; 143 + __be32 *qpage; 144 + 145 + /* 146 + * If pending is 0 this will return 0xff which is what 147 + * we want 148 + */ 149 + prio = ffs(pending) - 1; 150 + 151 + /* Don't scan past the guest cppr */ 152 + if (prio >= xc->cppr || prio > 7) { 153 + if (xc->mfrr < xc->cppr) { 154 + prio = xc->mfrr; 155 + hirq = XICS_IPI; 156 + } 157 + break; 158 + } 159 + 160 + /* Grab queue and pointers */ 161 + q = &xc->queues[prio]; 162 + idx = q->idx; 163 + toggle = q->toggle; 164 + 165 + /* 166 + * Snapshot the queue page. The test further down for EOI 167 + * must use the same "copy" that was used by __xive_read_eq 168 + * since qpage can be set concurrently and we don't want 169 + * to miss an EOI. 170 + */ 171 + qpage = READ_ONCE(q->qpage); 172 + 173 + skip_ipi: 174 + /* 175 + * Try to fetch from the queue. Will return 0 for a 176 + * non-queueing priority (ie, qpage = 0). 177 + */ 178 + hirq = __xive_read_eq(qpage, q->msk, &idx, &toggle); 179 + 180 + /* 181 + * If this was a signal for an MFFR change done by 182 + * H_IPI we skip it. Additionally, if we were fetching 183 + * we EOI it now, thus re-enabling reception of a new 184 + * such signal. 185 + * 186 + * We also need to do that if prio is 0 and we had no 187 + * page for the queue. 
In this case, we have non-queued 188 + * IPI that needs to be EOId. 189 + * 190 + * This is safe because if we have another pending MFRR 191 + * change that wasn't observed above, the Q bit will have 192 + * been set and another occurrence of the IPI will trigger. 193 + */ 194 + if (hirq == XICS_IPI || (prio == 0 && !qpage)) { 195 + if (scan_type == scan_fetch) { 196 + xive_vm_source_eoi(xc->vp_ipi, 197 + &xc->vp_ipi_data); 198 + q->idx = idx; 199 + q->toggle = toggle; 200 + } 201 + /* Loop back on same queue with updated idx/toggle */ 202 + WARN_ON(hirq && hirq != XICS_IPI); 203 + if (hirq) 204 + goto skip_ipi; 205 + } 206 + 207 + /* If it's the dummy interrupt, continue searching */ 208 + if (hirq == XICS_DUMMY) 209 + goto skip_ipi; 210 + 211 + /* Clear the pending bit if the queue is now empty */ 212 + if (!hirq) { 213 + pending &= ~(1 << prio); 214 + 215 + /* 216 + * Check if the queue count needs adjusting due to 217 + * interrupts being moved away. 218 + */ 219 + if (atomic_read(&q->pending_count)) { 220 + int p = atomic_xchg(&q->pending_count, 0); 221 + 222 + if (p) { 223 + WARN_ON(p > atomic_read(&q->count)); 224 + atomic_sub(p, &q->count); 225 + } 226 + } 227 + } 228 + 229 + /* 230 + * If the most favoured prio we found pending is less 231 + * favored (or equal) than a pending IPI, we return 232 + * the IPI instead. 
233 + */ 234 + if (prio >= xc->mfrr && xc->mfrr < xc->cppr) { 235 + prio = xc->mfrr; 236 + hirq = XICS_IPI; 237 + break; 238 + } 239 + 240 + /* If fetching, update queue pointers */ 241 + if (scan_type == scan_fetch) { 242 + q->idx = idx; 243 + q->toggle = toggle; 244 + } 245 + } 246 + 247 + /* If we are just taking a "peek", do nothing else */ 248 + if (scan_type == scan_poll) 249 + return hirq; 250 + 251 + /* Update the pending bits */ 252 + xc->pending = pending; 253 + 254 + /* 255 + * If this is an EOI that's it, no CPPR adjustment done here, 256 + * all we needed was cleanup the stale pending bits and check 257 + * if there's anything left. 258 + */ 259 + if (scan_type == scan_eoi) 260 + return hirq; 261 + 262 + /* 263 + * If we found an interrupt, adjust what the guest CPPR should 264 + * be as if we had just fetched that interrupt from HW. 265 + * 266 + * Note: This can only make xc->cppr smaller as the previous 267 + * loop will only exit with hirq != 0 if prio is lower than 268 + * the current xc->cppr. Thus we don't need to re-check xc->mfrr 269 + * for pending IPIs. 270 + */ 271 + if (hirq) 272 + xc->cppr = prio; 273 + /* 274 + * If it was an IPI the HW CPPR might have been lowered too much 275 + * as the HW interrupt we use for IPIs is routed to priority 0. 276 + * 277 + * We re-sync it here. 
278 + */ 279 + if (xc->cppr != xc->hw_cppr) { 280 + xc->hw_cppr = xc->cppr; 281 + __raw_writeb(xc->cppr, xive_tima + TM_QW1_OS + TM_CPPR); 282 + } 283 + 284 + return hirq; 285 + } 286 + 287 + static unsigned long xive_vm_h_xirr(struct kvm_vcpu *vcpu) 288 + { 289 + struct kvmppc_xive_vcpu *xc = vcpu->arch.xive_vcpu; 290 + u8 old_cppr; 291 + u32 hirq; 292 + 293 + pr_devel("H_XIRR\n"); 294 + 295 + xc->stat_vm_h_xirr++; 296 + 297 + /* First collect pending bits from HW */ 298 + xive_vm_ack_pending(xc); 299 + 300 + pr_devel(" new pending=0x%02x hw_cppr=%d cppr=%d\n", 301 + xc->pending, xc->hw_cppr, xc->cppr); 302 + 303 + /* Grab previous CPPR and reverse map it */ 304 + old_cppr = xive_prio_to_guest(xc->cppr); 305 + 306 + /* Scan for actual interrupts */ 307 + hirq = xive_vm_scan_interrupts(xc, xc->pending, scan_fetch); 308 + 309 + pr_devel(" got hirq=0x%x hw_cppr=%d cppr=%d\n", 310 + hirq, xc->hw_cppr, xc->cppr); 311 + 312 + /* That should never hit */ 313 + if (hirq & 0xff000000) 314 + pr_warn("XIVE: Weird guest interrupt number 0x%08x\n", hirq); 315 + 316 + /* 317 + * XXX We could check if the interrupt is masked here and 318 + * filter it. 
If we chose to do so, we would need to do: 319 + * 320 + * if (masked) { 321 + * lock(); 322 + * if (masked) { 323 + * old_Q = true; 324 + * hirq = 0; 325 + * } 326 + * unlock(); 327 + * } 328 + */ 329 + 330 + /* Return interrupt and old CPPR in GPR4 */ 331 + vcpu->arch.regs.gpr[4] = hirq | (old_cppr << 24); 332 + 333 + return H_SUCCESS; 334 + } 335 + 336 + static unsigned long xive_vm_h_ipoll(struct kvm_vcpu *vcpu, unsigned long server) 337 + { 338 + struct kvmppc_xive_vcpu *xc = vcpu->arch.xive_vcpu; 339 + u8 pending = xc->pending; 340 + u32 hirq; 341 + 342 + pr_devel("H_IPOLL(server=%ld)\n", server); 343 + 344 + xc->stat_vm_h_ipoll++; 345 + 346 + /* Grab the target VCPU if not the current one */ 347 + if (xc->server_num != server) { 348 + vcpu = kvmppc_xive_find_server(vcpu->kvm, server); 349 + if (!vcpu) 350 + return H_PARAMETER; 351 + xc = vcpu->arch.xive_vcpu; 352 + 353 + /* Scan all priorities */ 354 + pending = 0xff; 355 + } else { 356 + /* Grab pending interrupt if any */ 357 + __be64 qw1 = __raw_readq(xive_tima + TM_QW1_OS); 358 + u8 pipr = be64_to_cpu(qw1) & 0xff; 359 + 360 + if (pipr < 8) 361 + pending |= 1 << pipr; 362 + } 363 + 364 + hirq = xive_vm_scan_interrupts(xc, pending, scan_poll); 365 + 366 + /* Return interrupt and old CPPR in GPR4 */ 367 + vcpu->arch.regs.gpr[4] = hirq | (xc->cppr << 24); 368 + 369 + return H_SUCCESS; 370 + } 371 + 372 + static void xive_vm_push_pending_to_hw(struct kvmppc_xive_vcpu *xc) 373 + { 374 + u8 pending, prio; 375 + 376 + pending = xc->pending; 377 + if (xc->mfrr != 0xff) { 378 + if (xc->mfrr < 8) 379 + pending |= 1 << xc->mfrr; 380 + else 381 + pending |= 0x80; 382 + } 383 + if (!pending) 384 + return; 385 + prio = ffs(pending) - 1; 386 + 387 + __raw_writeb(prio, xive_tima + TM_SPC_SET_OS_PENDING); 388 + } 389 + 390 + static void xive_vm_scan_for_rerouted_irqs(struct kvmppc_xive *xive, 391 + struct kvmppc_xive_vcpu *xc) 392 + { 393 + unsigned int prio; 394 + 395 + /* For each priority that is now masked */ 396 + 
for (prio = xc->cppr; prio < KVMPPC_XIVE_Q_COUNT; prio++) { 397 + struct xive_q *q = &xc->queues[prio]; 398 + struct kvmppc_xive_irq_state *state; 399 + struct kvmppc_xive_src_block *sb; 400 + u32 idx, toggle, entry, irq, hw_num; 401 + struct xive_irq_data *xd; 402 + __be32 *qpage; 403 + u16 src; 404 + 405 + idx = q->idx; 406 + toggle = q->toggle; 407 + qpage = READ_ONCE(q->qpage); 408 + if (!qpage) 409 + continue; 410 + 411 + /* For each interrupt in the queue */ 412 + for (;;) { 413 + entry = be32_to_cpup(qpage + idx); 414 + 415 + /* No more ? */ 416 + if ((entry >> 31) == toggle) 417 + break; 418 + irq = entry & 0x7fffffff; 419 + 420 + /* Skip dummies and IPIs */ 421 + if (irq == XICS_DUMMY || irq == XICS_IPI) 422 + goto next; 423 + sb = kvmppc_xive_find_source(xive, irq, &src); 424 + if (!sb) 425 + goto next; 426 + state = &sb->irq_state[src]; 427 + 428 + /* Has it been rerouted ? */ 429 + if (xc->server_num == state->act_server) 430 + goto next; 431 + 432 + /* 433 + * Allright, it *has* been re-routed, kill it from 434 + * the queue. 
435 + */ 436 + qpage[idx] = cpu_to_be32((entry & 0x80000000) | XICS_DUMMY); 437 + 438 + /* Find the HW interrupt */ 439 + kvmppc_xive_select_irq(state, &hw_num, &xd); 440 + 441 + /* If it's not an LSI, set PQ to 11 the EOI will force a resend */ 442 + if (!(xd->flags & XIVE_IRQ_FLAG_LSI)) 443 + xive_vm_esb_load(xd, XIVE_ESB_SET_PQ_11); 444 + 445 + /* EOI the source */ 446 + xive_vm_source_eoi(hw_num, xd); 447 + 448 + next: 449 + idx = (idx + 1) & q->msk; 450 + if (idx == 0) 451 + toggle ^= 1; 452 + } 453 + } 454 + } 455 + 456 + static int xive_vm_h_cppr(struct kvm_vcpu *vcpu, unsigned long cppr) 457 + { 458 + struct kvmppc_xive_vcpu *xc = vcpu->arch.xive_vcpu; 459 + struct kvmppc_xive *xive = vcpu->kvm->arch.xive; 460 + u8 old_cppr; 461 + 462 + pr_devel("H_CPPR(cppr=%ld)\n", cppr); 463 + 464 + xc->stat_vm_h_cppr++; 465 + 466 + /* Map CPPR */ 467 + cppr = xive_prio_from_guest(cppr); 468 + 469 + /* Remember old and update SW state */ 470 + old_cppr = xc->cppr; 471 + xc->cppr = cppr; 472 + 473 + /* 474 + * Order the above update of xc->cppr with the subsequent 475 + * read of xc->mfrr inside push_pending_to_hw() 476 + */ 477 + smp_mb(); 478 + 479 + if (cppr > old_cppr) { 480 + /* 481 + * We are masking less, we need to look for pending things 482 + * to deliver and set VP pending bits accordingly to trigger 483 + * a new interrupt otherwise we might miss MFRR changes for 484 + * which we have optimized out sending an IPI signal. 485 + */ 486 + xive_vm_push_pending_to_hw(xc); 487 + } else { 488 + /* 489 + * We are masking more, we need to check the queue for any 490 + * interrupt that has been routed to another CPU, take 491 + * it out (replace it with the dummy) and retrigger it. 492 + * 493 + * This is necessary since those interrupts may otherwise 494 + * never be processed, at least not until this CPU restores 495 + * its CPPR. 496 + * 497 + * This is in theory racy vs. HW adding new interrupts to 498 + * the queue. 
In practice this works because the interesting 499 + * cases are when the guest has done a set_xive() to move the 500 + * interrupt away, which flushes the xive, followed by the 501 + * target CPU doing a H_CPPR. So any new interrupt coming into 502 + * the queue must still be routed to us and isn't a source 503 + * of concern. 504 + */ 505 + xive_vm_scan_for_rerouted_irqs(xive, xc); 506 + } 507 + 508 + /* Apply new CPPR */ 509 + xc->hw_cppr = cppr; 510 + __raw_writeb(cppr, xive_tima + TM_QW1_OS + TM_CPPR); 511 + 512 + return H_SUCCESS; 513 + } 514 + 515 + static int xive_vm_h_eoi(struct kvm_vcpu *vcpu, unsigned long xirr) 516 + { 517 + struct kvmppc_xive *xive = vcpu->kvm->arch.xive; 518 + struct kvmppc_xive_src_block *sb; 519 + struct kvmppc_xive_irq_state *state; 520 + struct kvmppc_xive_vcpu *xc = vcpu->arch.xive_vcpu; 521 + struct xive_irq_data *xd; 522 + u8 new_cppr = xirr >> 24; 523 + u32 irq = xirr & 0x00ffffff, hw_num; 524 + u16 src; 525 + int rc = 0; 526 + 527 + pr_devel("H_EOI(xirr=%08lx)\n", xirr); 528 + 529 + xc->stat_vm_h_eoi++; 530 + 531 + xc->cppr = xive_prio_from_guest(new_cppr); 532 + 533 + /* 534 + * IPIs are synthetized from MFRR and thus don't need 535 + * any special EOI handling. The underlying interrupt 536 + * used to signal MFRR changes is EOId when fetched from 537 + * the queue. 538 + */ 539 + if (irq == XICS_IPI || irq == 0) { 540 + /* 541 + * This barrier orders the setting of xc->cppr vs. 
542 + * subsequent test of xc->mfrr done inside 543 + * scan_interrupts and push_pending_to_hw 544 + */ 545 + smp_mb(); 546 + goto bail; 547 + } 548 + 549 + /* Find interrupt source */ 550 + sb = kvmppc_xive_find_source(xive, irq, &src); 551 + if (!sb) { 552 + pr_devel(" source not found !\n"); 553 + rc = H_PARAMETER; 554 + /* Same as above */ 555 + smp_mb(); 556 + goto bail; 557 + } 558 + state = &sb->irq_state[src]; 559 + kvmppc_xive_select_irq(state, &hw_num, &xd); 560 + 561 + state->in_eoi = true; 562 + 563 + /* 564 + * This barrier orders both setting of in_eoi above vs, 565 + * subsequent test of guest_priority, and the setting 566 + * of xc->cppr vs. subsequent test of xc->mfrr done inside 567 + * scan_interrupts and push_pending_to_hw 568 + */ 569 + smp_mb(); 570 + 571 + again: 572 + if (state->guest_priority == MASKED) { 573 + arch_spin_lock(&sb->lock); 574 + if (state->guest_priority != MASKED) { 575 + arch_spin_unlock(&sb->lock); 576 + goto again; 577 + } 578 + pr_devel(" EOI on saved P...\n"); 579 + 580 + /* Clear old_p, that will cause unmask to perform an EOI */ 581 + state->old_p = false; 582 + 583 + arch_spin_unlock(&sb->lock); 584 + } else { 585 + pr_devel(" EOI on source...\n"); 586 + 587 + /* Perform EOI on the source */ 588 + xive_vm_source_eoi(hw_num, xd); 589 + 590 + /* If it's an emulated LSI, check level and resend */ 591 + if (state->lsi && state->asserted) 592 + __raw_writeq(0, __x_trig_page(xd)); 593 + 594 + } 595 + 596 + /* 597 + * This barrier orders the above guest_priority check 598 + * and spin_lock/unlock with clearing in_eoi below. 599 + * 600 + * It also has to be a full mb() as it must ensure 601 + * the MMIOs done in source_eoi() are completed before 602 + * state->in_eoi is visible.
603 + */ 604 + mb(); 605 + state->in_eoi = false; 606 + bail: 607 + 608 + /* Re-evaluate pending IRQs and update HW */ 609 + xive_vm_scan_interrupts(xc, xc->pending, scan_eoi); 610 + xive_vm_push_pending_to_hw(xc); 611 + pr_devel(" after scan pending=%02x\n", xc->pending); 612 + 613 + /* Apply new CPPR */ 614 + xc->hw_cppr = xc->cppr; 615 + __raw_writeb(xc->cppr, xive_tima + TM_QW1_OS + TM_CPPR); 616 + 617 + return rc; 618 + } 619 + 620 + static int xive_vm_h_ipi(struct kvm_vcpu *vcpu, unsigned long server, 621 + unsigned long mfrr) 622 + { 623 + struct kvmppc_xive_vcpu *xc = vcpu->arch.xive_vcpu; 624 + 625 + pr_devel("H_IPI(server=%08lx,mfrr=%ld)\n", server, mfrr); 626 + 627 + xc->stat_vm_h_ipi++; 628 + 629 + /* Find target */ 630 + vcpu = kvmppc_xive_find_server(vcpu->kvm, server); 631 + if (!vcpu) 632 + return H_PARAMETER; 633 + xc = vcpu->arch.xive_vcpu; 634 + 635 + /* Locklessly write over MFRR */ 636 + xc->mfrr = mfrr; 637 + 638 + /* 639 + * The load of xc->cppr below and the subsequent MMIO store 640 + * to the IPI must happen after the above mfrr update is 641 + * globally visible so that: 642 + * 643 + * - Synchronize with another CPU doing an H_EOI or a H_CPPR 644 + * updating xc->cppr then reading xc->mfrr. 645 + * 646 + * - The target of the IPI sees the xc->mfrr update 647 + */ 648 + mb(); 649 + 650 + /* Shoot the IPI if most favored than target cppr */ 651 + if (mfrr < xc->cppr) 652 + __raw_writeq(0, __x_trig_page(&xc->vp_ipi_data)); 653 + 654 + return H_SUCCESS; 655 + } 54 656 55 657 /* 56 658 * We leave a gap of a couple of interrupts in the queue to ··· 726 124 * interrupt might have fired and be on its way to the 727 125 * host queue while we mask it, and if we unmask it 728 126 * early enough (re-cede right away), there is a 729 - * theorical possibility that it fires again, thus 127 + * theoretical possibility that it fires again, thus 730 128 * landing in the target queue more than once which is 731 129 * a big no-no. 
732 130 * ··· 781 179 } 782 180 EXPORT_SYMBOL_GPL(kvmppc_xive_pull_vcpu); 783 181 784 - void kvmppc_xive_rearm_escalation(struct kvm_vcpu *vcpu) 182 + bool kvmppc_xive_rearm_escalation(struct kvm_vcpu *vcpu) 785 183 { 786 184 void __iomem *esc_vaddr = (void __iomem *)vcpu->arch.xive_esc_vaddr; 185 + bool ret = true; 787 186 788 187 if (!esc_vaddr) 789 - return; 188 + return ret; 790 189 791 190 /* we are using XIVE with single escalation */ 792 191 ··· 800 197 * we also don't want to set xive_esc_on to 1 here in 801 198 * case we race with xive_esc_irq(). 802 199 */ 803 - vcpu->arch.ceded = 0; 200 + ret = false; 804 201 /* 805 202 * The escalation interrupts are special as we don't EOI them. 806 203 * There is no need to use the load-after-store ordering offset ··· 813 210 __raw_readq(esc_vaddr + XIVE_ESB_SET_PQ_00); 814 211 } 815 212 mb(); 213 + 214 + return ret; 816 215 } 817 216 EXPORT_SYMBOL_GPL(kvmppc_xive_rearm_escalation); 818 217 ··· 843 238 844 239 vcpu->arch.irq_pending = 1; 845 240 smp_mb(); 846 - if (vcpu->arch.ceded) 241 + if (vcpu->arch.ceded || vcpu->arch.nested) 847 242 kvmppc_fast_vcpu_kick(vcpu); 848 243 849 244 /* Since we have the no-EOI flag, the interrupt is effectively ··· 1227 622 1228 623 /* 1229 624 * Targetting rules: In order to avoid losing track of 1230 - * pending interrupts accross mask and unmask, which would 625 + * pending interrupts across mask and unmask, which would 1231 626 * allow queue overflows, we implement the following rules: 1232 627 * 1233 628 * - Unless it was never enabled (or we run out of capacity) ··· 1678 1073 /* 1679 1074 * If old_p is set, the interrupt is pending, we switch it to 1680 1075 * PQ=11. This will force a resend in the host so the interrupt 1681 - * isn't lost to whatver host driver may pick it up 1076 + * isn't lost to whatever host driver may pick it up 1682 1077 */ 1683 1078 if (state->old_p) 1684 1079 xive_vm_esb_load(state->pt_data, XIVE_ESB_SET_PQ_11);
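The `xive_vm_scan_interrupts()` code folded into book3s_xive.c above repeatedly selects the most favored pending priority with `prio = ffs(pending) - 1` (priority 0 is most favored, and on a `u8` an empty mask intentionally yields 0xff, as its own comment notes). That bit trick can be checked in isolation (a userspace sketch, not kernel code; `most_favored` is a hypothetical name):

```c
#include <assert.h>
#include <strings.h>

/* Bit N of 'pending' is set when priority N has work queued; lower N is
 * more favored. ffs() is 1-based, so ffs(pending) - 1 picks the winner,
 * and for pending == 0 the u8 result wraps to 0xff ("nothing pending"),
 * which naturally fails the "prio >= xc->cppr || prio > 7" test. */
static unsigned char most_favored(unsigned char pending)
{
	return (unsigned char)(ffs(pending) - 1);
}
```

Using the wrap-around value as the sentinel is what lets the scan loop's bounds check double as its empty-mask exit, with no separate "queue empty" flag.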
-7
arch/powerpc/kvm/book3s_xive.h
··· 285 285 return cur & 0x7fffffff; 286 286 } 287 287 288 - extern unsigned long xive_rm_h_xirr(struct kvm_vcpu *vcpu); 289 - extern unsigned long xive_rm_h_ipoll(struct kvm_vcpu *vcpu, unsigned long server); 290 - extern int xive_rm_h_ipi(struct kvm_vcpu *vcpu, unsigned long server, 291 - unsigned long mfrr); 292 - extern int xive_rm_h_cppr(struct kvm_vcpu *vcpu, unsigned long cppr); 293 - extern int xive_rm_h_eoi(struct kvm_vcpu *vcpu, unsigned long xirr); 294 - 295 288 /* 296 289 * Common Xive routines for XICS-over-XIVE and XIVE native 297 290 */
+1 -1
arch/powerpc/kvm/book3s_xive_native.c
··· 209 209 210 210 /* 211 211 * Clear the ESB pages of the IRQ number being mapped (or 212 - * unmapped) into the guest and let the the VM fault handler 212 + * unmapped) into the guest and let the VM fault handler 213 213 * repopulate with the appropriate ESB pages (device or IC) 214 214 */ 215 215 pr_debug("clearing esb pages for girq 0x%lx\n", irq);
-636
arch/powerpc/kvm/book3s_xive_template.c
··· 1 - // SPDX-License-Identifier: GPL-2.0-only 2 - /* 3 - * Copyright 2017 Benjamin Herrenschmidt, IBM Corporation 4 - */ 5 - 6 - /* File to be included by other .c files */ 7 - 8 - #define XGLUE(a,b) a##b 9 - #define GLUE(a,b) XGLUE(a,b) 10 - 11 - /* Dummy interrupt used when taking interrupts out of a queue in H_CPPR */ 12 - #define XICS_DUMMY 1 13 - 14 - static void GLUE(X_PFX,ack_pending)(struct kvmppc_xive_vcpu *xc) 15 - { 16 - u8 cppr; 17 - u16 ack; 18 - 19 - /* 20 - * Ensure any previous store to CPPR is ordered vs. 21 - * the subsequent loads from PIPR or ACK. 22 - */ 23 - eieio(); 24 - 25 - /* Perform the acknowledge OS to register cycle. */ 26 - ack = be16_to_cpu(__x_readw(__x_tima + TM_SPC_ACK_OS_REG)); 27 - 28 - /* Synchronize subsequent queue accesses */ 29 - mb(); 30 - 31 - /* XXX Check grouping level */ 32 - 33 - /* Anything ? */ 34 - if (!((ack >> 8) & TM_QW1_NSR_EO)) 35 - return; 36 - 37 - /* Grab CPPR of the most favored pending interrupt */ 38 - cppr = ack & 0xff; 39 - if (cppr < 8) 40 - xc->pending |= 1 << cppr; 41 - 42 - #ifdef XIVE_RUNTIME_CHECKS 43 - /* Check consistency */ 44 - if (cppr >= xc->hw_cppr) 45 - pr_warn("KVM-XIVE: CPU %d odd ack CPPR, got %d at %d\n", 46 - smp_processor_id(), cppr, xc->hw_cppr); 47 - #endif 48 - 49 - /* 50 - * Update our image of the HW CPPR. We don't yet modify 51 - * xc->cppr, this will be done as we scan for interrupts 52 - * in the queues. 
53 - */ 54 - xc->hw_cppr = cppr; 55 - } 56 - 57 - static u8 GLUE(X_PFX,esb_load)(struct xive_irq_data *xd, u32 offset) 58 - { 59 - u64 val; 60 - 61 - if (offset == XIVE_ESB_SET_PQ_10 && xd->flags & XIVE_IRQ_FLAG_STORE_EOI) 62 - offset |= XIVE_ESB_LD_ST_MO; 63 - 64 - val =__x_readq(__x_eoi_page(xd) + offset); 65 - #ifdef __LITTLE_ENDIAN__ 66 - val >>= 64-8; 67 - #endif 68 - return (u8)val; 69 - } 70 - 71 - 72 - static void GLUE(X_PFX,source_eoi)(u32 hw_irq, struct xive_irq_data *xd) 73 - { 74 - /* If the XIVE supports the new "store EOI facility, use it */ 75 - if (xd->flags & XIVE_IRQ_FLAG_STORE_EOI) 76 - __x_writeq(0, __x_eoi_page(xd) + XIVE_ESB_STORE_EOI); 77 - else if (xd->flags & XIVE_IRQ_FLAG_LSI) { 78 - /* 79 - * For LSIs the HW EOI cycle is used rather than PQ bits, 80 - * as they are automatically re-triggred in HW when still 81 - * pending. 82 - */ 83 - __x_readq(__x_eoi_page(xd) + XIVE_ESB_LOAD_EOI); 84 - } else { 85 - uint64_t eoi_val; 86 - 87 - /* 88 - * Otherwise for EOI, we use the special MMIO that does 89 - * a clear of both P and Q and returns the old Q, 90 - * except for LSIs where we use the "EOI cycle" special 91 - * load. 
92 - * 93 - * This allows us to then do a re-trigger if Q was set 94 - * rather than synthetizing an interrupt in software 95 - */ 96 - eoi_val = GLUE(X_PFX,esb_load)(xd, XIVE_ESB_SET_PQ_00); 97 - 98 - /* Re-trigger if needed */ 99 - if ((eoi_val & 1) && __x_trig_page(xd)) 100 - __x_writeq(0, __x_trig_page(xd)); 101 - } 102 - } 103 - 104 - enum { 105 - scan_fetch, 106 - scan_poll, 107 - scan_eoi, 108 - }; 109 - 110 - static u32 GLUE(X_PFX,scan_interrupts)(struct kvmppc_xive_vcpu *xc, 111 - u8 pending, int scan_type) 112 - { 113 - u32 hirq = 0; 114 - u8 prio = 0xff; 115 - 116 - /* Find highest pending priority */ 117 - while ((xc->mfrr != 0xff || pending != 0) && hirq == 0) { 118 - struct xive_q *q; 119 - u32 idx, toggle; 120 - __be32 *qpage; 121 - 122 - /* 123 - * If pending is 0 this will return 0xff which is what 124 - * we want 125 - */ 126 - prio = ffs(pending) - 1; 127 - 128 - /* Don't scan past the guest cppr */ 129 - if (prio >= xc->cppr || prio > 7) { 130 - if (xc->mfrr < xc->cppr) { 131 - prio = xc->mfrr; 132 - hirq = XICS_IPI; 133 - } 134 - break; 135 - } 136 - 137 - /* Grab queue and pointers */ 138 - q = &xc->queues[prio]; 139 - idx = q->idx; 140 - toggle = q->toggle; 141 - 142 - /* 143 - * Snapshot the queue page. The test further down for EOI 144 - * must use the same "copy" that was used by __xive_read_eq 145 - * since qpage can be set concurrently and we don't want 146 - * to miss an EOI. 147 - */ 148 - qpage = READ_ONCE(q->qpage); 149 - 150 - skip_ipi: 151 - /* 152 - * Try to fetch from the queue. Will return 0 for a 153 - * non-queueing priority (ie, qpage = 0). 154 - */ 155 - hirq = __xive_read_eq(qpage, q->msk, &idx, &toggle); 156 - 157 - /* 158 - * If this was a signal for an MFFR change done by 159 - * H_IPI we skip it. Additionally, if we were fetching 160 - * we EOI it now, thus re-enabling reception of a new 161 - * such signal. 162 - * 163 - * We also need to do that if prio is 0 and we had no 164 - * page for the queue. 
In this case, we have non-queued 165 - * IPI that needs to be EOId. 166 - * 167 - * This is safe because if we have another pending MFRR 168 - * change that wasn't observed above, the Q bit will have 169 - * been set and another occurrence of the IPI will trigger. 170 - */ 171 - if (hirq == XICS_IPI || (prio == 0 && !qpage)) { 172 - if (scan_type == scan_fetch) { 173 - GLUE(X_PFX,source_eoi)(xc->vp_ipi, 174 - &xc->vp_ipi_data); 175 - q->idx = idx; 176 - q->toggle = toggle; 177 - } 178 - /* Loop back on same queue with updated idx/toggle */ 179 - #ifdef XIVE_RUNTIME_CHECKS 180 - WARN_ON(hirq && hirq != XICS_IPI); 181 - #endif 182 - if (hirq) 183 - goto skip_ipi; 184 - } 185 - 186 - /* If it's the dummy interrupt, continue searching */ 187 - if (hirq == XICS_DUMMY) 188 - goto skip_ipi; 189 - 190 - /* Clear the pending bit if the queue is now empty */ 191 - if (!hirq) { 192 - pending &= ~(1 << prio); 193 - 194 - /* 195 - * Check if the queue count needs adjusting due to 196 - * interrupts being moved away. 197 - */ 198 - if (atomic_read(&q->pending_count)) { 199 - int p = atomic_xchg(&q->pending_count, 0); 200 - if (p) { 201 - #ifdef XIVE_RUNTIME_CHECKS 202 - WARN_ON(p > atomic_read(&q->count)); 203 - #endif 204 - atomic_sub(p, &q->count); 205 - } 206 - } 207 - } 208 - 209 - /* 210 - * If the most favoured prio we found pending is less 211 - * favored (or equal) than a pending IPI, we return 212 - * the IPI instead. 
213 - */ 214 - if (prio >= xc->mfrr && xc->mfrr < xc->cppr) { 215 - prio = xc->mfrr; 216 - hirq = XICS_IPI; 217 - break; 218 - } 219 - 220 - /* If fetching, update queue pointers */ 221 - if (scan_type == scan_fetch) { 222 - q->idx = idx; 223 - q->toggle = toggle; 224 - } 225 - } 226 - 227 - /* If we are just taking a "peek", do nothing else */ 228 - if (scan_type == scan_poll) 229 - return hirq; 230 - 231 - /* Update the pending bits */ 232 - xc->pending = pending; 233 - 234 - /* 235 - * If this is an EOI that's it, no CPPR adjustment done here, 236 - * all we needed was cleanup the stale pending bits and check 237 - * if there's anything left. 238 - */ 239 - if (scan_type == scan_eoi) 240 - return hirq; 241 - 242 - /* 243 - * If we found an interrupt, adjust what the guest CPPR should 244 - * be as if we had just fetched that interrupt from HW. 245 - * 246 - * Note: This can only make xc->cppr smaller as the previous 247 - * loop will only exit with hirq != 0 if prio is lower than 248 - * the current xc->cppr. Thus we don't need to re-check xc->mfrr 249 - * for pending IPIs. 250 - */ 251 - if (hirq) 252 - xc->cppr = prio; 253 - /* 254 - * If it was an IPI the HW CPPR might have been lowered too much 255 - * as the HW interrupt we use for IPIs is routed to priority 0. 256 - * 257 - * We re-sync it here. 
258 - */ 259 - if (xc->cppr != xc->hw_cppr) { 260 - xc->hw_cppr = xc->cppr; 261 - __x_writeb(xc->cppr, __x_tima + TM_QW1_OS + TM_CPPR); 262 - } 263 - 264 - return hirq; 265 - } 266 - 267 - X_STATIC unsigned long GLUE(X_PFX,h_xirr)(struct kvm_vcpu *vcpu) 268 - { 269 - struct kvmppc_xive_vcpu *xc = vcpu->arch.xive_vcpu; 270 - u8 old_cppr; 271 - u32 hirq; 272 - 273 - pr_devel("H_XIRR\n"); 274 - 275 - xc->GLUE(X_STAT_PFX,h_xirr)++; 276 - 277 - /* First collect pending bits from HW */ 278 - GLUE(X_PFX,ack_pending)(xc); 279 - 280 - pr_devel(" new pending=0x%02x hw_cppr=%d cppr=%d\n", 281 - xc->pending, xc->hw_cppr, xc->cppr); 282 - 283 - /* Grab previous CPPR and reverse map it */ 284 - old_cppr = xive_prio_to_guest(xc->cppr); 285 - 286 - /* Scan for actual interrupts */ 287 - hirq = GLUE(X_PFX,scan_interrupts)(xc, xc->pending, scan_fetch); 288 - 289 - pr_devel(" got hirq=0x%x hw_cppr=%d cppr=%d\n", 290 - hirq, xc->hw_cppr, xc->cppr); 291 - 292 - #ifdef XIVE_RUNTIME_CHECKS 293 - /* That should never hit */ 294 - if (hirq & 0xff000000) 295 - pr_warn("XIVE: Weird guest interrupt number 0x%08x\n", hirq); 296 - #endif 297 - 298 - /* 299 - * XXX We could check if the interrupt is masked here and 300 - * filter it. 
If we chose to do so, we would need to do: 301 - * 302 - * if (masked) { 303 - * lock(); 304 - * if (masked) { 305 - * old_Q = true; 306 - * hirq = 0; 307 - * } 308 - * unlock(); 309 - * } 310 - */ 311 - 312 - /* Return interrupt and old CPPR in GPR4 */ 313 - vcpu->arch.regs.gpr[4] = hirq | (old_cppr << 24); 314 - 315 - return H_SUCCESS; 316 - } 317 - 318 - X_STATIC unsigned long GLUE(X_PFX,h_ipoll)(struct kvm_vcpu *vcpu, unsigned long server) 319 - { 320 - struct kvmppc_xive_vcpu *xc = vcpu->arch.xive_vcpu; 321 - u8 pending = xc->pending; 322 - u32 hirq; 323 - 324 - pr_devel("H_IPOLL(server=%ld)\n", server); 325 - 326 - xc->GLUE(X_STAT_PFX,h_ipoll)++; 327 - 328 - /* Grab the target VCPU if not the current one */ 329 - if (xc->server_num != server) { 330 - vcpu = kvmppc_xive_find_server(vcpu->kvm, server); 331 - if (!vcpu) 332 - return H_PARAMETER; 333 - xc = vcpu->arch.xive_vcpu; 334 - 335 - /* Scan all priorities */ 336 - pending = 0xff; 337 - } else { 338 - /* Grab pending interrupt if any */ 339 - __be64 qw1 = __x_readq(__x_tima + TM_QW1_OS); 340 - u8 pipr = be64_to_cpu(qw1) & 0xff; 341 - if (pipr < 8) 342 - pending |= 1 << pipr; 343 - } 344 - 345 - hirq = GLUE(X_PFX,scan_interrupts)(xc, pending, scan_poll); 346 - 347 - /* Return interrupt and old CPPR in GPR4 */ 348 - vcpu->arch.regs.gpr[4] = hirq | (xc->cppr << 24); 349 - 350 - return H_SUCCESS; 351 - } 352 - 353 - static void GLUE(X_PFX,push_pending_to_hw)(struct kvmppc_xive_vcpu *xc) 354 - { 355 - u8 pending, prio; 356 - 357 - pending = xc->pending; 358 - if (xc->mfrr != 0xff) { 359 - if (xc->mfrr < 8) 360 - pending |= 1 << xc->mfrr; 361 - else 362 - pending |= 0x80; 363 - } 364 - if (!pending) 365 - return; 366 - prio = ffs(pending) - 1; 367 - 368 - __x_writeb(prio, __x_tima + TM_SPC_SET_OS_PENDING); 369 - } 370 - 371 - static void GLUE(X_PFX,scan_for_rerouted_irqs)(struct kvmppc_xive *xive, 372 - struct kvmppc_xive_vcpu *xc) 373 - { 374 - unsigned int prio; 375 - 376 - /* For each priority that is now 
masked */ 377 - for (prio = xc->cppr; prio < KVMPPC_XIVE_Q_COUNT; prio++) { 378 - struct xive_q *q = &xc->queues[prio]; 379 - struct kvmppc_xive_irq_state *state; 380 - struct kvmppc_xive_src_block *sb; 381 - u32 idx, toggle, entry, irq, hw_num; 382 - struct xive_irq_data *xd; 383 - __be32 *qpage; 384 - u16 src; 385 - 386 - idx = q->idx; 387 - toggle = q->toggle; 388 - qpage = READ_ONCE(q->qpage); 389 - if (!qpage) 390 - continue; 391 - 392 - /* For each interrupt in the queue */ 393 - for (;;) { 394 - entry = be32_to_cpup(qpage + idx); 395 - 396 - /* No more ? */ 397 - if ((entry >> 31) == toggle) 398 - break; 399 - irq = entry & 0x7fffffff; 400 - 401 - /* Skip dummies and IPIs */ 402 - if (irq == XICS_DUMMY || irq == XICS_IPI) 403 - goto next; 404 - sb = kvmppc_xive_find_source(xive, irq, &src); 405 - if (!sb) 406 - goto next; 407 - state = &sb->irq_state[src]; 408 - 409 - /* Has it been rerouted ? */ 410 - if (xc->server_num == state->act_server) 411 - goto next; 412 - 413 - /* 414 - * Allright, it *has* been re-routed, kill it from 415 - * the queue. 
416 - */ 417 - qpage[idx] = cpu_to_be32((entry & 0x80000000) | XICS_DUMMY); 418 - 419 - /* Find the HW interrupt */ 420 - kvmppc_xive_select_irq(state, &hw_num, &xd); 421 - 422 - /* If it's not an LSI, set PQ to 11 the EOI will force a resend */ 423 - if (!(xd->flags & XIVE_IRQ_FLAG_LSI)) 424 - GLUE(X_PFX,esb_load)(xd, XIVE_ESB_SET_PQ_11); 425 - 426 - /* EOI the source */ 427 - GLUE(X_PFX,source_eoi)(hw_num, xd); 428 - 429 - next: 430 - idx = (idx + 1) & q->msk; 431 - if (idx == 0) 432 - toggle ^= 1; 433 - } 434 - } 435 - } 436 - 437 - X_STATIC int GLUE(X_PFX,h_cppr)(struct kvm_vcpu *vcpu, unsigned long cppr) 438 - { 439 - struct kvmppc_xive_vcpu *xc = vcpu->arch.xive_vcpu; 440 - struct kvmppc_xive *xive = vcpu->kvm->arch.xive; 441 - u8 old_cppr; 442 - 443 - pr_devel("H_CPPR(cppr=%ld)\n", cppr); 444 - 445 - xc->GLUE(X_STAT_PFX,h_cppr)++; 446 - 447 - /* Map CPPR */ 448 - cppr = xive_prio_from_guest(cppr); 449 - 450 - /* Remember old and update SW state */ 451 - old_cppr = xc->cppr; 452 - xc->cppr = cppr; 453 - 454 - /* 455 - * Order the above update of xc->cppr with the subsequent 456 - * read of xc->mfrr inside push_pending_to_hw() 457 - */ 458 - smp_mb(); 459 - 460 - if (cppr > old_cppr) { 461 - /* 462 - * We are masking less, we need to look for pending things 463 - * to deliver and set VP pending bits accordingly to trigger 464 - * a new interrupt otherwise we might miss MFRR changes for 465 - * which we have optimized out sending an IPI signal. 466 - */ 467 - GLUE(X_PFX,push_pending_to_hw)(xc); 468 - } else { 469 - /* 470 - * We are masking more, we need to check the queue for any 471 - * interrupt that has been routed to another CPU, take 472 - * it out (replace it with the dummy) and retrigger it. 473 - * 474 - * This is necessary since those interrupts may otherwise 475 - * never be processed, at least not until this CPU restores 476 - * its CPPR. 477 - * 478 - * This is in theory racy vs. HW adding new interrupts to 479 - * the queue. 
In practice this works because the interesting 480 - * cases are when the guest has done a set_xive() to move the 481 - * interrupt away, which flushes the xive, followed by the 482 - * target CPU doing a H_CPPR. So any new interrupt coming into 483 - * the queue must still be routed to us and isn't a source 484 - * of concern. 485 - */ 486 - GLUE(X_PFX,scan_for_rerouted_irqs)(xive, xc); 487 - } 488 - 489 - /* Apply new CPPR */ 490 - xc->hw_cppr = cppr; 491 - __x_writeb(cppr, __x_tima + TM_QW1_OS + TM_CPPR); 492 - 493 - return H_SUCCESS; 494 - } 495 - 496 - X_STATIC int GLUE(X_PFX,h_eoi)(struct kvm_vcpu *vcpu, unsigned long xirr) 497 - { 498 - struct kvmppc_xive *xive = vcpu->kvm->arch.xive; 499 - struct kvmppc_xive_src_block *sb; 500 - struct kvmppc_xive_irq_state *state; 501 - struct kvmppc_xive_vcpu *xc = vcpu->arch.xive_vcpu; 502 - struct xive_irq_data *xd; 503 - u8 new_cppr = xirr >> 24; 504 - u32 irq = xirr & 0x00ffffff, hw_num; 505 - u16 src; 506 - int rc = 0; 507 - 508 - pr_devel("H_EOI(xirr=%08lx)\n", xirr); 509 - 510 - xc->GLUE(X_STAT_PFX,h_eoi)++; 511 - 512 - xc->cppr = xive_prio_from_guest(new_cppr); 513 - 514 - /* 515 - * IPIs are synthetized from MFRR and thus don't need 516 - * any special EOI handling. The underlying interrupt 517 - * used to signal MFRR changes is EOId when fetched from 518 - * the queue. 519 - */ 520 - if (irq == XICS_IPI || irq == 0) { 521 - /* 522 - * This barrier orders the setting of xc->cppr vs. 
523 - * subsquent test of xc->mfrr done inside 524 - * scan_interrupts and push_pending_to_hw 525 - */ 526 - smp_mb(); 527 - goto bail; 528 - } 529 - 530 - /* Find interrupt source */ 531 - sb = kvmppc_xive_find_source(xive, irq, &src); 532 - if (!sb) { 533 - pr_devel(" source not found !\n"); 534 - rc = H_PARAMETER; 535 - /* Same as above */ 536 - smp_mb(); 537 - goto bail; 538 - } 539 - state = &sb->irq_state[src]; 540 - kvmppc_xive_select_irq(state, &hw_num, &xd); 541 - 542 - state->in_eoi = true; 543 - 544 - /* 545 - * This barrier orders both setting of in_eoi above vs, 546 - * subsequent test of guest_priority, and the setting 547 - * of xc->cppr vs. subsquent test of xc->mfrr done inside 548 - * scan_interrupts and push_pending_to_hw 549 - */ 550 - smp_mb(); 551 - 552 - again: 553 - if (state->guest_priority == MASKED) { 554 - arch_spin_lock(&sb->lock); 555 - if (state->guest_priority != MASKED) { 556 - arch_spin_unlock(&sb->lock); 557 - goto again; 558 - } 559 - pr_devel(" EOI on saved P...\n"); 560 - 561 - /* Clear old_p, that will cause unmask to perform an EOI */ 562 - state->old_p = false; 563 - 564 - arch_spin_unlock(&sb->lock); 565 - } else { 566 - pr_devel(" EOI on source...\n"); 567 - 568 - /* Perform EOI on the source */ 569 - GLUE(X_PFX,source_eoi)(hw_num, xd); 570 - 571 - /* If it's an emulated LSI, check level and resend */ 572 - if (state->lsi && state->asserted) 573 - __x_writeq(0, __x_trig_page(xd)); 574 - 575 - } 576 - 577 - /* 578 - * This barrier orders the above guest_priority check 579 - * and spin_lock/unlock with clearing in_eoi below. 580 - * 581 - * It also has to be a full mb() as it must ensure 582 - * the MMIOs done in source_eoi() are completed before 583 - * state->in_eoi is visible. 
584 - */ 585 - mb(); 586 - state->in_eoi = false; 587 - bail: 588 - 589 - /* Re-evaluate pending IRQs and update HW */ 590 - GLUE(X_PFX,scan_interrupts)(xc, xc->pending, scan_eoi); 591 - GLUE(X_PFX,push_pending_to_hw)(xc); 592 - pr_devel(" after scan pending=%02x\n", xc->pending); 593 - 594 - /* Apply new CPPR */ 595 - xc->hw_cppr = xc->cppr; 596 - __x_writeb(xc->cppr, __x_tima + TM_QW1_OS + TM_CPPR); 597 - 598 - return rc; 599 - } 600 - 601 - X_STATIC int GLUE(X_PFX,h_ipi)(struct kvm_vcpu *vcpu, unsigned long server, 602 - unsigned long mfrr) 603 - { 604 - struct kvmppc_xive_vcpu *xc = vcpu->arch.xive_vcpu; 605 - 606 - pr_devel("H_IPI(server=%08lx,mfrr=%ld)\n", server, mfrr); 607 - 608 - xc->GLUE(X_STAT_PFX,h_ipi)++; 609 - 610 - /* Find target */ 611 - vcpu = kvmppc_xive_find_server(vcpu->kvm, server); 612 - if (!vcpu) 613 - return H_PARAMETER; 614 - xc = vcpu->arch.xive_vcpu; 615 - 616 - /* Locklessly write over MFRR */ 617 - xc->mfrr = mfrr; 618 - 619 - /* 620 - * The load of xc->cppr below and the subsequent MMIO store 621 - * to the IPI must happen after the above mfrr update is 622 - * globally visible so that: 623 - * 624 - * - Synchronize with another CPU doing an H_EOI or a H_CPPR 625 - * updating xc->cppr then reading xc->mfrr. 626 - * 627 - * - The target of the IPI sees the xc->mfrr update 628 - */ 629 - mb(); 630 - 631 - /* Shoot the IPI if most favored than target cppr */ 632 - if (mfrr < xc->cppr) 633 - __x_writeq(0, __x_trig_page(&xc->vp_ipi_data)); 634 - 635 - return H_SUCCESS; 636 - }
+1 -2
arch/powerpc/kvm/e500mc.c
··· 309 309 BUILD_BUG_ON(offsetof(struct kvmppc_vcpu_e500, vcpu) != 0); 310 310 vcpu_e500 = to_e500(vcpu); 311 311 312 - /* Invalid PIR value -- this LPID dosn't have valid state on any cpu */ 312 + /* Invalid PIR value -- this LPID doesn't have valid state on any cpu */ 313 313 vcpu->arch.oldpir = 0xffffffff; 314 314 315 315 err = kvmppc_e500_tlb_init(vcpu_e500); ··· 399 399 * allocator. 400 400 */ 401 401 kvmppc_init_lpid(KVMPPC_NR_LPIDS/threads_per_core); 402 - kvmppc_claim_lpid(0); /* host */ 403 402 404 403 r = kvm_init(NULL, sizeof(struct kvmppc_vcpu_e500), 0, THIS_MODULE); 405 404 if (r)
+14 -17
arch/powerpc/kvm/powerpc.c
··· 19 19 #include <linux/module.h> 20 20 #include <linux/irqbypass.h> 21 21 #include <linux/kvm_irqfd.h> 22 + #include <linux/of.h> 22 23 #include <asm/cputable.h> 23 24 #include <linux/uaccess.h> 24 25 #include <asm/kvm_ppc.h> ··· 2497 2496 return r; 2498 2497 } 2499 2498 2500 - static unsigned long lpid_inuse[BITS_TO_LONGS(KVMPPC_NR_LPIDS)]; 2499 + static DEFINE_IDA(lpid_inuse); 2501 2500 static unsigned long nr_lpids; 2502 2501 2503 2502 long kvmppc_alloc_lpid(void) 2504 2503 { 2505 - long lpid; 2504 + int lpid; 2506 2505 2507 - do { 2508 - lpid = find_first_zero_bit(lpid_inuse, KVMPPC_NR_LPIDS); 2509 - if (lpid >= nr_lpids) { 2506 + /* The host LPID must always be 0 (allocation starts at 1) */ 2507 + lpid = ida_alloc_range(&lpid_inuse, 1, nr_lpids - 1, GFP_KERNEL); 2508 + if (lpid < 0) { 2509 + if (lpid == -ENOMEM) 2510 + pr_err("%s: Out of memory\n", __func__); 2511 + else 2510 2512 pr_err("%s: No LPIDs free\n", __func__); 2511 - return -ENOMEM; 2512 - } 2513 - } while (test_and_set_bit(lpid, lpid_inuse)); 2513 + return -ENOMEM; 2514 + } 2514 2515 2515 2516 return lpid; 2516 2517 } 2517 2518 EXPORT_SYMBOL_GPL(kvmppc_alloc_lpid); 2518 2519 2519 - void kvmppc_claim_lpid(long lpid) 2520 - { 2521 - set_bit(lpid, lpid_inuse); 2522 - } 2523 - EXPORT_SYMBOL_GPL(kvmppc_claim_lpid); 2524 - 2525 2520 void kvmppc_free_lpid(long lpid) 2526 2521 { 2527 - clear_bit(lpid, lpid_inuse); 2522 + ida_free(&lpid_inuse, lpid); 2528 2523 } 2529 2524 EXPORT_SYMBOL_GPL(kvmppc_free_lpid); 2530 2525 2526 + /* nr_lpids_param includes the host LPID */ 2531 2527 void kvmppc_init_lpid(unsigned long nr_lpids_param) 2532 2528 { 2533 - nr_lpids = min_t(unsigned long, KVMPPC_NR_LPIDS, nr_lpids_param); 2534 - memset(lpid_inuse, 0, sizeof(lpid_inuse)); 2529 + nr_lpids = nr_lpids_param; 2535 2530 } 2536 2531 EXPORT_SYMBOL_GPL(kvmppc_init_lpid); 2537 2532
+4 -4
arch/powerpc/kvm/trace_hv.h
··· 409 409 ); 410 410 411 411 TRACE_EVENT(kvmppc_vcore_blocked, 412 - TP_PROTO(struct kvmppc_vcore *vc, int where), 412 + TP_PROTO(struct kvm_vcpu *vcpu, int where), 413 413 414 - TP_ARGS(vc, where), 414 + TP_ARGS(vcpu, where), 415 415 416 416 TP_STRUCT__entry( 417 417 __field(int, n_runnable) ··· 421 421 ), 422 422 423 423 TP_fast_assign( 424 - __entry->runner_vcpu = vc->runner->vcpu_id; 425 - __entry->n_runnable = vc->n_runnable; 424 + __entry->runner_vcpu = vcpu->vcpu_id; 425 + __entry->n_runnable = vcpu->arch.vcore->n_runnable; 426 426 __entry->where = where; 427 427 __entry->tgid = current->tgid; 428 428 ),
+3
arch/powerpc/lib/Makefile
··· 13 13 14 14 KASAN_SANITIZE_code-patching.o := n 15 15 KASAN_SANITIZE_feature-fixups.o := n 16 + # restart_table.o contains functions called in the NMI interrupt path 17 + # which can be in real mode. Disable KASAN. 18 + KASAN_SANITIZE_restart_table.o := n 16 19 17 20 ifdef CONFIG_KASAN 18 21 CFLAGS_code-patching.o += -DDISABLE_BRANCH_PROFILING
+10 -51
arch/powerpc/lib/code-patching.c
··· 8 8 #include <linux/init.h> 9 9 #include <linux/cpuhotplug.h> 10 10 #include <linux/uaccess.h> 11 + #include <linux/jump_label.h> 11 12 12 13 #include <asm/tlbflush.h> 13 14 #include <asm/page.h> ··· 33 32 return 0; 34 33 35 34 failed: 36 - return -EFAULT; 35 + return -EPERM; 37 36 } 38 37 39 38 int raw_patch_instruction(u32 *addr, ppc_inst_t instr) ··· 79 78 return 0; 80 79 } 81 80 81 + static __ro_after_init DEFINE_STATIC_KEY_FALSE(poking_init_done); 82 + 82 83 /* 83 84 * Although BUG_ON() is rude, in this case it should only happen if ENOMEM, and 84 85 * we judge it as being preferable to a kernel that will crash later when ··· 91 88 BUG_ON(!cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, 92 89 "powerpc/text_poke:online", text_area_cpu_up, 93 90 text_area_cpu_down)); 91 + static_branch_enable(&poking_init_done); 94 92 } 95 93 96 94 /* ··· 101 97 { 102 98 unsigned long pfn; 103 99 104 - if (is_vmalloc_or_module_addr(addr)) 100 + if (IS_ENABLED(CONFIG_MODULES) && is_vmalloc_or_module_addr(addr)) 105 101 pfn = vmalloc_to_pfn(addr); 106 102 else 107 103 pfn = __pa_symbol(addr) >> PAGE_SHIFT; ··· 174 170 * when text_poke_area is not ready, but we still need 175 171 * to allow patching. 
We just do the plain old patching 176 172 */ 177 - if (!this_cpu_read(text_poke_area)) 173 + if (!static_branch_likely(&poking_init_done)) 178 174 return raw_patch_instruction(addr, instr); 179 175 180 176 local_irq_save(flags); ··· 192 188 193 189 #endif /* CONFIG_STRICT_KERNEL_RWX */ 194 190 191 + __ro_after_init DEFINE_STATIC_KEY_FALSE(init_mem_is_free); 192 + 195 193 int patch_instruction(u32 *addr, ppc_inst_t instr) 196 194 { 197 195 /* Make sure we aren't patching a freed init section */ 198 - if (system_state >= SYSTEM_FREEING_INITMEM && init_section_contains(addr, 4)) 196 + if (static_branch_likely(&init_mem_is_free) && init_section_contains(addr, 4)) 199 197 return 0; 200 198 201 199 return do_patch_instruction(addr, instr); ··· 212 206 return -ERANGE; 213 207 214 208 return patch_instruction(addr, instr); 215 - } 216 - 217 - bool is_offset_in_branch_range(long offset) 218 - { 219 - /* 220 - * Powerpc branch instruction is : 221 - * 222 - * 0 6 30 31 223 - * +---------+----------------+---+---+ 224 - * | opcode | LI |AA |LK | 225 - * +---------+----------------+---+---+ 226 - * Where AA = 0 and LK = 0 227 - * 228 - * LI is a signed 24 bits integer. 
The real branch offset is computed 229 - * by: imm32 = SignExtend(LI:'0b00', 32); 230 - * 231 - * So the maximum forward branch should be: 232 - * (0x007fffff << 2) = 0x01fffffc = 0x1fffffc 233 - * The maximum backward branch should be: 234 - * (0xff800000 << 2) = 0xfe000000 = -0x2000000 235 - */ 236 - return (offset >= -0x2000000 && offset <= 0x1fffffc && !(offset & 0x3)); 237 - } 238 - 239 - bool is_offset_in_cond_branch_range(long offset) 240 - { 241 - return offset >= -0x8000 && offset <= 0x7fff && !(offset & 0x3); 242 209 } 243 210 244 211 /* ··· 235 256 return false; 236 257 } 237 258 NOKPROBE_SYMBOL(is_conditional_branch); 238 - 239 - int create_branch(ppc_inst_t *instr, const u32 *addr, 240 - unsigned long target, int flags) 241 - { 242 - long offset; 243 - 244 - *instr = ppc_inst(0); 245 - offset = target; 246 - if (! (flags & BRANCH_ABSOLUTE)) 247 - offset = offset - (unsigned long)addr; 248 - 249 - /* Check we can represent the target in the instruction format */ 250 - if (!is_offset_in_branch_range(offset)) 251 - return 1; 252 - 253 - /* Mask out the flags and target, so they don't step on each other. */ 254 - *instr = ppc_inst(0x48000000 | (flags & 0x3) | (offset & 0x03FFFFFC)); 255 - 256 - return 0; 257 - } 258 259 259 260 int create_cond_branch(ppc_inst_t *instr, const u32 *addr, 260 261 unsigned long target, int flags)
+1 -1
arch/powerpc/lib/feature-fixups.c
··· 451 451 452 452 if (types & L1D_FLUSH_FALLBACK) 453 453 /* b .+16 to fallback flush */ 454 - instrs[0] = PPC_INST_BRANCH | 16; 454 + instrs[0] = PPC_RAW_BRANCH(16); 455 455 456 456 i = 0; 457 457 if (types & L1D_FLUSH_ORI) {
+13 -39
arch/powerpc/lib/sstep.c
··· 15 15 #include <asm/cputable.h> 16 16 #include <asm/disassemble.h> 17 17 18 - extern char system_call_common[]; 19 - extern char system_call_vectored_emulate[]; 20 - 21 18 #ifdef CONFIG_PPC64 22 19 /* Bits in SRR1 that are copied from MSR */ 23 20 #define MSR_MASK 0xffffffff87c0ffffUL ··· 1163 1166 1164 1167 if (carry_in) 1165 1168 ++val; 1166 - op->type = COMPUTE + SETREG + SETXER; 1169 + op->type = COMPUTE | SETREG | SETXER; 1167 1170 op->reg = rd; 1168 1171 op->val = val; 1169 1172 val = truncate_if_32bit(regs->msr, val); ··· 1184 1187 { 1185 1188 unsigned int crval, shift; 1186 1189 1187 - op->type = COMPUTE + SETCC; 1190 + op->type = COMPUTE | SETCC; 1188 1191 crval = (regs->xer >> 31) & 1; /* get SO bit */ 1189 1192 if (v1 < v2) 1190 1193 crval |= 8; ··· 1203 1206 { 1204 1207 unsigned int crval, shift; 1205 1208 1206 - op->type = COMPUTE + SETCC; 1209 + op->type = COMPUTE | SETCC; 1207 1210 crval = (regs->xer >> 31) & 1; /* get SO bit */ 1208 1211 if (v1 < v2) 1209 1212 crval |= 8; ··· 1373 1376 if (branch_taken(word, regs, op)) 1374 1377 op->type |= BRTAKEN; 1375 1378 return 1; 1376 - #ifdef CONFIG_PPC64 1377 1379 case 17: /* sc */ 1378 1380 if ((word & 0xfe2) == 2) 1379 1381 op->type = SYSCALL; ··· 1384 1388 } else 1385 1389 op->type = UNKNOWN; 1386 1390 return 0; 1387 - #endif 1388 1391 case 18: /* b */ 1389 1392 op->type = BRANCH | BRTAKEN; 1390 1393 imm = word & 0x03fffffc; ··· 3638 3643 regs_set_return_msr(regs, (regs->msr & ~op.val) | (val & op.val)); 3639 3644 goto instr_done; 3640 3645 3641 - #ifdef CONFIG_PPC64 3642 3646 case SYSCALL: /* sc */ 3643 3647 /* 3644 - * N.B. this uses knowledge about how the syscall 3645 - * entry code works. If that is changed, this will 3646 - * need to be changed also. 3648 + * Per ISA v3.1, section 7.5.15 'Trace Interrupt', we can't 3649 + * single step a system call instruction: 3650 + * 3651 + * Successful completion for an instruction means that the 3652 + * instruction caused no other interrupt. 
Thus a Trace 3653 + * interrupt never occurs for a System Call or System Call 3654 + * Vectored instruction, or for a Trap instruction that 3655 + * traps. 3647 3656 */ 3648 - if (IS_ENABLED(CONFIG_PPC_FAST_ENDIAN_SWITCH) && 3649 - cpu_has_feature(CPU_FTR_REAL_LE) && 3650 - regs->gpr[0] == 0x1ebe) { 3651 - regs_set_return_msr(regs, regs->msr ^ MSR_LE); 3652 - goto instr_done; 3653 - } 3654 - regs->gpr[9] = regs->gpr[13]; 3655 - regs->gpr[10] = MSR_KERNEL; 3656 - regs->gpr[11] = regs->nip + 4; 3657 - regs->gpr[12] = regs->msr & MSR_MASK; 3658 - regs->gpr[13] = (unsigned long) get_paca(); 3659 - regs_set_return_ip(regs, (unsigned long) &system_call_common); 3660 - regs_set_return_msr(regs, MSR_KERNEL); 3661 - return 1; 3662 - 3663 - #ifdef CONFIG_PPC_BOOK3S_64 3657 + return -1; 3664 3658 case SYSCALL_VECTORED_0: /* scv 0 */ 3665 - regs->gpr[9] = regs->gpr[13]; 3666 - regs->gpr[10] = MSR_KERNEL; 3667 - regs->gpr[11] = regs->nip + 4; 3668 - regs->gpr[12] = regs->msr & MSR_MASK; 3669 - regs->gpr[13] = (unsigned long) get_paca(); 3670 - regs_set_return_ip(regs, (unsigned long) &system_call_vectored_emulate); 3671 - regs_set_return_msr(regs, MSR_KERNEL); 3672 - return 1; 3673 - #endif 3674 - 3659 + return -1; 3675 3660 case RFI: 3676 3661 return -1; 3677 - #endif 3678 3662 } 3679 3663 return 0; 3680 3664
+1 -2
arch/powerpc/mm/Makefile
··· 5 5 6 6 ccflags-$(CONFIG_PPC64) := $(NO_MINIMAL_TOC) 7 7 8 - obj-y := fault.o mem.o pgtable.o mmap.o maccess.o pageattr.o \ 8 + obj-y := fault.o mem.o pgtable.o maccess.o pageattr.o \ 9 9 init_$(BITS).o pgtable_$(BITS).o \ 10 10 pgtable-frag.o ioremap.o ioremap_$(BITS).o \ 11 11 init-common.o mmu_context.o drmem.o \ ··· 14 14 obj-$(CONFIG_PPC_BOOK3S_32) += book3s32/ 15 15 obj-$(CONFIG_PPC_BOOK3S_64) += book3s64/ 16 16 obj-$(CONFIG_NUMA) += numa.o 17 - obj-$(CONFIG_PPC_MM_SLICES) += slice.o 18 17 obj-$(CONFIG_HUGETLB_PAGE) += hugetlbpage.o 19 18 obj-$(CONFIG_NOT_COHERENT_CACHE) += dma-noncoherent.o 20 19 obj-$(CONFIG_PPC_COPRO_BASE) += copro_fault.o
-1
arch/powerpc/mm/book3s32/mmu.c
··· 23 23 #include <linux/highmem.h> 24 24 #include <linux/memblock.h> 25 25 26 - #include <asm/prom.h> 27 26 #include <asm/mmu.h> 28 27 #include <asm/machdep.h> 29 28 #include <asm/code-patching.h>
+10 -1
arch/powerpc/mm/book3s64/Makefile
··· 5 5 obj-y += mmu_context.o pgtable.o trace.o 6 6 ifdef CONFIG_PPC_64S_HASH_MMU 7 7 CFLAGS_REMOVE_slb.o = $(CC_FLAGS_FTRACE) 8 - obj-y += hash_pgtable.o hash_utils.o hash_tlb.o slb.o 8 + obj-y += hash_pgtable.o hash_utils.o hash_tlb.o slb.o slice.o 9 9 obj-$(CONFIG_PPC_HASH_MMU_NATIVE) += hash_native.o 10 10 obj-$(CONFIG_PPC_4K_PAGES) += hash_4k.o 11 11 obj-$(CONFIG_PPC_64K_PAGES) += hash_64k.o ··· 24 24 25 25 # Instrumenting the SLB fault path can lead to duplicate SLB entries 26 26 KCOV_INSTRUMENT_slb.o := n 27 + 28 + # Parts of these can run in real mode and therefore are 29 + # not safe with the current outline KASAN implementation 30 + KASAN_SANITIZE_mmu_context.o := n 31 + KASAN_SANITIZE_pgtable.o := n 32 + KASAN_SANITIZE_radix_pgtable.o := n 33 + KASAN_SANITIZE_radix_tlb.o := n 34 + KASAN_SANITIZE_slb.o := n 35 + KASAN_SANITIZE_pkeys.o := n
+1 -1
arch/powerpc/mm/book3s64/hash_pgtable.c
··· 377 377 if (mmu_psize_defs[MMU_PAGE_16M].shift != PMD_SHIFT) 378 378 return 0; 379 379 /* 380 - * We need to make sure that we support 16MB hugepage in a segement 380 + * We need to make sure that we support 16MB hugepage in a segment 381 381 * with base page size 64K or 4K. We only enable THP with a PAGE_SIZE 382 382 * of 64K. 383 383 */
+22 -17
arch/powerpc/mm/book3s64/hash_utils.c
··· 37 37 #include <linux/cpu.h> 38 38 #include <linux/pgtable.h> 39 39 #include <linux/debugfs.h> 40 + #include <linux/random.h> 41 + #include <linux/elf-randomize.h> 42 + #include <linux/of_fdt.h> 40 43 41 44 #include <asm/interrupt.h> 42 45 #include <asm/processor.h> ··· 49 46 #include <asm/types.h> 50 47 #include <linux/uaccess.h> 51 48 #include <asm/machdep.h> 52 - #include <asm/prom.h> 53 49 #include <asm/io.h> 54 50 #include <asm/eeh.h> 55 51 #include <asm/tlb.h> ··· 1266 1264 return pp; 1267 1265 } 1268 1266 1269 - #ifdef CONFIG_PPC_MM_SLICES 1270 1267 static unsigned int get_paca_psize(unsigned long addr) 1271 1268 { 1272 1269 unsigned char *psizes; ··· 1282 1281 return (psizes[index >> 1] >> (mask_index * 4)) & 0xF; 1283 1282 } 1284 1283 1285 - #else 1286 - unsigned int get_paca_psize(unsigned long addr) 1287 - { 1288 - return get_paca()->mm_ctx_user_psize; 1289 - } 1290 - #endif 1291 1284 1292 1285 /* 1293 1286 * Demote a segment to using 4k pages. ··· 1338 1343 spp >>= 30 - 2 * ((ea >> 12) & 0xf); 1339 1344 1340 1345 /* 1341 - * 0 -> full premission 1346 + * 0 -> full permission 1342 1347 * 1 -> Read only 1343 1348 * 2 -> no access. 1344 1349 * We return the flag that need to be cleared. 
··· 1659 1664 1660 1665 err = hash_page_mm(mm, ea, access, TRAP(regs), flags); 1661 1666 if (unlikely(err < 0)) { 1662 - // failed to instert a hash PTE due to an hypervisor error 1667 + // failed to insert a hash PTE due to an hypervisor error 1663 1668 if (user_mode(regs)) { 1664 1669 if (IS_ENABLED(CONFIG_PPC_SUBPAGE_PROT) && err == -2) 1665 1670 _exception(SIGSEGV, regs, SEGV_ACCERR, ea); ··· 1675 1680 } 1676 1681 } 1677 1682 1678 - #ifdef CONFIG_PPC_MM_SLICES 1679 1683 static bool should_hash_preload(struct mm_struct *mm, unsigned long ea) 1680 1684 { 1681 1685 int psize = get_slice_psize(mm, ea); ··· 1691 1697 1692 1698 return true; 1693 1699 } 1694 - #else 1695 - static bool should_hash_preload(struct mm_struct *mm, unsigned long ea) 1696 - { 1697 - return true; 1698 - } 1699 - #endif 1700 1700 1701 1701 static void hash_preload(struct mm_struct *mm, pte_t *ptep, unsigned long ea, 1702 1702 bool is_exec, unsigned long trap) ··· 2134 2146 2135 2147 if (htab_hash_mask) 2136 2148 pr_info("htab_hash_mask = 0x%lx\n", htab_hash_mask); 2149 + } 2150 + 2151 + unsigned long arch_randomize_brk(struct mm_struct *mm) 2152 + { 2153 + /* 2154 + * If we are using 1TB segments and we are allowed to randomise 2155 + * the heap, we can put it above 1TB so it is backed by a 1TB 2156 + * segment. Otherwise the heap will be in the bottom 1TB 2157 + * which always uses 256MB segments and this may result in a 2158 + * performance penalty. 2159 + */ 2160 + if (is_32bit_task()) 2161 + return randomize_page(mm->brk, SZ_32M); 2162 + else if (!radix_enabled() && mmu_highuser_ssize == MMU_SEGSIZE_1T) 2163 + return randomize_page(max_t(unsigned long, mm->brk, SZ_1T), SZ_1G); 2164 + else 2165 + return randomize_page(mm->brk, SZ_1G); 2137 2166 }
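The new arch_randomize_brk() above chooses its randomization window by task type and MMU mode: 32MB for 32-bit tasks, and for 64-bit hash-MMU tasks with 1TB segments it first pushes the heap above the 1TB boundary so it lands in a 1TB segment. The underlying randomize_page() arithmetic can be sketched as follows (a simplified model, not the kernel implementation: the random value is passed in explicitly rather than drawn from the kernel's entropy pool, and PAGE_SHIFT is assumed to be 12):

```c
#include <assert.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

/*
 * Simplified model of randomize_page(): return a page-aligned address
 * in [start, start + range), given an externally supplied random value.
 * The real kernel helper draws its own entropy and also handles an
 * unaligned start; both are omitted here for clarity.
 */
static unsigned long randomize_page_sketch(unsigned long start,
                                           unsigned long range,
                                           unsigned long rnd)
{
        range >>= PAGE_SHIFT;           /* number of candidate pages */
        if (range == 0)
                return start;           /* window smaller than one page */
        return start + ((rnd % range) << PAGE_SHIFT);
}
```

With a 32MB range this yields one of 8192 page-aligned placements above mm->brk; with SZ_1G the window is 256K pages, which is why the comment notes the heap of a 1TB-segment task can be placed anywhere in a 1GB window above SZ_1T.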
-68
arch/powerpc/mm/book3s64/iommu_api.c
··· 305 305 } 306 306 EXPORT_SYMBOL_GPL(mm_iommu_lookup); 307 307 308 - struct mm_iommu_table_group_mem_t *mm_iommu_lookup_rm(struct mm_struct *mm, 309 - unsigned long ua, unsigned long size) 310 - { 311 - struct mm_iommu_table_group_mem_t *mem, *ret = NULL; 312 - 313 - list_for_each_entry_lockless(mem, &mm->context.iommu_group_mem_list, 314 - next) { 315 - if ((mem->ua <= ua) && 316 - (ua + size <= mem->ua + 317 - (mem->entries << PAGE_SHIFT))) { 318 - ret = mem; 319 - break; 320 - } 321 - } 322 - 323 - return ret; 324 - } 325 - 326 308 struct mm_iommu_table_group_mem_t *mm_iommu_get(struct mm_struct *mm, 327 309 unsigned long ua, unsigned long entries) 328 310 { ··· 350 368 return 0; 351 369 } 352 370 EXPORT_SYMBOL_GPL(mm_iommu_ua_to_hpa); 353 - 354 - long mm_iommu_ua_to_hpa_rm(struct mm_iommu_table_group_mem_t *mem, 355 - unsigned long ua, unsigned int pageshift, unsigned long *hpa) 356 - { 357 - const long entry = (ua - mem->ua) >> PAGE_SHIFT; 358 - unsigned long *pa; 359 - 360 - if (entry >= mem->entries) 361 - return -EFAULT; 362 - 363 - if (pageshift > mem->pageshift) 364 - return -EFAULT; 365 - 366 - if (!mem->hpas) { 367 - *hpa = mem->dev_hpa + (ua - mem->ua); 368 - return 0; 369 - } 370 - 371 - pa = (void *) vmalloc_to_phys(&mem->hpas[entry]); 372 - if (!pa) 373 - return -EFAULT; 374 - 375 - *hpa = (*pa & MM_IOMMU_TABLE_GROUP_PAGE_MASK) | (ua & ~PAGE_MASK); 376 - 377 - return 0; 378 - } 379 - 380 - extern void mm_iommu_ua_mark_dirty_rm(struct mm_struct *mm, unsigned long ua) 381 - { 382 - struct mm_iommu_table_group_mem_t *mem; 383 - long entry; 384 - void *va; 385 - unsigned long *pa; 386 - 387 - mem = mm_iommu_lookup_rm(mm, ua, PAGE_SIZE); 388 - if (!mem) 389 - return; 390 - 391 - if (mem->dev_hpa != MM_IOMMU_TABLE_INVALID_HPA) 392 - return; 393 - 394 - entry = (ua - mem->ua) >> PAGE_SHIFT; 395 - va = &mem->hpas[entry]; 396 - 397 - pa = (void *) vmalloc_to_phys(va); 398 - if (!pa) 399 - return; 400 - 401 - *pa |= MM_IOMMU_TABLE_GROUP_PAGE_DIRTY; 402 - } 
403 371 404 372 bool mm_iommu_is_devmem(struct mm_struct *mm, unsigned long hpa, 405 373 unsigned int pageshift, unsigned long *size)
+1 -1
arch/powerpc/mm/book3s64/pgtable.c
··· 332 332 spin_lock(&mm->page_table_lock); 333 333 /* 334 334 * If we find pgtable_page set, we return 335 - * the allocated page with single fragement 335 + * the allocated page with single fragment 336 336 * count. 337 337 */ 338 338 if (likely(!mm->context.pmd_frag)) {
-55
arch/powerpc/mm/book3s64/radix_hugetlbpage.c
··· 41 41 radix__flush_tlb_range_psize(vma->vm_mm, start, end, psize); 42 42 } 43 43 44 - /* 45 - * A vairant of hugetlb_get_unmapped_area doing topdown search 46 - * FIXME!! should we do as x86 does or non hugetlb area does ? 47 - * ie, use topdown or not based on mmap_is_legacy check ? 48 - */ 49 - unsigned long 50 - radix__hugetlb_get_unmapped_area(struct file *file, unsigned long addr, 51 - unsigned long len, unsigned long pgoff, 52 - unsigned long flags) 53 - { 54 - struct mm_struct *mm = current->mm; 55 - struct vm_area_struct *vma; 56 - struct hstate *h = hstate_file(file); 57 - int fixed = (flags & MAP_FIXED); 58 - unsigned long high_limit; 59 - struct vm_unmapped_area_info info; 60 - 61 - high_limit = DEFAULT_MAP_WINDOW; 62 - if (addr >= high_limit || (fixed && (addr + len > high_limit))) 63 - high_limit = TASK_SIZE; 64 - 65 - if (len & ~huge_page_mask(h)) 66 - return -EINVAL; 67 - if (len > high_limit) 68 - return -ENOMEM; 69 - 70 - if (fixed) { 71 - if (addr > high_limit - len) 72 - return -ENOMEM; 73 - if (prepare_hugepage_range(file, addr, len)) 74 - return -EINVAL; 75 - return addr; 76 - } 77 - 78 - if (addr) { 79 - addr = ALIGN(addr, huge_page_size(h)); 80 - vma = find_vma(mm, addr); 81 - if (high_limit - len >= addr && addr >= mmap_min_addr && 82 - (!vma || addr + len <= vm_start_gap(vma))) 83 - return addr; 84 - } 85 - /* 86 - * We are always doing an topdown search here. Slice code 87 - * does that too. 88 - */ 89 - info.flags = VM_UNMAPPED_AREA_TOPDOWN; 90 - info.length = len; 91 - info.low_limit = max(PAGE_SIZE, mmap_min_addr); 92 - info.high_limit = mm->mmap_base + (high_limit - DEFAULT_MAP_WINDOW); 93 - info.align_mask = PAGE_MASK & ~huge_page_mask(h); 94 - info.align_offset = 0; 95 - 96 - return vm_unmapped_area(&info); 97 - } 98 - 99 44 void radix__huge_ptep_modify_prot_commit(struct vm_area_struct *vma, 100 45 unsigned long addr, pte_t *ptep, 101 46 pte_t old_pte, pte_t pte)
+1 -1
arch/powerpc/mm/book3s64/radix_pgtable.c
··· 359 359 if (!cpu_has_feature(CPU_FTR_HVMODE) && 360 360 cpu_has_feature(CPU_FTR_P9_RADIX_PREFETCH_BUG)) { 361 361 /* 362 - * Older versions of KVM on these machines perfer if the 362 + * Older versions of KVM on these machines prefer if the 363 363 * guest only uses the low 19 PID bits. 364 364 */ 365 365 mmu_pid_bits = 19;
+1 -1
arch/powerpc/mm/book3s64/radix_tlb.c
··· 397 397 398 398 /* 399 399 * Workaround the fact that the "ric" argument to __tlbie_pid 400 - * must be a compile-time contraint to match the "i" constraint 400 + * must be a compile-time constraint to match the "i" constraint 401 401 * in the asm statement. 402 402 */ 403 403 switch (ric) {
+2 -2
arch/powerpc/mm/book3s64/slb.c
··· 347 347 /* 348 348 * We have no good place to clear the slb preload cache on exec, 349 349 * flush_thread is about the earliest arch hook but that happens 350 - * after we switch to the mm and have aleady preloaded the SLBEs. 350 + * after we switch to the mm and have already preloaded the SLBEs. 351 351 * 352 352 * For the most part that's probably okay to use entries from the 353 353 * previous exec, they will age out if unused. It may turn out to ··· 615 615 } else { 616 616 /* 617 617 * Our cache is full and the current cache content strictly 618 - * doesn't indicate the active SLB conents. Bump the ptr 618 + * doesn't indicate the active SLB contents. Bump the ptr 619 619 * so that switch_slb() will ignore the cache. 620 620 */ 621 621 local_paca->slb_cache_ptr = SLB_CACHE_ENTRIES + 1;
+1 -1
arch/powerpc/mm/cacheflush.c
··· 12 12 /* 13 13 * For a snooping icache, we still need a dummy icbi to purge all the 14 14 * prefetched instructions from the ifetch buffers. We also need a sync 15 - * before the icbi to order the the actual stores to memory that might 15 + * before the icbi to order the actual stores to memory that might 16 16 * have modified instructions with the icbi. 17 17 */ 18 18 if (cpu_has_feature(CPU_FTR_COHERENT_ICACHE)) {
+1 -1
arch/powerpc/mm/drmem.c
··· 11 11 #include <linux/of.h> 12 12 #include <linux/of_fdt.h> 13 13 #include <linux/memblock.h> 14 - #include <asm/prom.h> 14 + #include <linux/slab.h> 15 15 #include <asm/drmem.h> 16 16 17 17 static int n_root_addr_cells, n_root_size_cells;
-34
arch/powerpc/mm/hugetlbpage.c
··· 542 542 return page; 543 543 } 544 544 545 - #ifdef HAVE_ARCH_HUGETLB_UNMAPPED_AREA 546 - static inline int file_to_psize(struct file *file) 547 - { 548 - struct hstate *hstate = hstate_file(file); 549 - return shift_to_mmu_psize(huge_page_shift(hstate)); 550 - } 551 - 552 - unsigned long hugetlb_get_unmapped_area(struct file *file, unsigned long addr, 553 - unsigned long len, unsigned long pgoff, 554 - unsigned long flags) 555 - { 556 - #ifdef CONFIG_PPC_RADIX_MMU 557 - if (radix_enabled()) 558 - return radix__hugetlb_get_unmapped_area(file, addr, len, 559 - pgoff, flags); 560 - #endif 561 - #ifdef CONFIG_PPC_MM_SLICES 562 - return slice_get_unmapped_area(addr, len, flags, file_to_psize(file), 1); 563 - #endif 564 - BUG(); 565 - } 566 - #endif 567 - 568 - unsigned long vma_mmu_pagesize(struct vm_area_struct *vma) 569 - { 570 - /* With radix we don't use slice, so derive it from vma*/ 571 - if (IS_ENABLED(CONFIG_PPC_MM_SLICES) && !radix_enabled()) { 572 - unsigned int psize = get_slice_psize(vma->vm_mm, vma->vm_start); 573 - 574 - return 1UL << mmu_psize_to_shift(psize); 575 - } 576 - return vma_kernel_pagesize(vma); 577 - } 578 - 579 545 bool __init arch_hugetlb_valid_size(unsigned long size) 580 546 { 581 547 int shift = __ffs(size);
-1
arch/powerpc/mm/init_32.c
··· 29 29 #include <linux/slab.h> 30 30 #include <linux/hugetlb.h> 31 31 32 - #include <asm/prom.h> 33 32 #include <asm/io.h> 34 33 #include <asm/mmu.h> 35 34 #include <asm/smp.h>
+5 -2
arch/powerpc/mm/init_64.c
··· 111 111 } 112 112 113 113 /* 114 - * vmemmap virtual address space management does not have a traditonal page 114 + * vmemmap virtual address space management does not have a traditional page 115 115 * table to track which virtual struct pages are backed by physical mapping. 116 116 * The virtual to physical mappings are tracked in a simple linked list 117 117 * format. 'vmemmap_list' maintains the entire vmemmap physical mapping at ··· 128 128 129 129 /* 130 130 * The same pointer 'next' tracks individual chunks inside the allocated 131 - * full page during the boot time and again tracks the freeed nodes during 131 + * full page during the boot time and again tracks the freed nodes during 132 132 * runtime. It is racy but it does not happen as they are separated by the 133 133 * boot process. Will create problem if some how we have memory hotplug 134 134 * operation during boot !! ··· 372 372 373 373 #ifdef CONFIG_PPC_BOOK3S_64 374 374 unsigned int mmu_lpid_bits; 375 + #ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE 376 + EXPORT_SYMBOL_GPL(mmu_lpid_bits); 377 + #endif 375 378 unsigned int mmu_pid_bits; 376 379 377 380 static bool disable_radix = !IS_ENABLED(CONFIG_PPC_RADIX_MMU_DEFAULT);
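The corrected comment above describes vmemmap_list, which stands in for a page table when answering "which physical block backs this chunk of vmemmap virtual space?". The idea can be sketched as a plain linked-list lookup (hypothetical type and field names, and a fixed 16MB chunk size assumed purely for the example; the real list is built from the actual backing allocations):

```c
#include <assert.h>
#include <stddef.h>

#define CHUNK_SIZE (16UL << 20)   /* example backing-chunk size */

/* One node per backed chunk of vmemmap virtual space. */
struct vmemmap_node {
        struct vmemmap_node *next;
        unsigned long virt_addr;  /* start of the backed virtual chunk */
        unsigned long phys;       /* physical address backing it */
};

/* Walk the list; return the physical backing for @virt, or 0 if unbacked. */
static unsigned long vmemmap_lookup(struct vmemmap_node *head,
                                    unsigned long virt)
{
        struct vmemmap_node *n;

        for (n = head; n; n = n->next)
                if (virt >= n->virt_addr && virt < n->virt_addr + CHUNK_SIZE)
                        return n->phys + (virt - n->virt_addr);
        return 0;
}

/* Two example chunks for demonstration. */
static struct vmemmap_node demo_b = { NULL, CHUNK_SIZE, 0x40000000UL };
static struct vmemmap_node demo_a = { &demo_b, 0, 0x80000000UL };
```

A linear walk is acceptable here because the list only grows at memory hotplug granularity, so it stays short relative to the address space it describes.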
+2 -1
arch/powerpc/mm/kasan/Makefile
··· 2 2 3 3 KASAN_SANITIZE := n 4 4 5 - obj-$(CONFIG_PPC32) += kasan_init_32.o 5 + obj-$(CONFIG_PPC32) += init_32.o 6 6 obj-$(CONFIG_PPC_8xx) += 8xx.o 7 7 obj-$(CONFIG_PPC_BOOK3S_32) += book3s_32.o 8 + obj-$(CONFIG_PPC_BOOK3S_64) += init_book3s_64.o
+102
arch/powerpc/mm/kasan/init_book3s_64.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * KASAN for 64-bit Book3S powerpc 4 + * 5 + * Copyright 2019-2022, Daniel Axtens, IBM Corporation. 6 + */ 7 + 8 + /* 9 + * ppc64 turns on virtual memory late in boot, after calling into generic code 10 + * like the device-tree parser, so it uses this in conjunction with a hook in 11 + * outline mode to avoid invalid access early in boot. 12 + */ 13 + 14 + #define DISABLE_BRANCH_PROFILING 15 + 16 + #include <linux/kasan.h> 17 + #include <linux/printk.h> 18 + #include <linux/sched/task.h> 19 + #include <linux/memblock.h> 20 + #include <asm/pgalloc.h> 21 + 22 + DEFINE_STATIC_KEY_FALSE(powerpc_kasan_enabled_key); 23 + 24 + static void __init kasan_init_phys_region(void *start, void *end) 25 + { 26 + unsigned long k_start, k_end, k_cur; 27 + void *va; 28 + 29 + if (start >= end) 30 + return; 31 + 32 + k_start = ALIGN_DOWN((unsigned long)kasan_mem_to_shadow(start), PAGE_SIZE); 33 + k_end = ALIGN((unsigned long)kasan_mem_to_shadow(end), PAGE_SIZE); 34 + 35 + va = memblock_alloc(k_end - k_start, PAGE_SIZE); 36 + for (k_cur = k_start; k_cur < k_end; k_cur += PAGE_SIZE, va += PAGE_SIZE) 37 + map_kernel_page(k_cur, __pa(va), PAGE_KERNEL); 38 + } 39 + 40 + void __init kasan_init(void) 41 + { 42 + /* 43 + * We want to do the following things: 44 + * 1) Map real memory into the shadow for all physical memblocks 45 + * This takes us from c000... to c008... 46 + * 2) Leave a hole over the shadow of vmalloc space. KASAN_VMALLOC 47 + * will manage this for us. 48 + * This takes us from c008... to c00a... 49 + * 3) Map the 'early shadow'/zero page over iomap and vmemmap space. 50 + * This takes us up to where we start at c00e... 
51 + */ 52 + 53 + void *k_start = kasan_mem_to_shadow((void *)RADIX_VMALLOC_END); 54 + void *k_end = kasan_mem_to_shadow((void *)RADIX_VMEMMAP_END); 55 + phys_addr_t start, end; 56 + u64 i; 57 + pte_t zero_pte = pfn_pte(virt_to_pfn(kasan_early_shadow_page), PAGE_KERNEL); 58 + 59 + if (!early_radix_enabled()) { 60 + pr_warn("KASAN not enabled as it requires radix!"); 61 + return; 62 + } 63 + 64 + for_each_mem_range(i, &start, &end) 65 + kasan_init_phys_region((void *)start, (void *)end); 66 + 67 + for (i = 0; i < PTRS_PER_PTE; i++) 68 + __set_pte_at(&init_mm, (unsigned long)kasan_early_shadow_page, 69 + &kasan_early_shadow_pte[i], zero_pte, 0); 70 + 71 + for (i = 0; i < PTRS_PER_PMD; i++) 72 + pmd_populate_kernel(&init_mm, &kasan_early_shadow_pmd[i], 73 + kasan_early_shadow_pte); 74 + 75 + for (i = 0; i < PTRS_PER_PUD; i++) 76 + pud_populate(&init_mm, &kasan_early_shadow_pud[i], 77 + kasan_early_shadow_pmd); 78 + 79 + /* map the early shadow over the iomap and vmemmap space */ 80 + kasan_populate_early_shadow(k_start, k_end); 81 + 82 + /* mark early shadow region as RO and wipe it */ 83 + zero_pte = pfn_pte(virt_to_pfn(kasan_early_shadow_page), PAGE_KERNEL_RO); 84 + for (i = 0; i < PTRS_PER_PTE; i++) 85 + __set_pte_at(&init_mm, (unsigned long)kasan_early_shadow_page, 86 + &kasan_early_shadow_pte[i], zero_pte, 0); 87 + 88 + /* 89 + * clear_page relies on some cache info that hasn't been set up yet. 90 + * It ends up looping ~forever and blows up other data. 91 + * Use memset instead. 92 + */ 93 + memset(kasan_early_shadow_page, 0, PAGE_SIZE); 94 + 95 + static_branch_inc(&powerpc_kasan_enabled_key); 96 + 97 + /* Enable error messages */ 98 + init_task.kasan_depth = 0; 99 + pr_info("KASAN init done\n"); 100 + } 101 + 102 + void __init kasan_late_init(void) { }
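The shadow layout that kasan_init() builds follows the generic KASAN scheme: one shadow byte covers eight bytes of memory, so a shadow address is the memory address shifted right by three plus a constant offset. A toy model of that translation (the offset below is an illustrative made-up constant, not the real ppc64 shadow offset, which is fixed by the c00e... layout described in the comment above):

```c
#include <assert.h>
#include <stdint.h>

#define KASAN_SHADOW_SCALE_SHIFT 3            /* 8 bytes per shadow byte */
#define KASAN_SHADOW_OFFSET 0x100000000UL     /* illustrative value only */

/* Generic KASAN translation: where does addr's shadow byte live? */
static inline uint64_t mem_to_shadow(uint64_t addr)
{
        return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
}

/* Shadow memory needed to cover a region of @size bytes. */
static inline uint64_t shadow_size(uint64_t size)
{
        return size >> KASAN_SHADOW_SCALE_SHIFT;
}
```

This one-eighth scaling is why step 1 above only has to allocate shadow for each memblock at one-eighth of its size, and why the single zeroed "early shadow" page can be mapped read-only over the huge, never-poisoned iomap and vmemmap regions in step 3.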
arch/powerpc/mm/kasan/kasan_init_32.c arch/powerpc/mm/kasan/init_32.c
+4
arch/powerpc/mm/mem.c
··· 23 23 #include <asm/kasan.h> 24 24 #include <asm/svm.h> 25 25 #include <asm/mmzone.h> 26 + #include <asm/ftrace.h> 27 + #include <asm/code-patching.h> 26 28 27 29 #include <mm/mmu_decl.h> 28 30 ··· 311 309 { 312 310 ppc_md.progress = ppc_printk_progress; 313 311 mark_initmem_nx(); 312 + static_branch_enable(&init_mem_is_free); 314 313 free_initmem_default(POISON_FREE_INITMEM); 314 + ftrace_free_init_tramp(); 315 315 } 316 316 317 317 /*
-256
arch/powerpc/mm/mmap.c
··· 1 - // SPDX-License-Identifier: GPL-2.0-or-later 2 - /* 3 - * flexible mmap layout support 4 - * 5 - * Copyright 2003-2004 Red Hat Inc., Durham, North Carolina. 6 - * All Rights Reserved. 7 - * 8 - * Started by Ingo Molnar <mingo@elte.hu> 9 - */ 10 - 11 - #include <linux/personality.h> 12 - #include <linux/mm.h> 13 - #include <linux/random.h> 14 - #include <linux/sched/signal.h> 15 - #include <linux/sched/mm.h> 16 - #include <linux/elf-randomize.h> 17 - #include <linux/security.h> 18 - #include <linux/mman.h> 19 - 20 - /* 21 - * Top of mmap area (just below the process stack). 22 - * 23 - * Leave at least a ~128 MB hole. 24 - */ 25 - #define MIN_GAP (128*1024*1024) 26 - #define MAX_GAP (TASK_SIZE/6*5) 27 - 28 - static inline int mmap_is_legacy(struct rlimit *rlim_stack) 29 - { 30 - if (current->personality & ADDR_COMPAT_LAYOUT) 31 - return 1; 32 - 33 - if (rlim_stack->rlim_cur == RLIM_INFINITY) 34 - return 1; 35 - 36 - return sysctl_legacy_va_layout; 37 - } 38 - 39 - unsigned long arch_mmap_rnd(void) 40 - { 41 - unsigned long shift, rnd; 42 - 43 - shift = mmap_rnd_bits; 44 - #ifdef CONFIG_COMPAT 45 - if (is_32bit_task()) 46 - shift = mmap_rnd_compat_bits; 47 - #endif 48 - rnd = get_random_long() % (1ul << shift); 49 - 50 - return rnd << PAGE_SHIFT; 51 - } 52 - 53 - static inline unsigned long stack_maxrandom_size(void) 54 - { 55 - if (!(current->flags & PF_RANDOMIZE)) 56 - return 0; 57 - 58 - /* 8MB for 32bit, 1GB for 64bit */ 59 - if (is_32bit_task()) 60 - return (1<<23); 61 - else 62 - return (1<<30); 63 - } 64 - 65 - static inline unsigned long mmap_base(unsigned long rnd, 66 - struct rlimit *rlim_stack) 67 - { 68 - unsigned long gap = rlim_stack->rlim_cur; 69 - unsigned long pad = stack_maxrandom_size() + stack_guard_gap; 70 - 71 - /* Values close to RLIM_INFINITY can overflow. 
*/ 72 - if (gap + pad > gap) 73 - gap += pad; 74 - 75 - if (gap < MIN_GAP) 76 - gap = MIN_GAP; 77 - else if (gap > MAX_GAP) 78 - gap = MAX_GAP; 79 - 80 - return PAGE_ALIGN(DEFAULT_MAP_WINDOW - gap - rnd); 81 - } 82 - 83 - #ifdef HAVE_ARCH_UNMAPPED_AREA 84 - #ifdef CONFIG_PPC_RADIX_MMU 85 - /* 86 - * Same function as generic code used only for radix, because we don't need to overload 87 - * the generic one. But we will have to duplicate, because hash select 88 - * HAVE_ARCH_UNMAPPED_AREA 89 - */ 90 - static unsigned long 91 - radix__arch_get_unmapped_area(struct file *filp, unsigned long addr, 92 - unsigned long len, unsigned long pgoff, 93 - unsigned long flags) 94 - { 95 - struct mm_struct *mm = current->mm; 96 - struct vm_area_struct *vma; 97 - int fixed = (flags & MAP_FIXED); 98 - unsigned long high_limit; 99 - struct vm_unmapped_area_info info; 100 - 101 - high_limit = DEFAULT_MAP_WINDOW; 102 - if (addr >= high_limit || (fixed && (addr + len > high_limit))) 103 - high_limit = TASK_SIZE; 104 - 105 - if (len > high_limit) 106 - return -ENOMEM; 107 - 108 - if (fixed) { 109 - if (addr > high_limit - len) 110 - return -ENOMEM; 111 - return addr; 112 - } 113 - 114 - if (addr) { 115 - addr = PAGE_ALIGN(addr); 116 - vma = find_vma(mm, addr); 117 - if (high_limit - len >= addr && addr >= mmap_min_addr && 118 - (!vma || addr + len <= vm_start_gap(vma))) 119 - return addr; 120 - } 121 - 122 - info.flags = 0; 123 - info.length = len; 124 - info.low_limit = mm->mmap_base; 125 - info.high_limit = high_limit; 126 - info.align_mask = 0; 127 - 128 - return vm_unmapped_area(&info); 129 - } 130 - 131 - static unsigned long 132 - radix__arch_get_unmapped_area_topdown(struct file *filp, 133 - const unsigned long addr0, 134 - const unsigned long len, 135 - const unsigned long pgoff, 136 - const unsigned long flags) 137 - { 138 - struct vm_area_struct *vma; 139 - struct mm_struct *mm = current->mm; 140 - unsigned long addr = addr0; 141 - int fixed = (flags & MAP_FIXED); 142 - 
unsigned long high_limit; 143 - struct vm_unmapped_area_info info; 144 - 145 - high_limit = DEFAULT_MAP_WINDOW; 146 - if (addr >= high_limit || (fixed && (addr + len > high_limit))) 147 - high_limit = TASK_SIZE; 148 - 149 - if (len > high_limit) 150 - return -ENOMEM; 151 - 152 - if (fixed) { 153 - if (addr > high_limit - len) 154 - return -ENOMEM; 155 - return addr; 156 - } 157 - 158 - if (addr) { 159 - addr = PAGE_ALIGN(addr); 160 - vma = find_vma(mm, addr); 161 - if (high_limit - len >= addr && addr >= mmap_min_addr && 162 - (!vma || addr + len <= vm_start_gap(vma))) 163 - return addr; 164 - } 165 - 166 - info.flags = VM_UNMAPPED_AREA_TOPDOWN; 167 - info.length = len; 168 - info.low_limit = max(PAGE_SIZE, mmap_min_addr); 169 - info.high_limit = mm->mmap_base + (high_limit - DEFAULT_MAP_WINDOW); 170 - info.align_mask = 0; 171 - 172 - addr = vm_unmapped_area(&info); 173 - if (!(addr & ~PAGE_MASK)) 174 - return addr; 175 - VM_BUG_ON(addr != -ENOMEM); 176 - 177 - /* 178 - * A failed mmap() very likely causes application failure, 179 - * so fall back to the bottom-up function here. This scenario 180 - * can happen with large stack limits and large mmap() 181 - * allocations. 
182 - */ 183 - return radix__arch_get_unmapped_area(filp, addr0, len, pgoff, flags); 184 - } 185 - #endif 186 - 187 - unsigned long arch_get_unmapped_area(struct file *filp, 188 - unsigned long addr, 189 - unsigned long len, 190 - unsigned long pgoff, 191 - unsigned long flags) 192 - { 193 - #ifdef CONFIG_PPC_MM_SLICES 194 - return slice_get_unmapped_area(addr, len, flags, 195 - mm_ctx_user_psize(&current->mm->context), 0); 196 - #else 197 - BUG(); 198 - #endif 199 - } 200 - 201 - unsigned long arch_get_unmapped_area_topdown(struct file *filp, 202 - const unsigned long addr0, 203 - const unsigned long len, 204 - const unsigned long pgoff, 205 - const unsigned long flags) 206 - { 207 - #ifdef CONFIG_PPC_MM_SLICES 208 - return slice_get_unmapped_area(addr0, len, flags, 209 - mm_ctx_user_psize(&current->mm->context), 1); 210 - #else 211 - BUG(); 212 - #endif 213 - } 214 - #endif /* HAVE_ARCH_UNMAPPED_AREA */ 215 - 216 - static void radix__arch_pick_mmap_layout(struct mm_struct *mm, 217 - unsigned long random_factor, 218 - struct rlimit *rlim_stack) 219 - { 220 - #ifdef CONFIG_PPC_RADIX_MMU 221 - if (mmap_is_legacy(rlim_stack)) { 222 - mm->mmap_base = TASK_UNMAPPED_BASE; 223 - mm->get_unmapped_area = radix__arch_get_unmapped_area; 224 - } else { 225 - mm->mmap_base = mmap_base(random_factor, rlim_stack); 226 - mm->get_unmapped_area = radix__arch_get_unmapped_area_topdown; 227 - } 228 - #endif 229 - } 230 - 231 - /* 232 - * This function, called very early during the creation of a new 233 - * process VM image, sets up which VM layout function to use: 234 - */ 235 - void arch_pick_mmap_layout(struct mm_struct *mm, struct rlimit *rlim_stack) 236 - { 237 - unsigned long random_factor = 0UL; 238 - 239 - if (current->flags & PF_RANDOMIZE) 240 - random_factor = arch_mmap_rnd(); 241 - 242 - if (radix_enabled()) 243 - return radix__arch_pick_mmap_layout(mm, random_factor, 244 - rlim_stack); 245 - /* 246 - * Fall back to the standard layout if the personality 247 - * bit is set, 
or if the expected stack growth is unlimited: 248 - */ 249 - if (mmap_is_legacy(rlim_stack)) { 250 - mm->mmap_base = TASK_UNMAPPED_BASE; 251 - mm->get_unmapped_area = arch_get_unmapped_area; 252 - } else { 253 - mm->mmap_base = mmap_base(random_factor, rlim_stack); 254 - mm->get_unmapped_area = arch_get_unmapped_area_topdown; 255 - } 256 - }
+4
arch/powerpc/mm/mmu_decl.h
··· 155 155 u32 MAS3; 156 156 u32 MAS7; 157 157 }; 158 + 159 + #define NUM_TLBCAMS 64 160 + 161 + extern struct tlbcam TLBCAM[NUM_TLBCAMS]; 158 162 #endif 159 163 160 164 #if defined(CONFIG_PPC_BOOK3S_32) || defined(CONFIG_FSL_BOOKE) || defined(CONFIG_PPC_8xx)
-1
arch/powerpc/mm/nohash/40x.c
··· 32 32 #include <linux/highmem.h> 33 33 #include <linux/memblock.h> 34 34 35 - #include <asm/prom.h> 36 35 #include <asm/io.h> 37 36 #include <asm/mmu_context.h> 38 37 #include <asm/mmu.h>
+1 -1
arch/powerpc/mm/nohash/book3e_hugetlbpage.c
··· 142 142 tsize = shift - 10; 143 143 /* 144 144 * We can't be interrupted while we're setting up the MAS 145 - * regusters or after we've confirmed that no tlb exists. 145 + * registers or after we've confirmed that no tlb exists. 146 146 */ 147 147 local_irq_save(flags); 148 148
+9 -13
arch/powerpc/mm/nohash/fsl_book3e.c
··· 36 36 #include <linux/delay.h> 37 37 #include <linux/highmem.h> 38 38 #include <linux/memblock.h> 39 + #include <linux/of_fdt.h> 39 40 40 - #include <asm/prom.h> 41 41 #include <asm/io.h> 42 42 #include <asm/mmu_context.h> 43 43 #include <asm/mmu.h> ··· 51 51 52 52 unsigned int tlbcam_index; 53 53 54 - #define NUM_TLBCAMS (64) 55 54 struct tlbcam TLBCAM[NUM_TLBCAMS]; 56 55 57 - struct tlbcamrange { 56 + static struct { 58 57 unsigned long start; 59 58 unsigned long limit; 60 59 phys_addr_t phys; ··· 273 274 274 275 i = switch_to_as1(); 275 276 __max_low_memory = map_mem_in_cams(ram, CONFIG_LOWMEM_CAM_NUM, false, true); 276 - restore_to_as0(i, 0, 0, 1); 277 + restore_to_as0(i, 0, NULL, 1); 277 278 278 279 pr_info("Memory CAM mapping: "); 279 280 for (i = 0; i < tlbcam_index - 1; i++) ··· 287 288 #ifdef CONFIG_STRICT_KERNEL_RWX 288 289 void mmu_mark_rodata_ro(void) 289 290 { 290 - /* Everything is done in mmu_mark_initmem_nx() */ 291 + unsigned long remapped; 292 + 293 + remapped = map_mem_in_cams(__max_low_memory, CONFIG_LOWMEM_CAM_NUM, false, false); 294 + 295 + WARN_ON(__max_low_memory != remapped); 291 296 } 292 297 #endif 293 298 294 299 void mmu_mark_initmem_nx(void) 295 300 { 296 - unsigned long remapped; 297 - 298 - if (!strict_kernel_rwx_enabled()) 299 - return; 300 - 301 - remapped = map_mem_in_cams(__max_low_memory, CONFIG_LOWMEM_CAM_NUM, false, false); 302 - 303 - WARN_ON(__max_low_memory != remapped); 301 + /* Everything is done in mmu_mark_rodata_ro() */ 304 302 } 305 303 306 304 void setup_initial_memory_limit(phys_addr_t first_memblock_base,
+3 -2
arch/powerpc/mm/nohash/kaslr_booke.c
··· 14 14 #include <linux/memblock.h> 15 15 #include <linux/libfdt.h> 16 16 #include <linux/crash_core.h> 17 + #include <linux/of.h> 18 + #include <linux/of_fdt.h> 17 19 #include <asm/cacheflush.h> 18 - #include <asm/prom.h> 19 20 #include <asm/kdump.h> 20 21 #include <mm/mmu_decl.h> 21 22 #include <generated/compile.h> ··· 316 315 ram = map_mem_in_cams(ram, CONFIG_LOWMEM_CAM_NUM, true, true); 317 316 linear_sz = min_t(unsigned long, ram, SZ_512M); 318 317 319 - /* If the linear size is smaller than 64M, do not randmize */ 318 + /* If the linear size is smaller than 64M, do not randomize */ 320 319 if (linear_sz < SZ_64M) 321 320 return 0; 322 321
-9
arch/powerpc/mm/nohash/mmu_context.c
··· 317 317 */ 318 318 int init_new_context(struct task_struct *t, struct mm_struct *mm) 319 319 { 320 - /* 321 - * We have MMU_NO_CONTEXT set to be ~0. Hence check 322 - * explicitly against context.id == 0. This ensures that we properly 323 - * initialize context slice details for newly allocated mm's (which will 324 - * have id == 0) and don't alter context slice inherited via fork (which 325 - * will have id != 0). 326 - */ 327 - if (mm->context.id == 0) 328 - slice_init_new_context_exec(mm); 329 320 mm->context.id = MMU_NO_CONTEXT; 330 321 mm->context.active = 0; 331 322 pte_frag_set(&mm->context, NULL);
+2 -4
arch/powerpc/mm/nohash/tlb.c
··· 358 358 /* 359 359 * Flush kernel TLB entries in the given range 360 360 */ 361 + #ifndef CONFIG_PPC_8xx 361 362 void flush_tlb_kernel_range(unsigned long start, unsigned long end) 362 363 { 363 364 #ifdef CONFIG_SMP ··· 371 370 #endif 372 371 } 373 372 EXPORT_SYMBOL(flush_tlb_kernel_range); 373 + #endif 374 374 375 375 /* 376 376 * Currently, for range flushing, we just do a full mm flush. This should ··· 774 772 { 775 773 #ifdef CONFIG_PPC_47x 776 774 early_init_mmu_47x(); 777 - #endif 778 - 779 - #ifdef CONFIG_PPC_MM_SLICES 780 - mm_ctx_set_slb_addr_limit(&init_mm.context, SLB_ADDR_LIMIT_DEFAULT); 781 775 #endif 782 776 } 783 777 #endif /* CONFIG_PPC64 */
+9 -27
arch/powerpc/mm/numa.c
··· 26 26 #include <linux/slab.h> 27 27 #include <asm/cputhreads.h> 28 28 #include <asm/sparsemem.h> 29 - #include <asm/prom.h> 30 29 #include <asm/smp.h> 31 30 #include <asm/topology.h> 32 31 #include <asm/firmware.h> ··· 1422 1423 return rc; 1423 1424 } 1424 1425 1425 - int find_and_online_cpu_nid(int cpu) 1426 + void find_and_update_cpu_nid(int cpu) 1426 1427 { 1427 1428 __be32 associativity[VPHN_ASSOC_BUFSIZE] = {0}; 1428 1429 int new_nid; 1429 1430 1430 1431 /* Use associativity from first thread for all siblings */ 1431 1432 if (vphn_get_associativity(cpu, associativity)) 1432 - return cpu_to_node(cpu); 1433 + return; 1433 1434 1435 + /* Do not have previous associativity, so find it now. */ 1434 1436 new_nid = associativity_to_nid(associativity); 1437 + 1435 1438 if (new_nid < 0 || !node_possible(new_nid)) 1436 1439 new_nid = first_online_node; 1440 + else 1441 + // Associate node <-> cpu, so cpu_up() calls 1442 + // try_online_node() on the right node. 1443 + set_cpu_numa_node(cpu, new_nid); 1437 1444 1438 - if (!node_online(new_nid)) { 1439 - #ifdef CONFIG_MEMORY_HOTPLUG 1440 - /* 1441 - * Need to ensure that NODE_DATA is initialized for a node from 1442 - * available memory (see memblock_alloc_try_nid). If unable to 1443 - * init the node, then default to nearest node that has memory 1444 - * installed. Skip onlining a node if the subsystems are not 1445 - * yet initialized. 1446 - */ 1447 - if (!topology_inited || try_online_node(new_nid)) 1448 - new_nid = first_online_node; 1449 - #else 1450 - /* 1451 - * Default to using the nearest node that has memory installed. 1452 - * Otherwise, it would be necessary to patch the kernel MM code 1453 - * to deal with more memoryless-node error conditions. 
1454 - */ 1455 - new_nid = first_online_node; 1456 - #endif 1457 - } 1458 - 1459 - pr_debug("%s:%d cpu %d nid %d\n", __FUNCTION__, __LINE__, 1460 - cpu, new_nid); 1461 - return new_nid; 1445 + pr_debug("%s:%d cpu %d nid %d\n", __func__, __LINE__, cpu, new_nid); 1462 1446 } 1463 1447 1464 1448 int cpu_to_coregroup_id(int cpu)
+1
arch/powerpc/mm/pageattr.c
··· 31 31 { 32 32 long action = (long)data; 33 33 34 + addr &= PAGE_MASK; 34 35 /* modify the PTE bits as desired */ 35 36 switch (action) { 36 37 case SET_MEMORY_RO:
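The one-line pageattr.c fix above masks the caller's address down to a page boundary before the PTE bits are modified, so an unaligned address can no longer leak into the update path. The masking itself is the usual round-down (a sketch assuming a 4KB page size):

```c
#include <assert.h>

#define PAGE_SIZE 4096UL
#define PAGE_MASK (~(PAGE_SIZE - 1))

/* Round an address down to the start of its page, as addr &= PAGE_MASK does. */
static unsigned long page_align_down(unsigned long addr)
{
        return addr & PAGE_MASK;
}
```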
+1 -1
arch/powerpc/mm/pgtable-frag.c
··· 83 83 spin_lock(&mm->page_table_lock); 84 84 /* 85 85 * If we find pgtable_page set, we return 86 - * the allocated page with single fragement 86 + * the allocated page with single fragment 87 87 * count. 88 88 */ 89 89 if (likely(!pte_frag_get(&mm->context))) {
+1 -1
arch/powerpc/mm/pgtable.c
··· 351 351 * (4) hugepd pointer, _PAGE_PTE = 0 and bits [2..6] indicate size of table 352 352 * 353 353 * So long as we atomically load page table pointers we are safe against teardown, 354 - * we can follow the address down to the the page and take a ref on it. 354 + * we can follow the address down to the page and take a ref on it. 355 355 * This function need to be called with interrupts disabled. We use this variant 356 356 * when we have MSR[EE] = 0 but the paca->irq_soft_mask = IRQS_ENABLED 357 357 */
-1
arch/powerpc/mm/pgtable_64.c
··· 32 32 #include <linux/hugetlb.h> 33 33 34 34 #include <asm/page.h> 35 - #include <asm/prom.h> 36 35 #include <asm/mmu_context.h> 37 36 #include <asm/mmu.h> 38 37 #include <asm/smp.h>
+2 -1
arch/powerpc/mm/ptdump/ptdump.c
··· 21 21 #include <linux/seq_file.h> 22 22 #include <asm/fixmap.h> 23 23 #include <linux/const.h> 24 + #include <linux/kasan.h> 24 25 #include <asm/page.h> 25 26 #include <asm/hugetlb.h> 26 27 ··· 290 289 #endif 291 290 address_markers[i++].start_address = FIXADDR_START; 292 291 address_markers[i++].start_address = FIXADDR_TOP; 292 + #endif /* CONFIG_PPC64 */ 293 293 #ifdef CONFIG_KASAN 294 294 address_markers[i++].start_address = KASAN_SHADOW_START; 295 295 address_markers[i++].start_address = KASAN_SHADOW_END; 296 296 #endif 297 - #endif /* CONFIG_PPC64 */ 298 297 } 299 298 300 299 static int ptdump_show(struct seq_file *m, void *v)
+58 -13
arch/powerpc/mm/slice.c arch/powerpc/mm/book3s64/slice.c
···
276 276 }
277 277
278 278 static unsigned long slice_find_area_bottomup(struct mm_struct *mm,
279     - 		unsigned long len,
    279 + 		unsigned long addr, unsigned long len,
280 280 		const struct slice_mask *available,
281 281 		int psize, unsigned long high_limit)
282 282 {
283 283 	int pshift = max_t(int, mmu_psize_defs[psize].shift, PAGE_SHIFT);
284     - 	unsigned long addr, found, next_end;
    284 + 	unsigned long found, next_end;
285 285 	struct vm_unmapped_area_info info;
286 286
287 287 	info.flags = 0;
288 288 	info.length = len;
289 289 	info.align_mask = PAGE_MASK & ((1ul << pshift) - 1);
290 290 	info.align_offset = 0;
291     -
292     - 	addr = TASK_UNMAPPED_BASE;
293 291 	/*
294 292 	 * Check till the allow max value for this mmap request
295 293 	 */
···
320 322 }
321 323
322 324 static unsigned long slice_find_area_topdown(struct mm_struct *mm,
323     - 		unsigned long len,
    325 + 		unsigned long addr, unsigned long len,
324 326 		const struct slice_mask *available,
325 327 		int psize, unsigned long high_limit)
326 328 {
327 329 	int pshift = max_t(int, mmu_psize_defs[psize].shift, PAGE_SHIFT);
328     - 	unsigned long addr, found, prev;
    330 + 	unsigned long found, prev;
329 331 	struct vm_unmapped_area_info info;
330 332 	unsigned long min_addr = max(PAGE_SIZE, mmap_min_addr);
331 333
···
333 335 	info.length = len;
334 336 	info.align_mask = PAGE_MASK & ((1ul << pshift) - 1);
335 337 	info.align_offset = 0;
336     -
337     - 	addr = mm->mmap_base;
338 338 	/*
339 339 	 * If we are trying to allocate above DEFAULT_MAP_WINDOW
340 340 	 * Add the different to the mmap_base.
···
373 377 	 * can happen with large stack limits and large mmap()
374 378 	 * allocations.
375 379 	 */
376     - 	return slice_find_area_bottomup(mm, len, available, psize, high_limit);
    380 + 	return slice_find_area_bottomup(mm, TASK_UNMAPPED_BASE, len, available, psize, high_limit);
377 381 }
378 382
379 383
···
382 386 		int topdown, unsigned long high_limit)
383 387 {
384 388 	if (topdown)
385     - 		return slice_find_area_topdown(mm, len, mask, psize, high_limit);
    389 + 		return slice_find_area_topdown(mm, mm->mmap_base, len, mask, psize, high_limit);
386 390 	else
387     - 		return slice_find_area_bottomup(mm, len, mask, psize, high_limit);
    391 + 		return slice_find_area_bottomup(mm, mm->mmap_base, len, mask, psize, high_limit);
388 392 }
389 393
390 394 static inline void slice_copy_mask(struct slice_mask *dst,
···
635 639 }
636 640 EXPORT_SYMBOL_GPL(slice_get_unmapped_area);
637 641
    642 + unsigned long arch_get_unmapped_area(struct file *filp,
    643 + 		unsigned long addr,
    644 + 		unsigned long len,
    645 + 		unsigned long pgoff,
    646 + 		unsigned long flags)
    647 + {
    648 + 	if (radix_enabled())
    649 + 		return generic_get_unmapped_area(filp, addr, len, pgoff, flags);
    650 +
    651 + 	return slice_get_unmapped_area(addr, len, flags,
    652 + 		mm_ctx_user_psize(&current->mm->context), 0);
    653 + }
    654 +
    655 + unsigned long arch_get_unmapped_area_topdown(struct file *filp,
    656 + 		const unsigned long addr0,
    657 + 		const unsigned long len,
    658 + 		const unsigned long pgoff,
    659 + 		const unsigned long flags)
    660 + {
    661 + 	if (radix_enabled())
    662 + 		return generic_get_unmapped_area_topdown(filp, addr0, len, pgoff, flags);
    663 +
    664 + 	return slice_get_unmapped_area(addr0, len, flags,
    665 + 		mm_ctx_user_psize(&current->mm->context), 1);
    666 + }
    667 +
638 668 unsigned int notrace get_slice_psize(struct mm_struct *mm, unsigned long addr)
639 669 {
640 670 	unsigned char *psizes;
···
714 692 	bitmap_fill(mask->high_slices, SLICE_NUM_HIGH);
715 693 }
716 694
717     - #ifdef CONFIG_PPC_BOOK3S_64
718 695 void slice_setup_new_exec(void)
719 696 {
720 697 	struct mm_struct *mm = current->mm;
···
725 704
726 705 	mm_ctx_set_slb_addr_limit(&mm->context, DEFAULT_MAP_WINDOW);
727 706 }
728     - #endif
729 707
730 708 void slice_set_range_psize(struct mm_struct *mm, unsigned long start,
731 709 			   unsigned long len, unsigned int psize)
···
778 758 	}
779 759
780 760 	return !slice_check_range_fits(mm, maskp, addr, len);
    761 + }
    762 +
    763 + unsigned long vma_mmu_pagesize(struct vm_area_struct *vma)
    764 + {
    765 + 	/* With radix we don't use slice, so derive it from vma*/
    766 + 	if (radix_enabled())
    767 + 		return vma_kernel_pagesize(vma);
    768 +
    769 + 	return 1UL << mmu_psize_to_shift(get_slice_psize(vma->vm_mm, vma->vm_start));
    770 + }
    771 +
    772 + static int file_to_psize(struct file *file)
    773 + {
    774 + 	struct hstate *hstate = hstate_file(file);
    775 + 	return shift_to_mmu_psize(huge_page_shift(hstate));
    776 + }
    777 +
    778 + unsigned long hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
    779 + 		unsigned long len, unsigned long pgoff,
    780 + 		unsigned long flags)
    781 + {
    782 + 	if (radix_enabled())
    783 + 		return generic_hugetlb_get_unmapped_area(file, addr, len, pgoff, flags);
    784 +
    785 + 	return slice_get_unmapped_area(addr, len, flags, file_to_psize(file), 1);
781 786 }
782 787 #endif
+2 -2
arch/powerpc/net/bpf_jit.h
··· 13 13 #include <asm/types.h> 14 14 #include <asm/ppc-opcode.h> 15 15 16 - #ifdef PPC64_ELF_ABI_v1 16 + #ifdef CONFIG_PPC64_ELF_ABI_V1 17 17 #define FUNCTION_DESCR_SIZE 24 18 18 #else 19 19 #define FUNCTION_DESCR_SIZE 0 ··· 35 35 } while (0) 36 36 37 37 /* bl (unconditional 'branch' with link) */ 38 - #define PPC_BL(dest) EMIT(PPC_INST_BL | (((dest) - (unsigned long)(image + ctx->idx)) & 0x03fffffc)) 38 + #define PPC_BL(dest) EMIT(PPC_RAW_BL((dest) - (unsigned long)(image + ctx->idx))) 39 39 40 40 /* "cond" here covers BO:BI fields. */ 41 41 #define PPC_BCC_SHORT(cond, dest) \
+1 -1
arch/powerpc/net/bpf_jit_comp.c
··· 276 276 */ 277 277 bpf_jit_dump(flen, proglen, pass, code_base); 278 278 279 - #ifdef PPC64_ELF_ABI_v1 279 + #ifdef CONFIG_PPC64_ELF_ABI_V1 280 280 /* Function descriptor nastiness: Address + TOC */ 281 281 ((u64 *)image)[0] = (u64)code_base; 282 282 ((u64 *)image)[1] = local_paca->kernel_toc;
+2 -2
arch/powerpc/net/bpf_jit_comp64.c
··· 126 126 { 127 127 int i; 128 128 129 - if (__is_defined(PPC64_ELF_ABI_v2)) 129 + if (IS_ENABLED(CONFIG_PPC64_ELF_ABI_V2)) 130 130 EMIT(PPC_RAW_LD(_R2, _R13, offsetof(struct paca_struct, kernel_toc))); 131 131 132 132 /* ··· 266 266 int b2p_index = bpf_to_ppc(BPF_REG_3); 267 267 int bpf_tailcall_prologue_size = 8; 268 268 269 - if (__is_defined(PPC64_ELF_ABI_v2)) 269 + if (IS_ENABLED(CONFIG_PPC64_ELF_ABI_V2)) 270 270 bpf_tailcall_prologue_size += 4; /* skip past the toc load */ 271 271 272 272 /*
+1 -1
arch/powerpc/perf/8xx-pmu.c
··· 157 157 158 158 mpc8xx_pmu_read(event); 159 159 160 - /* If it was the last user, stop counting to avoid useles overhead */ 160 + /* If it was the last user, stop counting to avoid useless overhead */ 161 161 switch (event_type(event)) { 162 162 case PERF_8xx_ID_CPU_CYCLES: 163 163 break;
+3 -3
arch/powerpc/perf/core-book3s.c
··· 1142 1142 /* 1143 1143 * POWER7 can roll back counter values, if the new value is smaller 1144 1144 * than the previous value it will cause the delta and the counter to 1145 - * have bogus values unless we rolled a counter over. If a coutner is 1145 + * have bogus values unless we rolled a counter over. If a counter is 1146 1146 * rolled back, it will be smaller, but within 256, which is the maximum 1147 1147 * number of events to rollback at once. If we detect a rollback 1148 1148 * return 0. This can lead to a small lack of precision in the ··· 2057 2057 /* 2058 2058 * PMU config registers have fields that are 2059 2059 * reserved and some specific values for bit fields are reserved. 2060 - * For ex., MMCRA[61:62] is Randome Sampling Mode (SM) 2060 + * For ex., MMCRA[61:62] is Random Sampling Mode (SM) 2061 2061 * and value of 0b11 to this field is reserved. 2062 2062 * Check for invalid values in attr.config. 2063 2063 */ ··· 2447 2447 } 2448 2448 2449 2449 /* 2450 - * During system wide profling or while specific CPU is monitored for an 2450 + * During system wide profiling or while specific CPU is monitored for an 2451 2451 * event, some corner cases could cause PMC to overflow in idle path. This 2452 2452 * will trigger a PMI after waking up from idle. Since counter values are _not_ 2453 2453 * saved/restored in idle path, can lead to below "Can't find PMC" message.
+20 -20
arch/powerpc/perf/hv-24x7.c
···
33 33
34 34 static cpumask_t hv_24x7_cpumask;
35 35
36      - static bool domain_is_valid(unsigned domain)
    36  + static bool domain_is_valid(unsigned int domain)
37 37 {
38 38 	switch (domain) {
39 39 #define DOMAIN(n, v, x, c) \
···
47 47 	}
48 48 }
49 49
50      - static bool is_physical_domain(unsigned domain)
    50  + static bool is_physical_domain(unsigned int domain)
51 51 {
52 52 	switch (domain) {
53 53 #define DOMAIN(n, v, x, c) \
···
128 128 		domain <= HV_PERF_DOMAIN_VCPU_REMOTE_NODE));
129 129 }
130 130
131     - static const char *domain_name(unsigned domain)
    131 + static const char *domain_name(unsigned int domain)
132 132 {
133 133 	if (!domain_is_valid(domain))
134 134 		return NULL;
···
146 146 	return NULL;
147 147 }
148 148
149     - static bool catalog_entry_domain_is_valid(unsigned domain)
    149 + static bool catalog_entry_domain_is_valid(unsigned int domain)
150 150 {
151 151 	/* POWER8 doesn't support virtual domains. */
152 152 	if (interface_version == 1)
···
258 258
259 259 static char *event_desc(struct hv_24x7_event_data *ev, int *len)
260 260 {
261     - 	unsigned nl = be16_to_cpu(ev->event_name_len);
    261 + 	unsigned int nl = be16_to_cpu(ev->event_name_len);
262 262 	__be16 *desc_len = (__be16 *)(ev->remainder + nl - 2);
263 263
264 264 	*len = be16_to_cpu(*desc_len) - 2;
···
267 267
268 268 static char *event_long_desc(struct hv_24x7_event_data *ev, int *len)
269 269 {
270     - 	unsigned nl = be16_to_cpu(ev->event_name_len);
    270 + 	unsigned int nl = be16_to_cpu(ev->event_name_len);
271 271 	__be16 *desc_len_ = (__be16 *)(ev->remainder + nl - 2);
272     - 	unsigned desc_len = be16_to_cpu(*desc_len_);
    272 + 	unsigned int desc_len = be16_to_cpu(*desc_len_);
273 273 	__be16 *long_desc_len = (__be16 *)(ev->remainder + nl + desc_len - 2);
274 274
275 275 	*len = be16_to_cpu(*long_desc_len) - 2;
···
296 296 {
297 297 	void *start = ev;
298 298 	__be16 *dl_, *ldl_;
299     - 	unsigned dl, ldl;
300     - 	unsigned nl = be16_to_cpu(ev->event_name_len);
    299 + 	unsigned int dl, ldl;
    300 + 	unsigned int nl = be16_to_cpu(ev->event_name_len);
301 301
302 302 	if (nl < 2) {
303 303 		pr_debug("%s: name length too short: %d", __func__, nl);
···
398 398  * - Specifying (i.e overriding) values for other parameters
399 399  *   is undefined.
400 400  */
401     - static char *event_fmt(struct hv_24x7_event_data *event, unsigned domain)
    401 + static char *event_fmt(struct hv_24x7_event_data *event, unsigned int domain)
402 402 {
403 403 	const char *sindex;
404 404 	const char *lpar;
···
529 529 	return NULL;
530 530 }
531 531
532     - static struct attribute *event_to_attr(unsigned ix,
    532 + static struct attribute *event_to_attr(unsigned int ix,
533 533 				       struct hv_24x7_event_data *event,
534     - 				       unsigned domain,
    534 + 				       unsigned int domain,
535 535 				       int nonce)
536 536 {
537 537 	int event_name_len;
···
599 599 	return device_str_attr_create(name, nl, nonce, desc, dl);
600 600 }
601 601
602     - static int event_data_to_attrs(unsigned ix, struct attribute **attrs,
603     - 		struct hv_24x7_event_data *event, int nonce)
    602 + static int event_data_to_attrs(unsigned int ix, struct attribute **attrs,
    603 + 		struct hv_24x7_event_data *event, int nonce)
604 604 {
605 605 	*attrs = event_to_attr(ix, event, event->domain, nonce);
606 606 	if (!*attrs)
···
614 614 	struct rb_node node;
615 615 	const char *name;
616 616 	int nl;
617     - 	unsigned ct;
618     - 	unsigned domain;
    617 + 	unsigned int ct;
    618 + 	unsigned int domain;
619 619 };
620 620
621 621 static int memord(const void *d1, size_t s1, const void *d2, size_t s2)
···
628 628 	return memcmp(d1, d2, s1);
629 629 }
630 630
631     - static int ev_uniq_ord(const void *v1, size_t s1, unsigned d1, const void *v2,
632     - 		       size_t s2, unsigned d2)
    631 + static int ev_uniq_ord(const void *v1, size_t s1, unsigned int d1,
    632 + 		       const void *v2, size_t s2, unsigned int d2)
633 633 {
634 634 	int r = memord(v1, s1, v2, s2);
635 635
···
643 643 }
644 644
645 645 static int event_uniq_add(struct rb_root *root, const char *name, int nl,
646     - 			  unsigned domain)
    646 + 			  unsigned int domain)
647 647 {
648 648 	struct rb_node **new = &(root->rb_node), *parent = NULL;
649 649 	struct event_uniq *data;
···
1398 1398 static int h_24x7_event_init(struct perf_event *event)
1399 1399 {
1400 1400 	struct hv_perf_caps caps;
1401      - 	unsigned domain;
     1401 + 	unsigned int domain;
1402 1402 	unsigned long hret;
1403 1403 	u64 ct;
1404 1404
+3 -2
arch/powerpc/perf/imc-pmu.c
··· 6 6 * (C) 2017 Anju T Sudhakar, IBM Corporation. 7 7 * (C) 2017 Hemant K Shaw, IBM Corporation. 8 8 */ 9 + #include <linux/of.h> 9 10 #include <linux/perf_event.h> 10 11 #include <linux/slab.h> 11 12 #include <asm/opal.h> ··· 522 521 523 522 /* 524 523 * Nest HW counter memory resides in a per-chip reserve-memory (HOMER). 525 - * Get the base memory addresss for this cpu. 524 + * Get the base memory address for this cpu. 526 525 */ 527 526 chip_id = cpu_to_chip_id(event->cpu); 528 527 ··· 675 674 /* 676 675 * Check whether core_imc is registered. We could end up here 677 676 * if the cpuhotplug callback registration fails. i.e, callback 678 - * invokes the offline path for all sucessfully registered cpus. 677 + * invokes the offline path for all successfully registered cpus. 679 678 * At this stage, core_imc pmu will not be registered and we 680 679 * should return here. 681 680 *
+10 -8
arch/powerpc/perf/isa207-common.c
··· 82 82 static void mmcra_sdar_mode(u64 event, unsigned long *mmcra) 83 83 { 84 84 /* 85 - * MMCRA[SDAR_MODE] specifices how the SDAR should be updated in 86 - * continous sampling mode. 85 + * MMCRA[SDAR_MODE] specifies how the SDAR should be updated in 86 + * continuous sampling mode. 87 87 * 88 88 * Incase of Power8: 89 - * MMCRA[SDAR_MODE] will be programmed as "0b01" for continous sampling 89 + * MMCRA[SDAR_MODE] will be programmed as "0b01" for continuous sampling 90 90 * mode and will be un-changed when setting MMCRA[63] (Marked events). 91 91 * 92 92 * Incase of Power9/power10: ··· 108 108 *mmcra |= MMCRA_SDAR_MODE_TLB; 109 109 } 110 110 111 - static u64 p10_thresh_cmp_val(u64 value) 111 + static int p10_thresh_cmp_val(u64 value) 112 112 { 113 113 int exp = 0; 114 114 u64 result = value; ··· 139 139 * exponent is also zero. 140 140 */ 141 141 if (!(value & 0xC0) && exp) 142 - result = 0; 142 + result = -1; 143 143 else 144 144 result = (exp << 8) | value; 145 145 } ··· 187 187 unsigned int cmp, exp; 188 188 189 189 if (cpu_has_feature(CPU_FTR_ARCH_31)) 190 - return p10_thresh_cmp_val(event) != 0; 190 + return p10_thresh_cmp_val(event) >= 0; 191 191 192 192 /* 193 193 * Check the mantissa upper two bits are not zero, unless the ··· 502 502 value |= CNST_THRESH_CTL_SEL_VAL(event >> EVENT_THRESH_SHIFT); 503 503 mask |= p10_CNST_THRESH_CMP_MASK; 504 504 value |= p10_CNST_THRESH_CMP_VAL(p10_thresh_cmp_val(event_config1)); 505 - } 505 + } else if (event_is_threshold(event)) 506 + return -1; 506 507 } else if (cpu_has_feature(CPU_FTR_ARCH_300)) { 507 508 if (event_is_threshold(event) && is_thresh_cmp_valid(event)) { 508 509 mask |= CNST_THRESH_MASK; 509 510 value |= CNST_THRESH_VAL(event >> EVENT_THRESH_SHIFT); 510 - } 511 + } else if (event_is_threshold(event)) 512 + return -1; 511 513 } else { 512 514 /* 513 515 * Special case for PM_MRK_FAB_RSP_MATCH and PM_MRK_FAB_RSP_MATCH_CYC,
+2 -2
arch/powerpc/perf/power9-pmu.c
··· 98 98 /* PowerISA v2.07 format attribute structure*/ 99 99 extern const struct attribute_group isa207_pmu_format_group; 100 100 101 - int p9_dd21_bl_ev[] = { 101 + static int p9_dd21_bl_ev[] = { 102 102 PM_MRK_ST_DONE_L2, 103 103 PM_RADIX_PWC_L1_HIT, 104 104 PM_FLOP_CMPL, ··· 112 112 PM_DISP_HELD_SYNC_HOLD, 113 113 }; 114 114 115 - int p9_dd22_bl_ev[] = { 115 + static int p9_dd22_bl_ev[] = { 116 116 PM_DTLB_MISS_16G, 117 117 PM_DERAT_MISS_2M, 118 118 PM_DTLB_MISS_2M,
-1
arch/powerpc/platforms/40x/ppc40x_simple.c
··· 13 13 #include <asm/machdep.h> 14 14 #include <asm/pci-bridge.h> 15 15 #include <asm/ppc4xx.h> 16 - #include <asm/prom.h> 17 16 #include <asm/time.h> 18 17 #include <asm/udbg.h> 19 18 #include <asm/uic.h>
+1
arch/powerpc/platforms/44x/canyonlands.c
··· 12 12 #include <asm/ppc4xx.h> 13 13 #include <asm/udbg.h> 14 14 #include <asm/uic.h> 15 + #include <linux/of_address.h> 15 16 #include <linux/of_platform.h> 16 17 #include <linux/delay.h> 17 18 #include "44x.h"
+1 -1
arch/powerpc/platforms/44x/fsp2.c
··· 14 14 */ 15 15 16 16 #include <linux/init.h> 17 + #include <linux/of_fdt.h> 17 18 #include <linux/of_platform.h> 18 19 #include <linux/rtc.h> 19 20 20 21 #include <asm/machdep.h> 21 - #include <asm/prom.h> 22 22 #include <asm/udbg.h> 23 23 #include <asm/time.h> 24 24 #include <asm/uic.h>
-1
arch/powerpc/platforms/44x/ppc44x_simple.c
··· 13 13 #include <asm/machdep.h> 14 14 #include <asm/pci-bridge.h> 15 15 #include <asm/ppc4xx.h> 16 - #include <asm/prom.h> 17 16 #include <asm/time.h> 18 17 #include <asm/udbg.h> 19 18 #include <asm/uic.h>
+1 -1
arch/powerpc/platforms/44x/ppc476.c
··· 19 19 20 20 #include <linux/init.h> 21 21 #include <linux/of.h> 22 + #include <linux/of_address.h> 22 23 #include <linux/of_platform.h> 23 24 #include <linux/rtc.h> 24 25 25 26 #include <asm/machdep.h> 26 - #include <asm/prom.h> 27 27 #include <asm/udbg.h> 28 28 #include <asm/time.h> 29 29 #include <asm/uic.h>
-1
arch/powerpc/platforms/44x/sam440ep.c
··· 17 17 #include <linux/of_platform.h> 18 18 19 19 #include <asm/machdep.h> 20 - #include <asm/prom.h> 21 20 #include <asm/udbg.h> 22 21 #include <asm/time.h> 23 22 #include <asm/uic.h>
+2 -1
arch/powerpc/platforms/44x/warp.c
··· 11 11 #include <linux/i2c.h> 12 12 #include <linux/interrupt.h> 13 13 #include <linux/delay.h> 14 + #include <linux/of_address.h> 15 + #include <linux/of_irq.h> 14 16 #include <linux/of_gpio.h> 15 17 #include <linux/slab.h> 16 18 #include <linux/export.h> 17 19 18 20 #include <asm/machdep.h> 19 - #include <asm/prom.h> 20 21 #include <asm/udbg.h> 21 22 #include <asm/time.h> 22 23 #include <asm/uic.h>
+1 -1
arch/powerpc/platforms/4xx/cpm.c
··· 327 327 static int __init cpm_powersave_off(char *arg) 328 328 { 329 329 cpm.powersave_off = 1; 330 - return 0; 330 + return 1; 331 331 } 332 332 __setup("powersave=off", cpm_powersave_off);
+1
arch/powerpc/platforms/4xx/hsta_msi.c
··· 10 10 #include <linux/interrupt.h> 11 11 #include <linux/msi.h> 12 12 #include <linux/of.h> 13 + #include <linux/of_irq.h> 13 14 #include <linux/of_platform.h> 14 15 #include <linux/pci.h> 15 16 #include <linux/semaphore.h>
+1
arch/powerpc/platforms/4xx/pci.c
··· 22 22 #include <linux/pci.h> 23 23 #include <linux/init.h> 24 24 #include <linux/of.h> 25 + #include <linux/of_address.h> 25 26 #include <linux/delay.h> 26 27 #include <linux/slab.h> 27 28
+2 -1
arch/powerpc/platforms/4xx/uic.c
··· 19 19 #include <linux/irq.h> 20 20 #include <linux/interrupt.h> 21 21 #include <linux/kernel_stat.h> 22 + #include <linux/of.h> 23 + #include <linux/of_irq.h> 22 24 #include <asm/irq.h> 23 25 #include <asm/io.h> 24 - #include <asm/prom.h> 25 26 #include <asm/dcr.h> 26 27 27 28 #define NR_UIC_INTS 32
+1 -1
arch/powerpc/platforms/512x/clock-commonclk.c
··· 663 663 * the PSC/MSCAN/SPDIF (serial drivers et al) need the MCLK 664 664 * for their bitrate 665 665 * - in the absence of "aliases" for clocks we need to create 666 - * individial 'struct clk' items for whatever might get 666 + * individual 'struct clk' items for whatever might get 667 667 * referenced or looked up, even if several of those items are 668 668 * identical from the logical POV (their rate value) 669 669 * - for easier future maintenance and for better reflection of
-1
arch/powerpc/platforms/512x/mpc5121_ads.c
··· 14 14 15 15 #include <asm/machdep.h> 16 16 #include <asm/ipic.h> 17 - #include <asm/prom.h> 18 17 #include <asm/time.h> 19 18 20 19 #include <sysdev/fsl_pci.h>
+2 -1
arch/powerpc/platforms/512x/mpc5121_ads_cpld.c
··· 14 14 #include <linux/interrupt.h> 15 15 #include <linux/irq.h> 16 16 #include <linux/io.h> 17 - #include <asm/prom.h> 17 + #include <linux/of_address.h> 18 + #include <linux/of_irq.h> 18 19 19 20 static struct device_node *cpld_pic_node; 20 21 static struct irq_domain *cpld_pic_host;
-1
arch/powerpc/platforms/512x/mpc512x_generic.c
··· 13 13 14 14 #include <asm/machdep.h> 15 15 #include <asm/ipic.h> 16 - #include <asm/prom.h> 17 16 #include <asm/time.h> 18 17 19 18 #include "mpc512x.h"
+2 -2
arch/powerpc/platforms/512x/mpc512x_shared.c
··· 12 12 #include <linux/kernel.h> 13 13 #include <linux/io.h> 14 14 #include <linux/irq.h> 15 + #include <linux/of_address.h> 15 16 #include <linux/of_platform.h> 16 17 #include <linux/fsl-diu-fb.h> 17 18 #include <linux/memblock.h> ··· 21 20 #include <asm/cacheflush.h> 22 21 #include <asm/machdep.h> 23 22 #include <asm/ipic.h> 24 - #include <asm/prom.h> 25 23 #include <asm/time.h> 26 24 #include <asm/mpc5121.h> 27 25 #include <asm/mpc52xx_psc.h> ··· 289 289 290 290 /* 291 291 * We do not allocate and configure new area for bitmap buffer 292 - * because it would requere copying bitmap data (splash image) 292 + * because it would require copying bitmap data (splash image) 293 293 * and so negatively affect boot time. Instead we reserve the 294 294 * already configured frame buffer area so that it won't be 295 295 * destroyed. The starting address of the area to reserve and
-1
arch/powerpc/platforms/52xx/efika.c
··· 14 14 #include <linux/pci.h> 15 15 #include <linux/of.h> 16 16 #include <asm/dma.h> 17 - #include <asm/prom.h> 18 17 #include <asm/time.h> 19 18 #include <asm/machdep.h> 20 19 #include <asm/rtas.h>
-1
arch/powerpc/platforms/52xx/lite5200.c
··· 21 21 #include <asm/time.h> 22 22 #include <asm/io.h> 23 23 #include <asm/machdep.h> 24 - #include <asm/prom.h> 25 24 #include <asm/mpc52xx.h> 26 25 27 26 /* ************************************************************************
+2
arch/powerpc/platforms/52xx/lite5200_pm.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 #include <linux/init.h> 3 3 #include <linux/suspend.h> 4 + #include <linux/of_address.h> 5 + 4 6 #include <asm/io.h> 5 7 #include <asm/time.h> 6 8 #include <asm/mpc52xx.h>
+2 -1
arch/powerpc/platforms/52xx/media5200.c
··· 20 20 #include <linux/irq.h> 21 21 #include <linux/interrupt.h> 22 22 #include <linux/io.h> 23 + #include <linux/of_address.h> 24 + #include <linux/of_irq.h> 23 25 #include <asm/time.h> 24 - #include <asm/prom.h> 25 26 #include <asm/machdep.h> 26 27 #include <asm/mpc52xx.h> 27 28
+1 -1
arch/powerpc/platforms/52xx/mpc5200_simple.c
··· 22 22 */ 23 23 24 24 #undef DEBUG 25 + #include <linux/of.h> 25 26 #include <asm/time.h> 26 - #include <asm/prom.h> 27 27 #include <asm/machdep.h> 28 28 #include <asm/mpc52xx.h> 29 29
+2 -2
arch/powerpc/platforms/52xx/mpc52xx_common.c
··· 15 15 #include <linux/gpio.h> 16 16 #include <linux/kernel.h> 17 17 #include <linux/spinlock.h> 18 + #include <linux/of_address.h> 18 19 #include <linux/of_platform.h> 19 20 #include <linux/of_gpio.h> 20 21 #include <linux/export.h> 21 22 #include <asm/io.h> 22 - #include <asm/prom.h> 23 23 #include <asm/mpc52xx.h> 24 24 25 25 /* MPC5200 device tree match tables */ ··· 308 308 309 309 spin_lock_irqsave(&gpio_lock, flags); 310 310 311 - /* Reconfiure pin-muxing to gpio */ 311 + /* Reconfigure pin-muxing to gpio */ 312 312 mux = in_be32(&simple_gpio->port_config); 313 313 out_be32(&simple_gpio->port_config, mux & (~gpio)); 314 314
+5 -3
arch/powerpc/platforms/52xx/mpc52xx_gpt.c
··· 5 5 * Copyright (c) 2009 Secret Lab Technologies Ltd. 6 6 * Copyright (c) 2008 Sascha Hauer <s.hauer@pengutronix.de>, Pengutronix 7 7 * 8 - * This file is a driver for the the General Purpose Timer (gpt) devices 8 + * This file is a driver for the General Purpose Timer (gpt) devices 9 9 * found on the MPC5200 SoC. Each timer has an IO pin which can be used 10 10 * for GPIO or can be used to raise interrupts. The timer function can 11 11 * be used independently from the IO pin, or it can be used to control ··· 55 55 #include <linux/list.h> 56 56 #include <linux/mutex.h> 57 57 #include <linux/of.h> 58 + #include <linux/of_address.h> 59 + #include <linux/of_irq.h> 58 60 #include <linux/of_platform.h> 59 61 #include <linux/of_gpio.h> 60 62 #include <linux/kernel.h> ··· 400 398 set |= MPC52xx_GPT_MODE_CONTINUOUS; 401 399 402 400 /* Determine the number of clocks in the requested period. 64 bit 403 - * arithmatic is done here to preserve the precision until the value 401 + * arithmetic is done here to preserve the precision until the value 404 402 * is scaled back down into the u32 range. Period is in 'ns', bus 405 403 * frequency is in Hz. */ 406 404 clocks = period * (u64)gpt->ipb_freq; ··· 504 502 if (prescale == 0) 505 503 prescale = 0x10000; 506 504 period = period * prescale * 1000000000ULL; 507 - do_div(period, (u64)gpt->ipb_freq); 505 + do_div(period, gpt->ipb_freq); 508 506 return period; 509 507 } 510 508 EXPORT_SYMBOL(mpc52xx_gpt_timer_period);
+3 -2
arch/powerpc/platforms/52xx/mpc52xx_lpbfifo.c
··· 11 11 #include <linux/interrupt.h> 12 12 #include <linux/kernel.h> 13 13 #include <linux/of.h> 14 + #include <linux/of_address.h> 15 + #include <linux/of_irq.h> 14 16 #include <linux/of_platform.h> 15 17 #include <linux/spinlock.h> 16 18 #include <linux/module.h> 17 19 #include <asm/io.h> 18 - #include <asm/prom.h> 19 20 #include <asm/mpc52xx.h> 20 21 #include <asm/time.h> 21 22 ··· 105 104 * 106 105 * Configure the watermarks so DMA will always complete correctly. 107 106 * It may be worth experimenting with the ALARM value to see if 108 - * there is a performance impacit. However, if it is wrong there 107 + * there is a performance impact. However, if it is wrong there 109 108 * is a risk of DMA not transferring the last chunk of data 110 109 */ 111 110 if (write) {
+7 -15
arch/powerpc/platforms/52xx/mpc52xx_pci.c
···
13 13 #undef DEBUG
14 14
15 15 #include <linux/pci.h>
    16 + #include <linux/of_address.h>
16 17 #include <asm/mpc52xx.h>
17 18 #include <asm/delay.h>
18 19 #include <asm/machdep.h>
···
243 242 	u32 tmp;
244 243 	int iwcr0 = 0, iwcr1 = 0, iwcr2 = 0;
245 244
246     - 	pr_debug("mpc52xx_pci_setup(hose=%p, pci_regs=%p)\n", hose, pci_regs);
    245 + 	pr_debug("%s(hose=%p, pci_regs=%p)\n", __func__, hose, pci_regs);
247 246
248 247 	/* pci_process_bridge_OF_ranges() found all our addresses for us;
249 248 	 * now store them in the right places */
···
258 257 	/* Memory windows */
259 258 	res = &hose->mem_resources[0];
260 259 	if (res->flags) {
261     - 		pr_debug("mem_resource[0] = "
262     - 		         "{.start=%llx, .end=%llx, .flags=%llx}\n",
263     - 		         (unsigned long long)res->start,
264     - 		         (unsigned long long)res->end,
265     - 		         (unsigned long long)res->flags);
    260 + 		pr_debug("mem_resource[0] = %pr\n", res);
266 261 		out_be32(&pci_regs->iw0btar,
267 262 		         MPC52xx_PCI_IWBTAR_TRANSLATION(res->start, res->start,
268 263 		                                        resource_size(res)));
···
271 274
272 275 	res = &hose->mem_resources[1];
273 276 	if (res->flags) {
274     - 		pr_debug("mem_resource[1] = {.start=%x, .end=%x, .flags=%lx}\n",
275     - 		         res->start, res->end, res->flags);
    277 + 		pr_debug("mem_resource[1] = %pr\n", res);
276 278 		out_be32(&pci_regs->iw1btar,
277 279 		         MPC52xx_PCI_IWBTAR_TRANSLATION(res->start, res->start,
278 280 		                                        resource_size(res)));
···
288 292 		printk(KERN_ERR "%s: Didn't find IO resources\n", __FILE__);
289 293 		return;
290 294 	}
291     - 	pr_debug(".io_resource={.start=%llx,.end=%llx,.flags=%llx} "
292     - 	         ".io_base_phys=0x%p\n",
293     - 	         (unsigned long long)res->start,
294     - 	         (unsigned long long)res->end,
295     - 	         (unsigned long long)res->flags, (void*)hose->io_base_phys);
    295 + 	pr_debug(".io_resource = %pr .io_base_phys=0x%pa\n",
    296 + 		 res, &hose->io_base_phys);
296 297 	out_be32(&pci_regs->iw2btar,
297 298 	         MPC52xx_PCI_IWBTAR_TRANSLATION(hose->io_base_phys,
298 299 	                                        res->start,
···
329 336 {
330 337 	int i;
331 338
332     - 	pr_debug("mpc52xx_pci_fixup_resources() %.4x:%.4x\n",
333     - 	         dev->vendor, dev->device);
    339 + 	pr_debug("%s() %.4x:%.4x\n", __func__, dev->vendor, dev->device);
334 340
335 341 	/* We don't rely on boot loader for PCI and resets all
336 342 	   devices */
+2 -1
arch/powerpc/platforms/52xx/mpc52xx_pic.c
··· 101 101 #include <linux/interrupt.h> 102 102 #include <linux/irq.h> 103 103 #include <linux/of.h> 104 + #include <linux/of_address.h> 105 + #include <linux/of_irq.h> 104 106 #include <asm/io.h> 105 - #include <asm/prom.h> 106 107 #include <asm/mpc52xx.h> 107 108 108 109 /* HW IRQ mapping */
+2
arch/powerpc/platforms/52xx/mpc52xx_pm.c
··· 2 2 #include <linux/init.h> 3 3 #include <linux/suspend.h> 4 4 #include <linux/io.h> 5 + #include <linux/of_address.h> 6 + 5 7 #include <asm/time.h> 6 8 #include <asm/cacheflush.h> 7 9 #include <asm/mpc52xx.h>
-1
arch/powerpc/platforms/82xx/ep8248e.c
··· 20 20 #include <asm/machdep.h> 21 21 #include <asm/time.h> 22 22 #include <asm/mpc8260.h> 23 - #include <asm/prom.h> 24 23 25 24 #include <sysdev/fsl_soc.h> 26 25 #include <sysdev/cpm2_pic.h>
-1
arch/powerpc/platforms/82xx/km82xx.c
··· 20 20 #include <asm/machdep.h> 21 21 #include <linux/time.h> 22 22 #include <asm/mpc8260.h> 23 - #include <asm/prom.h> 24 23 25 24 #include <sysdev/fsl_soc.h> 26 25 #include <sysdev/cpm2_pic.h>
+1 -1
arch/powerpc/platforms/82xx/pq2ads-pci-pic.c
··· 14 14 #include <linux/irq.h> 15 15 #include <linux/types.h> 16 16 #include <linux/slab.h> 17 + #include <linux/of_irq.h> 17 18 18 19 #include <asm/io.h> 19 - #include <asm/prom.h> 20 20 #include <asm/cpm2.h> 21 21 22 22 #include "pq2.h"
-1
arch/powerpc/platforms/83xx/km83xx.c
··· 29 29 #include <asm/machdep.h> 30 30 #include <asm/ipic.h> 31 31 #include <asm/irq.h> 32 - #include <asm/prom.h> 33 32 #include <asm/udbg.h> 34 33 #include <sysdev/fsl_soc.h> 35 34 #include <sysdev/fsl_pci.h>
+5 -10
arch/powerpc/platforms/83xx/mcu_mpc8349emitx.c
··· 8 8 */ 9 9 10 10 #include <linux/kernel.h> 11 + #include <linux/mod_devicetable.h> 11 12 #include <linux/module.h> 12 13 #include <linux/device.h> 13 14 #include <linux/mutex.h> 14 15 #include <linux/i2c.h> 15 16 #include <linux/gpio/driver.h> 16 - #include <linux/of.h> 17 - #include <linux/of_gpio.h> 18 17 #include <linux/slab.h> 19 18 #include <linux/kthread.h> 19 + #include <linux/property.h> 20 20 #include <linux/reboot.h> 21 - #include <asm/prom.h> 22 21 #include <asm/machdep.h> 23 22 24 23 /* ··· 115 116 116 117 static int mcu_gpiochip_add(struct mcu *mcu) 117 118 { 118 - struct device_node *np; 119 + struct device *dev = &mcu->client->dev; 119 120 struct gpio_chip *gc = &mcu->gc; 120 121 121 - np = of_find_compatible_node(NULL, NULL, "fsl,mcu-mpc8349emitx"); 122 - if (!np) 123 - return -ENODEV; 124 - 125 122 gc->owner = THIS_MODULE; 126 - gc->label = kasprintf(GFP_KERNEL, "%pOF", np); 123 + gc->label = kasprintf(GFP_KERNEL, "%pfw", dev_fwnode(dev)); 127 124 gc->can_sleep = 1; 128 125 gc->ngpio = MCU_NUM_GPIO; 129 126 gc->base = -1; 130 127 gc->set = mcu_gpio_set; 131 128 gc->direction_output = mcu_gpio_dir_out; 132 - gc->of_node = np; 129 + gc->parent = dev; 133 130 134 131 return gpiochip_add_data(gc, mcu); 135 132 }
-1
arch/powerpc/platforms/83xx/mpc832x_mds.c
··· 28 28 #include <asm/machdep.h> 29 29 #include <asm/ipic.h> 30 30 #include <asm/irq.h> 31 - #include <asm/prom.h> 32 31 #include <asm/udbg.h> 33 32 #include <sysdev/fsl_soc.h> 34 33 #include <sysdev/fsl_pci.h>
+1
arch/powerpc/platforms/83xx/mpc832x_rdb.c
··· 15 15 #include <linux/spi/spi.h> 16 16 #include <linux/spi/mmc_spi.h> 17 17 #include <linux/mmc/host.h> 18 + #include <linux/of_irq.h> 18 19 #include <linux/of_platform.h> 19 20 #include <linux/fsl_devices.h> 20 21
-1
arch/powerpc/platforms/83xx/mpc834x_itx.c
··· 27 27 #include <asm/machdep.h> 28 28 #include <asm/ipic.h> 29 29 #include <asm/irq.h> 30 - #include <asm/prom.h> 31 30 #include <asm/udbg.h> 32 31 #include <sysdev/fsl_soc.h> 33 32 #include <sysdev/fsl_pci.h>
+1 -1
arch/powerpc/platforms/83xx/mpc834x_mds.c
··· 19 19 #include <linux/delay.h> 20 20 #include <linux/seq_file.h> 21 21 #include <linux/root_dev.h> 22 + #include <linux/of_address.h> 22 23 #include <linux/of_platform.h> 23 24 24 25 #include <linux/atomic.h> ··· 28 27 #include <asm/machdep.h> 29 28 #include <asm/ipic.h> 30 29 #include <asm/irq.h> 31 - #include <asm/prom.h> 32 30 #include <asm/udbg.h> 33 31 #include <sysdev/fsl_soc.h> 34 32 #include <sysdev/fsl_pci.h>
-1
arch/powerpc/platforms/83xx/mpc836x_mds.c
··· 35 35 #include <asm/machdep.h> 36 36 #include <asm/ipic.h> 37 37 #include <asm/irq.h> 38 - #include <asm/prom.h> 39 38 #include <asm/udbg.h> 40 39 #include <sysdev/fsl_soc.h> 41 40 #include <sysdev/fsl_pci.h>
-1
arch/powerpc/platforms/83xx/mpc836x_rdk.c
··· 12 12 #include <linux/pci.h> 13 13 #include <linux/of_platform.h> 14 14 #include <linux/io.h> 15 - #include <asm/prom.h> 16 15 #include <asm/time.h> 17 16 #include <asm/ipic.h> 18 17 #include <asm/udbg.h>
+1 -1
arch/powerpc/platforms/83xx/mpc837x_mds.c
··· 9 9 10 10 #include <linux/pci.h> 11 11 #include <linux/of.h> 12 + #include <linux/of_address.h> 12 13 #include <linux/of_platform.h> 13 14 14 15 #include <asm/time.h> 15 16 #include <asm/ipic.h> 16 17 #include <asm/udbg.h> 17 - #include <asm/prom.h> 18 18 #include <sysdev/fsl_pci.h> 19 19 20 20 #include "mpc83xx.h"
+2 -5
arch/powerpc/platforms/83xx/suspend.c
··· 322 322 static const struct of_device_id pmc_match[]; 323 323 static int pmc_probe(struct platform_device *ofdev) 324 324 { 325 - const struct of_device_id *match; 326 325 struct device_node *np = ofdev->dev.of_node; 327 326 struct resource res; 328 327 const struct pmc_type *type; 329 328 int ret = 0; 330 329 331 - match = of_match_device(pmc_match, &ofdev->dev); 332 - if (!match) 330 + type = of_device_get_match_data(&ofdev->dev); 331 + if (!type) 333 332 return -EINVAL; 334 - 335 - type = match->data; 336 333 337 334 if (!of_device_is_available(np)) 338 335 return -ENODEV;
+1 -1
arch/powerpc/platforms/83xx/usb.c
··· 11 11 #include <linux/kernel.h> 12 12 #include <linux/errno.h> 13 13 #include <linux/of.h> 14 + #include <linux/of_address.h> 14 15 15 16 #include <asm/io.h> 16 - #include <asm/prom.h> 17 17 #include <sysdev/fsl_soc.h> 18 18 19 19 #include "mpc83xx.h"
-9
arch/powerpc/platforms/85xx/Kconfig
··· 16 16 17 17 if PPC32 18 18 19 - config FSL_85XX_CACHE_SRAM 20 - bool 21 - select PPC_LIB_RHEAP 22 - help 23 - When selected, this option enables cache-sram support 24 - for memory allocation on P1/P2 QorIQ platforms. 25 - cache-sram-size and cache-sram-offset kernel boot 26 - parameters should be passed when this option is enabled. 27 - 28 19 config BSC9131_RDB 29 20 bool "Freescale BSC9131RDB" 30 21 select DEFAULT_UIMAGE
-1
arch/powerpc/platforms/85xx/corenet_generic.c
··· 19 19 #include <asm/pci-bridge.h> 20 20 #include <asm/ppc-pci.h> 21 21 #include <mm/mmu_decl.h> 22 - #include <asm/prom.h> 23 22 #include <asm/udbg.h> 24 23 #include <asm/mpic.h> 25 24 #include <asm/ehv_pic.h>
+1 -1
arch/powerpc/platforms/85xx/ge_imp3a.c
··· 17 17 #include <linux/delay.h> 18 18 #include <linux/seq_file.h> 19 19 #include <linux/interrupt.h> 20 + #include <linux/of_address.h> 20 21 #include <linux/of_platform.h> 21 22 22 23 #include <asm/time.h> 23 24 #include <asm/machdep.h> 24 25 #include <asm/pci-bridge.h> 25 26 #include <mm/mmu_decl.h> 26 - #include <asm/prom.h> 27 27 #include <asm/udbg.h> 28 28 #include <asm/mpic.h> 29 29 #include <asm/swiotlb.h>
-1
arch/powerpc/platforms/85xx/ksi8560.c
··· 26 26 #include <asm/mpic.h> 27 27 #include <mm/mmu_decl.h> 28 28 #include <asm/udbg.h> 29 - #include <asm/prom.h> 30 29 31 30 #include <sysdev/fsl_soc.h> 32 31 #include <sysdev/fsl_pci.h>
-1
arch/powerpc/platforms/85xx/mpc8536_ds.c
··· 18 18 #include <asm/machdep.h> 19 19 #include <asm/pci-bridge.h> 20 20 #include <mm/mmu_decl.h> 21 - #include <asm/prom.h> 22 21 #include <asm/udbg.h> 23 22 #include <asm/mpic.h> 24 23 #include <asm/swiotlb.h>
+3 -2
arch/powerpc/platforms/85xx/mpc85xx_cds.c
··· 21 21 #include <linux/initrd.h> 22 22 #include <linux/interrupt.h> 23 23 #include <linux/fsl_devices.h> 24 + #include <linux/of_address.h> 25 + #include <linux/of_irq.h> 24 26 #include <linux/of_platform.h> 25 27 #include <linux/pgtable.h> 26 28 ··· 35 33 #include <asm/pci-bridge.h> 36 34 #include <asm/irq.h> 37 35 #include <mm/mmu_decl.h> 38 - #include <asm/prom.h> 39 36 #include <asm/udbg.h> 40 37 #include <asm/mpic.h> 41 38 #include <asm/i8259.h> ··· 152 151 */ 153 152 case PCI_DEVICE_ID_VIA_82C586_2: 154 153 /* There are two USB controllers. 155 - * Identify them by functon number 154 + * Identify them by function number 156 155 */ 157 156 if (PCI_FUNC(dev->devfn) == 3) 158 157 dev->irq = 11;
+1 -1
arch/powerpc/platforms/85xx/mpc85xx_ds.c
··· 15 15 #include <linux/delay.h> 16 16 #include <linux/seq_file.h> 17 17 #include <linux/interrupt.h> 18 + #include <linux/of_irq.h> 18 19 #include <linux/of_platform.h> 19 20 20 21 #include <asm/time.h> 21 22 #include <asm/machdep.h> 22 23 #include <asm/pci-bridge.h> 23 24 #include <mm/mmu_decl.h> 24 - #include <asm/prom.h> 25 25 #include <asm/udbg.h> 26 26 #include <asm/mpic.h> 27 27 #include <asm/i8259.h>
-1
arch/powerpc/platforms/85xx/mpc85xx_mds.c
··· 39 39 #include <asm/pci-bridge.h> 40 40 #include <asm/irq.h> 41 41 #include <mm/mmu_decl.h> 42 - #include <asm/prom.h> 43 42 #include <asm/udbg.h> 44 43 #include <sysdev/fsl_soc.h> 45 44 #include <sysdev/fsl_pci.h>
-1
arch/powerpc/platforms/85xx/mpc85xx_rdb.c
··· 19 19 #include <asm/machdep.h> 20 20 #include <asm/pci-bridge.h> 21 21 #include <mm/mmu_decl.h> 22 - #include <asm/prom.h> 23 22 #include <asm/udbg.h> 24 23 #include <asm/mpic.h> 25 24 #include <soc/fsl/qe/qe.h>
-1
arch/powerpc/platforms/85xx/p1010rdb.c
··· 16 16 #include <asm/machdep.h> 17 17 #include <asm/pci-bridge.h> 18 18 #include <mm/mmu_decl.h> 19 - #include <asm/prom.h> 20 19 #include <asm/udbg.h> 21 20 #include <asm/mpic.h> 22 21
+1
arch/powerpc/platforms/85xx/p1022_ds.c
··· 18 18 19 19 #include <linux/fsl/guts.h> 20 20 #include <linux/pci.h> 21 + #include <linux/of_address.h> 21 22 #include <linux/of_platform.h> 22 23 #include <asm/div64.h> 23 24 #include <asm/mpic.h>
+1
arch/powerpc/platforms/85xx/p1022_rdk.c
··· 14 14 15 15 #include <linux/fsl/guts.h> 16 16 #include <linux/pci.h> 17 + #include <linux/of_address.h> 17 18 #include <linux/of_platform.h> 18 19 #include <asm/div64.h> 19 20 #include <asm/mpic.h>
+1 -1
arch/powerpc/platforms/85xx/p1023_rdb.c
··· 15 15 #include <linux/delay.h> 16 16 #include <linux/module.h> 17 17 #include <linux/fsl_devices.h> 18 + #include <linux/of_address.h> 18 19 #include <linux/of_platform.h> 19 20 #include <linux/of_device.h> 20 21 ··· 23 22 #include <asm/machdep.h> 24 23 #include <asm/pci-bridge.h> 25 24 #include <mm/mmu_decl.h> 26 - #include <asm/prom.h> 27 25 #include <asm/udbg.h> 28 26 #include <asm/mpic.h> 29 27 #include "smp.h"
+1
arch/powerpc/platforms/85xx/qemu_e500.c
··· 12 12 */ 13 13 14 14 #include <linux/kernel.h> 15 + #include <linux/of.h> 15 16 #include <linux/of_fdt.h> 16 17 #include <linux/pgtable.h> 17 18 #include <asm/machdep.h>
+1 -1
arch/powerpc/platforms/85xx/smp.c
··· 208 208 * The bootpage and highmem can be accessed via ioremap(), but 209 209 * we need to directly access the spinloop if its in lowmem. 210 210 */ 211 - ioremappable = *cpu_rel_addr > virt_to_phys(high_memory); 211 + ioremappable = *cpu_rel_addr > virt_to_phys(high_memory - 1); 212 212 213 213 /* Map the spin table */ 214 214 if (ioremappable)
-1
arch/powerpc/platforms/85xx/socrates.c
··· 29 29 #include <asm/machdep.h> 30 30 #include <asm/pci-bridge.h> 31 31 #include <asm/mpic.h> 32 - #include <asm/prom.h> 33 32 #include <mm/mmu_decl.h> 34 33 #include <asm/udbg.h> 35 34
-1
arch/powerpc/platforms/85xx/stx_gp3.c
··· 28 28 #include <asm/machdep.h> 29 29 #include <asm/pci-bridge.h> 30 30 #include <asm/mpic.h> 31 - #include <asm/prom.h> 32 31 #include <mm/mmu_decl.h> 33 32 #include <asm/udbg.h> 34 33
-1
arch/powerpc/platforms/85xx/tqm85xx.c
··· 26 26 #include <asm/machdep.h> 27 27 #include <asm/pci-bridge.h> 28 28 #include <asm/mpic.h> 29 - #include <asm/prom.h> 30 29 #include <mm/mmu_decl.h> 31 30 #include <asm/udbg.h> 32 31
+1 -1
arch/powerpc/platforms/85xx/xes_mpc85xx.c
··· 16 16 #include <linux/delay.h> 17 17 #include <linux/seq_file.h> 18 18 #include <linux/interrupt.h> 19 + #include <linux/of_address.h> 19 20 #include <linux/of_platform.h> 20 21 21 22 #include <asm/time.h> 22 23 #include <asm/machdep.h> 23 24 #include <asm/pci-bridge.h> 24 25 #include <mm/mmu_decl.h> 25 - #include <asm/prom.h> 26 26 #include <asm/udbg.h> 27 27 #include <asm/mpic.h> 28 28
+2 -2
arch/powerpc/platforms/86xx/gef_ppc9a.c
··· 18 18 #include <linux/kdev_t.h> 19 19 #include <linux/delay.h> 20 20 #include <linux/seq_file.h> 21 + #include <linux/of_address.h> 21 22 #include <linux/of_platform.h> 22 23 23 24 #include <asm/time.h> 24 25 #include <asm/machdep.h> 25 26 #include <asm/pci-bridge.h> 26 - #include <asm/prom.h> 27 27 #include <mm/mmu_decl.h> 28 28 #include <asm/udbg.h> 29 29 ··· 180 180 * 181 181 * This function is called to determine whether the BSP is compatible with the 182 182 * supplied device-tree, which is assumed to be the correct one for the actual 183 - * board. It is expected thati, in the future, a kernel may support multiple 183 + * board. It is expected that, in the future, a kernel may support multiple 184 184 * boards. 185 185 */ 186 186 static int __init gef_ppc9a_probe(void)
+2 -2
arch/powerpc/platforms/86xx/gef_sbc310.c
··· 18 18 #include <linux/kdev_t.h> 19 19 #include <linux/delay.h> 20 20 #include <linux/seq_file.h> 21 + #include <linux/of_address.h> 21 22 #include <linux/of_platform.h> 22 23 23 24 #include <asm/time.h> 24 25 #include <asm/machdep.h> 25 26 #include <asm/pci-bridge.h> 26 - #include <asm/prom.h> 27 27 #include <mm/mmu_decl.h> 28 28 #include <asm/udbg.h> 29 29 ··· 167 167 * 168 168 * This function is called to determine whether the BSP is compatible with the 169 169 * supplied device-tree, which is assumed to be the correct one for the actual 170 - * board. It is expected thati, in the future, a kernel may support multiple 170 + * board. It is expected that, in the future, a kernel may support multiple 171 171 * boards. 172 172 */ 173 173 static int __init gef_sbc310_probe(void)
+2 -2
arch/powerpc/platforms/86xx/gef_sbc610.c
··· 18 18 #include <linux/kdev_t.h> 19 19 #include <linux/delay.h> 20 20 #include <linux/seq_file.h> 21 + #include <linux/of_address.h> 21 22 #include <linux/of_platform.h> 22 23 23 24 #include <asm/time.h> 24 25 #include <asm/machdep.h> 25 26 #include <asm/pci-bridge.h> 26 - #include <asm/prom.h> 27 27 #include <mm/mmu_decl.h> 28 28 #include <asm/udbg.h> 29 29 ··· 157 157 * 158 158 * This function is called to determine whether the BSP is compatible with the 159 159 * supplied device-tree, which is assumed to be the correct one for the actual 160 - * board. It is expected thati, in the future, a kernel may support multiple 160 + * board. It is expected that, in the future, a kernel may support multiple 161 161 * boards. 162 162 */ 163 163 static int __init gef_sbc610_probe(void)
+2 -1
arch/powerpc/platforms/86xx/mpc8610_hpcd.c
··· 20 20 #include <linux/delay.h> 21 21 #include <linux/seq_file.h> 22 22 #include <linux/of.h> 23 + #include <linux/of_address.h> 24 + #include <linux/of_irq.h> 23 25 #include <linux/fsl/guts.h> 24 26 25 27 #include <asm/time.h> 26 28 #include <asm/machdep.h> 27 29 #include <asm/pci-bridge.h> 28 - #include <asm/prom.h> 29 30 #include <mm/mmu_decl.h> 30 31 #include <asm/udbg.h> 31 32
-1
arch/powerpc/platforms/86xx/mpc86xx_hpcn.c
··· 19 19 #include <asm/time.h> 20 20 #include <asm/machdep.h> 21 21 #include <asm/pci-bridge.h> 22 - #include <asm/prom.h> 23 22 #include <mm/mmu_decl.h> 24 23 #include <asm/udbg.h> 25 24 #include <asm/swiotlb.h>
+1
arch/powerpc/platforms/86xx/mvme7100.c
··· 19 19 20 20 #include <linux/pci.h> 21 21 #include <linux/of.h> 22 + #include <linux/of_fdt.h> 22 23 #include <linux/of_platform.h> 23 24 #include <linux/of_address.h> 24 25 #include <asm/udbg.h>
+1 -1
arch/powerpc/platforms/8xx/Makefile
··· 3 3 # Makefile for the PowerPC 8xx linux kernel. 4 4 # 5 5 obj-y += m8xx_setup.o machine_check.o pic.o 6 - obj-$(CONFIG_CPM1) += cpm1.o 6 + obj-$(CONFIG_CPM1) += cpm1.o cpm1-ic.o 7 7 obj-$(CONFIG_UCODE_PATCH) += micropatch.o 8 8 obj-$(CONFIG_MPC885ADS) += mpc885ads_setup.o 9 9 obj-$(CONFIG_MPC86XADS) += mpc86xads_setup.o
+2 -2
arch/powerpc/platforms/8xx/adder875.c
··· 15 15 #include <asm/cpm1.h> 16 16 #include <asm/fs_pd.h> 17 17 #include <asm/udbg.h> 18 - #include <asm/prom.h> 19 18 20 19 #include "mpc8xx.h" 20 + #include "pic.h" 21 21 22 22 struct cpm_pin { 23 23 int port, pin, flags; ··· 104 104 .name = "Adder MPC875", 105 105 .probe = adder875_probe, 106 106 .setup_arch = adder875_setup, 107 - .init_IRQ = mpc8xx_pics_init, 107 + .init_IRQ = mpc8xx_pic_init, 108 108 .get_irq = mpc8xx_get_irq, 109 109 .restart = mpc8xx_restart, 110 110 .calibrate_decr = generic_calibrate_decr,
+188
arch/powerpc/platforms/8xx/cpm1-ic.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Interrupt controller for the 4 + * Communication Processor Module. 5 + * Copyright (c) 1997 Dan error_act (dmalek@jlc.net) 6 + */ 7 + #include <linux/kernel.h> 8 + #include <linux/interrupt.h> 9 + #include <linux/irqdomain.h> 10 + #include <linux/platform_device.h> 11 + #include <asm/cpm1.h> 12 + 13 + struct cpm_pic_data { 14 + cpic8xx_t __iomem *reg; 15 + struct irq_domain *host; 16 + }; 17 + 18 + static void cpm_mask_irq(struct irq_data *d) 19 + { 20 + struct cpm_pic_data *data = irq_data_get_irq_chip_data(d); 21 + unsigned int cpm_vec = (unsigned int)irqd_to_hwirq(d); 22 + 23 + clrbits32(&data->reg->cpic_cimr, (1 << cpm_vec)); 24 + } 25 + 26 + static void cpm_unmask_irq(struct irq_data *d) 27 + { 28 + struct cpm_pic_data *data = irq_data_get_irq_chip_data(d); 29 + unsigned int cpm_vec = (unsigned int)irqd_to_hwirq(d); 30 + 31 + setbits32(&data->reg->cpic_cimr, (1 << cpm_vec)); 32 + } 33 + 34 + static void cpm_end_irq(struct irq_data *d) 35 + { 36 + struct cpm_pic_data *data = irq_data_get_irq_chip_data(d); 37 + unsigned int cpm_vec = (unsigned int)irqd_to_hwirq(d); 38 + 39 + out_be32(&data->reg->cpic_cisr, (1 << cpm_vec)); 40 + } 41 + 42 + static struct irq_chip cpm_pic = { 43 + .name = "CPM PIC", 44 + .irq_mask = cpm_mask_irq, 45 + .irq_unmask = cpm_unmask_irq, 46 + .irq_eoi = cpm_end_irq, 47 + }; 48 + 49 + static int cpm_get_irq(struct irq_desc *desc) 50 + { 51 + struct cpm_pic_data *data = irq_desc_get_handler_data(desc); 52 + int cpm_vec; 53 + 54 + /* 55 + * Get the vector by setting the ACK bit and then reading 56 + * the register. 57 + */ 58 + out_be16(&data->reg->cpic_civr, 1); 59 + cpm_vec = in_be16(&data->reg->cpic_civr); 60 + cpm_vec >>= 11; 61 + 62 + return irq_linear_revmap(data->host, cpm_vec); 63 + } 64 + 65 + static void cpm_cascade(struct irq_desc *desc) 66 + { 67 + generic_handle_irq(cpm_get_irq(desc)); 68 + } 69 + 70 + static int cpm_pic_host_map(struct irq_domain *h, unsigned int virq, 71 + irq_hw_number_t hw) 72 + { 73 + irq_set_chip_data(virq, h->host_data); 74 + irq_set_status_flags(virq, IRQ_LEVEL); 75 + irq_set_chip_and_handler(virq, &cpm_pic, handle_fasteoi_irq); 76 + return 0; 77 + } 78 + 79 + static const struct irq_domain_ops cpm_pic_host_ops = { 80 + .map = cpm_pic_host_map, 81 + }; 82 + 83 + static int cpm_pic_probe(struct platform_device *pdev) 84 + { 85 + struct device *dev = &pdev->dev; 86 + struct resource *res; 87 + int irq; 88 + struct cpm_pic_data *data; 89 + 90 + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 91 + if (!res) 92 + return -ENODEV; 93 + 94 + data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL); 95 + if (!data) 96 + return -ENOMEM; 97 + 98 + data->reg = devm_ioremap(dev, res->start, resource_size(res)); 99 + if (!data->reg) 100 + return -ENODEV; 101 + 102 + irq = platform_get_irq(pdev, 0); 103 + if (irq < 0) 104 + return irq; 105 + 106 + /* Initialize the CPM interrupt controller. */ 107 + out_be32(&data->reg->cpic_cicr, 108 + (CICR_SCD_SCC4 | CICR_SCC_SCC3 | CICR_SCB_SCC2 | CICR_SCA_SCC1) | 109 + ((virq_to_hw(irq) / 2) << 13) | CICR_HP_MASK); 110 + 111 + out_be32(&data->reg->cpic_cimr, 0); 112 + 113 + data->host = irq_domain_add_linear(dev->of_node, 64, &cpm_pic_host_ops, data); 114 + if (!data->host) 115 + return -ENODEV; 116 + 117 + irq_set_handler_data(irq, data); 118 + irq_set_chained_handler(irq, cpm_cascade); 119 + 120 + setbits32(&data->reg->cpic_cicr, CICR_IEN); 121 + 122 + return 0; 123 + } 124 + 125 + static const struct of_device_id cpm_pic_match[] = { 126 + { 127 + .compatible = "fsl,cpm1-pic", 128 + }, { 129 + .type = "cpm-pic", 130 + .compatible = "CPM", 131 + }, {}, 132 + }; 133 + 134 + static struct platform_driver cpm_pic_driver = { 135 + .driver = { 136 + .name = "cpm-pic", 137 + .of_match_table = cpm_pic_match, 138 + }, 139 + .probe = cpm_pic_probe, 140 + }; 141 + 142 + static int __init cpm_pic_init(void) 143 + { 144 + return platform_driver_register(&cpm_pic_driver); 145 + } 146 + arch_initcall(cpm_pic_init); 147 + 148 + /* 149 + * The CPM can generate the error interrupt when there is a race condition 150 + * between generating and masking interrupts. All we have to do is ACK it 151 + * and return. This is a no-op function so we don't need any special 152 + * tests in the interrupt handler. 153 + */ 154 + static irqreturn_t cpm_error_interrupt(int irq, void *dev) 155 + { 156 + return IRQ_HANDLED; 157 + } 158 + 159 + static int cpm_error_probe(struct platform_device *pdev) 160 + { 161 + int irq; 162 + 163 + irq = platform_get_irq(pdev, 0); 164 + if (irq < 0) 165 + return irq; 166 + 167 + return request_irq(irq, cpm_error_interrupt, IRQF_NO_THREAD, "error", NULL); 168 + } 169 + 170 + static const struct of_device_id cpm_error_ids[] = { 171 + { .compatible = "fsl,cpm1" }, 172 + { .type = "cpm" }, 173 + {}, 174 + }; 175 + 176 + static struct platform_driver cpm_error_driver = { 177 + .driver = { 178 + .name = "cpm-error", 179 + .of_match_table = cpm_error_ids, 180 + }, 181 + .probe = cpm_error_probe, 182 + }; 183 + 184 + static int __init cpm_error_init(void) 185 + { 186 + return platform_driver_register(&cpm_error_driver); 187 + } 188 + subsys_initcall(cpm_error_init);
+2 -140
arch/powerpc/platforms/8xx/cpm1.c
··· 33 33 #include <linux/module.h> 34 34 #include <linux/spinlock.h> 35 35 #include <linux/slab.h> 36 + #include <linux/of_irq.h> 36 37 #include <asm/page.h> 37 38 #include <asm/8xx_immap.h> 38 39 #include <asm/cpm1.h> 39 40 #include <asm/io.h> 40 41 #include <asm/rheap.h> 41 - #include <asm/prom.h> 42 42 #include <asm/cpm.h> 43 43 44 44 #include <asm/fs_pd.h> ··· 51 51 52 52 cpm8xx_t __iomem *cpmp; /* Pointer to comm processor space */ 53 53 immap_t __iomem *mpc8xx_immr = (void __iomem *)VIRT_IMMR_BASE; 54 - static cpic8xx_t __iomem *cpic_reg; 55 - 56 - static struct irq_domain *cpm_pic_host; 57 - 58 - static void cpm_mask_irq(struct irq_data *d) 59 - { 60 - unsigned int cpm_vec = (unsigned int)irqd_to_hwirq(d); 61 - 62 - clrbits32(&cpic_reg->cpic_cimr, (1 << cpm_vec)); 63 - } 64 - 65 - static void cpm_unmask_irq(struct irq_data *d) 66 - { 67 - unsigned int cpm_vec = (unsigned int)irqd_to_hwirq(d); 68 - 69 - setbits32(&cpic_reg->cpic_cimr, (1 << cpm_vec)); 70 - } 71 - 72 - static void cpm_end_irq(struct irq_data *d) 73 - { 74 - unsigned int cpm_vec = (unsigned int)irqd_to_hwirq(d); 75 - 76 - out_be32(&cpic_reg->cpic_cisr, (1 << cpm_vec)); 77 - } 78 - 79 - static struct irq_chip cpm_pic = { 80 - .name = "CPM PIC", 81 - .irq_mask = cpm_mask_irq, 82 - .irq_unmask = cpm_unmask_irq, 83 - .irq_eoi = cpm_end_irq, 84 - }; 85 - 86 - int cpm_get_irq(void) 87 - { 88 - int cpm_vec; 89 - 90 - /* 91 - * Get the vector by setting the ACK bit and then reading 92 - * the register. 93 - */ 94 - out_be16(&cpic_reg->cpic_civr, 1); 95 - cpm_vec = in_be16(&cpic_reg->cpic_civr); 96 - cpm_vec >>= 11; 97 - 98 - return irq_linear_revmap(cpm_pic_host, cpm_vec); 99 - } 100 - 101 - static int cpm_pic_host_map(struct irq_domain *h, unsigned int virq, 102 - irq_hw_number_t hw) 103 - { 104 - pr_debug("cpm_pic_host_map(%d, 0x%lx)\n", virq, hw); 105 - 106 - irq_set_status_flags(virq, IRQ_LEVEL); 107 - irq_set_chip_and_handler(virq, &cpm_pic, handle_fasteoi_irq); 108 - return 0; 109 - } 110 - 111 - /* 112 - * The CPM can generate the error interrupt when there is a race condition 113 - * between generating and masking interrupts. All we have to do is ACK it 114 - * and return. This is a no-op function so we don't need any special 115 - * tests in the interrupt handler. 116 - */ 117 - static irqreturn_t cpm_error_interrupt(int irq, void *dev) 118 - { 119 - return IRQ_HANDLED; 120 - } 121 - 122 - static const struct irq_domain_ops cpm_pic_host_ops = { 123 - .map = cpm_pic_host_map, 124 - }; 125 - 126 - unsigned int __init cpm_pic_init(void) 127 - { 128 - struct device_node *np = NULL; 129 - struct resource res; 130 - unsigned int sirq = 0, hwirq, eirq; 131 - int ret; 132 - 133 - pr_debug("cpm_pic_init\n"); 134 - 135 - np = of_find_compatible_node(NULL, NULL, "fsl,cpm1-pic"); 136 - if (np == NULL) 137 - np = of_find_compatible_node(NULL, "cpm-pic", "CPM"); 138 - if (np == NULL) { 139 - printk(KERN_ERR "CPM PIC init: can not find cpm-pic node\n"); 140 - return sirq; 141 - } 142 - 143 - ret = of_address_to_resource(np, 0, &res); 144 - if (ret) 145 - goto end; 146 - 147 - cpic_reg = ioremap(res.start, resource_size(&res)); 148 - if (cpic_reg == NULL) 149 - goto end; 150 - 151 - sirq = irq_of_parse_and_map(np, 0); 152 - if (!sirq) 153 - goto end; 154 - 155 - /* Initialize the CPM interrupt controller. */ 156 - hwirq = (unsigned int)virq_to_hw(sirq); 157 - out_be32(&cpic_reg->cpic_cicr, 158 - (CICR_SCD_SCC4 | CICR_SCC_SCC3 | CICR_SCB_SCC2 | CICR_SCA_SCC1) | 159 - ((hwirq/2) << 13) | CICR_HP_MASK); 160 - 161 - out_be32(&cpic_reg->cpic_cimr, 0); 162 - 163 - cpm_pic_host = irq_domain_add_linear(np, 64, &cpm_pic_host_ops, NULL); 164 - if (cpm_pic_host == NULL) { 165 - printk(KERN_ERR "CPM2 PIC: failed to allocate irq host!\n"); 166 - sirq = 0; 167 - goto end; 168 - } 169 - 170 - /* Install our own error handler. */ 171 - np = of_find_compatible_node(NULL, NULL, "fsl,cpm1"); 172 - if (np == NULL) 173 - np = of_find_node_by_type(NULL, "cpm"); 174 - if (np == NULL) { 175 - printk(KERN_ERR "CPM PIC init: can not find cpm node\n"); 176 - goto end; 177 - } 178 - 179 - eirq = irq_of_parse_and_map(np, 0); 180 - if (!eirq) 181 - goto end; 182 - 183 - if (request_irq(eirq, cpm_error_interrupt, IRQF_NO_THREAD, "error", 184 - NULL)) 185 - printk(KERN_ERR "Could not allocate CPM error IRQ!"); 186 - 187 - setbits32(&cpic_reg->cpic_cicr, CICR_IEN); 188 - 189 - end: 190 - of_node_put(np); 191 - return sirq; 192 - } 193 54 194 55 void __init cpm_reset(void) 195 56 { ··· 141 280 out_be32(bp, (((BRG_UART_CLK_DIV16 / rate) - 1) << 1) | 142 281 CPM_BRG_EN | CPM_BRG_DIV16); 143 282 } 283 + EXPORT_SYMBOL(cpm_setbrg); 144 284 145 285 struct cpm_ioport16 { 146 286 __be16 dir, par, odr_sor, dat, intr;
+2 -1
arch/powerpc/platforms/8xx/ep88xc.c
··· 20 20 #include <asm/cpm1.h> 21 21 22 22 #include "mpc8xx.h" 23 + #include "pic.h" 23 24 24 25 struct cpm_pin { 25 26 int port, pin, flags; ··· 167 166 .name = "Embedded Planet EP88xC", 168 167 .probe = ep88xc_probe, 169 168 .setup_arch = ep88xc_setup_arch, 170 - .init_IRQ = mpc8xx_pics_init, 169 + .init_IRQ = mpc8xx_pic_init, 171 170 .get_irq = mpc8xx_get_irq, 172 171 .restart = mpc8xx_restart, 173 172 .calibrate_decr = mpc8xx_calibrate_decr,
+2 -29
arch/powerpc/platforms/8xx/m8xx_setup.c
··· 17 17 #include <linux/time.h> 18 18 #include <linux/rtc.h> 19 19 #include <linux/fsl_devices.h> 20 + #include <linux/of.h> 21 + #include <linux/of_irq.h> 20 22 21 23 #include <asm/io.h> 22 24 #include <asm/8xx_immap.h> 23 - #include <asm/prom.h> 24 25 #include <asm/fs_pd.h> 25 26 #include <mm/mmu_decl.h> 26 27 27 28 #include "pic.h" 28 29 29 30 #include "mpc8xx.h" 30 - 31 - extern int cpm_pic_init(void); 32 - extern int cpm_get_irq(void); 33 31 34 32 /* A place holder for time base interrupts, if they are ever enabled. */ 35 33 static irqreturn_t timebase_interrupt(int irq, void *dev) ··· 204 206 205 207 in_8(&clk_r->res[0]); 206 208 panic("Restart failed\n"); 207 - } 208 - 209 - static void cpm_cascade(struct irq_desc *desc) 210 - { 211 - generic_handle_irq(cpm_get_irq()); 212 - } 213 - 214 - /* Initialize the internal interrupt controllers. The number of 215 - * interrupts supported can vary with the processor type, and the 216 - * 82xx family can have up to 64. 217 - * External interrupts can be either edge or level triggered, and 218 - * need to be initialized by the appropriate driver. 219 - */ 220 - void __init mpc8xx_pics_init(void) 221 - { 222 - int irq; 223 - 224 - if (mpc8xx_pic_init()) { 225 - printk(KERN_ERR "Failed interrupt 8xx controller initialization\n"); 226 - return; 227 - } 228 - 229 - irq = cpm_pic_init(); 230 - if (irq) 231 - irq_set_chained_handler(irq, cpm_cascade); 232 209 }
+2 -1
arch/powerpc/platforms/8xx/mpc86xads_setup.c
··· 29 29 30 30 #include "mpc86xads.h" 31 31 #include "mpc8xx.h" 32 + #include "pic.h" 32 33 33 34 struct cpm_pin { 34 35 int port, pin, flags; ··· 141 140 .name = "MPC86x ADS", 142 141 .probe = mpc86xads_probe, 143 142 .setup_arch = mpc86xads_setup_arch, 144 - .init_IRQ = mpc8xx_pics_init, 143 + .init_IRQ = mpc8xx_pic_init, 145 144 .get_irq = mpc8xx_get_irq, 146 145 .restart = mpc8xx_restart, 147 146 .calibrate_decr = mpc8xx_calibrate_decr,
+2 -1
arch/powerpc/platforms/8xx/mpc885ads_setup.c
··· 42 42 43 43 #include "mpc885ads.h" 44 44 #include "mpc8xx.h" 45 + #include "pic.h" 45 46 46 47 static u32 __iomem *bcsr, *bcsr5; 47 48 ··· 217 216 .name = "Freescale MPC885 ADS", 218 217 .probe = mpc885ads_probe, 219 218 .setup_arch = mpc885ads_setup_arch, 220 - .init_IRQ = mpc8xx_pics_init, 219 + .init_IRQ = mpc8xx_pic_init, 221 220 .get_irq = mpc8xx_get_irq, 222 221 .restart = mpc8xx_restart, 223 222 .calibrate_decr = mpc8xx_calibrate_decr,
-1
arch/powerpc/platforms/8xx/mpc8xx.h
··· 15 15 extern void mpc8xx_calibrate_decr(void); 16 16 extern int mpc8xx_set_rtc_time(struct rtc_time *tm); 17 17 extern void mpc8xx_get_rtc_time(struct rtc_time *tm); 18 - extern void mpc8xx_pics_init(void); 19 18 extern unsigned int mpc8xx_get_irq(void); 20 19 21 20 #endif /* __MPC8xx_H */
+6 -14
arch/powerpc/platforms/8xx/pic.c
··· 4 4 #include <linux/signal.h> 5 5 #include <linux/irq.h> 6 6 #include <linux/dma-mapping.h> 7 - #include <asm/prom.h> 7 + #include <linux/of_address.h> 8 + #include <linux/of_irq.h> 8 9 #include <asm/irq.h> 9 10 #include <asm/io.h> 10 11 #include <asm/8xx_immap.h> ··· 14 13 15 14 16 15 #define PIC_VEC_SPURRIOUS 15 17 - 18 - extern int cpm_get_irq(struct pt_regs *regs); 19 16 20 17 static struct irq_domain *mpc8xx_pic_host; 21 18 static unsigned long mpc8xx_cached_irq_mask; ··· 124 125 .xlate = mpc8xx_pic_host_xlate, 125 126 }; 126 127 127 - int __init mpc8xx_pic_init(void) 128 + void __init mpc8xx_pic_init(void) 128 129 { 129 130 struct resource res; 130 131 struct device_node *np; ··· 135 136 np = of_find_node_by_type(NULL, "mpc8xx-pic"); 136 137 if (np == NULL) { 137 138 printk(KERN_ERR "Could not find fsl,pq1-pic node\n"); 138 - return -ENOMEM; 139 + return; 139 140 } 140 141 141 142 ret = of_address_to_resource(np, 0, &res); ··· 143 144 goto out; 144 145 145 146 siu_reg = ioremap(res.start, resource_size(&res)); 146 - if (siu_reg == NULL) { 147 - ret = -EINVAL; 147 + if (!siu_reg) 148 148 goto out; 149 - } 150 149 151 150 mpc8xx_pic_host = irq_domain_add_linear(np, 64, &mpc8xx_pic_host_ops, NULL); 152 - if (mpc8xx_pic_host == NULL) { 151 + if (!mpc8xx_pic_host) 153 152 printk(KERN_ERR "MPC8xx PIC: failed to allocate irq host!\n"); 154 - ret = -ENOMEM; 155 - goto out; 156 - } 157 153 158 - ret = 0; 159 154 out: 160 155 of_node_put(np); 161 - return ret; 162 156 }
+1 -1
arch/powerpc/platforms/8xx/pic.h
··· 4 4 #include <linux/irq.h> 5 5 #include <linux/interrupt.h> 6 6 7 - int mpc8xx_pic_init(void); 7 + void mpc8xx_pic_init(void); 8 8 unsigned int mpc8xx_get_irq(void); 9 9 10 10 /*
+2 -1
arch/powerpc/platforms/8xx/tqm8xx_setup.c
··· 43 43 #include <asm/udbg.h> 44 44 45 45 #include "mpc8xx.h" 46 + #include "pic.h" 46 47 47 48 struct cpm_pin { 48 49 int port, pin, flags; ··· 143 142 .name = "TQM8xx", 144 143 .probe = tqm8xx_probe, 145 144 .setup_arch = tqm8xx_setup_arch, 146 - .init_IRQ = mpc8xx_pics_init, 145 + .init_IRQ = mpc8xx_pic_init, 147 146 .get_irq = mpc8xx_get_irq, 148 147 .restart = mpc8xx_restart, 149 148 .calibrate_decr = mpc8xx_calibrate_decr,
+7 -4
arch/powerpc/platforms/Kconfig.cputype
··· 104 104 select HAVE_MOVE_PUD 105 105 select IRQ_WORK 106 106 select PPC_64S_HASH_MMU if !PPC_RADIX_MMU 107 + select KASAN_VMALLOC if KASAN 107 108 108 109 config PPC_BOOK3E_64 109 110 bool "Embedded processors" ··· 378 377 config PPC_64S_HASH_MMU 379 378 bool "Hash MMU Support" 380 379 depends on PPC_BOOK3S_64 381 - select PPC_MM_SLICES 382 380 default y 383 381 help 384 382 Enable support for the Power ISA Hash style MMU. This is implemented ··· 450 450 config PPC_BOOK3E_MMU 451 451 def_bool y 452 452 depends on FSL_BOOKE || PPC_BOOK3E 453 - 454 - config PPC_MM_SLICES 455 - bool 456 453 457 454 config PPC_HAVE_PMU_SUPPORT 458 455 bool ··· 552 555 little endian powerpc. 553 556 554 557 endchoice 558 + 559 + config PPC64_ELF_ABI_V1 560 + def_bool PPC64 && CPU_BIG_ENDIAN 561 + 562 + config PPC64_ELF_ABI_V2 563 + def_bool PPC64 && CPU_LITTLE_ENDIAN 555 564 556 565 config PPC64_BOOT_WRAPPER 557 566 def_bool n
+1
arch/powerpc/platforms/amigaone/setup.c
··· 8 8 * Copyright 2003 by Hans-Joerg Frieden and Thomas Frieden 9 9 */ 10 10 11 + #include <linux/irqdomain.h> 11 12 #include <linux/kernel.h> 12 13 #include <linux/of.h> 13 14 #include <linux/of_address.h>
+1 -1
arch/powerpc/platforms/book3s/vas-api.c
··· 30 30 * 31 31 * where "vas_copy" and "vas_paste" are defined in copy-paste.h. 32 32 * copy/paste returns to the user space directly. So refer NX hardware 33 - * documententation for exact copy/paste usage and completion / error 33 + * documentation for exact copy/paste usage and completion / error 34 34 * conditions. 35 35 */ 36 36
+1 -1
arch/powerpc/platforms/cell/axon_msi.c
··· 13 13 #include <linux/of_platform.h> 14 14 #include <linux/slab.h> 15 15 #include <linux/debugfs.h> 16 + #include <linux/of_irq.h> 16 17 17 18 #include <asm/dcr.h> 18 19 #include <asm/machdep.h> 19 - #include <asm/prom.h> 20 20 21 21 #include "cell.h" 22 22
+1 -1
arch/powerpc/platforms/cell/cbe_powerbutton.c
··· 9 9 10 10 #include <linux/input.h> 11 11 #include <linux/module.h> 12 + #include <linux/of.h> 12 13 #include <linux/platform_device.h> 13 14 #include <asm/pmi.h> 14 - #include <asm/prom.h> 15 15 16 16 static struct input_dev *button_dev; 17 17 static struct platform_device *button_pdev;
+2 -2
arch/powerpc/platforms/cell/cbe_regs.c
··· 10 10 #include <linux/percpu.h> 11 11 #include <linux/types.h> 12 12 #include <linux/export.h> 13 + #include <linux/of_address.h> 13 14 #include <linux/of_device.h> 14 15 #include <linux/of_platform.h> 15 16 #include <linux/pgtable.h> 16 17 17 18 #include <asm/io.h> 18 - #include <asm/prom.h> 19 19 #include <asm/ptrace.h> 20 20 #include <asm/cell-regs.h> 21 21 ··· 23 23 * Current implementation uses "cpu" nodes. We build our own mapping 24 24 * array of cpu numbers to cpu nodes locally for now to allow interrupt 25 25 * time code to have a fast path rather than call of_get_cpu_node(). If 26 - * we implement cpu hotplug, we'll have to install an appropriate norifier 26 + * we implement cpu hotplug, we'll have to install an appropriate notifier 27 27 * in order to release references to the cpu going away 28 28 */ 29 29 static struct cbe_regs_map
-1
arch/powerpc/platforms/cell/cbe_thermal.c
··· 39 39 #include <linux/stringify.h> 40 40 #include <asm/spu.h> 41 41 #include <asm/io.h> 42 - #include <asm/prom.h> 43 42 #include <asm/cell-regs.h> 44 43 45 44 #include "spu_priv1_mmio.h"
+2 -1
arch/powerpc/platforms/cell/interrupt.c
··· 18 18 19 19 #include <linux/interrupt.h> 20 20 #include <linux/irq.h> 21 + #include <linux/irqdomain.h> 21 22 #include <linux/export.h> 22 23 #include <linux/percpu.h> 23 24 #include <linux/types.h> 24 25 #include <linux/ioport.h> 25 26 #include <linux/kernel_stat.h> 26 27 #include <linux/pgtable.h> 28 + #include <linux/of_address.h> 27 29 28 30 #include <asm/io.h> 29 - #include <asm/prom.h> 30 31 #include <asm/ptrace.h> 31 32 #include <asm/machdep.h> 32 33 #include <asm/cell-regs.h>
+3 -1
arch/powerpc/platforms/cell/iommu.c
··· 12 12 #include <linux/kernel.h> 13 13 #include <linux/init.h> 14 14 #include <linux/interrupt.h> 15 + #include <linux/irqdomain.h> 15 16 #include <linux/notifier.h> 16 17 #include <linux/of.h> 18 + #include <linux/of_address.h> 17 19 #include <linux/of_platform.h> 18 20 #include <linux/slab.h> 19 21 #include <linux/memblock.h> ··· 584 582 { 585 583 struct device *dev = data; 586 584 587 - /* We are only intereted in device addition */ 585 + /* We are only interested in device addition */ 588 586 if (action != BUS_NOTIFY_ADD_DEVICE) 589 587 return 0; 590 588
-1
arch/powerpc/platforms/cell/pervasive.c
··· 19 19 20 20 #include <asm/io.h> 21 21 #include <asm/machdep.h> 22 - #include <asm/prom.h> 23 22 #include <asm/reg.h> 24 23 #include <asm/cell-regs.h> 25 24 #include <asm/cpu_has_feature.h>
+1 -1
arch/powerpc/platforms/cell/ras.c
··· 12 12 #include <linux/reboot.h> 13 13 #include <linux/kexec.h> 14 14 #include <linux/crash_dump.h> 15 + #include <linux/of.h> 15 16 16 17 #include <asm/kexec.h> 17 18 #include <asm/reg.h> 18 19 #include <asm/io.h> 19 - #include <asm/prom.h> 20 20 #include <asm/machdep.h> 21 21 #include <asm/rtas.h> 22 22 #include <asm/cell-regs.h>
-1
arch/powerpc/platforms/cell/setup.c
··· 31 31 #include <asm/mmu.h> 32 32 #include <asm/processor.h> 33 33 #include <asm/io.h> 34 - #include <asm/prom.h> 35 34 #include <asm/rtas.h> 36 35 #include <asm/pci-bridge.h> 37 36 #include <asm/iommu.h>
-1
arch/powerpc/platforms/cell/smp.c
··· 28 28 #include <asm/irq.h> 29 29 #include <asm/page.h> 30 30 #include <asm/io.h> 31 - #include <asm/prom.h> 32 31 #include <asm/smp.h> 33 32 #include <asm/paca.h> 34 33 #include <asm/machdep.h>
+2 -1
arch/powerpc/platforms/cell/spider-pci.c
··· 8 8 #undef DEBUG 9 9 10 10 #include <linux/kernel.h> 11 + #include <linux/of_address.h> 11 12 #include <linux/of_platform.h> 12 13 #include <linux/slab.h> 13 14 #include <linux/io.h> ··· 82 81 /* 83 82 * On CellBlade, we can't know that which XDR memory is used by 84 83 * kmalloc() to allocate dummy_page_va. 85 - * In order to imporve the performance, the XDR which is used to 84 + * In order to improve the performance, the XDR which is used to 86 85 * allocate dummy_page_va is the nearest the spider-pci. 87 86 * We have to select the CBE which is the nearest the spider-pci 88 87 * to allocate memory from the best XDR, but I don't know that
+2 -1
arch/powerpc/platforms/cell/spider-pic.c
··· 10 10 #include <linux/interrupt.h> 11 11 #include <linux/irq.h> 12 12 #include <linux/ioport.h> 13 + #include <linux/of_address.h> 14 + #include <linux/of_irq.h> 13 15 #include <linux/pgtable.h> 14 16 15 - #include <asm/prom.h> 16 17 #include <asm/io.h> 17 18 18 19 #include "interrupt.h"
-1
arch/powerpc/platforms/cell/spu_base.c
··· 24 24 #include <asm/spu_priv1.h> 25 25 #include <asm/spu_csa.h> 26 26 #include <asm/xmon.h> 27 - #include <asm/prom.h> 28 27 #include <asm/kexec.h> 29 28 30 29 const struct spu_management_ops *spu_management_ops;
+3 -2
arch/powerpc/platforms/cell/spu_manage.c
··· 16 16 #include <linux/io.h> 17 17 #include <linux/mutex.h> 18 18 #include <linux/device.h> 19 + #include <linux/of_address.h> 20 + #include <linux/of_irq.h> 19 21 20 22 #include <asm/spu.h> 21 23 #include <asm/spu_priv1.h> 22 24 #include <asm/firmware.h> 23 - #include <asm/prom.h> 24 25 25 26 #include "spufs/spufs.h" 26 27 #include "interrupt.h" ··· 458 457 459 458 /* 460 459 * Walk through each phandle in vicinity property of the spu 461 - * (tipically two vicinity phandles per spe node) 460 + * (typically two vicinity phandles per spe node) 462 461 */ 463 462 for (i = 0; i < (lenp / sizeof(phandle)); i++) { 464 463 if (vic_handles[i] == avoid_ph)
-1
arch/powerpc/platforms/cell/spu_priv1_mmio.c
··· 19 19 #include <asm/spu.h> 20 20 #include <asm/spu_priv1.h> 21 21 #include <asm/firmware.h> 22 - #include <asm/prom.h> 23 22 24 23 #include "interrupt.h" 25 24 #include "spu_priv1_mmio.h"
+1 -1
arch/powerpc/platforms/cell/spufs/inode.c
··· 21 21 #include <linux/namei.h> 22 22 #include <linux/pagemap.h> 23 23 #include <linux/poll.h> 24 + #include <linux/of.h> 24 25 #include <linux/seq_file.h> 25 26 #include <linux/slab.h> 26 27 27 - #include <asm/prom.h> 28 28 #include <asm/spu.h> 29 29 #include <asm/spu_priv1.h> 30 30 #include <linux/uaccess.h>
+1 -1
arch/powerpc/platforms/chrp/nvram.c
··· 10 10 #include <linux/init.h> 11 11 #include <linux/spinlock.h> 12 12 #include <linux/uaccess.h> 13 - #include <asm/prom.h> 13 + #include <linux/of.h> 14 14 #include <asm/machdep.h> 15 15 #include <asm/rtas.h> 16 16 #include "chrp.h"
+1 -1
arch/powerpc/platforms/chrp/pci.c
··· 9 9 #include <linux/string.h> 10 10 #include <linux/init.h> 11 11 #include <linux/pgtable.h> 12 + #include <linux/of_address.h> 12 13 13 14 #include <asm/io.h> 14 15 #include <asm/irq.h> 15 16 #include <asm/hydra.h> 16 - #include <asm/prom.h> 17 17 #include <asm/machdep.h> 18 18 #include <asm/sections.h> 19 19 #include <asm/pci-bridge.h>
+4 -2
arch/powerpc/platforms/chrp/setup.c
··· 32 32 #include <linux/root_dev.h> 33 33 #include <linux/initrd.h> 34 34 #include <linux/timer.h> 35 + #include <linux/of_address.h> 36 + #include <linux/of_fdt.h> 37 + #include <linux/of_irq.h> 35 38 36 39 #include <asm/io.h> 37 - #include <asm/prom.h> 38 40 #include <asm/pci-bridge.h> 39 41 #include <asm/dma.h> 40 42 #include <asm/machdep.h> ··· 253 251 * Per default, input/output-device points to the keyboard/screen 254 252 * If no card is installed, the built-in serial port is used as a fallback. 255 253 * But unfortunately, the firmware does not connect /chosen/{stdin,stdout} 256 - * the the built-in serial node. Instead, a /failsafe node is created. 254 + * to the built-in serial node. Instead, a /failsafe node is created. 257 255 */ 258 256 static __init void chrp_init(void) 259 257 {
-1
arch/powerpc/platforms/chrp/smp.c
··· 24 24 #include <asm/page.h> 25 25 #include <asm/sections.h> 26 26 #include <asm/io.h> 27 - #include <asm/prom.h> 28 27 #include <asm/smp.h> 29 28 #include <asm/machdep.h> 30 29 #include <asm/mpic.h>
+1 -3
arch/powerpc/platforms/chrp/time.c
··· 21 21 #include <linux/init.h> 22 22 #include <linux/bcd.h> 23 23 #include <linux/ioport.h> 24 + #include <linux/of_address.h> 24 25 25 26 #include <asm/io.h> 26 27 #include <asm/nvram.h> 27 - #include <asm/prom.h> 28 28 #include <asm/sections.h> 29 29 #include <asm/time.h> 30 30 31 31 #include <platforms/chrp/chrp.h> 32 - 33 - extern spinlock_t rtc_lock; 34 32 35 33 #define NVRAM_AS0 0x74 36 34 #define NVRAM_AS1 0x75
-1
arch/powerpc/platforms/embedded6xx/gamecube.c
··· 16 16 17 17 #include <asm/io.h> 18 18 #include <asm/machdep.h> 19 - #include <asm/prom.h> 20 19 #include <asm/time.h> 21 20 #include <asm/udbg.h> 22 21
+2 -1
arch/powerpc/platforms/embedded6xx/holly.c
··· 22 22 #include <linux/serial.h> 23 23 #include <linux/tty.h> 24 24 #include <linux/serial_core.h> 25 + #include <linux/of_address.h> 26 + #include <linux/of_irq.h> 25 27 #include <linux/of_platform.h> 26 28 #include <linux/extable.h> 27 29 28 30 #include <asm/time.h> 29 31 #include <asm/machdep.h> 30 - #include <asm/prom.h> 31 32 #include <asm/udbg.h> 32 33 #include <asm/tsi108.h> 33 34 #include <asm/pci-bridge.h>
-1
arch/powerpc/platforms/embedded6xx/linkstation.c
··· 15 15 #include <linux/of_platform.h> 16 16 17 17 #include <asm/time.h> 18 - #include <asm/prom.h> 19 18 #include <asm/mpic.h> 20 19 #include <asm/pci-bridge.h> 21 20
+1 -1
arch/powerpc/platforms/embedded6xx/ls_uart.c
··· 14 14 #include <linux/delay.h> 15 15 #include <linux/serial_reg.h> 16 16 #include <linux/serial_8250.h> 17 + #include <linux/of.h> 17 18 #include <asm/io.h> 18 - #include <asm/prom.h> 19 19 #include <asm/termbits.h> 20 20 21 21 #include "mpc10x.h"
+1 -1
arch/powerpc/platforms/embedded6xx/mpc7448_hpc2.c
··· 27 27 #include <linux/serial.h> 28 28 #include <linux/tty.h> 29 29 #include <linux/serial_core.h> 30 + #include <linux/of_irq.h> 30 31 31 32 #include <asm/time.h> 32 33 #include <asm/machdep.h> 33 - #include <asm/prom.h> 34 34 #include <asm/udbg.h> 35 35 #include <asm/tsi108.h> 36 36 #include <asm/pci-bridge.h>
+1 -1
arch/powerpc/platforms/embedded6xx/mvme5100.c
··· 12 12 * Author: Stephen Chivers <schivers@csc.com> 13 13 */ 14 14 15 + #include <linux/of_irq.h> 15 16 #include <linux/of_platform.h> 16 17 17 18 #include <asm/i8259.h> 18 19 #include <asm/pci-bridge.h> 19 20 #include <asm/mpic.h> 20 - #include <asm/prom.h> 21 21 #include <mm/mmu_decl.h> 22 22 #include <asm/udbg.h> 23 23
-1
arch/powerpc/platforms/embedded6xx/storcenter.c
··· 17 17 #include <linux/of_platform.h> 18 18 19 19 #include <asm/time.h> 20 - #include <asm/prom.h> 21 20 #include <asm/mpic.h> 22 21 #include <asm/pci-bridge.h> 23 22
+2 -1
arch/powerpc/platforms/embedded6xx/usbgecko_udbg.c
··· 7 7 * Copyright (C) 2008,2009 Albert Herranz 8 8 */ 9 9 10 + #include <linux/of_address.h> 11 + 10 12 #include <mm/mmu_decl.h> 11 13 12 14 #include <asm/io.h> 13 - #include <asm/prom.h> 14 15 #include <asm/udbg.h> 15 16 #include <asm/fixmap.h> 16 17
+1 -1
arch/powerpc/platforms/embedded6xx/wii.c
··· 13 13 #include <linux/init.h> 14 14 #include <linux/irq.h> 15 15 #include <linux/seq_file.h> 16 + #include <linux/of_address.h> 16 17 #include <linux/of_platform.h> 17 18 #include <linux/memblock.h> 18 19 #include <mm/mmu_decl.h> 19 20 20 21 #include <asm/io.h> 21 22 #include <asm/machdep.h> 22 - #include <asm/prom.h> 23 23 #include <asm/time.h> 24 24 #include <asm/udbg.h> 25 25
+1
arch/powerpc/platforms/fsl_uli1575.c
··· 10 10 #include <linux/pci.h> 11 11 #include <linux/interrupt.h> 12 12 #include <linux/mc146818rtc.h> 13 + #include <linux/of_irq.h> 13 14 14 15 #include <asm/pci-bridge.h> 15 16
+1 -1
arch/powerpc/platforms/maple/pci.c
··· 12 12 #include <linux/string.h> 13 13 #include <linux/init.h> 14 14 #include <linux/irq.h> 15 + #include <linux/of_irq.h> 15 16 16 17 #include <asm/sections.h> 17 18 #include <asm/io.h> 18 - #include <asm/prom.h> 19 19 #include <asm/pci-bridge.h> 20 20 #include <asm/machdep.h> 21 21 #include <asm/iommu.h>
+1 -1
arch/powerpc/platforms/maple/setup.c
··· 36 36 #include <linux/serial.h> 37 37 #include <linux/smp.h> 38 38 #include <linux/bitops.h> 39 + #include <linux/of_address.h> 39 40 #include <linux/of_device.h> 40 41 #include <linux/memblock.h> 41 42 42 43 #include <asm/processor.h> 43 44 #include <asm/sections.h> 44 - #include <asm/prom.h> 45 45 #include <asm/io.h> 46 46 #include <asm/pci-bridge.h> 47 47 #include <asm/iommu.h>
+1 -1
arch/powerpc/platforms/maple/time.c
··· 19 19 #include <linux/interrupt.h> 20 20 #include <linux/mc146818rtc.h> 21 21 #include <linux/bcd.h> 22 + #include <linux/of_address.h> 22 23 23 24 #include <asm/sections.h> 24 - #include <asm/prom.h> 25 25 #include <asm/io.h> 26 26 #include <asm/machdep.h> 27 27 #include <asm/time.h>
+2
arch/powerpc/platforms/pasemi/dma_lib.c
··· 10 10 #include <linux/pci.h> 11 11 #include <linux/slab.h> 12 12 #include <linux/of.h> 13 + #include <linux/of_address.h> 14 + #include <linux/of_irq.h> 13 15 #include <linux/sched.h> 14 16 15 17 #include <asm/pasemi_dma.h>
+1
arch/powerpc/platforms/pasemi/iommu.c
··· 11 11 #include <linux/types.h> 12 12 #include <linux/spinlock.h> 13 13 #include <linux/pci.h> 14 + #include <linux/of.h> 14 15 #include <asm/iommu.h> 15 16 #include <asm/machdep.h> 16 17 #include <asm/firmware.h>
+1
arch/powerpc/platforms/pasemi/misc.c
··· 11 11 #include <linux/kernel.h> 12 12 #include <linux/pci.h> 13 13 #include <linux/of.h> 14 + #include <linux/of_irq.h> 14 15 #include <linux/i2c.h> 15 16 16 17 #ifdef CONFIG_I2C_BOARDINFO
+1 -1
arch/powerpc/platforms/pasemi/msi.c
··· 9 9 */ 10 10 11 11 #include <linux/irq.h> 12 + #include <linux/irqdomain.h> 12 13 #include <linux/msi.h> 13 14 #include <asm/mpic.h> 14 - #include <asm/prom.h> 15 15 #include <asm/hw_irq.h> 16 16 #include <asm/ppc-pci.h> 17 17 #include <asm/msi_bitmap.h>
+1
arch/powerpc/platforms/pasemi/pci.c
··· 12 12 13 13 14 14 #include <linux/kernel.h> 15 + #include <linux/of_address.h> 15 16 #include <linux/pci.h> 16 17 17 18 #include <asm/pci-bridge.h>
+1 -1
arch/powerpc/platforms/pasemi/setup.c
··· 18 18 #include <linux/pci.h> 19 19 #include <linux/of_platform.h> 20 20 #include <linux/gfp.h> 21 + #include <linux/irqdomain.h> 21 22 22 - #include <asm/prom.h> 23 23 #include <asm/iommu.h> 24 24 #include <asm/machdep.h> 25 25 #include <asm/i8259.h>
-1
arch/powerpc/platforms/powermac/backlight.c
··· 15 15 #include <linux/pmu.h> 16 16 #include <linux/atomic.h> 17 17 #include <linux/export.h> 18 - #include <asm/prom.h> 19 18 #include <asm/backlight.h> 20 19 21 20 #define OLD_BACKLIGHT_MAX 15
+2 -1
arch/powerpc/platforms/powermac/bootx_init.c
··· 8 8 #include <linux/kernel.h> 9 9 #include <linux/string.h> 10 10 #include <linux/init.h> 11 + #include <linux/of_fdt.h> 11 12 #include <generated/utsrelease.h> 12 13 #include <asm/sections.h> 13 14 #include <asm/prom.h> ··· 244 243 DBG(" detected display ! adding properties names !\n"); 245 244 bootx_dt_add_string("linux,boot-display", mem_end); 246 245 bootx_dt_add_string("linux,opened", mem_end); 247 - strlcpy(bootx_disp_path, namep, sizeof(bootx_disp_path)); 246 + strscpy(bootx_disp_path, namep, sizeof(bootx_disp_path)); 248 247 } 249 248 250 249 /* get and store all property names */
-1
arch/powerpc/platforms/powermac/feature.c
··· 31 31 #include <asm/keylargo.h> 32 32 #include <asm/uninorth.h> 33 33 #include <asm/io.h> 34 - #include <asm/prom.h> 35 34 #include <asm/machdep.h> 36 35 #include <asm/pmac_feature.h> 37 36 #include <asm/dbdma.h>
+2 -2
arch/powerpc/platforms/powermac/low_i2c.c
··· 40 40 #include <linux/mutex.h> 41 41 #include <linux/i2c.h> 42 42 #include <linux/slab.h> 43 + #include <linux/of_irq.h> 43 44 #include <asm/keylargo.h> 44 45 #include <asm/uninorth.h> 45 46 #include <asm/io.h> 46 - #include <asm/prom.h> 47 47 #include <asm/machdep.h> 48 48 #include <asm/smu.h> 49 49 #include <asm/pmac_pfunc.h> ··· 1472 1472 smu_i2c_probe(); 1473 1473 #endif 1474 1474 1475 - /* Now add plaform functions for some known devices */ 1475 + /* Now add platform functions for some known devices */ 1476 1476 pmac_i2c_devscan(pmac_i2c_dev_create); 1477 1477 1478 1478 return 0;
+2 -2
arch/powerpc/platforms/powermac/nvram.c
··· 17 17 #include <linux/memblock.h> 18 18 #include <linux/completion.h> 19 19 #include <linux/spinlock.h> 20 + #include <linux/of_address.h> 20 21 #include <asm/sections.h> 21 22 #include <asm/io.h> 22 - #include <asm/prom.h> 23 23 #include <asm/machdep.h> 24 24 #include <asm/nvram.h> 25 25 ··· 71 71 static int nvram_naddrs; 72 72 static volatile unsigned char __iomem *nvram_data; 73 73 static int is_core_99; 74 - static int core99_bank = 0; 74 + static int core99_bank; 75 75 static int nvram_partitions[3]; 76 76 // XXX Turn that into a sem 77 77 static DEFINE_RAW_SPINLOCK(nv_lock);
+2 -1
arch/powerpc/platforms/powermac/pci.c
··· 12 12 #include <linux/string.h> 13 13 #include <linux/init.h> 14 14 #include <linux/irq.h> 15 + #include <linux/of_address.h> 16 + #include <linux/of_irq.h> 15 17 #include <linux/of_pci.h> 16 18 17 19 #include <asm/sections.h> 18 20 #include <asm/io.h> 19 - #include <asm/prom.h> 20 21 #include <asm/pci-bridge.h> 21 22 #include <asm/machdep.h> 22 23 #include <asm/pmac_feature.h>
+2 -2
arch/powerpc/platforms/powermac/pfunc_core.c
··· 12 12 #include <linux/slab.h> 13 13 #include <linux/module.h> 14 14 #include <linux/mutex.h> 15 + #include <linux/of.h> 15 16 16 - #include <asm/prom.h> 17 17 #include <asm/pmac_pfunc.h> 18 18 19 19 /* Debug */ ··· 685 685 const int plen = strlen(PP_PREFIX); 686 686 int count = 0; 687 687 688 - for (pp = dev->node->properties; pp != 0; pp = pp->next) { 688 + for_each_property_of_node(dev->node, pp) { 689 689 const char *name; 690 690 if (strncmp(pp->name, PP_PREFIX, plen) != 0) 691 691 continue;
+4 -2
arch/powerpc/platforms/powermac/pic.c
··· 20 20 #include <linux/adb.h> 21 21 #include <linux/minmax.h> 22 22 #include <linux/pmu.h> 23 + #include <linux/irqdomain.h> 24 + #include <linux/of_address.h> 25 + #include <linux/of_irq.h> 23 26 24 27 #include <asm/sections.h> 25 28 #include <asm/io.h> 26 29 #include <asm/smp.h> 27 - #include <asm/prom.h> 28 30 #include <asm/pci-bridge.h> 29 31 #include <asm/time.h> 30 32 #include <asm/pmac_feature.h> ··· 384 382 #endif 385 383 } 386 384 387 - int of_irq_parse_oldworld(struct device_node *device, int index, 385 + int of_irq_parse_oldworld(const struct device_node *device, int index, 388 386 struct of_phandle_args *out_irq) 389 387 { 390 388 const u32 *ints = NULL;
+2
arch/powerpc/platforms/powermac/pmac.h
··· 16 16 17 17 extern int pmac_newworld; 18 18 19 + void g5_phy_disable_cpu1(void); 20 + 19 21 extern long pmac_time_init(void); 20 22 extern time64_t pmac_get_boot_time(void); 21 23 extern void pmac_get_rtc_time(struct rtc_time *);
-5
arch/powerpc/platforms/powermac/setup.c
··· 50 50 51 51 #include <asm/reg.h> 52 52 #include <asm/sections.h> 53 - #include <asm/prom.h> 54 53 #include <asm/io.h> 55 54 #include <asm/pci-bridge.h> 56 55 #include <asm/ohare.h> ··· 79 80 static int current_root_goodness = -1; 80 81 81 82 #define DEFAULT_ROOT_DEVICE Root_SDA1 /* sda1 - slightly silly choice */ 82 - 83 - #ifdef CONFIG_PPC64 84 - int sccdbg; 85 - #endif 86 83 87 84 sys_ctrler_t sys_ctrler = SYS_CTRLER_UNKNOWN; 88 85 EXPORT_SYMBOL(sys_ctrler);
+1 -3
arch/powerpc/platforms/powermac/smp.c
··· 22 22 #include <linux/sched/hotplug.h> 23 23 #include <linux/smp.h> 24 24 #include <linux/interrupt.h> 25 + #include <linux/irqdomain.h> 25 26 #include <linux/kernel_stat.h> 26 27 #include <linux/delay.h> 27 28 #include <linux/init.h> ··· 40 39 #include <asm/page.h> 41 40 #include <asm/sections.h> 42 41 #include <asm/io.h> 43 - #include <asm/prom.h> 44 42 #include <asm/smp.h> 45 43 #include <asm/machdep.h> 46 44 #include <asm/pmac_feature.h> ··· 875 875 876 876 static void __init smp_core99_bringup_done(void) 877 877 { 878 - extern void __init g5_phy_disable_cpu1(void); 879 - 880 878 /* Close i2c bus if it was used for tb sync */ 881 879 if (pmac_tb_clock_chip_host) 882 880 pmac_i2c_close(pmac_tb_clock_chip_host);
+1 -1
arch/powerpc/platforms/powermac/time.c
··· 24 24 #include <linux/interrupt.h> 25 25 #include <linux/hardirq.h> 26 26 #include <linux/rtc.h> 27 + #include <linux/of_address.h> 27 28 28 29 #include <asm/sections.h> 29 - #include <asm/prom.h> 30 30 #include <asm/io.h> 31 31 #include <asm/machdep.h> 32 32 #include <asm/time.h>
+1 -1
arch/powerpc/platforms/powermac/udbg_adb.c
··· 7 7 #include <linux/adb.h> 8 8 #include <linux/pmu.h> 9 9 #include <linux/cuda.h> 10 + #include <linux/of.h> 10 11 #include <asm/machdep.h> 11 12 #include <asm/io.h> 12 13 #include <asm/page.h> 13 14 #include <asm/xmon.h> 14 - #include <asm/prom.h> 15 15 #include <asm/bootx.h> 16 16 #include <asm/errno.h> 17 17 #include <asm/pmac_feature.h>
+1 -1
arch/powerpc/platforms/powermac/udbg_scc.c
··· 5 5 * Copyright (C) 2001-2005 PPC 64 Team, IBM Corp 6 6 */ 7 7 #include <linux/types.h> 8 + #include <linux/of.h> 8 9 #include <asm/udbg.h> 9 10 #include <asm/processor.h> 10 11 #include <asm/io.h> 11 - #include <asm/prom.h> 12 12 #include <asm/pmac_feature.h> 13 13 14 14 extern u8 real_readb(volatile u8 __iomem *addr);
+8
arch/powerpc/platforms/powernv/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0 2 + 3 + # nothing that deals with real mode is safe to KASAN 4 + # in particular, idle code runs a bunch of things in real mode 5 + KASAN_SANITIZE_idle.o := n 6 + KASAN_SANITIZE_pci-ioda.o := n 7 + # pnv_machine_check_early 8 + KASAN_SANITIZE_setup.o := n 9 + 2 10 obj-y += setup.o opal-call.o opal-wrappers.o opal.o opal-async.o 3 11 obj-y += idle.o opal-rtc.o opal-nvram.o opal-lpc.o opal-flash.o 4 12 obj-y += rng.o opal-elog.o opal-dump.o opal-sysparam.o opal-sensor.o
+6 -23
arch/powerpc/platforms/powernv/eeh-powernv.c
··· 11 11 #include <linux/export.h> 12 12 #include <linux/init.h> 13 13 #include <linux/interrupt.h> 14 + #include <linux/irqdomain.h> 14 15 #include <linux/list.h> 15 16 #include <linux/msi.h> 16 17 #include <linux/of.h> ··· 391 390 * should be blocked until PE reset. MMIO access is dropped 392 391 * by hardware certainly. In order to drop PCI config requests, 393 392 * one more flag (EEH_PE_CFG_RESTRICTED) is introduced, which 394 - * will be checked in the backend for PE state retrival. If 393 + * will be checked in the backend for PE state retrieval. If 395 394 * the PE becomes frozen for the first time and the flag has 396 395 * been set for the PE, we will set EEH_PE_CFG_BLOCKED for 397 396 * that PE to block its config space. ··· 982 981 case EEH_RESET_FUNDAMENTAL: 983 982 /* 984 983 * Wait for Transaction Pending bit to clear. A word-aligned 985 - * test is used, so we use the conrol offset rather than status 984 + * test is used, so we use the control offset rather than status 986 985 * and shift the test bit to match. 987 986 */ 988 987 pnv_eeh_wait_for_pending(pdn, "AF", ··· 1049 1048 * frozen state during PE reset. However, the good idea here from 1050 1049 * benh is to keep frozen state before we get PE reset done completely 1051 1050 * (until BAR restore). With the frozen state, HW drops illegal IO 1052 - * or MMIO access, which can incur recrusive frozen PE during PE 1051 + * or MMIO access, which can incur recursive frozen PE during PE 1053 1052 * reset. The side effect is that EEH core has to clear the frozen 1054 1053 * state explicitly after BAR restore. 1055 1054 */ ··· 1096 1095 * bus is behind a hotplug slot and it will use the slot provided 1097 1096 * reset methods to prevent spurious hotplug events during the reset. 
1098 1097 * 1099 - * Fundemental resets need to be handled internally to EEH since the 1100 - * PCI core doesn't really have a concept of a fundemental reset, 1098 + * Fundamental resets need to be handled internally to EEH since the 1099 + * PCI core doesn't really have a concept of a fundamental reset, 1101 1100 * mainly because there's no standard way to generate one. Only a 1102 1101 * few devices require an FRESET so it should be fine. 1103 1102 */ ··· 1640 1639 .restore_config = pnv_eeh_restore_config, 1641 1640 .notify_resume = NULL 1642 1641 }; 1643 - 1644 - #ifdef CONFIG_PCI_IOV 1645 - static void pnv_pci_fixup_vf_mps(struct pci_dev *pdev) 1646 - { 1647 - struct pci_dn *pdn = pci_get_pdn(pdev); 1648 - int parent_mps; 1649 - 1650 - if (!pdev->is_virtfn) 1651 - return; 1652 - 1653 - /* Synchronize MPS for VF and PF */ 1654 - parent_mps = pcie_get_mps(pdev->physfn); 1655 - if ((128 << pdev->pcie_mpss) >= parent_mps) 1656 - pcie_set_mps(pdev, parent_mps); 1657 - pdn->mps = pcie_get_mps(pdev); 1658 - } 1659 - DECLARE_PCI_FIXUP_HEADER(PCI_ANY_ID, PCI_ANY_ID, pnv_pci_fixup_vf_mps); 1660 - #endif /* CONFIG_PCI_IOV */ 1661 1642 1662 1643 /** 1663 1644 * eeh_powernv_init - Register platform dependent EEH operations
+2 -2
arch/powerpc/platforms/powernv/idle.c
··· 112 112 if (rc != 0) 113 113 return rc; 114 114 115 - /* Only p8 needs to set extra HID regiters */ 115 + /* Only p8 needs to set extra HID registers */ 116 116 if (!cpu_has_feature(CPU_FTR_ARCH_300)) { 117 117 uint64_t hid1_val = mfspr(SPRN_HID1); 118 118 uint64_t hid4_val = mfspr(SPRN_HID4); ··· 1204 1204 * The idle code does not deal with TB loss occurring 1205 1205 * in a shallower state than SPR loss, so force it to 1206 1206 * behave like SPRs are lost if TB is lost. POWER9 would 1207 - * never encouter this, but a POWER8 core would if it 1207 + * never encounter this, but a POWER8 core would if it 1208 1208 * implemented the stop instruction. So this is for forward 1209 1209 * compatibility. 1210 1210 */
+1 -1
arch/powerpc/platforms/powernv/ocxl.c
··· 289 289 * be used by a function depends on how many functions exist 290 290 * on the device. The NPU needs to be configured to know how 291 291 * many bits are available to PASIDs and how many are to be 292 - * used by the function BDF indentifier. 292 + * used by the function BDF identifier. 293 293 * 294 294 * We only support one AFU-carrying function for now. 295 295 */
+56 -46
arch/powerpc/platforms/powernv/opal-fadump.c
··· 60 60 addr = be64_to_cpu(addr); 61 61 pr_debug("Kernel metadata addr: %llx\n", addr); 62 62 opal_fdm_active = (void *)addr; 63 - if (opal_fdm_active->registered_regions == 0) 63 + if (be16_to_cpu(opal_fdm_active->registered_regions) == 0) 64 64 return; 65 65 66 66 ret = opal_mpipl_query_tag(OPAL_MPIPL_TAG_BOOT_MEM, &addr); ··· 95 95 static void opal_fadump_update_config(struct fw_dump *fadump_conf, 96 96 const struct opal_fadump_mem_struct *fdm) 97 97 { 98 - pr_debug("Boot memory regions count: %d\n", fdm->region_cnt); 98 + pr_debug("Boot memory regions count: %d\n", be16_to_cpu(fdm->region_cnt)); 99 99 100 100 /* 101 101 * The destination address of the first boot memory region is the 102 102 * destination address of boot memory regions. 103 103 */ 104 - fadump_conf->boot_mem_dest_addr = fdm->rgn[0].dest; 104 + fadump_conf->boot_mem_dest_addr = be64_to_cpu(fdm->rgn[0].dest); 105 105 pr_debug("Destination address of boot memory regions: %#016llx\n", 106 106 fadump_conf->boot_mem_dest_addr); 107 107 108 - fadump_conf->fadumphdr_addr = fdm->fadumphdr_addr; 108 + fadump_conf->fadumphdr_addr = be64_to_cpu(fdm->fadumphdr_addr); 109 109 } 110 110 111 111 /* ··· 126 126 fadump_conf->boot_memory_size = 0; 127 127 128 128 pr_debug("Boot memory regions:\n"); 129 - for (i = 0; i < fdm->region_cnt; i++) { 130 - base = fdm->rgn[i].src; 131 - size = fdm->rgn[i].size; 129 + for (i = 0; i < be16_to_cpu(fdm->region_cnt); i++) { 130 + base = be64_to_cpu(fdm->rgn[i].src); 131 + size = be64_to_cpu(fdm->rgn[i].size); 132 132 pr_debug("\t[%03d] base: 0x%lx, size: 0x%lx\n", i, base, size); 133 133 134 134 fadump_conf->boot_mem_addr[i] = base; ··· 143 143 * Start address of reserve dump area (permanent reservation) for 144 144 * re-registering FADump after dump capture. 
145 145 */ 146 - fadump_conf->reserve_dump_area_start = fdm->rgn[0].dest; 146 + fadump_conf->reserve_dump_area_start = be64_to_cpu(fdm->rgn[0].dest); 147 147 148 148 /* 149 149 * Rarely, but it can so happen that system crashes before all ··· 155 155 * Hope the memory that could not be preserved only has pages 156 156 * that are usually filtered out while saving the vmcore. 157 157 */ 158 - if (fdm->region_cnt > fdm->registered_regions) { 158 + if (be16_to_cpu(fdm->region_cnt) > be16_to_cpu(fdm->registered_regions)) { 159 159 pr_warn("Not all memory regions were saved!!!\n"); 160 160 pr_warn(" Unsaved memory regions:\n"); 161 - i = fdm->registered_regions; 162 - while (i < fdm->region_cnt) { 161 + i = be16_to_cpu(fdm->registered_regions); 162 + while (i < be16_to_cpu(fdm->region_cnt)) { 163 163 pr_warn("\t[%03d] base: 0x%llx, size: 0x%llx\n", 164 - i, fdm->rgn[i].src, fdm->rgn[i].size); 164 + i, be64_to_cpu(fdm->rgn[i].src), 165 + be64_to_cpu(fdm->rgn[i].size)); 165 166 i++; 166 167 } 167 168 ··· 171 170 } 172 171 173 172 fadump_conf->boot_mem_top = (fadump_conf->boot_memory_size + hole_size); 174 - fadump_conf->boot_mem_regs_cnt = fdm->region_cnt; 173 + fadump_conf->boot_mem_regs_cnt = be16_to_cpu(fdm->region_cnt); 175 174 opal_fadump_update_config(fadump_conf, fdm); 176 175 } 177 176 ··· 179 178 static void opal_fadump_init_metadata(struct opal_fadump_mem_struct *fdm) 180 179 { 181 180 fdm->version = OPAL_FADUMP_VERSION; 182 - fdm->region_cnt = 0; 183 - fdm->registered_regions = 0; 184 - fdm->fadumphdr_addr = 0; 181 + fdm->region_cnt = cpu_to_be16(0); 182 + fdm->registered_regions = cpu_to_be16(0); 183 + fdm->fadumphdr_addr = cpu_to_be64(0); 185 184 } 186 185 187 186 static u64 opal_fadump_init_mem_struct(struct fw_dump *fadump_conf) 188 187 { 189 188 u64 addr = fadump_conf->reserve_dump_area_start; 189 + u16 reg_cnt; 190 190 int i; 191 191 192 192 opal_fdm = __va(fadump_conf->kernel_metadata); 193 193 opal_fadump_init_metadata(opal_fdm); 194 194 195 195 /* Boot 
memory regions */ 196 + reg_cnt = be16_to_cpu(opal_fdm->region_cnt); 196 197 for (i = 0; i < fadump_conf->boot_mem_regs_cnt; i++) { 197 - opal_fdm->rgn[i].src = fadump_conf->boot_mem_addr[i]; 198 - opal_fdm->rgn[i].dest = addr; 199 - opal_fdm->rgn[i].size = fadump_conf->boot_mem_sz[i]; 198 + opal_fdm->rgn[i].src = cpu_to_be64(fadump_conf->boot_mem_addr[i]); 199 + opal_fdm->rgn[i].dest = cpu_to_be64(addr); 200 + opal_fdm->rgn[i].size = cpu_to_be64(fadump_conf->boot_mem_sz[i]); 200 201 201 - opal_fdm->region_cnt++; 202 + reg_cnt++; 202 203 addr += fadump_conf->boot_mem_sz[i]; 203 204 } 205 + opal_fdm->region_cnt = cpu_to_be16(reg_cnt); 204 206 205 207 /* 206 - * Kernel metadata is passed to f/w and retrieved in capture kerenl. 208 + * Kernel metadata is passed to f/w and retrieved in capture kernel. 207 209 * So, use it to save fadump header address instead of calculating it. 208 210 */ 209 - opal_fdm->fadumphdr_addr = (opal_fdm->rgn[0].dest + 210 - fadump_conf->boot_memory_size); 211 + opal_fdm->fadumphdr_addr = cpu_to_be64(be64_to_cpu(opal_fdm->rgn[0].dest) + 212 + fadump_conf->boot_memory_size); 211 213 212 214 opal_fadump_update_config(fadump_conf, opal_fdm); 213 215 ··· 273 269 static int opal_fadump_register(struct fw_dump *fadump_conf) 274 270 { 275 271 s64 rc = OPAL_PARAMETER; 272 + u16 registered_regs; 276 273 int i, err = -EIO; 277 274 278 - for (i = 0; i < opal_fdm->region_cnt; i++) { 275 + registered_regs = be16_to_cpu(opal_fdm->registered_regions); 276 + for (i = 0; i < be16_to_cpu(opal_fdm->region_cnt); i++) { 279 277 rc = opal_mpipl_update(OPAL_MPIPL_ADD_RANGE, 280 - opal_fdm->rgn[i].src, 281 - opal_fdm->rgn[i].dest, 282 - opal_fdm->rgn[i].size); 278 + be64_to_cpu(opal_fdm->rgn[i].src), 279 + be64_to_cpu(opal_fdm->rgn[i].dest), 280 + be64_to_cpu(opal_fdm->rgn[i].size)); 283 281 if (rc != OPAL_SUCCESS) 284 282 break; 285 283 286 - opal_fdm->registered_regions++; 284 + registered_regs++; 287 285 } 286 + opal_fdm->registered_regions = 
cpu_to_be16(registered_regs); 288 287 289 288 switch (rc) { 290 289 case OPAL_SUCCESS: ··· 298 291 case OPAL_RESOURCE: 299 292 /* If MAX regions limit in f/w is hit, warn and proceed. */ 300 293 pr_warn("%d regions could not be registered for MPIPL as MAX limit is reached!\n", 301 - (opal_fdm->region_cnt - opal_fdm->registered_regions)); 294 + (be16_to_cpu(opal_fdm->region_cnt) - 295 + be16_to_cpu(opal_fdm->registered_regions))); 302 296 fadump_conf->dump_registered = 1; 303 297 err = 0; 304 298 break; ··· 320 312 * If some regions were registered before OPAL_MPIPL_ADD_RANGE 321 313 * OPAL call failed, unregister all regions. 322 314 */ 323 - if ((err < 0) && (opal_fdm->registered_regions > 0)) 315 + if ((err < 0) && (be16_to_cpu(opal_fdm->registered_regions) > 0)) 324 316 opal_fadump_unregister(fadump_conf); 325 317 326 318 return err; ··· 336 328 return -EIO; 337 329 } 338 330 339 - opal_fdm->registered_regions = 0; 331 + opal_fdm->registered_regions = cpu_to_be16(0); 340 332 fadump_conf->dump_registered = 0; 341 333 return 0; 342 334 } ··· 571 563 else 572 564 fdm_ptr = opal_fdm; 573 565 574 - for (i = 0; i < fdm_ptr->region_cnt; i++) { 566 + for (i = 0; i < be16_to_cpu(fdm_ptr->region_cnt); i++) { 575 567 /* 576 568 * Only regions that are registered for MPIPL 577 569 * would have dump data. 578 570 */ 579 571 if ((fadump_conf->dump_active) && 580 - (i < fdm_ptr->registered_regions)) 581 - dumped_bytes = fdm_ptr->rgn[i].size; 572 + (i < be16_to_cpu(fdm_ptr->registered_regions))) 573 + dumped_bytes = be64_to_cpu(fdm_ptr->rgn[i].size); 582 574 583 575 seq_printf(m, "DUMP: Src: %#016llx, Dest: %#016llx, ", 584 - fdm_ptr->rgn[i].src, fdm_ptr->rgn[i].dest); 576 + be64_to_cpu(fdm_ptr->rgn[i].src), 577 + be64_to_cpu(fdm_ptr->rgn[i].dest)); 585 578 seq_printf(m, "Size: %#llx, Dumped: %#llx bytes\n", 586 - fdm_ptr->rgn[i].size, dumped_bytes); 579 + be64_to_cpu(fdm_ptr->rgn[i].size), dumped_bytes); 587 580 } 588 581 589 - /* Dump is active. 
Show reserved area start address. */ 582 + /* Dump is active. Show preserved area start address. */ 590 583 if (fadump_conf->dump_active) { 591 - seq_printf(m, "\nMemory above %#016lx is reserved for saving crash dump\n", 592 - fadump_conf->reserve_dump_area_start); 584 + seq_printf(m, "\nMemory above %#016llx is reserved for saving crash dump\n", 585 + fadump_conf->boot_mem_top); 593 586 } 594 587 } 595 588 ··· 633 624 { 634 625 const __be32 *prop; 635 626 unsigned long dn; 627 + __be64 be_addr; 636 628 u64 addr = 0; 637 629 int i, len; 638 630 s64 ret; ··· 690 680 if (!prop) 691 681 return; 692 682 693 - ret = opal_mpipl_query_tag(OPAL_MPIPL_TAG_KERNEL, &addr); 694 - if ((ret != OPAL_SUCCESS) || !addr) { 683 + ret = opal_mpipl_query_tag(OPAL_MPIPL_TAG_KERNEL, &be_addr); 684 + if ((ret != OPAL_SUCCESS) || !be_addr) { 695 685 pr_err("Failed to get Kernel metadata (%lld)\n", ret); 696 686 return; 697 687 } 698 688 699 - addr = be64_to_cpu(addr); 689 + addr = be64_to_cpu(be_addr); 700 690 pr_debug("Kernel metadata addr: %llx\n", addr); 701 691 702 692 opal_fdm_active = __va(addr); ··· 707 697 } 708 698 709 699 /* Kernel regions not registered with f/w for MPIPL */ 710 - if (opal_fdm_active->registered_regions == 0) { 700 + if (be16_to_cpu(opal_fdm_active->registered_regions) == 0) { 711 701 opal_fdm_active = NULL; 712 702 return; 713 703 } 714 704 715 - ret = opal_mpipl_query_tag(OPAL_MPIPL_TAG_CPU, &addr); 716 - if (addr) { 717 - addr = be64_to_cpu(addr); 705 + ret = opal_mpipl_query_tag(OPAL_MPIPL_TAG_CPU, &be_addr); 706 + if (be_addr) { 707 + addr = be64_to_cpu(be_addr); 718 708 pr_debug("CPU metadata addr: %llx\n", addr); 719 709 opal_cpu_metadata = __va(addr); 720 710 }
+5 -5
arch/powerpc/platforms/powernv/opal-fadump.h
···
  * OPAL FADump kernel metadata
  *
  * The address of this structure will be registered with f/w for retrieving
- * and processing during crash dump.
+ * in the capture kernel to process the crash dump.
  */
 struct opal_fadump_mem_struct {
 	u8	version;
 	u8	reserved[3];
-	u16	region_cnt;		/* number of regions */
-	u16	registered_regions;	/* Regions registered for MPIPL */
-	u64	fadumphdr_addr;
+	__be16	region_cnt;		/* number of regions */
+	__be16	registered_regions;	/* Regions registered for MPIPL */
+	__be64	fadumphdr_addr;
 	struct opal_mpipl_region rgn[FADUMP_MAX_MEM_REGS];
 } __packed;
···
 	for (i = 0; i < regs_cnt; i++, bufp += reg_entry_size) {
 		reg_entry = (struct hdat_fadump_reg_entry *)bufp;
 		val = (cpu_endian ? be64_to_cpu(reg_entry->reg_val) :
-		       reg_entry->reg_val);
+		       (u64)(reg_entry->reg_val));
 		opal_fadump_set_regval_regnum(regs,
 					      be32_to_cpu(reg_entry->reg_type),
 					      be32_to_cpu(reg_entry->reg_num),
+4
arch/powerpc/platforms/powernv/opal-flash.c
···
 {
 	int ret;

+	/* Firmware update is not supported by firmware */
+	if (!opal_check_token(OPAL_FLASH_VALIDATE))
+		return;
+
 	/* Allocate validate image buffer */
 	validate_flash_data.buf = kzalloc(VALIDATE_BUF_SIZE, GFP_KERNEL);
 	if (!validate_flash_data.buf) {
+1 -1
arch/powerpc/platforms/powernv/opal-imc.c
···
 				       get_hard_smp_processor_id(cpu));
 		if (rc)
 			pr_err("%s: Failed to stop Core (cpu = %d)\n",
-			       __FUNCTION__, cpu);
+			       __func__, cpu);
 	}
 	cpus_read_unlock();
 }
+1 -1
arch/powerpc/platforms/powernv/opal-lpc.c
···
 	/*
 	 * Select access size based on count and alignment and
-	 * access type. IO and MEM only support byte acceses,
+	 * access type. IO and MEM only support byte accesses,
 	 * FW supports all 3.
 	 */
 	len = 1;
+1 -1
arch/powerpc/platforms/powernv/opal-memory-errors.c
···
 /*
  * opal_memory_err_event - notifier handler that queues up the opal message
- * to be preocessed later.
+ * to be processed later.
  */
 static int opal_memory_err_event(struct notifier_block *nb,
 				 unsigned long msg_type, void *msg)
+1
arch/powerpc/platforms/powernv/pci-cxl.c
···
  */

 #include <linux/module.h>
+#include <misc/cxl-base.h>
 #include <asm/pnv-pci.h>
 #include <asm/opal.h>
+2 -3
arch/powerpc/platforms/powernv/pci-ioda-tce.c
···
 #ifdef CONFIG_IOMMU_API
 int pnv_tce_xchg(struct iommu_table *tbl, long index,
-		unsigned long *hpa, enum dma_data_direction *direction,
-		bool alloc)
+		unsigned long *hpa, enum dma_data_direction *direction)
 {
 	u64 proto_tce = iommu_direction_to_tce_perm(*direction);
 	unsigned long newtce = *hpa | proto_tce, oldtce;
···
 	}

 	if (!ptce) {
-		ptce = pnv_tce(tbl, false, idx, alloc);
+		ptce = pnv_tce(tbl, false, idx, true);
 		if (!ptce)
 			return -ENOMEM;
 	}
+21 -30
arch/powerpc/platforms/powernv/pci-ioda.c
···
 #include <linux/rculist.h>
 #include <linux/sizes.h>
 #include <linux/debugfs.h>
+#include <linux/of_address.h>
+#include <linux/of_irq.h>

 #include <asm/sections.h>
 #include <asm/io.h>
-#include <asm/prom.h>
 #include <asm/pci-bridge.h>
 #include <asm/machdep.h>
 #include <asm/msi_bitmap.h>
···
 	return false;
 }

-static inline __be64 __iomem *pnv_ioda_get_inval_reg(struct pnv_phb *phb,
-						     bool real_mode)
+static inline __be64 __iomem *pnv_ioda_get_inval_reg(struct pnv_phb *phb)
 {
-	return real_mode ? (__be64 __iomem *)(phb->regs_phys + 0x210) :
-		(phb->regs + 0x210);
+	return phb->regs + 0x210;
 }

 static void pnv_pci_p7ioc_tce_invalidate(struct iommu_table *tbl,
-		unsigned long index, unsigned long npages, bool rm)
+		unsigned long index, unsigned long npages)
 {
 	struct iommu_table_group_link *tgl = list_first_entry_or_null(
 			&tbl->it_group_list, struct iommu_table_group_link,
 			next);
 	struct pnv_ioda_pe *pe = container_of(tgl->table_group,
 			struct pnv_ioda_pe, table_group);
-	__be64 __iomem *invalidate = pnv_ioda_get_inval_reg(pe->phb, rm);
+	__be64 __iomem *invalidate = pnv_ioda_get_inval_reg(pe->phb);
 	unsigned long start, end, inc;

 	start = __pa(((__be64 *)tbl->it_base) + index - tbl->it_offset);
···
 	mb(); /* Ensure above stores are visible */
 	while (start <= end) {
-		if (rm)
-			__raw_rm_writeq_be(start, invalidate);
-		else
-			__raw_writeq_be(start, invalidate);
-
+		__raw_writeq_be(start, invalidate);
 		start += inc;
 	}
···
 			attrs);

 	if (!ret)
-		pnv_pci_p7ioc_tce_invalidate(tbl, index, npages, false);
+		pnv_pci_p7ioc_tce_invalidate(tbl, index, npages);

 	return ret;
 }
···
 #ifdef CONFIG_IOMMU_API
 /* Common for IODA1 and IODA2 */
 static int pnv_ioda_tce_xchg_no_kill(struct iommu_table *tbl, long index,
-		unsigned long *hpa, enum dma_data_direction *direction,
-		bool realmode)
+		unsigned long *hpa, enum dma_data_direction *direction)
 {
-	return pnv_tce_xchg(tbl, index, hpa, direction, !realmode);
+	return pnv_tce_xchg(tbl, index, hpa, direction);
 }
 #endif
···
 {
 	pnv_tce_free(tbl, index, npages);

-	pnv_pci_p7ioc_tce_invalidate(tbl, index, npages, false);
+	pnv_pci_p7ioc_tce_invalidate(tbl, index, npages);
 }

 static struct iommu_table_ops pnv_ioda1_iommu_ops = {
···
 static inline void pnv_pci_phb3_tce_invalidate_pe(struct pnv_ioda_pe *pe)
 {
 	/* 01xb - invalidate TCEs that match the specified PE# */
-	__be64 __iomem *invalidate = pnv_ioda_get_inval_reg(pe->phb, false);
+	__be64 __iomem *invalidate = pnv_ioda_get_inval_reg(pe->phb);
 	unsigned long val = PHB3_TCE_KILL_INVAL_PE | (pe->pe_number & 0xFF);

 	mb(); /* Ensure above stores are visible */
 	__raw_writeq_be(val, invalidate);
 }

-static void pnv_pci_phb3_tce_invalidate(struct pnv_ioda_pe *pe, bool rm,
+static void pnv_pci_phb3_tce_invalidate(struct pnv_ioda_pe *pe,
 					unsigned shift, unsigned long index,
 					unsigned long npages)
 {
-	__be64 __iomem *invalidate = pnv_ioda_get_inval_reg(pe->phb, rm);
+	__be64 __iomem *invalidate = pnv_ioda_get_inval_reg(pe->phb);
 	unsigned long start, end, inc;

 	/* We'll invalidate DMA address in PE scope */
···
 	mb();

 	while (start <= end) {
-		if (rm)
-			__raw_rm_writeq_be(start, invalidate);
-		else
-			__raw_writeq_be(start, invalidate);
+		__raw_writeq_be(start, invalidate);
 		start += inc;
 	}
 }
···
 }

 static void pnv_pci_ioda2_tce_invalidate(struct iommu_table *tbl,
-		unsigned long index, unsigned long npages, bool rm)
+		unsigned long index, unsigned long npages)
 {
 	struct iommu_table_group_link *tgl;
···
 		unsigned int shift = tbl->it_page_shift;

 		if (phb->model == PNV_PHB_MODEL_PHB3 && phb->regs)
-			pnv_pci_phb3_tce_invalidate(pe, rm, shift,
+			pnv_pci_phb3_tce_invalidate(pe, shift,
 						    index, npages);
 		else
 			opal_pci_tce_kill(phb->opal_id,
···
 			attrs);

 	if (!ret)
-		pnv_pci_ioda2_tce_invalidate(tbl, index, npages, false);
+		pnv_pci_ioda2_tce_invalidate(tbl, index, npages);

 	return ret;
 }
···
 {
 	pnv_tce_free(tbl, index, npages);

-	pnv_pci_ioda2_tce_invalidate(tbl, index, npages, false);
+	pnv_pci_ioda2_tce_invalidate(tbl, index, npages);
 }

 static struct iommu_table_ops pnv_ioda2_iommu_ops = {
···
 /*
  * This function is supposed to be called on basis of PE from top
- * to bottom style. So the the I/O or MMIO segment assigned to
+ * to bottom style. So the I/O or MMIO segment assigned to
  * parent PE could be overridden by its child PEs if necessary.
  */
 static void pnv_ioda_setup_pe_seg(struct pnv_ioda_pe *pe)
···
 	if (rc != OPAL_SUCCESS)
 		return;

-	pnv_pci_p7ioc_tce_invalidate(tbl, tbl->it_offset, tbl->it_size, false);
+	pnv_pci_p7ioc_tce_invalidate(tbl, tbl->it_offset, tbl->it_size);
 	if (pe->table_group.group) {
 		iommu_group_put(pe->table_group.group);
 		WARN_ON(pe->table_group.group);
+2 -2
arch/powerpc/platforms/powernv/pci-sriov.c
···
  * have the same requirement.
  *
  * For a SR-IOV BAR things are a little more awkward since size and alignment
- * are not coupled. The alignment is set based on the the per-VF BAR size, but
+ * are not coupled. The alignment is set based on the per-VF BAR size, but
  * the total BAR area is: number-of-vfs * per-vf-size. The number of VFs
  * isn't necessarily a power of two, so neither is the total size. To fix that
  * we need to finesse (read: hack) the Linux BAR allocator so that it will
···
 		return -ENOSPC;
 	}

-	/* allocate a contigious block of PEs for our VFs */
+	/* allocate a contiguous block of PEs for our VFs */
 	base_pe = pnv_ioda_alloc_pe(phb, num_vfs);
 	if (!base_pe) {
 		pci_err(pdev, "Unable to allocate PEs for %d VFs\n", num_vfs);
-1
arch/powerpc/platforms/powernv/pci.c
···
 #include <asm/sections.h>
 #include <asm/io.h>
-#include <asm/prom.h>
 #include <asm/pci-bridge.h>
 #include <asm/machdep.h>
 #include <asm/msi_bitmap.h>
+1 -2
arch/powerpc/platforms/powernv/pci.h
···
 		unsigned long attrs);
 extern void pnv_tce_free(struct iommu_table *tbl, long index, long npages);
 extern int pnv_tce_xchg(struct iommu_table *tbl, long index,
-		unsigned long *hpa, enum dma_data_direction *direction,
-		bool alloc);
+		unsigned long *hpa, enum dma_data_direction *direction);
 extern __be64 *pnv_tce_useraddrptr(struct iommu_table *tbl, long index,
 		bool alloc);
 extern unsigned long pnv_tce_get(struct iommu_table *tbl, long index);
+9
arch/powerpc/platforms/powernv/setup.c
···
 	if (fw_feature_is("disabled", "needs-spec-barrier-for-bound-checks", np))
 		security_ftr_clear(SEC_FTR_BNDS_CHK_SPEC_BAR);
+
+	if (fw_feature_is("enabled", "no-need-l1d-flush-msr-pr-1-to-0", np))
+		security_ftr_clear(SEC_FTR_L1D_FLUSH_ENTRY);
+
+	if (fw_feature_is("enabled", "no-need-l1d-flush-kernel-on-user-access", np))
+		security_ftr_clear(SEC_FTR_L1D_FLUSH_UACCESS);
+
+	if (fw_feature_is("enabled", "no-need-store-drain-on-priv-state-switch", np))
+		security_ftr_clear(SEC_FTR_STF_BARRIER);
 }

 static void __init pnv_setup_security_mitigations(void)
+1 -1
arch/powerpc/platforms/powernv/smp.c
···
 	}
 }

-static int pnv_system_reset_exception(struct pt_regs *regs)
+noinstr static int pnv_system_reset_exception(struct pt_regs *regs)
 {
 	if (smp_handle_nmi_ipi(regs))
 		return 1;
+1
arch/powerpc/platforms/powernv/ultravisor.c
···
 		return -ENODEV;

 	uv_memcons = memcons_init(node, "memcons");
+	of_node_put(node);
 	if (!uv_memcons)
 		return -ENOENT;
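The ultravisor.c hunk (like the dart_iommu.c one further down) plugs a device-tree node reference leak: `of_find_*` lookups return the node with its refcount elevated, and the caller must drop it with `of_node_put()` on every exit path once done. A toy userspace sketch of the discipline, with hypothetical `node_get`/`node_put` standing in for the OF helpers:

```c
static int refcount;	/* stand-in for the node's kref */

static int *node_get(void)   { refcount++; return &refcount; }
static void node_put(int *n) { (void)n; refcount--; }

/* Use a looked-up node, then drop the reference before returning,
 * on success and failure alike, mirroring the of_node_put() fixes. */
static int use_node(int fail)
{
	int *node = node_get();
	int ret = fail ? -1 : 0;

	node_put(node);	/* without this, every call leaks a reference */
	return ret;
}
```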
+1 -1
arch/powerpc/platforms/powernv/vas-fault.c
···
 	vas_init_rx_win_attr(&attr, VAS_COP_TYPE_FAULT);

 	attr.rx_fifo_size = vinst->fault_fifo_size;
-	attr.rx_fifo = vinst->fault_fifo;
+	attr.rx_fifo = __pa(vinst->fault_fifo);

 	/*
 	 * Max creds is based on number of CRBs can fit in the FIFO.
+2 -2
arch/powerpc/platforms/powernv/vas-window.c
···
 	 *
 	 * See also: Design note in function header.
 	 */
-	val = __pa(winctx->rx_fifo);
+	val = winctx->rx_fifo;
 	val = SET_FIELD(VAS_PAGE_MIGRATION_SELECT, val, 0);
 	write_hvwc_reg(window, VREG(LFIFO_BAR), val);
···
 	 */
 	winctx->fifo_disable = true;
 	winctx->intr_disable = true;
-	winctx->rx_fifo = NULL;
+	winctx->rx_fifo = 0;
 }

 winctx->lnotify_lpid = rxattr->lnotify_lpid;
+1 -1
arch/powerpc/platforms/powernv/vas.h
···
  * is a container for the register fields in the window context.
  */
 struct vas_winctx {
-	void *rx_fifo;
+	u64 rx_fifo;
 	int rx_fifo_size;
 	int wcreds_max;
 	int rsvd_txbuf_count;
+1 -1
arch/powerpc/platforms/ps3/Kconfig
···
 	bool "PS3 Verbose LV1 hypercall results" if PS3_ADVANCED
 	depends on PPC_PS3
 	help
-	  Enables more verbose log mesages for LV1 hypercall results.
+	  Enables more verbose log messages for LV1 hypercall results.

 	  If in doubt, say N here and reduce the size of the kernel by a
 	  small amount.
-1
arch/powerpc/platforms/ps3/htab.c
···
 #include <linux/memblock.h>

 #include <asm/machdep.h>
-#include <asm/prom.h>
 #include <asm/udbg.h>
 #include <asm/lv1call.h>
 #include <asm/ps3fb.h>
+1 -2
arch/powerpc/platforms/ps3/mm.c
···
 #include <asm/cell-regs.h>
 #include <asm/firmware.h>
-#include <asm/prom.h>
 #include <asm/udbg.h>
 #include <asm/lv1call.h>
 #include <asm/setup.h>
···
  * @bus_addr: Starting ioc bus address of the area to map.
  * @len: Length in bytes of the area to map.
  * @link: A struct list_head used with struct ps3_dma_region.chunk_list, the
- * list of all chuncks owned by the region.
+ * list of all chunks owned by the region.
  *
  * This implementation uses a very simple dma page manager
  * based on the dma_chunk structure. This scheme assumes
-2
arch/powerpc/platforms/ps3/os-area.c
···
 #include <linux/of.h>
 #include <linux/slab.h>

-#include <asm/prom.h>
-
 #include "platform.h"

 enum {
+1 -1
arch/powerpc/platforms/ps3/setup.c
···
 #include <linux/console.h>
 #include <linux/export.h>
 #include <linux/memblock.h>
+#include <linux/of.h>

 #include <asm/machdep.h>
 #include <asm/firmware.h>
 #include <asm/time.h>
 #include <asm/iommu.h>
 #include <asm/udbg.h>
-#include <asm/prom.h>
 #include <asm/lv1call.h>
 #include <asm/ps3gpu.h>
+1 -1
arch/powerpc/platforms/ps3/system-bus.c
···
 		iopte_flag |= CBE_IOPTE_PP_W | CBE_IOPTE_SO_RW;
 		break;
 	default:
-		/* not happned */
+		/* not happened */
 		BUG();
 	}
 	result = ps3_dma_map(dev->d_region, (unsigned long)ptr, size,
+4
arch/powerpc/platforms/pseries/Makefile
···
 obj-$(CONFIG_PPC_VAS)		+= vas.o vas-sysfs.o

 obj-$(CONFIG_ARCH_HAS_CC_PLATFORM)	+= cc_platform.o
+
+# nothing that operates in real mode is safe for KASAN
+KASAN_SANITIZE_ras.o := n
+KASAN_SANITIZE_kexec.o := n
+1 -3
arch/powerpc/platforms/pseries/cmm.c
···
 static int cmm_memory_cb(struct notifier_block *self,
 			unsigned long action, void *arg)
 {
-	int ret = 0;
-
 	switch (action) {
 	case MEM_GOING_OFFLINE:
 		mutex_lock(&hotplug_mutex);
···
 		break;
 	}

-	return notifier_from_errno(ret);
+	return NOTIFY_OK;
 }

 static struct notifier_block cmm_mem_nb = {
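The cmm.c hunk can drop `ret` entirely because it was always 0, and `notifier_from_errno(0)` already evaluates to `NOTIFY_OK`, so returning the constant is behaviorally identical. A userspace re-implementation of the kernel's encoding (constants and logic copied from `include/linux/notifier.h`) demonstrating the equivalence:

```c
/* Constants as defined in include/linux/notifier.h */
#define NOTIFY_DONE		0x0000
#define NOTIFY_OK		0x0001
#define NOTIFY_STOP_MASK	0x8000
#define NOTIFY_BAD		(NOTIFY_STOP_MASK | 0x0002)

/* Userspace copy of the kernel's notifier_from_errno():
 * 0 maps to NOTIFY_OK; a negative errno is folded into a
 * stop-marked value that notifier_to_errno() can decode. */
static inline int notifier_from_errno(int err)
{
	if (err)
		return NOTIFY_STOP_MASK | (NOTIFY_OK - err);
	return NOTIFY_OK;
}
```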
+1 -2
arch/powerpc/platforms/pseries/dlpar.c
···
 #include "of_helpers.h"
 #include "pseries.h"

-#include <asm/prom.h>
 #include <asm/machdep.h>
 #include <linux/uaccess.h>
 #include <asm/rtas.h>
···
 	handle_dlpar_errorlog(hp_work->errlog);

 	kfree(hp_work->errlog);
-	kfree((void *)work);
+	kfree(work);
 }

 void queue_hotplug_event(struct pseries_hp_errorlog *hp_errlog)
+5 -4
arch/powerpc/platforms/pseries/eeh_pseries.c
···
 static int ibm_get_config_addr_info2;
 static int ibm_configure_pe;

+static void pseries_eeh_init_edev(struct pci_dn *pdn);
+
 static void pseries_pcibios_bus_add_device(struct pci_dev *pdev)
 {
 	struct pci_dn *pdn = pci_get_pdn(pdev);
···
  * This function takes care of the initialisation and inserts the eeh_dev
  * into the correct eeh_pe. If no eeh_pe exists we'll allocate one.
  */
-void pseries_eeh_init_edev(struct pci_dn *pdn)
+static void pseries_eeh_init_edev(struct pci_dn *pdn)
 {
 	struct eeh_pe pe, *parent;
 	struct eeh_dev *edev;
···
 	int ret = 0;

 	/*
-	 * When we're enabling or disabling EEH functioality on
+	 * When we're enabling or disabling EEH functionality on
 	 * the particular PE, the PE config address is possibly
 	 * unavailable. Therefore, we have to figure it out from
 	 * the FDT node.
···
 		return -EINVAL;
 	}

-	/* Initialize error log lock and size */
-	spin_lock_init(&slot_errbuf_lock);
+	/* Initialize error log size */
 	eeh_error_buf_size = rtas_token("rtas-error-log-max");
 	if (eeh_error_buf_size == RTAS_UNKNOWN_SERVICE) {
 		pr_info("%s: unknown EEH error log size\n",
+1 -1
arch/powerpc/platforms/pseries/hotplug-cpu.c
···
 		if (get_hard_smp_processor_id(cpu) != thread)
 			continue;
 		cpu_maps_update_done();
-		find_and_online_cpu_nid(cpu);
+		find_and_update_cpu_nid(cpu);
 		rc = device_online(get_cpu_device(cpu));
 		if (rc) {
 			dlpar_offline_cpu(dn);
-1
arch/powerpc/platforms/pseries/hotplug-memory.c
···
 #include <asm/firmware.h>
 #include <asm/machdep.h>
-#include <asm/prom.h>
 #include <asm/sparsemem.h>
 #include <asm/fadump.h>
 #include <asm/drmem.h>
+2 -3
arch/powerpc/platforms/pseries/iommu.c
···
 #ifdef CONFIG_IOMMU_API
 static int tce_exchange_pseries(struct iommu_table *tbl, long index, unsigned
-				long *tce, enum dma_data_direction *direction,
-				bool realmode)
+				long *tce, enum dma_data_direction *direction)
 {
 	long rc;
 	unsigned long ioba = (unsigned long) index << tbl->it_page_shift;
···
 	pci->table_group->tables[1] = newtbl;

-	/* Keep default DMA window stuct if removed */
+	/* Keep default DMA window struct if removed */
 	if (default_win_removed) {
 		tbl->it_size = 0;
 		vfree(tbl->it_map);
+8
arch/powerpc/platforms/pseries/kexec.c
···
 	} else
 		xics_kexec_teardown_cpu(secondary);
 }
+
+void pseries_machine_kexec(struct kimage *image)
+{
+	if (firmware_has_feature(FW_FEATURE_SET_MODE))
+		pseries_disable_reloc_on_exc();
+
+	default_machine_kexec(image);
+}
-1
arch/powerpc/platforms/pseries/lpar.c
···
 #include <asm/mmu_context.h>
 #include <asm/iommu.h>
 #include <asm/tlb.h>
-#include <asm/prom.h>
 #include <asm/cputable.h>
 #include <asm/udbg.h>
 #include <asm/smp.h>
-1
arch/powerpc/platforms/pseries/lparcfg.c
···
 #include <asm/firmware.h>
 #include <asm/rtas.h>
 #include <asm/time.h>
-#include <asm/prom.h>
 #include <asm/vdso_datapage.h>
 #include <asm/vio.h>
 #include <asm/mmu.h>
+1
arch/powerpc/platforms/pseries/msi.c
···
 #include <linux/crash_dump.h>
 #include <linux/device.h>
 #include <linux/irq.h>
+#include <linux/irqdomain.h>
 #include <linux/msi.h>

 #include <asm/rtas.h>
+1 -1
arch/powerpc/platforms/pseries/nvram.c
···
 #include <linux/slab.h>
 #include <linux/ctype.h>
 #include <linux/uaccess.h>
+#include <linux/of.h>
 #include <asm/nvram.h>
 #include <asm/rtas.h>
-#include <asm/prom.h>
 #include <asm/machdep.h>

 /* Max bytes to read/write in one go */
+24 -30
arch/powerpc/platforms/pseries/papr_scm.c
···
 	/* The bits which needs to be overridden */
 	u64 health_bitmap_inject_mask;

-	 /* array to have event_code and stat_id mappings */
-	char **nvdimm_events_map;
+	/* array to have event_code and stat_id mappings */
+	u8 *nvdimm_events_map;
 };

 static int papr_scm_pmem_flush(struct nd_region *nd_region,
···
 	stat = &stats->scm_statistic[0];
 	memcpy(&stat->stat_id,
-	       p->nvdimm_events_map[event->attr.config],
+	       &p->nvdimm_events_map[event->attr.config * sizeof(stat->stat_id)],
 	       sizeof(stat->stat_id));
 	stat->stat_val = 0;
···
 {
 	struct papr_scm_perf_stat *stat;
 	struct papr_scm_perf_stats *stats;
-	int index, rc, count;
 	u32 available_events;
-
-	if (!p->stat_buffer_len)
-		return -ENOENT;
+	int index, rc = 0;

 	available_events = (p->stat_buffer_len - sizeof(struct papr_scm_perf_stats))
 			/ sizeof(struct papr_scm_perf_stat);
+	if (available_events == 0)
+		return -EOPNOTSUPP;

 	/* Allocate the buffer for phyp where stats are written */
 	stats = kzalloc(p->stat_buffer_len, GFP_KERNEL);
···
 		return rc;
 	}

-	/* Allocate memory to nvdimm_event_map */
-	p->nvdimm_events_map = kcalloc(available_events, sizeof(char *), GFP_KERNEL);
-	if (!p->nvdimm_events_map) {
-		rc = -ENOMEM;
-		goto out_stats;
-	}
-
 	/* Called to get list of events supported */
 	rc = drc_pmem_query_stats(p, stats, 0);
 	if (rc)
-		goto out_nvdimm_events_map;
+		goto out;

-	for (index = 0, stat = stats->scm_statistic, count = 0;
-		     index < available_events; index++, ++stat) {
-		p->nvdimm_events_map[count] = kmemdup_nul(stat->stat_id, 8, GFP_KERNEL);
-		if (!p->nvdimm_events_map[count]) {
-			rc = -ENOMEM;
-			goto out_nvdimm_events_map;
-		}
-
-		count++;
+	/*
+	 * Allocate memory and populate nvdimm_event_map.
+	 * Allocate an extra element for NULL entry
+	 */
+	p->nvdimm_events_map = kcalloc(available_events + 1,
+				       sizeof(stat->stat_id),
+				       GFP_KERNEL);
+	if (!p->nvdimm_events_map) {
+		rc = -ENOMEM;
+		goto out;
 	}
-	p->nvdimm_events_map[count] = NULL;
-	kfree(stats);
-	return 0;

-out_nvdimm_events_map:
-	kfree(p->nvdimm_events_map);
-out_stats:
+	/* Copy all stat_ids to event map */
+	for (index = 0, stat = stats->scm_statistic;
+	     index < available_events; index++, ++stat) {
+		memcpy(&p->nvdimm_events_map[index * sizeof(stat->stat_id)],
+		       &stat->stat_id, sizeof(stat->stat_id));
+	}
+out:
 	kfree(stats);
 	return rc;
 }
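The papr_scm hunks above replace a `char **` array of individually-allocated ID strings with one flat `u8` buffer of fixed-size records, indexed as `event_code * sizeof(stat_id)`; a single allocation cannot leak per-element the way the old `kmemdup_nul()` loop could on an error path. A standalone sketch of the pattern (`STAT_ID_LEN` and `build_event_map` are illustrative names, not the driver's):

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define STAT_ID_LEN 8	/* fixed-width record, like the driver's 8-byte stat_id */

/* Build one flat buffer of n fixed-size IDs plus a zeroed sentinel slot,
 * mirroring kcalloc(available_events + 1, sizeof(stat_id), GFP_KERNEL). */
static uint8_t *build_event_map(const char (*ids)[STAT_ID_LEN], size_t n)
{
	uint8_t *map = calloc(n + 1, STAT_ID_LEN);

	if (!map)
		return NULL;
	for (size_t i = 0; i < n; i++)
		memcpy(&map[i * STAT_ID_LEN], ids[i], STAT_ID_LEN);
	return map;
}
```

One `free(map)` releases the whole table, which is why the rewritten error handling needs only a single `out:` label instead of the old two-stage unwind.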
-1
arch/powerpc/platforms/pseries/pci.c
···
 #include <asm/eeh.h>
 #include <asm/pci-bridge.h>
-#include <asm/prom.h>
 #include <asm/ppc-pci.h>
 #include <asm/pci.h>
 #include "pseries.h"
-1
arch/powerpc/platforms/pseries/pmem.c
···
 #include <linux/of.h>
 #include <linux/of_platform.h>
 #include <linux/slab.h>
-#include <asm/prom.h>
 #include <asm/rtas.h>
 #include <asm/firmware.h>
 #include <asm/machdep.h>
+1
arch/powerpc/platforms/pseries/pseries.h
···
 #endif

 extern void pseries_kexec_cpu_down(int crash_shutdown, int secondary);
+void pseries_machine_kexec(struct kimage *image);

 extern void pSeries_final_fixup(void);
-1
arch/powerpc/platforms/pseries/reconfig.c
···
 #include <linux/slab.h>
 #include <linux/of.h>

-#include <asm/prom.h>
 #include <asm/machdep.h>
 #include <linux/uaccess.h>
 #include <asm/mmu.h>
+12 -5
arch/powerpc/platforms/pseries/rtas-fadump.c
···
 #include <linux/delay.h>
 #include <linux/seq_file.h>
 #include <linux/crash_dump.h>
+#include <linux/of.h>
+#include <linux/of_fdt.h>

 #include <asm/page.h>
-#include <asm/prom.h>
 #include <asm/rtas.h>
 #include <asm/fadump.h>
 #include <asm/fadump-internal.h>
···
 		cpu_to_be64(fadump_conf->hpte_region_size);
 	fdm.hpte_region.destination_address = cpu_to_be64(addr);
 	addr += fadump_conf->hpte_region_size;
+
+	/*
+	 * Align boot memory area destination address to page boundary to
+	 * be able to mmap read this area in the vmcore.
+	 */
+	addr = PAGE_ALIGN(addr);

 	/* RMA region section */
 	fdm.rmr_region.request_flag = cpu_to_be32(RTAS_FADUMP_REQUEST_FLAG);
···
 		/* Lower 4 bytes of reg_value contains logical cpu id */
 		cpu = (be64_to_cpu(reg_entry->reg_value) &
 		       RTAS_FADUMP_CPU_ID_MASK);
-		if (fdh && !cpumask_test_cpu(cpu, &fdh->online_mask)) {
+		if (fdh && !cpumask_test_cpu(cpu, &fdh->cpu_mask)) {
 			RTAS_FADUMP_SKIP_TO_NEXT_CPU(reg_entry);
 			continue;
 		}
···
 		   be64_to_cpu(fdm_ptr->rmr_region.source_len),
 		   be64_to_cpu(fdm_ptr->rmr_region.bytes_dumped));

-	/* Dump is active. Show reserved area start address. */
+	/* Dump is active. Show preserved area start address. */
 	if (fdm_active) {
-		seq_printf(m, "\nMemory above %#016lx is reserved for saving crash dump\n",
-			   fadump_conf->reserve_dump_area_start);
+		seq_printf(m, "\nMemory above %#016llx is reserved for saving crash dump\n",
+			   fadump_conf->boot_mem_top);
 	}
 }
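The rtas-fadump hunk page-aligns the boot-memory destination address so the region can later be mmap-read from the vmcore. The kernel's `PAGE_ALIGN()` (from `<linux/mm.h>`) is the usual round-up-to-power-of-two mask trick; a userspace sketch, assuming 4 KiB pages for illustration:

```c
#include <stdint.h>

#define PAGE_SIZE 4096UL	/* assumption: 4 KiB pages, for illustration only */

/* Round addr up to the next multiple of PAGE_SIZE (a power of two),
 * leaving already-aligned addresses unchanged. */
static inline uint64_t page_align(uint64_t addr)
{
	return (addr + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1);
}
```

The mask form only works because PAGE_SIZE is a power of two; adding `PAGE_SIZE - 1` first is what makes exact multiples stay put instead of jumping a page.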
+4 -14
arch/powerpc/platforms/pseries/setup.c
···
 #include <linux/seq_file.h>
 #include <linux/root_dev.h>
 #include <linux/of.h>
+#include <linux/of_irq.h>
 #include <linux/of_pci.h>
 #include <linux/memblock.h>
 #include <linux/swiotlb.h>
···
 #include <asm/mmu.h>
 #include <asm/processor.h>
 #include <asm/io.h>
-#include <asm/prom.h>
 #include <asm/rtas.h>
 #include <asm/pci-bridge.h>
 #include <asm/iommu.h>
···
 }
 EXPORT_SYMBOL(pseries_disable_reloc_on_exc);

-#ifdef CONFIG_KEXEC_CORE
-static void pSeries_machine_kexec(struct kimage *image)
-{
-	if (firmware_has_feature(FW_FEATURE_SET_MODE))
-		pseries_disable_reloc_on_exc();
-
-	default_machine_kexec(image);
-}
-#endif
-
 #ifdef __LITTLE_ENDIAN__
 void pseries_big_endian_exceptions(void)
 {
···
 	 */
 	num_res = of_read_number(&indexes[NUM_RES_PROPERTY], 1);
 	if (resno >= num_res)
-		return 0; /* or an errror */
+		return 0; /* or an error */

 	i = START_OF_ENTRIES + NEXT_ENTRY * resno;
 	switch (value) {
···
 	if (!pdev->is_physfn)
 		return;
-	/*Firmware must support open sriov otherwise dont configure*/
+	/*Firmware must support open sriov otherwise don't configure*/
 	indexes = of_get_property(dn, "ibm,open-sriov-vf-bar-info", NULL);
 	if (indexes)
 		of_pci_parse_iov_addrs(pdev, indexes);
···
 	.machine_check_exception = pSeries_machine_check_exception,
 	.machine_check_log_err	= pSeries_machine_check_log_err,
 #ifdef CONFIG_KEXEC_CORE
-	.machine_kexec          = pSeries_machine_kexec,
+	.machine_kexec          = pseries_machine_kexec,
 	.kexec_cpu_down         = pseries_kexec_cpu_down,
 #endif
 #ifdef CONFIG_MEMORY_HOTPLUG
-1
arch/powerpc/platforms/pseries/smp.c
···
 #include <asm/irq.h>
 #include <asm/page.h>
 #include <asm/io.h>
-#include <asm/prom.h>
 #include <asm/smp.h>
 #include <asm/paca.h>
 #include <asm/machdep.h>
+10 -8
arch/powerpc/platforms/pseries/vas-sysfs.c
···
 /*
  * Create sysfs interface:
- * /sys/devices/vas/vas0/gzip/default_capabilities
+ * /sys/devices/virtual/misc/vas/vas0/gzip/default_capabilities
  *	This directory contains the following VAS GZIP capabilities
- *	for the defaule credit type.
- * /sys/devices/vas/vas0/gzip/default_capabilities/nr_total_credits
+ *	for the default credit type.
+ * /sys/devices/virtual/misc/vas/vas0/gzip/default_capabilities/nr_total_credits
  *	Total number of default credits assigned to the LPAR which
  *	can be changed with DLPAR operation.
- * /sys/devices/vas/vas0/gzip/default_capabilities/nr_used_credits
+ * /sys/devices/virtual/misc/vas/vas0/gzip/default_capabilities/nr_used_credits
  *	Number of credits used by the user space. One credit will
  *	be assigned for each window open.
  *
- * /sys/devices/vas/vas0/gzip/qos_capabilities
+ * /sys/devices/virtual/misc/vas/vas0/gzip/qos_capabilities
  *	This directory contains the following VAS GZIP capabilities
  *	for the Quality of Service (QoS) credit type.
- * /sys/devices/vas/vas0/gzip/qos_capabilities/nr_total_credits
+ * /sys/devices/virtual/misc/vas/vas0/gzip/qos_capabilities/nr_total_credits
  *	Total number of QoS credits assigned to the LPAR. The user
  *	has to define this value using HMC interface. It can be
  *	changed dynamically by the user.
- * /sys/devices/vas/vas0/gzip/qos_capabilities/nr_used_credits
+ * /sys/devices/virtual/misc/vas/vas0/gzip/qos_capabilities/nr_used_credits
  *	Number of credits used by the user space.
- * /sys/devices/vas/vas0/gzip/qos_capabilities/update_total_credits
+ * /sys/devices/virtual/misc/vas/vas0/gzip/qos_capabilities/update_total_credits
  *	Update total QoS credits dynamically
  */
···
 	pseries_vas_kobj = kobject_create_and_add("vas0",
 			&vas_miscdev.this_device->kobj);
 	if (!pseries_vas_kobj) {
+		misc_deregister(&vas_miscdev);
 		pr_err("Failed to create VAS sysfs entry\n");
 		return -ENOMEM;
 	}
···
 		if (!gzip_caps_kobj) {
 			pr_err("Failed to create VAS GZIP capability entry\n");
 			kobject_put(pseries_vas_kobj);
+			misc_deregister(&vas_miscdev);
 			return -ENOMEM;
 		}
 	}
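The vas-sysfs fix above adds `misc_deregister()` to both failure paths so every registration made before the failure is undone, newest first. The general unwind-in-reverse pattern, sketched outside the kernel with hypothetical `setup_a`/`setup_b` steps standing in for `misc_register()` and `kobject_create_and_add()`:

```c
#include <stdbool.h>

static int a_active, b_active;	/* stand-ins for registered resources */

static bool setup_a(void)          { a_active = 1; return true; }
static void teardown_a(void)       { a_active = 0; }
static bool setup_b(bool fail)     { if (fail) return false; b_active = 1; return true; }

/* Initialize a then b; on any failure, unwind whatever already
 * succeeded, in reverse order, so nothing stays registered. */
static int init_all(bool fail_b)
{
	if (!setup_a())
		return -1;
	if (!setup_b(fail_b)) {
		teardown_a();	/* the cleanup the original code path was missing */
		return -1;
	}
	return 0;
}
```

Kernel code usually expresses the same shape with a ladder of `goto err_*` labels; the invariant is identical: every exit path releases exactly the resources acquired so far.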
+1 -1
arch/powerpc/platforms/pseries/vas.c
···
 	atomic_set(&caps->nr_total_credits, new_nr_creds);
 	/*
 	 * The total number of available credits may be decreased or
-	 * inceased with DLPAR operation. Means some windows have to be
+	 * increased with DLPAR operation. Means some windows have to be
 	 * closed / reopened. Hold the vas_pseries_mutex so that the
 	 * the user space can not open new windows.
 	 */
+1
arch/powerpc/platforms/pseries/vio.c
···
 #include <linux/dma-map-ops.h>
 #include <linux/kobject.h>
 #include <linux/kexec.h>
+#include <linux/of_irq.h>

 #include <asm/iommu.h>
 #include <asm/dma.h>
-1
arch/powerpc/sysdev/Makefile
···
 obj-$(CONFIG_FSL_CORENET_RCPM)	+= fsl_rcpm.o
 obj-$(CONFIG_FSL_LBC)		+= fsl_lbc.o
 obj-$(CONFIG_FSL_GTM)		+= fsl_gtm.o
-obj-$(CONFIG_FSL_85XX_CACHE_SRAM)	+= fsl_85xx_l2ctlr.o fsl_85xx_cache_sram.o
 obj-$(CONFIG_FSL_RIO)		+= fsl_rio.o fsl_rmu.o
 obj-$(CONFIG_TSI108_BRIDGE)	+= tsi108_pci.o tsi108_dev.o
 obj-$(CONFIG_RTC_DRV_CMOS)	+= rtc_cmos_setup.o
+1 -1
arch/powerpc/sysdev/cpm2_pic.c
···
 #include <linux/sched.h>
 #include <linux/signal.h>
 #include <linux/irq.h>
+#include <linux/irqdomain.h>

 #include <asm/immap_cpm2.h>
 #include <asm/mpc8260.h>
 #include <asm/io.h>
-#include <asm/prom.h>
 #include <asm/fs_pd.h>

 #include "cpm2_pic.h"
+5 -3
arch/powerpc/sysdev/dart_iommu.c
···
 #include <linux/memblock.h>
 #include <linux/gfp.h>
 #include <linux/kmemleak.h>
+#include <linux/of_address.h>
 #include <asm/io.h>
-#include <asm/prom.h>
 #include <asm/iommu.h>
 #include <asm/pci-bridge.h>
 #include <asm/machdep.h>
···
 	}

 	/* Initialize the DART HW */
-	if (dart_init(dn) != 0)
+	if (dart_init(dn) != 0) {
+		of_node_put(dn);
 		return;
-
+	}
 	/*
 	 * U4 supports a DART bypass, we use it for 64-bit capable devices to
 	 * improve performance. However, that only works for devices connected
···
 	/* Setup pci_dma ops */
 	set_pci_dma_ops(&dma_iommu_ops);
+	of_node_put(dn);
 }

 #ifdef CONFIG_PM
+1 -1
arch/powerpc/sysdev/dcr.c
··· 8 8 9 9 #include <linux/kernel.h> 10 10 #include <linux/export.h> 11 - #include <asm/prom.h> 11 + #include <linux/of_address.h> 12 12 #include <asm/dcr.h> 13 13 14 14 #ifdef CONFIG_PPC_DCR_MMIO
-88
arch/powerpc/sysdev/fsl_85xx_cache_ctlr.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0-or-later */ 2 - /* 3 - * Copyright 2009-2010, 2012 Freescale Semiconductor, Inc 4 - * 5 - * QorIQ based Cache Controller Memory Mapped Registers 6 - * 7 - * Author: Vivek Mahajan <vivek.mahajan@freescale.com> 8 - */ 9 - 10 - #ifndef __FSL_85XX_CACHE_CTLR_H__ 11 - #define __FSL_85XX_CACHE_CTLR_H__ 12 - 13 - #define L2CR_L2FI 0x40000000 /* L2 flash invalidate */ 14 - #define L2CR_L2IO 0x00200000 /* L2 instruction only */ 15 - #define L2CR_SRAM_ZERO 0x00000000 /* L2SRAM zero size */ 16 - #define L2CR_SRAM_FULL 0x00010000 /* L2SRAM full size */ 17 - #define L2CR_SRAM_HALF 0x00020000 /* L2SRAM half size */ 18 - #define L2CR_SRAM_TWO_HALFS 0x00030000 /* L2SRAM two half sizes */ 19 - #define L2CR_SRAM_QUART 0x00040000 /* L2SRAM one quarter size */ 20 - #define L2CR_SRAM_TWO_QUARTS 0x00050000 /* L2SRAM two quarter size */ 21 - #define L2CR_SRAM_EIGHTH 0x00060000 /* L2SRAM one eighth size */ 22 - #define L2CR_SRAM_TWO_EIGHTH 0x00070000 /* L2SRAM two eighth size */ 23 - 24 - #define L2SRAM_OPTIMAL_SZ_SHIFT 0x00000003 /* Optimum size for L2SRAM */ 25 - 26 - #define L2SRAM_BAR_MSK_LO18 0xFFFFC000 /* Lower 18 bits */ 27 - #define L2SRAM_BARE_MSK_HI4 0x0000000F /* Upper 4 bits */ 28 - 29 - enum cache_sram_lock_ways { 30 - LOCK_WAYS_ZERO, 31 - LOCK_WAYS_EIGHTH, 32 - LOCK_WAYS_TWO_EIGHTH, 33 - LOCK_WAYS_HALF = 4, 34 - LOCK_WAYS_FULL = 8, 35 - }; 36 - 37 - struct mpc85xx_l2ctlr { 38 - u32 ctl; /* 0x000 - L2 control */ 39 - u8 res1[0xC]; 40 - u32 ewar0; /* 0x010 - External write address 0 */ 41 - u32 ewarea0; /* 0x014 - External write address extended 0 */ 42 - u32 ewcr0; /* 0x018 - External write ctrl */ 43 - u8 res2[4]; 44 - u32 ewar1; /* 0x020 - External write address 1 */ 45 - u32 ewarea1; /* 0x024 - External write address extended 1 */ 46 - u32 ewcr1; /* 0x028 - External write ctrl 1 */ 47 - u8 res3[4]; 48 - u32 ewar2; /* 0x030 - External write address 2 */ 49 - u32 ewarea2; /* 0x034 - External write address extended 2 */ 50 - u32 ewcr2; /* 0x038 - External write ctrl 2 */ 51 - u8 res4[4]; 52 - u32 ewar3; /* 0x040 - External write address 3 */ 53 - u32 ewarea3; /* 0x044 - External write address extended 3 */ 54 - u32 ewcr3; /* 0x048 - External write ctrl 3 */ 55 - u8 res5[0xB4]; 56 - u32 srbar0; /* 0x100 - SRAM base address 0 */ 57 - u32 srbarea0; /* 0x104 - SRAM base addr reg ext address 0 */ 58 - u32 srbar1; /* 0x108 - SRAM base address 1 */ 59 - u32 srbarea1; /* 0x10C - SRAM base addr reg ext address 1 */ 60 - u8 res6[0xCF0]; 61 - u32 errinjhi; /* 0xE00 - Error injection mask high */ 62 - u32 errinjlo; /* 0xE04 - Error injection mask low */ 63 - u32 errinjctl; /* 0xE08 - Error injection tag/ecc control */ 64 - u8 res7[0x14]; 65 - u32 captdatahi; /* 0xE20 - Error data high capture */ 66 - u32 captdatalo; /* 0xE24 - Error data low capture */ 67 - u32 captecc; /* 0xE28 - Error syndrome */ 68 - u8 res8[0x14]; 69 - u32 errdet; /* 0xE40 - Error detect */ 70 - u32 errdis; /* 0xE44 - Error disable */ 71 - u32 errinten; /* 0xE48 - Error interrupt enable */ 72 - u32 errattr; /* 0xE4c - Error attribute capture */ 73 - u32 erradrrl; /* 0xE50 - Error address capture low */ 74 - u32 erradrrh; /* 0xE54 - Error address capture high */ 75 - u32 errctl; /* 0xE58 - Error control */ 76 - u8 res9[0x1A4]; 77 - }; 78 - 79 - struct sram_parameters { 80 - unsigned int sram_size; 81 - phys_addr_t sram_offset; 82 - }; 83 - 84 - extern int instantiate_cache_sram(struct platform_device *dev, 85 - struct sram_parameters sram_params); 86 - extern void remove_cache_sram(struct platform_device *dev); 87 - 88 - #endif /* __FSL_85XX_CACHE_CTLR_H__ */
-147
arch/powerpc/sysdev/fsl_85xx_cache_sram.c
··· 1 - // SPDX-License-Identifier: GPL-2.0-or-later 2 - /* 3 - * Copyright 2009-2010 Freescale Semiconductor, Inc. 4 - * 5 - * Simple memory allocator abstraction for QorIQ (P1/P2) based Cache-SRAM 6 - * 7 - * Author: Vivek Mahajan <vivek.mahajan@freescale.com> 8 - * 9 - * This file is derived from the original work done 10 - * by Sylvain Munaut for the Bestcomm SRAM allocator. 11 - */ 12 - 13 - #include <linux/kernel.h> 14 - #include <linux/export.h> 15 - #include <linux/slab.h> 16 - #include <linux/err.h> 17 - #include <linux/of_platform.h> 18 - #include <linux/pgtable.h> 19 - #include <asm/fsl_85xx_cache_sram.h> 20 - 21 - #include "fsl_85xx_cache_ctlr.h" 22 - 23 - struct mpc85xx_cache_sram *cache_sram; 24 - 25 - void *mpc85xx_cache_sram_alloc(unsigned int size, 26 - phys_addr_t *phys, unsigned int align) 27 - { 28 - unsigned long offset; 29 - unsigned long flags; 30 - 31 - if (unlikely(cache_sram == NULL)) 32 - return NULL; 33 - 34 - if (!size || (size > cache_sram->size) || (align > cache_sram->size)) { 35 - pr_err("%s(): size(=%x) or align(=%x) zero or too big\n", 36 - __func__, size, align); 37 - return NULL; 38 - } 39 - 40 - if ((align & (align - 1)) || align <= 1) { 41 - pr_err("%s(): align(=%x) must be power of two and >1\n", 42 - __func__, align); 43 - return NULL; 44 - } 45 - 46 - spin_lock_irqsave(&cache_sram->lock, flags); 47 - offset = rh_alloc_align(cache_sram->rh, size, align, NULL); 48 - spin_unlock_irqrestore(&cache_sram->lock, flags); 49 - 50 - if (IS_ERR_VALUE(offset)) 51 - return NULL; 52 - 53 - *phys = cache_sram->base_phys + offset; 54 - 55 - return (unsigned char *)cache_sram->base_virt + offset; 56 - } 57 - EXPORT_SYMBOL(mpc85xx_cache_sram_alloc); 58 - 59 - void mpc85xx_cache_sram_free(void *ptr) 60 - { 61 - unsigned long flags; 62 - BUG_ON(!ptr); 63 - 64 - spin_lock_irqsave(&cache_sram->lock, flags); 65 - rh_free(cache_sram->rh, ptr - cache_sram->base_virt); 66 - spin_unlock_irqrestore(&cache_sram->lock, flags); 67 - } 68 - EXPORT_SYMBOL(mpc85xx_cache_sram_free); 69 - 70 - int __init instantiate_cache_sram(struct platform_device *dev, 71 - struct sram_parameters sram_params) 72 - { 73 - int ret = 0; 74 - 75 - if (cache_sram) { 76 - dev_err(&dev->dev, "Already initialized cache-sram\n"); 77 - return -EBUSY; 78 - } 79 - 80 - cache_sram = kzalloc(sizeof(struct mpc85xx_cache_sram), GFP_KERNEL); 81 - if (!cache_sram) { 82 - dev_err(&dev->dev, "Out of memory for cache_sram structure\n"); 83 - return -ENOMEM; 84 - } 85 - 86 - cache_sram->base_phys = sram_params.sram_offset; 87 - cache_sram->size = sram_params.sram_size; 88 - 89 - if (!request_mem_region(cache_sram->base_phys, cache_sram->size, 90 - "fsl_85xx_cache_sram")) { 91 - dev_err(&dev->dev, "%pOF: request memory failed\n", 92 - dev->dev.of_node); 93 - ret = -ENXIO; 94 - goto out_free; 95 - } 96 - 97 - cache_sram->base_virt = ioremap_coherent(cache_sram->base_phys, 98 - cache_sram->size); 99 - if (!cache_sram->base_virt) { 100 - dev_err(&dev->dev, "%pOF: ioremap_coherent failed\n", 101 - dev->dev.of_node); 102 - ret = -ENOMEM; 103 - goto out_release; 104 - } 105 - 106 - cache_sram->rh = rh_create(sizeof(unsigned int)); 107 - if (IS_ERR(cache_sram->rh)) { 108 - dev_err(&dev->dev, "%pOF: Unable to create remote heap\n", 109 - dev->dev.of_node); 110 - ret = PTR_ERR(cache_sram->rh); 111 - goto out_unmap; 112 - } 113 - 114 - rh_attach_region(cache_sram->rh, 0, cache_sram->size); 115 - spin_lock_init(&cache_sram->lock); 116 - 117 - dev_info(&dev->dev, "[base:0x%llx, size:0x%x] configured and loaded\n", 118 - (unsigned long long)cache_sram->base_phys, cache_sram->size); 119 - 120 - return 0; 121 - 122 - out_unmap: 123 - iounmap(cache_sram->base_virt); 124 - 125 - out_release: 126 - release_mem_region(cache_sram->base_phys, cache_sram->size); 127 - 128 - out_free: 129 - kfree(cache_sram); 130 - return ret; 131 - } 132 - 133 - void remove_cache_sram(struct platform_device *dev) 134 - { 135 - BUG_ON(!cache_sram); 136 - 137 - rh_detach_region(cache_sram->rh, 0, cache_sram->size); 138 - rh_destroy(cache_sram->rh); 139 - 140 - iounmap(cache_sram->base_virt); 141 - release_mem_region(cache_sram->base_phys, cache_sram->size); 142 - 143 - kfree(cache_sram); 144 - cache_sram = NULL; 145 - 146 - dev_info(&dev->dev, "MPC85xx Cache-SRAM driver unloaded\n"); 147 - }
-216
arch/powerpc/sysdev/fsl_85xx_l2ctlr.c
··· 1 - // SPDX-License-Identifier: GPL-2.0-or-later 2 - /* 3 - * Copyright 2009-2010, 2012 Freescale Semiconductor, Inc. 4 - * 5 - * QorIQ (P1/P2) L2 controller init for Cache-SRAM instantiation 6 - * 7 - * Author: Vivek Mahajan <vivek.mahajan@freescale.com> 8 - */ 9 - 10 - #include <linux/kernel.h> 11 - #include <linux/module.h> 12 - #include <linux/of_platform.h> 13 - #include <asm/io.h> 14 - 15 - #include "fsl_85xx_cache_ctlr.h" 16 - 17 - static char *sram_size; 18 - static char *sram_offset; 19 - struct mpc85xx_l2ctlr __iomem *l2ctlr; 20 - 21 - static int get_cache_sram_params(struct sram_parameters *sram_params) 22 - { 23 - unsigned long long addr; 24 - unsigned int size; 25 - 26 - if (!sram_size || (kstrtouint(sram_size, 0, &size) < 0)) 27 - return -EINVAL; 28 - 29 - if (!sram_offset || (kstrtoull(sram_offset, 0, &addr) < 0)) 30 - return -EINVAL; 31 - 32 - sram_params->sram_offset = addr; 33 - sram_params->sram_size = size; 34 - 35 - return 0; 36 - } 37 - 38 - static int __init get_size_from_cmdline(char *str) 39 - { 40 - if (!str) 41 - return 0; 42 - 43 - sram_size = str; 44 - return 1; 45 - } 46 - 47 - static int __init get_offset_from_cmdline(char *str) 48 - { 49 - if (!str) 50 - return 0; 51 - 52 - sram_offset = str; 53 - return 1; 54 - } 55 - 56 - __setup("cache-sram-size=", get_size_from_cmdline); 57 - __setup("cache-sram-offset=", get_offset_from_cmdline); 58 - 59 - static int mpc85xx_l2ctlr_of_probe(struct platform_device *dev) 60 - { 61 - long rval; 62 - unsigned int rem; 63 - unsigned char ways; 64 - const unsigned int *prop; 65 - unsigned int l2cache_size; 66 - struct sram_parameters sram_params; 67 - 68 - if (!dev->dev.of_node) { 69 - dev_err(&dev->dev, "Device's OF-node is NULL\n"); 70 - return -EINVAL; 71 - } 72 - 73 - prop = of_get_property(dev->dev.of_node, "cache-size", NULL); 74 - if (!prop) { 75 - dev_err(&dev->dev, "Missing L2 cache-size\n"); 76 - return -EINVAL; 77 - } 78 - l2cache_size = *prop; 79 - 80 - if (get_cache_sram_params(&sram_params)) 81 - return 0; /* fall back to L2 cache only */ 82 - 83 - rem = l2cache_size % sram_params.sram_size; 84 - ways = LOCK_WAYS_FULL * sram_params.sram_size / l2cache_size; 85 - if (rem || (ways & (ways - 1))) { 86 - dev_err(&dev->dev, "Illegal cache-sram-size in command line\n"); 87 - return -EINVAL; 88 - } 89 - 90 - l2ctlr = of_iomap(dev->dev.of_node, 0); 91 - if (!l2ctlr) { 92 - dev_err(&dev->dev, "Can't map L2 controller\n"); 93 - return -EINVAL; 94 - } 95 - 96 - /* 97 - * Write bits[0-17] to srbar0 98 - */ 99 - out_be32(&l2ctlr->srbar0, 100 - lower_32_bits(sram_params.sram_offset) & L2SRAM_BAR_MSK_LO18); 101 - 102 - /* 103 - * Write bits[18-21] to srbare0 104 - */ 105 - #ifdef CONFIG_PHYS_64BIT 106 - out_be32(&l2ctlr->srbarea0, 107 - upper_32_bits(sram_params.sram_offset) & L2SRAM_BARE_MSK_HI4); 108 - #endif 109 - 110 - clrsetbits_be32(&l2ctlr->ctl, L2CR_L2E, L2CR_L2FI); 111 - 112 - switch (ways) { 113 - case LOCK_WAYS_EIGHTH: 114 - setbits32(&l2ctlr->ctl, 115 - L2CR_L2E | L2CR_L2FI | L2CR_SRAM_EIGHTH); 116 - break; 117 - 118 - case LOCK_WAYS_TWO_EIGHTH: 119 - setbits32(&l2ctlr->ctl, 120 - L2CR_L2E | L2CR_L2FI | L2CR_SRAM_QUART); 121 - break; 122 - 123 - case LOCK_WAYS_HALF: 124 - setbits32(&l2ctlr->ctl, 125 - L2CR_L2E | L2CR_L2FI | L2CR_SRAM_HALF); 126 - break; 127 - 128 - case LOCK_WAYS_FULL: 129 - default: 130 - setbits32(&l2ctlr->ctl, 131 - L2CR_L2E | L2CR_L2FI | L2CR_SRAM_FULL); 132 - break; 133 - } 134 - eieio(); 135 - 136 - rval = instantiate_cache_sram(dev, sram_params); 137 - if (rval < 0) { 138 - dev_err(&dev->dev, "Can't instantiate Cache-SRAM\n"); 139 - iounmap(l2ctlr); 140 - return -EINVAL; 141 - } 142 - 143 - return 0; 144 - } 145 - 146 - static int mpc85xx_l2ctlr_of_remove(struct platform_device *dev) 147 - { 148 - BUG_ON(!l2ctlr); 149 - 150 - iounmap(l2ctlr); 151 - remove_cache_sram(dev); 152 - dev_info(&dev->dev, "MPC85xx L2 controller unloaded\n"); 153 - 154 - return 0; 155 - } 156 - 157 - static const struct of_device_id mpc85xx_l2ctlr_of_match[] = { 158 - { 159 - .compatible = "fsl,p2020-l2-cache-controller", 160 - }, 161 - { 162 - .compatible = "fsl,p2010-l2-cache-controller", 163 - }, 164 - { 165 - .compatible = "fsl,p1020-l2-cache-controller", 166 - }, 167 - { 168 - .compatible = "fsl,p1011-l2-cache-controller", 169 - }, 170 - { 171 - .compatible = "fsl,p1013-l2-cache-controller", 172 - }, 173 - { 174 - .compatible = "fsl,p1022-l2-cache-controller", 175 - }, 176 - { 177 - .compatible = "fsl,mpc8548-l2-cache-controller", 178 - }, 179 - { .compatible = "fsl,mpc8544-l2-cache-controller",}, 180 - { .compatible = "fsl,mpc8572-l2-cache-controller",}, 181 - { .compatible = "fsl,mpc8536-l2-cache-controller",}, 182 - { .compatible = "fsl,p1021-l2-cache-controller",}, 183 - { .compatible = "fsl,p1012-l2-cache-controller",}, 184 - { .compatible = "fsl,p1025-l2-cache-controller",}, 185 - { .compatible = "fsl,p1016-l2-cache-controller",}, 186 - { .compatible = "fsl,p1024-l2-cache-controller",}, 187 - { .compatible = "fsl,p1015-l2-cache-controller",}, 188 - { .compatible = "fsl,p1010-l2-cache-controller",}, 189 - { .compatible = "fsl,bsc9131-l2-cache-controller",}, 190 - {}, 191 - }; 192 - 193 - static struct platform_driver mpc85xx_l2ctlr_of_platform_driver = { 194 - .driver = { 195 - .name = "fsl-l2ctlr", 196 - .of_match_table = mpc85xx_l2ctlr_of_match, 197 - }, 198 - .probe = mpc85xx_l2ctlr_of_probe, 199 - .remove = mpc85xx_l2ctlr_of_remove, 200 - }; 201 - 202 - static __init int mpc85xx_l2ctlr_of_init(void) 203 - { 204 - return platform_driver_register(&mpc85xx_l2ctlr_of_platform_driver); 205 - } 206 - 207 - static void __exit mpc85xx_l2ctlr_of_exit(void) 208 - { 209 - platform_driver_unregister(&mpc85xx_l2ctlr_of_platform_driver); 210 - } 211 - 212 - subsys_initcall(mpc85xx_l2ctlr_of_init); 213 - module_exit(mpc85xx_l2ctlr_of_exit); 214 - 215 - MODULE_DESCRIPTION("Freescale MPC85xx L2 controller init"); 216 - MODULE_LICENSE("GPL v2");
+3 -2
arch/powerpc/sysdev/fsl_lbc.c
··· 18 18 #include <linux/types.h> 19 19 #include <linux/io.h> 20 20 #include <linux/of.h> 21 + #include <linux/of_address.h> 22 + #include <linux/of_irq.h> 21 23 #include <linux/slab.h> 22 24 #include <linux/sched.h> 23 25 #include <linux/platform_device.h> 24 26 #include <linux/interrupt.h> 25 27 #include <linux/mod_devicetable.h> 26 28 #include <linux/syscore_ops.h> 27 - #include <asm/prom.h> 28 29 #include <asm/fsl_lbc.h> 29 30 30 31 static DEFINE_SPINLOCK(fsl_lbc_lock); ··· 38 37 * 39 38 * This function converts a base address of lbc into the right format for the 40 39 * BR register. If the SOC has eLBC then it returns 32bit physical address 41 - * else it convers a 34bit local bus physical address to correct format of 40 + * else it converts a 34bit local bus physical address to correct format of 42 41 * 32bit address for BR register (Example: MPC8641). 43 42 */ 44 43 u32 fsl_lbc_addr(phys_addr_t addr_base)
+3 -1
arch/powerpc/sysdev/fsl_msi.c
··· 11 11 #include <linux/msi.h> 12 12 #include <linux/pci.h> 13 13 #include <linux/slab.h> 14 + #include <linux/of_address.h> 15 + #include <linux/of_irq.h> 14 16 #include <linux/of_platform.h> 15 17 #include <linux/interrupt.h> 18 + #include <linux/irqdomain.h> 16 19 #include <linux/seq_file.h> 17 20 #include <sysdev/fsl_soc.h> 18 - #include <asm/prom.h> 19 21 #include <asm/hw_irq.h> 20 22 #include <asm/ppc-pci.h> 21 23 #include <asm/mpic.h>
+3 -2
arch/powerpc/sysdev/fsl_pci.c
··· 22 22 #include <linux/interrupt.h> 23 23 #include <linux/memblock.h> 24 24 #include <linux/log2.h> 25 + #include <linux/of_address.h> 26 + #include <linux/of_irq.h> 25 27 #include <linux/platform_device.h> 26 28 #include <linux/slab.h> 27 29 #include <linux/suspend.h> ··· 31 29 #include <linux/uaccess.h> 32 30 33 31 #include <asm/io.h> 34 - #include <asm/prom.h> 35 32 #include <asm/pci-bridge.h> 36 33 #include <asm/ppc-pci.h> 37 34 #include <asm/machdep.h> ··· 219 218 * windows have implemented the default target value as 0xf 220 219 * for CCSR space.In all Freescale legacy devices the target 221 220 * of 0xf is reserved for local memory space. 9132 Rev1.0 222 - * now has local mempry space mapped to target 0x0 instead of 221 + * now has local memory space mapped to target 0x0 instead of 223 222 * 0xf. Hence adding a workaround to remove the target 0xf 224 223 * defined for memory space from Inbound window attributes. 225 224 */
+2
arch/powerpc/sysdev/fsl_rio.c
··· 505 505 if (rc) { 506 506 dev_err(&dev->dev, "Can't get %pOF property 'reg'\n", 507 507 rmu_node); 508 + of_node_put(rmu_node); 508 509 goto err_rmu; 509 510 } 511 + of_node_put(rmu_node); 510 512 rmu_regs_win = ioremap(rmu_regs.start, resource_size(&rmu_regs)); 511 513 if (!rmu_regs_win) { 512 514 dev_err(&dev->dev, "Unable to map rmu register window\n");
-1
arch/powerpc/sysdev/fsl_soc.c
··· 31 31 #include <asm/io.h> 32 32 #include <asm/irq.h> 33 33 #include <asm/time.h> 34 - #include <asm/prom.h> 35 34 #include <asm/machdep.h> 36 35 #include <sysdev/fsl_soc.h> 37 36 #include <mm/mmu_decl.h>
+4 -2
arch/powerpc/sysdev/ge/ge_pic.c
··· 14 14 #include <linux/kernel.h> 15 15 #include <linux/init.h> 16 16 #include <linux/irq.h> 17 + #include <linux/irqdomain.h> 17 18 #include <linux/interrupt.h> 19 + #include <linux/of_address.h> 20 + #include <linux/of_irq.h> 18 21 #include <linux/spinlock.h> 19 22 20 23 #include <asm/byteorder.h> 21 24 #include <asm/io.h> 22 - #include <asm/prom.h> 23 25 #include <asm/irq.h> 24 26 25 27 #include "ge_pic.h" ··· 152 150 }; 153 151 154 152 155 - /* When an interrupt is being configured, this call allows some flexibilty 153 + /* When an interrupt is being configured, this call allows some flexibility 156 154 * in deciding which irq_chip structure is used 157 155 */ 158 156 static int gef_pic_host_map(struct irq_domain *h, unsigned int virq,
+1 -1
arch/powerpc/sysdev/grackle.c
··· 9 9 #include <linux/kernel.h> 10 10 #include <linux/pci.h> 11 11 #include <linux/init.h> 12 + #include <linux/of.h> 12 13 13 14 #include <asm/io.h> 14 - #include <asm/prom.h> 15 15 #include <asm/pci-bridge.h> 16 16 #include <asm/grackle.h> 17 17
+1 -1
arch/powerpc/sysdev/i8259.c
··· 6 6 7 7 #include <linux/ioport.h> 8 8 #include <linux/interrupt.h> 9 + #include <linux/irqdomain.h> 9 10 #include <linux/kernel.h> 10 11 #include <linux/delay.h> 11 12 #include <asm/io.h> 12 13 #include <asm/i8259.h> 13 - #include <asm/prom.h> 14 14 15 15 static volatile void __iomem *pci_intack; /* RO, gives us the irq vector */ 16 16
-1
arch/powerpc/sysdev/indirect_pci.c
··· 12 12 #include <linux/init.h> 13 13 14 14 #include <asm/io.h> 15 - #include <asm/prom.h> 16 15 #include <asm/pci-bridge.h> 17 16 #include <asm/machdep.h> 18 17
+2 -1
arch/powerpc/sysdev/ipic.c
··· 18 18 #include <linux/device.h> 19 19 #include <linux/spinlock.h> 20 20 #include <linux/fsl_devices.h> 21 + #include <linux/irqdomain.h> 22 + #include <linux/of_address.h> 21 23 #include <asm/irq.h> 22 24 #include <asm/io.h> 23 - #include <asm/prom.h> 24 25 #include <asm/ipic.h> 25 26 26 27 #include "ipic.h"
+1 -1
arch/powerpc/sysdev/mmio_nvram.c
··· 10 10 #include <linux/fs.h> 11 11 #include <linux/init.h> 12 12 #include <linux/kernel.h> 13 + #include <linux/of_address.h> 13 14 #include <linux/spinlock.h> 14 15 #include <linux/types.h> 15 16 16 17 #include <asm/machdep.h> 17 18 #include <asm/nvram.h> 18 - #include <asm/prom.h> 19 19 20 20 static void __iomem *mmio_nvram_start; 21 21 static long mmio_nvram_len;
+2
arch/powerpc/sysdev/mpic.c
··· 30 30 #include <linux/syscore_ops.h> 31 31 #include <linux/ratelimit.h> 32 32 #include <linux/pgtable.h> 33 + #include <linux/of_address.h> 34 + #include <linux/of_irq.h> 33 35 34 36 #include <asm/ptrace.h> 35 37 #include <asm/signal.h>
+3 -2
arch/powerpc/sysdev/mpic_msgr.c
··· 7 7 */ 8 8 9 9 #include <linux/list.h> 10 + #include <linux/of_address.h> 11 + #include <linux/of_irq.h> 10 12 #include <linux/of_platform.h> 11 13 #include <linux/errno.h> 12 14 #include <linux/err.h> 13 15 #include <linux/export.h> 14 16 #include <linux/slab.h> 15 - #include <asm/prom.h> 16 17 #include <asm/hw_irq.h> 17 18 #include <asm/ppc-pci.h> 18 19 #include <asm/mpic_msgr.h> ··· 100 99 EXPORT_SYMBOL_GPL(mpic_msgr_disable); 101 100 102 101 /* The following three functions are used to compute the order and number of 103 - * the message register blocks. They are clearly very inefficent. However, 102 + * the message register blocks. They are clearly very inefficient. However, 104 103 * they are called *only* a few times during device initialization. 105 104 */ 106 105 static unsigned int mpic_msgr_number_of_blocks(void)
+3 -2
arch/powerpc/sysdev/mpic_msi.c
··· 4 4 */ 5 5 6 6 #include <linux/irq.h> 7 + #include <linux/irqdomain.h> 8 + #include <linux/of_irq.h> 7 9 #include <linux/bitmap.h> 8 10 #include <linux/msi.h> 9 11 #include <asm/mpic.h> 10 - #include <asm/prom.h> 11 12 #include <asm/hw_irq.h> 12 13 #include <asm/ppc-pci.h> 13 14 #include <asm/msi_bitmap.h> ··· 38 37 /* Reserve source numbers we know are reserved in the HW. 39 38 * 40 39 * This is a bit of a mix of U3 and U4 reserves but that's going 41 - * to work fine, we have plenty enugh numbers left so let's just 40 + * to work fine, we have plenty enough numbers left so let's just 42 41 * mark anything we don't like reserved. 43 42 */ 44 43 for (i = 0; i < 8; i++)
+1 -1
arch/powerpc/sysdev/mpic_timer.c
··· 255 255 256 256 /** 257 257 * mpic_stop_timer - stop hardware timer 258 - * @handle: the timer to be stoped 258 + * @handle: the timer to be stopped 259 259 * 260 260 * The timer periodically generates an interrupt. Unless user stops the timer. 261 261 */
+2 -2
arch/powerpc/sysdev/mpic_u3msi.c
··· 5 5 */ 6 6 7 7 #include <linux/irq.h> 8 + #include <linux/irqdomain.h> 8 9 #include <linux/msi.h> 9 10 #include <asm/mpic.h> 10 - #include <asm/prom.h> 11 11 #include <asm/hw_irq.h> 12 12 #include <asm/ppc-pci.h> 13 13 #include <asm/msi_bitmap.h> ··· 78 78 79 79 /* U4 PCIe MSIs need to write to the special register in 80 80 * the bridge that generates interrupts. There should be 81 - * theorically a register at 0xf8005000 where you just write 81 + * theoretically a register at 0xf8005000 where you just write 82 82 * the MSI number and that triggers the right interrupt, but 83 83 * unfortunately, this is busted in HW, the bridge endian swaps 84 84 * the value and hits the wrong nibble in the register.
+1
arch/powerpc/sysdev/msi_bitmap.c
··· 8 8 #include <linux/kmemleak.h> 9 9 #include <linux/bitmap.h> 10 10 #include <linux/memblock.h> 11 + #include <linux/of.h> 11 12 #include <asm/msi_bitmap.h> 12 13 #include <asm/setup.h> 13 14
+2 -1
arch/powerpc/sysdev/pmi.c
··· 17 17 #include <linux/spinlock.h> 18 18 #include <linux/module.h> 19 19 #include <linux/workqueue.h> 20 + #include <linux/of_address.h> 20 21 #include <linux/of_device.h> 22 + #include <linux/of_irq.h> 21 23 #include <linux/of_platform.h> 22 24 23 25 #include <asm/io.h> 24 26 #include <asm/pmi.h> 25 - #include <asm/prom.h> 26 27 27 28 struct pmi_data { 28 29 struct list_head handler;
+1 -1
arch/powerpc/sysdev/rtc_cmos_setup.c
··· 14 14 #include <linux/init.h> 15 15 #include <linux/module.h> 16 16 #include <linux/mc146818rtc.h> 17 + #include <linux/of_address.h> 17 18 18 - #include <asm/prom.h> 19 19 20 20 static int __init add_rtc(void) 21 21 {
+2 -1
arch/powerpc/sysdev/tsi108_dev.c
··· 16 16 #include <linux/device.h> 17 17 #include <linux/etherdevice.h> 18 18 #include <linux/platform_device.h> 19 + #include <linux/of_address.h> 20 + #include <linux/of_irq.h> 19 21 #include <linux/of_net.h> 20 22 #include <asm/tsi108.h> 21 23 22 24 #include <linux/atomic.h> 23 25 #include <asm/io.h> 24 26 #include <asm/irq.h> 25 - #include <asm/prom.h> 26 27 #include <mm/mmu_decl.h> 27 28 28 29 #undef DEBUG
+2 -1
arch/powerpc/sysdev/tsi108_pci.c
··· 12 12 #include <linux/init.h> 13 13 #include <linux/pci.h> 14 14 #include <linux/irq.h> 15 + #include <linux/irqdomain.h> 15 16 #include <linux/interrupt.h> 17 + #include <linux/of_address.h> 16 18 17 19 #include <asm/byteorder.h> 18 20 #include <asm/io.h> ··· 25 23 #include <asm/tsi108.h> 26 24 #include <asm/tsi108_pci.h> 27 25 #include <asm/tsi108_irq.h> 28 - #include <asm/prom.h> 29 26 30 27 #undef DEBUG 31 28 #ifdef DEBUG
+2 -1
arch/powerpc/sysdev/xics/icp-native.c
··· 6 6 #include <linux/types.h> 7 7 #include <linux/kernel.h> 8 8 #include <linux/irq.h> 9 + #include <linux/irqdomain.h> 9 10 #include <linux/smp.h> 10 11 #include <linux/interrupt.h> 11 12 #include <linux/init.h> 12 13 #include <linux/cpu.h> 13 14 #include <linux/of.h> 15 + #include <linux/of_address.h> 14 16 #include <linux/spinlock.h> 15 17 #include <linux/module.h> 16 18 17 - #include <asm/prom.h> 18 19 #include <asm/io.h> 19 20 #include <asm/smp.h> 20 21 #include <asm/irq.h>
+1
arch/powerpc/sysdev/xics/icp-opal.c
··· 196 196 197 197 printk("XICS: Using OPAL ICP fallbacks\n"); 198 198 199 + of_node_put(np); 199 200 return 0; 200 201 } 201 202
+1 -1
arch/powerpc/sysdev/xics/ics-native.c
··· 15 15 #include <linux/init.h> 16 16 #include <linux/cpu.h> 17 17 #include <linux/of.h> 18 + #include <linux/of_address.h> 18 19 #include <linux/spinlock.h> 19 20 #include <linux/msi.h> 20 21 #include <linux/list.h> 21 22 22 - #include <asm/prom.h> 23 23 #include <asm/smp.h> 24 24 #include <asm/machdep.h> 25 25 #include <asm/irq.h>
-1
arch/powerpc/sysdev/xics/ics-opal.c
··· 18 18 #include <linux/spinlock.h> 19 19 #include <linux/msi.h> 20 20 21 - #include <asm/prom.h> 22 21 #include <asm/smp.h> 23 22 #include <asm/machdep.h> 24 23 #include <asm/irq.h>
-1
arch/powerpc/sysdev/xics/ics-rtas.c
··· 10 10 #include <linux/spinlock.h> 11 11 #include <linux/msi.h> 12 12 13 - #include <asm/prom.h> 14 13 #include <asm/smp.h> 15 14 #include <asm/machdep.h> 16 15 #include <asm/irq.h>
+3 -3
arch/powerpc/sysdev/xics/xics-common.c
··· 6 6 #include <linux/threads.h> 7 7 #include <linux/kernel.h> 8 8 #include <linux/irq.h> 9 + #include <linux/irqdomain.h> 9 10 #include <linux/debugfs.h> 10 11 #include <linux/smp.h> 11 12 #include <linux/interrupt.h> ··· 18 17 #include <linux/spinlock.h> 19 18 #include <linux/delay.h> 20 19 21 - #include <asm/prom.h> 22 20 #include <asm/io.h> 23 21 #include <asm/smp.h> 24 22 #include <asm/machdep.h> ··· 146 146 147 147 #endif /* CONFIG_SMP */ 148 148 149 - void xics_teardown_cpu(void) 149 + noinstr void xics_teardown_cpu(void) 150 150 { 151 151 struct xics_cppr *os_cppr = this_cpu_ptr(&xics_cppr); 152 152 ··· 159 159 icp_ops->teardown_cpu(); 160 160 } 161 161 162 - void xics_kexec_teardown_cpu(int secondary) 162 + noinstr void xics_kexec_teardown_cpu(int secondary) 163 163 { 164 164 xics_teardown_cpu(); 165 165
+3 -3
arch/powerpc/sysdev/xive/common.c
··· 9 9 #include <linux/threads.h> 10 10 #include <linux/kernel.h> 11 11 #include <linux/irq.h> 12 + #include <linux/irqdomain.h> 12 13 #include <linux/debugfs.h> 13 14 #include <linux/smp.h> 14 15 #include <linux/interrupt.h> ··· 22 21 #include <linux/msi.h> 23 22 #include <linux/vmalloc.h> 24 23 25 - #include <asm/prom.h> 26 24 #include <asm/io.h> 27 25 #include <asm/smp.h> 28 26 #include <asm/machdep.h> ··· 1241 1241 return 0; 1242 1242 } 1243 1243 1244 - static void xive_cleanup_cpu_ipi(unsigned int cpu, struct xive_cpu *xc) 1244 + noinstr static void xive_cleanup_cpu_ipi(unsigned int cpu, struct xive_cpu *xc) 1245 1245 { 1246 1246 unsigned int xive_ipi_irq = xive_ipi_cpu_to_irq(cpu); 1247 1247 ··· 1634 1634 1635 1635 #endif /* CONFIG_SMP */ 1636 1636 1637 - void xive_teardown_cpu(void) 1637 + noinstr void xive_teardown_cpu(void) 1638 1638 { 1639 1639 struct xive_cpu *xc = __this_cpu_read(xive_cpu); 1640 1640 unsigned int cpu = smp_processor_id();
+2 -2
arch/powerpc/sysdev/xive/native.c
··· 13 13 #include <linux/seq_file.h> 14 14 #include <linux/init.h> 15 15 #include <linux/of.h> 16 + #include <linux/of_address.h> 16 17 #include <linux/slab.h> 17 18 #include <linux/spinlock.h> 18 19 #include <linux/delay.h> ··· 22 21 #include <linux/kmemleak.h> 23 22 24 23 #include <asm/machdep.h> 25 - #include <asm/prom.h> 26 24 #include <asm/io.h> 27 25 #include <asm/smp.h> 28 26 #include <asm/irq.h> ··· 617 617 618 618 xive_tima_os = r.start; 619 619 620 - /* Grab size of provisionning pages */ 620 + /* Grab size of provisioning pages */ 621 621 xive_parse_provisioning(np); 622 622 623 623 /* Switch the XIVE to exploitation mode */
+7 -2
arch/powerpc/sysdev/xive/spapr.c
··· 11 11 #include <linux/interrupt.h> 12 12 #include <linux/init.h> 13 13 #include <linux/of.h> 14 + #include <linux/of_address.h> 15 + #include <linux/of_fdt.h> 14 16 #include <linux/slab.h> 15 17 #include <linux/spinlock.h> 16 18 #include <linux/cpumask.h> ··· 832 830 /* Resource 1 is the OS ring TIMA */ 833 831 if (of_address_to_resource(np, 1, &r)) { 834 832 pr_err("Failed to get thread mgmnt area resource\n"); 835 - return false; 833 + goto err_put; 836 834 } 837 835 tima = ioremap(r.start, resource_size(&r)); 838 836 if (!tima) { 839 837 pr_err("Failed to map thread mgmnt area\n"); 840 - return false; 838 + goto err_put; 841 839 } 842 840 843 841 if (!xive_get_max_prio(&max_prio)) ··· 873 871 if (!xive_core_init(np, &xive_spapr_ops, tima, TM_QW1_OS, max_prio)) 874 872 goto err_mem_free; 875 873 874 + of_node_put(np); 876 875 pr_info("Using %dkB queues\n", 1 << (xive_queue_shift - 10)); 877 876 return true; 878 877 ··· 881 878 xive_irq_bitmap_remove_all(); 882 879 err_unmap: 883 880 iounmap(tima); 881 + err_put: 882 + of_node_put(np); 884 883 return false; 885 884 } 886 885
+1 -1
arch/powerpc/xmon/ppc-opc.c
··· 408 408 #define FXM4 FXM + 1 409 409 { 0xff, 12, insert_fxm, extract_fxm, 410 410 PPC_OPERAND_OPTIONAL | PPC_OPERAND_OPTIONAL_VALUE}, 411 - /* If the FXM4 operand is ommitted, use the sentinel value -1. */ 411 + /* If the FXM4 operand is omitted, use the sentinel value -1. */ 412 412 { -1, -1, NULL, NULL, 0}, 413 413 414 414 /* The IMM20 field in an LI instruction. */
+7 -9
arch/powerpc/xmon/xmon.c
··· 31 31 #include <asm/ptrace.h> 32 32 #include <asm/smp.h> 33 33 #include <asm/string.h> 34 - #include <asm/prom.h> 35 34 #include <asm/machdep.h> 36 35 #include <asm/xmon.h> 37 36 #include <asm/processor.h> ··· 372 373 * set_ciabr() - set the CIABR 373 374 * @addr: The value to set. 374 375 * 375 - * This function sets the correct privilege value into the the HW 376 + * This function sets the correct privilege value into the HW 376 377 * breakpoint address before writing it up in the CIABR register. 377 378 */ 378 379 static void set_ciabr(unsigned long addr) ··· 920 921 bp->enabled = 0; 921 922 continue; 922 923 } 923 - if (IS_MTMSRD(instr) || IS_RFID(instr)) { 924 - printf("Breakpoint at %lx is on an mtmsrd or rfid " 925 - "instruction, disabling it\n", bp->address); 924 + if (!can_single_step(ppc_inst_val(instr))) { 925 + printf("Breakpoint at %lx is on an instruction that can't be single stepped, disabling it\n", 926 + bp->address); 926 927 bp->enabled = 0; 927 928 continue; 928 929 } ··· 1469 1470 printf("Can't read instruction at address %lx\n", addr); 1470 1471 return 0; 1471 1472 } 1472 - if (IS_MTMSRD(instr) || IS_RFID(instr)) { 1473 - printf("Breakpoints may not be placed on mtmsrd or rfid " 1474 - "instructions\n"); 1473 + if (!can_single_step(ppc_inst_val(instr))) { 1474 + printf("Breakpoints may not be placed on instructions that can't be single stepped\n"); 1475 1475 return 0; 1476 1476 } 1477 1477 return 1; ··· 2022 2024 if (!cpu_has_feature(CPU_FTR_ARCH_206)) 2023 2025 return; 2024 2026 2025 - /* Actually some of these pre-date 2.06, but whatevs */ 2027 + /* Actually some of these pre-date 2.06, but whatever */ 2026 2028 2027 2029 printf("srr0 = %.16lx srr1 = %.16lx dsisr = %.8lx\n", 2028 2030 mfspr(SPRN_SRR0), mfspr(SPRN_SRR1), mfspr(SPRN_DSISR));
+1 -1
drivers/crypto/nx/nx-common-powernv.c
··· 827 827 goto err_out; 828 828 829 829 vas_init_rx_win_attr(&rxattr, coproc->ct); 830 - rxattr.rx_fifo = (void *)rx_fifo; 830 + rxattr.rx_fifo = rx_fifo; 831 831 rxattr.rx_fifo_size = fifo_size; 832 832 rxattr.lnotify_lpid = lpid; 833 833 rxattr.lnotify_pid = pid;
+6
drivers/macintosh/Kconfig
··· 44 44 config ADB_CUDA 45 45 bool "Support for Cuda/Egret based Macs and PowerMacs" 46 46 depends on (ADB || PPC_PMAC) && !PPC_PMAC64 47 + select RTC_LIB 47 48 help 48 49 This provides support for Cuda/Egret based Macintosh and 49 50 Power Macintosh systems. This includes most m68k based Macs, ··· 58 57 config ADB_PMU 59 58 bool "Support for PMU based PowerMacs and PowerBooks" 60 59 depends on PPC_PMAC || MAC 60 + select RTC_LIB 61 61 help 62 62 On PowerBooks, iBooks, and recent iMacs and Power Macintoshes, the 63 63 PMU is an embedded microprocessor whose primary function is to ··· 68 66 RAM and the RTC (real time clock) chip. Say Y to enable support for 69 67 this device; you should do so if your machine is one of those 70 68 mentioned above. 69 + 70 + config ADB_PMU_EVENT 71 + def_bool y 72 + depends on ADB_PMU && INPUT=y 71 73 72 74 config ADB_PMU_LED 73 75 bool "Support for the Power/iBook front LED"
+2 -1
drivers/macintosh/Makefile
··· 12 12 obj-$(CONFIG_INPUT_ADBHID) += adbhid.o 13 13 obj-$(CONFIG_ANSLCD) += ans-lcd.o 14 14 15 - obj-$(CONFIG_ADB_PMU) += via-pmu.o via-pmu-event.o 15 + obj-$(CONFIG_ADB_PMU) += via-pmu.o 16 + obj-$(CONFIG_ADB_PMU_EVENT) += via-pmu-event.o 16 17 obj-$(CONFIG_ADB_PMU_LED) += via-pmu-led.o 17 18 obj-$(CONFIG_PMAC_BACKLIGHT) += via-pmu-backlight.o 18 19 obj-$(CONFIG_ADB_CUDA) += via-cuda.o
+1 -1
drivers/macintosh/adb.c
··· 38 38 #include <linux/kthread.h> 39 39 #include <linux/platform_device.h> 40 40 #include <linux/mutex.h> 41 + #include <linux/of.h> 41 42 42 43 #include <linux/uaccess.h> 43 44 #ifdef CONFIG_PPC 44 - #include <asm/prom.h> 45 45 #include <asm/machdep.h> 46 46 #endif 47 47
+3 -6
drivers/macintosh/adbhid.c
··· 789 789 790 790 switch (default_id) { 791 791 case ADB_KEYBOARD: 792 - hid->keycode = kmalloc(sizeof(adb_to_linux_keycodes), GFP_KERNEL); 792 + hid->keycode = kmemdup(adb_to_linux_keycodes, 793 + sizeof(adb_to_linux_keycodes), GFP_KERNEL); 793 794 if (!hid->keycode) { 794 795 err = -ENOMEM; 795 796 goto fail; 796 797 } 797 798 798 799 sprintf(hid->name, "ADB keyboard"); 799 - 800 - memcpy(hid->keycode, adb_to_linux_keycodes, sizeof(adb_to_linux_keycodes)); 801 800 802 801 switch (original_handler_id) { 803 802 default: ··· 816 817 case 0xC4: case 0xC7: 817 818 keyboard_type = "ISO, swapping keys"; 818 819 input_dev->id.version = ADB_KEYBOARD_ISO; 819 - i = hid->keycode[10]; 820 - hid->keycode[10] = hid->keycode[50]; 821 - hid->keycode[50] = i; 820 + swap(hid->keycode[10], hid->keycode[50]); 822 821 break; 823 822 824 823 case 0x12: case 0x15: case 0x16: case 0x17: case 0x1A:
+1 -1
drivers/macintosh/ams/ams-core.c
··· 50 50 ams_sensors(&x, &y, &z); 51 51 mutex_unlock(&ams_info.lock); 52 52 53 - return snprintf(buf, PAGE_SIZE, "%d %d %d\n", x, y, z); 53 + return sysfs_emit(buf, "%d %d %d\n", x, y, z); 54 54 } 55 55 56 56 static DEVICE_ATTR(current, S_IRUGO, ams_show_current, NULL);
+1 -5
drivers/macintosh/ams/ams-i2c.c
··· 256 256 257 257 int __init ams_i2c_init(struct device_node *np) 258 258 { 259 - int result; 260 - 261 259 /* Set implementation stuff */ 262 260 ams_info.of_node = np; 263 261 ams_info.exit = ams_i2c_exit; ··· 264 266 ams_info.clear_irq = ams_i2c_clear_irq; 265 267 ams_info.bustype = BUS_I2C; 266 268 267 - result = i2c_add_driver(&ams_i2c_driver); 268 - 269 - return result; 269 + return i2c_add_driver(&ams_i2c_driver); 270 270 }
+1 -1
drivers/macintosh/ans-lcd.c
··· 11 11 #include <linux/module.h> 12 12 #include <linux/delay.h> 13 13 #include <linux/fs.h> 14 + #include <linux/of.h> 14 15 15 16 #include <linux/uaccess.h> 16 17 #include <asm/sections.h> 17 - #include <asm/prom.h> 18 18 #include <asm/io.h> 19 19 20 20 #include "ans-lcd.h"
+4 -1
drivers/macintosh/macio-adb.c
··· 9 9 #include <linux/spinlock.h> 10 10 #include <linux/interrupt.h> 11 11 #include <linux/pgtable.h> 12 - #include <asm/prom.h> 12 + #include <linux/of.h> 13 + #include <linux/of_address.h> 14 + #include <linux/of_irq.h> 13 15 #include <linux/adb.h> 16 + 14 17 #include <asm/io.h> 15 18 #include <asm/hydra.h> 16 19 #include <asm/irq.h>
+5 -4
drivers/macintosh/macio_asic.c
··· 20 20 #include <linux/init.h> 21 21 #include <linux/module.h> 22 22 #include <linux/slab.h> 23 + #include <linux/of.h> 23 24 #include <linux/of_address.h> 25 + #include <linux/of_device.h> 24 26 #include <linux/of_irq.h> 25 27 26 28 #include <asm/machdep.h> 27 29 #include <asm/macio.h> 28 30 #include <asm/pmac_feature.h> 29 - #include <asm/prom.h> 30 31 31 32 #undef DEBUG 32 33 ··· 473 472 root_res = &rdev->resource[0]; 474 473 475 474 /* First scan 1st level */ 476 - for (np = NULL; (np = of_get_next_child(pnode, np)) != NULL;) { 475 + for_each_child_of_node(pnode, np) { 477 476 if (macio_skip_device(np)) 478 477 continue; 479 478 of_node_get(np); ··· 490 489 /* Add media bay devices if any */ 491 490 if (mbdev) { 492 491 pnode = mbdev->ofdev.dev.of_node; 493 - for (np = NULL; (np = of_get_next_child(pnode, np)) != NULL;) { 492 + for_each_child_of_node(pnode, np) { 494 493 if (macio_skip_device(np)) 495 494 continue; 496 495 of_node_get(np); ··· 503 502 /* Add serial ports if any */ 504 503 if (sdev) { 505 504 pnode = sdev->ofdev.dev.of_node; 506 - for (np = NULL; (np = of_get_next_child(pnode, np)) != NULL;) { 505 + for_each_child_of_node(pnode, np) { 507 506 if (macio_skip_device(np)) 508 507 continue; 509 508 of_node_get(np);
+2
drivers/macintosh/macio_sysfs.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 #include <linux/kernel.h> 3 + #include <linux/of.h> 4 + #include <linux/of_device.h> 3 5 #include <linux/stat.h> 4 6 #include <asm/macio.h> 5 7
+1 -1
drivers/macintosh/mediabay.c
··· 17 17 #include <linux/kthread.h> 18 18 #include <linux/mutex.h> 19 19 #include <linux/pgtable.h> 20 - #include <asm/prom.h> 20 + 21 21 #include <asm/io.h> 22 22 #include <asm/machdep.h> 23 23 #include <asm/pmac_feature.h>
-1
drivers/macintosh/rack-meter.c
··· 27 27 #include <linux/of_irq.h> 28 28 29 29 #include <asm/io.h> 30 - #include <asm/prom.h> 31 30 #include <asm/machdep.h> 32 31 #include <asm/pmac_feature.h> 33 32 #include <asm/dbdma.h>
+3 -4
drivers/macintosh/smu.c
··· 41 41 42 42 #include <asm/byteorder.h> 43 43 #include <asm/io.h> 44 - #include <asm/prom.h> 45 44 #include <asm/machdep.h> 46 45 #include <asm/pmac_feature.h> 47 46 #include <asm/smu.h> ··· 1086 1087 unsigned long flags; 1087 1088 1088 1089 pp = kzalloc(sizeof(struct smu_private), GFP_KERNEL); 1089 - if (pp == 0) 1090 + if (!pp) 1090 1091 return -ENOMEM; 1091 1092 spin_lock_init(&pp->lock); 1092 1093 pp->mode = smu_file_commands; ··· 1253 1254 __poll_t mask = 0; 1254 1255 unsigned long flags; 1255 1256 1256 - if (pp == 0) 1257 + if (!pp) 1257 1258 return 0; 1258 1259 1259 1260 if (pp->mode == smu_file_commands) { ··· 1276 1277 unsigned long flags; 1277 1278 unsigned int busy; 1278 1279 1279 - if (pp == 0) 1280 + if (!pp) 1280 1281 return 0; 1281 1282 1282 1283 file->private_data = NULL;
-1
drivers/macintosh/therm_adt746x.c
··· 27 27 #include <linux/freezer.h> 28 28 #include <linux/of_platform.h> 29 29 30 - #include <asm/prom.h> 31 30 #include <asm/machdep.h> 32 31 #include <asm/io.h> 33 32 #include <asm/sections.h>
-1
drivers/macintosh/therm_windtunnel.c
··· 38 38 #include <linux/kthread.h> 39 39 #include <linux/of_platform.h> 40 40 41 - #include <asm/prom.h> 42 41 #include <asm/machdep.h> 43 42 #include <asm/io.h> 44 43 #include <asm/sections.h>
+6 -4
drivers/macintosh/via-cuda.c
··· 18 18 #include <linux/cuda.h> 19 19 #include <linux/spinlock.h> 20 20 #include <linux/interrupt.h> 21 + #include <linux/of_address.h> 22 + #include <linux/of_irq.h> 23 + 21 24 #ifdef CONFIG_PPC 22 - #include <asm/prom.h> 23 25 #include <asm/machdep.h> 24 26 #include <asm/pmac_feature.h> 25 27 #else ··· 239 237 const u32 *reg; 240 238 int err; 241 239 242 - if (vias != 0) 240 + if (vias) 243 241 return 1; 244 242 vias = of_find_node_by_name(NULL, "via-cuda"); 245 - if (vias == 0) 243 + if (!vias) 246 244 return 0; 247 245 248 246 reg = of_get_property(vias, "reg", NULL); ··· 520 518 req->reply_len = 0; 521 519 522 520 spin_lock_irqsave(&cuda_lock, flags); 523 - if (current_req != 0) { 521 + if (current_req) { 524 522 last_req->next = req; 525 523 last_req = req; 526 524 } else {
-1
drivers/macintosh/via-pmu-backlight.c
··· 12 12 #include <linux/adb.h> 13 13 #include <linux/pmu.h> 14 14 #include <asm/backlight.h> 15 - #include <asm/prom.h> 16 15 17 16 #define MAX_PMU_LEVEL 0xFF 18 17
+1 -1
drivers/macintosh/via-pmu-led.c
··· 25 25 #include <linux/leds.h> 26 26 #include <linux/adb.h> 27 27 #include <linux/pmu.h> 28 - #include <asm/prom.h> 28 + #include <linux/of.h> 29 29 30 30 static spinlock_t pmu_blink_lock; 31 31 static struct adb_request pmu_blink_req;
+2 -7
drivers/macintosh/via-pmu.c
··· 59 59 #include <asm/pmac_feature.h> 60 60 #include <asm/pmac_pfunc.h> 61 61 #include <asm/pmac_low_i2c.h> 62 - #include <asm/prom.h> 63 62 #include <asm/mmu_context.h> 64 63 #include <asm/cputable.h> 65 64 #include <asm/time.h> ··· 160 161 static int gpio_irq = 0; 161 162 static int gpio_irq_enabled = -1; 162 163 static volatile int pmu_suspended; 163 - static spinlock_t pmu_lock; 164 + static DEFINE_SPINLOCK(pmu_lock); 164 165 static u8 pmu_intr_mask; 165 166 static int pmu_version; 166 167 static int drop_interrupts; ··· 304 305 goto fail; 305 306 } 306 307 307 - spin_lock_init(&pmu_lock); 308 - 309 308 pmu_has_adb = 1; 310 309 311 310 pmu_intr_mask = PMU_INT_PCEJECT | ··· 384 387 return 0; 385 388 386 389 pmu_kind = PMU_UNKNOWN; 387 - 388 - spin_lock_init(&pmu_lock); 389 390 390 391 pmu_has_adb = 1; 391 392 ··· 1454 1459 pmu_pass_intr(data, len); 1455 1460 /* len == 6 is probably a bad check. But how do I 1456 1461 * know what PMU versions send what events here? */ 1457 - if (len == 6) { 1462 + if (IS_ENABLED(CONFIG_ADB_PMU_EVENT) && len == 6) { 1458 1463 via_pmu_event(PMU_EVT_POWER, !!(data[1]&8)); 1459 1464 via_pmu_event(PMU_EVT_LID, data[1]&1); 1460 1465 }
+1 -1
drivers/macintosh/windfarm_ad7417_sensor.c
··· 13 13 #include <linux/init.h> 14 14 #include <linux/wait.h> 15 15 #include <linux/i2c.h> 16 - #include <asm/prom.h> 16 + 17 17 #include <asm/machdep.h> 18 18 #include <asm/io.h> 19 19 #include <asm/sections.h>
-2
drivers/macintosh/windfarm_core.c
··· 35 35 #include <linux/mutex.h> 36 36 #include <linux/freezer.h> 37 37 38 - #include <asm/prom.h> 39 - 40 38 #include "windfarm.h" 41 39 42 40 #define VERSION "0.2"
-2
drivers/macintosh/windfarm_cpufreq_clamp.c
··· 10 10 #include <linux/cpu.h> 11 11 #include <linux/cpufreq.h> 12 12 13 - #include <asm/prom.h> 14 - 15 13 #include "windfarm.h" 16 14 17 15 #define VERSION "0.3"
+1 -1
drivers/macintosh/windfarm_fcu_controls.c
··· 14 14 #include <linux/init.h> 15 15 #include <linux/wait.h> 16 16 #include <linux/i2c.h> 17 - #include <asm/prom.h> 17 + 18 18 #include <asm/machdep.h> 19 19 #include <asm/io.h> 20 20 #include <asm/sections.h>
-1
drivers/macintosh/windfarm_lm75_sensor.c
··· 15 15 #include <linux/wait.h> 16 16 #include <linux/i2c.h> 17 17 #include <linux/of_device.h> 18 - #include <asm/prom.h> 19 18 #include <asm/machdep.h> 20 19 #include <asm/io.h> 21 20 #include <asm/sections.h>
+1 -1
drivers/macintosh/windfarm_lm87_sensor.c
··· 13 13 #include <linux/init.h> 14 14 #include <linux/wait.h> 15 15 #include <linux/i2c.h> 16 - #include <asm/prom.h> 16 + 17 17 #include <asm/machdep.h> 18 18 #include <asm/io.h> 19 19 #include <asm/sections.h>
+1 -1
drivers/macintosh/windfarm_max6690_sensor.c
··· 10 10 #include <linux/init.h> 11 11 #include <linux/slab.h> 12 12 #include <linux/i2c.h> 13 - #include <asm/prom.h> 13 + 14 14 #include <asm/pmac_low_i2c.h> 15 15 16 16 #include "windfarm.h"
+2
drivers/macintosh/windfarm_mpu.h
··· 8 8 #ifndef __WINDFARM_MPU_H 9 9 #define __WINDFARM_MPU_H 10 10 11 + #include <linux/of.h> 12 + 11 13 typedef unsigned short fu16; 12 14 typedef int fs32; 13 15 typedef short fs16;
+3 -1
drivers/macintosh/windfarm_pm112.c
··· 12 12 #include <linux/device.h> 13 13 #include <linux/platform_device.h> 14 14 #include <linux/reboot.h> 15 - #include <asm/prom.h> 15 + #include <linux/of.h> 16 + #include <linux/slab.h> 17 + 16 18 #include <asm/smu.h> 17 19 18 20 #include "windfarm.h"
+2 -1
drivers/macintosh/windfarm_pm121.c
··· 201 201 #include <linux/kmod.h> 202 202 #include <linux/device.h> 203 203 #include <linux/platform_device.h> 204 - #include <asm/prom.h> 204 + #include <linux/of.h> 205 + 205 206 #include <asm/machdep.h> 206 207 #include <asm/io.h> 207 208 #include <asm/sections.h>
+1 -1
drivers/macintosh/windfarm_pm72.c
··· 11 11 #include <linux/device.h> 12 12 #include <linux/platform_device.h> 13 13 #include <linux/reboot.h> 14 - #include <asm/prom.h> 14 + 15 15 #include <asm/smu.h> 16 16 17 17 #include "windfarm.h"
+2 -1
drivers/macintosh/windfarm_pm81.c
··· 102 102 #include <linux/kmod.h> 103 103 #include <linux/device.h> 104 104 #include <linux/platform_device.h> 105 - #include <asm/prom.h> 105 + #include <linux/of.h> 106 + 106 107 #include <asm/machdep.h> 107 108 #include <asm/io.h> 108 109 #include <asm/sections.h>
+2 -1
drivers/macintosh/windfarm_pm91.c
··· 37 37 #include <linux/kmod.h> 38 38 #include <linux/device.h> 39 39 #include <linux/platform_device.h> 40 - #include <asm/prom.h> 40 + #include <linux/of.h> 41 + 41 42 #include <asm/machdep.h> 42 43 #include <asm/io.h> 43 44 #include <asm/sections.h>
+1 -1
drivers/macintosh/windfarm_rm31.c
··· 11 11 #include <linux/device.h> 12 12 #include <linux/platform_device.h> 13 13 #include <linux/reboot.h> 14 - #include <asm/prom.h> 14 + 15 15 #include <asm/smu.h> 16 16 17 17 #include "windfarm.h"
+2 -1
drivers/macintosh/windfarm_smu_controls.c
··· 14 14 #include <linux/init.h> 15 15 #include <linux/wait.h> 16 16 #include <linux/completion.h> 17 - #include <asm/prom.h> 17 + #include <linux/of.h> 18 + 18 19 #include <asm/machdep.h> 19 20 #include <asm/io.h> 20 21 #include <asm/sections.h>
+1 -1
drivers/macintosh/windfarm_smu_sat.c
··· 13 13 #include <linux/wait.h> 14 14 #include <linux/i2c.h> 15 15 #include <linux/mutex.h> 16 - #include <asm/prom.h> 16 + 17 17 #include <asm/smu.h> 18 18 #include <asm/pmac_low_i2c.h> 19 19
+2 -1
drivers/macintosh/windfarm_smu_sensors.c
··· 14 14 #include <linux/init.h> 15 15 #include <linux/wait.h> 16 16 #include <linux/completion.h> 17 - #include <asm/prom.h> 17 + #include <linux/of.h> 18 + 18 19 #include <asm/machdep.h> 19 20 #include <asm/io.h> 20 21 #include <asm/sections.h>
+1
drivers/misc/cxl/api.c
··· 12 12 #include <linux/pseudo_fs.h> 13 13 #include <linux/sched/mm.h> 14 14 #include <linux/mmu_context.h> 15 + #include <linux/irqdomain.h> 15 16 16 17 #include "cxl.h" 17 18
+2
drivers/misc/cxl/cxl.h
··· 25 25 26 26 extern uint cxl_verbose; 27 27 28 + struct property; 29 + 28 30 #define CXL_TIMEOUT 5 29 31 30 32 /*
+1
drivers/misc/cxl/cxllib.c
··· 5 5 6 6 #include <linux/hugetlb.h> 7 7 #include <linux/sched/mm.h> 8 + #include <asm/opal-api.h> 8 9 #include <asm/pnv-pci.h> 9 10 #include <misc/cxllib.h> 10 11
+1
drivers/misc/cxl/flash.c
··· 4 4 #include <linux/semaphore.h> 5 5 #include <linux/slab.h> 6 6 #include <linux/uaccess.h> 7 + #include <linux/of.h> 7 8 #include <asm/rtas.h> 8 9 9 10 #include "cxl.h"
+2
drivers/misc/cxl/guest.c
··· 6 6 #include <linux/spinlock.h> 7 7 #include <linux/uaccess.h> 8 8 #include <linux/delay.h> 9 + #include <linux/irqdomain.h> 10 + #include <linux/platform_device.h> 9 11 10 12 #include "cxl.h" 11 13 #include "hcalls.h"
+1
drivers/misc/cxl/irq.c
··· 4 4 */ 5 5 6 6 #include <linux/interrupt.h> 7 + #include <linux/irqdomain.h> 7 8 #include <linux/workqueue.h> 8 9 #include <linux/sched.h> 9 10 #include <linux/wait.h>
+1
drivers/misc/cxl/main.c
··· 15 15 #include <linux/slab.h> 16 16 #include <linux/idr.h> 17 17 #include <linux/pci.h> 18 + #include <linux/platform_device.h> 18 19 #include <linux/sched/task.h> 19 20 20 21 #include <asm/cputable.h>
+1
drivers/misc/cxl/native.c
··· 11 11 #include <linux/mm.h> 12 12 #include <linux/uaccess.h> 13 13 #include <linux/delay.h> 14 + #include <linux/irqdomain.h> 14 15 #include <asm/synch.h> 15 16 #include <asm/switch_to.h> 16 17 #include <misc/cxl-base.h>
+1
drivers/misc/ocxl/afu_irq.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0+ 2 2 // Copyright 2017 IBM Corp. 3 3 #include <linux/interrupt.h> 4 + #include <linux/irqdomain.h> 4 5 #include <asm/pnv-ocxl.h> 5 6 #include <asm/xive.h> 6 7 #include "ocxl_internal.h"
+2
drivers/misc/ocxl/file.c
··· 556 556 557 557 err_unregister: 558 558 ocxl_sysfs_unregister_afu(info); // safe to call even if register failed 559 + free_minor(info); 559 560 device_unregister(&info->dev); 561 + return rc; 560 562 err_put: 561 563 ocxl_afu_put(afu); 562 564 free_minor(info);
+1
drivers/misc/ocxl/link.c
··· 6 6 #include <linux/mm_types.h> 7 7 #include <linux/mmu_context.h> 8 8 #include <linux/mmu_notifier.h> 9 + #include <linux/irqdomain.h> 9 10 #include <asm/copro.h> 10 11 #include <asm/pnv-ocxl.h> 11 12 #include <asm/xive.h>
+16 -7
fs/hugetlbfs/inode.c
··· 195 195 * Called under mmap_write_lock(mm). 196 196 */ 197 197 198 - #ifndef HAVE_ARCH_HUGETLB_UNMAPPED_AREA 199 198 static unsigned long 200 199 hugetlb_get_unmapped_area_bottomup(struct file *file, unsigned long addr, 201 200 unsigned long len, unsigned long pgoff, unsigned long flags) ··· 205 206 info.flags = 0; 206 207 info.length = len; 207 208 info.low_limit = current->mm->mmap_base; 208 - info.high_limit = arch_get_mmap_end(addr); 209 + info.high_limit = arch_get_mmap_end(addr, len, flags); 209 210 info.align_mask = PAGE_MASK & ~huge_page_mask(h); 210 211 info.align_offset = 0; 211 212 return vm_unmapped_area(&info); ··· 236 237 VM_BUG_ON(addr != -ENOMEM); 237 238 info.flags = 0; 238 239 info.low_limit = current->mm->mmap_base; 239 - info.high_limit = arch_get_mmap_end(addr); 240 + info.high_limit = arch_get_mmap_end(addr, len, flags); 240 241 addr = vm_unmapped_area(&info); 241 242 } 242 243 243 244 return addr; 244 245 } 245 246 246 - static unsigned long 247 - hugetlb_get_unmapped_area(struct file *file, unsigned long addr, 248 - unsigned long len, unsigned long pgoff, unsigned long flags) 247 + unsigned long 248 + generic_hugetlb_get_unmapped_area(struct file *file, unsigned long addr, 249 + unsigned long len, unsigned long pgoff, 250 + unsigned long flags) 249 251 { 250 252 struct mm_struct *mm = current->mm; 251 253 struct vm_area_struct *vma; 252 254 struct hstate *h = hstate_file(file); 253 - const unsigned long mmap_end = arch_get_mmap_end(addr); 255 + const unsigned long mmap_end = arch_get_mmap_end(addr, len, flags); 254 256 255 257 if (len & ~huge_page_mask(h)) 256 258 return -EINVAL; ··· 282 282 pgoff, flags); 283 283 return hugetlb_get_unmapped_area_bottomup(file, addr, len, 284 284 pgoff, flags); 285 + } 286 + 287 + #ifndef HAVE_ARCH_HUGETLB_UNMAPPED_AREA 288 + static unsigned long 289 + hugetlb_get_unmapped_area(struct file *file, unsigned long addr, 290 + unsigned long len, unsigned long pgoff, 291 + unsigned long flags) 292 + { 293 + return generic_hugetlb_get_unmapped_area(file, addr, len, pgoff, flags); 285 294 } 286 295 #endif 287 296
+5
include/linux/hugetlb.h
··· 528 528 unsigned long flags); 529 529 #endif /* HAVE_ARCH_HUGETLB_UNMAPPED_AREA */ 530 530 531 + unsigned long 532 + generic_hugetlb_get_unmapped_area(struct file *file, unsigned long addr, 533 + unsigned long len, unsigned long pgoff, 534 + unsigned long flags); 535 + 531 536 /* 532 537 * huegtlb page specific state flags. These flags are located in page.private 533 538 * of the hugetlb head page. Functions created via the below macros should be
+3 -3
include/linux/of_irq.h
··· 20 20 #if defined(CONFIG_PPC32) && defined(CONFIG_PPC_PMAC) 21 21 extern unsigned int of_irq_workarounds; 22 22 extern struct device_node *of_irq_dflt_pic; 23 - extern int of_irq_parse_oldworld(struct device_node *device, int index, 24 - struct of_phandle_args *out_irq); 23 + int of_irq_parse_oldworld(const struct device_node *device, int index, 24 + struct of_phandle_args *out_irq); 25 25 #else /* CONFIG_PPC32 && CONFIG_PPC_PMAC */ 26 26 #define of_irq_workarounds (0) 27 27 #define of_irq_dflt_pic (NULL) 28 - static inline int of_irq_parse_oldworld(struct device_node *device, int index, 28 + static inline int of_irq_parse_oldworld(const struct device_node *device, int index, 29 29 struct of_phandle_args *out_irq) 30 30 { 31 31 return -EINVAL;
+10 -1
include/linux/sched/mm.h
··· 137 137 138 138 #ifdef CONFIG_MMU 139 139 #ifndef arch_get_mmap_end 140 - #define arch_get_mmap_end(addr) (TASK_SIZE) 140 + #define arch_get_mmap_end(addr, len, flags) (TASK_SIZE) 141 141 #endif 142 142 143 143 #ifndef arch_get_mmap_base ··· 153 153 arch_get_unmapped_area_topdown(struct file *filp, unsigned long addr, 154 154 unsigned long len, unsigned long pgoff, 155 155 unsigned long flags); 156 + 157 + unsigned long 158 + generic_get_unmapped_area(struct file *filp, unsigned long addr, 159 + unsigned long len, unsigned long pgoff, 160 + unsigned long flags); 161 + unsigned long 162 + generic_get_unmapped_area_topdown(struct file *filp, unsigned long addr, 163 + unsigned long len, unsigned long pgoff, 164 + unsigned long flags); 156 165 #else 157 166 static inline void arch_pick_mmap_layout(struct mm_struct *mm, 158 167 struct rlimit *rlim_stack) {}
+26 -9
mm/mmap.c
··· 2140 2140 * 2141 2141 * This function "knows" that -ENOMEM has the bits set. 2142 2142 */ 2143 - #ifndef HAVE_ARCH_UNMAPPED_AREA 2144 2143 unsigned long 2145 - arch_get_unmapped_area(struct file *filp, unsigned long addr, 2146 - unsigned long len, unsigned long pgoff, unsigned long flags) 2144 + generic_get_unmapped_area(struct file *filp, unsigned long addr, 2145 + unsigned long len, unsigned long pgoff, 2146 + unsigned long flags) 2147 2147 { 2148 2148 struct mm_struct *mm = current->mm; 2149 2149 struct vm_area_struct *vma, *prev; 2150 2150 struct vm_unmapped_area_info info; 2151 - const unsigned long mmap_end = arch_get_mmap_end(addr); 2151 + const unsigned long mmap_end = arch_get_mmap_end(addr, len, flags); 2152 2152 2153 2153 if (len > mmap_end - mmap_min_addr) 2154 2154 return -ENOMEM; ··· 2173 2173 info.align_offset = 0; 2174 2174 return vm_unmapped_area(&info); 2175 2175 } 2176 + 2177 + #ifndef HAVE_ARCH_UNMAPPED_AREA 2178 + unsigned long 2179 + arch_get_unmapped_area(struct file *filp, unsigned long addr, 2180 + unsigned long len, unsigned long pgoff, 2181 + unsigned long flags) 2182 + { 2183 + return generic_get_unmapped_area(filp, addr, len, pgoff, flags); 2184 + } 2176 2185 #endif 2177 2186 2178 2187 /* 2179 2188 * This mmap-allocator allocates new areas top-down from below the 2180 2189 * stack's low limit (the base): 2181 2190 */ 2182 - #ifndef HAVE_ARCH_UNMAPPED_AREA_TOPDOWN 2183 2191 unsigned long 2184 - arch_get_unmapped_area_topdown(struct file *filp, unsigned long addr, 2185 - unsigned long len, unsigned long pgoff, 2186 - unsigned long flags) 2192 + generic_get_unmapped_area_topdown(struct file *filp, unsigned long addr, 2193 + unsigned long len, unsigned long pgoff, 2194 + unsigned long flags) 2187 2195 { 2188 2196 struct vm_area_struct *vma, *prev; 2189 2197 struct mm_struct *mm = current->mm; 2190 2198 struct vm_unmapped_area_info info; 2191 2199 const unsigned long mmap_end = arch_get_mmap_end(addr, len, flags); 2192 2200 2193 2201 /* requested length too big for entire address space */ 2194 2202 if (len > mmap_end - mmap_min_addr) ··· 2238 2230 } 2239 2231 2240 2232 return addr; 2233 + } 2234 + 2235 + #ifndef HAVE_ARCH_UNMAPPED_AREA_TOPDOWN 2236 + unsigned long 2237 + arch_get_unmapped_area_topdown(struct file *filp, unsigned long addr, 2238 + unsigned long len, unsigned long pgoff, 2239 + unsigned long flags) 2240 + { 2241 + return generic_get_unmapped_area_topdown(filp, addr, len, pgoff, flags); 2241 2242 } 2242 2243 #endif 2243 2244
+1 -1
mm/util.c
··· 377 377 } 378 378 379 379 #ifdef CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT 380 - unsigned long arch_randomize_brk(struct mm_struct *mm) 380 + unsigned long __weak arch_randomize_brk(struct mm_struct *mm) 381 381 { 382 382 /* Is the current task 32bit ? */ 383 383 if (!IS_ENABLED(CONFIG_64BIT) || is_compat_task())
+5
tools/testing/selftests/powerpc/include/utils.h
··· 135 135 #define PPC_FEATURE2_ARCH_3_1 0x00040000 136 136 #endif 137 137 138 + /* POWER10 features */ 139 + #ifndef PPC_FEATURE2_MMA 140 + #define PPC_FEATURE2_MMA 0x00020000 141 + #endif 142 + 138 143 #if defined(__powerpc64__) 139 144 #define UCONTEXT_NIA(UC) (UC)->uc_mcontext.gp_regs[PT_NIP] 140 145 #define UCONTEXT_MSR(UC) (UC)->uc_mcontext.gp_regs[PT_MSR]
+3 -1
tools/testing/selftests/powerpc/math/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0 2 - TEST_GEN_PROGS := fpu_syscall fpu_preempt fpu_signal fpu_denormal vmx_syscall vmx_preempt vmx_signal vsx_preempt 2 + TEST_GEN_PROGS := fpu_syscall fpu_preempt fpu_signal fpu_denormal vmx_syscall vmx_preempt vmx_signal vsx_preempt mma 3 3 4 4 top_srcdir = ../../../../.. 5 5 include ../../lib.mk ··· 17 17 18 18 $(OUTPUT)/vsx_preempt: CFLAGS += -mvsx 19 19 $(OUTPUT)/vsx_preempt: vsx_asm.S ../utils.c 20 + 21 + $(OUTPUT)/mma: mma.c mma.S ../utils.c
+33
tools/testing/selftests/powerpc/math/mma.S
··· 1 + /* SPDX-License-Identifier: GPL-2.0-or-later 2 + * 3 + * Test basic matrix multiply assist (MMA) functionality if available. 4 + * 5 + * Copyright 2020, Alistair Popple, IBM Corp. 6 + */ 7 + .global test_mma 8 + test_mma: 9 + /* Load accumulator via VSX registers from image passed in r3 */ 10 + lxvh8x 4,0,3 11 + lxvh8x 5,0,4 12 + 13 + /* Clear and prime the accumulator (xxsetaccz) */ 14 + .long 0x7c030162 15 + 16 + /* Prime the accumulator with MMA VSX move to accumulator 17 + * X-form (xxmtacc) (not needed due to above zeroing) */ 18 + //.long 0x7c010162 19 + 20 + /* xvi16ger2s */ 21 + .long 0xec042958 22 + 23 + /* Store result in image passed in r5 */ 24 + stxvw4x 0,0,5 25 + addi 5,5,16 26 + stxvw4x 1,0,5 27 + addi 5,5,16 28 + stxvw4x 2,0,5 29 + addi 5,5,16 30 + stxvw4x 3,0,5 31 + addi 5,5,16 32 + 33 + blr
+48
tools/testing/selftests/powerpc/math/mma.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-or-later 2 + /* 3 + * Test basic matrix multiply assist (MMA) functionality if available. 4 + * 5 + * Copyright 2020, Alistair Popple, IBM Corp. 6 + */ 7 + #include <stdio.h> 8 + #include <stdint.h> 9 + 10 + #include "utils.h" 11 + 12 + extern void test_mma(uint16_t (*)[8], uint16_t (*)[8], uint32_t (*)[4*4]); 13 + 14 + static int mma(void) 15 + { 16 + int i; 17 + int rc = 0; 18 + uint16_t x[] = {1, 0, 2, 0, 3, 0, 4, 0}; 19 + uint16_t y[] = {1, 0, 2, 0, 3, 0, 4, 0}; 20 + uint32_t z[4*4]; 21 + uint32_t exp[4*4] = {1, 2, 3, 4, 22 + 2, 4, 6, 8, 23 + 3, 6, 9, 12, 24 + 4, 8, 12, 16}; 25 + 26 + SKIP_IF_MSG(!have_hwcap2(PPC_FEATURE2_ARCH_3_1), "Need ISAv3.1"); 27 + SKIP_IF_MSG(!have_hwcap2(PPC_FEATURE2_MMA), "Need MMA"); 28 + 29 + test_mma(&x, &y, &z); 30 + 31 + for (i = 0; i < 16; i++) { 32 + printf("MMA[%d] = %d ", i, z[i]); 33 + 34 + if (z[i] == exp[i]) { 35 + printf(" (Correct)\n"); 36 + } else { 37 + printf(" (Incorrect)\n"); 38 + rc = 1; 39 + } 40 + } 41 + 42 + return rc; 43 + } 44 + 45 + int main(int argc, char *argv[]) 46 + { 47 + return test_harness(mma, "mma"); 48 + }
+1
tools/testing/selftests/powerpc/mm/.gitignore
··· 12 12 pkey_siginfo 13 13 stack_expansion_ldst 14 14 stack_expansion_signal 15 + large_vm_gpr_corruption
+3 -1
tools/testing/selftests/powerpc/mm/Makefile
··· 4 4 5 5 TEST_GEN_PROGS := hugetlb_vs_thp_test subpage_prot prot_sao segv_errors wild_bctr \ 6 6 large_vm_fork_separation bad_accesses pkey_exec_prot \ 7 - pkey_siginfo stack_expansion_signal stack_expansion_ldst 7 + pkey_siginfo stack_expansion_signal stack_expansion_ldst \ 8 + large_vm_gpr_corruption 8 9 TEST_PROGS := stress_code_patching.sh 9 10 10 11 TEST_GEN_PROGS_EXTENDED := tlbie_test ··· 20 19 21 20 $(OUTPUT)/wild_bctr: CFLAGS += -m64 22 21 $(OUTPUT)/large_vm_fork_separation: CFLAGS += -m64 22 + $(OUTPUT)/large_vm_gpr_corruption: CFLAGS += -m64 23 23 $(OUTPUT)/bad_accesses: CFLAGS += -m64 24 24 $(OUTPUT)/pkey_exec_prot: CFLAGS += -m64 25 25 $(OUTPUT)/pkey_siginfo: CFLAGS += -m64
+156
tools/testing/selftests/powerpc/mm/large_vm_gpr_corruption.c
··· 1 + // SPDX-License-Identifier: GPL-2.0+ 2 + // 3 + // Copyright 2022, Michael Ellerman, IBM Corp. 4 + // 5 + // Test that the 4PB address space SLB handling doesn't corrupt userspace registers 6 + // (r9-r13) due to a SLB fault while saving the PPR. 7 + // 8 + // The bug was introduced in f384796c4 ("powerpc/mm: Add support for handling > 512TB 9 + // address in SLB miss") and fixed in 4c2de74cc869 ("powerpc/64: Interrupts save PPR on 10 + // stack rather than thread_struct"). 11 + // 12 + // To hit the bug requires the task struct and kernel stack to be in different segments. 13 + // Usually that requires more than 1TB of RAM, or if that's not practical, boot the kernel 14 + // with "disable_1tb_segments". 15 + // 16 + // The test works by creating mappings above 512TB, to trigger the large address space 17 + // support. It creates 64 mappings, double the size of the SLB, to cause SLB faults on 18 + // each access (assuming naive replacement). It then loops over those mappings touching 19 + // each, and checks that r9-r13 aren't corrupted. 20 + // 21 + // It then forks another child and tries again, because a new child process will get a new 22 + // kernel stack and thread struct allocated, which may be more optimally placed to trigger 23 + // the bug. It would probably be better to leave the previous child processes hanging 24 + // around, so that kernel stack & thread struct allocations are not reused, but that would 25 + // amount to a 30 second fork bomb. The current design reliably triggers the bug on 26 + // unpatched kernels. 27 + 28 + #include <signal.h> 29 + #include <stdio.h> 30 + #include <stdlib.h> 31 + #include <sys/mman.h> 32 + #include <sys/types.h> 33 + #include <sys/wait.h> 34 + #include <unistd.h> 35 + 36 + #include "utils.h" 37 + 38 + #ifndef MAP_FIXED_NOREPLACE 39 + #define MAP_FIXED_NOREPLACE MAP_FIXED // "Should be safe" above 512TB 40 + #endif 41 + 42 + #define BASE_ADDRESS (1ul << 50) // 1PB 43 + #define STRIDE (2ul << 40) // 2TB 44 + #define SLB_SIZE 32 45 + #define NR_MAPPINGS (SLB_SIZE * 2) 46 + 47 + static volatile sig_atomic_t signaled; 48 + 49 + static void signal_handler(int sig) 50 + { 51 + signaled = 1; 52 + } 53 + 54 + #define CHECK_REG(_reg) \ 55 + if (_reg != _reg##_orig) { \ 56 + printf(str(_reg) " corrupted! Expected 0x%lx != 0x%lx\n", _reg##_orig, \ 57 + _reg); \ 58 + _exit(1); \ 59 + } 60 + 61 + static int touch_mappings(void) 62 + { 63 + unsigned long r9_orig, r10_orig, r11_orig, r12_orig, r13_orig; 64 + unsigned long r9, r10, r11, r12, r13; 65 + unsigned long addr, *p; 66 + int i; 67 + 68 + for (i = 0; i < NR_MAPPINGS; i++) { 69 + addr = BASE_ADDRESS + (i * STRIDE); 70 + p = (unsigned long *)addr; 71 + 72 + asm volatile("mr %0, %%r9 ;" // Read original GPR values 73 + "mr %1, %%r10 ;" 74 + "mr %2, %%r11 ;" 75 + "mr %3, %%r12 ;" 76 + "mr %4, %%r13 ;" 77 + "std %10, 0(%11) ;" // Trigger SLB fault 78 + "mr %5, %%r9 ;" // Save possibly corrupted values 79 + "mr %6, %%r10 ;" 80 + "mr %7, %%r11 ;" 81 + "mr %8, %%r12 ;" 82 + "mr %9, %%r13 ;" 83 + "mr %%r9, %0 ;" // Restore original values 84 + "mr %%r10, %1 ;" 85 + "mr %%r11, %2 ;" 86 + "mr %%r12, %3 ;" 87 + "mr %%r13, %4 ;" 88 + : "=&b"(r9_orig), "=&b"(r10_orig), "=&b"(r11_orig), 89 + "=&b"(r12_orig), "=&b"(r13_orig), "=&b"(r9), "=&b"(r10), 90 + "=&b"(r11), "=&b"(r12), "=&b"(r13) 91 + : "b"(i), "b"(p) 92 + : "r9", "r10", "r11", "r12", "r13"); 93 + 94 + CHECK_REG(r9); 95 + CHECK_REG(r10); 96 + CHECK_REG(r11); 97 + CHECK_REG(r12); 98 + CHECK_REG(r13); 99 + } 100 + 101 + return 0; 102 + } 103 + 104 + static int test(void) 105 + { 106 + unsigned long page_size, addr, *p; 107 + struct sigaction action; 108 + bool hash_mmu; 109 + int i, status; 110 + pid_t pid; 111 + 112 + // This tests a hash MMU specific bug. 113 + FAIL_IF(using_hash_mmu(&hash_mmu)); 114 + SKIP_IF(!hash_mmu); 115 + 116 + page_size = sysconf(_SC_PAGESIZE); 117 + 118 + for (i = 0; i < NR_MAPPINGS; i++) { 119 + addr = BASE_ADDRESS + (i * STRIDE); 120 + 121 + p = mmap((void *)addr, page_size, PROT_READ | PROT_WRITE, 122 + MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED_NOREPLACE, -1, 0); 123 + if (p == MAP_FAILED) { 124 + perror("mmap"); 125 + printf("Error: couldn't mmap(), confirm kernel has 4PB support?\n"); 126 + return 1; 127 + } 128 + } 129 + 130 + action.sa_handler = signal_handler; 131 + action.sa_flags = SA_RESTART; 132 + FAIL_IF(sigaction(SIGALRM, &action, NULL) < 0); 133 + 134 + // Seen to always crash in under ~10s on affected kernels. 135 + alarm(30); 136 + 137 + while (!signaled) { 138 + // Fork new processes, to increase the chance that we hit the case where 139 + // the kernel stack and task struct are in different segments. 140 + pid = fork(); 141 + if (pid == 0) 142 + exit(touch_mappings()); 143 + 144 + FAIL_IF(waitpid(-1, &status, 0) == -1); 145 + FAIL_IF(WIFSIGNALED(status)); 146 + FAIL_IF(!WIFEXITED(status)); 147 + FAIL_IF(WEXITSTATUS(status)); 148 + } 149 + 150 + return 0; 151 + } 152 + 153 + int main(void) 154 + { 155 + return test_harness(test, "large_vm_gpr_corruption"); 156 + }
-43
tools/testing/selftests/powerpc/pmu/ebb/fixed_instruction_loop.S
···
-/* SPDX-License-Identifier: GPL-2.0-only */
-/*
- * Copyright 2014, Michael Ellerman, IBM Corp.
- */
-
-#include <ppc-asm.h>
-
-	.text
-
-FUNC_START(thirty_two_instruction_loop)
-	cmpwi	r3,0
-	beqlr
-	addi	r4,r3,1
-	addi	r4,r4,1
-	addi	r4,r4,1
-	addi	r4,r4,1
-	addi	r4,r4,1
-	addi	r4,r4,1
-	addi	r4,r4,1
-	addi	r4,r4,1
-	addi	r4,r4,1
-	addi	r4,r4,1
-	addi	r4,r4,1
-	addi	r4,r4,1
-	addi	r4,r4,1
-	addi	r4,r4,1
-	addi	r4,r4,1
-	addi	r4,r4,1
-	addi	r4,r4,1
-	addi	r4,r4,1
-	addi	r4,r4,1
-	addi	r4,r4,1
-	addi	r4,r4,1
-	addi	r4,r4,1
-	addi	r4,r4,1
-	addi	r4,r4,1
-	addi	r4,r4,1
-	addi	r4,r4,1
-	addi	r4,r4,1
-	addi	r4,r4,1	# 28 addi's
-	subi	r3,r3,1
-	b	FUNC_NAME(thirty_two_instruction_loop)
-FUNC_END(thirty_two_instruction_loop)
+1 -1
tools/testing/selftests/powerpc/pmu/sampling_tests/misc.c
···
 		return intr_regs;
 }
 
-static const unsigned int __perf_reg_mask(const char *register_name)
+static const int __perf_reg_mask(const char *register_name)
 {
 	if (!strcmp(register_name, "R0"))
 		return 0;
+19 -13
tools/testing/selftests/powerpc/security/spectre_v2.c
···
 	case COUNT_CACHE_FLUSH_HW:
 		// These should all not affect userspace branch prediction
 		if (miss_percent > 15) {
-			printf("Branch misses > 15%% unexpected in this configuration!\n");
-			printf("Possible mis-match between reported & actual mitigation\n");
-			/*
-			 * Such a mismatch may be caused by a guest system
-			 * reporting as vulnerable when the host is mitigated.
-			 * Return skip code to avoid detecting this as an error.
-			 * We are not vulnerable and reporting otherwise, so
-			 * missing such a mismatch is safe.
-			 */
-			if (miss_percent > 95)
+			if (miss_percent > 95) {
+				/*
+				 * Such a mismatch may be caused by a system being unaware
+				 * the count cache is disabled. This may be to enable
+				 * guest migration between hosts with different settings.
+				 * Return skip code to avoid detecting this as an error.
+				 * We are not vulnerable and reporting otherwise, so
+				 * missing such a mismatch is safe.
+				 */
+				printf("Branch misses > 95%% unexpected in this configuration.\n");
+				printf("Count cache likely disabled without Linux knowing.\n");
+				if (state == COUNT_CACHE_FLUSH_SW)
+					printf("WARNING: Kernel performing unnecessary flushes.\n");
 				return 4;
+			}
+			printf("Branch misses > 15%% unexpected in this configuration!\n");
+			printf("Possible mismatch between reported & actual mitigation\n");
 
 			return 1;
 		}
···
 		// This seems to affect userspace branch prediction a bit?
 		if (miss_percent > 25) {
 			printf("Branch misses > 25%% unexpected in this configuration!\n");
-			printf("Possible mis-match between reported & actual mitigation\n");
+			printf("Possible mismatch between reported & actual mitigation\n");
 			return 1;
 		}
 		break;
 	case COUNT_CACHE_DISABLED:
 		if (miss_percent < 95) {
-			printf("Branch misses < 20%% unexpected in this configuration!\n");
-			printf("Possible mis-match between reported & actual mitigation\n");
+			printf("Branch misses < 95%% unexpected in this configuration!\n");
+			printf("Possible mismatch between reported & actual mitigation\n");
 			return 1;
 		}
 		break;