Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'powerpc-5.3-1' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux

Pull powerpc updates from Michael Ellerman:
"Notable changes:

- Removal of the NPU DMA code, used by the out-of-tree Nvidia driver,
as well as some other functions only used by drivers that haven't
(yet?) made it upstream.

- A fix for a bug in our handling of hardware watchpoints (eg. perf
record -e mem: ...) which could lead to register corruption and
kernel crashes.

- Enable HAVE_ARCH_HUGE_VMAP, which allows us to use large pages for
vmalloc when using the Radix MMU.

- A large but incremental rewrite of our exception handling code to
use gas macros rather than multiple levels of nested CPP macros.

And the usual small fixes, cleanups and improvements.

Thanks to: Alastair D'Silva, Alexey Kardashevskiy, Andreas Schwab,
Aneesh Kumar K.V, Anju T Sudhakar, Anton Blanchard, Arnd Bergmann,
Athira Rajeev, Cédric Le Goater, Christian Lamparter, Christophe
Leroy, Christophe Lombard, Christoph Hellwig, Daniel Axtens, Denis
Efremov, Enrico Weigelt, Frederic Barrat, Gautham R. Shenoy, Geert
Uytterhoeven, Geliang Tang, Gen Zhang, Greg Kroah-Hartman, Greg Kurz,
Gustavo Romero, Krzysztof Kozlowski, Madhavan Srinivasan, Masahiro
Yamada, Mathieu Malaterre, Michael Neuling, Nathan Lynch, Naveen N.
Rao, Nicholas Piggin, Nishad Kamdar, Oliver O'Halloran, Qian Cai, Ravi
Bangoria, Sachin Sant, Sam Bobroff, Satheesh Rajendran, Segher
Boessenkool, Shaokun Zhang, Shawn Anastasio, Stewart Smith, Suraj
Jitindar Singh, Thiago Jung Bauermann, YueHaibing"

* tag 'powerpc-5.3-1' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux: (163 commits)
powerpc/powernv/idle: Fix restore of SPRN_LDBAR for POWER9 stop state.
powerpc/eeh: Handle hugepages in ioremap space
ocxl: Update for AFU descriptor template version 1.1
powerpc/boot: pass CONFIG options in a simpler and more robust way
powerpc/boot: add {get, put}_unaligned_be32 to xz_config.h
powerpc/irq: Don't WARN continuously in arch_local_irq_restore()
powerpc/module64: Use symbolic instructions names.
powerpc/module32: Use symbolic instructions names.
powerpc: Move PPC_HA() PPC_HI() and PPC_LO() to ppc-opcode.h
powerpc/module64: Fix comment in R_PPC64_ENTRY handling
powerpc/boot: Add lzo support for uImage
powerpc/boot: Add lzma support for uImage
powerpc/boot: don't force gzipped uImage
powerpc/8xx: Add microcode patch to move SMC parameter RAM.
powerpc/8xx: Use IO accessors in microcode programming.
powerpc/8xx: replace #ifdefs by IS_ENABLED() in microcode.c
powerpc/8xx: refactor programming of microcode CPM params.
powerpc/8xx: refactor printing of microcode patch name.
powerpc/8xx: Refactor microcode write
powerpc/8xx: refactor writing of CPM microcode arrays
...

+3631 -3531
+10 -1
Documentation/admin-guide/kernel-parameters.txt
···
 			register save and restore. The kernel will only save
 			legacy floating-point registers on task switch.
 
-	nohugeiomap	[KNL,x86] Disable kernel huge I/O mappings.
+	nohugeiomap	[KNL,x86,PPC] Disable kernel huge I/O mappings.
 
 	nosmt		[KNL,S390] Disable symmetric multithreading (SMT).
 			Equivalent to smt=1.
···
 	xirc2ps_cs=	[NET,PCMCIA]
 			Format:
 			<irq>,<irq_mask>,<io>,<full_duplex>,<do_sound>,<lockup_hack>[,<irq2>[,<irq3>[,<irq4>]]]
+
+	xive=		[PPC]
+			By default on POWER9 and above, the kernel will
+			natively use the XIVE interrupt controller. This option
+			allows the fallback firmware mode to be used:
+
+			off	Fallback to firmware control of XIVE interrupt
+				controller on both pseries and powernv
+				platforms. Only useful on POWER9 and above.
 
 	xhci-hcd.quirks	[USB,KNL]
 			A hex value specifying bitmask with supplemental xhci
+68
Documentation/powerpc/vcpudispatch_stats.txt
VCPU Dispatch Statistics:
=========================

For Shared Processor LPARs, the POWER Hypervisor maintains a relatively
static mapping of the LPAR processors (vcpus) to physical processor
chips (representing the "home" node) and tries to always dispatch vcpus
on their associated physical processor chip. However, under certain
scenarios, vcpus may be dispatched on a different processor chip (away
from its home node).

/proc/powerpc/vcpudispatch_stats can be used to obtain statistics
related to the vcpu dispatch behavior. Writing '1' to this file enables
collecting the statistics, while writing '0' disables the statistics.
By default, the DTLB log for each vcpu is processed 50 times a second so
as not to miss any entries. This processing frequency can be changed
through /proc/powerpc/vcpudispatch_stats_freq.

The statistics themselves are available by reading the procfs file
/proc/powerpc/vcpudispatch_stats. Each line in the output corresponds to
a vcpu as represented by the first field, followed by 8 numbers.

The first number corresponds to:
1. total vcpu dispatches since the beginning of statistics collection

The next 4 numbers represent vcpu dispatch dispersions:
2. number of times this vcpu was dispatched on the same processor as last
   time
3. number of times this vcpu was dispatched on a different processor core
   than last time, but within the same chip
4. number of times this vcpu was dispatched on a different chip
5. number of times this vcpu was dispatched on a different socket/drawer
   (next numa boundary)

The final 3 numbers represent statistics in relation to the home node of
the vcpu:
6. number of times this vcpu was dispatched in its home node (chip)
7. number of times this vcpu was dispatched in a different node
8. number of times this vcpu was dispatched in a node further away (numa
   distance)

An example output:
$ sudo cat /proc/powerpc/vcpudispatch_stats
cpu0 6839 4126 2683 30 0 6821 18 0
cpu1 2515 1274 1229 12 0 2509 6 0
cpu2 2317 1198 1109 10 0 2312 5 0
cpu3 2259 1165 1088 6 0 2256 3 0
cpu4 2205 1143 1056 6 0 2202 3 0
cpu5 2165 1121 1038 6 0 2162 3 0
cpu6 2183 1127 1050 6 0 2180 3 0
cpu7 2193 1133 1052 8 0 2187 6 0
cpu8 2165 1115 1032 18 0 2156 9 0
cpu9 2301 1252 1033 16 0 2293 8 0
cpu10 2197 1138 1041 18 0 2187 10 0
cpu11 2273 1185 1062 26 0 2260 13 0
cpu12 2186 1125 1043 18 0 2177 9 0
cpu13 2161 1115 1030 16 0 2153 8 0
cpu14 2206 1153 1033 20 0 2196 10 0
cpu15 2163 1115 1032 16 0 2155 8 0

In the output above, for vcpu0, there have been 6839 dispatches since
statistics were enabled. 4126 of those dispatches were on the same
physical cpu as the last time. 2683 were on a different core, but within
the same chip, while 30 dispatches were on a different chip compared to
its last dispatch.

Also, out of the total of 6839 dispatches, we see that there have been
6821 dispatches on the vcpu's home node, while 18 dispatches were
outside its home node, on a neighbouring chip.
+28 -20
arch/powerpc/Kconfig
···
 # Allow randomisation to consume up to 512MB of address space (2^29).
 default 11 if PPC_256K_PAGES	# 11 = 29 (512MB) - 18 (256K)
 default 13 if PPC_64K_PAGES	# 13 = 29 (512MB) - 16 (64K)
-default 15 if PPC_16K_PAGES	# 15 = 29 (512MB) - 14 (16K)
+default 15 if PPC_16K_PAGES	# 15 = 29 (512MB) - 14 (16K)
 default 17			# 17 = 29 (512MB) - 12 (4K)
 
 config ARCH_MMAP_RND_COMPAT_BITS_MIN
···
 	select GENERIC_STRNLEN_USER
 	select GENERIC_TIME_VSYSCALL
 	select HAVE_ARCH_AUDITSYSCALL
+	select HAVE_ARCH_HUGE_VMAP		if PPC_BOOK3S_64 && PPC_RADIX_MMU
 	select HAVE_ARCH_JUMP_LABEL
 	select HAVE_ARCH_KASAN			if PPC32
 	select HAVE_ARCH_KGDB
···
 	select HAVE_ARCH_NVRAM_OPS
 	select HAVE_ARCH_SECCOMP_FILTER
 	select HAVE_ARCH_TRACEHOOK
+	select HAVE_C_RECORDMCOUNT
 	select HAVE_CBPF_JIT			if !PPC64
 	select HAVE_STACKPROTECTOR		if PPC64 && $(cc-option,-mstack-protector-guard=tls -mstack-protector-guard-reg=r13)
 	select HAVE_STACKPROTECTOR		if PPC32 && $(cc-option,-mstack-protector-guard=tls -mstack-protector-guard-reg=r2)
···
 	select HAVE_IOREMAP_PROT
 	select HAVE_IRQ_EXIT_ON_IRQ_STACK
 	select HAVE_KERNEL_GZIP
+	select HAVE_KERNEL_LZMA			if DEFAULT_UIMAGE
+	select HAVE_KERNEL_LZO			if DEFAULT_UIMAGE
 	select HAVE_KERNEL_XZ			if PPC_BOOK3S || 44x
 	select HAVE_KPROBES
 	select HAVE_KPROBES_ON_FTRACE
···
 	select OLD_SIGSUSPEND
 	select PCI_DOMAINS			if PCI
 	select PCI_SYSCALL			if PCI
+	select PPC_DAWR				if PPC64
 	select RTC_LIB
 	select SPARSE_IRQ
 	select SYSCTL_EXCEPTION_TRACE
···
 #
 
 config PPC_BARRIER_NOSPEC
-	bool
-	default y
-	depends on PPC_BOOK3S_64 || PPC_FSL_BOOK3E
+	bool
+	default y
+	depends on PPC_BOOK3S_64 || PPC_FSL_BOOK3E
 
 config EARLY_PRINTK
 	bool
···
 	depends on PPC_ADV_DEBUG_REGS && 44x
 	default y
 
+config PPC_DAWR
+	bool
+
 config ZONE_DMA
 	bool
 	default y if PPC_BOOK3E_64
···
 config MATH_EMULATION
 	bool "Math emulation"
 	depends on 4xx || PPC_8xx || PPC_MPC832x || BOOKE
-	---help---
+	help
 	  Some PowerPC chips designed for embedded applications do not have
 	  a floating-point unit and therefore do not implement the
 	  floating-point instructions in the PowerPC instruction set. If you
···
 
 config MATH_EMULATION_FULL
 	bool "Emulate all the floating point instructions"
-	---help---
+	help
 	  Select this option will enable the kernel to support to emulate
 	  all the floating point instructions. If your SoC doesn't have
 	  a FPU, you should select this.
 
 config MATH_EMULATION_HW_UNIMPLEMENTED
 	bool "Just emulate the FPU unimplemented instructions"
-	---help---
+	help
 	  Select this if you know there does have a hardware FPU on your
 	  SoC, but some floating point instructions are not implemented by that.
 
 endchoice
 
 config PPC_TRANSACTIONAL_MEM
-	bool "Transactional Memory support for POWERPC"
-	depends on PPC_BOOK3S_64
-	depends on SMP
-	select ALTIVEC
-	select VSX
-	---help---
-	  Support user-mode Transactional Memory on POWERPC.
+	bool "Transactional Memory support for POWERPC"
+	depends on PPC_BOOK3S_64
+	depends on SMP
+	select ALTIVEC
+	select VSX
+	help
+	  Support user-mode Transactional Memory on POWERPC.
 
 config LD_HEAD_STUB_CATCH
 	bool "Reserve 256 bytes to cope with linker stubs in HEAD text" if EXPERT
···
 	bool "Support for enabling/disabling CPUs"
 	depends on SMP && (PPC_PSERIES || \
 	PPC_PMAC || PPC_POWERNV || FSL_SOC_BOOKE)
-	---help---
+	help
 	  Say Y here to be able to disable and re-enable individual
 	  CPUs at runtime on SMP machines.
···
 	bool "PowerPC denormalisation exception handling"
 	depends on PPC_BOOK3S_64
 	default "y" if PPC_POWERNV
-	---help---
+	help
 	  Add support for handling denormalisation of single precision
 	  values. Useful for bare metal only. If unsure say Y here.
···
 	bool
 
 config FSL_PCI
-	bool
+	bool
 	select ARCH_HAS_DMA_SET_MASK
 	select PPC_INDIRECT_PCI
 	select PCI_QUIRKS
···
 	bool "Freescale Embedded SRIO Controller support"
 	depends on RAPIDIO = y && HAVE_RAPIDIO
 	default "n"
-	---help---
+	help
 	  Include support for RapidIO controller on Freescale embedded
 	  processors (MPC8548, MPC8641, etc).
···
 	select NONSTATIC_KERNEL
 	help
 	  This option enables the kernel to be loaded at any page aligned
-	  physical address. The kernel creates a mapping from KERNELBASE to
+	  physical address. The kernel creates a mapping from KERNELBASE to
 	  the address where the kernel is loaded. The page size here implies
 	  the TLB page size of the mapping for kernel on the particular platform.
 	  Please refer to the init code for finding the TLB page size.
 
 	  DYNAMIC_MEMSTART is an easy way of implementing pseudo-RELOCATABLE
 	  kernel image, where the only restriction is the page aligned kernel
-	  load address. When this option is enabled, the compile time physical
+	  load address. When this option is enabled, the compile time physical
 	  address CONFIG_PHYSICAL_START is ignored.
 
 	  This option is overridden by CONFIG_RELOCATABLE
-2
arch/powerpc/boot/.gitignore
···
 fdt_wip.c
 libfdt.h
 libfdt_internal.h
-autoconf.h
-
+5 -11
arch/powerpc/boot/Makefile
···
 
 all: $(obj)/zImage
 
-compress-$(CONFIG_KERNEL_GZIP) := CONFIG_KERNEL_GZIP
-compress-$(CONFIG_KERNEL_XZ)   := CONFIG_KERNEL_XZ
-
 ifdef CROSS32_COMPILE
 BOOTCC := $(CROSS32_COMPILE)gcc
 BOOTAR := $(CROSS32_COMPILE)ar
···
 BOOTCFLAGS    := -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs \
 		 -fno-strict-aliasing -O2 -msoft-float -mno-altivec -mno-vsx \
 		 -pipe -fomit-frame-pointer -fno-builtin -fPIC -nostdinc \
-		 -D$(compress-y)
+		 $(LINUXINCLUDE)
 
 ifdef CONFIG_PPC64_BOOT_WRAPPER
 BOOTCFLAGS	+= -m64
···
 BOOTCFLAGS	+= $(call cc-option,-mabi=elfv2)
 endif
 
-BOOTAFLAGS	:= -D__ASSEMBLY__ $(BOOTCFLAGS) -traditional -nostdinc
+BOOTAFLAGS	:= -D__ASSEMBLY__ $(BOOTCFLAGS) -nostdinc
 
 BOOTARFLAGS	:= -cr$(KBUILD_ARFLAGS)
···
 $(obj)/zImage.coff.lds $(obj)/zImage.ps3.lds : $(obj)/%: $(srctree)/$(src)/%.S
 	$(Q)cp $< $@
 
-$(srctree)/$(src)/serial.c: $(obj)/autoconf.h
-
-$(obj)/autoconf.h: $(obj)/%: $(objtree)/include/generated/%
-	$(Q)cp $< $@
-
 clean-files := $(zlib-) $(zlibheader-) $(zliblinuxheader-) \
 		$(zlib-decomp-) $(libfdt) $(libfdtheader) \
-		autoconf.h empty.c zImage.coff.lds zImage.ps3.lds zImage.lds
+		empty.c zImage.coff.lds zImage.ps3.lds zImage.lds
 
 quiet_cmd_bootcc = BOOTCC $@
       cmd_bootcc = $(BOOTCC) -Wp,-MD,$(depfile) $(BOOTCFLAGS) -c -o $@ $<
···
 
 compressor-$(CONFIG_KERNEL_GZIP) := gz
 compressor-$(CONFIG_KERNEL_XZ)   := xz
+compressor-$(CONFIG_KERNEL_LZMA) := lzma
+compressor-$(CONFIG_KERNEL_LZO)  := lzo
 
 # args (to if_changed): 1 = (this rule), 2 = platform, 3 = dts 4=dtb 5=initrd
 quiet_cmd_wrap	= WRAP    $@
-1
arch/powerpc/boot/serial.c
···
 #include "stdio.h"
 #include "io.h"
 #include "ops.h"
-#include "autoconf.h"
 
 static int serial_open(void)
 {
+17 -2
arch/powerpc/boot/wrapper
···
 cacheit=
 binary=
 compression=.gz
+uboot_comp=gzip
 pie=
 format=
···
     ;;
 -z)
     compression=.gz
+    uboot_comp=gzip
     ;;
 -Z)
     shift
     [ "$#" -gt 0 ] || usage
-    [ "$1" != "gz" -o "$1" != "xz" -o "$1" != "none" ] || usage
+    [ "$1" != "gz" -o "$1" != "xz" -o "$1" != "lzma" -o "$1" != "lzo" -o "$1" != "none" ] || usage
 
     compression=".$1"
+    uboot_comp=$1
 
     if [ $compression = ".none" ]; then
         compression=
+        uboot_comp=none
     fi
+    if [ $uboot_comp = "gz" ]; then
+        uboot_comp=gzip
+    fi
     ;;
 --no-gzip)
     # a "feature" of the the wrapper script is that it can be used outside
     # the kernel tree. So keeping this around for backwards compatibility.
     compression=
+    uboot_comp=none
     ;;
 -?)
     usage
···
 .gz)
     gzip -n -f -9 "$vmz.$$"
     ;;
+.lzma)
+    xz --format=lzma -f -6 "$vmz.$$"
+    ;;
+.lzo)
+    lzop -f -9 "$vmz.$$"
+    ;;
 *)
     # drop the compression suffix so the stripped vmlinux is used
     compression=
+    uboot_comp=none
     ;;
 esac
···
 case "$platform" in
 uboot)
     rm -f "$ofile"
-    ${MKIMAGE} -A ppc -O linux -T kernel -C gzip -a $membase -e $membase \
+    ${MKIMAGE} -A ppc -O linux -T kernel -C $uboot_comp -a $membase -e $membase \
 	$uboot_version -d "$vmz" "$ofile"
     if [ -z "$cacheit" ]; then
 	rm -f "$vmz"
+20
arch/powerpc/boot/xz_config.h
···
 
 #ifdef __LITTLE_ENDIAN__
 #define get_le32(p) (*((uint32_t *) (p)))
+#define cpu_to_be32(x) swab32(x)
+static inline u32 be32_to_cpup(const u32 *p)
+{
+	return swab32p((u32 *)p);
+}
 #else
 #define get_le32(p) swab32p(p)
+#define cpu_to_be32(x) (x)
+static inline u32 be32_to_cpup(const u32 *p)
+{
+	return *p;
+}
 #endif
+
+static inline uint32_t get_unaligned_be32(const void *p)
+{
+	return be32_to_cpup(p);
+}
+
+static inline void put_unaligned_be32(u32 val, void *p)
+{
+	*((u32 *)p) = cpu_to_be32(val);
+}
 
 #define memeq(a, b, size) (memcmp(a, b, size) == 0)
 #define memzero(buf, size) memset(buf, 0, size)
-1
arch/powerpc/configs/40x/acadia_defconfig
··· 22 22 # CONFIG_INET_XFRM_MODE_TUNNEL is not set 23 23 # CONFIG_INET_XFRM_MODE_BEET is not set 24 24 # CONFIG_IPV6 is not set 25 - CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 26 25 CONFIG_CONNECTOR=y 27 26 CONFIG_MTD=y 28 27 CONFIG_MTD_CMDLINE_PARTS=y
-1
arch/powerpc/configs/40x/ep405_defconfig
··· 21 21 # CONFIG_INET_XFRM_MODE_TUNNEL is not set 22 22 # CONFIG_INET_XFRM_MODE_BEET is not set 23 23 # CONFIG_IPV6 is not set 24 - CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 25 24 CONFIG_CONNECTOR=y 26 25 CONFIG_MTD=y 27 26 CONFIG_MTD_CMDLINE_PARTS=y
-1
arch/powerpc/configs/40x/kilauea_defconfig
··· 24 24 # CONFIG_INET_XFRM_MODE_TUNNEL is not set 25 25 # CONFIG_INET_XFRM_MODE_BEET is not set 26 26 # CONFIG_IPV6 is not set 27 - CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 28 27 CONFIG_CONNECTOR=y 29 28 CONFIG_MTD=y 30 29 CONFIG_MTD_CMDLINE_PARTS=y
-1
arch/powerpc/configs/40x/klondike_defconfig
··· 14 14 # CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set 15 15 CONFIG_MATH_EMULATION=y 16 16 # CONFIG_SUSPEND is not set 17 - CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 18 17 CONFIG_BLK_DEV_RAM=y 19 18 CONFIG_BLK_DEV_RAM_SIZE=35000 20 19 CONFIG_SCSI=y
-1
arch/powerpc/configs/40x/makalu_defconfig
··· 21 21 # CONFIG_INET_XFRM_MODE_TUNNEL is not set 22 22 # CONFIG_INET_XFRM_MODE_BEET is not set 23 23 # CONFIG_IPV6 is not set 24 - CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 25 24 CONFIG_CONNECTOR=y 26 25 CONFIG_MTD=y 27 26 CONFIG_MTD_CMDLINE_PARTS=y
-1
arch/powerpc/configs/40x/obs600_defconfig
··· 24 24 # CONFIG_INET_XFRM_MODE_TUNNEL is not set 25 25 # CONFIG_INET_XFRM_MODE_BEET is not set 26 26 # CONFIG_IPV6 is not set 27 - CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 28 27 CONFIG_CONNECTOR=y 29 28 CONFIG_MTD=y 30 29 CONFIG_MTD_CMDLINE_PARTS=y
-1
arch/powerpc/configs/40x/virtex_defconfig
··· 31 31 CONFIG_IP_NF_IPTABLES=m 32 32 CONFIG_IP_NF_FILTER=m 33 33 CONFIG_IP_NF_MANGLE=m 34 - CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 35 34 CONFIG_BLK_DEV_LOOP=y 36 35 CONFIG_BLK_DEV_RAM=y 37 36 CONFIG_BLK_DEV_RAM_SIZE=8192
-1
arch/powerpc/configs/40x/walnut_defconfig
··· 19 19 # CONFIG_INET_XFRM_MODE_TUNNEL is not set 20 20 # CONFIG_INET_XFRM_MODE_BEET is not set 21 21 # CONFIG_IPV6 is not set 22 - CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 23 22 CONFIG_CONNECTOR=y 24 23 CONFIG_MTD=y 25 24 CONFIG_MTD_CMDLINE_PARTS=y
-1
arch/powerpc/configs/44x/akebono_defconfig
··· 33 33 # CONFIG_INET_XFRM_MODE_TUNNEL is not set 34 34 # CONFIG_INET_XFRM_MODE_BEET is not set 35 35 # CONFIG_IPV6 is not set 36 - CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 37 36 CONFIG_DEVTMPFS=y 38 37 CONFIG_DEVTMPFS_MOUNT=y 39 38 CONFIG_CONNECTOR=y
-1
arch/powerpc/configs/44x/arches_defconfig
··· 24 24 # CONFIG_INET_XFRM_MODE_TUNNEL is not set 25 25 # CONFIG_INET_XFRM_MODE_BEET is not set 26 26 # CONFIG_IPV6 is not set 27 - CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 28 27 CONFIG_CONNECTOR=y 29 28 CONFIG_MTD=y 30 29 CONFIG_MTD_CMDLINE_PARTS=y
-1
arch/powerpc/configs/44x/bamboo_defconfig
··· 22 22 # CONFIG_INET_XFRM_MODE_TUNNEL is not set 23 23 # CONFIG_INET_XFRM_MODE_BEET is not set 24 24 # CONFIG_IPV6 is not set 25 - CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 26 25 CONFIG_CONNECTOR=y 27 26 CONFIG_BLK_DEV_RAM=y 28 27 CONFIG_BLK_DEV_RAM_SIZE=35000
-1
arch/powerpc/configs/44x/bluestone_defconfig
··· 20 20 CONFIG_IP_PNP=y 21 21 CONFIG_IP_PNP_DHCP=y 22 22 CONFIG_IP_PNP_BOOTP=y 23 - CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 24 23 CONFIG_CONNECTOR=y 25 24 CONFIG_MTD=y 26 25 CONFIG_MTD_CMDLINE_PARTS=y
-1
arch/powerpc/configs/44x/canyonlands_defconfig
··· 24 24 # CONFIG_INET_XFRM_MODE_TUNNEL is not set 25 25 # CONFIG_INET_XFRM_MODE_BEET is not set 26 26 # CONFIG_IPV6 is not set 27 - CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 28 27 CONFIG_CONNECTOR=y 29 28 CONFIG_MTD=y 30 29 CONFIG_MTD_CMDLINE_PARTS=y
-1
arch/powerpc/configs/44x/currituck_defconfig
··· 31 31 # CONFIG_INET_XFRM_MODE_TUNNEL is not set 32 32 # CONFIG_INET_XFRM_MODE_BEET is not set 33 33 # CONFIG_IPV6 is not set 34 - CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 35 34 CONFIG_DEVTMPFS=y 36 35 CONFIG_DEVTMPFS_MOUNT=y 37 36 CONFIG_CONNECTOR=y
-1
arch/powerpc/configs/44x/ebony_defconfig
··· 20 20 # CONFIG_INET_XFRM_MODE_TUNNEL is not set 21 21 # CONFIG_INET_XFRM_MODE_BEET is not set 22 22 # CONFIG_IPV6 is not set 23 - CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 24 23 CONFIG_CONNECTOR=y 25 24 CONFIG_MTD=y 26 25 CONFIG_MTD_BLOCK=y
-1
arch/powerpc/configs/44x/eiger_defconfig
··· 25 25 # CONFIG_INET_XFRM_MODE_TUNNEL is not set 26 26 # CONFIG_INET_XFRM_MODE_BEET is not set 27 27 # CONFIG_IPV6 is not set 28 - CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 29 28 CONFIG_CONNECTOR=y 30 29 CONFIG_MTD=y 31 30 CONFIG_MTD_CMDLINE_PARTS=y
-1
arch/powerpc/configs/44x/fsp2_defconfig
··· 44 44 # CONFIG_INET_XFRM_MODE_BEET is not set 45 45 # CONFIG_IPV6 is not set 46 46 CONFIG_VLAN_8021Q=m 47 - CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 48 47 CONFIG_DEVTMPFS=y 49 48 CONFIG_DEVTMPFS_MOUNT=y 50 49 CONFIG_CONNECTOR=y
-1
arch/powerpc/configs/44x/icon_defconfig
··· 24 24 # CONFIG_INET_XFRM_MODE_TUNNEL is not set 25 25 # CONFIG_INET_XFRM_MODE_BEET is not set 26 26 # CONFIG_IPV6 is not set 27 - CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 28 27 CONFIG_CONNECTOR=y 29 28 CONFIG_MTD=y 30 29 CONFIG_MTD_CMDLINE_PARTS=y
-1
arch/powerpc/configs/44x/iss476-smp_defconfig
··· 33 33 # CONFIG_INET_XFRM_MODE_TUNNEL is not set 34 34 # CONFIG_INET_XFRM_MODE_BEET is not set 35 35 # CONFIG_IPV6 is not set 36 - CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 37 36 CONFIG_CONNECTOR=y 38 37 CONFIG_MTD=y 39 38 CONFIG_MTD_BLOCK=y
-1
arch/powerpc/configs/44x/katmai_defconfig
··· 22 22 # CONFIG_INET_XFRM_MODE_TUNNEL is not set 23 23 # CONFIG_INET_XFRM_MODE_BEET is not set 24 24 # CONFIG_IPV6 is not set 25 - CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 26 25 CONFIG_CONNECTOR=y 27 26 CONFIG_MTD=y 28 27 CONFIG_MTD_CMDLINE_PARTS=y
-1
arch/powerpc/configs/44x/rainier_defconfig
··· 23 23 # CONFIG_INET_XFRM_MODE_TUNNEL is not set 24 24 # CONFIG_INET_XFRM_MODE_BEET is not set 25 25 # CONFIG_IPV6 is not set 26 - CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 27 26 CONFIG_CONNECTOR=y 28 27 CONFIG_MTD=y 29 28 CONFIG_MTD_CMDLINE_PARTS=y
-1
arch/powerpc/configs/44x/redwood_defconfig
··· 25 25 # CONFIG_INET_XFRM_MODE_TUNNEL is not set 26 26 # CONFIG_INET_XFRM_MODE_BEET is not set 27 27 # CONFIG_IPV6 is not set 28 - CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 29 28 CONFIG_CONNECTOR=y 30 29 CONFIG_MTD=y 31 30 CONFIG_MTD_CMDLINE_PARTS=y
-1
arch/powerpc/configs/44x/sam440ep_defconfig
··· 27 27 # CONFIG_INET_XFRM_MODE_TUNNEL is not set 28 28 # CONFIG_INET_XFRM_MODE_BEET is not set 29 29 # CONFIG_IPV6 is not set 30 - CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 31 30 CONFIG_CONNECTOR=y 32 31 CONFIG_BLK_DEV_LOOP=y 33 32 CONFIG_BLK_DEV_RAM=y
-1
arch/powerpc/configs/44x/sequoia_defconfig
··· 24 24 # CONFIG_INET_XFRM_MODE_TUNNEL is not set 25 25 # CONFIG_INET_XFRM_MODE_BEET is not set 26 26 # CONFIG_IPV6 is not set 27 - CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 28 27 CONFIG_CONNECTOR=y 29 28 CONFIG_MTD=y 30 29 CONFIG_MTD_CMDLINE_PARTS=y
-1
arch/powerpc/configs/44x/taishan_defconfig
··· 22 22 # CONFIG_INET_XFRM_MODE_TUNNEL is not set 23 23 # CONFIG_INET_XFRM_MODE_BEET is not set 24 24 # CONFIG_IPV6 is not set 25 - CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 26 25 CONFIG_CONNECTOR=y 27 26 CONFIG_MTD=y 28 27 CONFIG_MTD_CMDLINE_PARTS=y
-1
arch/powerpc/configs/44x/virtex5_defconfig
··· 30 30 CONFIG_IP_NF_IPTABLES=m 31 31 CONFIG_IP_NF_FILTER=m 32 32 CONFIG_IP_NF_MANGLE=m 33 - CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 34 33 CONFIG_BLK_DEV_LOOP=y 35 34 CONFIG_BLK_DEV_RAM=y 36 35 CONFIG_BLK_DEV_RAM_SIZE=8192
-1
arch/powerpc/configs/44x/warp_defconfig
··· 26 26 # CONFIG_IPV6 is not set 27 27 CONFIG_NETFILTER=y 28 28 CONFIG_VLAN_8021Q=y 29 - CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 30 29 # CONFIG_STANDALONE is not set 31 30 CONFIG_MTD=y 32 31 CONFIG_MTD_CMDLINE_PARTS=y
-1
arch/powerpc/configs/52xx/cm5200_defconfig
··· 23 23 CONFIG_IP_PNP_BOOTP=y 24 24 CONFIG_SYN_COOKIES=y 25 25 # CONFIG_IPV6 is not set 26 - CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 27 26 # CONFIG_FW_LOADER is not set 28 27 CONFIG_MTD=y 29 28 CONFIG_MTD_CMDLINE_PARTS=y
-1
arch/powerpc/configs/52xx/lite5200b_defconfig
··· 26 26 CONFIG_IP_PNP_BOOTP=y 27 27 CONFIG_SYN_COOKIES=y 28 28 # CONFIG_IPV6 is not set 29 - CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 30 29 # CONFIG_FW_LOADER is not set 31 30 CONFIG_BLK_DEV_LOOP=y 32 31 CONFIG_BLK_DEV_RAM=y
-1
arch/powerpc/configs/52xx/motionpro_defconfig
··· 23 23 CONFIG_IP_PNP_BOOTP=y 24 24 CONFIG_SYN_COOKIES=y 25 25 # CONFIG_IPV6 is not set 26 - CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 27 26 # CONFIG_FW_LOADER is not set 28 27 CONFIG_MTD=y 29 28 CONFIG_MTD_CMDLINE_PARTS=y
-1
arch/powerpc/configs/52xx/pcm030_defconfig
··· 36 36 # CONFIG_INET_XFRM_MODE_BEET is not set 37 37 # CONFIG_INET_DIAG is not set 38 38 # CONFIG_IPV6 is not set 39 - CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 40 39 # CONFIG_FW_LOADER is not set 41 40 CONFIG_MTD=y 42 41 CONFIG_MTD_CMDLINE_PARTS=y
-1
arch/powerpc/configs/52xx/tqm5200_defconfig
··· 27 27 CONFIG_IP_PNP_BOOTP=y 28 28 CONFIG_SYN_COOKIES=y 29 29 # CONFIG_IPV6 is not set 30 - CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 31 30 # CONFIG_FW_LOADER is not set 32 31 CONFIG_MTD=y 33 32 CONFIG_MTD_CMDLINE_PARTS=y
-1
arch/powerpc/configs/83xx/asp8347_defconfig
··· 27 27 CONFIG_IP_PNP_BOOTP=y 28 28 CONFIG_SYN_COOKIES=y 29 29 # CONFIG_IPV6 is not set 30 - CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 31 30 # CONFIG_FW_LOADER is not set 32 31 CONFIG_MTD=y 33 32 CONFIG_MTD_REDBOOT_PARTS=y
-1
arch/powerpc/configs/83xx/mpc8313_rdb_defconfig
··· 24 24 CONFIG_IP_PNP_BOOTP=y 25 25 CONFIG_SYN_COOKIES=y 26 26 # CONFIG_IPV6 is not set 27 - CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 28 27 # CONFIG_FW_LOADER is not set 29 28 CONFIG_MTD=y 30 29 CONFIG_MTD_BLOCK=y
-1
arch/powerpc/configs/83xx/mpc8315_rdb_defconfig
··· 24 24 CONFIG_IP_PNP_BOOTP=y 25 25 CONFIG_SYN_COOKIES=y 26 26 # CONFIG_IPV6 is not set 27 - CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 28 27 # CONFIG_FW_LOADER is not set 29 28 CONFIG_MTD=y 30 29 CONFIG_MTD_BLOCK=y
-1
arch/powerpc/configs/83xx/mpc832x_mds_defconfig
··· 26 26 CONFIG_IP_PNP_BOOTP=y 27 27 CONFIG_SYN_COOKIES=y 28 28 # CONFIG_IPV6 is not set 29 - CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 30 29 # CONFIG_FW_LOADER is not set 31 30 CONFIG_BLK_DEV_LOOP=y 32 31 CONFIG_BLK_DEV_RAM=y
-1
arch/powerpc/configs/83xx/mpc832x_rdb_defconfig
··· 27 27 CONFIG_IP_PNP_BOOTP=y 28 28 CONFIG_SYN_COOKIES=y 29 29 # CONFIG_IPV6 is not set 30 - CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 31 30 # CONFIG_FW_LOADER is not set 32 31 CONFIG_BLK_DEV_LOOP=y 33 32 CONFIG_BLK_DEV_RAM=y
-1
arch/powerpc/configs/83xx/mpc834x_itx_defconfig
··· 25 25 CONFIG_IP_PNP_BOOTP=y 26 26 CONFIG_SYN_COOKIES=y 27 27 # CONFIG_IPV6 is not set 28 - CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 29 28 # CONFIG_FW_LOADER is not set 30 29 CONFIG_MTD=y 31 30 CONFIG_MTD_CFI=y
-1
arch/powerpc/configs/83xx/mpc834x_itxgp_defconfig
··· 25 25 CONFIG_IP_PNP_BOOTP=y 26 26 CONFIG_SYN_COOKIES=y 27 27 # CONFIG_IPV6 is not set 28 - CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 29 28 # CONFIG_FW_LOADER is not set 30 29 CONFIG_MTD=y 31 30 CONFIG_MTD_CFI=y
-1
arch/powerpc/configs/83xx/mpc834x_mds_defconfig
··· 26 26 CONFIG_IP_PNP_BOOTP=y 27 27 CONFIG_SYN_COOKIES=y 28 28 # CONFIG_IPV6 is not set 29 - CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 30 29 # CONFIG_FW_LOADER is not set 31 30 CONFIG_BLK_DEV_LOOP=y 32 31 CONFIG_BLK_DEV_RAM=y
-1
arch/powerpc/configs/83xx/mpc836x_mds_defconfig
··· 25 25 CONFIG_IP_PNP_BOOTP=y 26 26 CONFIG_SYN_COOKIES=y 27 27 # CONFIG_IPV6 is not set 28 - CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 29 28 # CONFIG_FW_LOADER is not set 30 29 CONFIG_MTD=y 31 30 CONFIG_MTD_CMDLINE_PARTS=y
-1
arch/powerpc/configs/83xx/mpc836x_rdk_defconfig
··· 24 24 CONFIG_IP_PNP_BOOTP=y 25 25 CONFIG_SYN_COOKIES=y 26 26 # CONFIG_IPV6 is not set 27 - CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 28 27 CONFIG_MTD=y 29 28 CONFIG_MTD_CMDLINE_PARTS=y 30 29 CONFIG_MTD_BLOCK=y
-1
arch/powerpc/configs/83xx/mpc837x_mds_defconfig
··· 24 24 CONFIG_IP_PNP_BOOTP=y 25 25 CONFIG_SYN_COOKIES=y 26 26 # CONFIG_IPV6 is not set 27 - CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 28 27 # CONFIG_FW_LOADER is not set 29 28 CONFIG_BLK_DEV_LOOP=y 30 29 CONFIG_BLK_DEV_RAM=y
-1
arch/powerpc/configs/83xx/mpc837x_rdb_defconfig
··· 26 26 # CONFIG_INET_XFRM_MODE_TUNNEL is not set 27 27 # CONFIG_INET_XFRM_MODE_BEET is not set 28 28 # CONFIG_IPV6 is not set 29 - CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 30 29 # CONFIG_FW_LOADER is not set 31 30 CONFIG_BLK_DEV_LOOP=y 32 31 CONFIG_BLK_DEV_RAM=y
-1
arch/powerpc/configs/85xx/ge_imp3a_defconfig
··· 65 65 CONFIG_INET6_IPCOMP=m 66 66 CONFIG_IPV6_TUNNEL=m 67 67 CONFIG_NET_PKTGEN=m 68 - CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 69 68 CONFIG_MTD=y 70 69 CONFIG_MTD_BLOCK=y 71 70 CONFIG_MTD_CFI=y
-1
arch/powerpc/configs/85xx/ksi8560_defconfig
··· 23 23 CONFIG_IP_PNP_BOOTP=y 24 24 CONFIG_SYN_COOKIES=y 25 25 # CONFIG_IPV6 is not set 26 - CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 27 26 # CONFIG_FW_LOADER is not set 28 27 CONFIG_MTD=y 29 28 CONFIG_MTD_BLOCK=y
-1
arch/powerpc/configs/85xx/mpc8540_ads_defconfig
··· 24 24 CONFIG_IP_PNP_BOOTP=y 25 25 CONFIG_SYN_COOKIES=y 26 26 # CONFIG_IPV6 is not set 27 - CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 28 27 # CONFIG_FW_LOADER is not set 29 28 CONFIG_BLK_DEV_LOOP=y 30 29 CONFIG_BLK_DEV_RAM=y
-1
arch/powerpc/configs/85xx/mpc8560_ads_defconfig
··· 23 23 CONFIG_IP_PNP_BOOTP=y 24 24 CONFIG_SYN_COOKIES=y 25 25 # CONFIG_IPV6 is not set 26 - CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 27 26 # CONFIG_FW_LOADER is not set 28 27 CONFIG_BLK_DEV_LOOP=y 29 28 CONFIG_BLK_DEV_RAM=y
-1
arch/powerpc/configs/85xx/mpc85xx_cds_defconfig
··· 25 25 CONFIG_IP_PNP_BOOTP=y 26 26 CONFIG_SYN_COOKIES=y 27 27 # CONFIG_IPV6 is not set 28 - CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 29 28 # CONFIG_FW_LOADER is not set 30 29 CONFIG_BLK_DEV_LOOP=y 31 30 CONFIG_BLK_DEV_RAM=y
-1
arch/powerpc/configs/85xx/sbc8548_defconfig
··· 22 22 CONFIG_IP_PNP_BOOTP=y 23 23 CONFIG_SYN_COOKIES=y 24 24 # CONFIG_IPV6 is not set 25 - CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 26 25 # CONFIG_FW_LOADER is not set 27 26 CONFIG_MTD=y 28 27 CONFIG_MTD_BLOCK=y
-1
arch/powerpc/configs/85xx/stx_gp3_defconfig
··· 22 22 CONFIG_IP_NF_IPTABLES=m 23 23 CONFIG_IP_NF_FILTER=m 24 24 CONFIG_NET_PKTGEN=y 25 - CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 26 25 # CONFIG_FW_LOADER is not set 27 26 CONFIG_PARPORT=m 28 27 CONFIG_PARPORT_PC=m
arch/powerpc/configs/85xx/tqm8548_defconfig (1 deletion)

 CONFIG_IP_PNP_BOOTP=y
 CONFIG_SYN_COOKIES=y
 # CONFIG_IPV6 is not set
-CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
 # CONFIG_FW_LOADER is not set
 CONFIG_MTD=y
 CONFIG_MTD_CFI=y
arch/powerpc/configs/85xx/xes_mpc85xx_defconfig (1 deletion)

 # CONFIG_INET_XFRM_MODE_TRANSPORT is not set
 # CONFIG_INET_XFRM_MODE_TUNNEL is not set
 # CONFIG_INET_XFRM_MODE_BEET is not set
-CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
 CONFIG_MTD=y
 CONFIG_MTD_REDBOOT_PARTS=y
 CONFIG_MTD_CMDLINE_PARTS=y
arch/powerpc/configs/adder875_defconfig (1 deletion)

 # CONFIG_INET_XFRM_MODE_TUNNEL is not set
 # CONFIG_INET_XFRM_MODE_BEET is not set
 # CONFIG_IPV6 is not set
-CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
 # CONFIG_FW_LOADER is not set
 CONFIG_MTD=y
 CONFIG_MTD_BLOCK=y
arch/powerpc/configs/amigaone_defconfig (1 deletion)

 # CONFIG_NETFILTER_XT_MATCH_CONNTRACK is not set
 # CONFIG_NETFILTER_XT_MATCH_STATE is not set
 # CONFIG_IP_NF_MANGLE is not set
-CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
 # CONFIG_STANDALONE is not set
 CONFIG_PARPORT=y
 CONFIG_PARPORT_PC=y
arch/powerpc/configs/cell_defconfig (1 deletion)

 CONFIG_IP_NF_ARPTABLES=m
 CONFIG_IP_NF_ARPFILTER=m
 CONFIG_IP_NF_ARP_MANGLE=m
-CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
 CONFIG_BLK_DEV_LOOP=y
 CONFIG_BLK_DEV_RAM=y
 CONFIG_BLK_DEV_RAM_SIZE=131072
arch/powerpc/configs/chrp32_defconfig (1 deletion)

 # CONFIG_NETFILTER_XT_MATCH_CONNTRACK is not set
 # CONFIG_NETFILTER_XT_MATCH_STATE is not set
 # CONFIG_IP_NF_MANGLE is not set
-CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
 # CONFIG_STANDALONE is not set
 CONFIG_BLK_DEV_FD=y
 CONFIG_BLK_DEV_LOOP=y
arch/powerpc/configs/ep8248e_defconfig (1 deletion)

 CONFIG_IP_PNP_BOOTP=y
 CONFIG_SYN_COOKIES=y
 CONFIG_NETFILTER=y
-CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
 # CONFIG_FW_LOADER is not set
 CONFIG_MTD=y
 CONFIG_MTD_BLOCK=y
arch/powerpc/configs/ep88xc_defconfig (1 deletion)

 # CONFIG_INET_XFRM_MODE_TUNNEL is not set
 # CONFIG_INET_XFRM_MODE_BEET is not set
 # CONFIG_IPV6 is not set
-CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
 # CONFIG_FW_LOADER is not set
 CONFIG_MTD=y
 CONFIG_MTD_BLOCK=y
arch/powerpc/configs/fsl-emb-nonhw.config (1 deletion)

 CONFIG_TMPFS=y
 CONFIG_UBIFS_FS=y
 CONFIG_UDF_FS=m
-CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
 CONFIG_UFS_FS=m
 CONFIG_UIO=y
 CONFIG_UNIX=y
arch/powerpc/configs/g5_defconfig (2 deletions)

 CONFIG_NF_CONNTRACK_TFTP=m
 CONFIG_NF_CT_NETLINK=m
 CONFIG_NF_CONNTRACK_IPV4=m
-CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
 CONFIG_DEVTMPFS=y
 CONFIG_DEVTMPFS_MOUNT=y
 CONFIG_BLK_DEV_LOOP=y
···
 CONFIG_MAGIC_SYSRQ=y
 CONFIG_DEBUG_KERNEL=y
 CONFIG_DEBUG_MUTEXES=y
-CONFIG_LATENCYTOP=y
 CONFIG_BOOTX_TEXT=y
 CONFIG_CRYPTO_TEST=m
 CONFIG_CRYPTO_PCBC=m
arch/powerpc/configs/gamecube_defconfig (2 deletions)

 # CONFIG_INET_DIAG is not set
 # CONFIG_IPV6 is not set
 # CONFIG_WIRELESS is not set
-CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
 # CONFIG_STANDALONE is not set
 # CONFIG_FW_LOADER is not set
 CONFIG_BLK_DEV_LOOP=y
···
 CONFIG_PRINTK_TIME=y
 CONFIG_DEBUG_SPINLOCK=y
 CONFIG_DEBUG_MUTEXES=y
-CONFIG_LATENCYTOP=y
 CONFIG_SCHED_TRACER=y
 CONFIG_DMA_API_DEBUG=y
 CONFIG_PPC_EARLY_DEBUG=y
arch/powerpc/configs/holly_defconfig (1 deletion)

 CONFIG_IP_PNP_BOOTP=y
 CONFIG_SYN_COOKIES=y
 # CONFIG_IPV6 is not set
-CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
 # CONFIG_FW_LOADER is not set
 CONFIG_BLK_DEV_LOOP=y
 CONFIG_BLK_DEV_RAM=y
arch/powerpc/configs/linkstation_defconfig (1 deletion)

 CONFIG_IP_NF_ARPTABLES=m
 CONFIG_IP_NF_ARPFILTER=m
 CONFIG_IP_NF_ARP_MANGLE=m
-CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
 CONFIG_MTD=y
 CONFIG_MTD_CMDLINE_PARTS=y
 CONFIG_MTD_BLOCK=y
arch/powerpc/configs/maple_defconfig (2 deletions)

 CONFIG_IP_PNP=y
 CONFIG_IP_PNP_DHCP=y
 # CONFIG_IPV6 is not set
-CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
 CONFIG_BLK_DEV_RAM=y
 CONFIG_BLK_DEV_RAM_SIZE=8192
 # CONFIG_SCSI_PROC_FS is not set
···
 CONFIG_DEBUG_KERNEL=y
 CONFIG_DEBUG_STACK_USAGE=y
 CONFIG_DEBUG_STACKOVERFLOW=y
-CONFIG_LATENCYTOP=y
 CONFIG_XMON=y
 CONFIG_XMON_DEFAULT=y
 CONFIG_BOOTX_TEXT=y
arch/powerpc/configs/mgcoge_defconfig (1 deletion)

 # CONFIG_IPV6 is not set
 CONFIG_NETFILTER=y
 CONFIG_TIPC=y
-CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
 # CONFIG_FW_LOADER is not set
 CONFIG_MTD=y
 CONFIG_MTD_CMDLINE_PARTS=y
arch/powerpc/configs/mpc512x_defconfig (1 deletion)

 CONFIG_CAN_MSCAN=y
 CONFIG_CAN_DEBUG_DEVICES=y
 # CONFIG_WIRELESS is not set
-CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
 CONFIG_DEVTMPFS=y
 CONFIG_DEVTMPFS_MOUNT=y
 # CONFIG_PREVENT_FIRMWARE_BUILD is not set
arch/powerpc/configs/mpc5200_defconfig (1 deletion)

 CONFIG_IP_PNP_BOOTP=y
 CONFIG_SYN_COOKIES=y
 # CONFIG_IPV6 is not set
-CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
 CONFIG_MTD=y
 CONFIG_MTD_CMDLINE_PARTS=y
 CONFIG_MTD_BLOCK=y
arch/powerpc/configs/mpc7448_hpc2_defconfig (1 deletion)

 CONFIG_IP_PNP_BOOTP=y
 CONFIG_SYN_COOKIES=y
 # CONFIG_IPV6 is not set
-CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
 # CONFIG_FW_LOADER is not set
 CONFIG_BLK_DEV_LOOP=y
 CONFIG_BLK_DEV_RAM=y
arch/powerpc/configs/mpc8272_ads_defconfig (1 deletion)

 CONFIG_IP_PNP_BOOTP=y
 CONFIG_SYN_COOKIES=y
 CONFIG_NETFILTER=y
-CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
 # CONFIG_FW_LOADER is not set
 CONFIG_MTD=y
 CONFIG_MTD_BLOCK=y
arch/powerpc/configs/mpc83xx_defconfig (1 deletion)

 CONFIG_SYN_COOKIES=y
 CONFIG_INET_ESP=y
 # CONFIG_IPV6 is not set
-CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
 CONFIG_DEVTMPFS=y
 CONFIG_DEVTMPFS_MOUNT=y
 # CONFIG_FW_LOADER is not set
arch/powerpc/configs/mpc885_ads_defconfig (1 deletion)

 # CONFIG_INET_XFRM_MODE_TUNNEL is not set
 # CONFIG_INET_XFRM_MODE_BEET is not set
 # CONFIG_IPV6 is not set
-CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
 # CONFIG_FW_LOADER is not set
 CONFIG_MTD=y
 CONFIG_MTD_BLOCK=y
arch/powerpc/configs/mvme5100_defconfig (1 deletion)

 CONFIG_IP_NF_ARPFILTER=m
 CONFIG_IP_NF_ARP_MANGLE=m
 CONFIG_LAPB=m
-CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
 CONFIG_BLK_DEV_LOOP=y
 CONFIG_BLK_DEV_RAM=y
 CONFIG_BLK_DEV_RAM_COUNT=2
arch/powerpc/configs/pasemi_defconfig (1 deletion)

 CONFIG_INET_AH=y
 CONFIG_INET_ESP=y
 # CONFIG_IPV6 is not set
-CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
 CONFIG_DEVTMPFS=y
 CONFIG_DEVTMPFS_MOUNT=y
 CONFIG_MTD=y
arch/powerpc/configs/pmac32_defconfig (2 deletions)

 CONFIG_CFG80211=m
 CONFIG_MAC80211=m
 CONFIG_MAC80211_LEDS=y
-CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
 # CONFIG_STANDALONE is not set
 CONFIG_CONNECTOR=y
 CONFIG_MAC_FLOPPY=m
···
 CONFIG_MAGIC_SYSRQ=y
 CONFIG_DEBUG_KERNEL=y
 CONFIG_DETECT_HUNG_TASK=y
-CONFIG_LATENCYTOP=y
 CONFIG_XMON=y
 CONFIG_XMON_DEFAULT=y
 CONFIG_BOOTX_TEXT=y
arch/powerpc/configs/powernv_defconfig (2 deletions)

 CONFIG_DNS_RESOLVER=y
 CONFIG_BPF_JIT=y
 # CONFIG_WIRELESS is not set
-CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
 CONFIG_DEVTMPFS=y
 CONFIG_DEVTMPFS_MOUNT=y
 CONFIG_MTD=y
···
 CONFIG_DEBUG_STACKOVERFLOW=y
 CONFIG_SOFTLOCKUP_DETECTOR=y
 CONFIG_HARDLOCKUP_DETECTOR=y
-CONFIG_LATENCYTOP=y
 CONFIG_FUNCTION_TRACER=y
 CONFIG_SCHED_TRACER=y
 CONFIG_FTRACE_SYSCALLS=y
arch/powerpc/configs/ppc40x_defconfig (1 deletion)

 # CONFIG_INET_XFRM_MODE_TRANSPORT is not set
 # CONFIG_INET_XFRM_MODE_TUNNEL is not set
 # CONFIG_INET_XFRM_MODE_BEET is not set
-CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
 CONFIG_CONNECTOR=y
 CONFIG_MTD=y
 CONFIG_MTD_CMDLINE_PARTS=y
arch/powerpc/configs/ppc44x_defconfig (1 deletion)

 # CONFIG_INET_XFRM_MODE_TUNNEL is not set
 # CONFIG_INET_XFRM_MODE_BEET is not set
 CONFIG_BRIDGE=m
-CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
 CONFIG_CONNECTOR=y
 CONFIG_MTD=y
 CONFIG_MTD_BLOCK=y
arch/powerpc/configs/ppc64_defconfig (1 addition, 3 deletions)

 CONFIG_INET_AH=m
 CONFIG_INET_ESP=m
 CONFIG_INET_IPCOMP=m
-# CONFIG_IPV6 is not set
+CONFIG_IPV6=y
 CONFIG_NETFILTER=y
 # CONFIG_NETFILTER_ADVANCED is not set
 CONFIG_BRIDGE=m
···
 CONFIG_NET_CLS_ACT=y
 CONFIG_NET_ACT_BPF=m
 CONFIG_BPF_JIT=y
-CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
 CONFIG_DEVTMPFS=y
 CONFIG_DEVTMPFS_MOUNT=y
 CONFIG_BLK_DEV_FD=y
···
 CONFIG_SOFTLOCKUP_DETECTOR=y
 CONFIG_HARDLOCKUP_DETECTOR=y
 CONFIG_DEBUG_MUTEXES=y
-CONFIG_LATENCYTOP=y
 CONFIG_FUNCTION_TRACER=y
 CONFIG_SCHED_TRACER=y
 CONFIG_BLK_DEV_IO_TRACE=y
arch/powerpc/configs/ppc64e_defconfig (2 deletions)

 CONFIG_NETFILTER=y
 # CONFIG_NETFILTER_ADVANCED is not set
 CONFIG_BRIDGE=m
-CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
 CONFIG_DEVTMPFS=y
 CONFIG_DEVTMPFS_MOUNT=y
 CONFIG_BLK_DEV_FD=y
···
 CONFIG_DEBUG_STACKOVERFLOW=y
 CONFIG_DETECT_HUNG_TASK=y
 CONFIG_DEBUG_MUTEXES=y
-CONFIG_LATENCYTOP=y
 CONFIG_IRQSOFF_TRACER=y
 CONFIG_SCHED_TRACER=y
 CONFIG_BLK_DEV_IO_TRACE=y
arch/powerpc/configs/ppc6xx_defconfig (2 deletions)

 CONFIG_MAC80211_DEBUGFS=y
 CONFIG_NET_9P=m
 CONFIG_NET_9P_VIRTIO=m
-CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
 CONFIG_DEBUG_DEVRES=y
 CONFIG_CONNECTOR=y
 CONFIG_PARPORT=m
···
 CONFIG_FAIL_IO_TIMEOUT=y
 CONFIG_FAULT_INJECTION_DEBUG_FS=y
 CONFIG_FAULT_INJECTION_STACKTRACE_FILTER=y
-CONFIG_LATENCYTOP=y
 CONFIG_SCHED_TRACER=y
 CONFIG_STACK_TRACER=y
 CONFIG_BLK_DEV_IO_TRACE=y
arch/powerpc/configs/pq2fads_defconfig (1 deletion)

 CONFIG_IP_PNP_BOOTP=y
 CONFIG_SYN_COOKIES=y
 CONFIG_NETFILTER=y
-CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
 # CONFIG_FW_LOADER is not set
 CONFIG_MTD=y
 CONFIG_MTD_BLOCK=y
arch/powerpc/configs/ps3_defconfig (1 deletion)

 CONFIG_CFG80211_WEXT=y
 CONFIG_MAC80211=m
 # CONFIG_MAC80211_RC_MINSTREL is not set
-CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
 CONFIG_BLK_DEV_LOOP=y
 CONFIG_BLK_DEV_RAM=y
 CONFIG_BLK_DEV_RAM_SIZE=65535
arch/powerpc/configs/pseries_defconfig (2 deletions)

 CONFIG_NET_CLS_ACT=y
 CONFIG_NET_ACT_BPF=m
 CONFIG_BPF_JIT=y
-CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
 CONFIG_DEVTMPFS=y
 CONFIG_DEVTMPFS_MOUNT=y
 CONFIG_PARPORT=m
···
 CONFIG_DEBUG_STACKOVERFLOW=y
 CONFIG_SOFTLOCKUP_DETECTOR=y
 CONFIG_HARDLOCKUP_DETECTOR=y
-CONFIG_LATENCYTOP=y
 CONFIG_FUNCTION_TRACER=y
 CONFIG_SCHED_TRACER=y
 CONFIG_BLK_DEV_IO_TRACE=y
arch/powerpc/configs/skiroot_defconfig (1 deletion)

 # CONFIG_INET_XFRM_MODE_BEET is not set
 CONFIG_DNS_RESOLVER=y
 # CONFIG_WIRELESS is not set
-CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
 CONFIG_DEVTMPFS=y
 CONFIG_DEVTMPFS_MOUNT=y
 CONFIG_MTD=m
arch/powerpc/configs/storcenter_defconfig (1 deletion)

 # CONFIG_INET_XFRM_MODE_TUNNEL is not set
 # CONFIG_INET_XFRM_MODE_BEET is not set
 # CONFIG_IPV6 is not set
-CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
 CONFIG_MTD=y
 CONFIG_MTD_CMDLINE_PARTS=y
 CONFIG_MTD_BLOCK=y
arch/powerpc/configs/tqm8xx_defconfig (1 deletion)

 # CONFIG_INET_XFRM_MODE_BEET is not set
 # CONFIG_IPV6 is not set
 # CONFIG_WIRELESS is not set
-CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
 # CONFIG_FW_LOADER is not set
 CONFIG_MTD=y
 CONFIG_MTD_CMDLINE_PARTS=y
arch/powerpc/configs/wii_defconfig (2 deletions)

 CONFIG_BT_HIDP=y
 CONFIG_CFG80211=y
 CONFIG_MAC80211=y
-CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
 # CONFIG_STANDALONE is not set
 CONFIG_BLK_DEV_LOOP=y
 CONFIG_BLK_DEV_RAM=y
···
 CONFIG_MAGIC_SYSRQ=y
 CONFIG_DEBUG_SPINLOCK=y
 CONFIG_DEBUG_MUTEXES=y
-CONFIG_LATENCYTOP=y
 CONFIG_SCHED_TRACER=y
 CONFIG_BLK_DEV_IO_TRACE=y
 CONFIG_DMA_API_DEBUG=y
arch/powerpc/include/asm/book3s/64/mmu.h (2 deletions)

     /* Number of users of the external (Nest) MMU */
     atomic_t copros;
 
-    /* NPU NMMU context */
-    struct npu_context *npu_context;
     struct hash_mm_context *hash_context;
 
     unsigned long vdso_base;
arch/powerpc/include/asm/book3s/64/pgtable.h (29 additions, 1 deletion)

 #define VMALLOC_START __vmalloc_start
 #define VMALLOC_END __vmalloc_end
 
+static inline unsigned int ioremap_max_order(void)
+{
+    if (radix_enabled())
+        return PUD_SHIFT;
+    return 7 + PAGE_SHIFT; /* default from linux/vmalloc.h */
+}
+#define IOREMAP_MAX_ORDER ioremap_max_order()
+
 extern unsigned long __kernel_virt_start;
-extern unsigned long __kernel_virt_size;
 extern unsigned long __kernel_io_start;
 extern unsigned long __kernel_io_end;
 #define KERN_VIRT_START __kernel_virt_start
···
         return true;
 
     return false;
+}
+
+/*
+ * Like pmd_huge() and pmd_large(), but works regardless of config options
+ */
+#define pmd_is_leaf pmd_is_leaf
+static inline bool pmd_is_leaf(pmd_t pmd)
+{
+    return !!(pmd_raw(pmd) & cpu_to_be64(_PAGE_PTE));
+}
+
+#define pud_is_leaf pud_is_leaf
+static inline bool pud_is_leaf(pud_t pud)
+{
+    return !!(pud_raw(pud) & cpu_to_be64(_PAGE_PTE));
+}
+
+#define pgd_is_leaf pgd_is_leaf
+static inline bool pgd_is_leaf(pgd_t pgd)
+{
+    return !!(pgd_raw(pgd) & cpu_to_be64(_PAGE_PTE));
 }
 
 #endif /* __ASSEMBLY__ */
arch/powerpc/include/asm/book3s/64/radix.h (3 additions)

 extern int radix__map_kernel_page(unsigned long ea, unsigned long pa,
                   pgprot_t flags, unsigned int psz);
 
+extern int radix__ioremap_range(unsigned long ea, phys_addr_t pa,
+                unsigned long size, pgprot_t prot, int nid);
+
 static inline unsigned long radix__get_tree_size(void)
 {
     unsigned long rts_field;
arch/powerpc/include/asm/cache.h (28 additions, 6 deletions)

 
 #define IFETCH_ALIGN_BYTES (1 << IFETCH_ALIGN_SHIFT)
 
-#if defined(__powerpc64__) && !defined(__ASSEMBLY__)
+#if !defined(__ASSEMBLY__)
+#ifdef CONFIG_PPC64
 
 struct ppc_cache_info {
     u32 size;
···
 };
 
 extern struct ppc64_caches ppc64_caches;
-#endif /* __powerpc64__ && ! __ASSEMBLY__ */
+
+static inline u32 l1_cache_shift(void)
+{
+    return ppc64_caches.l1d.log_block_size;
+}
+
+static inline u32 l1_cache_bytes(void)
+{
+    return ppc64_caches.l1d.block_size;
+}
+#else
+static inline u32 l1_cache_shift(void)
+{
+    return L1_CACHE_SHIFT;
+}
+
+static inline u32 l1_cache_bytes(void)
+{
+    return L1_CACHE_BYTES;
+}
+#endif
+#endif /* ! __ASSEMBLY__ */
 
 #if defined(__ASSEMBLY__)
 /*
···
 
 static inline void dcbz(void *addr)
 {
-    __asm__ __volatile__ ("dcbz 0, %0" : : "r"(addr) : "memory");
+    __asm__ __volatile__ ("dcbz %y0" : : "Z"(*(u8 *)addr) : "memory");
 }
 
 static inline void dcbi(void *addr)
 {
-    __asm__ __volatile__ ("dcbi 0, %0" : : "r"(addr) : "memory");
+    __asm__ __volatile__ ("dcbi %y0" : : "Z"(*(u8 *)addr) : "memory");
 }
 
 static inline void dcbf(void *addr)
 {
-    __asm__ __volatile__ ("dcbf 0, %0" : : "r"(addr) : "memory");
+    __asm__ __volatile__ ("dcbf %y0" : : "Z"(*(u8 *)addr) : "memory");
 }
 
 static inline void dcbst(void *addr)
 {
-    __asm__ __volatile__ ("dcbst 0, %0" : : "r"(addr) : "memory");
+    __asm__ __volatile__ ("dcbst %y0" : : "Z"(*(u8 *)addr) : "memory");
 }
 #endif /* !__ASSEMBLY__ */
 #endif /* __KERNEL__ */
arch/powerpc/include/asm/cacheflush.h (28 additions, 18 deletions)

  * not expect this type of fault. flush_cache_vmap is not exactly the right
  * place to put this, but it seems to work well enough.
  */
-#define flush_cache_vmap(start, end) do { asm volatile("ptesync" ::: "memory"); } while (0)
+static inline void flush_cache_vmap(unsigned long start, unsigned long end)
+{
+    asm volatile("ptesync" ::: "memory");
+}
 #else
-#define flush_cache_vmap(start, end) do { } while (0)
+static inline void flush_cache_vmap(unsigned long start, unsigned long end) { }
 #endif
 
 #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
···
 }
 #endif
 
-#ifdef CONFIG_PPC32
 /*
  * Write any modified data cache blocks out to memory and invalidate them.
  * Does not invalidate the corresponding instruction cache blocks.
  */
 static inline void flush_dcache_range(unsigned long start, unsigned long stop)
 {
-    void *addr = (void *)(start & ~(L1_CACHE_BYTES - 1));
-    unsigned long size = stop - (unsigned long)addr + (L1_CACHE_BYTES - 1);
+    unsigned long shift = l1_cache_shift();
+    unsigned long bytes = l1_cache_bytes();
+    void *addr = (void *)(start & ~(bytes - 1));
+    unsigned long size = stop - (unsigned long)addr + (bytes - 1);
     unsigned long i;
 
-    for (i = 0; i < size >> L1_CACHE_SHIFT; i++, addr += L1_CACHE_BYTES)
+    if (IS_ENABLED(CONFIG_PPC64)) {
+        mb();   /* sync */
+        isync();
+    }
+
+    for (i = 0; i < size >> shift; i++, addr += bytes)
         dcbf(addr);
     mb();   /* sync */
+
+    if (IS_ENABLED(CONFIG_PPC64))
+        isync();
 }
 
 /*
···
  */
 static inline void clean_dcache_range(unsigned long start, unsigned long stop)
 {
-    void *addr = (void *)(start & ~(L1_CACHE_BYTES - 1));
-    unsigned long size = stop - (unsigned long)addr + (L1_CACHE_BYTES - 1);
+    unsigned long shift = l1_cache_shift();
+    unsigned long bytes = l1_cache_bytes();
+    void *addr = (void *)(start & ~(bytes - 1));
+    unsigned long size = stop - (unsigned long)addr + (bytes - 1);
     unsigned long i;
 
-    for (i = 0; i < size >> L1_CACHE_SHIFT; i++, addr += L1_CACHE_BYTES)
+    for (i = 0; i < size >> shift; i++, addr += bytes)
         dcbst(addr);
     mb();   /* sync */
 }
···
 static inline void invalidate_dcache_range(unsigned long start,
                        unsigned long stop)
 {
-    void *addr = (void *)(start & ~(L1_CACHE_BYTES - 1));
-    unsigned long size = stop - (unsigned long)addr + (L1_CACHE_BYTES - 1);
+    unsigned long shift = l1_cache_shift();
+    unsigned long bytes = l1_cache_bytes();
+    void *addr = (void *)(start & ~(bytes - 1));
+    unsigned long size = stop - (unsigned long)addr + (bytes - 1);
     unsigned long i;
 
-    for (i = 0; i < size >> L1_CACHE_SHIFT; i++, addr += L1_CACHE_BYTES)
+    for (i = 0; i < size >> shift; i++, addr += bytes)
         dcbi(addr);
     mb();   /* sync */
 }
-
-#endif /* CONFIG_PPC32 */
-#ifdef CONFIG_PPC64
-extern void flush_dcache_range(unsigned long start, unsigned long stop);
-extern void flush_inval_dcache_range(unsigned long start, unsigned long stop);
-#endif
 
 #define copy_to_user_page(vma, page, vaddr, dst, src, len) \
     do { \
arch/powerpc/include/asm/exception-64s.h (5 additions, 604 deletions)

  * exception handlers (including pSeries LPAR) and iSeries LPAR
  * implementations as possible.
  */
-#include <asm/head-64.h>
 #include <asm/feature-fixups.h>
 
-/* PACA save area offsets (exgen, exmc, etc) */
-#define EX_R9       0
-#define EX_R10      8
-#define EX_R11      16
-#define EX_R12      24
-#define EX_R13      32
-#define EX_DAR      40
-#define EX_DSISR    48
-#define EX_CCR      52
-#define EX_CFAR     56
-#define EX_PPR      64
+/* PACA save area size in u64 units (exgen, exmc, etc) */
 #if defined(CONFIG_RELOCATABLE)
-#define EX_CTR      72
-#define EX_SIZE     10  /* size in u64 units */
+#define EX_SIZE     10
 #else
-#define EX_SIZE     9   /* size in u64 units */
+#define EX_SIZE     9
 #endif
 
 /*
···
  */
 #define MAX_MCE_DEPTH   4
 
-/*
- * EX_R3 is only used by the bad_stack handler. bad_stack reloads and
- * saves DAR from SPRN_DAR, and EX_DAR is not used. So EX_R3 can overlap
- * with EX_DAR.
- */
-#define EX_R3       EX_DAR
+#ifdef __ASSEMBLY__
 
 #define STF_ENTRY_BARRIER_SLOT \
     STF_ENTRY_BARRIER_FIXUP_SECTION; \
···
     hrfid; \
     b   hrfi_flush_fallback
 
-#ifdef CONFIG_RELOCATABLE
-#define __EXCEPTION_PROLOG_2_RELON(label, h) \
-    mfspr   r11,SPRN_##h##SRR0; /* save SRR0 */ \
-    LOAD_HANDLER(r12,label); \
-    mtctr   r12; \
-    mfspr   r12,SPRN_##h##SRR1; /* and SRR1 */ \
-    li  r10,MSR_RI; \
-    mtmsrd  r10,1;  /* Set RI (EE=0) */ \
-    bctr;
-#else
-/* If not relocatable, we can jump directly -- and save messing with LR */
-#define __EXCEPTION_PROLOG_2_RELON(label, h) \
-    mfspr   r11,SPRN_##h##SRR0; /* save SRR0 */ \
-    mfspr   r12,SPRN_##h##SRR1; /* and SRR1 */ \
-    li  r10,MSR_RI; \
-    mtmsrd  r10,1;  /* Set RI (EE=0) */ \
-    b   label;
-#endif
-#define EXCEPTION_PROLOG_2_RELON(label, h) \
-    __EXCEPTION_PROLOG_2_RELON(label, h)
-
-/*
- * As EXCEPTION_PROLOG(), except we've already got relocation on so no need to
- * rfid. Save LR in case we're CONFIG_RELOCATABLE, in which case
- * EXCEPTION_PROLOG_2_RELON will be using LR.
- */
-#define EXCEPTION_RELON_PROLOG(area, label, h, extra, vec) \
-    SET_SCRATCH0(r13);  /* save r13 */ \
-    EXCEPTION_PROLOG_0(area); \
-    EXCEPTION_PROLOG_1(area, extra, vec); \
-    EXCEPTION_PROLOG_2_RELON(label, h)
-
-/*
- * We're short on space and time in the exception prolog, so we can't
- * use the normal LOAD_REG_IMMEDIATE macro to load the address of label.
- * Instead we get the base of the kernel from paca->kernelbase and or in the low
- * part of label. This requires that the label be within 64KB of kernelbase, and
- * that kernelbase be 64K aligned.
- */
-#define LOAD_HANDLER(reg, label) \
-    ld  reg,PACAKBASE(r13); /* get high part of &label */ \
-    ori reg,reg,FIXED_SYMBOL_ABS_ADDR(label);
-
-#define __LOAD_HANDLER(reg, label) \
-    ld  reg,PACAKBASE(r13); \
-    ori reg,reg,(ABS_ADDR(label))@l;
-
-/*
- * Branches from unrelocated code (e.g., interrupts) to labels outside
- * head-y require >64K offsets.
- */
-#define __LOAD_FAR_HANDLER(reg, label) \
-    ld  reg,PACAKBASE(r13); \
-    ori reg,reg,(ABS_ADDR(label))@l; \
-    addis   reg,reg,(ABS_ADDR(label))@h;
-
-/* Exception register prefixes */
-#define EXC_HV  H
-#define EXC_STD
-
-#if defined(CONFIG_RELOCATABLE)
-/*
- * If we support interrupts with relocation on AND we're a relocatable kernel,
- * we need to use CTR to get to the 2nd level handler. So, save/restore it
- * when required.
- */
-#define SAVE_CTR(reg, area) mfctr   reg ;   std reg,area+EX_CTR(r13)
-#define GET_CTR(reg, area)  ld  reg,area+EX_CTR(r13)
-#define RESTORE_CTR(reg, area)  ld  reg,area+EX_CTR(r13) ; mtctr reg
-#else
-/* ...else CTR is unused and in register. */
-#define SAVE_CTR(reg, area)
-#define GET_CTR(reg, area)  mfctr   reg
-#define RESTORE_CTR(reg, area)
-#endif
-
-/*
- * PPR save/restore macros used in exceptions_64s.S
- * Used for P7 or later processors
- */
-#define SAVE_PPR(area, ra) \
-BEGIN_FTR_SECTION_NESTED(940) \
-    ld  ra,area+EX_PPR(r13);    /* Read PPR from paca */ \
-    std ra,_PPR(r1); \
-END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,940)
-
-#define RESTORE_PPR_PACA(area, ra) \
-BEGIN_FTR_SECTION_NESTED(941) \
-    ld  ra,area+EX_PPR(r13); \
-    mtspr   SPRN_PPR,ra; \
-END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,941)
-
-/*
- * Get an SPR into a register if the CPU has the given feature
- */
-#define OPT_GET_SPR(ra, spr, ftr) \
-BEGIN_FTR_SECTION_NESTED(943) \
-    mfspr   ra,spr; \
-END_FTR_SECTION_NESTED(ftr,ftr,943)
-
-/*
- * Set an SPR from a register if the CPU has the given feature
- */
-#define OPT_SET_SPR(ra, spr, ftr) \
-BEGIN_FTR_SECTION_NESTED(943) \
-    mtspr   spr,ra; \
-END_FTR_SECTION_NESTED(ftr,ftr,943)
-
-/*
- * Save a register to the PACA if the CPU has the given feature
- */
-#define OPT_SAVE_REG_TO_PACA(offset, ra, ftr) \
-BEGIN_FTR_SECTION_NESTED(943) \
-    std ra,offset(r13); \
-END_FTR_SECTION_NESTED(ftr,ftr,943)
-
-#define EXCEPTION_PROLOG_0(area) \
-    GET_PACA(r13); \
-    std r9,area+EX_R9(r13); /* save r9 */ \
-    OPT_GET_SPR(r9, SPRN_PPR, CPU_FTR_HAS_PPR); \
-    HMT_MEDIUM; \
-    std r10,area+EX_R10(r13);   /* save r10 - r12 */ \
-    OPT_GET_SPR(r10, SPRN_CFAR, CPU_FTR_CFAR)
-
-#define __EXCEPTION_PROLOG_1_PRE(area) \
-    OPT_SAVE_REG_TO_PACA(area+EX_PPR, r9, CPU_FTR_HAS_PPR); \
-    OPT_SAVE_REG_TO_PACA(area+EX_CFAR, r10, CPU_FTR_CFAR); \
-    INTERRUPT_TO_KERNEL; \
-    SAVE_CTR(r10, area); \
-    mfcr    r9;
-
-#define __EXCEPTION_PROLOG_1_POST(area) \
-    std r11,area+EX_R11(r13); \
-    std r12,area+EX_R12(r13); \
-    GET_SCRATCH0(r10); \
-    std r10,area+EX_R13(r13)
-
-/*
- * This version of the EXCEPTION_PROLOG_1 will carry
- * addition parameter called "bitmask" to support
- * checking of the interrupt maskable level in the SOFTEN_TEST.
- * Intended to be used in MASKABLE_EXCPETION_* macros.
- */
-#define MASKABLE_EXCEPTION_PROLOG_1(area, extra, vec, bitmask) \
-    __EXCEPTION_PROLOG_1_PRE(area); \
-    extra(vec, bitmask); \
-    __EXCEPTION_PROLOG_1_POST(area);
-
-/*
- * This version of the EXCEPTION_PROLOG_1 is intended
- * to be used in STD_EXCEPTION* macros
- */
-#define _EXCEPTION_PROLOG_1(area, extra, vec) \
-    __EXCEPTION_PROLOG_1_PRE(area); \
-    extra(vec); \
-    __EXCEPTION_PROLOG_1_POST(area);
-
-#define EXCEPTION_PROLOG_1(area, extra, vec) \
-    _EXCEPTION_PROLOG_1(area, extra, vec)
-
-#define __EXCEPTION_PROLOG_2(label, h) \
-    ld  r10,PACAKMSR(r13);  /* get MSR value for kernel */ \
-    mfspr   r11,SPRN_##h##SRR0; /* save SRR0 */ \
-    LOAD_HANDLER(r12,label) \
-    mtspr   SPRN_##h##SRR0,r12; \
-    mfspr   r12,SPRN_##h##SRR1; /* and SRR1 */ \
-    mtspr   SPRN_##h##SRR1,r10; \
-    h##RFI_TO_KERNEL; \
-    b   .   /* prevent speculative execution */
-#define EXCEPTION_PROLOG_2(label, h) \
-    __EXCEPTION_PROLOG_2(label, h)
-
-/* _NORI variant keeps MSR_RI clear */
-#define __EXCEPTION_PROLOG_2_NORI(label, h) \
-    ld  r10,PACAKMSR(r13);  /* get MSR value for kernel */ \
-    xori    r10,r10,MSR_RI; /* Clear MSR_RI */ \
-    mfspr   r11,SPRN_##h##SRR0; /* save SRR0 */ \
-    LOAD_HANDLER(r12,label) \
-    mtspr   SPRN_##h##SRR0,r12; \
-    mfspr   r12,SPRN_##h##SRR1; /* and SRR1 */ \
-    mtspr   SPRN_##h##SRR1,r10; \
-    h##RFI_TO_KERNEL; \
-    b   .   /* prevent speculative execution */
-
-#define EXCEPTION_PROLOG_2_NORI(label, h) \
-    __EXCEPTION_PROLOG_2_NORI(label, h)
-
-#define EXCEPTION_PROLOG(area, label, h, extra, vec) \
-    SET_SCRATCH0(r13);  /* save r13 */ \
-    EXCEPTION_PROLOG_0(area); \
-    EXCEPTION_PROLOG_1(area, extra, vec); \
-    EXCEPTION_PROLOG_2(label, h);
-
-#define __KVMTEST(h, n) \
-    lbz r10,HSTATE_IN_GUEST(r13); \
-    cmpwi   r10,0; \
-    bne do_kvm_##h##n
-
-#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
-/*
- * If hv is possible, interrupts come into to the hv version
- * of the kvmppc_interrupt code, which then jumps to the PR handler,
- * kvmppc_interrupt_pr, if the guest is a PR guest.
- */
-#define kvmppc_interrupt kvmppc_interrupt_hv
-#else
-#define kvmppc_interrupt kvmppc_interrupt_pr
-#endif
-
-/*
- * Branch to label using its 0xC000 address. This results in instruction
- * address suitable for MSR[IR]=0 or 1, which allows relocation to be turned
- * on using mtmsr rather than rfid.
- *
- * This could set the 0xc bits for !RELOCATABLE as an immediate, rather than
- * load KBASE for a slight optimisation.
- */
-#define BRANCH_TO_C000(reg, label) \
-    __LOAD_HANDLER(reg, label); \
-    mtctr   reg; \
-    bctr
-
-#ifdef CONFIG_RELOCATABLE
-#define BRANCH_TO_COMMON(reg, label) \
-    __LOAD_HANDLER(reg, label); \
-    mtctr   reg; \
-    bctr
-
-#define BRANCH_LINK_TO_FAR(label) \
-    __LOAD_FAR_HANDLER(r12, label); \
-    mtctr   r12; \
-    bctrl
-
-/*
- * KVM requires __LOAD_FAR_HANDLER.
- *
- * __BRANCH_TO_KVM_EXIT branches are also a special case because they
- * explicitly use r9 then reload it from PACA before branching. Hence
- * the double-underscore.
- */
-#define __BRANCH_TO_KVM_EXIT(area, label) \
-    mfctr   r9; \
-    std r9,HSTATE_SCRATCH1(r13); \
-    __LOAD_FAR_HANDLER(r9, label); \
-    mtctr   r9; \
-    ld  r9,area+EX_R9(r13); \
-    bctr
-
-#else
-#define BRANCH_TO_COMMON(reg, label) \
-    b   label
-
-#define BRANCH_LINK_TO_FAR(label) \
-    bl  label
-
-#define __BRANCH_TO_KVM_EXIT(area, label) \
-    ld  r9,area+EX_R9(r13); \
-    b   label
-
-#endif
-
-/* Do not enable RI */
-#define EXCEPTION_PROLOG_NORI(area, label, h, extra, vec) \
-    EXCEPTION_PROLOG_0(area); \
-    EXCEPTION_PROLOG_1(area, extra, vec); \
-    EXCEPTION_PROLOG_2_NORI(label, h);
-
-
-#define __KVM_HANDLER(area, h, n) \
-    BEGIN_FTR_SECTION_NESTED(947) \
-    ld  r10,area+EX_CFAR(r13); \
-    std r10,HSTATE_CFAR(r13); \
-    END_FTR_SECTION_NESTED(CPU_FTR_CFAR,CPU_FTR_CFAR,947); \
-    BEGIN_FTR_SECTION_NESTED(948) \
-    ld  r10,area+EX_PPR(r13); \
-    std r10,HSTATE_PPR(r13); \
-    END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,948); \
-    ld  r10,area+EX_R10(r13); \
-    std r12,HSTATE_SCRATCH0(r13); \
-    sldi    r12,r9,32; \
-    ori r12,r12,(n); \
-    /* This reloads r9 before branching to kvmppc_interrupt */ \
-    __BRANCH_TO_KVM_EXIT(area, kvmppc_interrupt)
-
-#define __KVM_HANDLER_SKIP(area, h, n) \
-    cmpwi   r10,KVM_GUEST_MODE_SKIP; \
-    beq 89f; \
-    BEGIN_FTR_SECTION_NESTED(948) \
-    ld  r10,area+EX_PPR(r13); \
-    std r10,HSTATE_PPR(r13); \
-    END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,948); \
-    ld  r10,area+EX_R10(r13); \
-    std r12,HSTATE_SCRATCH0(r13); \
-    sldi    r12,r9,32; \
-    ori r12,r12,(n); \
-    /* This reloads r9 before branching to kvmppc_interrupt */ \
-    __BRANCH_TO_KVM_EXIT(area, kvmppc_interrupt); \
-89: mtocrf  0x80,r9; \
-    ld  r9,area+EX_R9(r13); \
-    ld  r10,area+EX_R10(r13); \
-    b   kvmppc_skip_##h##interrupt
-
-#ifdef CONFIG_KVM_BOOK3S_64_HANDLER
-#define KVMTEST(h, n)           __KVMTEST(h, n)
-#define KVM_HANDLER(area, h, n)     __KVM_HANDLER(area, h, n)
-#define KVM_HANDLER_SKIP(area, h, n)    __KVM_HANDLER_SKIP(area, h, n)
-
-#else
-#define KVMTEST(h, n)
-#define KVM_HANDLER(area, h, n)
-#define KVM_HANDLER_SKIP(area, h, n)
-#endif
-
-#define NOTEST(n)
-
-#define EXCEPTION_PROLOG_COMMON_1() \
-    std r9,_CCR(r1);    /* save CR in stackframe */ \
-    std r11,_NIP(r1);   /* save SRR0 in stackframe */ \
-    std r12,_MSR(r1);   /* save SRR1 in stackframe */ \
-    std r10,0(r1);  /* make stack chain pointer */ \
-    std r0,GPR0(r1);    /* save r0 in stackframe */ \
-    std r10,GPR1(r1);   /* save r1 in stackframe */ \
-
-
-/*
- * The common exception prolog is used for all except a few exceptions
- * such as a segment miss on a kernel address. We have to be prepared
- * to take another exception from the point where we first touch the
- * kernel stack onwards.
- *
- * On entry r13 points to the paca, r9-r13 are saved in the paca,
- * r9 contains the saved CR, r11 and r12 contain the saved SRR0 and
- * SRR1, and relocation is on.
- */
-#define EXCEPTION_PROLOG_COMMON(n, area) \
-    andi.   r10,r12,MSR_PR;     /* See if coming from user */ \
-    mr  r10,r1;         /* Save r1 */ \
-    subi    r1,r1,INT_FRAME_SIZE;   /* alloc frame on kernel stack */ \
-    beq-    1f; \
-    ld  r1,PACAKSAVE(r13);  /* kernel stack to use */ \
-1:  cmpdi   cr1,r1,-INT_FRAME_SIZE; /* check if r1 is in userspace */ \
-    blt+    cr1,3f;         /* abort if it is */ \
-    li  r1,(n);         /* will be reloaded later */ \
-    sth r1,PACA_TRAP_SAVE(r13); \
-    std r3,area+EX_R3(r13); \
-    addi    r3,r13,area;        /* r3 -> where regs are saved*/ \
-    RESTORE_CTR(r1, area); \
-    b   bad_stack; \
-3:  EXCEPTION_PROLOG_COMMON_1(); \
-    kuap_save_amr_and_lock r9, r10, cr1, cr0; \
-    beq 4f;         /* if from kernel mode */ \
-    ACCOUNT_CPU_USER_ENTRY(r13, r9, r10); \
-    SAVE_PPR(area, r9); \
-4:  EXCEPTION_PROLOG_COMMON_2(area) \
-    EXCEPTION_PROLOG_COMMON_3(n) \
-    ACCOUNT_STOLEN_TIME
-
-/* Save original regs values from save area to stack frame. */
-#define EXCEPTION_PROLOG_COMMON_2(area) \
-    ld  r9,area+EX_R9(r13); /* move r9, r10 to stackframe */ \
-    ld  r10,area+EX_R10(r13); \
-    std r9,GPR9(r1); \
-    std r10,GPR10(r1); \
-    ld  r9,area+EX_R11(r13);    /* move r11 - r13 to stackframe */ \
-    ld  r10,area+EX_R12(r13); \
-    ld  r11,area+EX_R13(r13); \
-    std r9,GPR11(r1); \
-    std r10,GPR12(r1); \
-    std r11,GPR13(r1); \
-    BEGIN_FTR_SECTION_NESTED(66); \
-    ld  r10,area+EX_CFAR(r13); \
-    std r10,ORIG_GPR3(r1); \
-    END_FTR_SECTION_NESTED(CPU_FTR_CFAR, CPU_FTR_CFAR, 66); \
-    GET_CTR(r10, area); \
-    std r10,_CTR(r1);
-
-#define EXCEPTION_PROLOG_COMMON_3(n) \
-    std r2,GPR2(r1);    /* save r2 in stackframe */ \
-    SAVE_4GPRS(3, r1);  /* save r3 - r6 in stackframe */ \
-    SAVE_2GPRS(7, r1);  /* save r7, r8 in stackframe */ \
-    mflr    r9;     /* Get LR, later save to stack */ \
-    ld  r2,PACATOC(r13);    /* get kernel TOC into r2 */ \
-    std r9,_LINK(r1); \
-    lbz r10,PACAIRQSOFTMASK(r13); \
-    mfspr   r11,SPRN_XER;   /* save XER in stackframe */ \
-    std r10,SOFTE(r1); \
-    std r11,_XER(r1); \
-    li  r9,(n)+1; \
-    std r9,_TRAP(r1);   /* set trap number */ \
-    li  r10,0; \
-    ld  r11,exception_marker@toc(r2); \
-    std r10,RESULT(r1); /* clear regs->result */ \
-    std r11,STACK_FRAME_OVERHEAD-16(r1); /* mark the frame */
-
-/*
- * Exception vectors.
- */
-#define STD_EXCEPTION(vec, label) \
-    EXCEPTION_PROLOG(PACA_EXGEN, label, EXC_STD, KVMTEST_PR, vec);
-
-/* Version of above for when we have to branch out-of-line */
-#define __OOL_EXCEPTION(vec, label, hdlr) \
-    SET_SCRATCH0(r13) \
-    EXCEPTION_PROLOG_0(PACA_EXGEN) \
-    b hdlr;
-
-#define STD_EXCEPTION_OOL(vec, label) \
-    EXCEPTION_PROLOG_1(PACA_EXGEN, KVMTEST_PR, vec); \
-    EXCEPTION_PROLOG_2(label, EXC_STD)
-
-#define STD_EXCEPTION_HV(loc, vec, label) \
-    EXCEPTION_PROLOG(PACA_EXGEN, label, EXC_HV, KVMTEST_HV, vec);
-
-#define STD_EXCEPTION_HV_OOL(vec, label) \
-    EXCEPTION_PROLOG_1(PACA_EXGEN, KVMTEST_HV, vec); \
-    EXCEPTION_PROLOG_2(label, EXC_HV)
-
-#define STD_RELON_EXCEPTION(loc, vec, label) \
-    /* No guest interrupts come through here */ \
-    EXCEPTION_RELON_PROLOG(PACA_EXGEN, label, EXC_STD, NOTEST, vec);
-
-#define STD_RELON_EXCEPTION_OOL(vec, label) \
-    EXCEPTION_PROLOG_1(PACA_EXGEN, NOTEST, vec); \
-    EXCEPTION_PROLOG_2_RELON(label, EXC_STD)
-
-#define STD_RELON_EXCEPTION_HV(loc, vec, label) \
-    EXCEPTION_RELON_PROLOG(PACA_EXGEN, label, EXC_HV, KVMTEST_HV, vec);
-
-#define STD_RELON_EXCEPTION_HV_OOL(vec, label) \
-    EXCEPTION_PROLOG_1(PACA_EXGEN, KVMTEST_HV, vec); \
-    EXCEPTION_PROLOG_2_RELON(label, EXC_HV)
-
-/* This associate vector numbers with bits in paca->irq_happened */
-#define SOFTEN_VALUE_0x500  PACA_IRQ_EE
-#define SOFTEN_VALUE_0x900  PACA_IRQ_DEC
-#define SOFTEN_VALUE_0x980  PACA_IRQ_DEC
#define SOFTEN_VALUE_0xa00 PACA_IRQ_DBELL 567 - #define SOFTEN_VALUE_0xe80 PACA_IRQ_DBELL 568 - #define SOFTEN_VALUE_0xe60 PACA_IRQ_HMI 569 - #define SOFTEN_VALUE_0xea0 PACA_IRQ_EE 570 - #define SOFTEN_VALUE_0xf00 PACA_IRQ_PMI 571 - 572 - #define __SOFTEN_TEST(h, vec, bitmask) \ 573 - lbz r10,PACAIRQSOFTMASK(r13); \ 574 - andi. r10,r10,bitmask; \ 575 - li r10,SOFTEN_VALUE_##vec; \ 576 - bne masked_##h##interrupt 577 - 578 - #define _SOFTEN_TEST(h, vec, bitmask) __SOFTEN_TEST(h, vec, bitmask) 579 - 580 - #define SOFTEN_TEST_PR(vec, bitmask) \ 581 - KVMTEST(EXC_STD, vec); \ 582 - _SOFTEN_TEST(EXC_STD, vec, bitmask) 583 - 584 - #define SOFTEN_TEST_HV(vec, bitmask) \ 585 - KVMTEST(EXC_HV, vec); \ 586 - _SOFTEN_TEST(EXC_HV, vec, bitmask) 587 - 588 - #define KVMTEST_PR(vec) \ 589 - KVMTEST(EXC_STD, vec) 590 - 591 - #define KVMTEST_HV(vec) \ 592 - KVMTEST(EXC_HV, vec) 593 - 594 - #define SOFTEN_NOTEST_PR(vec, bitmask) _SOFTEN_TEST(EXC_STD, vec, bitmask) 595 - #define SOFTEN_NOTEST_HV(vec, bitmask) _SOFTEN_TEST(EXC_HV, vec, bitmask) 596 - 597 - #define __MASKABLE_EXCEPTION(vec, label, h, extra, bitmask) \ 598 - SET_SCRATCH0(r13); /* save r13 */ \ 599 - EXCEPTION_PROLOG_0(PACA_EXGEN); \ 600 - MASKABLE_EXCEPTION_PROLOG_1(PACA_EXGEN, extra, vec, bitmask); \ 601 - EXCEPTION_PROLOG_2(label, h); 602 - 603 - #define MASKABLE_EXCEPTION(vec, label, bitmask) \ 604 - __MASKABLE_EXCEPTION(vec, label, EXC_STD, SOFTEN_TEST_PR, bitmask) 605 - 606 - #define MASKABLE_EXCEPTION_OOL(vec, label, bitmask) \ 607 - MASKABLE_EXCEPTION_PROLOG_1(PACA_EXGEN, SOFTEN_TEST_PR, vec, bitmask);\ 608 - EXCEPTION_PROLOG_2(label, EXC_STD) 609 - 610 - #define MASKABLE_EXCEPTION_HV(vec, label, bitmask) \ 611 - __MASKABLE_EXCEPTION(vec, label, EXC_HV, SOFTEN_TEST_HV, bitmask) 612 - 613 - #define MASKABLE_EXCEPTION_HV_OOL(vec, label, bitmask) \ 614 - MASKABLE_EXCEPTION_PROLOG_1(PACA_EXGEN, SOFTEN_TEST_HV, vec, bitmask);\ 615 - EXCEPTION_PROLOG_2(label, EXC_HV) 616 - 617 - #define __MASKABLE_RELON_EXCEPTION(vec, 
label, h, extra, bitmask) \ 618 - SET_SCRATCH0(r13); /* save r13 */ \ 619 - EXCEPTION_PROLOG_0(PACA_EXGEN); \ 620 - MASKABLE_EXCEPTION_PROLOG_1(PACA_EXGEN, extra, vec, bitmask); \ 621 - EXCEPTION_PROLOG_2_RELON(label, h) 622 - 623 - #define MASKABLE_RELON_EXCEPTION(vec, label, bitmask) \ 624 - __MASKABLE_RELON_EXCEPTION(vec, label, EXC_STD, SOFTEN_NOTEST_PR, bitmask) 625 - 626 - #define MASKABLE_RELON_EXCEPTION_OOL(vec, label, bitmask) \ 627 - MASKABLE_EXCEPTION_PROLOG_1(PACA_EXGEN, SOFTEN_NOTEST_PR, vec, bitmask);\ 628 - EXCEPTION_PROLOG_2(label, EXC_STD); 629 - 630 - #define MASKABLE_RELON_EXCEPTION_HV(vec, label, bitmask) \ 631 - __MASKABLE_RELON_EXCEPTION(vec, label, EXC_HV, SOFTEN_TEST_HV, bitmask) 632 - 633 - #define MASKABLE_RELON_EXCEPTION_HV_OOL(vec, label, bitmask) \ 634 - MASKABLE_EXCEPTION_PROLOG_1(PACA_EXGEN, SOFTEN_TEST_HV, vec, bitmask);\ 635 - EXCEPTION_PROLOG_2_RELON(label, EXC_HV) 636 - 637 - /* 638 - * Our exception common code can be passed various "additions" 639 - * to specify the behaviour of interrupts, whether to kick the 640 - * runlatch, etc... 641 - */ 642 - 643 - /* 644 - * This addition reconciles our actual IRQ state with the various software 645 - * flags that track it. This may call C code. 646 - */ 647 - #define ADD_RECONCILE RECONCILE_IRQ_STATE(r10,r11) 648 - 649 - #define ADD_NVGPRS \ 650 - bl save_nvgprs 651 - 652 - #define RUNLATCH_ON \ 653 - BEGIN_FTR_SECTION \ 654 - ld r3, PACA_THREAD_INFO(r13); \ 655 - ld r4,TI_LOCAL_FLAGS(r3); \ 656 - andi. 
r0,r4,_TLF_RUNLATCH; \ 657 - beql ppc64_runlatch_on_trampoline; \ 658 - END_FTR_SECTION_IFSET(CPU_FTR_CTRL) 659 - 660 - #define EXCEPTION_COMMON(area, trap, label, hdlr, ret, additions) \ 661 - EXCEPTION_PROLOG_COMMON(trap, area); \ 662 - /* Volatile regs are potentially clobbered here */ \ 663 - additions; \ 664 - addi r3,r1,STACK_FRAME_OVERHEAD; \ 665 - bl hdlr; \ 666 - b ret 667 - 668 - /* 669 - * Exception where stack is already set in r1, r1 is saved in r10, and it 670 - * continues rather than returns. 671 - */ 672 - #define EXCEPTION_COMMON_NORET_STACK(area, trap, label, hdlr, additions) \ 673 - EXCEPTION_PROLOG_COMMON_1(); \ 674 - kuap_save_amr_and_lock r9, r10, cr1; \ 675 - EXCEPTION_PROLOG_COMMON_2(area); \ 676 - EXCEPTION_PROLOG_COMMON_3(trap); \ 677 - /* Volatile regs are potentially clobbered here */ \ 678 - additions; \ 679 - addi r3,r1,STACK_FRAME_OVERHEAD; \ 680 - bl hdlr 681 - 682 - #define STD_EXCEPTION_COMMON(trap, label, hdlr) \ 683 - EXCEPTION_COMMON(PACA_EXGEN, trap, label, hdlr, \ 684 - ret_from_except, ADD_NVGPRS;ADD_RECONCILE) 685 - 686 - /* 687 - * Like STD_EXCEPTION_COMMON, but for exceptions that can occur 688 - * in the idle task and therefore need the special idle handling 689 - * (finish nap and runlatch) 690 - */ 691 - #define STD_EXCEPTION_COMMON_ASYNC(trap, label, hdlr) \ 692 - EXCEPTION_COMMON(PACA_EXGEN, trap, label, hdlr, \ 693 - ret_from_except_lite, FINISH_NAP;ADD_RECONCILE;RUNLATCH_ON) 694 - 695 - /* 696 - * When the idle code in power4_idle puts the CPU into NAP mode, 697 - * it has to do so in a loop, and relies on the external interrupt 698 - * and decrementer interrupt entry code to get it out of the loop. 699 - * It sets the _TLF_NAPPING bit in current_thread_info()->local_flags 700 - * to signal that it is in the loop and needs help to get out. 
701 - */ 702 - #ifdef CONFIG_PPC_970_NAP 703 - #define FINISH_NAP \ 704 - BEGIN_FTR_SECTION \ 705 - ld r11, PACA_THREAD_INFO(r13); \ 706 - ld r9,TI_LOCAL_FLAGS(r11); \ 707 - andi. r10,r9,_TLF_NAPPING; \ 708 - bnel power4_fixup_nap; \ 709 - END_FTR_SECTION_IFSET(CPU_FTR_CAN_NAP) 710 - #else 711 - #define FINISH_NAP 712 - #endif 147 + #endif /* __ASSEMBLY__ */ 713 148 714 149 #endif /* _ASM_POWERPC_EXCEPTION_H */
+1 -203
arch/powerpc/include/asm/head-64.h
···

  #define ABS_ADDR(label) (label - fs_label + fs_start)

- /*
- * Following are the BOOK3S exception handler helper macros.
- * Handlers come in a number of types, and each type has a number of varieties.
- *
- * EXC_REAL_* - real, unrelocated exception vectors
- * EXC_VIRT_* - virt (AIL), unrelocated exception vectors
- * TRAMP_REAL_* - real, unrelocated helpers (virt can call these)
- * TRAMP_VIRT_* - virt, unreloc helpers (in practice, real can use)
- * TRAMP_KVM - KVM handlers that get put into real, unrelocated
- * EXC_COMMON - virt, relocated common handlers
- *
- * The EXC handlers are given a name, and branch to name_common, or the
- * appropriate KVM or masking function. Vector handler verieties are as
- * follows:
- *
- * EXC_{REAL|VIRT}_BEGIN/END - used to open-code the exception
- *
- * EXC_{REAL|VIRT} - standard exception
- *
- * EXC_{REAL|VIRT}_suffix
- * where _suffix is:
- *   - _MASKABLE - maskable exception
- *   - _OOL - out of line with trampoline to common handler
- *   - _HV - HV exception
- *
- * There can be combinations, e.g., EXC_VIRT_OOL_MASKABLE_HV
- *
- * The one unusual case is __EXC_REAL_OOL_HV_DIRECT, which is
- * an OOL vector that branches to a specified handler rather than the usual
- * trampoline that goes to common. It, and other underscore macros, should
- * be used with care.
- *
- * KVM handlers come in the following verieties:
- * TRAMP_KVM
- * TRAMP_KVM_SKIP
- * TRAMP_KVM_HV
- * TRAMP_KVM_HV_SKIP
- *
- * COMMON handlers come in the following verieties:
- * EXC_COMMON_BEGIN/END - used to open-code the handler
- * EXC_COMMON
- * EXC_COMMON_ASYNC
- *
- * TRAMP_REAL and TRAMP_VIRT can be used with BEGIN/END. KVM
- * and OOL handlers are implemented as types of TRAMP and TRAMP_VIRT handlers.
- */
-
  #define EXC_REAL_BEGIN(name, start, size) \
  FIXED_SECTION_ENTRY_BEGIN_LOCATION(real_vectors, exc_real_##start##_##name, start, size)

···

  #define EXC_VIRT_NONE(start, size) \
  FIXED_SECTION_ENTRY_BEGIN_LOCATION(virt_vectors, exc_virt_##start##_##unused, start, size); \
- FIXED_SECTION_ENTRY_END_LOCATION(virt_vectors, exc_virt_##start##_##unused, start, size);
-
-
- #define EXC_REAL(name, start, size) \
- EXC_REAL_BEGIN(name, start, size); \
- STD_EXCEPTION(start, name##_common); \
- EXC_REAL_END(name, start, size);
-
- #define EXC_VIRT(name, start, size, realvec) \
- EXC_VIRT_BEGIN(name, start, size); \
- STD_RELON_EXCEPTION(start, realvec, name##_common); \
- EXC_VIRT_END(name, start, size);
-
- #define EXC_REAL_MASKABLE(name, start, size, bitmask) \
- EXC_REAL_BEGIN(name, start, size); \
- MASKABLE_EXCEPTION(start, name##_common, bitmask); \
- EXC_REAL_END(name, start, size);
-
- #define EXC_VIRT_MASKABLE(name, start, size, realvec, bitmask) \
- EXC_VIRT_BEGIN(name, start, size); \
- MASKABLE_RELON_EXCEPTION(realvec, name##_common, bitmask); \
- EXC_VIRT_END(name, start, size);
-
- #define EXC_REAL_HV(name, start, size) \
- EXC_REAL_BEGIN(name, start, size); \
- STD_EXCEPTION_HV(start, start, name##_common); \
- EXC_REAL_END(name, start, size);
-
- #define EXC_VIRT_HV(name, start, size, realvec) \
- EXC_VIRT_BEGIN(name, start, size); \
- STD_RELON_EXCEPTION_HV(start, realvec, name##_common); \
- EXC_VIRT_END(name, start, size);
-
- #define __EXC_REAL_OOL(name, start, size) \
- EXC_REAL_BEGIN(name, start, size); \
- __OOL_EXCEPTION(start, label, tramp_real_##name); \
- EXC_REAL_END(name, start, size);
-
- #define __TRAMP_REAL_OOL(name, vec) \
- TRAMP_REAL_BEGIN(tramp_real_##name); \
- STD_EXCEPTION_OOL(vec, name##_common);
-
- #define EXC_REAL_OOL(name, start, size) \
- __EXC_REAL_OOL(name, start, size); \
- __TRAMP_REAL_OOL(name, start);
-
- #define __EXC_REAL_OOL_MASKABLE(name, start, size) \
- __EXC_REAL_OOL(name, start, size);
-
- #define __TRAMP_REAL_OOL_MASKABLE(name, vec, bitmask) \
- TRAMP_REAL_BEGIN(tramp_real_##name); \
- MASKABLE_EXCEPTION_OOL(vec, name##_common, bitmask);
-
- #define EXC_REAL_OOL_MASKABLE(name, start, size, bitmask) \
- __EXC_REAL_OOL_MASKABLE(name, start, size); \
- __TRAMP_REAL_OOL_MASKABLE(name, start, bitmask);
-
- #define __EXC_REAL_OOL_HV_DIRECT(name, start, size, handler) \
- EXC_REAL_BEGIN(name, start, size); \
- __OOL_EXCEPTION(start, label, handler); \
- EXC_REAL_END(name, start, size);
-
- #define __EXC_REAL_OOL_HV(name, start, size) \
- __EXC_REAL_OOL(name, start, size);
-
- #define __TRAMP_REAL_OOL_HV(name, vec) \
- TRAMP_REAL_BEGIN(tramp_real_##name); \
- STD_EXCEPTION_HV_OOL(vec, name##_common); \
-
- #define EXC_REAL_OOL_HV(name, start, size) \
- __EXC_REAL_OOL_HV(name, start, size); \
- __TRAMP_REAL_OOL_HV(name, start);
-
- #define __EXC_REAL_OOL_MASKABLE_HV(name, start, size) \
- __EXC_REAL_OOL(name, start, size);
-
- #define __TRAMP_REAL_OOL_MASKABLE_HV(name, vec, bitmask) \
- TRAMP_REAL_BEGIN(tramp_real_##name); \
- MASKABLE_EXCEPTION_HV_OOL(vec, name##_common, bitmask); \
-
- #define EXC_REAL_OOL_MASKABLE_HV(name, start, size, bitmask) \
- __EXC_REAL_OOL_MASKABLE_HV(name, start, size); \
- __TRAMP_REAL_OOL_MASKABLE_HV(name, start, bitmask);
-
- #define __EXC_VIRT_OOL(name, start, size) \
- EXC_VIRT_BEGIN(name, start, size); \
- __OOL_EXCEPTION(start, label, tramp_virt_##name); \
- EXC_VIRT_END(name, start, size);
-
- #define __TRAMP_VIRT_OOL(name, realvec) \
- TRAMP_VIRT_BEGIN(tramp_virt_##name); \
- STD_RELON_EXCEPTION_OOL(realvec, name##_common);
-
- #define EXC_VIRT_OOL(name, start, size, realvec) \
- __EXC_VIRT_OOL(name, start, size); \
- __TRAMP_VIRT_OOL(name, realvec);
-
- #define __EXC_VIRT_OOL_MASKABLE(name, start, size) \
- __EXC_VIRT_OOL(name, start, size);
-
- #define __TRAMP_VIRT_OOL_MASKABLE(name, realvec, bitmask) \
- TRAMP_VIRT_BEGIN(tramp_virt_##name); \
- MASKABLE_RELON_EXCEPTION_OOL(realvec, name##_common, bitmask);
-
- #define EXC_VIRT_OOL_MASKABLE(name, start, size, realvec, bitmask) \
- __EXC_VIRT_OOL_MASKABLE(name, start, size); \
- __TRAMP_VIRT_OOL_MASKABLE(name, realvec, bitmask);
-
- #define __EXC_VIRT_OOL_HV(name, start, size) \
- __EXC_VIRT_OOL(name, start, size);
-
- #define __TRAMP_VIRT_OOL_HV(name, realvec) \
- TRAMP_VIRT_BEGIN(tramp_virt_##name); \
- STD_RELON_EXCEPTION_HV_OOL(realvec, name##_common); \
-
- #define EXC_VIRT_OOL_HV(name, start, size, realvec) \
- __EXC_VIRT_OOL_HV(name, start, size); \
- __TRAMP_VIRT_OOL_HV(name, realvec);
-
- #define __EXC_VIRT_OOL_MASKABLE_HV(name, start, size) \
- __EXC_VIRT_OOL(name, start, size);
-
- #define __TRAMP_VIRT_OOL_MASKABLE_HV(name, realvec, bitmask) \
- TRAMP_VIRT_BEGIN(tramp_virt_##name); \
- MASKABLE_RELON_EXCEPTION_HV_OOL(realvec, name##_common, bitmask);\
-
- #define EXC_VIRT_OOL_MASKABLE_HV(name, start, size, realvec, bitmask) \
- __EXC_VIRT_OOL_MASKABLE_HV(name, start, size); \
- __TRAMP_VIRT_OOL_MASKABLE_HV(name, realvec, bitmask);
-
- #define TRAMP_KVM(area, n) \
- TRAMP_KVM_BEGIN(do_kvm_##n); \
- KVM_HANDLER(area, EXC_STD, n); \
-
- #define TRAMP_KVM_SKIP(area, n) \
- TRAMP_KVM_BEGIN(do_kvm_##n); \
- KVM_HANDLER_SKIP(area, EXC_STD, n); \
-
- /*
- * HV variant exceptions get the 0x2 bit added to their trap number.
- */
- #define TRAMP_KVM_HV(area, n) \
- TRAMP_KVM_BEGIN(do_kvm_H##n); \
- KVM_HANDLER(area, EXC_HV, n + 0x2); \
-
- #define TRAMP_KVM_HV_SKIP(area, n) \
- TRAMP_KVM_BEGIN(do_kvm_H##n); \
- KVM_HANDLER_SKIP(area, EXC_HV, n + 0x2); \
-
- #define EXC_COMMON(name, realvec, hdlr) \
- EXC_COMMON_BEGIN(name); \
- STD_EXCEPTION_COMMON(realvec, name, hdlr); \
-
- #define EXC_COMMON_ASYNC(name, realvec, hdlr) \
- EXC_COMMON_BEGIN(name); \
- STD_EXCEPTION_COMMON_ASYNC(realvec, name, hdlr); \
+ FIXED_SECTION_ENTRY_END_LOCATION(virt_vectors, exc_virt_##start##_##unused, start, size)

  #endif /* __ASSEMBLY__ */

+14 -7
arch/powerpc/include/asm/hw_breakpoint.h
···
  extern void thread_change_pc(struct task_struct *tsk, struct pt_regs *regs);
  int hw_breakpoint_handler(struct die_args *args);

- extern int set_dawr(struct arch_hw_breakpoint *brk);
+ #else /* CONFIG_HAVE_HW_BREAKPOINT */
+ static inline void hw_breakpoint_disable(void) { }
+ static inline void thread_change_pc(struct task_struct *tsk,
+ struct pt_regs *regs) { }
+
+ #endif /* CONFIG_HAVE_HW_BREAKPOINT */
+
+
+ #ifdef CONFIG_PPC_DAWR
  extern bool dawr_force_enable;
  static inline bool dawr_enabled(void)
  {
  return dawr_force_enable;
  }
-
- #else /* CONFIG_HAVE_HW_BREAKPOINT */
- static inline void hw_breakpoint_disable(void) { }
- static inline void thread_change_pc(struct task_struct *tsk,
- struct pt_regs *regs) { }
+ int set_dawr(struct arch_hw_breakpoint *brk);
+ #else
  static inline bool dawr_enabled(void) { return false; }
- #endif /* CONFIG_HAVE_HW_BREAKPOINT */
+ static inline int set_dawr(struct arch_hw_breakpoint *brk) { return -1; }
+ #endif
+
  #endif /* __KERNEL__ */
  #endif /* _PPC_BOOK3S_64_HW_BREAKPOINT_H */
-8
arch/powerpc/include/asm/iommu.h
···

  extern const struct dma_map_ops dma_iommu_ops;

- static inline unsigned long device_to_mask(struct device *dev)
- {
- if (dev->dma_mask && *dev->dma_mask)
- return *dev->dma_mask;
- /* Assume devices without mask can take 32 bit addresses */
- return 0xfffffffful;
- }
-
  #endif /* __KERNEL__ */
  #endif /* _ASM_IOMMU_H */
+40
arch/powerpc/include/asm/lppaca.h
···
  */
  #ifndef _ASM_POWERPC_LPPACA_H
  #define _ASM_POWERPC_LPPACA_H
+
+ /*
+ * The below VPHN macros are outside the __KERNEL__ check since these are
+ * used for compiling the vphn selftest in userspace
+ */
+
+ /* The H_HOME_NODE_ASSOCIATIVITY h_call returns 6 64-bit registers. */
+ #define VPHN_REGISTER_COUNT 6
+
+ /*
+ * 6 64-bit registers unpacked into up to 24 be32 associativity values. To
+ * form the complete property we have to add the length in the first cell.
+ */
+ #define VPHN_ASSOC_BUFSIZE (VPHN_REGISTER_COUNT*sizeof(u64)/sizeof(u16) + 1)
+
+ /*
+ * The H_HOME_NODE_ASSOCIATIVITY hcall takes two values for flags:
+ * 1 for retrieving associativity information for a guest cpu
+ * 2 for retrieving associativity information for a host/hypervisor cpu
+ */
+ #define VPHN_FLAG_VCPU 1
+ #define VPHN_FLAG_PCPU 2
+
  #ifdef __KERNEL__

  /*
···
  */
  #include <linux/cache.h>
  #include <linux/threads.h>
+ #include <linux/spinlock_types.h>
  #include <asm/types.h>
  #include <asm/mmu.h>
  #include <asm/firmware.h>
···
  #define DISPATCH_LOG_BYTES 4096 /* bytes per cpu */
  #define N_DISPATCH_LOG (DISPATCH_LOG_BYTES / sizeof(struct dtl_entry))

+ /*
+ * Dispatch trace log event enable mask:
+ * 0x1: voluntary virtual processor waits
+ * 0x2: time-slice preempts
+ * 0x4: virtual partition memory page faults
+ */
+ #define DTL_LOG_CEDE 0x1
+ #define DTL_LOG_PREEMPT 0x2
+ #define DTL_LOG_FAULT 0x4
+ #define DTL_LOG_ALL (DTL_LOG_CEDE | DTL_LOG_PREEMPT | DTL_LOG_FAULT)
+
  extern struct kmem_cache *dtl_cache;
+ extern rwlock_t dtl_access_lock;

  /*
  * When CONFIG_VIRT_CPU_ACCOUNTING_NATIVE = y, the cpu accounting code controls
···
  * called once for each DTL entry that gets processed.
  */
  extern void (*dtl_consumer)(struct dtl_entry *entry, u64 index);
+
+ extern void register_dtl_buffer(int cpu);
+ extern void alloc_dtl_buffers(unsigned long *time_limit);
+ extern long hcall_vphn(unsigned long cpu, u64 flags, __be32 *associativity);

  #endif /* CONFIG_PPC_BOOK3S */
  #endif /* __KERNEL__ */
+1
arch/powerpc/include/asm/opal-api.h
···
  CHECKSTOP_TYPE_UNKNOWN = 0,
  CHECKSTOP_TYPE_CORE = 1,
  CHECKSTOP_TYPE_NX = 2,
+ CHECKSTOP_TYPE_NPU = 3
  };

  enum OpalHMI_CoreXstopReason {
-2
arch/powerpc/include/asm/opal.h
···
  uint32_t qtoggle,
  uint32_t qindex);
  int64_t opal_xive_get_vp_state(uint64_t vp, __be64 *out_w01);
- int64_t opal_pci_set_p2p(uint64_t phb_init, uint64_t phb_target,
- uint64_t desc, uint16_t pe_number);

  int64_t opal_imc_counters_init(uint32_t type, uint64_t address,
  uint64_t cpu_pir);
+2
arch/powerpc/include/asm/paca.h
···
  u64 kstack; /* Saved Kernel stack addr */
  u64 saved_r1; /* r1 save for RTAS calls or PM or EE=0 */
  u64 saved_msr; /* MSR saved here by enter_rtas */
+ #ifdef CONFIG_PPC_BOOK3E
  u16 trap_save; /* Used when bad stack is encountered */
+ #endif
  u8 irq_soft_mask; /* mask for irq soft masking */
  u8 irq_happened; /* irq happened while soft-disabled */
  u8 irq_work_pending; /* IRQ_WORK interrupt while soft-disable */
+24
arch/powerpc/include/asm/pgtable.h
···
  }
  #endif

+ #ifndef pmd_is_leaf
+ #define pmd_is_leaf pmd_is_leaf
+ static inline bool pmd_is_leaf(pmd_t pmd)
+ {
+ return false;
+ }
+ #endif
+
+ #ifndef pud_is_leaf
+ #define pud_is_leaf pud_is_leaf
+ static inline bool pud_is_leaf(pud_t pud)
+ {
+ return false;
+ }
+ #endif
+
+ #ifndef pgd_is_leaf
+ #define pgd_is_leaf pgd_is_leaf
+ static inline bool pgd_is_leaf(pgd_t pgd)
+ {
+ return false;
+ }
+ #endif
+
  #ifdef CONFIG_PPC64
  #define is_ioremap_addr is_ioremap_addr
  static inline bool is_ioremap_addr(const void *x)
+1 -1
arch/powerpc/include/asm/pnv-ocxl.h
···
- // SPDX-License-Identifier: GPL-2.0+
+ /* SPDX-License-Identifier: GPL-2.0+ */
  // Copyright 2017 IBM Corp.
  #ifndef _ASM_PNV_OCXL_H
  #define _ASM_PNV_OCXL_H
-6
arch/powerpc/include/asm/pnv-pci.h
···
  extern int pnv_pci_get_power_state(uint64_t id, uint8_t *state);
  extern int pnv_pci_set_power_state(uint64_t id, uint8_t state,
  struct opal_msg *msg);
- extern int pnv_pci_set_p2p(struct pci_dev *initiator, struct pci_dev *target,
- u64 desc);

- extern int pnv_pci_enable_tunnel(struct pci_dev *dev, uint64_t *asnind);
- extern int pnv_pci_disable_tunnel(struct pci_dev *dev);
  extern int pnv_pci_set_tunnel_bar(struct pci_dev *dev, uint64_t addr,
  int enable);
- extern int pnv_pci_get_as_notify_info(struct task_struct *task, u32 *lpid,
- u32 *pid, u32 *tid);
  int pnv_phb_to_cxl_mode(struct pci_dev *dev, uint64_t mode);
  int pnv_cxl_ioda_msi_setup(struct pci_dev *dev, unsigned int hwirq,
  unsigned int virq);
-22
arch/powerpc/include/asm/powernv.h
···
  #define _ASM_POWERNV_H

  #ifdef CONFIG_PPC_POWERNV
- #define NPU2_WRITE 1
  extern void powernv_set_nmmu_ptcr(unsigned long ptcr);
- extern struct npu_context *pnv_npu2_init_context(struct pci_dev *gpdev,
- unsigned long flags,
- void (*cb)(struct npu_context *, void *),
- void *priv);
- extern void pnv_npu2_destroy_context(struct npu_context *context,
- struct pci_dev *gpdev);
- extern int pnv_npu2_handle_fault(struct npu_context *context, uintptr_t *ea,
- unsigned long *flags, unsigned long *status,
- int count);

  void pnv_program_cpu_hotplug_lpcr(unsigned int cpu, u64 lpcr_val);

  void pnv_tm_init(void);
  #else
  static inline void powernv_set_nmmu_ptcr(unsigned long ptcr) { }
- static inline struct npu_context *pnv_npu2_init_context(struct pci_dev *gpdev,
- unsigned long flags,
- struct npu_context *(*cb)(struct npu_context *, void *),
- void *priv) { return ERR_PTR(-ENODEV); }
- static inline void pnv_npu2_destroy_context(struct npu_context *context,
- struct pci_dev *gpdev) { }
-
- static inline int pnv_npu2_handle_fault(struct npu_context *context,
- uintptr_t *ea, unsigned long *flags,
- unsigned long *status, int count) {
- return -ENODEV;
- }

  static inline void pnv_tm_init(void) { }
  #endif
+19 -1
arch/powerpc/include/asm/ppc-opcode.h
···
  #define __PPC_RC21 (0x1 << 10)

  /*
+ * Both low and high 16 bits are added as SIGNED additions, so if low 16 bits
+ * has high bit set, high 16 bits must be adjusted. These macros do that (stolen
+ * from binutils).
+ */
+ #define PPC_LO(v) ((v) & 0xffff)
+ #define PPC_HI(v) (((v) >> 16) & 0xffff)
+ #define PPC_HA(v) PPC_HI((v) + 0x8000)
+
+ /*
  * Only use the larx hint bit on 64bit CPUs. e500v1/v2 based CPUs will treat a
  * larx with EH set as an illegal instruction.
  */
···

  #define PPC_SLBIA(IH) stringify_in_c(.long PPC_INST_SLBIA | \
  ((IH & 0x7) << 21))
- #define PPC_INVALIDATE_ERAT PPC_SLBIA(7)
+
+ /*
+ * These may only be used on ISA v3.0 or later (aka. CPU_FTR_ARCH_300, radix
+ * implies CPU_FTR_ARCH_300). USER/GUEST invalidates may only be used by radix
+ * mode (on HPT these would also invalidate various SLBEs which may not be
+ * desired).
+ */
+ #define PPC_ISA_3_0_INVALIDATE_ERAT PPC_SLBIA(7)
+ #define PPC_RADIX_INVALIDATE_ERAT_USER PPC_SLBIA(3)
+ #define PPC_RADIX_INVALIDATE_ERAT_GUEST PPC_SLBIA(6)

  #define VCMPEQUD_RC(vrt, vra, vrb) stringify_in_c(.long PPC_INST_VCMPEQUD | \
  ___PPC_RT(vrt) | ___PPC_RA(vra) | \
+1 -1
arch/powerpc/include/asm/ps3stor.h
···
  unsigned int num_regions;
  unsigned long accessible_regions;
  unsigned int region_idx; /* first accessible region */
- struct ps3_storage_region regions[0]; /* Must be last */
+ struct ps3_storage_region regions[]; /* Must be last */
  };

  static inline struct ps3_storage_device *to_ps3_storage_device(struct device *dev)
+26 -2
arch/powerpc/include/asm/pte-walk.h
···
  static inline pte_t *find_linux_pte(pgd_t *pgdir, unsigned long ea,
  bool *is_thp, unsigned *hshift)
  {
+ pte_t *pte;
+
  VM_WARN(!arch_irqs_disabled(), "%s called with irq enabled\n", __func__);
- return __find_linux_pte(pgdir, ea, is_thp, hshift);
+ pte = __find_linux_pte(pgdir, ea, is_thp, hshift);
+
+ #if defined(CONFIG_DEBUG_VM) && \
+ !(defined(CONFIG_HUGETLB_PAGE) || defined(CONFIG_TRANSPARENT_HUGEPAGE))
+ /*
+ * We should not find huge page if these configs are not enabled.
+ */
+ if (hshift)
+ WARN_ON(*hshift);
+ #endif
+ return pte;
  }

  static inline pte_t *find_init_mm_pte(unsigned long ea, unsigned *hshift)
···
  static inline pte_t *find_current_mm_pte(pgd_t *pgdir, unsigned long ea,
  bool *is_thp, unsigned *hshift)
  {
+ pte_t *pte;
+
  VM_WARN(!arch_irqs_disabled(), "%s called with irq enabled\n", __func__);
  VM_WARN(pgdir != current->mm->pgd,
  "%s lock less page table lookup called on wrong mm\n", __func__);
- return __find_linux_pte(pgdir, ea, is_thp, hshift);
+ pte = __find_linux_pte(pgdir, ea, is_thp, hshift);
+
+ #if defined(CONFIG_DEBUG_VM) && \
+ !(defined(CONFIG_HUGETLB_PAGE) || defined(CONFIG_TRANSPARENT_HUGEPAGE))
+ /*
+ * We should not find huge page if these configs are not enabled.
+ */
+ if (hshift)
+ WARN_ON(*hshift);
+ #endif
+ return pte;
  }

  #endif /* _ASM_POWERPC_PTE_WALK_H */
+6
arch/powerpc/include/asm/topology.h
···
  cpu_all_mask : \
  cpumask_of_node(pcibus_to_node(bus)))

+ extern int cpu_distance(__be32 *cpu1_assoc, __be32 *cpu2_assoc);
  extern int __node_distance(int, int);
  #define node_distance(a, b) __node_distance(a, b)

···
  }

  static inline void update_numa_cpu_lookup_table(unsigned int cpu, int node) {}
+
+ static inline int cpu_distance(__be32 *cpu1_assoc, __be32 *cpu2_assoc)
+ {
+ return 0;
+ }

  #endif /* CONFIG_NUMA */

+1
arch/powerpc/include/asm/uaccess.h
···
  {
  unsigned long ret;

+ barrier_nospec();
  allow_user_access(to, from, n);
  ret = __copy_tofrom_user(to, from, n);
  prevent_user_access(to, from, n);
-10
arch/powerpc/include/asm/vas.h
···
  */
  int vas_paste_crb(struct vas_window *win, int offset, bool re);

- /*
- * Return a system-wide unique id for the VAS window @win.
- */
- extern u32 vas_win_id(struct vas_window *win);
-
- /*
- * Return the power bus paste address associated with @win so the caller
- * can map that address into their address space.
- */
- extern u64 vas_win_paste_addr(struct vas_window *win);
  #endif /* __ASM_POWERPC_VAS_H */
+1
arch/powerpc/kernel/Makefile
···
  obj-$(CONFIG_VDSO32) += vdso32/
  obj-$(CONFIG_PPC_WATCHDOG) += watchdog.o
  obj-$(CONFIG_HAVE_HW_BREAKPOINT) += hw_breakpoint.o
+ obj-$(CONFIG_PPC_DAWR) += dawr.o
  obj-$(CONFIG_PPC_BOOK3S_64) += cpu_setup_ppc970.o cpu_setup_pa6t.o
  obj-$(CONFIG_PPC_BOOK3S_64) += cpu_setup_power.o
  obj-$(CONFIG_PPC_BOOK3S_64) += mce.o mce_power.o
+2
arch/powerpc/kernel/asm-offsets.c
···
  OFFSET(ACCOUNT_STARTTIME_USER, paca_struct, accounting.starttime_user);
  OFFSET(ACCOUNT_USER_TIME, paca_struct, accounting.utime);
  OFFSET(ACCOUNT_SYSTEM_TIME, paca_struct, accounting.stime);
+ #ifdef CONFIG_PPC_BOOK3E
  OFFSET(PACA_TRAP_SAVE, paca_struct, trap_save);
+ #endif
  OFFSET(PACA_SPRG_VDSO, paca_struct, sprg_vdso);
  #else /* CONFIG_PPC64 */
  #ifdef CONFIG_VIRT_CPU_ACCOUNTING_NATIVE
+21
arch/powerpc/kernel/cacheinfo.c
··· 891 891 if (cache) 892 892 cache_cpu_clear(cache, cpu_id); 893 893 } 894 + 895 + void cacheinfo_teardown(void) 896 + { 897 + unsigned int cpu; 898 + 899 + lockdep_assert_cpus_held(); 900 + 901 + for_each_online_cpu(cpu) 902 + cacheinfo_cpu_offline(cpu); 903 + } 904 + 905 + void cacheinfo_rebuild(void) 906 + { 907 + unsigned int cpu; 908 + 909 + lockdep_assert_cpus_held(); 910 + 911 + for_each_online_cpu(cpu) 912 + cacheinfo_cpu_online(cpu); 913 + } 914 + 894 915 #endif /* (CONFIG_PPC_PSERIES && CONFIG_SUSPEND) || CONFIG_HOTPLUG_CPU */
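The new cacheinfo_teardown()/cacheinfo_rebuild() pair above is just a symmetric loop over the online CPUs, intended for callers like migration/suspend that must drop and re-register the whole hierarchy. A toy userspace model of that sequence (the flag array, names, and fixed CPU count are illustrative; lockdep_assert_cpus_held() and the real sysfs registry are omitted):

```c
#include <stdbool.h>

#define NR_TOY_CPUS 4

/* Stands in for each CPU's registered cache hierarchy. */
static bool toy_registered[NR_TOY_CPUS];

static void toy_cpu_online(int cpu)  { toy_registered[cpu] = true;  }
static void toy_cpu_offline(int cpu) { toy_registered[cpu] = false; }

/* Models cacheinfo_teardown(): offline-style cleanup for every CPU. */
static void toy_teardown(void)
{
	for (int cpu = 0; cpu < NR_TOY_CPUS; cpu++)
		toy_cpu_offline(cpu);
}

/* Models cacheinfo_rebuild(): re-register everything afterwards. */
static void toy_rebuild(void)
{
	for (int cpu = 0; cpu < NR_TOY_CPUS; cpu++)
		toy_cpu_online(cpu);
}
```

The point of the pairing is that rebuild after teardown restores the pre-teardown state, which is what lets a migration path call them back to back.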
+4
arch/powerpc/kernel/cacheinfo.h
··· 6 6 extern void cacheinfo_cpu_online(unsigned int cpu_id); 7 7 extern void cacheinfo_cpu_offline(unsigned int cpu_id); 8 8 9 + /* Allow migration/suspend to tear down and rebuild the hierarchy. */ 10 + extern void cacheinfo_teardown(void); 11 + extern void cacheinfo_rebuild(void); 12 + 9 13 #endif /* _PPC_CACHEINFO_H */
+101
arch/powerpc/kernel/dawr.c
··· 1 + // SPDX-License-Identifier: GPL-2.0+ 2 + /* 3 + * DAWR infrastructure 4 + * 5 + * Copyright 2019, Michael Neuling, IBM Corporation. 6 + */ 7 + 8 + #include <linux/types.h> 9 + #include <linux/export.h> 10 + #include <linux/fs.h> 11 + #include <linux/debugfs.h> 12 + #include <asm/debugfs.h> 13 + #include <asm/machdep.h> 14 + #include <asm/hvcall.h> 15 + 16 + bool dawr_force_enable; 17 + EXPORT_SYMBOL_GPL(dawr_force_enable); 18 + 19 + int set_dawr(struct arch_hw_breakpoint *brk) 20 + { 21 + unsigned long dawr, dawrx, mrd; 22 + 23 + dawr = brk->address; 24 + 25 + dawrx = (brk->type & (HW_BRK_TYPE_READ | HW_BRK_TYPE_WRITE)) 26 + << (63 - 58); 27 + dawrx |= ((brk->type & (HW_BRK_TYPE_TRANSLATE)) >> 2) << (63 - 59); 28 + dawrx |= (brk->type & (HW_BRK_TYPE_PRIV_ALL)) >> 3; 29 + /* 30 + * DAWR length is stored in field MDR bits 48:53. Matches range in 31 + * doublewords (64 bits) biased by -1 eg. 0b000000=1DW and 32 + * 0b111111=64DW. 33 + * brk->len is in bytes. 34 + * This aligns up to double word size, shifts and does the bias. 
35 + */ 36 + mrd = ((brk->len + 7) >> 3) - 1; 37 + dawrx |= (mrd & 0x3f) << (63 - 53); 38 + 39 + if (ppc_md.set_dawr) 40 + return ppc_md.set_dawr(dawr, dawrx); 41 + 42 + mtspr(SPRN_DAWR, dawr); 43 + mtspr(SPRN_DAWRX, dawrx); 44 + 45 + return 0; 46 + } 47 + 48 + static void set_dawr_cb(void *info) 49 + { 50 + set_dawr(info); 51 + } 52 + 53 + static ssize_t dawr_write_file_bool(struct file *file, 54 + const char __user *user_buf, 55 + size_t count, loff_t *ppos) 56 + { 57 + struct arch_hw_breakpoint null_brk = {0, 0, 0}; 58 + size_t rc; 59 + 60 + /* Send error to user if the hypervisor won't allow us to write DAWR */ 61 + if (!dawr_force_enable && 62 + firmware_has_feature(FW_FEATURE_LPAR) && 63 + set_dawr(&null_brk) != H_SUCCESS) 64 + return -ENODEV; 65 + 66 + rc = debugfs_write_file_bool(file, user_buf, count, ppos); 67 + if (rc) 68 + return rc; 69 + 70 + /* If we are clearing, make sure all CPUs have the DAWR cleared */ 71 + if (!dawr_force_enable) 72 + smp_call_function(set_dawr_cb, &null_brk, 0); 73 + 74 + return rc; 75 + } 76 + 77 + static const struct file_operations dawr_enable_fops = { 78 + .read = debugfs_read_file_bool, 79 + .write = dawr_write_file_bool, 80 + .open = simple_open, 81 + .llseek = default_llseek, 82 + }; 83 + 84 + static int __init dawr_force_setup(void) 85 + { 86 + if (cpu_has_feature(CPU_FTR_DAWR)) { 87 + /* Don't setup sysfs file for user control on P8 */ 88 + dawr_force_enable = true; 89 + return 0; 90 + } 91 + 92 + if (PVR_VER(mfspr(SPRN_PVR)) == PVR_POWER9) { 93 + /* Turn DAWR off by default, but allow admin to turn it on */ 94 + debugfs_create_file_unsafe("dawr_enable_dangerous", 0600, 95 + powerpc_debugfs_root, 96 + &dawr_force_enable, 97 + &dawr_enable_fops); 98 + } 99 + return 0; 100 + } 101 + arch_initcall(dawr_force_setup)
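The length encoding in set_dawr() above can be checked in isolation: the byte length is rounded up to 8-byte doublewords, biased by -1 (so 0b000000 means one doubleword), and the 6-bit result lands at DAWRX bits 48:53 in IBM numbering, i.e. a left shift of 63 - 53 = 10. A hypothetical standalone sketch (dawrx_len_bits() is an illustrative name, not a kernel function):

```c
/* Sketch of the MRD field computation from set_dawr(): round the
 * breakpoint length up to doublewords, subtract 1 (the bias), and
 * position the 6-bit field at bits 48:53 (shift left by 63 - 53). */
static unsigned long dawrx_len_bits(unsigned long len_bytes)
{
	unsigned long mrd = ((len_bytes + 7) >> 3) - 1;

	return (mrd & 0x3f) << (63 - 53);
}
```

So a 1-byte and an 8-byte breakpoint both encode as 0, a 9-byte one rounds up to two doublewords (field value 1), and the maximum 512-byte range encodes as 0b111111.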
+38 -2
arch/powerpc/kernel/dma-iommu.c
··· 71 71 return dma_direct_map_page(dev, page, offset, size, direction, 72 72 attrs); 73 73 return iommu_map_page(dev, get_iommu_table_base(dev), page, offset, 74 - size, device_to_mask(dev), direction, attrs); 74 + size, dma_get_mask(dev), direction, attrs); 75 75 } 76 76 77 77 ··· 82 82 if (!dma_iommu_map_bypass(dev, attrs)) 83 83 iommu_unmap_page(get_iommu_table_base(dev), dma_handle, size, 84 84 direction, attrs); 85 + else 86 + dma_direct_unmap_page(dev, dma_handle, size, direction, attrs); 85 87 } 86 88 87 89 ··· 94 92 if (dma_iommu_map_bypass(dev, attrs)) 95 93 return dma_direct_map_sg(dev, sglist, nelems, direction, attrs); 96 94 return ppc_iommu_map_sg(dev, get_iommu_table_base(dev), sglist, nelems, 97 - device_to_mask(dev), direction, attrs); 95 + dma_get_mask(dev), direction, attrs); 98 96 } 99 97 100 98 static void dma_iommu_unmap_sg(struct device *dev, struct scatterlist *sglist, ··· 104 102 if (!dma_iommu_map_bypass(dev, attrs)) 105 103 ppc_iommu_unmap_sg(get_iommu_table_base(dev), sglist, nelems, 106 104 direction, attrs); 105 + else 106 + dma_direct_unmap_sg(dev, sglist, nelems, direction, attrs); 107 107 } 108 108 109 109 static bool dma_iommu_bypass_supported(struct device *dev, u64 mask) ··· 167 163 return mask; 168 164 } 169 165 166 + static void dma_iommu_sync_for_cpu(struct device *dev, dma_addr_t addr, 167 + size_t size, enum dma_data_direction dir) 168 + { 169 + if (dma_iommu_alloc_bypass(dev)) 170 + dma_direct_sync_single_for_cpu(dev, addr, size, dir); 171 + } 172 + 173 + static void dma_iommu_sync_for_device(struct device *dev, dma_addr_t addr, 174 + size_t sz, enum dma_data_direction dir) 175 + { 176 + if (dma_iommu_alloc_bypass(dev)) 177 + dma_direct_sync_single_for_device(dev, addr, sz, dir); 178 + } 179 + 180 + extern void dma_iommu_sync_sg_for_cpu(struct device *dev, 181 + struct scatterlist *sgl, int nents, enum dma_data_direction dir) 182 + { 183 + if (dma_iommu_alloc_bypass(dev)) 184 + dma_direct_sync_sg_for_cpu(dev, sgl, nents, 
dir); 185 + } 186 + 187 + extern void dma_iommu_sync_sg_for_device(struct device *dev, 188 + struct scatterlist *sgl, int nents, enum dma_data_direction dir) 189 + { 190 + if (dma_iommu_alloc_bypass(dev)) 191 + dma_direct_sync_sg_for_device(dev, sgl, nents, dir); 192 + } 193 + 170 194 const struct dma_map_ops dma_iommu_ops = { 171 195 .alloc = dma_iommu_alloc_coherent, 172 196 .free = dma_iommu_free_coherent, ··· 204 172 .map_page = dma_iommu_map_page, 205 173 .unmap_page = dma_iommu_unmap_page, 206 174 .get_required_mask = dma_iommu_get_required_mask, 175 + .sync_single_for_cpu = dma_iommu_sync_for_cpu, 176 + .sync_single_for_device = dma_iommu_sync_for_device, 177 + .sync_sg_for_cpu = dma_iommu_sync_sg_for_cpu, 178 + .sync_sg_for_device = dma_iommu_sync_sg_for_device, 207 179 };
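The new sync hooks above only have work to do on the direct-mapped (bypass) path; translated IOMMU mappings get nothing forwarded. A toy model of that dispatch (the device struct, flag, and counter are illustrative, not kernel API):

```c
#include <stdbool.h>

struct toy_dev {
	bool bypass;       /* device is using the direct-mapping path */
	int  direct_syncs; /* counts forwarded dma_direct_* sync calls */
};

/* Models dma_iommu_sync_for_cpu(): forward to the direct
 * implementation only when the device bypasses the IOMMU. */
static void toy_sync_for_cpu(struct toy_dev *dev)
{
	if (dev->bypass)
		dev->direct_syncs++; /* stands in for dma_direct_sync_single_for_cpu() */
}
```

This mirrors the shape of the unmap changes in the same hunk: every entry point checks the bypass condition and either does IOMMU work or delegates to the dma_direct helpers, never both.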
+12 -3
arch/powerpc/kernel/eeh.c
··· 354 354 ptep = find_init_mm_pte(token, &hugepage_shift); 355 355 if (!ptep) 356 356 return token; 357 - WARN_ON(hugepage_shift); 358 - pa = pte_pfn(*ptep) << PAGE_SHIFT; 359 357 360 - return pa | (token & (PAGE_SIZE-1)); 358 + pa = pte_pfn(*ptep); 359 + 360 + /* On radix we can do hugepage mappings for io, so handle that */ 361 + if (hugepage_shift) { 362 + pa <<= hugepage_shift; 363 + pa |= token & ((1ul << hugepage_shift) - 1); 364 + } else { 365 + pa <<= PAGE_SHIFT; 366 + pa |= token & (PAGE_SIZE - 1); 367 + } 368 + 369 + return pa; 361 370 } 362 371 363 372 /*
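The reworked address math in eeh_token_to_phys() above can be exercised directly: the PFN is shifted by the mapping's page-size shift, and the in-page offset keeps the same number of low token bits. A standalone sketch, assuming a 4K base page for illustration (token_to_pa() is a made-up name mirroring the hunk, not a kernel function):

```c
#define TOY_PAGE_SHIFT 12 /* assumed 4K base pages, for illustration */

/* Mirror of the pa computation above: for a hugepage ioremap (radix)
 * scale the pfn by hugepage_shift and keep that many low token bits
 * as the offset; otherwise use plain PAGE_SHIFT arithmetic. */
static unsigned long token_to_pa(unsigned long pfn, unsigned long token,
				 unsigned int hugepage_shift)
{
	unsigned long pa = pfn;

	if (hugepage_shift) {
		pa <<= hugepage_shift;
		pa |= token & ((1ul << hugepage_shift) - 1);
	} else {
		pa <<= TOY_PAGE_SHIFT;
		pa |= token & ((1ul << TOY_PAGE_SHIFT) - 1);
	}
	return pa;
}
```

With hugepage_shift = 0 this reproduces the old 4K-only behaviour; with a 2M mapping (shift 21) the offset mask widens to match, which is the case the old WARN_ON(hugepage_shift) could not handle.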
+3
arch/powerpc/kernel/eeh_cache.c
··· 18 18 19 19 20 20 /** 21 + * DOC: Overview 22 + * 21 23 * The pci address cache subsystem. This subsystem places 22 24 * PCI device address resources into a red-black tree, sorted 23 25 * according to the address range, so that given only an i/o ··· 36 34 * than any hash algo I could think of for this problem, even 37 35 * with the penalty of slow pointer chases for d-cache misses). 38 36 */ 37 + 39 38 struct pci_io_addr_range { 40 39 struct rb_node rb_node; 41 40 resource_size_t addr_lo;
+993 -446
arch/powerpc/kernel/exceptions-64s.S
··· 21 21 #include <asm/feature-fixups.h> 22 22 #include <asm/kup.h> 23 23 24 + /* PACA save area offsets (exgen, exmc, etc) */ 25 + #define EX_R9 0 26 + #define EX_R10 8 27 + #define EX_R11 16 28 + #define EX_R12 24 29 + #define EX_R13 32 30 + #define EX_DAR 40 31 + #define EX_DSISR 48 32 + #define EX_CCR 52 33 + #define EX_CFAR 56 34 + #define EX_PPR 64 35 + #if defined(CONFIG_RELOCATABLE) 36 + #define EX_CTR 72 37 + .if EX_SIZE != 10 38 + .error "EX_SIZE is wrong" 39 + .endif 40 + #else 41 + .if EX_SIZE != 9 42 + .error "EX_SIZE is wrong" 43 + .endif 44 + #endif 45 + 46 + /* 47 + * We're short on space and time in the exception prolog, so we can't 48 + * use the normal LOAD_REG_IMMEDIATE macro to load the address of label. 49 + * Instead we get the base of the kernel from paca->kernelbase and or in the low 50 + * part of label. This requires that the label be within 64KB of kernelbase, and 51 + * that kernelbase be 64K aligned. 52 + */ 53 + #define LOAD_HANDLER(reg, label) \ 54 + ld reg,PACAKBASE(r13); /* get high part of &label */ \ 55 + ori reg,reg,FIXED_SYMBOL_ABS_ADDR(label) 56 + 57 + #define __LOAD_HANDLER(reg, label) \ 58 + ld reg,PACAKBASE(r13); \ 59 + ori reg,reg,(ABS_ADDR(label))@l 60 + 61 + /* 62 + * Branches from unrelocated code (e.g., interrupts) to labels outside 63 + * head-y require >64K offsets. 64 + */ 65 + #define __LOAD_FAR_HANDLER(reg, label) \ 66 + ld reg,PACAKBASE(r13); \ 67 + ori reg,reg,(ABS_ADDR(label))@l; \ 68 + addis reg,reg,(ABS_ADDR(label))@h 69 + 70 + /* Exception register prefixes */ 71 + #define EXC_HV 1 72 + #define EXC_STD 0 73 + 74 + #if defined(CONFIG_RELOCATABLE) 75 + /* 76 + * If we support interrupts with relocation on AND we're a relocatable kernel, 77 + * we need to use CTR to get to the 2nd level handler. So, save/restore it 78 + * when required. 
79 + */ 80 + #define SAVE_CTR(reg, area) mfctr reg ; std reg,area+EX_CTR(r13) 81 + #define GET_CTR(reg, area) ld reg,area+EX_CTR(r13) 82 + #define RESTORE_CTR(reg, area) ld reg,area+EX_CTR(r13) ; mtctr reg 83 + #else 84 + /* ...else CTR is unused and in register. */ 85 + #define SAVE_CTR(reg, area) 86 + #define GET_CTR(reg, area) mfctr reg 87 + #define RESTORE_CTR(reg, area) 88 + #endif 89 + 90 + /* 91 + * PPR save/restore macros used in exceptions-64s.S 92 + * Used for P7 or later processors 93 + */ 94 + #define SAVE_PPR(area, ra) \ 95 + BEGIN_FTR_SECTION_NESTED(940) \ 96 + ld ra,area+EX_PPR(r13); /* Read PPR from paca */ \ 97 + std ra,_PPR(r1); \ 98 + END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,940) 99 + 100 + #define RESTORE_PPR_PACA(area, ra) \ 101 + BEGIN_FTR_SECTION_NESTED(941) \ 102 + ld ra,area+EX_PPR(r13); \ 103 + mtspr SPRN_PPR,ra; \ 104 + END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,941) 105 + 106 + /* 107 + * Get an SPR into a register if the CPU has the given feature 108 + */ 109 + #define OPT_GET_SPR(ra, spr, ftr) \ 110 + BEGIN_FTR_SECTION_NESTED(943) \ 111 + mfspr ra,spr; \ 112 + END_FTR_SECTION_NESTED(ftr,ftr,943) 113 + 114 + /* 115 + * Set an SPR from a register if the CPU has the given feature 116 + */ 117 + #define OPT_SET_SPR(ra, spr, ftr) \ 118 + BEGIN_FTR_SECTION_NESTED(943) \ 119 + mtspr spr,ra; \ 120 + END_FTR_SECTION_NESTED(ftr,ftr,943) 121 + 122 + /* 123 + * Save a register to the PACA if the CPU has the given feature 124 + */ 125 + #define OPT_SAVE_REG_TO_PACA(offset, ra, ftr) \ 126 + BEGIN_FTR_SECTION_NESTED(943) \ 127 + std ra,offset(r13); \ 128 + END_FTR_SECTION_NESTED(ftr,ftr,943) 129 + 130 + .macro EXCEPTION_PROLOG_0 area 131 + SET_SCRATCH0(r13) /* save r13 */ 132 + GET_PACA(r13) 133 + std r9,\area\()+EX_R9(r13) /* save r9 */ 134 + OPT_GET_SPR(r9, SPRN_PPR, CPU_FTR_HAS_PPR) 135 + HMT_MEDIUM 136 + std r10,\area\()+EX_R10(r13) /* save r10 - r12 */ 137 + OPT_GET_SPR(r10, SPRN_CFAR, CPU_FTR_CFAR) 138 + .endm 139 + 
140 + .macro EXCEPTION_PROLOG_1 hsrr, area, kvm, vec, dar, dsisr, bitmask 141 + OPT_SAVE_REG_TO_PACA(\area\()+EX_PPR, r9, CPU_FTR_HAS_PPR) 142 + OPT_SAVE_REG_TO_PACA(\area\()+EX_CFAR, r10, CPU_FTR_CFAR) 143 + INTERRUPT_TO_KERNEL 144 + SAVE_CTR(r10, \area\()) 145 + mfcr r9 146 + .if \kvm 147 + KVMTEST \hsrr \vec 148 + .endif 149 + .if \bitmask 150 + lbz r10,PACAIRQSOFTMASK(r13) 151 + andi. r10,r10,\bitmask 152 + /* Associate vector numbers with bits in paca->irq_happened */ 153 + .if \vec == 0x500 || \vec == 0xea0 154 + li r10,PACA_IRQ_EE 155 + .elseif \vec == 0x900 156 + li r10,PACA_IRQ_DEC 157 + .elseif \vec == 0xa00 || \vec == 0xe80 158 + li r10,PACA_IRQ_DBELL 159 + .elseif \vec == 0xe60 160 + li r10,PACA_IRQ_HMI 161 + .elseif \vec == 0xf00 162 + li r10,PACA_IRQ_PMI 163 + .else 164 + .abort "Bad maskable vector" 165 + .endif 166 + 167 + .if \hsrr 168 + bne masked_Hinterrupt 169 + .else 170 + bne masked_interrupt 171 + .endif 172 + .endif 173 + 174 + std r11,\area\()+EX_R11(r13) 175 + std r12,\area\()+EX_R12(r13) 176 + 177 + /* 178 + * DAR/DSISR, SCRATCH0 must be read before setting MSR[RI], 179 + * because a d-side MCE will clobber those registers so is 180 + * not recoverable if they are live. 181 + */ 182 + GET_SCRATCH0(r10) 183 + std r10,\area\()+EX_R13(r13) 184 + .if \dar 185 + mfspr r10,SPRN_DAR 186 + std r10,\area\()+EX_DAR(r13) 187 + .endif 188 + .if \dsisr 189 + mfspr r10,SPRN_DSISR 190 + stw r10,\area\()+EX_DSISR(r13) 191 + .endif 192 + .endm 193 + 194 + .macro EXCEPTION_PROLOG_2_REAL label, hsrr, set_ri 195 + ld r10,PACAKMSR(r13) /* get MSR value for kernel */ 196 + .if ! 
\set_ri 197 + xori r10,r10,MSR_RI /* Clear MSR_RI */ 198 + .endif 199 + .if \hsrr 200 + mfspr r11,SPRN_HSRR0 /* save HSRR0 */ 201 + mfspr r12,SPRN_HSRR1 /* and HSRR1 */ 202 + mtspr SPRN_HSRR1,r10 203 + .else 204 + mfspr r11,SPRN_SRR0 /* save SRR0 */ 205 + mfspr r12,SPRN_SRR1 /* and SRR1 */ 206 + mtspr SPRN_SRR1,r10 207 + .endif 208 + LOAD_HANDLER(r10, \label\()) 209 + .if \hsrr 210 + mtspr SPRN_HSRR0,r10 211 + HRFI_TO_KERNEL 212 + .else 213 + mtspr SPRN_SRR0,r10 214 + RFI_TO_KERNEL 215 + .endif 216 + b . /* prevent speculative execution */ 217 + .endm 218 + 219 + .macro EXCEPTION_PROLOG_2_VIRT label, hsrr 220 + #ifdef CONFIG_RELOCATABLE 221 + .if \hsrr 222 + mfspr r11,SPRN_HSRR0 /* save HSRR0 */ 223 + .else 224 + mfspr r11,SPRN_SRR0 /* save SRR0 */ 225 + .endif 226 + LOAD_HANDLER(r12, \label\()) 227 + mtctr r12 228 + .if \hsrr 229 + mfspr r12,SPRN_HSRR1 /* and HSRR1 */ 230 + .else 231 + mfspr r12,SPRN_SRR1 /* and HSRR1 */ 232 + .endif 233 + li r10,MSR_RI 234 + mtmsrd r10,1 /* Set RI (EE=0) */ 235 + bctr 236 + #else 237 + .if \hsrr 238 + mfspr r11,SPRN_HSRR0 /* save HSRR0 */ 239 + mfspr r12,SPRN_HSRR1 /* and HSRR1 */ 240 + .else 241 + mfspr r11,SPRN_SRR0 /* save SRR0 */ 242 + mfspr r12,SPRN_SRR1 /* and SRR1 */ 243 + .endif 244 + li r10,MSR_RI 245 + mtmsrd r10,1 /* Set RI (EE=0) */ 246 + b \label 247 + #endif 248 + .endm 249 + 250 + /* 251 + * Branch to label using its 0xC000 address. This results in instruction 252 + * address suitable for MSR[IR]=0 or 1, which allows relocation to be turned 253 + * on using mtmsr rather than rfid. 254 + * 255 + * This could set the 0xc bits for !RELOCATABLE as an immediate, rather than 256 + * load KBASE for a slight optimisation. 
257 + */ 258 + #define BRANCH_TO_C000(reg, label) \ 259 + __LOAD_FAR_HANDLER(reg, label); \ 260 + mtctr reg; \ 261 + bctr 262 + 263 + #ifdef CONFIG_KVM_BOOK3S_64_HANDLER 264 + #ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE 265 + /* 266 + * If hv is possible, interrupts come in to the hv version 267 + * of the kvmppc_interrupt code, which then jumps to the PR handler, 268 + * kvmppc_interrupt_pr, if the guest is a PR guest. 269 + */ 270 + #define kvmppc_interrupt kvmppc_interrupt_hv 271 + #else 272 + #define kvmppc_interrupt kvmppc_interrupt_pr 273 + #endif 274 + 275 + .macro KVMTEST hsrr, n 276 + lbz r10,HSTATE_IN_GUEST(r13) 277 + cmpwi r10,0 278 + .if \hsrr 279 + bne do_kvm_H\n 280 + .else 281 + bne do_kvm_\n 282 + .endif 283 + .endm 284 + 285 + .macro KVM_HANDLER area, hsrr, n, skip 286 + .if \skip 287 + cmpwi r10,KVM_GUEST_MODE_SKIP 288 + beq 89f 289 + .else 290 + BEGIN_FTR_SECTION_NESTED(947) 291 + ld r10,\area+EX_CFAR(r13) 292 + std r10,HSTATE_CFAR(r13) 293 + END_FTR_SECTION_NESTED(CPU_FTR_CFAR,CPU_FTR_CFAR,947) 294 + .endif 295 + 296 + BEGIN_FTR_SECTION_NESTED(948) 297 + ld r10,\area+EX_PPR(r13) 298 + std r10,HSTATE_PPR(r13) 299 + END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,948) 300 + ld r10,\area+EX_R10(r13) 301 + std r12,HSTATE_SCRATCH0(r13) 302 + sldi r12,r9,32 303 + /* HSRR variants have the 0x2 bit added to their trap number */ 304 + .if \hsrr 305 + ori r12,r12,(\n + 0x2) 306 + .else 307 + ori r12,r12,(\n) 308 + .endif 309 + 310 + #ifdef CONFIG_RELOCATABLE 311 + /* 312 + * KVM requires __LOAD_FAR_HANDLER because kvmppc_interrupt lives 313 + * outside the head section. CONFIG_RELOCATABLE KVM expects CTR 314 + * to be saved in HSTATE_SCRATCH1. 
315 + */ 316 + mfctr r9 317 + std r9,HSTATE_SCRATCH1(r13) 318 + __LOAD_FAR_HANDLER(r9, kvmppc_interrupt) 319 + mtctr r9 320 + ld r9,\area+EX_R9(r13) 321 + bctr 322 + #else 323 + ld r9,\area+EX_R9(r13) 324 + b kvmppc_interrupt 325 + #endif 326 + 327 + 328 + .if \skip 329 + 89: mtocrf 0x80,r9 330 + ld r9,\area+EX_R9(r13) 331 + ld r10,\area+EX_R10(r13) 332 + .if \hsrr 333 + b kvmppc_skip_Hinterrupt 334 + .else 335 + b kvmppc_skip_interrupt 336 + .endif 337 + .endif 338 + .endm 339 + 340 + #else 341 + .macro KVMTEST hsrr, n 342 + .endm 343 + .macro KVM_HANDLER area, hsrr, n, skip 344 + .endm 345 + #endif 346 + 347 + #define EXCEPTION_PROLOG_COMMON_1() \ 348 + std r9,_CCR(r1); /* save CR in stackframe */ \ 349 + std r11,_NIP(r1); /* save SRR0 in stackframe */ \ 350 + std r12,_MSR(r1); /* save SRR1 in stackframe */ \ 351 + std r10,0(r1); /* make stack chain pointer */ \ 352 + std r0,GPR0(r1); /* save r0 in stackframe */ \ 353 + std r10,GPR1(r1); /* save r1 in stackframe */ \ 354 + 355 + /* Save original regs values from save area to stack frame. 
*/ 356 + #define EXCEPTION_PROLOG_COMMON_2(area) \ 357 + ld r9,area+EX_R9(r13); /* move r9, r10 to stackframe */ \ 358 + ld r10,area+EX_R10(r13); \ 359 + std r9,GPR9(r1); \ 360 + std r10,GPR10(r1); \ 361 + ld r9,area+EX_R11(r13); /* move r11 - r13 to stackframe */ \ 362 + ld r10,area+EX_R12(r13); \ 363 + ld r11,area+EX_R13(r13); \ 364 + std r9,GPR11(r1); \ 365 + std r10,GPR12(r1); \ 366 + std r11,GPR13(r1); \ 367 + BEGIN_FTR_SECTION_NESTED(66); \ 368 + ld r10,area+EX_CFAR(r13); \ 369 + std r10,ORIG_GPR3(r1); \ 370 + END_FTR_SECTION_NESTED(CPU_FTR_CFAR, CPU_FTR_CFAR, 66); \ 371 + GET_CTR(r10, area); \ 372 + std r10,_CTR(r1); 373 + 374 + #define EXCEPTION_PROLOG_COMMON_3(trap) \ 375 + std r2,GPR2(r1); /* save r2 in stackframe */ \ 376 + SAVE_4GPRS(3, r1); /* save r3 - r6 in stackframe */ \ 377 + SAVE_2GPRS(7, r1); /* save r7, r8 in stackframe */ \ 378 + mflr r9; /* Get LR, later save to stack */ \ 379 + ld r2,PACATOC(r13); /* get kernel TOC into r2 */ \ 380 + std r9,_LINK(r1); \ 381 + lbz r10,PACAIRQSOFTMASK(r13); \ 382 + mfspr r11,SPRN_XER; /* save XER in stackframe */ \ 383 + std r10,SOFTE(r1); \ 384 + std r11,_XER(r1); \ 385 + li r9,(trap)+1; \ 386 + std r9,_TRAP(r1); /* set trap number */ \ 387 + li r10,0; \ 388 + ld r11,exception_marker@toc(r2); \ 389 + std r10,RESULT(r1); /* clear regs->result */ \ 390 + std r11,STACK_FRAME_OVERHEAD-16(r1); /* mark the frame */ 391 + 392 + /* 393 + * On entry r13 points to the paca, r9-r13 are saved in the paca, 394 + * r9 contains the saved CR, r11 and r12 contain the saved SRR0 and 395 + * SRR1, and relocation is on. 396 + */ 397 + #define EXCEPTION_COMMON(area, trap) \ 398 + andi. 
r10,r12,MSR_PR; /* See if coming from user */ \ 399 + mr r10,r1; /* Save r1 */ \ 400 + subi r1,r1,INT_FRAME_SIZE; /* alloc frame on kernel stack */ \ 401 + beq- 1f; \ 402 + ld r1,PACAKSAVE(r13); /* kernel stack to use */ \ 403 + 1: tdgei r1,-INT_FRAME_SIZE; /* trap if r1 is in userspace */ \ 404 + EMIT_BUG_ENTRY 1b,__FILE__,__LINE__,0; \ 405 + 3: EXCEPTION_PROLOG_COMMON_1(); \ 406 + kuap_save_amr_and_lock r9, r10, cr1, cr0; \ 407 + beq 4f; /* if from kernel mode */ \ 408 + ACCOUNT_CPU_USER_ENTRY(r13, r9, r10); \ 409 + SAVE_PPR(area, r9); \ 410 + 4: EXCEPTION_PROLOG_COMMON_2(area); \ 411 + EXCEPTION_PROLOG_COMMON_3(trap); \ 412 + ACCOUNT_STOLEN_TIME 413 + 414 + /* 415 + * Exception where stack is already set in r1, r1 is saved in r10. 416 + * PPR save and CPU accounting is not done (for some reason). 417 + */ 418 + #define EXCEPTION_COMMON_STACK(area, trap) \ 419 + EXCEPTION_PROLOG_COMMON_1(); \ 420 + kuap_save_amr_and_lock r9, r10, cr1; \ 421 + EXCEPTION_PROLOG_COMMON_2(area); \ 422 + EXCEPTION_PROLOG_COMMON_3(trap) 423 + 424 + /* 425 + * Restore all registers including H/SRR0/1 saved in a stack frame of a 426 + * standard exception. 427 + */ 428 + .macro EXCEPTION_RESTORE_REGS hsrr 429 + /* Move original SRR0 and SRR1 into the respective regs */ 430 + ld r9,_MSR(r1) 431 + .if \hsrr 432 + mtspr SPRN_HSRR1,r9 433 + .else 434 + mtspr SPRN_SRR1,r9 435 + .endif 436 + ld r9,_NIP(r1) 437 + .if \hsrr 438 + mtspr SPRN_HSRR0,r9 439 + .else 440 + mtspr SPRN_SRR0,r9 441 + .endif 442 + ld r9,_CTR(r1) 443 + mtctr r9 444 + ld r9,_XER(r1) 445 + mtxer r9 446 + ld r9,_LINK(r1) 447 + mtlr r9 448 + ld r9,_CCR(r1) 449 + mtcr r9 450 + REST_8GPRS(2, r1) 451 + REST_4GPRS(10, r1) 452 + REST_GPR(0, r1) 453 + /* restore original r1. */ 454 + ld r1,GPR1(r1) 455 + .endm 456 + 457 + #define RUNLATCH_ON \ 458 + BEGIN_FTR_SECTION \ 459 + ld r3, PACA_THREAD_INFO(r13); \ 460 + ld r4,TI_LOCAL_FLAGS(r3); \ 461 + andi. 
r0,r4,_TLF_RUNLATCH; \ 462 + beql ppc64_runlatch_on_trampoline; \ 463 + END_FTR_SECTION_IFSET(CPU_FTR_CTRL) 464 + 465 + /* 466 + * When the idle code in power4_idle puts the CPU into NAP mode, 467 + * it has to do so in a loop, and relies on the external interrupt 468 + * and decrementer interrupt entry code to get it out of the loop. 469 + * It sets the _TLF_NAPPING bit in current_thread_info()->local_flags 470 + * to signal that it is in the loop and needs help to get out. 471 + */ 472 + #ifdef CONFIG_PPC_970_NAP 473 + #define FINISH_NAP \ 474 + BEGIN_FTR_SECTION \ 475 + ld r11, PACA_THREAD_INFO(r13); \ 476 + ld r9,TI_LOCAL_FLAGS(r11); \ 477 + andi. r10,r9,_TLF_NAPPING; \ 478 + bnel power4_fixup_nap; \ 479 + END_FTR_SECTION_IFSET(CPU_FTR_CAN_NAP) 480 + #else 481 + #define FINISH_NAP 482 + #endif 483 + 484 + /* 485 + * Following are the BOOK3S exception handler helper macros. 486 + * Handlers come in a number of types, and each type has a number of varieties. 487 + * 488 + * EXC_REAL_* - real, unrelocated exception vectors 489 + * EXC_VIRT_* - virt (AIL), unrelocated exception vectors 490 + * TRAMP_REAL_* - real, unrelocated helpers (virt can call these) 491 + * TRAMP_VIRT_* - virt, unreloc helpers (in practice, real can use) 492 + * TRAMP_KVM - KVM handlers that get put into real, unrelocated 493 + * EXC_COMMON - virt, relocated common handlers 494 + * 495 + * The EXC handlers are given a name, and branch to name_common, or the 496 + * appropriate KVM or masking function. 
Vector handler varieties are as 497 + * follows: 498 + * 499 + * EXC_{REAL|VIRT}_BEGIN/END - used to open-code the exception 500 + * 501 + * EXC_{REAL|VIRT} - standard exception 502 + * 503 + * EXC_{REAL|VIRT}_suffix 504 + * where _suffix is: 505 + * - _MASKABLE - maskable exception 506 + * - _OOL - out of line with trampoline to common handler 507 + * - _HV - HV exception 508 + * 509 + * There can be combinations, e.g., EXC_VIRT_OOL_MASKABLE_HV 510 + * 511 + * KVM handlers come in the following varieties: 512 + * TRAMP_KVM 513 + * TRAMP_KVM_SKIP 514 + * TRAMP_KVM_HV 515 + * TRAMP_KVM_HV_SKIP 516 + * 517 + * COMMON handlers come in the following varieties: 518 + * EXC_COMMON_BEGIN/END - used to open-code the handler 519 + * EXC_COMMON 520 + * EXC_COMMON_ASYNC 521 + * 522 + * TRAMP_REAL and TRAMP_VIRT can be used with BEGIN/END. KVM 523 + * and OOL handlers are implemented as types of TRAMP and TRAMP_VIRT handlers. 524 + */ 525 + 526 + #define __EXC_REAL(name, start, size, area) \ 527 + EXC_REAL_BEGIN(name, start, size); \ 528 + EXCEPTION_PROLOG_0 area ; \ 529 + EXCEPTION_PROLOG_1 EXC_STD, area, 1, start, 0, 0, 0 ; \ 530 + EXCEPTION_PROLOG_2_REAL name##_common, EXC_STD, 1 ; \ 531 + EXC_REAL_END(name, start, size) 532 + 533 + #define EXC_REAL(name, start, size) \ 534 + __EXC_REAL(name, start, size, PACA_EXGEN) 535 + 536 + #define __EXC_VIRT(name, start, size, realvec, area) \ 537 + EXC_VIRT_BEGIN(name, start, size); \ 538 + EXCEPTION_PROLOG_0 area ; \ 539 + EXCEPTION_PROLOG_1 EXC_STD, area, 0, realvec, 0, 0, 0; \ 540 + EXCEPTION_PROLOG_2_VIRT name##_common, EXC_STD ; \ 541 + EXC_VIRT_END(name, start, size) 542 + 543 + #define EXC_VIRT(name, start, size, realvec) \ 544 + __EXC_VIRT(name, start, size, realvec, PACA_EXGEN) 545 + 546 + #define EXC_REAL_MASKABLE(name, start, size, bitmask) \ 547 + EXC_REAL_BEGIN(name, start, size); \ 548 + EXCEPTION_PROLOG_0 PACA_EXGEN ; \ 549 + EXCEPTION_PROLOG_1 EXC_STD, PACA_EXGEN, 1, start, 0, 0, bitmask ; \ 550 + 
EXCEPTION_PROLOG_2_REAL name##_common, EXC_STD, 1 ; \ 551 + EXC_REAL_END(name, start, size) 552 + 553 + #define EXC_VIRT_MASKABLE(name, start, size, realvec, bitmask) \ 554 + EXC_VIRT_BEGIN(name, start, size); \ 555 + EXCEPTION_PROLOG_0 PACA_EXGEN ; \ 556 + EXCEPTION_PROLOG_1 EXC_STD, PACA_EXGEN, 0, realvec, 0, 0, bitmask ; \ 557 + EXCEPTION_PROLOG_2_VIRT name##_common, EXC_STD ; \ 558 + EXC_VIRT_END(name, start, size) 559 + 560 + #define EXC_REAL_HV(name, start, size) \ 561 + EXC_REAL_BEGIN(name, start, size); \ 562 + EXCEPTION_PROLOG_0 PACA_EXGEN; \ 563 + EXCEPTION_PROLOG_1 EXC_HV, PACA_EXGEN, 1, start, 0, 0, 0 ; \ 564 + EXCEPTION_PROLOG_2_REAL name##_common, EXC_HV, 1 ; \ 565 + EXC_REAL_END(name, start, size) 566 + 567 + #define EXC_VIRT_HV(name, start, size, realvec) \ 568 + EXC_VIRT_BEGIN(name, start, size); \ 569 + EXCEPTION_PROLOG_0 PACA_EXGEN; \ 570 + EXCEPTION_PROLOG_1 EXC_HV, PACA_EXGEN, 1, realvec, 0, 0, 0 ; \ 571 + EXCEPTION_PROLOG_2_VIRT name##_common, EXC_HV ; \ 572 + EXC_VIRT_END(name, start, size) 573 + 574 + #define __EXC_REAL_OOL(name, start, size) \ 575 + EXC_REAL_BEGIN(name, start, size); \ 576 + EXCEPTION_PROLOG_0 PACA_EXGEN ; \ 577 + b tramp_real_##name ; \ 578 + EXC_REAL_END(name, start, size) 579 + 580 + #define __TRAMP_REAL_OOL(name, vec) \ 581 + TRAMP_REAL_BEGIN(tramp_real_##name); \ 582 + EXCEPTION_PROLOG_1 EXC_STD, PACA_EXGEN, 1, vec, 0, 0, 0 ; \ 583 + EXCEPTION_PROLOG_2_REAL name##_common, EXC_STD, 1 584 + 585 + #define EXC_REAL_OOL(name, start, size) \ 586 + __EXC_REAL_OOL(name, start, size); \ 587 + __TRAMP_REAL_OOL(name, start) 588 + 589 + #define __EXC_REAL_OOL_MASKABLE(name, start, size) \ 590 + __EXC_REAL_OOL(name, start, size) 591 + 592 + #define __TRAMP_REAL_OOL_MASKABLE(name, vec, bitmask) \ 593 + TRAMP_REAL_BEGIN(tramp_real_##name); \ 594 + EXCEPTION_PROLOG_1 EXC_STD, PACA_EXGEN, 1, vec, 0, 0, bitmask ; \ 595 + EXCEPTION_PROLOG_2_REAL name##_common, EXC_STD, 1 596 + 597 + #define EXC_REAL_OOL_MASKABLE(name, start, size, 
bitmask) \ 598 + __EXC_REAL_OOL_MASKABLE(name, start, size); \ 599 + __TRAMP_REAL_OOL_MASKABLE(name, start, bitmask) 600 + 601 + #define __EXC_REAL_OOL_HV(name, start, size) \ 602 + __EXC_REAL_OOL(name, start, size) 603 + 604 + #define __TRAMP_REAL_OOL_HV(name, vec) \ 605 + TRAMP_REAL_BEGIN(tramp_real_##name); \ 606 + EXCEPTION_PROLOG_1 EXC_HV, PACA_EXGEN, 1, vec, 0, 0, 0 ; \ 607 + EXCEPTION_PROLOG_2_REAL name##_common, EXC_HV, 1 608 + 609 + #define EXC_REAL_OOL_HV(name, start, size) \ 610 + __EXC_REAL_OOL_HV(name, start, size); \ 611 + __TRAMP_REAL_OOL_HV(name, start) 612 + 613 + #define __EXC_REAL_OOL_MASKABLE_HV(name, start, size) \ 614 + __EXC_REAL_OOL(name, start, size) 615 + 616 + #define __TRAMP_REAL_OOL_MASKABLE_HV(name, vec, bitmask) \ 617 + TRAMP_REAL_BEGIN(tramp_real_##name); \ 618 + EXCEPTION_PROLOG_1 EXC_HV, PACA_EXGEN, 1, vec, 0, 0, bitmask ; \ 619 + EXCEPTION_PROLOG_2_REAL name##_common, EXC_HV, 1 620 + 621 + #define EXC_REAL_OOL_MASKABLE_HV(name, start, size, bitmask) \ 622 + __EXC_REAL_OOL_MASKABLE_HV(name, start, size); \ 623 + __TRAMP_REAL_OOL_MASKABLE_HV(name, start, bitmask) 624 + 625 + #define __EXC_VIRT_OOL(name, start, size) \ 626 + EXC_VIRT_BEGIN(name, start, size); \ 627 + EXCEPTION_PROLOG_0 PACA_EXGEN ; \ 628 + b tramp_virt_##name; \ 629 + EXC_VIRT_END(name, start, size) 630 + 631 + #define __TRAMP_VIRT_OOL(name, realvec) \ 632 + TRAMP_VIRT_BEGIN(tramp_virt_##name); \ 633 + EXCEPTION_PROLOG_1 EXC_STD, PACA_EXGEN, 0, vec, 0, 0, 0 ; \ 634 + EXCEPTION_PROLOG_2_VIRT name##_common, EXC_STD 635 + 636 + #define EXC_VIRT_OOL(name, start, size, realvec) \ 637 + __EXC_VIRT_OOL(name, start, size); \ 638 + __TRAMP_VIRT_OOL(name, realvec) 639 + 640 + #define __EXC_VIRT_OOL_MASKABLE(name, start, size) \ 641 + __EXC_VIRT_OOL(name, start, size) 642 + 643 + #define __TRAMP_VIRT_OOL_MASKABLE(name, realvec, bitmask) \ 644 + TRAMP_VIRT_BEGIN(tramp_virt_##name); \ 645 + EXCEPTION_PROLOG_1 EXC_STD, PACA_EXGEN, 0, realvec, 0, 0, bitmask ; \ 646 + 
+	EXCEPTION_PROLOG_2_REAL name##_common, EXC_STD, 1
+
+#define EXC_VIRT_OOL_MASKABLE(name, start, size, realvec, bitmask)	\
+	__EXC_VIRT_OOL_MASKABLE(name, start, size);			\
+	__TRAMP_VIRT_OOL_MASKABLE(name, realvec, bitmask)
+
+#define __EXC_VIRT_OOL_HV(name, start, size)				\
+	__EXC_VIRT_OOL(name, start, size)
+
+#define __TRAMP_VIRT_OOL_HV(name, realvec)				\
+	TRAMP_VIRT_BEGIN(tramp_virt_##name);				\
+	EXCEPTION_PROLOG_1 EXC_HV, PACA_EXGEN, 1, realvec, 0, 0, 0 ;	\
+	EXCEPTION_PROLOG_2_VIRT name##_common, EXC_HV
+
+#define EXC_VIRT_OOL_HV(name, start, size, realvec)			\
+	__EXC_VIRT_OOL_HV(name, start, size);				\
+	__TRAMP_VIRT_OOL_HV(name, realvec)
+
+#define __EXC_VIRT_OOL_MASKABLE_HV(name, start, size)			\
+	__EXC_VIRT_OOL(name, start, size)
+
+#define __TRAMP_VIRT_OOL_MASKABLE_HV(name, realvec, bitmask)		\
+	TRAMP_VIRT_BEGIN(tramp_virt_##name);				\
+	EXCEPTION_PROLOG_1 EXC_HV, PACA_EXGEN, 1, realvec, 0, 0, bitmask ; \
+	EXCEPTION_PROLOG_2_VIRT name##_common, EXC_HV
+
+#define EXC_VIRT_OOL_MASKABLE_HV(name, start, size, realvec, bitmask)	\
+	__EXC_VIRT_OOL_MASKABLE_HV(name, start, size);			\
+	__TRAMP_VIRT_OOL_MASKABLE_HV(name, realvec, bitmask)
+
+#define TRAMP_KVM(area, n)						\
+	TRAMP_KVM_BEGIN(do_kvm_##n);					\
+	KVM_HANDLER area, EXC_STD, n, 0
+
+#define TRAMP_KVM_SKIP(area, n)						\
+	TRAMP_KVM_BEGIN(do_kvm_##n);					\
+	KVM_HANDLER area, EXC_STD, n, 1
+
+#define TRAMP_KVM_HV(area, n)						\
+	TRAMP_KVM_BEGIN(do_kvm_H##n);					\
+	KVM_HANDLER area, EXC_HV, n, 0
+
+#define TRAMP_KVM_HV_SKIP(area, n)					\
+	TRAMP_KVM_BEGIN(do_kvm_H##n);					\
+	KVM_HANDLER area, EXC_HV, n, 1
+
+#define EXC_COMMON(name, realvec, hdlr)					\
+	EXC_COMMON_BEGIN(name);						\
+	EXCEPTION_COMMON(PACA_EXGEN, realvec);				\
+	bl	save_nvgprs;						\
+	RECONCILE_IRQ_STATE(r10, r11);					\
+	addi	r3,r1,STACK_FRAME_OVERHEAD;				\
+	bl	hdlr;							\
+	b	ret_from_except
+
+/*
+ * Like EXC_COMMON, but for exceptions that can occur in the idle task and
+ * therefore need the special idle handling (finish nap and runlatch)
+ */
+#define EXC_COMMON_ASYNC(name, realvec, hdlr)				\
+	EXC_COMMON_BEGIN(name);						\
+	EXCEPTION_COMMON(PACA_EXGEN, realvec);				\
+	FINISH_NAP;							\
+	RECONCILE_IRQ_STATE(r10, r11);					\
+	RUNLATCH_ON;							\
+	addi	r3,r1,STACK_FRAME_OVERHEAD;				\
+	bl	hdlr;							\
+	b	ret_from_except_lite
+
+
 /*
  * There are a few constraints to be concerned with.
  * - Real mode exceptions code/data must be located at their physical location.
···
 EXC_VIRT_NONE(0x4000, 0x100)
 
 
+EXC_REAL_BEGIN(system_reset, 0x100, 0x100)
 #ifdef CONFIG_PPC_P7_NAP
 	/*
 	 * If running native on arch 2.06 or later, check if we are waking up
···
 	 * bits 46:47. A non-0 value indicates that we are coming from a power
 	 * saving state. The idle wakeup handler initially runs in real mode,
 	 * but we branch to the 0xc000... address so we can turn on relocation
-	 * with mtmsr.
+	 * with mtmsrd later, after SPRs are restored.
+	 *
+	 * Careful to minimise cost for the fast path (idle wakeup) while
+	 * also avoiding clobbering CFAR for the debug path (non-idle).
+	 *
+	 * For the idle wake case volatile registers can be clobbered, which
+	 * is why we use those initially. If it turns out to not be an idle
+	 * wake, carefully put everything back the way it was, so we can use
+	 * common exception macros to handle it.
 	 */
-#define IDLETEST(n)							\
-	BEGIN_FTR_SECTION ;						\
-	mfspr	r10,SPRN_SRR1 ;						\
-	rlwinm.	r10,r10,47-31,30,31 ;					\
-	beq-	1f ;							\
-	cmpwi	cr1,r10,2 ;						\
-	mfspr	r3,SPRN_SRR1 ;						\
-	bltlr	cr1 ;	/* no state loss, return to idle caller */	\
-	BRANCH_TO_C000(r10, system_reset_idle_common) ;			\
-1:									\
-	KVMTEST_PR(n) ;							\
-	END_FTR_SECTION_IFSET(CPU_FTR_HVMODE | CPU_FTR_ARCH_206)
-#else
-#define IDLETEST NOTEST
+BEGIN_FTR_SECTION
+	SET_SCRATCH0(r13)
+	GET_PACA(r13)
+	std	r3,PACA_EXNMI+0*8(r13)
+	std	r4,PACA_EXNMI+1*8(r13)
+	std	r5,PACA_EXNMI+2*8(r13)
+	mfspr	r3,SPRN_SRR1
+	mfocrf	r4,0x80
+	rlwinm.	r5,r3,47-31,30,31
+	bne+	system_reset_idle_wake
+	/* Not powersave wakeup. Restore regs for regular interrupt handler. */
+	mtocrf	0x80,r4
+	ld	r3,PACA_EXNMI+0*8(r13)
+	ld	r4,PACA_EXNMI+1*8(r13)
+	ld	r5,PACA_EXNMI+2*8(r13)
+	GET_SCRATCH0(r13)
+END_FTR_SECTION_IFSET(CPU_FTR_HVMODE | CPU_FTR_ARCH_206)
 #endif
 
-EXC_REAL_BEGIN(system_reset, 0x100, 0x100)
-	SET_SCRATCH0(r13)
+	EXCEPTION_PROLOG_0 PACA_EXNMI
+	EXCEPTION_PROLOG_1 EXC_STD, PACA_EXNMI, 1, 0x100, 0, 0, 0
+	EXCEPTION_PROLOG_2_REAL system_reset_common, EXC_STD, 0
 	/*
 	 * MSR_RI is not enabled, because PACA_EXNMI and nmi stack is
 	 * being used, so a nested NMI exception would corrupt it.
+	 *
+	 * In theory, we should not enable relocation here if it was disabled
+	 * in SRR1, because the MMU may not be configured to support it (e.g.,
+	 * SLB may have been cleared). In practice, there should only be a few
+	 * small windows where that's the case, and sreset is considered to
+	 * be dangerous anyway.
 	 */
-	EXCEPTION_PROLOG_NORI(PACA_EXNMI, system_reset_common, EXC_STD,
-			      IDLETEST, 0x100)
-
 EXC_REAL_END(system_reset, 0x100, 0x100)
+
 EXC_VIRT_NONE(0x4100, 0x100)
 TRAMP_KVM(PACA_EXNMI, 0x100)
 
 #ifdef CONFIG_PPC_P7_NAP
-EXC_COMMON_BEGIN(system_reset_idle_common)
-	/*
-	 * This must be a direct branch (without linker branch stub) because
-	 * we can not use TOC at this point as r2 may not be restored yet.
-	 */
-	b	idle_return_gpr_loss
+TRAMP_REAL_BEGIN(system_reset_idle_wake)
+	/* We are waking up from idle, so may clobber any volatile register */
+	cmpwi	cr1,r5,2
+	bltlr	cr1	/* no state loss, return to idle caller with r3=SRR1 */
+	BRANCH_TO_C000(r12, DOTSYM(idle_return_gpr_loss))
 #endif
 
+#ifdef CONFIG_PPC_PSERIES
 /*
- * Set IRQS_ALL_DISABLED unconditionally so arch_irqs_disabled does
- * the right thing. We do not want to reconcile because that goes
- * through irq tracing which we don't want in NMI.
- *
- * Save PACAIRQHAPPENED because some code will do a hard disable
- * (e.g., xmon). So we want to restore this back to where it was
- * when we return. DAR is unused in the stack, so save it there.
+ * Vectors for the FWNMI option. Share common code.
 */
-#define ADD_RECONCILE_NMI						\
-	li	r10,IRQS_ALL_DISABLED;					\
-	stb	r10,PACAIRQSOFTMASK(r13);				\
-	lbz	r10,PACAIRQHAPPENED(r13);				\
-	std	r10,_DAR(r1)
+TRAMP_REAL_BEGIN(system_reset_fwnmi)
+	/* See comment at system_reset exception, don't turn on RI */
+	EXCEPTION_PROLOG_0 PACA_EXNMI
+	EXCEPTION_PROLOG_1 EXC_STD, PACA_EXNMI, 0, 0x100, 0, 0, 0
+	EXCEPTION_PROLOG_2_REAL system_reset_common, EXC_STD, 0
+
+#endif /* CONFIG_PPC_PSERIES */
 
 EXC_COMMON_BEGIN(system_reset_common)
 	/*
···
 	mr	r10,r1
 	ld	r1,PACA_NMI_EMERG_SP(r13)
 	subi	r1,r1,INT_FRAME_SIZE
-	EXCEPTION_COMMON_NORET_STACK(PACA_EXNMI, 0x100,
-			system_reset, system_reset_exception,
-			ADD_NVGPRS;ADD_RECONCILE_NMI)
+	EXCEPTION_COMMON_STACK(PACA_EXNMI, 0x100)
+	bl	save_nvgprs
+	/*
+	 * Set IRQS_ALL_DISABLED unconditionally so arch_irqs_disabled does
+	 * the right thing. We do not want to reconcile because that goes
+	 * through irq tracing which we don't want in NMI.
+	 *
+	 * Save PACAIRQHAPPENED because some code will do a hard disable
+	 * (e.g., xmon). So we want to restore this back to where it was
+	 * when we return. DAR is unused in the stack, so save it there.
+	 */
+	li	r10,IRQS_ALL_DISABLED
+	stb	r10,PACAIRQSOFTMASK(r13)
+	lbz	r10,PACAIRQHAPPENED(r13)
+	std	r10,_DAR(r1)
 
-	/* This (and MCE) can be simplified with mtmsrd L=1 */
+	addi	r3,r1,STACK_FRAME_OVERHEAD
+	bl	system_reset_exception
+
 	/* Clear MSR_RI before setting SRR0 and SRR1. */
-	li	r0,MSR_RI
-	mfmsr	r9
-	andc	r9,r9,r0
+	li	r9,0
 	mtmsrd	r9,1
 
 	/*
···
 	ld	r10,SOFTE(r1)
 	stb	r10,PACAIRQSOFTMASK(r13)
 
-	/*
-	 * Keep below code in synch with MACHINE_CHECK_HANDLER_WINDUP.
-	 * Should share common bits...
-	 */
-
-	/* Move original SRR0 and SRR1 into the respective regs */
-	ld	r9,_MSR(r1)
-	mtspr	SPRN_SRR1,r9
-	ld	r3,_NIP(r1)
-	mtspr	SPRN_SRR0,r3
-	ld	r9,_CTR(r1)
-	mtctr	r9
-	ld	r9,_XER(r1)
-	mtxer	r9
-	ld	r9,_LINK(r1)
-	mtlr	r9
-	REST_GPR(0, r1)
-	REST_8GPRS(2, r1)
-	REST_GPR(10, r1)
-	ld	r11,_CCR(r1)
-	mtcr	r11
-	REST_GPR(11, r1)
-	REST_2GPRS(12, r1)
-	/* restore original r1. */
-	ld	r1,GPR1(r1)
+	EXCEPTION_RESTORE_REGS EXC_STD
 	RFI_TO_USER_OR_KERNEL
-
-#ifdef CONFIG_PPC_PSERIES
-/*
- * Vectors for the FWNMI option. Share common code.
- */
-TRAMP_REAL_BEGIN(system_reset_fwnmi)
-	SET_SCRATCH0(r13)		/* save r13 */
-	/* See comment at system_reset exception */
-	EXCEPTION_PROLOG_NORI(PACA_EXNMI, system_reset_common, EXC_STD,
-			      NOTEST, 0x100)
-#endif /* CONFIG_PPC_PSERIES */
 
 
 EXC_REAL_BEGIN(machine_check, 0x200, 0x100)
···
 	 * some code path might still want to branch into the original
 	 * vector
 	 */
-	SET_SCRATCH0(r13)		/* save r13 */
-	EXCEPTION_PROLOG_0(PACA_EXMC)
+	EXCEPTION_PROLOG_0 PACA_EXMC
 BEGIN_FTR_SECTION
 	b	machine_check_common_early
 FTR_SECTION_ELSE
···
 EXC_REAL_END(machine_check, 0x200, 0x100)
 EXC_VIRT_NONE(0x4200, 0x100)
 TRAMP_REAL_BEGIN(machine_check_common_early)
-	EXCEPTION_PROLOG_1(PACA_EXMC, NOTEST, 0x200)
+	EXCEPTION_PROLOG_1 EXC_STD, PACA_EXMC, 0, 0x200, 0, 0, 0
 	/*
 	 * Register contents:
 	 * R13		= PACA
···
 TRAMP_REAL_BEGIN(machine_check_pSeries)
 	.globl machine_check_fwnmi
 machine_check_fwnmi:
-	SET_SCRATCH0(r13)		/* save r13 */
-	EXCEPTION_PROLOG_0(PACA_EXMC)
+	EXCEPTION_PROLOG_0 PACA_EXMC
 BEGIN_FTR_SECTION
 	b	machine_check_common_early
 END_FTR_SECTION_IFCLR(CPU_FTR_HVMODE)
 machine_check_pSeries_0:
-	EXCEPTION_PROLOG_1(PACA_EXMC, KVMTEST_PR, 0x200)
+	EXCEPTION_PROLOG_1 EXC_STD, PACA_EXMC, 1, 0x200, 1, 1, 0
 	/*
 	 * MSR_RI is not enabled, because PACA_EXMC is being used, so a
 	 * nested machine check corrupts it. machine_check_common enables
 	 * MSR_RI.
 	 */
-	EXCEPTION_PROLOG_2_NORI(machine_check_common, EXC_STD)
+	EXCEPTION_PROLOG_2_REAL machine_check_common, EXC_STD, 0
 
 TRAMP_KVM_SKIP(PACA_EXMC, 0x200)
 
···
 	 * Machine check is different because we use a different
 	 * save area: PACA_EXMC instead of PACA_EXGEN.
 	 */
-	mfspr	r10,SPRN_DAR
-	std	r10,PACA_EXMC+EX_DAR(r13)
-	mfspr	r10,SPRN_DSISR
-	stw	r10,PACA_EXMC+EX_DSISR(r13)
-	EXCEPTION_PROLOG_COMMON(0x200, PACA_EXMC)
+	EXCEPTION_COMMON(PACA_EXMC, 0x200)
 	FINISH_NAP
 	RECONCILE_IRQ_STATE(r10, r11)
 	ld	r3,PACA_EXMC+EX_DAR(r13)
···
 
 #define MACHINE_CHECK_HANDLER_WINDUP			\
 	/* Clear MSR_RI before setting SRR0 and SRR1. */\
-	li	r0,MSR_RI;				\
-	mfmsr	r9;		/* get MSR value */	\
-	andc	r9,r9,r0;				\
+	li	r9,0;					\
 	mtmsrd	r9,1;		/* Clear MSR_RI */	\
-	/* Move original SRR0 and SRR1 into the respective regs */ \
-	ld	r9,_MSR(r1);				\
-	mtspr	SPRN_SRR1,r9;				\
-	ld	r3,_NIP(r1);				\
-	mtspr	SPRN_SRR0,r3;				\
-	ld	r9,_CTR(r1);				\
-	mtctr	r9;					\
-	ld	r9,_XER(r1);				\
-	mtxer	r9;					\
-	ld	r9,_LINK(r1);				\
-	mtlr	r9;					\
-	REST_GPR(0, r1);				\
-	REST_8GPRS(2, r1);				\
-	REST_GPR(10, r1);				\
-	ld	r11,_CCR(r1);				\
-	mtcr	r11;					\
-	/* Decrement paca->in_mce. */			\
+	/* Decrement paca->in_mce now RI is clear. */	\
 	lhz	r12,PACA_IN_MCE(r13);			\
 	subi	r12,r12,1;				\
 	sth	r12,PACA_IN_MCE(r13);			\
-	REST_GPR(11, r1);				\
-	REST_2GPRS(12, r1);				\
-	/* restore original r1. */			\
-	ld	r1,GPR1(r1)
+	EXCEPTION_RESTORE_REGS EXC_STD
 
 #ifdef CONFIG_PPC_P7_NAP
 /*
···
 	 *
 	 * Go back to nap/sleep/winkle mode again if (b) is true.
 	 */
-	BEGIN_FTR_SECTION
+BEGIN_FTR_SECTION
 	rlwinm.	r11,r12,47-31,30,31
 	bne	machine_check_idle_common
-	END_FTR_SECTION_IFSET(CPU_FTR_HVMODE | CPU_FTR_ARCH_206)
+END_FTR_SECTION_IFSET(CPU_FTR_HVMODE | CPU_FTR_ARCH_206)
 #endif
 
 	/*
···
 9:
 	/* Deliver the machine check to host kernel in V mode. */
 	MACHINE_CHECK_HANDLER_WINDUP
-	SET_SCRATCH0(r13)		/* save r13 */
-	EXCEPTION_PROLOG_0(PACA_EXMC)
+	EXCEPTION_PROLOG_0 PACA_EXMC
 	b	machine_check_pSeries_0
 
 EXC_COMMON_BEGIN(unrecover_mce)
···
 	b	.
 
 EXC_REAL_BEGIN(data_access, 0x300, 0x80)
-	SET_SCRATCH0(r13)		/* save r13 */
-	EXCEPTION_PROLOG_0(PACA_EXGEN)
+	EXCEPTION_PROLOG_0 PACA_EXGEN
 	b	tramp_real_data_access
 EXC_REAL_END(data_access, 0x300, 0x80)
 
 TRAMP_REAL_BEGIN(tramp_real_data_access)
-	EXCEPTION_PROLOG_1(PACA_EXGEN, KVMTEST_PR, 0x300)
-	/*
-	 * DAR/DSISR must be read before setting MSR[RI], because
-	 * a d-side MCE will clobber those registers so is not
-	 * recoverable if they are live.
-	 */
-	mfspr	r10,SPRN_DAR
-	mfspr	r11,SPRN_DSISR
-	std	r10,PACA_EXGEN+EX_DAR(r13)
-	stw	r11,PACA_EXGEN+EX_DSISR(r13)
-	EXCEPTION_PROLOG_2(data_access_common, EXC_STD)
+	EXCEPTION_PROLOG_1 EXC_STD, PACA_EXGEN, 1, 0x300, 1, 1, 0
+	EXCEPTION_PROLOG_2_REAL data_access_common, EXC_STD, 1
 
 EXC_VIRT_BEGIN(data_access, 0x4300, 0x80)
-	SET_SCRATCH0(r13)		/* save r13 */
-	EXCEPTION_PROLOG_0(PACA_EXGEN)
-	EXCEPTION_PROLOG_1(PACA_EXGEN, NOTEST, 0x300)
-	mfspr	r10,SPRN_DAR
-	mfspr	r11,SPRN_DSISR
-	std	r10,PACA_EXGEN+EX_DAR(r13)
-	stw	r11,PACA_EXGEN+EX_DSISR(r13)
-	EXCEPTION_PROLOG_2_RELON(data_access_common, EXC_STD)
+	EXCEPTION_PROLOG_0 PACA_EXGEN
+	EXCEPTION_PROLOG_1 EXC_STD, PACA_EXGEN, 0, 0x300, 1, 1, 0
+	EXCEPTION_PROLOG_2_VIRT data_access_common, EXC_STD
 EXC_VIRT_END(data_access, 0x4300, 0x80)
 
 TRAMP_KVM_SKIP(PACA_EXGEN, 0x300)
···
 	 * r9 - r13 are saved in paca->exgen.
 	 * EX_DAR and EX_DSISR have saved DAR/DSISR
 	 */
-	EXCEPTION_PROLOG_COMMON(0x300, PACA_EXGEN)
+	EXCEPTION_COMMON(PACA_EXGEN, 0x300)
 	RECONCILE_IRQ_STATE(r10, r11)
 	ld	r12,_MSR(r1)
 	ld	r3,PACA_EXGEN+EX_DAR(r13)
···
 
 
 EXC_REAL_BEGIN(data_access_slb, 0x380, 0x80)
-	SET_SCRATCH0(r13)		/* save r13 */
-	EXCEPTION_PROLOG_0(PACA_EXSLB)
+	EXCEPTION_PROLOG_0 PACA_EXSLB
 	b	tramp_real_data_access_slb
 EXC_REAL_END(data_access_slb, 0x380, 0x80)
 
 TRAMP_REAL_BEGIN(tramp_real_data_access_slb)
-	EXCEPTION_PROLOG_1(PACA_EXSLB, KVMTEST_PR, 0x380)
-	mfspr	r10,SPRN_DAR
-	std	r10,PACA_EXSLB+EX_DAR(r13)
-	EXCEPTION_PROLOG_2(data_access_slb_common, EXC_STD)
+	EXCEPTION_PROLOG_1 EXC_STD, PACA_EXSLB, 1, 0x380, 1, 0, 0
+	EXCEPTION_PROLOG_2_REAL data_access_slb_common, EXC_STD, 1
 
 EXC_VIRT_BEGIN(data_access_slb, 0x4380, 0x80)
-	SET_SCRATCH0(r13)		/* save r13 */
-	EXCEPTION_PROLOG_0(PACA_EXSLB)
-	EXCEPTION_PROLOG_1(PACA_EXSLB, NOTEST, 0x380)
-	mfspr	r10,SPRN_DAR
-	std	r10,PACA_EXSLB+EX_DAR(r13)
-	EXCEPTION_PROLOG_2_RELON(data_access_slb_common, EXC_STD)
+	EXCEPTION_PROLOG_0 PACA_EXSLB
+	EXCEPTION_PROLOG_1 EXC_STD, PACA_EXSLB, 0, 0x380, 1, 0, 0
+	EXCEPTION_PROLOG_2_VIRT data_access_slb_common, EXC_STD
 EXC_VIRT_END(data_access_slb, 0x4380, 0x80)
 
 TRAMP_KVM_SKIP(PACA_EXSLB, 0x380)
 
 EXC_COMMON_BEGIN(data_access_slb_common)
-	EXCEPTION_PROLOG_COMMON(0x380, PACA_EXSLB)
+	EXCEPTION_COMMON(PACA_EXSLB, 0x380)
 	ld	r4,PACA_EXSLB+EX_DAR(r13)
 	std	r4,_DAR(r1)
 	addi	r3,r1,STACK_FRAME_OVERHEAD
···
 TRAMP_KVM(PACA_EXGEN, 0x400)
 
 EXC_COMMON_BEGIN(instruction_access_common)
-	EXCEPTION_PROLOG_COMMON(0x400, PACA_EXGEN)
+	EXCEPTION_COMMON(PACA_EXGEN, 0x400)
 	RECONCILE_IRQ_STATE(r10, r11)
 	ld	r12,_MSR(r1)
 	ld	r3,_NIP(r1)
···
 ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_TYPE_RADIX)
 
 
-EXC_REAL_BEGIN(instruction_access_slb, 0x480, 0x80)
-EXCEPTION_PROLOG(PACA_EXSLB, instruction_access_slb_common, EXC_STD, KVMTEST_PR, 0x480);
-EXC_REAL_END(instruction_access_slb, 0x480, 0x80)
-
-EXC_VIRT_BEGIN(instruction_access_slb, 0x4480, 0x80)
-EXCEPTION_RELON_PROLOG(PACA_EXSLB, instruction_access_slb_common, EXC_STD, NOTEST, 0x480);
-EXC_VIRT_END(instruction_access_slb, 0x4480, 0x80)
-
+__EXC_REAL(instruction_access_slb, 0x480, 0x80, PACA_EXSLB)
+__EXC_VIRT(instruction_access_slb, 0x4480, 0x80, 0x480, PACA_EXSLB)
 TRAMP_KVM(PACA_EXSLB, 0x480)
 
 EXC_COMMON_BEGIN(instruction_access_slb_common)
-	EXCEPTION_PROLOG_COMMON(0x480, PACA_EXSLB)
+	EXCEPTION_COMMON(PACA_EXSLB, 0x480)
 	ld	r4,_NIP(r1)
 	addi	r3,r1,STACK_FRAME_OVERHEAD
 BEGIN_MMU_FTR_SECTION
···
 
 
 EXC_REAL_BEGIN(hardware_interrupt, 0x500, 0x100)
-	.globl hardware_interrupt_hv;
-hardware_interrupt_hv:
-	BEGIN_FTR_SECTION
-		MASKABLE_EXCEPTION_HV(0x500, hardware_interrupt_common, IRQS_DISABLED)
-	FTR_SECTION_ELSE
-		MASKABLE_EXCEPTION(0x500, hardware_interrupt_common, IRQS_DISABLED)
-	ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE | CPU_FTR_ARCH_206)
+	EXCEPTION_PROLOG_0 PACA_EXGEN
+BEGIN_FTR_SECTION
+	EXCEPTION_PROLOG_1 EXC_HV, PACA_EXGEN, 1, 0x500, 0, 0, IRQS_DISABLED
+	EXCEPTION_PROLOG_2_REAL hardware_interrupt_common, EXC_HV, 1
+FTR_SECTION_ELSE
+	EXCEPTION_PROLOG_1 EXC_STD, PACA_EXGEN, 1, 0x500, 0, 0, IRQS_DISABLED
+	EXCEPTION_PROLOG_2_REAL hardware_interrupt_common, EXC_STD, 1
+ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE | CPU_FTR_ARCH_206)
 EXC_REAL_END(hardware_interrupt, 0x500, 0x100)
 
 EXC_VIRT_BEGIN(hardware_interrupt, 0x4500, 0x100)
-	.globl hardware_interrupt_relon_hv;
-hardware_interrupt_relon_hv:
-	BEGIN_FTR_SECTION
-		MASKABLE_RELON_EXCEPTION_HV(0x500, hardware_interrupt_common,
-					    IRQS_DISABLED)
-	FTR_SECTION_ELSE
-		__MASKABLE_RELON_EXCEPTION(0x500, hardware_interrupt_common,
-					   EXC_STD, SOFTEN_TEST_PR, IRQS_DISABLED)
-	ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE)
+	EXCEPTION_PROLOG_0 PACA_EXGEN
+BEGIN_FTR_SECTION
+	EXCEPTION_PROLOG_1 EXC_HV, PACA_EXGEN, 1, 0x500, 0, 0, IRQS_DISABLED
+	EXCEPTION_PROLOG_2_VIRT hardware_interrupt_common, EXC_HV
+FTR_SECTION_ELSE
+	EXCEPTION_PROLOG_1 EXC_STD, PACA_EXGEN, 1, 0x500, 0, 0, IRQS_DISABLED
+	EXCEPTION_PROLOG_2_VIRT hardware_interrupt_common, EXC_STD
+ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE)
 EXC_VIRT_END(hardware_interrupt, 0x4500, 0x100)
 
 TRAMP_KVM(PACA_EXGEN, 0x500)
···
 
 
 EXC_REAL_BEGIN(alignment, 0x600, 0x100)
-	SET_SCRATCH0(r13)		/* save r13 */
-	EXCEPTION_PROLOG_0(PACA_EXGEN)
-	EXCEPTION_PROLOG_1(PACA_EXGEN, KVMTEST_PR, 0x600)
-	mfspr	r10,SPRN_DAR
-	mfspr	r11,SPRN_DSISR
-	std	r10,PACA_EXGEN+EX_DAR(r13)
-	stw	r11,PACA_EXGEN+EX_DSISR(r13)
-	EXCEPTION_PROLOG_2(alignment_common, EXC_STD)
+	EXCEPTION_PROLOG_0 PACA_EXGEN
+	EXCEPTION_PROLOG_1 EXC_STD, PACA_EXGEN, 1, 0x600, 1, 1, 0
+	EXCEPTION_PROLOG_2_REAL alignment_common, EXC_STD, 1
 EXC_REAL_END(alignment, 0x600, 0x100)
 
 EXC_VIRT_BEGIN(alignment, 0x4600, 0x100)
-	SET_SCRATCH0(r13)		/* save r13 */
-	EXCEPTION_PROLOG_0(PACA_EXGEN)
-	EXCEPTION_PROLOG_1(PACA_EXGEN, NOTEST, 0x600)
-	mfspr	r10,SPRN_DAR
-	mfspr	r11,SPRN_DSISR
-	std	r10,PACA_EXGEN+EX_DAR(r13)
-	stw	r11,PACA_EXGEN+EX_DSISR(r13)
-	EXCEPTION_PROLOG_2_RELON(alignment_common, EXC_STD)
+	EXCEPTION_PROLOG_0 PACA_EXGEN
+	EXCEPTION_PROLOG_1 EXC_STD, PACA_EXGEN, 0, 0x600, 1, 1, 0
+	EXCEPTION_PROLOG_2_VIRT alignment_common, EXC_STD
 EXC_VIRT_END(alignment, 0x4600, 0x100)
 
 TRAMP_KVM(PACA_EXGEN, 0x600)
 EXC_COMMON_BEGIN(alignment_common)
-	EXCEPTION_PROLOG_COMMON(0x600, PACA_EXGEN)
+	EXCEPTION_COMMON(PACA_EXGEN, 0x600)
 	ld	r3,PACA_EXGEN+EX_DAR(r13)
 	lwz	r4,PACA_EXGEN+EX_DSISR(r13)
 	std	r3,_DAR(r1)
···
 	 * we switch to the emergency stack if we're taking a TM Bad Thing from
 	 * the kernel.
 	 */
-	li	r10,MSR_PR		/* Build a mask of MSR_PR ..	*/
-	oris	r10,r10,0x200000@h	/* .. and SRR1_PROGTM		*/
-	and	r10,r10,r12		/* Mask SRR1 with that.		*/
-	srdi	r10,r10,8		/* Shift it so we can compare	*/
-	cmpldi	r10,(0x200000 >> 8)	/* .. with an immediate.	*/
-	bne	1f			/* If != go to normal path.	*/
 
-	/* SRR1 had PR=0 and SRR1_PROGTM=1, so use the emergency stack	*/
-	andi.	r10,r12,MSR_PR;		/* Set CR0 correctly for label	*/
+	andi.	r10,r12,MSR_PR
+	bne	2f			/* If userspace, go normal path */
+
+	andis.	r10,r12,(SRR1_PROGTM)@h
+	bne	1f			/* If TM, emergency		*/
+
+	cmpdi	r1,-INT_FRAME_SIZE	/* check if r1 is in userspace	*/
+	blt	2f			/* normal path if not		*/
+
+	/* Use the emergency stack					*/
+1:	andi.	r10,r12,MSR_PR		/* Set CR0 correctly for label	*/
 					/* 3 in EXCEPTION_PROLOG_COMMON	*/
 	mr	r10,r1			/* Save r1			*/
 	ld	r1,PACAEMERGSP(r13)	/* Use emergency stack		*/
 	subi	r1,r1,INT_FRAME_SIZE	/* alloc stack frame		*/
 	b	3f			/* Jump into the macro !!	*/
-1:	EXCEPTION_PROLOG_COMMON(0x700, PACA_EXGEN)
+2:
+	EXCEPTION_COMMON(PACA_EXGEN, 0x700)
 	bl	save_nvgprs
 	RECONCILE_IRQ_STATE(r10, r11)
 	addi	r3,r1,STACK_FRAME_OVERHEAD
···
 EXC_VIRT(fp_unavailable, 0x4800, 0x100, 0x800)
 TRAMP_KVM(PACA_EXGEN, 0x800)
 EXC_COMMON_BEGIN(fp_unavailable_common)
-	EXCEPTION_PROLOG_COMMON(0x800, PACA_EXGEN)
+	EXCEPTION_COMMON(PACA_EXGEN, 0x800)
 	bne	1f			/* if from user, just load it up */
 	bl	save_nvgprs
 	RECONCILE_IRQ_STATE(r10, r11)
···
 * without saving, though xer is not a good idea to use, as hardware may
 * interpret some bits so it may be costly to change them.
 */
+.macro SYSTEM_CALL virt
 #ifdef CONFIG_KVM_BOOK3S_64_HANDLER
 	/*
 	 * There is a little bit of juggling to get syscall and hcall
···
 	 * Userspace syscalls have already saved the PPR, hcalls must save
 	 * it before setting HMT_MEDIUM.
 	 */
-#define SYSCALL_KVMTEST						\
-	mtctr	r13;						\
-	GET_PACA(r13);						\
-	std	r10,PACA_EXGEN+EX_R10(r13);			\
-	INTERRUPT_TO_KERNEL;					\
-	KVMTEST_PR(0xc00); /* uses r10, branch to do_kvm_0xc00_system_call */ \
-	HMT_MEDIUM;						\
-	mfctr	r9;
-
+	mtctr	r13
+	GET_PACA(r13)
+	std	r10,PACA_EXGEN+EX_R10(r13)
+	INTERRUPT_TO_KERNEL
+	KVMTEST EXC_STD 0xc00 /* uses r10, branch to do_kvm_0xc00_system_call */
+	mfctr	r9
 #else
-#define SYSCALL_KVMTEST						\
-	HMT_MEDIUM;						\
-	mr	r9,r13;						\
-	GET_PACA(r13);						\
-	INTERRUPT_TO_KERNEL;
+	mr	r9,r13
+	GET_PACA(r13)
+	INTERRUPT_TO_KERNEL
 #endif
-
-#define LOAD_SYSCALL_HANDLER(reg)				\
-	__LOAD_HANDLER(reg, system_call_common)
-
-/*
- * After SYSCALL_KVMTEST, we reach here with PACA in r13, r13 in r9,
- * and HMT_MEDIUM.
- */
-#define SYSCALL_REAL					\
-	mfspr	r11,SPRN_SRR0 ;				\
-	mfspr	r12,SPRN_SRR1 ;				\
-	LOAD_SYSCALL_HANDLER(r10) ;			\
-	mtspr	SPRN_SRR0,r10 ;				\
-	ld	r10,PACAKMSR(r13) ;			\
-	mtspr	SPRN_SRR1,r10 ;				\
-	RFI_TO_KERNEL ;					\
-	b	. ;	/* prevent speculative execution */
 
 #ifdef CONFIG_PPC_FAST_ENDIAN_SWITCH
-#define SYSCALL_FASTENDIAN_TEST				\
-BEGIN_FTR_SECTION					\
-	cmpdi	r0,0x1ebe ;				\
-	beq-	1f ;					\
-END_FTR_SECTION_IFSET(CPU_FTR_REAL_LE)			\
-
-#define SYSCALL_FASTENDIAN				\
-	/* Fast LE/BE switch system call */		\
-1:	mfspr	r12,SPRN_SRR1 ;				\
-	xori	r12,r12,MSR_LE ;			\
-	mtspr	SPRN_SRR1,r12 ;				\
-	mr	r13,r9 ;				\
-	RFI_TO_USER ;	/* return to userspace */	\
-	b	. ;	/* prevent speculative execution */
-#else
-#define SYSCALL_FASTENDIAN_TEST
-#define SYSCALL_FASTENDIAN
-#endif /* CONFIG_PPC_FAST_ENDIAN_SWITCH */
-
-#if defined(CONFIG_RELOCATABLE)
-	/*
-	 * We can't branch directly so we do it via the CTR which
-	 * is volatile across system calls.
-	 */
-#define SYSCALL_VIRT					\
-	LOAD_SYSCALL_HANDLER(r10) ;			\
-	mtctr	r10 ;					\
-	mfspr	r11,SPRN_SRR0 ;				\
-	mfspr	r12,SPRN_SRR1 ;				\
-	li	r10,MSR_RI ;				\
-	mtmsrd	r10,1 ;					\
-	bctr ;
-#else
-	/* We can branch directly */
-#define SYSCALL_VIRT					\
-	mfspr	r11,SPRN_SRR0 ;				\
-	mfspr	r12,SPRN_SRR1 ;				\
-	li	r10,MSR_RI ;				\
-	mtmsrd	r10,1 ;		/* Set RI (EE=0) */	\
-	b	system_call_common ;
+BEGIN_FTR_SECTION
+	cmpdi	r0,0x1ebe
+	beq-	1f
+END_FTR_SECTION_IFSET(CPU_FTR_REAL_LE)
 #endif
 
+	/* We reach here with PACA in r13, r13 in r9. */
+	mfspr	r11,SPRN_SRR0
+	mfspr	r12,SPRN_SRR1
+
+	HMT_MEDIUM
+
+	.if ! \virt
+	__LOAD_HANDLER(r10, system_call_common)
+	mtspr	SPRN_SRR0,r10
+	ld	r10,PACAKMSR(r13)
+	mtspr	SPRN_SRR1,r10
+	RFI_TO_KERNEL
+	b	.	/* prevent speculative execution */
+	.else
+	li	r10,MSR_RI
+	mtmsrd	r10,1			/* Set RI (EE=0) */
+#ifdef CONFIG_RELOCATABLE
+	__LOAD_HANDLER(r10, system_call_common)
+	mtctr	r10
+	bctr
+#else
+	b	system_call_common
+#endif
+	.endif
+
+#ifdef CONFIG_PPC_FAST_ENDIAN_SWITCH
+	/* Fast LE/BE switch system call */
+1:	mfspr	r12,SPRN_SRR1
+	xori	r12,r12,MSR_LE
+	mtspr	SPRN_SRR1,r12
+	mr	r13,r9
+	RFI_TO_USER	/* return to userspace */
+	b	.	/* prevent speculative execution */
+#endif
+.endm
+
 EXC_REAL_BEGIN(system_call, 0xc00, 0x100)
-	SYSCALL_KVMTEST /* loads PACA into r13, and saves r13 to r9 */
-	SYSCALL_FASTENDIAN_TEST
-	SYSCALL_REAL
-	SYSCALL_FASTENDIAN
+	SYSTEM_CALL 0
 EXC_REAL_END(system_call, 0xc00, 0x100)
 
 EXC_VIRT_BEGIN(system_call, 0x4c00, 0x100)
-	SYSCALL_KVMTEST /* loads PACA into r13, and saves r13 to r9 */
-	SYSCALL_FASTENDIAN_TEST
-	SYSCALL_VIRT
-	SYSCALL_FASTENDIAN
+	SYSTEM_CALL 1
 EXC_VIRT_END(system_call, 0x4c00, 0x100)
 
 #ifdef CONFIG_KVM_BOOK3S_64_HANDLER
···
 	SET_SCRATCH0(r10)
 	std	r9,PACA_EXGEN+EX_R9(r13)
 	mfcr	r9
-	KVM_HANDLER(PACA_EXGEN, EXC_STD, 0xc00)
+	KVM_HANDLER PACA_EXGEN, EXC_STD, 0xc00, 0
 #endif
 
···
 	std	r10,PACA_EXGEN+EX_DAR(r13)
 	mfspr	r10,SPRN_HDSISR
 	stw	r10,PACA_EXGEN+EX_DSISR(r13)
-	EXCEPTION_PROLOG_COMMON(0xe00, PACA_EXGEN)
+	EXCEPTION_COMMON(PACA_EXGEN, 0xe00)
 	bl	save_nvgprs
 	RECONCILE_IRQ_STATE(r10, r11)
 	addi	r3,r1,STACK_FRAME_OVERHEAD
···
 * first, and then eventaully from there to the trampoline to get into virtual
 * mode.
 */
-__EXC_REAL_OOL_HV_DIRECT(hmi_exception, 0xe60, 0x20, hmi_exception_early)
-__TRAMP_REAL_OOL_MASKABLE_HV(hmi_exception, 0xe60, IRQS_DISABLED)
+EXC_REAL_BEGIN(hmi_exception, 0xe60, 0x20)
+	EXCEPTION_PROLOG_0 PACA_EXGEN
+	b	hmi_exception_early
+EXC_REAL_END(hmi_exception, 0xe60, 0x20)
 EXC_VIRT_NONE(0x4e60, 0x20)
 TRAMP_KVM_HV(PACA_EXGEN, 0xe60)
 TRAMP_REAL_BEGIN(hmi_exception_early)
-	EXCEPTION_PROLOG_1(PACA_EXGEN, KVMTEST_HV, 0xe60)
+	EXCEPTION_PROLOG_1 EXC_HV, PACA_EXGEN, 1, 0xe60, 0, 0, 0
+	mfctr	r10			/* save ctr, even for !RELOCATABLE */
+	BRANCH_TO_C000(r11, hmi_exception_early_common)
+
+EXC_COMMON_BEGIN(hmi_exception_early_common)
+	mtctr	r10			/* Restore ctr */
+	mfspr	r11,SPRN_HSRR0		/* Save HSRR0 */
+	mfspr	r12,SPRN_HSRR1		/* Save HSRR1 */
 	mr	r10,r1			/* Save r1 */
 	ld	r1,PACAEMERGSP(r13)	/* Use emergency stack for realmode */
 	subi	r1,r1,INT_FRAME_SIZE	/* alloc stack frame		*/
-	mfspr	r11,SPRN_HSRR0		/* Save HSRR0 */
-	mfspr	r12,SPRN_HSRR1		/* Save HSRR1 */
 	EXCEPTION_PROLOG_COMMON_1()
 	/* We don't touch AMR here, we never go to virtual mode */
 	EXCEPTION_PROLOG_COMMON_2(PACA_EXGEN)
 	EXCEPTION_PROLOG_COMMON_3(0xe60)
 	addi	r3,r1,STACK_FRAME_OVERHEAD
-	BRANCH_LINK_TO_FAR(DOTSYM(hmi_exception_realmode)) /* Function call ABI */
+	bl	hmi_exception_realmode
 	cmpdi	cr0,r3,0
-
-	/* Windup the stack. */
-	/* Move original HSRR0 and HSRR1 into the respective regs */
-	ld	r9,_MSR(r1)
-	mtspr	SPRN_HSRR1,r9
-	ld	r3,_NIP(r1)
-	mtspr	SPRN_HSRR0,r3
-	ld	r9,_CTR(r1)
-	mtctr	r9
-	ld	r9,_XER(r1)
-	mtxer	r9
-	ld	r9,_LINK(r1)
-	mtlr	r9
-	REST_GPR(0, r1)
-	REST_8GPRS(2, r1)
-	REST_GPR(10, r1)
-	ld	r11,_CCR(r1)
-	REST_2GPRS(12, r1)
 	bne	1f
-	mtcr	r11
-	REST_GPR(11, r1)
-	ld	r1,GPR1(r1)
+
+	EXCEPTION_RESTORE_REGS EXC_HV
 	HRFI_TO_USER_OR_KERNEL
 
-1:	mtcr	r11
-	REST_GPR(11, r1)
-	ld	r1,GPR1(r1)
-
+1:
 	/*
 	 * Go to virtual mode and pull the HMI event information from
 	 * firmware.
 	 */
-	.globl hmi_exception_after_realmode
-hmi_exception_after_realmode:
-	SET_SCRATCH0(r13)
-	EXCEPTION_PROLOG_0(PACA_EXGEN)
-	b	tramp_real_hmi_exception
+	EXCEPTION_RESTORE_REGS EXC_HV
+	EXCEPTION_PROLOG_0 PACA_EXGEN
+	EXCEPTION_PROLOG_1 EXC_HV, PACA_EXGEN, 1, 0xe60, 0, 0, IRQS_DISABLED
+	EXCEPTION_PROLOG_2_REAL hmi_exception_common, EXC_HV, 1
 
 EXC_COMMON_BEGIN(hmi_exception_common)
-EXCEPTION_COMMON(PACA_EXGEN, 0xe60, hmi_exception_common, handle_hmi_exception,
-        ret_from_except, FINISH_NAP;ADD_NVGPRS;ADD_RECONCILE;RUNLATCH_ON)
+	EXCEPTION_COMMON(PACA_EXGEN, 0xe60)
+	FINISH_NAP
+	bl	save_nvgprs
+	RECONCILE_IRQ_STATE(r10, r11)
+	RUNLATCH_ON
+	addi	r3,r1,STACK_FRAME_OVERHEAD
+	bl	handle_hmi_exception
+	b	ret_from_except
 
 EXC_REAL_OOL_MASKABLE_HV(h_doorbell, 0xe80, 0x20, IRQS_DISABLED)
 EXC_VIRT_OOL_MASKABLE_HV(h_doorbell, 0x4e80, 0x20, 0xe80, IRQS_DISABLED)
···
 EXC_VIRT_OOL(altivec_unavailable, 0x4f20, 0x20, 0xf20)
 TRAMP_KVM(PACA_EXGEN, 0xf20)
 EXC_COMMON_BEGIN(altivec_unavailable_common)
-	EXCEPTION_PROLOG_COMMON(0xf20, PACA_EXGEN)
+	EXCEPTION_COMMON(PACA_EXGEN, 0xf20)
 #ifdef CONFIG_ALTIVEC
 BEGIN_FTR_SECTION
 	beq	1f
···
 EXC_VIRT_OOL(vsx_unavailable, 0x4f40, 0x20, 0xf40)
 TRAMP_KVM(PACA_EXGEN, 0xf40)
 EXC_COMMON_BEGIN(vsx_unavailable_common)
-	EXCEPTION_PROLOG_COMMON(0xf40, PACA_EXGEN)
+	EXCEPTION_COMMON(PACA_EXGEN, 0xf40)
 #ifdef CONFIG_VSX
 BEGIN_FTR_SECTION
 	beq	1f
···
 EXC_VIRT_NONE(0x5400, 0x100)
 
 EXC_REAL_BEGIN(denorm_exception_hv, 0x1500, 0x100)
-	mtspr	SPRN_SPRG_HSCRATCH0,r13
-	EXCEPTION_PROLOG_0(PACA_EXGEN)
-	EXCEPTION_PROLOG_1(PACA_EXGEN, NOTEST, 0x1500)
+	EXCEPTION_PROLOG_0 PACA_EXGEN
+	EXCEPTION_PROLOG_1 EXC_HV, PACA_EXGEN, 0, 0x1500, 0, 0, 0
 
 #ifdef CONFIG_PPC_DENORMALISATION
 	mfspr	r10,SPRN_HSRR1
···
 	bne+	denorm_assist
 #endif
 
-	KVMTEST_HV(0x1500)
-	EXCEPTION_PROLOG_2(denorm_common, EXC_HV)
+	KVMTEST EXC_HV 0x1500
+	EXCEPTION_PROLOG_2_REAL denorm_common, EXC_HV, 1
 EXC_REAL_END(denorm_exception_hv, 0x1500, 0x100)
 
 #ifdef CONFIG_PPC_DENORMALISATION
···
 	mtmsrd	r10
 	sync
 
-#define FMR2(n)  fmr  (n), (n) ; fmr n+1, n+1
-#define FMR4(n)  FMR2(n) ; FMR2(n+2)
-#define FMR8(n)  FMR4(n) ; FMR4(n+4)
-#define FMR16(n) FMR8(n) ; FMR8(n+8)
-#define FMR32(n) FMR16(n) ; FMR16(n+16)
-	FMR32(0)
+	.Lreg=0
+	.rept 32
+	fmr	.Lreg,.Lreg
+	.Lreg=.Lreg+1
+	.endr
 
 FTR_SECTION_ELSE
 /*
···
 	mtmsrd	r10
 	sync
 
-#define XVCPSGNDP2(n)  XVCPSGNDP(n,n,n) ; XVCPSGNDP(n+1,n+1,n+1)
-#define XVCPSGNDP4(n)  XVCPSGNDP2(n) ; XVCPSGNDP2(n+2)
-#define XVCPSGNDP8(n)  XVCPSGNDP4(n) ; XVCPSGNDP4(n+4)
-#define XVCPSGNDP16(n) XVCPSGNDP8(n) ; XVCPSGNDP8(n+8)
-#define XVCPSGNDP32(n) XVCPSGNDP16(n) ; XVCPSGNDP16(n+16)
-	XVCPSGNDP32(0)
+	.Lreg=0
+	.rept 32
+	XVCPSGNDP(.Lreg,.Lreg,.Lreg)
+	.Lreg=.Lreg+1
+	.endr
 
 ALT_FTR_SECTION_END_IFCLR(CPU_FTR_ARCH_206)
 
···
 * To denormalise we need to move a copy of the register to itself.
 * For POWER8 we need to do that for all 64 VSX registers
 */
-	XVCPSGNDP32(32)
+	.Lreg=32
+	.rept 32
+	XVCPSGNDP(.Lreg,.Lreg,.Lreg)
+	.Lreg=.Lreg+1
+	.endr
+
 denorm_done:
 	mfspr	r11,SPRN_HSRR0
 	subi	r11,r11,4
···
 	std	r12,PACA_EXGEN+EX_R12(r13);		\
 	GET_SCRATCH0(r10);				\
 	std	r10,PACA_EXGEN+EX_R13(r13);		\
-	EXCEPTION_PROLOG_2(soft_nmi_common, _H)
+	EXCEPTION_PROLOG_2_REAL soft_nmi_common, _H, 1
 
 /*
 * Branch to soft_nmi_interrupt using the emergency stack. The emergency
···
 	mr	r10,r1
 	ld	r1,PACAEMERGSP(r13)
 	subi	r1,r1,INT_FRAME_SIZE
-	EXCEPTION_COMMON_NORET_STACK(PACA_EXGEN, 0x900,
-			system_reset, soft_nmi_interrupt,
-			ADD_NVGPRS;ADD_RECONCILE)
+	EXCEPTION_COMMON_STACK(PACA_EXGEN, 0x900)
+	bl	save_nvgprs
+	RECONCILE_IRQ_STATE(r10, r11)
+	addi	r3,r1,STACK_FRAME_OVERHEAD
+	bl	soft_nmi_interrupt
 	b	ret_from_except
 
 #else /* CONFIG_PPC_WATCHDOG */
···
 * - Else it is one of PACA_IRQ_MUST_HARD_MASK, so hard disable and return.
 * This is called with r10 containing the value to OR to the paca field.
 */
-#define MASKED_INTERRUPT(_H)				\
-masked_##_H##interrupt:					\
-	std	r11,PACA_EXGEN+EX_R11(r13);		\
-	lbz	r11,PACAIRQHAPPENED(r13);		\
-	or	r11,r11,r10;				\
-	stb	r11,PACAIRQHAPPENED(r13);		\
-	cmpwi	r10,PACA_IRQ_DEC;			\
-	bne	1f;					\
-	lis	r10,0x7fff;				\
-	ori	r10,r10,0xffff;				\
-	mtspr	SPRN_DEC,r10;				\
-	b	MASKED_DEC_HANDLER_LABEL;		\
-1:	andi.	r10,r10,PACA_IRQ_MUST_HARD_MASK;	\
-	beq	2f;					\
-	mfspr	r10,SPRN_##_H##SRR1;			\
-	xori	r10,r10,MSR_EE;	/* clear MSR_EE */	\
-	mtspr	SPRN_##_H##SRR1,r10;			\
-	ori	r11,r11,PACA_IRQ_HARD_DIS;		\
-	stb	r11,PACAIRQHAPPENED(r13);		\
-2:	/* done */					\
-	mtcrf	0x80,r9;				\
-	std	r1,PACAR1(r13);				\
-	ld	r9,PACA_EXGEN+EX_R9(r13);		\
-	ld	r10,PACA_EXGEN+EX_R10(r13);		\
-	ld	r11,PACA_EXGEN+EX_R11(r13);		\
-	/* returns to kernel where r13 must be set up, so don't restore it */ \
-	##_H##RFI_TO_KERNEL;				\
-	b	.;					\
-	MASKED_DEC_HANDLER(_H)
+.macro MASKED_INTERRUPT hsrr
+	.if \hsrr
+masked_Hinterrupt:
+	.else
+masked_interrupt:
+	.endif
+	std	r11,PACA_EXGEN+EX_R11(r13)
+	lbz	r11,PACAIRQHAPPENED(r13)
+	or	r11,r11,r10
+	stb	r11,PACAIRQHAPPENED(r13)
+	cmpwi	r10,PACA_IRQ_DEC
+	bne	1f
+	lis	r10,0x7fff
+	ori	r10,r10,0xffff
+	mtspr	SPRN_DEC,r10
+	b	MASKED_DEC_HANDLER_LABEL
+1:	andi.	r10,r10,PACA_IRQ_MUST_HARD_MASK
+	beq	2f
+	.if \hsrr
+	mfspr	r10,SPRN_HSRR1
+	xori	r10,r10,MSR_EE	/* clear MSR_EE */
+	mtspr	SPRN_HSRR1,r10
+	.else
+	mfspr	r10,SPRN_SRR1
+	xori	r10,r10,MSR_EE	/* clear MSR_EE */
+	mtspr	SPRN_SRR1,r10
+	.endif
+	ori	r11,r11,PACA_IRQ_HARD_DIS
+	stb	r11,PACAIRQHAPPENED(r13)
+2:	/* done */
+	mtcrf	0x80,r9
+	std	r1,PACAR1(r13)
+	ld	r9,PACA_EXGEN+EX_R9(r13)
+	ld	r10,PACA_EXGEN+EX_R10(r13)
+	ld	r11,PACA_EXGEN+EX_R11(r13)
+	/* returns to kernel where r13 must be set up, so don't restore it */
+	.if \hsrr
+	HRFI_TO_KERNEL
+	.else
+	RFI_TO_KERNEL
+	.endif
+	b	.
+	MASKED_DEC_HANDLER(\hsrr\())
+.endm
 
 TRAMP_REAL_BEGIN(stf_barrier_fallback)
 	std	r9,PACA_EXRFI+EX_R9(r13)
···
 * cannot reach these if they are put there.
 */
 USE_FIXED_SECTION(virt_trampolines)
-	MASKED_INTERRUPT()
-	MASKED_INTERRUPT(H)
+	MASKED_INTERRUPT EXC_STD
+	MASKED_INTERRUPT EXC_HV
 
 #ifdef CONFIG_KVM_BOOK3S_64_HANDLER
 TRAMP_REAL_BEGIN(kvmppc_skip_interrupt)
···
 	addi	r3,r1,STACK_FRAME_OVERHEAD
 	bl	do_page_fault
 	cmpdi	r3,0
-	beq+	12f
+	beq+	ret_from_except_lite
 	bl	save_nvgprs
 	mr	r5,r3
 	addi	r3,r1,STACK_FRAME_OVERHEAD
···
 	ld	r5,_DSISR(r1)
 	addi	r3,r1,STACK_FRAME_OVERHEAD
 	bl	do_break
-12:	b	ret_from_except_lite
+	/*
+	 * do_break() may have changed the NV GPRS while handling a breakpoint.
+	 * If so, we need to restore them with their updated values. Don't use
+	 * ret_from_except_lite here.
+	 */
+	b	ret_from_except
 
 
 #ifdef CONFIG_PPC_BOOK3S_64
···
 	li	r5,SIGSEGV
 	bl	bad_page_fault
 	b	ret_from_except
-
-/*
- * Here we have detected that the kernel stack pointer is bad.
- * R9 contains the saved CR, r13 points to the paca,
- * r10 contains the (bad) kernel stack pointer,
- * r11 and r12 contain the saved SRR0 and SRR1.
- * We switch to using an emergency stack, save the registers there,
- * and call kernel_bad_stack(), which panics.
2408 - */ 2409 - bad_stack: 2410 - ld r1,PACAEMERGSP(r13) 2411 - subi r1,r1,64+INT_FRAME_SIZE 2412 - std r9,_CCR(r1) 2413 - std r10,GPR1(r1) 2414 - std r11,_NIP(r1) 2415 - std r12,_MSR(r1) 2416 - mfspr r11,SPRN_DAR 2417 - mfspr r12,SPRN_DSISR 2418 - std r11,_DAR(r1) 2419 - std r12,_DSISR(r1) 2420 - mflr r10 2421 - mfctr r11 2422 - mfxer r12 2423 - std r10,_LINK(r1) 2424 - std r11,_CTR(r1) 2425 - std r12,_XER(r1) 2426 - SAVE_GPR(0,r1) 2427 - SAVE_GPR(2,r1) 2428 - ld r10,EX_R3(r3) 2429 - std r10,GPR3(r1) 2430 - SAVE_GPR(4,r1) 2431 - SAVE_4GPRS(5,r1) 2432 - ld r9,EX_R9(r3) 2433 - ld r10,EX_R10(r3) 2434 - SAVE_2GPRS(9,r1) 2435 - ld r9,EX_R11(r3) 2436 - ld r10,EX_R12(r3) 2437 - ld r11,EX_R13(r3) 2438 - std r9,GPR11(r1) 2439 - std r10,GPR12(r1) 2440 - std r11,GPR13(r1) 2441 - BEGIN_FTR_SECTION 2442 - ld r10,EX_CFAR(r3) 2443 - std r10,ORIG_GPR3(r1) 2444 - END_FTR_SECTION_IFSET(CPU_FTR_CFAR) 2445 - SAVE_8GPRS(14,r1) 2446 - SAVE_10GPRS(22,r1) 2447 - lhz r12,PACA_TRAP_SAVE(r13) 2448 - std r12,_TRAP(r1) 2449 - addi r11,r1,INT_FRAME_SIZE 2450 - std r11,0(r1) 2451 - li r12,0 2452 - std r12,0(r11) 2453 - ld r2,PACATOC(r13) 2454 - ld r11,exception_marker@toc(r2) 2455 - std r12,RESULT(r1) 2456 - std r11,STACK_FRAME_OVERHEAD-16(r1) 2457 - 1: addi r3,r1,STACK_FRAME_OVERHEAD 2458 - bl kernel_bad_stack 2459 - b 1b 2460 - _ASM_NOKPROBE_SYMBOL(bad_stack); 2461 1792 2462 1793 /* 2463 1794 * When doorbell is triggered from system reset wakeup, the message is
arch/powerpc/kernel/head_64.S (+2)
···
 /*
  * This is where the main kernel code starts.
  */
+__REF
 start_here_multiplatform:
 	/* set up the TOC */
 	bl	relative_toc
···
 	RFI
 	b	.	/* prevent speculative execution */

+.previous
 /* This is where all platforms converge execution */

 start_here_common:
arch/powerpc/kernel/hw_breakpoint.c (-56)
···
 {
 	/* TODO */
 }
-
-bool dawr_force_enable;
-EXPORT_SYMBOL_GPL(dawr_force_enable);
-
-static ssize_t dawr_write_file_bool(struct file *file,
-				    const char __user *user_buf,
-				    size_t count, loff_t *ppos)
-{
-	struct arch_hw_breakpoint null_brk = {0, 0, 0};
-	size_t rc;
-
-	/* Send error to user if they hypervisor won't allow us to write DAWR */
-	if ((!dawr_force_enable) &&
-	    (firmware_has_feature(FW_FEATURE_LPAR)) &&
-	    (set_dawr(&null_brk) != H_SUCCESS))
-		return -1;
-
-	rc = debugfs_write_file_bool(file, user_buf, count, ppos);
-	if (rc)
-		return rc;
-
-	/* If we are clearing, make sure all CPUs have the DAWR cleared */
-	if (!dawr_force_enable)
-		smp_call_function((smp_call_func_t)set_dawr, &null_brk, 0);
-
-	return rc;
-}
-
-static const struct file_operations dawr_enable_fops = {
-	.read =		debugfs_read_file_bool,
-	.write =	dawr_write_file_bool,
-	.open =		simple_open,
-	.llseek =	default_llseek,
-};
-
-static int __init dawr_force_setup(void)
-{
-	dawr_force_enable = false;
-
-	if (cpu_has_feature(CPU_FTR_DAWR)) {
-		/* Don't setup sysfs file for user control on P8 */
-		dawr_force_enable = true;
-		return 0;
-	}
-
-	if (PVR_VER(mfspr(SPRN_PVR)) == PVR_POWER9) {
-		/* Turn DAWR off by default, but allow admin to turn it on */
-		dawr_force_enable = false;
-		debugfs_create_file_unsafe("dawr_enable_dangerous", 0600,
-					   powerpc_debugfs_root,
-					   &dawr_force_enable,
-					   &dawr_enable_fops);
-	}
-	return 0;
-}
-arch_initcall(dawr_force_setup);
arch/powerpc/kernel/irq.c (+3 -3)
···
 	irq_happened = get_irq_happened();
 	if (!irq_happened) {
 #ifdef CONFIG_PPC_IRQ_SOFT_MASK_DEBUG
-		WARN_ON(!(mfmsr() & MSR_EE));
+		WARN_ON_ONCE(!(mfmsr() & MSR_EE));
 #endif
 		return;
 	}
···
 	 */
 	if (!(irq_happened & PACA_IRQ_HARD_DIS)) {
 #ifdef CONFIG_PPC_IRQ_SOFT_MASK_DEBUG
-		WARN_ON(!(mfmsr() & MSR_EE));
+		WARN_ON_ONCE(!(mfmsr() & MSR_EE));
 #endif
 		__hard_irq_disable();
 #ifdef CONFIG_PPC_IRQ_SOFT_MASK_DEBUG
···
 		 * warn if we are wrong. Only do that when IRQ tracing
 		 * is enabled as mfmsr() can be costly.
 		 */
-		if (WARN_ON(mfmsr() & MSR_EE))
+		if (WARN_ON_ONCE(mfmsr() & MSR_EE))
 			__hard_irq_disable();
 #endif
 	}
arch/powerpc/kernel/mce_power.c (+1 -2)
···
 		return;
 	}
 #endif
-	/* PPC_INVALIDATE_ERAT can only be used on ISA v3 and newer */
-	asm volatile(PPC_INVALIDATE_ERAT : : :"memory");
+	asm volatile(PPC_ISA_3_0_INVALIDATE_ERAT : : :"memory");
 }

 #define MCE_FLUSH_SLB 1
arch/powerpc/kernel/misc_64.S (-52)
···
 EXPORT_SYMBOL(flush_icache_range)

 /*
- * Like above, but only do the D-cache.
- *
- * flush_dcache_range(unsigned long start, unsigned long stop)
- *
- *    flush all bytes from start to stop-1 inclusive
- */
-_GLOBAL_TOC(flush_dcache_range)
-
-/*
- * Flush the data cache to memory
- *
- * Different systems have different cache line sizes
- */
-	ld	r10,PPC64_CACHES@toc(r2)
-	lwz	r7,DCACHEL1BLOCKSIZE(r10)	/* Get dcache block size */
-	addi	r5,r7,-1
-	andc	r6,r3,r5		/* round low to line bdy */
-	subf	r8,r6,r4		/* compute length */
-	add	r8,r8,r5		/* ensure we get enough */
-	lwz	r9,DCACHEL1LOGBLOCKSIZE(r10)	/* Get log-2 of dcache block size */
-	srw.	r8,r8,r9		/* compute line count */
-	beqlr				/* nothing to do? */
-	mtctr	r8
-0:	dcbst	0,r6
-	add	r6,r6,r7
-	bdnz	0b
-	sync
-	blr
-EXPORT_SYMBOL(flush_dcache_range)
-
-_GLOBAL(flush_inval_dcache_range)
-	ld	r10,PPC64_CACHES@toc(r2)
-	lwz	r7,DCACHEL1BLOCKSIZE(r10)	/* Get dcache block size */
-	addi	r5,r7,-1
-	andc	r6,r3,r5		/* round low to line bdy */
-	subf	r8,r6,r4		/* compute length */
-	add	r8,r8,r5		/* ensure we get enough */
-	lwz	r9,DCACHEL1LOGBLOCKSIZE(r10)	/* Get log-2 of dcache block size */
-	srw.	r8,r8,r9		/* compute line count */
-	beqlr				/* nothing to do? */
-	sync
-	isync
-	mtctr	r8
-0:	dcbf	0,r6
-	add	r6,r6,r7
-	bdnz	0b
-	sync
-	isync
-	blr
-
-
-/*
  * Flush a particular page from the data cache to RAM.
  * Note: this is necessary because the instruction cache does *not*
  * snoop from the data cache.
arch/powerpc/kernel/module_32.c (+16 -8)
···

 static inline int entry_matches(struct ppc_plt_entry *entry, Elf32_Addr val)
 {
-	if (entry->jump[0] == 0x3d800000 + ((val + 0x8000) >> 16)
-	    && entry->jump[1] == 0x398c0000 + (val & 0xffff))
-		return 1;
-	return 0;
+	if (entry->jump[0] != (PPC_INST_ADDIS | __PPC_RT(R12) | PPC_HA(val)))
+		return 0;
+	if (entry->jump[1] != (PPC_INST_ADDI | __PPC_RT(R12) | __PPC_RA(R12) |
+			       PPC_LO(val)))
+		return 0;
+	return 1;
 }

 /* Set up a trampoline in the PLT to bounce us to the distant function */
···
 		entry++;
 	}

-	entry->jump[0] = 0x3d800000+((val+0x8000)>>16);	/* lis r12,sym@ha */
-	entry->jump[1] = 0x398c0000 + (val&0xffff);	/* addi r12,r12,sym@l*/
-	entry->jump[2] = 0x7d8903a6;			/* mtctr r12 */
-	entry->jump[3] = 0x4e800420;			/* bctr */
+	/*
+	 * lis r12, sym@ha
+	 * addi r12, r12, sym@l
+	 * mtctr r12
+	 * bctr
+	 */
+	entry->jump[0] = PPC_INST_ADDIS | __PPC_RT(R12) | PPC_HA(val);
+	entry->jump[1] = PPC_INST_ADDI | __PPC_RT(R12) | __PPC_RA(R12) | PPC_LO(val);
+	entry->jump[2] = PPC_INST_MTCTR | __PPC_RS(R12);
+	entry->jump[3] = PPC_INST_BCTR;

 	pr_debug("Initialized plt for 0x%x at %p\n", val, entry);
 	return (uint32_t)entry;
arch/powerpc/kernel/module_64.c (+36 -26)
···
  * the stub, but it's significantly shorter to put these values at the
  * end of the stub code, and patch the stub address (32-bits relative
  * to the TOC ptr, r2) into the stub.
+ *
+ * addis   r11,r2, <high>
+ * addi    r11,r11, <low>
+ * std     r2,R2_STACK_OFFSET(r1)
+ * ld      r12,32(r11)
+ * ld      r2,40(r11)
+ * mtctr   r12
+ * bctr
  */
-
 static u32 ppc64_stub_insns[] = {
-	0x3d620000,			/* addis   r11,r2, <high> */
-	0x396b0000,			/* addi    r11,r11, <low> */
+	PPC_INST_ADDIS | __PPC_RT(R11) | __PPC_RA(R2),
+	PPC_INST_ADDI | __PPC_RT(R11) | __PPC_RA(R11),
 	/* Save current r2 value in magic place on the stack. */
-	0xf8410000|R2_STACK_OFFSET,	/* std     r2,R2_STACK_OFFSET(r1) */
-	0xe98b0020,			/* ld      r12,32(r11) */
+	PPC_INST_STD | __PPC_RS(R2) | __PPC_RA(R1) | R2_STACK_OFFSET,
+	PPC_INST_LD | __PPC_RT(R12) | __PPC_RA(R11) | 32,
 #ifdef PPC64_ELF_ABI_v1
 	/* Set up new r2 from function descriptor */
-	0xe84b0028,			/* ld      r2,40(r11) */
+	PPC_INST_LD | __PPC_RT(R2) | __PPC_RA(R11) | 40,
 #endif
-	0x7d8903a6,			/* mtctr   r12 */
-	0x4e800420			/* bctr */
+	PPC_INST_MTCTR | __PPC_RS(R12),
+	PPC_INST_BCTR,
 };

 #ifdef CONFIG_DYNAMIC_FTRACE
···
 {
 	return (sechdrs[me->arch.toc_section].sh_addr & ~0xfful) + 0x8000;
 }
-
-/* Both low and high 16 bits are added as SIGNED additions, so if low
-   16 bits has high bit set, high 16 bits must be adjusted.  These
-   macros do that (stolen from binutils). */
-#define PPC_LO(v) ((v) & 0xffff)
-#define PPC_HI(v) (((v) >> 16) & 0xffff)
-#define PPC_HA(v) PPC_HI ((v) + 0x8000)

 /* Patch stub to reference function and correct r2 value. */
 static inline int create_stub(const Elf64_Shdr *sechdrs,
···
 		 *  ld r2, ...(r12)
 		 *  add r2, r2, r12
 		 */
-		if ((((uint32_t *)location)[0] & ~0xfffc)
-		    != 0xe84c0000)
+		if ((((uint32_t *)location)[0] & ~0xfffc) !=
+		    (PPC_INST_LD | __PPC_RT(R2) | __PPC_RA(R12)))
 			break;
-		if (((uint32_t *)location)[1] != 0x7c426214)
+		if (((uint32_t *)location)[1] !=
+		    (PPC_INST_ADD | __PPC_RT(R2) | __PPC_RA(R2) | __PPC_RB(R12)))
 			break;
 		/*
 		 * If found, replace it with:
 		 *	addis r2, r12, (.TOC.-func)@ha
-		 *	addi r2, r12, (.TOC.-func)@l
+		 *	addi  r2, r2, (.TOC.-func)@l
 		 */
-		((uint32_t *)location)[0] = 0x3c4c0000 + PPC_HA(value);
-		((uint32_t *)location)[1] = 0x38420000 + PPC_LO(value);
+		((uint32_t *)location)[0] = PPC_INST_ADDIS | __PPC_RT(R2) |
+					    __PPC_RA(R12) | PPC_HA(value);
+		((uint32_t *)location)[1] = PPC_INST_ADDI | __PPC_RT(R2) |
+					    __PPC_RA(R2) | PPC_LO(value);
 		break;

 	case R_PPC64_REL16_HA:
···
 {
 	struct ppc64_stub_entry *entry;
 	unsigned int i, num_stubs;
+	/*
+	 * ld r12,PACATOC(r13)
+	 * addis r12,r12,<high>
+	 * addi r12,r12,<low>
+	 * mtctr r12
+	 * bctr
+	 */
 	static u32 stub_insns[] = {
-		0xe98d0000 | PACATOC,	/* ld      r12,PACATOC(r13) */
-		0x3d8c0000,		/* addis   r12,r12,<high> */
-		0x398c0000,		/* addi    r12,r12,<low> */
-		0x7d8903a6,		/* mtctr   r12 */
-		0x4e800420,		/* bctr */
+		PPC_INST_LD | __PPC_RT(R12) | __PPC_RA(R13) | PACATOC,
+		PPC_INST_ADDIS | __PPC_RT(R12) | __PPC_RA(R12),
+		PPC_INST_ADDI | __PPC_RT(R12) | __PPC_RA(R12),
+		PPC_INST_MTCTR | __PPC_RS(R12),
+		PPC_INST_BCTR,
 	};
 	long reladdr;

arch/powerpc/kernel/pci_of_scan.c (+12 -2)
···
 	if (addr0 & 0x02000000) {
 		flags = IORESOURCE_MEM | PCI_BASE_ADDRESS_SPACE_MEMORY;
 		flags |= (addr0 >> 22) & PCI_BASE_ADDRESS_MEM_TYPE_64;
+		if (flags & PCI_BASE_ADDRESS_MEM_TYPE_64)
+			flags |= IORESOURCE_MEM_64;
 		flags |= (addr0 >> 28) & PCI_BASE_ADDRESS_MEM_TYPE_1M;
 		if (addr0 & 0x40000000)
 			flags |= IORESOURCE_PREFETCH
···
 	const __be32 *addrs;
 	u32 i;
 	int proplen;
+	bool mark_unset = false;

 	addrs = of_get_property(node, "assigned-addresses", &proplen);
-	if (!addrs)
-		return;
+	if (!addrs || !proplen) {
+		addrs = of_get_property(node, "reg", &proplen);
+		if (!addrs || !proplen)
+			return;
+		mark_unset = true;
+	}
+
 	pr_debug("    parse addresses (%d bytes) @ %p\n", proplen, addrs);
 	for (; proplen >= 20; proplen -= 20, addrs += 5) {
 		flags = pci_parse_of_flags(of_read_number(addrs, 1), 0);
···
 			continue;
 		}
 		res->flags = flags;
+		if (mark_unset)
+			res->flags |= IORESOURCE_UNSET;
 		res->name = pci_name(dev);
 		region.start = base;
 		region.end = base + size - 1;
arch/powerpc/kernel/process.c (-28)
···
 	return __set_dabr(dabr, dabrx);
 }

-int set_dawr(struct arch_hw_breakpoint *brk)
-{
-	unsigned long dawr, dawrx, mrd;
-
-	dawr = brk->address;
-
-	dawrx  = (brk->type & (HW_BRK_TYPE_READ | HW_BRK_TYPE_WRITE)) \
-		                   << (63 - 58); //* read/write bits */
-	dawrx |= ((brk->type & (HW_BRK_TYPE_TRANSLATE)) >> 2) \
-		                   << (63 - 59); //* translate */
-	dawrx |= (brk->type & (HW_BRK_TYPE_PRIV_ALL)) \
-		                   >> 3; //* PRIM bits */
-	/* dawr length is stored in field MDR bits 48:53.  Matches range in
-	   doublewords (64 bits) baised by -1 eg. 0b000000=1DW and
-	   0b111111=64DW.
-	   brk->len is in bytes.
-	   This aligns up to double word size, shifts and does the bias.
-	*/
-	mrd = ((brk->len + 7) >> 3) - 1;
-	dawrx |= (mrd & 0x3f) << (63 - 53);
-
-	if (ppc_md.set_dawr)
-		return ppc_md.set_dawr(dawr, dawrx);
-	mtspr(SPRN_DAWR, dawr);
-	mtspr(SPRN_DAWRX, dawrx);
-	return 0;
-}
-
 void __set_breakpoint(struct arch_hw_breakpoint *brk)
 {
 	memcpy(this_cpu_ptr(&current_brk), brk, sizeof(*brk));
arch/powerpc/kernel/prom_init.c (+19 -10)
···

 #ifdef CONFIG_PPC_PSERIES
 static bool __prombss prom_radix_disable;
+static bool __prombss prom_xive_disable;

 struct platform_support {
···
 	}
 	if (prom_radix_disable)
 		prom_debug("Radix disabled from cmdline\n");
+
+	opt = prom_strstr(prom_cmd_line, "xive=off");
+	if (opt) {
+		prom_xive_disable = true;
+		prom_debug("XIVE disabled from cmdline\n");
+	}
 #endif /* CONFIG_PPC_PSERIES */
 }

···
 	switch (val) {
 	case OV5_FEAT(OV5_XIVE_EITHER): /* Either Available */
 		prom_debug("XIVE - either mode supported\n");
-		support->xive = true;
+		support->xive = !prom_xive_disable;
 		break;
 	case OV5_FEAT(OV5_XIVE_EXPLOIT): /* Only Exploitation mode */
 		prom_debug("XIVE - exploitation mode supported\n");
+		if (prom_xive_disable) {
+			/*
+			 * If we __have__ to do XIVE, we're better off ignoring
+			 * the command line rather than not booting.
+			 */
+			prom_printf("WARNING: Ignoring cmdline option xive=off\n");
+		}
 		support->xive = true;
 		break;
 	case OV5_FEAT(OV5_XIVE_LEGACY): /* Only Legacy mode */
···
 static void __init prom_init_mem(void)
 {
 	phandle node;
-#ifdef DEBUG_PROM
-	char *path;
-#endif
 	char type[64];
 	unsigned int plen;
 	cell_t *p, *endp;
···
 	prom_debug("root_size_cells: %x\n", rsc);

 	prom_debug("scanning memory:\n");
-#ifdef DEBUG_PROM
-	path = prom_scratch;
-#endif

 	for (node = 0; prom_next_node(&node); ) {
 		type[0] = 0;
···
 		endp = p + (plen / sizeof(cell_t));

 #ifdef DEBUG_PROM
-		memset(path, 0, sizeof(prom_scratch));
-		call_prom("package-to-path", 3, 1, node, path, sizeof(prom_scratch) - 1);
-		prom_debug("  node %s :\n", path);
+		memset(prom_scratch, 0, sizeof(prom_scratch));
+		call_prom("package-to-path", 3, 1, node, prom_scratch,
+			  sizeof(prom_scratch) - 1);
+		prom_debug("  node %s :\n", prom_scratch);
 #endif /* DEBUG_PROM */

 		while ((endp - p) >= (rac + rsc)) {
arch/powerpc/kernel/rtas.c (+3 -4)
···
 	cpu_hotplug_disable();

 	/* Check if we raced with a CPU-Offline Operation */
-	if (unlikely(!cpumask_equal(cpu_present_mask, cpu_online_mask))) {
-		pr_err("%s: Raced against a concurrent CPU-Offline\n",
-		       __func__);
-		atomic_set(&data.error, -EBUSY);
+	if (!cpumask_equal(cpu_present_mask, cpu_online_mask)) {
+		pr_info("%s: Raced against a concurrent CPU-Offline\n", __func__);
+		atomic_set(&data.error, -EAGAIN);
 		goto out_hotplug_enable;
 	}

arch/powerpc/kernel/swsusp_32.S (+65 -8)
···
 #define SL_IBAT2	0x48
 #define SL_DBAT3	0x50
 #define SL_IBAT3	0x58
-#define SL_TB		0x60
-#define SL_R2		0x68
-#define SL_CR		0x6c
-#define SL_LR		0x70
-#define SL_R12		0x74	/* r12 to r31 */
+#define SL_DBAT4	0x60
+#define SL_IBAT4	0x68
+#define SL_DBAT5	0x70
+#define SL_IBAT5	0x78
+#define SL_DBAT6	0x80
+#define SL_IBAT6	0x88
+#define SL_DBAT7	0x90
+#define SL_IBAT7	0x98
+#define SL_TB		0xa0
+#define SL_R2		0xa8
+#define SL_CR		0xac
+#define SL_LR		0xb0
+#define SL_R12		0xb4	/* r12 to r31 */
 #define SL_SIZE		(SL_R12 + 80)

 	.section .data
···
 	stw	r4,SL_IBAT3(r11)
 	mfibatl	r4,3
 	stw	r4,SL_IBAT3+4(r11)
+
+BEGIN_MMU_FTR_SECTION
+	mfspr	r4,SPRN_DBAT4U
+	stw	r4,SL_DBAT4(r11)
+	mfspr	r4,SPRN_DBAT4L
+	stw	r4,SL_DBAT4+4(r11)
+	mfspr	r4,SPRN_DBAT5U
+	stw	r4,SL_DBAT5(r11)
+	mfspr	r4,SPRN_DBAT5L
+	stw	r4,SL_DBAT5+4(r11)
+	mfspr	r4,SPRN_DBAT6U
+	stw	r4,SL_DBAT6(r11)
+	mfspr	r4,SPRN_DBAT6L
+	stw	r4,SL_DBAT6+4(r11)
+	mfspr	r4,SPRN_DBAT7U
+	stw	r4,SL_DBAT7(r11)
+	mfspr	r4,SPRN_DBAT7L
+	stw	r4,SL_DBAT7+4(r11)
+	mfspr	r4,SPRN_IBAT4U
+	stw	r4,SL_IBAT4(r11)
+	mfspr	r4,SPRN_IBAT4L
+	stw	r4,SL_IBAT4+4(r11)
+	mfspr	r4,SPRN_IBAT5U
+	stw	r4,SL_IBAT5(r11)
+	mfspr	r4,SPRN_IBAT5L
+	stw	r4,SL_IBAT5+4(r11)
+	mfspr	r4,SPRN_IBAT6U
+	stw	r4,SL_IBAT6(r11)
+	mfspr	r4,SPRN_IBAT6L
+	stw	r4,SL_IBAT6+4(r11)
+	mfspr	r4,SPRN_IBAT7U
+	stw	r4,SL_IBAT7(r11)
+	mfspr	r4,SPRN_IBAT7L
+	stw	r4,SL_IBAT7+4(r11)
+END_MMU_FTR_SECTION_IFSET(MMU_FTR_USE_HIGH_BATS)

 #if 0
 	/* Backup various CPU config stuffs */
···
 	mtibatu	3,r4
 	lwz	r4,SL_IBAT3+4(r11)
 	mtibatl	3,r4
-#endif
-
 BEGIN_MMU_FTR_SECTION
-	li	r4,0
+	lwz	r4,SL_DBAT4(r11)
 	mtspr	SPRN_DBAT4U,r4
+	lwz	r4,SL_DBAT4+4(r11)
 	mtspr	SPRN_DBAT4L,r4
+	lwz	r4,SL_DBAT5(r11)
 	mtspr	SPRN_DBAT5U,r4
+	lwz	r4,SL_DBAT5+4(r11)
 	mtspr	SPRN_DBAT5L,r4
+	lwz	r4,SL_DBAT6(r11)
 	mtspr	SPRN_DBAT6U,r4
+	lwz	r4,SL_DBAT6+4(r11)
 	mtspr	SPRN_DBAT6L,r4
+	lwz	r4,SL_DBAT7(r11)
 	mtspr	SPRN_DBAT7U,r4
+	lwz	r4,SL_DBAT7+4(r11)
 	mtspr	SPRN_DBAT7L,r4
+	lwz	r4,SL_IBAT4(r11)
 	mtspr	SPRN_IBAT4U,r4
+	lwz	r4,SL_IBAT4+4(r11)
 	mtspr	SPRN_IBAT4L,r4
+	lwz	r4,SL_IBAT5(r11)
 	mtspr	SPRN_IBAT5U,r4
+	lwz	r4,SL_IBAT5+4(r11)
 	mtspr	SPRN_IBAT5L,r4
+	lwz	r4,SL_IBAT6(r11)
 	mtspr	SPRN_IBAT6U,r4
+	lwz	r4,SL_IBAT6+4(r11)
 	mtspr	SPRN_IBAT6L,r4
+	lwz	r4,SL_IBAT7(r11)
 	mtspr	SPRN_IBAT7U,r4
+	lwz	r4,SL_IBAT7+4(r11)
 	mtspr	SPRN_IBAT7L,r4
 END_MMU_FTR_SECTION_IFSET(MMU_FTR_USE_HIGH_BATS)
+#endif

 	/* Flush all TLBs */
 	lis	r4,0x1000
arch/powerpc/kernel/tm.S (+2 -2)
···
 	/* Stash the stack pointer away for use after reclaim */
 	std	r1, PACAR1(r13)

-	/* Clear MSR RI since we are about to change r1, EE is already off. */
+	/* Clear MSR RI since we are about to use SCRATCH0, EE is already off */
 	li	r5, 0
 	mtmsrd	r5, 1

···

 	REST_GPR(7, r7)

-	/* Clear MSR RI since we are about to change r1. EE is already off */
+	/* Clear MSR RI since we are about to use SCRATCH0. EE is already off */
 	li	r5, 0
 	mtmsrd	r5, 1

arch/powerpc/kernel/trace/ftrace.c (-4)
···
 #ifdef CONFIG_PPC64
 #define PACATOC offsetof(struct paca_struct, kernel_toc)

-#define PPC_LO(v)	((v) & 0xffff)
-#define PPC_HI(v)	(((v) >> 16) & 0xffff)
-#define PPC_HA(v)	PPC_HI ((v) + 0x8000)
-
 extern unsigned int ftrace_tramp_text[], ftrace_tramp_init[];

 int __init ftrace_dyn_arch_init(void)
arch/powerpc/kvm/Kconfig (+4 -3)
···
 config KVM_BOOK3S_64_HANDLER
 	bool
 	select KVM_BOOK3S_HANDLER
+	select PPC_DAWR_FORCE_ENABLE

 config KVM_BOOK3S_PR_POSSIBLE
 	bool
···
 	select HAVE_KVM_MSI
 	help
 	  Enable support for emulating MPIC devices inside the
-	  host kernel, rather than relying on userspace to emulate.
-	  Currently, support is limited to certain versions of
-	  Freescale's MPIC implementation.
+	  host kernel, rather than relying on userspace to emulate.
+	  Currently, support is limited to certain versions of
+	  Freescale's MPIC implementation.

 config KVM_XICS
 	bool "KVM in-kernel XICS emulation"
arch/powerpc/kvm/book3s_64_mmu_radix.c (+3 -9)
···
 	kmem_cache_free(kvm_pte_cache, ptep);
 }

-/* Like pmd_huge() and pmd_large(), but works regardless of config options */
-static inline int pmd_is_leaf(pmd_t pmd)
-{
-	return !!(pmd_val(pmd) & _PAGE_PTE);
-}
-
 static pmd_t *kvmppc_pmd_alloc(void)
 {
 	return kmem_cache_alloc(kvm_pmd_cache, GFP_KERNEL);
···
 	for (iu = 0; iu < PTRS_PER_PUD; ++iu, ++p) {
 		if (!pud_present(*p))
 			continue;
-		if (pud_huge(*p)) {
+		if (pud_is_leaf(*p)) {
 			pud_clear(p);
 		} else {
 			pmd_t *pmd;
···
 		new_pud = pud_alloc_one(kvm->mm, gpa);

 	pmd = NULL;
-	if (pud && pud_present(*pud) && !pud_huge(*pud))
+	if (pud && pud_present(*pud) && !pud_is_leaf(*pud))
 		pmd = pmd_offset(pud, gpa);
 	else if (level <= 1)
 		new_pmd = kvmppc_pmd_alloc();
···
 		new_pud = NULL;
 	}
 	pud = pud_offset(pgd, gpa);
-	if (pud_huge(*pud)) {
+	if (pud_is_leaf(*pud)) {
 		unsigned long hgpa = gpa & PUD_MASK;

 		/* Check if we raced and someone else has set the same thing */
arch/powerpc/kvm/book3s_hv.c (+11 -2)
···

 	vcpu->arch.slb_max = 0;
 	dec = mfspr(SPRN_DEC);
+	if (!(lpcr & LPCR_LD)) /* Sign extend if not using large decrementer */
+		dec = (s32) dec;
 	tb = mftb();
 	vcpu->arch.dec_expires = dec + tb;
 	vcpu->cpu = -1;
···

 	preempt_enable();

-	/* cancel pending decrementer exception if DEC is now positive */
-	if (get_tb() < vcpu->arch.dec_expires && kvmppc_core_pending_dec(vcpu))
+	/*
+	 * cancel pending decrementer exception if DEC is now positive, or if
+	 * entering a nested guest in which case the decrementer is now owned
+	 * by L2 and the L1 decrementer is provided in hdec_expires
+	 */
+	if (kvmppc_core_pending_dec(vcpu) &&
+			((get_tb() < vcpu->arch.dec_expires) ||
+			 (trap == BOOK3S_INTERRUPT_SYSCALL &&
+			  kvmppc_get_gpr(vcpu, 3) == H_ENTER_NESTED)))
 		kvmppc_core_dequeue_dec(vcpu);

 	trace_kvm_guest_exit(vcpu);
arch/powerpc/kvm/book3s_hv_builtin.c (+4 -2)
···
 			     : : "r" (rb), "i" (1), "i" (1), "i" (0),
 			     "r" (0) : "memory");
 		}
+		asm volatile("ptesync": : :"memory");
+		asm volatile(PPC_RADIX_INVALIDATE_ERAT_GUEST : : :"memory");
 	} else {
 		for (set = 0; set < kvm->arch.tlb_sets; ++set) {
 			/* R=0 PRS=0 RIC=0 */
···
 			     "r" (0) : "memory");
 			rb += PPC_BIT(51);	/* increment set number */
 		}
+		asm volatile("ptesync": : :"memory");
+		asm volatile(PPC_ISA_3_0_INVALIDATE_ERAT : : :"memory");
 	}
-	asm volatile("ptesync": : :"memory");
-	asm volatile(PPC_INVALIDATE_ERAT : : :"memory");
 }

 void kvmppc_check_need_tlb_flush(struct kvm *kvm, int pcpu,
arch/powerpc/kvm/book3s_hv_tm.c (+3 -3)
···
 		}
 		/* Set CR0 to indicate previous transactional state */
 		vcpu->arch.regs.ccr = (vcpu->arch.regs.ccr & 0x0fffffff) |
-			(((msr & MSR_TS_MASK) >> MSR_TS_S_LG) << 28);
+			(((msr & MSR_TS_MASK) >> MSR_TS_S_LG) << 29);
 		/* L=1 => tresume, L=0 => tsuspend */
 		if (instr & (1 << 21)) {
 			if (MSR_TM_SUSPENDED(msr))
···

 		/* Set CR0 to indicate previous transactional state */
 		vcpu->arch.regs.ccr = (vcpu->arch.regs.ccr & 0x0fffffff) |
-			(((msr & MSR_TS_MASK) >> MSR_TS_S_LG) << 28);
+			(((msr & MSR_TS_MASK) >> MSR_TS_S_LG) << 29);
 		vcpu->arch.shregs.msr &= ~MSR_TS_MASK;
 		return RESUME_GUEST;

···

 		/* Set CR0 to indicate previous transactional state */
 		vcpu->arch.regs.ccr = (vcpu->arch.regs.ccr & 0x0fffffff) |
-			(((msr & MSR_TS_MASK) >> MSR_TS_S_LG) << 28);
+			(((msr & MSR_TS_MASK) >> MSR_TS_S_LG) << 29);
 		vcpu->arch.shregs.msr = msr | MSR_TS_S;
 		return RESUME_GUEST;
 	}
arch/powerpc/lib/Makefile (+2 -1)
···
 obj-y += checksum_$(BITS).o checksum_wrappers.o \
 	 string_$(BITS).o

-obj-y += sstep.o ldstfp.o quad.o
+obj-y += sstep.o
+obj-$(CONFIG_PPC_FPU) += ldstfp.o
 obj64-y += quad.o

 obj-$(CONFIG_PPC_LIB_RHEAP) += rheap.o
arch/powerpc/lib/ldstfp.S (-4)
···
 #include <asm/asm-compat.h>
 #include <linux/errno.h>

-#ifdef CONFIG_PPC_FPU
-
 #define STKFRM	(PPC_MIN_STKFRM + 16)

 /* Get the contents of frN into *p; N is in r3 and p is in r4. */
···
 	MTMSRD(r6)
 	isync
 	blr
-
-#endif	/* CONFIG_PPC_FPU */
arch/powerpc/lib/pmem.c (+4 -4)
···
 void arch_wb_cache_pmem(void *addr, size_t size)
 {
 	unsigned long start = (unsigned long) addr;
-	flush_inval_dcache_range(start, start + size);
+	flush_dcache_range(start, start + size);
 }
 EXPORT_SYMBOL(arch_wb_cache_pmem);

 void arch_invalidate_pmem(void *addr, size_t size)
 {
 	unsigned long start = (unsigned long) addr;
-	flush_inval_dcache_range(start, start + size);
+	flush_dcache_range(start, start + size);
 }
 EXPORT_SYMBOL(arch_invalidate_pmem);
···
 	unsigned long copied, start = (unsigned long) dest;

 	copied = __copy_from_user(dest, src, size);
-	flush_inval_dcache_range(start, start + size);
+	flush_dcache_range(start, start + size);

 	return copied;
 }
···
 	unsigned long start = (unsigned long) dest;

 	memcpy(dest, src, size);
-	flush_inval_dcache_range(start, start + size);
+	flush_dcache_range(start, start + size);

 	return dest;
 }
arch/powerpc/mm/book3s64/Makefile (-1)
···
 obj-$(CONFIG_PPC_RADIX_MMU)	+= radix_pgtable.o radix_tlb.o
 obj-$(CONFIG_PPC_4K_PAGES)	+= hash_4k.o
 obj-$(CONFIG_PPC_64K_PAGES)	+= hash_64k.o
-obj-$(CONFIG_PPC_SPLPAR)	+= vphn.o
 obj-$(CONFIG_HUGETLB_PAGE)	+= hash_hugetlbpage.o
 ifdef CONFIG_HUGETLB_PAGE
 obj-$(CONFIG_PPC_RADIX_MMU)	+= radix_hugetlbpage.o
arch/powerpc/mm/book3s64/hash_native.c (+3 -3)
···
 #define HPTE_LOCK_BIT	(56+3)
 #endif

-DEFINE_RAW_SPINLOCK(native_tlbie_lock);
+static DEFINE_RAW_SPINLOCK(native_tlbie_lock);

 static inline void tlbiel_hash_set_isa206(unsigned int set, unsigned int is)
 {
···
  * tlbiel instruction for hash, set invalidation
  * i.e., r=1 and is=01 or is=10 or is=11
  */
-static inline void tlbiel_hash_set_isa300(unsigned int set, unsigned int is,
+static __always_inline void tlbiel_hash_set_isa300(unsigned int set, unsigned int is,
 					unsigned int pid,
 					unsigned int ric, unsigned int prs)
 {
···

 	asm volatile("ptesync": : :"memory");

-	asm volatile(PPC_INVALIDATE_ERAT "; isync" : : :"memory");
+	asm volatile(PPC_ISA_3_0_INVALIDATE_ERAT "; isync" : : :"memory");
 }

 void hash__tlbiel_all(unsigned int action)
arch/powerpc/mm/book3s64/hash_utils.c (+2 -4)
···
 	if (mmu_psize_defs[MMU_PAGE_16M].shift &&
 	    memblock_phys_mem_size() >= 0x40000000)
 		mmu_vmemmap_psize = MMU_PAGE_16M;
-	else if (mmu_psize_defs[MMU_PAGE_64K].shift)
-		mmu_vmemmap_psize = MMU_PAGE_64K;
 	else
-		mmu_vmemmap_psize = MMU_PAGE_4K;
+		mmu_vmemmap_psize = mmu_virtual_psize;
 #endif /* CONFIG_SPARSEMEM_VMEMMAP */

 	printk(KERN_DEBUG "Page orders: linear mapping = %d, "
···
 	htab_scan_page_sizes();
 }

-struct hash_mm_context init_hash_mm_context;
+static struct hash_mm_context init_hash_mm_context;
 void __init hash__early_init_mmu(void)
 {
 #ifndef CONFIG_PPC_64K_PAGES
-1
arch/powerpc/mm/book3s64/mmu_context.c
···
	 */
	asm volatile("ptesync;isync" : : : "memory");

- 	mm->context.npu_context = NULL;
	mm->context.hash_context = NULL;

	return index;
+22 -1
arch/powerpc/mm/book3s64/pgtable.c
···
	WARN_ON(pte_hw_valid(pmd_pte(*pmdp)) && !pte_protnone(pmd_pte(*pmdp)));
	assert_spin_locked(pmd_lockptr(mm, pmdp));
- 	WARN_ON(!(pmd_large(pmd) || pmd_devmap(pmd)));
+ 	WARN_ON(!(pmd_large(pmd)));
#endif
	trace_hugepage_set_pmd(addr, pmd_val(pmd));
	return set_pte_at(mm, addr, pmdp_ptep(pmdp), pmd_pte(pmd));
···
		return (new_pmd_ptl != old_pmd_ptl) && vma_is_anonymous(vma);

	return true;
+ }
+ 
+ int ioremap_range(unsigned long ea, phys_addr_t pa, unsigned long size, pgprot_t prot, int nid)
+ {
+ 	unsigned long i;
+ 
+ 	if (radix_enabled())
+ 		return radix__ioremap_range(ea, pa, size, prot, nid);
+ 
+ 	for (i = 0; i < size; i += PAGE_SIZE) {
+ 		int err = map_kernel_page(ea + i, pa + i, prot);
+ 		if (err) {
+ 			if (slab_is_available())
+ 				unmap_kernel_range(ea, size);
+ 			else
+ 				WARN_ON_ONCE(1); /* Should clean up */
+ 			return err;
+ 		}
+ 	}
+ 
+ 	return 0;
}
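For context, the new `ioremap_range()` fallback path above maps one page at a time and, once the slab allocator is available, unmaps the whole range if any single page fails. A minimal userspace sketch of that map-or-roll-back pattern (`PAGE_SIZE`, the failure injection, and the counters are stand-ins for illustration, not kernel APIs):

```c
#include <stddef.h>

#define PAGE_SIZE 4096UL

/*
 * Sketch of the fallback loop in ioremap_range(): map page by page,
 * and on the first failure undo all partial mappings before returning.
 * fail_at injects a failure at the given page index (-1 = never fail);
 * *mapped stands in for the kernel's actual page-table state.
 */
static int ioremap_range_sketch(unsigned long size, long fail_at, int *mapped)
{
	unsigned long i;

	*mapped = 0;
	for (i = 0; i < size; i += PAGE_SIZE) {
		if ((long)(i / PAGE_SIZE) == fail_at) {
			*mapped = 0;	/* unmap_kernel_range(): undo partial work */
			return -1;	/* propagate the mapping error */
		}
		(*mapped)++;		/* map_kernel_page() succeeded for this page */
	}
	return 0;
}
```

The point of the design is that a caller never sees a half-mapped region: either the whole range is mapped, or the error is returned with nothing left behind.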
+134 -15
arch/powerpc/mm/book3s64/radix_pgtable.c
···
#define pr_fmt(fmt) "radix-mmu: " fmt

+ #include <linux/io.h>
#include <linux/kernel.h>
#include <linux/sched/mm.h>
#include <linux/memblock.h>
···
	pudp = pud_alloc(&init_mm, pgdp, idx);
	if (!pudp)
		continue;
- 	if (pud_huge(*pudp)) {
+ 	if (pud_is_leaf(*pudp)) {
		ptep = (pte_t *)pudp;
		goto update_the_pte;
	}
	pmdp = pmd_alloc(&init_mm, pudp, idx);
	if (!pmdp)
		continue;
- 	if (pmd_huge(*pmdp)) {
+ 	if (pmd_is_leaf(*pmdp)) {
		ptep = pmdp_ptep(pmdp);
		goto update_the_pte;
	}
···
	return 0;
}

- void __init radix_init_pgtable(void)
+ static void __init radix_init_pgtable(void)
{
	unsigned long rts_field;
	struct memblock_region *reg;
···
	mmu_psize_defs[MMU_PAGE_64K].shift = 16;
	mmu_psize_defs[MMU_PAGE_64K].ap = 0x5;
found:
- #ifdef CONFIG_SPARSEMEM_VMEMMAP
- 	if (mmu_psize_defs[MMU_PAGE_2M].shift) {
- 		/*
- 		 * map vmemmap using 2M if available
- 		 */
- 		mmu_vmemmap_psize = MMU_PAGE_2M;
- 	}
- #endif /* CONFIG_SPARSEMEM_VMEMMAP */
	return;
}
···
#ifdef CONFIG_SPARSEMEM_VMEMMAP
	/* vmemmap mapping */
- 	mmu_vmemmap_psize = mmu_virtual_psize;
+ 	if (mmu_psize_defs[MMU_PAGE_2M].shift) {
+ 		/*
+ 		 * map vmemmap using 2M if available
+ 		 */
+ 		mmu_vmemmap_psize = MMU_PAGE_2M;
+ 	} else
+ 		mmu_vmemmap_psize = mmu_virtual_psize;
#endif
	/*
	 * initialize page table size
···
	if (!pmd_present(*pmd))
		continue;

- 	if (pmd_huge(*pmd)) {
+ 	if (pmd_is_leaf(*pmd)) {
		split_kernel_mapping(addr, end, PMD_SIZE, (pte_t *)pmd);
		continue;
	}
···
	if (!pud_present(*pud))
		continue;

- 	if (pud_huge(*pud)) {
+ 	if (pud_is_leaf(*pud)) {
		split_kernel_mapping(addr, end, PUD_SIZE, (pte_t *)pud);
		continue;
	}
···
	if (!pgd_present(*pgd))
		continue;

- 	if (pgd_huge(*pgd)) {
+ 	if (pgd_is_leaf(*pgd)) {
		split_kernel_mapping(addr, end, PGDIR_SIZE, (pte_t *)pgd);
		continue;
	}
···
	radix__flush_tlb_page(vma, addr);

	set_pte_at(mm, addr, ptep, pte);
+ }
+ 
+ int __init arch_ioremap_pud_supported(void)
+ {
+ 	/* HPT does not cope with large pages in the vmalloc area */
+ 	return radix_enabled();
+ }
+ 
+ int __init arch_ioremap_pmd_supported(void)
+ {
+ 	return radix_enabled();
+ }
+ 
+ int p4d_free_pud_page(p4d_t *p4d, unsigned long addr)
+ {
+ 	return 0;
+ }
+ 
+ int pud_set_huge(pud_t *pud, phys_addr_t addr, pgprot_t prot)
+ {
+ 	pte_t *ptep = (pte_t *)pud;
+ 	pte_t new_pud = pfn_pte(__phys_to_pfn(addr), prot);
+ 
+ 	if (!radix_enabled())
+ 		return 0;
+ 
+ 	set_pte_at(&init_mm, 0 /* radix unused */, ptep, new_pud);
+ 
+ 	return 1;
+ }
+ 
+ int pud_clear_huge(pud_t *pud)
+ {
+ 	if (pud_huge(*pud)) {
+ 		pud_clear(pud);
+ 		return 1;
+ 	}
+ 
+ 	return 0;
+ }
+ 
+ int pud_free_pmd_page(pud_t *pud, unsigned long addr)
+ {
+ 	pmd_t *pmd;
+ 	int i;
+ 
+ 	pmd = (pmd_t *)pud_page_vaddr(*pud);
+ 	pud_clear(pud);
+ 
+ 	flush_tlb_kernel_range(addr, addr + PUD_SIZE);
+ 
+ 	for (i = 0; i < PTRS_PER_PMD; i++) {
+ 		if (!pmd_none(pmd[i])) {
+ 			pte_t *pte;
+ 			pte = (pte_t *)pmd_page_vaddr(pmd[i]);
+ 
+ 			pte_free_kernel(&init_mm, pte);
+ 		}
+ 	}
+ 
+ 	pmd_free(&init_mm, pmd);
+ 
+ 	return 1;
+ }
+ 
+ int pmd_set_huge(pmd_t *pmd, phys_addr_t addr, pgprot_t prot)
+ {
+ 	pte_t *ptep = (pte_t *)pmd;
+ 	pte_t new_pmd = pfn_pte(__phys_to_pfn(addr), prot);
+ 
+ 	if (!radix_enabled())
+ 		return 0;
+ 
+ 	set_pte_at(&init_mm, 0 /* radix unused */, ptep, new_pmd);
+ 
+ 	return 1;
+ }
+ 
+ int pmd_clear_huge(pmd_t *pmd)
+ {
+ 	if (pmd_huge(*pmd)) {
+ 		pmd_clear(pmd);
+ 		return 1;
+ 	}
+ 
+ 	return 0;
+ }
+ 
+ int pmd_free_pte_page(pmd_t *pmd, unsigned long addr)
+ {
+ 	pte_t *pte;
+ 
+ 	pte = (pte_t *)pmd_page_vaddr(*pmd);
+ 	pmd_clear(pmd);
+ 
+ 	flush_tlb_kernel_range(addr, addr + PMD_SIZE);
+ 
+ 	pte_free_kernel(&init_mm, pte);
+ 
+ 	return 1;
+ }
+ 
+ int radix__ioremap_range(unsigned long ea, phys_addr_t pa, unsigned long size,
+ 			pgprot_t prot, int nid)
+ {
+ 	if (likely(slab_is_available())) {
+ 		int err = ioremap_page_range(ea, ea + size, pa, prot);
+ 		if (err)
+ 			unmap_kernel_range(ea, size);
+ 		return err;
+ 	} else {
+ 		unsigned long i;
+ 
+ 		for (i = 0; i < size; i += PAGE_SIZE) {
+ 			int err = map_kernel_page(ea + i, pa + i, prot);
+ 			if (WARN_ON_ONCE(err)) /* Should clean up */
+ 				return err;
+ 		}
+ 		return 0;
+ 	}
}
+20 -20
arch/powerpc/mm/book3s64/radix_tlb.c
···
 * tlbiel instruction for radix, set invalidation
 * i.e., r=1 and is=01 or is=10 or is=11
 */
- static inline void tlbiel_radix_set_isa300(unsigned int set, unsigned int is,
+ static __always_inline void tlbiel_radix_set_isa300(unsigned int set, unsigned int is,
					unsigned int pid,
					unsigned int ric, unsigned int prs)
{
···
	else
		WARN(1, "%s called on pre-POWER9 CPU\n", __func__);

- 	asm volatile(PPC_INVALIDATE_ERAT "; isync" : : :"memory");
+ 	asm volatile(PPC_ISA_3_0_INVALIDATE_ERAT "; isync" : : :"memory");
}

static __always_inline void __tlbiel_pid(unsigned long pid, int set,
···
	trace_tlbie(lpid, 0, rb, rs, ric, prs, r);
}

- static inline void __tlbiel_lpid_guest(unsigned long lpid, int set,
- 				unsigned long ric)
+ static __always_inline void __tlbiel_lpid_guest(unsigned long lpid, int set,
+ 				unsigned long ric)
{
	unsigned long rb,rs,prs,r;
···
}


- static inline void __tlbiel_va(unsigned long va, unsigned long pid,
- 			unsigned long ap, unsigned long ric)
+ static __always_inline void __tlbiel_va(unsigned long va, unsigned long pid,
+ 			unsigned long ap, unsigned long ric)
{
	unsigned long rb,rs,prs,r;
···
	trace_tlbie(0, 1, rb, rs, ric, prs, r);
}

- static inline void __tlbie_va(unsigned long va, unsigned long pid,
- 		unsigned long ap, unsigned long ric)
+ static __always_inline void __tlbie_va(unsigned long va, unsigned long pid,
+ 		unsigned long ap, unsigned long ric)
{
	unsigned long rb,rs,prs,r;
···
	trace_tlbie(0, 0, rb, rs, ric, prs, r);
}

- static inline void __tlbie_lpid_va(unsigned long va, unsigned long lpid,
- 			unsigned long ap, unsigned long ric)
+ static __always_inline void __tlbie_lpid_va(unsigned long va, unsigned long lpid,
+ 			unsigned long ap, unsigned long ric)
{
	unsigned long rb,rs,prs,r;
···
/*
 * We use 128 set in radix mode and 256 set in hpt mode.
 */
- static inline void _tlbiel_pid(unsigned long pid, unsigned long ric)
+ static __always_inline void _tlbiel_pid(unsigned long pid, unsigned long ric)
{
	int set;
···
		__tlbiel_pid(pid, set, RIC_FLUSH_TLB);

	asm volatile("ptesync": : :"memory");
- 	asm volatile(PPC_INVALIDATE_ERAT "; isync" : : :"memory");
+ 	asm volatile(PPC_RADIX_INVALIDATE_ERAT_USER "; isync" : : :"memory");
}

static inline void _tlbie_pid(unsigned long pid, unsigned long ric)
···
		__tlbiel_lpid(lpid, set, RIC_FLUSH_TLB);

	asm volatile("ptesync": : :"memory");
- 	asm volatile(PPC_INVALIDATE_ERAT "; isync" : : :"memory");
+ 	asm volatile(PPC_RADIX_INVALIDATE_ERAT_GUEST "; isync" : : :"memory");
}

static inline void _tlbie_lpid(unsigned long lpid, unsigned long ric)
···
	asm volatile("eieio; tlbsync; ptesync": : :"memory");
}

- static inline void _tlbiel_lpid_guest(unsigned long lpid, unsigned long ric)
+ static __always_inline void _tlbiel_lpid_guest(unsigned long lpid, unsigned long ric)
{
	int set;
···
		__tlbiel_lpid_guest(lpid, set, RIC_FLUSH_TLB);

	asm volatile("ptesync": : :"memory");
- 	asm volatile(PPC_INVALIDATE_ERAT : : :"memory");
+ 	asm volatile(PPC_RADIX_INVALIDATE_ERAT_GUEST : : :"memory");
}


···
		__tlbiel_va(addr, pid, ap, RIC_FLUSH_TLB);
}

- static inline void _tlbiel_va(unsigned long va, unsigned long pid,
- 		       unsigned long psize, unsigned long ric)
+ static __always_inline void _tlbiel_va(unsigned long va, unsigned long pid,
+ 		       unsigned long psize, unsigned long ric)
{
	unsigned long ap = mmu_get_ap(psize);
···
		__tlbie_va(addr, pid, ap, RIC_FLUSH_TLB);
}

- static inline void _tlbie_va(unsigned long va, unsigned long pid,
- 		      unsigned long psize, unsigned long ric)
+ static __always_inline void _tlbie_va(unsigned long va, unsigned long pid,
+ 		      unsigned long psize, unsigned long ric)
{
	unsigned long ap = mmu_get_ap(psize);
···
	asm volatile("eieio; tlbsync; ptesync": : :"memory");
}

- static inline void _tlbie_lpid_va(unsigned long va, unsigned long lpid,
+ static __always_inline void _tlbie_lpid_va(unsigned long va, unsigned long lpid,
			unsigned long psize, unsigned long ric)
{
	unsigned long ap = mmu_get_ap(psize);
+18 -2
arch/powerpc/mm/book3s64/vphn.c → arch/powerpc/platforms/pseries/vphn.c
···
// SPDX-License-Identifier: GPL-2.0
#include <asm/byteorder.h>
- #include "vphn.h"
+ #include <asm/lppaca.h>

/*
 * The associativity domain numbers are returned from the hypervisor as a
···
 *
 * Convert to the sequence they would appear in the ibm,associativity property.
 */
- int vphn_unpack_associativity(const long *packed, __be32 *unpacked)
+ static int vphn_unpack_associativity(const long *packed, __be32 *unpacked)
{
	__be64 be_packed[VPHN_REGISTER_COUNT];
	int i, nr_assoc_doms = 0;
···

	return nr_assoc_doms;
}
+ 
+ /* NOTE: This file is included by a selftest and built in userspace. */
+ #ifdef __KERNEL__
+ #include <asm/hvcall.h>
+ 
+ long hcall_vphn(unsigned long cpu, u64 flags, __be32 *associativity)
+ {
+ 	long rc;
+ 	long retbuf[PLPAR_HCALL9_BUFSIZE] = {0};
+ 
+ 	rc = plpar_hcall9(H_HOME_NODE_ASSOCIATIVITY, retbuf, flags, cpu);
+ 	vphn_unpack_associativity(retbuf, associativity);
+ 
+ 	return rc;
+ }
+ #endif
-16
arch/powerpc/mm/book3s64/vphn.h
···
- /* SPDX-License-Identifier: GPL-2.0 */
- #ifndef _ARCH_POWERPC_MM_VPHN_H_
- #define _ARCH_POWERPC_MM_VPHN_H_
- 
- /* The H_HOME_NODE_ASSOCIATIVITY h_call returns 6 64-bit registers. */
- #define VPHN_REGISTER_COUNT 6
- 
- /*
-  * 6 64-bit registers unpacked into up to 24 be32 associativity values. To
-  * form the complete property we have to add the length in the first cell.
-  */
- #define VPHN_ASSOC_BUFSIZE (VPHN_REGISTER_COUNT*sizeof(u64)/sizeof(u16) + 1)
- 
- extern int vphn_unpack_associativity(const long *packed, __be32 *unpacked);
- 
- #endif
+22 -3
arch/powerpc/mm/hugetlbpage.c
···
		num_hugepd = 1;
	}

+ 	if (!cachep) {
+ 		WARN_ONCE(1, "No page table cache created for hugetlb tables");
+ 		return -ENOMEM;
+ 	}
+ 
	new = kmem_cache_alloc(cachep, pgtable_gfp_flags(mm, GFP_KERNEL));

	BUG_ON(pshift > HUGEPD_SHIFT_MASK);
	BUG_ON((unsigned long)new & HUGEPD_SHIFT_MASK);

- 	if (! new)
+ 	if (!new)
		return -ENOMEM;

	/*
···
	} else {
		pdshift = PUD_SHIFT;
		pu = pud_alloc(mm, pg, addr);
+ 		if (!pu)
+ 			return NULL;
		if (pshift == PUD_SHIFT)
			return (pte_t *)pu;
		else if (pshift > PMD_SHIFT) {
···
		} else {
			pdshift = PMD_SHIFT;
			pm = pmd_alloc(mm, pu, addr);
+ 			if (!pm)
+ 				return NULL;
			if (pshift == PMD_SHIFT)
				/* 16MB hugepage */
				return (pte_t *)pm;
···
	} else {
		pdshift = PUD_SHIFT;
		pu = pud_alloc(mm, pg, addr);
+ 		if (!pu)
+ 			return NULL;
		if (pshift >= PUD_SHIFT) {
			ptl = pud_lockptr(mm, pu);
			hpdp = (hugepd_t *)pu;
		} else {
			pdshift = PMD_SHIFT;
			pm = pmd_alloc(mm, pu, addr);
+ 			if (!pm)
+ 				return NULL;
			ptl = pmd_lockptr(mm, pm);
			hpdp = (hugepd_t *)pm;
		}
···

static int __init hugetlbpage_init(void)
{
+ 	bool configured = false;
	int psize;

	if (hugetlb_disabled) {
···
			pgtable_cache_add(pdshift - shift);
		else if (IS_ENABLED(CONFIG_PPC_FSL_BOOK3E) || IS_ENABLED(CONFIG_PPC_8xx))
			pgtable_cache_add(PTE_T_ORDER);
+ 
+ 		configured = true;
	}

- 	if (IS_ENABLED(CONFIG_HUGETLB_PAGE_SIZE_VARIABLE))
- 		hugetlbpage_init_default();
+ 	if (configured) {
+ 		if (IS_ENABLED(CONFIG_HUGETLB_PAGE_SIZE_VARIABLE))
+ 			hugetlbpage_init_default();
+ 	} else
+ 		pr_info("Failed to initialize. Disabling HugeTLB");

	return 0;
}
+4 -1
arch/powerpc/mm/init_64.c
···
	 * fail due to alignment issues when using 16MB hugepages, so
	 * fall back to system memory if the altmap allocation fail.
	 */
- 	if (altmap)
+ 	if (altmap) {
		p = altmap_alloc_block_buf(page_size, altmap);
+ 		if (!p)
+ 			pr_debug("altmap block allocation failed, falling back to system memory");
+ 	}
	if (!p)
		p = vmemmap_alloc_block_buf(page_size, node);
	if (!p)
+2 -2
arch/powerpc/mm/mem.c
···
			start, start + size, rc);
		return -EFAULT;
	}
- 	flush_inval_dcache_range(start, start + size);
+ 	flush_dcache_range(start, start + size);

	return __add_pages(nid, start_pfn, nr_pages, restrictions);
}
···

	/* Remove htab bolted mappings for this section of memory */
	start = (unsigned long)__va(start);
- 	flush_inval_dcache_range(start, start + size);
+ 	flush_dcache_range(start, start + size);
	ret = remove_section_mapping(start, start + size);
	WARN_ON_ONCE(ret);

+35 -26
arch/powerpc/mm/numa.c
···
}
#endif /* CONFIG_HOTPLUG_CPU || CONFIG_PPC_SPLPAR */

+ int cpu_distance(__be32 *cpu1_assoc, __be32 *cpu2_assoc)
+ {
+ 	int dist = 0;
+ 
+ 	int i, index;
+ 
+ 	for (i = 0; i < distance_ref_points_depth; i++) {
+ 		index = be32_to_cpu(distance_ref_points[i]);
+ 		if (cpu1_assoc[index] == cpu2_assoc[index])
+ 			break;
+ 		dist++;
+ 	}
+ 
+ 	return dist;
+ }
+ 
/* must hold reference to node during call */
static const __be32 *of_get_associativity(struct device_node *dev)
{
···
{
	int nid = NUMA_NO_NODE;

- 	if (min_common_depth == -1)
+ 	if (!numa_enabled)
		goto out;

	if (of_read_number(associativity, 1) >= min_common_depth)
···
static int of_drconf_to_nid_single(struct drmem_lmb *lmb)
{
	struct assoc_arrays aa = { .arrays = NULL };
- 	int default_nid = 0;
+ 	int default_nid = NUMA_NO_NODE;
	int nid = default_nid;
	int rc, index;

+ 	if ((min_common_depth < 0) || !numa_enabled)
+ 		return default_nid;
+ 
	rc = of_get_assoc_arrays(&aa);
	if (rc)
		return default_nid;

- 	if (min_common_depth > 0 && min_common_depth <= aa.array_sz &&
- 	    !(lmb->flags & DRCONF_MEM_AI_INVALID) &&
- 	    lmb->aa_index < aa.n_arrays) {
+ 	if (min_common_depth <= aa.array_sz &&
+ 	    !(lmb->flags & DRCONF_MEM_AI_INVALID) && lmb->aa_index < aa.n_arrays) {
		index = lmb->aa_index * aa.array_sz + min_common_depth - 1;
		nid = of_read_number(&aa.arrays[index], 1);
···

	min_common_depth = find_min_common_depth();

- 	if (min_common_depth < 0)
+ 	if (min_common_depth < 0) {
+ 		/*
+ 		 * if we fail to parse min_common_depth from device tree
+ 		 * mark the numa disabled, boot with numa disabled.
+ 		 */
+ 		numa_enabled = false;
		return min_common_depth;
+ 	}

	dbg("NUMA associativity depth for CPU/Memory: %d\n", min_common_depth);
···
	unsigned int node;
	unsigned int cpu, count;

- 	if (min_common_depth == -1 || !numa_enabled)
+ 	if (!numa_enabled)
		return;

	for_each_online_node(node) {
···
	struct device_node *rtas;
	u32 numnodes, i;

- 	if (min_common_depth <= 0)
+ 	if (!numa_enabled)
		return;

	rtas = of_find_node_by_path("/rtas");
···
	struct device_node *memory = NULL;
	int nid;

- 	if (!numa_enabled || (min_common_depth < 0))
+ 	if (!numa_enabled)
		return first_online_node;

	memory = of_find_node_by_path("/ibm,dynamic-reconfiguration-memory");
···

/* Virtual Processor Home Node (VPHN) support */
#ifdef CONFIG_PPC_SPLPAR
- 
- #include "book3s64/vphn.h"
- 
struct topology_update_data {
	struct topology_update_data *next;
	unsigned int cpu;
···
 * Retrieve the new associativity information for a virtual processor's
 * home node.
 */
- static long hcall_vphn(unsigned long cpu, __be32 *associativity)
- {
- 	long rc;
- 	long retbuf[PLPAR_HCALL9_BUFSIZE] = {0};
- 	u64 flags = 1;
- 	int hwcpu = get_hard_smp_processor_id(cpu);
- 
- 	rc = plpar_hcall9(H_HOME_NODE_ASSOCIATIVITY, retbuf, flags, hwcpu);
- 	vphn_unpack_associativity(retbuf, associativity);
- 
- 	return rc;
- }
- 
static long vphn_get_associativity(unsigned long cpu,
					__be32 *associativity)
{
	long rc;

- 	rc = hcall_vphn(cpu, associativity);
+ 	rc = hcall_vphn(get_hard_smp_processor_id(cpu),
+ 			VPHN_FLAG_VCPU, associativity);

	switch (rc) {
	case H_FUNCTION:
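The new `cpu_distance()` helper above counts the number of associativity reference points, most specific level first, at which two CPUs' domain numbers differ, stopping at the first level where they match. A userspace sketch of the same idea (the indirection through `distance_ref_points` is simplified away here; level `i` compares element `i` directly, which is not the real device-tree layout):

```c
/*
 * Sketch of the cpu_distance() logic: walk the associativity levels
 * from most specific to least specific, counting levels at which the
 * two CPUs fall into different domains; stop at the first match, since
 * all broader levels are then shared too.
 */
static int cpu_distance_sketch(const unsigned int *cpu1_assoc,
			       const unsigned int *cpu2_assoc, int depth)
{
	int dist = 0, i;

	for (i = 0; i < depth; i++) {
		if (cpu1_assoc[i] == cpu2_assoc[i])
			break;		/* domains converge at this level */
		dist++;
	}
	return dist;
}
```

Two CPUs with identical associativity get distance 0; CPUs that only share the broadest domains get a distance close to the reference-point depth.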
+8 -8
arch/powerpc/mm/pgtable.c
···
	if (pgd_none(pgd))
		return NULL;

- 	if (pgd_huge(pgd)) {
+ 	if (pgd_is_leaf(pgd)) {
		ret_pte = (pte_t *)pgdp;
		goto out;
	}
+ 
	if (is_hugepd(__hugepd(pgd_val(pgd)))) {
		hpdp = (hugepd_t *)&pgd;
		goto out_huge;
···
	if (pud_none(pud))
		return NULL;

- 	if (pud_huge(pud)) {
+ 	if (pud_is_leaf(pud)) {
		ret_pte = (pte_t *)pudp;
		goto out;
	}
+ 
	if (is_hugepd(__hugepd(pud_val(pud)))) {
		hpdp = (hugepd_t *)&pud;
		goto out_huge;
	}
+ 
	pdshift = PMD_SHIFT;
	pmdp = pmd_offset(&pud, ea);
	pmd = READ_ONCE(*pmdp);
···
		ret_pte = (pte_t *)pmdp;
		goto out;
	}
- 	/*
- 	 * pmd_large check below will handle the swap pmd pte
- 	 * we need to do both the check because they are config
- 	 * dependent.
- 	 */
- 	if (pmd_huge(pmd) || pmd_large(pmd)) {
+ 
+ 	if (pmd_is_leaf(pmd)) {
		ret_pte = (pte_t *)pmdp;
		goto out;
	}
+ 
	if (is_hugepd(__hugepd(pmd_val(pmd)))) {
		hpdp = (hugepd_t *)&pmd;
		goto out_huge;
+1 -1
arch/powerpc/mm/pgtable_32.c
···
	unsigned long numpages = PFN_UP((unsigned long)_einittext) -
				 PFN_DOWN((unsigned long)_sinittext);

- 	if (v_block_mapped((unsigned long)_stext) + 1)
+ 	if (v_block_mapped((unsigned long)_stext + 1))
		mmu_mark_initmem_nx();
	else
		change_page_attr(page, numpages, PAGE_KERNEL);
+29 -10
arch/powerpc/mm/pgtable_64.c
···
unsigned long ioremap_bot = IOREMAP_BASE;
#endif

+ int __weak ioremap_range(unsigned long ea, phys_addr_t pa, unsigned long size, pgprot_t prot, int nid)
+ {
+ 	unsigned long i;
+ 
+ 	for (i = 0; i < size; i += PAGE_SIZE) {
+ 		int err = map_kernel_page(ea + i, pa + i, prot);
+ 		if (err) {
+ 			if (slab_is_available())
+ 				unmap_kernel_range(ea, size);
+ 			else
+ 				WARN_ON_ONCE(1); /* Should clean up */
+ 			return err;
+ 		}
+ 	}
+ 
+ 	return 0;
+ }
+ 
/**
 * __ioremap_at - Low level function to establish the page tables
 * for an IO mapping
 */
void __iomem *__ioremap_at(phys_addr_t pa, void *ea, unsigned long size, pgprot_t prot)
{
- 	unsigned long i;
- 
	/* We don't support the 4K PFN hack with ioremap */
	if (pgprot_val(prot) & H_PAGE_4K_PFN)
		return NULL;
···
	WARN_ON(((unsigned long)ea) & ~PAGE_MASK);
	WARN_ON(size & ~PAGE_MASK);

- 	for (i = 0; i < size; i += PAGE_SIZE)
- 		if (map_kernel_page((unsigned long)ea + i, pa + i, prot))
- 			return NULL;
+ 	if (ioremap_range((unsigned long)ea, pa, size, prot, NUMA_NO_NODE))
+ 		return NULL;

	return (void __iomem *)ea;
}
···

		area->phys_addr = paligned;
		ret = __ioremap_at(paligned, area->addr, size, prot);
- 		if (!ret)
- 			vunmap(area->addr);
	} else {
		ret = __ioremap_at(paligned, (void *)ioremap_bot, size, prot);
		if (ret)
···
/* 4 level page table */
struct page *pgd_page(pgd_t pgd)
{
- 	if (pgd_huge(pgd))
+ 	if (pgd_is_leaf(pgd)) {
+ 		VM_WARN_ON(!pgd_huge(pgd));
		return pte_page(pgd_pte(pgd));
+ 	}
	return virt_to_page(pgd_page_vaddr(pgd));
}
#endif

struct page *pud_page(pud_t pud)
{
- 	if (pud_huge(pud))
+ 	if (pud_is_leaf(pud)) {
+ 		VM_WARN_ON(!pud_huge(pud));
		return pte_page(pud_pte(pud));
+ 	}
	return virt_to_page(pud_page_vaddr(pud));
}

···
 */
struct page *pmd_page(pmd_t pmd)
{
- 	if (pmd_large(pmd) || pmd_huge(pmd) || pmd_devmap(pmd))
+ 	if (pmd_is_leaf(pmd)) {
+ 		VM_WARN_ON(!(pmd_large(pmd) || pmd_huge(pmd)));
		return pte_page(pmd_pte(pmd));
+ 	}
	return virt_to_page(pmd_page_vaddr(pmd));
}

+3 -3
arch/powerpc/mm/ptdump/ptdump.c
···

	for (i = 0; i < PTRS_PER_PMD; i++, pmd++) {
		addr = start + i * PMD_SIZE;
- 		if (!pmd_none(*pmd) && !pmd_huge(*pmd))
+ 		if (!pmd_none(*pmd) && !pmd_is_leaf(*pmd))
			/* pmd exists */
			walk_pte(st, pmd, addr);
		else
···

	for (i = 0; i < PTRS_PER_PUD; i++, pud++) {
		addr = start + i * PUD_SIZE;
- 		if (!pud_none(*pud) && !pud_huge(*pud))
+ 		if (!pud_none(*pud) && !pud_is_leaf(*pud))
			/* pud exists */
			walk_pmd(st, pud, addr);
		else
···
	 * the hash pagetable.
	 */
	for (i = 0; i < PTRS_PER_PGD; i++, pgd++, addr += PGDIR_SIZE) {
- 		if (!pgd_none(*pgd) && !pgd_huge(*pgd))
+ 		if (!pgd_none(*pgd) && !pgd_is_leaf(*pgd))
			/* pgd exists */
			walk_pud(st, pgd, addr);
		else
+1 -1
arch/powerpc/perf/hv-24x7.c
···
	struct event_uniq *it;
	int result;

- 	it = container_of(*new, struct event_uniq, node);
+ 	it = rb_entry(*new, struct event_uniq, node);
	result = ev_uniq_ord(name, nl, domain, it->name, it->nl,
			     it->domain);

+12 -2
arch/powerpc/perf/imc-pmu.c
···
	 */
	nid = cpu_to_node(cpu);
	l_cpumask = cpumask_of_node(nid);
- 	target = cpumask_any_but(l_cpumask, cpu);
+ 	target = cpumask_last(l_cpumask);
+ 
+ 	/*
+ 	 * If this(target) is the last cpu in the cpumask for this chip,
+ 	 * check for any possible online cpu in the chip.
+ 	 */
+ 	if (unlikely(target == cpu))
+ 		target = cpumask_any_but(l_cpumask, cpu);

	/*
	 * Update the cpumask with the target cpu and
···
		return 0;

	/* Find any online cpu in that core except the current "cpu" */
- 	ncpu = cpumask_any_but(cpu_sibling_mask(cpu), cpu);
+ 	ncpu = cpumask_last(cpu_sibling_mask(cpu));
+ 
+ 	if (unlikely(ncpu == cpu))
+ 		ncpu = cpumask_any_but(cpu_sibling_mask(cpu), cpu);

	if (ncpu >= 0 && ncpu < nr_cpu_ids) {
		cpumask_set_cpu(ncpu, &core_imc_cpumask);
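The imc-pmu change above switches the hotplug migration target to `cpumask_last()` and only falls back to `cpumask_any_but()` when the last CPU in the mask happens to be the CPU going offline. A small sketch of that selection order over a plain 64-bit mask (`mask_last()`/`mask_any_but()` are simplified stand-ins for the kernel's cpumask helpers, not the real API):

```c
#include <stdint.h>

#define NBITS 64

/* Highest set bit in the mask, or -1 if the mask is empty. */
static int mask_last(uint64_t mask)
{
	int i;

	for (i = NBITS - 1; i >= 0; i--)
		if (mask & (1ULL << i))
			return i;
	return -1;
}

/* Any set bit other than 'cpu', or -1 if none exists. */
static int mask_any_but(uint64_t mask, int cpu)
{
	int i;

	for (i = 0; i < NBITS; i++)
		if (i != cpu && (mask & (1ULL << i)))
			return i;
	return -1;
}

/*
 * Pick a migration target for the offlining 'cpu', mirroring the patch:
 * prefer the last CPU in the node/core mask, and only scan for any other
 * CPU when that last one is the CPU being removed.
 */
static int pick_target(uint64_t node_mask, int cpu)
{
	int target = mask_last(node_mask);

	if (target == cpu)
		target = mask_any_but(node_mask, cpu);
	return target;
}
```

If the mask contains only the offlining CPU, the result is -1, matching the `ncpu >= 0 && ncpu < nr_cpu_ids` guard in the kernel code.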
+3 -4
arch/powerpc/platforms/40x/Kconfig
···
	  This option enables support for the EP405/EP405PC boards.

config HOTFOOT
- 	bool "Hotfoot"
+ 	bool "Hotfoot"
	depends on 40x
	select PPC40x_SIMPLE
	select FORCE_PCI
- 	help
- 	This option enables support for the ESTEEM 195E Hotfoot board.
+ 	help
+ 	  This option enables support for the ESTEEM 195E Hotfoot board.

config KILAUEA
	bool "Kilauea"
···
	select PPC40x_SIMPLE
	help
	  This option enables support for PlatHome OpenBlockS 600 server
- 

config PPC40x_SIMPLE
	bool "Simple PowerPC 40x board support"
+5 -5
arch/powerpc/platforms/44x/Kconfig
···
	  This option enables support for the IBM PPC440GP evaluation board.

config SAM440EP
- 	bool "Sam440ep"
+ 	bool "Sam440ep"
	depends on 44x
- 	select 440EP
- 	select FORCE_PCI
- 	help
- 	This option enables support for the ACube Sam440ep board.
+ 	select 440EP
+ 	select FORCE_PCI
+ 	help
+ 	  This option enables support for the ACube Sam440ep board.

config SEQUOIA
	bool "Sequoia"
+1
arch/powerpc/platforms/4xx/uic.c
···

	mtdcr(uic->dcrbase + UIC_PR, pr);
	mtdcr(uic->dcrbase + UIC_TR, tr);
+ 	mtdcr(uic->dcrbase + UIC_SR, ~mask);

	raw_spin_unlock_irqrestore(&uic->lock, flags);

+4 -4
arch/powerpc/platforms/85xx/Kconfig
···
	  This option enables support for the Socrates board.

config KSI8560
- 	bool "Emerson KSI8560"
- 	select DEFAULT_UIMAGE
- 	help
- 	This option enables support for the Emerson KSI8560 board
+ 	bool "Emerson KSI8560"
+ 	select DEFAULT_UIMAGE
+ 	help
+ 	  This option enables support for the Emerson KSI8560 board

config XES_MPC85xx
	bool "X-ES single-board computer"
+3 -3
arch/powerpc/platforms/86xx/Kconfig
···
	  This option enables support for the GE SBC610.

config MVME7100
- 	bool "Artesyn MVME7100"
- 	help
- 	This option enables support for the Emerson/Artesyn MVME7100 board.
+ 	bool "Artesyn MVME7100"
+ 	help
+ 	  This option enables support for the Emerson/Artesyn MVME7100 board.

endif

+7
arch/powerpc/platforms/8xx/Kconfig
···
	help
	  Help not implemented yet, coming soon.

+ config SMC_UCODE_PATCH
+ 	bool "SMC relocation patch"
+ 	help
+ 	  This microcode relocates SMC1 and SMC2 parameter RAMs at
+ 	  offset 0x1ec0 and 0x1fc0 to allow extended parameter RAM
+ 	  for SCC3 and SCC4.
+ 
endchoice

config UCODE_PATCH
+2
arch/powerpc/platforms/8xx/Makefile
···
# Makefile for the PowerPC 8xx linux kernel.
#
obj-y += m8xx_setup.o machine_check.o pic.o
+ obj-$(CONFIG_CPM1) += cpm1.o
+ obj-$(CONFIG_UCODE_PATCH) += micropatch.o
obj-$(CONFIG_MPC885ADS) += mpc885ads_setup.o
obj-$(CONFIG_MPC86XADS) += mpc86xads_setup.o
obj-$(CONFIG_PPC_EP88XC) += ep88xc.o
+378
arch/powerpc/platforms/8xx/micropatch.c
···
+ // SPDX-License-Identifier: GPL-2.0
+ 
+ /*
+  * Microcode patches for the CPM as supplied by Motorola.
+  * This is the one for IIC/SPI. There is a newer one that
+  * also relocates SMC2, but this would require additional changes
+  * to uart.c, so I am holding off on that for a moment.
+  */
+ #include <linux/init.h>
+ #include <linux/errno.h>
+ #include <linux/sched.h>
+ #include <linux/kernel.h>
+ #include <linux/param.h>
+ #include <linux/string.h>
+ #include <linux/mm.h>
+ #include <linux/interrupt.h>
+ #include <asm/irq.h>
+ #include <asm/page.h>
+ #include <asm/pgtable.h>
+ #include <asm/8xx_immap.h>
+ #include <asm/cpm.h>
+ #include <asm/cpm1.h>
+ 
+ struct patch_params {
+ 	ushort rccr;
+ 	ushort cpmcr1;
+ 	ushort cpmcr2;
+ 	ushort cpmcr3;
+ 	ushort cpmcr4;
+ };
+ 
+ /*
+  * I2C/SPI relocation patch arrays.
+  */
+ 
+ #ifdef CONFIG_I2C_SPI_UCODE_PATCH
+ 
+ static char patch_name[] __initdata = "I2C/SPI";
+ 
+ static struct patch_params patch_params __initdata = {
+ 	1, 0x802a, 0x8028, 0x802e, 0x802c,
+ };
+ 
+ static uint patch_2000[] __initdata = {
+ 	0x7FFFEFD9, 0x3FFD0000, 0x7FFB49F7, 0x7FF90000,
+ 	0x5FEFADF7, 0x5F89ADF7, 0x5FEFAFF7, 0x5F89AFF7,
+ 	0x3A9CFBC8, 0xE7C0EDF0, 0x77C1E1BB, 0xF4DC7F1D,
+ 	0xABAD932F, 0x4E08FDCF, 0x6E0FAFF8, 0x7CCF76CF,
+ 	0xFD1FF9CF, 0xABF88DC6, 0xAB5679F7, 0xB0937383,
+ 	0xDFCE79F7, 0xB091E6BB, 0xE5BBE74F, 0xB3FA6F0F,
+ 	0x6FFB76CE, 0xEE0DF9CF, 0x2BFBEFEF, 0xCFEEF9CF,
+ 	0x76CEAD24, 0x90B2DF9A, 0x7FDDD0BF, 0x4BF847FD,
+ 	0x7CCF76CE, 0xCFEF7E1F, 0x7F1D7DFD, 0xF0B6EF71,
+ 	0x7FC177C1, 0xFBC86079, 0xE722FBC8, 0x5FFFDFFF,
+ 	0x5FB2FFFB, 0xFBC8F3C8, 0x94A67F01, 0x7F1D5F39,
+ 	0xAFE85F5E, 0xFFDFDF96, 0xCB9FAF7D, 0x5FC1AFED,
+ 	0x8C1C5FC1, 0xAFDD5FC3, 0xDF9A7EFD, 0xB0B25FB2,
+ 	0xFFFEABAD, 0x5FB2FFFE, 0x5FCE600B, 0xE6BB600B,
+ 	0x5FCEDFC6, 0x27FBEFDF, 0x5FC8CFDE, 0x3A9CE7C0,
+ 	0xEDF0F3C8, 0x7F0154CD, 0x7F1D2D3D, 0x363A7570,
+ 	0x7E0AF1CE, 0x37EF2E68, 0x7FEE10EC, 0xADF8EFDE,
+ 	0xCFEAE52F, 0x7D0FE12B, 0xF1CE5F65, 0x7E0A4DF8,
+ 	0xCFEA5F72, 0x7D0BEFEE, 0xCFEA5F74, 0xE522EFDE,
+ 	0x5F74CFDA, 0x0B627385, 0xDF627E0A, 0x30D8145B,
+ 	0xBFFFF3C8, 0x5FFFDFFF, 0xA7F85F5E, 0xBFFE7F7D,
+ 	0x10D31450, 0x5F36BFFF, 0xAF785F5E, 0xBFFDA7F8,
+ 	0x5F36BFFE, 0x77FD30C0, 0x4E08FDCF, 0xE5FF6E0F,
+ 	0xAFF87E1F, 0x7E0FFD1F, 0xF1CF5F1B, 0xABF80D5E,
+ 	0x5F5EFFEF, 0x79F730A2, 0xAFDD5F34, 0x47F85F34,
+ 	0xAFED7FDD, 0x50B24978, 0x47FD7F1D, 0x7DFD70AD,
+ 	0xEF717EC1, 0x6BA47F01, 0x2D267EFD, 0x30DE5F5E,
+ 	0xFFFD5F5E, 0xFFEF5F5E, 0xFFDF0CA0, 0xAFED0A9E,
+ 	0xAFDD0C3A, 0x5F3AAFBD, 0x7FBDB082, 0x5F8247F8
+ };
+ 
+ static uint patch_2f00[] __initdata = {
+ 	0x3E303430, 0x34343737, 0xABF7BF9B, 0x994B4FBD,
+ 	0xBD599493, 0x349FFF37, 0xFB9B177D, 0xD9936956,
+ 	0xBBFDD697, 0xBDD2FD11, 0x31DB9BB3, 0x63139637,
+ 	0x93733693, 0x193137F7, 0x331737AF, 0x7BB9B999,
+ 	0xBB197957, 0x7FDFD3D5, 0x73B773F7, 0x37933B99,
+ 	0x1D115316, 0x99315315, 0x31694BF4, 0xFBDBD359,
+ 	0x31497353, 0x76956D69, 0x7B9D9693, 0x13131979,
+ 	0x79376935
+ };
+ 
+ static uint patch_2e00[] __initdata = {};
+ #endif
+ 
+ /*
+  * I2C/SPI/SMC1 relocation patch arrays.
+  */
+ 
+ #ifdef CONFIG_I2C_SPI_SMC1_UCODE_PATCH
+ 
+ static char patch_name[] __initdata = "I2C/SPI/SMC1";
+ 
+ static struct patch_params patch_params __initdata = {
+ 	3, 0x8080, 0x808a, 0x8028, 0x802a,
+ };
+ 
+ static uint patch_2000[] __initdata = {
+ 	0x3fff0000, 0x3ffd0000, 0x3ffb0000, 0x3ff90000,
+ 	0x5f13eff8, 0x5eb5eff8, 0x5f88adf7, 0x5fefadf7,
+ 	0x3a9cfbc8, 0x77cae1bb, 0xf4de7fad, 0xabae9330,
+ 	0x4e08fdcf, 0x6e0faff8, 0x7ccf76cf, 0xfdaff9cf,
+ 	0xabf88dc8, 0xab5879f7, 0xb0925d8d, 0xdfd079f7,
+ 	0xb090e6bb, 0xe5bbe74f, 0x9e046f0f, 0x6ffb76ce,
+ 	0xee0cf9cf, 0x2bfbefef, 0xcfeef9cf, 0x76cead23,
+ 	0x90b3df99, 0x7fddd0c1, 0x4bf847fd, 0x7ccf76ce,
+ 	0xcfef77ca, 0x7eaf7fad, 0x7dfdf0b7, 0xef7a7fca,
+ 	0x77cafbc8, 0x6079e722, 0xfbc85fff, 0xdfff5fb3,
+ 	0xfffbfbc8, 0xf3c894a5, 0xe7c9edf9, 0x7f9a7fad,
+ 	0x5f36afe8, 0x5f5bffdf, 0xdf95cb9e, 0xaf7d5fc3,
+ 	0xafed8c1b, 0x5fc3afdd, 0x5fc5df99, 0x7efdb0b3,
+ 	0x5fb3fffe, 0xabae5fb3, 0xfffe5fd0, 0x600be6bb,
+ 	0x600b5fd0, 0xdfc827fb, 0xefdf5fca, 0xcfde3a9c,
+ 	0xe7c9edf9, 0xf3c87f9e, 0x54ca7fed, 0x2d3a3637,
+ 	0x756f7e9a, 0xf1ce37ef, 0x2e677fee, 0x10ebadf8,
+ 	0xefdecfea, 0xe52f7d9f, 0xe12bf1ce, 0x5f647e9a,
+ 	0x4df8cfea, 0x5f717d9b, 0xefeecfea, 0x5f73e522,
+ 	0xefde5f73, 0xcfda0b61, 0x5d8fdf61, 0xe7c9edf9,
+ 	0x7e9a30d5, 0x1458bfff, 0xf3c85fff, 0xdfffa7f8,
+ 	0x5f5bbffe, 0x7f7d10d0, 0x144d5f33, 0xbfffaf78,
+ 	0x5f5bbffd, 0xa7f85f33, 0xbffe77fd, 0x30bd4e08,
+ 	0xfdcfe5ff, 0x6e0faff8, 0x7eef7e9f, 0xfdeff1cf,
+ 	0x5f17abf8, 0x0d5b5f5b, 0xffef79f7, 0x309eafdd,
+ 	0x5f3147f8, 0x5f31afed, 0x7fdd50af, 0x497847fd,
+ 	0x7f9e7fed, 0x7dfd70a9, 0xef7e7ece, 0x6ba07f9e,
+ 	0x2d227efd, 0x30db5f5b, 0xfffd5f5b, 0xffef5f5b,
+ 	0xffdf0c9c, 0xafed0a9a, 0xafdd0c37, 0x5f37afbd,
+ 	0x7fbdb081, 0x5f8147f8, 0x3a11e710, 0xedf0ccdd,
+ 	0xf3186d0a, 0x7f0e5f06, 0x7fedbb38, 0x3afe7468,
+ 	0x7fedf4fc, 0x8ffbb951, 0xb85f77fd, 0xb0df5ddd,
+ 	0xdefe7fed, 0x90e1e74d, 0x6f0dcbf7, 0xe7decfed,
+ 	0xcb74cfed, 0xcfeddf6d, 0x91714f74, 0x5dd2deef,
+ 	0x9e04e7df, 0xefbb6ffb, 0xe7ef7f0e, 0x9e097fed,
+ 	0xebdbeffa, 0xeb54affb, 0x7fea90d7, 0x7e0cf0c3,
+ 	0xbffff318, 0x5fffdfff, 0xac59efea, 0x7fce1ee5,
+ 	0xe2ff5ee1, 0xaffbe2ff, 0x5ee3affb, 0xf9cc7d0f,
+ 	0xaef8770f, 0x7d0fb0c6, 0xeffbbfff, 0xcfef5ede,
+ 	0x7d0fbfff, 0x5ede4cf8, 0x7fddd0bf, 0x49f847fd,
+ 	0x7efdf0bb, 0x7fedfffd, 0x7dfdf0b7, 0xef7e7e1e,
+ 	0x5ede7f0e, 0x3a11e710, 0xedf0ccab, 0xfb18ad2e,
+ 	0x1ea9bbb8, 0x74283b7e, 0x73c2e4bb, 0x2ada4fb8,
+ 	0xdc21e4bb, 0xb2a1ffbf, 0x5e2c43f8, 0xfc87e1bb,
+ 	0xe74ffd91, 0x6f0f4fe8, 0xc7ba32e2, 0xf396efeb,
+ 	0x600b4f78, 0xe5bb760b, 0x53acaef8, 0x4ef88b0e,
+ 	0xcfef9e09, 0xabf8751f, 0xefef5bac, 0x741f4fe8,
+ 	0x751e760d, 0x7fdbf081, 0x741cafce, 0xefcc7fce,
+ 	0x751e70ac, 0x741ce7bb, 0x3372cfed, 0xafdbefeb,
+ 	0xe5bb760b, 0x53f2aef8, 0xafe8e7eb, 0x4bf8771e,
+ 	0x7e247fed, 0x4fcbe2cc, 0x7fbc30a9, 0x7b0f7a0f,
+ 	0x34d577fd, 0x308b5db7, 0xde553e5f, 0xaf78741f,
+ 	0x741f30f0, 0xcfef5e2c, 0x741f3eac, 0xafb8771e,
+ 	0x5e677fed, 0x0bd3e2cc, 0x741ccfec, 0xe5ca53cd,
+ 	0x6fcb4f74, 0x5dadde4b, 0x2ab63d38, 0x4bb3de30,
+ 	0x751f741c, 0x6c42effa, 0xefea7fce, 0x6ffc30be,
+ 	0xefec3fca, 0x30b3de2e, 0xadf85d9e, 0xaf7daefd,
+ 	0x5d9ede2e, 0x5d9eafdd, 0x761f10ac, 0x1da07efd,
+ 	0x30adfffe, 0x4908fb18, 0x5fffdfff, 0xafbb709b,
+ 	0x4ef85e67, 0xadf814ad, 0x7a0f70ad, 0xcfef50ad,
+ 	0x7a0fde30, 0x5da0afed, 0x3c12780f, 0xefef780f,
+ 	0xefef790f, 0xa7f85e0f, 0xffef790f, 0xefef790f,
+ 	0x14adde2e, 0x5d9eadfd, 0x5e2dfffb, 0xe79addfd,
+ 	0xeff96079, 0x607ae79a, 0xddfceff9, 0x60795dff,
+ 	0x607acfef, 0xefefefdf, 0xefbfef7f, 0xeeffedff,
+ 	0xebffe7ff, 0xafefafdf, 0xafbfaf7f, 0xaeffadff,
+ 	0xabffa7ff, 0x6fef6fdf, 0x6fbf6f7f, 0x6eff6dff,
+ 	0x6bff67ff, 0x2fef2fdf, 0x2fbf2f7f, 0x2eff2dff,
+ 	0x2bff27ff, 0x4e08fd1f,
0xe5ff6e0f, 0xaff87eef, 172 + 0x7e0ffdef, 0xf11f6079, 0xabf8f542, 0x7e0af11c, 173 + 0x37cfae3a, 0x7fec90be, 0xadf8efdc, 0xcfeae52f, 174 + 0x7d0fe12b, 0xf11c6079, 0x7e0a4df8, 0xcfea5dc4, 175 + 0x7d0befec, 0xcfea5dc6, 0xe522efdc, 0x5dc6cfda, 176 + 0x4e08fd1f, 0x6e0faff8, 0x7c1f761f, 0xfdeff91f, 177 + 0x6079abf8, 0x761cee24, 0xf91f2bfb, 0xefefcfec, 178 + 0xf91f6079, 0x761c27fb, 0xefdf5da7, 0xcfdc7fdd, 179 + 0xd09c4bf8, 0x47fd7c1f, 0x761ccfcf, 0x7eef7fed, 180 + 0x7dfdf093, 0xef7e7f1e, 0x771efb18, 0x6079e722, 181 + 0xe6bbe5bb, 0xae0ae5bb, 0x600bae85, 0xe2bbe2bb, 182 + 0xe2bbe2bb, 0xaf02e2bb, 0xe2bb2ff9, 0x6079e2bb 183 + }; 184 + 185 + static uint patch_2f00[] __initdata = { 186 + 0x30303030, 0x3e3e3434, 0xabbf9b99, 0x4b4fbdbd, 187 + 0x59949334, 0x9fff37fb, 0x9b177dd9, 0x936956bb, 188 + 0xfbdd697b, 0xdd2fd113, 0x1db9f7bb, 0x36313963, 189 + 0x79373369, 0x3193137f, 0x7331737a, 0xf7bb9b99, 190 + 0x9bb19795, 0x77fdfd3d, 0x573b773f, 0x737933f7, 191 + 0xb991d115, 0x31699315, 0x31531694, 0xbf4fbdbd, 192 + 0x35931497, 0x35376956, 0xbd697b9d, 0x96931313, 193 + 0x19797937, 0x6935af78, 0xb9b3baa3, 0xb8788683, 194 + 0x368f78f7, 0x87778733, 0x3ffffb3b, 0x8e8f78b8, 195 + 0x1d118e13, 0xf3ff3f8b, 0x6bd8e173, 0xd1366856, 196 + 0x68d1687b, 0x3daf78b8, 0x3a3a3f87, 0x8f81378f, 197 + 0xf876f887, 0x77fd8778, 0x737de8d6, 0xbbf8bfff, 198 + 0xd8df87f7, 0xfd876f7b, 0x8bfff8bd, 0x8683387d, 199 + 0xb873d87b, 0x3b8fd7f8, 0xf7338883, 0xbb8ee1f8, 200 + 0xef837377, 0x3337b836, 0x817d11f8, 0x7378b878, 201 + 0xd3368b7d, 0xed731b7d, 0x833731f3, 0xf22f3f23 202 + }; 203 + 204 + static uint patch_2e00[] __initdata = { 205 + 0x27eeeeee, 0xeeeeeeee, 0xeeeeeeee, 0xeeeeeeee, 206 + 0xee4bf4fb, 0xdbd259bb, 0x1979577f, 0xdfd2d573, 207 + 0xb773f737, 0x4b4fbdbd, 0x25b9b177, 0xd2d17376, 208 + 0x956bbfdd, 0x697bdd2f, 0xff9f79ff, 0xff9ff22f 209 + }; 210 + #endif 211 + 212 + /* 213 + * USB SOF patch arrays. 
214 + */ 215 + 216 + #ifdef CONFIG_USB_SOF_UCODE_PATCH 217 + 218 + static char patch_name[] __initdata = "USB SOF"; 219 + 220 + static struct patch_params patch_params __initdata = { 221 + 9, 222 + }; 223 + 224 + static uint patch_2000[] __initdata = { 225 + 0x7fff0000, 0x7ffd0000, 0x7ffb0000, 0x49f7ba5b, 226 + 0xba383ffb, 0xf9b8b46d, 0xe5ab4e07, 0xaf77bffe, 227 + 0x3f7bbf79, 0xba5bba38, 0xe7676076, 0x60750000 228 + }; 229 + 230 + static uint patch_2f00[] __initdata = { 231 + 0x3030304c, 0xcab9e441, 0xa1aaf220 232 + }; 233 + 234 + static uint patch_2e00[] __initdata = {}; 235 + #endif 236 + 237 + /* 238 + * SMC relocation patch arrays. 239 + */ 240 + 241 + #ifdef CONFIG_SMC_UCODE_PATCH 242 + 243 + static char patch_name[] __initdata = "SMC"; 244 + 245 + static struct patch_params patch_params __initdata = { 246 + 2, 0x8080, 0x8088, 247 + }; 248 + 249 + static uint patch_2000[] __initdata = { 250 + 0x3fff0000, 0x3ffd0000, 0x3ffb0000, 0x3ff90000, 251 + 0x5fefeff8, 0x5f91eff8, 0x3ff30000, 0x3ff10000, 252 + 0x3a11e710, 0xedf0ccb9, 0xf318ed66, 0x7f0e5fe2, 253 + 0x7fedbb38, 0x3afe7468, 0x7fedf4d8, 0x8ffbb92d, 254 + 0xb83b77fd, 0xb0bb5eb9, 0xdfda7fed, 0x90bde74d, 255 + 0x6f0dcbd3, 0xe7decfed, 0xcb50cfed, 0xcfeddf6d, 256 + 0x914d4f74, 0x5eaedfcb, 0x9ee0e7df, 0xefbb6ffb, 257 + 0xe7ef7f0e, 0x9ee57fed, 0xebb7effa, 0xeb30affb, 258 + 0x7fea90b3, 0x7e0cf09f, 0xbffff318, 0x5fffdfff, 259 + 0xac35efea, 0x7fce1fc1, 0xe2ff5fbd, 0xaffbe2ff, 260 + 0x5fbfaffb, 0xf9a87d0f, 0xaef8770f, 0x7d0fb0a2, 261 + 0xeffbbfff, 0xcfef5fba, 0x7d0fbfff, 0x5fba4cf8, 262 + 0x7fddd09b, 0x49f847fd, 0x7efdf097, 0x7fedfffd, 263 + 0x7dfdf093, 0xef7e7e1e, 0x5fba7f0e, 0x3a11e710, 264 + 0xedf0cc87, 0xfb18ad0a, 0x1f85bbb8, 0x74283b7e, 265 + 0x7375e4bb, 0x2ab64fb8, 0x5c7de4bb, 0x32fdffbf, 266 + 0x5f0843f8, 0x7ce3e1bb, 0xe74f7ded, 0x6f0f4fe8, 267 + 0xc7ba32be, 0x73f2efeb, 0x600b4f78, 0xe5bb760b, 268 + 0x5388aef8, 0x4ef80b6a, 0xcfef9ee5, 0xabf8751f, 269 + 0xefef5b88, 0x741f4fe8, 0x751e760d, 0x7fdb70dd, 270 + 
0x741cafce, 0xefcc7fce, 0x751e7088, 0x741ce7bb, 271 + 0x334ecfed, 0xafdbefeb, 0xe5bb760b, 0x53ceaef8, 272 + 0xafe8e7eb, 0x4bf8771e, 0x7e007fed, 0x4fcbe2cc, 273 + 0x7fbc3085, 0x7b0f7a0f, 0x34b177fd, 0xb0e75e93, 274 + 0xdf313e3b, 0xaf78741f, 0x741f30cc, 0xcfef5f08, 275 + 0x741f3e88, 0xafb8771e, 0x5f437fed, 0x0bafe2cc, 276 + 0x741ccfec, 0xe5ca53a9, 0x6fcb4f74, 0x5e89df27, 277 + 0x2a923d14, 0x4b8fdf0c, 0x751f741c, 0x6c1eeffa, 278 + 0xefea7fce, 0x6ffc309a, 0xefec3fca, 0x308fdf0a, 279 + 0xadf85e7a, 0xaf7daefd, 0x5e7adf0a, 0x5e7aafdd, 280 + 0x761f1088, 0x1e7c7efd, 0x3089fffe, 0x4908fb18, 281 + 0x5fffdfff, 0xafbbf0f7, 0x4ef85f43, 0xadf81489, 282 + 0x7a0f7089, 0xcfef5089, 0x7a0fdf0c, 0x5e7cafed, 283 + 0xbc6e780f, 0xefef780f, 0xefef790f, 0xa7f85eeb, 284 + 0xffef790f, 0xefef790f, 0x1489df0a, 0x5e7aadfd, 285 + 0x5f09fffb, 0xe79aded9, 0xeff96079, 0x607ae79a, 286 + 0xded8eff9, 0x60795edb, 0x607acfef, 0xefefefdf, 287 + 0xefbfef7f, 0xeeffedff, 0xebffe7ff, 0xafefafdf, 288 + 0xafbfaf7f, 0xaeffadff, 0xabffa7ff, 0x6fef6fdf, 289 + 0x6fbf6f7f, 0x6eff6dff, 0x6bff67ff, 0x2fef2fdf, 290 + 0x2fbf2f7f, 0x2eff2dff, 0x2bff27ff, 0x4e08fd1f, 291 + 0xe5ff6e0f, 0xaff87eef, 0x7e0ffdef, 0xf11f6079, 292 + 0xabf8f51e, 0x7e0af11c, 0x37cfae16, 0x7fec909a, 293 + 0xadf8efdc, 0xcfeae52f, 0x7d0fe12b, 0xf11c6079, 294 + 0x7e0a4df8, 0xcfea5ea0, 0x7d0befec, 0xcfea5ea2, 295 + 0xe522efdc, 0x5ea2cfda, 0x4e08fd1f, 0x6e0faff8, 296 + 0x7c1f761f, 0xfdeff91f, 0x6079abf8, 0x761cee00, 297 + 0xf91f2bfb, 0xefefcfec, 0xf91f6079, 0x761c27fb, 298 + 0xefdf5e83, 0xcfdc7fdd, 0x50f84bf8, 0x47fd7c1f, 299 + 0x761ccfcf, 0x7eef7fed, 0x7dfd70ef, 0xef7e7f1e, 300 + 0x771efb18, 0x6079e722, 0xe6bbe5bb, 0x2e66e5bb, 301 + 0x600b2ee1, 0xe2bbe2bb, 0xe2bbe2bb, 0x2f5ee2bb, 302 + 0xe2bb2ff9, 0x6079e2bb, 303 + }; 304 + 305 + static uint patch_2f00[] __initdata = { 306 + 0x30303030, 0x3e3e3030, 0xaf79b9b3, 0xbaa3b979, 307 + 0x9693369f, 0x79f79777, 0x97333fff, 0xfb3b9e9f, 308 + 0x79b91d11, 0x9e13f3ff, 0x3f9b6bd9, 0xe173d136, 309 + 0x695669d1, 
0x697b3daf, 0x79b93a3a, 0x3f979f91, 310 + 0x379ff976, 0xf99777fd, 0x9779737d, 0xe9d6bbf9, 311 + 0xbfffd9df, 0x97f7fd97, 0x6f7b9bff, 0xf9bd9683, 312 + 0x397db973, 0xd97b3b9f, 0xd7f9f733, 0x9993bb9e, 313 + 0xe1f9ef93, 0x73773337, 0xb936917d, 0x11f87379, 314 + 0xb979d336, 0x8b7ded73, 0x1b7d9337, 0x31f3f22f, 315 + 0x3f2327ee, 0xeeeeeeee, 0xeeeeeeee, 0xeeeeeeee, 316 + 0xeeeeee4b, 0xf4fbdbd2, 0x58bb1878, 0x577fdfd2, 317 + 0xd573b773, 0xf7374b4f, 0xbdbd25b8, 0xb177d2d1, 318 + 0x7376856b, 0xbfdd687b, 0xdd2fff8f, 0x78ffff8f, 319 + 0xf22f0000, 320 + }; 321 + 322 + static uint patch_2e00[] __initdata = {}; 323 + #endif 324 + 325 + static void __init cpm_write_patch(cpm8xx_t *cp, int offset, uint *patch, int len) 326 + { 327 + if (!len) 328 + return; 329 + memcpy_toio(cp->cp_dpmem + offset, patch, len); 330 + } 331 + 332 + void __init cpm_load_patch(cpm8xx_t *cp) 333 + { 334 + out_be16(&cp->cp_rccr, 0); 335 + 336 + cpm_write_patch(cp, 0, patch_2000, sizeof(patch_2000)); 337 + cpm_write_patch(cp, 0xf00, patch_2f00, sizeof(patch_2f00)); 338 + cpm_write_patch(cp, 0xe00, patch_2e00, sizeof(patch_2e00)); 339 + 340 + if (IS_ENABLED(CONFIG_I2C_SPI_UCODE_PATCH) || 341 + IS_ENABLED(CONFIG_I2C_SPI_SMC1_UCODE_PATCH)) { 342 + u16 rpbase = 0x500; 343 + iic_t *iip; 344 + struct spi_pram *spp; 345 + 346 + iip = (iic_t *)&cp->cp_dparam[PROFF_IIC]; 347 + out_be16(&iip->iic_rpbase, rpbase); 348 + 349 + /* Put SPI above the IIC, also 32-byte aligned. 
*/ 350 + spp = (struct spi_pram *)&cp->cp_dparam[PROFF_SPI]; 351 + out_be16(&spp->rpbase, (rpbase + sizeof(iic_t) + 31) & ~31); 352 + 353 + if (IS_ENABLED(CONFIG_I2C_SPI_SMC1_UCODE_PATCH)) { 354 + smc_uart_t *smp; 355 + 356 + smp = (smc_uart_t *)&cp->cp_dparam[PROFF_SMC1]; 357 + out_be16(&smp->smc_rpbase, 0x1FC0); 358 + } 359 + } 360 + 361 + if (IS_ENABLED(CONFIG_SMC_UCODE_PATCH)) { 362 + smc_uart_t *smp; 363 + 364 + smp = (smc_uart_t *)&cp->cp_dparam[PROFF_SMC1]; 365 + out_be16(&smp->smc_rpbase, 0x1ec0); 366 + smp = (smc_uart_t *)&cp->cp_dparam[PROFF_SMC2]; 367 + out_be16(&smp->smc_rpbase, 0x1fc0); 368 + } 369 + 370 + out_be16(&cp->cp_cpmcr1, patch_params.cpmcr1); 371 + out_be16(&cp->cp_cpmcr2, patch_params.cpmcr2); 372 + out_be16(&cp->cp_cpmcr3, patch_params.cpmcr3); 373 + out_be16(&cp->cp_cpmcr4, patch_params.cpmcr4); 374 + 375 + out_be16(&cp->cp_rccr, patch_params.rccr); 376 + 377 + pr_info("%s microcode patch installed\n", patch_name); 378 + }
+1 -1
arch/powerpc/platforms/Kconfig.cputype
··· 330 330 331 331 config PPC_RADIX_MMU 332 332 bool "Radix MMU Support" 333 - depends on PPC_BOOK3S_64 && HUGETLB_PAGE 333 + depends on PPC_BOOK3S_64 334 334 select ARCH_HAS_GIGANTIC_PAGE 335 335 select PPC_HAVE_KUEP 336 336 select PPC_HAVE_KUAP
+1 -1
arch/powerpc/platforms/cell/spufs/file.c
··· 446 446 .release = spufs_cntl_release, 447 447 .read = simple_attr_read, 448 448 .write = simple_attr_write, 449 - .llseek = generic_file_llseek, 449 + .llseek = no_llseek, 450 450 .mmap = spufs_cntl_mmap, 451 451 }; 452 452
+1 -1
arch/powerpc/platforms/maple/Kconfig
··· 14 14 select MMIO_NVRAM 15 15 select ATA_NONSTANDARD if ATA 16 16 help 17 - This option enables support for the Maple 970FX Evaluation Board. 17 + This option enables support for the Maple 970FX Evaluation Board. 18 18 For more information, refer to <http://www.970eval.com>
+63 -5
arch/powerpc/platforms/powermac/sleep.S
··· 33 33 #define SL_IBAT2 0x48 34 34 #define SL_DBAT3 0x50 35 35 #define SL_IBAT3 0x58 36 - #define SL_TB 0x60 37 - #define SL_R2 0x68 38 - #define SL_CR 0x6c 39 - #define SL_R12 0x70 /* r12 to r31 */ 36 + #define SL_DBAT4 0x60 37 + #define SL_IBAT4 0x68 38 + #define SL_DBAT5 0x70 39 + #define SL_IBAT5 0x78 40 + #define SL_DBAT6 0x80 41 + #define SL_IBAT6 0x88 42 + #define SL_DBAT7 0x90 43 + #define SL_IBAT7 0x98 44 + #define SL_TB 0xa0 45 + #define SL_R2 0xa8 46 + #define SL_CR 0xac 47 + #define SL_R12 0xb0 /* r12 to r31 */ 40 48 #define SL_SIZE (SL_R12 + 80) 41 49 42 50 .section .text ··· 128 120 stw r4,SL_IBAT3(r1) 129 121 mfibatl r4,3 130 122 stw r4,SL_IBAT3+4(r1) 123 + 124 + BEGIN_MMU_FTR_SECTION 125 + mfspr r4,SPRN_DBAT4U 126 + stw r4,SL_DBAT4(r1) 127 + mfspr r4,SPRN_DBAT4L 128 + stw r4,SL_DBAT4+4(r1) 129 + mfspr r4,SPRN_DBAT5U 130 + stw r4,SL_DBAT5(r1) 131 + mfspr r4,SPRN_DBAT5L 132 + stw r4,SL_DBAT5+4(r1) 133 + mfspr r4,SPRN_DBAT6U 134 + stw r4,SL_DBAT6(r1) 135 + mfspr r4,SPRN_DBAT6L 136 + stw r4,SL_DBAT6+4(r1) 137 + mfspr r4,SPRN_DBAT7U 138 + stw r4,SL_DBAT7(r1) 139 + mfspr r4,SPRN_DBAT7L 140 + stw r4,SL_DBAT7+4(r1) 141 + mfspr r4,SPRN_IBAT4U 142 + stw r4,SL_IBAT4(r1) 143 + mfspr r4,SPRN_IBAT4L 144 + stw r4,SL_IBAT4+4(r1) 145 + mfspr r4,SPRN_IBAT5U 146 + stw r4,SL_IBAT5(r1) 147 + mfspr r4,SPRN_IBAT5L 148 + stw r4,SL_IBAT5+4(r1) 149 + mfspr r4,SPRN_IBAT6U 150 + stw r4,SL_IBAT6(r1) 151 + mfspr r4,SPRN_IBAT6L 152 + stw r4,SL_IBAT6+4(r1) 153 + mfspr r4,SPRN_IBAT7U 154 + stw r4,SL_IBAT7(r1) 155 + mfspr r4,SPRN_IBAT7L 156 + stw r4,SL_IBAT7+4(r1) 157 + END_MMU_FTR_SECTION_IFSET(MMU_FTR_USE_HIGH_BATS) 131 158 132 159 /* Backup various CPU config stuffs */ 133 160 bl __save_cpu_setup ··· 364 321 mtibatl 3,r4 365 322 366 323 BEGIN_MMU_FTR_SECTION 367 - li r4,0 324 + lwz r4,SL_DBAT4(r1) 368 325 mtspr SPRN_DBAT4U,r4 326 + lwz r4,SL_DBAT4+4(r1) 369 327 mtspr SPRN_DBAT4L,r4 328 + lwz r4,SL_DBAT5(r1) 370 329 mtspr SPRN_DBAT5U,r4 330 + lwz r4,SL_DBAT5+4(r1) 371 331 mtspr 
SPRN_DBAT5L,r4 332 + lwz r4,SL_DBAT6(r1) 372 333 mtspr SPRN_DBAT6U,r4 334 + lwz r4,SL_DBAT6+4(r1) 373 335 mtspr SPRN_DBAT6L,r4 336 + lwz r4,SL_DBAT7(r1) 374 337 mtspr SPRN_DBAT7U,r4 338 + lwz r4,SL_DBAT7+4(r1) 375 339 mtspr SPRN_DBAT7L,r4 340 + lwz r4,SL_IBAT4(r1) 376 341 mtspr SPRN_IBAT4U,r4 342 + lwz r4,SL_IBAT4+4(r1) 377 343 mtspr SPRN_IBAT4L,r4 344 + lwz r4,SL_IBAT5(r1) 378 345 mtspr SPRN_IBAT5U,r4 346 + lwz r4,SL_IBAT5+4(r1) 379 347 mtspr SPRN_IBAT5L,r4 348 + lwz r4,SL_IBAT6(r1) 380 349 mtspr SPRN_IBAT6U,r4 350 + lwz r4,SL_IBAT6+4(r1) 381 351 mtspr SPRN_IBAT6L,r4 352 + lwz r4,SL_IBAT7(r1) 382 353 mtspr SPRN_IBAT7U,r4 354 + lwz r4,SL_IBAT7+4(r1) 383 355 mtspr SPRN_IBAT7L,r4 384 356 END_MMU_FTR_SECTION_IFSET(MMU_FTR_USE_HIGH_BATS) 385 357
+1 -3
arch/powerpc/platforms/powernv/eeh-powernv.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0-or-later 2 2 /* 3 - * The file intends to implement the platform dependent EEH operations on 4 - * powernv platform. Actually, the powernv was created in order to fully 5 - * hypervisor support. 3 + * PowerNV Platform dependent EEH operations 6 4 * 7 5 * Copyright Benjamin Herrenschmidt & Gavin Shan, IBM Corporation 2013. 8 6 */
+4 -4
arch/powerpc/platforms/powernv/idle.c
··· 716 716 * to reload MMCR0 (see mmcr0 comment above). 717 717 */ 718 718 if (!cpu_has_feature(CPU_FTR_POWER9_DD2_1)) { 719 - asm volatile(PPC_INVALIDATE_ERAT); 719 + asm volatile(PPC_ISA_3_0_INVALIDATE_ERAT); 720 720 mtspr(SPRN_MMCR0, mmcr0); 721 721 } 722 722 ··· 758 758 mtspr(SPRN_PTCR, sprs.ptcr); 759 759 mtspr(SPRN_RPR, sprs.rpr); 760 760 mtspr(SPRN_TSCR, sprs.tscr); 761 - mtspr(SPRN_LDBAR, sprs.ldbar); 762 761 763 762 if (pls >= pnv_first_tb_loss_level) { 764 763 /* TB loss */ ··· 789 790 mtspr(SPRN_MMCR0, sprs.mmcr0); 790 791 mtspr(SPRN_MMCR1, sprs.mmcr1); 791 792 mtspr(SPRN_MMCR2, sprs.mmcr2); 793 + mtspr(SPRN_LDBAR, sprs.ldbar); 792 794 793 795 mtspr(SPRN_SPRG3, local_paca->sprg_vdso); 794 796 ··· 1155 1155 pnv_deepest_stop_psscr_mask); 1156 1156 } 1157 1157 1158 - pr_info("cpuidle-powernv: First stop level that may lose SPRs = 0x%lld\n", 1158 + pr_info("cpuidle-powernv: First stop level that may lose SPRs = 0x%llx\n", 1159 1159 pnv_first_spr_loss_level); 1160 1160 1161 - pr_info("cpuidle-powernv: First stop level that may lose timebase = 0x%lld\n", 1161 + pr_info("cpuidle-powernv: First stop level that may lose timebase = 0x%llx\n", 1162 1162 pnv_first_tb_loss_level); 1163 1163 } 1164 1164
+14 -557
arch/powerpc/platforms/powernv/npu-dma.c
··· 19 19 20 20 #include "pci.h" 21 21 22 - /* 23 - * spinlock to protect initialisation of an npu_context for a particular 24 - * mm_struct. 25 - */ 26 - static DEFINE_SPINLOCK(npu_context_lock); 27 - 28 22 static struct pci_dev *get_pci_dev(struct device_node *dn) 29 23 { 30 24 struct pci_dn *pdn = PCI_DN(dn); 25 + struct pci_dev *pdev; 31 26 32 - return pci_get_domain_bus_and_slot(pci_domain_nr(pdn->phb->bus), 27 + pdev = pci_get_domain_bus_and_slot(pci_domain_nr(pdn->phb->bus), 33 28 pdn->busno, pdn->devfn); 29 + 30 + /* 31 + * pci_get_domain_bus_and_slot() increased the reference count of 32 + * the PCI device, but callers don't need that actually as the PE 33 + * already holds a reference to the device. Since callers aren't 34 + * aware of the reference count change, call pci_dev_put() now to 35 + * avoid leaks. 36 + */ 37 + if (pdev) 38 + pci_dev_put(pdev); 39 + 40 + return pdev; 34 41 } 35 42 36 43 /* Given a NPU device get the associated PCI device. */ ··· 366 359 /* An NPU descriptor, valid for POWER9 only */ 367 360 struct npu { 368 361 int index; 369 - __be64 *mmio_atsd_regs[NV_NMMU_ATSD_REGS]; 370 - unsigned int mmio_atsd_count; 371 - 372 - /* Bitmask for MMIO register usage */ 373 - unsigned long mmio_atsd_usage; 374 - 375 - /* Do we need to explicitly flush the nest mmu? */ 376 - bool nmmu_flush; 377 - 378 362 struct npu_comp npucomp; 379 363 }; 380 364 ··· 622 624 } 623 625 #endif /* CONFIG_IOMMU_API */ 624 626 625 - /* Maximum number of nvlinks per npu */ 626 - #define NV_MAX_LINKS 6 627 - 628 - /* Maximum index of npu2 hosts in the system. 
Always < NV_MAX_NPUS */ 629 - static int max_npu2_index; 630 - 631 - struct npu_context { 632 - struct mm_struct *mm; 633 - struct pci_dev *npdev[NV_MAX_NPUS][NV_MAX_LINKS]; 634 - struct mmu_notifier mn; 635 - struct kref kref; 636 - bool nmmu_flush; 637 - 638 - /* Callback to stop translation requests on a given GPU */ 639 - void (*release_cb)(struct npu_context *context, void *priv); 640 - 641 - /* 642 - * Private pointer passed to the above callback for usage by 643 - * device drivers. 644 - */ 645 - void *priv; 646 - }; 647 - 648 - struct mmio_atsd_reg { 649 - struct npu *npu; 650 - int reg; 651 - }; 652 - 653 - /* 654 - * Find a free MMIO ATSD register and mark it in use. Return -ENOSPC 655 - * if none are available. 656 - */ 657 - static int get_mmio_atsd_reg(struct npu *npu) 658 - { 659 - int i; 660 - 661 - for (i = 0; i < npu->mmio_atsd_count; i++) { 662 - if (!test_bit(i, &npu->mmio_atsd_usage)) 663 - if (!test_and_set_bit_lock(i, &npu->mmio_atsd_usage)) 664 - return i; 665 - } 666 - 667 - return -ENOSPC; 668 - } 669 - 670 - static void put_mmio_atsd_reg(struct npu *npu, int reg) 671 - { 672 - clear_bit_unlock(reg, &npu->mmio_atsd_usage); 673 - } 674 - 675 - /* MMIO ATSD register offsets */ 676 - #define XTS_ATSD_LAUNCH 0 677 - #define XTS_ATSD_AVA 1 678 - #define XTS_ATSD_STAT 2 679 - 680 - static unsigned long get_atsd_launch_val(unsigned long pid, unsigned long psize) 681 - { 682 - unsigned long launch = 0; 683 - 684 - if (psize == MMU_PAGE_COUNT) { 685 - /* IS set to invalidate entire matching PID */ 686 - launch |= PPC_BIT(12); 687 - } else { 688 - /* AP set to invalidate region of psize */ 689 - launch |= (u64)mmu_get_ap(psize) << PPC_BITLSHIFT(17); 690 - } 691 - 692 - /* PRS set to process-scoped */ 693 - launch |= PPC_BIT(13); 694 - 695 - /* PID */ 696 - launch |= pid << PPC_BITLSHIFT(38); 697 - 698 - /* Leave "No flush" (bit 39) 0 so every ATSD performs a flush */ 699 - 700 - return launch; 701 - } 702 - 703 - static void 
mmio_atsd_regs_write(struct mmio_atsd_reg 704 - mmio_atsd_reg[NV_MAX_NPUS], unsigned long offset, 705 - unsigned long val) 706 - { 707 - struct npu *npu; 708 - int i, reg; 709 - 710 - for (i = 0; i <= max_npu2_index; i++) { 711 - reg = mmio_atsd_reg[i].reg; 712 - if (reg < 0) 713 - continue; 714 - 715 - npu = mmio_atsd_reg[i].npu; 716 - __raw_writeq_be(val, npu->mmio_atsd_regs[reg] + offset); 717 - } 718 - } 719 - 720 - static void mmio_invalidate_pid(struct mmio_atsd_reg mmio_atsd_reg[NV_MAX_NPUS], 721 - unsigned long pid) 722 - { 723 - unsigned long launch = get_atsd_launch_val(pid, MMU_PAGE_COUNT); 724 - 725 - /* Invalidating the entire process doesn't use a va */ 726 - mmio_atsd_regs_write(mmio_atsd_reg, XTS_ATSD_LAUNCH, launch); 727 - } 728 - 729 - static void mmio_invalidate_range(struct mmio_atsd_reg 730 - mmio_atsd_reg[NV_MAX_NPUS], unsigned long pid, 731 - unsigned long start, unsigned long psize) 732 - { 733 - unsigned long launch = get_atsd_launch_val(pid, psize); 734 - 735 - /* Write all VAs first */ 736 - mmio_atsd_regs_write(mmio_atsd_reg, XTS_ATSD_AVA, start); 737 - 738 - /* Issue one barrier for all address writes */ 739 - eieio(); 740 - 741 - /* Launch */ 742 - mmio_atsd_regs_write(mmio_atsd_reg, XTS_ATSD_LAUNCH, launch); 743 - } 744 - 745 - #define mn_to_npu_context(x) container_of(x, struct npu_context, mn) 746 - 747 - static void mmio_invalidate_wait( 748 - struct mmio_atsd_reg mmio_atsd_reg[NV_MAX_NPUS]) 749 - { 750 - struct npu *npu; 751 - int i, reg; 752 - 753 - /* Wait for all invalidations to complete */ 754 - for (i = 0; i <= max_npu2_index; i++) { 755 - if (mmio_atsd_reg[i].reg < 0) 756 - continue; 757 - 758 - /* Wait for completion */ 759 - npu = mmio_atsd_reg[i].npu; 760 - reg = mmio_atsd_reg[i].reg; 761 - while (__raw_readq(npu->mmio_atsd_regs[reg] + XTS_ATSD_STAT)) 762 - cpu_relax(); 763 - } 764 - } 765 - 766 - /* 767 - * Acquires all the address translation shootdown (ATSD) registers required to 768 - * launch an ATSD on all links 
this npu_context is active on. 769 - */ 770 - static void acquire_atsd_reg(struct npu_context *npu_context, 771 - struct mmio_atsd_reg mmio_atsd_reg[NV_MAX_NPUS]) 772 - { 773 - int i, j; 774 - struct npu *npu; 775 - struct pci_dev *npdev; 776 - 777 - for (i = 0; i <= max_npu2_index; i++) { 778 - mmio_atsd_reg[i].reg = -1; 779 - for (j = 0; j < NV_MAX_LINKS; j++) { 780 - /* 781 - * There are no ordering requirements with respect to 782 - * the setup of struct npu_context, but to ensure 783 - * consistent behaviour we need to ensure npdev[][] is 784 - * only read once. 785 - */ 786 - npdev = READ_ONCE(npu_context->npdev[i][j]); 787 - if (!npdev) 788 - continue; 789 - 790 - npu = pci_bus_to_host(npdev->bus)->npu; 791 - if (!npu) 792 - continue; 793 - 794 - mmio_atsd_reg[i].npu = npu; 795 - mmio_atsd_reg[i].reg = get_mmio_atsd_reg(npu); 796 - while (mmio_atsd_reg[i].reg < 0) { 797 - mmio_atsd_reg[i].reg = get_mmio_atsd_reg(npu); 798 - cpu_relax(); 799 - } 800 - break; 801 - } 802 - } 803 - } 804 - 805 - /* 806 - * Release previously acquired ATSD registers. To avoid deadlocks the registers 807 - * must be released in the same order they were acquired above in 808 - * acquire_atsd_reg. 809 - */ 810 - static void release_atsd_reg(struct mmio_atsd_reg mmio_atsd_reg[NV_MAX_NPUS]) 811 - { 812 - int i; 813 - 814 - for (i = 0; i <= max_npu2_index; i++) { 815 - /* 816 - * We can't rely on npu_context->npdev[][] being the same here 817 - * as when acquire_atsd_reg() was called, hence we use the 818 - * values stored in mmio_atsd_reg during the acquire phase 819 - * rather than re-reading npdev[][]. 
820 - */ 821 - if (mmio_atsd_reg[i].reg < 0) 822 - continue; 823 - 824 - put_mmio_atsd_reg(mmio_atsd_reg[i].npu, mmio_atsd_reg[i].reg); 825 - } 826 - } 827 - 828 - /* 829 - * Invalidate a virtual address range 830 - */ 831 - static void mmio_invalidate(struct npu_context *npu_context, 832 - unsigned long start, unsigned long size) 833 - { 834 - struct mmio_atsd_reg mmio_atsd_reg[NV_MAX_NPUS]; 835 - unsigned long pid = npu_context->mm->context.id; 836 - unsigned long atsd_start = 0; 837 - unsigned long end = start + size - 1; 838 - int atsd_psize = MMU_PAGE_COUNT; 839 - 840 - /* 841 - * Convert the input range into one of the supported sizes. If the range 842 - * doesn't fit, use the next larger supported size. Invalidation latency 843 - * is high, so over-invalidation is preferred to issuing multiple 844 - * invalidates. 845 - * 846 - * A 4K page size isn't supported by NPU/GPU ATS, so that case is 847 - * ignored. 848 - */ 849 - if (size == SZ_64K) { 850 - atsd_start = start; 851 - atsd_psize = MMU_PAGE_64K; 852 - } else if (ALIGN_DOWN(start, SZ_2M) == ALIGN_DOWN(end, SZ_2M)) { 853 - atsd_start = ALIGN_DOWN(start, SZ_2M); 854 - atsd_psize = MMU_PAGE_2M; 855 - } else if (ALIGN_DOWN(start, SZ_1G) == ALIGN_DOWN(end, SZ_1G)) { 856 - atsd_start = ALIGN_DOWN(start, SZ_1G); 857 - atsd_psize = MMU_PAGE_1G; 858 - } 859 - 860 - if (npu_context->nmmu_flush) 861 - /* 862 - * Unfortunately the nest mmu does not support flushing specific 863 - * addresses so we have to flush the whole mm once before 864 - * shooting down the GPU translation. 865 - */ 866 - flush_all_mm(npu_context->mm); 867 - 868 - /* 869 - * Loop over all the NPUs this process is active on and launch 870 - * an invalidate. 
871 - */ 872 - acquire_atsd_reg(npu_context, mmio_atsd_reg); 873 - 874 - if (atsd_psize == MMU_PAGE_COUNT) 875 - mmio_invalidate_pid(mmio_atsd_reg, pid); 876 - else 877 - mmio_invalidate_range(mmio_atsd_reg, pid, atsd_start, 878 - atsd_psize); 879 - 880 - mmio_invalidate_wait(mmio_atsd_reg); 881 - 882 - /* 883 - * The GPU requires two flush ATSDs to ensure all entries have been 884 - * flushed. We use PID 0 as it will never be used for a process on the 885 - * GPU. 886 - */ 887 - mmio_invalidate_pid(mmio_atsd_reg, 0); 888 - mmio_invalidate_wait(mmio_atsd_reg); 889 - mmio_invalidate_pid(mmio_atsd_reg, 0); 890 - mmio_invalidate_wait(mmio_atsd_reg); 891 - 892 - release_atsd_reg(mmio_atsd_reg); 893 - } 894 - 895 - static void pnv_npu2_mn_release(struct mmu_notifier *mn, 896 - struct mm_struct *mm) 897 - { 898 - struct npu_context *npu_context = mn_to_npu_context(mn); 899 - 900 - /* Call into device driver to stop requests to the NMMU */ 901 - if (npu_context->release_cb) 902 - npu_context->release_cb(npu_context, npu_context->priv); 903 - 904 - /* 905 - * There should be no more translation requests for this PID, but we 906 - * need to ensure any entries for it are removed from the TLB. 907 - */ 908 - mmio_invalidate(npu_context, 0, ~0UL); 909 - } 910 - 911 - static void pnv_npu2_mn_invalidate_range(struct mmu_notifier *mn, 912 - struct mm_struct *mm, 913 - unsigned long start, unsigned long end) 914 - { 915 - struct npu_context *npu_context = mn_to_npu_context(mn); 916 - mmio_invalidate(npu_context, start, end - start); 917 - } 918 - 919 - static const struct mmu_notifier_ops nv_nmmu_notifier_ops = { 920 - .release = pnv_npu2_mn_release, 921 - .invalidate_range = pnv_npu2_mn_invalidate_range, 922 - }; 923 - 924 - /* 925 - * Call into OPAL to setup the nmmu context for the current task in 926 - * the NPU. This must be called to setup the context tables before the 927 - * GPU issues ATRs. pdev should be a pointed to PCIe GPU device. 
928 - * 929 - * A release callback should be registered to allow a device driver to 930 - * be notified that it should not launch any new translation requests 931 - * as the final TLB invalidate is about to occur. 932 - * 933 - * Returns an error if there no contexts are currently available or a 934 - * npu_context which should be passed to pnv_npu2_handle_fault(). 935 - * 936 - * mmap_sem must be held in write mode and must not be called from interrupt 937 - * context. 938 - */ 939 - struct npu_context *pnv_npu2_init_context(struct pci_dev *gpdev, 940 - unsigned long flags, 941 - void (*cb)(struct npu_context *, void *), 942 - void *priv) 943 - { 944 - int rc; 945 - u32 nvlink_index; 946 - struct device_node *nvlink_dn; 947 - struct mm_struct *mm = current->mm; 948 - struct npu *npu; 949 - struct npu_context *npu_context; 950 - struct pci_controller *hose; 951 - 952 - /* 953 - * At present we don't support GPUs connected to multiple NPUs and I'm 954 - * not sure the hardware does either. 955 - */ 956 - struct pci_dev *npdev = pnv_pci_get_npu_dev(gpdev, 0); 957 - 958 - if (!npdev) 959 - /* No nvlink associated with this GPU device */ 960 - return ERR_PTR(-ENODEV); 961 - 962 - /* We only support DR/PR/HV in pnv_npu2_map_lpar_dev() */ 963 - if (flags & ~(MSR_DR | MSR_PR | MSR_HV)) 964 - return ERR_PTR(-EINVAL); 965 - 966 - nvlink_dn = of_parse_phandle(npdev->dev.of_node, "ibm,nvlink", 0); 967 - if (WARN_ON(of_property_read_u32(nvlink_dn, "ibm,npu-link-index", 968 - &nvlink_index))) 969 - return ERR_PTR(-ENODEV); 970 - 971 - if (!mm || mm->context.id == 0) { 972 - /* 973 - * Kernel thread contexts are not supported and context id 0 is 974 - * reserved on the GPU. 975 - */ 976 - return ERR_PTR(-EINVAL); 977 - } 978 - 979 - hose = pci_bus_to_host(npdev->bus); 980 - npu = hose->npu; 981 - if (!npu) 982 - return ERR_PTR(-ENODEV); 983 - 984 - /* 985 - * We store the npu pci device so we can more easily get at the 986 - * associated npus. 
987 - */ 988 - spin_lock(&npu_context_lock); 989 - npu_context = mm->context.npu_context; 990 - if (npu_context) { 991 - if (npu_context->release_cb != cb || 992 - npu_context->priv != priv) { 993 - spin_unlock(&npu_context_lock); 994 - return ERR_PTR(-EINVAL); 995 - } 996 - 997 - WARN_ON(!kref_get_unless_zero(&npu_context->kref)); 998 - } 999 - spin_unlock(&npu_context_lock); 1000 - 1001 - if (!npu_context) { 1002 - /* 1003 - * We can set up these fields without holding the 1004 - * npu_context_lock as the npu_context hasn't been returned to 1005 - * the caller meaning it can't be destroyed. Parallel allocation 1006 - * is protected against by mmap_sem. 1007 - */ 1008 - rc = -ENOMEM; 1009 - npu_context = kzalloc(sizeof(struct npu_context), GFP_KERNEL); 1010 - if (npu_context) { 1011 - kref_init(&npu_context->kref); 1012 - npu_context->mm = mm; 1013 - npu_context->mn.ops = &nv_nmmu_notifier_ops; 1014 - rc = __mmu_notifier_register(&npu_context->mn, mm); 1015 - } 1016 - 1017 - if (rc) { 1018 - kfree(npu_context); 1019 - return ERR_PTR(rc); 1020 - } 1021 - 1022 - mm->context.npu_context = npu_context; 1023 - } 1024 - 1025 - npu_context->release_cb = cb; 1026 - npu_context->priv = priv; 1027 - 1028 - /* 1029 - * npdev is a pci_dev pointer setup by the PCI code. We assign it to 1030 - * npdev[][] to indicate to the mmu notifiers that an invalidation 1031 - * should also be sent over this nvlink. The notifiers don't use any 1032 - * other fields in npu_context, so we just need to ensure that when they 1033 - * deference npu_context->npdev[][] it is either a valid pointer or 1034 - * NULL. 
1035 - */ 1036 - WRITE_ONCE(npu_context->npdev[npu->index][nvlink_index], npdev); 1037 - 1038 - if (!npu->nmmu_flush) { 1039 - /* 1040 - * If we're not explicitly flushing ourselves we need to mark 1041 - * the thread for global flushes 1042 - */ 1043 - npu_context->nmmu_flush = false; 1044 - mm_context_add_copro(mm); 1045 - } else 1046 - npu_context->nmmu_flush = true; 1047 - 1048 - return npu_context; 1049 - } 1050 - EXPORT_SYMBOL(pnv_npu2_init_context); 1051 - 1052 - static void pnv_npu2_release_context(struct kref *kref) 1053 - { 1054 - struct npu_context *npu_context = 1055 - container_of(kref, struct npu_context, kref); 1056 - 1057 - if (!npu_context->nmmu_flush) 1058 - mm_context_remove_copro(npu_context->mm); 1059 - 1060 - npu_context->mm->context.npu_context = NULL; 1061 - } 1062 - 1063 - /* 1064 - * Destroy a context on the given GPU. May free the npu_context if it is no 1065 - * longer active on any GPUs. Must not be called from interrupt context. 1066 - */ 1067 - void pnv_npu2_destroy_context(struct npu_context *npu_context, 1068 - struct pci_dev *gpdev) 1069 - { 1070 - int removed; 1071 - struct npu *npu; 1072 - struct pci_dev *npdev = pnv_pci_get_npu_dev(gpdev, 0); 1073 - struct device_node *nvlink_dn; 1074 - u32 nvlink_index; 1075 - struct pci_controller *hose; 1076 - 1077 - if (WARN_ON(!npdev)) 1078 - return; 1079 - 1080 - hose = pci_bus_to_host(npdev->bus); 1081 - npu = hose->npu; 1082 - if (!npu) 1083 - return; 1084 - nvlink_dn = of_parse_phandle(npdev->dev.of_node, "ibm,nvlink", 0); 1085 - if (WARN_ON(of_property_read_u32(nvlink_dn, "ibm,npu-link-index", 1086 - &nvlink_index))) 1087 - return; 1088 - WRITE_ONCE(npu_context->npdev[npu->index][nvlink_index], NULL); 1089 - spin_lock(&npu_context_lock); 1090 - removed = kref_put(&npu_context->kref, pnv_npu2_release_context); 1091 - spin_unlock(&npu_context_lock); 1092 - 1093 - /* 1094 - * We need to do this outside of pnv_npu2_release_context so that it is 1095 - * outside the spinlock as 
mmu_notifier_destroy uses SRCU. 1096 - */ 1097 - if (removed) { 1098 - mmu_notifier_unregister(&npu_context->mn, 1099 - npu_context->mm); 1100 - 1101 - kfree(npu_context); 1102 - } 1103 - 1104 - } 1105 - EXPORT_SYMBOL(pnv_npu2_destroy_context); 1106 - 1107 - /* 1108 - * Assumes mmap_sem is held for the contexts associated mm. 1109 - */ 1110 - int pnv_npu2_handle_fault(struct npu_context *context, uintptr_t *ea, 1111 - unsigned long *flags, unsigned long *status, int count) 1112 - { 1113 - u64 rc = 0, result = 0; 1114 - int i, is_write; 1115 - struct page *page[1]; 1116 - const char __user *u; 1117 - char c; 1118 - 1119 - /* mmap_sem should be held so the struct_mm must be present */ 1120 - struct mm_struct *mm = context->mm; 1121 - 1122 - WARN_ON(!rwsem_is_locked(&mm->mmap_sem)); 1123 - 1124 - for (i = 0; i < count; i++) { 1125 - is_write = flags[i] & NPU2_WRITE; 1126 - rc = get_user_pages_remote(NULL, mm, ea[i], 1, 1127 - is_write ? FOLL_WRITE : 0, 1128 - page, NULL, NULL); 1129 - 1130 - if (rc != 1) { 1131 - status[i] = rc; 1132 - result = -EFAULT; 1133 - continue; 1134 - } 1135 - 1136 - /* Make sure partition scoped tree gets a pte */ 1137 - u = page_address(page[0]); 1138 - if (__get_user(c, u)) 1139 - result = -EFAULT; 1140 - 1141 - status[i] = 0; 1142 - put_page(page[0]); 1143 - } 1144 - 1145 - return result; 1146 - } 1147 - EXPORT_SYMBOL(pnv_npu2_handle_fault); 1148 - 1149 627 int pnv_npu2_init(struct pci_controller *hose) 1150 628 { 1151 - unsigned int i; 1152 - u64 mmio_atsd; 1153 629 static int npu_index; 1154 630 struct npu *npu; 1155 631 int ret; ··· 632 1160 if (!npu) 633 1161 return -ENOMEM; 634 1162 635 - npu->nmmu_flush = of_property_read_bool(hose->dn, "ibm,nmmu-flush"); 636 - 637 - for (i = 0; i < ARRAY_SIZE(npu->mmio_atsd_regs) && 638 - !of_property_read_u64_index(hose->dn, "ibm,mmio-atsd", 639 - i, &mmio_atsd); i++) 640 - npu->mmio_atsd_regs[i] = ioremap(mmio_atsd, 32); 641 - 642 - pr_info("NPU%d: Found %d MMIO ATSD registers", 
hose->global_number, i); 643 - npu->mmio_atsd_count = i; 644 - npu->mmio_atsd_usage = 0; 645 1163 npu_index++; 646 1164 if (WARN_ON(npu_index >= NV_MAX_NPUS)) { 647 1165 ret = -ENOSPC; 648 1166 goto fail_exit; 649 1167 } 650 - max_npu2_index = npu_index; 651 1168 npu->index = npu_index; 652 1169 hose->npu = npu; 653 1170 654 1171 return 0; 655 1172 656 1173 fail_exit: 657 - for (i = 0; i < npu->mmio_atsd_count; ++i) 658 - iounmap(npu->mmio_atsd_regs[i]); 659 - 660 1174 kfree(npu); 661 - 662 1175 return ret; 663 1176 } 664 1177
-1
arch/powerpc/platforms/powernv/opal-call.c
···
273 273 	OPAL_CALL(opal_imc_counters_init,	OPAL_IMC_COUNTERS_INIT);
274 274 	OPAL_CALL(opal_imc_counters_start,	OPAL_IMC_COUNTERS_START);
275 275 	OPAL_CALL(opal_imc_counters_stop,	OPAL_IMC_COUNTERS_STOP);
276 -   	OPAL_CALL(opal_pci_set_p2p,		OPAL_PCI_SET_P2P);
277 276 	OPAL_CALL(opal_get_powercap,		OPAL_GET_POWERCAP);
278 277 	OPAL_CALL(opal_set_powercap,		OPAL_SET_POWERCAP);
279 278 	OPAL_CALL(opal_get_power_shift_ratio,	OPAL_GET_POWER_SHIFT_RATIO);
+40
arch/powerpc/platforms/powernv/opal-hmi.c
···
137 137 				xstop_reason[i].description);
138 138 	}
139 139 
140 +   static void print_npu_checkstop_reason(const char *level,
141 +   					struct OpalHMIEvent *hmi_evt)
142 +   {
143 +   	uint8_t reason, reason_count, i;
144 +   
145 +   	/*
146 +   	 * We may not have a checkstop reason on some combination of
147 +   	 * hardware and/or skiboot version
148 +   	 */
149 +   	if (!hmi_evt->u.xstop_error.xstop_reason) {
150 +   		printk("%s NPU checkstop on chip %x\n", level,
151 +   			be32_to_cpu(hmi_evt->u.xstop_error.u.chip_id));
152 +   		return;
153 +   	}
154 +   
155 +   	/*
156 +   	 * NPU2 has 3 FIRs. Reason encoded on a byte as:
157 +   	 *   2 bits for the FIR number
158 +   	 *   6 bits for the bit number
159 +   	 * It may be possible to find several reasons.
160 +   	 *
161 +   	 * We don't display a specific message per FIR bit as there
162 +   	 * are too many and most are meaningless without the workbook
163 +   	 * and/or hw team help anyway.
164 +   	 */
165 +   	reason_count = sizeof(hmi_evt->u.xstop_error.xstop_reason) /
166 +   		sizeof(reason);
167 +   	for (i = 0; i < reason_count; i++) {
168 +   		reason = (hmi_evt->u.xstop_error.xstop_reason >> (8 * i)) & 0xFF;
169 +   		if (reason)
170 +   			printk("%s NPU checkstop on chip %x: FIR%d bit %d is set\n",
171 +   				level,
172 +   				be32_to_cpu(hmi_evt->u.xstop_error.u.chip_id),
173 +   				reason >> 6, reason & 0x3F);
174 +   	}
175 +   }
176 +   
140 177 static void print_checkstop_reason(const char *level,
141 178 					struct OpalHMIEvent *hmi_evt)
142 179 {
···
184 147 		break;
185 148 	case CHECKSTOP_TYPE_NX:
186 149 		print_nx_checkstop_reason(level, hmi_evt);
150 +   		break;
151 +   	case CHECKSTOP_TYPE_NPU:
152 +   		print_npu_checkstop_reason(level, hmi_evt);
187 153 		break;
188 154 	default:
189 155 		printk("%s Unknown Malfunction Alert of type %d\n",
+15 -8
arch/powerpc/platforms/powernv/opal.c
···
202 202 	glue = 0x7000;
203 203 
204 204 	/*
205 -   	 * Check if we are running on newer firmware that exports
206 -   	 * OPAL_HANDLE_HMI token. If yes, then don't ask OPAL to patch
207 -   	 * the HMI interrupt and we catch it directly in Linux.
205 +   	 * Only ancient OPAL firmware requires this.
206 +   	 * Specifically, firmware from FW810.00 (released June 2014)
207 +   	 * through FW810.20 (Released October 2014).
208 208 	 *
209 -   	 * For older firmware (i.e currently released POWER8 System Firmware
210 -   	 * as of today <= SV810_087), we fallback to old behavior and let OPAL
211 -   	 * patch the HMI vector and handle it inside OPAL firmware.
209 +   	 * Check if we are running on newer (post Oct 2014) firmware that
210 +   	 * exports the OPAL_HANDLE_HMI token. If yes, then don't ask OPAL to
211 +   	 * patch the HMI interrupt and we catch it directly in Linux.
212 212 	 *
213 -   	 * For newer firmware (in development/yet to be released) we will
214 -   	 * start catching/handling HMI directly in Linux.
213 +   	 * For older firmware (i.e < FW810.20), we fallback to old behavior and
214 +   	 * let OPAL patch the HMI vector and handle it inside OPAL firmware.
215 +   	 *
216 +   	 * For newer firmware we catch/handle the HMI directly in Linux.
215 217 	 */
216 218 	if (!opal_check_token(OPAL_HANDLE_HMI)) {
217 219 		pr_info("Old firmware detected, OPAL handles HMIs.\n");
···
223 221 		glue += 128;
224 222 	}
225 223 
224 +   	/*
225 +   	 * Only applicable to ancient firmware, all modern
226 +   	 * (post March 2015/skiboot 5.0) firmware will just return
227 +   	 * OPAL_UNSUPPORTED.
228 +   	 */
226 229 	opal_register_exception_handler(OPAL_SOFTPATCH_HANDLER, 0, glue);
227 230 #endif
228 231 
+13 -1
arch/powerpc/platforms/powernv/pci-ioda.c
···
50 50 static const char * const pnv_phb_names[] = { "IODA1", "IODA2", "NPU_NVLINK",
51 51 					      "NPU_OCAPI" };
52 52 
53 +   static void pnv_pci_ioda2_set_bypass(struct pnv_ioda_pe *pe, bool enable);
54 +   
53 55 void pe_level_printk(const struct pnv_ioda_pe *pe, const char *level,
54 56 			const char *fmt, ...)
55 57 {
···
2358 2356 	return 0;
2359 2357 }
2360 2358 
2361 -    void pnv_pci_ioda2_set_bypass(struct pnv_ioda_pe *pe, bool enable)
2359 +    static void pnv_pci_ioda2_set_bypass(struct pnv_ioda_pe *pe, bool enable)
2362 2360 {
2363 2361 	uint16_t window_id = (pe->pe_number << 1 ) + 1;
2364 2362 	int64_t rc;
···
2458 2456 	if (!pnv_iommu_bypass_disabled)
2459 2457 		pnv_pci_ioda2_set_bypass(pe, true);
2460 2458 
2459 +    	/*
2460 +    	 * Set table base for the case of IOMMU DMA use. Usually this is done
2461 +    	 * from dma_dev_setup() which is not called when a device is returned
2462 +    	 * from VFIO so do it here.
2463 +    	 */
2464 +    	if (pe->pdev)
2465 +    		set_iommu_table_base(&pe->pdev->dev, tbl);
2466 +    
2461 2467 	return 0;
2462 2468 }
2463 2469 
···
2553 2543 	pnv_pci_ioda2_unset_window(&pe->table_group, 0);
2554 2544 	if (pe->pbus)
2555 2545 		pnv_ioda_setup_bus_dma(pe, pe->pbus);
2546 +    	else if (pe->pdev)
2547 +    		set_iommu_table_base(&pe->pdev->dev, NULL);
2556 2548 	iommu_tce_table_put(tbl);
2557 2549 }
2558 2550 
-145
arch/powerpc/platforms/powernv/pci.c
··· 34 34 #include "powernv.h" 35 35 #include "pci.h" 36 36 37 - static DEFINE_MUTEX(p2p_mutex); 38 37 static DEFINE_MUTEX(tunnel_mutex); 39 38 40 39 int pnv_pci_get_slot_id(struct device_node *np, uint64_t *id) ··· 856 857 } 857 858 } 858 859 859 - int pnv_pci_set_p2p(struct pci_dev *initiator, struct pci_dev *target, u64 desc) 860 - { 861 - struct pci_controller *hose; 862 - struct pnv_phb *phb_init, *phb_target; 863 - struct pnv_ioda_pe *pe_init; 864 - int rc; 865 - 866 - if (!opal_check_token(OPAL_PCI_SET_P2P)) 867 - return -ENXIO; 868 - 869 - hose = pci_bus_to_host(initiator->bus); 870 - phb_init = hose->private_data; 871 - 872 - hose = pci_bus_to_host(target->bus); 873 - phb_target = hose->private_data; 874 - 875 - pe_init = pnv_ioda_get_pe(initiator); 876 - if (!pe_init) 877 - return -ENODEV; 878 - 879 - /* 880 - * Configuring the initiator's PHB requires to adjust its 881 - * TVE#1 setting. Since the same device can be an initiator 882 - * several times for different target devices, we need to keep 883 - * a reference count to know when we can restore the default 884 - * bypass setting on its TVE#1 when disabling. Opal is not 885 - * tracking PE states, so we add a reference count on the PE 886 - * in linux. 887 - * 888 - * For the target, the configuration is per PHB, so we keep a 889 - * target reference count on the PHB. 
890 - */ 891 - mutex_lock(&p2p_mutex); 892 - 893 - if (desc & OPAL_PCI_P2P_ENABLE) { 894 - /* always go to opal to validate the configuration */ 895 - rc = opal_pci_set_p2p(phb_init->opal_id, phb_target->opal_id, 896 - desc, pe_init->pe_number); 897 - 898 - if (rc != OPAL_SUCCESS) { 899 - rc = -EIO; 900 - goto out; 901 - } 902 - 903 - pe_init->p2p_initiator_count++; 904 - phb_target->p2p_target_count++; 905 - } else { 906 - if (!pe_init->p2p_initiator_count || 907 - !phb_target->p2p_target_count) { 908 - rc = -EINVAL; 909 - goto out; 910 - } 911 - 912 - if (--pe_init->p2p_initiator_count == 0) 913 - pnv_pci_ioda2_set_bypass(pe_init, true); 914 - 915 - if (--phb_target->p2p_target_count == 0) { 916 - rc = opal_pci_set_p2p(phb_init->opal_id, 917 - phb_target->opal_id, desc, 918 - pe_init->pe_number); 919 - if (rc != OPAL_SUCCESS) { 920 - rc = -EIO; 921 - goto out; 922 - } 923 - } 924 - } 925 - rc = 0; 926 - out: 927 - mutex_unlock(&p2p_mutex); 928 - return rc; 929 - } 930 - EXPORT_SYMBOL_GPL(pnv_pci_set_p2p); 931 - 932 860 struct device_node *pnv_pci_get_phb_node(struct pci_dev *dev) 933 861 { 934 862 struct pci_controller *hose = pci_bus_to_host(dev->bus); ··· 863 937 return of_node_get(hose->dn); 864 938 } 865 939 EXPORT_SYMBOL(pnv_pci_get_phb_node); 866 - 867 - int pnv_pci_enable_tunnel(struct pci_dev *dev, u64 *asnind) 868 - { 869 - struct device_node *np; 870 - const __be32 *prop; 871 - struct pnv_ioda_pe *pe; 872 - uint16_t window_id; 873 - int rc; 874 - 875 - if (!radix_enabled()) 876 - return -ENXIO; 877 - 878 - if (!(np = pnv_pci_get_phb_node(dev))) 879 - return -ENXIO; 880 - 881 - prop = of_get_property(np, "ibm,phb-indications", NULL); 882 - of_node_put(np); 883 - 884 - if (!prop || !prop[1]) 885 - return -ENXIO; 886 - 887 - *asnind = (u64)be32_to_cpu(prop[1]); 888 - pe = pnv_ioda_get_pe(dev); 889 - if (!pe) 890 - return -ENODEV; 891 - 892 - /* Increase real window size to accept as_notify messages. 
*/ 893 - window_id = (pe->pe_number << 1 ) + 1; 894 - rc = opal_pci_map_pe_dma_window_real(pe->phb->opal_id, pe->pe_number, 895 - window_id, pe->tce_bypass_base, 896 - (uint64_t)1 << 48); 897 - return opal_error_code(rc); 898 - } 899 - EXPORT_SYMBOL_GPL(pnv_pci_enable_tunnel); 900 - 901 - int pnv_pci_disable_tunnel(struct pci_dev *dev) 902 - { 903 - struct pnv_ioda_pe *pe; 904 - 905 - pe = pnv_ioda_get_pe(dev); 906 - if (!pe) 907 - return -ENODEV; 908 - 909 - /* Restore default real window size. */ 910 - pnv_pci_ioda2_set_bypass(pe, true); 911 - return 0; 912 - } 913 - EXPORT_SYMBOL_GPL(pnv_pci_disable_tunnel); 914 940 915 941 int pnv_pci_set_tunnel_bar(struct pci_dev *dev, u64 addr, int enable) 916 942 { ··· 917 1039 return rc; 918 1040 } 919 1041 EXPORT_SYMBOL_GPL(pnv_pci_set_tunnel_bar); 920 - 921 - #ifdef CONFIG_PPC64 /* for thread.tidr */ 922 - int pnv_pci_get_as_notify_info(struct task_struct *task, u32 *lpid, u32 *pid, 923 - u32 *tid) 924 - { 925 - struct mm_struct *mm = NULL; 926 - 927 - if (task == NULL) 928 - return -EINVAL; 929 - 930 - mm = get_task_mm(task); 931 - if (mm == NULL) 932 - return -EINVAL; 933 - 934 - *pid = mm->context.id; 935 - mmput(mm); 936 - 937 - *tid = task->thread.tidr; 938 - *lpid = mfspr(SPRN_LPID); 939 - return 0; 940 - } 941 - EXPORT_SYMBOL_GPL(pnv_pci_get_as_notify_info); 942 - #endif 943 1042 944 1043 void pnv_pci_shutdown(void) 945 1044 {
-6
arch/powerpc/platforms/powernv/pci.h
···
79 79 	struct pnv_ioda_pe	*master;
80 80 	struct list_head	slaves;
81 81 
82 -  	/* PCI peer-to-peer*/
83 -  	int			p2p_initiator_count;
84 -  
85 82 	/* Link in list of PE#s */
86 83 	struct list_head	list;
87 84 };
···
169 172 	/* PHB and hub diagnostics */
170 173 	unsigned int		diag_data_size;
171 174 	u8			*diag_data;
172 -   
173 -   	int			p2p_target_count;
174 175 };
175 176 
176 177 extern struct pci_ops pnv_pci_ops;
···
195 200 extern void pnv_teardown_msi_irqs(struct pci_dev *pdev);
196 201 extern struct pnv_ioda_pe *pnv_ioda_get_pe(struct pci_dev *dev);
197 202 extern void pnv_set_msi_irq_chip(struct pnv_phb *phb, unsigned int virq);
198 -   extern void pnv_pci_ioda2_set_bypass(struct pnv_ioda_pe *pe, bool enable);
199 203 extern unsigned long pnv_pci_ioda2_get_table_size(__u32 page_shift,
200 204 		__u64 window_size, __u32 levels);
201 205 extern int pnv_eeh_post_init(void);
-19
arch/powerpc/platforms/powernv/vas-window.c
···
40 40 	pr_debug("Txwin #%d: Paste addr 0x%llx\n", winid, *addr);
41 41 }
42 42 
43 -  u64 vas_win_paste_addr(struct vas_window *win)
44 -  {
45 -  	u64 addr;
46 -  
47 -  	compute_paste_address(win, &addr, NULL);
48 -  
49 -  	return addr;
50 -  }
51 -  EXPORT_SYMBOL(vas_win_paste_addr);
52 -  
53 43 static inline void get_hvwc_mmio_bar(struct vas_window *window,
54 44 			u64 *start, int *len)
55 45 {
···
1254 1264 	return 0;
1255 1265 }
1256 1266 EXPORT_SYMBOL_GPL(vas_win_close);
1257 -    
1258 -    /*
1259 -     * Return a system-wide unique window id for the window @win.
1260 -     */
1261 -    u32 vas_win_id(struct vas_window *win)
1262 -    {
1263 -    	return encode_pswid(win->vinst->vas_id, win->winid);
1264 -    }
1265 -    EXPORT_SYMBOL_GPL(vas_win_id);
-20
arch/powerpc/platforms/powernv/vas.h
···
444 444 	return in_be64(win->hvwc_map+reg);
445 445 }
446 446 
447 -  /*
448 -   * Encode/decode the Partition Send Window ID (PSWID) for a window in
449 -   * a way that we can uniquely identify any window in the system. i.e.
450 -   * we should be able to locate the 'struct vas_window' given the PSWID.
451 -   *
452 -   * Bits     Usage
453 -   * 0:7      VAS id (8 bits)
454 -   * 8:15     Unused, 0 (8 bits)
455 -   * 16:31    Window id (16 bits)
456 -   */
457 -  static inline u32 encode_pswid(int vasid, int winid)
458 -  {
459 -  	u32 pswid = 0;
460 -  
461 -  	pswid |= vasid << (31 - 7);
462 -  	pswid |= winid;
463 -  
464 -  	return pswid;
465 -  }
466 -  
467 447 static inline void decode_pswid(u32 pswid, int *vasid, int *winid)
468 448 {
469 449 	if (vasid)
+10 -9
arch/powerpc/platforms/pseries/Kconfig
···
23 23 	select ARCH_RANDOM
24 24 	select PPC_DOORBELL
25 25 	select FORCE_SMP
26 +  	select SWIOTLB
26 27 	default y
27 28 
28 29 config PPC_SPLPAR
···
81 80 	bool "LPAR Configuration Data"
82 81 	depends on PPC_PSERIES
83 82 	help
84 -  	  Provide system capacity information via human readable
85 -  	  <key word>=<value> pairs through a /proc/ppc64/lparcfg interface.
83 +  	  Provide system capacity information via human readable
84 +  	  <key word>=<value> pairs through a /proc/ppc64/lparcfg interface.
86 85 
87 86 config PPC_PSERIES_DEBUG
88 87 	depends on PPC_PSERIES && PPC_EARLY_DEBUG
89 88 	bool "Enable extra debug logging in platforms/pseries"
90 -  	help
89 +  	default y
90 +  	help
91 91 	  Say Y here if you want the pseries core to produce a bunch of
92 92 	  debug messages to the system log. Select this if you are having a
93 93 	  problem with the pseries core and want to see more of what is
94 94 	  going on. This does not enable debugging in lpar.c, which must
95 95 	  be manually done due to its verbosity.
96 -  	default y
97 96 
98 97 config PPC_SMLPAR
99 98 	bool "Support for shared-memory logical partitions"
···
118 117 	  balance memory across many LPARs.
119 118 
120 119 config HV_PERF_CTRS
121 -   	bool "Hypervisor supplied PMU events (24x7 & GPCI)"
122 -   	default y
123 -   	depends on PERF_EVENTS && PPC_PSERIES
124 -   	help
120 +   	bool "Hypervisor supplied PMU events (24x7 & GPCI)"
121 +   	default y
122 +   	depends on PERF_EVENTS && PPC_PSERIES
123 +   	help
125 124 	  Enable access to hypervisor supplied counters in perf. Currently,
126 125 	  this enables code that uses the hcall GetPerfCounterInfo and 24x7
127 126 	  interfaces to retrieve counters. GPCI exists on Power 6 and later
128 127 	  systems. 24x7 is available on Power 8 and later systems.
129 128 
130 -   	  If unsure, select Y.
129 +   	  If unsure, select Y.
131 130 
132 131 config IBMVIO
133 132 	depends on PPC_PSERIES
+1
arch/powerpc/platforms/pseries/Makefile
···
25 25 obj-$(CONFIG_IBMVIO)		+= vio.o
26 26 obj-$(CONFIG_IBMEBUS)		+= ibmebus.o
27 27 obj-$(CONFIG_PAPR_SCM)		+= papr_scm.o
28 +  obj-$(CONFIG_PPC_SPLPAR)	+= vphn.o
28 29 
29 30 ifdef CONFIG_PPC_PSERIES
30 31 obj-$(CONFIG_SUSPEND)		+= suspend.o
+8 -4
arch/powerpc/platforms/pseries/dlpar.c
···
58 58 
59 59 	name = (char *)ccwa + be32_to_cpu(ccwa->name_offset);
60 60 	prop->name = kstrdup(name, GFP_KERNEL);
61 +  	if (!prop->name) {
62 +  		dlpar_free_cc_property(prop);
63 +  		return NULL;
64 +  	}
61 65 
62 66 	prop->length = be32_to_cpu(ccwa->prop_length);
63 67 	value = (char *)ccwa + be32_to_cpu(ccwa->prop_offset);
···
387 383 	struct pseries_hp_work *work;
388 384 	struct pseries_hp_errorlog *hp_errlog_copy;
389 385 
390 -   	hp_errlog_copy = kmalloc(sizeof(struct pseries_hp_errorlog),
391 -   				 GFP_KERNEL);
392 -   	memcpy(hp_errlog_copy, hp_errlog, sizeof(struct pseries_hp_errorlog));
386 +   	hp_errlog_copy = kmemdup(hp_errlog, sizeof(*hp_errlog), GFP_ATOMIC);
387 +   	if (!hp_errlog_copy)
388 +   		return;
393 389 
394 390 	work = kmalloc(sizeof(struct pseries_hp_work), GFP_ATOMIC);
395 391 	if (work) {
396 392 		INIT_WORK((struct work_struct *)work, pseries_hp_work_fn);
397 393 		work->errlog = hp_errlog_copy;
+12 -11
arch/powerpc/platforms/pseries/dtl.c
···
27 27 };
28 28 static DEFINE_PER_CPU(struct dtl, cpu_dtl);
29 29 
30 -  /*
31 -   * Dispatch trace log event mask:
32 -   * 0x7: 0x1: voluntary virtual processor waits
33 -   *      0x2: time-slice preempts
34 -   *      0x4: virtual partition memory page faults
35 -   */
36 -  static u8 dtl_event_mask = 0x7;
30 +  static u8 dtl_event_mask = DTL_LOG_ALL;
37 31 
38 32 
39 33 /*
···
42 48 	struct dtl_entry *write_ptr;
43 49 	struct dtl_entry *buf;
44 50 	struct dtl_entry *buf_end;
45 -  	u8 saved_dtl_mask;
46 51 };
47 52 
48 53 static DEFINE_PER_CPU(struct dtl_ring, dtl_rings);
···
91 98 	dtlr->write_ptr = dtl->buf;
92 99 
93 100 	/* enable event logging */
94 -   	dtlr->saved_dtl_mask = lppaca_of(dtl->cpu).dtl_enable_mask;
95 101 	lppaca_of(dtl->cpu).dtl_enable_mask |= dtl_event_mask;
96 102 
97 103 	dtl_consumer = consume_dtle;
···
108 116 	dtlr->buf = NULL;
109 117 
110 118 	/* restore dtl_enable_mask */
111 -   	lppaca_of(dtl->cpu).dtl_enable_mask = dtlr->saved_dtl_mask;
119 +   	lppaca_of(dtl->cpu).dtl_enable_mask = DTL_LOG_PREEMPT;
112 120 
113 121 	if (atomic_dec_and_test(&dtl_count))
114 122 		dtl_consumer = NULL;
···
180 188 	if (dtl->buf)
181 189 		return -EBUSY;
182 190 
191 +   	/* ensure there are no other conflicting dtl users */
192 +   	if (!read_trylock(&dtl_access_lock))
193 +   		return -EBUSY;
194 +   
183 195 	n_entries = dtl_buf_entries;
184 196 	buf = kmem_cache_alloc_node(dtl_cache, GFP_KERNEL, cpu_to_node(dtl->cpu));
185 197 	if (!buf) {
186 198 		printk(KERN_WARNING "%s: buffer alloc failed for cpu %d\n",
187 199 				__func__, dtl->cpu);
200 +   		read_unlock(&dtl_access_lock);
188 201 		return -ENOMEM;
189 202 	}
190 203 
···
206 209 	}
207 210 	spin_unlock(&dtl->lock);
208 211 
209 -   	if (rc)
212 +   	if (rc) {
213 +   		read_unlock(&dtl_access_lock);
210 214 		kmem_cache_free(dtl_cache, buf);
215 +   	}
216 +   
211 217 	return rc;
212 218 }
213 219 
···
222 222 	dtl->buf = NULL;
223 223 	dtl->buf_entries = 0;
224 224 	spin_unlock(&dtl->lock);
225 +   	read_unlock(&dtl_access_lock);
225 226 }
226 227 
227 228 /* file interface */
+3
arch/powerpc/platforms/pseries/hotplug-memory.c
···
976 976 	if (!memblock_size)
977 977 		return -EINVAL;
978 978 
979 +   	if (!pr->old_prop)
980 +   		return 0;
981 +   
979 982 	p = (__be32 *) pr->old_prop->value;
980 983 	if (!p)
981 984 		return -EINVAL;
+1 -1
arch/powerpc/platforms/pseries/hvconsole.c
···
49 49  * @vtermno: The vtermno or unit_address of the adapter from which the data
50 50  *	originated.
51 51  * @buf: The character buffer that contains the character data to send to
52 -   *	firmware.
52 +   *	firmware. Must be at least 16 bytes, even if count is less than 16.
53 53  * @count: Send this number of characters.
54 54  */
+585 -18
arch/powerpc/platforms/pseries/lpar.c
··· 17 17 #include <linux/jump_label.h> 18 18 #include <linux/delay.h> 19 19 #include <linux/stop_machine.h> 20 + #include <linux/spinlock.h> 21 + #include <linux/cpuhotplug.h> 22 + #include <linux/workqueue.h> 23 + #include <linux/proc_fs.h> 20 24 #include <asm/processor.h> 21 25 #include <asm/mmu.h> 22 26 #include <asm/page.h> ··· 56 52 EXPORT_SYMBOL(plpar_hcall9); 57 53 EXPORT_SYMBOL(plpar_hcall_norets); 58 54 55 + #ifdef CONFIG_VIRT_CPU_ACCOUNTING_NATIVE 56 + static u8 dtl_mask = DTL_LOG_PREEMPT; 57 + #else 58 + static u8 dtl_mask; 59 + #endif 60 + 61 + void alloc_dtl_buffers(unsigned long *time_limit) 62 + { 63 + int cpu; 64 + struct paca_struct *pp; 65 + struct dtl_entry *dtl; 66 + 67 + for_each_possible_cpu(cpu) { 68 + pp = paca_ptrs[cpu]; 69 + if (pp->dispatch_log) 70 + continue; 71 + dtl = kmem_cache_alloc(dtl_cache, GFP_KERNEL); 72 + if (!dtl) { 73 + pr_warn("Failed to allocate dispatch trace log for cpu %d\n", 74 + cpu); 75 + #ifdef CONFIG_VIRT_CPU_ACCOUNTING_NATIVE 76 + pr_warn("Stolen time statistics will be unreliable\n"); 77 + #endif 78 + break; 79 + } 80 + 81 + pp->dtl_ridx = 0; 82 + pp->dispatch_log = dtl; 83 + pp->dispatch_log_end = dtl + N_DISPATCH_LOG; 84 + pp->dtl_curr = dtl; 85 + 86 + if (time_limit && time_after(jiffies, *time_limit)) { 87 + cond_resched(); 88 + *time_limit = jiffies + HZ; 89 + } 90 + } 91 + } 92 + 93 + void register_dtl_buffer(int cpu) 94 + { 95 + long ret; 96 + struct paca_struct *pp; 97 + struct dtl_entry *dtl; 98 + int hwcpu = get_hard_smp_processor_id(cpu); 99 + 100 + pp = paca_ptrs[cpu]; 101 + dtl = pp->dispatch_log; 102 + if (dtl && dtl_mask) { 103 + pp->dtl_ridx = 0; 104 + pp->dtl_curr = dtl; 105 + lppaca_of(cpu).dtl_idx = 0; 106 + 107 + /* hypervisor reads buffer length from this field */ 108 + dtl->enqueue_to_dispatch_time = cpu_to_be32(DISPATCH_LOG_BYTES); 109 + ret = register_dtl(hwcpu, __pa(dtl)); 110 + if (ret) 111 + pr_err("WARNING: DTL registration of cpu %d (hw %d) failed with %ld\n", 112 + cpu, hwcpu, ret); 
113 + 114 + lppaca_of(cpu).dtl_enable_mask = dtl_mask; 115 + } 116 + } 117 + 118 + #ifdef CONFIG_PPC_SPLPAR 119 + struct dtl_worker { 120 + struct delayed_work work; 121 + int cpu; 122 + }; 123 + 124 + struct vcpu_dispatch_data { 125 + int last_disp_cpu; 126 + 127 + int total_disp; 128 + 129 + int same_cpu_disp; 130 + int same_chip_disp; 131 + int diff_chip_disp; 132 + int far_chip_disp; 133 + 134 + int numa_home_disp; 135 + int numa_remote_disp; 136 + int numa_far_disp; 137 + }; 138 + 139 + /* 140 + * This represents the number of cpus in the hypervisor. Since there is no 141 + * architected way to discover the number of processors in the host, we 142 + * provision for dealing with NR_CPUS. This is currently 2048 by default, and 143 + * is sufficient for our purposes. This will need to be tweaked if 144 + * CONFIG_NR_CPUS is changed. 145 + */ 146 + #define NR_CPUS_H NR_CPUS 147 + 148 + DEFINE_RWLOCK(dtl_access_lock); 149 + static DEFINE_PER_CPU(struct vcpu_dispatch_data, vcpu_disp_data); 150 + static DEFINE_PER_CPU(u64, dtl_entry_ridx); 151 + static DEFINE_PER_CPU(struct dtl_worker, dtl_workers); 152 + static enum cpuhp_state dtl_worker_state; 153 + static DEFINE_MUTEX(dtl_enable_mutex); 154 + static int vcpudispatch_stats_on __read_mostly; 155 + static int vcpudispatch_stats_freq = 50; 156 + static __be32 *vcpu_associativity, *pcpu_associativity; 157 + 158 + 159 + static void free_dtl_buffers(unsigned long *time_limit) 160 + { 161 + #ifndef CONFIG_VIRT_CPU_ACCOUNTING_NATIVE 162 + int cpu; 163 + struct paca_struct *pp; 164 + 165 + for_each_possible_cpu(cpu) { 166 + pp = paca_ptrs[cpu]; 167 + if (!pp->dispatch_log) 168 + continue; 169 + kmem_cache_free(dtl_cache, pp->dispatch_log); 170 + pp->dtl_ridx = 0; 171 + pp->dispatch_log = 0; 172 + pp->dispatch_log_end = 0; 173 + pp->dtl_curr = 0; 174 + 175 + if (time_limit && time_after(jiffies, *time_limit)) { 176 + cond_resched(); 177 + *time_limit = jiffies + HZ; 178 + } 179 + } 180 + #endif 181 + } 182 + 183 + static 
int init_cpu_associativity(void) 184 + { 185 + vcpu_associativity = kcalloc(num_possible_cpus() / threads_per_core, 186 + VPHN_ASSOC_BUFSIZE * sizeof(__be32), GFP_KERNEL); 187 + pcpu_associativity = kcalloc(NR_CPUS_H / threads_per_core, 188 + VPHN_ASSOC_BUFSIZE * sizeof(__be32), GFP_KERNEL); 189 + 190 + if (!vcpu_associativity || !pcpu_associativity) { 191 + pr_err("error allocating memory for associativity information\n"); 192 + return -ENOMEM; 193 + } 194 + 195 + return 0; 196 + } 197 + 198 + static void destroy_cpu_associativity(void) 199 + { 200 + kfree(vcpu_associativity); 201 + kfree(pcpu_associativity); 202 + vcpu_associativity = pcpu_associativity = 0; 203 + } 204 + 205 + static __be32 *__get_cpu_associativity(int cpu, __be32 *cpu_assoc, int flag) 206 + { 207 + __be32 *assoc; 208 + int rc = 0; 209 + 210 + assoc = &cpu_assoc[(int)(cpu / threads_per_core) * VPHN_ASSOC_BUFSIZE]; 211 + if (!assoc[0]) { 212 + rc = hcall_vphn(cpu, flag, &assoc[0]); 213 + if (rc) 214 + return NULL; 215 + } 216 + 217 + return assoc; 218 + } 219 + 220 + static __be32 *get_pcpu_associativity(int cpu) 221 + { 222 + return __get_cpu_associativity(cpu, pcpu_associativity, VPHN_FLAG_PCPU); 223 + } 224 + 225 + static __be32 *get_vcpu_associativity(int cpu) 226 + { 227 + return __get_cpu_associativity(cpu, vcpu_associativity, VPHN_FLAG_VCPU); 228 + } 229 + 230 + static int cpu_relative_dispatch_distance(int last_disp_cpu, int cur_disp_cpu) 231 + { 232 + __be32 *last_disp_cpu_assoc, *cur_disp_cpu_assoc; 233 + 234 + if (last_disp_cpu >= NR_CPUS_H || cur_disp_cpu >= NR_CPUS_H) 235 + return -EINVAL; 236 + 237 + last_disp_cpu_assoc = get_pcpu_associativity(last_disp_cpu); 238 + cur_disp_cpu_assoc = get_pcpu_associativity(cur_disp_cpu); 239 + 240 + if (!last_disp_cpu_assoc || !cur_disp_cpu_assoc) 241 + return -EIO; 242 + 243 + return cpu_distance(last_disp_cpu_assoc, cur_disp_cpu_assoc); 244 + } 245 + 246 + static int cpu_home_node_dispatch_distance(int disp_cpu) 247 + { 248 + __be32 
*disp_cpu_assoc, *vcpu_assoc; 249 + int vcpu_id = smp_processor_id(); 250 + 251 + if (disp_cpu >= NR_CPUS_H) { 252 + pr_debug_ratelimited("vcpu dispatch cpu %d > %d\n", 253 + disp_cpu, NR_CPUS_H); 254 + return -EINVAL; 255 + } 256 + 257 + disp_cpu_assoc = get_pcpu_associativity(disp_cpu); 258 + vcpu_assoc = get_vcpu_associativity(vcpu_id); 259 + 260 + if (!disp_cpu_assoc || !vcpu_assoc) 261 + return -EIO; 262 + 263 + return cpu_distance(disp_cpu_assoc, vcpu_assoc); 264 + } 265 + 266 + static void update_vcpu_disp_stat(int disp_cpu) 267 + { 268 + struct vcpu_dispatch_data *disp; 269 + int distance; 270 + 271 + disp = this_cpu_ptr(&vcpu_disp_data); 272 + if (disp->last_disp_cpu == -1) { 273 + disp->last_disp_cpu = disp_cpu; 274 + return; 275 + } 276 + 277 + disp->total_disp++; 278 + 279 + if (disp->last_disp_cpu == disp_cpu || 280 + (cpu_first_thread_sibling(disp->last_disp_cpu) == 281 + cpu_first_thread_sibling(disp_cpu))) 282 + disp->same_cpu_disp++; 283 + else { 284 + distance = cpu_relative_dispatch_distance(disp->last_disp_cpu, 285 + disp_cpu); 286 + if (distance < 0) 287 + pr_debug_ratelimited("vcpudispatch_stats: cpu %d: error determining associativity\n", 288 + smp_processor_id()); 289 + else { 290 + switch (distance) { 291 + case 0: 292 + disp->same_chip_disp++; 293 + break; 294 + case 1: 295 + disp->diff_chip_disp++; 296 + break; 297 + case 2: 298 + disp->far_chip_disp++; 299 + break; 300 + default: 301 + pr_debug_ratelimited("vcpudispatch_stats: cpu %d (%d -> %d): unexpected relative dispatch distance %d\n", 302 + smp_processor_id(), 303 + disp->last_disp_cpu, 304 + disp_cpu, 305 + distance); 306 + } 307 + } 308 + } 309 + 310 + distance = cpu_home_node_dispatch_distance(disp_cpu); 311 + if (distance < 0) 312 + pr_debug_ratelimited("vcpudispatch_stats: cpu %d: error determining associativity\n", 313 + smp_processor_id()); 314 + else { 315 + switch (distance) { 316 + case 0: 317 + disp->numa_home_disp++; 318 + break; 319 + case 1: 320 + 
+		disp->numa_remote_disp++;
+		break;
+	case 2:
+		disp->numa_far_disp++;
+		break;
+	default:
+		pr_debug_ratelimited("vcpudispatch_stats: cpu %d on %d: unexpected numa dispatch distance %d\n",
+				 smp_processor_id(), disp_cpu, distance);
+	}
+	}
+
+	disp->last_disp_cpu = disp_cpu;
+}
+
+static void process_dtl_buffer(struct work_struct *work)
+{
+	struct dtl_entry dtle;
+	u64 i = __this_cpu_read(dtl_entry_ridx);
+	struct dtl_entry *dtl = local_paca->dispatch_log + (i % N_DISPATCH_LOG);
+	struct dtl_entry *dtl_end = local_paca->dispatch_log_end;
+	struct lppaca *vpa = local_paca->lppaca_ptr;
+	struct dtl_worker *d = container_of(work, struct dtl_worker, work.work);
+
+	if (!local_paca->dispatch_log)
+		return;
+
+	/* if we have been migrated away, we cancel ourself */
+	if (d->cpu != smp_processor_id()) {
+		pr_debug("vcpudispatch_stats: cpu %d worker migrated -- canceling worker\n",
+				smp_processor_id());
+		return;
+	}
+
+	if (i == be64_to_cpu(vpa->dtl_idx))
+		goto out;
+
+	while (i < be64_to_cpu(vpa->dtl_idx)) {
+		dtle = *dtl;
+		barrier();
+		if (i + N_DISPATCH_LOG < be64_to_cpu(vpa->dtl_idx)) {
+			/* buffer has overflowed */
+			pr_debug_ratelimited("vcpudispatch_stats: cpu %d lost %lld DTL samples\n",
+				d->cpu,
+				be64_to_cpu(vpa->dtl_idx) - N_DISPATCH_LOG - i);
+			i = be64_to_cpu(vpa->dtl_idx) - N_DISPATCH_LOG;
+			dtl = local_paca->dispatch_log + (i % N_DISPATCH_LOG);
+			continue;
+		}
+		update_vcpu_disp_stat(be16_to_cpu(dtle.processor_id));
+		++i;
+		++dtl;
+		if (dtl == dtl_end)
+			dtl = local_paca->dispatch_log;
+	}
+
+	__this_cpu_write(dtl_entry_ridx, i);
+
+out:
+	schedule_delayed_work_on(d->cpu, to_delayed_work(work),
+					HZ / vcpudispatch_stats_freq);
+}
+
+static int dtl_worker_online(unsigned int cpu)
+{
+	struct dtl_worker *d = &per_cpu(dtl_workers, cpu);
+
+	memset(d, 0, sizeof(*d));
+	INIT_DELAYED_WORK(&d->work, process_dtl_buffer);
+	d->cpu = cpu;
+
+#ifndef CONFIG_VIRT_CPU_ACCOUNTING_NATIVE
+	per_cpu(dtl_entry_ridx, cpu) = 0;
+	register_dtl_buffer(cpu);
+#else
+	per_cpu(dtl_entry_ridx, cpu) = be64_to_cpu(lppaca_of(cpu).dtl_idx);
+#endif
+
+	schedule_delayed_work_on(cpu, &d->work, HZ / vcpudispatch_stats_freq);
+	return 0;
+}
+
+static int dtl_worker_offline(unsigned int cpu)
+{
+	struct dtl_worker *d = &per_cpu(dtl_workers, cpu);
+
+	cancel_delayed_work_sync(&d->work);
+
+#ifndef CONFIG_VIRT_CPU_ACCOUNTING_NATIVE
+	unregister_dtl(get_hard_smp_processor_id(cpu));
+#endif
+
+	return 0;
+}
+
+static void set_global_dtl_mask(u8 mask)
+{
+	int cpu;
+
+	dtl_mask = mask;
+	for_each_present_cpu(cpu)
+		lppaca_of(cpu).dtl_enable_mask = dtl_mask;
+}
+
+static void reset_global_dtl_mask(void)
+{
+	int cpu;
+
+#ifdef CONFIG_VIRT_CPU_ACCOUNTING_NATIVE
+	dtl_mask = DTL_LOG_PREEMPT;
+#else
+	dtl_mask = 0;
+#endif
+	for_each_present_cpu(cpu)
+		lppaca_of(cpu).dtl_enable_mask = dtl_mask;
+}
+
+static int dtl_worker_enable(unsigned long *time_limit)
+{
+	int rc = 0, state;
+
+	if (!write_trylock(&dtl_access_lock)) {
+		rc = -EBUSY;
+		goto out;
+	}
+
+	set_global_dtl_mask(DTL_LOG_ALL);
+
+	/* Setup dtl buffers and register those */
+	alloc_dtl_buffers(time_limit);
+
+	state = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "powerpc/dtl:online",
+					dtl_worker_online, dtl_worker_offline);
+	if (state < 0) {
+		pr_err("vcpudispatch_stats: unable to setup workqueue for DTL processing\n");
+		free_dtl_buffers(time_limit);
+		reset_global_dtl_mask();
+		write_unlock(&dtl_access_lock);
+		rc = -EINVAL;
+		goto out;
+	}
+	dtl_worker_state = state;
+
+out:
+	return rc;
+}
+
+static void dtl_worker_disable(unsigned long *time_limit)
+{
+	cpuhp_remove_state(dtl_worker_state);
+	free_dtl_buffers(time_limit);
+	reset_global_dtl_mask();
+	write_unlock(&dtl_access_lock);
+}
+
+static ssize_t vcpudispatch_stats_write(struct file *file, const char __user *p,
+		size_t count, loff_t *ppos)
+{
+	unsigned long time_limit = jiffies + HZ;
+	struct vcpu_dispatch_data *disp;
+	int rc, cmd, cpu;
+	char buf[16];
+
+	if (count > 15)
+		return -EINVAL;
+
+	if (copy_from_user(buf, p, count))
+		return -EFAULT;
+
+	buf[count] = 0;
+	rc = kstrtoint(buf, 0, &cmd);
+	if (rc || cmd < 0 || cmd > 1) {
+		pr_err("vcpudispatch_stats: please use 0 to disable or 1 to enable dispatch statistics\n");
+		return rc ? rc : -EINVAL;
+	}
+
+	mutex_lock(&dtl_enable_mutex);
+
+	if ((cmd == 0 && !vcpudispatch_stats_on) ||
+			(cmd == 1 && vcpudispatch_stats_on))
+		goto out;
+
+	if (cmd) {
+		rc = init_cpu_associativity();
+		if (rc)
+			goto out;
+
+		for_each_possible_cpu(cpu) {
+			disp = per_cpu_ptr(&vcpu_disp_data, cpu);
+			memset(disp, 0, sizeof(*disp));
+			disp->last_disp_cpu = -1;
+		}
+
+		rc = dtl_worker_enable(&time_limit);
+		if (rc) {
+			destroy_cpu_associativity();
+			goto out;
+		}
+	} else {
+		dtl_worker_disable(&time_limit);
+		destroy_cpu_associativity();
+	}
+
+	vcpudispatch_stats_on = cmd;
+
+out:
+	mutex_unlock(&dtl_enable_mutex);
+	if (rc)
+		return rc;
+	return count;
+}
+
+static int vcpudispatch_stats_display(struct seq_file *p, void *v)
+{
+	int cpu;
+	struct vcpu_dispatch_data *disp;
+
+	if (!vcpudispatch_stats_on) {
+		seq_puts(p, "off\n");
+		return 0;
+	}
+
+	for_each_online_cpu(cpu) {
+		disp = per_cpu_ptr(&vcpu_disp_data, cpu);
+		seq_printf(p, "cpu%d", cpu);
+		seq_put_decimal_ull(p, " ", disp->total_disp);
+		seq_put_decimal_ull(p, " ", disp->same_cpu_disp);
+		seq_put_decimal_ull(p, " ", disp->same_chip_disp);
+		seq_put_decimal_ull(p, " ", disp->diff_chip_disp);
+		seq_put_decimal_ull(p, " ", disp->far_chip_disp);
+		seq_put_decimal_ull(p, " ", disp->numa_home_disp);
+		seq_put_decimal_ull(p, " ", disp->numa_remote_disp);
+		seq_put_decimal_ull(p, " ", disp->numa_far_disp);
+		seq_puts(p, "\n");
+	}
+
+	return 0;
+}
+
+static int vcpudispatch_stats_open(struct inode *inode, struct file *file)
+{
+	return single_open(file, vcpudispatch_stats_display, NULL);
+}
+
+static const struct file_operations vcpudispatch_stats_proc_ops = {
+	.open		= vcpudispatch_stats_open,
+	.read		= seq_read,
+	.write		= vcpudispatch_stats_write,
+	.llseek		= seq_lseek,
+	.release	= single_release,
+};
+
+static ssize_t vcpudispatch_stats_freq_write(struct file *file,
+		const char __user *p, size_t count, loff_t *ppos)
+{
+	int rc, freq;
+	char buf[16];
+
+	if (count > 15)
+		return -EINVAL;
+
+	if (copy_from_user(buf, p, count))
+		return -EFAULT;
+
+	buf[count] = 0;
+	rc = kstrtoint(buf, 0, &freq);
+	if (rc || freq < 1 || freq > HZ) {
+		pr_err("vcpudispatch_stats_freq: please specify a frequency between 1 and %d\n",
+				HZ);
+		return rc ? rc : -EINVAL;
+	}
+
+	vcpudispatch_stats_freq = freq;
+
+	return count;
+}
+
+static int vcpudispatch_stats_freq_display(struct seq_file *p, void *v)
+{
+	seq_printf(p, "%d\n", vcpudispatch_stats_freq);
+	return 0;
+}
+
+static int vcpudispatch_stats_freq_open(struct inode *inode, struct file *file)
+{
+	return single_open(file, vcpudispatch_stats_freq_display, NULL);
+}
+
+static const struct file_operations vcpudispatch_stats_freq_proc_ops = {
+	.open		= vcpudispatch_stats_freq_open,
+	.read		= seq_read,
+	.write		= vcpudispatch_stats_freq_write,
+	.llseek		= seq_lseek,
+	.release	= single_release,
+};
+
+static int __init vcpudispatch_stats_procfs_init(void)
+{
+	if (!lppaca_shared_proc(get_lppaca()))
+		return 0;
+
+	if (!proc_create("powerpc/vcpudispatch_stats", 0600, NULL,
+					&vcpudispatch_stats_proc_ops))
+		pr_err("vcpudispatch_stats: error creating procfs file\n");
+	else if (!proc_create("powerpc/vcpudispatch_stats_freq", 0600, NULL,
+					&vcpudispatch_stats_freq_proc_ops))
+		pr_err("vcpudispatch_stats_freq: error creating procfs file\n");
+
+	return 0;
+}
+
+machine_device_initcall(pseries, vcpudispatch_stats_procfs_init);
+#endif /* CONFIG_PPC_SPLPAR */
+
 void vpa_init(int cpu)
 {
 	int hwcpu = get_hard_smp_processor_id(cpu);
 	unsigned long addr;
 	long ret;
-	struct paca_struct *pp;
-	struct dtl_entry *dtl;
 
 	/*
 	 * The spec says it "may be problematic" if CPU x registers the VPA of
···
 	/*
 	 * Register dispatch trace log, if one has been allocated.
 	 */
-	pp = paca_ptrs[cpu];
-	dtl = pp->dispatch_log;
-	if (dtl) {
-		pp->dtl_ridx = 0;
-		pp->dtl_curr = dtl;
-		lppaca_of(cpu).dtl_idx = 0;
-
-		/* hypervisor reads buffer length from this field */
-		dtl->enqueue_to_dispatch_time = cpu_to_be32(DISPATCH_LOG_BYTES);
-		ret = register_dtl(hwcpu, __pa(dtl));
-		if (ret)
-			pr_err("WARNING: DTL registration of cpu %d (hw %d) "
-			       "failed with %ld\n", smp_processor_id(),
-			       hwcpu, ret);
-		lppaca_of(cpu).dtl_enable_mask = 2;
-	}
+	register_dtl_buffer(cpu);
 }
 
 #ifdef CONFIG_PPC_BOOK3S_64
+19
arch/powerpc/platforms/pseries/mobility.c
···
  * Copyright (C) 2010 IBM Corporation
  */
 
+#include <linux/cpu.h>
 #include <linux/kernel.h>
 #include <linux/kobject.h>
 #include <linux/smp.h>
···
 #include <asm/machdep.h>
 #include <asm/rtas.h>
 #include "pseries.h"
+#include "../../kernel/cacheinfo.h"
 
 static struct kobject *mobility_kobj;
 
···
 	if (rc)
 		printk(KERN_ERR "Post-mobility activate-fw failed: %d\n", rc);
 
+	/*
+	 * We don't want CPUs to go online/offline while the device
+	 * tree is being updated.
+	 */
+	cpus_read_lock();
+
+	/*
+	 * It's common for the destination firmware to replace cache
+	 * nodes. Release all of the cacheinfo hierarchy's references
+	 * before updating the device tree.
+	 */
+	cacheinfo_teardown();
+
 	rc = pseries_devicetree_update(MIGRATION_SCOPE);
 	if (rc)
 		printk(KERN_ERR "Post-mobility device tree update "
 				"failed: %d\n", rc);
+
+	cacheinfo_rebuild();
+
+	cpus_read_unlock();
 
 	/* Possibly switch to a new RFI flush type */
 	pseries_setup_rfi_flush();
+94 -21
arch/powerpc/platforms/pseries/papr_scm.c
···
 	uint64_t blocks;
 	uint64_t block_size;
 	int metadata_size;
+	bool is_volatile;
 
 	uint64_t bound_addr;
···
 }
 
 static int papr_scm_meta_get(struct papr_scm_priv *p,
-		struct nd_cmd_get_config_data_hdr *hdr)
+			     struct nd_cmd_get_config_data_hdr *hdr)
 {
 	unsigned long data[PLPAR_HCALL_BUFSIZE];
+	unsigned long offset, data_offset;
+	int len, read;
 	int64_t ret;
 
-	if (hdr->in_offset >= p->metadata_size || hdr->in_length != 1)
+	if ((hdr->in_offset + hdr->in_length) >= p->metadata_size)
 		return -EINVAL;
 
-	ret = plpar_hcall(H_SCM_READ_METADATA, data, p->drc_index,
-			hdr->in_offset, 1);
-
-	if (ret == H_PARAMETER) /* bad DRC index */
-		return -ENODEV;
-	if (ret)
-		return -EINVAL; /* other invalid parameter */
-
-	hdr->out_buf[0] = data[0] & 0xff;
+	for (len = hdr->in_length; len; len -= read) {
+
+		data_offset = hdr->in_length - len;
+		offset = hdr->in_offset + data_offset;
+
+		if (len >= 8)
+			read = 8;
+		else if (len >= 4)
+			read = 4;
+		else if (len >= 2)
+			read = 2;
+		else
+			read = 1;
+
+		ret = plpar_hcall(H_SCM_READ_METADATA, data, p->drc_index,
+				  offset, read);
+
+		if (ret == H_PARAMETER) /* bad DRC index */
+			return -ENODEV;
+		if (ret)
+			return -EINVAL; /* other invalid parameter */
+
+		switch (read) {
+		case 8:
+			*(uint64_t *)(hdr->out_buf + data_offset) = be64_to_cpu(data[0]);
+			break;
+		case 4:
+			*(uint32_t *)(hdr->out_buf + data_offset) = be32_to_cpu(data[0] & 0xffffffff);
+			break;
+
+		case 2:
+			*(uint16_t *)(hdr->out_buf + data_offset) = be16_to_cpu(data[0] & 0xffff);
+			break;
+
+		case 1:
+			*(uint8_t *)(hdr->out_buf + data_offset) = (data[0] & 0xff);
+			break;
+		}
+	}
 	return 0;
 }
 
 static int papr_scm_meta_set(struct papr_scm_priv *p,
-		struct nd_cmd_set_config_hdr *hdr)
+			     struct nd_cmd_set_config_hdr *hdr)
 {
+	unsigned long offset, data_offset;
+	int len, wrote;
+	unsigned long data;
+	__be64 data_be;
 	int64_t ret;
 
-	if (hdr->in_offset >= p->metadata_size || hdr->in_length != 1)
+	if ((hdr->in_offset + hdr->in_length) >= p->metadata_size)
 		return -EINVAL;
 
-	ret = plpar_hcall_norets(H_SCM_WRITE_METADATA,
-			p->drc_index, hdr->in_offset, hdr->in_buf[0], 1);
-
-	if (ret == H_PARAMETER) /* bad DRC index */
-		return -ENODEV;
-	if (ret)
-		return -EINVAL; /* other invalid parameter */
+	for (len = hdr->in_length; len; len -= wrote) {
+
+		data_offset = hdr->in_length - len;
+		offset = hdr->in_offset + data_offset;
+
+		if (len >= 8) {
+			data = *(uint64_t *)(hdr->in_buf + data_offset);
+			data_be = cpu_to_be64(data);
+			wrote = 8;
+		} else if (len >= 4) {
+			data = *(uint32_t *)(hdr->in_buf + data_offset);
+			data &= 0xffffffff;
+			data_be = cpu_to_be32(data);
+			wrote = 4;
+		} else if (len >= 2) {
+			data = *(uint16_t *)(hdr->in_buf + data_offset);
+			data &= 0xffff;
+			data_be = cpu_to_be16(data);
+			wrote = 2;
+		} else {
+			data_be = *(uint8_t *)(hdr->in_buf + data_offset);
+			data_be &= 0xff;
+			wrote = 1;
+		}
+
+		ret = plpar_hcall_norets(H_SCM_WRITE_METADATA, p->drc_index,
+					 offset, data_be, wrote);
+		if (ret == H_PARAMETER) /* bad DRC index */
+			return -ENODEV;
+		if (ret)
+			return -EINVAL; /* other invalid parameter */
+	}
 
 	return 0;
 }
···
 	get_size_hdr = buf;
 
 	get_size_hdr->status = 0;
-	get_size_hdr->max_xfer = 1;
+	get_size_hdr->max_xfer = 8;
 	get_size_hdr->config_size = p->metadata_size;
 	*cmd_rc = 0;
 	break;
···
 	ndr_desc.nd_set = &p->nd_set;
 	set_bit(ND_REGION_PAGEMAP, &ndr_desc.flags);
 
-	p->region = nvdimm_pmem_region_create(p->bus, &ndr_desc);
+	if (p->is_volatile)
+		p->region = nvdimm_volatile_region_create(p->bus, &ndr_desc);
+	else
+		p->region = nvdimm_pmem_region_create(p->bus, &ndr_desc);
 	if (!p->region) {
 		dev_err(dev, "Error registering region %pR from %pOF\n",
 				ndr_desc.res, p->dn);
···
 		return -ENODEV;
 	}
 
+
 	p = kzalloc(sizeof(*p), GFP_KERNEL);
 	if (!p)
 		return -ENOMEM;
···
 	p->drc_index = drc_index;
 	p->block_size = block_size;
 	p->blocks = blocks;
+	p->is_volatile = !of_property_read_bool(dn, "ibm,cache-flush-required");
 
 	/* We just need to ensure that set cookies are unique across */
 	uuid_parse(uuid_str, (uuid_t *) uuid);
-	p->nd_set.cookie1 = uuid[0];
-	p->nd_set.cookie2 = uuid[1];
+	/*
+	 * cookie1 and cookie2 are not really little endian
+	 * we store a little endian representation of the
+	 * uuid str so that we can compare this with the label
+	 * area cookie irrespective of the endian config with which
+	 * the kernel is built.
+	 */
+	p->nd_set.cookie1 = cpu_to_le64(uuid[0]);
+	p->nd_set.cookie2 = cpu_to_le64(uuid[1]);
 
 	/* might be zero */
 	p->metadata_size = metadata_size;
+7 -32
arch/powerpc/platforms/pseries/setup.c
···
 #include <linux/of.h>
 #include <linux/of_pci.h>
 #include <linux/memblock.h>
+#include <linux/swiotlb.h>
 
 #include <asm/mmu.h>
 #include <asm/processor.h>
···
 #include <asm/isa-bridge.h>
 #include <asm/security_features.h>
 #include <asm/asm-const.h>
+#include <asm/swiotlb.h>
 
 #include "pseries.h"
 #include "../../../../drivers/pci/pci.h"
···
  */
 static int alloc_dispatch_logs(void)
 {
-	int cpu, ret;
-	struct paca_struct *pp;
-	struct dtl_entry *dtl;
-
 	if (!firmware_has_feature(FW_FEATURE_SPLPAR))
 		return 0;
 
 	if (!dtl_cache)
 		return 0;
 
-	for_each_possible_cpu(cpu) {
-		pp = paca_ptrs[cpu];
-		dtl = kmem_cache_alloc(dtl_cache, GFP_KERNEL);
-		if (!dtl) {
-			pr_warn("Failed to allocate dispatch trace log for cpu %d\n",
-				cpu);
-			pr_warn("Stolen time statistics will be unreliable\n");
-			break;
-		}
-
-		pp->dtl_ridx = 0;
-		pp->dispatch_log = dtl;
-		pp->dispatch_log_end = dtl + N_DISPATCH_LOG;
-		pp->dtl_curr = dtl;
-	}
+	alloc_dtl_buffers(0);
 
 	/* Register the DTL for the current (boot) cpu */
-	dtl = get_paca()->dispatch_log;
-	get_paca()->dtl_ridx = 0;
-	get_paca()->dtl_curr = dtl;
-	get_paca()->lppaca_ptr->dtl_idx = 0;
-
-	/* hypervisor reads buffer length from this field */
-	dtl->enqueue_to_dispatch_time = cpu_to_be32(DISPATCH_LOG_BYTES);
-	ret = register_dtl(hard_smp_processor_id(), __pa(dtl));
-	if (ret)
-		pr_err("WARNING: DTL registration of cpu %d (hw %d) failed "
-		       "with %d\n", smp_processor_id(),
-		       hard_smp_processor_id(), ret);
-	get_paca()->lppaca_ptr->dtl_enable_mask = 2;
+	register_dtl_buffer(smp_processor_id());
 
 	return 0;
 }
···
 	}
 
 	ppc_md.pcibios_root_bridge_prepare = pseries_root_bridge_prepare;
+
+	if (swiotlb_force == SWIOTLB_FORCE)
+		ppc_swiotlb_enable = 1;
 }
 
 static void pseries_panic(char *str)
+2 -2
arch/powerpc/platforms/pseries/vio.c
···
 
 	if (vio_cmo_alloc(viodev, roundup(size, IOMMU_PAGE_SIZE(tbl))))
 		goto out_fail;
-	ret = iommu_map_page(dev, tbl, page, offset, size, device_to_mask(dev),
+	ret = iommu_map_page(dev, tbl, page, offset, size, dma_get_mask(dev),
 			direction, attrs);
 	if (unlikely(ret == DMA_MAPPING_ERROR))
 		goto out_deallocate;
···
 
 	if (vio_cmo_alloc(viodev, alloc_size))
 		goto out_fail;
-	ret = ppc_iommu_map_sg(dev, tbl, sglist, nelems, device_to_mask(dev),
+	ret = ppc_iommu_map_sg(dev, tbl, sglist, nelems, dma_get_mask(dev),
 			direction, attrs);
 	if (unlikely(!ret))
 		goto out_deallocate;
-2
arch/powerpc/sysdev/Makefile
···
 obj-$(CONFIG_OF_RTC)	+= of_rtc.o
 
 obj-$(CONFIG_CPM)	+= cpm_common.o
-obj-$(CONFIG_CPM1)	+= cpm1.o
 obj-$(CONFIG_CPM2)	+= cpm2.o cpm2_pic.o cpm_gpio.o
 obj-$(CONFIG_8xx_GPIO)	+= cpm_gpio.o
 obj-$(CONFIG_QUICC_ENGINE) += cpm_common.o
 obj-$(CONFIG_PPC_DCR)	+= dcr.o
-obj-$(CONFIG_UCODE_PATCH) += micropatch.o
 
 obj-$(CONFIG_PPC_MPC512x)	+= mpc5xxx_clocks.o
 obj-$(CONFIG_PPC_MPC52xx)	+= mpc5xxx_clocks.o
+13 -11
arch/powerpc/sysdev/cpm1.c arch/powerpc/platforms/8xx/cpm1.c
···
 {
 	int cpm_vec;
 
-	/* Get the vector by setting the ACK bit and then reading
+	/*
+	 * Get the vector by setting the ACK bit and then reading
 	 * the register.
 	 */
 	out_be16(&cpic_reg->cpic_civr, 1);
···
 	return 0;
 }
 
-/* The CPM can generate the error interrupt when there is a race condition
+/*
+ * The CPM can generate the error interrupt when there is a race condition
  * between generating and masking interrupts. All we have to do is ACK it
  * and return. This is a no-op function so we don't need any special
  * tests in the interrupt handler.
···
 	cpmp = &mpc8xx_immr->im_cpm;
 
 #ifndef CONFIG_PPC_EARLY_DEBUG_CPM
-	/* Perform a reset.
-	 */
+	/* Perform a reset. */
 	out_be16(&cpmp->cp_cpcr, CPM_CR_RST | CPM_CR_FLG);
 
-	/* Wait for it.
-	 */
+	/* Wait for it. */
 	while (in_be16(&cpmp->cp_cpcr) & CPM_CR_FLG);
 #endif
···
 	cpm_load_patch(cpmp);
 #endif
 
-	/* Set SDMA Bus Request priority 5.
+	/*
+	 * Set SDMA Bus Request priority 5.
 	 * On 860T, this also enables FEC priority 6. I am not sure
 	 * this is what we really want for some applications, but the
 	 * manual recommends it.
···
 }
 EXPORT_SYMBOL(cpm_command);
 
-/* Set a baud rate generator. This needs lots of work. There are
+/*
+ * Set a baud rate generator. This needs lots of work. There are
  * four BRGs, any of which can be wired to any channel.
  * The internal baud rate clock is the system clock divided by 16.
  * This assumes the baudrate is 16x oversampled by the uart.
···
 {
 	u32 __iomem *bp;
 
-	/* This is good enough to get SMCs running.....
-	 */
+	/* This is good enough to get SMCs running..... */
 	bp = &cpmp->cp_brgc1;
 	bp += brg;
-	/* The BRG has a 12-bit counter.  For really slow baud rates (or
+	/*
+	 * The BRG has a 12-bit counter. For really slow baud rates (or
 	 * really fast processors), we may have to further divide by 16.
 	 */
 	if (((BRG_UART_CLK / rate) - 1) < 4096)
+1 -1
arch/powerpc/sysdev/dart_iommu.c
···
 	unsigned int tmp;
 
 	/* Perform a standard cache flush */
-	flush_dcache_range(start, end);
+	flush_dcache_range(start, end);
 
 	/*
 	 * Perform the sequence described in the CPC925 manual to
-749
arch/powerpc/sysdev/micropatch.c
···
-// SPDX-License-Identifier: GPL-2.0
-
-/* Microcode patches for the CPM as supplied by Motorola.
- * This is the one for IIC/SPI. There is a newer one that
- * also relocates SMC2, but this would require additional changes
- * to uart.c, so I am holding off on that for a moment.
- */
-#include <linux/init.h>
-#include <linux/errno.h>
-#include <linux/sched.h>
-#include <linux/kernel.h>
-#include <linux/param.h>
-#include <linux/string.h>
-#include <linux/mm.h>
-#include <linux/interrupt.h>
-#include <asm/irq.h>
-#include <asm/page.h>
-#include <asm/pgtable.h>
-#include <asm/8xx_immap.h>
-#include <asm/cpm.h>
-#include <asm/cpm1.h>
-
-/*
- * I2C/SPI relocation patch arrays.
- */
-
-#ifdef CONFIG_I2C_SPI_UCODE_PATCH
-
-static uint patch_2000[] __initdata = {
-	0x7FFFEFD9, 0x3FFD0000, 0x7FFB49F7, 0x7FF90000, 0x5FEFADF7, 0x5F89ADF7,
-	0x5FEFAFF7, 0x5F89AFF7, 0x3A9CFBC8, 0xE7C0EDF0, 0x77C1E1BB, 0xF4DC7F1D,
-	0xABAD932F, 0x4E08FDCF, 0x6E0FAFF8, 0x7CCF76CF, 0xFD1FF9CF, 0xABF88DC6,
-	0xAB5679F7, 0xB0937383, 0xDFCE79F7, 0xB091E6BB, 0xE5BBE74F, 0xB3FA6F0F,
-	0x6FFB76CE, 0xEE0DF9CF, 0x2BFBEFEF, 0xCFEEF9CF, 0x76CEAD24, 0x90B2DF9A,
-	0x7FDDD0BF, 0x4BF847FD, 0x7CCF76CE, 0xCFEF7E1F, 0x7F1D7DFD, 0xF0B6EF71,
-	0x7FC177C1, 0xFBC86079, 0xE722FBC8, 0x5FFFDFFF, 0x5FB2FFFB, 0xFBC8F3C8,
-	0x94A67F01, 0x7F1D5F39, 0xAFE85F5E, 0xFFDFDF96, 0xCB9FAF7D, 0x5FC1AFED,
-	0x8C1C5FC1, 0xAFDD5FC3, 0xDF9A7EFD, 0xB0B25FB2, 0xFFFEABAD, 0x5FB2FFFE,
-	0x5FCE600B, 0xE6BB600B, 0x5FCEDFC6, 0x27FBEFDF, 0x5FC8CFDE, 0x3A9CE7C0,
-	0xEDF0F3C8, 0x7F0154CD, 0x7F1D2D3D, 0x363A7570, 0x7E0AF1CE, 0x37EF2E68,
-	0x7FEE10EC, 0xADF8EFDE, 0xCFEAE52F, 0x7D0FE12B, 0xF1CE5F65, 0x7E0A4DF8,
-	0xCFEA5F72, 0x7D0BEFEE, 0xCFEA5F74, 0xE522EFDE, 0x5F74CFDA, 0x0B627385,
-	0xDF627E0A, 0x30D8145B, 0xBFFFF3C8, 0x5FFFDFFF, 0xA7F85F5E, 0xBFFE7F7D,
-	0x10D31450, 0x5F36BFFF, 0xAF785F5E, 0xBFFDA7F8, 0x5F36BFFE, 0x77FD30C0,
-	0x4E08FDCF, 0xE5FF6E0F, 0xAFF87E1F, 0x7E0FFD1F, 0xF1CF5F1B, 0xABF80D5E,
-	0x5F5EFFEF, 0x79F730A2, 0xAFDD5F34, 0x47F85F34, 0xAFED7FDD, 0x50B24978,
-	0x47FD7F1D, 0x7DFD70AD, 0xEF717EC1, 0x6BA47F01, 0x2D267EFD, 0x30DE5F5E,
-	0xFFFD5F5E, 0xFFEF5F5E, 0xFFDF0CA0, 0xAFED0A9E, 0xAFDD0C3A, 0x5F3AAFBD,
-	0x7FBDB082, 0x5F8247F8
-};
-
-static uint patch_2f00[] __initdata = {
-	0x3E303430, 0x34343737, 0xABF7BF9B, 0x994B4FBD, 0xBD599493, 0x349FFF37,
-	0xFB9B177D, 0xD9936956, 0xBBFDD697, 0xBDD2FD11, 0x31DB9BB3, 0x63139637,
-	0x93733693, 0x193137F7, 0x331737AF, 0x7BB9B999, 0xBB197957, 0x7FDFD3D5,
-	0x73B773F7, 0x37933B99, 0x1D115316, 0x99315315, 0x31694BF4, 0xFBDBD359,
-	0x31497353, 0x76956D69, 0x7B9D9693, 0x13131979, 0x79376935
-};
-#endif
-
-/*
- * I2C/SPI/SMC1 relocation patch arrays.
- */
-
-#ifdef CONFIG_I2C_SPI_SMC1_UCODE_PATCH
-
-static uint patch_2000[] __initdata = {
-	0x3fff0000, 0x3ffd0000, 0x3ffb0000, 0x3ff90000, 0x5f13eff8, 0x5eb5eff8,
-	0x5f88adf7, 0x5fefadf7, 0x3a9cfbc8, 0x77cae1bb, 0xf4de7fad, 0xabae9330,
-	0x4e08fdcf, 0x6e0faff8, 0x7ccf76cf, 0xfdaff9cf, 0xabf88dc8, 0xab5879f7,
-	0xb0925d8d, 0xdfd079f7, 0xb090e6bb, 0xe5bbe74f, 0x9e046f0f, 0x6ffb76ce,
-	0xee0cf9cf, 0x2bfbefef, 0xcfeef9cf, 0x76cead23, 0x90b3df99, 0x7fddd0c1,
-	0x4bf847fd, 0x7ccf76ce, 0xcfef77ca, 0x7eaf7fad, 0x7dfdf0b7, 0xef7a7fca,
-	0x77cafbc8, 0x6079e722, 0xfbc85fff, 0xdfff5fb3, 0xfffbfbc8, 0xf3c894a5,
-	0xe7c9edf9, 0x7f9a7fad, 0x5f36afe8, 0x5f5bffdf, 0xdf95cb9e, 0xaf7d5fc3,
-	0xafed8c1b, 0x5fc3afdd, 0x5fc5df99, 0x7efdb0b3, 0x5fb3fffe, 0xabae5fb3,
-	0xfffe5fd0, 0x600be6bb, 0x600b5fd0, 0xdfc827fb, 0xefdf5fca, 0xcfde3a9c,
-	0xe7c9edf9, 0xf3c87f9e, 0x54ca7fed, 0x2d3a3637, 0x756f7e9a, 0xf1ce37ef,
-	0x2e677fee, 0x10ebadf8, 0xefdecfea, 0xe52f7d9f, 0xe12bf1ce, 0x5f647e9a,
-	0x4df8cfea, 0x5f717d9b, 0xefeecfea, 0x5f73e522, 0xefde5f73, 0xcfda0b61,
-	0x5d8fdf61, 0xe7c9edf9, 0x7e9a30d5, 0x1458bfff, 0xf3c85fff, 0xdfffa7f8,
-	0x5f5bbffe, 0x7f7d10d0, 0x144d5f33, 0xbfffaf78, 0x5f5bbffd, 0xa7f85f33,
-	0xbffe77fd, 0x30bd4e08, 0xfdcfe5ff, 0x6e0faff8, 0x7eef7e9f, 0xfdeff1cf,
-	0x5f17abf8, 0x0d5b5f5b, 0xffef79f7, 0x309eafdd, 0x5f3147f8, 0x5f31afed,
-	0x7fdd50af, 0x497847fd,
-	0x7f9e7fed, 0x7dfd70a9, 0xef7e7ece, 0x6ba07f9e,
-	0x2d227efd, 0x30db5f5b, 0xfffd5f5b, 0xffef5f5b, 0xffdf0c9c, 0xafed0a9a,
-	0xafdd0c37, 0x5f37afbd, 0x7fbdb081, 0x5f8147f8, 0x3a11e710, 0xedf0ccdd,
-	0xf3186d0a, 0x7f0e5f06, 0x7fedbb38, 0x3afe7468, 0x7fedf4fc, 0x8ffbb951,
-	0xb85f77fd, 0xb0df5ddd, 0xdefe7fed, 0x90e1e74d, 0x6f0dcbf7, 0xe7decfed,
-	0xcb74cfed, 0xcfeddf6d, 0x91714f74, 0x5dd2deef, 0x9e04e7df, 0xefbb6ffb,
-	0xe7ef7f0e, 0x9e097fed, 0xebdbeffa, 0xeb54affb, 0x7fea90d7, 0x7e0cf0c3,
-	0xbffff318, 0x5fffdfff, 0xac59efea, 0x7fce1ee5, 0xe2ff5ee1, 0xaffbe2ff,
-	0x5ee3affb, 0xf9cc7d0f, 0xaef8770f, 0x7d0fb0c6, 0xeffbbfff, 0xcfef5ede,
-	0x7d0fbfff, 0x5ede4cf8, 0x7fddd0bf, 0x49f847fd, 0x7efdf0bb, 0x7fedfffd,
-	0x7dfdf0b7, 0xef7e7e1e, 0x5ede7f0e, 0x3a11e710, 0xedf0ccab, 0xfb18ad2e,
-	0x1ea9bbb8, 0x74283b7e, 0x73c2e4bb, 0x2ada4fb8, 0xdc21e4bb, 0xb2a1ffbf,
-	0x5e2c43f8, 0xfc87e1bb, 0xe74ffd91, 0x6f0f4fe8, 0xc7ba32e2, 0xf396efeb,
-	0x600b4f78, 0xe5bb760b, 0x53acaef8, 0x4ef88b0e, 0xcfef9e09, 0xabf8751f,
-	0xefef5bac, 0x741f4fe8, 0x751e760d, 0x7fdbf081, 0x741cafce, 0xefcc7fce,
-	0x751e70ac, 0x741ce7bb, 0x3372cfed, 0xafdbefeb, 0xe5bb760b, 0x53f2aef8,
-	0xafe8e7eb, 0x4bf8771e, 0x7e247fed, 0x4fcbe2cc, 0x7fbc30a9, 0x7b0f7a0f,
-	0x34d577fd, 0x308b5db7, 0xde553e5f, 0xaf78741f, 0x741f30f0, 0xcfef5e2c,
-	0x741f3eac, 0xafb8771e, 0x5e677fed, 0x0bd3e2cc, 0x741ccfec,
-	0xe5ca53cd, 0x6fcb4f74, 0x5dadde4b, 0x2ab63d38, 0x4bb3de30, 0x751f741c,
-	0x6c42effa, 0xefea7fce, 0x6ffc30be, 0xefec3fca, 0x30b3de2e, 0xadf85d9e,
-	0xaf7daefd, 0x5d9ede2e, 0x5d9eafdd, 0x761f10ac, 0x1da07efd, 0x30adfffe,
-	0x4908fb18, 0x5fffdfff, 0xafbb709b, 0x4ef85e67, 0xadf814ad, 0x7a0f70ad,
-	0xcfef50ad, 0x7a0fde30, 0x5da0afed, 0x3c12780f, 0xefef780f, 0xefef790f,
-	0xa7f85e0f, 0xffef790f, 0xefef790f, 0x14adde2e, 0x5d9eadfd, 0x5e2dfffb,
-	0xe79addfd, 0xeff96079, 0x607ae79a, 0xddfceff9, 0x60795dff, 0x607acfef,
-	0xefefefdf, 0xefbfef7f, 0xeeffedff, 0xebffe7ff, 0xafefafdf, 0xafbfaf7f,
-	0xaeffadff, 0xabffa7ff, 0x6fef6fdf, 0x6fbf6f7f, 0x6eff6dff, 0x6bff67ff,
-	0x2fef2fdf, 0x2fbf2f7f, 0x2eff2dff, 0x2bff27ff, 0x4e08fd1f, 0xe5ff6e0f,
-	0xaff87eef, 0x7e0ffdef, 0xf11f6079, 0xabf8f542, 0x7e0af11c, 0x37cfae3a,
-	0x7fec90be, 0xadf8efdc, 0xcfeae52f, 0x7d0fe12b, 0xf11c6079, 0x7e0a4df8,
-	0xcfea5dc4, 0x7d0befec, 0xcfea5dc6, 0xe522efdc, 0x5dc6cfda, 0x4e08fd1f,
-	0x6e0faff8, 0x7c1f761f, 0xfdeff91f, 0x6079abf8, 0x761cee24, 0xf91f2bfb,
-	0xefefcfec, 0xf91f6079, 0x761c27fb, 0xefdf5da7, 0xcfdc7fdd, 0xd09c4bf8,
-	0x47fd7c1f, 0x761ccfcf, 0x7eef7fed, 0x7dfdf093, 0xef7e7f1e, 0x771efb18,
-	0x6079e722, 0xe6bbe5bb, 0xae0ae5bb, 0x600bae85, 0xe2bbe2bb, 0xe2bbe2bb,
-	0xaf02e2bb, 0xe2bb2ff9, 0x6079e2bb
-};
-
-static uint patch_2f00[] __initdata = {
-	0x30303030, 0x3e3e3434, 0xabbf9b99, 0x4b4fbdbd, 0x59949334, 0x9fff37fb,
-	0x9b177dd9, 0x936956bb, 0xfbdd697b, 0xdd2fd113, 0x1db9f7bb, 0x36313963,
-	0x79373369, 0x3193137f, 0x7331737a, 0xf7bb9b99, 0x9bb19795, 0x77fdfd3d,
-	0x573b773f, 0x737933f7, 0xb991d115, 0x31699315, 0x31531694, 0xbf4fbdbd,
-	0x35931497, 0x35376956, 0xbd697b9d, 0x96931313, 0x19797937, 0x6935af78,
-	0xb9b3baa3, 0xb8788683, 0x368f78f7, 0x87778733, 0x3ffffb3b, 0x8e8f78b8,
-	0x1d118e13, 0xf3ff3f8b, 0x6bd8e173, 0xd1366856, 0x68d1687b, 0x3daf78b8,
-	0x3a3a3f87, 0x8f81378f, 0xf876f887, 0x77fd8778, 0x737de8d6, 0xbbf8bfff,
-	0xd8df87f7, 0xfd876f7b, 0x8bfff8bd, 0x8683387d, 0xb873d87b, 0x3b8fd7f8,
-	0xf7338883, 0xbb8ee1f8, 0xef837377, 0x3337b836, 0x817d11f8, 0x7378b878,
-	0xd3368b7d, 0xed731b7d, 0x833731f3, 0xf22f3f23
-};
-
-static uint patch_2e00[] __initdata = {
-	0x27eeeeee, 0xeeeeeeee, 0xeeeeeeee, 0xeeeeeeee, 0xee4bf4fb, 0xdbd259bb,
-	0x1979577f, 0xdfd2d573, 0xb773f737, 0x4b4fbdbd, 0x25b9b177, 0xd2d17376,
-	0x956bbfdd, 0x697bdd2f, 0xff9f79ff, 0xff9ff22f
-};
-#endif
-
-/*
- * USB SOF patch arrays.
- */
-
-#ifdef CONFIG_USB_SOF_UCODE_PATCH
-
-static uint patch_2000[] __initdata = {
-	0x7fff0000, 0x7ffd0000, 0x7ffb0000, 0x49f7ba5b, 0xba383ffb, 0xf9b8b46d,
-	0xe5ab4e07, 0xaf77bffe, 0x3f7bbf79, 0xba5bba38, 0xe7676076, 0x60750000
-};
-
-static uint patch_2f00[] __initdata = {
-	0x3030304c, 0xcab9e441, 0xa1aaf220
-};
-#endif
-
-void __init cpm_load_patch(cpm8xx_t *cp)
-{
-	volatile uint		*dp;		/* Dual-ported RAM. */
-	volatile cpm8xx_t	*commproc;
-#if defined(CONFIG_I2C_SPI_UCODE_PATCH) || \
-    defined(CONFIG_I2C_SPI_SMC1_UCODE_PATCH)
-	volatile iic_t		*iip;
-	volatile struct spi_pram *spp;
-#ifdef CONFIG_I2C_SPI_SMC1_UCODE_PATCH
-	volatile smc_uart_t	*smp;
-#endif
-#endif
-	int	i;
-
-	commproc = cp;
-
-#ifdef CONFIG_USB_SOF_UCODE_PATCH
-	commproc->cp_rccr = 0;
-
-	dp = (uint *)(commproc->cp_dpmem);
-	for (i=0; i<(sizeof(patch_2000)/4); i++)
-		*dp++ = patch_2000[i];
-
-	dp = (uint *)&(commproc->cp_dpmem[0x0f00]);
-	for (i=0; i<(sizeof(patch_2f00)/4); i++)
-		*dp++ = patch_2f00[i];
-
-	commproc->cp_rccr = 0x0009;
-
-	printk("USB SOF microcode patch installed\n");
-#endif /* CONFIG_USB_SOF_UCODE_PATCH */
-
-#if defined(CONFIG_I2C_SPI_UCODE_PATCH) || \
-    defined(CONFIG_I2C_SPI_SMC1_UCODE_PATCH)
-
-	commproc->cp_rccr = 0;
-
-	dp = (uint *)(commproc->cp_dpmem);
-	for (i=0; i<(sizeof(patch_2000)/4); i++)
-		*dp++ = patch_2000[i];
-
-	dp = (uint *)&(commproc->cp_dpmem[0x0f00]);
-	for (i=0; i<(sizeof(patch_2f00)/4); i++)
-		*dp++ = patch_2f00[i];
-
-	iip = (iic_t *)&commproc->cp_dparam[PROFF_IIC];
-# define RPBASE 0x0500
-	iip->iic_rpbase = RPBASE;
-
-	/* Put SPI above the IIC, also 32-byte aligned.
-	 */
-	i = (RPBASE + sizeof(iic_t) + 31) & ~31;
-	spp = (struct spi_pram *)&commproc->cp_dparam[PROFF_SPI];
-	spp->rpbase = i;
-
-# if defined(CONFIG_I2C_SPI_UCODE_PATCH)
-	commproc->cp_cpmcr1 = 0x802a;
-	commproc->cp_cpmcr2 = 0x8028;
-	commproc->cp_cpmcr3 = 0x802e;
-	commproc->cp_cpmcr4 = 0x802c;
-	commproc->cp_rccr = 1;
-
-	printk("I2C/SPI microcode patch installed.\n");
-# endif /* CONFIG_I2C_SPI_UCODE_PATCH */
-
-# if defined(CONFIG_I2C_SPI_SMC1_UCODE_PATCH)
-
-	dp = (uint *)&(commproc->cp_dpmem[0x0e00]);
-	for (i=0; i<(sizeof(patch_2e00)/4); i++)
-		*dp++ = patch_2e00[i];
-
-	commproc->cp_cpmcr1 = 0x8080;
-	commproc->cp_cpmcr2 = 0x808a;
-	commproc->cp_cpmcr3 = 0x8028;
-	commproc->cp_cpmcr4 = 0x802a;
-	commproc->cp_rccr = 3;
-
-	smp = (smc_uart_t *)&commproc->cp_dparam[PROFF_SMC1];
-	smp->smc_rpbase = 0x1FC0;
-
-	printk("I2C/SPI/SMC1 microcode patch installed.\n");
-# endif /* CONFIG_I2C_SPI_SMC1_UCODE_PATCH) */
-
-#endif /* some variation of the I2C/SPI patch was selected */
-}
-
-/*
- * Take this entire routine out, since no one calls it and its
- * logic is suspect.
- */
-
-#if 0
-void
-verify_patch(volatile immap_t *immr)
-{
-	volatile uint		*dp;
-	volatile cpm8xx_t	*commproc;
-	int i;
-
-	commproc = (cpm8xx_t *)&immr->im_cpm;
-
-	printk("cp_rccr %x\n", commproc->cp_rccr);
-	commproc->cp_rccr = 0;
-
-	dp = (uint *)(commproc->cp_dpmem);
-	for (i=0; i<(sizeof(patch_2000)/4); i++)
-		if (*dp++ != patch_2000[i]) {
-			printk("patch_2000 bad at %d\n", i);
-			dp--;
-			printk("found 0x%X, wanted 0x%X\n", *dp, patch_2000[i]);
-			break;
-		}
-
-	dp = (uint *)&(commproc->cp_dpmem[0x0f00]);
-	for (i=0; i<(sizeof(patch_2f00)/4); i++)
-		if (*dp++ != patch_2f00[i]) {
-			printk("patch_2f00 bad at %d\n", i);
-			dp--;
-			printk("found 0x%X, wanted 0x%X\n", *dp, patch_2f00[i]);
-			break;
-		}
-
-	commproc->cp_rccr = 0x0009;
-}
-#endif
+6 -7
arch/powerpc/sysdev/xics/Kconfig
··· 1 1 # SPDX-License-Identifier: GPL-2.0 2 2 config PPC_XICS 3 - def_bool n 4 - select PPC_SMP_MUXED_IPI 5 - select HARDIRQS_SW_RESEND 3 + def_bool n 4 + select PPC_SMP_MUXED_IPI 5 + select HARDIRQS_SW_RESEND 6 6 7 7 config PPC_ICP_NATIVE 8 - def_bool n 8 + def_bool n 9 9 10 10 config PPC_ICP_HV 11 - def_bool n 11 + def_bool n 12 12 13 13 config PPC_ICS_RTAS 14 - def_bool n 15 - 14 + def_bool n
+51 -1
arch/powerpc/sysdev/xive/spapr.c
··· 16 16 #include <linux/cpumask.h> 17 17 #include <linux/mm.h> 18 18 #include <linux/delay.h> 19 + #include <linux/libfdt.h> 19 20 20 21 #include <asm/prom.h> 21 22 #include <asm/io.h> ··· 660 659 return true; 661 660 } 662 661 662 + static const u8 *get_vec5_feature(unsigned int index) 663 + { 664 + unsigned long root, chosen; 665 + int size; 666 + const u8 *vec5; 667 + 668 + root = of_get_flat_dt_root(); 669 + chosen = of_get_flat_dt_subnode_by_name(root, "chosen"); 670 + if (chosen == -FDT_ERR_NOTFOUND) 671 + return NULL; 672 + 673 + vec5 = of_get_flat_dt_prop(chosen, "ibm,architecture-vec-5", &size); 674 + if (!vec5) 675 + return NULL; 676 + 677 + if (size <= index) 678 + return NULL; 679 + 680 + return vec5 + index; 681 + } 682 + 683 + static bool xive_spapr_disabled(void) 684 + { 685 + const u8 *vec5_xive; 686 + 687 + vec5_xive = get_vec5_feature(OV5_INDX(OV5_XIVE_SUPPORT)); 688 + if (vec5_xive) { 689 + u8 val; 690 + 691 + val = *vec5_xive & OV5_FEAT(OV5_XIVE_SUPPORT); 692 + switch (val) { 693 + case OV5_FEAT(OV5_XIVE_EITHER): 694 + case OV5_FEAT(OV5_XIVE_LEGACY): 695 + break; 696 + case OV5_FEAT(OV5_XIVE_EXPLOIT): 697 + /* Hypervisor only supports XIVE */ 698 + if (xive_cmdline_disabled) 699 + pr_warn("WARNING: Ignoring cmdline option xive=off\n"); 700 + return false; 701 + default: 702 + pr_warn("%s: Unknown xive support option: 0x%x\n", 703 + __func__, val); 704 + break; 705 + } 706 + } 707 + 708 + return xive_cmdline_disabled; 709 + } 710 + 663 711 bool __init xive_spapr_init(void) 664 712 { 665 713 struct device_node *np; ··· 721 671 const __be32 *reg; 722 672 int i; 723 673 724 - if (xive_cmdline_disabled) 674 + if (xive_spapr_disabled()) 725 675 return false; 726 676 727 677 pr_devel("%s()\n", __func__);
+9 -5
arch/powerpc/xmon/xmon.c
··· 465 465 local_irq_save(flags); 466 466 hard_irq_disable(); 467 467 468 - tracing_enabled = tracing_is_on(); 469 - tracing_off(); 468 + if (!fromipi) { 469 + tracing_enabled = tracing_is_on(); 470 + tracing_off(); 471 + } 470 472 471 473 bp = in_breakpoint_table(regs->nip, &offset); 472 474 if (bp != NULL) { ··· 2450 2448 DUMP(p, canary, "%#-*lx"); 2451 2449 #endif 2452 2450 DUMP(p, saved_r1, "%#-*llx"); 2451 + #ifdef CONFIG_PPC_BOOK3E 2453 2452 DUMP(p, trap_save, "%#-*x"); 2453 + #endif 2454 2454 DUMP(p, irq_soft_mask, "%#-*x"); 2455 2455 DUMP(p, irq_happened, "%#-*x"); 2456 2456 #ifdef CONFIG_MMIOWB ··· 3094 3090 3095 3091 printf("pgd @ 0x%px\n", pgdir); 3096 3092 3097 - if (pgd_huge(*pgdp)) { 3093 + if (pgd_is_leaf(*pgdp)) { 3098 3094 format_pte(pgdp, pgd_val(*pgdp)); 3099 3095 return; 3100 3096 } ··· 3107 3103 return; 3108 3104 } 3109 3105 3110 - if (pud_huge(*pudp)) { 3106 + if (pud_is_leaf(*pudp)) { 3111 3107 format_pte(pudp, pud_val(*pudp)); 3112 3108 return; 3113 3109 } ··· 3121 3117 return; 3122 3118 } 3123 3119 3124 - if (pmd_huge(*pmdp)) { 3120 + if (pmd_is_leaf(*pmdp)) { 3125 3121 format_pte(pmdp, pmd_val(*pmdp)); 3126 3122 return; 3127 3123 }
+2 -2
drivers/macintosh/smu.c
··· 132 132 /* Flush command and data to RAM */ 133 133 faddr = (unsigned long)smu->cmd_buf; 134 134 fend = faddr + smu->cmd_buf->length + 2; 135 - flush_inval_dcache_range(faddr, fend); 135 + flush_dcache_range(faddr, fend); 136 136 137 137 138 138 /* We also disable NAP mode for the duration of the command ··· 194 194 * reply length (it's only 2 cache lines anyway) 195 195 */ 196 196 faddr = (unsigned long)smu->cmd_buf; 197 - flush_inval_dcache_range(faddr, faddr + 256); 197 + flush_dcache_range(faddr, faddr + 256); 198 198 199 199 /* Now check ack */ 200 200 ack = (~cmd->cmd) & 0xff;
+160 -19
drivers/misc/ocxl/config.c
··· 20 20 #define OCXL_DVSEC_TEMPL_MMIO_GLOBAL_SZ 0x28 21 21 #define OCXL_DVSEC_TEMPL_MMIO_PP 0x30 22 22 #define OCXL_DVSEC_TEMPL_MMIO_PP_SZ 0x38 23 - #define OCXL_DVSEC_TEMPL_MEM_SZ 0x3C 24 - #define OCXL_DVSEC_TEMPL_WWID 0x40 23 + #define OCXL_DVSEC_TEMPL_ALL_MEM_SZ 0x3C 24 + #define OCXL_DVSEC_TEMPL_LPC_MEM_START 0x40 25 + #define OCXL_DVSEC_TEMPL_WWID 0x48 26 + #define OCXL_DVSEC_TEMPL_LPC_MEM_SZ 0x58 25 27 26 28 #define OCXL_MAX_AFU_PER_FUNCTION 64 27 - #define OCXL_TEMPL_LEN 0x58 29 + #define OCXL_TEMPL_LEN_1_0 0x58 30 + #define OCXL_TEMPL_LEN_1_1 0x60 28 31 #define OCXL_TEMPL_NAME_LEN 24 29 32 #define OCXL_CFG_TIMEOUT 3 30 33 ··· 272 269 return 0; 273 270 } 274 271 272 + /** 273 + * Read the template version from the AFU 274 + * dev: the device for the AFU 275 + * fn: the AFU offsets 276 + * len: outputs the template length 277 + * version: outputs the major<<8,minor version 278 + * 279 + * Returns 0 on success, negative on failure 280 + */ 281 + static int read_template_version(struct pci_dev *dev, struct ocxl_fn_config *fn, 282 + u16 *len, u16 *version) 283 + { 284 + u32 val32; 285 + u8 major, minor; 286 + int rc; 287 + 288 + rc = read_afu_info(dev, fn, OCXL_DVSEC_TEMPL_VERSION, &val32); 289 + if (rc) 290 + return rc; 291 + 292 + *len = EXTRACT_BITS(val32, 16, 31); 293 + major = EXTRACT_BITS(val32, 8, 15); 294 + minor = EXTRACT_BITS(val32, 0, 7); 295 + *version = (major << 8) + minor; 296 + return 0; 297 + } 298 + 275 299 int ocxl_config_check_afu_index(struct pci_dev *dev, 276 300 struct ocxl_fn_config *fn, int afu_idx) 277 301 { 278 - u32 val; 279 - int rc, templ_major, templ_minor, len; 302 + int rc; 303 + u16 templ_version; 304 + u16 len, expected_len; 280 305 281 306 pci_write_config_byte(dev, 282 307 fn->dvsec_afu_info_pos + OCXL_DVSEC_AFU_INFO_AFU_IDX, 283 308 afu_idx); 284 - rc = read_afu_info(dev, fn, OCXL_DVSEC_TEMPL_VERSION, &val); 309 + 310 + rc = read_template_version(dev, fn, &len, &templ_version); 285 311 if (rc) 286 312 return rc; 287 313 
288 - /* AFU index map can have holes */ 289 - if (!val) 314 + /* AFU index map can have holes, in which case we read all 0's */ 315 + if (!templ_version && !len) 290 316 return 0; 291 317 292 - templ_major = EXTRACT_BITS(val, 8, 15); 293 - templ_minor = EXTRACT_BITS(val, 0, 7); 294 318 dev_dbg(&dev->dev, "AFU descriptor template version %d.%d\n", 295 - templ_major, templ_minor); 319 + templ_version >> 8, templ_version & 0xFF); 296 320 297 - len = EXTRACT_BITS(val, 16, 31); 298 - if (len != OCXL_TEMPL_LEN) { 299 - dev_warn(&dev->dev, 300 - "Unexpected template length in AFU information (%#x)\n", 301 - len); 321 + switch (templ_version) { 322 + case 0x0005: // v0.5 was used prior to the spec approval 323 + case 0x0100: 324 + expected_len = OCXL_TEMPL_LEN_1_0; 325 + break; 326 + case 0x0101: 327 + expected_len = OCXL_TEMPL_LEN_1_1; 328 + break; 329 + default: 330 + dev_warn(&dev->dev, "Unknown AFU template version %#x\n", 331 + templ_version); 332 + expected_len = len; 302 333 } 334 + if (len != expected_len) 335 + dev_warn(&dev->dev, 336 + "Unexpected template length %#x in AFU information, expected %#x for version %#x\n", 337 + len, expected_len, templ_version); 303 338 return 1; 304 339 } 305 340 ··· 475 434 return 0; 476 435 } 477 436 437 + /** 438 + * Populate AFU metadata regarding LPC memory 439 + * dev: the device for the AFU 440 + * fn: the AFU offsets 441 + * afu: the AFU struct to populate the LPC metadata into 442 + * 443 + * Returns 0 on success, negative on failure 444 + */ 445 + static int read_afu_lpc_memory_info(struct pci_dev *dev, 446 + struct ocxl_fn_config *fn, 447 + struct ocxl_afu_config *afu) 448 + { 449 + int rc; 450 + u32 val32; 451 + u16 templ_version; 452 + u16 templ_len; 453 + u64 total_mem_size = 0; 454 + u64 lpc_mem_size = 0; 455 + 456 + afu->lpc_mem_offset = 0; 457 + afu->lpc_mem_size = 0; 458 + afu->special_purpose_mem_offset = 0; 459 + afu->special_purpose_mem_size = 0; 460 + /* 461 + * For AFUs following template v1.0, the LPC 
memory covers the 462 + * total memory. Its size is a power of 2. 463 + * 464 + * For AFUs with template >= v1.01, the total memory size is 465 + * still a power of 2, but it is split in 2 parts: 466 + * - the LPC memory, whose size can now be anything 467 + * - the remainder memory is a special purpose memory, whose 468 + * definition is AFU-dependent. It is not accessible through 469 + * the usual commands for LPC memory 470 + */ 471 + rc = read_afu_info(dev, fn, OCXL_DVSEC_TEMPL_ALL_MEM_SZ, &val32); 472 + if (rc) 473 + return rc; 474 + 475 + val32 = EXTRACT_BITS(val32, 0, 7); 476 + if (!val32) 477 + return 0; /* No LPC memory */ 478 + 479 + /* 480 + * The configuration space spec allows for a memory size of up 481 + * to 2^255 bytes. 482 + * 483 + * Current generation hardware uses 56-bit physical addresses, 484 + * but we won't be able to get near close to that, as we won't 485 + * have a hole big enough in the memory map. Let it pass in 486 + * the driver for now. We'll get an error from the firmware 487 + * when trying to configure something too big. 
488 + */ 489 + total_mem_size = 1ull << val32; 490 + 491 + rc = read_afu_info(dev, fn, OCXL_DVSEC_TEMPL_LPC_MEM_START, &val32); 492 + if (rc) 493 + return rc; 494 + 495 + afu->lpc_mem_offset = val32; 496 + 497 + rc = read_afu_info(dev, fn, OCXL_DVSEC_TEMPL_LPC_MEM_START + 4, &val32); 498 + if (rc) 499 + return rc; 500 + 501 + afu->lpc_mem_offset |= (u64) val32 << 32; 502 + 503 + rc = read_template_version(dev, fn, &templ_len, &templ_version); 504 + if (rc) 505 + return rc; 506 + 507 + if (templ_version >= 0x0101) { 508 + rc = read_afu_info(dev, fn, 509 + OCXL_DVSEC_TEMPL_LPC_MEM_SZ, &val32); 510 + if (rc) 511 + return rc; 512 + lpc_mem_size = val32; 513 + 514 + rc = read_afu_info(dev, fn, 515 + OCXL_DVSEC_TEMPL_LPC_MEM_SZ + 4, &val32); 516 + if (rc) 517 + return rc; 518 + lpc_mem_size |= (u64) val32 << 32; 519 + } else { 520 + lpc_mem_size = total_mem_size; 521 + } 522 + afu->lpc_mem_size = lpc_mem_size; 523 + 524 + if (lpc_mem_size < total_mem_size) { 525 + afu->special_purpose_mem_offset = 526 + afu->lpc_mem_offset + lpc_mem_size; 527 + afu->special_purpose_mem_size = 528 + total_mem_size - lpc_mem_size; 529 + } 530 + return 0; 531 + } 532 + 478 533 int ocxl_config_read_afu(struct pci_dev *dev, struct ocxl_fn_config *fn, 479 534 struct ocxl_afu_config *afu, u8 afu_idx) 480 535 { ··· 604 467 if (rc) 605 468 return rc; 606 469 607 - rc = read_afu_info(dev, fn, OCXL_DVSEC_TEMPL_MEM_SZ, &val32); 470 + rc = read_afu_lpc_memory_info(dev, fn, afu); 608 471 if (rc) 609 472 return rc; 610 - afu->log_mem_size = EXTRACT_BITS(val32, 0, 7); 611 473 612 474 rc = read_afu_control(dev, afu); 613 475 if (rc) ··· 623 487 dev_dbg(&dev->dev, " pp mmio bar = %hhu\n", afu->pp_mmio_bar); 624 488 dev_dbg(&dev->dev, " pp mmio offset = %#llx\n", afu->pp_mmio_offset); 625 489 dev_dbg(&dev->dev, " pp mmio stride = %#x\n", afu->pp_mmio_stride); 626 - dev_dbg(&dev->dev, " mem size (log) = %hhu\n", afu->log_mem_size); 490 + dev_dbg(&dev->dev, " lpc_mem offset = %#llx\n", afu->lpc_mem_offset); 
491 + dev_dbg(&dev->dev, " lpc_mem size = %#llx\n", afu->lpc_mem_size); 492 + dev_dbg(&dev->dev, " special purpose mem offset = %#llx\n", 493 + afu->special_purpose_mem_offset); 494 + dev_dbg(&dev->dev, " special purpose mem size = %#llx\n", 495 + afu->special_purpose_mem_size); 627 496 dev_dbg(&dev->dev, " pasid supported (log) = %u\n", 628 497 afu->pasid_supported_log); 629 498 dev_dbg(&dev->dev, " actag supported = %u\n",
+1 -1
drivers/misc/ocxl/pci.c
··· 41 41 return 0; 42 42 } 43 43 44 - void ocxl_remove(struct pci_dev *dev) 44 + static void ocxl_remove(struct pci_dev *dev) 45 45 { 46 46 struct ocxl_fn *fn; 47 47 struct ocxl_afu *afu;
+15 -1
drivers/tty/hvc/hvc_vio.c
··· 107 107 return got; 108 108 } 109 109 110 + /** 111 + * hvterm_raw_put_chars: send characters to firmware for given vterm adapter 112 + * @vtermno: The virtual terminal number. 113 + * @buf: The characters to send. Because of the underlying hypercall in 114 + * hvc_put_chars(), this buffer must be at least 16 bytes long, even if 115 + * you are sending fewer chars. 116 + * @count: number of chars to send. 117 + */ 110 118 static int hvterm_raw_put_chars(uint32_t vtermno, const char *buf, int count) 111 119 { 112 120 struct hvterm_priv *pv = hvterm_privs[vtermno]; ··· 227 219 static void udbg_hvc_putc(char c) 228 220 { 229 221 int count = -1; 222 + unsigned char bounce_buffer[16]; 230 223 231 224 if (!hvterm_privs[0]) 232 225 return; ··· 238 229 do { 239 230 switch(hvterm_privs[0]->proto) { 240 231 case HV_PROTOCOL_RAW: 241 - count = hvterm_raw_put_chars(0, &c, 1); 232 + /* 233 + * hvterm_raw_put_chars requires at least a 16-byte 234 + * buffer, so go via the bounce buffer 235 + */ 236 + bounce_buffer[0] = c; 237 + count = hvterm_raw_put_chars(0, bounce_buffer, 1); 242 238 break; 243 239 case HV_PROTOCOL_HVSI: 244 240 count = hvterm_hvsi_put_chars(0, &c, 1);
+4 -1
include/misc/ocxl.h
··· 32 32 u8 pp_mmio_bar; /* per-process MMIO area */ 33 33 u64 pp_mmio_offset; 34 34 u32 pp_mmio_stride; 35 - u8 log_mem_size; 35 + u64 lpc_mem_offset; 36 + u64 lpc_mem_size; 37 + u64 special_purpose_mem_offset; 38 + u64 special_purpose_mem_size; 36 39 u8 pasid_supported_log; 37 40 u16 actag_supported; 38 41 };
+7 -7
include/uapi/misc/ocxl.h
··· 33 33 }; 34 34 35 35 struct ocxl_ioctl_metadata { 36 - __u16 version; // struct version, always backwards compatible 36 + __u16 version; /* struct version, always backwards compatible */ 37 37 38 - // Version 0 fields 38 + /* Version 0 fields */ 39 39 __u8 afu_version_major; 40 40 __u8 afu_version_minor; 41 - __u32 pasid; // PASID assigned to the current context 41 + __u32 pasid; /* PASID assigned to the current context */ 42 42 43 - __u64 pp_mmio_size; // Per PASID MMIO size 43 + __u64 pp_mmio_size; /* Per PASID MMIO size */ 44 44 __u64 global_mmio_size; 45 45 46 - // End version 0 fields 46 + /* End version 0 fields */ 47 47 48 - __u64 reserved[13]; // Total of 16*u64 48 + __u64 reserved[13]; /* Total of 16*u64 */ 49 49 }; 50 50 51 51 struct ocxl_ioctl_p9_wait { 52 - __u16 thread_id; // The thread ID required to wake this thread 52 + __u16 thread_id; /* The thread ID required to wake this thread */ 53 53 __u16 reserved1; 54 54 __u32 reserved2; 55 55 __u64 reserved3[3];
+2 -1
scripts/recordmcount.h
··· 325 325 if (!mcountsym) 326 326 mcountsym = get_mcountsym(sym0, relp, str0); 327 327 328 - if (mcountsym == Elf_r_sym(relp) && !is_fake_mcount(relp)) { 328 + if (mcountsym && mcountsym == Elf_r_sym(relp) && 329 + !is_fake_mcount(relp)) { 329 330 uint_t const addend = 330 331 _w(_w(relp->r_offset) - recval + mcount_adjust); 331 332 mrelp->r_offset = _w(offbase
tools/testing/selftests/powerpc/mm/.gitignore
+1 -1
tools/testing/selftests/powerpc/stringloops/asm/ppc_asm.h
··· 1 1 /* SPDX-License-Identifier: GPL-2.0 */ 2 2 #ifndef _PPC_ASM_H 3 - #define __PPC_ASM_H 3 + #define _PPC_ASM_H 4 4 #include <ppc-asm.h> 5 5 6 6 #ifndef r1
+1 -1
tools/testing/selftests/powerpc/tm/tm-vmxcopy.c
··· 79 79 80 80 "5:;" 81 81 "stxvd2x 40,0,%[vecoutptr];" 82 - : [res]"=r"(aborted) 82 + : [res]"=&r"(aborted) 83 83 : [vecinptr]"r"(&vecin), 84 84 [vecoutptr]"r"(&vecout), 85 85 [map]"r"(a)
+1 -1
tools/testing/selftests/powerpc/vphn/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0-only 2 2 TEST_GEN_PROGS := test-vphn 3 3 4 - CFLAGS += -m64 4 + CFLAGS += -m64 -I$(CURDIR) 5 5 6 6 top_srcdir = ../../../../.. 7 7 include ../../lib.mk